Channel: SCN : Blog List - SAP HANA and In-Memory Computing

10 Golden Rules for SAP Suite on HANA & S/4HANA Migrations


This time last year, I wrote about experiences migrating SAP BW customers in 10 Golden Rules for SAP BW on HANA Migrations. Things change in a revolution around the sun, and over the last year, we have found a sharp increase in the number of customers going live with Suite on HANA. It turns out that whilst there is some transferability of skills from BW to Suite migrations, there are just as many differences.

 

With that in mind, here's my 10 guidelines for SoH Migrations - they do, of course, apply just as well to S/4HANA!

 

1) Start with BW on HANA

 

This isn't an absolute rule and greenfield customers should definitely go directly onto Suite on HANA, but customers with a complex SAP landscape should not go live with Suite as their first system. Why?

 

I met with Vishal Sikka at TechEd in 2011 and observed that if BW would soon run on HANA, then NetWeaver ran on HANA, which meant they could easily run the Suite. I asked him why SAP chose not to also announce Suite on HANA support. His comment was that if a large customer's BW system were to go down, they would open a support ticket and get it resolved. If their ECC system went down and it stopped a manufacturing facility, he would get a phone call from a CIO.

 

Whilst HANA is an incredibly stable database and ECC runs very well on it, HANA will be a new database to the organization and it is best to start with a system which is not transactional in its nature. I always recommend doing BW on HANA first - it allows the training of staff, and the implementation of infrastructure, backup, high availability and disaster recovery, monitoring processes etc. Once BW is in, the organization will have the maturity to support HANA, and ergo the maturity to migrate Suite.

 

If getting live on Suite is a priority then you can run parallel projects, going live with BW first, and with Suite 4-8 weeks later.

 

Some people asked me what they should do if they don't have BW. That's OK, doing BW first is just a neat way to build organizational process intelligence around HANA. If you don't have BW then you just need to be a little more structured around how you build that capability, to mitigate any risk. Things you need to consider include architecture, sizing, networks, updates, backup/restore, HA/DR and monitoring. There are processes like support and incident management, change management, release management and transport management, and people-centric items like support personnel and DBAs. Just the same as any other system.

 

2) Build an Integrated Schedule

 

This is important for any project, but with Suite on HANA it is essential. There will be connected systems like Supply Chain Management, forecasting systems like BPC, reporting systems like BW, third party interfaces and integration. There will be a raft of front end tools like SAP GUI, portals and web stores, and cloud integration to SuccessFactors, Ariba or Salesforce.

 

You need to involve teams from Basis, Architecture, Infrastructure, Networking, Custom Development, Test Management, Finance, HCM and others. Suite touches the whole business.

 

We always build an integrated schedule that describes the project in a way that can be displayed on a single monitor screen, so everyone understands what is happening and when. Then ensure that everyone buys into this.

 

Make sure that your integrated schedule also contains a reference to other releases or projects which will run concurrently, so you can track them, any change freezes, or dependencies. It's important in an integrated schedule that you ask teams not to pad their times, but to provide realistic estimates for how long tasks will take. Then, as a project manager, you add in contingency which will allow some slippage for issue resolution.

 

3) Build mid-level and detailed plans

 

I like to have 3 levels of plans for a migration project. The integrated schedule which typically describes the project on a weekly basis is the first.

 

The second is the detailed plan, which is at a task-time-resource level. Detailed plans are very hard to read, and only experienced project managers really know how to work with them and interpret them, building Gantt charts with complex dependencies, resource costing, WBS and allowing burndown charts and earned value calculations. Typically only the PMO needs to use the detailed plan.

 

The third is a mid-level plan, which is at the task-day-team level. This allows you to explain to the project team what they need to do and when, every day. Why every day? Because this allows you to squeeze the plan, and shorter projects have better time to value and lower cost.

 

4) Have a communications plan and stakeholder map

 

This can be very straightforward, but a Suite on HANA project will have eyes from many places in an organization, and rumor travels faster than the speed of light. Decide whom to communicate with, how and when, and do it regularly. I find that CIOs and other senior leaders often like a short weekly update - my rule of thumb is it should be readable with one swipe of the finger on a smart phone.

 

A weekly 15 minute all-hands call can be useful too - for anyone interested in getting an interactive update.

 

If you communicate regularly to all your stakeholders then you dramatically reduce the chances of misinformation spreading and causing disruption to your project.

 

5) Have a production-sized sandbox/pilot

 

The details of how this works will depend on your organization, landscape and complexity but once you enter the main development system, you will have a change freeze. The best way to keep this change freeze short is to be prepared, and you can't be prepared unless you have previously completed a production-sized migration.

 

So take a system copy of the production integration environment (BW, ECC, PI etc.) and then migrate the ECC system to HANA. Let your Basis team do this 2-3 times before you release it to the technical and functional teams if possible, so they can hone their process.

 

It's also possible to do this early on, prior to purchasing all the hardware (buy one sandbox which is roughly sized using the SAP Sizing Guide). If you do this, you can validate sizing so you have confidence in your Bill of Material, and do things like archiving and data aging to reduce your hardware requirements.

 

6) Consider having some skin in the game from SAP

 

SAP Active Global Support (AGS) and the Professional Services Organization (PSO) have merged into one group, called ONE Support. Regardless of whether you are a Max Attention, Active Embedded or Enterprise Support customer, you can contract them to be involved in the planning and support of the project. In particular, they have a service catalog available which includes services for planning and for custom code management. There are free services for pre- and post-go-live checks, which you should book six weeks ahead of time.

 

Having SAP bless your architecture, sizing and plan is a big bonus and they have good quality resources for this sort of work. In addition, there is a HANA Ambassador program available in North America which provides a resource which reports into the Global Customer Office at SAP. It's a good way to ensure your project gets the attention it requires.

 

7) Join the Customer Advisory Council

 

There is a HANA Customer Council run by Scott Feldman which meets periodically. It's available free of charge for senior IT folks and project sponsors to go and talk to other customers, hear what's going on on the ground, and gain some additional confidence. More details, including information on this Council and how to join the international HANA Global Community, can be found in this blog.

 

8) Change Many, Test Once

 

This goes against some of the views of IT folks, but I am a big promoter of a change-many, test-once approach. SAP has an excellent tool for HANA migrations called DMO, which will upgrade, patch, perform a Unicode conversion and migrate to HANA in one step, all without touching your source system.

 

This does increase the amount of effort in root cause analysis of problems (which change caused the problem?), but it provides a single test landscape. One of the biggest risks in any project is inadequate testing, and it allows you to have the conversation with the test team: I've reduced the number of UAT runs from three to one, please give me support!

 

9) Solution Manager is your friend

 

I don't think I ever thought I'd say this, but Solution Manager is your friend in a HANA migration! There is an SCN wiki, SAP Solution Manager WIKI - Custom Code Management, which has lots of useful pages, including on the tooling available for HANA migrations.

 

This includes the Custom Development Management Cockpit (CDMC), which tells you what code has been customized and will break; Usage & Procedure Logging (UPL), which tells you what code is used and how much; and Clonefinder, which tells you which transactions have been cloned into custom code, and how customized they are.

 

Custom code is not your friend in HANA migrations, especially clones, because cloned transactions won't get updated with all the nice HANA optimizations that come as part of SAP ERP 6.0 EhP7.

 

Remember that you need to patch to the very latest version of Solution Manager! Don't take an N-1 approach to this!

 

10) Test, test and test!

 

Talk to your test manager and ensure that you have a good test strategy. Do you have separate phases for unit, integration, performance, stress and user testing? Do you have test automation using a tool like HP Quality Center?

 

How wide is your test coverage? Does it include front end solutions like portals and external web access?

 

And remember that there will be some effort in custom code remediation, so leave time to do this. Whilst HANA is an amazing database, it is a columnar database, and poorly written custom code will not run optimally on it. Row-based databases can be much more forgiving than columnar databases for shoddy code!

 

11) Build an integrated cutover plan

 

I've heard so many teams within an organization talk about "my plan". There needs to be "the plan"! The way we do this is to build a cutover spreadsheet with a numbering system and forecast times for every activity, and an integrated playbook which matches the numbering system in the spreadsheet to the table of contents.

 

Then when you ask the Basis team where they are at 3am, they can tell you what number, and you can see where you are relative to the schedule in 5 seconds flat. You can replace forecast with actual and get a revised cutover plan as you go.

 

Final Words

 

Now that I've written this blog and looked back on the BW blog, it is fascinating how different the motions are from a BW on HANA migration, but that shouldn't come as a surprise: ECC on HANA is a transactional system, with all the complexities that come with this. It runs the core systems of the most complex companies in the world.

 

One thing I've missed - it should be implicit - is that in a migration project, you need experienced resources. You need people who know your company and your processes, as well as external experts. Make sure you work with people you trust and want to work with, and can depend on to buy you a cup of coffee at 3am! And good luck!

 

I'm interested in your feedback and tips - what have I missed?

 

P.S. This blog takes me past 15k points to the "Diamond" badge in SCN. Thank you all so much for your support through the years, I truly appreciate your time in reading my work.


Using Custom Dictionaries with Text Analysis in HANA SPS9, for Formula One Twitter Analysis


Having done some work with the unstructured text engine within the SAP HANA Platform, I wanted to capture and share how to do this. For this example I have used Twitter data, looking at Formula One hashtags and F1 accounts.

 

The linguistic engine is just one of the engines in the HANA Platform. It is not often talked about, but it is very easy to use to extract structured information from unstructured text. This text could be held in a simple character field, or it could be within a binary document; many binary formats are supported, including TXT, RTF, HTML, PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX and MSG. The official Text Analysis, Text Search and Text Mining documentation can be found here

 

For this example I used the Text Analysis (TA) engine straight out of the box, and yes, it works; the results were OK. But, as you would expect with any industry, line of business or sport, F1 has its own terms, the drivers and teams (constructors) being prime examples, so I wanted to create a custom dictionary to improve the recognition of these.

 

There's a good blog that shows the old way (HANA SP7) of doing this: SAP HANA Custom Dictionary


With SP9, this is even easier; there are really only three steps.

  1. Create the XML dictionary
  2. Reference the dictionary in a TA configuration file
  3. Call the Text Analysis Configuration with SQL

 

1.1 HANA Web IDE

Go to the HANA Web IDE, for me this is at

http://ukhana.mo.sap.corp:8001/sap/hana/ide/editor/

For others it would be

http(s)://<HANA HOSTNAME>:80<HANA INSTANCE>/sap/hana/ide/editor/


WebIDE-1.png

 

1.2 Create Dictionary File

Create a new "File" for the dictionary; I used the path sap.hana.ta.config

The file needs to end in .hdbtextdict

WebIDE-New FIle.png

 

1.3 Create the dictionary

Here's a snippet of mine; I have attached the full XML file below. Check that your XML file opens in a web browser, and also be careful of the double quotes (") - sometimes you may find "smart quotes" like “ and ”, which are not smart for XML files!

 

<?xml version="1.0" encoding="UTF-8"?>
<dictionary xmlns="http://www.sap.com/ta/4.0">
  <entity_category name="F1 Driver">
    <entity_name standard_form="Lewis Hamilton">
      <variant name="Lewis"/>
      <variant name="Hamilton"/>
      <variant name="HAM"/>
      <variant name="@LewisHamilton"/>
      <variant name="#TeamLH"/>
      <variant name="LewisHamilton"/>
    </entity_name>
    <entity_name standard_form="Jenson Button">
      <variant name="Jenson"/>
      <variant name="Button"/>
      <variant name="#JB22"/>
      <variant name="@JensonButton"/>
      <variant name="JensonButton"/>
    </entity_name>
    <entity_name standard_form="Kimi Raikkonen">
      <variant name="Kimi"/>
      <variant name="Raikkonen"/>
      <variant name="Kimi Räikkönen"/>
      <variant name="Räikkönen"/>
      <variant name="Ferrari Kimi Raikkonen"/>
    </entity_name>
  </entity_category>
</dictionary>


Below you can see the full Dictionary XML file inside the WebIDE

Once you click Save, you should see in the black console (as above) that it gets saved and activated (compiled automagically) immediately.

F1.hdbtextdict.png


2.1 Create configuration file 

The easiest way is to choose one of the other .hdbtextconfig files that you see, whichever one is the most appropriate.

This can be done easily: right click, copy and paste. I chose the VOICEOFCUSTOMER one, as I was initially using some Twitter data for the unstructured analysis. Give the new file a sensible name, and remember to keep the .hdbtextconfig extension.


2.2  Edit Configuration file

Open your newly copied file and scroll to the bottom.

Add an entry that references the dictionary file you created above. For me, I added:

 

<string-list-value>sap.hana.ta.config::F1.hdbtextdict</string-list-value>


2.3 Save configuration file

You should see that it also activates at the same time, which will check for any errors too.

F1.hdbtextconfig.png

 

3.1 Database Table

You can now use your new configuration. I loaded some Twitter data using the HANA Data Provisioning Agent, which is also part of HANA SP9. I created a simple table F1-TWEETS with 3 columns. It must have a primary key, and the text field must be an NVARCHAR, VARCHAR, BLOB or CLOB.

F1-TWEETS.png

INSERT INTO "F1-TWEETS" (
SELECT "Id", "ScreenName", "Tweet"  FROM "F1"."F1HANA-Twitter_Status");


3.2 Create FullText index with the new configuration

CREATE FULLTEXT INDEX "F1-TWEETS-FTI" ON "F1"."F1-TWEETS"("Tweet")
CONFIGURATION 'F1'
FAST PREPROCESS OFF
TEXT ANALYSIS ON;

This creates a new table in my case $TA_F1-TWEETS-FTI which contains the structured version of the unstructured data.

When you use a dictionary, the TA_NORMALIZED column contains the standard form you defined in the dictionary (for example, "Lewis Hamilton" for the variant "HAM"), rather than the raw token.
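As an illustrative query (the $TA table name follows the pattern from step 3.2, and TA_TYPE holds the entity category from the dictionary; this exact query is my sketch, not from the original post), you could count mentions per driver by their standard form:

```sql
-- Count tweets per driver using the normalized (standard) form.
-- "$TA_F1-TWEETS-FTI" is the index table created in step 3.2;
-- TA_TYPE contains the entity category ("F1 Driver" from the dictionary).
SELECT "TA_NORMALIZED", COUNT(*) AS "MENTIONS"
FROM "F1"."$TA_F1-TWEETS-FTI"
WHERE "TA_TYPE" = 'F1 Driver'
GROUP BY "TA_NORMALIZED"
ORDER BY "MENTIONS" DESC;
```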


3.3 Visualisation of the Results

To illustrate the difference that the dictionary makes compare the 2 visualisations that I created with Lumira using a calculation view against the $TA_F1-TWEETS-FTI table.


Without the Dictionary - TA_TOKEN

Without-Dictionary.png

Without-Dictionary 2.png



With the Dictionary - TA_NORMALIZED

With-Dictionary.png

With-Dictionary 2.png

 

For me it is clear that there is enormous benefit in using Text Analysis to turn unstructured data into meaningful information, and when you combine that with custom dictionaries you have a very powerful tool.

How to uninstall the Automated Predictive Library in SAP HANA


Following on from How to Install the Automated Predictive Library in SAP HANA

I thought it would be useful to share when and how to uninstall the Automated Predictive Library (APL). Removing it is a lot easier and quicker, but it is not documented anywhere that I have seen.

 

When you update HANA revisions, for example from 93 to 94, you need to uninstall and re-install the APL.


You can't just install the APL again, because the installer detects that it is already installed (even though it won't work). So you need to remove the APL manually by deleting it.

 

I have done this via these commands

 

rm -rf /usr/sap/IAN/exe/linuxx86_64/plugins/sap_afl_sdk_apl_1.1.0.20.0_1

rm -rf /usr/sap/IAN/SYS/global/hdb/plugins/sap_afl_sdk_apl

 

Now you can easily re-install with the original APL installer


mo-dad69e91a:/tmp/apl-1.1.0.20-linux_x64/installer # ./hdbinst


APL Uninstall.png

Quick Access to the SAP HANA Development Documentation (for SAP HANA Studio)


Have you seen the newest version of the SAP HANA Developer Information Map? This version is available in the SAP Help Portal now at help.sap.com/hana_platform in the Development Information section.

 

This feature is available only in the online help version of the document.

 

In SAP HANA version SPS 08, SAP published the first SAP HANA Developer Information Map to help you to locate the correct document based on your specific needs. With the latest version of this guide (released in March this year) SAP provides direct access to the specific section of the SAP HANA Developer Guide and related reference guides directly from the image in the HTML version. (A copy of the image only follows. The links are active in the online help.) Give this a try and let us know (using comments or 'Like' for this blog) whether or not you find this helpful and if you would like to see more of these types of navigation aids in future documentation.

InterGraph.png

 

If you do not see the Note on the linked page to the guide, then clear your browser's cache and try the link again.

HANA Effect Podcast #10: Dell Rolls Out Real-time Sales Reporting with HANA


Bart Crider, Director of Enterprise BI, shares how Dell has revolutionized their sales and planning processes with real-time data for their entire sales force using a variety of real-time data sources.

 

HE10.jpg

 

We hope you enjoy hearing Dell’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 10

Sponsored by:

xeon_i_ww_rgb_3000

Small surprises, well documented


We all noticed it: SAP HANA has grown.

A lot, actually.

 

Let's begin with a little sentimental look into the past

 

What had begun as an in-memory database management system with some funky development artifacts like attribute, analytic and calculation views has grown into a huge, feature-packed platform for data management and processing.

 

With it, the description of what SAP HANA does and how it can be used had to grow: the documentation.

Checking my Docu OLD folder, this is what I find:

old_docs.png

That's more than 18 times as much documentation content for SPS 8 as there was for SPS 2 back in 2011.

 

Size does matter or does it?

 

But it's not just about the sheer size of the documentation.

 

The documentation page itself had been improved quite a bit:

docu_head.png

Besides the better layout and content organization, the documentation page (as well as the documents themselves) now contain the revision this documentation belongs to and the date of the last update.

While it may not seem a big deal, this is in fact a departure from the "usual SAP" way of updating the documentation only together with SPS levels.

This means, new and improved documentation can get to you much faster.

 

Surprises long wished for

 

The next thing is something I really love to have: the option to simply download the whole documentation package as a single zip file.

 

As you can tell from my Docu OLD folder above, I like to have the documentation locally available and not just online or inside SAP HANA Studio.

With the search functionality of Windows 7, having all the documents in one folder makes it really easy to run a quick search in the documentation for some information.

(Note: depending on your Windows and Acrobat Reader version, you might need to install the PDF iFilter 64Bit from Acrobat to index the contents of PDF files in Windows Search)

 

If you're like me and barely ever use the table of contents links shown above, but just scroll down the page to find the actual links, then this is the section you are looking for from now on:

compl_docu_download.png

 

As far as I can tell, this section is now available on nearly all documentation packages, mostly "somewhere down the page"...

Maybe this can be changed to a better location some time.

 

Both of these features were things I (and I am sure others as well) had asked for, and seeing them now I feel a little bit proud, too.

 

Spoilt for choice

 

Another new "thing" with SAP HANA SPS 9 is that there are now additional features and functionality that are not part of the core SAP HANA package.

 

These SAP HANA Options can be licensed separately and thus require their separate set of documentation.

 

In order to keep the level of confusion as low as possible, the documentation developers have put a new sub-section into the left-hand navigation bar.

All SAP HANA Options documentation can be found just there.

options_docu.png

 

 

And that's it again!

 

I really do like the improvements even though I wonder why I always have to find out about them by accident.

After all, we're all busy and if there's something new worth reading, I don't want to wait to find out about it a year later...

 

It would be a great thing to have more pointers to such things, just like the recent blog from the SAP HANA documentation developer team about the Information Map (Quick Access to the SAP HANA Development Documentation (for SAP HANA Studio))

 

There will always be more points on the wishlist (how would you feel about an RSS feed on the documentation page, announcing new document versions when they get uploaded? I'd love that!). And I find it a great thing to see them turning into reality one by one.

 

There you go - now you know!

 

Cheers, Lars

SQL and System Views Documentation Updates for SAP HANA SPS 09


The SAP HANA SQL and System Views Reference was updated to reflect the new licensing options available for SAP HANA. Altogether it documents 137 SQL statements, 155 SQL functions and 308 system views.

 

To make it easier to locate the information relating to options the structure of the guide was changed to include separate sections for SQL Reference for Options and System Views Reference for Options.

 

sql_ref_options.png

Some of the highlights include:

 

Partitioning

A new type of multi-level partitioning called range-range allows you to use, for example, a year as the first-level partition specification and then create a number of partitions within each year, say all records from 1 to 20,000 and all records greater than 20,000. Additional data types for range partitioning include BIGINT and DECIMAL.
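As a sketch of what this could look like (the table and column names here are made up, and the exact partition clause syntax should be checked against the SQL reference), a range-range partitioned table might be declared like this:

```sql
-- Range-range multi-level partitioning: first level by year,
-- second level by document number (all names are illustrative).
CREATE COLUMN TABLE "SALES_DOC" (
  "YEAR"   INT    NOT NULL,
  "DOCNUM" BIGINT NOT NULL,
  "AMOUNT" DECIMAL(15,2),
  PRIMARY KEY ("YEAR", "DOCNUM")
)
PARTITION BY RANGE ("YEAR")
  (PARTITION 2014 <= VALUES < 2015, PARTITION OTHERS),
RANGE ("DOCNUM")
  (PARTITION 1 <= VALUES < 20000, PARTITION OTHERS);
```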

 

Table re-distribution

Table re-distribution now allows you to assign tables or partitions to a particular node in a distributed SAP HANA system.
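For example, assuming a hypothetical host name and index server port, a table could be moved to a specific node with something like:

```sql
-- Move a table to a particular node in a scale-out landscape;
-- 'myhost:30003' is a placeholder for a real indexserver address.
ALTER TABLE "SALES_DOC" MOVE TO 'myhost:30003';
```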

 

Regular Expressions

SAP HANA SPS 09 supports regular expression operators in SQL statements. The search pattern grammar is based on Perl Compatible Regular Expression (PCRE).
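For example, the LIKE_REGEXPR predicate can be used in a WHERE clause (the table, column and pattern here are illustrative):

```sql
-- Find employees whose name starts with 'Jo', using a PCRE pattern
SELECT * FROM emp WHERE name LIKE_REGEXPR '^Jo.*';
```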

 

Table Sampling

The TABLESAMPLE operator allows queries to be executed over ad-hoc random samples of tables. Samples are computed uniformly over data items that are qualified by a columnar base table. For example, to compute an approximate average of the salary field for managers over 1% of the employee (emp) table, you could run the following query:

 

SELECT count(*), avg(salary) FROM emp TABLESAMPLE SYSTEM (1) WHERE type = 'manager'

 

Note that sampling is currently limited to column base tables and repeatability is not supported.

 

Number Functions
Number functions take numeric values, or strings with numeric characters, as inputs and return numeric values.


BITCOUNT
Counts the number of set bits in the given integer or VARBINARY value

 

BITXOR
Performs an XOR operation on the bits of the given non-negative integer or VARBINARY values

 

BITOR
Performs an OR operation on the bits of the given non-negative integer or VARBINARY values

 

BITNOT
Performs a bitwise NOT operation on the bits of the given integer value
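A few illustrative calls, run against DUMMY (the commented results simply follow from the bit patterns, assuming these functions behave as standard bitwise operators; for example, 255 is 11111111 in binary):

```sql
-- BITCOUNT: 255 = 11111111b, so 8 bits are set
SELECT BITCOUNT(255) FROM DUMMY;   -- 8
-- BITOR:  0101b OR  0011b = 0111b
SELECT BITOR(5, 3)   FROM DUMMY;   -- 7
-- BITXOR: 0101b XOR 0011b = 0110b
SELECT BITXOR(5, 3)  FROM DUMMY;   -- 6
-- BITNOT: bitwise complement in two's complement, so NOT 0 = -1
SELECT BITNOT(0)     FROM DUMMY;   -- -1
```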

 

More information on all of these features can be found in the SAP HANA SQL and System Views Reference on the SAP Help Portal.

 

Additional Resources

 

The HANA Academy also has a number of videos on these new SQL features and many more topics. Be sure to check them out:


SAP HANA Academy - SQL Functions: String_Agg


SAP HANA Academy - SQL Functions: TABLESAMPLE


SQL Functions: PERCENTILE_CONT and PERCENTILE_DISC


SAP HANA Academy - SQL Functions: BITCOUNT Bitwise Operation


SAP HANA Academy - SQL Functions: BITOR Bitwise Operation


SAP HANA Academy - SQL Functions: BITXOR Bitwise Operation

 

Additional SQL guides include:

 

SAP HANA Search Developer Guide


SAP HANA Spatial Reference


Backup and Recovery commands in the SAP HANA Administration Guide

#HANAEffect Episode 11 – #SAPHANA Drives Top and Bottom-line Value at @Cisco


Dileep Moturi, IT Project Manager, shares how Cisco has used HANA to provide real-time sales reporting and drive top and bottom-line value.  Cisco estimates the HANA system has saved them over 7,000 hours of senior executive data manipulation efforts.

HE11.jpg

We hope you enjoy hearing Cisco’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 11 _CISCO

Sponsored by:

xeon_i_ww_rgb_3000


How to...implement HANA dynamic material UOM conversions


First off, I have to recognize that this conversation was started (as always) in some blog comments with Tim Korba, who provided an approach that I tweaked and used in a current project. That's the best part about this community: a majority of my "aha" moments have come from collaborating with others, or from taking an interesting idea and building on it.

 

Creating Quantity Unit of Measure Conversion in HANA

 

Problem Statement: A given set of data needs to be converted dynamically according to user input, for example "I want to see all quantities in CS (cases)". The existing UOM conversions provided with HANA only cover the most basic conversions like length, weight and volume. From an SAP data perspective, we are always interested in converting back and forth between the different UOMs that a material may exist in, which could be EA, CS, PL, BG, BX, and on and on....

 

Functional overview

  1. Start with a quantity in the BUOM (base unit of measure) that you would like to convert to another UOM. If your transaction data is not in the BUOM, convert it before starting the following steps.
  2. Determine whether a conversion factor exists for the specific material and the target UOM.
  3. If a conversion exists, use the conversion factor to calculate the correct quantity and associate the target UOM; otherwise, use the existing quantity and associate the BUOM.
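In plain SQL, the same fallback logic might be sketched like this. The MARA/MARM table and field names follow the standard ECC material master (MEINS is the base UOM; MARM's UMREZ/UMREN give the ratio between an alternative UOM and the BUOM), but the TRANSACTIONS table and the query itself are illustrative, not the actual calculation view described below:

```sql
-- Convert transaction quantities (already in BUOM) into a target UOM,
-- falling back to the BUOM when no conversion exists for the material.
SELECT
  t."MATNR",
  CASE WHEN c."MEINH" IS NOT NULL
       THEN t."QTY" * c."UMREN" / c."UMREZ"  -- conversion factor found
       ELSE t."QTY"                          -- no match: keep BUOM qty
  END                            AS "FINAL_QTY",
  COALESCE(c."MEINH", m."MEINS") AS "FINAL_UOM"
FROM "TRANSACTIONS" t
INNER JOIN "MARA" m              -- every material has a BUOM (MEINS)
  ON m."MATNR" = t."MATNR"
LEFT OUTER JOIN "MARM" c         -- a conversion may or may not exist
  ON c."MATNR" = t."MATNR"
 AND c."MEINH" = :TARGET_UOM;    -- user-supplied input parameter
```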

 

HANA Implementation Overview

I chose to show this in a graphical calculation view, since that's where I spend most of my time. Of course, this can be applied to any other view with the same logic. Tim Korba had started with an attribute view on an analytic view, which is the same train of thought but implemented with different artifacts.


 

1. Create the base transaction data set. In this case, it is a union created from a number of calculation views, but this could be any dataset really. This data is from APO and is already in BUOM. The aggregation node simply rolls up the result of the union before any joins.

 

Sample transaction data at this step would look like this.

 

 

2. The first join retrieves the BUOM from the material master (MATKEY in APO, MARA in ECC). Depending on the application area you might already have this in the main dataset; in this case I did not. This BUOM will be used in case we have no matching entry from the UOM conversion. Since we know all materials must have an entry in the material master, we model this as an inner join.

 

 

Sample data at this step would look like this.


 

 

3. Retrieve all the conversions that are relevant for your target UOM. We use an input parameter to apply a filter on MEINH (alternative unit of measure) and, at the same time, calculate a conversion factor we’ll use later on.

 

 

FACTOR formula

 

Input Parameter details - you could also use the MEINH column as a list of values or another table. I just made this direct to keep it simple.

 

MEINH Filter

 

An example of results from this branch of the calc view would look like this, assuming that the input parameter value was “CS”.

 

 

 

4. Using a left outer join from the transaction data to the conversion data, we see if a conversion is available. It’s possible there is no conversion for the specific material, so a left outer join is appropriate, and we’ll deal with cases where no match is found in the next steps.

 


At this point the data will look like this, assuming we have a match for the requested target UOM


 


 

5. Create two measures: one to cover the final UOM and one to cover the final qty. The Final UOM will be the target UOM if a conversion is found, or else the original BUOM. The Final Qty will be qty x factor if a conversion is found, or else the original quantity.

 

FINAL_UOM

If we don’t have the target conversion that the user was asking for (via input parameter), then we have to use the BUOM that we started with. So the result of this column will be either “EA” or the target of “CS”, for example.

 

 

 

 

FINAL_QTY_SALES

This column converts the quantity into the target unit if one was found; otherwise it reverts to the original quantity, using the same logic as in deriving the UOM in the other calculated column. The key here is to make sure that, on the semantics tab, we associate this quantity with the UOM derived in the previous calculated column.

 

 

 

 

Sample data at this point would look like this. We can see that the previous 16,000,000 EA was successfully converted to 16,000 CS based on the conversion factor maintained for this material.

 

 

Testing the opposite scenario, where no target conversion is found (using “XYZ” as the target UOM), the following would be the result at the final aggregation node. Since no target was found, we revert to the BUOM we already knew.

 

 

As seen through a client tool like Analysis for Office, anytime we are mixing units, the aggregation will show an asterisk (*) since two unlike units can’t be aggregated. When you look at the material level however, you can see the qty and the unit associated with it.
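Outside of the modeling tool, the left-outer-join-plus-fallback logic of steps 4 and 5 can be sketched in a few lines of Python. This is only an illustration; the conversion table and factor convention here are hypothetical stand-ins for the material master data, assuming FACTOR is the number of target units per base unit (so FINAL_QTY = qty x factor):

```python
def to_target_uom(qty, buom, matnr, target_uom, conversions):
    """Mimic the calc view: look up a conversion factor (left outer join),
    falling back to the base quantity and BUOM when none is found."""
    factor = conversions.get((matnr, target_uom))  # None if no match, like a LOJ
    if factor is None:
        return qty, buom                  # FINAL_QTY / FINAL_UOM fallback
    return qty * factor, target_uom       # converted quantity, target unit

# hypothetical conversion table: 100 EA make up 1 CS for material M-001
conv = {("M-001", "CS"): 0.01}

print(to_target_uom(500, "EA", "M-001", "CS", conv))   # (5.0, 'CS')
print(to_target_uom(500, "EA", "M-001", "XYZ", conv))  # (500, 'EA')
```

The two calls mirror the two test scenarios above: a found conversion relabels and scales the quantity, while a missing one leaves both quantity and unit untouched.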

 

 

Happy HANA,

Justin

SAP HANA Platform - Innovations within Utilities


Recently, we visited a utilities customer in the UK, which does most of its reporting from SAP BW, fed by SAP ERP. They had two installations of ERP, one with standard modules and another with IS-U. They were totally unaware of SAP HANA and its value, so we gave them an insight into the benefits of BW on HANA and the innovations in S/4HANA. It was challenging, as they were far away from the newest technology and innovations. This meant that we had to give them a glance into 'The New SAP' - SAP without complexity: not just showing simplification, but a new platform which supports innovation and new business processes.

Their current reporting capabilities were delivered via the BEx Analyzer, which tells us that they had kept themselves within a shell, without bringing in change or new capabilities.


By 2020, 1.5 billion devices will be connected in utilities. With the smart grid concept, assets are also becoming more intelligent, as increasing amounts of embedded software and analytics are used to diagnose asset health, which plays a critical role in the utility ecosystem. This creates large volumes of data that enable utilities to improve energy efficiency, services, and customer engagement. Utilities businesses will have smart meters in the near future to monitor power, gas, and water consumption in real time. To create the 'wow' effect and to be memorable, I created an infographic on their business processes whilst also showcasing the innovations within the SAP HANA platform and S/4HANA. The infographic highlighted analytical, predictive, and geo-spatial capabilities, as well as Fiori and mobility, to help them envision a single platform with all these capabilities.



utilities infographic.png

Below is a link to an offline demo as well, which demonstrates a scenario wherein a pipeline maintenance engineer analyses the health of a network of pipelines and pumps using existing data and geo-mapped real-time sensor data from the pipelines. This demo also showcases the connected-devices concept via the Internet of Things (IoT). Based on historical data, predicted customer demand is calculated to help the engineer decide whether to replace the part or repair it.


Offline demo


Thanks & Regards,


Guneet Wadhwa

SAP UK

Happy Easter folks - easter date calculation


Fast lane: just download the attached files; they are self-explanatory.

 

 

Easter Sunday, kids asleep, now daddy gets to play. I decided (a month ago) to port my code for calculating Easter Sunday to HANA,

for the fun of it and to test CTEs and UDFs on HANA.

 

It was not as easy as I thought it would be. My HANA is SPS09, rev 93.

 

I had to abandon the idea of using CTEs, and connected to that, my old MS SQL code.

It was said CTEs should work, although it is not official yet, but I was not able to stumble upon a syntax that made it work.

By the time I gave up, the kids were asleep no more, so a night shift was ahead.


I had my diskette with the Clipper Summer '87 Easter calc code, but no floppy drive in the laptop.


So I googled and found a nice, simple algorithm; not elegant, but by then I did not care any more:

http://aa.usno.navy.mil/faq/docs/easter.php


The logical place to do the calculation was a scalar UDF.

Then I learned that scalar UDFs do not support SQL statements:

http://scn.sap.com/community/developer-center/hana/blog/2013/07/01/scalar-user-defined-functions-in-sap-hana

 

Should I use a procedure? Honestly, I would rather not do it at all than use a procedure for such a thing.

 

Finally I just made it work. It was too late to post (not Easter Sunday any more), so I postponed posting it and added a function for the Orthodox Easter calculation using the Meeus Julian algorithm:

http://en.wikipedia.org/wiki/Computus

 

It could be written differently: optimized, parts of the code merged, with more variables or fewer, and certainly better documented. But here it is as it is; the code is attached in the files as well.
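For reference, the USNO algorithm linked above can be sketched in plain Python; this is just the arithmetic (integer division throughout), not the HANA function syntax used in the attached files:

```python
def easter_gregorian(year):
    """Gregorian (Western) Easter Sunday per the USNO algorithm.
    Returns (month, day); // is integer (floor) division."""
    c = year // 100
    n = year - 19 * (year // 19)
    k = (c - 17) // 25
    i = c - c // 4 - (c - k) // 3 + 19 * n + 15
    i = i - 30 * (i // 30)
    i = i - (i // 28) * (1 - (i // 28) * (29 // (i + 1)) * ((21 - n) // 11))
    j = year + year // 4 + i + 2 - c + c // 4
    j = j - 7 * (j // 7)
    l = i - j
    month = 3 + (l + 40) // 44
    day = l + 28 - 31 * (month // 4)
    return month, day

print(easter_gregorian(2015))  # (4, 5) -> Easter Sunday was April 5, 2015
```

Porting this to a HANA scalar UDF mainly means replacing each `//` with FLOOR or a CAST to INT, for the reason discussed further below.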

 

Gregorian Catholic Easter Calculation

easter1.jpg


Gregorian Orthodox Easter Calculation


easter2.jpg

Good Friday

is then a piece of cake:

select ADD_DAYS(get_easter_for_year(2015), -2) AS "GoodFriday" from dummy;

 

Easter Monday

in Croatia and some other countries is a non-working day, so:

select ADD_DAYS(get_easter_for_year(2015), 1) AS "EasterMonday" from dummy;


I also learned that if I now select some code in the SQL editor and execute it, only the selection runs (just like in MS SQL SSMS).

Before, it was not like that and the entire script got executed, as I recall. I had missed that. Great, then.

 

easter3.jpg

I've also learned that in HANA integer division does not work like in MS SQL, so

 

select 3/2 "myInt" from dummy;

 

is not 1 but 1.5,

as opposed to

select 3/2 "myInt"

in MS SQL, which returns 1.

 

To take care of that I had to use FLOOR or CAST AS INT.

I picked CAST.
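As a neutral illustration of the same distinction (Python here, not HANA SQL): `/` is true division, while casting to int or flooring gives the truncated result; for positive operands, CAST and FLOOR agree:

```python
import math

print(3 / 2)              # 1.5 -> like HANA's select 3/2 from dummy
print(int(3 / 2))         # 1   -> like CAST(3/2 AS INT)
print(math.floor(3 / 2))  # 1   -> like FLOOR(3/2)
```

Note that for negative values, truncation (CAST) rounds toward zero while FLOOR rounds down, so the two choices are not always interchangeable.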

 

So there you go, the fun is over.

 

Any corrections are welcome.

 

Happy Easter !

[SAP HANA Academy] SAP HANA Cloud Integration for Data Services


In a series of 14 videos, the SAP HANA Academy's Tahir Hussain "Bob" Babar shows how to perform the most common SAP HANA Cloud Integration for Data Services tasks. SAP HANA Cloud Integration (HCI) is SAP's strategic integration platform for SAP cloud customers. HCI provides out-of-the-box connectivity across cloud and on-premise solutions. Besides the real-time process integration capabilities, HCI also contains a data integration part that allows efficient and secure use of extract, transform and load (ETL) tasks to move data between on-premise systems and the cloud.


Bob's 14 tutorial videos are linked below with accompanying synopses. Please watch the videos here or on the SAP HANA Academy's HCI-DS playlist.

 

HCI Data Services: Overview


To open the series, Bob provides a quick chalkboard overview of how data services works within SAP HANA Cloud Integration. This series will show you how to install the HCI-DS agent and how to build various datastores to connect to an SAP ERP or BW system, flat files, an OData provider, AWS, and a weather service in WSDL format. In addition, the series will teach you how to build tasks that extract data from these various sources, transform it using HCI-DS functions, and load it into an SAP HANA schema in the SAP HANA Cloud Platform.


Bob finishes this introductory video by walking through the basics of the seven HCI-DS tabs.


HCI Data Services: Agent Install


The second video in the series shows how to download, install and configure the data services agent on a Windows machine.

 

This video assumes you’ve already activated your HCI-DS account. Once logged in to HCI-DS, click the link to download the agent package in the agents tab and download the 64-bit Windows version from the Service Marketplace.

 

While following along with the simple installation, check the box next to "specify port numbers" to view the four port numbers that will be used for inter-process communication. For the agent to function, only the HCP port needs to be open for outbound traffic. Leave all of the defaults while finishing the installation of the data services agent.

 

To configure the connectivity to HCI first log in with the administrator user. This is usually the SCN ID of your HCP account. You can allocate the administrator role to your HCP user from the administration tab of HCI-DS.

 

Next you will need an agent configuration file. To get this, go to the HCI-DS agents tab and select new agent. Name the agent and allocate it to an existing or new group. Download the configuration text file on the next screen and save it to your machine. Copy the configuration file, then create and save a new file that can be linked to the HCI-DS agent configuration page.

 

Once you’ve uploaded the file select yes in the wizard to successfully start the agent. Refreshing the agents tab will display a green box that confirms the HCI-DS agent has been started and is properly connected.

 

HCI Data Services: SAP HCP HANA Datastore


Continuing the series, Bob shows how to create a datastore that connects to a SAP HANA schema in HCP. Access to the schema is secured through an access token generated via the HCP client console.

 

To create a HCP connection navigate to the HCI-DS datastores tab, click on the configuration option and enter the HCP administrator account name and schema ID. To get the access token, you must connect to the agent machine that stores your HCP client. In that machine open a command prompt window and set the proper proxies. Bob outlines the commands to enter that will eventually generate the access token. Paste the token into the datastores tab and save. Clicking connect will successfully connect to the HCP datastore.

 

Access to the various objects that will be created must be given to the granted role we are using with HCP. After logging into the SAP Web-based Development Workbench open a new SQL console and execute the line of code displayed below. This will generate a role name.

Screen Shot 2015-04-08 at 12.45.10 PM.png

Next, execute the SQL statement below using that role name. This will allow the user to insert, update, and delete on the schema. Now no security rights issues will occur when using HCI-DS to load data into the SAP HANA schema in HCP.

Screen Shot 2015-04-08 at 12.51.11 PM.png

HCI Data Services: OData Datastore


Bob’s next video takes data from an OData provider and stores it in an SAP HANA schema in HCP. To build a new datastore, click the add button in the datastores tab. Choose adapter as the type and OData as the adapter type. Set the depth to 2 so all the relationships can be expanded one level deep.

 

In a browser, go to services.odata.org/ and click on the Read-Only Northwind Service link, then change V3 in the URL to V2, as currently only OData version 2 is supported.

 

Note you may need to change the proxy for the OData adapter on your agent machine. If so, in the Start menu go to SAP Data Services Agent and then click on configure agent. Choose to configure adapters and set the adapter type to OData. Bob’s agent is internal, so he adds wdf.sap.corp to his host, http, and https proxy in the additional Java Launcher text box.

 

Copy the modified Northwind Service URL and go into the datastores tab of HCI-DS. Build a new Datastore named OData with an OData adaptor type. Paste in the Northwind URL as the endpoint URL and click save to create the adaptor.

 

Click the test the connection button to confirm the connection is working. The ultimate test is to see if you can import an object. So click on the import objects option, choose Alphabetical_list_of_products and click import.  Now you have created an OData datastore, set the proxies to successfully connect and imported OData services from the provider.

 

HCI Data Services: OData to HCP Task


This tutorial video shows how to take data from an OData datastore and load it into a SAP HANA Schema in HCP.

 

First create a table in the SAP HANA Schema in HCP. Next build a project with a task that contains a data flow. This task's data flow will take data from the OData provider and store it in SAP HANA. Bob shows how to create an Extract, Transform, and Load dataflow that joins the data source to the target query.

 

Mapping the OData input to the SAP HANA output is done by dragging input column names to their corresponding output column names. A green box will appear if the OData to HCP task has been executed successfully. The view history provides a trace log, monitor log and error log that displays what was processed.

 

HCI Data Services: Flat File Datastore


The sixth video of the HCI-DS series shows how to configure the DS agent to extract data from a file that sits on a Windows client. First Bob configures the directories of the SAP DS agent, adding a simple CSV file that contains employee names. After that, Bob builds a new datastore with a file format group as the type and a specific root directory. Now that a link to the file directory has been established, the file formats need to be built.

 

In this tutorial Bob elects to create the file format from scratch. Bob chooses comma as his column delimiter, default as his newline style, and none for his text qualifier, and marks that the first row contains column names. After Bob specifies a pair of columns as integer and varchar(50), he has finished setting up the file format.

 

HCI Data Services: Flat File to HCP Task


Continuing from the previous video this tutorial shows how to use HCI-DS to load data from a flat file (CSV) on a Windows client to a SAP HANA schema on HCP. In HCI-DS create a new task in the projects tab with the previously loaded flat file as the data source.

 

Next Bob builds a new target in the SAP HANA Web-based Development Workbench using a script he has already written that creates a column table named FILE_EMPLOYEES. After selecting HANA_ON_HCP as the target datastore, Bob saves the task and defines the data flow.

 

Bob imports the newly created table as the data flow’s target object. Bob uses EMPLOYEES.csv as the source file and then joins it to the target query. After auto-mapping the columns by name, the data flow is created, validated, and executed.

 

HCI Data Services: MySQL to HCP Task


Bob shows how to use the HCI-DS platform to load data from a MySQL database on a Windows client into a SAP HANA Schema within HCP. Bob has built an ODBC user and connects to his MySQL schema. Bob creates a new datastore that has a MySQL database type and ODBC as the data source.

 

After importing the table, Bob creates a new task in the projects tab. Bob selects his source and target and then builds a MySQL table in the SAP Web-based Development Workbench. Bob imports the table as his target object in the data flow and then adds a source table before then joining it to the target query. Once mapping is completed and validation is successful Bob executes the task.

 

Bob further verifies the connection’s success by inserting a new value into one of the rows in his MySQL table. After re-executing the task Bob sees the new value displayed in the SAP Web-based Development Workbench.

 

HCI Data Services: WSDL Datastore


In the next HCI-DS tutorial video, Bob demonstrates how to use HCI-DS to create a datastore for a WSDL web service that provides current weather data based on a supplied zip code.

 

The XML for the WSDL can be viewed here: wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL.

 

The input parameter will be a zip code. The output will contain the zip code's forecast, temperature, city, wind, pressure, etc.

 

Bob builds a new WSDL datastore with a SOAP Web Service as the type and the URL listed above as the path. Then he imports GetCityWeatherbyZIP as an object. Instead of columns WSDLs have a request schema and a reply schema. For this WSDL the zip code is the request schema and corresponding weather information is the reply schema.

 

HCI Data Services: WSDL to HCP Task


Continuing along from the previous video Bob shows how the weather data from the WSDL is outputted into a SAP HANA schema in HCP.

 

First Bob creates a new column table in the SAP Web-based Development Workbench called WSDL Weather by executing a SQL statement.

 

At the time of this tutorial’s creation HCI-DS doesn’t yet have a row generation transform that will output one row. So Bob needs to force HCI-DS to output one row by creating a table with just a single row. Bob creates a new flat file that contains 1 as the value in its first and only row in his Windows client. Now Bob creates a new datastore for the single rowed table and adds an integer column named ID.

 

Bob creates a new task with file format as the source and HANA_ON_HCP as the target. Then Bob starts building the data flow by importing the WSDL weather table and adding it as the target object.

 

Bob then imports the One_Row.csv table and joins it to the transform. Within the transform Bob selects the GetCityWeatherByZip WSDL as the output and enters a New York City zip code in the mapping.

 

Next Bob adds a Web Service transform to the data flow and joins it to the first transform. After choosing GetCityWeatherByZip as the output in the transform, Bob maps the pair of WSDLs together. So he has joined the source table that contains the zip code input parameter to the WSDL. Continuing on Bob joins the WSDL function call to an XML map and joins the XML map to the target query.

 

In the XML map Bob adds all of the reply columns (city, wind, temperature, etc.) from the input to the output. Finally in the target query Bob elects to auto map by name. Then Bob saves, validates and executes the task.


Bob verifies the connection by running a select on the WSDL table in the SAP Web-based Development workbench to see the current weather data for the SAP office in New York City.

 

HCI Data Services: ERP to HCP Task


In this tutorial video Bob shows how to use HCI-DS to extract data from an ERP provider, filter the data and then load it into a SAP HANA Schema in HCP.

 

First in the SAP HANA Web-based Development Workbench Bob creates a column table called ERP_Customers via a SQL statement. In HCI-DS Bob creates a new datastore with SAP Business Suite Applications as the type and uses an agent that exists on the same SAP system. After entering his personal credentials, client number and system number, Bob saves and then tests the connection of his new datastore.

 

Next Bob imports the table before creating a new task with an ERP source and a HANA_ON_HCP target. Then Bob imports the ERP_Customers table as the target object. When working with an ERP source you have three additional ABAP transformation options. Bob connects the source to the target query, maps the columns together and then elects to filter for where the region = NY. After validating and executing the data flow, Bob runs a select statement in the SAP HANA schema and sees all of the customers in New York.

 

HCI Data Services: Using Functions


In this video Bob highlights how to use some of the available functions within the query transform in a HCI-DS dataflow.

 

Bob replicates and renames the ERP task he built in the previous video. Bob wants to remove the leading zeros in front of each customer number for all US-based customers. So in the target query of the replicated ERP task’s data flow, Bob selects the Customer_ID in the output and navigates to the mapping transform details tab below. Bob elects to run the ltrim string function on the Customer_ID column with 0 as the trim_string.

 

Now after closing, validating and executing the ltrim modified replicated task, Bob confirms that the zeros have been removed from the Customer_ID by running a select statement on his ERP table in the SAP Web-based Development Workbench.

 

HCI Data Services: Sandbox to Production


In the series’ next video Bob demonstrates how to promote a task from the sandbox environment to the production environment so it can be scheduled. To promote a task first select it and then choose promote task under more actions.

 

After executing the task in the production environment Bob now has the option to schedule it. When building a new schedule you can determine the starting time and the frequency in which it will run.

 

The administration tab allows admins to create additional users with a developer, operator or administrator role. Notifications can be set so an email is sent when a task is executed successfully or fails.

 

HCI Data Services: Loading Data From AWS


Bob’s final video in the HCI-DS series shows how to load data from an Amazon Web Services file into the SAP HANA Cloud Platform with HCI-DS.

 

First Bob opens a specific port, 8080, on his AWS instance. Next Bob creates a new HCI-DS agent and lets that agent know where the folder containing his AWS data is located on his Windows machine. Bob lists this same folder as the root directory in his new datastore to connect to the file.

 

After creating a new column table in the SAP Web-based Development Workbench, Bob begins to build a new task in the projects tab. In the target query of the task’s data flow Bob maps the source columns from the AWS text file to the corresponding columns in the HCP target table before executing the task.

 

Bob is able to verify that this task has successfully connected the AWS file to HCP by adding an additional row to his AWS text file on the Windows machine. After re-executing the task and re-running the select statement in the SAP Web-based Development Workbench, Bob’s AWS table in HCP now has that additional row.


The SAP HANA Academy offers over 900 free tutorial videos on using SAP HANA and the SAP HANA Cloud Platform.


Follow @saphanaacademy

#HANAEffect #Podcast: @Toms Shoes Walks thru BW on #SAPHANA


Rita Lefler, Global BI Director at Tom's Shoes, walks us through their amazing BW on HANA migration. Hear how Tom's Shoes uses real-time analytics to maximize product availability and profitability.

 


 

We hope you enjoy hearing Tom's Shoes first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 12 TOMS_FINAL

 

Sponsored by:

xeon_i_ww_rgb_3000

Join the ASUG Pre-Conference Seminars “Application Development Based on SAP NetWeaver Application Server for ABAP and SAP HANA” by HANA Distinguished Engineers


Join the "Application Development Based on SAP NetWeaver Application Server for ABAP and SAP HANA" by Thomas Jung and Rich Heilman

Monday, May 4, 1:00 p.m. – 5:00 p.m.

 

http://events.sap.com/sapandasug/en/pre-conference.html?c=2%253#sap-netweaver

 

This session will provide an overview on how to leverage SAP HANA from SAP NetWeaver AS for ABAP applications that integrate with the SAP Business Suite. Speakers will explore concrete examples and best practices for customers and partners based on SAP NetWeaver AS for ABAP 7.4. This includes the following aspects: the impact of SAP HANA on existing customer-specific developments, advanced view building capabilities, and easy access to database procedures in the application server for ABAP; usage of advanced SAP HANA capabilities like text search or predictive analysis from the application server for ABAP; and best practices for an end-to-end application design on SAP HANA. Finally, with SAP NetWeaver 7.4, SAP has reached a new milestone in evolving the application server for ABAP programming language to a modern expression-oriented programming language. The new SAP NetWeaver Application Server for ABAP features covered in this session will include inline declarations, constructor expressions, table expressions, table comprehensions, and the new deep move corresponding.

#HANAEffect #Podcast Ep. 13 – @EMC Blazes a Trail to @putitonhana


Michael Harding, SAP Enterprise Architect at EMC, talks about their very ambitious and very fast HANA evolution, from sidecar reporting to CRM on HANA to Sales & Operations Planning. Make sure to follow @putitonhana to keep up with all their awesome HANA learnings.



We hope you enjoy hearing EMC’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 13

Sponsored by:

xeon_i_ww_rgb_3000


SAP HANA System Copy- Homogeneous Backup/Recovery Method


SAP HANA system copy procedure: below is the HANA system copy using SWPM with the backup/recovery method, from PRD to QAS.

 

- The SID of HANA production is PRD, so the schema will be SAPPRD.

- The SID of HANA quality is QAS, so the schema will be SAPQAS.

 

1. Take a backup of the HANA PRD system.

2. Copy/move the backup from the PRD host to the QAS host.

3. Ensure that QAS has enough space for the backup.

4. Download the latest SWPM.

5. Download the kernel.

6. Download the SAP HANA license for QAS.

 

Start SWPM and select Database refresh or DB Move

 

 

 

Provide the profile directory of the app server

 

 

Provide default Master Password for all users

 

 

Provide passwords for SIDADM & SAPServiceSID

 

Provide path of kernel Directory

 

Select Homogeneous System Copy method

 

 

Provide your DBSID as QAS; the database host will be the QAS host name, and the instance number that of QAS.

 

 

SWPM will take the below schemas

 

 

Here the schema name should be SAPPRD

 

 

Provide SIDADM Password

 

 

Provide the location of the backup path as per point 2. The backup name should be the prefix of the complete data backup; I have used the default backup name (COMPLETE_DATA_BACKUP).

 

Provide the location of the SAP HANA QAS license as per point 6.

 

 

Provide DDIC password

 

 

Select the local client directory as the HANA client software path. You can also select a central directory, depending on the requirement.

 

After clicking "Next" you will be asked to check the final list of parameters; after confirming, the actual HANA refresh will start.

 

 

After successful completion, you need to update the hdbuserstore on the SAP application server; this connects the SAP application server to the HANA DB.

Run report RS_BW_POST_MIGRATION to adjust the SAP HANA calculation views.

 

P.S.: I am not including the post-refresh activity steps; these are common to any other refresh activity.

 

Reference notes:

 

1844468 - Homogeneous system copy on SAP HANA

1709838 - BW 7.3 on HANA: System copy using backup and recovery

 

Regards,

Pavan Gunda

HANATEC certification experience


A few days ago I passed the C_HANATEC_142 certification.

I'll quote myself from http://scn.sap.com/blogs/HanaKahuna/2014/12/18/hanasup-certification-experience

Best way to test this studying method and long time remembering is to take a HANATEC exam (cca 80% of material is the same as for HANASUP).  “


C_HANASUP was in December 2014, C_HANATEC in April 2015: four months to forget what I had learned, without a current HANA project

(my daily job is ABAP and WD4A, with HANA development for fun when I have time).

 

In between I was not idle in HANA: I prepared for and got certified as an SAP HANA Development Professional (P_HANAIMP_142),

so I did have to reread and learn the administration and support material for that exam.

That was in February, so two months is anyway more than enough to forget the little details.

 

What did I do to prevent that?

Whenever I can, I read the questions people ask on SCN: real problems from real projects (more or less).

When I can, I engage, even if it's just posting where one can find the material to work out an answer.

 

I actually did not plan to go for HANATEC, but there was an opportunity, so why not. As I said, about 80% of the material is the same as for HANASUP.

 

Preparation

There was only about one week to prepare for this exam, so I did what I thought was best in light of the methods I described in my first blog post.

I read HA200 one more time.

I read Hana Installation one more time.

 

Time to test SQ4R

I used Re-Read and Review for my old Cornell Notes for C_HANASUP.

I did not go through the HANA system doing the HA200 examples, because I know that basic stuff from working on a development system: setting things up for development or test, and tweaking things when needed.

 

Actual Exam

The classic 80 questions, 180 minutes.

https://training.sap.com/shop/certification/c_hanatec142-sap-certified-technology-associate---sap-hana-edition-2014-g/

Much harder for me than I anticipated (I'm no administrator). I will never again take an exam on such short notice.

I realise now that you just need time to get into an administrator's frame of mind, to anticipate questions when reading the material.

There are just too many things that are SAP Basis (BC) baseline knowledge that do not come naturally to me.

 

Conclusion

SQ4R is OK, better than cramming for sure. Long-term retention is much better. Rereading and reviewing are needed.

 

Is it worth it for a developer to study this?

Absolutely! You have to know your way around administration.

When I was an MS SQL developer, it was almost a rule that a developer was actually 90% an administrator as well. Nobody's going to review your security permissions for you while you wait.

So if you have the chance, learn all you can, with or without certification. Although having the certificate is not unimportant, it is more important to know things and have fun with your job.

Eight hours is too long if you do what you don't like.

 

Have Fun !

Community criss cross puzzle posting in multiple locales


SAP loves to make a point of eating its own dog food, and so lowly SAP clerks like yours truly get to participate in multiple user communities, be it SCN or one of the many SAP JAM groups.

 

In one of the SAP internal groups I recently answered a question that I believe will be interesting to SCN readers, too.

So this is it:

 

"... a  user want to have the client display in English format.

Decimal values are displayed in german format "123,5", but they want to have it the english format "123.5" - 'decimal point is point'.

[...]

I set the properties for the system in HANA Studio (or Eclipse) with Properties --> Additional Properties --> Locale "English".

I also tried to set the User Parameter 'LOCALE' for the user to "EN" or "EN-EN", also without effect.

Has anybody an idea, which client setting has to be chosen to get decimal values displayed in english format? ... "

 

To answer this question there are a couple of things to be aware of.

 

Pieces of the puzzle

 

Piece 1: SAP HANA Studio has a preference setting to switch on or off result data formatting.

Piece 2: SAP HANA Studio is a Eclipse based product.

Piece 3: Eclipse is written in JAVA.

Piece 4: JAVA provides a rich set of API functions to format data, exposed via Formatter objects.

Piece 5: The Java Formatter objects use so-called Locales - objects that bundle localization-specific settings.

 

Putting the pieces together

Whether or not SAP HANA Studio formats the data in the SQL result set grid depends on the setting SAP HANA -> Runtime -> Result -> Format values.

studio result setting.png

Now, having activated the formatting, we can see that e.g. numeric data with decimal places gets formatted.

 

When I am logged on with a German Windows language setting, the number 1234.56 will be printed as '1.234,56' - the thousands delimiter for Germany is the dot '.' and the decimal separator is the comma ','.

With English language settings it would have been the other way round.
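This behavior is easy to reproduce outside of SAP HANA Studio with a few lines of plain Java (a minimal standalone sketch, not taken from the Studio code base):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleFormatDemo {
    public static void main(String[] args) {
        double value = 1234.56;
        // German locale: '.' as thousands delimiter, ',' as decimal separator
        System.out.println(NumberFormat.getNumberInstance(Locale.GERMANY).format(value)); // 1.234,56
        // UK English locale: exactly the other way round
        System.out.println(NumberFormat.getNumberInstance(Locale.UK).format(value));      // 1,234.56
    }
}
```

The same value, two different strings - and nothing about the value itself changed, only the Locale handed to the formatter.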

 

If you ask yourself "Why would I ever switch this off?", it's good to know that formatting lots of data is not a trivial task and can take considerable time when printing your results. That can slow down your development cycle of re-running queries considerably...

 

"Stop talking, show us what it looks like!"

 

... I hear you say, so here you go:

Formatting Enabled, German locale settings

german form.png

Behold and compare to the output with formatting switched off:

 

Formatting disabled, output as raw as possible in SAP HANA Studio

default form.png

SAP HANA Studio provides a place where one can specify the session language for a user connection and it is even called 'Locale':

studio result language.png

Irritatingly, this setting only affects SAP HANA's behavior within the database session. That is, text joins and language filters use this setting to return language-dependent data.

 

An important part of understanding why this setting doesn't help here is realizing that the whole business of printing data to the screen - including formatting the output - is done by the client application, not by the database server.

 

Technically speaking, numbers don't have a format as far as the database is concerned.

In the database numbers have a scale and a number of significant fractional digits.

If and how those are printed out is a different matter - just like your good old TI-30 would calculate with 11 significant digits internally while displaying at most 8 of them.
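In Java - the language SAP HANA Studio is written in - the distinction looks roughly like this: scale is a property of the value, while the printed string is a property of the client-side formatter (a sketch for illustration):

```java
import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.Locale;

public class ScaleVsFormat {
    public static void main(String[] args) {
        // To the database (and to BigDecimal) the value carries a fixed scale...
        BigDecimal price = new BigDecimal("123.500");
        System.out.println(price.scale()); // 3
        // ...but how it is rendered is decided by the client-side formatter
        System.out.println(NumberFormat.getNumberInstance(Locale.GERMANY).format(price)); // 123,5
        System.out.println(NumberFormat.getNumberInstance(Locale.UK).format(price));      // 123.5
    }
}
```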

 

Having said that, I would agree that when we have a setting called LOCALE, it should either change the 'whole experience' or there should be more specific setting options - something like an 'output locale' (compare the Java Internationalization API).

 

Anyhow, the point is: the LOCALE setting of the connection doesn't fix our formatting requirement.

 

Fiddling with the invisible pieces

So, we know that the data gets formatted via a Java Formatter, but apparently the LOCALE setting doesn't set this up for us.

Checking the Java SDK documentation (Locale and Formatter again), the avid reader figures out:

 

If the application doesn't specify a Locale for the Formatter, it uses the Locale that is currently active in the Java VM.

 

As it turns out, the default JVM behavior is to try to figure out the locale setting of the operating system user that is running the JVM.

But there are options to override this, as we find here.
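You can easily check what the JVM picked up - the values below are exactly what the -Duser.* command line options override (a sketch; the actual output depends on your OS user settings):

```java
import java.util.Locale;

public class DefaultLocaleDemo {
    public static void main(String[] args) {
        // The Locale the JVM derived from the OS user; Formatter objects
        // fall back to this when no explicit Locale is passed in
        System.out.println(Locale.getDefault());
        // The underlying system properties that -Duser.language,
        // -Duser.country and -Duser.variant overwrite at JVM startup
        System.out.println(System.getProperty("user.language"));
        System.out.println(System.getProperty("user.country"));
    }
}
```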

 

Working the puzzle from the edges

So, we can specify command line arguments when starting a JVM to set locale parameters:

-Duser.language=EN , -Duser.country=UK and -Duser.variant=EN

 

This is all great, but how do we start SAP HANA Studio with such parameters?

 

Putting in the final pieces

This last piece is actually easy.

Eclipse uses a parameter file called eclipse.ini, located in the folder where you find the Eclipse executable.

SAP HANA Studio simply uses renamed versions of those files, so the parameter file we're looking for is called hdbstudio.ini, just as the executable is called hdbstudio.exe.

 

-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20150204-1316
-vm
plugins/com.sap.ide.sapjvm.jre.win32.x86_64_81.0.0/jre/bin
--launcher.XXMaxPermSize
512m
--launcher.defaultAction
openFile
--launcher.appendVmargs
-vmargs
-Xmx1024m
-Dfile.encoding=UTF-8
-Xms256m
-XX:MaxPermSize=512m
-XX:+HeapDumpOnOutOfMemoryError

Looking into this file we find that it contains a part that starts with -vmargs.

This section allows us to specify parameters for the JVM used by Eclipse - or in our case, SAP HANA Studio.

 

Putting in the desired locale parameters like this

[...]
-vmargs
-Xmx1024m
-Dfile.encoding=UTF-8
-Duser.language=en
-Duser.country=UK
-Duser.variant=EN
-Xms256m
-XX:MaxPermSize=512m
-XX:+HeapDumpOnOutOfMemoryError

will provide the setting we are looking for.

Be aware that in order to save this file on MS Windows you will need elevated privileges.

 

The final picture

To activate the new settings, restart SAP HANA Studio.

We can see that the formatting option now uses the specified locale.

 

Formatting Enabled, English locale settings

english format.png

Unfortunately, the setting is active for all connections used by this SAP HANA Studio installation.

So we don't have a proper user-specific setting for the data formatting - this is a workaround at best.

 

This clearly puts SAP HANA Studio outside the category of data consumer/end-user tools.

It's an administration and development tool, just as it has been positioned since day one.

 

Having the picture completed, we can take a last look at our puzzle and pack it away again.

 

output_1INXRI.gif

 

There you go - now you know.

 

Cheers,

Lars

 

p.s.

An alternative approach would have been to set the JVM arguments via environment variables.

This approach would make it easy to create several different startup scripts that first set the parameters and then start SAP HANA Studio.

If you got the gist of this post, you shouldn't have any trouble putting this together.

#HANAEffect Ep. 14 – @Deloitte Delivers Real-Time Financials to Improve Client Service


Chris Dinkel, Director of IT for Deloitte Consulting, shares how real-time general ledger has transformed their finance functions and enabled them to streamline their service delivery to clients.

 

HE14.jpg

 

We hope you enjoy hearing Deloitte’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 14 FINAL

 

Sponsored by:

xeon_i_ww_rgb_3000

Generating Core Data Services files with Sybase PowerDesigner


Generating Core Data Services files with Sybase PowerDesigner

 

Contents


Generating Core Data Services files with Sybase PowerDesigner

What is Core Data Services?

The basics: Creating a project, a physical data model and the data model

Installing the HDBCDS Extension File (HDBCDS.xem)

The fun part: Generating the CDS definition file

Using Domains in the Data Model (CDS Basic Types)

Working with multiple Contexts

Conclusion

 

 

 


What is Core Data Services?

As described in the SAP HANA CDS Reference Guide, Core Data Services (CDS) is an infrastructure that database developers can use to create the underlying (persistent) data model that the application services expose to UI clients.

In that sense, CDS looks similar to SQL DDL, but the key advantage is that CDS definition files are created as design-time objects, which means they can be transported together with HANA models.

Design-time objects for the data model definition + logic + UI layer are indispensable for native HANA apps.

With this PowerDesigner extension you can create CDS (.hdbdd) files from physical data models.

So, let’s take the following example from the SAP HANA CDS Reference Guide:

1.png

After activation, this code creates two tables in the schema MYSCHEMA.

The CDS Guide explains in detail all the elements that compose a CDS (.hdbdd) file.
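In case the screenshot above doesn't render, the guide's example roughly corresponds to the following sketch (the entity and column definitions here are assumed for illustration - check the CDS Reference Guide for the exact listing):

```
namespace com.acme.myapp1;

@Schema: 'MYSCHEMA'
context Books {
    entity Book {
        key id    : Integer;
        title     : String(100);
        author_id : Integer;
    };
    entity Author {
        key id : Integer;
        name   : String(100);
    };
};
```

After activation, each entity becomes a table in MYSCHEMA, named after its fully qualified path, e.g. com.acme.myapp1::Books.Book.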

 


The basics: Creating a project, a physical data model and the data model


Go to File -> New Project and give a name to the project

2.png

 

1)      Go to File -> New Model and select Physical Model under the Information category.

3.png

2)      Create the Book and Author tables

4.png
5.png

 

3)      Create the SCHEMA

Go to Model -> User and Roles -> Schemas


6.png

7.png



4)      Set the tables’ owner


Go to Model -> Tables

8.png

And set the Owner column to MYSCHEMA

9.png

 

5)      Preview the Data Model (press Ctrl+G or Right click on the Model -> Properties )

10.png


 

Installing the HDBCDS Extension File (HDBCDS.xem)


Before proceeding we must create a folder to store the .XEM extension file.


Go to Model -> Extensions -> Attach an Extension


11.png

Click on Path and select the folder that contains the .XEM file

12.png
13.png

 

After the Path is added, select the HDBCDS extension

 

14.png

 

Once the HDBCDS extension is added, a new object is added to the toolbox for creating contexts

 

15.png

The fun part: Generating the CDS definition file

 

Before generating the .hdbdd file from the PDM we must create a Package and a Context.

Using the package tool in the toolbox under SAP HANA Database 1.0 category, create a package named com.acme.myapp1

 

16.png

 

Use the context tool to create a context named Books.

 

17.png

 

Set the context Books as main context for the data model.


Go to Model Properties (right click on DemoCDS Model -> Properties)
In the CDS Tab set the Books context for the Context property

18.png

 

Finally, generate the CDS Definition files. Go to Tools -> Extended Generation

19.png

In the Targets tab select HDBCDS, leave the default Selection (all objects selected), and in Generated Files select the .hdbdd and .hdbschema files.
Set an output directory and click OK to generate the two files.

20.png
21.png

Using Domains in the Data Model (CDS Basic Types)

CDS supports the definition of shared types, so that your column definitions can reference the types instead of explicitly declaring the data type.

For example:

22.png
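In CDS terms, the generated output then uses a shared type roughly like this (a sketch assuming the TextoLargo domain described below; the actual generated file may differ in detail):

```
// a shared basic type, declared once...
type TextoLargo : String(100);

entity Book {
    key id : Integer;
    // ...and referenced by columns instead of an explicit data type
    title  : TextoLargo;
};
```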

 

To use this feature, go to Model -> Domains and define a Domain, for example “TextoLargo” defined as VARCHAR(100)

23.png

Set the Domain in the column definitions instead of setting the Data Type.

Go to Model -> Columns

24.png

 

And set the Domain value to “TextoLargo” for those fields of VARCHAR(100) type.

Tip: If the Domain column is not present in the Columns properties, press Ctrl+U and select “Domain”.

Go to Tools -> Extended Generation and generate the model.hdbdd file once again.

25.png

 

Working with multiple Contexts

 

As described in the CDS Guide, we can define multiple contexts for grouping tables that belong to the same functional group.

For instance, let’s rename the Books context to Main, and create two more contexts named “Datamodel” and “Application”.

Go to the Book table’s properties, select the CDS tab, then set the Context to “Datamodel”.

26.png

 

Create a third table named UILogic and set the “Application” context in the CDS tab.


27.png

 

 

Finally, generate the hdbdd file to see the context definition.


28.png
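The generated file then nests the entities into their respective contexts; structurally it looks roughly like this (a sketch of the nesting, not the literal generator output):

```
namespace com.acme.myapp1;

@Schema: 'MYSCHEMA'
context Main {
    context Datamodel {
        entity Book   { /* ... */ };
        entity Author { /* ... */ };
    };
    context Application {
        entity UILogic { /* ... */ };
    };
};
```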


 

Conclusion


It is possible to use Sybase PowerDesigner to generate SAP HANA CDS files. Using an enterprise modeling tool such as PowerDesigner brings more transparency to the model definition and simplifies data model maintenance. Once the PDM is defined, generating the CDS file requires little effort.


There are some limitations in this extension; for example, structured types cannot be created.


If you want to enhance this extension, feel free to modify the code.


Go to Extensions -> HDBCDS -> Properties to see all the code behind this extension

29.png

 

 
