
Demystifying SAP HANA Administration


This is the first in a series of blogs by SAP instructors, designed to “demystify” some of the hottest topics in SAP technology. We’ll also provide links to information that can help you expand your knowledge. If you have a topic you’d like us to discuss, post a reply to this blog. And of course, we encourage you to comment or ask questions.  We’re here to help each other learn!

 

Since its release, SAP HANA has represented a radical change in the direction of SAP's product architecture. As an SAP instructor, I’ve been wondering how easy it would be to manage the new SAP HANA platform. Can I use the platform and administration tools that are currently in place? Or do I need to learn a new set of gadgets?

 

After reading through some of the documentation available in SAP Learning Hub, and reviewing SAP courses such as SAP HANA Introduction (HA100), SAP HANA - Operations & Administration (HA200), and SAP HANA Implementation and Modeling (HA300), I realized that the learning curve is quite manageable – if you get some guidance up front. That’s why I’m here! This blog will demystify SAP HANA administration by listing tools you can use to handle periodic administrative tasks, and by explaining the key capabilities of each and when to use them.

 

If you’re an SAP System Administrator (BASIS Administrator) or Database Administrator, you may be surprised to learn that although there are new tools to manage SAP HANA, some of the old SAP BASIS tools will also help you get the job done. These are the five major SAP HANA administration tools:

 

  • SAP HANA studio
  • SAP HANA cockpit
  • SAP DB Control Center
  • SAP Solution Manager/DBA Cockpit
  • Command Line (HDBSQL)

 

The following figure summarizes the role of each tool.

HANA_AdminTools_Overview_HA200Col10_Figure208.JPG

 

Tool #1 - SAP HANA studio

 

Administrators use SAP HANA studio to start and stop services, monitor the system, configure system settings, perform backup/recovery, and manage users and authorizations.

HANA_HANAStudioConsole2_HA200Col10_Figure211.JPG

HANA_Studio_HA200Col10_Figure65.JPG


Tool #2 - SAP HANA cockpit

 

SAP HANA cockpit is an SAP Fiori Launchpad application that provides a single point of access to a range of web-based applications for the administration of SAP HANA. It’s installed automatically with SAP HANA and displays its content as tiles arranged in groups that can be customized. Additionally, the cockpit provides a role-based concept that enables you to control which tiles/apps from the tile catalog can be accessed by which administrator.
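For the SPS 09–11 era described here, the cockpit is typically reached under a URL of the following pattern (host and instance number are placeholders; verify the exact path against your revision's documentation):

http://<host>:80<instance_number>/sap/hana/admin/cockpit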

 

HANA_SAPHANACockpit_HA200_Col10_Figure222.JPG

Tool #3 - SAP DB Control Center


SAP DB Control Center is SAP’s new generation of enterprise-class systems management solutions that enables DBAs to administer and monitor a variety of SAP database products in one graphical user interface, either locally or remotely.

It was first available with SAP HANA SPS 09, and it’s added as a delivery unit that can be downloaded from the SAP Support Portal – Software Downloads.

HANA_SAPDBControlCenter_HA200Col10_Figure210.JPG


Tool #4 - SAP Solution Manager/DBACOCKPIT

 

DBACOCKPIT is an ABAP-based database management tool available in SAP NetWeaver ABAP systems such as SAP Solution Manager. It supports SAP HANA as well as other databases.

 

The transaction code DBACOCKPIT enables you to monitor and administer SAP HANA remotely. Additionally, it provides quick access to a range of analysis information, for example, performance monitoring, space management, and job scheduling.

HANA_DBACockpit_RemoteMonitoring_HA200Col10_Figure226.JPG


Tool #5 - HDBSQL – Command Line Tool

 

SAP HANA HDBSQL is a command line tool for entering and executing SQL statements, executing database procedures, and querying information about SAP HANA databases.
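As a minimal sketch, a query can be fired off in a single line from the shell (this assumes a local system with instance number 00; M_DATABASE is a standard monitoring view):

hdbsql -n localhost -i 00 -u SYSTEM -p <password> "SELECT * FROM M_DATABASE"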

 

HANA_HDBSQL_HA200Col10_Figure240.JPG

 

Follow me on LinkedIn: Fabio Souza



The SAP HANA Platform, Powering the Digital Transformation


BLOG/Webcast review by Martin Mysyk, Enterprise Architect, SAP and POC for Enterprise Architecture SIG, ASUG.


A special webcast deserves a special review. On February 29, 2016, ASUG held an extended webcast which included a review of their 2015 SAP HANA Adoption survey as well as a presentation on how the SAP HANA platform is powering Digital Transformation. The presenters were China Martens, Senior Editor, Media and Research at ASUG, and Matt Zenus, VP, Database and Data Management Go-to-Market, SAP. The 90-minute presentation was very informative and I would recommend that you watch the whole presentation. This is a recap of the presentations and where you can find more information on the topics raised.

 

Part 1 – ASUG SAP HANA Adoption Survey

The first presentation in the webcast was a great review by China Martens on the results of the “SAP HANA Adoption Survey: What ASUG Members Say in 2015.” As background to her presentation, Martens explained the aims of ASUG Research. To summarize, they are: to share experiences; provide data points to ASUG members; give feedback to SAP; and to use the research findings as the basis for SAP and ASUG to jointly create additional educational resources for members. You can find more information about the survey, which drew 1,253 respondents between September and October 2015, and its results here.

 

Good news! According to the survey, HANA adoption numbers are all up. I am not going to review the survey results in depth here but the findings included the benefits HANA adopters are reporting as well as the reasons why other companies have yet to invest in HANA. You can see the results for yourself by watching the webinar if you missed it. It can be downloaded here.

I think it is great that ASUG does these surveys; they give you a second opinion on what is happening in the SAP ecosystem, and on what people are seeing outside of the SAP marketing engine.

 

Just a few comments on the survey:

  • Included in the survey’s findings is analysis by Kevin Reilly, ASUG’s S/4HANA Community Advocate. This is good commentary and one person’s interpretation of ASUG Research’s own analysis of the survey results
  • The overall report also includes an “SAP Point of View” response by Steve Lucas, President, Platform & Analytics, SAP. Lucas keeps his remarks short and to the point and includes lots of links so that people can do their own due diligence into SAP’s HANA platform

 

My observations from the webcast: More people have adopted SAP HANA and are aware of the technology. This speaks well for the increase in market share and future adoption of the SAP HANA platform.

 

Part II - The SAP HANA Platform, Powering the Digital Transformation

The second speaker on the webcast was Matt Zenus, VP, Database and Data Management Go-to-Market, SAP. This part of the presentation began with an explanation of SAP’s approach to Digital Transformation. SAP clearly has SAP HANA positioned to be the cornerstone, or core, of all digital transformation projects for both its customers and its own organization.

 

Zenus started out by talking about digital business and the drivers for technology justification and adoption to stay competitive. I have seen this material before in some of the SAP and ASUG webcasts and it is starting to resonate now. The world really is changing rapidly and we need to keep up. Several of the examples he gave from Uber to Under Armour illustrated the different business model changes that are occurring.

 

“New digital architectures MUST go beyond traditional DBMS,” Matt Zenus, SAP


Zenus cited industry analyst Forrester Research’s evaluation “The Forrester Wave™: In-Memory Database Platforms, Q3 2015.” [You can access the Forrester Wave free of charge here.] SAP is providing leading-edge technology to enable Digital Transformation. The Forrester evaluation certainly validates the market recognition of the technology and the leadership vision of SAP in this area. It also supports some of the findings from the ASUG 2015 HANA Adoption survey which I mentioned earlier.

 

During the ASUG webcast, Zenus explained how SAP HANA is continuing to mature and expand its functionality. So even if you think you know HANA, I’d advise you to keep evaluating and re-evaluating the technology as new capabilities are being added all the time.

 

Later in the webcast, Zenus talked about a companion product to SAP HANA: SAP HANA Vora. The message that I took away from his presentation is that a digital divide exists between Enterprise and Big Data. Accessibility is key to making sense of the data in business terms. To get across the Big Data digital divide, SAP has introduced HANA Vora, which extends the SAP technology landscape to Hadoop and reduces complexity in distributed environments.

Also mentioned: SAP HANA Vora works with, but does not require, SAP HANA to be installed; HANA Vora installs with Spark on the Hadoop environment. The combination of these two technologies presents a pretty compelling story for a digital platform which companies can use to make sense of Big Data.


Watch the webcast for the review of the ASUG HANA survey, then continue on with the second half to see how SAP HANA and HANA Vora can help you in the world of digital transformation and decide for yourself where your journey takes you.

SAP HANA Webinar Series: Experts’ Insights on Best Practices, Case Studies & Lessons Learned


New Blog @  http://bit.ly/1MihzBn

 

  • New SAP HANA Webinar Series helps you gain insight and deliver solutions to your organization.
    • Learn about upcoming sessions and more in our blog series. http://bit.ly/1MihzBn
    • Each webinar features an in-depth perspective on various topics by an SAP HANA expert.
    • Make sure to check out our blog so you don’t miss your opportunity to interact and learn from the best.

  • Encourage colleagues to join the iFG community to support your SAP HANA implementations and adoption initiatives.

iFG Webinar Series 1.jpg

[SAP HANA Academy] Utilize Modeled Persistence Objects in the SAP HANA Data Lifecycle Manager


In a series of three tutorial videos the SAP HANA Academy's Tahir Hussain Babar (Bob) details how to use Modeled Persistence Objects in the SAP HANA Data Lifecycle Manager. In this series Bob is using DLM SPS03 on top of SAP HANA SPS 11. This series is part of the SAP HANA Academy's comprehensive playlist covering What's New in SAP HANA SPS 11.


Overview of the Exploration Module


In the first video of the series Bob provides an introduction to the SAP HANA Data Lifecycle Manager's Exploration Module by showcasing how to use it for analyzing a system's data.

Screen Shot 2016-03-14 at 2.34.51 PM.png

Bob first launches SAP HANA Studio. Bob is running a SAP HANA SPS 11 system that already has Dynamic Tiering installed. Bob has created a schema called BOBSDATA that contains the sales data he will be using throughout the series. Also, Bob has a system user that already has the necessary rights (shown below) to execute all of these tasks.

Screen Shot 2016-03-14 at 10.28.23 PM.png

BOBSDATA contains two tables of sales orders. Bob runs a Select statement on both of the tables in a new SQL console to show that the data contained in his schema is organized in a Header table and a Footer table.

Screen Shot 2016-03-14 at 10.31.24 PM.png

The link between the two data sets is the ID column. The SALESORDERS_HOT table contains information on the individual transaction and the SALESORDERITEMS_HOT table contains information on the products and quantity of each order. Bob next runs the below SQL statement to group the data by year.

Screen Shot 2016-03-14 at 11.06.10 PM.png

The data is now divided by 2014 and 2015.
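The exact SQL is in the screenshot above; as a hedged sketch, a year-level grouping could look like the following, assuming the order date column is called CREATEDAT (a hypothetical name):

-- Count the header records per calendar year (CREATEDAT is an assumed column name)
SELECT YEAR("CREATEDAT") AS "YEAR", COUNT(*) AS "ORDERS"
FROM "BOBSDATA"."SALESORDERS_HOT"
GROUP BY YEAR("CREATEDAT")
ORDER BY "YEAR";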


In a web browser Bob logs into the DLM as his system user. The exploration function can be used to establish which tables are appropriate candidates for data relocation. Essentially, it summarizes the amount of data (space) that each of your tables or other objects is occupying within your SAP HANA system.

Screen Shot 2016-03-14 at 11.30.04 PM.png

This is a very useful utility when viewed as a two-step approach. First, you can perform graph-based exploration on different levels. With a sunburst chart (shown above) you can drill down into information on various levels. The levels are either host, schema or table. For example, we can use this to identify the 10 largest tables within our system or identify the 10 largest tables within our schema on a given host.


Second, you can use the form-based exploration to view how the data in a selected SAP HANA table is spread out according to a selected column filter. With this insight you can then derive meaningful business rules for relocating the data.
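The "10 largest tables" question from the first step can also be answered in plain SQL; as a sketch for comparison (not part of the video), using the standard column-store monitoring view:

-- Ten largest column tables by total memory footprint
SELECT SCHEMA_NAME, TABLE_NAME, MEMORY_SIZE_IN_TOTAL
FROM SYS.M_CS_TABLES
ORDER BY MEMORY_SIZE_IN_TOTAL DESC
LIMIT 10;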


Currently Bob has a relatively small, single node system. By drilling down further, Bob can see how much data is being used by each host and each port. As Bob only has one host and one port he clicks on the eye icon next to each to deselect them. Now Bob sees how much data each of his schemas is taking up. His _SYS_REPO schema is taking a hefty 72% of the total share.

Screen Shot 2016-03-15 at 12.23.33 AM.png

Hypothetically, if BOBSDATA was taking up too large a share of the system he may want to archive it. By drilling down into that schema Bob can view the total memory share of each table in the schema. Bob further modifies the sunburst chart so he can view how much space each individual table in his BOBSDATA schema is taking up within the entire system.

Screen Shot 2016-03-15 at 12.28.17 AM.png


How to Build Table Groups with Modeled Persistence Objects

Screen Shot 2016-03-14 at 2.53.37 PM.png

Bob examines how to build table groups using Modeled Persistence Objects in the SAP HANA Data Lifecycle Manager in the series' second video. One way to relocate Bob's header and footer SALESORDERS tables would be to build a pair of lifecycle profiles. However, using Modeled Persistence Objects is a much better method, as the two tables will be linked together in a table group.


In the Data Lifecycle Manager Bob opens the MANAGE MODELED PERSISTENCE OBJECTS tab and clicks on the plus button to create a new join. Bob names his join SHA_LP_MPO. At the bottom of the screen Bob searches for BOBSDATA in the text box adjacent to Fully Qualified Table Names and selects his two SALESORDER tables from the resultant drop down menu.

Screen Shot 2016-03-15 at 12.40.42 AM.png

ID is the Common Key Column Name shared by the two tables, so Bob selects that. Next Bob clicks Save and then clicks Activate.


With the new object activated Bob opens the MANAGE LIFECYCLE PROFILES tab and builds a new profile by clicking on the plus button. Bob names his new DLM Profile SHA_LP_MPO. For Source Persistence Bob selects the SAP HANA Table Group option and uses his recently created Modeled Persistence Object for the Table Group Name. Bob opts to use a Manual Trigger. Bob keeps the Destination Attributes shown below.

Screen Shot 2016-03-15 at 12.56.53 AM.png

In the Rule Editor Bob leaves the Rule Editor Type as Table Group SQL-Based Rule Editor and then scrolls to the bottom. The only Column available for Bob is the ID column.


Back in SAP HANA Studio Bob must find the highest ID listed for each of his years. Bob executes the below SQL statement to output the maximum ID for each year.

Screen Shot 2016-03-15 at 1.00.26 AM.png
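As a sketch, the query behind the screenshot could be written as follows, again assuming a hypothetical CREATEDAT date column:

-- Highest order ID per year; these IDs become the boundaries for the relocation rule
SELECT YEAR("CREATEDAT") AS "YEAR", MAX("ID") AS "MAX_ID"
FROM "BOBSDATA"."SALESORDERS_HOT"
GROUP BY YEAR("CREATEDAT")
ORDER BY "YEAR";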

So Bob wants to discriminate by the maximum IDs for 2014 and 2015. Back in the MANAGE LIFECYCLE PROFILES tab Bob selects the ID column and then writes a less-than rule: ID < the 2014 maximum ID. Note that Bob removes the commas from the number. So all of the IDs from 2014 will be moved across. Then Bob clicks Validate Syntax.

Screen Shot 2016-03-15 at 1.05.36 AM.png

About half of the records will be affected and relocated. Next, click on Save and then on Activate. Then click on the Simulate button and choose Data Relocation Count to see how many items from both of the SALESORDERS tables will be moved from Hot to Cold.

Screen Shot 2016-03-15 at 1.10.06 AM.png

Next, click Run to trigger the relocation manually. To confirm that the relocation is working, open the LOGS tab, select your lifecycle profile and then choose to show the logs for the current run ID. Once the relocation is completed, in the MANAGE LIFECYCLE PROFILES tab, you can see how many rows were moved from your SAP HANA system to SAP HANA Dynamic Tiering.

Screen Shot 2016-03-15 at 1.15.38 AM.png

The Difference Between Pruning Views and G Views

Screen Shot 2016-03-14 at 2.57.00 PM.png

In the third and final video of the series Bob examines the different objects that were created when he built the lifecycle profile.


In the MANAGE LIFECYCLE PROFILES tab, with your recently created MPO profile selected, scroll to the bottom of the page and choose the Destination Persistence to view the objects you have created. Because multiple objects were relocated (the header and the footer table), we now have four views: a pruning node view, a union view, and the root views for the SALESORDERS and SALESORDERITEMS tables.

Screen Shot 2016-03-15 at 11.22.17 AM.png

Opening Generated Objects will show the single stored procedure that will execute that rule.


Back in SAP HANA Studio Bob runs the SQL syntax shown below to see how many of his records have been moved to Dynamic Tiering.

Screen Shot 2016-03-15 at 11.29.03 AM.png

Most of the records from 2014 have been moved to Dynamic Tiering. Next Bob opens the view folder in his SAP_HANA_DLM_GNR schema. The view folder contains normal SQL union views, called G views. These G views are highlighted below. The other type of view is the P view; those are listed in the Column Views folder.

Screen Shot 2016-03-15 at 3.32.10 PM.png

The G views perform a regular SQL union on the tables which reside in SAP HANA Dynamic Tiering and SAP HANA. Bob shows the content of the G view that combines the orders.

Screen Shot 2016-03-16 at 8.48.07 AM.png

Bob copies the syntax highlighted above and pastes it into a new SQL console. Bob changes TOP 1000 to COUNT(*) in the syntax and then executes the statement. This returns a count on the combination of the two data sets. Next Bob runs the below statement that will provide a count of his SALESORDERS table from his BOBSDATA Schema.

Screen Shot 2016-03-16 at 8.51.46 AM.png
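As a sketch, the two counts can be compared directly; the schema and view names below are taken from the text of this post:

-- Rows visible through the generated G view (union of hot and cold store)
SELECT COUNT(*) FROM "SAP_HANA_DLM_GNR"."DLM_GVIEW_BOBSDATA_SALESORDERS_HOT";

-- Rows remaining in the hot SAP HANA table only
SELECT COUNT(*) FROM "BOBSDATA"."SALESORDERS_HOT";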

The count for the view stored in Dynamic Tiering and SAP HANA is much larger than the count of the SALESORDERS table. Bob confirms the data's source by opening the create statement for his DLM_GVIEW_BOBSDATA_SALESORDERS_HOT view and highlights the pair of sources.

Screen Shot 2016-03-16 at 8.55.15 AM.png

The problem with the G views is that no matter what select statement you run, they will access data from both stores. So even if you only want data from 2015, a G view will use both sources.


The Pruning View has in-built intelligence so it will only access the relevant store based on the SQL query run against it. When reporting you should use a P view rather than a G view as it will use the system resources more efficiently.

Screen Shot 2016-03-16 at 9.09.32 AM.png

For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

Modeling CM/YTM/LYTM Comparison in Calculation Views using Input Parameters derived from Scalar Procedure


The motivation for this exercise is based on the approach Uladzislau Pralat presented here.

Other approaches, presented by Justin Molenaur, Ravindra Channe and Guneet Wadhwa, can be found at the links below:

http://scn.sap.com/community/hana-in-memory/blog/2013/07/26/implementation-of-wtd-mtd-ytd-in-hana-using-script-based-calc-view-calling-graphical-calc-view

http://scn.sap.com/community/hana-in-memory/blog/2014/03/10/how-tocalculate-ytd-mtd-anytd-using-date-dimensions

http://scn.sap.com/community/hana-in-memory/blog/2015/01/09/simple-example-of-year-to-date-ytd-calculation-in-sap-hana

 

The difference in the approach I am discussing here lies in leveraging the input parameter’s “Derived from Procedure/Scalar Function” option to derive the Year_To_Month (CYTM), Last_Year_Current_Month (LYCM) and Last_Year_To_Month (LYTM) values as input parameters, which can then be applied as filters. I am using procedures returning scalar values to compute the month. This is similar to the approach we adopt in BW/BEx reports, where the CYTM and LYTM variables are populated using a CMOD exit.

 

The demo here is based on SAP HANA SPS 10.

 

Approach:

  • Create a base “reusable” calculation view having the sales table and the M_TIME_DIMENSION table joined on the required date field.
  • Create one base input parameter which accepts the user-entered calendar month. This is used further as an input for the scalar procedures to calculate CYTM, LYCM and LYTM.
  • Create a calculation view (you may treat it as a reporting view) having the above created base view, and apply the input parameters as filters on the CALMONTH field (a query sketch follows below).
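For orientation, this is roughly how the reporting view would be consumed; the package path, view name, and column names are hypothetical, and only the base parameter V_CM has to be supplied, since the other parameters are derived by the scalar procedures:

-- Query the reporting view for May 2016; derived parameters are filled in automatically
SELECT "PERIOD_CATEGORY", SUM("SALES") AS "SALES"
FROM "_SYS_BIC"."demo/CA_SUPERSTORE_SALES_REPORT"
     ('PLACEHOLDER' = ('$$V_CM$$', '201605'))
GROUP BY "PERIOD_CATEGORY";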

 

Development:

  • Create a base Calculation View CA_SUPERSTORE_SALES_REUSE.

1.png

  • Reuse this view in another calculation view and use a Union on the four nodes corresponding to the CM, CYTM, LYCM and LYTM projection nodes.

2.png

Union Node Mapping:

12A.png

  • Observe the filters on each projection node corresponding to the four period categories getting calculated under the union node.
    • CM: straightforward; CALMONTH is filtered on the user’s direct-input calendar month input parameter.

3.png

    • CYTM: CALMONTH is filtered based on the V_CYTM_FROM input parameter, derived using the scalar procedure that takes V_CM as the input.

4A.png

V_CYTM_FROM input parameter definition and its mapping:

5.png

6.png

The code snippet evaluating CYTM_FROM is:

CREATEPROCEDURE"_SYS_BIC"."PR_SUPERSTORE_SALES_CYTM_SCALAR"(ININ_CALMONTHVARCHAR(6), OUTOUT_CALMONTHVARCHAR(6))

LANGUAGESQLSCRIPT

SQLSECURITYDEFINER

AS

BEGIN

  DECLAREV_CALMONTHVARCHAR(6);

  DECLAREV_YEARVARCHAR(4);

  V_YEAR := LEFT(IN_CALMONTH,4);

  V_CALMONTH := CONCAT(:V_YEAR,'01');

  SELECTV_CALMONTHINTOOUT_CALMONTHFROMDUMMY;

END;
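A quick way to verify the procedure from a SQL console is to call it directly with a placeholder for the OUT parameter (for input 201605 it should return 201601):

CALL "_SYS_BIC"."PR_SUPERSTORE_SALES_CYTM_SCALAR"('201605', ?);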

 

    • LYCM: Filter details

7A.png

Input Parameter definition and mapping details:

8A.png

Code snippet for LYCM is:

CREATEPROCEDURE"_SYS_BIC"."PR_SUPERSTORE_SALES_LYCM_SCALAR"(ININ_CALMONTHVARCHAR(6), OUTOUT_CALMONTHVARCHAR(6))

LANGUAGESQLSCRIPT

SQLSECURITYDEFINER

AS

BEGIN

  DECLAREV_CALMONTHVARCHAR(6);

  DECLAREV_MONTHVARCHAR(2);

  DECLAREV_YEARVARCHAR(4);

  V_YEAR := LEFT(IN_CALMONTH,4);

  V_YEAR := V_YEAR - 1;

  V_MONTH := RIGHT(IN_CALMONTH,2); 

  V_CALMONTH := CONCAT(:V_YEAR,:V_MONTH);

  SELECTV_CALMONTHINTOOUT_CALMONTHFROMDUMMY;

END;

 

    • LYTM:

9.png

Input Parameter definition and mapping details: the first screenshot gives the 'From' value corresponding to the last year, and the second one gives the 'To' value.

10.png

11.png

Code snippet for LYTM_FROM:

CREATEPROCEDURE"_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_FROM_SCALAR"(ININ_CALMONTHVARCHAR(6), OUTOUT_CALMONTHVARCHAR(6))

LANGUAGESQLSCRIPT

SQLSECURITYDEFINER

AS

BEGIN

  DECLAREV_CALMONTH_FROMVARCHAR(6);

  DECLAREV_YEARVARCHAR(4);

  V_YEAR := LEFT(IN_CALMONTH,4);

  V_YEAR := V_YEAR - 1;

  V_CALMONTH_FROM := CONCAT(:V_YEAR,'01');

  SELECTV_CALMONTH_FROMINTOOUT_CALMONTHFROMDUMMY;

END;

Code Snippet for LYTM_TO:

CREATEPROCEDURE"_SYS_BIC"."PR_SUPERSTORE_SALES_LYTM_TO_SCALAR"(ININ_CALMONTHVARCHAR(6), OUTOUT_CALMONTHVARCHAR(6))

LANGUAGESQLSCRIPT

SQLSECURITYDEFINER

AS

BEGIN

  DECLAREV_CALMONTH_TOVARCHAR(6);

  DECLAREV_YEARVARCHAR(4);

  DECLAREV_MONTHVARCHAR(2);

  V_YEAR := LEFT(IN_CALMONTH,4);

  V_YEAR := V_YEAR - 1;

  V_MONTH := RIGHT(IN_CALMONTH,2);

  V_CALMONTH_TO := CONCAT(:V_YEAR,:V_MONTH);

  SELECTV_CALMONTH_TOINTOOUT_CALMONTHFROMDUMMY;

END;

 

  • And finally, the output:

13.png

  • The PlanViz: looking at the plan visualization, we can observe the various filters in action.

14.png

Timelines are here:

14A.png

I would like to thank deepak hp for providing the tip to fix the procedure issue; you may refer here for details.

 

 

- Prasad A V

SAP HANA database interactive terminal (hdbsql) - by the SAP HANA Academy


Introduction

 

At the SAP HANA Academy we are currently updating our tutorial videos on the topic of SAP HANA administration for the latest support package stack SPS 11. You can find the full playlist here: SAP HANA Administration - YouTube

 

One of the topics that we have added is the SAP HANA database interactive terminal, or hdbsql as it is mostly known. You can also watch the videos in a dedicated playlist: SAP HANA database interactive terminal - YouTube

 

Overview

 

Hdbsql is a command line tool for executing commands on SAP HANA databases. No Fiori, no cloud, not even colourful Windows, just plain old terminal style command line. It is included with each server installation and also with each SAP HANA client, which is its strongest asset: it is always there for you to rely on.

 

It is called the database interactive terminal, but you can also execute commands non-interactively, as a script; that is probably even the most common use case.

 

The tool is documented on the SAP Help Portal in the last chapter of the Administration Guide: SAP HANA HDBSQL (Command Line Reference) - SAP HANA Administration Guide - SAP Library. The chapter is only a handful of pages and serves as a reference. This means, for example, that you will be informed about the command line option

-S <sqlmode>           either "INTERNAL" or "SAPR3"

 

but you will not be informed about the use case for SQL mode "SAPR3". For this reason, using hdbsql can be a bit tricky, and if you search the SCN forums, you will find that at times the tool causes some confusion and bewilderment.

 

Getting Started

 

The first tutorial video shows how to get started with the SAP HANA database interactive terminal.

 

There are hdbsql command line options - for example, for help use option -h from the Linux shell -

hdbadm> hdbsql -h

 

and there are hdbsql commands - for example, again, for help use command \h or \? from the hdbsql prompt.

 

hdbsql> \h

 

As with the h for help, commands and command line options sometimes use the same letter, but mostly they do not; note also that the command line options are case sensitive. The table below shows some examples.

 

Usage             Option                      Command
Help screen       -h                          \h
Connect                                       \c
Disconnect                                    \di
Exit                                          \q
Execute                                       \g or ;
Status                                        \s
Input file        -I (uppercase i) <file>     \i <file>
Output file       -o <file>                   \o <file>
Multiline mode    -m                          \mu
SQL mode          -S                          \m
Autocommit        -z                          \a
Separator         -c <separator>
 

 

Installing a License File

 

One good use case scenario for hdbsql is installing a license file. Say you have just installed an SAP HANA system on a slim SUSE or Red Hat Linux server without any (X Windows) graphical environment. There is no Windows client at hand either. How to proceed? Simple! The interactive terminal and the command

 

SET SYSTEM LICENSE '<license file contents>'
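As a sketch of the non-interactive variant: save the statement, with the full license text between the quotes, to a file and execute it with hdbsql (the file name is arbitrary; -m enables multiline mode as per the option table above):

hdbadm> hdbsql -u SYSTEM -i 00 -m -I /tmp/install_license.sql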

 

In the tutorial video below, I show you how this is done:

 

 

Secure User Store

 

The next video explains how to use the secure user store. This allows you to store SAP HANA connection information, including user passwords, securely on clients. In this way, client applications can connect to SAP HANA without the user having to enter host name or logon credentials.

 

To connect to the SAP HANA database you need to provide username and password

 

hdbadm> hdbsql -u DBA -p P@$w0rd!

 

or

hdbsql> \c -u DBA -p P@$w0rd!

 

The SAP HANA connection defaults to <localhost> with port 30015. So the connect strings above will only work if you execute them on the SAP HANA server with instance number 00. For the other 99.99% of the cases, you will need to provide the "node" information with -n or \n (or just the instance number when on "localhost"). In case of multitenant databases, add -d or \d with database name.

 

hdbadm> hdbsql -u DBA -p P@$w0rd! -n <host>[:<port>] -i <instance number>

or

 

hdbsql> \c -u DBA -p P@$w0rd! \n <host>[:<port>] \i <instance number>

 

If you just provide the username and leave out the password (option or command), you will be prompted to enter it interactively. This is a good practice, of course, as you do not want the password recorded in the history file of the Linux shell or displayed on the screen. However, for any batch job or Windows service connection, this will not suffice and typically you will want to work with the secure user store.

 

In the secure user store the connection string (host:port + user + password) is safely stored with the password encrypted. By default, only the file owner can access this file and you can use this to connect to the SAP HANA database from a script file (backup from cron) or as Windows service (without interactive logon).

 

With a key in the secure user store, you can now connect using command \U or option -U

hdbadm> hdbsql -U KEYNAME

or

 

hdbsql> \c -U KEYNAME

 

The KEYNAME is case sensitive and is the name or alias given to a particular connection string (host:port user password). You can store as many strings as needed. As mentioned, the password is stored encrypted and cannot be extracted in any way.

 

The tool to manage keys in the secure user store is called hdbuserstore and is documented in the Security Guide: hdbuserstore Commands - SAP HANA Security Guide - SAP Library
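As a sketch, creating and inspecting a key looks like this (KEYNAME, host, and credentials are placeholders):

hdbadm> hdbuserstore SET KEYNAME "myhost:30015" DBA <password>
hdbadm> hdbuserstore LIST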

 

 

Working with Input Files

 

When working with hdbsql it is often convenient to use input or script files. This avoids typo errors and allows for scheduled execution. However, there is another good reason to use an input file. Say you want to perform a select on a table with a namespace between slashes:

 

SQL> SELECT count(*) from /ABCD/MYTABLE

 

On the command line prompt this would cause an issue, as the slash (/) is considered a special character by the shell. So you would have to escape it with a backslash

 

hdbadm> hdbsql -U KEYNAME "SELECT count(*) from \/ABCD\/MYTABLE"

 

Unless you like ASCII art you probably will get tired of this very soon. Best to use an input file here.

 

Another use case is when you want to execute a procedure that contains declarations and statements terminated with a semi-colon (;). The semi-colon is also the default separator for hdbsql so it will start to execute your procedure after the first declaration. How to solve this?

 

DO
BEGIN
  DECLARE starttime timestamp;
  DECLARE endtime timestamp;
  starttime := CURRENT_UTCTIMESTAMP;

  select count(*) FROM sflight.sairport;

  endtime := CURRENT_UTCTIMESTAMP;
  SELECT
    :starttime AS "Start Time"
    , :endtime AS "End Time"
    , NANO100_BETWEEN(:starttime, :endtime)/10000 AS "Elapsed Time (ms)" FROM DUMMY;
END
#

 

Again, use an input file. End your procedure with another character, for example "#", and then start hdbsql with the command line option -c "#" and -mu for multiline mode. By default, hdbsql runs in single line mode, which means that it will send the contents of the buffer to the SAP HANA server when you hit the Return key.

hdbadm> hdbsql -U KEYNAME -I /tmp/inputfile.sql -c "#" -mu

 

In the video below, I show you some examples of working with input files:

 

 

 

SQLMode = SAPR3

 

But what about SQLMode? In the hdbsql command line reference, a number of commands and command line options are listed that are a little less obvious.

 

Fortunately, SAP HANA expert and SCN Moderator Lars Breddeman was willing to share his knowledge with me on the more obscure parameters and options. Thanks Lars!

 

sqlmode (internal / SAPR3) - The SQLMODE changes how semantics (NULLs, empty strings, trailing spaces, etc.) are handled. This is typically used with the SAP NetWeaver Database Shared Library (DBSL); see DB Connect Architecture - Modeling - SAP Library. Possible use cases are development and support.

 

auto commit - Same functionality as in the SAP HANA Studio: if you ever want to execute more than one command in the same transaction, you need to switch from autocommit ON to OFF and manually COMMIT or ROLLBACK. Also: all things concerning transaction management, like locking or MVCC, can only be demonstrated if you don’t immediately commit.

 

escape - Escape ON makes sure that all values are enclosed in quotation marks (default setting). If you want to export the output to a file and copy the contents of that file into Microsoft Excel, this causes all values to be interpreted as text. If you want your numbers to be numbers and dates to be dates, set escape OFF.

 

prepared statements - Prepared statements behave a little bit differently internally; especially for one-off statements, simply executing them without explicit prior preparation and invocation via parameters might be beneficial. Also, MDX statements are not executable via prepared statements. So, if you want to quickly run an MDX statement from hdbsql you’d have to switch off the usage of prepared statements. This is also how it is done in SAP HANA Studio.

 

saml-assertion - New with SAP HANA SPS 10 is that you can now authenticate using a SAML assertion. Hdbsql is a good tool to test SAML implementations.

 

SSL options - This allows for encrypted communication with the SAP HANA database. This needs to be configured on the server side first. The client parameters really deal with the certificate storage on the client side.

 

 

More Information

 

SAP HANA Academy Playlists (YouTube)

SAP HANA Administration - YouTube

SAP HANA database interactive terminal - YouTube

 

Product documentation

SAP HANA HDBSQL (Command Line Reference) - SAP HANA Administration Guide - SAP Library

Secure User Store (hdbuserstore) - SAP HANA Security Guide - SAP Library

hdbuserstore Commands - SAP HANA Security Guide - SAP Library

Install a Permanent License - SAP HANA Administration Guide - SAP Library

 

SCN Blogs

Backup and Recovery: Scheduling Scripts - by the SAP HANA Academy

SAP HANA Academy: Backup and Recovery - Backup Configuration Files

SAP HANA Client Software, Different Ways to Set the Connectivity Data

Primeros pasos con SAP HANA HDBSQL

 

 

Thank you for watching

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

How to screw up your HANA database in 5 seconds


I like the SAP HANA database. I really do. Writing demanding SQL statements has never been so much fun since I started throwing them at SAP HANA. And the database simply answers, really quickly. While the database itself works fine, from time to time I stumble upon some strange issues around HANA administration where I notice that SAP HANA is still a quite new database. In certain cases the database is in real danger, so I want to share with you a perfidious trap.

 

You remember that starting with SAP HANA revision 93, a revision update automatically changed the database from the standalone statisticsserver to the embedded statisticsserver? You could in theory keep the standalone statisticsserver, but I believe no one actually did this. So did you ever wonder why the systemOverview.py script provides this irritating warning?

statisticsserver.png

I double-checked this on revision 111. The warning is still there. Now you could say this is a harmless warning and should be ignored; since SPS 09, a standalone statisticsserver is against the clear recommendation from SAP. However, what if some less experienced HANA administrator sees this message, takes it seriously and tries to start the standalone statisticsserver anyway?

 

TL;DR: DO NOT DO THIS!

 

First of all, SAP has not yet removed the hdbstatisticsserver binary from the IMDB_SERVER.SAR packages. It is still available, even in revision 112.

statisticsserver2.png

However, it should not be possible to run it if you use the embedded statisticsserver, right? Starting the standalone statisticsserver in this scenario should result in an error message and no harm be done? Well, not quite. So far the topology for my HANA instance looks like this:

m_services1.png

And now I screw up my HANA database via one simple command:

statisticsserver3.png

Oh no! What have I done? When checking the trace file of this new process, it detects the embedded statistics server and disables itself, but only after the topology was already botched up.

 

[31147]{-1}[-1/-1] 2016-03-22 10:16:36.813528 i StatsServ    StatisticsServerStarter.cpp(00081) : new StatisticsServer active. Disabling myself...
[31147]{-1}[-1/-1] 2016-03-22 10:16:36.834024 i StatsServ    StatisticsServerStarter.cpp(00096) : new StatisticsServer active. Disabling myself DONE.
[31147]{-1}[-1/-1] 2016-03-22 10:16:36.836820 i assign       TREXIndexServer.cpp(01793) : assign to volume 5 finished

 

 

 

So I stop the ominous process asap:

statisticsserver5.png

However, in M_SERVICES I still see the "new" service! This is not nice. How do I clean up this mess?

m_services2.png

m_volumes.png
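A quick way to inspect what the screenshots above show is to query the standard monitoring views directly; as a sketch:

-- Which services the topology knows about, and whether they are active
SELECT HOST, PORT, SERVICE_NAME, ACTIVE_STATUS FROM SYS.M_SERVICES;

-- Which volume is assigned to which service (the bogus volume 5 shows up here)
SELECT HOST, PORT, SERVICE_NAME, VOLUME_ID FROM SYS.M_VOLUMES;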

 

This is not just a cosmetic issue. Important systems are protected by HANA system replication. Now this new (but inactive) service breaks the system replication! This is really bad:

replication1.png

 

 

How can we fix the system replication? Let's try the obvious way on the secondary site:

HDB stop

hdbnsutil -sr_unregister

hdbnsutil -sr_register --name=site2 --mode=sync --remoteHost=eahhan01 --remoteInstance=10

HDB start

 

The procedure seems to work. Unfortunately this does not really reinitialize the replication, because if I try a takeover then I get this error:

takeover.png

 

I cannot even perform a backup on the primary site, because that stupid statisticsserver is not active. Dang!

 

If you have been curious and screwed up your crash&burn instance, then you can try to fix the situation with the following commands. Proceed at your own risk:

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','host','eahhan01') UNSET ('statisticsserver','instances') WITH RECONFIGURE

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini','system') UNSET ('/host/eahhan01','statisticsserver') WITH RECONFIGURE

ALTER SYSTEM ALTER CONFIGURATION ('topology.ini','system') UNSET ('/volumes','5') WITH RECONFIGURE

For more details, have a look at SAP notes 1697613, 2222249, 1950221.

 

Now the Python script shows that the system replication looks fine again:

replication2.png

 

IMPORTANT: Never rely solely on the output of this check script or on what you see in the HANA studio regarding system replication. I recommend testing the takeover after every change to the topology. It can happen that all lights are green and the takeover nevertheless fails after some topology change.

 

 

Hopefully SAP will remove the false warning about a missing statisticsserver in script systemOverview.py soon. Given their strong commitment to backwards compatibility for SAP HANA, I doubt they will remove the standalone statisticsserver altogether.

High Availability and Disaster Recovery with the SAP HANA Platform on openSAP


Companies have become more dependent upon their IT infrastructure and systems to perform important business tasks throughout their business days as well as outside business hours. It is critical that their IT systems do not fail and are available 24/7 to all end users. With High Availability and Disaster Recovery with the SAP HANA Platform, companies can be assured that their systems are always available. Learners are invited to join the latest openSAP course, High Availability and Disaster Recovery with the SAP HANA Platform, in the SAP HANA Core Knowledge Series, starting this May.

 

High availability (HA) refers to the system’s ability to remain accessible in the event of a system component failure, avoiding downtime and ensuring the system is always available. Disaster recovery ensures that, in the unlikely event of a system failure, the system will not lose any data and will be restored fully to its original state. Both of these capabilities are included in the SAP HANA Platform, with multiple options to select from a recovery time objective (RTO) and recovery point objective (RPO) perspective.

 

The course, High Availability and Disaster Recovery with the SAP HANA Platform, will provide a general overview of SAP HANA platform high availability and disaster recovery features. The course will also expand on the capabilities and demo videos will be provided to showcase the configurations and setup required for operations in a data center environment. The course will run over a three week period and is aimed at enterprise architects, system administrators, and database administrators. Learners taking part in this course should have a basic knowledge of database and software/hardware systems.

 

Enrollment is now open for High Availability and Disaster Recovery with the SAP HANA Platform to everyone interested in learning about this topic; all you need to sign up is a valid email address.

 

As with all openSAP courses, registration, enrollment, and learning content are provided free of charge.

Other upcoming and current courses include:

Build Your Own SAP Fiori App in the Cloud – 2016 Edition

Software Development on SAP HANA (Delta SPS 11)

Implementation of SAP S/4HANA

Implementation Made Simple for SAP SuccessFactors Solutions

Digital Transformation Across the Extended Supply Chain

SAP Business Warehouse powered by SAP HANA (Update Q2/2016)

Sustainability Through Digital Transformation

SAP HANA Cloud Platform Essentials (English)

SAP HANA Cloud Platform Essentials (Japanese)


[SAP HANA Academy] Learn How to Use Core Data Services in SAP S/4 HANA


In a five part video series the SAP HANA Academy's Tahir Hussain Babar (Bob) walks through how to set up and use Core Data Services in SAP S/4 HANA.


Introduction to CDS - Creating a CAL Instance and Creating an ERP User


In the first video of the series Bob details how to get a S/4 HANA instance and how to create the S/4 HANA user that's necessary for executing the tasks performed in the series.

Screen Shot 2016-03-21 at 11.05.25 AM.png

There are a few prerequisites before you can start this series. First, it's assumed that you already have a SAP Cloud Appliance Library account and that you have instantiated a solution with an image called S/4 HANA, on premise edition - Fully Activated. When you click on create instance you are creating a few machines: SAP BusinessObjects BI Platform 4.1 server, SAP ERP 607 server on SAP HANA SPS09, and SAP HANA Windows client. Essentially, you're building three separate servers.

Screen Shot 2016-03-21 at 11.30.09 AM.png

Once you have created this instance, if you move your cursor over the solution title, there is a link to a Getting Started document. It's assumed that you've followed all of the steps detailed in the document in order to instantiate your own instance of S/4 HANA.


When you log into your Windows instance there will be a pair of tools that you will utilize. One is Eclipse, which is used for development and where you will build the CDSs. Please make sure you're using the latest version of Eclipse. You should have the HANA development tools, including the ABAP perspective, updated on a regular basis. The other tool is the SAP Logon, which is used to access the SAP ERP system.


Open Eclipse and then click on Window > Perspective > Open Perspective > Other and choose ABAP to open a new ABAP perspective.


To create a new ERP user with all rights, first open the SAP Logon and log into client 100, which is a preconfigured S4 client, with the default user and password. Next, run the command /nsu01 to create a new user.

Screen Shot 2016-03-21 at 4.54.04 PM.png

Bob names his user SHA and then clicks the create button. On the Maintain Users screen you must provide a last name in the Address tab and change the default password in the Logon Data tab. Also, in the Profiles tab, you must add SAP_ALL (all SAP System authorizations) and SAP_NEW (new authorization checks). This essentially creates a copy of the default client 100 user so you will have enough roles and rights to perform the tasks carried out later in the series. Click on the save icon to finish creating the new user.

Screen Shot 2016-03-21 at 5.05.35 PM.png

Next, click on the orange log off button and log in as the new user. As it's the first time you're logging in as the new user you will be prompted to change your password.


How to Create an ABAP Project and Load Demo Data

 

Below in the second video of the series Bob details how to create a new ABAP project within Eclipse and how to load demo data.

Screen Shot 2016-03-21 at 5.09.12 PM.png

In the ABAP perspective in Eclipse right click on the projects window and select New > Project > ABAP > ABAP Project and then click Next. Choose to define the system connection manually. Enter your System ID - Bob's is S4H. The Application Server is the IP address of your ERP system. Also, enter your Instance ID (Bob's is 00) before clicking Next. Enter your Client (100), user name, password and preferred language code (EN for English) before clicking on finish. Now you have created an ABAP project within your ERP system using your recently created user and connection.


Drilling down into Favorite Packages will show the temp package ($TMP-SHA) that has been created. S/4 HANA is installed, so there are already a ton of existing CDS views. Scroll down into the APL package and search for and open the ODATA_MM folder. Your CDSs are stored in a folder in the ODATA_MM_ANALYTICS package. Two subfolders exist: Access Controls, which deals with security, and Data Definitions, where you build CDSs.

Screen Shot 2016-03-21 at 9.43.45 PM.png

The CDS highlighted above, C_OVERDUPO, is the CDS behind the overdue purchase orders tile used in the SAP Fiori launchpad.


Back in the SAP GUI log in as the new user you recently created. To load some demo data run the command /nse38. Next, choose SAPBC_DATA_GENERATOR. You will be using S-Flight, which creates a series of tables and BAPIs that enable you to test your ERP system using an airline's booking system's flight data. Next, hit the execute button and select the Standard Data Record option before hitting execute again.

Screen Shot 2016-03-21 at 10.02.50 PM.png

To see the data run the command /nse16. Enter SCARR for the table name and then click on the object at the top left to see the data.

Screen Shot 2016-03-21 at 10.04.49 PM.png

How to Create Interface Views

Screen Shot 2016-03-21 at 10.06.28 PM.png

Bob shows how to create basic, aka interface, views in the series' third video. You will be building a CDS view on top of the data contained in the Demo Data table called SCARR. The table lists the various Airline carriers, the carrier ID and the currency code. You will be exposing this data via OData as a gentle introduction into the CDS concept.


CDSs aren't written in ABAP, but the objects will exist in the ABAP repository. Essentially, a CDS is a combination of Open SQL and a list of various annotations. The annotations further define the view as well as all of the data elements within that CDS.


In Eclipse right click on the empty package in the ABAP project and select New > Other ABAP Repository Object > Core Data Services. The two options available are DCL Source and DDL Source. DCL Source is used for security, enabling you to implement role-level security. Bob opts for the other option and selects DDL Source before clicking Next.


Bob enters Airline, private view, VDM interface view as his description. However, when you build CDS views they will share a namespace and therefore should not interfere with productive or delivered views. Basically, you must utilize a naming convention.


So Bob names his CDS view ZXSHI_AIRLINE. ZX means it's a development workspace. SH is the first two letters of his user name. I means that it's a basic view. A basic view hits the raw data in your tables. In between there will be a series of other views, with consumption views at the top; consumption views are what get exposed to analytics or OData.

Screen Shot 2016-03-21 at 10.32.34 PM.png

After clicking on Next, Bob will select his Transport Requests. Transport Requests enable you to move content from system to system and can be used for productive CDS views. However, as these are local CDS views, you won't need any Transport Requests. Clicking Next brings you to the list of Templates. These cover the most common use cases such as joins between different tables or associations. Click Finish to create the CDS view.


Several default annotations are created with the CDS. First, change the define view name in line 5 to match ZXSHI_AIRLINE. Next, hitting control+space next to as select from will bring up code completion so you can find the scarr data source. The bottom left hand side shows an outline for the query that Bob is building up.

Screen Shot 2016-03-21 at 10.57.04 PM.png

On line 6 you will need to select the column. Press control+space and choose scarr.carrid and then add as Airline. Also, add scarr.currcode as AirlineLocalCurrency and scarr.url as AirlineURL.

Screen Shot 2016-03-22 at 9.26.11 AM.png

The first annotation is @AbapCatalog.sqlViewName and will, essentially, be the same name as the view but without any underscores. So enter ZXSHIAIRLINE. To check the view click on the save button and then drill into the view located in the Data Definitions folder in the CDS folder. Activate the package and then open a data preview on the ZXSHI_AIRLINE CDS to see the list of airlines, currencies and URLs.

Screen Shot 2016-03-22 at 9.51.35 AM.png

Another annotation that must be changed is @EndUserText.label on line 4. You should replace the existing technical term with just Airline as it is more readable. This text label is exposed on objects within your OData services. Next, add an annotation to signal that this is a basic/interface view by typing @VDM.viewType: #BASIC on a new line.

Screen Shot 2016-03-22 at 10.02.15 AM.png

Basic views are private as the end user never accesses the system directly. Another type of view is a Composite, which is a basis underlying view for Analytics. It is used to combine different views via associations. The Consumption view is an end user view, which is accessible through an analytical front-end or is used for publication to OData.


Bob adds an additional annotation, @Analytics.dataCategory: #DIMENSION, on another line so analytics can be used with these views. This indicates that it will be a dimension type table.

Screen Shot 2016-03-23 at 2.01.41 PM.png

There is also a different set of annotations that you will see inside a select statement. For example, the annotation @Semantics won't appear when you try to enter it with all of the other annotations at the top. However, it will appear and can be entered when you type it within the select statement at the bottom of the CDS. Bob adds the annotation @Semantics.currencyCode: true above his scarr.currcode line to indicate that it is a currency code. Bob also adds @Semantics.url to indicate that the line below is a URL.

Screen Shot 2016-03-25 at 10.06.24 AM.png

You must define a key if you want to expose the CDS as OData. The carrier ID will be the primary key, so Bob types key in front of the scarr.carrid as Airline line.


When you first created this DDL, there was the other option of creating a DCL instead. A DCL involves access controls, as you can define which user will have access to which data in a specific table. Currently there are no DCLs, so the annotation @AccessControl.authorizationCheck: #CHECK needs to be changed to @AccessControl.authorizationCheck: #NOT_REQUIRED. Therefore, in this example there will be no role-level security.

Screen Shot 2016-03-25 at 12.47.23 PM.png

Full Syntax - Interface View

-----------------------------------------------------------------------------------------------------------------------------------------

@AbapCatalog.sqlViewName: 'ZXSHIAIRLINE'

@AbapCatalog.compiler.compareFilter: true

@AccessControl.authorizationCheck: #NOT_REQUIRED

@EndUserText.label: 'Airline'

@VDM.viewType: #BASIC

@Analytics.dataCategory: #DIMENSION

define view ZxshI_Airline as select from scarr {

key scarr.carrid as Airline,

@Semantics.currencyCode: true

scarr.currcode as AirlineLocalCurrency,

@Semantics.url:true

scarr.url as AirlineURL

}

-----------------------------------------------------------------------------------------------------------------------------------------

 

Once the CDS looks like the syntax above click on the save button. Activate the CDS and then open a data preview on it to verify there is data in it.


How to Create a Consumption View

Screen Shot 2016-03-25 at 12.51.00 PM.png

In the fourth video in the series Bob walks through how to create a consumption view. Normally there would be many interface views, so there would be a full breadth of dimensions and facts available for the analytics tools. You can use associations to join data from multiple basic views together in a composite view. In this simple demo Bob skips building a composite view and chooses to build a consumption view, which he will expose to OData.


Bob right clicks on his data definitions folders and chooses to build a new DDL Source. Bob names his view ZXSHC_AIRLINEQUERY. ZX is for development view, SH are the initials for Bob's user and C is for consumption view. For the description Bob enters Airline query, public view, VDM consumption view and then clicks next. Bob leaves the default for Transport Request and chooses a basic view without any associations or joins before clicking finish to create his consumption view.

Screen Shot 2016-03-25 at 1.57.33 PM.png

First, Bob changes line 6 to define view zxshc_Airlinequery as select from zxshI_Airline {. Next, Bob modifies his sqlViewName in line 1 to 'ZXSHCAIRLINEQ'. Keep in mind that this view name can only be 16 characters long. Then, Bob modifies the annotation in line 4 to read @EndUserText.label: 'Airline'. After, Bob marks that it's a consumption view by adding a new annotation, @VDM.viewType: #CONSUMPTION, underneath.


Next, Bob selects the columns (Airline, AirlineLocalCurrency and AirlineURL) by pressing control+space on line 7 and chooses Insert all elements - Template. Finally, you must add an annotation beneath the consumption view annotation to expose the view as OData. Bob enters @OData.publish: true.

Screen Shot 2016-03-29 at 9.21.13 AM.png

Save and then activate the consumption view CDS. After activation, you will see that an error has occurred. Hovering the cursor over the caution marker will inform you that there is a missing key element in the ZXSHCAIRLINEQ view.

Screen Shot 2016-03-29 at 10.31.27 AM.png

To fix it, add key at the beginning of line 8 before ZxshI_Airline.Airline. Then save and activate.


Now if you hover the cursor over the OData line it will inform you that the activation needs to be done manually through the /IWFND/MAINT_SERVICE command in ERP. Finally, Bob changes the annotation for the authorizationCheck to #NOT_REQUIRED.


Full Syntax - Consumption View

-----------------------------------------------------------------------------------------------------------------------------------------

@AbapCatalog.sqlViewName: 'ZXSHCAIRLINEQ'

@AbapCatalog.compiler.compareFilter: true

@AccessControl.authorizationCheck: #NOT_REQUIRED

@EndUserText.label: 'Airline'

@VDM.viewType: #CONSUMPTION

@OData.publish: true

define view zxshc_Airlinequery as select from ZxshI_Airline {

key ZxshI_Airline.Airline,

ZxshI_Airline.AirlineLocalCurrency,

ZxshI_Airline.AirlineURL

}

-----------------------------------------------------------------------------------------------------------------------------------------


Creating OData Services

Screen Shot 2016-03-29 at 11.23.16 AM.png

In the fifth and final video in the series Bob shows how to create OData services from the interface and consumption views he recently created based on CDSs from S/4 HANA.


Bob will now have to execute the command /IWFND/MAINT_SERVICE within his ERP system to register the OData service. In the SAP GUI, go to the top level of the ERP and enter the command /IWFND/MAINT_SERVICE. If you get an error informing you to log in but you are already logged in as your user, then place an n in front of the command, so it reads /nIWFND/MAINT_SERVICE, before you press enter.


You will register the CDS consumption view you built in the ABAP repository and expose it as OData on the Activate and Maintain Services page. The service that you must add is listed in Eclipse when you click on the marker next to the OData annotation in your consumption view. The service is called ZXSHC_AIRLINEQUERY_CDS. Copy it.

Screen Shot 2016-03-29 at 2.54.25 PM.png

Back on the Activate and Maintain Services page, click on the Add Service button. For the System Alias choose LOCAL_PGW, as it is the S/4 HANA trusted service. Paste your copied service in as the Technical Service Name. Then, hit the Get Services button on the top left-hand side. Next, select ZXSHC_AIRLINEQUERY_CDS as the backend service and click on the Add Selected Services button.

Screen Shot 2016-03-29 at 2.59.50 PM.png

The only item that needs to be addressed in the Add Service window that pops up is the Package Assignment under the Creation Information header. Clicking on Local Object will link the package in which you created the CDS in Eclipse ($TMP) to the service in your ERP system. Then, click the execute button and you will get the message that the service was created and its metadata was loaded successfully.


To verify, go back into the ABAP perspective in Eclipse and click on the check ABAP development object button. It will now display a message that an OData service has been generated.

Screen Shot 2016-03-29 at 3.03.27 PM.png

If you click on the ODATA-Service link it will open a new window in your default browser and request that you log in with the appropriate user. Even if the URL ends with sap-ds-debug=true, your service is correctly exposed if it looks like the one displayed below.

Screen Shot 2016-03-29 at 3.07.44 PM.png

If you change the extension to $metadata then you can view the OData service's metadata. It shows the names of all of the columns and queries. If you append the query name to the end of the URL you can see the data for each of the 18 airlines.
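To make the URL pattern concrete, the three checks look roughly like this (host and port omitted, and the entity set name in the last URL is an assumption based on the generated SQL view name):

/sap/opu/odata/sap/ZXSHC_AIRLINEQUERY_CDS?sap-ds-debug=true   <- service document with debug rendering
/sap/opu/odata/sap/ZXSHC_AIRLINEQUERY_CDS/$metadata           <- metadata with columns and entity sets
/sap/opu/odata/sap/ZXSHC_AIRLINEQUERY_CDS/ZXSHCAIRLINEQ       <- data for the 18 airlines (entity set name assumed)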

Screen Shot 2016-03-29 at 3.11.04 PM.png

If you want to learn more about OData syntax please visit the documentation page at odata.org.


For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

SAML SSO setup and configuration


Hi All,

 

My name is Man-Ted Chan and I'm from the SAP HANA Product Support team. This post is just to bring some attention to our SAML SSO setup and configuration documents on our troubleshooting wiki.

 

The wiki can be found here

 

SAP HANA and In-Memory Computing Troubleshooting Guide - Technology Troubleshooting Guide - SCN Wiki

 

Direct links:

 

SAML SSO for BI Platform to HANA V 1 0 0.pdf

SAML SSO for Analysis for Office to HANA V 1 0 0.pdf

 

 

Please note that we are looking to change these pdf files to wiki pages.

 

Also, feel free to let us know in the comments what other types of docs you would like to see.

 

 

Thanks,

 


Man-Ted

SAP HANA Multitenant Database encryption with change of encryption root key for SYSTEMDB


Why This Blog:

 

In the course of my work, I am currently, amongst other things, setting up an SAP HANA system running multiple database tenants with a high level of security.

 

In this case, the measure providing this high level of security is to enable Data Volume Encryption on the HANA system.

This is the first time I have enabled Data Volume Encryption on an SAP HANA multitenant database.

 

After we executed the steps described in the SAP HANA Administration Guide for enabling Data Volume Encryption, alert 57 was raised in our SYSTEMDB reporting "Inconsistent SSFS". At this point our tenant DB was working without issues, including backup. For the system DB we were experiencing all the symptoms reported in SAP Note 2097613.

 

2016-04-04_09-58-11.png

 

Supporting Documentation:

 

SAP HANA Security Guide

Section:

9 Data Storage Security in SAP HANA

 

SAP HANA Administration Guide

Sections:

4.4.1.2 Enable Data Volume Encryption Without System Reinstallation

4.4.2 Data Volume Encryption in Multitenant Database Containers

 

2097613 - Database is running with inconsistent Secure Storage File System (SSFS)

 

Assumption:

As part of the procedure you have the option to change the encryption key. You have decided to change the encryption key of your SYSTEMDB.

You have just converted your single-node SAP HANA system to MDC. There is a SYSTEMDB and a single tenant running in your system.

You have fully encrypted both the SYSTEMDB and the tenant DB.

You have changed the root encryption key of your tenant DB; therefore you are not able to do the restore of the SSFS described in the SAP Note above, which would render the tenant DB unusable.



Solution:


Resetting the persistency information of the SYSTEMDB in the SSFS


Log in to your SAP HANA system as the <sid>adm user and execute the following commands:

cdexe

./hdbcons

\e hdbnameserver <instance no.> <SAP HANA system name> - this connects to the nameserver of the SYSTEMDB

crypto ssfs resetConsistency - this command resets the consistency information in the SSFS, activating the new key
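
If you want to double-check the result afterwards, a quick sanity check (a sketch; run it against the SYSTEMDB) is to query the persistence encryption monitoring view:

SELECT HOST, PORT, ENCRYPTION_ACTIVE
  FROM M_PERSISTENCE_ENCRYPTION_STATUS;   -- ENCRYPTION_ACTIVE should be TRUE for the encrypted services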

SAP S/4HANA: How will you get there?


SCN - SAP S4HANA_Ramesh.png

SAP S/4HANA recently celebrated its first birthday, and, as all proud relatives are apt to do, I thought back on the accomplishments and learnings that this first year has brought us. First, as SAP's greatest innovation since SAP R/3, the new suite has seen a remarkable rollout, with more customer interest and coverage by analysts and the media than anyone saw coming in this short period. I look forward to SAPPHIRE NOW, this May 17-19, where I will share some of TCS' early customer successes as they migrate to SAP S/4HANA.

 

The last six months have been particularly busy for TCS and our SAP clients as we develop business cases demonstrating the benefits for customers to migrate to SAP S/4HANA, including: overall shrinking of their data footprint, lowering of development costs and reducing total cost of ownership. Here’s another benefit that is perhaps more subtle but very important for IT departments. Customers come to us with legacy systems that include massive customizations. We believe that perhaps as much as 70% of these customizations can be avoided with SAP S/4HANA, freeing up administrative and IT resources for higher uses. As the head of our TCS SAP Practice, Akhilesh Tiwari, recently shared on this blog, for all of these reasons and many more, we believe that SAP S/4HANA will be the big topic for discussion at this year’s SAPPHIRE NOW.

 

SAP S/4HANA: Making the business case

Let’s be clear: Transitioning an organization to SAP S/4HANA requires a considerable investment in effort, time, and money. Most SAP customers, I believe, know they will make the move at some point over the next one to five years. So the initial questions we often hear are around when should I start, how much will I pay and when do I start seeing the benefits?

Regardless of your starting point and systems, building a smart roadmap to migration is the first step to ensure the transition to SAP S/4HANA is smooth. TCS’ proven roadmaps help customers manage large, complex technology and business process transformations in a series of well-defined phases. We start by looking at a client’s business objectives and outlining a technology transition that gets them from their current state to full implementation in the required time period. But our guidance goes well beyond solving technology issues. We help our clients make the SAP S/4HANA business case for their organization. Stakeholders can see at any point along the timeline what costs will be involved, and the expected financial and business returns delivered as the system is implemented.

SAP S/4HANA Roadmap

SAP S/4HANA is proving extremely beneficial to our clients across industries who are challenged to build single balance sheets that accurately reflect multiple product lines. The power to do this is available for the first time with SAP S/4HANA. By giving an enterprise a real-time view into its on-the-ground financial condition, executives are empowered to make decisions about resource allocation much more quickly. The roadmap shows them how and when they will hit these milestones. In this way, TCS partners with customers to sell the project to financial stakeholders.

Our roadmaps also help customers take control of cost planning. We phase our implementation projects over years in order to spread out the cost. If a "big bang" deployment is too pricey, we can phase it over a series of shorter go-lives, breaking the cost into more bite-size chunks. We can also help the customer stage the implementation so that components with earlier payoffs are completed up front.

As I look back on this first year of SAP S/4HANA, I know that it cemented our commitment to help customers not only as technical partners but as partners in making the business case to their organizations. With more and more clients considering their migration plans, TCS is able to share industry-specific insights that help organizations meet their specific business objectives as they make the move to this powerful, next-generation business suite that will position their business for longer-term growth and agility. I look forward to exchanging ideas on how to make the move to SAP S/4HANA successful. Please share your comments here and let me know if you would like to meet during SAPPHIRE NOW in Orlando, May 17-19.

HANA CatEye! Experimental Project with NodeJS + MongoDB + RaspberryPI3


Header.png

Hello Everyone,

In my previous article, I explained BPC on HANA using HANA objects and their advantages. This time I want to write about NodeJS, which is going to be the primary JavaScript runtime in HANA XS with SAP HANA SP11. I have also developed a simple application using NodeJS, named HANA CatEye.

I'm still a beginner in NodeJS, but after digging into the technology, I found NodeJS very simple to learn and to develop applications with.

After I read that NodeJS will be included in SAP HANA SP11, I decided to learn more about it and initially planned to develop a basic "hello world" example. Later on I decided to build an application (HANA CatEye) which can benefit my future HANA projects.

One of my biggest problems during development was keeping track of code changes. Sometimes my colleagues or I need to roll back the code to a previous state due to a user's decision on calculation logic, bugs, or performance reasons. What I needed was a version control mechanism and a way to stay aware of any code change.
In HANA, there is already a standard version control mechanism and change management features, but I didn't find them easy or useful. Hence, the HANA CatEye application is designed to back up HANA development objects to a local DB, create versions for changed objects, and offer code version comparison to see what has changed between versions.

It is built on NodeJS and MongoDB and hosted on a Raspberry Pi 3, a microcomputer half the size of an iPhone 6 that costs around USD 35.

What is NodeJS ?

It is a JavaScript runtime that uses an event-driven, non-blocking I/O model, which makes it lightweight and efficient. Many of you may know that JavaScript runs in browsers, but NodeJS is an exception: it doesn't need a browser to run (thanks to the Google Chrome V8 engine), and it is commonly used to run web servers and handle HTTP requests/responses.

One of the major differences between NodeJS and client-side scripting or many other programming languages is its asynchronous nature. For example, reading parameters in lines 27 and 28 would execute sequentially, but the I/O functions in lines 31 and 32 are triggered concurrently and do not depend on each other's result or response time.

So basically the whole point is that it doesn't have to wait for I/O. Because of that, Node scales well for I/O-intensive workloads and allows many users to be connected simultaneously, and we can extend this to database access as well. Multiple requests can be sent to the HANA database concurrently without interrupting the process flow of the application logic.
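
As a toy illustration of this non-blocking behavior (the file names are placeholders), both reads below start immediately and the final log line prints first:

var fs = require('fs');

// both reads are triggered right away; each callback fires whenever its read finishes
fs.readFile('a.txt', 'utf8', function (err, data) { console.log('a.txt done'); });
fs.readFile('b.txt', 'utf8', function (err, data) { console.log('b.txt done'); });

console.log('reads started');   // printed first - nothing blocks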

NodeJS Modules/Packages

There are many libraries available to be installed for NodeJS to add new functionality to applications. For example, we can use the module named "Express" as a web application framework to handle the low-level HTTP library, the "hdb" module to connect to a HANA server, Mongoose for our MongoDB database operations, Swig to render web pages, and many other modules, each with its own purpose. Even for the same purpose, you can find many modules that work differently but do the same job. It is a very rich environment.

Installing a module is very easy using NPM, the package manager for NodeJS. For example, installing the hdb module for HANA connectivity is just one line in the command tool:

-> npm install hdb

Below is how I installed hdb in my project on the Raspberry Pi.


After installing, connecting is also very easy (this is the basic connection mode, so we just connect with a username and password); a minimal sketch is shown below.
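
Here is a minimal sketch of that basic connection mode with the open-source hdb module (host, port and credentials are placeholders for your own system):

var hdb = require('hdb');

var client = hdb.createClient({
  host: 'myhanaserver',   // assumption: your HANA host name
  port: 30015,            // assumption: indexserver port of instance 00
  user: 'SYSTEM',
  password: 'secret'
});

client.connect(function (err) {
  if (err) { return console.error('Connect error:', err); }
  client.exec('SELECT * FROM DUMMY', function (err, rows) {
    if (err) { return console.error('Query error:', err); }
    console.log(rows);    // [ { DUMMY: 'X' } ]
    client.end();
  });
});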

Currently I don't have a HANA SP11 installation, but in a real implementation we are not going to use the hdb module for HANA connectivity that I mentioned above; we will be using the SAP-delivered modules that are under the installation folder of SP11.


Below is the usage of modules, for both public and SAP-delivered ones. We use the "require" command to include modules in our application.
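
A short sketch of the idea (the SAP-delivered module names below are assumptions based on the SPS 11 examples and may differ in your installation):

// public modules installed from npm:
var express = require('express');     // web application framework
var hdb     = require('hdb');         // open-source HANA client
// SAP-delivered modules shipped with XS advanced (names assumed):
var xsenv   = require('sap-xsenv');   // reads service bindings from the environment
var hdbext  = require('sap-hdbext');  // HANA connectivity helpers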

We didn’t want the traditional application server — large-scale, stateful, necessarily takes a lot of memory

SAP HANA Product Management Director Thomas Jung said "We didn’t want the traditional application server — large-scale, stateful, necessarily takes a lot of memory. We wanted something lightweight and pass-through that was just there as part of the database, and that could do basic RESTful server enablement and web serving, and not a whole lot else".

They are not only making it lightweight, but they are also simplifying the development model.


NodeJS is so lightweight that it can even run on a Raspberry Pi 3 with MongoDB without any problem. Below is my tiny Raspberry Pi 3, which costs around 35 USD. It is running an OS called Raspbian, a free operating system based on Debian and optimized for the Raspberry Pi hardware. I placed a Lego piece near it so you can compare the size. So far I have deployed many applications on it and had no problems. 1 GB of RAM is already enough for NodeJS applications.

What is HANA CatEye ?

I started learning NodeJS from tutorials and paid more attention after I heard it will be in SAP HANA. Later I decided to build this application based on a tutorial about CRUD operations on MongoDB that I found on the web.
Currently, for my BPC on HANA development, I have more than 100 procedures. We keep adding and removing lines from the current procedures based on needs. Keeping track of the changes is really difficult.

Here comes CatEye: it connects to the HANA server using the hdb module, checks the source code of the existing procedures, creates a new version, and saves it into MongoDB, which is also running on the Raspberry Pi 3. MongoDB is a perfect fit for NodeJS applications; it lets us focus on the front end rather than the database layer.
The backup took around 4 seconds for 100+ procedures, which is much faster than SAP HANA Studio :).
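
Under the hood, reading procedure sources out of HANA can be done with a simple catalog query; a sketch of how such a read might look (the schema name is a placeholder, and this is my guess at the approach rather than CatEye's exact code):

SELECT SCHEMA_NAME, PROCEDURE_NAME, DEFINITION
  FROM SYS.PROCEDURES
 WHERE SCHEMA_NAME = 'MYSCHEMA';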

First of all, let's create some dummy procedures and see the result:

We have created 2 procedures named ZTEST_GET_GUID and ZTEST_GET_MATERIAL.
Then, I open the CatEye application and click on the "Meow" button.

CatEye connects to the HANA system and, based on my selection filters on the backend, reads the specific procedures. It also counts the local objects available in the CatEye system, which are either transferred from HANA or manually added. The application sends 2 concurrent requests: one to the HANA server for the procedures to be transferred, and one to MongoDB for the versions already stored locally.
Due to the non-blocking architecture, those requests won't stop the flow of the application logic, and whenever each response arrives, its result is loaded into the page separately.
After clicking the Sync button, the procedures are transferred to our local store; a sketch of the concurrent read is shown below.
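
A rough sketch of how those two reads can be fired concurrently with the async module (client is the hdb connection from the earlier snippet, and Procedure is a hypothetical Mongoose model):

var async = require('async');

async.parallel({
  hanaProcs: function (done) {
    // read the procedures to be transferred from HANA
    client.exec("SELECT PROCEDURE_NAME FROM SYS.PROCEDURES WHERE SCHEMA_NAME = 'MYSCHEMA'", done);
  },
  localCount: function (done) {
    // count the versions already stored locally (Procedure is an assumed Mongoose model)
    Procedure.count({}, done);
  }
}, function (err, results) {
  if (err) { return console.error(err); }
  console.log(results.hanaProcs.length + ' procedures in HANA, ' + results.localCount + ' stored locally');
});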

And let's check the objects in our local MongoDB.

For demo purposes, I changed the ZTEST_GET_MATERIAL procedure and also created a new procedure, ZTEST_GET_PRICE, on the HANA layer. Let's Sync again!

Now CatEye found the new procedure (ZTEST_GET_PRICE) and created it, and also created a second version for the updated procedure (ZTEST_GET_MATERIAL). Let's use version comparison.


Red lines no longer exist in the new version, and the green line is the changed line. Now we can easily back up and track what has changed in the procedures :).

You can find a video of the actual integration and version comparison here:

 

 

Overall, the project took 2 weeks to develop. I built it from scratch and really had so much fun and learned many things during the development of this experimental project. Several NodeJS modules are used in the application, for example: SAP HANA connectivity (hdb), managing asynchronous flows (async), notifications (toastr), parsing POST data (body), MongoDB operations (Mongoose), code highlighting (react-highlight), rendering HTML pages (swig), and showing code differences (diff), each with its own specific purpose. I asked Thomas Jung whether SAP is limiting us to certain NodeJS modules/packages, and he said there is no limitation, but we are responsible for any bugs/security problems caused by public modules. I cannot wait to use NodeJS in HANA SP11 with real scenarios.

 

You can find samples built with SP11 by Thomas Jung on his GitHub page:
https://github.com/I809764

References:

Thomas Jung SP11 new developer features ; http://scn.sap.com/community/developer-center/hana/blog/2015/12/08/sap-hana-sps-11-new-developer-features-nodejs#comment-659207 .

Why HANA XS is moving to NodeJS ?  http://thenewstack.io/sap-unveils-hanas-full-featured-node-js-integration/

HANA change management and version control; http://scn.sap.com/community/hana-in-memory/blog/2015/02/24/step-by-step-guide-on-sap-hana-version-control-features

Top 10 reasons to use NodeJS :
http://blog.modulus.io/top-10-reasons-to-use-node

[SAP HANA Academy] Discover How to Setup and Use the KPI Modeler in SAP S/4 HANA


Over a series of five tutorial videos Tahir Hussain "Bob" Babar provides an overview of how to set up and use the KPI Modeler in SAP S/4 HANA. This series is part of the SAP HANA Academy's S/4 HANA playlist. These videos were made with the greatly appreciated help and assistance of Bokanyi Consulting, Inc.'s Frank Chang.


How to Set up a SAP S/4 HANA ERP User

Screen Shot 2016-03-30 at 4.06.00 PM.png

Linked above is the first video in the series, where Bob details how to set up an SAP S/4 HANA ERP user. This is accomplished by copying the roles and profiles from an existing user. If you don't want to use your main BPINST user, follow the steps Bob outlines.


First, log into SAP Logon. This is Bob's connection to both the back-end and the front-end server as he is using a central hub installation. Use 100, the pre-configured S/4 client, as the client and login with your BPINST username and password. Next, choose to run a SU01 - User Maintenance (Add Roles etc.) transaction from the SAP Easy Access screen. Then, choose to look at the BPINST user's rights and navigate to the Roles tab.

Screen Shot 2016-03-31 at 10.39.50 AM.png

Copy all of the roles and then launch a new window by running the command /osu01 to create a new user. Bob names his new user KPI and clicks on the new button. The only information you need to allocate in the Address tab is a last name. In the Logon Data tab enter a password. Then, in the Roles tab, paste in the roles you copied from the BPINST user. Be aware that sometimes all of the roles aren't copied. So double check to make sure that your new user has all of BPINST's roles.


Next, copy the first three profiles (SAP_ALL, SAP_NEW, S_A_SYSTEM) that are listed in the BPINST user's Profiles tab and paste them into the Profiles tab of your new KPI user.

Screen Shot 2016-03-31 at 11.30.33 AM.png

Now you have a duplicate of the BPINST user.


How to Change the SAP Fiori Launchpad with the Launchpad Designer

Screen Shot 2016-03-30 at 5.21.34 PM.png

In the second video of the series Bob provides an overview of the SAP Fiori Launchpad in SAP S/4 HANA. Also, Bob shows how to change the SAP Fiori Launchpad using the SAP Fiori Designer.


In a web browser log into the SAP Fiori Launchpad Designer with the recently created KPI user on Client 100. The SAP Fiori Launchpad Designer enables you to change the look and feel of certain tiles in your SAP Fiori Launchpad. A list of tiles is located on the right side of the SAP Fiori Launchpad Designer and a list of catalogs is along the left.

Screen Shot 2016-03-31 at 11.39.42 AM.png

The tool that the end-user will see is the SAP Fiori Launchpad for SAP S/4 HANA. Bob opens the SAP Fiori Launchpad in another tab. The example Bob shows of a SAP Fiori application is for Operational Processing. Clicking on the hamburger button on the left will open the Tile Catalog. Bob elects to open the KPI Design Catalog.

Screen Shot 2016-03-31 at 11.44.17 AM.png

To provide an example of what an end-user might experience, Bob opens the Sales - Sales Order Processing catalog and then opens the Sales Order Fulfillment All Issues tile. This gives the end user a normal tabular report on Sales Order Fulfillment Issues by connecting to a table located in SAP S/4 HANA through OData.

Screen Shot 2016-03-31 at 11.50.11 AM.png

Another tile, Sales Order Fulfillment Issues - Resolved Issues, has an embedded KPI which shows that there are 64 issues that need to be resolved on 29 sales orders.

Screen Shot 2016-03-31 at 5.41.48 PM.png

Back in the SAP Fiori Launchpad Designer, Bob searches for ssb in the Tile Catalog. Bob opens up the SAP: Smart Business Technical Catalog. This is where you can change the form of navigation for a tile including all of the options related to the KPI monitor. The KPI Design Catalog is very similar.

Screen Shot 2016-03-31 at 6.23.05 PM.png

The SAP Fiori Launchpad Designer is used to direct target navigation. To demonstrate, Bob searches for order processing and opens up the Sales - Sales Order Processing catalog. If you view the tiles in list format you will find an Action and a Target URL for each of the tiles. This tells you what will happen when the tile is selected. With the Target Mappings option you can define what will happen when you select a specific tile. You can also choose whether or not the tile can be viewed on a tablet and/or phone as well.

Screen Shot 2016-03-31 at 6.30.29 PM.png

How to Create and Secure a Catalog

Screen Shot 2016-03-30 at 5.22.00 PM.png

Bob details how to create a catalog in the series' third video. Bob also walks through how to secure the catalog so users who are on the SAP Fiori Launchpad can access it.


To create a new catalog, first click on the plus button at the bottom of the SAP Fiori Launchpad Designer. Bob elects to create a catalog using Standard syntax and gives it a title and an ID of ZX_KPI_CAT. Once the new catalog is created, click on the Target Mapping icon. You can create a new Target Mapping here, but the simplest way is to copy a Target Mapping from an existing catalog. So Bob navigates to the Target Mapping for the Sales - Sales Order Processing catalog. Then, Bob selects the Target Mapping at the bottom that has * as its semantic object before clicking on the Create Reference button at the bottom.

Screen Shot 2016-04-01 at 11.43.25 AM.png

Selecting the catalog you've recently created (ZX_KPI_CAT) will create a Target Mapping in that catalog with the same rights as the semantic object you selected from the existing catalog. Now, back in the ZX_KPI_CAT catalog you can confirm that the Target Mapping of * has been replicated.

Screen Shot 2016-04-01 at 7.05.12 PM.png

Next, you must enable a user to access the catalog. So go back into SAP Logon and log in as the KPI user on client 100. Running the command /npfcg will open up role maintenance. This is where you can build a role. Bob names his role ZX_KPI_CAT and selects single role. Bob duplicates the name as the description and saves the role. Then, in the menu tab, Bob chooses SAP Fiori Launchpad Catalog as the transaction. Next, Bob finds and selects his ZX_KPI_CAT in the menu for Catalog ID.

Screen Shot 2016-04-01 at 7.31.09 PM.png

This has built a role that grants access to the ZX_KPI_CAT catalog. Next, Bob opens the User tab and enters KPI as the User ID. Now, after saving, the KPI user can access the ZX_KPI_CAT catalog and the security has been fully setup.


Accessing Core Data Services

Screen Shot 2016-03-30 at 5.22.40 PM.png

In the fourth video of the series Bob shows how to access a Core Data Service. Core Data Services access the SAP S/4 HANA tables, which are ultimately exposed as OData. For more information on how to build and use CDSs please watch this series of tutorials from the SAP HANA Academy.


First, in Eclipse, Bob duplicates the connection he's already established but opts to use the KPI user with client 100 instead of his original SHA user. Now Bob is connected to the SAP S/4 HANA system as the KPI user. Next, Bob finds an already existing CDS by opening a Search on the ABAP Object Repository and searching for an object named ODATA_MM_ANALYTICS. Once the search has located ODATA_MM_ANALYTICS (ABAP Package), Bob opens it and navigates to its Package Hierarchy in order to see its exact link.

Screen Shot 2016-04-05 at 10.04.58 AM.png

ODATA_MM_ANALYTICS is in a sub-package of APPL called ODATA_MM. Navigate to the ODATA_MM package from the System Library on the left-hand side and find ODATA_MM_ANALYTICS before adding it to your favorites. Opening the Data Definitions folder within the Core Data Services folder in the ODATA_MM_ANALYTICS package will show the pre-built Core Data Services. Bob opens C_OVERDUEPO, which is a consumption view, so a BI tool will hit it directly.

Screen Shot 2016-04-05 at 11.25.05 AM.png

Another way to view a CDS's syntax is to right-click on it and choose to open it with the Graphical Editor. This depicts the logical view of the data. The C_OVERDUEPO view comes from the P_OVERDUEP01 view. This is a great way to track the data back to its source table.

Screen Shot 2016-04-05 at 5.11.00 PM.png

To check that the data from the C_OVERDUEPO CDS is correctly exposed as OData, Bob resets his perspective. Then, Bob right-clicks on and opens OData Exposure underneath the secondary objects header in the outline. This opens the OData service in a browser, and Bob logs in as the KPI user. To test, you can append $metadata to the end of the URL to see the various columns for the entities of the CDS view.
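
For reference, the two checks follow the usual Gateway URL pattern (host and port omitted; the service path below matches the one used in the KPI Modeler step):

/sap/opu/odata/sap/C_OVERDUEPO_CDS/            <- service document
/sap/opu/odata/sap/C_OVERDUEPO_CDS/$metadata   <- columns for the entities of the CDS view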

Screen Shot 2016-04-06 at 10.33.16 AM.png

Using the KPI Modeler

Screen Shot 2016-03-30 at 5.23.13 PM.png

In the fifth and final video of the series Bob details how to use the KPI Modeler.


First, Bob opens the KPI Design catalog in the SAP Fiori Launchpad and selects the Create Tile tile. Bob names it KPI Overdue PO and chooses C_OVERDUEPO as the CDS View for the Data Source. Then, Bob selects the corresponding OData Service and entity set called /sap/opu/odata/sap/C_OVERDUEPO_CDS and C_OverduePOResults respectively. For Value Measure Bob selects OverdueDays. Then, he clicks Activate and Add Evaluation.

Screen Shot 2016-04-06 at 10.53.19 AM.png

The evaluation is a filter that regulates what you want the data to show. Bob names the evaluation Last Year - KPI. For Input Parameters and Filters Bob elects to only display EUR as his currency and sets his evaluation period to 365 days. For his KPI Goal Type Bob keeps the default, Fixed Value Type. Bob sets his target threshold at 500, his warning threshold at 300 and his critical threshold at 100. Then, Bob clicks Activate and Configure New.

Screen Shot 2016-04-06 at 11.46.34 AM.png

There Bob is presented with various tile formatting options. In his simple demonstration Bob keeps the default tile configurations. Bob chooses ZX_KPI_CAT as his catalog before clicking on Save and Configure Drill-Down. Drill-Down determines what happens when the KPI is selected. Bob chooses to filter down with a Dimension of Material and a Measure of Overdue Days. This will create the chart depicted below.

Screen Shot 2016-04-07 at 10.21.27 AM.png

Bob gives his view a title of By Product and chooses to use Actual Backend Data. So when the tile is clicked on in the SAP Fiori Launchpad it will link to the chart. After clicking OK, Bob clicks on the + button at the top of the screen to add some of the various charts that are subsequently listed. The selections will appear when the tile is drilled into. You can add additional graphical options if you desire different views of the data. Bob selects two charts before clicking on Save Configuration.

Screen Shot 2016-04-07 at 10.33.49 AM.png

Back on the homepage of the KPI Design window, Bob clicks on the pen icon at the bottom right of the screen to configure what will be seen in the window. Click on the Add Group button and name it; Bob names his group KPI's Fiori Tile Group. Then, clicking the + button below the name allows you to add catalogs. It will load all of the catalogs your user has created. Bob adds the ZX_KPI_CAT catalog.

Screen Shot 2016-04-07 at 11.17.03 AM.png

Once you turn off edit mode you can view your Overdue PO tile.

Screen Shot 2016-04-07 at 11.19.32 AM.png

For more tutorial videos about What's New with SAP HANA SPS 11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to stay abreast of our latest free tutorials.

OpenSSL vulnerability DROWN attack CVE-2016-0800


An update for HANA users who want to know more about the OpenSSL DROWN attack.

 

SAP HANA and HANA based applications should not be affected by the DROWN vulnerability.


SAP HANA database uses SAP’s own CommonCryptoLib for communication encryption purposes, which is not affected by DROWN.

 

SAP HANA can be configured to use the OpenSSL instance which is provided by the Linux operating system (provided by Suse or RedHat). SSLv2 is not offered/used in these scenarios.

Therefore this configuration is also not affected by DROWN. Customers are advised to update their operating system according to their maintenance agreements with their operating system vendors. SAP explicitly allows customers to deploy security updates of the operating system.

 

More information:

http://service.sap.com/sap/support/notes/1944799 (SLES)
http://service.sap.com/sap/support/notes/2009879 (Red Hat, see attached document)

SAP HANA extended application services, advanced model (XS Advanced) shipment contains OpenSSL for communication encryption. These channels do not support SSLv2 and are therefore not affected by DROWN.


Introducing SAP HANA Vora1.2


SAP HANA Vora 1.2 was released recently, and with this new version we have added several new features to the product. Some of the key ones I want to highlight in this blog are:

 

  • Support for the MapR Hadoop distribution
  • A new "OLAP" modeler to build hierarchical data models on Vora data
  • A discovery service using open-source Consul, to register Vora services automatically
  • A new catalog to replace ZooKeeper as the metadata store
  • Native persistency for the metadata catalog using a distributed shared log
  • A Thrift server for client access through JDBC-Spark connectivity

 

The new installer for Vora in version 1.2 extends the simplified installer so that Hadoop management tools like the MapR Control System can be used to deploy Vora on all the Hadoop/Spark nodes. This is in addition to what was provided in version 1.0 for the Cloudera Manager and Ambari admin tools.

 

pic1.png

 

The Vora Modeler provides a rich UI to interact with data stored in Hadoop/HDFS, Parquet, ORC and S3 files using either the SQL editor or the Data Browser. Once you have the Vora tables in place, you can create "OLAP" models to build dimensional data structures on this data.

 

pic2.png

At the core of Vora, we are looking to enable distributed computing at scale when working with data in both SAP HANA and Hadoop/Spark environments. By pushing processing of different algorithms down to where the data is, and by reducing data movement between the two data platforms, we deliver fast query processing and performance for extremely large volumes of data. We have also introduced new features like distributed partitions and co-located joins to achieve these performance optimizations.

 

HANA Vora went GA in early March, and we are seeing several customer use cases that enable Big Data analytics and IoT scenarios. If you are at ASUG/SAPPHIRE in May 2016, stop by to hear real-life customers discuss their implementations and gain insights from these technologies.

 

The Vora developer edition has been updated to version 1.2; you can access it from here.

HANA MDC: Tenant crash while recovering other tenant on the same appliance


This blog post is to bring attention to an issue we have been facing on our HANA Multitenant Database Container (MDC) setup.

 

Background:

We have a scale-up MDC setup with more than 15 tenant databases in non-prod on SPS10.

As part of quarterly release activities we refresh non-prod systems from production MDC tenant backups.

Until last year we had fewer than 10 tenants and the regular refresh was working as expected.

 

Issue:

We introduced more non-prod tenants at the end of last year, and during the next refresh cycle we started noticing a tenant crash while we were working on the refresh of another tenant.

A complete check of the trace logs of the crashed tenant confirmed we had signal 6 errors at exactly the same time the other tenant was being refreshed.

After multiple attempts to bring up the tenant failed, we had to involve SAP Support to check the cause of the issue.

Meanwhile, we restored the crashed tenant using backups.

 

Cause:
SAP Support took more than a month to identify the cause of the issue, and another occurrence of the same issue while restoring a different tenant confirmed there was a correlation.

SAP confirmed the following: when we have more than 10 tenants on a single MDC appliance we will come across this issue (on version SPS11 revision 112.02 and below).

For example, if we have 15 tenants and the tenant with Database ID 5 is restored using a backup of a production tenant, it will impact the tenant with Database ID 15, which will crash and fail to start up. The same issue would occur on the tenants with Database IDs 13 and 14 if the tenants with Database IDs 3 and 4 are recovered using a backup.

 

Resolution:

 

SAP has addressed the issue in SPS11 database maintenance revision 112.02, which was released today, 12-Apr-2016.

Please find the link below, and the screenshot that confirms the issue in the note.

 

http://service.sap.com/sap/support/notes/2300417

MDC_Issue_Tenants.jpg

Please let me know if you have any thoughts or inputs on this issue. I hope this blog is useful in understanding the cause of the issue and the available solution.

2016 ASUG Pre-Conference Seminar: Building the Business Case for SAP HANA


Are you exploring the possible benefits that SAP HANA may provide for your company? Are you confident there are strong use cases, yet challenged by putting together that all-important Business Case to "sell it" internally? Then this session is for you!

 

Please join us for this interactive session where we will discuss how to prioritize your use cases and determine the critical value drivers to generate a Business Case that will resonate within your company.

 

The session also includes live customer insights, describing their personal experiences through this effort and how they successfully convinced their company of the value and benefits possible with SAP HANA through a solid Business Case.

 

The Agenda for this half-day Pre-Conference seminar includes:

  • Why do you need a business case anyway?
  • Methodology for building a business case
  • Levels of value
  • Value management life cycle
  • Create the storyline
  • Adding the financial dimension
  • Example of the process
  • Best practices approach
  • SAP Benchmarking
  • Bringing it all together
  • Customer testimonial

 

You can find more details about this Pre-Conference and Registration details at:

http://events.sap.com/sapandasug/en/asugpreconf.html#section_4

 

We look forward to meeting you at this ASUG Pre-Conference Seminar on Monday morning, May 16, in Orlando!

 

SAP HANA Solutions GoToMarket team

SAP Global HANA Center of Excellence team

*click* - *click* - *doubleclick* and nothing happens


Today's tidbit is one of those little dumb things that happen every now and then and make me think: "Great, now this doesn't work... WTF...?"

Usually that's a bit frustrating for me as I like to think that I know how stuff works around here (here, meaning my work area, tools, etc.).

 

So here we go. Since the SAP HANA Studio is currently not "an area of strategic investment" and the Web-based tools are on the rise, I try to use those more often.

I even have the easy to remember user-friendly URL (http://<LongAndCrypticNodeName.SomeDomainname.Somethingelse>:<FourDigitPortNumber>/sap/hana/ide/catalog/) saved as a browser bookmark - ain't I organized!


And this thing worked before.

I have used it.

So I click on the link, log on to the instance, and get this fancy "picture" (as my Dad would explain it to me - everything that happens on the screen is a "picture", which is really helpful during phone-based intra-family help-desking...):

 

2016-04-14_22-17-36.gif

Pic 1 - The starting 'picture', looking calm and peaceful... for now

 

Ok, the blocky colors are due to GIF file format limitation to 256 colors, but you should be able to see the important bits and pieces.

 

There is some hard-to-read error message, which I choose to ignore, and I click on the little blue SQL button and then ... nothing happens.

I click again and again, as if I cannot comprehend that the computer understood me the first time, but no amount of clicking gets the SQL editor to open.

What is going on?

Next step:

 

Do the PRO-thing...

     ... open Google Developer Tools...

     ... delete session cookies and all the saved information.

     ... Logon again.

 

Lo and behold, besides the much longer loading time for the page, nothing changed.

 

Great. So what else is wrong? Did the last SAP HANA upgrade mess with the Web tools?

2.gif
Pic 2 - wild clicking on the button and visually enhanced error message indicating some bad thing

 

Luckily, that wasn't it.

Somewhere in the back of my head I remembered, that I had a couple of browser extensions installed.

 

Now I know what you're thinking: Of course it's the browser extensions. That moron! Totally obvious.

What can I say? It wasn't to me.

3.gif

Pic 3 - there's the culprit, the root cause and trigger for hours of frustration

 

It just didn't occur to me that e.g. the Wikiwand browser extension, which I use to have Wikipedia articles in a nicer layout, would install a browser-wide hook on the CTRL+CLICK event, and that this would sometimes prevent the Web tools from opening.

After disabling this (there's a settings page for this extension) the Web tools resumed proper function.

Good job!

 

So is the Wikiwand extension a bad thing? No, not at all. There are tons of other extensions that do the same.

 

While I would really like to demand back the precious hours of my life this little mishap took from me, I assume that this request would be a bit pointless.

To me, at least, this experience leaves the insight that I clearly thought too simplistically about the frontend technology we use today. Web browsers are incredibly far from a standard environment, and controlling what the end user finally sees is not easy (if really possible).

 

Ok, that's my learning of the day.

 

Cheers,

Lars

 

p.s.

the error message "Could not restore tab since editor was not restorable" not only seems to be a tautology, but also had absolutely nothing to do with the problem in this case.

RANK Function by SQL & Calculation View


RANK logic via the SQL RANK function, plain SQL logic, and a Calculation View (graphical and CE function).


Scenario:

⦁ Consider a non-SAP load (e.g. a flat file) which is fully loaded into a HANA table daily.

⦁ Because of the full load, all transactions are uploaded into the HANA table daily, unless we implement some pseudo-delta logic on the source side.

⦁ We may get the same transaction multiple times from the source file if there were multiple changes to any key figures for the same transaction ID.

⦁ For example, order 10000 has an order quantity of 10 KG on its creation date.

⦁ On the same day or a subsequent day, the order quantity of this transaction increases from 10 KG to 20 KG.

⦁ So from the non-SAP source we get this transaction multiple times, with the old and new order quantity values and different timestamps.

 

Requirement

So our requirement is to report only the transactions with the latest timestamp, which carry the most current key figures from the source data.

To achieve this, the RANK node in a Calculation View can be useful.

This functionality is the same as the RANK function in SQL.

 

Column Table Creation:

CREATE COLUMN TABLE <schema name>.SALES_FLAT (
  SAELSORDER INTEGER,
  SALESITEM SMALLINT,
  DOC_TYPE VARCHAR(4),
  MATERIAL_NUM NVARCHAR(18),
  ORDER_QTY TINYINT,
  UNIT VARCHAR(2),
  NET_VALUE DECIMAL(15,2),
  CURRENCY VARCHAR(3),
  CREATAED_AT TIMESTAMP
);

Then we load the Day 1 records (inserting via SQL instead of a file import, just to showcase the functionality):

 

DAY1 Load:

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',10,'KG',1500,'INR','2016-04-01 09:10:59');

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',20,'KG',2500,'INR','2016-04-01 09:11:00');

INSERT INTO SALES_FLAT VALUES( 10001,10,'ZOR','MAT0002',10,'KG',4500,'INR','2016-04-01 09:12:15');

INSERT INTO SALES_FLAT VALUES( 10002,10,'ZOR','MAT0003',20,'KG',3500,'INR','2016-04-01 09:13:10');

INSERT INTO SALES_FLAT VALUES( 10003,10,'ZOR','MAT0004',10,'KG',1500,'INR','2016-04-01 09:13:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',10,'KG',1500,'INR','2016-04-01 09:14:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',40,'KG',8500,'INR','2016-04-01 09:15:59');

DAY1:

We have order 10000, item 10 with multiple changes and different timestamps.

We have order 10004, item 10 with multiple changes and different timestamps.

 

DAY2 Load

DAY1 + DAY2 (full load): in this case we have not implemented any delta logic.

DAY 2 Load records:

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',20,'KG',2500,'INR','2016-04-02 09:10:59');

INSERT INTO SALES_FLAT VALUES( 10000,10,'ZOR','MAT0001',30,'KG',3500,'INR','2016-04-02 09:11:00');

INSERT INTO SALES_FLAT VALUES( 10001,10,'ZOR','MAT0002',20,'KG',5500,'INR','2016-04-02 09:12:15');

INSERT INTO SALES_FLAT VALUES( 10002,10,'ZOR','MAT0003',30,'KG',6500,'INR','2016-04-02 09:13:10');

INSERT INTO SALES_FLAT VALUES( 10003,10,'ZOR','MAT0004',20,'KG',7500,'INR','2016-04-02 09:13:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',20,'KG',8500,'INR','2016-04-02 09:14:59');

INSERT INTO SALES_FLAT VALUES( 10004,10,'ZOR','MAT0005',50,'KG',9500,'INR','2016-04-02 09:15:59');

The DAY2 load timestamp is April 2nd, 2016.

In the DAY2 load we have the same transactions with different timestamps and different key figure values.

So the requirement here is to report only the latest changes from the table SALES_FLAT.

 

 

RANK function by SQL logic:

SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
       NET_VALUE, CURRENCY, CREATAED_AT,
       RANK() OVER (PARTITION BY SAELSORDER, SALESITEM ORDER BY CREATAED_AT DESC) AS "RANK"
FROM SALES_FLAT
ORDER BY SAELSORDER, SALESITEM;

 

RANK Function.jpg

 

 

In the output above we can see one extra column, RANK; within it the orders are sorted by the Created At timestamp and assigned a RANK value.

These values are derived by the SQL RANK function.

In the code above, we ranked with PARTITION BY on the columns sales order number and item, and ORDER BY CREATAED_AT DESC.

 

RANK Logic by SQL without RANK Function:

SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
       NET_VALUE, CURRENCY, CREATAED_AT,
       (SELECT COUNT(*) FROM SALES_FLAT T1
         WHERE T1.SAELSORDER = T2.SAELSORDER
           AND T1.SALESITEM = T2.SALESITEM
           AND T1.CREATAED_AT > T2.CREATAED_AT) + 1 AS RANK
FROM SALES_FLAT T2
ORDER BY SAELSORDER, SALESITEM;

(Note that the correlated subquery must count the rows with a later timestamp, hence T1.CREATAED_AT > T2.CREATAED_AT, so that rank 1 goes to the latest record, matching the descending order of the RANK function above.)

 

SQL Logic without RANK function.jpg

 

Both outputs are the same, with and without the RANK function.

RANK functionality was introduced in HANA SPS 08.


 

RANK function by Calculation View:

 

Using the RANK Node in a Graphical Calculation View

In the RANK node of a graphical Calculation View, after selecting the required table, you need to set values for the required parameters such as Sort Direction, Order By and Partition By.

Below there is a checkbox to generate an extra column in our Calculation View, which holds the RANK values.

There is also a Threshold parameter, where we can fix a value or pass an input parameter, which is applied to the newly generated column.

It means that if we pass 1, it will report all records having RANK = 1.

RANK node Cal View.jpg


 

 

Cal View output.jpg

By passing an input parameter to the Calculation View, we get only the records we require; this input parameter works on the newly generated RANK column.

Passing the value 1, we get only the latest-timestamp transaction items from the Calculation View, because in the RANK node the sort order of the CREATED AT field is descending.
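
For comparison, the same "latest record per order item" result can be produced in plain SQL by wrapping the RANK query and filtering on the generated column - a sketch equivalent to setting Threshold = 1 in the RANK node:

SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
       NET_VALUE, CURRENCY, CREATAED_AT
FROM (
      SELECT SAELSORDER, SALESITEM, DOC_TYPE, MATERIAL_NUM, ORDER_QTY, UNIT,
             NET_VALUE, CURRENCY, CREATAED_AT,
             RANK() OVER (PARTITION BY SAELSORDER, SALESITEM
                          ORDER BY CREATAED_AT DESC) AS RNK
        FROM SALES_FLAT
     )
WHERE RNK = 1;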
