SCN Blog List - SAP HANA and In-Memory Computing

HANA Effect Podcast #6 – Killer Use Cases at CenterPoint Energy, Part 2


In the second of a two-part episode, Raj Erode, IT Architect at CenterPoint Energy, joins us to discuss three killer SAP HANA use cases for CRM and predictive scenarios, plus their incredibly innovative Internet of Things (IoT) and Big Data scenario.  CenterPoint won the 2014 HANA Innovation Award for their Big Data scenario.

 

We hope you enjoy hearing CenterPoint's first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please contact me.

 

HE6.jpg

 

Transcript: SAP HANA Effect Episode 6

 

Sponsored by:

[Intel Xeon logo]


How to...perform mass user creation / role assignment


A recent request came up at a customer who was migrating to a larger independent HANA sidecar scenario and wanted to move large groups of users from one HANA instance to another. Since there is no mass user creation functionality that I am aware of, we had to create one. This should be helpful for application services teams who are responsible for creating users and assigning roles in large numbers. It can certainly be enhanced to cover a number of other scenarios, but it works for the requirement at hand and was fast to implement.

 

Basic workflow

- Populate a table, either through file upload or another UI method, that has the structure shown below

- Create users that do not already exist, taking into account that they may or may not have SAML enabled and may or may not have a specific validity date.

- Add roles to users that do not already have that role assigned

- Clean out maintenance table

 

DDL

 

CREATE COLUMN TABLE "HANA_FOUNDATION"."USER_MAINTAIN" (
    "USER_NAME"     NVARCHAR(12),
    "PASSWORD"      NVARCHAR(15),
    "VALID_TO"      NVARCHAR(8),
    "SAML_ENABLED"  NVARCHAR(1),
    "SAML_PROVIDER" NVARCHAR(40),
    "EXTERN_ID"     NVARCHAR(20),
    "ROLE"          NVARCHAR(60)
) UNLOAD PRIORITY 5 AUTO MERGE;

SQL Script code

 

/********* Begin Procedure Script ************/
--Justin Molenaur 02/12/2015
--Create users, assign roles and enable SAML based on excel file upload
--Check for existing user and role assignment beforehand
i INTEGER;
row_count INTEGER;
loop_current_SQL NVARCHAR(200);
valid_date NVARCHAR(8);
valid_SAML NVARCHAR(1);
BEGIN
--Select unique users to be created that don't already exist
it_user_list = SELECT DISTINCT A."USER_NAME", A."PASSWORD", A."SAML_ENABLED", A."SAML_PROVIDER", A."EXTERN_ID", A."VALID_TO"
FROM "HANA_FOUNDATION"."USER_MAINTAIN" A
LEFT OUTER JOIN "SYS"."USERS" B
ON (A."USER_NAME" = B."USER_NAME")
WHERE B."USER_NAME" IS NULL;
SELECT COUNT("USER_NAME") into row_count FROM :it_user_list; --Get count of users to create
--Loop for Creation of users that don't exist yet
FOR i IN 0 .. :row_count -1 DO
SELECT "VALID_TO" --Check if a validity date is maintained
into valid_date FROM :it_user_list
LIMIT 1 OFFSET :i;
SELECT "SAML_ENABLED" --Check if a validity date is maintained
into valid_SAML FROM :it_user_list
LIMIT 1 OFFSET :i;
IF :valid_date IS NULL AND :valid_SAML = 'Y' THEN --No validity, SAML
SELECT 'CREATE USER ' || A."USER_NAME" || ' PASSWORD ' || A."PASSWORD"
|| ' WITH IDENTITY ''' || A."EXTERN_ID" || ''' FOR SAML PROVIDER ' || A."SAML_PROVIDER"
INTO loop_current_SQL
FROM :it_user_list A
LIMIT 1 OFFSET :i;
ELSEIF :valid_date IS NOT NULL AND :valid_SAML = 'Y' THEN --Validity, SAML
SELECT 'CREATE USER ' || A."USER_NAME" || ' PASSWORD ' || A."PASSWORD"
|| ' WITH IDENTITY ''' || A."EXTERN_ID" || ''' FOR SAML PROVIDER ' || A."SAML_PROVIDER"
|| ' VALID UNTIL ''' || A."VALID_TO" || 235900 || ''''
INTO loop_current_SQL
FROM :it_user_list A
LIMIT 1 OFFSET :i;
ELSEIF :valid_date IS NOT NULL AND :valid_SAML = 'N' THEN --Validity, no SAML
SELECT 'CREATE USER ' || A."USER_NAME" || ' PASSWORD ' || A."PASSWORD"
|| ' VALID UNTIL ''' || A."VALID_TO" || 235900 || ''''
INTO loop_current_SQL
FROM :it_user_list A
LIMIT 1 OFFSET :i;
ELSE --No validity, no SAML
SELECT 'CREATE USER ' || A."USER_NAME" || ' PASSWORD ' || A."PASSWORD"
INTO loop_current_SQL
FROM :it_user_list A
LIMIT 1 OFFSET :i;
END IF;
EXEC(:loop_current_SQL);
END FOR;
--Select distinct role assignments needed, checking for already existing role assignments
it_role_list = SELECT DISTINCT A."USER_NAME", A."ROLE"
FROM "HANA_FOUNDATION"."USER_MAINTAIN" A
LEFT OUTER JOIN "SYS"."GRANTED_ROLES" B
ON (A."USER_NAME" = B."GRANTEE" AND A."ROLE" = B."ROLE_NAME")
WHERE B."GRANTEE" IS NULL;
--Get count of roles to assign
SELECT COUNT("USER_NAME") into row_count FROM :it_role_list ;
--Loop for assignment of roles
FOR i IN 0 .. :row_count -1 DO
SELECT 'GRANT "' || A."ROLE" || '" TO ' || A."USER_NAME"
INTO loop_current_SQL
FROM :it_role_list A
LIMIT 1 OFFSET :i;
EXEC(:loop_current_SQL);
END FOR;
DELETE FROM "HANA_FOUNDATION"."USER_MAINTAIN"; --Clear out maintenance table when complete
END;
/********* End Procedure Script ************/
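
For illustration, once the procedure is created, a load could look like this. The rows follow the column order of the DDL above, and the procedure name MASS_USER_MAINTAIN is a placeholder, since the blog omits the CREATE PROCEDURE header:

INSERT INTO "HANA_FOUNDATION"."USER_MAINTAIN" VALUES ('JSMITH', 'Initial123', NULL, 'N', NULL, NULL, 'ZBI_ANALYST');
INSERT INTO "HANA_FOUNDATION"."USER_MAINTAIN" VALUES ('JDOE', 'Initial123', '20151231', 'Y', 'MY_IDP', 'jdoe', 'ZBI_ANALYST');
CALL "HANA_FOUNDATION"."MASS_USER_MAINTAIN";

JSMITH gets a plain user with no validity limit, JDOE a SAML-enabled user valid until the end of 2015; both then receive the ZBI_ANALYST role if they don't already have it.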

 

There you go, simple as that. Now get out there and create users in a massive way!

 

Happy HANA,

Justin

Innovation Day Focused on Predictive Maintenance & Service in Newtown Square, PA


Companies today are facing critical challenges.  You need to know how to optimize maintenance while providing support services.  Plus you are always worried about how to save costs. But where do you start?  How do you take your maintenance program to the next level? 

 

To help you face these challenges, SAP will be hosting an Innovation Day focusing on Predictive Maintenance and Service.   You can join us free of charge at the SAP office in Newtown Square, PA on Thursday March 26, 2015.

 

We’ll discuss real business cases where companies have used huge amounts of data, gathered from machine-to-machine connectivity and powered by the real-time processing capabilities embedded in SAP HANA, to make informed decisions.

 

You’ll learn how companies are able to realize a multi-million dollar annual return on their investment through:

 

 

  • Increased Revenue due to greater asset uptime supporting increased productivity;
  • Reduced Service Parts Inventory allowing for more accurate & timely service parts inventory forecasting;
  • Decreased R&D Costs based on predictive maintenance findings regarding equipment design, resulting in higher quality products and fewer warranty claims.

 

Helping increase uptime of your critical assets is a win-win for both you and your customers. SAP’s Solution for Predictive Maintenance & Service can help. To register or to learn more, just click here or on the link below:

 

 

INNOVATION DAY in NEWTOWN SQUARE, PA

 

Thursday, March 26, 2015  from 9 AM – 12:30PM (local time)

 

 

SAP Office, Executive Briefing Center, Demonstration Room B in Building 1 located at 3999 West Chester Pike, Newtown Square, PA 19073

 

Click here to Register or Learn More.

Using HANA Input Parameters in Tableau


We are using Tableau at my work and I think it is Harry Potter level magical when it comes to creating beautiful, easy to understand visuals.  However, when connecting to HANA there seems to be a limitation in passing input parameters to a view.  When first connecting to a view in Tableau, it will prompt you for any input parameters used in your view.

 

InputParameter.PNG

 

The problem with this method is that you can't change the value dynamically in a dashboard with a parameter created in Tableau.  To pass an input parameter dynamically with a Tableau parameter, there are two different methods you can use.  The first is creating a Custom SQL data source in Tableau.  Using the RSPCPROCESSLOG table as an example data source (I have another blog coming that uses it), here is an example of using a Tableau parameter and a Custom SQL data source.

 

select l.variante as process_chain,
to_date(utctolocal(to_timestamp(l.starttimestamp), 'PST')) as start_date,
utctolocal(to_timestamp(l.starttimestamp), 'PST') as start_time,
utctolocal(to_timestamp(l.endtimestamp), 'PST') as end_time,
round((seconds_between(to_timestamp(l.starttimestamp), to_timestamp(l.endtimestamp)))/60, 2) as duration
from <your schema here>.rspcprocesslog l
where l.starttimestamp is not null and l.starttimestamp != 0
and l.endtimestamp is not null and l.endtimestamp != 0
and to_date(utctolocal(to_timestamp(l.starttimestamp), 'PST')) = <Parameters.My Parameter>

While this works just fine, it still doesn't solve the problem of using an actual input parameter in your HANA view.  To do this, we will use a Custom SQL data source in Tableau again and query the view with PLACEHOLDER.  Our calculation view in HANA will look like this and our input parameter is IP_START_DATE.

 

var_out = select l.variante as process_chain,
to_date(utctolocal(to_timestamp(l.starttimestamp), 'PST')) as start_date,
utctolocal(to_timestamp(l.starttimestamp), 'PST') as start_time,
utctolocal(to_timestamp(l.endtimestamp), 'PST') as end_time,
round((seconds_between(to_timestamp(l.starttimestamp), to_timestamp(l.endtimestamp)))/60, 2) as duration
from rspcprocesslog l
where l.starttimestamp is not null and l.starttimestamp != 0
and l.endtimestamp is not null and l.endtimestamp != 0
and to_date(utctolocal(to_timestamp(l.starttimestamp), 'PST')) = :IP_START_DATE;

In your Custom SQL data source in Tableau, use the following syntax.

 

select * from "_SYS_BIC"."YOUR_CALCULATION_VIEW"('PLACEHOLDER' = ('$$IP_START_DATE$$', <Parameters.My Parameter>))

You will need to add your view as a data source the first way mentioned in this blog so you can add a Tableau parameter that you can insert into your Custom SQL data source.  I noticed that when I set my parameter as a date in Tableau, it created an error when passing it to HANA.  To solve this, I changed the data type to a string and it worked just fine.  You won't have a date picker, but typing YYYY-MM-DD is not very hard.

 

Now you may be asking why this matters and why not just write the SQL directly in Tableau.  If you need to push your parameters down to intermediary steps in your calculation view and report on the results, you need to be able to pass values to an input parameter.  Also, by having the value you pass to the HANA input parameter be a Tableau parameter, you can put it on a dashboard and allow a user to easily change it.  This opens up all sorts of creative, dynamic visualizations.  I hope this helps someone out, and thanks for reading.

 

Also, I would like to thank my coworker Vineeth for figuring out the PLACEHOLDER version and allowing me to share it with you.

SAP HANA - Expert Guided Implementations


Hello community,

 

Recently, I found a good learning resource: Expert Guided Implementations - SAP HANA.

 

You can click the link below to get involved:

Register for SAP HANA EGIs (SAP Service Marketplace s-user required)

 

Register in HANA EGI.jpg

 

The short overview:

 

SAP HANA (1 of 6) - Monitoring and Troubleshooting (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70213507)

Description: Connect the customer's HANA database to the Solution Manager. Includes the configuration of Root Cause Analysis and Technical Monitoring for SAP HANA.

Goals: Guides the customer through the configuration process.


SAP HANA (2 of 6) - Database Administration and Operations (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70203106)

Description: Demonstrates the tools necessary for the administration and operation of the SAP HANA database.

Goals: Ensures administration & monitoring procedures are aligned with other systems using Solution Manager.


SAP HANA (3 of 6) - Advanced Database Monitoring (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70228292)

Description: Provides information to allow you to analyze, diagnose and resolve common issues found in SAP HANA.

Goals: See the tools and techniques necessary for monitoring the SAP HANA database, and find the root cause of common issues relating to its normal operation.


SAP HANA (4 of 6) - Make your Custom Code Ready (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70230808)

Description: Gives the basic knowledge of how to perform a code check review. It also demonstrates how to evaluate the usage or coverage of an object, and the methodology and tools to be used.

Goals: Get a work package of your own custom code to be optimized. SQL performance guidelines are given, along with checks for functional correctness. See some SAP HANA-specific solutions.

 

SAP HANA (5 of 6) - Data Modelling (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70228291)

Description: Provides information to data modellers seeking to optimize reporting performance.

Goals: Demonstrate advanced Modelling Features and full text search. Demonstrate processing information models and managing modelling content

 

SAP HANA (6 of 6) - Profitability Analysis (https://service.sap.com/sap/bc/bsp/spn/esa_redirect/index.htm?gotocourse=X&courseid=70228290)

Description: Gives full visualization of cost & profit drivers when working with big volumes of financial data. The SAP CO-PA Accelerator empowers organizations with easy access to data to make timely decisions, allowing for faster analysis of profitability data, with cost allocations processed significantly faster.

Goals: Empowers business users with easy access to profitability information.


You can also choose the language and time for the course.
Language and Time.jpg

 

I registered for SAP HANA (6 of 6) - Profitability Analysis in Chinese last year and learned a lot. The mentor of this course had solid professional knowledge and was very patient in answering questions.

 

I hope you don't miss this.

 

Regards,

Ning Tong

Comfort your eyes with custom themes…


Hi folks. Are you bored with the classic Eclipse look and feel? Do you want to customize the look and feel of your HANA studio? In this blog I will explain how to theme HANA studio.

 

Since HANA Studio is an Eclipse-based tool, we can make use of the available Eclipse themes. I will show how to theme HANA Studio with Eclipse Luna.


Step 1: Install Eclipse Marketplace.

 

1) Help->Install New Software..

2) Click on Add. Provide name as mpc and location as http://download.eclipse.org/mpc/luna/ . Then click OK

3) Check EPP Marketplace Client -> Next. Agree and install.

 

1st.JPG

 

Step 2: Open Eclipse Marketplace


1) Help-> Eclipse Marketplace

2) Search for the below provided themes and install.

  • Eclipse Moonrise UI Theme
  • Jeeyul’s Eclipse Themes
  • Eclipse Color Theme
  • Color IDE Pack

Note: Color IDE Pack is to theme the editors.


122.JPG


Step 3: Restart HANA Studio


Step 4: Apply the theme.


1) Go to Window->Preferences->General->Appearance

2) Select the desired theme and Apply.


111.png


Step 5: Coloring the Editor


1) Expand Appearance by clicking on it.

2) Select Color Theme

3) From the selection window choose the desired editor theme.


11111.JPG


Note: If you are using the Luna version, you will have a choice between Dark and Light themes without installing any custom themes.


Cheers,

Safiyu

SAP HANA Hands on tests ( part 1 ) : HANA DB installation


Hello,

 

This is a blog series about some hands-on training I'm performing in-house on SAP HANA.

In this 1st part I'm just sharing some information about the installation of the HANA DB in a VMware 5.5 ESXi virtual machine.

As a starting point I followed the requirements found in these two blog posts:

 

SAP HANA Installation in Oracle VirtualBox

 

How to install the HANA server software on a virtual machine

 

I also followed these SAP notes:

 

http://service.sap.com/sap/support/notes/1944799

http://service.sap.com/sap/support/notes/2001528

http://service.sap.com/sap/support/notes/2000003 ( this one gives a lot of information on different topics regarding SAP HANA )

 

 

My configuration is as follows :

 

"Hardware" : Using a VMware 5.5 esxi VM with

     - 64 Gb of RAM

     - 232Gb for disks ( 32Gb for the OS / 200Gb for the HANA filesystem ).


Note :

if you plan to use the HANA DB standalone, it appears that in my configuration 28 GB of RAM is OK; 24 GB appears to be the strict minimum.

As I also installed an ECC6 EHP7 server on the same box, I ran into some lack-of-memory trouble and had to upgrade to 64 GB to avoid any issues.

 

 

OS: SUSE Linux SLES 11 SP3.

HANA DB version : 1.00.82.00.394270

 

Of course, this hardware setup is not certified by SAP and should not be used in production (or for anything customer-related anyway :-) ), but it will do the trick for "lab" testing purposes.

 

I won't go into much detail here about the installation process, as the two blogs mentioned above give really good details about it, even if they were performed on previous SAP HANA versions.

 

The main steps are as follows :

 

1 - Build up your VM on your VMware ESXi infrastructure.

     Here is the setup I used:

 

vmproperties1.png

 

disk 1 :

 

disk1.png

 

 

disk 2 :

 

disk2.png

 

2 - Start it up and make it boot from the SLES 11 SP3 ISO in order to perform the OS installation.

Follow the SLES installation wizard.

 

3 - Configure your VM (additional OS requirements / network / file systems)

For the FS layout I followed the one shown in How to install the HANA server software on a virtual machine.

 

4 - Install the HANA database

 

The thing that differs here is that I could use the hdblcm tool.

I used the HANA installation DVD: 51048744.

 

I used the basic installation parameters as follows :

 

hana_inst1.png

Note :

 

For some reason, the sdbrun and install.sh scripts in the extracted installation material were not executable.

I had to set the execute bit on them (e.g. chmod +x sdbrun install.sh). Everything ran fine after that.

 

 

These are the components that can be installed :

hana_inst2.png

hana_inst3.png

I selected all the components except the HANA studio. Depending on the deployment you wish to perform, you can choose not to install everything.

hana_inst5.png

 

hana_inst6.png

 

hana_inst7.png

hana_inst8.png

 

hana_inst9.png

 

hana_inst10.png

 

hana_inst11.png

Summary screen :

hana_inst12.png

 

 

Installation follow up screen :

 

hana_inst13.png

 

The end :

 

hana_inst14.png

 

Then you have a HANA DB ready for test usage!

HANA Effect Podcast #7: Commercial Metals Strategic HANA Roadmap


Bruce Weinberg, Senior Director, IT Global Shared Services at Commercial Metals Company, shares the strategic perspective on their ambitious analytics & HANA roadmap. Hear how CMC uses real-time analytics to innovate in the commodity steel industry.

We hope you enjoy hearing CMC’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please contact me.

 

HE7.jpg

 

Transcript: SAP HANA Effect Episode 7

 

Sponsored by:

[Intel Xeon logo]


HANA Accelerators - make the side-car your main car


Making the HANA Side Car your Main Car

 

Let’s face it.  If you work in the SAP world and haven’t heard about HANA, you’re living under a rock (or maybe still on R/2?).  In the past three years, every SAPPHIRE, ASUG and TechEd (sorry, d-Code) event has focused primarily on HANA and its benefits.  SAP is now even rewriting and pushing down its ABAP code to HANA to take advantage of its capabilities.  S/4HANA and Simple Finance have the capability of drastically simplifying an organization’s data models.  However, a number of customers have purchased HANA as their BI solution but are not quite ready to take the leap to these solutions. That’s where the HANA Accelerators come in.  The HANA Accelerators (aka the HANA side-car) allow customers to leverage their HANA investment like never before.  By redirecting the selects from their standard database to HANA, significant performance improvements can be gained.  And the beauty of this solution is that the implementation time is very fast, saving you money and improving your ROI.

 

The Why

 

If you are like most of my customers, your system performance was probably great when you first went live – especially if you did a phased rollout.  However, as time has passed, you’ve likely added additional organizations, new processes and more data to the system.  While the old saying ‘disk is cheap’ may be true, getting to the data on the disk becomes more expensive the larger the dataset.  At least, that’s the way it’s been with traditional databases.

With the introduction of HANA, SAP has revolutionized the way that data is stored and accessed.  Through the use of the column-store database, the data within the database is compressed significantly – in most cases by a factor of 7 or more (by that I mean, 7 terabytes of row-store data becomes 1 terabyte of column-store data). Once the data is compressed, the amount of RAM needed to put the data into memory is much less.  Accessing data in RAM is vastly more efficient than having to access it via disk – even with solid state drives.

But to me, this isn’t even the greatest benefit – at least not from a business perspective.  Since the data is stored in columns, every column essentially becomes an index.  This has two incredible benefits.  First, there is no need to have secondary indexes on your tables.  This alone can save a huge amount of disk space.  Second, it means I can now access my data using any combination of fields I want.  From an end-user perspective, this is huge.  I am no longer limited to the indexes someone else chose for me.  And what that really means is that not only can I analyze my data in ways which meet my exact business needs, but I can even design the system in ways that were never possible.

 

The What

 

While Suite on HANA is starting to gain traction, many customers would like to see a better return from their HANA investment today.  In order to do this, SAP has delivered two Accelerator products.  The most commonly known product is the ERP Accelerators. The ERP Accelerators are a collection of specific points in SAP’s standard code, whereby a redirect to HANA is performed.  The ERP Accelerators were delivered as part of the standard system with SAP Note 1620213 (and subsequent additional notes).  Below is a list of just a few of the delivered accelerators.

 

Table 1 - SAP Delivered ERP Accelerators

 

Accelerator                           Transaction
FI Line Item Browsers                 New
Report Writer/Painter                 Existing
CO Assessments/Distributions          Existing
Asset Accounting Reporting            Existing
Material Ledger – Price Analysis      Existing
Material Document List                Existing
Public Sector – AVC report            New
Federal Government – Payments         Existing

 

In addition to these, there are a number of other accelerators, and SAP continues to add to this list.  The accelerators can be categorized into one of three areas:

 

  • Reporting – These accelerators are used to select data quickly from the standard line item tables like BSEG, GLFLEXA, PSMGLFLEXA, ANEP, COEP, etc.
  • Transactional – These accelerators are used within the processing of transactions like posting an FI document or running CO Allocations
  • Interface – These are primarily BW related where a Virtual Infoprovider can be setup from BW

 

The second Accelerator product is the Business Application Accelerators.  This product allows you to redirect any of your own custom programs. The product is an add-on to the standard system and must be requested via information in SAP Note 1694697.

 

One additional transaction of note is the General Table Display.  If you’ve been using SAP long enough, you’ve probably used transaction SE16.  SAP has delivered a new transaction – SE16H. This new transaction allows you to select your secondary database connection and query data directly within the ERP system.  Additionally, this transaction allows you to do a left outer join with another table from your standard database or from HANA.

 

The How

 

ERP Accelerators

 

While most customers are unaware of it, SAP has provided the ability to connect to a secondary database from within the ERP system since the days of R/3 4.0B.  The transaction DBCO allows users to create a connection to another database by providing the IP address of the database server and login credentials. The HANA Accelerators utilize this secondary connection to redirect the selection of data from the standard installed database to HANA.

 

Figure 1 - DBCO Secondary Connection

 

ERP_DBCO.JPG

 

Of course, the data first needs to reside in HANA in order to utilize this redirection.  This is where the System Landscape Transformation (SLT) product is essential.  By using SLT, data can be replicated from the ERP system in near real time.  This allows the functionality to be used not only with reports, but also with transaction processing.

SLT is a product that creates triggers at the database level which are executed anytime an Insert, Update or Delete occurs on a specified table. The trigger then populates a shadow table with the key of the table to keep a record of which entries need to be transferred.  The SLT system then copies the record from the ERP system to HANA using RFC connections. This entire process usually takes less than a second to go from the update in ECC to the update in HANA.  SLT does allow for more advanced ETL capabilities, but this is the basic concept.

One thing to note – in order to use the accelerators for a specific table, the structure of the table must be the same in both systems.  I’ll even take this one step further – I believe the data should be a mirror of each other (e.g. no transformations).  My reasoning here is simple – if I run a report that is not accelerated and compare it against one that is, there should be no difference in the results.
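
Conceptually, the shadow-table trigger looks something like the sketch below. This is not actual SLT-generated code – SLT generates its own logging tables and triggers on the source database – and the logging table name is invented; it just illustrates recording the key of each changed row, shown here in HANA trigger syntax:

CREATE TRIGGER "ZSLT_BSEG_INSERT" AFTER INSERT ON "BSEG"
REFERENCING NEW ROW newrow FOR EACH ROW
BEGIN
-- record only the key of the changed row; the replication job later
-- reads the logged keys and copies the full rows to HANA via RFC
INSERT INTO "ZSLT_LOG_BSEG"
VALUES (:newrow.MANDT, :newrow.BUKRS, :newrow.BELNR, :newrow.GJAHR, :newrow.BUZEI);
END;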

 

Once the basic setup is complete for connecting and replicating to HANA, it’s time to view each accelerator. In transaction HDBC (or HDBS), you can view each accelerator to determine the functionality and requirements to activate.  Each accelerator can be activated across your whole system or by user id.  Further, the accelerator can be activated and deactivated very quickly in case you run into issues.

 

Figure 2 - ERP Accelerator - General Settings

 

HDBC_GL_LIB_General.JPG

 

The key to being able to activate an accelerator is the existence of the necessary tables in HANA with the structures identical.  Some accelerators have as few as one table required, while others have a large number. In figure 3 below, the General Ledger Line Item Browser with its new transaction FBL3H requires five tables from the ERP system as well as a new generated view that can be created from this screen. The generated views are created in the ERP system as well as the HANA system – both without data.

 

Figure 3 - ERP Accelerator - Replication Tables

 

HDBC_GL_LIB.JPG

 

Business Application Accelerators

 

The BAA is an add-on product that must be applied via transaction SAINT.  The component name is SWT2DB.  To use this product, an XML file is created which defines the program and table which will be redirected.  Once the XML is uploaded, a database connection (from transaction DBCO or DBACOCKPIT) is assigned to the scenario.  The scenario is then activated and all future selects are redirected.

 

In my experience, the BAA is just as important (if not more so) than the ERP Accelerators.  With this component, we have enabled numerous reports and interfaces to run significantly faster than before.   Let’s face it – SAP developers do a pretty good job of writing efficient code, while other developers do not have the quality assurance and performance mindset of those in Walldorf or Palo Alto.  At most of my clients, the long-running programs are typically the ones that start with a Z.  As such, some of the largest gains we’ve seen have come through the acceleration of reports, extracts and interfaces via the BAA.

 

The Results


Now if you’ve made it this far, you must be wondering about the results.  These products have truly transformed some of my clients’ experiences. The end users no longer have to wait minutes and hours for reports, the O&M team does not have to stay up all night monitoring jobs, the database administrators are happy that the primary database is not dragging, and the functional team is free to organize the data any way they want.   Some specific examples of what we’ve seen are:

 

  1. One interface was running for over 15 hours.  After acceleration via the BAA, it ran for 9 minutes.
  2. The Cost Allocation program was running for over 24 hours at the end of the month – it now runs in 20 minutes.
  3. The Vendor Line Item Browser would timeout and never return the data for some vendors – now it runs in less than 1 minute.
  4. Depending on how some users ran reports, the system would either timeout (if in foreground) or run for days (in background).  Now the users can select their data however they want and the data is returned in seconds.

 

The additional speed is nice, but it really comes down to what happens when you attain this level of performance. If a report takes 5 minutes to return the data, the user will hopefully do some other work – but they may just take a short break.  If it takes 3 hours, they’ll definitely do other things, but they’ll also rarely rerun that report based on their findings.  If the report returns the data in 30 seconds, the user can do something actionable with that information (like post a correcting entry) and rerun the report to analyze the data again.  Providing a system that works for your users, improving their performance, improving the overall system performance and ensuring increased satisfaction and acceptance of the system are all possible with the HANA Accelerators.

OData Service Definition Modification Exit for Tables with an IDENTITY Column


Recently I was using a table in HANA with an IDENTITY column, a feature just added in HANA SPS 08. Lars Breddemann has a great blog post on that here. My table had a primary-key ID that was an IDENTITY field, which HANA automatically increments on INSERT (i.e. PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY).

 

The SQL for the CREATE looked like this:

 

CREATE COLUMN TABLE SENSOR_READING (

   SENSOR_READING_ID INTEGER   PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,

   SENSOR_ID         INTEGER   NOT NULL,

   READING           DOUBLE    NOT NULL,

   READ_DATE         TIMESTAMP NOT NULL

);
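
With plain SQL the IDENTITY column behaves as you'd expect – omit it on INSERT and HANA generates it (a quick check, with made-up values):

INSERT INTO SENSOR_READING (SENSOR_ID, READING, READ_DATE) VALUES (1, 10.5, CURRENT_TIMESTAMP);
SELECT * FROM SENSOR_READING; -- SENSOR_READING_ID is filled in automatically

As we'll see, it's the OData layer that insists on a value for every column.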

 

I initially used a very basic mapping in my sensor_reading.xsodata file:

 

[sensor_reading.xsodata]

 

 

service namespace "workshop.workshop_i826714"{

  "I826714"."SENSOR_READING" as "sensorReading";

}

 

This normally works great, and the OData service definition is simple and short. Initially I tried to POST/CREATE to the OData service with values for only 3 of the 4 columns (SENSOR_ID, READING, READ_DATE), as I assumed the PK (SENSOR_READING_ID) would be auto-generated.

 

Unfortunately, when testing this out I was receiving "400 Bad Request" errors. The full response is below:

 

<?xml version="1.0" encoding="utf-8" standalone="yes"?>

<error

    xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">

    <code/>

    <message xml:lang="en-US">The serialized resource has an missing value for member &#x0027;SENSOR_READING_ID&#x0027;.</message>

</error>

 

This happened because I was sending this JSON (only 3 columns):

 

{

    "SENSOR_ID": 1,

    "READING": "10",

    "READ_DATE": "/Date(1425487218000)/"

}

 

What the OData service wanted was this JSON (all 4 columns), regardless of whether one of the columns in the table was marked as IDENTITY:

 

{

   "SENSOR_READING_ID" : 1,

    "SENSOR_ID": 1,

    "READING": "10",

    "READ_DATE": "/Date(1425487218000)/"

}

 

*** NOTE: I also tried sending a <NULL>, thinking that maybe the service would replace it on CREATE, but that didn't work either. Besides having the column/value enumerated in the JSON, since the column was marked as PRIMARY KEY, it also couldn't be <NULL>. It had to be a non-null value. ***

 

 

After searching for ideas from other HANA/OData experts, what I resorted to was a modification exit with XS JavaScript via the OSDL in my *.xsodata file. In this manner I was able to send the IDENTITY column (SENSOR_READING_ID) with a 'dummy value' and then, in the *.xsodata file, call out to a JavaScript function that does the SQL INSERT locally, ignoring the 'dummy value'.


The final *.xsodata file looked like this, using the OSDL syntax for the "using" keyword. (The documentation says this can be used for 'create', 'update' and 'delete'; I only tried 'create'.)

 

[sensor_reading.xsodata]

 

service namespace "workshop.workshop_i826714"{

  "I826714"."SENSOR_READING" as "sensorReading" create using "workshop.workshop_i826714a:sensor_reading_create.xsjslib::sensor_reading_create";

}

 

The OSDL syntax to bind XS JS to specific entity modification is:

 

<Package.Path>:<file>.<suffix>::<XSJS_FunctionName>

 

So a sample would look like this:

 

service { "sample.odata::table" as "Table" update using "sap.test:jsexit.xsjslib::update_instead";}

 

 

In the *.xsjslib file, the XS JS function gets the single-row table via the param and then builds a prepared statement to INSERT into the real table. The param has a property "afterTableName", which holds the name of the temporary table containing the column values after the modification.

 

[sensor_reading_create.xsjslib]

 

function sensor_reading_create(param) {

    let after = param.afterTableName;

 

    let pStmt = param.connection.prepareStatement('select * from "'+after+'"');
    let rs = pStmt.executeQuery(); // read the single staged row provided by the OData layer

 

    if (rs.next()) {

        pStmt = param.connection.prepareStatement('insert into "I826714"."SENSOR_READING"("SENSOR_ID", "READING", "READ_DATE") values(?, ?, ?)');

 

        pStmt.setInteger(1, rs.getInteger(2));     // SENSOR_ID (result column 1 is the dummy SENSOR_READING_ID, which we skip)
        pStmt.setDouble(2, rs.getDouble(3));       // READING
        pStmt.setTimestamp(3, rs.getTimestamp(4)); // READ_DATE


        pStmt.executeUpdate();
        pStmt.close();


        }

 

    rs.close();

}

 

 

Really this just allows me to call SQL INSERT in XS JS instead of using the automatic *.xsodata service mapping.

 

 

 

The ADVANTAGES:

  • Code works with IDENTITY columns. It gets by the OData column/entity requirement issues.
  • Developers can also add more custom logic and additional SQL of their own.

 

The DISADVANTAGES:

  • The PK column dummy value must be sent to the server. Extra overhead/resources wasted over the wire.
  • Extra overhead @ HANA to pause and call out to the XS JS function. Not sure exactly how much, but I assume it's going to be slower.
  • A lot more code/work if you want to create an OData service definition for tables with an IDENTITY column.

 

 

In summary, this will work, but I'm not sure it's ideal. I'm wondering if anyone knows of a better solution, such as an OSDL keyword that would allow columns to be tagged as not required on the POST/CREATE or POST/UPDATE for an OData service. This would save a lot of the hassle of creating your own XS JS function. If you know of a solution, please reply in the comments!

 

If anyone has any corrections, suggestions or other feedback, please post as well.

Thanks,

w.

 

 

You can find more about modification exits with XS JavaScript, and OSDL in general, in Section 7.1.6 of the SAP HANA Developer Guide.

 

Special thanks to David Fishburn and Marcus Pridham who also helped me debug this.

SAP HANA Hands on tests ( part 2 ) : HANA DB Standby


Hello,

 

In a previous blog I gave a brief overview of SAP HANA installation on VMware ESXi ( SAP HANA Hands on tests ( part 1 ) : HANA DB installation ).

I had one HANA box installed, with a HANA DB running on it and an SAP ECC EHP7 instance as well (not a recommended setup):

 

hana_scalout2.png

 

Now, I have replicated my first box using VMware functionalities and cleaned up the HDB and ECC on it in order to have a brand-new HANA box.

The setup is the same.

I have configured NFS between my two boxes (hdbtest1 and hdbtest2).

 

Let's configure the HANA standby:

 

Mount the /hana directory from hdbtest1 to hdbtest2:

 

hana_scalout1.png

 

Install the required HANA software on the scale-out node using the embedded hdblcm tool.

 

This time I used the hdblcm "text mode" (a bit R3setup-nostalgic):

 

 

SAP HANA Lifecycle Management - SAP HANA 1.00.82.00.394270

**********************************************************

 

System Properties:

HTL /hana/shared/HTL HDB_ALONE

        HDB00  version: 1.00.82.00.394270  host: froxhdbtest  plugins: lcapps,afl

 

 

Enter Root User Name [root]:

Enter Root User Password:

 

Collecting information from host 'froxhdbtest2'...

Information collected from host 'froxhdbtest2'.

 

 

Options:

  Index | Listen Interface | Description

  ---------------------------------------------------------------------------------------------

  1     | global           | The HANA services will listen on all network interfaces

  2     | internal         | The HANA services will only listen on a specific network interface

 

NOTE: for testing purposes I selected option 1. But in a production setup I'd probably use a dedicated NIC in order to put the HANA inter-process communication on a dedicated network.

 

Select Listen Interface / Enter Index [1]:

Enter System Administrator (htladm) Password:

Enter Certificate Host Name For Host 'froxhdbtest2' [froxhdbtest2]:

 

 

Summary before execution:

=========================

 

 

Add Additional Hosts to SAP HANA System

   Add Hosts Parameters

      Installation Path: /hana/shared

      SAP HANA System ID: HTL

      Install SSH Key: Yes

      Root User Name: root

      Listen Interface: global

      Enable the installation or upgrade of the SAP Host Agent: Yes

      Certificate Host Names: froxhdbtest2

   Additional Hosts

      froxhdbtest2

         Storage Partition: <<assign automatically>>

 

 

Do you want to continue? (y/n): y

 

 

Adding additional host to SAP HANA Database...

  Adding additional host...

  Adding host 'froxhdbtest2'...

Registering HANA Lifecycle Manager on remote hosts...

Regenerating SSL certificates on remote hosts...

Deploying Host agent configurations on remote hosts...

Updating Component List...

SAP HANA system : Add hosts finished successfully.

Log file written to '/var/tmp/hdb_hdblcm_add_hosts_2015-03-06_14.47.56/hdblcm.log'.

 

 

Now you have an additional host ready to take over in case of a first-node failure:

 

hana_scalout3.png

 

hana_scalout4.png

Of course, in this particular case where hdbtest1 also holds the HDB storage, I would run into trouble if I totally lost it.

So I will reverse the roles: hdbtest2 will be master and hdbtest1 will be slave:

 

hana_scalout7.png

 

Now let's play with the standby feature:

 

I have my ERP instance running, with an SGEN run going on in order to generate some load.

 

hana_scalout5.png

 

Here is the HDB system status before the poweroff :

 

hana_scalout8.png

 

I poweroff the hdbtest2.

We can see the system hdbtest2 going down. The HANA admin console shows some error messages:

 

hana_scalout9.png

 

The work processes on the ECC instance are switching to reconnect status, and the transactions are being rolled back:

 

hana_scalout6.png

 

The hdbtest1 host should take the lead after a while.

Now that the master host on hdbtest2 is lost following the poweroff, we can see the hdbindexserver initializing on hdbtest1:

 

hana_scalout10.png

The services are now started on hdbtest1.

 

hana_scalout12.png

 

In my ECC instance I can see that the work processes have reconnected to the HDB.

 

hana_scalout13.png

hana_scalout15.png

The ECC server got out of reconnect status. The system is available again after a few minutes.

The SGEN run is of course canceled, but I can resume it:

 

hana_scalout16.png

ECC is O.K. The database is available through the hdbtest1 node, which was the former standby host.

We can see that hdbtest2 is now shown as Inactive.

hana_scalout17.png

hana_scalout18.png

 

Now we can restart the hdbtest2 host. No failback should happen.

 

hana_scalout19.png

This is also O.K. The HDB continues to work on hdbtest1. Host hdbtest2 is back in the setup.

 

We are now fully back online.

The system was out for only a few minutes.

The test is O.K.

The next step will be to apply patches using this set up.

SAP HANA Webcast: Monitoring Essentials featuring Bradmark Surveillance - Register Today!


Run powerful real-time monitoring that supports your real-time in-memory investment.


Register Here!

 

March 24th 2015 @ 10:00 PT / 1:00 ET

 

Join us as we welcome guest speaker Dan Lahl, vice president, product marketing, SAP, who will discuss how IT organizations run transactional and analytical applications on a single in-memory platform, delivering real-time actionable insights while simplifying their IT landscape.  During this one-hour webcast, Bradmark’s Edward Stangler, R&D director, HANA products, will showcase key essentials for effectively monitoring SAP HANA, including:

 

  • Tracking Key SAP HANA Features
    • Top Column Store Tables.
    • Status / progress on delta merges and full / partial loads.
    • Memory breakdown.
  • Reviewing Overall Health
    • CPU usage, volume I/O, memory breakdown, and instance information.
  • Familiar Metrics for Experienced DBAs
    • Statements.
    • Memory usage.
    • Space Usage.
    • Operations and transactions.
    • Connections. 
    • Network I/O, app tier, login, current SQL and SQL plan, etc. 
  • Alerting on HANA Resources
    • Space usage in volume data / log, long-running statements / transactions / SQL, delta growing too large (not merged fast enough) for column store tables, and more.
  • Flashback on Recent HANA Problems
    • Viewing historical data through real-time UI.

 

Register Today... to join us for this informative event.

 

And learn how Bradmark's Surveillance for SAP HANA satisfies an organization’s system management requirements across the SAP HANA computing platform, so you can maintain a production-ready environment and your peace of mind.

 

We look forward to seeing you online!

Installing Two ABAP Systems on Separate Tenants in SAP HANA SPS 09


Over the last few months, I’ve been trying out various scenarios involving the new multitenant database containers in SAP HANA SPS 09, and I thought it might be helpful to share my findings and examples with others who want to get their feet wet with this new feature. So here goes…

 

“Multitenant database containers” is a bit of a mouthful, so for the rest of this article I’m going to use the abbreviation MDC.

 

 

 

The first scenario I tested was the installation of SAP HANA SPS 09 with MDC, followed by the installation of two ABAP systems on two HANA tenants: 

Scenario_1_MDC_install_2_ABAP_systems_corrected_cropped.jpg

 

 

Installing SAP HANA with multitenancy

 

I started by installing HANA with MDC using hdblcmgui. The installation procedures are well documented on SAP Help Portal, so I won’t go into all the details here. The only thing you do differently from a standard installation is change the database mode from single_container (the default) to multiple_containers:

 

HANA_installation_with_MDC.jpg


The result is a system database but no tenant databases inside a HANA system that supports multiple database containers. For the distinction between a tenant database and the system database, see the SAP HANA Master Guide.


I then added my system database to the Systems view in the SAP HANA studio:

Add_system_MDC_system_DB.jpg

 

 

Once I was logged on as the administrator of the system database, I was able to create a tenant database in the SQL console using the CREATE DATABASE statement:
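
The statement takes the tenant name plus an initial password for the tenant's own SYSTEM user; the values here are placeholders:

CREATE DATABASE DB1 SYSTEM USER PASSWORD Initial1;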

Creating_tenant_DB.jpg

Created_tenant_DB_confirmation.jpg

 

 

I added the tenant database in the Systems view:

Add_system_MDC_tenant_DB.jpg

 

Then I created and logged on to a second tenant. The Systems view in the SAP HANA studio then looked like this:

 

Added_systems_in_studio_MDC.jpg

 

The system database had an additional SYS_DATABASES schema:

SYS_DATABASES_schema_in_system_DB.jpg

The SYSTEM user of the system database has the privilege DATABASE ADMIN for the execution of operations on tenant databases.

DATABASE_ADMIN_privilege.jpg

 

 

Installing NetWeaver on a HANA database tenant

 

The software provisioning manager SP 7 provided with SL Toolset 1.0 SPS 12 supports MDC, so I was able to install an SAP NetWeaver 7.4 SR 2 on each of the tenants. This involved specifying the name of the tenant database with the tenant database’s administrator password, as well as the password of the system database administrator. These steps are described in detail, with screenshots, in Stefan Seemann’s blog: http://scn.sap.com/community/it-management/alm/software-logistics/blog/2014/12/02/software-provisioning-manager-and-hana-multi-tenant-database-containers

 

Installation of the HANA client was part of the same procedure.

 

 

 

Stopping and starting tenant databases

 

Having backed up the tenant databases, I then stopped one of them from the SQL console of the system database:
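
The statement runs against the system database and takes the tenant name; its START counterpart works the same way (a sketch, with DB2 being the tenant from this example):

ALTER SYSTEM STOP DATABASE DB2;
ALTER SYSTEM START DATABASE DB2;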

Stopping_tenant_DB.jpg

To open the administration console of the stopped tenant, I was prompted to log on with the credentials of the operating system user:

Logging_on_as_sidadm_user.jpg

It baffled me somewhat that the administration console of the stopped tenant database (DB2) should show the index server of the tenant (DB1), but it’s because the operating system user (the “SID user”) can currently see the processes of all database containers in this view.

 

Tenant database DB2 in the process of stopping:

Tenant_DB_DB2_stopping.jpg

Tenant database DB2 when stopped:

Tenant_DB_DB2_stopped.jpg

 

 

Development has told us that improved visibility and transparency of the processes for different database containers is in the pipeline.

 

Enabling HTTP access to tenant databases

 

I also enabled HTTP access to the individual tenants, but more about that in my next blog.

Migrating ERP and BW on HANA to a Single HANA System with Multitenant Database Containers


Having successfully installed HANA with multitenant database containers (see my previous blog), I wanted to find out if everything would run just as smoothly in the case of an update to SPS 09 with conversion to multitenant database containers. As in my first blog, multitenant database containers are abbreviated to MDC.


My starting point was a HANA database on revision 80 with SAP EHP 6 for SAP ERP 6.0, version for SAP HANA, running on top of it. The BW system was running on a separate HANA that was still on revision 70. The idea was to get the ERP and BW systems running on two tenants in the same HANA.

1_Scenario_2_update_with_MDC_running_ERP_BW.png

Updating to SPS 09

 

I downloaded the latest software components from SAP Service Marketplace using the SAP HANA studio (at the time, this was revision 92), and then prepared the software archive for the update before executing hdblcmgui. All this is well described in the SAP HANA Server Installation and Update Guide.

 

Don’t be put off by the fact that you don’t see an option to migrate to MDC in the update wizard, as we did in the installation procedure. The conversion to MDC is a post-installation step (see section "Converting to MDC" below). And actually this makes sense, because many customers will want to introduce MDC only after they have been working with the new support package stack for a while. The update to SPS 09 from a lower support package stack is always from a single container to a single container.

 

 

We ran into a few minor issues at operating system level, which were solved by ensuring that we had upgraded to the versions recommended in SAP Note 1944799 (our system landscape hadn’t been updated for a while). We also migrated to CommonCryptoLib as described in SAP Note 2093286.

 

More serious was the fact that the ERP system wouldn’t start once the update had finished. This was because the new 3-digit HANA revision codes were not recognized:

2_Error_in_ERP_after_update_to_SPS_09.jpg

According to SAP Note 1952701, we needed 740 Patch Level 48, but unfortunately this version was no longer on SAP Service Marketplace, so we ended up upgrading the kernel from 740 to 741.

 

 

 

Converting to MDC

 

The conversion from a single database container to multitenant database containers worked as described in the documentation.  Make sure you don’t forget any of the pre- or post-conversion steps, and migrate - don’t remove - the statistics server.

For an example with screen shots, see this blog post by N. van der Linden:  http://scn.sap.com/community/developer-center/hana/blog/2014/12/17/convert-to-a-multi-tenant-hana-database .

 

The result is one system database and one tenant database inside a HANA system that supports multiple database containers (as opposed to installation with MDC, which gives you only the system database). The system ID and the name of the tenant database are the same: in our example, HN1. 

3_Converted_system_DB_and_tenant_in_studiio.jpg

 

We were gratified to see the schema of our ERP system in the catalog of the tenant database, and not under the system database:

4_ERP_catalog_in_ERP_tenant_following_conversion_to_MDC.jpg

 

We now started the ERP system, this time without issues.

 

The only issue we did notice was that the repository roles were missing in the system database: 

5_Roles_in_system_DB_after_conversion.jpg

 

 


It turned out that the problem had been caused by our shutting down the database while the thread ImportOrUpdateContent was still active in the system database (visible on the Performance tab):  

6_Thread_ImportOrUpdateContent.jpg

This thread was triggered as part of the conversion to MDC, when the command hdbnameserver –resetUserSystem was issued. The consequences can sometimes be more serious, so make sure you wait for the import of the delivery units to finish before shutting down. For more information, see SAP Note 2136496. As of revision 94, ImportOrUpdateContent will no longer be triggered by this command. Moreover, development has told us that it plans to reduce the delivery unit import time to a fraction of what it is in SPS 09.

 

If you have other issues when converting to MDC, please consult SAP Notes 2140297 and 2140344.

 

 

 

Transferring the BW system to the same HANA as the ERP system

 

We started by creating a second tenant in the target system. We gave it the same name as the source system SID (HB1), but there is no technical reason why you have to do this. It was then necessary to update the source system to SPS 09 and convert it to MDC before backing up the tenant in the source system and recovering it into the target tenant and system. SAP HANA database backup and recovery is explained in the SAP HANA Administration Guide in SAP Help Portal. Thus, the process was as follows (a sketch of the backup and recovery statements follows the list):

  1. Update target system to SPS 09.
  2. Convert target system to MDC.
  3. Create second tenant in target system.
  4. Update source system to SPS 09.
  5. Convert source system to MDC.
  6. Back up tenant in source system.
  7. Recover this backup into the target system tenant created in step 3.
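
Steps 6 and 7 are both run from the SQL console of the respective system database. A minimal sketch, assuming a file-based backup and our tenant name HB1 (the target tenant must be stopped for the recovery):

BACKUP DATA FOR HB1 USING FILE ('HB1_MIGRATION');
RECOVER DATA FOR HB1 USING FILE ('HB1_MIGRATION') CLEAR LOG;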

 

One thing to note is that once we had done the recovery, the password of the tenant’s SYSTEM user reverted to what it had been in the source system, overwriting the password we had specified when creating the tenant in the target system. This is normal system behavior. For more information about the passwords of MDC systems, see the documentation.

 

The next step was to update the SAP HANA database client of the ERP as well as the BW system with hdbsetup.

 

Then, before restarting HANA or our BW system, we reconfigured the connection from BW to the new HANA database with the <SID>adm user using hdbuserstore. In the screen shot below, the turquoise rectangle represents the fully qualified domain name of the original HANA system and the yellow rectangles represent the fully qualified domain name of the HANA multitenant system.

8_hdbuserstore.jpg

 

For more information about hdbuserstore, see the documentation here.

 

In the above example, the SQL port of our BW tenant is 30041 because the instance number is 00 and 3<instance number>41 is the first SQL port assigned to a manually created tenant.
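
If you're not sure which ports a tenant was assigned, you can look them up from the system database, whose SYS_DATABASES views span all tenants:

SELECT DATABASE_NAME, SERVICE_NAME, SQL_PORT FROM SYS_DATABASES.M_SERVICES;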

 

For more information about the ports of multitenant database containers, see the documentation here.  Note, in particular, that the ports for a converted tenant database are different from those of tenant databases that are added subsequently.

 


Enabling HTTP access to the correct database container

 

We now configured the internal SAP Web Dispatcher so that it would know which HTTP requests to send to which database container from the Web-based applications running on the XS engine.

 

Originally, we set up IP addresses for each tenant database but this is not necessary; it works fine with DNS alias host names.

9_webdispatcher.ini.jpg

The first entry (wdisp/system_0) initially looked like this:

SID=$(SAPSYSTEMNAME), EXTSRV=http://localhost:3$(SAPSYSTEM)08, SRCURL=/

This entry is for the converted tenant which, in our case, is the tenant on which the ERP system runs.

We changed it as follows because we required additional entries:

SID=$(SAPSYSTEMNAME), EXTSRV=http://localhost:3$(SAPSYSTEM)08, SRVHOST=<fqdn>

 

We added a second entry with the DNS alias for the BW tenant (wdisp/system_1) and another entry with the DNS alias for the system database (wdisp/system_3).

 

 

We also updated the XS properties for each of our database containers in order to be able to open and work with the SAP HANA cockpit from the SAP HANA studio.


System database: 
10_XS_properties_DNS_alias_system_DB.jpg


Converted tenant database (on which ERP runs):

11_XS_properties_ERP_tenant.jpg


Created tenant database (on which BW runs):

12_XS_properties_DNS_alias_of_created_tenant.jpg

 

You can find full step-by-step instructions on how to configure HTTP access to multitenant database containers in SAP Help Portal.

HANA Effect Podcast #8: Commercial Metals Rolls out BPC on HANA


Michael Begala, Manager, Global BI & Analytics, shares how Commercial Metals Company completed their first SAP HANA project, Business Planning & Consolidation on SAP HANA, and set a path towards an integrated global data warehouse built on SAP HANA.

We hope you enjoy hearing CMC’s first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please contact me.

 

HE8.jpg

 

Transcript: SAP HANA Effect Episode 8

 

Sponsored by:

[Intel Xeon logo]


Apply today for the SAP HANA on Power Ramp-Up Program


SAP HANA has already revolutionized businesses by removing the limits of traditional database architectures and accelerating enterprise applications. With the goal in mind to provide customers choices, SAP has collaborated with IBM to enable IBM Power Systems to run the SAP HANA platform.

Customers will now have the opportunity to deploy their SAP HANA environment on IBM's enterprise-class hardware platform. We have recently opened up nominations for customers who want to participate in the first Ramp-Up wave (scope: SAP BW powered by SAP HANA), which begins March 31, 2015. Explore this opportunity today and visit the SAP on Power Ramp-Up page to apply. Additionally, you can find more information on the IBM Systems and Services for SAP HANA website.

 

About the SAP and IBM partnership:

SAP and IBM have been working together as strategic alliance partners for over 40 years, addressing the needs of thousands of joint customers and consistently helping them run their businesses. SAP and IBM have delivered comprehensive technology solutions for their customers, from the beginnings of enterprise computing to the incredible environment of today: a landscape of knowledge, connectivity, and pervasive information unimagined when they first joined forces. The strong alliance between SAP and IBM is demonstrated at over 35 dedicated facilities for collaborative development, education, solution delivery, and innovation. SAP and IBM's partnership is a comprehensive end-to-end partnership spanning IBM's hardware, software, consulting, and cloud services businesses and covering all SAP products and solutions, including Applications, Analytics, Mobility, Cloud, and SAP HANA. For more information please visit www.ibm-sap.com.


ibm_black_sap_color_small.jpeg

Introducing the SAP Automated Predictive Library


Hi,

 

You may have already heard about the recent release of SAP Predictive Analytics 2.0, but may not be aware that this also includes the SAP Automated Predictive Library (APL) for SAP HANA.

 

The APL is effectively the SAP InfiniteInsight (formerly KXEN) predictive logic optimized and adapted to execute inside the SAP HANA database itself for maximum performance - just like the SAP HANA Predictive Analysis Library (PAL) and Business Function Library (BFL).

 

Obviously when you already have data in SAP HANA it makes sense to perform heavy-duty processing such as data mining as close as possible to where the data resides - and this is exactly what the APL provides.

 

By way of comparison, the PAL provides a suite of predictive algorithms that you can call at will, as long as you know which algorithm you need, whereas the APL focuses on automating the predictive process and uses its own built-in intelligence to identify the most appropriate algorithm for a given scenario. So the two are very much complementary.

 

There are a couple of ways to take advantage of the APL. Of course, you can exploit the APL when using the SAP Predictive Analytics 2.0 desktop application, whenever you access SAP HANA as a data source. In this case usage is implicit.

 

However, it's also possible to access the APL independently of SAP Predictive Analytics 2.0. You can access the APL explicitly using SQLScript or from the Application Function Modeler (AFM) in SAP HANA Studio. And, of course, you can embed APL capabilities into your own custom SAP HANA applications.
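
To give a flavour of explicit access, here is a minimal SQLScript sketch of calling the APL "ping" function via the generic AFL wrapper mechanism. This is a sketch under stated assumptions: the schema USER_APL, the type PING_OUT_T, and its columns are illustrative stand-ins, so check the APL reference guide for the exact definitions:

--Output table type for the ping function (columns are assumed, see the reference guide)
CREATE TYPE "USER_APL"."PING_OUT_T" AS TABLE ("NAME" NVARCHAR(128), "VALUE" NVARCHAR(1024));
--Signature table describing the function interface for the wrapper generator
CREATE COLUMN TABLE "USER_APL"."PING_SIGNATURE" ("POSITION" INT, "SCHEMA_NAME" NVARCHAR(256), "TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO "USER_APL"."PING_SIGNATURE" VALUES (1, 'USER_APL', 'PING_OUT_T', 'OUT');
--Generate a callable wrapper for the APL PING function, then invoke it
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('APL_AREA', 'PING', 'USER_APL', 'APL_PING', "USER_APL"."PING_SIGNATURE");
CREATE COLUMN TABLE "USER_APL"."PING_RESULT" ("NAME" NVARCHAR(128), "VALUE" NVARCHAR(1024));
CALL "USER_APL"."APL_PING"("USER_APL"."PING_RESULT") WITH OVERVIEW;
SELECT * FROM "USER_APL"."PING_RESULT";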


We've put together a series of SAP HANA Academy hands-on video tutorials to explain how to access the APL from SAP HANA Studio using SQL Script:

 

1. Reference Guide & Download

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will introduce the SAP Automated Predictive Library (APL), download the APL reference guide, then download sample data & code and extract them for later use.

 

2. Import Sample Data & Check Installation

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will use SAP HANA studio to import the provided sample data into a SAP HANA schema, ensure the SAP HANA script server is running, and verify that the APL has been correctly installed.

 

3. Create APL User & Table Types

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will create and authorize a SAP HANA database user so that it can make use of the APL. We will also set up APL table types and test the APL using the "ping" function.

 

4. Predicting Auto Insurance Claim Fraud

In this video, part of the SAP Automated Predictive Library (APL) for SAP HANA series, we will use the APL to predict auto insurance claim fraud.

 

This example shows how an insurance company assesses past insurance fraud in order to build a profile of client characteristics that may indicate susceptibility to fraudulent claims.

 

The first step of the analysis is to prepare the main input tables, one of which contains already-analyzed data that includes some known fraud cases. This table is used to train the model, indicating which variable(s) to use as the target and describing the claims data.

 

Using this past data and the past fraudulent claims, the customer trains the APL model to produce an updated model that is then applied to the new data in order to detect potential fraud risks.

 

After training the model, the APL function returns summary information about the model as well as indicators such as the Predictive Power (KI) of the model and the Prediction Confidence (KR) of the results.

 

At the end of the data mining process, the “Apply Model” function produces scores in the form of a table that can be queried.
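
Once the apply step has run, those scores can be consumed with plain SQL. Here is a minimal sketch, in which the table name and score column are hypothetical stand-ins for the ones produced in your own scenario:

--Rank claims by predicted fraud score (names are illustrative)
SELECT "CLAIM_ID", "FRAUD_SCORE"
FROM "USER_APL"."FRAUD_SCORES"
ORDER BY "FRAUD_SCORE" DESC;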

 

 

For the YouTube playlist, follow this link: http://bit.ly/hanaapl

 

We hope these help you get started with the APL.

 

For a more in-depth discussion of the APL, do check out the excellent blog by Ashish Morzaria.

 

Enjoy!

 

Philip

SAP HANA – ASUG Pre-Conference Seminars May 4, 2015


Heading off to the SAP SapphireNow and ASUG Annual Conference in Orlando in May? Have you registered for any pre-conference seminars? The pre-conference seminars are half-day or full-day ASUG seminars available on a number of different topics. They will be held on May 4, 2015, a day before the regular conference kicks off from May 5 to 7, 2015.

 

 

Check out the following link for a listing of these seminars.

 

 

http://events.sap.com/sapandasug/en/pre-conference.html?bc=2%3

 

 

Our colleagues in the SAP HANA product management and solution management teams are preparing several seminars focused on SAP HANA. Be sure to take advantage of this opportunity! Some of the SAP HANA sessions that you will find interesting are listed below (be sure to check the above link for a complete listing of the pre-conference seminars). These will appeal to anyone interested in getting the big picture on the business value of SAP HANA as well as end-to-end coverage. In addition, two sessions are targeted specifically at application developers.

 

 

End-to-End SAP HANA Overview

Monday, May 4, 8:30 a.m. - 5:00 p.m.

 

Interested in learning about SAP HANA? Feeling overwhelmed with the depth and breadth of SAP HANA? Not sure where to start? Come join the SAP HANA product management team for a full-day, pre-conference session to get a primer on SAP HANA. SAP will provide an end-to-end overview of SAP HANA technologies, highlighting the must-have information for you to be SAP HANA ready and getting you oriented for deeper-dive sessions in the main conference. Various SAP HANA product management team members will provide coverage across the many topic areas and will be on hand to answer your queries.

 

Building the Business Case for SAP HANA

Monday, May 4, 8:30 a.m. - 12:00 p.m.

 

Understand the possible use cases for an SAP HANA implementation in an intensive, deep-dive working session, where details on determining business value and building a business case will be shared. Learn how to prioritize use cases and determine value drivers. Also, hear a customer testimonial on their experience with defining use cases, prioritizing them, and ultimately building the detailed business case.

 

Application Development Based on SAP NetWeaver Application Server for ABAP and SAP HANA

Monday, May 4, 1:00 p.m. - 5:00 p.m.

 

This session will provide an overview of how to leverage SAP HANA from SAP NetWeaver AS for ABAP applications that integrate with the SAP Business Suite. Speakers will explore concrete examples and best practices for customers and partners based on SAP NetWeaver AS for ABAP 7.4. This includes the following aspects: the impact of SAP HANA on existing customer-specific developments, advanced view building capabilities, and easy access to database procedures in the application server for ABAP; usage of advanced SAP HANA capabilities like text search or predictive analysis from the application server for ABAP; and best practices for an end-to-end application design on SAP HANA. Finally, with SAP NetWeaver 7.4, SAP has reached a new milestone in evolving ABAP, the programming language of the application server, into a modern expression-oriented programming language. The new SAP NetWeaver Application Server for ABAP features covered in this session will include inline declarations, constructor expressions, table expressions, table comprehensions, and the new deep move corresponding.

 

 

Hands-On Predictive Modeling and Application Development Using SAP HANA Predictive Analysis Library (PAL) and R

Monday, May 4, 8:30 a.m. - 12:00 p.m.

 

 

At this session, you will learn how to create a Fiori-like application. When you build your own app with SAP Web IDE, the browser-based tool for rapid application development, you will benefit from a set of proven application patterns and UI controls. You will also experience a high degree of flexibility and control when developing with SAPUI5.

 

 

If you still need to register, please do take a look at the pre-conference seminars. If you are already registered, you should be able to add pre-conference seminars to your existing registration.

 

We look forward to seeing you in Orlando in May!

SAP HANA Dynamic Tiering Doubts and Questions - Brainstorming


Q1. Since dynamic tiering is claimed to be disk-based columnar storage, we would appreciate it if someone could shed some light on why, whenever a query runs on an extended table, a remote Row Scan is used instead of a Column Search.


We understand that "remote" appears because SAP HANA treats the extended table as a virtual table.

 

Experiment A:

Select * from an extended storage table; based on the visualized plan, a remote row scan was used.

 

Experiment B:

Select a specific column from an extended storage table; based on the visualized plan, a remote row scan was used as well.

 

 

Experiments C and D: Replicate the queries from experiments A and B on a HANA columnar table. From below, we can see that Column Search was used for both queries.
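
For reference, the four experiments can be reproduced with queries along these lines, where the schema and table names are placeholders for your own extended and columnar tables:

--Experiments A and B: extended storage (dynamic tiering) table
SELECT * FROM "MYSCHEMA"."SALES_ES";
SELECT "CUSTOMER_ID" FROM "MYSCHEMA"."SALES_ES";
--Experiments C and D: the same queries on a native HANA column table
SELECT * FROM "MYSCHEMA"."SALES_COL";
SELECT "CUSTOMER_ID" FROM "MYSCHEMA"."SALES_COL";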

 

 

 

Q2. In RLV-enabled extended storage, we noticed that the delta is constantly high in log_es, as shown below.


Since RLV improves performance by enabling concurrent writes and acts like a delta store in HANA, when does the delta merge happen, and will the percentage used shrink down when the delta merge happens?

 

Q3. We realize that we can enable the "fedtrace" trace for the indexserver to show how the indexserver operates on the esstore. Is there any trace we can enable on the esserver to gain more insight into how exactly the esserver works?
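
As a starting point, and assuming "fedtrace" behaves like a regular trace component (the component name is taken from the question above), the indexserver trace level can be raised as follows:

--Raise the federation trace level on the indexserver
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('trace', 'fedtrace') = 'debug' WITH RECONFIGURE;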



All valuable inputs and questions are welcome and perhaps we can use this space as a central knowledge base for Dynamic Tiering.

 

Hopefully these questions can be answered before we consider dynamic tiering as a solution for multi-temperature data.

 

Cheers,

Nichoals Chang

HANA Effect Podcast #9: Bloomin' Brands is Cookin' Up Innovation with HANA


James Williams, Manager DBA & SAP Basis at Bloomin' Brands, explains how real-time planning and consolidation has Bloomin' Brands cookin' up innovation.

 

HE9.jpg

 

We hope you enjoy hearing Bloomin' Brands' first-hand experience with mission-critical SAP HANA.  Please let us know your feedback in the comments below.

 

To get more real-world customer HANA stories, subscribe to our iTunes or SoundCloud feed for weekly podcasts that will cover multiple in-production customer use case scenarios for SAP HANA.

 

Also, if you’ve got a killer SAP HANA scenario and would like to share it on the HANA Effect podcast, please let us know.

 

Transcript: SAP HANA Effect Episode 9

 

Sponsored by:

 

xeon_i_ww_rgb_3000
