Channel: SCN : Blog List - SAP HANA and In-Memory Computing

Microsoft ODBC Driver for SQL Server on Linux - by the SAP HANA Academy


Introduction

 

At the SAP HANA Academy we are currently updating our tutorial videos about SAP HANA administration [SAP HANA Administration - YouTube].

 

One of the topics that we are working on is SAP HANA smart data access (SDA) [SAP HANA Smart Data Access - YouTube].

 

Configuring SDA involves the following activities:

  1. Install an ODBC driver on the SAP HANA server
  2. Create an ODBC data source (for remote data sources that require an ODBC Driver Manager)
  3. Create a remote data source (using SQL or SAP HANA studio)
  4. Create virtual tables and use them in calculation views, etc.

 

As of SPS 11, the following remote data sources are supported:

 

 

  • Apache Hadoop (Simba Apache Hive ODBC)
  • Apache Spark

 

In the SAP HANA Administration Guide, prerequisites and procedures are documented for each supported data source, but the information is intended as a simple guide and you will need 'to consult the original driver documentation provided by the driver manufacturer for more detailed information'.

 

In this series of blogs, I will provide more detailed information about how to perform activities 1 and 2; that is, installing and configuring ODBC on the SAP HANA server.

 

The topic of this blog is the installation and configuration of the Microsoft ODBC driver for SQL Server on Linux.

 

 

Video Tutorial

 

In the video tutorial below, I will show you in less than 10 minutes how this can be done.

 

 

If you would like to have more detailed information, please read on.

 

 

Supported ODBC Driver Configurations

 

At the time of writing, there are two ODBC drivers for SQL Server available for the Linux (and Windows) platform: versions 11 and 13 (Preview).

 

Microsoft ODBC driver for SQL Server on Linux | SQL Server | OS (64-bit) | unixODBC | SAP HANA Smart Data Access support
Version 13 (Preview) | 2016, 2014, 2012, 2008, 2005 | RHEL 7, SLES 12 | 2.3.1 | Not supported
Version 11 | 2014, 2012, 2008, 2005 | RHEL 5, 6; SLES 11 | 2.3.0 | SQL Server 2012

 

For SAP HANA smart data access, the only supported configuration is Microsoft ODBC Driver 11 in combination with SQL Server 2012. Supported means that this combination has been validated by SAP development. It does not mean that the other combinations do not work; they probably work just fine. However, if you run into trouble, you will be asked to switch to a supported configuration.

 

Information about supported configurations is normally provided in the SAP Product Availability Matrix on the SAP Support Portal; however, so far only ASE and IQ are listed. For the full list of supported remote data sources, see SAP Note 1868209 - SAP HANA Smart Data Access: Central Note.

 

 

unixODBC

 

On the Windows platform, the ODBC driver manager is bundled together with the operating system, but on UNIX and Linux this is not the case, so you will have to install one.

 

The unixODBC project is open source. Both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) provide a supported version of unixODBC bundled with the operating system (RPM package). However, for the Microsoft ODBC Driver Version 11, the bundled unixODBC package is not supported and you will need to compile release 2.3.0 from the source code. This is described below.

 

unixODBC | Release Date | OS (64-bit) | Microsoft ODBC Driver
2.3.4 (latest) | 08.2015 | |
2.3.2 | 10.2013 | |
2.3.1 | 11.2011 | RHEL 7, SLES 12 | Version 13 (Preview)
2.3.0 | 04.2010 | | Version 11
2.2.14 | 11.2008 | RHEL 6 |
2.2.12 | 10.2006 | SLES 11 |

 

 

System Requirements

 

First, you will need to validate that certain OS packages are installed and if not, install them (System Requirements).

 

This concerns packages like the GNU C Library (glibc), the GNU Standard C++ Library (libstdc++), and the GNU Compiler Collection (GCC), to name a few, without which you will not get very far compiling software. Also, as the Microsoft ODBC Driver supports integrated security, Kerberos and OpenSSL libraries are required.

 

 

Installing the Driver Manager

 

Next, you will need to download and build the source for the unixODBC driver manager (Installing the Driver Manager).

 

  1. Connect as root
  2. Download and extract the Microsoft driver
  3. Run the script build_dm.sh to download, extract, build, and install the unixODBC Driver Manager

 

script.png

 

The build script performs the installation with the following configuration:

# export CPPFLAGS="-DSIZEOF_LONG_INT=8"

# ./configure --prefix=/usr --libdir=/usr/lib64 --sysconfdir=/etc --enable-gui=no --enable-drivers=no --enable-iconv --with-iconv-char-enc=UTF8 --with-iconv-ucode-enc=UTF16LE

# make

# make install

 

Note the PREFIX, LIBDIR and SYSCONFDIR directives. This will put the unixODBC driver manager executables (odbcinst, isql), the shared object driver files, and the system configuration files (odbcinst.ini and odbc.ini for system data sources) all in standard locations. With this configuration, there is no need to set the environment variables PATH, LD_LIBRARY_PATH and ODBCINSTINI for the login shell.
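To double-check the result, you can ask the driver manager itself where it looks for its configuration files. Both commands are standard unixODBC utilities: isql --version prints the driver manager version, and odbcinst -j prints the odbcinst.ini and odbc.ini locations that unixODBC actually uses.

isql --version

odbcinst -j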

 

 

Installing the Microsoft ODBC Driver

 

Next, we can install the ODBC driver [Installing the Microsoft ODBC Driver 11 for SQL Server on Linux].

 

Take another look at the output of the build_dm.sh script (screenshot above). Note the passage:

 

PLEASE NOTE THAT THIS WILL POTENTIALLY INSTALL THE NEW DRIVER MANAGER OVER ANY

EXISTING UNIXODBC DRIVER MANAGER.  IF YOU HAVE ANOTHER COPY OF UNIXODBC INSTALLED,

THIS MAY POTENTIALLY OVERWRITE THAT COPY.

 

For this reason, you might want to make a backup of the driver configuration file (odbcinst.ini) before you run the installation script.

 

  1. Make a backup of odbcinst.ini
  2. Run install.sh --install

 

install.png

 

 

The script will register the Microsoft driver with the unixODBC driver manager. You can verify this with the odbcinst utility:

odbcinst -q -d -n "ODBC Driver 11 for SQL Server"

 

Should the installation have overwritten any previous configuration, you either need to register the drivers with the driver manager again or, and this might be easier, restore the odbcinst.ini file and manually add the Microsoft driver.

 

For this, create a template file (for example, mssql.odbcinst.ini.template) with the following lines:

 

 

[ODBC Driver 11 for SQL Server]

Description=Microsoft ODBC Driver 11 for SQL Server

Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0

Threading=1

 

Then register the driver with the driver manager using the command:

odbcinst -i -d -f mssql.odbcinst.ini.template

 

 

Create the data source and test the connection

 

Finally, we can register a data source with the driver manager. For this, create a template file and save it as mssql.odbc.ini.template.

 

You can give the data source any name. Here MSSQLTest is used, but for production systems, using the database name might be more sensible (spaces are allowed for the data source name).

 

Driver = name of the driver in odbcinst.ini or the full path to driver file

Description = optional

Server = host (FQDN); protocol and port are optional; if omitted, tcp and 1433 are used.

Database = database name (defaults to Master)

 

[MSSQLTest]

Driver = ODBC Driver 11 for SQL Server

Description = SQL Server 2012 test instance

; Server = [protocol:]server[,port]

; Server = tcp:mo-9e919a5cc.mo.sap.corp,1433

Server = mo-9e919a5cc.mo.sap.corp

Database = AdventureWorks2012

 

Register the DSN with the driver manager as a System DSN using the odbcinst utility:

odbcinst -i -s -l -f mssql.odbc.ini.template

 

Verify:

odbcinst -q -s -l -n "MSSQLTest"

 

Test connection:

isql -v "MSSQLTest" <username> <password>

 

The -v (verbose) flag can be useful in case the connection fails, as it will tell you, for example, that your password is incorrect. For more troubleshooting, see below.
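Once connected, you can run a quick smoke test directly at the isql prompt. The statements below assume the AdventureWorks2012 sample database used in this blog; Person.Person is one of its standard tables:

SQL> SELECT @@VERSION

SQL> SELECT COUNT(*) FROM Person.Person

SQL> quit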

 

 

System or User Data Source

 

It is up to you, of course, whether to register the data source as a system data source or a user data source. As the SAP HANA server typically is a dedicated database system, using only system data sources has two advantages:

 

  1. Single location of data source definitions
  2. Persistence across system updates

 

With the data sources defined in a single location, debugging connectivity issues is simplified, particularly when multiple drivers are used.

 

With the data sources defined outside of the SAP HANA installation directory, you avoid having your odbc.ini removed when you uninstall or update the system.

 

To register the DSN with the driver manager as a User DSN using the odbcinst utility, connect with your user account and execute:

odbcinst -i -s -h -f mssql.odbc.ini.template

 

The difference is the -h (home) flag instead of the -l (local) flag.

 

Verify:

odbcinst -q -s -h -n "MSSQLTest"

 

Test connection (same as when connecting to a system data source):

isql -v "MSSQLTest" <username> <password>

 

user.png

 

Note that when no user data source is defined, odbcinst will return a SQLGetPrivateProfileString message.

 

 

Troubleshooting

 

Before you test your connection, it is always a good idea to validate the input.

 

For the driver, use the "ls" command to verify that the path to the driver is correct.

 

odbcini.png

 

For the data source, use the "ping" command to verify that the server is up and use "telnet" to verify that the port can be reached (1433 for SQL Server is the default but other ports may have been configured; check with the database administrator).
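For example, with the server used in the data source above (substitute your own host and port where they differ):

ping -c 3 mo-9e919a5cc.mo.sap.corp

telnet mo-9e919a5cc.mo.sap.corp 1433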

 

ini.png

If you misspell the data source name, the [unixODBC][Driver Manager] will return the following message:


Data source name not found, and no default driver specified

 

If you make mistakes with the user name or password, the driver manager will not complain, but the isql tool will forward the message from the database server.

 

isql.png

 

If the database server cannot be reached, for example, because it is not running, or because the port is blocked, isql will also inform you by forwarding the message from the database server. Note that the message will depend on the database server used. The information we get back from SQL Server is much more user-friendly than that from DB2, for example.

 

connect.png

 

If the driver manager cannot find the driver file, it will return a 'file not found' message. There could be a mistake in the path to the driver file.

 

notfound.png

 

Troubleshooting: SAP Notes

 

Note 2180119 - FAQ: SAP HANA Smart Data Access lists the question:


Q: After recent upgrade to SAP HANA database remote connection through SDA fails. What could be the cause?

A: If you installed ODBC drivers in your HANA exe directory as per SAP Note 1868702 these ODBC drivers will be removed during a revision update and have to be installed again after the update.

 

The mentioned SAP Note 1868702 now points to the SAP HANA Administration Guide, where we read:

guide.png

Two options are presented:

  1. Copy the driver files to $DIR_EXECUTABLE, or
  2. Set LD_LIBRARY_PATH to include the driver directory in the $HOME/.customer.sh script file

 

Option 1 will not persist across an SAP HANA system update, so option 2 is preferred.
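As a minimal sketch of option 2, a single export line in the $HOME/.customer.sh of the <sid>adm user is enough; the directory below is the driver location used in this blog:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/microsoft/msodbcsql/lib64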

 

However, if you create a System DSN and register the source with the driver manager, as described in this blog, there is no need either to copy the driver files or to set the variable.

 

Note 2151882 - ODBC Error Using HANA Smart Data Access to MSSQL describes an issue where the shared libraries linked to the ODBC driver cannot be found by the driver manager. This time, the solution is to set the variable in the $HOME/.sapenv.sh file. This also works, but will not persist across an SAP HANA system update, as this file will be overwritten. Not recommended.

 

Note 2141242 - Error "Can't open lib '/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0' : file not found" when using Smart Data Access describes a similar issue, this time the shared libraries of the driver manager cannot be found. This issue can be avoided by properly installing the unixODBC driver manager as described by Microsoft (see above).

 

 

More Information

 

SAP HANA Academy Playlists (YouTube)

 

SAP HANA Administration - YouTube

SAP HANA Smart Data Access - YouTube.

 

Product documentation

 

SAP HANA Smart Data Access - SAP HANA Administration Guide - SAP Library

 

SAP Notes

 

1868209 - SAP HANA Smart Data Access: Central Note

2180119 - FAQ: SAP HANA Smart Data Access

2141242 - Error "Can't open lib '/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0' : file not found" when using Smart Data Access

2151882 - ODBC Error Using HANA Smart Data Access to MSSQL

 

SCN Blogs

 

SDA Setup for SQLServer 12

SAP HANA Smart Data Access(1): A brief introduction to SDA

Smart Data Access - Basic Setup and Known Issues

Connecting SAP HANA 1.0 to MS SQL Server 2012 for Data Provisioning

SAP Hana Multi-Tenant DB with Replication

 

Microsoft Developer Network (MSDN)

 

Microsoft ODBC Driver for SQL Server on Linux

Download ODBC Driver for SQL Server

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.


SAP HANA April/May Webinar Series: Best Practices, Case Studies & Lessons Learned



The SAP HANA Webinar Series helps you gain insights
and deliver solutions to your organization.


This acclaimed series is sponsored by the SAP HANA international Focus Group (iFG) for customers, partners, and experts to support SAP HANA implementations and adoption initiatives. Learn about upcoming sessions and download Meeting Invitations here.


>>> Check out our new SAP HANA blog post about the following April & May webinars:


SAP HANA international Focus Group (iFG) Sessions:

  • April 14 – What’s New in SAP HANA Vora 1.2
  • April 21 – Introduction to OLAP Modeling in SAP HANA VORA
  • April 28 – Falkonry; Intelligent Monitoring of IoT Conditions
  • May 5 – Preview of SAP HANA @ SAPPHIRE NOW and ASUG Annual Conferences


SAP HANA Customer Spotlights:

  • April 19 – National Hockey League (NHL) Enables Digital Transformation with SAP HANA Platform: Register >>
  • April 26 – CenterPoint Energy – Analyzing Big Data, Faster with Reduced Storage Costs: Register >>


New to the iFG community?


iFG Webinar Series 4.jpg

HANA MDC: Tenant crash while recovering other tenant on the same appliance


This blog post is to bring attention to an issue we have been facing on our HANA Multitenant Database Container (MDC) setup.

 

Background:

We have a scale-up MDC setup with more than 15 tenant databases in non-prod on SPS 10.

As part of quarterly release activities, we refresh non-prod systems from production MDC tenant backups.

Until last year we had fewer than 10 tenants, and the regular refresh worked as expected.

 

Issue:

We introduced more non-prod tenants at the end of last year, and during the next refresh cycle we started noticing a tenant crash while we were refreshing another tenant.

A complete check of the trace logs of the crashed tenant confirmed we had signal 6 errors at exactly the same time the other tenant was being refreshed.

After multiple attempts to bring up the tenant failed, we had to involve SAP Support to determine the cause of the issue.

Meanwhile, we restored the crashed tenant from backups.

 

Cause:
SAP Support took more than a month to identify the cause of the issue; another occurrence of the same issue while restoring a different tenant confirmed there was a correlation.

SAP confirmed the following: when there are more than 10 tenants on a single MDC appliance, this issue can occur (on revisions below SPS 11 database maintenance revision 112.02).

For example, if we have 15 tenants and the tenant with database ID 5 is being restored from a backup of a production tenant, it will impact the tenant with database ID 15: that tenant will crash and fail to start up. The same issue would occur on the tenants with database IDs 13 and 14 if the tenants with database IDs 3 and 4 are recovered from a backup.

 

Resolution:

 

SAP has addressed the issue in SPS 11 database maintenance revision 112.02, released on 12-Apr-2016.

Please find below the link to the note and a screenshot that confirms the issue:

 

http://service.sap.com/sap/support/notes/2300417

MDC_Issue_Tenants.jpg

Please let me know if you have any thoughts or input on this issue. I hope the blog is useful in understanding the cause of the issue and the available solution.

HANA: Lost Updates



A brief intro: what happens in a lost update?


A simple example makes the lost update problem clear.

For example:

Session 1: User A reads record 1

Session 2: User B reads record 1

Session 1: User A updates record 1

Session 2: User B updates record 1

User B has not seen the update made by User A and overwrites the record, resulting in a lost update.

 

How can we tackle it programmatically?


  1. We maintain a time stamp field on the record and give it to the user who requests the record. When the user wants to save the modified record, we check it against the current time stamp of the record; if the time stamps are not the same, the record has been updated in the meantime and we return an error.
  2. We compute a checksum over all the fields. When the user goes back to update, we check the checksum; if the checksums are not the same, we return an error.

   There are other solutions too, such as generating a random number (a version token) and assigning it to the record.
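To illustrate the first approach, here is a minimal SQL sketch of an optimistic update; the table, columns, and values are hypothetical:

UPDATE "ORDERS"
   SET "STATUS" = 'SHIPPED', "CHANGED_AT" = CURRENT_UTCTIMESTAMP
 WHERE "ID" = 42
   AND "CHANGED_AT" = '2016-04-26 10:15:00';

If the update affects zero rows, the time stamp no longer matches: another session has changed the record in the meantime, and the application must report an error instead of overwriting.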


Let’s see how we can handle these in ODATA


It is cumbersome to implement the above by hand in a service on HANA. This is where the ETag functionality of OData services comes into the picture. We only need to take care of a few things and we are ready to protect the database from lost updates.

This mechanism can be applied to both tables and views. For views, you have to specify the key.


Here is an example of a simple service that does the task for us:

Service.PNG
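In outline, such an .xsodata service can look like the sketch below; the concurrencytoken keyword enables the ETag handling, and the schema, view, and key column names are illustrative:

service {
    "MYSCHEMA"."MY_RECORDS_VIEW" as "Records"
        key ("ID")
        concurrencytoken;
}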


Now when you get the data from the service, you will always get the ETag in the metadata, as shown in the screenshot below:

Etag-Token.PNG

 

As the screenshot shows, we get the ETag in the response from the server.

ETags come in two variants: weak and strong.

 

  • A weak ETag could be considered the last updated time or the version of a document. Weak ETags are prefixed with the “W/” to indicate they are weak.
  • A strong ETag asserts that the entire representation of the entity is byte-for-byte identical, so all fields are compared; typically a hash of the entity is used as the strong ETag. Strong ETags do not have the leading prefix that weak ETags do.

 

We will go with the weak tag here. Screenshot from Postman:

Postman-putRequest.PNG

 

The If-Match header accepts either an ETag value or *. If a value is provided, it will be validated against the token; with *, no validation is performed.

If you try to update the record again with the old ETag, you will get the error "412 Precondition Failed", because the ETag token has changed.
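For reference, a sketch of such an update request; the service path, entity set, and ETag value are hypothetical:

PUT /myapp/services/records.xsodata/Records(1) HTTP/1.1
Content-Type: application/json
If-Match: W/"20160426101500.0000000"

{"ID": 1, "NAME": "updated value"}

If the entity has changed since the ETag was read, the server answers with 412 Precondition Failed instead of applying the update.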

 

Hope this will help.

Vora 1.2 installation Cheat sheet: Concepts, Requirements and Installation


SAP HANA Vora provides an in-memory processing engine which can scale up to thousands of nodes, both on premise and in cloud. Vora fits into the Hadoop Ecosystem and extends the Spark execution framework.


Concepts and Requirements:


SAP HANA Vora 1.2 consists of the following two main components:

 

  • SAP HANA Vora Engine:
    SAP HANA Vora instances hold data in memory and boost the performance.
  • SAP HANA Vora Spark Extension Library:
    • Provides access to SAP HANA Vora through Spark.
    • Makes available additional functionality, such as a hierarchy implementation.

 

1.png

 

These two components are included in the Vora packages, which are available as follows; choose the package based on your Hadoop distribution.

 

  • SAP HANA Vora for Ambari: VORA_AM<version>.TGZ
  • SAP HANA Vora for Cloudera: VORA_CL<version>.TGZ
  • SAP HANA Vora for MapR: VORA_MR<VERSION>.TGZ

2.png


To download the packages: https://launchpad.support.sap.com/#/softwarecenter/search/vora%25201.2


Vora 1.2 supports the following operating systems:

  • SUSE Linux Enterprise Server (SLES) 11 SP3
  • Red Hat Enterprise Linux (RHEL) 6.7 and 7.2

You should also follow the Installation and Administration guide for the compatibility pack installations: http://help.sap.com/hana_vora

The following table shows the supported combinations of operating system, cluster provisioning tool, and Hadoop distribution:

3.png

Remember that the minimal setup for Vora 1.2 is:

  • 4 cores
  • 8 GB of RAM
  • 20 GB of free disk space for HDFS data
  • Note: You can’t install Vora 1.2 on a single node

In order to run Vora 1.2, the following Vora services have to be installed and configured; I will walk you through their installation and configuration on the clusters.

  • SAP HANA Vora Base: Vora libraries and binaries. Installs on all hosts.
  • SAP HANA Vora Catalog: Vora distributed metadata store. Installs on one node, usually a DLOG node.
  • SAP HANA Vora Discovery Service: Manages service registrations and installs on all nodes. In server mode it installs on 3 nodes (max 7) and selects the bootstrapping host; in client mode it installs on all remaining nodes. Note: you cannot install the DS server and client on the same node.
  • SAP HANA Vora Distributed Log: Provides persistence for the Vora Catalog. Usually installed on the master node (5 nodes recommended).
  • SAP HANA Vora Thriftserver: Gateway compatible with the Hive JDBC connector. Usually installed on the jumpbox, where the DS, DLOG, and Catalog servers are not installed.
  • SAP HANA Vora Tools: Web UI for the Vora 1.2 modeler. Installed on the same node as the Vora Thriftserver.
  • SAP HANA Vora V2Server: Vora engine. Installs on all worker nodes (data nodes).

 

The installation and configuration should either happen at the same time for all the services, or you should follow the order below to make sure the dependencies are handled:

6.png

The following diagram shows the architecture for a cluster with 4 nodes and the assignment of the different Vora 1.2 services which we will set up in this document: one master node, one server node, and two worker nodes.


Screen Shot 2016-04-07 at 1.39.42 PM.png

 

*** Our assumption is that you have your Hadoop cluster set up with the HDFS 2.6.x or 2.7.1, ZooKeeper 3.4.6, Spark 1.5.2, and YARN cluster manager 2.7.1 components.


Installing Vora 1.2 Services:


Step 1) Adding Vora Base: You have to add Vora Base on all nodes, and it has to be installed as a client, as shown below.

Screen Shot 2016-03-29 at 4.53.49 PM.png

Screen Shot 2016-03-29 at 4.58.24 PM.png

— no extra configuration is needed.

— you can click on the Proceed button, as shown below, even if you get the error, since you're not using MapReduce jobs:

 

Screen Shot 2016-03-29 at 4.59.49 PM.png

 

— Click on complete.

Screen Shot 2016-03-29 at 5.02.04 PM.png

— notice that the Vora base is now added to your services:

Screen Shot 2016-03-29 at 5.02.51 PM.png

Step 2) Now we add Vora Discovery: 3 Vora Discovery servers and one client.

 

Screen Shot 2016-03-29 at 5.03.40 PM.png

 

Adding the Vora Discovery client:

Screen Shot 2016-03-29 at 5.18.04 PM.png

— Vora Discovery servers need extra configuration:

— in vora_discovery_bootstrap add the master DNS name

— in vora_discovery_servers add your server DNS names

Screen Shot 2016-03-29 at 5.20.32 PM.png

 

 

— Proceed and deploy the service.

— Notice that the Vora Discovery service is now installed:

 

Screen Shot 2016-03-29 at 5.24.47 PM.png

 

Step 3) Now we add Vora Distributed Log service :


 

Screen Shot 2016-03-29 at 5.26.11 PM.png

 

— we install DLOG servers on the same machines where we installed our Discovery Servers.

 

Screen Shot 2016-03-29 at 5.29.49 PM.png

 

— No extra configurations are needed.

— click Next -> click Proceed anyway -> click Complete

— Notice that vora DLOG is now added to the services:

 

Screen Shot 2016-03-29 at 5.31.47 PM.png

 

Step 4) Next step is to install Vora Catalog:

 

Screen Shot 2016-03-29 at 5.33.03 PM.png

 

— Install Catalog on your master node:


Screen Shot 2016-03-29 at 5.35.12 PM.png


— click Next -> click Proceed anyway -> click Complete

 

— Notice that vora Catalog is added to the services:

Screen Shot 2016-03-29 at 5.36.55 PM.png

 

Step 5) Time to install V2Server as shown below:


Screen Shot 2016-03-29 at 5.38.10 PM.png


— extra configuration: add the Vora V2Server Worker service to worker1 and worker2 nodes and remove it from your server node.

 

 

Screen Shot 2016-03-29 at 5.40.45 PM.png

 

— click Next -> click Proceed anyway -> click Complete

 

— Notice that vora V2Server is now added to the services:

 

Screen Shot 2016-03-29 at 5.43.47 PM.png

Step 6) Time to install Vora Thriftserver and Vora Tools:

 

Screen Shot 2016-03-29 at 5.45.13 PM.png

Screen Shot 2016-03-29 at 5.47.40 PM.png

 

— you have to add more configuration to the Thriftserver, as shown below:

— add vora_thriftserver_java_home = /usr/lib/jvm/java -- this value depends on where Java is installed on your system

— add vora_thriftserver_spark_home = /usr/hdp/2.3.4.0-3485/spark -- this is your Spark home value

 

Screen Shot 2016-03-29 at 5.51.58 PM.png

 

— click Next -> click Proceed anyway -> click Complete

 

— Notice that Vora Thriftserver and Vora Tools are now added to the services:

 

Screen Shot 2016-03-29 at 5.53.03 PM.png

 

Now click on the HDFS, MapReduce2 and YARN services, which are in red, and restart all affected components as shown below:

Screen Shot 2016-03-29 at 5.58.15 PM.png

Congratulations!! You now have Vora 1.2 services installed on your clusters.

 

Step 7) To validate your Vora:

— SSH to your worker1 node and run:

— source /etc/vora/vora-env.sh

— $VORA_SPARK_HOME/bin/start-spark-shell.sh


and you should now see the SQL contexts (Vora SQL Context and Spark SQL Context) being available.
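From the Spark shell you can then run a quick check. A minimal sketch, assuming the SapSQLContext class and the com.sap.spark.vora data source shipped with the Vora 1.x Spark extension library:

import org.apache.spark.sql.SapSQLContext
val vc = new SapSQLContext(sc)
vc.sql("SHOW TABLES USING com.sap.spark.vora").show()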

SAP HANA community alert 260416: new moderator aboard


Dear SAP HANA aficionados

 

If you have not been living under a rock lately and have visited the SAP HANA corners of SCN (SAP HANA and In-Memory Computing, SAP HANA Developer Center) recently, you surely will have noticed that there is currently one particular person putting in effort, patience and willingness to help others in extraordinary measure.

Of course I am writing about Florian Pfeffer here.

 

Florian, as you may know by now, is not only an HDE but also was SCN Member of the Month just this March.

He also managed to earn the 'Super Answer Hero' badge which showcases his commitment to the community.

So it's fair to say that this star is flying high right now.

 

As my interest with SCN is in community development, I took the chance and asked if he would like to become a moderator, which Florian agreed to.

From now on, the SAP HANA and In-Memory Computing space has three permanent moderators assigned.

 

Once again, I would like to thank both Lucas and Florian for their engagement and also encourage others to step up and become more involved with SAP HANA and the community around it. There is always room for more high profile contributors!

 

Cheers,

Lars

Vora 1.2 Modeling Tool


SAP HANA Vora provides an in-memory processing engine which can scale up to thousands of nodes, both on premise and in cloud. Vora fits into the Hadoop Ecosystem and extends the Spark execution framework.

 

The following image shows where Vora fits in the Hadoop ecosystem:

 

1.png

 

Recently, Vora 1.2 has been released with the following new features:

  • Support for MapR Hadoop distribution
  • Vora modeler – for building data models on top including OLAP
  • Added features to Hierarchies
  • DLog and Discovery services using the Consul tool
  • Enhanced performance through partitioning and co-located joins



The focus of this blog is to introduce you to the Vora Data Modeling tool.


For more information on other features released with 1.2 please refer to:

http://scn.sap.com/community/hana-in-memory/blog/2016/04/11/introducing-sap-hana-vora12

and for more information on the concepts such as DLOG, Discovery Service, Vora configuration and installation please refer to:

Vora 1.2 installation Cheat sheet: Concepts, Requirements and Installation

 

Vora 1.2 Modeling Tool:

In order to communicate with the Vora engine, you could use Apache Zeppelin or Jupyter Notebook (http://scn.sap.com/community/developer-center/hana/blog/2016/01/21/visualizing-data-with-jupyter), mostly for coding.


We also designed the Vora modeling tool (modeler) to facilitate development across structured, unstructured, and semi-structured data; it has been released as a beta version with Vora 1.2. With the Vora modeler you have access to a SQL editor and also the Modeler perspective, which give you the option to code or to drag and drop artifacts to develop your data view.

 

By installing the Vora Tools as part of the Vora 1.2 installation, you will have access to the Vora modeler through your browser via port 9225:

http://<DNS_NAME_OF_JUMPBOX_NODE>:9225


Here is how your home page should look. This page gives you access to the main perspectives, the connection feature, and help.


2.png


Vora Modeling has three main perspectives:

  • Data Browser
  • SQL Editor
  • Modeler

The Data Browser allows you to view the available tables, views, dimensions, and cubes in the Vora engine. It also allows you to preview the data, download the data as a CSV file, filter the columns, and refresh them.

 

Here is a view of your Data Browser:

3.png

 

The SQL Editor perspective allows you to run queries using Vora SQL; it also shows you compilation warnings, errors and output, and the result of the query when you run a select.

 

5.png

 

 

 

 

 

The Modeler perspective can be used to create SQL views, dimensions, or cubes. You can also use the subselect artifact to create nested queries. Below you can see a view of the Modeler:


6.png




Features available in Vora 1.2 include, but are not limited to:

 

Support for

  • Dimensions and Cubes
  • Annotations
  • Joins
  • Simple and multiple joins
  • Auto propose the join conditions
  • Define the join condition using an editor
  • SQL Editor supporting VORA SQL
  • Modeling perspective
  • Subselect
  • Unions
  • Union has been visualized with the notion of a result set, providing better management of group by, order by, etc. at different levels
  • Regenerate the views as Spark SQL
  • Exporting the tables as CSV

 

For more information on how to use the Vora modeler and create data analysis scenarios, refer to the Vora Developer's Guide, Chapter 11.

http://help.sap.com/hana_vora


iFG Webinar Series: Get Ready for SAP HANA SPS12


>>Check out our new SAP HANA blog post for the SPS12 webinar schedule and meeting invitations.


SAP HANA Product Management and the SAP HANA International Focus Group (iFG) are rolling out the SPS12 Webinar Series. iFG is the premier channel for ABM customers to receive these updates. For prior releases (SPS10, SPS11, etc.), the feedback on this sponsored series has been overwhelmingly positive.


SAP HANA SPS12, with an expected release just before SAPPHIRE NOW, offers customers a compelling value proposition. It can help existing or new customers run mission critical applications and analytics on SAP HANA.


For many, this series is the MUST see and hear event of the year. We will have “operator assisted” calls and expect to break all records for participation.


Following each session, the recordings and available materials can be accessed on the SAP HANA iFG Jam group.


New to the iFG community?

SPS12 Photo.png


SAP HANA Modeling - Project Learnings and Tips



Key benefits of Data Modeling in SAP HANA:

Building analytics and data mart solutions using SAP HANA enterprise data modeling offers various benefits, compared to the traditional data warehousing solutions such as SAP BW.

  • Virtual data models with on-the-fly calculation of results, which enable reporting accuracy and require very limited data storage – powered by in-memory processing, columnar storage, parallel processing, etc.
  • Ability to perform highly processing-intensive calculations efficiently – for example, identify the customers whose sales revenue is greater than the average sales revenue per customer
  • Real-time reporting leveraging data replication and access techniques such as SLT, smart data access, etc.


Apart from the HANA sidecar or data mart solutions, HANA modeling also plays an essential role in BW on HANA mixed scenarios, S/4HANA analytics, predictive analytics, native HANA applications, etc.

 

Objectives of this blog:

In this blog, I would like to share some of the experiences and learnings from various projects implementing HANA modeling solutions. The intent is to provide insights and approaches for HANA modelers that can be helpful when they start working on solution design and development. It does not, however, cover a detailed explanation of the HANA modeling features.


Requirement analysis and setting expectations:

Understand the reporting requirements of the project clearly and try to conceptualize the HANA models to be built based on the required KPIs. A few key aspects of the solution design: the KPI definitions, including details such as data sources, dimensions, filters, calculation logic, granularity, and data volumes.

 

At times, business users expect HANA models to deliver the best performance even with wide-open selection criteria and many columns in the output. Even though SAP HANA data models are expected to deliver sub-second response times, we need to be aware that a HANA instance has limited resources (memory, processing engines). Hence it is essential to implement HANA models efficiently, as per the performance guidelines.

 

Test the waters before diving deep: Validate the features, tools and integration aspects

It is always better to start with a prototype of a sample end-to-end solution before proceeding with the full-fledged implementation. This includes steps like setting up data provisioning using SLT, BODS, etc., building HANA views, and consuming HANA views from the reporting tool. Prototyping helps us verify that all the functionality works as expected. With this approach we can check and address connectivity and security related issues beforehand.

It is also recommended to verify the functionality, understand the pros and cons of new features before trying to use them in our models.

 

Decision criteria for HANA Modeling approaches:

In recent releases like SPS 10 and SPS 11, HANA modeling functionality has been greatly enhanced with several features to cover various complex requirements. Always try to implement the models using graphical calculation views unless there are specific requirements that can only be implemented using SQL Script. In general, we need SQL Script based calculation views only for scenarios such as complex lookups and calculations, recursive logic, etc.


While creating graphical calculation views, we need to implement the entire logic virtually using various nodes in different layers. It requires innovative thinking along with solid data modeling skills and a very good understanding of different SQL statements in order to build complex and effective HANA views.

Try to implement your HANA modeling views as per the features supported by the current support package / revision level, and also consider the guidelines and future road map of SAP:

 

For instance, SAP suggests that calculation views are to be implemented for most of the requirements:

  • Dimension type calculation views: To model master data or to implement "value help" views for variables etc..
  • Star join type calculation views: As an alternative to analytic views
  • Cube type calculation views: Mainly for the reporting, which includes measures along with aggregations etc.

 

There are a few scenarios where we have to decide between "on-the-fly calculation" and "persistence of the results". For highly processing-intensive calculations where real-time reporting is not essential, and for scenarios like calculating and storing snapshots of results (such as weekly inventory snapshots), we can implement the logic using SQL Script stored procedures in HANA to persist the results in a table. Subsequently, a simple view can be built on this table to enable reporting.
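A minimal SQL Script sketch of this pattern; the schema, table, and column names are illustrative:

CREATE PROCEDURE "MYSCHEMA"."SP_INVENTORY_SNAPSHOT" LANGUAGE SQLSCRIPT AS
BEGIN
    INSERT INTO "MYSCHEMA"."INVENTORY_SNAPSHOT"
    SELECT CURRENT_DATE AS "SNAPSHOT_DATE", "MATERIAL", SUM("QUANTITY") AS "STOCK_QTY"
      FROM "MYSCHEMA"."INVENTORY"
     GROUP BY "MATERIAL";
END;

The procedure can then be scheduled (for example, weekly), and a simple view on the snapshot table serves the reports.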

 

Seeing Is Believing: Data validation is crucial

Prepare comprehensive data validation and test plans for your HANA views. We can leverage different techniques to ensure that the HANA view produces results exactly as per the requirement. Ensure that your test cases include validation of the attributes and measures along with any filters, calculations and aggregations, counters, currency conversions, etc.

 

Below are the key tools and techniques to perform data validation of HANA views:

  • Data preview option: Using the data preview option at the HANA view level and also at the individual node level is the simplest option to validate the data during the development of HANA views. Leverage the various options such as Raw data with filters, Distinct value analysis, Generating the SQL statement from the log etc. to perform different types of validations using the Data Preview option

 

  • Custom SQL queries: We can write and execute custom SQL queries in the HANA studio SQL editor and compare the results with those of the HANA view to ensure that they match. Here we can leverage various types of SQL statements to perform complex data validations – for example, to compare the data between the HANA view and the base tables (see the sketch after this list)

 

  • Reporting from Excel, Analysis Office, or other reporting tools: For validation of larger data volumes and for the validation of semantics (labels, formatting, etc.) we can leverage tools like Analysis for Office
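As an example of the custom SQL technique mentioned above, the hypothetical queries below compare the row count and a measure total between a calculation view and its base table (the package, view, table, and column names are illustrative):

SELECT COUNT(*), SUM("REVENUE") FROM "_SYS_BIC"."sales.models/CV_SALES";
SELECT COUNT(*), SUM("REVENUE") FROM "MYSCHEMA"."SALES_ITEMS";

If the view applies filters or aggregation, the comparison statements have to mirror that logic so that both sides remain comparable.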


Be conscious about the Input data – Few important aspects in the Data Provisioning:

Identify the list of tables to be imported from different sources using SLT or other data provisioning tools and assess the memory requirements. To ensure the optimal utilization of the HANA database, it is advisable to replicate only those tables which are essential to meet the requirements.


A few options to optimize SLT table replication:

  1. Try to leverage the BW objects (DSOs or InfoObjects) if the corresponding data is already available in the connected BW schema – this will save space in HANA, as we are avoiding the table replication
  2. Apply filters to avoid SLT replication of unwanted data from large tables into HANA


Try to leverage the transformation capabilities at the SLT or BODS level, wherever feasible. Especially in the scenarios where we need to filter the data model based on a calculated column, it would be ideal to derive this calculated column during the data provisioning.


Smart tools enable better productivity:

There are several tools and options available in HANA studio which help us maintain the views in a simplified manner and increase productivity. Leverage these tools and features while building and maintaining HANA views.

 

Listed below are some of these tools and their utility in HANA modeling process:

 

  • Show lineage (columns): helps us to trace the origin of attributes and measures in HANA views.

     Untitled.jpg

 

  • Replacing nodes and data sources (in graphical views) – to replace the nodes (projection, join, ...) with a different node, or replace the data sources (views or tables) with a different view or table within a modeled view

          Replace node.jpg    

         

  • Import columns from Table type or flat file (For script based Calculation views): This will simplify the creation of output structure for a script based calculation view – instead of manually maintaining the output columns, we can import the column details from an existing table or view

      Import columns.jpg

 

     Note: The following options are available when we right click on a HANA view:

 

  • Generate Select SQL - Using this option we can get the generated SELECT statement for any of the HANA views, which can be customized and executed from the SQL editor


  • Refactoring views: Using this option we can move the views across the packages, which automatically adjusts the inner views to reflect the new package.


  • Where-used list: To identify the list of objects where the current view has been used and assess the impact of any changes


  • Auto-documentation: To generate the documentation of a modeled view, which can be leveraged as part of the technical documentation   


Conclusion: My sincere thanks to the SCN community and especially to all the experts, who have been a great source of inspiration. I hope this blog will be useful to you in learning and implementing HANA modeling solutions.

Understand HANA SPS and supported Operating Systems


HANA Revision Strategy

 

SAP ships regular corrections and updates. Corrections are shipped in the form of revisions and support packages of the product's components. New HANA capabilities are introduced twice a year in the form of an SAP HANA Support Package Stack (SPS). The Datacenter Service Point (DSP) is based upon the availability of a certain SAP HANA revision which has been running internally at SAP for production enterprise applications for at least two weeks before it is officially released.

 

See SAP Note “2021789 - SAP HANA Revision and Maintenance Strategy” for more details

 

p1.png

Supported Operating Systems

 

According to SAP Note "2235581 – Supported Operating Systems" there are two Linux distributions that can be used with the SAP HANA platform.

 

  • SUSE Linux Enterprise Server for SAP Applications product link

 

 

All information in that document refers to these two products from SUSE and Red Hat only.
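To verify which OS release and service pack a given HANA host is actually running, you can check the standard release files; the legacy files exist only on the respective older releases:

cat /etc/os-release

cat /etc/SuSE-release

cat /etc/redhat-release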

 

 

There are some additional notes which apply when selecting an operating system release for SAP HANA:

 

  • HANA SPS revision notes: HANA revision (x) requires a minimum OS release (y) – e.g. SAP Notes 2233148 & 2250138 – SLES 11 SP2 minimum for HANA SPS 11


 

  • Only host types released by the hardware partners are suitable for using SAP software productively on Linux (SAP Note 171356)

  • SAP cannot support software from third-party providers (e.g. the OS) which is no longer maintained by the manufacturer (SAP Note 52505)

 

SUSE Linux Enterprise Server for SAP

 

With SUSE Linux Enterprise Server (SLES) for SAP there are major releases 11 & 12 with service packs. Several service packs (SPs) are available within a SUSE release, and it is possible to stay with a specific SP until its support ends. The general support for a service pack ends at a defined date. These "general support end" dates are communicated on the SUSE web page: https://www.suse.com/lifecycle/

 

p2.png

 

 

At the RTC date of each HANA SPS there are supported SUSE Linux Enterprise Server for SAP (SLES) versions available which can be combined into a supported stack.

 

 

p3.png

Red Hat Enterprise Linux (RHEL) for SAP HANA

 

Starting with SAP HANA Platform SPS 08, Red Hat Enterprise Linux (RHEL) is supported for SAP HANA. Red Hat follows an approach with major and minor releases. With Extended Update Support for specific RHEL for SAP HANA releases, it is possible to remain on a specific minor release even if there is a newer minor release.

You can find more details about the Red Hat Enterprise Linux life cycle on the web page: https://access.redhat.com/support/policy/updates/errata

At the release date of each HANA SPS there are supported Red Hat Enterprise Linux (RHEL) for SAP HANA versions available which can be combined into a supported stack.

 

p4.png

 

HANA SPS & OS version timeline

 

Below we see a timeline of HANA SPSs and the available OS releases from SUSE and Red Hat. The next overview and all earlier timelines show the lifecycles of HANA and SLES/RHEL, not an official support status for HANA releases.

 

 

 

p5.png

 

The marked intersections show those points in time where there is a supported release from the OS vendors at the RTC date of a new HANA SPS.

Older operating system releases are no longer supported; they disappear from the timeline after the SUSE "general support end" or Red Hat EUS end date and are replaced with a new OS version or service pack.

OS Validation

End of Life of an OS

 

The sample in the next picture shows SLES 11 SP3, which has a general support end date in January 2017. This is the point in time where validation stops for that OS release, and upcoming HANA SPSs are no longer supported on SLES for SAP 11 SP3.

 

 

p6.png

 

Sunrise of an OS

 

If a new OS release is available, SAP wants to support it with the upcoming HANA SPS. The next sample timeline shows SLES for SAP 11 SP4, which was available before HANA SPS 11 was released. Therefore the following SPSs are also supported on SLES for SAP 11 SP4.

 

p7.png

 

SLES 11 for SAP SP4 was validated and SAP supports this OS version with HANA SPS11.

 

Support Matrix

 

This validation methodology and the timelines of HANA revisions and OS releases lead to a "Support Matrix", which is presented in this sample table:

 

p8.png

 

The corridor shows the combinations of OS releases with HANA SPS which will be supported by SAP.

ODXL - An open source Data Export Layer for SAP/HANA based on OData


I'm very pleased to be able to announce the immediate availability of the Open Data Export Layer (ODXL) for SAP/HANA!

Executive summary

ODXL is a framework that provides generic data export capabilities for the SAP/HANA platform. ODXL is implemented as an xsjs web service that understands OData web requests and delivers a response by means of a pluggable data output handler. Developers can use ODXL as a back-end component, or even as a global instance-wide service, to provide clean, performant and extensible data export capabilities for their SAP/HANA applications.

 

Currently, ODXL provides output handlers for comma-separated values (csv) as well as Microsoft Excel output. However, ODXL is designed so that developers can write their own response handlers and extend ODXL to export data to other output formats according to their requirements.

 

ODXL is provided to the SAP/HANA developer community as open source software under the terms of the Apache 2.0 License. This means you are free to use, modify and distribute ODXL. For the exact terms and conditions, please refer to the license text.

 

The source code is available on github. Developers are encouraged to check out the source code and to contribute to the project. You can contribute in many ways: we value any feedback, suggestions for new features, filing bug reports, or code enhancements.

What exactly is ODXL?

ODXL was born of the observation that the SAP/HANA web applications we develop for our customers often require some form of data export, typically to Microsoft Excel. Rather than creating this type of functionality again for each project, we decided to invest some time and effort to design and develop this solution in such a way that it can easily be deployed as a reusable component - and preferably, in a way that feels natural to SAP/HANA xs platform application developers.

What we came up with is an xsjs web service that understands requests that look and feel like standard OData GET requests, but which returns the data in some custom output format. ODXL was designed to be easily extensible so that developers can build their own modules that create and deliver the data in whatever output format suits their requirements.

This is illustrated in the high-level overview below:



For many people, there is an immediate requirement to get Microsoft Excel output. So, we went ahead and implemented output handlers for .xlsx and .csv formats, and we included those in the project. This means that ODXL supports data export to the .xlsx and .csv formats right out of the box.

However, support for any particular output format is entirely optional and can be controlled by configuration and/or extension:

  • Developers can develop their own output handlers to supply data export to whatever output format they like.
  • SAP/HANA Admins and/or application developers can choose to install only those output handlers they require, and configure how Content-Type headers and OData $format values map to output handlers.

So ODXL is OData? Doesn't SAP/HANA suppport OData already?

The SAP/HANA platform provides data access via the OData standard. This facility is very convenient for object-level read- and write access to database data for typical modern web applications. In this scenario, the web application would typically use asynchronous XML Http requests, and data would be exchanged in either Atom (a XML dialect) or JSON format.

 

ODXL's primary goal is to provide web applications with a way to export datasets in the form of documents. Data export tasks typically deal with data sets that are quite a bit larger than the ones accessed from within a web application. In addition, a data export document might very well comprise multiple parts - in other words, it may contain multiple datasets. The typical example is exporting multiple lists of different items from a web application to a workbook containing multiple spreadsheets with data. In fact, the concrete use case from whence ODXL originated was the requirement to export multiple datasets to Microsoft Excel .xlsx workbooks.

 

So, ODXL is not OData. Rather, ODXL is complementary to SAP/HANA OData services. That said, the design of ODXL does borrow elements from standard OData.

OData Features, Extensions and omissions

ODXL GET requests follow the syntax and features of standard OData GET requests. Here's a simple example to illustrate an ODXL GET request:

GET "RBOUMAN"/"PRODUCTS"?$select=PRODUCTCODE, PRODUCTNAME& $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1&$orderby=BUYPRICE desc&$skip=0&$top=5

This request is built up like so:

  • "RBOUMAN"/"PRODUCTS": get data from the "PRODUCTS" table in the database schema called "RBOUMAN".
  • $select=PRODUCTCODE, PRODUCTNAME: Only get values for the columns PRODUCTCODE and PRODUCTNAME.
  • $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1: Only get in-stock products from the vendor 'Classic Metal Creations'.
  • $orderby=BUYPRICE desc: Order the data from highest price to lowest.
  • $skip=0&$top=5: Only get the first five results.

For more detailed information about invoking the odxl service, check out the section about the sample application. The sample application offers a very easy way to use ODXL for any table, view, or calculation view you can access and allows you to familiarize yourself in detail with the URL format.

In addition, ODXL supports the OData $batch POST request to allow export of multiple datasets into a single response document.

The reasons to follow OData in these respects are quite simple:

  • OData is simple and powerful. It is easy to use, and it gets the job done. There is no need to reinvent the wheel here.
  • ODXL's target audience, that is to say, SAP/HANA application developers, are already familiar with OData. They can integrate and use ODXL into their applications with minimal effort, and maybe even reuse the code they use to build their OData queries to target ODXL.

ODXL does not follow the OData standard with respect to the format of the response. This is a feature: OData only specifies Atom (an XML dialect) and JSON output, whereas ODXL can supply any output format. ODXL can support any output format because it allows developers to plug-in their own modules called output handlers that create and deliver the output.

  Currently ODXL provides two output handlers: one for comma-separated values (.csv), and one for Microsoft Excel (.xlsx). If that is all you need, you're set. And if you need some special output format, you can use the code of these output handlers to see how it is done and then write your own output handler.

ODXL does respect the OData standard with regard to how the client can specify what type of response they would like to receive. Clients can specify the MIME-type of the desired output format in a standard HTTP Accept: request header:

  • Accept: text/csv specifies that the response should be returned in comma separated values format.
  • Accept: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet specifies that the response should be returned in open office xml workbook format (Excel .xlsx format).

Alternatively, they can specify a $format=<format> query option, where <format> identifies the output format:

  • $format=csv for csv format
  • $format=xlsx for .xlsx format

Note that a format specified by the $format query option will override any format specified in an Accept:-header, as per OData specification.
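For example, to fetch a dataset from the command line; the host, port, and package path below are hypothetical and depend on where you activated the service:

curl -u MYUSER:MYPASS -o products.xlsx 'https://myhanahost:4300/my/package/odxl/odxl.xsjs/"RBOUMAN"/"PRODUCTS"?$top=5&$format=xlsx'

Because $format=xlsx is specified, the .xlsx output handler is selected regardless of any Accept: header.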

ODXL admins can configure which MIME-types will be supported by a particular ODXL service instance, and how these map to pluggable output handlers. In addition, they can configure how values passed for the $format query option map to MIME-types. ODXL comes with a standard configuration with mappings for the predefined output handlers for .csv and .xlsx output.

On the request side of things, most of OData's features are implemented by ODXL:

  • The $select query option to specify which fields are to be returned
  • The $filter query option allows complex conditions restricting the returned data. OData standard functions are implemented too.
  • The $skip and $top query options to export only a portion of the data
  • The $orderby query option to specify how the data should be sorted

ODXL currently does not offer support for the following OData features:

The features that are currently not supported may be implemented in the future. For now, we feel the effort the implement them and adequately map their semantics to ODXL may not be worth the trouble. However, an implementation can surely be provided should there be sufficient interest from the community.

Installation

Using ODXL presumes you already have an SAP/HANA installation with a properly working xs engine. You also need HANA Studio, or Eclipse with the SAP HANA Tools plugin installed.

  Here are the steps if you just want to use ODXL, and have no need to actively develop the project:

  1. In HANA Studio/Eclipse, create a new HANA xs project. Alternatively, find an existing HANA xs project.
  2. Find the ODXL repository on github, and download the project as a zipped folder. (Select a particular branch if you desire so; typically you'll want to get the master branch)
  3. Extract the project from the zip. This will yield a folder. Copy its contents, and place them into your xs project directory (or one of its sub directories)
  4. Activate the new content.

After taking these steps, you should now have a working ODXL service, as well as a sample application. The service itself is in the service subdirectory, and you'll find the sample application inside the app subdirectory.

 

  The service and the application are both self-contained xs applications, and should be completely independent in terms of resources. The service does not require the application to be present, but obviously, the application does rely on being able to call upon the service.

 

  If you only need the service, for example because you want to call it directly from your own application, then you don't need the sample application. You can safely copy only the contents of the service directory and put those right inside your project directory (or one of its subdirectories) in that case. But even then, you might still want to hang on to the sample application, because you can use it to generate the web service calls that you might want to make from within your own application.

 

If you want to actively develop ODXL, and possibly, contribute your work back to the community, then you should clone or fork the github repository and work from there.

Getting started with the sample application

To get up and running quickly, we included a sample web application in the ODXL project. The purpose of this sample application is to provide an easy way to evaluate and test ODXL.

 

The sample application lets you browse the available database schemas and queryable objects: tables and views, including calculation views (or at least, their SQL-queryable runtime representation). After you make a selection, it builds a form showing the available columns. You can then use the form to select or deselect columns, apply filter conditions, and/or specify a sort order. If the selected object is a calculation view that defines input parameters, a form is also shown where you can enter values for those.

 

Meanwhile, as you enter options into the form, a textarea shows the URL that should be used to invoke the ODXL service. If you like, you can tweak this URL manually as well. Finally, you can use one of the download links to immediately download the result corresponding to the current URL in either .csv or .xlsx format.

 

Alternatively, you can hit a button to add the URL to a batch request. When you're done adding items to the batch, you can hit the download workbook button to download a single .xlsx workbook containing one worksheet for each dataset in the batch.

 

What versions of SAP HANA are supported?

We initially built and tested ODXL on SPS9. The initial implementation used the $.hdb database interface, as well as the $.util.Zip builtin.

 

  We then built abstraction layers for both database access and zip support to allow automatic fallback to the $.db database interface, and to use a pure JavaScript implementation of the zip algorithm based on Stuart Knightley's JSZip library. We tested this on SPS8, and everything seems to work fine there.
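
  As an illustration, here is a minimal sketch of such a fallback in XS JavaScript. This is not ODXL's actual code; it simplifies result handling to the first column only:

    function executeQuerySketch(sql) {
        // Prefer the $.hdb interface, available as of SPS9.
        if (typeof $.hdb !== "undefined") {
            var connection = $.hdb.getConnection();
            var resultSet = connection.executeQuery(sql);
            connection.close();
            return resultSet; // iterable result set of row objects
        }
        // Fall back to the legacy $.db interface (SPS8 and earlier).
        var conn = $.db.getConnection();
        var statement = conn.prepareStatement(sql);
        var rs = statement.executeQuery();
        var rows = [];
        while (rs.next()) {
            rows.push(rs.getString(1)); // simplified: first column only
        }
        rs.close();
        statement.close();
        conn.close();
        return rows;
    }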

 

  We have not actively tested earlier SAP HANA versions, but as far as we know, ODXL should work on earlier versions as well. If you find that it doesn't, then please let us know - we will gladly look into the issue and see if we can provide a solution.

How to Contribute

If you want to, there are many different ways to contribute to ODXL.

  1. If you want to suggest a new feature, or report a defect, then please use the github issue tracker.
  2. If you want to contribute code for a bugfix, or for a new feature, then please send a pull request. If you are considering contributing code, we urge you to first create an issue to open up discussion with fellow ODXL developers on how best to scratch your itch.
  3. If you are using ODXL and you like it, then consider spreading the word - tell your co-workers about it, write a blog, a tweet, or a Facebook post.

Thank you in advance for your contributions!

Finally

I hope you enjoyed this post and that ODXL will be useful to you. If so, I look forward to your feedback on how it works for you and how we might improve it. Thanks for your time!

SAP HANA SPS 12 What's New: Security - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

The topic of this blog is security.

 

For the complete list of what's new blogs, see What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

For security in the SPS 11 release, see SAP HANA SPS 11 What's New: Security - by the SAP HANA Academy

 

Overview Video

 

SAP HANA Security - SPS 12 - YouTube

 

What's New?

 

Security Administration with SAP HANA Cockpit

 

The tile catalog SAP HANA Security Overview has a new tile: Authentication, which lists the status of the password policy and any SSO configuration. The tile opens the Password Policy and Blacklist app.

 


 

The new Password Policy and Blacklist app allows you to view and edit the password policy and blacklist of the SAP HANA database. In previous versions, SAP HANA studio was required for these tasks.

 


 

The Auditing tile now allows you to configure auditing: enable or disable it, set audit trail targets, and create, change, or delete audit policies. In previous versions, SAP HANA studio was required for these tasks.

 


 

The Data Volume Encryption app now allows you to change the root key used for data volume encryption. Alternatively, the root key can be changed using SQL (see below). In previous versions, the command line tool hdbnsutil was required to perform this task.

 


 

You can also see a history of root key changes in the system view M_PERSISTENCE_ENCRYPTION_KEYS.
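
For example, following the SQL convention used elsewhere in this blog:

SQL> SELECT * FROM M_PERSISTENCE_ENCRYPTION_KEYS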

 

Authorization

 

Two new system views are available for analysing user authorisations (a sample query follows the list):

  • EFFECTIVE_PRIVILEGE_GRANTEES
  • EFFECTIVE_ROLE_GRANTEES
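
For example, to find out who effectively holds a given privilege on an object. This is a sketch: MYSCHEMA and MYTABLE are placeholder names, and the view expects the object and privilege to be pinned down in the WHERE clause (see the system views reference for the required filter columns):

SQL> SELECT GRANTEE, GRANTEE_TYPE, GRANTOR
     FROM EFFECTIVE_PRIVILEGE_GRANTEES
     WHERE OBJECT_TYPE = 'TABLE'
       AND SCHEMA_NAME = 'MYSCHEMA'
       AND OBJECT_NAME = 'MYTABLE'
       AND PRIVILEGE = 'SELECT'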

 


 

Two new roles are available to support users administering SAP HANA using SAP Solution Manager and SAP NetWeaver tools:

  • sap.hana.admin.roles::SolutionManagerMonitor
  • sap.hana.admin.roles::RestrictedUserDBSLAccess

 

Authentication

 

You can now disable authentication mechanisms that are not used in your environment.
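
For example, to allow only password and Kerberos authentication. This is a sketch: check the parameter name and the exact value list against the SPS 12 security documentation before using it in your environment:

SQL> ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
     SET ('authentication', 'authentication_methods') = 'password,kerberos'
     WITH RECONFIGURE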


SAP HANA smart data access now supports SSO with Kerberos and Microsoft Active Directory for connections to SAP HANA remote sources.

 

Encryption

 

You can change the root key for data volume encryption using either SAP HANA cockpit or SQL (note that no native UI for SAP HANA studio has been included).

SQL> ALTER SYSTEM PERSISTENCE ENCRYPTION CREATE NEW ROOT KEY

 

SAP HANA studio now supports client certificate validation for the SAP HANA database connection.

 


 

The SAP HANA user store (hdbuserstore) now also supports JDBC connections and multitenant databases.
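
For example, to store and then verify a connection key (the key name, host, port, user, and password below are placeholders):

hdbuserstore SET MYKEY "myhost:30015" MYUSER MyS3cretPw
hdbuserstore LIST MYKEY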

 

Auditing

 

Several new user actions in the SAP HANA database can now be audited (a sample policy follows the list):

  • CREATE | ALTER | DROP PSE
  • CREATE | DROP CERTIFICATES
  • CREATE | DROP SCHEMA
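
For example, a policy that audits successful schema creation and deletion might look as follows (a sketch; the policy name is arbitrary):

SQL> CREATE AUDIT POLICY "AUDIT_SCHEMA_DDL" AUDITING SUCCESSFUL CREATE SCHEMA, DROP SCHEMA LEVEL INFO
SQL> ALTER AUDIT POLICY "AUDIT_SCHEMA_DDL" ENABLE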

 


 

Cross-database queries in SAP HANA multitenant database containers are now audited in the tenant database in which the query is executed.

 

The maximum length of an audited statement can be set using the system parameter audit_statement_length in the [auditing configuration] section of global.ini.
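
For example (the value 2000 is only an illustration):

SQL> ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
     SET ('auditing configuration', 'audit_statement_length') = '2000'
     WITH RECONFIGURE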

 

 

Enhanced Database Trace Information for Authorization issues

 

On this topic, see the blog by Sinéad Higgins:

 

Enhanced database trace information for authorization issues in SAP HANA SPS 12

 

Security Checklists and Recommendations

 

A new guide has been added to the SAP HANA documentation set: SAP HANA Security Checklists and Recommendations. This guide extends and replaces the Security Configuration Checklist paragraph from the SAP HANA Security Guide.

 


 

 

Security for SAP HANA Extended Application Services, Advanced Model

 

A new paragraph has been added to the SAP HANA Security Guide on the topic of Extended Application Services:

Security for SAP HANA Extended Application Services, Advanced Model

 

 

Additional Information

 

  • SAP HANA Security Playlist
  • SAP HANA Help Portal: SAP HANA Platform Core SPS 12
  • SAP Notes
  • SCN

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on LinkedIn.

Vulnerability in glibc CVE-2015-7547 (2016-02-16)


A vulnerability in glibc has been reported (CVE-2015-7547).

 

glibc is part of the Linux distribution; it is not part of the SAP HANA shipment. SAP HANA dynamically links against the glibc library of the Linux operating system. SAP HANA customers can update their operating system according to the maintenance agreement with their Linux vendor. A restart of processes/programs after the update might be necessary.

 

 

SAP explicitly allows customers to deploy security updates of the operating system.

SAP HANA on AWS Certified for X1 Instance with 2TB of RAM


SAP HANA was launched and first certified on AWS in 2013, and since then a large and still fast-growing number of customers have deployed SAP HANA on the AWS Cloud. Initially, first-time users ran non-production, test/dev, or QA workloads on cr1.8xlarge or r3.8xlarge instances with 244 GB of RAM. Increasingly, production setups involving HA/DR with high support and SLA requirements are becoming popular. Production workloads are, of course, the most demanding, and they are where we see SAP's deployment figures increasing. AWS has demonstrated its capability to support our mutual customers in this evolution with a series of impressive SAP BW-EML and SD benchmarks, which can be found here.

 

Customers are also running value-added scenarios with SAP HANA on AWS, including SAP HANA + SAP Vora + Hadoop on AWS, R integrated with SAP HANA, and even HCP integrations with AWS.

 

The more customers use all these innovations, the higher the demand for larger SAP HANA node sizes. As already announced, AWS has decided to offer X1 instances, featuring the x1.32xlarge with 2 TB of RAM, based on a 4-socket Intel E7-8880 v3 CPU. X1 instances have now been certified for SAP HANA OLAP and OLTP workloads in single-node deployments; the respective SAP benchmark results are soon to be published. With the latest SAP HANA core-to-memory ratio on SAP HANA SPS 11 or later on Haswell, this enables full usage of the RAM offered on X1 for both BWoH/OLAP and SoH/OLTP. This is surely a big leap for SAP HANA cloud deployments, especially for S/4HANA implementations.

There is a roadmap beyond SAPPHIRE 2016 for additional capabilities, such as scale-out deployments on X1, that will further boost the adoption of SAP HANA as the most innovative and powerful in-memory data platform. To learn more about X1, visit Amazon EC2 X1 Instances. To learn more about SAP HANA on AWS, please visit SAP HANA on the AWS Cloud.

LEARN WITH US : S/4HANA Analytics!!



 

It has been a great opportunity to attend the KeyInsight Conference 2016 by Complete_stream and garner lots of knowledge about SAP systems, upcoming systems, and systems upgraded to HANA and the cloud. This is our first blog on S/4HANA Analytics on scn.sap.com, and it gives us immense pleasure to be part of a huge community of SAP people and mentors. Writing a blog is part of our assignment, but the best part is that we are LEARNING! Thanks to Tony de Thomasis and bco6181 for inspiring us to plunge into this world.

 

What it is?


S/4HANA stands for the Simple 4th-generation Business Suite from SAP, which runs on SAP HANA. SAP HANA is one of the preferred products among companies seeking an optimized enterprise solution, because it has come a long way from its predecessors, which ran transaction processing and analytical processing on different platforms - and that meant more time spent on data output and decision making.


It is well known as The Next Generation Business Suite, and it is the greatest innovation since R/3 and ERP in the SAP world. It unites software and people to help businesses run in real time, networked, and in a simple way. It has built-in analytics for hybrid transactional and analytical applications.


What does it do?


SAP S/4HANA provides enhanced analytical capabilities thanks to its architecture based on SAP HANA. SAP S/4HANA is all about insight and immediate action on live data, doing away with batch processing and ETL. Some of the best features of S/4HANA analytics are cross-system online reports, a built-in BI system, Smart Business, analytical applications, and many more. Real-time reporting on data is available from the one single SAP S/4HANA component, which also lets you create quick queries and feed many other analytical tools.



How does real-time data and historical data work together?


SAP S/4 HANA Analytics + SAP Business Warehouse


Now let us have a closer look from the data perspective!


When a BW system is running on an SAP HANA database, the BW data is stored in a special schema known as the BW-managed schema and is exposed via InfoProviders (e.g. DataStore objects, InfoObjects, etc.). In other (remote) SAP HANA schemas, data can be stored in SAP HANA tables and accessed via HANA views based, for example, on Core Data Services (CDS) technology.

(Image source: http://itelligencegroup.com/wp-content/usermedia/hana-extensibility-2.png)

 

You can make data from any SAP HANA database schema of your choice available in BW. You can also make BW data (data from the BW-managed schema in the SAP HANA database) available in a different SAP HANA schema. To do so, you can use virtual access methods such as Open ODS views (using SAP HANA smart data access for remote scenarios) or data replication methods like the SAP LT Replication Server.
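
At the SQL level, for instance, SAP HANA smart data access exposes remote data through virtual tables. A sketch (the remote source, schema, and table names below are placeholders):

SQL> CREATE VIRTUAL TABLE "MYSCHEMA"."VT_SALES" AT "MY_REMOTE_SOURCE"."<NULL>"."REMOTESCHEMA"."SALES"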

 

S/4HANA analytics applies the concept of instant insight-to-action, using its built-in analytics for hybrid transactional and analytical processes. One set of applications that works on this principle is SAP Smart Business cockpits, which use advanced analytics to let business users see real-time data instantly and resolve any business situation. They are individualized, more accurate, more collaborative, and can be operated from anywhere, at any time.


 

Combining real-time data (via S/4HANA analytics) with multi-sourced data (via SAP Business Warehouse) has enabled SAP to provide a hybrid solution, which is a strategic move. S/4HANA analytics complements SAP BW powered by SAP HANA, helping organizations deliver better services and make better decisions.


Upcoming?


SAP Smart Business is the latest generation of decision-support applications, powered by S/4HANA and SAP Fiori, combining new working models with a consumer-grade, multi-channel user experience.


For more detail and in-depth insight, please visit:

http://sapbeluxevent.be/SAPForum/wp-content/uploads/2015/09/INNO-IT-4_Jurgen-Thielemans-S4HANA_Analytics_IT.pdf


____________________________

 


Special Thanks to :-

Tony de Thomasis, SAP Mentor and Lecturer

BCO6181: A Journey to Greatness, which inspired us to start blogging on SCN

Paul Hawking, SAP Mentor and Unit Coordinator at Victoria University

 

_____________________________


Thanks for Reading!!!


Authors,


Karthik Nagaraj and Neeralie Prajapati


SAP HANA SPS 12 What's New: Installation and Update - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

For the complete list of what's new blogs, see What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

The topic of this blog is installation and update.

 

For the SPS 11 version, see SAP HANA SPS 11 What's New: Installation and Update - by the SAP HANA Academy

 

 

Tutorial Video

 

SAP HANA Academy - SAP HANA SPS 12: What's New? - Installation and Update - YouTube

 

 

 

What's New?

 

Supported Operating Systems

 

For SAP HANA Platform SPS 12 on Intel-based hardware platforms the minimum operating system versions are:

 

For SAP HANA Platform SPS 12 on IBM Power Systems the minimum operating system version is:

  • SLES 11 SP4 for IBM Power

 

Download Components

 

Components and component updates can be downloaded from the SAP Support Portal using the Download Components tile in the SAP HANA Platform Lifecycle Management (HDBLCM) web tool.

 


 

In previous editions, SAP HANA studio was used to perform this task. Here, as elsewhere, we see the gradual move of functionality from the full Windows client to the web interface.

 


Note also that the new Download Components tile very much resembles the new SAP ONE Support Launchpad, section System Operations and Maintenance > Software Downloads (on premise).


 

Extract Components

 

Component archives which were downloaded from the SAP Support Portal can be prepared for the update using the extract_components action of the SAP HANA HDBLCM resident program in the command-line interface.

 

In previous editions, you had to perform this task using the SAPCAR tool (and not forget the signature validation described in SAP Note 2178665). The extract_components action now performs these tasks for you, so only a single tool, hdblcm, is needed for all tasks.
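
For example, on the resident installation (the <SID> path is a placeholder; in interactive mode the tool prompts for the component archives to extract):

/hana/shared/<SID>/hdblcm/hdblcm --action=extract_components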

 


 

Usability Improvements

 

The interactive modes of the SAP HANA database lifecycle manager (HDBLCM) have been optimized to deliver improved user experience. If a list contains only a single option, it is selected as the default value.

 

 

SAP HANA XS Advanced Runtime

 

The database lifecycle manager tool (hdblcm) now supports the installation and update of the SAP HANA XS Advanced Runtime.

 

The following XS Advanced Runtime parameters are available (a sample invocation follows the list):

  • xs_components_cfg - Specifies the path to the directory containing MTA extension descriptors (*.mtaext)
  • xs_customer_space_isolation - Run applications in customer space with a separate OS user
  • xs_customer_space_user_id - OS user ID used for running XS Advanced applications in customer space
  • xs_domain_name - Specifies the domain name of an xs_worker host
  • xs_routing_mode - Specifies the routing mode to be used for XS advanced runtime installations
  • xs_sap_space_isolation - Run applications in SAP space with a separate OS user
  • xs_sap_space_user_id - OS user ID used for running XS advanced runtime applications in SAP space
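
For example, a command-line installation that includes the XS Advanced Runtime might pass some of these parameters. This is a sketch only: the component selection and parameter values depend on your installation media and landscape, and hana.example.com is a placeholder domain:

./hdblcm --action=install --components=server,xs --xs_routing_mode=hostnames --xs_domain_name=hana.example.com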

 

Documentation

 

The SAP HANA Installation and Update Guide has been updated for the above-mentioned topics and has also been extended with a section on SAP HANA and virtualisation. For the latest support status, see SAP Note 1788665 - SAP HANA Support for virtualized / partitioned (multi-tenant) environments.

 

Additionally, all database lifecycle manager parameter references are now part of the Parameter Reference chapter in the SAP HANA Server Installation and Update Guide. For the resident lifecycle manager tool, these were previously documented in the Administration Guide only.

 

 


Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on http://linkedin.com/in/saphanaacademy.

SAP HANA, running on SUSE Linux Enterprise, coming to Azure


Microsoft CEO Satya Nadella and SAP CEO Bill McDermott today announced together, on stage at SAPPHIRE NOW 2016, joint plans to deliver broad support for the SAP HANA® platform deployed on Microsoft Azure.


SUSE and Microsoft today announce that SAP HANA is coming to Microsoft Azure running on SUSE Linux Enterprise Server.

 

So if you're looking to spin up SAP HANA instances on Azure, you'll now be able to - at the same pricing as for SUSE Linux Enterprise Server for SAP Applications.


“We’re excited that our partnership with SAP is delivering powerful, new options for SAP HANA deployments on Azure – including support for SUSE Linux Enterprise Server.” – Madhan Arumugam Ramakrishnan, Principal Manager, Microsoft Azure.

“SUSE, SAP, and Microsoft Azure continue to develop and deliver solutions for cloud data centers — with a focus on the enterprise,” said Kristin Kinan, Director of Public Cloud Alliances at SUSE.  “Bringing SAP HANA to Azure, running on SUSE Linux Enterprise Server, is the latest step in enabling maximum power and flexibility to our enterprise customers.”


Microsoft Azure will present at the SUSE booth (655) at SAPPHIRE NOW 2016 on Wednesday, May 18 at 11:10 - please join us.


SUSE was the development platform and the first Linux platform for SAP HANA, and SAP and SUSE have shared a long-standing partnership in the SAP Linux Lab in Walldorf, Germany.


If you're looking for technical guides and tech-casts on security hardening, scale-out, and system-replication failover, please see here. For more info on SUSE and Azure, see SUSE + Microsoft Azure | SUSE.


And watch this video to see why SUSE Linux Enterprise Server for SAP Applications is the Leading Linux Platform for SAP!

 

SUSE Linux Enterprise Server for SAP Applications is now available, on demand, via Amazon Web Services (AWS)


SUSE Linux Enterprise Server for SAP Applications is now available, on demand, via Amazon Web Services (AWS) and the AWS Marketplace, with no minimum fee; you pay only for the compute hours used.

 


 

This solution also includes high availability resource agents for deployments of the SAP HANA platform. These agents allow SAP HANA instances to fail over between AWS Availability Zones and were jointly engineered by SAP, SUSE, and Amazon to run on the AWS infrastructure.



And, of course, using SUSE’s “bring-your-own-subscription” program, you can use your existing SUSE Linux Enterprise for SAP Applications subscription to build and test SAP workloads on AWS. See aws.amazon.com/suse for more details on that program.


Swing by the SUSE (655) or Protera (473) booths to apply for a free proof of concept to help plan your SAP HANA deployment on AWS at SAPPHIRE NOW 2016 in Orlando.


AWS will also be speaking at the SUSE booth at 1:50pm on May 17 - come and listen!

 

“AWS provides the on-demand, highly reliable and scalable cloud computing services that meet the evolving needs of our customers,” said Naji Almahmoud, head of global business development for SUSE. “Expanding the availability of SUSE Linux Enterprise on AWS gives them more flexibility to take advantage of the leading Linux platform for SAP solutions.”


Dave McCann, vice president, AWS Marketplace and Catalog Services, Amazon Web Services, Inc., said, “SUSE is a pioneer in managing their customers’ complexity, reducing cost and delivering mission-critical cloud-based services, offering an innovative approach that brings business and IT together to innovate. We are excited to see the expanded availability of SUSE Linux Enterprise Server for SAP Applications on AWS through AWS Marketplace. The access will ensure SAP users experience the advantages provided by the on-demand, highly reliable and scalable cloud computing services of AWS.”

 

For technical papers and more information see: Amazon Web Services and SUSE

Enhanced database trace information for authorization issues in SAP HANA SPS 12

Predefined Users in SAP HANA


A number of predefined operating system and database users are required for installing, upgrading, and operating SAP HANA. Further users may exist depending on additionally installed components. A brief overview is available here: Predefined Users in SAP HANA
