
SAP HANA SPS 12 What's New: Performance Monitoring - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

The topic of this blog is performance monitoring.

 

For the SPS 11 version, see SAP HANA SPS 11 What's New: Performance Monitoring - by the SAP HANA Academy.

 

 

What's New?

 

SAP HANA Performance Monitoring Apps

 

SAP HANA Performance Monitoring is a new tile catalog available in SAP HANA cockpit with the deployment of the new Workload Replay delivery unit.

 

In this tile catalog, three new apps have been included:

  • Capture Workload
  • Replay Workload
  • Analyze Workload

 


 

Capturing and replaying workloads from an SAP HANA system can help you evaluate potential impacts on performance or stability after a change in hardware or software configuration.

 

Possible use cases are:

  • Hardware change
  • SAP HANA revision upgrade
  • SAP HANA INI parameter change
  • Table partitioning change
  • Index change
  • Landscape reorganization for SAP HANA scale-out systems

 

 

Tutorial Video

 

SAP HANA Academy - SAP HANA SPS 12: What's New? - Capture and Replay Workloads - YouTube

 

 

 

 

SAP HANA Administration Apps

 

Several apps in the SAP HANA Database Administration catalog of the SAP HANA cockpit have been enhanced for performance monitoring features.

 


 

Performance Monitor

 

Select Export All in the footer bar of the Performance Monitor app to export KPI data as a single data set. The resulting ZIP file can be imported into the new Support app (Support Tools tile). You can also save your own set of KPIs using the new Custom variant, and select Show Jobs to display, above the load graph, the jobs that affected your system performance.

 


 

Import and Export of Performance Data for Support Process

 

To analyze and diagnose database problems, you can now import performance monitor data from a ZIP file into SAP HANA cockpit. You can export data using the Performance Monitor app.

 


 

Threads

 

You can now monitor long-running threads and quickly analyze any blocking situation using the Threads app. The tile indicates the number of currently active and blocked threads.

 


 

Statements Monitor

 

The Monitor Statements tile indicates the number of long-running statements and blocking situations. The app displays information about the memory consumption of statements. New for SPS 12 is that Memory Tracking can be enabled or disabled in the footer bar. Memory tracking is required for Workload Management.
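Memory tracking can also be switched on at the configuration level; a minimal sketch using the resource_tracking parameters in global.ini (values illustrative):

-- enable resource and memory tracking so statement memory consumption is recorded
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('resource_tracking', 'enable_tracking') = 'on',
        ('resource_tracking', 'memory_tracking') = 'on'
    WITH RECONFIGURE;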

 


 

Workload Management Configuration

 

Manage all workload classes using the new Workload Classes app. Workload classes and workload class mappings can be created to configure workload management for the SAP HANA database. Memory tracking needs to be enabled for workload management.
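To give an idea of what the app manages under the hood, workload classes and mappings can also be created in SQL; a minimal sketch with illustrative names and limits:

-- limit statements of mapped sessions to 50 GB of memory and 20 threads
CREATE WORKLOAD CLASS "REPORTING" SET 'STATEMENT MEMORY LIMIT' = '50', 'STATEMENT THREAD LIMIT' = '20';
-- map sessions of a given application user to the class; the user name is an example
CREATE WORKLOAD MAPPING "REPORTING_USERS" WORKLOAD CLASS "REPORTING" SET 'APPLICATION USER NAME' = 'REPORT_USER';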

 


 

Documentation

 

A new paragraph has been added to the SAP HANA Troubleshooting and Performance Guide about network performance and connectivity problems.

 

The following topics are addressed:

  • Network Performance Analysis on Transactional Level
  • Stress test with SAP's NIPING tool to confirm the high network latency (or bandwidth exhaustion)
  • Application and Database Connectivity Analysis
  • SAP HANA System Replication Communication Problems
  • Analysis steps to resolve SAP HANA inter-node communication issues

 


 

Additional Information

 

Help Portal: SAP HANA Platform Core SPS 12

 

 

SAP Notes

 

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on http://linkedin.com/in/saphanaacademy.


Capturing and Replaying Workloads - by the SAP HANA Academy


Introduction

 

One of the new SPS 12 features for monitoring and managing performance in SAP HANA is the ability to capture and replay workloads. This feature enables you to take a performance snapshot of your current system -- a captured workload -- and then execute the same workload again on the system (or another system from backup) after some major hardware or software configuration change has been made. This will help you evaluate potential impacts on performance or stability after, for example, a revision upgrade, parameter modifications, table partition or index changes, or even whole landscape reorganisations.

 

In this blog, I will describe the required preparation and the operational procedures.

 

Preparation

 

Import Delivery Unit

 

To capture, replay and analyze workloads you use the three new apps in the equally new SAP HANA Performance Monitoring tile catalog of the SAP HANA cockpit.

 


 

The apps are not included with a standard installation of SAP HANA but are provided as a delivery unit (DU): HANA_REPLAY.

 

You can import the DU using the SAP HANA Application Lifecycle Management (ALM) tool, which is part of SAP HANA cockpit. Alternatively, you can use the ALM command line tool (hdbalm) or SAP HANA studio (File > Import > SAP HANA Content > Delivery Unit).

 


 

Grant Roles

 

The DU adds the following roles:

  • sap.hana.replay.roles::Capture
  • sap.hana.replay.roles::Replay
  • sap.hana.workloadanalyzer.roles::Administrator
  • sap.hana.workloadanalyzer.roles::Operator

 

Typically, you would grant the Capture and Replay roles to a user with system administration privileges. This could be the same user or different users.

 

The workloadanalyzer roles are granted to users who need to perform the analysis on the target system. Operators have read-only access to the workload analysis tool.

 

 

Configure SAP HANA cockpit

 

The Analyze Workload app is added automatically to the SAP HANA cockpit if you have either of the two workloadanalyzer roles. The Capture Workload and Replay Workload apps need to be added manually from the tile catalog.

 

 

Configure Replayer Service

 

On the target system you need to configure and start the replayer service before you can replay a workload.

 

For this, you need access to the SAP HANA host as the system administrator and need to create the file wlreplayer.ini in the directory $SAP_RETRIEVAL_PATH, typically /usr/sap/<SID>/HDB<instance_number>/<hostname>.

 

This file needs to contain the following lines:

[communication]
listeninterface = .global

[trace]
filename = wlreplayer
alertfilename = wlreplay_alert

 

Next, start the replayer service with the hdbwlreplayer command:

hdbwlreplayer -controlhost hana01 -controlinstnum 00 -controladminkey SYSADMIN,HDBKEY -port 12345

 

Use the following values for the parameters:

 

Parameter         Description
controlhost       database host name
controlinstnum    database instance number
controladminkey   user name and secure store key (separated by a comma)
port              available port
controldbname     optionally, the database name in the case of a multitenant database container system

 

Secure Store Key

 

In case you are not familiar with secure store keys, or need a refresher, see SAP HANA database interactive terminal (hdbsql) - by the SAP HANA Academy or the video SAP HANA Academy - SAP HANA Administration: Secure User Store - hdbuserstore [SPS 11] - YouTube
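As a quick illustration (host, port, and password are placeholders), a secure store key such as HDBKEY can be created on the host with hdbuserstore:

hdbuserstore SET HDBKEY hana01:30015 SYSADMIN <password>
hdbuserstore LIST HDBKEY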

 

 

 

Procedure

 

Once you have performed the preparation steps, the procedure is simple.

 

1. Capture Workload

 

Connect with SAP HANA cockpit to the system, open the Capture Workload app and click Start New Capture in the Capture Management display area. Provide a name and an optional description and use the ON/OFF switches to collect an explain plan or performance details. The capture can be started on demand or scheduled. Optionally, filters can be set on application name, database user, schema user, application user, client, or statement type (DML, DDL, procedure, transaction, session, system). A threshold duration and the passport trace level can also be set.

 


 

When done, click Stop Capture.

 


 

Optionally, you can set the capture destination, trace buffer size and trace file size for all captures with Configure Capture.

 


 

2. Replay Workload: Preprocess

 

Once one or more captures have been taken, open the Replay Workload app from the SAP HANA cockpit to preprocess the capture. The captured workloads are listed in the Replay Management display area. Click Edit and then click Start Preprocessing on the bottom right.

 


 

2. Replay Workload

 

Once the capture has been preprocessed, you can start the replay from the same Replay Workload app.

 

First select the (preprocessed) replay candidate that you want to replay, then select Configure Replay.


 

In the Replay Configuration window, you need to provide:

  • Host, instance number and database mode (Multiple for a multitenant database container system) of the HANA system
  • Replay Admin user (with role sap.hana.replay.roles::Replay) with either password or secure store key
  • Replay speed: 1x, 2x, 4x, 8x, 16x
  • Collect Explain plan
  • Replayer Service
  • User authentication from the session contained in the workload

 


When the Replay has finished, you can select Go to Report to view replay statistics.

 


 


 

 

3. Analyze Workload

 

The third and final step is to analyze the workload. For this, start the Analyze Workload app from the SAP HANA cockpit. You can analyze along different dimensions such as Service, DB User, Application Name, and so on.

 


 

 

Video Tutorial

 

In the video tutorial below, I show you the whole process, both preparation and procedure, in less than 10 minutes.

 

 

 

More Information

 

SAP HANA Academy Playlists (YouTube)

 

SAP HANA Administration - YouTube

 

Product documentation

 

Capturing and Replaying Workloads - SAP HANA Administration Guide - SAP Library

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

SAP HANA SPS 12 What's New: Platform Lifecycle Management - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

The topic of this blog is SAP HANA Platform Lifecycle Management new features.

 

For the related installation and update topics, see SAP HANA SPS 12 What's New: Installation and Update - by the SAP HANA Academy

 

 

Tutorial Video

 

SAP HANA Academy - SAP HANA SPS 12: What's New? - Platform Lifecycle Management - YouTube

 

 

 

What's New?

 

Converting an SAP HANA System to Support Multitenant Database Containers

 

SAP HANA SPS 9 introduced the multitenant database container concept, where a single SAP HANA system contains one or more SAP HANA tenant databases. This allows for an efficient usage of shared resources, both hardware and database management.

 

At install time, you select the SAP HANA database mode: single container or multiple containers. Should you want to change the mode after the installation, you have to perform a conversion. In earlier revisions, this task was performed on the command line with the tool hdbnsutil, as you can view in the following tutorial video:

 

 

As of SPS 12, an SAP HANA system can now be converted to support multitenant database containers using the SAP HANA database lifecycle manager (HDBLCM) resident program. With every installation of SAP HANA, hdblcm is included and enables you to perform common post-installation and configuration tasks. The tool is hosted by the SAP host agent and not, like the SAP HANA cockpit, by the SAP HANA database.

 

The Convert to Multitenant Database Containers task is available for all interfaces: web, graphical user interface, and command line, but the web interface allows you to set advanced parameters:

  • Import delivery units into the system database (default = Y)
  • Do not start instance after reconfiguration
  • Do not start tenant database after reconfiguration
  • Set instance startup and shutdown timeout

 

During the conversion, the original system database is configured as a tenant and a new system database is created. This operation is quick, as we only need to shut down the SAP HANA database, update a few settings, and restart the instance. Importing the standard HANA content (web IDE, SAPUI5, cockpit, etc.) takes the most time and can optionally be postponed or skipped altogether.
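As a rough sketch of how the task is started from the command line (run on the SAP HANA host; the action name convert_to_multidb is the one documented for this task, so verify it against the task list offered by your resident hdblcm):

cd /hana/shared/<SID>/hdblcm
./hdblcm --action=convert_to_multidb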

 

Note that the conversion is permanent.

 

 


 

The web UI allows you to set the advanced parameters.


 

Adding and Removing Host Roles

 

It is now possible to add and remove host roles after installation in a single-host or multiple-host SAP HANA system using the SAP HANA database lifecycle manager (HDBLCM) resident program.

 

As of SPS 10, you have the option to install SAP HANA systems with multiple host roles - including database server roles and SAP HANA option host roles - on one host, or give an existing SAP HANA host additional roles during system update. This enables you to share hardware between the SAP HANA server and SAP HANA options. This concerns the MCOS deployment type: Multiple Components One System.

 

Typical roles are worker and standby, and they exist for the SAP HANA database, dynamic tiering, the accelerator for SAP ASE, and the XS advanced runtime. Additionally, roles are available for smart data streaming and remote data sync.

 

Database worker is the default role. In distributed systems with multiple SAP HANA hosts, hosts can be assigned the standby role for high-availability purposes.

 


 

System Verification Tool

 

You can check the installation of an SAP HANA system using the SAP HANA database lifecycle manager (HDBLCM) resident program in the command-line interface. The check tool outputs basic information about the configuration of the file system, system settings, permission settings, and network configuration, and you can use the generated log files as a reference when troubleshooting.

 


 

Documentation

 

A new guide is available that documents how to configure, manage, and monitor an SAP HANA system that supports SAP HANA multitenant database containers.

 


 

 

Documentation

 

For more information see:

 

SAP Help Portal

 

 

SAP Notes

 

 

 

SCN Blogs

 

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on http://linkedin.com/in/saphanaacademy.

MDC conversion on HANA System Replication configured


Just to share some tips on converting a single-container system with HANA System Replication configured to MDC (multitenant database containers).

 

An MDC system can only be replicated as a whole, meaning that the system database and all tenant databases are part of system replication. A takeover happens for the whole HANA database (system database + all tenant databases); it is not possible to take over just a particular container.
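For context, a takeover is therefore always triggered for the system as a whole; a minimal sketch, run as <sid>adm on the secondary site:

# takes over the whole system (system database plus all tenant databases)
# hdbnsutil -sr_takeover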

 

In our scenario, we have system replication set up for single-container systems running on revision 112.02, and we decided to convert them to MDC. As we know, primary and secondary must be identical (N+N nodes (except standby) and services) for system replication, and there is no exception for MDC.

 

Hence, I don't see any other way than breaking the system replication between primary and secondary, converting them to MDC individually, and reconfiguring system replication.

 

The steps performed are as follows:

1) Stop Secondary

# HDB stop

 

2) On secondary, clean up replication config

# hdbnsutil -sr_cleanup --force

 

3) Start up the secondary. The secondary now starts up as an active database

# HDB start

 

4) On primary, clear system replication config.

# hdbnsutil -sr_disable --force

 

Once done, you can check with the command # hdbnsutil -sr_state --sapcontrol=1

 

It is critical to clear the system replication configuration to avoid hitting the error below during the MDC conversion:


/hana/shared/SID/exe/linuxx86_64/hdb/python_support> python convertMDC.py

Stop System

Convert Topology to MDC

Set database Isolation low

Export Topology

Reinit SYSTEMDB persistence

RETURN CODE:

1

OUT BEGIN:


error: 'hdbnsutil -initTopology' is not allowed on system replication sites.

failed.


OUT END

ERROR BEGIN


ERROR END

'hdbnsutil failed!'

 

I believe the above error is due to SAP Note 2281734 - Re-Initialize secondary site in HANA system replication, where hdbnsutil -initTopology is prohibited on both primary and secondary system replication sites to avoid data loss.

 

If you hit the above error, you can't redo the MDC conversion as the topology has already been converted to multidb. The workaround is to bring up the nameserver and reset the SYSTEM user password manually. Refer to the administration guide, section on resetting the SYSTEM user password in MDC.

 

5) Convert both the primary and the secondary to MDC by running python convertMDC.py (this can be done on both at the same time).

 

6) The MDC conversion completed and the systems were started:

 

shutdown is completed.

Start System

Conversion done

Please reinstall your licenses and delivery units into the SYSTEMDB.

Tenant SID can now be started by execution:

           "ALTER SYSTEM START DATABASE SID"

 

7) Go to the primary and start up the tenant:

# ALTER SYSTEM START DATABASE SID

 

8) On the primary, re-enable system replication by running the command below:

# hdbnsutil -sr_enable --name=UrName

 

9) Stop the secondary and perform the replication setup:

hdbnsutil -sr_register --remoteHost=PrimaryHost --remoteInstance=## --replicationMode=syncmem --operationMode=delta_datashipping --name=UrName

 

10) In Studio -> Primary -> Landscape -> System Replication, you will notice that full data replication is needed.

 

11) Once the full data shipping has completed, your replication should now be active with MDC.


 

On the secondary you'll see the corresponding replication status.


 

12) Redeploy the delivery units by running the command below on the primary:

# /hana/shared/SID/global/hdb/install/bin> ./hdbupdrep

 

Now, your MDC conversion with system replication setup is completed.

 

 

Also, I've tested the scenarios below:

 

a) On the primary, convert the single container to MDC whilst system replication is running; this encountered the error below:

error: 'hdbnsutil -initTopology' is not allowed on system replication sites.

failed.

 

b) On the primary, convert the single container to MDC with the system replication configuration in place but the secondary shut down; this encountered the same error:

error: 'hdbnsutil -initTopology' is not allowed on system replication sites.

failed.

 

c) Converted only the primary to MDC. Tried to start up the secondary to resume replication, but the secondary refused to start because the replication port is different: 4XX00 is used instead of 3XX00 for SAP HANA system replication with MDC.

 

Hopefully in a future revision, MDC conversion on an existing system replication setup will be much easier, without the need to break replication and synchronize again with full data shipping.

 

Please share if there's an alternative way of doing this, for whoever has done the MDC conversion with HANA system replication configured. Would be interested to know ;-)

 

Hope it helps and enjoy!

 

Thanks,

Nicholas Chang

SAP HANA Distinguished Engineer (HDE) Webinar: Overview of SAP HANA On-Premise Deployment Options


Join the SAP HANA Distinguished Engineer (HDE) Webinar (part of SAP HANA iFG Community Calls) to learn about SAP HANA on-premise deployment options.


Title: Overview of SAP HANA On-Premise Deployment Options

Speaker: Tomas Krojzl, SAP HANA Distinguished Engineer, SAP HANA Specialist, SAP Mentor, IBM

Moderator: Scott Feldman

Date: June 2nd, 2016  Time: 8:00 - 9:30 AM Pacific, 11:00 - 12:30 PM Eastern (USA), 5:00 PM CET (Germany)




See all SAP HANA Distinguished Engineer (HDE) webinars here.


Abstract:

SAP HANA can be deployed on-premise in many different ways: single-node or scale-out, bare metal or virtualized, appliance or TDI. With these infrastructure options there are multiple ways to share one environment between multiple applications. We provide a basic orientation among the individual deployment options and share best-practice experience on which combinations are good and which choices should be avoided.

Join the session to get an overview of SAP HANA on-premise deployment options.

To join the meeting: https://sap.na.pgiconnect.com/i800545

Participant Passcode: 110 891 4496



Germany: 0800 588 9331 tel:08005889331,,,1108914496#


UK: 0800 368 0635 tel:08003680635,,,1108914496#


US and Canada: 1-866-312-7353 tel:+18663127353,,,1108914496#

For all other countries, see the attached meeting request.

 

About Tomas:

SAP HANA Specialist (SAP Mentor, SAP HANA Distinguished Engineer), Certified SAP HANA Specialist/Architect focused on SAP HANA data centric architecture (infrastructure, High Availability, Disaster Recovery, etc.), integration (Monitoring, Backups, etc.), deployment (implementation projects) and operation.


Background: SAP HANA Distinguished Engineers are the best of the best, hand-picked by the HDE Council, who are not only knowledgeable in implementing SAP HANA but also committed to sharing their knowledge with the community.

 

As part of the effort to share experiences made by HDEs, we started this HDE webinar series.

 

This webinar series is part of the SAP HANA International Focus Group (iFG).

Join the SAP HANA International Focus Group (iFG) to gain exclusive access to webinars, experts, SAP HANA product feedback, customer best practices, education, and peer-to-peer insights, as well as virtual and on-site programs.

You can see the upcoming SAP HANA iFG session details here.

 

Note: If you get an "Access Denied" error while accessing the SAP HANA iFG webinar series / sessions, you need to first join the community to gain access.

 

Follow HDEs on Twitter @SAPHDE

Follow me on Twitter @rvenumbaka

What's New with SAP HANA - by the SAP HANA Academy


Introduction

 

This blog provides an overview of all SAP HANA What's New playlists and SCN blogs published by the SAP HANA Academy, together with other related information.

 


 

SCN Blogs - by the SAP HANA Academy

 

 

 

SAP HANA Academy playlist on YouTube

 

 


 

What's New blogs on blogs.saphana.com

 

 

 

Introducing SAP HANA on hana.sap.com

 

 

 

SAP Help Portal

 

 

 

SAP Notes

 

 

 

Product Availability Matrix (PAM)

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy

 

Connect with us on LinkedIn

SAP HANA SPS 12 What's New: Backup and Recovery - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

The topic of this blog is backup and recovery.

 

For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

For the SPS 11 version, see SAP HANA SPS 11 What's New: Backup and Recovery - by the SAP HANA Academy

 

 

Tutorial Video

 

SAP HANA Academy - SAP HANA SPS 12: What's New? - Backup and Recovery - YouTube

 

 

 

What's New?

 

Schedule Data Backups (SAP HANA Cockpit)

 

You can now schedule complete data backups or delta backups to run at specific intervals using the Backup tile of the SAP HANA cockpit. Backup scheduling relies on the XS Job Scheduler and requires the SAP HANA database to be up and running.

 

For each schedule, you define the backup type, destination type, prefix, and destination.


The schedule requires a name, a start time, and a recurrence: daily, weekly, or monthly, with a time of day.


 

The schedule listing includes a pause (||) button. Once created, schedules cannot be modified, only deleted.


 

Estimated Backup Size (SAP HANA Cockpit)

 

When you create a backup, SAP HANA Cockpit now also displays the estimated backup size. This feature was earlier available in SAP HANA studio.

 

By toggling between the backup types, you can easily compare the estimated backup sizes of complete, incremental and differential backups.
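For reference, the three backup types correspond to the following SQL statements; a minimal sketch, with the backup prefixes chosen freely:

BACKUP DATA USING FILE ('COMPLETE_2016_05_27');
BACKUP DATA INCREMENTAL USING FILE ('INCREMENTAL_2016_05_27');
BACKUP DATA DIFFERENTIAL USING FILE ('DIFFERENTIAL_2016_05_27');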

 


 

You can view the backup prefix in the Backup Overview page.

 

 

Resuming an Interrupted Recovery

 

As of SPS 12, it is possible to resume an interrupted recovery instead of repeating the entire recovery from the beginning. For this, you need a full data backup, optionally delta backups, and log backups.

 

During a recovery, SAP HANA automatically defines fallback points, which mark the point after which it is possible to resume a recovery. The fallback points are recorded in backup.log, which indicates whether it is possible to resume a recovery.

 

The log replay interval is configurable: global.ini, parameter log_recovery_resume_point_interval (range 0 - 18000; default = 1800 s).


Note that it is normally only necessary to resume a recovery in exceptional circumstances.

 

 

Recovery Enhancements

 

As of SPS 12, it is now possible to

  • recover an SAP HANA database using a combination of a storage snapshot and delta backups (incremental and differential backups)
  • reconstruct the SAP HANA backup catalog using file-based delta data backups
  • identify a specific data backup by specifying backup destination, prefix, and SID (when using BACKINT in case that the backup catalog is not available)

 

 

Documentation

 

For more information see:

 

SAP Help Portal

 

 

SAP Notes

 

 

 

SCN Blogs

 

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on http://linkedin.com/in/saphanaacademy.

SAP HANA Vora: Graphical Modelling tool basic example


SAP HANA Vora is a 'Big Data' in-memory reporting engine sitting on top of a Hadoop cluster.

Data can be loaded into the Hadoop cluster's memory from multiple sources, e.g. HANA, the Hadoop File System (HDFS), and remote file systems like AWS S3.

 

With the release of SAP HANA Vora 1.2 it's now possible to graphically model views (e.g. joining multiple datasets), similar to a HANA calculation view.

The following link has all the details to get you started with Vora: SAP HANA Vora - Troubleshooting.

 

This blog contains a very basic introductory example of using the new graphical modelling tool.

The steps are:

  1. Create 2 example datasets in HDFS, using scala and spark
  2. Create Vora tables, linked to these files
  3. Model a view joining these tables, and filtering on key elements

 

Firstly, the following two datasets need to be created for transactional and master data (reporting attributes).

 

Transactional Data

COMPANYCODE   ACCOUNTGROUP   AMOUNT_USD
AU01          Revenue           300.0
GB01          Revenue         1,000.0
US01          Revenue         5,000.0
US01          Expense        -3,000.0
US02          Revenue           700.0

 

Master Data

COMPANYCODE   DESCRIPTION                  COUNTRY
AU01          Australia 1                  AU
GB01          United Kingdom 1             UK
US01          United States of America 1   US
US02          United States of America 2   US

 

In the following steps, the open-source Zeppelin notebook is used to interact with Vora, Spark, and HDFS.

 

Open Zeppelin and create a new notebook.


 

Next create the sample data using Spark and Scala.

Create sample Company Data and save to HDFS

// delete any previous output; fs and Path come from the Hadoop FileSystem imports
// shown in the directory-listing snippet further below - run those first
fs.delete(new Path("/user/vora/zeptest/companyData"), true)
val companyDataDF = Seq(
    ("GB01","Revenue", 1000.00),
    ("US01","Revenue", 5000.00),
    ("US01","Expense",-3000.00),
    ("US02","Revenue", 700.00),
    ("AU01","Revenue", 300.00)).toDF("Company","AccountGroup","Amount_USD")
companyDataDF.repartition(1).save("/user/vora/zeptest/companyData", "parquet")


 

Create sample Company Master Data and save to HDFS

fs.delete(new Path("/user/vora/zeptest/companyAttr"), true)
val companyAttrDF = Seq(
    ("GB01","United Kingdom 1", "UK"),
    ("US01","United States of America 1", "US"),
    ("US02","United States of America 2", "US"),
    ("AU01","Australia 1", "AU")).toDF("Company","Description", "Country")
companyAttrDF.repartition(1).save("/user/vora/zeptest/companyAttr", "parquet")


 

 

Let's now check in HDFS that the directories/files have been created.

Directory listing in HDFS

import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
val fs = FileSystem.get(sc.hadoopConfiguration)
var status = fs.listStatus(new Path("/user/vora/zeptest"))
status.foreach(x => println(x.getPath))

 

 

Next, use the %vora interpreter in Zeppelin to create the Vora tables.

 

Create the Vora Tables

%vora CREATE TABLE COMPANYDATA(
    COMPANYCODE VARCHAR(4),
    ACCOUNTGROUP VARCHAR(10),
    AMOUNT_USD DOUBLE
)
USING com.sap.spark.vora
OPTIONS (
    tableName "COMPANYDATA",
    paths "/user/vora/zeptest/companyData/*",
    format "parquet"
)

%vora CREATE TABLE COMPANYATTR(
    COMPANYCODE VARCHAR(4),
    DESCRIPTION VARCHAR(50),
    COUNTRY VARCHAR(2)
)
USING com.sap.spark.vora
OPTIONS (
    tableName "COMPANYATTR",
    paths "/user/vora/zeptest/companyAttr/*",
    format "parquet"
)

 

 

Next, use the %vora interpreter in Zeppelin to check that the tables have been loaded correctly.

Check the Vora Tables

%vora show tables

%vora SELECT * FROM COMPANYDATA order by COMPANYCODE , ACCOUNTGROUP DESC

 



Now, with the tables created, we are ready to use the modelling tool.


Launch the Vora tools (running on port 9225 on the Developer Edition).


Vora tables created in Zeppelin or other instances of the Spark context may not yet be visible in the Data Browser.

To make them visible, use the SQL Editor and register the previously created tables using the following statement:


REGISTER ALL TABLES USING com.sap.spark.vora OPTIONS (eagerLoad "false") ignoring conflicts




The tables are now visible for data preview via the 'Data Browser'.


Now the 'Modeler' can be used to create the view VIEW_COMPANY_US_REVENUE.


In this example the modelling tool is used to:

  • Join Transactional data and Master data  on COMPANYCODE
  • Filter by COUNTRY = 'US' and ACCOUNTGROUP = 'Revenue'
  • AMOUNT_USD results summarised by COUNTRY



 

The generated SQL of the view can be previewed.
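As an illustration only (the SQL actually generated by the Modeler will differ in detail), the view corresponds roughly to the following, using the tables and columns created above:

SELECT A.COUNTRY, SUM(D.AMOUNT_USD) AS AMOUNT_USD
FROM COMPANYDATA D
INNER JOIN COMPANYATTR A ON D.COMPANYCODE = A.COMPANYCODE
WHERE A.COUNTRY = 'US'
  AND D.ACCOUNTGROUP = 'Revenue'
GROUP BY A.COUNTRY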



Once saved the new view VIEW_COMPANY_US_REVENUE can be previewed via the 'Data Browser'.


The new view will be accessible via external reporting tools, Zeppelin, and other Spark contexts.


I hope this helps get you started exploring the capabilities of Vora.


Modelling Learning Double Action or two things I just learned about modelling in SAP HANA SPS 11


Far fetched...


A colleague asked me over a year ago (2015 and SPS 9 ... sounds ancient now, I know) whether it is possible to leverage information models in a different SAP HANA instance via SDA (Smart Data Access - look it up in the documentation if you didn't know this yet).

The scenario in mind here was a SAP BW on HANA system reading data from a Suite on HANA system and using the SAP HANA live content (http://scn.sap.com/docs/DOC-59928, http://help.sap.com/hba) installed there.

The Open ODS feature of SAP BW on HANA was to be used here as it allows reading from tables and views exposed via SDA in the local SAP HANA instance.

 

Now this idea sounds splendid.

Instead of having to manually build an extractor or a data export database view (both of which can be extensive development efforts), why not simply reuse the ready-made content of SAP HANA Live for this?

As usual the proof of the pudding is in the eating and as soon as it was tried out a severe shortcoming was identified:

 

select * from "LARS"."IMACCESS_LBPB/SCV_USERS"    ('PLACEHOLDER' = ('$$userNameFilter$$', 'USER_NAME= LARS'))
Could not execute 'select * from "LARS"."IMACCESS_LBPB/SCV_USERS"('PLACEHOLDER' = ('$$userNameFilter$$', 'USER_NAME= ...'
SAP DBTech JDBC: [7]: feature not supported:
Cannot use parameters on row table: IMACCESS_LBPB/SCV_USERS: line 1 col 22 (at pos 21)

BOOM!

I just created an Information Model similar to the ones provided with the SAP HANA Live content including the heavily used Input Parameters to enable the model to be flexible and reusable (and also to allow filter push-down) but SAP HANA tells me:


"Nope, I'm not doing this, because the PLACEHOLDER syntax only works for information views and not for 'row tables'."

 

This 'row table' part of the error message stems from the fact that SAP HANA SPS 9 showed SDA tables as row store tables. This also means that all data read from the SDA source gets temporarily stored in SAP HANA row store tables before further processed in the query.

One reason for doing that probably was that the mapping from ODBC row format to column store format (especially the data type mapping from other vendors DBMS) was easier to manage with the SAP HANA row store.

Having said that, when accessing another SAP HANA system, such format mapping surely should be no problem, right?

Right.

And in fact there is an option to change this: the parameter "virtual_table_format" in the "smart_data_access" section of the indexserver.ini:

 

= Configuration

Name                     | Default

  indexserver.ini          |       

    smart_data_access      |       

     virtual_table_format  | auto 

 

This parameter can be set to ROW, COLUMN or AUTO (the SPS 11 default value, automatically using the right format depending on the SDA adapter capabilities).
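Changing it is a standard configuration statement; a minimal sketch, assuming you want the COLUMN format system-wide:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('smart_data_access', 'virtual_table_format') = 'COLUMN'
    WITH RECONFIGURE;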

For more on how "capabilities" influence the SDA adapter behavior, check the documentation.

 

Back last year I wasn't aware of this parameter and so I couldn't try and see if, after changing the parameter, the query would've worked.

Anyhow, like all good problems the question just popped up again and I had an opportunity to look into this topic once more.

 

"Smarter" at last...

And lo and behold, with SAP HANA SPS 11 the PLACEHOLDER syntax works like a charm even for virtual tables.

 

SELECT -- local execution ---
    "D10_VAL",
    "D100_VAL",
    sum("KF1") AS "KF1",
    sum("KF2") AS "KF2",
    sum("CC_KF1_FACTORED") AS "CC_KF1_FACTORED"
FROM "_SYS_BIC"."devTest/stupidFactView"
    ('PLACEHOLDER' = ('$$IP_FACTOR$$','34'))
WHERE "D10_VAL" = 'DimValue9'
  AND "D100_VAL" = 'DimValue55'
GROUP BY
    "D10_VAL",
    "D100_VAL";

 

/*

D10_VAL     D100_VAL    KF1         KF2         CC_KF1_FACTORED

DimValue9   DimValue55  -1320141.70 525307979   -44884817     

 

 

successfully executed in 352 ms 417 µs  (server processing time: 7 ms 385 µs)

successfully executed in 356 ms 581 µs  (server processing time: 8 ms 437 µs)

successfully executed in 350 ms 832 µs  (server processing time: 8 ms 88 µs)

 

 

OPERATOR_NAME       OPERATOR_DETAILS                                         EXECUTION_ENGINE

COLUMN SEARCH       'DimValue9',

                     DIM1000.D100_VAL,

                     SUM(FACT.KF1),

                     SUM(FACT.KF2),

                     TO_BIGINT(TO_DECIMAL(SUM(FACT.KF1), 21, 2) * '34')

                     (LATE MATERIALIZATION, OLTP SEARCH, ENUM_BY: CS_JOIN)   COLUMN

  AGGREGATION       GROUPING:

                        DIM1000.VAL,

                    AGGREGATION:

                        SUM(FACT.KF1),

                        SUM(FACT.KF2)                                        COLUMN

    JOIN            JOIN CONDITION:

                    (INNER) FACT.DIM100 = DIM1000.ID,

                    (INNER) FACT.DIM10 = DIM10.ID                            COLUMN

      COLUMN TABLE                                                           COLUMN

      COLUMN TABLE  FILTER CONDITION: DIM1000.VAL = n'DimValue55'            COLUMN

      COLUMN TABLE  FILTER CONDITION: DIM10.VAL = n'DimValue9'               COLUMN

*/

 

See how the SPS 11 SQL optimisation is visible in the EXPLAIN PLAN: since the tables involved are rather small and only two dimensions are actually referenced, the OLAP engine (usually responsible for STAR SCHEMA queries) didn't kick in, but the execution was completely done in the Join Engine.

 

Also notable: the calculated key figure was reformulated internally into a SQL expression AFTER the parameter value (34) was supplied.

This is a nice example for how SAP HANA does a lot of the query optimisation upon query execution.

If I had used a placeholder (question mark - ?) for the value instead, this whole statement would still work, but it would not have been optimised by the SQL optimizer and instead the calculation view would've been executed "as-is".

 

Now the same statement accessing the "remote" view:

     

SELECT -- SDA access ---
    "D10_VAL",
    "D100_VAL",
    sum("KF1") AS "KF1",
    sum("KF2") AS "KF2",
    sum("CC_KF1_FACTORED") AS "CC_KF1_FACTORED"
FROM "DEVDUDE"."self_stupidFactView"
    ('PLACEHOLDER' = ('$$IP_FACTOR$$','34'))
WHERE "D10_VAL" = 'DimValue9'
  AND "D100_VAL" = 'DimValue55'
GROUP BY
    "D10_VAL",
    "D100_VAL";

/*

D10_VAL     D100_VAL    KF1         KF2         CC_KF1_FACTORED

DimValue9   DimValue55  -1320141.70 525307979   -44884817     

 

successfully executed in 351 ms 430 µs  (server processing time: 12 ms 417 µs)

successfully executed in 360 ms 272 µs  (server processing time: 11 ms 15 µs)

successfully executed in 359 ms 371 µs  (server processing time: 11 ms 914 µs)

 

OPERATOR_NAME           OPERATOR_DETAILS                                                       EXECUTION_ENGINE

COLUMN SEARCH           'DimValue9', self_stupidFactView.D100_VAL,

                        SUM(self_stupidFactView.KF1),

                        SUM(self_stupidFactView.KF2),

                        SUM(self_stupidFactView.CC_KF1_FACTORED)

                        (LATE MATERIALIZATION, OLTP SEARCH, ENUM_BY: REMOTE_COLUMN_SCAN)       COLUMN

  COLUMN SEARCH         SUM(self_stupidFactView.KF1),

                        SUM(self_stupidFactView.KF2),

                        SUM(self_stupidFactView.CC_KF1_FACTORED),

                        self_stupidFactView.D100_VAL

                        (ENUM_BY: REMOTE_COLUMN_SCAN)                                          ROW

    REMOTE COLUMN SCAN  SELECT SUM("self_stupidFactView"."KF1"),

                        SUM("self_stupidFactView"."KF2"),

                        SUM("self_stupidFactView"."CC_KF1_FACTORED"),

                        "self_stupidFactView"."D100_VAL"

                        FROM "_SYS_BIC"."devTest/stupidFactView"

                            ( PLACEHOLDER."$$IP_FACTOR$$" => '34' )  "self_stupidFactView"

                        WHERE "self_stupidFactView"."D10_VAL" = 'DimValue9'

                        AND "self_stupidFactView"."D100_VAL" = 'DimValue55'

                        GROUP BY "self_stupidFactView"."D100_VAL"                               EXTERNAL

 

*/  

Because of the mentioned parameter setting, SAP HANA can now create a statement that can be sent to the "remote" database to produce the wanted output.

Note how the statement in the REMOTE COLUMN SCAN is not exactly the statement we used: the aggregated columns now come first in the statement and the parameter syntax used is the new "arrow"-style syntax (PLACEHOLDER."$$<name>$$" => '<value>'). This nicely reveals how SDA actually rewrites the statement in order to get the best outcome depending on the source system's capabilities.

 

For a better overview of what happens in both scenarios, please look at this piece of ASCII art in awe:

 

|[ ]| = system boundaries

 

local statement execution

|[SQL statement ->    Information view -> Tables +]|

                                                  |

|[       RESULT < -------------------------------+]|

 

 

SDA statement execution

|[SQL Statement -> Virtual Table -> SDA connection ->]| --- ODBC transport --> |[ Information view -> Tables +]|

                                                                                                             |

|[       RESULT < -----------------------------------]| <-- ODBC transport --- |[--<  RESULT <---------------+]|

 

For more on SDA, BW on HANA and how both work together have a look here:

 

And while there, don't miss out on the other "new in SPS 11" stuff (if not already familiar with it anyhow).

 

The Web, Stars and the importance of trying things out

 

For the question discussed above I of course needed to have a test setup ready.

Creating the SDA remote source was the easiest part here, as I just created a "self" source system (BW veterans will remember this approach) that simply pointed to the very same SAP HANA instance.
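In case you want to rebuild that setup, here is a sketch of such a "self" remote source (the hanaodbc adapter is assumed; host, port, and credentials are placeholders):

CREATE REMOTE SOURCE "SELF" ADAPTER "hanaodbc"
    CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=localhost:30015'
    WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=DEVDUDE;password=<password>';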

 

In order to emulate a proper SAP HANA live view I needed to create an Information model with Input Parameters, so I thought: easy, let's just quickly build one in the Web based development workbench.

 

So far I've done most of the modelling in SAP HANA studio, so I took this opportunity to get a bit more familiar with the new generation of tools.

I wanted to build a classic Star-Schema-Query model, so that I could use the Star Join function.

From SAP HANA Studio I knew that this required calculation views of type FACT and DIMENSION to work.

 

Not a problem at all to create those.


A CUBE type view for the fact table


One of the Dimension type views

 

I then went on and created a new calculation view of data type CUBE and checked the WITH STAR JOIN check box.


 

Next I tried to add all my FACT and DIMENSION views to the join, but boy was I wrong...


Clicking on the button should allow you to add the views.

 


But there was no option to add the fact view into the STAR JOIN node, while adding dimensions just worked fine.


Now I had all my dimensions in place but no way to join them with the fact table.


 

After some trial and error (and no, I didn't read the documentation and I should have; but on the other hand, a little more guidance in the UI wouldn't hurt either) I figured out that one has to manually add a projection or aggregation node that feeds into the Star Join.


Once this is done, the columns that should be visible in the Star join need to be mapped:

And NOW we can drag and drop the join lines between the different boxes in the Star Join editor.


Be careful not to overlook that the fact table that just got added might not be within the current window portion. In that case, either zoom out with the [-] button or move the view around via mouse dragging or the arrow icons.


 

After the joins are all defined (classic star schema, left outer join n:1, remember?), the mapping of the output columns again needs to be done.


Here, map only the key figures, since the dimension columns are already available in the view output anyhow as "shared columns".


For my test I further went on and added a calculated key figure that takes an Input Parameter to multiply one of the original key figures. So, nothing crazy about that, which is why I spare you the screenshot battle for this bit.

 

And that's it again for today.

Two bits of new knowledge in one blog post, tons of screenshots and even ASCII art - not too bad for a Monday I'd say.

 

There you go, Now you know!

 

 

Lars

HANA Studio on High resolution displays


If you try to work with SAP HANA Studio on a HiDPI (high-resolution) display, like an Apple Retina or a Microsoft Surface, you will see that there is a problem with the size of the icons.

 

On a Surface 4 at 2736x1824, the icons are tiny and unusable, as you can see in the screenshot (compare them with the size of the fonts):

 

[Screenshot: SAP HANA Studio with tiny toolbar icons]

 

As HANA Studio is based on Eclipse, I tried some recommendations that I found in https://bugs.eclipse.org/bugs/show_bug.cgi?id=421383#c60, with correct results:

 

[Screenshot: SAP HANA Studio with correctly sized icons]

 

Windows instructions

 

Create a new registry key with REGEDIT

 

Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\

And create a new entry (DWORD VALUE)

 

Name: PreferExternalManifest

Value: 1
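Equivalently, the key can be imported from a small .reg file (a sketch; apply at your own risk):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001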

 

Create a Manifest file

 

Open the hdbstudio.exe location (by default C:\Program Files\sap\hdbstudio) and create a new file hdbstudio.exe.manifest (or use the attached file and remove the .xml extension) with this content:

 

 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
    <description>eclipse</description>
    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
        <security>
            <requestedPrivileges>
                <requestedExecutionLevel xmlns:ms_asmv3="urn:schemas-microsoft-com:asm.v3"
                               level="asInvoker"
                               ms_asmv3:uiAccess="false">
                </requestedExecutionLevel>
            </requestedPrivileges>
        </security>
    </trustInfo>
    <asmv3:application>
        <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
            <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="http://schemas.microsoft.com/SMI/2005/WindowsSettings">false</ms_windowsSettings:dpiAware>
        </asmv3:windowsSettings>
    </asmv3:application>
</assembly>

 

 

 

Now, you can open your HANA Studio with "normal" icons :-)

 

 

Disclaimer: Note that modifying the registry can cause serious problems that may require you to reinstall your operating system.

SAP HANA SPS 12 What's New: High Availability and Disaster Recovery - by the SAP HANA Academy


Introduction

 

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

 

The topic of this blog is high availability and disaster recovery.

 

For the complete list of blogs see: What's New with SAP HANA SPS 12 - by the SAP HANA Academy

 

For the SPS 11 version, see SAP HANA SPS 11 What's New: HA/DR - by the SAP HANA Academy

 

 

Tutorial Video

 

SAP HANA Academy - SAP HANA SPS 12: What's New? - High Availability and Disaster Recovery - YouTube

 

 

 

 

 

What's New?

 

Monitoring System Replication with the SAP HANA cockpit

 

The System Replication tile in the SAP HANA cockpit now displays the operation mode, together with replication mode, tiers and status.


 

The overview page of the System Replication app displays (configurable) information for both primary and secondary sites, a performance graph with network statistics, and a tile for Alerts that links back to the Alerts app in the SAP HANA cockpit. From the status bar, the Log File Viewer can be opened.

 


 

The System Replication Details view displays information on the operation mode, the replayed log position, as well as the replayed log position time.

 


 

In a system replication scenario, the SAP HANA Cockpit for Offline Administration can be used to perform a takeover of the primary site by the secondary site.

 


 

System Replication Alerts

 

A new alert (#94 - Log replay backlog for system replication secondary) is raised when the system replication log replay backlog increases. Thresholds and the check schedule are configurable.

 


 

Monitoring System Parameter Changes

 

The configuration parameter checker reports on any differences between primary, secondary, and - new for SPS 12 - tier-3 secondary systems. Also new with SPS 12 is that you can now replicate the system (or .ini file) parameters based on alerts. The parameter replication can be enabled for all sites.

 

[inifile_checker]
enable = true
replicate = true

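The same settings can also be applied with SQL; a minimal sketch, assuming the [inifile_checker] section lives in global.ini as shown above:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('inifile_checker', 'enable') = 'true',
        ('inifile_checker', 'replicate') = 'true'
    WITH RECONFIGURE;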

 

 

Initializing the Secondary

 

Apart from using a full data backup, it is now possible to initialize the secondary system using a binary storage copy, that is, either a snapshot or a full offline database copy.

 

 

Alerting of Secondary Systems

 

The Alerts app of the SAP HANA cockpit now also displays alerts issued by secondary system hosts.

 

 

Supported Replication Modes Between Sites

 

 

In a multitier system replication scenario, ASYNC replication between sites has been extended with support for both SYNCMEM and SYNC. In other words, all replication modes are supported between all sites.

 

Documentation

 

For more information see:

 

SAP Help Portal

 

 

SAP Notes

 

 

SCN Blogs

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy.

 

Follow us on Twitter @saphanaacademy.

 

Connect with us on http://linkedin.com/in/saphanaacademy.

SDI/SDQ OData Adapter in HANA SPS12 - GET, PUT operation and REPLICATION flowgraph


Working with the OData Adapter in HANA SPS12


In this blog entry the following activities are covered:

  • OData Adapter creation
  • Using a public OData service with read and write capabilities
  • OData remote source & virtual table creation
  • Virtual table select (OData GET)
  • Virtual table insert (OData PUT)
  • Modeling in SAP HANA WebIde in Google Chrome Browser
  • Flowgraph to capture the delta of OData source (each step for creation is described)


This is an extract from the “Whats New in HANA smart data integration – SPS12” slidedeck. It describes some of the new abilities that the OData adapter has in SPS12.

 


 

Used documents:

- HANA EIM SPS12 Administration Guide

- HANA Academy Video "SAP HANA Academy - Smart Data Integration/Quality: The Table Comparison Transform [SPS09]"


Used resources:

- HANA SPS12 system

- Chrome browser with WebIde




Now let's begin...

 

It is a really simple scenario we are dealing with: accessing a public OData URL, and reading from and writing to the tables exposed there. HANA is running on SPS 12 on premise. The purpose of this blog entry is to better illustrate the different steps, analogous to how they are explained in the HANA EIM Administration Guide (chapter 8.13.1).

 

In this scenario we are accessing the following public OData service URL (OData V4), which allows read and write operations.

 

http://services.odata.org/V4/OData/(S(1t3xkksfwh00yknrwftrjhm5))/OData.svc/

 

The following describes the context of tables that we can access in this example:


 

In order to realize the above mentioned points, you can consider the following steps:

 

1.) Create the OData Adapter in HANA studio manually via the following SQL command (this is necessary, and an exception, as the OData Adapter is NOT a system adapter, unlike the common adapters used in the SDA context):

 

CREATE ADAPTER "ODataAdapter" PROPERTIES 'display_name=OData Adapter;description=OData Adapter' AT LOCATION DPSERVER;

 

(select * from "SYS"."ADAPTERS"):

 


 

2.) Create a new remote source, choosing the previously created OData Adapter (system privilege "CREATE REMOTE SOURCE" required):

 


 

Enter your proxy, trust, CSRF and format settings according to your requirements. For further information on each of the fields consider reading the EIM Administration Guide.

 

In my case I set "Support Format Query" to true, which allows me to receive the dataset in JSON format. In the credentials section (if you also use a public OData service to test, e.g., the adapter capabilities), you need to enter something; I entered, for instance, "test" as user and "test" as password. It will allow you to connect as it doesn't require any user.

 

In productive and secure environments you most probably need a user, password and/or a certificate to be on the safe side! ;-)

 



3.) Browse the remote tables and create virtual tables of your choice that you want to play with.


Now go to Provisioning and browse your remote tables. You should now be able to access them via your new OData remote source:

 


 

4.) Working with your virtual table (1) – READ (or GET…)

 

You can now browse the entries of your virtual table.

 


 

You can verify the entries displayed in HANA studio against the entries you get when calling the service directly from a URL:

 

http://services.odata.org/V4/OData/(S(1t3xkksfwh00yknrwftrjhm5))/OData.svc/Products/?$format=json

 


 

5.) Working with your virtual table (2) – WRITE (or PUT…)

 

If your OData service allows read and write operations, you can also insert new entries using, e.g., the corresponding INSERT statement:


INSERT INTO "SYSTEM"."MV1_RW_ODATA_Products" VALUES (13, 'Afri Cola', 'The Original Cola', '01.10.2005 00:00:00.0', '01.10.2006 00:00:00.0', 3, 9.9);



 

 

You should now see the new entry when firing a GET request from your browser on that table:

 

http://services.odata.org/V4/OData/(S(1t3xkksfwh00yknrwftrjhm5))/OData.svc/Products/?$format=json

 


 

6.) Create a replication task with the table comparison transform node to capture the delta only. In order to achieve this, you need to follow the subsequent steps:

 

(1)    In the flowgraph builder in the WebIDE, drag and drop a new data source node and select the virtual table of your OData source. Remember to use WebIDE only with the Google Chrome browser.

 


 

(2)    Drag and drop a new Table Comparison node onto the canvas and connect your data source node with it.



 

(3)    Intermediate step (this could also be done in advance): create your target table. You could do it as follows:


CREATE TABLE "ODATA_Products_target" LIKE "SYSTEM"."MV1_RW_ODATA_Products";

 

It will create a similar table with the same table structure.

 

(4)    Adjust your target table and add a new primary key of type integer, e.g. "SURR_ID" (this approach follows the concept introduced in the SAP HANA Academy video "SAP HANA Academy - Smart Data Integration/Quality: The Table Comparison Transform [SPS09]").

 

In my example I need to drop the already existing primary key "ID" first in order to add the new primary key "SURR_ID".

 

ALTER TABLE "SYSTEM"."ODATA_PRODUCTS_TARGET" DROP CONSTRAINT ID;

 

You can then add a new column called “SURR_ID” of type integer to your target table (in edit mode).

 


 

(5)    After adding this new column you can specify this column as a new primary key with the following command:


ALTER TABLE "SYSTEM"."ODATA_PRODUCTS_TARGET" ADD CONSTRAINT PK PRIMARY KEY (SURR_ID);

 

You need to do this for the table comparison transform to work properly. We will see this later on in this document.

 


 

(6)   What you also need to do is create a database sequence for the data sink in your flowgraph. You can do this either in WebIDE or in HANA Studio. You need to create a new file that ends with *.hdbsequence. Just right-click on your package and select "File".

 

16.png

 

Enter your sequence name with the correct file ending.

 

17.png

 

Specify your sequence properties. Adjust the schema property according to your schema’s name. Save your sequence.

 

18.png
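For reference, the repository .hdbsequence file activates into an ordinary database sequence. If you create the sequence directly with SQL in HANA Studio instead, a minimal equivalent might look like this (schema and sequence name are just examples, not fixed names):

CREATE SEQUENCE "SYSTEM"."ODATA_PRODUCTS_SEQ" START WITH 1 INCREMENT BY 1;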

 

(7)    Now you can edit your table comparison node. First and foremost, choose your comparison table. This needs to be your previously created target table that has the “SURR_ID” column and ideally the same structure as your source table (…or another structure, depending on your requirements).

 

19.png

 

In the table comparison transform, specify the generated key attribute, which is the “SURR_ID” field of your target table.

 

20.png

 

If your OData source allows delete operations, you can select "Detect Deleted Rows From Comparison Table". This will ensure that rows deleted in the source will also be deleted in your target table.

21.png

 

Click on Attributes in your Table Comparison node and drag and drop the fields that you want to be considered for comparison. You need to set the key column to "Key = true". The table comparison node will capture every change that occurs in one of these columns and transfer the delta to your target table accordingly.

 

22.png


(8) Select a data sink node. Choose your target table as your data sink.

23.png

 

Enter the database sequence into the corresponding field of your data sink; you can find it on the "Settings" tab of the data sink node. You should also specify the key generation attribute, which is "SURR_ID" in our case.

 

2.jpg


(9) Save your flowgraph and execute the task to conduct an initial load of your target table.


If you can't save your flowgraph in the Web IDE, you are probably lacking privileges. Make sure you have the following object privileges:


- EXECUTE on "_SYS_REPO"."TEXT_ACCESSOR" and "_SYS_REPO"."MULTI_TEXT_ACCESSOR"

- SELECT, UPDATE, INSERT, DELETE, EXECUTE granted to _SYS_REPO


You may also refer to the EIM Administration Guide, chapter "7.1 Assign Roles and Privileges".

24.png

 

(10) Check in the system table “M_TASKS” how many records were processed with the initial load.

 

SELECT * FROM "SYS"."M_TASKS";

 

25.png

 

(11) Insert a new record into the source table of the OData service, e.g. with an INSERT statement, re-run the task, and check how many records have been processed. The expectation is that only the delta, i.e. the new or changed rows, is transferred.

 

INSERT INTO "SYSTEM"."MV1_RW_ODATA_PRODUCTS" VALUES (14, 'NEW COLA', 'THE ORIGINAL WOW COLA', '01.10.2005 00:00:00.0', '01.10.2006 00:00:00.0', 3, 39.9);

 

START TASK "SYSTEM"."SDI::ODATA_FG01";

 

SELECT * FROM "SYS"."M_TASKS";


26.png

 

Alternatively, you could execute an UPDATE statement on your OData source table, re-run the task, see how many records were processed and what has changed, and compare your source and target tables to verify that it really worked.
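A minimal example of such an update against the virtual table could look like this (the "Price" and "ID" column names follow the public sample service used above; adjust them to your own source):

UPDATE "SYSTEM"."MV1_RW_ODATA_Products" SET "Price" = 12.5 WHERE "ID" = 13;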

 

(12) Finally see what is in the target table. We can see that our inserted record was appended to the table:

 

SELECT * FROM "SYSTEM"."ODATA_Products_target";


27.png

 

On top of what we modelled, we could now create a replication task that lets the batch task run, e.g. every ten minutes, to capture the delta of our OData source.

 

If you have any comments, hints, tips or tricks about what I did, please share and let us all know. I also welcome suggestions to enhance this example.

 

Kind regards

 

Stefan

Publishing Lumira Stories on BI Platform - Issues and Solutions


Publishing SAP Lumira Stories on BI Platform

                                                   

Once you have installed the SAP Lumira server for BI Platform, you can then share and explore Lumira stories on the BI Launchpad in the same way as exploring stories in Lumira Server.

 

BI platform enforces security on Lumira documents, and allows access and categorization in the same manner as other BI platform content, allowing you to seamlessly adopt SAP Lumira within your organisation.

                                        

Configuring Central Management Console: (CMC)

                                                               

Log on to the CMC, go to Servers, and make sure the following processes are running and enabled:

 

-  AdaptiveProcessingServer

- WebApplicationContainerServer

- AdaptiveJobServer

 

 

Publishing/Saving Lumira Stories:      

 

In order to publish a Lumira story as of revision 1.30, you need to save your story onto the BI Platform from Lumira Desktop.

 

  1.     Click on ‘Save As’ and select SAP BI Platform.
  2.     Enter the URL to access the Lumira Server for BI Platform, which usually is:

http://<hostname>:<port>/biprws

The port here is the RESTful web service port (i.e. 6405).

 

 

pic1.PNG

 

 

Sharing Lumira Stories from BI Platform:

 

From the BI Launchpad, locate your Lumira document and generate an OpenDoc link. To do this, right click on the Lumira story and click on Document Link.

 

pic2.png

This will generate a link which you can share with other users. You may notice that the link may be something like:
pic3.png

 

In this case, you can manually generate the link. The link should be in the following format:

 

http://<servername>:<port>/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=<iDocID>&storyName=<story name>

 

Parameter 1 (sIDType) is always CUID, parameter 2 (iDocID) is the document's CUID, which can be obtained from the properties of the Lumira document in the BI Launchpad (shown below), and parameter 3 (storyName) is the story name. Other parameters include the page number and a refresh parameter that determines whether the page should be refreshed or not.

 

pic4.png

URL example:

 

http://<hostname>:<8080>/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=AXwTdmrDj3dPtEbKV8MS4h8&storyName=HANA Story

On multiple mistakes with IN conditions

Based on SAP HANA SPS 11

Dear readers

 

there is a long standing modelling problem with SAP HANA calculation views:

Using multiple input parameters to filter data similar to the SQL IN predicate.

This discussion Handling multi value input parameters can be taken as a comprehensive example.

 

It seems so straightforward at first and so practical.

Once the input parameter is defined, the data preview tool built into SAP HANA Studio or your reporting client of choice can read the metadata for it and present the user with a nice UI dialog to specify values.

 

Something as fancy as this:

input par.png

 

Now, the way that this works is rather counterintuitive.

For graphical calculation views there are a couple of nicely written blog posts available, like Using Multiple Values in Input parameter for filtering in Graphical Calculation View but it seems that scripted calculation views did simply not want to be as flexible.

 

For those, rather clunky (and not very well performing) solutions had to be built to make it possible at all (see SAP HANA: Handling Dynamic Select Column List and Multiple values in input parameter or How to process and use multi-value input parameter in a scripted view in HANA).

Either the solution involved dynamic SQL or some form of parameter string mangling with loops and pseudo-dynamic temporary result set constructs.

Other approaches proposed to avoid the problem altogether and use multiple parameters (instead of one multi-valued parameter).

 

Developer arrogance driving solution finding...

The last time I read one of those discussions (yesterday) I thought:


"This cannot be the right solution. There must be some easier way to do it!"

 

So arrogance got the better of me - HA! It cannot be that difficult. (It's so cheesy that for once Comic Sans is a fitting choice).

I dare to guess that nearly every developer had that feeling every now and then (if not, I would have a hard time finding a good explanation for so many drastically underestimated development efforts...)

Attacking the problem

My first impulse was to use the APPLY_FILTER() function, but I soon learned what many others probably discovered before: it doesn't solve the problem.

The reason for that is the way APPLY_FILTER() works.

It takes the table variable and your filter string and constructs a new SQL statement.

For example if your table variable is called vfact and your input parameter selection was 1, 2 and 5 your scripted calculation view could look like this:

 

/********* Begin Procedure Script ************/
BEGIN
    declare vfiltD10 nvarchar(50); -- temp variable to construct the filter condition
    vfact = select * from fact;
    vfiltD10 = ' "DIM10" IN ( ' || :IP_DIM10 || ' )';
    var_out = APPLY_FILTER (:vfact, :vfiltD10);
END
/********* End Procedure Script ************/

This compiles fine and if you try to run it with some parameters you are greeted with a surprise:

 

SELECT "DIM10", "DIM100", "DIM1000", "DIM1000000", "KF1", "KF2"
FROM "_SYS_BIC"."devTest/MULTIIP"
     ('PLACEHOLDER' = ('$$IP_DIM10$$', '1,3,6'));

Could not execute 'SELECT "DIM10", "DIM100", "DIM1000", "DIM1000000", "KF1", "KF2" FROM "_SYS_BIC"."devTest/MULTIIP" ...' in 373 ms 962 µs .

SAP DBTech JDBC: [2048]: column store error: search table error:  [2620] "_SYS_BIC"."devTest/MULTIIP/proc": [130] (range 2) InternalFatal exception: not a valid number string '1,3,6'

 

Not only is this error annoying, but it's FATAL... shudder!

After some investigation I found out that the input parameter not only provides the digits and the separating commas but also the enclosing single-quotes.

Nothing easier than getting rid of those:

 

  vfiltD10 = ' "DIM10" IN ( ' || replace (:IP_DIM10 , char(39), '')  || ' )';

With this, the single-quotes get easily removed (39 is the ASCII value for the single quotes and the CHAR function returns the character for the provided ASCII code - this just makes it easier to handle the double-triple-whatever-quotation syntax required when the single-quote character should be put into a string).

 

Of course, seeing that we have not yet reached the end of this blog post, you already know: that wasn't the solution.

 

The problem here was not only the quotation marks but also that  SAP HANA does not parse the string for the input parameter value. The result for the filter variable is that we do not get the condition

 

  actual condition          ===> syntax structure

 

  "DIM10" IN ( 1, 3, 6)     ===> X IN ( c1, c2, c3)

but

  "DIM10" IN ( >'1, 3, 6'<) ===> X IN ( c1 )

 

So even when we remove the quotation marks, we still end up with just one value (I enclosed this single value in >' '< for easier distinction).

 

Interlude

The different syntax structures pointed out above are easily overlooked also in standard SQL. Often developers do not fully realize that an IN condition with 3 parameters is structurally different from an IN condition with 2 or 4 parameters.

Whenever the number of parameters of the IN condition changes, the statement is effectively a new statement to the database, requiring new parsing and optimisation and also allocating its own space in the shared SQL cache.

 

This is another detail that ABAP developers do not need to worry about, since the

SAP NetWeaver database interface gracefully splits up IN-lists into equal chunks and recombines the result set automatically. See this ancient piece SAP Support case "FOR ALL ENTRIES disaster" for more details.

 

One approach to avoid this issue can be to use temporary tables instead of the IN condition. Especially when parsing/query optimisation is taking a long time for your application, this might be an approach worthwhile to implement.
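As a rough sketch of that idea (table and column names are only illustrative), the filter values go into a local temporary table once, and the statement text then stays identical no matter how many values are filtered:

-- collect the filter values in a session-local temporary table
CREATE LOCAL TEMPORARY TABLE "#FILTER_DIM10" ("DIM10" INTEGER);
INSERT INTO "#FILTER_DIM10" VALUES (1);
INSERT INTO "#FILTER_DIM10" VALUES (3);
INSERT INTO "#FILTER_DIM10" VALUES (6);

-- the join replaces the IN-list, so the statement text never changes
SELECT f."DIM10", f."KF1", f."KF2"
FROM FACT f
INNER JOIN "#FILTER_DIM10" t ON f."DIM10" = t."DIM10";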

 

Back to the main topic though!

So, the "obvious" approach of using APPLY_FILTER() does not help in this case.

Is it possible that it is just not possible to take multiple input parameter values into an IN list? But graphical calculation views can do it - and rather easily.

 

And in this observation lay the key to the solution. What is different between graphical and scripted calculation views?

Right, graphical calculation views do not produce SQL for the boxes we draw up.

Technically speaking, they are replaced with Plan Operators - very much similar to the abandoned CE_-functions.

 

Do you see where this is heading?

Yes, indeed. The solution I found works with CE_-functions.

 

Oh how very heretic!

May the performance gods get angry with me for making the SAP HANA execution switch engines...

 

But first, let's look at the solution, shall we?

 

/********* Begin Procedure Script ************/
BEGIN
    vfact = select * from fact;
    var_out = CE_PROJECTION(:vfact,
                  [ "DIM10", "DIM100", "DIM1000", "DIM1000000", "KF1", "KF2" ],
                  'IN ("DIM10", $$IP_DIM10$$)');
END
/********* End Procedure Script ************/

 

Easy to see, this approach mimics the filter approach for graphical calculation views.

To not overcomplicate things I only used the CE_PROJECTION function for the filter part - everything else is still efficient, familiar SQL.

Important to note is that this only works when the input parameter is referenced in the $$<name>$$ format.

Also important to recall is that the complete filter expression needs to be provided as one string enclosed in single quotation marks ( ' <filter expression goes here> ' ).

 

"OK!", you may say, "this works, but now you broke the holy rule of CE_-functions damnation. The performance of this surely is way worse due to the implicit engine change!"

Well, let's have a look into this!

 

First the explain plan for the SQL based statement:

 

SELECT "DIM10", "DIM100", "DIM1000", "DIM1000000", "KF1", "KF2"
FROM FACT
WHERE DIM10 IN (1, 3, 6);

OPERATOR_NAME   OPERATOR_DETAILS                                         EXEC_ENGINE SUBTREE_COST

COLUMN SEARCH   FACT.DIM10, FACT.DIM100, FACT.DIM1000, FACT.DIM1000000,  COLUMN       1.645529062

                FACT.KF1, FACT.KF2                                                  

                (LATE MATERIALIZATION, OLTP SEARCH, ENUM_BY: CS_TABLE)              

  COLUMN TABLE  FILTER CONDITION:                                                   

                (ITAB_IN (DIM10))                                                   

                FACT.DIM10 = 1 OR FACT.DIM10 = 3 OR FACT.DIM10 = 6       COLUMN    

 

Now the scripted calculation view version:

 

SELECT "DIM10", "DIM100", "DIM1000", "DIM1000000", "KF1", "KF2"
FROM "_SYS_BIC"."devTest/MULTIIP"
     ('PLACEHOLDER' = ('$$IP_DIM10$$', '1,3,6'));

 

OPERATOR_NAME   OPERATOR_DETAILS                                         EXEC_ENGINE SUBTREE_COST

COLUMN SEARCH   FACT.DIM10, FACT.DIM100, FACT.DIM1000, FACT.DIM1000000,  COLUMN       1.645529062

                FACT.KF1, FACT.KF2                                                  

                (LATE MATERIALIZATION, OLTP SEARCH, ENUM_BY: CS_TABLE)              

  COLUMN TABLE  FILTER CONDITION:                                                   

                (ITAB_IN (DIM10))                                                   

                FACT.DIM10 = 1 OR FACT.DIM10 = 3 OR FACT.DIM10 = 6       COLUMN     

 

See any difference?

No?

That's right, there is none. And yes, further investigation with PlanViz confirmed this.

SAP HANA tries to transform graphical calculation views and CE_-functions internally to SQL equivalents, so that the SQL optimizer can be leveraged. This does not always work, since the CE_-functions are not always easy to map to a SQL equivalent, but a simple projection with a filter works just fine.

 

Now there you have it.

Efficient and nearly elegant IN condition filtering based on multiple input parameters.

 

There you go, now you know.

Have a great weekend everyone!

 

Lars

SAP HANA Distinguished Engineer (HDE) Webinar: SAP Business Suite powered by SAP HANA Migration - Best Practices & Lessons Learned


Join the SAP HANA Distinguished Engineer (HDE) Webinar (part of SAP HANA iFG Community Calls) to learn about SAP Business Suite powered by SAP HANA Migration best practices.


Title: SAP Business Suite powered by SAP HANA Migration - Best Practices & Lessons Learned

Speaker: Kiran Musunuru, SAP HANA Distinguished Engineer, Sr. Principal SAP HANA Architect, Infosys Consulting

Moderator: Scott Feldman

Date: July 21st, 2016  Time: 8:00 - 9:00 AM Pacific, 11:00 - 12:00 PM Eastern (USA), 5:00 PM CET (Germany)


thumbnail.png

 

See all SAP HANA Distinguished Engineer (HDE) webinars here.


Abstract:

  • This is a great session for everyone planning their ERP journey to HANA providing a great overview on all the key elements to consider during the migration to Suite on HANA.
  • This session will have a focus on reducing risk using various tools and optimizing the downtime window, including what to look out for when moving to S4HANA.
  • The following topics will be covered:
    • How to setup a SAP HANA migration project and estimate accurately
    • What are the different migration options with special focus on DMO and when to use it?
    • What are the technical phases of the DMO approach?
    • How can you tune and optimize the DMO setup and execution?
    • How to troubleshoot issues during the DMO execution?
    • What are the best practices and lessons learned from HANA migrations of various sizes?
    • What is different with an S4HANA migration/conversion?

 

To join the meeting: https://sap.na.pgiconnect.com/i800545

Participant Passcode: 110 891 4496



Germany: 0800 588 9331 tel:08005889331,,,1108914496#


UK: 0800 368 0635 tel:08003680635,,,1108914496#


US and Canada: 1-866-312-7353 tel:+18663127353,,,1108914496#

For all other countries, see the attached meeting request.


Background: SAP HANA Distinguished Engineers are the best of the best, hand-picked by the HDE Council, who are not only knowledgeable in implementing SAP HANA but also committed to sharing their knowledge with the community.

 

As part of the effort to share the experiences of HDEs, we started this HDE webinar series.

 

This webinar series is part of the SAP HANA International Focus Group (iFG).

Join the SAP HANA International Focus Group (iFG) to gain exclusive access to webinars, access to experts, SAP HANA product feedback, and customer best practices, education, peer-to-peer insights as well as virtual and on-site programs.

You can see the upcoming SAP HANA iFG session details here.

 

Note: If you get an "Access Denied" error while accessing the SAP HANA iFG webinar series / sessions, you need to first join the community to gain access.

 

Follow HDEs on Twitter @SAPHDE

Follow me on Twitter @rvenumbaka



SAP HANA editions and options - by the SAP HANA Academy


Introduction

 

Different editions, options and "additional capabilities" are available when licensing SAP HANA. In this blog, I will provide an overview and list resources where you can find more information.

 

 

Editions

 

For SAP HANA SPS 12, the following editions are available:

  • Platform Edition
  • Enterprise Edition
  • Real-time Data Edition

 

Platform Edition

 

The SAP HANA Platform Edition provides core database technology and includes, among others, the following software components:

  • SAP HANA Database
  • SAP HANA Client
  • SAP HANA Studio
  • SAP HANA XS
  • SAP HANA Advanced Data Processing
  • SAP HANA Spatial

 

A software component is a separately installable unit.

For support information about versions and releases, see the Product Availability Matrix | SAP Support Portal.

 

The SAP HANA database is available for both the Intel and IBM Power Systems architecture running SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL).

 

The SAP HANA client is available on several UNIX, Linux, and Microsoft Windows platforms and includes the clients SQLDBC, ODBC, JDBC, Python, Node.js, Ruby, ODBO (Windows), ADO.NET (Windows). SQLDBC (SQL Database Connectivity) is an SAP-specific runtime environment for developing database applications and database interfaces leveraging ODBC and JDBC. The other clients are common standards.

 

The SAP HANA studio is a plugin for the Eclipse IDE (integrated development environment) available for Linux, Microsoft Windows and Apple MacOS platforms and includes perspectives for development, modeling and system administration. Although still very relevant today as a client tool, the current focus for new SAP HANA functionality is on web-based tooling rather than SAP HANA studio.

 

The SAP HANA XS (Extended Application Services) is the application server for native SAP HANA-based web applications. It is installed with the SAP HANA system and allows developers to write and run SAP HANA-based applications without the need to run an additional application server. SAP HANA XS is also used to run web-based tools that come with SAP HANA, for instance for administration, lifecycle management and development. As of release SPS 11, two versions (models) are available: classic and advanced.

 

SAP HANA Advanced Data Processing and SAP HANA Spatial are options and are explained in more detail below under Options.

 

 

Enterprise and Real-time Data Edition

 

Apart from the use case where SAP HANA serves as primary persistence (database) for SAP Netweaver-based applications like SAP Business Suite or Business Warehouse, most other use cases require some form of provisioning to access the source data needed for in-memory reporting or the analysis of business data.

 

The following data provisioning methods are available:

  • SAP Landscape Transformation (LT) Replication Server (trigger-based replication)
  • SAP HANA Direct Extractor Connection (DXC)
  • SAP Data Services (ETL-based replication)
  • SAP Replication Server (log-based replication)

 

The SAP HANA Enterprise Edition bundles the SAP HANA platform edition with two of these methods as additional components:

 

The SAP HANA Real-time Data Edition bundles the SAP HANA platform edition with SAP Replication Server (SRS).

 

 

Options

 

Apart from editions, SAP offers for SAP HANA also several options and - in legal-ese - "additional capabilities". To use them in a production system, a software license is required.

 

For SAP HANA SPS 12, the following options and additional capabilities are available, in order of appearance:

  • SAP HANA Advanced Data Processing [SPS 09]
    • Search
    • Text Analysis
    • Text Mining
  • SAP HANA Spatial - [SPS 06]
  • SAP HANA Accelerator for SAP ASE - [SPS 09]
  • SAP HANA Dynamic Tiering - [SPS 09]
  • SAP HANA Smart Data Integration and SAP HANA Smart Data Quality - [SPS 09]
  • SAP HANA Smart Data Streaming - [SPS 09]
  • SAP HANA Real-Time Replication - [SPS 10]
    • SAP Landscape Transformation (LT) Replication Server

    • SAP HANA Remote Data Sync

  • SAP HANA Data Warehousing Foundation [SPS 11]
  • IoT SIM Management for SAP HANA - [SPS 12]

 

These options and additional capabilities will be explained below.

 

 

SAP HANA Advanced Data Processing

 

The SAP HANA Advanced Data Processing option provides the ability to store, process, search, explore, and analyze both structured and unstructured textual and interlinked structured data with relationships. The native text capability allows for full-text searching, advanced text analysis (including Natural Language Processing, Sentiment Extraction (Voice of the Customer), and Entity Extraction within documents), and text mining to discover relevant and related terms and documents.
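As a small illustration of the native text capability, a full-text index with text analysis can be created with a single statement; the table and column names below are invented, and the configuration shown is one of the standard text analysis configurations:

CREATE FULLTEXT INDEX "IDX_REVIEW_TEXT" ON "MYSCHEMA"."REVIEWS" ("REVIEW_TEXT")
    CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
    TEXT ANALYSIS ON;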

 

The ADP option was added to SAP HANA with SPS 09, bundling Search, Text Mining and Text Analysis. The Text Mining and Text Analysis features were added to the SAP HANA platform with SPS 05 and SPS 06. The technology, however, is much older and goes back to a Xerox PARC research project further developed by a company named Inxight, later acquired by Business Objects.

 

Full Playlist: Text Analysis, Text Mining and Search - YouTube

 

Open SAP Course: Text Analytics with SAP HANA Platform - Anthony Waite, Yolande Meessen, Bill Miller, and Michael Wiesner

 

 

 

SAP HANA Spatial

 

The SAP HANA spatial option provides functions to analyze and process geospatial information in SAP HANA. SAP HANA Spatial comprises types, methods, and constructors you can use to access, manipulate, and analyze spatial data.
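As a tiny illustration (the coordinates are arbitrary), spatial types and methods can be used directly in SQL, for example to measure the distance between two points:

SELECT NEW ST_Point('POINT (8.642 49.293)', 4326)
           .ST_Distance(NEW ST_Point('POINT (8.700 49.400)', 4326)) AS "DISTANCE"
FROM DUMMY;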

 

Full playlist: SAP HANA Spatial - YouTube

 

 

 

SAP HANA Accelerator for SAP ASE

 

The SAP HANA Accelerator option for SAP ASE was introduced with SPS 09. The SAP HANA accelerator for SAP ASE option provides SAP Adaptive Server Enterprise (ASE) users the ability to use SAP HANA on SAP ASE data for real-time analytics:

  • Develop Analytical Applications on SAP HANA: SAP ASE users can run reports and analyze data in SAP HANA using the data in SAP ASE.
  • Pushdown Optimisation: Accelerate existing SAP ASE reporting applications such as stored procedures and queries using SAP HANA. Using the accelerator for SAP ASE, you can maximize the pushdown of logic (not OLTP applications) to SAP HANA with minimal or no code changes.

 

loioff8f77a68e174bbf97cf7a3fe78c9ee6_HiRes.png

 

 

SAP HANA Dynamic Tiering

 

SAP HANA dynamic tiering is a native big data solution for SAP HANA that adds disk-based extended storage to the SAP HANA in-memory database. Dynamic tiering enhances SAP HANA with large volume, warm data management capability, placing hot data in SAP HANA in-memory tables, and warm data in extended tables.

 

Dynamic tiering enables you to migrate hot SAP HANA data to warm or cold disk-based extended storage as the data ages. Extended storage reduces the footprint of your SAP HANA in-memory database, and applies cost-efficient storage and processing technologies to your data, depending on its value.
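For orientation, an extended (warm) table uses the same DDL as a regular column table plus one additional clause; a minimal sketch with invented names might be:

CREATE TABLE "MYSCHEMA"."SALES_HISTORY" (
    "ORDER_ID" INTEGER,
    "ORDER_DATE" DATE,
    "AMOUNT" DECIMAL(15,2)
) USING EXTENDED STORAGE;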

 

loio9619c137a6294e14b98928558683a208_LowRes.png

 

Full playlist:   SAP HANA Dynamic Tiering - YouTube

 

 

 

SAP HANA Smart Data Integration and SAP HANA Smart Data Quality

 

SAP HANA smart data integration and SAP HANA smart data quality enhance, cleanse, and transform data to make it more accurate and useful. With the speed advantages of SAP HANA, SAP HANA smart data integration and SAP HANA smart data quality can connect with any source, provision and cleanse data, and load data into SAP HANA on-premise or in the cloud.

 

In earlier releases, this option was known as SAP HANA Enterprise Information Management (EIM).

 

HANANative.png

 

Full playlist:   SAP HANA smart data integration & smart data quality - YouTube

 

 

 

SAP HANA Smart Data Streaming

 

SAP HANA smart data streaming is a specialised option that processes streams of incoming event data in real time, and collects and acts on this information. Smart data streaming is ideally suited for situations where data arrives as events happen, and where there is value in collecting, understanding, and acting on this data right away. Some examples of data sources that produce streams of events in real time include:

  • Sensors
  • Smart devices
  • Web sites (click streams)
  • IT systems (logs)
  • Financial markets (prices)
  • Social media

 

Smart data streaming uses the same technology as SAP Event Stream Processor (ESP).

 

Full playlist: SAP HANA Smart Data Streaming - YouTube (videos by Diana Healy)

 

 

 

SAP HANA Real-Time Replication

 

The SAP HANA real-time replication option bundles the following technologies for replicating data in real time from any source system to the SAP HANA database:

  • SAP Landscape Transformation (LT) Replication Server
  • SAP HANA Remote Data Sync

 

SAP Landscape Transformation (LT) Replication Server is the SAP technology that allows you to load and replicate data in real-time from ABAP source systems and non-ABAP source systems to an SAP HANA environment using a trigger-based replication approach to pass data from the source system to the target system.

 

Screen Shot 2016-06-06 at 16.53.01.png

 

 

 

SAP HANA Remote Data Sync (RDSync) brings the benefits of HANA analytics to distributed systems where computing happens at many remote sites with little or no local administration (anything from smartphones or Raspberry Pis to multi-user servers), e.g.:

  • Internet of Things (IoT) applications with edge data requirements, including some predictive maintenance applications.
  • “Satellite server” use cases, in which autonomous on-site servers keep remote workplaces, from oil rigs to retail stores, working regardless of network latency or availability.
  • Mobile enterprise applications built on a distributed database model

 

RDSync is based on the SAP SQL Anywhere data synchronization technology called MobiLink.

loio7551b536579e4915809042d302a51876_LowRes.png

 

SAP HANA Data Warehousing Foundation

 

The SAP HANA Data Warehousing Foundation option is a series of packaged tools for large-scale SAP HANA installations which support data management and distribution within an SAP HANA landscape. With SAP HANA Data Warehousing Foundation you can achieve smart data distribution across complex landscapes, optimize the memory footprint of data in SAP HANA and streamline administration and development, thereby reducing TCO and supporting SAP HANA administrators and data warehouse designers.

 

Full playlist:   SAP HANA Data Warehousing Foundation - YouTube

 

 

 

IoT SIM Management for SAP HANA

 

IoT SIM management for SAP HANA (SIMM) is an on-premise connectivity management solution that enables you to manage data and SMS usage to choose the best rate plan to optimize your business needs. SIMM offers an integrated dashboard powered by SAP HANA, allowing users to track and see a multifaceted view of their SIM-based IoT device status, usage, rate plans, and billing, across operators.

 

 

Platform Edition Add-ons and Plug-ins

 

Apart from the main software components listed above, there are also a number of add-ons and plug-ins for the Platform Edition that are use case dependent (links to the documentation on the SAP Help Portal).

 

 

SAP HANA Information Access

 

SAP HANA Information Access (InA) is a search engine built into the core of the SAP HANA platform. InA has been part of SAP HANA since the initial releases.

For more information, see https://blogs.saphana.com/2013/12/05/sap-hanas-built-in-search-engine/ and http://scn.sap.com/people/lucas.sparvieri/blog/2012/08/13/how-to-create-your-own-web-application-using-sap-hana-ui-toolkit-for-information-access

 

SHINE

 

SAP HANA Interactive Education (SHINE) is education content to learn and develop SAP HANA applications.

For more information, see https://blogs.saphana.com/2014/03/10/shine-sap-hana-interactive-education/

 

SAP HANA Spatial Map

 

The SAP HANA Spatial Map client is part of the SAP HANA spatial option (see above).

 

 

 

SAP HANA DB Control Center

 

The SAP HANA DB Control Center is a systems management solution that enables DBAs to administer and monitor a variety of SAP database products in one graphical UI either locally or remotely.

 

 

Application Function Library (AFL)

 

Application functions are like database procedures, written in C++ and called from outside to perform data-intensive and complex operations. Functions for a particular topic are grouped into an application function library (AFL), such as the Predictive Analysis Library (PAL) and the Business Function Library (BFL).

 

Full playlist: Predictive Analysis Library - YouTube

 

 

 

SAP HANA Smart Data Access

 

With smart data access (SDA) you can expose data from remote sources as virtual tables, combine these with other data in SAP HANA physical tables using SAP HANA models, and apply SAP HANA predictive, planning and text search algorithms to this combined data.
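As a rough illustration (the remote source, schema, and table names are invented), once a remote source has been created, exposing a remote table as a virtual table is a single statement; "<NULL>" stands in for the remote database level where the adapter does not use one:

CREATE VIRTUAL TABLE "MYSCHEMA"."VT_ORDERS" AT "MY_REMOTE_SOURCE"."<NULL>"."REMOTE_SCHEMA"."ORDERS";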

 

The SAP HANA Hadoop Controller, Ambari Cockpit and Spark Controller provide integration for Big Data providers and are part of smart data access.

 

Full playlist: SAP HANA Smart Data Access - YouTube

 

 

More Information

 

SAP HANA Academy YouTube Channel

 

Text Analysis, Text Mining and Search - YouTube

Predictive Analysis Library - YouTube

SAP HANA Smart Data Access - YouTube

SAP HANA Spatial - YouTube

SAP HANA Dynamic Tiering - YouTube

SAP HANA smart data integration & smart data quality - YouTube

SAP HANA Smart Data Streaming - YouTube

SAP HANA Data Warehousing Foundation - YouTube

SAP HANA XS Advanced Model in SPS 11 - YouTube

 

 

Product documentation

 

SAP HANA Options and Additional Capabilities – SAP Help Portal Page

SAP Landscape Transformation – SAP Help Portal Page

SAP HANA Real-Time Replication – SAP Help Portal Page

Application Function Library (AFL) - SAP HANA Business Function Library (BFL) - SAP Library

What is PAL? - SAP HANA Predictive Analysis Library (PAL) - SAP Library

SAP HANA Spatial – SAP Help Portal Page

SAP HANA Accelerator for SAP ASE – SAP Help Portal Page

SAP HANA Dynamic Tiering – SAP Help Portal Page

SAP HANA Smart Data Integration and SAP HANA Smart Data Quality – SAP Help Portal Page

SAP HANA Smart Data Streaming – SAP Help Portal Page

SAP HANA Data Warehousing Foundation 1.0 – SAP Help Portal Page

IoT SIM management for SAP HANA – SAP Help Portal Page

 

 

SAP Notes

 

2091815 - SAP HANA Options: Additional Information

 

 

SCN blogs, docs and other resources

 

Introduction to Spatial Processing with SAP HANA and DemoApp for B1 Summit 2015

Big picture for SAP HANA Accelerator for ASE (A4A)

SAP HANA Data Warehousing Foundation

Hana Smart Data Integration - Overview

SAP Solutions for the Internet of Things

SAP LT Replication Server

SAP Replication Server

Starter Blog for SAP LT Replication Server

Smart Data Streaming Developer Center

SAP Data Services | Data Integration, Quality & Cleansing

 

 

SAP HANA blogs

 

https://blogs.saphana.com/2016/04/05/getting-sap-erp-data-hana-smart-way/

https://blogs.saphana.com/2016/01/11/hana-smart-data-integration-one-stop-solution-data-handling/

https://blogs.saphana.com/2015/09/11/simplifying-sap-hana-data-integration-landscape/

https://blogs.saphana.com/2014/12/17/sap-hana-dynamic-tiering/

https://blogs.saphana.com/2014/03/10/shine-sap-hana-interactive-education/

https://blogs.saphana.com/2013/12/05/sap-hanas-built-in-search-engine/

https://blogs.saphana.com/2013/07/22/smart-data-access-data-virtualization-with-sap-hana/

https://blogs.saphana.com/2013/09/19/spatial-processing-with-sap-hana/

 

 

Thank you for watching

 

You can view more free online videos and hands-on use cases to help you answer the What, How and Why questions about SAP HANA and the SAP HANA Cloud Platform on the SAP HANA Academy at youtube.com/saphanaacademy, follow us on Twitter @saphanaacademy, or connect with us on LinkedIn.

SAPInsider HANA 2016 Vienna, Jun 20 - 22


SAPInsider.jpg

SAPInsider HANA 2016 is happening in two weeks, this time round in Vienna, Austria. From June 20-22, join SAP experts for SAP HANA as they share their knowledge and expertise on some of the key topics at the event.


Session Highlights

To make sure you maximize your experience at the event, here's a highlight of some of the sessions that you should not miss:


  1. SAP HANA  -  The platform for next-generation applications and advanced analytics
    Monday, 20 June 2016, 11:15 – 12:30, Hall N2
    by Ruediger Karl
    Maximizing existing investments on the SAP HANA platform is a topic for most CIOs. Join Ruediger Karl from SAP SE to explore the latest features in SAP HANA and understand how you can leverage these capabilities to further simplify your IT landscape, accelerate insights, and innovate modern applications.

  2. Develop and deploy new business applications with the tools and services in SAP HANA
    Wednesday, 22 June 2016, 16:00 – 17:15, 1.61
    by Thomas Grassl
    Receive practical advice on how to leverage SAP HANA to access a variety of data types, including structured, unstructured, streaming, spatial, and more. Join Thomas Grassl from SAP SE, as he guides you through demos to learn how real businesses are transforming IT with innovative apps built on SAP HANA.

  3. Configuration and deployment considerations for cost-optimizing SAP HANA infrastructure
    Tuesday, 21 June 2016, 08:30 – 09:45, Hall N1
    by Matthias Haendly
    Total cost of ownership and landscape complexity are some of the major challenges CIOs face for their IT infrastructure investments. Join Matthias Haendly from SAP SE in this session to learn how to lower your total cost of ownership and maximize the performance and efficiency of your SAP HANA infrastructure and resource investments.

  4. How to deliver trusted data for SAP HANA and SAP S/4HANA
    Wednesday, 22 June 2016, 10:15 – 11:30, L2
    by Philip On
    With the explosion of data coming from Big Data, cloud, social, and IoT environments, you need enterprise information management (EIM) solutions to deliver trusted data for accurate analytics and effective business operations. Join Philip On from SAP to learn how to exploit the native capabilities of SAP HANA for data quality, data integration, data virtualization, and event streaming.

 

For more information on the agenda, do check out all six program tracks for the event here.

 

 

Engage our experts live on site

Be sure also to join our experts at the “Ask the Experts” session for detailed answers to some of the questions that matter the most to you and your business. Sessions will take place on both Monday and Tuesday, so be sure to add this to your agenda schedule.

 

Ask the Experts: HANA 2016

  • Monday, 20 June 2016, 18:30 – 19:15, Exhibition Hall
    with Dan Lahl, SAP, and more


 

 

Attend the Keynote

Learn more about SAP's vision for the digital enterprise, and don't miss the following keynote session:

 

  • Transforming organisations to thrive in the digital world
    Monday, 20 June, 2016, 09:00 - 10:30, Hall E1
    with Irfan Khan and Jayne Landry from SAP
    Businesses are facing significant change in this highly competitive and rapidly changing economy. Join Irfan Khan, GM and Global Head of SAP Database & Data Management, and Jayne Landry, Global VP and GM SAP Business Intelligence, as they share SAP’s vision for the digital enterprise and demonstrate ways in which SAP technology is preparing organisations to prosper in the digital world.

 

 

Registration Details

If you have not signed up for this event, be sure to register for the event here. For more information, do visit the SAPInsider Vienna website here, or view the online brochure here. We look forward to seeing you there!


SAP HANA Upgrade from SP11 to SP12


Introduction

SAP has recently released SP12 with a whole lot of new features. I started to explore the new SP12 features with the first step of upgrading the HANA DB to SP12.

 

The intent of this blog is to document the upgrade process for HANA DB to SP12 and then start exploring various new features as part of SP12.

Please find below step by step details for SP12 upgrade.

 

The process is no different from the upgrade process we have followed until now. So let’s get started with the initial planning and the further steps for the upgrade to the latest version of HANA DB.


In our case the source HANA version is SP11 and the target version is SP12.

 

Project Plan/ Task List:

 

Task No | Activity Details                                                              | Time
1       | Do the pre-requisite checks                                                   | 1 day
2       | Read through SAP Notes released for SP12 and any other required dependencies | 0.5 day
3       | Download the required media from the Service Marketplace                     | 30 mins
4       | Validate the file systems                                                     | 15 mins
5       | Extract and transfer media files to the server                                | 30 mins
6       | Take a backup before starting the upgrade                                     | 15 mins
7       | Start the HANA DB upgrade                                                     | 40 mins
8       | Take a backup after the upgrade process is completed                          | 15 mins
9       | Complete the post-upgrade activities                                          | 10 mins

 

Pre-Upgrade Steps:

 

Download the media files from SMP – the SAP Service Marketplace

 

https://service.sap.com/swdc

 

1.jpg

Required SAP notes and upgrade guide

 

http://help.sap.com/hana/SAP_HANA_Server_Installation_Guide_en.pdf

 

As of SAP HANA SPS Platform 12 (Revision 120), the SAP HANA server installation and update documentation has been updated for XS Advanced installation, as well as other smaller improvements to existing functionality.


Note: We will be using the AWS cloud platform for this upgrade, with 256 GB RAM and SUSE SLES 11 SP3. This is the demo system I am using, and the application server is S/4 HANA.

 

Supported Operating Systems for SAP HANA: For information about supported operating systems for SAP HANA, see SAP Note 2235581 - SAP HANA: Supported Operating Systems

 

SUSE Linux Enterprise Server (SLES)

SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System

SAP Note 1855805 - Recommended SLES 11 packages for HANA support on OS level

SAP Note 1954788 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3

 

Check the OS level File system details and all the required permissions

 

The SAP HANA database lifecycle manager (HDBLCM) requires certain file systems in order to successfully install an SAP HANA system

 

2.jpg

Check the <sid>adm user ID/PWD and required OS user ID/PWD

 

Transfer the media files to HANA server

 

Please make sure you take a backup before starting the actual upgrade activities.
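If you prefer to trigger the backup from the SQL console instead of the Studio backup wizard, a minimal statement would be (the destination file prefix is just an example):

BACKUP DATA USING FILE ('COMPLETE_BEFORE_SP12_UPGRADE');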

 

3.jpg

Backup details before upgrade: backup size: 114.72 GB, time taken to complete the backup: 14 mins 20 sec

 

4.jpg

Stop the application server: log in as <sid>adm and stop the SAP application server


5.jpg

 

Upgrade Process:


We will be performing the upgrade through the command line; an alternative way of doing the upgrade is through HDBLCMGUI.

 

Navigate to the installation media > run the command ./hdblcm

 

6.jpg

 

7.jpg

 

8.jpg

 

As of SAP HANA SPS Platform 12 (Revision 120), SAP HANA supports installation and update of the SAP HANA XS Advanced Runtime.


 

The following XS Advanced Runtime parameters are available now:

  • xs_components_cfg: Specifies the path to the directory containing MTA extension descriptors (*.mtaext)
  • xs_customer_space_isolation: Run applications in customer space with a separate OS user
  • xs_customer_space_user_id: OS user ID used for running XS Advanced applications in customer space
  • xs_domain_name: Specifies the domain name of an xs_worker host. The domain name has to resolve to the SAP HANA host which is running the xscontroller and xsuaaserver service.
  • xs_routing_mode: Specifies the routing mode to be used for XS advanced runtime installations.
  • xs_sap_space_isolation: Run applications in SAP space with a separate OS user
  • xs_sap_space_user_id: OS user ID used for running XS advanced runtime applications in SAP space

 

As of SAP HANA SPS Platform 12, the SAP HANA database lifecycle manager (HDBLCM) supports the update of the XS Advanced Runtime.

For more information, see Installing XS Advanced Runtime in the SAP HANA Server Installation and Update Guide

 

At the end of installation you will find new services added to SAP HANA Studio

 

9.jpg

 

The SAP HANA database upgrade has completed successfully.

 

10.jpg

 

The new HANA DB version is 1.00.120.

 

11.jpg
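As a quick cross-check, you can also read the version from the SQL console; after the upgrade it should report 1.00.120:

SELECT VERSION FROM "SYS"."M_DATABASE";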

 

All the services are up and running, with the new XS engine-related services added as part of SP12.

 

12.jpg

Post Upgrade:

 

The final step is to take a backup after the upgrade.

 

13.jpg

 

Backup details after upgrade: backup size: 119.41 GB, time taken to complete the backup: 14 mins 52 sec

 

Post upgrade, the DB size has increased by approx. 5 GB.

14.jpg

Start the application server

15.jpg

 

The SAP HANA upgrade has completed successfully. Now it's time to explore the new SP12 functionality!

 

Reference Links:

 

https://launchpad.support.sap.com/#

https://apps.support.sap.com/sap/support/pam

http://help.sap.com/hana/Whats_New_SAP_HANA_Platform_Release_Notes_en.pdf

http://help.sap.com/hana/SAP_HANA_Server_Installation_Guide_en.pdf

Hands-on video tutorials for SAP HANA Graph


Hi,

 

Graph processing is a hot topic right now - especially since it enabled investigative journalists to uncover the world's biggest offshore banking scandal. In the investigation commonly referred to as the "Panama Papers", graph processing allowed investigators to sift through a 2.6 terabyte morass of data incorporating 1,400 offshore tax havens and 100,000 companies in order to reveal hitherto unknown connections.


Did you know that SAP HANA includes a native in-memory graph processing capability?

GRAPH 03.PNG

To quote Michael Eacrett's recent blog covering what's new in SPS 12:

"SAP HANA graph data processing is now generally available, providing processing capabilities that help customers extract deeper insights from hyper-connected data and their relationships. SAP HANA includes a graph engine with built-in graph algorithms (neighborhood search, shortest path, strongly connected components, pattern matching) to find connections without manually creating complex JOIN statements. It also introduces a Property Graph model with flexible schema, which enables users to traverse relationships without the need for predefined modeling. It also comes with a graph viewer tool for quick visualization and dynamic interaction (i.e. changing algorithm parameters) with graph data in real time, and a graph modeler tool that is integrated with SAP Web IDE for SAP HANA to create and consume graphs visually instead of via SQL or SQLScript."


So you're raring to get going with graph processing but don't know where to start? Well that's where the SAP HANA Academy comes in...


With nearly two hours of hands-on video content, the new SAP HANA Academy playlist covering SAP HANA graph data processing provides tutorials that cover everything from a chalkboard overview and introduction, to downloading, installing, and working with the graph viewer tool, to enabling graph processing inside calculation views via the graphical modeler based on XS Advanced and SAP Web IDE for SAP HANA.

 

Here are direct links to all of the video tutorials published so far:


Getting Started


Create Graph Workspace

 

Graph Viewer:

Overview of Graph Viewer

Neighborhood Search

Neighborhood Search with Parameters

Strongly Connected Components

Shortest Path

Pattern Search

 

Graphical modeler in XS Advanced and SAP Web IDE for SAP HANA:

Getting Started

Create Project

Create Graph Workspace

Calc View - Strongly Connected Components

Calc View - Neighborhood Search

Calc View - Neighborhood Search with Parameters

 

Alternatively, here's the main playlist on YouTube: SAP HANA Academy - Graph

 

The code samples used in the videos are available here: https://github.com/saphanaacademy/Graph
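If you want a quick feel for the underlying DDL before diving into the videos: a graph workspace is defined on top of ordinary column tables holding the vertices and edges. A minimal sketch (all schema, table, and column names here are invented for illustration, not taken from the samples) looks roughly like this:

CREATE GRAPH WORKSPACE "MYSCHEMA"."MY_GRAPH"
  EDGE TABLE "MYSCHEMA"."EDGES"
    SOURCE COLUMN "FROM_ID"
    TARGET COLUMN "TO_ID"
    KEY COLUMN "EDGE_ID"
  VERTEX TABLE "MYSCHEMA"."VERTICES"
    KEY COLUMN "VERTEX_ID";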

 

If this has whetted your appetite in other new stuff in SPS 12 do check out the following playlist: SAP HANA Academy - What's New in SPS 12

 

All feedback welcome – in the comments section below, @pmugglestone or mailto:HanaAcademy@sap.com.

 

Happy graph processing!

 

Philip

Dynamic filter / User Exit Variable for HANA Views


Recently I saw some posts on SCN where people asked "how can we restrict data in HANA views dynamically?", like showing data for "Last 3 months", "Last 13 months" or "Last two years", etc. So I thought of listing the steps in a small blog post.

 

Requirement: Based on user input or the current month, we want to restrict data in our view. A common requirement is to show the last 12/13 months of data dynamically. In general, we want to go 'n' months back from the current month or whatever month the user selects.

 

Solution: From SPS09, HANA allows an Input Parameter of type "Derived From Procedure/Scalar Function" - and I will use this to meet the requirement. The good thing about this solution is that we can restrict the data at the first projection node and avoid bringing unnecessary data to the upper nodes. We shall write a procedure which takes a month input from the user (or the current month if nothing is supplied) and returns the "to" and "from" months.

 

This procedure takes as input the "desired month" and the "number of months" the user wants to go back. The default value for the "desired month" is the current month and that for the "look back months" is 12.

 

Example in Fig 1: give me a 5-month range from 2016/01, i.e. 2015/08 to 2016/01. We shall see the procedure coding later. The same procedure can be used for any number of look-back months.

Proc_Call.PNG

Fig 1 - Procedure Call .  Here my "from month" is 2015/08 and "to month" is 2016/01

 

Detail Steps :

 

Step 1) Create a procedure with input and output types as string (a must). It can have only one scalar output parameter. By this time you have probably guessed why I have the "to" and "from" months in the same field, separated by a hyphen.

 

Step 2) Create two Input Parameters, one of type "Direct" and another of type "Derived From Procedure/Scalar Function", in your Calculation/Analytic View.

IP_DIRECT.PNG

Fig 2 : Input Parameter type Direct

 

IP_Procedure.PNG

Fig 3 : Input Parameter type "Derived from Procedure/Scalar Function"

 

Step 3) We need to map one input parameter to the other to receive the user input and return the calculated month range. Click on "Manage Mapping". This will pass the user's selection of month and the number of months we want to go back. For this model, I have selected 2 (a constant) months back from the user/current month; based on the requirement we could ask the user to select the number of months to go back. For that we would need to create another Input Parameter of type "Direct".

Capture.PNG

Fig 4 : Mapping of Input Parameter to pass user selection to Procedure .

 

Step 4) Now we will use the Input Parameter IP_MONTH to restrict data in the projection node. Right-click on "Expression" under the "Filter" folder and click "Open".

expression1.PNG    expression.PNG

Fig 5 : Filtering Expression

 

I am using the "leftstr" and "rightstr" operators to take only the relevant portions from the output of the procedure, i.e. the first 6 characters and the last 6 characters respectively.

 

Output of procedure: 201604-201606

Leftstr('201604-201606', 6) = 201604 and Rightstr('201604-201606', 6) = 201606. My CALMONTH would then be between 201604 and 201606.
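Put together, the filter expression on the projection node would then look roughly like this (assuming the month column is called "CALMONTH" and the derived input parameter is IP_MONTH):

"CALMONTH" >= leftstr("$$IP_MONTH$$", 6) and "CALMONTH" <= rightstr("$$IP_MONTH$$", 6)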

 

Let's see at what level the filters are getting applied using PlanViz. In total I am getting 703 records as per PlanViz; we will check the total number of records in the base table based on our selection from the SAP end.

Plan VIz.PNG       results3.PNG

This procedure can be reused for any number of look-back months. We either need to select a different constant value or ask the user to select the number of months to go back.

 

 

Detail of Procedure :


We need to go to the HANA Development perspective to create this procedure.

 

Open  "Repository View" ----> Right click on your Package -----> Select "New"


Choose "Others" ----> SAP HANA ----> Database Development ---->Stored Procedure   . Provide Procedure Name and Target Schema. It would automatically take .hdbprocedure extension .  Put the attached code , save and activate the procedure .   Once activated successfully, test the procedure from SQL prompt using "call <Procedure> " statement .  . If everything fine, follow from step 2.


PS: If you want, you can use two different procedures to return the "To" and "From" values of the month selection. In that case, you would not need to use the leftstr or rightstr operators.



It would be great to have your feedback.
