
HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project or customer system, and business users to manage and automate business decisions and rules based on their organization's data.

In daily business, strategic plans and mission-critical tasks are implemented through countless operational decisions, made either manually or automatically by business applications. An organization's agility in decision-making has therefore become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA Rules Framework
  • An easy-to-implement, configurable SAPUI5 control that exposes the framework's capabilities to business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)


  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png    

Scalability and Performance | As a native SAP HANA solution, HRF leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP Fraud Management

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG


SAP HANA TDI on Cisco UCS and VMware vSphere - Part 1


Introduction

 

Support for non-productive SAP HANA systems on VMware vSphere 5.1 was announced in November 2012. Since April 2014, productive SAP HANA systems can also be virtualized by customers on VMware vSphere 5.5. Currently, some restrictions apply which prevent SAP HANA from being treated like any other SAP application running on VMware vSphere. Because the conditions for virtualized HANA will be harmonized in the future to fit a homogeneous SAP technology platform, it is recommended to continuously check the SAP documentation and always refer to the latest version.

 

The audience of this series should have a basic understanding of the following components:

 

Additionally, a deeper understanding of SAP HANA TDI is mandatory:

SAP: SAP HANA Tailored Datacenter Integration

Cisco: SAP HANA TDI on Cisco UCS

Cisco and EMC: Cisco UCS Integrated Infrastructure for SAP Applications with EMC VNX

Cisco and NetApp: FlexPod Datacenter for SAP Solution with Cisco ACI

 

The following combination has been tested and verified as working. Using lower versions is strongly discouraged; newer versions are expected to work as long as the combination is reflected in the Cisco and VMware compatibility guides.

  • Cisco UCS Manager 2.1
  • VMware ESXi 5.5
  • VMware vCenter Server 5.5
  • SUSE Linux Enterprise Server 11 SP3
  • Red Hat Enterprise Linux Server 6.5
  • SAP HANA 1.0 SPS07

 

Although one of the goals of this series is to consolidate information about the virtualized HANA deployment process, there is still a lot of necessary referencing to other documentation. Please consult the following during the planning and installation phase:

 

Virtualization References (Document ID / URL - Description)

  • SAP Note 1492000 - General Support Statement for Virtual Environments
  • SAP Note 2161991 - VMware vSphere configuration guidelines
  • SAP Note 1606643 - VMware vSphere host monitoring interface
  • SAP Note 1788665 - SAP HANA Support for virtualized / partitioned (multi-tenant) environments
  • SAP Note 1995460 - SAP HANA on VMware vSphere in production
  • SAP Note 2024433 - Multiple SAP HANA VMs on VMware vSphere in production
  • SAP Note 2157587 - SAP Business Warehouse, powered by SAP HANA on VMware vSphere in scale-out and production
  • SAP Note 2237937 - Virtual Machines hanging with VMware ESXi 5.5 Update 3 and 6.0
  • HANA Virtualization Guidelines from SAP - SAP HANA Virtualization Guidelines with VMware vSphere
  • HANA Virtualization Guidelines from VMware - SAP HANA Virtualization Guidelines with VMware vSphere
  • SAP Virtualization Best Practices - Overall SAP Virtualization Best Practices from VMware

 

Linux References (Document ID / URL - Description)

  • SAP Note 171356 - SAP software on Linux: General information
  • SAP Note 2235581 - SAP HANA: Supported Operating Systems
  • SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation
  • SAP Note 2009879 - SAP HANA Guidelines for RHEL Operating System Installation
  • SAP Note 2205917 - SAP HANA DB: Recommended OS settings for SLES 12
  • SAP Note 2240716 - SAP HANA DB: Recommended OS settings for SLES 11 SP4
  • SAP Note 2247020 - SAP HANA DB: Recommended OS settings for RHEL 6.7
  • SAP Note 2001528 - SAP HANA Database SPS 08, SPS 09 and SPS 10 on RHEL 6 or SLES 11
  • SAP Note 2228351 - SAP HANA Database SPS 11 (or higher) on RHEL 6 or SLES 11

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

Troubleshooting SAP HANA Authorisation issues


This document deals with issues regarding analytic privileges in SAP HANA.


 

So what are Privileges some might ask?

 

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So, for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So you are a DBA, and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio. What are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

This is the perfect opportunity to run an authorisation trace via the SQL console in HANA Studio.

The instructions below walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in the HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

_____________________________________________________________________________________________________________________________

 

If you would like a more detailed trace of the privileges needed, you could also execute the DEBUG-level trace (usually SAP Development would request this):

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='debug' with reconfigure;


 

2) Reproduce the issue/execute the command again


 

3) When the execution finishes, please turn off the trace as follows in the HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

______________________________________________________________________________________________________________________________

 

So now that you have turned the trace on and reproduced the issue, make sure the trace has been turned off again.

 

You should now see a new indexserver*.trc trace file created in the Diagnosis Files tab in HANA Studio.


 

Once you open the trace file, scroll to the end and you should see something similar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

select * from objects where object_oid = '<oid>';

select * from objects where object_oid = '78787';

 

Once you have found the owner of this object, you can get the owner to grant the TABLEAU user the necessary privileges to query the object.
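As a minimal sketch (the view name is taken from the trace above; the analytic privilege name is a hypothetical example), the owner could grant either the object privilege or the missing analytic privilege like this:

GRANT SELECT ON "_SYS_BIC"."onep.MasterData.qn/AT_QMT" TO TABLEAU;
-- or, if a specific analytic privilege is what is missing:
GRANT STRUCTURED PRIVILEGE "AP_MASTERDATA_READ" TO TABLEAU;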

 

Please be aware that if you find that the owner of an object is _SYS_REPO, it is not as straightforward as logging in as _SYS_REPO, because that is not possible: _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design-time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

In this case you have to create a .hdbrole file which gives the access (a development type of role, granting SELECT, EXECUTE, INSERT, etc. on the schema). You then assign this role to the user who is trying to access the object.
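A minimal sketch of such a design-time role (package path, role name and schema are hypothetical examples), stored for instance as acme.roles/schema_access.hdbrole in the repository:

role acme.roles::schema_access {
    catalog schema "MYSCHEMA": SELECT, INSERT, UPDATE, DELETE, EXECUTE;
}

After activation the role is owned by _SYS_REPO and can be granted to the end user, for example with:

call _SYS_REPO.GRANT_ACTIVATED_ROLE('acme.roles::schema_access','TABLEAU');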

 

 

Another option that is available for analyzing privilege issues was introduced as of SPS 09. This comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

HANA security (part 1): Authentication model Kerberos/SPNEGO


In my security document part 1, I'll explain how to configure SAP HANA authentication methods based on:

  • HANA SSO with Kerberos authentication
  • Single sign-on with SPNEGO

 

My configuration will be based on a single container database architecture with my internal network.

 

For my setup I'll use my lab environment based on VMware, Microsoft Windows Server 2008 R2, SAP HANA Rev 101 on SLES 11 SP3, and Windows 7 Enterprise.

 

 

Disclaimer: For my deployment I’ll issue local certificate with no outside exposure.

 

Order of execution

 

  • Register SLES server into DNS
  • Step-by-step check on Hana server
  • Create Hana Database service user in AD
  • Register service Principal Name (SPN) in AD
  • Generate the keytab file
  • Configure Kerberos from Hana studio
  • Configure SPNEGO for XS application

 

 

Notes used

 

1837331 - HOWTO HANA DB SSO Kerberos/ Active Directory

1813724 - HANA SSO/Kerberos: create keytab and validate conf

1900023 - How to setup SAML SSO to HANA from BI

 

 

Links used

 

Help SAP Hana Platform Core - Security

 

Overview Architecture

archi.jpg

 

The following architecture is based on virtual servers; below are the details of my deployment:

 

•    Domain: will.lab

•    Active Directory and NTP server: activedir.will.lab / 192.168.0.109

•    Hana server: vmhana01.will.lab / 192.168.0.116

•    Desktop client: desk-client.will.lab / 192.168.0.137

 

 

 

Register SLES server in DNS

 

Registering the Linux server in DNS makes managing the entire landscape easier, but for the registration to succeed several prerequisites need to be respected:

•    The network card needs to be set up according to the DNS entry

•    The Linux server must have the same time zone as the DNS server

 

The necessary configuration can be done with the tools "yast" or "yast2" or on the command line; I will use YaST2.

 

Specify the DNS server ip in “Name Servers” and the domain in “Domain search”

1.jpg

 

My ntp server is my Active Directory server

2.jpg

 

Once done, from Network Services, select "Windows Domain Membership"

3.jpg

 

As you can see, I did not put "will.lab" but only "will"; then choose the options you want to propagate and hit OK to validate.

4.jpg

 

At this point the administrator password is needed to join the domain; enter it and hit "Obtain list" to make sure the password is OK.

5.jpg

 

We are now part of the domain

6.jpg

 

To validate, I'll make two tests. First, an nslookup from the Linux server for my client desktop.

7.jpg

 

And for the second one, from my client desktop I'll SSH to my Linux server and connect to it with my AD user.

8.jpg

 

 

Step-by-step check on Hana server

 

Registering the Linux server in my AD was the first step; now I need to perform a series of internal checks to guarantee a successful configuration.

It consists of:

  • Hostname resolution check
  • Hana database server krb5.conf file check

 

Run the following command and make sure you have the exact result as below with your information

9.jpg

 

And also check the reverse lookup

10.jpg

 

The registration in AD changed the entries in the krb5.conf file; add the following lines

11.jpg
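For orientation, a typical /etc/krb5.conf for this landscape could look roughly like the sketch below (realm and host names are the lab examples used above; adapt them to your own AD):

[libdefaults]
    default_realm = WILL.LAB

[realms]
    WILL.LAB = {
        kdc = activedir.will.lab
        admin_server = activedir.will.lab
    }

[domain_realm]
    .will.lab = WILL.LAB
    will.lab = WILL.LAB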

 

And make two tests with the "kinit" and "klist" tools

12.jpg
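As a quick sanity check, a sketch of the two commands (any valid AD account of the realm will do):

kinit administrator@WILL.LAB    # should prompt for the password and return without error
klist                           # should show a valid TGT: krbtgt/WILL.LAB@WILL.LAB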

 

 

Create Hana database service user in Active Directory

 

With the checks returning the expected values, I now need to create a service user in Active Directory to represent the HANA database; it will be mapped to an SPN.

I'll create this service user in a dedicated OU in order to grant the necessary administrative privileges.

13.jpg

 

I used the command line, but it could also be created with the AD tools:

dsadd user "cn=vmhana01,OU=hana,DC=will,DC=lab" -pwd <password> -disabled no -pwdneverexpires yes -acctexpires never

14.jpg

 

Now make a connection test from the HANA server by using the "kinit" tool

15.jpg

 

 

Register Service Principal Name (SPN) in AD

 

In this step I map a service name to the service user created earlier; in the case of HANA the following format must be respected:

hdb/<DB server>@<domain>

I use the following command in AD to generate it: setspn -S hdb/vmhana01.will.lab WILL\vmhana01

 

Generate the keytab file


In elevated mode, run the following ktpass command:

ktpass -princ hdb/vmhana01.will.lab@WILL.LAB -mapuser WILL\vmhana01 -ptype KRB5_NT_PRINCIPAL -pass <password> -crypto All -out c:\krb5.keytab

16.jpg

Take note of the kvno number, in my case "3".

The consistency check can also be done by using the python script “hdbkrbconf.py” provided in the note: 1813724 - HANA SSO/Kerberos: create keytab and validate conf

 

Configure Kerberos for Hana studio

 

The ktpass command above created a keytab on my Windows server; I'll copy it to my HANA server at /etc and check its content.

17.jpg

 

And verify the consistency of the keytab in order to check the "kvno"

18.jpg
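A quick way to inspect the copied keytab on the Linux side (standard MIT Kerberos tooling) and compare the KVNO with the value noted from ktpass:

klist -kte /etc/krb5.keytab    # the hdb/vmhana01.will.lab@WILL.LAB entry should show KVNO 3 in this example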

 

With the kvno valid, I create a test user in Active Directory and try to log on with it in HANA studio.

I used the command line, but you can also create it with the graphical tool.

18.2.jpg

 

Now I go into the studio as "SYSTEM" or a user with administrator privileges to map my test user account for Kerberos

19.jpg
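The mapping can also be done in SQL; a minimal sketch (user name and principal are the lab examples above; an existing user would be mapped with ALTER USER instead):

CREATE USER TESTUSER WITH IDENTITY 'testuser@WILL.LAB' FOR KERBEROS;
-- or, for an existing user:
ALTER USER TESTUSER ADD IDENTITY 'testuser@WILL.LAB' FOR KERBEROS;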

 

Once done, I log on with my test account on my client desktop and add the HANA entry with the "Authentication by current operating system user" option

20.jpg

 

And I’m in without password

21.jpg

 

 

Configure SPNEGO for XS application

 

To configure SPNEGO for XS applications a new SPN needs to be created. Earlier I used the format "hdb/<DB server>@<domain>" for the studio connection, but for HTTP connections the following format needs to be used:

HTTP/<DB server>@<domain>

 

I’ll map the same service user created earlier “vmhana01”

22.jpg

 

And repeat the check to ensure the kvno number is correct

23.jpg

 

Since I'm not a developer, I created a test app based on the developer guide tutorial (section 2.5) to test the authentication mechanism.

 

Here is the .xsjs application created; it displays the hello text after logging in.

24.1.jpg

 

As we can see the password is required

24.jpg

 

To change the authentication behavior, from XS Admin Tool select the created project app and hit “Edit”

25.jpg

 

Specify SPNEGO and save

Note: I specified SPNEGO for the whole package, but it can also be set with more granularity

26.jpg

 

And try again and it works

27.jpg

 

Note: I'm using Mozilla Firefox, so use the specific add-on for NTLM/Integrated Authentication

 

The configuration is completed for my Kerberos/SSO

 

Williams

Troubleshooting ABAP Dumps in relation to SAP HANA


Purpose

 

The purpose of this document is to instruct SAP customers on how to analyse ABAP dumps.

 

 

Overview

 

How to troubleshoot ABAP Dumps

 

 

Troubleshooting

 

When looking at an ABAP system you can sometimes come across runtime errors in Transaction ST22:

 

Wiki ST22.PNG

 

 

Clicking into the "Today" or "Yesterday" tab will bring up all the ABAP Runtime errors you have encountered in the past 2 days.

 

You can also filter the dates for a particular dump by using the filter feature:

 

wiki ST22 filter.PNG

 

 

Here are some examples of runtime errors you might see in ST22:

 

wiki 2.PNG

wiki5.PNG

wiki 3.PNG

 

 

 

So from looking at these dumps, you can see

1: Category

2: Runtime Errors

3: ABAP Program

4: Application Component

5: Data & Time.

 

 

The ST22 dumps do not really give you much information here so more information will be needed.

 

For more information you will then look into the Dev_W files in the transaction ST11

 

 

 

ST11 allows you to look further into the Dev_w files relating to the dumps in ST22:

 

wiki 4.PNG

 

Go to ST22 > click on the runtime errors for "Today", "Yesterday" or a filter. This will bring up the specific dump you wish to analyse.

 

Here you will see 11 columns like so:

 

wiki 5.PNG

 

 

Here you can see the columns I have mentioned. The Work Process Index number you need is in the column named WP Index.

 

 

Once you find the dev_w index number you can then go to ST11 and find further information:

 

In the ST11 Dev_w files you have to match the time of the dump in ST22 with the recorded times in the Dev_w process files.

 

 

 

 

If there is no usable information in the Dev_W files, the next step would be to analyse the issue from a database perspective.

 

 

From the HANA DB perspective

 

1: Open HANA Studio in SAP HANA Administration Console View

 

wiki 1.PNG

 

 

 

2: Check the diagnosis trace files in accordance with the time stamp of the dump you saw previously in ST22. To do this we have to go to the Diagnosis tab in HANA Studio:

 

wiki2.PNG

 

 

 

3: Check the time stamp from the ST22 dump (Date and Time), and then match this accordingly with the time in either the Indexserver.trc or nameserver.trc.

 

wiki 3.PNG

 

Search for the corresponding time stamp mentioned above i.e. 18/11/2015 @ 10:55:43.

 

Or instead of searching you could use the below SQL:

 

select top 500 service_name, timestamp, trace_text from m_merged_traces where service_name in ('indexserver', 'nameserver') and timestamp between '2015-11-18 10:40:00' and '2015-11-18 10:59:00'

 

 

Searching the nameserver log files can be a good indication of whether your ST22 dump is related to network issues; you may see errors such as:

 

 

e TrexNet          Channel.cpp(00339) : ERROR: reading from channel 151 <127.0.0.1:<host>> failed with timeout error; timeout=10000 ms elapsed [73973]{-1}[-1/-1]
2015-01-28 01:58:55.208048 e TrexNetBuffer    BufferedIO.cpp(01092) : channel 151 from <127.0.0.1:<host>>: read from channel failed; resetting buffer


 

 

 

If you do find some errors similar to the above then check which host the error is pointing to and check whether or not this service was available at the time of the dump.

 

 

If this does not yield any useful information, the next step is to ask someone from your network team to look into this. Checking /var/log/messages is always a great place to start.

 

 

When searching through the indexserver.trc file, you may notice some irregularities recorded there. The next step is to search for this error on the SAP Service Marketplace for a known KBA or Note (learn how to search more effectively with 2081285 - How to enter good search terms to an SAP search?).

 

Related Documents

 

Did you know? You can find details of common issues, fixes, patches and much more by visiting SAP moderated forums on http://scn.sap.com/docs/DOC-18971

Documentation regarding HANA installation, upgrade, administration & development is available at http://help.sap.com/hana_appliance

SAP HANA Troubleshooting WIKI: http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing
SAP HANA discussion forum: http://scn.sap.com/community/hana-in-memory/
Learn how to search more effectively: 2081285 - How to enter good search terms to an SAP search?
__________________________________________________________________________________________________________

Troubleshooting SAP HANA High CPU Utilisation


High CPU Utilisation

 

Whilst using HANA (i.e. running reports, executing queries, etc.) you may see an alert in HANA Studio that the system has consumed its CPU resources and has reached full utilisation, or the system hangs.

 

Before performing any traces, please check whether Transparent HugePages (THP) are enabled on your system. THP should be disabled across your landscape until SAP recommends activating them again. Please see the relevant notes in relation to Transparent HugePages:

 

HUGEPAGES 

 

SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation

SAP Note 1824819 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2

SAP Note 2131662 - Transparent Huge Pages (THP) on SAP HANA Servers

SAP Note 1954788 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3

 

 

THP activity can also be checked in the runtime dumps by searching for "AnonHugePages". Whilst checking the THP, it is also recommended to check:

SwapTotal = ??

SwapFree = ??

 

This will let you know if there is a reasonable amount of memory in the system.
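The same checks can be made directly at OS level on the HANA host; a short sketch using standard Linux interfaces:

cat /sys/kernel/mm/transparent_hugepage/enabled   # [never] should be the selected value
grep AnonHugePages /proc/meminfo                  # a non-zero value indicates THP is in use
grep -i swap /proc/meminfo                        # shows SwapTotal and SwapFree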

 

Next you can check the global allocation limit (GAL) in the runtime dump (search for "IPM") and ensure the limit is not lower than what the process/thread in question is trying to allocate.
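The allocation limits and current memory usage can also be checked per service with a query like the following sketch (M_SERVICE_MEMORY is a standard monitoring view; the exact column set is assumed here):

SELECT SERVICE_NAME,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT/1024/1024/1024, 2) AS ALLOC_LIMIT_GB,
       ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024/1024, 2)     AS USED_GB
FROM M_SERVICE_MEMORY;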

 

Usually it is evident what caused the high CPU usage. In many cases it is caused by the execution of large queries or by running reports on models from HANA Studio.

 

To capture the High CPU we can use a Kernel Profiler Trace. To be able to use the kernel profiler, you must have the SAP_INTERNAL_HANA_SUPPORT role. This role is intended only for SAP HANA development support.

 

The kernel profile collects, for example, information about frequent and/or expensive execution paths during query processing. It is recommended that you start kernel profiler tracing immediately before you execute the statements you want to analyze and stop it immediately after they have finished. This avoids the unnecessary recording of irrelevant statements. It is also advisable as this kind of tracing can negatively impact performance.

 

When you stop tracing, the results are saved to trace files that you can access on the Diagnosis Files tab of the Administration editor.

 

You cannot analyze these files meaningfully in the SAP HANA studio, but instead must use a tool capable of reading the configured output format, that is KCacheGrind or DOT (default format).

(http://www.graphviz.org/Download_windows.php)

 

You activate and configure the kernel profiler in the Administration editor on the Trace Configuration tab. Please be aware that you will also need to execute 2-3 runtime (RTE) dumps; the kernel profiler trace results are read in conjunction with the runtime dumps to pick out the relevant stacks and thread numbers.

**Please execute the runtime dumps first; only after the RTE dumps are finished should you activate the kernel profiler trace. We do this because we do not want the RTE dumps recording the kernel tracing and confusing the read.**

To see the full information on kernel profiler traces please see Note 1804811 or follow the steps below:

   

Kernel%20Profiler.PNG

 

Connect to your HANA database server as user <sid>adm (for example via PuTTY) and start HDBCONS by typing the command "hdbcons".
To do a kernel profiler trace of your query, please follow these steps:

1. "profiler clear" - Resets all information to a clear state

2. "profiler start" - Starts collecting information.

3. Execute the affected query.

4. "profiler stop" - Stops collecting information.

5. "profiler print -o /path/on/disk/cpu.dot;/path/on/disk/wait.dot" - writes the collected information into two dot files which can be sent to SAP.

 

 

Once you have this information you will see two dot files called

1: cpu.dot

2: wait.dot.

 

To read these .dot files you will need to download GVEdit. You can download this at the following:

  http://www.graphviz.org/Download_windows.php

 

Once you open the program it will look something similar to this:

 

Graph%20Viz.PNG

   

     
The wait.dot file can be used to analyse a situation where a process is running very slowly without any obvious reason. In such cases, a wait graph can help to identify whether the process is waiting for an IndexHandle, I/O, a savepoint lock, etc. (If you are using this for a hang situation and you want to get a proper timeline, then I suggest you look into the performance load graph for a flat line, i.e. where nothing was recorded.)

 

So once you open the Graphviz tool, please open the cpu.dot file: File > Open > select the dot file > Open. This will open the file.

Once you open this file you will see a screen such as:

 

graphviz%201.PNG

   

 

The graph might already be open and you might not see it because it is zoomed out very far. You need to use the scroll bars (horizontal and vertical) to scroll.

 

CPU_DOT%201.PNG

 

From there on it will depend on what the issue is that you are processing.

Normally you will be looking for the process/step that has the highest value for

E= …

Where "E" means Exclusive

There is also:

I=…

Where "I" means Inclusive

The exclusive value is of more interest, because it is the value just for that particular process or step and indicates whether more memory/CPU is used in that step or not. In this example case we can see that __memcmp_sse4_1 has I = 16.399% and E = 16.399%. By tracing the RED colouring we can see where most of the utilisation is happening and we can trace the activity, which will lead you to the stack in the runtime dump, which will also have the thread number we are looking for.

 

CPU_DOT%202.PNG

 

CPU_DOT%203.PNG

 

 

 

 

 

By viewing the cpu.dot you have now traced the RED trail to the source with the highest exclusive value. Now open the RTE (runtime dump). Working from the bottom up, we can get an idea of what the stack will look like in the RTE.

 

CPU_DOT%204.PNG

 

 

 

 

By comparing the RED path, you can see that the path matches exactly with this Stack from the Runtime dump. This stack also has the Thread number at the top of the stack.

 

So now you have found the thread number with which this query was executed. By searching for this thread number in the runtime dump we can check for the parent of this thread and for the children related to that parent. This thread number can then be linked back to the query within the runtime dumps. The exact query can now be found, giving you the information on the query and also the USER that executed it.

 

For more information or queries on HANA CPU please visit Note 2100040 - FAQ: SAP HANA CPU

 

I hope you find this instructive,

 

Thank you,

 

Michael Healy

BO Connectivity Options to Native HANA Models and BW on HANA Cubes


The following options are available to connect BO server to Native HANA and BW on HANA.

 

1. SAP BAPI

2. OLAP SAP BICS Client

3. OLAP SAP HANA Client

4. Relational DB Connection (JDBC / ODBC)

 

SAP BAPI:

 

Business Application Programming Interface (BAPI) is the traditional method used to connect the BO server (3.x systems) to BW BEx queries.

This connection is used by the Universe Design Tool (UDT) to access BW cubes.

A BAPI connection is created on a BEx query, so for each query one BAPI connection needs to be created.

 

temp.PNG

 


OLAP SAP BICS Client


OLAP SAP BICS Client (Business Intelligence Consumer Service) connectivity is widely used to connect BO (4.x) servers to BW systems.

This connection can be created via the Information Design Tool (IDT) or the BOE/BI server console.

This connection can be created with or without mentioning the BEx query name.

So we are able to create reports on multiple BW cubes with one BICS connection.


From WebI, using this connectivity we can create reports directly on a BEx query.

No universes (middle layer) are needed.


temp.PNG



OLAP SAP HANA Client:


OLAP SAP HANA Client connectivity is widely used to connect BO (4.x) servers to native HANA systems.

This connection can also be created via the Information Design Tool (IDT) or the BOE/BI server console.

This connection can be created with or without mentioning the HANA column view name.

So we are able to create reports on multiple HANA models with one BICS OLAP HANA connection.


In this connection all analytic views and calculation views are treated as cubes.

temp.PNG

This connectivity is used by Design Studio and Lumira to create reports.

 

temp.PNG

 

 

Relational DB Connection (JDBC / ODBC) :


This is relational DB connectivity.

We cannot access tables from an OLAP HANA connection.

Here we have access to HANA tables as well as models.


temp.PNG


This connectivity is used by IDT for universe creation and is used in the BO Business Explorer reporting tool.


temp.PNG


Best Regards,

Muthuram

Recovery Technologies of SAP HANA-Part 1: System Failure Recovery


Databases protection

As we know, SAP HANA is an in-memory database. How does SAP HANA ensure data consistency and correctness when the system crashes?

To answer this question, we should know that SAP HANA stores data not only in memory but also on disk. This relates to a concept called database protection: protecting the database from all kinds of interference and destruction, keeping the data safe and reliable, and recovering rapidly from crashes. Recovery technologies are therefore important measures of database protection. A transaction is a sequence of operations that cannot be split. For example, a bank transfer: account A transfers 100 dollars to account B. It includes two update operations:

  • A=A-100
  • B=B+100

These two operations cannot be split; either both are done or neither is. There are three kinds of transaction states in the log:

  • <Start T> means transaction T has been started.
  • <Commit T> means transaction T has finished and all modifications have been written to the database.
  • <Abort T> means transaction T has been stopped and all modifications have been undone.

Database failures come in three types:

  • Transaction failure is an internal failure of a single transaction; it does not affect other transactions.
  • Media failure is a hardware failure such as disk damage, no space left on disk, etc.
  • System failure is a soft failure such as a power outage, machine crash, etc. This kind of failure may result in the loss of in-memory data and affects all running transactions.

The goal of system failure recovery is to restore the system to the state it was in before the failure happened.

Validation of recovery of SAP HANA system failure

The concepts mentioned above are applicable to the SAP HANA database, so we can test them to validate recovery from an SAP HANA system failure.

First, modify the savepoint interval. During a savepoint, the SAP HANA system persists memory pages to disk. The interval is 300s by default; we change it to 3000s.

Open two SQL consoles and change the "auto commit" property to off.

Run the following SQL command in console 1:

insert into "LOGTEST"."TEST" values(1,'谢谢大家关注HANAGeek,欢迎大家一起来学习SAP HANA知识,分享SAP HANA知识。');

Run console 2 sql command:

insert into "LOGTEST"."TEST" values(2,'谢谢大家关注HANAGeek,欢迎大家一起来学习SAP HANA知识,分享SAP HANA知识。');commit;

Power off the machine running the SAP HANA system, then restart the SAP HANA system and check the content of the table.

We can regard console 1 and console 2 as transaction 1 (T1) and transaction 2 (T2). Because T1 executed a modification but did not commit it, SAP HANA rolled back to the situation before T1 began. Because T2 committed before the outage, SAP HANA recovered it to the situation before the outage, even though the system had not performed a savepoint in the meantime.
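For reference, the savepoint interval change used at the start of this test can be made with a statement like the following sketch (the parameter persistence/savepoint_interval_s in global.ini is assumed here; revert it after the test):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM')
SET ('persistence','savepoint_interval_s') = '3000' WITH RECONFIGURE;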

Strategies of system failure

If the failure is a media failure, we first need to recover from copies of the data (backups). Then the system recovers itself using the logs.

Transaction log

The transaction log is used to manage modifications in the database system; it records the details of all modifications. We do not need to persist all data when a transaction is committed; persisting the transaction log is enough. When the system crashes, its last consistent state can be restored by replaying the transaction logs, hence logs must be recorded in chronological order. In general there are three types of transaction log: undo log, redo log, and undo/redo log. SAP HANA uses only two of them: undo log and redo log. There are three kinds of records in log files:

  • <Start T> means the transaction begins.
  • <Commit T> / <Abort T> means the transaction ends.
  • Update detail, consisting of:
  1. Identification of the transaction.
  2. The operation object.
  3. The value before the update (undo log) / the value after the update (redo log) / both values (undo/redo log).

Redo log

An important feature of the redo log is that log records must be written to disk before the updated data is written into the database system. The format of a redo log record is <T, x, v>, where T identifies the transaction, x identifies the updated object and v is the value after the update. As shown below, the operations of transaction T1 are A=A-100 and B=B+100. The left part of the picture shows the steps of T1, the middle part the content of the redo log, and the right part the initial values of A and B.

5.png

The steps of redo log recovery:

  1. Scan the redo log from the head and find all transactions that have the identifier <Commit, T>. Put them in a transaction list L.
  2. Scan the records <T, x, v>. If T belongs to L, then:
  • Write(X, v) (assign the new value v to X)
  • Output(X) (write X to the database system)
  3. For each T that does not belong to L, write <Abort, T> to the log file.

We do not need to be concerned about transactions without <Commit, T>, because they definitely did not write data to the database system. We do need to redo transactions that have <Commit, T>, because their changes may not have been written to the database system yet. The writing of the redo log is synchronous with transaction processing. When the SAP HANA system restarts after a crash, it processes the redo log to recover the system. To improve the efficiency of log processing, the SAP HANA system performs savepoints (checkpoints). During a savepoint, the system persists the data that has not been persisted since the last savepoint. Hence, only the redo log written since the last savepoint needs to be processed; the redo log from before the last savepoint can be removed.

Undo log

SAP HANA not only persists the updated data of committed transactions, but may also persist data that has not been committed. So we need an undo log that has been persisted to disk. The format of an undo log record is <T, x, v>, where v represents the value before the update. As shown below, the operations of transaction T1 are A=A-100 and B=B+100. The left part of the picture shows the steps of T1 and the middle part the content of the undo log.

6.png

 

The process of undo recovery:

  1. Scan the undo log from the head and find all transactions that do not have the identifier <Commit, T> or <Abort, T>. Put them in a transaction list L.
  2. Scan the records <T, x, v>. If T belongs to L, then:
  • Write(X, v) (assign the old value v back to X)
  • Output(X) (write X to the database system)
  3. For each T belonging to L, write <Abort, T> to the log file.

In the SAP HANA system the undo log is persisted at savepoint time, which is different from the redo log. Besides, the undo log is written to the data area, not to the log area. The reason is that after a restart from a crash the system can be restored to the state of the last savepoint. If transactions after the last savepoint have committed, the system can restore them using the redo log; if they have not committed, we do not need any undo log written after the last savepoint to restore them. So the undo log after the last savepoint is useless. The advantages of this mechanism are:

  • Fewer log records need to be persisted during transaction processing.
  • It slows the growth of disk usage.
  • The database can be restored to a consistent state from the data area.

Save-point

When the database crashes, we would otherwise need to scan the entire undo and redo lists to restore it. This method has problems:

  • It takes a long time to scan the log.
  • The redo list becomes very long, so the restore takes a long time.

So SAP HANA chooses to perform savepoints regularly:

  1. Stop accepting new transactions.
  2. Write undo records to the data area.
  3. Write modified memory pages to disk.
  4. Write a savepoint identifier into the redo log.

The savepoint process is shown below.

4.png


Setting Custom theme for HANA XS applications

Reset the SYSTEM User's Password in HANA DB


Overview

 

If the SYSTEM user's password is lost, you can reset it as the operating system administrator by starting the index server in emergency mode. If your HANA DB is multitenant, this process will not work. My HANA DB revision was 102.04.

 

Prerequisites

 

You have the credentials of the operating system administrator (<sid>adm).

 

Procedure

 

Step1: Log on to the server on which the master index server is running as the operating system user (that is, <sid>adm user).

 

Step2: Open a command line interface.

 

Step3: Shut down the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StopSystem HDB

Step3.png

Step4: In a new session, start the name server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbnameserver

Step4.png

This will stay in a hanging state…

 

Step5: In a new session, start the compile server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbcompileserver

Step5.png

This will stay in a hanging state…

 

Step6: In a new session, start the index server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbindexserver -resetUserSystem

Step6.png

The following prompt appears: resetting of user SYSTEM - <<<new password>>>

 

Step7: Enter a new password for the SYSTEM user.

You must enter a password that complies with the password policy configured for the system.

The password for the SYSTEM user is reset and the index server stops.

 

Step8: In the terminals in which they are running, end the name server and compile server processes by pressing CTRL+C.

 

Step9: In a new session, start the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StartSystem HDB

 

 

Note:

 

In a scale-out system, you only need to execute the commands on the master index server.


Results

 

The SYSTEM user's password is reset. You do not have to change this new password the next time you log on with this user regardless of your password policy configuration.

SAP HANA REVISION UPDATE – SPS10


Reason for HANA DB patch level update

We are copying a HANA DB from SLES 11.3 at revision 102.01 to RHEL 6.5 at revision 102.00 through the backup/restore method using SWPM (homogeneous system copy). When restoring any HANA DB it is necessary to have at least the same or a higher patch level in the target environment. This is the reason we are updating the target HANA DB environment from revision 102.00 to the latest available patch level, 102.04.

Download SAP HANA patches

Download the following updates (database, studio & client) from the SAP Service Marketplace and transfer them to the HANA server

Fig6.png

Fig7.png

The currently available patch level is 102.04 (PL04), so we are updating to PL04. We will download the studio, client & DB packages for the update.

SAP HANA Backup before update

Take a complete backup before the revision update starts

Fig4.png

Extract HANA Patches

Move all SAR files to the HANA host server and extract them using the switch -manifest SIGNATURE.SMF

Fig8.png

If you extract more than one component SAR into a single directory, you need to move the SIGNATURE.SMF file to the subfolder (SAP_HANA_DATABASE, SAP_HANA_CLIENT, SAP_HANA_STUDIO etc.), before extracting the next SAR in order to avoid overwriting the SIGNATURE.SMF file. For more information, see also SAP Note 2178665 in Related Information.
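A minimal sketch of extracting one component with its signature manifest (the archive file name below is just an example):

SAPCAR -xvf IMDB_SERVER100_102_4-10009569.SAR -manifest SIGNATURE.SMF
# move the manifest into the extracted subfolder before extracting the next SAR,
# so it is not overwritten:
mv SIGNATURE.SMF SAP_HANA_DATABASE/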

Fig9.png

Fig10.png

Do the same for client & studio as well

Fig11.png


HANA Update via STUDIO

Run SAP HANA Platform Lifecycle Management from HANA STUDIO

Fig12.png

Fig13.png

Fig14.png

Select the location from HANA host

Fig16.png

Fig17.png

Fig18.png

Fig19.png

Fig20.png

Fig21.png

Fig22.png

Fig23.png

This completes HANA patch level update.

Importing Database Systems in Bulk into SAP DB Control Center


Registering a large number of database systems to monitor one by one can become a time consuming process. Instead of adding one system at a time, you can add multiple systems at once using an import file.

 

Note: This document was originally created by a former colleague of mine, Yuki Ji, in support of a customer engagement initiative regarding the SAP DB Control Center (DCC) product.  To preserve this knowledge, the document was migrated to this space.

 

  1. Configure Monitored Systems - Make sure to set up the technical user for each system you wish to configure. You will need the credentials of the technical user for import registration.
  2. Create an Import File - Create an import file according to the format indicated in the link. Save the import file as either a .csv or a .txt file.

    In the import file you are able to omit system login IDs or passwords. If you do so, SAP DCC prompts you during the import process for the missing information BUT it prompts you for only one user/password pair. The entered pair is then used for each system that is missing credentials for that import. If you are importing multiple systems with unique user/password combinations, it is simplest to import the systems in separate operations or to include all the login credentials in the import file.
    import_csvFile.png
  3. Login to DCC and open System Directory. In the bottom right corner of System Directory select Import Systems.

    import_button.PNG

  4. Select your Import file from your local system and click Load File to continue to the next step.

    import_selectFilePage.png

  5. Below you will see that one of the systems we entered, PM1, has already been registered so no further actions can be taken for that system. For the other systems, PM2 and YJI we can see that we are missing credentials for system YJI. The credentials will be addressed in the next step.

    In this step we choose which systems we wish to continue registering. For this example I am selecting both PM2 and YJI.

    Note: If DCC is unable to find or connect to the system, or there is a typo in the import file, you will see the related error messages on this page. Some things to check initially: monitored system configuration, Import file spelling and inputs.import_selectImport.png



  6. On selecting Import, because we have credentials missing we are asked to supply them. The window also gives a reminder that the credentials entered here will be applied to all systems that do not already have credentials supplied. The credentials I enter will be applied to system YJI.

    import_addCredentials.png

  7. After clicking Login, if the credentials are correct the systems are registered! If after clicking Login the system is not registered, check the credentials you entered and whether they match the system.

    import_completed.png

SAP HANA Workload Management


SAP HANA is a real-time, in-memory data management platform that addresses all data processing scenarios of any customer application. This leads to the typical operating environment where many concurrent users running different applications produce a heterogeneous set of workloads in the SAP HANA database. These various workloads have different resource demands on the system and compete for shared resources in the SAP HANA database. SAP HANA provides various workload management options for controlling resource consumption. This document gives detailed information about those options, based on SPS11.

View this Document

Feature Scope Description for SAP HANA


The feature scope description for SAP HANA explains which features, capabilities and documentation are available for SAP HANA SPS11.

View this Document

Union Node Pruning in Modeling with Calculation View


Data modeling in SAP HANA using calculation views is one of the key capabilities provided for end users to mold raw data into a well-structured result set by leveraging the multiple operations exposed by calculation views. On the way to achieving this we need to think about the different aspects involved.

Let me take a few lines here to quote a real-world example to provide better clarity.

We all know that there are two major parameters we generally take into consideration when qualifying or defining the standard of any automobile: its horsepower (HP) and its mileage. There is always a trade-off between the two, by which I mean that a higher-HP automobile yields reduced mileage and vice versa. Why does this happen? It is because we make the underlying engine generate more HP and thus consume most of the fuel for this purpose.

 

Let us now get back from the mechanical world to our HANA world and start thinking of the underlying execution of the HANA database as analogous to the mechanism quoted above.

When our calculation view starts computing complex calculations on big data, we will inevitably have a trade-off between performance and the volume of data.

 

In such big data scenarios, where we expect more horsepower from the underlying engine, this document looks at how we can make the mileage (performance) better as well.

 

One of the new features of HANA SPS11, called 'union node pruning in calculation views', supports this by reducing the execution cost of a calculation view: the union operation is pruned dynamically based on the end user's query.

 

Let us understand this with an example: consider that we are creating a sales report for a product across the years using a calculation view. The view consists of two data sources, current sales data (YEAR >= 2015) and archived sales data (YEAR <= 2014), both of which are provided as input to the union node of the calculation view as shown below:

 

CV_PRUN.PNG

 

Now think of a scenario where we are querying the calculation view to get only the current sales; wouldn't it be great if the underlying execution engine queried only the current_sales data source and pruned the operation on the archived data source?

 

Yes, this can now be achieved for the union operation in a calculation view by providing a pruning definition in a predefined database table, which is called the pruning configuration table.

 

The definition of the pruning configuration table should follow the format below:

 

 

union_prun.PNG

 

and example content for the pruning configuration table is shown below:

 

pruning_content.PNG

 

 

 

 

 

 

The CALC_SCENARIO column holds the calculation view that uses union node pruning, and the INPUT column takes the data source names involved in the union node of the calculation view.
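As a rough sketch of what this can look like in SQL (only CALC_SCENARIO and INPUT are confirmed by the text above; the remaining column names, the operator encoding and the table name are assumptions based on the screenshots), the configuration table and one entry for the current-sales source could be created like this:

CREATE COLUMN TABLE PRUNING_CONFIG (
    CALC_SCENARIO NVARCHAR(256),   -- calculation view that uses union pruning
    INPUT         NVARCHAR(256),   -- data source / input of the union node
    "COLUMN"      NVARCHAR(256),   -- column the pruning condition is defined on (assumed)
    "OPTION"      NVARCHAR(10),    -- comparison operator, e.g. EQ, GT, BT (assumed)
    LOW_VALUE     NVARCHAR(256),   -- filter value (assumed)
    HIGH_VALUE    NVARCHAR(256)    -- upper bound for BT conditions (assumed)
);

INSERT INTO PRUNING_CONFIG VALUES
    ('hanae2e.poorna.sp11.ws42/CV_PRUN', 'PRUN1', 'YEAR', 'GT', '2014', NULL);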

 

Now, in the advanced view properties of the calculation view, specify this pruning table as shown below:

 

view_properties.PNG

 

Now activate the above view, which involves the two data sources PRUN1 and PRUN2, together with the pruning configuration table.

 

And execute a query on the view with a filter condition that matches the condition mentioned in the pruning configuration table:

 

SELECT

  "ID",

  "YEAR",

  sum("SALES") AS "SALES"

FROM "_SYS_BIC"."hanae2e.poorna.sp11.ws42/CV_PRUN"

WHERE (("YEAR" > ('2005')))

GROUP BY "ID",

  "YEAR"

 

Visualizing the plan for the above query, we see that the union node is pruned because the filter condition matches the one in the pruning configuration table, as shown below:

 

plan_viz.PNG

 

 

Now remove the pruning configuration table from the view properties of the calculation view, activate it, execute the above query again and visualize the plan once more. We now see the union node coming into the picture; the query reads both the archived data and the current data, even though the requirement is just the current sales data.



union_non_prun.PNG

 

Thus union node pruning in a calculation view helps decide dynamically, based on the query, how the execution flow is carried out.

 

I hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you


Increased schema flexibility in SAP HANA


Schema flexibility is one of the key capabilities in SAP HANA that brings flexibility to the column store table definition. A brief insight with a good example can be seen in Getting flexible with SAP HANA.

 

Let us now understand the new capabilities in schema flexibility with this document.

 

With HANA SPS11, customers can now make use of the increased capabilities of schema flexibility in SAP HANA. Let us walk through them with some examples.

 

Create a column table to store employee details such as name, ID and age, while leaving room to add further employee information as needed at run time, using the syntax below:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility;


Adding the 'with schema flexibility' clause during table creation enables dynamic column creation during DML operations such as insert/upsert, update or delete.


Once the base structure of the employee_Details table is created, suppose a requirement comes up to add further details such as employee_salary and employee_department as new columns. Because the table was created 'with schema flexibility', we do not have to alter its structure; we can simply insert the additional data directly, as shown below:


Insert into employee_Details (emp_id, emp_name, AGE, employee_salary, employee_department) values (1, 'RAM', 29, 1000, 'PI_HANA');


The insert statement executes successfully irrespective of whether the columns referenced in it already exist; the two new columns are added implicitly to the table's metadata as part of the insert statement.
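To verify that the two dynamic columns really were created, we can query them like regular columns or look them up in the catalog. This is a minimal sketch; the system view SYS.TABLE_COLUMNS and the upper-case table name are assumptions about the standard catalog and default identifier handling.

-- The dynamically added columns can be selected like any other column
SELECT emp_id, emp_name, employee_salary, employee_department FROM employee_Details;

-- Check the table's column list and data types in the catalog
SELECT COLUMN_NAME, DATA_TYPE_NAME
FROM SYS.TABLE_COLUMNS
WHERE TABLE_NAME = 'EMPLOYEE_DETAILS';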


By default, a flexible table creates dynamic columns with the data type NVARCHAR of maximum length (5000). If we do not want this default behaviour and would rather choose the data type of dynamic columns ourselves, this is now possible with HANA SPS11 at table creation time. Let's say that in our case every dynamic column added to employee_Details must have the data type INTEGER; we can achieve this by writing the create statement as:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE INTEGER);


Now any dynamic column created during an insert or update will take INTEGER as its data type.


If the details added to the employee_Details table are heterogeneous and we want the dynamic columns to derive their data types from the inserted values, we can use the following create statement; this behaviour is known as 'data type detection'.


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE *);


Here each dynamic column derives its data type from the value inserted.

 

That is:

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary) values (1, 'RAM', 29, 'PI_HANA', 2000);

 

The last two columns take string and numeric data types respectively, which differs from the default case.

Data type detection behavior is valid for both single-valued and multi-valued entities.

 

Now consider a case where an 'employee_rating' column is to be added dynamically to the employee_Details table. The first year's rating is entered as an integer value, so the data type of the column is detected as integer; if in the following year the same column receives a floating-point value such as 3.5, that value can no longer be stored. To support this use case, there is an additional option at table creation time:


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE * AUTO DATA TYPE PROMOTION);

Yes, it is the option of automatic data type promotion at creation time that covers this use case.


This keeps the column's data type in the most generic form required by the data.

As an example, for the first year's rating our insert statement looks like this:


Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4);


Now the employee_rating column takes integer as its data type.


And in the following year, when a floating-point value arrives:

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.5);


The data type of employee_rating is automatically promoted to a floating-point type, so the insert succeeds without any errors.


Here are the allowed conversion rules for data type promotion:


conversion_rule.PNG




Another supported case is multi-value promotion. Suppose employee_phone is added as a new detail to the table, with a varchar value holding a phone number, as below:

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, '01233556589');


The entered input is stored as a single-valued varchar.


Now, when employees start using dual or triple SIM phones, there is a need to store multiple values. Because the table was created with automatic data type promotion, the new data set can be stored in the same column without altering it.

 

That is :

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, array('01233556589', '983232131', '324324'));


This converts the employee_phone column into a multi-valued character attribute.


Flexible tables also contribute to better memory management; to support this there is an operation called 'garbage collection'.

 

In our case, we decide to normalize the 'employee_rating' details into a separate table and therefore flush all the values from the 'employee_rating' column of the employee_Details table.

 

Garbage collection then kicks in implicitly, provided the employee_Details table has been enabled for it as follows:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (RECLAIM);

 

Enabling the RECLAIM option turns on garbage collection: a dynamic column (in our case 'employee_rating') is automatically dropped once no row holds a value in that column any more.
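As a minimal sketch of how this plays out with the columns from our examples (the timing of the actual column drop is handled internally by the database):

-- Flush the rating values for every row; once no row holds a value in the
-- dynamic column any more, RECLAIM allows it to be dropped automatically.
UPDATE employee_Details SET employee_rating = NULL;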


What if the need for the features discussed above arises only after the table has been created, rather than at creation time? Do we have to drop and recreate the table? The answer is no.

Or what if, at some later point, we wish to disable these characteristics individually on the existing table?


This is possible, because all the operations discussed above are also supported via ALTER TABLE, as shown below:


1) ALTER TABLE <table name> DISABLE SCHEMA FLEXIBILITY

2) ALTER TABLE <table name> ENABLE SCHEMA FLEXIBILITY [(<options>)]

3) ALTER TABLE <table name> ALTER SCHEMA FLEXIBILITY (<options>)

4) ALTER TABLE <table name> ALTER <column name> [<data type>] DISABLE SCHEMA FLEXIBILITY

5) ALTER TABLE <table name> ALTER <column name> [<data type>] ENABLE SCHEMA FLEXIBILITY

 

A one-line explanation for each of the above operations follows:


1) With this, all dynamic columns are converted to static columns. If the conversion of any dynamic column fails, the operation fails as a whole and no changes are applied. Normal tables may only have a limited number of columns (currently 1,000), so to successfully convert a flexible table into a normal table, its number of columns must not exceed this limit (see the sketch after this list for a simple check).


2) Turns schema flexibility on for an existing database table.


3) In this case, the option list is mandatory. All schema flexibility options listed for CREATE TABLE … WITH SCHEMA FLEXIBILITY can be used here, and one or several options of a flexible table can be changed in a single statement.


4) Converts the specified dynamic column to a static column.


5) Converts the specified static column to a dynamic column.
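To make these options concrete, here is a minimal sketch using the employee_Details table from the examples above; the exact option syntax should be cross-checked against the CREATE TABLE … WITH SCHEMA FLEXIBILITY documentation, and SYS.TABLE_COLUMNS is assumed as the catalog view for the column-count check mentioned in explanation 1.

-- Simple check of the current column count before disabling flexibility
SELECT COUNT(*) FROM SYS.TABLE_COLUMNS WHERE TABLE_NAME = 'EMPLOYEE_DETAILS';

-- Change an option of the existing flexible table, e.g. the default data type for new dynamic columns
ALTER TABLE employee_Details ALTER SCHEMA FLEXIBILITY (DEFAULT DATA TYPE INTEGER);

-- Convert all dynamic columns to static columns (fails as a whole if any conversion fails)
ALTER TABLE employee_Details DISABLE SCHEMA FLEXIBILITY;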

 

 

 

Hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you

HANA System Rename (hostname) through hdblcmgui command

$
0
0

Prerequisites

  • You are logged in as root user.
  • The SAP HANA system has been installed with the SAP HANA database lifecycle manager (HDBLCM).
  • The SAP HANA database server is up and running. Otherwise, inconsistencies in the configuration might occur.

Go to the HDBLCM directory on the HANA host:

# cd /hana/shared/SEC/hdblcm

# ./hdblcmgui

Fig1.png

Choose 'Rename the SAP HANA System'.

Fig2.png

Enter the <sid>adm password of the HANA database and specify the new hostname to be used.

Fig3.png

Check the information on the screen and proceed to the next step.

Fig4.png

Change any of the listed properties if required; otherwise proceed to the next step.

Fig5.png

Click the Rename button.

Fig6.png

The HANA database hostname has now been changed to <<new hostname>>.
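As a side note, the same rename can also be started without the GUI, using the resident hdblcm command-line tool from the same directory. This is only a sketch; verify the action name against the SAP HANA documentation for your revision.

# cd /hana/shared/SEC/hdblcm
# ./hdblcm --action=rename_system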


“Backint for SAP HANA” Certification

$
0
0

Backint for SAP HANA is an API that enables 3rd party tool vendors to directly connect their backup agents to the SAP HANA database. Backups are transferred via pipe from the SAP HANA database to the 3rd party backup agent, which runs on the SAP HANA database server and then sends the backups to the 3rd party backup server.


3rd party backup tools that use the Backint API are integrated with SAP HANA's tools. You can execute the following actions directly from SAP HANA's tools:

  • Backup and recovery
  • Housekeeping (deleting old backups)
  • Configuration of tool-specific parameters


For more information on the integration of 3rd party backup tools with SAP HANA, please refer to the Administration Guide.
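For illustration, once a certified backup agent and its Backint parameter file are configured, a data backup through the Backint interface can typically be triggered with a SQL statement like the one below (a sketch; the backup prefix is arbitrary). The same action is also available from the backup dialogs in SAP HANA studio by choosing Backint as the destination type.

-- Trigger a data backup through the 3rd party agent via the Backint interface
BACKUP DATA USING BACKINT ('COMPLETE_DATA_BACKUP');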

Note: Installation documentation for the 3rd party backup tools is provided by the 3rd party tool vendors.


Which tools are certified? (Updated 2016-01-13)

 

Vendor | Backup Tool | On Intel Architecture | On POWER Architecture
Allen Systems | ASG-Time Navigator | Yes | No
Commvault | Simpana, Hitachi Data Protection Suite (via Simpana Backint interface) | Yes | No
EMC | Networker, EMC Interface for Data Domain Boost | Yes | No
HP | Data Protector, HP StoreOnce Plug-in for SAP HANA | Yes | No
IBM | Tivoli Storage Manager for Enterprise | Yes | No
IBM | Spectrum Protect for Enterprise Resource Planning | No | Yes
Libelle | BusinessShadow | Yes | No
SEP | Sesam | Yes | No
Veritas (Symantec) | NetBackup | Yes | No


You can find more details on the certified tools in the Application Development Partner Directory: Enter the search term “HANA-BRINT” and click on a partner name for further details.



As a customer, what do I need to know?

Backup tools that use Backint for SAP HANA can only be installed on SAP HANA if they have been certified by SAP. You can find further details on this and on the installation of 3rd party tools on SAP HANA in general in the following SAP Notes:

Note: Snapshots are not part of the Backint API specification, and currently no certification is required for 3rd party tools using HANA snapshots.


Do I have to use a 3rd party backup tool to back up SAP HANA?

No, SAP HANA comes with native functionality for backup and recovery. Using 3rd party backup tools is an alternative to using the native SAP HANA functionality.

For more information on SAP HANA backup and recovery, please refer to the Administration Guide.


As a tool vendor, what do I have to do to get my backup tool certified for SAP HANA?

A detailed description of the certification process is available on the following page: Backup/recovery API for the SAP HANA database (Backint for SAP HANA (HANA-BRINT 1.1))



I've done HANA training courses, How to practice?

$
0
0

The HANATEC certification has in its curriculum the "HA100 - SAP HANA Introduction" and "HA200 - SAP HANA Installation & Operations" training courses.

 

These courses are, in my opinion, well structured and contain enough exercises to get a solid understanding of the presented subjects. Perhaps the view concepts and view creation in HA100 are stressed more than a technical or BASIS consultant would expect.

 

After a couple of weeks I needed to go through all the course material again, and I was searching for a HANA system to support my study and to repeat some or all of the exercises.

 

The answer is in the SPS7 version of the HA100 course, though no longer in SPS10: creating a free HANA Cloud subscription for the "SAP HANA Developer Edition" is enough for evaluation and exploration covering at least the HA100 material.

 

 

Get access to a HANA system

 

To get a free account, go to http://developers.sap.com/hana and sign up there (only step 2 in the next picture) to get started with a free SAP HANA developer edition in the cloud.

 

hana_DEV_Center.png

 

Be aware that these web pages change and evolve continually, so they may look different from the screenshots.

 

After filling in all the sign-up information, we get a confirmation via e-mail:

 

welcome_hana_cloud.png

 

From the confirmation e-mail we get the URL to access the newly created HANA cloud account, where s000####### will be the S-user:


https://account.hanatrial.ondemand.com/cockpit#/acc/s000######trial/services

 

 

Get some tutorial

 

The data model available in the evaluation system is not the same as the one used in the HA100 training course. The following document, posted by Stoyan Manchev, is a very good alternative even if it does not go as deep into the exercises on creating views.

 

8 Easy Steps to Develop an XS application on the SAP HANA Cloud Platform

 

 

Following steps 1 to 4 (step 1 looks a bit different in the current version, see below in "Changes on 8 steps tutorial"), we prepare the environment, connect HANA Studio to the HANA cloud and create a view. There is no need to go through step 5 and the following steps, since they are not related to our certification.

 

To run step 2 we need HANA Studio. It can be downloaded and installed from https://tools.hana.ondemand.com/#hanatools; I've done it with the Luna edition.

 

Take your time. It will take a while to get everything fitted together in order to create the views.

 

 

Changes on 8 steps tutorial

 

The picture in the above tutorial should be replaced by this one (select New to create a new schema):

cloud_schema.jpg

When creating the new schema select the following:

 

schema_ID.png

 

Updating HANA Studio to connect to a cloud system

 

To connect to a cloud system using HANA Studio we need to install additional tools:

 

https://help.hana.ondemand.com/help/frameset.htm?b0e351ada628458cb8906f55bcac4755.html

pic1.png

 

 

pic2.png

 

As a result, we get a new option to add a cloud system:

 

pic3.png

 

 

Test your knowledge

 

After going through these steps we will have mastered the HA100 material. To test our knowledge before going to SAP to take the exam, we can do a small assessment by choosing "Discovery Preview: SAP HANA Introduction" on https://performancemanager.successfactors.eu

 

This is a free 12-hour e-learning course based on HA100, with a web assessment included.


Limitations

 

Unfortunately, this HANA Developer Edition, to which we have free access in the cloud, is of little use for covering most if not all of the HA200 subjects, because it is limited in the administration areas: we are not able to define users or roles, or even display any administration view.
