Channel: SCN : Document List - SAP HANA and In-Memory Computing

SAP HANA Authorisation Troubleshooting


Every now and again I receive questions regarding SAP HANA authorisation issues, so I thought it might be useful to create a troubleshooting walkthrough.

 

This document will deal with issues regarding analytic privileges in SAP HANA Studio.

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).
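As a quick illustration, object privileges are managed with standard GRANT/REVOKE statements (the schema, table and user names below are made-up examples):

```sql
-- Allow a user to read a single table (all names are examples)
GRANT SELECT ON "MYSCHEMA"."MYTABLE" TO REPORT_USER;

-- Allow a user to read everything in a schema
GRANT SELECT ON SCHEMA "MYSCHEMA" TO REPORT_USER;

-- Take the schema-wide privilege away again
REVOKE SELECT ON SCHEMA "MYSCHEMA" FROM REPORT_USER;
```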

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the user ID? And most importantly, how are you to find out which analytic privilege is missing?

 

So this is the perfect opportunity to run an authorisation trace via the SQL console in HANA Studio.

The instructions below walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue (execute the command again).

 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now that you have turned the trace on, reproduced the issue, and turned off the trace, you should see a new indexserver trace file created in the Diagnosis Files tab in HANA Studio.


 

So once you open the trace file, scroll to the end and you should see something similar to this:

e cePlanExec       cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext     TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine       cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor         PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor         PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor         PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor         PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor         PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor         PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor         PlanExecutor.cpp(00690) : sizes a 0
e Executor         PlanExecutor.cpp(00690) : -- end executor returns
e Executor         PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the analytic privileges required to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is located on object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

select * from objects where object_oid = '<oid>';

e.g. select * from objects where object_oid = '78787';

 

Once you have found out the owner for this object, you can get the owner to Grant the TABLEAU user the necessary privileges to query the object.
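For activated repository content, the grant is typically done through the _SYS_REPO grant procedures rather than a plain GRANT. A sketch using the user from the trace above (the analytic privilege name here is an assumption; use the one reported as missing in your trace):

```sql
-- Grant an activated analytic privilege to the user from the trace
-- (the privilege name is an example)
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"
  ('"onep.MasterData.qn/AP_QMT"', 'TABLEAU');

-- Grant SELECT on the activated view itself
CALL "_SYS_REPO"."GRANT_PRIVILEGE_ON_ACTIVATED_CONTENT"
  ('SELECT', '"_SYS_BIC"."onep.MasterData.qn/AT_QMT"', 'TABLEAU');
```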

 

 

Another option for analyzing privilege issues was introduced as of SP9: the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 – Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time

KBA #2126689 – Insufficient privilege. Not authorized

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael


[SAP HANA Academy] Live4 ERP Agility: SDI DP Server


Continuing with the Smart Data Integration part of the SAP HANA Academy’s Live4 ERP Agile Solutions in SAP HANA Cloud Platform course, Tahir Hussain Babar (Bob) shows how to turn on the data provisioning server needed to connect to a Hadoop system using SAP Smart Data Integration. Check out Bob’s tutorial video below.


(0:40 – 2:55) How to Start the DP Server

 

First, on the machine that contains Eclipse, open Eclipse and click the Administration Console button while your system user is selected. Go to the Configuration tab and expand the daemon.ini file. Then expand the dpserver section, click on instances, and type 1 in the new value text box for both the System and the Hosts columns before clicking Save.


Now, after hitting Refresh in the Landscape tab, a yellow triangle will be displayed to signify that the DP server is starting up.

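If you prefer the SQL console to the Configuration tab, the same change can be sketched as:

```sql
-- Start one dpserver instance (equivalent to entering 1 as the new value)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;
```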

(2:55 – 4:50) How to add the Required Roles for the Live4 User

 

You will also need to set certain authorizations in order for the Live4 user to carry out Smart Data Integration tasks.

 

First go to your system user and expand the Security folder. Then expand the user folder and select the Live4 user. This is the user that a developer will use to do any and all of the work in Eclipse and/or the WebIDE throughout the course.

 

In the Live4 user, navigate to the System Privileges tab and click the green plus sign to add a trio of privileges. Essentially we will need to utilize an agent, install a Hadoop adapter and create a virtual table, so select ADAPTER ADMIN, AGENT ADMIN and CREATE REMOTE SOURCE and then click OK.


Then after clicking the green execute button those three new system privileges will be granted to the Live4 user.
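The same trio of grants can also be issued from the SQL console, assuming the user is literally named LIVE4:

```sql
-- System privileges needed for SDI agent/adapter work and remote sources
GRANT AGENT ADMIN TO LIVE4;
GRANT ADAPTER ADMIN TO LIVE4;
GRANT CREATE REMOTE SOURCE TO LIVE4;
```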


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

SAP HANA Smart Data Access Setup


PURPOSE

The purpose of this document is to define clear steps for connecting a BW on HANA DB from the HANALIVE DB, in order to use BW models in HANALIVE reports. In this scenario we have used two separate HANA DBs (one for BW and another for HANALIVE for ERP 1.0).


SCOPE

This scope applies to the Basis team who will support the Smart Data Access (SDA) configuration after go-live. This procedure covers the prerequisites, installation and post-installation configuration of the complete SDA setup between the BW on HANA DB and the HANALIVE DB.


COMPONENT DETAILS

SAP BW system running on SAP NW 7.4 SP8 with HANA DB SPS8 Revision 82

HANALIVE DB is used as a sidecar scenario with version SPS8 revision 82


WHAT ARE THE PREREQUISITES FOR SAP HANA SMART DATA ACCESS?

Software Versions

You have installed SAP HANA SP7 or higher and the remote data sources are available.

ODBC Drivers

You have installed the ODBC drivers for the databases you want to connect (see SAP Note 1868702) on each HANA node. If you installed the ODBC drivers in your HANA exe directory as per Note 1868702, these ODBC drivers will be removed during a revision update and have to be installed again after the update.


BUSINESS CASE

SAP HANA smart data access makes it possible to connect remote data sources and to present the data contained in these data sources as if from local SAP HANA tables. This can be used, for example, in SAP Business Warehouse installations running on SAP HANA to integrate data from remote data sources.

 

In SAP HANA, virtual tables are created to represent the tables in the remote data source. Using these virtual tables, joins can be executed between tables in SAP HANA and tables in the remote data source. All access to the remote data source is read-only.

 

In this scenario we are doing the Smart Data Access setup on an Enterprise HANA system to connect to a BW on HANA system remotely.


Please check the attached document for detailed procedure of SDA setup for HANA.

 

SAP BW ON HANA & HANA SMART DATA ACCESS – SETUP

 

1. Create user with required privileges in BW on HANA DB

Log in to the remote source SAP HANA system (BWoH) using the SYSTEM user and create the user SDA with the following privileges.

 

System Privilege: Catalog Read

Object Privileges:

    • SELECT on Schema SAPABAP1
    • SELECT on _SYS_BIC & _SYS_BI


Note: Schema SAPABAP1 contains all the required base tables on which the HANA Modelling team wants to build their reports.
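The same user and grants can be created from the SQL console; a sketch (the password is a placeholder):

```sql
-- Create the SDA user on the BW on HANA system (password is a placeholder)
CREATE USER SDA PASSWORD Initial1234;

-- System privilege
GRANT CATALOG READ TO SDA;

-- Object privileges
GRANT SELECT ON SCHEMA SAPABAP1 TO SDA;
GRANT SELECT ON SCHEMA _SYS_BIC TO SDA;
GRANT SELECT ON SCHEMA _SYS_BI TO SDA;
```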


2. Logon to HANALIVE DB as SYSTEM user and configure the Smart Data Access.

 

SAP HANA system authorization CREATE REMOTE SOURCE is required to create a remote source. SYSTEM user already has this authorization.

 

In the Systems view, open -> Provisioning -> Remote Sources.

 

Right click Remote Sources and select New Remote Source.


Enter the following information: Source Name. Select the Adapter Name from the drop-down list, in this case HANA (ODBC), as we are connecting to a remote SAP HANA database. Enter the values for Server, Port, User Name and the Password of the user SDA which we created on the SAP BW on HANA system.

 

Click the Save the Editor icon in the upper right-hand corner of the screen.

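Equivalently, the remote source can be created in SQL; a sketch where the source name, host and port are examples:

```sql
-- Remote source pointing at the BW on HANA system (names/host are examples)
CREATE REMOTE SOURCE "BWOH"
  ADAPTER "hanaodbc"
  CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=bwhost:30015'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=SDA;password=Initial1234';
```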

3. Connection Verification

After the SDA connection is successfully created, verify that you can connect to the remote source AP6 system and that you can see the tables under schema SAPABAP1.


4. User authorization to access


Authorization to access data in the remote data source is determined by the privileges of the database user as standard.

 

Grant the following privileges to the role assigned to the Modelling users, so that they can create virtual tables and then write SQL queries that operate on those virtual tables. The SAP HANA query processor optimizes these queries, executes the relevant part of the query in the target database, returns the results to SAP HANA, and completes the operation.

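A sketch of the grant and of a virtual table on top of the remote source (role, schema, table and source names are examples):

```sql
-- Let the modelling role create virtual tables on the remote source
GRANT CREATE VIRTUAL TABLE ON REMOTE SOURCE "BWOH" TO MODELING_ROLE;

-- Create a virtual table for a remote table, then query it like a local one
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_MARA"
  AT "BWOH"."<NULL>"."SAPABAP1"."MARA";

SELECT COUNT(*) FROM "MYSCHEMA"."VT_MARA";
```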

Challenges of Moving to HANA on Cloud


Challenges of HANA on Cloud

PURPOSE

The purpose of this document is to capture lessons learned and pain points/challenges when a customer wants to build a new SAP landscape on HANA on cloud, or to move an existing SAP application to HANA on cloud. In this document, the challenges are distributed across three different HANA on cloud scenarios:

  • Migration of an existing SAP application to HANA on cloud
  • Green field implementation
  • AMS support for HANA on cloud

SCOPE

This scope applies to the Basis team who will be involved in the above three deployment scenarios for HANA on cloud. This information can be taken as a starting point for any of these scenarios.


HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services; implementers and administrators to set up a project/customer system; and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented by countless operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA Studio that allow easy modeling of the required resources for SAP HANA Rules Framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)


  • Simple and intuitive UI control that supports text rules and decision tables


  • Simple and intuitive web application that enables business users to manage their own rules


Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning


SAP FraudManagement


SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)


SAP Operational Process Intelligence


HANA Data Warehousing Foundation 1.0 - Overview


This presentation shows how SAP HANA Data Warehousing Foundation 1.0 provides specific data management tools to support large-scale SAP HANA use cases. It complements the data warehouse offerings of SAP BW powered by SAP HANA and the native SAP HANA EDW.

View this Presentation

SAP Hana EIM (SDI/SDQ) setup


In this document I’ll explain how to set up and configure SAP HANA SP10 EIM (SDI/SDQ) with a Sybase IQ database and an ERP on HANA database schema as source systems for real-time data replication.

 

I will show the detailed steps and configuration points to achieve it.

 

Order of execution

  • Create Sybase IQ database
  • Enable DP server for SDI
  • Enable Script server for SDQ
  • Install SDQ cleanse and geocode directory
  • Install DU HANA_IM_DP (Data provisioning)
  • Install and Register Data Provisioning Agent
  • Create remote source
  • Data replication and monitoring

 

Configuration required on SP9

The xsengine needs to be set to true (if not already done)

The statistics server needs to be set to true (if not already done)

The DU HANA_IM_ESS needs to be imported

 

 

Guide used

 

SAP Hana EIM Administration Guide SP10
SAP Hana EIM Configuration guide SP10

 

Note used

 

179583 - SAP HANA Enterprise Information Management SPS 10 Central Release Note

 

Link used

 

http://help.sap.com/hana_platform

Overview Architecture


 

Starting with HANA SP9, the new features called SDI (Smart Data Integration) and SDQ (Smart Data Quality) have been introduced.

 

The purpose of these new features is to provide an integrated ETL mechanism directly in HANA, on top of SDA.

 

To make it simple:

  • Smart Data Integration provides data replication and transformation services
  • Smart Data Quality provides advanced transformations to support data quality functionality

 

 

Create Sybase IQ database

 

In order to have a dedicated database to work with, I’ll create my own database on the IQ server:

 

From the SCC go to Administration and proceed as follows:


SCC agent password: the password defined during the IQ server installation
Utility server password: auto-filled, do not change it
IQ server port: use an unused port; I already have 2 DBs running so I picked the next number
Database path: <path where the db is stored><dbname>.db
IQ main dbspace path: <path where the dbspace is stored><dbname>.iq

 

Check mark ok


 


Execute


My database is now available; I’ll create 3 simple tables for this test using Interactive SQL.

 

With the following syntax

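The original CREATE TABLE statements were only shown as a screenshot; a minimal sketch of what three simple test tables could look like (all names and columns are made up):

```sql
-- Three simple test tables for the replication test (illustrative only)
CREATE TABLE customers (
    id   INT PRIMARY KEY,
    name VARCHAR(100),
    city VARCHAR(50)
);

CREATE TABLE products (
    id    INT PRIMARY KEY,
    label VARCHAR(100),
    price DECIMAL(10,2)
);

CREATE TABLE orders (
    id          INT PRIMARY KEY,
    customer_id INT,
    product_id  INT,
    qty         INT
);
```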

 

  

Enable Data Provisioning server for SDI

 

When HANA is installed, the DP server is not active by default; in order to be able to use SDI it needs to be enabled. The value needs to be changed to 1.

 


Enable Script server for SDQ

 

To take advantage of the SDQ functionality, the script server value needs to be changed to 1.
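Both servers can also be enabled from the SQL console by setting their instance count in daemon.ini; a sketch of the statements:

```sql
-- Enable the Data Provisioning server (SDI)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;

-- Enable the script server (SDQ)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('scriptserver', 'instances') = '1' WITH RECONFIGURE;
```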

 

 

 

Install SDQ cleanse and geocode directory

 

The Cleanse and Geocode nodes rely on reference data found in directories that we download and deploy to the SAP HANA server.

 

To download those directories, go to the SMP and select the ones you need.
You can download several directories depending on what you are licensed for.

 

Once downloaded, decompress it at the following location:
/usr/sap/<SID>/SYS/global/hdb/IM/reference_data

 


Install delivery unit HANA_IM_DP (Data Provisioning)

 

The specific delivery unit needs to be downloaded and then uploaded from the studio or the web interface; this will provide you:

  • The monitoring functionality
  • The proxy application to provide a way to communicate with the DPA (cloud scenario)
  • The admin application for DPA configuration (cloud scenario)


 

Upload from the studio


 


Once done, assign the monitoring role and add the view from the cockpit.


 

 

Install and register Data Provisioning Agent

 

The Data Provisioning Agent is used as the bridge between HANA and source systems whose drivers can’t run from HANA (the DP server), via pre-built adapters; in some cases it also allows HANA to write data back into the source system.
Using the DPA allows live replication.

 

The agent is part of the package downloaded earlier.



Run and install it as needed.


Once installed, open the agent cockpit.


 


Make sure the agent is started, then connect and register it to HANA with the necessary adapters.


Let’s create the source system in HANA now.

 

 



Create remote source

 

Now that my IQ DB is in place and my HANA adapter is installed, I will create in SDA the source systems I need to get the data from.

Let’s start with my IQ database. Before creating the connection in SDA, install and set up the lib on the HANA server. To create my connection I will use the following statement:

 

create remote source I841035 adapter iqodbc configuration 'Driver=libdbodbc16_r.so;ServerName=HANAIQ03;CommLinks=tcpip(host=usphlvm1789:1113)' with CREDENTIAL TYPE 'PASSWORD' USING 'user=I841035;password=xxxxxxx';

 

Once done, refresh the provisioning folder.


And create the ERP on HANA schema source system by selecting the adapter registered earlier.


 

  

And check the remote subscription from the cockpit.


 

 


Data replication and monitoring

 

My remote sources connected, I will now define which tables I want to replicate and how they should look once loaded.

Make sure _SYS_REPO has the “CREATE ANY” privilege granted on your target schema.
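A sketch of that grant (the target schema name is an example):

```sql
-- Allow the repository to create the replication target objects
GRANT CREATE ANY ON SCHEMA "MYSCHEMA" TO _SYS_REPO;
```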

 


From the development workbench, go to “Editor”, select your package and create a new replication task.


 

And fill in the necessary information: target schema, virtual table schema, table prefix and so on.

From the detail perspective, several options are possible:

 

Add/remove/edit table


 

Set filter


Define the load behavior in order to have a certain level of detail on the changes that occur on the table.


  

Partition data for better performance


Once your preferences are set, save the configuration to activate it.


 

From the monitoring side check the task log



Once activated, go to the catalog view and check that the procedure was created, as well as the virtual tables/views and target tables, then invoke the procedure to start the replication.

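Invoking the generated procedure looks roughly like this; all names here are made up, and the exact generated procedure name may differ on your system:

```sql
-- Start the real-time replication defined by the activated reptask
-- (schema, package and task names are illustrative)
CALL "MYSCHEMA"."mypackage::MyRepTask.START_REPLICATION";
```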

 

I repeated the same procedure for my ERP on HANA schema. Once the procedure is invoked on the remote HANA DB, we can see additional tables created, and triggers for the relevant replicated tables.


 

From a monitoring side, I added 4 additional users and we can see the apply count.



The replication is now operational. In my next document I’ll explain how to configure several data sources and construct one real-time report with the input of different tables.

 

Williams.

SAP HANA Data Warehousing Foundation



SAP HANA Data Warehousing Foundation 1.0

 

This first release provides packaged tools for large-scale SAP HANA use cases to support data management and distribution within a SAP HANA landscape more efficiently. Further versions will focus on additional tools to support native SAP HANA data warehouse use cases, in particular data lifecycle management.

 

 

 

On this landing page you will find a summary of information to get started with SAP HANA Data Warehousing Foundation 1.0

Presentations

 

 

Demos

 

SAP HANA Data Warehousing Foundation Playlist.
In this SAP HANA Data Warehousing playlist you will find demos showing the Data Distribution Optimizer (DDO) as well as the Data Lifecycle Manager (DLM).

 

SAP HANA Academy

 

Find out more about data temperature integration with HANA, including Data Lifecycle Management, on the DWF YouTube channel of the SAP HANA Academy.


SAP HANA Authorisation Troubleshooting

$
0
0

Every now and again I receive issues regarding SAP authorisation issues. I thought it might be useful to create a troubleshooting walk through.

 

This document will deal with issues regarding analytical privilege in SAP HANA Studio

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of  some the different authorisation issues you can see in HANA Studio, and each one is pointing towards a missing analytical privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in HANA Studio Schema and its called "_SYS_BIC:Overview/SAP_OVERVIEW"

We have a user, lets just say its the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA, and you get a message from a team member informing you that they getting a authorisation issue in HANA Studio. What are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

So this is the perfect opportunity to run an authorisation trace through the means of the SQL console on HANA Studio.

So if you follow the below instructions it will walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue/execute the command again/

 

3)When the execution finishes please turn off the trace as follows in the Hana studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now that you have turned the trace on, reproduced the issue and turned off the trace, you should now see a new indexserver0000000trc file created in the Diagnosis Files Tab in HANA Studio

Capture.PNG

 

So once you open the trace files, scroll to the end of the file and you should see something familiar to this:

e cePlanExec       cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext     TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine       cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor         PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor         PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor         PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor         PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor         PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor         PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor         PlanExecutor.cpp(00690) : sizes a 0
e Executor         PlanExecutor.cpp(00690) : -- end executor returns
e Executor         PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456)

 

So by looking at the lines:

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is object 78787.

 

So now we have to find out who owns object 78787. We can find this information by running the following query:

 

select * from objects where object_oid = '<oid>';

select * from objects where object_oid = '78787';
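Alongside the object lookup, you can also check which privileges the user already holds. A minimal sketch, assuming the standard EFFECTIVE_PRIVILEGES system view is available on your revision:

```sql
-- List the privileges currently effective for the user (user name is illustrative)
SELECT PRIVILEGE, OBJECT_TYPE, OBJECT_NAME, GRANTOR
FROM "PUBLIC"."EFFECTIVE_PRIVILEGES"
WHERE USER_NAME = 'TABLEAU';
```

If the analytic privilege from the trace does not appear in the result, that confirms the missing grant.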

 

Once you have found the owner of this object, you can ask the owner to grant the TABLEAU user the necessary privileges to query it.
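For example, if the view is secured by an XML-based analytic privilege, the owner could grant it with a statement along these lines (the privilege name AP_QMT is purely illustrative; use the name reported in your trace):

```sql
-- Grant the analytic privilege protecting the view to the end user
GRANT STRUCTURED PRIVILEGE "AP_QMT" TO TABLEAU;
```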

 

 

Another option for analyzing privilege issues was introduced as of SPS 9: the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

[SAP HANA Academy] Live4 ERP Agility: SDI DP Server


Continuing with the Smart Data Integration part of the SAP HANA Academy’s Live4 ERP Agile Solutions in SAP HANA Cloud Platform course, Tahir Hussain Babar (Bob) shows how to turn on the data provisioning server needed to connect to a Hadoop system using SAP Smart Data Integration. Check out Bob’s tutorial video below.

Screen Shot 2015-10-13 at 3.28.36 PM.png

(0:40 – 2:55) How to Start the DP Server

 

First, on the machine that contains Eclipse, open Eclipse and click on the Administration Console button while your system user is selected. Go to the Configuration tab and expand the daemon.ini file. Then expand the dpserver section, click on instances, and type 1 in the new value text box for both the System and the Host before clicking save.

Screen Shot 2015-10-13 at 3.46.29 PM.png

Now after hitting refresh in the Landscape tab it will display a yellow triangle to signify that the DP Server is starting up.

Screen Shot 2015-10-13 at 3.47.32 PM.png
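If you prefer the SQL console over the Eclipse configuration editor, the same setting can be made with a statement like the following (a sketch based on the daemon.ini parameter changed in the video):

```sql
-- Start one dpserver instance on the system (daemon.ini, SYSTEM layer)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;
```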

(2:55 – 4:50) How to add the Required Roles for the Live4 User

 

You will also need to set certain authorizations in order for the Live4 user to carry out Smart Data Integration tasks.

 

First go to your system user and expand the Security folder. Then expand the user folder and select the Live4 user. This is the user that a developer will use to do any and all of the work in Eclipse and/or the WebIDE throughout the course.

 

In the Live4 user navigate to the System Privileges tab and click on the green plus sign to add a trio of privileges. Essentially we will need to utilize an agent, install a Hadoop adapter and create a virtual table. So select ADAPTER ADMIN, AGENT ADMIN and CREATE REMOTE SOURCE and then click ok.

Screen Shot 2015-10-13 at 4.01.50 PM.png

Then after clicking the green execute button those three new system privileges will be granted to the Live4 user.
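The same grants can also be scripted from the SQL console; a sketch, assuming the user is named LIVE4:

```sql
-- System privileges needed for SDI: manage adapters, manage agents, create remote sources
GRANT ADAPTER ADMIN TO LIVE4;
GRANT AGENT ADMIN TO LIVE4;
GRANT CREATE REMOTE SOURCE TO LIVE4;
```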


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

SAP HANA Smart Data Access Setup


PURPOSE

The purpose of this document is to define clear steps for connecting to the BW on HANA DB from the HANALIVE DB, in order to use BW models in HANALIVE reports. In this scenario we have used two separate HANA DBs (one for BW and another for HANALIVE for ERP 1.0).


SCOPE

This scope applies to the Basis team who will support the Smart Data Access (SDA) configuration after go-live. This procedure covers the prerequisites, installation, and post-installation configuration of the complete SDA setup between the BW on HANA DB and the HANALIVE DB.


COMPONENT DETAILS

SAP BW system running on SAP NW 7.4 SP8 with HANA DB SPS8 Revision 82

HANALIVE DB is used in a sidecar scenario, with version SPS 8 revision 82


WHAT ARE THE PREREQUISITES FOR SAP HANA SMART DATA ACCESS?

Software Versions

You have installed SAP HANA SP7 or higher and the remote data sources are available.

ODBC Drivers

You have installed the ODBC drivers for the databases you want to connect to (see SAP Note 1868702) on each HANA node. If you installed the ODBC drivers in your HANA exe directory as per Note 1868702, these drivers will be removed during a revision update and have to be installed again after the update.


BUSINESS CASE

SAP HANA smart data access makes it possible to connect remote data sources and to present the data contained in these data sources as if from local SAP HANA tables. This can be used, for example, in SAP Business Warehouse installations running on SAP HANA to integrate data from remote data sources.

 

In SAP HANA, virtual tables are created to represent the tables in the remote data source. Using these virtual tables, joins can be executed between tables in SAP HANA and tables in the remote data source. All access to the remote data source is read-only.

 

In this scenario we are doing the Smart Data Access setup on Enterprise HANA system to connect to BW on HANA system remotely


Please check the attached document for detailed procedure of SDA setup for HANA.

 

SAP BW ON HANA & HANA SMART DATA ACCESS – SETUP

 

1. Create user with required privileges in BW on HANA DB

Login to the remote source SAP HANA system (BWoH) using the SYSTEM user and create the user SDA with the following privileges.

 

System Privilege: Catalog Read

Object Privileges:

    • SELECT on Schema SAPABAP1
    • SELECT on _SYS_BIC & _SYS_BI
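The same user can be created from the SQL console; a sketch (the password is a placeholder to be replaced):

```sql
-- Create the SDA user and grant it read access to the relevant schemas
CREATE USER SDA PASSWORD <initial_password>;
GRANT CATALOG READ TO SDA;
GRANT SELECT ON SCHEMA SAPABAP1 TO SDA;
GRANT SELECT ON SCHEMA _SYS_BIC TO SDA;
GRANT SELECT ON SCHEMA _SYS_BI TO SDA;
```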

Fig1.png

Fig2.png

Fig3.png

Fig4.png

Note: Schema SAPABAP1 contains all the required base tables on which the HANA Modelling team wants to build their reports.


2. Logon to HANALIVE DB as SYSTEM user and configure the Smart Data Access.

 

The SAP HANA system authorization CREATE REMOTE SOURCE is required to create a remote source. The SYSTEM user already has this authorization.

 

In the Systems view, open Provisioning -> Remote Sources.

 

Right click Remote Sources and select New Remote Source.

Fig5.png

Enter the following information: Source Name. Select the Adapter Name from the drop-down list, in this case HANA (ODBC), as we are connecting to a remote SAP HANA database. Enter the values for Server, Port, User Name and the Password of the user SDA which we created on the SAP BW on HANA system.

 

Click the Save the Editor icon in the upper right-hand corner of the screen.

Fig6.png
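The same remote source can also be created via SQL instead of the wizard; a sketch with placeholder host, port and password (the source name BWOH is illustrative):

```sql
-- Remote source using the HANA (ODBC) adapter against the BW on HANA system
CREATE REMOTE SOURCE "BWOH" ADAPTER "hanaodbc"
CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=<bw_host>:<port>'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=SDA;password=<password>';
```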

3. Connection Verification

After the SDA connection is successfully created, verify that you can connect to the remote source AP6 system and that you can see the tables under the schema SAPABAP1

Fig7.png

4. User authorization to access


Authorization to access data in the remote data source is determined by the privileges of the database user as standard.

 

Grant the following privileges to the role assigned to the modelling users, so that they can create virtual tables and then write SQL queries that operate on them. The SAP HANA query processor optimizes these queries, executes the relevant part of the query in the target database, returns the results to SAP HANA, and completes the operation.

Fig8.png
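Once the privileges are in place, a modelling user can expose a remote table and query it; a sketch with illustrative schema, source and table names:

```sql
-- Create a virtual table pointing at a remote table in schema SAPABAP1
CREATE VIRTUAL TABLE "MODELER"."VT_MARA"
  AT "BWOH"."<NULL>"."SAPABAP1"."MARA";

-- Query it like a local table; the relevant part runs on the remote DB
SELECT COUNT(*) FROM "MODELER"."VT_MARA";
```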

Challenges of Moving to HANA on Cloud


Challenges of HANA on Cloud

PURPOSE

The purpose of this document is to capture lessons learned and pain points/challenges when a customer wants to build a new SAP landscape on HANA on cloud, or move an existing SAP application to HANA on cloud. In this document, the challenges are grouped into three different HANA on cloud scenarios:

Migrating an existing SAP application to HANA on cloud

Green field implementation

AMS support for HANA on cloud

SCOPE

This scope applies to the Basis team who will be involved in the above three deployment scenarios for HANA on cloud. This information can be taken as a starting point for any of these scenarios.

Picture1.png

HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission critical tasks are implemented by a countless number of operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become a critical need to keep up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development |Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA rules framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png    

Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP FraudManagement

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

HANA Data Warehousing Foundation 1.0 - Overview


This presentation shows how SAP HANA Data Warehousing Foundation 1.0 provides specific data management tools to support large-scale SAP HANA use cases. It complements the data warehouse offerings of SAP BW powered by SAP HANA and the native SAP HANA EDW.

View this Presentation

SAP Hana EIM (SDI/SDQ) setup


In this document I’ll explain how to set up and configure SAP HANA SPS 10 EIM (SDI/SDQ), with a Sybase IQ database and an ERP on HANA database schema as source systems, to replicate data in real time.

 

I will show the detailed steps and configuration points to achieve it.

 

Order of execution

  • Create Sybase IQ database
  • Enable DP server for SDI
  • Enable Script server for SDQ
  • Install SDQ cleanse and geocode directory
  • Install DU HANA_IM_DP (Data provisioning)
  • Install and Register Data Provisioning Agent
  • Create remote source
  • Data replication and monitoring

 

Configuration required on SPS 9

The xsengine needs to be set to true (if not done)

The statistics server needs to be set to true (if not done)

The DU HANA_IM_ESS needs to be imported

 

 

Guide used

 

SAP Hana EIM Administration Guide SP10
SAP Hana EIM Configuration guide SP10

 

Note used

 

179583 - SAP HANA Enterprise Information Management SPS 10 Central Release Note

 

Link used

 

http://help.sap.com/hana_platform

Overview Architecture

7-15-2015 6-09-06 PM.jpg

 

Starting with HANA SPS 9, the new features called SDI (Smart Data Integration) and SDQ (Smart Data Quality) have been introduced.

 

The purpose of these new features is to provide an integrated ETL mechanism directly in HANA, built on top of SDA

 

To make it simple:

  • Smart Data Integration provides data replication and transformation services
  • Smart Data Quality provides advanced transformations to support data quality functionality

 

 

Create Sybase IQ database

 

In order to have a dedicated database to work with, I’ll create my own database on the IQ server:

 

From the SCC go to Administration and proceed as follows
7-10-2015 3-55-36 PM.jpg

7-10-2015 4-00-54 PM.jpg

7-10-2015 4-01-28 PM.jpg


SCC agent password: the password defined during the IQ server installation
Utility server password: auto-filled, do not change it
IQ server port: use an unused port; I already have 2 DBs running so I pick the next number
Database path: <path where the db is stored><dbname>.db
IQ main dbspace path: <path where the dbspace is stored><dbname>.iq
7-10-2015 4-06-10 PM.jpg

 

Check mark ok

7-10-2015 4-14-09 PM.jpg

 


Execute
7-10-2015 4-18-37 PM.jpg

7-10-2015 4-19-51 PM.jpg

7-10-2015 4-20-47 PM.jpg


With my database now available, I’ll create 3 simple tables for this test; I’ll use Interactive SQL
7-10-2015 4-49-11 PM.jpg

 

With the following syntax

7-10-2015 4-53-48 PM.jpg

 

  

Enable Data Provisioning server for SDI

 

When HANA is installed, the DP server is not active by default; in order to be able to use SDI it needs to be enabled. The value needs to be changed to 1
7-10-2015 5-22-47 PM.jpg

 


Enable Script server for SDQ

 

To take advantage of the SDQ functionality, the script server value needs to be changed to 1
7-10-2015 5-47-54 PM.jpg
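Both servers can also be enabled from the SQL console; a sketch of the daemon.ini changes behind the screenshots above:

```sql
-- Enable the Data Provisioning server (SDI)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;

-- Enable the script server (SDQ)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
SET ('scriptserver', 'instances') = '1' WITH RECONFIGURE;
```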

 

 

 

Install SDQ cleanse and geocode directory

 

The Cleanse and Geocode nodes rely on reference data found in directories that we download and deploy to the SAP HANA server.

 

To download those directories, go to the SMP and select the ones you need.
You can download several directories depending on what you are licensed for.
7-10-2015 8-50-57 PM.jpg

7-10-2015 8-55-08 PM.jpg

 

Once downloaded, decompress it at the following location:
/usr/sap/<SID>/SYS/global/hdb/IM/reference_data
7-11-2015 8-09-10 PM.jpg

 


Install delivery unit HANA_IM_DP (Data Provisioning)

 

This specific delivery unit needs to be downloaded and then uploaded from the studio or the web interface; it provides:

  • The monitoring functionality
  • The proxy application to provide a way to communicate with the DPA (cloud scenario)
  • The admin application for DPA configuration (cloud scenario)

7-11-2015 8-31-14 PM.jpg

7-11-2015 8-31-32 PM.jpg

 

Upload it from the studio

7-11-2015 8-32-45 PM.jpg

 


Once done, assign the monitoring role and add the view from the cockpit

7-11-2015 8-36-00 PM.jpg

7-11-2015 8-48-03 PM.jpg

 

 

Install and register Data Provisioning Agent

 

The Data Provisioning Agent is used to bridge HANA and source systems where the driver can’t be run from HANA (DP server), via a pre-built adapter; in some cases it allows HANA to write data back into the source system.
Using the DPA allows live replication.

 

The agent is part of the package downloaded earlier

7-11-2015 9-04-37 PM.jpg


Run and install it as needed

7-8-2015 9-40-08 PM.jpg

Once installed, open the agent cockpit

7-11-2015 9-12-27 PM.jpg

 


Make sure the agent is started, then connect and register it to HANA with the necessary adapter

7-12-2015 4-46-46 PM.jpg

Let’s create the source system in HANA now.

 

 



Create remote source

 

Now that my IQ DB is in place and my HANA adapter is installed, I will create my source system in SDA, pointing to where I need to get the data from.

Let’s start with my IQ database. Before creating the connection in SDA, install and set up the lib on the HANA server. To create my connection I will use the following statement:

 

create remote source I841035 adapter iqodbc configuration 'Driver=libdbodbc16_r.so;ServerName=HANAIQ03;CommLinks=tcpip(host=usphlvm1789:1113)' with credential type 'PASSWORD' using 'user=I841035;password=xxxxxxx';

 

Once done, refresh the provisioning

7-11-2015 10-42-09 PM.jpg

And create the ERP on HANA schema source system by selecting the adapter added earlier

7-12-2015 4-42-33 PM.jpg

7-12-2015 4-59-13 PM.jpg

 

  

And check the remote subscription from the cockpit

7-12-2015 5-47-27 PM.jpg

 

 


Data replication and monitoring

 

With my remote source connected, I will now define which tables I want to replicate and how they should look once loaded.

Make sure your user schema is part of _SYS_REPO with “CREATE ANY” granted.

 


From the development workbench, go to “Editor”, select your package and create a new replication task

7-12-2015 6-16-35 PM.jpg

7-12-2015 6-18-23 PM.jpg

 

And fill in the necessary information: target schema, virtual table schema, table prefix and so on.

From the detail perspective, several options are possible

 

Add/remove/edit table

7-12-2015 7-08-12 PM.jpg

 

Set filter


Define the load behavior in order to have a certain level of detail on the changes that occur on the table.

7-12-2015 6-56-01 PM.jpg

  

Partition data for better performance

7-12-2015 7-09-15 PM.jpg

Once your preferences are set, save the configuration to activate it.

7-12-2015 7-14-02 PM.jpg

 

From the monitoring side check the task log

7-12-2015 7-23-36 PM.jpg


Once activated, go to the catalog view and check that the procedure has been created, as well as the virtual tables/views and tables, then invoke the procedure to start the replication

7-12-2015 7-27-51 PM.jpg

 

I repeated the same procedure for my ERP on HANA schema; once the procedure is invoked on the remote HANA DB we can see the additional tables created, and triggers for the relevant replicated tables

7-15-2015 1-38-18 PM.jpg

 

On the monitoring side, I added 4 additional users and we can see the apply count

7-15-2015 2-42-17 PM.jpg


The replication is now operational. In my next document I’ll explain how to configure several data sources and construct one real-time report from the input of different tables.

 

Williams.


SAP HANA Data Warehousing Foundation


SDNPIC.jpg

SAP HANA Data Warehousing Foundation 1.0

 

This first release provides packaged tools for large-scale SAP HANA use cases, to support data management and distribution within an SAP HANA landscape more efficiently. Further versions will focus on additional tools to support native SAP HANA data warehouse use cases, in particular data lifecycle management.

 

 

 

On this landing page you will find a summary of information to get started
with SAP HANA Data Warehousing Foundation 1.0

Presentations

 

 

Demos

 

SAP HANA Data Warehousing Foundation  Playlist
In this SAP HANA Data Warehousing playlist you will find demos showing the Data Distribution Optimizer (DDO) as well as the Data Lifecycle Manager (DLM)

 

SAP HANA Academy

 

Find out more about the data temperature integration with HANA, including data lifecycle management, on the DWF YouTube channel of the SAP HANA Academy

How to Plan for Disaster Recovery with SAP HANA


Abstract

As devices, systems, and networks become more complex, there are simply more things that can go wrong due either to man-made or natural disasters. See the options available with SAP HANA for disaster recovery that help organizations keep business running smoothly in such circumstances.


Key Concept

SAP HANA provides a single platform to extract and analyze massive amounts of structured and unstructured data in real time from multiple sources such as social media, blogs, online reviews, emails, and discussion forums. The analyzed information helps customers to answer specific questions, increase revenue, and make accurate and timely decisions. The analyzed information and the data accumulated over the years are the backbone of an organization. If something happens to the data due to natural or man-made disasters, business can come to a halt. Making use of the SAP HANA disaster recovery features can save companies from such an outcome.


Learning Objectives

By reading the article you will be able to:

  • Understand the concept of disaster recovery
  • Understand the disaster recovery options with SAP HANA


Nowadays almost every company has a business continuity plan that helps during an unfortunate event such as a flood or earthquake. A disaster recovery plan is part of the business continuity plan, focusing mainly on the restoration of IT infrastructure and operations after a crisis.

 

There are two important terms when it comes to a disaster recovery plan: recovery time objective (RTO) and recovery point objective (RPO), as shown in Figure 1.


Disaster recovery concepts.jpg

Figure 1 Disaster recovery concepts


RTO is the target time in the future to get your application back online and running after a disaster has struck. The goal here is to calculate how much time is required to recover. The cost of the disaster recovery option varies with this time: the lower the time to recover, the higher the cost. For example, if your organization’s RTO is three weeks then you may be able to invest in a less expensive recovery option, whereas if it is four hours then you need a higher budget and a high level of preparation.

 

RPO is the target time in the past from which the system will be restored. The goal here is to determine the time between data backups and the amount of data that could be lost in between backups during a disaster event. The data backup time depends on how long your organization can afford to operate without data before the disaster happens. For example, if your organization can survive with two days of lost data, then data backup RPO will be two days.

 

The cost analysis according to the RTO and RPO, along with the SAP HANA disaster recovery features, is shown in Figure 2. The lower the RTO and RPO objectives, the higher the cost.


RPO & RTO of Disaster Recovery Features.jpg

Figure2 – RPO & RTO of Disaster Recovery Features


SAP HANA offers three features for disaster recovery:

  • Backups and recovery
  • Storage replication
  • System replication

 

Backups and Recovery

Data backup in SAP HANA is written to disk from memory. It can only be performed when the database is online. SAP HANA supports the following backup methods (Figure 3).

  • Data backup (savepoint). The data backup process is asynchronous. The SQL data and the undo log are saved to storage to ensure a speedy restart. You can customize the savepoint interval; the default is five minutes.
  • Log backup (redo log). It is used to record changes to data and is performed synchronously. The data is saved to persistent storage as a database transaction is committed. The reason for saving the logs is that when a power failure or any other disaster happens, the log can be replayed to bring the database back to the most recent consistent state.

Data and log backup.jpg

Figure3 – Data and log backup


With SAP HANA SPS 10 there are two new data backup options (Figure 4):

  • Incremental backup. This is the smallest delta data backup, as only the data changed since the last full or delta backup is backed up, at frequent intervals. Because the backups are frequent, each backup is small and fast. Incremental backups have a higher RTO, as they need to be restored one after another in sequence.
  • Differential backup. This is the delta data backup taken since the last full data backup. It is larger than an incremental backup and takes more time, but it has a lower RTO compared to incremental backups, as fewer backups have to be restored.

Incremental and Differential Backup.jpg

Figure4: Incremental and Differential Backup


The data backup can be performed from the SAP HANA studio, the database administration (DBA) Cockpit, or by executing SQL commands via HDBSQL. SAP HANA HDBSQL is a command line tool for executing commands on SAP HANA databases. The following authorizations are required to perform data backup in SAP HANA:

  • BACKUP ADMIN
  • CATALOG READ
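From HDBSQL or the SQL console, the three backup types look roughly as follows (paths are placeholders; incremental and differential backups require SPS 10 or later):

```sql
-- Full data backup
BACKUP DATA USING FILE ('<backup_path>/FULL');

-- Incremental backup: changes since the last full or delta backup
BACKUP DATA INCREMENTAL USING FILE ('<backup_path>/INCR');

-- Differential backup: changes since the last full backup
BACKUP DATA DIFFERENTIAL USING FILE ('<backup_path>/DIFF');
```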

 

The savepoint and log play an important role in the recovery of the system during a disaster (Figure 5). The database is first restarted and then the last savepoint is reloaded. There can be a large data gap between the last savepoint and the disaster point (Figure 5). The gap is filled by re-executing the incremental or differential backup and then the redo log, which contains the recently executed transactions. Once the transactions are re-executed, the database is back in a consistent state. Uncommitted transactions are rolled back using the undo log, whereas the committed transactions present in the redo log are executed.


Savepoint and redo log.jpg

Figure5 – Savepoint and redo log


SAP has also provided an application programming interface (API) to support data backup and recovery using third-party tools. Some of these tools are as follows:


  • Symantec NetBackup
  • IBM TSM for Enterprise
  • Commvault Simpana
  • HP Data Protector

 

Backup and recovery works well in the case of a power failure or disk failure. It does not help when the persistent storage itself is destroyed or a logical error has occurred. The cost of backup and recovery is not huge compared to the other options. Backup and recovery is used by companies that do not need a short RTO and RPO = 0.

 

Storage Replication

Storage replication is the process of mirroring disk content to a secondary data center with a standby SAP HANA system (Figure 6). The transfer process can be either synchronous or asynchronous depending on the distance between the primary and the standby SAP HANA system. As the distance between the primary and secondary center increases, the latency for writing the log also increases: the greater the distance, the longer it takes to save the data between the centers, which reduces performance. Synchronous transfer is therefore used for shorter distances, whereas the asynchronous method is used for longer distances. Synchronous data replication between the primary and secondary site ensures zero data loss (RPO=0). This protects a data center against events such as power outages, fire, floods, or hurricanes.

 

Asynchronous storage replication can also be used, but it is possible that during a takeover the most recent changes are lost. In some application scenarios this loss can be accepted, in others not. SAP suggests you use synchronous replication where possible, as it removes even the slightest chance of data loss. Due to continuous replication it offers a better RPO than backup, but it requires a high-bandwidth, low-latency connection between the primary and the secondary site. It is mainly used for recovery from local storage corruption or recovery after a disaster.

Storage Replication.jpg

Figure6 – Storage Replication


System Replication

SAP HANA system replication ships all data to a secondary system located at another site. Once SAP HANA system replication is enabled, each server process on the secondary system establishes a connection with its primary counterpart and requests a snapshot of the data (Figure 7). From then on, all logged changes in the primary system are replicated continuously to the secondary system: each log buffer persisted to disk in the primary system is sent to the secondary system. A transaction in the primary system is not committed before the logs are replicated. There are different options available:

 

  • Synchronous: The primary system does not commit the transaction until the secondary system sends an acknowledgement to the primary system as soon as data is received and persisted.
  • Synchronous in memory: The primary system does not commit the transaction until the secondary system sends an acknowledgement to the primary as soon as data is received. 
  • Asynchronous: As per the design of asynchronous replication, the primary does not wait until the secondary sends an acknowledgement.
  • Synchronous full sync: The synchronous option is executed with the full sync option. In a full sync operation, transaction processing on the primary site is blocked when the secondary site is not connected and newly created log buffers cannot be shipped to it; no transaction is committed on the primary server before being persisted on the secondary server. This behavior ensures that no transaction can be committed locally without being shipped to the secondary site.
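At the operating-system level, system replication is typically set up with the hdbnsutil tool; a sketch with placeholder hosts and illustrative site names:

```shell
# On the primary site (as <sid>adm): enable system replication
hdbnsutil -sr_enable --name=siteA

# On the secondary site, after stopping its database: register against the primary
hdbnsutil -sr_register --remoteHost=<primary_host> \
  --remoteInstance=<instance_number> --mode=sync --name=siteB
```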

 

SAP HANA system replication has a lower RTO and is faster than storage replication. SAP suggests you use synchronous replication, as there can be data loss with asynchronous replication.

System replication.jpg

Figure7 – System replication


The main benefits of system replication are as follows:

  • The secondary system can be used during planned downtime of the primary system
  • The secondary system can be used during a software fault in the primary system
  • The secondary system can be used during a disaster
  • The secondary system can be used during a crash of the primary system

 

The log entries in the secondary system are replayed continuously, immediately after they have been received. This means that the secondary system can take over with virtually no delay if the primary system fails. This replication solution offers a low RPO and RTO to customers.

[SAP HANA Academy] Live4 ERP Agility: SDI DP Server

$
0
0

Continuing with the Smart Data Integration part of the SAP HANA Academy’s Live4 ERP Agile Solutions in SAP HANA Cloud Platform course, Tahir Hussain Babar (Bob) shows how to turn on the data-provisioning server needed to connect to a Hadoop system with the using SAP Smart Data Integration. Check out Bob’s tutorial video below.

Screen Shot 2015-10-13 at 3.28.36 PM.png

(0:40 – 2:55) How to Start the DP Server

 

First, on the machine that contains Eclipse, open Eclipse and, with your system user selected, click the Administration Console button. Go to the Configuration tab and expand the daemon.ini file. Then expand the dpserver section, select instances, and type 1 in the New Value text box for both the System and the Host before clicking Save.
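The same change can be made from the SQL console instead of the Configuration tab; a minimal sketch, assuming your user has the INIFILE ADMIN privilege:

```sql
-- Start one dpserver instance by setting 'instances' to 1 in daemon.ini
-- (system layer shown; repeat with the 'HOST' layer for a specific host)
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;
```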

Screen Shot 2015-10-13 at 3.46.29 PM.png

Now after hitting refresh in the Landscape tab it will display a yellow triangle to signify that the DP Server is starting up.

Screen Shot 2015-10-13 at 3.47.32 PM.png

(2:55 – 4:50) How to add the Required Roles for the Live4 User

 

You will also need to set certain authorizations in order for the Live4 user to carry out Smart Data Integration tasks.

 

First go to your system user and expand the Security folder. Then expand the Users folder and select the Live4 user. This is the user that a developer will use to do any and all of the work in Eclipse and/or the WebIDE throughout the course.

 

In the Live4 user navigate to the System Privileges tab and click on the green plus sign to add a trio of privileges. Essentially we will need to utilize an agent, install a Hadoop adapter and create a virtual table. So select ADAPTER ADMIN, AGENT ADMIN and CREATE REMOTE SOURCE and then click ok.

Screen Shot 2015-10-13 at 4.01.50 PM.png

Then after clicking the green execute button those three new system privileges will be granted to the Live4 user.
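Equivalently, the same system privileges can be granted from the SQL console (a sketch, assuming the user is named LIVE4 as in the course):

```sql
-- Privileges needed to manage agents, install adapters and create remote sources
GRANT ADAPTER ADMIN TO LIVE4;
GRANT AGENT ADMIN TO LIVE4;
GRANT CREATE REMOTE SOURCE TO LIVE4;
```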


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

SAP HANA Smart Data Access Setup


PURPOSE

The purpose of this document is to define clear steps for connecting the BW on HANA DB from the HANALIVE DB, so that BW models can be used in HANALIVE reports. In this scenario we have used two separate HANA DBs (one for BW and another for HANALIVE for ERP 1.0).


SCOPE

This document is intended for the Basis team who will support the Smart Data Access (SDA) configuration after go-live. It covers the prerequisites, installation, and post-installation configuration of the complete SDA setup between the BW on HANA DB and the HANALIVE DB.


COMPONENT DETAILS

SAP BW system running on SAP NW 7.4 SP8 with HANA DB SPS8 Revision 82

HANALIVE DB is used in a sidecar scenario with version SPS8 Revision 82


WHAT ARE THE PREREQUISITES FOR SAP HANA SMART DATA ACCESS?

Software Versions

You have installed SAP HANA SPS 07 or higher, and the remote data sources are available.

ODBC Drivers

You have installed the ODBC drivers for the databases you want to connect to on each HANA node (see SAP Note 1868702). Note that if you installed the ODBC drivers in your HANA exe directory as described in that note, they will be removed during a revision update and must be reinstalled after the update.


BUSINESS CASE

SAP HANA smart data access makes it possible to connect remote data sources and to present the data contained in these data sources as if from local SAP HANA tables. This can be used, for example, in SAP Business Warehouse installations running on SAP HANA to integrate data from remote data sources.

 

In SAP HANA, virtual tables are created to represent the tables in the remote data source. Using these virtual tables, joins can be executed between tables in SAP HANA and tables in the remote data source. All access to the remote data source is read-only.

 

In this scenario we are setting up Smart Data Access on the Enterprise HANA system to connect to the BW on HANA system remotely.


Please check the attached document for detailed procedure of SDA setup for HANA.

 

SAP BW ON HANA & HANA SMART DATA ACCESS – SETUP

 

1. Create user with required privileges in BW on HANA DB

Log in to the remote source SAP HANA system (BWoH) as the SYSTEM user and create the user SDA with the following privileges.

 

System Privilege: Catalog Read

Object Privileges:

    • SELECT on Schema SAPABAP1
    • SELECT on _SYS_BIC & _SYS_BI
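The same user and grants can be created from the SQL console; a sketch, with a placeholder password:

```sql
-- Create the SDA user on the remote BW on HANA system (password is a placeholder)
CREATE USER SDA PASSWORD "InitialPassword1";
GRANT CATALOG READ TO SDA;
GRANT SELECT ON SCHEMA SAPABAP1 TO SDA;
GRANT SELECT ON SCHEMA _SYS_BIC TO SDA;
GRANT SELECT ON SCHEMA _SYS_BI TO SDA;
```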

Fig1.png

Fig2.png

Fig3.png

Fig4.png

Note: Schema SAPABAP1 contains all the required base tables on which the HANA Modelling team wants to build their reports.


2. Logon to HANALIVE DB as SYSTEM user and configure the Smart Data Access.

 

SAP HANA system authorization CREATE REMOTE SOURCE is required to create a remote source. SYSTEM user already has this authorization.

 

In the Systems view, open Provisioning -> Remote Sources.

 

Right-click Remote Sources and select New Remote Source.

Fig5.png

Enter the Source Name. Select the Adapter Name from the drop-down list, in this case HANA (ODBC), as we are connecting to a remote SAP HANA database. Enter the values for Server, Port, User Name and the Password of the user SDA which we created on the SAP BW on HANA system.

 

Click the Save the Editor icon in the upper right-hand corner of the screen.

Fig6.png
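The same remote source can also be created with SQL instead of the Studio dialog; a sketch, where the source name, host and port are assumptions to replace with your own values:

```sql
-- Create a remote source to the BW on HANA system via the HANA (ODBC) adapter
CREATE REMOTE SOURCE "BWOH" ADAPTER "hanaodbc"
  CONFIGURATION 'Driver=libodbcHDB.so;ServerNode=bwhost:30015'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=SDA;password=InitialPassword1';
```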

3. Connection Verification

After the SDA connection has been created successfully, verify that you can connect to the remote source AP6 system and that you can see the tables under schema SAPABAP1.

Fig7.png

4. User authorization to access


By default, authorization to access data in the remote data source is determined by the privileges of the database user.

 

Grant the following privileges to the role assigned to the Modelling users, so that they can create virtual tables and then write SQL queries that operate on those virtual tables. The SAP HANA query processor optimizes these queries, executes the relevant part of each query in the target database, returns the results to SAP HANA, and completes the operation.

Fig8.png
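As an illustration of what the Modelling users can then do, a virtual table is created against a remote table and queried like a local one (a sketch; the local schema, remote source name and remote table are assumptions):

```sql
-- Create a virtual table over a remote BW table, then query it locally
CREATE VIRTUAL TABLE "MODELS"."VT_T001"
  AT "BWOH"."<NULL>"."SAPABAP1"."T001";

SELECT * FROM "MODELS"."VT_T001" LIMIT 10;
```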

Challenges of Moving to HANA on Cloud


Challenges of HANA on Cloud

PURPOSE

The purpose of this document is to capture lessons learned and pain points/challenges when a customer wants to build a new SAP landscape on HANA on cloud or move existing SAP applications to HANA on cloud. In this document, the challenges are grouped into three HANA-on-cloud scenarios:

  • Migration of an existing SAP application to HANA on cloud

  • Greenfield implementation

  • AMS support for HANA on cloud

SCOPE

This document is intended for Basis teams involved in any of the three deployment scenarios above. The information can be taken as a starting point for each of these scenarios.

Picture1.png
