Channel: SCN : Document List - SAP HANA and In-Memory Computing

E2E – Consuming HANA procedures using ADBC


Author:          Sathish Kumar Dharmaraj

Company:      NTT DATA Global Delivery Services Private Limited

 

Introduction

A certain section of our ABAP reports runs in SAP NW 7.3 on an Oracle DB; executing these reports for large selections takes a lot of time and occupies a lot of resources. Having a HANA DB available, we therefore push the entire logic down to HANA by replicating the source tables using SLT, developing procedures on the base tables, and consuming the procedures in SAP using ADBC.

 

This end-to-end guide illustrates the SLT replication method, building the procedures, and how to consume them in SAP.

 

HANA

 

1) Go to Programs -> SAP HANA -> SAP HANA Studio

Pic 1.jpg

 

2) Go to Window -> Open Perspective -> SAP HANA Modeler

Pic 2.jpg

 

3) Click Add System Icon

Pic 3.jpg

 

4) Provide host name, instance and description. Click Next.

 

Pic 4.jpg

 

5) Provide User name and Password.

 

Pic 5.jpg

 

6) Click Finish

 

Pic 6.jpg

 

7) Click Select System from the Quick Launch Modeler Screen

 

Pic 7.jpg

 

8) Select your corresponding environment

 

Pic 8.jpg

 

9) Click Data Provisioning

Pic 9.jpg

 

10) Select a schema (Source System) to replicate the table

 

Pic 10.jpg

 

11) Select Load for a one-time initial load only, or Replicate if you want both the initial load and delta records

Pic 11.jpg

 

 

12) Enter the name of the table you want to load/replicate

 

Pic 12.jpg

 

13) Choose the table and click the Add button to transfer it from Source Tables to Selected Tables.

 

Pic 12.jpg

 

14) Click Finish

 

Pic 14.jpg

 

15) The table initially shows Load (Action) – Scheduled (Status). While the load is in progress, the Action shows Load and the Status In Process; when the load is completed, the Status changes to Executed. For replication, the Action shows Replicate and the Status In Process. You can monitor the status by clicking the refresh button in the top right corner of the screen below.

 

Pic 15.jpg

 


 

16) Go to Catalog -> Choose your schema

 

Pic 16.jpg

17) Go to Schema -> Tables

 

Pic 17.jpg

 

18) The replicated table will be shown here

 

Pic 18.jpg

 

19) To create the procedure, go to Content

 

Pic 19.jpg

 

 

20) To create a package, go to Content -> New -> Package

 

Pic 20.jpg

 

21) Enter the package information: technical name, description, and delivery unit.

 

Pic 21.jpg

 

22) The delivery unit can be assigned at a later stage.

 

Pic 22.jpg

 

 

23) Select your package -> Right Click -> New -> Procedure

 

Pic 23.jpg

 

24) Provide the procedure details and choose the default schema where the tables are replicated via SLT.

 

Pic 24.jpg

 

 

25) Click Finish

 

Pic 25.jpg

 

 

26) The procedure editor has different panes: Script View, Input Pane, and Output Pane.

 

Pic 26.jpg

 

a) Create the output parameters from the Output Pane: select Output Parameters -> Right Click -> New (list out the fields for the output)


Pic 27.jpg


b) Save and Activate the procedure

 

27) Once the procedure has been activated, it is stored under the _SYS_BIC schema -> Procedures

 

Pic 28.jpg

 

28) Execute the procedure using the CALL statement

 

Pic 29.jpg
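The screenshot carries the actual statement; as a minimal sketch, such a call from the SQL console could look like this (the package path and procedure name are illustrative assumptions, with ? standing for the output parameter):

CALL "_SYS_BIC"."mypackage/ZPR_GET_DATA"(?);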

 

29) The result will be shown as below

 

Pic 30.jpg

 

30) To insert the data into a table, use CALL procedure ... WITH OVERVIEW

 

Pic 31.jpg
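As a sketch (same illustrative procedure name as above): passing NULL for the output parameter together with WITH OVERVIEW makes HANA materialize the result into a generated table and return that table's name in the overview result set:

CALL "_SYS_BIC"."mypackage/ZPR_GET_DATA"(NULL) WITH OVERVIEW;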

 

ABAP

 

 

 

Follow the instructions below for calling the procedures in ABAP.

 

 

35) Declare types that exactly match the output parameter structure

Pic 32.jpg
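The exact fields are in the screenshot; as a sketch, assuming a hypothetical two-field output structure, the declaration would have this shape:

TYPES: BEGIN OF ty_output,
         matnr TYPE matnr,   " field 1 of the procedure output (assumed)
         labst TYPE labst,   " field 2 of the procedure output (assumed)
       END OF ty_output.
TYPES ty_output_t TYPE STANDARD TABLE OF ty_output WITH DEFAULT KEY.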

 

 

36) Declare the SQL connection data and its variables

 

Pic 33.jpg
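A minimal sketch of those declarations, using the standard ADBC classes (the connection name HANADB is an assumption; it must match the entry maintained in the next step):

DATA: lv_con_name TYPE dbcon-con_name VALUE 'HANADB',  " DBCON entry (assumed name)
      lo_sql_con  TYPE REF TO cl_sql_connection,        " ADBC connection
      lo_sql_stmt TYPE REF TO cl_sql_statement,         " ADBC statement
      lo_result   TYPE REF TO cl_sql_result_set,        " ADBC result set
      lt_output   TYPE ty_output_t.                     " target internal table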

 

37) Fetch the DB connection; the connection name must be maintained in the DBCON table or via transaction DBCO.

 

Pic 34.jpg

 

38) Fetch the result table name using CALL procedure ... WITH OVERVIEW, as below

 

Pic 36.jpg

 

39) Execute the query, which returns the table name

 

Pic 38.jpg

 

40) Select from the returned table and fetch the data

 

Pic 39.jpg

 

 

41) Assign the result to the target variable/internal table.

 

Pic 40.jpg
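Putting steps 37–41 together, a minimal sketch of the whole flow could look like this (the procedure name and SQL literals are illustrative; the screenshots show the actual code). The overview result set of step 38 contains the name of the generated result table, which is read into lv_tabname:

DATA: lr_data    TYPE REF TO data,
      lv_sql     TYPE string,
      lv_tabname TYPE string.

TRY.
    " Step 37: open the secondary DB connection maintained in DBCON
    lo_sql_con  = cl_sql_connection=>get_connection( con_name = lv_con_name ).
    lo_sql_stmt = lo_sql_con->create_statement( ).

    " Step 38/39: CALL ... WITH OVERVIEW returns the generated table name;
    " bind the overview result set as in the screenshots and read lv_tabname
    lv_sql = 'CALL "_SYS_BIC"."mypackage/ZPR_GET_DATA"(NULL) WITH OVERVIEW'.
    lo_result = lo_sql_stmt->execute_query( lv_sql ).
    " ... read lv_tabname from lo_result, then lo_result->close( ).

    " Step 40/41: select from the returned table and fetch into lt_output
    CONCATENATE 'SELECT * FROM ' lv_tabname INTO lv_sql.
    lo_result = lo_sql_stmt->execute_query( lv_sql ).
    GET REFERENCE OF lt_output INTO lr_data.
    lo_result->set_param_table( lr_data ).
    lo_result->next_package( ).
    lo_result->close( ).
    lo_sql_con->close( ).
  CATCH cx_sql_exception.
    " handle/log the DB error
ENDTRY.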

I hope this document helps to give an idea of how to extract information from HANA using an ADBC connection.


SAP HANA Hands on tests ( part 4.2 ) : HANA replication failback


Where am I at?

 

Previously I performed a takeover test, killing the primary system, which was hdbtest1.

The hdbtest2 node took over the HDB and I could restart my ECC application ( needing to restart it due to my lab configuration; see my previous doc / blog : SAP HANA Hands on tests ( part 4.1 ) : HANA replication takeover ).

Now, at first I wanted to perform the failback only, but in the end I'll also perform a "Near Zero DownTime" update of the HANA platform and then a failback.

 

Start situation:

 

failback0.png

 

HDB is running on Node2 as the primary instance.

Node1 failed a few days ago and is therefore not in sync anymore.

The HDB software is still there and installed on HDB node1.

 

I'm currently running the following version of HDB : 1.00.093.00.1424770727 (fa/newdb100_rel)

 

Target :

 

  • HDB in version 1.00 SPS 97.
  • Node1 back as primary
  • Node2 back as stby.

 

How :

 

Basically, it should take these main steps :

 

  • Get my hana node1 back in the configuration as a STBY node.
  • Perform the software  update on this stby HDB node ( hdbtest1 ) .
  • Takeover -> Node1 is back as primary node
  • Update HDB node2
  • Put HDB node2 back in the configuration as STBY host.
  • Perform the required post update steps.

 

Where to get some information :

 

First of all : RTNG ! ( Read The Notes and Guides ) .

There are lots of notes and guides around for this topic.

The main ones I followed :

 

 

I also read this excellent blog about Hana SPS updates :

 

 

Let's go !

 

Get hana node 1 back in the game !

 

Having a look at the HDB replication statuses, I have this situation due to my previous simulated crash / takeover :

 

hdbtest2 :

 

failback2.png

hdbtest1 :

 

failback2_bis.png

 

As you can see, the situation is not clean: I have 2 systems claiming they are primary. So the first step for me is to clean this up.

At first I thought I would have to run some cleanup using unregistering commands and then put the host back in.

It turns out that you only need to register the system again in the configuration.

So all I had to do was register the system again using the hdbnsutil tool ( adding the option --force_full_replica ) :

 

hdbnsutil -sr_register --remoteHost=hdbtest2 --remoteInstance=HTL --mode=syncmem --name=HTLPRIM --force_full_replica

 

failback3.png

The hdbtest1 node is now in syncmem mode instead of primary.

hdbtest2 is aware of the change in the topology: hdbtest1 is seen as secondary_host :

 

failback4.png

Now we restart the hdbtest1 node.
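On the command line, as the htladm user, the restart is simply (a sketch, using the same HDB tooling as elsewhere in this series):

HDB start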

The system replication is triggered on startup :

 

failback5.pngfailback6.png

The replication is back. hdbnode2 is replicating to hdbnode1 :

 

failback7.png


The replication is back online.


That said, for the failback to be complete, I will return to the initial situation while updating HANA using the Near Zero DownTime update concept:

     Node 1 as primary

     Node 2 as standby


Next steps will be described here :

 

SAP HANA Hands on tests ( part 4.3 ) : Near Zero DownTime update using replication

SAP HANA Hands on tests ( part 4.3 ) : Near Zero DownTime update using replication


Situation so far


This is a follow-up to the HANA replication oriented tests I'm currently playing with.

Following tests around replication :



I will now set the situation back to where I started, but with the HDB updated from 1.0.93 to 1.0.97.



Start situation :


The primary host running the HDB is on hdbtest2

The standby host is on hdbtest1

The replication is back online ( hdbtest2-> hdbtest1 )

HDB version : 1.0.93


Target :


Primary host running the HDB will be hdbtest1

The standby host will be hdbtest2

HDB version : 1.0.97

Replication will be set back ( hdbtest1 -> hdbtest2 )


Relevant docs :




NZDT update : updating the standby host


First, I update the current standby node hdbtest1.

The following entry needs to be added to the hdbuserstore :


hdbuserstore SET SRTAKEOVER hdbtest1:30015 system <password>

hdbuserstore LIST

 

DATA FILE      : /usr/sap/HTL/home/.hdb/hdbtest1/SSFS_HDB.DAT

 

KEY SRTAKEOVER

  ENV : hdbtest1:30015

  USER: system

 


From the HDB software SPS directory, as root :

 

# ./hdblcm --action=update

 

SAP HANA Lifecycle Management - SAP HANA 1.00.097.00.1434028111

***************************************************************

 

Scanning Software Locations...

Detected components:

    SAP HANA Database (1.00.097.00.1434028111) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server

    SAP HANA AFL (Misc) (1.00.097.00.1434039685) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_AFL_LINUX_X86_64/packages

    SAP HANA LCAPPS (1.00.097.000.454405) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HANA_LCAPPS_10_LINUX_X86_64/packages

    SAP TRD AFL FOR HANA (1.00.097.00.1434039685) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_TRD_AFL_LINUX_X86_64/packages

    SAP HANA Database Client (1.00.097.00.1434028111) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_CLIENT_LINUX_X86_64/client

    SAP HANA Studio (2.00.0.19.000000) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_STUDIO_LINUX_X86_64/studio

    SAP HANA Smart Data Access (1.00.4.004.0) in /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/SAP_HANA_SDA_10/packages

 

Choose system to update

 

  Index | System            | Database Properties

  --------------------------------------------------

  1    | HTL (update)      | 1.00.093.00.1424770727

        |                  | hdbtest1 (worker)

        |                  |

  2    | Exit (do nothing) |

 

 

Enter selected system index [2]: 1

 

Choose components to be installed or updated:

  Index | Components | Description

  --------------------------------------------------------------------------------------------------------------------------

  1    | all        | All components

  2    | server    | Update SAP HANA Database from version 1.00.093.00.1424770727 to version 1.00.097.00.1434028111

  3    | client    | Update SAP HANA Database Client from version 1.00.093.00.1424770727 to version 1.00.097.00.1434028111

  4    | afl        | Update SAP HANA AFL (Misc) from version 1.00.093.1.1425042048 to version 1.00.097.00.1434039685

  5    | lcapps    | Update SAP HANA LCAPPS from version 1.00.093.00.451387 to version 1.00.097.000.454405

  6    | smartda    | Update SAP HANA Smart Data Access from version 1.00.3.005.0 to version 1.00.4.004.0

  7    | studio    | Update SAP HANA Studio from version 2.00.0.11.000000 to version 2.00.0.19.000000

  8    | trd        | Update SAP TRD AFL FOR HANA from version 1.00.093.1.1425042048 to version 1.00.097.00.1434039685

 


Enter comma-separated list of the selected indices [2,3,4,5,6,8]: 1

Enter System Administrator (htladm) Password: ***********

Enter Database User Name [SYSTEM]:

Enter Database User (SYSTEM) Password:***********

 

Summary before execution:

=========================

 

 

SAP HANA Components

  Update Parameters

      Remote Execution: ssh

      SAP HANA System ID: HTL

      Database User Name: SYSTEM

      SAP HANA Database Client Installation Path: /hana/shared/HTL/hdbclient

      SAP HANA Studio Installation Path: /hana/shared/HTL/hdbstudio

  Software Components

      SAP HANA AFL (Misc)

        Update from version 1.00.093.1.1425042048 to 1.00.097.00.1434039685

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_AFL_LINUX_X86_64/packages

      SAP HANA LCAPPS

        Update from version 1.00.093.00.451387 to 1.00.097.000.454405

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HANA_LCAPPS_10_LINUX_X86_64/packages

      SAP TRD AFL FOR HANA

        Update from version 1.00.093.1.1425042048 to 1.00.097.00.1434039685

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_TRD_AFL_LINUX_X86_64/packages

      SAP HANA Database

        Update from version 1.00.093.00.1424770727 to 1.00.097.00.1434028111

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server

      SAP HANA Database Client

        Update from version 1.00.093.00.1424770727 to 1.00.097.00.1434028111

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_CLIENT_LINUX_X86_64/client

      SAP HANA Studio

        Update from version 2.00.0.11.000000 to 2.00.0.19.000000

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/HDB_STUDIO_LINUX_X86_64/studio

      SAP HANA Smart Data Access

        Update from version 1.00.3.005.0 to 1.00.4.004.0

        Location: /hana/HANA_SPS97/EXTRACT/51049967/DATA_UNITS/SAP_HANA_SDA_10/packages

 

 

Note: Cannot verify database user (SYSTEM) password in advance: no connection available.

Note: Cannot perform license check: no connection available

 

 

Do you want to continue? (y/n):y

 

Updating components...

Updating SAP HANA AFL (Misc)...

  Preparing package 'AFL'...

  Installing SAP Application Function Libraries to /hana/shared/HTL/exe/linuxx86_64/plugins/afl_1.00.097.00.1434039685_2158696...

  Installing package 'AFL'...

Updating SAP HANA LCAPPS...

  Preparing package 'LCAPPS'...

  Installing SAP liveCache Applications to /hana/shared/HTL/exe/linuxx86_64/plugins/lcapps_1.00.097.00.454405_4578353...

  Installing package 'LCAPPS'...

Updating SAP TRD AFL FOR HANA...

  Preparing package 'TRD'...

  Installing SAP TRD AFL FOR SAP HANA to /hana/shared/HTL/exe/linuxx86_64/plugins/trd_1.00.097.00.1434039685_2158696...

  Installing package 'TRD'...

Updating SAP HANA Database...

  Extracting software...

  Updating package 'Saphostagent Setup'...

  Updating package 'Python Support'...

  Updating package 'Python Runtime'...

  Updating package 'Product Manifest'...

  Updating package 'Binaries'...

  Updating package 'Installer'...

  Updating package 'Ini Files'...

  Updating package 'Emergency Support Package'...

  Updating package 'Documentation'...

  Updating package 'Delivery Units'...

  Updating package 'DAT Languages'...

  Updating package 'DAT Configfiles'...

  Stopping system...

  Starting service (sapstartsrv)...

  Starting system...

  Importing delivery units...

Updating Resident hdblcm...

  Cleaning up old installation of Resident hdblcm...

  Installing Resident hdblcm...

Updating SAP HANA Database Client...

  Preparing package 'Python Runtime'...

  Preparing package 'Product Manifest'...

  Preparing package 'SQLDBC'...

  Preparing package 'REPOTOOLS'...

  Preparing package 'Python DB API'...

  Preparing package 'ODBC'...

  Preparing package 'JDBC'...

  Preparing package 'HALM Client'...

  Preparing package 'Client Installer'...

  Installing SAP HANA Database Client to /hana/shared/HTL/hdbclient...

  Updating package 'Python Runtime'...

  Updating package 'Product Manifest'...

  Updating package 'SQLDBC'...

  Updating package 'REPOTOOLS'...

  Updating package 'Python DB API'...

  Updating package 'ODBC'...

  Updating package 'JDBC'...

  Updating package 'HALM Client'...

  Updating package 'Client Installer'...

Updating SAP HANA Studio...

  Preparing package 'Studio Director'...

  Preparing package 'Client Installer'...

  Installing SAP HANA Studio to /hana/shared/HTL/hdbstudio...

  Updating package 'Studio Director'...

  Updating package 'Client Installer'...

Updating SAP HANA Studio Update repository...

Installing SAP HANA Smart Data Access...

Updating Component List...

Updating SAP HANA instance integration on local host...

  Deploying SAP Host Agent configurations...

SAP HANA components updated with warnings.

 

Note:

 

Log file written to '/var/tmp/hdb_hdblcm_update_2015-09-07_09.21.54/hdblcm.log' on host 'hdbtest1'.

 

 


So now hdbtest1 is updated to SPS 97.

Looking at the logs we can see that the hdbtest2 system ( which is still primary ) saw hdbtest1 going down :


failback10.png


On hdbtest1, the update restarts the HDB during the process :


failback14.png

hdbtest1 is back online as the replicating database, now in version 1.0.97.

The replication can go on again; the resync is triggered automatically when the standby node is restarted :

failback9.png

NZDT update : takeover

 

My standby hdbtest1 node is running on the new HANA SPS :

 

failback16.png


My primary is still on the former version :


failback15.png


The replication is O.K.

Now I can decide when to switch to the new version.

I had triggered an SGEN during the update process.

Here is its current status :

 

failback17.png

 

We "takeover" to the new node running 097 :

 

hdbnsutil -sr_takeover

 

checking local nameserver ...

done.

hdbtest1:/usr/sap/HTL/HDB00>

 

 

Note :

 

Now I have something I did not expect : my 2 HDBs are alive !!

 

failback18.png

on node 2 :

 

hdbtest2:/usr/sap/HTL/HDB00> hdbnsutil -sr_state

checking for active or inactive nameserver ...

 

System Replication State

~~~~~~~~~~~~~~~~~~~~~~~~

 

mode: primary

site id: 2

site name: HTLSTBY

 

Host Mappings:

~~~~~~~~~~~~~~

 

hdbtest2 -> [HTLPRIM] hdbtest1

hdbtest2 -> [HTLSTBY] hdbtest2

 

done.

 

on node 1 :

 

hdbtest1:/usr/sap/HTL/HDB00> hdbnsutil -sr_state

checking for active or inactive nameserver ...

 

System Replication State

~~~~~~~~~~~~~~~~~~~~~~~~

 

mode: primary

site id: 1

site name: HTLPRIM

 

Host Mappings:

~~~~~~~~~~~~~~

hdbtest1 -> [HTLPRIM] hdbtest1

hdbtest1 -> [HTLSTBY] hdbtest2

done.

 

And this is not good.

The SGEN from my HEC ECC server is still running against the "wrong" database.

On a production system with actual production data being written, this would mean: I am currently updating the wrong DB.

failback19.png

 

Looking at the open sessions on each HDB :

 

hdbnode1

failback21.png

hdbnode2

failback20.png

In this situation, where I'm updating the system and therefore the running HDB has not crashed, I thought the sr_takeover would, in some way, disable the taken-over database.

I expected the HDB on hdbnode2 to be disabled, then my HEC ECC system to go down, and the HDB running on node1 to be ready and waiting for connections.

Looking at my SGEN log tables I can see that something went wrong and the updates went into both databases :

 

Querying my database on hdbnode1, I have the following entries in table D010LINF :

failback24.png

Doing the same query on hdbnode2 results in this :

failback25.png

failback26.png

 

I can find my "data gap" on hdbnode2, which was supposedly taken over...

My ECC instance was still running on node hdbnode2 until I decided to "HDB stop" it, although I had performed the sr_takeover.

So I really had 2 HDBs working as primary.

But this also means that I have lost data, as I am supposed to run on node1 since the takeover ...

The entries written to D010LINF from 12:13 to 12:48 are now missing from my DB running on hdbnode1.

 

Again, as a workaround, I have to stop the HDB node before performing the sr_takeover.

I'll try to find the root cause of this and post it back.
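As a sketch, the workaround sequence with the tools used throughout this series would be:

HDB stop                  (as the <sid>adm user on the current primary, here hdbnode2)
hdbnsutil -sr_takeover    (as the <sid>adm user on the secondary, here hdbnode1)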

 

 

 

 

I don't know right now if I did something wrong but, as a safety measure, I'd rather stop the HDB on my currently running node and THEN perform the sr_takeover.

 

That said, let's have a look at the HDB logs on hdbnode1 :

  • This is when I trigger the sr_takeover :

 

[5141]{-1}[-1/-1] 2015-09-07 12:13:51.210075 i PersistenceManag DisasterRecoverySecondaryImpl.cpp(00546) : Takeover on secondary ..
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.210461 i PersistenceManag DisasterRecoveryProtocol.cpp(03028) : skipping preload, because last preload occurred on current spv: 8468
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.210504 i Logger          BackupHandlerImpl.cpp(00341) : Shutting down log backup, 0 log backup(s) pending
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232039 i PersistenceManag PersistenceManagerImpl.cpp(04622) : Restart page version 1 loaded (96bytes)
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232065 i PersistenceManag PersistenceManagerImpl.cpp(04647) : Initial maximum known TID after restart: 4859536
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232067 i Logger          PersistenceManagerImpl.cpp(04655) : Newest known master commit position: 0x754b1c97
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232070 i Logger          PersistenceManagerImpl.cpp(04665) : Known last prepare commit position on volume 1: 0x1bd2307
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232072 i Logger          PersistenceManagerImpl.cpp(04665) : Known last prepare commit position on volume 3: 0x1ae3c44
[5141]{-1}[-1/-1] 2015-09-07 12:13:51.232081 i PersistenceManag PersistenceManagerImpl.cpp(04719) : Known DTX volume set [1,3], 0 RTT entries
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.000951 i PersistenceManag PersistenceSessionRegistry.cpp(00266) : Start loading open sessions and history cleanup files
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.026604 i Logger          PersistenceSessionRegistry.cpp(01075) : Open session count at restart: 1/1, max known TID at restart: 4859536
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.026651 i PersistenceManag PersistenceSessionRegistry.cpp(00273) : Loading 1 open session(s) and 30 history cleanup file(s) finished in 0.051289 seconds;
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.026976 i PersistenceManag PersistenceManagerImpl.cpp(04518) : Data recovery finished.
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.596178 i Service_Startup  ContMgr.cc(00070) : Initializing system catalog.
[5141]{-1}[-1/-1] 2015-09-07 12:14:01.927800 i Service_Startup  ContMgr.cc(00168) : Initializing system catalog done.
[5103]{-1}[-1/-1] 2015-09-07 12:14:01.979122 i Service_Startup  SmFastRestart.cc(00721) : Loading RowStore segments from Persistency
[5103]{-1}[-1/-1] 2015-09-07 12:14:03.004376 i RowStorePageAcce AbsolutePageAccessImpl.cpp(01137) : LoadMultiplePageBlocksAtStartup
[5103]{-1}[-1/-1] 2015-09-07 12:14:03.004414 i RowStorePageAcce AbsolutePageAccessImpl.cpp(01155) : allocate 134 segments requested to load and collect information about superblocks to read...
[5103]{-1}[-1/-1] 2015-09-07 12:14:05.296289 i RowStorePageAcce AbsolutePageAccessImpl.cpp(01309) : collecting information done in 2291msec.
[5103]{-1}[-1/-1] 2015-09-07 12:14:05.296831 i RowStorePageAcce AbsolutePageAccessImpl.cpp(01340) : SuperblockPrefetchCalculation:        allocationLimit=47038569451



 

 

  • The HDB triggers some migration tasks as we are moving from 093 to 097 :

 

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.145153 i Service_Startup  mm_recovery.cc(01051) : RS: metadata & data are separated

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.184470 i Service_Startup  md_conv_util.cc(01978) : metadata version of current DB image: 205

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.184555 i Service_Startup  md_conv_util.cc(01982) : metadata version of binary: 230

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.184557 i Service_Startup  md_conv_util.cc(01986) : [metadata upgrade] start (205 -> 230)

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.184559 i Service_Startup  md_conv_util.cc(01988) : [metadata upgrade] begin of phase I (converting physical DB image)

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.184584 i Service_Startup  md_conv_util.cc(01941) : [metadata upgrade] (217 -> 218)

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.195328 i Service_Startup  md_conv_util.cc(01949) : [metadata upgrade] (229 -> 230)

[5141]{-1}[-1/-1] 2015-09-07 12:14:27.209722 i Service_Startup  md_conv_util.cc(01993) : [metadata upgrade] end of phase I (converting physical DB image)

....

[5141]{-1}[-1/-1] 2015-09-07 12:15:26.900129 i Service_Startup  catalog.cc(00595) : Auto migration started.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.484108 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_BLOCKED_TRANSACTIONS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.512982 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CATALOG_MEMORY has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.529787 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CLIENT_VERSIONS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.564950 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view CS_JOIN_PATHS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.598634 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CS_COLUMNS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.628812 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_FUZZY_SEARCH_INDEXES has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.663963 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CS_ALL_COLUMNS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.707009 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CS_TABLES has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.736352 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CS_PARTITIONS has been changed. Its metadata is updated.

[5141]{-1}[16/-1] 2015-09-07 12:15:27.784178 i Service_Startup  catalog.cc(01916) : AutoMigration: Definition of system view M_CONNECTIONS has been changed. Its metadata is updated.

 

 

 

I then reconnect my HEC system to the HDB and restart it.

I'm now running on HDB node 1.

The NZDT update in itself is O.K. on node 1.

 

Note 2 :

One other thing I went through, which surprised me in a good way, is that in the end I was not really forced to "shutdown ECC / modify ECC parameters / restart ECC" in order to work with the failed-over node, although in my setup I had not switched IP addresses from one node to the other.

I'll try to understand why. This looked like some kind of "scale-out".

 

All the other components were updated as well :

failback27.png

 

 

NZDT update : Update of the old primary node

 

The situation is as follows now :

 

failback28.png

Hdbtest1 is back as primary node running the HDB in version 1.0.97.

Hdbtest2 is offline and still on version 1.0.93.

Replication is off.

 

 

We update hdbnode2 :

From the HDB software SPS directory, as root on hdbnode2 :

hdbupd --nostart=on ( as stated in the SAP_HANA_Administration guide ) fails in my setup ( it would have worked without any plugins installed ):


 

SAP HANA Lifecycle Management - Database Upgrade 1.00.097.00.1434028111

***********************************************************************

 

Select a SAP HANA Database installation:

 

 

No | System | Properties

------------------------------------

0  | HTL    | 1.00.093.00.1424770727

   |        | hdbtest2 (worker)

   |        |

1  | None   | (Abort upgrade)

 

 

Specify the sequence number of the system to be upgraded [1]: 0

Upgrade failed

  SAP HANA Database 1.00.097.00.1434028111 is not compatible with installed plugin(s):

    SAP Application Function Libraries

      Currently active version: 1.00.093.1.1425042048

      No installed, inactive newer version found.

       => Update the 'SAP Application Function Libraries' plugin!

      Skip this plugin dependency check with the command line option

      --ignore=check_plugin_dependencies if you want to deactivate the

      'SAP Application Function Libraries' plugin and update it later, or

      if you no longer use the functions provided by this plugin.

      Follow the instructions in SAP Note 1920457.

    SAP liveCache Applications

      Currently active version: 1.00.093.00.451387

      No installed, inactive newer version found.

       => Update the 'SAP liveCache Applications' plugin!

      Skip this plugin dependency check with the command line option

      --ignore=check_plugin_dependencies if you want to deactivate the

      'SAP liveCache Applications' plugin and update it later, or

      if you no longer use the functions provided by this plugin.

      Follow the instructions in SAP Note 1920457.

    SAP TRD AFL FOR SAP HANA

      Currently active version: 1.00.093.1.1425042048

      No installed, inactive newer version found.

       => Update the 'SAP TRD AFL FOR SAP HANA' plugin!

      Skip this plugin dependency check with the command line option

      --ignore=check_plugin_dependencies if you want to deactivate the

      'SAP TRD AFL FOR SAP HANA' plugin and update it later, or

      if you no longer use the functions provided by this plugin.

      Follow the instructions in SAP Note 1920457.

 

 

 

 


I could follow the guidelines in the SAP note, but since I am updating a standby instance, I'd rather use the hdblcm tool instead, with the following extra option :

 

hdbtest2:/hana/EXTRACT/51049967/DATA_UNITS/HDB_SERVER_LINUX_X86_64 # ./hdblcm --action=update --hdbupd_server_nostart

 

SAP HANA Lifecycle Management - SAP HANA 1.00.097.00.1434028111

***************************************************************

 

Scanning Software Locations...

Detected components:

    SAP HANA Database (1.00.097.00.1434028111) in /hana/EXTRACT/51049967/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server

    SAP HANA AFL (Misc) (1.00.097.00.1434039685) in /hana/EXTRACT/51049967/DATA_UNITS/HDB_AFL_LINUX_X86_64/packages

    SAP HANA LCAPPS (1.00.097.000.454405) in /hana/EXTRACT/51049967/DATA_UNITS/HANA_LCAPPS_10_LINUX_X86_64/packages

    SAP TRD AFL FOR HANA (1.00.097.00.1434039685) in /hana/EXTRACT/51049967/DATA_UNITS/HDB_TRD_AFL_LINUX_X86_64/packages

    SAP HANA Database Client (1.00.097.00.1434028111) in /hana/EXTRACT/51049967/DATA_UNITS/HDB_CLIENT_LINUX_X86_64/client

    SAP HANA Studio (2.00.0.19.000000) in /hana/EXTRACT/51049967/DATA_UNITS/HDB_STUDIO_LINUX_X86_64/studio

    SAP HANA Smart Data Access (1.00.4.004.0) in /hana/EXTRACT/51049967/DATA_UNITS/SAP_HANA_SDA_10/packages

 

Choose system to update

 

  Index | System            | Database Properties

  --------------------------------------------------

  1     | HTL (update)      | 1.00.093.00.1424770727

        |                   | hdbtest2 (worker)

        |                   |

  2     | Exit (do nothing) |


Enter selected system index [2]: 1

Choose components to be installed or updated:

  Index | Components | Description

  --------------------------------------------------------------------------------------------------------------------------

  1     | all        | All components

  2     | server     | Update SAP HANA Database from version 1.00.093.00.1424770727 to version 1.00.097.00.1434028111

  3     | client     | Update SAP HANA Database Client from version 1.00.093.00.1424770727 to version 1.00.097.00.1434028111

  4     | afl        | Update SAP HANA AFL (Misc) from version 1.00.093.1.1425042048 to version 1.00.097.00.1434039685

  5     | lcapps     | Update SAP HANA LCAPPS from version 1.00.093.00.451387 to version 1.00.097.000.454405

  6     | smartda    | Update SAP HANA Smart Data Access from version 1.00.3.005.0 to version 1.00.4.004.0

  7     | studio     | Update SAP HANA Studio from version 2.00.0.11.000000 to version 2.00.0.19.000000

  8     | trd        | Update SAP TRD AFL FOR HANA from version 1.00.093.1.1425042048 to version 1.00.097.00.1434039685

 

Enter comma-separated list of the selected indices [2,3,4,5,6,8]: 1

Enter System Administrator (htladm) Password:

Enter Database User Name [SYSTEM]:

Enter Database User (SYSTEM) Password:

 

Summary before execution:

=========================

 

 

SAP HANA Components

   Update Parameters

      Remote Execution: ssh

      SAP HANA System ID: HTL

      Database User Name: SYSTEM

      SAP HANA Database Client Installation Path: /hana/shared/HTL/hdbclient

      SAP HANA Studio Installation Path: /hana/shared/HTL/hdbstudio

   Software Components

      SAP HANA AFL (Misc)

         Update from version 1.00.093.1.1425042048 to 1.00.097.00.1434039685

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HDB_AFL_LINUX_X86_64/packages

      SAP HANA LCAPPS

         Update from version 1.00.093.00.451387 to 1.00.097.000.454405

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HANA_LCAPPS_10_LINUX_X86_64/packages

      SAP TRD AFL FOR HANA

         Update from version 1.00.093.1.1425042048 to 1.00.097.00.1434039685

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HDB_TRD_AFL_LINUX_X86_64/packages

      SAP HANA Database

         Update from version 1.00.093.00.1424770727 to 1.00.097.00.1434028111

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HDB_SERVER_LINUX_X86_64/server

      SAP HANA Database Client

         Update from version 1.00.093.00.1424770727 to 1.00.097.00.1434028111

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HDB_CLIENT_LINUX_X86_64/client

      SAP HANA Studio

         Update from version 2.00.0.11.000000 to 2.00.0.19.000000

         Location: /hana/EXTRACT/51049967/DATA_UNITS/HDB_STUDIO_LINUX_X86_64/studio

      SAP HANA Smart Data Access

         Update from version 1.00.3.005.0 to 1.00.4.004.0

         Location: /hana/EXTRACT/51049967/DATA_UNITS/SAP_HANA_SDA_10/packages

 

 

Note: Cannot verify database user (SYSTEM) password in advance: no connection available.

Note: Cannot perform license check: no connection available

 

Do you want to continue? (y/n): y

 

Updating components...

Updating SAP HANA AFL (Misc)...

  Preparing package 'AFL'...

  Installing SAP Application Function Libraries to /hana/shared/HTL/exe/linuxx86_64/plugins/afl_1.00.097.00.1434039685_2158696...

  Installing package 'AFL'...

Updating SAP HANA LCAPPS...

  Preparing package 'LCAPPS'...

  Installing SAP liveCache Applications to /hana/shared/HTL/exe/linuxx86_64/plugins/lcapps_1.00.097.00.454405_4578353...

  Installing package 'LCAPPS'...

Updating SAP TRD AFL FOR HANA...

  Preparing package 'TRD'...

  Installing SAP TRD AFL FOR SAP HANA to /hana/shared/HTL/exe/linuxx86_64/plugins/trd_1.00.097.00.1434039685_2158696...

  Installing package 'TRD'...

Updating SAP HANA Database...

  Extracting software...

  Updating package 'Saphostagent Setup'...

  Updating package 'Python Support'...

  Updating package 'Python Runtime'...

  Updating package 'Product Manifest'...

  Updating package 'Binaries'...

  Updating package 'Installer'...

  Updating package 'Ini Files'...

  Updating package 'Emergency Support Package'...

  Updating package 'Documentation'...

  Updating package 'Delivery Units'...

  Updating package 'DAT Languages'...

  Updating package 'DAT Configfiles'...

  Stopping system...

Updating Resident hdblcm...

  Cleaning up old installation of Resident hdblcm...

  Installing Resident hdblcm...

Updating SAP HANA Database Client...

  Preparing package 'Python Runtime'...

  Preparing package 'Product Manifest'...

  Preparing package 'SQLDBC'...

  Preparing package 'REPOTOOLS'...

  Preparing package 'Python DB API'...

  Preparing package 'ODBC'...

  Preparing package 'JDBC'...

  Preparing package 'HALM Client'...

  Preparing package 'Client Installer'...

  Installing SAP HANA Database Client to /hana/shared/HTL/hdbclient...

  Updating package 'Python Runtime'...

  Updating package 'Product Manifest'...

  Updating package 'SQLDBC'...

  Updating package 'REPOTOOLS'...

  Updating package 'Python DB API'...

  Updating package 'ODBC'...

  Updating package 'JDBC'...

  Updating package 'HALM Client'...

  Updating package 'Client Installer'...

Updating SAP HANA Studio...

  Preparing package 'Studio Director'...

  Preparing package 'Client Installer'...

  Installing SAP HANA Studio to /hana/shared/HTL/hdbstudio...

  Updating package 'Studio Director'...

  Updating package 'Client Installer'...

Updating SAP HANA Studio Update repository...

Installing SAP HANA Smart Data Access...

Updating Component List...

Updating SAP HANA instance integration on local host...

  Deploying SAP Host Agent configurations...

SAP HANA components updated with warnings.

 

 

Note:

 

 

Log file written to '/var/tmp/hdb_hdblcm_update_2015-09-09_15.07.32/hdblcm.log' on host 'hdbtest2'.

 

The update is done and O.K.

A quick check to make sure :

 

hdbtest2:/usr/sap/HTL/HDB00/hdbtest2/trace> HDB version

HDB version info:

  version:             1.00.097.00.1434028111

  branch:              fa/newdb100_maint_rel

  git hash:            e6e474976d1dd01703d242877bb1ee5e7b3b2f2a

  git merge time:      2015-06-11 15:08:31

  weekstone:           0000.00.0

  compile date:        2015-06-11 15:22:05

  compile host:        ld7272.wdf.sap.corp

  compile type:        rel

 

Now I can set the hdbnode2 back in the replication configuration :

 

hdbtest2:/usr/sap/HTL/HDB00/hdbtest2/trace> hdbnsutil -sr_state

checking for active or inactive nameserver ...

nameserver hdbtest2:30001 not responding.

nameserver hdbtest2:30001 not responding.

 

 

System Replication State

~~~~~~~~~~~~~~~~~~~~~~~~

 

mode: primary

site id: 2

site name: HTLSTBY

done.

hdbtest2:/usr/sap/HTL/HDB00/hdbtest2/trace> hdbnsutil -sr_register --remoteHost=hdbtest1 --remoteInstance=HTL --mode=syncmem --name=HTLSTBY --force_full_replica

adding site ...

checking for inactive nameserver ...

nameserver hdbtest2:30001 not responding.

collecting information ...

updating local ini files ...

done.

 

hdbnode2 is back in the game.

Last verification before restarting the STBY system :

 

hdbtest2:/usr/sap/HTL/HDB00/hdbtest2/trace> hdbnsutil -sr_state

checking for active or inactive nameserver ...

nameserver hdbtest2:30001 not responding.

nameserver hdbtest2:30001 not responding.

 

System Replication State

~~~~~~~~~~~~~~~~~~~~~~~~

 

mode: syncmem

site id: 2

site name: HTLSTBY

done.

 

 

Now let's restart the hdbnode2 HDB to bring the replication back online :

 

failback30.png

failback31.png

failback32.png

 

Everybody is back and up to date !!

 

Target :


hdbtest1 is the primary host running the HDB

the standby host is hdbtest2

HDB version : 1.0.97

Replication is set back ( hdbtest1 -> hdbtest2 )

 

What else ?


From a HANA server perspective, the update is completed.

That said, we still need to perform the usual post update steps :

  • redeploy the views
  • update the SAP HDB studio on the workstations ( the one on the server was updated )
  • update the HDB client

 

These were described here : SAP HANA Hands on tests ( part 3.1 ) : Applying patches to HANA DB

 

In the end, I'd rather perform the HDB client and HDB studio updates first, and then the HDB server update.

SAP HANA Guidelines for running virtualized


With SAP HANA SPS 05, SAP announced support for running SAP HANA in a virtualized environment for non-production scenarios, using VMware vSphere 5.1 on the Intel E7 Westmere-EX platform. At that time, VMware vSphere 5.1 in conjunction with SAP HANA SPS 05 was the first and only hypervisor supported by SAP HANA.

 

Meanwhile SAP has gathered further experience in running SAP HANA within virtualized environments, allowing us to extend the support to include SAP HANA on VMware vSphere 5.5 and Hitachi LPAR 2.0 in production.

This document is targeted at SAP partners and customers interested in running SAP HANA in such virtualized environments, as it describes in more detail the existing constraints which need to be met when running SAP HANA virtualized in production and non-production scenarios.

In addition to this document, which focuses on the general conditions under which SAP HANA may be virtualized, additional information and necessary best practices are published by the hypervisor vendors (as of today VMware and Hitachi) on how to configure their corresponding hypervisor, the VMs, and the guest operating system to run SAP HANA in a supportable and performance-optimized environment.

 

 

Whenever we have news about changed conditions and platforms under which SAP HANA is supported to run virtualized, we will also update SAP Note 1788665 as well as the SAP HANA virtualized roadmap to provide you with the latest knowledge and information.

SAP HANA TDI - Overview


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach to deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information please download this overview presentation.

View this Document

SAP HANA: Revision Strategy


This document explains the meaning of the following terms:

  • SAP HANA Revision
  • SAP HANA Release Strategy
  • SAP HANA Maintenance Revision
  • SAP HANA Datacenter Service Point

View this Document

What-If analysis with Design Studio and HANA as backend.


The aim of this blog is to demonstrate the creation of a ‘What-if’ analysis report with SAP Design Studio and HANA as the backend database. Let’s consider a scenario where we have to decide between ‘Buy Now’ and ‘Buy Later’ options, based on the below user inputs.

  1. Unit Price
  2. Quantity
  3. Discount
  4. Delay Days

 

pic1.png

 

When the ‘Submit’ button is clicked, the values entered in the input fields (Unit Price, Quantity, Discount & Delay Days) are passed on to the ‘Input Parameters’ of the HANA data model.

 

pic2.png

 

The script below is written on the ‘On Click’ event of the Submit button to pass the values on to the input parameters of the HANA model.

 

pic3.png

 

pic4.png
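The screenshots show the actual script; as a sketch, the Design Studio (BIAL) code behind the button could have roughly this shape, where the component and variable names are assumptions for illustration:

var a = INPUTFIELD_PRICE.getValue();      // Unit Price input (assumed component name)
var b = INPUTFIELD_QTY.getValue();        // Quantity
var c = INPUTFIELD_DISCOUNT.getValue();   // Discount
var d = INPUTFIELD_DELAY.getValue();      // Delay Days
DS_WHATIF_ANALYSIS.setVariableValue("Unit_Price", a);
DS_WHATIF_ANALYSIS.setVariableValue("Quantity", b);
DS_WHATIF_ANALYSIS.setVariableValue("Discount", c);
DS_WHATIF_ANALYSIS.setVariableValue("Day_Delay", d);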

 

Here ‘DS_WHATIF_ANALYSIS’ is the data source built in Design Studio on top of the HANA data model. Based on the input values (a, b, c, d), the HANA model determines the ‘Buy Now’ & ‘Buy Later’ values, shown as a chart in the Design Studio output.

pic5.png

 

HANA Data Model:

The HANA model is based on a table with 2 columns containing the inflation rate for each month.

 

 

pic6.png

 

pic7.png

 

Create 4 input parameters (Day_Delay, Unit_Price, Quantity & Discount) of parameter type ‘Direct’. These are the input parameters that will receive the input values entered by the user in the report.

 

pic8.png

 

Create another input parameter ‘Inflation_Curr’ of parameter type ‘Derived from table’. This is the current inflation rate, which will be maintained in a custom table.

 

pic9.png

 

Create the below calculated columns, in the same sequence:

 

pic10.png

 

  1. Calmonth_Now1:

pic11.png

 

2.Calmonth_Now

 

pic12.png

 

3.Calmonth_Later1

Here we add the delay days from the input to the current date to get the ‘Buy Later’ period.

 

pic13.png


4. Calmonth_Later2:

pic14.png


5.Now_Flag


pic15.png


6.Later_Flag:


pic16.png

 

7.Inflation_Now:

 

pic17.png

 

 

8.Inflation_Later

 

pic18.png

 

9.Incremental_Inflation

 

pic19.png

 

10.Buy_Now:

Buy Now takes the ‘discount percentage’ and the ‘Holding Cost’ into consideration.

 

pic20.png

 

11.Buy_Later:

Buy Later takes the inflation rate for the delayed period into consideration.

 

pic21.png
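The actual expressions are in the screenshots above; purely as an illustration of the shape such calculated columns can take (these formulas are my assumption, not the exact definitions in the model):

Buy_Now   = "Unit_Price" * "Quantity" * (1 - "Discount" / 100)
Buy_Later = "Unit_Price" * "Quantity" * (1 + "Incremental_Inflation" / 100)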

 

Now let’s activate the model and test it.

 

pic22.png

 

Enter the values for the input parameters.

 

pic23.png

 

We can now see the values of the ‘Buy Now’ and ‘Buy Later’ prices. The same will be represented as a chart in Design Studio when we click the Submit button.

 

pic24.png

 

 

Design Studio report Output:

 

pic25.png

 

Based on the above output, ‘Buy Now’ seems like the better option.

SAP HANA Client Installation and Update Guide for SCN


This SAP HANA client guide describes the installation and update of the free SAP HANA client available for download on SCN and store.sap.com.

View this Document


SAP Hana TDI setup - VMware 5.1 (part 1)


In this documentation I will explain how to set up and configure a Hana TDI scenario test environment. I will show in detail the steps and configuration points needed to achieve this configuration.


For my Hana TDI test case I’ll use my own lab on VMware vSphere 5.1.0 and run Hana revision 91, with the following deployment scenarios:

  • 1 master + 1 worker (load balancing)
  • 1 master + 1 worker + 1 standby (HA)
  • 1 Hana on Primary site + 1 Hana on Secondary site (DR replication)

 

Disclaimer: this is personal documentation for test purposes; I will willingly bypass the mandatory hardware check and HW configuration check.

 

Order of execution

  • Configure Vsphere for the relevant scenario
  • Configure and setup scenario 1 - load balancing
  • Configure and setup scenario 2 - High Availability
  • Configure and setup scenario 3 - DR replication

 

Guide used:

SAP Hana Administration Guide SP9

SAP HANA virtualization and TDI_V2

SUSE Linux Enterprise High Availability ExtensionSLEHA

 

Note used:

1944799 - SAP HANA Guidelines for SLES Operating System Installation

2070786 - SAP systems on VMware vSphere: minimum SAP and OS requirements

1788665 - SAP HANA Support for Virtualized Environments

1943937 - Hardware Configuration Check Tool - Central Note

1969700 - SQL statement collection for SAP HANA

 

Link used:

Novell SAPHanaSR 10162 - SAPHanaSR provides resource agents to control the SAP HANA database in high availability environments

Configure Vsphere for the relevant scenario

1.jpg

 

As the above diagram shows, my ESXi server is configured with 4 internal disks, each of them for a specific purpose. I have installed and configured a virtual NAS appliance (FreeNAS) in order to provide NFS shares and mount points.

 

I will explain how to create a new SLES VM from a template for Hana, so it will be easier to have a reference image ready to deploy a new server.

 

SLES installation

  • 4 cores
  • 32 GB of Ram
  • 30 GB of local disk for OS
  • 2 NIC card

 

I obviously did not detail how to create a new VM within vSphere since it's very straightforward, but here is the new SLES server once it is ready:

3-27-2015 6-36-16 PM.jpg

 

I start some post-install work in order to have the VMware tools installed and make the system Hana SP9 compliant:

4-1-2015 3-40-06 PM.jpg

4-1-2015 3-56-36 PM.jpg

 

Now let’s make the system Hana SP9 ready. Some additional packages need to be installed on the system according to note “1944799 - SAP HANA Guidelines for SLES Operating System Installation”. Those packages can be installed using OS commands or the tools “Yast”, “Yast2” or “Zypper”. In my case these two components were too old; the required version is 17.2:

4-1-2015 4-47-54 PM.jpg

4-1-2015 4-49-37 PM.jpg

 

Once deployed, I install the specific package for Hana (check the following link for further information on what this package is all about):

4-2-2015 3-18-26 PM.jpg

 

My system is all set now; I take a snapshot of it and create a template for future deployments:

4-2-2015 3-38-13 PM.jpg

 

The VM is now ready to have Hana installed on it, but first I need to set up the volumes used for the installation.

I will use my FreeNAS VM appliance to provide the volumes, with the following rules:

/hana/log FS = 0.5 x RAM (shared)

/hana/data FS = 1 x RAM (shared)

/hana/shared FS = 1 x RAM (shared)

/usr/sap FS = 20 GB (local drive)
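As an illustration, the corresponding NFS entries in /etc/fstab could look like this (a sketch; the NAS host name and export paths are assumptions from my lab):

freenas01:/mnt/vol1/hana_data    /hana/data    nfs  defaults  0 0
freenas01:/mnt/vol1/hana_log     /hana/log     nfs  defaults  0 0
freenas01:/mnt/vol1/hana_shared  /hana/shared  nfs  defaults  0 0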

 

 

With the drives ready, I now add them to my server

5-6-2015 3-32-51 PM.jpg

 

Hana is ready now to be installed.

Configure and setup for scenario 1 – Load Balancing

2.jpg

 

Let’s start the first scenario by installing Hana single node. Before running the installer, I run the HanaHwCheck script to see if my server components are supported for a Hana installation.

4-3-2015 9-39-49 AM.jpg

 

Of course it failed!!! But it’s the first thing that needs to be run before doing an install. In order to bypass this check during the install, I will set the following variable: IDSPISPOPD="1".

 

Once set, do a quick check with the script again

4-3-2015 9-47-23 AM.jpg

 

Now the second script, the performance test, needs to be run to validate the TDI approach. To do this, first download the “Hardware config check” tool from the marketplace and extract it into “/hana/shared” (it is also available from the complete DVD set):

4-3-2015 10-11-35 AM.jpg

 

From the note “1943937 - Hardware Configuration Check Tool - Central Note” I adapt the .json file settings according to my environment (refer to the attached document in the note) and copy them into “/hana/shared/hwcct”. Once done, I run my performance test:

4-3-2015 11-07-11 AM.jpg

 

The output tells me what needs to be corrected; I don’t run all of the tests since this is for lab test purposes.

 

 

 

The installation of the master node is now done

4-9-2015 8-41-18 PM.jpg

 

In order to install the second node for load balancing, I’ll deploy the new server using the VM template:

4-9-2015 8-43-58 PM.jpg

 

I fast-forward since it’s pretty straightforward; I did customize the template in order to receive the new config:

4-9-2015 9-09-48 PM.jpg

 

With my new server ready, let's add it to the current landscape. But before running the script to add the new host, I first need to set the master parameter “listeninterface” to “global”.

4-9-2015 10-06-24 PM.jpg
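In global.ini this corresponds to the following section (a sketch of the relevant lines):

[communication]
listeninterface = .global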

 

Now, on the new server, go to “/hana/shared/HB1/global/hdb/install/bin” and run the script ./hdbaddhost:

4-9-2015 10-09-17 PM.jpg

 

The new host appears

4-9-2015 10-14-14 PM.jpg

 

Pay attention to the fact that the system is now distributed. Let's go to the next scenario.

4-9-2015 10-12-52 PM.jpg

Configure and setup for scenario 2 – High Availability (basic scenario)

3.jpg

 

Since I already explained how to install a new VM from a template, I’ll skip this part in the documentation and focus more on the standby installation and failover test. Quick summary of activities not documented in this section:

  • Deployment of vm from template
  • Adjustment of parameters
  • Filesystem mount for new system

 

Proceed the same way to add a new host as previously, but choose option "2 - Standby".

Now that the standby node has been added, let’s make a failover test to see how Hana reacts. To force the takeover by host03, I’ll perform an HDB stop on node 2 (vmhana02).

4-10-2015 4-36-02 PM.jpg


Stopped at 4:36 pm, failover finished at 4:38, and we can see that node #3 became slave and is not standby anymore

4-10-2015 4-38-32 PM.jpg

 

Since there is no automatic failback, once node #2 is back it becomes the standby node

4-10-2015 4-42-08 PM.jpg


For the next scenario, check the second part of the document below.

SAP Hana TDI setup - VMware 5.1 (part 2)

SAP Hana TDI setup - VMware 5.1 (part 2)


Hello, this is the second part of my document SAP Hana TDI setup - VMware 5.1; I'll explain how to configure HSR for Hana.

You can check the first part of the document for the previous deployment scenarios via the link below

SAP Hana TDI setup - VMware 5.1 (part 1)

 

 

 

Configure and setup for scenario 3 – Disaster Recovery (HSR)

4.jpg


An important point when doing an HSR configuration, before setting things up, is that we must have the same topology on both sites; the standby node is optional.


Quick sum of activities not documented in this section:

  • Deployment of vm from template
  • Adjustment of parameters
  • File system mount for new system
  • Secondary Hana installation

From a technical standpoint, several network and server considerations need to be taken care of to make this happen. To realize this configuration, the following steps need to be performed:

  • Have both Hana system deploy and up and running
  • HSR configured between Hana systems
  • Take over test
  • Configure SLES11 SP3 Cluster
  • Set the SAP Hana integration

 

HSR configuration

 

Since my 2 Hana landscapes are up and running on my 2 sites, I can start the setup of the replication process.

 

On the primary site enable the replication

4-21-2015 4-25-18 PM.jpg

4-21-2015 4-26-57 PM.jpg

4-21-2015 4-36-13 PM.jpg

4-21-2015 4-38-25 PM.jpg


Now I stop the secondary site in order to register it

4-21-2015 4-43-08 PM.jpg

4-21-2015 4-44-00 PM.jpg


Note: do not provide the FQDN

4-21-2015 6-49-07 PM.jpg

 

While Site2 is starting check out Site1

4-21-2015 4-57-03 PM.jpg


After a minute the replication is “Active”

4-21-2015 5-02-07 PM.jpg

 

And the secondary system becomes unavailable for connections, with all services up and running

4-21-2015 5-02-54 PM.jpg

 

 

HSR Takeover testing

 

Now that my systems are set up for replication, I will create a user and some packages on Site1 and validate, after taking over to Site2, that all my changes have been taken into account.

4-21-2015 5-08-45 PM.jpg4-21-2015 5-12-50 PM.jpg

 

In order to perform the takeover, proceed as follows from Site2

4-21-2015 5-21-16 PM.jpg

 

Disable the replication from the former primary site

4-21-2015 5-34-33 PM.jpg4-21-2015 5-34-52 PM.jpg

 

Then stop it and register it as the secondary

4-21-2015 5-39-34 PM.jpg

4-21-2015 6-08-23 PM.jpg

 

Once done, we can see that Site2 became the primary and Site1 the secondary; I can also see that the user and packages created earlier are on the second host

4-21-2015 7-19-41 PM.jpg

 

All the actions performed above through the studio can also be done with the command line tool hdbnsutil.
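For reference, the equivalent commands look roughly like this (a sketch; site names, hosts, and instance number are placeholders, mirroring the options used earlier in this series):

hdbnsutil -sr_enable --name=SITE1                                                            (on the primary)
hdbnsutil -sr_register --remoteHost=hana01 --remoteInstance=00 --mode=syncmem --name=SITE2   (on the secondary)
hdbnsutil -sr_takeover                                                                       (on the secondary)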

 

In the next part of my documentation I’ll explain how to configure SLES in order to run Hana with it.

 

 

 

SLES and Hana setup for DR

 

In this section I’ll explain how to control the failover process automatically with Hana on a SLES cluster. To make this happen, I first configure SLES as a cluster with the two servers (hana01 and hana02).

 

On the 2 servers, install the “ha_sles” package

4-22-2015 5-16-50 PM.jpg

 

Once installed, run the initial cluster config on the first node by using “sleha-init” command

4-24-2015 3-44-48 PM.jpg

4-24-2015 3-47-28 PM.jpg

 

Once done, go to the secondary node and register it into the cluster by running “sleha-join”

4-24-2015 4-10-05 PM.jpg

 

Then check the HAWK interface at the address provided during the first node install; I can see my 2 servers clustered now

5-1-2015 7-46-36 PM.jpg

 

In order to embed Hana in my Linux cluster, I installed the “SAPHanaSR” package at the beginning; I’ll now use HAWK to set it up.

4-24-2015 4-55-16 PM.jpg

4-24-2015 4-57-36 PM.jpg

 

Once the wizard is done, I check the status (the red errors are because at this moment I had not yet installed the host agent)

4-24-2015 10-36-26 PM.jpg

 

Once the problem is fixed, I run a test against the virtual IP defined earlier for the cluster, “192.168.0.145”, and see which node I’m on

4-24-2015 10-22-15 PM.jpg

I'm on the first node

4-24-2015 10-25-13 PM.jpg

 

The configuration is completed ...

 

In my next blog I'll focus on the testing scenarios with SLES HA/DR.

 

Williams

SAP Hana SP10 Installation on Red Hat 6.6


In my documentation I’ll explain how to install SAP Hana SP10 on a Linux Red Hat 6.6 system in my test environment.

 

I will show in detail the steps and configuration points to achieve it.

 

For my setup I’ll use my own lab on VMware vSphere 5.1.0 and run Hana revision 101; I’ll reuse the existing environment set up in my previous documentation.

 

Order of execution

  • Download Red Hat 6.6 release
  • Install the minimal RHEL
  • Configure RHEL
  • SAP Hana installation

 

Guide used

 

Red Hat Enterprise Linux (RHEL) 6.x Configuration Guide for SAP HANA

SAP HANA Master Guide

 

Note used

 

SAP Note 171356 - SAP Software on Linux: General information

SAP Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System

SAP Note 1496410 - Red Hat Enterprise Linux 6.x: Installation and Upgrade

SAP Note 2136965 - SAP HANA DB: Recommended OS settings for RHEL 6.6

SAP Note 2001528 - Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

 

Link used

 

Red Hat Enterprise Linux for SAP HANA: system updates and supportability

Help SAP Hana

RedHat access documentation

 

Overview Architecture

56.jpg

 

In my previous documentation, “SAP Hana TDI setup - VMware 5.1”, I explained and set up different Hana deployment scenarios; I’ll use the same procedure to create my VM and template for later reuse.

 

 

Download Red Hat 6.6 release

 

To be able to download the Red Hat DVDs, you must first register

1.jpg

 

Once done, validate your account and download the necessary media

2.jpg

 

Once downloaded, I store it in my datastore

3.jpg

 

 

Install the minimal RHEL

 

With my VM and DVD ready, I start installing the base Red Hat system

4.jpg

 

5.jpg

 

6.jpg

 

7.jpg

 

8.jpg

 

Make sure the automatic connection to the network card is enabled

9.jpg

 

Set the time zone and system clock according to your location

11.jpg

 

12.jpg

 

I choose the first option since my system is just about to be created

13.jpg

 

14.jpg

 

Choose minimal option

15.jpg

 

Installation in progress

16.jpg

 

Installation completed

17.jpg

 

 

Configure RHEL

 

The base installation is now complete; the system next needs to be made compliant to host SAP Hana. The configuration consists of the following activities:

 

•    Subscribe your system to Red Hat channels

•    Install the base package group and xfs tools

•    Create the /usr/sap/ storage for SAP HANA.

•    Mount the file system for SAP HANA instance

•    Install dependencies package for Hana

•    Install the SAP Java Virtual Machine or IcedTea

•    Disable SELinux in /etc/sysconfig/selinux

•    Install and configure the package tuned-profiles-sap-hana

•    Configure the profile for vmware

•    Set the parameters in /etc/sysctl.conf

•    Set the symbolic links

•    Add the kernel command-line argument to disable transparent huge pages

•    Omit the application crash and core file handling of the operating system

 

 

 

 

Subscribe your system to Red Hat channels

 

To be able to download patches, packages and so on, the system must be registered against Red Hat with a subscription. To get the SAP Hana specific packages for Red Hat, however, you also need to be registered in the “Partner center” and either join as an existing partner company or apply for partnership

25.jpg

 

Once the registration is fully completed and approved (it takes a few days), run the subscription manager on your system

24.jpg

 

Then list all the products available to you; “Red Hat Enterprise Linux for SAP Hana” must be listed

26.jpg

27.jpg

 

Attach the subscription to your pool ID (this information is system dependent)

28.jpg

29.jpg

 

Set the subscription release

30.jpg

 

Disable all existing repositories

31.jpg

 

And finally enable the “Hana” repositories

32.jpg
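
Putting the whole subscription flow together on the command line, a sketch; the pool ID is system dependent and the exact repo IDs are an assumption from my system, so verify them with “subscription-manager repos --list”.

subscription-manager register --username=<rhn-user>
subscription-manager list --available          # "Red Hat Enterprise Linux for SAP Hana" must show up
subscription-manager attach --pool=<pool-id>
subscription-manager release --set=6.6         # pin the minor release
yum-config-manager --disable "*"               # disable all existing repositories
subscription-manager repos --enable=rhel-6-server-rpms \
                            --enable=rhel-sap-hana-for-rhel-6-server-rpms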

 

 

Install base package group and xfs tools

 

33.jpg

34.jpg
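
In short, assuming the default group name:

yum groupinstall base
yum install xfsprogs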

 

Create /usr/sap storage for SAP Hana

 

From my ESXi host I added another volume to my Red Hat server in order to create the “/usr/sap/” file system.

I now check with the “lsblk -f” command that my new volume is there

36.jpg

 

Let’s create the physical volume with the “pvcreate /dev/sdb” command

37.jpg

 

Now create the new volume group with “vgcreate new_volume /dev/sdb” and run “vgs” to check

38.jpg

39.jpg

 

With the VG available with 50 GB of space, I now create my logical volume with only 40 GB in order to keep some room on the disk

40.jpg

 

And finally I create the file system and mount it

41.jpg

 

Result

42.jpg
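
The complete sequence as a sketch; the logical volume name “usrsap” is my own choice.

pvcreate /dev/sdb                       # physical volume
vgcreate new_volume /dev/sdb            # volume group (50 GB)
vgs                                     # verify
lvcreate -n usrsap -L 40G new_volume    # logical volume, keeping some room
mkfs.xfs /dev/new_volume/usrsap         # XFS file system
mkdir -p /usr/sap
mount /dev/new_volume/usrsap /usr/sap
df -h /usr/sap                          # verify the result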

 

 

Mount the file system for SAP HANA instance

 

The next phase is to add the NFS mount points from my NAS server to Red Hat for Hana by editing the fstab

Do not forget to install the NFS packages with “yum groupinstall "Network file system client"”

43.jpg

 

Once done, run “mount -a” and check

44.jpg
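
The fstab entries look roughly as below; the NAS hostname and export paths are assumptions from my lab, so replace them with your own.

# /etc/fstab
<nas-host>:/mnt/hana/data     /hana/data    nfs   defaults   0 0
<nas-host>:/mnt/hana/log      /hana/log     nfs   defaults   0 0
<nas-host>:/mnt/hana/shared   /hana/shared  nfs   defaults   0 0

# then mount everything and check
mount -a
df -h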

 

 

Install dependencies package for Hana

 

As in a SLES environment, RHEL needs specific package dependencies in order to deploy Hana.

45.jpg
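
An indicative yum call is below; take the authoritative package list from SAP Note 2136965, as it may change between revisions.

yum install gtk2 libicu xulrunner sudo tcsh libssh2 expect cairo graphviz \
            krb5-workstation krb5-libs libpng12 nfs-utils lm_sensors rsyslog \
            openssl PackageKit-gtk-module libcanberra-gtk2 libtool-ltdl \
            xorg-x11-xauth numactl compat-sap-c++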

 

Install the SAP Java Virtual Machine or IcedTea

46.jpg

 

Disable SELinux in /etc/sysconfig/selinux

47.jpg
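
The change itself is a single line (a reboot is needed for it to take full effect):

# /etc/sysconfig/selinux
SELINUX=disabled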

 

Install and configure the package tuned-profiles-sap-hana from the RHEL for SAP HANA channel to minimize latencies

48.jpg

 

For Hana running on VMware

49.jpg
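
A sketch of the tuned setup; on RHEL 6 the package ships a “sap-hana” profile plus a “sap-hana-vmware” variant for virtualized systems.

yum install tuned-profiles-sap-hana
tuned-adm profile sap-hana-vmware    # use "sap-hana" on bare metal
chkconfig tuned on
service tuned start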

 

Set the parameters in /etc/sysctl.conf

50.jpg
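
As an illustration only; the values below are common SAP-on-RHEL recommendations, but the authoritative set for RHEL 6.6 is in SAP Note 2136965, so verify before applying.

# /etc/sysctl.conf (excerpt)
kernel.sem = 1250 256000 100 8192
net.ipv4.tcp_slow_start_after_idle = 0

# reload the settings
sysctl -p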

 

Set the symbolic links for compatibility reasons

51.jpg
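
Hana expects the 1.0.1 library names, hence the links; the “e” suffix matches the OpenSSL build shipped with my RHEL 6.6 and may differ on your system.

ln -s /usr/lib64/libssl.so.1.0.1e    /usr/lib64/libssl.so.1.0.1
ln -s /usr/lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so.1.0.1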

 

Disable transparent huge pages via a kernel command-line argument by editing the /boot/grub/grub.conf file

52.jpg
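
The change amounts to appending one argument to the kernel line; the rest of the line below is an assumption, keep whatever your grub.conf already has.

# /boot/grub/grub.conf -- append to the existing kernel line
kernel /vmlinuz-<version> ro root=<root-device> ... transparent_hugepage=never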

 

Omit the application crash and core file handling of the operating system

53.jpg
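
A sketch of both pieces, per the RHEL configuration guide: core dumps are disabled via limits.conf and the ABRT crash daemons are switched off.

# /etc/security/limits.conf
* soft core 0
* hard core 0

# disable the ABRT services
chkconfig abrtd off
chkconfig abrt-ccpp off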

 

This completes the system preparation for installing SAP Hana; I can proceed to the next section

 

 

 

SAP Hana installation

 

We are now ready to install SAP Hana, and it’s just like running it on SLES, so no surprises.

I’ll use hdblcmgui to run the install; note that a specific string is required, as you can see in the red square below.

57.jpg

58.jpg

 

That’s it. Red Hat is very specific regarding packages, so do not miss any step.

 

Williams

Hana security (part1) : Authentication model Kerberos/SPNEGO


In my security document part 1, I’ll explain how to configure the following SAP Hana authentication methods:

  • Hana SSO with Kerberos authentication
  • Single Sign-on with SPNEGO

 

My configuration will be based on a single container database architecture with my internal network.

 

For my setup I’ll use my lab environment based on VMware, Microsoft Server 2008 R2, SAP Hana Rev 101 on SLES 11 SP3 and Windows 7 Enterprise.

 

 

Disclaimer: For my deployment I’ll issue local certificate with no outside exposure.

 

Order of execution

 

  • Register SLES server into DNS
  • Step-by-step check on Hana server
  • Create Hana Database service user in AD
  • Register service Principal Name (SPN) in AD
  • Configure Kerberos from Hana studio
  • Configure SPNEGO for XS application

 

 

Note used

 

1837331 - HOWTO HANA DB SSO Kerberos/ Active Directory

1813724 - HANA SSO/Kerberos: create keytab and validate conf

1900023 - How to setup SAML SSO to HANA from BI

 

 

Link used

 

Help SAP Hana Platform Core - Security

 

Overview Architecture

archi.jpg

 

The following architecture is based on virtual servers; below are the details of my deployment:

 

•    Domain: will.lab

•    Active Directory and NTP server: activedir.will.lab / 192.168.0.109

•    Hana server: vmhana01.will.lab / 192.168.0.116

•    Desktop client: desk-client.will.lab / 192.168.0.137

 

 

 

Register SLES server in DNS

 

Registering the Linux server into my DNS will make managing the entire landscape easier, but for the registration to succeed several prerequisites need to be respected:

•    The network card needs to be set up according to the DNS entry

•    The Linux server must have the same time and time zone as the DNS server

 

The necessary configuration can be done with the tools “yast” or “yast2”, or on the command line; I will use Yast2 here

 

Specify the DNS server ip in “Name Servers” and the domain in “Domain search”

1.jpg

 

My ntp server is my Active Directory server

2.jpg

 

Once done, from Network Services, select “Windows Domain Membership”

3.jpg

 

As you can see, I did not put “will.lab” but only “will”; then choose the options you want to propagate and hit OK to validate.

4.jpg

 

When prompted, the administrator password is needed to join the domain; enter it and hit “Obtain list” to make sure the password is OK

5.jpg

 

We are now part of the domain

6.jpg

 

To validate, I’ll run two tests. First, an nslookup from the Linux server against my client desktop

7.jpg

 

The second is from my client desktop: I SSH to my Linux server and connect to it with my AD user

8.jpg

 

 

Step-by-step check on Hana server

 

Registering the Linux server into my AD was the first step; now I need to perform a series of internal checks to guarantee a successful configuration.

It consists of:

  • Hostname resolution check
  • Hana database server krb5.conf file check

 

Run the following command and make sure you have the exact result as below with your information

9.jpg

 

And also check the reverse lookup

10.jpg
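
Concretely, with my lab values, the checks are:

hostname -f                    # must return vmhana01.will.lab
nslookup vmhana01.will.lab     # forward lookup -> 192.168.0.116
nslookup 192.168.0.116         # reverse lookup -> vmhana01.will.lab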

 

The registration in AD changed the entries in the krb5.conf file; add the following lines

11.jpg
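
For reference, a minimal /etc/krb5.conf sketch matching my lab (the realm and KDC names are the ones defined earlier; your entries may differ):

[libdefaults]
    default_realm = WILL.LAB

[realms]
    WILL.LAB = {
        kdc = activedir.will.lab
        admin_server = activedir.will.lab
    }

[domain_realm]
    .will.lab = WILL.LAB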

 

And run two tests with the “kinit” and “klist” tools

12.jpg

 

 

Create Hana database service user in Active Directory

 

With the checks returning the expected values, I now need to create a service user in Active Directory to represent the Hana database, which will be mapped to an SPN.

I’ll create this service user in a dedicated OU in order to grant the necessary administrative privileges

13.jpg

 

I used the command line, but it could also be created from tool in AD :

dsadd user "cn=vmhana01,OU=hana,DC=will,DC=lab" -pwd <password> -disabled no -pwdneverexpires yes -acctexpires never

14.jpg

 

Now make a connection test from Hana server by using “kinit” tool

15.jpg

 

 

Register Service Principal Name (SPN) in AD

 

In this step I will map a service name to the service user created earlier; in the case of Hana, the following format must be respected:

hdb/<DB server>@<domain>

 

In elevated mode, run the following ktpass command:

ktpass -princ hdb/vmhana01.will.lab@WILL.LAB -mapuser WILL\vmhana01 -ptype KRB5_NT_PRINCIPAL -pass <password> -crypto All -out c:\krb5.keytab

16.jpg

Take note of the kvno number; in my case it is “3”

 

 

Configure Kerberos for Hana studio

 

The ktpass command above created a keytab on my Windows server; I’ll copy it onto my Hana server under /etc and check the content

17.jpg

 

And verify the consistency of the keytab in order to validate the “kvno”

18.jpg
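
Two quick checks on the Hana server, assuming the keytab was copied to /etc/krb5.keytab:

klist -k /etc/krb5.keytab              # list the principals stored in the keytab
kvno hdb/vmhana01.will.lab@WILL.LAB    # must report the kvno from ktpass (3 in my case)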

 

With the kvno valid, I create a test user in Active Directory and try to log in with it in Hana studio.

I used the command line, but you can also create it with the graphical tool

18.2.jpg

 

With the test user created, I go into the studio as “SYSTEM” (or a user with administrator privileges) to map my test user account for Kerberos

19.jpg
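
The mapping can also be done in SQL; the user name below is a placeholder for my AD test account:

-- new user mapped to an external Kerberos identity
CREATE USER TESTUSER WITH IDENTITY 'testuser@WILL.LAB' FOR KERBEROS;

-- or add the mapping to an existing user
ALTER USER TESTUSER ADD IDENTITY 'testuser@WILL.LAB' FOR KERBEROS;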

 

Once done, I log on with my test account on my client desktop and add the Hana entry with the “Authentication by current operating system user” option

20.jpg

 

And I’m in without password

21.jpg

 

 

Configure SPNEGO for XS application

 

To configure SPNEGO for XS applications, a new SPN needs to be created. Earlier I used the format “hdb/<DB server>@<domain>” for the studio connection, but for HTTP connections the following format must be used:

HTTP/<DB server>@<domain>

 

I’ll map it to the same service user created earlier, “vmhana01”

22.jpg
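
The ktpass call mirrors the one used for the hdb SPN; only the principal changes (the output file name is my own choice):

ktpass -princ HTTP/vmhana01.will.lab@WILL.LAB -mapuser WILL\vmhana01 -ptype KRB5_NT_PRINCIPAL -pass <password> -crypto All -out c:\krb5_http.keytab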

 

And rerun the checks to ensure the correct version of the kvno number

23.jpg

 

Since I’m not a developer, I created a test app following section 2.5 of the developer guide tutorial to test the authentication mechanism

 

Here is the .xsjs application created; it displays the hello text after logging in

24.1.jpg
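
For the record, a minimal .xsjs along those lines (the file name and greeting text are my own):

// hello.xsjs -- returns a greeting for the logged-on database user
$.response.contentType = "text/plain";
$.response.setBody("Hello, " + $.session.getUsername());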

 

As we can see the password is required

24.jpg

 

To change the authentication behavior, from XS Admin Tool select the created project app and hit “Edit”

25.jpg

 

Specify SPNEGO and save

Note: I specified SPNEGO for the whole package, but it can also be set with more granularity

26.jpg

 

And try again and it works

27.jpg

 

Note: I’m using Mozilla Firefox, so I use the specific add-on for NTLM Integrated Authentication

 

The configuration of my Kerberos/SSO setup is now complete

 

Williams

Lessons Learnt: Bottom up TCO Analysis for HANA Platform


This document is a summary of lessons learnt from real customer experience in HANA sales cycles during Q2 and Q3 of 2015. Determining value of any investment is of paramount importance to any IT and business team and there are several applicable ways of getting to the dollar impact. Two such methods are briefly described initially with the value points from a HANA platform based analysis shared in detail for the bottom up TCO analysis.

Two ways to analyze value and benefits of an IT investment

    1. Top down cost avoidance calculation – This is one of the most popular ways of assessing the business value leveraged by SAP’s IVE team regularly. As part of their survey with the customer, we determine their short and long-term initiatives for introducing new technical solutions (especially non-SAP solutions) to the business including the estimated costs and time associated with them. Due to the simplification and platform capabilities made possible with HANA platform, we are able to either eliminate or simplify these implementations compared to the customer’s current approach. As an example, one of the customers I worked with in Southern California was planning to implement 22 new dashboards for Supply Chain performance monitoring using non-SAP technology. Their SI had provided an estimate of USD 500,000 per dashboard for a total of USD 11 million cost + 3 million for analytics software purchase that was positioned to be specialized for supply chain performance monitoring. With S/4HANA based Supply Chain control tower and Integrated Business Planning for Supply Chain and other standard Supply Chain related Fiori applications, we were able to reduce the cost of this business use case substantially and absorb the general cost of commissioning S/4HANA in their environment as part of a larger project. Working through the list of priorities and projects planned for the next several years and mapping them to available SAP S/4HANA or HANA platform capabilities can result in multi-million dollar cost avoidance in addition to simplification and access to the latest innovation from SAP. Between the two methods discussed here, the top down method typically uncovers bigger stones where hidden dollar savings might be found when positioning HANA but these are estimates that may vary.
    2. Bottom up TCO calculation – The second and the other preferred method for our customer’s IT organizations is the bottom up TCO calculation, where we prepare a summary of their current spend on IT infrastructure, resources, maintenance, support, change management and other operational aspects and compare those with their equivalents in a post-HANA world. The savings derived by this method are very close to the real ones as they are based on formulas or proposals; however, these numbers may not be as large as those derived from the top down calculation. Both top down and bottom up approaches should be considered for a holistic analysis. Bottom up TCO analysis also requires knowledge of the customer’s architecture, and it has been observed that Pre-Sales working in conjunction with IVE is the best combination to perform this analysis.

  Typical considerations in a bottom up analysis

Following is a framework to collect inputs from the customer about their current spend on their environment. A detailed Excel template is available on demand.

  1. Hardware Maintenance & Storage Costs
    1. Infrastructure Hardware - SAP applications
    2. Infrastructure Hardware – Others
    3. SAP Infrastructure Storage Management
    4. Current Data Warehouse Environment
  2. Hardware Acquisition Costs
    1. Infrastructure Hardware
  3. Software Related Costs
    1. Infrastructure Software
  4. Resource Costs
    1. Cost of creating and maintaining customizations related to SAP Data
  5. Migration Conversion Costs
    1. Total Migration/Conversion Costs

Observed Challenges and pain points in a typical OLTP and OLAP landscape

To baseline the customer’s current environment, architecture and the challenges posed by them, the following suggested points can be leveraged from the framework we developed for the customer engagements where such analyses were performed. This is an indicative list and there could be additional or different challenges that your customer might be facing which could form part of your analysis. A visual summary of these challenges is presented below and described subsequently:

 

 

  1. Limited Operational Reporting with ECC on traditional DB due to resource, performance and tuning considerations for an OLTP environment on a traditional DB
  2. High Data redundancy with multiple copies of SAP data in the landscape like data marts, data warehouses and copies of data within these due to their architecture of persistence, aggregation, indexing etc. Adding DEV, QAS and PROD environments for each of these parallel environments quickly creates an unmanageable challenge
  3. Data Governance, Quality and security challenges within data marts / copies of SAP data
  4. High TCO with shadow IT costs to maintain numerous data silos
  5. Bloated Data Warehouse footprint with multiple implicit copies of data due to internal staging, indexing, aggregates, data stores etc. that a traditional DW architecture would force them into
  6. Unsustainable performance workarounds like aggregates and indices in a traditional DW that not only add to the data footprint but do not provide a cost effective scalable model
  7. High data latency with 24 – 48 hrs. delay in data availability for reporting within the traditional DW
    1. No real time business analytics as a result and delayed data results in loss of business context under which the queries were raised in the first place
  8. Reporting tool proliferation within Business users for self-service analytics
    1. Inconsistent User Experience across multiple reporting tools from different vendors
    2. Implicit need for IT to support such business acquired tools outside their regular support plans and skills
    3. Variety of security models and further data silos created by each such tool
  9. Costly and time consuming end to end change management processes due to the multi-layered architecture
  10. Limited Change Agility due to complexity of the architecture which prevents IT from delivering changes and new content to the business in a timely manner


How does HANA platform provide value?


HANA not only provides a remedy for the above-mentioned challenges but also tremendous value to the business and IT organizations within any enterprise. These benefits are visualized in contrast to the previous illustration as follows:

 

 

Future State Architecture Benefits – Immediate Business and IT value

  1. Massive reduction of the current data footprint through
    1. Single copy of SAP ECC data across business processes and teams
    2. Data provisioning for business functions through pre-built virtualized data models (non-materialized)
  2. Simplification of the landscape by
    1. Data marts elimination and replacement with Virtual data models and views on a single copy of persisted data
    2. Large reduction of DW data volume or eventual elimination of the traditional DW based on the customer’s roadmap
  3. Easier access to SAP data for reporting with
    1. Business user friendly semantics of the model
    2. Direct operational reporting from SAP ECC Vs BW or DW
  4. Real time / Near Real time analytics on all data with
    1. Combined transactions (OLTP) and Analytics (OLAP) in a single platform on a single copy of data
    2. Analytics Simulation (What if analysis) in real time for better decision making
    3. Supply Chain visibility analytics in real time on SAP ECC with HANA enterprise as an example
    4. Category spend optimization from Ariba (Supplier spend), Concur (Travel Expense), Fieldglass (Contract labor) and ECC (Direct and Indirect procurement) with HANA enterprise depending on the customer’s preferred solutions for these scenarios
  5. Access to granular data with
    1. Line item level detailed analysis is enabled in real time
    2. Elimination of aggregates and pre-calculated totals as in a DW
  6. Improved change agility with
    1. Easier end to end change management process, fewer layers to change
  7. Compelling and consistent User Experience
    1. Any user device enabled, browser based access with beautiful and intuitive UI
    2. Elimination of the need to deploy new reporting and transactional tools
  8. Lower TCO with
    1. Simplified landscape and data footprint which results in smaller backups, faster recoveries and lower investment in infrastructure redundancy related to them
    2. Better utilization of current H/W resources especially storage
    3. Better utilization of IT and Business human resources through elimination of Shadow IT organizations
  9. Robust Data Governance, Quality and Security with enterprise grade best practices within the platform

 

End State Architecture Benefits – Full Business and IT value of SAP HANA

    1. All of the previously mentioned SAP HANA platform benefits plus
    2. Further simplification of the customer’s landscape through SAP S/4HANA
      1. Elimination of SAP BW with BCS capabilities in Integrated Planning with S/4HANA capabilities and Integrated Business Planning for Finance and Supply chain (some of it is planned functionality as of Q3 2015)
      2. Increased utilization of HANA enterprise for non-SAP data (for e.g. in DW currently) for a single source of the truth across SAP and non-SAP data sources
    3. New Business Processes introduction by leveraging HANA for IoT edition on the same platform
    4. Advanced predictive analytics processing of Clinical trial data in real / near real time in conjunction with ERP inventory data

 

The above are a few indicative examples to help move the analysis in the right direction; the lists are a mix of well-known and less well-known points that can be leveraged as part of the framework.

 

Thanks for reading!


Sudhendu

SAP HANA in Data Centers (SPS10+)


This document provides an overview of all features that enable enterprise readiness for SAP HANA as of SPS10

View this Document

SAP HANA Authorisation Troubleshooting


Every now and again I receive questions regarding SAP HANA authorisation issues. I thought it might be useful to create a troubleshooting walkthrough.

 

This document deals with issues regarding analytic privileges in SAP HANA Studio

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in a HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW"

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

So this is the perfect opportunity to run an authorisation trace through the means of the SQL console on HANA Studio.

So if you follow the below instructions it will walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in the HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now that you have turned the trace on, reproduced the issue and turned off the trace, you should now see a new indexserver0000000trc file created in the Diagnosis Files Tab in HANA Studio

Capture.PNG

 

So once you open the trace file, scroll to the end and you should see something similar to this:

e cePlanExec       cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext     TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine       cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor         PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor         PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor         PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor         PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor         PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor         PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor         PlanExecutor.cpp(00690) : sizes a 0
e Executor         PlanExecutor.cpp(00690) : -- end executor returns
e Executor         PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456)

 

So by looking at the lines:

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is located on object 78787.

 

So now we have to find out who owns the Object 78787. We can find out this information by querying the following:

 

select * from objects where object_oid = '<oid>';

select * from objects where object_oid = '78787';

 

Once you have found out the owner of this object, you can ask the owner to grant the TABLEAU user the necessary privileges to query the object.
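
If the view is protected by an activated (repository) analytic privilege, the grant is typically done through the _SYS_REPO procedure below; the privilege name here is a placeholder for the actual analytic privilege protecting the view.

-- executed by a user allowed to grant the activated privilege
CALL _SYS_REPO.GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE('"onep.MasterData.qn/AP_NAME"', 'TABLEAU');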

 

 

Another option available for analyzing privilege issues was introduced as of SPS9. This comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael


[SAP HANA Academy] Live4 ERP Agility: SDI DP Server


Continuing with the Smart Data Integration part of the SAP HANA Academy’s Live4 ERP Agile Solutions in SAP HANA Cloud Platform course, Tahir Hussain Babar (Bob) shows how to turn on the data provisioning server needed to connect to a Hadoop system using SAP Smart Data Integration. Check out Bob’s tutorial video below.

Screen Shot 2015-10-13 at 3.28.36 PM.png

(0:40 – 2:55) How to Start the DP Server

 

First, on the machine that contains Eclipse, open Eclipse and click on the Administration Console button while your system user is selected. Go to the Configuration tab and expand the daemon.ini file. Then expand the dpserver section, click on instances, and type 1 in the new value text box at both the System and the Host levels before clicking save.

Screen Shot 2015-10-13 at 3.46.29 PM.png
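
The same change can also be made from the SQL console; this is a sketch of the roughly equivalent statement:

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;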

Now after hitting refresh in the Landscape tab it will display a yellow triangle to signify that the DP Server is starting up.

Screen Shot 2015-10-13 at 3.47.32 PM.png

(2:55 – 4:50) How to add the Required Roles for the Live4 User

 

You will also need to set certain authorizations in order for the Live4 user to carry out Smart Data Integration tasks.

 

First go to your system user and expand the Security folder. Then expand the user folder and select the Live4 user. This is the user that a developer will use to do any and all of the work in Eclipse and/or the WebIDE throughout the course.

 

In the Live4 user navigate to the System Privileges tab and click on the green plus sign to add a trio of privileges. Essentially we will need to utilize an agent, install a Hadoop adapter and create a virtual table. So select ADAPTER ADMIN, AGENT ADMIN and CREATE REMOTE SOURCE and then click ok.

Screen Shot 2015-10-13 at 4.01.50 PM.png

Then after clicking the green execute button those three new system privileges will be granted to the Live4 user.
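
Equivalently, the three grants can be issued from the SQL console:

GRANT ADAPTER ADMIN TO LIVE4;
GRANT AGENT ADMIN TO LIVE4;
GRANT CREATE REMOTE SOURCE TO LIVE4;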


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

SAP Hana TDI setup - VMware 5.1 (part 1)


In this documentation I will explain how to set up and configure a Hana TDI test environment. I will show in detail the steps and configuration points needed to achieve this configuration.


For my Hana TDI test case I’ll use my own lab on VMware vSphere 5.1.0 and run Hana revision 91 with the following deployment scenarios:

  • 1 master + 1 worker (load balancing)
  • 1 master + 1 worker + 1 standby (HA)
  • 1 Hana on Primary site + 1 Hana on Secondary site (DR replication)

 

Disclaimer: this is personal documentation for test purposes; I will deliberately bypass the mandatory hardware check and HW configuration check

 

Order of execution

  • Configure Vsphere for the relevant scenario
  • Configure and setup scenario 1 - load balancing
  • Configure and setup scenario 2 - High Availability
  • Configure and setup scenario 3 - DR replication

 

Guides used:

SAP Hana Administration Guide SP9

SAP HANA virtualization and TDI_V2

SUSE Linux Enterprise High Availability Extension (SLEHA)

 

Notes used:

1944799 - SAP HANA Guidelines for SLES Operating System Installation

2070786 - SAP systems on VMware vSphere: minimum SAP and OS requirements

1788665 - SAP HANA Support for Virtualized Environments

1943937 - Hardware Configuration Check Tool - Central Note

1969700 - SQL statement collection for SAP HANA

 

Links used:

Novell SAPHanaSR 10162 - SAPHanaSR provides resource agents to control the SAP HANA database in high availability environments

Configure Vsphere for the relevant scenario

1.jpg

 

As shown in the diagram above, my ESXi server is configured with 4 internal disks, each for a specific purpose. I have installed and configured a virtual NAS appliance (FreeNAS) in order to provide NFS shares and mount points.

 

I will explain how to create a new SLES VM from a template for Hana, so it will be easier to have a reference image ready to deploy a new server

 

SLES installation

  • 4 cores
  • 32 GB of Ram
  • 30 GB of local disk for OS
  • 2 NIC card

 

I obviously did not detail how to create a new VM within vSphere since it's very straightforward, but here is the new SLES server once ready

3-27-2015 6-36-16 PM.jpg

 

I start some post-installation work in order to have the VMware tools installed and make the system Hana SP9 compliant

4-1-2015 3-40-06 PM.jpg4-1-2015 3-56-36 PM.jpg

 

Now let’s make the system Hana SP9 ready; to do this, some additional packages need to be installed on the system according to the note “1944799 - SAP HANA Guidelines for SLES Operating System Installation”. Those packages can be installed using OS commands or with the tools “Yast”, “Yast2” or “Zypper”. In my case these two components were at too low a version; the required version is 17.2.

4-1-2015 4-47-54 PM.jpg4-1-2015 4-49-37 PM.jpg

 

Once deployed, I install the Hana-specific package (check the following link for further information on what this package is all about)

4-2-2015 3-18-26 PM.jpg

 

My system is all set now; I take a snapshot of it and create a template for future deployments

4-2-2015 3-38-13 PM.jpg

 

The VM is now ready to have Hana installed on it, but first I need to set the volume used for the installation.

I will use my FreeNas VM appliance to set my volume with the following rule:

/hana/log FS = 0.5 x RAM (shared)

/hana/data FS = 1 x RAM (shared)

/hana/shared FS = 1 x RAM (shared)

/usr/sap FS = 20 GB (local drive)

 

 

With the drives ready, I now add them to my server

5-6-2015 3-32-51 PM.jpg

 

Hana is ready now to be installed.

Configure and setup for scenario 1 – Load Balancing

2.jpg

 

Let's start the first scenario by installing Hana single node. Before running the installer, I run the HanaHwCheck script to see if my server components are supported for a Hana installation.

4-3-2015 9-39-49 AM.jpg

 

Of course it failed! But it’s the first thing that needs to be run before doing an install. To bypass this check during the install, I will set the variable IDSPISPOPD="1".

 

Once set do a quick check on the script again

4-3-2015 9-47-23 AM.jpg

 

Now the second, performance-test script needs to be run to validate the TDI approach. To do this, first download the “Hardware config check” tool from the marketplace and extract it into “/hana/shared” (it is also available from the complete DVD set)

4-3-2015 10-11-35 AM.jpg

 

From the note “1943937 - Hardware Configuration Check Tool - Central Note” I adapt the .json file settings to my environment (refer to the attached document in the note) and copy them into “/hana/shared/hwcct”. Once done, I run my performance test

4-3-2015 11-07-11 AM.jpg

 

The output tells me what needs to be corrected; I don’t run all of the tests since this is a lab for test purposes

 

 

 

The installation of the master node is now done

4-9-2015 8-41-18 PM.jpg

 

In order to install the second node for load balancing, I’ll deploy the new server using the VM template

4-9-2015 8-43-58 PM.jpg

 

I fast-forward since it’s pretty straightforward; I customized the template to receive the new config

4-9-2015 9-09-48 PM.jpg

 

With my new server ready, let's add it to the current landscape. Before running the script to add the new host, I first need to set the master parameter “listeninterface” to “global”

4-9-2015 10-06-24 PM.jpg
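
A sketch of the equivalent SQL (the inifile expects the value with a leading dot):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('communication', 'listeninterface') = '.global' WITH RECONFIGURE;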

 

Now, on the new server, go to “/hana/shared/HB1/global/hdb/install/bin” and run the script ./hdbaddhost

4-9-2015 10-09-17 PM.jpg
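
In short, run on the host being added (the tool prompts for the required passwords; the worker role matches this scenario):

cd /hana/shared/HB1/global/hdb/install/bin
./hdbaddhost --role=worker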

 

The new host appears

4-9-2015 10-14-14 PM.jpg

 

Pay attention now to the fact that the system is distributed; let's go to the next scenario.

4-9-2015 10-12-52 PM.jpg

Configure and setup for scenario 2 – High Availability (basic scenario)

3.jpg

 

Since I already explained how to install a new VM from a template, I’ll skip that part and focus on the standby installation and failover test. Quick summary of activities not documented in this section:

  • Deployment of vm from template
  • Adjustment of parameters
  • Filesystem mount for new system

 

Proceed the same way to add a new host like previously, but choose option "2 - Standby".

Now that the standby node has been added, let's make a failover test to see how Hana reacts. To force the takeover to host03, I’ll perform an HDB stop on node 2 (vmhana02)

4-10-2015 4-36-02 PM.jpg
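
The test itself, as a sketch; the instance number 00 is an assumption from my lab:

# on vmhana02, as the <sid>adm user
su - hb1adm
HDB stop

# from any remaining node: watch the host roles change
python /usr/sap/HB1/HDB00/exe/python_support/landscapeHostConfiguration.py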


Stopped at 4:36 pm; the failover finished at 4:38, and we can see that node #3 has become a slave and is no longer standby

4-10-2015 4-38-32 PM.jpg

 

Since there is no automatic failback, once node #2 is back it becomes the standby node

4-10-2015 4-42-08 PM.jpg


For the next scenario, check the second part of the document below.

SAP Hana TDI setup - VMware 5.1 (part 2)


SAP HANA SP100 SDA setup with Apache Hadoop


In my documentation I’ll explain how to set up and configure SAP Hana SP10 Smart Data Access (SDA) with Apache Hadoop. I will show in detail the steps and configuration points needed to achieve it.

 

HANA revision 100 brings a lot of new features; refer to the following link for the complete list

SAP Hana SP10 what's new

 

Order of execution

 

  • Apache Hadoop installation
  • Setup Hana to consume Hadoop data
  • Connect SAP Hana studio to Hadoop for SDA
  • Manage Hadoop Cluster with Ambari

 

Guides used

Simba ODBC Driver for Apache Hive

HDP installation Guide

HANA SDA guide


Notes used

2165826 - SAP HANA Platform SPS 10 Release Note

2177918 - SAP HANA Hadoop Ambari Cockpit SP10


Links used

http://hortonworks.com/products/releases/hdp-2-0-ga/#install

http://www.simba.com/connectors/apache-hadoop-hive-odbc

 

 

Architecture overview

7-5-2015 1-51-00 AM.jpg

Installation of Apache Hadoop

 

Apache Hadoop will be installed on our Windows environment. Before installing the package, the following software needs to be installed as prerequisites:

  • Microsoft Visual C++ 2010 Redistributable Package (64 bit)
  • Oracle JDK 7 64-bit
  • Microsoft.NET framework 4.0
  • Python 2.7

 

Once the required software is installed, download the latest version from the website (the current version is 2.3)

 

Open a command prompt and run "msiexec /lv d:\hdplog.txt /i "D:\Software\Hadoop\hdp-2.0.6.0.winpkg.msi"" to launch the installer

3-21-2014 10-42-38 AM.jpg3-21-2014 10-43-31 AM.jpg

 

Choose Derby as DB flavor

3-21-2014 10-44-46 AM.jpg3-21-2014 10-59-39 AM.jpg

 

Open the command line shortcut and start the hadoop services

3-21-2014 11-00-51 AM.jpg3-21-2014 11-04-08 AM.jpg

 

All the services are running

3-21-2014 11-07-08 AM.jpg

 

Validate the installation by making a SmokeTest

3-21-2014 11-17-03 AM.jpg

 

Check the node status and the cluster status

7-4-2015 10-59-01 PM.jpg

Setup Hana to consume Hadoop Data

The Hadoop server is now up and running, but before creating a connection from Hana, two ODBC drivers need to be downloaded onto the Hana server:

the unixODBC driver and the Simba ODBC driver

 

 

The unixODBC driver can be downloaded from the following website: http://www.unixodbc.org/

3-20-2014 2-50-53 PM.jpg

 

Once both are downloaded, start by decompressing the Simba ODBC driver

3-21-2014 11-29-06 AM.jpg

 

Use the “gunzip” command to remove the “.gz”, then use “tar xvf” to unpack the tar file; the Simba folder is then extracted. Do the same thing for unixODBC

3-21-2014 11-36-57 AM.jpg

 

Move the two folders at the root level

3-21-2014 11-38-41 AM.jpg

 

As the <SID>adm user, move into the Simba setup folder and copy simba.hiveodbc.ini into the home directory; then open it with vi and change the parameters

3-21-2014 12-34-43 PM.jpg3-21-2014 12-43-58 PM.jpg

 

Now build and install the latest version of the unixODBC driver; from the unixODBC folder run the following commands:

1) ./configure

2) make

3) make install

3-21-2014 12-51-06 PM.jpg

 

Configure the classpath by creating a customer.sh file using vi (~/.customer.sh) with the following entry

3-21-2014 1-32-19 PM.jpg

 

And create an odbc.ini file using vi (~/.odbc.ini) with the following entries (an example file follows the screenshot below):

  • DSN name
  • Driver location
  • Host ip of the Hadoop server
  • Port to use for Hiveserver(default)
  • Hive server type
  • Authentication method
  • User for authentication
  • Password for user auth.

3-21-2014 2-25-31 PM.jpg
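
As an illustration, my ~/.odbc.ini looked roughly like this; the driver path, host and credentials are lab-specific assumptions:

[HIVE]
Driver=/simba/hiveodbc/lib/64/libsimbahiveodbc64.so
Host=<hadoop-host>
Port=10000
HiveServerType=1
AuthMech=2
UID=hive
PWD=<password>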

 

And link this file to the customer.sh file created before by adding the following line:
export ODBCINI=$HOME/.odbc.ini

 

Run a test connection from the Hana server to Hadoop by running: isql -v HIVE

3-21-2014 2-13-56 PM.jpg

 

The license for the Simba driver needs to be installed; once that’s done, run the test again

3-21-2014 2-18-16 PM.jpg

 

Successfully connected

3-21-2014 2-26-53 PM.jpg

 

Do a “show tables” to make sure we are on the right system

3-21-2014 2-34-57 PM.jpg

 

 

Connect SAP Hana to Hadoop for SDA

 

In Provisioning, choose to create a remote source and create a new one

3-21-2014 8-50-00 PM.jpg

 

Fill in all the required information

3-21-2014 8-52-09 PM.jpg

 

Refresh the remote source panel

3-21-2014 8-53-34 PM.jpg

 

The connection is made and we can see the tables available
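
The remote source and a virtual table can also be created in SQL; a sketch assuming the DSN “HIVE” from the odbc.ini above (the table name is a placeholder, and the database part of the four-part name may be <NULL> depending on the adapter):

CREATE REMOTE SOURCE "HADOOP_HIVE" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=<password>';

CREATE VIRTUAL TABLE "MYSCHEMA"."VT_SAMPLE"
  AT "HADOOP_HIVE"."HIVE"."default"."sample_07";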

 

 

 

Manage Hadoop cluster with Ambari

 

Starting with SP10, HANA provides a new delivery unit which allows you to manage your Hadoop cluster via Ambari

"HANAHADOOPAMBR10_0-80001012.zip"

7-5-2015 12-16-09 AM.jpg

 

Once uploaded, the new role needs to be assigned

7-5-2015 12-49-56 AM.jpg

 

and the application is available in the catalogue

7-5-2015 12-41-17 AM.jpg

 

Access it and provide the necessary information

7-5-2015 12-53-06 AM.jpg

 

and access the cockpit

7-5-2015 1-30-55 AM.jpg

 

The simple Hadoop connection over SDA is done.

 

Williams

SAP HANA Smart Data Access Setup


PURPOSE

The purpose of this document is to define clear steps for connecting to the BW on HANA DB from the HANALIVE DB, in order to use BW models in HANALIVE reports. In this scenario we have used two separate HANA DBs (one for BW and another for HANALIVE for ERP 1.0)


SCOPE

This scope applies to the Basis team who support the Smart Data Access (SDA) configuration after go-live. The procedure covers the prerequisites, installation and post-installation configuration of the complete SDA setup between the BW on HANA DB and the HANALIVE DB.


COMPONENT DETAILS

SAP BW system running on SAP NW 7.4 SP8 with HANA DB SPS8 Revision 82

HANALIVE DB is used as side car scenario with version SPS8 revision 82


WHAT ARE THE PREREQUISITES FOR SAP HANA SMART DATA ACCESS?

Software Versions

You have installed SAP HANA SP7 or higher and the remote data sources are available.

ODBC Drivers

You have installed, on each HANA node, the ODBC drivers for the databases you want to connect (see SAP Note 1868702). Note that if you installed the ODBC drivers in your HANA exe directory as per that note, they will be removed during a revision update and must be installed again afterwards.


BUSINESS CASE

SAP HANA smart data access makes it possible to connect remote data sources and to present the data contained in these data sources as if from local SAP HANA tables. This can be used, for example, in SAP Business Warehouse installations running on SAP HANA to integrate data from remote data sources.

 

In SAP HANA, virtual tables are created to represent the tables in the remote data source. Using these virtual tables, joins can be executed between tables in SAP HANA and tables in the remote data source. All access to the remote data source is read-only.

 

In this scenario we are doing the Smart Data Access setup on Enterprise HANA system to connect to BW on HANA system remotely


Please check the attached document for detailed procedure of SDA setup for HANA.

 

SAP BW ON HANA & HANA SMART DATA ACCESS – SETUP

 

1. Create user with required privileges in BW on HANA DB

Log in to the remote source SAP HANA system (BWoH) using the SYSTEM user and create the user SDA with the following privileges; a SQL sketch follows the list.

 

System Privilege: Catalog Read

Object Privileges:

    • SELECT on Schema SAPABAP1
    • SELECT on _SYS_BIC & _SYS_BI
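
A SQL sketch of the user creation (the initial password is a placeholder):

CREATE USER SDA PASSWORD "<Initial_Password1>";
GRANT CATALOG READ TO SDA;
GRANT SELECT ON SCHEMA SAPABAP1 TO SDA;
GRANT SELECT ON SCHEMA _SYS_BIC TO SDA;
GRANT SELECT ON SCHEMA _SYS_BI TO SDA;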

Fig1.png

Fig2.png

Fig3.png

Fig4.png

Note: Schema SAPABAP1 contains all the required base tables which HANA Modelling team wants to build their reports.


2. Logon to HANALIVE DB as SYSTEM user and configure the Smart Data Access.

 

SAP HANA system authorization CREATE REMOTE SOURCE is required to create a remote source. SYSTEM user already has this authorization.

 

In the Systems view, open Provisioning -> Remote Sources.

 

Right-click Remote Sources and select New Remote Source.

Fig5.png

Enter the Source Name. Select the Adapter Name from the drop-down list; it is HANA (ODBC) in this case, as we are connecting to a remote SAP HANA database. Enter the values for Server, Port, User Name and the Password of the user SDA which we created on the SAP BW on HANA system.

 

Click the Save the Editor icon in the upper right-hand corner of the screen.

Fig6.png

3. Connection Verification

After the SDA connection is successfully created, verify that you can connect to the remote source AP6 system and check that you can see the tables under schema SAPABAP1

Fig7.png

4. User authorization to access


Authorization to access data in the remote data source is determined by the privileges of the database user as standard.

 

Grant the following privileges to the role assigned to the modelling users, so that they can create virtual tables and then write SQL queries which operate on those virtual tables. The SAP HANA query processor optimizes these queries, executes the relevant part of the query in the target database, returns the results to SAP HANA, and completes the operation.

Fig8.png
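
For illustration, once the privileges are in place a modelling user can create and query a virtual table like this; the schema and remote source names are placeholders, and for HANA-to-HANA sources the database part of the four-part name is <NULL>:

CREATE VIRTUAL TABLE "MODELER"."VT_MARA"
  AT "<remote_source>"."<NULL>"."SAPABAP1"."MARA";

SELECT COUNT(*) FROM "MODELER"."VT_MARA";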
