
Troubleshooting SAP HANA Authorisation issues


This document deals with issues regarding analytic privileges in SAP HANA.


 

So what are privileges, some might ask?

 

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

 

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

 

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

 

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized


 

These errors are just examples of some of the different authorization issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So, for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

This is the perfect opportunity to run an authorisation trace by means of the SQL console in HANA Studio.

The instructions below will walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue (execute the command again).

 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

_____________________________________________________________________________________________________________________________

 

Only use the DEBUG level when instructed by SAP. It's recommended to use "INFO" rather than "DEBUG" in normal circumstances.

 

 

If you would like a more detailed trace of the privileges needed, you could also execute the DEBUG-level trace (usually SAP Development would request this):

 

1) Please run the following statement in the HANA database to set the DB trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='debug' with reconfigure;


 

2) Reproduce the issue (execute the command again).


 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

______________________________________________________________________________________________________________________________

 

So now you have turned the trace on, reproduced the issue, and turned the trace off again.

 

You should now see a new indexserver trace file (indexserver...trc) created in the Diagnosis Files tab in HANA Studio.


 

Once you open the trace file, scroll to the end and you should see something similar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user who is trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the analytic privileges needed to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

SELECT * FROM "SYS"."OBJECTS" WHERE OBJECT_OID = '<oid>';

SELECT * FROM "SYS"."OBJECTS" WHERE OBJECT_OID = '78787';

 

Once you have found the owner of this object, you can get the owner to grant the TABLEAU user the necessary privileges to query the object.
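For example, if the missing privilege is a repository (activated) analytic privilege, the owner or an administrator can grant it. A hedged sketch (the analytic privilege name is a placeholder assumption; use the activated name of the privilege that actually protects the view):

CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"('onep.MasterData.qn::AP_QMT', 'TABLEAU');
-- Object privileges on the runtime objects may also be required, e.g.:
GRANT SELECT ON SCHEMA "_SYS_BIC" TO TABLEAU;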

 

Please be aware that if you find that the owner of an object is _SYS_REPO, it is not as straightforward as logging in as _SYS_REPO; that is not possible, because _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design-time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as of their activated runtime versions.

Instead, you have to create a .hdbrole file which grants the access on this schema (a development type of role, giving SELECT, EXECUTE, INSERT, etc.). You then assign this role to the user who is trying to access the object.
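A minimal sketch of such a design-time role (the package path, role name, and schema are placeholder assumptions):

-- file acme.roles/dev_access.hdbrole (hypothetical package and name)
role acme.roles::dev_access {
    catalog schema "MYSCHEMA": SELECT, INSERT, UPDATE, DELETE, EXECUTE;
}

After activation, the role can be granted to the user with CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('acme.roles::dev_access', 'TABLEAU');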

 

 

Another option for analyzing privilege issues was introduced as of SPS 9: the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael


HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented by a countless number of operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA rules framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)


  • Simple and intuitive UI control that supports text rules and decision tables


  • Simple and intuitive web application that enables business users to manage their own rules


Scalability and Performance | As a native SAP HANA solution, HRF leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Here is a partial list of SAP solutions that utilize HRF in different domains:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning


SAP Fraud Management


SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)


SAP Operational Process Intelligence


How To Configure Network Settings for HANA System Replication

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)


Hello All,

 

It's been some time now that I have been working with HANA and related areas like SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here which should come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same results.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.

 

Part 2 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

Part 4 of this series can be found here -->  http://scn.sap.com/docs/DOC-65343

 

 

1) Related to HANA:

 

Use Case: We have a table in a HANA schema, and we were asked if there is any option to find a where-used list showing where the table is used.

Table Name: STKO.

Solution: Go to schema SYS.

There you will find a view named OBJECT_DEPENDENCIES.

You will get the dependency information in that view.

 

In SQL terms: SELECT * FROM "SYS"."OBJECT_DEPENDENCIES" WHERE BASE_OBJECT_NAME = 'STKO';


 

--> The following is another way to see the 'Where-Used List':

 

In the HANA Studio left navigator pane > Catalog > any schema > Tables folder > context menu (right-click on the table), select the option 'Open Definition'.


Then on the right-hand side, below the editor pane, alongside the Properties tab you see the 'Where-Used List' tab.


 

2)  Related to HANA/SLT:

 

Use Case: We have a new SLT configuration enabled for a source system.

Which tables are created automatically under the target schema defined in the configuration?

 

Observation: We have created a non-SAP configuration in SLT, and MII_SQL was the configuration name provided in SLT.

Now on the HANA side, you will see that the schema MII_SQL has the following tables by default:

[Screenshot: tables created by default in schema MII_SQL]

 

3)  Related to HANA:

Use Case: We have a HANA information view. We want to know the number of records available in the output.

 

Solution: HANA Information View --> Semantics --> Data preview --> Show Log --> Generated SQL.


 

 

 

Copy the "_SYS_BIC"."sap.hba.ZDBR44364/CV_FMIFIIT" view name (my calculation view for this document's purpose).

Now write a SQL command against it.
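For example, to get the record count of the view named above:

SELECT COUNT(*) FROM "_SYS_BIC"."sap.hba.ZDBR44364/CV_FMIFIIT";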

 


 

 

4)  Related to HANA:

Use Case: We need to connect to a HANA cloud system. How do we do that?

 

Solution: Initially, when we open HANA Studio, we see the following:

[Screenshot: initial HANA Studio view, without the cloud system option]

 

Now click 'Install New Software...'


 

Add https://tools.hana.ondemand.com/kepler

 

Once it is installed, you will now see the option to add the Cloud System in HANA Studio.

 


 

While connecting to the cloud system, you might encounter the following error:

[Screenshots: connection error messages]

 

 


 

Go to the path shown below (Preferences) and make the required changes in the HTTP and HTTPS line items.

[Screenshot: proxy entries in the Preferences dialog]

 

 

 

Sometimes, you might get the following error message:

[Screenshot: error message]

This happens when the service is temporarily down; you should be able to connect to the HANA cloud system after some time, so please try again later.

 

Sometimes, you might get the following error:

[Screenshot: error message]

The workaround we used to overcome this issue was to reinstall the Kepler components into Eclipse/HANA Studio.

 

5) Related to HANA:

 

Use Case: We have created an information view, but it failed to activate with the following error message:

[Screenshot: activation error]

 

Solution: Execute the SQL command

GRANT SELECT ON SCHEMA <Schema_Name> TO _SYS_REPO WITH GRANT OPTION;

Once this SQL is executed, the model validation will be successful.

 

 

6)  Related to Lumira:

 

Use Case: Lumira hangs during loading at the following screen.

 

[Screenshot: Lumira loading screen]

 

 

Solution: This sometimes happens due to an issue in the user profile.

Go to C:\Users --> find the user --> delete the .sapvi file and try loading Lumira again.

 

 

7) Related to HANA:

 

Use Case: Using the option 'SAVE AS DELIMITED TEXT FILE' (comma delimiter), I had to export a table whose columns contained values like the following:

[Screenshot: table data with comma-separated CMPLID values]

Disclaimer: In real life this should not happen, as IDs containing commas don't look that good.

 

If you observe closely, the 'CMPLID' column values themselves are comma-separated, and when the table was exported, a new column was created at each comma in the CSV file (the alignment of the columns was going wrong).

 


 

Solution: During the export of the table from HANA, I used the option 'SAVE AS HTML FILE'.

 

Once we got that HTML file, it was fed into a third-party solution, http://www.convertcsv.com/html-table-to-csv.htm, which converted the HTML file to CSV.

 


 

This can further be loaded back to HANA without any issues.

 

 

 

8)  Related to HANA/SLT

 

Use Case: Some tables were missing in the Data Provisioning option in HANA Studio, in a non-SAP source system scenario where the SLT configuration had already been up and running for a long time.

 

Solution: This needs a little more explanation, which was published here on SCN a few days ago. Please find the link below:

http://scn.sap.com/docs/DOC-63399

 

 

9)  Related to HANA:

 

Use Case: You are performing a lot of steps in HANA Studio, and in between you want to perform an activity whose link is available only in the 'Quick Launch' screen, but that screen is not visible in the UI.

 

Solution: You can go to the following option to 'Reset Perspective':


 

Alternatively, the following option can be used to get only the 'Quick View' screen.


 

 

10) Related to HANA

 

Use Case: SAP has delivered new DUs (say, for Manufacturing OEE) and you have been asked to import the latest DU content to your HANA system.

 

Solution: Log into service.sap.com.

Click on SAP Support Portal.

Click on Software Downloads

Click on Support Packages and Patches

Click on A-Z Alphabetical List and select H

It will take you to a screen like below:

[Screenshot: download screen]

Download the MANUFACTURING CONTENT to your desktop. It will be a ZIP file.

 

There will be a .TGZ file (not the LANG_.TGZ file) inside it, and it needs to be imported into your system using the following option.

 

[Screenshot: Import option in HANA Studio]

 

Once the delivery unit is successfully imported, you can check it under the 'Delivery Units' link in Quick Launch in HANA Studio.

 

 

11) Related to HANA:


Use Case: While trying to connect Join_1 and Projection, I was getting the following warning (Compartment Changes). We tried all options to connect the two nodes, but the system was not allowing us to do so.

[Screenshot: Compartment Changes warning]


Solution: Finally, we had to close the whole view and relaunch it. After doing that, we were able to join the nodes.

 

12) Related to HANA:


Use Case: For a POC/demo, we had to generate a huge number of test records (on the order of more than 1 billion) into HANA schema tables.

The main catch here was that the whole activity was not just generating junk data, but meaningful data satisfying certain conditions.

 

Solution:

Two tools were available to fulfil our requirement.

1) DEXTOR --> You can get more details in this video:

https://sap.emea.pgiconnect.com/p7mdn240kw9/?launcher=false&fcsContent=true&pbMode=normal

 

2) HANA Data Generator:

http://scn.sap.com/docs/DOC-42320

 

Eventually we used the 2nd option; there are some limitations and at times you may not get the expected results, but yes, it is indeed a very nice tool.

 

Sample screen where we had given some conditions:

[Screenshot: generator conditions]

 

NOTE: The tool was not working with Java 8, and I had to uninstall Java 8 and install Java 6 to make it work.

 

 

Hope this document comes in handy!

 

BR

Prabhith-

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)


Hello All,

 

It's been some time now that I have been working with HANA and related areas like SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here which should come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same results.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.


Part 1 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

Part 4 of this series can be found here -->   http://scn.sap.com/docs/DOC-65343

 

13) Related to HANA:

Use Case: You already have a HANA system configured in Studio.

Once you log in, you might see 'SAP Control REQUEST HAS FAILED' even though the services are all started.


 

Solution: In most cases, remove the system from the studio and add the same system again.

It should start again without any issues.

 

 

14) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following), and I was asked to load it into a schema table in HANA.

Please note that there is a COUNTER column with value 1 in each row.

[Screenshot: Excel data with COUNTER column]

When we upload, we get an error like the following:

 

'INSERT, UPDATE and UPSERT are disallowed on the generated column: Cannot insert into the generated field COUNTER'


Workaround: We tried many options, but nothing worked for us.

So we deleted the 'COUNTER' column from the Excel file and then uploaded the data.

 

Later, using an ALTER statement, we were able to include the 'COUNTER' column as well.
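A hedged sketch of that ALTER statement (schema, table, and the identity variant are assumptions; GENERATED ALWAYS matches the observed "cannot insert" behaviour, while GENERATED BY DEFAULT would also allow explicit inserts):

ALTER TABLE "MYSCHEMA"."MYTABLE" ADD ("COUNTER" INTEGER GENERATED ALWAYS AS IDENTITY);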

 


PS: The actual reason for this error is still not clear, but you can see some interesting discussions about this here on SCN.

This should be helpful --> EXPERIENCE WITH IDENTITY FEATURE IN SAP HANA

 

 

15) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following), and I was asked to load it into a schema table in HANA.

[Screenshot: Excel data with DATEA and LDATE columns]

We were trying to upload the data to HANA, where the data type of the above 2 fields, 'DATEA' and 'LDATE', was 'DATE'.

Upload from flat file was throwing the following error:

'at.jave.sql.Date.strict_valueOf'


 

Workaround: We had to change the data type of the fields 'DATEA' and 'LDATE' to 'NVARCHAR', and the data was successfully uploaded.

This was just a workaround, and I am not sure if there is a permanent solution for this issue.


 

Another workaround for loading date fields from Excel to HANA:

On the HANA side, keep the corresponding column as the DATE data type.


 

Go to the CSV file, make the following modification, and save the file.

[Screenshot: date column format modification in the CSV file]

Now try loading the table to HANA using the 'Data from Local File' option, and it will be successful.

 

PS: Even after saving the CSV file, you might see the Excel column in the old format, but don't worry, the loading will be successful.

This will work only in cases where you don't have a null value in the date column.

 

 

16)Related to HANA/ ABAP Development Tools

Use Case: We had to debug a procedure in a Promotion Management system running on a HANA database.

When we clicked on the particular procedure, it showed us the message 'Please use the ABAP development tools in Eclipse' (the SE80 screen is shown below).

[Screenshot: SE80]

 

Solution: We had to configure the ABAP perspective in Eclipse/HANA Studio and were then able to proceed with debugging.

Please see some interesting documents on the related topic here:

ABAP Managed Database Procedures - Introduction

Tutorial: How to Debug an ABAP Managed Database Procedure

 

After configuring the ABAP perspective, we are able to log into the ABAP system using it.


 

The SE80 screen shown above will look like the following in HANA Studio.

[Screenshot: the same procedure in HANA Studio]

 

17)  Related to HANA/ ABAP Development Tools

Use Case:  We had to install 'ABAP Development Tools' in HANA Studio.

 

Solution: Please follow the steps mentioned by Senthil in the following document.

Step-by-step Guide to setup ABAP on HANA with Eclipse

 

When you follow the document, at one point you will have to select the required add-ons.


 

Once the steps are successfully completed, you will be able to see the following perspectives (the ones selected in the previous screen) in your Studio:

[Screenshot: installed perspectives]

 

 

18) Related to HANA Studio/Eclipse Environment

Use Case: While working in HANA Studio, the error 'Failed to create Parts Control' occurred.

 

Observation: This error is somehow related to the Eclipse environment.

The workaround was to close the Studio and run it again.


 

We had observed this error in the following environment:

HANA Studio version is 1.00.82.0

HANA system version is 1.00.85.00.397590

 

Please find an important discussion on this topic here:

Failed to create the part's controls

 

 

19)Related to HANA Studio/Citrix Environment

Use Case: This was observed in an internal Citrix environment and is not expected much in customer projects.

The Studio fails to load and shows the following error message:

[Screenshot: workspace error]

Solution: This error is related to a workspace space issue.

HANA Studio settings were reset, and a new workspace (with more space) was assigned to the new Studio installation.

 

 

20) Related to HANA

Use Case: I was trying to load a flat file into HANA and was getting scientific notation in some columns.

 

Solution:

Initially, I was trying to load an .xlsx file and I was getting the scientific notation.

Example:

[Screenshot: values in scientific notation]

 

Then I changed the .xlsx to .csv (File --> Save As --> .csv), loaded it again into HANA, and the output came out as expected.

Example:

 

[Screenshot: expected output]

 

One more thing I observed was that if I changed .xlsx to .csv by simply renaming the extension and then loaded it into HANA, I got something like the below:

[Screenshot: malformed output]

 

21)Related to HANA Studio/Eclipse Environment

Use Case: We had installed plug-ins like 'ABAP' and were working in that perspective.

Due to some action, we were getting the message: 'Secure Storage is Locked'.


 

Observation: The functional part of the secure storage is documented by Rocky in his blog here:

The "not quite" secure storage HANA Studio, Reconnect your Studio to HANA Servers!

 

You can also find a very detailed discussion about this topic here:

"Error when connecting to system" or "Invalid Username or Password" with HANA Studio

 

Solution: We went to the following preferences path, deleted the related contents, and restarted.

[Screenshot: secure storage preferences path]

 

 

 

Hope this document comes in handy!


BR

Prabhith

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)


Hello All,

 

It's been some time now that I have been working with HANA and related areas like SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here which should come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same results.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.


Part 1 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 2 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 4 of this series can be found here      -->     http://scn.sap.com/docs/DOC-65343



22) Related to HANA

Use Case: We have a table A in HANA under schema A. We were asked to create a similar table structure in a different schema B.


Solution: HANA's generated SQL can be one of the solutions.

Go to the Table that you want to recreate under another schema:


Schema A Table --> Right Click --> Open Definition

In the next screen you will find the entire table definition.

Right-click in the definition area, copy the 'Export SQL' content, and paste it into a SQL console under schema B.



 


23) Related to HANA

Use Case: We were asked to provide the total count of records in a table.

Solution: Two different methods can be used, as sketched below.

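A hedged sketch of two common methods (schema and table names are placeholders):

-- Method 1: plain SQL count
SELECT COUNT(*) FROM "MYSCHEMA"."MYTABLE";

-- Method 2: read the record count from the column-store monitoring view
SELECT RECORD_COUNT
  FROM "SYS"."M_CS_TABLES"
 WHERE SCHEMA_NAME = 'MYSCHEMA'
   AND TABLE_NAME  = 'MYTABLE';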



24)Related to HANA

Use Case: We are logged into a HANA instance in Studio as user A, and in between we had to log into the same instance using a different user.


Solution: In the Systems pane, right-click the system entry and use the option to add the same system with a different user.

25)  Related to HANA/SDA

Use Case: We have a table A, and a view is created on top of that table A in a HANA instance (say HAS).

Now we want to create the same view on another HANA instance (HAT), but the corresponding table A is not available there.


Solution: SDA (Smart Data Access) was the solution that we implemented.

Go to the target HANA instance (HAT) and create an SDA connection to the source HANA instance (HAS).



Once the connection is successfully established, all the tables in the source instance (HAS) can be seen in the target HANA instance (HAT).

Identify the required table A --> right-click --> 'Add as Virtual Table', and save it under the required schema.


Now, since the target HANA instance (HAT) contains the required table A, we are able to create the view on top of it (a SQL sketch of these steps follows below).
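A hedged SQL sketch of the same steps (host, credentials, and the target schema are placeholder assumptions):

-- On the target instance HAT: create the remote source pointing to HAS
CREATE REMOTE SOURCE "HAS" ADAPTER "hanaodbc"
    CONFIGURATION 'ServerNode=hashost:30015'
    WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=SDA_CONNECT;password=<password>';

-- Expose table A from schema S1 on HAS as a virtual table on HAT
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_A" AT "HAS"."<NULL>"."S1"."A";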

 



26) Related to HANA/SDA

Use Case: After doing the steps mentioned in item 25, we created the HANA view in the SDA target system, activated it, and tried to preview the data.

But unfortunately it ran into the following authorization-related error.

[Screenshot: authorization error]

PS: Here STM is the source HANA instance for SDA, and the error clearly mentions that the system is facing issues while opening the remote database.


Solution: The table 'VOICEOFCUSTOMER' is available under Schema 'S1' in the source HANA instance 'STM'.

We added the schema S1 under the object privileges of the user SDA_CONNECT.


That resolved our authorization issue, and we were able to preview the output of the view in the SDA target (HAT).




27) Related to HANA/SQL

Use Case: We were asked if there is a method to show the totals as a separate row under the data set.

This was for a configuration step in a connected GRC system; the ideal customer use case still needs to be checked.

 

Solution: The ROLLUP command in HANA can be leveraged here.
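A hedged sketch (table and column names are placeholder assumptions):

SELECT "REGION", SUM("AMOUNT") AS "TOTAL_AMOUNT"
  FROM "MYSCHEMA"."SALES"
 GROUP BY ROLLUP("REGION");
-- The extra row returned with REGION = NULL carries the grand total.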

28) Related to HANA Live:

Use Case: To meet a customer requirement, we had to join 2 HANA Live views.

Observation: If the existing HANA Live views have prompts, these might not appear when you drag the views into a Projection or Aggregation node. In that case, you can add the same prompts manually and map the HANA Live prompts to the ones you have created, using the following option.


 

 

29) Related to HANA:

Use Case: We had a requirement to show the current date as the default value of a HANA input parameter.

Solution: We can use an expression like the following:

NOTE: The 20150525 format seems to work only with the BO Analysis for Office tool.

[Screenshots: the expression and the resulting default value]

 

 


 

30) Related to HANA

Use Case: There was a table already created in HANA with a specific column of data type NVARCHAR(10). The same table was consumed in an attribute view and further in a calculation view. We were given a fresh Excel sheet to upload to the table, but the new sheet contained many entries for that column whose length was around 25.

 

Solution:

TRUNCATE TABLE "<Schema_Name>"."<Table_Name>";

ALTER TABLE "<Schema_Name>"."<Table_Name>" ALTER ("<Field_Name>" NVARCHAR(30));

 

31) Related to HANA

Use Case: We had some information views in the System-Local --> bw2hana package.

Unfortunately, whenever we tried to do a data preview of a view, it ran into the following error:

[Screenshot: data preview error]

Solution:

We checked all types of authorization-related points, especially the ones mentioned in the following note:

SAP Note:   1761917 - Executing or activating an Analytic Object fails with "user is not authorized" or "invalidated view"

 

Finally, we realized that most of the views there had the 'Apply Privileges' option set to 'SQL Analytic Privileges'.

We changed the privilege setting to blank, and the data preview option started working.


 

32) Related to HANA:

Use Case: Over the course of time, we found a mistake and had to change the primary key of a table.

Before:

[Screenshot: table definition before the change]

 

Solution:

ALTER TABLE "ZDBR_44093_STUD_MGT"."SURVEY2" DROP PRIMARY KEY;
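After dropping the old key, the new primary key can be added. A hedged sketch (the key column is a placeholder assumption):

ALTER TABLE "ZDBR_44093_STUD_MGT"."SURVEY2" ADD PRIMARY KEY ("STUDENT_ID");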

 

After SQL:

[Screenshot: table definition after the change]

 

 

33) Related to Lumira:

Use Case: We were getting issues in Lumira (like the screenshot below).

[Screenshot: Lumira error]

Solution:

Whenever you publish a Lumira dataset to the HANA server (in my case, Lumira and HANA are on the same box), the dataset gets collected, in the form of a HANA calculation view, under the following path: HANA System --> Content --> sap (package) --> bi --> content.

You can search for the rest of the actual path from the above screen.

In my case: 3002393 --> 6b40aaed-ad10-4180-8200-bcfc5365709b --> HANA calculation view


 

Now click on the view and redeploy it.


 

Now the Lumira report will be shown.


 

Hope this document comes in handy!

Will keep adding more tips here....

 

BR

Prabhith

Parallelization options with the SAP HANA and R-Integration


Why is parallelization relevant?

 

The R-Integration with SAP HANA aims at leveraging R's rich set of powerful statistical and data mining capabilities, as well as its fast, high-level, built-in convenience operations for data manipulation (e.g. matrix multiplication, data subsetting, etc.) in the context of a SAP HANA-based application. To benefit from the power of R, the R-integration framework requires a setup with two separate hosts for SAP HANA and the R/Rserve environment. A brief summary of how R processing from a SAP HANA application works is described in the following:

 

  • SAP HANA triggers the creation of a dedicated R-process on the R-host machine, then
  • R-code plus data (accessible from SAP HANA) are transferred via TCP/IP to the spawned R-process.
  • Some computational tasks take place within the R-process, and
  • the results are sent back from R to SAP HANA for consumption and further processing.


For more details, see the SAP HANA R Integration Guide: http://help.sap.com/hana/SAP_HANA_R_Integration_Guide_en.pdf

 

There are certain performance-related bottlenecks within the default integration setup which should be considered. The main ones are the following:

  • Firstly, latency is incurred when transferring large datasets from SAP HANA to the R-process for computation on the foreign host machine.
  • Secondly, R inherently executes in single-threaded mode. This means that, irrespective of the number of CPU cores available on the R-host machine, an R-process will by default execute on a single CPU core. Even with full memory utilization on the R-host machine, the available CPU processing capabilities will remain underutilized.


A straightforward approach to gaining performance improvements in the given setup is leveraging parallelization. Thus, in this document I want to present an overview and highlight avenues for parallelization within the R-Integration with SAP HANA.


Overview of parallelization options


The parallelization options to consider vary from hardware scaling (host box) to R-process scaling and are illustrated in the following diagram:

[Diagram: overview of parallelization options]

The three main paths to leverage parallelization, as illustrated above, are the following:

     (1) Trigger the execution of multiple R-calls in parallel from within SQLScript procedures in SAP HANA

     (2) Use parallel R libraries to spawn child (worker) R processes within parent (master) R-process execution

     (3) Scale the number of R-host machines connected to SAP HANA for parallel execution (scale memory and add computational power)


While each option can be implemented independently of the others, they can also be combined and mixed. For example, if you go for (3), scaling the number of R-hosts, you need (1), triggering multiple R-calls, for parallelism to take place. Without (1), you merely end up with better high availability/fault tolerance.


Based on the following use case, I will illustrate the different parallelization approaches using some code examples:

A health care unit wishes to predict cancer patients' survival probability over different time horizons after various treatment options based on diagnosis. Let's assume the following information:

  • Survival periods for prediction are: half year, one year and two years
  • Accordingly, 3 predictive models have been trained (HALF, ONE, TWO) to predict a new patient's survival probability over these periods, given a set of predictor variables based on historical treatment data.


In a default approach without leveraging parallelization, you would have one R-CALL transferring a full set of new patient data to be evaluated, plus all three models from SAP HANA to the R-host. On the R-host, a single-threaded R process will be spawned. Survival predictions for all 3 periods would be executed sequentially. An example of the SAP HANA stored procedure of type RLANG is as shown below.


[Screenshot: the serial RLANG procedure]
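A hedged reconstruction of such a serial RLANG procedure (the table types, BLOB model storage, and scoring calls are assumptions, not the original code):

CREATE PROCEDURE PREDICT_SURVIVAL_SERIAL(
    IN eval      "MYSCHEMA"."TT_PATIENTS",  -- new patient data
    IN tr_models "MYSCHEMA"."TT_MODELS",    -- 3 serialized models (HALF, ONE, TWO) as BLOBs
    OUT result   "MYSCHEMA"."TT_RESULT")
LANGUAGE RLANG AS
BEGIN
    probs <- NULL
    # loop sequentially over the three trained models
    for (i in 1:nrow(tr_models)) {
        model <- unserialize(tr_models$MODEL[[i]])             # rebuild the R model from the BLOB
        probs <- cbind(probs, predict(model, newdata = eval))  # score all patients
    }
    result <- data.frame(PATIENT_ID = eval$PATIENT_ID,
                         P_HALF = probs[, 1], P_ONE = probs[, 2], P_TWO = probs[, 3])
END;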

In the code above 3 trained models (variable tr_models) are passed to the R-Process for predicting the survival of new patient data (variable eval). The survival prediction based on each model takes place in the body of the “for loop” statement highlighted above.

 

Performance measurement: For a dataset of 1,038,024 observations (~16.15 MB) and 3 trained BLOB model objects (each ~26.8 MB), an execution time of 8.900 seconds was recorded.


There are various sources of overhead involved in this scenario. The most notable ones are:

  • Network communication overhead, in copying one dataset + 3 models (BLOB) from SAP HANA to R.
  • Code complexity, sequentially executing each model in a single-threaded R-process. Furthermore, the “for” loop control construct, though in-built into base R, may not be efficient from a performance perspective in this case.

 

By employing parallelization techniques, I hope to achieve better results in terms of performance. Let the results of this scenario constitute our benchmark for parallelization.



Applying the 3 parallelization options to the example scenario


1. Parallelize by executing multiple R-calls from SAP HANA


We can exploit the inherent parallel nature of SAP HANA's database processing engines by triggering multiple R-calls to run in parallel, as illustrated above. For each R-call triggered by SAP HANA, the Rserve-process spawns an independent R-runtime process on the R-host machine.

 

An example illustrating an SAP HANA SQLScript stored procedure with multiple parallel calls of a stored procedure of type RLANG is given below. In the example, the idea is to separate patient survival prediction across 3 separate R-calls as follows:

[Screenshot: RLANG procedure for a single model]

  • Create an RLANG stored procedure handling survival prediction for just one model (see input variable tr_model).
  • Include the expression "READS SQL DATA" (as highlighted above) in the RLANG procedure definition for parallel execution of R-operators to occur when embedded in a procedure of type SQLScript. Without this instruction, R-calls embedded in an SQLScript procedure will execute sequentially.
  • Then create an SQLSCRIPT procedure:

[Screenshot: SQLScript procedure with 3 parallel R-calls]


  • Embed 3 RLANG procedure calls within the SQLSCRIPT procedure as highlighted. Notice that I am calling the same RLANG procedure defined previously, but I pass different trained model objects (trModelHalf, trModelOne, trModelTwo) to separate survival prediction across different R-calls (see the sketch after this list).
  • In this SQLScript procedure you can include the READS SQL DATA expression (recommended for security reasons, as documented in the SAP HANA SQLScript Reference guide) in the SQLSCRIPT procedure definition, but to trigger R-calls in parallel it is not mandatory. If included, however, you cannot use DDL/DML instructions (INSERT/UPDATE/DELETE, etc.) within the SQLSCRIPT procedure.
  • On the R host, 3 R processes will be triggered and run in parallel. Consequently, 3 CPU cores will be utilized on the R machine.
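A hedged sketch of this pattern (procedure, table type, and column names are assumptions):

CREATE PROCEDURE PREDICT_ONE_MODEL(
    IN eval     "MYSCHEMA"."TT_PATIENTS",
    IN tr_model "MYSCHEMA"."TT_MODEL",   -- a single model this time
    OUT result  "MYSCHEMA"."TT_RESULT")
LANGUAGE RLANG READS SQL DATA AS         -- READS SQL DATA enables parallel execution
BEGIN
    model  <- unserialize(tr_model$MODEL[[1]])
    result <- data.frame(PATIENT_ID = eval$PATIENT_ID,
                         PROB = predict(model, newdata = eval))
END;

CREATE PROCEDURE PREDICT_ALL_PARALLEL(
    IN eval        "MYSCHEMA"."TT_PATIENTS",
    IN trModelHalf "MYSCHEMA"."TT_MODEL",
    IN trModelOne  "MYSCHEMA"."TT_MODEL",
    IN trModelTwo  "MYSCHEMA"."TT_MODEL",
    OUT resHalf "MYSCHEMA"."TT_RESULT",
    OUT resOne  "MYSCHEMA"."TT_RESULT",
    OUT resTwo  "MYSCHEMA"."TT_RESULT")
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    -- these three calls run in parallel because the RLANG procedure reads SQL data only
    CALL PREDICT_ONE_MODEL(:eval, :trModelHalf, resHalf);
    CALL PREDICT_ONE_MODEL(:eval, :trModelOne,  resOne);
    CALL PREDICT_ONE_MODEL(:eval, :trModelTwo,  resTwo);
END;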


Performance measurement: In this parallel R-calls scenario example, an execution time of 6.278 seconds was experienced. This represents a performance gain of roughly 29.46%. Although this indicates an improvement in performance, we may have theoretically expected a performance improvement close to 75%, given that we trigger 3 R-calls. The answer for this gap is overhead. But which one?


In this example, I parallelized survival prediction across 3 R-calls but still transmit the same patient dataset in each R-call. The improvement in performance can be explained, firstly, by the fact that HANA now transmits less data per R-call (only one model, as opposed to three in the default scenario), so the data transfer is faster; secondly, each model's survival prediction is performed in a separate R-runtime.

 

There are two other avenues we could explore for optimization in this use case scenario. One is to further parallelize the R-runtime prediction itself (see section 2). The other is to further reduce the amount of data transmitted per R-call by splitting the patient dataset in HANA and parallelizing the data transfer across separate R-calls (see section 4).

 

Please note that without the READS SQL DATA instruction in the RLANG procedure definition an execution time of 13.868 seconds was experienced. This is because each R-CALL embedded in the SQLscript procedure is executed sequentially (3 R-call roundtrips).


2. Parallelize the R-runtime execution using parallel R libraries



By default, R execution is single threaded. No matter how much processing resource is available on the R-host machine (64, 32, 8 CPU cores etc.), a single R runtime process will only use one of them. In the following I will give examples of some techniques to improve the execution performance by running R code in parallel.

 

Several open source R packages exist which offer support for parallelism with R. The most popular packages for R-runtime parallelism on a single host are "parallel" and "foreach". The "parallel" package offers a myriad of parallel functions, each specific to the nature of the data (lists, arrays, etc.) subject to parallelism. Moreover, for historical reasons, one can classify these parallel functions roughly under two broad categories, prefixed by "par-" (parallel snow cluster) and "mc-" (multicore).

 

In the following example I use the multicore function mclapply() to invoke parallel R processes on the patient dataset. Within each of the 3 parallel R-runtimes triggered from HANA, the patient data is split into 3 subsets, and survival prediction is then parallelized on each subset. See the figure below.


[Screenshot: R script using mclapply]

The script example above highlights the following:

  • 3 CPU cores are used by the R-process (variable n.cores)
  • The patient data is split into 3 partitions, according to the number of chosen cores, using the "splitIndices" function.
  • The task to be performed (survival prediction) by each CPU core is defined in the function "scoreFun"
  • Then I call mclapply(), passing the data partitions (split.idx), how many CPU cores to use, and the function to be executed by each core (see the sketch after this list).
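A hedged sketch of the corresponding R code inside the RLANG body (variable names follow the bullets above; the model handling is an assumption):

library(parallel)
n.cores   <- 3
split.idx <- splitIndices(nrow(eval), n.cores)  # partition the row indices into 3 chunks
model     <- unserialize(tr_model$MODEL[[1]])
scoreFun  <- function(idx) {                    # the task each worker core performs
    predict(model, newdata = eval[idx, ])
}
res   <- mclapply(split.idx, scoreFun, mc.cores = n.cores)
probs <- unlist(res)  # chunks are contiguous, so the original row order is preserved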


In this example, 3 R-processes (masters) are initially triggered in parallel on the R-host by the 3 R-calls. Then, within each master R-runtime, 3 additional child R-processes (workers) are spawned by calling mclapply(). On the R-host, therefore, we will have 3 processing groups executing in parallel, each consisting of 4 R-runtimes (1 master and 3 workers). Each group is dedicated to predicting patient survival based on one model. For this setup 12 CPUs will be used in total.

 

Performance measurement: In this parallel R package scenario using mclapply(), an execution time of 4.603 seconds was observed. This represents roughly a 48.28% gain in performance over the default (benchmark) scenario and roughly a 20% improvement over the parallel R-call example presented in section 1.


3. Parallelize by scaling the number of R-Host machines connected to HANA for parallel execution


It is also possible to connect SAP HANA to multiple R-hosts, and exploit this setup for parallelization. The major motivation for choosing this option is to increase the number of processing units (as well as memory) available for computation, provided the resources of a single host are not sufficient. With this constellation, however, it would not be possible to control which R-host receives which R request. The choice will be determined randomly via an equally-weighted round-robin technique. From an SQLScript procedure perspective, nothing changes. You can reuse the same parallel R-call scripts as exemplified in section 1 above.


Setup Prerequisites


  • Include more than one IPv4 address in the calc engine parameter cer_rserve_addresses in the indexserver.ini or xsengine.ini file (see section 3.3 of the SAP HANA R Integration Guide); a configuration sketch follows below.
  • Set up parallel R-calls within an SQLSCRIPT procedure, as described in section 1.
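The first prerequisite can also be set via SQL. A hedged sketch (the host names and ports are examples, and the 'calcengine' section name is an assumption; check the R Integration Guide for your revision):

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini','SYSTEM')
SET ('calcengine','cer_rserve_addresses') = 'rhost1:6311,rhost2:6311'
WITH RECONFIGURE;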

[Screenshot: cer_rserve_addresses configuration]

I configure 2 R-host addresses in the calcengine rserve address option shown above. While still using the same SQLScript procedure as in the 3 R-Calls scenario example (I change nothing in the code), I trigger parallelization of 3 R-calls across two R-host machines.


[Diagram: parallel R-calls across two R-hosts]

Performance measurement: The scenario took 6.342 seconds to execute. This execution time is similar to the times experienced in the parallel R-calls example. This example only demonstrates that parallelism works in a multi R-host setup. Its real benefit comes into play when the computational resources (CPUs, memory) available on one R-box are believed not to be enough.


4. Optimizing data transfer latency between SAP HANA and R


As discussed in section 1, one performance overhead is the transmission of the full patient data set in each parallel R-call from HANA to R. We could further reduce the latency in data transfer by splitting the data set into 3 subsets in HANA and then using 3 parallel R-calls to transfer each subset from HANA to R for prediction. In each R-call, however, we also have to transfer all 3 models.


An example illustrating this concept is provided in the next figure.


[Screenshot: splitting the dataset in HANA]


In the example above, the following is performed:

  • The patient dataset (eval) is split into 3 subsets in HANA (eval1, eval2, eval3).
  • 3 R-calls are triggered, each transferring a data subset together with all 3 models.
  • On the R-host, 3 master R-processes will be triggered. Within each master R-process I parallelize survival prediction across 3 cores using the function pair mcparallel()/mccollect() from the "parallel" R-package (task parallelism), as shown below.


[Screenshot: task parallelism in R]

 

  • I create an R function (scoreFun) to specify a particular task. This function focuses on predicting survival based on one model input parameter.
  • For each call of the mcparallel() function, an R process is started in parallel that will evaluate the expression in the R function scoreFun. I assign each model individually.
  • With the list of assigned tasks I then call mccollect() to retrieve the results of parallel survival prediction (see the sketch after this list).
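A hedged sketch of this task-parallel pattern (the model variable names are assumptions):

library(parallel)
scoreFun <- function(model) predict(model, newdata = eval)  # one task per model
jobs <- list(mcparallel(scoreFun(modelHalf)),               # start one child R process per model
             mcparallel(scoreFun(modelOne)),
             mcparallel(scoreFun(modelTwo)))
res <- mccollect(jobs)  # wait for and gather the three prediction results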


In this manner, the overall data transfer latency is reduced to the size of the data in each subset. Furthermore, we still maintain completeness of the data via parallel R-calls. The consistency of the results of this approach is guaranteed if there is no dependency between the result computations for individual observations in the data set.

 

Performance measurement: With this scenario, an execution time of 2.444 seconds was observed. This represents a 72.54% performance gain over the default benchmark scenario. This represents roughly 43% improvement over the parallel R-call scenario example in section 1, and a 24.26% improvement over the parallel R-runtime execution (with parallel R-libraries) example in section 2. A fantastic result supporting the case for parallelization.


Concluding Remarks


The purpose of this document is to illustrate how techniques of parallelization can be implemented to address performance-related bottlenecks within the default integration setup between SAP HANA and R. The document presented 3 parallelization options one could consider:


  • Trigger parallel R-calls from HANA
  • Use parallel R libraries to parallelize the R-execution
  • Parallelize R-calls across multiple R-hosts.

 

With parallel R libraries you can improve the performance of a triggered R-process execution by spawning additional R-runtime instances executing on the R-host (see section 2). You can either parallelize by data (split data set computation across multiple R-runtimes) or by task (split algorithmic computation across multiple R-runtimes). A good understanding of the nature of the data and the algorithm is, therefore, fundamental to choosing how to parallelize. When executing parallel R-runtimes using R-libraries, we should remember that there is an additional setup overhead incurred by the system when spawning child (worker) R-processes and terminating them. The benefits of parallelism using this option should, therefore, be assessed by prior testing in an environment similar to the productive environment in which it will eventually run.


On the other hand, when using the trigger-parallel-R-calls option, no additional overhead is incurred on the overall performance. This option provides us with a means to increase the number of data transmission lanes between HANA and the R-host, and also allows us to spawn multiple parent R-runtime processes on the R-host. Exploiting this option led to the following key finding: the data transfer latency between HANA and R can, in fact, be significantly reduced by splitting the data set in HANA and then parallelizing the transfer of each subset from HANA to R using parallel R-calls (as illustrated in section 4).





Other Blog Links

Install R Language and Integrate it With SAP HANA

Custom time series analytics with HANA, R and UI5

New SQLScript Features in SAP HANA 1.0 SPS9

How to see which R packages are installed on an R server using SAP HANA Studio.

Quick SAP HANA and R usecase

Let R Embrace Data Visualization in HANA Studio

Connect ABAP with R via FastRWeb running on Rserve

HANA meets R    

Creating an OData Service using R

SAP HANA Application Example : Personal Spending Analysis - HANA Models

[SAP HANA Academy] Live3: Web Services Authorization


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


Continuing the Live3 on HCP series, the SAP HANA Academy's Philip Mugglestone details how to configure the authorizations for the web services aspect of the Live3 application. Philip will define application privileges and user roles. Check out Philip's video below.


(0:20 – 5:05) Adding and Configuring the .xsprivileges and user.hdbrole Files

 

Picking up from the previous video, Philip first removes the index.html file from the live3 project in the SAP HANA Web-based Development Workbench. Next he opens the services folder from the Live3 code repository on GitHub and selects the user.hdbrole and .xsprivileges files. Then he drags and drops the files into the Multi-File Drop Zone of the SAP HANA Web-based Development Workbench. This will automatically install and run the files on the server.

 

The .xsprivileges file defines the privilege, called execute, to run the application. The user.hdbrole file creates a role that has access to tables and views. The role code initially fails to activate, as it's just boilerplate code with a template account and schema name. So insert your personal trial account name in both places and paste your schema name into the two marked lines in the code. Hitting the save button will save the role.


(5:05 – 9:30) - Setting the Proper Authorization for the HDB Role

 

Next Philip makes sure that the .xsaccess file specifies that the user must have the proper role before they can use the application. Drag and drop the .xsaccess file into the SAP HANA Web-based Development Workbench and then hit F5 to re-execute. The .xsaccess file will have a comment instructing you to paste in your trial name. After pasting, save the file; the authentication method will now be form-based. However, in the HCP developer trial edition the authentication will be done automatically.

 

Now we will use another HCP-specific stored procedure to authorize a user for this HDB role. Open the 06 setupAuthorizations.sql file from the scripts GitHub folder and paste the lines of code into a new SQL console in Eclipse. Make sure you add your p trial account number before executing the call in your SAP HANA system to successfully grant the role.
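The call has roughly the following shape (a hedged sketch assuming the HCP_GRANT_ROLE_TO_USER procedure referenced by the script; the role and account names are placeholders, so take the exact statement from the 06 setupAuthorizations.sql file):

CALL "HCP"."HCP_GRANT_ROLE_TO_USER"('p1234567890trial.live3::user', 'p1234567890');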


Follow along with the Live3 on HCP course here.


The SAP HANA Academy offers over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy


[SAP HANA Academy] Live3: Web Services - Setup OData


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


In the next installment of the Live3 on HCP course, the SAP HANA Academy's Philip Mugglestone introduces OData and shows how to set up an OData service for the live3 web services project so you can access tables and views on the web. Check out Philip's video below.


(0:12 – 1:05) Introduction to OData

 

OData is short for Open Data Protocol. OData provides REST services that allow you to access the data in databases via URLs. You can liken it to ODBC for the web. SAP HANA supports REST services through OData out of the box, so there is nothing to set up or install; you can simply activate OData against your tables and views. For more information about OData visit odata.org.

 

(1:05 – 6:30) Configuring and Examining the services.xsodata File

 

With the live3 project selected in the SAP HANA Web-based Development Workbench, drag the services.xsodata file from the services folder in the Live3 GitHub code repository and drop it into the Multi-File Drop Zone.

 

The code in the services.xsodata file references data from the application schema, so you must run a global replace to insert your personal schema name. After clicking the save button the OData service is activated.

Screen Shot 2015-04-28 at 12.46.23 PM.png

The service references each table or view by name so that its data can be accessed across the web. The code specifies the table or view together with its schema and assigns an entity name, because OData works with entities and properties as opposed to tables and columns. The first part of the code references the Tweets table and exposes it as the entity Tweets in the OData service. The create, update and delete operations are forbidden as we only want to read the data on the web.

 

A similar process is done for the Tweeters view. However, it's slightly different, as OData requires a key and views don't have primary keys. So you must specify the name of a column, in this case user, which will then serve as the primary key.

 

The next section of code is for the TweetersClustered data and also has user as its key. This piece of code differs from the prior two sections in that we want to add a navigation that associates it with another entity. This is similar to a SQL join.

 

The fourth part of the code sets up the OData service for the Clusters view with ClusterNumber as the primary key and a navigation to set up an association.

 

(6:30 – 9:00) How Associations Work

 

The final two sections of the code are association statements that define the associations between pairs of entities. In the first statement the principal is the Clusters entity with ClusterNumber as the key. For each ClusterNumber there may be multiple people in that cluster, so the dependent entity is the TweetersClustered entity with ClusterNumber as the key and a multiplicity of many (*). This is effectively equivalent to an inner join where data is read from the Clusters table and joined with the data read from the TweetersClustered table in order to get all of the Tweeters.

 

Names are given to the associations so they can be referenced. For example, the Clusters2Tweeters association contains a property that holds the name of each individual Tweeter. The same one-to-many type of relationship is created for all of the Tweeters that have been clustered so we can see their individual tweets; that association, Tweeters2Tweets, will be referred to as Tweets.
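Putting the pieces together, a condensed sketch of the services.xsodata structure looks like this (schema, package and view names are placeholders for your own):

  service {
      "NEO_ABC123"."p1234trial.dev.live3::tweets" as "Tweets"
          create forbidden update forbidden delete forbidden;
      "NEO_ABC123"."p1234trial.dev.live3::tweeters" as "Tweeters" key ("user");
      "NEO_ABC123"."p1234trial.dev.live3::tweetersClustered" as "TweetersClustered"
          key ("user") navigates ("Tweeters2Tweets" as "Tweets");
      "NEO_ABC123"."p1234trial.dev.live3::clusters" as "Clusters"
          key ("ClusterNumber") navigates ("Clusters2Tweeters" as "Tweeters");

      association "Clusters2Tweeters" principal "Clusters"("ClusterNumber") multiplicity "1"
          dependent "TweetersClustered"("ClusterNumber") multiplicity "*";
      association "Tweeters2Tweets" principal "TweetersClustered"("user") multiplicity "1"
          dependent "Tweets"("user") multiplicity "*";
  }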

 

Hitting the execute button with services.xsodata selected will set up a working OData service. It will generate the XML file pictured below on the web.

Screen Shot 2015-04-28 at 12.59.39 PM.png

Follow along with the Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

[SAP HANA Academy] Live3: Web Services - Using OData


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


Continuing the Live3 on the SAP HANA Cloud Platform course, the SAP HANA Academy’s Philip Mugglestone provides a closer examination of the previously set up OData web services by running some example queries. Watch Philip's tutorial video below.

Screen Shot 2015-04-29 at 12.10.32 PM.png

(0:20 –  4:20) Viewing Meta Data and Entities in JSON Format

 

Running the services.xsodata file has generated a URL based on the trial account (p number), SAP HANA instance (dev), project (live3), and file (services.xsodata). Calling the file lists the existing entities (Tweets, Tweeters, TweetersClustered and Clusters).

 

With OData we can make requests via URL-based syntax. For example, appending /$metadata to the end of the URL displays the full metadata for all of the properties within each entity. The metadata that OData returns is self-describing, which is very important as SAPUI5 can read it automatically to generate screens.

Screen Shot 2015-04-29 at 12.16.49 PM.png

Be careful when looking at the individual entities in OData, as there may be hundreds of thousands of rows (Tweets, for example) and you don’t want to read them all. Appending /Tweets?$top=3 to the URL displays only the top 3 Tweets in XML format.

Screen Shot 2015-04-29 at 12.24.54 PM.png

The XML format appears a bit messy, so you can convert it to JSON format by adding &$format=json to the URL. By default the JSON output isn’t as readable as you might like, so you can install the free JSONView extension from the Chrome Web Store to display it in a nicely readable format.

Screen Shot 2015-04-29 at 12.34.42 PM.png

To see only certain parts of an entity's data, for instance the id and text columns, you can append &$select=id,text to the URL. This returns only the id and text values, as well as the meta data for the Tweets entity.

Screen Shot 2015-04-29 at 12.37.03 PM.png

(4:20 – 6:30) OData's Filter, Expand and Count Parameters

 

Philip next shows the data for his Clusters entity by adding /Clusters?$format=json to the URL. Similar to a where clause in SQL, Philip filters his results by adding &$filter=clusterNumber eq 1 to display only his first cluster.

Screen Shot 2015-04-29 at 12.38.13 PM.png

To see the Tweeters association from the Clusters entity Philip adds an expand parameter by entering &$expand=Tweeters at the end of the URL. This returns all of the information for each of the individual Tweeters in cluster 1.

Screen Shot 2015-04-29 at 12.39.49 PM.png

To see the number of rows for an entity add /$count after the entity’s name in the URL.
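To recap, the query options shown in this video can be combined freely. For example (paths relative to the service URL, values purely illustrative):

  /services.xsodata/Tweets?$top=3&$format=json
  /services.xsodata/Tweets?$top=3&$format=json&$select=id,text
  /services.xsodata/Clusters?$format=json&$filter=clusterNumber eq 1&$expand=Tweeters
  /services.xsodata/Tweets/$count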


Follow along with the Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

[SAP HANA Academy] Live3: Web Service - Setup XSJS


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


Part of the SAP HANA Academy’s Live3 on HCP course, the below video tutorial from Philip Mugglestone shows how to add server-side scripting capabilities to the live3 web services project. With this you can configure actions to refresh the clustering and reset the database. Watch Philip’s video below.

Screen Shot 2015-05-05 at 3.27.02 PM.png

(0:35 – 3:00) Inserting the Proper Schema Name and P Number into the services.xsjs Code

 

With the live3 project selected in the SAP Web-based Development Workbench, open the services folder of the Live3 GitHub code repository and drag the services.xsjs file into the Multi-File Drop Zone. First you must do a global replace to insert your schema name. You must also insert your account p number where marked in the code, as the code checks whether the user has the execute privilege. After verification the user can perform the reset and/or cluster operation.

 

(3:00 – 6:00) Examining the Code’s Logic

 

The code is very straightforward. It first checks whether the user has the privilege to execute. If so, the URL command (cmd) parameter is read. If cmd=reset it calls the reset function, and if cmd=cluster it calls the cluster function. If neither reset nor cluster is entered it displays an invalid command message. If the user isn't authorized, a not authorized message appears.
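In outline, the dispatcher reads like the following sketch (the privilege name is a made-up placeholder; $.session.hasAppPrivilege, $.request.parameters.get and $.response.setBody are standard XSJS API calls):

  // sketch of the services.xsjs dispatcher logic (assumed names)
  if (!$.session.hasAppPrivilege("p1234trial.dev.live3::execute")) {
      $.response.setBody("not authorized");
  } else {
      var cmd = $.request.parameters.get("cmd"); // read the URL parameter
      if (cmd === "reset") {
          reset();      // empty and rebuild the tables and the text index
      } else if (cmd === "cluster") {
          cluster();    // re-run the PAL clustering
      } else {
          $.response.setBody("invalid command: " + cmd);
      }
  }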

Screen Shot 2015-05-05 at 3.52.21 PM.png

The reset function’s code first sets the schema and then truncates (empties) the Tweets table that is loaded directly via node.js. Next it empties the PAL results and centers tables. Then the full-text analysis index is dropped and recreated using the same code that was used earlier in the setup text analysis piece. The only difference from earlier is that the code is modified with a backslash in front of each single-quote character in the SQL.
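In plain (unescaped) form, the SQL being wrapped is along these lines; the schema, table and index names here are stand-ins, and the text analysis configuration is assumed to match the one used in the earlier setup:

  TRUNCATE TABLE "NEO_ABC123"."p1234trial.dev.live3::tweets";
  DROP FULLTEXT INDEX "NEO_ABC123"."tweetsIndex";
  CREATE FULLTEXT INDEX "tweetsIndex"
      ON "NEO_ABC123"."p1234trial.dev.live3::tweets" ("text")
      CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'
      TEXT ANALYSIS ON;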

Screen Shot 2015-05-05 at 3.48.42 PM.png

The cluster function’s code is similar to the setup Predictive SQL code. The schema is set and the PAL results and centers tables are truncated. Then the procedure is called. Unlike in the SQL console, the results are not shown directly; the insert statement for the results table first contains question-mark placeholders, and the JavaScript code then loops over the result set and inserts those results into the table.

Screen Shot 2015-05-05 at 3.51.17 PM.png

(6:00 – 7:30) Testing services.xsjs

 

Executing the services.xsjs file will open a web page that displays invalid command: undefined. This should happen, as it didn’t recognize the default command that was specified. So delete the default anti-caching parameter that appears after /services.xsjs? in the URL and then add a valid command, for instance cmd=cluster.

 

Entering the cluster command won’t display anything on the web page at this point. However, to confirm that the file has run with a valid command, open the developer tools (Ctrl+Shift+I in Chrome) and go to the Network tab. There you will find information about the call.

Screen Shot 2015-05-05 at 3.53.00 PM.png

Follow along with the Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

[SAP HANA Academy] Live3: Web Services - Debugging


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


The SAP HANA Academy’s Philip Mugglestone continues the Live3 on HCP course by showing how the server-side scripting application can be easily debugged using the SAP HANA Web-based Development Workbench. Check out Philip’s tutorial video below.

Screen Shot 2015-05-07 at 10.29.27 AM.png

(0:15 – 4:10) How to Debug the XSJS Application

 

First identify the user account, which is listed near the top right corner of the SAP HANA Web-based Development Workbench. Right-click on the user name (in Philip’s case it begins with DEV_) and select inspect element. Then copy the user account name so it can be used later in the debugging.

Screen Shot 2015-05-07 at 10.37.01 AM.png

Now a definition must be created that enables this user to perform debugging. While logged into the server, go to the URL displayed below ending with /sap/hana/xs/debugger. On the Grant Access screen paste the copied account name into the Username text box. Set an expiration date and time for when the debugging access will cease and then click the grant button. Now this user can debug the session.

Screen Shot 2015-05-07 at 10.38.24 AM.png

Back in the SAP HANA Web-based Development Workbench choose the services.xsjs file and hit the execute button to open it in a new browser tab. Append cmd=cluster1 to the end of the URL to return an invalid command. Now open the developer tools (Ctrl+Shift+I in Chrome) and navigate to the Resources tab. Then expand the Cookies folder and open the session cookie file. Identify the value of the xsSessionId.

Screen Shot 2015-05-07 at 10.44.59 AM.png

Now back in the SAP HANA Web-based Development Workbench click the settings button. Then choose the value of the xsSessionId as the session to debug and click apply. A message will appear saying that the debugger has been attached to the session. Next set a break point where the command is being processed in the code.

Screen Shot 2015-05-07 at 10.46.16 AM.png

Now make a call in the URL. Philip enters cmd=cluster2. The screen won’t change and will still say Invalid Command: cluster1, with the status bar saying it is waiting for the hanatrial.ondemand.com server. This is because the debugger has been opened in the SAP HANA Web-based Development Workbench. You will see that the cluster2 command has been entered and the debugger has stopped at the break point that was set. You have the normal debugging options such as step in, step over, step through, etc. If you hit the resume button in the debugger, the page will then say Invalid Command: cluster2.

Screen Shot 2015-05-07 at 10.55.56 AM.png

This is how you can access the debugger to perform real-time debugging when using XS in SAP HANA.

 

Follow along with the Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

[SAP HANA Academy] Live3: Web Services - Authentication


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


In the next part of the SAP HANA Academy’s Live3 on HCP course, Philip Mugglestone explains why a “proxy” authentication server is needed to access your SAP HANA Cloud Platform web services from a SAP HANA Cloud HTML5 application. Watch Philip’s tutorial video below.

Screen Shot 2015-05-08 at 10.05.31 AM.png

(0:12 – 3:00) Issue with HTML5 Authentication for the HCP Developer Trial Edition

 

Prior to this tutorial the web services were set up using the SAP HANA instance. We now want to access our Live3 app, OData, and server side JavaScript from a front end application UI.

 

Back in the SAP HANA Cloud Platform Cockpit, our SAP HANA instance now has one application. Clicking on the application shows the URL, which you can navigate to and then enter a command as we've done in the earlier videos in the Live3 course.

 

There is one slight complication when building an HTML5 front-end application: our SAP HANA instances in the developer trial edition of HCP use SAML 2.0 authentication. Normally, to access a backend system from an HTML5 application you use a destination that references a folder or URL. The destination appears to be local to where the HTML5 application is hosted, but it is routed to a backend system that can be hosted anywhere on the internet (even behind a firewall if you use the cloud connector). The destination is very important as it allows you to get around the cross-origin restrictions of most browsers.

 

The trial edition of the SAP HANA Cloud Platform uses only SAML 2.0 as the authentication for the SAP HANA instance, and SAML 2.0 is not an authentication method available in the destination configuration in the SAP HANA Cloud Platform Cockpit. Fortunately there is a workaround.

Screen Shot 2015-05-08 at 10.32.13 AM.png

(3:00 – 4:45) Explanation for Proxy’s Necessity via the Live3 Course Architecture

 

Normally the browser or mobile HTML5 app would access the SAP HANA Cloud Platform where the HTML5 app is hosted, which would then access a backend system, in our case the SAP HANA native web services, through a destination. However, we can’t connect the destination directly to the SAP HANA XS instance. So a destination is defined that goes through the SAP HANA Cloud Connector installed locally on the desktop, and a proxy is inserted between the SAP HANA Cloud Connector and the native web services to handle the SAML 2.0 authentication and connect back to the destination. This would not be run in production; it is used in this course purely as a workaround for a technical limitation of the free trial developer edition of the SAP HANA Cloud Platform.

Screen Shot 2015-05-08 at 10.35.08 AM.png

(4:45 – 5:45) Locating the Proxy

 

The necessary proxy was created by SAP Mentor Gregor Wolf. Search Google for “Gregor Wolf GitHub” and click on the link to his page. Under the popular repositories section open the hanatrial-auth-proxy repository. Written in node.js, the proxy will allow us to access the SAP HANA web services via a destination. The next video details how to download and install the proxy.


Follow along with the SAP HANA Academy's Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

[SAP HANA Academy] Live3: Web Services - Authentication Setup Proxy


[Update: April 5th, 2016 - The Live3 on HCP tutorial series was created using the SAP HANA Cloud Platform free developer trial landscape in January 2015. The HCP landscape has significantly evolved over the past year. Therefore one may encounter many issues while following along with the series using the most recent version of the free developer trial edition of HCP.]


Continuing from the previous tutorial video of the SAP HANA Academy’s Live3 on HCP course, Philip Mugglestone shows how to set up the “proxy” authentication server for the HCP trial developer edition. Watch Philip's tutorial video below.

Screen Shot 2015-05-11 at 10.43.55 AM.png

(0:20 – 3:30) Installing the Prerequisites for the hanatrail-auth-proxy File and Modifying its Code

 

On the hanatrial-auth-proxy page on SAP Mentor Gregor Wolf’s GitHub, click on the Download ZIP button. Extract the downloaded zip and then open a command window in the hanatrial-auth-proxy folder.

 

First a few prerequisite node.js modules (cheerio and querystring) must be installed. In the command window enter npm install cheerio. Wait a few seconds for the cheerio installation to complete before entering npm install querystring.

Screen Shot 2015-05-11 at 12.04.10 PM.png

*Note – The component has been updated since this video was recorded. Simply use “npm install” from the main hanatrial-auth-proxy folder. There is now no need to install cheerio and querystring explicitly.*

 

Next we need to make a few changes to the hanatrial-auth-proxy code. Right-click to edit the config.js file with Notepad++. First you must set a port to use; this will create a web server similar to the node.js server we created earlier for loading the Twitter data.


You must also insert the correct host. The host is the beginning of the services.xsodata URL; for example Philip’s host is s7hanaxs.hanatrial.ondemand.com. Leave the timeout and https settings as they are before saving the file.

Screen Shot 2015-05-11 at 12.09.28 PM.png

*Note – The config.js and server-basic-auth files have moved to the examples subfolder. You must still verify that the “host” option in examples/config.js matches your SAP HANA XS instance.*

 

(3:30 – 6:30) Running the Proxy

 

To start the proxy application, back in the command window enter node server-basic-auth.js. A message will appear saying the SAP HANA Cloud Platform trial proxy is running on the configured port.

Screen Shot 2015-05-11 at 12.17.15 PM.png

Open a new web browser tab and enter localhost:<port>/<path of your application>. In Philip’s example he enters the URL displayed below.
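The pattern, with made-up account and port values purely for illustration, looks like this:

  http://localhost:8888/p1234trial/dev/live3/services/services.xsodata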

Screen Shot 2015-05-11 at 11.55.39 AM.png

After logging in with your HCP p number, the authentication for the SAP HANA instance using SAML 2.0 should be performed automatically. Effectively the proxy, acting as a local web server, now talks as if it were the SAP HANA Cloud Platform trial edition. You can make all of the calls that were demonstrated in previous videos (e.g. metadata, clusters) using the localhost URL.

 

Follow along with the SAP HANA Academy's Live3 on HCP course here.


SAP HANA Academy - Over 900 free tutorial videos on using SAP HANA and SAP HANA Cloud Platform.


Follow @saphanaacademy

SD Simplified data models - S/4 HANA On-premise 1511


Hello All

 

This document gives key points on the SD data models simplified in S/4 HANA On-Premise 1511.

 

Consultants can refer to SAP Note 2198647 for guidelines.

 

  • Elimination of status tables VBUK and VBUP - status fields have been moved to the corresponding header and item tables: VBAK and VBAP for sales documents, LIKP and LIPS for deliveries, and VBRK for billing documents.
  • Simplification of the document flow table VBFA
  • Field length extension of the SD document category: data element VBTYP (Char 1) has been replaced by data element VBTYPL (Char 4), and field VBTYP_EXT (Char 4) has been eliminated
  • Elimination of redundancies: document index tables VAKPA, VAPMA, VLKPA, VLPMA, VRKPA, VRPMA
  • Elimination of the rebate index table VBOX - you may refer to Rebate Optimization

 

The key benefits are:

  • Reduced memory footprint: simplified document flow, elimination of index tables, fewer aggregates.
  • Increased performance of HANA queries and code pushdown: one select statement instead of two, and easier joins between header and items including status and business data (see the sketch below).
  • Robust rebate processing and no redundancies due to aggregates.
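As an illustration of the code pushdown benefit, reading a sales document's overall status in ECC required a join to VBUK, whereas in S/4 HANA 1511 the same field sits in VBAK itself (a schematic sketch, not taken from the note):

  -- Before (ECC): overall processing status GBSTK lives in VBUK
  SELECT a.VBELN, u.GBSTK
    FROM VBAK AS a
    JOIN VBUK AS u ON u.VBELN = a.VBELN;

  -- After (S/4 HANA 1511): status fields moved into VBAK
  SELECT VBELN, GBSTK
    FROM VBAK;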

Capture.JPG

 

Thanks for reading

 

Shubham


LSMW was in ECC, HANA S/4 is all about Rapid Data Migration (RDM)


http://service.sap.com/public/rds-dm2s4op.

 

Hi

 

For the users and consultants wondering about data migration in S/4 HANA, and about the scope of LSMW (our classic ECC R/3 migration tool) in S/4 HANA, here is the answer, friends.

 

The Legacy System Migration Workbench (LSMW) is available within the S/4 HANA On-Premise 1511 edition but cannot be considered a migration tool.

Its current content and the interfaces it uses will not be adapted to the SAP S/4 HANA data model.

 

Users can expect restrictions around transaction recordings, as recording is not possible with the new Fiori screens on the one hand and the changed interfaces on the other.

 

SAP Note 2287723 talks about the same.

 

Capture.JPG

 

The recommendation is to consider the data migration options offered for SAP S/4 HANA On-Premise 1511 using the available pre-configured migration content.

Rapid Data Migration (RDM) based on SAP Data Services can also be considered for challenging migrations.

 

Visit the link pasted at the top for details.

 

Cheers!!!

 

Shubham

SAP HANA Data Sheet


SAP HANA is built on a next-generation, massively parallel, in-memory data processing design paradigm to enable faster information processing. This architecture enables converged OLTP and OLAP data processing within a single in-memory, column-based data store with ACID compliance, while eliminating data redundancy and latency. By providing advanced capabilities such as predictive analytics, text analytics, spatial processing and data virtualization on the same architecture, it further simplifies application development and processing across big data sources and structures. This makes SAP HANA the most suitable platform for building and deploying next-generation, real-time applications and analytics.

 

This data sheet explains the capabilities, features and benefits of the SAP HANA platform.

Exercise: How to selectively import meta-data and table data in HANA


In this exercise, we will first import the meta-data of a table (i.e. its structure) and then the table data.

To selectively import the required tables, we will pull data from ECC via BO Data Services (BODS).

 

It is like ECC (source) => BODS => HANA studio (target),

so we need details such as host name, port number or server address, and the user ID and password for these three systems.

You can get the ECC details from your launch pad.

Right-click on your HANA DB server under your user ID in HANA studio > Properties, and you will get the HANA host name info.

 

Steps involved:

Step 1: In the BODS system: we will create the connections from BODS to ECC and from BODS to HANA; we call them datastores

Step 2: Import tables into your ECC datastore (e.g. <<vikg_ecc>>), specifying MARA, MAKT or the required tables

Step 3: Go to HANA studio > Modeler tab > set up the Configure Import Server link (with respect to Data Services)

Step 4: In HANA: selectively import your tables (MARA, MAKT) using the <<vikg_ecc>> BODS datastore into your required schema

Check that your tables have been created

Step 5: In the BODS system, import your tables into the HANA datastore (e.g. <<vikg_hana2>>)

Step 6: Create a project > job > dataflow

 

Picture3.png

 

Step 1: In the BODS system: we will create the connections from BODS to ECC and from BODS to HANA

 

1.a. (BODS_ECC): Enter the ECC connection details: go to the datastore tab (in the bottom panel) > create new > give a name

> select datastore type as SAP Applications

Give the other details:

In my case: server name STECCSLT, user ID and password, and

in the advanced tab, enter the client ID (e.g. 800) and system number (e.g. 74 in my case)

 

 

vikg_ecc.PNG

 

1.b. (BODS_HANA): Enter the HANA connection details:

go to the datastore tab > create new > give a name > select datastore type as Database > select database type as SAP HANA > select the database version

Give the other details:

In my case: server name sthana, port 30015; enter your user ID and password


So we have created the vikg_ecc and vikg_hana2 datastores.


vikg_hana2.PNG



Step 2: In the BODS system: import tables into your ECC datastore (vikg_ecc). Right-click > Import By Name and enter e.g. the MARA and MAKT tables


import_bname01.PNG




tables_fromecc.PNG


Step 3: Go to HANA studio > Modeler tab > click on the <<Configure Import Server>> link


hana_import_server.PNG

Give details:

Select your HANA system > give your BODS server address (e.g. STBODSSRVR) > give your repository name (e.g. user5_repo)

> leave the ODBC data source blank > enter port 8080

 

 

bods_server_detals__03.PNG

 

 

Step 4: Also notice the other <<Import>> button (you can find it below the Configure Import Server link) > click on it >

choose selective import of meta-data > choose your HANA DB server >

 

 

import_03.PNG

selective_import.PNG

Now we are first importing the metadata of tables from ECC via BO Data Services, so choose your datastore <<vikg_ecc>>

(which will appear as an option; this is the same datastore we just created in BODS).

Select the type of objects as tables. Now search for and select your tables, which we imported into the BODS vikg_ecc datastore, e.g. MARA or MAKT.

Choose the schema into which you want to import your tables.

Then click next and finish.

selective_import02.PNG

Then the table will appear in our schema, but it will not have any data in it.

Now only the structure of the tables is there in HANA, so we will transfer data into these tables via BODS.

 

hana_structure.PNG

 

Step 5: In the BODS system, right-click and import your tables into the HANA datastore (<<vikg_hana2>>).


The owner is the schema where you have just created the metadata in HANA studio.

 

import_bods_01.PNG

bods_hana_tables.PNG

 

Step 6: Create a project > job > dataflow

In the dataflow, pull the MARA table from the <<vikg_ecc>> datastore (drag and drop),

let it pass through a query transformation > connect it to MARA from the <<vikg_hana2>> datastore (drag and drop),

and select vikg_hana2.MARA as the target table.

 

In the query transformation, map all the fields to the output.

The dataflow would look like this..

dflow01.PNG

 

 

 

Next, execute the job > select your dataflow > wait until your job finishes successfully.

 

Go to HANA and preview your data.

 

mara_data.PNG

 

 

Thanks & best regards,

Vikas G

How to check your HANA Devices Health with HWCCT Tool and MiniChecks


There are two tools, the SAP HANA HW Configuration Check Tool (HWCCT) and Mini Checks, for checking the health of an SAP HANA system.

There may be more tools than these two, but in this document I want to cover HWCCT and Mini Checks and give detailed how-to guides for them.

 

Notes and links that need to be read:

1943937 - Hardware Configuration Check Tool - Central Note

Install the SAP HANA HW Configuration Check Tool

CERTIFIED SAP HANA® HARDWARE DIRECTORY

1969700 - SQL statement collection for SAP HANA

1999993 - How-To: Interpreting SAP HANA Mini Check Results

 

How to use the Hardware Configuration Check Tool (HWCCT)

There is an attachment in note 1943937 named SAP_HANA_Hardware_Configuration_Check_Tool_1.8.pdf. You can find general info and the detailed configuration and usage of the tool in this note and document.

There is also a section (2.9.1 Install the SAP HANA HW Configuration Check Tool) on the tool in the SAP HANA Administration Guide.

 

Below you can find a practical run of this tool on our scale-up HANA system.

 

1- Determine the right tool for your hardware

You should find your appliance in the CERTIFIED SAP HANA® HARDWARE DIRECTORY and go to the details. Find your hardware's certified scenario like the one in the picture below. Then check note 1943937 to find the right tool. E.g.: "All HWCCT tests of appliances (compute servers) certified with scenario HANA-HWC-AP SU 1.1 or HANA-HWC-AP RH 1.1 must use HWCCT of SAP HANA SPS10 or higher or a related SAP HANA revision".

HWCCT_CertScenario.jpg

 

2- Download the determined tool, copy to the server and install it

On the SAP Software Download Center (SWDC) you can find the tool "SAP HANA HW CONFIG CHECK 1.0" under SAP HANA PLATFORM EDIT. 1.0; download it.

Install it with SAPCAR -xf HWCCT_112_xxxx.SAR

 

3- Prepare the test_config.json file

There are three test modules: the Landscape Test Module, the File System Test Module, and the Network Test Module.

 

There is a caution for the File System and Network Test Modules in the document: "The test should only be used before going into production. It should only be used on production machines when this has first been requested by SAP Support".

 

Preparation of the JSON file is described in detail and templates are given for all test modules. I prepared mine as below. Bearing in mind the caution above, you can use either test ID 1 only or all of them according to your scenario. I also attached the JSON file to the document.

HWCCT_config.jpg
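In case the attachment is not available, here is a rough skeleton of a landscape-only test_config.json (the structure follows the templates in the PDF attached to note 1943937; the hostname and values are placeholders, so verify against the templates for your tool version):

  {
      "report_id": "landscape_test",
      "use_hdb": false,
      "blades": ["hanaprd"],
      "tests": [
          { "package": "LandscapeTest", "test_timeout": 0, "id": 1, "config": {} }
      ]
  }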

4- Run the tool and the result

Log in to your HANA DB host system

Change directory to the hwcct folder

 

hanaprd:/hana/shared/hwcct112 # source envprofile.sh

hanaprd:/hana/shared/hwcct112 # ./hwval -f test_config.json

 

And the result will look like this:

HWCCT_eval.jpg

It could also look like this:

HWCCT_eval2.jpg

How to use Mini Checks

The other health check tool is Mini Checks, which come as attachments to note 1969700 - SQL statement collection for SAP HANA.

This tool is explained in detail in the document Keeping your HANA system healthy with Mini Checks by Ronald Konijnenburg.

So I will not go into detail here and will just give a little info about the tool.

 

Download the note's SQL_Statements.zip attachment and import the statements into the System Information tab of the SAP HANA Administration Console. You can find the Mini Check SQL scripts under the Configuration and Minichecks folders. Run the script that suits the version of your system.

HANA_Minichecks.jpg

 

When you run the script you will get a result like the one below. Focus on column C and the rows marked with an X; there is an SAP note for each such row. You can read and apply the notes, which are explained in detail in SAP Note 1999993 - How-To: Interpreting SAP HANA Mini Check Results.

HANA_Minichecks_Result.jpg

 

Here are some useful links;

Keeping your HANA system healthy with Mini Checks

2177604 - FAQ: SAP HANA Technical Performance Optimization Service

 

 

This is how I applied the tools.

 

Regards,

Yuksel AKCINAR

Reset the SYSTEM User's Password in HANA DB


Overview

 

If the SYSTEM user's password is lost, you can reset it as the operating system administrator by starting the index server in emergency mode. If your HANA DB is multitenant, this process will not work. My HANA DB revision was 102.04.

 

Prerequisites

 

You have the credentials of the operating system administrator (<sid>adm).

 

Procedure

 

Step 1: Log on to the server on which the master index server is running as the operating system user (that is, the <sid>adm user).

 

Step 2: Open a command-line interface.

 

Step 3: Shut down the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StopSystem HDB

Step3.png

Step 4: In a new session, start the name server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbnameserver

Step4.png

The session will appear to hang; leave it running.

 

Step 5: In a new session, start the compile server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbcompileserver

Step5.png

The session will appear to hang; leave it running.

 

Step 6: In a new session, start the index server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbindexserver -resetUserSystem

Step6.png

The following prompt appears: resetting of user SYSTEM - <<<new password>>>

 

Step 7: Enter a new password for the SYSTEM user.

You must enter a password that complies with the password policy configured for the system.

The password for the SYSTEM user is reset and the index server stops.

 

Step 8: In the terminals in which they are running, end the name server and compile server processes by pressing Ctrl+C.

 

Step 9: In a new session, start the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StartSystem HDB

 

 

Note:

 

In a scale-out system, you only need to execute the commands on the master index server.


Results

 

The SYSTEM user's password is reset. You do not have to change this new password the next time you log on with this user, regardless of your password policy configuration.
