
Tips, Experience and Lessons Learned from multiple HANA projects (TELL @ HANA - PART 1)


Hello All,

 

It's been some time now that I have been working with HANA and related areas like SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here that should come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all the HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to arrive at the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision 82 or higher.

 

Part 2 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

Part 4 of this series can be found here -->  http://scn.sap.com/docs/DOC-65343

 

 

1) Related to HANA:

 

Use Case: We have a table in a HANA schema and were asked whether there is an option to find a where-used list showing where the table is used.

Table Name: STKO.

Solution: Go to schema SYS.

There you will find a view named OBJECT_DEPENDENCIES.

You will get the dependent information in that view.

 

In SQL Terms: SELECT * FROM "SYS"."OBJECT_DEPENDENCIES" where BASE_OBJECT_NAME = 'STKO'

PIC1.JPG
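If you also want to see which schema each dependent object belongs to, you can restrict the projection to the relevant columns of that view (a small sketch; the column names below are the standard ones of SYS.OBJECT_DEPENDENCIES):

SELECT BASE_SCHEMA_NAME, BASE_OBJECT_NAME, DEPENDENT_SCHEMA_NAME, DEPENDENT_OBJECT_NAME, DEPENDENT_OBJECT_TYPE FROM "SYS"."OBJECT_DEPENDENCIES" WHERE BASE_OBJECT_NAME = 'STKO'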

 

--> The following is another way to see the 'Where-Used List':

 

In the HANA Studio left navigator pane > Catalog > any schema > Tables folder > context menu (right-click on the table), select the option 'Open Definition'.

Open Def.jpg

Then on the right-hand side, below the editor pane alongside the Properties tab, you will see the 'Where-Used List' tab.

Where-Used List.jpg

 

2)  Related to HANA/SLT:

 

Use Case: We have a new SLT configuration enabled for a source system.

Which tables are created automatically under the target schema defined in the configuration?

 

Observation: We created a non-SAP configuration in SLT, and MII_SQL was the configuration name provided in SLT.

Now on the HANA side, you will see that the schema MII_SQL has the following tables by default.

PIC2.png
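If you prefer SQL over browsing the catalog, the tables created under the target schema can also be listed from the catalog view TABLES (a sketch; the schema name is the one from this example):

SELECT TABLE_NAME FROM "SYS"."TABLES" WHERE SCHEMA_NAME = 'MII_SQL' ORDER BY TABLE_NAME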

 

3)  Related to HANA:

Use Case: We have a HANA Information View. We want to know the number of records available in its output.

 

Solution: HANA Information View --> Semantics --> Data preview --> Show Log --> Generated SQL.

Pic3.png

 

 

 

Copy the "_SYS_BIC"."sap.hba.ZDBR44364/CV_FMIFIIT" (my calculation view for the purpose of this document).

Now write an SQL command against it.

 

Pivc4.png
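As a sketch, the count query looks like the following (the package path and view name are the ones from my example above and will differ in your system; if the view has mandatory input parameters, they have to be supplied as well):

SELECT COUNT(*) FROM "_SYS_BIC"."sap.hba.ZDBR44364/CV_FMIFIIT"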

 

 

4)  Related to HANA:

Use Case: We need to connect to a HANA cloud system. How do we do that?

 

Solution: Initially, when we open HANA Studio, we will see the following:

p5.png

 

Now click 'Install New Software'.

p6.png

 

Add https://tools.hana.ondemand.com/kepler

 

Once it is installed, you will now see the option to add the Cloud System in HANA Studio.

 

p7.png

 

While connecting to the cloud system, you might encounter the following error:

p8.png

 

 

p9.png

 

Access the following path (Preferences) and make the required changes in the HTTP and HTTPS line items.

P1.JPG

 

 

 

Sometimes, you might get the following error message.

p1.JPG

This happens when the service is temporarily down; you should be able to connect to the HANA cloud system after some time, so please try again later.

 

Sometimes, you might get the following error:

Untitled.png

The workaround we used to overcome this issue was to reinstall the Kepler components into Eclipse / HANA Studio.

 

5) Related to HANA:

 

Use Case: We have created an Information View, but it failed to activate with the following error message:

p10.png

 

Solution: Execute the SQL command

GRANT SELECT ON SCHEMA <Schema_Name> TO _SYS_REPO WITH GRANT OPTION;

Once this SQL is executed, the model validation will be successful.
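For example, if the information view reads from the schema MII_SQL used earlier in this document, the statement would be (the schema name here is only an illustration):

GRANT SELECT ON SCHEMA "MII_SQL" TO _SYS_REPO WITH GRANT OPTION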

 

 

6)  Related to Lumira:

 

Use Case: Lumira hangs during loading at the following screen.

 

Capture65.JPG

 

 

Solution: This sometimes happens due to an issue in the user profile.

Go to C:\Users --> find your user --> delete the .sapvi file and try loading Lumira again.

 

 

7) Related to HANA:

 

Use Case: Using the option 'SAVE AS DELIMITED TEXT FILE' (comma delimiter), I had to export a table whose columns contained values like the following:

P1.JPG

Disclaimer: In a real system this should not happen, as an ID containing commas is not a good design.

 

If you observe closely, the 'CMPLID' column values themselves contain commas, and when the table was exported, a new column was created at each comma in the CSV file (the alignment of the columns went wrong).

 

P1.JPG

 

Solution: During the export of the table from HANA, I used the option 'SAVE AS HTML FILE' instead.

 

Once we got that HTML file, it was fed into a third-party tool, http://www.convertcsv.com/html-table-to-csv.htm, which converted the HTML file to CSV.

 

P1.JPG

 

This can further be loaded back to HANA without any issues.

 

 

 

8)  Related to HANA/SLT

 

Use Case: Some tables were missing from the Data Provisioning option in HANA Studio, in a non-SAP source system scenario where the SLT configuration had already been up and running for a long time.

 

Solution: This needs a little more explanation, which was published here on SCN a few days ago. Please find the link below:

http://scn.sap.com/docs/DOC-63399

 

 

9)  Related to HANA:

 

Use Case: You were performing a lot of steps in HANA Studio, and in between you want to perform an activity whose link is available only on the 'Quick Launch' screen, but it is not visible in the UI.

 

Solution: You can use the following option to 'Reset Perspective':

P1.png

 

Alternatively, the following option can be used to open only the 'Quick View' screen.

P1.png

 

 

10) Related to HANA

 

Use Case: SAP has delivered new DUs (say, for Manufacturing OEE) and you have been asked to import the latest DU content into your HANA system.

 

Solution: Log into service.sap.com.

Click on SAP Support Portal.

Click on Software Downloads

Click on Support Packages and Patches

Click on A-Z Alphabetical List and select H

It will take you to a screen like below:

P1.JPG

Download the MANUFACTURING CONTENT to your desktop. It will be a ZIP file.

 

There will be a .TGZ file (not the LANG_.TGZ file) inside, and it needs to be imported into your system using the following option.

 

p1.JPG

 

Once the Delivery Unit is successfully imported, you can check for the same in the 'DELIVERY UNITS' link in Quick Launch in HANA Studio.

 

 

11) Related to HANA:


Use Case: While trying to connect Join_1 and Projection, I was getting the following warning (Compartment Changes). We tried all options to connect the two nodes, but the system was not allowing us to do so.

Capture1.JPG


Solution: Finally, we had to close the whole view and relaunch it. After doing that, we were able to join the nodes.

 

12) Related to HANA:


Use Case: For a POC/demo, we had to generate a huge number of test records (on the order of more than 1 billion) into HANA schema tables.

The main catch here was that the activity was not just about generating junk data, but about generating meaningful data that satisfied certain conditions.

 

Solution:

Two tools were available to fulfil our requirement.

1) DEXTOR --> You can get more details in this video:

https://sap.emea.pgiconnect.com/p7mdn240kw9/?launcher=false&fcsContent=true&pbMode=normal

 

2) HANA Data Generator:

http://scn.sap.com/docs/DOC-42320

 

Eventually we used the second option; it has some limitations and at times you may not get the expected results, but it is indeed a very nice tool.

 

Sample screen where we had given some conditions:

Capture1.JPG

 

NOTE: Please note that it was not working with Java 8, and I had to uninstall Java 8 and install Java 6 to make the tool work.

 

 

Hope this document would be handy!

 

BR

Prabhith-

 

USEFUL NOTES:

1929953 - SAP HANA Studio's Content Folder missing from Modeler Perspective


HANA Rules Framework (HRF)


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented by countless operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development |Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA rules framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png   

Scalability and Performance |HRF as a native SAP HANA solution leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Here is a partial list of SAP solutions that utilize HRF in different domains:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP Fraud Management

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

Unable to Open Alert History Information Due to Large Table _SYS_STATISTICS.STATISTICS_ALERTS_BASE


Recently, a customer reported that a huge number of alerts were shown in SAP HANA Studio/DBACOCKPIT in one of their SAP HANA systems, which had not been monitored for a long time.
Alert Priority.jpg

The alert detail page hangs and either does not return or returns the errors listed below after clicking, for example, the high-priority alerts (in the worst case, the overview page itself hangs).

Return Error.jpg

 

From DBACOCKPIT -> System Information -> Large Tables, I can see that the table _SYS_STATISTICS.STATISTICS_ALERTS_BASE, which contains the alert history, is larger than 30 GB.

 

According to SAP Note 2170779 - SAP HANA DB: Big Statistics Server Table STATISTICS_ALERTS_BASE Leads to Performance Impact on the System, I proceeded as follows.

 

Firstly, since the customer is using the embedded statistics server in an MDC environment, I had to disable the embedded statistics server within the system DB to prevent an endless delete situation (the configuration takes effect immediately; there is no need to restart the HANA DB).

nameserver.ini [statisticsserver] active = false
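The same change can also be made with an SQL statement on the system database instead of editing the ini file directly (a sketch using the standard ALTER CONFIGURATION syntax):

ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('statisticsserver', 'active') = 'false' WITH RECONFIGURE;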

 

Secondly, clean up the old alerts older than a certain number of days (25 in the example below), then check and fix the latest alerts; this took around 30 minutes for me.

DELETE FROM "_SYS_STATISTICS"."STATISTICS_ALERTS_BASE" WHERE "ALERT_TIMESTAMP" < add_days(CURRENT_TIMESTAMP, -25);

 

Then I review the latest alerts and their detailed information and try to fix them one by one. For alerts that do not need to be kept for a long time, I set a shorter retention period.

update _SYS_STATISTICS.STATISTICS_SCHEDULE set RETENTION_DAYS_CURRENT = 10 where ID = 79

 

Thirdly, re-enable the embedded statistics server.

nameserver.ini [statisticsserver] active = true

 

Last but not least, I tried to persuade the customer to monitor the system as part of their daily or weekly tasks.

 

Notes:

1. For more information, please refer to SAP HANA Administration Guide SPS 11 -> 2.5.1.7.3 The Statistics Service -> Data Management in the Statistics Service.

2. View _SYS_STATISTICS.STATISTICS_ALERTS is created using the data in table _SYS_STATISTICS.STATISTICS_ALERTS_BASE.

3. 2073112 - FAQ: SAP HANA Studio -> 10. What can I do if opening the overview tab in the administration console takes a long time?

Opening the overview tab in the administration console often suffers from a high amount of SAP HANA alerts. If it takes many seconds or even minutes to open the overview tab, you can check according to SAP Note 2147247 if the number of alerts in table STATISTICS_ALERTS_BASE is too high and perform a cleanup.
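To check how many alert records are currently stored (for example before and after the cleanup), a simple count on the base table is enough (a sketch):

SELECT COUNT(*) FROM "_SYS_STATISTICS"."STATISTICS_ALERTS_BASE";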

Scientific Publications and Activities of the SAP HANA Database Campus


This is a list of selected publications and activities made by the SAP HANA Database Campus.


2016

  • Ismail Oukid, Johan Lasperas, Anisoara Nica, Thomas Willhalm, Wolfgang Lehner. FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory. SIGMOD 2016, San Francisco, California, USA, June 26 - July 1 2016.
  • Ismail Oukid, Daniel Booss, Adrien Lespinasse, Wolfgang Lehner. On Testing Persistent-Memory-Based Software. DaMoN 2016 (co-located with SIGMOD 2016), San Francisco, California, USA, June 27, 2016.
  • David Kernert, Wolfgang Lehner, Frank Köhler. Topology-Aware Optimization of Big Sparse Matrices and Matrix Multiplications on Main-Memory Systems. ICDE 2016, Helsinki, Finland, May 16-20, 2016.
  • Elena Vasilyeva, Maik Thiele, Thomas Heinze, Wolfgang Lehner. DebEAQ - Debugging Empty-Answer Queries On Large Data Graphs (Demonstration). ICDE 2016, Helsinki, Finland, May 16-20, 2016.
  • Elena Vasilyeva. Why-Query Support in Graph Databases (PhD Symposium). ICDE 2016, Helsinki, Finland, May 16-20, 2016.

2015

  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner. Considering User Intention in Differential Graph Queries. Journal of Database Management (JDM), 26(3), 21-40. doi: 10.4018/JDM.2015070102
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner. Answering "Why Empty?" and "Why So Many?" queries in graph databases. Journal of Computer and System Sciences (2015), DOI=10.1016/j.jcss.2015.06.007 http://dx.doi.org/10.1016/j.jcss.2015.06.007
  • 2nd place in the ACM SIGMOD 2015 programming contest. For more details, click here.
  • The second SAP HANA student Campus Open House day took place in Walldorf on June 24th, 2015. For more details, click here.
  • Mehul Wagle, Daniel Booss, Ivan Schreter. Scalable NUMA-Aware Memory Allocations with In-Memory Databases. TPCTC 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Marcus Paradies, Elena Vasilyeva, Adrian Mocan, Wolfgang Lehner. Robust Cardinality Estimation for Subgraph Isomorphism Queries on Property Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Max Wildemann, Michael Rudolf, Marcus Paradies. The Time Has Come: Traversal and Reachability in Time-Varying Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Abdelkader Sellami, Anastasia Ailamaki. Scaling Up Concurrent Main-Memory Column-Store Scans: Towards Adaptive NUMA-aware Data and Task Placement. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Norman May, Franz Faerber. Indexing Highly Dynamic Hierarchical Data. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • David Kernert, Norman May, Michael Hladik, Klaus Werner, Wolfgang Lehner. From Static to Agile - Interactive Particle Physics Analysis with the SAP HANA DB. DATA 2015, Colmar, France, July 20-22, 2015.
  • Marcus Paradies, Wolfgang Lehner, Christof Bornhövd. GRAPHITE: An Extensible Graph Traversal Framework for Relational Database Management Systems. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Elena Vasilyeva, Maik Thiele, Adrian Mocan, Wolfgang Lehner. Relaxation of Subgraph Queries Delivering Empty Results. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Florian Wolf, Iraklis Psaroudakis, Norman May, Anastasia Ailamaki, Kai-Uwe Sattler. Extending Database Task Schedulers for Multi-threaded Application Code. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Ingo Müller, Peter Sanders, Arnaud Lacurie, Wolfgang Lehner, Franz Färber. Cache-Efficient Aggregation: Hashing Is Sorting. SIGMOD 2015, Melbourne, Australia, May 31-June 4, 2015.
  • Daniel Scheibli, Christian Dinse, Alexander Böhm. QE3D: Interactive Visualization and Exploration of Complex, Distributed Query Plans . SIGMOD 2015 (Demonstration), Melbourne, Australia, May 31-June 4, 2015.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Chang Ge, Anil K. Goel, Donald Kossmann. Bi-temporal Timeline Index: A Data Structure for Processing Queries on Bi-temporal Data. ICDE 2015, Seoul, Korea, April 2015.
  • Robert Brunel, Jan Finis, Gerald Franz, Norman May, Alfons Kemper, Thomas Neumann, Franz Faerber. Supporting Hierarchical Data in SAP HANA. ICDE 2015, Seoul, Korea, April 2015.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SpMachO - Optimizing Sparse Linear Algebra Expressions with Probabilistic Density Estimation. EDBT 2015, Brussels, Belgium, March 23-27, 2015.
  • Alexander Böhm: Keynote: Novel Optimization Techniques for Modern Database Environments. BTW 2015: 23-24, March 5, 2015, Hamburg
  • Alexander Böhm, Mathias Golombek, Christoph Heinz, Henrik Loeser, Alfred Schlaucher, Thomas Ruf: Panel: Big Data - Evolution oder Revolution in der Datenverarbeitung? BTW 2015: 647-648, March 5, 2015, Hamburg
  • Ismail Oukid, Wolfgang Lehner, Thomas Kissinger, Thomas Willhalm, Peter Bumbulis. Instant Recovery for Main-Memory Databases. CIDR 2015, Asilomar, California, USA. January 4-7, 2015.

 

2014

  • The first SAP HANA Student Campus Open House day took place in Walldorf on June 5th, 2014. For more details, click here.
  • Iraklis Psaroudakis, Florian Wolf, Norman May, Thomas Neumann, Alexander Böhm, Anastasia Ailamaki, Kai-Uwe Sattler. Scaling up Mixed Workloads: a Battle of Data Freshness, Flexibility, and Scheduling. TPCTC 2014, Hangzhou, China, September 1-5, 2014.
  • Michael Rudolf, Hannes Voigt, Christof Bornhövd, Wolfgang Lehner. SynopSys: Foundations for Multidimensional Graph Analytics. BIRTE 2014, Hangzhou, China, September 1, 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Top-k Differential Queries in Graph Databases. In Advances in Databases and Information Systems - 18th East European Conference, ADBIS 2014, Ohrid, Republic of Macedonia, September 7-10, 2014.
  • Kim-Thomas Rehmann, Alexander Böhm, Dong Hun Lee, Jörg Wiemers: Continuous performance testing for SAP HANA. First International Workshop on Reliable Data Services and Systems (RDSS), Co-located with ACM SIGMOD 2014, Snowbird, Utah, USA
  • Guido Moerkotte, David DeHaan, Norman May, Anisoara Nica, Alexander Böhm: Exploiting ordered dictionaries to efficiently construct histograms with q-error guarantees in SAP HANA. SIGMOD Conference 2014, Snowbird, Utah, USA
  • Ismail Oukid, Daniel Booss, Wolfgang Lehner, Peter Bumbulis, Thomas Willhalm. SOFORT: A Hybrid SCM-DRAM Storage Engine For Fast Data Recovery. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Iraklis Psaroudakis, Thomas Kissinger, Danica Porobic, Thomas Ilsche, Erietta Liarou, Pinar Tözün, Anastasia Ailamaki, Wolfgang Lehner. Dynamic Fine-Grained Scheduling for Energy-Efficient Main-Memory Queries. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Marcus Paradies, Michael Rudolf, Christof Bornhövd, Wolfgang Lehner. GRATIN: Accelerating Graph Traversals in Main-Memory Column Stores. GRADES 2014, Snowbird, USA, June 22-27, 2014.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SLACID - Sparse Linear Algebra in a Columnar In-Memory Database System. SSDBM, Aalborg, Denmark, June/July 2014.
  • Ingo Müller, Peter Sanders, Robert Schulze, Wei Zhou. Retrieval and Perfect Hashing using Fingerprinting. SEA 2014, Copenhagen, Denmark, June/July 2014.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Donald Kossmann. Benchmarking Bitemporal Database Systems: Ready for the Future or Stuck in the Past? EDBT 2014, Athens, Greece, March 2014.
  • Ingo Müller, Cornelius Ratsch, Franz Färber. Adaptive String Dictionary Compression in In-Memory Column-Store Database Systems. EDBT 2014, Athens, Greece, March 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: GraphMCS: Discover the Unknown in Large Data Graphs. EDBT/ICDT Workshops: 200-207.

 

2013

  • Sebastian Breß, Felix  Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake,  Efficient co-processor utilization in database query processing,  Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096
  • Martin  Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main  Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30,  2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del  Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan  Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber,  Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned  Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael  Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization. GRADES 2013, New York, USA, June 22-27, 2013.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Leveraging Flexible Data Management with Graph Databases. GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter  Sanders. Efficient Many-Core Query Execution in Main Memory  Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013
  • Martin  Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman  May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013,  Brisbane, Australia, April 8-12, 2013.

  • Martin Kaufmann,  Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann,  Franz Färber (SAP), Norman May (SAP): Timeline Index: A Unified Data  Structure for Processing Queries on Temporal Data, SIGMOD 2013,  New  York, USA, June 22-27, 2013.
  • Martin  Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann,  Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane,  Australia, April 8-12, 2013
  • Rudolf, M., Paradies, M., Bornhövd, C., & Lehner, W. (2013). The Graph Story of the SAP HANA Database. BTW (pp. 403–420).
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd,  C. (2012). Efficient transaction processing in SAP HANA database.  SIGMOD  Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-uwe, & Franz, F. (2009).  Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen.  Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova,  O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost  Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS  International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted  Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz  eines Data Warehouse. Datenbanksysteme für Business, Technologie und  Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.

Understand HANA SPS and supported Operating Systems


HANA Revision Strategy

 

SAP ships regular corrections and updates. Corrections are shipped in the form of revisions and support packages of the product's components. New HANA capabilities are introduced twice a year in the form of SAP HANA Support Package Stacks (SPS). The Datacenter Service Point (DSP) is based upon the availability of a certain SAP HANA revision, which has been running internally at SAP for production enterprise applications for at least two weeks before it is officially released.

 

See SAP Note “2021789 - SAP HANA Revision and Maintenance Strategy” for more details

 

p1.png

Supported Operating Systems

 

According to SAP Note 2235581 - Supported Operating Systems, the following two Linux distributions can be used with the SAP HANA Platform.

 

  • SUSE Linux Enterprise Server for SAP Applications (SLES for SAP)

 

  • Red Hat Enterprise Linux for SAP HANA (RHEL for SAP HANA)

 

All information in this document refers to these two products from SUSE and Red Hat only.

 

Several additional notes apply when selecting an operating system release for SAP HANA.

 

  • HANA SPS revision notes: HANA revision (x) requires a minimum OS release (y) – e.g. SAP Notes 2233148 & 2250138 – SLES 11 SP2 minimum for HANA SPS11


  • Hardware that has been certified within the SAP HANA hardware certification program has been certified for a list of available combinations on a specific operating system version. See SAP Certified Appliance Hardware for SAP HANA

 

  • Only host types released by SAP hardware partners are suitable for using SAP software productively on Linux (SAP Note 171356)

 

  • SAP cannot support software from third-party providers (e.g. OS) which is no longer maintained by the manufacturer (SAP Note 52505)

 

SUSE Linux Enterprise Server for SAP  (SLES for SAP)

With SUSE Linux Enterprise Server for SAP (SLES for SAP) there are major releases 11 and 12 with their respective service packs. Several Service Packs (SP) are available within a SUSE release, and it is possible to stay on a specific SP until its support ends. The general support for a service pack ends at a defined date; these "general end" dates are communicated on SUSE web pages.

 

 

p2.png

 

 

At the RTC (Release To Customer) date of a HANA SPS, supported SUSE Linux Enterprise Server for SAP (SLES for SAP) versions are available which can be combined into a supported stack. When a new SAP HANA SPS is released, new "SLES for SAP" versions are available to build a supported combination:

 

 

p3.png

Red Hat Enterprise Linux for SAP HANA (RHEL for SAP HANA)

 

Starting with SAP HANA Platform SPS 08, Red Hat Enterprise Linux for SAP HANA is supported for SAP HANA. Red Hat follows an approach with major and minor releases. With Extended Update Support (EUS) for specific RHEL for SAP HANA releases it is possible to remain on a specific minor release even if a subsequent minor release is already available.

 

You can find more details about the life cycle on the Red Hat Enterprise Linux product web pages. When a new SAP HANA SPS is released, new RHEL for SAP HANA versions are available which can be used as a supported stack:

 

 

p4.png

 

HANA SPS & OS version timeline

 

The following is a timeline of HANA SPS and the available OS releases from SUSE and Red Hat. This overview and all earlier timelines show the lifecycles of HANA and SLES/RHEL, not an official support status for HANA releases.

 

 

 

p5.png

 

The marked intersections show those points in time when there is a supported release from the OS vendors at the RTC date of a new HANA SPS.

Older operating system releases are no longer supported and disappear from the timeline after SUSE's "general support end" for "SLES for SAP" or Red Hat's EUS end date, and are replaced with a new OS version or service pack.

OS Validation

End of Life of an OS

 

The sample in the next picture shows SLES for SAP 11 SP3, which has a general support end date in January 2017. This is the point in time when validation stops for that OS release, and upcoming HANA SPS are no longer supported on SLES for SAP 11 SP3.

 

 

p6.png

 

Sunrise of an OS

 

If a new OS release is available, SAP plans to support it with the next upcoming HANA SPS. The following sample timeline shows SLES for SAP 11 SP4, which was available before HANA SPS11 was released. Subsequent SPS are also supported on SLES for SAP 11 SP4, since it has not yet reached its general support end date.

 

 

p7.png

 

SLES  for SAP 11 SP4 was validated and SAP supports this OS version with HANA SPS11.

 

Support Matrix

 

This validation methodology and the timelines of HANA revisions and OS releases lead to a "Support Matrix", which is presented in this sample table:

 

 

p8.png

 

The corridor shows the combinations of OS releases with HANA SPS which will be supported by SAP.

Troubleshooting ABAP Dumps in relation to SAP HANA


Purpose

 

The purpose of this document is to instruct SAP customers on how to analyse ABAP dumps.

 

 

Overview

 

How to troubleshoot ABAP Dumps

 

 

Troubleshooting

 

When looking at an ABAP system you can sometimes come across runtime errors in Transaction ST22:

 

Wiki ST22.PNG

 

 

Clicking into the "Today" or "Yesterday" tab will bring up all the ABAP Runtime errors you have encountered in the past 2 days.

 

You can also filter the dates for a particular dump by using the filter feature:

 

wiki ST22 filter.PNG

 

 

Here are some examples of runtime errors you might see in ST22:

 

wiki 2.PNG

wiki5.PNG

wiki 3.PNG

 

 

 

So from looking at these dumps, you can see

1: Category

2: Runtime Errors

3: ABAP Program

4: Application Component

5: Date & Time.

 

 

The ST22 dumps do not really give you much information here so more information will be needed.

 

For more information, you should then look at the dev_w (work process) trace files in transaction ST11.

 

 

 

ST11 allows you to look further into the Dev_w files relating to the dumps in ST22:

 

wiki 4.PNG

 

Go to ST22 > click on the runtime errors for "Today", "Yesterday" or a filter. This will bring up the specific dump you wish to analyse.

 

Here you will see 11 columns like so:

 

wiki 5.PNG

 

 

Here you can see the columns I have mentioned. The Work Process Index number you need is in the column named WP Index.

 

 

Once you find the dev_w index number you can then go to ST11 and find further information:

 

In the ST11 Dev_w files you have to match the time of the dump in ST22 with the recorded times in the Dev_w process files.

 

 

 

 

If there is no usable information in the dev_w files, the next step would be to analyse the issue from a database perspective.

 

 

From the HANA DB perspective

 

1: Open HANA Studio in SAP HANA Administration Console View

 

wiki 1.PNG

 

 

 

2: Check the diagnosis trace files in accordance with the time stamp of the dump you saw previously in ST22. To do this we have to go to the Diagnosis tab in HANA Studio:

 

wiki2.PNG

 

 

 

3: Check the time stamp from the ST22 dump (Date and Time), and then match this accordingly with the time in either the Indexserver.trc or nameserver.trc.

 

wiki 3.PNG

 

Search for the corresponding time stamp mentioned above i.e. 18/11/2015 @ 10:55:43.

 

Or instead of searching you could use the below SQL:

 

select top 500 service_name, timestamp, trace_text from m_merged_traces where service_name in ('indexserver', 'nameserver') and timestamp between '2015-11-18 10:40:00' and '2015-11-18 10:59:00'

 

 

Searching the nameserver trace files can give a good indication of whether your ST22 dump is related to network issues; you may see errors such as:

 

 

  TrexNet          Channel.cpp(00339) : ERROR: reading from channel 151 <127.0.0.1:<host>> failed with timeout error; timeout=10000 ms elapsed [73973]{-1}[-1/-1] 2015-01-28 01:58:55.208048 e TrexNetBuffer    BufferedIO.cpp(01092) : channel 151 from <127.0.0.1:<host>>: read from channel failed; resetting buffer


 

 

 

If you do find some errors similar to the above then check which host the error is pointing to and check whether or not this service was available at the time of the dump.

 

 

If this does not yield any useful information, the next step is to ask someone from your network team to look into this. Checking /var/log/messages is always a great place to start.

 

 

When searching through the indexserver.trc file, you may notice some irregularities recorded there. The next step is to search for this error on the SAP Service Marketplace for a known KBA or SAP Note (learn how to search more effectively: 2081285 - How to enter good search terms to an SAP search?).

 

 

Related Notes / KBA's

 

Troubleshooting '10108' errors

Troubleshooting '10709' errors

Related Documents

 

Did you know? You can find details of common issues, fixes, patches and much more by visiting SAP moderated forums on http://scn.sap.com/docs/DOC-18971

Documentation regarding HANA installation, upgrade, administration & development is available at http://help.sap.com/hana_appliance

SAP HANA Troubleshooting WIKI: http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

SAP HANA Discussion Forum: http://scn.sap.com/community/hana-in-memory/

Learn how to search more effectively: 2081285 - How to enter good search terms to an SAP search?
__________________________________________________________________________________________________________

SAP Hana Vora 1.2 setup with SAP Hana SP11 integration


In this documentation I'll explain how to install and configure SAP Hana Vora 1.2 with SAP Hana SP11 integration, and I will demonstrate in detail how to set up a Hortonworks ecosystem in order to realize this configuration.

 

For my setup I'll use my own lab on VMware vSphere 6.0, running SAP Hana Vora 1.2 and SAP Hana revision 112, and use the Hadoop HDFS 2.7.2 stack.

 

Disclaimer: My deployment is for test purposes only; I keep security simple from a network perspective in order to realize this configuration, and I use open-source software.

 

 

Order of execution

 

  • Deploy Hortonworks ecosystem
  • Install SAP Hana Vora for Ambari
  • Install SAP Hana Spark Controller 1.5
  • Install Spark assembly and dependent library
  • Configure Hive Metastore
  • Configure Spark queue
  • Adjust MapReduce2 class path
  • Connect SAP Hana to SAP Hana Vora

 

 

Guide used

 

SAP HANA Vora Installation and Developer Guide

SAP HANA Administration Guide

 

 

Note used

 

2284507 - SAP HANA Vora 1.2 Release Note

2203837 - SAP HANA Vora: Central Release Note

2213226 - Prerequisites for installing SAP HANA Vora: Operating Systems and Hadoop Components

 

 

Link used

 

Help SAP Hana for SAP HANA Vora 1.2

HDP Documentation Ver 2.3.4

 

 

Overview Architecture

 

5-9-2016 9-14-03 AM.jpg

 

The architecture is based on a fully virtual environment. Running SAP Hana Vora 1.2 requires the following mandatory components as part of the Hadoop ecosystem:

• HDFS 2.6.x or 2.7.x

• ZooKeeper

• Spark 1.5.2

• Yarn cluster manager

 

For my configuration, all my servers are registered in my DNS and synchronized with an NTP server.

 

 

 

Deploy Hortonworks Ecosystem

 

The Hortonworks ecosystem deployment consists of several steps:

1. Prepare the server by sharing SSH Public Key

2. Install MySQL connector

3. Install Ambari

4. Install Hive database

5. Install and configure HDP cluster

 

 

To keep the installation simple, I decided to use the "Ambari Automated Installation" based on HDP version 2.3.4, which can be deployed with Spark version 1.5.2.

8.2.jpg

 

To realize this configuration, my deployment will comprise 3 VMs:

Ambari: ambari.will.lab

Yarn: yarn.will.lab

Hana: vmhana02.will.lab


 

Prepare the server by sharing SSH Public Key

 

With my 3 servers up and running, we have to set up the SSH public key on the Ambari server in order to allow it to install the Ambari agent on the hosts which are part of the cluster.

 

I first create the rsa key-pair

1.jpg

 

And copy the public key on the remote server “yarn”

2.jpg

 

And try to SSH to my remote server to confirm that I no longer need to enter the password.

3.jpg

 

Install MySQL connector

 

Hive requires a relational database to store the Hive Metastore. I install the MySQL connector and note its path; it will be required during the initialization of Ambari.

3.1.jpg

 

3.2.jpg

 

Install Ambari

 

On the Ambari server we have to download the Ambari repository for SLES 11:

wget -nv http://public-repo-1.hortonworks.com/ambari/suse11/2.x/updates/2.2.0.0/ambari.repo -O /etc/zypp/repos.d/ambari.repo

4.jpg

 

And finally install Ambari

5.jpg

 

Once installed, the Ambari server needs to be set up:

Note: I decided to use Oracle JDK 1.8 and the embedded PostgreSQL database for Ambari.

6.jpg

 

Once done start the server and check the status

8.jpg

 

Note: I did not specify the MySQL connector path at the beginning of the Ambari initialization; in order to include it, stop Ambari and load it by re-executing the following command.

8.1.jpg

 

Install Hive Database

 

By default on RHEL/CentOS/Oracle Linux 6, Ambari will install an instance of MySQL on the Hive Metastore host. Since I'm using SLES, I need to create an instance of MySQL for the Hive Metastore myself.

20.jpg

 

Install and configure HDP cluster

 

With the server up and running, we can start the installation and configuration of the HDP cluster components. To proceed, open the Apache Ambari URL and run the wizard with the default user and password "admin/admin".

9.jpg

 

Follow the steps provided by the wizard to create your cluster.

10.jpg

 

11.jpg

 

For this section provide the private key generated earlier on Ambari server

12.jpg

 

13.jpg

 

The host was added successfully, but check the warning messages.

14.jpg

 

Choose the services you want to deploy.

17.jpg

 

Assign the services you want to run on the selected master node; since I'm using only one host, it's a no-brainer. Additional hosts can be assigned later according to your needs.

18.jpg

 

Assign Slave and client

18.1.png

 

Customize your services according to your needs as well; in my case I use a MySQL database, so I need to provide the database information.

19.jpg

 

19.1.png

 

Review the configuration for all services and execute.

21.jpg

 

21.2.jpg

 

21.3.jpg

 

Once completed, access the Ambari web page and make some checks to see the running services

22.jpg

With the Hortonworks ecosystem now installed, we can proceed with the SAP Hana Vora for Ambari installation.

 

 

 

SAP Hana Vora for Ambari

 

SAP HANA Vora 1.2 is now available for download as a single installation package for the Ambari and Cloudera cluster provisioning tools. These packages also contain the SAP HANA Vora Spark extension library (spark-sap-datasources-<VERSION>-assembly.jar), which no longer needs to be downloaded separately.

23.jpg

 

The following components will be deployed from the provisioning tool

24.jpg

 

For the Vora DLog component, a specific library ("libaio") is required on the server; make sure it's installed.

25.jpg

 

Once downloaded, copy the VORA_AM* file from the Ambari server into:

/var/lib/ambari-server/resources/stacks/HDP/2.3/service folder

26.jpg

 

Then decompress it; this will generate the various Vora application folders.

27.jpg

 

Then restart the Ambari server in order to load the new service

27.1.png

  

Once completed, install the new Vora services from the Ambari dashboard.

29.jpg

 

Select the Vora applications to deploy and hit Next to install them.

28.jpg

 

The Vora Discovery and Thriftserver services will require some customization entries, such as the hostname and the Java location.

30.jpg


30.1.png

 

31.jpg

 

The new services now appear; yes, some services are red, but this will be fixed.

31.1.jpg

 

With the Vora engine installed, I now need to install the Spark Controller.

 

 

 

Install SAP Hana Spark Controller 1.5

 

The Spark Controller needs to be downloaded from the SAP Service Marketplace; it is an .rpm package.

32.jpg

 

Once downloaded execute the rpm command to install it

33.jpg

 

When the installation is completed the /usr/sap/spark/controller folder is normally generated

33.1.jpg

 

The next phase is now to install the Spark assembly file and Dependent libraries

 

 

 

Install Spark assembly and dependent library

 

The Spark assembly file and dependent libraries need to be copied into the Spark Controller's external lib folder.

Note: as of now, assembly.jar version 1.5.2 is the only supported version that works with Vora 1.2; I'll download it from https://spark.apache.org/download.html

34.jpg

 

Decompress the archive and copy the necessary libraries into the "/usr/sap/spark/controller/lib/external" folder.

34.1.jpg

 

Then I update the hanaes-site.xml file in the /usr/sap/spark/controller/conf folder with the required content.

34.2.jpg

 

Spark and Yarn create staging directories in the /user/hanaes directory in HDFS; this directory needs to be created manually with the following command as the hdfs user:

hdfs dfs -mkdir /user/hanaes

35.jpg

 

 

 

Configure Hive Metastore

 

Since the SAP Hana Spark Controller connects to the Hive Metastore, the hive-site.xml file needs to be available in the controller's class path.

To do this, I create a symbolic link in the /usr/sap/spark/controller/conf folder.

36.jpg

 

And adjust the hive-site.xml file with the following parameters:

• hive.execution.engine = mr

• hive.metastore.client.connect.retry.delay = remove the trailing (s) from the value

• hive.metastore.client.connect.socket.timeout = remove the trailing (s) from the value

• hive.security.authorization.manager = org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider

 

Note: these changes are made only because we are using the Hortonworks distribution in this example; with Cloudera they are not required.

 

 

 

Configure Spark queue

 

 

In order to prevent Spark from taking all available resources from the Yarn resource manager, and thus leaving no resources for any other application running on it, I need to configure Spark dynamic allocation by setting up a queue in the "Queue Manager".

37.jpg

 

Create it then save and refresh from the action button

38.jpg

 

Once done, add the spark.yarn.queue property to the hanaes-site.xml file.

39.jpg

 

39.1.jpg

 

 

 

Adjust Mapreduce2 class path

 

One important point to take into consideration about the Spark Controller is that the library paths used during startup do not support variables such as "${hdp.version}".

 

This variable is declared in the MapReduce2 configuration

39.2.jpg

 

Expand the Advanced mapred-site properties and locate the parameter "mapreduce.application.classpath".

39.3.jpg

 

Copy/paste the whole string into your favorite editor and replace all ${hdp.version} references with the current HDP version.

39.4.jpg

 

Before the change

$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure

 

After the change

$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/lib/hadoop-lzo-0.6.0.2.3.0.0-2557.jar:/etc/hadoop/conf/secure

 

Once done, as “hanaes” user, start the Spark Controller from the directory /usr/sap/spark/controller/bin

40.2.jpg

 

Check the Spark Controller log at /var/log/hanaes/hana_controller.log to see if it's running properly.

As we can see, I have an error in my config file.

40.1.jpg

 

 

 

Connect SAP Hana to SAP Hana Vora

 

With my Hortonworks ecosystem in place and SAP Hana Vora 1.2 deployed, I can connect my Hana instance to it over the Spark adapter.

Before trying to make any connection, one specific library needs to be copied into the "/usr/sap/spark/controller/lib" folder: from "/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-base/package/lib/vora-spark/lib", copy the spark-sap-datasources-1.2.33-assembly.jar file.

41.jpg

 

Once done restart the Spark Controller

 

Now, to connect to my Hadoop system from Hana, I need to create a new remote source by using the following SQL statement.

42.jpg
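The statement itself is hard to read in the screenshot, so here is a sketch of what such a remote source definition for the Spark Controller typically looks like (adapter name "sparksql" and port 7860 are the usual defaults; the remote source name, host, user and password below are assumptions for this lab and must match your hanaes-site.xml settings):

CREATE REMOTE SOURCE "SPARK_LAB" ADAPTER "sparksql" CONFIGURATION 'port=7860;ssl_mode=disabled;server=ambari.will.lab' WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=<password>';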

 

 

Since I have not created any tables in my Hadoop environment yet, nothing appears below "default". In order to test it, I'll create a new schema, load a table (CSV) into it, and see the result in Hana.

43.jpg

 

Note: you can download some csv sample here

Sample insurance portfolio

Real estate transactions

Sales transactions

Company Funding Records

Crime Records

44.jpg

 

Once done check the result from Hive view

46.jpg

 

And verify in Hana by creating and querying the virtual table.

47.jpg
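As a rough sketch of the statements behind these screenshots (the schema, virtual table name and Hive table name "insurance" are assumptions based on this example, and the remote source name must match the one created above):

CREATE VIRTUAL TABLE "MYSCHEMA"."V_INSURANCE" AT "SPARK_LAB"."<NULL>"."default"."insurance";

SELECT COUNT(*) FROM "MYSCHEMA"."V_INSURANCE";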

 

48.jpg

 

49.jpg

 

It's all good; I have my data.

50.jpg

 

My configuration is now complete, with SAP Hana Vora 1.2 set up and connected to SAP Hana SP11.

SAP Hana Dynamic Tiering setup on Multi-tenant database with DWF - part 2


In the first part of this documentation I explained how to install and set up Hana MDC with dynamic tiering, including the deployment of DWF on a tenant database. In this second part I'll explain how to configure DWF (the DLM part) to create external storage destinations and move tables from Hana to external storage.

 

 

Create external storage

 

With DWF installed, I am now able to move tables to external destinations, but before doing so I need to create the destinations in DLM.

Note: When creating a storage destination, DLM provides a default schema for the generated objects; this schema can be overwritten.

 

Dynamic Tiering

53.jpg

 

IQ 16.0

Note: the parameters used must match the SDA connection information.

54.jpg

 

SPARK

Note: for Spark, the schema of the source persistence object is used for the generated objects.

Before creating the destination, I have to tell my index server that I will use my Spark connection for aging data.

 

I run the following SQL statements from the studio:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')

SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

 

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')

SET ('data_aging', 'spark_remote_source') = 'SPARK_LAB' WITH RECONFIGURE;

55.jpg

 

Also, on the Spark Controller, the hanaes-site.xml file needs to be edited in order to set up the extended storage.

55.1.jpg

 

56.jpg

 

My 3 external storage destinations are now created, but as we can see they are inactive, so hit "Activate" to activate them.

56.1.png

 

Once activated

57.jpg

 

 

Move table to external storage

 

With my external storage destinations added to DLM, in order to move tables into them I need to create a lifecycle profile for each of them.

58.jpg

 

This allows me to specify whether I want to move a group of tables or only specific tables, and the way I want to move them ("trigger based" or "manual").

Note: When using SAP IQ as the storage destination type, you need to manually create the target tables in IQ (use the help menu to generate the DDL).

59.1.jpg

 

59.jpg

 

In the destination attribute options you can specify the relocation direction of the table transfer and the packet size to be transferred.

Note: Spark doesn't support packaging.

60.jpg

 

Depending on the option chosen above, a clash strategy can be defined in order to handle unique key constraint violations.

61.jpg

 

Note: Spark doesn't support clash strategies. This means that unique key constraint violations are ignored and records with a unique key might be relocated multiple times, which can result in incorrect data in the storage.

 

Once the destination attributes are defined, you will need to set up the relocation rule that identifies the relevant records in the source persistence to be relocated to the target persistence.

61.1.jpg

 

When satisfied, save and activate your configuration, and optionally run a simulation to test it.

62.jpg

 

When the configuration is saved and activated for IQ and DT, the generated object (i.e. the generated procedure) is created.

62.1.jpg

 

For the purposes of this document, I'll trigger all my data movements manually.

63.jpg

 

64.jpg

 

When the triggered job has run, the record counts should match the rule defined in the relocation rule. The log can be checked for each external destination.

65.jpg

 

 

Query table from external source

 

In order to query the data from the external storage now that the tables have been moved, I first need to check the generated objects in the destination schema.

I can see the 2 tables that were moved: one in dynamic tiering ("Insurance") and the other one as a virtual table for IQ ("Crime").

66.jpg
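As a sketch, the moved data can then be verified directly with SQL against the generated objects (the destination schema below is a placeholder; the table names are the ones from this example):

SELECT COUNT(*) FROM "<Destination_Schema>"."Insurance";

SELECT COUNT(*) FROM "<Destination_Schema>"."Crime";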

 

One additional table, "PRUNING", shows the scenario and the criteria defined in the rule editor for the table.

67.jpg

 

For Spark the schema of the source persistence object is used for the generated objects

68.jpg

 

My configuration of dynamic tiering on a Hana multi-tenant database with DLM is now complete.


SAP Hana Dynamic Tiering setup on Multi-tenant database with DWF


In this documentation I'll explain how to install and configure SAP Hana MDC with Dynamic Tiering and deploy SAP Hana Data Warehousing Foundation 1.0 in order to support data management and distribution within my landscape, including Hadoop (Spark) and Sybase IQ.

 

For my setup I'll use my own lab on VMware vSphere 6.0, running SAP Hana revision 112.02, and use Sybase IQ and the Hadoop HDFS 2.7.2 stack.

 

I will create a new environment using the VM template explained in my previous documentation.

 

Disclaimer: My deployment is for test purposes only; I keep security simple from a network perspective in order to realize this configuration, and I use open-source software.

 

Order of execution


  • Install Hana in MDC mode
  • Connect tenant database to IQ and Hadoop over SDA
  • Install Dynamic Tiering
  • Setup Dynamic Tiering for Tenant database
  • Install SAP Hana Data Warehouse Foundation
  • Create external storage
  • Move table to external source
  • Query tables from external

 

 

Guide used

 

SAP HANA Multitenant Database Containers

SAP HANA Dynamic Tiering: Administration Guide

SAP HANA Data Warehousing Foundation Installation Guide

SAP HANA Data Warehousing Foundation 1.0 Planning PAM

 

 

Note used

 

2225582 - SAP HANA Dynamic Tiering SPS 11 Release Note

2092669 - Release Note SAP HANA Data Warehousing Foundation

2290350 - Spark Controller Compatibility Matrix

2183717 - Data Type Support for Extended Tables

2290922 - Unsupported Features and Datatypes for a Spark Destination

 

 

Link used

 

Help SAP Hana

 

 

High Level Architecture overview

1.jpg

 

From a high-level architecture point of view, I'll deploy 4 VMs, all registered in my internal DNS:

  • vmhana01 – master hana node multi-tenant
  • vmhana07 – dynamic tiering worker node
  • vmiq01 – Sybase IQ 16.0
  • hadoop – Hortonworks Hadoop HDFS stack 2.7.2

 

 

Detailed overview

2.jpg

 

In more detail, my HANA MDC database will be set up with one tenant database (TN1) connected over SDA to Sybase IQ and to Hadoop via the Spark controller.

The TN1 database will have DWF 1.0 deployed on it and will be configured with the DT host as a dedicated service.

The dynamic tiering host shares the /hana/shared file system with the vmhana01 host so that it can be installed alongside the HANA database.

 

 

Install Hana in MDC mode

 

In my previous documentation I have already explained how to install and configure HANA in MDC mode using the command line and SQL statements.

 

This time I'll explain how to do it using the graphical tool (hdblcmgui) and set up the tenant database via the HANA cockpit.

 

With the media downloaded, I'm ready to start.

Note: I'll only capture the important screens.

3.jpg

 

4.jpg

 

Note: I install my system as a single host, because I'll add my DT host later in the process.

5.jpg

 

Note: Dynamic Tiering doesn’t support high tenant isolation.

6.jpg

 

8.jpg

 

My system is now up and running

9.jpg

 

With the system ready, I'll create my tenant database from the cockpit.

10.jpg

 

11.jpg

 

12.jpg

 

My tenant is now up and running

14.jpg

 

Now, from a network perspective, if I want to access my tenant database cockpit, some changes need to be made at the system database layer.

From the configuration panel, filter on "xsengine.ini" and open the "public_urls" section.

13.jpg

 

Double-click http_url or https_url to set up the URL (alias) used to access the tenant database.

15.jpg

 

Once done, you can see that the URL to access the tenant database TN1 is set up.

16.jpg
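The same change can also be made with SQL instead of the configuration panel. This is only a minimal sketch: the alias, the ports and the tenant name TN1 below are placeholders from my lab and need to be adapted to your own landscape; run it on the system database.

ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'DATABASE', 'TN1')
SET ('public_urls', 'http_url') = 'http://tn1.homelab.local:8000',
    ('public_urls', 'https_url') = 'https://tn1.homelab.local:4300' WITH RECONFIGURE;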

 

Note: if you are working with a DNS, make sure the alias is registered; if you are not using a DNS, add the entry to /etc/hosts on the HANA host.

17.jpg

 

18.jpg

 

With my alias added, I can access the cockpit.

19.jpg

 

 

Connect tenant database to Sybase IQ and Hadoop over SDA

 

My tenant database is now running; I need to connect it to the remote sources that will store my aged data. Let's start with my IQ database: before creating the connection in SDA, install and configure the IQ client library on the HANA server.

To create my connection I will use the following statement:

 

create remote source IQHOMELAB adapter iqodbc configuration 'Driver=libdbodbc16_r.so;ServerName=IQLAB;CommLinks=tcpip(host=vmiq01:1113)' with CREDENTIAL TYPE 'PASSWORD' USING 'user=iqhomelab;password=xxxxx';

 

20.jpg

 

My IQ connection is working; I can now add the other remote source for Hadoop via the Spark controller.
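For the Hadoop side, the remote source is created over the Spark controller in a similar way. The statement below is only a sketch under my lab assumptions: the adapter name (sparksql), the host, port 7860 and the credentials are placeholders for a default Spark controller setup and may differ in your installation.

CREATE REMOTE SOURCE "SPARKLAB" ADAPTER "sparksql"
CONFIGURATION 'port=7860;ssl_mode=disabled;server=hadoop'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=xxxxx';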

 

 

 

Install Dynamic Tiering

 

 

Installing dynamic tiering is done in two parts: you first install the add-on component and then add the host which will execute the queries; both can be done in one step.

 

Note: before starting the installation, make sure the necessary folders and file systems are created.

22.jpg

 

And that the /hana/shared file system is mounted on the dynamic tiering host.

22.1.jpg

 

The installation can be done from the graphical interface, the command line, or the web interface. For this documentation I'll use the second option (command line), since last time I used the web interface.

23.jpg

 

24.jpg

 

Once the installation has completed, we can see that Dynamic Tiering is installed but not yet configured.

25.jpg

 

From a service perspective, DT appears as a "utility" in the SYSTEMDB hosts tab and is not yet visible to the tenant database.

26.jpg

 

 

 

Setup Dynamic Tiering for tenant database

 

Setting up DT for a tenant database consists of making the DT service (esserver) visible to the tenant database. Keep in mind that DT and tenant databases work in a 1:1 relationship.

 

The first step is to modify properties in the global.ini file to prepare resources on each tenant database to support SAP HANA dynamic tiering.

 

On the system database, run the following SQL to enable the tenant database to use DT functionality:

 

ALTER SYSTEM ALTER CONFIGURATION ( 'global.ini', 'SYSTEM' ) SET( 'customizable_functionalities', 'dynamic_tiering' ) = 'true'

 

27.jpg

 

Then check that the parameter is set to "true" in global.ini.

28.jpg

 

The next step is to isolate the DT "log" and "data" volumes of the tenant database. To do so, I first create two dedicated directories at the OS layer that belong to my tenant DB "TN1".

29.jpg

 

And run the following two SQL statements to make it active:

 

ALTER SYSTEM ALTER CONFIGURATION ( 'global.ini', 'DATABASE' , 'TN1' )

SET( 'persistence', 'basepath_datavolumes_es') = '/hana/data_es/TN1' WITH RECONFIGURE

 

ALTER SYSTEM ALTER CONFIGURATION ( 'global.ini', 'DATABASE' , 'TN1' )

SET( 'persistence', 'basepath_logdatavolumes_es') = '/hana/log_es/TN1' WITH RECONFIGURE

 

30.jpg

 

Then check in the global.ini

31.jpg

 

With the preparation completed, I can now provision the DT service to my tenant DB by running the following SQL command at the SYSTEMDB layer:

 

ALTER DATABASE TN1 ADD 'esserver'

 

TN1 service before the DT provisioning

32.jpg

 

After the provisioning, we can see that DT is now available to TN1.

33.jpg

 

Note: after the service (esserver) is assigned to the tenant database, it is no longer visible to the SYSTEMDB.

34.jpg

 

With the configuration ready, I need to deploy the dynamic tiering delivery units into TN1 in order to administer it. From the modeler perspective, select your tenant DB and import the HANA_TIERING.tgz and HDC_TIERING.tgz files from the server.

35.jpg

 

36.jpg

 

37.jpg

 

Once the DUs are imported to the tenant, I assign the necessary roles to my user.

39.jpg

 

That done, I can access the cockpit and finish the configuration.

41.jpg

 

42.jpg
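For reference, the extended storage that the cockpit wizard creates here can also be created with SQL from the tenant database. A minimal sketch, assuming my DT host vmhana07 and an arbitrary size; adapt both to your landscape:

CREATE EXTENDED STORAGE AT 'vmhana07' SIZE 102400 MB;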

 

Once successfully created, we can check at the OS layer whether the data is written to the correct place.

43.jpg

 

With Dynamic Tiering on the tenant database completed, I can start the deployment of DWF.

44.jpg

 

 

Install SAP Hana Data Warehouse Foundation

 

SAP DWF content is delivered in software components; each software component contains a functional delivery unit (independent delivery units) and a language delivery unit.

  • Functional delivery units provide core services
  • SAP HANA Data Warehousing Foundation applications
  • Language delivery units
  • Documentation for the applications

 

Once the DWF zip file is downloaded, store it but do not decompress it. From the tenant database cockpit, load the zip file in order to install the new software.

45.jpg

 

Run the installation

46.jpg

 

With the component installed, some parameters need to be added to the xsengine.ini of the tenant database in order to configure SAP HANA Data Warehousing Foundation.

 

From the SYSTEMDB, expand xsengine.ini and add the following parameters and values.

47.jpg

 

Data Distribution Optimizer and Data Lifecycle Manager use this mechanism to execute SQL statements from server-side JavaScript applications when generating and executing redistribution plans in Data Distribution Optimizer.

 

To enable this functionality, I activate it from the XS Artifact Administration of my tenant database.

Locate the two components sap.hdm.ddo.sudo and sap.hdm.core.sudo and activate them.

48.jpg

 

49.jpg

 

Now that they are activated, I can grant the necessary roles to my user so I can administer DWF from the cockpit.

Note: I gave my account all admin roles, but in the real world this shouldn't happen ;-)

50.jpg

 

And now from the cockpit I can see them at the following URLs:

http://<tenant>:<port>/sap/hdm/dlm/index.html

http://<tenant>:<port>/sap/hdm/ddo/index.html

51.jpg

 

52.jpg

 

Finally, generate the default schema for generated objects and the roles needed for Data Lifecycle Manager with the following statement:

 

call "SAP_HDM_DLM"."sap.hdm.dlm.core.db::PREPARE_BEFORE_USING"();

52.1.jpg

 

52.2.jpg

With the components in place, I can now start the configuration to move tables to external storage.

See Part II

SAP HANA Database Campus – Open House 2016 in Walldorf


The SAP HANA Database Campus invites students, professors, and faculty members interested in database research to join our third Open House at SAP's headquarters. Throughout your day, you will get an overview of database research at SAP, meet the architects of SAP HANA and learn more about academic collaborations. There are a couple of interesting presentations by developers and academic partners. Current students and PhD candidates present their work and research. For external students and faculty members it is a great chance to find interesting topics for internships and theses.


The event takes place on June 2nd, 2016, from 09:30 to 16:00 in Walldorf, Germany. Free lunch and snacks are provided for all attendees. The entire event is held in English.

 

Register here

 

 

Looking forward to seeing you in Walldorf,

The SAP HANA Database Campus

students-hana@sap.com

 

 

Location:

  • SAP Headquarters,WDF03, Robert-Bosch-Str. 30, 69190, Walldorf, Germany
  • Room E4.02, Check-In Desk in the lobby of WDF03

 

Agenda:

  • 09:00-09:30 Arriving
  • 09:30-10:00 Check-In
  • 10:00-10:15 Opening
  • 10:15-11:00 Keynote
    • Dr. Norman May (SAP HANA group) – Topic will be announced
  • 11:00-12:00 Poster Session Part 1 & Career Booth
  • 12:00-12:45 Lunch
  • 12:45-13:00 Office Tour
  • 13:00-14:00 Session 1 – Academic
    • Prof. Anastasia Ailamaki (EPFL) – Scaling Analytical and OLTP Workloads on Multicores: Are we there yet? [30 min]
    • Ismail Oukid (SAP HANA PhD Student, TU Dresden) – FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory [15 min]
    • Robert Brunel (SAP HANA PhD Student, TU Munich) – Managing Hierarchical Data in Main-Memory RDBMS [15 min]
  • 14:00-15:00 Poster Session Part 2, Career Booth & Coffee Break
  • 15:00-15:45 Session 2 – SAP
    • Hinnerk Gildhoff (SAP) – SAP HANA Spatial & Graph [20 min]
    • Daniel Booss (SAP) – SAP HANA Basis [20 min]
  • 15:45-16:00 Best Student/PhD-Student Poster & Open House Closing

 

 

Archive of previous events


By participating you agree to appear in photos and videos taken during the event and published on SCN and CareerLoft.

Predefined Users in SAP HANA


A number of predefined operating system and database users are required for installing, upgrading, and operating SAP HANA. Further users may exist depending on additionally installed components.


Here's a brief overview of all such users. More detailed information is available in the linked documentation, which is part of the SAP HANA platform documentation available on SAP Help Portal.


Note that this information is valid for SAP HANA SPS 12.


 

Operating System Users

The following operating system (OS) users are created during the installation of SAP HANA.


Component

User

Purpose/Description

Link to More Info

SAP HANA Database

sapadm

User required to authenticate to SAP Host Agent

Predefined Users - SAP HANA Security Guide - SAP Library

<sid>adm

Administration user that owns all SAP HANA files and all related operating system processes

OS users for tenant databases in multiple-container systems configured for high isolation

Administration users that own all SAP HANA files and all related operating system processes of a particular tenant database

SAP HANA Extended Application Services, Advanced Model (XSA)

XS_ADMIN

Administrative user for the XS advanced application server, has unlimited access to Controller API

Predefined XSA Users - SAP HANA Security Guide - SAP Library

HDI_BROKER_CONTROLLER

User for HDI Broker API

sap_sb

User for UAA Broker API

Database Users

Depending on the components you installed, several database users will be available after installation or must be created for a specific purpose.

 

Database users may or may not correspond to real people. Users that do not correspond to real people are referred to as "technical database users". Most standard technical database users are used internally to perform certain tasks and it's not possible to log on with them.

 

Component

User

Purpose/Description

Link to More Info

SAP HANA Database

SYSTEM

Database superuser

Predefined Users - SAP HANA Security Guide - SAP Library

SYS

Technical database user that owns database objects such as system tables and monitoring views

XSSQLCC_AUTO_USER_
<generated_ID>

Technical database users automatically generated on activation of SQL connection configurations

_SYS_AFL

Technical user that owns all objects for Application Function Libraries

_SYS_EPM

Technical database user used by the SAP Performance Management (SAP EPM) application

_SYS_REPO

Technical database user used by the SAP HANA repository (SAP HANA XS, classic model).

_SYS_STATISTICS

Technical database user used by the internal monitoring mechanism of the SAP HANA database

_SYS_TASK

Technical database user in SAP HANA Enterprise Information Management. This user owns all task framework objects.

_SYS_WORKLOAD_REPLAY

Technical database user used by capture and replay capability of the SAP HANA Performance Management tool.

_SYS_XB

Technical user for internal use only

SAP HANA Extended Application Services, Advanced Model (XSA)

SYS_XS_RUNTIME

Owns the Controller’s SAP HANA schema containing BlobStore, ConfigStore and SecureStore

Predefined XSA Users - SAP HANA Security Guide - SAP Library

SYS_XS_UAA

Owns the UAA’s SAP HANA schema for user management

SYS_XS_UAA_SEC

Owns the UAA’s SAP HANA secure store for the user credentials

SYS_XS_HANA_BROKER

Owns the HDI Broker’s SAP HANA schema

SYS_XS_SBSS

Owns SAP HANA schema containing procedures to generate user passwords in a secure manner; used by the HDI Broker

_SYS_DI

Owns all HDI SQL-based APIs, for example all API procedures in the _SYS_DI schema and API procedures in containers

_SYS_DI_*_CATALOG

Technical users used by the HDI to access database system catalog tables and views

_SYS_DI_SU

Technical superuser of the HDI created at installation time

_SYS_DI_TO

Owns transaction and connections of all internal HDI transactions

Further technical users for HDI schema-based containers

See documentation

SAP DB Control Center

Administration user (e.g., DCC_ADM)

Database user required for the SAP DCC administrator who adds, imports, and removes systems.

Setting up SAP DCC for the First Time - SAP DB Control Center 4 Guide - SAP Library

Configuration user (e.g., DCC_CONFIG)

Database user required for the configuration of SAP DB Control Center

Collector user (e.g., DCC_COLLECTOR)

Technical database user used by SAP DCC for data collections and other background tasks.

Technical user (e.g. SAPDBCC)

Technical database user used by SAP DCC to identify systems that can be added for management and to monitor the health of systems once they're added. This account is not intended for human users.

SAP HANA Dynamic Tiering

_SYS_ES

Technical database user used by dynamic tiering; automatically created when you create extended storage. _SYS_ES logs on internally through the dynamic tiering service.

SAP HANA Dynamic Tiering Administration Guide - SAP Library

ES_ADMIN

Administrator user that should only be used by administrators for troubleshooting and with the guidance of SAP support.

Dynamic Tiering Administration User - SAP HANA Dynamic Tiering Administration Guide - SAP Library

SAP HANA Accelerator for SAP ASE

sa

Administrator user used to establish the connection between SAP HANA and SAP ASE. The user can assign administration control to selected SAP ASE login accounts.

Permissions - SAP HANA Accelerator for SAP ASE: Administration Guide - SAP Library

SAP HANA Smart Data Streaming

SYS_STREAMING

Technical database user used to perform policy administration functions such as granting and revoking privileges

SYS_STREAMING and SYS_STREAMING_ADMIN - SAP HANA Smart Data Streaming: Security Guide - SAP Library

SYS_STREAMING_ADMIN

Technical database user used to perform all tasks in smart data streaming, except publishing or subscribing to streams

SAP HANA Smart Data Integration and SAP HANA Smart Data Quality

No additional standard database users available or required

SAP HANA Advanced Data Processing: File Loader

FLACCESS

Technical database user used for file loader access

File Loader Guide for SAP HANA

FLADMIN

Technical database user used for file loader administration

FLDBCONN

Technical database user used for file loader connections to the SAP HANA database

SAP HANA Spatial

Content viewer user

Database user required to view geo content using the Geo Content Viewer tool (SAP HANA Spatial Reference)

Create a User to View Geo-Content - SAP HANA Spatial Reference - SAP Library

Geospatial Metadata Installer user (for example RESTRICTED_USER)

Database user required to use the Geospatial Metadata Installer

Create Database Users - SAP HANA Spatial Reference - SAP Library

Connection user (for example, CONNECTOR)

Database user required to establish the required SQLCC connection and modify the defined database object using Geospatial Metadata Installer

SAP HANA Remote Data Sync

SYS_SYNC

Technical database user that performs synchronizations for Remote Data Sync clients.

SAP HANA Remote Data Sync: Security Guide

SAP HANA Trigger-Based Data Replication Using SAP LT Replication Server

Connection user

Initial technical database user required to create a database connection from the SAP LT Replication Server to the SAP HANA system

Security Guide for Trigger-Based Data Replication Using SAP Landscape Transformation Replication Server

Replication user

Technical database user required to connect from the SAP LT Replication Server to the SAP HANA system for replication. One replication user is created for each replication schema. The replication user has the same name as the corresponding schema.

SAP Hana EIM Connection Scenario Setup - Part 1


In this documentation I'll explain how to set up and configure a SAP HANA SPS 10 EIM (SDI/SDQ) connection with the Data Provisioning Agent for cloud and on-premise scenarios.

 

This documentation is built in 3 parts:

SAP Hana EIM  Connection Scenario Setup - Part 1 (current)

SAP Hana EIM  Connection Scenario Setup - Part 2

SAP Hana EIM  Connection Scenario Setup - Part 3

 

In my first documentation I explained how to replicate data using HANA SDI capabilities with the SAP HANA adapter; in this document I'll explain how to configure and connect the DP Agent to several source systems to retrieve and replicate data for on-premise and cloud scenarios.

 

 

I will show the detailed steps and configuration points to achieve this setup with the following adapters:

Log Reader (Oracle, DB2, MSSQL) / SAP ASE / Teradata and Twitter

 

 

Note: The Data Provisioning Agent must be installed on the same operating system as your source database, but not necessarily on the same machine

There are currently three exceptions to this rule:

  1. Oracle on Solaris and the Data Provisioning Agent on Linux is supported
  2. Oracle on HP-UX and the Data Provisioning Agent on Linux is supported
  3. Oracle on Windows and the Data Provisioning Agent on Linux is supported

 

 

Order of execution

 

  • Security (Roles and Privileges)
  • Configuration for On-Premise scenario
  • Configuration for Cloud scenario
  • Log Reader adapter setup (Oracle / DB2 / MS SQL)
  • SAP ASE adapter setup
  • Teradata adapter setup
  • Twitter adapter setup
  • Real time table replication

 

 

Guide used

 

SAP Hana EIM Administration Guide SP10

SAP Hana EIM Configuration guide SP10

 

 

Note used

 

2179583 - SAP HANA Enterprise Information Management SPS 10 Central Release Note

2091095 - SAP HANA Enterprise Information Management

 

 

Link used

http://help.sap.com/hana_eim

  

 

Overview Architecture

 

archi1.jpg

On-premise landscape

 

archi2.jpg

Cloud landscape

 

 

  

Security (Roles and Privileges)

 

Before trying to make any connection between the DP Agent server and HANA, it's important to grant the necessary credentials to the users involved in the configuration, based upon your landscape scenario.

 

 

For both the on-premise and cloud scenarios, as an administrator ensure you have:

  • Privileges: AGENT ADMIN and ADAPTER ADMIN
  • Application privilege: sap.hana.im.dp.admin::Administrator

 

 

For the cloud scenario an additional user is required; it is used as a technical user (aka "xs agent") when you need to register an agent:

  • Privileges: AGENT MESSAGING
  • Application privilege : sap.hana.im.dp.proxy::AgentMessaging
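A minimal grant sketch for the administrator part; the user name DPA_ADMIN is a placeholder, and the application privilege is granted through the _SYS_REPO repository procedure:

GRANT AGENT ADMIN TO DPA_ADMIN;
GRANT ADAPTER ADMIN TO DPA_ADMIN;
CALL "_SYS_REPO"."GRANT_APPLICATION_PRIVILEGE"('"sap.hana.im.dp.admin::Administrator"', 'DPA_ADMIN');

The AGENT MESSAGING privilege and the sap.hana.im.dp.proxy::AgentMessaging application privilege for the cloud technical user are granted in the same way to that user.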

 

 

 

Configuration for On-Premise scenario

 

In the on-premise scenario, the interaction between the DP Agent and the HANA server is done through a TCP/IP connection, similar to a connection through the HANA studio.

1.jpg

 

2.jpg

 

 

Configuration for Cloud Landscape scenario

 

The cloud scenario requires a specific setup for SSL connections: HANA can be accessed directly via the internal web dispatcher or through a proxy. For my configuration I use the direct connection over SSL.

 

 

One of the requirements to make HANA available over HTTPS is a valid "CommonCryptoLib" library (libsapcrypto.so); by default, it is delivered with the HANA installation.

3.jpg

 

 

Now, from the web dispatcher page, in the SSL and Trust Configuration tab, I create a CA request, send it to my certificate authority, and import the response.

4.jpg

 

Once signed, I import the CA response into the trusted list of the PSE.

5.png

 

6.jpg

 

And change the format view from TEXT to PEM in order to review the chain.

7.jpg

 

 

Once completed, I'll change the default ports used by the web dispatcher to the standard ports 80/443. To do this, change the default port in webdispatcher.ini to the one you want to use and add the parameter "EXTBIND=1".

8.jpg

 

Once saved, at the OS layer you need to bind the default SSL port. By default, the HANA installation creates an "icmbnd.new" file; rename it to "icmbnd" and change the permissions on it. You must be root to do this.

9.jpg

 

Now my HANA instance is available over HTTPS.

10.jpg

 

 

The HANA certificate needs to be imported on the DP Agent server; this is done in the "ssl" directory of the DPA.

 

First, change the default "cacerts" password with the following keytool command, executed where the cacerts file is located:

 

keytool -storepasswd -new [new password] -keystore cacerts

11.jpg

 

Then create a "SAPSSL.cer" file, open it with your favorite editor, and paste the entire chain from the imported web dispatcher certificate.

12.jpg

 

And import it into the “cacerts”

keytool -importcert -keystore cacerts -storepass <password> -file SAPSSL.cer -noprompt

13.jpg

 

I can now configure the DPA to use an SSL connection to HANA.

14.jpg

 

15.jpg

 

 

LogReader adapter setup

 

Log Reader adapters provide real-time changed-data capture capability to replicate changed data from Oracle, Microsoft SQL Server, and IBM DB2 databases to SAP HANA; in certain cases, you can also write back to a virtual table.

 

 

Oracle 12c LogReader adapter

 

The first point to take into consideration before starting the configuration of any LogReader adapter is to download the JDBC libraries specific to the source used and store them in the lib directory of the Data Provisioning Agent.

 

 

Download the libraries from:

Oracle

Microsoft SQL

IBM DB2

17.jpg

 

Note: for all the database setups, I will not explain how to install the databases themselves but focus on the steps which need to be performed in order to work with the SDI configuration.

 

In order to enable the real-time replication capability in Oracle, a specific script needs to be run on the Oracle database; this script is located in the "scripts" directory on the DP Agent server.

18.png

Note: the script assumes that the default user for the replication is LR_USER.

 

Before running it, check whether the database is in archivelog mode; if not, it needs to be enabled.

1.jpg
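A minimal SQL sketch of this check and change, run as SYSDBA (the database must be restarted in MOUNT state before switching the mode):

SELECT log_mode FROM v$database;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;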

 

Since the DP Agent and Oracle do not reside on the same server, we need to copy the timezone_11.dat file from the Oracle server to the DP Agent server.

2.jpg

 

And specify the location of the file in the Oracle adapter preferences.

3.jpg

 

That done, I'll use Oracle SQL Developer to execute the script, which will also create the LR_USER.

Note: don't forget to change the script in order to set the password for the user.

5.jpg

 

With that done, I can register my adapter and create my remote connection.

6.jpg

 

From the studio, when you specify the OracleLogReader adapter, it's important to specify the LogReader administration port and the user defined for the replication.

7.jpg

 

From a connection point of view we are done with Oracle; next, the MS SQL setup.

 

 

  

Microsoft SQL 2008 R2 LogReader adapter

 

 

Since EIM relies on the database log to perform data movement, the logs must be available until the data is successfully read and replicated to HANA; therefore MS SQL Server must be configured in Full Recovery mode.

19.png

 

For my SQL Server based scenario I will enable CDC (change data capture) on my database; this feature is supported by EIM, but the "truncate" operation on tables is not.

8.jpg

 

Once activated, make the check

9.jpg
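Enabling and checking CDC can also be done with T-SQL; a minimal sketch, where the database name is a placeholder:

USE MyDatabase;
EXEC sys.sp_cdc_enable_db;
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'MyDatabase';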

 

After the feature is enabled on the database, the cdc schema, cdc user, and change data capture metadata tables are automatically created.

10.jpg

 

Since I haven't created tables to replicate yet, I'll explain later how to enable this feature for each table I want to track.

  

Once done, enable the DAC to allow remote connections (from the Facets view).

11.jpg

 

To make the log files readable, copy the sybfilter driver and sybfiltermgr from the logreader folder on the DP Agent server to the MSSQL server.

11.1.jpg

 

11.1.1.jpg

 

Anywhere on the server, create a file named "LogPath.cfg" and set the environment variable "RACFGFilePath" to point to its location.

11.1.2.jpg

 

Open the LogPath.cfg file and provide the location of the .ldf files.

11.1.3.jpg

SAP Hana EIM Connection Scenario Setup - part 2


Install the sybfilter driver and start it.

11.2.jpg

 

11.1.1.jpg

 

11.2.1.jpg

 

Share the folder which contains the .ldf log files with the DP Agent server.

11.3.jpg

 

11.4.jpg

 

On the DP Agent server, in order to map the database directory relationship, edit the mssql_log_path_mapping.props file in the logreader config directory.

11.5.jpg

 

And finally, check that TCP/IP is enabled on the SQL Server.

12.jpg

 

SQL Server is ready; I can register my adapter and create my remote connection.

12.1.jpg

12.2.jpg

 

Now that this is completed, let's move to the DB2 database setup.

 

 

 

IBM DB2 10.5 LogReader adapter

 

In order to have DB2 working with the LogReader, a few steps need to be achieved: I will first add a new buffer pool and a user temporary tablespace, turn on DB2 archive logging, and create a specific user for the replication.

 

 

You can do it either on the command line or using the studio; a minimal sketch of the command-line variant follows.
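This sketch only illustrates the preparation steps; the buffer pool and tablespace names, page size and size are placeholders, and archive logging itself is switched on in the database configuration rather than with SQL:

CREATE BUFFERPOOL LR_BP IMMEDIATE SIZE 1000 PAGESIZE 8K;
CREATE USER TEMPORARY TABLESPACE LR_TEMP_TS PAGESIZE 8K BUFFERPOOL LR_BP;
-- archive logging, from the DB2 command line (not SQL):
-- UPDATE DB CFG FOR <dbname> USING LOGARCHMETH1 'DISK:/db2/archivelog'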

13.jpg

 

14.jpg

 

15.jpg

 

Check the current log setting

16.jpg

 

And the last step is to create the OS technical user (ra_user) for the replication.

17.jpg

 

And grant the necessary privileges to the user

18.jpg

 

Now that this is completed, I can register my adapter and create my remote connection.

19.jpg

 

20.jpg

 

 

Sybase ASE adapter setup

 

The SAP ASE adapter provides real-time replication and change data capture functionality to SAP HANA, and can also write back to a virtual table.

 

In the interfaces file, add the necessary additional entries:

  • The entry name must be the same as the Adapter Instance Name specified when creating remote source.
  • The host name or IP must be the name of server where SAP ASE adapter will be running.
  • The port must be the same as the SAP ASE Adapter Server port that you set up in the ASE adapter interface file, located in <DPAgent_root>/Sybase/interfaces.

22.jpg

 

From an ASE perspective, I'll create two users which are required when the remote connection is created from HANA:

rep_user: for replication, with the replication role

mnt_user: for maintenance

21.jpg
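A hedged isql sketch of creating these logins; the passwords and the default database are placeholders, and the replication role is granted with sp_role:

exec sp_addlogin 'rep_user', 'RepUser#2016', mydb
go
exec sp_addlogin 'mnt_user', 'MntUser#2016', mydb
go
exec sp_role 'grant', replication_role, rep_user
go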

 

Once done, register your remote adapter and create the remote source in HANA.

23.jpg

 

24.jpg

 

With the connection completed, we can start the next configuration, for Teradata.

 

 

 

Teradata adapter setup

 

 

For my Teradata database, I will use release 15.0, packaged for ESXi and available from the Teradata website at the following link: "Teradata Virtual Machine Community Edition for VMware".

 

 

In order to create a remote connection, another account is required with SELECT privileges on the following "dbc" tables (a grant sketch follows the list):

  • "DBC"."UDTInfo"
  • "DBC"."DBase"
  • "DBC"."AccessRights"
  • "DBC"."TVM"
  • "DBC"."TVFields"

 

33.jpg

 

34.jpg

 

Once done, register your remote adapter and create the remote source in HANA.

 

Note: the official SPS 10 documentation doesn't mention that you need to load the Teradata JDBC driver into the lib folder of the DP Agent.

You can download this driver from the Teradata website according to your release and extract it into the lib directory.

35.jpg

 

36.jpg

 

37.jpg

 

38.jpg

 

20.png

 

Link to :

SAP Hana EIM  Connection Scenario Setup - Part 1

SAP Hana EIM  Connection Scenario Setup - Part 3

SAP Hana EIM Connection Scenario Setup - part 3


Twitter adapter setup

 

In order to replicate and consume Twitter content in HANA, I need to create a "Twitter app" in the developer space (https://dev.twitter.com).

25.jpg

 

From the documentation page, click on "Manage my Apps".

26.jpg

 

This leads you to the application management page, where you click on the "Create New App" button.

27.jpg

 

Provide the necessary information, accept the license terms, and click "Create your Twitter application" at the bottom of the page.

28.jpg

 

With the application now created, four pieces of information will be required in order to create the remote connection with HANA:

  • Consumer Key (API Key)
  • Consumer Secret (API Secret)
  • Access Token
  • Access Token Secret

 

From the created application's page, click on "Keys and Access Tokens".

29.jpg

 

From this page, note the two consumer keys.

30.jpg

 

And from the bottom of the page, create the access tokens to generate them.

31.jpg

 

32.jpg

 

Now that this is completed, I need to register my adapter and create my new connection in HANA.

32.1.jpg

 

32.2.jpg

 

 

 

Real time table replication

 

All my remote source connections are now created, so I can proceed with table replication. For my test lab I have created the same table, named "Store", in all remote source databases.

32.3.jpg

 

MS SQL


For MS SQL, when you set up the database to use "Change Data Capture" to track changes, you need to specify which tables you want to track.

39.jpg
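Tracking is switched on per table with the sp_cdc_enable_table procedure; a minimal sketch, assuming the Store table lives in the dbo schema:

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Store',
     @role_name     = NULL;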

 

From the Workbench editor, we create the replication task and uncheck "initial load".

21.png

 

 

ORACLE


Earlier I ran the "oracle_init.sql" script with the default replication user LR_USER; I created my "Store" table, which belongs to this user, for the replication.

40.jpg

 

From the workbench repeat the procedure to create a replication task and uncheck “initial load”

41.jpg

 

For DB2, Teradata and ASE, from the workbench repeat the procedure to create a replication task and uncheck “initial load”

 

Once the replication is working, you can check the task from the "Data Provisioning Task Monitor".

42.jpg

 

TWITTER

 

For Twitter replication, two tables should appear when the remote connection is created; the one I will use for my tweet replication is the "Status" table.

43.jpg

 

From the workbench I set up the live replication and check the replication task.

44.jpg

 

45.jpg

 

From the studio, I can see the content of the table, which contains all the tweets and news.

46.jpg

 

Replicating the Status table brings in a lot of elements; for my test I create a tweet on my Twitter page and check whether it appears in the table.

47.jpg

 

And I can see my tweet in the table

48.jpg

 

The next step is to filter my content: I'll create additional tweets and filter the replication on them only.

49.jpg

 

In order to filter, from the workbench apply a filter on the "ScreenName" column; the screen name value should basically be your account name.

50.jpg

 

And refresh my status table

51.jpg

 

My HomeLab replication is now completed.

 

Link to :

SAP Hana EIM  Connection Scenario Setup - Part 1

SAP Hana EIM  Connection Scenario Setup - Part 2

Enabling Historical Alerts in SAP HANA with DB Control Center


Introduction

SAP HANA is constantly collecting various health and performance statistics that can be viewed by database administrators (DBAs) to ensure the system is operating at optimal capacity. Situations may arise where SAP HANA encounters problems, and that will typically trigger an alert to notify DBAs of pending or potential issues. By analyzing the patterns of past alerts, the DBA can develop insights into the behavior of their systems, for instance, to learn how system configuration changes may be affecting performance. This document describes how to enable Historical Alerts on your SAP HANA systems using SAP HANA DB Control Center (DCC).

 

Requirements

  • An SAP HANA system with DCC, both SPS11 or higher
  • SAP HANA Studio

 

Steps

We will be using DCC and  SAP HANA Studio in order to accomplish our task. DCC collects both system health data (as viewed in the Enterprise Health Monitor), and HANA alert data (as viewed in the Alert Monitor). We will enable historical system health data using the SAP DCC Setup wizard, then complete our setup by enabling and configuring Historical Alerts collection in SAP HANA Studio. Finally, we will learn how to undo the changes made to our system.

  1. Preparing Your System
  2. Enable Historical Alerts in DCC
  3. Complete Setup of Historical Alerts in SAP HANA Studio
  4. Enable Historical Alerts on Registered Systems
  5. Turn Off Historical Alerts and Undo Changes


Preparing Your System

For a number of steps in this tutorial, we will be interacting with the tables inside the catalog of your HANA system. To access the necessary tables, log in to your system in HANA Studio, then in the Systems view expand your system along Catalog > SAP_HANA_DBCC > Tables, as per the screenshot below. The tables we will be using are named Alerts.HistoryConfig, Alerts.HistoryData, and Site.PreferenceValues. To see the contents of these tables, right click on each table and select “Open Content” from the context menu. At this time, you should also open a SQL Console for your system by right clicking on the System name in the Systems view, and selecting “Open SQL Console”  from the context menu.

Catalog_Structure.png

 

Enable Historical Alerts in DCC

1. Navigate to your DCC System's Launchpad, using the URL format below, and provide credentials at the login prompt. Ensure that your user has the sap.hana.dbcc.roles::DBCCConfig role (without this role, the user cannot use the DCC Setup app).
    http://<host_name>:<port>/sap/hana/dbcc/


2. Click on the SAP DCC Setup tile. You should see a list of DCC settings as the default tab.


Info: By default, Data History should be Disabled and History in days should be 30 days. If Data History is Enabled on your DCC system, skip Step 3.


3. Click the Edit button in the bottom-right corner. Check the Enabled checkbox and choose a Length of history (default 30 days). Click the Save button.

Data_History_SS.png

 

At this point, we have enabled the collection of historical system health data for the systems registered in DCC. We will confirm that this operation was successful in the next two steps.


4. In SAP HANA Studio, open/refresh the Site.PreferenceValues table. If the previous steps were performed correctly, you will notice an entry with name “apca.historical.enabled” and a v_int value of 1, and one with name “apca.historical.purge.max_age” and a v_int value of 43,200 (in minutes, equivalent to 30 days).


Note: If the Length of history in DCC was never changed from the default value, the “apca.historical.purge.max_age” may not be present. In this case SAP HANA will use the default value of 30 days. To generate this record, either adjust the Length of history in DCC (You may set it back to default and the record will remain) or execute the following SQL statement:


upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int") 
values ('apca.historical.purge.max_age', 43200) with primary key;


5. (Optional) As a check to ensure the previous steps were performed correctly, you can execute the following SQL statement to ensure system health data (Availability/Capacity/Performance) is being collected correctly. The result should be historical system health data populated to the current minute, as shown below:

   

select top 1000 * from "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical" order by "timestamp" desc;

APCA_Historical_SS.png

 

Complete Setup of Historical Alerts in SAP HANA Studio

Now that Historical system health data has been enabled, we must enable the Historical Alert data. To prepare for this step, open a SQL console for your system, as shown in the Preparing Your System step.


1. Generate two new records in the Site.PreferenceValues table. These records will enable the collection of Historical Alerts data. To do this, execute the following SQL Statements:

   

upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int") 
values ('alert.historical.enabled', 1) with primary key;

   

upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int")
values ('alert.historical.purge.max_age', 43200) with primary key;


Info: The “alert.historical.enabled” record acts as a master toggle switch for Historical Alert collections. When it is set to 1, DCC allows Historical Alert Collections to occur. As we will see later in this document, Historical Alerts can still be turned off for an individual system registered in DCC. However, if the “alert.historical.enabled” record is set to 0, no Historical Alert collection will occur, whether or not it is enabled on each system.

 

Info: The “alert.historical.purge.max_age” acts as a global purge age default. As we will see later in this document, the purge age can be overridden for an individual system registered in DCC.


2. (Optional) In order to check the values in the Site.PreferenceValues table have been entered correctly, execute the following SQL Statement:

select top 1000 * from "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues";

Site.PreferenceValues_SS.png

 

Enable Historical Alerts on Registered Systems

At this point, we have configured DCC to allow Historical Alert collection to occur. Now, all that is left to do is configure the individual systems registered in DCC to allow or block Historical Alert collection, according to our landscape requirements. For the purposes of this document, we will enable Historical Alert Collection for all but one registered system, and we will override the purge age for another system.

For this step, we will need to open the content of the Alerts.HistoryConfig table. This table contains one record for each registered system, with a variety of associated alert collection parameters.

Alerts.HistoryConfig_SS.png

 

1. You can identify each system by its historyUrl, which contains the system host name. For any systems that require additional configuration, it is recommended that you note the resourceId value to avoid using the long historyUrl in your SQL statements. In our example, we will be disabling alerts for the system with resourceId = 132, and overriding the purge age for the system with resourceId = 140.


2. To enable Historical Alerts for all registered systems, execute the following SQL Statement:

   

update "SAP_HANA_DBCC"."sap.hana.dbcc.data::Alerts.HistoryConfig"
set "isEnabled" = 1;


3. After opening/refreshing the Alerts.HistoryConfig table, we notice that the isEnabled field is set for each system. If you wish to have Historical Alert data collected for all systems, and you do not require any additional system-specific configuration, you may skip to Step 6.


4. To disable Historical Alert collection for the “mdc-tn2” system only (with resourceId = 132), we will run the following SQL Statement:

   

update "SAP_HANA_DBCC"."sap.hana.dbcc.data::Alerts.HistoryConfig"
set "isEnabled" = 0 where "resourceId" = 132;


5. To set the purge age for the “dewdflhana2314” system only (with resourceId = 140) to 60 days (86400 minutes), we will run the following SQL Statement:

   

update "SAP_HANA_DBCC"."sap.hana.dbcc.data::Alerts.HistoryConfig"
set "maxAge" = 86400 where "resourceId" = 140;


6. Finally, refresh the Alerts.HistoryData table (you may need to order by “collectTimestamp” DESC). You should now see the Historical Alerts data populated in this table, for the systems on which you have enabled Historical Alert collection.

 

 

Turn off Historical Alerts and Undo Changes

The simplest way to turn off Historical Alerts collection is to navigate back to the SAP DCC Setup tile in SAP DB Control Center, and uncheck the Enabled box. This action will set “v_int” = 0 in the “apca.historical.enabled” record of the Site.PreferenceValues table, causing DCC to stop collecting Historical System Health data. However, if you wish to undo the changes made in this document, you can execute the following SQL Statements:

   

delete from "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues"
where "name" = 'alert.historical.enabled';
delete from "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues"
where "name" = 'alert.historical.purge.max_age';
update "SAP_HANA_DBCC"."sap.hana.dbcc.data::Alerts.HistoryConfig"
set "isEnabled" = 0, "maxAge" = 0;

Conclusion

Historical Alerts can prove useful for deeper insight and analysis of system health and performance. SAP HANA provides the capability to fine tune Historical Alert collection and storage over your landscape of systems, using SAP DB Control Center and SAP HANA Studio. For a collection of all the SQL Statements used in this document, please refer to the available files, enableAlerts.sql and disableAlerts.sql.

It is also possible to move the historical alerts into extended storage using SAP HANA Dynamic Tiering.  For complete details on this topic, please refer to this document: http://scn.sap.com/docs/DOC-69205.


Enhanced database trace information for authorization issues in SAP HANA SPS 12


Several new and enhanced security features are available with SAP HANA SPS 12. For an overview see SAP HANA SPS 12 What's New: Security - by the SAP HANA Academy.

 

One further enhancement available with SPS 12 is the improved usability of the database trace for authorization issues.

 

When faced with authorization errors like "insufficient privilege: Not authorized", you typically enable database tracing for the "authorization" component and raise the trace level to INFO, for example using the SAP HANA studio like this:

authorizationtrace.png
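The same trace setting can also be made with SQL instead of the studio; a minimal sketch for the index server of a single-container system:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('trace', 'authorization') = 'info' WITH RECONFIGURE;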

However, understanding the information in the resulting trace file could be difficult. Let's have a look at some examples.

 

If a user ELKE performs a SELECT on a table sys.p_objects_ but is not authorized to do so, the pre-SPS 12 trace would have looked like this:

 

[36096]{200088}[18/-1] 2016-01-14 17:30:54.969173 i Authorization    SQLFacade.cpp(01535) : UserId(151595) is not authorized to do SELECT on ObjectId(2,0,oid=133410)

[36096]{200088}[18/-1] 2016-01-14 17:30:54.969225 i Authorization    SQLFacade.cpp(01960) :

    schemas and objects in schemas :

    SCHEMA-133151-SYS : {} , {SELECT}

        TABLE-133410-P_OBJECTS_ : {} , {SELECT}

[36096]{200088}[18/-1] 2016-01-14 17:30:54.969241 i Authorization    query_check.cc(03644) : User ELKE tried to execute 'select * from sys.p_objects_'

The name of the user and the queried table are in there, but a little hard to find. And there's no information on the actual authorization problem (which privilege is missing?).

 

In SPS 12, you can see much more easily what the problem is:

[33536]{300098}[25/-1] 2016-05-11 15:21:02.156360 i Authorization    SQLFacade.cpp(02507) : User ELKE is missing privilege SELECT for TABLE SYS.P_OBJECTS_

[33536]{300098}[25/-1] 2016-05-11 15:21:02.156406 i Authorization    query_check.cc(03626) : User ELKE tried to execute 'select * from sys.p_objects_'

 

Another example: User UTE tries to access the view ELKE.ELKEVIEW but gets a "not authorized" error despite having the SELECT privilege on the schema ELKE. The problem here is that the owner of the view (ELKE) doesn't have SELECT WITH GRANT OPTION on all dependent objects. But which one?

 

The pre-SPS 12 trace of such an authorization error would have looked like this:

[36096]{200088}[18/-1] 2016-01-14 17:30:57.197126 i TraceContext     TraceContext.cpp(00878) : UserName=UTE, ApplicationUserName=D024855, ApplicationName=HDBStudio, ApplicationSource=csns.sql.editor.SQLExecuteFormEditor$2$1.run(SQLExecuteFormEditor.java:856);, StatementHash=647967f17e04607ca7e7df165c6a7b88

[36096]{200088}[18/-1] 2016-01-14 17:30:57.197100 i Authorization    SQLFacade.cpp(01415) : UserId(151595) is not authorized to grant SELECT on ObjectId(2,0,oid=151599)

[36096]{200088}[18/-1] 2016-01-14 17:30:57.197157 i Authorization    SQLFacade.cpp(01958) : check for GRANT/REVOKE

[36096]{200088}[18/-1] 2016-01-14 17:30:57.197161 i Authorization    SQLFacade.cpp(01960) :

    schemas and objects in schemas :

    SCHEMA-151594-MASTER : {} , {SELECT}

        TABLE-151599-MTAB : {SELECT} , {}

        TABLE-151611-MTAB4 : {SELECT} , {}

        TABLE-151623-MTAB8 : {SELECT} , {}

        TABLE-151602-MTAB1 : {SELECT} , {}

        TABLE-151608-MTAB3 : {SELECT} , {}

        TABLE-151620-MTAB7 : {SELECT} , {}

        TABLE-151626-MTAB9 : {SELECT} , {}

        TABLE-151605-MTAB2 : {SELECT} , {}

        TABLE-151617-MTAB6 : {SELECT} , {}

It's difficult to see who or what has the problem.

 

In SPS 12, the problem is more easily identified:

[33536]{300098}[25/-1] 2016-05-11 15:21:09.101423 i TraceContext     TraceContext.cpp(00878) : UserName=UTE, StatementHash=647967f17e04607ca7e7df165c6a7b88

[33536]{300098}[25/-1] 2016-05-11 15:21:09.101407 i Authorization    SQLFacade.cpp(02507) : User ELKE is not allowed to grant privilege SELECT for TABLE MASTER.MTAB4

[33536]{300098}[25/-1] 2016-05-11 15:21:09.101451 i Authorization    check_view.cc(01103) : User UTE is not authorized to use VIEW ELKE.ELKEVIEW because of missing grantable privileges on underlying objects
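With the SPS 12 message, the remediation is also easy to derive; for this example, the owner of the MASTER schema would grant ELKE a grantable SELECT on the underlying table (a sketch for this scenario, not a general recommendation):

GRANT SELECT ON MASTER.MTAB4 TO ELKE WITH GRANT OPTION;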

Scientific Publications and Activities of the SAP HANA Database Campus


This is a list of selected publications and activities made by the SAP HANA Database Campus.


2016

  • Ismail Oukid, Johan Lasperas, Anisoara Nica, Thomas Willhalm, Wolfgang Lehner. FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory. SIGMOD 2016, San Francisco, California, USA, June 26 - July 1 2016.
  • Ismail Oukid, Daniel Booss, Adrien Lespinasse, Wolfgang Lehner. On Testing Persistent-Memory-Based Software. DaMoN 2016 (co-located with SIGMOD 2016), San Francisco, California, USA, June 27, 2016.
  • David Kernert, Wolfgang Lehner, Frank Köhler. Topology-Aware Optimization of Big Sparse Matrices and Matrix Multiplications on Main-Memory Systems. ICDE 2016, Helsinki, Finland, May 16-20, 2016.
  • Elena Vasilyeva, Maik Thiele, Thomas Heinze, Wolfgang Lehner. DebEAQ - Debugging Empty-Answer Queries On Large Data Graphs (Demonstration). ICDE 2016, Helsinki, Finland, May 16-20, 2016.
  • Elena Vasilyeva. Why-Query Support in Graph Databases (PhD Symposium). ICDE 2016, Helsinki, Finland, May 16-20, 2016.

2015

  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner. Considering User Intention in Differential Graph Queries. Journal of Database Management (JDM), 26(3), 21-40. doi: 10.4018/JDM.2015070102
  • Matthias Hauck, Marcus Paradies, Holger Fröning, Wolfgang Lehner and Hannes Rauhe, Highspeed Graph Processing Exploiting Main-Memory Column Stores, Workshop on Performance Engineering for Large Scale Graph Analytics (PELGA2015), in conjunction with EuroPar 2015, Vienna, Austria, Aug. 25, 2015
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner. Answering "Why Empty?" and "Why So Many?" queries in graph databases. Journal of Computer and System Sciences (2015), DOI=10.1016/j.jcss.2015.06.007 http://dx.doi.org/10.1016/j.jcss.2015.06.007
  • 2nd place in the ACM SIGMOD 2015 programming contest. For more details, click here.
  • The second SAP HANA student Campus Open House day took place in Walldorf on June 24th, 2015. For more details, click here.
  • Mehul Wagle, Daniel Booss, Ivan Schreter. Scalable NUMA-Aware Memory Allocations with In-Memory Databases. TPCTC 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Marcus Paradies, Elena Vasilyeva, Adrian Mocan, Wolfgang Lehner. Robust Cardinality Estimation for Subgraph Isomorphism Queries on Property Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Max Wildemann, Michael Rudolf, Marcus Paradies. The Time Has Come: Traversal and Reachability in Time-Varying Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Abdelkader Sellami, Anastasia Ailamaki. Scaling Up Concurrent Main-Memory Column-Store Scans: Towards Adaptive NUMA-aware Data and Task Placement. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Norman May, Franz Faerber. Indexing Highly Dynamic Hierarchical Data. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • David Kernert, Norman May, Michael Hladik, Klaus Werner, Wolfgang Lehner. From Static to Agile - Interactive Particle Physics Analysis with the SAP HANA DB. DATA 2015, Colmar, France, July 20-22, 2015.
  • Marcus Paradies, Wolfgang Lehner, Christof Bornhövd. GRAPHITE: An Extensible Graph Traversal Framework for Relational Database Management Systems. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Elena Vasilyeva, Maik Thiele, Adrian Mocan, Wolfgang Lehner. Relaxation of Subgraph Queries Delivering Empty Results. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Florian Wolf, Iraklis Psaroudakis, Norman May, Anastasia Ailamaki, Kai-Uwe Sattler. Extending Database Task Schedulers for Multi-threaded Application Code. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Ingo Müller, Peter Sanders, Arnaud Lacurie, Wolfgang Lehner, Franz Färber. Cache-Efficient Aggregation: Hashing Is Sorting. SIGMOD 2015, Melbourne, Australia, May 31-June 4, 2015.
  • Daniel Scheibli, Christian Dinse, Alexander Böhm. QE3D: Interactive Visualization and Exploration of Complex, Distributed Query Plans . SIGMOD 2015 (Demonstration), Melbourne, Australia, May 31-June 4, 2015.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Chang Ge, Anil K. Goel, Donald Kossmann. Bi-temporal Timeline Index: A Data Structure for Processing Queries on Bi-temporal Data. ICDE 2015, Seoul, Korea, April 2015.
  • Robert Brunel, Jan Finis, Gerald Franz, Norman May, Alfons Kemper, Thomas Neumann, Franz Faerber. Supporting Hierarchical Data in SAP HANA. ICDE 2015, Seoul, Korea, April 2015.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SpMachO - Optimizing Sparse Linear Algebra Expressions with Probabilistic Density Estimation. EDBT 2015, Brussels, Belgium, March 23-27, 2015.
  • Alexander Böhm: Keynote: Novel Optimization Techniques for Modern Database Environments. BTW 2015: 23-24, March 5, 2015, Hamburg
  • Alexander Böhm, Mathias Golombek, Christoph Heinz, Henrik Loeser, Alfred Schlaucher, Thomas Ruf: Panel: Big Data - Evolution oder Revolution in der Datenverarbeitung? BTW 2015: 647-648, March 5, 2015, Hamburg
  • Ismail Oukid, Wolfgang Lehner, Thomas Kissinger, Thomas Willhalm, Peter Bumbulis. Instant Recovery for Main-Memory Databases. CIDR 2015, Asilomar, California, USA. January 4-7, 2015.

 

2014

  • The first SAP HANA Student Campus Open House day took place in Walldorf on June 5th, 2014. For more details, click here.
  • Iraklis Psaroudakis, Florian Wolf, Norman May, Thomas Neumann, Alexander Böhm, Anastasia Ailamaki, Kai-Uwe Sattler. Scaling up Mixed Workloads: a Battle of Data Freshness, Flexibility, and Scheduling. TPCTC 2014, Hangzhou, China, September 1-5, 2014.
  • Michael Rudolf, Hannes Voigt, Christof Bornhövd, Wolfgang Lehner. SynopSys: Foundations for Multidimensional Graph Analytics. BIRTE 2014, Hangzhou, China, September 1, 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Top-k Differential Queries in Graph Databases. In Advances in Databases and Information Systems - 18th East European Conference, ADBIS 2014, Ohrid, Republic of Macedonia, September 7-10, 2014.
  • Kim-Thomas Rehmann, Alexander Böhm, Dong Hun Lee, Jörg Wiemers: Continuous performance testing for SAP HANA. First International Workshop on Reliable Data Services and Systems (RDSS), Co-located with ACM SIGMOD 2014, Snowbird, Utah, USA
  • Guido Moerkotte, David DeHaan, Norman May, Anisoara Nica, Alexander Böhm: Exploiting ordered dictionaries to efficiently construct histograms with q-error guarantees in SAP HANA. SIGMOD Conference 2014, Snowbird, Utah, USA
  • Ismail Oukid, Daniel Booss, Wolfgang Lehner, Peter Bumbulis, Thomas Willhalm. SOFORT: A Hybrid SCM-DRAM Storage Engine For Fast Data Recovery. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Iraklis Psaroudakis, Thomas Kissinger, Danica Porobic, Thomas Ilsche, Erietta Liarou, Pinar Tözün, Anastasia Ailamaki, Wolfgang Lehner. Dynamic Fine-Grained Scheduling for Energy-Efficient Main-Memory Queries. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Marcus Paradies, Michael Rudolf, Christof Bornhövd, Wolfgang Lehner. GRATIN: Accelerating Graph Traversals in Main-Memory Column Stores. GRADES 2014, Snowbird, USA, June 22-27, 2014.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SLACID - Sparse Linear Algebra in a Columnar In-Memory Database System. SSDBM, Aalborg, Denmark, June/July 2014.
  • Ingo Müller, Peter Sanders, Robert Schulze, Wei Zhou. Retrieval and Perfect Hashing using Fingerprinting. SEA 2014, Copenhagen, Denmark, June/July 2014.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Donald Kossmann. Benchmarking Bitemporal Database Systems: Ready for the Future or Stuck in the Past? EDBT 2014, Athens, Greece, March 2014.
  • Ingo Müller, Cornelius Ratsch, Franz Färber. Adaptive String Dictionary Compression in In-Memory Column-Store Database Systems. EDBT 2014, Athens, Greece, March 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: GraphMCS: Discover the Unknown in Large Data Graphs. EDBT/ICDT Workshops: 200-207.

 

2013

  • Sebastian Breß, Felix  Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake,  Efficient co-processor utilization in database query processing,  Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096
  • Martin  Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main  Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30,  2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del  Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber, Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization. GRADES 2013, New York, USA, June 22-27, 2013.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Leveraging Flexible Data Management with Graph Databases. GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter Sanders. Efficient Many-Core Query Execution in Main Memory Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Martin Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013, Brisbane, Australia, April 8-12, 2013.

  • Martin Kaufmann, Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP), Norman May (SAP): Timeline Index: A Unified Data Structure for Processing Queries on Temporal Data, SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Martin Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann, Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Rudolf, M., Paradies, M., Bornhövd, C., & Lehner, W. (2013). The Graph Story of the SAP HANA Database. BTW (pp. 403–420).
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd, C. (2012). Efficient transaction processing in SAP HANA database. SIGMOD Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-U., & Färber, F. (2009). Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen. Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova, O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz eines Data Warehouse. Datenbanksysteme für Business, Technologie und Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.

SAP HANA Database Campus – Open House 2016 in Walldorf


The SAP HANA Database Campus invites students, professors, and faculty members interested in database research to join our third Open House at SAP's headquarters. Throughout the day, you will get an overview of database research at SAP, meet the architects of SAP HANA, and learn more about academic collaborations. There will be a number of interesting presentations by developers and academic partners, and current students and PhD candidates will present their work and research. For external students and faculty members it is a great chance to find interesting topics for internships and theses.


The event takes place on June 2nd, 2016, from 09:30 to 16:00 in Walldorf, Germany. Free lunch and snacks are provided for all attendees. The entire event is held in English.

 

Register here

 

 

Looking forward to seeing you in Walldorf,

The SAP HANA Database Campus

students-hana@sap.com

 

 

Location:

  • SAP Headquarters, WDF03, Robert-Bosch-Str. 30, 69190 Walldorf, Germany
  • Room E4.02, Check-In Desk in the lobby of WDF03

 

Agenda:

  • 09:00-09:30 Arriving
  • 09:30-10:00 Check-In
  • 10:00-10:15 Opening
  • 10:15-11:00 Keynote
    • Dr. Norman May (SAP HANA group) – Topic will be announced
  • 11:00-12:00 Poster Session Part 1 & Career Booth
  • 12:00-12:45 Lunch
  • 12:45-13:00 Office Tour
  • 13:00-14:00 Session 1 – Academic
    • Prof. Anastasia Ailamaki (EPFL) – Scaling Analytical and OLTP Workloads on Multicores: Are we there yet? [30 min]
    • Ismail Oukid (SAP HANA PhD Student, TU Dresden) – FPTree: A Hybrid SCM-DRAM Persistent and Concurrent B-Tree for Storage Class Memory [15 min]
    • Robert Brunel (SAP HANA PhD Student, TU Munich) – Managing Hierarchical Data in Main-Memory RDBMS [15 min]
  • 14:00-15:00 Poster Session Part 2, Career Booth & Coffee Break
  • 15:00-15:45 Session 2 – SAP
    • Hinnerk Gildhoff (SAP) – SAP HANA Spatial & Graph [20 min]
    • Daniel Booss (SAP) – SAP HANA Basis [20 min]
  • 15:45-16:00 Best Student/PhD-Student Poster & Open House Closing

 

 

Archive of previous events


By participating you agree to appear in photos and videos taken during the event and published on SCN and CareerLoft.

Troubleshooting SAP HANA Authorisation issues


This document deals with troubleshooting authorisation issues, in particular analytic privileges, in SAP HANA.


 

So what are privileges, some might ask?

 

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

 

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

 

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.
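
As a rough illustration, a SQL-based analytic privilege that restricts read access by country could look like the following sketch. The privilege, view, column, and user names are made up for this example, and the view must be one that is configured to be checked by (SQL) analytic privileges:

-- define a structured (analytic) privilege restricting the view to German data
CREATE STRUCTURED PRIVILEGE AP_SALES_GERMANY
FOR SELECT ON "_SYS_BIC"."mypackage/CV_SALES"
WHERE "COUNTRY" = 'DE';

-- grant the restriction to a reporting user
GRANT STRUCTURED PRIVILEGE AP_SALES_GERMANY TO REPORT_USER;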

 

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

Capture.PNG

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So, for example, we have a model in the HANA Studio catalog called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So, if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

So this is the perfect opportunity to run an authorisation trace by means of the SQL console in HANA Studio.

The instructions below walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in the HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;
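
If you want to double-check whether the authorization trace is currently switched on (or has been removed again), the setting can be read back from the monitoring view M_INIFILE_CONTENTS, for example:

-- shows the current value of the trace/authorization parameter in indexserver.ini
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM "SYS"."M_INIFILE_CONTENTS"
WHERE FILE_NAME = 'indexserver.ini' AND SECTION = 'trace' AND KEY = 'authorization';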

 

_____________________________________________________________________________________________________________________________

 

Only use the DEBUG level trace below when instructed to do so by SAP; in normal circumstances it is recommended to use "INFO" rather than "DEBUG".

 

 

If you would like a more detailed trace of the privileges needed, you can also execute the DEBUG level trace (usually SAP Development would request this):

 

1) Please run the following statement in the HANA database to set the DB trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='debug' with reconfigure;


 

2) Reproduce the issue/execute the command again


 

3) When the execution finishes, please turn off the trace as follows in the HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

______________________________________________________________________________________________________________________________

 

So, now that you have turned the trace on, reproduced the issue, and turned the trace off again, you can examine the trace output.

 

You should now see a new indexserver trace file (indexserver*.trc) in the Diagnosis Files tab in HANA Studio.

Capture.PNG

 

Once you open the trace file, scroll to the end and you should see something similar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user who is trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the analytic privileges required to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which corresponds to object 78787.
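
As a quick cross-check of what the user actually does have, you can also query the user's effective privileges (note that this view has to be filtered on a specific USER_NAME):

-- lists all privileges effective for the user, including those inherited via roles
SELECT * FROM "SYS"."EFFECTIVE_PRIVILEGES" WHERE USER_NAME = 'TABLEAU';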

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

SELECT * FROM "SYS"."OBJECTS" WHERE OBJECT_OID = '<oid>';

SELECT * FROM "SYS"."OBJECTS" WHERE OBJECT_OID = '78787';

 

Once you have found out the owner of this object, you can ask the owner to grant the TABLEAU user the privileges necessary to query the object, for example:
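
If the missing privilege is an activated (repository) analytic privilege, it is granted via the repository procedures rather than a plain GRANT. A minimal sketch, assuming a design-time analytic privilege whose activated name is "mypackage/AP_QMT_MASTERDATA" (a placeholder here):

-- grants the activated analytic privilege to the user TABLEAU
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"('"mypackage/AP_QMT_MASTERDATA"','TABLEAU');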

 

This has since changed in the new revision of SP12, see here.

 

Please be aware that if you find that the owner of an object is _SYS_REPO, this is not as straightforward as simply logging in as _SYS_REPO; that is not possible, because _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

You have to create a .hdbrole file which gives the access (a development type of role, granting SELECT, EXECUTE, INSERT, etc.) on this schema. You then assign this role to the user who is trying to access the object.
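
A minimal sketch of such a design-time role might look like the following; the package, role, schema, and user names are all placeholders. The content below would go into a file such as data_access.hdbrole in the package mypackage.roles:

role mypackage.roles::data_access {
    catalog schema "MY_SCHEMA": SELECT, EXECUTE, INSERT;
}

Once the role is activated, it is assigned via the repository procedure rather than a plain GRANT:

-- assigns the activated design-time role to the end user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('mypackage.roles::data_access','TABLEAU');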

 

 

Another option for analyzing privilege issues was introduced as of SPS 09. It comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

Troubleshooting SAP HANA Delivery Units and HANA Live Packages issues.


Whilst importing delivery units into your HANA system, you can sometimes run into common errors which can easily be fixed without opening an SAP incident.

 

Let's look at an example.

 

Here you are importing SAP HANA Analytics into your system. During the import you see an error:

 

Wiki.PNG

 

To get a more in-depth look at what actually went wrong here, we need to look into the installation log (this is printed after the import fails) or the indexserver.trc file:

 

[37654]{228998}[123/-1] 2014-04-07 23:24:06.604933 e REPOSITORY       activator.cpp(01179) : Repository: Activation failed for at least one object;At least one runtime reported an error during revalidation. Please see CheckResults for details.

 

The problem in such cases is that the person responsible for the prerequisites of the import did not check SAP Note 1781992 before starting the import.

It is very important to have the necessary tables in the SAP_ECC schema, or else the import will fail. The best thing to do if this fails is to compare the existing tables with the tables listed in the note:

 

select table_name from m_cs_tables where schema_name = 'SAP_ECC' order by table_name asc;
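
One handy way to do the comparison directly in SQL is to put the table names from the note into the query as well; the three names below (BKPF, BSEG, MARA) are only placeholders for the actual list in SAP Note 1781992:

-- returns the expected tables that are missing in the SAP_ECC schema
select expected.table_name
from ( select 'BKPF' as table_name from dummy
       union all select 'BSEG' from dummy
       union all select 'MARA' from dummy ) expected
left outer join m_cs_tables t
  on t.schema_name = 'SAP_ECC' and t.table_name = expected.table_name
where t.table_name is null;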

 

1: What do you do if the import is still failing after all the tables have been imported?

 

Check the tables for invalid attributes and make sure the tables are set up correctly. (This just involves recreating the table).

 

You should also note which delivery units failed to import. Re-importing the delivery unit is also a valid approach to fix activation or deployment errors.

 

2: What do I do if the activation of some of the views is failing after the import?

 

Make sure that when the tables are being searched for, the correct schema is being searched; this involves ensuring that your schema mapping is done correctly. An example of this can be seen in the trace below:

 

One table for a calculation view was searched in schema DT4_XT5: "- CalculationNode(WRF_CHARVALT): ColumnTable DT4_XT5:WRF_CHARVALT not found (cannot get catalog object)."

 

This schema does not exist, so the activation of "sap.is.retail.ecc.RetailCharacteristicValue" will obviously fail. Now you have to ask yourself: what schema did I start the installation with, and did I change to a different schema somewhere in the process? Also check whether you moved the "WRF_CHARVALT" table elsewhere.
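
Which physical schema an authoring schema is currently mapped to can be checked in HANA Studio (Quick View > Schema Mapping) or, assuming the standard mapping table "_SYS_BI"."M_SCHEMA_MAPPING" is in use, directly with SQL:

-- shows which physical schema each authoring schema is mapped to
SELECT AUTHORING_SCHEMA, PHYSICAL_SCHEMA FROM "_SYS_BI"."M_SCHEMA_MAPPING";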

 

 

3: What if the user does not have the required privileges when activating CAR HANA Content or any other content?


When checking the logs you see activation errors which refer to, let's say, the SAP_CRM schema. When looking for this schema in the catalog you cannot see it, so this leads to the question: does it actually exist, or is my user prohibited from viewing it? The answer is most likely the latter. Make sure your user has been granted the SELECT privilege on the SAP_CRM schema. A good guide to follow is SAP Note 1936727.

 

So, referring to the note, you could check _SYS_REPO to see if any important privileges are missing, such as EXECUTE, INSERT, etc. Note that _SYS_REPO normally only needs the SELECT privilege for a plain installation.
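
Granting the missing schema privilege to _SYS_REPO is then a plain GRANT, executed by a user who is allowed to grant on that schema; WITH GRANT OPTION is typically required so that _SYS_REPO can in turn expose the data through the activated views:

GRANT SELECT ON SCHEMA "SAP_CRM" TO _SYS_REPO WITH GRANT OPTION;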

 

 

 

4: What if you face errors when importing HANA SMART BUSINESS FOR ERP 1.0?

 

Checking the installation log after the import fails, you see:

 

Object: sap.hba.r.sappl604::BSMOverview.calculationview ==> Error Code: 40117 Severity: 3

Object: sap.hba.r.sappl604::BSMOverview.calculationview ==> Repository: Encountered an error in repository runtime extension;Model inconsistency. Create Scenario failed:

 

 

ColumnView _SYS_BIC:sap.hba.r.sappl604/BSMInfoOverview not found (cannot get catalog object)(CalculationNode (BSMInfoOverview))

 

 

The following errors occured: Inconsistent calculation model (34011)

Details (Errors):

- CalculationNode (BSMInfoOverview): ColumnView _SYS_BIC:sap.hba.r.sappl604/BSMInfoOverview not found (cannot get catalog object).


The solution for this can be found in SAP Note 2317634

 

 

5: What if you receive errors when performing data previews on the package sap.hba.ecc?

 

When viewing the sap.hba.ecc packages you can see that some calculation views are marked red. When clicking on data preview you see the error:

wik2.PNG


What we now need to ask is: did any of these views ever work at all, or is the problem specific to individual views? If you answered "yes" to the first question, then re-deploy the delivery unit; this should fix the issue.

 

If specific views are causing the issue, then try to re-deploy each view separately through Studio and activate it again. If this does not work, go into the Diagnosis Files tab in HANA Studio and pull up the most recent errors, which should have been printed in the indexserver.trc file. Check which table is being accessed for this view and which schema it is in. Check whether the user you are logged in with, as well as _SYS_REPO, has the correct privileges. (An authorisation trace may be useful here.)

 

 

The solution for this can also be found in SAP Note 2318731.

 

 

 

Recommended notes and guides to follow:

 

 

  1. SAP Note 2117481 - Release Information Note: SAP Simple Finance, on-premise edition 1503.
  2. SFIN admin guide: https://websmp102.sap-ag.de/~sapidb/012002523100007133222015E/SFIN_ADMIN_GUIDE_201.pdf
  3. SAP HANA Content with HDBALM: http://help.sap.com/saphelp_hanaplatform/helpdata/en/bd/b7a459c3144fcab1c5641c72c1158d/content.htm