
SAP HANA Data Warehousing Foundation


SAP HANA Data Warehousing Foundation 1.0

 

This first release provides packaged tools for large-scale SAP HANA use cases to support data management and distribution within an SAP HANA landscape more efficiently. Further versions will focus on additional tools to support native SAP HANA data warehouse use cases, in particular data lifecycle management.

 

 

 

On this landing page you will find a summary of information to get started
with SAP HANA Data Warehousing Foundation 1.0.

Presentations

 

 

Demos

 

SAP HANA Data Warehousing Foundation Playlist
In this SAP HANA Data Warehousing Foundation playlist you will find demos showing the Data Distribution Optimizer (DDO) as well as the Data Lifecycle Manager (DLM).

 

SAP HANA Academy

 

Find out more about data temperature integration with HANA, including Data Lifecycle Management, on the DWF YouTube channel of the SAP HANA Academy.


Troubleshooting SAP HANA Authorisation issues


This document deals with issues regarding analytic privileges in SAP HANA.


 

So what are privileges, some might ask?

 

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So, for example, we have a model in a HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it is the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the user ID? And most importantly, how are you to find out which analytic privilege is missing?

 

So this is the perfect opportunity to run an authorisation trace by means of the SQL console in HANA Studio.

The following instructions walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;
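
To confirm that the trace parameter has been set or removed, a minimal sketch reading the effective configuration from the M_INIFILE_CONTENTS monitoring view (the filter values simply mirror the statements above):

SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM "SYS"."M_INIFILE_CONTENTS"
WHERE FILE_NAME = 'indexserver.ini'
  AND SECTION = 'trace'
  AND KEY = 'authorization';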

 

_____________________________________________________________________________________________________________________________

 

Only use the DEBUG level trace when instructed to do so by SAP. It is recommended to use "INFO" rather than "DEBUG" in normal circumstances.

 

 

If you would like a more detailed trace of the privileges needed, you can also execute the DEBUG level trace (usually SAP Development would request this):

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='debug' with reconfigure;


 

2) Reproduce the issue/execute the command again


 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

______________________________________________________________________________________________________________________________

 

So now that you have turned the trace on and reproduced the issue, you must turn off the trace.

 

You should now see a new indexserver trace file created in the Diagnosis Files tab in HANA Studio.

Capture.PNG

 

So once you open the trace file, scroll to the end of the file and you should see something similar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which corresponds to object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

select * from objects where object_oid = '<oid>';

select * from objects where object_oid = '78787';

 

Once you have found the owner of this object, you can ask the owner to grant the TABLEAU user the necessary privileges to query the object.
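
Before asking for a grant, it can help to check what the user currently holds. A minimal sketch using the EFFECTIVE_PRIVILEGES system view (the USER_NAME filter is required for this view; the column list may vary slightly by revision):

SELECT OBJECT_TYPE, SCHEMA_NAME, OBJECT_NAME, PRIVILEGE
FROM "SYS"."EFFECTIVE_PRIVILEGES"
WHERE USER_NAME = 'TABLEAU';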

 

Please be aware that if you find that the owner of an object is _SYS_REPO, it is not as straightforward as logging in as _SYS_REPO; this is not possible, because _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design-time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

You have to create a .hdbrole file which gives the access (a development type of role, granting SELECT, EXECUTE, INSERT, etc.) on this schema. You then assign this role to the user who is trying to access the object.
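
Once such a repository role has been activated, it is granted through the _SYS_REPO grant procedure rather than with a plain GRANT statement. A minimal sketch, with a placeholder role name (mypackage::ReportingRole is hypothetical):

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('mypackage::ReportingRole', 'TABLEAU');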

 

 

Another option for analyzing privilege issues was introduced as of SPS 9: the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

SAP HANA SPS10 – What is New for Backup and Recovery


This post outlines new and enhanced features of SAP HANA backup and recovery with Support Package Stack 10.

The information here has been collected from several sources with the intent of making it more accessible to interested readers.

 

Contents

 

 

Recovery Using Delta Backups

With SPS10, SAP HANA supports recovery using delta backups (incremental and differential backups).

 

Full Backups and Delta Backups

SAP HANA now supports the following backup types:

 

From SPS10, "full backup" is used to mean either of the following:

  • Data backup
    A data backup includes all the data structures that are required to recover the database.
  • Storage snapshot
    A storage snapshot captures the content of the SAP HANA data area at a particular point in time.

 

From SPS10, SAP HANA now supports the following delta backups:

  • Incremental backup
    An incremental backup stores the data changed since the last backup - either the last full data backup or the last delta backup (incremental or differential).
  • Differential backup
    A differential backup stores all the data changed since the last full data backup.

 

Note that delta backups (incremental or differential) contain actual data, whereas log backups contain redo log entries.

 

Delta backups are included in the backup catalog.

When you display the backup catalog, delta backups are hidden by default.

 

To display delta backups in the backup catalog:

  1. In SAP HANA studio, open the Backup Console and go to the Backup Catalog tab.
  2. Select Show Delta Backups.
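
Alternatively, a minimal SQL sketch for listing delta backups directly from the backup catalog (the entry type names used in the filter are assumptions and may differ by revision):

SELECT BACKUP_ID, ENTRY_TYPE_NAME, SYS_START_TIME, STATE_NAME
FROM "SYS"."M_BACKUP_CATALOG"
WHERE ENTRY_TYPE_NAME IN ('incremental data backup', 'differential data backup')
ORDER BY SYS_START_TIME DESC;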

 

You can use both incremental and differential backups in your backup strategy.

 

Backup lifecycle management now also includes delta backups.

When you delete all the backups older than a specific full backup, older delta backups are also deleted along with older full backups and log backups.

 

 

SAP HANA Recovery Options Using Delta Backups

If delta backups are available, they are included by default in a recovery. In the recovery dialog in SAP HANA studio, you can choose to perform a recovery without using delta backups:

sap-hana-other-settings.png

 

 

If you include delta backups, SAP HANA automatically determines the optimal recovery strategy based on all the available backups.

 

 

Recovery to the Most Recent State

What you need:

 

  • A data backup

    AND
  • The last differential backup
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent incremental backups
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent log backups

    AND
  • Redo log entries that are still available in the log area
    (If the log area is still available.)

 

 

Recovery to a Point in Time in the Past

What you need:

 

As for a recovery to the most recent state.

Redo log entries from the log area may not be needed.

 

 

SQL Statements for Recovery Without Delta Backups

By default, SAP HANA includes delta backups in a recovery.

 

If you wish to recover SAP HANA without using delta backups, you can use the following SQL statement:

 

RECOVER DATABASE UNTIL TIMESTAMP '<timestamp>' IGNORE DELTA DATA BACKUPS

 

Example:

RECOVER DATABASE UNTIL TIMESTAMP '2015-05-15 10:00:00' IGNORE DELTA DATA BACKUPS

 

 

Finding and Checking the Backups Needed for a Recovery

You can use hdbbackupdiag to display the backups needed to recover the database.

In this way, you can minimize the number of backups that need to be made available for a recovery.

 

With SPS10, hdbbackupdiag supports delta backups.

 

More information: Checking Whether a Recovery is Possible

 

 

Prerequisites for Performing a Recovery

 

  • Operating system user <sid>adm
  • Read access to the backup files
  • System privilege DATABASE ADMIN
    (for tenant databases in a SAP HANA multiple-container system)

 

 

Third-Party Backup Tools and Delta Backups

Delta backups are compatible with the current API specification for third-party backup tools (Backint).

 

  • For delta data backups, SAP HANA uses the Backint option -l LOG in combination with the data Backint parameter file:

    -p /usr/sap//SYS/global/hdb/opt/hdbconfig/initData.utl

    Third-party backup tools sometimes use the Backint option -l to determine the backup container using the Backint parameter file.
    This means that for the option -l LOG, the log backup container is used.
  • Caution:
    Backup containers that were, until SPS10, only used for log backups may be sized too small for delta backups.
    If a log full situation occurs, this could cause a database standstill.
  • The Backint parameter file is tool-specific and typically contains information such as the backup destination.
    Note: Some third-party backup tools support only one Backint parameter file for both data and log backups
  • Recommendation:
    Ask your backup tool vendor for details of how to configure the tool to work with delta backups.
    If in doubt, configure two dedicated Backint parameter files: one for data backups, and one for log backups.

 

SQL Statements for Delta Backups

To create an incremental backup, use the following SQL statement:

 

BACKUP DATA INCREMENTAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_incremental_0_1431675865039_0_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_1_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_2_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_3_1

 

To execute a differential backup, use the following SQL statement:

 

BACKUP DATA DIFFERENTIAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_differential_0_1431675646547_0_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_1_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_2_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_3_1

 

In this example, 1431329211296 is the backup ID of the basis data backup; 1431675646547 is the backup ID of the delta backup.

 

Prerequisites for Working with Delta Backups

System privilege BACKUP ADMIN, BACKUP OPERATOR (recommended for batch users only), or DATABASE ADMIN (for MDC)

 

More Information

Delta Backups

 

 

Backup Functionality in SAP HANA Cockpit

In addition to SAP HANA studio, you can now also start SAP HANA backup operations from SAP HANA cockpit.

 

From SAP HANA cockpit, you can:

  • Create data backups
  • Display information about data backups

 

Create Data Backups in SAP HANA Cockpit

Using SAP HANA cockpit, you can create data backups.

 

  1. In SAP HANA cockpit, click the Data Backup tile.
  2. Choose Start New Backup and specify the backup settings.

    sap-hana-cockpit-backup-progress.png

     

  3. To start the backup, choose Back Up.

    sap-hana-cockpit-backup-start.png

     

    The overall progress is displayed on the Data Backup tile.

    To see more details of the backup progress, click the tile.

 

 

Display Information About Backups in SAP HANA Cockpit

If a backup is running, the Data Backup tile displays its progress.

 

If no backup is running, the Data Backup tile displays the status of the most recent full backup:

  • Successful
  • Running
  • Snapshot Prepared
  • Canceled
  • Error

 

Click the tile to display more details from the backup catalog:

sap-hana-cockpit-backup-catalog.png

 

 

The following information is displayed:

  • Time range that the backup catalog covers
  • Total size of the backup catalog
    Information about the most recent backups within the time range
    (status, start time, backup type, duration, size, destination type and comment)

 

Click a row to display more details:

sap-hana-cockpit-backup-catalog-details.png

 

 

Prerequisites for Creating Backups in SAP HANA Cockpit

 

  • System privilege BACKUP OPERATOR or BACKUP ADMIN
  • Role:
    • sap.hana.backup.roles::Operator
      or
    • sap.hana.backup.roles::Administrator

 

Notes

Storage snapshots, backup lifecycle management, and database recovery are currently not supported in SAP HANA cockpit.

 

More Information

SAP HANA Cockpit

SAP HANA Administration Guide: SAP HANA Database Backup and Recovery

 

 

Support for SAP HANA Multitenant Database Containers

In SAP HANA studio, the steps to recover a SAP HANA multitenant database container system are similar to the steps to recover a SAP HANA single-container system.

 

Note:

Storage snapshots are currently not supported for SAP HANA multitenant database container systems.

 

The system database plays a central role in the backup and recovery of SAP HANA multitenant database containers:

  • The system database can initiate backups of the system database itself as well as of individual tenant databases.
    A tenant database can also perform its own backups (unless this feature has been disabled for the tenant database)
  • Recovery of tenant databases is always initiated from the system database
  • To recover a complete SAP HANA multitenant database container system, the system database and all the tenants need to be recovered individually.

 

 

SAP HANA Multitenant Database Containers and Third-Party Backup Tools

When you work with third-party backup tools and SAP HANA multitenant database container system, you should be aware of some specific points:

 

Isolation Level “High”

With SPS10, a new option “isolation level” was introduced for SAP HANA multitenant database container systems.

 

In isolation level high, each tenant database has its own dedicated operating system user.

 

In high isolation scenarios, Backint is supported by SAP HANA. However, you should check with your third-party tool vendor whether any tool-specific restrictions apply.

 

Tenant Copy

Tenant copy using Backint is currently not supported.

To copy a tenant database using backup and recovery, use file system-based backups instead.

 

 

DBA Cockpit for SAP HANA: New Backup Functionality

DBA Cockpit for SAP HANA supports the following SAP HANA SPS10 functionality:

  • Delta backups (incremental and differential backups)

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.50 SP01
    • 7.40 SP13
    • 7.31 SP17
    • 7.30 SP14
    • 7.02 SP18
  • Backups of tenant databases
    All tenant databases in an SAP HANA multitenant database container can be backed up independently of each other.

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.40 SP10
    • 7.31 SP15
    • 7.30 SP13
    • 7.02 SP17

 

Note that the tenant database on which DBA Cockpit is installed is supported "out of the box".

No additional setup steps are necessary in DBA Cockpit.

System databases need to be integrated manually.

More information: SAP Help Portal -> DBA Cockpit for SAP HANA -> Add a Database Connection

 

To schedule backups:

 

  1. In DBA Cockpit, Choose Jobs -> DBA Planning Calendar.
    Alternatively, use SAP transaction DB13.
  2. To schedule a new data backup, drag an item from the Action Pad to a cell in the calendar.
    dba-cockpit-action-pad.png
    To back up a tenant database, choose Complete Data Backup.
    Tenant databases are backed up from within the system database.
  3. In the dialog box, specify the information required.
    dba-cockpit-backup-mdc.png
    For Database Name, specify the name of the tenant database you want to back up.
  4. Choose Add or Execute Immediately.
    The backup is scheduled for the time you specified or is started.

 

More information: SAP Note 2164096 - Schedule backups for SAP HANA multiple-container systems with DB13

 


New Monitoring View: Progress of Backup

M_BACKUP_PROGRESS provides detailed information about the most recent data backup.

 

Here is a comparison of M_BACKUP_PROGRESS and M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES:

 

M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES:

  • All types of backups (data, log, storage snapshots, if available)
  • All completed and currently running backups since the database was created
  • Persistent
  • Total amount of data for finished backups only

M_BACKUP_PROGRESS:

  • Data backups only (data, delta, incremental)
  • Currently running and last finished backups only
  • Cleared at database restart
  • Total and already transferred amount of data for all backups

 

System views are located in the SYS schema.
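
A minimal sketch for checking the progress of the most recent data backup (all columns are selected here because the exact column list can vary by revision):

SELECT * FROM "SYS"."M_BACKUP_PROGRESS";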

 

More information: M_BACKUP_PROGRESS in the SAP HANA SQL and System Views Reference Guide

 

 

Which SAP HANA Tool Supports What?

Below is an overview of the backup and recovery functionality supported by the different SAP HANA tools with SAP HANA SPS10:

 

Data Backup: SAP HANA Studio, SAP HANA Cockpit, DBA Cockpit for SAP HANA
Storage Snapshot: SAP HANA Studio
Incremental Backup: SAP HANA Studio, DBA Cockpit for SAP HANA
Differential Backup: SAP HANA Studio, DBA Cockpit for SAP HANA
Schedule Backups: DBA Cockpit for SAP HANA
Database Recovery: SAP HANA Studio
Support for Tenant Databases: SAP HANA Studio, DBA Cockpit for SAP HANA

 

 

More Information

SAP HANA User Guides

 

 

Overview Presentation

SAP HANA Backup/Recovery Overview

 

 

Training

 

 

 

SAP Notes

 

  • 2165826
    Information about SAP HANA Platform Support Package Stack (SPS) 10
  • 1642148
    FAQ: SAP HANA database backup and recovery
  • 2031547
    Overview of SAP-certified 3rd party backup tools and associated support process
  • 2039883
    FAQ: SAP HANA database and storage snapshots
  • 2165547
    FAQ: SAP HANA Database Backup & Recovery in a SAP HANA System Replication landscape
  • 2091951
    Best Practices for SAP HANA Backup and Restore

 

Further SAP notes are available on component HAN-DB-BAC

SAP HANA - interesting notes and other information


Dear all,

 

I have been working with SAP HANA for more than four years now. During this period we had to find solutions for different kinds of problems. In many cases I also received additional information that extended my knowledge around SAP HANA.

 

The following notes and web pages might be very useful for your daily work with SAP HANA. The SAP notes mentioned explain various parts of SAP HANA very well. On the web you can find well-written examples and explanations. All this information might help you administer and maintain your HANA landscape.

 

Note number - Description

2063657 - HANA system replication takeover decision guidelines
1925267 - forgot SYSTEM password
1999997 - FAQ: SAP HANA memory
2044468 - FAQ: SAP HANA partitioning
1999998 - FAQ: SAP HANA lock analysis
2000002 - FAQ: SAP HANA SQL optimization
2100009 - FAQ: SAP HANA savepoints
2147247 - FAQ: SAP HANA statistics server
2000003 - FAQ: SAP HANA
2114710 - FAQ: SAP HANA threads and thread samples
1999993 - how-to: interpreting SAP HANA mini check results
1514967 - SAP HANA: central note
2186744 - FAQ: SAP HANA parameters
2036111 - configuration parameters for the SAP HANA systems
1969700 - SQL statement collection for SAP HANA
1999880 - FAQ: HANA system replication

 

Interesting websites with useful information.

URL - Description

https://blogs.saphana.com - SAP HANA Blog
http://help.sap.com/hana - Help SAP HANA Platform (Core)
http://help.sap.com/hana_platform - SAP HANA Platform (Core)
http://hana.sap.com/abouthana.html - SAP HANA Information
http://scn.sap.com/community/business-suite/blog/2015/03/02/sap-s4hana-frequently-asked-questions--part-1 - HANA FAQ - links to parts 2 and 3 are given in the article, too


On https://open.sap.com, courses for SAP HANA are published, too. These courses will give you a deeper knowledge of SAP HANA. You can register for these courses for free.

 

Enjoy the given websites and find some useful information for your daily work. Please add additional comments if you would like to.

 

Martin

Union Node Pruning in Modeling with Calculation View


Data modeling in SAP HANA using calculation views is one of the supreme capabilities provided for end users to mold raw data into a well-structured result set by leveraging the multiple operational capabilities exposed by calculation views. On the way to achieving this we need to think about different aspects involved.

Let me take a few lines here to quote a real-world example to provide better clarity on my point.

We all know that there are two major parameters which we generally take into consideration when qualifying or defining the standard of any automobile, which are nothing but the 'horsepower (HP)' and the 'mileage' of the automobile. There is always a trade-off between the two, by which I mean that a higher-HP automobile yields reduced mileage and vice versa. Why does this happen? It is because we are making the underlying mechanical engine generate more HP and thus consume most of the source energy (fuel) for this purpose.

 

Let us now get back from the mechanical world to our HANA world and start thinking of the underlying execution of the HANA database as analogous to the mechanism quoted above.

When our calculation view starts computing complex calculations on big data, it is a matter of fact that we will have a trade-off between performance and the volume of data.

 

In this kind of big data scenario, where we expect more horsepower from the underlying engine, let us also consider in this document how the mileage (performance) can be made better.

 

One of the new features of HANA SPS11, called 'Union Node Pruning in calculation views', supports us in achieving this by reducing the cost of executing the calculation view: the union operation is pruned dynamically based on the query issued by the end user.

 

Let us understand this with an example: consider that we are creating a sales report for a product across the years using a calculation view. The view consists of two data sources, current sales data (YEAR >= 2015) and archived sales data (YEAR <= 2014), both of which are provided as input to the union node of the calculation view as shown below:

 

CV_PRUN.PNG

 

Now think of a scenario where we query the calculation view to get the result for current sales. Wouldn't it be great if the underlying execution engine queried only the current-sales data source and pruned the operation on the archived data source?

 

Yes, this can now be achieved for the union operation in a calculation view by providing a pruning definition in a predefined database table, which is called the pruning configuration table.

 

The definition of the pruning configuration table should have the following format:

 

 

union_prun.PNG

 

and example content for the pruning configuration table looks like this:

 

pruning_content.PNG

 

 

 

 

 

 

CALC_SCENARIO contains the name of the calculation view that involves union node pruning, and the INPUT column takes the names of the data sources involved in the union node of the calculation view.
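
As an illustration only, here is a sketch of what such a pruning configuration table and its entries could look like. The column names, data types, and operator values below are assumptions approximating the format shown in the screenshots above, and the mapping of PRUN1/PRUN2 to current and archived data is hypothetical:

CREATE COLUMN TABLE "PRUNING_CONFIG" (
  "CALC_SCENARIO" NVARCHAR(256), -- calculation view that uses union node pruning
  "INPUT"         NVARCHAR(256), -- data source (input) of the union node
  "COLUMN"        NVARCHAR(256), -- column the pruning condition refers to
  "OPTION"        VARCHAR(2),    -- comparison operator, e.g. EQ, GT, BT
  "LOW_VALUE"     NVARCHAR(256),
  "HIGH_VALUE"    NVARCHAR(256)
);

INSERT INTO "PRUNING_CONFIG" VALUES ('hanae2e.poorna.sp11.ws42/CV_PRUN', 'PRUN1', 'YEAR', 'GT', '2014', NULL);
INSERT INTO "PRUNING_CONFIG" VALUES ('hanae2e.poorna.sp11.ws42/CV_PRUN', 'PRUN2', 'YEAR', 'LE', '2014', NULL);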

 

Now in the advanced view properties of calculation view mention this pruning table as shown below:

 

view_properties.PNG

 

Now activate the above view, which involves the two data sources PRUN1 and PRUN2, together with the pruning configuration table.

 

Then execute a query on that view with a filter condition that matches the condition maintained in the pruning configuration table:

 

SELECT
  "ID",
  "YEAR",
  SUM("SALES") AS "SALES"
FROM "_SYS_BIC"."hanae2e.poorna.sp11.ws42/CV_PRUN"
WHERE "YEAR" > '2005'
GROUP BY "ID", "YEAR"

 

Visualize the plan for the above query: we see that the union node is pruned because the filter condition matches the one in the pruning configuration table, as shown below:

 

plan_viz.PNG

 

 

Now remove the pruning configuration table from the view properties of the calculation view, activate it, execute the above query again, and visualize the plan once more. We now see the union node coming into the picture, and the query reads both the archived data and the current data even though the requirement is just the current sales data.



union_non_prun.PNG

 

Thus, union node pruning in a calculation view helps decide dynamically, based on the query, how the execution flow is carried out.

 

Hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you

Increased schema flexibility in SAP HANA


Schema flexibility is one of the key capabilities in SAP HANA that help bring flexibility to the column store table definition. A brief insight with a good example can be found in Getting flexible with SAP HANA.

 

Let us now understand the new capabilities in schema flexibility with this document.

 

With HANA SPS11, customers can now make use of the increased capabilities of schema flexibility in SAP HANA. Let us understand them via some examples.

 

Create a column table to store the employee details Name, ID, and Age, leaving room to add other necessary employee information at run time as needed, using the syntax below:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility;


Adding the clause 'with schema flexibility' during table creation enables dynamic column creation during DML operations like insert/upsert, update, or delete.


Once the base structure for the employee_Details table is created, there comes a requirement to add some more details, like employee_salary and employee_department, as new columns to the created table definition. Now the dynamicity of the employee_Details table comes in handy, as we have enabled it with the 'with schema flexibility' option: instead of altering the structure of the table, we can directly insert whatever data we need, as shown below:


Insert into employee_Details (emp_id, emp_name, AGE, employee_salary, employee_department) values (1, 'RAM', 29, 1000, 'PI_HANA');


The insert statement will execute successfully irrespective of whether the columns referenced in the insert operation already exist; the two new columns are added to the metadata of the table implicitly as part of the insert statement.


The nature of a flexible table is to create dynamic columns with the default data type NVARCHAR of maximum length (5000). If we do not want this default behavior and instead want the data type of dynamic columns to be of our own choice, this can now be done with HANA SPS11 during the creation of the table. Let's say that in our case any dynamic column added to the employee_Details table must have the data type INTEGER; we can achieve this by writing the create statement as:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE INTEGER);


Now any newly created dynamic columns during the insertion/update will take integer as the data type.


If we have a case where the details added to the employee_Details table are heterogeneous and we want the dynamic columns to derive their data types from the inserted values, we can do that with the following create statement; this is called 'data type detection'.


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE *);


Here the dynamic columns derive their data types from the values inserted.

 

That is:

 

Insert into employee_Details (emp_id, emp_name, AGE, Emp_deprtment, emp_salary) values (1, 'RAM', 29, 'PI_HANA', 2000);

 

The last two columns take string and numeric data types respectively, which differs from the default case.

Data type detection behavior is valid for both single-valued and multi-valued entities.

 

Now consider a case where 'employee_feedback' is to be added dynamically to the employee_Details table and is initially entered as an integer value for the first year's rating. The data type of the employee_feedback column is then constructed as INTEGER, and if in the coming year the same column receives a floating-point value like 3.5, it can no longer be captured. To enable this use case there is an option available during table creation:


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility(DEFAULT DATA TYPE * AUTO DATA TYPE PROMOTION )

Yes, it is the option of data type promotion during creation that makes our use case work.


This helps us maintain the data type in the most generic form based on the data.

As an example, for the first year's rating our insert statement goes like this:


Insert into employee_Details (emp_id, emp_name, AGE, Emp_deprtment, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4);


Now the employee_rating column takes INTEGER as its data type.


And in the coming year when it hits a floating value :

 

Insert into employee_Details (emp_id, emp_name, AGE, Emp_deprtment, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.5);


The data type of employee_rating will automatically be promoted to a floating-point type, thus meeting the requirement without any errors.


Here is the allowed conversion rule for data type promotion :


conversion_rule.PNG




Here is another supported case, multi-value promotion: we now add employee_phone as a new detail to the table, with a varchar value that is a phone number, as below:

 

Insert into employee_Details (emp_id, emp_name, AGE, Emp_deprtment, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, '01233556589');


It takes the entered input as a single-valued varchar.


Now, when employees start using dual/triple SIM phones, there is a need to store multiple values. It should now be possible to store the new data set in the same column without altering it, as we have enabled the table with auto data type promotion.

 

That is :

 

Insert into employee_Details (emp_id, emp_name, AGE, Emp_deprtment, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, array('01233556589', '983232131', '324324'));


This converts the employee_phone column into a multi-valued character attribute.


Flexible table usage contributes majorly to better memory management; to support this there is an operation called 'garbage collection'.

 

In our case we decide to normalize the ‘employee_feedback’ details by having a separate table for it and thus flush all the values existing in the ‘employee_feedback’ column of employee_details table. 

 

Garbage collection then comes into the picture implicitly if our employee_Details table is enabled for it in the manner below:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (RECLAIM);

 

Enabling the RECLAIM option turns on garbage collection, and dynamic columns (in our case 'employee_feedback') are dropped automatically if no values are left in any row of the column.
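
A minimal sketch of this behavior (assuming the table was created with the RECLAIM option as above):

-- Flush all values of the dynamic column
UPDATE employee_Details SET employee_feedback = NULL;
-- With RECLAIM enabled, the now completely empty dynamic column
-- employee_feedback is dropped automatically by garbage collection.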


What if the need for all the features discussed above arises after the table is created rather than during its creation? Should we drop the table and create it again? The answer is no.

Or what if, at some point in time, we wish to disable the above characteristics individually in the created table?


It is possible to do that, as all the operations discussed above are also supported via table alter operations, as shown below:


1) ALTER TABLE <table name> DISABLE SCHEMA FLEXIBILITY

2) ALTER TABLE <table name> ENABLE SCHEMA FLEXIBILITY [(<options>)]

3) ALTER TABLE <table name> ALTER SCHEMA FLEXIBILITY (<options>)

4) ALTER TABLE <table name> ALTER <column name> [<data type>] DISABLE SCHEMA FLEXIBILITY

5) ALTER TABLE <table name> ALTER <column name> [<data type>] ENABLE SCHEMA FLEXIBILITY

 

A one-line explanation of each of the above operations is given below:


1) With this, all dynamic columns are converted to static columns. If the conversion of any dynamic column fails, the operation fails as a whole and no changes are applied. Normal tables are only allowed to have a certain number of columns (currently 1,000 columns). In order to successfully convert a flexible table into a normal table, the number of columns in the flexible table must not exceed this limit.


2)Turns flexibility of a database table on.


3) In this case, the option list is mandatory. All schema flexibility options that are listed in the CREATE TABLE ... WITH SCHEMA FLEXIBILITY section can be used here, and one or several options of a flexible table can be changed at once.


4) Converts the specified dynamic column into a static column.


5) Converts the specified static column into a dynamic column.
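
A minimal sketch of such alter operations on the example table (the options follow the CREATE TABLE ... WITH SCHEMA FLEXIBILITY syntax shown earlier; exact behavior may vary by revision):

ALTER TABLE employee_Details ALTER SCHEMA FLEXIBILITY (DEFAULT DATA TYPE INTEGER);
ALTER TABLE employee_Details ALTER employee_phone DISABLE SCHEMA FLEXIBILITY;
ALTER TABLE employee_Details DISABLE SCHEMA FLEXIBILITY;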

 

 

 

Hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you

Troubleshooting Hanging Situations in HANA


Purpose

The purpose of this document is to instruct SAP customers on how to analyse hanging situations in their HANA system.

 

Overview

So what constitutes a hanging situation in HANA? When we talk about a hanging situation we generally refer to a system-wide hang, as opposed to one specific activity, i.e. queries or operations. This means we are going to look at the system conditions (performance) which lead the database to run slowly, or in some cases not at all;


ex:

  • Database has stopped responding but has not crashed.
  • Database hangs during startup and does not start up.
  • Application is hanging but new connections to the database are still possible (possibly not a HANA DB issue; it must be analysed from an application perspective before being looked at from a HANA DB perspective)

 

 

Like all other software systems, SAP HANA relies on hardware to run its processes. Even when looking at the most basic single-server, single-host setup, we can see many areas which overlap and interact with one another; in other words, when you see a hang do not just assume the cause is related to HANA.

 

Wiki.png

 

So a small example of a hang/freeze situation I have witnessed is when a user goes to open the "Administrative Tab" in HANA Studio and the system hangs for a long period of time. Below is a small example of troubleshooting this issue.

 

Troubleshooting

So here are some issues that you may face with hanging situations, and the traces needed to troubleshoot them.


1: Hanging situations with high CPU utilization.

2: Hanging situations with low CPU utilization because all threads are waiting.

  • Traces needed for further analysis: Runtime Dumps.

3: Hanging situations where a logon via SSH is not possible (either a wrong OS configuration or an OS/hardware issue).

 


Wrong OS configuration


The system must not swap. What you have to remember is that HANA limits its memory usage via the global allocation limit (GAL). Other non-HANA processes or other instances can interfere with the HANA system, so it is important that HANA is really assigned the memory up to the GAL and that nothing else is using it.

 

A large file cache can lead to problems. When checking the cache size (top command), please remember that HANA shared memory is also booked as "shared". The remaining cache size is (in general) not available for HANA. If this is high, then we have to find out why. The most probable reason for this lies outside HANA (see Linux paging improvements).

 

Transparent Huge Pages (THP) must be deactivated. THP will make HANA run quicker for a while, but when it comes to splitting THPs, the system gets so slow that working with it is not possible.

 


The first thing you have to think of when you face this situation is to execute Runtime Dumps immediately.

 

Runtime dumps can be useful for the following situations:

 

  • Standstill situations
  • High CPU
  • Hanging Threads
  • Query cancellation not working

 

By checking an RTE dump you can look for certain key indicators such as large query plans. Large query plans can indicate problematic SQL queries. The thread of such an SQL query can then be checked via its parent and also its child threads. These threads then link you back to short stacks, which can then be checked to see what exactly each stack is doing. See Troubleshooting High CPU for further info.

 

As HANA Studio will more than likely be hanging during the hang / slow performance, you can use SSH to run the runtime dumps at 2-minute intervals by means of this shell script (which calls hdbcons):

 

 

DATE=`date +%Y%m%d.%H%M%S` ; for PID in `ps x|grep -E "hdb(index)server"|grep -v grep|awk '{print $1}'` ; do CMDLINE=`ps x|grep -E "^ *${PID}"|grep -v grep |awk '{for(i=5;i<=NF;i++) printf $(i)}'` ; echo $PID - $CMDLINE ; hdbcons -p ${PID} "runtimedump dump -c" >${CMDLINE}-${PID}-${DATE}.dump ; done ; sleep 120 ; DATE=`date +%Y%m%d.%H%M%S` ; for PID in `ps x|grep -E "hdb(index)server"|grep -v grep|awk '{print $1}'` ; do CMDLINE=`ps x|grep -E "^ *${PID}"|grep -v grep |awk '{for(i=5;i<=NF;i++) printf $(i)}'` ; echo $PID - $CMDLINE ; hdbcons -p ${PID} "runtimedump dump -c" >${CMDLINE}-${PID}-${DATE}.dump ; done ; sleep 120 ; DATE=`date +%Y%m%d.%H%M%S` ; for PID in `ps x|grep -E "hdb(index)server"|grep -v grep|awk '{print $1}'` ; do CMDLINE=`ps x|grep -E "^ *${PID}"|grep -v grep |awk '{for(i=5;i<=NF;i++) printf $(i)}'` ; echo $PID - $CMDLINE ; hdbcons -p ${PID} "runtimedump dump -c" >${CMDLINE}-${PID}-${DATE}.dump ; done

 

 

After running the script you can then open the generated RTE dumps. The dumps will show you what exact queries were running at the time of the hang / freeze.

 

These queries can then either be searched for on the SAP support search, or you can check whether they are your own custom queries which need to be looked at in terms of optimization. (Also, if you have to open an incident with SAP, this is the information the engineer will be looking for.)

In relation to the HANA Studio hang, the solution can be found by searching for the generated SQL query, which will return the note "High CPU when opening admin console".

 

 

The vast majority of hanging situations are related to bottleneck issues with CPU, Storage, Network etc.

 

Usually the DBA will know the time and date of the hang that is causing the issues, but if this is not known you can always use the performance load graph. As of SPS 9 you can also use the HANA cockpit load graph. (This function was already available in previous revisions, but it did not work very well and crashed a lot.) The cockpit version performs better and does not crash like its predecessor in Studio did when the nameserver history file was large.

 

Going to the SAP HANA Cockpit, you can then see the SAP HANA Database Administration section with nice looking Fiori designed tiles:

 

Wiki 1.PNG

Here you can check at what time and date the system experienced the issues.


 

Please also be aware of the HANA offline cockpit functionality that became available recently. By logging in with the SIDADM user you can also use the "Troubleshoot Unresponsive System" option:

 

Wiki4.PNG

 

If the load graph cannot be accessed via either Studio or the cockpit, you can also use the top command at OS level, which will show you the running processes:

 

wiki 3.PNG

 

So now you have the timestamp of the issue. Next, go to the Diagnosis Files tab in Studio, or its corresponding tile in the cockpit.

 

Here is where you locate the time stamp in the relevant files so you can see what was happening before, during and after the hang.

 

The first files to look into are the indexserver and nameserver traces. Check the corresponding timestamps (before and during the hang) in these files to see if any obvious errors are apparent. Some examples of errors you may see before the system hang are:

 

 

  • SQL error 131: transaction rolled back by lock wait timeout
  • SQL error 133: transaction rolled back by detected deadlock

 

If you see these, please see the note on lock analysis. Many useful SQL scripts also exist in Note 1969700.

 

The so-called MVCC Anti Ager periodically checks for problematic statements. It reports idle cursors or long-running write transactions after 60 minutes.

 

It closes idle cursors after 12 hours.

 

  • mvcc_anti_ager.cc(01291) : There are too many un-collected versions.

        ('number of versions > 1000000' or 'maximum number of versions per record > 100000')

 

  • The cursor possibly blocks the garbage collection of the HANA database.

         mvcc_anti_ager.cc(01291) : There are too many un-collected versions on table "<schema>"."<table>"

         ('number of versions for one of the partitions > 1000000' or 'maximum number of versions per record > 100000')

 

  • The transaction blocks the garbage collection of HANA database.

        mvcc_anti_ager.cc(01199) : long running uncommitted write transaction detected.

        mvcc_anti_ager.cc(01082) : The Connection is disconnected forcefully because it is blocking garbage collection for too long period.

        Statement.cc(03190) : session control command is performed by ..., user=SYSTEM, query=ALTER SYSTEM DISCONNECT SESSION '<conn_id>'

        mvcc_anti_ager.cc(00834) : long running cursor detected.

 

  • The open cursor possibly blocks the garbage collection of HANA database.

         Please close a cursor in application or kill the connection by "ALTER SYSTEM DISCONNECT SESSION '<conn_id>' "

 

The above refers to a long running transaction that has yet to be committed and could be causing your system to hang. If you were to see any of these errors please see FAQ on Garbage Collection.


Blocked transactions can lead to a hanging situation from an application perspective. Blocked transactions are usually not a database issue and need to be analyzed from an application point of view.

 

To find the blocked transactions, see the System Information tab and run the query "Blocked Transactions":

 

Wiki1.PNG
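
Alternatively, a minimal SQL sketch querying the corresponding monitoring view directly:

SELECT * FROM "SYS"."M_BLOCKED_TRANSACTIONS";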

 

Long-running transactions can block the garbage collection from executing. A very high number of MVCC versions (> 5 million) can lead to a slow system or even a hanging-like situation. If you would like to query the number of MVCC versions, you can find it in the monitoring view M_MVCC_OVERVIEW:

 

WIki 2.PNG

 

If you would then like to drill down further into the MVCC versions, you can also see how many MVCC versions exist per table by querying M_RS_TABLE_VERSION_STATISTICS:

 

wiki 3.PNG
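
A minimal sketch of both checks via SQL (all columns are selected because the exact column lists vary by revision):

-- Overall MVCC version statistics
SELECT * FROM "SYS"."M_MVCC_OVERVIEW";

-- Versions per row store table
SELECT * FROM "SYS"."M_RS_TABLE_VERSION_STATISTICS";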

 

So by using the above information you should be able to find the blocking transaction. You should then try to disconnect this connection:

ALTER SYSTEM DISCONNECT SESSION '12345';

If the cancellation was successful, then the reason is normally related to an application or user issue and should be investigated further from that perspective.

 

If the connection cannot be cancelled, then this is one of two things:

 

1: A long running query that cannot be cancelled. So this is the time to run the runtime dumps as mentioned earlier.

 

2: Or it could be caused by an issue that requires further attention from SAP Development support via an incident.

 

 

Hanging situations in relation to SAVEPOINTS:

 

Savepoints speed up the startup time of a database, because not all the redo logs have to be replayed, only the log written since the last savepoint. The savepoint coordinator periodically performs savepoints; the default interval is 5 minutes. Savepoints are also triggered by several other operations, such as data backups, a database shutdown, or the completion of a restart.

 

If a system crashes during the savepoint operation, the system can still be restored from the last savepoint thanks to the shadow page concept. The shadow page concept is more about how pages are allocated and reused in the data file and doesn't affect recoverability that much, but it frees you (to a very large extent) from the need for data file reorganisation. Changed pages are not overwritten directly; instead they are marked as available and the changed content is placed at some other available location in the data file. Since SAP HANA keeps track of which pages contain the current data, there is no need to overwrite or clear unused pages, so after some time the whole data file will contain some data.

 

Data backup operations write a global savepoint, which is a consistent set of savepoints from all servers in the SAP HANA system. It is possible to restore a SAP HANA system from such a data backup, without replaying the redo log.

 

The Savepoint is split into three individual stages:

 

Phase 1 (PAGEFLUSH): All modified pages that are not yet written to disk are determined. The savepoint coordinator triggers writing of all these pages and waits until the I/O operations are complete.

 

Phase 2 (CRITICAL): This is the critical part of a savepoint operation, where no concurrent write operations are allowed. This is achieved using the consistent change lock. To minimize the impact on concurrent operations, phase 2 must be kept as short as possible. The savepoint coordinator determines and stores the savepoint log position and the list of open transactions. Pages that were changed during phase 1 are also written to disk asynchronously.

 

Phase 3 (POSTCRITICAL): Changes are allowed in this phase again. The savepoint coordinator waits until all asynchronous I/O operations related to the savepoint are finished and marks the savepoint as completed.

 

During the critical phase the savepoint holds an exclusive Consistent Change Lock. Other write operations into the data volume are blocked during that time.

 

You can identify these by checking the following:

the M_SAVEPOINTS monitoring view, either in a runtime dump or via an hdbcons command (STATE ENTERCRITICAL or CRITICAL)

 

Capture.PNG
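
A minimal sketch for checking savepoint statistics via SQL (all columns are selected because the exact column list varies by revision):

SELECT * FROM "SYS"."M_SAVEPOINTS";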

 

Possible reasons:

  • Bad I/O throughput (check SAP Note 1999930)
  • Blocked consistent change lock by a waiting writer (check SAP Note 2214279)

 

 

 

If you have checked the HANA logs and you see nothing obvious or problematic, then check /var/log/messages.

 

If you see some irregularities in these files then open a ticket with your hardware provider.

 

The main point to take from this document is to ALWAYS try to capture the hang with runtime dumps. This will give you, the DBA, or SAP a very good chance of identifying the root cause of the hang.

 

 

Related KBAs:

2280288 - TREXviaDBSL and TREXviaDBSLWithParameter Statements can remain Open with Status 'Suspended'

2256719 - Potential Standstill Situations Due to Wrong Calculation of Parked Workers

1999020 - How-To: SAP HANA Troubleshooting when Database is no longer accessible

HANA stopped unexpectedly due to the accidental deletion of shared memory lock


Symptom

One of my friends faced a strange problem regarding the unexpected stop of the HANA server after cleaning some files under the /tmp directory.

After manually starting the system, HANA runs normally again (version: SPS 09 Rev. 95).

 

In nameserver_hosta....trc, the following error is shown:

[79877]{-1}[-1/-1] 2016-02-24 01:15:07.912168 f NameServer

TREXNameServer.cpp(03342) : shared memory lock

'/tmp/.hdb_ABC_30_lock'was deleted -> stopping instance ...

[79877]{-1}[-1/-1] 2016-02-24 01:15:10.655484 i Service_Shutdown

transmgmt.cc(06027) : Preparing for TransactionManager shutdown

 

Analysis

The file /tmp/.hdb_<SID>_<instance number>_lock is used by HANA as a shared memory lock. If the file is deleted by accident, the database cannot manage access to the shared memory segment anymore and therefore has to stop.

For more detailed information, please refer to 1984700 - HANA stopped unexpectedly

If you are using Red Hat Enterprise Linux, please take care of tmpwatch, which deletes files older than a certain age.

 

For HANA revisions <= SPS 09, the shared memory lock file is /tmp/.hdb_<sid>_<inst_id>_lock.

For HANA revisions >= SPS 10, the shared memory lock file is /var/lib/hdb/<sid>/.hdb_<sid>_<inst_id>_lock.

(1999998 - FAQ: SAP HANA Lock Analysis)

 

Solution

DO NOT delete shared memory lock file.

If you are running Red Hat, please remove tmpwatch from the system's cron jobs.

 

Hope this blog can help you fix the same kind of problem if you face it. Thanks to Chiwo Lee for sharing the experience.

 

Regards,

Ning


Myth of HANA


Hi experts,

 

Since SAP HANA became generally available in 2011, I have come across a lot of untruths about the new in-memory platform. As a consultant I was able to talk to many customers and other consultants at events like TechEd, DSAG, business partner days, etc. Even after this long time I was surprised that so much dangerous half-knowledge is still out there.

The answers to most of these statements are pretty easy to find in the official notes, guides, and other documents (blogs, presentations, articles, etc.), but maybe it is an overload of information.

 

1) start time

2) cross SID backup

3) col / row store conversion

4) sizing *2

5) statistics

6) data fragmentation
7) persistency layer

8) high memory consumption HANA vs. Linux

9) Backup

10) Backup catalog

 

S stands for statement and A for answer.

Used SQL scripts are available in the attachment of note 1969700 - SQL statement collection for SAP HANA

 

 

1) Start time

S: "The start time (availability of the SAP system) must be 30 to 60min to load all data into memory"

A: Yes, it takes some time to load all data into memory, but for any DB it also takes time to fill its data buffer. For any DB the data buffer is filled on first access of the data and the data stays there until the LRU (least recently used) algorithm takes place and pushes it out of the buffer.

HANA loads the complete row store into memory on every start. After this the system is available!

Short description of start procedure:

1) open data files

2) read out information about the last savepoint (mapping of logical pages to physical pages in the data file / open transaction list)

3) load row store (depends on the size and the I/O subsystem; about 5min for 100GB)

4) replay redo logs

5) roll back uncommitted transactions

6) perform savepoint

7) load column tables defined as preload and lazy load of column tables (async load of column tables that were loaded before the restart)

 

For more details have a look at the SAP HANA Administration guide (search for "Restart Sequence") or the SAP HANA Administration book => Thanks to Lars and Richard for this great summary!

 

Example:

Test DB: 40 GB NW 7.40 system with non-enterprise storage (= slow):

SQL HANA_IO_KeyFigures_Total:

read: 33mb/s

avg-read-size: 31kb

avg-read-time: 0,93ms

write: 83mb/s

avg-write-size: 243kb

avg-write-time: 2,85ms

row store size: 11GB

cpu: 8vcpu (vmware; CPU E5-2680 v2 @ 2.80GHz)

 

start time without preload: AVG 1:48

stop time without preload: AVG 2:15

 

Start time with a 5 GB column table (REPOSRC) preloaded:

SQL for preload (more information in the guide "SAP HANA SQL and System views Reference"):

alter table REPOSRC preload all

 

verify with HANA_Tables_ColumnStore_PreloadActive script from note 1969700
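
You can also cross-check the load state of the table directly in the monitoring views (a minimal sketch, assuming the example table from above; M_CS_TABLES shows whether a column table is currently loaded into memory):

select TABLE_NAME, LOADED from M_CS_TABLES where TABLE_NAME = 'REPOSRC';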

 

start time with preload: AVG 1:49

stop time with preload: AVG 2:18

 

Why doesn't the start time increase although 5 GB more data has to be loaded?

Since SPS 7, the preloading, together with the reloading, of tables happens asynchronously directly after the HDB restart has finished. That way, the system is available again for SQL access that does not require the columns that are still being loaded.

 

With enterprise hardware the start times are faster!

 

If you want to know how long it takes to load all data into memory you can execute a python script.

load all tables into memory with python script:

cdpy (/usr/sap/HDB/SYS/exe/hdb/python_support/)

python ./loadAllTables.py --user=System --password=<password> --address=<hostname> --port=3xx15 --namespace=<schema_name>

 

[140737353893632, 854.406] << ending loadAllTables, rc = 0 (RC_TEST_OK) (91 of 91 subtests passed), after 854.399 secs

 

In a similar enterprise system it takes about 140-200 sec.

 

2) Cross SID backup

S: "It is not possible to refresh a system via Cross-SID copy"

A: Cross-SID copy (single container) from disk has been available for a long time. Since SPS 09 it is also available via the Backint interface.

Multitenant database containers (MDC) can currently (SPS 11) only be restored from disk for a Cross-SID copy.

 

3) Col / row store conversion

S: "Column tables can't be converted to row store and vice versa. It is defined by sap which tables are stored in which type."

A: It is correct that during the migration the SWPM procedure (used for system copy) creates files that define in which store the tables are created. But you can change the type from row to column and vice versa on the fly.

In the past SAP delivered a rowstorelist.txt with Note 1659383. This approach is outdated. Nowadays you can use the latest version of SMIGR_CREATE_DDL with the option "RowStore List" (Note 1815547).
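
For example, the store type of a single table can be changed on the fly with an ALTER statement (a sketch using a hypothetical table name; see the SAP HANA SQL and System Views Reference for the exact syntax and options):

ALTER TABLE "MYSCHEMA"."MYTABLE" ALTER TYPE COLUMN;
ALTER TABLE "MYSCHEMA"."MYTABLE" ALTER TYPE ROW;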

 

4) Sizing * 2

S: "You have to double the result of the sizing report."

A: Results of the sizing reports are final; you don't have to double them.

 

Example (BW on HANA):

|SIZING DETAILS                                                                |

|==============                                                                |

|                                                                              |

| (For 512 GB node)      data [GB]     total [GB]                              |

|                                      incl. dyn.                              |

| MASTER:                                                                      |

| -------                                                                      |

|                                                                              |

|  Row Store                    53            106                              |

|  Master Column Store          11             21                              |

|  Caches / Services            50             50                              |

|  TOTAL (MASTER)              114            178                              |

|                                                                              |

| SLAVES:                                                                      |

| -------                                                                      |

|                                                                              |

|  Slave  Column Store          67            135                              |

|  Caches / Services             0              0                              |

|  TOTAL (SLAVES)               67            135                              |

| ---------------------------------------------------------------              |

|  TOTAL (All Servers)         181            312                              |

 

This is a scale-up solution, so master and slave are functionally on one host. In a scale-out solution you have one host as master for the transactional load; this one holds all row store tables. SAP recommends a minimum of 3 hosts in a BW scale-out solution. The other 2 slaves are for the reporting load.

SAP HANA main memory sizing is divided into a static and a dynamic RAM requirement. The static part relates to the amount of main memory that is used for holding the table data. The dynamic part has exactly the same size as the static one and is used for temporary data (grouping, sorting, query temp objects, etc.).

 

In this example you have:

row store: 53 * 2 = 106 GB

column store: master 11 * 2 = 21 GB (rounded) + slaves 67 * 2 = 135 GB (rounded) => 156 GB

Caches / Services: 50 GB is needed for every host

106 + 156 + 50 = 312 GB in total

 

5) Statistics

S: "Statistics are not needed any more. So no collect runs are needed"

A: For the column store the statement is correct because the data distribution is known through the dictionary. For the row store, statistics are collected automatically, so you don't have to schedule them. Currently it is not documented how you can trigger the collection or change the sample size.

 

6) Data Fragmentation

S: "You don't have to take care of data fragmentation. Everything is stored in memory via the column store and there is no fragmentation of data"

A: Some tables are created in the row store. The row store still follows the old rules and conditions, which results in fragmentation of data. How can you analyze it?

Please see note 1813245 - SAP HANA DB: Row store reorganization

 

SELECT HOST, PORT,
  CASE WHEN ( ( SUM(FREE_SIZE) / SUM(ALLOCATED_SIZE) ) > 0.30 AND SUM(ALLOCATED_SIZE) > TO_DECIMAL(10) * 1024 * 1024 * 1024 )
    THEN 'TRUE' ELSE 'FALSE' END "Row store Reorganization Recommended",
  TO_DECIMAL( SUM(FREE_SIZE) * 100 / SUM(ALLOCATED_SIZE), 10, 2 ) "Free Space Ratio in %",
  TO_DECIMAL( SUM(ALLOCATED_SIZE) / 1048576, 10, 2 ) "Allocated Size in MB",
  TO_DECIMAL( SUM(FREE_SIZE) / 1048576, 10, 2 ) "Free Size in MB"
FROM M_RS_MEMORY
WHERE ( CATEGORY = 'TABLE' OR CATEGORY = 'CATALOG' )
GROUP BY HOST, PORT

Reorg advice: a reorganization is recommended if the row store is bigger than 10 GB and has more than 30% free space.

 

!!!Please check all prerequisites in the notes before you start the reorg!!! (online / offline reorg)

Row Store offline Reorganization is triggered at restart time and thus service downtime is required. Since it's guaranteed that there are no update transactions during the restart time, it achieves the maximum compaction ratio.

 

Before

Row Store Size: 11GB

Freespace: ~3GB

in %: 27% (no reorg needed)

 

But for testing I configured the needed parameters in indexserver.ini (don't forget to remove them afterwards!):

4 min startup time => during the start the row store is reorganized in offline mode

 

After

Row Store Size: 7,5GB

Freespace: ~250MB

in %: 3,5%

 

Additionally, you should consider tables with multiple containers if the revision is 90+. Multiple containers are typically introduced when additional columns are added to an existing table. As a consequence of multiple containers the performance can suffer, e.g. because indexes only take effect for a subset of the containers.

HANA_Tables_RowStore_TablesWithMultipleContainers

 

The compression methods of the col store (incl. indexes) should also be considered.

As of SPS 09 you can switch the largest unique indexes to INVERTED HASH indexes. On average you can save more than 30 % of space. See SAP Note 2109355 for more information. Compression optimization for those tables:

UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE')
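
To check which compression type the columns of a table currently use (a minimal sketch; the table name is a placeholder):

select COLUMN_NAME, COMPRESSION_TYPE from M_CS_COLUMNS where TABLE_NAME = '<table_name>';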

 

Details: 2112604 - FAQ: SAP HANA Compression

 

7) Persistency layer

S: "The persistency layer consists of exactly the same data which are loaded into memory"

A: As descibed in statement 3) the memory is parted into 2 areas. The temp data won't be stored on disk. The persistency layer on disk consists of the payload of data, before&after images / shadow pages concept + snapshot data + delta log (for delta merge). The real delta structure of the merge scenario only exists in memory, but it is written to the delta logs.

Check out this delta by yourself:

HANA_Memory_Overview

check memory usage vs. disk size

 

8) High Memory consumption HANA vs. Linux

S: "The used memory of the processes is the memory which is currently in use by HANA"

A: No, for the Linux OS it is not transparent what HANA currently really uses. The numbers in "top" never match the ones in HANA studio. HANA does not communicate freed pages to the OS instantly; there is a time offset for freed memory.

There is a pretty nice document which explaines this behaviour in detail:

http://scn.sap.com/docs/DOC-60337

 

By default the garbage collection takes place pretty late. If your system shows high memory consumption, the root cause is not necessarily bad sizing or high load; the reason could also be a late GC.

2169283 - FAQ: SAP HANA Garbage Collection

One kind of garbage collection we already discussed in 6) row and col fragmentation. Another one is for Hybrid LOBs and there is one for the whole memory. Check out your current heap memory usage with HANA_Memory_Overview.

In my little test system the value is 80GB. In this example we have 14GB for Pool/Statistics , 13GB for Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page and 9GB for Pool/RowEngine/TableRuntimeData

Also check the value of the column EXCLUSIVE_ALLOCATED_SIZE in the monitoring view "M_HEAP_MEMORY". It contains the sum of all allocations in this heap allocator since the last startup.

 

select CATEGORY, EXCLUSIVE_ALLOCATED_SIZE, EXCLUSIVE_DEALLOCATED_SIZE, EXCLUSIVE_ALLOCATED_COUNT, EXCLUSIVE_DEALLOCATED_COUNT
from M_HEAP_MEMORY
where category = 'Pool/Statistics'
   or category = 'Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page'
   or category = 'Pool/RowEngine/TableRuntimeData';

Just look at the indexserver port 3xx03 (the xsengine may also be listed if it is active).

 

CATEGORY | EXCLUSIVE_ALLOCATED_SIZE | EXCLUSIVE_DEALLOCATED_SIZE | EXCLUSIVE_ALLOCATED_COUNT | EXCLUSIVE_DEALLOCATED_COUNT
Pool/PersistenceManager/PersistentSpace(0)/DefaultLPA/Page | 384.055.164.928 | 369.623.433.216 | 6.177.019 | 5.856.165
Pool/RowEngine/TableRuntimeData | 10.488.371.360 | 792.726.992 | 83.346.945 | 26
Pool/Statistics | 2.251.935.681.472 | 2.237.204.512.696 | 7.146.662.527 | 7.084.878.887

 

Because of the many deallocations there is a gap between EXCLUSIVE_ALLOCATED_SIZE and the currently allocated size. The difference is usually free for reuse and can be freed with a GC run.

 

By default the memory GC will be triggered in the following cases:

Parameter (default value) | Details
async_free_target = 95 (%) | When proactive memory garbage collection is triggered, SAP HANA tries to reduce allocated memory below async_free_target percent of the global allocation limit.
async_free_threshold = 100 (%) | With the default of 100 % the garbage collection is quite "lazy" and only kicks in when there is a memory shortage. This is in general no problem and provides performance advantages, as the number of memory allocations and deallocations is minimized.
gc_unused_memory_threshold_abs = 0 (MB) | Memory garbage collection is triggered when the amount of allocated, but unused memory exceeds the configured value (in MB).
gc_unused_memory_threshold_rel = -1 (%) | Memory garbage collection is triggered when the amount of allocated memory exceeds the used memory by the configured percentage.

 

The % values are related to the configured global allocation limit.
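
If a different behaviour is required, the thresholds can be adjusted online (a sketch only, assuming the parameters are located in the memorymanager section of global.ini as described in SAP Note 2169283; the value is just an example and such changes should be tested carefully):

alter system alter configuration ('global.ini','SYSTEM') set ('memorymanager','gc_unused_memory_threshold_abs') = '51200' with reconfigure;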

 

Unnecessarily triggered GC should absolutely be avoided, but how you configure these values depends on your system load and sizing.

If we now trigger a manual GC for the memory area:

hdbcons 'mm gc -f'

 

Before:

heap: 80GB

 

free -m

             total       used       free     shared    buffers     cached

Mem:        129073     126877       2195      15434        142      32393

-/+ buffers/cache:     94341      34731

 

 

Garbage collection. Starting with 96247664640 allocated bytes.

82188451840 bytes allocated after garbage collection.

 

After:

heap: 72GB

 

free -m

             total       used       free     shared    buffers     cached

Mem:        129073     113680      15393      15434        142      32393

-/+ buffers/cache:     81144      47929


 

 

So at this point, inside HDB there is not much difference in this scenario, but on the OS side the no-longer-allocated memory is freed.

You don't have to do this manually! HANA is fully aware of the memory management.

If you get an alert (ID 1 / 43) because of the memory usage of your services, you should not only analyze the row and column store. Also take care of the GC of the heap memory; in the past there were some bugs in this area.

Alert defaults:

ID 1: Host physical memory usage:      low: 95% medium: 98% high:100%

ID43: memory usage of services:         low: 80% medium: 90% high:95%

As you can see, by default the GC is triggered lazily at a 100% fill ratio of the global allocation limit; this may be too late for your system before the GC takes place, or too late for you to react to it.

 

In addition to the memory usage, check the mini check script and the advice in the notes. If you are not sure how to analyze or solve the issue you can order a TPO service from SAP (2177604 - FAQ: SAP HANA Technical Performance Optimization Service).

 

 

 

9) Backup

S: "Restore requires logs for consistent restore"

A: Wrong, a HANA backup is based on snapshot technology, so the backup is consistent without any additional log files. This means it is a full online copy of one particular consistent state, which is defined by the log position at the time the backup is executed.

Of course, if you want to roll forward you have to apply log backups for a point-in-time recovery or a recovery to the most recent state.

 

 

10) Backup Catalog

S: "Catalog information are stored in a file like oracle *.anf which is needed for recovery"

A: The backup catalog is saved with every data AND log backup. It is not saved as a human-readable file! You can check the catalog in HANA studio or with the command "strings log_backup_0_0_0_0.<backupid>" in the backup location of your system if you do backup-to-disk.

The backup catalog includes all the information needed to determine which file belongs to which backup set. If you delete your backups at disk/VTL/tape level, the backup catalog still holds the now invalid information. There is currently no automatism that cleans it up. Just check the size of your backup catalog: if it is bigger than about 20 MB you should take care of housekeeping (depending on your backup retention and the size of the system), because the catalog is saved, as already mentioned, with EVERY log AND data backup. This can mean more than 200 times a day!
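
Housekeeping of the catalog itself can be done with the BACKUP CATALOG DELETE statement (a sketch with a placeholder backup ID taken from the backup catalog; the COMPLETE option also deletes the corresponding backup files for file-based backups):

BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID <backup_id>;
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID <backup_id> COMPLETE;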

 

 

Summary

In the end you also have to take care of your data housekeeping and resource management. You can save a lot of resources if you consider all the hints in the notes.


I hope I could clarify some statements for you.


Best Regards,

Jens Gleichmann

I've done HANA training courses, How to practice?

$
0
0

The HANATEC certification has in its curriculum the "HA100 - SAP HANA Introduction" and the "HA200 - SAP HANA Installation & Operations" training courses.

 

These courses are, in my opinion, well structured and have enough exercises to gain a solid understanding of the presented subjects. Perhaps the view concepts and creation in HA100 are stressed more than a technical or BASIS consultant would expect.

 

After a couple of weeks I was faced with the need to go through all the course material again, and I was searching for a HANA system to support my study and to do some or all of the exercises.

 

The answer is in the HA100 course SPS7, though not anymore in SPS10. Creating a free HANA Cloud subscription for the "SAP HANA Developer Edition" is enough for evaluation and exploration covering at least the HA100 material.

 

 

Get access to a HANA system

 

To get a free account we can go to http://developers.sap.com/hana and sign up there (only step 2 in the next picture) to get started with a free SAP HANA developer edition in the cloud.

 

hana_DEV_Center.png

 

We should be aware that these web pages are continually changing and evolving, so the pages may look different.

 

After filling all the information to sign up we get the confirmation via e-mail:

 

welcome_hana_cloud.png

 

From the confirmation e-mail we get the URL to access our newly created HANA cloud account, where s000####### will be the S-user:


https://account.hanatrial.ondemand.com/cockpit#/acc/s000######trial/services

 

 

Get some tutorial

 

The data model available in the evaluation system is not the same as the one used in the HA100 training course. The following document posted by Stoyan Manchev is a very good alternative, even if it doesn't go as deep into the exercises about creating views.

 

8 Easy Steps to Develop an XS application on the SAP HANA Cloud Platform

 

 

When following steps 1 to 4 (step 1 looks a bit different in the current version; see below in "Changes on 8 steps tutorial"), we begin by preparing the environment and then connecting HANA Studio to the HANA cloud and creating a view. There is no need to go through step 5 and the following steps since they are not related to our certification.

 

To run step 2 we need HANA Studio. We can download and install it from https://tools.hana.ondemand.com/#hanatools; I've done it with the Luna edition.

 

Take your time. It will take a while to get everything fitted together in order to create the views.

 

 

Changes on 8 steps tutorial

 

The picture on the above tutorial should be changed to this one (Select New to create a new schema):

cloud_schema.jpg

When creating the new schema select the following:

 

schema_ID.png

 

Updating HANA Studio to connect to a cloud system

 

To connect to a cloud system using HANA Studio we need to install additional tools:

 

https://help.hana.ondemand.com/help/frameset.htm?b0e351ada628458cb8906f55bcac4755.html

pic1.png

 

 

pic2.png

 

And as a result we get a new option to add a cloud system:

 

pic3.png

 

 

Test your knowledge

 

After going through these steps we'll have mastered HA100, and to test our knowledge before going to SAP to take the exam we can do a small assessment, which we get by choosing "Discovery Preview: SAP HANA Introduction" on https://performancemanager.successfactors.eu

 

This is a free 12-hour e-learning course based on HA100 with a web assessment included.


Limitations

 

Unfortunately this HANA Developer Edition, to which we can have free access in the cloud, is of little use for covering most if not all of the HA200 subjects because it has limitations in the administration parts. We are not able to define users or roles, or even display any administration view.

Troubleshooting SAP HANA Authorisation issues

$
0
0

This document will deal with issues regarding analytical privileges with SAP HANA.


 

So what are Privileges some might ask?

 

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of  some the different authorisation issues you can see in HANA Studio, and each one is pointing towards a missing analytical privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So, for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So you are a DBA, and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio. What are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

So this is the perfect opportunity to run an authorisation trace by means of the SQL console in HANA Studio.

If you follow the instructions below, they will walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

_____________________________________________________________________________________________________________________________

 

Only use this when instructed by SAP. It's recommended to use "INFO" rather than "DEBUG" in normal circumstances.

 

 

If you would like a more detailed trace on the privileges needed you could also execute the DEBUG level trace (Usually SAP Development would request this)

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='debug' with reconfigure;


 

2) Reproduce the issue/execute the command again


 

3) When the execution finishes, please turn off the trace as follows in HANA studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

______________________________________________________________________________________________________________________________

 

So now that you have turned the trace on and reproduced the issue, you must turn the trace off again.

 

You should now see a new indexserver0000000trc file created in the Diagnosis Files Tab in HANA Studio

Capture.PNG

 

So once you open the trace file, scroll to the end of the file and you should see something similar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user who is trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytical privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is located on object 78787.

 

So now we have to find out who owns the Object 78787. We can find out this information by querying the following:

 

select * from objects where object_oid = '<oid>';

Select * from objects where object_oid = '78787'

 

Once you have found the owner of this object, you can ask the owner to grant the TABLEAU user the necessary privileges to query the object.
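
To double-check what the user can actually do at this point, the effective privileges can be queried as well (a sketch; the EFFECTIVE_PRIVILEGES system view has to be filtered by user name):

select * from EFFECTIVE_PRIVILEGES where USER_NAME = 'TABLEAU';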

 

Please be aware that if you find that the owner of an object is _SYS_REPO, this is not as straightforward as logging in as _SYS_REPO, because that is not possible: _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

You have to create a .hdbrole file which gives the access (a development type of role, giving SELECT, EXECUTE, INSERT, etc. access) on this schema. You then assign this role to the user who is trying to access the object.
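
Activated repository roles are then granted via the _SYS_REPO procedure rather than with a plain GRANT statement (a sketch with a hypothetical role name):

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('mypackage.roles::ModelAccess','TABLEAU');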

 

 

Another option available for analyzing privilege issues was introduced as of SPS 09. It comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

SAP HANA SPS10 – What is New for Backup and Recovery

$
0
0

This post outlines new and enhanced features of SAP HANA backup and recovery with Support Package Stack 10.

The information here has been collected from several sources with the intent of making it more accessible to people interested.

 

Contents

 

 

Recovery Using Delta Backups

With SPS10, SAP HANA supports recovery using delta backups (incremental and differential backups).

 

Full Backups and Delta Backups

SAP HANA now supports the following backup types:

 

From SPS10, a full backup is used to mean:

  • Data backup
    A data backup includes all the data structures that are required to recover the database.
  • Storage snapshot
    A storage snapshot captures the content of the SAP HANA data area at a particular point in time.

 

From SPS10, SAP HANA now supports the following delta backups:

  • Incremental backup
    An incremental backup stores the data changed since the last backup - either the last full data backup or the last delta backup (incremental or differential).
  • Differential backup
    Differential backups store all the data changed since the last data backup.

 

Note that delta backups (incremental or differential) contain actual data, whereas log backups contain redo log entries.

 

Delta backups are included in the backup catalog.

When you display the backup catalog, delta backups are hidden by default.

 

To display delta backups in the backup catalog:

  1. In SAP HANA studio, open the Backup Console and go to the Backup Catalog tab.
  2. Select Show Delta Backups.

 

You can use both incremental and differential backups in your backup strategy.

 

Backup lifecycle management now also includes delta backups.

When you delete all the backups older than a specific full backup, older delta backups are also deleted along with older full backups and log backups.

 

 

SAP HANA Recovery Options Using Delta Backups

If delta backups are available, they are included by default in a recovery. In the recovery dialog in SAP HANA studio, you can choose to perform a recovery without using delta backups:

sap-hana-other-settings.png

 

 

If you include delta backups, SAP HANA automatically determines the optimal recovery strategy based on all the available backups.

 

 

Recovery to the Most Recent State

What you need:

 

  • A data backup

    AND
  • The last differential backup
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent incremental backups
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent log backups

    AND
  • Redo log entries that are still available in the log area
    (If the log area is still available.)

 

 

Recovery to a Point in Time in the Past

What you need:

 

As for a recovery to the most recent state.

Redo log entries from the log area may not be needed.

 

 

SQL Statements for Recovery Without Delta Backups

By default, SAP HANA includes delta backups in a recovery.

 

If you wish to recover SAP HANA without using delta backups, you can use the following SQL statement:

 

RECOVER DATABASE UNTIL TIMESTAMP '<timestamp>' IGNORE DELTA DATA BACKUPS

 

Example:

RECOVER DATABASE UNTIL TIMESTAMP '2015-05-15 10:00:00' IGNORE DELTA DATA BACKUPS

 

 

Finding and Checking the Backups Needed for a Recovery

You can use hdbbackupdiag to display the backups needed to recover the database.

In this way, you can minimize the number of backups that need to be made available for a recovery.

 

With SPS10, hdbbackupdiag supports delta backups.

 

More information: Checking Whether a Recovery is Possible

 

 

Prerequisites for Performing a Recovery

 

  • Operating system user <sid>adm
  • Read access to the backup files
  • System privilege DATABASE ADMIN
    (for tenant databases in a SAP HANA multiple-container system)

 

 

Third-Party Backup Tools and Delta Backups

Delta backups are compatible with the current API specification for third-party backup tools (Backint).

 

  • For delta data backups, SAP HANA uses the Backint option -l LOG in combination with the data Backint parameter file:

    -p /usr/sap//SYS/global/hdb/opt/hdbconfig/initData.utl

    Third-party backup tools sometimes use the Backint option -l to determine the backup container using the Backint parameter file.
    This means that for the option –l LOG, the log backup container is used.
  • Caution:
    Backup containers that were, until SPS10, only used for log backups may be sized too small for delta backups.
    If a log full situation occurs, this could cause a database standstill.
  • The Backint parameter file is tool-specific and typically contains information such as the backup destination.
    Note: Some third-party backup tools support only one Backint parameter file for both data and log backups
  • Recommendation:
    Ask your backup tool vendor for details of how to configure the tool to work with delta backups.
    If in doubt, configure two dedicated Backint parameter files: one for data backups, and one for log backups.

 

SQL Statements for Delta Backups

To create an incremental backup, use the following SQL statement:

 

BACKUP DATA INCREMENTAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_incremental_0_1431675865039_0_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_1_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_2_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_3_1

 

To execute a differential backup, use the following SQL statement:

 

BACKUP DATA DIFFERENTIAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_differential_0_1431675646547_0_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_1_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_2_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_3_1

 

In this example, 1431329211296 is the backup ID of the basis data backup; 1431675646547 is the backup ID of the delta backup.

 

Prerequisites for Working with Delta Backups

System privilege BACKUP ADMIN, BACKUP OPERATOR (recommended for batch users only), or DATABASE ADMIN (for MDC)

 

More Information

Delta Backups

 

 

Backup Functionality in SAP HANA Cockpit

In addition to SAP HANA studio, you can now also start SAP HANA backup operations from SAP HANA cockpit.

 

From SAP HANA cockpit, you can:

  • Create data backups
  • Display information about data backups

 

Create Data Backups in SAP HANA Cockpit

Using SAP HANA cockpit, you can create data backups.

 

  1. In SAP HANA cockpit, click the Data Backup tile.
  2. Choose Start New Backup and specify the backup settings.

    sap-hana-cockpit-backup-progress.png

     

  3. To start the backup, choose Back Up.

    sap-hana-cockpit-backup-start.png

     

    The overall progress is displayed on the Data Backup tile.

    To see more details of the backup progress, click the tile.

 

 

Display Information About Backups in SAP HANA Cockpit

If a backup is running, the Data Backup tile displays its progress.

 

If no backup is running, the Data Backup tile displays the status of the most recent full backup:

  • Successful
  • Running
  • Snapshot Prepared
  • Canceled
  • Error

 

Click the tile to display more details from the backup catalog:

sap-hana-cockpit-backup-catalog.png

 

 

The following information is displayed:

  • Time range that the backup catalog covers
  • Total size of the backup catalog
    Information about the most recent backups within the time range
    (status, start time, backup type, duration, size, destination type and comment)

 

Click a row to display more details:

sap-hana-cockpit-backup-catalog-details.png

 

 

Prerequisites for Creating Backups in SAP HANA Cockpit

 

  • System privilege BACKUP OPERATOR or BACKUP ADMIN
  • Role:
    • sap.hana.backup.roles::Operator
      or
    • sap.hana.backup.roles::Administrator

 

Notes

Storage snapshots, backup lifecycle management, and database recovery are currently not supported in SAP HANA cockpit.

 

More Information

SAP HANA Cockpit

SAP HANA Administration Guide: SAP HANA Database Backup and Recovery

 

 

Support for SAP HANA Multitenant Database Containers

In SAP HANA studio, the steps to recover a SAP HANA multitenant database container system are similar to the steps to recover a SAP HANA single-container system.

 

Note:

Storage snapshots are currently not supported for SAP HANA multitenant database container systems.

 

The system database plays a central role in the backup and recovery of SAP HANA multitenant database containers:

  • The system database can initiate backups of the system database itself as well as of individual tenant databases.
    A tenant database can also perform its own backups (unless this feature has been disabled for the tenant database)
  • Recovery of tenant databases is always initiated from the system database
  • To recover a complete SAP HANA multitenant database container system, the system database and all the tenants need to be recovered individually.
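
For example, a backup of a tenant database can be triggered from the system database with SQL along these lines (a sketch; the tenant name and the file prefix are placeholders):

BACKUP DATA FOR <tenant_name> USING FILE ('COMPLETE_DATA_BACKUP');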

 

 

SAP HANA Multitenant Database Containers and Third-Party Backup Tools

When you work with third-party backup tools and SAP HANA multitenant database container system, you should be aware of some specific points:

 

Isolation Level “High”

With SPS10, a new option “isolation level” was introduced for SAP HANA multitenant database container systems.

 

In isolation level high, each tenant database has its own dedicated operating system user.

 

In high isolation scenarios, Backint is supported by SAP HANA. However, you should check with your third- party tool vendor whether any tool-specific restrictions apply.

 

Tenant Copy

Tenant copy using Backint is currently not supported.

To copy a tenant database using backup and recovery, use file system-based backups instead.

 

 

DBA Cockpit for SAP HANA: New Backup Functionality

DBA Cockpit for SAP HANA supports the following SAP HANA SPS10 functionality:

  • Delta backups (incremental and differential backups)

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.50 SP01
    • 7.40 SP13
    • 7.31 SP17
    • 7.30 SP14
    • 7.02 SP18
  • Backups of tenant databases
    All tenant databases in an SAP HANA multitenant database container can be backed up independently of each other.

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.40 SP10
    • 7.31 SP15
    • 7.30 SP13
    • 7.02 SP17

 

Note that the tenant database on which the DBA Cockpit is installed is supported out of the box.

No additional setup steps are necessary in DBA Cockpit.

System databases need to be integrated manually.

More information: SAP Help Portal -> DBA Cockpit for SAP HANA -> Add a Database Connection

 

To schedule backups:

 

  1. In DBA Cockpit, Choose Jobs -> DBA Planning Calendar.
    Alternatively, use SAP transaction DB13.
  2. To schedule a new data backup, drag an item from the Action Pad to a cell in the calendar.
    dba-cockpit-action-pad.png
    To back up a tenant database, choose Complete Data Backup.
    Tenant databases are backed up from within the system database.
  3. In the dialog box, specify the information required.
    dba-cockpit-backup-mdc.png
    For Database Name, specify the name of the tenant database you want to back up.
  4. Choose Add or Execute Immediately.
    The backup is scheduled for the time you specified or is started.

 

More information: SAP Note 2164096 - Schedule backups for SAP HANA multiple-container systems with DB13

 


New Monitoring View: Progress of Backup

M_BACKUP_PROGRESS provides detailed information about the most recent data backup.

 

Here is a comparison of M_BACKUP_PROGRESS and M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES:

 

M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES | M_BACKUP_PROGRESS
All types of backups (data, log, storage snapshots, if available) | Data backups only (data, delta, incremental)
All completed and currently running backups since the database was created | Currently running and last finished backups only
Persistent | Cleared at database restart
Total amount of data for finished backups only | Total and already transferred amount of data for all backups

 

System views are located in the SYS schema.
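
For example, the progress of the most recent data backup can be checked with a simple query (a minimal sketch):

SELECT * FROM M_BACKUP_PROGRESS;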

 

More information: M_BACKUP_PROGRESS in the SAP HANA SQL and System Views Reference Guide

 

 

Which SAP HANA Tool Supports What?

Below is an overview of the backup and recovery functionality supported by the different SAP HANA tools with SAP HANA SPS10:

 

                             | SAP HANA Studio | SAP HANA Cockpit | DBA Cockpit for SAP HANA
Data Backup                  | YES             | YES              | YES
Storage Snapshot             | YES             |                  |
Incremental Backup           | YES             |                  | YES
Differential Backup          | YES             |                  | YES
Schedule Backups             |                 |                  | YES
Database Recovery            | YES             |                  |
Support for Tenant Databases | YES             |                  | YES

 

 

More Information

SAP HANA User Guides

 

 

Overview Presentation

SAP HANA Backup/Recovery Overview

 

 

Training

 

 

 

SAP Notes

 

  • 2165826
    Information about SAP HANA Platform Support Package Stack (SPS) 10
  • 1642148
    FAQ: SAP HANA database backup and recovery
  • 2031547
    Overview of SAP-certified 3rd party backup tools and associated support process
  • 2039883
    FAQ: SAP HANA database and storage snapshots
  • 2165547
    FAQ: SAP HANA Database Backup & Recovery in a SAP HANA System Replication landscape
  • 2091951
    Best Practices for SAP HANA Backup and Restore

 

Further SAP notes are available on component HAN-DB-BAC

HANA Rules Framework

$
0
0

Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission critical tasks are implemented by a countless number of operational decisions, either manually or automated by business applications. These days - an organization's agility in decision-making becomes a critical need to keep up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development |Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA rules framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png   

Scalability and Performance |HRF as a native SAP HANA solution leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Here are some SAP solutions (a partial list) that utilize HRF in different domains:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP Fraud Management

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

SAP HANA : The Row store , column store and Data Compression

$
0
0

Here is an attempt to explain the row store data layout, column store data layout and the data compression technique.

 

Row store: here all the data belonging to a row is placed next to each other. See the example below.

 

Table 1 :

Name   | Location  | Gender
...    | ...       | ...
Sachin | Mumbai    | M
Sania  | Hyderabad | F
Dravid | Bangalore | M
...    | ...       | ...

 

Row store corresponding to above table is

 

row store.jpg

 

 

Column store: here the contents of a column are placed next to each other. See below an illustration of table 1.

 

column store.png

 

 

Data compression: SAP HANA provides a series of data compression techniques that can be used for data in the column store. To store the contents of a column, the HANA database creates a minimum of two data structures: a dictionary vector and an attribute vector. See table 2 below and the corresponding column store.

 

Table 2.

Record | Name  | Location  | Gender
...    | ...   | ...       | ...
3      | Blue  | Mumbai    | M
4      | Blue  | Bangalore | M
5      | Green | Chennai   | F
6      | Red   | Mumbai    | M
7      | Red   | Bangalore | F
...    | ...   | ...       | ...

 

column store2.png

 

 

In the above example the column ‘Name’ has repeating values ‘Blue’ and ‘Red’, and similarly for ‘Location’ and ‘Gender’. The dictionary vector stores each value of a column only once, in sorted order, and a position is maintained for each value. With reference to the above example, the dictionary vectors of Name, Location and Gender could be as follows.

 

Dictionary vector : Name

Name  | Position
...   | ...
Blue  | 10
Green | 11
Red   | 12
...   | ...

 

Dictionary vector : Location

Location  | Position
...       | ...
Bangalore | 3
Chennai   | 4
Mumbai    | 5
...       | ...

 

 

Dictionary vector : Gender

Gender | Position
F      | 1
M      | 2

 

 

Now the attribute vector corresponding to the above table would be as follows. It stores integer values, which are the positions in the dictionary vector.

 

  dictionary enco.png
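
The effect of dictionary encoding can be observed on a live system (a sketch; M_CS_COLUMNS shows the row count, the number of distinct values and the compression type per column of a column-store table):

select COLUMN_NAME, COUNT, DISTINCT_COUNT, COMPRESSION_TYPE from M_CS_COLUMNS where TABLE_NAME = '<table_name>';

The smaller DISTINCT_COUNT is compared to COUNT, the more memory the dictionary vector and the integer-based attribute vector save compared to storing the raw values.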


Increased schema flexibility in SAP HANA

$
0
0

Schema flexibility is one of the key capabilities in SAP HANA that helps to bring flexibility to the column store table definition. A brief insight with a good example can be seen in Getting flexible with SAP HANA.

 

Let us now understand the new capabilities in schema flexibility with this document.

 

With HANA SPS11 customers can now use the increased capabilities of schema flexibility in SAP HANA; let us understand them via some examples.

 

Create a column table to store employee details (name, ID and age), leaving room to add other necessary employee information as needed at run time, by using the syntax below:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility;


Adding the clause ‘with schema flexibility’ during table creation enables dynamic column creation during DML operations like insert/upsert, update or delete.


Once the base structure for the employee_Details table is created, a requirement comes up to add some more details, like employee_salary and employee_department, as new columns to the table definition. Now the dynamic nature of the employee_Details table comes in handy, as we have enabled it with the ‘with schema flexibility’ option: instead of altering the structure of the table we can directly add whatever data we need, as shown below:


Insert into employee_Details (emp_id, emp_name, AGE, employee_salary, employee_department) values (1, 'RAM', 29, 1000, 'PI_HANA');


The insert statement will execute successfully irrespective of whether the highlighted columns already exist or not, which means the 2 new highlighted columns are added to the metadata of the table implicitly as part of the insert statement.


The nature of a flexible table is to create dynamic columns with the default data type NVARCHAR of maximum length (5000). If we do not want this default behaviour and want the data type of dynamic columns to be of our own choice, this can now be done with HANA SPS11 during the creation of the table. Let's say in our case any dynamic column that gets added to the employee_Details table must have the data type integer; then we can do it by writing the create statement as:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (DEFAULT DATA TYPE INTEGER);


Now any newly created dynamic columns during the insertion/update will take integer as the data type.


If we have a case where the details added to the employee_Details table are heterogeneous entries and we want the dynamic columns to derive their data types from the inserted values, we can do that with the following create statement; this is considered ‘data type detection’.


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility(DEFAULT DATA TYPE *).


Here the dynamic columns derive their data type from the inserted value.

 

That is:

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary) values (1, 'RAM', 29, 'PI_HANA', 2000);

 

The last two columns take string and numeric data types respectively, which differs from the default case.

Data type detection behavior is valid for both single-valued and multi-valued entities.

 

Here is a case where ‘employee_feedback’ is to be dynamically added to the employee_Details table and is initially entered as an integer value for the first year's rating; the data type of the employee_feedback column is therefore constructed as integer. If in the coming year the same column receives a floating-point value like 3.5, it becomes impossible to capture it. To enable this use case there is an option during table creation:


Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility(DEFAULT DATA TYPE * AUTO DATA TYPE PROMOTION )

Yes, it is the option of Data type promotion during the creation which gets our use case ready.


This helps us maintain the data type in the most generic form based on the data.

As an example, for the first year's rating our insert statement goes like this:


Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4);


Now employee_rating  column takes data type as integer.


And in the coming year when it hits a floating value :

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.5);


The data type of employee_rating will automatically be promoted to a floating-point type, thus meeting the requirement without any errors.


Here is the allowed conversion rule for data type promotion :


conversion_rule.PNG




Here is another case of multi-valued promotion that is supported: we now have employee_phone as a new detail in the table, and it gets added with a varchar value, which is a phone number, as below:

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, '01233556589');


It takes the entered input as a single-valued varchar.


Now, when employees start using dual or triple SIM phones, there is a need to store multiple values. It should now be possible to store the new data set in the same column without altering it, as we have enabled the table with auto data type promotion.

 

That is :

 

Insert into employee_Details (emp_id, emp_name, AGE, emp_department, emp_salary, employee_rating, employee_phone) values (1, 'RAM', 29, 'PI_HANA', 2000, 4.56, array('01233556589', '983232131', '324324'));


This now converts the employee_phone column into a multi-valued character attribute.


Flexible table usage contributes majorly to better memory management; to support this we have an operation called ‘Garbage Collection’.

 

In our case we decide to normalize the ‘employee_feedback’ details by having a separate table for them and thus flush all the values existing in the ‘employee_feedback’ column of the employee_Details table.

 

Now ‘Garbage Collection’ implicitly comes into the picture if our employee_Details table is enabled for it in the following manner:

 

Create column table employee_Details (emp_id int, emp_name varchar(100), AGE int) with schema flexibility (RECLAIM);

 

Enabling the RECLAIM option turns on garbage collection, and dynamic columns (in our case ‘employee_feedback’) will be automatically dropped if no values are left in the column for any row.


What if the need for the features discussed above arises only after the table has been created? Should we drop the table and recreate it? The answer is no.

Or what if, at some point in time, we wish to disable the above characteristics individually on the created table?


That is possible as well, since all of the operations discussed above are also supported via ALTER TABLE, as shown below (concrete examples applied to our table follow the explanations):


1) ALTER TABLE <table name> DISABLE SCHEMA FLEXIBILITY

2) ALTER TABLE <table name> ENABLE SCHEMA FLEXIBILITY [(<options>)]

3) ALTER TABLE <table name> ALTER SCHEMA FLEXIBILITY (<options>)

4) ALTER TABLE <table name> ALTER <column name> [<data type>] DISABLE SCHEMA FLEXIBILITY

5) ALTER TABLE <table name> ALTER <column name> [<data type>] ENABLE SCHEMA FLEXIBILITY

 

A one-line explanation of each operation follows:


1) All dynamic columns are converted to static columns. If the conversion of any dynamic column fails, the operation fails as a whole and no changes are applied. Normal tables are only allowed a certain number of columns (currently 1,000). In order to successfully convert a Flexible Table into a normal table, the number of columns in the Flexible Table must not exceed this limit.


2) Turns schema flexibility on for a database table.


3) Here the option list is mandatory. All schema flexibility options listed in the CREATE TABLE … WITH SCHEMA FLEXIBILITY section can be used, allowing one or several options of a Flexible Table to be changed.


4) Converts the specified dynamic column into a static column.


5) Converts the specified static column into a dynamic column.
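
Applied to our example table, the statements could look as follows. This is a sketch based on the syntax listed above; the chosen options and the DOUBLE data type are illustrative assumptions, not part of the original example.

-- 1) Convert all dynamic columns to static columns
ALTER TABLE employee_Details DISABLE SCHEMA FLEXIBILITY;

-- 2) Turn schema flexibility back on, with garbage collection enabled
ALTER TABLE employee_Details ENABLE SCHEMA FLEXIBILITY (RECLAIM);

-- 3) Change the flexibility options of the table
ALTER TABLE employee_Details ALTER SCHEMA FLEXIBILITY (AUTO DATA TYPE PROMOTION);

-- 4) Pin a single dynamic column as a static DOUBLE column
ALTER TABLE employee_Details ALTER employee_rating DOUBLE DISABLE SCHEMA FLEXIBILITY;

-- 5) Make that column dynamic again
ALTER TABLE employee_Details ALTER employee_rating ENABLE SCHEMA FLEXIBILITY;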

 

 

 

Hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you

Troubleshooting Hanging Situations in HANA


Purpose

The purpose of this document is to instruct SAP customers on how to analyse hanging situations in their HANA system.

 

Overview

So what constitutes a hanging situation in HANA? When we talk about a hanging situation we generally refer to a system-wide hang, as opposed to one specific activity (such as a query or operation) hanging. This means we are going to look at the system conditions (performance) which lead the DB to run slowly, or in some cases not at all.


For example:

  • Database has stopped responding but has not crashed.
  • Database hangs during startup and does not start up.
  • Application is hanging but new connections to the database are still possible (possibly not a HANA DB issue; analyse it from an application perspective before looking at it from a HANA DB perspective).

 

 

Like all other software systems, SAP HANA relies on hardware to run its processes. Even when looking at the most basic single-server, single-host setup, we can see many areas which overlap and interact with one another. In other words, when you see a hang, do not just assume the cause is related to HANA.

 

Wiki.png

 

A small example of a hang / freeze situation I have witnessed is when a user opens the "Administration" tab in HANA Studio and the system hangs for a long period of time. Below is a brief walkthrough of troubleshooting this issue.

 

Troubleshooting

Here are some hanging situations you may face and which traces are needed to troubleshoot them.


1: Hanging situation with High CPU Utilization.

2: Hanging situations with low CPU utilization because all threads are waiting.

  • Traces needed for further analysis: Runtime Dumps.

3: Hanging situations where a logon via SSH is not possible. (Either wrong OS configuration or an OS/Hardware issue.)

 


Wrong OS configuration


The system must not swap. Remember that HANA limits its memory usage via the Global Allocation Limit (GAL). Other non-HANA processes or other instances can interfere with the HANA system, so it is important that HANA is really assigned the memory up to the GAL and that nothing else is using it.

 

A large file cache can lead to problems. When checking the cache size (top command), please remember that HANA shared memory is also booked as "shared". The remaining cache size is (in general) not available for HANA. If this is high, then we have to find out why; the most probable reason is outside HANA. (See Linux paging improvements.)

 

Transparent Huge Pages (THP) must be deactivated. THP will make HANA run quicker for a while, but when it comes to splitting THPs, the system gets so slow that working with it is not possible.
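
A quick way to check the THP setting at OS level is shown below. This is a sketch for a typical Linux installation; the exact path and the recommended procedure may differ depending on your OS version and the relevant SAP Notes.

# Show the current Transparent Huge Pages setting ("[never]" means deactivated)
cat /sys/kernel/mm/transparent_hugepage/enabled

# Deactivate THP until the next reboot (as root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled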

 


When you face such a hanging situation, the first thing to do is to collect runtime dumps immediately.

 

Runtime dumps can be useful in the following situations:

 

  • Standstill situations
  • High CPU
  • Hanging Threads
  • Query cancellation not working

 

By checking an RTE dump you can look for certain key indicators, such as large query plans. Large query plans can indicate problematic SQL queries. The thread of such an SQL query can then be checked via its parent and also its child threads. These threads then link you back to short stacks, which can then be checked to see what exactly each stack is doing. See Troubleshooting High CPU for further info.

 

As HANA Studio will more than likely be hanging during the hang / slow performance, you can use SSH to collect the runtime dumps at 2-minute intervals by means of the following shell script (based on hdbcons):

 

 

# Collect three rounds of runtime dumps from all hdbindexserver processes, 120 seconds apart
for RUN in 1 2 3 ; do
  DATE=`date +%Y%m%d.%H%M%S`
  for PID in `ps x | grep -E "hdb(index)server" | grep -v grep | awk '{print $1}'` ; do
    CMDLINE=`ps x | grep -E "^ *${PID}" | grep -v grep | awk '{for(i=5;i<=NF;i++) printf $(i)}'`
    echo $PID - $CMDLINE
    hdbcons -p ${PID} "runtimedump dump -c" > ${CMDLINE}-${PID}-${DATE}.dump
  done
  [ ${RUN} -lt 3 ] && sleep 120
done

 

 

After running the script you can open the generated RTE dumps. The dumps will show you exactly which queries were running at the time of the hang / freeze.

 

These queries can then either be searched for on the SAP support search, or you can check whether they are your own custom queries which need to be looked at in terms of optimization. (If you have to open an incident with SAP, this is also the information the engineer will be looking for.)

In relation to the HANA Studio hang, the solution can be found by searching for the generated SQL query, which will return the note "High CPU when opening admin console".

 

 

The vast majority of hanging situations are related to bottleneck issues with CPU, Storage, Network etc.

 

Usually the DBA will know the time and date of the hang that is causing the issues, but if this is not known you can always use the performance load graph. As of SPS 09 you can use the HANA Cockpit load graph. (This function was already available in previous revisions, but it did not work very well and crashed a lot.) The Cockpit version performs better and does not crash like its predecessor in Studio when the nameserver history file is large.

 

Going to the SAP HANA Cockpit, you can see the SAP HANA Database Administration section with its Fiori-designed tiles:

 

Wiki 1.PNG

Here you can check at what time and date the system experienced the issues.


 

Please also be aware of the HANA Offline Cockpit functionality that became available recently. By logging in with the <sid>adm user you can also use the "Troubleshoot Unresponsive System" function:

 

Wiki4.PNG

 

If the load graph cannot be accessed via either Studio or Cockpit, you can also use the top command at OS level, which shows the running processes:

 

wiki 3.PNG

 

So now you have the timestamp of the issue. Next, go to the Diagnosis Files tab in Studio, or its corresponding tile in Cockpit.

 

Here is where you locate the time stamp in the relevant files so you can see what was happening before, during and after the hang.

 

The first files to look into are the indexserver and nameserver traces. Check the corresponding timestamps (before and during that time) in these files to see if any obvious errors are apparent. Some examples of errors you may see before a system hang are:

 

 

  • SQL error 131: transaction rolled back by lock wait timeout
  • SQL error 133: transaction rolled back by detected deadlock

 

If you see these, please see the Note on Lock Analysis. Many useful SQL scripts are also available in SAP Note 1969700.

 

The so-called MVCC Anti Ager periodically checks for problematic statements. It reports idle cursors or long running write transactions after 60 minutes.

 

It closes idle cursors after 12 hours.

 

  • mvcc_anti_ager.cc(01291) : There are too many un-collected versions.

        ('number of versions > 1000000' or 'maximum number of versions per record > 100000')

 

  • The cursor possibly blocks the garbage collection of the HANA database.

         mvcc_anti_ager.cc(01291) : There are too many un-collected versions on table "<schema>"."<table>"

         ('number of versions for one of the partitions > 1000000' or 'maximum number of versions per record > 100000')

 

  • The transaction blocks the garbage collection of HANA database.

        mvcc_anti_ager.cc(01199) : long running uncommitted write transaction detected.

        mvcc_anti_ager.cc(01082) : The Connection is disconnected forcefully because it is blocking garbage collection for too long period.

        Statement.cc(03190) : session control command is performed by ..., user=SYSTEM, query=ALTER SYSTEM DISCONNECT SESSION '<conn_id>'

        mvcc_anti_ager.cc(00834) : long running cursor detected.

 

  • The open cursor possibly blocks the garbage collection of HANA database.

         Please close a cursor in application or kill the connection by "ALTER SYSTEM DISCONNECT SESSION '<conn_id>' "

 

The above refers to a long running transaction that has not yet been committed and could be causing your system to hang. If you see any of these errors, please see the FAQ on Garbage Collection.


Blocked transactions can lead to a hanging situation from an application perspective. Blocked transactions are usually not a database issue and need to be analyzed from an application point of view.

 

To find the blocked transactions, open the System Information tab and run the "Blocked Transactions" query:

 

Wiki1.PNG
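
The same information is also exposed through SQL. A minimal sketch using the corresponding monitoring view is shown below; all columns are selected because the available columns differ between revisions.

-- Transactions that are currently blocked, including the blocking lock owner
SELECT * FROM M_BLOCKED_TRANSACTIONS;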

 

Long running transactions can block the garbage collection from executing. A very high number of MVCC versions (> 5 million) can lead to a slow system or even a hanging-like situation. If you would like to query the number of MVCC versions, you can find this in the monitoring view M_MVCC_OVERVIEW:

 

WIki 2.PNG
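
A minimal sketch of that query:

-- Overall MVCC version statistics
SELECT * FROM M_MVCC_OVERVIEW;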

 

If you would then like to drill down further into the MVCC versions, you can also see how many versions exist per table by querying M_RS_TABLE_VERSION_STATISTICS:

 

wiki 3.PNG
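
Again as a sketch, selecting everything because the column set depends on the revision:

-- MVCC version statistics per row-store table
SELECT * FROM M_RS_TABLE_VERSION_STATISTICS;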

 

Using the above information you should be able to find the blocking transaction. You should then try to disconnect that connection:

ALTER SYSTEM DISCONNECT SESSION '12345'. If the disconnect was successful, then the reason is normally an application or user issue and should be investigated further from that perspective.

 

If the connection cannot be cancelled, then it is one of two things:

 

1: A long running query that cannot be cancelled. This is the time to collect the runtime dumps mentioned earlier.

 

2: Or it could be caused by an issue that requires further attention from SAP Development support via an incident.

 

 

Hanging situations in relation to SAVEPOINTS:

 

Savepoints speed up the startup time of a database because not all redo logs have to be replayed, only the log written since the last savepoint. The savepoint coordinator periodically performs savepoints; the default interval is 5 minutes. Savepoints are also triggered by several other operations, such as data backups, a database shutdown, or the completion of a restart.

 

If a system crashes during the savepoint operation, the system can still be restored from the last savepoint thanks to the shadow page concept. The shadow page concept is more about how pages are allocated and reused in the data file and does not affect recoverability that much, but it largely frees you from the need for data file reorganisation. Changed pages are not overwritten directly; instead they are marked as available and the changed content is placed at some other available location in the data file. Since SAP HANA keeps track of which pages contain the current data, there is no need to overwrite or clear unused pages, so after some time the whole data file will contain some data.

 

Data backup operations write a global savepoint, which is a consistent set of savepoints from all servers in the SAP HANA system. It is possible to restore a SAP HANA system from such a data backup, without replaying the redo log.

 

The Savepoint is split into three individual stages:

 

Phase 1 (PAGEFLUSH): All modified pages that are not yet written to disk are determined. The savepoint coordinator triggers writing of all these pages and waits until the I/O operations are complete.

 

Phase 2 (CRITICAL): This is the critical part of a savepoint operation, where no concurrent write operations are allowed. This is achieved using the consistent change lock. To minimize the impact on concurrent operations, phase 2 must be kept as short as possible. The savepoint coordinator determines and stores the savepoint log position and the list of open transactions. Pages that were changed during phase 1 are also written to disk asynchronously.

 

Phase 3 (POSTCRITICAL): Changes are allowed in this phase again. The savepoint coordinator waits until all asynchronous I/O operations related to the savepoint are finished and marks the savepoint as completed.

 

During the critical phase the savepoint holds an exclusive Consistent Change Lock. Other write operations into the data volume are blocked during that time.

 

You can identify such situations via the following:

 

The M_SAVEPOINTS monitoring view, the savepoint section of a runtime dump, or the hdbcons command (STATE ENTERCRITICAL or CRITICAL)

 

Capture.PNG
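
In SQL this can be checked with a plain selection on the view mentioned above. This is only a sketch; the column holding the critical-phase duration differs between revisions, so no ORDER BY is assumed here.

-- Recent savepoints; long critical phases point to savepoint-related blocking
SELECT * FROM M_SAVEPOINTS;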

 

Possible reasons: bad I/O throughput (check SAP Note 1999930) or a blocked consistent change lock due to a waiting writer (check SAP Note 2214279).

 

 

 

If you have checked the HANA logs and see nothing obvious or problematic, then check /var/log/messages.

 

If you see some irregularities in these files then open a ticket with your hardware provider.

 

The main point to take from this document is to ALWAYS try to capture the hang with runtime dumps. This gives you, the DBA, or SAP a very good chance of identifying the root cause of the hang.

 

 

Related KBA's:

2280288 - TREXviaDBSL and TREXviaDBSLWithParameter Statements can remain Open with Status 'Suspended'

2256719 - Potential Standstill Situations Due to Wrong Calculation of Parked Workers

1999020 - How-To: SAP HANA Troubleshooting when Database is no longer accessible

Setting Custom theme for HANA XS applications

Reset the SYSTEM User's Password in HANA DB


Overview

 

If the SYSTEM user's password is lost, you can reset it as the operating system administrator by starting the index server in emergency mode. If your HANA DB is multitenant, this process will not work. My HANA DB revision was 102.04.

 

Prerequisites

 

You have the credentials of the operating system administrator (<sid>adm).

 

Procedure

 

Step 1: Log on to the server on which the master index server is running as the operating system user (that is, the <sid>adm user).

 

Step 2: Open a command line interface.

 

Step 3: Shut down the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StopSystem HDB

Step3.png
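
Before continuing, it can be worth confirming that all HDB processes have actually stopped. This check is not part of the original procedure; it is just a sanity check using the standard sapcontrol process list:

# All processes should be reported as stopped (GRAY) before proceeding
/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function GetProcessList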

Step 4: In a new session, start the name server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbnameserver

Step4.png

This session will stay in a hung state; leave it running…

 

Step 5: In a new session, start the compile server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbcompileserver

Step5.png

This session will stay in a hung state; leave it running…

 

Step 6: In a new session, start the index server by executing the following commands:

 

/usr/sap/<SID>/HDB<instance>/hdbenv.sh

/usr/sap/<SID>/HDB<instance>/exe/hdbindexserver -resetUserSystem

Step6.png

The following prompt appears: resetting of user SYSTEM - <<<new password>>>

 

Step 7: Enter a new password for the SYSTEM user.

You must enter a password that complies with the password policy configured for the system.

The password for the SYSTEM user is reset and the index server stops.

 

Step 8: In the terminals in which they are running, end the name server and compile server processes by pressing CTRL+C.

 

Step 9: In a new session, start the instance by executing the following command:

/usr/sap/<SID>/HDB<instance>/exe/sapcontrol -nr <instance> -function StartSystem HDB

 

 

Note:

 

In a scale-out system, you only need to execute the commands on the master index server.


Results

 

The SYSTEM user's password is reset. You do not have to change this new password the next time you log on with this user regardless of your password policy configuration.

SAP HANA REVISION UPDATE – SPS10


Reason for HANA DB patch level update

We are copying a HANA DB from SLES 11.3 at revision 102.01 to RHEL 6.5 at revision 102.00 via the backup/restore method using SWPM (homogeneous system copy). When restoring a HANA DB, the target environment must be at the same or a higher patch level. This is the reason we are updating the target HANA DB environment from revision 102.00 to the latest available patch level, 102.04.

Download SAP HANA patches

Download the following updates (database, studio and client) from the SAP Service Marketplace and transfer them to the HANA server.

Fig6.pngFig7.png

The currently available patch level is 102.04 (PL04), so we are updating to PL04. We will download the studio, client and database components for the update.

SAP HANA Backup before update

Take a complete backup before starting the revision update.

Fig4.png

Extract HANA Patches

Move all SAR files to the HANA host server and extract them using the switch -manifest SIGNATURE.SMF

Fig8.png

If you extract more than one component SAR into a single directory, you need to move the SIGNATURE.SMF file to the corresponding subfolder (SAP_HANA_DATABASE, SAP_HANA_CLIENT, SAP_HANA_STUDIO, etc.) before extracting the next SAR, in order to avoid overwriting the SIGNATURE.SMF file. For more information, see also SAP Note 2178665 in Related Information.
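
As a sketch of the extraction command itself (the archive name is a placeholder, not the actual component SAR file name):

# Extract a component archive together with its signature manifest
SAPCAR -xvf <component>.SAR -manifest SIGNATURE.SMF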

Fig9.png

Fig10.png

Do the same for the client and studio archives as well.

Fig11.png


HANA Update via STUDIO

Run the SAP HANA Platform Lifecycle Management tool from HANA Studio.

Fig12.png

Fig13.png

Fig14.png

Select the location of the extracted components on the HANA host.

Fig16.png

Fig17.png

Fig18.png

Fig19.png

Fig20.png

Fig21.png

Fig22.png

Fig23.png

This completes the HANA patch level update.
