SCN Document List - SAP HANA and In-Memory Computing

Execution management by HANA Graphical Calculation View


SAP HANA, an appliance with a set of unique capabilities, offers the end user a wide range of possibilities for data modeling. One of them is the 'Graphical Calculation View', which helps in leveraging the full potential of SAP HANA for data modeling.

 

This document explains how HANA Graphical Calculation Views manage execution and how their node properties can be used effectively to control the execution flow.

 

The first property under discussion is:


'KEEP FLAG' for attributes in Aggregation Node

 

This property lets the end user exploit the behavior of the Aggregation node and, with it, the full capacity of the calculation engine.

Let us understand how it is achieved by considering a simple example.

 

Consider the simple SALARY_ANALYSIS table shown below; it is deliberately kept small to make the example easy to follow.

 

EMPLOYEE_TYPE   SALARY   YEAR
1               10000    2001-01-01
1               20000    2002-01-01
1               50000    2005-01-01
2               25000    2002-01-01
2               30000    2004-01-01
3               45000    2006-01-01

The table above gives the salary of each employee type per date. Let us now try to get the maximum salary for each employee type using a graphical calculation view, starting from a table that could be created as sketched below.
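For readers who want to reproduce the example, a minimal sketch of how such a table could be created and filled follows; the data types are assumptions on my part, since the original only shows the data.

CREATE COLUMN TABLE SALARY_ANALYSIS (
    EMPLOYEE_TYPE INTEGER,
    SALARY INTEGER,
    "YEAR" DATE   -- quoted as a precaution, since YEAR is also a SQL function name
);

INSERT INTO SALARY_ANALYSIS VALUES (1, 10000, '2001-01-01');
INSERT INTO SALARY_ANALYSIS VALUES (1, 20000, '2002-01-01');
INSERT INTO SALARY_ANALYSIS VALUES (1, 50000, '2005-01-01');
INSERT INTO SALARY_ANALYSIS VALUES (2, 25000, '2002-01-01');
INSERT INTO SALARY_ANALYSIS VALUES (2, 30000, '2004-01-01');
INSERT INTO SALARY_ANALYSIS VALUES (3, 45000, '2006-01-01');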

 

 

Step 1: Create a new graphical calculation view of DIMENSION Data category and add an aggregation node to the view.

 

Now add the column table created above into the newly inserted aggregation node and set EMPLOYEE_TYPE as an output column.

Also add SALARY as an aggregated column to the output, rename it MAX_SALARY_IN_EACH_EMP_TYPE

and set its aggregation type to MAX.

Aggren.PNG

Step 2: Connect the Aggregation Node to the default Projection Node and select only the MAX_SALARY_IN_EACH_EMP_TYPE column for the output of the Projection Node.

pROJECN.png

 

Step 3: Save and activate the view, then perform a Data Preview on the Aggregation Node. We get the maximum salary for each employee type in the underlying table.

intermediate_dp.png

 

Step 4 : Now perform Data Preview at the Calculation View level

dp.png

Here we see that the end result of the view returns only one row of data, discarding the EMPLOYEE_TYPE by which grouping was done in the aggregation node. The visualized plan for the same preview is shown below.

vizual.png

From the visualized execution plan we can see that the EMPLOYEE_TYPE column used in the aggregation node is pruned and not passed to the higher nodes when it is not requested in the end query. Thus we do not get the MAX_SALARY for each employee type; instead we get the MAX_SALARY over the whole salary list.
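The behavior can be reproduced with a query of the following shape against the activated view; the package and view names are placeholders, not taken from the original.

SELECT "MAX_SALARY_IN_EACH_EMP_TYPE"
FROM "_SYS_BIC"."<package>/CV_SALARY_ANALYSIS";
-- Without 'Keep Flag', EMPLOYEE_TYPE is pruned from the aggregation,
-- so this returns a single row containing the global maximum.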


To get the MAX_SALARY per EMPLOYEE_TYPE even when EMPLOYEE_TYPE is not part of the end query, we must enable the special property 'Keep Flag' on the attribute.


Step 5: Go back to the Aggregation Node used in the view, select the EMPLOYEE_TYPE column that was added as an attribute, set its 'Keep Flag' property to true and activate the view.

keep_flag.png


Step 6 : Once the activation is completed successfully, perform data preview of the model and check the result. We now get the MAX_SALARY grouped by EMPLOYEE_TYPE although EMPLOYEE_TYPE is not part of the end query.

dp_with_kf.png


The plan visualization of the same query now looks different: with 'Keep Flag' set to true for the EMPLOYEE_TYPE column, the grouping attribute is propagated to the higher nodes, as shown below:


viz_kf.png

 


Hence the end user can either retain the query optimization or force the model to deliver the required result by toggling the Keep Flag property of the attributes in the aggregation node.


Let us now look at another, related property of the Aggregation Node.


Always Aggregate Result in Aggregation Node


Taking the above example of SALARY_ANALYSIS, let us understand the usage of 'Always Aggregate Result' property in the aggregation node.


Step 1: Create a new Graphical Calculation View of type CUBE and add the SALARY_ANALYSIS table as the data source of the aggregation node. Now select EMPLOYEE_TYPE and YEAR as non-aggregated (attribute) columns and SALARY as the aggregated output column.

aggre1.png


Step 2: Save and activate the view, then execute a query on it involving the EMPLOYEE_TYPE and SALARY columns, as shown below.

 

always_aggre.png

 

The above SELECT statement does not involve any client-side aggregation or GROUP BY clause; the output values are the result of the default aggregation and grouping performed in the aggregation node, as shown below:
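The statement in the screenshot is essentially of the following shape; the package and view names are placeholders.

SELECT "EMPLOYEE_TYPE", "SALARY"
FROM "_SYS_BIC"."<package>/CV_SALARY_CUBE";
-- No GROUP BY is written here: the view's aggregation node implicitly groups
-- by the requested attribute EMPLOYEE_TYPE and returns SUM(SALARY) per type.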

                                                                                  image.png

 

Let us now execute the same statement with a WHERE clause and look at the result.

date.png

 

Here we see that introducing the WHERE clause has also introduced the filter column into the grouping: the result differs from the previous query because the YEAR column now also takes part in the GROUP BY operation of the aggregation node.
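As a sketch with the same placeholder names, the filtered query looks roughly like this:

SELECT "EMPLOYEE_TYPE", "SALARY"
FROM "_SYS_BIC"."<package>/CV_SALARY_CUBE"
WHERE "YEAR" = '2002-01-01';
-- Without 'Always Aggregate Result', the filter column YEAR is added to the
-- internal GROUP BY, so SALARY is aggregated per (EMPLOYEE_TYPE, YEAR).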

 

Step 3: To avoid adding the filter column to the GROUP BY clause and to aggregate only by the columns actually requested in the query, set the property 'Always Aggregate Result' to true; the aggregation then no longer varies with the filter column of the requested query.

 

always_agg_res.png

The above behavior applies only when no client-side aggregation is specified in the requested query.

With 'Always Aggregate Result' enabled, the execution model takes the form shown below:

final.png

 

 

Step 4: Now execute the same query with the WHERE clause on the YEAR column after setting the 'Always Aggregate Result' property.

final12.png


We now see that grouping happens only by EMPLOYEE_TYPE, the column in the requested query.



Thereby we have seen the usage and benefits of two key properties, Keep Flag and Always Aggregate Result, for execution management in Graphical Calculation Views.


I hope this information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you


HANA Data Warehousing Foundation 1.0 - Overview


This presentation shows how SAP HANA Data Warehousing Foundation 1.0 provides specific data management tools to support large-scale SAP HANA use cases. It complements the data warehouse offerings SAP BW powered by SAP HANA and the native SAP HANA EDW.

View this Presentation

SAP Hana EIM (SDI/SDQ) setup


In this document I'll explain how to set up and configure SAP HANA SPS 10 EIM (SDI/SDQ) with a Sybase IQ database and an ERP on HANA schema as source systems for real-time data replication.

 

I will show the detailed steps and configuration points required to achieve it.

 

Order of execution:

  • Create Sybase IQ database
  • Enable DP server for SDI
  • Enable Script server for SDQ
  • Install SDQ cleanse and geocode directory
  • Install DU HANA_IM_DP (Data provisioning)
  • Install and Register Data Provisioning Agent
  • Create remote source
  • Data replication and monitoring

 

Configuration required on SP9

The xsengine needs to be enabled (if not already done)

The statistics server needs to be enabled (if not already done)

The DU HANA_IM_ESS needs to be imported

 

 

Guide used

 

SAP Hana EIM Administration Guide SP10
SAP Hana EIM Configuration guide SP10

 

Note used

 

179583 - SAP HANA Enterprise Information Management SPS 10 Central Release Note

 

Link used

 

http://help.sap.com/hana_platform

Overview Architecture

7-15-2015 6-09-06 PM.jpg

 

Starting with HANA SPS 9, the new features called SDI (Smart Data Integration) and SDQ (Smart Data Quality) have been introduced.

 

The purpose of these new features is to provide an integrated ETL mechanism directly in HANA on top of SDA.

 

To make it simple:

  • Smart Data Integration provides data replication and transformation services
  • Smart Data Quality provides advanced transformations to support data quality functionality

 

 

Create Sybase IQ database

 

In order to have a dedicated database to work with, I'll create my own database on the IQ server:

 

From the SCC go to Administration and proceed as follows
7-10-2015 3-55-36 PM.jpg

7-10-2015 4-00-54 PM.jpg

7-10-2015 4-01-28 PM.jpg


SCC agent password: the password defined during the IQ server installation
Utility server password: auto-filled, do not change it
IQ server port: use an unused port; I already have 2 databases running so I picked the next number
Database path: <path where the db is stored><dbname>.db
IQ main dbspace path: <path where the dbspace is stored><dbname>.iq
7-10-2015 4-06-10 PM.jpg

 

Check mark ok

7-10-2015 4-14-09 PM.jpg

 


Execute
7-10-2015 4-18-37 PM.jpg

7-10-2015 4-19-51 PM.jpg

7-10-2015 4-20-47 PM.jpg


With my database now available, I'll create 3 simple tables for this test using Interactive SQL.
7-10-2015 4-49-11 PM.jpg

 

With the following syntax

7-10-2015 4-53-48 PM.jpg

 

  

Enable Data Provisioning server for SDI

 

When HANA is installed, the DP server is not active by default; in order to use SDI it needs to be enabled. The value needs to be changed to 1.
7-10-2015 5-22-47 PM.jpg

 


Enable Script server for SDQ

 

To take advantage of the SDQ functionality, the script server value also needs to be changed to 1.
7-10-2015 5-47-54 PM.jpg
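If you prefer SQL over the configuration editor, both services can be enabled with statements like the following; this is a sketch assuming a single-host system, so verify the layout of your landscape before running it.

-- Enable the Data Provisioning server (dpserver) and the script server
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('scriptserver', 'instances') = '1' WITH RECONFIGURE;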

 

 

 

Install SDQ cleanse and geocode directory

 

The Cleanse and Geocode nodes rely on reference data (directories) that we download and deploy to the SAP HANA server.

 

To download those directories, go to the SMP and select the ones you need.
You can download several directories depending on what you are licensed for.
7-10-2015 8-50-57 PM.jpg

7-10-2015 8-55-08 PM.jpg

 

Once downloaded, decompress it at the following location:
/usr/sap/<SID>/SYS/global/hdb/IM/reference_data
7-11-2015 8-09-10 PM.jpg

 


Install delivery unit HANA_IM_DP (Data Provisioning)

 

This specific delivery unit needs to be downloaded and then uploaded from the Studio or the web interface; it provides you with:

  • The monitoring functionality
  • The proxy application to provide a way to communicate with the DPA (cloud scenario)
  • The admin application for DPA configuration (cloud scenario)

7-11-2015 8-31-14 PM.jpg

7-11-2015 8-31-32 PM.jpg

 

Upload from the studio

7-11-2015 8-32-45 PM.jpg

 


Once done assign the monitoring role and add the view from the cockpit

7-11-2015 8-36-00 PM.jpg

7-11-2015 8-48-03 PM.jpg

 

 

Install and register Data Provisioning Agent

 

The Data Provisioning Agent acts as a bridge between HANA and source systems whose drivers cannot run inside HANA (the DP server), using pre-built adapters; in some cases it also allows HANA to write data back to the source system.
Using the DPA allows live (real-time) replication.

 

The agent is part of the package downloaded earlier.

7-11-2015 9-04-37 PM.jpg


Run and install it as needed.

7-8-2015 9-40-08 PM.jpg

Once installed, open the agent cockpit.

7-11-2015 9-12-27 PM.jpg

 


Make sure the agent is started, then connect it to HANA and register it together with the necessary adapters.

7-12-2015 4-46-46 PM.jpg

Let's create the source system in HANA now.

 

 



Create remote source

 

Now that my IQ database is in place and my HANA adapter is installed, I will create in SDA the source system I need to get the data from.

Let's start with my IQ database; before creating the connection in SDA, install and set up the ODBC library on the HANA server. To create my connection I will use the following statement:

 

create remote source I841035 adapter iqodbc configuration 'Driver=libdbodbc16_r.so;ServerName=HANAIQ03;CommLinks=tcpip(host=usphlvm1789:1113)' with CREDENTIAL TYPE 'PASSWORD' USING 'user=I841035;password=xxxxxxx';
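Once the statement has run, a quick way to verify the remote source and to expose one of the IQ tables is sketched below; the schema, virtual table and remote object names are placeholders.

-- Check that the remote source was registered
SELECT REMOTE_SOURCE_NAME, ADAPTER_NAME FROM "SYS"."REMOTE_SOURCES";

-- Expose a remote IQ table as a virtual table (names are placeholders)
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_CUSTOMER" AT "I841035"."<database>"."<owner>"."<table>";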

 

Once done, refresh the Provisioning folder.

7-11-2015 10-42-09 PM.jpg

Then create the ERP on HANA schema source system by selecting the adapter added earlier.

7-12-2015 4-42-33 PM.jpg

7-12-2015 4-59-13 PM.jpg

 

  

And check the remote subscriptions from the cockpit.

7-12-2015 5-47-27 PM.jpg
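The same information can also be queried directly from the SQL console via the SDI monitoring view, for example:

-- Remote subscriptions and their current state
SELECT * FROM "SYS"."M_REMOTE_SUBSCRIPTIONS";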

 

 


Data replication and monitoring

 

With my remote sources connected, I will now define which tables I want to replicate and how they should look once loaded.

Make sure your target schema has "CREATE ANY" granted to _SYS_REPO (see the sketch below).
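A sketch of the corresponding grant, assuming a target schema called MYSCHEMA:

-- Allow the repository user to create objects (target tables, virtual tables, procedures) in the target schema
GRANT CREATE ANY ON SCHEMA "MYSCHEMA" TO _SYS_REPO;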

 


From the development workbench go to "Editor", select your package and create a new replication task.

7-12-2015 6-16-35 PM.jpg

7-12-2015 6-18-23 PM.jpg

 

Then fill in the necessary information: target schema, virtual table schema, table prefix and so on.

From a detail perspective, several options are possible:

 

Add/remove/edit table

7-12-2015 7-08-12 PM.jpg

 

Set filter


Define the load behavior in order to capture a certain level of detail about the changes that occur on the table.

7-12-2015 6-56-01 PM.jpg

  

Partition data for better performance

7-12-2015 7-09-15 PM.jpg

Once your preferences are set, save the configuration to activate it.

7-12-2015 7-14-02 PM.jpg

 

From the monitoring side check the task log

7-12-2015 7-23-36 PM.jpg


Once activated, go to the catalog view and check that the procedure has been created, as well as the virtual tables/views and target tables, then invoke the procedure to start the replication.

7-12-2015 7-27-51 PM.jpg

 

I repeated the same procedure for my ERP on HANA schema; once the procedure is invoked, we can see additional tables and triggers created on the remote HANA database for the relevant replicated tables.

7-15-2015 1-38-18 PM.jpg

 

From the monitoring side, I added 4 additional users and we can see the apply count.

7-15-2015 2-42-17 PM.jpg


The replication is now operational. In my next document I'll explain how to configure several data sources and build a single real-time report using input from different tables.

 

Williams.

SAP HANA Data Warehousing Foundation


SDNPIC.jpg

SAP HANA Data Warehousing Foundation 1.0

 

This first release provides packaged tools for large-scale SAP HANA use cases to support data management and distribution within an SAP HANA landscape more efficiently. Further versions will focus on additional tools to support native SAP HANA data warehouse use cases, in particular data lifecycle management.

 

 

 

On this landing page you will find a summary of information to get started with SAP HANA Data Warehousing Foundation 1.0.

Presentations

 

 

Demos

 

SAP HANA Data Warehousing Foundation  Playlist
In this SAP HANA Data Warehousing playlist you will find demos showing the Data Distribution Optimizer (DDO) as well as the Data Lifecycle Manager (DLM)

 

SAP HANA Academy

 

Find out more about the Data Temperature integration with HANA including Data Lifecycle Management on the DWF Youtube channel of SAP HANA Academy

HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission critical tasks are implemented by a countless number of operational decisions, either manually or automated by business applications. These days - an organization's agility in decision-making becomes a critical need to keep up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development |Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA Rules Framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png    

Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP Fraud Management

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

Managing cold data in SAP HANA database memory


If you have already had a chance to check out the new SAP Press e-bite Data Aging for SAP Business Suite on SAP HANA (New SAP Press book: Data Aging for SAP Business Suite on SAP HANA), you may start wondering how cold data is managed in SAP HANA memory. After all, access to historical data is sometimes justified.

 

So once you have the cold data moved away from the main memory you will notice that it is re-loaded whenever needed. But for how long will the cold data stay in memory so that the aging still makes sense? This blog explains how cold data is managed in memory and describes different strategies for keeping it all under control.

 

HANA database takes care of loading and unloading data to and from memory automatically with the aim to keep all relevant information in memory. In most cases only the necessary columns (i.e. columns that are actually used) are loaded into memory on the first request and kept there for later use. For example after system restart only a few columns might be initially loaded into memory and only for the hot partition as shown in the figure below.

 

m_cs_all_columns_after_restart.PNG

 

You can monitor the history of loads and unloads via the system views M_CS_LOADS and M_CS_UNLOADS, or in the Eclipse IDE under Administration • Performance • Load • Column Unloads.

 

SELECT * FROM "SYS"."M_CS_LOADS" WHERE table_name = 'BKPF';

SELECT * FROM "SYS"."M_CS_UNLOADS" WHERE table_name = 'BKPF';

 

1. Loading data into memory

 

The following reasons may cause a column table to be loaded into HANA memory:

  • First time access to a table column (for example when executing SQL statement)
  • Explicit loads triggered with SQL command LOAD
  • Reload after startup for tables and/or columns defined with PRELOAD statement. You can check the status in "SYS"."TABLES" and "SYS"."TABLE_COLUMNS".
  • Pre-warming – reload after startup based on columns loaded before the system shutdown (configurable option – see SAP OSS Note 2127458).

As of SPS 09, load and preload do not consider cold partitions. (Explicit load and unload statements are sketched below for reference.)
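Explicit loads and unloads can be triggered and verified with statements like these, using the BKPF table from the article's example:

LOAD BKPF ALL;    -- load all columns of the table into memory
UNLOAD BKPF;      -- unload the table from memory

-- Check the current load state of the table
SELECT TABLE_NAME, LOADED FROM "SYS"."M_CS_TABLES" WHERE TABLE_NAME = 'BKPF';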


2. Paged partitions

 

Loads of data into memory happen column-wise and they cannot be restricted to specific partitions. However, if paged attributes are used with partitioning, loads will happen in a more granular way i.e. page-wise. This means that only data that is in the area in which you are searching will be loaded.


In case partitions were not created with paged attribute (see e-bite: Data Aging for SAP Business Suite on SAP HANA) you can use report RDAAG_PARTITIONING_MIGRATION to change the loading behavior (see figure below). This can be done for a single table, Data Aging Object or the whole Data Aging Group. For more information check SAP OSS Note 1996342.


migration_report.PNG

3. Un-loading data from memory

 

Unloads happen based on a "least recently used" (LRU) approach. In case of memory shortage, columns that have not been used for the longest period of time are unloaded first. You can also influence this behavior with the UNLOAD PRIORITY setting. However, unload priority can only be set for a whole table; it is not possible to distinguish between cold and hot partitions. Values range from 0 to 9, where 0 means not unloadable and 9 means earliest unload.


SELECT UNLOAD_PRIORITY FROM "SYS"."TABLES" WHERE TABLE_NAME = 'BKPF'
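The priority itself can be changed with an ALTER TABLE statement, for example:

-- Make the whole table a more likely unload candidate (applies to all partitions)
ALTER TABLE "SAPERP"."BKPF" UNLOAD PRIORITY 7;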


In M_CS_UNLOADS (REASON column) you can see a detailed reason explaining why the unload happened:

  • LOW MEMORY – happens automatically when memory becomes scarce (see SAP OSS Note 1993128).
  • EXPLICIT – un-load triggered with SQL command UNLOAD.
  • UNUSED RESOURCE - Automatic unloads when a column exceeds the configured unused retention period.

 

Too many unloads may indicate that memory requirements are exceeded. This will affect performance since tables need to be fetched again from disk on the next access request. Time spent on loading table columns during SQL statement preparation can be found in TOTAL_TABLE_LOAD_TIME_DURING_PREPARATION column in M_SQL_PLAN_CACHE system view. You can find more information on how to handle column store unloads in SAP OSS Note 1977207.


Even with enough resources in normal circumstances, it may happen that complex queries create high-volume intermediate results and thus lead to a memory shortage. Once the memory allocation limit has been reached on a HANA host, the memory manager will start unloading data (caches, buffers, columns) based on the "least recently used" approach. In such cases it would be desirable to have the cold partitions removed first, since by business definition they have the lowest priority. The following two sections present such a mechanism.

 

4. Unloading paged memory

 

The paged attribute for partitioning allows separating data of hot and cold partitions in such a way that cold data is not loaded into memory unnecessarily. However, once data (columns) has been loaded into memory from cold partitions, it will reside there until memory becomes scarce; only then is unloading triggered according to the LRU approach. As already mentioned, it is not possible to set unload priorities per partition.


However, for cold partitions created with the paged attribute it is possible to pre-empt this mechanism by setting limits for the paged memory pool. There are two configuration parameters to consider (see also SAP Note 2111649):

  • PAGE_LOADABLE_COLUMNS_MIN_SIZE – in case of low memory, paged resources are unloaded first, before any other resources, as long as the memory they occupy is bigger than the value of this parameter. Only once their size is below this limit is the standard LRU mechanism triggered.
  • PAGE_LOADABLE_COLUMNS_LIMIT – if paged resources exceed this limit, some of them are unloaded according to the LRU approach until the total size of paged memory is back at the minimum level.

Values are specified in MB; the default value is 1047527424 MB, which equals 999 TB. The current size of the paged memory can be found in the system view M_MEMORY_OBJECT_DISPOSITIONS, as shown in the figure below.
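If you want to script the change instead of using the configuration editor, the statement looks roughly as follows; note that the ini file and section used here are my assumption, so double-check them against SAP Note 2111649 and the configuration screenshot below before applying.

-- Assumption: the parameters live in indexserver.ini, section 'memoryobjects' (verify for your release)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('memoryobjects', 'page_loadable_columns_min_size') = '100',
      ('memoryobjects', 'page_loadable_columns_limit') = '150'
  WITH RECONFIGURE;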

page_off_sizes_after_select_all.PNG


In the example below the parameters were set to small values for demonstration purposes (Eclipse IDE go to ADMINISTRATION • CONFIGURATION) as shown in figure below.


page_config_on.PNG


At this point, executing the statement SELECT * FROM bkpf WHERE _dataaging <> '00000000' will cause the paged memory to grow above the allowed limits. Paged memory will then be unloaded until the minimum limit is reached, as shown in the figure below.

page_on_sizes_after_select_all.PNG

5. Data retention


In order to manage cold partitions in memory more selectively, you can use the auto-unload feature of SAP HANA. It automatically unloads tables or partitions from memory after a defined unused retention period. Configuring a retention period for unloads typically increases the risk of unnecessary unloads and loads; however, retention periods (unlike priorities) can be set at partition level, so their use for managing cold storage might be justifiable.


In the figure below all partitions (including cold) are loaded partially. Almost all columns from recently accessed partitions are loaded into memory. This includes hot and cold (2015) partitions. Only some key columns from the 2013 and 2014 cold partitions are loaded.

retention_partially_loaded.PNG


To see the current table setup and memory usage, run the following statement in the SQL console of the Eclipse IDE, as shown in the figure below:

retention_not_set.PNG


To set retention periods, execute the SQL ALTER TABLE statement with the UNUSED RETENTION PERIOD option. Retention periods are given in seconds, either for the whole table or per partition, separated by commas. If multiple values are specified, their number must match the number of table partitions. The first partition, which represents hot storage, has its retention period set to 0, meaning that the global default value from the system configuration is used for this partition (see below). For the cold partitions let's use a short time frame in the example scenario (5 min.):


ALTER TABLE "SAPERP"."BKPF" WITH PARAMETERS('UNUSED_RETENTION_PERIOD'='0, 300, 300, 300');


Retention periods should now be visible in the table setup – see figure below.

retention_set.PNG


The next step is to switch on the auto-unload function of the HANA database. In the Eclipse IDE go to ADMINISTRATION • CONFIGURATION • GLOBAL.INI • MEMORYOBJECTS and set the configuration parameters as follows (see the figure below and the SQL sketch after the list):

  • UNUSED_RETENTION_PERIOD – number of seconds after which an unused object can be unloaded. The default value is 0, which means that auto-unload is switched off for all tables. Set the default to 31536000 seconds (1 year).
  • UNUSED_RETENTION_PERIOD_CHECK_INTERVAL – check frequency for objects (tables and partitions) exceeding the retention time. The default value is 7200 (every 2 hours). In the example let's use a short time frame of 600 seconds.
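An equivalent SQL sketch for these two parameters, using the values from this example:

-- Switch on auto-unload globally (1 year) and shorten the check interval for the demo
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memoryobjects', 'unused_retention_period') = '31536000',
      ('memoryobjects', 'unused_retention_period_check_interval') = '600'
  WITH RECONFIGURE;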

 

retention_parameters.PNG


After the check interval has passed you can check the memory status again. The total memory used is now much lower for the fourth partition (year 2015), for which all the columns/pages were unloaded, as shown in the figure below.

retention_unloaded.PNG


In addition in the column view you can see that only columns for the hot partition remain in the memory (figure below).

retention_unloaded_columns.PNG


In the M_CS_UNLOADS view you will notice that new events were recorded with reason code UNUSED RESOURCE and only the columns from cold partitions of BKPF were moved away from memory (see figure below). Hot partition and other tables were not affected.

retention_unloads.PNG


Auto-unload lets you manage the cold areas of memory more actively, but it also has a side effect: all the columns/pages from the cold partitions are removed from memory. After that, even when accessing documents from hot storage in SAP ERP, the key columns/pages from the cold partitions may be reloaded from disk (see figure below). This can happen, for example, when trying to display a "hot" document in FB03 without specifying the fiscal year; in this case SAP ERP performs an initial search on the full document key without restricting it to hot storage (see note 2053698). This may have a negative impact on such queries the first time they are executed after an auto-unload.

retention_reload_of_key.PNG

6. SAP HANA Scale-out and table distribution

 

If cold storage has grown significantly over the years, it might be advisable to scale out the current HANA system to more than one host. The main purpose of scaling out a HANA system is to reduce delta merge processing time, balance the database load and achieve a higher level of parallelization for query execution. However, one of the hosts could also be dedicated to cold storage only. When tables are partitioned over several hosts, the unloading mechanism is managed per host, so that usage of cold data does not impact the performance of queries executed against hot data. Moreover, the cold storage node can be smaller in terms of allocated memory.


You can see the current table and partition distribution in the TABLE DISTRIBUTION editor. You can open it by right-clicking on CATALOG or SCHEMA in the SYSTEMS view in Eclipse IDE and choosing SHOW TABLE DISTRIBUTION – see figure below.

table_distribution.PNG


Before distributing partitions in the SAP HANA database, you first need to switch off the paged attribute property; the status needs to be reset to deactivated. You can do this with report RDAAG_PARTITIONING_MIGRATION – see section 2 (Paged partitions) for more information. SAP HANA provides several automatic redistribution and optimization methods that significantly improve performance in a multi-node HANA cluster. To manually move cold partitions to a dedicated host, run the following SQL statement:


ALTER TABLE "SAPERP"."BKPF" MOVE PARTITION <part_id> TO '<host:port>' [PHYSICAL];


Where <part_id> is the partition number and <host:port> is the location where the partition is to be moved. The partition will be moved if the target host has sufficient memory.

When moving tables without the PHYSICAL addition, only the link is moved to the other host, not the whole table; the physical table is moved when the merge process is triggered. You can see details on the current logical and physical locations in the following system views:

  • M_TABLE_LOCATIONS – shows the logical location of a table/partition (1st figure below).
  • M_TABLE_PERSISTENCE_LOCATIONS – shows the physical data locations (persistence parts). This includes the items from M_TABLE_LOCATIONS but also nodes that still contain some persistence of the table, which happens when a table has been moved but not yet merged (2nd figure below).

 

m_table_locations.PNG

 

m_table_persistence_location.PNG

Troubleshooting ABAP Dumps in relation to SAP HANA


Purpose

 

The purpose of this document is to instruct SAP customers on how to analyse ABAP dumps.

 

 

Overview

 

How to troubleshoot ABAP Dumps

 

 

Troubleshooting

 

When looking at an ABAP system you can sometimes come across runtime errors in Transaction ST22:

 

Wiki ST22.PNG

 

 

Clicking into the "Today" or "Yesterday" tab will bring up all the ABAP Runtime errors you have encountered in the past 2 days.

 

You can also filter the dates for a particular dump by using the filter feature:

 

wiki ST22 filter.PNG

 

 

Here are some examples of runtime errors you might see in ST22:

 

wiki 2.PNG

wiki5.PNG

wiki 3.PNG

 

 

 

So from looking at these dumps, you can see

1: Category

2: Runtime Errors

3: ABAP Program

4: Application Component

5: Date & Time.

 

 

The ST22 dumps do not really give you much information here, so more information will be needed.

 

For more information you will then look into the Dev_W files in the transaction ST11

 

 

 

ST11 allows you to look further into the Dev_w files relating to the dumps in ST22:

 

wiki 4.PNG

 

 

To find the dev_w work process file that corresponds to the dump, you can check this in ST22.

 

Go to ST22 and click on the Runtime Errors for "Today", "Yesterday" or a filter. This will bring up the specific dump you wish to analyse.

 

Here you will see 11 columns like so:

 

wiki 5.PNG

 

 

Here you can see the columns I have mentioned. The Work Process Index number you need is in the column named WP Index.

 

 

Once you find the dev_w index number you can then go to ST11 and find further information:

 

In the ST11 Dev_w files you have to match the time of the dump in ST22 with the recorded times in the Dev_w process files.

 

 

 

If there is no usable information in the dev_w files, the next step is to analyse the issue from the database side.

 

 

To analyse from the Database side:

 

1: Open HANA Studio in SAP HANA Administration Console View

 

wiki 1.PNG

 

 

 

2: Check the diagnosis trace files in accordance with the time stamp of the dump you saw previously in ST22. To do this we have to go to the Diagnosis tab in HANA Studio:

 

wiki2.PNG

 

 

 

3: Check the time stamp from the ST22 dump (Date and Time), and then match this accordingly with the time in either the Indexserver.trc or nameserver.trc.

 

wiki 3.PNG

 

Search for the corresponding time stamp mentioned above i.e. 18/11/2015 @ 10:55:43.

 

Searching the nameserver trace files can give a good indication of whether your ST22 dump is related to network issues; you may see errors such as:

 

 

TrexNet          Channel.cpp(00339) : ERROR: reading from channel 151 <127.0.0.1:<host>> failed with timeout error; timeout=10000 ms elapsed
[73973]{-1}[-1/-1] 2015-01-28 01:58:55.208048 e TrexNetBuffer    BufferedIO.cpp(01092) : channel 151 from <127.0.0.1:<host>>: read from channel failed; resetting buffer


 

If you do find some errors similar to the above, firstly check which host the error is pointing to and check whether or not this service was available at the time of the dump.

 

 

If this does not yield any useful information, the next step is to ask someone from your network team to look into this. Checking /var/log/messages is always a great place to start.

 

 

When searching through the indexserver.trc file, you may notice some irregularities recorded there. The next step is to search for this error on the SAP Service Marketplace for a known KBA or Note (learn how to search more effectively: 2081285 - How to enter good search terms to an SAP search?).

 

Related Documents

 

Did you know? You can find details of common issues, fixes, patches and much more by visiting SAP moderated forums on http://scn.sap.com/docs/DOC-18971

Documentation regarding HANA installation, upgrade, administration & development is available at http://help.sap.com/hana_appliance

SAP HANA Troubleshooting WIKI: http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing
SAP HANA Discussion: http://scn.sap.com/community/hana-in-memory/
Learn how to search more effectively: 2081285 - How to enter good search terms to an SAP search?
__________________________________________________________________________________________________________

SAP HANA Authorisation Troubleshooting


Every now and again I receive queries regarding SAP HANA authorisation issues, so I thought it might be useful to create a troubleshooting walkthrough.

 

This document deals with issues regarding analytic privileges in SAP HANA Studio.

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the user ID? And most importantly, how do you find out what the missing analytic privilege is?

 

This is the perfect opportunity to run an authorisation trace via the SQL console in HANA Studio.

The instructions below walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the failing statement again.

 

3) When the execution finishes, turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now you have turned the trace on, reproduced the issue and turned the trace off again.

 

You should now see a new indexserver0000000trc file created in the Diagnosis Files Tab in HANA Studio

Capture.PNG

 

Once you open the trace file, scroll to the end and you should see something similar to this:

e cePlanExec       cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext     TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine       cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor         PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor         PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor         PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor         PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor         PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor         PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor         PlanExecutor.cpp(00690) : sizes a 0
e Executor         PlanExecutor.cpp(00690) : -- end executor returns
e Executor         PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

We can see from the trace file that the user trying to query the view is called TABLEAU, also represented by the user ID (123456).

 

So by looking at the lines:

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the analytic privileges needed to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is object 78787.

 

Now we have to find out who owns object 78787. We can find this out by querying the following:

 

select * from objects where object_oid = '<oid>';

Select * from objects where object_oid = '78787'

 

Once you have found the owner of this object, you can ask the owner to grant the TABLEAU user the necessary privileges to query the object; a sketch of such a grant is shown below.
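For a repository (activated) analytic privilege, the grant typically goes through the _SYS_REPO procedure shown below; the privilege name is a placeholder, since the trace identifies the missing privilege only by its object ID.

-- Sketch: grant the activated analytic privilege to the end user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"('"<package>/<analytic privilege name>"', 'TABLEAU');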

 

 

Another option for analyzing privilege issues, introduced with SPS 9, is the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael


Getting the Counters right with stacked Calculation Views


Data modeling in SAP HANA gives the end user a great amount of flexibility. One of the key modeling capabilities of SAP HANA is the computation of user-defined calculations, and one such calculation that the end user can define in Calculation Views/Analytic Views is the 'Counter'.

 

Let us now understand what happens under the hood when a stack of Calculation Views is created in a project.

 

Consider a simple 'Product Details' table which comprises product information along with sales information, as shown below:

 

PRODUCT     STORE   CUSTOMER   QUANTITY   REVENUE   UNIT
Budweiser   EDEKA   IBM        360        720       BOTTL
Coke        EDEKA   ALDI       260        390       BOTTL
Coke        EBAY    IBM        200        300       BOTTL
Coke        EDEKA   IBM        250        375       BOTTL
Headset     METRO   ALDI       2          120       PACKG
Headset     EBAY    IBM        10         600       PACKG
ipad        EBAY    ALDI       10         6000      PACKG
ipad        METRO   ALDI       10         6000      PACKG
ipad        METRO   IBM        10         6000      PACKG

Above table provides product details along with the store in which the product is available.

 

Let us now create a Graphical Calculation View that returns the distinct number of stores for each product.

 

Step 1: As a first step, create a Graphical Calculation View of type CUBE and add a projection node to it. Now add the table created above as the data source of the projection node and connect its result to the default aggregation node of the view.

 

Step 2: Now in the Aggregation Node create a new 'Counter' on STORE column to get the distinct number of stores.

 

store_cnt.png

Step 3: Set the QUANTITY and REVENUE columns as measures with aggregation type SUM, keep the counter created above as a measure with aggregation type 'Count Distinct', and set the remaining columns as attributes. Then save and activate the view.

Once the activation is successful, execute the SQL query below on top of the calculation view to get the distinct number of stores for each product.

 

SELECT "PRODUCT", SUM("QUANTITY") AS "QUANTITY", SUM("REVENUE") AS "REVENUE", SUM("StoreCount") AS "StoreCount"

FROM "_SYS_BIC"."hanae2e.poorna/CV_DISTINCT_COUNT" WHERE "CUSTOMER" IN ('IBM','ALDI') GROUP BY "PRODUCT";


query1.png


The result set above gives us the distinct number of stores for each product.

Up to here we do not encounter any surprises in the result of the 'Counter'.

 

Let us now proceed and use the above created view as data source in another calculation view.

 

Step 4: Create another Graphical Calculation View of type CUBE, add the Calculation View created above as the data source of its aggregation node and take over the semantics from the underlying calculation view as they are. Then save and activate the view.

 

cv_transparent.png

 

Step 5: Now query the new view in the same way we queried the base view earlier.

 

SELECT "PRODUCT", SUM("QUANTITY") AS "QUANTITY", SUM("REVENUE") AS "REVENUE", SUM("StoreCount") AS "StoreCount"

FROM "_SYS_BIC"."hanae2e.poorna/CV_TRANSP_FILTER_UNSET" WHERE "CUSTOMER" IN ('IBM','ALDI') GROUP BY "PRODUCT";


query2.png

Here is the surprise in the result set: StoreCount, the 'Counter' created in the base Calculation View, returns a different (wrong) result (highlighted) when queried through the top Calculation View.


To understand the reason behind the wrong result, we have to look at the execution plan of the query we just executed:


plan_viz.png



CvTransparentFilter in the representation above is the top Calculation View, which has CvCountDistinct as its data source.


In the request query on the top Calculation View we are querying the PRODUCT and StoreCount columns, with a filter applied on the CUSTOMER column.


Thereby the query on the top view sends a request to the underlying view that includes the CUSTOMER column in the requested attribute list. This ends up grouping the StoreCount value by (PRODUCT, CUSTOMER), whereas it should be grouped only by PRODUCT.

As a consequence we get a wrong result when querying the top view; the effect can be reproduced in plain SQL as sketched below.
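To make the effect tangible, the two groupings can be compared directly on the base table; the table name PRODUCT_DETAILS is a placeholder for whatever the source table is actually called.

-- Intended semantics: distinct stores per product
SELECT "PRODUCT", COUNT(DISTINCT "STORE") AS "StoreCount"
FROM "PRODUCT_DETAILS" WHERE "CUSTOMER" IN ('IBM','ALDI') GROUP BY "PRODUCT";

-- What effectively happens in the stacked view without 'Transparent Filter':
-- distinct stores per (product, customer), summed up afterwards
SELECT "PRODUCT", SUM("StoreCount") AS "StoreCount"
FROM ( SELECT "PRODUCT", "CUSTOMER", COUNT(DISTINCT "STORE") AS "StoreCount"
       FROM "PRODUCT_DETAILS" WHERE "CUSTOMER" IN ('IBM','ALDI')
       GROUP BY "PRODUCT", "CUSTOMER" ) AS t
GROUP BY "PRODUCT";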


Step 6: To overcome this surprise in the result set, there is a property called 'Transparent Filter'. When it is flagged as true for the CUSTOMER column in the Aggregation Node of the top view (CvTransparentFilter) and also in the Aggregation Node of the underlying view (CvCountDistinct), it solves the problem by pushing the filter operation down to the lower projection node and removing CUSTOMER as a view attribute from the Aggregation Nodes. This in turn makes the top view work on a result set grouped only by the PRODUCT column, irrespective of the filter column in the requested query. The execution plan below gives a better picture of this:


final_query.png

 

Step 7: Below is the 'Transparent Filter' property that needs to be flagged to get the correct counter value in the final result set when working with stacked Calculation Views.


TF_FLAG.png


Step 8: After setting 'Transparent Filter' to true in the Aggregation Node of both Calculation Views, the query on the top view fetches the correct result for the counter column.


SELECT "PRODUCT", SUM("QUANTITY") AS "QUANTITY", SUM("REVENUE") AS "REVENUE", SUM("StoreCount") AS "StoreCount" FROM "_SYS_BIC"."hanae2e.poorna/CV_TRANSPARENT_FILTER" WHERE "CUSTOMER" IN ('IBM','ALDI') GROUP BY "PRODUCT";


correct_res.png



The reference for the 'Transparent Filter' flag is available in the SAP HANA Modeling Guide under the section 'Create Counters'.


I hope this information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you





Capturing and Using Multi-valued Attributes in HANA Table


SAP HANA is well known for its efficient architecture along with its hardware and software optimizations. The SAP HANA database allows the developer/user to specify whether a table is to be stored column-wise or row-wise.

 

As an extension to the existing features of column tables in HANA, the user can now define columns that store multiple values, i.e. an array of values.

This document helps in understanding how to define and work with multi-valued attributes/columns.

 

To understand this, let us consider a simple example of storing the personal details of an employee in a single table.

 

Step 1: Create a column table that stores the employee ID, first name, last name and phone details. The first 3 columns hold a single value for each employee, whereas the phone details can be multi-valued: each employee can have more than one phone number. To allow the table structure to store multiple phone numbers per employee in the same column, we must define the 'Phone' column as a multi-valued (array) column, as shown below:

 

CREATE COLUMN TABLE Employee (
      ID INT PRIMARY KEY,
      Firstname VARCHAR(20),
      Lastname VARCHAR(20),
      Phone VARCHAR(15) ARRAY -- WITHOUT DUPLICATES
)

 

The 'WITHOUT DUPLICATES' condition implies that storing the same phone number more than once in the array is not allowed.

 

 

Step 2 : Let us now insert the details of an employee into the above created column table:

 

INSERT INTO Employee (ID, Firstname, Lastname, Phone)
VALUES (1, 'Hans', 'Peters', ARRAY('111-546-2758', '435-756-9847', '662-758-9283'))

 

The above insert statement stores single values in the first three columns of the table and an array of values in the last column.

 

Step 3 : Alternatively, we can insert values into the array by selecting data from an already existing column table using a nested select query.

 

INSERT INTO Employee (ID, Firstname, Lastname, Phone)
VALUES (2, 'Harry', 'Potter', ARRAY(SELECT '1245-1223-223' FROM dummy))


Step 4 : We can now induce a temporary phone number by performing a concatenation operation between the array column and an array value:


SELECT ID, ARRAY('123-456-4242') || Phone FROM Employee


Step 5 : Let us now retrieve selected phone details from the existing array of phone numbers.


SELECT ID, TRIM_ARRAY(Phone, 2) FROM Employee


The TRIM_ARRAY function removes the specified number of values from the end of the array. Here it trims the last two phone numbers in the array and thus returns only the first phone number from the Phone column for all employees.


Step 6 : When the user wishes to access a specific phone number of an employee, we can get that member value of the array by specifying its ordinal position as shown in the SQL below:


SELECT ID, MEMBER_AT(Phone, 1) FROM Employee


This gets the first phone number in the array of each employee; for employees without phone numbers a null value is returned.


Step 7: Conditional retrieval of a value from the array can be done by using a CASE statement as shown below:


SELECT
      CASE WHEN CARDINALITY(Phone) >= 3 THEN MEMBER_AT(Phone, 1) ELSE NULL END
FROM Employee;

 

Based on the cardinality of the array column, either the member value is retrieved or a null value is returned.
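
CARDINALITY can also be used on its own to inspect the stored data. As a small sketch (the alias PHONE_COUNT is just an illustrative name), the following query lists how many phone numbers are stored for each employee:


SELECT ID, CARDINALITY(Phone) AS PHONE_COUNT FROM Employee;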


Step 8 : Let us now rotate the phone numbers in the array by 90 degrees, so that the values in the array are split into row values of a table column, by using the UNNEST function as shown below:


SELECT DISTINCT Phones.Number FROM UNNEST(Employee.Phone) AS Phones (Number);

 

unnest.png

The above set of operations is supported for both numeric and non-numeric array columns.
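
The same functions work for numeric arrays as well. The following is only a sketch using a hypothetical Sales table (the table, column names and values are assumptions, not part of the original example):


CREATE COLUMN TABLE Sales (
      ID INT PRIMARY KEY,
      MonthlyRevenue INTEGER ARRAY
);

INSERT INTO Sales (ID, MonthlyRevenue) VALUES (1, ARRAY(100, 200, 300));

SELECT ID, MEMBER_AT(MonthlyRevenue, 2) FROM Sales;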


With that, we are done with the creation, storage and usage of multi-valued attributes in a HANA table.


Hope the provided information is useful. Any suggestions and feedback for improvement will be much appreciated.

 

Thank you




Historical Data in SAP DB Control Center


The Historical Data feature in SAP DB Control Center (DCC) collects and stores system health status information. This feature, however, is not automatically enabled. This document will guide you through setting up data capture and different options for data visualization.

 

Note: This document was originally created by a former colleague of mine, Yuki Ji, in support of a customer engagement initiative regarding the SAP DB Control Center (DCC) product.  To preserve this knowledge, the document was migrated to this space.

 

The information in this document applies to DCC SP10 or higher.

 

The purge period of historical data collection needs to be configured according to the capacity of the system hosting DCC and the number of systems being monitored. The relation between the number of monitored systems and the space needed can be found in the DCC documentation. Otherwise, enabling historical data and configuring purging requires only two SQL statements to be run.

 

Enabling Historical Data Capture

In your DCC system execute the following statements with a user who has the DBCCAdmin role, such as DCC_ADM. The statements are to enable/disable historical data collection, and to set the data purge period in minutes (21600 minutes = 15 days, 4320 = 3 days, 1440 = 1 day).

 

To enable historical data collection:

upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int")
values('apca.historical.enabled', 1) with primary key;
upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int")
values('apca.historical.purge.max_age', 21600) with primary key;

 

To disable historical data collection:

upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int")
values('apca.historical.enabled', 0) with primary key;

 

For more information on configuration or on the schema of the stored data, refer to section 3.9.4 of the DB Control Center 4 Guide documentation or Configure Storage of Historical Data.

 

Data Visualization

 

The data that is collected is stored in the table named sap.hana.dbcc.data::APCA.Historical under the schema SAP_HANA_DBCC and can be viewed by running this SQL statement:

 

SELECT TOP 1000 * FROM "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical" [ORDER BY "timestamp" DESC];

 

The column resourceId tracks which registered system the information is for. The mapping between resourceId and system name can be viewed by running this SQL statement:

 

SELECT [TOP 1000] "RES_ID","RES_NAME" FROM SAP_HANA_DBCC"."sap.hana.dbcc.data::RESOURCES"

 

Reminder: historical data is collected about every 2 minutes, so over the course of a day a single triggered alert can be checked and noted up to about 720 times.
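
As a quick sanity check on that collection frequency, a query along the following lines (a sketch only; resourceId 101 is just an example value) counts the rows captured per day for one system:


SELECT TO_VARCHAR("timestamp", 'YYYY-MM-DD') AS "day", COUNT(*) AS "collections"
FROM "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical"
WHERE "resourceId" = 101
GROUP BY TO_VARCHAR("timestamp", 'YYYY-MM-DD')
ORDER BY "day" DESC;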

 

Now that data is collected, there are several options for how to manipulate and visualize it. In this document we will cover some of the options available through SAP tools: a custom XS application, SAP Lumira, and SAP Crystal Reports.

 

Custom XS Application

 

To create a custom XS application, you can use SAP HANA Studio with the SAP HANA Development perspective, or the SAP HANA WebIDE.

 

One way of taking the data from the table and displaying it is to:

 

  1. Create a view to simplify the desired data.

    Note: To update a view for HANA you are required to drop it, then recreate that view. Filtering by resourceId returns only the results for a specific system (see table sap.hana.dbcc.data::RESOURCES to find a system's resourceId).

    --drop view SAP_HANA_DBCC.apca_view
    create view SAP_HANA_DBCC.apca_view(
    "day",
    "alertHigh",
    "alertMedium",
    "alertLow"
    )
    as select
    TO_VARCHAR("timestamp", 'YYMMDD') as "day",
    sum("alertHigh"),
    sum("alertMedium"),
    sum("alertLow")
    from "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical"
    where "timestamp" > ADD_DAYS(CURRENT_TIMESTAMP, -31) --and "resourceId" = 101
    group by TO_VARCHAR("timestamp", 'YYMMDD')
    order by "day" desc;
  2. In the XS application use OData calls to access data from the system by creating an .xsodata file.

    service namespace "data_visualization.services" {    "SAP_HANA_DBCC"."APCA_VIEW" as "View" key generate local "key";
    }
  3. Use existing development frameworks for UI and for graphing to display the data.


viz2_xsDisplay.PNG

 

The above graph displays the SUM of alerts noted by DCC per day for the past 31 days.

 

SAP Lumira

 

In SAP Lumira, generating a report starts with defining a dataset then manipulating the measures and dimensions to display the information desired about the registered system(s).

 

The following is an example query to create a dataset, including the system name, that can be used to create High/Medium/Low priority Alert related graphs or reports:

 

select "resourceId","RES_NAME", "timestamp", "alertHigh", "alertMedium","alertLow" from    "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical",    "SAP_HANA_DBCC"."sap.hana.dbcc.data::RESOURCES"
where "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical"."resourceId"= "SAP_HANA_DBCC"."sap.hana.dbcc.data::RESOURCES"."RES_ID"

 

The same example query without the system name:

 

select "resourceId”, "timestamp", "alertHigh", "alertMedium","alertLow" from    "SAP_HANA_DBCC"."sap.hana.dbcc.data::APCA.Historical”

 

Creating the dataset by running a SQL query on a database system:


viz2_lumiradataset.PNG

 

You can further manipulate and prepare your data, or begin creating your graphs under Visualize. In Lumira, simply drag and drop your Measures and Dimensions from the far left column and begin defining your graph.


viz2_lumiraCreateVisualization.PNG

 

Some examples of graphs you could create:

 

viz2_lumiraGraph1.PNG

The SUM of all High, Medium, or Low priority Alerts triggered and noted over the course of a week. Recall, DCC collects data on a system about every two minutes.


viz2_lumiraGraph2.PNG

In this graph we can see which days of the week had the most alerts per system and per alert.

 

Once the graphs have been created SAP Lumira can be used to generate infographics or other reports.

 

SAP Crystal Reports

 

As with SAP Lumira, for SAP Crystal Reports you will need to define a data source before you can manipulate the data as you wish.

 

To define your HANA system as a data source you will need to have the HANA Client installed on the computer that is running Crystal Reports. Once it is installed you can then use ODBC or JDBC to create a connection to the DCC enabled system and pull the data from the tables sap.hana.dbcc.data::APCA.Historical and sap.hana.dbcc.data::Resources.

 

With Crystal Reports you can then create graphs and generate detailed reports and report templates of your system landscape health.

 

Below, both graphs are created using a dataset for resource 101 (system YJI) over the course of 31 days.


viz2_crGraphs.PNG

 

Follow-up

The use and visualization of collected Historical Data can be performed using various tools - XS applications, SAP Lumira, and SAP Crystal Reports to name a few. As you begin looking at the historical data and how you will be using it, please consider the following:

  • Is there other system information you would like to see captured?
  • What reports would you create with the currently available information?
  • Ideally, what reports would you create with additional information?

How to update SAP HANA database and required components


You can use this how-to document when you want to install a new version of the HANA platform or before an SUM package upgrade.


1. Before updating the HANA database, you should determine which components exist and will be updated in the HANA platform. You can check this via SAP HANA Studio (in the Overview tab) or the hdblcm tool. As a rule, the database and the other components should be kept at the same version.

 

1.png

hana:/hana/shared/HD1/hdblcm # ./hdblcm

 

SAP HANA Lifecycle Management - SAP HANA 1.00.102.00.1442292917

***************************************************************

 

Choose an action to perform

 

Index | Action to be performed     | Description

--------------------------------------------------------------------------------------

  1     | add_hosts                  | Add Additional Hosts to the SAP HANA System

  2     | configure_internal_network | Configure Inter-Service Communication

  3     | configure_sld              | Configure System Landscape Directory Registration

  4     | print_component_list       | Print Component List

  5     | rename_system              | Rename the SAP HANA System

  6     | uninstall                  | Uninstall SAP HANA Components

  7     | unregister_system          | Unregister the SAP HANA System

  8     | update_component_list      | Update Component List

  9     | update_components          | Install or Update Additional Components

  10    | update_host                | Update the SAP HANA Instance Host integration

  11    | exit                       | Exit (do nothing)

 

Enter selected action index [11]: 8

Component list will be updated with the detected components on the system.

Do you want to continue? (y/n): y

 

Updating Component List...

 

Log file written to '/var/tmp/hdb_HD1_hdblcm_update_component_list_2015-11-25_16.27.38/hdblcm.log' on host 'hana'.

 

In the specified log file, you can see the installed products and versions. Before starting the update, do not forget to back up your HANA system.
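
For example, a simple file-based data backup can be triggered from the SQL console beforehand; the backup prefix shown here is only an example, so adjust it to your own naming and backup strategy:


BACKUP DATA USING FILE ('PRE_UPGRADE_HD1');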

 

2. Download the required software from the SAP Support Portal


https://support.sap.com/ --> Download Software --> Support Packages and Patches --> A-Z --> H (choose letter H) -->SAP HANA PLATFORM EDITION --> SAP HANA PLATFORM EDIT. 1.0 --> Entry by Component

2.png

In this case, download the HANA database, HANA client, HANA AFL, HANA Table Redistribution AFL, and HANA studio with the SAP Download Manager. In a distributed landscape (in this case, the SAP application server runs on Wintel), do not forget to download the HANA client for both platforms.

 

Copy and extract all downloaded packages to the SAP HANA server.


3. The HANA database can be updated using SAP HANA Platform Lifecycle Management or OS-level commands.


SAP HANA Platform Lifecycle Management is running on Fiori:

3.png


In this case, we are updating the HANA database with the command-line option. For the HANA database update sequence, please follow this order:


1. Update HANA AFL (Application Function Library)
2. Update HANA database
3. Update HANA client
4. Update HANA Table Redistribution AFL
5. Update HANA studio


4.png

To update the HANA AFL, navigate to the respective platform folder and run hdbinst as user root.
5.png


Following the same flow, install all remaining components:


To update the HANA database server, change to the folder HDB_SERVER_LINUX_X86_64 and run the script hdbupd as user root. You will need the password of the database user SYSTEM.


To update the HANA client, change to the folder HDB_CLIENT_LINUX_X86_64 and run one of the scripts (hdbsetup or hdbinst) as user root.


To update the HANA Table Redistribution AFL, change to the folder HDB_TRD_AFL_LINUX_X86_64 and run the script hdbinst as user root.


To update HANA studio, change to the folder HDB_STUDIO_LINUX_X86_64 and run one of the scripts (hdbsetup or hdbinst) as user root.


In the log file of each installation package, you will see the exact package installation path.


4.  Checking SAP HANA database version after update:


4.1. SAP HANA database version check on the command line as user sidadm:


#HDB version

6.png


4.2. SAP HANA version check with HANA Platform Lifecycle Management (or Hana studio):

7.png

 

SAP HANA Data Warehousing Foundation


SDNPIC.jpg

SAP HANA Data Warehousing Foundation 1.0

 

This first release provides packaged tools for large-scale SAP HANA use cases to support data management and distribution within an SAP HANA landscape more efficiently. Further versions will focus on additional tools to support native SAP HANA data warehouse use cases, in particular data lifecycle management.

 

 

 

On this landing page you will find a summary of information to get started with SAP HANA Data Warehousing Foundation 1.0.

Presentations

 

 

Demos

 

SAP HANA Data Warehousing Foundation Playlist
In this SAP HANA Data Warehousing Foundation playlist you will find demos showing the Data Distribution Optimizer (DDO) as well as the Data Lifecycle Manager (DLM).

 

SAP HANA Academy

 

Find out more about the data temperature integration with HANA, including Data Lifecycle Management, on the DWF YouTube channel of the SAP HANA Academy.

HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented by a countless number of operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become a critical need in order to keep up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA Studio that allow easy modeling of the required resources for SAP HANA Rules Framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png    

Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning

TRP_Use_Case.jpg

SAP FraudManagement

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG


SAP HANA Authorisation Troubleshooting


Every now and again I receive questions regarding SAP HANA authorisation issues. I thought it might be useful to create a troubleshooting walkthrough.

 

This document will deal with issues regarding analytic privileges in SAP HANA Studio.

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in a HANA Studio schema and it's called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So you are a DBA, and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio. What are you to do?

How are you supposed to know the User ID? And most importantly, how are you to find out what the missing analytical privilege is?

 

This is the perfect opportunity to run an authorisation trace by means of the SQL console in HANA Studio.

The instructions below walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now you have turned the trace on, reproduced the issue, and turned the trace off again.

 

You should now see a new indexserver0000000trc file created in the Diagnosis Files tab in HANA Studio.

Capture.PNG
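
If you prefer the SQL console to the Diagnosis Files tab, the trace files can also be located via the monitoring views; the following query is only a sketch:


SELECT HOST, FILE_NAME, FILE_MTIME
FROM "SYS"."M_TRACEFILES"
WHERE FILE_NAME LIKE 'indexserver%'
ORDER BY FILE_MTIME DESC;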

 

So once you open the trace file, scroll to the end of the file and you should see something similar to this:

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user who is trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is located on object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

select * from objects where object_oid = '<oid>';

Select * from objects where object_oid = '78787'

 

Once you have found the owner of this object, you can get the owner to grant the TABLEAU user the necessary privileges to query the object.
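
As a sketch only: assuming the missing privilege is an activated (repository) analytic privilege with the hypothetical name onep.MasterData.qn::AP_QMT, the owner or an administrator with the appropriate grantor rights could grant it, together with the object privilege on the generated column view, as follows:


-- The analytic privilege name is hypothetical; replace it with the one reported in your trace
CALL "_SYS_REPO"."GRANT_ACTIVATED_ANALYTICAL_PRIVILEGE"('"onep.MasterData.qn::AP_QMT"', 'TABLEAU');

-- Object privilege on the generated column view, in case it is also missing
GRANT SELECT ON "_SYS_BIC"."onep.MasterData.qn/AT_QMT" TO TABLEAU;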

 

Please be aware that if you find that the owner of an object is _SYS_REPO, this is not as straightforward as logging in as _SYS_REPO, because that is not possible: _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

You have to create a .hdbrole file which gives the access (a development type of role, giving SELECT, EXECUTE, INSERT, etc. access) on this schema. You then assign this role to the user who is trying to access the object.

 

 

Another option available for analyzing privilege issues was introduced as of SPS 09. This comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

Troubleshooting High CPU Utilisation


High CPU Utilisation

 

Whilst using HANA (i.e. running reports, executing queries, etc.) you get an alert in HANA Studio that the system has consumed its CPU resources and has reached full utilisation or hangs.

 

Before performing any traces, please check whether you have Transparent HugePages (THP) enabled on your system. THP should be disabled across your landscape until SAP recommends activating it again. Please see the relevant notes in relation to Transparent HugePages:

 

HUGEPAGES 

 

SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation

SAP Note 1824819 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2

SAP Note 2131662 - Transparent Huge Pages (THP) on SAP HANA Servers

SAP Note 1954788 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3

 

 

The THP activity can also be checked in the runtime dumps by searching for "AnonHugePages". While checking the THP, it is also recommended to check:

 

SwapTotal = ??

SwapFree = ??

 

This will let you know if there is a reasonable amount of memory in the system.

 

Next you can check the global allocation limit (GAL): search for IPM in the runtime dump and check the limit, ensuring it is not lower than what the process/thread in question is trying to allocate.
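
The memory and swap figures above, as well as the allocation limits, can also be cross-checked from the SQL console via the monitoring views; the following queries are a sketch only (sizes are reported in bytes unless converted):


-- Physical memory and swap per host
SELECT HOST, FREE_PHYSICAL_MEMORY, USED_PHYSICAL_MEMORY, FREE_SWAP_SPACE, USED_SWAP_SPACE
FROM "SYS"."M_HOST_RESOURCE_UTILIZATION";

-- Effective allocation limit and memory used per service
SELECT HOST, SERVICE_NAME,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT/1024/1024/1024, 2) AS "LIMIT_GB",
       ROUND(TOTAL_MEMORY_USED_SIZE/1024/1024/1024, 2) AS "USED_GB"
FROM "SYS"."M_SERVICE_MEMORY";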

 

Usually it is evident what caused the high CPU utilisation. In many cases it is caused by the execution of large queries or by running reports on models from HANA Studio.

 

In order to analyse the activities, the second step is to run a Kernel Profiler Trace along with 3-4 runtime dumps whilst the issue is occurring.

 

A Kernel Profiler Trace can be used when you experience high memory consumption or performance issues during query execution. Please be aware that you will also need to collect 2-3 runtime dumps. The Kernel Profiler Trace results are read in conjunction with the runtime dumps to pick out the relevant stacks and thread numbers.

 

 

For the full information on Kernel Profiler Traces, please see Note 1804811 or follow the steps below:

            

Kernel%20Profiler.PNG

 

Connect to your HANA database server as user sidadm (for example via putty) and start HDBCONS by typing command "hdbcons".
To do a Kernel Profiler Trace of your query, please follow these steps:

1. "profiler clear" - Resets all information to a clear state

2. "profiler start" - Starts collecting information.

3. Execute the affected query.

4. "profiler stop" - Stops collecting information.

5. "profiler print -o /path/on/disk/cpu.dot;/path/on/disk/wait.dot" - writes the collected information into two dot files which can be sent to SAP.

 

 

Once you have this information you will see two dot files called

1: cpu.dot

2: wait.dot.

 

To read these .dot files you will need to download GVEdit. You can download this at the following:

  http://www.graphviz.org/Download_windows.php

 

Once you open the program it will look something similar to this:

 

Graph%20Viz.PNG

            

     
The wait.dot file can be used to analyse a situation where a process is running very slowly without any apparent reason. In such cases, a wait graph can help to identify whether the process is waiting for an IndexHandle, I/O, a savepoint lock, etc.

 

So once you open the GraphViz tool, please open the cpu.dot file: File > Open > select the dot file > Open. This will open the file:

Once you open this file you will see a screen such as this:

 

graphviz%201.PNG

            

 

The graph might already be open and you might not see it because it is zoomed out very large. You need to use the scroll bars (horizontal and vertical) to scroll.

 

CPU_DOT%201.PNG

    

From there on it will depend on what the issue is that you are processing.

Normally you will be looking for the process/step that has the highest value for

E= …

Where "E" means Exclusive

There is also:

I=…

Where "I" means Inclusive

The exclusive value is of more interest because it is the value for just that particular process or step, indicating whether more memory/CPU is used in that particular step or not. In this example we can see that __memcmp_se44_1 has I = 16.399% and E = 16.399%. By tracing the RED colouring we can see where most of the utilisation is happening and we can trace the activity, which will lead you to the stack in the runtime dump, which will also have the thread number we are looking for.

 

CPU_DOT%202.PNG

 

CPU_DOT%203.PNG

    

 

    

 

 

By viewing the cpu.dot you have now traced the RED trail to the source of the highest exclusive value. Now open the RTE (runtime dump). Working from the bottom up, we can get an idea of what the stack will look like in the runtime dump.

 

CPU_DOT%204.PNG

    

 

 

 

By comparing the RED path, you can see that it matches exactly with this stack from the runtime dump. This stack also has the thread number at the top.

 

So now you have found the thread number with which this query was executed. By searching for this thread number in the runtime dump we can check for the parent of this thread and the children related to that parent. This thread number can then be linked back to the query within the runtime dumps. The exact query can now be found, giving you the information on the exact query and also the user that executed it.

 

For more information or queries on HANA CPU please visit Note 2100040 - FAQ: SAP HANA CPU

 

Thank you,

 

Michael Healy

SAP HANA TDI - FAQ


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach of deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and  network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information please download this FAQ document.

View this Document

Storing SAP DB Control Center Historical Data in SAP HANA Dynamic Tiering


This document provides instructions on how to selectively move SAP DB Control Center historical data from its in-memory SAP HANA repository to extended tables in SAP HANA Dynamic Tiering.

 

SAP HANA with the Dynamic Tiering (DT) option enables the migration of 'hot' and 'warm' data from SAP HANA in-memory tables to Dynamic Tiering extended storage. 'Hot' data is data that resides in memory, allowing maximum access performance, whereas 'warm' data does not (always) reside in memory; 'warm' data isn't accessed as often as 'hot' data, so moving it to extended storage frees up memory. This paper explains the steps to move SAP DB Control Center (DCC) historical data from its in-memory tables to extended storage. Doing so will allow you to keep all historical data that DCC generates – the most up-to-date historical data will be 'hot' while the older data will be 'warm'.

 

PREREQUISITES

 

To move historical data from DCC tables to the DT extended tables, you require the following:

 

  • Access to an SAP HANA SPS09 or higher system
  • SAP DB Control Center (DCC) SP10 or higher
  • Both the DCC and the SAP HANA Dynamic Tiering component installed on that SAP HANA system
  • Basic working knowledge of SAP HANA Studio

 

If you would like to learn more about SAP HANA Dynamic Tiering, please review the following quick start guide: http://scn.sap.com/docs/DOC-66016.

 

CHANGE PURGE SETTINGS FOR STORAGE OF HISTORICAL DATA

 

We will first go over how to update the purge settings in DCC. DCC deletes records that are older than a maximum age that we will set in the settings (the default max age is 30 days). For example, if our purge settings were adjusted to have a max age of one month, then, in intervals of 5 minutes, records older than one month would be deleted. The purpose of purging is to delete old data to make space for newly generated data, which is why it happens often. We want to set a high value for the max age so that we have enough time to transfer the data, because if the max age is set to a very small value, it can potentially result in the deletion of new data before it gets moved to extended storage. As such, we'll set the max purge age to an artificially high number like 525600 minutes (1 year) so that it doesn't need to be monitored constantly.

 

Follow the instructions below to update the purge settings for DCC:

 

  1. Right click on your SYSTEM user.
  2. Click "Open SQL Console".
    1.png
  3. Copy and paste the following script onto the console.

    upsert "SAP_HANA_DBCC"."sap.hana.dbcc.data::Site.PreferenceValues" ("name", "v_int")     
    values('apca.historical.purge.max_age', 525600) with primary key;
  4. Click the Deploy icon to execute the script.

 

CREATE EXTENDED TABLES

 

To migrate data from in-memory to extended storage, we first have to create an extended table with the same schema as the in-memory table that we're transferring data from. The following instructions will guide you through this:

 

  1. Right click on your SYSTEM user.
  2. Click "Open SQL Console".
  3. Copy and paste the following script onto the console.  This script contains the SQL statements that, when run, will create a table in extended storage. For the purpose of this exercise, we will be naming the table ‘test_DT’.

    create table "SAP_HANA_DBCC"."test_DT" ( 
    resourceid      bigint               not null, 
    timestamp      timestamp       not null, 
    result              nvarchar(8)     not null, 
    state               smallint           not null, 
    availability      smallint           not null, 
    performance   smallint          not null, 
    capacity          smallint          not null, 
    alerthigh          integer           not null, 
    alertmedium    integer           not null, 
    alertlow           integer            not null, 
    alertinfo           integer            not null, 
    primary key (resourceid, timestamp) ) USING EXTENDED STORAGE;
  4. Click the Deploy icon.
  5. Left click on your SYSTEM user.
  6. Click log off.
    2.png
  7. Double click on your SYSTEM user to restart the user.
  8. Click SYSTEM > Catalog > SAP_HANA_DBCC > Tables.
    3.png

 

The table that you created in this section will now be found in this folder. This is an empty table in extended storage. You will also notice that the table listed in the catalogue has EXTENDED noted beside its name, to inform the user that it is an extended table.
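
To double-check that the new extended table exists and is still empty before the migration, a simple row count works (a sketch; at this point it should return 0):


    SELECT COUNT(*) FROM "SAP_HANA_DBCC"."test_DT";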

 

MIGRATING SELECTED HISTORICAL DATA FROM IN-MEMORY TO EXTENDED STORAGE USING STORED PROCEDURES

 

As mentioned earlier, the purpose of adding the DT option to work alongside DCC is to be able to migrate 'warm' historical data from DCC in-memory tables to extended tables in DT. Data that is used for daily reporting and other high-priority data is 'hot' data and must stay in memory. 'Warm' data is other data required to operate applications, and since that data isn't used as frequently as 'hot' data, it can be moved to extended storage. In this case, the historical data sitting in DCC is considered the 'warm' data. Therefore, we want to select the historical (warm) data and move it to DT tables. To migrate selected historical data from in-memory to extended storage, we will be using a stored procedure, in this case called "Migrate_Aged_Orders_1". Once this set of SQL statements is run, any data that is older than the date specified in the variable varAgedDate will be inserted into the extended table that you created in the previous section. It will then be deleted from the in-memory table; this deletion of the transferred data is necessary so that duplicate entries do not occur.

 

Complete the following:

 

  1. Right click on your SYSTEM user.
  2. Click "Open SQL Console".
  3. Copy and paste the following script onto the console.

    CREATE PROCEDURE "SAP_HANA_DBCC"."Migrate_Aged_Orders_1" ()
    AS BEGIN DECLARE varAgedDate DATE := ADD_DAYS(CURRENT_DATE, -1);
    INSERT INTO "SAP_HANA_DBCC"."test_DT" ( SELECT * FROM "SAP_HANA_DBCC"."test" WHERE "SAP_HANA_DBCC"."test"."TIMESTAMP" < :varAgedDate );
    DELETE FROM "SAP_HANA_DBCC"."test" WHERE "SAP_HANA_DBCC"."test"."TIMESTAMP" < :varAgedDate;
    END;
  4. Click the Deploy icon.
  5. Left click on your SYSTEM user.
  6. Click log off.
  7. Double click on your SYSTEM user to restart the user.
  8. Click SYSTEM > Catalog > SAP_HANA_DBCC > Procedures.  You will find the procedure you just created saved here. Every time you would like to run this procedure, you will have to manually call it. Proceed with the instructions to complete this step.
    12.PNG
  9. Right click on your SYSTEM user.
  10. Click "Open SQL Console".
  11. Copy and paste the following script onto the console.

    CALL "SAP_HANA_DBCC"."Migrate_Aged_Orders_1" ();
  12. Click the Deploy icon.

 

Note: This procedure migrates only the historical data from days prior to the day before the current date; this is just for illustrative purposes. In practice, the procedure would be set so that the value of the varAgedDate variable aligns better with your organization's data storage policy (e.g. keep the latest 3 months in main memory and move everything else to extended storage).
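
For example, a variant of the procedure that keeps roughly the latest three months in memory would only change the variable declaration. The procedure name Migrate_Aged_Orders_90d below is hypothetical; otherwise the sketch mirrors the original procedure:


    CREATE PROCEDURE "SAP_HANA_DBCC"."Migrate_Aged_Orders_90d" ()
    AS BEGIN DECLARE varAgedDate DATE := ADD_DAYS(CURRENT_DATE, -90);
    INSERT INTO "SAP_HANA_DBCC"."test_DT" ( SELECT * FROM "SAP_HANA_DBCC"."test" WHERE "SAP_HANA_DBCC"."test"."TIMESTAMP" < :varAgedDate );
    DELETE FROM "SAP_HANA_DBCC"."test" WHERE "SAP_HANA_DBCC"."test"."TIMESTAMP" < :varAgedDate;
    END;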

 

VERIFY DATA PARTITION

 

After executing the Migrate_Aged_Orders_1 () stored procedure, you can run the following SQL statements to verify that the historical data is now partitioned accordingly between the in-memory and dynamic tiering tables.

     

    To verify the data that you have selectively migrated to extended storage, execute the following queries:

     

    SELECT * FROM "SAP_HANA_DBCC"."test" order by "TIMESTAMP" asc;
    SELECT * FROM "SAP_HANA_DBCC"."test" order by "TIMESTAMP" desc;
    SELECT * FROM "SAP_HANA_DBCC"."test_DT" order by "TIMESTAMP" asc;
    SELECT * FROM "SAP_HANA_DBCC"."test_DT" order by "TIMESTAMP" desc;

    The results we get after running the SQL statements above are illustrated below. There are screenshots of each table's results in ascending and descending order, to show how much of the data has been deleted and moved from the 'test' table to the 'test_DT' table. In this example, you will notice that the earliest data was collected on October 28th, 2015 and the latest data was collected on November 18th, 2015. Since the variable varAgedDate was set to one day, you will notice from the screenshots that the data left in the 'test' table has its latest record from November 18th, 2015 at 9:08PM and its earliest record from 12:00AM that day. The 'test_DT' table has its latest record from November 17th, 2015 and its earliest record from October 28th, 2015.

     

    The "test" in-memory table results:

    4.png
    5.png

     

    The "test_DT" extended table results:

    6.png
    7.png

     

    You can also check this by right-clicking on the test_DT table in the catalogue and choosing Open Data Preview from the popup menu.

     

    Alternatively, you can issue SQL statements to retrieve the row count of the different tables (the counts should change over time):

     

    SELECT COUNT(*) FROM "SAP_HANA_DBCC"."test";
    SELECT COUNT(*) FROM "SAP_HANA_DBCC"."test_DT";

    For illustration purposes, if we run just the first COUNT statement above, we get the following result, which shows that there are 994 rows in the 'test' table after the migration process. To confirm that this is correct, if we go back and run SELECT * FROM "SAP_HANA_DBCC"."test" order by "TIMESTAMP"; as in the second screenshot, we can see from the first column of the last row that 994 rows are left in the 'test' table.

     

    Running:

     

    SELECT COUNT(*) FROM "SAP_HANA_DBCC"."test";

    10.png

     

    Running:

     

    SELECT * FROM "SAP_HANA_DBCC"."test" order by "TIMESTAMP";

    11.png

     

    SUMMARY

     

    The instructions above show you how to migrate historical data from the in-memory tables in DCC to the extended storage in DT. The procedure allows you to keep the most up-to-date historical data (hot) in memory while moving the older data (warm) to extended storage.

     

    To summarize the steps:

    1. Adjust the purge settings to an artificially large number so that you will not have to continuously monitor your data.
    2. Create extended tables in Dynamic Tiering with the same columns as the in-memory tables you want to move data from.
    3. Run the migration by calling a stored procedure that moves the selected data from the in-memory table to the extended table, after which you verify your results.

    After completing these steps, you should have a table in DT containing the historical data from the earliest row since the last purge up to the latest row allowed by the variable date set in the stored procedure. The table in DCC should only contain whatever is left over after the data has been moved. Once the data transfer is complete, the result is two tables with the selected historical data migrated successfully.

SAP HANA Fiber Channel Storage Connector Admin Guide


To use block storages together with SAP HANA, appropriate re-mounting and IO fencing mechanisms must be employed. SAP HANA offers a ready to use Fiber Channel Storage Connector. This paper covers the prerequisites, HANA configuration and the HANA lifecycle concerning the Fiber Channel Storage Connector.

View this Document
