Channel: SCN : Document List - SAP HANA and In-Memory Computing

Upcoming SAP HANA Webcasts and Events


Hands on SAP HANA Webcast Series Schedule


SAP Community Network presents "Hands on SAP HANA," an informal, audience-driven webcast series beginning in September. Technical experts from SAP and its partners will answer questions about planning, implementing, and using SAP HANA.

 

The webcast agenda and presentations are based on questions and interests from the audience - beginning with the questions you ask in the following pages. Speakers will provide answers in an interactive format during the live webcasts, and will continue to monitor this space for follow-up questions.

 

Schedule

Learn more about each session and its speakers, register, and start asking questions.

 

 

Date | Session Title | Speakers
(New Date!)
December 5

The Unique Benefits of Running SAP HANA on the Cisco UCS Server Platform

 

Dave Kloempken, Global Director of Data Center Solutions Sales, Cisco

Erik Lillestolen, Product Manager for SAP HANA, Cisco

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

September 17

High Availability and Disaster Recovery with SAP HANA

Dr. Oliver Rettig, Smart Appliance Development Lead, IBM

Rich Travis, IBM SAP Infrastructure Architect

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

September 10

Deploying SAP HANA in Your Data Center

Rich Travis, IBM SAP Infrastructure Architect

Raik Langer, Project Manager, HANA Development, SAP

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

Extended Where-Used search for HANA Information model objects


During HANA development, objects often need to be modified, and you need to know which other objects might be impacted by a given change. HANA Studio provides an easy way to identify the directly impacted objects with the "Where-Used" functionality.

 

The Where-Used window can be invoked by right-clicking the HANA information model object.

 

doc_03.JPG

The window can also be opened from the menu: Window -> Show View -> Other -> Modeler -> Where-Used List.

 

During the development life cycle, you may need to identify which objects are impacted by a change to:

 

  • The given information model object - identify first-level objects
  • First-level objects - identify second-level objects, then third-level, and so on
  • A table in a schema

 

Currently, neither the multi-level object impact nor the impact of a change to a table is directly visible.

 

Table changes are particularly relevant when BW objects are exposed as HANA information models and you need to identify the objects impacted by a change to an InfoObject (P table / T table) or a DSO active table.

 

Below is a piece of code that provides a simple way to quickly identify the impacted objects.

 

Please feel free to comment / help enhance the code.

 

Any other easier way to find the information is most welcome.

 

/*

  ***********************************************************************************

--  Custom Program: To identify Multi-Level Where-Used List

--  Developed By: Ravindra Channe

--  Version: 1.0

--  Date: 05-Mar-2013

--  Version: 1.1

--  Edited: 27-Dec-2013

--  Modification: Increase object name column size to 256.

--  Usage: call PR_R_WHERE_USED('<object_name>');

--         <object_name> = Name of Table, Attribute View, Analytic View, Calc View

--  Tables used:

--    Custom Table: lt_obj_ref

--    System Table: _SYS_REPO.active_objectcrossref

--  Prerequisite: SELECT access to _SYS_REPO objects, especially active_objectcrossref

  ***********************************************************************************

*/

 

CREATE COLUMN TABLE lt_obj_ref (V_OBJ_NAME varchar(256), V_COMMENTS varchar(300), V_SEARCHED varchar(1));

-- V_SEARCHED flag values: n = New, s = Search, c = Search Completed

 

DROP procedure PR_R_WHERE_USED;

-- Please ignore any error if the Procedure doesn't exist

 

CREATE PROCEDURE PR_R_WHERE_USED (in  vi_obj_name  varchar(256))

LANGUAGE SQLSCRIPT AS

   vl_cnt       integer := 1;

   vl_level     integer := 1;

BEGIN

-- Empty the table for the new search

   delete from lt_obj_ref;

 

-- Insert data for the first level objects

   INSERT INTO lt_obj_ref (V_OBJ_NAME, V_COMMENTS, V_SEARCHED)

   SELECT distinct FROM_OBJECT_NAME, 'Object "'||:vi_obj_name||'" found in '||FROM_OBJECT_SUFFIX||' in object "'||FROM_PACKAGE_ID||'\'||FROM_OBJECT_NAME||'" at level '||:vl_level, 'n'

   FROM _SYS_REPO.active_objectcrossref

   WHERE to_object_name = :vi_obj_name;

 

-- Check if there are any objects to search 

   SELECT count(*) into vl_cnt FROM lt_obj_ref WHERE V_SEARCHED = 'n';

   WHILE :vl_cnt > 0 DO

      vl_level := :vl_level + 1;

 

--    Set the objects to Search from New

      update lt_obj_ref set V_SEARCHED = 's' where V_SEARCHED = 'n';

 

      insert into lt_obj_ref (V_OBJ_NAME, V_COMMENTS, V_SEARCHED)

      select distinct a.FROM_OBJECT_NAME, 'Object "'||b.V_OBJ_NAME||'" found in '||a.FROM_OBJECT_SUFFIX||' in object "'||a.FROM_PACKAGE_ID||'\'||a.FROM_OBJECT_NAME||'" at level '||:vl_level, 'n'

      from _SYS_REPO.active_objectcrossref a, lt_obj_ref b

      where to_object_name = b.V_OBJ_NAME and b.V_SEARCHED = 's';

 

--    Set the objects to Search Completed from Search

      update lt_obj_ref set V_SEARCHED = 'c' where V_SEARCHED = 's';

 

--    Check if there are any objects to search 

      select count(*) into vl_cnt from lt_obj_ref where V_SEARCHED = 'n';

 

   END WHILE;

END;
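A quick usage sketch (the object name AN_SALES below is a hypothetical placeholder; substitute any table, attribute view, analytic view or calculation view name):

```sql
-- Run the multi-level where-used search for a hypothetical object AN_SALES,
-- then read the accumulated results from the staging table.
CALL PR_R_WHERE_USED('AN_SALES');
SELECT V_COMMENTS FROM lt_obj_ref;
```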

 

-- Check the data from Where-Used Extended Search


SELECT V_COMMENTS FROM lt_obj_ref;

SAP HANA Cloud Integration for data services Tutorials


Official Product Tutorials

The following tutorials demonstrate how to do the most common tasks in SAP HANA Cloud Integration for data services.

 

 

Configuring Datastores

 

 

Using Data Flows

SAP HANA Cloud Integration


Welcome to SAP HANA Cloud Integration!

 

Do you want to be one of the early adopters? Then test and learn more about HCI. We offer you a tenant with exclusive access to gain first hands-on experience with our cloud-based integration solution. Don’t miss this opportunity to work closely with SAP development on this new solution.

SAP HANA Cloud Integration (HCI) makes cloud integration simple and reliable, and is SAP’s strategic integration platform for SAP cloud customers. It provides out-of-the-box connectivity across cloud and on-premise solutions. Besides real-time process integration capabilities, it also contains a data integration part that allows efficient and secure ETL tasks to move data between on-premise systems and the cloud. In addition to these two integration flavors, SAP offers prepackaged integration content as reference templates that allow customers to quickly realize new business scenarios. This drastically reduces integration project lead times and significantly lowers resource consumption.
hci overview.jpg

 

As of today, SAP HANA Cloud Integration is available to customers and partners as an Application Edition for a dedicated set of SAP cloud applications (such as SAP Cloud for Customer and SuccessFactors), as well as the SAP Financial Services Network, which provides a reliable and secure platform for the integration of financial institutions with their corporate customers. Both are offered on an annual subscription basis. Upon purchase, predefined, ready-to-run content (prepackaged integration flows) can be made available in a customer-specific tenant, without the immediate need for additional hardware or integration skills on the client’s side.

 

Today, SAP HANA Cloud Integration supports out-of-the-box integration for
  • SAP’s Cloud for Customer application to on-premise SAP CRM / SAP ERP
  • SuccessFactors BizX suite of applications to SAP HCM

 

Integration Content Catalog.jpg

These applications provide prepackaged integration content for HCI, presented in an Integration Content Catalog and accessible through a web-based application. This eases the daily work of configurators, administrators and business analysts exploring ready-to-run integration content, along with introductory information and demos. The integration content covers templates with prebuilt process integration, data integration flows and other integration artifacts that significantly reduce implementation time, cost, and risk. These templates provide the basis for easy adaptation to specific business needs.
The design time is Eclipse-based, offering an Integration Designer perspective for integration developers to configure, deploy, administer, and monitor integration flows at a detailed level.

 

Have a look at the SAP HANA Cloud Integration Landing Page - Public Integration Content Catalog to see how easy it is for customers and partners to find and understand what it is all about.

 

 

 

Secure 2.jpg

HCI - hosted in the SAP HANA Cloud and offered as a managed service on top of SAP HANA Cloud Platform - comes with a completely new architecture and deployment options designed for cloud-to-cloud and cloud-to-on-premise integration and process orchestration. Since the integration is consumed as a service, the solution provides a multi-tenant architecture and comprises a high level of security features such as content encryption, message signing, encrypted data storage and certificate-based authentication. It contains a core runtime for processing, transformation, and routing of messages, as well as out-of-the-box connectivity support (IDoc over SOAP, SFTP, SOAP/HTTPS, SuccessFactors adapter). SAP HANA Cloud Integration will be developed into a functionally rich cloud-based integration platform; a continuously growing set of connectors and available enterprise integration patterns will lay the foundation for this.

 

New content will be posted here, so stay tuned!

 

 

Attending SAP TechEd in Bangalore? Check out  the Process Orchestration and Integration sessions!
Resources
Documentation and Ramp-up Knowledge Transfer (RKT)                         
Tooling and Public Integration Content

 

Related SCN Spaces and Documents
Articles and Blogs

 

Tutorials

Consuming BW Parent Child Hierarchy in HANA - A workaround


We all know how simple it is to create a hierarchy in BW and consume it in BEx. Now, with BW on HANA, some of us might want to use the existing hierarchies, maintained in BW, in HANA.

Well, there is no direct way to do it, but the following workaround will help you extract and replicate the hierarchy in HANA from BW.

So what do you need to start off with?

  • An instance of BW on HANA
  • Hierarchies maintained in BW

 

  • Basic SQL and HANA Modeling background
  • A front end tool to consume the hierarchy created in HANA

 

In this document, I will take the example of Profit Center Hierarchy.

 

P.S. – I have used a sample dataset for this document.

 

So how do we do it?

Here are the Steps to be followed:

  • Create a Table: The approach, which we are going to follow here, would require a table to be persisted which would hold the data of Profit Centers and their parent profit center nodes.
  • Locate the H table and prepare the SQL: The above-mentioned table needs to be populated with some data, using a SQL statement with a self-join of a table. So to start off with, check whether the H table of the Profit Center hierarchy exists in the SAP HANA catalog created for the BW instance. In my case the name of the catalog is CUSTOM_TABLES and the name of the table is HPROFIT_CTR.

 

The table structure is as shown below:

1.png

Here only 14 records are shown; in general this table will contain thousands of records, depending on the number of hierarchies and the number of levels they have in BW. The Hierarchy ID differs for each hierarchy. For this example I have taken the case of a single Profit Center hierarchy maintained in BW.

 

The main columns in this table, which we are going to focus on, are: NodeID, NodeName and ParentID.

 

The NodeID is a unique identifier for each node and profit center. The NodeName column has the name of the nodes/profit centers. The ParentID column has the detail about the Parent of that node/profit center.

 

We need to perform a self-join on this H table in order to get the relationship in one single row for a Profit Center. In order to do this we need to write a simple SQL which is as follows:

2.png
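Since the SQL is shown only as a screenshot, here is a sketch of such a self-join, assuming the column names mentioned above (NODEID, NODENAME, PARENTID; verify them against your actual H table):

```sql
-- Child/parent name pairs from the hierarchy table via a self-join.
-- The LEFT OUTER JOIN keeps the top node, whose parent comes back as NULL ('?').
SELECT c."NODENAME" AS "CHILD",
       p."NODENAME" AS "PARENT"
FROM   "CUSTOM_TABLES"."HPROFIT_CTR" c
LEFT OUTER JOIN "CUSTOM_TABLES"."HPROFIT_CTR" p
  ON   c."PARENTID" = p."NODEID";
```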

The output of this SQL is as shown below:

 

3.png

Here the Child Column has the child nodes and the Parent Column has the Parent nodes.

The most important part of this output is the ‘?’ as parent for the ProfitNode1 which is achieved by the Left Outer Join in the self-join. The ‘?’ or null value signifies that the ProfitNode1 does not have any parent and it is the top most node of the hierarchy.

 

P.S. - Remember, whenever you create a parent-child hierarchy in HANA and try to consume it in a front end, every node must have exactly one parent, and the topmost node must have null as its parent. If you do not have this structure in place, you will not be able to consume the hierarchy in the front-end tool and will end up getting an error. Also, the parent-child chain should not break while creating hierarchies. This is explained later in the document.

 

  • Create the Persisted Table: Now this data needs to be pushed into a persisted table, which can then be utilized in a Calculation View to create a parent-child hierarchy. To achieve this, you can either schedule the SQL in BODS or use an "INTO" clause at the end of the SQL if your nodes are not going to change in future.
For this example, I will create a simple 2 column table named "PARENT_CHILD" in CUSTOM_TABLES catalog and load the data using INTO Clause.

4.png
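As a sketch of this step (one way to persist the result; CREATE TABLE ... AS is an alternative to the INTO clause mentioned above):

```sql
-- Persist the child/parent pairs into the 2-column PARENT_CHILD table.
CREATE COLUMN TABLE "CUSTOM_TABLES"."PARENT_CHILD" AS (
  SELECT c."NODENAME" AS "CHILD",
         p."NODENAME" AS "PARENT"
  FROM   "CUSTOM_TABLES"."HPROFIT_CTR" c
  LEFT OUTER JOIN "CUSTOM_TABLES"."HPROFIT_CTR" p
    ON   c."PARENTID" = p."NODEID"
);
```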

 

We have extracted the Hierarchy Information from BW into HANA. Now this "PARENT_CHILD" table will be used for our modeling.

P.S. – If you do not wish to persist the data into a table and would like to execute the query on the fly when user runs the report, then you can create a script based calculation view using the same SQL. This calculation view will again be used inside the final calculation view.

 

 

 

  • Modeling of Views: This is a simple but critical step in creating the hierarchy. The usual line of thinking is to create a hierarchy in an Attribute View, consume it in an Analytic View, and finally use that Analytic View in the Calculation View used for reporting. But there are several problems with this approach; for one, you cannot expose a hierarchy created in an Attribute View to a Calculation View through an Analytic View. The most critical issue, however, is the break in the parent-child relationship. So what is this issue all about?

There are two constraints when you create a parent-child hierarchy. The first is that the topmost node must have null as its parent, as already explained. The second is that there must not be any child without a parent. For example: if A is the parent of B, B the parent of C, and C the parent of D, then at no time while creating (not consuming) the hierarchy should this link break. You may be wondering when this situation would arise.

 

If you look at the HPROFIT_CTR table, you will find that it is a mix of the nodes and the leaves of the hierarchy. A node is an entry that has at least one child; a leaf is an entry that has no children. Here, ProfitNode1, ProfitNode2 and ProfitNode3 are node entries, whereas the entries from P01 to P011 are leaf entries.

 

Generally, in the transaction table (e.g. the FAGLFLEXT table of ECC), the Profit Center column will always contain the leaf entries and not the node entries. So if you join this transaction data with an attribute view created on PARENT_CHILD, using an inner join, left outer join or referential join on Profit Center, the node entries of the attribute view will not appear in the output of the Analytic View, because they are not present in the transaction table, i.e. the data foundation of the analytic view.

 

So merely joining the attribute view based on the PARENT_CHILD table will not help. Along with that join in the Analytic View, you need to union this table with the analytic view in the Calculation View. That way, all the nodes and leaves are present in the calculation view while the hierarchy is being created. Once this is done, create a parent-child hierarchy in the Calculation View and consume it in the front-end tool.

 

Let’s do some hands on exercise on this. So to start off with, create an attribute view on PARENT_CHILD table as shown below:

 

5.png

 

Activate this Attribute View.

Now, create an Analytic View using your transaction table. I have created a dummy transaction table for this exercise:

 

6.png

Create an Analytic View using this table and join the Attribute View, created earlier, to the data foundation using Left Outer Join on Profit Center and Child (Left Table: Data Foundation, Cardinality N:1):

 

 

 

 

7.png

Save and Activate this Analytic View.

 

Now we will create a calculation view on top of this Analytic View Layer and create Hierarchy there.

 

So let us name it as CALCULATION_SALES_DETAILS. This calculation view will have the main source as ANALYTIC_SALES_DETAILS having union with the PROFIT_CENTER Attribute View on PARENT_CHILD table:

 

8.png

Now, in the output node, add the attributes and measures to the final output. Also, create a new Parent Child Hierarchy using the below mentioned information:

 

9.png

 

10.png

 

 

Save and Activate the Calculation View.

 

 

This view can now be consumed in the front end tools. I will use MS Excel to show the data:

11.png

Here you can see that the Hierarchy of Profit Center is consumed as we would like it to be.

 

If you are wondering why we performed the union with the attribute view, think back to the broken-linkage issue explained earlier: the union takes care of any broken linkage, since all the links are present in the attribute view.

 

 

The solution provided here is based entirely on my project experience. I hope this document helps you understand parent-child hierarchies better.

HANA DB Backup & Restore FAQ


Backup and Recovery of HANA Database

 

 


For optimal performance, the SAP HANA database holds its data in memory.

During normal database operation, data is automatically saved from memory to disk at regular savepoints. Additionally, all data changes are recorded in the redo log, which is saved from memory to disk with each committed database transaction. After an improper shutdown, the database can be restarted like any disk-based database: it returns to its last consistent state by replaying the redo log since the last savepoint.

 

A savepoint consists of three phases:

Phase 1

• All modified pages that have not yet been written to disk are determined, and the savepoint coordinator triggers writing of these pages.

Phase 2

• The write operations for phase 3 are prepared.

• A consistent-change lock is acquired; no write operations are allowed.

• All pages modified during phase 1 are determined and written to a temporary buffer.

• The list of open transactions is retrieved.

• Row-store information for uncommitted changes made during phase 1 is written to disk.

• The current log position is determined (the log position from which logs must be read during restart).

• The change lock is released.

Phase 3

• All data is written to disk; changes are allowed again during this phase.

• The temporary buffers created in phase 2 and the list of open transactions are written.

• The row-store checkpoint is invoked.

• The log queue is flushed up to the savepoint log position.

• The restart record is written (containing, for example, the savepoint log position).

 

While savepoints and log writing protect your data against power failures, savepoints do not help if the persistent storage itself is damaged.

 

Database backups can be performed manually or scheduled to run automatically.

 

Manual backups are done through:

                SAP HANA Studio

                DBA Cockpit

                SQL commands

Scheduling can be done:

                Using the DBA calendar in DBA Cockpit

                Using a script (through the SQL interface)
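For the SQL command route, a minimal sketch (the backup file prefix MONDAY_FULL is arbitrary):

```sql
-- Complete data backup to the configured backup location in the file system;
-- 'MONDAY_FULL' is only a file-name prefix for the backup files.
BACKUP DATA USING FILE ('MONDAY_FULL');
```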

 

Performing Backups

You can specify whether data and log backups are written to the file system or handled by third-party backup tools. The Backint for SAP HANA interface performs all the actions needed to write backup data to external storage; the backup tools communicate directly with the SAP HANA database through this interface. Backint can be configured both for data backups and for log backups.

Key points:

• Data and logs can only be backed up when the SAP HANA database is online.

• Backup and recovery always apply to the whole database; it is not possible to back up and recover individual database objects.

• At the beginning of a recovery, all the data and log backups to be recovered must be available.

• The SAP HANA database software version used during the recovery must always be the same as or higher than the version of the software used to create the backup.

• When the data area is backed up, all the payload data from all the servers is backed up. This happens in both single-host and multi-host environments.

LOG Backup

 

Log backups are done at regular intervals to allow the reuse of log segments. A log segment is backed up in these situations:

• The log segment is full

• The log segment is closed after exceeding the configured time threshold

• The database is started

 

Enabling and Disabling of Automatic log backup can be done by changing the value of the parameter enable_auto_log_backup.

 

Default: enable_auto_log_backup = yes
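The parameter lives in the persistence section of global.ini and can be changed with an ALTER SYSTEM statement; a sketch (on newer revisions the value may be true/false rather than yes/no):

```sql
-- Enable automatic log backup system-wide; WITH RECONFIGURE applies the
-- change to the running system without a restart.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'enable_auto_log_backup') = 'yes'
  WITH RECONFIGURE;
```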

 

Maintain the parameter log_mode:

log_mode = overwrite: Log segments are freed by savepoints and no log backup is performed. This can be useful, for example, for test installations that do not need to be backed up or recovered.

log_mode = legacy: No log backup is performed. Log segments are retained until a full data backup is performed. This allows recovery from the most recent full backup and the log in the log area. This was the default setting for SAP HANA SPS 02.

 

 

 

 

The FAQs are attached.

 

 


Featured Content for SAP HANA and In-Memory Computing


Special Feature on SAP River

SAP River is a new development language and environment for developing a complete SAP HANA backend application, describing the data model, the business logic and access control within a single and integrated program specification. Check out the latest content flowing in:

 

Watch this space and the SAP HANA Developer Center for more SAP River content. January 8, 2014

 

6 Golden Rules for New SAP HANA Developers

With mass adoption taking hold for SAP HANA, more resources are needed to support those beginning the journey. Having seen many of the same mistakes raised repeatedly in the forums, SAP Mentor John Appleby came up with a few golden rules to help.

 

See also Appleby’s 10 Predictions: What's new for SAP HANA in 2014? January 8, 2014

 

Special Feature on SAP HANA Migration

Use the decision matrix to find the best migration option for SAP NetWeaver AS ABAP, watch a demo of change management in SAP HANA to learn how to benefit from the new feature in SPS07, and find recommendations in the Migration to SAP HANA document. Visit the Software Logistics space for more. January 8, 2014

 

See more recently featured content.


Recently Featured Content on SAP HANA and In-Memory Business Data Management


 


2013

 

What's new in SAP HANA SPS 07?

SPS 07 for SAP HANA is focused on three major themes: open platform innovation to support a broad range of applications developed by SAP and its partner ecosystem, developer empowerment with new capabilities to increase developer efficiency, and extended support for mission-critical enterprise applications. Find out more about the latest release in John Appleby’s blog and Lars Breddemann's "dazzling highlights" blog. December 09, 2013

 

Be sure to check out the related webcast series focused on the SPS07 release.

 

Eight Easy Steps to Develop an XS Application on the SAP HANA Cloud Platform

The SAP HANA Cloud Platform provides you with native development capabilities, allowing you to build and execute native SAP HANA XS applications that run in the cloud. Find out how from Stoyan Manchev. December 09, 2013


Big Data Geek - Finding and Loading NOAA Hourly Climate Data

Take a journey to the National Oceanic and Atmospheric Administration (NOAA), as John Appleby extracts and analyzes big data with SAP HANA and SAP Lumira. In Part 1, he begins to download the data, pull it into HANA, and build a readable model for Lumira. In Part 2, Appleby seeks to answer the question posed by a fellow SCN member: “Is it getting warmer in Virginia?” November 18, 2013

 

Find out how you can test out SAP HANA and SAP Lumira in a big data scenario (processing Wikipedia data with Hive) in this blog by Greg Chase.

 

The HANA Journey: Determining When HANA is Right for You

You've probably heard of different ways to deploy SAP HANA in your business, using solutions such as SAP Business Suite accelerators, SAP HANA applications, SAP BW on HANA, SAP Business Suite on HANA, and more. But when do you start implementing it? Read The HANA Journey: Determining When HANA is Right for You from Tom Kurtz to find out. November 18, 2013

 

SAP HANA Cloud Integration

Want to learn more about SAP HANA Cloud Integration? Read SAP HANA Cloud Integration (HCI): Getting Started by Meghna Shishodiya and SAP HANA Cloud Integration: An Intro by Sujit Hemachandran. Finally, see the SAP HANA Cloud Integration: Early Customer and Partner Project in which Udo Paltzer shares details about the opportunity to get hands-on experience with SAP HCI. November 18, 2013


The Unique Benefits of Running SAP HANA on the Cisco UCS Server Platform

Over the past four years, since the launch of the Cisco Unified Computing System (UCS) server platform, SAP and Cisco have partnered on just about every SAP solution. In this webcast, you will learn how your organization can lower TCO by implementing Cisco’s TDI Shared Networking for HANA, and how to maximize ROI through the benefits of Cisco’s disaster tolerance capabilities. November 15, 2013


Learn how to use SAP Lumira SP12 with SAP NetWeaver BW

With SAP Lumira SP12, customers can now connect to the relational Universes (UNX) using the SAP NetWeaver BW InfoProvider level as a source for their analytics. In Part 1 of a new blog series, Ingo Hilgefort provides the details to get your own SAP NetWeaver BW on SAP HANA system. October 16, 2013

 

The SAP HANA and Cloud Platform Symbiosis

The SAP HANA Cloud Platform combines the power of SAP HANA with the ease of use of a cloud platform, lowering the entry barrier to experience SAP HANA. SAP Mentor Matthias Steiner explains how cloud + HANA is more than the sum of its parts. October 16, 2013

 

Playing with SAP HANA

This in-depth presentation from Lars Breddemann shows some interesting and not-so obvious aspects of SAP HANA’s column store tables. Get more details in the SAP HANA space. October 16, 2013

Your own SAP NetWeaver BW on SAP HANA with BI4 system... in just a few minutes

It is now possible to get your own BW on HANA system connected to a fully configured BI 4.0 system, plus a 300-page tutorial containing everything you need to know about BW on HANA, in a matter of mere minutes. Ingo Hilgefort outlines the trial offering and how to get it from SAP HANA Marketplace in this recent blog.

 

Join the October 17 webinar to learn how you can use SAP NetWeaver BW on SAP HANA with SAP BusinessObjects BI4.x. September 27, 2013

Get your hands on SAP HANA webinar replays!

Transcripts and replays are available for the September webcast "Hands on SAP HANA" series:

 

Deploying SAP HANA in Your Data Center - Replay, Q&A Available

High Availability and Disaster Recovery with SAP HANA - Replay, Q&A Available

 

Want more? Check out the Big Data webinar series. September 24, 2013

 

Best Practice: Test Management for SAP Business Suite on SAP HANA Migration Projects

This technical guide describes test management activities necessary for functional and performance tests that enable customers to migrate to SAP HANA without negatively impacting business processes. September 24, 2013

 

SAP HANA Landscape Redistribution with SP6

Jochen Becker describes how the landscape redistribution process was fundamentally revised in SAP HANA SP6. September 24, 2013

 

My encounter with the freight train called HANA

Former SAP architect Holger Stumm shares his perspective on the momentum behind SAP HANA. September 24, 2013

 

What's new in SAP Operational Process Intelligence SP01

The first support pack offered by the SAPOPInt team provides new goodies and fixes based on your feedback.

Harshavardhan Jegadeesan explains some of the new features in his blog. September 24, 2013

 

SAP HANA Reference for Developers

Interested in learning more about SAP HANA from a development perspective? Vivek Singh Bhoj, application developer at Mindtree, has compiled this comprehensive, two-part SAP HANA reference: Part 1 | Part 2. September 13, 2013

 

Connecting to HANA with Python using ODBC or oData

Ronald Konijnenburg provides advice, and a “head start” for connecting to SAP HANA with Python. September 13, 2013

 

SAP HANA Lifecycle Management

SAP HANA software includes a lifecycle manager that incorporates procedures for customizing and updating your SAP HANA platform and for managing SAP HANA content products and transports. September 13, 2013

 

Testing SAP HANA

This paper delivers best practices with respect to testing SAP HANA and outlines how SAP Consulting can assist customers in developing and implementing a tailored testing approach by taking those practices into account. September 13, 2013

 

The specified item was not found.

Profit from the best practice information provided in this document for your classical migration (using software provisioning manager 1.0) of your ABAP system to SAP HANA. September 13, 2013

 

The Fastest Way to Become a Real-time Business

SAP HANA Enterprise Cloud launched earlier this year following extensive work on petabyte scale SAP HANA infrastructure. Swen Conrad, who was part of the launch team, explains some of the key qualities and differentiating characteristics of the HEC offering. August 28, 2013

 

Convergence of OLTP and OLAP Analytics

We’ve seen the evolution of analytics – from operational analytics, using OLTP ABAP programs in ERP, to analytics using SAP BW and a more robust architecture and governance, storing data in an EDW (Enterprise Data Warehouse), and running OLAP reports. Read more in the blog posted by Alexandra Carvalho. August 28, 2013

 

SAP HANA Developer Guide

This guide explains how to build applications using SAP HANA, including how to model data, how to write procedures, and how to build application logic in SAP HANA Extended Application Services (SAP HANA XS). August 28, 2013

 

"Hands on SAP HANA" Webcasts

SCN presents "Hands on SAP HANA," an informal, audience-driven webcast series beginning in September. Technical experts from SAP and its partners will answer questions about planning, implementing, and using SAP HANA. The webcast agenda and presentation is based on questions from the audience - beginning with those you ask right here. August 13, 2013

 

Using HANA Modeler in the SAP HANA Cloud

SAP's vision to enable SAP HANA native development on the SAP HANA Cloud platform has reached beta: Developers can now leverage SAP HANA data-persistence and analytic model artifacts using the SAP HANA Studio Development or Modeler perspective. In this blog, SAP's Stoyan Manchev shows how to build a sample application that uses a calculation view and is consumed via Java Servlet. August 13, 2013

The Real Reason for SAP HANA

Hear it from SAP co-founder and Chairman of the Supervisory Board, Hasso Plattner himself. August 13, 2013

 

SAP HANA SPS6: New Developer Features

In this blog post from Thomas Jung, you'll learn more about the recent release of SAP HANA SPS6 (Revision 60). Developers working on SAP HANA can expect a wide variety of new and improved features, including development workflow. Jung also includes sample code and videos. Ready to try SAP HANA yourself? Download a trial version today. August 13, 2013

 

Building Your First End-to-End SAP HANA Cloud "PersonsList" Web Application Scenario

The document offers developers a comprehensive way for developing an SAP HANA Cloud web application scenario. Among other things, you'll learn to set up your SAP HANA Cloud Development Environment from scratch; develop a minimal end-to-end SAP HANA Cloud Application from UI to database; publish and run each developed increment on SAP HANA Cloud local runtime for testing. August 13, 2013

 

Factors to Consider when Choosing an SAP HANA Managed Hosting Provider

Once you’ve decided to implement SAP HANA in a managed hosting environment, you’ll begin the process of evaluating potential partners. This is the first entry in a four-part blog series by Binoy James in which he examines how the size of your implementation, network needs, and application environment consolidation play into your decision. August 13, 2013

The Top 10 SAP HANA Myths Spread by Other Vendors

SAP Mentor John Appleby solves for X in the equation, "Another vendor has told me X about SAP HANA, is it true?" while Vijay Vijayasankar explains why A Faster Horse Just Won’t Cut It Anymore. July 19, 2013

 

The Process Black Box – SAP Operational Process Intelligence shines a light inside

Much has been made of what breakthroughs SAP HANA can provide to the world of business. In this blog, SAP Mentor Owen Pettiford of CompriseIT shares his take on how SAPOPInt is a product that delivers.

 

Join Owen in the SAP Mentor Monday webinar to find out more about SAPOPInt.

 

Also learn about SAPOPInt from Sebastian Zick, who shares his real-world experience on how CubeServ gained transparency of processes across system boundaries. July 19, 2013

 

Real-time Sentiment Ratings of Movies on SAP HANA One

SAP intern Wenjun Zhou describes using SAP HANA One to power an app that rated movies. Learn how he used the Rotten Tomatoes API to find newly released movies, along with Twitter and Sina Weibo APIs to check sentiment among U.S. and China-based movie-goers, respectively. July 19, 2013

 

SAP Solutions and Best Practices for Testing SAP HANA

Ensuring a smooth introduction and ongoing operation of applications running on SAP HANA requires a thorough and automated testing approach. This includes a selection of representative test cases that address technical, functional, and non-functional needs, as well as dedicated tools that accelerate and increase the effectiveness of testing efforts. Learn from this white paper how to develop and implement a tailored testing approach based on best practices. July 19, 2013

 

New Features in SAP HANA Studio Revision 55

In case you missed it, Lars Breddemann reports on the improvements released with revision 55 of SAP HANA Studio. July 19, 2013

 

Calling XSJS Service Using SAP UI5

Inspired by a previous blog on SAP HANA Extended Application Services, Vivek Singh Bhoj elaborates on how to call the XSJS service. July 19, 2013


SAP HANA Turns 2

Our groundbreaking in-memory platform is growing up. See how SAP HANA continues to transform and inspire business. June 21, 2013

 

Setting the Record Straight - SAP HANA vs. IBM DB2 BLU

Recently IBM announced BLU accelerator for DB2, which does query acceleration of manually selected tables in DB2 in batch mode using in-memory and columnar techniques. However, there were some unsubstantiated claims and over-reaching statements amid the excitement about BLU. SAP’s Ken Tsai provides his assessment in this blog. June 18, 2013

 

SAP HANA Commands, Command Line Tools, SQL Reference Examples for NetWeaver Basis Administrators

Andy Silvey set out on a massive undertaking to create a one-stop shop reference for HANA commands and command line tools, plus administrator's SQL queries. Lucky for us, he decided to share it here. June 18, 2013

 

SAP HANA Enterprise Cloud - Important step to... where?

SAP Mentor (and SAP HANA Distinguished Engineer) Tomas Krojzl speculates on what the May 7 announcement of HANA Enterprise Cloud might mean for SAP in the future. His blog inspires more than a few opinionated comments. June 17, 2013

 

How Does It Work Together: BW, BW on SAP HANA, Suite on SAP HANA, SAP HANA Live

Part 3 of this solid blog series continues the discussion of how diverse customer landscapes can be efficient and synergistic. Posted by Ingo Hilgefort, Part 1 introduced the different landscapes and discussed a customer site without SAP as the backend. In Part 2, Ingo discussed an SAP ERP customer and how such a customer could leverage SAP BusinessObjects BI with and without SAP HANA Live. June 6, 2013

 

Get SAP HANA One Premium

SAP HANA One Premium is designed for users who want to run their SAP HANA instances 24x7. See the demo of SAP HANA on AWS (on the AWS blog) and learn about its ability to connect to source data from SAP Business Suite and SAP NetWeaver BW in the cloud, SAP Enterprise Support, and the Data Services component of SAP HANA Cloud Integration. Also read this related blog by SAP Mentor Richard Hirsch and this SAPInsider article. For more information and resources, visit the SAP HANA One page. June 6, 2013

 

Understanding the HANA Enterprise Cloud:  An Initial Whiteboard

If you want to better grasp SAP’s new HANA Enterprise Cloud offering, follow SAP Mentor Richard Hirsch as he diagrams his way to a better understanding in his recent blog.

 

SAP’s Bjoern Goerke provides additional clarity across the cloud offerings in his recent blog.

 

Follow the ‘hana_enterprise_cloud’ tag for related blogs. May 17, 2013

 

Introduction to Software Development on SAP HANA - Free Online Course

By now you might have heard about the launch of openSAP – a platform that offers free online learning opportunities. Who better to instruct the first course than SAP Mentor and eLearning expert (just check our library) Thomas Jung? But this, he says, is something different than “traditional” eLearning.... May 17, 2013

 

#NBA Sabermetrics 101 (Powered by #SAPHANA)

In the SAPPHIRE NOW keynote, SAP co-CEO Bill McDermott talked about how SAP is working with professional sports to create the next-generation fan experience. In his latest blog, SCN’s own Chris Kim discusses the value SAP can bring to the sports and entertainment industry. For more on SAP and sports, check out Proof of SAP’s evolution from a B2B to a B2C company. May 17, 2013


Enter SAP Enterprise Cloud

Last week, SAP announced the SAP HANA Enterprise Cloud service. SAP HANA Enterprise Cloud empowers you to deploy SAP Business Suite, SAP NetWeaver BW, and custom HANA applications to support real-time business. Learn more in the blog by executive board member Vishal Sikka and watch the replay of the press event. Then read Siva Darivemula’s blog Adding a New On-Ramp to the HANA Highway for more insight. May 8, 2013

 

SAP HANA Enterprise Cloud: “Instant Value without Compromise”

SVP Mark Yolton describes what the new offering can do for customers, shares his take on the announcement as well as some early reactions from the media. His blog is also filled with HANA resources. May 8, 2013

 

SAP HANA Cloud Integration Now Available

SAP HANA Cloud Integration, now available for customers and partners, is an integration platform hosted in SAP HANA Cloud that facilitates the integration of business processes spanning across different departments, organizations, or companies. Mariana Mihaylova explains and provides resources in this document. May 8, 2013

 

Cloudy on the terminology? Check out the blog by Bjoern Goerke in which he clarifies recent branding around cloud.

 

New SAP HANA Development Platform Training as Massive Open Online Course (MOOC)

Register for a new online course: "Introduction to Software Development on SAP HANA." Over six weeks’ time, you’ll get an overview of the native programming capabilities of SAP HANA. Dr. Markus Schwarz, SVP SAP Education, says, "We want to give learners choice. With the new course we can reach even a broader audience." May 1, 2013

 

The Evolution of HANA One: More Than Just HANA Hosted in a Public Cloud

SAP Mentor Richard Hirsch comments on a recent SAPinsider publication about HANA One. May 1, 2013

 

SLT Suggestions for SAP HANA

In what he describes as a “brainstorming blog,” Tomas Krojzl writes about his ideas on how SLT replication could be improved. Don’t worry, he’s open to criticism. Seems like a good time to like it, rate it, and comment away! April 26, 2013

 

Bipedal Process and Data Intelligence.... Stop Hopping.... RUN!

The fact that we’re living in the age of big data is no surprise at this point, but according to Alan Rickayzen, “the age of process intelligence has just started.” Find out what he means, where HANA comes into the picture, and how solution experts, process operators, and process owners are the big beneficiaries of SAP Operational Process Intelligence. April 26, 2013


Why Users Need SSO in SAP HANA

With Single Sign On (SSO), users can directly log in from any front-end application and access the SAP HANA database without providing login credentials again. This blog by Kiran Musunuru gives you details on setting up SSO with SAP HANA using Kerberos. Read more highly rated blogs on SAP HANA. April 26, 2013

 

New Publications from SAPinsider:

 

A Look Under the Hood of SAP HANA

Get a look at some of the key components of the SAP HANA platform and the features and functions that make it compelling for developers.

 

SAPinsider: SAP HANA One Illuminates New Possibilities

Learn about the instant deployment option that facilitates smaller SAP HANA projects and applications that are not easily accommodated by on-premise system procurement cycles. April 26, 2013

 

Pairing the Power of SAP HANA with the Innovative Agility of a Startup

Learn more about the Startup Focus program, how to get involved, and what it means for SAP customers. April 26, 2013

 

Best Practices for SAP HANA Data Loads

As SAP Mentor John Appleby says, “you can take the best technology in the world, create a bad design, and it will work badly. Yes, even SAP HANA can be slow.” With that in mind, check out his best practices for HANA data loading. April 10, 2013

 

Performance Guidelines for ABAP Development on the SAP HANA Database

If you’re an experienced ABAP developer, you’re probably familiar with the classic performance guidelines for using Open SQL. This raises the question of how those guidelines change in the context of SAP HANA. Eric Westenberger tackles that question. April 10, 2013

 

Experience the Magic: How to Setup Your Own ABAP on HANA in the Cloud

Are you an ABAP developer who can’t wait to explore the intricacies of ABAP on HANA coding? Do you want to set up a sandbox environment where you can try out things such as consuming HANA Calculation Views or Stored Procedures from ABAP programs, and learn how to accelerate your ABAP applications with HANA or build entirely new ones? Then SAP Mentor Thorsten Franz wrote this for you. April 10, 2013

 

Tame BIG Processes with SAP Operational Process Intelligence, Powered by SAP HANA

Read the three-part series by Harshavardhan Jegadeesan, in which he walks through "big processes," the challenges they pose, and how SAP Operational Process Intelligence, powered by SAP HANA, can help businesses overcome them. Then see how to test drive #SAPOPInt in this post. March 22, 2013

 

Get your hands on this HANA stuff:

March 13, 2013

 

Migrating Java Open-Source Application from Oracle to SAP HANA

The purpose of this document is to guide the process of migrating OLTP systems from a source Oracle database to a target SAP HANA database. The open-source Java application mvnForm is used in this guide to simulate an OLTP system on the source Oracle database. March 7, 2013

 

When SAP HANA met R - What's new?

Last year’s “When SAP HANA met R - First kiss” blog has some people wondering what’s new in the integration of the SAP HANA database with R. Blag responds in his recent blog. March 4, 2013

Webinar: SAP Business Suite Powered by SAP HANA

On January 10, 2013, SAP announced the availability of the SAP Business Suite powered by SAP HANA – built to deliver an integrated family of business applications unifying analytics and transactions into a single in-memory platform. Join an exclusive webcast on March 14, at 15:00 CET and learn how to become a real-time business.

 

Engage with SAP HANA through Hours of Free Videos and Projects

Explore the SAP HANA Academy and watch more than 250 videos answering your what, why, and how questions about SAP HANA. March 4, 2013

 

Uncovering the Value of SAP BW Powered by HANA: Answering the Second Question

When Suite runs on HANA, BW runs on HANA, and assorted data marts run on HANA - what would be different for a business user? After talking to several customers, Vijay Vijayasankar thinks it’s the "ease of answering the second question" that is the most value-adding scenario for a business user. What is your "second question"? March 4, 2013


Clear the Process Fog with SAP Operational Process Intelligence

Learn about this new SAP offering designed to improve your operational efficiency. Check out the overview video on YouTube and share your thoughts on the related blog by Peter McNulty. February 21, 2013

 

Say cheese... on taking snapshots with SAP HANA

In this detailed blog, Lars Breddemann shows how to take a snapshot of your SAP HANA instance. February 21, 2013

 

Fast is Not a Number

You might call it a constructive rant, but why not ask the difficult questions? Jim Spath - SAP Mentor, SCN forum moderator, ASUG volunteer, employee of a company that runs SAP – does. February 21, 2013

 

The OLAP Compiler in BW on SAP HANA

Thomas Zurek blogs about a functionality he considers one of the “crown jewels” of BW on HANA. February 21, 2013

SAP HANA Certification Pathways

In this comprehensive blog, Vishal Soni shares his organization’s plans, which outline paths to SAP HANA certification for technical consultants and application consultants. February 18, 2013


Harness Insight from Hadoop with MapReduce and Text Data Processing Using SAP Data Services and SAP HANA

This white paper, developed at SAP Co-Innovation Lab, explores how IT organizations can use solutions from SAP and our partners to harness the value of large volumes of data stored in Hadoop, identify salient entities from unstructured textual data, and combine it with structured data in SAP HANA to leverage meaningful information in real time. February 13, 2013


New SAP TV Videos on SME Customers Using SAP HANA

Michael Nuesslein of SAP TV announces two new SAP HANA game-changer videos worth checking out. January 28, 2013

 

SAP on HANA, and Pushdown for All: News about ABAP's Adventurous Relationship with the Database

Business Suite on HANA wasn't all news to this SAP Mentor, but the January 10 announcement came with some "extremely new and noteworthy" information to Thorsten Franz, such as a shift in the ABAP programming model. January 21, 2013

The Business Suite on HANA: The Announcement and What this Means for Customers

Besides providing an overview of the January 10 announcement, SAP Mentor and SCN Moderator Luke Marson outlines customer benefits and his thoughts on what it all means. Of course there are still questions, as summarized in Carsten Nitschke’s candidly-titled “What I did not learn” blog. Don’t miss the discussion that follows.

 

As far as what’s next, SAP Mentor Richard Hirsch “connects the dots” and suggests the next big play for HANA. January 17, 2013

 

2013 - The Year of the SAP Database

With the incredible success of SAP HANA over the last 18 months and a greatly expanded database and technology portfolio, SAP is poised to surge ahead in the database market. SAP Mentor John Appleby shares his thoughts on why 2013 will be a pivotal year. January 3, 2013

 

SAP TechEd Sessions on SAP HANA

What principles guide SAP’s platform and infrastructure decisions? Watch Introduction to Our Technology Strategy and Road Map to learn about the "big bets" that SAP is making in the technology game. Then learn about Integrating SAP HANA into Your Landscape through the intelligent use of in-memory technology. You’ll gain valuable insight with this interview: From ABAPer to MOBILEr: The Evolution of SAP Developers, where SAP Mentor DJ Adams talks about developer evolution with SAP HANA, Java, Eclipse, and Cloud. Watch more sessions on SAP HANA. January 10, 2013

 

It’s Here: SAP Business Suite, Powered by SAP HANA

SAP just announced availability of the SAP Business Suite powered by SAP HANA. SCN’s own Siva Darivemula summarizes the announcement, including a blog post by SAP CTO Vishal Sikka and overview video. January 10, 2013

 

What's New in SAP HANA SPS05

Following the model of his very successful "What's New" blogs from his SAP NetWeaver Portal days, Daniel Wroblewski summarizes the new features of SAP HANA SPS05 in this blog. See the related post by Lucas Sparvieri about the SAP HANA Text Analysis capabilities of SPS05. January 3, 2013


Meet the Distinguished Engineers

SAP HANA is the fastest growing product in SAP's history, with over 400 customers after just 12 months, and there will be an unprecedented demand for SAP HANA resources. With this comes the need to understand the level of experience of a HANA engineer and their areas of expertise. The Distinguished Engineer program is an SAP-sponsored, community-led effort to address this perceived skills gap in the HANA technical community, and to recognize those with a high level of technical skills, as well as embracing those who are learning and are on their way to gaining skills. Learn more. January 3, 2013

 

New from SAPinsider Magazine:

Optimizing ABAP for SAP HANA: SAP's 3-Step Approach - In this article, you'll learn SAP's three-step approach to optimize SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP for the SAP HANA database.

 

Build Solutions Powered by SAP HANA to Transform Your Business - Read how the SAP Custom Development organization is helping customers build business-critical solutions powered by SAP HANA. January 3, 2013

 

2012

HANA Videos from SAP TechEd Live

Replay these interviews from Madrid for a variety of insights into SAP HANA:

 

 

Find more interviews in the catalog of HANA interviews from Las Vegas. November 28, 2012


SAP HANA One Innovative App Contest

Build your most innovative app on HANA One in AWS Cloud. Register by December 12, 2012. Learn more. December 3, 2012

 


 

New Space: SAP NetWeaver BW powered by SAP HANA

Follow the new space dedicated to releases of SAP NetWeaver BW on SAP HANA. November 26, 2012

 

How to Configure SAP HANA for CTS

Learn how to use the Change and Transport System (CTS) together with SAP HANA. November 26, 2012

 

SAP HANA Installation Guide – Trigger-Based Data Replication

This guide details the installation and configuration of trigger-based replication for SAP HANA – the SAP Landscape Transformation Replication Server. November 26, 2012

 

The Road to HANA for Software Developers

Developer Whisperer Juergen Schmerder published this helpful guide for developers interested in HANA to help find their way through the jungle of documents out there. October 31, 2012

 

Preparing for HANA: How to Achieve SAP Certified Technology Associate Certification

How do you prepare for the actual certification? In this blog, SAP Mentor Tom Cenens provides some helpful information on the certification and how to pass. October 31, 2012

 

Hit “Replay” on SAP HANA! Visit SAP TechEd Online

The SAP TechEd Live studio in Las Vegas featured interviews about SAP HANA One (productive HANA on AWS), SAP HANA Academy, RDS for HANA, the HANA Distinguished Engineer program, how startups are using HANA, and a deep dive on SAP HANA development. Check out all these and more interviews. October 26, 2012

 

SAP HANA Academy: Watch, Follow, and Learn SAP HANA from SAP and Ecosystem Partners Experts

This week at SAP TechEd, we announced the launch of the SAP HANA Academy. Access videos and exercises about everything from security, to working with data in SAP HANA Studio and SAP BusinessObjects Data Services, to integrating SAP HANA with Mobile or Analytics. Also, see the related SAP TechEd Online video. October 23, 2012

 

Better Choice – SAP BW on SAP HANA

You think you know databases? Think again. Watch the short animated video to see how you can make a better choice with SAP BW on HANA. Learn how you can better handle your exploding data volume, and why your business can benefit from real time data analysis. October 23, 2012

 

Join the first Google+ HANA Hangout!

Hang out with SAP HANA experts on Monday, October 29 at 9 am PT for a live, streamed chat about SAP HANA and big data. Participants include Aiaz Kazi, Head of Technology & Innovation Marketing for SAP, and Amit Sinha, Head of Database & Technology Marketing at SAP and special guest Irfan Khan, CTO of Sybase. October 26, 2012


What Customers Say About SAP HANA

“Fujitsu and SAP’s history of co-innovation and collaboration has now provided both very large and small customers with a scalable in-memory appliance that can quickly be implemented to dramatically increase data processing and real-time information analytics for decision making,” says Rolf Schwirz, CEO Fujitsu Technology Solutions. Read more in SAP In-Memory Computing - Procter & Gamble Customer Testimonial, SAP HANA Helps Federal Mogul to Improve Performance, SAP HANA Helps Booan Run Better, Hilti Customer Testimonial and Charite Improves Lives with SAP HANA. October 5, 2012

 

First Experience with ABAP for HANA – Evolution or Revolution?

Check out this excellent blog by SAP Mentor Tobias Trapp, and contribute to the new, dedicated ABAP for HANA space.

 

Read more about how co-innovation among SAP and SAP Mentors enabled optimization of the ABAP platform for HANA in Sanjay Khanna’s blog All for One and HANA for All. October 3, 2012

 

With All the Facts and Information Readily Available, Why Is It So Tough for Some to Speak Truth About SAP HANA?

Mark Yolton, SVP Communities & Social Media at SAP, put together this nice collection of great blogs, videos, articles, and other content that will help you understand the essence and the truth about SAP HANA. Top picks include: What Oracle Won't Tell You about SAP HANA by Steve Lucas, EVP Database & Technology at SAP, and Puneet Suppal's SAP HANA and the Pretenders. October 3, 2012

 

Turbocharge Your Applications with SAP HANA (Webinar Recording)

In this recording, learn how to add new revenue streams and monetize in-memory computing with new services and offerings, turbocharge your applications with SAP for OEM Partners, and reduce administration costs and do ETL, materialization, aggregation, and summarizing in just one step.

 

Video Blog: The State of SAP HANA - Debating Killer Apps and Skills Needs

To commemorate the first year anniversary of HANA's General Availability, Jon Reed taped this special Google Hangout with fellow SAP Mentors John Appleby, Vijay Vijayasankar, and Harald Reiter. September 14, 2012

 

How to Analyze Who Has Access to Particular Objects

Following his blogs on how to analyze security relations in SAP HANA system, SAP Mentor Tomas Krojzl looks at authorization relationship between users and objects. September 14, 2012
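As a rough sketch of the kind of lookup such a tool performs, the snippet below assembles a parameterized query against HANA's EFFECTIVE_PRIVILEGES system view. The view and column names reflect the HANA system catalog but should be verified against your revision's SQL reference; the schema and object names are placeholders.

```python
# Sketch: build a query listing which users hold which privileges on one object.
# Verify view/column names (EFFECTIVE_PRIVILEGES, USER_NAME, PRIVILEGE, GRANTOR)
# against your HANA revision's documentation before relying on this.

def access_query(schema, object_name):
    """Return (sql, params) listing users, privileges, and grantors for an object."""
    sql = (
        "SELECT USER_NAME, PRIVILEGE, GRANTOR "
        "FROM EFFECTIVE_PRIVILEGES "
        "WHERE SCHEMA_NAME = ? AND OBJECT_NAME = ?"
    )
    return sql, (schema, object_name)

sql, params = access_query("MYSCHEMA", "SALES_VIEW")
# Execute with your driver of choice, e.g. cursor.execute(sql, params)
```

Keeping the object names as bind parameters avoids SQL injection when the tool accepts user input.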

 

New Publication: A Storage Advisor for Hybrid-Store Databases

This paper, published in the Proceedings of the VLDB Endowment by SAP Research, proposes a storage advisor tool that supports database administrators in deciding between row and column data management. September 14, 2012

 

Spotfire on HANA (and a bit of a comparison)

After a previous blog “Tableau on HANA,” Ronald Konijnenburg of Logica Nederland B.V. got curious again about how a similar third-party application would behave when connecting it to HANA. September 7, 2012


From Online Gaming to Genome Analysis SAP HANA Creates New Business Opportunities

Technology itself does not give your business an edge—how you use that technology does. In her latest blog post, SAP’s Ruks Omar introduces the SAP HANA Use Case Repository, where you’ll find numerous applications for SAP HANA, and a call to share your use case. September 7, 2012

 

SAPInsider: SAP HANA is a Journey, Not a Destination

Since its release in 2010, SAP HANA has rapidly evolved from an appliance for accelerating analytics to an application platform — and there's still more to come. In this SAPinsider Q&A, hear from Dan Kearnan, Senior Director of Marketing for Data Warehousing and SAP HANA, who discusses this in-memory technology's impressive growth and sheds light on where it's headed. September 7, 2012

 

Free Course on In-Memory Data Management

Gain deep technical understanding of a dictionary-encoded column-oriented in-memory database and its application in enterprise computing with this new offering from the Hasso Plattner Institute (HPI).

 

The course, guided by Dr. Plattner himself, begins September 3, 2012 and requires about 3 hours of effort per week over 6 weeks. See the course overview on SCN and visit the openHPI web site for complete details. August 14, 2012

 

Webinar Replay Now Available

Transform Your Business with the Real-Time Power of SAP HANA - According to a study by Oxford Economics, companies that implement real-time systems see an average 21% increase in revenue and a 19% reduction in IT cost. But what does real time really mean? August 24, 2012

 

Sign up for the August 16 Webinar: Transform Your Business with the Real-Time Power of SAP HANA

This 30-minute webinar focuses on how a real-time in memory data platform can give companies unprecedented and immediate insights into their customers, products, services and operations by enabling the analysis of huge volumes of data from virtually any source -- for improved agility and bottom line performance. August 14, 2012

 

I'm in a HANA State of Mind

Says SAP Mentor John Appleby, "...because once you start spotting opportunities for SAP HANA, you won't stop until you find ways to disrupt entire industries." August 1, 2012

 

SAP HANA Startup Forum Day - TLV 2012

Erez Sobol, Head of Technology Ventures at SAP Labs Israel, recaps an exciting day of learning and collaboration centered around big data and SAP technologies as part of the world-wide SAP Startup Focus Program. August 2, 2012

 

HANA and the Future of Personalized Medicine

Medicine can now be aided by tools capable of processing large volumes of data quickly. HANA is well placed to establish a strong role in the new era of personalized medicine. Mark Heffernan shares some thoughts and observations on the potential for HANA and personalized medicine. July 31, 2012

 

SAP HANA CodeJam - Why Code in SAP HANA?

SAP Mentor Tammy Powlas shares her experience at the first SAP CodeJam focused exclusively on SAP HANA. July 30, 2012

 

New Installation/Import - Including Unified Installer -  for SAP NetWeaver BW Powered by SAP HANA

SAP’s Roland Kramer provides guidance for implementing BW on SAP HANA, whether it’s a new installation or an export of an existing system with any DB export. July 27, 2012


Using JPA to Persist Application Data in SAP HANA

This document proposes a solution for using the Java Persistence API framework JPA to persist Java classes in HANA DB. July 18, 2012

 

Create Your Own Security Monitoring Tool

SAP Mentor Tomas Krojzl of IBM shows how to create a tool that will give you an overview of user role assignments in your SAP HANA system. Part I | Part II. July 18, 2012

 

Real-time Gross-to-Net Profitability Analysis - HANA PoC at COIL

Vistex partnered with SAP and IBM in the SAP Co-Innovation Lab to develop a solution to provide real-time profitability analytics while reducing the overall impact on transactional processing and other business operations. In this blog, Kevin Liu of SAP Co-Innovation Lab introduces the project and resulting white paper.

 

SAP NetWeaver AS ABAP for HANA

How does ABAP help to leverage the benefits of in-memory database technology? This document describes SAP's vision, strategy, development, and commitment to enable ABAP for SAP HANA. June 25, 2012

 

Does SAP HANA Replace BW? (Hint: No.) - Part 2

In this part 2 blog, SAP Mentor John Appleby continues where SAP EVP Steve Lucas left off in his original blog post inspired by a series of posts in the Twittersphere. June 25, 2012

 

Download the SAP HANA Essentials eBook (It's Free!)

In this video blog, SAP HANA expert Jeffrey Word introduces the new SAP HANA Essentials eBook. June 25, 2012

 

Announcing the SAP HANA Distinguished Engineer Program

Learn about a new program from SAP that aims to promote and encourage technical expertise in SAP HANA. June 19, 2012

 

Happy First Birthday, HANA!

On the first birthday of SAP HANA, SAP Mentor Vijay Vijayasankar from IBM reflects on the progress made over the last year and looks forward to challenges and opportunities ahead. June 18, 2012

 

SAP Insider: Powered by SAP HANA

In this SAPinsider article, Scott Leatherman of SAP explains how value-added resellers, independent software vendors, and systems integration partners are helping companies that have "big data" challenges understand the scale of SAP HANA and identify areas where it can help drive their business forward. June 18, 2012

 

Get your own SAP HANA DB server on Amazon Web Services

Get your hands on your own HANA DB server using one of the three image sizes we made available for you. Check it out now, create your own HANA@AWS environment, and get started with SAP HANA! June 1, 2012

 

Happy Birthday to You, HANA!

On Monday, June 18, SAP HANA turns one year old, and we'd like you to be a part of the celebration. Bay Area residents may join us in Palo Alto, and everyone's welcome to join in on the virtual birthday party. Festivities start at 10 AM Pacific time. June 14, 2012

 

Understanding Look and Feel of SAP HANA STUDIO

In this document, Krishna Tangudu discusses the basic navigation for the SAP HANA Sandbox system, with an emphasis on the look and feel of the system. May 31, 2012

 

Rapid Deployment Solution for Banking Powered by SAP HANA Transforms your Business

To help banks speed up the adoption of SAP HANA, SAP offers Rapid Deployment Solutions for banking. Susanne Knopp highlights them in this recent blog. May 31, 2012

 

Getting Started with SAP HANA Developer Center

In this short tutorial, SAP Mentor and Development Expert Alvaro Tejada Galindo covers some HANA Developer Center essentials: creation of a Developer Center account, CloudShare, creation of row and column tables, upload of a CSV file to SAP HANA, creation of a stored procedure, creation of a view, and graphic analysis using SAP HANA Studio's own tools. May 9, 2012

 

Who's Talking About SAP HANA? Find out on this "Conversation Heat Map"

Chris Kim of SAP Global Marketing introduces a tool for visualizing social media conversations around #SAP #HANA. Check it out. May 10, 2012

 

Explore Use Cases, Quantify Business Value

In these two blogs, SAP Mentor Rukhshaan Omar previews two new decision-making tools she'll be unveiling at SAPPHIRE NOW Orlando: The HANA use case repository and the business value calculator. May 8, 2012

 

Developer's Journal: ABAP/HANA Connectivity via Secondary Database Connection

Interested in how to access HANA from your ABAP systems? In this edition of the HANA Developer's Journal, Thomas Jung explains how much can be done today when HANA runs as a secondary database for your current ABAP-based systems, and what development options within the ABAP environment support this scenario. April 15, 2012

 

SAP HANA Technical Overview – An Entry Point to Our Revolutionary Chapter

This blog introduces the latest and greatest technical overview white paper for SAP HANA. This essential document provides a general understanding of SAP HANA as of support package 3 (SP03), and covers database administration, deployment scenarios, data load architecture scenarios, and more. April 20, 2012

 

SAP HANA Scale-Out Performance Test: Blog Commentary

In his blog SAP HANA - Scale Out Performance Test Results - Early Findings, Sam Bhat of United Software provides general guidelines for people interested in considering new database technologies like SAP HANA. Josh Greenbaum (EAC) summarizes the data from SAP's latest HANA scalability test in his blog SAP Ups the HANA Challenge following SAP's April 10 press conference. April 20, 2012

 

Visit the SAP Newsroom for more news from the April 10 press conference. April 11, 2012

 

Inside SAP HANA: Optimizing Data Load Performance and Tuning

SAP Mentor John Appleby outlines seven steps and offers insight into the best ways to optimize data models and load performance in SAP HANA. He covers not only optimizing the data model, but also testing load parameters and choosing a partition scheme carefully. April 4, 2012

 

Back to Featured Content.

HANA_ADMIN.tgz import fails during SPS07 installation.


During the installation or upgrade of HANA 1.0 SPS07, the installation process fails at the import of HANA_ADMIN.tgz.

The import of HANA_ADMIN.tgz fails because the statistics tables do not exist in the HANA database. As the tables are missing, the import cannot be carried out successfully.

 

In order to resolve this issue, perform the following steps:

 

After the import of HANA_ADMIN.tgz fails, connect to the HANA database using the SAP HANA Studio.

The database will be up and running, because the HANA database server itself was installed successfully; only the delivery units were not imported, which is why the error was reported.

 

By default, the statistics server is no longer part of HANA SPS07. As a result, the statistics tables required for the import of the HANA_ADMIN delivery unit are not available, and the import fails.

The statistics server has to be activated manually.

This can be done by going to the Configuration tab --> nameserver.ini --> statisticsserver and changing the parameter "active" from "false" to "true".
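
If the Studio is not available, the same parameter can be changed with SQL. This is a sketch that follows the parameter path shown in the Studio steps above; run it as a sufficiently privileged user:

```sql
-- Activate the embedded statistics server so that the statistics tables are created
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini','SYSTEM')
  SET ('statisticsserver','active') = 'true' WITH RECONFIGURE;
```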

 

 

 

Once the statistics server is activated, the delivery units can be imported successfully.

 

The above steps resolve the failure to import delivery units during an installation or upgrade of HANA to SPS07.

 

Thanks and regards,

Shaik Arshad

Reading HANA table content using ABAP


*&---------------------------------------------------------------------*
*& Report  ZDEMO_HANA_QUERY
*& Reads the top 5 purchase orders from a HANA table via ADBC,
*& using a secondary database connection maintained in DBCON (SM30).
*&---------------------------------------------------------------------*
REPORT zdemo_hana_query.

TYPE-POOLS: adbc.

TYPES: BEGIN OF ty_purchase_order,
         purchaseorderid TYPE char30,
         createdby       TYPE char30,
         grossamount     TYPE char30,
         netamount       TYPE char30,
         currency        TYPE char30,
       END OF ty_purchase_order.

DATA: lt_data TYPE TABLE OF ty_purchase_order,
      ls_data TYPE ty_purchase_order.

DATA: lv_sql  TYPE string,
      lv_from TYPE string,
      v_rows  TYPE i.

DATA: lt_meta   TYPE adbc_rs_metadata_descr_tab,
      lt_column TYPE adbc_column_tab,
      ls_meta   LIKE LINE OF lt_meta,
      ls_column LIKE LINE OF lt_column.

* Variables for the secondary database connection
DATA: lr_dbconn     TYPE REF TO cl_sql_connection,
      lv_con        TYPE dbcon_name VALUE '<your hana db host added in dbcon view in sm30>',
      lr_sql_env    TYPE REF TO cl_sql_statement,
      lr_sql_result TYPE REF TO cl_sql_result_set,
      lr_sql_exc    TYPE REF TO cx_sql_exception,
      tab_ref       TYPE REF TO data,
      lr_tabledescr TYPE REF TO cl_abap_tabledescr,
      gt_components TYPE cl_abap_structdescr=>component_table,
      gs_components LIKE LINE OF gt_components.

FIELD-SYMBOLS: <table> TYPE ANY TABLE,
               <str>   TYPE any.

TRY.
*   Open the secondary database connection to SAP HANA
    lr_dbconn = cl_sql_connection=>get_connection( con_name = lv_con ).
    CREATE OBJECT lr_sql_env
      EXPORTING
        con_ref = lr_dbconn.

*   Build the native SQL statement
    lv_sql  = 'SELECT TOP 5 *'.
    lv_from = 'FROM "SAP_HANA_EPM_DEMO"."sap.hana.democontent.epm.data::purchaseOrder"'.
    CONCATENATE lv_sql lv_from INTO lv_sql SEPARATED BY space.

*   Execute the query and read the result set metadata
    lr_sql_result = lr_sql_env->execute_query( lv_sql ).
    lt_meta = lr_sql_result->get_metadata( ).

*   Build the component list for a dynamically typed internal table
    LOOP AT lt_meta INTO ls_meta.
      ls_column = ls_meta-column_name.
      TRANSLATE ls_column TO UPPER CASE.
      APPEND ls_column TO lt_column.
      gs_components-name = ls_meta-column_name.
      gs_components-type ?= cl_abap_datadescr=>describe_by_data( p_data = ls_meta-data_type ).
      REPLACE ALL OCCURRENCES OF '.' IN gs_components-name WITH ''.
      CONDENSE gs_components-name.
      APPEND gs_components TO gt_components.
    ENDLOOP.

*   Create the internal table from the component list
    lr_tabledescr = cl_abap_tabledescr=>create(
        p_line_type = cl_abap_structdescr=>create( p_components = gt_components ) ).
    CREATE DATA tab_ref TYPE HANDLE lr_tabledescr.

*   Bind the dynamic internal table to the result set and fetch the rows
    lr_sql_result->set_param_table(
        itab_ref             = tab_ref
        corresponding_fields = lt_column ).
    v_rows = lr_sql_result->next_package( ).
    lr_sql_result->close( ).
    lr_dbconn->close( ).

*   Copy the fetched rows into the typed internal table
    ASSIGN tab_ref->* TO <table>.
    LOOP AT <table> ASSIGNING <str>.
      MOVE-CORRESPONDING <str> TO ls_data.
      APPEND ls_data TO lt_data.
    ENDLOOP.

  CATCH cx_sql_exception INTO lr_sql_exc.
    MESSAGE lr_sql_exc->get_text( ) TYPE 'I'.
ENDTRY.

BREAK-POINT.

Export and Import feature in HANA Studio


Hi Everyone,

This document will help you understand the various options available under Export and Import in HANA Studio. It is based on the current design, which may be subject to change in future releases of HANA Studio.

Here it goes...

Why do we have Export and Import?

When a user creates information models, tables, or landscapes and wants to move them to different systems (new or existing), instead of recreating everything in the target system, they can simply be exported and imported into the target, reducing the user's effort. This is similar to a transport of objects in BW terms. The import option also supports importing metadata from other systems.

Where to access this?

Export and Import can be accessed from Quick Launch->Content, through the File menu, or through the right-click context menu of tables.

1.jpg

 

What are the options available in Export & Import?

Export:

2.jpg

Under SAP HANA Content:

Delivery Unit: A delivery unit is a single unit which can be mapped to multiple packages and exported as a single entity, so that all the packages assigned to the delivery unit can be treated as one unit. The user can use this option to export all the packages that make up a delivery unit, together with the relevant objects contained in them, to a HANA server or to a local client location. This is a kind of "transport request" in BW terms.

Developer Mode: This option can be used to export individual objects (views) to a location in the local system. It is needed only in rare cases.

SAP Support Mode: This can be used to export objects along with their data for SAP support purposes, when requested. For example, if a user creates a view which throws an error that they cannot resolve, they can use this option to export the view along with its data and share it with SAP for debugging purposes.

Under SAP HANA Studio:

Landscape: To export the landscape from one system to another. (There is already a document that covers this in detail, so it is not covered here: SAP HANA: importing and exporting Landscapes in HANA STUDIO)

Tables: This option can be used to export tables along with their content.

 

Import:

3.jpg

Under SAP HANA Content:

Delivery Unit: An exported delivery unit can be imported either from the HANA server or from a local client location.

Developer Mode: To import previously exported views from the local client location.

Mass Import of Metadata: This option is used to import metadata (table definitions) from SAP ERP source systems into HANA using the SAP Load Controller, if the user uses Sybase Replication Server for data provisioning.

Selective Import of Metadata: This is similar to the above, but in this case SAP BusinessObjects Data Services is used for data provisioning.

Under SAP HANA Studio:

Landscape: To import the exported landscape in the target system

Tables: To import the exported tables into target system

 

How to make use of these options?

Export:

Delivery Unit:

A delivery unit must have been created by the user prior to using this option. A delivery unit can be created through Quick Launch->Setup->Delivery Units…

4.jpg

Once the delivery unit is created and the packages are assigned to it, go to Quick Launch->Content->Export->Delivery Unit and select the delivery unit. The user can see the list of packages assigned to it.

5.jpg

The delivery unit can be exported either to the HANA server location or to a local client location, selected via the radio button.

6.jpg

The user can restrict the export through "Filter by time", which means only views modified within the specified time interval will be exported.

Select the delivery unit and export location, then click Next->Finish. This exports the selected delivery unit to the specified location.

Developer Mode:

Go to Quick Launch->Content->Export->Developer Mode->Next

Select the views to be exported (individual views or groups of views and packages), select the local client location for the export, and click Finish.

7.jpg

SAP Support Mode:

Go to Quick Launch->Content->Export->SAP Support Mode->Next

8.jpg

Select the view that needs to be debugged by SAP Support. This exports the view along with the table data it refers to, directly to the HANA server "backup" location.

Landscape: This is already discussed in the other document mentioned earlier.

Tables:

This is already discussed at http://scn.sap.com/docs/DOC-26312

Import:

Delivery Unit:

Go to Quick Launch->Content->Import->Delivery Unit->Next and select the delivery unit (from the HANA server or from the local client).

9.jpg

Here, the user can select "Overwrite inactive versions", which means that any inactive versions of objects (from a previous import) will be overwritten. If the user selects "Activate objects", all imported objects are activated by default after the import, so the user does not need to trigger the activation manually for the imported views.

Developer Mode:

Go to Quick Launch->Content->Import->Developer Mode->Next

10.jpg

Browse to the local client location where the views were exported, select the views to be imported (individual views or groups of views and packages), and click Finish.

The user can select “Overwrite Existing Objects” to overwrite already imported objects, if any.

Mass Import of Metadata:

Go to Quick Launch->Content->Import->Mass Import of Metadata ->Next and select the target system

Configure the System for Mass Import and click Finish

11.jpg

Selective Import of Metadata:

Go to Quick Launch->Content->Import->Selective Import of Metadata ->Next and select the target system

Select the source connection of type "SAP Applications". Remember that a DataStore of type SAP Applications must already have been created. Click Next.

12.jpg

Now select the tables (the user can also search for tables here) and the target schema for the metadata import, and click Next.

Validate if needed and click Finish

13.jpg

Import of landscapes and tables works in a similar way, so it is not discussed here.

In future releases of HANA Studio, you may expect a provision to import data directly from a flat file (Excel or CSV) through this import option.


Hope this document helps to understand the Import and Export feature of HANA Studio.


Please share your comments, if any. Thanks.

Rgds, Murali

SAP HANA Audit Trail - Best Practice


Version 1.0 2013

 

General information regarding the SAP HANA audit trail can be found in chapter 12, Auditing Activity in SAP HANA Systems, of the SAP_HANA_Administration_Guide and in chapter 9, Auditing Activity in SAP HANA Systems, of the SAP_HANA_Security_Guide.

 

This version of the document refers to SAP HANA SPS 6 (revision 60).

 

Introduction

The main requirement for auditing of a system is traceability of actions performed in that system. The main question is: Who did or tried to do what when?

 

In a database, typical actions that need to be audited are:

 

  • changes of a user's authorization
  • creation or deletion of database objects
  • authentication of users
  • changes of the system configuration
  • changes of the audit configuration
  • access to or alteration of sensitive information

 

Auditing does not directly increase the system's security, but wisely designed, it can help:

 

  • uncover security holes in case too many privileges were granted to some user
  • show attempts to break security
  • protect the system owner against accusations of security violation and data misuse
  • the system owner meet their security standards

 

If auditing is to be introduced to an instance, some global settings have to be specified once to enable auditing globally and to specify where the auditing result is stored.

 

The storage of this result is called the "audit trail" in this document.

 

Additionally, one or more so-called "audit policies" have to be created and enabled to specify which actions on which objects for which users should be audited.

 

Currently, the OS SYSLOG is the only supported audit trail target. The OS SYSLOG provides means to safely store the audit trail in such a fashion that even the database administrator cannot access or change it. There are numerous storage possibilities for the OS SYSLOG, including storing it on other systems. Also, the OS SYSLOG is the default logging facility in Unix/Linux systems, so many IT departments already have a strategy in place to deal with OS SYSLOG entries. This provides a high degree of flexibility and security, as well as integration into a larger system landscape.

 

Basic Configuration

It is strongly recommended to activate the audit trail and, in the default configuration, to write to the OS SYSLOG. This applies especially to production systems but is also valid for all other system types.
To process the messages from the HANA audit trail, the OS SYSLOG must be configured properly. Use the OS SYSLOG manual for technical configuration details.

 

The OS SYSLOG can be configured to write its messages to several targets at a time. Targets to be considered are:

 

  • A central directory on an external Log Server, to prevent loss of data due to server issues or data manipulation. This also protects the data against access of users with extended local privileges like local DB and OS administrators.
  • A local directory on OS level, to prevent loss of data due to network issues or log server downtime. Restrict the group of users with change privileges on these files to an absolute minimum.
  • The HANA trace file directory <HANA_ROOT>/<SID>/HDB<InstNr>/<servername>/trace, because this file can easily be read in the HANA Studio by HANA administrators with the privilege CATALOG READ or the role MONITORING.

 

Configure the targets based on your compliance requirements and according to your analytic processes which consume the log.

Align your auditing policies and the users having access to the logs with the relevant data protection rules.

 

Global Audit Configuration using HANA Studio

In the HANA Studio there is an interface to set the global audit configuration. A user needs the system privilege AUDIT ADMIN or INIFILE ADMIN to perform these tasks.

 

The auditing can be enabled or disabled globally.

 

The type of the audit trail should be set to 'SYSLOGPROTOCOL'.

 

The type of the audit trail can also be set to 'CSVTEXTFILE', but THIS MUST NOT BE USED ON PRODUCTION SYSTEMS as it has severe restrictions. On non-production systems it can help with checking the audit results. In the case of a non-production instance using 'CSVTEXTFILE', you can specify the path where the text file should be stored.

 

Global Audit Configuration using SQL statements

If for some reason the HANA Studio cannot be used, a user with the system privilege AUDIT ADMIN or INIFILE ADMIN can use the following SQL statements to alter the configuration of the auditing system.

 

Activate/deactivate global auditing:


ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('auditing configuration','global_auditing_state' ) = 'true' with reconfigure;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('auditing configuration','global_auditing_state' ) = 'false' with reconfigure;


Set audit trail type:


ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('auditing configuration','default_audit_trail_type' ) = 'CSVTEXTFILE' with reconfigure;

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('auditing configuration','default_audit_trail_type' ) = 'SYSLOGPROTOCOL' with reconfigure;


Set audit target path:

This only works when the audit trail type is CSVTEXTFILE.

 

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') set ('auditing configuration','default_audit_trail_path' ) = '<path>' with reconfigure;


Audit Policies

Audit Policies define which events to audit.

 

Newly created audit policies are not enabled, meaning they will not trigger any auditing. They have to be enabled explicitly, and can be disabled and re-enabled at any point in time afterwards by an administrator. The administrator needs the system privilege AUDIT ADMIN to enable or disable an audit policy.

 

The policy has several further attributes, which are used to narrow the number of events that are audited (see appendix).

 

Users with any of the system privileges AUDIT ADMIN, CATALOG READ or DATA ADMIN are able to check existing audit policies in the system view AUDIT_POLICIES.
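
A typical policy lifecycle, in the statement style used later in this document, looks like this (a minimal sketch; the policy name AUDIT_GRANT is illustrative, and the syntax should be checked against the SQL reference for your revision):

```sql
-- A new policy is created in disabled state and must be enabled explicitly
CREATE AUDIT POLICY "AUDIT_GRANT" AUDITING SUCCESSFUL GRANT PRIVILEGE LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_GRANT" ENABLE;

-- Check existing policies (requires AUDIT ADMIN, CATALOG READ or DATA ADMIN)
SELECT * FROM "PUBLIC"."AUDIT_POLICIES";
```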

 

Audit Classes

To provide a more functional view of auditing, we introduce the term "audit class". An audit class represents a functional aspect that should be tracked using the audit trail, while the audit policies represent the technical implementation.

 

These Classes can be grouped in three main groups:

 

  • Auditing
    • Self-Auditing
    • User SYSTEM
    • Technical Users
    • Logon Monitoring
    • Exceptional Users
  • Change Documents of User and Authorization Management
  • Data Access and Procedure Execution

 

Background information for the audit classes follows below. A proposal of audit policies is provided as a table at the end of the document.

 

Self-Auditing

To ensure the reliability of the audit trail, it is essential to track changes to the audit trail configuration itself. These changes can be of two types:

 

  1. Changes regarding the System Settings for Auditing
    Define an Audit Policy for the action ‘SYSTEM CONFIGURATION CHANGE’
    Monitor the audit trail for changes of the three parameters global.ini -> auditing configuration -> default_audit_trail_path, default_audit_trail_type and global_auditing_state
  2. Changes of the auditing policies
    Define an Audit Policy for the actions ‘ENABLE AUDIT POLICY’ and ‘DISABLE AUDIT POLICY’.
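
The two self-auditing policies above could be sketched as follows (policy names are illustrative; syntax as of SPS6):

```sql
-- 1. Track changes of the global auditing settings
CREATE AUDIT POLICY "AUDIT_SELF_CONFIG"
  AUDITING SUCCESSFUL SYSTEM CONFIGURATION CHANGE LEVEL CRITICAL;
ALTER AUDIT POLICY "AUDIT_SELF_CONFIG" ENABLE;

-- 2. Track enabling/disabling of audit policies
CREATE AUDIT POLICY "AUDIT_SELF_POLICIES"
  AUDITING SUCCESSFUL ENABLE AUDIT POLICY, DISABLE AUDIT POLICY LEVEL CRITICAL;
ALTER AUDIT POLICY "AUDIT_SELF_POLICIES" ENABLE;
```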

 

Limitations and Mitigations of Self-Auditing:

 

  1. LIMITATION: 'SYSTEM CONFIGURATION CHANGE' currently (SPS6) writes to the defined audit target only. When the target is changed from SYSLOG to CSV file, this parameter change is written to the new target (the CSV file) only and is not documented in the old target (SYSLOG). You may only notice that no further entries are written to the SYSLOG.
    MITIGATION: The content of all ini files is available in the Configuration Validation tool in the SAP Solution Manager and should be monitored.
  2. LIMITATION: The actions ENABLE/DISABLE AUDIT POLICY do not audit the deletion or change of an audit policy.
    MITIGATION: Periodically check the content of the audit policies. You may use the views public.AUDIT_POLICIES or sys.AUDIT_POLICIES.
  3. SAP is considering delivering a mandatory audit policy that enforces self-auditing and resolves these limitations.

 

Auditing the SYSTEM User

User SYSTEM should be deactivated in a productive environment and used only in exceptional situations. Every attempt to use user SYSTEM must be treated as critical. As soon as someone activates the user in an emergency situation, you should monitor all activities.


Note: As of today (SPS6) the SYSTEM user must be active for updating the database.

 

Technical Users

Technical users tend to have static passwords that are not changed but are stored in external incoming connections or background jobs. Because of that, it is very unusual for these users to provide invalid credentials. You should pay attention when the logon of a technical user fails, because:

 

  • a failed logon could indicate misuse of the account.
  • repeated failed logons lead to a locked technical account and cause downtime for the business process/interface that uses this account.

 

Logon Monitoring

Track unsuccessful connect attempts of all users to detect critical activities.
Track successful connects to 'invalidate' preceding unsuccessful attempts (e.g. those caused by mistyping a password).
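
A sketch of the two corresponding policies (policy names are illustrative; syntax as of SPS6):

```sql
-- Unsuccessful connects of all users
CREATE AUDIT POLICY "AUDIT_CONNECT_FAILED" AUDITING UNSUCCESSFUL CONNECT LEVEL WARNING;
ALTER AUDIT POLICY "AUDIT_CONNECT_FAILED" ENABLE;

-- Successful connects, to 'invalidate' preceding failed attempts
CREATE AUDIT POLICY "AUDIT_CONNECT_OK" AUDITING SUCCESSFUL CONNECT LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_CONNECT_OK" ENABLE;
```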

 

Exceptional Users

Track all activities of exceptional users like SUPPORT, SAPOSS, EMERGENCY, or <firefighter>. These users tend to have enhanced authorizations, are intended for temporary use only, and are often provided to external persons in exceptional situations.
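
Such a policy could be sketched like this (policy and user names are illustrative; ALL ACTIONS requires a user list):

```sql
CREATE AUDIT POLICY "AUDIT_EXCEPTIONAL_USERS"
  AUDITING ALL ACTIONS FOR SUPPORT, EMERGENCY LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_EXCEPTIONAL_USERS" ENABLE;
```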

 

Change Documents of User and Authorization Management

In an ABAP environment, so-called 'change documents' allow tracking of activities in user and authorization management, such as creating and deleting users, altering roles, or granting privileges and roles. In SAP HANA this is currently not available as a separate functionality, but you can track these activities using the audit trail. It is not primarily used to detect suspicious issues but to provide a history of changes:
Common activities in user management, role and privilege management, and system configuration should be recorded for informational purposes.

 

Data Access and Procedure Execution

Based on your business needs, it may be useful to monitor activities on dedicated data objects and procedures.
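
For example, auditing of a sensitive table and procedure could be sketched like this (schema and object names are illustrative; per the note in the proposal tables, auditing INSERT should be avoided on SPS6):

```sql
CREATE AUDIT POLICY "AUDIT_SENSITIVE_DATA"
  AUDITING SUCCESSFUL UPDATE, DELETE, SELECT ON "MYSCHEMA"."SALARIES" LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_SENSITIVE_DATA" ENABLE;

CREATE AUDIT POLICY "AUDIT_SENSITIVE_PROC"
  AUDITING SUCCESSFUL EXECUTE ON "MYSCHEMA"."UPDATE_SALARY" LEVEL INFO;
ALTER AUDIT POLICY "AUDIT_SENSITIVE_PROC" ENABLE;
```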

 

Proposal for Technical Implementation

Find a proposal for the technical definition of audit policies on SAP HANA in the following overview.

Each entry names the audited action and the audit class it belongs to, together with the audited action status (SUCCESSFUL, UNSUCCESSFUL, ALL), the audit level (EMERGENCY, ALERT, CRITICAL, WARNING, INFO) and, if applicable, the restriction to users.

 

AUDITING

  • Self-Auditing
    • SYSTEM CONFIGURATION CHANGE: SUCCESSFUL - CRITICAL, for the parameters global.ini -> auditing configuration -> default_audit_trail_path, default_audit_trail_type and global_auditing_state
    • ENABLE AUDIT POLICY: SUCCESSFUL - CRITICAL
    • DISABLE AUDIT POLICY: SUCCESSFUL - CRITICAL
  • User SYSTEM
    • ALTER USER: ALL - CRITICAL - user SYSTEM only
    • CONNECT: UNSUCCESSFUL - CRITICAL - user SYSTEM only
    • ALL ACTIONS: SUCCESSFUL - INFO - user SYSTEM only
  • Technical Users
    • CONNECT: UNSUCCESSFUL - CRITICAL - user SAP<SID>
    • CONNECT: UNSUCCESSFUL - ALERT - <all technical users>
  • Logon Monitoring
    • CONNECT: UNSUCCESSFUL - WARNING
    • CONNECT: SUCCESSFUL - INFO
  • Exceptional Users
    • ALL ACTIONS: ALL - INFO - <list of users>

CHANGE DOCUMENTS, DATA ACCESS and PROCEDURE EXECUTION

  • Change Documents of User and Authorization Management: SUCCESSFUL - INFO for the actions
    • GRANT PRIVILEGE / REVOKE PRIVILEGE
    • GRANT STRUCTURED PRIVILEGE / REVOKE STRUCTURED PRIVILEGE
    • GRANT APPLICATION PRIVILEGE / REVOKE APPLICATION PRIVILEGE
    • GRANT ROLE / REVOKE ROLE
    • GRANT ANY / REVOKE ANY
    • CREATE USER / DROP USER / ALTER USER
    • CREATE ROLE / DROP ROLE
    • SYSTEM CONFIGURATION CHANGE
    • CREATE STRUCTURED PRIVILEGE / DROP STRUCTURED PRIVILEGE / ALTER STRUCTURED PRIVILEGE
    • ACTIVATE REPOSITORY CONTENT / IMPORT REPOSITORY CONTENT / EXPORT REPOSITORY CONTENT
    • SET SYSTEM LICENSE / UNSET SYSTEM LICENSE
    • DROP TABLE
  • Data Access and Procedure Execution: on demand
    • INSERT: do not use (SPS6)**
    • UPDATE / DELETE / SELECT: on demand for dedicated data objects
    • EXECUTE: on demand for dedicated procedures

 

** Auditing INSERT can currently cause general SAP HANA system issues under special circumstances. The INSERT statement is also included in ALL ACTIONS; because of that, ALL ACTIONS should be audited only rarely and only for dedicated users.

 

Appendix

 

Syntax of SQL Statements to create an Audit Policy

The syntax looks as follows in SPS6. It is subject to further improvement; therefore, you should check the latest SAP HANA SQL documentation matching the revision of your installation.

(Syntax diagram: see CREATE AUDIT POLICY in the SAP HANA SQL Reference.)

 

The name of the audit policy has to be unique among all audit policies.

 

The audit action specified after the keyword AUDITING can be set to audit only successful, only unsuccessful, or all statements.

 

Unsuccessful in this case means that the user was not authorized to execute the action. If another error occurs (e.g. misspelled user or object names, or syntax errors), the statement is not audited.

 

Only tables, views, and procedures can be specified as objects. They should be named including the schema they belong to. Currently, sequences and synonyms cannot be chosen as objects to be audited.

 

The audit policy can only be created if all specified audit actions can be combined with all specified objects.

 

Objects have to exist before they can be named in an audit policy.

 

If an object named in an audit policy is deleted, the audit policy remains in its current state.

 

If the object is re-created, the audit policy works for this object again.

 

If ACTIONS FOR is used, a user name has to be specified for which the audit policy should trigger the auditing. For all other actions, the specification of a user is optional and restricts the auditing to those users.

 

Users have to exist before they can be named in an audit policy.

 

Each policy is assigned a level. Possible levels, in decreasing order of importance, are: EMERGENCY, ALERT, CRITICAL, WARNING, INFO.

 

Tools checking the audited actions can then find the most important information and separate it from entries that merely have, for example, level WARNING.


References:

 

SAP HANA SQL Reference
http://help.sap.com/hana/SAP_HANA_SQL_Reference_en.pdf


or

 

SAP HANA SQL Reference
http://help.sap.com/saphelp_hanaplatform/helpdata/en/20/ff532c751910148657c32fe3431a9f/content.htm

-> CREATE AUDIT POLICY
http://help.sap.com/saphelp_hanaplatform/helpdata/en/20/d3d56075191014af43d6487fcaa603/content.htm

 

Audit Result / Audit Entry

The audited actions are written as so-called audit entries. Each action may result in one or more audit entries. If an audited action was performed implicitly by a procedure, the call to this procedure is audited together with the audited action.

 

Audit entries written to an audit trail have the following fields:

 

  • Event Timestamp - when the event occurred, in system local time (e.g. 08.09.2012 05:34)
  • Service Name - the service in which the event occurred (e.g. Indexserver)
  • Hostname (e.g. myhanablade23.customer.corp)
  • SID (e.g. HAN)
  • Instance Number (e.g. 23)
  • Port Number (e.g. 32303)
  • Client IP Address - IP address of the client application (e.g. 127.0.0.2)
  • Client Name - name of the client machine (e.g. lu241511)
  • Client Process ID - PID of the client process (e.g. 19504)
  • Client Port Number - port of the client process (e.g. 47273)
  • Policy Name (e.g. AUDIT_GRANT)
  • Audit Level (e.g. CRITICAL)
  • Audit Action (e.g. GRANT PRIVILEGE)
  • Active User - the user that executed the statement (e.g. MYADMIN)
  • Target Schema - the schema on which the privilege was granted, or the schema of the target object (e.g. PRIVATE)
  • Target Object - the object on which the privilege was granted
  • Privilege Name - the privilege that was granted/revoked (e.g. SELECT)
  • Grantable - whether the privilege or role was granted with or without GRANT/ADMIN OPTION (e.g. NON GRANTABLE)
  • Role Name - the role that was granted/revoked
  • Target Principal - the target of the action; useful for grant/revoke statements (e.g. HAXXOR)
  • Action Status - whether the statement was successful or unsuccessful (e.g. SUCCESSFUL)
  • Component - future use for configuration changes
  • Section - future use for configuration changes
  • Parameter - future use for configuration changes
  • Old Value - future use for configuration changes
  • New Value - future use for configuration changes
  • Comment
  • Executed Statement (e.g. GRANT SELECT ON SCHEMA PRIVATE TO HAXXOR)
  • Session Id - ID of the session the statement was executed in (e.g. 400006)

 

In the OS SYSLOG the fields are separated by ';', meaning that an entry looks like this:

<Event Timestamp>;<Service Name>;<Hostname>;<SID>;<Instance Number>;<Port Number>;<Client IP Address>;<Client Name>;<Client Process ID>;<Client Port Number>;<Audit Level>;<Audit Action>;<Active User>;<Target Schema>;<Target Object>;<Privilege Name>;<Grantable>;<Role Name>;<Target Principal>;<Action Status>;<Component>;<Section>;<Parameter>;<Old Value>;<New Value>;<Comment>;<Executed Statement>;<Session Id>;
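Since the fields are positional, individual values can be pulled out of such an entry with standard tools, for example with awk. This is a minimal sketch; the sample entry below is hypothetical, assembled from the example values above:

```shell
# Hypothetical audit-trail entry in the OS syslog format described above
# (note: the syslog format has no Policy Name field)
entry='08.09.2012 05:34;Indexserver;myhanablade23.customer.corp;HAN;23;32303;127.0.0.2;lu241511;19504;47273;CRITICAL;GRANT PRIVILEGE;MYADMIN;PRIVATE;;SELECT;NON GRANTABLE;;HAXXOR;SUCCESSFUL;;;;;;;GRANT SELECT ON SCHEMA PRIVATE TO HAXXOR;400006;'

# Field 12 is the Audit Action, field 13 the Active User
echo "$entry" | awk -F';' '{ print $12 " by " $13 }'
# prints: GRANT PRIVILEGE by MYADMIN
```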


Known limitations

Since the audit manager is part of the database engine, it can only audit events that happen inside the database engine. If the database engine is not online when an event occurs, the event simply cannot be detected and processed by the audit manager.

 

There are currently two such events which are noteworthy:

  • the upgrade of a DB instance: the upgrade is triggered while the instance is offline, and when it comes online again it is not possible to reconstruct which user triggered the upgrade at what time.
  • changes to the INI files containing the system configuration: unless the change is made via the SQL interface, the database engine does not see it. It may also have been made while the system was offline.

 

Note that the audit manager is part of every server that is part of the database. Thus, the name server, statistics server, or script server can also write an audit trail. However, their ability to write audit trails is limited by the fact that they cannot access the catalog tables that contain information about audit policies; they can therefore only write non-policy-based events, e.g. startup or shutdown.

 

Performance Considerations

Keeping the performance impact on the running system low is probably the most important quality of the auditing infrastructure. If auditing is enabled for a system, a lookup for qualifying audit policies has to be done for every incoming query. This lookup can be quite complex, since it has to determine all underlying objects accessed by a composite object (e.g. a view or a procedure). However, in most cases that call is non-blocking, because its outcome does not influence the query execution. Note that the audit entry is written whether or not the statement finishes successfully.

The second potential bottleneck is writing the log to OS-file audit trails. These file containers should use log write buffers that collect write requests to reduce disk I/O. However, the audit administrator should be aware that using write buffers means that, in a database crash scenario, audit trail entries could potentially be lost.

How to use HDBAdmin to analyze performance traces in SAP HANA


Most of the time, the Plan Visualizer is sufficiently powerful to understand what is going on inside of SAP HANA when you run a query. However, sometimes you need to get to a lower level of detail to understand exactly what is going on in the calculation engine.

 

It is then possible to use HANA Studio to record performance traces, and analyze them with HDBAdmin. This is a fairly advanced topic, so beware!

 

First, let's pick a query which runs slowly. This query takes 12 seconds, which is longer than I'd like. Admittedly, it's a tough query, grouping 1.4bn transactions and counting over 2m distinct customers.


We can double click on our HANA system in the HANA systems view, which will bring up the HANA Overview. Select Trace Configuration.


Now select the little edit button, next to the Performance Trace. Give your trace a name - I called it slowquery.tpt, and optionally select your username as a restriction, if you're sharing the system. You don't want to be doing anything else with this user whilst you run this query. You need to select a duration - 60 seconds is enough for me.


Select Finish, and immediately go and rerun your query. You can go and disable the performance trace in the same place, or you can just wait for the time to expire if you're not doing anything else.

 

Now, we need to go and get some software. You need two things - an X-Windows Client and a SSH client. If you are using MS Windows then I highly recommend Xming and PuTTY - other software usually causes problems. Go ahead and download and install them. When you run Xming, you will see an X appear in your task bar. Hover over it - and you should see that it shows Xming Server:0.0. If it doesn't show 0.0, take a note of what it says.


Now, you need to fire up PuTTY. PuTTY doesn't come with an installer - it's just a single executable, and it will bring up a configuration window like this:

Put your HANA hostname or IP in "hostname". Then select Connection -> SSH -> X11. Tick Enable X11 forwarding and type localhost:0.0 (or whatever number Xming displayed) in the X display location. This lets PuTTY know that we have an X server running on our local machine, so it can tell HANA about it.


Go back to the Session tab on the left, type a name for your session (I called it HANA) and select Save. Now select Open.


PuTTY should connect to your HANA system. You MUST now login as the correct HANA admin user - hdbadm in my case. This is critical.


Now we can do two things just to check things are good.

 

1) echo $DISPLAY - this shows us that we have the display correctly set. Note that it shows localhost:10.0 - that's just PuTTY taking care of X11 for us. Good.

2) xclock - this is the tried and tested way to check that X11 is functioning. You may see an Xming at the bottom of your screen and have to select that before you see the clock. Close the clock once it comes up - we are now good.


You can go right ahead and type ./HDBAdmin.sh, and HDBAdmin should now open.


Unfortunately HDBAdmin doesn't work out of the box (not sure why) and you have to install the Emergency Support Package the first time you run it. We do this with 4 little commands:

 

1) cd exe

2) sudo tar zxvf /usr/sap/HDB/SYS/global/hdb/emergency/emergencySupport.tgz .

3) sudo chown hdbadm.sapsys AttributeEnginePy.so _fuzzyPy.so executorPy.so

4) cd ..

 

This moves us into the executables folder, extracts the emergency support library as a superuser (you will need the root password), and then changes ownership so that the HANA user owns the files. Note that I used SID HDB, which is the name of my system. Yours will be different.


Now you can go ahead and rerun ./HDBAdmin.sh, and it will run without errors.


Select More, which will prompt you to move to Advanced Mode, and then Perf. Trace. We are now in the right mode to open our query. Finally!


Select the Load button, and then double click on your machine name (saphana1), and then double click on the trace folder. You should now see your performance trace appear. Select your trace and select Open.


Your performance trace is now opened. Select Call plans, and you should see your slow-running query. Double click on it.


Interestingly, the trace shows in this case that the cost is in merging dicts and aggregating in parallel. This system has a large number of partitions on the table (60) and there is a cost in doing a count distinct in each partition, and then remerging them. Perhaps a simpler partition strategy or one with the CUSTOMER_ID field as a hash, would reduce the cost of this query. I'll give that a go.

Final Words

 

The HDBAdmin tool is a legacy tool which is incredibly powerful, but not that easy to use. The PlanViz tool inside HANA Studio is much easier to use and easier to understand.

 

But, if you are stuck and you need to get into the depths of query execution in HANA, HDBAdmin is a very powerful tool. Happy hunting!



SAP HANA Release Notes List


I hope my fellow HANA colleagues can help me fill the gaps here; there are some revisions missing.

 

Secondly, I will start adding release dates and maybe the number of fixed issues and other interesting measures/facts.

 

SAP Note | Version | Revision # | Release Date | # of public Fixes
1533048 | 1.0 | 0 | 30.11.2010 | 0
1534185 | 1.0 | 1 | 08.12.2010 | 4
1546229 | 1.0 | 2 | 07.01.2011 | 2
1556097 | 1.0 | 3
1570935 | 1.0 | 4
1583460 | 1.0 | 5
1593170 | 1.0 | 6
(missing) | 1.0 | 7
(missing) | 1.0 | 8
(missing) | 1.0 | 9
1603794 | 1.0 | 10
1601489 | 1.0 | 11
1604725 | 1.0 | 12
1614870 | 1.0 | 13
1621486 | 1.0 | 14
1625712 | 1.0 | 15
1629440 | 1.0 | 16
1639140 | 1.0 | 17
1646454 | 1.0 | 18
(missing) | 1.0 | 19
1648901 | 1.0 | 20
1653292 | 1.0 | 21
1657892 | 1.0 | 22
1663228 | 1.0 | 23
1673965 | 1.0 | 24
1684255 | 1.0 | 25
1692981 | 1.0 | 26
1711643 | 1.0 | 27
1700750 | 1.0 | 28
1717633 | 1.0 | 29
1723370 | 1.0 | 30
1726390 | 1.0 | 31
1729784 | 1.0 | 32
1737205 | 1.0 | 33
1740711 | 1.0 | 34
1744526 | 1.0 | 35
1754314 | 1.0 | 36
1762834 | 1.0 | 37
1772104 | 1.0 | 38
1776798 | 1.0 | 39
1774253 | 1.0 | 40
1786963 | 1.0 | 41
(missing) | 1.0 | 42
(missing) | 1.0 | 43
(missing) | 1.0 | 44
1793917 | 1.0 | 45
1800724 | 1.0 | 46
1803674 | 1.0 | 47
1808897 | 1.0 | 48
1808882 | 1.0 | 49
1819928 | 1.0 | 50
1823655 | 1.0 | 51
1830621 | 1.0 | 52
1825895 | 1.0 | 53
1848035 | 1.0 | 54
1852425 | 1.0 | 55
1858938 | 1.0 | 56
1869142 | 1.0 | 57
1873115 | 1.0 | 58
(missing) | 1.0 | 59
1880274 | 1.0 | 60
1888585 | 1.0 | 61
1894285 | 1.0 | 62
1901670 | 1.0 | 63
1908139 | 1.0 | 64
1912291 | 1.0 | 65
1913174 | 1.0 | 66

 

1600147 - SAP HANA SPS 02 Release Note

1642937 - SAP HANA SPS 03 Release Note

1703675 - SAP HANA SPS 04 Release Note

1771591 - SAP HANA SPS 05 Release Note

1848976 - SAP HANA SPS 06 Release Note

1921675 - SAP HANA SPS 07 Release Note

SAP HANA Academy: Backup and Recovery - Backup Configuration Files


The SAP HANA Academy has published a new video in the series SAP HANA SPS 7 Backup and Recovery.

 

Backup and Recovery - Backup Configuration Files | SAP HANA

 

 

In this video you can see how to generate the ALTER SYSTEM ALTER CONFIGURATION statement for changed parameters.

 

The SQL file is attached to this document.


 

Syntax

 

select 'ALTER SYSTEM ALTER CONFIGURATION ('''|| file_name ||''', '''||
       case layer_name
           when 'SYSTEM' then layer_name
           when 'HOST' then layer_name ||''', '''|| host
       end ||
       ''') SET ('''|| section ||''', '''|| key ||''') = '''|| value ||''' WITH RECONFIGURE;'
       as "Configuration File Backup"
  from m_inifile_contents
 where layer_name != 'DEFAULT'
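The output of this query is a set of ready-to-run statements, one per non-default parameter. A generated statement takes roughly this form (the section, key, and value here are illustrative, not taken from the video):

```sql
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;
```

Replaying these statements against a freshly recovered system restores the parameter changes captured at backup time.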

 

We also show a simple sample backup script that copies HOST and SID configuration files.

 

The backup.sh file is attached to this document.

 

#!/bin/bash
# Simple sample backup script
# SAP HANA Academy
# http://academy.saphana.com
# Twitter: @saphanaacademy
# define backup prefix
TIMESTAMP="$(date +\%F\_%H\%M)"
BACKUP_PREFIX="SCHEDULED"
BACKUP_PREFIX="$BACKUP_PREFIX"_"$TIMESTAMP"
# source HANA environment
. /usr/sap/shared/DB1/HDB01/hdbenv.sh
# execute command with user key
# asynchronous runs job in background and returns prompt
hdbsql -U backup "backup data using file ('$BACKUP_PREFIX') ASYNCHRONOUS"
# create backup directory
BACKUP_DIR="$DIR_INSTANCE/backup"
BACKUP_DIR="$BACKUP_DIR"/INIFILE/"$TIMESTAMP"
SYS_BACKUP_DIR="$BACKUP_DIR/$SAPSYSTEMNAME"
HOST_BACKUP_DIR="$BACKUP_DIR/$VTHOSTNAME"
mkdir -p $BACKUP_DIR $SYS_BACKUP_DIR $HOST_BACKUP_DIR
# copy host configuration files
HOST_INI_DIR="$DIR_INSTANCE/$VTHOSTNAME"
cp $HOST_INI_DIR/*.ini $HOST_BACKUP_DIR
# copy system configuration files
SYS_INI_DIR="$DIR_INSTANCE"
SYS_INI_DIR="$(dirname "$SYS_INI_DIR")"
SYS_INI_DIR="$SYS_INI_DIR/SYS/global/hdb/custom/config"
cp $SYS_INI_DIR/*.ini $SYS_BACKUP_DIR
cp $SYS_INI_DIR/hdbconfiguration_* $SYS_BACKUP_DIR
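To run such a script on a schedule, a crontab entry for the <sid>adm user could look like the following. The script path and the 02:00 timing are assumptions for illustration, not from the video:

```
# run the backup script daily at 02:00, appending output to a log
0 2 * * * /usr/sap/scripts/backup.sh >> /usr/sap/scripts/backup.log 2>&1
```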

 

Finally, we briefly discuss the SAP HANA hdbparam tool, which, as far as I know, has not yet been documented.


 

SAP HANA Academy

Denys van Kempen

"no version information available" message when running SAP HANA command line tools


According to the SAP Product Availability Matrix for SAP HANA Platform Edition, the supported operating systems are SLES 11 SP1 and SP2.

 

For those that are running the latest SAP HANA revisions on the earlier SLES 11 SP1, you may notice that when running SAP HANA command line tools on the SLES server a message is returned:

 

/lib64/libuuid.so.1: no version information available (required by /usr/sap/DB1/HDB01/exe/libhdbbasement.so)

 

hdbbackupdiag -h

hdbbackupdiag: /lib64/libuuid.so.1: no version information available (required by /usr/sap/DB1/HDB01/exe/libhdbbasement.so)

 

Usage: hdbbackupdiag [options] [-d <directory>] { -b <backup> | -c <backup catalog> }

 

hana:~ #  cat /etc/SuSE-release

SUSE Linux Enterprise Server 11 (x86_64)

VERSION = 11

PATCHLEVEL = 1

 

The file libhdbbasement.so is linked against libuuid. The version included with SLES 11 SP1 is 1.41.1.

hana:~ # find /usr/sap -name libhdbbasement.so

/usr/sap/shared/DB1/exe/linuxx86_64/HDB_1.00.70.00.386119_1342680/libhdbbasement.so

 

hana:~ # ldd /usr/sap/shared/DB1/exe/linuxx86_64/HDB_1.00.70.00.386119_1342680/libhdbbasement.so | grep libuuid

        libuuid.so.1 => /lib64/libuuid.so.1 (0x00007f4ea8aed000)

 

lrwxrwxrwx 1 root root 14 2012-06-04 15:08 /lib64/libuuid.so.1 -> libuuid.so.1.2

 

To solve this issue, upgrade libuuid to the SP2 version using YaST or zypper:

hana:~ # ls -l /lib64/libuuid.so.1

lrwxrwxrwx 1 root root 16 Jan 23 16:10 /lib64/libuuid.so.1 -> libuuid.so.1.3.0

 

hana:~ # rpm -qa | grep libuuid

libuuid1-2.16-6.13.1
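After the upgrade, you can confirm that the installed library is at the SP2 level with a small version comparison using sort -V. This is a generic sketch; the version_ge helper and the version strings are illustrative, not a HANA or SLES tool:

```shell
# Return success (0) if installed version $1 is at least required version $2.
# sort -V sorts version strings naturally; the smaller of the two comes first.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# SLES 11 SP1 ships libuuid soname suffix 1.2; SP2 ships 1.3.0
if version_ge "1.2" "1.3.0"; then
  echo "libuuid OK"
else
  echo "upgrade libuuid"
fi
# prints: upgrade libuuid
```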

Publications of the SAP HANA database department


This is a list of selected publications made by the SAP HANA database team.

 

2014

  • Ingo Müller, Cornelius Ratsch, Franz Faerber. Adaptive String Dictionary Compression in In-Memory Column-Store Database Systems. EDBT 2014, Athens, Greece, March 2014.

 

2013

  • Sebastian Breß, Felix  Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake,  Efficient co-processor utilization in database query processing,  Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096
  • Martin  Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main  Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30,  2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del  Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan  Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber,  Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned  Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael  Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys:  Large Graph Analytics in the SAP HANA Database Through Summarization.  GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter  Sanders. Efficient Many-Core Query Execution in Main Memory  Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013
  • Martin  Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman  May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013,  Brisbane, Australia, April 8-12, 2013.

  • Martin Kaufmann,  Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann,  Franz Färber (SAP), Norman May (SAP): Timeline Index: A Unified Data  Structure for Processing Queries on Temporal Data, SIGMOD 2013,  New  York, USA, June 22-27, 2013.
  • Martin  Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann,  Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane,  Australia, April 8-12, 2013
  • Rudolf, M., Paradies, M., Bornhövd, C., & Lehner, W. (2013). The Graph Story of the SAP HANA Database. BTW (pp. 403–420).
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd,  C. (2012). Efficient transaction processing in SAP HANA database.  SIGMOD  Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-uwe, & Franz, F. (2009).  Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen.  Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova,  O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost  Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS  International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted  Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz  eines Data Warehouse. Datenbanksysteme für Business, Technologie und  Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.

SAP HANA Academy: Backup and Recovery - Backup Catalog


The SAP HANA Academy has published a new video in the series SAP HANA SPS 7 Backup and Recovery.

 

Backup and Recovery - Backup Catalog | SAP HANA

 

 

In this video, we briefly show how to work with the Backup Catalog and the monitoring views M_BACKUP_CATALOG and M_BACKUP_CATALOG_FILES.

 

This query, for example, gets you all the data and log backup files needed to recover using the latest backup:

 

SELECT DESTINATION_PATH
  FROM M_BACKUP_CATALOG_FILES
 WHERE BACKUP_ID >= (SELECT TOP 1 BACKUP_ID
                       FROM M_BACKUP_CATALOG
                      WHERE STATE_NAME = 'successful'
                        AND ENTRY_TYPE_NAME = 'complete data backup'
                      ORDER BY UTC_START_TIME DESC)
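The paths returned by this query can be sanity-checked from the OS before a recovery. A minimal sketch, assuming the query result has been saved one path per line to a file called files.txt (a name chosen for this example):

```shell
# files.txt holds the query result, one backup file path per line.
# Check that each file is present and readable before starting a recovery.
while read -r f; do
  if [ -r "$f" ]; then
    echo "OK       $f"
  else
    echo "MISSING  $f"
  fi
done < files.txt
```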

 

hdbbackupcheck

 

The main part of the tutorial video is about a new tool introduced with SAP HANA 1.0 SPS 7 (or revision 64, to be exact): hdbbackupcheck.

 

You can use the hdbbackupcheck tool to check (data or log) backup files for changes, e.g. after the files have been transferred from the database to an external backup tool. The tool imports the backup file, checks the metadata for correctness and consistency, and checks whether the contents have changed.

hana:/usr/sap/DB1/HDB01> hdbbackupcheck -h

Usage: hdbbackupcheck [options] <backup> [-i <backupid>] [-e <ebid>]

 

Options:

  -v: display all known information

  --backintParamFile <filename>: use parameter file specified for a backint call

hana:/usr/sap/DB1/HDB01>

 

The tool is called with the data or log backup file as input and, optionally, a -v (verbose) flag for more detailed output. The file can be checked against a backup ID or, when using Backint, an external backup ID (EBID).

 


The interesting aspect of this tool is that it can be used outside the context of an SAP HANA installation, on any SUSE Linux host.

 

hana:/usr/sap/DB1/HDB01/backup/log> cdexe

 

hana:/usr/sap/DB1/SYS/exe/hdb> hdbbackupcheckpack hdbcheck

a hdbbackupcheck

a libhdbbackup.so

a libhdbbasement.so

a libhdbbasis.so

a libhdbconfig.so

a libhdbcrypto.so

a libhdbcsbase.so

a libhdbdataaccess.so

a libhdblicensing.so

a libhdblttbase.so

a libhdbpersistence.so

a libhdbtransactionmanager.so

a libhdbunifiedtable.so

a libhdbunifiedtypes.so

a libhdbversion.so

a libicudata.so.46

a libicui18n.so.46

a libicuuc.so.46

a libirc.so

a libxmldom.so

 

hana:/usr/sap/DB1/SYS/exe/hdb> cp hdbcheck $HOME

 

hana:/usr/sap/DB1/SYS/exe/hdb> cd $HOME

hana:~> mkdir hdbcheckdir

hana:~> mv hdbcheck hdbcheckdir/

 

hana:~> cp $DIR_INSTANCE/exe/SAPCAR hdbcheckdir/

 

hana:~> cd hdbcheckdir

hana:~/hdbcheckdir> ll

total 145980

-rw-r----- 1 db1adm sapsys 145155119 2014-01-24 16:36 hdbcheck

-rwxr-x--- 1 db1adm sapsys  4170776 2014-01-24 16:39 SAPCAR

 

 

hana:~/hdbcheckdir> ./SAPCAR -xvf hdbcheck

SAPCAR: processing archive hdbcheck (version 2.01)

x hdbbackupcheck

x libhdbbackup.so

x libhdbbasement.so

x libhdbbasis.so

x libhdbconfig.so

x libhdbcrypto.so

x libhdbcsbase.so

x libhdbdataaccess.so

x libhdblicensing.so

x libhdblttbase.so

x libhdbpersistence.so

x libhdbtransactionmanager.so

x libhdbunifiedtable.so

x libhdbunifiedtypes.so

x libhdbversion.so

x libicudata.so.46

x libicui18n.so.46

x libicuuc.so.46

x libirc.so

x libxmldom.so

SAPCAR: 20 file(s) extracted

 

hana:~/hdbcheckdir> export LD_LIBRARY_PATH=~/hdbcheckdir:$LD_LIBRARY_PATH

 

hana:~/hdbcheckdir> ./hdbbackupcheck

Usage: ./hdbbackupcheck [options] <backup> [-i <backupid>] [-e <ebid>]

 

Options:

  -v: display all known information

  --backintParamFile <filename>: use parameter file specified for a backint call

hana:~/hdbcheckdir>

 

 

 

 

The tool hdbbackupcheck is documented in the SAP HANA Administration Guide for SPS 7 and SAP Note 1869119

 

hdbbackupdiag

 

The other tool demonstrated is hdbbackupdiag. In SPS 7 a new flag was introduced, somewhat confusingly called --check, as it does not do what hdbbackupcheck does.

 

With the --check flag, hdbbackupdiag checks whether

  • the location (file system) or external backup tool is correct
  • the current operating system user has read authorization for the file
  • the backup ID corresponds to the backup ID specified in the backup catalog

 

hana:~/hdbcheckdir> hdbbackupdiag -h

Usage: hdbbackupdiag [options] [-d <directory>] { -b <backup> | -c <backup catalog> }

 

Options:

        --generate: generates a new backup catalog using data path given by --dataDir and log paths given by --logDirs options

        --dump: dumps the content of the backup catalog

 

=== NEW in SPS 7 ===

 

 

        --check: check for files using data path given by --dataDir and log paths given by --logDirs options

        --useBackintForCatalog: search backup catalog in backint if --check is given

        --backintDataParamFile <filename>: backint parameter file to use if --check is given

        --backintLogParamFile <filename>: backint parameter file for log backups to use if --check is given

        -i <backup_id>: check files beginning with given backup id

 

 

===

 

 

        -f: display filenames only

        -v: display all known information

        -u "YYYY-MM-DD HH:MM:SS": defines UNTIL timestamp in UTC

Notes:

        if -d is not defined the current directory will be used otherwise the specified directory must be an absolute path

        the specified backup_catalog or backup must be relative to current directory or the directory specified with -d

hana:~/hdbcheckdir>

 

 


 

Another usage for the hdbbackupdiag tool is the (re-)generation of the backup catalog (Note 1812057).

 

The tool hdbbackupdiag is documented in the SAP HANA Administration Guide for SPS 7 and SAP Notes 1873247 and 1812057.

SAP HANA Academy

Denys van Kempen
