
Embedded Statistics Service Migration


Introduction

The following is a how-to guide for the migration from the old StatisticsServer to the new Embedded Statistics Service (ESS). It describes the steps to perform before, during, and after the migration. Please note that some steps are not needed if you are using a HANA system based on SPS 9 or later; which steps can be omitted in this case is documented in this guide. The SQL statements mentioned below have to be executed as the SYSTEM user.


What is the Statistics Server?

The statistics server monitors the SAP HANA system, collects historical performance data, and raises system alerts (such as resource exhaustion). The historical data is stored in the _SYS_STATISTICS schema; for more information on these tables, see the statistical views reference on help.sap.com/hana_appliance.


What is the NEW Statistics Server?

The new Statistics Server is also known as the embedded Statistics Server or Statistics Service. Prior to SPS 7, the Statistics Server was a separate server process, like an extra Index Server with monitoring services on top of it. The new Statistics Server is embedded in the Index Server. This simplifies the SAP HANA architecture and helps avoid out-of-memory situations in the Statistics Server, which by default was limited to only 5% of the total memory.
In SPS 7 and SPS 8 the old Statistics Server is still implemented and shipped to customers, but they can migrate to the new statistics service if they would like by following SAP Note 1917938.


How to implement the New Statistics Server?

The following screenshots show how to implement the new Statistics Server. We also note what your system looks like before and after you perform this implementation (the steps to perform the migration are listed in SAP Note 1917938 as well).


Preparation steps before migration


I. Check the consistency of schema _SYS_STATISTICS. If the check is not successful, the migration cannot be performed.

  SQL statement:  call CHECK_TABLE_CONSISTENCY ('CHECK','_SYS_STATISTICS',null)

 

II. Check if the preconditions of SAP Note 1917938 for the migration to ESS are fulfilled. The following configuration change is required to activate the new statistics server:

nameserver.ini -> [statisticsserver] -> active=true

1.png


Trigger Migration

 

The following SQL statement triggers the migration to the ESS. After the migration has finished successfully, there is no way back to the old StatisticsServer!

 

alter system alter configuration ('nameserver.ini','SYSTEM') set ('statisticsserver','active')='true' with reconfigure

2.png

 

To speed up the migration, you should truncate large tables in schema _SYS_STATISTICS. This has no functional impact on the system, but you lose some monitoring data. To identify large tables, execute the following SQL statement:

 

select table_name, sum(memory_size_in_total) from sys.m_cs_tables where schema_name = '_SYS_STATISTICS' group by table_name order by 2 desc


As a rule of thumb, all tables with a memory size in main larger than 50,000,000 bytes (about 50 MB) should be truncated. The table _SYS_STATISTICS.STATISTICS_LASTVALUES does not need to be truncated manually, because this is done automatically during the migration. It is best practice to truncate at least the following tables in schema _SYS_STATISTICS (example statements follow the list):


HOST_SQL_PLAN_CACHE

HOST_CONNECTION_STATISTICS

HOST_CONNECTIONS
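
For example, the corresponding TRUNCATE statements, executed as SYSTEM, would be:

truncate table _SYS_STATISTICS.HOST_SQL_PLAN_CACHE;
truncate table _SYS_STATISTICS.HOST_CONNECTION_STATISTICS;
truncate table _SYS_STATISTICS.HOST_CONNECTIONS;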

 

The migration can take a while. This depends mainly on the size of the history tables to be migrated. To monitor the progress of the installation, execute the following SQL statement:

 

select value from _SYS_STATISTICS.STATISTICS_PROPERTIES where key = 'internal.installation.state'

3.png

 

While migrating, the result of the above is “Installing, start at: <timestamp>”. Execute this statement repeatedly until the result is “Done (okay) since <timestamp>”; with this result, the migration has finished successfully. If the result is “Done (error) since <timestamp>”, the migration finished unsuccessfully, in which case you have to check SAP Note 2006652 and, if necessary, contact SAP Support.

 

Optionally, you can also follow the migration progress in the nameserver trace file and the master indexserver trace file. The trace components to observe have the prefix “STATS_”. During the migration, SQLScript procedures are created. When this starts, you see the following trace entry in the nameserver trace file:

 

[18634]{-1}[-1/-1] 2014-11-06 16:50:49.583107 i STATS_CTRL     
NameServerControllerThread.cpp(00230) : installing...

 

Meanwhile, you can see installation commands in the master indexserver trace, such as:

 

           [18890]{-1}[-1/-1] 2014-11-06 16:51:09.665615 i STATS_WORKER   
               ConfigurableInstaller.cpp(00030) : installing Startup_Preamble (id is 10000)

 

When the installation is finished, you see the following entry in the nameserver trace file:

 

            [18634]{-1}[-1/-1] 2014-11-06 16:52:24.886644 i STATS_CTRL     
               NameServerControllerThread.cpp(00255) : installation done

 

 

Repeat step I and check again

Repeat the consistency check of schema _SYS_STATISTICS to verify that it is still successful:

 

call CHECK_TABLE_CONSISTENCY ('CHECK','_SYS_STATISTICS',null)

4.png

 

Finalizing Steps after successful migration

 

After successful migration, check if all ESS objects (collectors and checks) are running. For this, execute the following SQL statement:

 

select * from "_SYS_STATISTICS"."STATISTICS_SCHEDULE" where status = 'Disabled' and statusreason = 'timeout'

5.png

 

The returned result set should be empty. However, if it is not empty, some objects ran into a timeout. They can be switched on again by using the following SQL statement:

 

update "_SYS_STATISTICS"."STATISTICS_SCHEDULE" set status = 'Idle' where status = 'Disabled' and statusreason = 'timeout'

 

With SPS 9 or later, timed-out objects are re-enabled automatically after a certain time, so switching them on again is not necessary in this case.

 

In case of a scale-out system, all tables in schema _SYS_STATISTICS have to be moved to the master indexserver. With SPS 9 or later, this is done automatically during the migration; for other systems, implement SAP Note 2091256. Moving the tables can take a while, so repeatedly execute the following SQL statement until the returned result set is empty:

 

select table_name,location from SYS.M_TABLE_LOCATIONS where schema_name = '_SYS_STATISTICS' and location != (select host||':'||port from sys.m_services
where service_name='indexserver' and detail='master')

6.png

 

 

Check Old StatisticsServer Service

 

1. Check if the old StatisticsServer service has been removed by executing the following SQL statement:

select * from sys.m_services where service_name='statisticsserver'

7.png

 

The result set of the above should be empty; otherwise the service is still running, in which case you have to contact SAP Support.

 

2. Check if the volume of the old StatisticsServer service has been removed by executing the following SQL statement:

 

select * from sys.m_volumes where service_name = 'statisticsserver'

 

8.png

 

 

3. Check if ESS is active by executing the following SQL statement:

 

select value from "PUBLIC"."M_INIFILE_CONTENTS" where file_name ='nameserver.ini' and layer_name = 'SYSTEM' and section = 'statisticsserver' and key = 'active'

 

9.png

 

The result of the above should be “true”; otherwise the ESS has been disabled, in which case you have to contact SAP Support.

 

4. Check if all collectors and checks are running. Execute the following SQL statement:

 

select * from _sys_statistics.statistics_schedule where status != 'Inactive'

10.png

 

In the result set, check the column LATEST_START_SERVERTIME. The values in this column should not be null (after the initial start of ESS this can take a while) and should show a recent time. Furthermore, no objects should be disabled due to timeout. If they are, re-enable them by executing the SQL statement mentioned above.

 

 

Recommendation for systems which are below HANA 1.0 SPS 9

 

For systems not on at least SPS 9, it is recommended to check on a regular basis (e.g., once a week) whether all collectors and checks are still on schedule, by executing the following SQL statement:

 

select * from "_SYS_STATISTICS"."STATISTICS_SCHEDULE" where status = 'Disabled' and statusreason = 'timeout'

11.png

 

If the above result set is not empty, re-enable the timed-out objects as described above. Additionally, you get an internal alert for each collector or check that has been disabled.

 

Solution Manager User Configuration

 

If you work with Solution Manager, you have to execute the following SQL statements (cf. SAP Note 1917938):

 

grant execute on schema _SYS_STATISTICS to <Solution Manager user>

 

grant update on _SYS_STATISTICS.STATISTICS_SCHEDULE to <Solution Manager user>

 

 

 

Helpful SAP Notes

2092033 Embedded Statistics Service Migration Guide

1917938 Migration of the statistics server for Revision 74 or higher

2006652 SAP HANA Statistics Server - Switch to Embedded Statistics Service fails

2031635 SAP HANA Statistics Server - mail sending does not work completely

2091256 HANA Statistics Server - Embedded Statistics Server is deactivated automatically


HANA System Replication – Backup


Purpose

This blog will focus on how backup is done for SAP HANA System Replication (HSR).

 

Pre-reading

As an intro to HSR, you can read through our previous blog: HANA System Replication - Take-over process. There you will also find further material about the HSR topic in general.

 

Introduction

In our recent blog HANA System Replication - Take-over process, we discussed how to set up HSR and how to run through a takeover. In this blog we discuss how to have continuous backup during a takeover.

 

Before discussing backup for HSR, we will do a quick summary of how backup works in HANA:

  • To make backups in HANA, you have to start with a full data backup: the continuously written log backups are the "diff" to the last full data backup. Hence, to be replayed, log backups require a full data backup as a starting point.
  • A full backup contains everything that is required to restore the database to the point in time the full backup was made.
  • If HANA is restored to a full backup, the state of the full backup is restored and open transactions (open at the state of the full backup) are rolled back.
  • A log backup contains all actions executed on the database, regardless of whether they were committed or not. Log backups allow restoring the database to any point in time, but require a full backup as a starting point. Hence,
    • For a restore, a full backup is always required. Log backups also require a full backup as a starting point: starting from the full backup, log backups are “replayed” to restore the required state (e.g., a specific point in time, all log backups available, etc.).
    • Log backups from before the last full backup cannot be replayed any more.
    • Log backups which cannot be replayed are not deleted automatically. See, e.g., https://service.sap.com/sap/support/notes/1852242 for a script deleting log backups that are no longer needed.
    • Deleting backup files (e.g., running the script from note 1852242) does *NOT* clear the backup catalog (see the info about the backup catalog below).
  • When committed, transactions are written to persistent storage in "log segments". Log backup is written asynchronously, i.e., there is no immediate backup once a transaction is committed.
  • If a log backup is written, the according log segment is closed. Or: if a log segment is closed, an according log backup is written. Hence, defining a backup interval also defines how often a log segment is written.
    • Technically, the backup is written asynchronously, i.e., closed segments are put into an internal backup queue. Only when segments are actually written to backup and contained in a savepoint is the log segment "free" and can be overwritten. This is why a disabled log backup will cause the log area to run full and hence will cause HANA to hang.
    • Furthermore, a log segment is closed when it is full. Hence, the backup interval is the maximum time between backups of a log segment.
    • Note that if there is no commit for a segment, this segment is not backed up, as there is nothing which could be restored (although data is written to segments before it is committed, only transactions which have been committed are restored). Hence, missing log backups do not mean backup is failing, but that there are no commits.
  • HANA keeps a history of written backups called the backup catalog (e.g., see studio -> Backup -> Backup Catalog). As this information is needed to restore HANA, the backup catalog itself is also backed up. The backup catalog contains the file paths to the full/log backup files. (It can also be queried via SQL, as sketched below.)
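
A minimal sketch of such a query, using the monitoring view M_BACKUP_CATALOG (the selected columns are an assumption based on the standard view definition):

select entry_type_name, sys_start_time, state_name
from SYS.M_BACKUP_CATALOG
order by sys_start_time desc;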

 

Backup for System Replication: Facts

  • For system replication, the primary has to be configured for backup, otherwise the secondary will not be able to register.
  • While functioning system replication can guarantee data redundancy, backups let you recover to any point in time. During and after a takeover, backup is essential, because system replication as a redundancy mechanism falls away.
  • Only the primary site runs backups.

=> Recommendation: A script or agent starting the backup should check if the site is running as primary. A sketch of such a check follows.
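
One possible check (an assumption, not from the blog): on a primary with system replication configured, the monitoring view SYS.M_SERVICE_REPLICATION returns one row per replicated service, while a classic HSR secondary does not accept SQL connections at all, so a failed connection attempt already means "not primary":

select host, port, secondary_host, replication_mode, replication_status
from SYS.M_SERVICE_REPLICATION;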

  • With a takeover, the backup catalog is also taken over, i.e., the "backup state".

=> Recommendation: The secondary site should have access to the same file path for backups as the primary site. It could be the same "physical directory", or an automatically or manually replicated directory on different hardware.

  • The secondary receives the information which log entry is written in a new log segment.
    • After a takeover from site A to site B, site B is able to write log backups of transactions started and/or committed on site A. The backup catalog is, like everything else, taken over with the takeover. Thus, site B continues to write log backups into the backup catalog. To be able to recover on site B, site B needs 1) the data/log backups written on site A, 2) the data/log backups written on site B, and 3) a backup catalog written on site B. If site A and site B use the same path for data and log backups, site B will, according to the backup catalog written on site B, use the right data/log backup files.
    • After a takeover from site A to site B (and no further action), site A is still in primary mode. If site A is still running, it is writing backups and backup catalogs. Backup catalogs written on site A after the takeover can be a source of error: they reference log backups written on site A, although site A is no longer the "source of truth". The backup catalog on site A contains backups of site A and no backups of site B, and hence must not be used for a restore. As HANA chooses the latest backup catalog for a restore, those backup catalogs must not be accessible to HANA on site B, as otherwise the "wrong history" is restored.

=> Recommendation: if the backup directories on site A and site B are not physically the same, there should be a replication from site A to site B. After a takeover, site B (i.e., the logical primary site) should be in control of the replication of data and log backups. Hence, site B should either delete or not replicate backup catalogs created on site A after the takeover. Backup catalogs are stored in the log backup folder and have the form log_backup_0_0_0_0_.<seconds since Jan 01 1970><milliseconds>

 

 

Exemplary run through

We will now run through an exemplary takeover. Note that this is one possible run in a variety of setups.

  • Before the takeover, site A is running as primary, site B is running as secondary.
    • Site A is writing backups to /backup/<SID>/data for full backups and /backup/<SID>/log for log backups, where /backup is mounted to NAS_A
    • Site B has mounted /backup to NAS_B
    • There is a synchronisation from NAS_A to NAS_B, i.e., of the backup folders
  • T1 (Timestamp 1): transaction C1 is committed (site A)
  • T2: a log backup and backup catalog is written to /backup/<SID>/log/<name><T2> (physically to NAS_A) and replicated to NAS_B
  • T3: transaction C2 is committed (site A)
  • The Data Center of site A has some network problems and it is decided to take over to site B
  • T4: The virtual IPs pointing to site A are reconfigured to point to site B, i.e., existing connections from clients to site A will break, and new connections will wait for site B to become available
    Note: here is the crucial point, where two things have to be ensured:
    • No client must be able to connect to site A and commit a transaction. There is no (and there cannot be a) built-in mechanism which would make sure that site A knows site B has taken over (e.g., if there is no connection between site A and site B due to the cause which resulted in the takeover, you cannot rely on mechanisms within HANA to make sure no client connects to site A).
    • Site A must not (over-)write a backup catalog (over that of site B), as in case of a restore the backups of site A would be restored, although site B was running as the source of truth. It is sufficient if the backup catalog of site B is the most recent one; however, it is safer to ensure that site A does not write any backup catalog at all.
      As noted above, this cannot be guaranteed by HANA, as HANA would rely on infrastructure which may itself have caused the takeover (i.e., is currently failing). It depends on the infrastructure how these points can be ensured, e.g., using virtual IPs moved from site A to site B, or unmounting the shared backup site, etc.
  • T5: On site B, sr_takeover is executed, i.e., site B starts to replay the logs from its most recent sync point
  • T6: transaction C3 is committed on site A (it must not come from a remote client)
  • T7: a log backup and backup catalog are written to /backup/<SID>/log/<name><T7> (physically to NAS_A); only the log backup, but not the backup catalog, is replicated to NAS_B
  • T8: site B writes a log backup and backup catalog to /backup/<SID>/log/<name><T8> (physically to NAS_B) containing the log segment started after T2, i.e., including transaction C2
  • T9: site B starts to serve client requests
  • T10: transaction C4 is committed (site B)
  • T11: site B writes a log backup and backup catalog to /backup/<SID>/log/<name><T11> (physically to NAS_B)

 

If site B required a restore (from backup) after T11, site B would

  • Find the most recent backup catalog in /backup/<SID>/log at NAS_B, i.e., the backup catalog written at T11
  • Backup catalog T11 would reference the log backups
    • T2: it was written on site A before the takeover and hence was part of the backup catalog that was taken over with the takeover started at T5
    • T8: it was written on site B, but contains the transactions committed on site A that were not yet part of the backup catalog before the takeover
    • T11: written on site B, containing the transactions committed on site B (i.e., C4)
  • Backup catalog T11 would not reference
    • T6: transaction C3 was committed on site A after site A was no longer the “source of truth”, and hence is not committed on site B and must not be contained in the backup catalog. Whoever performs a takeover is responsible for ensuring that no client can commit a transaction on site A after the takeover. See also HANA System Replication - Take-over process
  • Backup catalog T11 does not reference the log backup T7, as this backup was written after the takeover and hence was not in the backup catalog

 

General note: the mechanisms discussed here were described for backup to files, but they are equally valid when using BACKINT!

 

Authors

Frank Bannert

Helmut Petritsch

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)


Hello All,

 

It's been some time since I started working with HANA and related areas like SLT, Lumira, Fiori, and so on.

So I thought of sharing some topics here which might come in handy.

 

Disclaimer:

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.


Part 1 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)


Part 2 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)



19) Related to HANA

Use Case: We have a table A in HANA under schema A. We were asked to create a similar table structure in a different schema B.


Solution: HANA's generated SQL (Export SQL) can be one solution.

Go to the Table that you want to recreate under another schema:


Schema A Table --> Right Click --> Open Definition

On the next screen you will find the entire table definition.

Right-click in the editor space, copy the Export SQL content, and paste it into a SQL console under schema B.

D1.png

SDN1.JPG
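
For illustration, the pasted Export SQL with the schema changed to B would look roughly like this (the column list is hypothetical):

create column table "SCHEMA_B"."TABLE_A" (
    "ID"   integer,
    "NAME" nvarchar(40)
);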

 


20) Related to HANA

Use Case: We were asked to provide the total count of records in a table.

Solution: Two different statements can be used, as sketched below:

SDN.png

SDN.png
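
The two statements were along these lines (schema and table names are illustrative): a plain count, and the record count maintained in the monitoring view M_TABLES:

select count(*) from "SCHEMA_A"."TABLE_A";

select record_count from SYS.M_TABLES
where schema_name = 'SCHEMA_A' and table_name = 'TABLE_A';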



21) Related to HANA

Use Case: We are logged into a HANA instance in Studio as user A, and in between we had to log into the same instance using a different user.


Solution:

SDN.png

22)  Related to HANA/SDA

Use Case: We have a table A, and a view created on top of that table A, in a HANA instance (say HAS).
Now we want to create the same view on another HANA instance (HAT), but the corresponding table A is not available there.
Now we want to  create the same view on another HANA Instance (HAT), but the corresponding table A is not available.


Solution: SDA (Smart Data Access) was the solution that we implemented.

Go to the target HANA instance (HAT) and create an SDA connection to the source HANA instance (HAS).

SDN.JPG

SDN.png

Once the connection is successfully established, all the tables in the source instance (HAS) can be seen in the target HANA instance (HAT).

Identify the required table A --> Right click --> 'Add as Virtual Table' and save it under the required schema.

SDN.png

Now that the target HANA instance (HAT) contains the required table A, we are able to create the view on top of it. The same setup can also be scripted, as sketched below.
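
A rough SQL sketch of the wizard steps (all names and the configuration string are illustrative and may differ per revision):

create remote source "HAS_SRC" adapter "hanaodbc"
    configuration 'ServerNode=has-host:30015'
    with credential type 'PASSWORD' using 'user=SDA_CONNECT;password=<secret>';

create virtual table "MY_SCHEMA"."VT_TABLE_A"
    at "HAS_SRC"."<NULL>"."S1"."TABLE_A";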

 



23) Related to HANA/SDA

Use Case: After performing the steps mentioned in tip 22, we created the HANA view in the SDA target system, activated it, and tried to preview the data.

But unfortunately it ran into the following authorization-related error.

SDN.png

PS: Here STM is the source HANA instance for SDA, and the error clearly mentions that the system is facing issues while opening the remote database.


Solution: The table 'VOICEOFCUSTOMER' is available under schema 'S1' in the source HANA instance 'STM'.

We added the schema S1 under the object privileges of the user SDA_CONNECT:

SDNB.png
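
In SQL terms, this UI step corresponds roughly to the following grant, executed on the source system STM:

grant select on schema S1 to SDA_CONNECT;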

That resolved our authorization issue, and we were able to preview the output of the view in the SDA target (HAT).




24) Related to HANA/SQL

Use Case: We were asked if there is a method to show the totals as a separate row under the data set.

This was for a configuration step in a connected GRC system; the actual customer use case still needs to be checked.

 

Solution: The ROLLUP clause in HANA can be leveraged here:

SDN1.png

After adding the ROLLUP command:

SDN.JPG
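
A minimal sketch of the idea with a hypothetical table: ROLLUP appends subtotal and grand-total rows (with NULL in the rolled-up column) to the result set:

select region, sum(amount) as total_amount
from "MY_SCHEMA"."SALES"
group by rollup (region);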

25) Related to HANA Live:

Use Case: For meeting a customer requirement, we had to join 2 HANA Live Views.

Observation: If the existing HANA Live views have prompts, these might not appear when you drag the views into a projection or aggregation node. In that case, you can add the same prompts manually and map the HANA Live prompts to the ones you created, using the following option.

1.png1.png

 

 

26) Related to HANA:

Use Case: We had a requirement to show the current date as the default value of a HANA input parameter.

Solution: We can use an expression like the following:

NOTE: The 20150525 format seems to work only with the BO Analysis for Office tool.

1.JPG3.JPG

 

 

2.JPG4.JPG

 

 

Hope this document comes in handy!

I will keep adding more tips here...

 

BR

Prabhith

HANA Basics


                                                                           HANA Basics

 

 

SAP HANA is a database, as most of us know, but it is not just another database in the market: SAP HANA provides a unique combination of hardware and software innovations with huge potential to optimize business applications.

 

1. Hardware: Memory is no longer a scarce and limited resource. Modern servers can have up to 2 TB of main memory, which can hold a complete database in RAM; this shifts the bottleneck from disk I/O to the CPU cache and main memory.

 

2. Row and column store: Classical databases store data row by row. HANA can store tables and data in a column-based store as well. Previously, column-based storage was used mainly for data-warehousing workloads, where aggregate functions play a huge role.

 

HANA allows the developer to specify whether a table is stored column-wise or row-wise. With column-based storage, data is only partially blocked, so individual columns can be processed at the same time by different cores. Apart from performance, the column store offers much more leverage for state-of-the-art data compression concepts. For example, SAP HANA works with bit-encoded values and compresses repeated values, which results in much lower memory requirements than for a classical row-store table. A short sketch of this choice follows below.

The fact that SAP HANA comes with different engines to process calculation logic and execute programming code is a great opportunity to push data-intensive calculations from the ABAP application layer into the SAP HANA database. For this reason, ABAP has been enhanced with NetWeaver 7.30 and 7.40 to exploit the advanced in-memory features of SAP HANA. This results in less data transfer between the application layer and the database layer and a much better usage of resources. The application layer focuses more on orchestration and on triggering the processing within the database. In the end, complex logic can be processed in very little time, which results in great performance improvements.
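
A short sketch of choosing the store per table (table and column names are illustrative):

create column table "MY_SCHEMA"."SALES_ITEMS" (id integer, amount decimal(15,2));
create row table "MY_SCHEMA"."SESSION_DATA" (id integer, payload nvarchar(256));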

Automated Deletion of Tables/Views/Table Types of a Schema in SAP HANA


     There was a use case to clear a specified schema in a test HANA system. The schema had a good number of tables, table types, and views, and writing the DROP statements manually was a lengthy task. In order to automate this, I wrote a stored procedure in HANA. The stored procedure should also display a message on deletion or completion of the task. Below are the details about the stored procedure and its usage.


1.1 Summary -

  

    This document illustrates a SQL Script stored procedure, implemented in SAP HANA (SPS 9), to delete various objects (tables/views/types) of a schema. Object technical names are case-sensitive.


1.2 Description

The stored procedure has the input parameters Schema and Object Type (TABLE/TYPE/VIEW); values of Object Type must be given in upper case. The stored procedure has an output parameter which displays the list of deleted objects.

 

Procedure.jpg

1.3 Procedure SQL Script code

 

/* *********************************************************************** **
** PROCEDURE: TEST_DEL_TAB_WRAPPER                                         **
** DESCRIPTION:                                                            **
** Procedure has input parameters SCHEMA and OBJECT TYPE                   **
** Procedure reads the system views SYS.OBJECTS and SYS.TABLES             **
** A dynamic SQL statement performs the deletion                           **
** *********************************************************************** */

/********* Begin Procedure Script ************/

-- Note: the header below is reconstructed from the parameter usage in the
-- body (ip_schema, ip_obj_type, OP_TEXT); the original document showed the
-- procedure body only, starting at BEGIN.
CREATE PROCEDURE TEST_DEL_TAB_WRAPPER (
    IN  ip_schema   NVARCHAR(50),
    IN  ip_obj_type NVARCHAR(50),
    OUT OP_TEXT     TABLE (TEXT_OP NVARCHAR(100))
)
LANGUAGE SQLSCRIPT AS

BEGIN

declare lv_count integer := 0;
declare lv_init integer := 1;
declare lv_tabnm1 nvarchar(100);
declare lv_schema1 nvarchar(50);
declare lv_obj_type nvarchar(50);
declare lv_full_tabnm varchar(100);

-- Temporary table collecting one message line per dropped object
CREATE LOCAL TEMPORARY TABLE #OP_TEXT_1 (TEXT_OP NVARCHAR(100));

if :ip_obj_type = 'TABLE' then

    -- Tables: objects of type TABLE that are not user-defined table types
    p_sys_objects =
        select distinct T1.SCHEMA_NAME, T1.OBJECT_NAME, T1.OBJECT_TYPE from
        (select distinct SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE
         from SYS.OBJECTS
         where SCHEMA_NAME = :ip_schema and OBJECT_TYPE = 'TABLE'
         order by SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE) T1
        inner join
        (select distinct SCHEMA_NAME, TABLE_NAME, TABLE_TYPE, IS_USER_DEFINED_TYPE
         from SYS.TABLES
         where SCHEMA_NAME = :ip_schema and IS_USER_DEFINED_TYPE = 'FALSE'
         order by SCHEMA_NAME, TABLE_NAME, TABLE_TYPE, IS_USER_DEFINED_TYPE) T2
        on T1.SCHEMA_NAME = T2.SCHEMA_NAME
        and T1.OBJECT_NAME = T2.TABLE_NAME;

elseif :ip_obj_type = 'TYPE' then

    -- Table types: objects of type TABLE that are user-defined table types
    p_sys_objects =
        select distinct T1.SCHEMA_NAME, T1.OBJECT_NAME, 'TYPE' as OBJECT_TYPE from
        (select distinct SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE
         from SYS.OBJECTS
         where SCHEMA_NAME = :ip_schema and OBJECT_TYPE = 'TABLE'
         order by SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE) T1
        inner join
        (select distinct SCHEMA_NAME, TABLE_NAME, TABLE_TYPE, IS_USER_DEFINED_TYPE
         from SYS.TABLES
         where SCHEMA_NAME = :ip_schema and IS_USER_DEFINED_TYPE = 'TRUE'
         order by SCHEMA_NAME, TABLE_NAME, TABLE_TYPE, IS_USER_DEFINED_TYPE) T2
        on T1.SCHEMA_NAME = T2.SCHEMA_NAME
        and T1.OBJECT_NAME = T2.TABLE_NAME;

elseif :ip_obj_type = 'VIEW' then

    p_sys_objects =
        select distinct SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE
        from SYS.OBJECTS
        where SCHEMA_NAME = :ip_schema and OBJECT_TYPE = 'VIEW'
        order by SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE;

end if;

select count(*) into lv_count from :p_sys_objects;

-- Number the objects so that the while loop below can address them one by one
op_param =
    select distinct SCHEMA_NAME, OBJECT_NAME,
        ROW_NUMBER() OVER (partition by SCHEMA_NAME, OBJECT_TYPE order by OBJECT_NAME) as row_num,
        :lv_count as TOT_COUNT, OBJECT_TYPE
    from :p_sys_objects;

if :lv_count > 0 then

    while (:lv_init <= :lv_count)
    do

        select distinct SCHEMA_NAME into lv_schema1 from :op_param
        where row_num = :lv_init;

        select distinct OBJECT_NAME into lv_tabnm1 from :op_param
        where row_num = :lv_init;

        select distinct OBJECT_TYPE into lv_obj_type from :op_param
        where row_num = :lv_init;

        if (:lv_schema1 is not null and :lv_tabnm1 is not null) then

            insert into #OP_TEXT_1
            select 'Deleted: ' || :lv_obj_type || ' ' || '"' || :lv_schema1 || '"' || '.' || '"' || :lv_tabnm1 || '"' as TEXT_OP
            from dummy;

            -- Build the DROP statement and execute it dynamically
            select 'DROP ' || :lv_obj_type || ' ' || '"' || :lv_schema1 || '"' || '.' || '"' || :lv_tabnm1 || '"'
            into lv_full_tabnm from dummy;

            EXEC :lv_full_tabnm;

        end if;

        lv_init := :lv_init + 1;

    end while;

end if;

if :lv_count = 0 then

    insert into #OP_TEXT_1 select 'No Entries Found' as TEXT_OP from dummy;

end if;

OP_TEXT = select TEXT_OP from #OP_TEXT_1;

drop table #OP_TEXT_1;

END;

/********* End Procedure Script ************/



1.4 Test Results –


A. Schema TEST has tables and views as below

  TABLE1, TABLE2, Table3 & VIEW1


Schema.jpg

B. Execute procedures


IMAG2.jpg

IMAG3.jpg
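
For reference, a call from the SQL console might look like this (based on the signature reconstructed in section 1.3; the '?' placeholder returns the output parameter as a result set):

call TEST_DEL_TAB_WRAPPER('TEST', 'TABLE', ?);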

1.5 Result –

 

Tables and Views deleted from Schema.

 

 

End of document.

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)


Hello All,

 

It's been some time since I started working with HANA and related areas like SLT, Lumira, Fiori, and so on.

So I thought of sharing some topics here which might come in handy.

 

Disclaimer:

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.

 

Part 2 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

 

1) Related to HANA:

 

Use Case: We have a table in a HANA schema, and we were asked if there is any option to find a where-used list showing where the table has been used.

Table Name: STKO.

Solution: Go to schema SYS.

There you will find a view named OBJECT_DEPENDENCIES.

You will get the dependency information from that view.

 

In SQL Terms: SELECT * FROM "SYS"."OBJECT_DEPENDENCIES" where BASE_OBJECT_NAME = 'STKO'

PIC1.JPG

 

--> Following is another way to see the 'Where-Used List':

 

In the HANA Studio left navigator pane > Catalog > any schema > Tables folder > context menu (right-click on the table), select the option 'Open Definition'.

Open Def.jpg

Then on the right-hand side, below the editor pane alongside the Properties tab, you see the tab 'Where-Used List'.

Where-Used List.jpg

 

2)  Related to HANA/SLT:

 

Use Case: We have a new SLT configuration enabled for a source system.

Which tables are created automatically under the target schema defined in the configuration?

 

Observation: We created a non-SAP configuration in SLT, with MII_SQL as the configuration name provided in SLT.

Now on the HANA side, you will see that the schema MII_SQL has the following tables by default.

PIC2.png

 

3)  Related to HANA:

Use Case: We have a HANA information view. We want to know the number of records available in the output.

 

Solution: HANA Information View --> Semantics --> Data preview --> Show Log --> Generated SQL.

Pic3.png

 

 

 

Copy the "_SYS_BIC".sap.hba.ZDBR44364/CV_FMIFIIT (my calculation view for this document's purpose).

Now write a SQL command, for example:
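
select count(*) from "_SYS_BIC"."sap.hba.ZDBR44364/CV_FMIFIIT";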

 

Pivc4.png

 

 

4)  Related to HANA:

Use Case: We need to connect to a HANA cloud system. How do we do that?

 

Solution: Initially, when we open HANA Studio, we see the following:

p5.png

 

Now Click, Install New Software

p6.png

 

Add https://tools.hana.ondemand.com/kepler

 

Once it is installed, you will now see the option to add the Cloud System in HANA Studio.

 

p7.png

 

While connecting to the cloud system, you might encounter the following error:

p8.png

 

 

p9.png

 

Access the following path (Preferences) and make the required changes in the HTTP and HTTPS line items.

P1.JPG

 

 

 

Sometimes you might get the following error message.

p1.JPG

This happens when the service is temporarily down; you should be able to connect to the HANA cloud system after some time, so please try again later.

 

 

5) Related to HANA:

 

Use Case: We created an information view, but it failed to activate with the following error message:

p10.png

 

Solution: Execute the SQL command:

GRANT SELECT ON SCHEMA <Schema_Name> TO _SYS_REPO WITH GRANT OPTION;

Once this SQL is executed, the model validation will be successful.

 

 

6)  Related to Lumira:

 

Use Case: Lumira hangs during loading at the following screen.

 

Capture65.JPG

 

 

Solution: This sometimes happens due to an issue in the user profiles.

Go to C Drive: Users --> find the user --> delete the .sapvi file and try loading Lumira again.

 

 

7) Related to HANA:

 

Use Case: Using the option 'SAVE AS DELIMITED TEXT FILE' (comma delimiter), I had to export a table with columns containing values like the following:

P1.JPG

Disclaimer: In real life this should not happen, as an ID with comma separation doesn't look that good.

 

If you observe closely, the 'CMPLID' column values are themselves comma-separated, and when the table was exported, a new column was created after each comma in the CSV file (the alignment of the columns went wrong).

 

P1.JPG

 

Solution: During the export of the table from HANA, I used the option 'SAVE AS HTML FILE'.

The resulting HTML file was then fed into a third-party tool, http://www.convertcsv.com/html-table-to-csv.htm, which converted it to CSV.

 

P1.JPG

 

This can then be loaded back into HANA without any issues.

 

 

 

8)  Related to HANA/SLT

 

Use Case: Some tables were missing in the Data Provisioning option in HANA Studio, in a non-SAP source system scenario where the SLT configuration had already been up and running for a long time.

 

Solution: This needs a little more explanation, which was published here on SCN a few days ago. Please find the link below:

http://scn.sap.com/docs/DOC-63399

 

 

9)  Related to HANA:

 

Use Case: You were performing a lot of steps in HANA Studio, and in between you want to perform an activity whose link is available only in the 'Quick Launch' screen, but that screen is not visible in the UI.

 

Solution: You can go to the following option to 'Reset Perspective':

P1.png

 

Alternatively, the following option can be used to get only the 'Quick View' screen.

P1.png

 

 

10) Related to HANA

 

Use Case: SAP has delivered new DUs (say, for Manufacturing OEE) and you have been asked to import the latest DU content into your HANA system.

 

Solution: Log into service.sap.com.

Click on SAP Support Portal.

Click on Software Downloads

Click on Support Packages and Patches

Click on A-Z Alphabetical List and select H

It will take you to a screen like below:

P1.JPG

Download the MANUFACTURING CONTENT to your desktop. It will be a ZIP file.

 

Inside it there will be a .TGZ file (not the LANG_.TGZ file), which needs to be imported into your system using the following option.

 

p1.JPG

 

Once the delivery unit is successfully imported, you can check it via the 'Delivery Units' link in Quick Launch in HANA Studio.

 

Hope this document comes in handy!

 

BR

Prabhith

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)


Hello All,

 

It's been some time since I started working with HANA and related areas like SLT, Lumira, Fiori, and so on.

So I thought of sharing some topics here which might come in handy.

 

Disclaimer:

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.


Part 1 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

 

11) Related to HANA:

Use Case: You already have a HANA system configured in Studio.

Once you log in, you see 'SAP Control REQUEST HAS FAILED', even though the services are all started.

P1.png

 

Solution: In most cases, removing the system from Studio and adding the same system again helps.

It should start again without any issues.

 

 

12) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following), and I was asked to load it into a schema table in HANA.

Please note that there is a COUNTER column having the value 1 in each row.

P1.JPG

When we upload, we get an error like the following:

 

'INSERT, UPDATE and UPSERT are disallowed on the generated column: Cannot insert into the generated field COUNTER'

P1.JPG

Workaround: We tried many options, but nothing worked for us.

So we deleted the 'COUNTER' column from the Excel file and then uploaded the data.

 

Later, using an ALTER statement, we were able to include the 'COUNTER' column as well; a sketch follows below.

 

P1.JPG
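
A rough sketch of such an ALTER statement (an assumption; table and column details are illustrative, and the identity syntax may depend on the revision):

alter table "MY_SCHEMA"."UPLOADED_DATA"
    add (COUNTER integer generated by default as identity);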

PS: The actual reason for this error is still not clear, but there are some interesting discussions about it here on SCN.

This should be helpful --> EXPERIENCE WITH IDENTITY FEATURE IN SAP HANA

 

 

13) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following), and I was asked to load it into a schema table in HANA.

P1.JPG

We were trying to upload the data to HANA, where the data type of the two fields 'DATEA' and 'LDATE' above was 'DATE'.

The upload from flat file threw the following error:

'at java.sql.Date.strict_valueOf'

P1.JPG

 

Workaround: We had to change the data type of the fields 'DATEA' and 'LDATE' to 'NVARCHAR', and the data was successfully uploaded.

This was just a workaround, and I am not sure if there is a permanent solution for this issue.

P1.JPG

 

 

14) Related to HANA / ABAP Development Tools

Use Case: We had to debug a procedure in a Promotion Management system running on a HANA database.

When we clicked on the particular procedure, it showed us the message 'Please use the ABAP development tools in Eclipse'. (The SE80 screen is shown below.)

Untitled1.png

 

Solution: We had to configure the ABAP perspective in Eclipse/Studio and were then able to proceed with debugging.

Please see some interesting documents on the related topic here:

ABAP Managed Database Procedures - Introduction

Tutorial: How to Debug an ABAP Managed Database Procedure

 

After configuring the ABAP perspective, we are able to log into the ABAP system using it.

Capture11.png

 

The SE80 screen shown above in the ABAP perspective looks like the following in HANA Studio.

Untitled11.png

 

15)  Related to HANA/ ABAP Development Tools

Use Case:  We had to install 'ABAP Development Tools' in HANA Studio.

 

Solution: Please follow the steps mentioned by Senthil in the following document.

Step-by-step Guide to setup ABAP on HANA with Eclipse

 

When you follow the document, at one point you will have to select the required add-ons.

Kepler.JPG

 

Once the steps are successfully completed, you will see the following perspectives (the ones selected in the previous screen) in your Studio:

Pers.JPG

 

 

16) Related to HANA Studio/Eclipse Environment

Use Case: While working in HANA Studio, the error 'Failed to create Parts Control' occurred.

 

Observation: This error is somehow related to the Eclipse environment.

The workaround was to close the Studio and run it again.

Close and run again.png

 

We had observed this error in the following environment:

HANA Studio version is 1.00.82.0

HANA system version is 1.00.85.00.397590

 

Please find an important discussion on this topic here:

Failed to create the part's controls

 

 

17) Related to HANA Studio/Citrix Environment

Use Case: This was observed in an internal Citrix environment and is not expected much in customer projects.

The Studio fails to load and shows the following error message:

Capture1.JPG

Solution: This error is related to a workspace disk-space issue.

The HANA Studio settings were reset, and a new workspace (with more space) was assigned to the new Studio installation.

 

18) Related to HANA Studio/Eclipse Environment

Use Case: We had installed plug-ins like 'ABAP' and were working in that perspective.

Due to some action, we were getting the message 'Secure Storage is Locked'.

Secure storage is locked.png

 

Observation: The functional part of the secure storage is documented by Rocky in his blog here:

The "not quite" secure storage HANA Studio, Reconnect your Studio to HANA Servers!

 

You can also find a very detailed discussion about this topic here:

"Error when connecting to system" or "Invalid Username or Password" with HANA Studio

 

Solution: We followed the path below, deleted the related contents, and restarted again.

Pers.JPG

 

Hope this document comes in handy!


BR

Prabhith

SMD Agent on HANA Host


                                                                            SMD Agent on HANA Host

 


Please log in to HANA Studio and go to the particular SID.

 

 

1.png

 

 

 

Navigate to option

 

 

“Add Solution Manager Diagnostic Agent (SMD)”

2.png

 

 

 

It will show you the navigation below.

 

 

Provide the host details

 

3.png

 

Provide the parameters / details below.

 

4.png

 

 

 

Provide the setup details as below and then review the summary.

5.png

 

 

Provide the details above for the SMD agent: SID, instance number (98, 97), and virtual name, if any.

 

Click on the Execute button and it will complete successfully. If there is any issue, it will show you the errors.

 

6.png


Additional Box on Hana Server


                                                     Add Additional SAP HANA System

 

Starting with Database Installation

 

1. Go to HANA Studio \ Life Cycle Management \ Add Additional SAP HANA System

 

1.PNG

2.

2.PNG

Create the directories and give full permissions for them.

 

3. Provide the details about sidadm user, password and confirm the password

 

3.PNG

 

4. It will distribute the memory among the various SIDs based on the total RAM

 

4.PNG

 

5. Provide the details as required.

 

5.PNG

6. Click 'No' on the screen below

7.PNG

 

 

7.

8.PNG

 

Adding HANA with SID "<SID>" will start the HANA installation. This will take several minutes to complete...

 

Once done, the database installation is finished. You then have to do the SAP installation based on your product, for which there are some screenshots below.

 

1.

 

9.PNG

 

2. Click Custom settings, or Typical if you like

3. It will ask you for the SAP SID and destination drive (for Windows)

 

10.PNG

 

4. Provide Fully Qualified domain name if asked

5. Kernel Location

 

11.PNG

 

6. It will ask for some passwords: passwords for the OS users, database SID, hostname, instance number, and DB password.

 

12.PNG

 

7. Provide passwords if asked. It will ask for the HANA client location; please provide it.

 

13.PNG

 

8. It will ask for the Export DVD location; please provide it.

9. It will ask for the DB system admin password and the schema password; it will also show some default import parameters for HANA

 

15.PNG

 

10. It will show you the default ports, instance number, and message server details; it will ask for the SL Controller location, the DAA server details (where to install, passwords, instance number), and the SLD details, and will ask you to review the parameters if needed. Once you execute, it will complete the installation.

 

Hope this helps..

HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented through countless operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real-time data into intelligent decisions and actions
  • To empower business users to control, influence, and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development |Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA Studio that allow easy modeling of the required resources for the SAP HANA Rules Framework
  • An easy-to-implement and configurable SAPUI5 control that exposes the framework's capabilities to business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

Untitled1png.png

Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


HRF licensing:

Currently, the SAP HANA Rules Framework is not a standalone product but a free SAP HANA add-on; however, in order to use it, the customer should comply with a few terms.

For more information on HRF terms of use, please contact noam.gilady@sap.com and/or shuki.idan@sap.com

Interesting links:

Use cases of SAP solutions already utilizing HRF:

SAP FraudManagement

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

SAP HANA TDI - FAQ


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach to deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information please download this FAQ document.

View this Document

Alert configuration on HANA Box


                                                                               Alert configuration on HANA Box

 

 

 

1. Log in to your HANA system, navigate to your SID, and double-click on the system

 

 

1.PNG

 

2. Navigate to the tab called 'Alerts'; on the right-hand side you have the 'Configure' button, as below.

 

2.PNG

 

3. Once you click on 'Configure', you need to provide the details below, which include email addresses (sender and receiver), threshold values, and timings.

 

 

3.PNG

 

4. After providing the details above, go to the next tab and provide the threshold details. You will get some by default, like disk usage, CPU usage, connections, physical memory usage, long-running statements, and others.

 

 

5. Go to the next tab, called 'Configure Start Time for Periodic Checks', in which you have to provide the start time for the checks, as below.

 

4.PNG

 

 

Click OK

SAP HANA Student Campus Open House Day in Walldorf


2015

 

We, the SAP HANA Student Campus, invite students, professors, and faculty members to our second Open House Day at SAP's headquarters in Walldorf, Germany. Throughout your day in Walldorf, you will get an overview of database research at SAP, meet the architects of SAP HANA, and learn more about academic collaborations. There will be a couple of interesting presentations by SAP HANA developers and academic partners. Current students and PhD candidates will present their work and research. For external students and faculty members it is a great chance to find interesting topics for internships and theses.


The event will take place on June 24th, 2015, 09:00-17:00, in building WDF03 (Robert-Bosch-Straße 30/34, 69190 Walldorf, Germany), room E4.02. We will meet in the lobby of building WDF03 between 09:00 and 09:30. Driving directions are attached at the end of this post. Free lunch and snacks are arranged for all attendees. Registration link below!


  • 09:00-09:30: Arrival
  • 09:30-10:00: Check-in
  • 10:00-10:15: Opening
  • 10:15-11:15: Keynote
    • Dr. Alexander Böhm (SAP HANA Architect)   
  • 11:15-12:00: Poster session
  • 12:00-12:45: Lunch
  • 12:45-13:00: Short guided tour through the Campus
  • 13:00-14:30: Session 1 Academic
    • Prof. Dr. Guido Moerkotte (University of Mannheim)
    • Prof. Dr. Peter Fischer (University of Freiburg) – Indexing for Bi-Temporal Data
  • 14:30-15:00: Break & poster judging
  • 15:00-16:30: Session 2 – SAP
    • Ingo Müller (PhD Student, SAP) – Cache-Efficient Aggregation: Hashing Is Sorting
    • Martin Weidner (SAP) – SAP Velocity
    • Hinnerk Gildhoff (SAP) – SAP HANA Spatial Engine
  • 16:30-17:00: Best posters & Closing

 

The entire event will be held in English.

 

 

REGISTRATION: Click here

MAP: SAP Headquarters in Walldorf, Building WDF03 in Walldorf

Attendees are invited to register with the link above with their academic e-mail address. Any updates will be sent to these e-mail addresses, and this webpage will be updated as well.

 

 

Looking forward to seeing you in Walldorf,

The SAP HANA Student Campus team

Contact: students-hana@sap.com

 

Archive of previous events

Hybris Commerce Suite 5.5 Installation


1. Download Hybris Software.

 

2. Extract the Software at some location.

 

3. Open cmd, go to the /hybris/bin/platform path, and run --> setantenv.bat

 

4. After completion of the setantenv.bat command, run --> ant clean all

 

A BUILD SUCCESSFUL message pops up after successful completion of this command.

 

5. Now Run --> hybrisserver.bat

 

After some time, a message pops up on cmd: "INFO : server startup in 576847 sec".

 

6. Now open http://localhost:9001/ in a browser.

 

7. The Hybris Commerce Suite 5.5 installation is successful.

SAP NetWeaver BW Powered by SAP HANA: Frequently Asked Questions


Scientific Publications of the SAP HANA Database Campus


This is a list of selected publications made by the SAP HANA Database Campus.

 

2015

  • Marcus Paradies, Wolfgang Lehner, Christof Bornhövd. GRAPHITE: An Extensible Graph Traversal Framework for Relational Database Management Systems. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Abdelkader Sellami, Anastasia Ailamaki. Scaling Up Concurrent Main-Memory Column-Store Scans: Towards Adaptive NUMA-aware Data and Task Placement. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Norman May, Franz Faerber. Indexing Highly Dynamic Hierarchical Data. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • David Kernert, Norman May, Michael Hladik, Klaus Werner, Wolfgang Lehner. From Static to Agile - Interactive Particle Physics Analysis with the SAP HANA DB. DATA 2015, Colmar, France, July 20-22, 2015.
  • Florian Wolf, Iraklis Psaroudakis, Norman May, Anastasia Ailamaki, Kai-Uwe Sattler. Extending Database Task Schedulers for Multi-threaded Application Code. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015
  • Ingo Müller, Peter Sanders, Arnaud Lacurie, Wolfgang Lehner, Franz Färber. Cache-Efficient Aggregation: Hashing Is Sorting. SIGMOD 2015, Melbourne, Australia, May 31-June 4, 2015.
  • Daniel Scheibli, Christian Dinse, Alexander Böhm. QE3D: Interactive Visualization and Exploration of Complex, Distributed Query Plans . SIGMOD 2015 (Demonstration), Melbourne, Australia, May 31-June 4, 2015
  • Martin Kaufmann, Peter M. Fischer, Norman May, Chang Ge, Anil K. Goel, Donald Kossmann. Bi-temporal Timeline Index: A Data Structure for Processing Queries on Bi-temporal Data. ICDE 2015, Seoul, Korea, April 2015.
  • Robert Brunel, Jan Finis, Gerald Franz, Norman May, Alfons Kemper, Thomas Neumann, Franz Faerber. Supporting Hierarchical Data in SAP HANA. ICDE 2015, Seoul, Korea, April 2015.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SpMachO - Optimizing Sparse Linear Algebra Expressions with Probabilistic Density Estimation. EDBT 2015, Brussels, Belgium, March 23-27, 2015.
  • Alexander Böhm: Keynote: Novel Optimization Techniques for Modern Database Environments. BTW 2015: 23-24, March 5, 2015, Hamburg
  • Alexander Böhm, Mathias Golombek, Christoph Heinz, Henrik Loeser, Alfred Schlaucher, Thomas Ruf: Panel: Big Data - Evolution oder Revolution in der Datenverarbeitung? BTW 2015: 647-648, March 5, 2015, Hamburg
  • Ismail Oukid, Wolfgang Lehner, Thomas Kissinger, Thomas Willhalm, Peter Bumbulis. Instant Recovery for Main-Memory Databases. CIDR 2015, Asilomar, California, USA. January 4-7, 2015.

 

2014

  • Iraklis Psaroudakis, Florian Wolf, Norman May, Thomas Neumann, Alexander Böhm, Anastasia Ailamaki, Kai-Uwe Sattler. Scaling up Mixed Workloads: a Battle of Data Freshness, Flexibility, and Scheduling. TPCTC 2014, Hangzhou, China, September 1-5, 2014.
  • Michael Rudolf, Hannes Voigt, Christof Bornhövd, Wolfgang Lehner. SynopSys: Foundations for Multidimensional Graph Analytics. BIRTE 2014, Hangzhou, China, September 1, 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Top-k Differential Queries in Graph Databases. In Advances in Databases and Information Systems - 18th East European Conference, ADBIS 2014, Ohrid, Republic of Macedonia, September 7-10, 2014.
  • Kim-Thomas Rehmann, Alexander Böhm, Dong Hun Lee, Jörg Wiemers: Continuous performance testing for SAP HANA. First International Workshop on Reliable Data Services and Systems (RDSS), Co-located with ACM SIGMOD 2014, Snowbird, Utah, USA
  • Guido Moerkotte, David DeHaan, Norman May, Anisoara Nica, Alexander Böhm: Exploiting ordered dictionaries to efficiently construct histograms with q-error guarantees in SAP HANA. SIGMOD Conference 2014, Snowbird, Utah, USA
  • Ismail Oukid, Daniel Booss, Wolfgang Lehner, Peter Bumbulis, Thomas Willhalm. SOFORT: A Hybrid SCM-DRAM Storage Engine For Fast Data Recovery. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Iraklis Psaroudakis, Thomas Kissinger, Danica Porobic, Thomas Ilsche, Erietta Liarou, Pinar Tözün, Anastasia Ailamaki, Wolfgang Lehner. Dynamic Fine-Grained Scheduling for Energy-Efficient Main-Memory Queries. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Marcus Paradies, Michael Rudolf, Christof Bornhövd, Wolfgang Lehner. GRATIN: Accelerating Graph Traversals in Main-Memory Column Stores. GRADES 2014, Snowbird, USA, June 22-27, 2014.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SLACID - Sparse Linear Algebra in a Columnar In-Memory Database System. SSDBM, Aalborg, Denmark, June/July 2014.
  • Ingo Müller, Peter Sanders, Robert Schulze, Wei Zhou. Retrieval and Perfect Hashing using Fingerprinting. SEA 2014, Copenhagen, Denmark, June/July 2014.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Donald Kossmann. Benchmarking Bitemporal Database Systems: Ready for the Future or Stuck in the Past? EDBT 2014, Athens, Greece, March 2014.
  • Ingo Müller, Cornelius Ratsch, Franz Färber. Adaptive String Dictionary Compression in In-Memory Column-Store Database Systems. EDBT 2014, Athens, Greece, March 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: GraphMCS: Discover the Unknown in Large Data Graphs. EDBT/ICDT Workshops: 200-207.

 

2013

  • Sebastian Breß, Felix Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake. Efficient co-processor utilization in database query processing. Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096.
  • Martin Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber, Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization. GRADES 2013, New York, USA, June 22-27, 2013.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Leveraging Flexible Data Management with Graph Databases. GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter Sanders. Efficient Many-Core Query Execution in Main Memory Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Martin Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Martin Kaufmann, Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP), Norman May (SAP): Timeline Index: A Unified Data Structure for Processing Queries on Temporal Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Martin Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann, Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Rudolf, M., Paradies, M., Bornhövd, C., & Lehner, W. (2013). The Graph Story of the SAP HANA Database. BTW (pp. 403–420).
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276.

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd, C. (2012). Efficient transaction processing in SAP HANA database. SIGMOD Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-U., & Franz, F. (2009). Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen. Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova, O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz eines Data Warehouse. Datenbanksysteme für Business, Technologie und Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.

[SAP HANA Academy] Live 4 ERP Agility: Using the Live 4 Map


In this tutorial video the SAP HANA Academy’s Jamie Wiseman demonstrates how to use the Live4 map application. The map application is one part of the ERP Agile Solution demo application. The SAP HANA Academy’s ERP Agile Solution Live4 demo details how you can securely marry data from your ERP system to externally sourced data via the SAP HANA Cloud Platform.


The SAP HANA Academy will build, for free, an ad hoc ERP Agile Solution for any and all SAP ERP and Business Suite customers. Please contact us at hanaacademy@sap.com to inquire about an ad hoc ERP Agile Solution. Watch Jamie’s tutorial video below.

Screen Shot 2015-06-08 at 12.22.45 PM.png

(0:20 – 2:25) Overview of the Live4 Map Tile

 

In the main ERP Agile Inventory demo SAP Fiori application click on the Map tile. The map is an SAP HANA XS application that was created using SAPUI5 and utilizes the HERE Maps API. The pop-up text box at the bottom of the application that reads 48,244,638 shows the total population (derived from census data) within the map's current boundaries. Panning or zooming the map runs a web service against one of our SAP HANA database tables to return a new total population for that area.

Screen Shot 2015-06-08 at 12.32.03 PM.png

In fact, every time the map's boundaries change, a trio of web services is run against the SAP HANA system. The second web service returns all of the current store information from the SAP ERP system as well as the current air quality and weather data from the Hadoop data lake. Depending on the area, this data can be culled from up to three separate weather stations. SAP HANA's geospatial capabilities are used to identify the weather stations closest to each store.
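
To illustrate the kind of geospatial lookup involved, here is a minimal sketch that finds the closest weather station to a given store. The STORES and WEATHER_STATIONS tables and their ST_POINT LOCATION columns are assumptions for illustration, not the demo's actual schema.

-- Sketch only: table and column names are hypothetical.
-- ST_Distance returns the distance in the unit of the spatial
-- reference system (meters for geographic SRID 4326).
SELECT TOP 1
       w.STATION_ID,
       s.LOCATION.ST_Distance(w.LOCATION) AS DISTANCE
FROM   STORES s,
       WEATHER_STATIONS w
WHERE  s.STORE_NAME = 'Drugs R Us Bishop'
ORDER  BY DISTANCE ASC;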

 

On the map a blue circular marker represents each store location. Clicking on a marker displays the store name, the reading from its closest AQI station, and the local area's humidity and temperature. The popover pictured below shows no inventory alerts for the Drugs R Us Bishop location. Clicking the Alerts button on the bottom banner details where alerts are currently occurring; right now in Jamie's example no store has an alert.

Screen Shot 2015-06-08 at 12.36.36 PM.png

(2:25 – 3:55) Internet of Things Simulator

 

Back on the Live4 home SAP Fiori application screen click on the Simulator IoT tile. On this screen you can manually change the AQI, temperature, and humidity values for each individual store. In this tutorial example Jamie ramps up the AQI, temperature, and humidity for the store in Bishop to force some predicted inventory issues.

Screen Shot 2015-06-08 at 12.44.03 PM.png

Back in the map tile, just panning the map a little returns new information from our SAP HANA system. Now any store with an inventory alert is marked by a slightly larger red circle. If you click on a red alert marker you will see all of the stores that can transfer the needed inventory of high-demand, low-supply products to that location. SAP HANA geospatial is used to find the nearest transfer store with enough excess product to satisfy the alerted store's demand without causing a shortage of its own; a sketch of such a query follows below.
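
Conceptually, the transfer-store lookup is another nearest-neighbour query, this time filtered on available stock. This hedged sketch assumes hypothetical STORES and INVENTORY tables and a made-up product ID, not the demo's actual schema.

-- Sketch only: schema and product ID are hypothetical. Find the
-- closest other store holding more than its reorder level of the product.
SELECT TOP 1
       donor.STORE_NAME,
       alerted.LOCATION.ST_Distance(donor.LOCATION) AS DISTANCE
FROM   STORES alerted,
       STORES donor
       JOIN INVENTORY i ON i.STORE_ID = donor.ID
WHERE  alerted.STORE_NAME = 'Drugs R Us Bishop'
  AND  donor.ID <> alerted.ID
  AND  i.PRODUCT_ID = 4711
  AND  i.QUANTITY > i.REORDER_LEVEL
ORDER  BY DISTANCE ASC;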

Screen Shot 2015-06-08 at 12.49.00 PM.png

Now the Alerts option lists a table with the full set of products that have inventory alerts along with the transfer store information. This information comes from the third web service that is run against the SAP HANA database.

Screen Shot 2015-06-08 at 12.47.58 PM.png

(3:55 – 4:40) The Map Application’s Charts and About Options

 

In addition to the population total mentioned earlier, we can also view other demographic information in the map application by clicking on the Charts option in the bottom banner. The charts cover the map's current boundaries and include gender, age, education and commute.

Screen Shot 2015-06-08 at 12.50.56 PM.png

If you want further details about the map application, click on the About option in the bottom banner. This pops up a key for the different symbols that can be displayed on the map, as well as links to an ERP Agile Solution user guide and further information on the SAP HANA Academy.

Screen Shot 2015-06-08 at 12.51.49 PM.png

For further tutorial videos about the ERP Agile Inventory with HCP course please view this playlist.


SAP HANA Academy - Over 1,000 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy.

Scheduling an XSengine background job to call a procedure


Background:

XSengine background jobs can be scheduled for SQLScript procedures and XSJS functions.

 

Prerequisites:

 

  1.) Add a section "scheduler" to xsengine.ini with the property enabled = true, then run Reconfigure Service for the xsengine as SYSTEM user under the Landscape\Services tab.

tt1.jpg

This will enable the XSengine job scheduling.
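
Alternatively, the same setting can be made via SQL, following the same pattern as other configuration changes. This is a sketch to verify against your revision, executed as SYSTEM:

alter system alter configuration ('xsengine.ini','SYSTEM') set ('scheduler','enabled') = 'true' with reconfigure;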

 

  2.) You need access to the roles sap.hana.xs.admin.roles::JobAdministrator and sap.hana.xs.admin.roles::HTTPDestAdministrator.

 

Setup:

 

This can be done either in the SAP HANA Studio development perspective or in the SAP HANA web-based development editor, which can be reached through:

 

http://<HOSTNAME>:8000/sap/hana/xs/ide/editor/

Sample using the web-based development editor:

 

Creating a new subpackage for my jobs for an easier overview:

tt1.jpg

 

Creating a new .xsjob file:

tt1.jpg

 

Code sample:

{
   "description": "Background job test",
   "action": "<package_path>::<procedure_or_function>",
   "schedules": [
      {
         "description": "Background job test",
         "xscron": "* * * * * * */*"
      }
   ]
}

 

Xscron samples:

 

2013 * * fri 12 0 0 -> Run the job every Friday in 2013 at 12:00.

* * 3:-2 * 12:14 0 0 -> Run every hour between 12:00 and 14:00 every day between the third and second-to-last day of the month.

* * * -1.sun 9 0 0 -> Run the job on the last Sunday of every month at 09:00.
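
The seven xscron fields are, in order: year, month, day, day-of-week, hour, minute, second. Putting this together, a complete .xsjob that calls a procedure every day at 01:00 might look like the following sketch; the package path and procedure name are hypothetical:

{
   "description": "Nightly cleanup job",
   "action": "live4.jobs::nightly_cleanup",
   "schedules": [
      {
         "description": "Every day at 01:00",
         "xscron": "* * * * 1 0 0"
      }
   ]
}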

 

Enabling the scheduling:


Open XSengine Administration:

http://<HOSTNAME>:8000/sap/hana/xs/admin

Browse to your job and specify a USER and PASSWORD under which the job should be executed, tick the Active checkbox, and save. (A dedicated user with the required access should be created for this task.)


tt1.jpg


The job will now run and its status can be monitored in the Job log section, e.g.:

tt.jpg

Or with the tables:

"_SYS_XS"."JOBS"

"_SYS_XS"."JOB_SCHEDULES"

"_SYS_XS"."JOB_LOG"

 

Hope this document helps you avoid the use of external tools to call a procedure on a schedule; HANA now supports this feature natively.

 

Regards,

Sharath Borra

[SAP HANA Academy] Live 4 ERP Agility: HCI-DS & SDI Chalkboard


In the tutorial video below, part of the Live 4 ERP Agility course, the SAP HANA Academy's Tahir Hussain (Bob) Babar gives a chalkboard overview of the two data sources that will be married in the SAP HANA Cloud Platform during the course.

Screen Shot 2015-06-09 at 4.43.43 PM.png

(0:18 – 1:40) Overview of the Two Data Sources

 

The first data source comprises a set of four tables that will be brought in from an SAP ERP system; SAP HANA Cloud Integration for Data Services (HCI-DS) will be used to copy this data from the remote ERP system into the SAP HANA Cloud Platform. The second data source, rather than being copied or replicated into the SAP HANA Cloud Platform, will be exposed through a virtual table/view on the remote source; to achieve this we will use SAP Smart Data Integration.

 

The second data source is a Hadoop data lake which contains years and years of weather information from the EPA. We won't need to replicate all of that weather data into the SAP HANA Cloud Platform, so we will connect to it with a virtual table/view and retrieve the data as and when we need it.

 

The videos following this introductory one will show you how to technically set up everything necessary to achieve this.

Screen Shot 2015-06-09 at 4.47.28 PM.png

(1:40 – 3:35) Overview of How to Connect the Data Sources to the SAP HANA Cloud Platform

 

The first step is to get your own SAP HANA Cloud Platform account; this course assumes that you have already done so. Please view this video from the SAP HANA Academy to see how you can simply get your own personal SAP HANA Cloud Platform account.

 

*Important: your SAP HANA Cloud Platform account must be upgraded to SAP HANA SPS09.*

 

Next we will copy the ERP data and create the virtual tables for the Hadoop data in a SAP HANA Cloud Platform schema called Live 4. SAP has a variety of tools that can copy the data into the SAP HANA Cloud Platform: we can replicate the data using Smart Data Integration, including real-time replication, or copy it using the ETL tool SAP Data Services. In the Live 4 ERP Agility course we are going to use a new tool called SAP HANA Cloud Integration for Data Services (HCI-DS), which is essentially SAP Data Services in the cloud. In the next five videos of the course Bob will show you how to set up HCI-DS and how to copy the specific tables you need to carry out this demo.

 

Because we don't want the years and years of EPA weather data from the Hadoop data lake in our SAP HANA Cloud Platform, we will use SAP Smart Data Integration to create virtual tables/views. After that we can run a select statement, like the sketch below, to get the data we need in real time.
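
Once the virtual table exists, querying the remote Hadoop data is plain SQL; the schema and table names in this sketch are assumptions for illustration:

-- the virtual table points at the Hadoop data lake, so only the
-- rows this query needs travel into the SAP HANA Cloud Platform
select STATION_ID, MEASURE_DATE, AQI, TEMPERATURE, HUMIDITY
from "LIVE4"."VT_EPA_WEATHER"
where MEASURE_DATE >= add_days(current_date, -7);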

Screen Shot 2015-06-09 at 4.48.39 PM.png

(3:35 – 5:00) Overview of the Agents

 

In both cases, connecting a data source requires an agent. Bob will show you how to download and install the SAP Data Services Agent for HCI-DS, which resides on a Windows or Linux system. Then Bob will show how to download and install the Data Provisioning Agent on a Windows box, which enables the creation of a virtual table from Hadoop to the SAP HANA Cloud Platform.

Screen Shot 2015-06-09 at 4.50.07 PM.png

Reminder: the SAP HANA Academy will build, for free, an ad hoc SAP ERP Agile Solution for any SAP ERP or Business Suite customer. Please contact us at hanaacademy@sap.com to inquire.


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,000 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy.

[SAP HANA Academy] Live 4 ERP Agility: HCI-DS Agent Install


Continuing the SAP HANA Academy’s Live 4 ERP Agility course, Tahir Hussain (Bob) Babar shows how to install the SAP Data Services Agent on a Windows box provisioned in the Cloud. The Agent will be used by SAP HANA Cloud Integration for Data Services (HCI-DS) to load data to a SAP HANA database that is in the SAP HANA Cloud Platform. Check out Bob’s tutorial video below.

Screen Shot 2015-06-10 at 9.47.05 AM.png

(0:35 – 2:40) Downloading the Agent & Configuration File

 

In order to use HCI-DS you must install the SAP Data Services Agent so that it can access your source data and target system. In this course the source data is from the ERP system and the target system is the SAP HANA Cloud Platform.

 

On the Windows machine Bob has provisioned, he opens a browser (Google Chrome) and enters his HCI-DS URL. After logging into his HCI account, Bob navigates to the AGENTS tab. Clicking on the Download Agent Package hyperlink at the top takes you to the SAP Service Marketplace, where you can follow the simple steps to download the Agent.

 

Bob's Agent is already in his downloads folder, so he clicks on the New Agent button at the top right of the screen in the AGENTS tab. In the New Agent screen that pops up, Bob names the agent MyAgent and puts it into a new group that he names MyGroup before clicking Next.

Screen Shot 2015-06-10 at 9.59.25 AM.png

On the next screen Bob clicks on the hyperlink adjacent to Step 2 to download the configuration file, which links the agent to the HCI-DS system. Once the configuration file has been downloaded, click Save and Close. MyAgent is now displayed on the AGENTS screen, but its status shows that it has NEVER been connected.

Screen Shot 2015-06-10 at 10.19.06 AM.png

(2:40 – 4:20) Installing the Agent

 

Bob opens the agent install file from his downloads folder and is taken to a new window where he is prompted to define the parameters. Bob leaves the defaults for Installation Path and Configuration File Location, then enters the Windows credentials for the machine where the agent will be installed before clicking the Install button.

Screen Shot 2015-06-10 at 10.20.41 AM.png

Note: there is no need to change the port as you can use the same port to connect.

 

Once it has been successfully installed, click Finish.

 

(4:20 - 6:00) Configuring the SAP Data Services Agent

 

In the next window that pops up (SAP Data Services Agent Configuration) you will need to enter your SAP HANA Cloud Integration server details. First enter your SAP HANA Cloud Platform user name and password in the Administrator user name and Administrator password text boxes. Then, for the agent configuration file, click the adjacent box and select the configuration file you downloaded earlier. Next, if you use a proxy server as Bob does, check the Use proxy server option and enter your Proxy host and Proxy port information. Finally click the Upload button.

Screen Shot 2015-06-10 at 10.23.00 AM.png

Once you see the Upload completed successfully message, click OK and then Exit. The SAP Data Services Agent now needs to be restarted for the configuration changes to take effect, so click Yes to restart it.

 

Back in your browser, the AGENTS tab of your HCI-DS system will still display that MyAgent has NEVER started. Clicking the refresh button changes the status of MyAgent from a red triangle to a green square. Now you can see the agent's version, engine version, repository version and the last time it was connected.

Screen Shot 2015-06-10 at 10.24.53 AM.png

Now you have successfully installed the SAP Data Services Agent so you can use HCI-DS to load data into the SAP HANA Cloud Platform.


Reminder: the SAP HANA Academy will build, for free, an ad hoc SAP ERP Agile Solution for any SAP ERP or Business Suite customer. Please contact us at hanaacademy@sap.com to inquire.


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,000 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy.
