
SAP HANA SPS10 – What is New for Backup and Recovery


This post outlines new and enhanced features of SAP HANA backup and recovery with Support Package Stack 10.

The information here has been collected from several sources to make it more accessible to interested readers.

 


Recovery Using Delta Backups

With SPS10, SAP HANA supports recovery using delta backups (incremental and differential backups).

 

Full Backups and Delta Backups

SAP HANA now supports the following backup types:

 

From SPS10, the term full backup covers the following:

  • Data backup
    A data backup includes all the data structures that are required to recover the database.
  • Storage snapshot
    A storage snapshot captures the content of the SAP HANA data area at a particular point in time.

 

From SPS10, SAP HANA supports the following delta backup types:

  • Incremental backup
    An incremental backup stores the data changed since the last backup: either the last full data backup or the last delta backup (incremental or differential).
  • Differential backup
    A differential backup stores all the data changed since the last full data backup.

 

Note that delta backups (incremental or differential) contain actual data, whereas log backups contain redo log entries.
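As a mental model of the difference between the backup types, the following sketch (plain Python, not an SAP tool; the change numbers are invented for illustration) tracks which change sets each backup type would capture:

```python
# Toy model: changes are numbered 1..n as they occur.
# An incremental backup captures changes since the most recent backup of any
# kind; a differential captures all changes since the last full data backup.

def plan_contents(events):
    """events: list of ("change", n) | ("full",) | ("incr",) | ("diff",)."""
    contents = []      # (backup_type, captured_changes)
    since_full = []    # changes since the last full backup
    since_any = []     # changes since the last full or delta backup
    for ev in events:
        if ev[0] == "change":
            since_full.append(ev[1])
            since_any.append(ev[1])
        elif ev[0] == "full":
            contents.append(("full", list(since_full)))  # simplification: full = all outstanding changes
            since_full, since_any = [], []
        elif ev[0] == "incr":
            contents.append(("incr", list(since_any)))
            since_any = []
        elif ev[0] == "diff":
            contents.append(("diff", list(since_full)))
            since_any = []  # a differential also resets the reference point for incrementals

    return contents

events = [("change", 1), ("full",),
          ("change", 2), ("incr",),
          ("change", 3), ("diff",),
          ("change", 4), ("incr",)]
for kind, captured in plan_contents(events):
    print(kind, captured)
```

Note how the differential re-captures change 2 even though an incremental already holds it; that redundancy is what makes differentials independent of earlier incrementals.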

 

Delta backups are included in the backup catalog.

When you display the backup catalog, delta backups are hidden by default.

 

To display delta backups in the backup catalog:

  1. In SAP HANA studio, open the Backup Console and go to the Backup Catalog tab.
  2. Select Show Delta Backups.

 

You can use both incremental and differential backups in your backup strategy.

 

Backup lifecycle management now also includes delta backups.

When you delete all the backups older than a specific full backup, older delta backups are also deleted along with older full backups and log backups.
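The deletion rule can be sketched as follows (illustrative Python, not an SAP API; backup IDs here simply stand in for points in time):

```python
def prune_older_than(catalog, keep_from_id):
    """catalog: list of (backup_id, kind) ordered by backup_id (i.e. by time).
    Removes every entry older than the full backup keep_from_id,
    regardless of kind (full, incremental, differential, log)."""
    return [(bid, kind) for bid, kind in catalog if bid >= keep_from_id]

catalog = [(1, "full"), (2, "log"), (3, "incremental"),
           (4, "full"), (5, "differential"), (6, "log")]
# Keep the full backup with ID 4 and everything taken after it:
print(prune_older_than(catalog, 4))
```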

 

 

SAP HANA Recovery Options Using Delta Backups

If delta backups are available, they are included by default in a recovery. In the recovery dialog in SAP HANA studio, you can choose to perform a recovery without using delta backups:

sap-hana-other-settings.png

 

 

If you include delta backups, SAP HANA automatically determines the optimal recovery strategy based on all the available backups.

 

 

Recovery to the Most Recent State

What you need:

 

  • A data backup

    AND
  • The last differential backup
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent incremental backups
    Note:
    This is only supported for a data backup, not for a storage snapshot.

    AND
  • Subsequent log backups

    AND
  • Redo log entries that are still available in the log area
    (If the log area is still available.)
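The list above can be sketched as a chain-selection routine (plain Python, illustrative only, not how SAP HANA implements it): start from the last full data backup, add the last differential taken after it, then every incremental taken after that, then the subsequent log backups.

```python
def recovery_chain(backups):
    """backups: list of (backup_id, kind), ordered by backup_id.
    kind in {"full", "differential", "incremental", "log"}.
    Returns the backups needed to reach the most recent state."""
    # last full data backup
    full = max((b for b in backups if b[1] == "full"), key=lambda b: b[0])
    chain = [full]
    # last differential taken after that full backup, if any
    diffs = [b for b in backups if b[1] == "differential" and b[0] > full[0]]
    base = max(diffs, key=lambda b: b[0]) if diffs else full
    if diffs:
        chain.append(base)
    # all incrementals after the differential (or after the full backup)
    chain += [b for b in backups if b[1] == "incremental" and b[0] > base[0]]
    # all log backups after the last data/delta backup in the chain
    newest = max(b[0] for b in chain)
    chain += [b for b in backups if b[1] == "log" and b[0] > newest]
    return chain

backups = [(1, "full"), (2, "incremental"), (3, "differential"),
           (4, "incremental"), (5, "log"), (6, "log")]
print(recovery_chain(backups))
```

In this example the incremental with ID 2 is skipped, because the later differential (ID 3) already covers everything since the full backup.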

 

 

Recovery to a Point in Time in the Past

What you need:

 

The same backups as for a recovery to the most recent state.

Redo log entries from the log area may not be needed.

 

 

SQL Statements for Recovery Without Delta Backups

By default, SAP HANA includes delta backups in a recovery.

 

If you wish to recover SAP HANA without using delta backups, you can use the following SQL statement:

 

RECOVER DATABASE UNTIL TIMESTAMP '<timestamp>' IGNORE DELTA DATA BACKUPS

 

Example:

RECOVER DATABASE UNTIL TIMESTAMP '2015-05-15 10:00:00' IGNORE DELTA DATA BACKUPS

 

 

Finding and Checking the Backups Needed for a Recovery

You can use hdbbackupdiag to display the backups needed to recover the database.

In this way, you can minimize the number of backups that need to be made available for a recovery.

 

With SPS10, hdbbackupdiag supports delta backups.

 

More information: Checking Whether a Recovery is Possible

 

 

Prerequisites for Performing a Recovery

 

  • Operating system user <sid>adm
  • Read access to the backup files
  • System privilege DATABASE ADMIN
    (for tenant databases in a SAP HANA multiple-container system)

 

 

Third-Party Backup Tools and Delta Backups

Delta backups are compatible with the current API specification for third-party backup tools (Backint).

 

  • For delta data backups, SAP HANA uses the Backint option -l LOG in combination with the data Backint parameter file:

    -p /usr/sap//SYS/global/hdb/opt/hdbconfig/initData.utl

    Third-party backup tools sometimes use the Backint option -l to determine the backup container using the Backint parameter file.
    This means that for the option -l LOG, the log backup container is used.
  • Caution:
    Backup containers that were, until SPS10, only used for log backups may be sized too small for delta backups.
    If a log full situation occurs, this could cause a database standstill.
  • The Backint parameter file is tool-specific and typically contains information such as the backup destination.
    Note: Some third-party backup tools support only one Backint parameter file for both data and log backups.
  • Recommendation:
    Ask your backup tool vendor for details of how to configure the tool to work with delta backups.
    If in doubt, configure two dedicated Backint parameter files: one for data backups, and one for log backups.

 

SQL Statements for Delta Backups

To create an incremental backup, use the following SQL statement:

 

BACKUP DATA INCREMENTAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_incremental_0_1431675865039_0_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_1_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_2_1
Data backup file: 2015-08-03_databackup_incremental_1431675646547_1431675865039_3_1

 

To execute a differential backup, use the following SQL statement:

 

BACKUP DATA DIFFERENTIAL USING FILE ('<file name>')

 

If the file name is ‘2015-08-03’, this SQL statement creates the following delta backup files:

 

Data backup file: 2015-08-03_databackup_differential_0_1431675646547_0_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_1_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_2_1
Data backup file: 2015-08-03_databackup_differential_1431329211296_1431675646547_3_1

 

In this example, 1431329211296 is the backup ID of the basis data backup; 1431675646547 is the backup ID of the delta backup.
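Going by these examples, the file names appear to follow the pattern <prefix>_databackup_<type>_<basis backup ID>_<backup ID>_<volume>_<part>. The following parser (plain Python) is an inference from the examples above, not a documented file-name specification:

```python
def parse_delta_backup_name(name):
    """Split a delta backup file name of the apparent form
    <prefix>_databackup_<type>_<basis_id>_<backup_id>_<volume>_<part>.
    This layout is inferred from the example file names; it is not an
    official format guarantee."""
    head, _, tail = name.partition("_databackup_")
    kind, basis_id, backup_id, volume, part = tail.split("_")
    return {"prefix": head, "type": kind,
            "basis_backup_id": int(basis_id), "backup_id": int(backup_id),
            "volume": int(volume), "part": int(part)}

info = parse_delta_backup_name(
    "2015-08-03_databackup_differential_1431329211296_1431675646547_1_1")
print(info["basis_backup_id"], info["backup_id"])
```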

 

Prerequisites for Working with Delta Backups

System privilege BACKUP ADMIN, BACKUP OPERATOR (recommended for batch users only), or DATABASE ADMIN (for MDC)

 

More Information

Delta Backups

 

 

Backup Functionality in SAP HANA Cockpit

In addition to SAP HANA studio, you can now also start SAP HANA backup operations from SAP HANA cockpit.

 

From SAP HANA cockpit, you can:

  • Create data backups
  • Display information about data backups

 

Create Data Backups in SAP HANA Cockpit

Using SAP HANA cockpit, you can create data backups.

 

  1. In SAP HANA cockpit, click the Data Backup tile.
  2. Choose Start New Backup and specify the backup settings.

    sap-hana-cockpit-backup-progress.png

     

  3. To start the backup, choose Back Up.

    sap-hana-cockpit-backup-start.png

     

    The overall progress is displayed on the Data Backup tile.

    To see more details of the backup progress, click the tile.

 

 

Display Information About Backups in SAP HANA Cockpit

If a backup is running, the Data Backup tile displays its progress.

 

If no backup is running, the Data Backup tile displays the status of the most recent full backup:

  • Successful
  • Running
  • Snapshot Prepared
  • Canceled
  • Error

 

Click the tile to display more details from the backup catalog:

sap-hana-cockpit-backup-catalog.png

 

 

The following information is displayed:

  • Time range that the backup catalog covers
  • Total size of the backup catalog
  • Information about the most recent backups within the time range
    (status, start time, backup type, duration, size, destination type, and comment)

 

Click a row to display more details:

sap-hana-cockpit-backup-catalog-details.png

 

 

Prerequisites for Creating Backups in SAP HANA Cockpit

 

  • System privilege BACKUP OPERATOR or BACKUP ADMIN
  • Role:
    • sap.hana.backup.roles::Operator
      or
    • sap.hana.backup.roles::Administrator

 

Notes

Storage snapshots, backup lifecycle management, and database recovery are currently not supported in SAP HANA cockpit.

 

More Information

SAP HANA Cockpit

SAP HANA Administration Guide: SAP HANA Database Backup and Recovery

 

 

Support for SAP HANA Multitenant Database Containers

In SAP HANA studio, the steps to recover a SAP HANA multitenant database container system are similar to the steps to recover a SAP HANA single-container system.

 

Note:

Storage snapshots are currently not supported for SAP HANA multitenant database container systems.

 

The system database plays a central role in the backup and recovery of SAP HANA multitenant database containers:

  • The system database can initiate backups of the system database itself as well as of individual tenant databases.
    A tenant database can also perform its own backups (unless this feature has been disabled for the tenant database).
  • Recovery of tenant databases is always initiated from the system database.
  • To recover a complete SAP HANA multitenant database container system, the system database and all the tenants need to be recovered individually.

 

 

SAP HANA Multitenant Database Containers and Third-Party Backup Tools

When you work with third-party backup tools and SAP HANA multitenant database container systems, you should be aware of some specific points:

 

Isolation Level “High”

With SPS10, a new option “isolation level” was introduced for SAP HANA multitenant database container systems.

 

In isolation level high, each tenant database has its own dedicated operating system user.

 

In high isolation scenarios, Backint is supported by SAP HANA. However, you should check with your third-party tool vendor whether any tool-specific restrictions apply.

 

Tenant Copy

Tenant copy using Backint is currently not supported.

To copy a tenant database using backup and recovery, use file system-based backups instead.

 

 

DBA Cockpit for SAP HANA: New Backup Functionality

DBA Cockpit for SAP HANA supports the following SAP HANA SPS10 functionality:

  • Delta backups (incremental and differential backups)

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.40 SP13
    • 7.31 SP17
    • 7.30 SP14
    • 7.02 SP18
  • Backups of tenant databases
    All tenant databases in an SAP HANA multitenant database container can be backed up independently of each other.

    This feature is available with the following SAP_BASIS Support Packages and above:
    • 7.40 SP10
    • 7.31 SP15
    • 7.30 SP13
    • 7.02 SP17

 

Note that the tenant database on which DBA Cockpit is installed is supported out of the box.

No additional setup steps are necessary in DBA Cockpit.

System databases need to be integrated manually.

More information: SAP Help Portal -> DBA Cockpit for SAP HANA -> Add a Database Connection

 

To schedule backups:

 

  1. In DBA Cockpit, choose Jobs -> DBA Planning Calendar.
    Alternatively, use SAP transaction DB13.
  2. To schedule a new data backup, drag an item from the Action Pad to a cell in the calendar.
    dba-cockpit-action-pad.png
    To back up a tenant database, choose Complete Data Backup.
    Tenant databases are backed up from within the system database.
  3. In the dialog box, specify the information required.
    dba-cockpit-backup-mdc.png
    For Database Name, specify the name of the tenant database you want to back up.
  4. Choose Add or Execute Immediately.
    The backup is scheduled for the time you specified or is started.

 

More information: SAP Note 2164096 - Schedule backups for SAP HANA multiple-container systems with DB13

 


New Monitoring View: Progress of Backup

M_BACKUP_PROGRESS provides detailed information about the most recent data backup.

 

Here is a comparison of M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES with M_BACKUP_PROGRESS:

 

  • Backup types: M_BACKUP_CATALOG / M_BACKUP_CATALOG_FILES cover all types of backups (data, log, and storage snapshots, if available); M_BACKUP_PROGRESS covers data backups only (including delta backups).
  • History: the catalog views list all completed and currently running backups since the database was created; M_BACKUP_PROGRESS lists only the currently running backup and the last finished one.
  • Persistence: the catalog views are persistent; M_BACKUP_PROGRESS is cleared at database restart.
  • Data volume: the catalog views show the total amount of data for finished backups only; M_BACKUP_PROGRESS shows both the total and the already transferred amount of data for all backups.
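Because M_BACKUP_PROGRESS reports both the total and the already transferred amount of data, a completion percentage can be derived for a running backup. A small sketch (plain Python on values you would read from the view; the variable names are illustrative, not the view's column names):

```python
def backup_progress_percent(transferred_bytes, total_bytes):
    """Percentage of a running backup, rounded to one decimal place.
    The two inputs correspond to the transferred and total data volumes
    that M_BACKUP_PROGRESS reports; names here are illustrative."""
    if total_bytes == 0:
        return 0.0  # nothing reported yet
    return round(100.0 * transferred_bytes / total_bytes, 1)

print(backup_progress_percent(7_500_000_000, 30_000_000_000))  # → 25.0
```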

 

System views are located in the SYS schema.

 

More information: M_BACKUP_PROGRESS in the SAP HANA SQL and System Views Reference Guide

 

 

Which SAP HANA Tool Supports What?

Below is an overview of the backup and recovery functionality supported by the different SAP HANA tools with SAP HANA SPS10:

 

  • SAP HANA studio: data backups, storage snapshots, incremental backups, differential backups, database recovery, support for tenant databases
  • SAP HANA cockpit: data backups
  • DBA Cockpit for SAP HANA: data backups, incremental backups, differential backups, backup scheduling, support for tenant databases

 

 

More Information

SAP HANA User Guides

 

 

Overview Presentation

SAP HANA Backup/Recovery Overview

 

 

Training

 

 

 

SAP Notes

 

  • 2165826
    Information about SAP HANA Platform Support Package Stack (SPS) 10
  • 1642148
    FAQ: SAP HANA database backup and recovery
  • 2031547
    Overview of SAP-certified 3rd party backup tools and associated support process
  • 2039883
    FAQ: SAP HANA database and storage snapshots
  • 2165547
    FAQ: SAP HANA Database Backup & Recovery in a SAP HANA System Replication landscape
  • 2091951
    Best Practices for SAP HANA Backup and Restore

 

Further SAP notes are available on component HAN-DB-BAC


SAP HANA Hands on tests ( part 3.1 ) : Applying patches to HANA DB


I'm using HANA DB V1.00.82 as a starting point.

I'm performing an update of the existing components using hdblcmgui.

In this first step I am not using the HDB studio HLM embedded interface.

The HANA server is a single-node system with ECC6 EHP7 running on it (fresh installation).

 

 

A few things to know about HANA patches / updates :

 

There are different kinds of patches available for the HANA DB:

 

- SPS: stands for Support Package Stack.

 

Support Package Stacks for HANA provide corrections as well as new features and capabilities.

SPSs are scheduled and released twice a year.

Updates are strictly downward compatible.

 

- SPS revisions:

 

Revisions contain individual updates and software corrections for SAP HANA.

Revisions are cumulative and strictly downward compatible.

 

- Maintenance revisions:

 

Maintenance revisions include only major bug fixes.

These are provided until the next SAP HANA production-system-verified revision is released, around 3 months after the release of the next SPS.

After that, customers must update to the next SPS revision to obtain new bug fixes.

 

- Datacenter Service Points:

 

These are planned 3 months after an SPS release and are versions of SAP HANA that have run in production enterprise applications at SAP before being released to customers. DC Service Points are released in March and September.

 

That said, here is the current version of my test HANA database:

 

1.00.82.00.384070.

 

I'm running HANA Version 1.00 SPS 8 revision 2.

No maintenance revision is applied (version 00).
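This reading of the version string can be sketched as a small parser (plain Python; the decomposition mirrors the interpretation used above, where revision 82 means SPS 8 and the second revision within it — it is not an official format definition):

```python
def parse_hana_version(version):
    """Split a HANA version string like '1.00.82.00.384070' into its parts,
    following the reading used in the text: the tens digit of the revision
    is taken as the SPS number. Illustrative, not an official spec."""
    major, minor, revision, maintenance, build = version.split(".")
    rev = int(revision)
    return {"version": f"{major}.{minor}", "revision": rev,
            "sps": rev // 10, "revision_in_sps": rev % 10,
            "maintenance_revision": int(maintenance), "build": build}

v = parse_hana_version("1.00.82.00.384070")
print(f"SPS {v['sps']} revision {v['revision_in_sps']}")  # → SPS 8 revision 2
```

The same reading makes revision 93 (the target of the update below) SPS 9, revision 3.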

 

To draw a parallel with ABAP Support Packages, I would say that:

 

     Datacenter Service Points are rather like SRs: a verified package that can be applied securely as a whole (although it is a new concept coming in with HANA).

     HANA Support Package Stacks make me think of EHPs in the ABAP world: they bring new features and capabilities to HANA.

     SPS revisions are rather like individual ABAP SPs: they bring updates and software corrections.

     Maintenance revisions make me think of SAP Notes: major bug fixes.


     Of course there are some differences, but the analogy helps me understand it.

 

 

 

Note: From a personal point of view, the maintenance strategy I would follow on a productive landscape is:

Get my HANA DB to the latest DC Service Point available. In between, apply maintenance revisions when needed.

Update to the next DC Service Point when available.

I'd keep an SPS update for when I'm out of solutions or clearly require the next SPS (i.e. no maintenance revision or DC Service Point is available in the needed timeframe).

In the end, this means scheduling an update at least every 6 months, from one DC Service Point to the next, when possible.

 

 

 

Now we will update from 1.00.82.00.384070 to revision 93 (SPS 09):

 

- Download the HANA DB software from the SAP Support Portal (support.sap.com/swdc -> Software by A-Z Index -> H -> HANA Platform)

 

- Follow the update guide : SAP HANA Master Update Guide :

 

1. Stop all processes.

2. Make a system backup if necessary.

3. Perform the update.

4. Update the dependent components.

5. Perform the post-update steps.

6. Restart all processes.

 

First, stop any application using the HDB.

I had only an ECC running on my HDB, so: stopsap (R3) on the ECC server.

Stop the HDB: HDB stop as htladm.

I performed a full system backup, i.e. not only the DB but also the VM hosting the HDB; this was done with Veeam once the ECC and HDB were stopped.

 

 

Perform the update :

 

- uncompress the HDB updates in a directory.

 

- I will update each HANA component individually, as if I were applying "a single SP" to each component of my HANA DB:

 

Starting with the Hana core server :

 

cd /hana/HANA_SPS93/EXTRACTED/IMDB_SERVER100_93_0/SAP_HANA_DATABASE

- Start hdblcmgui (I wanted the GUI) and perform the update:

 

update_1.png

 

update_2.png

 

At first I wanted to perform the update component by component, but it turns out everything can be patched at the same time using the "Add Component Location" button.

So I updated everything, providing the package locations:

 

 

update_3.png  update_4.pngupdate_5.png

All the updates are detected. Then I can continue the update.

 

update_6.png

 

Some OS packages needed to be updated on the system, as shown in the popup below:

 

update_7.png

 

We performed the OS package update and restarted the HANA DB update:

 

update_8.png  update_9.png

 

update_10.pngupdate_11.png

 

The update finishes O.K.


I restart the HDB only and then proceed with the post-update steps:


- Update of the Hana Studio.


You just need to execute the update.

Nothing much to say about it.


My SAP Hana studio is now on version 2.0.11 .


update_12.png


HANA DB Post upgrade steps :


As stated in the Hana update guide :


 

After an upgrade to a new SAP HANA SPS it is recommended to redeploy the calculation views. For more information, see SAP Note 1962472 - Redeployment of calculation views recommended when upgrading to a new SPS.



We perform these steps :


Solution

You can redeploy all views (not only those of type "calculation") of a package. In HANA Modeler, you can select a repository package and choose "Redeploy" via the context menu. This regenerates the runtime objects of all active objects.


With the SAP HANA Studio -> SAP HANA Modeler.


You choose : "Redeploy" and select the packages.

update_16.pngupdate_17.png


update_18.png

Redeployment is then executed.


update_19.png

You can monitor it :

 

update_20.png

In the log, you can check that everything is O.K.:

 

update_21.png


This part is O.K.



- Update the HDB client for SAP software :



For my ECC server, the HANA client is installed by default in the /usr/sap/<SID>/hdbclient folder.

Before updating the client, you can check the version currently in use (screenshot from an ECC6 EHP7 instance):


update_13.png


We stop the system and perform the client installation :

Unpack the client archive :

IMDB_CLIENT100_93_0-10009663.SAR


As the ECC sidadm user, go to the folder and start hdbinst:


update_14.png


Restart the ECC and perform a quick verification :


update_15.png


The client is updated.


The system is O.K.



Once everything is done, you can reopen the system.

SAP HANA Hands on tests ( part 4 ) : HANA replication setup


In this document I'll cover briefly the HANA replication set up.

I currently have a test node running HANA 1.00.93.

An ECC test system is using it.

 

The replication relies, among other things, on the HDB logs. So of course the primary system needs to be in "archive log" mode (log_mode = normal / enable_auto_log_backup = yes ... sorry ... remains of my Oracle past), and prior to the replication setup you need a full DB backup.

The replication setup itself is easy and straightforward, provided you have respected the prerequisites:

 

  • Primary and standby HDB are installed and online.
  • Software versions are the same, or the standby runs a higher software version than the primary (which is an interesting point when it comes to updating with minimal downtime).
  • The SIDs of primary and standby are the same.
  • The network ports between the primary and standby sites are available and usable (check your firewall settings if required).
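The checklist can be sketched as a small validation routine (plain Python, illustrative only; this is not an SAP-provided check, and it covers only the version/SID/online points, not the network ports):

```python
def check_replication_prereqs(primary, standby):
    """primary/standby: dicts with 'sid', 'version' (e.g. '1.00.93'),
    and 'online' (bool). Returns a list of violated prerequisites."""
    def vtuple(v):  # '1.00.93' -> (1, 0, 93) for numeric comparison
        return tuple(int(x) for x in v.split("."))
    problems = []
    if not (primary["online"] and standby["online"]):
        problems.append("both systems must be installed and online")
    if vtuple(standby["version"]) < vtuple(primary["version"]):
        problems.append("standby must run the same or a higher version")
    if primary["sid"] != standby["sid"]:
        problems.append("SIDs must match")
    return problems

print(check_replication_prereqs(
    {"sid": "HTL", "version": "1.00.93", "online": True},
    {"sid": "HTL", "version": "1.00.93", "online": True}))  # → []
```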

 

Then I went into the configuration steps :

 

On the primary node :

 

replication1.png

 

I had this popup because I tagged my test system as a production one.

 

replication2.png

You define your logical system names :

 

replication3.png

 

replication4.png

 

This first part of the configuration is O.K.

 

replication5.png

 

Et Voilà !  The first part of the configuration is done.

 

Second step, on the standby system:

 

You now have to configure the replication on the standby system.

Using the HDB GUI :

 

replication6.png

You need to register this system as a "secondary system":

 

replication7.png

 

Here I set the standby logical name and the desired sync mode. Depending on your SLA needs, set the replication mode accordingly.

 

 

replication9.png

 

Validate this last popup, and the replication setup starts:

 

replication10.png

It can be followed here :

replication11.png

replication12.png

 

Once everything has completed successfully, you can see these statuses:

 

replication13.png

Nice and easy !

 

Next step : takeover and failback !

How to Setup SAP HANA Audit Trace (Quick Start)


There is a great document on HANA auditing, http://scn.sap.com/docs/DOC-51098, which explains all the details of HANA auditing.

 

 

This document shows how easy it is to set up auditing in HANA:

 

 

You need the Authorization AUDIT ADMIN:

 

 

In SAP HANA Administration, go to the Security node:

overview_truncate.jpg

 

Here you have to globally activate the auditing feature. The same can also be done via global.ini (section 'auditing configuration') or with SQL:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('auditing configuration','global_auditing_state') = 'true' WITH RECONFIGURE;

 

Select the log file destination (the default here is the HANA table, CSTABLE); see http://scn.sap.com/docs/DOC-51098 for details.

 

 

Create audit policies here (using the green + button) or use SQL statements to do the same:

 

* Policy to monitor assignments of privileges/roles etc. (CRITICAL)

DROP AUDIT POLICY Z_USER;

CREATE AUDIT POLICY Z_USER AUDITING ALL GRANT PRIVILEGE, REVOKE PRIVILEGE, GRANT ROLE, REVOKE ROLE LEVEL CRITICAL;

ALTER AUDIT POLICY Z_USER ENABLE;

 

* Policy to monitor unsuccessful logins (WARNING)

DROP AUDIT POLICY Z_CONNECT_UNSUCCESSFUL;

CREATE AUDIT POLICY Z_CONNECT_UNSUCCESSFUL AUDITING UNSUCCESSFUL CONNECT LEVEL WARNING;

ALTER AUDIT POLICY Z_CONNECT_UNSUCCESSFUL ENABLE;

 

* Policy to monitor successful logins (INFO)

DROP AUDIT POLICY Z_CONNECT_SUCCESSFUL;

CREATE AUDIT POLICY Z_CONNECT_SUCCESSFUL AUDITING SUCCESSFUL CONNECT LEVEL INFO;

ALTER AUDIT POLICY Z_CONNECT_SUCCESSFUL ENABLE;

 

* Policy to monitor ALL actions of user SYSTEM (INFO)

DROP AUDIT POLICY Z_SYSTEM;

CREATE AUDIT POLICY Z_SYSTEM AUDITING ALL ACTIONS FOR SYSTEM LEVEL INFO;

ALTER AUDIT POLICY Z_SYSTEM ENABLE;

 

* Policy to monitor ALTER USER commands by SYSTEM (CRITICAL)

DROP AUDIT POLICY Z_SYSTEM_ALTER;

CREATE AUDIT POLICY Z_SYSTEM_ALTER AUDITING ALL ALTER USER FOR SYSTEM LEVEL CRITICAL;

ALTER AUDIT POLICY Z_SYSTEM_ALTER ENABLE;

 

* Policy to find unsuccessful logons by the SYSTEM user (CRITICAL)

DROP AUDIT POLICY Z_SYSTEM_UNSUCCESSFUL_LOGON;

CREATE AUDIT POLICY Z_SYSTEM_UNSUCCESSFUL_LOGON AUDITING UNSUCCESSFUL CONNECT FOR SYSTEM LEVEL CRITICAL;

ALTER AUDIT POLICY Z_SYSTEM_UNSUCCESSFUL_LOGON ENABLE;

* Example policy for SELECTs on a specific table or schema

DROP AUDIT POLICY Z_OBJECT_AUDIT;

CREATE AUDIT POLICY Z_OBJECT_AUDIT AUDITING SUCCESSFUL SELECT ON M2MEVAL.* LEVEL INFO;

ALTER AUDIT POLICY Z_OBJECT_AUDIT ENABLE;

 

 

How to Reorganize the Audit Log:


Use the red icon at the top right to select truncation of old records:

button_truncate.jpg

 

 

select_truncate.jpg

 

How can I display the AUDIT_LOG entries?

 

Use SQL or the Data Browser on the public synonym AUDIT_LOG:

  show_audit_trace.jpg

How to Setup Hana Authorization Trace


How to activate an authorization trace in case of authorization problems:

(something similar to transaction ST01 in NetWeaver ABAP)



Go to Hana System Administration, Trace Configuration:

 

 

 

Trace_Configuration.jpg

 

Under User-Specific Trace, select New Configuration (the small 'Create' icon in the upper right corner of User-Specific Trace).

 

The context name is a description for this user-defined trace.

Select indexserver, select 'Show all components', and select authorization.

The trace level can be set to 'INFO' (this is optional; ERROR is the default).

User_defined_trace.jpg

 

Click Finish.

 

 

 

The trace is now active.

 

Now switch to the relevant user and reproduce the error:

error_message.jpg

 

 

go to diagnosis Files and select the tracefile

diagnosis_files.jpg

example

 

[66898]{410859}[124/-1] 2015-08-14 12:32:42.216756 i Authorization    SQLFacade.cpp(01353) : UserId(2637946) is not authorized to do SELECT on ObjectId(2,0,oid=141224)

[66898]{410859}[124/-1] 2015-08-14 12:32:42.217089 i Authorization    SQLFacade.cpp(01750) :

    schemas and objects in schemas :

SCHEMA-141016-_SYS_BI : {} , {SELECT}

TABLE-141224-BIMC_ALL_CUBES : {} , {SELECT}

 

[66898]{410859}[124/-1] 2015-08-14 12:32:42.217415 i Authorization    query_check.cc(03287) : User AUTHTEST tried to execute 'select * from _SYS_BI.BIMC_ALL_CUBES WHERE CUBE_NAME = 'AN_M2MEVAL' AND CATALOG_NAME = 'swisscom.its.m2m''

 

SAP DBTech JDBC: [258]: insufficient privilege: insufficient privilege: Not authorized at ptime/query/checker/query_check.cc:3290
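The relevant pieces of such a trace line (user ID, attempted action, object ID) can be pulled out with a small parser; the regular expression below is fitted to the example output shown above, and other HANA revisions may format the line differently:

```python
import re

# Pattern fitted to the authorization trace line shown in the example above.
PATTERN = re.compile(
    r"UserId\((\d+)\) is not authorized to do (\w+) on ObjectId\(.*oid=(\d+)\)")

def parse_auth_failure(line):
    """Extract user id, action, and object id from an authorization
    failure trace line; returns None when the line does not match."""
    m = PATTERN.search(line)
    if not m:
        return None
    return {"user_id": int(m.group(1)), "action": m.group(2),
            "object_id": int(m.group(3))}

line = ("[66898]{410859}[124/-1] 2015-08-14 12:32:42.216756 i Authorization"
        "    SQLFacade.cpp(01353) : UserId(2637946) is not authorized to do"
        " SELECT on ObjectId(2,0,oid=141224)")
print(parse_auth_failure(line))
```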

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 2


ESXi Host

 

UCS Service Profile

 

The ESXi host provides the platform on which virtual machines run. The service profile contains the hardware configuration in a Cisco UCS environment. Service profile templates or vSphere Auto Deploy can be used to ease the ESXi deployment process. In this example, a standalone service profile creation is shown.

 

For each vSwitch, it is recommended to configure two uplink interfaces with MTU 9000 as trunk. The VLAN assignment takes place in the port group configuration of the vSwitch.

 

Image6.jpg

 

In order to get the best virtualization performance, certain BIOS features should be enabled. The C-states can be controlled by the hypervisor and do not necessarily have to be disabled. How balanced this configuration should be depends on performance needs vs. power-saving aspects.

 

Image17a.jpg

 

vsphere1.jpg

VMware vSphere screenshot

 

Although the use of VM-FEX is optional, it is recommended to enable all Intel Direct IO features.

 

Image18.jpg

 

 

Network

 

SAP HANA has different types of network communication channels to support the different SAP HANA scenarios and setups. It is recommended to consult the SAP HANA TDI - Network Requirements whitepaper.

 

network_tdi2.jpg

Source: SAP AG

 

On the basis of the listed network requirements, every server must be equipped with two 1 or 10 Gigabit Ethernet (10 Gigabit Ethernet is recommended) interfaces for scale-up systems to establish communication with the application servers and users (client zone). If the storage for SAP HANA is external and accessed through the network, two additional 10 Gigabit Ethernet or 8-Gbps Fibre Channel interfaces are required (storage zone). Scale-out systems need a 10 Gigabit Ethernet link for internode communication (internal zone). When using multiple ESXi hosts in a vSphere Cluster with enabled DRS, at least one additional 10 Gigabit Ethernet link is required for vMotion traffic.

 

For the internal zone, storage zone and the vMotion network, it is recommended to configure jumbo frames end-to-end.

 

The network traffic can be consolidated by using the same vSwitch with several load-balanced uplinks.

 

Image11.jpg

 

Storage

 

The storage system must be certified as part of an appliance, independent of the appliance vendor, or must be certified as SAP HANA TDI storage.

 

For performance reasons, it is recommended to physically separate the origin (VMFS LUN or NFS export) of the datastores providing data and log. The following performance classes can be distinguished:

 

Category          Read Performance    Write Performance
OS boot disk      medium              low
/hana/shared      medium              low
/hana/data        very high           high
/hana/log         high                very high
backup            low                 medium

 

It is also recommended to consult the recommendations from the storage hardware partners:

EMC: Tailored Datacenter Integration Content

NetApp Reference Architecture: SAP HANA on VMware vSphere and NetApp FAS Systems

NetApp Configuration Guide: SAP HANA on NetApp FAS Systems

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 4)


Hello All,

 

It's been some time that I have been working with HANA and related areas like SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here which may come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all the HANA experts here, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same results.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems with revision >= 82.


Part 1 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 2 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)



34)Related to HANA

Use Case: We were unable to access/open the Catalog Folder of our HANA instance and the following error message was appearing.

  Capture22.JPG

un1.png

Solution: We raised the issue with SAP internal support.

In their observation, all HDB processes were online and there was no obvious reason for the error. As a quick fix, they restarted the HANA DB and

the error was solved (ours was a demo system anyway).

 

Note: Some related Discussions --> https://scn.sap.com/thread/3729403

 

35) Related to HANA Studio:

Use Case: We had to uninstall the Existing version of HANA Studio.

 

Solution:

Go to Control Panel --> Uninstall the HANA Studio.

Untitled22.png

The Lifecycle Manager will ask you to enter the HANA Studio installation instance (in our case, it was 0).

Untitled33.png

After entering 0, you will get the following screen:

Untitled33.png

After pressing any key, you will get the message that the HANA Studio version has been successfully uninstalled.

 

36) Related to HANA

Use Case: At times, while navigating through the HANA contents, we come across the following message (Contains Hidden Objects).

Capture4444.JPG

Solution:

Go to the Preferences --> HANA --> Modeler --> Content Presentation --> Check 'Show all Objects'.

Capture3333.JPG

Capture55555.JPG

 

Now the hidden objects will be displayed:

Capture2222.JPG

 

37) Related to HANA SQL

Some Useful SQL Commands:

a) Renaming a column of an already existing Table:

RENAME COLUMN "<Schema_Name>"."<Table_Name>"."<Old_Column_Name>" to "<New_Column_Name>"

 

b) Adding a new column to an already existing Table:

ALTER TABLE "<Schema_Name>"."<Table_Name>" ADD (<New_Column_Name> <DataType>)

 

c) Update a Column Entry:

UPDATE "<Schema_Name>"."<Table_Name>" SET "<Column_Name>" = '<New_Entry>' where "<Column_Name>" = '<Old_Entry>'

 

d) IF Function:

If("Status"= 'Completed Successfully','0',if("Status"= 'In progress','1',if("Status"= 'Withdrawn','2','?')))

Capture5555.JPG
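For readers more comfortable with code than nested IF() expressions, the same decision logic can be sketched in Python (illustrative only; in HANA this is a calculated-column expression, and the helper name below is my own):

```python
# Python sketch of the nested IF() logic above: map a status string to its
# code, with '?' as the fallback for any unmatched status.
def status_code(status):
    mapping = {
        "Completed Successfully": "0",
        "In progress": "1",
        "Withdrawn": "2",
    }
    return mapping.get(status, "?")

print(status_code("Completed Successfully"))  # 0
print(status_code("Cancelled"))               # ?
```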

 

38) Related to HANA Studio:

Use Case: We got the following error while previewing an HANA Analytical view:

Message: [6941] Error Executing Physical Plan : AttributeEngine: this operation is not implemented for this attribute.

66.png

Solution: The above error message points towards a field named CMPLID.

On careful observation, it was found that CMPLID has different data types in the connected attribute views and in the fact table of the following analytic view.

Untitled1.JPG

Related SAP Note: 1966734 - Workaround for "AttributeEngine: this operation is not implemented for this attribute type" error

 

39) Related to HANA:

Use Case: How to find the Schema Owners?

Solution: SELECT * FROM "SYS"."SCHEMAS"

Capture2222.JPG

 

40) Related to HANA:

Use Case: How to provide specific authorizations to some limited tables within a schema?

Solution: Object Privileges --> Schema Name --> Right Click --> Add catalog objects --> Provide the specific table names.

Untitled2222.png

 

Hope this document would be handy!

 

41) Related to sFIN

Use Case: The definition of HANA Calculation View BANKSTATEMENTMONITOR is not correct after migration to SAP Simple Finance, on-premise edition 1503.

The expected definition of HANA view after migration is something like the following:

Capture1111.JPG

Unfortunately due to a program error, the view definition after migration will still look something like the following:

Capture111111.JPG

 

Solution: We raised this issue with the support/development team, and they have now released the following new OSS note.

2205205 - Bank Statement Monitor: Definition Correction of HANA Calculation View BANKSTATEMENTMONITOR

 

After following the manual activities mentioned in the note, the issue will be resolved.

 

42) Related to SLT:

Use Case: An SLT configuration was already created without the multiple-usage option. (You want to switch from 1:1 to 1:N in an already existing SLT configuration.)

Now we wanted to create a new connection with the same source and a different target, but the system was not allowing us to do so, as we were getting the message that a configuration with the same source already exists.

 

Solution: SAP Note 1898479 - SLT replication: Redefinition of existing DB triggers.

The solution for this issue is explained in the note; the manual steps (1-9) have to be performed in the SLT system to solve it.

 

Will keep adding more points here...

 

BR

Prabhith-

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 3


Virtual Machine

 

CPU and Memory Sizing

 

This document is not a sizing instrument but provides a guideline of technically possible configurations. For proper sizing, refer to the BW on HANA (BWoH) and Suite on HANA (SoH) sizing documentation. The ratio between CPU and memory can be defined as 6.4 GB per vCPU for Westmere-EX, 8.53 GB per vCPU for Ivy Bridge-EX, and 10.66 GB per vCPU for Haswell-EX. The numbers for Suite on HANA can be doubled.
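To make the ratios concrete, here is a small calculator (my own sketch, not an official SAP sizing tool) that turns a vCPU count into the maximum vRAM suggested by the ratios above:

```python
# Rough vCPU-to-vRAM calculator based on the ratios stated above.
# BWoH: 6.4 GB/vCPU (Westmere-EX), 8.53 (Ivy Bridge-EX), 10.66 (Haswell-EX);
# Suite on HANA doubles these values. Illustrative only.
GB_PER_VCPU_BWOH = {"westmere-ex": 6.4, "ivybridge-ex": 8.53, "haswell-ex": 10.66}

def max_vram_gb(vcpus, cpu_generation, workload="BWoH"):
    ratio = GB_PER_VCPU_BWOH[cpu_generation]
    if workload == "SoH":       # Suite on HANA: ratio can be doubled
        ratio *= 2
    return round(vcpus * ratio, 1)

print(max_vram_gb(18, "haswell-ex"))         # one 18-core Haswell-EX socket, BWoH
print(max_vram_gb(18, "haswell-ex", "SoH"))  # same socket, Suite on HANA
```

This also explains the table below: an 18-vCPU BWoH VM tops out just below 192 GB vRAM, so 256 GB BWoH VMs need 36 vCPUs.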

 

Note that the ESXi host and every virtual machine produce some memory overhead. For example, an ESXi server with 512 GB of physical RAM cannot host a virtual machine with 512 GB of vRAM, because the server needs some static memory for its kernel and some dynamic memory for each virtual machine. A virtual machine with e.g. 500 GB vRAM would most likely fit into a 512 GB ESXi host.

 

This table shows the vCPU to vRAM ratio on a Haswell-EX server. Under "VMs per 3 TB Host", the maximum number of VMs is shown for two scenarios:

  1. BWoH prod
    • The number of virtual machines that can be deployed on one host for BW on HANA in a productive environment, i.e. with no resource overcommitment.
  2. CPU overcommit
    • The number of virtual machines that can be deployed on one host while allowing CPU overcommitment. Memory overcommitment is not recommended at all for HANA virtual machines.

 

vCPU BWoH | vCPU SoH | vRAM   | VMs: BWoH prod | VMs: CPU overcommit
----------+----------+--------+----------------+--------------------
18        | 18       | 64 GB  | 8              | 46
18        | 18       | 128 GB | 8              | 23
36        | 18       | 256 GB | 4              | 11
36        | 18       | 384 GB | 4              | 7
54        | 36       | 512 GB | 2              | 5
72*       | 36       | 768 GB | 1              | 3
108*      | 54       | 1 TB   | 1              | 2
-         | 72*      | 1.5 TB | -              | 1
-         | 108*     | 2 TB   | -              | 1

* 4 TB vRAM and 128 vCPU per VM with vSphere 6

 

More details are available here:

SAP: SAP HANA Guidelines for running virtualized

VMware: Best Practices and Recommendations for Scale-Up Deployments of SAP HANA on VMware vSphere

VMware: Best Practices and Recommendations for Scale-Out Deployments of SAP HANA on VMware vSphere

 

 

Virtual Machine Creation

 

Create virtual machine with hardware version 10 (ESXi 5.5)

Image2.jpg

 

Select SLES 11 64-bit or RHEL 6 64-bit as Guest OS

Image3.jpg

 

Configure cores per socket

Image4.jpg

 

It is recommended to configure the cores per socket according to the actual hardware configuration, which means:

  • 10 cores per socket on HANA certified Westmere-EX processors
  • 15 cores per socket on HANA certified Ivy Bridge-EX processors
  • 18 cores per socket on HANA certified Haswell-EX processors

 

Virtual SCSI controller configuration

Image5.jpg

 

Configure 4 virtual SCSI controllers of the type "VMware Paravirtual".

 

Configure virtual disks

Image6.jpg

 

To fully utilize the virtual resources, a disk distribution is recommended where the disks are connected to different virtual SCSI controllers. This improves parallel IO processing inside the guest OS.

 

The size of the disks can initially be chosen smaller than the known requirements for HANA appliances, because virtual disk size can be increased online.

 

Disk       | Mount point  | Size (old measure) | Size (new combined measure)
-----------+--------------+--------------------+----------------------------
root       | /            | 80 GB              | 80 GB
hanashared | /hana/shared | 1 * vRAM           | 1 * vRAM
hanadata   | /hana/data   | 3 * vRAM           | 1 * vRAM
hanalog    | /hana/log    | 1 * vRAM           | vRAM < 512 GB: 0.5 * vRAM
           |              |                    | vRAM ≥ 512 GB: min. 512 GB
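The "new combined measure" sizing rules can be sketched as a small helper (my own illustration, assuming GB units; always verify against the current SAP HANA storage requirements documentation):

```python
# Sketch of the "new combined measure" disk sizing above (sizes in GB).
def hana_disk_sizes_gb(vram_gb):
    # /hana/log: 0.5 x vRAM below 512 GB, otherwise a minimum of 512 GB
    log = 0.5 * vram_gb if vram_gb < 512 else 512
    return {
        "root": 80,                # fixed 80 GB root disk
        "/hana/shared": vram_gb,   # 1 x vRAM
        "/hana/data": vram_gb,     # 1 x vRAM (down from 3 x vRAM)
        "/hana/log": log,
    }

print(hana_disk_sizes_gb(256))
print(hana_disk_sizes_gb(1024))
```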

 

Configure virtual network adapters

 

vsphere2.jpg

 

Configure one VMXNET3 adapter for each network and connect it to the corresponding port group. Note that some networks, such as the storage and backup networks, are configured only at the ESXi level.

 

Enable Latency Sensitivity settings

Image8.jpg

 

To ensure latency sensitivity, CPU and memory reservations have to be set, too. While the CPU reservation can vary, the memory reservation has to be set to 100%. Check the box "Reserve all guest memory (All locked)".

 

Image10.jpg

 

After creating a VM, the guest OS installation can begin.

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System


SAP HANA TDI on Cisco UCS and VMware vSphere - Part 4


Guest Operating System

 

Installation

 

The installation of the Linux OS has to be done according to the SAP Notes for HANA systems on SLES or RHEL. The network and storage configuration depends heavily on the TDI landscape, therefore no general advice can be given.

 

Disk Distribution

 

For performance reasons, the VMDKs should be distributed across differently tiered storage systems. Example:

Drawing1.jpg

In the end, a realistic storage quality classification as well as a thorough distribution of the disks among datastores and virtual SCSI adapters ensures good disk IO performance for the HANA instance.

 

Configuration

 

With regard to the network configuration, it is not recommended to configure bond devices inside the Linux guest OS. Such a configuration is used in native environments to guarantee availability of the network adapters; in virtual environments, the redundant uplinks of the vSwitch take on that role.

 

In /etc/sysctl.conf some tuning might be necessary for scale-out and in-guest NFS scenarios:

net.ipv4.tcp_slow_start_after_idle = 0

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.rmem_default = 262144

net.core.wmem_default = 262144

net.core.optmem_max = 16777216

net.core.netdev_max_backlog = 300000

net.ipv4.tcp_rmem = 65536 262144 16777216

net.ipv4.tcp_wmem = 65536 262144 16777216

net.ipv4.tcp_no_metrics_save = 1

net.ipv4.tcp_moderate_rcvbuf = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_sack = 1

sunrpc.tcp_slot_table_entries = 128

 

MANDATORY: Apply generic SAP on vSphere optimizations and additional HANA optimizations for SLES and RHEL.

 

Validation

 

To validate the solution, the same hardware configuration check tool as for the appliances is used but with slightly different KPIs. SAP supports performance-related SAP HANA issues only if the installed solution has passed the validation test successfully.

 

Volume | Block Size | Test File Size | Initial Write (MB/s) | Overwrite (MB/s) | Read (MB/s) | Latency (µs)
-------+------------+----------------+----------------------+------------------+-------------+-------------
Log    | 4K         | 5G             | -                    | 30               | -           | 1000
Log    | 16K        | 16G            | -                    | 120              | -           | 1000
Log    | 1M         | 16G            | -                    | 250              | 250         | -
Data   | 4K         | 5G             | -                    | -                | -           | -
Data   | 16K        | 16G            | 40                   | 100              | -           | -
Data   | 64K        | 16G            | 100                  | 150              | 250         | -
Data   | 1M         | 16G            | 150                  | 200              | 300         | -
Data   | 16M        | 16G            | 200                  | 250              | 400         | -
Data   | 64M        | 16G            | 200                  | 250              | 400         | -

Source: SAP AG, Version 1.7
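One way to read the table: throughput KPIs are minimum values and the latency KPI is a maximum. The sketch below (my interpretation for illustration, not the official hardware configuration check tool) checks measured values against one row, the log volume at 16K block size:

```python
# KPI check for the "Log / 16K" row of the table above:
# overwrite throughput must reach at least 120 MB/s, and log write
# latency must stay at or below 1000 microseconds.
LOG_16K_KPI = {"overwrite_mb_s": 120, "latency_us": 1000}

def passes_log_16k(measured_overwrite_mb_s, measured_latency_us):
    return (measured_overwrite_mb_s >= LOG_16K_KPI["overwrite_mb_s"]
            and measured_latency_us <= LOG_16K_KPI["latency_us"])

print(passes_log_16k(150, 800))   # fast enough, low latency
print(passes_log_16k(150, 1500))  # throughput OK, latency too high
```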

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

SAP HANA - High Availability FAQ

SSL and single-node HANA systems


This document is part of a series on the security features available in HANA.

 

While single-node configurations are the simplest, the concepts covered here provide a foundation that will be beneficial when expanding to distributed systems.

legend.gif

 

Setup without a third-party CA (like DigiCert or Verisign)

single_n_01.gif

Acting as your own Certificate Authority (CA) gives you the freedom to sign certificates as you wish. The disadvantage is that you must distribute your root certificate to all of your clients' trust stores, which may or may not be trivial.

 

This configuration requires:

  • Generating a root certificate.
  • Generating a server certificate signing request (CSR).
  • Signing the server CSR with the root private key.
  • Importing the signed server certificate and private key to the server’s key store.
  • Distributing the root certificate to client trust stores.

 

Example certificate Subjects and Issuers

Certificate: S
Subject: C=CA, O=MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: C=CA, O=MyCompany, OU=IT, CN=HANA CA

 

Certificate: R
Subject: C=CA, O=MyCompany, OU=IT, CN=HANA CA
Issuer: C=CA, O=MyCompany, OU=IT, CN=HANA CA

 

S is in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

In order for a successful connection to be made, clients construct a chain of trust with the provided certificate:

  • During the handshake, hananode will serve the certificates contained in its key store: S. The client now has S and any certificates in its trust store (which includes R) to build the chain of trust.
  • The certificate that starts the chain is the one in which the Common Name (CN) matches the FQDN of hananode, certificate S.
  • In order to validate S:
    • The client checks S’s Issuer field and looks for a certificate whose Subject field matches; in this case certificate R.
    • The issuer’s signature in S is then decrypted using the public key in R to ensure S was indeed signed by the private key corresponding to R.
    • S is now validated.
  • The same steps are taken in order to validate R, however, because R is self-signed, it will end up decrypting its signature with its own public key.
  • A completed chain isn't immediately trusted: the chain is trusted only if the root certificate, R, belongs to the client's trust store. R is in the client's trust store, therefore S and R form a valid chain of trust and the connection will be successful.
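The chain walk described above can be modeled with a toy sketch that matches only Subject and Issuer fields; real TLS validation additionally verifies the cryptographic signatures, validity dates, and key usage:

```python
# Toy model of the chain-of-trust walk: certificates are reduced to
# (subject, issuer) pairs. Returns the subject chain from the server
# certificate up to a trusted root, or None if no trusted chain exists.
def build_chain(server_cn, served_certs, trust_store):
    pool = {c["subject"]: c for c in served_certs + trust_store}
    trusted_subjects = {c["subject"] for c in trust_store}
    cert = pool.get(server_cn)   # start at the cert whose CN matches the FQDN
    chain = []
    while cert is not None:
        chain.append(cert["subject"])
        if cert["issuer"] == cert["subject"]:          # self-signed root
            return chain if cert["subject"] in trusted_subjects else None
        cert = pool.get(cert["issuer"])                # climb to the issuer
    return None                                        # chain incomplete

S = {"subject": "hananode.mycompany.corp", "issuer": "HANA CA"}
R = {"subject": "HANA CA", "issuer": "HANA CA"}
print(build_chain("hananode.mycompany.corp", [S], [R]))
# ['hananode.mycompany.corp', 'HANA CA']
```

The same walk covers the intermediate-certificate scenarios later in this document: intermediates simply add hops between S and R.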

 

 

Setup with a third-party CA

single_n_02.gif

The root certificates of common CAs such as VeriSign and DigiCert are usually distributed with SSL-enabled clients; the Java runtime environment provides a trust store with root certificates for Java-based clients, and many web browsers contain trust stores with a default collection of root certificates. The advantage of this setup is not needing to distribute the root certificate. The disadvantage is the cost required to have the server CSR signed by a CA.

This configuration requires:

  • Generating a server CSR.
  • Paying a CA to verify your identity and sign the server CSR.
  • Importing the signed server certificate and server private key to the server’s key store.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O= MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate:R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S is in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

Although the setup and Issuer field differs, the steps for constructing the chain of trust are identical.

 

Notes

If your company/organization already has an internal CA and all the company clients already have your CA’s root certificate(s) installed, you may be able to request a signature from the relevant department, thus eliminating the cost of using a third-party CA and avoiding the root-certificate distribution step.

 

 

Setup with a third-party CA (and intermediate certificates)

single_n_03.gif

CAs will often sign your certificate with an intermediate private key and provide you with one or more intermediate certificates in addition to your server certificate (this is the case with VeriSign, for example). This requires that the signed server certificate and the provided intermediate certificates be imported into the server's key store.

This configuration requires:

  • Generating a server CSR.
  • Paying a CA to verify your identity and sign the server CSR.
  • Importing the signed server certificate, server private key, and supplied intermediate certificates to the server’s key store.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O=MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=VeriSign Class 3 Secure Server CA - G3, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: I
Subject: CN=VeriSign Class 3 Secure Server CA - G3, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S and I are in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

The chain of trust will be one certificate longer in this example. Clients are served certificates S and I by hananode and have their trust store certificates at their disposal to construct the chain of trust.

  • The client starts with the certificate whose CN matches the FQDN of hananode, certificate S.
  • S’s Issuer matches I’s Subject, therefore I’s public key is used to verify the signature in S.
  • I’s Issuer matches R’s Subject, therefore R’s public key is used to verify the signature in I.
  • R’s Issuer matches R’s Subject, therefore R’s public key is used to verify the signature in R. R is the self-signed root certificate and the end of the chain.
  • R is contained in the client’s trust store, therefore this chain, and hananode, is trusted.

 

Notes

You may be provided with multiple intermediate certificates. All intermediate certificates need to be imported into either the server’s key store or the clients’ trust stores.

 

 

Setup with an intermediate company CA

single_n_04.gif

Some organizations become an intermediate CA of their domain by purchasing a signature for a wildcard certificate. This allows the organization to sign certificates for its subdomains, backed by the original CA. For example, if the company MyCompany owns a wildcard certificate for the domain mycompany.corp, it will be able to sign certificates for domains such as hananode.mycompany.corp. If this is the case, you may be able to obtain a signed certificate from your company’s intermediate CA for the node(s) in your HANA landscape. This avoids the trouble of distributing root certificates to clients.

This configuration requires:

  • Generating a server CSR.
  • Requesting that your company’s intermediate CA sign the CSR.
  • Importing the signed server certificate, server private key, and company’s intermediate certificate.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O=MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=MyCompany Intermediate CA, OU=IT, O=MyCompany, C=CA

 

Certificate: I
Subject: CN=MyCompany Intermediate CA, OU=IT, O=MyCompany, C=CA
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S and I are in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

Constructing the chain of trust for this scenario is identical to the previous scenario.

 

Notes

If your company’s intermediate certificate, I in this example, is already distributed to clients, you won’t be required to import the intermediate certificate into the server’s key store.

Additionally, if there are multiple intermediate certificates, all of them need to be imported into either the server’s key store or the clients’ trust stores.

SAP HANA Hands on tests ( part 4.1 ) : HANA replication takeover

$
0
0

Hi,

 

In a previous document/blog, I performed the replication setup of my HANA test DB.

My layout is described by this diagram. In this first attempt, the replication is performed at the HDB level (not at the disk level; I'll probably give that a try later):

 

replication14.png

 

My first HANA system sits in DC1 (hdbtest1); the second is in DC2 (hdbtest2).

What I have done so far:

 

  • Install the HDB node 1
  • Install an ECC EHP7 on top of it
  • Build a stdby node
  • Replicate to HDB node 2

 

In my test setup, I do not have the nice network layer and HA solution that would allow me to switch from one DC to another with minimum downtime.

I will perform this switch manually (mainly by re-configuring the ECC application server to point to hdbtest2 instead of hdbtest1 as the DB server, and by modifying the hdbuserstore "DEFAULT" key).

So, in fact, my ECC server will still work from DC1, while the HDB will be available in DC2.

 

The HANA takeover can be done using the HANA studio or the command line tool "hdbnsutil".

When it comes to a DR scenario, I like to assume that nothing works as it should, so let's pretend HANA studio does not work and use the hdbnsutil command line instead!

 

Generate some load on the HDB

 

I'm running SGEN on the ECC platform in order to generate some load on the system :

 

takeover2.png

takeover3.png

and I can see in some HDB log files that things are being replicated :

 

on hdbtest1 ( PRIM ) :

hdbtest1.png

on hdbtest2 ( STBY ) :

hdbtest2.png

Now let's "crash the system"!

hdbkill.png

 

Ouch... this one was nasty:

As of now, the primary DB is down, and the standby is still in standby mode.

takeover4.png

The ECC server lost the DB :

takeover5.png

The standby HDB lost contact with the primary one:

takeover6.png

 

Takeover

 

 

Let's perform the HANA takeover:

 

    • Connect to the standby (surviving) host.
    • Use the <hdb>adm user.
    • Trigger the takeover:

hdbnsutil -sr_takeover ( Easy )

 

Looking at the logs on the standby host, we can see the takeover progress:

takeover7.pngtakeover8.png

takeover9.png

 

The HDB is back online in a few minutes on node 2 in DC2 :

 

takeover10.png

 

Now I have to stop and reconfigure my ECC server (this is due to my test infrastructure layout).

In this kind of layout, I had to modify the DEFAULT entry in my <sid>adm hdbuserstore:

 

hdbuserstore SET DEFAULT "hdbtest1:30015;hdbtest2:30015" SAPHEC1 <password>

 

Restart my ECC instance. Et Voilà !

The system is back online with the hana DB working in DC2 on node 2.

 

Let's reconnect and keep going with the SGEN :

 

takeover11.png

takeover12.png

 

Everything worked.

Next step : failback !

How to Perform System Replication for SAP HANA

SAP HANA TDI - Overview


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach of deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information please download this overview presentation.


SAP HANA Server Installation and Update Guide


SAP HANA Administration Guide


The SAP HANA Administration Guide describes the main tasks and concepts necessary for the ongoing operation of SAP HANA.

 

Attention: the attached file is for the outdated SAP HANA SPS 8.

Find the current version at http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf

 

The following areas are covered:

  • SAP HANA studio as an SAP HANA administration tool
  • Starting and stopping SAP HANA systems
  • System configuration
  • License management
  • User provisioning
  • Monitoring (for example, performance, memory usage, disk usage, alert situations)
  • Backup and recovery
  • Distributed system management
  • High availability
  • Remote data access using SAP HANA smart data access
  • Other administrative tasks, for example, managing tables, managing persistence encryption, auditing database activity, and so on.

SAP Hana SP10 Installation on Red Hat 6.6


In this documentation I'll explain how to install SAP HANA SPS 10 on a Red Hat 6.6 Linux system in my test environment.

I will show in detail the steps and configuration points required to achieve it.

 

For my setup I'll use my own lab on VMware vSphere 5.1.0 and run HANA revision 101. I'll reuse the existing environment set up in my previous documentation.

 

Order of execution

  • Download Red Hat 6.6 release
  • Install the minimal RHEL
  • Configure RHEL
  • SAP Hana installation

 

Guide used

 

Red Hat Enterprise Linux (RHEL) 6.x Configuration Guide for SAP HANA

SAP HANA Master Guide

 

Note used

 

SAP Note 171356 - SAP Software on Linux: General information

SAP Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System

SAP Note 1496410 - Red Hat Enterprise Linux 6.x: Installation and Upgrade

SAP Note 2136965 - SAP HANA DB: Recommended OS settings for RHEL 6.6

SAP Note 2001528 - Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

 

Link used

 

Red Hat Enterprise Linux for SAP HANA: system updates and supportability

Help SAP Hana

RedHat access documentation

 

Overview Architecture

56.jpg

 

In my previous documentation “SAP Hana TDI setup - VMware 5.1” I explained and set up different scenarios of HANA deployment; I'll use the same procedure to create my VM and a template for later reuse.

 

 

Download Red Hat 6.6 release

 

In order to be able to download the Red Hat DVDs, you must first register.

1.jpg

 

Once done, validate your account and download the necessary media

2.jpg

 

Once downloaded, I store it in my datastore.

3.jpg

 

 

Install the minimal RHEL

 

With my VM and DVD ready, I start installing the base Red Hat system.

4.jpg

 

5.jpg

 

6.jpg

 

7.jpg

 

8.jpg

 

Make sure the automatic connection to the network card is enabled

9.jpg

 

Set the time zone and system clock according to your location

11.jpg

 

12.jpg

 

I choose the first option since my system is just about to be created

13.jpg

 

14.jpg

 

Choose minimal option

15.jpg

 

Installation in progress

16.jpg

 

Installation completed

17.jpg

 

 

Configure RHEL

 

The base installation is now complete; the system must now be made compliant to host SAP HANA. The configuration consists of the following activities:

 

•    Subscribe your system to Red Hat channels

•    Install the base package group and xfs tools

•    Create the /usr/sap/ storage for SAP HANA.

•    Mount the file system for SAP HANA instance

•    Install dependencies package for Hana

•    Install the SAP Java Virtual Machine or IcedTea

•    Disable SELinux in /etc/sysconfig/selinux

•    Install and configure the package tuned-profiles-sap-hana

•    Configure the profile for vmware

•    Set the parameters in /etc/sysctl.conf

•    Set the symbolic links

•    Add the kernel command line argument for crash huge page

•    Omit the application crash and core file handling of the operating system

 

 

 

 

Subscribe your system to Red Hat channels

 

In order to be able to download patches, packages, and so on, the system must be registered with Red Hat by subscription. But to get the SAP HANA specific packages for Red Hat, you need to be registered in the "Partner Center" and join as an existing partner company or apply for partnership.

25.jpg

 

Once the registration is fully completed and approved (it takes a few days), run the subscription manager on your system.

24.jpg

 

Then list all the products available to you; you must have "Red Hat Enterprise Linux for SAP Hana" listed.

26.jpg

27.jpg

 

Attach the subscription to your pool ID (this information is system dependent)

28.jpg

29.jpg

 

Run the subscription release

30.jpg

 

Disable all existing repositories

31.jpg

 

And finally enable only the "Hana" repositories.

32.jpg

 

 

Install base package group and xfs tools

 

33.jpg

34.jpg

 

Create /usr/sap storage for SAP Hana

 

From my ESXi host I added another volume to my Red Hat server in order to create the "/usr/sap/" file system.

I now check with the "lsblk -f" command that my new volume is there.

36.jpg

 

Let's create the physical volume with the "pvcreate /dev/sdb" command.

37.jpg

 

Now create the new volume group with "vgcreate new_volume /dev/sdb" and run the "vgs" command to check.

38.jpg

39.jpg

 

With the VG available with 50 GB of space, I now create my logical volume with only 40 GB of space in order to keep some room on the disk.

40.jpg

 

And finally I create the file system and mount it.

41.jpg

 

Result

42.jpg

 

 

Mount the file system for SAP HANA instance

 

The next phase is to add the NFS mount point from my NAS server to Red Hat for HANA by editing the fstab.

Do not forget to install the NFS packages with "yum groupinstall "Network file system client"".

43.jpg

 

Once done, run "mount -a" and check.

44.jpg

 

 

Install dependencies package for Hana

 

As in a SLES environment, RHEL needs specific package dependencies in order to deploy HANA.

45.jpg

 

Install the SAP Java Virtual Machine or IcedTea

46.jpg

 

Disable SELinux in /etc/sysconfig/selinux

47.jpg

 

Install and configure the package tuned-profiles-sap-hana from the RHEL for SAP HANA channel to minimize latencies

48.jpg

 

For Hana running on VMware

49.jpg

 

Set the parameters in /etc/sysctl.conf

50.jpg

 

Set the symbolic link for compatibility reason

51.jpg

 

Add the kernel command line argument for crash huge page by editing /boot/grub/grub.conf file

52.jpg

 

Omit the application crash and core file handling of the operating system

53.jpg

 

This completes the system preparation for installing SAP HANA; I can proceed to the next section.

 

 

 

SAP Hana installation

 

We are now ready to install SAP HANA, and it's just like running it on SLES, so no surprises.

I'll use hdblcmgui to run the install because a specific string is required, as you can see in the red square below.

57.jpg

58.jpg

 

That's it. Red Hat is very specific regarding packages, so do not miss any step.

 

Williams

SAP HANA Backup/Recovery Overview


SAP HANA holds the bulk of its data in memory for maximum performance, but it still uses persistent storage to provide a fallback in case of failure. After a power failure, the database can be restarted like any disk-based database and returns to its last consistent state. To protect against data loss resulting from hardware failure, backups are required. Native backup/recovery functions, an interface for connecting third-party backup tools, and support for storage snapshots are all available. For recovery, SAP HANA offers many different options, including point-in-time recovery.


HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented through countless operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA studio that allow easy modeling of the required resources for SAP HANA Rules Framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)


  • Simple and intuitive UI control that supports text rules and decision tables

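To make the decision-table idea above concrete, here is a minimal, generic sketch of how a decision table can be evaluated: each row pairs a set of conditions with an output, and the first row whose conditions all hold for the input record wins. Note that this is an illustration of the general concept only; it does not use HRF's actual Rule Expression Language or any SAP API, and all names, fields, and table values are hypothetical.

```python
# Generic decision-table evaluation sketch (NOT HRF syntax or an SAP API).
# A table is a list of (conditions, output) rows; conditions map a field
# name to a predicate over that field's value.

def evaluate_decision_table(rows, record):
    """Return the output of the first row whose conditions all match."""
    for conditions, output in rows:
        if all(predicate(record.get(field)) for field, predicate in conditions.items()):
            return output
    return None  # no rule fired

# Hypothetical example: derive a customer discount from segment and order size.
discount_table = [
    ({"segment": lambda s: s == "gold", "order_total": lambda t: t >= 1000}, 0.15),
    ({"segment": lambda s: s == "gold"}, 0.10),
    ({"order_total": lambda t: t >= 500}, 0.05),
]

print(evaluate_decision_table(discount_table, {"segment": "gold", "order_total": 1200}))   # prints 0.15
print(evaluate_decision_table(discount_table, {"segment": "silver", "order_total": 600}))  # prints 0.05
```

The "first matching row wins" behavior shown here is one common hit policy for decision tables; rule engines typically let business users edit the rows while the evaluation semantics stay fixed.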

Scalability and Performance | As a native SAP HANA solution, HRF leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF, please contact shuki.idan@sap.com and/or noam.gilady@sap.com.


Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning - NEW!!!


SAP Fraud Management


SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)


SAP Operational Process Intelligence


Scientific Publications and Activities of the SAP HANA Database Campus


This is a list of selected publications and activities made by the SAP HANA Database Campus.

 

2015

  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner. Answering "Why Empty?" and "Why So Many?" queries in graph databases. Journal of Computer and System Sciences (2015). DOI: 10.1016/j.jcss.2015.06.007
  • 2nd place in the ACM SIGMOD 2015 programming contest. For more details, click here.
  • The second SAP HANA Student Campus Open House day took place in Walldorf on June 24th, 2015. For more details, click here.
  • Mehul Wagle, Daniel Booss, Ivan Schreter. Scalable NUMA-Aware Memory Allocations with In-Memory Databases. TPCTC 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Marcus Paradies, Elena Vasilyeva, Adrian Mocan, Wolfgang Lehner. Robust Cardinality Estimation for Subgraph Isomorphism Queries on Property Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Max Wildemann, Michael Rudolf, Marcus Paradies. The Time Has Come: Traversal and Reachability in Time-Varying Graphs. Big-O(Q) 2015 (co-located with VLDB 2015), Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Abdelkader Sellami, Anastasia Ailamaki. Scaling Up Concurrent Main-Memory Column-Store Scans: Towards Adaptive NUMA-aware Data and Task Placement. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Norman May, Franz Faerber. Indexing Highly Dynamic Hierarchical Data. VLDB 2015, Kohala Coast, Hawaii, USA, August 31 - September 4, 2015.
  • David Kernert, Norman May, Michael Hladik, Klaus Werner, Wolfgang Lehner. From Static to Agile - Interactive Particle Physics Analysis with the SAP HANA DB. DATA 2015, Colmar, France, July 20-22, 2015.
  • Marcus Paradies, Wolfgang Lehner, Christof Bornhövd. GRAPHITE: An Extensible Graph Traversal Framework for Relational Database Management Systems. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015
  • Elena Vasilyeva, Maik Thiele, Adrian Mocan, Wolfgang Lehner. Relaxation of Subgraph Queries Delivering Empty Results. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015.
  • Florian Wolf, Iraklis Psaroudakis, Norman May, Anastasia Ailamaki, Kai-Uwe Sattler. Extending Database Task Schedulers for Multi-threaded Application Code. SSDBM 2015, San Diego, USA, June 29 - July 1, 2015
  • Ingo Müller, Peter Sanders, Arnaud Lacurie, Wolfgang Lehner, Franz Färber. Cache-Efficient Aggregation: Hashing Is Sorting. SIGMOD 2015, Melbourne, Australia, May 31-June 4, 2015.
  • Daniel Scheibli, Christian Dinse, Alexander Böhm. QE3D: Interactive Visualization and Exploration of Complex, Distributed Query Plans . SIGMOD 2015 (Demonstration), Melbourne, Australia, May 31-June 4, 2015
  • Martin Kaufmann, Peter M. Fischer, Norman May, Chang Ge, Anil K. Goel, Donald Kossmann. Bi-temporal Timeline Index: A Data Structure for Processing Queries on Bi-temporal Data. ICDE 2015, Seoul, Korea, April 2015.
  • Robert Brunel, Jan Finis, Gerald Franz, Norman May, Alfons Kemper, Thomas Neumann, Franz Faerber. Supporting Hierarchical Data in SAP HANA. ICDE 2015, Seoul, Korea, April 2015.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SpMachO - Optimizing Sparse Linear Algebra Expressions with Probabilistic Density Estimation. EDBT 2015, Brussels, Belgium, March 23-27, 2015.
  • Alexander Böhm: Keynote: Novel Optimization Techniques for Modern Database Environments. BTW 2015: 23-24, March 5, 2015, Hamburg
  • Alexander Böhm, Mathias Golombek, Christoph Heinz, Henrik Loeser, Alfred Schlaucher, Thomas Ruf: Panel: Big Data - Evolution oder Revolution in der Datenverarbeitung? BTW 2015: 647-648, March 5, 2015, Hamburg
  • Ismail Oukid, Wolfgang Lehner, Thomas Kissinger, Thomas Willhalm, Peter Bumbulis. Instant Recovery for Main-Memory Databases. CIDR 2015, Asilomar, California, USA. January 4-7, 2015.

 

2014

  • The first SAP HANA Student Campus Open House day took place in Walldorf on June 5th, 2014. For more details, click here.
  • Iraklis Psaroudakis, Florian Wolf, Norman May, Thomas Neumann, Alexander Böhm, Anastasia Ailamaki, Kai-Uwe Sattler. Scaling up Mixed Workloads: a Battle of Data Freshness, Flexibility, and Scheduling. TPCTC 2014, Hangzhou, China, September 1-5, 2014.
  • Michael Rudolf, Hannes Voigt, Christof Bornhövd, Wolfgang Lehner. SynopSys: Foundations for Multidimensional Graph Analytics. BIRTE 2014, Hangzhou, China, September 1, 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Top-k Differential Queries in Graph Databases. In Advances in Databases and Information Systems - 18th East European Conference, ADBIS 2014, Ohrid, Republic of Macedonia, September 7-10, 2014.
  • Kim-Thomas Rehmann, Alexander Böhm, Dong Hun Lee, Jörg Wiemers: Continuous performance testing for SAP HANA. First International Workshop on Reliable Data Services and Systems (RDSS), Co-located with ACM SIGMOD 2014, Snowbird, Utah, USA
  • Guido Moerkotte, David DeHaan, Norman May, Anisoara Nica, Alexander Böhm: Exploiting ordered dictionaries to efficiently construct histograms with q-error guarantees in SAP HANA. SIGMOD Conference 2014, Snowbird, Utah, USA
  • Ismail Oukid, Daniel Booss, Wolfgang Lehner, Peter Bumbulis, Thomas Willhalm. SOFORT: A Hybrid SCM-DRAM Storage Engine For Fast Data Recovery. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Iraklis Psaroudakis, Thomas Kissinger, Danica Porobic, Thomas Ilsche, Erietta Liarou, Pinar Tözün, Anastasia Ailamaki, Wolfgang Lehner. Dynamic Fine-Grained Scheduling for Energy-Efficient Main-Memory Queries. DaMoN 2014, Snowbird, USA, June 22-27, 2014.
  • Marcus Paradies, Michael Rudolf, Christof Bornhövd, Wolfgang Lehner. GRATIN: Accelerating Graph Traversals in Main-Memory Column Stores. GRADES 2014, Snowbird, USA, June 22-27, 2014.
  • David Kernert, Frank Köhler, Wolfgang Lehner. SLACID - Sparse Linear Algebra in a Columnar In-Memory Database System. SSDBM, Aalborg, Denmark, June/July 2014.
  • Ingo Müller, Peter Sanders, Robert Schulze, Wei Zhou. Retrieval and Perfect Hashing using Fingerprinting. SEA 2014, Copenhagen, Denmark, June/July 2014.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Donald Kossmann. Benchmarking Bitemporal Database Systems: Ready for the Future or Stuck in the Past? EDBT 2014, Athens, Greece, March 2014.
  • Ingo Müller, Cornelius Ratsch, Franz Färber. Adaptive String Dictionary Compression in In-Memory Column-Store Database Systems. EDBT 2014, Athens, Greece, March 2014.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: GraphMCS: Discover the Unknown in Large Data Graphs. EDBT/ICDT Workshops: 200-207.

 

2013

  • Sebastian Breß, Felix Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake. Efficient co-processor utilization in database query processing. Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096.
  • Martin Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber, Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization. GRADES 2013, New York, USA, June 22-27, 2013.
  • Elena Vasilyeva, Maik Thiele, Christof Bornhövd, Wolfgang Lehner: Leveraging Flexible Data Management with Graph Databases. GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter Sanders. Efficient Many-Core Query Execution in Main Memory Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Martin Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Martin Kaufmann, Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP), Norman May (SAP). Timeline Index: A Unified Data Structure for Processing Queries on Temporal Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Martin Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann, Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013.
  • Michael Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. The Graph Story of the SAP HANA Database. BTW 2013, pp. 403-420.
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd, C. (2012). Efficient transaction processing in SAP HANA database. SIGMOD Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-U., & Färber, F. (2009). Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen. Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova, O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz eines Data Warehouse. Datenbanksysteme für Business, Technologie und Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.