Channel: SCN : Document List - SAP HANA and In-Memory Computing

[SAP HANA Academy] SDA: SAP HANA Spark Controller Overview [SPS 10]


As part of the series profiling the new Smart Data Access features in SAP HANA SPS10, the SAP HANA Academy's Tahir Hussain (Bob) Babar walks through a series of chalkboards that give a high-level overview of how connecting SAP HANA to Hadoop has progressed over recent SAP HANA SPS releases. Bob also shows how you can now connect to Hadoop using SAP HANA SPS10's new SAP HANA Spark Controller. Check out Bob's video below.

Screen Shot 2015-07-20 at 11.44.36 AM.png

Also check out this SAP HANA Academy playlist to learn much more about SAP HANA Smart Data Access.

 

(1:30 - 5:50) Overview of Connecting Hadoop to SAP HANA in SAP HANA SPS07

 

Since SAP HANA SPS07 we have been able to connect SAP HANA to Hadoop using SAP HANA Smart Data Access. With SAP HANA installed on a Linux server, we can join data between the two systems. Imagine that there is a schema with a number of tables in your SAP HANA system and you want to retrieve a very large amount of data stored in Hadoop in HDFS (the Hadoop Distributed File System). There are a few different ways to access the data, including MapReduce and Spark; these engines process large data sets in parallel. HiveQL is used to access the data in HDFS, and SAP HANA Studio is used as the client for SAP HANA.

 

In SAP HANA SPS07 you were able to connect to a Hadoop system from SAP HANA using SAP HANA Studio. To accomplish this you used PuTTY or SSH to install various files (the unixODBC driver and the Hive driver) on your SAP HANA Linux server. The Hive driver would connect to Hive on the Hadoop server, which would then ultimately go through MapReduce to reach the files on HDFS.

 

Then an end user using SAP HANA Studio could build a remote source, and then a virtual table, on the SAP HANA Linux server. That virtual table would connect through unixODBC and the Hive driver to Hive on the Hadoop system to run the MapReduce job. After this you were able to join data from SAP HANA and the Hadoop system with a single SQL statement.
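As a rough sketch of the SPS07-style flow described above, the statements look roughly like the following. The DSN, schema, table names and credentials are all placeholders, not values from the video:

```sql
-- Remote source over the Hive ODBC DSN configured on the SAP HANA Linux server
CREATE REMOTE SOURCE "HADOOP_HIVE" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE1'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=***';

-- Virtual table pointing at a Hive table (remote object path is illustrative)
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_WEATHER"
  AT "HADOOP_HIVE"."HIVE"."default"."weather";

-- One SQL statement joining SAP HANA data with Hadoop data
SELECT s.station_name, w.reading_date, w.temperature
FROM "MYSCHEMA"."STATIONS" s
JOIN "MYSCHEMA"."VT_WEATHER" w ON w.station_id = s.station_id;
```

The virtual table behaves like a local table in queries, but each access is pushed down through the ODBC driver to Hive.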

 

This worked but was very cumbersome. Moreover, with the SAP HANA Cloud Platform you don't have ready access to the Linux server where all of these SAP HANA proxies reside.

Screen Shot 2015-07-20 at 12.04.02 PM.png

(5:50 – 7:00) Overview of Connecting Hadoop to SAP HANA in SAP HANA SPS08

 

In the next release, SAP HANA SPS08, you were able to use Spark (a faster successor to MapReduce) instead of MapReduce. After a Spark driver was installed on the SAP HANA Linux server, you could connect through Hive and use Spark to access the data in HDFS. Thanks to Spark's technology advancements over MapReduce, the connectivity was much quicker. The connectivity path was also built with a near-identical process in SAP HANA Studio.

Screen Shot 2015-07-20 at 12.07.11 PM.png

(7:00 – 8:15) Overview of Connecting Hadoop to SAP HANA in SAP HANA SPS09

 

In SAP HANA SPS09 there was no need to install the unixODBC driver or the Spark/Hive driver, and no work had to be performed on the SAP HANA Linux server. Instead, a new concept, the MapReduce archive file, is created with Java code in SAP HANA Studio and then deployed to the SAP HANA Linux server. The MapReduce archive file connects to MapReduce in the Hadoop system, which ultimately connects to HDFS.

 

Another concept released in SPS09 was virtual UDFs (user-defined functions). With virtual UDFs a user could connect directly to HDFS and bypass MapReduce. The user would create these objects directly in SAP HANA Studio.
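The shape of an SPS09 virtual UDF is roughly as follows; the function name, return columns, and remote source name are hypothetical, and the adapter-specific CONFIGURATION payload is deliberately left elided, so treat this as a shape rather than a verbatim statement:

```sql
-- A virtual table function reading directly from HDFS via a remote source
CREATE VIRTUAL FUNCTION "MYSCHEMA"."READ_WEATHER"()
  RETURNS TABLE ("STATION_ID" NVARCHAR(20), "TEMPERATURE" DOUBLE)
  CONFIGURATION '...'          -- adapter-specific properties (left elided)
  AT "HADOOP_SOURCE";          -- hypothetical remote source name

-- Consumed like any table function
SELECT * FROM "MYSCHEMA"."READ_WEATHER"();
```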

Screen Shot 2015-07-20 at 12.13.07 PM.png

(8:15 – 10:00) Overview of Connecting Hadoop to SAP HANA in SAP HANA SPS10

 

Now with SAP HANA SPS10 there is no need to deploy anything from SAP HANA Studio apart from creating the remote data source. All the work is performed on the Hadoop cluster: this new piece, the SAP HANA Spark Controller, is installed, configured and assembled directly on the Hadoop cluster. You then use YARN Shuffle and a Spark assembly to connect SAP HANA to HDFS.

 

No work needs to be done on the SAP HANA Linux server, because the remote source created in SAP HANA Studio connects to the SAP HANA Spark Controller configured on the Hadoop cluster. The SAP HANA Spark Controller uses the same method of going through Hive, then Spark, and finally connecting to HDFS.
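Once the Spark Controller is running on the Hadoop cluster, the SAP HANA side reduces to a single remote source statement along these lines. The host, user and password are placeholders, and the port shown is only what Spark Controller installs commonly use, so verify both against your installation:

```sql
-- Remote source talking to the SAP HANA Spark Controller on the Hadoop cluster
CREATE REMOTE SOURCE "SPARK_HADOOP" ADAPTER "sparksql"
  CONFIGURATION 'server=hadoop01.example.com;port=7860'   -- assumed host/port
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=***';
```

Virtual tables are then created against this source exactly as in earlier releases.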

Screen Shot 2015-07-20 at 12.33.23 PM.png

The next six videos in the What’s New with SAP HANA SPS10 playlist will cover how to install and configure the SAP HANA Spark Controller so you can run a single SQL statement based on data in both your SAP HANA and HDFS systems.

 

For over 75 tutorial videos on What's New with SAP HANA SPS10 please check out this SAP HANA Academy playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy


Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)


Hello All,

 

It has been some time now that I have been working in HANA and related areas such as SLT, Lumira, Fiori and so on.

So I thought of sharing some topics here that may come in handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; HANA experts, please excuse me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same results.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems at revision 82 or higher.


Part 1 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 3 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)

 

13) Related to HANA:

Use Case: You already have a HANA system configured in Studio.

Once you log in, you see 'SAP Control REQUEST HAS FAILED' even though the services have all started.

P1.png

 

Solution: In most cases, removing the system from the Studio and adding the same system again is enough.

It should then start without any issues.

 

 

14) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following), and I was asked to load it into a schema table in HANA.

Please note that there is a COUNTER column with the value 1 in each row.

P1.JPG

When we upload, we are getting an error like the following:

 

'INSERT, UPDATE and UPSERT are disallowed on the generated column: Cannot insert into the generated field COUNTER'

P1.JPG

Work around: We tried many options, but nothing worked for us.

So we deleted the 'COUNTER' column from the Excel file and then uploaded the data.

 

Later, using an ALTER statement, we were able to include the 'COUNTER' column as well.

 

P1.JPG

PS: The actual reason for this error is still not clear, but you can find some interesting discussions about it here on SDN.

This should be helpful --> EXPERIENCE WITH IDENTITY FEATURE IN SAP HANA
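The ALTER step can be sketched in SQL. The schema, table name, and identity options below are hypothetical examples, not the exact statement from the project:

```sql
-- Re-add the COUNTER column as an identity column after the data load.
-- Schema and table names are hypothetical.
ALTER TABLE "MYSCHEMA"."SALES"
  ADD ("COUNTER" INTEGER GENERATED BY DEFAULT AS IDENTITY);
```

With GENERATED BY DEFAULT (rather than ALWAYS), explicit inserts into the column remain possible, which is one way to avoid the "disallowed on the generated column" error during loads.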

 

 

15) Related to HANA:

Use Case: My customer sent me an Excel file (which looks like the following) and I was asked to load it into a schema table in HANA.

P1.JPG

We were trying to upload the data to HANA, where the data type of the two fields above, 'DATEA' and 'LDATE', was 'DATE'.

The upload from the flat file threw the following error:

'at.jave.sql.Date.strict_valueOf'

P1.JPG

 

Workaround: We had to change the data type of the fields 'DATEA' and 'LDATE' to 'NVARCHAR', and the data was then uploaded successfully.

This was just a workaround, and I am not sure whether there is a permanent solution for this issue.

P1.JPG
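The NVARCHAR columns staged this way can be converted back to dates at query time. In this sketch the column names come from the article, but the staging table name and the 'YYYYMMDD' format string are assumptions; adjust the format to match your file:

```sql
-- Query the staged table, casting the NVARCHAR dates to DATE on the fly.
-- 'YYYYMMDD' is an assumed format; adjust it to match the source file.
SELECT TO_DATE("DATEA", 'YYYYMMDD') AS "DATEA",
       TO_DATE("LDATE", 'YYYYMMDD') AS "LDATE"
FROM "MYSCHEMA"."WEATHER_STG";  -- hypothetical schema/table names
```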

 

Another workaround for loading date fields from Excel to HANA:

On the HANA side, keep the corresponding column as the DATE data type.

Untitled333.png

 

Go to the CSV file, make the following modification, and save the CSV file.

Capture3333.JPG

Now try loading the table into HANA using the 'Data from Local File' option, and it will succeed.

 

PS: Even after saving the CSV file you might still see the Excel column in the old format, but don't worry; the load will be successful.

 

 

16) Related to HANA / ABAP Development Tools

Use Case: We had to debug a procedure in a Promotion Management System running on a HANA database.

When we clicked on the particular procedure, it showed the message 'Please use the ABAP development tools in Eclipse'. (The SE80 screen is shown below.)

Untitled1.png

 

Solution: We had to configure the ABAP perspective in Eclipse/Studio and were then able to proceed with debugging.

Please see some interesting documents on the related topic here:

ABAP Managed Database Procedures - Introduction

Tutorial: How to Debug an ABAP Managed Database Procedure

 

After configuring the ABAP perspective, we are able to log into the ABAP system through it.

Capture11.png

 

The SE80 screen shown above will look like the following in the ABAP perspective in HANA Studio.

Untitled11.png

 

17) Related to HANA / ABAP Development Tools

Use Case: We had to install the ABAP Development Tools in HANA Studio.

 

Solution: Please follow the steps mentioned by Senthil in the following document.

Step-by-step Guide to setup ABAP on HANA with Eclipse

 

When you follow the document, at one point you will have to select the required add-ons.

Kepler.JPG

 

Once the steps are completed successfully, you will be able to see the following perspectives (the ones selected in the previous screen) in your Studio:

Pers.JPG

 

 

18) Related to HANA Studio/Eclipse Environment

Use Case: While working in HANA Studio, the error 'Failed to create Parts Control' occurred.

Observation: This error is somehow related to the Eclipse environment.

The workaround we used was to close the Studio and launch it again.

Close and run again.png

 

We had observed this error in the following environment:

HANA Studio version is 1.00.82.0

HANA system version is 1.00.85.00.397590

 

Please find an important discussion on this topic here:

Failed to create the part's controls

 

 

19) Related to HANA Studio/Citrix Environment

Use Case: This was observed in an internal Citrix environment and is not expected much in customer projects.

The Studio fails to load and shows the following error message:

Capture1.JPG

Solution: This error is related to a lack of workspace space.

The HANA Studio settings were reset, and a new workspace (with more space) was assigned to the new Studio installation.

 

 

20) Related to HANA

Use Case: I was trying to load a flat file into HANA and was getting scientific notation in some columns.

 

Solution:

Initially I was trying to load an .xlsx file and was getting the scientific notation, for example:

2.JPG

 

Then I changed the .xlsx to .csv (File --> Save as --> .csv), loaded it into HANA again, and the output came out as expected, for example:

 

1.JPG

 

One more thing I observed: if I change .xlsx to .csv by simply renaming the extension and then load it into HANA, I get something like the following:

3.JPG

 

21) Related to HANA Studio/Eclipse Environment

Use Case: We had installed plugins such as ABAP and were working in that perspective.

Due to some action, we were getting the message 'Secure Storage is Locked'.

Secure storage is locked.png

 

Observation: The functional side of secure storage is documented by Rocky in his blog here:

The "not quite" secure storage HANA Studio, Reconnect your Studio to HANA Servers!

 

You can also find a very detailed discussion about this topic here:

"Error when connecting to system" or "Invalid Username or Password" with HANA Studio

 

Solution: We followed the path below, deleted the related contents, and restarted.

Pers.JPG

 

 

 

I hope this document comes in handy.


BR

Prabhith

How To Configure Network Settings for HANA System Replication

How to Perform System Replication for SAP HANA

SAP HANA Backup/Recovery Overview


SAP HANA holds the bulk of its data in memory for maximum performance, but it still uses persistent storage to provide a fallback in case of failure. After a power failure, the database can be restarted like any disk-based database and returns to its last consistent state. To protect against data loss resulting from hardware failure, backups are required. Native backup/recovery functions, an interface for connecting third-party backup tools, and support for storage snapshots are all available. For recovery, SAP HANA offers many different options, including point-in-time recovery.
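As a minimal sketch, a full data backup and a point-in-time recovery can be expressed in SQL as follows. The backup prefix and timestamp are examples, and the RECOVER statement is executed as part of a recovery procedure, not during normal operation:

```sql
-- Full data backup to the default backup location, with an example prefix
BACKUP DATA USING FILE ('MONDAY_FULL');

-- Point-in-time recovery (run while the database is in recovery mode)
RECOVER DATABASE UNTIL TIMESTAMP '2015-07-20 11:00:00';
```

Third-party tools plug into the same operations through the Backint interface instead of the FILE destination.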

View this Document

Unable to open HANA XS Admin Page


We have an MDC (multitenant database containers) HANA system. When I try to open the XS Administration Tool URL (http://hostname:8000/sap/hana/xs/wdisp/admin/), I get the error below.

 

Capture.JPG

 

 

After checking the trace, I see that the Web Dispatcher is actively denying access to the administration page due to its settings:

 

 

[Thr 139637169854208] HttpModExecuteRule: execute RewriteRule [0] "RegIForbiddenUrl ^/sap/hana/xs/wdisp/admin/(.*) -"

[Thr 139637169854208] HttpModExecuteRule: apply pattern "^/sap/hana/xs/wdisp/admin/(.*)" to path: "/sap/hana/xs/wdisp/admin/"

[Thr 139637169854208] HttpModExecuteRule: pattern "^/sap/hana/xs/wdisp/admin/(.*)" matched path: "/sap/hana/xs/wdisp/admin/"

 

 

 

 

I could see that the rule file contains the following:

 

 

Hostname:/usr/sap/<SID>/HDB00/hostname/tmp> more default_mod_rules_admin_ui

#These are the default modification rules used for restricting access to the Admin UI to SingleDb and SystemDb users only!

if %{SID} != ---

RegIForbiddenUrl ^/sap/hana/xs/wdisp/admin/(.*) - [break]

Hostname:/usr/sap/<SID>/HDB00/hostname/tmp>

 

 

These files are not meant to be edited manually. Even if you edit the file and restart the Web Dispatcher, it will be regenerated with its previous contents.

 

 

The file 'default_mod_rules_admin_ui' is auto-generated after the Web Dispatcher is configured on HANA, and it is rewritten every time the Web Dispatcher restarts. The reason this file restricts access to '/sap/hana/xs/wdisp/admin/' is that the Web Dispatcher Administration must not be accessed by tenant DB users, since there is one Web Dispatcher for multiple tenants. As a consequence, access to the Web Dispatcher Administration is only allowed for the SystemDB user. Because the Web Dispatcher has no knowledge of which DB is the SystemDB, access is forbidden by default for all users and has to be enabled manually for the SystemDB.

 

 

1) Therefore, in your configuration you have to set the parameter wdisp/enable_admin_ui_for_sid=<Instance Name> in webdispatcher.ini, then restart the Web Dispatcher.

 

 

     Captures.JPG

 

You can refer to the note 2017899 - HANA Web Dispatcher - Multi DB - Access to Administration UI is restricted to System DB users

 

Example:

 

#wdisp/system_0=SID=$(SAPSYSTEMNAME), EXTSRV=http://localhost:3$(SAPSYSTEM)14,SRCVHOST=hostname.abc.com  #This system represents the System DB

 

#wdisp/system_1=SID=TN1, EXTSRV=http://localhost:3$(SAPSYSTEM)42,SRCVHOST=hostname-TN1    # This system represents a Tenant DB

 

#wdisp/enable_admin_ui_for_sid = SYS  # Only users in the System DB that have the role   sap.hana.xs.wdisp.admin::WebDispatcherAdmin  or sap.hana.xs.wdisp.admin::WebDispatcherMonitor  are authorized to access the Admin UI.

 

Therefore, the correct URL should be:

 

http://hostname.abc.com:8000/sap/hana/xs/wdisp/admin/

 

2) Please ensure that the permissions for the SYSTEM user within HANA Studio are correct. To configure these:

  • Start HANA Studio and connect to the HANA server
  • Navigate to Security, then Users
  • Open the user SYSTEM
  • On the Granted Roles tab, click the green + to add a role
  • Search for the role sap.hana.xs.wdisp.admin::WebDispatcherAdmin or sap.hana.xs.wdisp.admin::WebDispatcherMonitor

Only users in the System DB that have the above roles are authorized to access the Admin UI.
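If you prefer SQL, repository (activated) roles such as these can also be granted from the SQL console via the standard _SYS_REPO procedure; the grantee SYSTEM matches the article, while the exact role to grant depends on whether you need admin or monitor access:

```sql
-- Grant the activated repository role to the SYSTEM user
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
  'sap.hana.xs.wdisp.admin::WebDispatcherAdmin',  -- role name from the article
  'SYSTEM'                                        -- grantee
);
```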

 

You can refer to SAP Note 2107899 - HANA Web Dispatcher - Multi DB - Access to Administration UI is restricted to System DB users.

This behavior was a security fix introduced with Revision 91; as a result, it is unfortunately not described in the original HANA Admin Guide for SPS 09. However, this configuration is only required on SPS 09; in SPS 10 it is done automatically.

 

Also, you will get a '403 access denied' error if you try to access the Web Dispatcher Admin UI via a tenant DB (http://Tenant-DB:8000/sap/hana/xs/wdisp/admin). This is in fact expected behavior, because in a MultiDB system tenant DBs are not supposed to have access to the Web Dispatcher Administration; only the System DB can access the Admin UI.

Introduction: High Availability for SAP HANA


SAP HANA provides a comprehensive Fault and Disaster Recovery solution, and High Availability for Business Continuity. This paper explains the terminology and concepts of High Availability, and provides a comprehensive overview of the different High Availability design options available today for SAP HANA, in support of fault and disaster recovery.

View this Document

SAP Business Suite Powered by SAP HANA High Availability with NEC EXPRESSCLUSTER


Background

Cloud environments are now being used by the majority of companies, an increasing number of which are deploying SAP HANA on their cloud infrastructure services. Companies are using SAP HANA not only for fast analysis of big data but also for their mission-critical systems. This has led to a growing need to improve the availability of SAP HANA running on cloud infrastructure services.

 

Although SAP HANA has a high availability (HA) functionality, it is still necessary to manually switch servers if a failure occurs. This causes an outage in operations from failure detection to completion of server failover, which can potentially lead to lost business opportunities.

 

EXPRESSCLUSTER, NEC’s high availability infrastructure software, automatically detects failures in a system that uses SAP HANA running on Amazon Web Services (AWS) and switches to a standby server (performs failover). NEC wished to verify whether EXPRESSCLUSTER could shorten operational downtime and boost operational efficiency by cooperating with SAP HANA.

This article focuses on AWS deployments; however, EXPRESSCLUSTER can also provide HA for on-premise SAP HANA installations.

 

Overview

NEC created a SAP HANA cluster environment on AWS using EXPRESSCLUSTER.

Various failure types were simulated in this environment, and it was verified that the cluster system could be restored through data synchronization using the EXPRESSCLUSTER automatic failover function and the SAP HANA system replication function, and that operations could continue without pause (that is, the SAP ERP application server automatically reconnected to SAP HANA and operations continued without stopping).

 

The system configuration used in this verification is shown in the figure below.

In this configuration, EXPRESSCLUSTER monitors failures and switches operations and SAP HANA synchronizes data.

 

Capture.JPG

 

 

  • Availability on AWS

AWS has multiple data centers called Availability Zones in locations such as Tokyo and Singapore. Customers can select the Availability Zone that they want to use and freely determine the Availability Zone in which to allocate an EC2 instance. Availability Zones are connected via high-speed dedicated lines. A system can be created across multiple Availability Zones. To realize the high availability required by mission-critical systems, the two instances composing a cluster must be allocated to different Availability Zones.

 

  • Failover on AWS

In a cluster configuration, the connection destination of the cluster must be switchable transparently. The virtual private cloud (VPC) of AWS provides network routing via a route table, and the routing can be manipulated through an application programming interface (API). Connection destinations can be switched by using this API to route a virtual IP address (virtual IP in the figure above) to the elastic network interface (ENI) of the instance.

 

 

  • Data synchronization (system replication)

The system replication function of SAP HANA can still lose data when an actual failure occurs, even in synchronous mode. SAP Note 2063657 - 'HANA System Replication takeover decision guideline' provides criteria for the takeover decision. Before executing a takeover, the operator must check these criteria.

 

 

NEC adopted the full sync option in synchronous mode. The possibility of data loss can be eliminated by using the full sync option together with EXPRESSCLUSTER, and NEC recommends this setting.
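In HANA revisions that support it, the full sync behavior is controlled by an ini-file parameter that can be set from SQL. The section and parameter name below are to the best of our knowledge, so verify them against the administration guide for your revision before applying:

```sql
-- Enable the system replication full sync option (parameter name assumed;
-- check the documentation for your revision before applying)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('system_replication', 'enable_full_sync') = 'true'
  WITH RECONFIGURE;
```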

 

The following figure shows an illustration of the system when Server 1 is running as the primary server and Server 2 is running as the secondary server. The SAP ERP application server is connected to the SAP HANA server by accessing a virtual IP address.

operation01.png

The next figure shows a failure occurring on the primary server. In this case EXPRESSCLUSTER stops SAP HANA on Server 1 and switches SAP HANA on Server 2 from secondary (which until then had been running in sync mode) to primary, allowing SAP HANA operations to continue. In addition, EXPRESSCLUSTER moves the virtual IP address from Server 1 to Server 2. The SAP ERP application server connects to the new primary SAP HANA server by accessing the same virtual IP address.

 

operation02.png


The following illustration shows a failure on the secondary server. Now EXPRESSCLUSTER stops SAP HANA on Server 2 and disables the full sync option on Server 1, allowing SAP HANA operations to continue.


operation03.png

 

 

Supported scenarios and requirements

Only the scenarios and parameters indicated below are supported for a successful cooperation between SAP HANA and EXPRESSCLUSTER. For general system replication requirements please follow SAP guidelines.

 

1. Two-node cluster consisting of two scale-up (single-node) systems
2. Both nodes must belong to the same network segment.
3. Each node must run as a single instance; no quality assurance or development system may be running.

3. Both nodes must run as a single instance. No quality assurance or development system is running.

4. SAP HANA SPS09 (revision 90) or later

5. The automatic startup attribute of SAP HANA must be set to “off.” (SAP HANA startup is managed by EXPRESSCLUSTER.)

 

NEC tested the availability of the SAP HANA cluster configuration running on AWS using EXPRESSCLUSTER when the following failures occurred:

 

Failure type      Server     Component          Failure        Desired action
Hardware failure  Primary    Server             Server down    Failover (to a standby server)
                             Network            Network down   Failover (to a standby server)
                  Secondary  Server             Server down    No failover (disable full sync option)
                             Network            Network down   No failover (disable full sync option)
Software failure  Primary    OS                 OS hung        Failover (to a standby server)
                             SAP HANA DB        Service down   Failover (to a standby server)
                             SAP HANA DB        Process down   Failover (to a standby server)
                  Secondary  OS                 OS hung        No failover (disable full sync option)
                             SAP HANA DB        Service down   No failover (disable full sync option)
                             SAP HANA DB        Process down   No failover (disable full sync option)
Cloud failure     Primary    Availability Zone  Zone down      Failover (to a standby server)
                  Secondary  Availability Zone  Zone down      No failover (disable full sync option)

 

 

The following operations have been checked and verified when the above mentioned failures occurred:

  • EXPRESSCLUSTER detected the failure and failed over SAP HANA.
  • The connection from SAP ERP remained available, and operations could continue. (Data could be updated and referenced.)

 


Results

With SAP HANA's stock system replication settings, servers must be switched manually when a failure occurs. In a configuration with EXPRESSCLUSTER, EXPRESSCLUSTER automatically executes everything from failure detection to failover when a failure occurs.

NEC has also verified that the potential for data loss can be eliminated by using the full sync option, and that operations can continue without stopping because EXPRESSCLUSTER automatically disables the full sync option when a failure occurs on the secondary server.

 

 

References

Whitepaper on SAP Business Suite Powered by SAP HANA HA with EXPRESSCLUSTER

http://www.nec.com/en/global/prod/expresscluster/en/collaterals/pdf/SAPHANA_WP_EN_01.pdf

EXPRESSCLUSTER

http://www.nec.com/en/global/prod/expresscluster/

EXPRESSCLUSTER 3.3 new features

http://www.nec.com/en/global/prod/expresscluster/en/collaterals/pdf/EXPRESSCLUSTER_x33_newfeatures_nec_en.pdf

SAP HANA Server Installation and Update Guide

http://help.sap.com/hana/SAP_HANA_Server_Installation_Guide_en.pdf

SAP HANA Administration Guide

http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf

SAP Note 1656099 - SAP Applications on AWS: Supported DB/OS and AWS EC2 products

http://service.sap.com/sap/support/notes/1656099

SAP Note 1964437 - SAP HANA on AWS: Supported AWS EC2 products

http://service.sap.com/sap/support/notes/1964437

SAP Note 2063657 - HANA System Replication takeover decision guideline

http://service.sap.com/sap/support/notes/2063657


SAP HANA TDI on Cisco UCS and VMware vSphere - Part 1


Introduction

 

Support for non-productive SAP HANA systems on VMware vSphere 5.1 was announced in November 2012. Since April 2014, productive SAP HANA systems can also be virtualized by customers on VMware vSphere 5.5. Currently, some restrictions apply that prevent SAP HANA from being treated like any other SAP application running on VMware vSphere. Because the conditions for virtualized HANA will be harmonized in the future to fit a homogeneous SAP technology platform, it is recommended to continuously check for updated SAP documentation and always refer to the latest version only.

 

The audience of this series should have a basic understanding of the following components:

 

Additionally, a deeper understanding of SAP HANA TDI is mandatory:

SAP: SAP HANA Tailored Datacenter Integration

Cisco: SAP HANA TDI on Cisco UCS

 

The following combination has been tested and verified as working. Using lower versions is strongly advised against, while newer versions are expected to work as long as the combination is reflected in the Cisco and VMware compatibility guides.

  • Cisco UCS Manager 2.1
  • VMware ESXi 5.5
  • VMware vCenter Server 5.5
  • SUSE Linux Enterprise Server 11 SP3
  • Red Hat Enterprise Linux Server 6.5
  • SAP HANA 1.0 SPS07

 

Although one of the goals of this series is to consolidate information about the virtualized HANA deployment process, there is still a lot of necessary referencing to other documentation. Please consult the following during the planning and installation phase:

 

Virtualization References

Document ID / URL

Description

SAP Note 1492000

General Support Statement for Virtual Environments

SAP Note 2161991

VMware vSphere configuration guidelines

SAP Note 1606643

VMware vSphere host monitoring interface

SAP Note 1788665

SAP HANA Support for virtualized / partitioned (multi-tenant) environments

SAP Note 1995460

SAP HANA on VMware vSphere in production

SAP Note 2024433

Multiple SAP HANA VMs on VMware vSphere in production (controlled availability)

SAP Note 2157587

SAP Business Warehouse, powered by SAP HANA on VMware vSphere in scale-out and production (controlled availability)

HANA Virtualization Guidelines from SAP

SAP HANA Virtualization Guidelines with VMware vSphere

HANA Virtualization Guidelines from VMware

SAP HANA Virtualization Guidelines with VMware vSphere

 

Linux References

Document ID / URL

Description

SAP Note 171356

SAP software on Linux: General information

SAP Note 1944799

SAP HANA Guidelines for SLES Operating System Installation

SAP Note 2009879

SAP HANA Guidelines for RHEL Operating System Installation

SAP Note 1954788

SAP HANA DB: Recommended OS settings for SLES 11 SP3

SAP Note 2136965

SAP HANA DB: Recommended OS settings for RHEL 6.6

SAP Note 2001528

SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 2


ESXi Host

 

UCS Service Profile

 

The ESXi host provides the platform on which virtual machines run. In a Cisco UCS environment, the service profile contains the hardware configuration. Service profile templates or vSphere Auto Deploy can be used to ease the ESXi deployment process. This example shows the creation of a standalone service profile.

 

For each vSwitch, it is recommended to configure two uplink interfaces with MTU 9000 as a trunk. The VLAN assignment takes place in the port group configuration of the vSwitch.

 

Image6.jpg

 

In order to get the best virtualization performance, certain BIOS features should be enabled. The C-states can be controlled by the hypervisor and do not necessarily have to be disabled; how balanced this configuration should be depends on performance needs versus power-saving considerations.

 

Image17a.jpg

 

vsphere1.jpg

VMware vSphere screenshot

 

Although the use of VM-FEX is optional, it is recommended to enable all Intel Direct IO features.

 

Image18.jpg

 

 

Network

 

SAP HANA has different types of network communication channels to support the different SAP HANA scenarios and setups. It is recommended to consult the SAP HANA TDI - Network Requirements whitepaper.

 

network_tdi2.jpg

Source: SAP AG

 

On the basis of the listed network requirements, every server must be equipped with two 1 or 10 Gigabit Ethernet (10 Gigabit Ethernet is recommended) interfaces for scale-up systems to establish communication with the application servers and users (client zone). If the storage for SAP HANA is external and accessed through the network, two additional 10 Gigabit Ethernet or 8-Gbps Fibre Channel interfaces are required (storage zone). Scale-out systems need a 10 Gigabit Ethernet link for internode communication (internal zone). When using multiple ESXi hosts in a vSphere Cluster with enabled DRS, at least one additional 10 Gigabit Ethernet link is required for vMotion traffic.

 

For the internal zone, storage zone and the vMotion network, it is recommended to configure jumbo frames end-to-end.

 

The network traffic can be consolidated by using the same vSwitch with several load-balanced uplinks.

 

Image11.jpg

 

Storage

 

The storage system must be certified as part of an appliance, independent of the appliance vendor, or must be certified as SAP HANA TDI storage.

 

It is recommended to physically separate the origin (VMFS LUN or NFS export) of the datastores providing data and log, for performance reasons. The following performance classes can be distinguished:

 

Category       Read Performance   Write Performance
OS boot disk   medium             low
/hana/shared   medium             low
/hana/data     very high          high
/hana/log      high               very high
backup         low                medium

 

It is also recommended to consult the recommendations from the storage hardware partners:

EMC: Tailored Datacenter Integration Content

NetApp Reference Architecture: SAP HANA on VMware vSphere and NetApp FAS Systems

NetApp Configuration Guide: SAP HANA on NetApp FAS Systems

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

[SAP HANA Academy] Live4 ERP Agility: SDI Hadoop Overview


In the next part of the SAP HANA Academy's Live4 ERP Agile Solutions in SAP HANA Cloud Platform course, Tahir Hussain Babar (Bob) provides an overview of the source Hadoop system used in the course. The Hadoop system is where the EPA (Environmental Protection Agency) weather data has been loaded. It will be a source data set for a SAP Fiori application in the SAP HANA Cloud Platform. The data doesn't need to be replicated in HCP, and thus the data is accessed using virtual tables and remote sources created with SAP Smart Data Integration. Watch Bob's video below.

Screen Shot 2015-08-21 at 11.57.54 AM.png

(0:30 – 2:30) Background on Data in Hadoop

 

In the Live4 course a connection will be made from HCP to a Hadoop data lake that contains over 70 years of weather data from the EPA. There is no need to copy all of that weather data into HCP, so a virtual table will be created over the data lake so that only the data that is needed is retrieved.

 

Many options exist for data connectivity. One way, demonstrated earlier in the course, is to use HCI-DS to copy the data to the remote system. Another method is to use SAP Smart Data Integration to replicate the data. With SDI, once the data is changed in the source system it is immediately reflected in HCP. Yet another method is using virtual tables. With virtual tables no replication is necessary, as you have a virtual view of your data source.

 

(2:30 - 7:00) Examination of the Databases in the Hadoop system

 

On a Windows machine Bob is running Hortonworks Data Platform 1.2. If you don't have your own Hadoop system, please check out this series of four videos on the SAP HANA Academy on how to install, configure and use Hortonworks Hadoop.

 

This data set (download here on GitHub) contains weather data from the EPA. Every day the EPA surveys over 300 weather stations, and it has recorded various measurements for over 70 years. Instead of storing all of that data on the hot storage system of SAP HANA, we will store it on Hadoop as a cold storage system that we can then access.

 

Bob launches a command prompt window and connects to his HIVE 0.12.0 system and then executes hive from his bin directory. If you're following along with the course please make sure to use HIVE 0.12.0 throughout. Bob has already loaded the EPA databases into his Hadoop system so by entering show databases; as a command he can view his three databases (default, epa and live2). Then entering show tables in epa; shows the five tables from the EPA Bob has exposed in this database (aqi_datalake, humidity_datalake, pressure_datalake, temp_datalake and wind_datalake).

Screen Shot 2015-08-21 at 11.59.51 AM.png

Next Bob enters select * from epa.aqi_datalake; to see all of the 70 years' worth of air quality index data stored in a table format that has been translated using the Hive query language. The Hive query language enables BI tools and SAP HANA to understand the format.

Screen Shot 2015-08-21 at 12.01.30 PM.png

In the next video Bob will show you the user authorizations and agents you need in order to connect your Hadoop system to HCP using Smart Data Integration.


For further tutorial videos about the ERP Agility with HCP course please view this playlist.


SAP HANA Academy - Over 1,200 free tutorial videos on SAP HANA, Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn.

Tips, Experience and Lessons Learned from multiple HANA projects (TELL @ HANA - PART 4)


Hello All,

 

It's been some time that I have been working in HANA and related areas like SLT, Lumira, Fiori and so on.

So thought of sharing some topics here, which would be handy.

 

Disclaimer :

1) This series is exclusively for beginners in HANA; all you HANA experts here, please excuse me

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to derive the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems whose revision is >= 82.


Part 1 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 2 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)



34)Related to HANA

Use Case: We were unable to access/open the Catalog Folder of our HANA instance and the following error message was appearing.

  Capture22.JPG

un1.png

Solution: We had raised the issue with SAP Internal Support.

As per their observation, all HDB processes were online and there wasn't any obvious reason for the error. As a quick fix, they restarted the HANA DB and the error was solved. (Ours was a demo system anyway.)

 

Note: Some related Discussions --> https://scn.sap.com/thread/3729403

 

35) Related to HANA Studio:

Use Case: We had to uninstall the Existing version of HANA Studio.

 

Solution:

Go to Control Panel --> Uninstall the HANA Studio.

Untitled22.png

The Lifecycle Manager will ask you to enter the HANA Studio installation instance (in our case, it was 0).

Untitled33.png

After entering 0, you will get the following screen:

Untitled33.png

By pressing any key, you will get the message that the HANA studio version is successfully uninstalled.

 

36) Related to HANA

Use Case: At times, while navigating through the HANA content, we come across the following message (Contains Hidden Objects).

Capture4444.JPG

Solution:

Go to the Preferences --> HANA --> Modeler --> Content Presentation --> Check 'Show all Objects'.

Capture3333.JPG

Capture55555.JPG

 

Now the hidden objects will be displayed:

Capture2222.JPG

 

37) Related to HANA SQL

Some Useful SQL Commands:

a) Renaming a column of an already existing Table:

RENAME COLUMN "<Schema_Name>"."<Table_Name>"."<Old_Column_Name>" to "<New_Column_Name>"

 

b) Adding a new column to an already existing Table:

ALTER TABLE "<Schema_Name>"."<Table_Name>" ADD (<New_Column_Name> <DataType>)

 

c) Update a Column Entry:

UPDATE "<Schema_Name>"."<Table_Name>" SET "<Column_Name>" = '<New_Entry>' WHERE "<Column_Name>" = '<Old_Entry>'

 

d) IF Function:

If("Status"= 'Completed Successfully','0',if("Status"= 'In progress','1',if("Status"= 'Withdrawn','2','?')))

Capture5555.JPG
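As a hedged illustration of templates (a)-(c) above — the schema DEMO, table EMPLOYEES, and all column names and values below are hypothetical, chosen only to show the syntax:

```sql
-- Hypothetical objects for illustration only

-- (a) Rename a column of an existing table
RENAME COLUMN "DEMO"."EMPLOYEES"."EMP_NAME" TO "FULL_NAME";

-- (b) Add a new column to an existing table
ALTER TABLE "DEMO"."EMPLOYEES" ADD ("HIRE_DATE" DATE);

-- (c) Update a column entry
UPDATE "DEMO"."EMPLOYEES" SET "STATUS" = 'Inactive' WHERE "STATUS" = 'Left';
```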

 

38) Related to HANA Studio:

Use Case: We got the following error while previewing a HANA analytic view:

Message: [6941] Error Executing Physical Plan : AttributeEngine: this operation is not implemented for this attribute.

66.png

Solution: The above error message points toward a field named CMPLID.

On careful observation, it was found that CMPLID has different data types in the connected attribute views and the fact table in the following analytic view.

Untitled1.JPG

Related SAP Note: 1966734 - Workaround for "AttributeEngine: this operation is not implemented for this attribute type" error

 

39) Related to HANA:

Use Case: How to find the Schema Owners?

Solution: SELECT * FROM "SYS"."SCHEMAS"

Capture2222.JPG
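If you only need the name and owner of each schema, a narrower variant of the query above is the following sketch:

```sql
-- Restrict the output to schema name and owner
SELECT SCHEMA_NAME, SCHEMA_OWNER
FROM "SYS"."SCHEMAS"
ORDER BY SCHEMA_NAME;
```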

 

40) Related to HANA:

Use Case: How to provide specific authorizations to some limited tables within a schema?

Solution: Object Privileges --> Schema Name --> Right Click --> Add catalog objects --> Provide the specific table names.

Untitled2222.png

 

Hope this document would be handy!

 

41) Related to sFIN

Use Case: The definition of HANA Calculation View BANKSTATEMENTMONITOR is not correct after migration to SAP Simple Finance, on-premise edition 1503.

The expected definition of HANA view after migration is something like the following:

Capture1111.JPG

Unfortunately due to a program error, the view definition after migration will still look something like the following:

Capture111111.JPG

 

Solution: We had raised this issue with the support/development team and they have now released the following new OSS note.

2205205 - Bank Statement Monitor: Definition Correction of HANA Calculation View BANKSTATEMENTMONITOR

 

After following the manual activities mentioned in the note, the issue will be resolved.

 

42) Related to SLT:

An SLT configuration was already created without the multiple-usage option. (You want to switch from 1:1 to 1:N in an already existing SLT configuration.)

Now we wanted to create a new connection with the same source and a different target, but the system was not allowing us to do so, as we were getting the message that a configuration with the same source is already available.

 

Solution: NOTE 1898479 - SLT replication: Redefinition of existing DB triggers.

The solution for this issue is explained in the note, and the manual steps (1-9) have to be performed in the SLT system to solve it.

 

Will keep adding more points here...

 

BR

Prabhith-

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 3


Virtual Machine

 

CPU and Memory Sizing

 

This document is not a sizing instrument but delivers a guideline of technically possible configurations. For proper sizing, refer to the BW on HANA (BWoH) and Suite on HANA (SoH) sizing documentation. The ratio between CPU and memory can be defined as 6.4 GB per vCPU for Westmere-EX, 8.53 GB per vCPU for Ivy Bridge-EX, and 10.66 GB per vCPU for Haswell-EX. The numbers for Suite on HANA can be doubled.

 

Note that the ESXi host and every virtual machine produce some memory overhead. For example, an ESXi server with 512 GB physical RAM cannot host a virtual machine with 512 GB vRAM, because the server needs some static memory space for its kernel and some dynamic memory space for each virtual machine. A virtual machine with e.g. 500 GB vRAM would most likely fit into a 512 GB ESXi host.

 

This table shows the vCPU to vRAM ratio on a Haswell-EX server. Under "VMs per 3 TB Host", the maximum number of VMs is shown considering two scenarios:

  1. BWoH prod
    • This is the number of virtual machines that can be deployed on one host for BW on HANA in a productive environment, which means there is no resource overcommitment.
  2. CPU overcommit
    • This is the number of virtual machines that can be deployed on one host while allowing CPU overcommitment. Memory overcommitment is not recommended at all for HANA virtual machines.

 

vCPU            vRAM      VMs per 3 TB Host
BWoH    SoH               BWoH prod   CPU overcommit
18      18      64 GB     8           46
18      18      128 GB    8           23
36      18      256 GB    4           11
36      18      384 GB    4           7
54      36      512 GB    2           5
72*     36      768 GB    1           3
108*    54      1 TB      1           2
-       72*     1.5 TB    -           1
-       108*    2 TB      -           1
* 4 TB vRAM and 128 vCPU per VM with vSphere 6

 

More details are available here:

SAP: SAP HANA Guidelines for running virtualized

VMware: Best Practices and Recommendations for Scale-Up Deployments of SAP HANA on VMware vSphere

VMware: Best Practices and Recommendations for Scale-Out Deployments of SAP HANA on VMware vSphere

 

 

Virtual Machine Creation

 

Create virtual machine with hardware version 10 (ESXi 5.5)

Image2.jpg

 

Select SLES 11 64-bit or RHEL 6 64-bit as Guest OS

Image3.jpg

 

Configure cores per socket

Image4.jpg

 

It is recommended to configure the cores per socket according to the actual hardware configuration, which means:

  • 10 cores per socket on HANA certified Westmere-EX processors
  • 15 cores per socket on HANA certified Ivy Bridge-EX processors
  • 18 cores per socket on HANA certified Haswell-EX processors

 

Virtual SCSI controller configuration

Image5.jpg

 

Configure 4 virtual SCSI controllers of the type "VMware Paravirtual".

 

Configure virtual disks

Image6.jpg

 

To fully utilize the virtual resources, a disk distribution is recommended where the disks are connected to different virtual SCSI controllers. This improves parallel IO processing inside the guest OS.

 

The size of the disks can initially be chosen lower than the known requirements for HANA appliances. This is based on the capability of increasing the virtual disk size online.

 

Disk         Mount point     Size (old measure)   Size (new combined measure)
root         /               80 GB                80 GB
hanashared   /hana/shared    1 * vRAM             1 * vRAM
hanadata     /hana/data      3 * vRAM             1 * vRAM
hanalog      /hana/log       1 * vRAM             vRAM < 512 GB: 0.5 * vRAM
                                                  vRAM ≥ 512 GB: min. 512 GB

 

Configure virtual network adapters

 

vsphere2.jpg

 

Configure one VMXNET3 adapter for each network and connect it to the corresponding port group. Note that some networks are configured only on the ESXi level, such as the storage and backup networks.

 

Enable Latency Sensitivity settings

Image8.jpg

 

To make the latency-sensitivity setting effective, CPU and memory reservations have to be set, too. While the CPU reservation can vary, the memory reservation has to be set to 100 %. Check the box "Reserve all guest memory (All locked)".

 

Image10.jpg

 

After creating a VM, the guest OS installation can begin.

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 4


Guest Operating System

 

Installation

 

The installation of the Linux OS has to be done according to the SAP Notes for HANA systems on SLES or RHEL. The network and storage configuration parts depend heavily on the TDI landscape, therefore no general advice can be given.

 

Disk Distribution

 

The VMDKs should be distributed on differently tiered storage systems for performance reasons. Example:

Drawing1.jpg

In the end, a realistic storage quality classification as well as a thorough distribution of the disks among datastores and virtual SCSI adapters ensures good disk IO performance for the HANA instance.

 

Configuration

 

With regard to the network configuration, it is not recommended to configure bond devices inside the Linux guest OS. Such a configuration is used in native environments to guarantee availability of the network adapters. In virtual environments, the redundant uplinks of the vSwitch take on that role.

 

In /etc/sysctl.conf some tuning might be necessary for scale-out and in-guest NFS scenarios:

net.ipv4.tcp_slow_start_after_idle = 0

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.rmem_default = 262144

net.core.wmem_default = 262144

net.core.optmem_max = 16777216

net.core.netdev_max_backlog = 300000

net.ipv4.tcp_rmem = 65536 262144 16777216

net.ipv4.tcp_wmem = 65536 262144 16777216

net.ipv4.tcp_no_metrics_save = 1

net.ipv4.tcp_moderate_rcvbuf = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_sack = 1

sunrpc.tcp_slot_table_entries = 128

 

MANDATORY: Apply generic SAP on vSphere optimizations and additional HANA optimizations for SLES and RHEL.

 

Validation

 

To validate the solution, the same hardware configuration check tool as for the appliances is used but with slightly different KPIs. SAP supports performance-related SAP HANA issues only if the installed solution has passed the validation test successfully.

 

Volume   Block Size   Test File Size   Initial Write (MB/s)   Overwrite (MB/s)   Read (MB/s)   Latency (µs)
Log      4K           5G               -                      30                 -             1000
Log      16K          16G              -                      120                -             1000
Log      1M           16G              -                      250                250           -
Data     4K           5G               -                      -                  -             -
Data     16K          16G              40                     100                -             -
Data     64K          16G              100                    150                250           -
Data     1M           16G              150                    200                300           -
Data     16M          16G              200                    250                400           -
Data     64M          16G              200                    250                400           -

Source: SAP AG, Version 1.7

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

FAQ: High Availability for SAP HANA


SAP HANA - High Availability FAQ

DBA Cockpit for SAP HANA: Support for Multitenant Database Containers with SAP HANA SPS09 and Higher


You can use DBA Cockpit for SAP HANA to monitor and administer local or remote SAP HANA databases in SAP systems in an ABAP environment.

 

This post provides an overview of the features in DBA Cockpit for SAP HANA that support SAP HANA multitenant database containers.

 

Contents

 

 

Prerequisites

You can connect to an SAP HANA multitenant database container using any version of DBA Cockpit for SAP HANA, even very old versions.

 

To use the newest features described in this document, you need to be working with:

  • At least SAP HANA SPS09

AND at least the following SAP_BASIS Support Packages:

  • 7.40 SP10
  • 7.31 SP15
  • 7.30 SP13
  • 7.02 SP17

 

 

Connect to an SAP HANA Database in DBA Cockpit

To monitor a remote database in DBA Cockpit, you need to create a connection to that database.


  1. From DBA Cockpit, choose DB Connections -> SAP HANA database -> Add.
    dba-cockpit-add-connection.png
  2. Specify the information required.
  3. The SQL port to be used for the connection is entered in the form:
    3xxyy
    xx is the SAP HANA instance number.
    yy is a two-digit number as follows:
    • 3xx15 is for a SAP HANA single-container system
    • 3xx13 is for the system database in a SAP HANA multitenant database container
      The SQL port numbers for tenant databases are normally assigned in increments of 3, starting from 3xx41:
      3xx41, 3xx44, 3xx47, etc.

 

To find out the instance number:

  • From SAP HANA studio, open the Administration view.
    The instance number is displayed in the title after the host name.
  • If the monitored SAP HANA database is an SAP System, you can log on to the SAP System to see the instance number.
    Choose System -> Status.
    The instance number is displayed in the field Name in the section Database Data.

 

To find out the SQL port, you can execute the following SQL query:

SELECT DISTINCT(sql_port) FROM SYS.M_SERVICES WHERE SQL_PORT > 0

To open the SQL Console, from SAP HANA studio, right-click over the database and choose Open SQL Console from the context menu.
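To also see which service listens on each port, a slight variation of the query above is possible (a sketch assuming the standard SERVICE_NAME column of M_SERVICES):

```sql
-- List each service together with its SQL port
SELECT SERVICE_NAME, SQL_PORT
FROM SYS.M_SERVICES
WHERE SQL_PORT > 0;
```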

 

 

 

 

What Information Does DBA Cockpit Show for Multitenant Database Containers?

 

Section: Overview

Choose Current Status -> Overview.

 

An SAP HANA multitenant database container is indicated by “Multitenant database container = Yes”.

To determine whether a database is multitenant or not, DBA Cockpit reads the parameter [multidb] mode in global.ini.

[multidb] mode can be either multidb or singledb.

 

The SID displayed after this is the system ID of the entire SAP HANA system.

To find the SID of the entire SAP HANA system, DBA Cockpit reads M_DATABASE.SYSTEM_ID.

To find the SID of the database, DBA Cockpit reads M_DATABASE.DATABASE_NAME.
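Both values can also be read directly with a query — a minimal sketch using the two M_DATABASE columns mentioned above:

```sql
-- SYSTEM_ID is the SID of the entire SAP HANA system;
-- DATABASE_NAME is the SID of the database you are connected to
SELECT SYSTEM_ID, DATABASE_NAME
FROM SYS.M_DATABASE;
```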

 

In the example screen below, the Overview shows the system database of the SAP HANA multitenant database container MD2:

dba-cockpit-overview-mdc.png

 

If you are working with a tenant database, its name is displayed in the top left corner of the Overview section.

 

In the example screen below, the Overview shows a tenant database DB0 inside the multitenant database container MD2:

dba-cockpit-overview-mdc-tenant.png

 

 

 

Section: SAP HANA Multitenant Database Containers

This section is displayed only if you are logged on to the system database.

 

This section offers the following functions:

dba-cockpit-mdc-section.png

Depending on the support package that you have installed, some of these functions may not be available in your system.

 

These functions are described in the sections that follow.

 

 

List of Tenant Databases

You can display an overview of configured tenant databases.

DBA Cockpit reads this information from the system view M_DATABASES.

 

  • In DBA Cockpit, connect to the system database.
  • Open SAP HANA multitenant database containers.
  • Choose List of tenant databases
    An overview of tenant databases is displayed, showing basic information for each tenant database.
    dba-cockpit-list-of-tenantdbs3.png
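The same overview can be retrieved manually from M_DATABASES while connected to the system database — a minimal sketch (exact columns may vary by revision):

```sql
-- Run while connected to the system database
SELECT DATABASE_NAME, ACTIVE_STATUS, DESCRIPTION
FROM SYS.M_DATABASES;
```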


In later support packages, the overview of tenant databases additionally shows alerts, memory used, disks used, and total CPU.
This feature is available with the following SAP_BASIS Support Packages or higher:

  • 7.40 SP12
  • 7.31 SP17
  • 7.30 SP14
  • 7.02 SP18
    dba-cockpit-tenant-dbs-latest2.png

From this screen, you can stop or start a tenant database, or create a new tenant database.

 

Note that alerts are only supported with SAP HANA SPS10 (and the above SAP_BASIS Support Packages).

Memory used, disks used, and total CPU are supported with SAP HANA SPS09.

 

 

 

Data Browser for Schema SYS_DATABASES

You can display an overview of the views and tables.

DBA Cockpit reads this information from the schema SYS_DATABASES.

 

dba-cockpit-browser-sys_databases.png

 

To display the content of a table or view, double-click a row.

Database Name:
If you specify a particular database, DBA Cockpit displays only the content of the selected table or view for that tenant database.
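For example, to list the services of a single tenant from the system database via the SYS_DATABASES schema (DB0 is the tenant name used in the screenshots above; columns assumed to mirror M_SERVICES):

```sql
-- SYS_DATABASES views add a DATABASE_NAME column for filtering by tenant
SELECT DATABASE_NAME, SERVICE_NAME, SQL_PORT
FROM SYS_DATABASES.M_SERVICES
WHERE DATABASE_NAME = 'DB0';
```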

 

 

 

Backup Catalog

DBA Cockpit shows the backup catalog for the entire system and for all the tenant databases.

 

This feature is available with the following SAP_BASIS Support Packages or higher:

  • 7.40 SP11
  • 7.31 SP15
  • 7.30 SP13
  • 7.02 SP17

dba-cockpit-backup-catalog.png

 

 

Diagnosis Files

DBA Cockpit displays trace files for the entire system and all the tenant databases.

 

This feature is available with the following SAP_BASIS Support Packages or higher:

  • 7.40 SP12
  • 7.31 SP17
  • 7.30 SP14
  • 7.02 SP18

dba-cockpit-diagnosis-files.png

You can download every file displayed on this screen.

Choose Download File(s) Content or Download Compressed File(s) Content.

 

 

Schedule Backups of Tenant Databases

You can schedule backups of tenant databases from inside a system database.

 

  1. In DBA Cockpit, choose Jobs -> DBA Planning Calendar.
  2. To create a new action, drag and drop an action from the Action Pad to a calendar cell in the future.
    You can also drag and drop actions to reschedule them.
    To copy an action, hold down the CTRL key while dragging.
  3. Specify the information required.
  4. Specify the Database Name of a tenant database (or of the system database).
  5. Choose Add to schedule the action.

 

Alternatively, you can schedule tenant database backups by logging onto a tenant database instead of the system database.

More information: SAP Note 2164096 - Schedule backups for SAP HANA multiple-container systems with DB13

 

 

More Information

SCN: SAP HANA SPS10 – What is New for Backup and Recovery - Section: DBA Cockpit for SAP HANA: New Backup Functionality

SSL and single-node HANA systems


This document is part of a series on the security features available in HANA.

 

While single-node configurations are the simplest, understanding them builds a foundation that will be beneficial when expanding to distributed systems.

legend.gif

 

Setup without a third-party CA (like DigiCert or Verisign)

single_n_01.gif

Acting as your own Certificate Authority (CA) provides you with the control to sign certificates as you wish. The disadvantage is being required to distribute your root certificate to all of your clients' trust stores, which may or may not be trivial.

 

This configuration requires:

  • Generating a root certificate.
  • Generating a server certificate signing request
  • Signing the server CSR with the root private key.
  • Importing the signed server certificate and private key to the server’s key store.
  • Distributing the root certificate to client trust stores.

 

Example certificate Subjects and Issuers

Certificate: S
Subject: C=CA, O= MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: C=CA, O= MyCompany, OU= IT, CN= HANA CA

 

Certificate: R
Subject: C=CA, O= MyCompany, OU= IT, CN= HANA CA
Issuer: C=CA, O= MyCompany, OU= IT, CN=HANA CA

 

S is in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

In order for a successful connection to be made, clients construct a chain of trust with the provided certificate:

  • During the handshake, hananode will serve the certificates contained in its key store: S. The client now has S and any certificates in its trust store (which includes R) to build the chain of trust.
  • The certificate that starts the chain is the one in which the Common Name (CN) matches the FQDN of hananode, certificate S.
  • In order to validate S:
    • The client checks S’s Issuer field and looks for a certificate whose Subject field matches; in this case certificate R.
    • The issuer’s signature in S is then decrypted using the public key in R to ensure S was indeed signed by the private key corresponding to R.
    • S is now validated.
  • The same steps are taken in order to validate R, however, because R is self-signed, it will end up decrypting its signature with its own public key.
  • A completed chain isn't immediately trusted. The chain will be trusted only if the root certificate, R, belongs to the client's trust store. R is in the client's trust store, therefore S and R form a valid chain of trust and the connection will be successful.

 

 

Setup with a third-party CA

single_n_02.gif

The root certificates of common CAs such as Verisign and DigiCert are usually distributed with ssl-enabled clients; the Java run-time environment provides a trust store with root certificates for Java-based clients and many web browsers contain trust stores with a default collection of root certificates. The advantage of this setup is not needing to distribute the root certificate. The disadvantage is the cost required to have the server CSR signed by a CA.

This configuration requires:

  • Generating a server CSR.
  • Paying a CA to verify your identity and sign the server CSR.
  • Importing the signed server certificate and server private key to the server’s key store.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O= MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S is in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

Although the setup and Issuer field differs, the steps for constructing the chain of trust are identical.

 

Notes

If your company/organization already has an internal CA and all the company clients already have your CA’s root certificate(s) installed, you may be able to request a signature from the relevant department, thus eliminating the cost of using a third-party CA and avoiding the root-certificate distribution step.

 

 

Setup with a third-party CA (and intermediate certificates)

single_n_03.gif

CAs will often sign your certificates with an intermediate private key and provide you with one or more intermediate certificates in addition to your server certificate. I know this is the case when using Verisign. This requires that the signed server certificate and the provided intermediate certificates be imported into the server's key store.

This configuration requires:

  • Generating a server CSR.
  • Paying a CA to verify your identity and sign the server CSR.
  • Importing the signed server certificate, server private key, and supplied intermediate certificates to the server’s key store.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O=MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=VeriSign Class 3 Secure Server CA - G3, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: I
Subject: CN=VeriSign Class 3 Secure Server CA - G3, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S and I are in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

The chain of trust will be one certificate longer in this example. Clients are served certificates S and I by hananode, and have their trust store certificates at their disposal to construct the chain of trust.

  • The client starts with the certificate whose CN matches the FQDN of hananode, certificate S.
  • S’s Issuer matches I’s Subject, therefore I’s public key is used to verify the signature in S.
  • I’s Issuer matches R’s Subject, therefore R’s public key is used to verify the signature in I.
  • R’s Issuer matches R’s Subject, therefore R’s public key is used to verify the signature in R. R is the self-signed root certificate and the end of the chain.
  • R is contained in the client’s trust store, therefore this chain, and with it hananode, is trusted.

 

Notes

You may be provided with multiple intermediate certificates. All intermediate certificates will need to be imported into either the server’s key store or the client’s trust store.

 

 

Setup with an intermediate company CA

single_n_04.gif

Some organizations will become an intermediate CA of their domain by purchasing a signature for a wildcard certificate. This allows the organization to sign user certificates for its subdomains, backed by the original CA. For example, if the company MyCompany owns a wildcard certificate for the domain mycompany.corp, it will be able to sign certificates for domains such as hananode.mycompany.corp. If this is the case, you may be able to obtain a signed certificate from your company’s intermediate CA for the node(s) in your HANA landscape. This avoids the trouble of distributing root certificates to clients.

This configuration requires:

  • Generating a server CSR.
  • Requesting that your company’s intermediate CA sign the CSR.
  • Importing the signed server certificate, server private key, and company’s intermediate certificate.

 

Example certificate Subjects and Issuers:

Certificate: S
Subject: C=CA, O=MyCompany, OU=IT, CN=hananode.mycompany.corp
Issuer: CN=MyCompany Intermediate CA, OU=IT, O=MyCompany, C=CA

 

Certificate: I
Subject: CN=MyCompany Intermediate CA, OU=IT, O=MyCompany, C=CA
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

Certificate: R
Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US
Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU=VeriSign Trust Network, O=VeriSign, Inc., C=US

 

S and I are in hananode’s key store, R is in the client’s trust store.

 

Chain of trust

Constructing the chain of trust for this scenario is identical to the previous scenario.

 

Notes

If your company’s intermediate certificate, I in this example, is already distributed to clients, you won’t be required to import the intermediate certificate into the server’s key store.

Additionally, if there are multiple intermediate certificates, all of them will need to be imported into either the server’s key store or the client’s trust store.

Dynamic SQL Analytic Privilege in SAP HANA


I would like to take you through this new feature, ‘Dynamic SQL analytic privilege in SAP HANA’, which is available from SAP HANA SPS10.

 

Before going into details, a quick introduction: what is an ANALYTIC PRIVILEGE?

     Analytic privileges grant different users access to different portions of data in the same view (this can also be termed row-level authorization), based on their business role.


We can implement analytic privilege in two ways

  • Classical XML- based Analytic Privileges.
  • SQL Analytic Privileges.


Here we will focus on how you can create a Dynamic SQL Analytic Privilege in SAP HANA, which gives you the flexibility to create analytic privileges within the familiar SQL environment.

For a detailed explanation of classical XML-based analytic privileges, please check out this blog by Jagan Mohan Reddy.


OK let’s start!


Here is my analytic view on which I want to apply a SQL analytic privilege. As you can see below, you have to select ‘SQL Analytic Privileges’ in the view properties when you create the view.


one.png



The output of the view looks like this

two.png


I want this view to be accessible to another user, PRI_TEST_USER, but only the rows where the PUBLISHER name starts with the letter ‘s’ should be visible to this user.


So the steps to achieve this use case are as follows:


1: Create an authorization table to store the USERNAME and the CONDITION used to restrict the data.

2: Create a procedure that provides the matching condition based on the user login.

3: Create an analytic privilege of SQL type.

4: Assign the created analytic privilege to the USER.


1: Create an authorization table to store the USERNAME and the CONDITION used to restrict the data


CREATE COLUMN TABLE "PRIYANKA"."PRODUCT_AP_DYNAMIC_AUTH"
("USER_NAME" NVARCHAR(120), "CONDITION" NVARCHAR(300));

INSERT INTO "PRIYANKA"."PRODUCT_AP_DYNAMIC_AUTH" VALUES ('PRI_TEST_USER', '(PUBLISHER LIKE ''s%'')');

INSERT INTO "PRIYANKA"."PRODUCT_AP_DYNAMIC_AUTH" VALUES ('HE2E_USER', '(PUBLISHER IN (''barnes & noble'', ''ballantine''))');



2: Create a procedure that provides the matching filter condition based on the user login.

(There are a few rules to be followed when creating the procedure, which are well explained in the SAP HANA Modeling Guide, Section 9.)


CREATE PROCEDURE "PRIYANKA".DYNAMIC_AP_DEMO (OUT OUT_FILTER VARCHAR(500))
LANGUAGE SQLSCRIPT SQL SECURITY DEFINER READS SQL DATA AS
BEGIN
    DECLARE v_Filter VARCHAR(500);
    DECLARE CURSOR v_Cursor FOR
        SELECT "CONDITION" FROM "PRIYANKA"."PRODUCT_AP_DYNAMIC_AUTH"
        WHERE "USER_NAME" = SESSION_USER;

    OPEN v_Cursor;
    FETCH v_Cursor INTO v_Filter;
    OUT_FILTER := :v_Filter;
    CLOSE v_Cursor;
END;
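Conceptually, the dynamic privilege is a per-user lookup: at query time, the session user's filter condition is fetched from the authorization table and applied to the view's rows. Here is a toy sketch of that behavior in plain Python (not HANA SQLScript); the table contents mirror the example above, and the matching logic is deliberately simplified.

```python
# Authorization table: session user -> row filter (as a predicate).
AUTH_TABLE = {
    "PRI_TEST_USER": lambda row: row["PUBLISHER"].startswith("s"),
    "HE2E_USER": lambda row: row["PUBLISHER"] in ("barnes & noble", "ballantine"),
}

# Rows of the analytic view (made-up sample data for illustration).
VIEW_ROWS = [
    {"PUBLISHER": "springer", "TITLE": "Book A"},
    {"PUBLISHER": "ballantine", "TITLE": "Book B"},
    {"PUBLISHER": "penguin", "TITLE": "Book C"},
]

def data_preview(session_user):
    """Return only the rows the session user is allowed to see."""
    condition = AUTH_TABLE.get(session_user)
    if condition is None:
        return []  # no matching privilege -> no rows visible
    return [row for row in VIEW_ROWS if condition(row)]

print([r["TITLE"] for r in data_preview("PRI_TEST_USER")])  # ['Book A']
```

Note how a user without an entry in the authorization table sees no rows at all, which matches the restrictive default of analytic privileges.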



3: Create Analytical privilege of SQL type.

  • Right-click on your package under the Content folder -> New -> Analytic Privilege.
  • Provide the details as required.

three.png

  • Click on Next.
  • Select the view that you created for applying the analytic privilege.

fr.png

  • Click on Finish.
  • In the editor, select the 'Dynamic' radio button.
  • Type: Catalog procedure.
  • Select the procedure that you created in the previous step.

six.png

  • Activate your Analytic Privilege.



4: Assign the created Analytical privilege to the USER.


  • Expand the Security node, expand the 'Users' node, and double-click on the user to which you want to apply the restriction.
  • Click on Analytic Privileges.
  • Click on the green plus icon and add the analytic privilege that we created.
  • Click on the Execute button.


sev.png


That's it, we are done with the creation steps. Shall we see the result?

  • Log in as PRI_TEST_USER.
  • Open the analytic view.
  • Click on Data Preview.


There it is 

eig.png

 

 

 

It's my first document here on SCN. I hope it was helpful to you; any suggestions for improvement would be much appreciated. Thank you!

SAP HANA AND QLIK VIEW/SENSE


Hello All,

 

Today I found a new third-party reporting tool, QlikView / Qlik Sense. I tried to connect the reporting tool with SAP HANA and found that connecting it with SAP HANA and doing the reporting is a very easy procedure.

We have to follow some steps to create an ODBC data source and make a successful ODBC connection with SAP HANA.

Prerequisites:

1) Before creating an ODBC connection, log in to SAP HANA Studio with the particular username and password that will be used for the ODBC connection.

2) The SAP HANA client should be installed on the system.

 

STEPS for ODBC Connection


1) Go to Control Panel -> Administrative Tools

c1.png

2) Click on Data Sources (ODBC)

c2.png


3) This opens the ODBC Data Source Administrator popup. Click on the Add button.

A dialog opens to create a new data source. Click on Finish.

c3.png

4) A new popup window opens where you have to give the SAP HANA connection details, like <IP/hostname>:3$$15, where $$ is the instance number.
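The server string encodes the two-digit instance number inside the SQL port (3$$15). A small helper (hypothetical, purely for illustration) makes the pattern explicit:

```python
def hana_sql_endpoint(host, instance_no):
    """Build the <host>:<port> string for the HANA SQL/ODBC port.

    The port is 3<NN>15, where <NN> is the two-digit instance number,
    e.g. instance 00 -> port 30015.
    """
    return f"{host}:3{instance_no:02d}15"

print(hana_sql_endpoint("hananode.mycompany.corp", 0))
# hananode.mycompany.corp:30015
```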

c4.png


5) After that, press the Connect button, which will ask for the username and password for the particular system. Provide the correct details and connect to the system. It will show a message that you are successfully connected.

 

 

STEPS IN QLIK SENSE

 

6) Now open your Qlik Sense software and create a new app.

Give the app a name.

q1.png

7) Click on the Create button

q2.png


8) Open the App

q3.png


9) Add the data


q4.png

10) Connect the data

q5.png


11) Select the Provider

q6.png

12) It should be the SAP HANA MDX Provider

q7.png

13) Provide the user DSN that you created as the ODBC connector name.

q8.png

q9.png

14) Provide the username and password.

q10.png


15) This will open the query dialog below. Select the particular owner (schema).

q11.png

  q13.png

16) Select the particular tables.

17) Load and finish with the table.

q14.png


18) It will show the message below: "Data was successfully loaded."

q15.png

19) It will show you the different chart types; start creating your own charts as per your requirements.

q16.png

 

20) Create the bar chart as below. I have also created a few demo chart examples for you.


q17.png

q18.png

q19.png

q20.png

q21.png

q22.png

q23.png

 

q24.png

q25.png

 

Start learning and explore more examples at QlikView.


Happy Learning

 

 

Thanks and Regards,

Neelkanth
