Channel: SCN : Document List - SAP HANA and In-Memory Computing

SAP HANA Installation (SP11)





SAP HANA Installation


Downloading the Installation Software from SMP

17.png

 

2.png

 

Using HDBLCMGUI



 

Using HDBLCM Command Line

1.png

Choose Option 1 - Install new system

4.png

Choose Option 2 -  All components

 

5.png
Choose Option 2 - Multiple_containers

7.png

Choose Option 1 - single_container as XS Advanced Runtime does not support multitenant database containers

8.png

9.png

Choose Option 1 - All components for XS Advanced Components

10.png

Confirm with y to continue installation

12.png 14.png 15.png 16.png


SAP HANA TDI on Cisco UCS and VMware vSphere - Part 3


Virtual Machine

 

CPU and Memory Sizing

 

This document is not a sizing instrument but provides a guideline of technically possible configurations. For proper sizing, refer to the BW on HANA (BWoH) and Suite on HANA (SoH) sizing documentation. The ratio between CPU and memory can be defined as 6.4 GB per vCPU for Westmere-EX, 8.53 GB per vCPU for Ivy Bridge-EX and 10.66 GB per vCPU for Haswell-EX. The numbers for Suite on HANA can be doubled. For example, a 36-vCPU BWoH VM on Haswell-EX corresponds to roughly 36 × 10.66 GB ≈ 384 GB vRAM, which matches the table below.

 

Note that the ESXi host and every virtual machine produce some memory overhead. For example, an ESXi server with 512 GB physical RAM cannot host a virtual machine with 512 GB vRAM, because the server needs some static memory space for its kernel and some dynamic memory space for each virtual machine. A virtual machine with e.g. 500 GB vRAM would most likely fit into a 512 GB ESXi host.

 

This table shows the vCPU to vRAM ratio on a Haswell-EX server. Under "VMs per 3 TB Host", the maximum number of VMs is shown for two scenarios:

  1. BWoH prod
    • The number of virtual machines that can be deployed on one host for BW on HANA in a productive environment, that is, without resource overcommitment.
  2. CPU overcommit
    • The number of virtual machines that can be deployed on one host while allowing CPU overcommitment. Memory overcommitment is not recommended at all for HANA virtual machines.

 

vCPU (BWoH) | vCPU (SoH) | vRAM   | VMs per 3 TB Host: BWoH prod | VMs per 3 TB Host: CPU overcommit
18          | 18         | 64 GB  | 4**                          | 46
18          | 18         | 128 GB | 4**                          | 23
36          | 18         | 256 GB | 4                            | 11
36          | 18         | 384 GB | 4                            | 7
54          | 36         | 512 GB | 2                            | 5
72*         | 36         | 768 GB | 1                            | 3
108*        | 54         | 1 TB   | 1                            | 2
-           | 72*        | 1.5 TB | -                            | 1
-           | 108*       | 2 TB   | -                            | 1

* 4 TB vRAM and 128 vCPU per VM with vSphere 6

** according to SAP Note 2024433, a physical CPU socket must not serve more than one VM

 

More details are available here:

SAP: SAP HANA Guidelines for running virtualized

VMware: Best Practices and Recommendations for Scale-Up Deployments of SAP HANA on VMware vSphere

VMware: Best Practices and Recommendations for Scale-Out Deployments of SAP HANA on VMware vSphere

 

 

Virtual Machine Creation

 

Create virtual machine with hardware version 10 (ESXi 5.5)

Image2.jpg

 

Select SLES 11 64-bit or RHEL 6 64-bit as Guest OS

Image3.jpg

 

Configure cores per socket

Image4.jpg

 

It is recommended to configure the cores per socket according to the actual hardware configuration, which means:

  • 10 cores per socket on HANA certified Westmere-EX processors
  • 15 cores per socket on HANA certified Ivy Bridge-EX processors
  • 18 cores per socket on HANA certified Haswell-EX processors

 

Virtual SCSI controller configuration

Image5.jpg

 

Configure 4 virtual SCSI controllers of the type "VMware Paravirtual".

 

Configure virtual disks

Image6.jpg

 

To fully utilize the virtual resources, a disk distribution is recommended where the disks are connected to different virtual SCSI controllers. This improves parallel IO processing inside the guest OS.

 

The size of the disks can initially be chosen smaller than the known requirements for HANA appliances, because the virtual disk size can be increased online later.

 

Disk       | Mount point  | Size (old measure) | Size (new combined measure)
root       | /            | 80 GB              | 80 GB
hanashared | /hana/shared | 1 * vRAM           | 1 * vRAM
hanadata   | /hana/data   | 3 * vRAM           | 1 * vRAM
hanalog    | /hana/log    | 1 * vRAM           | vRAM < 512 GB: 0.5 * vRAM; vRAM ≥ 512 GB: min. 512 GB

 

Configure virtual network adapters

 

vsphere2.jpg

 

Configure one VMXNET3 adapter for each network and connect it to the corresponding port group. Note that some networks are configured only at the ESXi level, such as the storage and backup networks.

 

Enable Latency Sensitivity settings

Image8.jpg

 

For the latency sensitivity setting to take effect, CPU and memory reservations have to be set as well. While the CPU reservation can vary, the memory reservation has to be set to 100 %. Check the box "Reserve all guest memory (All locked)".

 

Image10.jpg

 

After creating a VM, the guest OS installation can begin.

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

SAP HANA Workshop, Development and object flow (Part-1)


For years I have been driving HANA projects (implementation, support & migration). Each time I travel to customer sites (across countries) to understand the project scope and expectations. I attend workshops to collect information from product owners, user teams and technical teams. At the end of the day I end up with a bunch of key information, mainly about. . .

 

    1. How to start gathering information to kick-start the project.
    2. How to start HANA development.
    3. How the objects in HANA are structured / located in the various packages/content.

 

Please find the details below:

How to start gathering information to kick-start the project

  1. Kick off a workshop with the stakeholders (as below)

 

a1.JPG

 

 

       2. Create a detailed document from the workshop (good to take a sign-off)

      

       3. Start initiating the functional design, technical design, build and so on (as below)

a2.JPG

 

 

 

     4. How the objects in HANA are structured / located in the various packages/content (as below)

             Hint: This will be helpful during the project LCM stage (lifecycle management / transport)

 

a3.JPG

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 1


Introduction

 

Support for non-productive SAP HANA systems on VMware vSphere 5.1 was announced in November 2012. Since April 2014, productive SAP HANA systems can also be virtualized by customers on VMware vSphere 5.5. Currently, some restrictions apply which prevent SAP HANA from being treated like any other SAP application running on VMware vSphere. But because the conditions for virtualized HANA will be harmonized in the future to fit into a homogeneous SAP technology platform, it is recommended to continuously keep up to date with the SAP documentation and always refer to the latest version only.

 

The audience of this series should have a basic understanding of the following components:

 

Additionally, a deeper understanding of SAP HANA TDI is mandatory:

SAP: SAP HANA Tailored Datacenter Integration

Cisco: SAP HANA TDI on Cisco UCS

 

The following combination has been tested and verified as working. Using lower versions is strongly advised against, while newer versions are expected to work as long as the combination is reflected in the Cisco and VMware compatibility guides.

  • Cisco UCS Manager 2.1
  • VMware ESXi 5.5
  • VMware vCenter Server 5.5
  • SUSE Linux Enterprise Server 11 SP3
  • Red Hat Enterprise Linux Server 6.5
  • SAP HANA 1.0 SPS07

 

Although one of the goals of this series is to consolidate information about the virtualized HANA deployment process, there is still a lot of necessary referencing to other documentation. Please consult the following during the planning and installation phase:

 

Virtualization References

Document ID / URL                          | Description
SAP Note 1492000                           | General Support Statement for Virtual Environments
SAP Note 2161991                           | VMware vSphere configuration guidelines
SAP Note 1606643                           | VMware vSphere host monitoring interface
SAP Note 1788665                           | SAP HANA Support for virtualized / partitioned (multi-tenant) environments
SAP Note 1995460                           | SAP HANA on VMware vSphere in production
SAP Note 2024433                           | Multiple SAP HANA VMs on VMware vSphere in production (controlled availability)
SAP Note 2157587                           | SAP Business Warehouse, powered by SAP HANA on VMware vSphere in scale-out and production (controlled availability)
SAP Note 2237937                           | Virtual Machines hanging with VMware ESXi 5.5 Update 3 and 6.0
HANA Virtualization Guidelines from SAP    | SAP HANA Virtualization Guidelines with VMware vSphere
HANA Virtualization Guidelines from VMware | SAP HANA Virtualization Guidelines with VMware vSphere

 

Linux References

Document ID / URL | Description
SAP Note 171356   | SAP software on Linux: General information
SAP Note 1944799  | SAP HANA Guidelines for SLES Operating System Installation
SAP Note 2009879  | SAP HANA Guidelines for RHEL Operating System Installation
SAP Note 2205917  | SAP HANA DB: Recommended OS settings for SLES 12
SAP Note 1954788  | SAP HANA DB: Recommended OS settings for SLES 11 SP3
SAP Note 2136965  | SAP HANA DB: Recommended OS settings for RHEL 6.6
SAP Note 2001528  | SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

[SAP HANA Academy] SAP HANA SPS11 - HANA Spatial: Spatial Functionality in SQLScript


Part of the SAP HANA Academy's What's New with SAP HANA SPS11 series, Jamie Wiseman provides a tutorial video on using spatial functionality in HANA SQLScript. Specifically, Jamie details how to use the spatial data type ST_GEOMETRY and spatial methods to access and manipulate spatial data in HANA SQLScript. Note that this feature is only available as of SAP HANA SPS11 and will not work in any prior release of SAP HANA.


Watch Jamie's quick tutorial below.

Screen Shot 2015-12-09 at 3.08.41 PM.png

In SAP HANA SPS11, SQLScript now supports the spatial data type ST_GEOMETRY and SQL spatial methods to access and manipulate spatial data. To demonstrate this, Jamie uses a user-defined function with spatial inputs inside a procedure, and then calls that procedure with a couple of ST_GEOMETRY inputs.


You must be logged into your SAP HANA system with a user who has the rights to create new functions and procedures. Please visit this GitHub page to access, for free, the script used in this tutorial.

Screen Shot 2015-12-09 at 5.43.59 PM.png

Lines 1-9 of the script create a user-defined function that has a pair of ST_GEOMETRY input parameters. Line 8 of the function uses the ST_Distance method to calculate the distance between the two inputs. The unit of the calculated distance depends on whatever SRID (spatial reference ID) is used.

Screen Shot 2015-12-09 at 5.58.35 PM.png

Lines 11-27 create a procedure that not only references the user-defined function, but also takes a pair of geometry type inputs. In addition, two other spatial methods are used to create an additional display value. This display value uses two well-known-text methods that show the spatial values in an easier-to-read text format. At the time of this tutorial's creation, only geometry spatial type values are allowed in SQLScript in SAP HANA SPS11.


If you were to uncomment line 13, which contains a point spatial type, then an error would be returned.

Screen Shot 2015-12-09 at 5.59.39 PM.png

Lines 29-35 deal with the procedure call. The syntax for the two parameter inputs creates geometry spatial types. In Jamie's example both are points. The SRIDs of the two geometries are specified in the same input.

Screen Shot 2015-12-09 at 6.00.52 PM.png

If you run all of the syntax at once, you will get the result of the procedure's nested distance call as well as the results from the two additional spatial methods.

Screen Shot 2015-12-09 at 5.52.34 PM.png
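The full script is available on the GitHub page linked above. As a rough sketch of the pattern the video demonstrates (the schema, function, procedure and parameter names below are illustrative, not the ones from the tutorial, and SRID 4326 is only an example), the idea looks roughly like this:

-- Sketch: a scalar user-defined function with a pair of ST_GEOMETRY input parameters (SPS11+)
CREATE FUNCTION DEVTEST.GEO_DISTANCE (IN GEO1 ST_GEOMETRY, IN GEO2 ST_GEOMETRY)
RETURNS DIST DOUBLE
LANGUAGE SQLSCRIPT AS
BEGIN
  -- ST_Distance calculates the distance between the two inputs;
  -- the unit depends on the SRID of the geometries
  DIST := :GEO1.ST_Distance(:GEO2);
END;

-- Sketch: a procedure that calls the function and adds well-known-text display values
CREATE PROCEDURE DEVTEST.GET_DISTANCE (IN GEO1 ST_GEOMETRY, IN GEO2 ST_GEOMETRY)
LANGUAGE SQLSCRIPT AS
BEGIN
  SELECT DEVTEST.GEO_DISTANCE(:GEO1, :GEO2) AS DISTANCE,
         :GEO1.ST_AsWKT() AS GEOMETRY_1,
         :GEO2.ST_AsWKT() AS GEOMETRY_2
  FROM DUMMY;
END;

-- Call the procedure with two geometry inputs (points) created from well-known text
CALL DEVTEST.GET_DISTANCE(
  NEW ST_GEOMETRY('POINT (8.642057 49.293432)', 4326),
  NEW ST_GEOMETRY('POINT (8.637085 49.294443)', 4326)
);

Calling the procedure returns the distance from the nested function call together with the two well-known-text display values, which is the same result structure shown in the screenshot above.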

For more tutorials videos on What's New with SAP HANA SPS11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to keep apprised of our latest free tutorials.

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 4


Guest Operating System

 

Installation

 

The installation of the Linux OS has to be done according to the SAP Notes for HANA systems on SLES or RHEL. The network and storage configuration parts depend heavily on the TDI landscape, therefore no general advice can be given.

 

Disk Distribution

 

The VMDKs should be distributed across differently tiered storage systems for performance reasons. Example:

Drawing1.jpg

In the end, a realistic storage quality classification as well as a thorough distribution of the disks among datastores and virtual SCSI adapters ensures good disk IO performance for the HANA instance.

 

Configuration

 

With regard to the network configuration, it is not recommended to configure bond devices inside the Linux guest OS. Such a configuration is used in native environments to guarantee availability of the network adapters; in virtual environments, the redundant uplinks of the vSwitch take on that role.

 

In /etc/sysctl.conf some tuning might be necessary for scale-out and in-guest NFS scenarios:

net.ipv4.tcp_slow_start_after_idle = 0

net.core.rmem_max = 16777216

net.core.wmem_max = 16777216

net.core.rmem_default = 262144

net.core.wmem_default = 262144

net.core.optmem_max = 16777216

net.core.netdev_max_backlog = 300000

net.ipv4.tcp_rmem = 65536 262144 16777216

net.ipv4.tcp_wmem = 65536 262144 16777216

net.ipv4.tcp_no_metrics_save = 1

net.ipv4.tcp_moderate_rcvbuf = 1

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_sack = 1

sunrpc.tcp_slot_table_entries = 128

 

MANDATORY: Apply generic SAP on vSphere optimizations and additional OS configurations for SLES 11 SP4, SLES 12 or RHEL 6.7.

 

Validation

 

To validate the solution, the same hardware configuration check tool as for the appliances is used but with slightly different KPIs. SAP supports performance-related SAP HANA issues only if the installed solution has passed the validation test successfully.

 

Volume | Block Sizes | Test File Size | Initial Write (MB/s) | Overwrite (MB/s) | Read (MB/s) | Latency (µs)
Log    | 4K          | 5G             | -                    | 30               | -           | 1000
Log    | 16K         | 16G            | -                    | 120              | -           | 1000
Log    | 1M          | 16G            | -                    | 250              | 250         | -
Data   | 4K          | 5G             | -                    | -                | -           | -
Data   | 16K         | 16G            | 40                   | 100              | -           | -
Data   | 64K         | 16G            | 100                  | 150              | 250         | -
Data   | 1M          | 16G            | 150                  | 200              | 300         | -
Data   | 16M         | 16G            | 200                  | 250              | 400         | -
Data   | 64M         | 16G            | 200                  | 250              | 400         | -

Source: SAP SE, Version 1.8

 

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

SAP HANA TDI on Cisco UCS and VMware vSphere - Part 2


ESXi Host

 

UCS Service Profile

 

The ESXi host provides the platform on which the virtual machines run. The service profile contains the configuration of the hardware in a Cisco UCS environment. Service Profile Templates or vSphere Auto Deploy can be used to ease the ESXi deployment process. In this example, a standalone service profile creation is shown.

 

For each vSwitch, it is recommended to configure two uplink interfaces with MTU 9000 as trunk. The VLAN assignment takes place in the port group configuration of the vSwitch.

 

Image6a.jpg

 

In order to get the best performance for virtualization, certain BIOS features should be enabled. The C-states can be controlled by the hypervisor and do not necessarily have to be disabled; how balanced this configuration should be depends on the trade-off between performance needs and power saving.

 

Image17a.jpg

 

vsphere1.jpg

VMware vSphere screenshot

 

Although the use of VM-FEX is optional, it is recommended to enable all Intel Direct IO features.

 

Image18.jpg

 

 

Network

 

SAP HANA has different types of network communication channels to support the different SAP HANA scenarios and setups. It is recommended to consult the SAP HANA TDI - Network Requirements whitepaper.

 

network_tdi2.jpg

Source: SAP SE

 

On the basis of the listed network requirements, every server must be equipped with two 1 or 10 Gigabit Ethernet (10 Gigabit Ethernet is recommended) interfaces for scale-up systems to establish communication with the application servers and users (client zone). If the storage for SAP HANA is external and accessed through the network, two additional 10 Gigabit Ethernet or 8-Gbps Fibre Channel interfaces are required (storage zone). Scale-out systems need a 10 Gigabit Ethernet link for internode communication (internal zone). When using multiple ESXi hosts in a vSphere Cluster with enabled DRS, at least one additional 10 Gigabit Ethernet link is required for vMotion traffic.

 

For the internal zone, storage zone and the vMotion network, it is recommended to configure jumbo frames end-to-end.

 

The network traffic can be consolidated by using the same vSwitch with several load-balanced uplinks.

 

Image11.jpg

 

Storage

 

The storage system must be certified as part of an appliance, independent of the appliance vendor, or must be certified as SAP HANA TDI Storage in the Certified SAP HANA Hardware Directory.

 

It is recommended to physically separate the origin (VMFS LUN or NFS export) of the datastores providing data and log for performance reasons. Additional performance classes can be distinguished:

 

Category     | Read Performance | Write Performance
OS boot disk | medium           | low
/hana/shared | medium           | low
/hana/data   | very high        | high
/hana/log    | high             | very high
backup       | low              | medium

 

It is also advisable to consult the recommendations from the storage hardware partners:

EMC: Tailored Datacenter Integration Content

NetApp Reference Architecture: SAP HANA on VMware vSphere and NetApp FAS Systems

NetApp Configuration Guide: SAP HANA on NetApp FAS Systems

______________________________

Part 1 - Introduction

Part 2 - ESXi Host

Part 3 - Virtual Machine

Part 4 - Guest Operating System

[SAP HANA Academy] SAP HANA SPS11 - SAP HANA Security - How to Create a User with "NO_FORCE_FIRST_PASSWORD_CHANGE"


In a tutorial video the SAP HANA Academy's Jamie Wiseman shows how to create a user using the "NO_FORCE_FIRST_PASSWORD_CHANGE" option. This option enables a user to not have to change their password the very first time they log into SAP HANA. Note that your password's lifetime policy will still apply. The option is new in SAP HANA SPS11 but will not work in prior iterations of SAP HANA.


Check out Jamie's video below.

Screen Shot 2015-12-11 at 11.54.52 AM.png

Lines 2-3 of the syntax shown below create a user with the NO FORCE_FIRST_PASSWORD_CHANGE option. With this option the user's current and valid password is specified during user creation, so it is highly recommended to use a secure password at this point. The user you are logged into SAP HANA as must have the appropriate rights to create a user.
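The exact statements are shown in the video and screenshots; a minimal sketch of the idea, with a placeholder user name and password, could look like this:

-- Create a user whose password does not have to be changed at first logon (SPS11+);
-- the password stays valid, so choose a strong one (placeholder values below)
CREATE USER DEMO_USER PASSWORD "Initial1Password" NO FORCE_FIRST_PASSWORD_CHANGE;

-- Verify: PASSWORD_CHANGE_NEEDED should be FALSE for the new user
SELECT USER_NAME, PASSWORD_CHANGE_TIME, PASSWORD_CHANGE_NEEDED
FROM USERS
WHERE USER_NAME = 'DEMO_USER';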

Screen Shot 2015-12-11 at 11.56.27 AM.png

After all of the syntax has run, navigate to the results of the SELECT * FROM USERS statement.

Screen Shot 2015-12-11 at 12.17.25 PM.png

The newly created user's information is displayed on the last line of the table. Scrolling to the two Password columns (PASSWORD_CHANGE_TIME and PASSWORD_CHANGE_NEEDED) verifies that the user will not need to change their password until the password's lifetime expires.

Screen Shot 2015-12-11 at 12.18.23 PM.png

If you are unsure which system view contains your password policy, it is displayed in the results of the SELECT * FROM M_PASSWORD_POLICY statement. Note that the password lifetime policy in Jamie's demonstration is 182 days.
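The statement mentioned above is simply:

-- Displays the password policy parameters, including the maximum password lifetime
SELECT * FROM M_PASSWORD_POLICY;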

Screen Shot 2015-12-11 at 12.30.15 PM.png

For more tutorials videos on What's New with SAP HANA SPS11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to keep apprised of our latest free tutorials.


SAP HANA Backup/Recovery Overview


SAP HANA holds the bulk of its data in memory for maximum performance, but it still uses persistent storage to provide a fallback in case of failure. After a power failure, the database can be restarted like any disk-based database and returns to its last consistent state. To protect against data loss resulting from hardware failure, backups are required. Native backup/recovery functions, an interface for connecting third-party backup tools, and support for storage snapshots are all available. For recovery, SAP HANA offers many different options, including point-in-time recovery.
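As a flavour of the native functionality, a complete file-based data backup can be triggered with a single SQL statement (the file prefix below is only an example):

-- Complete data backup to the configured backup location; files are prefixed with the given name
BACKUP DATA USING FILE ('MONDAY_FULL');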

View this Document

Archive of SAP HANA Database Campus Open House Events


This is an archive of previous open house events. The latest event, along with more information, can be found here.

 

2015

 

We, the SAP HANA Student Campus, invite students, professors, and faculty members to our second Open House Day at SAP's headquarters in Walldorf, Germany. Throughout your day in Walldorf, you will get an overview of database research at SAP, meet the architects of SAP HANA and learn more about academic collaborations. There are a couple of interesting presentations by SAP HANA developers and academic partners. Current students and PhD candidates will present their work and research. For external students and faculty members it is a great chance to find interesting topics for internships and theses.


The event will take place on June 24th, 2015, during 09:00-17:00 in building WDF03 (Robert-Bosch-Straße 30/34, 69190, Walldorf, Germany) Room E4.02. We will meet in the lobby of Building WDF03 between 09:00-09:30. Driving directions are attached to the end of this post. Free lunch and snacks are arranged for all attendees.

 

  • 09:00-09:30: Arrival
  • 09:30-10:00: Check-in
  • 10:00-10:15: Opening
  • 10:15-11:15: Keynote
    • Dr. Alexander Böhm (SAP HANA Architect)  –From Theory to Practice: Developing an Enterprise-Class DBMS at SAP
  • 11:15-12:00: Poster session
  • 12:00-12:45: Lunch
  • 12:45-13:00: Short guided tour through the Campus
  • 13:00-14:30: Session 1 Academic
    • Prof. Dr. Guido Moerkotte (University of Mannheim) – Requirements, Architecture, and Design in Database Management Systems
    • Prof. Dr. Peter Fischer (University of Freiburg) – Indexing for Bi-Temporal Data
  • 14:30-15:00: Break & poster judging
  • 15:00-16:30: Session 2 – SAP
    • Ingo Müller (PhD Student, SAP) – Cache-Efficient Aggregation: Hashing Is Sorting
    • Martin Weidner (SAP) – SAP Velocity
    • Hinnerk Gildhoff (SAP) – SAP HANA Spatial Engine
  • 16:30-17:00: Best posters & Closing

 

The entire event will be held in English.

 

 

MAP: SAP Headquarters in Walldorf, Building WDF03 in Walldorf

 

By participating you agree to appear in photos and videos taken during the event and published on SCN and CareerLoft.

 

Looking forward to seeing you in Walldorf,

The SAP HANA Student Campus team

Contact: students-hana@sap.com

 

 

2014

 

The first open house event took place on June 5th, 2014, during 10:00-16:00, at Robert-Bosch-Straße 30/34 (69190, Walldorf, Germany), Building WDF03, Room E4.02. The agenda was the following:

 

  • 09:30-10:00: Check In
  • 10:00-13:00:
    • Keynote by Prof. Dr.-Ing. Wolfgang Lehner (Technische Universität Dresden, Dep. of Computer Science), titled "Challenges and Trends in Main-memory centric data processing".
      • Discussion
    • Talk by Frank Koehler (SAP AG), titled "C++? C++!".
      • Discussion
    • Appetizer session with an overview of all presented topics.
    • Poster session.
  • 13:00-14:00: Lunch break.
  • 13:30-14:00: Short guided tour through the Campus
  • 14:00-15:45:
    • Invited talk by Prof. Dr. rer. nat. Peter Sanders (Karlsruher Institut für Technologie, Fakultät für Informatik), titled "Algorithm Engineering @ SAP Hana and KIT".
      • Discussion.
    • Presentation by Dipl.-Inform. Jonathan Dees (PhD student at Karlsruher Institut für Technologie, SAP AG), titled "Algorithmic Challenges for fast SQL Query Execution and Building a Product".
      • Discussion
    • Presentation by Lars Volker (SAP AG) and Minji Lee (Master Student, Ruprecht-Karls-Universität Heidelberg), titled "SAP HANA SPATIAL - Innovation delivered!"
      • Discussion
  • 15:45: Award Presentation.
  • 16:00: Closing

SAP HANA Database Campus - Open House 2016 in Walldorf


The SAP HANA Database Campus invites students, professors, and faculty members interested in database research to join our third Open House at SAP's headquarters. Throughout your day, you will get an overview of database research at SAP, meet the architects of SAP HANA and learn more about academic collaborations. There are a couple of interesting presentations by developers and academic partners. Current students and PhD candidates present their work and research. For external students and faculty members it is a great chance to find interesting topics for internships and theses.


The event takes place on June 2nd, 2016, during 09:30-16:00 in Walldorf, Germany. Free lunch and snacks are provided for all attendees. The entire event is held in English.

 

Register here

 

 

Looking forward to seeing you in Walldorf,

The SAP HANA Database Campus

students-hana@sap.com

 

 

Location:

 

Agenda:

  • 09:00-09:30 Arriving
  • 09:30-10:00 Check-In in the lobby of building WDF03
  • 10:00-10:15 Opening
  • 10:15-11:00 Keynote
    • Speaker and Topic will be announced
  • 11:00-12:00 Poster Session Part 1
  • 12:00-12:45 Lunch
  • 12:45-13:00 Office Tour
  • 13:00-14:00 Session 1 – Academic
    • Academic Partner (30 min)
    • HANA PhD Student (15 min)
    • HANA PhD Student (15 min)
  • 14:00-15:00 Poster Session Part 2 & Coffee Break
  • 15:00-15:45 Session 2 – SAP
    • Speaker and Topic will be announced (20 min)
    • Speaker and Topic will be announced (20 min)
  • 15:45-16:00 Best Student/PhD-Student Poster & Open House Closing

 

 

Archive of previous events


By participating you agree to appear in photos and videos taken during the event and published on SCN and CareerLoft.

[SAP HANA Academy] SAP HANA SPS11 - HANA Spatial: Intersection Aggregate Method, ST_INTERSECTIONAGGR


In a tutorial video the SAP HANA Academy's Jamie Wiseman examines the Intersection Aggregate method, which is part of SAP HANA Spatial. This method is new to SAP HANA and available in SAP HANA SPS11. Watch Jamie's video below.

Screen Shot 2015-12-15 at 10.11.31 AM.png

The Intersection Aggregate method returns the spatial intersection of all of the geometries in a group. The aggregate is logically computed by repeatedly applying the spatial type intersection method to combine two geometries at a time.


Jamie is using the .ST_AsText method so that the results are returned in a more readable format as opposed to the results coming back in a binary format. The syntax used in this video is available in the description of the video and also at the bottom of this post.


Lines 3-19 of the syntax will create a test schema and a test table. Next several geometries will be added to the shape column. The SRID will be 0 as Jamie will be looking at geometries in a Cartesian system.

Screen Shot 2015-12-15 at 10.13.46 AM.png

If you run lines 4-25 you will return the data from your test table as shown below.

Screen Shot 2015-12-15 at 10.15.10 AM.png

The table contains a pair of closed polygons, specifically a square and a triangle, and a line segment. The visual representation of these geometries is depicted below. The geometries were added in the following order: square, triangle, line.

Screen Shot 2015-12-15 at 10.17.04 AM.png

Executing lines 28-29 will run the Intersection Aggregation on the shape column. The result is the single line segment that is shown below.

Screen Shot 2015-12-15 at 10.18.09 AM.png


Screen Shot 2015-12-15 at 10.18.31 AM.png

Note that this method will return a null if there is no common intersection between the complete set of geometries that are being compared. Also if the final set that the method looks at is a single geometry then that single geometry will be returned.


For more tutorials videos on What's New with SAP HANA SPS11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to keep apprised of our latest free tutorials.


Tutorial's Syntax:


-- create a test schema and add a test table

CREATE SCHEMA DEVTEST;

CREATE COLUMN TABLE DEVTEST.SPATIALTEST (

ID INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY,

SHAPE ST_GEOMETRY(0)

);

 

 

-- add several geometries to the test table

INSERT INTO DEVTEST.SPATIALTEST VALUES (

NEW ST_Polygon('Polygon ((0 0, 4 0, 4 4, 0 4, 0 0))')

);

INSERT INTO DEVTEST.SPATIALTEST VALUES (

NEW ST_Polygon('Polygon ((1 3, 3 1, 3 3, 1 3))')

);

INSERT INTO DEVTEST.SPATIALTEST VALUES (

NEW ST_LineString('LineString (0 0, 3 3)')

);

 

 

-- look at the data in the test table

SELECT

SHAPE.ST_AsWKT() AS SHAPE,

SHAPE.ST_GeometryType() AS SHAPE_TYPE

FROM DEVTEST.SPATIALTEST;

 

 

-- use ST_IntersectionAggr on the SHAPE column

SELECT  ST_IntersectionAggr ( SHAPE ).ST_AsText()

FROM DEVTEST.SPATIALTEST;

[SAP HANA Academy] SAP HANA SPS11 - SQL Functions: BinToHex & BinToNHex Functions


In a tutorial video the SAP HANA Academy's Jamie Wiseman examines two binary to hexadecimal conversion SQL Functions. Jamie details how to use the BinToHex () and the BinToNHex() functions. This pair of functions are new to SAP HANA SPS11. Watch Jamie's video below.

Screen Shot 2015-12-16 at 4.00.46 PM.png

The difference between the two new SQL functions is the output type: BinToHex returns a VARCHAR value while BinToNHex returns an NVARCHAR value.


Jamie enters the syntax shown below and displayed at the bottom of this post into a SQL console in SAP HANA Studio.

Screen Shot 2015-12-16 at 3.49.06 PM.png

Both functions require a single parameter (input value) and produce a hexadecimal value as the output. Running the syntax returns identical results for the two queries, except that the BinToNHex has a NVarChar output.

Screen Shot 2015-12-16 at 3.59.28 PM.png

So the decision to use BinToHex or BinToNHex will largely depend on what your application is expecting.


Note that both the BinToHex and BinToNHex functions will automatically convert any input to binary. Therefore you won't need to use a To_BINARY conversion. A To_BINARY conversion is currently displayed in the second column of the result tables highlighted above.


For more tutorials videos on What's New with SAP HANA SPS11 please check out this playlist.


SAP HANA Academy - Over 1,300 free tutorials videos on SAP HANA, SAP Analytics and the SAP HANA Cloud Platform.


Follow us on Twitter @saphanaacademy and connect with us on LinkedIn to keep apprised of our latest free tutorials.


Tutorial's Syntax:


SELECT

BINTOHEX('ABC'),

BINTOHEX(TO_BINARY('ABC'))

FROM DUMMY;

 

-- Converts a binary value to a hexadecimal value as a VARCHAR data type.

-- The input value is converted to a binary value first if it is not a binary value.

 

--results:

--414243, 414243


 

SELECT

BINTONHEX('ABC'),

BINTONHEX(TO_BINARY('ABC'))

FROM DUMMY; 

 

-- Converts a binary value to a hexadecimal value as an NVARCHAR data type.

-- The input value is converted to a binary value first if it is not a binary value.

 

--results:

--414243, 414243

SAP HANA - Host Auto-Failover


Host Auto-Failover is a built-in, fully automated high availability solution for recovering from the failure of a SAP HANA host. This paper explains how this mechanism works in detail and describes the important interfaces an administrator has to pay attention to.

View this Document

Troubleshooting High CPU Utilisation


High CPU Utilisation

 

Whilst using HANA, i.e. running reports, executing queries, etc., you may see an alert in HANA Studio that the system has consumed its CPU resources and has reached full utilisation or hangs.

 

Before performing any traces, please check whether Transparent HugePages are enabled on your system. THP should be disabled across your landscape until SAP recommends activating them again. Please see the relevant notes in relation to Transparent HugePages:

 

HUGEPAGES 

 

SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation

SAP Note 1824819 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP2

SAP Note 2131662 - Transparent Huge Pages (THP) on SAP HANA Servers

SAP Note 1954788 - SAP HANA DB: Recommended OS settings for SLES 11 / SLES for SAP Applications 11 SP3

 

 

The THP activity can also be checked in the runtime dumps by searching for "AnonHugePages". While checking the THP, it is also recommended to check:

 

Swaptotal = ??

Swapfree = ??

 

This will let you know if there is a reasonable amount of memory in the system.

 

Next you can check the global allocation limit (GAL) in the runtime dump (search for IPM) and ensure it is not lower than what the process/thread in question is trying to allocate.
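If the system is still responsive, roughly the same information can also be read live from the monitoring views instead of the runtime dump. A hedged sketch (view and column names taken from the standard monitoring views; verify them in your revision, e.g. with SELECT *):

-- Host-level physical memory and swap
SELECT HOST, FREE_PHYSICAL_MEMORY, USED_PHYSICAL_MEMORY,
       FREE_SWAP_SPACE, USED_SWAP_SPACE
FROM M_HOST_RESOURCE_UTILIZATION;

-- Per-service memory usage versus the effective allocation limit
SELECT HOST, SERVICE_NAME, TOTAL_MEMORY_USED_SIZE, EFFECTIVE_ALLOCATION_LIMIT
FROM M_SERVICE_MEMORY;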

 

Usually it is evident what caused the high CPU usage. In many cases it is caused by the execution of large queries or by running reports on models from HANA Studio.

 

To be able to use the kernel profile, you must have the SAP_INTERNAL_HANA_SUPPORT role. This role is intended only for SAP HANA development support.

 

The kernel profile collects, for example, information about frequent and/or expensive execution paths during query processing. It is recommended that you start kernel profiler tracing immediately before you execute the statements you want to analyze and stop it immediately after they have finished. This avoids the unnecessary recording of irrelevant statements. It is also advisable as this kind of tracing can negatively impact performance.

 

When you stop tracing, the results are saved to trace files that you can access on the Diagnosis Files tab of the Administration editor.

 

You cannot analyze these files meaningfully in the SAP HANA studio; instead you must use a tool capable of reading the configured output format, i.e. KCacheGrind or DOT (the default format).

(http://www.graphviz.org/Download_windows.php)

 

You activate and configure the kernel profiler in the Administration editor on the Trace Configuration tab. Please be aware that you will also need to execute 2-3 runtime dumps, because the Kernel Profiler Trace results are read in conjunction with the runtime dumps to pick out the relevant stacks and thread numbers.

To see the full information on the Kernel Profiler Trace, please see SAP Note 1804811 or follow the steps below:

     

Kernel%20Profiler.PNG

 

Connect to your HANA database server as the <sid>adm user (for example via PuTTY) and start HDBCONS by typing the command "hdbcons".
To do a Kernel Profiler Trace of your query, please follow these steps:

1. "profiler clear" - Resets all information to a clear state

2. "profiler start" - Starts collecting information.

3. Execute the affected query.

4. "profiler stop" - Stops collecting information.

5. "profiler print -o /path/on/disk/cpu.dot;/path/on/disk/wait.dot" - writes the collected information into two dot files which can be sent to SAP.

 

 

Once you have this information you will see two dot files called

1: cpu.dot

2: wait.dot.

 

To read these .dot files you will need to download GVEdit. You can download this at the following:

  http://www.graphviz.org/Download_windows.php

 

Once you open the program it will look something similar to this:

 

Graph%20Viz.PNG

     

     
The wait.dot file can be used to analyse a situation where a process is running very slowly without any apparent reason. In such cases, a wait graph can help to identify whether the process is waiting for an IndexHandle, I/O, a savepoint lock, etc.

 

So once you open the GraphViz tool, open the cpu.dot file: File > Open > select the dot file > Open.

Once you open this file you will see a screen such as:

 

graphviz%201.PNG

     

 

The graph might already be open and you might not see it because the rendered view is very large. You need to use the scroll bars (horizontal and vertical) to scroll.

 

CPU_DOT%201.PNG

 

From there on it will depend on what the issue is that you are processing.

Normally you will be looking for the process/step that has the highest value for

E = …

where "E" means Exclusive. There is also:

I = …

where "I" means Inclusive.

The Exclusive value is of more interest because it is the value just for that particular process or step, and it indicates whether more memory/CPU is used in that step or not. In this example case we can see that __memcmp_se44_1 has I = 16.399% and E = 16.399%. By tracing the RED colouring we can see where most of the utilisation is happening and trace the activity, which will lead you to the stack in the runtime dump, which will also contain the thread number we are looking for.

 

CPU_DOT%202.PNG

 

CPU_DOT%203.PNG

 

 

 

 

 

By viewing the cpu.dot file you have now traced the RED trail to the step with the highest exclusive value. Now open the RTE (runtime dump). Working from the bottom up, we can get an idea of what the stack will look like in the runtime dump.

 

CPU_DOT%204.PNG

 

 

 

 

By comparing the RED path, you can see that the path matches exactly with this Stack from the Runtime dump. This stack also has the Thread number at the top of the stack.

 

So now you have found the thread number with which this query was executed. By searching for this thread number in the runtime dump we can check for the parent of this thread and for the children related to that parent. This thread number can then be linked back to the query within the runtime dumps. The exact query can now be found, giving you the information on the exact statement and also the USER that executed it.
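While the system is still busy, the thread number from the dump can also be looked up in the monitoring views as an alternative way to reach the session and the executing user. A hedged sketch (the IDs below are placeholders; column sets vary slightly between revisions, so SELECT * is used):

-- Look up the thread found via the kernel profiler / runtime dump (placeholder thread id)
SELECT * FROM M_SERVICE_THREADS WHERE THREAD_ID = 12345;

-- The CONNECTION_ID of that thread leads to the session and the executing user
SELECT * FROM M_CONNECTIONS WHERE CONNECTION_ID = 400123;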

 

For more information or queries on HANA CPU please visit Note 2100040 - FAQ: SAP HANA CPU

 

I hope you find this instructive,

 

Thank you,

 

Michael Healy


SAP Hana TDI setup - VMware 5.1 (part 2)


Hello, this is the second part of my document SAP Hana TDI setup - VMware 5.1. I'll explain how to configure HSR for Hana.

You can check the first part of the document for the previous deployment scenario from the link below

SAP Hana TDI setup - VMware 5.1 (part 1)

 

 

 

Configure and setup for scenario 3 – Disaster Recovery (HSR)

4.jpg


An important point before setting up an HSR configuration is that we must have the same topology on both sites; the standby node is optional.


Quick summary of activities not documented in this section:

  • Deployment of vm from template
  • Adjustment of parameters
  • File system mount for new system
  • Secondary Hana installation

From a technical standpoint, several network and server considerations need to be taken care of. To realize this configuration, the following steps need to be performed:

  • Have both Hana system deploy and up and running
  • HSR configured between Hana systems
  • Take over test
  • Configure SLES11 SP3 Cluster
  • Set the SAP Hana integration

 

HSR configuration

 

Since my two Hana landscapes are up and running on my two sites, I can start the setup of the replication process.

 

On the primary site enable the replication

4-21-2015 4-25-18 PM.jpg

4-21-2015 4-26-57 PM.jpg

4-21-2015 4-36-13 PM.jpg

4-21-2015 4-38-25 PM.jpg


Now I stop the secondary site in order to register it

4-21-2015 4-43-08 PM.jpg

4-21-2015 4-44-00 PM.jpg


Note: do not provide the FQDN

4-21-2015 6-49-07 PM.jpg

 

While Site2 is starting check out Site1

4-21-2015 4-57-03 PM.jpg


After a minute the replication is “Active”

4-21-2015 5-02-07 PM.jpg

 

And the secondary system becomes unavailable for connections while all services are up and running

4-21-2015 5-02-54 PM.jpg

 

 

HSR Takeover testing

 

Now that my systems are set up for replication, I will create a user and some packages on Site1 and validate after taking over to Site2 whether all my changes have been taken into consideration.

4-21-2015 5-08-45 PM.jpg4-21-2015 5-12-50 PM.jpg

 

In order to perform the takeover, proceed from Site2 as follows

4-21-2015 5-21-16 PM.jpg

 

Disable the replication from the former primary site

4-21-2015 5-34-33 PM.jpg4-21-2015 5-34-52 PM.jpg

 

Then stop it and register it as the secondary

4-21-2015 5-39-34 PM.jpg

4-21-2015 6-08-23 PM.jpg

 

Once done, we can see that Site2 became the primary and Site1 the secondary. I can also see that the user and package created earlier are on the second host.

4-21-2015 7-19-41 PM.jpg

 

All the actions performed above through the studio can also be done with the command line tool hdbnsutil.

 

In the next part of my documentation I’ll explain how to configure SLES in order to configure Hana with it.

 

 

 

SLES and Hana setup for DR

 

In this section I'll explain how to control the failover process automatically with Hana on a SLES cluster. To make this happen, I first configure SLES as a cluster with the two servers (hana01 and hana02).

 

On the 2 servers install the “ha_sles” package

4-22-2015 5-16-50 PM.jpg

 

Once installed, run the initial cluster configuration on the first node by using the "sleha-init" command

4-24-2015 3-44-48 PM.jpg

4-24-2015 3-47-28 PM.jpg

 

Once done, go to the secondary node and register it into the cluster by running "sleha-join"

4-24-2015 4-10-05 PM.jpg

 

Then check the HAWK interface at the address provided during the first node installation; I can see my two servers clustered now

5-1-2015 7-46-36 PM.jpg

 

In order to embed Hana in my Linux cluster, I installed the "SAPHanaSR" package at the beginning. I'll then use HAWK to set it up.

4-24-2015 4-55-16 PM.jpg

4-24-2015 4-57-36 PM.jpg

 

Once the wizard is done I check the status (the red errors are because at this moment I had not yet installed the host agent)

4-24-2015 10-36-26 PM.jpg

 

Once the problem is fixed I run a test on the virtual IP defined earlier for the cluster, "192.168.0.145", and check which node I'm on

4-24-2015 10-22-15 PM.jpg

I'm on the first node

4-24-2015 10-25-13 PM.jpg

 

The configuration is completed ...

 

In my next blog I'll focus on the testing scenarios with SLES HA/DR.

 

Williams

How To Configure Network Settings for HANA System Replication

Bye bye "Standart Jobs function"


The Standard Jobs function (transaction SM36) is obsolete in SAP S/4HANA for SAP technical jobs. Instead, we have a new, easier way: the Technical Job Repository.

The Technical Job Repository carries out the scheduling of periodic technical SAP jobs in the client.

This mechanism is carried out automatically by the system. You do not need any interaction or scheduling, unlike the earlier Standard Jobs function in SM36.

The new Job Repository function is called via transaction SJOBREPO.

SJOBREPO displays an overview of all job definitions delivered and scheduled by SAP in the technical job repository.

You can use SJOBREPO to change a job definition within certain limits or to completely deactivate it.

21.PNG

There is an hourly trigger for technical background job generation. One kernel parameter is responsible for this mechanism (rdisp/job_repo_activate_time). The last execution time of these generation runs is stored in table BTCCTL.

There is a good SAP Note (#2190119) with an attached PDF document that explains the functionality and customizing possibilities of the technical job repository in SAP S/4HANA.

Have fun!

Troubleshooting SAP HANA Authorisation issues


Every now and again I receive questions regarding SAP authorisation issues. I thought it might be useful to create a troubleshooting walk through.

 

This document will deal with issues regarding analytic privileges in SAP HANA Studio.

 

So what are Privileges some might ask?

System Privilege:

System privileges control general system activities. They are mainly used for administrative purposes, such as creating schemas, creating and changing users and roles, performing data backups, managing licenses, and so on.

Object Privilege:

Object privileges are used to allow access to and modification of database objects, such as tables and views. Depending on the object type, different actions can be authorized (for example, SELECT, CREATE ANY, ALTER, DROP, and so on).

Analytic Privilege:

Analytic privileges are used to allow read access to data in SAP HANA information models (that is, analytic views, attribute views, and calculation views) depending on certain values or combinations of values. Analytic privileges are evaluated during query processing.

In a multiple-container system, analytic privileges granted to users in a particular database authorize access to information models in that database only.

Package Privilege:

Package privileges are used to allow access to and the ability to work in packages in the repository of the SAP HANA database.

Packages contain design time versions of various objects, such as analytic views, attribute views, calculation views, and analytic privileges.

In a multiple-container system, package privileges granted to users in a particular database authorize access to and the ability to work in packages in the repository of that database only.

 

For more information on SAP HANA privileges please see the SAP HANA Security Guide:

http://help.sap.com/hana/SAP_HANA_Security_Guide_en.pdf

 

 

So, you are trying to access a view, a table or simply trying to add roles to users in HANA Studio and you are receiving errors such as:

  • Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
  • pop1 (rc 2950, user is not authorized)
  • insufficient privilege: search table error: [2950] user is not authorized
  • Could not execute 'SELECT * FROM"_SYS_BIC"."<>"' SAP DBTech JDBC: [258]: insufficient privilege: Not authorized.SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

 

These errors are just examples of some of the different authorisation issues you can see in HANA Studio, and each one points towards a missing analytic privilege.

 

Once you have created all your models, you then have the opportunity to define your specific authorization requirements on top of the views that you have created.

 

So for example, we have a model in the HANA Studio schema called "_SYS_BIC:Overview/SAP_OVERVIEW".

We have a user, let's just say it's the "SYSTEM" user, and when you query this view you get the error:

 

Error during Plan execution of model _SYS_BIC:Overview/SAP_OVERVIEW (-1), reason: user is not authorized.

 

So if you are a DBA and you get a message from a team member informing you that they are getting an authorisation issue in HANA Studio, what are you to do?

How are you supposed to know the user ID? And most importantly, how are you to find out what the missing analytic privilege is?

 

So this is the perfect opportunity to run an authorisation trace through the means of the SQL console on HANA Studio.

So if you follow the below instructions it will walk you through executing the authorisation trace:

 

1) Please run the following statement in the HANA database to set the DB  trace:

alter system alter configuration ('indexserver.ini','SYSTEM') SET
('trace','authorization')='info' with reconfigure;

 

2) Reproduce the issue / execute the command again.

 

3) When the execution finishes, please turn off the trace as follows in HANA Studio:

alter system alter configuration ('indexserver.ini','SYSTEM') unset
('trace','authorization') with reconfigure;

 

 

So now you have turned the trace on, reproduced the issue and turned the trace off again.

 

You should now see a new indexserver0000000trc file created in the Diagnosis Files Tab in HANA Studio

Capture.PNG

 

So once you open the trace files, scroll to the end of the file and you should see something familiar to this:

 

e cePlanExec      cePlanExecutor.cpp(06890) : Error during Plan execution of model _SYS_BIC:onep.Queries.qnoverview/CV_QMT_OVERVIEW (-1), reason: user is not authorized
i TraceContext    TraceContext.cpp(00718) : UserName=TABLEAU, ApplicationUserName=luben00d, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:122);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:310);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:270);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:130);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:70);org.eclipse.core.commands
i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)). Current situation:
AP ObjectId(13,2,oid=3): Not granted.
i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs
e CalcEngine      cePopDataSources.cpp(00488) : ceJoinSearchPop ($REQUEST$): Execution of search failed: user is not authorized(2950)
e Executor        PlanExecutor.cpp(00690) : plan plan558676@<> failed with rc 2950; user is not authorized
e Executor        PlanExecutor.cpp(00690) : -- returns for plan558676@<>
e Executor        PlanExecutor.cpp(00690) : user is not authorized(2950), plan: 1 pops: ceJoinSearchPop pop1(out a)
e Executor        PlanExecutor.cpp(00690) : pop1, 09:57:41.755  +0.000, cpu 139960197732232, <> ceJoinSearchPop, rc 2950, user is not authorized
e Executor        PlanExecutor.cpp(00690) : Comm total: 0.000
e Executor        PlanExecutor.cpp(00690) : Total: <Time- Stamp>, cpu 139960197732232
e Executor        PlanExecutor.cpp(00690) : sizes a 0
e Executor        PlanExecutor.cpp(00690) : -- end executor returns
e Executor        PlanExecutor.cpp(00690) : pop1 (rc 2950, user is not authorized)

 

So we can see from the trace file that the user who is trying to query the view is called TABLEAU. TABLEAU is also represented by the user ID (123456).

 

So by looking at the lines:

 

i Authorization    XmlAnalyticalPrivilegeFacade.cpp(01250) : UserId(123456) is missing analytic privileges in order to access _SYS_BIC:onep.MasterData.qn/AT_QMT(ObjectId(15,0,oid=78787)).

&

i Authorization    TRexApiSearch.cpp(20566) : TRexApiSearch::analyticalPrivilegesCheck(): User TABLEAU is not authorized on _SYS_BIC:onep.MasterData.qn/AT_QMT (787878) due to XML APs

 

We can clearly see that the TABLEAU user is missing the correct analytic privileges to access _SYS_BIC:onep.MasterData.qn/AT_QMT, which is located on object 78787.

 

So now we have to find out who owns object 78787. We can find this information by querying the following:

 

SELECT * FROM OBJECTS WHERE OBJECT_OID = '<oid>';

SELECT * FROM OBJECTS WHERE OBJECT_OID = '78787';

 

Once you have found out the owner for this object, you can get the owner to Grant the TABLEAU user the necessary privileges to query the object.
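As a hedged illustration of that last step (the schema and the analytic privilege name below are placeholders, not the objects from the trace):

-- Check which privileges the user currently has (EFFECTIVE_PRIVILEGES must be filtered by user)
SELECT * FROM EFFECTIVE_PRIVILEGES WHERE USER_NAME = 'TABLEAU';

-- Typical grants, executed by the owner (or a user holding the privilege with grant option)
GRANT SELECT ON SCHEMA "_SYS_BIC" TO TABLEAU;
GRANT STRUCTURED PRIVILEGE "<analytic privilege name>" TO TABLEAU;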

 

Please be aware that if the owner of an object is _SYS_REPO, this is not as straightforward as logging in as _SYS_REPO, because that is not possible: _SYS_REPO is a technical database user used by the SAP HANA repository. The repository consists of packages that contain design-time versions of various objects, such as attribute views, analytic views, calculation views, procedures, analytic privileges, and roles. _SYS_REPO is the owner of all objects in the repository, as well as their activated runtime versions.

You have to create a .hdbrole file which grants the access (a development type of role, giving SELECT, EXECUTE, INSERT etc. access) on this schema. You then assign this role to the user who is trying to access the object.

 

 

Another option that is available for analyzing privileges issues was introduced as of SP9. This comes in the form of the Authorization Dependency Viewer. Man-Ted Chan has prepared an excellent blog on this new feature:

 

http://scn.sap.com/community/hana-in-memory/blog/2015/07/07/authorization-dependency-viewer

 

 

 

More useful information on privileges can be found in the following KBAs:

KBA #2220157 - Database error 258 at EXE insufficient

KBA #1735586 – Unable to grant privileges for SYS_REPO.-objects via SAP HANA Studio authorization management.

KBA #1966219 – HANA technical database user _SYS_REPO cannot be activated.

KBA #1897236 – HANA: Error "insufficient privilege: Not authorized" in SM21

KBA #2092748 – Failure to activate HANA roles in Design Time.

KBA #2126689 – Insufficient privilege. Not authorized

KBA #2250445 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition

 

 

For more useful Troubleshooting documentation you can visit:

 

http://wiki.scn.sap.com/wiki/display/TechTSG/SAP+HANA+and+In-Memory+Computing

 

 

Thank you,

 

Michael

