Channel: SCN : Document List - SAP HANA and In-Memory Computing

Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 4)


Hello All,

 

It's been some time since I started working with HANA and related areas such as SLT, Lumira, and Fiori.

So I thought of sharing some topics here that may come in handy.

 

Disclaimer :

1) This series is intended exclusively for beginners in HANA; HANA experts, please bear with me.

2) These are some solutions/observations that we have found handy in our projects, and I am quite sure there are multiple ways to arrive at the same result.

3) This series of documents is collaborative in nature, so please feel free to edit the documents wherever required!

4) All the points mentioned here were observed on HANA systems at revision 82 or higher.


Part 1 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 1)

Part 2 of this series can be found here -->  Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 2)

Part 3 of this series can be found here --> Tips, Experience and Lessons Learned from multiple HANA projects(TELL @ HANA - PART 3)



34) Related to HANA

Use Case: We were unable to access/open the Catalog folder of our HANA instance, and the following error message appeared.

  Capture22.JPG

un1.png

Solution: We raised the issue with SAP Internal Support.

Per their observation, all HDB processes were online and there was no obvious reason for the error. As a quick fix, they restarted the HANA DB and the error was resolved (ours was a demo system anyway).

 

Note: Some related Discussions --> https://scn.sap.com/thread/3729403

 

35) Related to HANA Studio:

Use Case: We had to uninstall the Existing version of HANA Studio.

 

Solution:

Go to Control Panel --> Uninstall the HANA Studio.

Untitled22.png

The Lifecycle Manager will ask you to enter the HANA Studio installation instance (in our case, it was 0).

Untitled33.png

After entering 0, you will get the following screen:

Untitled33.png

By pressing any key, you will get the message that the HANA studio version is successfully uninstalled.

 

36) Related to HANA

Use Case: At times, while navigating through the HANA content, we come across the following message (Contains Hidden Objects).

Capture4444.JPG

Solution:

Go to the Preferences --> HANA --> Modeler --> Content Presentation --> Check 'Show all Objects'.

Capture3333.JPG

Capture55555.JPG

 

Now the hidden objects will be displayed:

Capture2222.JPG

 

37) Related to HANA SQL

Some Useful SQL Commands:

a) Renaming a column of an already existing Table:

RENAME COLUMN "<Schema_Name>"."<Table_Name>"."<Old_Column_Name>" to "<New_Column_Name>"

 

b) Adding a new column to an already existing Table:

ALTER TABLE "<Schema_Name>"."<Table_Name>" ADD (<New_Column_Name> <DataType>)

 

c) Update a Column Entry:

UPDATE "<Schema_Name>"."<Table_Name>" SET  "<Column_Name>" = '<New_Entry>' where "<Column_Name>" = '<Old_Entry>'

 

d) IF Function:

If("Status"= 'Completed Successfully','0',if("Status"= 'In progress','1',if("Status"= 'Withdrawn','2','?')))

Capture5555.JPG
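For illustration, here is how the first three templates might look against a hypothetical table (the schema MYSCHEMA, table ORDERS, and column names below are made-up placeholders):

-- Rename an existing column
RENAME COLUMN "MYSCHEMA"."ORDERS"."ORD_STAT" TO "ORDER_STATUS";

-- Add a new column
ALTER TABLE "MYSCHEMA"."ORDERS" ADD ("ORDER_PRIORITY" NVARCHAR(10));

-- Update a column entry
UPDATE "MYSCHEMA"."ORDERS" SET "ORDER_STATUS" = 'DONE' WHERE "ORDER_STATUS" = 'OPEN';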

 

38) Related to HANA Studio:

Use Case: We got the following error while previewing a HANA analytic view:

Message: [6941] Error Executing Physical Plan : AttributeEngine: this operation is not implemented for this attribute.

66.png

Solution: The above error message points towards a field named CMPLID.

On careful observation, it was found that CMPLID had different data types in the connected attribute views and in the fact table of the analytic view shown below.

Untitled1.JPG

Related SAP Note: 1966734 - Workaround for "AttributeEngine: this operation is not implemented for this attribute type" error

 

39) Related to HANA:

Use Case: How to find the Schema Owners?

Solution: SELECT * FROM "SYS"."SCHEMAS"

Capture2222.JPG
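If only the schema names and their owners are needed, the SCHEMA_NAME and SCHEMA_OWNER columns of SYS.SCHEMAS can be selected directly:

SELECT SCHEMA_NAME, SCHEMA_OWNER FROM "SYS"."SCHEMAS" ORDER BY SCHEMA_NAME;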

 

40) Related to HANA:

Use Case: How to provide specific authorizations to some limited tables within a schema?

Solution: Object Privileges --> Schema Name --> Right Click --> Add catalog objects --> Provide the specific table names.

Untitled2222.png
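The same can also be done in SQL; a minimal sketch with made-up names (user REPORT_USER, schema MYSCHEMA, table ORDERS are placeholders):

-- Grant read access to one specific table only, instead of the whole schema
GRANT SELECT ON "MYSCHEMA"."ORDERS" TO REPORT_USER;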

 

Hope this document would be handy!

 

41) Related to sFIN

Use Case: The definition of the HANA calculation view BANKSTATEMENTMONITOR is not correct after migration to SAP Simple Finance, on-premise edition 1503.

The expected definition of the HANA view after migration is something like the following:

Capture1111.JPG

Unfortunately, due to a program error, the view definition after migration will still look something like the following:

Capture111111.JPG

 

Solution: We raised this issue with the support/development team, and they have now released the following new OSS note.

2205205 - Bank Statement Monitor: Definition Correction of HANA Calculation View BANKSTATEMENTMONITOR

 

After following the manual activities mentioned in the note, the issue will be resolved.

 

42) Related to SLT:

An SLT configuration had already been created without the multiple-usage option. (You want to switch from 1:1 to 1:N in an already existing SLT configuration.)

Now we wanted to create a new configuration with the same source and a different target, but the system did not allow this, showing the message that a configuration with the same source already exists.

 

Solution: SAP Note 1898479 - SLT replication: Redefinition of existing DB triggers.

The solution for this issue is explained in the note; the manual steps (1-9) have to be performed in the SLT system.

 

 

43) Related to HANA:

While trying to import objects, we got the following error:

HANA Error.png

 

Solution: We followed the link below to solve the issue.

HANA Inactive version error while Object Import - SAP BASIS Tuts

 


Will keep adding more points here...

 

BR

Prabhith-


SAP HANA TDI - FAQ


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach to deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information, please download this FAQ document.

View this Document

SAP HANA TDI - Overview


SAP HANA tailored data center integration (TDI) was released in November 2013 to offer an additional approach to deploying SAP HANA. While the deployment of an appliance is easy and comfortable for customers, appliances impose limitations on the flexibility of selecting the hardware components for compute servers, storage, and network. Furthermore, operating appliances may require changes to established IT operation processes. For those who prefer leveraging their established processes and gaining more flexibility in hardware selection for SAP HANA, SAP introduced SAP HANA TDI. For more information, please download this overview presentation.

View this Document

What-If analysis with Design Studio and HANA as backend.


The aim of this blog is to demonstrate the creation of a ‘What-if’ analysis report with SAP Design Studio and HANA as the backend database. Let’s consider a scenario where we have to decide between ‘Buy Now’ and ‘Buy Later’ options, based on the user inputs below.

  1. Unit Price
  2. Quantity
  3. Discount
  4. Delay Days

 

pic1.png

 

On click of the ‘Submit’ button, the values entered in the input fields (Unit Price, Quantity, Discount & Delay Days) will be passed on to the input parameters of the HANA data model.

 

pic2.png

 

The script below is written on the ‘On Click’ event of the Submit button to pass the values on to the input parameters of the HANA model.

 

pic3.png

 

pic4.png

 

Here ‘DS_WHATIF_ANALYSIS’ is the data source built in Design Studio on top of the HANA data model. Based on the input values (a, b, c, d), the HANA model determines the ‘Buy Now’ & ‘Buy Later’ values, which are shown as a chart in the Design Studio output.

pic5.png

 

HANA Data Model:

             The HANA model is based on a table with two columns containing the inflation rate for each month.

 

 

pic6.png

 

pic7.png

 

Create 4 input parameters (Day_Delay, Unit_Price, Quantity & Discount) of parameter type ‘Direct’. These are the input parameters that will receive the input values entered by the user in the report.

 

pic8.png

 

Create another input parameter ‘Inflation_Curr’ of parameter type ‘Derived from table’. This is the current inflation rate, which will be maintained in a custom table.

 

pic9.png

 

Create the calculated columns below, in the same sequence as listed:

 

pic10.png

 

  1. Calmonth_Now1:

pic11.png

 

2.Calmonth_Now

 

pic12.png

 

3.Calmonth_Later1

                Here we are adding the delay days from the input to the current date to get the Buy later period.

 

pic13.png


4. Calmonth_Later2:

pic14.png


5.Now_Flag


pic15.png


6.Later_Flag:


pic16.png

 

7.Inflation_Now:

 

pic17.png

 

 

8.Inflation_Later

 

pic18.png

 

9.Incremental_Inflation

 

pic19.png

 

10.Buy_Now:

                Buy_Now takes the discount percentage and the holding cost into consideration.

 

pic20.png

 

11.Buy_Later:

                Buy_Later takes the inflation rate for the delayed period into consideration.

 

pic21.png

 

Now let’s activate the model and test it.

 

pic22.png

 

Enter the values for the input parameters.

 

pic23.png
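For a quick test outside Design Studio, the input parameters can also be supplied directly in SQL using the PLACEHOLDER syntax. The package/view name below is a made-up placeholder; the parameter names follow the input parameters created above, and the Buy_Now/Buy_Later output columns are assumed to match the calculated column names:

SELECT "Buy_Now", "Buy_Later"
FROM "_SYS_BIC"."mypackage/CV_WHATIF_ANALYSIS"
  ('PLACEHOLDER' = ('$$Unit_Price$$', '100'),
   'PLACEHOLDER' = ('$$Quantity$$', '10'),
   'PLACEHOLDER' = ('$$Discount$$', '5'),
   'PLACEHOLDER' = ('$$Day_Delay$$', '30'));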

 

We can now see the values of the ‘Buy Now’ and ‘Buy Later’ prices. The same will be represented in Design Studio as a chart when we click the Submit button.

 

pic24.png

 

 

Design Studio report Output:

 

pic25.png

 

Based on the above output, ‘Buy Now’ seems like a better option.

SAP HANA Client Installation and Update Guide for SCN


This SAP HANA client guide describes the installation and update of the free SAP HANA client available for download on SCN and store.sap.com.

View this Document

Lessons Learnt: Bottom up TCO Analysis for HANA Platform


This document is a summary of lessons learnt from real customer experience in HANA sales cycles during Q2 and Q3 of 2015. Determining the value of any investment is of paramount importance to any IT and business team, and there are several applicable ways of getting to the dollar impact. Two such methods are briefly described first, with the value points from a HANA platform based analysis shared in detail for the bottom-up TCO analysis.

Two ways to analyze value and benefits of an IT investment

    1. Top down cost avoidance calculation – This is one of the most popular ways of assessing the business value leveraged by SAP’s IVE team regularly. As part of their survey with the customer, we determine their short and long-term initiatives for introducing new technical solutions (especially non-SAP solutions) to the business including the estimated costs and time associated with them. Due to the simplification and platform capabilities made possible with HANA platform, we are able to either eliminate or simplify these implementations compared to the customer’s current approach. As an example, one of the customers I worked with in Southern California was planning to implement 22 new dashboards for Supply Chain performance monitoring using non-SAP technology. Their SI had provided an estimate of USD 500,000 per dashboard for a total of USD 11 million cost + 3 million for analytics software purchase that was positioned to be specialized for supply chain performance monitoring. With S/4HANA based Supply Chain control tower and Integrated Business Planning for Supply Chain and other standard Supply Chain related Fiori applications, we were able to reduce the cost of this business use case substantially and absorb the general cost of commissioning S/4HANA in their environment as part of a larger project. Working through the list of priorities and projects planned for the next several years and mapping them to available SAP S/4HANA or HANA platform capabilities can result in multi-million dollar cost avoidance in addition to simplification and access to the latest innovation from SAP. Between the two methods discussed here, the top down method typically uncovers bigger stones where hidden dollar savings might be found when positioning HANA but these are estimates that may vary.
    2. Bottom up TCO calculation – The second method, and the one preferred by our customers' IT organizations, is the bottom-up TCO calculation, where we prepare a summary of their current spend on IT infrastructure, resources, maintenance, support, change management and other operational aspects and compare those with the equivalent in a post-HANA world. The savings derived by this method are very close to the real ones as they are based on formulas or proposals; however, these numbers may not be as large as those derived from the top-down calculation. Both top-down and bottom-up approaches should be considered for a holistic analysis. Bottom-up TCO analysis also requires knowledge of the customer's architecture, and it has been observed that Pre-Sales working in conjunction with IVE is the best combination to perform this analysis.

  Typical considerations in a bottom up analysis

Following is a framework to collect inputs from the customer about their current spend on their environment. A detailed Excel template is available on demand.

  1. Hardware Maintenance & Storage Costs
     - Infrastructure Hardware - SAP applications
     - Infrastructure Hardware – Others
     - SAP Infrastructure Storage Management
     - Current Data Warehouse Environment
  2. Hardware Acquisition Costs
     - Infrastructure Hardware
  3. Software Related Costs
     - Infrastructure Software
  4. Resource Costs
     - Cost of creating and maintaining customizations related to SAP data
  5. Migration / Conversion Costs
     - Total Migration / Conversion Costs

Observed Challenges and pain points in a typical OLTP and OLAP landscape

To baseline the customer’s current environment, architecture and the challenges posed by them, the following suggested points can be leveraged from the framework we developed for the customer engagements where such analyses were performed. This is an indicative list and there could be additional or different challenges that your customer might be facing which could form part of your analysis. A visual summary of these challenges is presented below and described subsequently:

 

 

  1. Limited Operational Reporting with ECC on traditional DB due to resource, performance and tuning considerations for an OLTP environment on a traditional DB
  2. High Data redundancy with multiple copies of SAP data in the landscape like data marts, data warehouses and copies of data within these due to their architecture of persistence, aggregation, indexing etc. Adding DEV, QAS and PROD environments for each of these parallel environments quickly creates an unmanageable challenge
  3. Data Governance, Quality and security challenges within data marts / copies of SAP data
  4. High TCO with shadow IT costs to maintain numerous data silos
  5. Bloated Data Warehouse footprint with multiple implicit copies of data due to internal staging, indexing, aggregates, data stores etc. that a traditional DW architecture would force them into
  6. Unsustainable performance workarounds like aggregates and indices in a traditional DW that not only add to the data footprint but do not provide a cost effective scalable model
  7. High data latency with 24 – 48 hrs. delay in data availability for reporting within the traditional DW
    1. No real time business analytics as a result and delayed data results in loss of business context under which the queries were raised in the first place
  8. Reporting tool proliferation within Business users for self-service analytics
    1. Inconsistent User Experience across multiple reporting tools from different vendors
    2. Implicit need for IT to support such business acquired tools outside their regular support plans and skills
    3. Variety of security models and further data silos created by each such tool
  9. Costly and time consuming end to end change management processes due to the multi-layered architecture
  10. Limited Change Agility due to complexity of the architecture which prevents IT from delivering changes and new content to the business in a timely manner


How does HANA platform provide value?


HANA not only provides a remedy for the above-mentioned challenges but also tremendous value to the business and IT organizations within any enterprise. These benefits are visualized in contrast to the previous illustration as follows:

 

 

Future State Architecture Benefits – Immediate Business and IT value

  1. Massive reduction of the current data footprint through
    1. Single copy of SAP ECC data across business processes and teams
    2. Data provisioning for business functions through pre-built virtualized data models (non-materialized)
  2. Simplification of the landscape by
    1. Data marts elimination and replacement with Virtual data models and views on a single copy of persisted data
    2. Large reduction of DW data volume or eventual elimination of the traditional DW based on the customer’s roadmap
  3. Easier access to SAP data for reporting with
    1. Business user friendly semantics of the model
    2. Direct operational reporting from SAP ECC Vs BW or DW
  4. Real time / Near Real time analytics on all data with
    1. Combined transactions (OLTP) and Analytics (OLAP) in a single platform on a single copy of data
    2. Analytics Simulation (What if analysis) in real time for better decision making
    3. Supply Chain visibility analytics in real time on SAP ECC with HANA enterprise as an example
    4. Category spend optimization from Ariba (Supplier spend), Concur (Travel Expense), Fieldglass (Contract labor) and ECC (Direct and Indirect procurement) with HANA enterprise depending on the customer’s preferred solutions for these scenarios
  5. Access to granular data with
    1. Line item level detailed analysis is enabled in real time
    2. Elimination of aggregates and pre-calculated totals as in a DW
  6. Improved change agility with
    1. Easier end to end change management process, fewer layers to change
  7. Compelling and consistent User Experience
    1. Any user device enabled, browser based access with beautiful and intuitive UI
    2. Elimination of the need to deploy new reporting and transactional tools
  8. Lower TCO with
    1. Simplified landscape and data footprint which results in smaller backups, faster recoveries and lower investment in infrastructure redundancy related to them
    2. Better utilization of current H/W resources especially storage
    3. Better utilization of IT and Business human resources through elimination of Shadow IT organizations
  9. Robust Data Governance, Quality and Security with enterprise grade best practices within the platform

 

End State Architecture Benefits – Full Business and IT value of SAP HANA

    1. All of the previously mentioned SAP HANA platform benefits plus
    2. Further simplification of the customer’s landscape through SAP S/4HANA
      1. Elimination of SAP BW with BCS capabilities in Integrated Planning with S/4HANA capabilities and Integrated Business Planning for Finance and Supply chain (some of it is planned functionality as of Q3 2015)
      2. Increased utilization of HANA enterprise for non-SAP data (for e.g. in DW currently) for a single source of the truth across SAP and non-SAP data sources
    3. New Business Processes introduction by leveraging HANA for IoT edition on the same platform
    4. Advanced predictive analytics processing of Clinical trial data in real / near real time in conjunction with ERP inventory data

 

The above are a few indicative examples to help move the analysis in the right direction, and the lists are a mix of well-known and less well-known points that can be leveraged as part of the framework.

 

Thanks for reading!


Sudhendu

Challenges of Moving to HANA on Cloud


Challenges of HANA on Cloud

PURPOSE

The purpose of this document is to capture lessons learned and pain points/challenges when a customer wants to build a new SAP landscape on HANA on cloud, or move an existing SAP application to HANA on cloud. In this document, all challenges are grouped into three different HANA on cloud deployment scenarios:

Migration of an existing SAP application to HANA on cloud

Greenfield implementation

AMS support for HANA on cloud

SCOPE

This applies to the Basis team involved in the above three deployment scenarios for HANA on cloud. This information can be taken as a starting point for any of these deployment scenarios.

Picture1.png

GRC integration with SAP HANA - Guide


1. PURPOSE

The purpose of this document is to define the steps required to implement the GRC on HANA plug-in, in order to integrate GRC 10.1 with the HANALIVE DB for user provisioning.

2. SCOPE

This applies to the Basis team who support the SAP GRC on HANA configuration after go-live. The procedure covers the prerequisites, installation, and post-installation configuration of the complete SAP GRC HANA plug-in setup.

This document does not cover the security setup required for user provisioning on HANALIVE through the SAP GRC system.

3. Component details

At least GRC 10.1 on an SAP NW 7.4 system is needed for the integration with HANA.

SAP GRC ACCESS CONTROL        11          sap.com              SAP ACCESS CONTROL 10.1

SAP NETWEAVER            7.4          sap.com              SAP NET WEAVER 7.4

HANACLIENT SPS 8 Rev 82 Patch level 0

HANALIVE DB SPS8 Rev 82 Patch level 0

HCO_GRC_PI SP06 Patch level 0 (GRC Plugin)

4. Install SAP HANA CLIENT on GRC source system

Download the required HANA client software compatible with the OS on which GRC is installed.

Software name -> IMDB_CLIENT100_82_0-10009663.SAR

You need sudo or root access on the source GRC system.

Fig1.png

Extract the HANA Client 82 version package in /software_repo/HANA_CLIENT directory

Fig2.png

Check the extracted files

Create the directory hdbclient under the /usr/sap/<SID> file system

Fig3.png

Run hdbinst to install HANA client

Fig4.png

Install the HANA Client with hdbinst command from ROOT user

Fig5.png

Check the installed files in the /usr/sap/<SID>/hdbclient location


    5. Set the PATH & LD_LIBRARY_PATH variables in sapenv.sh & sapenv.csh file

Fig6.png

Check the ENV from <sid>adm user by opening a new session

Fig7.png

Note – Restart/bounce the SAP GRC application


    6. Connectivity test from GRC to HANALIVE DB


          A. Check the GRC system connectivity from hdbsql prompt


Fig10.png

          B. Check connection from GRC application level

Create the connection user GRC_DBCO_PI in the HANALIVE DB with the privileges below (for the connection test you can use any existing user, e.g. SYSTEM).

Later you can create this user with the roles below (these roles become available after the plug-in is deployed in the HANALIVE DB) for the permanent connection.

Fig11.png
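A minimal SQL sketch of this user setup, assuming an example password; the role name below is only an illustrative placeholder, since the actual role is delivered with the plug-in:

-- Create the dedicated connection user (example password)
CREATE USER GRC_DBCO_PI PASSWORD "Init12345";

-- Grant the plug-in role once it is available (placeholder role name)
GRANT "sap.grc.pi.roles::GRC_CONNECTION_ROLE" TO GRC_DBCO_PI;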

Create the DB connection through the DBCO transaction code from GRC as below:

Fig12.png

Fig13.png

Fig14.png

Fig15.png

Fig16.png

7. Deploy the Delivery Unit with content for the SAP HANA plug-in for GRC integration with HANA

  1. Start HANA Studio and open the Modeler perspective.
  2. Add the system (where HCO_GRC_PI will be deployed) in HANA Studio by providing the host name, instance number, description, HANA user ID, and password.
  3. Note: Use SYSTEM or any HANA user with proper authorizations as the user ID to connect to the HANA system where HCO_GRC_PI will be deployed. Mandatory, no exceptions.
  4. After system registration is completed and the connection is verified, use the "Select System" button in the Modeler perspective to select the system you just registered in the previous step.
  5. In the same Modeler perspective, click the "Import" link located under the "Content" label, and in the dialog that opens select "Delivery Unit" under SAP HANA Content and click "Next".
  6. In the next window, select the file location "Client" and use the "Browse" button to navigate to the location where you saved the SAP HANA plug-in file downloaded from SMP.
  7. Note: You may need to use SAPCAR to extract the Delivery Unit file with extension .TGZ from the archive you downloaded from SMP.
  8. When the .TGZ file is selected, you will see the Delivery Unit details in the object import simulation.
  9. Click the "Finish" button to complete the Delivery Unit deployment and object activation process.
  10. Verify the deployment by navigating in the Modeler perspective to "SAP HANA Systems" where you registered the HANA server.
  11. Expand the Content node and the packages sap --> grc --> pi --> ac. Under the ac package you should see the packages ara with 16 SQL objects and arq with 11 SQL objects, plus db with 2 objects and roles with 1 object.
  12. At this point the Delivery Unit has been deployed successfully.

Now go to HANA Studio, log in to the HLR system, open the Modeler perspective, and select the Delivery Unit.




HANA Rules Framework


Welcome to the SAP HANA Rules Framework (HRF) Community Site!


SAP HANA Rules Framework provides tools that enable application developers to build solutions with automated decisions and rules management services, implementers and administrators to set up a project/customer system, and business users to manage and automate business decisions and rules based on their organizations' data.

In daily business, strategic plans and mission-critical tasks are implemented through countless operational decisions, either manually or automated by business applications. These days, an organization's agility in decision-making has become critical to keeping up with dynamic changes in the market.


HRF Main Objectives are:

  • To seize the opportunity of Big Data by helping developers to easily build automated decisioning solutions and/or solutions that require business rules management capabilities
  • To unleash the power of SAP HANA by turning real time data into intelligent decisions and actions
  • To empower business users to control, influence and personalize decisions/rules in highly dynamic scenarios

HRF Main Benefits are:

Rapid Application Development | Simple tools to quickly develop auto-decisioning applications

  • Built-in editors in SAP HANA Studio that allow easy modeling of the required resources for the SAP HANA Rules Framework
  • An easy to implement and configurable SAPUI5 control that exposes the framework’s capabilities to the business users and implementers

Business User Empowerment | Give control to the business user

  • Simple, natural, and intuitive business condition language (Rule Expression Language)

Untitled.png

  • Simple and intuitive UI control that supports text rules and decision tables

NewTable.png

  • Simple and intuitive web application that enables business users to manage their own rules

Rules.png    

Scalability and Performance | HRF, as a native SAP HANA solution, leverages all the capabilities and advantages of the SAP HANA platform.


For more information on HRF please contact shuki.idan@sap.com  and/or noam.gilady@sap.com

Interesting links:

SAP solutions already utilizing HRF:

Use cases of SAP solutions already utilizing HRF:

SAP Transportation Resource Planning - NEW!!!

TRP_Use_Case.jpg

SAP Fraud Management

Fraud_Use_Case.JPG

SAP hybris Marketing (formerly SAP Customer Engagement Intelligence)

hybris_Use_Case.JPG

SAP Operational Process Intelligence

OPInt_Use_Case.JPG

Students in the SAP HANA Database Department


PhD students

 

A number of PhD students have their offices directly at the SAP HANA Database Campus in Walldorf. They do research in the field of database technologies for at least three years.

 

PhD Student | Thesis Topic | University Department
Robert Brunel | Hierarchies | TU München
Ingo Müller | Aggregation in Column-Store Databases | Karlsruhe Institute of Technology
Marcus Paradies | Graph Databases | TU Dresden
Michael Rudolf | Graph Databases | TU Dresden
Elena Vasilyeva | Graph Databases | TU Dresden
Florian Wolf | Query Optimization | TU Ilmenau
Mathias Wilhelm | Proteomics | TU München
David Kernert | Linear Algebra in Databases | TU Dresden
Iraklis Psaroudakis | Efficient Query Scheduling in OLTP/OLAP Scenarios | École polytechnique fédérale de Lausanne
Ismail Oukid | Leveraging NVRAM in Main-Memory Databases | TU Dresden
Francesc Trull | Physical Design Optimization of In-Memory Databases | BarcelonaTech and TU Dresden
Matthias Hauck | – | Ruprecht-Karls-Universität Heidelberg

 

Master and Bachelor students

 

Other students are currently writing their Bachelor's or Master's theses (3 - 6 months).

 

Student | Supervisor at SAP | University
Joaquín Ossorio Castillo | Alexander Böhm, Daniel Bäumges | Universidad de Sevilla
Cornelius Ratsch | Ingo Müller | Universität Heidelberg
Markus Rupp | Franz Färber | HTW des Saarlandes
Sebastian Schlag | Ingo Müller | KIT
Jan Schlenker | Tobias Mindnich, Philipp Große | DHBW Mannheim
Panagiotis Vagena | Martin Kaufmann, Norman May | ETH Zurich
Martin Weidner | Jonathan Dees | KIT
Firas Kassem | Hannes Rauhe | TU Ilmenau
Sehrish Ijaz | Francesc Trull | BarcelonaTech

 

Alumni

 

Over the years, a large number of students have finished their PhD, master's, and bachelor's theses with us.

 

Student | Thesis Topic | University

2014
Martin Kaufmann | Time-Travel in Column Stores (PhD) | ETH Zurich
Hannes Rauhe | Co-Processors in Databases (PhD) | TU Ilmenau - Databases and Information Systems Group
Luben Alexandrov | Sparse Matrix-Matrix Multiplication | KIT
Julien Marchand | EPOS Particle Physics Analysis on HANA | Universite de Nantes

2013
Lorena Prieto Horcajo | Design and Implementation of a Simple Query Engine for a Column-Based and Space-Optimized Disc | UC3 Madrid
Óscar Rodríguez Zaloña | Dictionary Updates for a Hot Transactional Delta Buffer in a Relational Column Store | UC3 Madrid
Andrés Moreno Martínez | Storage Design for an Aged In-Memory Store for Updates and Simple Disc-Based Operations | UC3 Madrid
Jorge González Lopez | Data Structures for a Hot Transactional Delta Buffer in a Relational Column Store Database | UC3 Madrid

2012
Andreas Schuster | Compressed Data Structures for Tuple Identifiers in Column-Oriented Databases (M.Sc.) | Universität Heidelberg
Qian Li | R-MapReduce (M.Sc.) | TU Dresden
Christian Lemke | Physische Datenbankoptimierung in hauptspeicherbasierten Column-Store-Systemen (PhD) | TU Ilmenau
Amin Amiri Manjili | Time Tables (M.Sc.) | ETH Zurich
Robert Brunel | Indexing Dynamic and Temporal Hierarchies in Databases (M.Sc.) | TU München
Antoine Le Maire | Implémentation de plusieurs prototypes tournant sur la plateforme d'application XS de HDB pour preuves de concept (diplôme d'ingénieur) | ENSIMAG
Andreas Klein | The CSGridFile for Managing and Querying Point Data in Column Stores (M.Sc.) | Universität Heidelberg
Tomas Karnagel | Application of Transactional Memory in In-Memory Database Systems (Diplom) | TU Dresden
Alessandro Zala | Algorithms for Efficient Evaluation of Queries on Historic Data (M.Sc.) | ETH Zurich
Christoph Krämer | Delta-Merge with OpenCL (B.Sc.) | DHBW Mannheim
Stefan Hildenbrand | Scaling Out Column Stores: Data, Queries, and Transactions (PhD) | ETH Zurich
Tim Grouisborn | Compression-Aware Merge in Partitioned Column-Oriented In-Memory Databases (M.Sc.) | DHBW Mannheim

2011
Jochen Seidel | Job-Scheduling in Main-Memory Based Parallel Database Systems (Diplom*) | KIT
Patrick Lorenz | Evaluierung einer HashTrie Datenstruktur für den Einsatz in einer hauptspeicherbasierten Datenbank (B.Sc.) | DHBW Karlsruhe
Stefan Münch | Untersuchung und Optimierung von Algorithmen auf Many-Core / Global-Shared-Memory Hardware-Architekturen (B.Sc.) | DHBW Karlsruhe
Andreas Morf | Snapshot Isolation in Distributed Column-Stores (M.Sc.) | ETH Zurich
Alexander Frömmgen | Evaluierung komprimierter Indexverfahren im Kontext der In-Memory Computing Engine der SAP (B.Sc.) | DHBW Mannheim
Thomas Weichert | Integration eines statistischen Lernverfahrens als Operation der SAP in-memory Datenbank (Diplom*) | TU Dresden

2010
Hannes Rauhe | Konzept zur parallelen Anfrageausführung durch Abhängigkeitsanalyse in Column Stores (Diplom*) | TU Ilmenau
Robert Kubis | Porting the SAP Active Information Store to the SAP TREX-Platform (Diplom*) | TU Dresden
Matthias Männich | Workload-basierte physische Datenbankoptimierung für verteilte und spaltenbasierte Datenbanken | TU Dresden
Robert Schulze | Representing and Processing Uncertain Data in Column-oriented Databases (Diplom*) | TU Dresden

2009
Frederik Transier | Algorithms and Data Structures for In-Memory Text Search Engines (PhD) | KIT
Guido Ehlert | Evaluation von Assoziationsregeln durch Suchmaschinentechnologie (Studienarbeit**) | TU Dresden
Tobias Zahn | Verbesserung von Algorithmen zur unscharfen Suche in Unternehmensdaten in einer hauptspeicherbasierten Suchmaschine (B.Sc.) | DHBW Stuttgart
Thomas Legler | Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen (PhD) | TU Dresden
Sascha Zorn | Evaluierung der Leistungsfähigkeit und Komplexität von zwei Virtual Reality Entwicklungswerkzeugen anhand eines Rubik's Cubes (B.Sc.) | BA Karlsruhe

2008
Michael Faber | Evaluierung der Cell-Prozessor-Architektur hinsichtlich einer Performanzsteigerung der SAP-Suchmaschine TREX (Diplom*) | BA Karlsruhe
Pascal Schmidt-Volkmar | Betriebswirtschaftliche Analyse auf operationalen Daten (PhD) | Universität Duisburg-Essen
Sebastian Wolf | Entwicklung lock-freier Datenstrukturen für die SAP Search Engine TREX (Diplom*) | BA Karlsruhe
Philippe Masson | Exploring Space-Time Trade-Offs in the SAP NetWeaver Engine TREX (M.Sc.) | ETH Zurich

2007
Simon Kranig | Automatische Klassifikation von Produktbeschreibungen mit der Suchmaschine SAP Netweaver TREX der SAP AG (M.Sc.) | Hochschule Reutlingen
Tobias Mindnich | Evaluierung des Einsatz einer FPGA Karte als Co-Prozessor der SAP Suchmaschine TREX (Diplom*) | BA Karlsruhe
Christoph Weyerhäuser | Evaluierung des Einsatzes einer Grafikkarte als Co-Prozessor zur Performanzsteigerung innerhalb der SAP-Suchmaschine TREX (Diplom*) | BA Karlsruhe
Christian Kuehrt | Ranking und Aufbereitung von Assoziationsregeln (Diplom*) | TU Dresden
Oleksandr Shepil | Untersuchung der Eignung des Google Dateisystems für andere Anwendungen am Beispiel von SAP BIA (M.Sc.) | HPI

2006
Olga Mordvinova | Natürlichsprachige Suchanfragen über strukturierte Daten (M.Sc.) | Universität Heidelberg
Johannes Wöhler | Konzeption und prototypische Implementierung eines performanceoptimalen Zugriffsverfahrens auf SAP Business-Objekte in der SAP Enterprise Services Architecture (Diplom*) | TU München

2004
Marit Rautso | Evaluation von linguistischen Methoden zur Steigerung der Retrievaleffektivität in TREX (Magister*) | Universität Heidelberg

*Diplom and Magister were German equivalents to M.Sc. or M.A. before the Bologna Process in 2010.

**A Studienarbeit is a project work of roughly 3 months as part of an academic degree.

 

Open thesis topics

 

There are always a lot of open topics similar to the ones above. If you are interested in doing a thesis with us, please contact us at students-hana@sap.com.

Execution management by HANA Graphical Calculation View


SAP HANA, an appliance with a set of unique capabilities, provides a wide range of possibilities for the end user to perform data modeling. One of them is the 'Graphical Calculation View', which helps in leveraging the full potential of SAP HANA in the data modeling area.

 

This document helps in understanding how HANA graphical calculation views manage execution through the set of properties available on them, and shows how these properties can be handled effectively to control the execution flow.

 

The first property under discussion is


'KEEP FLAG' for attributes in Aggregation Node

 

This property gives the end user a chance to leverage the full capacity of the calculation engine by exploiting the nature of the aggregation node.

Let us understand how it is achieved by considering a simple example.

 

Consider a simple SALARY_ANALYSIS table as shown below; we keep it simple to make it easier to follow.

 

EMPLOYEE_TYPE | SALARY | YEAR
1 | 10000 | 2001-01-01
1 | 20000 | 2002-01-01
1 | 50000 | 2005-01-01
2 | 25000 | 2002-01-01
2 | 30000 | 2004-01-01
3 | 45000 | 2006-01-01

The above table gives the salary details for each employee type on a date basis. Let us now try to get the maximum salary for each employee category/type using a graphical calculation view.
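For readers who want to reproduce the example, a minimal sketch of the table and data (the schema name MYSCHEMA is a made-up placeholder), together with the plain SQL that expresses the intended result:

CREATE COLUMN TABLE "MYSCHEMA"."SALARY_ANALYSIS" (
    "EMPLOYEE_TYPE" INTEGER,
    "SALARY" INTEGER,
    "YEAR" DATE
);

INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (1, 10000, '2001-01-01');
INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (1, 20000, '2002-01-01');
INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (1, 50000, '2005-01-01');
INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (2, 25000, '2002-01-01');
INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (2, 30000, '2004-01-01');
INSERT INTO "MYSCHEMA"."SALARY_ANALYSIS" VALUES (3, 45000, '2006-01-01');

-- The result the calculation view should deliver, expressed as plain SQL
SELECT "EMPLOYEE_TYPE", MAX("SALARY") AS MAX_SALARY_IN_EACH_EMP_TYPE
FROM "MYSCHEMA"."SALARY_ANALYSIS"
GROUP BY "EMPLOYEE_TYPE";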

 

 

Step 1: Create a new graphical calculation view of DIMENSION Data category and add an aggregation node to the view.

 

Now add the column table created above into the newly inserted aggregation node and set EMPLOYEE_TYPE as an output column. Also add SALARY as an aggregated column to the output, rename it MAX_SALARY_IN_EACH_EMP_TYPE, and set its aggregation type to MAX.

Aggren.PNG

Step 2: Now connect the aggregation node to the default projection node and select only the MAX_SALARY_IN_EACH_EMP_TYPE column for the output of the projection node.

pROJECN.png

 

Step 3: Save and activate the view, then perform a data preview on the aggregation node. We get the maximum salary for each employee type in the underlying table.

intermediate_dp.png

 

Step 4 : Now perform Data Preview at the Calculation View level

dp.png

Here we see that the end result of the view gives only one row of data, discarding the EMPLOYEE_TYPE by which grouping was done in the aggregation node. The visualized plan for the same preview query is shown below.

vizual.png

From the above visualization of the execution plan we can see that the EMPLOYEE_TYPE column used in the aggregation node is pruned and not passed to the higher nodes when it is not queried in the end result. Thus we do not get the MAX_SALARY for each employee type; instead we get the MAX_SALARY over the entire salary list.
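To make the pruning concrete, the two query shapes differ roughly as follows (the column view name is a made-up placeholder):

-- Grouping column requested: one row per employee type
SELECT "EMPLOYEE_TYPE", "MAX_SALARY_IN_EACH_EMP_TYPE" FROM "_SYS_BIC"."mypackage/CV_SALARY_DIM";

-- Grouping column not requested: EMPLOYEE_TYPE is pruned and a single overall maximum is returned
SELECT "MAX_SALARY_IN_EACH_EMP_TYPE" FROM "_SYS_BIC"."mypackage/CV_SALARY_DIM";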


To achieve the former behavior of getting MAX_SALARY per EMPLOYEE_TYPE even when EMPLOYEE_TYPE is not part of the end query, we must enable the special property 'Keep Flag' on the attribute.


Step 5: Go back to the aggregation node used in the view, select the EMPLOYEE_TYPE column that was added as an attribute, set the property 'Keep Flag' to true, and activate the view.

keep_flag.png


Step 6 : Once the activation is completed successfully, perform data preview of the model and check the result. We now get the MAX_SALARY grouped by EMPLOYEE_TYPE although EMPLOYEE_TYPE is not part of the end query.

dp_with_kf.png


The plan visualization of the same query behaves differently after setting 'Keep Flag' to true for the EMPLOYEE_TYPE column: the grouping attribute is now propagated to the higher nodes, as shown below:


viz_kf.png

 


Hence the end user can either retain the query optimization or adapt the model to the required response by toggling the Keep Flag property of attributes in the aggregation node.


Let us now look into another related property of the aggregation node.


Always Aggregate Result in Aggregation Node


Taking the above example of SALARY_ANALYSIS, let us understand the usage of 'Always Aggregate Result' property in the aggregation node.


Step 1: Create a new graphical calculation view of CUBE type and add the SALARY_ANALYSIS table as the data source in the aggregation node. Now select EMPLOYEE_TYPE and YEAR as non-aggregated (attribute) columns and SALARY as the aggregated output column.

aggre1.png


Step 2: Save and activate the view, then execute a query on the above view involving the EMPLOYEE_TYPE and SALARY columns, as shown below.

 

always_aggre.png

 

The above select statement does not involve any client-side aggregation or GROUP BY clause; the output values here are the result of the default aggregation and group-by operations in the aggregation node, as shown below:

                                                                                  image.png

 

Let us now execute the same statement with a WHERE clause in the query and see the result.

date.png

 

Here we see that the introduction of the WHERE clause has also introduced that column into the GROUP BY clause, so the result differs from the previous query: the YEAR column is now also part of the group-by operation in the aggregation node.
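The two statements being compared look roughly like this (again with a made-up view name); note that neither uses a client-side GROUP BY:

-- Without a filter: the engine groups only by EMPLOYEE_TYPE
SELECT "EMPLOYEE_TYPE", "SALARY" FROM "_SYS_BIC"."mypackage/CV_SALARY_CUBE";

-- With a filter on YEAR: without 'Always Aggregate Result', YEAR is also pulled into the grouping
SELECT "EMPLOYEE_TYPE", "SALARY" FROM "_SYS_BIC"."mypackage/CV_SALARY_CUBE" WHERE "YEAR" = '2002-01-01';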

 

Step 3: To avoid the filter column being added to the group-by, and to group only by the columns actually requested in the query, the property 'Always Aggregate Result' can be set to true, so that the aggregation does not vary based on the filter column in the requested query.

 

always_agg_res.png

The above holds only when no client-side aggregation is specified in the requested query.

Thus, with the Always Aggregate Result mode, the execution model will be as shown below:

final.png

 

 

Step 4: Now execute the same query with the WHERE clause on the YEAR column after setting the Always Aggregate Result property.

final12.png


We now see that the grouping happens only by EMPLOYEE_TYPE which is in the requested query.



Thereby we see the usage and benefits of the two key properties, Keep Flag and Always Aggregate Result, for execution management in graphical calculation views.


Hope the provided information is useful. Any suggestion and feedback for improvement will be much appreciated.

 

Thank you

Getting the Counters right with stacked Calculation Views


Data modeling in SAP HANA provides a great amount of flexibility to the end user. One of the key capabilities in the modeling area of SAP HANA is the computation of user-defined calculations. One of the calculation variations an end user can perform in calculation views/analytic views is the 'Counter'.

 

Let us now understand the operations under the hood when a stack of calculation views is created in a project.

 

Consider a simple 'Product Details' table comprising product information along with its sales information, as shown below:

 

PRODUCT | STORE | CUSTOMER | QUANTITY | REVENUE | UNIT
Budweiser | EDEKA | IBM | 360 | 720 | BOTTL
Coke | EDEKA | ALDI | 260 | 390 | BOTTL
Coke | EBAY | IBM | 200 | 300 | BOTTL
Coke | EDEKA | IBM | 250 | 375 | BOTTL
Headset | METRO | ALDI | 2 | 120 | PACKG
Headset | EBAY | IBM | 10 | 600 | PACKG
ipad | EBAY | ALDI | 10 | 6000 | PACKG
ipad | METRO | ALDI | 10 | 6000 | PACKG
ipad | METRO | IBM | 10 | 6000 | PACKG

The above table provides the product details along with the store in which each product is available.

 

Let us now go ahead and create Graphical Calculation View that helps in getting the distinct number of stores for each product.

 

Step 1: As a first step towards achieving the above requirement, create a graphical calculation view of CUBE type and add a projection node to it. Now add the table created above as a data source to the projection node and connect its result to the default aggregation node in the view.

 

Step 2: Now in the Aggregation Node create a new 'Counter' on STORE column to get the distinct number of stores.

 

store_cnt.png
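Conceptually, such a counter corresponds to a COUNT(DISTINCT ...) in plain SQL over the base table; a minimal sketch assuming the base table is called PRODUCT_DETAILS:

SELECT "PRODUCT", COUNT(DISTINCT "STORE") AS "StoreCount"
FROM PRODUCT_DETAILS
GROUP BY "PRODUCT";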

Step 3: Proceed by setting the QUANTITY and REVENUE columns as measures with aggregation type SUM, with the counter created above as a measure having 'Count Distinct' as its aggregation type, and the remaining columns as attributes. Then save and activate the view.

Once the activation is successful, execute the SQL query below on top of the calculation view to get the distinct number of stores for each product.

 

SELECT"PRODUCT", sum("QUANTITY") AS"QUANTITY", sum("REVENUE") AS"REVENUE", sum("StoreCount") AS"StoreCount"

 

FROM"_SYS_BIC"."hanae2e.poorna/CV_DISTINCT_COUNT"  WHERE"CUSTOMER"IN ('IBM','ALDI'GROUPBY"PRODUCT"


query1.png


Thus the above result set helps us in getting the distinct number of stores for each product.

Up to this point we do not encounter any surprises in the result from the 'Counter'.

 

Let us now proceed and use the above created view as data source in another calculation view.

 

Step 4: Create another graphical calculation view of CUBE type, add the calculation view created above as the data source to its aggregation node, and take over the semantics as-is from the underlying calculation view. Then save and activate the view.

 

cv_transparent.png

 

Step 5: Now query the latter view in a similar way to how we queried the base view earlier.

 

SELECT"PRODUCT",sum("QUANTITY") AS"QUANTITY", sum("REVENUE") AS"REVENUE", sum("StoreCount") AS"StoreCount"

FROM"_SYS_BIC"."hanae2e.poorna/CV_TRANSP_FILTER_UNSET"WHERE"CUSTOMER"IN ('IBM','ALDI')GROUPBY"PRODUCT"


query2.png

Here is the surprise in the result set: StoreCount, the 'Counter' created in the base calculation view, returns a different (wrong) result (highlighted) when queried from the top calculation view.


To understand the reason behind the wrong result, we have to look at the execution plan of the query executed above:


plan_viz.png



CvTransparentFilter in the above representation is the top calculation view, which has CvCountDistinct as its data source.


Now in the Request Query on the Top Calculation View, we are querying PRODUCT and StoreCount columns along with the Filter applied on the CUSTOMER column.


Thereby, the query on the top view sends a request to the underlying view to include the CUSTOMER column as part of the requested attribute list. This ends up grouping the StoreCount value by (PRODUCT, CUSTOMER), whereas it should ideally be grouped only by PRODUCT.

As a consequence, we get the wrong result when querying from the top view.


Step 6: To overcome the above surprise in the result set, there is a property called 'Transparent Filter'. When it is flagged as true for the CUSTOMER column in the aggregation node of the top view (CvTransparentFilter) and also in the aggregation node of the underlying view (CvCountDistinct), it solves the problem by pushing the filter operation down to the lower projection node and removing CUSTOMER as a view attribute from the aggregation nodes. This in turn makes the top view work on a result set grouped only by the PRODUCT column, irrespective of the filter column present in the requested query. The execution plan below gives a better picture of this:


final_query.png

 

Step 7: Below is the 'Transparent Filter' property that needs to be flagged to get the correct counter value in the final result set when working with stacked calculation views.


TF_FLAG.png


Step 8: After setting 'Transparent Filter' to true in the aggregation node of both calculation views, a query on the top view fetches the correct result for the counter column.


SELECT"PRODUCT", sum("QUANTITY") AS"QUANTITY", sum("REVENUE") AS"REVENUE", sum("StoreCount") AS"StoreCount"FROM"_SYS_BIC"."hanae2e.poorna/CV_TRANSPARENT_FILTER"WHERE"CUSTOMER"IN ('IBM','ALDI') GROUPBY"PRODUCT"


correct_res.png



Reference for 'Transparent Filter' Flag is available in SAP HANA Modeling Guide under the section 'Create Counters'.


Hope the provided information is useful. Any suggestion and feedback for improvement will be much appreciated.

 

Thank you






Capturing and Using Multi-valued Attributes in HANA Table


SAP HANA is well known for its efficient architecture along with its hardware and software optimizations. With hardware optimization in mind, the SAP HANA database allows the developer/user to specify whether a table is to be stored column-wise or row-wise.

 

As an extension to the existing features of column tables in HANA, the user can now define columns so that they store multiple values, i.e. an array of values.

This document helps in understanding how to define and work with multi-valued attributes/columns.

 

To understand this, let us consider a simple example of storing the personal details of an employee in a single table.

 

Step 1: Create a column table that stores the employee ID, first name, last name, and phone details. The first three columns hold a single value for each employee, whereas the phone details can be a multi-valued column, i.e. each employee can have more than one phone number. To make the table structure capable of storing multiple phone numbers per employee in the same column, we must define the 'Phone' column as a multi-valued (array) column, as shown below:

 

CREATE COLUMN TABLE Employee (
      ID INT PRIMARY KEY,
      Firstname VARCHAR(20),
      Lastname VARCHAR(20),
      Phone VARCHAR(15) ARRAY -- WITHOUT DUPLICATES
)

 

The 'WITHOUT DUPLICATES' condition implies that storing the same phone number more than once in the array is not allowed.

 

 

Step 2 : Let us now insert the details of Employee in to the above created column table :

 

INSERT INTO Employee (ID, Firstname, Lastname, Phone)
VALUES (1, 'Hans', 'Peters', ARRAY('111-546-2758', '435-756-9847', '662-758-9283'))

 

The above insert statement stores single values in the first three columns of the table and an array of values in the last column.

 

Step 3: Alternatively, we can insert values into the array by selecting data from an already existing table using a nested select query.

 

INSERT INTO Employee (ID, Firstname, Lastname, Phone)
VALUES (2, 'Harry', 'Potter', ARRAY(SELECT '1245-1223-223' FROM dummy))


Step 4: We can now prepend a temporary phone number by performing a concatenation between an array value and the array column:


SELECT ID, ARRAY('123-456-4242') || Phone FROM Employee


Step 5 : Let us now go ahead and retrieve selected phone details from the existing array of phone numbers.


SELECT ID, TRIM_ARRAY(Phone, 2) FROM Employee


The TRIM_ARRAY function removes the specified number of values from the end of the array. Here it trims the last 2 phone numbers from the array and thus returns the first phone number from the Phone column for all Employees.


Step 6 : When the user wishes to access a specific phone number of an Employee, we can get that member value in the array by specifying its ordinal position, as shown in the SQL below:


SELECT ID, MEMBER_AT(Phone, 1) FROM Employee


This returns the first phone number in the array for each Employee; for Employees without phone numbers, a null value is returned.


Step 7: Conditional retrieval of value from the Array list can be done by using CASE statement as shown below:


SELECT
      CASE WHEN CARDINALITY(Phone) >= 3 THEN MEMBER_AT(Phone, 1) ELSE NULL END
FROM Employee;

 

Based on the cardinality of the array column, either the member value is retrieved or a null value is returned.


Step 8 : Let us now rotate the phone numbers in the array by 90 degrees, so that the values in the array are split into row values of a table column, using the UNNEST function as shown below:


SELECT DISTINCT Phones.Number FROM UNNEST(Employee.Phone) AS Phones (Number);

 

unnest.png

The above set of operations is supported for both numeric and non-numeric array columns in a table.
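As an illustration of the numeric case, here is a minimal sketch; the table, column and values (SensorReadings, Readings) are hypothetical and not part of the original example:

-- Hypothetical numeric array column; names are illustrative only
CREATE COLUMN TABLE SensorReadings (
      ID       INT PRIMARY KEY,
      Readings INTEGER ARRAY
);

INSERT INTO SensorReadings (ID, Readings) VALUES (1, ARRAY(10, 20, 30));

-- CARDINALITY and MEMBER_AT behave the same way as for the VARCHAR array above
SELECT ID, CARDINALITY(Readings), MEMBER_AT(Readings, 1) FROM SensorReadings;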


With that, we are done with the creation, storage and usage of multi-valued attributes in a HANA table.


Hope the provided information is useful. Any suggestion and feedback for improvement will be much appreciated.

 

Thank you




SAP HANA RDL (River Definition Language) configuration


1. Introduction

 

SAP River is a new development language and environment for developing a complete SAP HANA backend application, describing the data model, the business logic and access control within a single, coherent and integrated program specification.

Developing an application using RDL has several key benefits over traditional technologies and development models:

  1. Easier and faster development and maintenance:
    1. Declarative, focusing on application intent
    2. Expressive language constructs
    3. Flexible code specification, enabling easier separation of concerns and iterative refinement of application code.
    4. Smaller bill of materials – coherency across different layers and components of the application.
  2. Easily leveraging HANA’s power, while remaining agnostic to underlying technology containers (XS, SQLScript).
    1. Can leverage any underlying supported runtime container, without compromising on running time optimization.
    2. Application execution improves together with the underlying technology, transparently; taking advantage of new capabilities.
  3. Open to legacy and extension code, in all supported containers.

To enable RDL for HANA developers, as the SAP HANA administrator you have to complete the installation and configuration steps below.

 

2.  Import the SAP HANA RDL (River Definition Language)

 

Search the SAP Service Marketplace for the HANA RDL content

Fig2_1.png

Fig2_2.png

As our HANA DB revision is 70, we have selected SP01 for SAP HANA RDL 1.0

Download the above software (HANARDL01_10000_20011655.SAR) to your local desktop and extract it with SAPCAR into a folder as below…

Fig2_3.png

Now upload these files through HANA Studio import process.

 

Note -> We imported the HANA RDL content using the HANA content import method, but you can also use LCM to import the RDL content.

 

In HANA Studio, open SAP HANA Modeler perspective

Fig2_4.png

Click on import – then go to SAP HANA Content -> Delivery Unit

Fig2_5.png

Check client and browse into your local folder *.tgz file

Fig2_6.png

Fig2_7.png

Click on Finish

Fig2_8.png

Fig2_9.png

SAP HANA RDL package installed successfully

Results

After deploying the delivery unit, we can see the sap.hana.rdl package in the repository content

Fig2_10.png

3. Configure SAP HANA to enable SAP River

 

You must indicate that the SAP HANA server is a development machine, which has the following effects on SAP River development:

  • A developer with the DevRole role for a package automatically gets privileges to run an application defined in that package
  • A developer with the DevRole role for a package can run any part of an SAP River application no matter the access control (accessible by) defined in SAP River code

Fig3_1.png

You can check the results by opening the Administration editor and choosing the Configuration tab in HANA studio. The new parameter can be seen at indexserver.ini -> rdl -> developerrole

 

Fig3_2.png


4. Setting up permission

 

As the SAP HANA administrator you need to set up permissions for the developer so that they can create River applications.

 

Developers must be given design-time privileges to the repository packages where they will create their design-time objects, as well as runtime permissions to the schema that holds the runtime objects created when activating a River file.

 

Procedure.

    1. Create a new user for the developer, as follows:

  In the SAP HANA Systems view, right-click on the Security > Users node of your system, and choose New User.

In User Name, enter a name

In the Password and Confirm fields, enter a temporary password.

Choose in the upper-right part of the user editor.

  2. Create a package for the developer's design-time objects, as follows:
     • In the SAP HANA Systems view, right-click on the Content node of your system, and choose New > Package.
     • In Name, enter a name for the package.
     • Choose OK.

Fig4_1.png

     3. Create a schema for the developer's runtime objects. The schema must have the same name as the package, and must be owned by user _sys_repo.

  In my case I have created schema named -> MyRDLPackage

 

This must be done by executing the following SQL statement:

create schema "MyRDLPackage" owned by _sys_repo;

 

     4. Create a role with the name in the format MyRDLPackage$DevRole. The role enables the developer to access, in runtime, any River application that is activated from the package.

    • In the SAP HANA Systems view, right-click on the Security > Roles node of your system, and choose New Role.
    • In Role Name, enter the name in the correct format.
    • Choose (execute) in the upper-right part of the user editor.

Fig4_2.png
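If you prefer SQL over the Studio dialog, the same runtime role can presumably also be created with a single statement; this is only a sketch, with the role name following the format described above:

CREATE ROLE "MyRDLPackage$DevRole";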

     5. Give permissions to the developer to create, activate, and debug design-time objects in the package, as follows:

    • In the SAP HANA Systems view, go to Security > Users and open the new developer user (the user to whom you want to give these permissions).
    • In Object Privileges, add the REPOSITORY_REST procedure in the SYS schema and give the developer the EXECUTE privilege.
    • In Granted Roles, give the developer the following roles:
      • sap.hana.xs.debugger::Debugger
      • The role we created for the developer
    • In Package Privileges, add the package you created for the developer and give the developer the following privileges for the new package (both native and imported objects):
      • REPO.READ
      • REPO.EDIT
      • REPO.ACTIVATE
      • REPO.MAINTAIN
    • Choose Deploy.

Fig4_3.png

Fig4_4.png

Fig4_5.png

Now click on Deploy to finish.
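The grants from step 5 can also be scripted instead of using the Studio UI. The following is only a sketch: the user name RIVER_DEV is an example, and the package privilege names are the SQL equivalents of the Edit/Activate/Maintain privileges selected in the editor:

-- Example user name; replace with your developer user
GRANT EXECUTE ON SYS.REPOSITORY_REST TO RIVER_DEV;

-- Activated repository roles are granted through the _SYS_REPO procedure
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.xs.debugger::Debugger', 'RIVER_DEV');

-- Runtime role created for the package
GRANT "MyRDLPackage$DevRole" TO RIVER_DEV;

-- Package privileges on the developer package (native and imported objects)
GRANT REPO.READ ON "MyRDLPackage" TO RIVER_DEV;
GRANT REPO.EDIT_NATIVE_OBJECTS, REPO.ACTIVATE_NATIVE_OBJECTS, REPO.MAINTAIN_NATIVE_PACKAGES ON "MyRDLPackage" TO RIVER_DEV;
GRANT REPO.EDIT_IMPORTED_OBJECTS, REPO.ACTIVATE_IMPORTED_OBJECTS, REPO.MAINTAIN_IMPORTED_PACKAGES ON "MyRDLPackage" TO RIVER_DEV;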

HANA Data Warehousing Foundation 1.0 - Overview


This presentation shows how the SAP HANA Data Warehousing Foundation 1.0 provides specific data management tools to support large-scale SAP HANA use cases. It complements the data warehouse offerings of SAP BW powered by SAP HANA and native SAP HANA EDW.

View this Presentation

Connecting SAP HANA 1.0 to MS SQL Server 2012 for Data Provisioning


There are several Data Provisioning techniques available within SAP HANA, among them "Smart Data Access" (SDA) and "Smart Data Integration" (SDI), as well as the Hadoop Integration. This Document will cover the connectivity between SAP HANA 1.0 and MS SQL Server 2012 for SDA and SDI.

 

In the nature of things it is quite tricky to connect a Linux based Application (in our case SAP HANA 1.0) to a Microsoft Windows based Application (in our case MS SQL Server 2012). The official Documentation points you in the right direction. But, as so often, it tells only half the truth. This Document will show you the rest of the required Information.

 

1. Prequel

Please find here some Information before we start, such as Resources, Guides, Links, etc.

 

1.1 Exclusion

This Document excludes the process of installing and configuring SAP HANA and MS SQL Server 2012. Please consult the official documentation for the correct process.

 

1.2 Software Versioning

The following Software and its Versions are used:

 

- SAP HANA 1.0 SPS10 (Rev. 102) on SUSE Linux Enterprise Server 11.3 for SAP

- Microsoft SQL Server 2012 Express Edition on Microsoft Windows Server 2012 R2

- AdventureWorks DW 2012 Sample Database for Microsoft SQL Server 2012

- unixODBC Manager 2.3.0

- Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP

 

1.3 Documentation and Download

The following Documentation helped during the whole process:

 

The SAP HANA 1.0 SPS10 Administrators Guide (Page 920)

http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdf

 

The "Install Instructions" section of the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP

Download Microsoft® ODBC Driver 11 for SQL Server® - SUSE Linux Community Technology Preview from Official Microsoft Dow…

 

The unixODBC Manager 2.3.0

ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.0.tar.gz

 

Your Downloads should look like this

1.jpg

 

 

1.4 Assumption

The following Assumptions need to be considered prior to following these Instructions:

 

- SAP HANA is installed and working properly

- MS SQL Server is installed and working properly

- Access (root, sidadm/Administrator) to both Hosts is given

- Both Hosts can communicate with each other

- No Firewalls are blocking the connection

- The Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP Driver and the unixODBC Manager 2.3.0 is downloaded somewhere

- The System Requirements for the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP have been met. Please check the "System Requirements" section of the Driver for more Information.

 

 

2. Installation, Configuration and Testing

In this Chapter you will find the Installation and Configuration steps for the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP. You will also find some steps to test the Installation outside SAP HANA.

 

2.1 Installation of the unixODBC Manager 2.3.0

First we start with the Installation of the unixODBC Manager 2.3.0.

 

a. Log In as root.

 

b. Remove any older Version of the unixODBC Manager.

 

c. Extract msodbcsql-11.0.2260.0.tar.gz.

 

     tar -xvf msodbcsql-11.0.2260.0.tar.gz

 

d. Navigate to the msodbcsql-11.0.2260.0 Folder.

 

e. Start the Installation of the unixODBC Manager:

     ./build_dm.sh --download-url=file:///mnt/sapmnt/software/nonsap/linux/microsoft/unixODBC-2.3.0.tar.gz

 

f. Type "YES".

2.png

g. The Result should look as follows:

3.jpg

h. Navigate to the Folder which is highlighted in the lower red rectangle.

 

     cd /tmp/unixODBC.16887.29613.12297/unixODBC-2.3.0

 

i. Type "make install".

j. The Result should look as follows:

4.jpg

The installation of the unixODBC Manager 2.3.0 is completed successfully.


2.2 Install the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP Driver

We now continue and install the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP Driver.

 

a. Navigate to the msodbcsql-11.0.2260.0 Folder.

 

b. Verify if your SAP HANA Host completes all Prerequisites:

 

  ./install.sh verify

 

c. The Result should look like this:

5.jpg

  d. Run the Installation with:

 

     ./install.sh install

 

e. Scroll down and type "YES".

6.jpg

f. The Result should look like this:

7.jpg

The Installation of the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP has been completed successfully. The Driver has been installed to the location "/opt/microsoft/msodbcsql".

8.jpg


2.3 Configure the Microsoft ODBC Driver 11 for SQL Server - SUSE Linux CTP

Now we can go ahead and configure everything after the Installation.

 

a. Navigate to "/opt/microsoft/msodbcsql/lib64".

 

b. Copy the File "libmsodbcsql-11.0.so.2260.0" to your SAP HANA Directory.

 

     cp libmsodbcsql-11.0.so.2260.0 /usr/sap/HA1/HDB01/exe

 

c. Navigate to "/etc".

 

d. Open the File "odbc.ini" via:

 

     vi odbc.ini

 

e. Paste the following content after you have adjusted it to your Environment:

  [advwrk12]

  Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2260.0

  Description=<YOUR_DESCRIPTION>

  Server=<sqlhost.domain.com\INSTANCE,PORT>

  Port=<PORT>

  Database=<YOUR_DATABASE_NAME>

  User=

  Password=

  9.jpg

f. Save and Close the File.

 

g. Log In as sidadm.

 

h. Navigate to your Home Directory and open your Profile File.

 

     vi .profile

 

i. Create the ODBCINI Environment Variable:

 

  ODBCINI="/etc/odbc.ini"

  export ODBCINI

 

j. Save and Close the File.

 

k. Log Off as sidadm and Log In back again.

 

l. So that your SAP HANA Instance picks up this Environment Variable, you need to restart the Instance.


The configuration ended successfully.

 

2.4 Test the connectivity

In the next Step we will test the connectivity outside SAP HANA. If it doesn't run on OS Level, it will never run on SAP HANA Level.

 

a. Log In as sidadm.

 

b. Navigate to "/opt/microsoft/msodbcsql/lib64/".

 

c. Check the Library dependencies of the "libmsodbcsql-11.0.so.2260.0" file with:

 

     ldd libmsodbcsql-11.0.so.2260.0

 

d. The Result should look as follows and you should not see any "not found" entry:

10.jpg

 

e. Test the connectivity with the "iusql" command from the unixODBC Manager:

 

          iusql -v <DSN> <USERNAME> <PASSWORD>

 

f. The Result should look like this:

11.jpg

g. By typing "help" we get a list of the content from the AdventureWorks Database:

12.jpg

h. Type "quit" to leave the Application

 

The Test ran successfully.

 

 

3. Connect SAP HANA to MS SQL Server 2012

Finally we are able to connect our SAP HANA Instance to the MS SQL Server 2012 and import a Table.

 

3.1 Connect SAP HANA to MS SQL Server 2012

First we connect the two Applications.

 

a. Launch the SAP HANA Studio.

 

b. Log In to your Database.

13.jpg

c. Expand "Provisioning".

 

d. Right click "Remote Sources" and select "New Remote Source...".

14.jpg

e. Enter the required Fields:

15.jpg

Please note that "Data Source Name" must match with your DSN entry in the "odbc.ini" File. the DSN is in between the "[ ]".

f. Save your Changes.

g. You should see the following Result:

16.jpg

h. Click on "Test connection"

17.jpg

i. The Result should look as follows:

18.jpg

The connection between the two Applications has been established successfully.
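As an alternative to the Studio dialog, the remote source could presumably also be created with SQL. The statement below is only a sketch: it assumes the SDA adapter name "mssql" as described in the Administration Guide, reuses the DSN "advwrk12" from the odbc.ini above, and uses placeholder credentials.

CREATE REMOTE SOURCE "advwrk12" ADAPTER "mssql"
  CONFIGURATION 'DSN=advwrk12'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=<USERNAME>;password=<PASSWORD>';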

 

3.2 Import a Table

At the End we will import a MS SQL Server based Table to a SAP HANA Schema.

 

a. Expand your Remote Connection.

19.jpg

b. In our case expand "AdventureWorksDW2012".

 

c. Expand the "dbo" Schema. You will see all available Tables:

20.jpg

d. Right click the Table you wish and select "Add as Virtual Table".

 

e. Give it a Name, select your target Schema within SAP HANA and click "Create".

21.jpg

f. Click "OK".

22.jpg

g. Navigate to the Tables of your selected Schema.

23.jpg

h. Right Click your imported Table and select "Open Data Preview".

 

i. You will see the MS SQL Server 2012 Data inside the SAP HANA Studio:

24.jpg

The import process completed successfully.
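The same virtual table can presumably also be created with SQL instead of the context menu. A sketch, assuming the remote source "advwrk12" from above; the target schema and table names are examples only:

CREATE VIRTUAL TABLE "MYSCHEMA"."VT_DIMCUSTOMER"
  AT "advwrk12"."AdventureWorksDW2012"."dbo"."DimCustomer";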

 

Now you can continue to import more Tables and proceed with your Development.

 

NOTE: In the SQL Server Management Studio Activity Monitor you can now see our open Connection

25.jpg

 

 

4. Appendix

Find here some Appendix Information that has been gathered over Time.

 

4.1 Appendix 1 - Trace the Driver

29.10.2015

 

If you face Problems during the testing, you can trace the Microsoft ODBC 11 Driver. How you do that can be found here:

 

Data Access Tracing with the ODBC Driver on Linux

 

4.2 Appendix 2 - SAP HANA Multi Node Deployment

29.10.2015

 

In a multi Node Setup of SAP HANA you have to install, configure and test the Driver Installation on each Node.

 

4.3 Appendix 3 - odbc.ini Sample File

29.10.2015

 

In the Attachment you will find a Sample odbc.ini File. This is a very basic one but does the trick for first connectivity.

 

If you have useful hints which other Parameters should be added, please feel free to post them here.

 

Please be reminded that you have to rename the attached File from "odbc.txt.zip" to "odbc.ini" and place it in "/etc".

SAP Hana EIM (SDI/SDQ) setup


In this document I'll explain how to set up and configure SAP Hana SP10 EIM (SDI/SDQ) with a Sybase IQ database and an ERP on Hana database schema as source systems for realtime data replication.

 

I will show in detail the steps and configuration points to achieve this.

 

Order of execution

  • Create Sybase IQ database
  • Enable DP server for SDI
  • Enable Script server for SDQ
  • Install SDQ cleanse and geocode directory
  • Install DU HANA_IM_DP (Data provisioning)
  • Install and Register Data Provisioning Agent
  • Create remote source
  • Data replication and monitoring

 

Configuration required on SP9

The xsengine needs to be set to true (if not done)

The statistics server needs to be set to true (if not done)

The DU HANA_IM_ESS needs to be imported

 

 

Guide used

 

SAP Hana EIM Administration Guide SP10
SAP Hana EIM Configuration guide SP10

 

Note used

 

179583 - SAP HANA Enterprise Information Management SPS 10 Central Release Note

 

Link used

 

http://help.sap.com/hana_platform

Overview Architecture

7-15-2015 6-09-06 PM.jpg

 

Starting with Hana SP9, new features called SDI (Smart Data Integration) and SDQ (Smart Data Quality) have been introduced.

 

The purpose of these new features is to provide an integrated ETL mechanism directly in Hana on top of SDA.

 

To make it simple:

  • Smart Data Integration provides data replication and transformation services
  • Smart Data Quality provides advanced transformations to support data quality functionality

 

 

Create Sybase IQ database

 

In order to have a dedicated database to work with, I'll create my own database on the IQ server:

 

From the SCC go to administration and proceed as follows
7-10-2015 3-55-36 PM.jpg

7-10-2015 4-00-54 PM.jpg

7-10-2015 4-01-28 PM.jpg


SCC agent password: the password defined during the IQ server installation
Utility server password: auto-filled, do not change it
IQ server port: use an unused port; I already have 2 DBs running so I pick the next number
Database path: <path where the db is stored><dbname>.db
IQ main dbspace path: <path where the dbspace is stored><dbname>.iq
7-10-2015 4-06-10 PM.jpg

 

Check mark ok

7-10-2015 4-14-09 PM.jpg

 


Execute
7-10-2015 4-18-37 PM.jpg

7-10-2015 4-19-51 PM.jpg

7-10-2015 4-20-47 PM.jpg


My database is now available; I'll create 3 simple tables for this test using Interactive SQL
7-10-2015 4-49-11 PM.jpg

 

With the following syntax

7-10-2015 4-53-48 PM.jpg
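The screenshot above contains the actual statements used. As a rough sketch only, three simple tables for such a test could look like this; the table and column names are illustrative and not taken from the screenshot:

-- Hypothetical test tables on the IQ database
CREATE TABLE CUSTOMERS (ID INT PRIMARY KEY, NAME VARCHAR(40));
CREATE TABLE PRODUCTS  (ID INT PRIMARY KEY, NAME VARCHAR(40), PRICE DECIMAL(10,2));
CREATE TABLE SALES     (ID INT PRIMARY KEY, CUSTOMER_ID INT, PRODUCT_ID INT, QUANTITY INT);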

 

  

Enable Data Provisioning server for SDI

 

When Hana is installed, the DP server is not active by default. In order to be able to use SDI it needs to be enabled; the value needs to be changed to 1
7-10-2015 5-22-47 PM.jpg
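The screenshot shows the change in the Studio configuration editor; the same setting can be made with SQL, following the EIM configuration guide (adjust the layer if you configure per host):

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('dpserver', 'instances') = '1' WITH RECONFIGURE;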

 


Enable Script server for SDQ

 

To take advantage of the SDQ functionality, the script server value needs to be changed to 1
7-10-2015 5-47-54 PM.jpg
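Again, the equivalent SQL for what the screenshot shows in the configuration editor:

ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM')
  SET ('scriptserver', 'instances') = '1' WITH RECONFIGURE;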

 

 

 

Install SDQ cleanse and geocode directory

 

The Cleanse and Geocode nodes rely on reference data found in directories that we download and deploy to the SAP HANA server.

 

To download those directories go to the SMP and select the ones you need.
You can download several directories depending on what you are licensed for.
7-10-2015 8-50-57 PM.jpg

7-10-2015 8-55-08 PM.jpg

 

Once downloaded, decompress it at the following location:
/usr/sap/<SID>/SYS/global/hdb/IM/reference_data
7-11-2015 8-09-10 PM.jpg

 


Install delivery unit HANA_IM_DP (Data Provisioning)

 

This specific delivery unit needs to be downloaded and uploaded via the studio or the web interface; it will provide you with:

  • The monitoring functionality
  • The proxy application to provide a way to communicate with the DPA (cloud scenario)
  • The admin application for DPA configuration (cloud scenario)

7-11-2015 8-31-14 PM.jpg

7-11-2015 8-31-32 PM.jpg

 

Upload from the studio

7-11-2015 8-32-45 PM.jpg

 


Once done assign the monitoring role and add the view from the cockpit

7-11-2015 8-36-00 PM.jpg

7-11-2015 8-48-03 PM.jpg

 

 

Install and register Data Provisioning Agent

 

The Data Provisioning Agent is used as a bridge between Hana and source systems whose drivers can't be run from Hana (the DP server), using pre-built adapters; in some cases it allows Hana to write data back into the source system.
Using the DPA allows live replication.

 

The agent is part of the package downloaded earlier

7-11-2015 9-04-37 PM.jpg


Run and install it as needed

7-8-2015 9-40-08 PM.jpg

Once installed open the cockpit agent

7-11-2015 9-12-27 PM.jpg

 


Make sure the agent is started, then connect and register it to Hana with the necessary adapters

7-12-2015 4-46-46 PM.jpg

Let's create the source system in Hana now.

 

 



Create remote source

 

Now that my IQ db is in place and my Hana adapter is installed, I will create the source systems in SDA from which I need to get the data.

Let's start with my IQ database. Before creating the connection in SDA, install and set up the IQ client library on the Hana server. To create my connection I will use the following statement:

 

create remote source I841035 adapter iqodbc configuration 'Driver=libdbodbc16_r.so;ServerName=HANAIQ03;CommLinks=tcpip(host=usphlvm1789:1113)' with CREDENTIAL TYPE 'PASSWORD' USING 'user=I841035;password=xxxxxxx';

 

Once done refresh the provisioning

7-11-2015 10-42-09 PM.jpg

And create the ERP on Hana schema source system by selecting the adapter added earlier

7-12-2015 4-42-33 PM.jpg

7-12-2015 4-59-13 PM.jpg

 

  

And check the remote subscription from the cockpit

7-12-2015 5-47-27 PM.jpg

 

 


Data replication and monitoring

 

My remote sources are now connected; I will define which tables I want to replicate and how they should look once loaded.

Make sure your target schema has the "CREATE ANY" privilege granted to _SYS_REPO.
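A sketch of that grant, assuming a target schema named MY_TARGET_SCHEMA (replace it with your own; depending on the scenario _SYS_REPO may need further DML privileges on the schema):

GRANT CREATE ANY ON SCHEMA "MY_TARGET_SCHEMA" TO _SYS_REPO;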

 


From the development workbench go to “Editor” and select your package and create a new replication task

7-12-2015 6-16-35 PM.jpg

7-12-2015 6-18-23 PM.jpg

 

And fill in the necessary information: target schema, virtual table schema, table prefix and so on.

From the detail perspective several options are possible

 

Add/remove/edit table

7-12-2015 7-08-12 PM.jpg

 

Set filter


Define the load behavior in order to have a certain level of detail on the changes that occur on the table.

7-12-2015 6-56-01 PM.jpg

  

Partition data for better performance

7-12-2015 7-09-15 PM.jpg

Once your preferences are set, save the configuration to activate it.

7-12-2015 7-14-02 PM.jpg

 

From the monitoring side check the task log

7-12-2015 7-23-36 PM.jpg


Once activated, go to the catalog view and check whether the procedure has been created, as well as the virtual tables/views and the target table, then invoke the procedure to start the replication

7-12-2015 7-27-51 PM.jpg
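As an illustration only, invoking the generated procedure could look like the statement below. The procedure name is generated from the package and replication task name on activation, so check the catalog for the exact name; the schema, package and task names here are hypothetical:

-- Hypothetical names; look up the generated procedure in the catalog first
CALL "MY_TARGET_SCHEMA"."mypackage::RT_IQ_TABLES.START_REPLICATION";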

 

I repeated the same procedure for my ERP on Hana schema; once the procedure is invoked on the remote Hana db we can see additional tables created, as well as triggers for the relevant replicated tables

7-15-2015 1-38-18 PM.jpg

 

From the monitoring side, I added 4 additional users and we can see the apply count

7-15-2015 2-42-17 PM.jpg


The replication is now operational. In my next document I'll explain how to configure several datasources and construct one realtime report with input from different tables.

 

Williams.
