Channel: SCN : Document List - SAP HANA and In-Memory Computing

SAP HANA Cloud Integration

Welcome to SAP HANA Cloud Integration!

 

Integration is key to achieving the benefits of the cloud. SAP HANA Cloud Integration is an integration platform hosted in SAP HANA Cloud that facilitates the integration of business processes spanning different departments, organizations, or companies. It thereby enables end-to-end process integration across cloud and on-premise landscapes. It also contains a data services part that allows ETL tasks to move data efficiently and securely between on-premise systems and the cloud.

 

SAP HANA Cloud Integration comes with a completely new architecture and deployment options that are designed for, and best suited to, cloud-to-cloud and cloud-to-on-premise integration and process orchestration. Since the integration can be consumed as a service, the solution is tenant-aware and provides the highest level of security features, such as content encryption and certificate-based communication. It contains a core runtime for processing, transformation, and routing of messages, as well as out-of-the-box connectivity support (IDoc, SFTP, SOAP/HTTPS). The design time - currently used by SAP Cloud Professional Services only - is Eclipse-based.

 

As of today, SAP HANA Cloud Integration is available for customers and partners as an Application Edition for a dedicated set of SAP On-Demand solutions (SAP Customer On-demand, SuccessFactors BizX, SAP Financial Services Network). Upon purchase, predefined, ready-to-run content (packaged integration) can be made available by SAP Cloud Services in a customer-specific tenant, without the immediate need for additional hardware or integration skills on the client’s side. This drastically reduces integration project lead times and significantly lowers resource consumption. Prepackaged content can be extended by SAP Cloud Professional Services for each tenant, addressing the individual needs of every customer.

 

As of the August 2013 release, customers and partners can customize or create their own integrations using SAP HANA Cloud Integration. SAP has created prepackaged integration content for specific scenarios - for example, cloud integration with SAP ERP and CRM, or SuccessFactors BizX with HCM. This prepackaged integration content includes mappings and can be downloaded into SAP HANA Cloud Integration. Customers can then add their own mappings and create their own integrations as well.

 

SAP HANA Cloud Integration will be developed into a functionally rich cloud-based integration platform. In the very near future this will include HTML5-based UIs, allowing partners and customers to display and configure integration flows. As a customer or partner, you will also be able to develop, extend, test, deploy, and share integration content. A continuously growing set of connectors and available enterprise integration patterns will lay the foundation for this.

 

New content will be posted here, so stay tuned!

 

Resources

 

Documentation and Ramp-up Knowledge Transfer (RKT)
Further information about SAP HANA Cloud Integration can be found in the SAP Help Portal and in the RKT material on SAP Service Marketplace.

 

Related SCN Spaces and documents

 

News & Events

Recently Featured Content on SAP HANA and In-Memory Business Data Management


The Real Reason for SAP HANA

Hear it from SAP co-founder and Chairman of the Supervisory Board, Hasso Plattner himself. August 13, 2013

 

SAP HANA SPS6: New Developer Features

In this blog post from Thomas Jung, you'll learn more about the recent release of SAP HANA SPS6 (Revision 60). Developers working on SAP HANA can expect a wide variety of new and improved features, including improvements to the development workflow. Jung also includes sample code and videos. Ready to try SAP HANA yourself? Download a trial version today. August 13, 2013

 

 

Building Your First End-to-End SAP HANA Cloud "PersonsList" Web Application Scenario

The document offers developers a comprehensive way for developing an SAP HANA Cloud web application scenario. Among other things, you'll learn to set up your SAP HANA Cloud Development Environment from scratch; develop a minimal end-to-end SAP HANA Cloud Application from UI to database; publish and run each developed increment on SAP HANA Cloud local runtime for testing. August 13, 2013

 

Factors to Consider when Choosing an SAP HANA Managed Hosting Provider

Once you’ve decided to implement SAP HANA in a managed hosting environment, you’ll begin the process of evaluating potential partners. This is the first entry in a four-part blog series by Binoy James in which he examines how the size of your implementation, network needs, and application environment consolidation play into your decision. August 13, 2013

The Top 10 SAP HANA Myths Spread by Other Vendors

SAP Mentor John Appleby solves for X in the equation, "Another vendor has told me X about SAP HANA, is it true?" while Vijay Vijayasankar explains why A Faster Horse Just Won’t Cut It Anymore. July 19, 2013

 

The Process Black Box – SAP Operational Process Intelligence shines a light inside

Much has been made of what breakthroughs SAP HANA can provide to the world of business. In this blog, SAP Mentor Owen Pettiford of CompriseIT shares his take on how SAP Operational Process Intelligence is a product that delivers.

 

Join Owen in the SAP Mentor Monday webinar to find out more about SAPOPInt.

 

Also learn about SAPOPInt from Sebastian Zick, who shares his real-world experience on how CubeServ gained transparency of processes across system boundaries. July 19, 2013

 

Real-time Sentiment Ratings of Movies on SAP HANA One

SAP intern Wenjun Zhou describes using SAP HANA One to power an app that rated movies. Learn how he used the Rotten Tomatoes API to find newly released movies, along with Twitter and Sina Weibo APIs to check sentiment among U.S. and China-based movie-goers, respectively. July 19, 2013

 

SAP Solutions and Best Practices for Testing SAP HANA

Ensuring a smooth introduction and ongoing operation of applications running on SAP HANA requires a thorough and automated testing approach. This includes a selection of representative test cases that address technical, functional, and non-functional needs, as well as dedicated tools that accelerate and increase the effectiveness of testing efforts. Learn from this white paper how to develop and implement a tailored testing approach based on best practices. July 19, 2013

 

New Features in SAP HANA Studio Revision 55

In case you missed it, Lars Breddemann reports on the improvements released with revision 55 of SAP HANA Studio. July 19, 2013

 

 

Calling XSJS Service Using SAP UI5

Inspired by a previous blog on SAP HANA Extended Application Services, Vivek Singh Bhoj elaborates on how to call the XSJS service. July 19, 2013

 


SAP HANA Turns 2

Our groundbreaking in-memory platform is growing up. See how SAP HANA continues to transform and inspire business. June 21, 2013

 

Setting the Record Straight - SAP HANA vs. IBM DB2 BLU

Recently IBM announced BLU accelerator for DB2, which does query acceleration of manually selected tables in DB2 in batch mode using in-memory and columnar techniques. However, there were some unsubstantiated claims and over-reaching statements amid the excitement about BLU. SAP’s Ken Tsai provides his assessment in this blog. June 18, 2013

 

SAP HANA Commands, Command Line Tools, SQL Reference Examples for NetWeaver Basis Administrators

Andy Silvey set out on a massive undertaking to create a one-stop shop reference for HANA commands and command line tools, plus administrator's SQL queries. Lucky for us, he decided to share it here. June 18, 2013

 

 

SAP HANA Enterprise Cloud - Important step to... where?

SAP Mentor (and SAP HANA Distinguished Engineer) Tomas Krojzl speculates on what the May 7 announcement of HANA Enterprise Cloud might mean for SAP in the future. His blog inspires more than a few opinionated comments. June 17, 2013

 

 

How Does It Work Together: BW, BW on SAP HANA, Suite on SAP HANA, SAP HANA Live

Part 3 of this solid blog series continues the discussion of how diverse customer landscapes can be efficient and synergistic. Posted by Ingo Hilgefort, Part 1 introduced the different landscapes and discussed a customer site without SAP as the backend. In Part 2, Ingo discussed an SAP ERP customer and how such a customer could leverage SAP BusinessObjects BI with and without SAP HANA Live. June 6, 2013

 

 

Get SAP HANA One Premium

SAP HANA One Premium is designed for users who want to run their SAP HANA instances 24x7. See the demo of SAP HANA on AWS (on the AWS blog) and learn about its ability to connect to source data from SAP Business Suite and SAP NetWeaver BW in the cloud, SAP Enterprise Support, and the Data Services component of SAP HANA Cloud Integration. Also read this related blog by SAP Mentor Richard Hirsch and this SAPInsider article. For more information and resources, visit the SAP HANA One page. June 6, 2013

 

Understanding the HANA Enterprise Cloud:  An Initial Whiteboard

If you want to better grasp SAP’s new HANA Enterprise Cloud offering, follow SAP Mentor Richard Hirsch as he diagrams his way to a better understanding in his recent blog.

 

SAP’s Bjoern Goerke provides additional clarity across the cloud offerings in his recent blog.

 

Follow the ‘hana_enterprise_cloud’ tag for related blogs. May 17, 2013

 

Introduction to Software Development on SAP HANA - Free Online Course

By now you might have heard about the launch of openSAP – a platform that offers free online learning opportunities. Who better to instruct the first course than SAP Mentor and eLearning expert (just check our library) Thomas Jung? But this, he says, is something different than “traditional” eLearning.... May 17, 2013

 

#NBA Sabermetrics 101 (Powered by #SAPHANA)


In the SAPPHIRE NOW keynote, SAP co-CEO Bill McDermott talked about how SAP is working with professional sports to create the next-generation fan experience. In his latest blog, SCN’s own Chris Kim discusses the value SAP can bring to the sports and entertainment industry. For more on SAP and sports, check out Proof of SAP’s evolution from a B2B to a B2C company. May 17, 2013

 


Enter SAP Enterprise Cloud

Last week, SAP announced the SAP HANA Enterprise Cloud service. SAP HANA Enterprise Cloud empowers you to deploy SAP Business Suite, SAP NetWeaver BW, and custom HANA applications to support real-time business. Learn more in the blog by executive board member Vishal Sikka and watch the replay of the press event. Then read Siva Darivemula’s blog Adding a New On-Ramp to the HANA Highway for more insight. May 8, 2013

 

SAP HANA Enterprise Cloud: “Instant Value without Compromise”

SVP Mark Yolton describes what the new offering can do for customers, and shares his take on the announcement as well as some early reactions from the media. His blog is also filled with HANA resources. May 8, 2013

 

 

SAP HANA Cloud Integration Now Available

SAP HANA Cloud Integration, now available for customers and partners, is an integration platform hosted in SAP HANA Cloud that facilitates the integration of business processes spanning different departments, organizations, or companies. Mariana Mihaylova explains and provides resources in this document. May 8, 2013

 

Cloudy on the terminology? Check out the blog by Bjoern Goerke in which he clarifies recent branding around cloud.

 

New SAP HANA Development Platform Training as Massive Open Online Course (MOOC)

Register for a new online course: "Introduction to Software Development on SAP HANA." Over six weeks’ time, you’ll get an overview of the native programming capabilities of SAP HANA. Dr. Markus Schwarz, SVP SAP Education, says, "We want to give learners choice. With the new course we can reach even a broader audience." May 1, 2013

 

The Evolution of HANA One: More Than Just HANA Hosted in a Public Cloud

SAP Mentor Richard Hirsch comments on a recent SAPinsider publication about HANA One. May 1, 2013

 

 

SLT Suggestions for SAP HANA

In what he describes as a “brainstorming blog,” Thomas Krojzl writes about his ideas on how SLT replication could be improved. Don’t worry, he’s open to criticism. Seems like a good time to like it, rate it, and comment away! April 26, 2013

 

 

Bipedal Process and Data Intelligence.... Stop Hopping.... RUN!

The fact that we’re living in the age of big data is no surprise at this point, but according to Alan Rickayzen, “the age of process intelligence has just started.” Find out what he means, where HANA comes into the picture, and how solution experts, process operators, and process owners are the big beneficiaries of SAP Operational Process Intelligence. April 26, 2013

 


Why Users Need SSO in SAP HANA

With Single Sign-On (SSO), users can log in directly from any front-end application and access the SAP HANA database without providing login credentials again. This blog by Kiran Musunuru gives you details on setting up SSO with SAP HANA using Kerberos. Read more highly rated blogs on SAP HANA. April 26, 2013

 

New Publications from SAPinsider:

 

A Look Under the Hood of SAP HANA

Get a look at some of the key components of the SAP HANA platform and the features and functions that make it compelling for developers.

 

SAPinsider: SAP HANA One Illuminates New Possibilities

Learn about the instant deployment option that facilitates smaller SAP HANA projects and applications that are not easily accommodated by on-premise system procurement cycles. April 26, 2013

 

Pairing the Power of SAP HANA with the Innovative Agility of a Startup

Learn more about the Startup Focus program, how to get involved, and what it means for SAP customers. April 26, 2013

 

Best Practices for SAP HANA Data Loads

As SAP Mentor John Appleby says, “you can take the best technology in the world, create a bad design, and it will work badly. Yes, even SAP HANA can be slow.” With that in mind, check out his best practices for HANA data loading. April 10, 2013

 

Performance Guidelines for ABAP Development on the SAP HANA Database

If you’re an experienced ABAP developer, you’re probably familiar with the classic performance guidelines for using Open SQL. This raises the question of how those guidelines change in the context of SAP HANA. Eric Westenberger tackles that question. April 10, 2013

 

 

Experience the Magic: How to Setup Your Own ABAP on HANA in the Cloud

Are you an ABAP developer who can’t wait to explore the intricacies of ABAP on HANA coding? Do you want to set up a sandbox environment where you can try out things such as consuming HANA Calculation Views or Stored Procedures from ABAP programs, and learn how to accelerate your ABAP applications with HANA or build entirely new ones? Then SAP Mentor Thorsten Franz wrote this for you. April 10, 2013

 

 

Tame BIG Processes with SAP Operational Process Intelligence, Powered by SAP HANA

Read the three-part series by Harshavardhan Jegadeesan, in which he walks through "big processes," the challenges they pose, and how SAP Operational Process Intelligence, powered by SAP HANA, can help businesses overcome them. Then see how to test drive #SAPOPInt in this post. March 22, 2013

 

 

Get your hands on this HANA stuff:

March 13, 2013

 

Migrating Java Open-Source Application from Oracle to SAP HANA

The purpose of this document is to guide the process of migrating OLTP systems from a source Oracle database to a target SAP HANA database. The Java open-source application mvnForm is used in this guide to simulate an OLTP system on the source Oracle database. March 7, 2013

 

When SAP HANA met R - What's new?

Last year’s ”When SAP HANA met R - First kiss” blog has some people wondering what’s new in the integration of the SAP HANA database with R. Blag responds in his recent blog. March 4, 2013

 

Webinar: SAP Business Suite Powered by SAP HANA

On January 10, 2013, SAP announced the availability of the SAP Business Suite powered by SAP HANA – built to deliver an integrated family of business applications unifying analytics and transactions into a single in-memory platform. Join an exclusive webcast on March 14, at 15:00 CET and learn how to become a real-time business.

 

Engage with SAP HANA through Hours of Free Videos and Projects

Explore the SAP HANA Academy and watch more than 250 videos answering your what, why, and how questions about SAP HANA. March 4, 2013

 

Uncovering the Value of SAP BW Powered by HANA: Answering the Second Question

When Suite runs on HANA, BW runs on HANA, and assorted data marts run on HANA - what would be different for a business user? After talking to several customers, Vijay Vijayasankar thinks it’s the "ease of answering the second question" that is the most value-adding scenario for a business user. What is your "second question"? March 4, 2013

 


Clear the Process Fog with SAP Operational Process Intelligence

Learn about this new SAP offering designed to improve your operational efficiency. Check out the overview video on YouTube and share your thoughts on the related blog by Peter McNulty. February 21, 2013

 

Say cheese... on taking snapshots with SAP HANA

In this detailed blog, Lars Breddemann shows how to take a snapshot of your SAP HANA instance. February 21, 2013

 

 

 

 

Fast is Not a Number

You might call it a constructive rant, but why not ask the difficult questions? Jim Spath - SAP Mentor, SCN forum moderator, ASUG volunteer, employee of a company that runs SAP – does. February 21, 2013

 

The OLAP Compiler in BW on SAP HANA

Thomas Zurek blogs about a functionality he considers one of the “crown jewels” of BW on HANA. February 21, 2013

 

 

 

SAP HANA Certification Pathways

In this comprehensive blog, Vishal Soni shares his organization’s plans, which outline paths to SAP HANA certification for technical consultants and application consultants. February 18, 2013


Harness Insight from Hadoop with MapReduce and Text Data Processing Using SAP Data Services and SAP HANA

This white paper, developed at SAP Co-Innovation Lab, explores how IT organizations can use solutions from SAP and our partners to harness the value of large volumes of data stored in Hadoop, identify salient entities from unstructured textual data, and combine it with structured data in SAP HANA to leverage meaningful information in real time. February 13, 2013


New SAP TV Videos on SME Customers Using SAP HANA

Michael Nuesslein of SAP TV announces two new SAP HANA game-changer videos worth checking out. January 28, 2013

 

SAP on HANA, and Pushdown for All: News about ABAP's Adventurous Relationship with the Database

Business Suite on HANA wasn't all news to this SAP Mentor, but the January 10 announcement came with some "extremely new and noteworthy" information to Thorsten Franz, such as a shift in the ABAP programming model. January 21, 2013

 

 

The Business Suite on HANA: The Announcement and What this Means for Customers

Besides providing an overview of the January 10 announcement, SAP Mentor and SCN Moderator Luke Marson outlines customer benefits and his thoughts on what it all means. Of course there are still questions, as summarized in Carsten Nitschke’s candidly-titled “What I did not learn” blog. Don’t miss the discussion that follows.

 

As far as what’s next, SAP Mentor Richard Hirsch “connects the dots” and suggests the next big play for HANA. January 17, 2013

 

2013 - The Year of the SAP Database

With the incredible success of SAP HANA over the last 18 months and a greatly expanded database and technology portfolio, SAP is poised to surge ahead in the database market. SAP Mentor John Appleby shares his thoughts on why 2013 will be a pivotal year. January 3, 2013

 

SAP TechEd Sessions on SAP HANA

What principles guide SAP’s platform and infrastructure decisions? Watch Introduction to Our Technology Strategy and Road Map to learn about the "big bets" that SAP is making in the technology game. Then learn about Integrating SAP HANA into Your Landscape through the intelligent use of in-memory technology. You’ll gain valuable insight with this interview: From ABAPer to MOBILEr: The Evolution of SAP Developers, where SAP Mentor DJ Adams talks about developer evolution with SAP HANA, Java, Eclipse, and Cloud. Watch more sessions on SAP HANA. January 10, 2013

 

It’s Here: SAP Business Suite, Powered by SAP HANA

SAP just announced availability of the SAP Business Suite powered by SAP HANA. SCN’s own Siva Darivemula summarizes the announcement, including a blog post by SAP CTO Vishal Sikka and overview video. January 10, 2013

 

What's New in SAP HANA SPS05

Following the model of his very successful "What's New" blogs from his SAP NetWeaver Portal days, Daniel Wroblewski summarizes the new features of SAP HANA SPS05 in this blog. See the related post by Lucas Sparvieri about the SAP HANA Text Analysis capabilities of SPS05. January 3, 2013


Meet the Distinguished Engineers

SAP HANA is the fastest growing product in SAP's history, with over 400 customers after just 12 months, and there will be an unprecedented demand for SAP HANA resources. With this comes the need to understand the level of experience of a HANA engineer and their areas of expertise. The Distinguished Engineer program is an SAP-sponsored, community-led effort to address this perceived skills gap in the HANA technical community, and to recognize those with a high level of technical skills, as well as embracing those who are learning and are on their way to gaining skills. Learn more. January 3, 2013

 

New from SAPinsider Magazine:

Optimizing ABAP for SAP HANA: SAP's 3-Step Approach - In this article, you'll learn SAP's three-step approach to optimize SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP for the SAP HANA database.

 

Build Solutions Powered by SAP HANA to Transform Your Business - Read how the SAP Custom Development organization is helping customers build business-critical solutions powered by SAP HANA. January 3, 2013

 

2012

HANA Videos from SAP TechEd Live

Replay these interviews from Madrid for a variety of insights into SAP HANA:

 

 

Find more interviews in the catalog of HANA interviews from Las Vegas. November 28, 2012


SAP HANA One Innovative App Contest

Build your most innovative app on HANA One in AWS Cloud. Register by December 12, 2012. Learn more. December 3, 2012

 

 

New Space: SAP NetWeaver BW powered by SAP HANA

Follow the new space dedicated to releases of SAP NetWeaver BW on SAP HANA. November 26, 2012

 

How to Configure SAP HANA for CTS

Learn how to use the Change and Transport System (CTS) together with SAP HANA. November 26, 2012

 

SAP HANA Installation Guide – Trigger-Based Data Replication

This guide details the installation and configuration of trigger-based replication for SAP HANA – the SAP Landscape Transformation Replication Server. November 26, 2012

 

The Road to HANA for Software Developers

Developer Whisperer Juergen Schmerder published this helpful guide for developers interested in HANA, to help them find their way through the jungle of documents out there. October 31, 2012

 

 

Preparing for HANA: How to Achieve SAP Certified Technology Associate Certification

How do you prepare for the actual certification? In this blog, SAP Mentor Tom Cenens provides some helpful information on the certification and how to pass. October 31, 2012

 

 

Hit “Replay” on SAP HANA! Visit SAP TechEd Online


The SAP TechEd Live studio in Las Vegas featured interviews about SAP HANA One (productive HANA on AWS), SAP HANA Academy, RDS for HANA, the HANA Distinguished Engineer program, how startups are using HANA, and a deep dive on SAP HANA development. Check out all these and more interviews. October 26, 2012

 

SAP HANA Academy: Watch, Follow, and Learn SAP HANA from SAP and Ecosystem Partner Experts

This week at SAP TechEd, we announced the launch of the SAP HANA Academy. Access videos and exercises about everything from security, to working with data in SAP HANA Studio and SAP BusinessObjects Data Services, to integrating SAP HANA with Mobile or Analytics. Also, see the related SAP TechEd Online video. October 23, 2012

 

Better Choice – SAP BW on SAP HANA

You think you know databases? Think again. Watch the short animated video to see how you can make a better choice with SAP BW on HANA. Learn how you can better handle your exploding data volume, and why your business can benefit from real time data analysis. October 23, 2012

 

Join the first Google+ HANA Hangout!

Hang out with SAP HANA experts on Monday, October 29 at 9 am PT for a live, streamed chat about SAP HANA and big data. Participants include Aiaz Kazi, Head of Technology & Innovation Marketing at SAP; Amit Sinha, Head of Database & Technology Marketing at SAP; and special guest Irfan Khan, CTO of Sybase. October 26, 2012


What Customers Say About SAP HANA

“Fujitsu and SAP’s history of co-innovation and collaboration have now provided both very large and small customers with a scalable in-memory appliance that can quickly be implemented to dramatically increase data processing and real-time information analytics for decision making,” says Rolf Schwirz, CEO of Fujitsu Technology Solutions. Read more in SAP In-Memory Computing - Procter & Gamble Customer Testimonial, SAP HANA Helps Federal Mogul to Improve Performance, SAP HANA Helps Booan Run Better, Hilti Customer Testimonial, and Charite Improves Lives with SAP HANA. October 5, 2012

 

First Experience with ABAP for HANA – Evolution or Revolution?

Check out this excellent blog by SAP Mentor Tobias Trapp, and contribute to the new, dedicated ABAP for HANA space.

 

Read more about how co-innovation among SAP and SAP Mentors enabled optimization of the ABAP platform for HANA in Sanjay Khanna’s blog All for One and HANA for All. October 3, 2012

 

 

With All the Facts and Information Readily Available, Why Is It So Tough for Some to Speak Truth About SAP HANA?

Mark Yolton, SVP Communities & Social Media at SAP, put together this nice collection of great blogs, videos, articles, and other content that will help you understand the essence and the truth about SAP HANA. Top picks include: What Oracle Won't Tell You about SAP HANA by Steve Lucas, EVP Database & Technology at SAP, and Puneet Suppal's SAP HANA and the Pretenders. October 3, 2012


 

Turbocharge Your Applications with SAP HANA (Webinar Recording)

In this recording, learn how to add new revenue streams and monetize in-memory computing with new services and offerings, turbocharge your applications with SAP for OEM Partners, and reduce administration costs by doing ETL, materialization, aggregation, and summarization in just one step.


 

Video Blog: The State of SAP HANA - Debating Killer Apps and Skills Needs


To commemorate the first year anniversary of HANA's General Availability, Jon Reed taped this special Google Hangout with fellow SAP Mentors John Appleby, Vijay Vijayasankar, and Harald Reiter. September 14, 2012

 

 

 

How to Analyze Who Has Access to Particular Objects

Following his blogs on how to analyze security relations in an SAP HANA system, SAP Mentor Tomas Krojzl looks at the authorization relationship between users and objects. September 14, 2012

 

 

 

New Publication: A Storage Advisor for Hybrid-Store Databases

This paper, published in the Proceedings of the VLDB Endowment by SAP Research, proposes a storage advisor tool that supports database administrators in deciding between row and column data management. September 14, 2012
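
The row-versus-column decision the paper's advisor automates can be illustrated with a toy cost model. This is my own simplification for intuition only, not the paper's actual algorithm or cost functions: scans favor a column store because they read only the attributes they touch, while single-record lookups favor a row store because the whole tuple sits together.

```python
def advise_storage(total_attrs, attrs_scanned, scans_per_day, point_lookups_per_day):
    """Toy heuristic: pick the layout with the lower abstract access cost.

    Cost unit = one attribute value read. All parameter names are
    illustrative, not taken from the paper.
    """
    # Column store: a scan reads only the attributes it needs, but a
    # point lookup must stitch the tuple together from every column.
    column_cost = scans_per_day * attrs_scanned + point_lookups_per_day * total_attrs
    # Row store: a scan reads every attribute of every tuple, but a
    # point lookup fetches the whole tuple in one contiguous access.
    row_cost = scans_per_day * total_attrs + point_lookups_per_day * 1
    return "column" if column_cost < row_cost else "row"

# Analytic table: wide, scanned on few attributes -> column store wins.
print(advise_storage(total_attrs=50, attrs_scanned=3,
                     scans_per_day=1000, point_lookups_per_day=10))      # column
# OLTP table: dominated by single-record lookups -> row store wins.
print(advise_storage(total_attrs=50, attrs_scanned=3,
                     scans_per_day=5, point_lookups_per_day=10000))      # row
```

The real advisor described in the paper works on measured workloads rather than hand-supplied counts, but the trade-off it weighs is the same.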

 

Spotfire on HANA (and a bit of a comparison)

After a previous blog, “Tableau on HANA,” Ronald Konijnenburg of Logica Nederland B.V. got curious again about how a similar third-party application would behave when connecting it to HANA. September 7, 2012

 

 


From Online Gaming to Genome Analysis SAP HANA Creates New Business Opportunities

Technology itself does not give your business an edge - how you use that technology does. In her latest blog post, SAP’s Ruks Omar introduces the SAP HANA Use Case Repository, where you’ll find numerous applications for SAP HANA, and a call to share your use case. September 7, 2012

 

 

SAPInsider: SAP HANA is a Journey, Not a Destination

Since its release in 2010, SAP HANA has rapidly evolved from an appliance for accelerating analytics to an application platform — and there's still more to come. In this SAPinsider Q&A, hear from Dan Kearnan, Senior Director of Marketing for Data Warehousing and SAP HANA, who discusses this in-memory technology's impressive growth and sheds light on where it's headed. September 7, 2012

 

Free Course on In-Memory Data Management


Gain deep technical understanding of a dictionary-encoded column-oriented in-memory database and its application in enterprise computing with this new offering from the Hasso Plattner Institute (HPI).

 

The course, guided by Dr. Plattner himself, begins September 3, 2012 and requires about 3 hours of effort per week over 6 weeks. See the course overview on SCN and visit the openHPI web site for complete details. August 14, 2012
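
The core idea the course teaches - a dictionary-encoded column store - can be sketched in a few lines of Python. This is an illustrative simplification of the concept, not HPI course material: each distinct value is stored once in a sorted dictionary, and the column itself becomes a vector of small integer IDs.

```python
def dictionary_encode(column):
    """Encode a column of values as (dictionary, value IDs).

    The dictionary holds each distinct value exactly once, sorted so
    that range predicates can be evaluated on the integer IDs directly.
    """
    dictionary = sorted(set(column))
    positions = {value: vid for vid, value in enumerate(dictionary)}
    value_ids = [positions[value] for value in column]
    return dictionary, value_ids

def dictionary_decode(dictionary, value_ids):
    """Reconstruct the original column from its encoded form."""
    return [dictionary[vid] for vid in value_ids]

cities = ["Berlin", "Walldorf", "Berlin", "Potsdam", "Berlin"]
dictionary, value_ids = dictionary_encode(cities)
# Five strings collapse to a 3-entry dictionary plus small integer IDs.
print(dictionary)   # ['Berlin', 'Potsdam', 'Walldorf']
print(value_ids)    # [0, 2, 0, 1, 0]
assert dictionary_decode(dictionary, value_ids) == cities
```

With repetitive enterprise data the ID vector compresses far better than the raw strings, which is one reason the approach suits in-memory databases.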

 

 

Webinar Replay Now Available

Transform Your Business with the Real-Time Power of SAP HANA - According to a study by Oxford Economics, companies that implement real-time systems see an average 21% increase in revenue and a 19% reduction in IT cost. But what does real time really mean? August 24, 2012

 

Sign up for the August 16 Webinar: Transform Your Business with the Real-Time Power of SAP HANA

This 30-minute webinar focuses on how a real-time in memory data platform can give companies unprecedented and immediate insights into their customers, products, services and operations by enabling the analysis of huge volumes of data from virtually any source -- for improved agility and bottom line performance. August 14, 2012

 

I'm in a HANA State of Mind

https://www.experiencesaphana.com/community/blogs/blog/2012/08/01/im-in-a-hana-state-of-mind

Says SAP Mentor John Appleby, "...because once you start spotting opportunities for SAP HANA, you won't stop until you find ways to disrupt entire industries." August 1, 2012

 

SAP HANA Startup Forum Day - TLV 2012

Erez Sobol, Head of Technology Ventures at SAP Labs Israel, recaps an exciting day of learning and collaboration centered around big data and SAP technologies as part of the world-wide SAP Startup Focus Program. August 2, 2012

 

HANA and the Future of Personalized Medicine

Medicine can now be aided by tools capable of processing large volumes of data quickly. HANA is well placed to establish a strong role in the new era of personalized medicine. Mark Heffernan shares some thoughts and observations on the potential for HANA and personalized medicine. July 31, 2012

 

SAP Hana Code Jam - Why Code in SAP Hana?

SAP Mentor Tammy Powlas shares her experience at the first SAP CodeJam focused exclusively on SAP HANA. July 30, 2012

 

New Installation/Import - Including Unified Installer -  for SAP NetWeaver BW Powered by SAP HANA

SAP’s Roland Kramer provides guidance for implementing BW on SAP HANA, whether it’s a new installation or an export of an existing system with any DB export. July 27, 2012

 

 

 

Using JPA to Persist Application Data in SAP HANA

This document proposes a solution for using the Java Persistence API framework JPA to persist Java classes in HANA DB. July 18, 2012

 

Create Your Own Security Monitoring Tool

SAP Mentor Tomas Krojzl of IBM shows how to create a tool that will give you an overview of user role assignments in your SAP HANA system. Part I | Part II. July 18, 2012

 

 

 

Real-time Gross-to-Net Profitability Analysis - HANA PoC at COIL


Vistex partnered with SAP and IBM in the SAP Co-Innovation Lab to develop a solution to provide real-time profitability analytics while reducing the overall impact on transactional processing and other business operations. In this blog, Kevin Liu of SAP Co-Innovation Lab introduces the project and resulting white paper.

 

 

SAP NetWeaver AS ABAP for HANA

How does ABAP help to leverage the benefits of in-memory database technology? This document describes SAP's vision, strategy, development, and commitment to enable ABAP for SAP HANA. June 25, 2012

 

Does SAP HANA Replace BW? (Hint: No.) - Part 2

In this part 2 blog, SAP Mentor John Appleby continues where SAP EVP Steve Lucas left off in his original blog post inspired by a series of posts in the Twittersphere. June 25, 2012

 

Download the SAP HANA Essentials eBook (It's Free!)

In this video blog, SAP HANA expert Jeffrey Word introduces the new book SAP HANA Essentials eBook. June 25, 2012

 

Announcing the SAP HANA Distinguished Engineer Program

Learn about a new program from SAP that aims to promote and encourage technical expertise in SAP HANA. June 19, 2012

 

Happy First Birthday, HANA!

On the first birthday of SAP HANA, SAP Mentor Vijay Vijaysankar from IBM reflects on the progress made over the last year and looks forward to challenges and opportunities ahead. June 18, 2012

 

 

 

SAP Insider: Powered by SAP HANA

In this SAPinsider article, Scott Leatherman of SAP explains how value-added resellers, independent software vendors, and systems integration partners are helping companies that have "big data" challenges understand the scale of SAP HANA and identify areas where it can help drive their business forward. June 18, 2012

 

Get your own SAP HANA DB server on Amazon Web Services

Get your hands on your own HANA DB server using three different image sizes we made available for you. Check it out now, create your own HANA@AWS environment, and get started with SAP HANA! June 1, 2012

 

Happy Birthday to You, HANA!

On Monday, June 18, SAP HANA turns one year old, and we'd like you to be a part of the celebration. Bay Area residents may join us in Palo Alto, and everyone's welcome to join in on the virtual birthday party. Festivities start at 10 AM Pacific time. June 14, 2012

 

Understanding Look and Feel of SAP HANA STUDIO

In this document, Krishna Tangudu discusses the basic navigation for the SAP HANA Sandbox system, with an emphasis on the look and feel of the system. May 31, 2012

 

 

Rapid Deployment Solution for Banking Powered by SAP HANA Transforms your Business

To help banks speed up the adoption of SAP HANA, SAP offers Rapid Deployment Solutions for banking. Susanne Knopp highlights them in this recent blog. May 31, 2012

 

 

Getting Started with SAP HANA Developer Center

In this short tutorial, SAP Mentor and Development Expert Alvaro Tejada Galindo covers some HANA Developer Center essentials: Creation of a Developer Center account, CloudShare, creation of row and column tables, upload of CSV file to SAP HANA, creation of Stored Procedure, creation of a view, and graphic analysis using SAP HANA Studio own tools. May 9, 2012

 

 

Who's Talking About SAP HANA? Find out on this "Conversation Heat Map"

Chris Kim of SAP Global Marketing introduces a tool for visualizing social media conversations around #SAP #HANA. Check it out. May 10, 2012

 

Explore Use Cases, Quantify Business Value

In these two blogs, SAP Mentor Rukhshaan Omar previews two new decision-making tools she'll be unveiling at SAPPHIRE NOW Orlando: The HANA use case repository and the business value calculator. May 8, 2012

 

 

 

Developer's Journal: ABAP/HANA Connectivity via Secondary Database Connection

Interested in how to access HANA from your ABAP systems? In this edition of the HANA Developer's Journal, Thomas Jung explains how much can be done today when HANA runs as a secondary database for your current ABAP based systems and what development options within the ABAP environment support this scenario. April 15, 2012

 

 

SAP HANA Technical Overview – An Entry Point to Our Revolutionary Chapter

This blog introduces the latest and greatest technical overview white paper for SAP HANA. This essential document provides a general understanding of SAP HANA as of support package 3 (SP03), and covers database administration, deployment scenarios, data load architecture scenarios, and more. April 20, 2012

 

SAP HANA Scale-Out Performance Test: Blog Commentary

In his blog SAP HANA - Scale Out Performance Test Results - Early Findings, Sam Bhat of United Software provides general guidelines for people interested in considering new database technologies like SAP HANA. Josh Greenbaum (EAC) summarizes the data from SAP’s latest HANA scalability test in his blog SAP Ups the HANA Challenge following SAP’s April 10 press conference. April 20, 2012

 

Visit the SAP Newsroom for more news from the April 10 press conference. April 11, 2012

 

Inside SAP HANA: Optimizing Data Load Performance and Tuning

SAP Mentor John Appleby outlines seven steps and offers insight into the best ways to optimize data models and load performance in SAP HANA. He covers not only optimizing the data model, but testing load parameters and choosing a partition scheme carefully. April 4, 2012

 


Featured Content for SAP HANA and In-Memory Computing


The Fastest Way to Become a Real-time Business

SAP HANA Enterprise Cloud launched earlier this year following extensive work on petabyte scale SAP HANA infrastructure. Swen Conrad, who was part of the launch team, explains some of the key qualities and differentiating characteristics of the HEC offering. August 28, 2013

 

Convergence of OLTP and OLAP Analytics

We’ve seen the evolution of analytics – from operational analytics, using OLTP ABAP programs in ERP, to analytics using SAP BW and a more robust architecture and governance, storing data in an EDW (Enterprise Data Warehouse), and running OLAP reports. Read more in the blog posted by Alexandra Carvalho. August 28, 2013

 

SAP HANA Developer Guide

This guide explains how to build applications using SAP HANA, including how to model data, how to write procedures, and how to build application logic in SAP HANA Extended Application Services (SAP HANA XS). August 28, 2013

 

"Hands on SAP HANA" Webcasts

SCN presents "Hands on SAP HANA," an informal, audience-driven webcast series beginning in September. Technical experts from SAP and its partners will answer questions about planning, implementing, and using SAP HANA. The webcast agenda and presentation is based on questions and interests from the audience - beginning with the questions you ask in the following pages. August 13, 2013

 

Using HANA Modeler in the SAP HANA Cloud

SAP's vision to enable SAP HANA native development on the SAP HANA Cloud platform has reached beta: Developers can now leverage SAP HANA data-persistence and analytic model artifacts using the SAP HANA Studio Development or Modeler perspective. In this blog, SAP's Stoyan Manchev shows how to build a sample application that uses a calculation view and is consumed via Java Servlet. August 13, 2013

 

 


Upcoming SAP HANA/In-Memory Computing Webcasts and Events


Handling Slowly Changing Dimensions in HANA with SLT Data Replication


Tracking History on HANA:

Handling Slowly Changing Dimensions in HANA with SLT Data Replication

Aug 2013

 

CONTRIBUTORS

Mark Wozniak, mark.wozniak@sap.com, SAP HANA Solution Center

Abani Pattanayak, abani.pattanayak@sap.com, SAP HANA Center of Excellence

Mahesh Sardesai, mahesh.sardesai@sap.com, SAP Database Technology Services

Jody Hesch, jody.hesch@sap.com, Business Analytic Services

 

 

1. BACKGROUND:

 

A customer needs the capability to report against current (“as is”) and historical (“as was”) data for certain dimensions in their SAP HANA data mart system.

 

Dimensions whose values change over time, and whose changes are captured, are referred to as Slowly Changing Dimensions (SCDs). (Technically there are different types of SCDs, which meet different business requirements. Those described in this document are SCDs of Type 2/6.)
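For illustration, here is a minimal sketch (plain Python with hypothetical data, not part of the customer solution) of what a Type 2 dimension history looks like: each attribute change closes the current row and opens a new, open-ended one, so both “as was” and “as is” values can be queried by date.

```python
from datetime import date

OPEN_END = date(9999, 12, 31)

# History rows for one dimension member: (key, attribute, VALID_FROM, VALID_TO).
# The attribute changed on 2013-08-12, so the old version was expired
# and a new, open-ended version was inserted.
history = [
    ("MAT-1", "GROUP-A", date(1900, 1, 1), date(2013, 8, 11)),  # expired version
    ("MAT-1", "GROUP-B", date(2013, 8, 12), OPEN_END),          # current version
]

def as_of(rows, key, day):
    """Return the attribute value that was valid for `key` on `day`."""
    for k, value, valid_from, valid_to in rows:
        if k == key and valid_from <= day <= valid_to:
            return value
    return None

print(as_of(history, "MAT-1", date(2013, 1, 1)))  # "as was" -> GROUP-A
print(as_of(history, "MAT-1", date(2013, 9, 1)))  # "as is"  -> GROUP-B
```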

 

Capturing SCDs is a well-understood task for data warehouses/marts and can be handled in various ways including by SAP Business Objects Data Services via the History Preservation transformation.

 

2. PROBLEM STATEMENT:

 

In many SAP HANA data mart solutions, data is replicated from source systems into HANA via SAP LT Replication Server (SLT). SLT does not come with history preservation features out of the box. As such, there is a challenge in determining the best way to preserve history (in other words, how to track slowly changing dimensions).

 

An ideal approach should include:

- Good performance on large datasets*

- Ease of implementation

- Maintainability over time

 

 

*Some dimension tables on the customer's current retail system exceed 300 million rows, so good performance is particularly important in instances like these.

 

 

3. SOLUTION APPROACHES:

 

The following documentation describes three solution approaches to capturing Slowly Changing Dimensions in a HANA system with SLT as the replication technology. The first two approaches are outlined briefly. The third approach, which has significant advantages over the first two, is outlined in detail.

 

 

All three approaches involve creating stored procedures, executed on a scheduled basis, to populate a history table, which we also call an ‘effective’ table. The effective table has, at a minimum, the same fields as the source table plus the validity period fields VALID_FROM and VALID_TO. Its primary key consists of the source table's key fields plus VALID_TO.

 

The effective table can be modeled as an attribute view joined to the Data Foundation in an Analytic View via a Temporal Join. Please see the SP6 SAP HANA Developer’s Guide, p. 175, found at http://help.sap.com/hana_appliance#section6 for details on this join type.

 

Approach 1 involves doing full table comparisons between source and effective tables and updating the effective table accordingly. (In this and the following descriptions, ‘source table’ refers to the HANA table replicated from the source system.) This approach is the worst-performing of the three and requires the most effort to implement for each set of dimensions for which SCDs are tracked.

 

 

Approach 2 involves creating DB triggers on the source table to capture the delta for INSERT/UPDATE/DELETE operations. This approach performs better than Approach 1 and is easier to implement, but can still be challenging to maintain over time. It also has performance issues when delta volumes are high.

 

 

Approach 3 entails capturing the operation type of each source data record (INSERT, UPDATE, DELETE) and flagging records accordingly. A stored procedure then populates/updates the historical (“as was”) table with source data and the respective validity periods. (Additional fields such as a “Current Flag” can easily be included.) Only the deltas from the source table are required (except for the initial load), and no trigger is required. Also, the SLT configuration for this approach can be reused for any table. As such, Approach 3 is the best-performing, most maintainable, and easiest to implement of the three approaches.

 

 

3.1 APPROACH 1: Table Compare

Approach 1 involves the following steps (in SQLScript pseudo code) in a stored procedure to compare source and effective tables and update the effective table accordingly.

 

IF effective table is empty THEN

 

- Populate effective table with copy of source table
- Setting VALID_FROM = ‘1900-01-01’
- Setting VALID_TO = ‘9999-12-31’

 

ELSE

 

1) SELECT all records that exist in either
      a. the source table or
      b. the history table and are currently valid
   INTO a variable called change_table

 

2) UPDATE all records in the history table
      a. that correspond to records in change_table
      b. setting VALID_TO = yesterday (timestamp)

 

3) INSERT all records into the history table
      a. that correspond to records in change_table on all field values (not just keys)
      b. setting VALID_FROM to today
      c. setting VALID_TO to ‘9999-12-31’
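The pseudo code above can be simulated outside the database. The sketch below (plain Python with hypothetical table shapes; only the update path is shown, omitting pure inserts and deletes) mirrors the expire-then-insert pattern:

```python
from datetime import date, timedelta

OPEN_END = date(9999, 12, 31)
today = date(2013, 8, 20)
yesterday = today - timedelta(days=1)

# source: {key: attributes}; history rows: [key, attrs, VALID_FROM, VALID_TO]
source = {"M1": "A", "M2": "C"}  # M2 changed from B to C
history = [
    ["M1", "A", date(1900, 1, 1), OPEN_END],
    ["M2", "B", date(1900, 1, 1), OPEN_END],
]

if not history:
    # Initial load: copy the source with an open-ended validity period.
    history = [[k, v, date(1900, 1, 1), OPEN_END] for k, v in source.items()]
else:
    # 1) Records whose current version differs from the source (the change_table).
    changed = {k for k, v in source.items()
               for row in history
               if row[0] == k and row[3] == OPEN_END and row[1] != v}
    # 2) Expire the current version of every changed record.
    for row in history:
        if row[0] in changed and row[3] == OPEN_END:
            row[3] = yesterday
    # 3) Insert the new version with an open-ended validity period.
    for k in changed:
        history.append([k, source[k], today, OPEN_END])

# M2 now has an expired row (B) and a current row (C); M1 is untouched.
```

Against 300-million-row dimension tables, this full comparison in step 1 is exactly what makes Approach 1 expensive.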

 

3.2 APPROACH 2: DB Triggers

 

Step 1

 

Add a trigger to the source table that is executed ON INSERT and inserts the corresponding record into the effective table.

 

Step 2

 

Add a trigger to the source table that is executed ON DELETE and updates the corresponding record in the effective table (i.e. ‘expires’ that record by setting VALID_TO = yesterday). This should reference the old state of the record (REFERENCING OLD ROW <myoldrow>).

 

Step 3

 

Add a trigger to the source table that is executed ON UPDATE and

 

a. Updates the corresponding record in the effective table (i.e. ‘expires’ that record by setting VALID_TO = yesterday). This should reference the old state of the record (REFERENCING OLD ROW <myoldrow>).

 

b. Inserts a new record into the effective table (VALID_FROM = today and VALID_TO = ‘9999-12-31’). This should reference the new state of the record (REFERENCING NEW ROW <mynewrow>).

 

Step 4

 

Create a stored procedure to initialize the effective table with the following logic.

 

IF effective table is empty THEN

-   Populate effective table with copy of source table

-   Setting VALID_FROM = ‘1900-01-01’

-   Setting VALID_TO = ‘9999-12-31’
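Conceptually, the three triggers are per-row callbacks on the source table. The following Python simulation (illustrative only; real HANA triggers are written in SQLScript) shows the effect of an INSERT followed by an UPDATE on the effective table:

```python
from datetime import date, timedelta

OPEN_END = date(9999, 12, 31)
today = date(2013, 8, 20)
yesterday = today - timedelta(days=1)

history = []  # effective-table rows: [key, attrs, VALID_FROM, VALID_TO]

def current_row(key):
    """The single open-ended (current) version of a key."""
    return next(r for r in history if r[0] == key and r[3] == OPEN_END)

def on_insert(new):
    # ON INSERT trigger: open a new version.
    # (The initial load, done by the stored procedure above, would use
    # VALID_FROM = 1900-01-01 instead of today.)
    history.append([new[0], new[1], today, OPEN_END])

def on_delete(old):
    # ON DELETE trigger: 'expire' the current version.
    current_row(old[0])[3] = yesterday

def on_update(old, new):
    # ON UPDATE trigger: expire the old state, insert the new one.
    on_delete(old)
    on_insert(new)

on_insert(("M1", "A"))                # source INSERT
on_update(("M1", "A"), ("M1", "B"))   # source UPDATE
```

Because every trigger fires per changed row, mass inserts, updates, and deletes multiply this work, which is the drawback measured next.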

 

One of the drawbacks of Approach 2 is the performance of the triggers when large sets of data are changed in the source table (mass inserts / updates / deletes).

 

We ran DELETE statements with filters that would impact different numbers of rows in the source table and arrived at the following measurements.

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X133''

successfully executed in 790 ms 578 µs  (server processing time: 753 ms 555 µs) - Rows Affected: 500

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X520''

successfully executed in 25.422 seconds  (server processing time: 23.823 seconds) - Rows Affected: 7394

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X220''

successfully executed in 31.323 seconds  (server processing time: 30.734 seconds) - Rows Affected: 15011

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X521''

successfully executed in 7:22.061 minutes  (server processing time: 7:20.096 minutes) - Rows Affected: 85827

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X423''

successfully executed in 3:26:14.500 hours  (server processing time: 3:26:10.385 hours) - Rows Affected: 303485

 

ss1.PNG

As you see above, if the chunk size (number of records processed/deleted in one go) is 20K – 50K records, this approach works fine. However, if more than 50K records are updated, deleted, or inserted in one go, this approach will not work.

 

For this reason and others discussed already, we recommend the following approach (Approach 3).

 

3.3 APPROACH 3: SLT Configuration

 

Step 1

 

Define Table Deviation on the source table (again, ‘source’ refers to the HANA replicated table, and ‘target’ table would be that which is transformed, i.e. what we’ve been calling the ‘effective’ table. From an SLT perspective, however, the HANA table is the ‘target’ table).

 

ss2.PNG

 

Define the table deviation using the Edit Table Structure option in the IUUC_REPL_TABSTG tab in SLT.

 

As shown in the next screenshot, add two fields (ZZSLT_FLAG & ZZTIMESTAMP) to store the record type and timestamp. This can be configured using the Table Deviation option.

a.  ZZSLT_FLAG : NVARCHAR(1): To store record type (‘D’ - DELETE, ‘U’ – UPDATE, ‘I’ – INSERT/New)

b. ZZTIMESTAMP: NVARCHAR(14): To store the timestamp

 

ss3.PNG

Step 2

 

Define a transformation rule for the table (YMARC in this example) in the IUUC_***_RULE_MAP tab.

 

Export Field Name: MANDT. We chose the first field of the table (MANDT); you can use any field from the table.

 

Import Parameter 1: ‘<WA_R_YMARC>’ is the internal name used to address the receiver work area.

 

Insert Include Name: This is the ABAP include we need to create for the transformation.

 

 

Step 3

 

Create ABAP include using t-code SE38 in the SLT system.

 

ss4.PNG

 

*&---------------------------------------------------------------------*
*&  Include           ZIUUC_DELETE
*&---------------------------------------------------------------------*

FIELD-SYMBOLS: <ls_data>       TYPE any,
               <lv_operation>  TYPE any,
               <lv_delete>     TYPE any,
               <lv_timestamp>  TYPE any.

ASSIGN (i_p1) TO <ls_data>.

DATA tstamp LIKE tzonref-tstamps.
DATA d TYPE d VALUE '20121224'.
DATA t TYPE t VALUE '235500'.

d = sy-datum.
t = sy-uzeit.

SET COUNTRY 'US'.

CONVERT DATE d TIME t INTO
        TIME STAMP tstamp TIME ZONE 'UTC-5 '.

"Assign the timestamp to the ZZTIMESTAMP field
ASSIGN COMPONENT 'ZZTIMESTAMP' OF STRUCTURE <ls_data> TO <lv_timestamp>.
IF sy-subrc = 0.
  <lv_timestamp> = tstamp.
ENDIF.

ASSIGN COMPONENT 'IUUC_OPERAT_FLAG' OF STRUCTURE <ls_data> TO <lv_operation>.

"For the delete operation
IF sy-subrc = 0 AND <lv_operation> = 'D'.

  "Change this to an update operation - so the record will not be deleted
  <lv_operation> = 'U'.

  "Update ZZSLT_FLAG to store 'D' (for Delete)
  ASSIGN COMPONENT 'ZZSLT_FLAG' OF STRUCTURE <ls_data> TO <lv_delete>.
  IF sy-subrc = 0.
    <lv_delete> = 'D'.
  ENDIF.

"For all other operations
ELSEIF sy-subrc = 0.

  "Update ZZSLT_FLAG to store the appropriate record type
  ASSIGN COMPONENT 'ZZSLT_FLAG' OF STRUCTURE <ls_data> TO <lv_delete>.
  IF sy-subrc = 0.
    <lv_delete> = <lv_operation>.
  ENDIF.

ENDIF.

 

 

Step 4

 

Set-up replication of the Table using HANA studio.

ss5.PNG

 

Step 5

 

Delete and update some records from the YMARC table.

 

ss6.PNG

As you see above, the delete records and update records are reflected in the ZZSLT_FLAG.

 

Step 6

 

Create a stored procedure to update the history table. Details of the stored procedure will be specific to the customer solution. At a minimum, the following steps should be included, in addition to ‘housekeeping’ code such as error handling.

 

-- STEP 1: Initial load
-- If the target table is empty, load the source table into the history table,
-- setting VALID_FROM = ‘1900-01-01’ and VALID_TO = ‘9999-12-31’.

-- STEP 2: Record expiration
-- UPDATE the history table, setting VALID_TO = yesterday, for all records in the history
-- table that correspond to a record from the source table WHERE ZZSLT_FLAG IN (‘D’, ‘U’).
-- Make sure the source table is filtered against ZZTIMESTAMP*.
-- This expires deleted and updated records.

-- STEP 3: Inserting new records
-- INSERT source data into the history table, setting VALID_FROM = today and VALID_TO =
-- ‘9999-12-31’, for source records WHERE ZZSLT_FLAG <> ‘D’.
-- Make sure the source table is filtered against ZZTIMESTAMP*.
-- This insert ensures that new records (INSERT) are captured in the history table, and the
-- latest version of current records (UPDATE) as well.

 

* It is recommended that additional housekeeping code be included in your stored procedure framework to capture the dates of successful procedure executions. The above logic can then be filtered via ZZTIMESTAMP > :last_successful_execution_date, where last_successful_execution_date is a scalar variable populated from a query against a procedure execution status table.
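The three steps can also be expressed outside SQLScript. This Python simulation (hypothetical row layout; flag values as defined in Step 1) shows how ZZSLT_FLAG drives the history update:

```python
from datetime import date, timedelta

OPEN_END = date(9999, 12, 31)
today = date(2013, 8, 20)
yesterday = today - timedelta(days=1)

# Replicated source deltas since the last run: (key, attrs, ZZSLT_FLAG)
deltas = [
    ("M1", "A2", "U"),  # updated record
    ("M2", None, "D"),  # deleted record (kept in source, flagged 'D')
    ("M3", "C1", "I"),  # newly inserted record
]

# History rows: [key, attrs, VALID_FROM, VALID_TO]
history = [
    ["M1", "A1", date(1900, 1, 1), OPEN_END],
    ["M2", "B1", date(1900, 1, 1), OPEN_END],
]

# STEP 2: expire the current versions of updated and deleted records.
expired_keys = {k for k, _, flag in deltas if flag in ("D", "U")}
for row in history:
    if row[0] in expired_keys and row[3] == OPEN_END:
        row[3] = yesterday

# STEP 3: insert new versions for everything except deletes.
for k, attrs, flag in deltas:
    if flag != "D":
        history.append([k, attrs, today, OPEN_END])
```

Note that deleted records remain in the replicated table flagged ‘D’ (the ABAP include converted the delete into an update), which is why the history procedure can still see them.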

 

 

Step 7: Schedule stored procedure execution

 

If SAP Business Objects Data Services (BODS) is part of your landscape, procedures can be scheduled daily (or as often as required) via BODS.

 

If BODS is not an option, the following link from the SAP HANA Academy demonstrates how to schedule data loads via the cron scheduler. This approach can be adapted to schedule your stored procedure calls.

 

http://www.saphana.com/docs/DOC-2909

 

 

4. OPEN QUESTIONS:

 

There are a few open issues with all three approaches that need to be considered before applying these solutions to a production environment.

 

1.    As you can see, the granularity of the effective table is at the day level. So what happens if there are multiple changes to a record in a single day? Do you have to track all of these changes or just the last one?

2.    What happens if something goes wrong with the SLT server and you need to recreate the configuration and replicate the table again? You have to make sure the timestamp delta pointer is appropriately adjusted.

 

 

5. CONCLUSION:

 

Please note that each of the above approaches may work, depending on the data volume in the table and the number of delta records.

 

For example, Approach 1 may work just fine for low data volumes, say for the site master (T001W), whose typical volume is 100 – 10K rows. Approach 2 may be an even better fit for this scenario, since there is no need to schedule a procedure on a daily basis; however, its triggers need to be recreated if the source table is dropped for any reason.

 

Approach 2 will also work for high-volume scenarios, but only if the delta records flow to HANA continuously, so that the volume does not exceed 10K records per minute at any particular time.

 

Approach 3 will work in almost all scenarios. In short, Approach 2 is better than Approach 1, but Approach 3 is the best option of the three.

 

 

Publications of the SAP HANA database department


This is a list of selected publications by the SAP HANA database team.

 

2013

  • Sebastian Breß, Felix  Beier, Hannes Rauhe, Kai-Uwe Sattler, Eike Schallehn, Gunter Saake,  Efficient co-processor utilization in database query processing,  Information Systems, Volume 38, Issue 8, November 2013, Pages 1084-1096
  • Martin  Kaufmann. PhD Workshop: Storing and Processing Temporal Data in a Main  Memory Column Store. VLDB 2013, Riva del Garda, Italy, August 26-30,  2013.
  • Hannes Rauhe, Jonathan Dees, Kai-Uwe Sattler, Franz Färber. Multi-Level Parallel Query Execution Framework for CPU and GPU. ADBIS 2013, Genoa, Italy, September 1-4, 2013.
  • Iraklis Psaroudakis, Tobias Scheuer, Norman May, Anastasia Ailamaki. Task Scheduling for Highly Concurrent Analytical and Transactional Main-Memory Workloads. ADMS 2013, Riva del Garda, Italy, August 2013.
  • Thomas Willhalm, Ismail Oukid, Ingo Müller, Franz Faerber. Vectorizing Database Column Scans with Complex Predicates. ADMS 2013, Riva del Garda, Italy, August 2013.
  • David Kernert, Frank Köhler, Wolfgang Lehner. Bringing Linear Algebra Objects to Life in a Column-Oriented In-Memory Database. IMDM 2013, Riva del  Garda, Italy, August 2013.
  • Martin Kaufmann, Peter M. Fischer, Norman May, Andreas Tonder, Donald Kossmann. TPC-BiH: A Benchmark for Bi-Temporal Databases. TPCTC 2013, Riva del Garda, Italy, August 2013.
  • Martin Kaufmann, Panagiotis Vagenas, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Franz Färber (SAP). DEMO: Comprehensive and Interactive Temporal Query Processing with SAP HANA. VLDB 2013, Riva del Garda, Italy, August 26-30, 2013.
  • Philipp Große, Wolfgang Lehner, Norman May: Advanced Analytics with the SAP HANA Database. DATA 2013.
  • Jan  Finis, Robert Brunel, Alfons Kemper, Thomas Neumann, Franz Faerber,  Norman May. DeltaNI: An Efficient Labeling Scheme for Versioned  Hierarchical Data. SIGMOD 2013, New York, USA, June 22-27, 2013.
  • Michael  Rudolf, Marcus Paradies, Christof Bornhövd, Wolfgang Lehner. SynopSys:  Large Graph Analytics in the SAP HANA Database Through Summarization.  GRADES 2013, New York, USA, June 22-27, 2013.
  • Jonathan Dees, Peter  Sanders. Efficient Many-Core Query Execution in Main Memory  Column-Stores. ICDE 2013, Brisbane, Australia, April 8-12, 2013
  • Martin  Kaufmann, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann, Norman  May (SAP). DEMO: A Generic Database Benchmarking Service. ICDE 2013,  Brisbane, Australia, April 8-12, 2013.

  • Martin Kaufmann,  Amin A. Manjili, Peter M. Fischer (Univ. of Freiburg), Donald Kossmann,  Franz Färber (SAP), Norman May (SAP): Timeline Index: A Unified Data  Structure for Processing Queries on Temporal Data, SIGMOD 2013,  New  York, USA, June 22-27, 2013.
  • Martin  Kaufmann, Amin A. Manjili, Stefan Hildenbrand, Donald Kossmann,  Andreas Tonder (SAP). Time Travel in Column Stores. ICDE 2013, Brisbane,  Australia, April 8-12, 2013
  • Rudolf, M., Paradies, M., Bornhövd, C., & Lehner, W. (2013). The Graph Story of the SAP HANA Database. BTW (pp. 403–420).
  • Robert Brunel, Jan Finis: Eine effiziente Indexstruktur für dynamische hierarchische Daten. BTW Workshops 2013: 267-276

 

2012

  • Rösch, P., Dannecker, L., Hackenbroich, G., & Färber, F. (2012). A Storage Advisor for Hybrid-Store Databases. PVLDB (Vol. 5, pp. 1748–1758).
  • Sikka, V., Färber, F., Lehner, W., Cha, S. K., Peh, T., & Bornhövd,  C. (2012). Efficient transaction processing in SAP HANA database.  SIGMOD  Conference (p. 731).
  • Färber, F., May, N., Lehner, W., Große, P., Müller, I., Rauhe, H., & Dees, J. (2012). The SAP HANA Database -- An Architecture Overview. IEEE Data Eng. Bull., 35(1), 28-33.
  • Sebastian Breß, Felix Beier, Hannes Rauhe, Eike Schallehn, Kai-Uwe Sattler, and Gunter Saake. 2012. Automatic selection of processing units for coprocessing in databases. ADBIS'12

 

2011

  • Färber, F., Cha, S. K., Primsch, J., Bornhövd, C., Sigg, S., & Lehner, W. (2011). SAP HANA Database - Data Management for Modern Business Applications. SIGMOD Record, 40(4), 45-51.
  • Jaecksch, B., Faerber, F., Rosenthal, F., & Lehner, W. (2011). Hybrid data-flow graphs for procedural domain-specific query languages, 577-578.
  • Große, P., Lehner, W., Weichert, T., & Franz, F. (2011). Bridging Two Worlds with RICE Integrating R into the SAP In-Memory Computing Engine, 4(12), 1307-1317.

 

2010

  • Lemke, C., Sattler, K.-U., Faerber, F., & Zeier, A. (2010). Speeding up queries in column stores: a case for compression, 117-129.
  • Bernhard Jaecksch, Franz Faerber, and Wolfgang Lehner. (2010). Cherry picking in database languages.
  • Bernhard Jaecksch, Wolfgang Lehner, and Franz Faerber. (2010). A plan for OLAP.
  • Paradies, M., Lemke, C., Plattner, H., Lehner, W., Sattler, K., Zeier, A., Krüger, J. (2010): How to Juggle Columns: An Entropy-Based Approach for Table Compression, IDEAS.

 

2009

  • Binnig, C., Hildenbrand, S., & Färber, F. (2009). Dictionary-based order-preserving string compression for main memory column stores. SIGMOD Conference (p. 283).
  • Kunkel, Julian M., Tsujita, Y., Mordvinova, O., & Ludwig, T. (2009). Tracing Internal Communication in MPI and MPI-I/O. 2009 International Conference on Parallel and Distributed Computing, Applications and Technologies (pp. 280-286).
  • Legler, T. (2009). Datenzentrierte Bestimmung von Assoziationsregeln in parallelen Datenbankarchitekturen.
  • Mordvinova, O., Kunkel, J. M., Baun, C., Ludwig, T., & Kunze, M. (2009). USB flash drives as an energy efficient storage alternative. 2009 10th IEEE/ACM International Conference on Grid Computing (pp. 175-182).
  • Transier, F. (2009). Algorithms and Data Structures for In-Memory Text Search Engines.
  • Transier, F., & Sanders, P. (2009). Out of the Box Phrase Indexing. In A. Amir, A. Turpin, & A. Moffat (Eds.), SPIRE (Vol. 5280, pp. 200-211).
  • Willhalm, T., Popovici, N., Boshmaf, Y., Plattner, H., Zeier, A., & Schaffner, J. (2009). SIMD-scan: ultra fast in-memory table scan using on-chip vector processing units. PVLDB, 2(1), 385-394.
  • Jäksch, B., Lembke, R., Stortz, B., Haas, S., Gerstmair, A., & Färber, F. (2009). Guided Navigation basierend auf SAP Netweaver BIA. Datenbanksysteme für Business, Technologie und Web, 596-599.
  • Lemke, C., Sattler, K.-uwe, & Franz, F. (2009).  Kompressionstechniken für spaltenorientierte BI-Accelerator-Lösungen.  Datenbanksysteme in Business, Technologie und Web, 486-497.
  • Mordvinova,  O., Shepil, O., Ludwig, T., & Ross, A. (2009). A Strategy For Cost  Efficient Distributed Data Storage For In-Memory OLAP. Proceedings IADIS  International Conference Applied Computing, pages 109-117.

 

2008

  • Hill, G., & Ross, A. (2008). Reducing outer joins. The VLDB Journal, 18(3), 599-610.
  • Weyerhaeuser, C., Mindnich, T., Faerber, F., & Lehner, W. (2008). Exploiting Graphic Card Processor Technology to Accelerate Data Mining Queries in SAP NetWeaver BIA. 2008 IEEE International Conference on Data Mining Workshops (pp. 506-515).
  • Schmidt-Volkmar, P. (2008). Betriebswirtschaftliche Analyse auf operationalen Daten (German Edition) (p. 244). Gabler Verlag.
  • Transier, F., & Sanders, P. (2008). Compressed Inverted  Indexes for In-Memory Search Engines. ALENEX (pp. 3-12).

2007

  • Sanders, P., & Transier, F. (2007). Intersection in Integer Inverted Indices.
  • Legler, T. (2007). Der Einfluss der Datenverteilung auf die Performanz  eines Data Warehouse. Datenbanksysteme für Business, Technologie und  Web.

 

2006

  • Bitton, D., Faerber, F., Haas, L., & Shanmugasundaram, J. (2006). One platform for mining structured and unstructured data: dream or reality?, 1261-1262.
  • Geiß, J., Mordvinova, O., & Rams, M. (2006). Natürlichsprachige Suchanfragen über strukturierte Daten.
  • Legler, T., Lehner, W., & Ross, A. (2006). Data mining with the SAP NetWeaver BI accelerator, 1059-1068.

Academic Partners of the SAP HANA database department


Research in the SAP HANA database department is done in collaboration with a number of academic partners, including the following:

 

Hasso-Plattner-Institut, Universität Potsdam
TU Dresden


Universität Heidelberg
TU Ilmenau


Karlsruhe Institute of Technology
  • Prof. Dr. rer. nat. Peter Sanders
  • Institute of Theoretical Informatics, Algorithmics II
  • Web site: http://algo2.iti.kit.edu


EPF Lausanne
  • Prof. Dr. Anastasia Ailamaki
  • Data-Intensive Applications and Systems Laboratory (DIAS)
  • Web site: http://dias.epfl.ch/
Universität Mannheim
TU München
ETH Zürich


ENSIMAG Grenoble

SAP Operational Process Intelligence powered by SAP HANA

Empower your business operations with process visibility and process-aware analytics when needed the most – in real time.

New! SAP Operational Process Intelligence SP01 is available now with great new features!
Read more in this blog.

Overview

SAP Operational Process Intelligence powered by SAP HANA enables line-of-business users to gain process visibility across their end-to-end business processes with a clear focus, improving the operational decision making to achieve better business outcomes.
As we all know, end-to-end business processes can span multiple systems (SAP, non-SAP), can be either modeled (as in SAP Business Workflow and SAP NetWeaver Business Process Management) or built-in (as transaction or programmed in SAP Business Suite).
In addition, end-to-end processes can span on-premise and on-demand systems, while at the same time dealing with structured data as well as streaming data from social media feeds, the internet of things (RFIDs, sensors, etc.), and clickstreams.
In short, we have a variety of high-volume and high-velocity data from many sources. The question is: how can we make sense of all of this in our end-to-end processes in a focused and tailored way, provide business-relevant information on performance, KPIs, and trends, and ensure the process stays on track?
SAP Operational Process Intelligence powered by SAP HANA brings the answer.
Using the SAP HANA platform to correlate and contextualize operational data - i.e. data from implemented end-to-end processes (process logs, transaction logs, business object data, etc.) - into a business scenario, business users are able to get the right information on their processes in real time.
SAPOPInt_Marketecture.png
Take a closer look and explore our blogs and further content on this hot topic – you will love it!


Resources

Blogs on SAP Operational Process Intelligence

 

Real life examples:

 

Webinars / Recordings

 

Additional Information

 

Related SCN Spaces

Playing with SAP HANA

Hands on SAP HANA: Deploying SAP HANA in Your Data Center (September 10)


Hands on SAP HANA is an audience-driven webcast series about planning, implementing, and using SAP HANA.

 

Deploying SAP HANA in Your Data Center: Watch the replay and read the Q&A transcript

Rich Travis, IBM SAP Infrastructure Architect

Raik Langer, Project Manager, HANA Development, SAP

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

 

Description

Combining OLAP and OLTP in one database, SAP HANA creates a unified view of data from transaction, analysis, decision, and planning systems. Real-time analytics and transactional applications, including planning, can run in mixed operation. Many customers are considering adding HANA to their IT environments to gain advantages from its real-time decision making and other capabilities.

 

The data center is at the heart of the enterprise - a complex, interconnected array of equipment, software, and data that drives the business and powers every aspect of its operations. Many business, technical, and organizational factors need to be considered when adding to or updating the data center environment. Ensuring the success of any deployment requires preparation, readiness, a suitable methodology, expertise, and planning.

 

Is your data center ready for Big Data and SAP HANA? Attend this webcast to gain first-hand perspectives on deploying SAP HANA from experts and the SAP HANA community. In this interactive webcast, you can ask the questions that are important to you before, during, and after the webcast, and you will learn and discuss everything you need to plan and deploy SAP HANA in your data center. Some of the discussion topics include:

 

  • Power
  • Network
  • Data backup/recovery planning
  • Application servers

 

Speaker Bio

Richard Travis is an IBM SAP Infrastructure Architect focusing on IBM’s System x (x86) platform. He has been involved with SAP on IBM infrastructure since 1996, when he was certified as a Basis Consultant. Rich is responsible for the HANA and BWA appliance-like solutions across North America and has been actively involved with each solution since the early ramp-up phases, supporting multiple customers throughout North America. Richard has been involved with a broad range of IT solutions for more than 29 years at IBM.

Hands on SAP HANA Webcast Series Schedule


SAP Community Network presents "Hands on SAP HANA," an informal, audience-driven webcast series beginning in September. Technical experts from SAP and its partners will answer questions about planning, implementing, and using SAP HANA.

 

The webcast agenda and presentation are based on questions and interests from the audience - beginning with the questions you ask in the following pages. Speakers will provide answers in an interactive format during the live webcasts, and continue to monitor this space for any follow-up questions.

 

Schedule

Learn more about each session and its speakers, register, and start asking questions.

 

 

Date | Session Title | Speakers
September 10

Deploying SAP HANA in Your Data Center

Rich Travis, IBM SAP Infrastructure Architect

Raik Langer, Project Manager, HANA Development, SAP

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

September 17

High Availability and Disaster Recovery with SAP HANA

Dr. Oliver Rettig, Smart Appliance Development Lead, IBM

Rich Travis, IBM SAP Infrastructure Architect

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

 

Follow

Watch #handsonhana hashtag to follow the event on Twitter.

Hands on SAP HANA: High Availability and Disaster Recovery with SAP HANA


Hands on SAP HANA is an audience-driven webcast series about planning, implementing, and using SAP HANA.

 

High Availability and Disaster Recovery with SAP HANA: Watch the replay and read the Q&A transcript

Dr. Oliver Rettig, Smart Appliance Development Lead, IBM

Rich Travis, IBM SAP Infrastructure Architect

Ralf Czkalla, HANA Development, SAP

David Ramsay, Director, Ecosystem Innovation & Business Development, SAP

 

Description

Business continuity requires that business-critical systems remain highly available at all times, even during failures. Downtime for mission-critical system resources and services such as SAP HANA results in significant adverse business impact. High Availability and Disaster Recovery (HADR) are no longer options but features of most mission-critical systems.

 

SAP HANA is fully designed for HADR and supports a broad range of recovery scenarios, from simple software errors to disasters that take an entire site offline. Join HADR experts in this webcast to learn about the HADR capabilities of HANA and how you can protect your data and resources. Topics of discussion include:

 

  • System Architectures to support HADR
  • Workload planning and optimization for BI, Datamart, BW, etc.
  • Scaling and Automatic failover for HADR
  • Backup, Archiving, and Recovery

 

You don’t have to wait until the live webcast to ask questions about SAP HANA and HADR! You can ask them before and after the webcast at [Forum link] and they will be answered.

 

Speaker Bio

Dr. Oliver Rettig has been the Technical Lead for the IBM Solution for SAP HANA since 2010. He led the definition of the system architecture from standalone to scale-out and on to High Availability and Disaster Recovery solutions. Before taking over this responsibility, Oliver held various management and program management positions in IBM mainframe, POWER systems, System x, and High Performance Computing development areas, and enjoyed leveraging his systems and technology expertise as a consultant working on migration and consolidation projects and providing workload-optimized solutions to satisfy customers' business needs. Oliver received a doctorate in Computer Engineering from Stuttgart University, a diploma in Electrical Engineering from Karlsruhe University, and a master's degree in Project Management from the George Washington University. He has been with IBM for 19 years.

 

Richard Travis is an IBM SAP Infrastructure Architect focusing on IBM’s System x (x86) platform. He has been involved with SAP on IBM infrastructure since 1996, when he was certified as a Basis Consultant. Rich is responsible for the HANA and BWA appliance-like solutions across North America and has been actively involved with each solution since the early ramp-up phases, supporting multiple customers throughout North America. Richard has been involved with a broad range of IT solutions for more than 29 years at IBM.

 

Ask Your Questions Here, Now - Here's How

What do you want to know? Get started and ask as many questions as you want about these topics. To ask a question in advance, simply comment on this document.


How to Enhance SAP HANA Live


SAP HANA Live gives customers a great start to come up with operational reports. Often there is a need to enhance this content with custom content to make it more relevant.

View this SAP How-to Guide

Consuming BW Parent Child Hierarchy in HANA - A workaround


 

We all know how simple it is to create a hierarchy in BW and consume it in BEx. Now, with BW on HANA, some of us might want to use the existing hierarchies, maintained in BW, in HANA.

Well, there is no direct way to do it, but the following workaround will help you extract and replicate the hierarchy in HANA from BW.

So what do you need to start off with?

  • An instance of BW on HANA

  • Hierarchies maintained in BW

 

  • Basic SQL and HANA Modeling background

  • A front end tool to consume the hierarchy created in HANA

 

In this document, I will take the example of Profit Center Hierarchy.

 

P.S. – I have used a sample dataset for this document.

 

So how do we do it?

Here are the Steps to be followed:

  • Create a Table: The approach, which we are going to follow here, would require a table to be persisted which would hold the data of Profit Centers and their parent profit center nodes.
  • Locate the H table and prepare the SQL: The above-mentioned table needs to be populated with some data. This data will be populated using a SQL statement with a self-join of a table. So to start off, check whether the H table of the Profit Center Hierarchy exists in the SAP HANA catalog created for the BW instance. In my case the name of the catalog is CUSTOM_TABLES and the name of the table is HPROFIT_CTR.

 

The table structure is as shown below:

1.png

Here only 14 records are shown. In general you will have thousands of records in this table, depending on the number of hierarchies and the number of levels they have in BW. For different hierarchies the hierarchy ID will be different. For this example I have taken the case of only one Profit Center Hierarchy maintained in BW.

 

The main columns in this table, which we are going to focus on, are: NodeID, NodeName and ParentID.

 

The NodeID is a unique identifier for each node and profit center. The NodeName column has the name of the nodes/profit centers. The ParentID column has the detail about the Parent of that node/profit center.

 

We need to perform a self-join on this H table in order to get the relationship in one single row for a Profit Center. In order to do this we need to write a simple SQL which is as follows:

 

2.png
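In case the screenshot is hard to read, the self-join might look like the following sketch (the quoted column names are assumed from the table description above, and may differ in your system):

```sql
-- Self-join of the H table: each child row is joined to its parent row.
-- The LEFT OUTER JOIN keeps the topmost node, whose parent comes back as NULL ('?').
SELECT c."NODENAME" AS "CHILD",
       p."NODENAME" AS "PARENT"
FROM   "CUSTOM_TABLES"."HPROFIT_CTR" c
LEFT OUTER JOIN "CUSTOM_TABLES"."HPROFIT_CTR" p
       ON c."PARENTID" = p."NODEID";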

The output of this SQL is as shown below:

3.png

Here the Child Column has the child nodes and the Parent Column has the Parent nodes.

The most important part of this output is the ‘?’ as parent for the ProfitNode1 which is achieved by the Left Outer Join in the self-join. The ‘?’ or null value signifies that the ProfitNode1 does not have any parent and it is the top most node of the hierarchy.

 

P.S. - Remember, whenever you create a parent-child hierarchy in HANA and try to consume it in a front end, all the nodes must have exactly one parent, and the topmost node must have null as its parent. If you do not have this structure in place, you will not be able to consume the hierarchy in the front-end tool and will end up getting an error. Also, the parent-child chain must not break while creating hierarchies. This is explained in the later part of the document.

 

  • Create the Persisted Table: Now, this data needs to be pushed into a persisted table, which can then be utilized in a calculation view to create a parent-child hierarchy. To achieve this, you can either schedule the SQL in BODS or use an "INTO" clause at the end of the SQL if your nodes are not going to change in future.

For this example, I will create a simple 2 column table named "PARENT_CHILD" in CUSTOM_TABLES catalog and load the data using INTO Clause.

4.png
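As a sketch of this step, the result can also be persisted with a CREATE TABLE ... AS statement (an alternative to the INTO clause mentioned above; column names are assumed from the earlier description):

```sql
-- Persist the self-join result as the two-column PARENT_CHILD table.
CREATE COLUMN TABLE "CUSTOM_TABLES"."PARENT_CHILD" AS (
  SELECT c."NODENAME" AS "CHILD",
         p."NODENAME" AS "PARENT"
  FROM   "CUSTOM_TABLES"."HPROFIT_CTR" c
  LEFT OUTER JOIN "CUSTOM_TABLES"."HPROFIT_CTR" p
         ON c."PARENTID" = p."NODEID"
);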

 

We have extracted the Hierarchy Information from BW into HANA. Now this "PARENT_CHILD" table will be used for our modeling.

 

P.S. – If you do not wish to persist the data into a table and would like to execute the query on the fly when user runs the report, then you can create a script based calculation view using the same SQL. This calculation view will again be used inside the final calculation view.

 

 

 

  • Modeling of Views: This is a very simple but critical step in creating the hierarchy. The general thinking goes in the direction of creating a hierarchy in an attribute view, consuming this in an analytic view, and finally using this analytic view in the calculation view that is then used for reporting. But there are several problems with this process; for example, you cannot expose a hierarchy created in an attribute view to a calculation view through an analytic view. I will explain the most critical issue with this approach: the break in the parent-child relationship. So what is this issue all about?

There are two constraints when you create a parent-child hierarchy. One is that the topmost node should have null as its parent, which I have already explained. The second constraint is that, when you are creating a parent-child hierarchy, there should not be any child without a parent. For example: if A is the parent of B, B is the parent of C, and C is the parent of D, then at no time while creating (not consuming) the hierarchy should this link break. So you must be wondering, "When will this situation arise?"

 

If you look at the HPROFIT_CTR table, you will find that it is a mix of the nodes and the leaves of the hierarchy. A node is an entry that has at least one child, whereas a leaf is an entry without any children. Here, ProfitNode1, ProfitNode2, and ProfitNode3 are node entries, whereas the entries from P01 to P011 are leaf entries.

 

Generally, in the transaction table, e.g. the FAGLFLEXT table of ECC, the profit center column will always contain the leaf entries and not the node entries. So now, if you join this transaction data with an attribute view created on PARENT_CHILD, using an inner join, left outer join, or referential join based on profit center, the node entries of the attribute view will not appear in the output of the analytic view, because they are not present in the transaction table, i.e. the data foundation of the analytic view.

 

So merely joining the attribute view based on the PARENT_CHILD table will not help. What you need to do, along with the join on this table in the analytic view, is perform a union of this table with the analytic view in the calculation view. That way, while creating the hierarchy, you will have all the nodes and leaves present in the calculation view. Once this is done, create a parent-child hierarchy in the calculation view and then consume it in the front-end tool.

 

Let’s do some hands on exercise on this. So to start off with, create an attribute view on PARENT_CHILD table as shown below:

 

5.png

 

Activate this Attribute View.

Now, create an Analytic View using your transaction table. I have created a dummy transaction table for this exercise:

 

6.png

Create an Analytic View using this table and join the Attribute View, created earlier, to the data foundation using Left Outer Join on Profit Center and Child (Left Table: Data Foundation, Cardinality N:1):

 

 

 

 

7.png

Save and Activate this Analytic View.

 

Now we will create a calculation view on top of this Analytic View Layer and create Hierarchy there.

 

So let us name it as CALCULATION_SALES_DETAILS. This calculation view will have the main source as ANALYTIC_SALES_DETAILS having union with the PROFIT_CENTER Attribute View on PARENT_CHILD table:

 

8.png

Now, in the output node, add the attributes and measures to the final output. Also, create a new Parent Child Hierarchy using the below mentioned information:

 

9.png

 

10.png

 

 

Save and Activate the Calculation View.

 

 

This view can now be consumed in the front end tools. I will use MS Excel to show the data:

11.png

Here you can see that the Hierarchy of Profit Center is consumed as we would like it to be.

 

If you are wondering why we performed the union with the attribute view, think back to the broken-linkage issue explained earlier. The union takes care of any broken linkage, because all the links are present in the attribute view.

 

 

The solution provided here is completely based on my project experience. I hope this document helps you understand the parent-child hierarchy better.

 

SAP HANA 1.0 SP 6 - Compileserver


Hello,

 

in case you are running SP 6 (revision 60 or higher), you might have wondered what the new "compileserver" process is for (one per HANA instance).

 

compileserver.gif

The compileserver is needed with SAP HANA SP6, as the compilation of L procedures has been moved out of the indexserver.

  

Without the compileserver, HANA will no longer start; this is by design. For customers this is irrelevant, as HANA's default parameters make sure that it is started. If customers change these defaults, the system is no longer supported.

Regards,
Marc
SAP Customer Solution Adoption (CSA)


Input Paramters configuration for analytic view and calculation view


Dear All:

 

You may already know how to use input parameters for currency and date conversion; there are several documents that introduce them. But those who create input parameters without currency or date conversion on an analytic view or calculation view may be confused, because the parameters do not seem to work.

I created scenarios for input parameters without currency and date conversion, where we use the input parameters only in a filter expression on a calculation view.

 

NOTE: input parameters do not work as a filter expression on an analytic view.

 

The HANA server is SPS 05 and HANA Studio is SPS 05 revision 48.

 

1. Create an analytic view with input parameters and enter or choose the parameter values in data preview. I will create the P_REGIONNAME and P_COST parameters.

P-1.png

Create the P_REGIONNAME parameter based on the column REGIONNAME.

p-2.png

Create the P_COST parameter based on a static list.

p-3.png

The two parameters I defined are displayed in the semantics perspective.

p-4.png

Save, activate, and preview the data of the analytic view.

p-5.png

2. Create a calculation view based only on the analytic view above.

p-6.png

 

I only use the parameters of the analytic view. You can create new parameters on the calculation view, but then you need to map them one by one.

p-7.png

 

The mapped parameters are displayed on the calculation view.

p-8.png

Save and activate the calculation view and preview the data. We can see the two parameters displayed in the variables and input parameters dialog; click OK without changing them.

p-9.png

 

All data is fetched; the parameters have no effect yet.

p-10.png

 

Now we will set a filter expression that uses the defined parameters: region name equal to P_REGIONNAME and cost greater than P_COST.

p-11.png

Set the filter expression to use the defined parameters.

p-12.png

 

Save and activate the calculation view and preview the data.

p-13.png

We change the P_COST value to 10000. We cannot change the P_REGIONNAME parameter value, because it depends on the analytic view definition; however, if you define the parameter directly on the calculation view, you can change it.

p-14.png

 

The input parameters work well, and only the data matching the filter expression definition is fetched.

 

Finally, you can also use input parameters elsewhere, for example in any BO reporting tool.
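For reference, input parameter values can also be passed when querying the activated view directly in SQL, via the PLACEHOLDER syntax (the package and view names here are illustrative, not from this example):

```sql
-- Query the calculation view with values for both input parameters.
SELECT "REGIONNAME", SUM("COST") AS "COST"
FROM "_SYS_BIC"."mypackage/MY_CALC_VIEW"
     ('PLACEHOLDER' = ('$$P_REGIONNAME$$', 'Asia'),
      'PLACEHOLDER' = ('$$P_COST$$', '10000'))
GROUP BY "REGIONNAME";
```

This is the same mechanism reporting tools use under the hood when they prompt for input parameters.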

 

Regards,

Jerry.

Steps to Enable HANA Backup via Cron

Handling Slowly Changing Dimensions in HANA with SLT Data Replication


Tracking History on HANA:

Handling Slowly Changing Dimensions in HANA with SLT Data Replication

Aug 2013

 

CONTRIBUTORS

Mark Wozniak, mark.wozniak@sap.com, SAP HANA Solution Center

Abani Pattanayak, abani.pattanayak@sap.com, SAP HANA Center of Excellence

Mahesh Sardesai, mahesh.sardesai@sap.com, SAP Database Technology Services

Jody Hesch, jody.hesch@sap.com, Business Analytic Services

 

 

1. BACKGROUND:

 

A customer needs the capability to report against current (“as is”) and historical (“as was”) data for certain dimensions in their SAP HANA data mart system.

 

Dimensions whose values change over time and are captured are referred to as Slowly Changing Dimensions (SCDs). (Technically there are different types of SCDs which meet different business requirements. Those described in this document refer to SCDs of Type 2/6.)

 

Capturing SCDs is a well-understood task for data warehouses/marts and can be handled in various ways including by SAP Business Objects Data Services via the History Preservation transformation.

 

2. PROBLEM STATEMENT:

 

In many instances of SAP HANA data mart solutions, data is replicated from source systems into HANA via SAP LT Replication server (SLT). SLT does not come with history preservation features “out-of-the-box”. As such there is a challenge in addressing the best way to preserve history (in other words, how to track slowly changing dimensions).

 

An ideal approach should include:

- Good performance on large datasets*.

- Ease of implementation

- Maintainable over time

 

 

*Some dimension tables on current retail system at customer site exceed 300 million rows, so good performance is particularly important in instances like these.

 

 

3. SOLUTION APPROACHES:

 

The following documentation describes three solution approaches to capturing Slowly Changing Dimensions in a HANA system with SLT as the replication technology. The first two approaches are outlined briefly. The third approach, which has significant advantages over the first two, is outlined in detail.

 

 

All three approaches involve creating and executing stored procedures on a scheduled basis to populate a history table which we also call an ‘effective’ table. The effective table has, at a minimum, the same fields as the source table as well as validity period fields VALID_FROM and VALID_TO. The primary key fields are the same as the source fields and also include VALID_TO.

 

The effective table can be modeled as an attribute view joined to the Data Foundation in an Analytic View via a Temporal Join. Please see the SP6 SAP HANA Developer’s Guide, p. 175, found at http://help.sap.com/hana_appliance#section6 for details on this join type.

 

Approach 1 involves doing full table comparisons between source and effective tables and updating the effective table accordingly. (In this and following descriptions, ‘source table’ refers to the HANA table that has been replicated from the source system). This approach is the worst-performing of the three approaches and requires the most effort to implement for each set of dimensions for which SCDs are tracked.

 

 

Approach 2 involves creating DB triggers on the source table to capture the delta for INSERT/UPDATE/DELETE operations. This approach has better performance than Approach 1 and is easier to implement, but can still be challenging to maintain over time. It also has performance issues when the delta volume is high.

 

 

Approach 3 entails capturing the operation types of source data records (INSERT, UPDATE, DELETE) and flagging records accordingly. Then a stored procedure populates/updates the historical (“as was”) table with source data as well as the respective validity periods. (Additional fields such as a “Current Flag” can easily be included in the following approach.) Only the deltas from the source table are required (except for the initial load), and no trigger is required. Also, the SLT configuration for this approach can be applied (reused) on any table. As such, Approach 3 is the best-performing, most maintainable, and easiest to implement of the three approaches.

 

 

3.1 APPROACH 1: Table Compare

Approach 1 involves the following steps (in SQLScript pseudo code) in a stored procedure to compare source and effective tables and update the effective table accordingly.

 

IF effective table is empty THEN

 

- Populate effective table with copy of source table
- Setting VALID_FROM = ‘1900-01-01’
- Setting VALID_TO = ‘9999-12-31’

 

ELSE

 

1) SELECT all records that exist in either
      a.source table or
      b.history table and are currently valid
INTO a variable called change_table

 

2) UPDATE all records in history table
      a. that correspond to records in change_table
      b.set VALID_TO = yesterday (timestamp)

 

3) INSERT all records in history table
      a.That correspond to records in change_table on all field values (not just keys)
      b. set VALID_FROM to today
      c. set VALID_TO to ‘9999-12-31’
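The pseudocode above might translate into SQLScript roughly as follows. This is a hedged sketch: the table names, the key field KEY_FIELD, and the attribute ATTR_1 are placeholders standing in for the real source columns, and the source/history comparison is simplified to an EXCEPT.

```sql
CREATE PROCEDURE "UPDATE_EFFECTIVE"() LANGUAGE SQLSCRIPT AS
BEGIN
  DECLARE lv_count INTEGER;
  SELECT COUNT(*) INTO lv_count FROM "EFFECTIVE_TABLE";

  IF :lv_count = 0 THEN
    -- initial load with an open validity period
    INSERT INTO "EFFECTIVE_TABLE"
      SELECT "KEY_FIELD", "ATTR_1", '1900-01-01', '9999-12-31'
      FROM "SOURCE_TABLE";
  ELSE
    -- 1) rows that differ from the currently valid history rows
    change_table = SELECT "KEY_FIELD", "ATTR_1" FROM "SOURCE_TABLE"
                   EXCEPT
                   SELECT "KEY_FIELD", "ATTR_1" FROM "EFFECTIVE_TABLE"
                   WHERE "VALID_TO" = '9999-12-31';

    -- 2) expire the old versions of the changed rows
    UPDATE "EFFECTIVE_TABLE"
      SET "VALID_TO" = ADD_DAYS(CURRENT_DATE, -1)
      WHERE "VALID_TO" = '9999-12-31'
        AND "KEY_FIELD" IN (SELECT "KEY_FIELD" FROM :change_table);

    -- 3) insert the new versions with a fresh validity period
    INSERT INTO "EFFECTIVE_TABLE"
      SELECT "KEY_FIELD", "ATTR_1", CURRENT_DATE, '9999-12-31'
      FROM :change_table;
  END IF;
END;
```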

 

3.2 APPROACH 2: DB Triggers

 

Step 1

 

Add a trigger to the source table that is executed ON INSERT and insert the corresponding record in the effective table.

 

Step 2

 

Add a trigger to source table that is executed ON DELETE and updates the corresponding record in the effective table (i.e. ‘expires’ that record by setting VALID_TO = yesterday). This should reference the old state of the record (REFERENCING OLD ROW<myoldrow>).
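As a sketch, the ON DELETE trigger from this step might look like the following (the table and key names are placeholders, not taken from the customer system):

```sql
-- Expire the matching effective-table row when a source row is deleted.
CREATE TRIGGER "TRG_SOURCE_DELETE"
AFTER DELETE ON "SOURCE_TABLE"
REFERENCING OLD ROW myoldrow
FOR EACH ROW
BEGIN
  UPDATE "EFFECTIVE_TABLE"
    SET "VALID_TO" = ADD_DAYS(CURRENT_DATE, -1)
    WHERE "KEY_FIELD" = :myoldrow."KEY_FIELD"
      AND "VALID_TO" = '9999-12-31';
END;
```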

 

Step 3

 

Add a trigger to the source table that is executed ON UPDATE and

 

a.  Updates the corresponding record in the effective table (i.e. ‘expires’ that record by setting VALID_TO = yesterday). This should reference the old state of the record (REFERENCING OLD ROW<myoldrow>).

 

b. Inserts a new record in the effective table (VALID_FROM = today and VALID_TO = ‘9999-12-31’). This should reference the new state of the record (REFERENCING NEW ROW<mynewrow>)

 

Step 4

 

Create a stored procedure to initialize the effective table with the following logic.

 

IF effective table is empty THEN

-   Populate effective table with copy of source table

-   Setting VALID_FROM = ‘1900-01-01’

-   Setting VALID_TO = ‘9999-12-31’

 

One of the drawbacks of Approach 2 is the performance of the triggers when large sets of data are changed in the source table (mass inserts / updates / deletes).

 

We ran DELETE statements with filters that would impact different numbers of rows in the source table and arrived at the following measurements.

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X133''

successfully executed in 790 ms 578 µs  (server processing time: 753 ms 555 µs) - Rows Affected: 500

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X520''

successfully executed in 25.422 seconds  (server processing time: 23.823 seconds) - Rows Affected: 7394

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X220''

successfully executed in 31.323 seconds  (server processing time: 30.734 seconds) - Rows Affected: 15011

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X521''

successfully executed in 7:22.061 minutes  (server processing time: 7:20.096 minutes) - Rows Affected: 85827

 

Statement 'DELETE FROM "ERPACC_RPDCLNT200"."MARC" WHERE WERKS = 'X423''

successfully executed in 3:26:14.500 hours  (server processing time: 3:26:10.385 hours) - Rows Affected: 303485

 

ss1.PNG

As you can see above, if the chunk size (number of records processed/deleted in one go) is 20K-50K records, this approach works fine. However, if more than 50K records are updated, deleted, or inserted in one go, this approach will not work.

 

For this reason and others discussed already, we recommend the following approach (Approach 3).

 

3.3 APPROACH 3: SLT Configuration

 

Step 1

 

Define table deviation on the source table. (Again, 'source' refers to the HANA replicated table, and the 'target' table is the one that is transformed, i.e. what we have been calling the 'effective' table. From an SLT perspective, however, the HANA table is the 'target' table.)

 

ss2.PNG

 

Define the table deviation using the Edit Table Structure option in the IUCC_REPL_TABSTG tab in SLT.

 

As shown in the next screenshot, add two fields (ZZSLT_FLAG & ZZTIMESTAMP) to store the record type and timestamp. This can be configured using the table deviation option:

a. ZZSLT_FLAG: NVARCHAR(1): stores the record type (‘D’ - DELETE, ‘U’ - UPDATE, ‘I’ - INSERT/new)

b. ZZTIMESTAMP: NVARCHAR(14): stores the timestamp

 

ss3.PNG

Step 2

 

Define a transformation rule for the table (YMARC in this example) in the IUUC_***_RULE_MAP tab.

 

Export Field Name: MANDT. We choose the first field of the table (MANDT); you can use any field from the table.

 

Import Parameter 1: ‘<WA_R_YMARC>’ is the internal name used to address the receiver work area.

 

Insert Include Name: This is the ABAP include we need to create for the transformation.

 

 

Step 3

 

Create ABAP include using t-code SE38 in the SLT system.

 

ss4.PNG

 

*&---------------------------------------------------------------------*
*&  Include           ZIUUC_DELETE
*&---------------------------------------------------------------------*

FIELD-SYMBOLS: <ls_data>       TYPE any,
               <lv_operation>  TYPE any,
               <lv_delete>     TYPE any,
               <lv_timestamp>  TYPE any.

ASSIGN (i_p1) TO <ls_data>.

DATA tstamp LIKE tzonref-tstamps.
DATA d TYPE d.
DATA t TYPE t.

d = sy-datum.
t = sy-uzeit.

SET COUNTRY 'US'.

CONVERT DATE d TIME t INTO
        TIME STAMP tstamp TIME ZONE 'UTC-5'.

" Assign the timestamp to the ZZTIMESTAMP field
ASSIGN COMPONENT 'ZZTIMESTAMP' OF STRUCTURE <ls_data> TO <lv_timestamp>.
IF sy-subrc = 0.
  <lv_timestamp> = tstamp.
ENDIF.

ASSIGN COMPONENT 'IUUC_OPERAT_FLAG' OF STRUCTURE <ls_data> TO <lv_operation>.

" For the delete operation
IF sy-subrc = 0 AND <lv_operation> = 'D'.

  " Change this to an update operation, so the record will not be deleted
  <lv_operation> = 'U'.

  " Update ZZSLT_FLAG to store 'D' (for Delete)
  ASSIGN COMPONENT 'ZZSLT_FLAG' OF STRUCTURE <ls_data> TO <lv_delete>.
  IF sy-subrc = 0.
    <lv_delete> = 'D'.
  ENDIF.

" For all other operations
ELSEIF sy-subrc = 0.

  " Update ZZSLT_FLAG to store the record type
  ASSIGN COMPONENT 'ZZSLT_FLAG' OF STRUCTURE <ls_data> TO <lv_delete>.
  IF sy-subrc = 0.
    <lv_delete> = <lv_operation>.
  ENDIF.

ENDIF.
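To make the effect of this include concrete, here is a minimal Python sketch of the same row transformation. The field names come from the steps above; the function itself and the plain-dict row shape are illustrative assumptions, not part of SLT:

```python
from datetime import datetime, timezone, timedelta

def transform_row(row):
    """Sketch of the ABAP include's logic: stamp the record and turn
    deletes into flagged updates so the row survives in the HANA table."""
    # 14-character timestamp (YYYYMMDDHHMMSS), matching ZZTIMESTAMP NVARCHAR(14);
    # UTC-5 mirrors the time zone used in the include.
    tz = timezone(timedelta(hours=-5))
    row["ZZTIMESTAMP"] = datetime.now(tz).strftime("%Y%m%d%H%M%S")

    if row["IUUC_OPERAT_FLAG"] == "D":
        # Convert the delete into an update so the record is kept,
        # and remember that it was a delete in ZZSLT_FLAG.
        row["IUUC_OPERAT_FLAG"] = "U"
        row["ZZSLT_FLAG"] = "D"
    else:
        # For inserts and updates, simply record the operation type.
        row["ZZSLT_FLAG"] = row["IUUC_OPERAT_FLAG"]
    return row
```

The key point is visible in the first branch: a delete never reaches HANA as a delete; it arrives as an update carrying ZZSLT_FLAG = ‘D’.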

 

 

Step 4

 

Set up replication of the table using HANA studio.

ss5.PNG

 

Step 5

 

Delete and update some records from the YMARC table.

 

ss6.PNG

As you see above, the deleted and updated records are reflected in ZZSLT_FLAG (‘D’ and ‘U’ respectively).

 

Step 6

 

Create a stored procedure to update the history table. The details of the stored procedure will be specific to the customer solution. At a minimum, the following steps should be included, in addition to ‘housekeeping’ code such as error handling.

 

-- STEP 1: Initial load
-- If the target (history) table is empty, load the source table into the history table,
-- setting VALID_FROM = ‘1900-01-01’ and VALID_TO = ‘9999-12-31’

 

-- STEP 2: Record expiration
-- UPDATE the history table, setting VALID_TO = yesterday, for all records in the history table
-- that correspond to a record from the source table WHERE ZZSLT_FLAG IN (‘D’, ‘U’).
-- Make sure the source table is filtered against ZZTIMESTAMP*.
-- This expires deleted and updated records.

-- STEP 3: Insert new records
-- Insert source data into the history table, setting VALID_FROM = today and VALID_TO =
-- ‘9999-12-31’, for source records WHERE ZZSLT_FLAG <> ‘D’.
-- Make sure the source table is filtered against ZZTIMESTAMP*.
-- This insert ensures that new records (INSERT) are captured in the history table, and the latest
-- versions of updated records (UPDATE) are captured as well.

 

* It’s recommended that additional housekeeping code be included in your stored procedure framework to capture the dates of successful procedure executions. The above code can then be filtered via ZZTIMESTAMP > :last_successful_execution_date, where last_successful_execution_date is a scalar variable populated from a query against a procedure execution status table.
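The three steps (plus the delta-pointer filter from the footnote) can be sketched in Python. Column names follow the comments above; `KEY`, the list-of-dicts table shape, and the function name are illustrative assumptions — in practice this would be a SQLScript procedure:

```python
from datetime import date, timedelta

def update_history(history, source, last_run):
    """Sketch of the history-table procedure: initial load, expiration,
    and insertion of current versions, filtered on the delta pointer."""
    today = date.today().isoformat()
    yesterday = (date.today() - timedelta(days=1)).isoformat()

    # STEP 1: initial load if the history table is empty.
    if not history:
        return [dict(r, VALID_FROM="1900-01-01", VALID_TO="9999-12-31")
                for r in source]

    # Filter the source on the delta pointer (ZZTIMESTAMP > last successful run).
    delta = [r for r in source if r["ZZTIMESTAMP"] > last_run]
    changed = {r["KEY"] for r in delta if r["ZZSLT_FLAG"] in ("D", "U")}

    # STEP 2: expire the current version of deleted/updated records.
    for h in history:
        if h["KEY"] in changed and h["VALID_TO"] == "9999-12-31":
            h["VALID_TO"] = yesterday

    # STEP 3: insert current versions of new/updated records (not deletes).
    history += [dict(r, VALID_FROM=today, VALID_TO="9999-12-31")
                for r in delta if r["ZZSLT_FLAG"] != "D"]
    return history
```

Note how a delete only expires the existing version (Step 2), while an update both expires the old version and inserts the new one (Steps 2 and 3).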

 

 

Step 7: Schedule stored procedure execution

 

If SAP Business Objects Data Services (BODS) is part of your landscape, procedures can be scheduled daily (or as often as required) via BODS.

 

If BODS is not an option, the following link from the SAP HANA Academy demonstrates how to schedule data loads via the cron scheduler. The same approach can be adapted to schedule your stored procedure calls.

 

http://www.saphana.com/docs/DOC-2909
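If you go the cron route, the schedule itself is a one-line crontab entry wrapping an hdbsql call. Everything below (host, port, user, schema, procedure name, client path, and log file) is a placeholder to adapt to your landscape:

```shell
# Hypothetical crontab entry: call the history-update procedure nightly at 01:00.
# hanahost:30015, SLT_USER, MYSCHEMA.SP_UPDATE_HISTORY and the paths are placeholders.
0 1 * * * /usr/sap/hdbclient/hdbsql -n hanahost:30015 -u SLT_USER -p 'secret' \
  "CALL MYSCHEMA.SP_UPDATE_HISTORY()" >> /var/log/history_update.log 2>&1
```

In production, prefer hdbsql's secure user store over an inline password, and make sure the procedure's execution-status table (see the footnote in Step 6) is updated only on success.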

 

 

4. OPEN QUESTIONS:

 

There are a few open issues with all three approaches that need to be considered before applying these solutions in a production environment.

 

1.    As you can see, the granularity of the effective table is at the day level. So what happens if there are multiple changes to a record in a single day? Do you have to track all of these changes, or just the last one?

2.    What happens if something goes wrong with the SLT server and you need to recreate the configuration and replicate the table again? You have to make sure the timestamp delta pointer is appropriately adjusted.

 

 

5. CONCLUSION:

 

Note that each of the above approaches may work, depending on the data volume in the table and the number of delta records.

 

For example, Approach 1 may work just fine for low data volumes, say for the site master (T001W), whose typical volume is 100 – 10K records. Approach 2 is an even better fit for this scenario, since there is no need to schedule a procedure daily; however, its triggers must be recreated if the source table is dropped for any reason.

 

Approach 2 will also work fine for high-volume scenarios, but only if the delta records flow to HANA continuously, so that the rate does not exceed 10K records/min at any time.

 

Approach 3 works in almost all scenarios. In short: Approach 2 is better than Approach 1, but Approach 3 is the best option of the three.

 

 

SAP Operational Process Intelligence powered by SAP HANA

Empower your business operations with process visibility and process-aware analytics when needed the most – in real time.

New! SAP Operational Process Intelligence SP01 is available now with great new features!
Read more in this blog.

Overview

SAP Operational Process Intelligence powered by SAP HANA enables line-of-business users to gain process visibility across their end-to-end business processes with a clear focus, improving the operational decision making to achieve better business outcomes.
As we all know, end-to-end business processes can span multiple systems (SAP, non-SAP), can be either modeled (as in SAP Business Workflow and SAP NetWeaver Business Process Management) or built-in (as transaction or programmed in SAP Business Suite).
In addition, end-to-end processes can span on-premise and on-demand systems, while dealing with structured data as well as streaming data from social media feeds, the internet of things (RFIDs, sensors, etc.) and clickstreams.
In short, we have a variety of high-volume, high-velocity data from many sources. The question now is: how can we make sense of all of this in our end-to-end processes in a focused and tailored way, provide business-relevant information on performance, KPIs and trends, and ensure the process stays on track?
SAP Operational Process Intelligence powered by SAP HANA brings the answer.
Using the SAP HANA platform to correlate and contextualize operational data (i.e. data from implemented end-to-end processes: process logs, transaction logs, business object data, etc.) into a business scenario, business users can get the right information on their processes in real time.
SAPOPInt_Marketecture.png
Take a closer look and explore our blogs and further content on this hot topic – you will love it!


Resources

Blogs on SAP Operational Process Intelligence

 

Real life examples:

 

Webinars / Recordings

 

Additional Information

 

Related SCN Spaces

Academic Partners of the SAP HANA database department


Research in the SAP HANA database department is done in collaboration with a number of academic partners, including the following:

 

Hasso-Plattner-Institut, Universität Potsdam
TU Dresden


Universität Heidelberg
TU Ilmenau


Karlsruhe Institute of Technology
  • Prof. Dr. rer. nat. Peter Sanders
  • Institute of Theoretical Informatics, Algorithmics II
  • Web site: http://algo2.iti.kit.edu


EPF Lausanne
  • Prof. Dr. Anastasia Ailamaki
  • Data-Intensive Applications and Systems Laboratory (DIAS)
  • Web site: http://dias.epfl.ch/
Universität Mannheim
TU München
LMU München

ETH Zürich


ENSIMAG Grenoble