
Expanding SAP Systems with AWS



The average company analyses only 37%-40% of the data it collects. This leaves a whopping 60% or more untouched, with no chance to add value to your business. Data-driven companies tend to see more growth, revenue and customer retention. Companies with a data-driven culture treat their data as an asset, using it to gain insights into their business, customers and market.


So how can you and your company get more value and insight from the data housed in your SAP system? In this series of blogs, we will discuss how AWS can help you answer this question in a secure, cost-effective and timely manner.


Customers who want more value from their SAP data face several challenges; two of the most common are security and data volume. The wide variety of data services offered by AWS, along with proper data governance, can help overcome both.


Part one of this six-part series gives you an overview of the proposed architecture. Later blogs will dig deeper into its implementation, demonstrating how you too can add value to your data ecosystem.


The Advantages

Before discussing the different architectures used in this project, let's review the advantages of extracting the data from your SAP system to a cloud provider such as AWS. Data enrichment is essential if you wish to draw meaningful insights from your data. Customers may not want to store this extra data on their SAP systems, and they may not wish to go to the lengths required to collect and process it. Extracting your SAP data to a public cloud opens a world of opportunities: the data can be enriched via APIs, flat files or by combining multiple source systems, so it can be used in ways that were not possible while it sat in your data warehouse.


Extracting data to a public cloud also exposes your SAP data to big data services, providing the extra analytical power needed to ask bigger and more complex questions. Outside of the SAP system, your data can also be exposed to machine learning and artificial intelligence services, helping to identify patterns you did not even know were there.


Technical Architecture


The technical architecture for this project consists of four layers.

Layer 1 is ingestion, where Amazon AppFlow pulls the data from the SAP system.

Layer 2 is storage/staging, in this case Amazon S3 with several staging areas. Bronze, silver and gold buckets clearly denote the quality of your data at each stage of the process.

Layer 3 is the machine learning (ML) layer, where Amazon ML services model the data you have ingested. For this project we chose Amazon Forecast to demonstrate its ability to forecast demand, though it could be swapped out for, or paired with, other Amazon ML services.

Layer 4 is visualisation, where data analysts and line-of-business users can analyse the now enriched and refined SAP data to gain additional insights.
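
To make the staging layer concrete, here is a minimal boto3 sketch of the bronze/silver/gold layout. The bucket names and region are illustrative placeholders, not the names used in this project.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative bronze/silver/gold staging buckets: bronze holds raw AppFlow
# output, silver holds cleansed data, gold holds the enriched,
# modelling-ready datasets. Names and region are placeholders.
for stage in ("bronze", "silver", "gold"):
    s3.create_bucket(
        Bucket=f"my-sap-staging-{stage}",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )
```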


Why is Amazon AppFlow the key to extracting data from your SAP system? AppFlow offers several features that make extracting your data much less involved. The service provides a no-code/pro-code interface, catering to a variety of users: data flows can be configured with just a few clicks, yet run at enterprise scale on a schedule, in response to a business event or on demand.
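
On the pro-code side, the sketch below defines and runs a minimal on-demand flow with boto3. The connector profile name, OData object path and bucket name are placeholder assumptions; your values will differ.

```python
import boto3

appflow = boto3.client("appflow")

# Minimal on-demand flow: SAP OData source -> S3 (bronze) destination.
# "sap-odata-profile", the objectPath and the bucket name are placeholders.
appflow.create_flow(
    flowName="sap-sales-orders-to-s3",
    triggerConfig={"triggerType": "OnDemand"},
    sourceFlowConfig={
        "connectorType": "SAPOData",
        "connectorProfileName": "sap-odata-profile",
        "sourceConnectorProperties": {
            "SAPOData": {"objectPath": "/sap/opu/odata/sap/ZDEMAND_SRV/SalesOrderSet"}
        },
    },
    destinationFlowConfigList=[{
        "connectorType": "S3",
        "destinationConnectorProperties": {
            "S3": {
                "bucketName": "my-sap-staging-bronze",
                "s3OutputFormatConfig": {"fileType": "PARQUET"},
            }
        },
    }],
    # Map every source field straight through to the destination.
    tasks=[{
        "sourceFields": [],
        "taskType": "Map_all",
        "taskProperties": {"EXCLUDE_SOURCE_FIELDS_LIST": "[]"},
    }],
)

# Trigger the flow immediately.
appflow.start_flow(flowName="sap-sales-orders-to-s3")
```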


AppFlow also offers a variety of security features to reduce exposure to security threats. Data is encrypted in transit, and users can stop flows from travelling over the public internet by using AWS PrivateLink.


Like most AWS services, AppFlow is pay-as-you-go, meaning that no charges are incurred while flows are not running. AppFlow also scales with your business, meeting demand as data consumption grows.


Lastly, AppFlow offers a wide variety of pre-built connectors in this series of blog we make use of the SAP OData connector. Which supports both full and incremental/delta loads. Though, connectors for other Software-as-a-Service (SaaS) applications such as Salesforce, Zendesk and Slack are also available. This makes it even easier to consume from multiple source systems with different varieties of data.
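
A hedged sketch of the triggerConfig argument to create_flow for such an incremental schedule; the rate expression follows AppFlow's schedule syntax and is illustrative only:

```python
# Scheduled, incremental trigger for create_flow. With CDC enabled on the
# SAP side, each run pulls only the records changed since the last run.
trigger_config = {
    "triggerType": "Scheduled",
    "triggerProperties": {
        "Scheduled": {
            "scheduleExpression": "rate(1days)",  # AppFlow rate expression
            "dataPullMode": "Incremental",        # "Complete" for full loads
        }
    },
}
```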


Data Architecture


Amazon AppFlow's pre-built SAP OData connector can be used to extract data from SAP using the OData protocol. At a high level, the data architecture can be broken down into three stages.


Stage 1: Configure/activate the data sources in your SAP system, for example SAP Extractors, CDS Views or SAP SLT.


Stage 2: Operational Data Provisioning (ODP) must be configured for extraction via the SAP Gateway of your SAP system.


Stage 3: Establish the connection from Amazon AppFlow's SAP OData connector to your OData service. This project used a site-to-site VPN to enable secure communication; alternatively, AWS Direct Connect or a plain internet connection could be configured to meet the needs of the business.
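
The sketch below shows what registering that connection might look like with boto3. The host, service path, client number, PrivateLink service name and credentials are all placeholders, and in practice the credentials belong in AWS Secrets Manager rather than in code.

```python
import boto3

appflow = boto3.client("appflow")

# Hedged sketch of registering the SAP connection. connectionMode="Private"
# pairs with an AWS PrivateLink endpoint service so flow traffic never
# crosses the public internet; use "Public" for an internet-facing gateway.
appflow.create_connector_profile(
    connectorProfileName="sap-odata-profile",
    connectorType="SAPOData",
    connectionMode="Private",
    connectorProfileConfig={
        "connectorProfileProperties": {
            "SAPOData": {
                "applicationHostUrl": "https://sap-gateway.example.com",
                "applicationServicePath": "/sap/opu/odata/sap/ZDEMAND_SRV",
                "portNumber": 44300,
                "clientNumber": "100",
                "logonLanguage": "EN",
                "privateLinkServiceName": "com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0",
            }
        },
        "connectorProfileCredentials": {
            "SAPOData": {
                "basicAuthCredentials": {
                    "username": "SAP_USER",      # placeholders; store real
                    "password": "SAP_PASSWORD",  # values in Secrets Manager
                }
            }
        },
    },
)
```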



Setting Up the OData Services



The last piece of the puzzle that we will cover in this blog post is setting up the OData services. Data sources can optionally be created from scratch using the T-code RSO2; alternatively, an existing data source can be used, requiring only the T-code SEGW. In this scenario we decided to create the data sources from scratch with RSO2, which also lets us validate the table structure and entries. Importantly, we can also enable Change Data Capture (CDC), allowing AppFlow to run incremental/delta loads rather than a full data load each time a flow is triggered. SEGW can then be used to create, register and test the OData service to ensure everything performs as expected. Once a test of the OData service returns an HTTP 200 status, we can ingest the data it exposes with Amazon AppFlow's pre-built SAP OData connector.
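
Outside the SAP GUI, the same health check can be scripted. A minimal sketch with Python's requests library, where the gateway URL, SAP client and credentials are placeholders:

```python
import requests

# Placeholder gateway URL, SAP client and credentials; replace with your own.
url = "https://sap-gateway.example.com:44300/sap/opu/odata/sap/ZDEMAND_SRV/"
resp = requests.get(
    url,
    auth=("SAP_USER", "SAP_PASSWORD"),
    params={"sap-client": "100", "$format": "json"},
)

# HTTP 200 means the service is active and reachable, so AppFlow ingestion
# can be attempted next.
print(resp.status_code)
```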


We hope you have enjoyed the first blog in this series. Follow along as we continue to help you add value to the data housed in your SAP system.


If you or your colleagues have further questions or queries, please do not hesitate to contact us at william.hadnett@seaparkconsultancy.com


