Merge Transform in SAP BODS


The Merge transform is used to merge rows from multiple tables into a single output. The only condition for the Merge transform to work is that the source tables must have the same structure (identical columns and data types).

This is the exact opposite of what the Case transform does. Note, however, that you cannot specify any conditions in a Merge transform.
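Conceptually, the Merge transform behaves like a SQL UNION ALL. Here is a minimal Python sketch of that behavior; the table names and columns are made up for illustration and are not part of the tutorial:

```python
# Hypothetical sketch: Merge concatenates rows from sources sharing one schema,
# like SQL UNION ALL. No conditions are applied, mirroring the Merge transform.

def merge_transform(*tables):
    """Concatenate rows from several sources that share one structure."""
    columns = tables[0]["columns"]
    # Merge requires identical structures across all inputs
    for t in tables[1:]:
        if t["columns"] != columns:
            raise ValueError("Merge requires all sources to have the same structure")
    merged_rows = [row for t in tables for row in t["rows"]]
    return {"columns": columns, "rows": merged_rows}

usa = {"columns": ["CUST_ID", "REGION"], "rows": [(1, "USA")]}
uk  = {"columns": ["CUST_ID", "REGION"], "rows": [(2, "UK")]}
print(merge_transform(usa, uk)["rows"])  # all rows from both inputs, unfiltered
```

Notice there is no filtering step anywhere, which is exactly why Merge is the opposite of Case: Case routes rows by condition, Merge simply stacks them.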

Log in to Data Services and create a new project; call it 'Merge_Transform'.

Create a job, a workflow, and a dataflow.

We created some tables in the Case transform exercise; we will reuse those tables to merge the data.

Import the target template tables from your Case transform.

How to subtract one row's data from another in a DSO in SAP BW

Scenario: The source data arrives as year-to-date (YTD) numbers. Below is an example:

[Image: YTD numbers]

The user wants to see monthly values instead of the YTD values. Below is the data the user wants to see in the report:

[Image: monthly values]

Thus, the monthly value for 2017.02 is the 2017.02 YTD value minus the 2017.01 YTD value.

In general, the first month's value stays as it is; the second month's value is the second YTD value minus the first; the third month's value is the third YTD value minus the second; and so on.
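The differencing rule above can be sketched in a few lines of Python; the period keys and amounts below are made-up sample data:

```python
# Illustrative sketch of the YTD-to-monthly logic: each month's value is the
# current cumulative value minus the previous cumulative value.

def ytd_to_monthly(ytd):
    """ytd: list of (period, cumulative_value) sorted by period.
    Returns (period, monthly_value): first month as-is, then current - previous."""
    monthly = []
    prev = 0
    for period, value in ytd:
        monthly.append((period, value - prev))
        prev = value
    return monthly

ytd = [("2017.01", 100), ("2017.02", 250), ("2017.03", 400)]
print(ytd_to_monthly(ytd))  # [('2017.01', 100), ('2017.02', 150), ('2017.03', 150)]
```

In a DSO transformation this same subtraction would be done per characteristic combination, with the rows sorted by period.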



How to read child hierarchy entries in a DTP

Scenario: There is a requirement to load data with a filter on a certain InfoObject.

The filter value is a parent node of a hierarchy, but the source data contains rows with leaf-node values only for this InfoObject.

See the diagram below for clarity:


[Image: source data]

From the diagram above, observe that the source cube contains only leaf nodes under the region ZREG1.
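Whatever mechanism the DTP filter uses, the underlying task is to expand the parent node into its leaves. Here is a minimal, hypothetical sketch of that expansion; the hierarchy and node names are sample data, not from a real BW system:

```python
# Hypothetical sketch: expand a hierarchy parent node down to its leaf nodes,
# since the source rows carry leaf values only.

def leaf_nodes(hierarchy, parent):
    """hierarchy: dict mapping a node to its list of child nodes.
    Returns all leaves underneath `parent` (a node with no children is a leaf)."""
    children = hierarchy.get(parent, [])
    if not children:
        return [parent]
    leaves = []
    for child in children:
        leaves.extend(leaf_nodes(hierarchy, child))
    return leaves

h = {"ZREG1": ["NODE_A", "NODE_B"],
     "NODE_A": ["PLANT1", "PLANT2"],
     "NODE_B": ["PLANT3"]}
print(leaf_nodes(h, "ZREG1"))  # ['PLANT1', 'PLANT2', 'PLANT3']
```

Filtering the source rows on this leaf list gives the same result as filtering on the parent node ZREG1.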

New book – SAP Webi 4.0 Step by Step

Hi dear readers,

The ebook, SAP Webi 4.0 step by step is now available on Amazon at just $2.99.

The book is invaluable for gaining hands-on experience and quickly bringing your skills up to date with SAP Webi.

SAP Webi is basically a reporting tool, but it has many advantages over SAP BW Query Designer. It is much more user-friendly and flexible for building reports and dashboards.

Here is the link to the Kindle ebook on the US Amazon marketplace; you can also get it on your local Amazon site.


New Ebook – SAP BODS Step by Step

Hi dear readers,

The ebook, SAP BODS step by step is now available on Amazon at just $2.99.

The book is invaluable for gaining hands-on experience and quickly bringing your skills up to date with SAP Data Services.

SAP BODS is basically an ETL tool, but it has many advantages over SAP BW. However, it still cannot replace the standard SAP BI extractors.

You will find BODS to be like a tool for extracting and transferring records between two databases. Learning simple SQL for relational databases will make it easy for you to understand BODS.

Here is the link to the Kindle ebook on the US Amazon marketplace; you can also get it on your local Amazon site.

Below is the link to buy the paperback version:

ABAP Dataflow & Transports in BODS

This post introduces the SAP BusinessObjects Data Services objects for extracting data from SAP application sources:

  •  SAP Applications datastores
  •  ABAP data flows
  •  Transports


The procedures in this section require that the software has the ability to connect to an SAP server.
The sample tables provided with Data Services that are used in these procedures do not work with all versions of SAP because the structure of standard SAP tables varies in different versions.

Defining an SAP Applications datastore

1. In the local object library, click the Datastores tab.
2. Right-click inside the blank space and click New.
The “Datastore Editor” dialog box opens.
3. In the Datastore name field, type SAP_DS.

This name identifies the database connection inside the software.
4. In the Datastore type list, click SAP Applications to specify the datastore connection path to the database.
5. In the Application server field, type the name of the remote SAP Applications computer (host) to which Data Services connects.
6. Enter the user name and password for the SAP Applications server.
7. Click OK.
Data Services saves your SAP application database connection to the metadata repository.

To import metadata for individual SAP application source tables


Multiuser functionality in SAP Data Services

In this post you will create two local repositories and one central repository, and perform the tasks associated with sharing objects using Data Services.

You can also check the post Central Repository in BODS for more details.

Multiuser Development

This section introduces you to Data Services features that support multiuser development. Data Services enables multiple users to work on the same application. It enables teams of developers working on separate local metadata repositories to store and share their work in a central repository.
You can implement optional security features for central repositories.


Data Services can use a central repository as a storage location for objects. The central repository contains all information normally found in a local repository such as definitions for each object in an application.

In addition, the central repository retains a history of all its objects. However, the central repository is merely a storage location for this information.

To create, modify, or execute objects such as jobs, always work in your local repository and never in the central repository.
Using a central repository, you can:
• Get objects into the local repository
• Check out objects from the central repository into the local repository
• Check in objects from the local repository to the central repository
• Add objects from the local repository to the central repository
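The check-out/check-in cycle above can be pictured with a small conceptual sketch. This is not how Data Services is actually implemented; the class and method names are invented purely to illustrate the locking and versioning semantics:

```python
# Conceptual sketch only: a central repository stores versioned objects and
# lets one user at a time check an object out, mirroring the bullets above.

class CentralRepository:
    def __init__(self):
        self.objects = {}      # name -> (version, definition)
        self.checked_out = {}  # name -> user currently holding the object

    def add(self, name, definition):
        """Add an object from a local repository for the first time."""
        self.objects[name] = (1, definition)

    def check_out(self, name, user):
        """Lock the object for one user and hand back its definition."""
        if self.checked_out.get(name):
            raise RuntimeError(f"{name} is already checked out")
        self.checked_out[name] = user
        return self.objects[name][1]

    def check_in(self, name, user, definition):
        """Store the modified definition, bump the version, release the lock."""
        if self.checked_out.get(name) != user:
            raise RuntimeError(f"{user} does not hold {name}")
        version, _ = self.objects[name]
        self.objects[name] = (version + 1, definition)  # history is retained as versions
        del self.checked_out[name]
```

The key point the sketch makes is that the central repository only stores and versions objects; all actual editing happens on the checked-out copy in the local repository.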


The functionality resembles that of typical file-collaboration software such as Microsoft Visual SourceSafe.

Recoverable workflow in BODS

This post describes how to:

• Design and implement recoverable work flows.
• Use Data Services conditionals.
• Specify and use the Auto correct load option.
• Replicate and rename objects in the object library.


Recovery Mechanisms

Creating a recoverable work flow manually

A recoverable work flow is one that can run repeatedly after failure without loading duplicate data.
Examples of failure include source or target server crashes or target database errors that could cause a job or work flow to terminate prematurely.
In the following exercise, you will learn how to:
• Design and implement recoverable work flows
• Use Data Services conditionals
• Specify and use the auto-correction table loader option
• Replicate and rename objects in the object library.

Adding the job and defining local variables

1. In the Class_Exercises project, add a new job named JOB_Recovery.
2. Open the job and declare these local variables:

Variable            Type
$recovery_needed    int
$end_time           varchar(20)

The $recovery_needed variable determines whether or not to run a data flow in recovery mode. The $end_time variable determines the value of $recovery_needed. These local variables are initialized in the script named GetWFStatus (which you will add in the next procedure).
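The decision GetWFStatus makes can be sketched as follows. This is a hedged illustration, not the actual Data Services script: the status-table layout is an assumption made for the example:

```python
# Hypothetical sketch of the GetWFStatus logic: if the previous run never
# recorded an end time, it failed partway through, so the job must recover.

def get_wf_status(status_table, job_name):
    """Return ($recovery_needed, $end_time) for the last run of job_name.
    status_table maps job name -> end time of the last successful run, or None."""
    end_time = status_table.get(job_name)        # None if the last run never finished
    recovery_needed = 1 if end_time is None else 0
    return recovery_needed, end_time

status = {"JOB_Recovery": None}                  # previous run crashed before the end
print(get_wf_status(status, "JOB_Recovery"))     # (1, None) -> run in recovery mode
```

A conditional in the work flow would then branch on $recovery_needed, running the auto-correct load path when it is 1 and the normal load path otherwise.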


Changed-Data Capture

This post introduces the concept of changed-data capture (CDC). You use CDC techniques to identify changes in a source table at a given point in time (such as since the previous data extraction). CDC captures changes such as inserting a row, updating a row, or deleting a row. CDC can involve variables, parameters, custom (user-defined) functions, and scripts.

Exercise overview

You will create two jobs in this exercise. The first job (Initial) initially loads all of the rows from a source table. You will then introduce a change to the source table. The second job (Delta) identifies only the rows that have been added or changed and loads them into the target table. You will create the target table from a template.
Both jobs contain the following objects.
• An initialization script that sets values for two global variables: $GV_STARTTIME and $GV_ENDTIME
• A data flow that loads only the rows with dates that fall between $GV_STARTTIME and $GV_ENDTIME
• A termination script that updates a database table that stores the last $GV_ENDTIME
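The core of the Delta job is the date-window filter. Here is a minimal sketch of that filter; the table and column names (such as LAST_UPDATE) are assumptions for illustration, not the tutorial's actual names:

```python
# Illustrative sketch of timestamp-based CDC: keep only rows whose last-update
# time falls inside the ($GV_STARTTIME, $GV_ENDTIME] extraction window.

from datetime import datetime

def delta_rows(source, gv_starttime, gv_endtime):
    """Return rows added or changed since the previous extraction."""
    return [r for r in source if gv_starttime < r["LAST_UPDATE"] <= gv_endtime]

source = [
    {"CUST_ID": 1, "LAST_UPDATE": datetime(2017, 1, 10)},  # loaded by the Initial job
    {"CUST_ID": 2, "LAST_UPDATE": datetime(2017, 2, 5)},   # new since the last run
]
start, end = datetime(2017, 2, 1), datetime(2017, 3, 1)
print(delta_rows(source, start, end))  # only the CUST_ID 2 row
```

After the load, the termination script saves $GV_ENDTIME so it becomes the next run's $GV_STARTTIME, which is what makes the window move forward.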


Joins and Lookup in SAP Data Services

This post discusses join conditions and lookups in Data Services.

Populating the Sales Fact Table from Multiple Relational Tables

The exercise joins data from two source tables and loads it into an output table. Data Services features introduced in this exercise are:

• Using the query transform FROM clause to perform joins
• Adding columns to an output table
• Mapping column values using Data Services functions
• Using metadata reports to view the sources for target tables and columns

In this exercise, you will:

• Populate the SalesFact table from two source tables:
  • SalesItem – columns Sales_Order_Number, Sales_Line_Item_ID, Mtrl_ID, and Price
  • SalesOrder – columns Cust_ID and Order_Date
• Use the FROM clause in the Query transform to join the two source tables and add a filter to bring a subset of sales orders to the target.

• Use the LOOKUP_EXT() function to obtain the value for the Ord_status column from the Delivery source table rather than from the SalesOrder table.
• Use metadata reports to view:
  • Names of the source tables that populate the target SalesFact table
  • Names of the source columns that populate the target columns
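The join plus LOOKUP_EXT pattern can be sketched as follows. This is a hypothetical illustration of the semantics, not the Query transform itself; the sample rows and the exact column sets are made up:

```python
# Hypothetical sketch: inner-join SalesItem to SalesOrder on the common key,
# then a LOOKUP_EXT-style lookup of Ord_status from the Delivery table.

def join_and_lookup(sales_orders, sales_items, delivery):
    """Build SalesFact rows: join on Sales_Order_Number, then look up Ord_status."""
    orders_by_key = {o["Sales_Order_Number"]: o for o in sales_orders}
    status_by_key = {d["Sales_Order_Number"]: d["Ord_status"] for d in delivery}
    fact_rows = []
    for item in sales_items:
        key = item["Sales_Order_Number"]
        order = orders_by_key.get(key)
        if order is None:          # FROM-clause inner join: drop unmatched rows
            continue
        fact_rows.append({
            "Sales_Order_Number": key,
            "Cust_ID": order["Cust_ID"],
            "Price": item["Price"],
            # LOOKUP_EXT returns a default value when no delivery row matches
            "Ord_status": status_by_key.get(key, ""),
        })
    return fact_rows

orders = [{"Sales_Order_Number": 1001, "Cust_ID": "C01"}]
items  = [{"Sales_Order_Number": 1001, "Price": 250.0}]
deliv  = [{"Sales_Order_Number": 1001, "Ord_status": "Delivered"}]
print(join_and_lookup(orders, items, deliv))
```

The key difference the sketch shows: the join combines two full source tables row by row, while the lookup only fetches a single column value per output row, with a default when nothing matches.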

To add the SalesFact job objects