ABAP Dataflow & Transports in BODS

This post introduces the SAP BusinessObjects Data Services objects for extracting data from SAP application sources:

  •  SAP Applications datastores
  •  ABAP data flows
  •  Transports

Note:

The procedures in this section require that the software can connect to an SAP server.
The sample tables provided with Data Services that are used in these procedures do not work with all versions of SAP because the structure of standard SAP tables varies in different versions.

Defining an SAP Applications datastore

1. In the local object library, click the Datastores tab.
2. Right-click inside the blank space and click New.
The “Datastore Editor” dialog box opens.
3. In the Datastore name field, type SAP_DS.

This name identifies the database connection inside the software.
4. In the Datastore type list, click SAP Applications to specify the datastore connection path to the database.
5. In the Application server field, type the name of the remote SAP Applications computer (host) to which Data Services connects.
6. Enter the user name and password for the SAP Applications server.
7. Click OK.
Data Services saves your SAP application database connection to the metadata repository.

To import metadata for individual SAP application source tables


Multiuser functionality in SAP Data Services

In this post you will create two local repositories and one central repository and perform the tasks associated with sharing objects using Data Services.

You can also check the post Central Repository in BODS for more details.

Multiuser Development

This section introduces you to Data Services features that support multiuser development. Data Services enables multiple users to work on the same application. It enables teams of developers working on separate local metadata repositories to store and share their work in a central repository.
You can implement optional security features for central repositories.

Introduction

Data Services can use a central repository as a storage location for objects. The central repository contains all information normally found in a local repository such as definitions for each object in an application.

In addition, the central repository retains a history of all its objects. However, the central repository is merely a storage location for this information.

To create, modify, or execute objects such as jobs, always work in your local repository and never in the central repository.
Using a central repository, you can:
• Get objects into the local repository
• Check out objects from the central repository into the local repository
• Check in objects from the local repository to the central repository
• Add objects from the local repository to the central repository

[Diagram: central repository with local repositories]

The functionality resembles that of typical file-collaboration software such as Microsoft Visual SourceSafe.

Recoverable workflow in BODS

This post describes how to:

• Design and implement recoverable work flows.
• Use Data Services conditionals.
• Specify and use the Auto correct load option.
• Replicate and rename objects in the object library.

 

Recovery Mechanisms

Creating a recoverable work flow manually

A recoverable work flow is one that can run repeatedly after failure without loading duplicate data.
Examples of failure include source or target server crashes or target database errors that could cause a job or work flow to terminate prematurely.
In the following exercise, you will learn how to:
• Design and implement recoverable work flows
• Use Data Services conditionals
• Specify and use the auto-correction table loader option
• Replicate and rename objects in the object library.

Adding the job and defining local variables

1. In the Class_Exercises project, add a new job named JOB_Recovery.
2. Open the job and declare these local variables:

Variable              Type
$recovery_needed      int
$end_time             varchar(20)

The $recovery_needed variable determines whether or not to run a data flow in recovery mode. The $end_time variable determines the value of $recovery_needed. These local variables are initialized in the script named GetWFStatus (which you will add in the next procedure).
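
As a rough illustration of what GetWFStatus will do, here is a minimal sketch in the Data Services scripting language. It assumes a status table (called status_table here) in a target datastore named Target_DS, with start_time and end_time columns that the job writes at the beginning and end of each run; these names are placeholders, so substitute your own objects:

    # Read the end time recorded for the most recent run (NULL if that run did not finish).
    $end_time = sql('Target_DS', 'select end_time from status_table where start_time = (select max(start_time) from status_table)');

    # No recorded end time means the previous run failed, so the job must run in recovery mode.
    if (($end_time IS NULL) or ($end_time = ''))
    begin
        $recovery_needed = 1;
    end
    else
    begin
        $recovery_needed = 0;
    end

A conditional in the work flow can then test $recovery_needed and choose between the normal data flow and a recovery data flow that uses the Auto correct load option.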

CDC in BODS

Changed-Data Capture

This post introduces the concept of changed-data capture (CDC). You use CDC techniques to identify changes in a source table at a given point in time (such as since the previous data extraction). CDC captures changes such as inserting a row, updating a row, or deleting a row. CDC can involve variables, parameters, custom (user-defined) functions, and scripts.

Exercise overview

You will create two jobs in this exercise. The first job (Initial) initially loads all of the rows from a source table. You will then introduce a change to the source table. The second job (Delta) identifies only the rows that have been added or changed and loads them into the target table. You will create the target table from a template.
Both jobs contain the following objects:
• An initialization script that sets values for two global variables: $GV_STARTTIME and $GV_ENDTIME
• A data flow that loads only the rows with dates that fall between $GV_STARTTIME and $GV_ENDTIME
• A termination script that updates a database table that stores the last $GV_ENDTIME (both scripts are sketched below)

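To make the two scripts in the Delta job concrete, here is a minimal sketch in the Data Services scripting language. It assumes the target datastore is named Target_DS and that a one-row control table named CDC_TIME with a LAST_TIME column stores the end time of the previous load; all of these names are placeholders:

    # Initialization script: the extraction window starts where the previous
    # load ended and closes at the current system time.
    $GV_STARTTIME = sql('Target_DS', 'select LAST_TIME from CDC_TIME');
    $GV_ENDTIME = sysdate();

    # Termination script: record the end of this load for the next delta run
    # ({$GV_ENDTIME} substitutes the variable value as a quoted literal).
    sql('Target_DS', 'update CDC_TIME set LAST_TIME = {$GV_ENDTIME}');

The data flow between the two scripts then keeps only the source rows whose date column falls between $GV_STARTTIME and $GV_ENDTIME.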

Joins and Lookup in SAP Data Services

This post discusses join conditions and lookups in Data Services.

Populating the Sales Fact Table from Multiple Relational Tables

The exercise joins data from two source tables and loads it into an output table. Data Services features introduced in this exercise are:

• Using the query transform FROM clause to perform joins
• Adding columns to an output table
• Mapping column values using Data Services functions
• Using metadata reports to view the sources for target tables and columns

In this exercise, you will:

• Populate the SalesFact table from two source tables:
  • Table SalesOrder – columns Cust_ID and Order_Date
  • Table SalesItem – columns Sales_Order_Number, Sales_Line_Item_ID, Mtrl_ID, and Price
• Use the FROM clause in the Query transform to join the two source tables and add a filter to bring a subset of sales orders to the target.

• Use the LOOKUP_EXT() function to obtain the value for the Ord_status column from the Delivery source table rather than from the SalesOrder table (a call sketch follows this list).
• Use metadata reports to view:

• Names of the source tables that populate the target SalesFact table
• Names of source columns that populate the target columns
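
As a rough sketch of what the LOOKUP_EXT() mapping for the Ord_status column might look like (the datastore, owner, table, and column names below are placeholders and will differ in your repository):

    lookup_ext([Target_DS.DBO.DELIVERY, 'PRE_LOAD_CACHE', 'MAX'],
               [ORD_STATUS],
               ['none'],
               [DEL_ORDER_NUMBER, '=', SALESORDER.SALES_ORDER_NUMBER,
                DEL_ORDER_ITEM_NUMBER, '=', SALESITEM.SALES_LINE_ITEM_ID])

The first argument group names the lookup table, its caching mode, and the policy for duplicate matches; the second lists the column(s) to return; the third gives the default value(s) when no match is found; and the fourth pairs lookup-table columns with expressions from the input to form the lookup conditions.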

To add the SalesFact job objects

Debugger in SAP Data Services

This post describes how to use the interactive debugger in Data Services.

Using the interactive debugger

The Designer includes an interactive debugger that allows you to examine and modify data row by row by placing filters and breakpoints on lines in a data flow diagram.
A debug filter functions as a simple query transform with a WHERE clause. Use a filter to reduce a data set in a debug job execution. A breakpoint is the location where a debug job execution pauses and returns control to you.
This exercise demonstrates how to set a breakpoint and view data in debug mode.
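
For instance (the table and column names here are only illustrative placeholders), the condition you attach to a line in the data flow when setting a filter or breakpoint is a simple predicate on a column of that line's schema, such as:

    CUSTOMER.CUST_ID = 'ZZ01'

Used as a filter, the condition limits the debug run to matching rows; used as a breakpoint condition, it pauses execution only when a row that satisfies it passes through that line.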

SAP Data Services Designer tool

This post gives you a short overview of the Data Services product and terminology. Refer to the post SAP BO DATA Integrator / Data Services for more details.

Data Services Components

The following diagram illustrates Data Services product components and relationships:


SQL Transform in SAP BODS

[Screenshot: SQL transform]

The SQL transform lets you import a schema into your data flow so that the result of a SQL statement can act as a source.

Create a new batch job and add a work flow and a data flow. In your data flow, drag in the SQL transform from the local object library; you can find it under the ‘Platform’ set of transforms.

Double-click the transform.

(Refer to the post Validation Transform to create the required tables in the database.)

Select your datastore and write a simple SELECT statement that accesses a table already imported into your datastore, then click ‘Update Schema’.
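
For example, assuming the selected datastore contains a CUSTOMER table with CUST_ID, CUST_NAME, and CUST_CITY columns (substitute any table you have imported), the statement could be as simple as:

    SELECT CUST_ID, CUST_NAME, CUST_CITY
    FROM CUSTOMER
    WHERE CUST_CITY = 'SAN FRANCISCO'

Clicking ‘Update Schema’ parses the statement and builds the transform’s output schema from the columns in the SELECT list.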

Observe that the schema now appears in the top pane.

[Screenshot: imported schema shown in the SQL transform editor]

Add a Query transform and map the required columns to the output schema. Connect it to a target template table, then save and execute your job.

[Screenshot: completed data flow]

This transform is particularly useful when you want to pull only a specific subset of data from a database table and push it to the target.

Refer to the ebook for more details:

New Ebook – SAP BODS Step by Step