BODS Starter Guide

Overview of Business Objects Data Services

Description

This document provides basic information about the BODS components.

Transcript of BODS Starter Guide

Page 1: BODS Starter Guide

Overview of Business Objects Data Services

Page 2: BODS Starter Guide

Understanding Data Integrator

1

i. Data Integrator Architecture
ii. Data Integrator Components

Page 3: BODS Starter Guide

BODS ARCHITECTURE

Page 4: BODS Starter Guide
Page 5: BODS Starter Guide

Data Integrator Components

Metadata Reporting

Designer

Administrator

Engines

Data Integrator Service

Job Server

Repository

Access Server

Real-time Services

Page 6: BODS Starter Guide

Data Integrator has the following components:

Data Integrator Designer
Data Integrator Engine
Data Integrator Job Server
Data Integrator Repository
Data Integrator Administrator
Data Integrator Services
Data Integrator Access Server
Data Integrator Web Server
Data Integrator SNMP Agent
Data Integrator Metadata Reporting Tool

Page 7: BODS Starter Guide

Data Integrator components

Designer (create, test, and execute jobs that populate data)

Job Server (launches the processing engine and serves as an interface to the engine and other components)

Job Engine (executes individual jobs)

Repository (DB that stores system objects and user defined objects including source and target metadata and transformation rules)

Access Server (passes messages between web applications and DI Job Server and engines)

Administrator (browser based Administration)

Page 8: BODS Starter Guide

Designer is a GUI-based client/development tool that allows you to create, test, and execute jobs that populate the data warehouse.

Data Integrator designer

Page 9: BODS Starter Guide

Data Integrator Engine

• When DI jobs are executed, the job server starts the DI engine process to perform extraction, transformation, and loading.

• The DI engine process uses parallel pipelining and in-memory data transformation to deliver scalable, high data throughput.

Data Integrator Job Server

• The DI job server starts the Data Integrator engine, which integrates data from multiple heterogeneous sources, performs complex data transformations, and manages extraction and transactions from ERP systems.

• The DI job server can move data in either batch or real-time mode and uses distributed query optimization, multithreading, in-memory caching, transactions, and parallel pipelining to deliver high data throughput and scalability.

Page 10: BODS Starter Guide

DI Repository

• A repository is a central storage place that contains all the information necessary to build a data mart or data warehouse. The DI repository is a set of tables that holds user-created and system-defined objects, source and target metadata, and transformation rules.

• A repository is set up on an open client-server platform to share the metadata with other enterprise tools. A DI repository is created on an RDBMS. There are two types of repositories:

Local repository: used by the application designer to store definitions of DI objects such as projects, jobs, work flows, data flows, and source and target metadata.

Central repository: an optional component that can be used to support a multiuser environment. It provides a shared object library, allowing developers to check objects in and out of the repository.

Data Integrator Repository

Page 11: BODS Starter Guide

Data Integrator Repository Creation

• The repository is the place where all the information about BODI objects resides.

• By default, 80 tables are created when we create a repository.

• All the metadata reports are based on the repository information.

• We can create different versions of a repository, and can also upgrade an existing repository.

• Only one user can log in to the repository at a time.

• We can achieve multiuser development using the central repository’s check-in/check-out options.

Page 12: BODS Starter Guide

Data Integrator Administrator

The Administrator provides:
• Scheduling, monitoring, and executing batch jobs
• Configuring, starting, and stopping real-time services
• Configuring Access Servers and repositories
• Configuring and managing adapters
• Managing users for secure central repositories and Profiler repositories

Page 13: BODS Starter Guide

DI Services

• When we install DI on a Windows platform, two services are installed.

• DI Service: starts the Job Server and Access Server automatically when the system starts.

• DI Web Server: supports web-based applications such as the DI Administrator and the metadata reporting tool.

Page 14: BODS Starter Guide

DI Access Server

• It is a real-time request/reply message broker that collects message requests, routes them to a real-time service, and delivers a message reply within a user-specified time frame.

• The Access Server maintains a queue of messages and sends them to the next available real-time service across any number of computing resources.

DI Web Server

It supports web-based applications such as the DI Administrator and the metadata reporting tool.

DI SNMP (Simple Network Management Protocol) Agent

The DI SNMP agent monitors and records information about the Job Server and the jobs running on the computer.

Page 15: BODS Starter Guide

DI metadata reporting tool: this tool provides browser-based reports on the DI metadata stored in the repository. These reports provide:

Impact and Lineage analysis

It includes the following:
Repository summaries: view reports on the projects, jobs, work flows, data flows, datastores, custom transforms, file formats, and custom functions in the repository.
Datastore analysis: view reports on database connection information, overviews, tables, and functions.
BO universe analysis: for every configured universe, create reports on the classes, objects, and other objects in the universe.
Dependency analysis: search for a specific object in your repository and understand how it impacts, or is impacted by, other objects.

Page 16: BODS Starter Guide

Auto Documentation

It captures the repository information of all the objects (projects, jobs, work flows, data flows, and so on), prepares a report, and can print it as well.

Operational Dashboards

These reports provide a graphical representation of DI job execution statistics, job execution duration, and execution duration histories.

Data Quality

These reports provide a graphical representation of the DI validation rules that you created in batch jobs, helping us identify inconsistencies or errors in source data.

Page 17: BODS Starter Guide

Defining Source and Target Metadata

2

i. Using datastores
ii. Importing metadata
iii. Defining a file format

Page 18: BODS Starter Guide

Understanding datastores

•A datastore provides a connection or multiple connections to data sources such as a database.

•Through the datastore connection, Data Integrator can import metadata from the data source, such as descriptions of fields.

•Data Integrator uses datastores to read data from source tables or load data to target tables

•When you specify tables as sources or targets in a data flow, Data Integrator uses the datastore to determine how to read data from or load data to those tables.

•Data Integrator reads and writes data stored in flat files through flat file formats. It reads and writes data stored in XML documents through DTDs and XML Schemas.

•The specific information that a datastore contains depends on the connection. When your database or application changes, you must make corresponding changes in the datastore information in Data Integrator.

Page 19: BODS Starter Guide

1. From the Datastores tab of the object library, right-click in the blank area and click New.

2. In the Datastore name box, type ODS_DS. This datastore name labels the connection to the database you will use as a source. The datastore name will appear in the local repository. When you create your own projects/applications, remember to give your objects meaningful names.

3. In the Datastore type box, click Database.

4. In the Database type box, click the option that corresponds to the database software being used to store the source data. The remainder of the boxes on the Create New Datastore window depend on the Database type you selected.

5. Complete the remaining connection information for the database you selected.

6. Click OK. Data Integrator saves a datastore for your source in the repository.

To define a datastore for the source (ODS) database

Page 20: BODS Starter Guide

Importing Metadata

Metadata consists of:

– Table name
– Column names
– Column data types
– Primary key columns
– Table attributes
– RDBMS functions
– Application-specific data structures

• Importing table information

The metadata consists of the following information related to the table: Table name, Table description, Column name, Column description, Column data type, Primary key column, Table attribute, Owner name.

• Importing stored function and procedure information

The metadata consists of the following information related to the stored procedure: function parameters, return type, name, and owner.

Page 21: BODS Starter Guide

1. In the Datastores tab, right-click the ODS_DS datastore and click Open. The names of all the tables in the database defined by the datastore named ODS_DS display in a window in the workspace.
2. Move the cursor over the right edge of the Metadata column heading until it changes to a resize cursor.
3. Double-click the column separator to automatically resize the column.
4. Import the following tables by right-clicking each table name and clicking Import. Alternatively, because the tables are grouped together, click the first name, Shift-click the last, and import them together. (Use Ctrl-click for nonconsecutive entries.)
ods.ods_customer
ods.ods_material
ods.ods_salesorder
ods.ods_salesitem
ods.ods_delivery
ods.ods_employee
ods.ods_region
5. Expand the ODS_DS datastore’s Tables container in the object library and verify the tables are listed in the repository.

To import metadata for ODS source tables into Data Integrator

Page 22: BODS Starter Guide

Defining File Formats

File formats: Data Integrator can use data stored in files for data sources and targets. A file format defines a connection to a file. Therefore, you use a file format to connect Data Integrator to source or target data when the data is stored in a file rather than a database table.

When working with file formats, we must:

Create a file format template that defines the structure for a file.

Create a specific source or target file format in a data flow.

File format objects can describe files in:

Delimited format: delimiter characters such as commas or tabs separate each field; the maximum field length is 1000 characters.

Fixed-width format: the column width is specified by the user. Leading or trailing blank padding is added where fields diverge from the specified width. The maximum length is 1000 characters.

Page 23: BODS Starter Guide

Creating file formats

The file format editor has three work areas:
• Properties-Values: used to edit the values for file format properties. Expand and collapse the property groups by clicking the leading plus or minus.
• Column Attributes: used to edit and define the columns or fields in the file. Field-specific formats override the default format set in the Properties-Values area.
• Data Preview: used to view how the settings affect sample data.

Page 24: BODS Starter Guide

Scripts

• Can contain these statements:
– Function calls
– If statements
– While statements
– Assignment statements
– Operators

Basic rules for the syntax of scripts are:
– Each line ends with a semicolon (;).
– Variable names start with a dollar sign ($).
– String values are enclosed in single quotation marks (').
– Comments start with a pound sign (#).
– Function calls always specify parameters, even if the function uses no parameters.
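For illustration, a short script following these rules might look like this (a sketch; $load_date and $region are hypothetical variables assumed to be declared on the parent work flow or job):

# Comments start with a pound sign
$load_date = sysdate();   # a function call uses parentheses even with no parameters
$region = 'EAST';         # string value in single quotes; each statement ends with a semicolon
print('Load started [$load_date] for region [$region]');   # [$var] substitutes the value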

Page 25: BODS Starter Guide

Template tables

• During the initial design of an application, you might find it convenient to use template tables to represent database tables. With template tables, you do not have to initially create a new table in your DBMS and import the metadata into Data Integrator. Instead, Data Integrator automatically creates the table in the database with the schema defined by the data flow when you execute a job. After creating a template table as a target in one data flow, you can use it as a source in other data flows. Though a template table can be used as a source table in multiple data flows, it can only be used as a target in one data flow.

• Template tables are particularly useful in early application development when you are designing and testing a project.

• When the job is executed, Data Integrator uses the template table to create a new table in the database you specified when you created the template table. Once a template table is created in the database, you can convert the template table in the repository to a regular table.

• Once a template table is converted, you can no longer alter the schema.

Page 26: BODS Starter Guide

Creating a batch job

i. Creating a batch job
ii. Creating a simple data flow
iii. Adding source and target objects to a data flow
iv. Using the Query transform
v. Executing the job

3

Page 27: BODS Starter Guide

Object Hierarchy

Page 28: BODS Starter Guide

Creating a batch job

Projects:

A project is a reusable object that allows you to group jobs. A project is the highest level of organization offered by Data Integrator. Opening a project makes one group of objects easily accessible in the user interface.

Jobs: A job is the only object you can execute. You can manually execute and test jobs in development. In production, you can schedule batch jobs and set up real-time jobs as services that execute a process when Data Integrator receives a message request.

We can include any of the following objects in a job definition:

• Data flows
– Sources
– Targets
– Transforms

• Work flows
– Scripts
– Conditionals
– While loops
– Try/catch blocks

Page 29: BODS Starter Guide

Work flows

A work flow defines the decision-making process for executing data flows (a work flow may contain a data flow). For example, elements in a work flow can determine the path of execution based on a value set by a previous job or can indicate an alternative path if something goes wrong in the primary path. Ultimately, the purpose of a work flow is to prepare for executing data flows and to set the state of the system after the data flows are complete.

Data flows

Data flows extract, transform, and load data. Everything having to do with data, including reading sources, transforming data, and loading targets, occurs inside a data flow. The lines connecting objects in a data flow represent the flow of data through data transformation steps.

Page 30: BODS Starter Guide

Adding a work flow

1. With JOB_SalesOrg selected in the project area, click the work flow button on the tool palette.

2. Click the blank workspace area. A work flow icon appears in the workspace. The work flow also appears in the project area on the left under the job name (expand the job to view).

3. Change the name of the work flow to WF_SalesOrg.

4. Click the name of the work flow. An empty view for the work flow appears in the workspace. You will use this view to define the elements of the work flow. Notice the title bar changes to display the name of the work flow.

Page 31: BODS Starter Guide

Adding a data flow

1. Make sure the work flow window is open in the workspace. If it is not, click the WF_SalesOrg work flow in the project area.

2. Click the data flow button on the tool palette.

3. Click the workspace. A representation of a data flow appears in the workspace. The data flow also appears in the project area.

4. Change the name of the data flow to DF_SalesOrg.

5. Click the data flow name. A definition area or grid for the data flow appears in the workspace. This area will be used to define the data flow as described in the next section.

Page 32: BODS Starter Guide

Adding source & target objects to a dataflow

1. Click the square on the right edge of the source file and drag it to the triangle on the left edge of the query transform.

2. Use the same drag technique to connect the query transform to the target table.

Page 33: BODS Starter Guide

QUERY Transform

The query editor, a graphical interface for performing query operations

Page 34: BODS Starter Guide

Defining the query editor window

The query editor, a graphical interface for performing query operations, contains these areas:
• Input schema area
• Output schema area
• Parameters area
The i icon indicates tabs containing user-defined entries.

Page 35: BODS Starter Guide

Executing the job

First verify that your Job Server is running by looking at the Job Server icon at the bottom right of the Designer window. Move the pointer over the icon to see the Job Server name, machine name, and port number in the status area. If the Job Server is not running, the icon will have a red X on it.

1. Select the job name in the project area, in this case JOB_SalesOrg.
2. Right-click and click Execute.

Page 36: BODS Starter Guide

Job Server configuration Screen

Page 37: BODS Starter Guide

Validating, Tracing, and Debugging Batch Jobs

I. Validating and tracing jobs
II. Debugging jobs

4

Page 38: BODS Starter Guide

Validating the job

To validate a job, first click the job you want to validate.

From the menu bar, click Validation > Validate > Current View.

Note:
• Current View — validates the object definition open in the workspace.
• All Objects in View — validates the object definition open in the workspace and all of the objects that it calls.

Also note the Validate Current and Validate All buttons on the toolbar. These buttons perform the same validation as Current View and All Objects in View, respectively.

Page 39: BODS Starter Guide

Trace objects

(Screens shown: Monitor tab and Log tab)

Use trace properties to select the information that Data Integrator monitors and writes to the trace log file during a job.

Data Integrator writes trace messages to the trace log associated with the current Job Server and writes error messages to the error log associated with the current Job Server.

Page 40: BODS Starter Guide

Examining statistics logs

The statistics log (also known as the monitor log) quantifies the activities of the components of the job. It lists the time spent in a given component of a job and the number of data rows that streamed through the component.

Examining error logs

Data Integrator produces an error log for every job execution. Use the error logs to determine how an execution failed. If the execution completed without error, the error log is blank.

Page 41: BODS Starter Guide

Debugging jobs using log files

If your Designer is running when job execution begins, the execution window opens automatically, displaying the trace log information.

Page 42: BODS Starter Guide

Using Built-in Transforms

I. Describing built-in transforms

5

Page 43: BODS Starter Guide

• Transforms manipulate input sets and produce one or more output sets.

• Some transforms, such as Date_Generation and SQL, can also be used as source objects.

• Use operation codes with transforms to indicate how each row in the data set is applied to the target table.

• The most commonly used transforms are the Query and SQL transforms.

Transforms

A transform is a step in a data flow that acts on a data set.

Page 44: BODS Starter Guide

Case Transform

Specifies multiple paths in a single transform (different rows are processed in different ways).

The Case transform simplifies branch logic in data flows by consolidating case, or decision-making, logic in one transform. Paths are defined in an expression table.

Page 45: BODS Starter Guide

The Case transform editor includes an expression table and a smart editor.

The connections between the Case transform and objects used for a particular case must be labeled. Each output label in the Case transform must be used at least once.

The Case transform editor

Page 46: BODS Starter Guide

Merge Transform

Combines incoming data sets, producing a single output data set with the same schema as the input data sets.

All sources must have the same schema, including:
• The same number of columns
• The same column names
• The same column data types
If the input data set contains hierarchical data, the names and data types must match at every level of the hierarchy.

Page 47: BODS Starter Guide

SQL transform

• Performs the indicated SQL query operation.

• Use this transform to perform standard SQL operations when other built-in transforms cannot perform them.

• The options for the SQL transform include specifying a datastore, join rank, cache, array fetch size, and entering SQL text.

Array fetch size: indicates the number of rows retrieved in a single request to a source database. The default value is 1000. Higher numbers reduce requests, lowering network traffic and possibly improving performance. The maximum value is 5000.

Cache: select this check box to hold output from the transform in memory for use in subsequent transforms. Select Cache only if the resulting data set is small enough to fit in memory.

SQL text: the text of the SQL query. This string is passed to the database server. You do not need to put enclosing quotes around the SQL text. You can put enclosing quotes around the table and column names if required by the syntax rules of the DBMS involved.

Update schema: click this option to automatically calculate and populate the output schema for the SQL SELECT statement.

Page 48: BODS Starter Guide

Map Operation

• Allows conversions between data manipulation operations.

• The Map_Operation transform allows you to change operation codes on data sets to produce the desired output. For example, if a row in the input data set has been updated in some previous operation in the data flow, we can use this transform to map the UPDATE operation to an INSERT. The result could be to convert UPDATE rows into INSERT rows so that the target preserves the existing rows.

Page 49: BODS Starter Guide

• Produces a series of dates incremented as you specify

• Use this transform to produce the key values for a time dimension target.

• From this generated sequence you can populate other fields in the time dimension (such as day_of_week) using functions in a query (see the sketch below).

• Data outputs: a data set with a single column named DI_GENERATED_DATE containing the date sequence. The rows generated are flagged as INSERT.

• The Date_Generation transform does not generate hierarchical data.

• Generated dates can range from 1900.01.01 through 9999.12.31.

Date Generation Transform
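As referenced above, hypothetical Query output-column mappings over the generated date column might use built-in date functions such as day_in_week, month, and year (a sketch; the column and input-schema names are assumptions):

DAY_OF_WEEK: day_in_week(Date_Generation.DI_GENERATED_DATE)
MONTH_NUM:   month(Date_Generation.DI_GENERATED_DATE)
YEAR_NUM:    year(Date_Generation.DI_GENERATED_DATE)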

Page 50: BODS Starter Guide

Validation Transform

• Qualifies a data set based on rules for input schema columns. Allows one validation rule per column. Filter out or replace data that fails your criteria.

• Outputs two schemas: Pass and Fail.

• The Match pattern option allows you to enter a pattern that Data Integrator supports in its Match_pattern function.

• Action on Failure tab: on this tab you specify what Data Integrator should do with a row of data when a column value fails to meet a condition. You can send the row to the Fail target, to the Pass target, or to both. You can choose to substitute failing values with a value or expression using the smart editor provided, for example, the ‘INVALID’ value in this case.

The validation transform provides the ability to compare your incoming data against a set of pre-defined business rules and, if needed, take any corrective actions. The Data Profiler and View Data features can identify anomalies in the incoming data to help you better define corrective actions in the validation transform.
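For a feel of the Match_pattern capability mentioned above, a similar check can be scripted with the match_pattern function (a sketch; it assumes the pattern character 'X' matches an uppercase letter and '9' matches a digit):

$is_valid = match_pattern('AB123', 'XX999');   # returns 1 on a match, 0 otherwise
print('Pattern check returned [$is_valid]');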

Page 51: BODS Starter Guide

Pivot Transform

• Creates a new row for each value in a column that you identify as a pivot column.

• The Pivot transform allows us to change how the relationship between rows is displayed. For each value in each pivot column, Data Integrator produces a row in the output data set. We can create pivot sets to specify more than one pivot column.

Page 52: BODS Starter Guide

Suppose we have a table containing rows for an individual’s expenses, broken down by expense type. This source table has expense numbers in several columns, so we might have difficulty calculating expense summaries. The Pivot transform can rearrange the data into a more manageable form, with all expenses in a single column, without losing category information.


Page 53: BODS Starter Guide

• Examines and compares the source and target tables

• Generates INSERT for any new row not in the target table

• Generates UPDATE for any row in the target table that has changed

• Ignores any row that is in the target table and has not changed

• Fills in the generated key for the updated rows

Table Comparison Transform

Page 54: BODS Starter Guide

How the Table Comparison transform works

(Diagram: SOURCE → Query → Table Comparison → Key Generation → CUSTOMER target, with steps numbered 1–5)

Page 55: BODS Starter Guide

Table_Comparison Transform

• Allows you to detect and forward changes that have occurred since the last time a target was updated

• Compares two data sets and produces the difference between them as a data set with rows flagged as INSERT or UPDATE

• Allows you to identify changes to a target table for incremental updates

Page 56: BODS Starter Guide

Table_Comparison Transform

• Data input:
– The input data set must be flagged as NORMAL.
– If the input data set contains hierarchical data, only the top-level data is included in the comparison, and nested schemas are not passed through to the output.

• Options:
– Table name: the fully qualified name of the comparison table.
– Generated key column (optional): if the same row is updated multiple times, compares the row with the largest key value and ignores the other rows.
– Comparison method.
– Input primary key column: the primary key column of the input data set; this key must be present in the comparison table with the same name and data type.
– Compare columns: compares only the selected columns from the input data set; if no columns are listed, all columns in the input data set that are also in the comparison table are used as compare columns.

Page 57: BODS Starter Guide

Table_Comparison Transform

• Data output:
– A data set containing rows flagged as INSERT or UPDATE.
– Contains only the rows that make up the difference between the two input sources.

• Three possible outcomes from this transform:
– An INSERT row is added.
– An UPDATE row is added.
– The row is ignored.

Page 58: BODS Starter Guide

History Preserving Transform: allows a data warehouse or data mart to maintain the history of data. Mostly performed on dimension tables.

Example: a change in a customer’s region would give you an incorrect result if you simply updated the customer record. A typical history-preserving data flow uses:

Source: extracts the rows from the source table.

Query: maps columns from the source.

Table_Comparison transform: generates INSERT and UPDATE rows and fills in existing keys.

History_Preserving transform: converts certain UPDATE rows to INSERT rows.

Key_Generation transform: generates new keys for the updated rows that are now flagged as INSERT.

Source customer table:

Customer          Region
Fred’s Coffee     East
Jane’s Donuts     West
Sandy’s Canada    Central

Target customer table:

GKey   Data             Region
1      Fred’s Coffee    East
2      Jane’s Donuts    East
3      Sandy’s Canada   Central

Target customer history table:

GKey   Data             Region
1      Fred’s Coffee    East
2      Jane’s Donuts    East
3      Sandy’s Canada   Central
4      Jane’s Donuts    West

Page 59: BODS Starter Guide

Date columns - Valid from: a date or datetime column from the source schema.

Date columns - Valid to: a date or datetime column from the source schema.

Compare columns: the column or columns in the input data set for which this transform compares the before- and after-images to determine whether there are changes.
• If the values in each image of the data match, the transform flags the row as UPDATE. The result updates the warehouse row with values from the new row. The row from the before-image is included in the output as UPDATE to effectively update the date and flag information.
• If the values in each image do not match, the row from the after-image is included in the output of the transform flagged as INSERT. The result adds a new row to the warehouse with the values from the new row.

History Preserving Transform Editor

Page 60: BODS Starter Guide

Descriptions of transforms

Page 61: BODS Starter Guide
Page 62: BODS Starter Guide

Using Log files

Monitor log: itemizes the steps executed in the job and the time execution began and ended.

Statistics log: displays each step of each data flow in the job, the number of rows streamed through each step, and the duration of each step.

Error log: displays the name of the object being executed when a Data Integrator error occurred and the text of the resulting error message. If the job ran against SAP data, some of the ABAP errors are also available in the Data Integrator error log.


Page 63: BODS Starter Guide

Built-in Functions

I. Describing built-in functions

6

Page 64: BODS Starter Guide

Built-in Functions

Pages 65–70: BODS Starter Guide (screenshots of built-in functions)
Page 71: BODS Starter Guide

• Understanding variables
• Describing the Variables and Parameters window
• Differences between global and local variables
• Creating global variables
• Setting global variable values
• Explaining language syntax
• Using strings and variables in the Data Integrator scripting language
• Scripting a custom function

Using Data Integrator Scripting Language and Variables

Page 72: BODS Starter Guide

Understanding variables

• Variables are symbolic placeholders for values. The data type of a variable can be any type supported by Data Integrator, such as integer, decimal, date, or text string.

• A script is a single-use object used to call functions and assign values to variables in a work flow.
• A catch is part of a serial sequence called a try/catch block.
• A conditional is a single-use object, available in work flows, that allows you to branch the execution logic based on the results of an expression.

Work flow example (variables defined: $AA int, $BB int):

Script:
If $AA < 0 $AA = 0;
$BB = $AA + $BB;

Catch:
If $BB < 0 $BB = 0;
$AA = $AA + $BB;

Conditional:
If $BB < 0 $BB = 0;
$AA = $AA + $BB;

Page 73: BODS Starter Guide

Describing the Variables and Parameters window

Data Integrator displays the variables and parameters defined for an object in the Variables and Parameters window. The Variables and Parameters window contains two tabs:

• The Definitions tab allows you to create and view variables (name and data type) and parameters (name, data type, and parameter type) for an object type.
• The Calls tab allows you to view the name of each parameter defined for all objects in a parent object’s definition.

Page 74: BODS Starter Guide

Using local variables and parameters
• Naming and defining variables
• Variable values and the smart editor
• Expression and variable substitution
• Differences between global and local variables
• Creating global variables
• Viewing global variables
• Setting global variable values
– Can be set outside the job
– As a job property
– As an execution or schedule property

Page 75: BODS Starter Guide

To create, view, and set global variables

• Global variables are exclusive within the context of the job in which they are created. They are defined in the Variables and Parameters window.
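For example (a sketch; $G_LOAD_DATE is a hypothetical global variable of type date defined at the job level):

$G_LOAD_DATE = sysdate();   # set in a script at run time, or via job/schedule properties
print('Job load date is [$G_LOAD_DATE]');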

Page 76: BODS Starter Guide

Understanding Data Integrator Scripting Language

• Language syntax:
– Supports ANSI SQL-92 varchar behavior.
– Treats an empty string as a zero-length varchar value (instead of NULL).
– Evaluates comparisons to NULL as FALSE.
– Uses the IS NULL and IS NOT NULL operators in the Data Integrator scripting language to test for NULL values.
– Treats trailing blanks as regular characters, instead of trimming them, when reading from all sources.
– Ignores trailing blanks in comparisons in transforms (Query and Table_Comparison) and functions (decode, ifthenelse, lookup, lookup_ext, lookup_seq).

Page 77: BODS Starter Guide

Basic Syntax rules

• Statements end with a semicolon (;)
• Variable names begin with a dollar sign ($)
• String values are enclosed in single quotes (')
• Comments begin with a pound sign (#)

Page 78: BODS Starter Guide

Using Strings and variables in Data Integrator Scripting language

• Quotation marks
• Escape characters
• Trailing blanks
• NULL values
• NULL values and empty strings
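A brief sketch tying these together (assuming a backslash escapes an embedded single quote, and that [$name] substitutes the variable's value inside the string):

$name = 'O\'Brien';   # backslash escapes the embedded single quote
print('Customer name is [$name]');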

Page 79: BODS Starter Guide

When comparing two variables, always test for NULL.

Condition: If ($var1 = $var2)
Recommendation: do not compare without explicitly testing for NULLs. Business Objects does not recommend using this logic, because any relational comparison to a NULL value returns FALSE.

Condition: If ((($var1 IS NULL) AND ($var2 IS NULL)) OR ($var1 = $var2))
Recommendation: executes the TRUE branch if both $var1 and $var2 are NULL, or if neither is NULL but they are equal to each other.
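The recommended NULL-safe test written as a script block (a sketch; it assumes the If statement form listed earlier among the statements scripts can contain):

if (($var1 IS NULL AND $var2 IS NULL) OR ($var1 = $var2))
begin
    print('values match');
end
else
begin
    print('values differ');
end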

Page 80: BODS Starter Guide

Scripting a custom function

Custom functions:
• Written by the user in the Data Integrator scripting language
• Reusable objects
• Managed through the function wizard

Built-in functions are grouped into the following categories: Aggregate, Conversion, Date, Environment, Math, Miscellaneous, String, System, and Validation.
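A minimal custom-function sketch in the scripting language (hypothetical function CF_SafeGreeting created through the function wizard, with one varchar input parameter $p_name; the Return statement and the || concatenation operator are assumptions here):

if ($p_name IS NULL)
begin
    Return 'Hello, guest';   # default value when the input is NULL
end
Return 'Hello, ' || $p_name;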

Page 81: BODS Starter Guide

Creating Repository

Repository Manager window: choose the database type and enter the database name (as per the TNS entry), user ID, and password. The default repository type is Local. Now click the Create button.

Page 82: BODS Starter Guide

Creating Repository

Once repository creation finishes, a message like the one below displays: “The local repository was successfully created.” Do not perform any other actions while repository creation is in progress; leave the system/server idle.

Page 83: BODS Starter Guide

Job Server Creation

Page 84: BODS Starter Guide

Job Server Creation

Click the “Edit Job Server Config” button.

Page 85: BODS Starter Guide

Job Server Creation

Click the Add button.

Page 86: BODS Starter Guide

Job Server Creation

Enter a name without spaces. The port number must be unique. Click Apply.

Page 87: BODS Starter Guide

An error occurred because the same port number already exists. Change the port to 3800.

Job Server Creation

Page 88: BODS Starter Guide

Job Server Creation

Click the Add button.

Page 89: BODS Starter Guide

Job Server Creation

After adding the job server, click the Restart button.

Page 90: BODS Starter Guide

Migrating projects

Steps:
Export: exports a project from one schema to another. Generally used for project backup or a production move.

Page 91: BODS Starter Guide

Right-click, select the work flow, and click Export.

Page 92: BODS Starter Guide

Provide the Target Repository details and select the Objects.

Page 93: BODS Starter Guide

Note: an .atl file is a BODI-generated file that is created when we export all the project components.

Similarly, we can import a repository or BODI jobs from another schema, or from an .atl file, as well.

Page 94: BODS Starter Guide

THANK YOU