Issue 10 Q2 2010
Table of Contents

Enhanced FTP Adapter with FTPS Support
  Introduction
  Secure FTP
  Downloading read-only files
  Support for atomic file transfer in ASCII mode
  Comparison to other adapters
  Conclusion
  Acknowledgements
Creating business documents
  Introduction
  Creating Word document based on Word template
  Creating Word document based on XSL mapping
  Creating Pdf document based on Adobe Acrobat template
  Creating Pdf document based on XSL mapping
  Conclusion
Automating Deployment of BizTalk Change Requests
  Overview
  Background – integration change management with BizTalk Server
  Deployment – an opportunity for improvement
  Standard Onboarding Deployment Automation - SODA
  Technology (BizTalk-related)
  Benefits
  Future
  Summary
Request Response Messaging Pattern
  Windows Server 2008 – SQL Server 2008 – BizTalk Server 2009
  Summary
Instrumentation Best Practices for High Performance BizTalk Solutions
  Background
  The Challenges
  The Solution
  Performance Considerations
  Instrumentation Guidelines
  Instrumentation of Custom Pipeline Components
  Instrumentation of BizTalk Maps
  Instrumentation of BizTalk Orchestrations
  Instrumentation of Business Rules
  Instrumentation of Custom Components
  Management of Instrumented Applications
    Event Trace Management Application Landscape
    Event Trace Management Tasks
  Conclusion
  Additional Resources/References
Financial Messaging Services Bus
  FMSB and Microsoft ESB
    FMSB Architecture
    FMSB Components
    Need for rich BI within ESB
    Dashboard
    SWIFT service Router – compound service
  FMSB Add-on (BG Services Engine)
  Conclusion
How To Boost Message Transformations Using the XslCompiledTransform class
  Introduction
    BizTalk Application
  Results
  Conclusions
  Follow-Ups
  Code
Enhanced FTP Adapter with FTPS Support

Summary: This article describes in detail the new features added to the FTP adapter in BizTalk Server 2010.
Author: Thiago Almeida (http://connectedthoughts.wordpress.com) is a BizTalk Server MVP and works
as a Senior Consultant at Datacom, one of the largest IT services providers in New Zealand. The contents
of this article and the opinions expressed in it are the sole responsibility of the author. This article is
based on a beta release and features are subject to change before the RTM release.
Introduction

Even though new message transmission protocols and standards like the ones supported by WCF are becoming the first choice for data exchange nowadays, the File Transfer Protocol (FTP) is still very popular. FTP has been supported by BizTalk Server since its first edition. Now with BizTalk Server 2010, Microsoft has extended it with three important new features:
- Secure FTP support
- Downloading read-only files
- Atomic file transfer for the ASCII mode
In this article I will describe each of the new features in detail.
Another improvement not discussed in this article is that the performance and reliability of the FTP adapter have been improved, with gains when sending data. The socket buffer size has been increased so that optimal performance is achieved when sending large files.
Secure FTP

Secure FTP (also known as FTPS or FTP over SSL) is an extension to the original FTP protocol that adds SSL/TLS encryption to the channel. It is defined by RFC 2228, titled 'FTP Security Extensions' (http://tools.ietf.org/html/rfc2228).
It is important to note that Secure FTP is different from the SSH File Transfer Protocol, or SFTP, which is
another standard for secure file transfer popular in Unix systems. SFTP’s implementation is completely
different from Secure FTP and is not supported out of the box by BizTalk Server. There are third party
adapters that support it like n/Software’s Adapter (http://www.nsoftware.com/products/biztalk) or the
free SFTP adapter on Codeplex (http://sftpadapter.codeplex.com).
There is a new section in the FTP Adapter’s configuration properties called SSL on both the send and
receive configuration page of the FTP adapter, with four new properties:
- Client Certificate Hash – The SHA1 thumbprint of the client certificate that should be used. If BizTalk does not find the right certificate it will throw an exception. Ensure that you log on to the machine with the BizTalk Server host instance user account and load the client certificate into that user's Personal store.
- FTPS Connection Mode – Explicit or Implicit. On explicit connections an initial negotiation is
established to determine the security mechanism. Once the security is established the
connection is changed to the agreed secure mode. The server can also allow the connection to
continue without encryption if the client does not support it. There is no negotiation with the
Implicit mode, the server expects the client to start the connection directly on a TLS/SSL secure
mode, and the server drops the connection otherwise.
- Use Data Protection – Whether the data channel should also be encrypted during the file
exchange. When set to No only the control channel will be encrypted. The FTP commands are
exchanged through the control channel, including credential and identification information. The
data channel is used to transfer the file data according to the specified mode and type. If the
contents of your file should also be encrypted then enable this setting.
- Use SSL – This enables or disables FTPS. The default is No, so the adapter does not try to
connect via FTPS by default.
The configuration of the send port to use SSL against an IIS 7.5 FTP server would look like the following:
Figure 1 - FTP Adapter properties
Not securing FTP connections is a big vulnerability. Any network packet sniffer tool will be able to trace
the credentials and the data being transferred. Without SSL a packet sniffer would see a trace similar to
the following on the FTP control connection, including the username and password in clear text:
220 Microsoft FTP Service
USER ThiagoWin7-PC|thiagowin7
331 Password required for ThiagoWin7-PC|thiagowin7.
PASS pass@word1
230 User logged in.
PWD
257 "/" is current directory.
PWD
257 "/" is current directory.
PWD
257 "/" is current directory.
TYPE I
200 Type set to I.
PASV
227 Entering Passive Mode (192,168,1,7,225,85).
STOR test.xml
125 Data connection already open; Transfer starting.
226 Transfer complete.
The trace above is also what gets saved to the log file (configurable on the Log property of the FTP
adapter properties) although the password is masked with ‘xxxx’ instead.
Sending the file with SSL enabled changes the transfer considerably. Only the initial conversation to
determine the encryption requirement is in clear text, and from then on only encrypted data is sent:
220 Microsoft FTP Service
AUTH TLS
234 AUTH command ok. Expecting TLS Negotiation.
...encrypted SSL communication and commands...
For troubleshooting purposes the log file generated by the FTP adapter still shows the entire list of
commands:
< 220 Microsoft FTP Service
> AUTH TLS
< 234 AUTH command ok. Expecting TLS Negotiation.
> PBSZ 0
< 200 PBSZ command successful.
> PROT P
< 200 PROT command successful.
> USER ThiagoWin7-PC|thiagowin7
< 331 Password required for ThiagoWin7-PC|thiagowin7.
> PASS xxxx
< 230 User logged in.
> PWD
< 257 "/" is current directory.
> PWD
< 257 "/" is current directory.
> PWD
< 257 "/" is current directory.
> TYPE I
< 200 Type set to I.
> PASV
< 227 Entering Passive Mode (192,168,1,7,234,106).
> STOR test.xml
< 125 Data connection already open; Transfer starting.
< 226 Transfer complete.
The adapter also has new message context properties for the SSL feature, namely ClientCertificateHash, FtpsConnectionMode, UseDataProtection and UseSsl. This allows the SSL feature to be configured for dynamic send ports. For example, from an orchestration’s message assignment shape the same secure FTP server detail as above can be set for a dynamic send port:
Figure 2 - Dynamic FTP SSL settings in an orchestration
Of course, hard-coding credentials and connection configuration in an orchestration as above is not recommended. Instead, store them in Enterprise Single Sign-On or a secure database and retrieve the details dynamically.
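For illustration, the message assignment shown in Figure 2 would contain expressions along these lines (a sketch only: the message and port names, server address and certificate thumbprint are placeholders, and exact property value types may differ in the RTM release):

// Inside an orchestration Message Assignment shape (XLANG/s)
Msg_FTPOut = Msg_In;
Msg_FTPOut(FTP.UseSsl) = true;
Msg_FTPOut(FTP.FtpsConnectionMode) = "Explicit";
Msg_FTPOut(FTP.UseDataProtection) = true;
Msg_FTPOut(FTP.ClientCertificateHash) = "0123456789abcdef0123456789abcdef01234567";
Port_DynamicFTP(Microsoft.XLANGs.BaseTypes.Address) = "ftp://ftpserver:21/in/%MessageID%.xml";
Port_DynamicFTP(Microsoft.XLANGs.BaseTypes.TransportType) = "FTP";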
It is common to create a dedicated host to be used for FTP ports (FTPHost in Figure 3):
Figure 3 - Host instances
Usually a dedicated FTP host is created to separate the adapter from everything else running on the
same BizTalk environment. This ensures it has its own host instance(s), memory allocation and CPU
threads. It also aids in fixing and troubleshooting issues since all FTP ports are isolated to this host.
Throttling and performance settings can also be applied specifically to this host to meet the demands of
the adapter. Further isolation between receiving FTP ports and sending FTP ports is also common. Each environment is different though, and this does not apply to all of them, as too many hosts will start to strain the environment's resources.
Note that in Figure 3 the FTPHost instance logon account is a user called BTServiceTrusted, which is different from the user BTService configured for the BizTalkServerApplication host instance.
Using the FTPHost on a static FTP send port would mean that we need to load the SSL certificate to the
personal store of the BTServiceTrusted user since that is the user for the FTPHost host instance:
Figure 4 - FTP send port host selection
When sending a message through a dynamic send port configured as per Figure 2 BizTalk Server might
fail with an exception saying “No Client Certificate matching the provided client certificate hash was
found. Verify if the certificate is present in the personal store of the corresponding BizTalk host instance
user account”. Why? Dynamic sends use the default send handler for the adapter.
BizTalkServerApplication is the default handler for the out of the box adapters, and therefore a dynamic send runs under the account of the BizTalkServerApplication host instance, BTService in this case. The default handler is marked by the black tick on the handler icon:
Figure 5 - FTP adapter handlers
There are two ways to get around this. Either:

- Load the SSL certificate on to the Personal store of BTService, the account configured for the BizTalkServerApplication host instance, which is the default send handler; or
- Change the default send handler of the FTP adapter to be FTPHost, the same host we used on the static send port. This is done by opening the properties page of the FTPHost send handler and ticking the 'Make this the default handler' tickbox:
Figure 6 - FTP adapter handler properties
If we change the default handler to be FTPHost then BizTalk Server will find the correct
certificate on the BTServiceTrusted user certificate store and be able to transfer the file. This is
important to keep in mind if your applications use multiple dynamic send ports: since you can’t
dynamically choose what host handler the dynamic send port uses, you have to use the same
user for all the SSL certificates of your dynamic FTP send ports.
If you encounter problems with the SSL feature of the adapter here are a few troubleshooting tips:
- Temporarily configure logging by setting the Log property of the adapter to a folder where the host instance user has read and write access. For example, C:\FTPTest\Log\FTP.log
- Review the BizTalk Server event log for exceptions
- If BizTalk Server throws an exception in the event log that says "No credentials are available in the security package" or "No Client Certificate matching the provided client certificate hash was found. Verify if the certificate is present in the personal store of the corresponding BizTalk host instance user account.", ensure you log on to the machine as the BizTalk Server host instance user for the handler selected on the port, then load the right certificate into the user's Personal store
- Ensure the certificate and the server name match, otherwise BizTalk might throw an exception like the following: "Unable to connect to FTP server '127.0.0.1' as user 'ThiagoWin7-PC|thiagowin7'. Inner Exception details: 'The server name in the server certificate does not match with the name of the physical server. Make sure you provide the right server name'".
Downloading read-only files

Read-only files and files that get refreshed with the same name are common in FTP systems. BizTalk 2010 now supports these scenarios.
The easiest way to test this feature is to configure IIS 7.5 with FTP. Once configured with the basic
settings the FTP site will listen on port 21 and map to the %SystemDrive%\inetpub\ftproot physical path:
Figure 7 - IIS 7.5 FTP site properties
BizTalk 2009 and previous versions only had two settings under Polling, Interval and Unit:
Figure 8 - BizTalk 2009 FTP polling settings
The updated adapter has three new settings: Delete After Download, Enable Timestamp comparison and Redownload Interval:
Figure 9 - BizTalk 2010 FTP polling settings
These new settings support the idea of read-only files and files with the same filename getting
appended to or overwritten. Here’s a description of each setting:
- Delete After Download – The name says it all – this tells BizTalk if the FTP adapter should or shouldn't try to delete the file after reading it. If set to No it will not try to delete the file after reading its contents and will leave it as it is. If set to Yes it will attempt to delete the file and ignore the other two settings described below. If the file can't be deleted BizTalk will create a Warning entry with event id 5740 in the Application event log with a message that says "Unable to delete the file '<file name>' on the FTP server. Inner Exception details: 'The remote file could not be opened by the FTP server'". This is similar behavior to the FTP adapter in BizTalk Server 2009, although the inner exception details are now part of the event log entry. Setting this option to No tells the adapter to leave the file on the server without raising any warnings, besides enabling the other two properties to support the read-only file scenario.
- Enable Timestamp comparison – Determines if BizTalk Server should use the file's last modified timestamp to check if it has changed since the last time it was downloaded. This setting requires the MDTM command to be supported by the FTP site. If set to Yes then the file gets re-downloaded only if it was modified, and the Redownload Interval property doesn't apply.
- Redownload Interval – This is the time interval that the adapter will wait to re-download a file. It does this by comparing the filename and URI (for example, ftp://ftp.myftpserver.com:21/*.xml) and the last time the file was downloaded. This is only used if Enable Timestamp comparison is disabled or if the server does not support the MDTM command. The time unit is determined by the Unit property. A value of -1 disables this feature, and a value of 0 causes the file to be redownloaded at each polling cycle.
As a simple example to demonstrate these features, let us create two read-only files called test.xml and test2.xml in the FTP folder:
Figure 10 - Read-only files on FTP folder
Then let’s configure the FTP adapter on a receive location to poll that folder with the following settings
on the Polling section:
Figure 11 - FTP Adapter polling settings
These settings tell the adapter to poll for files every 30 seconds, but to download previously downloaded files again only after 300 seconds (5 minutes), without checking for changes to their modified timestamp.
Let’s then create a send port with a filter on the receive port name so that any messages received via
the FTP receive port will be routed to this send port:
Figure 12 - Send port filter
And then configure it to save the messages to a file location. The filename can combine the original file name picked up by the FTP adapter with the message GUID for uniqueness:
Figure 13 - File send port configuration
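For example, the destination file name on the File transport could use the standard File adapter macros, with a hypothetical naming pattern like this:

%SourceFileName%_%MessageID%.xml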
When we first enable the receive location we can see the two messages in the destination folder that correspond to the test.xml and test2.xml files that we had in the FTP folder:
Figure 14 - Files created by first FTP poll
Based on our configuration of the FTP receive location BizTalk then redownloads the files only once
every 5 minutes:
Figure 15 - Files created by subsequent FTP polls
So how does this feature work under the covers? It wouldn't be possible without saving the details of the files somewhere for comparison on the next polling cycle. There are two simple tables in the BizTalk MessageBox database that hold the file information, namely adap_UriKeys and adap_DownloadedFiles:
Figure 16 - Tables that support the read-only FTP settings and last downloaded timestamp
BizTalk adds the URI and the timestamp comparison configuration to the adap_UriKeys table, and
creates one entry per file for that URI on the adap_DownloadedFiles table (linked by the ID column).
The value in the FILETIMESTAMP column when using the Redownload Interval feature is the last time the file was downloaded. BizTalk uses this to decide if the file should be redownloaded or not. If so, it downloads the file and updates the table with the latest download timestamp.
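For reference, the relationship between the two tables can be inspected with a read-only query along these lines (a sketch only: the join on the ID column follows the description above and the exact schema may differ; as noted later in this section, never modify these tables manually):

-- Read-only inspection of the FTP adapter bookkeeping tables (a sketch)
SELECT u.ID, u.ENABLETIMECOMPARISON, f.FILETIMESTAMP
FROM dbo.adap_UriKeys AS u
JOIN dbo.adap_DownloadedFiles AS f ON f.ID = u.ID;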
What if we now modified the configuration of the receive location to enable timestamp comparison?
Figure 17 - Enable Timestamp comparison setting
If the FTP server supports the MDTM command then the adapter is able to get the file timestamp. When the adapter next polls for messages it will update the ENABLETIMECOMPARISON column of the adap_UriKeys table to 1, and update the FILETIMESTAMP column of the adap_DownloadedFiles table with the last modified date of each file (instead of the last downloaded date):
Figure 18 - Tables that support the read-only FTP settings and last modified timestamp
From then on BizTalk does not use the Redownload Interval property; it is not applicable when the Enable Timestamp comparison property is enabled. The adapter only downloads the file again when the timestamp saved in the table differs from the modified timestamp of the file on the server. If we then make a change only to the test2.xml file, for example:
Figure 19 – Modified timestamp of test2.xml file
BizTalk will pick it up on the next polling interval because its timestamp has changed, but it will not pick
up test.xml because it has not changed:
Figure 20 - Files created by subsequent FTP polls and latest test2.xml download
The table below is taken from the FTP adapter documentation and lists when each of the three settings
applies and the expected behavior:
Delete After Download | Enable Timestamp Comparison | Redownload Interval | Adapter Behavior
--------------------- | --------------------------- | ------------------- | ----------------
Yes                   | Not applicable              | Not applicable      | The adapter deletes a file from the FTP server after downloading it. This is the default behavior of the adapter.
No                    | Yes                         | Not applicable      | The adapter does not delete a file from the FTP server after downloading it. Instead the adapter compares the file's last modified timestamp using the MDTM command. Depending on the timestamp, the adapter downloads the file again.
No                    | No                          | Applicable          | The FTP adapter downloads a file from the FTP server after the interval you specify, irrespective of whether the file has been modified or not.

Table 1 - Read-only feature details from the FTP adapter documentation
Here are some very important tips on this feature that could save a lot of grief if noted:

- If you change settings like the Redownload Interval, BizTalk will still have the last timestamps in those tables, and this might affect when it downloads the files next.
- While we discuss the tables used to support the new adapter feature in this article, DO NOT modify BizTalk Server database tables and their contents manually without being instructed to do so by Microsoft.
Support for atomic file transfer in ASCII mode

For the ASCII file transfer mode the adapter now supports the use of a temporary folder when sending messages to an FTP server. This is an important feature as many FTP servers do not implement file locks, and consumers might start reading the file before BizTalk Server has fully written it.
The adapter first writes the file to the temporary folder on the FTP server, configured on the Temporary
Folder property, and then moves the file to the final destination folder. If there is a problem during the
upload the adapter restarts the upload when in ASCII mode. For the binary mode it resumes the upload
as it did previously.
In BizTalk 2009, configuring the temporary folder of a send port with the ASCII file transfer mode caused a message box exception to pop up:
Figure 21 - Temporary folder and transfer mode exception
In BizTalk Server 2010, for send ports, we can now configure the temporary folder even when the file
transfer mode is set to ASCII. Let us use the following configuration:
Figure 22 - ASCII transfer mode and Temporary Folder for FTP send
Note the “Temp” folder configured on the Temporary Folder setting. Now when BizTalk Server is
transmitting a file to the FTP server it first writes the file to the temporary folder on the remote FTP
server then moves it to the final folder. This is easy to see in the FTP log file:
< 220 Microsoft FTP Service
> USER ThiagoWin7-PC|thiagowin7
< 331 Password required for ThiagoWin7-PC|thiagowin7.
> PASS xxxx
< 230 User logged in.
> PWD
< 257 "/" is current directory.
> PWD
< 257 "/" is current directory.
> PWD
< 257 "/" is current directory.
> CWD Temp
< 250 CWD command successful.
> PWD
< 257 "/Temp" is current directory.
> TYPE A
< 200 Type set to A.
> PASV
< 227 Entering Passive Mode (192,168,1,7,196,177).
> STOR {77d3175c-3f7c-411c-b829-718dd7a98c50}
< 125 Data connection already open; Transfer starting.
< 226 Transfer complete.
> RNFR {77d3175c-3f7c-411c-b829-718dd7a98c50}
< 350 Requested file action pending further information.
> RNTO //test.xml
< 250 RNTO command successful.
The log above clearly shows that the adapter first browses to the ‘Temp’ directory, then stores the file
with a GUID file name, before finally moving the file to the destination folder. If the transfer is
interrupted BizTalk restarts the ASCII mode transfer later.
A few tips on this feature:
Don’t forget to check that the FTP send port host instance user has read and write permissions
to the temporary and the final folders on the FTP server
The Temporary Folder value is relative to the root folder that the user is presented to when first
logging in to the FTP server
This setting only applies to the transmission of files (send port). On a receive location only binary
mode supports a local temporary folder. For the ASCII mode BizTalk will restart the file
download from the FTP Server.
Comparison to other adapters

By far the most commonly utilized FTPS adapter out there is the one from n/Software (http://www.nsoftware.com/products/biztalk). It has some advantages over the out of the box adapter:
- Supported natively in 64-bit. The out of the box BizTalk FTP adapter is still only supported on a 32-bit host
- Enhanced logging and extra logging options
- Allows selection of certificates from stores other than the host instance user store
- Advanced tuning options like the number of threads used by each host handler. Note though that the out of the box FTP adapter has been updated with performance optimizations in this latest version.
- The n/Software adapter license includes several other adapters, like SFTP (SSH File Transfer Protocol) and Secure Email, as well as pipeline components like Zip and GZip compression
The main disadvantages of the n/Software adapter are:
- Requires licensing on top of your BizTalk Server license and yearly support costs
- Doesn't have as many settings for file redownloads. It has a Delete Mode setting but no Redownload Interval or Enable Timestamp comparison setting.
The n/Software adapters are still an attractive option if the adapter bundle provides you with extra features required by your projects (like SFTP support, for example), but I would seriously consider the out of the box adapter for FTP and FTPS connectivity first.
Conclusion

Secure FTP support, downloading read-only files, and atomic file transfer for the ASCII mode are very welcome additions to the BizTalk FTP adapter. In this article I have described each of them in detail.
As a final note, remember that the host for the FTP receive adapter still needs to be clustered if you
have more than one instance of the host running (http://msdn.microsoft.com/en-
us/library/aa561801(v=BTS.20).aspx). The new features do not change this requirement.
Acknowledgements

I would like to thank fellow BizTalk MVP Kent Weare (http://kentweare.blogspot.com/), Sameer Chabungbam from the Microsoft product team and the editors for the technical review and valuable feedback.
Creating business documents

By Toon Vanhoutte [[email protected]], BizTalk Consultant @ Delaware Consulting.
Audience: BizTalk Developers
Technologies: BizTalk Server 2006/2009, VS.NET 2005/2008, Word 2007, Adobe Acrobat 9 Pro
Skill level: Beginner to Expert
Thanks to my colleague Simon Dedeken for sharing his Pdf experiences.
Introduction

Today's integration scenarios handle more and more complex processes, so more human interaction is often required. In order to respond to this trend, BizTalk did an excellent job with the introduction of the WSS adapter. This situation creates a growing need for business views of technical messages: for example invoices, reminder letters, sales reports, order approvals, subscription forms, and so on, preferably of course in Word or Pdf.
This article describes two ways of creating Word documents and two ways of creating Pdf documents.
At the end there's a comparison between the different methods and I suggest the appropriate method
for each scenario.
The BizTalk flow is always similar: a send port picks up the source XML message, and a custom pipeline component transforms it into a Word/Pdf document. More information on creating custom pipeline components can be found here.
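Each method plugs its logic into the Execute method of such a send pipeline component. As a common point of reference, a minimal skeleton could look like this (a sketch only: the class name is hypothetical, and the IBaseComponent, IComponentUI and IPersistPropertyBag plumbing that a real pipeline component also needs is omitted):

using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class DocumentEncoderComponent : IComponent
{
    //Design-time property, set on the component in the pipeline designer
    private string templatePath;
    public string TemplatePath
    {
        get { return templatePath; }
        set { templatePath = value; }
    }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage inmsg)
    {
        //The method-specific logic from the sections below goes here:
        //read inmsg.BodyPart.Data, produce the Word/Pdf stream, and
        //assign it back to inmsg.BodyPart.Data before returning.
        return inmsg;
    }
}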
Creating Word document based on Word template

This method makes use of the Office Open XML file format. Open XML allows you to add a custom XML part to a Word template and map the content of that XML to the document itself. In BizTalk, a pipeline component will receive a similar XML message and load it into the Word template.
1. Define the lay-out of the Word document in Word 2007.
2. Add the developer tab to Word 2007. Word Options >> Show Developer tab in the Ribbon
3. Enter Design Mode and place Text Content Controls wherever you want them
4. Download the Word Content Control Toolkit here and install.
5. Open the Word document with the Word Content Control Toolkit and create the data mapping:
Create a new custom XML part, in the right pane.
Upload an existing validated test XML message, in the right pane.
Now it’s time to do the binding between the XML test message and the Word Content Controls.
Drag & drop the XML nodes from the Bind View tab to the corresponding Content Controls.
6. Create a send pipeline component with one configuration parameter: TemplatePath.
Reference the .NET library DocumentFormat.OpenXml.
The Execute method should look like this:
//Retrieve the XML input stream
Stream inXmlStream = inmsg.BodyPart.Data;
//Load the template into a MemoryStream
MemoryStream outWordStream = new MemoryStream();
byte[] templateBytes = File.ReadAllBytes(TemplatePath);
outWordStream.Write(templateBytes, 0, templateBytes.Length);
using (WordprocessingDocument doc = WordprocessingDocument.Open(outWordStream, true))
{
//Delete the existing Custom XML Part
MainDocumentPart mainPart = doc.MainDocumentPart;
mainPart.DeleteParts<CustomXmlPart>(mainPart.CustomXmlParts);
//Add the XML input stream as Custom XML Part
CustomXmlPart customXml = mainPart.AddNewPart<CustomXmlPart>();
CopyStream(inXmlStream, customXml.GetStream());
}
//Return the created Word Document as stream
outWordStream.Seek(0, SeekOrigin.Begin);
inmsg.BodyPart.Data = outWordStream;
return inmsg;
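The snippet above, and the later snippets in this article, assume a small CopyStream helper. A minimal implementation (using System.IO) could be:

//Copy one stream to another in 4 KB chunks
private static void CopyStream(Stream source, Stream target)
{
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        target.Write(buffer, 0, bytesRead);
    }
}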
7. Create a send pipeline with the created component. Configure the correct TemplatePath.
Send an XML test message to a send port that uses this pipeline.
Creating Word document based on XSL mapping

This method also makes use of the Office Open XML file format. The main document body (document.xml) of a Word document is actually an XML message in a pre-defined Office Open XML structure. By using this method, you will use an XSL mapping to create the main document body, and a pipeline component will wrap it into a Word document package.
1. Define the lay-out of the Word document in Word 2007. Save as .docx.
Note that using header or footer templates will create references from the document body (document.xml) to images, which can result in conflicts; these references appear in a dedicated section of the document body. In step 6 you will see how header and footer templates can be accomplished.
2. Change the extension from .docx to .zip. Explore the zip, open word/document.xml and copy its contents. More information about the structure of this XML can be found here.
3. Create an XSL (Invoice_To_Word.xsl), paste the Word XML and map the source XML message.
4. Create a BizTalk schema for a generic Word document. Set targetnamespace#root to
http://schemas.openxmlformats.org/wordprocessingml/2006/main#document.
5. Create a BizTalk map between source XML message and Word document, using the custom XSL.
6. Create a send pipeline component with one configuration parameter: TemplatePath.
Use the System.IO.Packaging namespace, which is part of WindowsBase.dll (Microsoft\Framework\v3.0\WindowsBase.dll).
The Execute method should look like this:
//Declare some variables
String contentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml";
String relationshipType = "http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument";
Uri docPartUri = new Uri("/word/document.xml", UriKind.Relative);
MemoryStream outWordStream = new MemoryStream();
Package pkg;
PackagePart mainPart;
//Retrieve the XML input stream
Stream inXmlStream = inmsg.BodyPart.Data;
//Create a Word document from scratch (when NOT using headers and footers)
if (TemplatePath == null)
{
//Create package
pkg = Package.Open(outWordStream, FileMode.Create, FileAccess.ReadWrite);
//Create document.xml
mainPart = pkg.CreatePart(docPartUri, contentType);
}
//Use existing template (when using header and footer templates)
else
{
//Get template stream
byte[] templateBytes = File.ReadAllBytes(TemplatePath);
outWordStream.Write(templateBytes, 0, templateBytes.Length);
//Create package
pkg = Package.Open(outWordStream, FileMode.OpenOrCreate, FileAccess.ReadWrite);
//Get document.xml
mainPart = pkg.GetPart(docPartUri);
}
//Copy XML input stream to document.xml
Stream mainPartStream = mainPart.GetStream(FileMode.OpenOrCreate,
FileAccess.ReadWrite);
CopyStream(inXmlStream, mainPartStream);
mainPartStream.Close();
//Set relationship and flush
if (TemplatePath == null)
{
PackageRelationship pkgRelationship = pkg.CreateRelationship(docPartUri,
TargetMode.Internal, relationshipType, "rId1");
}
pkg.Flush();
pkg.Close();
//Return the created Word Document as stream
outWordStream.Seek(0, SeekOrigin.Begin);
inmsg.BodyPart.Data = outWordStream;
return inmsg;
When using header or footer templates, you must specify the path to the Word document that you used to create the XSL. In that document, all content should be removed except the templates.
7. Create a send pipeline with the created component. Configure the TemplatePath, if needed.
Send an XML message (similar to source XML) to a send port that uses the created map and this
pipeline.
Creating Pdf document based on Adobe Acrobat template

This method makes use of iTextSharp, an open source library for creating and manipulating Pdf documents. By using this method, a pipeline component will map the data from a source XML into a Pdf template. The configuration of this mapping is done in the pipeline component, at design time.
1. Create a Pdf template using Adobe Acrobat (download a trial here). Create a Pdf from a file (for example a Word doc). Choose Forms > Add or Edit Fields. Place the fields (placeholders) wherever you want them.
2. Download the itextsharp.dll here and put it in the GAC.
3. In this sample we will make use of the XPathMutatorStream to retrieve values from the incoming source XML. The XPathMutatorStream allows you to evaluate XPaths in a streaming way. This is perfect for large messages, because BizTalk doesn't have to read the whole message into memory. The disadvantage is that the XPath functionality is limited. The XPathMutatorStream is part of the Microsoft.BizTalk.Streaming and Microsoft.BizTalk.XPathReader libraries, which are in the GAC.
Retrieve both libraries from the GAC using the Command Prompt:
copy %systemroot%\assembly\GAC_MSIL\Microsoft.BizTalk.Streaming\3.0.1.0__31bf3856ad364e35\Microsoft.BizTalk.Streaming.dll "DestinationPath"
copy %systemroot%\assembly\GAC_MSIL\Microsoft.BizTalk.XPathReader\3.0.1.0__31bf3856ad364e35\Microsoft.BizTalk.XPathReader.dll "DestinationPath"
4. Create a PdfTool class library to create a Pdf document in memory. Reference the itextsharp.dll.
public bool CreatePdf(Stream templateStream, Stream outputStream,
Dictionary<string, object> inputValues, bool flatten)
{
var reader = new PdfReader(templateStream);
bool ok = CreatePdf(reader, outputStream, inputValues, flatten);
reader.Close();
return ok;
}
private bool CreatePdf(PdfReader reader, Stream outputStream,
Dictionary<string, object> inputValues, bool flatten)
{
// Prepare the form
var stamper = new PdfStamper(reader, outputStream);
AcroFields form = stamper.AcroFields;
// Fill the form
bool ok = FillForm(form, inputValues);
// Remove all fields from the pdf
stamper.FormFlattening = flatten;
// Cleanup, but DON'T CLOSE THE STREAM
stamper.Writer.CloseStream = false;
stamper.Close();
return ok;
}
private bool FillForm(AcroFields form, Dictionary<string, object> inputValues)
{
if (inputValues == null)
return true;
bool ok = true;
//Loop all inputValues
foreach (var current in inputValues)
{
var fieldName = current.Key;
var value = current.Value;
if (!SetField(form, fieldName, value))
ok = false;
}
return ok;
}
private bool SetField(AcroFields form, string fieldName, object value)
{
if (form.SetField(fieldName, value.ToString()))
return true;
return false;
}
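For reference, the PdfTool class could be exercised like this (a hypothetical usage: the paths and field names are placeholders, and the System.IO and System.Collections.Generic namespaces are assumed):

PdfTool tool = new PdfTool();
Dictionary<string, object> values = new Dictionary<string, object>();
values.Add("InvoiceNumber", "2010-0001");
values.Add("CustomerName", "Contoso");
using (Stream template = File.OpenRead(@"C:\Templates\Invoice.pdf"))
using (Stream output = File.Create(@"C:\Out\Invoice_2010-0001.pdf"))
{
    //Fill the template's form fields and flatten the result
    tool.CreatePdf(template, output, values, true);
}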
5. Create a send pipeline component with two configuration parameters: TemplatePath and PdfMappingCollection. The collection defines the mapping between the source XML and the Pdf template as <PdfFieldName, XPath> pairs: each pair maps the content of an XPath to a Pdf field of the template. Read here how you can accomplish this kind of design-time property.
Add a reference to the PdfTool class.
Add a reference to Microsoft.BizTalk.Streaming.dll and Microsoft.BizTalk.XPathReader.dll.
The Execute method should look like this:
//Declare variables (PdfParameters is a Dictionary<string, object> field
//populated by the xpathFound delegate below)
PdfTool PdfTool = new PdfTool();
MemoryStream outPdfStream = new MemoryStream();
XPathCollection xpathQueries = new XPathCollection();
//Load the pdf template into a MemoryStream
MemoryStream templateStream = new MemoryStream(File.ReadAllBytes(TemplatePath));
templateStream.Seek(0, SeekOrigin.Begin);
//Add all Xpaths of PdfMappingCollection to the XPathCollection
foreach (PDFMapping mapping in PdfMappingCollection)
{
xpathQueries.Add(mapping.XPath);
}
//Read input and when Xpath is found, call the delegate method xpathFound
ValueMutator mutator = new ValueMutator(xpathFound);
XPathMutatorStream mutatorStream = new XPathMutatorStream(inmsg.BodyPart.Data,
xpathQueries, mutator);
XmlTextReader reader = new XmlTextReader(mutatorStream);
while (reader.Read());
//Create Pdf
PdfTool.CreatePdf(templateStream, outPdfStream, PdfParameters, true);
//Return the created Pdf Document as stream
outPdfStream.Seek(0, SeekOrigin.Begin);
inmsg.BodyPart.Data = outPdfStream;
return inmsg;
The delegate method should look like this:
//This method is called when an Xpath is found
private void xpathFound(int matchIdx, Microsoft.BizTalk.XPath.XPathExpression
matchExpr, string origVal, ref string finalVal)
{
foreach (PDFMapping mapping in PdfMappingCollection)
{
if (mapping.XPath == matchExpr.XPath.ToString())
//Add to Pdf Parameters
PdfParameters.Add(mapping.PdfFieldName, origVal);
}
}
6. Create a send pipeline with the created component. Configure TemplatePath and PdfMappingCollection.
Send an XML message (similar to source XML) to a send port that uses this pipeline.
Creating Pdf document based on XSL mapping

This method makes use of XSL-FO (XSL Formatting Objects), which is the part of the XSL specification that covers the formatting of XML documents. By using this method, you will transform a source XML message into an XSL-FO document, which will be converted to Pdf with the Apache Formatting Objects Processor (FOP).
1. Create an XSL-FO mapping (Invoice_To_Pdf.xsl) to map the source XML into the XSL-FO format. Here are some options for doing this:
- Write it yourself.
- Convert a Word document into the XSL-FO format (only in Word 2003). More info here.
- Convert html into the XSL-FO format. More info here.
- ...
2. Create a BizTalk schema for a generic Pdf document. Set targetnamespace#root to
http://www.w3.org/1999/XSL/Format#root.
3. Create a BizTalk map between source XML message and Pdf document, using the custom XSL-FO.
4. Download ApacheFop.Net.dll here.
Sign the assembly ApacheFop.Net.dll and add it to the GAC. Use the Visual Studio Command Prompt:
Generate a key file: sn -k ApacheFop.snk
Get the MSIL for the assembly: ildasm ApacheFop.Net.dll /out:ApacheFop.Net.il
Rename the original assembly, just in case: ren ApacheFop.Net.dll ApacheFop.Net.dll.orig
Build a new assembly from the MSIL output and your key file: ilasm ApacheFop.Net.il /dll /key=ApacheFop.snk
5. Create a send pipeline component without configuration parameters.
Reference the .NET library ApacheFop.Net.dll
Reference Vjslib.dll, which is part of Microsoft Visual J# Version 2.0 Redistributable Package. If not
installed, download it here.
The Execute method should look like this:
//Retrieve the XSL-FO input stream
Stream inXmlStream = inmsg.BodyPart.Data;
//Buffer the XSL-FO input stream and convert it to a java InputSource
//for the Apache driver
MemoryStream inBuffer = new MemoryStream();
byte[] chunk = new byte[8192];
int bytesRead;
while ((bytesRead = inXmlStream.Read(chunk, 0, chunk.Length)) > 0)
    inBuffer.Write(chunk, 0, bytesRead);
sbyte[] inBytes = ToSByteArray(inBuffer.ToArray());
InputSource inStream = new InputSource(new ByteArrayInputStream(inBytes));
//Declare the output stream for the Apache driver
ByteArrayOutputStream outStream = new ByteArrayOutputStream();
//Run the Apache driver to convert XSL-FO input stream to PDF
Driver driver = new Driver(inStream, outStream);
driver.setRenderer(Driver.RENDER_PDF);
driver.run();
outStream.close();
//Return the created Pdf Document as stream
MemoryStream outPdfStream = new MemoryStream(ToByteArray(outStream.toByteArray()));
outPdfStream.Seek(0, SeekOrigin.Begin);
inmsg.BodyPart.Data = outPdfStream;
return inmsg;
We use these conversion methods between signed and unsigned byte arrays:
private static SByte[] ToSByteArray(Byte[] source)
{
sbyte[] sbytes = new sbyte[source.Length];
System.Buffer.BlockCopy(source, 0, sbytes, 0, source.Length);
return sbytes;
}
private static Byte[] ToByteArray(SByte[] source)
{
byte[] bytes = new byte[source.Length];
System.Buffer.BlockCopy(source, 0, bytes, 0, source.Length);
return bytes;
}
6. Create a send pipeline with the created component.
7. Create a send port that runs in a 32-bit only host instance. The reason is that the Microsoft Visual J#
Version 2.0 Redistributable Package is a 32-bit only release.
8. Send an XML message (similar to source XML) to the send port that uses the created map and this
pipeline.
Conclusion

Below you can find a comparison between the 4 methods. When you need to determine the best method, follow these 2 rules:

- If you have the choice, create a Word document. It's easier and requires less additional software than Pdf.
- If your content is dynamic, for example repeating fields depending on the source XML, use XSL mapping.
Automating Deployment of BizTalk Change Requests

Author: Nick Walker, Integration Services Team, Microsoft IT
Contributors: Nikhil Tayal, Integration Services Team, Microsoft IT
6/30/2010
Overview

This article is a brief overview of the Standard Onboarding Deployment Automation (SODA) tool developed by the Microsoft Integration Services Team, part of Microsoft IT. It contains background about the need for a solution like this framework in a large enterprise, discusses how the framework meets that need, and demonstrates the ways that it provides value.
This article is intended for readers interested in understanding the value proposition of a tool like SODA.
No technical knowledge is necessary. Basic knowledge of enterprise integration concepts is assumed.
Background – integration change management with BizTalk Server

In electronic data exchange, an integration application represents a collection of knowledge about the methods used to integrate two or more systems. This knowledge takes the form of software components and documentation. Maintaining these artifacts requires careful organization and standardization.
Components of integration workflows within BizTalk Server, such as business message formats, message
transformation rules, business rules and configuration settings, are flexible digital artifacts. Microsoft
and its trading partners capitalize on this flexibility to ensure optimal business functionality, meaning
that changes to integration applications are frequent. However, even small changes can incur a great
deal of overhead:
- Everyone involved must understand the existing solution at the appropriate level. For example, software engineers must be familiar with the technical implementation of the solution, and project managers must understand the requirements in great detail.
- Change management activities such as signoff-gathering are required to ensure consensus and pass compliance audits.
- Changes to technical implementation must be designed, developed and reviewed.
- Workflows must be tested to ensure that new functionality works as specified and existing functionality hasn't been broken.
- The complete application must be deployed and configured in multiple, individually-configured environments.
- Documentation, including specifications, user manuals, maintenance guides and deployment guides, must be revised to reflect the updated solution.
Deployment – an opportunity for improvement

The Integration Services Team manages many integration applications for a wide variety of customers and partners. The volume of work required to keep up with change requests presents a daunting
organizational challenge. In particular, the deployment of changes to integration applications requires a
significant amount of human interaction, precise communication, and attention to detail.
BizTalk Server includes out-of-the-box support for packaging application installations for deployment,
but in order to use this functionality, users must have direct administrative access to BizTalk
environments and must possess BizTalk-specific knowledge. While project managers typically possess
general knowledge of integration concepts such as address and transport type, they often lack BizTalk-
specific administrative knowledge about concepts such as host configurations and handlers.
Furthermore, the contents of installation packages produced by BizTalk Server tools cannot be examined
in detail and approved by support personnel prior to deployment.
As a result, deployment procedures have always required the support team to deploy and configure the
application into the target environment by hand. Project managers present the support team with
configuration documentation and files containing compiled BizTalk artifacts to be installed, and the
operations team manually interacts with each processing server to install and configure the application.
This is a tedious and error-prone process with many opportunities for subtle mistakes.
Standard Onboarding Deployment Automation - SODA

The Integration Services development team recognized the potential for improvement of this extremely
common process. What if instead of storing configuration information in documents sent to the support
team, it could be captured in a friendly interface and stored to a central database? Sets of changes could
be exposed to the support team as deployment requests that could be approved or denied, and an
automated agent driven by the configuration data could automatically deploy the configuration and
artifacts. This was the driving vision for SODA.
SODA is a distributed enterprise application with a Silverlight interface that an architect or tech lead can
use to easily create “configsets.” A configset is a package of BizTalk application configuration bundled
with deployable BizTalk artifacts. Users can use the tool to create a configset for an existing application,
which populates the tool’s configuration screens with live configuration from the target BizTalk
application. Using a friendly interface that mimics what they are currently familiar with, users can make
changes to the configuration and provide files containing compiled BizTalk artifacts that will replace the
currently deployed artifacts.
SODA enables standard onboarding project managers to create "configsets" of deployable BizTalk artifacts and configuration and send them to the operations team in the form of a deployment request.
Once a project manager has completed work on a configset, he or she can finalize it and request that it
be deployed. SODA then emails notifications to the operations team and makes the deployment
requests visible in their view of the tool, which they can use to review the detailed configuration and
approve or deny deployment to the target environment. If the operations team approves the request,
the SODA deployment engine is activated at the scheduled deployment time and uses deployment agent
services installed on each BizTalk processing server to automatically install and configure the new
artifacts to the target environment.
Technology (BizTalk-related)

Much of SODA's implementation is entirely separate from BizTalk Server. The user interface is
implemented in Silverlight and driven from a database with a schema designed to contain BizTalk
application information.
SODA has two primary touch points with BizTalk Server:

- BizTalk Interrogator Service: A WCF service that queries live BizTalk environments using the BizTalk Explorer Object Model (ExplorerOM) to retrieve configuration from them. This service, installed on a web server, uses dynamic assembly loading to load the BizTalk ExplorerOM and other required assemblies specific to a given version of BizTalk Server. It retrieves the information about BizTalk artifacts by connecting ExplorerOM to a target server hosting the BizTalk Management DB (BizTalkMgmtDb) and populates the SODA UI with live configuration information to provide a basis for changes (see the sketch after this list).
- Deployment engine: A component that uses ExplorerOM to deploy artifacts and configuration to BizTalk environments. Installed on each server in each BizTalk environment, the engine relies on a Windows service called the Deployment Agent to interact with the SODA database.
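To give a flavor of the kind of query the Interrogator Service performs, here is a minimal ExplorerOM sketch (illustrative only and not SODA's actual code; the management database server name is a placeholder):

using System;
using Microsoft.BizTalk.ExplorerOM;

class InterrogatorSketch
{
    static void Main()
    {
        //Connect ExplorerOM to the BizTalk Management DB
        BtsCatalogExplorer catalog = new BtsCatalogExplorer();
        catalog.ConnectionString =
            "Server=MGMTDBSERVER;Database=BizTalkMgmtDb;Integrated Security=SSPI;";

        //Enumerate applications and their send ports
        foreach (Application app in catalog.Applications)
        {
            Console.WriteLine(app.Name);
            foreach (SendPort port in app.SendPorts)
            {
                Console.WriteLine("  {0} -> {1}", port.Name, port.PrimaryTransport.Address);
            }
        }
    }
}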
Benefits

- Reduction of effort and human error associated with deployment – Manual deployment is a tedious, error-prone process carried out by support staff working from configuration entered in Excel sheets as part of another human process. With SODA, configuration is entered in a validating user interface, and the data is simply reviewed for correctness before being saved. The ability to schedule deployments enables deployments to take place at any hour on any day without human intervention.
- Reduced overhead through standardized communication between project teams and operations teams – SODA sets a concrete pattern for communication of deployment-related information between project managers and support teams, ensuring that project managers are aware of all the information needed for a deployment, and making that information easily visible and reviewable for operations staff. Automated email notifications ensure that requests or status updates don't go unnoticed.
- Visibility and reuse of current configuration – When creating a configset for an application, the current configuration of the application is loaded into the tool for the project manager to work off of, reducing time spent accessing environments or requesting information from the operations team.
- Relational data storage – By storing deployment data in a database instead of Excel or Word documents, project managers and operations staff can automatically track, search, report on, categorize, sort and archive it. Future expansions of the tool will store more kinds of data, further expanding possibilities for useful reports.
Future

The Integration Services Team has exciting plans for future enhancements to SODA that will deliver value well beyond what it is currently capable of. Potential features discussed include:
- Ability to configure, deploy and work with more BizTalk features and artifacts, including orchestrations, business rules, parties, certificates, currently-unsupported message transports, pre-constructed binding files and more.
- Replacement of document-centric partner information worksheets with data entry and storage tools built into SODA, allowing for data validation and the advantages of storing partner data in a relational database (query, search, report, archive, and track).
- Copying of configuration blocks from one configset to another, enabling easy migration of bulk configuration with small adjustments between environments.
- Creation of new applications, not just changes to existing applications.
Summary
Managing changes to a large array of integration applications requires significant overhead, and seizing opportunities to simplify change management processes is critical to keeping the business running on schedule and without mistakes. SODA, a distributed enterprise application, ensures that change requests are deployed through a standardized process and automates deployment tasks to reduce human error, letting operations staff focus on more pressing work.
Request Response Messaging Pattern
Author: Elisa Palombi, MCS, Italy
Windows Server 2008 – SQL Server 2008 – BizTalk Server 2009
In this article I will illustrate a custom pipeline that implements a Request-Response pattern using only the messaging engine; an orchestration will not be used to create the response message.
The pipeline is used in a WCF receive location that receives a Request message from the client and then sends back a Response message after the input message is validated. The client sends batched messages, and enveloping is used.
When the receive location accepts the envelope, the pipeline has to extract each single message inside the ANY element and validate the messages found against a set of defined schemas.
The input message is structured as an envelope with an unbounded ANY element that can contain a set of different schemas, something like this:
<root element>
<item_data> any schema </item_data>
<item_data> any schema </item_data>
<item_data> any schema </item_data>
</root element>
The objective is to read each <item_data> element and validate that it contains a well-known schema; otherwise the entire validation fails.
If the validation is successful for each of the ANY elements found, all the messages will be published into the Message Box to be processed by orchestrations, and a response message will be created inside the pipeline as a new IBaseMessage. During Message Box publication of the messages, the response message will also be published and picked up by the receive location. The response message is subscribed directly to the same port, matching the correct connection to ensure that the response is sent back to the correct client.
To implement this pattern I created a custom pipeline component; all the core code is in the Disassemble method, shown below.
//read BizTalk context properties from the input message
const string EPMRRCORRELATIONTOKEN_NAME = "EpmRRCorrelationToken";
const string CORRELATIONTOKEN = "CorrelationToken";
const string ISREQUESTRESPONSE = "IsRequestResponse";
const string REQRESPPIPEID = "ReqRespTransmitPipelineID";
const string EPMRRCORRELATIONTOKEN_NAMESPACE =
"http://schemas.microsoft.com/BizTalk/2003/system-properties";
object epmObj = inmsg.Context.Read(EPMRRCORRELATIONTOKEN_NAME,
EPMRRCORRELATIONTOKEN_NAMESPACE);
object tokenObj = inmsg.Context.Read(CORRELATIONTOKEN,
EPMRRCORRELATIONTOKEN_NAMESPACE);
object rrObj = inmsg.Context.Read(ISREQUESTRESPONSE,
EPMRRCORRELATIONTOKEN_NAMESPACE);
object pipeObj = inmsg.Context.Read(REQRESPPIPEID, EPMRRCORRELATIONTOKEN_NAMESPACE);
try
{
//read bodypart of the input message
IBaseMessagePart msg = inmsg.BodyPart;
//read stream data inside input message and put it in XMLTextReader
XmlTextReader InputXmlTextReader = new XmlTextReader(msg.Data);
InputXmlTextReader.WhitespaceHandling = WhitespaceHandling.None;
In these first lines I save some context properties from the incoming message so I can assign the same values to the output message. These properties are:
- EpmRRCorrelationToken
- CorrelationToken
- IsRequestResponse
- ReqRespTransmitPipelineID
The GetSchema method returns an XmlSchemaSet object containing all the schemas against which the pipeline has to validate.
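The article does not list GetSchema itself; a hypothetical implementation (the schema file paths and target namespace are placeholders) could look like this:

//hypothetical GetSchema implementation: builds the XmlSchemaSet used for
//payload validation (requires using System.Xml.Schema)
private XmlSchemaSet GetSchema()
{
    XmlSchemaSet schemaSet = new XmlSchemaSet();
    //add every schema that a payload is allowed to match
    schemaSet.Add("mynamespace", @"C:\Schemas\PayloadSchemaA.xsd");
    schemaSet.Add("mynamespace", @"C:\Schemas\PayloadSchemaB.xsd");
    schemaSet.Compile();
    return schemaSet;
}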
//initialize the set of XSD schemas to validate against
XmlSchemaSet payloadSchemaList = GetSchema();
//initialize the list that will collect the XML payloads to validate
ArrayList payloadMxlList = new ArrayList();
//create the settings used to validate each payload
XmlReaderSettings settings = new XmlReaderSettings();
settings.Schemas.Add(payloadSchemaList);
settings.ValidationType = ValidationType.Schema;
settings.ValidationEventHandler += new
ValidationEventHandler(SchemaValidationEventHandler);
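The SchemaValidationEventHandler wired up here is not listed in the article; a minimal sketch simply clears the payloadIsValid flag so the read loop below can abort:

//hypothetical validation callback: any schema violation marks the
//current payload as invalid (requires using System.Xml.Schema)
private void SchemaValidationEventHandler(object sender, ValidationEventArgs e)
{
    payloadIsValid = false;
}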
To execute the schema validation I use an XmlReader, which requires an XmlReaderSettings object as an input parameter. I load all of the schemas and use a TextReader to read the payload.
//read each element of the input envelope
while (!InputXmlTextReader.EOF)
This loop reads the entire input message to the end; for each <item_data> element I read the content, which is a single message.
{
//when the pointer arrives at item_data (the root node of the payload message),
//proceed with its validation
if ((InputXmlTextReader.LocalName == "item_data") &&
(InputXmlTextReader.NamespaceURI == "mynamespace"))
{
#region Read every item_data element
//load the payload into a TextReader and into an in-memory string,
//moving the reader forward as well
string payload = InputXmlTextReader.ReadInnerXml();
My objective is to validate the payload string: it has to match one of the schemas contained in the XmlSchemaSet added to the XmlReader.
payloadMxlList.Add(payload);
TextReader tr = new StringReader(payload);
//xmlreader creation with inputpayload and schema
XmlReader reader = XmlReader.Create(tr, settings);
// read whole file and check it
payloadIsValid = true;
//the while loop below performs the validation
while (reader.Read())
{
if (payloadIsValid == false)
{
break; // abort if error
}
}
At the end of the read, if the flag is false the validation has failed; otherwise it succeeded.
if (payloadIsValid == false)
{
#region Close readers
if (reader != null)
{
reader.Close();
}
if (tr != null)
{
tr.Close();
}
#endregion
break; // abort if error
}
#region Close readers
if (reader != null)
{
reader.Close();
}
if (tr != null)
{
tr.Close();
}
#endregion
#endregion
}
else
{
//move the reader one step forward
InputXmlTextReader.Read();
}
}
If all the payloads contain valid schemas, I loop over the ArrayList to extract each payload and load it into the BizTalk pipeline queue.
//if the loop is finished and all payloads are correct
if (payloadIsValid)
{
#region Preparing all payload to send to MSGBOX
string rootname = string.Empty;
string ns = string.Empty;
for (int i = 0; i < payloadMxlList.Count; i++)
{
TextReader tr2 = new StringReader(payloadMxlList[i].ToString());
XmlReader read = XmlReader.Create(tr2);
while (read.Read())
{
if (read.IsStartElement())
{
if (read.Prefix == String.Empty)
{
rootname = read.LocalName;
}
ns = read.NamespaceURI;
break;
}
}
//write to queue to msgbox method
SendPayloadToMsgBox(pc, payloadMxlList[i].ToString(), ns, rootname);
}
#endregion
At the end I also publish into the queue a new message that I create manually: the WS Response message with the success response code. Note that I pass to the message-creation method all the context properties I saved at the beginning of the Disassemble method.
#region Create an OK response Message
SendWSResponseToMsgBox(pc,
CreateWSResponseMessage("publish", "topic", "itemid", "subscriptionid",
"Operation Complete Successfully", "1000"),
"mynamespace", "status", epmObj, tokenObj, rrObj, pipeObj);
#endregion
}
If the validation fails, I publish a custom message that I create manually: the WS Response message with the error response code. Again, I pass to the message-creation method all the context properties I saved at the beginning of the Disassemble method.
else
{
//one or more payloads are wrong, preparing a status response with error
#region Create a KO response Message
SendWSResponseToMsgBox(pc,
CreateWSResponseMessage("publish", "topic", "itemID", "subscriptionID",
"One or more Payload was not in the crrect format", "1103"),
"mynamespace",
"status",
epmObj, tokenObj, rrObj, pipeObj);
#endregion
}
#region Close input message reader
if (InputXmlTextReader != null)
{
InputXmlTextReader.Close();
}
#endregion
}
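The SendPayloadToMsgBox method is not listed in the article; assuming it mirrors the response publication path shown later, a sketch might look like this (the UTF-8 encoding choice is an assumption):

//hypothetical sketch of SendPayloadToMsgBox: wraps a validated payload in a
//new IBaseMessage, promotes MessageType for subscription matching, and queues
//it so the pipeline's GetNext method can hand it to the Message Box
private void SendPayloadToMsgBox(IPipelineContext pContext, string payload,
    string namespaceURI, string rootElement)
{
    IBaseMessage outMsg = pContext.GetMessageFactory().CreateMessage();
    outMsg.AddPart("Body", pContext.GetMessageFactory().CreateMessagePart(), true);
    object messageType = namespaceURI + "#" + rootElement;
    outMsg.Context.Promote("MessageType",
        "http://schemas.microsoft.com/BizTalk/2003/system-properties", messageType);
    MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(payload));
    stream.Seek(0, SeekOrigin.Begin);
    outMsg.BodyPart.Data = stream;
    _msgs.Enqueue(outMsg);
}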
The SendWSResponseToMsgBox method creates a new IBaseMessage. Inside the body part of the message I load a MemoryStream containing the Response message. Before publishing it I promote some properties and assign them the same values as on the input IBaseMessage received at the beginning of the pipeline. This promotion guarantees that the response is matched to the input message's session, so the correct output reaches the correct client.
The following code describes the publication method with context property promotion:
private void SendWSResponseToMsgBox(IPipelineContext pContext, MemoryStream stream,
string namespaceURI, string rootElement, object emp, object token, object rr, object pipeid)
{
IBaseMessage outMsg;
try
{
string systemPropertiesNamespace =
"http://schemas.microsoft.com/BizTalk/2003/system-properties";
outMsg = pContext.GetMessageFactory().CreateMessage();
outMsg.AddPart("Body", pContext.GetMessageFactory().CreateMessagePart(), true);
object messageType = namespaceURI + "#" + rootElement.Replace("ns0:", "");
object routeDirect = true;
outMsg.Context.Promote("MessageType", systemPropertiesNamespace, messageType);
outMsg.Context.Promote("EpmRRCorrelationToken", systemPropertiesNamespace, emp);
outMsg.Context.Promote("RouteDirectToTP", systemPropertiesNamespace,
routeDirect);
outMsg.Context.Promote("IsRequestResponse", systemPropertiesNamespace, rr);
outMsg.Context.Promote("ReqRespTransmitPipelineID", systemPropertiesNamespace,
pipeid);
outMsg.Context.Promote("CorrelationToken", systemPropertiesNamespace, token);
stream.Seek(0, System.IO.SeekOrigin.Begin);
outMsg.BodyPart.Data = stream;
_msgs.Enqueue(outMsg);
}
catch (Exception ex)
{
System.Diagnostics.EventLog.WriteEntry("BTS-PASS", ex.Message,
EventLogEntryType.Error);
throw new ApplicationException("Error in queueing ResponseTo WS Outgoing messages: " + ex.Message);
}
}
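In a custom disassembler, the messages queued in _msgs are handed to the messaging engine through the GetNext method of IDisassemblerComponent; the article does not show it, but the typical minimal implementation is:

//standard disassembler pattern: the engine calls GetNext repeatedly;
//returning null signals that no more messages remain
public IBaseMessage GetNext(IPipelineContext pContext)
{
    return _msgs.Count > 0 ? (IBaseMessage)_msgs.Dequeue() : null;
}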
Summary
In this article I illustrated a valid alternative to the usual BizTalk pattern.
In many scenarios where I have worked with BizTalk, I have seen large platforms with very heavy traffic from many clients using synchronous communication. Sometimes clients received timeouts due to less than optimal isolation of synchronous operations from asynchronous operations.
Constructing the response message during pipeline validation in this way reduces response time: it unhooks the synchronous communication as soon as possible, giving BizTalk time to work through the asynchronous operations in batch fashion.
This is not always implementable, but when the synchronous operation is only a splitting of messages or a validation, this approach performs better.
Instrumentation Best Practices for High Performance BizTalk Solutions
Authored by: Valery Mizonov (Microsoft Corporation)
Reviewed by: Mark Simms, Jayanthi Sampathkumar, Mustansir Doctor (Microsoft Corporation)
Background
Application code instrumentation has always been crucially important to diagnosing and troubleshooting complex
software solutions. Rich tracing and logging enables the collection of detailed telemetry and diagnostic information
critical to understanding application behavior. This is especially important for distributed server-side applications such
as those deployed on the BizTalk Server platform.
There are many real-world examples where BizTalk developers have added tracing and logging capabilities to their applications using inefficient methods, with an adverse effect on the application's performance under stress. The large number of tracing events emitted by the instrumented code may become a factor limiting throughput or increasing latency. It is therefore imperative to leverage more optimal ways of enriching application code with instrumentation without sacrificing either performance or tracing richness.
This whitepaper discusses an alternative solution for enriching BizTalk solutions with high-performance instrumentation, enabling BizTalk developers to build fully instrumented applications and significantly simplify diagnosing and troubleshooting them regardless of complexity. The whitepaper highlights the benefits of the proposed solution and drills down into specific instrumentation scenarios across the entire family of BizTalk solution artifacts.
The whitepaper is based on real-world lessons learned from customer engagements and reflects the learnings from multiple customer projects led by Microsoft's Windows Server AppFabric Customer Advisory Team (AppFabric CAT).
The Challenges
Through a number of BizTalk projects and customer engagements, we observed that many developers typically start adding instrumentation to their BizTalk applications by taking a dependency on the System.Diagnostics.Trace and System.Diagnostics.DefaultTraceListener components from the .NET Framework, or on the Enterprise Library's Logging Application Block. Developers may also consider third-party logging/tracing components such as log4net. Generally, the use of the Win32 debugging APIs, such as those used by the DefaultTraceListener class, and similar instrumentation packages may not deliver the desired level of agility and performance.
For instance, a dependency on the Win32 Debugging APIs for the purposes of code instrumentation may
result in higher than usual CPU utilization on the host machine while tracing data is being captured by the
DebugView utility - one of the most commonly used tools for intercepting Win32 debug events. In the
example below, the CPU utilization hits 85-90% on average when DebugView is running and collecting
trace events from a BizTalk application running under stress. The tool was consuming the vast majority
of the CPU time due to a large number of events being emitted by the instrumented BizTalk application
and tracked by the DebugView utility into a trace log file.
[Figure: DebugView capturing trace events from an instrumented BizTalk application under stress; CPU utilization averages 85-90%]
Note that running DebugView during stress testing may significantly impact application performance. The tool acts as a debugger and hooks the OUTPUT_DEBUG_STRING_EVENT event. As a result, a BizTalk application instrumented using the System.Diagnostics.Trace component configured with a default trace listener may experience performance degradation, as the application threads are suspended while the debug information is written to the trace.
From a performance perspective, it is important to understand the throughput characteristics of the chosen instrumentation package and determine the rate at which instrumentation events such as tracing method calls can be logged to the output. For example, the benchmarked throughput of the most commonly used System.Diagnostics.Trace component with a default trace listener can be anywhere between 2,500 and 4,000 trace events per second when event capture is enabled in the DebugView tool. Although this number may appear reasonably sufficient, it can turn out to be a factor slowing application performance down, especially in multi-threaded BizTalk applications where hundreds of events can be emitted concurrently from many worker threads.
From an operational perspective, a requirement to restart the BizTalk host instances so that changes to the tracing configuration (such as enabling, disabling, or changing the trace level) can take effect may be highly undesirable or even unacceptable in a production environment. Some instrumentation packages such as Enterprise Library solve this challenge by supporting real-time application configuration refresh; however, this capability remains unavailable to developers not leveraging those packages. For example, there is no support for elasticity in the tracing configuration of the System.Diagnostics.Trace component unless a developer specifically implements this functionality.
The above experiences and observations force us to rethink many design and implementation decisions when it comes to adding code-level instrumentation to BizTalk applications demanding high performance. To this end, we have come forward with the following solution.
The Solution
We recommend the use of a high-performance instrumentation framework leveraging the Event Tracing for Windows (ETW) infrastructure, enabling applications to take advantage of the general-purpose, high-speed tracing facility provided directly by the OS kernel.
ETW is a high-performance, low overhead and highly scalable tracing facility provided by the Windows
Operating System. Using an efficient buffering and logging mechanism implemented in the operating
system, ETW provides a fast, reliable and versatile set of features for logging events raised by user-mode
applications and kernel-mode device drivers alike. ETW was first introduced on Windows 2000. Since
then, various core OS and system components have adopted ETW to instrument their activities, and it's
currently one of the key instrumentation technologies available on Windows platforms.
The ETW event tracing infrastructure is already used extensively by virtually all major infrastructure components inside the BizTalk runtime, including the EPM, transport adapters, the Message Agent, and others. The internal APIs in the BizTalk runtime infrastructure provide a generic implementation of a custom ETW event provider along with helper classes to leverage it from any BizTalk application. One of the major components in the BizTalk tracing APIs is called TraceProvider and can be found in the Microsoft.BizTalk.Diagnostics namespace provided by the Microsoft.BizTalk.Tracing.dll assembly.
The Microsoft.BizTalk.Diagnostics.TraceProvider class conveys the tracing data to a custom ETW provider
and exposes the following key properties and methods:
namespace Microsoft.BizTalk.Diagnostics
{
[ComVisible(true)]
[Guid("748004CA-4959-409a-887C-6546438CF48E")]
public sealed class TraceProvider
{
public TraceProvider(string applicationName, Guid controlGuid);
public uint Flags { get; }
public bool IsEnabled { get; }
public void TraceMessage(uint traceFlags, object format);
public void TraceMessage(uint traceFlags, object format, object data1);
// + a few more overloads of TraceMessage accepting extra data items.
}
}
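For orientation, direct use of this class looks roughly like the following sketch; the application name, control Guid, and trace flag value are arbitrary examples:

//direct use of the BizTalk TraceProvider (requires a reference to
//Microsoft.BizTalk.Tracing.dll)
using System;
using Microsoft.BizTalk.Diagnostics;

public static class TraceProviderSketch
{
    private static readonly TraceProvider Provider =
        new TraceProvider("MyBizTalkApp",
            new Guid("11111111-2222-3333-4444-555555555555"));

    public static void Trace(string message)
    {
        //only pay the tracing cost when a trace session is enabled
        if (Provider.IsEnabled)
        {
            Provider.TraceMessage(0x1, message);
        }
    }
}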
The custom instrumentation framework described in this whitepaper relies on the above component and provides a
rich set of tracing methods for various types of events such as informational, warnings, errors and exceptions. There is
also support for tracing the method calls and measuring execution durations using high-resolution timers.
The key player in the custom instrumentation framework is the ComponentTraceProvider class, which simply acts as a wrapper for TraceProvider.
[Class diagram: core members of the instrumentation framework]
The ComponentTraceProvider class delivers its core functionality by implementing the purpose-built IComponentTraceProvider interface, which exposes the following methods:
// Writes an information message to the trace.
void TraceInfo(string format, params object[] parameters);
// Writes an information message to the trace. This method is provided for optimal
// performance when tracing simple messages which don't require a format string.
void TraceInfo(string message);
// Writes an information message to the trace. This method is intended to be used when
// the data that needs to be written to the trace is expensive to be fetched. The
// method represented by the Func<T> delegate will only be invoked if tracing is
// enabled.
void TraceInfo(Func<string> expensiveDataProvider);
// Writes a warning message to the trace.
void TraceWarning(string format, params object[] parameters);
// Writes a warning message to the trace. This method is provided for optimal
// performance when tracing simple messages which don't require a format string.
void TraceWarning(string message);
// Writes a message to the trace. This method can be used to trace detailed
// information which is only required in particular cases.
void TraceDetails(string format, params object[] parameters);
// Writes an error message to the trace.
void TraceError(string format, params object[] parameters);
// Writes an error message to the trace. This method is provided for optimal
// performance when tracing simple messages which don't require a format string.
void TraceError(string message);
// Writes the exception details to the trace.
void TraceError(Exception ex);
// Writes the exception details to the trace.
void TraceError(Exception ex, bool includeStackTrace);
// Writes the exception details to the trace.
void TraceError(Exception ex, bool includeStackTrace, Guid callToken);
// Writes an informational event into the trace log indicating that a method is
// invoked. This can be useful for tracing method calls to help analyze the code
// execution flow. The method will also write the same event into default
// System.Diagnostics trace listener, however this will only occur in the DEBUG code.
// A call to the TraceIn method would typically be at the very beginning of the
// instrumented code.
Guid TraceIn(params object[] inParameters);
// Writes an informational event into the trace log indicating that a method is
// invoked. This can be useful for tracing method calls to help analyze the code
// execution flow. The method will also write the same event into default
// System.Diagnostics trace listener, however this will only occur in the DEBUG code.
// A call to the TraceIn method would typically be at the very beginning of the
// instrumented code. This method is provided to ensure optimal performance when no
// parameters are required to be traced.
Guid TraceIn();
// Writes an informational event into the trace log indicating that a method is about
// to complete. This can be useful for tracing method calls to help analyze the code
// execution flow. The method will also write the same event into default
// System.Diagnostics trace listener, however this will only occur in the DEBUG code.
// A call to the TraceOut method would typically be at the very end of the
// instrumented code, before the code returns its result (if any).
void TraceOut(params object[] outParameters);
// Writes an informational event into the trace log indicating that a method is about
// to complete. This can be useful for tracing method calls to help analyze the code
// execution flow. The method will also write the same event into default
// System.Diagnostics trace listener, however this will only occur in the DEBUG code.
// A call to the TraceOut method would typically be at the very end of the
// instrumented code, before the code returns its result (if any). This method is provided to
// ensure optimal performance when no parameters are required to be traced.
void TraceOut();
// Writes an informational event into the trace log indicating that a method is about
// to complete. This can be useful for tracing method calls to help analyze the code
// execution flow. The method will also write the same event into default
// System.Diagnostics trace listener, however this will only occur in the DEBUG code.
// A call to the TraceOut method would typically be at the very end of the
// instrumented code, before the code returns its result (if any).
void TraceOut(Guid callToken, params object[] outParameters);
// Writes an informational event into the trace log indicating a start of a scope for
// which duration will be measured.
long TraceStartScope(string scope, params object[] parameters);
// Writes an informational event into the trace log indicating the start of a scope
// for which duration will be measured. This method is provided in order to ensure
// optimal performance when no parameters are available for tracing.
long TraceStartScope(string scope);
// Writes an informational event into the trace log indicating the start of a scope
// for which duration will be measured. This method is provided in order to ensure
// optimal performance when only 1 parameter of type Guid is available for tracing.
long TraceStartScope(string scope, Guid callToken);
// Writes an informational event into the trace log indicating the end of a scope for
// which duration will be measured.
void TraceEndScope(string scope, long startTicks);
// Writes an informational event into the trace log indicating the end of a scope for
// which duration will be measured.
void TraceEndScope(string scope, long startTicks, Guid callToken);
In addition, the ability to support instrumentation at a component type level was introduced so that tracing can be
enabled and disabled separately for different component types such as custom pipeline components, orchestrations,
maps, rules and so on. The TraceManager class exposes a static instance of an object implementing the
IComponentTraceProvider interface for each supported component type as follows:
// The main tracing component which is intended to be invoked from user code.
public static class TraceManager
{
// The trace provider for user code in the custom pipeline components.
public static IComponentTraceProvider PipelineComponent {/*Omitted for brevity*/}
// The trace provider for user code in workflows (such as expression shapes in the
// BizTalk orchestrations).
public static IComponentTraceProvider WorkflowComponent {/*Omitted for brevity*/}
// The trace provider for user code in the custom components responsible for data
// access operations.
public static IComponentTraceProvider DataAccessComponent {/*Omitted for brevity*/}
// The trace provider for user code in the transformation components (such as
// scripting functoids in the BizTalk maps).
public static IComponentTraceProvider TransformComponent {/*Omitted for brevity*/}
// The trace provider for user code in the service components (such as Web
// Service, WCF Service or service proxy components).
public static IComponentTraceProvider ServiceComponent {/*Omitted for brevity*/}
// The trace provider for user code in the Business Rules components (such as
// custom fact retrievers, policy executors).
public static IComponentTraceProvider RulesComponent {/*Omitted for brevity*/}
// The trace provider for user code in the business activity tracking components
// (such as BAM activities).
public static IComponentTraceProvider TrackingComponent {/*Omitted for brevity*/}
// The trace provider for user code in any other custom components which don't
// fall into any of the standard categories such as Pipeline, Workflow,
// DataAccess, Transform, Service or Rules.
public static IComponentTraceProvider CustomComponent {/*Omitted for brevity*/}
}
The new instrumentation framework delivers the following benefits:
- Highest possible performance – initial tests on a 3.2GHz quad-core machine demonstrated that the ETW-based tracing component is capable of writing about 1.4 million trace events per second, compared to just 4,000 events per second delivered by the Trace class from System.Diagnostics;
- No high CPU utilization is observed when trace events are persisted into a log file, even when running under stress. ETW uses a buffering and logging mechanism implemented in the kernel. The logging mechanism uses per-processor buffers that are written to disk by an asynchronous writer thread, which significantly reduces the impact of log write operations on application and system performance;
- Full operational flexibility enables switching tracing on and off as well as changing the trace level dynamically, making it easy to perform detailed tracing in production environments without requiring reboots or application restarts;
- Tracing can safely be enabled to run continuously in a production environment using the circular logging option, which ensures that log files will not outgrow the available disk space;
- A lightweight footprint on the instrumented application is achieved by minimizing the external dependencies down to a single BizTalk assembly and significantly reducing the volume of “baggage” such as other supporting artifacts, configuration files, etc.
Performance Considerations
When compared to the 3 other popular rivals (System.Diagnostics.Trace in the .NET Framework, log4net, and Enterprise Library, most specifically in their default configurations), the ETW-based instrumentation framework has demonstrated significantly better performance. To measure the true extent of the performance benefits delivered by ETW, the following scenario was tested:
- Test case: execute 1 instrumented method 100,000 times, writing 1 trace event in each iteration.
- Pre-requisites: configure all 4 benchmarked instrumentation frameworks to produce a text-based trace log file containing the tracing data.
- Test bed: a high-end desktop PC with a 3.2GHz quad-core CPU, 4GB RAM, a 64-bit OS, and a RAID-5 SATA II 7.2K disk array.
The results from the benchmarking exercise confirm that the ETW-based instrumentation framework outperforms all 3 rivals by an order of magnitude. When visualized on a chart, the performance numbers show a substantial gap between the throughput of the ETW infrastructure and the other 3 popular tracing frameworks. The code samples include the test tool that was used to benchmark all 4 instrumentation frameworks and derive these results.
Instrumentation Guidelines
The examples below illustrate the usage pattern for the tracing framework. The following code fragment demonstrates a simple use case whereby the instrumented method (BeginAndCompleteActivity) captures 2 events indicating when the method was invoked (TraceIn) and when it completed (TraceOut).
public void BeginAndCompleteActivity(ActivityBase activity)
{
Guard.ArgumentNotNull(activity, "activity");
var callToken = TraceManager.TrackingComponent.TraceIn(activity.ActivityName,
activity.ActivityID);
this.eventStream.BeginActivity(activity.ActivityName, activity.ActivityID);
this.eventStream.UpdateActivity(activity.ActivityName, activity.ActivityID,
ActivityTrackingUtility.GetActivityData(activity));
this.eventStream.EndActivity(activity.ActivityName, activity.ActivityID);
TraceManager.TrackingComponent.TraceOut(callToken);
}
When enabled, the trace log will contain both events, each written with a timestamp along with a GUID-based method call correlation token that makes it possible to match the TraceIn and TraceOut events together.
The following sections drill down into specific instrumentation scenarios across the entire family of artifacts in a
BizTalk solution and demonstrate how the custom instrumentation framework can be leveraged to add rich, high-
performance tracing capabilities into the most common BizTalk solution components.
Instrumentation of Custom Pipeline Components
Custom pipeline components can be instrumented using TraceManager.PipelineComponent, which is dedicated to this type of BizTalk artifact. Useful events to capture through code instrumentation include:
- Tracing calls to the core methods such as Execute, Disassemble, GetNext, etc. (using TraceIn and TraceOut);
- Measuring the duration of the above methods (using TraceStartScope and TraceEndScope);
- Tracing the internal state of pipeline components, which could in turn assist with troubleshooting (using TraceInfo);
- Capturing detailed information about runtime exceptions (using TraceError).
Below is an example of some of the techniques highlighted above:
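A representative sketch of an Execute method instrumented along these lines, assuming the framework's TraceManager class is referenced (not the whitepaper's exact sample):

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    //record the method entry and start a duration measurement
    Guid callToken = TraceManager.PipelineComponent.TraceIn();
    long startTicks = TraceManager.PipelineComponent.TraceStartScope("Execute", callToken);
    try
    {
        TraceManager.PipelineComponent.TraceInfo("Processing message {0}", pInMsg.MessageID);
        //...component-specific processing...
        return pInMsg;
    }
    catch (Exception ex)
    {
        //capture full exception details before rethrowing
        TraceManager.PipelineComponent.TraceError(ex);
        throw;
    }
    finally
    {
        TraceManager.PipelineComponent.TraceEndScope("Execute", startTicks, callToken);
        TraceManager.PipelineComponent.TraceOut(callToken);
    }
}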
Instrumentation of BizTalk Maps
BizTalk maps can be instrumented using the TransformTraceManager component, which can be invoked from inside a custom scripting functoid associated with an external assembly. Useful events that can be captured from maps as they are executed by the BizTalk runtime engine include:
- Tracing the invocation of the XSLT templates implementing the BizTalk maps (using TraceIn and TraceOut);
- Tracing the internal state of maps, e.g. the node values being transformed, which could in turn assist with troubleshooting (using TraceInfo).
In the example below, the instrumented map uses the TraceInfo method to report the values of the GSxx nodes found in the input document:
Note that BizTalk maps cannot invoke static methods from an external assembly by design. Consequently, a separate non-static class (TransformTraceManager) is responsible for relaying method calls between a map and the statically initialized singleton TraceManager.TransformComponent object.
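A minimal sketch of such a relay class; the method shape is an assumption inferred from the generated XSLT shown below, where each call takes a single string and feeds the helper Logical OR functoid:

//hypothetical shape of the non-static relay invoked by scripting functoids
public class TransformTraceManager
{
    //relays a node value to the transform trace provider and returns it so
    //the map can chain the result into the helper Logical OR functoid
    public string TraceInfo(string nodeValue)
    {
        TraceManager.TransformComponent.TraceInfo(nodeValue);
        return nodeValue;
    }
}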
When adding instrumentation to BizTalk maps, it's important to link the custom scripting functoids to appropriate nodes in the output document; otherwise the scripting code will be omitted from the auto-generated XSLT template. In the example, all scripting functoids are connected to the root node in the output document via a helper Logical OR functoid. This ensures that tracing takes place right before the root XML element is created, and results in the BizTalk mapper generating the following XSLT code:
<!-- There is some XSLT code before this element -->
<xsl:template match="/s2:CSC_837D">
<xsl:variable name="var:v1"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS01/text()))" />
<xsl:variable name="var:v2"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS02/text()))" />
<xsl:variable name="var:v3"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS03/text()))" />
<xsl:variable name="var:v4"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS04/text()))" />
<xsl:variable name="var:v5"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS05/text()))" />
<xsl:variable name="var:v6"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS06/text()))" />
<xsl:variable name="var:v7"
Page 58
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0:GS07/text()))" />
<xsl:variable name="var:v8"
select="ScriptNS0:TraceInfo(string(s0:Headers/s0:GS/s0: GS08/text()))" />
<xsl:variable name="var:v9" select="userCSharp:LogicalOr(string($var:v1),
'true', string($var:v2), string($var:v3), string($var:v4), string($var:v5),
string($var:v6), string($var:v7), string($var:v8))" />
<xsl:if test="$var:v9">
<ns0:W4880200-COMMON-EDIT-MED-CLAIM>
<!-- There is plenty of other XSLT code after this element -->
Instrumentation of BizTalk Orchestrations
As the complexity of BizTalk orchestrations evolves, instrumentation becomes a key factor in helping to diagnose and troubleshoot behavioral problems, performance-related issues, and other bottlenecks that were not foreseen or did not manifest themselves during development.
A well-instrumented orchestration can be defined as follows:
- The entry point into an orchestration is recorded either as the very first activity (for non-activated orchestrations) or immediately after the top Receive shape (using TraceIn);
- The internal state of the orchestration (e.g. variables, results from method calls, non-sensitive message payload) is judiciously traced (using TraceInfo);
- Unexpected behavior is reported as soon as it is detected (using TraceWarning or TraceError);
- Detailed information about all runtime exceptions is traced from inside the exception handling block (using TraceError);
- The duration of individual scopes, as well as of the entire orchestration, is measured and traced (using TraceStartScope and TraceEndScope);
- The exit point from an orchestration is recorded (using TraceOut) either right before the Terminate shape or at the very last step in the orchestration.
Below are some examples of the instrumentation techniques highlighted above. First, an entry point into the
orchestration is traced inside the Trace Entry expression shape:
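A sketch of the Expression shape's contents (the message name and traced property are assumptions):

//contents of the "Trace Entry" Expression shape (sketch)
callToken = TraceManager.WorkflowComponent.TraceIn(InboundOrderMsg(BTS.MessageID));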
Secondly, the internal state of the Listen shape is traced to facilitate troubleshooting:
Next, the exception handling scope is instrumented with a TraceError event in order to provide detailed information
about the exception:
In addition, the orchestration contains instrumented scopes for which duration is measured by calling the
TraceStartScope at the beginning of the scope and completing the scope with a call to the TraceEndScope method:
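In Expression shapes, the pairing might look like this sketch (the scope name is an assumption):

//Expression shape at the top of the instrumented scope
scopeStartTicks = TraceManager.WorkflowComponent.TraceStartScope("ProcessOrderScope", callToken);

//Expression shape at the bottom of the same scope
TraceManager.WorkflowComponent.TraceEndScope("ProcessOrderScope", scopeStartTicks, callToken);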
And lastly, the exit event is traced right before the orchestration’s termination point:
In the above examples, the BizTalk orchestrations are instrumented using TraceManager.WorkflowComponent, which is invoked from within Expression shapes.
Instrumentation of Business Rules
The instrumentation of business rules can be extremely valuable when there is a requirement to understand how rule conditions are evaluated, which rules are fired, and how long it takes to execute a rule set (policy).
A well-instrumented business rule policy can be defined as follows:
- All calls to helper classes responsible for invoking the BRE APIs (such as Policy objects) are traced (using TraceIn and TraceOut);
- The duration of policy execution is recorded (using TraceStartScope and TraceEndScope);
- The BRE policies are invoked passing an instance of a class implementing the IRuleSetTrackingInterceptor interface; this class should report on all significant events that occur during policy execution;
- Each rule in the policy informs the instrumentation framework that the rule was fired by the BRE engine (using TraceInfo, TraceWarning, or TraceError).
The business rules instrumentation is provided by 3 components available in the custom instrumentation framework:
- TraceManager.RulesComponent provides general instrumentation methods for all helper .NET components accessing the BRE APIs;
- RuleTraceManager, residing in the Microsoft.BizTalk.CAT.BestPractices.Framework.RulesEngine namespace, implements a static class intended to be used on the Action pane inside the Business Rule Composer. Its main purpose is to expose a set of tracing methods with a fixed parameter list. Internally, RuleTraceManager simply relays all calls to TraceManager.RulesComponent. The requirement for a separate class arises from the fact that the Business Rules Engine doesn't currently support the invocation of methods with a variable number of parameters (also known as parameter arrays);
- TracingRuleTrackingInterceptor, available in the Microsoft.BizTalk.CAT.BestPractices.Framework.RulesEngine namespace, provides a custom implementation of the IRuleSetTrackingInterceptor interface. All events that occur during policy execution will be traced using the TraceManager.RulesComponent class.
The following recommendations apply in the context of business rules instrumentation:
- When calling any static methods from inside the business rules, make sure that the StaticSupport parameter is present either in the application's configuration file (e.g. BTSNTSvc.exe.config) or in the system registry; otherwise, unexpected behavior may be encountered;
- Consider adding a TraceInfo instrumentation event to each individual rule to facilitate the analysis of rule execution without having to analyze all the events reported by the Rules Engine through TracingRuleTrackingInterceptor.
Below are some examples of the approaches highlighted above. First, the individual rules contain an instrumentation event recording the fact that the rule was invoked by the BRE:
Secondly, any unexpected control flow, such as execution of rules not intended to be fired for a given parameter set, is traced using the TraceError event:
And lastly, the custom component responsible for invoking the BRE policies is instrumented with scope duration tracing and other events that can provide valuable diagnostic information and assist with troubleshooting the components accessing the BRE engine and invoking business rules:
Instrumentation of Custom Components
Custom .NET components can be instrumented using TraceManager.CustomComponent, which enables:
- Method-level instrumentation whereby calls to individual methods are recorded in the trace log with zero or more parameters (using TraceIn and TraceOut);
- Measuring durations to capture accurate execution timings (using TraceStartScope and TraceEndScope);
- Tracing the internal state of custom components, which could in turn assist with troubleshooting (using TraceInfo or TraceWarning);
- Writing detailed information about runtime exceptions (using TraceError).
Below is an example showing some of the techniques highlighted above:
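A representative sketch of an instrumented custom method (the class, method, and lookup are hypothetical):

public string GetCustomerStatus(string customerId)
{
    //record entry with the input parameter and start a duration scope
    Guid callToken = TraceManager.CustomComponent.TraceIn(customerId);
    long startTicks = TraceManager.CustomComponent.TraceStartScope("GetCustomerStatus", callToken);
    try
    {
        //the Func<string> overload defers building expensive trace data
        //until tracing is actually enabled
        TraceManager.CustomComponent.TraceInfo(
            () => string.Format("Looking up customer {0}", customerId));
        string status = "ACTIVE"; //placeholder for the real lookup
        TraceManager.CustomComponent.TraceOut(callToken, status);
        return status;
    }
    catch (Exception ex)
    {
        TraceManager.CustomComponent.TraceError(ex);
        throw;
    }
    finally
    {
        TraceManager.CustomComponent.TraceEndScope("GetCustomerStatus", startTicks, callToken);
    }
}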
In addition, there may be a requirement to enable individual custom components to emit instrumentation events and to capture these in isolation from events produced by other components. This approach opens up the opportunity to collect detailed behavioral and telemetry data related to a specific component whilst ensuring that events emitted by other custom components do not interfere with the trace log content.
To enable this scenario and add instrumentation at the individual component level, the following steps are to be followed.
First, every .NET component that is to be individually instrumented needs to be decorated with a Guid attribute, available in the System.Runtime.InteropServices namespace. This Guid value will be used to uniquely identify the component providing events to the ETW infrastructure; it is also known as the Provider ID in the ETW taxonomy.
Next, a new protected readonly static field is added to each non-sealed instrumented .NET component (for sealed classes, the private readonly static modifier is used instead). This class member is statically initialized with an instance of the framework component implementing the IComponentTraceProvider interface, which provides a set of tracing and instrumentation methods:
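Combining both steps, an individually instrumented component might look like the following sketch; the factory call that creates the provider from the class's Guid is an assumption, as the framework's exact initialization API is not reproduced here:

//sketch: the Guid attribute supplies the ETW Provider ID for this class,
//and the static field exposes the framework's tracing methods
//(requires using System.Runtime.InteropServices)
[Guid("0EC4A54D-6B97-47C1-9118-A2BF8B4E7595")]
public class HL7Disassembler
{
    //hypothetical factory: the framework initializes the provider from
    //the component's Guid attribute
    protected readonly static IComponentTraceProvider Trace =
        ComponentTraceProviderFactory.Create(typeof(HL7Disassembler));
}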
Lastly, all important aspects of the custom component's behavior need to be instrumented by calling the relevant tracing methods provided by the above static class member, as opposed to using TraceManager.CustomComponent.
The benefit of this technique lies in the ability to collect only those events that are truly necessary to troubleshoot a specific component in the BizTalk solution, eliminating “noisy” events that may come from other components, reducing the number of events in the trace log, and greatly reducing the time required to analyze the trace log data.
Management of Instrumented Applications
Once deployed and launched, an instrumented BizTalk application will be ready to emit events related to the application's behavior, internal state, execution scope durations, runtime exceptions, and everything else the application developers decided to include in the instrumentation context.
The next section drills down into the most common administrative tasks involved in managing BizTalk solutions instrumented using the ETW event trace infrastructure. Until now, the emphasis has been on the developer enriching the application code with instrumentation and tracing. It's time to look at code instrumentation from an IT operations perspective and walk through common management scenarios such as:
- Extracting data out of the instrumented BizTalk applications into traditional human-readable trace log files;
- Monitoring the event trace sessions, enabling and disabling event tracing;
- Configuring the ETW event collectors.
Event Trace Management Application Landscape
The Event Tracing for Windows (ETW) infrastructure provides the ability to start and stop event tracing sessions,
monitor their status, configure a variety of logging settings such as buffer size, flush interval, stop conditions and
more. The majority of the administrative tasks are available from both the GUI and the command-line interface to suit
the needs and preferences of different IT persona types.
In addition to using the Reliability and Performance Monitor, application developers and technical teams have
access to other system tools and scripts for automating many aspects of event trace management:
- Logman creates and manages Event Trace Session and Performance logs and supports many functions of Performance Monitor from the command line;
- WEvtUtil enables retrieving information about event logs and publishers, installing and uninstalling event manifests, and running queries and exporting, archiving, and clearing logs;
- TraceLog starts, stops, or enables trace logging;
- TraceFmt formats and displays trace messages from an event trace log file or a real-time trace session;
- TraceView configures and controls trace sessions and displays formatted trace messages from real-time trace sessions and trace logs.
The custom instrumentation framework discussed in this whitepaper simplifies the use of the above tools by providing 2 easy-to-use scripts with a minimum of command-line parameters. These scripts can be found in the TracingTools folder and address the following requirements:
- StartTrace.cmd provides the ability to start an ETW event trace session for the entire instrumented BizTalk solution, for selected component types, or for individually instrumented classes. This script is nothing more than a wrapper for the Tracelog tool from the Windows Resource Kit.
- StopTrace.cmd carries the responsibility for stopping ETW event trace sessions and converting the binary trace log file into human-readable format. This script depends on the Tracelog and Tracefmt tools to flush trace data, stop the session, and apply the binary-to-text conversion.
The next subsections are intended to clarify how the above administrative scripts map to common management
scenarios.
Event Trace Management Tasks
The typical administrative tasks involved in managing the events produced by instrumented application code can be summarized as follows.
Starting Event Trace Sessions
To start an event tracing session for the entire instrumented BizTalk application, run StartTrace.cmd using one of the following commands, depending on the trace level required (All, High, Medium, or Low). Note that the log file name specified after the -log parameter can be any name but must not contain whitespace.
: All trace events (Info, Details, Warning, Error, In, Out, Start Scope, End Scope)
StartTrace -log BtsAppAllEvents -level all
: Limited trace events (Info, Warning, Error)
StartTrace -log BtsAppCoreEvents -level high
: Trace events indicating unexpected behavior (Warning, Error)
StartTrace -log BtsAppUnexpectedEvents -level medium
: Trace events related to runtime exceptions (Error)
StartTrace -log BtsAppExceptions -level low
To start an event tracing session for a specific type of application components, run StartTrace.cmd using one of
following commands depending on the component type and trace level required:
: Start trace for all instrumented pipeline components
StartTrace -log PipelineComponentsAllEvents -level all -component Pipeline
: Start trace for all instrumented orchestrations
StartTrace -log OrchestrationsAllEvents -level all -component Workflow
: Start trace for all instrumented data access components
StartTrace -log DataAccessComponentsAllEvents -level all -component DataAccess
: Start trace for all instrumented maps or custom transform code
StartTrace -log MapsAllEvents -level all -component Transform
: Start trace for all instrumented Web/WCF services or service proxies
StartTrace -log WCFServicesAllEvents -level all -component Service
: Start trace for all instrumented business rules
StartTrace -log BusinessRulesAllEvents -level all -component Rules
: Start trace for all instrumented BAM activities
StartTrace -log BAMActivitiesAllEvents -level all -component Tracking
: Start trace for all instrumented custom .NET components
StartTrace -log CustomComponentsAllEvents -level all -component Custom
To start an event tracing session for individually instrumented application components, the component’s Guid
attribute value must be provided in the command line as per the following examples:
: Start trace to capture all events for the instrumented HL7Disassembler component
StartTrace -log HL7DasmFullTrace -level all -component Custom -guid 0EC4A54D-6B97-47C1-9118-A2BF8B4E7595
: Start trace to capture errors in the instrumented HL7Disassembler component
StartTrace -log HL7DasmErrorTrace -level low -component Custom -guid 0EC4A54D-6B97-47C1-9118-A2BF8B4E7595
It is recommended to consolidate multiple calls to StartTrace and StopTrace scripts into a single parameter-less batch
file in order to make it easier to start and stop tracing.
Monitoring Event Trace Sessions
After an event tracing session is started, it will remain running until it is manually stopped, a stop condition is encountered, or the host is rebooted. To query the current status of the event tracing sessions, one of the following approaches can be used.
To find out the event tracing session status from the command line, run the Logman utility using the following
syntax:
: Lists all event tracing sessions configured on the local machine
Logman query -ets
: Lists all event tracing sessions with a name matching the specified pattern
Logman query -ets | find "AllEvents"
To find out the event tracing session status from the GUI, open Reliability and Performance Monitor and navigate to
the Event Trace Sessions section.
Stopping Event Trace Sessions
To stop a running event trace session, the management tools used differ depending on whether the trace session must be temporarily suspended or terminated with no intent to resume (a complete stop). Consequently, one of the following approaches can be used.
To completely stop an event tracing session from the command line, run StopTrace.cmd, passing the original name of the trace log without a file extension:
: Stop trace for all instrumented pipeline components
StopTrace -log PipelineComponentsAllEvents
: Stop trace for all instrumented orchestrations
StopTrace -log OrchestrationsAllEvents
: Stop trace for all instrumented data access components
StopTrace -log DataAccessComponentsAllEvents
: Stop trace for all instrumented maps or custom transform code
StopTrace -log MapsAllEvents
: Stop trace for all instrumented Web/WCF services or service proxies
StopTrace -log WCFServicesAllEvents
: Stop trace for all instrumented business rules
StopTrace -log BusinessRulesAllEvents
: Stop trace for all instrumented BAM activities
StopTrace -log BAMActivitiesAllEvents
: Stop trace for all instrumented custom .NET components
StopTrace -log CustomComponentsAllEvents
Note that StopTrace.cmd will also convert the binary trace log into text-based format using the Tracefmt tool. Depending on the log size and disk I/O performance, this operation can take several minutes.
To temporarily suspend an event tracing session from the command line without producing a text-based log
file, run the Logman utility using the following syntax:
: Stop the event tracing session for all instrumented pipeline components,
: without converting the log into a text file at this time
Logman stop PipelineComponentsAllEvents -ets
To temporarily suspend the event tracing session from the GUI, open Reliability and Performance Monitor, navigate to the Event Trace Sessions section, right-click the target event tracing session, and select Stop from the context menu.
Configuring Event Trace Sessions
The default configuration of the ETW event trace sessions created by StartTrace.cmd can be summarized as follows:
- Buffer Size: 128KB
- Maximum Buffers: 100
- Log Mode: Circular
- Maximum Log Size: 1000MB
- Flush Time: Not Set
- Clock Type: Performance
- Stream Mode: File
- Pre-allocate File Space: No
Should the default configuration be found insufficient and need modifications, the script can be updated to include
the desired configuration settings. The new settings must be specified in the following line in StartTrace.cmd:
"%TraceLogTool%" -cir 1000 -b 128 -max 100 -start %TraceLogName% -flags %TraceLevel% -f %TraceLogFileName% -guid #%TraceComponentGUID%
Below are some of the command line parameters supported by the Tracelog tool that may be useful when
customizing the event tracing configuration:
-b <n>       Sets buffer size to <n> KB
-min <n>     Sets minimum buffers
-max <n>     Sets maximum buffers
-f <name>    Logs to file <name>
-append      Appends to the file
-prealloc    Pre-allocates file space
-seq <n>     Sequential log file of up to <n> MB
-cir <n>     Circular log file of <n> MB
-newfile <n> Logs to a new file after every <n> MB
-ft <n>      Sets the flush timer to <n> seconds
-paged       Uses pageable memory for buffers
-rt          Enables tracing in real-time mode
-kd          Enables tracing in the kernel debugger
Conclusion
The traditional ways of instrumenting BizTalk solutions may not always be the most effective from a performance standpoint. The commonly used instrumentation and tracing components leveraging the Win32 debugging APIs may introduce a potential bottleneck and become responsible for performance degradation in multi-threaded BizTalk applications running under stress.
On the other hand, source code instrumentation delivers a great degree of visibility into application behavior and helps reduce the overall troubleshooting effort. Consequently, a fundamentally new approach to instrumenting high-performance BizTalk solutions has become crucially important, enabling the collection of rich and detailed diagnostic information in a non-intrusive manner with virtually no overhead and no impact on application performance.
The Windows Server AppFabric Customer Advisory Team at Microsoft aimed to provide the community with validated best practices to help BizTalk developers enrich their solutions with the high-performance instrumentation internally adopted by many Microsoft products. These best practices are reflected in a reusable framework that BizTalk developers can easily plug in and adopt in their own implementations.
The source code containing the instrumentation framework discussed in this whitepaper can be found on the MSDN
Code Gallery via the following hyperlink:
Best Practices for Instrumenting High Performance BizTalk Solutions.zip
Additional Resources/References
For more information on related topics, please visit the following resources:
- “Improve Debugging And Performance Tuning With ETW” article in MSDN Magazine;
- “Intro to Event Tracing For Windows” post on Matt Pietrek’s blog;
- “How To Use Event Tracing For Windows For Performance Analysis” presentation available from the MSDN Download Center;
- “Logman Utility Command Line Reference” on TechNet;
- “Tracelog Utility Command Line Reference” article in the MSDN Library;
- “Controlling Event Tracing Sessions” article in the MSDN Library;
- “Tools for Software Tracing” article in the MSDN Library.
Financial Messaging Services Bus
Authors: Dejan Petković (Virtual Technical Specialist, Saga d.o.o. Belgrade), Vinay Balasubramaniam (Program Manager, Microsoft), Colin Kerr (Industry Technology Strategist, Microsoft)
Summary: Financial Messaging Services Bus (FMSB) is a vertical industry implementation of Microsoft's Enterprise Service Bus Toolkit 2.0 on top of BizTalk Server 2009 and BizTalk Accelerator for SWIFT. FMSB greatly improves time to market for many complex integration solutions, especially in the Banking and Capital Markets industries. This paper explains the rationale behind the creation of FMSB, provides a high-level description of the FMSB architecture, and discusses how FMSB is used to simplify application connectivity to SWIFT. FMSB helps software developers and solution architects by providing components and functionality within the engine, which saves development time and delivers more value from the engine itself.
This document assumes the reader has a basic understanding of generic ESB concepts. For further reading on the Microsoft ESB Toolkit, refer to: http://msdn.microsoft.com/en-us/biztalk/dd876606.aspx
Financial solutions (though this applies to any industry solution), especially when built as messaging solutions, can in fact form a foundation platform for the development of a specific domain framework (payments or capital markets, government solutions, manufacturing, etc.) where integration technology, data transformation, and workflow management are used to orchestrate transaction flows among applications and clearing systems. In addition to messaging, some commonly used processes are required for transaction processing, e.g. validation, routing, exception management, and repair. By developing these processes as reusable services, the messaging infrastructure becomes more than an integration framework – it takes on the nature of a bus architecture where the lifecycle of a transaction can be mapped, calling the appropriate services as necessary. This is the essence of the FMSB (ESB): taking common processing services, abstracting and bundling them as reusable services that can be configured at implementation time, and tracking execution KPI data as well as custom-defined KPIs. In addition, such services can be exposed to third-party applications to leverage the preconfigured processing of the bus components, thereby enhancing client value. When defining FMSB, it is important to note that these services are business services, as defined by the Microsoft Enterprise Service Bus (ESB) Toolkit 2.0. The over-arching concept is to use the ESB to orchestrate all services and reuse them as needed.
The basic architectural elements of a financial services application can be categorized into the following
segments or layers as shown in Figure 1.
Figure 1: Financial Application Architecture
When considering the architecture of a financial services application, FMSB sits in the layer known as
“Business Process and Orchestration”, which is covered by the Microsoft technology stack, and provides
integration, orchestration, transformation, and workflow services.
FMSB can be deployed directly into a financial institution’s infrastructure project, or embedded in a Microsoft partner’s application solution.
FMSB and Microsoft ESB
The FMSB is built principally on BizTalk Server because many of its services relate to BizTalk and Accelerator for SWIFT components. To conform to BizTalk ESB architectural best practices, the FMSB was developed on top of the BizTalk ESB Toolkit. Most of the FMSB components are quite generic and reusable in any solution built on the ESB Toolkit.
The base ESB architecture is also the base architecture for FMSB, as shown in Figure 2.
Figure 2: FMSB and ESB
Financial Messaging Service Bus extends ESB by providing:
o Resolvers which simplify solution creation by implementing support for:
  o multipart messaging (Read Message Part, Replace Message Part)
  o retrieving configuration from the Dashboard (FMSB Value)
  o retrieving complex configuration data for the SWIFT service (SWIFT Service)
  o storing itinerary designer values into the itinerary runtime
o A Loopback adapter (the message doesn't leave the MessageBox)
o A configuration model for defining BAM tracking data for service/itinerary execution
o A Service Broker Orchestration implementation
o A Silverlight Dashboard (built on the Composite Application framework, previously known as Prism) with 5 modules
o A set of financial services and itineraries, together with a configuration model
FMSB Architecture
The FMSB architecture is presented in the following figure:
Figure 3: FMSB architecture as an add-on for ESB (red circles represent the FMSB add-ons)
FMSB provides:
o Core extensions of ESB (an enhanced runtime, KPI tracking during itinerary execution)
o Extended exception handling (support for invoking a pre-defined exception itinerary)
o A Loopback adapter (helps mix messaging and orchestration services inside the same itinerary)
o A configuration database (a new FMSB resolver extends the BRE and UDDI resolvers and provides generic configuration through the Silverlight dashboard)
o A Silverlight self-service dashboard (for monitoring live data, viewing KPI reports, configuring KPIs (BAM), and SWIFT service administration)
o InterAct and FileAct (specific SWIFT SAG adapters) support
FMSB Components
FMSB has several modules which can run and be installed separately:
1. CORE modules – these modules can be reused without the SWIFT modules. Artifacts include resolvers, the adapter, the orchestration service broker, the database, and Entity Framework models.
2. SWIFT modules – these modules are connected with BizTalk Accelerator for SWIFT (A4SWIFT) and are pre-built for reuse in SWIFT scenarios. The SWIFT modules use the Core modules and require A4SWIFT and the BizTalk SWIFT adapters to be installed.
3. Tracking modules – these modules enhance the tracking capabilities of ESB. They can be installed independently of the other modules and require the BAM infrastructure.
4. Dashboard – modules that present the Silverlight experience for working with the Dashboard capabilities and the configuration model. Independent of the other modules.
The following figure presents the relationships among all the FMSB modules.
Figure 4: Core + Tracking + Dashboard modules with configuration stores
Need for rich BI within ESB
Like an ocean surrounding an iceberg, business performance management (BPM) provides the business
context for performance dashboards, which are layered applications built on a business intelligence and
data integration infrastructure (i.e., the base of the iceberg). The most visible elements of a
performance dashboard are the scorecard and dashboard screens, which display performance using
leading, lagging, and diagnostic metrics.
In a custom implementation, extracting data for a dashboard isn’t an easy task. With an ESB architecture, the ESB runtime (the Dispatcher in a messaging scenario and the Advance method in an orchestration scenario) has full control of every message flow inside the runtime. But even with all that knowledge, ESB 2.0 doesn’t provide a full tracking feature. With the pure ESB runtime you can’t extract reports that answer common questions:
o How many itineraries/services ran in the past?
o How many itineraries/services are currently running?
In any financial services application, a common set of questions is:
o How many payments have been processed today?
  o How many exceptions did we have?
  o How many were urgent requests?
o How were today’s payments cleared?
  o How many were bulk payments?
  o How many were wire payments?
  o Domestic vs. cross-border?
o Who were our top 5 customers today?
  o What percentage of the total came from these 5?
These are the main reasons why we enhanced the tracking capability of the ESB Toolkit.
Tracking architecture
The enriched ESB runtime (with the FMSB "..V1.dll" assemblies) now has the capability to extract data by using a new BAM interceptor. These data include:
1. Itinerary data (start time, end time, name, version)
2. Service data (start time, end time, business name, status, ...)
3. KPIs inside the message body (configured by the user)
The interceptor extracts the itinerary and service data from the itinerary header. The KPIs inside the message body are extracted according to the configuration model and stored into BAM.
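For readers unfamiliar with BAM, the interceptor’s job amounts to writing activity records through the BAM event-observation API. The following is a minimal sketch, assuming the standard Microsoft.BizTalk.Bam.EventObservation classes; the activity name, checkpoint names, and connection string are illustrative, and FMSB’s actual interceptor derives these values from the configuration model instead.
BAM tracking sketch
using System;
using Microsoft.BizTalk.Bam.EventObservation;

public static class ItineraryTracker
{
    public static void TrackServiceExecution(string activityInstanceId, string serviceName,
                                             DateTime startTime, DateTime endTime, string status)
    {
        // DirectEventStream writes synchronously to the BAMPrimaryImport database;
        // a BufferedEventStream could be used instead to reduce runtime overhead.
        DirectEventStream eventStream = new DirectEventStream(
            "Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport", 1);

        // "ServiceExecution" is a hypothetical BAM activity with four data items.
        eventStream.BeginActivity("ServiceExecution", activityInstanceId);
        eventStream.UpdateActivity("ServiceExecution", activityInstanceId,
            "ServiceName", serviceName,
            "StartTime", startTime,
            "EndTime", endTime,
            "Status", status);
        eventStream.EndActivity("ServiceExecution", activityInstanceId);
    }
}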
(Figure: FMSB tracking system. The Config model, Interceptor, and Administration components feed BAM and the FMSB/ESB itinerary database; ESB services such as Submit FIN, SWIFT Service Router, Validate MT, Custom, and Service Broker execute inside the runtime; reporting clients include Office Excel, Silverlight, SSRS, PowerPivot, and custom tools built over the cubes and star schema.)
Administration of the system defines which itineraries and services should be tracked (via the ESB Itinerary DSL model), how to extract KPIs from the message body, and the tracking entity (the Activity-with-Checkpoints analogy from BAM). See the screenshot below.
With FMSB, the configuration of KPIs isn’t done inside Excel (nor in custom XML). The administrator of the system (or a business person) can use the Dashboard, with drag-and-drop functionality, to define all the necessary data (Activities, Checkpoints, Cubes, Measures, Dimensions) together with the service position where this data should be tracked.
This model is published to:
1. The configuration model for tracking
2. The BAM star schema that persists the tracked data.
At runtime the tracking interceptor reads the configuration model and extracts data from the message body as defined.
Dashboard
The Dashboard presents visualizations of the cubes inside SQL Server Analysis Services. It is a generic tool and can be re-used for any cube inside Microsoft SQL Server Analysis Services.
o Sources – cubes from SQL Server Analysis Services (OrderDocumentSource)
o Measures – the measures defined for the selected cube (CountOf)
o Dimensions – the dimensions defined for the selected cube (CustomerName, RequestType)
o Filter – a dimension used for filtering (same as a Dimension).
The Dashboard provides several pre-defined report types (Column, Line, Pie, Bar, Area, Doughnut, Point, StackedArea) for any source. The following is a sample Column report:
By selecting a different report type, the view is redrawn with the same data.
The FMSB installation creates BAM cubes for:
o Storing itineraries/services – these cubes provide the source data for reports like the one below (percentage of service executions):
o Storing itineraries/services as a Real-Time Aggregation for a live view of the services on the system (LiveData)
The LiveData view presents a real-time IT view of the system. See the screenshot below for details.
LiveData presents:
o The itineraries currently working on the system – "How many itineraries are currently working?"
o The current itinerary statuses – "What is the status of the itineraries on the system?"
o The services currently working – "How many services are currently working?"
Note: the BAM system stores data into the BAM RTA cubes with a delay.
This data gives IT staff great insight into exactly what the current system/service/itinerary load is.
SWIFT Service Router – a compound service
The SWIFT Service Router is more than a simple service. It encapsulates the complex processes for transmitting any type of SWIFT message, or file, to any SWIFT service over a specific SWIFT protocol. The SWIFT Service Router reuses A4SWIFT functionality and implements the A4SWIFT API for specific scenarios. The SWIFT Service Router also has a configuration module, implemented as a database, which models access to the SWIFT network.
Building SWIFT solutions requires specific knowledge about the business processes related to specific solutions, messages, protocols, and so on. If the solution is built on top of the Microsoft messaging stack, then BizTalk knowledge, as well as knowledge of the specific technology, is also required. With the FMSB SWIFT Service Router service, the complexity of the SWIFT protocols, technical changes, and primitives no longer matters.
The following shows what a customer needs to know to build a SWIFT solution when using the SWIFT Service Router service:
Knowledge                                   With FMSB
o SWIFT solution description
o SWIFT integration guide
o SWIFT protocols
o SWIFT message standards
o SWIFT technical changes
o SWIFT protocol messages (primitives)
o SWIFT solution build
o MSFT BizTalk (A4SWIFT)
Besides providing a great tool for simplifying access to the SWIFT network, the SWIFT Service Router also provides a Dashboard where the SWIFT administrator can set up SWIFT access.
Financial solution architects recognize that the common tasks in creating a SWIFT-enabled solution are:
1. Gathering financial data
2. Processing that data
3. Submitting the data to SWIFT
4. Handling the response from SWIFT
Figure 5: Common tasks for a SWIFT solution (Collect information → Process → Submit to SWIFT → Handle Response)
How data is collected and processed is specific to the integration scenario. But the set of tasks (the business process) for sending information to SWIFT is common and reusable (the green boxes). The FMSB SWIFT Service Router implements the functionality of the green boxes.
The SWIFT Service Router service is capable of handling requests for sending MX messages, any XML message, raw files, and also ISO 15022 messages to SWIFT. The service implements the complex logic of transmitting over specific SWIFT protocols such as InterAct, FileAct, MQSeries, files, and FTP.
By reusing this service an ISV can build custom solutions efficiently and focus much more on interoperability with legacy systems rather than on solving access to SWIFT, as presented in the following figure.
Figure 6: The FMSB SWIFT Service Router compound service can be re-used in many ISV solutions.
Integration with the service is possible in two scenarios:
1. Application to application – over the MSMQ, File, FTP, and MQSeries adapters
2. Integration in an existing itinerary – in this mode, integration with the service is done by creating the solution inside an itinerary (as presented in the figure below):
(Figure 6 shows external systems and ISV solutions – for example bulk payments over files and a trade engine over FIX – connecting through the FMSB SWIFT Service Router and its SCT Core Service.)
Figure 7: Invocation of the SWIFT Service Router compound service inside an itinerary.
Benefits
o One service which hosts access to any SWIFTNet service over any type of SWIFT protocol (with support for third-party software through custom services)
o Works with payload messages and, depending on the configuration for the specific SWIFT service, creates primitives with the requested fields
o Capable of hosting several external applications (sender of the payload, application access rights – which application is allowed to send to which SWIFT service, ...)
o The external application interface can be .NET, Java, ... (ESB on-ramp service interface, BizTalk adapters, ...)
o A SWIFT service model (updated automatically with the Deployment package for Alliance, or manually)
o Support for an environment prefix (the mode indicator is not part of the service name but is read from configuration) – one definition for the Live, Test, and Pilot services
Sample integration
The service is very easy to integrate into a specific financial process. For example, a SEPA credit transfer process can be defined as a set of two independent processes:
1) Collecting transactions
2) Creating the package (according to the SEPA rules), wrapping it into the SEPA-defined envelope, and submitting it over SWIFT.
The collecting-transactions process can be defined as:
The process highlighted here is a sample (available inside FMSB). In this sample the ISV re-uses the Transformation service (from ESB), the ValidateMX service (from FMSB), the DecisionValidMessage service (FMSB), Start MRSR (FMSB), and the Message Content Resolver (FMSB). The ISV needs to create only two new services: SaveSCTTransaction and the Bulk invocation (which creates the package and submits it to SEPA).
The process for creating the package and submitting it to SWIFT can look as follows:
The Transformation ESB service is reused several times (only the resolver is different). The FIX Namespace service is a custom implementation (available as a sample inside FMSB) for solving the specific namespace handling mandated by the EBA. The invocation of the SWIFT Service Router process is highlighted in the red circle (available as both Messaging and Orchestration implementations).
The SWIFT Service Router reads the message payload and, depending on the configuration model, decides which itinerary should be executed for the specific SWIFT access:
Example of FileAct access:
In this step the SWIFT Service Router, during runtime execution, calculates the parameters and invokes a specialized itinerary for the specific task. This process is known as Itinerary Injection: the SWIFT Service Router understands that the service isn’t just a simple task and injects a new itinerary into the execution runtime context of the current itinerary.
SWIFT Service administration
Administration of the SWIFT Service Router is done through the FMSB Dashboard.
The SWIFT service dashboard presents one central place where the SWIFT administrator of the system can configure access to any SWIFT service by defining the specific protocol to use, the specific validation to perform at the level of the service access or message type, and the specific access mode (Store-and-Forward or Real-Time). This allows the configuration task to be delegated to a dedicated person with SWIFT knowledge, as opposed to the development team, who have limited knowledge of the right configuration.
FMSB Add-on (BG Services Engine)
Saga created a new add-on on top of the FMSB architecture named "BG Services Engine". This engine provides a runtime for browsing the report repository much faster than using the FMSB Dashboard. The runtime also allows creating any group of related reports and presenting that group to a specified business user group, as presented in the figure:
Reports can be viewed side by side and filtered per different categories (source, measure, render, ...).
For any further details about the BG Services Engine please contact: [email protected]
Conclusion
FMSB provides a great set of ESB add-ons. Its core functionality benefits developers, business people, and solution architects alike. By re-using the ESB and BizTalk runtimes, FMSB provides a solid foundation for any domain-specific development.
How To Boost Message Transformations Using the
XslCompiledTransform class
Authored by: Paolo Salvatori, Principal Program Manager, AppFabric Customer Advisory Team
Reviewed by: Mark Simms, Senior Program Manager, AppFabric Customer Advisory Team; Curt Peterson, Principal Group Program Manager, AppFabric Customer Advisory Team
Introduction
The BizTalk runtime makes extensive use of the System.Xml.Xsl.XslTransform class. When you create
and build a BizTalk project, a separate .NET class is generated for each transformation map. Each of
these classes inherits from the Microsoft.XLANGs.BaseTypes.TransformBase class. For convenience, I
used Reflector to retrieve and report its code below. As you can easily note, the get accessor of the Transform property returns an XslTransform object.
TransformBase class
[Serializable]
public abstract class TransformBase
{
    // Methods
    protected TransformBase()
    {
    }

    // Properties
    public virtual string[] SourceSchemas
    {
        get { return null; }
    }

    public BTSXslTransform StreamingTransform
    {
        get
        {
            StringReader input = new StringReader(this.XmlContent);
            XmlTextReader stylesheet = new XmlTextReader(input);
            BTSXslTransform transform = new BTSXslTransform();
            transform.Load(stylesheet, null, base.GetType().Assembly.Evidence);
            return transform;
        }
    }

    public virtual string[] TargetSchemas
    {
        get { return null; }
    }

    public XslTransform Transform
    {
        get
        {
            StringReader input = new StringReader(this.XmlContent);
            XmlTextReader stylesheet = new XmlTextReader(input);
            XslTransform transform = new XslTransform();
            transform.Load(stylesheet, null, base.GetType().Assembly.Evidence);
            return transform;
        }
    }

    public XsltArgumentList TransformArgs
    {
        get
        {
            XmlDocument document = new XmlDocument();
            document.PreserveWhitespace = true;
            document.LoadXml(this.XsltArgumentListContent);
            XsltArgumentList list = new XsltArgumentList();
            foreach (XmlNode node in document.SelectNodes("//ExtensionObjects/ExtensionObject"))
            {
                XmlAttributeCollection attributes = node.Attributes;
                XmlNode namedItem = attributes.GetNamedItem("Namespace");
                XmlNode node3 = attributes.GetNamedItem("AssemblyName");
                XmlNode node4 = attributes.GetNamedItem("ClassName");
                object extension = Assembly.Load(node3.Value).CreateInstance(node4.Value);
                list.AddExtensionObject(namedItem.Value, extension);
            }
            return list;
        }
    }

    public abstract string XmlContent { get; }

    public abstract string XsltArgumentListContent { get; }
}
When BizTalk Server 2004 was built, the XslTransform was the only class provided by the Microsoft .NET Framework
1.1 to apply an XSLT to an XML document. When the Microsoft .NET Framework version 2.0 was released,
the XslTransform was declared obsolete and replaced by the System.Xml.Xsl.XslCompiledTransform. This class is used
to compile and execute XSLT transformations. In most cases, the XslCompiledTransform class significantly
outperforms the XslTransform class in terms of the time needed to execute the same XSLT against the same XML
document. The article Migrating From the XslTransform Class on MSDN reports as follows:
“The XslCompiledTransform class includes many performance improvements. The new XSLT processor compiles the XSLT
style sheet down to a common intermediate format, similar to what the common language runtime (CLR) does for other
programming languages. Once the style sheet is compiled, it can be cached and reused.”
The caveat is that because the XSLT is compiled to MSIL, the first time the transform is run there is a performance hit, but subsequent executions are much faster. To avoid paying the extra cost of initial compilation every time a map is executed, the compiled transform can be cached in a static structure (e.g. a Dictionary); a minimal sketch of the pattern follows the links below, and I’ll show you how to implement it fully in the second part of the article. For a detailed look at the performance differences between the XslTransform and XslCompiledTransform classes (plus comparisons with other XSLT processors) have a look at the following posts:
o XslCompiledTransform Performance: Beating MSXML 4.0
o XslCompiledTransform Slower than XslTransform?
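Here is the caching pattern in its minimal form; the class below is an illustrative sketch, not the actual helper component developed later in this article.
Compile-once, transform-many sketch
using System.Collections.Generic;
using System.Xml.Xsl;

public static class CachedTransforms
{
    private static readonly Dictionary<string, XslCompiledTransform> cache =
        new Dictionary<string, XslCompiledTransform>();

    public static XslCompiledTransform Get(string stylesheetPath)
    {
        lock (cache)
        {
            XslCompiledTransform transform;
            if (!cache.TryGetValue(stylesheetPath, out transform))
            {
                // The expensive XSLT-to-MSIL compilation happens only once per
                // stylesheet; every later call pays only the dictionary lookup.
                transform = new XslCompiledTransform();
                transform.Load(stylesheetPath);
                cache[stylesheetPath] = transform;
            }
            return transform;
        }
    }
}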
Although the overall performance of the XslCompiledTransform class is better than the XslTransform class, the Load
method of the XslCompiledTransform class might perform more slowly than the Load method of the XslTransform
class the first time it is called on a transformation. This is because the XSLT file must be compiled before it is loaded.
However, if you cache an XslCompiledTransform object, for subsequent calls its Transform method is considerably faster than the equivalent Transform method of the XslTransform class. Therefore, from a performance perspective:
o The XslTransform class is the best choice in a "Load once, Transform once" scenario as it doesn't
require the initial map-compilation.
o The XslCompiledTransform class is the best choice in a "Load once, Cache and Transform many times"
scenario as it implies the initial cost for the map-compilation, but then this overhead is highly
compensated by the fact that subsequent calls are much faster.
As BizTalk is a server application (or, if you prefer, an application server), the second scenario is more likely than the
first. The only way to take advantage of this class (given that BizTalk does not currently make use of the
XslCompiledTransform class) is to write custom components. If this seems a little strange to you, remember that all
BizTalk versions since BizTalk Server 2004 have inherited that core engine, based on .NET Framework 1.1. Since the
XslCompiledTransform class wasn’t added until .NET Framework 2.0, it wasn’t leveraged in that version of BizTalk.
While I’m currently working with the product group to see how best to take advantage of this class in the next version
of BizTalk, let’s go ahead and walk through creating a helper class to boost the performance of message
transformations in your current BizTalk implementation using the XslCompiledTransform class and compare its
performance with another helper component that makes use of the old XslTransform class.
BizTalk Application
In order to compare the performance of the XslTransform and XslCompiledTransform classes I created a simple
BizTalk application composed of the following projects:
Helpers
This library contains two helper classes called, respectively, XslTransformHelper and XslCompiledTransformHelper. These components share most of their code and expose the same static methods. I minimized the differences between the two classes, as the final goal was to compare the performance of the XslTransform and XslCompiledTransform classes. As their names suggest, the first helper class uses the XslTransform class, while the second makes use of the XslCompiledTransform class. The Transform static method of both helper classes provides multiple overloads. This allows the components to be invoked by any orchestration, pipeline component or .NET class. Both classes use a static Dictionary to cache maps in-process for later calls. The fully qualified name (FQDN) of a BizTalk map is used as the key to retrieve the corresponding instance from the Dictionary. The FQDN of a BizTalk map can easily be determined as follows:
o Open the BizTalk Administration Console and navigate to the Maps folder within your BizTalk
application.
o Double click the map in question.
o Copy the content of the Name label (see the picture below) and paste it in a text editor.
o Append a comma followed by a space (“, “).
o Copy the content of the Assembly label (see the picture below) and paste it in a text editor.
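For example, the map used throughout this article resolves to the following fully qualified name (the same value appears in the App.config file reported at the end of the article):
Map FQDN example
Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Maps.CalculatorRequestToCalculatorResponse, Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Maps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=8c83cae5bc47edb0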
Pretty easy, don’t you think?
XslTransformHelper class
#region Copyright
//-------------------------------------------------
// Author: Paolo Salvatori
// Email: [email protected]
// History: 2010-01-26 Created
//-------------------------------------------------
#endregion

#region Using References
using System;
using System.IO;
using System.Text;
using System.Collections.Generic;
using System.Configuration;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;
using System.Diagnostics;
using Microsoft.XLANGs.BaseTypes;
using Microsoft.XLANGs.Core;
using Microsoft.BizTalk.Streaming;
using Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Helpers.Properties;
#endregion

namespace Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Helpers
{
    public class XslTransformHelper
    {
        #region Private Constants
        private const int DefaultBufferSize = 10240;      // 10 KB
        private const int DefaultThresholdSize = 1048576; // 1 MB
        private const string DefaultPartName = "Body";
        #endregion

        #region Private Static Fields
        private static Dictionary<string, TransformBase> mapDictionary;
        #endregion

        #region Static Constructor
        static XslTransformHelper()
        {
            mapDictionary = new Dictionary<string, TransformBase>();
        }
        #endregion

        #region Public Static Methods
        public static XLANGMessage Transform(XLANGMessage message, string mapFullyQualifiedName, string messageName)
        {
            return Transform(message, 0, mapFullyQualifiedName, messageName,
                             DefaultPartName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static XLANGMessage Transform(XLANGMessage message, string mapFullyQualifiedName, string messageName, bool debug)
        {
            return Transform(message, 0, mapFullyQualifiedName, messageName,
                             DefaultPartName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static XLANGMessage Transform(XLANGMessage message, int partIndex, string mapFullyQualifiedName,
                                             string messageName, string partName, bool debug,
                                             int bufferSize, int thresholdSize)
        {
            try
            {
                using (Stream stream = message[partIndex].RetrieveAs(typeof(Stream)) as Stream)
                {
                    Stream response = Transform(stream, mapFullyQualifiedName, debug, bufferSize, thresholdSize);
                    CustomBTXMessage customBTXMessage = null;
                    customBTXMessage = new CustomBTXMessage(messageName, Service.RootService.XlangStore.OwningContext);
                    customBTXMessage.AddPart(string.Empty, partName);
                    customBTXMessage[0].LoadFrom(response);
                    return customBTXMessage.GetMessageWrapperForUserCode();
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            finally
            {
                if (message != null)
                {
                    message.Dispose();
                }
            }
        }

        public static XLANGMessage Transform(XLANGMessage[] messageArray, int[] partIndexArray,
                                             string mapFullyQualifiedName, string messageName, string partName,
                                             bool debug, int bufferSize, int thresholdSize)
        {
            try
            {
                if (messageArray != null && messageArray.Length > 0)
                {
                    Stream[] streamArray = new Stream[messageArray.Length];
                    for (int i = 0; i < messageArray.Length; i++)
                    {
                        streamArray[i] = messageArray[i][partIndexArray[i]].RetrieveAs(typeof(Stream)) as Stream;
                    }
                    Stream response = Transform(streamArray, mapFullyQualifiedName, debug, bufferSize, thresholdSize);
                    CustomBTXMessage customBTXMessage = null;
                    customBTXMessage = new CustomBTXMessage(messageName, Service.RootService.XlangStore.OwningContext);
                    customBTXMessage.AddPart(string.Empty, partName);
                    customBTXMessage[0].LoadFrom(response);
                    return customBTXMessage.GetMessageWrapperForUserCode();
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            finally
            {
                if (messageArray != null && messageArray.Length > 0)
                {
                    for (int i = 0; i < messageArray.Length; i++)
                    {
                        if (messageArray[i] != null)
                        {
                            messageArray[i].Dispose();
                        }
                    }
                }
            }
            return null;
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName)
        {
            return Transform(stream, mapFullyQualifiedName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName, bool debug)
        {
            return Transform(stream, mapFullyQualifiedName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName, bool debug,
                                       int bufferSize, int thresholdSize)
        {
            try
            {
                TransformBase transformBase = GetTransformBase(mapFullyQualifiedName);
                if (transformBase != null)
                {
                    VirtualStream virtualStream = new VirtualStream(bufferSize, thresholdSize);
                    XPathDocument xpathDocument = new XPathDocument(stream);
                    transformBase.Transform.Transform(xpathDocument, transformBase.TransformArgs, virtualStream);
                    virtualStream.Seek(0, SeekOrigin.Begin);
                    return virtualStream;
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.DynamicTransformsHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            return null;
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName)
        {
            return Transform(streamArray, mapFullyQualifiedName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName, bool debug)
        {
            return Transform(streamArray, mapFullyQualifiedName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName, bool debug,
                                       int bufferSize, int thresholdSize)
        {
            try
            {
                TransformBase transformBase = GetTransformBase(mapFullyQualifiedName);
                if (transformBase != null)
                {
                    CompositeStream compositeStream = null;
                    try
                    {
                        VirtualStream virtualStream = new VirtualStream(bufferSize, thresholdSize);
                        compositeStream = new CompositeStream(streamArray);
                        XPathDocument xpathDocument = new XPathDocument(compositeStream);
                        transformBase.Transform.Transform(xpathDocument, transformBase.TransformArgs, virtualStream);
                        virtualStream.Seek(0, SeekOrigin.Begin);
                        return virtualStream;
                    }
                    finally
                    {
                        if (compositeStream != null)
                        {
                            compositeStream.Close();
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.DynamicTransformsHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            return null;
        }
        #endregion

        #region Private Static Methods
        private static TransformBase GetTransformBase(string mapFullyQualifiedName)
        {
            TransformBase transformBase = null;
            lock (mapDictionary)
            {
                if (!mapDictionary.ContainsKey(mapFullyQualifiedName))
                {
                    Type type = Type.GetType(mapFullyQualifiedName);
                    transformBase = Activator.CreateInstance(type) as TransformBase;
                    if (transformBase != null)
                    {
                        mapDictionary[mapFullyQualifiedName] = transformBase;
                    }
                }
                else
                {
                    transformBase = mapDictionary[mapFullyQualifiedName];
                }
            }
            return transformBase;
        }
        #endregion
    }
}
XslCompiledTransformHelper class
#region Copyright
//-------------------------------------------------
// Author: Paolo Salvatori
// Email: [email protected]
// History: 2010-01-26 Created
//-------------------------------------------------
#endregion

#region Using References
using System;
using System.IO;
using System.Text;
using System.Collections.Generic;
using System.Configuration;
using System.Xml;
using System.Xml.Xsl;
using System.Xml.XPath;
using System.Diagnostics;
using Microsoft.XLANGs.BaseTypes;
using Microsoft.XLANGs.Core;
using Microsoft.BizTalk.Streaming;
using Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Helpers.Properties;
#endregion

namespace Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Helpers
{
    public class XslCompiledTransformHelper
    {
        #region Private Constants
        private const int DefaultBufferSize = 10240;      // 10 KB
        private const int DefaultThresholdSize = 1048576; // 1 MB
        private const string DefaultPartName = "Body";
        #endregion

        #region Private Static Fields
        private static Dictionary<string, MapInfo> mapDictionary;
        #endregion

        #region Static Constructor
        static XslCompiledTransformHelper()
        {
            mapDictionary = new Dictionary<string, MapInfo>();
        }
        #endregion

        #region Public Static Methods
        public static XLANGMessage Transform(XLANGMessage message, string mapFullyQualifiedName, string messageName)
        {
            return Transform(message, 0, mapFullyQualifiedName, messageName,
                             DefaultPartName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static XLANGMessage Transform(XLANGMessage message, string mapFullyQualifiedName, string messageName, bool debug)
        {
            return Transform(message, 0, mapFullyQualifiedName, messageName,
                             DefaultPartName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static XLANGMessage Transform(XLANGMessage message, int partIndex, string mapFullyQualifiedName,
                                             string messageName, string partName, bool debug,
                                             int bufferSize, int thresholdSize)
        {
            try
            {
                using (Stream stream = message[partIndex].RetrieveAs(typeof(Stream)) as Stream)
                {
                    Stream response = Transform(stream, mapFullyQualifiedName, debug, bufferSize, thresholdSize);
                    CustomBTXMessage customBTXMessage = null;
                    customBTXMessage = new CustomBTXMessage(messageName, Service.RootService.XlangStore.OwningContext);
                    customBTXMessage.AddPart(string.Empty, partName);
                    customBTXMessage[0].LoadFrom(response);
                    return customBTXMessage.GetMessageWrapperForUserCode();
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            finally
            {
                if (message != null)
                {
                    message.Dispose();
                }
            }
        }

        public static XLANGMessage Transform(XLANGMessage[] messageArray, int[] partIndexArray,
                                             string mapFullyQualifiedName, string messageName, string partName,
                                             bool debug, int bufferSize, int thresholdSize)
        {
            try
            {
                if (messageArray != null && messageArray.Length > 0)
                {
                    Stream[] streamArray = new Stream[messageArray.Length];
                    for (int i = 0; i < messageArray.Length; i++)
                    {
                        streamArray[i] = messageArray[i][partIndexArray[i]].RetrieveAs(typeof(Stream)) as Stream;
                    }
                    Stream response = Transform(streamArray, mapFullyQualifiedName, debug, bufferSize, thresholdSize);
                    CustomBTXMessage customBTXMessage = null;
                    customBTXMessage = new CustomBTXMessage(messageName, Service.RootService.XlangStore.OwningContext);
                    customBTXMessage.AddPart(string.Empty, partName);
                    customBTXMessage[0].LoadFrom(response);
                    return customBTXMessage.GetMessageWrapperForUserCode();
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            finally
            {
                if (messageArray != null && messageArray.Length > 0)
                {
                    for (int i = 0; i < messageArray.Length; i++)
                    {
                        if (messageArray[i] != null)
                        {
                            messageArray[i].Dispose();
                        }
                    }
                }
            }
            return null;
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName)
        {
            return Transform(stream, mapFullyQualifiedName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName, bool debug)
        {
            return Transform(stream, mapFullyQualifiedName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream stream, string mapFullyQualifiedName, bool debug,
                                       int bufferSize, int thresholdSize)
        {
            try
            {
                MapInfo mapInfo = GetMapInfo(mapFullyQualifiedName, debug);
                if (mapInfo != null)
                {
                    XmlTextReader xmlTextReader = null;
                    try
                    {
                        VirtualStream virtualStream = new VirtualStream(bufferSize, thresholdSize);
                        xmlTextReader = new XmlTextReader(stream);
                        mapInfo.Xsl.Transform(xmlTextReader, mapInfo.Arguments, virtualStream);
                        virtualStream.Seek(0, SeekOrigin.Begin);
                        return virtualStream;
                    }
                    finally
                    {
                        if (xmlTextReader != null)
                        {
                            xmlTextReader.Close();
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            return null;
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName)
        {
            return Transform(streamArray, mapFullyQualifiedName, false, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName, bool debug)
        {
            return Transform(streamArray, mapFullyQualifiedName, debug, DefaultBufferSize, DefaultThresholdSize);
        }

        public static Stream Transform(Stream[] streamArray, string mapFullyQualifiedName, bool debug,
                                       int bufferSize, int thresholdSize)
        {
            try
            {
                MapInfo mapInfo = GetMapInfo(mapFullyQualifiedName, debug);
                if (mapInfo != null)
                {
                    CompositeStream compositeStream = null;
                    try
                    {
                        VirtualStream virtualStream = new VirtualStream(bufferSize, thresholdSize);
                        compositeStream = new CompositeStream(streamArray);
                        XmlTextReader reader = new XmlTextReader(compositeStream);
                        mapInfo.Xsl.Transform(reader, mapInfo.Arguments, virtualStream);
                        virtualStream.Seek(0, SeekOrigin.Begin);
                        return virtualStream;
                    }
                    finally
                    {
                        if (compositeStream != null)
                        {
                            compositeStream.Close();
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                ExceptionHelper.HandleException(Resources.XslCompiledTransformHelper, ex);
                TraceHelper.WriteLineIf(debug, null, ex.Message, EventLogEntryType.Error);
                throw;
            }
            return null;
        }
        #endregion

        #region Private Static Methods
        private static MapInfo GetMapInfo(string mapFullyQualifiedName, bool debug)
        {
            MapInfo mapInfo = null;
            lock (mapDictionary)
            {
                if (!mapDictionary.ContainsKey(mapFullyQualifiedName))
                {
                    Type type = Type.GetType(mapFullyQualifiedName);
                    TransformBase transformBase = Activator.CreateInstance(type) as TransformBase;
                    if (transformBase != null)
                    {
                        XslCompiledTransform map = new XslCompiledTransform(debug);
                        using (StringReader stringReader = new StringReader(transformBase.XmlContent))
                        {
                            XmlTextReader xmlTextReader = null;
                            try
                            {
                                xmlTextReader = new XmlTextReader(stringReader);
                                XsltSettings settings = new XsltSettings(true, true);
                                map.Load(xmlTextReader, settings, new XmlUrlResolver());
                                mapInfo = new MapInfo(map, transformBase.TransformArgs);
                                mapDictionary[mapFullyQualifiedName] = mapInfo;
                            }
                            finally
                            {
                                if (xmlTextReader != null)
                                {
                                    xmlTextReader.Close();
                                }
                            }
                        }
                    }
                }
                else
                {
                    mapInfo = mapDictionary[mapFullyQualifiedName];
                }
            }
            return mapInfo;
        }
        #endregion
    }

    public class MapInfo
    {
        #region Private Fields
        private XslCompiledTransform xsl;
        private XsltArgumentList arguments;
        #endregion

        #region Public Constructors
        public MapInfo()
        {
            this.xsl = null;
            this.arguments = null;
        }

        public MapInfo(XslCompiledTransform xsl, XsltArgumentList arguments)
        {
            this.xsl = xsl;
            this.arguments = arguments;
        }
        #endregion

        #region Public Properties
        public XslCompiledTransform Xsl
        {
            get { return this.xsl; }
            set { this.xsl = value; }
        }

        public XsltArgumentList Arguments
        {
            get { return this.arguments; }
            set { this.arguments = value; }
        }
        #endregion
    }
}
Note: Support for embedded scripts is an optional XSLT setting on the XslCompiledTransform class. Script support is
disabled by default. Therefore, to enable script support, it’s necessary to create an XsltSettings object with the
EnableScript property set to true and pass the object to the Load method. That’s what I did in my code above.
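In isolation, the relevant calls look like the following sketch; the stylesheet path is illustrative, since the helper classes above load the stylesheet from TransformBase.XmlContent rather than from a file.
Enabling script support (sketch)
using System.Xml;
using System.Xml.Xsl;

class ScriptEnabledLoad
{
    static XslCompiledTransform LoadWithScripts(string stylesheetPath)
    {
        // Both the document() function and embedded scripts are disabled by default.
        XsltSettings settings = new XsltSettings(true /* enableDocumentFunction */,
                                                 true /* enableScript */);
        XslCompiledTransform transform = new XslCompiledTransform();
        transform.Load(stylesheetPath, settings, new XmlUrlResolver());
        return transform;
    }
}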
Looking at the code of the XslTransformHelper and XslCompiledTransformHelper classes, you can easily note that they are almost identical and share most of their code. The difference between the two is that the first component uses the XslTransform class, as the BizTalk runtime does, to apply maps to input documents, while the second component uses the XslCompiledTransform class for the same purpose.
Schemas
This project contains two XML schemas, CalculatorRequest and CalculatorResponse, which define, respectively, the request and the response message, and a PropertySchema that defines the Method promoted property. A CalculatorRequest message can contain zero or more Operation elements, as shown in the following sample:
CalculatorRequest message
<CalculatorRequest xmlns="http://microsoft.biztalk.cat/10/dynamictransforms/calculatorrequest">
  <Method>UnitTest</Method>
  <Operations>
    <Operation>
      <Operator>+</Operator>
      <Operand1>82</Operand1>
      <Operand2>18</Operand2>
    </Operation>
    <Operation>
      <Operator>-</Operator>
      <Operand1>30</Operand1>
      <Operand2>12</Operand2>
    </Operation>
    <Operation>
      <Operator>*</Operator>
      <Operand1>25</Operand1>
      <Operand2>8</Operand2>
    </Operation>
    <Operation>
      <Operator>\</Operator>
      <Operand1>100</Operand1>
      <Operand2>25</Operand2>
    </Operation>
  </Operations>
</CalculatorRequest>
Page 110
A CalculatorResponse message contains a Result element for each Operation element within the corresponding CalculatorRequest message, as shown in the following sample:
CalculatorResponse message
<CalculatorResponse xmlns="http://microsoft.biztalk.cat/10/dynamictransforms/calculatorresponse">
  <Status>Ok</Status>
  <Results>
    <Result>
      <Value>100</Value>
      <Error>None</Error>
    </Result>
    <Result>
      <Value>18</Value>
      <Error>None</Error>
    </Result>
    <Result>
      <Value>200</Value>
      <Error>None</Error>
    </Result>
    <Result>
      <Value>4</Value>
      <Error>None</Error>
    </Result>
  </Results>
</CalculatorResponse>
Maps
This project contains the CalculatorRequestToCalculatorResponse map (see the picture below) that transforms an
inbound request message into the corresponding response message.
Orchestrations
This project contains 4 orchestrations.
SingleDynamicTransform Test Case
This flow was created just to test the XslCompiledTransformHelper class within an orchestration.
The following picture depicts the architecture of the SingleDynamicTransform test case.
Message Flow:
1. A One-Way FILE Receive Location receives a new CalculatorRequest xml document from the IN folder.
2. The XML disassembler component within the XMLReceive pipeline promotes the Method element inside the CalculatorRequest xml document. The Message Agent submits the incoming message to the MessageBox (BizTalkMsgBoxDb).
3. The inbound request starts a new instance of the SingleDynamicTransform. The orchestration uses a Direct Bound Port and a Filter to receive only the CalculatorRequest messages with the Method promoted property = “SingleDynamicTransform”.
4. The SingleDynamicTransform invokes the Transform static method exposed by the XslCompiledTransformHelper class to apply the CalculatorRequestToCalculatorResponse map to the inbound CalculatorRequest message and generate the corresponding CalculatorResponse document.
5. The orchestration publishes the CalculatorResponse message to the MessageBox (BizTalkMsgBoxDb).
6. The response message is retrieved by a One-Way FILE Send Port.
7. The response message is written to the OUT folder by the One-Way FILE Send Port.
DefaultStaticLoop Test Case
As shown in the picture below, this orchestration receives a CalculatorRequest xml document (80KB) and executes a
loop (1000 iterations) in which it uses a Transform Shape to apply the CalculatorRequestToCalculatorResponse map
to the inbound message. The orchestration does not produce any response message. The code within the
StartStepTrace and EndStepTrace Expression Shapes keeps track of the time spent to execute the map at each
iteration, while the code contained in the final Trace Expression Shape writes the total elapsed time on the standard
output. The objective of this test case is to measure the time spent by the orchestration to apply the map to the
inbound document 1000 times using the Transform Shape.
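The timing expressions themselves are not reported here, but they presumably mirror the pattern used by the DefaultDynamicLoop shown next; the following is my reconstruction, not the shipped code.
StartStepTrace / EndStepTrace (reconstruction)
// StartStepTrace Expression Shape
startTime = System.DateTime.Now;

// EndStepTrace Expression Shape
stopTime = System.DateTime.Now;
elapsedTime = stopTime.Subtract(startTime);
total = total + elapsedTime.TotalMilliseconds;
i = i + 1;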
The following picture depicts the architecture of the DefaultStaticLoop test case.
Message Flow:
1. A One-Way FILE Receive Location receives a new CalculatorRequest xml document from the IN folder.
2. The XML disassembler component within the XMLReceive pipeline promotes the Method element inside the CalculatorRequest xml document. The Message Agent submits the incoming message to the MessageBox (BizTalkMsgBoxDb).
3. The inbound request starts a new instance of the DefaultStaticLoop. The orchestration uses a Direct Bound Port and a Filter to receive only the CalculatorRequest messages with the Method promoted property = “DefaultStaticLoop”.
4. The DefaultStaticLoop executes a loop (1000 iterations) in which it uses a Transform Shape to apply the
CalculatorRequestToCalculatorResponse map to the inbound CalculatorRequest message (80KB).
DefaultDynamicLoop Test Case
This component is a variation of the DefaultStaticLoop orchestration. As this latter, it receives a CalculatorRequest xml
document (80KB) and executes a loop (1000 iterations), but it doesn’t use a Transform shape to execute the
CalculatorRequestToCalculatorResponse map against the inbound message, it rather uses a Message Assignment
Shape that contain the following code. See How to Use Expressions to Dynamic Transform Messages for more
information on this topic. The objective of this test case is to measure the time spent by the orchestration to apply the
map to the inbound document 1000 times using the transform statement provided by the XLANG Runtime.
startTime = System.DateTime.Now;
type = System.Type.GetType("<Map FQDN>");
transform(CalculatorResponse) = type(CalculatorRequest);
stopTime = System.DateTime.Now;
elapsedTime = stopTime.Subtract(startTime);
total = total + elapsedTime.TotalMilliseconds;
i = i + 1;
Like the DefaultStaticLoop, the orchestration does not produce any response. The code within the CreateResponse Shape keeps track of the time spent executing the map at each iteration, while the code contained in the final Trace Expression Shape writes the total elapsed time to the standard output.
The following picture depicts the architecture of the DefaultDynamicLoop test case:
Message Flow:
1. A One-Way FILE Receive Location receives a new CalculatorRequest xml document from the IN folder.
2. The XML disassembler component within the XMLReceive pipeline promotes the Method element inside the CalculatorRequest xml document. The Message Agent submits the incoming message to the MessageBox (BizTalkMsgBoxDb).
3. The inbound request starts a new instance of the DefaultDynamicLoop. The orchestration uses a Direct Bound Port and a Filter to receive only the CalculatorRequest messages with the Method promoted property = “DefaultDynamicLoop”.
4. The DefaultDynamicLoop executes a loop (1000 iterations) in which it uses a Message Assignment Shape to
execute the CalculatorRequestToCalculatorResponse map against the inbound CalculatorRequest message
(80KB).
CustomDynamicLoop Test Case
Like the previous orchestrations, the CustomDynamicLoop receives a CalculatorRequest xml document (80KB) and
executes a loop (1000 iterations). However, instead of using a Transform shape or the Dynamic Transformation
mechanism provided by BizTalk to apply the map to the inbound document, it uses an Expression Shape (see the
code below) to invoke the Transform method exposed by my XslCompiledTransformHelper component. The objective
of this test case is to measure the time spent by the orchestration to apply the map to the inbound document 1000
times using the XslCompiledTransformHelper class.
startTime = System.DateTime.Now;
CalculatorResponse = Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Helpers.XslCompiledTransformHelper.Transform(CalculatorRequest, "<Map FQDN>");
stopTime = System.DateTime.Now;
elapsedTime = stopTime.Subtract(startTime);
total = total + elapsedTime.TotalMilliseconds;
i = i + 1;
Like the previous orchestrations, the CustomDynamicLoop does not produce any response. The code within the final
Trace Expression Shape writes the total elapsed time on the standard output.
The following picture depicts the architecture of the CustomDynamicLoop test case.
Message Flow:
1. A One-Way FILE Receive Location receives a new CalculatorRequest xml document from the IN folder.
2. The XML disassembler component within the XMLReceive pipeline promotes the Method element inside the CalculatorRequest xml document. The Message Agent submits the incoming message to the MessageBox (BizTalkMsgBoxDb).
3. The inbound request starts a new instance of the CustomDynamicLoop. The orchestration uses a Direct Bound Port and a Filter to receive only the CalculatorRequest messages with the Method promoted property = “CustomDynamicLoop”.
4. The CustomDynamicLoop executes a loop (1000 iterations) in which it uses the XslCompiledTransformHelper class to execute the CalculatorRequestToCalculatorResponse map against the inbound CalculatorRequest message (80KB).
Pipeline Components
This project contains 2 custom pipeline components called, respectively, TransformPipelineComponent and
LoopbackPipelineComponent.
TransformPipelineComponent
This component can be used within a Receive or a Send custom pipeline to transform the inbound message using the XslCompiledTransformHelper class. For the sake of brevity, we report only the code of its Execute method below. Note that if the loopback property exposed by the component equals true, the component promotes the RouteDirectToTp context property to true. This way, when the TransformPipelineComponent is used by a Receive Pipeline within a Request-Response Receive Location and the Message Agent posts the transformed message to the MessageBox, the message is immediately returned as a response to the Receive Location (Loopback pattern).
Execute method
public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    try
    {
        if (componentEnabled)
        {
            if (context == null)
            {
                throw new ArgumentException("The pipeline context parameter cannot be null.");
            }
            if (message != null)
            {
                IBaseMessagePart bodyPart = message.BodyPart;
                Stream inboundStream = bodyPart.GetOriginalDataStream();
                Stream outboundStream = XslCompiledTransformHelper.Transform(inboundStream, mapFQDN, traceEnabled, bufferSize, thresholdSize);
                bodyPart.Data = outboundStream;
                context.ResourceTracker.AddResource(inboundStream);
                context.ResourceTracker.AddResource(outboundStream);
                if (loopback)
                {
                    message.Context.Promote("RouteDirectToTP", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
                }
            }
        }
    }
    catch (Exception ex)
    {
        ExceptionHelper.HandleException("TransformPipelineComponent", ex);
        TraceHelper.WriteLineIf(traceEnabled, context, ex.Message, EventLogEntryType.Error);
    }
    return message;
}
LoopbackPipelineComponent
This component can be used to set the RouteDirectToTp context property to true to implement the Loopback pattern.
When used within a Receive Pipeline, the component allows the promotion of the MessageType property without the
need to use an Xml Disassembler. At runtime, the MessageType is mandatory to determine the map to apply to a
given message in a Receive or Send Port.
Execute method
public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
{
    try
    {
        if (loopback)
        {
            message.Context.Promote("RouteDirectToTP", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);
            if (messageType != null)
            {
                message.Context.Promote("MessageType", "http://schemas.microsoft.com/BizTalk/2003/system-properties", messageType);
            }
        }
    }
    catch (Exception ex)
    {
        ExceptionHelper.HandleException("LoopbackPipelineComponent", ex);
        TraceHelper.WriteLineIf(traceEnabled, context, ex.Message, EventLogEntryType.Error);
    }
    return message;
}
Pipelines
This project contains 2 custom pipelines:
o TransformReceivePipeline: this pipeline contains only an instance of the
TransformPipelineComponent.
o LoopbackReceivePipeline: this pipeline contains only an instance of the
LoopbackPipelineComponent.
Then, I created 2 use cases to compare the performance of the default message transformation provided by BizTalk
Messaging Engine and the message transformation accomplished using my XslCompiledTransformHelper class.
TransformStaticallyDefined Test Case
The following picture depicts the architecture of the TransformStaticallyDefined test case.
Message Flow:
1. The DT.TransformStaticallyDefined.WCF-NetTcp.RL WCF-NetTcp Request-Response Receive Location
receives a CalculatorRequest xml document submitted running the InvokeStaticMap Unit Test within Visual
Studio.
2. The LoopbackReceivePipeline promotes the RouteDirectToTp property to true and the MessageType
property. I could have used the Xml Disassembler component within the Receive Pipeline to find and
promote the MessageType, but I preferred to specify the MessageType of the inbound message as part of
the configuration of the Receive Location (see the picture below). This way I can avoid the overhead
introduced by the Xml Disassembler component and measure just the time spent by the Messaging Engine
to apply the CalculatorRequestToCalculatorResponse map statically defined on the Receive Port. Once the CalculatorRequest message has been transformed into a CalculatorResponse document, the Message Agent posts the latter to the MessageBox.
3. The transformed message is immediately returned to the Receive Location.
4. The response message is returned to the InvokeStaticMap Unit Test.
DT.TransformStaticallyDefined.RP Configuration
The screen below shows that the use of the CalculatorRequestToCalculatorResponse map has been statically
configured on the DT.TransformStaticallyDefined.RP Receive Port.
DT.TransformStaticallyDefined.WCF-NetTcp.RL Configuration
The following picture shows the configuration of the LoopbackReceivePipeline on the
DT.TransformStaticallyDefined.WCF-NetTcp.RL Receive Location.
TransformReceivePipeline Test Case
The following picture depicts the architecture of the TransformReceivePipeline test case.
Message Flow:
1. The DT.TransformReceivePipeline.WCF-NetTcp.RL WCF-NetTcp Request-Response Receive Location receives
a CalculatorRequest xml document submitted running the InvokeDynamicMap Unit Test within Visual Studio.
2. The TransformPipelineComponent (the following picture shows its configuration) promotes the
RouteDirectToTp property to true and transforms the inbound message using the
XslCompiledTransformHelper class and the CalculatorRequestToCalculatorResponse map. Then the Message
Agent posts the transformed message to the MessageBox.
3. The transformed message is immediately returned to the Receive Location.
4. The response message is returned to the InvokeDynamicMap Unit Test.
DT.TransformReceivePipeline.WCF-NetTcp.RL Configuration
The following picture shows the configuration of the TransformReceivePipeline on the
DT.TransformReceivePipeline.WCF-NetTcp.RL Receive Location.
UnitAndLoadTests
Finally, I created a Test Project called UnitAndLoadTests that contains a small set of unit and load tests described
below:
o TestXslTransformHelper: this unit test can be used to measure the time spent executing loops transformations using the XslTransformHelper class, where loops is defined in the configuration file. The following reports the code of the TestXslTransformHelper unit test.
TestXslTransformHelper method
[TestMethod]
public void TestXslTransformHelper()
{
    Assert.AreNotEqual<string>(null, inputFile, "The inputFile key in the configuration file cannot be null.");
    Assert.AreNotEqual<string>(String.Empty, inputFile, "The inputFile key in the configuration file cannot be empty.");
    Assert.AreEqual<bool>(true, File.Exists(inputFile),
        string.Format(CultureInfo.CurrentCulture, "The {0} file does not exist.", inputFile));
    Assert.AreNotEqual<string>(null, mapFullyQualifiedName, "The mapFullyQualifiedName key in the configuration file cannot be null.");
    Assert.AreNotEqual<string>(String.Empty, mapFullyQualifiedName, "The mapFullyQualifiedName key in the configuration file cannot be empty.");
    if (traceResponses)
    {
        Assert.AreEqual<bool>(true, Directory.Exists(outputFolder),
            string.Format(CultureInfo.CurrentCulture, "The {0} folder does not exist.", outputFolder));
    }
    Type type = null;
    try
    {
        type = Type.GetType(mapFullyQualifiedName);
    }
    catch (Exception ex)
    {
        Assert.Fail(ex.Message);
    }
    MemoryStream stream = null;
    string message;
    using (StreamReader reader = new StreamReader(File.Open(inputFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
    {
        message = reader.ReadToEnd();
    }
    byte[] buffer = Encoding.UTF8.GetBytes(message);
    Stopwatch stopwatch = new Stopwatch();
    Stream output = null;
    TestContext.BeginTimer("TestXslTransformHelper");
    for (int i = 0; i < loops; i++)
    {
        stream = new MemoryStream(buffer);
        stopwatch.Start();
        output = XslTransformHelper.Transform(stream, mapFullyQualifiedName);
        stopwatch.Stop();
        if (output != null && traceResponses)
        {
            using (StreamReader reader = new StreamReader(output))
            {
                message = reader.ReadToEnd();
            }
            using (StreamWriter writer = new StreamWriter(File.OpenWrite(
                Path.Combine(outputFolder,
                    string.Format(CultureInfo.CurrentCulture, "{{{0}}}.xml", Guid.NewGuid().ToString())))))
            {
                writer.Write(message);
                writer.Flush();
            }
        }
    }
    TestContext.EndTimer("TestXslTransformHelper");
    Trace.WriteLine(String.Format(CultureInfo.CurrentCulture,
        "[TestXslTransformHelper] Loops: {0} Elapsed Time (milliseconds): {1}",
        loops, stopwatch.ElapsedMilliseconds));
}
TestXslCompiledTransformHelper: this unit test can be used to measure the time spent to execute loops
transformations using the XslCompiledTransformHelper class, where loops is defined in the configuration
file. For the sake of brevity, I omitted to include the code of the TestXslCompiledTransformHelper unit test as
this latter is very similar to one of the previous unit test.
InvokeStaticMap: this unit test can be used to send a single CalculatorRequest xml document to the
DT.TransformStaticallyDefined.WCF-NetTcp.RL Receive Location used by the TransformStaticallyDefined Test
case.
InvokeDynamicMap: this unit test can be used to send a single CalculatorRequest xml document to the
DT.TransformReceivePipeline.WCF-NetTcp.RL Receive Location used by the TransformReceivePipeline Test
case.
o StaticMapLoadTest: this load test is based on the InvokeStaticMap unit test and can be used to generate
traffic against the TransformStaticallyDefined Use Case.
o DynamicMapLoadTest: this load test is based on the InvokeDynamicMap unit test and can be used to
generate traffic against the TransformReceivePipeline Use Case.
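For illustration, here is a minimal sketch of how such a unit test might call a Receive Location through the generic IRequestChannel contract defined in the App.config reported later in this section. The class and member names are mine and are not necessarily those used in the attached code.

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Xml;

public static class ReceiveLocationClient
{
    // endpointName is one of the client endpoints defined in App.config,
    // e.g. "StaticMapEndpoint" or "DynamicMapEndpoint".
    public static string SendRequest(string endpointName, string requestFile)
    {
        ChannelFactory<IRequestChannel> factory = new ChannelFactory<IRequestChannel>(endpointName);
        IRequestChannel channel = factory.CreateChannel();
        channel.Open();
        try
        {
            using (XmlReader reader = XmlReader.Create(requestFile))
            {
                // Wrap the xml document in a WCF message and send it to the Receive Location.
                Message request = Message.CreateMessage(MessageVersion.Default, "*", reader);
                Message response = channel.Request(request);
                return response.GetReaderAtBodyContents().ReadOuterXml();
            }
        }
        finally
        {
            channel.Close();
            factory.Close();
        }
    }
}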
All these tests share the same configuration, contained in the App.config configuration file. In particular, the file
contains the following information:
o The WCF client endpoints used to invoke the DT.TransformStaticallyDefined.WCF-NetTcp.RL and
DT.TransformReceivePipeline.WCF-NetTcp.RL Receive Locations.
o The appSettings section, which defines multiple keys that control the runtime behavior of the unit and load
tests:
o mapFullyQualifiedName: contains the name of the map used by the TestXslTransformHelper and
TestXslCompiledTransformHelper unit tests.
o inputFile: defines the path of the inbound document used by all unit tests (TestXslTransformHelper,
TestXslCompiledTransformHelper, InvokeStaticMap, InvokeDynamicMap).
o outputFolder: indicates the path where response messages are saved.
o traceResponses: indicates whether to save response messages.
o loops: controls the number of loop iterations performed by the TestXslTransformHelper and
TestXslCompiledTransformHelper unit tests.
For the sake of completeness, I include below the App.config I used for my tests.
App.config file
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <!-- Bindings used by client endpoints -->
    <bindings>
      <netTcpBinding>
        <binding name="netTcpBinding"
                 closeTimeout="01:10:00" openTimeout="01:10:00"
                 receiveTimeout="01:10:00" sendTimeout="01:10:00"
                 transactionFlow="false" transferMode="Buffered"
                 transactionProtocol="OleTransactions"
                 hostNameComparisonMode="StrongWildcard"
                 listenBacklog="100"
                 maxBufferPoolSize="1048576" maxBufferSize="10485760"
                 maxConnections="200" maxReceivedMessageSize="10485760">
          <readerQuotas maxDepth="32" maxStringContentLength="8192"
                        maxArrayLength="16384" maxBytesPerRead="4096"
                        maxNameTableCharCount="16384" />
          <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
          <security mode="None">
            <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
            <message clientCredentialType="Windows" />
          </security>
        </binding>
      </netTcpBinding>
    </bindings>
    <client>
      <!-- Client endpoints used by the client to exchange messages with the WCF Receive Locations -->
      <endpoint address="net.tcp://localhost:3816/dynamictransforms"
                binding="netTcpBinding" bindingConfiguration="netTcpBinding"
                contract="System.ServiceModel.Channels.IRequestChannel"
                name="StaticMapEndpoint" />
      <endpoint address="net.tcp://localhost:3817/dynamictransforms"
                binding="netTcpBinding" bindingConfiguration="netTcpBinding"
                contract="System.ServiceModel.Channels.IRequestChannel"
                name="DynamicMapEndpoint" />
    </client>
  </system.serviceModel>
  <appSettings>
    <add key="mapFullyQualifiedName"
         value="Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Maps.CalculatorRequestToCalculatorResponse, Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Maps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=8c83cae5bc47edb0"/>
    <add key="inputFile" value="C:\Projects\DynamicTransforms\Test\UnitTest.xml"/>
    <add key="outputFolder" value="C:\Projects\DynamicTransforms\Test\Out"/>
    <add key="traceResponses" value="false"/>
    <add key="loops" value="1000"/>
  </appSettings>
</configuration>
Results
Let’s start running some of the test cases and unit tests I created. Take into account that the unit tests you can find
in the code attached to the article are parametric and can be executed using any xml message and map.
Therefore, I strongly encourage you to repeat my tests using your own messages and maps.
TestXslTransformHelper vs. TestXslCompiledTransformHelper
I configured both unit tests to execute the CalculatorRequestToCalculatorResponse map against the
UnitTest.xml file (80 KB) 1000 times. Each test method uses an instance of the Stopwatch class to measure the time
spent executing all calls and finally traces a message containing the total elapsed time. The screens below were
taken within Visual Studio at the end of the 2 tests.
TestXslTransformHelper
TestXslCompiledTransformHelper
The difference in terms of performance between the 2 unit tests is simply astonishing:
o TestXslTransformHelper Unit Test: Total Elapsed Time = ~144 seconds, Average Elapsed
Time/Transformation = ~144 milliseconds
o TestXslCompiledTransformHelper Unit Test: Total Elapsed Time = ~3.5 seconds, Average Elapsed
Time/Transformation = ~3.5 milliseconds
Obviously, I conducted several test runs and they all confirmed that the XslCompiledTransformHelper class is
dramatically faster than the XslTransformHelper class. This clearly demonstrates that the XslCompiledTransform
class is far better suited than the XslTransform class to a “Load once, Cache and Transform many times”
scenario.
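To make the pattern concrete, the following is a simplified sketch of the load-once/cache technique at the heart of a helper like the XslCompiledTransformHelper class. This is my own minimal reconstruction, not the actual class shipped with the attached code, and it ignores details such as extension objects and trusted XSLT settings.

using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;
using System.Xml.Xsl;
using Microsoft.XLANGs.BaseTypes;   // TransformBase, the base class of all BizTalk maps

public static class CompiledTransformCache
{
    // One compiled transform per map FQDN, shared by all threads.
    private static readonly Dictionary<string, XslCompiledTransform> cache =
        new Dictionary<string, XslCompiledTransform>();
    private static readonly object syncRoot = new object();

    public static Stream Transform(Stream input, string mapFullyQualifiedName)
    {
        XslCompiledTransform transform;
        lock (syncRoot)
        {
            if (!cache.TryGetValue(mapFullyQualifiedName, out transform))
            {
                // First call for this map: load the map type, extract its XSLT,
                // compile it once and cache the result for later calls.
                Type mapType = Type.GetType(mapFullyQualifiedName, true);
                TransformBase map = (TransformBase)Activator.CreateInstance(mapType);
                transform = new XslCompiledTransform();
                using (XmlReader stylesheet = XmlReader.Create(new StringReader(map.XmlContent)))
                {
                    transform.Load(stylesheet);
                }
                cache[mapFullyQualifiedName] = transform;
            }
        }
        // Subsequent calls skip compilation entirely and just run the transform.
        MemoryStream output = new MemoryStream();
        using (XmlReader reader = XmlReader.Create(input))
        {
            transform.Transform(reader, null, output);
        }
        output.Seek(0, SeekOrigin.Begin);
        return output;
    }
}

The lock guarantees that each map is compiled exactly once even under concurrent load; every later call pays only the cost of executing the already compiled stylesheet.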
DefaultStaticLoop Test Case vs. DefaultDynamicLoop Test Case
All the orchestrations used in the 3 test cases share the same structure and implement the same behavior using
different techniques:
o DefaultStaticLoop orchestration: uses a Transform shape to execute the
CalculatorRequestToCalculatorResponse map against the inbound document.
o DefaultDynamicLoop orchestration: uses the XLANG/s transform statement within a Message Assignment Shape to
accomplish the same task (the statement is sketched after this list).
o CustomDynamicLoop orchestration: uses the XslCompiledTransformHelper.Transform method to
invoke the map against the request message.
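Schematically, and using illustrative message names of my own (msgRequest, msgResponse), the first two techniques boil down to the XLANG/s transform statement, whether it is generated by the Transform shape or hand-written in a Message Assignment shape inside a Construct Message shape:

construct msgResponse
{
    // The map is referenced by its fully qualified .NET type name.
    transform (msgResponse) = Microsoft.BizTalk.CAT.Samples.DynamicTransforms.Maps.CalculatorRequestToCalculatorResponse (msgRequest);
}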
Each orchestration contains a loop that executes the message transformation exactly 1000 times and finally reports
the total elapsed time. For the test I created 3 separate xml files (they can be found in the Test folder), one for each
orchestration. As I explained in the first part of the article, each orchestration receives the request message through a
Direct Bound Port. In particular, the following Filter Expression has been defined on the Activate Receive Shape of
each orchestration:
o http://microsoft.biztalk.cat/10/dynamictransforms/propertyschema.Method == <OrchestrationName>
Therefore, the following files are identical:
o DefaultStaticLoop.xml
o DefaultDynamicLoop.xml
o CustomDynamicLoop.xml
with the exception of the Method element, which contains the name of the related orchestration.
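For illustration only, one of these files might look as follows. Apart from the Method element, the element names and the namespace are assumptions of mine; the actual CalculatorRequest schema ships with the code attached to the article.

<!-- Hypothetical sketch of DefaultStaticLoop.xml; only the Method value differs across the 3 files. -->
<CalculatorRequest xmlns="http://microsoft.biztalk.cat/10/dynamictransforms">
  <Method>DefaultStaticLoop</Method>
  <!-- ... calculator payload, identical across the 3 files ... -->
</CalculatorRequest>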
To execute each test case, it is sufficient to copy the corresponding file to the Test\IN folder: the DT.FILE.RL FILE
Receive Location will then receive the message and activate the intended test case. I used DebugView to keep track
of the elapsed time reported by each of the test cases:
The results are quite eloquent and leave very little room for doubt:
o DefaultStaticLoop Test Case: Total Elapsed Time = ~57.3 seconds, Average Elapsed
Time/Transformation = ~57 milliseconds
o DefaultDynamicLoop Test Case: Total Elapsed Time = ~56.2 seconds, Average Elapsed
Time/Transformation = ~56 milliseconds
o CustomDynamicLoop Test Case: Total Elapsed Time = ~3.6 seconds, Average Elapsed
Time/Transformation = ~3.6 milliseconds
Once again, I conducted several test runs to confirm the results reported above. They clearly
demonstrate that the XslCompiledTransformHelper class is an order of magnitude faster than the default mechanism
provided by BizTalk for transforming messages (roughly 57 milliseconds versus 3.6 milliseconds per transformation,
a factor of about 16).
TransformStaticallyDefined vs. TransformReceivePipeline
The objective of this test is to compare the performance of the following test cases:
o TransformStaticallyDefined Test Case: as explained in the first part of the article, the inbound
CalculatorRequest message is transformed using the CalculatorRequestToCalculatorResponse map
declaratively configured on the DT.TransformStaticallyDefined.RP Receive Port. Once posted to the
MessageBox, the transformed CalculatorResponse message is immediately returned to the
DT.TransformStaticallyDefined.WCF-NetTcp.RL Receive Location (Loopback pattern).
o TransformReceivePipeline Test Case: the inbound CalculatorRequest message is transformed by the
TransformReceivePipeline hosted by the DT.TransformReceivePipeline.WCF-NetTcp.RL Receive
Location. In particular, the TransformPipelineComponent invokes the
XslCompiledTransformHelper.Transform static method to apply the
CalculatorRequestToCalculatorResponse map to the inbound xml document; a sketch of this
component's core follows the list. The FQDN of the map is declaratively specified in the pipeline
configuration. Once posted to the MessageBox, the transformed CalculatorResponse message is
immediately returned to the DT.TransformReceivePipeline.WCF-NetTcp.RL Receive Location
(Loopback pattern).
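As a minimal sketch, the core of such a pipeline component's Execute method might look like the code below. The interface plumbing (IComponent, IComponentUI, IPersistPropertyBag) is omitted and the property name is illustrative; the real TransformPipelineComponent ships with the attached code.

using System.IO;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public partial class TransformPipelineComponentSketch
{
    // Design-time property populated from the pipeline configuration.
    public string MapFullyQualifiedName { get; set; }

    // Core of IComponent.Execute: replace the message body with the transformed stream.
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        if (pInMsg != null && pInMsg.BodyPart != null)
        {
            Stream input = pInMsg.BodyPart.GetOriginalDataStream();
            pInMsg.BodyPart.Data = XslCompiledTransformHelper.Transform(input, MapFullyQualifiedName);
        }
        return pInMsg;
    }
}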
To generate traffic against the 2 test cases and measure performance I used the following Load Tests defined in the
UnitAndLoadTests Test Project:
o StaticMapLoadTest: this load test is based on the InvokeStaticMap unit test and can be used to
generate traffic against the TransformStaticallyDefined Use Case. The test is configured to send 1000
CalculatorRequest messages to the DT.TransformStaticallyDefined.WCF-NetTcp.RL Receive Location
using 25 different worker threads.
o DynamicMapLoadTest: this load test is based on the InvokeDynamicMap unit test and can be used to
generate traffic against the TransformReceivePipeline Use Case. The test is configured to send 1000
CalculatorRequest messages to the DT.TransformReceivePipeline.WCF-NetTcp.RL Receive Location
using 25 different worker threads.
In particular, as shown in the picture below, I created a custom Counter Set called BizTalk composed of the following
performance counters:
o Inbound Latency (sec): measures the average latency in seconds from the time the Messaging Engine
receives a document from the adapter until the time it is published to the MessageBox.
o Request-Response Latency (sec): measures the average latency in seconds from the time the
Messaging Engine receives a request document from the adapter until the time a response document
is given back to the adapter.
Specifically, the average latency measured by the Inbound Latency (sec) counter includes the time spent
transforming the message in both use cases. Obviously it also includes the time spent on other activities, such as
posting the message to the MessageBox, but it still represents a good way to compare the time the 2 test cases
spend transforming the inbound message.
I conducted several test runs to confirm results obtained. The screens below were taken, respectively, at the end of
StaticMapLoadTest and DynamicMapLoadTest:
StaticMapLoadTest Graphs & Summary
DynamicMapLoadTest Graphs & Summary
The following table reports for convenience the results highlighted in the screens above:
Test Case                                   Inbound        Request-Response   Avg Test     Tests/sec      Duration   % CPU
                                            Latency (sec)  Latency (sec)      Time (sec)   (Throughput)   (sec)      Time
StaticMapLoadTest (XslTransform)            0.41           0.90               2.55         8.77           114        65.2
DynamicMapLoadTest (XslCompiledTransform)   0.11           0.49               1.29         17.6           56         43
The difference in terms of latency and throughput between the 2 test cases is quite dramatic, and it confirms
once again that the XslCompiledTransform class is much faster than the XslTransform class natively used by BizTalk. In
our case, the adoption of the custom XslCompiledTransformHelper class allowed us to double the throughput and halve
the latency. Obviously, the performance gain can vary from case to case as it depends on many factors (inbound
message size, map complexity, etc.), but it’s quite evident that the overall performance of a BizTalk application that
makes extensive use of message transformations can be greatly improved by a helper component like the
XslCompiledTransformHelper class, which exploits the XslCompiledTransform class to compile, invoke and cache maps
for later calls.
Conclusions
As I said in the first part of the article, I started working with the product group to see how best to take advantage of
the XslCompiledTransform class in the next version of BizTalk. Nevertheless, you can immediately exploit this class in
your custom components to boost the execution of your message transformations. Therefore, I encourage you to
download my code here and repeat the tests described in this article using your own messages and maps.
Follow-Ups
I wrote another article on this subject and extended my code to support maps with multiple source documents. You
can find my post here.
I also created a custom Transform Service for the ESB Toolkit 2.0 based on the XslCompiledTransformHelper class
presented in this article. Tests demonstrated that the performance of this component is remarkably higher than
that of the original Transform Service provided out-of-the-box by the ESB Toolkit 2.0. You can read the whole story
and download the code on my Blog at [PLACEHOLDER: I still have to publish the article].
Code
Here you can download the code.