Building Your Internet Infrastructure


Transcript of Building Your Internet Infrastructure

Page 1: Building Your Internet Infrastructure

DELL POWER SOLUTIONS
ISSUE 2 2000

THE MAGAZINE FOR DIRECT ENTERPRISE SOLUTIONS

INSIDE THIS ISSUE

ISSUE 2 2000  $9.95

Building Your Internet Infrastructure

USING POWERAPP APPLIANCE SERVERS
PROVIDING B2B SOLUTIONS
UNDERSTANDING STORAGE FOR DOT-COMS

Page 2: Building Your Internet Infrastructure


EXECUTIVE NOTES

The Internet Explosion

Explosion is a good description of today's e-business environment. Worldwide Internet commerce is expected to grow at a compound annual rate of 100 percent to almost $2.5 trillion by 2003. Companies that take advantage of the opportunity will experience greater efficiency, which can lead to a significant competitive advantage. Therefore, many businesses today are looking to establish or extend their Internet presence. Dell is ready to support you.

We believe that success in the e-business environment will be determined by the strength of your Internet infrastructure, which is the foundation of your business. Therefore, Dell has launched a comprehensive business strategy—including new products, services, and strategic alliances—to help our customers thrive in the Internet marketplace. Our goal is to create new Web-related capabilities by combining our Internet expertise with our unique ability to organize resources around distinct customer needs.

For example, we recently announced new server products that are designed to power the Internet infrastructure for e-commerce. Our Dell PowerEdge 6400 and 6450 servers provide the reliability, performance, and scalability that this environment demands. In addition, we have introduced a turnkey appliance server, called the PowerApp, which is specially designed to meet the Web hosting and traffic management needs of service providers, dot-coms, and click-and-mortar companies that are creating or expanding their Internet infrastructures.

To help our customers leverage the lower cost of online interactions in today's e-business environment, Dell has planned a comprehensive, tiered e-support program to reduce technical support costs. Dell will soon introduce a new version of OpenManage Resolution Assistant, which includes integrated hardware and software diagnostics, extensive knowledge content, and self-help tools. Help desk administrators can use the product to diagnose and solve customers' problems remotely over the Internet, and to collaborate with Dell's technical support team.

Dell also offers several consulting services to help customers meet their e-business objectives. For Internet service providers, application service providers, and Web hosting companies, we recently introduced Service Provider Direct, a comprehensive package of programs including rapid server deployment, enhanced customer support, and customer co-marketing programs.

A new e-consulting arm, the Dell Expert Services Group, will manage our alliances with Gen3 Partners and Arthur Andersen. Gen3 Partners will offer strategic business development and technology consulting services to assist Dell's largest U.S. corporate customers in launching Internet-enabled enterprises. Arthur Andersen will provide similar services to medium-sized businesses seeking to use the Internet to increase operating efficiency.

Finally, Dell recently launched a new Web site to provide services to small businesses that want to launch and run their companies over the Internet. The new site, www.DellEworks.com, offers a one-stop shopping source to help customers establish an Internet presence and grow their business with online services.

Dell began its migration to the Internet in 1994, with the launch of our Web site, www.dell.com. We added online sales capabilities in 1996 and have been leading the way in Internet commerce ever since. We have built our own Internet infrastructure to integrate our operations online for greater efficiencies and to harness the Internet for better customer service and support. We want to help you do the same, and we hope the articles in this issue will help you realize the opportunities of the Internet commerce explosion.

Michael Dell
Dell Chairman and CEO


Page 3: Building Your Internet Infrastructure

Managing Editor: Eddie Ho

Art Director: Iva Frank

Designers: Mark Mastroianni, Cynthia Webb

Publication Services: The TDA Group, Four Main Street, Suite 100, Los Altos, CA 94022

Subscriptions and Address Changes: Subscriptions are free to qualified readers who complete the subscription card found in each issue. To subscribe or change your address, complete and return the business reply card in this issue or visit us at www.dell.com/powersolutions.

About Dell Computer: Dell Computer Corporation, headquartered in Round Rock, Texas, near Austin, is the world's leading direct computer systems company. Dell is the number 2 and fastest growing among all major computer systems companies worldwide, with more than 26,100 employees around the globe. Dell uses the direct business model to sell its high-performance computer systems, and workstation and storage products, to all types of enterprises. For more information, please visit our Web site at www.dell.com.


Dell Power Solutions is published quarterly by Enterprise Systems Group, Dell Computer Corporation, One Dell Way, Round Rock, Texas 78682. This publication is also available online at www.dell.com/powersolutions. No part of this publication may be reprinted or otherwise reproduced without permission from the editor. Dell does not provide any warranty as to the accuracy of any information provided through Dell Power Solutions. The information in this publication is subject to change without notice. Any reliance by the end user on the information contained herein is at the end user's risk. Dell will not be liable for information in any way, including but not limited to, its accuracy or completeness.

© Dell Computer Corporation. All rights reserved. Printed in the U.S.A.

June 2000

TABLE OF CONTENTS

Executive Notes
  1   The Internet Explosion, by Michael Dell

Editor's Comments
  3   Power Solutions Now Online!, by Eddie Ho

Internet Environment
  4   Scalable, Available Web Sites with PowerApp Appliance Servers, by Darrel Ward
  9   Building a Highly Available E-Commerce Site with Windows NT, by Chong Lee
  14  LoanGiant.com Fine-Tunes Web-based Solution at Dell ASC, by Paul Del Vecchio
  20  Enterprise Order-Entry Meets the Internet, by Dustin Hicks, Dan Holling, and Kirsten Nothstine
  23  Providing Business-to-Business Solutions with mySAP.com, by Anand Vridhagiri
  30  Accelerate Your Web Server with Dell PowerApp.cache, by Joe Huang
  34  Structuring Efficient XML: The Organization of Data, by Paul Laster
  37  Internet Protocol Security Revealed, by Rich Hernandez
  44  Understanding Storage in Today's Data-Driven Dot-Com Economy, by Eric Burgener and Thea Hayden

OS Environment
  49  Real-Time Data Protection with NSI Software Double-Take, by Andrew Thibault
  54  Server Consolidation: An IT Perspective, by Rommel Mercado
  57  Building a Scalable, Highly Available Novell Cluster Environment, by Richard Lang
  63  Custom NetWare Server Installations with DellPlus, by Rod Gallagher
  67  Choosing the Right Ingredients: A Recipe for Success, by Terry Shagman

High Performance Computing
  71  The Lion-X PC Cluster from Penn State
  75  High-Performance Computing with Beowulf Clusters, by Jenwei Hsieh, Ph.D.

Enterprise Management
  80  Tivoli Storage Management Solutions for E-Business, by Ron Riffe
  83  Management Console: Your First Step to Enterprise Systems Management, by Dana Lloyd

Knowledge Management
  87  Highly Available Forms and Reports Applications with Oracle Fail Safe Release 3.0, by Laurence Clarke

Page 4: Building Your Internet Infrastructure


EDITOR'S COMMENTS

Power Solutions Now Online!

You can now read Power Solutions—anytime, anywhere—by visiting www.dell.com/powersolutions at your convenience. Whether you need specific information or you just want to browse the issues, you can find it at your fingertips. You can also request a subscription, find information about advertising in the printed issue, or download writing guidelines for authors. But most importantly, you can send us any feedback that you would like to share about the journal. Please check it out and let us know what you think.

Thanks to your support over the past 12 months, we have increased our readership to over 12,000 worldwide. We have completed four issues with over 330 pages of technical information on topics such as the OS environment, enterprise system management, Internet infrastructure, knowledge management, and more.

Our primary goal has always been to be your IT companion. We want to provide you with best-practices and case-study information from your peers and colleagues. The IT environment of today is becoming increasingly complicated and difficult for a single team to master. So we hope you view Power Solutions as your forum from Dell, supported by your colleagues.

E-business extends the boundaries of the business enterprise to the new frontier. In this issue we focus on the Internet environment and the dot-coms. Dell has launched a comprehensive business strategy with new products, services, and strategic alliances for the e-business environment. This issue will help you understand and use PowerApp products, construct scalable environments, consolidate your server farm, manage your IT environment more effectively, and better understand storage practices in today's demanding business environments.

This issue also continues its coverage of high-performance computing (HPC), another computing environment waiting for commercial exploitation. It features the PowerEdge Linux cluster complex from Penn State University as well as the first in a series on HPC with Beowulf clusters, by Dr. Hsieh from Dell's Cluster Development Group.

We hope you enjoy the issue. Be sure to drop me a note if you achieve a major milestone on a long-term project; I am sure our readers would love to get a snapshot of your experience.

Eddie Ho
Managing Editor

Page 5: Building Your Internet Infrastructure


INTERNET ENVIRONMENT

Scalable, Available Web Sites with PowerApp Appliance Servers

By Darrel Ward

To be competitive in the World Wide Web marketplace, service providers, enterprises, or any organization with a Web presence must use cost-efficient, simple solutions to keep up with the dynamic nature of this new economy. Appliance servers are emerging as a viable alternative to traditional, general-purpose hardware/software solutions because of their ability to be optimized for certain tasks. Dell created its new PowerApp™ line of appliance servers to help address the need for affordable, easy-to-deploy-and-administer dedicated devices that can be plugged into any Internet/intranet infrastructure to do certain tasks, and do them very well.

Any large Web site, whether it is the intranet of a large organization, a critical component of an e-commerce solution, or an online presence of a traditional brick-and-mortar business, faces uptime and performance demands that require scalable, manageable, and cost-effective solutions. Administrators have traditionally turned to large, internally scalable, internally available symmetric multiprocessing (SMP) systems to meet these demands. SMP systems can be very powerful; however, they require a substantial initial investment to purchase, and technological expertise to install, maintain, and upgrade.

Additionally, in SMP systems, the CPUs share every resource, specifically the bus, network cards, memory, I/O systems, operating system, and applications. As the number of processors in an SMP system increases for more power, the amount of traffic through those resources increases, potentially creating a bottleneck that limits system throughput and, consequently, overall system performance.

PowerApp.web Solutions Are a Simpler Alternative to SMPs

Multiple affordable, optimized Web servers, configured in a load-balanced cluster, on the other hand, offer a solution that is simple to deploy and administer. Scaling out in such thin slices is also a less expensive, more flexible alternative to SMPs. Dell PowerApp.web appliance servers are uniquely designed to help you scale out your Web site in this manner. The PowerApp.web 100 is an ultra rack-dense, 1U platform with a preconfigured and optimized Windows® 2000/Internet Information Server (IIS) 5.0 or Red Hat® Linux®/Apache Web server.

PowerApp.web is the logical solution for service providers or any Internet-centric organization that requires the ability to deploy additional resources on short notice and perhaps for a short period of time.

For example, suppose a new dot-com company chooses to invest in ads during the Super Bowl to drive traffic to its site.

Page 6: Building Your Internet Infrastructure


The company must scale resources quickly and exponentially to support the rush of traffic in the days or weeks following the Super Bowl. With traditional SMP servers, this undertaking would incur considerable costs and could require weeks of preparation. However, multiple PowerApp.web units could be purchased relatively inexpensively and deployed in a fraction of the time. After the traffic peaks and begins to normalize, the PowerApp.web systems could be taken out (and redeployed when needed) gradually in thin increments, rather than removing a large chunk of capacity at once.

Dell has designed two innovative tools to make all PowerApp solutions simple to deploy and administer. The first, PowerApp Kick-Start, is a Java™-based wizard that walks the user through a few simple network questions to establish, for example, IP address, default gateway, and system name. In minutes, any administrator can set up a PowerApp solution on the network and have it accessible from anywhere. Figure 1 shows how configuration information is entered into PowerApp Kick-Start.

The second feature is PowerApp Admin Tool, an integrated graphical interface that aggregates all system and Web server administration tools into one easy-to-use console. A user with little or no Windows 2000 or Linux expertise will find the interface, shown in Figure 2, intuitive, and experienced Web administrators will find PowerApp Admin Tool a welcome timesaving feature. Furthermore, administrators can use either the native Windows 2000 or Red Hat Linux user interface, instead of the PowerApp Admin Tool interface, to accomplish tasks, if they prefer.

The bottom line is that PowerApp.web delivers optimized functionality with groundbreaking ease of use—while maintaining flexibility.

Load Balancing Makes PowerApp.web the Logical Solution

The inclusion of intelligent load balancing as part of a strategy to deploy multiple thin Web server configurations is a cost-effective way to achieve a high degree of scalability, maximum Web site performance, and consistent service levels. Load balancing offers the ability to scale out, quickly and dynamically, in inexpensive increments rather than in large costly chunks.

By load balancing PowerApp.web appliance servers, the work is distributed to the server within the cluster that has the most available resources. Each PowerApp.web has its own optimized operating system, application, and hardware resources from which to draw, minimizing share-based performance degradation. Workloads can even be distributed among multiple clusters, thus increasing overall system performance. With built-in load-balancing software, such as Network Load Balancing (NLB) in Windows 2000 Advanced Server, network administrators have the flexibility to achieve redundancy and support growth at no additional cost. Alternatively, there are stand-alone devices designed to perform complex load balancing all the way up to the application layer for maximum availability and performance for mission-critical Web sites and e-commerce implementations.
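The selection rule just described, route each new request to the cluster member with the most available resources, can be sketched in a few lines of Python. This is a conceptual illustration only, not the algorithm NLB or any Dell product actually ships; the node names and capacity figures are invented:

```python
from dataclasses import dataclass

@dataclass
class WebNode:
    """A hypothetical PowerApp.web cluster member."""
    name: str
    capacity: int      # concurrent requests the node can serve
    active: int = 0    # requests currently in flight

    @property
    def available(self) -> int:
        return self.capacity - self.active

def pick_node(nodes: list[WebNode]) -> WebNode:
    """Route the request to the node with the most available resources."""
    best = max(nodes, key=lambda n: n.available)
    best.active += 1
    return best

# Example: three identical 1U appliances in one load-balanced cluster.
cluster = [WebNode("web-1", 100), WebNode("web-2", 100), WebNode("web-3", 100)]
for _ in range(5):
    print(pick_node(cluster).name)   # requests rotate to the least-loaded node
```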

Figure 1. Entering Configuration Information into PowerApp Kick-Start

Figure 2. The PowerApp Admin Tool Interface

High Availability Keeps Visitors at the Site

While captivating content and flashy graphics may grab the attention of Web site visitors, it is the underlying network services that keep buyers and visitors coming back.


Page 7: Building Your Internet Infrastructure


In an environment where a five-second delay in page delivery can cost you a customer, the hallmarks of any successful e-commerce Web site are crisp response times, reliable service, and consistent user experience.

A plethora of products are available today that help traffic move effectively around a Web site. These products fall into two basic categories:

• Those that know how to balance the load across Web servers
• Those that cache repetitive Web-based data

Manage Traffic Using Network Load Balancing

Available on Windows 2000 Advanced Server, NLB is an integral piece of the Windows 2000-based PowerApp.web 100. NLB monitors the status of the Web server and, if necessary, can redirect traffic to other, more available Web servers in the NLB cluster. NLB supports clustering of up to 32 nodes and works to evenly distribute incoming traffic while also monitoring Web server and network interface card (NIC) health—while consuming negligible overhead.

NLB on PowerApp.web introduces the concept of software scaling, or scaling out, where administrators can add capacity to their server farms by simply plugging in additional NLB-configured PowerApp.web systems as needed. The benefits of simple, incremental scalability combined with high availability make the NLB-configured PowerApp.web 100 ideal for use with business-critical Web hosting and as a front end for large e-commerce sites.

Additional features of NLB include full backward compatibility with Windows NT® Load Balancing Service (WLBS) running on Windows NT Server 4.0. This allows PowerApp.web to be integrated with existing Web farms. You can also remotely start, stop, and control NLB actions from any networked Windows 2000 or Windows NT operating system using console commands or scripts. In addition, server applications such as IIS require no modification to run in an NLB cluster, and most operations, including recovery, require no human intervention.

From the client perspective, the cluster operates like a single computer with one logical Internet name and IP address. Yet NLB retains individual names for each computer and can therefore dynamically detect and redistribute the network load when the cluster set changes—for example, when adding additional nodes to account for traffic spikes. By the same token, NLB automatically detects and recovers the network from a failed or off-line computer, so individual PowerApp.web systems can be taken off-line for maintenance or repair without disturbing cluster operations.

At a more granular level, NLB allows PowerApp.web systems to balance requests for individual TCP/IP services, such as HyperText Transport Protocol (HTTP) or File Transfer Protocol (FTP), across the cluster. Additionally, PowerApp.web systems support specification of the load balancing for a single IP port, such as port 80 for Web traffic, using straightforward port management rules that tailor the workload for each PowerApp appliance.
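As a rough illustration of per-port rules, the sketch below models two policies: one port balanced across the cluster, another pinned to a single node. The rule set and the hash-based dispatch are invented for illustration, not NLB's actual data structures or algorithm:

```python
# A toy model of per-port load-balancing rules in the spirit of the NLB
# port rules described above.

PORT_RULES = {
    80: {"mode": "balanced"},                   # Web traffic: spread across all nodes
    21: {"mode": "single", "owner": "web-1"},   # FTP: pinned to one node
}

def toy_hash(client_ip: str) -> int:
    # Deterministic toy hash so every node computes the same answer.
    return sum(ord(c) for c in client_ip)

def dispatch(port: int, client_ip: str, nodes: list[str]) -> str:
    rule = PORT_RULES.get(port, {"mode": "balanced"})
    if rule["mode"] == "single":
        return rule["owner"]
    return nodes[toy_hash(client_ip) % len(nodes)]

nodes = ["web-1", "web-2", "web-3"]
print(dispatch(80, "203.0.113.7", nodes))   # balanced across the cluster
print(dispatch(21, "203.0.113.7", nodes))   # always handled by web-1
```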

Manage Traffic with Load-Balancing Appliances

Load-balancing appliances, in their simplest form, are dedicated devices acting as traffic cops that police HTTP, FTP, or other incoming traffic destined for PowerApp.web appliance servers or other application servers. These load balancers intercept traffic before it reaches the servers and determine which back-end server is best suited to provide optimal performance and the fastest response time to requesting users.

Functionally, a load balancer consists of a PC server-based platform with dual high-speed network adapters: one connects to an Ethernet switch that front-ends the Web servers, and the other links to the Internet feed in a router or other device. At first glance, PC server-based load balancers may seem out of place in enterprise or high-end ISP networks. However, tests show that this is not the case. Because these platforms support Fast Ethernet traffic on the I/O side, they can safely handle traffic from multiple T3s.

Load Balancers Ensure Quality of Service

Load-balancing appliances track a variety of health statistics on connected PowerApp.web appliances or other servers to determine which device carries the smallest load and offers the optimal response time to handle the transaction at stake. Load balancers have many methods of making routing decisions, from using relatively simplistic ping commands to measure server response time, to monitoring application status at layer 7, the application layer.



Page 8: Building Your Internet Infrastructure


Layer 7-aware solutions monitor site content responsiveness and correctness from a client perspective to ensure quality of service.

Load balancers can not only verify that the right content is being sent to the client; they can also open packets of data coming across the wire and make load-balancing decisions based on that information—regardless of IP address or port number. By supporting the capability to examine packets at layer 7 and determine the intent of the data, load balancers can make more logical routing decisions.

Take port 80, for example: Generically, it is used for Web or HTTP traffic, but many other types of data flow through this port. Therefore, it is very useful to know whether a request to an IP address at port 80 is a simple Web page request or an e-commerce transaction in progress that requires a higher priority. Application-aware load-balancing appliances can make this determination.

Part of the functionality of an application-aware load-balancing appliance is to guarantee that different types of traffic can be assigned different priority levels. Instead of relying on routing gear to identify traffic through differentiated services (diff-serv), Common Open Policy Service (COPS), or other quality-of-service protocols, the layer 7 appliance can sift through the traffic and assign priorities itself. This eliminates the need for expensive network ports, which have comparatively less functionality.
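A toy version of this idea, classify each request by inspecting its HTTP request line and then serve higher-priority traffic first, might look like the following. The URL patterns and priority levels are invented; a real appliance applies operator-defined policies at wire speed:

```python
# Illustrative layer 7 classification: inspect the HTTP request line and
# assign a priority before picking a back-end server.

def classify(request_line: str) -> int:
    """Return a priority; lower numbers are served first."""
    method, path, version = request_line.split()
    if path.startswith(("/checkout", "/cart")):
        return 0   # e-commerce transaction in progress
    if path.endswith((".gif", ".jpg", ".css")):
        return 2   # static decoration can wait
    return 1       # ordinary page request

requests = [
    "GET /logo.gif HTTP/1.0",
    "GET /index.html HTTP/1.0",
    "GET /checkout/submit HTTP/1.0",
]
for line in sorted(requests, key=classify):
    print(line)   # checkout first, plain pages next, images last
```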

Load Balancers Work at Wire Speed

The final benefit that these dedicated load-balancing appliances offer is that there is no trade-off between intelligence and speed. With application-aware technology, the load balancers make decisions at wire speed, so there is no degradation of performance and Web site visitors have the highest quality experience.

To implement this load-balancing appliance technology, simply place these devices as the front end to your PowerApp.web appliance servers and PowerEdge® application servers. For redundancy and availability, you should deploy them in redundant pairs. These systems will monitor all incoming traffic and make intelligent decisions on the destination of each packet, optimizing the performance of the aggregated Web site as a whole.

Web Caching in Reverse Proxy Accelerates Delivery

Web caching speeds up Web content delivery by taking advantage of the fact that a group of users will want the same information repeatedly from a Web server—especially home pages and the other pages high in the file structure of any Web site. Because so much of this content is static, it is inefficient to burden the Web servers to retrieve fresh copies of this static information.

Dell PowerApp.cache, therefore, uses reverse proxy, a Web server accelerator. It acts as a proxy cache—in this case, not for a group of browser users, but for one or more specific origin Web servers. The key to Web server acceleration, or reverse proxy, lies in the fact that over 95 percent of the objects requested from a Web site are static objects, such as HTML pages and graphics. The remaining requests are for non-cacheable objects, such as output from CGI bin programs.

To implement this caching technology, simply place Dell PowerApp.cache appliance servers as the front end to a farm of PowerApp.web appliance servers. Once again, for redundancy and availability, you should deploy them in a cache cluster of two or more appliances. This proxy caching appliance then pretends to be the Web server, and browsers connect to it instead of directly to the PowerApp.web appliance servers in the server farm. The bulk of the Web service workload is thus off-loaded to the cache. The Dell PowerApp.cache appliance server stores the most frequently requested Web objects in RAM, enabling it to respond extremely quickly. Non-cacheable requests are passed through to the PowerApp.web farm—most of the time as fast as or faster than if the browser were directly connected to the farm.
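The reverse-proxy behavior just described reduces to a simple loop: serve static objects from memory, pass everything else through to the origin. The sketch below is a minimal model under that assumption; fetch_from_origin() is a hypothetical stand-in for the real HTTP request to a PowerApp.web server, and the suffix test is a crude substitute for real cacheability rules:

```python
# A minimal reverse-proxy cache model: static objects are served from an
# in-memory store; non-cacheable requests pass through to the origin
# farm every time.

CACHEABLE_SUFFIXES = (".html", ".gif", ".jpg", ".css")
cache: dict[str, bytes] = {}

def fetch_from_origin(path: str) -> bytes:
    return f"<content of {path}>".encode()   # placeholder for the farm's reply

def handle(path: str) -> bytes:
    if not path.endswith(CACHEABLE_SUFFIXES):
        return fetch_from_origin(path)         # e.g., CGI output: never cached
    if path not in cache:
        cache[path] = fetch_from_origin(path)  # first request fills the cache
    return cache[path]                         # later requests are served from RAM

handle("/index.html")            # miss: fetched from the farm
print(handle("/index.html"))     # hit: served from memory
print(handle("/cgi-bin/quote"))  # non-cacheable: passed through each time
```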

Putting It All Together for a Quality Web Presence

PowerApp.web appliance servers using a choice of load-balancing technology and PowerApp.cache appliance servers can provide much-needed tools to easily and cost-effectively build high-performance, highly available Web sites. You will be able to optimize performance by using multiple kinds of these devices in the same deployment.

PowerApp.cache placed in front of NLB-enabled PowerApp.web appliance servers can further reduce the demands on Web servers by caching commonly requested pages or Web page elements. In addition, PowerApp.cache installed just outside the load-balancing appliances, which are installed in front of PowerApp.web server farms, will help off-load repetitive traffic from the entire infrastructure behind them, thus fully optimizing the architecture.


Page 9: Building Your Internet Infrastructure


Figure 3. Infrastructure of a PowerApp Appliance Deployment (diagram: end users with browsers reach the site through the Internet/intranet and a router; the PowerApp appliances, comprising PowerApp.cache appliance servers, load-balancing appliances, and PowerApp.web appliance servers, front the PowerEdge application servers, PowerEdge database servers, and PowerVault storage)

Figure 3 shows the infrastructure of a PowerApp appliance deployment.

Furthermore, blending PowerApp appliance servers, PowerEdge servers, and PowerVault® storage will groom response times, improve the reliability of service, and offer visitors to your Web site a uniform experience time after time. Utilizing all of these technologies together will maximize your company's ability to present a quality Web presence.

Darrel Ward ([email protected]) is the product marketing manager for Internet Server products in Dell's Enterprise Systems Group. Formerly from Bay Networks™ and Nortel® Networks, Darrel has over 10 years of experience in the high-tech industry managing various products and programs in data networking and Internet technologies. Darrel has an M.B.A. from The University of Texas at Austin. He is also a Microsoft Certified Systems Engineer (MCSE).

Page 10: Building Your Internet Infrastructure


INTERNET ENVIRONMENT

Building a Highly Available E-Commerce Site with Windows NT

By Chong Lee

Internet customers who turn away in frustration from a slow or unavailable Web site may never return. High availability is therefore a critical business enabler for today's e-commerce Web sites. This article describes a simple method for using Microsoft products to configure a highly available Web site.

The Internet has completely changed the way we do business. A company no longer has to rent space, find the best location, or even carry inventory. Any company can offer its products and services online with relative ease. Along with new companies, every established business will inevitably create or maintain an Internet presence to leverage its well-known brand name to capture a share of the e-commerce market.

Yet, the Internet is the ultimate customer-empowering environment: The people who click a mouse make all the decisions. They can easily go elsewhere when all the competitors in the world are a mouse-click away. Poorly designed Web sites or broken links confuse and frustrate customers, eventually driving them away.

Rule #1: The Store Must Be Open for Business

Despite all the differences between online and traditional businesses, one important rule never changes: The store has to be open before it can do business. The recent holiday shopping season showed what happens when this rule is ignored. Many cybershops invested heavily in television and radio advertising spots, but neglected to enhance their back-end operations. As a consequence, slow performance or even crashed sites confronted potential customers.

Typically, an e-commerce Web site can be divided into three tiers: data, business, and user services (see Figure 1). This article focuses on using Windows Load Balancing Service (WLBS), Microsoft® Cluster Server, and Microsoft SQL Server to create a high-availability architecture for an e-commerce-enabled Web site.

First Tier of an E-Commerce Web Site: User Services

E-commerce customers typically access vendor applications through a Web browser. Thus, a good candidate for hosting your Web site is Microsoft Internet Information Server (IIS) 4.0, part of the Windows NT 4.0 Option Pack, which integrates well with the Windows NT infrastructure. Also, be sure to test the latest Windows NT Service Pack (Version 6a in late April 2000) and apply it to all your Web servers, because the Service Packs are known to boost the performance of IIS. In addition, visiting Microsoft Internet security bulletins can keep you up-to-date on how to protect Web servers.


Page 11: Building Your Internet Infrastructure



The first step in ensuring the availability of a Web site is to host it on multiple servers so that if one server fails, others will be available to serve customers. Windows Load Balancing Service can not only automatically redirect customers to the remaining healthy servers (referred to as nodes) in case of failure, but also dynamically distribute incoming client traffic among a cluster of servers in routine operations. Any holder of a license for Windows NT 4.0 Server, Enterprise Edition, can download WLBS free of charge from Microsoft's Web site at http://www.microsoft.com/ntserver/nts/downloads/winfeatures/wlbs/wlbssitesx86.asp.

Scaling Out with WLBS

As the number of hits on your Web site increases, you can simply plug another computer running WLBS into the cluster (up to 32 nodes) to share the load. This type of implementation is called horizontal scaling, or scaling out. You can use any industry-standard computer because WLBS has no proprietary hardware requirements. Thus, WLBS offers a significant cost advantage compared to proprietary hardware-based load-balancing solutions.

WLBS also eliminates a single point of failure because it runs in parallel on all nodes in the cluster. Most hardware-based load-balancing solutions require two hardware units to achieve the same protection against failure, with the second equally expensive hardware component operating in a passive mode.

Installing Windows Load Balancing Service

To install WLBS, open Network in the Control Panel, then add a new network adapter. When asked for the driver, point to the folder that contains the expanded file downloaded from the Microsoft Web site. Double-click on Windows Load Balancing Service and then click OK. A screen similar to the one shown in Figure 2 will appear.

The key to configuring WLBS is identifying several important parameters for the cluster and this particular node. Figure 3 lists the parameters.

Once it is installed and configured, WLBS operates transparently to ensure a highly available Web site. Customers can access the cluster of servers as if it were a single computer by using the virtual IP address.

Under normal operations, WLBS automatically balances the networking traffic between the clustered computers, scaling the performance of one server to the user-specified level. When a computer fails or goes off-line, WLBS automatically reconfigures the cluster to direct the customer connections to the remaining nodes. When the off-line computer has been repaired or updated, it can rejoin the cluster and recapture its share of the workload.

Figure 1. Traditional Three-Tier Architecture for E-Commerce Web Sites (diagram: user services, business services, and data services tiers, realized as the Web front end, business-logic applications, and the database)

Figure 2. Windows Load Balancing Service Setup Screen

PARAMETER             PURPOSE
Primary IP Address    Virtual Internet Protocol (IP) address that clients will access
Full Internet Name    Fully qualified domain name that translates to the virtual IP address
Priority ID           Specifies this node's unique traffic-handling priority for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) ports not managed by the Port Rules section of the WLBS Setup dialog box; this parameter is used in case a node within the cluster fails
Dedicated IP Address  Specifies this node's unique IP address, used for network traffic not associated with the cluster (for example, Telnet access or copying files to a host within the cluster)
Affinity              WLBS can force requests from the same computer (single) or a whole class-C IP range to access the same node as long as the node is active
Load Percentage       If the WLBS cluster includes servers with different configurations, a higher percentage of the load can be assigned to the more powerful server

Figure 3. Windows Load Balancing Service Parameters

Page 12: Building Your Internet Infrastructure


WLBS uses a distributed algorithm for statistical mapping of the workload between the nodes of the cluster. Since this algorithm consumes part of the network bandwidth, you should generally implement WLBS with less than 8 to 12 nodes in a particular network segment to avoid saturating the network. In addition, you must modify the order of bindings under Network Properties to enable these servers to respond correctly to WLBS traffic or requests made to a particular node.
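The distributed mapping idea can be caricatured in a few lines: every node applies the same mapping to the client address, so each one can decide on its own which connections it owns, and a membership change automatically re-divides the work. The real WLBS algorithm is considerably more refined; this sketch only conveys the principle, and the node names and client address are invented:

```python
# A rough model of the principle behind WLBS's distributed algorithm:
# no central dispatcher is needed because every node computes the same
# owner for a given client.

def toy_hash(client_ip: str) -> int:
    # Deterministic toy hash; a real implementation uses something stronger.
    return sum(ord(c) for c in client_ip)

def owner(client_ip: str, members: list[str]) -> str:
    return members[toy_hash(client_ip) % len(members)]

members = ["NODE1", "NODE2", "NODE3"]
print(owner("198.51.100.9", members))   # one node claims this client

members.remove("NODE2")                 # NODE2 fails or goes off-line
print(owner("198.51.100.9", members))   # the survivors re-divide the work
```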

WLBS is not the only product that enables horizontal scaling. Hardware-based load balancers include Cisco's Local Director, BIG-IP from F5 Networks™, and other highly effective products. WLBS on the Windows NT platform, however, is relatively simple to configure and manage, and it is available free of charge with licensing of Windows NT 4.0, Enterprise Edition.

Second Tier of an E-Commerce Web Site: Business Services

Business logic and data validation components reside in the middle tier. With this three-tier architecture, you can update your business logic at the middle tier without modifying the front-end user interface or the back-end database. Although the initial development might be complex, the benefits of implementing a three-tier architecture should outweigh the extra effort required to develop scalable and reliable applications.

Microsoft Transaction Server (MTS) 2.0, part of the Windows NT 4.0 Option Pack, is designed to handle the components of the second tier. It not only monitors and coordinates components, but also creates and shares them when needed and eliminates them when they are not being used.

MTS also provides database connection pooling, which prevents customers from overloading the back-end database by creating a finite number of connections shared among the clients.
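Connection pooling is easy to picture as a fixed set of reusable handles guarded by a queue. The sketch below is a toy model of the concept, not MTS's implementation; the Connection class and pool size are invented stand-ins:

```python
import queue

# A toy connection pool: a fixed number of connections shared among all
# clients, so the database never sees more than POOL_SIZE concurrent
# connections.

POOL_SIZE = 4

class Connection:
    def __init__(self, n: int) -> None:
        self.name = f"conn-{n}"

pool: "queue.Queue[Connection]" = queue.Queue()
for i in range(POOL_SIZE):
    pool.put(Connection(i))

def with_connection(work) -> None:
    conn = pool.get()      # blocks if all connections are in use
    try:
        work(conn)
    finally:
        pool.put(conn)     # return the connection for the next client

with_connection(lambda c: print(f"query ran on {c.name}"))
```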

Furthermore, you can export your application configurations into a stand-alone automatic setup program, which is activated only once at each client to call remote MTS components and run applications. Figure 4 shows a typical MTS application.

Banking on the Protection of Microsoft Transaction Server

A classic example of the importance of MTS is a common banking activity. Transferring money from your savings account to your checking account involves two separate transactions:

• Debit the amount from your savings account
• Credit the same amount to your checking account

If the first transaction executes successfully but the second fails to connect to the checking account database, the MSDTC (Microsoft Distributed Transaction Coordinator) component in MTS will roll back the first transaction; that is, it will credit the amount back to the savings account to prevent the loss of your money. Without the coordination of MTS, all activities that involve multiple steps would pose serious risks.
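Restated as code, the two-step transfer with compensating rollback looks like the sketch below. It imitates the outcome MSDTC guarantees (all steps or none) rather than its actual two-phase commit protocol; the balances and the simulated failure are invented for illustration:

```python
# The banking example above as a sketch of coordinated rollback.

accounts = {"savings": 500.0, "checking": 100.0}

def transfer(amount: float, fail_second_step: bool = False) -> None:
    accounts["savings"] -= amount          # step 1: debit savings
    try:
        if fail_second_step:
            raise ConnectionError("checking account database unreachable")
        accounts["checking"] += amount     # step 2: credit checking
    except ConnectionError:
        accounts["savings"] += amount      # compensate: roll back step 1
        print("transfer rolled back; no money lost")

transfer(200.0, fail_second_step=True)
print(accounts)   # balances unchanged: {'savings': 500.0, 'checking': 100.0}
```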

Reducing Application Development Time

Developers who take advantage of MTS can avoid developing basic plumbing services because MTS provides the necessary infrastructure. Therefore, MTS can reduce the development time needed to build applications in a three-tier model. With MTS, developers can concentrate on improving the business logic of the system.

Currently MTS supports only static load balancing, and you must manually specify the server to which your client will connect. MTS also does not support failover; thus, if one server fails, you must manually reroute clients to a different server.

Figure 4. A Typical Microsoft Transaction Server Application (diagram: a Web interface on IIS/ASP receives HTTP requests and calls objects hosted in Microsoft Transaction Server over DCOM; MTS reaches the database through OLE DB/ODBC)


Page 13: Building Your Internet Infrastructure


The next release of MTS, however, will support dynamic load balancing and failover, as well as incorporate a new management interface with performance monitoring features.

To learn more about programming with MTS, consult Database Workshop: Microsoft Transaction Server 2.0, published by SAMS Publishing (ISBN 0-672-31130-5).

Third Tier of an E-Commerce Web Site: Data Services

Microsoft SQL Server 7.0 has been a widely accepted relational database management system (RDBMS) since its release in 1998. It integrates well with Windows NT, various Microsoft applications, and numerous development tools. In recent tests on a Dell eight-way PowerEdge 8450 Server, SQL Server 7.0 Enterprise Edition smashed all previous Transaction Processing Performance Council Benchmark C (TPC-C®) price/performance results, ranking first among RDBMS servers.

An eight-way server with 40,168 transactions per minute (tpmC®) has never before finished in the top 10 in tests in the RDBMS server category. In addition, Dell's PowerEdge 8450, running SQL Server 7.0 Enterprise Edition and Windows NT 4.0 Enterprise Edition, now leads all other systems in price/performance at $14.86/tpmC. See http://www.microsoft.com/sql for additional information about these test results.

The Enterprise Edition of SQL Server 7.0 is Microsoft Cluster Server (MSCS) aware. Therefore, a highly available database solution is easy to build on this platform. To implement MSCS, install a shared storage system with two servers, as shown in Figure 5. Depending on database requirements, choose either a Dell PowerVault 201S (SCSI-based) or PowerVault 650F (Fibre Channel-based) storage system.

Clustering the Microsoft SQL Server computers ensures that data services will not be interrupted if one of the clustered servers fails. For example, if the processor fails in a cluster node, the other server in the cluster will bring up the SQL services with the same IP address and NetBIOS name. Applications connected to the failed server will experience little or no interruption in service, depending on how the applications are written. As long as sufficient error handling is built into the application, the failover (the process of moving all services from the failed node to the healthy node) can be transparent to other servers that need to connect to the virtual SQL Server.
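In practice, "sufficient error handling" usually means a reconnect-and-retry loop around database calls: the client keeps addressing the same virtual server name and simply rides out the failover window. The sketch below illustrates the pattern; connect() is a hypothetical stub standing in for a real database driver, with the brief outage simulated at random:

```python
import random
import time

def connect(virtual_name: str) -> str:
    """Hypothetical stand-in for a database driver. During a failover the
    virtual server briefly refuses connections, simulated here at random."""
    if random.random() < 0.3:
        raise ConnectionError(f"{virtual_name} is failing over")
    return virtual_name

def run_with_retry(query: str, retries: int = 5, delay: float = 0.1) -> str:
    for attempt in range(retries):
        try:
            conn = connect("VSQL1")   # same virtual name before and after failover
            return f"ran {query!r} on {conn}"
        except ConnectionError:
            time.sleep(delay)         # wait out the failover window, then retry
    raise RuntimeError("database unavailable after failover window")

print(run_with_retry("SELECT 1"))
```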

Configuring the Server Cluster

Clustering with SQL Server on MSCS follows one of two models: Active/Passive or Active/Active. An Active/Passive configuration consists of one node as a backup server and one virtual SQL Server running on the active node.

In an Active/Active configuration, each node hosts an active virtual SQL Server. If one node fails, the other actually hosts two virtual SQL Servers. This arrangement depends on server hardware with sufficient processing power to handle the added load.

Note that the two virtual SQL Servers in an Active/Active cluster are actually two individual SQL Servers. They have their own databases and do not share any data by default. Replication can be set up between them if needed. In a common example configuration, one virtual SQL Server handles the production database and the other hosts a replica used for reporting purposes. Figure 6 shows a virtual SQL Server as viewed in Cluster Administrator.

Figures 7 and 8 show the requirements for implementing a basic SQL Server cluster. All requirements related to the second virtual SQL Server are applicable only to an Active/Active cluster model. It is best to plan your configuration before purchasing any hardware to ensure the equipment can handle your needs.

Figure 5. Microsoft Cluster Server Overview (diagram: Node A and Node B, each running SQL Server 7.0, share disks and exchange a heartbeat while serving servers and workstations on the LAN)


Page 14: Building Your Internet Infrastructure


Installing Microsoft Cluster Server

To install MSCS, run the SETUP.EXE located in the MSCS\Cluster\i386 folder on the second CD of Windows NT 4.0 Server, Enterprise Edition. The installation process prompts you to confirm the parameters that the MSCS setup has detected; in most cases, the default options are correct. After MSCS is installed and its functionality verified, you can install SQL Server 7.0, Enterprise Edition.

Note: Visit http://support.microsoft.com/support/SQL/Content/70Papers/70clstr.asp?LN=EN-US&SD=gn&FR=0. This 70-page document on the Microsoft TechNet Web site explains how to perform the installation. SQL Server 7.0, Enterprise Edition, has a Failover Cluster Wizard to assist in entering the information you have gathered for the cluster implementation. Remember to apply the SQL Service Pack before running the Failover Cluster Wizard; if you do not, you will have to uncluster SQL, apply the SQL Service Pack, and then recluster SQL.

Competitors Are Just a Click Away

Although being the first Internet site in a given market segment has a clear advantage, Internet users appreciate sites that are fast to load and easy to navigate, have no broken links, and are always available.

In the Internet world, competitors are just one click away, and users who turn to a competitor may never come back. This article has described how to use Microsoft products to architect a highly available Web site to showcase your products. Using these technologies, you can concentrate on designing the best user interface and creating products that everyone wants to buy.

Chong Lee ([email protected]) is a senior e-infrastructure consultant in the Dell Technology Consulting (DTC) Group. He is a Microsoft Certified Systems Engineer + Internet (MCSE+I), Microsoft Certified Database Administrator (MCDBA), and Microsoft Certified Trainer (MCT).

DISK RESOURCE (SHARED STORAGE)   RAID LEVEL   PURPOSE
V:                               RAID 1       First virtual SQL Server EXE and logs
W:                               RAID 1       Second virtual SQL Server EXE and logs
X:                               RAID 5       First virtual SQL Server databases
Y:                               RAID 5       Second virtual SQL Server databases
Z:                               RAID 1       MSCS quorum/virtual Microsoft Distributed Transaction Coordinator (MSDTC)

Figure 7. Physical Disk Resource Requirements for an SQL Server Cluster

Figure 6. Cluster Administrator Screen Sample

NETBIOS NAME   IP ADDRESS    DESCRIPTION
NODEA          192.168.0.1   Primary node
NODEB          192.168.0.2   Secondary node
VCLUSTER       192.168.0.3   Virtual server name and IP for cluster administration
VMSDTC         192.168.0.4   Virtual server name and IP for MSDTC
VSQL1          192.168.0.5   Virtual server name and IP for the first SQL Server
VSQL2          192.168.0.6   Virtual server name and IP for the second SQL Server (if Active/Active)

Figure 8. Server Name and IP Address Requirements for an SQL Server Cluster

Page 15: Building Your Internet Infrastructure


INTERNET ENVIRONMENT

LoanGiant.com Fine-Tunes Web-based Solution at Dell ASC

By Paul Del Vecchio

Dell's Application Solution Centers, which are located in Round Rock, Texas; Florham Park, New Jersey; Limerick, Ireland; and Shanghai, China, provide robust laboratory facilities for Dell customers to stress test and tune their products on the latest Dell server, storage, and Internet appliance technologies—before their applications go live. Recently, LoanGiant.com completed testing and tuning of its Web-based solution at Dell's Round Rock Application Solution Center.

In March 2000, mortgage banking services provider LoanGiant.com™ concluded a performance tuning and optimization lab engagement at the Dell Application Solution Center (ASC) located in Round Rock, Texas, near Austin.

The ASC engagement—conducted over several weeks with a team comprised of LoanGiant.com staff and Dell ASC engineers—yielded a highly impressive 10-fold performance improvement in LoanGiant.com's online loan application solution. This quantum bump in performance has enabled LoanGiant.com to service more customers on its Dell PowerEdge server and PowerVault storage with greater efficiency than ever before. For Dell, it marked one of the most productive ASC engagements performed to date, and it demonstrates the added value of ASC services for dot-coms before they launch their e-commerce sites.

Over 60,000 Borrowers Served

Based in Detroit, LoanGiant.com provides mortgage banking services to customers nationwide. A committed and seasoned team of over 230 loan specialists and computer professionals has already generated over $5 billion in mortgage loans to over 60,000 borrowers throughout the United States, regardless of borrowers' credit histories. LoanGiant.com's proprietary software not only maintains client-data confidentiality, but also makes the mortgage loan process faster and easier than similar competitive products.

The Goal: An Optimized Web-based Solution

The primary reason for testing at the ASC was to help LoanGiant.com define, test, and optimize a new Web-based solution that would demonstrate extreme performance and reliability characteristics without impacting the existing back-end database. Prior to visiting the ASC, LoanGiant.com had been experiencing response time and query latency issues with both local and remote branch users.

Loan officers from the home and remote branch offices connect via Windows NT Terminal Server and Citrix® MetaFrame™, which then facilitate a connection to the customer loan database.

Page 16: Building Your Internet Infrastructure

The database system, supported by Microsoft SQL Server 7.0, had been supporting a maximum of 150 LoanGiant.com users, who frequently experienced sluggish response times. Adding online Web users to the existing equation only promised to complicate the matter for LoanGiant.com IT personnel. The team needed answers to the problem of scaling the database performance to support both types of LoanGiant.com's clients: Web-based (customer) and traditional LAN-based (loan officer) clients.

LoanGiant.com IT personnel worked with ASC engineers to define a solution that would address the critical nature of their problems. Figure 1 shows the proposed solution, which was later tested in the Dell ASC.


Figure 1. Proposed LoanGiant.com Solution and Dell ASC Testing Environment (diagram: a client pool connects through a Gigabit Ethernet network switch to the Web servers, application server, and database servers; storage comprises PowerVault 51F Fibre Channel switches, PowerVault 650F processors and storage, a PowerVault 35F bridge, a PowerVault 130T DLT library, and PowerVault 630F expansion storage)

Application Server: Destiny (Integra Application Server)
Dell PowerEdge 6350
4 x 550 MHz Pentium III Xeon, 1 MB L2
2 GB RAM
3 x 18 GB LVD (RAID 5)
1 x Intel EtherPro 1000SX

Web Server: WLBS Cluster (Internet Information Server 4.0)
Dell PowerEdge 6350
4 x 550 MHz Pentium III Xeon, 1 MB L2
2 GB RAM
3 x 18 GB LVD (RAID 5)
1 x Intel EtherPro 1000SX

Database Server: MSCS Cluster—Active/Passive (SQL 7.0 Enterprise)
Dell PowerEdge 8450
8 x 550 MHz Pentium III Xeon, 2 MB L2
4 GB RAM
2 x 18 GB LVD (RAID 1)
1 x Intel EtherPro 1000SX (client net)
1 x Intel EtherPro 100 (cluster net)
2 x QLogic Fibre Channel HBA

Page 17: Building Your Internet Infrastructure

The Testing Methodology

ASC and LoanGiant.com conducted extensive capacity and performance impact testing. The tests were performed in three phases.

Phase 1: Measuring Web Server Capacities and Performance

The first phase measured front-end Web server capacities and performance alone. LoanGiant.com needed to know how many concurrent Web loan users the proposed solution could handle while maintaining acceptable response times. This test was conducted without introducing existing loan application users. The team used WebLoad®, an e-business testing and analysis solution from RADVIEW™ Software, Inc., to simulate the Web client workload.

The team installed the WebLoad agent and Windows NT Workstation 4.0 on 32 Dell OptiPlex™ GX1 systems. The WebLoad application (console) was installed on a separate Windows NT server—a Dell PowerEdge 6350. The WebLoad console system facilitates recording and modifying all Web-based client scripts, starting and stopping tests, monitoring all Web client metrics, and defining testing criteria and parameters, among other functions.

The team defined acceptable response times for Web-based clients in the load-testing software utility. For the tests, the average response time for any Web-based client could not exceed five seconds in any three reporting intervals.

Each test maintained client reporting intervals of 20 seconds; that is, every 20 seconds the WebLoad system collected and reported client metric information. If the average response time for any client exceeded five seconds in three separate reporting intervals, the test terminated. The performance benchmarking community recognizes two-second responses for LAN-connected database clients as an acceptable response time. The team decided that five seconds was more than reasonable for Internet-connected clients.
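The termination rule is easy to state precisely in code. The sketch below is our restatement of the criterion just described, with invented sample data; the five-second ceiling, 20-second intervals, and three-violation cutoff come from the article:

```python
# The test-termination rule: one average response time is collected per
# 20-second reporting interval, and the test stops once the five-second
# ceiling has been breached in three intervals.

CEILING_SECONDS = 5.0
MAX_VIOLATIONS = 3

def should_terminate(interval_averages: list[float]) -> bool:
    violations = sum(1 for avg in interval_averages if avg > CEILING_SECONDS)
    return violations >= MAX_VIOLATIONS

samples = [0.4, 0.9, 6.2, 1.1, 7.8, 0.9, 9.5]   # one average per interval
print(should_terminate(samples))                 # True: 9.5 is the third breach
```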

Phase 2: Measuring Middle-Tier Capacity and Performance

The second phase consisted of testing and measuring the middle-tier (Integra loan-processing application server) performance and capacity. Again, the team did this without introducing load produced from Web-based clients, to facilitate problem isolation. Since Integra is a third-party application, generating workload automation for Integra client simulation proved to be no easy task.

Off-the-shelf stress testing software written for mainstream messaging or Web applications would not be useful for this test. The Dell ASC engineering team developed a custom workload using Visual Test from Rational® Software that would call the Integra functions within the client graphical user interface (GUI). The team installed the Integra client and workload script on 80 Dell OptiPlex GX1 systems.

The team measured Integra client response time by observing the elapsed time for the automated workload script to complete a single pass. This proved useful for measuring incremental changes to the application code and/or optimization of specific database operations. Each page within the Integra loan application GUI contained multiple database operations, which, when submitted, induced work on the database server to produce a response.

Phase 3: Measuring Performance Impact on the Database

The third and final phase consisted of running both workloads (Web and loan application) simultaneously and measuring the performance impact on the database system. It also aided in studying any possible interaction between the two types of users. All three testing phases addressed stressing the back-end database system.

Phase 1 Results

The Phase 1 results, covering Web server performance testing along with baseline and final performance measurements, are described in detail below.

Testing Web Server (Front-End) Performance

The front end consisted of two Web servers, each with Microsoft Internet Information Server (IIS) 4.0 installed on Windows NT 4.0, Enterprise Edition, which ran on Dell PowerEdge 6350 servers. The team load-balanced the Web servers using Windows NT Load Balancing Service (WLBS), which enables load balancing of network requests for Windows NT 4.0, Enterprise Edition, systems. WLBS balanced the network load across specific ports that can be configured from the WLBS properties dialog box. For more information on WLBS, visit the Microsoft WLBS overview page at http://www.microsoft.com/ntserver/ntserverenterprise/exec/feature/WLBS/WlbsFeat.asp.

For this test, we configured WLBS to balance the load across ports 80 and 443. Port 80 is used for general HyperText Transport Protocol (HTTP) requests, while port 443 is specifically used for Secure Sockets Layer (SSL) connections.



Page 18: Building Your Internet Infrastructure

The workload scripts and HTTP client requests for testing Web server performance were generated with WebLoad from RADVIEW. Performance metrics were monitored primarily from Windows NT Performance Monitor and WebLoad.

We used three specific workload scripts to generate a user load on the Web server front-end systems (a sketch of this mix follows the list):

• Loan Application: User connects to the site, then completes and submits the online loan application. Multiple fields in seven pages are completed and submitted to begin the loan origination process. This script accounted for 70 percent of the client connections.

• Loan Calculator: User connects to the site and navigates to the loan calculator page, where multiple fields on a single page are completed and submitted to acquire loan modeling information. This script accounted for 15 percent of the client connections.

• Testimonials: User connects to the site, navigates to the customer testimonials page, and browses previous customer testimonials. This script accounted for 15 percent of the client connections.
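To make the 70/15/15 mix concrete, here is a minimal Python sketch of how a load generator might choose a script for each simulated client session. The script names mirror the list above; the weighted-choice approach is an assumption for illustration, not how WebLoad itself is implemented.

    import random

    # Workload mix from the article: 70% loan application,
    # 15% loan calculator, 15% testimonials.
    SCRIPTS = ["loan_application", "loan_calculator", "testimonials"]
    WEIGHTS = [0.70, 0.15, 0.15]

    def pick_script(rng=random):
        """Choose the script a newly started virtual client will run."""
        return rng.choices(SCRIPTS, weights=WEIGHTS, k=1)[0]

    # Example: distribute 1,000 virtual clients across the three scripts.
    counts = {name: 0 for name in SCRIPTS}
    for _ in range(1000):
        counts[pick_script()] += 1
    print(counts)  # roughly 700 / 150 / 150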

Measuring Web Server Baseline Performance

The team measured an initial performance baseline with the default registry settings and hardware configuration. We ran an HTTP workload using the script and work distribution noted above. We ran Windows NT Performance Monitor (Perfmon) remotely from a separate system to characterize the LoanGiant.com system profile during benchmark runs.

Figure 2 depicts a sample of raw data taken from the WebLoad statistics. Figure 3 shows a Windows NT Performance Monitor chart that resulted from the first set of Web loan runs. For the baseline result, a maximum of 241 concurrent users was achieved before response time increased dramatically and finally became unacceptable.

The data in Figure 3 suggests database contention issues, specifically with the hot tables and SQL lock resources. Page Level Lock Requests per second averaged 16,000 and reached as high as 20,000. The team scrutinized database operations and the effectiveness of indexes.

Measuring Web Server Final Performance

After multiple iterations of testing, performance monitoring, and tuning, the Web loan system was able to sustain 1,376 concurrent Web users before response time averages increased to unacceptable levels. Tuning work consisted of both Web and database server optimizations, and was not limited to built-in application tuning parameters alone.

The team modified the database to help alleviate contention issues discovered during the Web testing phase of the engagement. Figure 4 shows random samples of the final performance data, taken from the WebLoad reporting chart.

Phase 2 Results

Phase 2 involved testing and assessing the performance of Integra, the online loan application software used by LoanGiant.com.

Testing the Integra Loan Application Software

(Note: Modifications made as a result of the Web server testing had no performance impact on the Integra application. Database modifications made prior to the Integra testing phase were specifically designed to address Web performance issues.)


Runtime       Total Load Size   Total Response Time
in Seconds    (Current Value)   (Current Value)
100            21                —
200            41                —
300            59.625            0.144
400            61                0.989
500            81                0.93
600           101                0.111
700           101                4.064
800           121                0.323
900           141                0.982
1000          159.4              0.397
1100          161               22.264
1200          181                0.233
1300          201                1.438
1400          201                3.151
1500          221                0.344
1600          241               37.975
1620          241               34.986

Figure 2. Web Client Baseline Performance Data

Figure 3. Performance Monitor Chart of Database Server Profile during Web-Client Stress


LoanGiant.com used online loan application software developed and maintained by Integra Software Systems®, headquartered in Franklin, Tennessee. This was the only software used by LoanGiant.com to process its conventional loans until the Dell ASC testing and implementation of the online Web loan system.

Although the new online system gave LoanGiant.com another mechanism for acquiring new loan applications, it would not replace Integra. LoanGiant.com still uses Integra software to process all loan applications, whether the application was acquired online or via conventional means. This meant that the team needed to measure application and database server performance under an Integra client load. To generate Integra user load on the database server, we installed the automated Integra workload script on 80 Dell OptiPlex GX1 systems.

We then modified the script to incorporate sleep times, or built-in delays, which better simulate real-world users. Without sleep times, the script ran several times faster than any human could enter the information manually. The result was a load on the database server that is not characteristic of the typical user; therefore, it was necessary to fine-tune the script sleep times to better simulate a typical LoanGiant.com user.

To accomplish this, the team monitored and logged LoanGiant.com's production database server over a period of several days, noting user connections and transaction rates (among other performance counters) over multiple eight-hour shifts. This information provided the team with the required data to calculate the amount of stress generated by a known quantity of users, as measured from the database transactional load. We then used this formula to adjust sleep times in the Integra script to simulate a given number of users.
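The article does not give the formula itself, but the calibration idea can be sketched as follows: measure transactions per second per real user in production, then pad each scripted pass with enough sleep time that a simulated user generates the same rate. Everything in the sketch below (names and numbers alike) is illustrative, not taken from the engagement.

    # Illustrative sleep-time calibration; all numbers are invented.
    def sleep_per_pass(prod_tps, prod_users, txns_per_pass, raw_pass_seconds):
        """Total sleep (seconds) to add to one scripted pass so that a
        simulated user matches the production per-user transaction rate."""
        per_user_tps = prod_tps / prod_users        # e.g., from Perfmon logs
        target_pass_seconds = txns_per_pass / per_user_tps
        return max(0.0, target_pass_seconds - raw_pass_seconds)

    # Example: production shows 20 txn/s from 400 users; one scripted pass
    # issues 30 transactions and takes 120 seconds without sleeps.
    print(sleep_per_pass(20, 400, 30, 120))  # 480.0 seconds of sleep to add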

Assessing Integra Performance

Determining Integra performance involved several factors.

Establishing a baseline. Although it was now possible to generate stress on the back end with a known quantity of Integra users, there was no way to scientifically measure user metrics or the performance impact of incremental changes to code or the database. To measure these factors, the team established baseline performance by measuring the elapsed wall clock time—without sleeps incorporated—for one client to complete a single pass of the script. This process took exactly two minutes to run. One pass takes an automated user through the entire loan application, credit grading, and approval process.
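Measuring a baseline of this kind is simply wall-clock timing around one scripted pass. A minimal sketch, assuming a hypothetical run_single_pass() that drives the Integra GUI script once:

    import time

    def run_single_pass():
        """Placeholder for one full pass of the automated loan-application
        script (application, credit grading, approval)."""
        time.sleep(1)  # stand-in for the real GUI automation

    start = time.perf_counter()
    run_single_pass()
    elapsed = time.perf_counter() - start
    print(f"single pass took {elapsed:.1f} s")  # the team's baseline was 120 s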

Reviewing issues discovered by adding multiple users. As multiple users were added to the system and the load on the database server was increased, the elapsed time for the single-user script to complete one pass gradually increased.

A knee in the completion time curve was observed as the load reached 125 users. The result was unacceptable response times, as observed from the individual user screens. Windows NT Perfmon indicated database lock latency and table contention issues becoming more apparent as the load was increased. This behavior was similar to that noted by the team earlier during the Web testing phase of the engagement.

Identifying modifications that enhance performance. After profiling the application and database systems under stress, the team discovered multiple opportunities for performance improvement. Most performance enhancements came primarily from tuning individual database operations and modifying indexes to be more effective. As a result, SQL Server database lock resources became less busy, and contention on hot tables was relieved. After modifications, the database system could sustain over 300 Integra users before response time latencies became unacceptable again.

We reduced the elapsed time for the script to complete a single pass from two minutes to 32 seconds. Figure 5 depicts the elapsed time for a single pass of the Integra script before and after performance enhancements.


Runtime       Total Load Size   Total Response Time
in Seconds    (Current Value)   (Current Value)
100            183.06            0.483
200            287.377           0.086
300            378.715           0.204
400            480               0.611
500            565.126           1.056
600            661.44            0.322
700            760               0.691
800            844               1.364
1000           936.54            1.523
1100          1040               2.527
1200          1083.477           2.395
1300          1166.351           2.824
1400          1236               3.843
1500          1320               2.249
1600          1348               4.970
1700          1376              11.956
1800          1376              15.481

Figure 4. Web Client Final Performance Data



Phase 3 Results

The results of phase 3 indicated the solution was ready for deployment.

Testing Impact of the Web and Integra Clients

The team began the third and final testing phase by initiating a Web client user load to just over 1,100 Web loan users. They monitored and logged both the Web and database servers with Windows NT Perfmon. No notable performance issues appeared on either the front- or back-end systems. They then proceeded to ramp up Integra clients slowly while monitoring the database system for signs of user contention or interaction. The system was able to sustain over 300 Integra users while supporting 1,100 Web clients with no ill effects.

Response times for both client types were well within acceptable limits, and no user interaction was apparent. Figure 6 illustrates the SQL Server database system profile via a Windows NT Perfmon chart, as taken during the last phase of testing. Note that the database system CPU utilization during testing averaged about 50 percent. SQL Page Level Lock Requests per second now averaged 8,900 versus 16,000 before database optimizations. Database lock latencies were greatly reduced as a result.

Performance Testing Saved Time and Money

Performance testing at the Dell ASC proved to be an invaluable resource for LoanGiant.com. The testing answered many questions about the company's total solution that normally would have been answered only after the solution was deployed into production—after many costly man-hours of discovery and resolution were invested.

Performance testing at the ASC lab and proactively seeking answers to questions saved LoanGiant.com time and money, and ensured its production implementation was ready for a successful deployment.

Today, LoanGiant.com's e-commerce solution is alive and well on the Internet at http://www.loangiant.com.

Paul Del Vecchio ([email protected]) is a senior systems engineer/consultant working in Dell's Austin area Application Solution Center. Paul is a Microsoft Certified Systems Engineer (MCSE) and came to Dell from the ISV Performance Labs at Intel Corporation. He has spent the past six years helping customers and independent software vendors achieve optimal performance of their applications running on Intel architecture.


Figure 5. Integra Client Baseline versus Final Performance

Figure 6. SQL Server Profile under Web and Integra Client Stress

[Figure 5 chart: elapsed time (0:00:00 to 0:16:00) versus user load (0 to 600), baseline versus final.]


Enterprise Order-Entry Meets the Internet

Dell's new order-entry system enables customers to have instant access to detailed information about their orders via the Internet in a secure environment, 24 hours a day, 365 days a year. For Dell, this new enterprise system is more streamlined and centralized, with greater scalability, redundancy, and fault tolerance than previous system implementations.

By Dustin Hicks, Dan Holling, and Kirsten Nothstine

INTERNET ENVIRONMENT

Dell Computer Corporation sells more than $40 million worth of computers each day over the Internet. The company provides products and services to customers including major corporations, government agencies, institutions, small businesses, and individuals in more than 140 countries. The volume and variety of these sales require that Dell systems deal with an enormous number of outside data sources and information requests.

Dell wanted a new order-entry application that would better meet the needs of its business and its customers. The existing order-entry system operation was localized to each regional office and required complex replication scenarios. It also dealt with a wide variety of currencies and data stores, and operated within a complicated hierarchy of domestic and international organizations to support multiple separate business incentives. The company needed an improved, simpler order-entry system.

One requirement for the new system was the need to accommodate new business models, markets, technologies, and future enterprise growth. It also had to integrate with many disparate existing systems, including legacy applications and data stores. Its pricing model would be highly complex to account for a wide variety of different currencies, exchange rates, billing, freight, and shipping scenarios. The system would also need to deal with various domestic and international organizations that were implementing vastly different marketing and sales incentives.

Other desired characteristics for this type of enterprise-level application include the ability to pass and distribute enormous amounts of data across servers, horizontal scalability for adding more servers to the distributed model, good redundancy and fault tolerance, and the ability to keep downtime to a minimum.

Catapult into E-Business

Dell's dynamic, complex operational requirements demanded an equally progressive and capable technology partner. As a result, Dell turned to Microsoft and Ernst & Young to design the architecture for this new order-management system. Microsoft then asked Catapult Systems to help develop and implement the application design, based on Microsoft SQL Server 7.0.

Dell wanted Microsoft technology for the new system so it could integrate with its existing operating systems. This simply made good sense since Dell also uses Microsoft for some of its desktop applications.

Catapult had proven its expertise in the development and successful implementation of Microsoft-based e-business systems with such organizations as Sulzer Orthopedics, the George W. Bush Campaign, Oil Properties, and the 3M® Austin Center. So partnering with Microsoft to create a system for Dell was a natural fit.

Catapult helped to design, develop, and implement the database architecture and perform the unit, integration, and functional testing for the application. Catapult and Microsoft developed a scriptable component tester. This tester allowed functional and positive/negative testing of individual components or groups of components with a variety of scripted inputs, including scalar values, XML files, and ADO record sets. The team tested the application and logged errors to the project's error database. The developers were also involved in performance testing and tuning the applications. Microsoft and Catapult spent months setting up and performance testing a large-scale database with assistance from the Dell performance testing team.
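The article gives no implementation detail for the component tester, but the pattern is a familiar one: a table-driven harness feeds each component scripted inputs, including deliberately invalid ones, and logs failures. A minimal sketch along those lines, with all names hypothetical:

    # Table-driven component tester sketch (hypothetical names throughout).
    def create_order(customer_id, quantity):
        """Stand-in for a business component under test."""
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return {"customer": customer_id, "quantity": quantity}

    # Scripted cases: (inputs, expect_failure) pairs cover positive and
    # negative testing, as described above.
    CASES = [
        ({"customer_id": 42, "quantity": 3}, False),
        ({"customer_id": 42, "quantity": 0}, True),   # negative test
    ]

    error_log = []  # stand-in for the project's error database
    for inputs, expect_failure in CASES:
        try:
            create_order(**inputs)
            failed = False
        except Exception:
            failed = True
        if failed != expect_failure:
            error_log.append((inputs, "unexpected result"))
    print(f"{len(error_log)} unexpected results")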

N-Tier Architecture Provides Flexibility and Scalability

After the initial design phase, an n-tier architecture was chosen for system development to ensure both flexibility and scalability, and to reduce redundancy. N-tier architectures have significant performance and system advantages over traditional two-tier architectures, particularly when a system uses many different products and multiple machines. These architectures scale better because they eliminate bottlenecks on any single layer. Each layer is scaled independently of the others and can also be load-balanced individually, thereby creating faster network communications, greater reliability, and greater overall performance.

The application's presentation layer currently uses a farm of Microsoft Internet Information Server and Active Server Pages (ASPs) to instantiate Component Object Model (COM) objects in local Microsoft Transaction Servers (MTSs). These servers are connected via a gigabit-switched backbone to a Decision Support Server (DSS) and a lookup server (see Figure 1). The actual presentation layer is a combination of client-side DHTML/XML and server-side ASP pages. This represents the user interface. The business rules layer is implemented as components hosted in MTS. The data layer consists of MTS-hosted components and the database servers.

Each database server has a warm standby server—implemented with Microsoft Cluster Server—that is the recipient of SQL Server 7.0 transaction log shipping. The three layers of COM objects in each transaction server include business services, composite data services, and data services.

The database lookup server is the common data store for customer data. It contains a view that combines the live customers on both database servers. When a customer connects to the server, the lookup server (used once) retrieves all of the data for that customer. One of the database servers then works with the customer for the remainder of that session. This helps to distribute user connections across the databases.
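A rough sketch of that connect-once-then-pin pattern, in Python; the lookup contents and the routing rule are assumptions based on Figure 1 rather than the production design:

    # Sketch of lookup-then-pin routing across two database servers.
    # The lookup mirrors Figure 1's view: DB1.customers UNION DB2.customers.
    LOOKUP_VIEW = {            # customer_id -> home database server
        1001: "DB1",
        1002: "DB2",
    }
    sessions = {}              # session_id -> pinned database server

    def connect(session_id, customer_id):
        """Consult the lookup server once, then pin the session to the
        customer's home database for the rest of that session."""
        if session_id not in sessions:
            sessions[session_id] = LOOKUP_VIEW[customer_id]
        return sessions[session_id]

    print(connect("s1", 1001))  # DB1
    print(connect("s1", 1001))  # still DB1; lookup not consulted again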

Figure 1 shows the architecture for the new order-entry system.

This type of design is highly scalable. If the administrator runs out of connections or space anywhere in the system, simply adding another server will extend the service. Catapult avoided replication for this system because it does not scale as well as a transaction log shipping solution. In addition, replication would ultimately make the system more difficult to manage because the lookup server acts as a common data store for dynamic lists, menus, and other functions. The result is a simpler design since only nonstatic data is being distributed.

The use of clustering addresses redundancy at the hardware level. Clustering is two or more servers operating as a single virtual server for purposes of high availability, scalability, and/or manageability, ensuring minimal downtime. If one server fails, another server takes over the operation of the failed server. Clustering extends a reliable server by duplicating the server itself, removing all single points of failure.


EXECUTIVE SUMMARY

Challenge: Dell Computer Corporation wanted to improve and simplify an internal order-entry system that required complicated replication scenarios for proper operation.

Solution: A new enterprise-level Microsoft SQL Server 7.0 order-entry system that is more streamlined and centralized, with greater scalability, redundancy, and fault tolerance.

Benefit: Distribution of all server resources, data storage, and user connections.

Technologies:
• Microsoft SQL Server 7.0
• Internet Information Server
• Active Server Pages
• Internet Explorer 5.0
• Extensible Markup Language (XML)
• Extensible Stylesheet Language (XSL)
• Dynamic HyperText Markup Language (DHTML)
• Microsoft Transaction Server

Scenarios:
• E-commerce
• Internal order-entry system



The transaction log shipping handles redundancy for the data level. The transaction logs are backed up every five minutes, and files are copied to the corresponding warm standby server and applied to the transaction log of a read-only database on the warm standby server. If a failure occurs, administrators simply redirect MTS transactions to a warm standby server and change the read-only status of the database.
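The mechanics can be sketched with the two T-SQL statements involved: a log backup on the primary, and a restore WITH STANDBY on the warm standby, which keeps the standby copy readable. The database name, DSNs, and file paths below are invented, and a production setup would use SQL Server's own log-shipping jobs rather than a Python loop.

    import time
    import pyodbc  # assumes ODBC data sources for both servers exist

    PRIMARY = "DSN=OrdersPrimary"               # hypothetical DSNs
    STANDBY = "DSN=OrdersStandby"
    LOG_FILE = r"\\standby\logship\orders.trn"  # hypothetical file share

    def ship_log_once():
        # Back up the transaction log on the primary server.
        with pyodbc.connect(PRIMARY, autocommit=True) as primary:
            primary.execute(f"BACKUP LOG Orders TO DISK = '{LOG_FILE}'")
        # Apply it to the standby; WITH STANDBY leaves the copy read-only
        # but queryable, matching the warm standby described above.
        with pyodbc.connect(STANDBY, autocommit=True) as standby:
            standby.execute(
                f"RESTORE LOG Orders FROM DISK = '{LOG_FILE}' "
                "WITH STANDBY = 'C:\\logship\\undo.dat'"
            )

    while True:
        ship_log_once()
        time.sleep(300)  # five-minute cadence, per the article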

For the database tier, Dell chose SQL Server 7.0 because of its scalability, its ability to deal with a wide range of outside data sources, and the availability of features such as Data Transformation Services, transaction log shipping, easy data migration, and its simple-to-use administration tools. Since most of the applications using the system are intranet applications, presentation layer technologies include Internet Explorer 5.0 and Active Server Pages to pass XML data items to the browser clients. State was stored in XML data islands on the browser. A customized XML serializer was developed to optimize the XML for low-bandwidth scenarios, running over links as slow as 56 Kbps.
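The serializer itself is not described, but a common way to shrink XML for slow links is to drop insignificant whitespace and move scalar fields into attributes. The sketch below, with invented element names, shows the idea:

    import xml.etree.ElementTree as ET

    def compact_order(order):
        """Serialize an order dict as attribute-heavy, whitespace-free XML."""
        elem = ET.Element("o", {k: str(v) for k, v in order.items()})
        return ET.tostring(elem, encoding="unicode")

    verbose = "<order>\n  <id>7</id>\n  <qty>3</qty>\n</order>"
    compact = compact_order({"id": 7, "qty": 3})
    print(compact)                      # <o id="7" qty="3" />
    print(len(verbose), len(compact))   # the compact form is much smaller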

An Effective Web-Based Order-Entry System

Using the new order-entry system, Dell customers and customer service representatives can examine detailed information about the status of orders over the Internet. The systems provide customers instant access to this Web-based information in a secure environment, 24 hours a day, 365 days a year. The new application also allows distribution of all server resources, data storage, and user connections.

For more information about this and other Catapult projects, contact Sam T. Goodner, CEO, at 512-328-8181 or visit www.catapultsystems.com.


Figure 1. Order-Entry System Architecture

[Diagram: an IIS/MTS Web farm on a gigabit-switched backbone connects to database servers DB1 and DB2 (each a processor with 4 GB RAM), a DSS/lookup server exposing the view DB1.customers UNION DB2.customers, and warm standby servers fed by transaction log shipping.]


Providing Business-to-Business Solutions with mySAP.com

As the market leader of inter-enterprise software solutions, SAP™ (Systems, Applications, and Products in Data Processing) is leveraging its strength in industry-focused business software and the world's largest enterprise software customer base to deliver mySAP.com™. mySAP.com provides an open collaborative business environment of personalized solutions on demand. This enables companies of all sizes and industries to fully engage their employees, customers, and partners to capitalize upon the new Internet economy. This article provides an overview of several SAP solutions that help businesses integrate business-to-business technology into their operations.

By Anand Vridhagiri

INTERNET ENVIRONMENT

The business world today moves at a much faster pace. The Internet now dictates the speed at which business is conducted, and new technologies become standard even before they are recognized industry-wide. Because of these quickly adopted technological advancements, the Internet has become a cost-effective and ubiquitous vehicle for connecting businesses.

One of the challenges in connecting businesses is coordinating the flow of information within organizations and across organizational boundaries. Electronic data interchange (EDI) has been the traditional technology for exchanging certain types of data electronically. However, it is difficult and costly to implement, and it requires a unique solution for each pair of trading partners, among other issues such as security, stability, and authentication.

Extensible Markup Language (XML) has emerged as a new technology for exchanging structured information over the Internet, intranets, and extranets. It provides the ability to separate the data and the structure from the process. The Internet provides the interconnection between trading partners, and Internet-based services provide a layer of security, authentication, and support. Using XML enables companies to write a single application that will work with a diverse set of customers.

XML has been the impetus for the emergence of e-commerce portals where customers go to only one Web site—the portal—to view and place orders. Vendors go to the same portal to view and respond to orders, as shown in Figure 1.

Thus, businesses have moved to the next wave of communications: business-to-business (B2B) integration—the automated exchange of information between different organizations—which is rapidly becoming an essential component to ensure the success of any enterprise in the world of e-business.

What Is Business-to-Business Integration?

Business-to-business integration is the automated exchange of information between different organizations. Trading partners across multiple industries—with incompatible IT systems—face the daunting task of resolving costly and complex integration issues in today's electronic business world.

B2B integration has become a critical component in solving the quest for information. Its goals include reducing costs, improving efficiencies, and gaining a competitive edge. Today, B2B integration is evolving as the tailor-made solution to these issues.

B2B integration improves external processes, such as supply-chain integration and shipping/logistics tracking, by enabling rapid, cost-effective, real-time links between business partners. For example, B2B integration can involve extended supply-chain integration. Suppliers are often themselves manufacturers within their own supply chains; therefore, when supplier/manufacturer A is running low on supplies, it can try to obtain resources from supplier D, E, or F. See Figure 2.

SAP Solutions Facilitate B2B Integration

SAP solutions can facilitate this business-to-business integration. Purchasing and sales departments, in particular, have a great deal to gain from the latest software advances. Processes can be optimized through intelligent software that does more than simply automate repetitive tasks.

The latest software offers a whole range of opportunities for tightening the links in the supply chain using efficiency-enhancing B2B processes. SAP's Business-to-Business Procurement solution, the first in a family of SAP Internet products and one of the core elements of the mySAP.com initiative, offers the means to exploit the procurement process potential.

The range of advanced planning and control tools already available for direct procurement (the procurement of goods for manufacturing and distribution processes) is impressive. These tools include Material Requirements Planning (MRP), Enterprise Resource Planning (ERP), and Advanced Planning and Scheduling (APS). For indirect procurement, however, outmoded, inefficient, vendor-dictated, paper-based, labor-intensive processes are still the order of the day.

SAP B2B Procurement

SAP B2B Procurement (BBP) is an open, inter-enterprise procurement solution that facilitates all processes associated with the procurement of maintenance, repair, and operations goods and services—from creating purchase requisitions to settling invoices. SAP's BBP solution has been developed for companies seeking to streamline their procurement processes by reducing transaction cycle times and improving supplier management. Any customer using SAP R/3® Release 3.1H or higher can implement SAP BBP.

SAP Internet Transaction Server

The SAP Internet Transaction Server (ITS) extends the highly scalable three-tier client/server architecture of SAP R/3 to the Web. The SAP ITS is a gateway between the Web server and the SAP R/3 application server.

The SAP ITS enables Internet and intranet users to communicate directly with R/3 and run SAP's standard Internet applications. Key aspects of the SAP ITS include R/3 Web integration, scalability, performance, openness, and security. The SAP ITS adds an HTML-based user interface to SAP applications by acting as a gateway between HTTP and the SAP R/3 application server. The ITS design takes into account security concerns such as firewall support and Web server security.


Figure 1. A Web Portal

[Diagram: customers and vendors connect over the Internet to an intermediary Web portal (Web server with replicated disks), which in turn links to MRP, ERP, and other operational systems.]


SAP ITS technology provides compatibility with all major browsers. The SAP ITS supports full SAP message-server-based load balancing and is implemented as a lightweight multithreaded server offering optimum performance.

The SAP ITS Architecture

The SAP ITS acts as a gateway located between one or more Web servers and one or more R/3 application servers (SAP R/3 system). The SAP ITS manages all requests and responses that pass between a Web browser client and an R/3 server. The two main components of the SAP ITS architecture are WGate (Web server gateway) and AGate (application server gateway), shown in Figure 3.

WGate Links the SAP ITS and the Web Server

WGate is the link between the SAP ITS and the Web server. WGate receives requests from the Web browser via the Web server, then forwards them to AGate via a TCP/IP connection—the Web browser cannot connect directly to AGate.

WGate always resides on the same machine as the Web server and supports the following HTTP server interfaces: Microsoft Internet Information Server Application Programming Interface (ISAPI), Netscape™ Server API (NSAPI), and other Web server APIs via the Common Gateway Interface (CGI).

Communication between the Web browser and the Web server is based on single request/response cycles. WGate must transfer the browser request to a permanently running server process, which is handled by AGate.

AGate Links the SAP ITS and the SAP R/3 Application Server

AGate acts as the link between the SAP ITS and the SAP R/3 application server, and provides the core processing component of the SAP ITS. AGate receives Web browser requests from WGate and communicates with the R/3 application server via the Dialog Process (DIAG) or Remote Function Call (RFC) protocol.

AGate processes the request and hands it over to the SAP R/3 system, which either starts the first dialog step of a new transaction or submits further data for the next dialog step of a transaction already started. When the R/3 system is ready with a result, AGate retrieves it and sends the response back to WGate as an HTML page. AGate is responsible for handling all sessions, service, and user management, and for generating the HTML documents that are sent back to the Web browser client.


Figure 3. ITS Architecture

[Diagram: Web browser -> HTTP -> Web server (ISAPI, NSAPI, or CGI) -> WGate -> TCP/IP -> AGate -> DIAG/RFC -> R/3 system; WGate and AGate together form the SAP ITS.]

[Diagram: a manufacturer's operational systems connect through a B2B integration server and the Internet to suppliers A, B, and C, which in turn connect through their own B2B integration servers to suppliers D, E, and F.]

Figure 2. Supply Chain Integration



It is best not to implement both WGate and AGate on the same system in a production environment because of the potential for compromising security. WGate is small and functionally simple; it simply passes requests back and forth between the Web server and AGate, and is less prone to security threats. AGate, however, handles most of the processing required to run a transaction over the Internet and is connected directly to the R/3 system; therefore, security is an important concern with AGate. To enhance security, a firewall can be placed between the two gateways.

The SAP Business Connector Integrates via Open Technologies

The SAP Business Connector (BC) is a B2B-enabling technology that allows integration with SAP systems via open and non-proprietary technologies. The SAP BC uses the Internet as its communication platform and XML/HTML as the data format. SAP BC provides seamless integration of different IT architectures, reduction in cycle times and supply-chain inefficiencies, and the extension of traditional electronic data interchange (EDI) infrastructures.

The BC provides two advantages for SAP users:

• XML enables SAP solutions; that is, the flexibility of XML allows it to adapt as business document standards evolve.
• BC facilitates integration with SAP's Business Application Programming Interface (BAPI) and Application Link Enabling (ALE) technologies.

In essence, the SAP BC is an open two-way interface to SAP business partners on the Internet. See Figure 4.

The SAP BC is bundled with SAP's mySAP.com package and SAP's BBP product, and is also available as a stand-alone package for SAP customers.

All SAP functionality accessible via BAPIs and intermediate documents (IDocs) can be made available to business partners over the Internet as secure XML-based services. This XML service layer makes SAP functionality available to other applications within the organization or to partners externally via the Internet.

The external applications do not require users to understand SAP BAPIs or internal data structures. The SAP BC allows native calls into and out of SAP systems without writing BAPI code. It also supports all RFC-enabled function modules and includes the communications and security infrastructure required to implement B2B solutions.

IDocs Encapsulate Business Documents

IDocs are SAP data formats for encapsulating business documents. Several third-party software applications help translate IDocs into formats understandable by non-SAP systems.

The SAP BC can translate IDocs into XML and send the resulting messages over the Internet rather than over EDI value-added networks (VANs). Thus, the SAP BC provides real-time, transactional interactions between SAP systems and other applications, including other Enterprise Resource Planning (ERP) systems, legacy systems, and Web-hosted data, such as supplier catalogs and XML data sources.

The SAP BC appears as an RFC destination to an SAP system. (RFC is the communications protocol used to talk to SAP systems.) The same destination can be used to direct a BAPI call to the SAP BC. The SAP RFC library is used to interface between SAP systems and the SAP BC. Third-party applications that already have the RFC protocol implemented can now use the SAP BC for remote access to SAP systems.

How It Works: An Application Example

The SAP system generates an IDoc and sends it to the SAP BC, which formats the IDoc and transmits the business document. The message gateway receives the request (the consumer process), and the publisher process transmits the request. The final destination message is either sent to the supplier or pulled by the supplier. The receiving SAP BC at the supplier receives the XML message, creates an IDoc, and sends it to the local SAP R/3 system. See Figure 5.
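To make the translation step concrete, here is a toy Python illustration of turning an IDoc-like segment structure into an XML message for transmission over HTTP. The structure is a simplified, hand-built dictionary loosely modeled on SAP's ORDERS IDoc; a real SAP BC derives the XML from IDoc metadata instead.

    import xml.etree.ElementTree as ET

    # Simplified IDoc-like structure: a document type plus named segments.
    idoc = {
        "doctype": "ORDERS01",
        "segments": [
            ("E1EDK01", {"CURCY": "USD"}),   # header segment
            ("E1EDP01", {"MENGE": "3"}),     # line-item segment
        ],
    }

    def idoc_to_xml(doc):
        """Render the IDoc-like structure as an XML message body."""
        root = ET.Element("IDOC", BEGIN="1", DOCTYPE=doc["doctype"])
        for name, fields in doc["segments"]:
            seg = ET.SubElement(root, name)
            for field, value in fields.items():
                ET.SubElement(seg, field).text = value
        return ET.tostring(root, encoding="unicode")

    print(idoc_to_xml(idoc))
    # The resulting string could then be POSTed to the partner's gateway.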

mySAP.com Ties All SAP Products Together

mySAP.com is the unifying environment that ties together all SAP products, including SAP R/3, SAP industry solutions, SAP New Dimension offerings, and other key initiatives. In addition, mySAP.com extends beyond the boundaries of SAP products to integrate other content, services, and software. Through this unique open environment, mySAP.com empowers employees, reduces costs, and enables customers to participate in the dynamic electronic marketplace.


[Diagram: the SAP Business Connector's XML interface links an R/3 system over the Internet/HTTP to other R/3 systems, non-SAP systems, Web applications, Web content servers, and Web browsers, exchanging HTML and XML.]

Figure 4. SAP Business Connector



The Web-enabled solutions of mySAP.com are accessible through the simple, convenient interface of each user's Web browser. The four pillars that build the mySAP.com environment are the Marketplace, Workplace, Business Scenarios, and Application Hosting.

mySAP.com Marketplace—The B2B Hub

The mySAP.com Marketplace is an electronic B2B hub. It enables companies of all sizes and diverse industries to conduct collaborative business. It provides a foundation for customers, suppliers, and business partners to work together in a single virtual business environment, as if everything exists under one big umbrella.

The modules of the mySAP.com Marketplace include:

• Business directory that contains information about various industries, thus linking the buyer and seller
• Communities that provide a source for updated information on various industries
• Services to help improve business efficiency
• Personalized home page with personalized information

The mySAP.com Marketplace, shown in Figure 6, helps ERP systems communicate directly with one another. It fosters and supports communities of buyers and sellers in a collaborative business environment by offering a number of information resources and communication channels that help link the buyers and sellers together. The business directory component of the mySAP.com Marketplace helps buyers quickly find suppliers and products. As a one-step business environment, the mySAP.com Marketplace offers easy-to-access market information and secure channels to exchange business documents between buyers and sellers.

For example, a company using SAP BBP to access a mySAP.com Marketplace can find a vendor, select a product, and create a purchase order from an electronic shopping cart. By simply using the desktop, users can accomplish these tasks efficiently, error-free, and economically. This is a one-step process compared to past processes in which users went to the vendor, completed the transaction, then updated their ERP systems.

The mySAP.com Workplace Enhances Workplace Productivity

mySAP.com Workplace is a Web-based enterprise portal that provides access to information, applications, and services. At their desktops, mySAP.com Workplace users can see all the tasks they are required to perform, information necessary for their daily work, and links to technical resources.

mySAP.com Workplace is based on roles, depending on the job description: planner, buyer, sales manager, and so on. Users are not restricted to the SAP R/3 system or other mySAP.com components; any external source that can be accessed via the Internet can be integrated as part of mySAP.com Workplace.


[Diagram of the message flow described above: SAP system -> BC -> message gateway -> publisher process -> supplier BC -> supplier R/3 system.]

Figure 5. mySAP.com Portal—An Application Example

Figure 6. mySAP.com Marketplace


Unlike a standard portal, mySAP.com Workplace has several features built in to enhance productivity, including the following:

• Content—Nearly 200 roles can be used as templates for business tasks
• Openness—Not limited to R/3 or mySAP.com only; users can access anything anywhere over the Internet
• Security—Industry-standard security and administration built in to meet the demands of enterprise portals
• Integration—Drag-and-relate features; applications can be linked without being reprogrammed
• Mobile computing—Supports mobile computing

Figure 7 shows mySAP.com Workplace. This represents an enterprise portal, which is a logical collection of activities and functions that match specific job descriptions.

Two key features of mySAP.com Workplace are Launch Pad and MiniApps. Launch Pad is the area from which users navigate directly and rapidly to the system functions that are relevant to their particular role. It contains links to systems, reports, and Web pages.

MiniApps are minor applications that provide information important for a specific role. The drag-and-relate function allows users to benefit from logical links among objects by simply selecting one object and relating it to another. A drag-and-relate function, for example, would involve dragging the tracking number from a receipt to a shipping company's Web site to get details about the shipment.

mySAP.com Workplace is one of the building blocks of mySAP.com (see Figure 8). The architecture of mySAP.com Workplace consists of the mySAP.com Workplace server, which supports user management, role administration, and RFC administration. The middleware consists mainly of the SAP ITS, which transports SAP functionality to the Web. A Web browser allows end users to access mySAP.com Workplace. The sizing of the three mySAP.com Workplace elements—the server, middleware, and Web browser—depends on the number of users who work in the portal.

Business Scenarios Help Create Customized Business Processes

SAP offers a wide choice of Business Scenarios suited for various industries and relationships, each of which can be customized using mySAP.com. Business Scenarios can also leverage content knowledge or services, such as online catalogs or auctioning services, that are available through mySAP.com.

The functionality needed to implement Business Scenarios is available through mySAP.com components. Business Scenarios can span multiple SAP and non-SAP systems, and leverage content knowledge or services through the mySAP.com Marketplace.

SAP lets you test-drive Business Scenarios directly from your desktop through SAP's Internet Demonstration and Evaluation System (IDES). Business applications include E-Commerce, Customer Relationship Management, Supply Chain Management, Business Intelligence, Human Resources, Manufacturing, and Financials. Visit http://www.sap.com/solutions/business_scenarios/index.htm for more information.

Application Hosting—Comprehensive Offering of Hosting Services

In addition to the mySAP.com Marketplace, Workplace, and Business Scenarios, mySAP.com Application Hosting comprises the fourth main building block of mySAP.com. mySAP.com Application Hosting is a comprehensive offering of hosting services provided by SAP and partners that spans the entire mySAP.com solution life cycle—from evaluation to implementation to operations and continuous improvement. The following Internet-based offerings come with mySAP.com Application Hosting:

• Test-Drive Your Solutions Online
• Compose Your Solutions Online
• Implement Your Solutions Online
• Host Your Solutions Online
• Build and Host Your Marketplace/Business Community Online

SAP's e-Commerce Starter Pack—Putting Everything Together

SAP's e-Commerce Starter Pack enables an SAP client to become an e-business in a short time using the attractive combination of preconfigured software, automatic installation, and additional services. All components are installed using an automatic installation procedure. This provides technical preconfiguration of all components, preconfiguration of central business processes in SAP BBP, and preparation for integrating existing component systems, such as SAP R/3, Business Information Warehouse (BW), and Advanced Planner and Optimizer (APO).


Figure 7. mySAP.com Workplace



The following components are part of the SAP e-Commerce Starter Pack:

• Business-to-Business Procurement (BBP)—A development system for SAP BBP is installed for the complete purchasing process for nonproduction goods and services
• Requisite catalog integration—Integration with the electronic catalog from Requisite as an internal catalog, and with an external (hosted) catalog
• mySAP.com Workplace—Browser-based enterprise portal that delivers SAP solutions in the form of user roles
• SAP Online Store—Comprehensive Internet-based sales channel that is included in the SAP R/3 Core; ideal for both business-to-consumer and business-to-business selling

SAP—An Internet Solutions Leader

The Internet is redefining the world of business. SAP has emerged as an Internet solutions leader by delivering mature e-business products under the mySAP.com umbrella. SAP has raised the bar of functionality with enhancements in the range of components enabling customer relationship management, e-commerce, supply chain management, human capital management, business intelligence, and strategic enterprise management.

E-commerce with mySAP.com includes enhancements for buying (SAP Business-to-Business Procurement) and selling (SAP Internet Sales). SAP BBP includes direct links to mySAP.com Marketplace and other Internet marketplaces through the XML-based Business Connector, enabling one-step business transactions between buyers and sellers.

Anand Vridhagiri ([email protected]), a member of the Dell Enterprise Systems Group, Global Alliance team, has worked in high-performance computer design, performance analysis and optimization, database systems, and enterprise resource planning for more than eight years. He has been actively involved with the Transaction Processing Performance Council (TPC), the Computer Measurement Group (CMG), and IEEE, and is a speaker at SAP TechEd conferences. Anand has an M.S. degree in Electrical Engineering from New Mexico State University and is a Microsoft Certified Systems Engineer (MCSE).


DELL AND SAP—PARTNERS IN B2B INTEGRATION

Dell and SAP continue to significantly enhance SAP solutions. Dell now offers mySAP.com solutions based on Windows 2000, Linux, and Intel IA-64 platforms. The Dell-SAP Competence Center is located at the new SAP Partner Port in Walldorf, Germany. In addition, Dell operates an SAP Center of Expertise (COE) in Austin, Texas. These centers are used to actively promote and improve Dell-SAP product performance, quality, service, and support through customer sizing, performance engineering, training, and proof-of-concept testing.

Dell's track record of SAP certifications on Windows NT includes three generations of servers. The most recent are the PowerEdge 6400/6450, PowerEdge 4400, and PowerEdge 2450. Visit www.dell.com/sap for more details on the Dell-SAP alliance.

Figure 8. mySAP.com Workplace

[Diagram: a Web browser connects through middleware (Web server, Internet Transaction Server/SAP GUI for HTML, and Workplace software) to the Workplace server (Basis 4.6 with user management, role administration, and RFC administration), which links to R/3, Advanced Planner and Optimizer, Business Information Warehouse, and new servers; components can be installed all on one server, one separate server per component, or several separate servers per component.]

Accelerate Your Web Server with Dell PowerApp.cache

Is your Internet connection getting bogged down? Are your end users complaining about the World Wide Wait? Are you adding new Internet or intranet applications that require additional bandwidth and Web servers? These problems may be solved through a front-end Web caching device. This article provides performance testing results on Dell's PowerApp.cache Web caching device, which speeds up client requests by off-loading repetitive tasks from front-end Web servers.

By Joe Huang

INTERNET ENVIRONMENT

With Internet users clamoring for faster Web site performance, the benefits of caching and content delivery are becoming more attractive to service providers and site builders. Caching technology speeds up site performance, frees up bandwidth, and lowers connection costs by storing objects that are frequently requested in a cache at the edge of the network. This reduces network traffic and off-loads Web server workloads.

The Internet Service Provider (ISP) marketplace created the need for caching technology because of the ever-increasing rise in bandwidth costs and end-user demands for improved performance. Two typical situations for caching include:

• Accelerating ISP content delivery while increasing access and lowering bandwidth cost. Placing a caching server at the edge of the network (forward proxy mode) at the point-of-presence level enables this scenario.
• Accelerating content delivery for large Web sites. Placing a caching server in front of the Web/content server (reverse proxy mode) can off-load repetitious workloads.

The studies discussed in this article focus on using the Dell PowerApp.cache server appliance for reverse proxy. The PowerApp.cache server minimizes a Web server's workload by reducing the number of user requests to the intended Web server, thereby providing an additional resource to handle dynamic workloads.

Demystifying Web Server Acceleration

By using Domain Name System (DNS) name resolution, the domain name for the requesting client can be resolved to the caching server's Internet Protocol (IP) address. This enables the caching server to become the proxy server with the instant capability to store static objects. The caching server will then only access the Web server when an object is not present in the caching server.

Figure 1 shows the reverse proxy process using PowerApp.cache during initialization of a client request. With content acceleration, DNS resolves the Web server's domain name to the IP address of a PowerApp.cache server appliance (reverse proxy). Without content acceleration, DNS resolves the origin Web server's DNS name to its IP address.



The steps involved in this process include:

1. A browser on the Web requests an origin Web server Web page. This generates a request to DNS for the numeric IP address of the Web server.
2. Rather than returning the origin Web server's numeric IP address, DNS returns the numeric IP address of the accelerator service on the Internet Cache System (ICS) appliance.
3. The browser requests the Web page using the numeric IP address of the accelerator service.
4. The accelerator service obtains the Web page objects from the origin Web server.
5. The accelerator returns copies of the objects to the browser.
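The accelerator's core loop is check-cache-then-fetch. The following is a minimal sketch of the reverse-proxy caching logic, assuming a hypothetical fetch_from_origin() helper; a real appliance also handles expiry, cacheability headers, and concurrency.

    # Minimal reverse-proxy cache sketch (hypothetical helper names).
    cache = {}  # URL path -> response body for static objects

    def fetch_from_origin(path):
        """Stand-in for an HTTP GET against the origin Web server."""
        return f"<html>content of {path}</html>"

    def handle_request(path):
        """Serve from cache when possible; otherwise fetch and cache."""
        if path in cache:
            return cache[path]            # cache hit: origin is untouched
        body = fetch_from_origin(path)    # cache miss: one origin round trip
        cache[path] = body
        return body

    handle_request("/index.html")  # miss: fetched from the origin
    handle_request("/index.html")  # hit: served from the cache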

PowerApp.cache as a Web Server Accelerator

A Web server accelerator reduces response time to browser requests and frees up the origin Web server's workload. This allows the origin Web server to respond more quickly to requests for less frequently requested dynamic data that is not cached.

PowerApp.cache also can accelerate origin Web servers at remote locations that do not offer broadband connections. The Web server accelerator can be located close to the Internet backbone, delivering high-speed access to browsers for all cached objects. The connection to the origin Web server is then used for transporting only those objects not already in cache.

Simulating an Online Shopping Environment

We recently conducted a performance test of the Dell PowerApp.cache appliance server as a Web server accelerator.

The objective of the test was to simulate an online shopping environment in which many concurrent users frequently browse merchandise. This type of Web site typically consists of 70 percent static HyperText Markup Language (HTML) pages for goods and services and 30 percent dynamic Active Server Pages (ASP) for custom applications.

For this test, page sizes ranged from 1 KB to 500 KB, with an average page size of 22 KB. The Web site contained approximately 1,000 files and 20 MB of content. Since the content was structured in a hierarchy, the frequency of access was such that 10 percent of the pages received 90 percent of the visits.

The performance test included a Web server running Windows 2000 simulating a single Web site connected to a local area network (LAN). A PowerApp.cache appliance served as a Web server accelerator (reverse proxy mode). Multiple virtual clients simulated incoming workloads (see Figure 2). We collected client response times and data points—with and without the caching server—for comparison.

The key findings showed that the PowerApp.cache server performed as follows:

• Decreased the average time to respond to client requests by 44 percent
• Increased the number of Web pages received from the Web server by 76 percent
• Increased the number of instances the Web server responded to client requests by 79 percent

The results of this performance test show that the PowerApp.cache appliance server, by front-ending frequently accessed static information, can improve the overall efficiency and performance of ISP Web hosts. This also diverts a considerable load from the origin Web server, which allows the Web site to handle a higher volume of user requests without expensive upgrades to server hardware or to higher speed network connections.


Figure 1. Reverse Proxy Process Using PowerApp.cache

[Diagram: a Web browser's DNS query for OriginWebServer.com returns 100.1.1.199, the ICS appliance's address; the appliance serves cached objects and fetches misses from the origin Web server at 100.1.1.1.]

Figure 2. Test Environment for Two Server Data Paths: Caching versus No Caching

[Diagram: clients connect through an Ethernet switch to the Web server, the PowerApp.cache appliance, and the DNS server; the caching and no-caching data paths run over 1000Base-TX and 100Base-TX links.]



The Test Environment and Tools

The test environment was constructed using the Microsoft Internet Information Server (IIS) Capacity (MSICAP) program, an internal tool that is part of the Microsoft Web Capacity Analysis Tool (WCAT).

WCAT can monitor the response of a Web server while it directs client loads in a controlled environment. WCAT tests simulate the activity of a Web server and its many client Web browsers communicating in the same network.

A WCAT test consists of four components: server, client, controller, and the network. The test environment involved the following components and operations:

• Dell PowerApp.cache 200 appliance server Version 1.2 with 1 GB RAM, configured as a Web server accelerator, used in a reverse proxy approach
• DNS resolved the Web server address to the PowerApp.cache 200
• 20 Dell OptiPlex 600 MHz PCs, each simulating 100 virtual connections sending various HyperText Transport Protocol (HTTP) get requests to the IIS Web server
• Total number of virtual clients was 2,000
• Workload distribution included 10 percent of the pages receiving 90 percent of the visits (that is, home pages usually receive most of the visits). The workload distribution included the following:
  —Script file that specified the Web pages clients requested from the server during the test and the Web files that made up those pages
  —Distribution file that specified the frequency with which the pages in the script file were requested from the server

The WCAT server running IIS 5.0 provided content and session management, including responding to requests for connections; establishing, managing, and terminating connections; receiving requests for Web content; and processing client requests and sending responses.

The WCAT clients included one or more desktop computers running the WCAT client application. Each client defined and controlled the number of client browsers used in the test, the size and type of pages the client was requesting, the rate at which clients sent the requests, the relative frequency at which specific pages were requested, and the duration of the test.

The WCAT controller was the test console that monitored and controlled the test environment. The WCAT controller application initiated and monitored the WCAT test by using three input files. Once the test was completed, the controller application collected the test results and logged them into an output file.

The Test Methodology

To improve IIS 5.0 performance, IIS can run Web applications in a pooled out-of-process environment. Applications that run in the Web services process (Inetinfo.exe) result in better performance; however, this also increases the risk that a faulty application can cause the Web services to become unstable. Microsoft recommends running Inetinfo.exe in its own process, running mission-critical applications in their own processes, and running remaining applications in a shared, pooled process. Therefore, the tests, by default, ran in a pooled process.

Each test had the following sequence of coordinated steps:

1. Prepare the controller input files
2. Start the WCAT clients
3. Start the WCAT controller
4. Start the IIS 5.0 Web server
5. Begin the warm-up period: clients send requests to the Web server, but no data is captured
6. Begin the experimental period: clients request Web pages from the server
7. Begin the cool-down period: clients stop sending requests to the server
8. Begin the reporting period: clients send the data they collected to the controller
9. Prepare the output files: calculate and summarize the collected data


Figure 3. The WCAT Test Environment

[Diagram: the WCAT controller, clients, and virtual clients connect through a Cisco Catalyst 2849G switch to the DNS server, the PowerApp.cache appliance, and the Dell PowerEdge Web server.]

PowerEdge 2450 Web server—CPU: two 677 MHz Pentium III processors; RAM: 2 GB; Disk: PowerEdge Expandable RAID Controller (PERC) 3/Si RAID-5 controller with 4 SCSI disks; Network: one 1000BaseTX.

WCAT clients—20 Dell OptiPlex GX1 clients; CPU: 600 MHz Pentium III processor; RAM: 256 MB; Disk: one 9 GB disk; Network: one 100BaseTX NIC.



During these tests—about three minutes each—the PowerApp.cache 200 appliance server cached static content. Data was then collected on the controller, and performance counters were collected on the server and analyzed. The client, controller, and server machines were rebooted to maintain the same beginning state. We repeated the test three times to obtain more cache files that were not cached during prior tests. The test results report the average of the three tests.

Test Results Showed Performance Improvements

The test used the PowerApp.cache appliance as a front end to a Dell PowerEdge 2450 server running Windows 2000 and the IIS 5.0 Web server. The PowerApp.cache appliance obtained a 65 percent cache hit ratio, off-loading those static requests from the origin Web server. This off-loading accounted for the performance increases.

We measured the data collected from each client using 2,000 concurrent user connections. The results, shown in Figure 4, include the following:

• HTTP response/sec: Total number of HTTP request and response pairs per second from the client
• Average response time recorded by clients: Response time is the elapsed time per run, in seconds, between the moment a request was sent and the time the response was received
• Data read/sec by client: Amount of Web page content per second received by the virtual clients from the server

The PowerApp.cache appliance produced a gain of 76 percent in the amount of Web page content received bythe virtual clients from the server. Figure 5 shows details ofthe throughput comparison. The following formula was usedto calculate percentage of improvement:

((# obtained using PowerApp.cache – # obtained withoutPowerApp.cache) ÷ # obtained without PowerApp.cache) × 100
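
As a quick check, this formula can be applied to the Figure 4 numbers with a short Python sketch (an illustration, not part of the original WCAT harness); note that the response-time improvement is computed as a reduction relative to the baseline:

def improvement(with_cache, without_cache):
    """Percentage improvement as defined by the formula above."""
    return (with_cache - without_cache) / without_cache * 100

print(round(improvement(12.65, 7.17)))       # data read/sec: ~76 percent gain
print(round(improvement(1042.7, 583.09)))    # HTTP response/sec: ~79 percent gain
print(round((18.98 - 10.62) / 18.98 * 100))  # response time: ~44 percent faster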

Using the PowerApp.cache appliance also led to a 44 percent improvement in the response time of the server to client requests. See Figure 6 for details.

Furthermore, the PowerApp.cache appliance led to a 79 percent improvement in the number of instances in which the server responded to requests from clients, as shown in Figure 7.

Dell PowerApp.cache is a Powerful Caching Solution

The tests showed a significant improvement in Web server and Web access performance. The average response time to client requests decreased by 44 percent, the amount of Web pages received from the Web server increased by 76 percent, and the number of instances the Web server responded to client requests improved by 79 percent.

The Dell PowerApp.cache appliance server is a powerful caching solution that can improve the overall efficiency and performance of ISP Web hosts by front-ending frequently accessed static information. This, in turn, alleviates a considerable load from the Web server, allowing it to service a higher volume of user requests without expensive upgrades to server hardware or to higher speed network connections.

Joe Huang ([email protected]) is a systems engineer in Dell's eCenter of Excellence in Florham Park, NJ. He specializes in performance tuning and solution validation. His other areas of expertise include implementation and design of high-availability e-commerce solutions and hierarchical storage management, including storage area networks. Joe has a B.S. in Computer Science from Polytechnic University and is a Microsoft Certified Systems Engineer+Internet (MCSE+I).


Figure 4. Test Results

Dell PowerEdge 2450 Server    Data Read/Sec    Average Response Time/Sec    HTTP Response/Sec
With PowerApp.cache           12.65 MB         10.62                        1042.70
Without PowerApp.cache        7.17 MB          18.98                        583.09

Figure 5. Server Data Read Per Second, Per Run (bar chart: data read/sec in bytes, scale 0 to 15,000,000, with and without PowerApp.cache)

Figure 6. Server Average Response Time Per Run (bar chart: average response time/sec, scale 0 to 20, with and without PowerApp.cache)

Figure 7. Server Total Response Time Per Run (bar chart: total responses/sec, scale 0 to 1,200, with and without PowerApp.cache)


INTERNET ENVIRONMENT

Structuring Efficient XML: The Organization of Data

The success or failure of a company's Extensible Markup Language (XML) implementation is based upon key factors the company must consider prior to the introduction of XML as its Web coding language. This article explains some of these factors and provides a clear example of how XML helps facilitate a manufacturing process.

By Paul Laster

Like a Web site built in HyperText Markup Language (HTML), any site built in XML must be planned carefully before the first piece of code is loaded onto the servers. Unlike HTML, however, in which coding structures and practices have been standardized for years, XML is not as rigid in how its coding structures are defined. Ultimately, the greatest attribute of XML—customizable content tags—can also be a major headache for developers and users if these tags are confusing or not intuitive.

To create an XML site that works well on the first implementation, but also scales well with the growth of your company, you must carefully consider how the XML data of your company is organized. XML was created to enable the efficient transfer of a company's data across multiple systems and for portability into many different types of outputs. The versatility of this coding methodology dictates that the code structure must make sense in multiple, different environments.

A Workable Language in Different Environments

An example of the necessity for versatility is a manufacturing company that uses XML to organize its production, shipping, and sales data. The verbiage of its XML tags must be well structured so it makes sense at each data level and can be logically transferred from one level to the next. Without this logic, the XML could transfer into a workable Web application at some levels, yet fail at others.

This example can be made even more specific by analyzing the different levels of data for a manufacturing company. Each of these levels might contribute to subtle differences in the XML code that organizes the company's information. On the first level, the company is taking raw materials or components and converting them into finished products. This production process involves various raw materials, as well as how these materials are combined and in what order, to create the products.

The data about these raw materials and products is valuable, and its collection could follow the manufacturing process of a product from its creation on the factory floor to its final destination at the consumer location.

How XML is Used in the Production of a Car

One way to visualize this process is to consider the production of an automobile. When a car is built at an automobile factory, it is not just created from a bulk block of steel by one or two skilled craftsmen. Building a car requires many different components, and each of these components possesses certain data attributes that will ultimately dictate the general makeup and value of the car. By using XML, these attributes can be stored in datasets used to create and display data outputs throughout the life of the data.

This provides an insight into the importance of using well-considered XML. On the shop floor, cars can be labeled using XML in a way that might make perfect sense to the car manufacturers. As the car manufacturing process progresses through its various stages, however, this labeling might be too specialized for those who use this data at a different step in the process.

As a vehicle is manufactured, it is important to ensure that the data being created will make sense throughout the entirety of the process. This means that the XML must be defined in a way that keeps it valid throughout the process.

Defining the Attributes of the Dataset

For this car example, let us assume the existence of an XML dataset <CAR>, which contains certain attributes. Now, at the first stage of our manufacturing process, the dataset <CAR> has data elements that are attributable to this aspect of the manufacturing process. The XML for this dataset might appear as shown in Figure 1.

This model has a data element called <CAR>, which has several subelements. Because the car at the manufacturing level comprises several components—such as tires—from other companies, these data elements can also be divided into smaller elements.

For example, the <tires> tag for the element <CAR> could be composed of special information that not only describes a vehicle's attributes at the manufacturing level, but also pricing information that allows the manufacturer's suggested retail price (MSRP) to be surmised from this data.

Therefore, the attributes of the data element <tires> might look similar to Figure 2.

The <tires> data element could be expanded to include other relevant information that may be added later. Furthermore, because the <tires> element contains another subelement called <cost>, this <cost> element from <tires> could later be combined with the other <cost> elements of <CAR> to tabulate the <MSRP> of <CAR> in a separate Web-based application.
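
To illustrate, the short Python sketch below (an illustration, not from the article) walks a <CAR> dataset and combines its <cost> subelements to fill in <MSRP>; the element names follow Figures 1 and 2, while the sample values are hypothetical:

import xml.etree.ElementTree as ET

car = ET.fromstring(
    "<CAR><model>X100</model><style>sedan</style>"
    "<engine><cost>1200</cost></engine>"
    "<tires><model>123456</model>"
    "<manufacturer>East Coast Tire Co.</manufacturer>"
    "<cost>35</cost></tires>"
    "<MSRP></MSRP></CAR>"
)

# Combine every <cost> subelement of <CAR> to tabulate the <MSRP>.
car.find("MSRP").text = str(sum(float(c.text) for c in car.iter("cost")))
print(ET.tostring(car, encoding="unicode"))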

Conveying Manufacturing Data Helps the Sales Process

After this discussion, it becomes more obvious how this information can be used for the process of making and selling cars. If the initially compiled information about the manufacture of a car can effectively be transferred to the next levels of the process, it can provide a more efficient sales process.

For example, in the manufacturing stage, it is important to know the initial costs of the components needed to create a car; these components dictate the costs required to complete the manufacture of a car. This information is also relevant from a distribution perspective, because it can help determine vehicle pricing and profit margin.

Returning to the dataset <CAR> in Figure 1, it now becomes obvious how the initial stages of this XML design can influence other aspects of the vehicle selling process.

Other parts of this initial XML design also can help the selling process. Since XML is merely a classification of data and not a specific order or display of that data, different parts of the code can be referenced at different points of the selling process. For example, a car distributorship, a midway point between the initial car manufacture and final sale, considers different information important. Because the car distributor is more interested in the make and model of a car, an XML schema could be produced that enables distributors to sort the raw <CAR> datasets and retrieve the data on different <CAR> datasets, sorting on their specific <model> and <style> fields. See Figure 3.
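
A distributor-side sort of raw <CAR> datasets on the <model> and <style> fields might look like the following sketch (hypothetical Python standing in for the schema-driven sorting described above):

import xml.etree.ElementTree as ET

inventory = ET.fromstring(
    "<CARS>"
    "<CAR><model>T200</model><style>coupe</style></CAR>"
    "<CAR><model>T100</model><style>sedan</style></CAR>"
    "</CARS>"
)

# Sort the raw <CAR> datasets on their <model> and <style> fields.
for car in sorted(inventory.findall("CAR"),
                  key=lambda c: (c.findtext("model"), c.findtext("style"))):
    print(car.findtext("model"), car.findtext("style"))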


Figure 1. XML Code for the Dataset <CAR>

<CAR>
  <model></model>
  <style></style>
  <engine></engine>
  <tires></tires>
  <MSRP></MSRP>
</CAR>

Figure 2. Attributes of the <tires> Data Element

<tires>
  <model>123456</model>
  <manufacturer>East Coast Tire Co.</manufacturer>
  <cost>35</cost>
</tires>


The Same Data Serves Different Needs

Price, or the information in the <MSRP> dataset (see Figure 3), again becomes important in the final stages of the car sales process. The XML schemas for car dealerships can be created to allow sorting of this specific data element. Since the data is from the same dataset that appears in the initial stages of the XML development, it again underscores the importance of having well-planned XML data organization.

Every stage of the selling process is interested in different aspects of the data as it pertains to their specific needs and interests. Yet even beyond that, data available at early stages of the process might still be needed at later stages of the process—just for different reasons and different uses. At the manufacturing level, the XML elements might be used as a Web-based tracking mechanism for the manufacture of the cars. At the sales level, this same XML data could be ported into an HTML consumer Web site to provide customers a list of types of cars available at certain dealerships.

Eventually, the storage of all these different data elements in the same dataset does much more than just produce a large amount of data about one car: It tells the entire story of the vehicle—from manufacture to sale. This data becomes very important later, as customer information is added to the dataset.

Organizing Customer Information: Post-Sales Support and More

The data element <CAR> can be expanded to include the purchaser of the car by adding a new data element <BUYER>. This provides a new customer service value to the vehicle's data. Since the data is stored in the same central repository and additions are entered by people in each step of the sales process, the dataset <BUYER>, shown in Figure 3, can be sorted by a customer sales representative from the vehicle company to provide the consumer with different news about the vehicle. Furthermore, the dataset <BUYER> gives the car manufacturer easy access to the customer list if emergency situations, such as vehicle recalls, occur.

Microsoft ASP Applications and XML

Those who implement the code design the XML taxonomy, often in a way that is specific to the needs and wants of their implementations. This often leads to many different variations on XML code as it moves from one hand to the next during the initial development stages. Creating specific guidelines can help to avoid these development issues and prevent such errors in the initial development of an XML-based Web site.

Currently there are no precise XML-based code generators and editors. One approach might be to create a Microsoft Active Server Pages (ASP) application that can port text-based information into XML code, which can then be checked against a schema prior to launch. The addition of a user-friendly interface might also help encourage employees to use the new XML code, thereby creating a more robust information source for the company's Web-based activities.
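
The article proposes an ASP application for this; purely as an illustration of the idea, the Python sketch below (with a hypothetical record layout and tag names) ports delimited text into XML and performs a rudimentary check before the result would be validated against a schema:

import csv, io
import xml.etree.ElementTree as ET

raw = "model|style|engine\nT100|sedan|V6\n"   # text entered by an employee

root = ET.Element("CARS")
for row in csv.DictReader(io.StringIO(raw), delimiter="|"):
    car = ET.SubElement(root, "CAR")
    for tag, value in row.items():
        ET.SubElement(car, tag).text = value

# A production tool would validate against the company schema before launch;
# here we only confirm that the required tags were generated.
assert all(car.find("model") is not None for car in root.findall("CAR"))
print(ET.tostring(root, encoding="unicode"))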

The Importance of Planning Ahead

This article has focused on re-creating a company's Web site as a complete rebuild of an existing infrastructure. Any such undertaking is massive, regardless of how successful an implementation can be for a company's long-term goals.

Security and maintaining data integrity are major considerations in the planning and design of such a company-wide information source. Maintaining data integrity is easy when only a few hands touch the data. As more people use a database or XML file structure, however, it becomes more susceptible to version-control errors and the possible inclusion of bad data or computer viruses. Preventive measures should be established at the beginning of this type of project to avoid security issues at a later time.

Paul Laster ([email protected]) is the XML developer for Dell Enterprise Systems Group (ESG) Communications. He is a two-year veteran of the Dell online development community and is the primary technical contact for the Dell Direct Effect Alliance Program. Paul has a B.S. in English and Business from the University of Texas at Austin.


Figure 3. Possible XML Structure for Dataset <CAR>

<CAR>
  <!-- Data fields used by manufacturing/distribution -->
  <model> </model>
  <style> </style>
  <engine> </engine>
  <tires>
    <model> </model>
    <manufacture> </manufacture>
    <cost> </cost>
  </tires>
  <!-- Data fields used by sales/dealerships -->
  <MSRP> </MSRP>
  <!-- Data field used by customer service representatives -->
  <buyer> </buyer>
</CAR>


INTERNET ENVIRONMENT

Internet Protocol Security Revealed

Internet Protocol Security (IPSec) is a suite of protocols that provide secured communications over the Internet at the network layer. The security services IPSec provides include data source authentication, data integrity, confidentiality, and protection against replay attacks. This article provides a general overview of IPSec and its various components.

By Rich Hernandez

Secured communications over public networks and between private networks have long been an issue of great concern. The continued growth of e-commerce over the Internet, the demand for outsourcing of Web hosting services, and the increase of Application Service Providers (ASPs) have heightened the need for ubiquitous security.

Today most network traffic for both the Internet and corporate intranets is based on TCP/IP. However, the original Internet Protocol (IP) failed to define any structures for security, so application layer implementations, such as Secure Sockets Layer (SSL) and Secure HyperText Transfer Protocol (S-HTTP), have been used to provide data security over the Internet.

SSL creates a secure connection between a client and a server over which any amount of encrypted data can be sent. Web pages that require an SSL connection start their URLs with "https:" instead of the normal "http:". However, these implementations require that both the sending and receiving stations run the required application software or Web browser—and only the data to and from the Web server is secured. Figure 1 shows SSL running above TCP/IP and below high-level applications, such as Lightweight Directory Access Protocol (LDAP) and Internet Message Access Protocol (IMAP).

IPSec Offers Security at the Network Layer

In contrast to SSL and S-HTTP, IPSec is an Internet Engineering Task Force (IETF) standard that provides security at the network layer. This allows for more flexibility during its implementation: IPSec allows for private and secure communications over the public Internet regardless of the application or higher level protocols. Other IPSec characteristics include authenticating both senders and receivers, making data confidential via encryption, assuring data integrity, and working with any IP-based application.

IPSec can protect confidential data—such as human resources data, medical information, payroll records, and any other sensitive information—being transferred within a local network (intranet) or across the Internet by limiting data access to authorized users only. The IPSec suite of protocols does not add any security, however, to specific applications that already use security at higher layers, such as SSL, Pretty Good Privacy (used to protect e-mail), and Secure Electronic Transactions (used in credit card processing).

Defining IPSec Components

The IPSec cryptography-based security technology suite defines the following header extensions to IPv4:

• Encapsulating Security Payload (ESP) header
• Authentication header (AH)

The other major component of the IPSec protocol suite is the Internet Security Association and Key Management Protocol (ISAKMP), which is used to implement the Internet Key Exchange (IKE) protocol. The ISAKMP method securely authenticates and establishes security associations (SAs).

The two basic modes for implementing the header extensions in an IP transaction include:

• Transport mode: Supports client-to-client or client-to-server communications with no intervening security gateways
• Tunnel mode: Supports remote access and site-to-site secured communications

The ESP Header Keeps Your Data Secure

As defined in Request for Comments (RFC) 2406, the ESP header provides data encryption, data origin authentication, anti-replay, and data integrity services for IP packets. ESP operates at the network or transport layer; for example, ESP can be used to secure an FTP session by encrypting all data transmitted during the session.

As shown in Figure 2, ESP provides protection from replay attacks by providing a sequence number within the header. The sequence number, a unique value inserted into the header by the sender, is used to determine whether the packet is a duplicate that should be dropped. When the sequence number reaches a predefined maximum, a new SA is established to restart the sequence number from zero.
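
Conceptually, the receiver-side check works like the Python sketch below (a simplification; real implementations bound the state with a fixed-size sliding window rather than an unbounded set):

class ReplayChecker:
    """Drop duplicate ESP sequence numbers within one SA."""
    def __init__(self):
        self.seen = set()

    def accept(self, seq):
        if seq in self.seen:
            return False          # duplicate packet: drop it
        self.seen.add(seq)
        return True

sa = ReplayChecker()
print(sa.accept(1), sa.accept(2), sa.accept(1))   # True True False
# When the sequence number nears its maximum, a new SA restarts it from zero.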

The ESP header also contains a Security Parameters Index (SPI) field that is used as an index to identify the appropriate SA to use when processing an IPSec packet. The SPI is an arbitrary number established by the destination host using IKE. The SPI is authenticated but not encrypted, because this field needs to be in cleartext (normal text) for the destination host to identify the encryption algorithm and key (the SA) to process an incoming packet. The Initialization Vector (IV), the first eight octets—or bytes—in the protected data field, is used to seed the encryption algorithm.

An ESP SA defines algorithms for encryption and data authentication. Encryption is a mathematical operation that transforms normal text (cleartext) into ciphertext, which appears as a series of random characters. The ciphertext is a function of a key and the cleartext data.

Data Encryption Standard (DES) and 3DES are symmetric encryption algorithms: they use the same secret key to encrypt and decrypt the data. As Figure 3 shows, 3DES uses three different 56-bit keys to produce the ciphertext. It first encrypts the data using key X, then decrypts that result using a different key, key Y, and finally encrypts the result of key Y using key Z.

ESP encrypts the payload using cipher algorithms, such as DES and 3DES, which are based on a 56-bit key and a 168-bit key, respectively. Cipher block chaining (CBC) is used with both DES and 3DES; the amount of data to be encrypted must be a multiple of the block size of the cipher, so data is padded to achieve this result. CBC utilizes an IV to ensure that identical blocks of cleartext will not result in the same ciphertext.
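
The following sketch shows 3DES in CBC mode with an IV and block padding, using the third-party Python "cryptography" package (my choice of library, not something the article specifies, and TripleDES support varies by library version; ESP also defines its own padding scheme, so PKCS7 here only illustrates block alignment):

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(24)   # three 8-byte DES keys (56 key bits each plus parity)
iv = os.urandom(8)     # seeds CBC so identical cleartext blocks differ

padder = padding.PKCS7(64).padder()          # pad to the 8-byte block size
padded = padder.update(b"cleartext payload") + padder.finalize()

encryptor = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()
print(ciphertext.hex())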

Figure 1. Secure Sockets Layer Protocol (SSL sits between TCP/IP and higher level protocols such as HTTP, LDAP, and IMAP on both sides of the network)

Figure 2. IPSec ESP Packet (the IP header is followed by the ESP header, which carries the SPI, sequence number, and Initialization Vector, then by the encrypted TCP header, data, and ESP trailer; the ESP header and encrypted payload are authenticated)

Figure 3. 3DES Process (the data is DES-encrypted with 56-bit key X, decrypted with 56-bit key Y, and encrypted again with 56-bit key Z, with an IV seeding the process)

The ESP trailer contains the necessary padding, the length of the pad, the next protocol after ESP, and the authentication data or digest. The authentication data field, which is used to validate the authenticity of the packet, is the digest generated from a keyed hash function used for data integrity. ESP uses Hashed Message Authentication Code-Message Digest 5 (HMAC-MD5) or HMAC-Secure Hash Algorithm (SHA) as authenticator algorithms. The output of these operations is a message digest truncated to the high-order 96 bits used to verify the authentication of the data.

In general, the process for the received ESP packet is to verify the sequence number, verify the integrity of the data (authenticate), and decrypt the data.

AH Protocol—Integrity without Encryption

An AH packet, shown in Figure 4, contains an AH between the IP and TCP headers. The AH, defined in RFC 2402, provides the security services of data integrity, data source authentication, and protection against replay attacks.

AH provides data and address integrity without encryption. It differs from ESP in that, in addition to providing no encryption, no trailer exists at the end of the packet. The message digest, created by the authentication algorithms, uniquely identifies each IP packet given a secret key. The AH, shown in Figure 5, contains the following fields:

• SPI—helps locate the SA to process the packet
• Sequence number—resists replay attacks by rejecting duplicate packets
• Authentication digest—validates the source of the data

HMACs use a single key for generating and verifying the authentication information, as shown in Figure 6. Hash functions take a variable-sized message (data) as input, compress it, and produce a fixed-size digest. The resulting digest is attached to the header. Verification at the destination entails hashing the shared secret key with the data and comparing the result with the digest in the header.
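
Python's standard hmac module can illustrate this pattern, including the truncation to the high-order 96 bits used by HMAC-SHA-96 and HMAC-MD5-96 (the key and data below are, of course, placeholders):

import hashlib, hmac

key = b"shared-secret-key"                 # distributed via IKE in practice
data = b"IP datagram contents"

digest = hmac.new(key, data, hashlib.sha1).digest()[:12]   # keep 96 bits

# Verification at the destination: recompute with the shared key and compare.
check = hmac.new(key, data, hashlib.sha1).digest()[:12]
print(hmac.compare_digest(digest, check))  # True when the data is untampered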

AH uses HMAC over the IP datagram to create the authentication data. The receiver verifies the integrity of the data after packet reassembly. Since AH authenticates the outer IP header, it must be aware of IP header fields that change when processed by routers. These fields are omitted from the authentication calculations.

When both AH and ESP are protecting the same data, the AH is always applied after the ESP header has been added. The AH is simpler than the ESP header because it does not provide for confidentiality. The AH has no trailer because there is no need for padding and a pad-length indicator. There also is no need for an IV.

The authentication data field is a variable-length field that contains the result of the integrity-checking function. AH implements two mandatory authenticators, HMAC-SHA-96 and HMAC-MD5-96, which are Message Authentication Code (MAC) functions whose outputs are truncated to 96 bits.

AH, like ESP, can be used in either transport mode or tunnel mode. The difference is in the data being protected: In transport mode the upper-layer protocol is protected, while in tunnel mode the entire IP datagram is protected. The SHA hashing algorithm adds 20 bytes to each packet, while MD5 adds 16 bytes. SHA is considered a slightly more secure but slower executing algorithm.

Figure 4. AH Packet (the AH sits between the IP header and the TCP header; the entire packet is authenticated)

Figure 5. Authentication Header (fields: next header, payload length, reserved, Security Parameters Index (SPI), sequence number, and authentication data)

Figure 6. Hashed Message Authentication Code Process (a hash function compresses the data together with the key to produce a fixed-size digest)

Authenticating IPSec Computers through IKE

IPSec computers must verify each other (authenticate) before they can begin secured communications. IKE performs authentication during the initial negotiation phase. Authentication can be implemented via one of the following methods:

Pre-shared key: The IPSec computers exchange a previously shared password to verify each other. This secret string must be communicated using an out-of-band mechanism, such as through telephone, face-to-face, or other direct communications. This exchange should not be done over the same unsecured channels that IPSec is trying to secure.

Certificate Authority: The IPSec computers use a common Certificate Authority (CA) to verify the identity of each other. Each computer registers with the CA, such as VeriSign or Entrust. The CA then authenticates each computer before they can engage in trusted communications.

Kerberos Version 5 Protocol: This method verifies trust in Windows 2000.

The database must have an SA before IPSec can secure an IP packet; the SA can be created manually or dynamically. IKE creates SAs dynamically. The purpose of IKE is to negotiate an IPSec SA and populate the SA database.

IKE: A Hybrid Protocol

As described in RFC 2409, IPSec uses IKE to create shared security parameters and authenticated keys—SAs—between the IPSec cryptographic end points. Two IPSec peers use IKE to establish a shared and authenticated key.

ISAKMP defines the operation and language constructs used by IKE. OAKLEY is the key determination protocol used by IKE to authenticate the Diffie-Hellman exponent exchange. Therefore, IKE is a hybrid protocol that defines a way of deriving authenticated keying material and negotiating shared security policy based on ISAKMP and OAKLEY.

ISAKMP defines packet formats, the retransmission timer, and the programming language. IKE uses the ISAKMP common framework and procedures for the creation and management of SAs. A Domain of Interpretation (DOI)—covered in RFC 2407—is used to document how IKE negotiates IPSec SAs.

IKE automates key exchange to deliver keys safely based on the Diffie-Hellman key-exchange protocol. Diffie-Hellman is a one-way function used to securely exchange a shared secret over an untrusted communications channel. It is based on the exponentiation of prime numbers: A public value is exchanged and exponentiated to create the secret value.

Diffie-Hellman assumes that the hosts, via passwords (pre-shared keys) or digital certificates, know the identities of the two IPSec end points. Once the hosts authenticate the end points, IKE exchanges information to establish a shared key. The hosts share configuration information by exchanging common SPIs that specify the encryption algorithm, authentication algorithm, and security keys to use when establishing an SA.
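
A toy Diffie-Hellman exchange in Python shows the principle (the numbers are deliberately tiny; real IKE groups use large, standardized primes):

p, g = 23, 5                # public prime modulus and generator

a, b = 6, 15                # each host's private exponent, never transmitted
A = pow(g, a, p)            # public value Host A sends to Host B
B = pow(g, b, p)            # public value Host B sends to Host A

# Each side exponentiates the other's public value with its own secret.
print(pow(B, a, p) == pow(A, b, p))   # True: both derive the same shared secret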

IKE SAs (different from IPSec SAs) define algorithms to encrypt IKE traffic and define how to authenticate the IPSec end points. IKE uses the two phases of ISAKMP. The first phase establishes the IKE SA, and the second uses that SA to negotiate SAs for IPSec:

• Phase I: Create an IKE SA
  —Agree to protection suite
  —Exchange shared key
  —Authenticate the IKE SA
• Phase II: Create IPSec SA

Unlike the IPSec SA, the IKE SA is bidirectional: IKE is a request-response protocol, where one party is the initiator and the other is a responder. Once the IKE SA is established, it may be used to protect both inbound and outbound traffic. The IKE SA has various parameters that are negotiated between two IPSec end points. These parameters are referred to as the protection suites, and they include the following:

• Encryption algorithm
• Hash algorithm
• Authentication method
• Diffie-Hellman group

The protection suites are negotiated between the peers as part of the first messages they exchange. Each side maintains some secret information that, once authenticated, is used to protect IKE messages and derive keys for other security services.

The keys used for the IPSec SA are derived from the IKE shared key. In addition to the IKE key, other parameters, such as the Diffie-Hellman group number, nonces (random numbers), and security association parameters, are used to ensure perfect forward secrecy.

IPSec Security Associations Specify IPSec Protocols

The IPSec SA is the contract between two communicating entities; it specifies the IPSec protocols used for securing the packets. The SA provides an association among the security services, active key, data to be protected, and the end points.

As Figure 7 shows, the SA is a one-way, simplex operation. SAs are established for processing outbound and inbound packets.

The SAs are protocol specific; an SA exists for each protocol. If both AH and ESP are being used, then a separate SA is established for each protocol. The SA parameters are stored in the Security Association Database (SADB). SAs can be created manually or dynamically. A manual SA has no lifetime limits and must be deleted manually, or it will continue to be active indefinitely. Dynamic SAs have a lifetime that is negotiated by the key management protocol. This is an important factor, because to ensure security, keys must be refreshed regularly.

The creation and deletion of SAs are the two most important tasks handled by the SA management protocol, such as IKE. The SA management protocol requires an interface between the user applications and the operating system kernel to manage the SADB. The SA creation is a two-step process: The SA management protocol negotiates the parameters of the SA, then it updates the SADB with the SA.

The SAs can be deleted manually or by using IKE. IPSec does not refresh keys; the existing SA must be deleted and a new one created. Typically, a new SA is negotiated before the existing SA expires. Typical reasons for an SA to be deleted are that the lifetime has expired, the keys are compromised, or the number of bytes encrypted/decrypted or authenticated using this SA has exceeded a threshold set by the security policy.

The SPI is used to determine a pointer to the specific SA in the database that will be used when applying the security policy. The destination selects the SA to process a packet via the SPI in the IPSec header. On the other hand, the source identifies the SA to use to secure the packet via the security policy selectors. Once the SA is created and added to the SADB, secure packets start flowing.

IPSec Security Policies Define Actions

An IPSec security policy defines the action to be applied to a packet. The policy is stored in a database called the Security Policy Database (SPD), which is indexed by selectors. Security policies allow for different levels of security to be applied to traffic on the same network. A security gateway may require that all traffic between a protected subnet and another subnet be encrypted with DES and authenticated with HMAC-MD5, while 3DES and HMAC-SHA would be applied to Telnet traffic to a mail server from a remote subnet.

Policy management is required to add, delete, and modify policy. The SPD is stored in the operating system kernel. The SPD defines the traffic to be protected, how to protect it, and who shares in the protection. It must be consulted for each packet entering or leaving the IP stack. Once consulted, the SPD defines three actions based upon a traffic match, as identified by the following selectors:

• Discard: Do not let the packet in or out.
• Bypass: Do not apply or expect security.
• Apply: Apply security to outbound traffic and expect security for inbound traffic by determining a pointer to the SA in the SADB.

Selectors are extracted from the network and transport layer headers to map IP traffic to IPSec policy. IPSec policy selectors include source address, destination address, machine name, transport protocol, and source and destination TCP port numbers.
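
Schematically, an SPD lookup keyed on these selectors might look like the Python sketch below (illustrative field names and addresses, not a real IPSec stack):

SPD = {
    # (source, destination, protocol, destination port) -> action
    ("10.1.0.0/16", "10.2.0.0/16", "tcp", 23): "APPLY",    # Telnet: protect
    ("10.1.0.0/16", "10.2.0.0/16", "tcp", 80): "BYPASS",   # no security
}

def consult_spd(selectors):
    """Return DISCARD, BYPASS, or APPLY for the packet's selectors."""
    return SPD.get(selectors, "DISCARD")     # unmatched traffic is dropped

print(consult_spd(("10.1.0.0/16", "10.2.0.0/16", "tcp", 23)))   # APPLY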

The Modes of IPSec

The AH and ESP headers can be applied to an IP packet through modes defined by the IPSec architecture. There are several possible combinations of modes and protocols:

• AH transport mode
• AH tunnel mode
• ESP transport mode
• ESP tunnel mode

Transport mode applies AH and ESP to the transport layer segment of an IP packet. The IP data payload is the only protected portion of the packet. The IP header with the destination and source address information is not protected. When applying both AH and ESP, AH is applied last to calculate data integrity over more data, as shown in Figure 8.

On the other hand, tunnel mode provides for authentication and encryption of the entire IP packet. Security gateways use tunnel mode to provide security services on behalf of other networked entities. The communication end point is protected inside the inner IP packet, while the crypto end point is contained in the outer IP packet. A security gateway decapsulates the inner IP packet upon completion of IPSec processing, then forwards the packet to its final destination. The end point IP address is protected in the tunnel.

Figure 7. Unidirectional IPSec SA (Host A and Host B each maintain separate outbound and inbound SAs)

Figure 8. Transport Mode (the IP header is followed by the AH header, the ESP header, and the TCP payload; the payload is encrypted, the packet is authenticated from the AH header onward, and the source and destination IP addresses remain unprotected)

Figure 9. Tunnel Mode (Host 1's original packet, consisting of the inner IP header, TCP data payload, and ESP trailer, is encrypted and encapsulated behind an ESP header and an outer IP header addressed from security gateway SG1 to security gateway SG2, forming a tunnel across the Internet or intranet; the packet is authenticated, encrypted, and encapsulated at SG1 and authenticated, decrypted, and de-encapsulated at SG2)

Tunnel mode adds an extra IP header (the outer header); transport mode does not. IPSec defines tunnel mode for both AH and ESP. As shown in Figure 9, Host 1, wanting to communicate with Host 2, can use tunnel mode to allow gateways SG1 and SG2 to provide the IPSec services to secure the communications over the public network. The inner IP header is the original packet from Host 1 destined for Host 2; the outer header encapsulates the original packet into another IP header.

IPSec allows for multiple layers of security and multiple tunnels in which the inner header is completely encompassed by the outer header. However, one tunnel cannot overlap another.

Figure 10 depicts how Host 1 can communicate with Host 2 over different segments in a routed network. Two SAs are defined: an AH SA that authenticates Host 1 to gateway SG2, and an ESP VPN between SG1 and SG2.

Figure 11 shows the packet constructed and seen by SG2: The source and destination IP addresses for the packet are shown as received by the SG2 gateway. SG1, SG2, and finally Host 2 process the packet by decapsulating the headers from left to right until the original packet is left for processing by Host 2.

Figure 10. Multiple Tunnels (Host 1 at 125.1.1.1 reaches Host 2 at 146.3.3.2 through gateways SG1 at 140.2.2.1 and SG2 at 140.3.2.2; one tunnel carries an ESP SA between SG1 and SG2, a second carries an AH SA from Host 1 to SG2, and traffic beyond SG2 travels as cleartext)

Figure 11. Packet as Received by SG2 (the original packet from source 125.1.1.1 to destination 146.3.3.2 is wrapped by Host 1 with an AH header addressed to 140.3.2.2, then encapsulated by SG1 behind an ESP header and an outer IP header with source 140.2.2.1 and destination 140.3.2.2)


Outbound Processing

The IP layer consults the SPD to determine the security services to use. Selectors extracted from the headers are used to point to a policy action. If the SPD action is to apply security, a pointer to the SA in the SADB is returned, or IKE is invoked if the SA does not exist in the database. The AH and ESP headers are added as specified by the SA. The packet is forwarded as defined in the gateway or router.

Inbound Processing

Upon receipt of a packet, the security layer checks the policy database for these actions: discard, bypass, and apply. If the action required is apply and the SA does not exist, then the packet is dropped; however, if the SA exists in the database, then the packet is passed up to the next layer for processing. If the packet contains IPSec headers, then the IPSec stack processes the packets. IPSec extracts the SPI, source address, and destination address. The SADB is indexed based on the following parameters to select the specific SA to apply: SPI, destination address, and protocol (AH or ESP).

More Information

Atkinson, R. and Kent, S. Security Architecture for the Internet Protocol. RFC 2401. November 1998.

Atkinson, R. and Kent, S. IP Encapsulating Security Payload (ESP). RFC 2406. November 1998.

Atkinson, R. and Kent, S. IP Authentication Header. RFC 2402. November 1998.

Harkins, D. and Carrel, D. The Internet Key Exchange (IKE). RFC 2409. November 1998.

Pereira, R. The ESP CBC-Mode Cipher Algorithms. RFC 2451. November 1998.

Krawczyk, H. HMAC: Keyed-Hashing for Message Authentication. RFC 2104. February 1997.

Rich Hernandez ([email protected]) is a senior engineer with the Server Networking and Communications Group within the Departmental and Workgroup Server Division at Dell. He has been in the computer and internetworking industry for 16 years. Rich has a B.S. in Electrical Engineering from the University of Houston and has engaged in postgraduate studies at Colorado Technical University.


INTERNET ENVIRONMENT

Understanding Storage in Today's Data-Driven Dot-Com Economy

Gaining that competitive edge in today's data-driven economy requires properly managing your company's data. Choosing the right storage solution for that data, however, requires carefully choosing the storage architecture that best addresses the relevant business problems. In this article, we look at how the Internet has intensified storage requirements, review different storage architectures, and explore what implementations a few real-life companies have selected to accommodate their own unique storage demands.

By Eric Burgener and Thea Hayden

There can be no doubt that the Internet is rapidly transforming every industry. Companies of all sizes have come to realize that, in this new data-driven economy, information residing within their computer storage resources can be a valuable competitive asset. And Internet infrastructure and e-business demands are fueling explosive data storage requirements.

Industry experts estimate that the demand for data storage capacity is increasing by as much as 50 percent to 100 percent each year. According to Forrester Research, corporations plan to spend about four times more for storage by 2003 than they did in 1999, with the average company expected to have about 150 terabytes of data versus about 15 terabytes today.

Clearly, businesses of all types are grappling with this phenomenal growth and are focused on deploying the right storage solution. For Internet businesses, however, storage is not a one-size-fits-all proposition. Internet businesses are, therefore, tailoring storage solutions to meet their specific demands.

Internet Infrastructure Storage Requirements

The majority of Internet infrastructure customers are looking for a storage solution that delivers high availability and reliability, scalability, performance, disaster recovery, and cost-effective investment protection. These requirements become absolutely mission critical in the Internet environment.

High Availability and Reliability Prevent Loss

The Internet is never closed. In other words, it does not shut down for holidays, inventory, or re-stocking. The Internet economy forces dot-coms and other Internet-related businesses to run at an unforgiving pace. Any misstep, site outage, or other hiccup in availability can translate into a loss of market share, revenue, and market valuation. Because of these consequences, high availability and reliability take on critical importance in the always-up 24×7 Internet environment.

The Market is Now Less Forgiving

Within the last year, site outages at eBay® and Charles Schwab® have garnered substantial market attention and significant amounts of press coverage.


An IT executive at Charles Schwab described the outages, stating that during 1999 Charles Schwab had eight outages totaling 6.1 hours, which is one-tenth of one percent of all the trading hours. Out of 252 trading days, the company had 99.89 percent availability.

Obviously, those numbers are leaps and bounds ahead of availability percentages a few years ago, but apparently not good enough for the high-availability demands of today's market. eBay has scheduled outages Monday mornings from 1 A.M. to 4 A.M., but even that small window is shrinking, forcing the company to maintain a warm backup of its site, which it will soon use for scheduled maintenance and other service interruptions.

To ensure the highest level of availability and reliability, Internet customers are looking for storage solutions with fully redundant components, no single point of failure, failover capabilities, and integrated fault tolerance. They are looking for a reliable system that provides data protection and data integrity.

A highly reliable storage solution will boost system availability by identifying failures before the data is corrupted and allowing failed components to be repaired without system downtime, keeping all of this transparent to users. Clearly, creating a highly available and reliable storage solution is required for Internet-related businesses to remain competitive.

Constant Change Demands Scalable Solutions

"The only constant is change." This statement has become the tacit mantra of nearly every IT manager struggling to keep up with the unprecedented pace of change in the digital economy. This constant change means that e-business planning and Internet infrastructures become an ongoing process. Storage solutions must be able to accommodate an ever-changing work in progress.

Storage solutions should have the flexibility and scalability to meet the changing dynamics of the Internet environment, including industry consolidations and acquisitions, expansions, and new services. To brace for the inevitable, Internet businesses are choosing solutions that support non-disruptive upgrades and can scale gracefully without user interruption.

High-Performance Technology Speeds Backup

Shrinking backup windows and the requirement for data on demand mean that storage solutions need to be based on high-performance technology. Today's Fibre Channel storage equipment can offer significant performance improvement over SCSI storage. In fact, storage devices based on Fibre Channel can speed certain application performance by up to 70 percent and offer a significant increase in backup performance. Storage consolidation solutions also significantly increase backup performance.

Disaster Recovery Protects Against Loss

Today's data-driven companies rely on real-time data that is frequently changing and frequently accessed to run their businesses. As discussed above, anything that disrupts the availability or integrity of this data can interfere with the success of business operations.

Permanent loss of parts of this data can have serious negative consequences, perhaps even cause concerns about a company's continued viability. For Internet businesses with terabytes of vital data, the demand for appropriate disaster recovery has grown with the exponential growth of storage.

In late 1999, a Pepperdine University professor1 released a cost-of-data-loss study, which estimated that the average incident of lost, stolen, or damaged data costs more than $2,500 per affected PC. These hard costs plus the unpredictable costs of angry customers, lost opportunity, and loss of market share can have an extraordinary impact on Internet-related businesses.

1 Pepperdine University economics professor and labor economist David Smith. November 1, 1999.

Because disasters are difficult to predict, a company must plan to recover quickly with usable data, as well as eliminate or minimize the potential for data loss. Upon recovery, data must be fully up to date, complete, and free from errors. In addition, operations must continue to run without a break in continuity.

Preserving the Existing Investment

With all of the changes required to keep a data center up to date, IT budgets are being pushed to their maximum. In most cases, throwing out existing equipment and starting from scratch is simply not an option. And budget-constrained Internet start-ups, along with large Internet-enabled enterprises, need cost-effective solutions that allow them to utilize existing storage equipment while also implementing new technology.

Choosing the Right Storage Architecture

Three basic storage architectures (see Figure 1) are available for customers to address their storage requirements:

Direct-attached storage (DAS): In direct-attached storage, one or more storage subsystems are directly attached to a single server, and all access to that storage must flow through that server.


Figure 1. Basic Storage Architectures (direct-attached storage: storage attached behind each server; network-attached storage: an NAS filer on the LAN shared by clients and servers; storage area network: a dedicated network between the servers and the storage subsystems)

Network-attached storage (NAS): In network-attached storage, a storage subsystem is attached to the local area network (LAN) and can be simultaneously shared among several servers.

Storage area network (SAN): In a storage area network, a dedicated network (typically Fibre Channel-based) sits between the servers and the storage subsystems, providing any-to-any connectivity between servers and storage.

All three storage architectures can be configured to support high-speed access to highly available, highly reliable data storage. They differ in deployment and management complexity, ability to scale, and how well they support the online reconfiguration necessary to support increasingly dynamic Internet environments.


DAS Provides Direct High-Speed Access to Limited Storage

While all three approaches can be used in a complementary manner, each of the architectures addresses certain problems better than the others. For example, when direct high-speed access to storage from a single server is necessary, the DAS model meets this requirement and offers simplified management. Many storage subsystems are dual ported, so another viable DAS option is clustering, in which two servers are directly attached to a single storage subsystem. As long as storage requirements do not outgrow the performance and available capacity of the single server, this model remains an effective one.

DAS is the most widely used technology implementation today; but, behind a single server, the DAS model provides limited storage scalability, and it does not scale well in terms of management or multiserver performance.

Furthermore, as additional servers with their own dedicated storage are added, handling traditional storage administration tasks, such as capacity management or backup/restore, can become difficult. When other servers must gain high-speed access to a particular storage subsystem, the server to which the storage is attached can potentially become a performance bottleneck.

Because a company's Internet infrastructure must be able to accommodate extremely high growth without disruption, both in terms of storage capacity and the number of servers that need high-speed access to a particular dataset, DAS may not be the right long-term growth architecture for certain Internet infrastructure build-outs. Many Internet companies need a highly scalable storage architecture that can share storage across multiple servers while providing high-performance data access. Both NAS and SAN serve this need.

NAS Provides Heterogeneous File Sharing and Ease of Use

In an NAS setup, storage is housed in an appliance called a filer, which directly attaches to an Ethernet network. All servers attached to the Ethernet network have equal access to the storage housed in the filer. The filer is optimized for high-speed data access at the file-system level and possesses its own dedicated processors, memory, and a special-purpose operating system designed specifically to speed file-system access in network environments.

Filers also offer the unique advantage of supporting heterogeneous file sharing between Wintel and UNIX systems through industry-standard protocols such as Network File System (NFS) and Common Internet File System (CIFS). Filers are very easy to deploy, installing literally in a matter of minutes, and usually offer a cost advantage over SAN solutions. Filers also leverage the existing network infrastructure—the LAN—and can be added without shutting down any servers.

A potential limitation in some NAS offerings is limited storage capacity and limited scalability. High-end NAS offerings today max out at 1 TB to 2 TB. The need to scale beyond that capacity leads to multiple filers that are managed in a distributed manner. As NAS environments are scaled to multi-terabyte configurations, this distributed management requirement subjects NAS to many of the same issues as DAS from a management point of view.

SANs Provide High-Speed Direct Access to Virtually Unlimited Storage

SANs move the storage subsystems onto their own dedicated network, typically using Fibre Channel to provide direct attachments from multiple servers to multiple storage subsystems. Using storage management software, storage partitions can be defined as either exclusively owned (by a single server) or shared across multiple servers. This consolidation enables the storage to be managed as a single pool, representing a potentially significant reduction in management complexity for large environments relative to DAS and NAS. Storage management software allows this pool to be increased, decreased, or reconfigured more flexibly than the other storage architectures, making SAN a good fit for Web infrastructure build-out projects that exceed the current capacities of the NAS architecture.

SANs can also offer a better investment protection strategy than NAS, since existing storage subsystems can be redeployed in SAN environments. To increase connectivity, servers and storage are often connected through a switched fabric that supports dedicated, high-speed bandwidth between each server and each storage subsystem. A switched Fibre Channel fabric provides millions of addresses to which storage subsystems can be attached, and storage at any of these addresses appears as if it were directly attached to all servers.

With a SAN, there are multiple access paths to any given data store through any of the SAN-attached servers. This setup provides high availability and also ensures that as the storage pool grows, a single server in the SAN will not become a bottleneck. In particular, network clients may gain access to a particular static data store in the SAN through any number of SAN-attached servers, depending on storage management software capabilities. This contrasts distinctly with the NAS model, where a single server, or filer, handles all access to the back-end storage.

All data access between servers and storage occurs across the SAN so that LAN traffic to the servers is not impacted at all.


all. In fact, customers may even increase their LAN bandwidththrough a SAN deployment by moving, to the SAN, adminis-trative tasks that used to take up LAN bandwidth. All datatransfers in the SAN environment are block-oriented today,although future advancements will allow heterogeneous filesharing to eventually become an option in SAN environments.

Real-Life Internet Customer Storage SolutionsStorage is not a one-size-fits-all proposition. The followingsampling of Internet-enabled businesses shows that eachbusiness may use a different storage architecture dependingon its business needs. All of the companies discussed belowchose Dell as the technology partner to help them imple-ment a solution that met their business priorities.

Microstrategy. The priority is high availability and reliabil-ity. Microstrategy® is a leading worldwide provider of enterprisedecision-support software and related consulting, training, and support services. Microstrategy’s products extend decisionsupport to customers by using a broad range of push-and-pulltechnologies, including the Internet, e-mail, telephones,pagers, and other wireless communications devices.

Microstrategy development efforts as well as general busi-ness practices require consistently reliable performance fromits system’s infrastructure. The company needed to standard-ize on a single vendor for servers and storage, and was con-cerned about support costs. Because Microstrategy’s Internetinfrastructure supports its actual product delivery, availabilityand reliability were its top priorities to consistently

provide service to its customers and ensure customer satis-faction. Dell helped Microstrategy implement a reliable,high-availability, high-performing solution that provided abetter value for its business.

Microstrategy is using a direct-attached architecture withclustered servers to meet its demands for direct high-speedaccess to storage and extreme high availability and reliabil-ity. See Figure 2.

HomeRuns.com. The priorities are performance, highavailability, and reliability. HomeRuns.comSM is the nation’smost active home grocery delivery market and one of theleading online grocery retailers. In February 2000, the newmanagement team adopted an aggressive growth strategybased on new investments and operational goals. Part of thisgrowth strategy was the building of a new office that wouldhouse a state-of-the-art data center as the infrastructure forHomeRuns.com’s rapidly growing e-commerce site.

In the world of e-commerce, with customers just oneclick away from the competition, performance is absolutelyessential. HomeRuns.com needed to deliver the fastest, mostresponsive Web site it could in order to continue the growthof its e-commerce business. HomeRuns.com is using a high-speed SAN solution that meets the needs of its dynamic e-commerce business.

Storage Is Definitely Not One Size Fits AllThe Internet evolution has made storage more importantthan ever before. Protecting and managing competitiveinformation assets is imperative in today’s data-driven econ-omy. Choosing the right storage system could prove to beone of the most important decisions an IT department evermakes. Because demands on storage have intensified andstorage is no longer a one-size-fits-all proposition, today’sbusinesses should thoroughly evaluate the demands of theirenvironments in order to implement a storage solution thatworks for their own business.�

Eric Burgener ([email protected]) is a director of productmarketing in Dell’s Storage Product Group. His responsibilitiesinclude defining and managing the launch of a line of storagedomain management products. Eric has over 14 years of experi-ence in the computer industry, in positions ranging from sales andproduct management to product marketing and customer support.Eric has an undergraduate degree from Bowdoin College and anM.B.A. from the University of California at Berkeley.

Thea Hayden ([email protected]) is a marketing advisor for Dell's Storage Product Group. Her responsibilities include supporting the launch of new Dell storage products, including storage domain management. Thea has a B.A. in English from the University of Utah.


Figure 2. Microstrategy's Direct-Attached High-Availability Architecture (PowerEdge 6300 servers on the LAN/WAN attached to PowerVault 650F and 630F external Fibre Channel storage)



OS ENVIRONMENT

Real-Time Data Protection with NSI Software Double-Take

Ready access to mission-critical data and applications is absolutely necessary in today's rapidly expanding, technology-driven business environment. The NSI Software Double-Take file replication and failover product offers an effective solution to reduce downtime and keep businesses operational.

By Andrew Thibault

Power outages, hardware malfunction, software failure, human error, and natural disasters are the most common causes of system downtime. The cost of downtime can range from lost business and productivity to cessation of operations, depending on the severity and duration of the failure. NSI® Software Double-Take® helps minimize the risk to business through real-time local and remote replication of critical data. Double-Take also provides timely server and application failover.

The core of NSI Software technology is its patented real-time file delta replication. After performing an initial mirror of selected drives, directories, and/or files, Double-Take subsequently replicates, in real time, only the byte-level changes to those datasets, instead of copying whole files across the network every time they change. For example, once a mirror of a large database file has completed, Double-Take then replicates only the changes within the file as they occur. This significantly reduces the amount of time and bandwidth necessary to transmit the data.
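To illustrate the general idea (this is a hedged sketch, not NSI's patented implementation, which intercepts writes as they happen rather than rescanning files), the following Python fragment compares two versions of a file block by block and collects only the changed byte ranges for transmission. The block_delta and apply_delta names and the 4 KB block size are illustrative assumptions.

    BLOCK_SIZE = 4096  # illustrative granularity; a real replicator captures writes in real time

    def block_delta(old_path, new_path):
        """Collect (offset, data) pairs for each block that differs between two versions."""
        deltas = []
        with open(old_path, "rb") as old, open(new_path, "rb") as new:
            offset = 0
            while True:
                a = old.read(BLOCK_SIZE)
                b = new.read(BLOCK_SIZE)
                if not a and not b:
                    break
                if a != b:
                    deltas.append((offset, b))  # ship only this block, not the whole file
                offset += BLOCK_SIZE
        return deltas

    def apply_delta(target_path, deltas):
        """Write the changed blocks into the target copy at their original offsets."""
        with open(target_path, "r+b") as target:
            for offset, data in deltas:
                target.seek(offset)
                target.write(data)
                if len(data) < BLOCK_SIZE:
                    # A short final block marks the new end of file; a file that
                    # shrinks to an exact block boundary would also need a truncate.
                    target.truncate(offset + len(data))

For a multi-gigabyte database file in which only a few pages changed, block_delta returns only those few blocks, which is the time and bandwidth saving described above.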

Double-Take uses standard TCP/IP to transmit data over a local area network (LAN) or a wide area network (WAN), with bandwidth throttling and encryption as options. Its support of multiple operating systems (including Microsoft Windows NT/Windows 2000, Novell® NetWare®, and Sun® Solaris®) and cross-platform replication that can be controlled from a common management console help gear Double-Take toward today's heterogeneous server and storage environments. Simple Network Management Protocol (SNMP) integration and a complete command-line interface allow automated monitoring and administration of Double-Take via enterprise management systems.

The feature-rich functionality of Double-Take and its flexible configuration options can help provide many different types of protection suited for a variety of business requirements.

Local High Availability
Double-Take can be configured to provide local server and application fault tolerance. Critical data is identified on a source server and mirrored to a target server. Replication of subsequent changes can take place on either a real-time or scheduled basis. Both mirroring and replication can be sent over a public or dedicated network link.




One-to-One Monitoring and Failover of a Source Server
In a one-to-one scenario, an active or passive target server is set to monitor the health of a source server, assuming the identity and functions of the source server if a failure occurs (see Figure 1).

Users can configure both the failover time and the failover process itself, which can be set to "automatic" or "manual." During failover, the IP address, computer name, and data share points of the source server are transferred to the target server. Pre-failover and post-failover scripts can be run on the target server to quickly start up applications and/or services in order to resume normal operations with little or no delay to end users.
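The monitoring loop behind such a failover can be pictured with a short sketch. The following Python fragment is a generic illustration, not Double-Take's failover engine; the heartbeat address, the missed-beat threshold, and the post-failover script path are all hypothetical.

    import socket
    import subprocess
    import time

    SOURCE = ("source-server.example.com", 1100)   # hypothetical heartbeat address
    MISSED_BEATS_BEFORE_FAILOVER = 5               # stands in for the configurable failover time
    POST_FAILOVER_SCRIPT = "/etc/failover/start-services.sh"  # hypothetical script

    def heartbeat_ok(addr, timeout=2.0):
        """One health probe: can the target still reach the source?"""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def failover():
        # In the real product the target also assumes the source's IP address,
        # computer name, and share points; here we only run the post-failover
        # script that starts applications and services on the target.
        subprocess.run([POST_FAILOVER_SCRIPT], check=True)

    def monitor():
        missed = 0
        while missed < MISSED_BEATS_BEFORE_FAILOVER:
            missed = 0 if heartbeat_ok(SOURCE) else missed + 1
            time.sleep(2)
        failover()

Tuning the threshold and probe interval is exactly the failover-time trade-off mentioned above: a short window fails over quickly but risks reacting to a transient network glitch.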

Many-to-One Monitoring and Failover of Multiple Source Servers
In a many-to-one scenario, the same procedure can be repeated to allow a single target server to monitor and stand in for multiple source servers (see Figure 2). Double-Take provides failover support for file and print services in addition to support for major applications such as Microsoft Exchange Server, Microsoft SQL Server, Microsoft Internet Information Server (IIS), Oracle®, Lotus Notes®, and SAP.

Off-Site Disaster Recovery
Double-Take offers additional protection from geographic or site disasters through remote replication of mission-critical data for off-site disaster recovery (see Figure 3). The use of TCP/IP makes Double-Take easily deployable over a WAN. Its highly efficient byte-level replication dramatically reduces network load, effectively allowing replication of critical data over slower remote links that would otherwise be impractical, if not impossible, to use.

For remote replication, bandwidth throttling allows bandwidth utilization to be limited to designated levels to balance the replication with other remote network traffic. The ability to schedule remote replication can also help accommodate other remote network traffic as well as reduce transmission costs in cases where tariffs favor periodic bursting of multiple packets over intermittent streaming.
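Bandwidth throttling of this kind is commonly built as a token bucket that meters bytes onto the wire. The sketch below is a generic illustration of that approach under assumed numbers (a 128 KB/s cap with a 64 KB burst); it is not Double-Take's actual mechanism.

    import time

    class TokenBucket:
        """Limit sending to `rate` bytes/second, allowing bursts up to `burst` bytes."""

        def __init__(self, rate, burst):
            self.rate = rate
            self.capacity = burst
            self.tokens = burst
            self.last = time.monotonic()

        def consume(self, nbytes):
            """Block until `nbytes` worth of tokens are available, then spend them.
            Chunks must not exceed the burst size, or this would wait forever."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    def send_throttled(sock, payload, bucket, chunk=4096):
        """Send replication data in chunks, pausing whenever the designated level is reached."""
        for i in range(0, len(payload), chunk):
            piece = payload[i:i + chunk]
            bucket.consume(len(piece))
            sock.sendall(piece)

    # Example: cap replication at 128 KB/s so other WAN traffic is not starved.
    # bucket = TokenBucket(rate=128 * 1024, burst=64 * 1024)

Scheduling, the other option mentioned above, simply moves the whole send loop into an off-peak window instead of metering it continuously.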

The Double-Take asynchronous data replication technology also minimizes the performance impact on the source server that could otherwise be associated with slower remote links or an occasional loss of remote connectivity.

Geographic Cluster Protection
Companies with investments in clustering technology understand the need for ready access to mission-critical data and applications. Yet some may overlook their continued vulnerability to geographic or hardware-related disasters until it is too late. The reliance of clusters on a single shared storage unit limits their ability to protect against such disasters.

Double-Take adds another layer of protection to clustered environments by replicating cluster data to other local destinations and/or safely off-site (see Figure 4). Additionally, Double-Take's failover engine enables local failover from one cluster to another cluster or stand-alone server, providing protection against cluster failure and allowing routine cluster maintenance. Double-Take works equally well in Active/Passive and Active/Active cluster configurations and is also compatible with both Microsoft Cluster Service and NetWare Cluster Services.

NSI Software recently introduced the GeoCluster™ add-on, which allows the geographic separation of two nodes in the same Microsoft cluster through replication of cluster data and management of the quorum resource, thereby eliminating the shared storage requirement.

Storage Consolidation and Enhanced Backup
The distributed nature of many storage environments presents a great challenge to administrators in their efforts to provide enterprise data protection. While local server backups prevent undue load on the network, they are difficult to manage in a distributed environment. Centralized tape backup of multiple servers is more manageable, yet increases network load and may prevent backups from completing when faced with a shrinking backup window.

Figure 1. Monitoring and Failover of a Single Source Server (source and target servers attached to PowerVault storage)

Figure 2. Monitoring and Failover of Multiple Source Servers (three source servers and one target attached through PowerVault Fibre Channel switches)



Figure 3. Remote Replication of Data for Off-Site Disaster Recovery (source servers in New York replicating over the WAN to a target in Los Angeles; each site uses PowerEdge servers, PowerVault Fibre Channel switches, and PowerVault storage)

Figure 4. Replicating Cluster Data with Double-Take (an MSCS Active/Active cluster in San Francisco replicating over the WAN to a target in Chicago)



Double-Take's many-to-one data replication handles these challenges, especially consolidating storage for the purpose of centralized tape backup. Using its patented byte-level replication, Double-Take can efficiently send changes from multiple production servers to a dedicated backup server (oftentimes connected to a storage area network (SAN) to accommodate large amounts of data) with minimal impact on the production servers or the LAN (see Figure 5). Because backups are performed on the target server and storage devices, they can be run as frequently as necessary without affecting production.

Double-Take also eliminates open file issues and the need for open file agents. Regardless of file state on the source servers, changes to selected files are captured as they occur and are immediately sent to the target server. If a file on the target is temporarily being backed up as changes occur on a source server, the changes are queued and then automatically applied to the target server once the file is released. Double-Take is fully compatible with third-party backup software products from VERITAS®, Computer Associates, Legato®, BEI, and others.
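The queue-and-apply behavior can be sketched in a few lines: replicated writes destined for a file that the backup job currently holds are parked in a per-file queue and flushed, in order, when the file is released. The class and function names below are invented for illustration; this is not NSI's implementation.

    from collections import defaultdict, deque

    def write_at(path, offset, data):
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(data)

    class PendingChanges:
        """Hold replicated writes for target files temporarily locked by a backup job."""

        def __init__(self):
            self.queues = defaultdict(deque)  # path -> queued (offset, data) writes

        def apply_or_queue(self, path, offset, data, file_locked):
            if file_locked(path):
                # The target file is being backed up; park the change instead of dropping it.
                self.queues[path].append((offset, data))
            else:
                write_at(path, offset, data)

        def release(self, path):
            """The backup released the file: apply the queued changes in arrival order."""
            while self.queues[path]:
                offset, data = self.queues[path].popleft()
                write_at(path, offset, data)

Because changes are captured at the source as they occur, nothing is lost while the target file is busy; the queue simply drains once the backup moves on.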

Other Applications, Support, and Services
Double-Take's one-to-one and cross-platform replication make it ideal for server and operating system (OS) migration. Failover further eases the server migration and OS upgrade process, allowing a seamless cutover to the new server when ready. Customers also use Double-Take's one-to-many replication for automatic content distribution from one server to multiple load-balanced servers in a front-end Web farm.

Double-Take is a multifaceted tool that minimizes downtime and helps companies stay open for business. Crucial to the equation is the availability of 24x7 technical support and implementation services. With its robust features and flexible configuration options, Double-Take achieves enterprise-class data protection for businesses that cannot afford to have anything less.

Additional Resources
For additional information on NSI Double-Take or GeoCluster, please call 877-723-3925 in North America or +33 1 47 77 15 77 in Europe, or visit the following Web sites:

- Evaluation software and information request: http://www.sunbelt-software.com/dell/dt
- Microsoft/Dell/NSI MSCS disaster recovery video: http://www.microsoft.com/Seminar/1033/20000323DisasterAJ1/Seminar.htm
- Strategic Research Corporation Double-Take profile: http://www.nsisoftware.com/pdfs/SRC%20NSI%20Strategic%20Profile.pdf
- Online purchasing: http://gigabuys.us.dell.com/store/catalog.asp?Manufacturer=NSI

Andrew Thibault ([email protected]) is director of Strategic Business Development for Sunbelt Software, distributor of NSI Software Double-Take and GeoCluster.


Figure 5. Sending Changes from Multiple Production Servers to a Dedicated Backup Server (source PowerEdge servers replicating to a target attached through PowerVault Fibre Channel switches to PowerVault storage, with a PowerVault tape library connected via a PowerVault Fibre Channel/SCSI bridge)



OS ENVIRONMENT

Server Consolidation: An IT Perspective

Server consolidation is a hot topic for Windows NT environments. The current model of one Windows NT server per application is consuming a vast amount of resources, including personnel, data center real estate, and capital acquisition. Consolidating servers can reduce the Total Cost of Ownership (TCO) through reduced network bandwidth and reduced power requirements for the data centers. This article discusses Dell's IT initiatives for server consolidation.

By Rommel Mercado

Server consolidation has many potential meanings. The application/Web host layer and data layer consolidation are the primary areas addressed by Dell IT initiatives. This article reviews Dell objectives, business requirements, and plans for implementation and testing.

Dell's move to a multitier architecture has increased the number of application and Web servers in its data centers. Many groups and project teams claim to need a Web server, and the purchase of so many servers for production has created a real estate issue in the data center. These purchases also have a major impact on the capital expenditure budget and on engineering and administration personnel. The only resolution is to consolidate these applications onto a smaller number of servers to regain the data center space and reduce the expenditures.

For an application or Web server to be a candidate for effective consolidation, it must meet certain requirements; for example, the application must be lightweight, nontransactional, and stateless. Luckily, many Dell servers meet these requirements. For example, many groups use a Web server as a document and form repository.

Several approaches to consolidation at the application/Web layer will significantly reduce the number of servers in the data centers. Two approaches are clustering technologies and virtual machine technologies.

Clustering: The Microsoft WLBS Approach
Microsoft Windows NT Load Balancing Service (WLBS) is a shared-nothing cluster that provides high availability, scalability, and load balancing for Microsoft Internet Information Server (IIS). WLBS consists of multiple servers, or nodes, that function as a single service via a virtual IP address. This cluster is available through the WLBS clustering software.

WLBS dynamically distributes incoming IP traffic across the cluster. Clients access the cluster using a single virtual IP address, and WLBS transparently distributes their requests among the nodes; the cluster presents itself as a single server that responds at that virtual IP address. Cluster performance can be scaled by adding nodes to accommodate increasing demand. Figure 1 shows a WLBS cluster.
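Conceptually, every node sees the traffic sent to the virtual IP address and applies the same deterministic filter, accepting only the connections it owns, so no central dispatcher is required. The Python sketch below illustrates that filtering idea; the hash rule and parameters are simplifications for illustration, not Microsoft's actual algorithm.

    import hashlib

    NODE_COUNT = 3   # nodes currently in the cluster
    MY_NODE_ID = 1   # this node's position, 0-based

    def owns_connection(client_ip, client_port):
        """Each node evaluates the same rule on the same packet; the rule maps
        every connection to exactly one node, so exactly one node answers."""
        key = f"{client_ip}:{client_port}".encode()
        bucket = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % NODE_COUNT
        return bucket == MY_NODE_ID

    # A request from 10.0.0.7:51312 is accepted by exactly one of the three
    # nodes; raising NODE_COUNT when a node joins spreads the load further.
    print(owns_connection("10.0.0.7", 51312))

This is also why adding nodes scales the cluster: changing the node count redistributes the buckets without any client-visible change to the virtual IP address.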



WLBS is a server consolidation technique. Placing the workloads of many lightweight servers into a cluster helps to eliminate a large number of servers. A by-product of the WLBS architecture is its ability to add redundancy to the applications on those servers.

Placing several smaller applications onto a WLBS cluster reduces the number of servers needed for those applications; for example, 50 small applications, each with its own server, can be consolidated onto 10 servers or fewer. This consolidation not only regains servers and data center space, but also provides those applications with higher availability and load balancing.

Note: Application state cannot be maintained on the application or Web server after this type of consolidation because of the constraints of the WLBS architecture, which does not maintain state. State needs to reside on the client, in the back-end database, or on a server separate from the WLBS cluster that holds client state for the cluster. See Microsoft white papers on implementing WLBS for more information.
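Externalizing state usually means keying each session off a client-side token and keeping the data in a shared back-end store so that any node can serve the next request. In the hedged sketch below, an in-memory dictionary stands in for that back-end database or dedicated state server; the token format and function names are illustrative assumptions.

    import uuid

    # Stands in for the back-end database or dedicated state server shared by
    # all WLBS nodes; a node-local dictionary would break under load balancing.
    SHARED_STATE_STORE = {}

    def begin_session():
        """Create a session record in the shared store; the token travels back
        to the client, for example in a cookie."""
        token = uuid.uuid4().hex
        SHARED_STATE_STORE[token] = {}
        return token

    def handle_request(token, key, value):
        """Any node can service the request, because state lives in the shared
        store rather than on whichever node saw the previous hit."""
        session = SHARED_STATE_STORE.setdefault(token, {})
        session[key] = value
        return session

    # Two requests in the same session may land on different nodes, yet both
    # see the same data.
    t = begin_session()
    handle_request(t, "cart", ["item-1"])
    print(handle_request(t, "last_page", "/checkout"))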

Virtual Machines: VMWare
VMWare™ is a new product introduced into the x86 world that allows virtual machines to run inside a host machine and operating system, as shown in Figure 2. The virtual machine (VM) technology that it implements allows one or more "guest" operating systems to run on top of a "host" operating system.

The virtual machine presents itself as a fully functional x86 system with CPU, memory, disk, video, and network. Installing a guest operating system onto a virtual machine is as simple as inserting the boot medium and turning on the power. VMWare will perform a Power-On Self Test (POST), and the operating system will install as it does on a "real" machine.

Lightweight applications, the best candidates for consolidation, can be run in their own isolated virtual memory space. An added benefit of this solution is that a misbehaving application can affect only its own instance, leaving the other application instances and the host operating system unaffected.

VMWare has already made a positive impact on IT engineering and testing. Teams using VMWare can set up and test separate installations of Windows NT with various tools and applications, all on the same computer. Exploratory engineering testing shows that four servers can be consolidated in this way with the VMWare technology.

Dell has been using the VMWare Desktop product to support virtual machines in server testing; however, a VMWare Server product will be available at the end of the year to perform those tasks more effectively in the server environment. Implementation differences between the two products include:

- The Server product runs a VMWare kernel instead of a host operating system (Linux, Windows NT 4.0, or Windows 2000).
- The Server product is optimized for I/O performance.

Additionally, VMWare will start qualifying drivers for use in VMWare instances, which will significantly improve performance and add functionality.

Dell plans to use VMWare in many areas of IT, including server consolidation. Consolidating lightweight applications from multiple computers onto several virtual machines running on one larger server will eliminate a large number of servers. This will not only reduce the number of servers in the production environment, but also be helpful in the development environment.

For production use, the VMWare Server product will allow multiprocessor instances, unlike the Desktop product.



Figure 1. Microsoft WLBS Cluster (nodes 1 through 3 form the WLBS cluster behind a virtual IP on the public network)

Figure 2. VMWare Runs Inside a Host Machine and OS (VMWare Desktop architecture: multiple VMWare instances running on a Linux or Windows NT host OS)


Therefore, larger servers can host many applications more efficiently than with the Desktop product. Additionally, I/O bandwidth will not be an issue, so production loads can run with no degradation.

Development environments typically have small groups of developers who need full control of their systems for programming and testing. If several developers share a server, any instability in one developer's system must be isolated from the other systems sharing that server.

Consolidation at the Data Layer
Server consolidation at the data layer is a prime opportunity for cost reduction. The two methods most often used for server consolidation at this layer are multi-instance and multi-schema. The best method for a particular situation depends on the parameters of that situation. The discussion below presents each of these methods, discusses the pros and cons of each, and outlines how each fits into the consolidation effort.

Multi-Instance Server Consolidation
The multi-instance server consolidation method runs two or more instances of a database management system (DBMS) on a single server. The main advantages of this architecture are its use of separate memory spaces for each instance and its ability to manage the instances separately.

It is a good architecture for development databases that must be taken down frequently or for databases that are being used to test scripts and procedures. This architecture limits the exposure to errors for everyone sharing the server and running instances on the server.

The architecture does have some major limitations: its use of physical memory and memory allocation. A limited number of instances can be supported on a single server, based on the size of the server's memory. There is competition for physical memory space, since each instance creates its own buffers. The operating system controls memory management, so the multi-instance architecture works best on advanced, scalable operating systems such as Windows 2000 Server, Advanced Server, and Datacenter Server.

Multi-Schema Server Consolidation
Multi-schema server consolidation runs multiple database schemas in one large database instance on a server. In this deployment strategy, one database instance contains all of the data objects for multiple projects and manages access to these objects via multiple database schemas.

The multi-schema architecture allows a greater number of databases to be consolidated onto a single server. This architecture is easier for the database administrator (DBA) to manage because of its ability to adjust multiple buffer pools within the database server software. This type of consolidation is also more efficient in its utilization of physical server memory.

Additional efficiencies can be gained by sharing table spaces through managed views. Therefore, similar projects should be grouped together to increase the efficiency of multi-schema consolidation.

One drawback to this type of deployment is that it requires a senior DBA because of the increased planning requirements. The DBA must be able to manage separate projects well in order for them to share the same server. Coordination determines the success or failure of multi-schema consolidation, because projects must work with their databases within defined maintenance windows. Also, the efficiency of multi-schema consolidation is reduced if the projects on a server are unrelated and cannot share database views.
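The operational difference between the two models shows up in how applications reach their data. In the hedged sketch below, multi-instance consolidation gives each project its own DBMS listener (distinct ports on the same host), while multi-schema consolidation points every project at one instance and qualifies objects by schema name; the host names, ports, and project names are invented for illustration.

    # Multi-instance: each project runs in its own DBMS instance on the same
    # server, so each gets a separate listener port and its own buffer memory.
    MULTI_INSTANCE = {
        "payroll":   "dbhost.example.com:1521",  # instance 1 (illustrative ports)
        "inventory": "dbhost.example.com:1522",  # instance 2
    }

    # Multi-schema: one large instance holds every project's objects, with
    # buffer pools shared and tuned inside the single database server.
    MULTI_SCHEMA_INSTANCE = "dbhost.example.com:1521"

    def table_reference(project, table, multi_schema):
        """Show how an application names a table under each consolidation model."""
        if multi_schema:
            # e.g. SELECT * FROM payroll.employees  -- the schema qualifies the object
            return f"{project}.{table}"
        # In the multi-instance model the connection itself selects the project,
        # so the bare table name is enough.
        return table

    print(table_reference("payroll", "employees", multi_schema=True))   # payroll.employees
    print(table_reference("payroll", "employees", multi_schema=False))  # employees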

The Target Databases for Consolidation
The databases at Dell targeted for consolidation are small in size: between 2 MB and 300 MB. This database size is the most commonly implemented throughout Dell, although the larger 200 GB to 500 GB databases receive most of the publicity. These small databases are generally used for development and small-scale departmental work.

Database size and the number of connections made to the databases on a server are considerations. In the multi-instance model, the total number of concurrent connections may cause a problem, especially if some of the databases are accessed with persistent connections.

Consolidation Reduces Overall Costs
The number of servers in production at Dell has dramatically increased: it has grown from a handful of Windows NT application servers to over 1,500 Windows NT application and database servers. Dell is now trying to reduce the number of those servers by making more effective use of the faster, larger servers now available.

Dell needs to find ways to consolidate as many servers as possible to reduce the need to build more data centers and lab facilities. The technologies discussed in this article (clustering, virtual machines, and database consolidation) offer tools and methods for reducing the number of servers by consolidating their applications and databases onto fewer, larger servers.

Rommel Mercado ([email protected]) is the manager of IT Cluster and Database Systems Engineering. His expertise includes performance analysis and MSCS, OFS, and OPS clustering implementation and troubleshooting. Rommel has a B.S. in Aerospace Engineering from the University of Texas at Austin.




OS ENVIRONMENT

Building a Scalable, Highly Available Novell Cluster Environment

Dell and Novell recently teamed up for a proof-of-concept test of an enterprise-level deployment of 32 nodes in a single cluster that used a consolidated storage model. This article provides installation and testing details on Dell PowerEdge server and PowerVault storage systems. Existing installations were reviewed during the architecture phase to determine the final direction and application deployment.

By Richard Lang

Dell and Novell performed proof-of-concept testing of an enterprise-level deployment to determine the best way to reduce costs, increase bandwidth, and provide higher availability for applications, yet remain transparent to users. This article describes how Dell and Novell demonstrated the reliability of a 32-node cluster with multiple applications using Novell's Cluster Services and Dell hardware. It also describes the architecture requirements for failover sufficient to handle a 32-node cluster and for data backup over the Fibre Channel storage area network (SAN).

Since it was important to emulate a real customer experience, all of the equipment for the 32-node preconfigured cluster was ordered directly from the Dell factory. The servers included Novell NetWare 5 preinstalled with the latest service pack, all drivers for NetWare, and the Novell Cluster Services code, under Dell's XLA licensing model. Dell can provide these services to all NetWare customers, including those with Corporate Licensing Agreements (CLAs), VLAs, and Master License Agreements (MLAs).

We first connected all network and fibre cables, then verified at each server's console that each server could "see" the shared storage. Finally, we installed Novell Cluster Services from one of the clients onto each server, using the Dell-developed tool that automated the installation process.

Creating the Cluster Architecture
The goal of the cluster architecture, based on real-world customer deployments, was to establish the number of paths sufficient for failover for a 32-node cluster. With an architecture capable of supporting 32 nodes, Dell and Novell could demonstrate the extreme scalability of a Dell/Novell cluster solution.

The cluster architecture consisted of 32 Dell PowerEdge servers: one server served as a dedicated console-monitoring station for real-time monitoring of the fibre storage system status, and one server outside the cluster served as a master replica of Novell Directory Services® (NDS™). See Figure 1.

The Application Design
The application architecture can be customized easily or changed to fit individual customer needs or changing environments. This provides adequate failover paths for file access and key applications, such as GroupWise®, ManageWise®, Oracle, DHCP, and Novell Distributed Print Services.



NetWare Cluster Services (NWCS) for NetWare 5.1 supports 32 nodes through standard Novell support channels out of the box. Novell Consulting Services can provide more in-depth architecture design and support to develop a customized solution. Novell Consulting contributed to the architecture during this application deployment stage to closely emulate real deployments.

NetWare Cluster Services Manages Network Resources
NWCS for NetWare 5.x is a server clustering system that can help to ensure high availability and manageability of critical network resources, including data (volumes), applications, server licenses, and services. It is a multinode, NDS-enabled clustering product for NetWare 5 that supports failover, failback, and migration (load balancing) of individually managed cluster resources.

GroupWise Design Goals
The original plan called for eight servers to provide Novell GroupWise mail services to support 8,000 users. Generally, 1,000 users per server is considered the maximum for a mail server, but other considerations may impact performance.


Figure 1. The Cluster Architecture (cluster nodes hosting GroupWise, ZENWorks, Web services, Oracle, DHCP and NDPS services, and VERITAS backup, plus an NDS master, all connected through Fibre Channel switches to Dell PowerVault shared storage)


For example, external storage can eliminate bottlenecks in the disk channel to maintain acceptable response times. We concluded that 500 to 800 users per post office are more realistic and representative of a real-world deployment. External storage and additional RAM would have been considered, at a minimum, if this system were to be deployed into a production environment to support 8,000 users.

GroupWise System Design
The GroupWise system contained a single domain, eight post offices, and a Simple Mail Transfer Protocol (SMTP) gateway. We chose this design primarily as a proof-of-concept test for failover fault tolerance in the 32-node cluster environment. Additionally, this design served to emulate an enterprise-size environment in which scalability is a major concern.

The GroupWise system had eight mail-specific volumes: MAIL1 through MAIL8. These MAIL volumes were defined as shared volumes in the cluster storage, dedicated to providing mail services only.

The GroupWise cluster partition contained 12 available server nodes. Except for the domain agent and SMTP gateway running together on the same node, we assigned each server node to host only one GroupWise post office agent while the cluster ran in its preferred state. This design followed the Novell Consulting Services GroupWise design recommendations and allowed for minimum service interruption time during volume failover.

To facilitate the proof-of-concept test within this GroupWise environment, each MAIL volume had a single assigned failover path. With four server nodes available for failover, each failover node was a target for up to two MAIL volumes. To accommodate this strategy, the GroupWise and cluster design had to allow a failover node to host either two post offices running concurrently or a post office running concurrently with the domain agent and SMTP gateway.

GroupWise Deployment
Initially, we installed GroupWise using the latest non-Enhancement Pack code (5.5.3), but then upgraded it to the latest EP SP1 code to take advantage of the latest fixes for the GroupWise core components and enhancements specific to running in a clustered system. We launched the standard install to extend the schema and update the NetWare Administrator with GroupWise-specific snap-ins, after which we exited the install routine. Rather than creating a new GroupWise 5.5 (GW55) system, we grafted a GroupWise demo domain and post office into the existing cluster tree.

We imported demo users with active and populated mailboxes into the demo post office and installed GroupWise agents to this system with new startup files. With the agents loaded and the system reconnected, the next activity was to create the new additional post offices.

To expand the size of this mail system and emulate a real-world environment, we imported additional users into the tree as required. To facilitate failover, each of the 12 GroupWise server nodes in the cluster required that agents be installed and appropriate startup files be created to host the assigned failover mail volumes. We then installed the GroupWise Internet Agent (GWIA) with a basic configuration.

To facilitate clustering, the GW55 system requires specific design considerations and configuration settings that allow for a quick and stable failover if a server fails. In most environments, the message transfer agent (MTA) and post office agent (POA) are commonly run on the same server. In clustered environments, there is an advantage to running the MTAs and POAs on separate servers when the cluster is running in a normal state: the failover time for a post office on a dedicated mail server is much faster when only a POA is running on that server.

To accommodate post offices running concurrently on the same server or alongside the domain or gateway under failover conditions, we configured unique client/server and MTA ports for each post office agent. In addition, we had to load the post offices into protected memory address space so they could be unloaded correctly at failback time. We configured all post office links for TCP/IP communications to the domain. This allowed for minimum service interruption and prevented GroupWise database corruption. We configured all post offices to allow only client/server connections from GroupWise clients.

We used the GroupWise Resource Template object to create the MAIL cluster resources. This template contains the default script to load the GroupWise agents (GRPWISE.NCF), and the unload script contains the unload GW* NLM commands required in a normal GroupWise environment.

In this test cluster environment, we edited these scripts to accommodate the possibility of two MAIL volumes running on the same failover node. We replaced GRPWISE.NCF with a command line that explicitly loaded each post office in protected address space. With this configuration, the unload script only required the address space to be unloaded, so we disabled the individual unload commands by using a rem statement.





Netscape Enterprise Server Deployment
We installed Netscape Enterprise Server® (NES) on four nodes for capacity and failover capabilities. We tested resources for the Web servers using two methods: migrating to different servers manually for load balancing and removing power from the servers.

The result was the same for both situations: the client did not lose access to the Web page. Even hitting reload on the browser while initiating a shutdown of the Web server still migrated services to another server without the client being aware.

The Web server showed very minimal failover times for all applications tested. In almost every test, the failover was completely transparent and appeared instantaneous to the user.

Oracle Design and Deployment
A successful deployment of Oracle in this clustered environment required a dedicated Novell Storage Services (NSS) volume on a shared disk, and that dedicated volume had to be mounted before installing NWCS. We also added a dedicated IP address for Oracle prior to installation.

We created two databases using Oracle Version 8.0.4.0.6: one for Oracle and the other for an e-commerce application. No additional databases were needed because Oracle runs from shared storage. The Oracle servers also provided an example showing how an e-commerce solution server could be set up.

Application Failover Testing
We performed several tests to determine whether the applications would fail over as necessary.

We forced each server to release and renew IP addresses to test DHCP throughout testing. The average failover times were as follows:

- GroupWise: less than 30 seconds
- Oracle: less than 30 seconds
- Netscape: less than 2 seconds
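Failover times like these are typically gathered by polling the service from a client and timing the gap in availability. The short Python sketch below shows one way such a measurement could be scripted; the virtual IP and port are assumptions, and this is not the tooling Dell and Novell used.

    import socket
    import time

    SERVICE = ("cluster-vip.example.com", 80)  # assumed virtual IP and port

    def service_up(addr, timeout=1.0):
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def measure_failover(addr):
        """Wait for the service to drop, then time how long until it answers again."""
        while service_up(addr):
            time.sleep(0.5)      # still up; failover has not started yet
        outage_start = time.monotonic()
        while not service_up(addr):
            time.sleep(0.5)      # down; the cluster is migrating the resource
        return time.monotonic() - outage_start

    print(f"Failover took {measure_failover(SERVICE):.1f} seconds")

The 0.5-second polling interval bounds the measurement error, which is adequate for failover windows measured in seconds.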

Redundancy Testing
We used automatic transition failover (ATF) to ensure that no single point of failure was present in the NetWare storage area network (SAN). We loaded SCSISAN.CDM along with redundant QLogic® host bus adapters (HBAs) during testing in multiple servers. In addition, we used multiple Dell PowerVault 51F Fibre Channel switches, connecting each switch to a separate storage processor in the Dell PowerVault 650F fibre-based storage system.

During testing, we copied files from a Windows NT workstation client CD-ROM drive to the shared storage. After verifying the path for data flow, we removed power and shut down the Fibre Channel switch that was transferring data. The client did not display a notification or any indication that the data path was interrupted. The server console showed an error message indicating the fibre data channel was interrupted; it then displayed a message that the data channel had failed over to the redundant path.

The file copy process on the client was never interrupted. In a separate test, we shut down the server hosting the file copy, and the result was the same. In a final test, we removed a cable from the HBA hosting the data channel that was actively transferring data; again, the file copy process automatically moved to the redundant data channel, completely transparently to the user.

Volume of Data Testing
We performed a variety of volume failover tests by using two video clips of 31,588 KB and 23,631 KB. We ran a video clip from a volume, and then migrated that volume to another server. We also physically turned off the server that was running the video clip and watched it fail over to another server.

Video failover tests were conducted to test for file availability. An additional test involved running the video clip, resetting the server that the volume was on, and immediately resetting the next preferred node. When simulating a power failure by shutting down one of the servers, a slight interruption could be detected: the video delayed approximately 25 to 30 seconds, but the timer kept running and then paused while the video clip caught up. The next preferred node was also reset; then the server where the volume was running was reset to check for failover. The results were the same: when migrating file access from one server to another, no interruption in the video playback was noted.

Netscape Enterprise Server Testing
We tested NES by loading a Web site, migrating the resource, then reloading the page. We repeated these tests by resetting the servers as well as by powering off the servers. The most common result was that the Web page reloaded immediately. The worst case encountered was that the user had to press "reload" to reload the page.

GroupWise Testing
We verified GroupWise for connectivity between agent components, gateways, and clients. We tested the failover configuration by failing over all scenarios (migrating volume resources both manually and by powering down servers) and verifying that the resources loaded successfully on the failover nodes and that all clients continued to have access.

We tested client access for service continuity by using an activity-generator utility called Axpress.exe. This utility allows the GroupWise client to run continually in a looping fashion, simulating typical user activity. We copied Axpress.exe to the same volume where the test sender's post office existed. In that way, when the post office disappeared on failover, so did the test utility and macro script, creating a more real-world scenario.

The test consisted of creating a sample mail message that included a locally stored attachment file. This activity was repeated n times, where n represents the loop count defined in the script file NWCSTEST.AES. Upon failover, the GroupWise client usually paused until the post office became available again. In some instances, the client reported an address lookup failure and prompted the user to resolve the error. The client failover behavior varied depending on where in the loop cycle the post office was failed over.
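An activity generator of this kind is essentially a send loop with retry. The Python sketch below imitates that pattern with a placeholder send_message function; Axpress.exe itself is a GroupWise macro utility, so everything here beyond the loop-and-retry structure is an assumption.

    import time

    LOOP_COUNT = 100  # the "n" defined in the test script

    def send_message(body, attachment_path):
        # Stand-in for the real GroupWise client send; assume it raises
        # ConnectionError while the post office volume is failing over.
        ...

    def generate_activity():
        sent = 0
        while sent < LOOP_COUNT:
            try:
                send_message(f"test message {sent}", "C:/attach/report.doc")
                sent += 1
            except ConnectionError:
                # The post office is failing over; pause and retry, as the
                # GroupWise client did during the tests, rather than aborting.
                time.sleep(5)

    generate_activity()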

In all cases, the client was recoverable, and the user was able to continue using the GroupWise client without having to close or restart it.

Novell Distributed Print Services Testing
We tested the failover for printing services by migrating resources manually and by powering down servers; the overall average was 2.4 minutes from failover to printed documents.

When migrating Novell Distributed Print Services (NDPS) manually, services were available in only 35 seconds. With a complete power-down of the server, it took a maximum of four minutes for the server to reboot and the services to come online. In all cases, a power-down increases the timeout before NWCS will migrate services automatically. This delay is normal; it eliminates communication errors and compensates for delays in a production network.

Data Backup Is Critical
Dell considered data backup one of the most critical requirements in this deployment. Dell successfully demonstrated backing up data over the SAN by moving large volumes of data without impacting the local area network (LAN) infrastructure.

The Backup Architecture
The hardware for backup consisted of a Dell PowerVault 130T SCSI DLT Tape Library (with an internal capacity of up to four DLT7000 drives) connected to a PowerVault 35F Fibre Channel/SCSI Bridge. The bridge was connected to the SAN network using a PowerVault 51F Fibre Channel switch.

Backup Testing
The tests used two backup software1 products:

Computer Associates ArcServeIT ELO Version 6.6. ArcServeIT performed backups successfully on multiple servers in the cluster SAN environment without impacting the LAN bandwidth. (Note: To move data over the SAN, the backup software must be installed on each server to be backed up. Although it is still possible to back up a server without the backup software installed, the data will move over the LAN, not the SAN; during testing, the LAN quickly became saturated.)

If the primary server fails, ArcServeIT promotes one of the distributed servers to primary server. Testing also revealed some issues with NetWare Client 4.7.

VERITAS Backup Exec® Shared Storage Option™ (SSO) Version 8.5. Backup Exec SSO also works in a NetWare SAN environment. Although some initial problems occurred on machines using dual processors, we determined after further analysis that the NWASPI version was causing the problem. A new version of NWASPI (dated March 1, 2000) from Adaptec® resolved the problem.

We created and successfully completed multiple jobs with both solutions. Both backup products were exclusive versions, available only directly from Dell Computer. They can be ordered directly from Dell in the PowerSuites for Tape Backup solution bundle by specifying either ArcServeIT or Backup Exec.

A Highly Available Cluster Environment
These tests by Dell and Novell incorporated a true fibre-to-fibre storage area network to consolidate the storage requirements of 32 servers onto one shared-nothing storage model. The SAN removes potential bandwidth impact from the production environment. It also provided a high-bandwidth medium for backups, thereby reducing the backup window to a more manageable size.

NetWare Cluster Services provided the ability to create multiple redundant paths for application failover for true high availability. It also allowed manual load balancing during periods of high utilization or maintenance, making it easy to schedule maintenance and downtime for an individual server or application.

Richard Lang ([email protected]) is a technical marketing manager responsible for the technical relationship of the Novell Alliances at Dell. Richard has a B.S. in Management Information Systems from Kennedy Western University and an Applied Science degree in Electrical Engineering from the DeVry Institute of Technology.

1 Note: The backup software used during testing contained builds exclusive to Dell and available only in the Dell PowerSuites for Tape Backup product bundle.



OS ENVIRONMENT

Custom NetWare Server Installations with DellPlus

Imagine letting someone else custom configure all of your new servers for you, before they even leave the factory. DellPlus, a unique custom factory installation service from Dell, does just that. DellPlus saves the customer hours per server in setup time; in large deployments, this time savings can add up significantly. This article describes DellPlus and provides explicit examples of the service with NetWare installations, highlighting its benefits for customers.

By Rod Gallagher

Global deployment in today's complex IT environment has many logistic challenges. Imagine a scenario where you want to install hundreds of Novell NetWare 5 servers across the globe. Each one requires unique server information setup, such as the server name and IP addresses. The traditional method of performing the installation would involve these steps:

1. Order the servers.
2. Ship them to an integrator (or integration department within the company).
3. Install the specific software.
4. Ship the servers to the site.
5. Configure site-specific settings.
6. Bring up each server and test it (troubleshooting any problems that arose from having too many people configuring the system).

Now imagine a scenario where you want the same outcome, but using the following process:

1. Fill out a few configuration forms.
2. Order the servers.
3. Have an on-site person plug in the servers and turn them on.

Obviously, the second scenario is more efficient for the customer. DellPlus can provide this unique integration service: a complete and custom installation of your servers within Dell's factory.

A Perfect Back-End Customization
DellPlus can install most specific software and hardware at the factory. Servers are assembled in the same fashion as Dell's standard servers, then routed to a special location within the Dell factory for specific software installations. The end result for the customer is a high-quality custom installation for the servers.

When customizing Dell servers, DellPlus offers three main operating systems: several versions of Microsoft Windows, Novell NetWare, and the various forms of UNIX®. Many Dell customers are aware of the company's advanced capabilities with Microsoft operating systems and appropriate applications. This article discusses the specific capabilities DellPlus offers for the Novell operating systems.

Default Installations with Custom RAID Setup
The most basic option for DellPlus customers is a default installation with custom RAID levels. DellPlus can set up custom RAID levels in the factory, initialize all of the drives, and install the NetWare software with a DellPlus default installation. With this option, DellPlus sets up the RAID arrays and the basic operating system, but the customer can still configure specific information once the server arrives at the customer site. DellPlus supports NetWare 4.x and 5.x and can perform this service on any Dell server.





An example of a default installation with custom RAID setup would be a customer who orders multiple PowerEdge 2400s, each with five hard drives. The customer wants each server to have a RAID-5 array and a hot-spare drive to maximize data protection. All the servers will be shipped to the same location, where the network administrator will configure and install the servers onto the network. With RAID arrays configured before the servers arrive, the company can save numerous hours of work.

Default Installations with Nonstandard Hardware
A default installation of NetWare with nonstandard hardware is the next basic installation option. The standard factory process does not support the installation of every piece of hardware; there is simply no way for the factory to handle all unknown hardware. DellPlus, however, supports the installation of most hardware, including tape drives, network cards, and customer-supplied parts. Depending on the component, the amount of work for the customer can be negligible.

DellPlus supports NetWare 4.x and 5.x with custom hardware in any Dell server. DellPlus must ensure, however, that the customer has validated the hardware in the specific server so that the final product will work as the customer expects. DellPlus currently does not offer validation services, but relies on the customer to test its own configuration. DellPlus will mention any issues that the engineering staff finds, but cannot be relied upon to find every issue.

One example is a company that wants its custom hardware put into its new Dell servers. The company orders multiple PowerEdge 6350s, each with a high-performance network card and an internal tape backup, with NetWare 5.1. The servers will be shipped to the same location, where the network administrator will complete the configuration and installation. This process can save the company several hours per server. Since the system was built with the hardware in place, the administrator does not have to manipulate the system to make it work. Reducing the number of times each server is opened limits the likelihood of encountering problems. All of the hardware also will carry Dell's warranty, even if Dell's warranty is longer than the manufacturer's original warranty. Support calls all come to Dell if Dell installs the hardware.

Total Personalization with Customer-Supplied Images
If a customer has created a software image, DellPlus can load that image in the factory. The servers will then arrive from the factory with the image already in place. This image, combined with a custom RAID and custom hardware solution, can provide a basic level of customization for the customer. This service is especially useful when servers require large amounts of data to be pre-populated on the server. DellPlus can load gigabytes of data in the factory and reduce the customer's setup time.

An example of this scenario is the customer who purchases multiple PowerEdge 6300s as database servers for remote sites. DellPlus can put the images together so that all of the data is in place when each server arrives at the customer site. The only task left for the customer to do on-site is to change the server name and IP information. Depending on the amount of data to be copied to the servers, this option can save the customer many hours.

This option also allows for any version of NetWare. Since DellPlus does not create the image, customers can choose any version of software or applications.

Custom NetWare 5 Server Configuration
The custom configuration of NetWare 5.x is the most powerful option available to customers from DellPlus. With this offering, the customer simply completes a few forms and a spreadsheet with individual server information. Once this information is given to DellPlus, the customer can order any number of servers, each with unique server information.

A customer with remote sites around the country, or even around the world, provides a good example of this simple process. The customer allows DellPlus to install the NetWare 5.x software and custom configure each server for its site with the server name, IP addresses, and any other standard installation options. DellPlus ships the servers directly to the remote sites; then someone on-site simply plugs in each server, and it is ready for use.
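Per-server configuration of this kind lends itself to being driven from the customer's spreadsheet of server information. As a hedged sketch (the column names and response-file format are invented for illustration, not DellPlus's actual forms or tooling), a few lines of Python can turn a CSV of server names and IP addresses into one configuration file per machine:

    import csv
    import io

    # One row per server, as it might appear in the customer's spreadsheet.
    INVENTORY_CSV = (
        "server_name,ip_address,subnet_mask,site\n"
        "DALLAS-FS1,10.10.1.5,255.255.255.0,Dallas\n"
        "LONDON-FS1,10.20.1.5,255.255.255.0,London\n"
    )

    TEMPLATE = (
        "; auto-generated response file for {server_name} ({site})\n"
        "ServerName={server_name}\n"
        "IPAddress={ip_address}\n"
        "SubnetMask={subnet_mask}\n"
    )

    def generate_response_files(csv_text):
        """Return a mapping of file name -> per-server configuration contents."""
        files = {}
        for row in csv.DictReader(io.StringIO(csv_text)):
            files[row["server_name"] + ".rsp"] = TEMPLATE.format(**row)
        return files

    for name, body in generate_response_files(INVENTORY_CSV).items():
        print("---", name)
        print(body)

The same inventory file can drive ordering, labeling, and shipping, which is what makes the fill-out-a-spreadsheet workflow scale to hundreds of servers.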

NDS Issues with Custom Installation
With Novell Directory Services (NDS) as the backbone for NetWare, servers can continue to be configured once they are on-site. Licenses can be applied, partitions can be changed, and new users can be set up. What is important to the customer is that each server comes up on the network so that it can be managed remotely. Additional software also can be added at the factory to further assist with this process.

NetWare relies on NDS, which is currently an issue for custom configuration. For security reasons, Novell does not give the ability to install a server into an NDS tree without actually "seeing" the tree.

DellPlus offering           NDS Master   32 Nodes   Windows NT Workstation
Custom RAID                      ✔            ✔               ✔
Custom hardware                  ✔            ✔
Customer-supplied image          ✔
Custom site information          ✔            ✔

Figure 1. DellPlus Offerings Used in the BrainShare 32-Node Cluster



This restriction tightens network security by preventing unauthorized servers from joining the tree. This reliance on NDS does make remote installations more difficult, however, when NDS must be involved.

DellPlus has some simple work-arounds. One solution is to install as many applications as possible at the factory, then complete the installation once the server is on-site. Dell personnel can perform this step, or it can be done automatically through scripting. An example would be a customer that has a basic installation performed in the factory, then requests Dell Consulting to arrive on-site with the system to complete the installation into their NDS tree.

Working with Novell, DellPlus has developed a new tool that allows for the custom migration of a server and its objects to an on-site tree. Although some of this functionality is available in DSMERGE, this new tool makes several enhancements. DellPlus can take any group of objects and move them to the customer's tree. The end result is that DellPlus is able to install the objects into a "fake" tree in the factory, then automatically and securely move them to the customer's tree once it is connected.

Currently, this process is unique to Dell. It allows the complete installation of NetWare 5.x servers in the factory, including NDS tree integration. In the near future, it will allow for the installation and configuration of applications such as GroupWise and ZENWorks™ in the factory.

This capability is outlined in the custom NetWare 5 server configuration example above. If a customer wants to send NetWare 5.x servers to remote sites, it will not be necessary to integrate the servers into NDS on their own. This integration can be done automatically once the servers are on-site. Applications such as BorderManager™, GroupWise, and ManageWise can now be installed at the factory to the customer's specifications.

Showcasing a 32-Node Cluster
Many of the DellPlus capabilities are highlighted in one project: the 32-node cluster that Dell showcased at BrainShare® 2000, Novell's premier technical development conference, held in Salt Lake City in March. The basic design of the cluster was 32 PowerEdge 1300s in a NetWare cluster, with another PowerEdge 1300 running as the NDS master server. A Windows NT workstation running on a PowerEdge 1300 served as the console. Figure 1 shows a summary of the DellPlus functions used by each of the servers.

The NDS master server was built as an image that DellPlus put directly onto the server in the factory. It had a custom RAID level and custom data that had been copied directly to the volume. This allowed for disaster recovery of any of the nodes once they reached Salt Lake City. All information was available via the NDS master server.

The 32 nodes were each configured with a custom NetWare 5 download. The unique server names and IP addresses were set at the factory. DellPlus also loaded each specific driver, including those for the custom hardware required for the Fibre Channel connections. Support Pack 3a also was loaded at the factory to provide the servers with the latest files.

Each server was connected to the NDS tree in the factory, so the servers were completely ready for the clustering software to be loaded on-site. DellPlus could go the extra step and install the clustering software at the factory as well, since DellPlus has access to the NDS tree. In the future, the DellPlus NDS tool will handle this installation.

The big question remains: How long did it take to set up the 32-node cluster? DellPlus engineered the project in only a few days. Add a few days for paperwork, and the whole process moves quickly. Once the project is set up, the only delay is factory lead time plus some time for the software download. In many cases, no additional software download time is needed once the project is complete.

DellPlus Handles NetWare Projects with Ease
DellPlus has enhanced its capabilities over the past year and can now handle nearly any NetWare project with ease. From custom hardware and RAID arrays to a complete custom NetWare installation, DellPlus can provide the appropriate level of service. DellPlus will continue to provide new services to ensure that it is on the cutting edge of offerings. Figure 2 summarizes DellPlus services.

Rod Gallagher ([email protected]) is the team lead for the Novell team within DellPlus Servers Engineering. His team is responsible for creation of the Custom Factory Integration processes that DellPlus uses to install Novell NetWare. Rod is a Master Certified NetWare Engineer (MCNE) and a Microsoft Certified Systems Engineer (MCSE).

Installation                              Supported NetWare versions
                                          3.x*   4.x   5.0   5.1
Default installation w/custom RAID         ✔      ✔     ✔     ✔
Default installation w/custom hardware     ✔      ✔     ✔     ✔
Customer-supplied image                    ✔      ✔     ✔     ✔
Custom site information                                 ✔     ✔
Clustering                                              ✔     ✔

* NetWare 3.x is supported only in specific situations. Please contact your Dell sales representative for further information.

Figure 2. DellPlus Supported Installations by NetWare Version



OS ENVIRONMENT

Choosing the Right Ingredients: A Recipe for Success

The food service industry has been slower to adopt Windows-based applications than other industries. Today, the combination of technologies from companies such as Dell Computer, PrOfitS Restaurant Systems, Boundless Corporation, and Intercard provides restaurants with solutions that meet their unique requirements.

By Terry Shagman

Computer-based information systems have been standard within most companies for over 20 years. Many businesses, however, have struggled to integrate processes that have been fragmented across multiple platforms or isolated by independent systems based on proprietary architectures. One such group of companies includes restaurants and related businesses in the food service industry.

The food service industry requires that various functions, including point of sale (POS), kitchen services, inventory control, timekeeping, and other back-office functions, be tightly integrated to maximize efficiency, yet the system must be scalable to keep pace with the growth of the business. Special environmental requirements also present unique challenges for some restaurant operators.

Advanced capabilities in operating systems, application software, and client and server technologies have helped the industry merge these special processes with systems that enable faster service to the customer and timely delivery of vital information to management.

Windows Technology Transforms Restaurant Systems
Although the emergence of Windows and its thousands of applications produced a myriad of choices to improve business productivity and effectiveness, this revolution failed to significantly penetrate the restaurant business. Most established restaurant systems used proprietary hardware and software from a single vendor's system offerings. Few options were available, and any change was disruptive and expensive. Although Windows NT began to make Windows-based applications viable, manageability and scalability remained a problem. Windows NT Terminal Server Edition, and now Windows 2000, show the promise of a potential solution.

For the past four years, restaurant systems have been dramatically transformed by incorporating Windows NT technology in three out of four POS systems. The general trends include focusing on simplicity, spending less time on computers, and providing better service to guests. Another important factor is application integration: 92 percent of POS systems are now being interfaced with back-office servers. Another new technology—thin client devices paired with server-centric applications—has helped complete the puzzle for many restaurants.

Mr. Gatti's Chose PrOfitS

Mr. Gatti's Pizza, a business with company-owned stores and franchise locations from Texas to the Carolinas, was seeking a solution to these issues. Ray Bacon, IT director for Mr. Gatti's, turned to PrOfitS Restaurant Systems for help. The result was a system developed from the collaboration of Dell Computer, Boundless Technologies, and Intercard that will support the current and future needs of the business.

Choosing the Right Ingredients: A Recipe for Success

The food service industry has been slower to adopt Windows-based applications than other industries. Today, the combination of technologies from companies such as Dell Computer, PrOfitS Restaurant Systems, Boundless Corporation, and Intercard provides restaurants with solutions that meet their unique requirements.

By Terry Shagman

OS ENVIRONMENT


As with most restaurants, Mr. Gatti's needed precise inventory control, accurate order entry, and easy access for all employees. Additionally, the restaurant had unique requirements such as delivery tracking, managing particularly harsh kitchen environments, and interfacing with the arcade and entertainment facilities located in the restaurants.

After evaluating a number of options, Bacon chose PrOfitS™ software because it addressed Mr. Gatti's specific needs. This software maintains employee records, conducts training sessions, updates menus, tracks the dining habits of customers, and effectively credits discount coupons. The software also gathers vital data daily from each Mr. Gatti's location and tabulates the results at headquarters, which saves considerable labor costs.

The solution included the following components:

• Dell application server. PrOfitS standardizes on Dell servers because of the breadth of the product line and the availability of service nationwide. Using Dell servers, the application can be properly scaled based on the needs of the restaurant.

• 3Com 8-port 10/100 mini-hub. PrOfitS chose 3Com® based on its leadership in connectivity solutions. The system for a typical PrOfitS customer initially averages only four devices. Installing 8-port hubs allows systems to be expanded, sometimes without dispatching a technician. For larger installations, PrOfitS daisy-chains additional 4- or 8-port hubs as needed. This enables hubs to be placed in closer proximity to the devices being connected, which minimizes long cable runs.

• Boundless Capio™ or Viewpoint® thin clients. Boundless Technologies provides the touchscreen drivers already embedded in the devices. Reliability and industry-leading management software make Boundless the perfect fit for PrOfitS installations.

• Liberty touchscreen monitors. Testing showed that Liberty produced a better-quality image than others tested. These screens employ resistive Elo touchscreens that permit operators to press virtual buttons with pens or while wearing gloves.

[Figure 1 diagram: seven POS stations (Boundless thin clients), most paired with a cash drawer and cashier printer, plus a redemption center station, a Dell file server, office and label printers, a 6-line caller ID unit with dial-in modem on the telephone jacks, and Intercard smartcard reader/writers and a dispenser serving the games; all connections are Category 5 Ethernet with RJ45 connectors through a hub.]

Figure 1. Store-Level Architecture


• Epson TMT88 40-column receipt printers. Chosen for quality, the Epson TMT88 connects to the parallel port of the Boundless device. Thermal paper eliminates the need for ink cartridges. It also does not use multiple spindle threading, so the paper loads quickly and easily into the TMT88. Heat-resistive paper avoids problems with heat in kitchens and around hot food.

• MMF cash drawers. MMF provides the easiest interface to the Epson printer. A simple control code sent via the printer saves serial ports for other devices.

• Rochelle 2045 Caller ID box. This aids inbound telephone orders.

• Eltrol label printers. Label printers connect to one of six serial ports on the Dell server (two standard ports plus a 4-port Equinox serial I/O board). The Eltrol printer, which uses thermal paper, is located in the cut-and-box area of pizza restaurants and produces labels for the pizza boxes.

• Intercard 2-port smart card reader/writers. Intercard has proven to be effective in gaming environments, which have become popular in restaurants. These environments provide entertainment and produce incremental profits.

The POS Integrated Solution

The applications reside on a Dell server at each restaurant location. A 3Com 8-port mini-hub connects POS and office workstations to the server. Each POS station consists of Boundless thin clients with a Liberty touchscreen monitor, Epson TMT88 40-column receipt printer, and MMF cash drawer. Inbound telephone orders are aided by a Rochelle 2045 Caller ID box. Labels for pizza boxes are produced with Eltrol label printers. Redemption Center and other management applications are accessed from Boundless Capio or Viewpoint thin clients.

Intercard "Fun Teller" smartcard reader/writers are integral to Mr. Gatti's entertainment system. Customers insert $1, $5, $10, or $20 bills. The Intercard device writes the appropriate amount of credit on a smartcard and dispenses it to the customer. Customers then insert the card into the games they wish to play. The price for each game played is subtracted from the smartcard. Customers can add money or view balances remaining on their smartcards at any reader/writer. Intercard polls the reader/writers at each game and reports customer game preferences in real time.

Once patrons have finished playing games, they can redeem their earned points for a wide range of prizes. Prize points earned while playing are subtracted from the smartcard during the redemption process.
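The stored-value flow just described amounts to a small amount of per-card bookkeeping: credit is added when bills are inserted, each game deducts its price and awards points, and redemption deducts points. The C sketch below illustrates that logic under stated assumptions; the structure and function names are hypothetical illustrations, not Intercard's actual card layout or interfaces.

    #include <stdio.h>

    /* Hypothetical stored-value record; field names are illustrative,
       not Intercard's actual card format. */
    struct smartcard {
        int credit_cents;   /* money written to the card   */
        int prize_points;   /* points earned while playing */
    };

    /* Customer inserts a bill; the reader/writer adds the credit. */
    static void add_credit(struct smartcard *c, int cents)
    {
        c->credit_cents += cents;
    }

    /* A game deducts its price and awards points; returns 0 if the
       card lacks sufficient credit. */
    static int play_game(struct smartcard *c, int price_cents, int points)
    {
        if (c->credit_cents < price_cents)
            return 0;
        c->credit_cents -= price_cents;
        c->prize_points += points;
        return 1;
    }

    /* Redemption subtracts the points spent on prizes. */
    static int redeem(struct smartcard *c, int points)
    {
        if (c->prize_points < points)
            return 0;
        c->prize_points -= points;
        return 1;
    }

    int main(void)
    {
        struct smartcard card = { 0, 0 };
        add_credit(&card, 500);      /* customer inserts a $5 bill      */
        play_game(&card, 50, 12);    /* one 50-cent game, 12 points won */
        redeem(&card, 10);           /* trade 10 points for a prize     */
        printf("balance $%.2f, points %d\n",
               card.credit_cents / 100.0, card.prize_points);
        return 0;
    }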

PrOfitS reports that once contracts are in place, installations and training take approximately three weeks. A site review is performed and station locations are determined. Installing any new cabling and putting the necessary phone lines in place require the longest lead times for the PrOfitS installers. Existing phone lines must be split before re-routing to the caller ID box; operator training is typically completed in three days.

Managing the Bottom Line

Mr. Gatti's realized several specific benefits from the configuration of the PrOfitS solution:

• Dell has a broad line of servers that allow applications to be scaled to the needs of the business. The ability to dispatch technicians for service covered by warranty or maintenance agreements was key to the PrOfitS decision to standardize on Dell servers.

• Applications are maintained and updated on a single server; therefore, administration is focused on the server alone.

• Boundless thin clients exhibit high reliability because they have no moving parts. The Boundless Viewpoint Administrator utility permits remote OS and software updates and client configuration. The hierarchical approach of Viewpoint Administrator provides maximum flexibility in the logical grouping of devices, such as by restaurant region or departmental organization (wait staff, site managers, headquarters).

• Management receives reports in real time about customer selection and operation of all gaming devices, which are monitored continuously.

• Coin and currency mechanisms are notoriously troublesome. Jammed devices cause costly maintenance and take their associated games out of commission. With fewer moving parts, Intercard devices provide greater reliability and ensure greater customer satisfaction. Electronic gift certificates also can be issued on a smartcard.

Combining a Windows-based software product with a Windows-based terminal such as the Boundless Capio thin client, the PrOfitS solution provides Mr. Gatti's with the business information required to improve the bottom line and the software to manage the complexities of the pizza business.


The Next Step: ePrOfitS

As systems in the food service industry evolve, general trends among POS makers include increased emphasis on redundancy and a renewed emphasis on simplicity.

To address this, PrOfitS is now leveraging DSL (digital subscriber line) technology to deliver its POS system to restaurants via the Internet with a system called ePrOfitS. Using ePrOfitS reduces capital expenditures for small operators, shrinks administrative costs, and provides multi-unit operators real-time access to revenue and cost information along with the benefits of centralized storage.

Each restaurant has a local network of workstations connected to the Internet via a file server or router using high-speed ADSL (asymmetric digital subscriber line) telephone lines to the ePrOfitS data center in Houston. Multiple servers run application software and house the storage of each restaurant's database. All reports and data are accessible through the Internet. The data center secures and automatically backs up the data in multiple Dell servers with a 99.999 percent uptime guarantee. The system hardware includes thin client workstations with touchscreen monitors, cash drawers, thermal printers, and other peripherals.

Although the food service industry has been slower to adopt Windows-based applications than other industries, the combination of technologies from companies such as Dell Computer, PrOfitS Restaurant Systems, Boundless Corporation, and Intercard now provides restaurants with solutions that meet their unique requirements. Today, the right ingredients enable restaurants to enjoy an IS masterpiece comparable to the dining experience enjoyed by their customers.

Terry Shagman has 20 years of experience in applications programming and design, and has held various management positions in finance, technical support, sales, and marketing. Since 1993, he has directed efforts at Boundless Technologies to develop thin client products together with companies including Microsoft, Citrix, and others. Terry majored in accounting at the University of Texas at Austin.

Power Solutions Now Online!

Power Solutions is available online at www.dell.com/powersolutions.

You can also request a subscription, find information about placing an ad in the printed issue, or download writing guidelines for authors if you would like to contribute to the journal. But most importantly, you can send us any feedback that you would like to share about the journal.

Our previous issue focused on Windows 2000 technology. Here are some titles from that issue:

• Designing Your Windows 2000 Active Directory
• Clustering Options for Windows 2000
• Enhancements to Microsoft Cluster Server for Windows 2000, Advanced Server
• Windows 2000 Update: Terminal Services Comes of Age
• A First Look: Windows 2000 Tape Backup on Dell
• Protecting Windows 2000: VERITAS Backup Exec
• Electronic Fax Services for Enterprise Messaging at Dell
• Resolving Data Storage Demands with SANs

Be sure to visit us at www.dell.com/powersolutions.


The Numerically Intensive Computing (NIC) Group of the Center for Academic Computing at Penn State believed that access to parallel computing resources could be made widely available using off-the-shelf technology. This group has designed and built the Lion-X PC Cluster, which presents a balanced approach to providing cost-effective, general-purpose parallel computing cycles with high performance and high reliability.

Lion-X also offers research groups considering the purchase of their own cluster a unique opportunity to determine which hardware and networking technologies best suit their needs. Several research groups at Penn State are now using the system to test and port applications.

The Lion-X PC Cluster design balances the overall cost with the performance and reliability of a system expected to meet the requirements of serving a diverse group of researchers. Its high-performance symmetric multiprocessing (SMP) nodes and multiple high-speed data networks provide researchers with a powerful computational grid-ready PC cluster.

Designing a PC Cluster

The design and implementation of a cost-effective PC cluster require a delicate balance of performance, reliability, and expense. Increases in either performance or reliability can greatly increase system expense.

Traditionally, PC clusters are designed and built for a specific set of applications within a department or research group. Generally the job mix is well known and understood, and certain types of system failures can be tolerated. Since this PC cluster is often the only large system to be run at the departmental level, time and space constraints are not likely to be a significant issue.

With these considerations in mind, it is easy to build an inexpensive PC cluster, trading higher availability and/or greater performance for lower cost. For a central computing facility, however, there are still several issues that must be considered:

• A larger, more diverse user community must be served.

• The job mix can vary widely from coarse-grained to fine-grained parallel processes. This mix requires more investment in networking technology.

The Lion-X PC Cluster from Penn State

The Lion-X PC Cluster is a cost-effective, high-performance parallel computing system that enables Penn State faculty and other researchers to run complex computer simulation programs. Michael Dell, chairman and CEO of Dell Computer Corporation, nominated the Lion-X PC Cluster to become part of the Smithsonian collection. It was recently selected to become part of the Permanent Research Collection of the Smithsonian Institution's National Museum of American History and the 2000 Computerworld Smithsonian Collection. This article discusses the design, implementation, and performance of this PC cluster.

HIGH PERFORMANCE COMPUTING


• The system must provide the highest possible performance by including components such as fast CPUs, SCSI disks, a fast peripheral component interconnect (PCI) bus, and large amounts of memory.

• The PC cluster requires hardware that will not become obsolete soon after the cluster is assembled.

• Since larger user groups are more demanding than smaller ones, system availability and reliability must be very high, requiring the use of components such as redundant power supplies.

Facilities often have numerous other large systems, so it is necessary to minimize both floor space and staff intervention time. For example, hardware must be supported over its entire life cycle, both in terms of warranty and parts, and individual nodes should be easy to access. Furthermore, since rack-mounted equipment is simpler to maintain, it is generally recommended.

Other Factors to Consider

The Lion-X PC Cluster nodes and network hardware can consume a significant amount of electricity, so the NIC Group had to plan for sufficient electrical power for now and the future. Lion-X has 14 dedicated 120-volt, 20-amp circuits.

The Lion-X compute nodes, Dell PowerEdge 4350s, are four rack units high. This requires a large footprint for a 32-node cluster. The Myrinet cable lengths were limited to 10 feet. Given the footprint required for 32 Dell PowerEdge 4350 compute nodes, this cable length imposed a design constraint in which the farthest node must be within 10 feet of the Myrinet switch.

Designing High Availability in Lion-X

The central server node of a PC cluster is configured to perform multiple roles, such as central file server, user log-in system, queuing system master, and compute-node boot server. As a result, any downtime on the master node can force the entire cluster system to crash, making this node the single weakest link. Part of the Lion-X design was to use a server machine that offered high availability and high reliability in all aspects of its service portfolio.

The group chose a Dell PowerEdge 6300 server as the Lion-X central server. This machine offers built-in, hot-swappable RAID disk arrays and multiple redundant power supplies. The drive array is configured in RAID-5 format, whose parity data allows the array to survive the loss of one disk, plus a hot-swappable spinning spare drive that can be rebuilt into the array, providing two levels of redundancy.

Lion-X can lose two of its disks and up to two of its power supplies or power circuits and still remain in service, while waiting for repairs to be performed or power to be reapplied to the affected AC circuits. The hot-swappable capability allows the server to remain active while the affected components are replaced, which minimizes downtime and staff intervention time.

The Dell PowerEdge 6300 also offers support consistency. Its hardware will be supported over the projected lifetime of Lion-X and its parts availability will remain consistent.

The group also chose Linux as the operating system for Lion-X for numerous reasons, as outlined in Figure 1. Figure 2 shows the hardware and software configuration for Lion-X.

Measuring Lion-X Performance

Compute-node PCI direct memory access (DMA) performance is very important. Without proper PCI DMA performance, the high-speed networks would be data starved and performance would suffer. A 32-bit PCI bus at 33 MHz has a theoretical peak bandwidth of 132 MB/sec (4 bytes per transfer multiplied by 33 million transfers per second).

Several benchmark tests included the following:

• Using the Pallas Message Passing Interface (MPI) Benchmark Suite 1.2, the MPI point-to-point bandwidth was measured between pairs of compute nodes on the Lion-X cluster.


WHY LINUX FOR LION-X

• Stability
• Low cost
• Ease of remote administration
• Easy to upgrade and maintain
• Easy to distribute text-based configuration files
• Use of Network File System (NFS) for shared resources
• Updates and installation via Red Hat Package Manager (RPM)
• Open source code—less dependency on vendors to get a fix for a problem
• Minimal installation—no bloat that uses needed resources
• Minimal kernel—only those packages that are required need to be installed

Figure 1. The Advantages of Linux for Lion-X


• The PingPong test follows a classical pattern used for measuring message (data) startup and throughput time for a single message passed between two machines. All tests are run using a single CPU within each compute node, except for the N=64 test, which uses two CPUs per compute node. In the N=64 test, the loopback device was used rather than directly passing messages within memory. See Figure 3. (A code sketch of the ping-pong pattern appears after this list.)

• The Myrinet results are more dramatic than they appear for very small message sizes. For small messages, latency time is as important as bandwidth. When comparing the latency time for small messages using Myrinet and Fast Ethernet, performance improves by at least three times using Myrinet. Finer grained parallel codes that pass many small messages benefit from the increased network performance. See Figure 3.

• Two Numerical Aerospace Simulation (NAS) Parallel Benchmarks help gauge the overall performance of Lion-X. Integer Sort (IS) is a parallel sort over small integers, and LU is a simulated Computational Fluid Dynamics (CFD) application that uses symmetric successive over-relaxation (SSOR) to solve a block lower triangular/block upper triangular system of equations resulting from an unfactored implicit finite-difference discretization of the Navier-Stokes equations in three dimensions. The IS benchmark gauges a system's network bandwidth while LU gauges network latency. See Figure 4.
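To make the ping-pong pattern concrete, here is a minimal C sketch of such a timing loop written against the standard MPI interface. The message size and repetition count are illustrative placeholders, not the Pallas suite's actual parameters, and the program assumes it is launched with at least two MPI processes.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, i, reps = 1000;
        int nbytes = 1024;            /* illustrative message size */
        char *buf;
        double t0, t1;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc(nbytes);

        MPI_Barrier(MPI_COMM_WORLD);  /* ranks beyond 0 and 1 simply idle */
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            if (rank == 0) {          /* send, then wait for the echo */
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
            } else if (rank == 1) {   /* echo the message back */
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0) {
            double round_trip = (t1 - t0) / reps;          /* seconds */
            double bandwidth = 2.0 * nbytes / round_trip;  /* bytes/s, both ways */
            printf("msg %d bytes: latency %.1f us, bandwidth %.2f MB/s\n",
                   nbytes, round_trip / 2 * 1e6, bandwidth / 1e6);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Halving the measured round-trip time gives the one-way latency that dominates small-message performance, which is why the Myrinet latency advantage matters so much for finer grained codes.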

Lessons Learned from Lion-X

The NIC Group learned several lessons from Lion-X, as described in the following sections.

Evaluating Hardware Combinations

The use of high-performance compute nodes and redundant high-performance networks allowed the group to evaluate hardware combinations that provide the highest possible performance for the widest variety of applications. Lion-X can provide these high-performance cycles over its projected lifetime. The group has been unable to report more activity with the Alcatel/Packet Engines Gigabit Ethernet network because there are no suitable drivers. Currently the Gigabit Ethernet network handles all NFS traffic.

Preventing Downtime

The choice of server and compute nodes with hot-swappable, field-replaceable components such as disks, power supplies, and fans assures virtually no downtime from component failures. To date, the only significant Lion-X downtime occurs when nonredundant components fail in the central server node. Redundant power also ensures random circuit failures will not disable the entire cluster.


THE LION-X CONFIGURATION

Lion-X Hardware

• One central server node: Dell PowerEdge 6350
  — Dual 400 MHz Intel Pentium III Xeon™ processors
  — 1 MB cache for each processor
  — 1 GB of memory
  — 342 GB of disk space in RAID-5 arrays with hot swap
  — Redundant power

• 32 compute nodes: Dell PowerEdge 4350
  — Dual 500 MHz Intel Pentium III processors
  — 512 KB cache for each processor
  — 1 GB of memory
  — Redundant power supplies

• Two independent gigabit-speed internal network fabrics
  — Packet Engines GNIC-II, wire speed 1.0 Gb/sec
  — Myricom Myrinet, wire speed 1.28 Gb/sec

• Fast Ethernet

Lion-X Software

• Operating system
  — Red Hat Linux (current kernel version is 2.2.15)

Application Software

• Portland PGI Workstation 3.0
  — Fortran
  — High Performance Fortran (HPF)
  — C
  — C++

• MPICH 1.1.2
  — Myrinet GM driver version 1.1.1
  — p4 standard Ethernet

• IMSL (a collection of over 500 mathematical and statistical Fortran routines and functions used in mathematical and statistical analysis)

• NAG Fortran Libraries Mark 18

• NAG C Libraries

• Linear Algebra Package (LAPACK)

• Scalable LAPACK (ScaLAPACK)

• FFTW (a Fast Fourier Transform library)

• Other software libraries as required

Job Control

• Portable Batch System (PBS)

Figure 2. Lion-X Configuration


Supporting a PC Cluster

For ease of maintenance, it is important that all components in a PC cluster are supported throughout its entire projected lifetime. These components also must have a suitable warranty period. These steps help contain long-term cluster operating costs.

Long-term support also ensures a consistent parts base, which can lead to less incompatibility among parts in the future. The choice of hardware for Lion-X reflected this approach and remains a long-term goal of the Lion-X project. Furthermore, rack mounting of all Lion-X hardware and labeling of every cable have made it easier to service any component, which has proven to be a very positive cost benefit.

User Applications and Utilization

The Linux environment and standard tools and software libraries enable the user community to become very productive on Lion-X. Porting time has been minimal, with most users requiring a simple recompilation of their existing codes. Lion-X went into production on September 1, 1999, and utilization has climbed rapidly since that time.

Many staff members of the Center for Academic Computing (CAC) at Penn State contributed to this article. Please direct any questions to the CAC's Numerically Intensive Computing Group at [email protected].

CURRENT APPLICATIONS ON LION-X

Research groups are using Lion-X for the following scientific applications:

• Numerical modeling of hurricane motion and intensification using MM5

• Large-scale molecular dynamics calculations, probing the change in mechanical property over time of rubber material

• Computational Fluid Dynamics (CFD)

• Exploration of the Fractional Quantum Hall Effect using quantum Monte Carlo to calculate energies and density profiles

• Large-eddy simulations of the entrainment zone capping the atmospheric boundary layer

• Application of a Reynolds Averaged Navier-Stokes (RANS) solver for turbulence flow to evaluate the flow field in an air compressor to determine the optimum shape of the compressor

• Investigation of the electronic structure of group IV semiconductor alloys using Paratec (PARAllel Total Energy Calculation)

For other user performance runs, please see http://cac.psu.edu/beatnic/Cluster/Lionx/Perf/

[Figure 4 chart: Integer Sort benchmark throughput in MB per second versus packet size, comparing Myrinet runs with 2, 32, and 64 processors against Fast Ethernet with 32 processors.]

Figure 4. The Integer Sort Benchmark

[Figure 3 chart: LU Mop/s and IS Mop/s versus number of processors (up to 32), comparing the LU and IS benchmarks over Myrinet and Fast Ethernet.]

Figure 3. Fast Ethernet versus Myrinet


The concept of Beowulf clusters originated at the Center of Excellence in Space Data and Information Sciences (CESDIS), located at the NASA Goddard Space Flight Center in Maryland. The goal of building a Beowulf cluster is to create a cost-effective parallel computing system from mass-market commodity, off-the-shelf components to satisfy specific computational requirements in the earth and space sciences community.

Beowulf Clusters: Clusters of Commodity Computing Systems

The first Beowulf cluster was built from 16 Intel DX4™ processors connected by a channel-bonded 10 Mbps Ethernet, and it ran the Linux operating system. It was an instant success, demonstrating the concept of using a commodity cluster as an alternative choice for high-performance computing (HPC). After the success of the first Beowulf cluster, several more were built by CESDIS using several generations and families of processors and network interconnects.

High-Performance Computing with Beowulf Clusters

Beowulf clusters, constructed from commodity computer systems, have become the fastest growing alternative choice for building high-performance parallel computing systems. The rapid advancement of microprocessors, high-speed network interconnects, and other component technologies has facilitated many successful deployments of this type of cluster. This article provides an overview of Beowulf clusters and discusses the design choices for building a cost-effective, high-performance Beowulf cluster.

By Jenwei Hsieh, Ph.D.

Beowulf is the legendary hero from the Old English poem of the same name. In this epic tale, Beowulf saves the Danish kingdom of Hrothgar from two man-eating monsters and a dragon by slaying each.

Beowulf is used today as a metaphor for a new strategy in high-performance computing that exploits mass-market technologies to overcome the oppressive costs in time and money of supercomputing, thus freeing people to devote themselves to their respective disciplines. Ironically, building a Beowulf cluster is so much fun that scientists and researchers eagerly roll up their sleeves to undertake the many tedious tasks involved—at least for the first time they build one.

HIGH PERFORMANCE COMPUTING


At Supercomputing '96, a supercomputing conference sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE®), both NASA and the U.S. Department of Energy demonstrated clusters costing less than $50,000 that achieved greater than a gigaflop-per-second sustained performance.

With the rapid advancement and increasing availability of microprocessor technologies, high-speed network interconnects, and other related components, Beowulf clusters have become the fastest growing choice for building clusters for HPC. As of November 1999, the fastest Beowulf-class cluster, CPlant (Computational Plant) at Sandia National Laboratory, ranked 44th in the TOP500 supercomputer list.

While many commercial supercomputers use the same processor, memory, and controller chips that are employed by Beowulf clusters, they also integrate proprietary gluing technologies (for example, interconnection networks, special I/O subsystems, and advanced compiler technologies) that greatly increase cost and development time. On the other hand, Beowulf clusters only use mass-market components and are not subject to delays and costs from custom parts and proprietary design.

Logical View of a Beowulf Cluster

A Beowulf cluster uses a multicomputer architecture, as depicted in Figure 1. It features a parallel computing system that usually consists of one or more master nodes and one or more compute nodes, or cluster nodes, interconnected via widely available network interconnects. All of the nodes in a typical Beowulf cluster are commodity systems—PCs, workstations, or servers—running commodity software such as Linux.

The master node acts as a server for Network File System (NFS) and as a gateway to the outside world. As an NFS server, the master node provides user file space and other common system software to the compute nodes via NFS. As a gateway, the master node allows users to gain access through it to the compute nodes. Usually, the master node is the only machine that is also connected to the outside world using a second network interface card (NIC).

The sole task of the compute nodes is to execute parallel jobs. In most cases, therefore, the compute nodes do not have keyboards, mice, video cards, or monitors. All access to the client nodes is provided via remote connections from the master node. Because compute nodes do not need to access machines outside the cluster, nor do machines outside the cluster need to access compute nodes directly, compute nodes commonly use private IP addresses, such as the 10.0.0.0/8 or 192.168.0.0/16 address ranges.

[Figure 1 diagram: compute nodes running parallel applications over a message passing library on Linux, managed by a cluster management tool and joined by an interconnect; the master node serves as file server and gateway between the cluster and the LAN.]

Figure 1. Logical View of a Beowulf-Class Cluster


From a user's perspective, a Beowulf cluster appears as a Massively Parallel Processor (MPP) system. The most common methods of using the system are to access the master node either directly or through Telnet or remote login from personal workstations. Once on the master node, users can prepare and compile their parallel applications, and also spawn jobs on a desired number of compute nodes in the cluster.

Applications must be written in parallel style and use the message-passing programming model. Jobs of a parallel application are spawned on compute nodes, which work collaboratively until finishing the application. During the execution, compute nodes use standard message-passing middleware, such as Message Passing Interface (MPI) and Parallel Virtual Machine (PVM), to exchange information.
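As a concrete illustration of this model, the short C program below uses standard MPI calls to run one job per process, have each contribute a partial value, and combine the results on one rank; the computation itself is a placeholder.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, value, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each spawned job computes a partial result; here the "work"
           is just a placeholder value derived from the rank. */
        value = rank + 1;

        /* The message-passing middleware combines the partial results
           on rank 0, the conventional collection point. */
        MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d processes, sum of contributions = %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }

With middleware such as MPICH, a user on the master node would typically launch this kind of program with a command along the lines of mpirun -np 16 ./example, where the process count and program name are illustrative.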

Applications for Beowulf Clusters

Since a Beowulf cluster is an MPP system, it suits applications that can be partitioned into tasks, which can then be executed concurrently by a number of processors. These applications range from high-end, floating-point intensive scientific and engineering problems to commercial data-intensive tasks. Uses of these applications include ocean and climate modeling for prediction of temperature and precipitation, seismic analysis for oil exploration, aerodynamic simulation for motor and aircraft design, and molecular modeling for biomedical research.

Benefits of Beowulf Clusters

Beowulf clusters offer a number of specific benefits:

Cost-effective: One of the main benefits of a Beowulf cluster is its cost-effectiveness. Beowulf clusters are built from relatively inexpensive commodity components that are widely available.

Keeps pace with technologies: Since Beowulf clusters only use mass-market components, it is easy to employ the latest technologies to maintain the cluster as a state-of-the-art system.

Flexible configuration: Users can tailor a configuration that is feasible for them and allocate the budget wisely to meet the performance requirements of their applications. For example, fine-grain parallel applications (which exchange small messages frequently among processors) may motivate users to allocate a larger portion of their budget to high-speed interconnects.

Scalability: When the processing power requirement increases, the performance and size of a Beowulf cluster can be easily scaled up by adding more compute nodes.

High availability: Each compute node of a Beowulf cluster is an individual machine. The failure of a compute node will not affect other nodes or the availability of the entire cluster.

Compatibility and portability: Thanks to the standardization and wide availability of message passing interfaces such as MPI and PVM, the majority of parallel applications use these standard middleware layers. A parallel application using MPI can be easily ported from an IBM® RS/6000 SP2® or Cray® T3E to a Beowulf cluster. This is why Beowulf-class clusters are rapidly replacing these expensive parallel computers in the low-end to midrange HPC market.

Design Choices for Building a Beowulf Cluster

Beowulf is not a special software package, new network topology, or the latest Linux kernel hack. Beowulf is a concept of clustering commodity computers to form a parallel, virtual supercomputer. You can easily build a unique Beowulf cluster from those components that you consider most appropriate. No Beowulf cluster exists that is general enough to satisfy everyone's needs; there are numerous design choices for building a Beowulf cluster.

Following are some considerations derived from our previous experiences with building several Beowulf clusters and measuring their benchmarking results. The bottom line: a Beowulf cluster must be built to run your applications best.

Figure 2 shows the architectural stack of a typical Beowulf cluster, highlighting some of the design choices to consider.

Compute nodes: Qualities to consider for compute nodes include processor type, number of processors per node, size and speed of level 2 (L2) cache, speed of front-side bus, design of memory subsystem, and PCI bus speed. Some parallel applications are cache-friendly; that is, L2 cache easily accommodates these applications. Compute nodes with large full-speed L2 cache can boost the performance of this type of application. On the other hand, applications with random and wide-range access patterns will be more likely to benefit from a faster memory subsystem, rather than large L2 cache.

[Figure 2 diagram: the architectural stack of a typical Beowulf-class cluster, from top to bottom: highly parallel applications; middleware (MPI/Pro, MPICH, PVM); operating system (Windows NT/2000 or Linux); protocol (TCP/IP, VIA, GM); interconnect (Fast Ethernet, GigaNet, Myrinet); and compute nodes (PowerEdge 2450 or 6350 servers).]

Figure 2. Architectural Stack of a Typical Beowulf-Class Cluster


Interconnects and communication protocols: Unless the application can be nicely partitioned and only modest levels of communication are required, high-speed interconnects unveil the potential performance of a Beowulf cluster. We have observed that high-speed interconnects, such as GigaNet® and Myrinet, not only facilitate higher cluster performance than Fast Ethernet and Gigabit Ethernet, but also achieve a better performance/cost ratio.

Operating systems: The majority of Beowulf clusters are based on Linux. We also found others that took advantage of the multithreading feature of Windows NT and Windows 2000 for building Beowulf-like Windows NT clusters.

Message passing libraries: Some of the MPI implementations use a polling scheme to achieve ultra-low communication latency; others minimize the CPU cycles used by communication tasks with an interrupt-driven approach. The former is more suitable for applications with a fine-grain communication pattern, while the latter helps the majority of medium-grain and coarse-grain parallel applications. (A short sketch of the difference follows.)
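The trade-off can be sketched in a few lines of C against the standard MPI interface. In the sketch below, the busy-wait loop stands in for a polling-style scheme; a genuinely interrupt-driven scheme lives inside the MPI implementation rather than in user code, so the plain blocking receive simply marks where the wait would be handed to the library. The message itself is a placeholder.

    #include <mpi.h>
    #include <stdio.h>

    /* Polling style: spin on MPI_Iprobe until a message arrives.
       Lowest latency, but the CPU busy-waits instead of doing other work. */
    static void recv_polling(int *buf, int src)
    {
        int flag = 0;
        MPI_Status status;
        while (!flag)
            MPI_Iprobe(src, 0, MPI_COMM_WORLD, &flag, &status);
        MPI_Recv(buf, 1, MPI_INT, src, 0, MPI_COMM_WORLD, &status);
    }

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1) {
            value = 42;   /* placeholder payload */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* A plain blocking MPI_Recv here would let the library decide
               how to wait; the interrupt-driven behavior described above
               is a property of the MPI implementation, not of user code. */
            recv_polling(&value, 1);
            printf("received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }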

Application development tools: Many numeric-intensive parallel applications can achieve better performance by using proper optimization options of compilers and optimized common subroutines.

Beowulf Clusters on Dell Platforms

Numerous Beowulf-class and Beowulf-like clusters have been built on Dell platforms. Dell provides reliable commodity computer systems and industry-standard building blocks to help its customers build Beowulf clusters. The following examples illustrate successful Beowulf cluster deployments.

The Cornell Theory Center (CTC) at Cornell University (http://www.tc.cornell.edu/) has deployed two 64-node Beowulf-like clusters based on Windows 2000. The first cluster, called Velocity, is constructed from 64 Dell PowerEdge 6350 quad-CPU servers with a total of 256 Intel Pentium III Xeon 500 MHz processors. The second cluster, Velocity+, consists of 64 Dell PowerEdge 2450 dual-CPU servers with a total of 128 Intel Pentium III 733 MHz processors. Both clusters are interconnected via GigaNet cLAN™ interconnects. Figure 3 shows the topology of the Velocity+ cluster.

Earlier this year, the Pacific Northwest National Laboratory (http://www.emsl.pnl.gov:2080/mscf/) installed a Beowulf cluster with 96 Dell PowerEdge 1300 dual-processor servers and a GigaNet cLAN interconnect fabric that supports up to 128 compute nodes. It is the largest Beowulf cluster based on the GigaNet cLAN interconnect.

The Lion-X cluster (http://cac.psu.edu/beatnic/Cluster/Lionx/index.html) at The Pennsylvania State University consists of a Dell PowerEdge 6300 as the master node and 32 PowerEdge 4350s as compute nodes. The compute nodes are interconnected via Fast Ethernet, Gigabit Ethernet, and Myrinet. Besides supporting research groups at Penn State, Lion-X also provides a vehicle to help other research universities investigate the ideal configuration for their own Beowulf clusters.

The College of Computing at Georgia Institute of Technology (http://www.cc.gatech.edu/projects/ihpcl/clusters.html) is building a Beowulf cluster consisting of an interesting type of compute node: 16 Dell PowerEdge 8450 eight-way servers. The compute nodes are interconnected via Fast Ethernet and Gigabit Ethernet. This Beowulf cluster is designed to take advantage of the dense processing power of symmetric multiprocessing (SMP) servers.

Another unique type of Beowulf cluster is for scientific visualization. Researchers at Sandia National Laboratory are building clusters based on Dell Precision Workstation 620s for stereoscopic visualization and other high-end graphics simulations. These clusters are designed to employ the graphic capability and Rambus® memory subsystem of the Precision Workstation 620.

The Next Step for Beowulf Clusters

Beowulf-class clusters have successfully penetrated the HPC market, which was traditionally dominated by expensive "big iron boxes" from IBM, Cray, Silicon Graphics, and others. Beowulf clusters provide an alternative for building cost-effective, high-performance parallel computing systems.

We expect Beowulf to continue its rapid deployment to fulfill the need for technical computation. We also anticipate that more commercial parallel applications will find their way onto Beowulf clusters, such as applications for computational finance and data mining. Data mining is one of the most promising information technologies today; it helps us extract valuable information from huge databases by using novel tools and algorithms for automated knowledge discovery.


Upcoming Beowulf Articles

As we have discussed in this article, a Beowulf cluster is constructed from a set of building blocks that customers consider most suitable to their applications. Forthcoming articles will detail some of these building blocks, such as compute nodes, interconnects, and middleware, that impact the performance of a Beowulf cluster.

Jenwei Hsieh, Ph.D. ([email protected]) is a lead engineer on the Internet Infrastructure Technologies team and Cluster Development Group at Dell. Jenwei is responsible for developing high-speed interconnects as one of the building blocks for the Internet infrastructure and Beowulf clusters. He has published more than 30 technical papers in the areas of multimedia computing and communications, high-speed networking, serial storage interfaces, and distributed network computing. Jenwei has a Ph.D. in Computer Science from the University of Minnesota and a B.E. from Tamkang University in Taiwan.

[Figure 3 diagram: the 64 Velocity+ compute nodes, Ctc065 through Ctc128, distributed across 10 GigaNet cLAN switches (Switch 1 through Switch 10).]

Figure 3. Topology of the Velocity+ Cluster at CTC


One of your most important tasks as an IT manager is choosing the best implementation for your e-business system. Two important components of that implementation include e-mail and the e-Web server.

Transitioning to Standardized E-mail

As many businesses transition to an e-business, they often choose to standardize their e-mail software. The following example uses Microsoft Exchange Server as the chosen e-mail software.

Several effects occur as your deployment of Exchange Server progresses. First, you will need more than one Microsoft Exchange Server—probably many more. One Exchange Server is simply not enough to satisfy the bandwidth and storage capacity requirements.

Second, you will become less tolerant of outages—for any reason. This includes planned outages, such as downtime for backup, or unplanned outages due to insufficient capacity or errors.

Easing the Transition with Tivoli and Dell

Tivoli® software combined with Dell PowerEdge servers and PowerVault storage can help you meet these bandwidth and storage capacity requirements, as well as avoid outages.

Figure 1 shows a potential configuration for implementing several Exchange Servers. Exchange Server runs on PowerEdge servers that are connected to a Fibre Channel storage area network (SAN) built with PowerVault SAN fabric hardware. PowerVault Fibre Channel disk storage for storing the Exchange Server data and PowerVault tape libraries for storing backup copies are attached to the SAN.

Tivoli software manages the environment. First, Tivoli Storage Manager acts as the one-touch, central management point for backup of the environment. Next, Tivoli Data Protection for Microsoft Exchange Server is added to each Exchange Server. It uses Microsoft-supported interfaces that allow online full, copy, incremental, and differential backups of Exchange Directory and Information Stores. Tivoli builds on these interfaces to add exploitation of the SAN. Each Tivoli Data Protection for Microsoft Exchange Server agent has the ability to write its backup data directly through the SAN to the PowerVault tape library.

Tivoli Storage Manager provides the single point of control for tape device allocation and library management. The result is that Exchange data can be backed up without disruption while exploiting the technology capabilities of the 100-MB-per-second Fibre Channel SAN. We call this LAN-free backup, because there is no data path on the LAN.

Tivoli Storage Management Solutions for E-Business

In the last issue of Dell Power Solutions (Issue 1, 2000), we discussed several generic business solutions that combined Tivoli Storage Management software with Dell PowerVault storage. Our focus for this article is using those solutions to enhance an e-business site.

By Ron Riffe

ENTERPRISE MANAGEMENT


Finally, Tivoli Manager for Microsoft Exchange Server is added to each Exchange Server. This delivers a unique, end-to-end approach to managing Exchange Servers. It monitors performance and availability for the Exchange Server environment, manages all Exchange Server events, automates routine tasks, and deploys client software.

Tivoli Manager for Microsoft Exchange Server is fully integrated with Tivoli Enterprise. The result is a well-managed Exchange Server environment leading to less downtime due to unplanned outages.

Addressing Web Site Scalability Challenges

Another business milestone often associated with the transition to e-business is the creation of a Web site. As with e-mail, however, businesses often encounter scalability challenges with their Web sites as the volume of Web content, the number of Web site hits, and the amount of data transferred all increase. A traditional remedy has been to install additional network capacity, servers, and storage to handle the growth. Figure 2 shows an example of a Web site infrastructure prior to the availability of SANs.

Before SANs became available, a business added new Web servers as its bandwidth needs increased. Because there was no SAN, each Web server had its own, locally attached disk storage for content. In order to maintain a consistent Web view, the administrator of this environment had to manage the duplication and version control of the same Web content across all of the Web servers. As a result, the business had to purchase many times more disk space than is necessary today, because many duplicates of the same data were being stored.

Creating an E-Web Server with Tivoli and Dell

Tivoli software combined with Dell PowerEdge servers and PowerVault storage provides a better solution for resolving Web scalability challenges.

Figure 3 shows a potential configuration for implementing a shared storage environment on a SAN. A Web-serving application runs on PowerEdge servers that are connected to a Fibre Channel SAN built with PowerVault SAN fabric hardware. Attached to the SAN are PowerVault Fibre Channel disk storage for storing the shared content and PowerVault tape libraries for storing backup copies. Tivoli software manages the environment.

[Figure 1 diagram: Exchange servers running MS Exchange, Tivoli Data Protection for Exchange, and Tivoli Manager for Exchange, plus a Tivoli Storage Manager server, connected over Gigabit or 10/100 Mb Ethernet and through a PowerVault 51F Fibre Channel switch to PowerVault 650F processors and storage, PowerVault 630F expansion storage, and a PowerVault 35F bridge leading to PowerVault 120T or 130T tape libraries.]

Figure 1. Fibre Channel SAN for Backing up E-mail


[Figure 2 diagram: PowerEdge Web servers, each with its own locally attached PowerVault storage, connected over Gigabit or 10/100 Mb Ethernet.]

Figure 2. Pre-SAN Web Site Infrastructure

[Figure 3 diagram: PowerEdge Web servers running Tivoli SANergy File Sharing and Tivoli Storage Manager, connected over Gigabit or 10/100 Mb Ethernet and through a PowerVault 51F Fibre Channel switch to shared PowerVault 650F processors and storage, PowerVault 630F expansion storage, and a PowerVault 35F bridge leading to PowerVault 120T or 130T tape libraries.]

Figure 3. A Fibre Channel SAN-based Shared Storage Environment


First, Tivoli SANergy File Sharing enables all the Web servers to share the same content. The Web servers, therefore, need only one copy total of the content. One copy reduces disk requirements and reduces the complexity of managing multiple duplicates of the content.

Next, Tivoli Storage Manager provides backup for all Web content. Through its integration with Tivoli SANergy File Sharing, Tivoli Storage Manager is able to back up the Web content without moving any data through the Web servers or through the LAN. As a result, you can back up Web content without impact to your Web site, while exploiting the technology capabilities of the 100-MB-per-second Fibre Channel SAN. We call this server-less backup, because there is no data path on the LAN or through the Web servers.

Selecting the Right Storage Solution

Choosing the right strategy to implement and manage your e-business is an important task. Dell and Tivoli offer a logical solution to this pursuit. For additional information, please visit our Web sites at www.dell.com or www.tivoli.com.

Ron Riffe ([email protected]) is director of Business Unit Strategy for Tivoli Storage Management Solutions. Prior to joining Tivoli, Ron spent 10 years as a corporate storage manager for an international manufacturing firm.

Technology and Solutions for Today's Enterprises

Dell provides the latest technology and affordable solutions to today's enterprises with its PowerEdge network servers that deliver powerful, scalable, and reliable performance, and its PowerVault storage products, designed to drive high-end storage features in standard computing environments. The new PowerApp appliance servers provide Web hosting and Internet caching for building your Internet infrastructure.

Visit our Web site to see how some Dell customers are using these state-of-the-art products:

• Internet Grocer Homeruns.com
• Broadcast.com
• Dell.com
• Fast Search and Transfer™
• Cornell Theory Center
• Monster.com
• NextCard

These and other case studies and feature stories about Dell technology are found at www.dell.com/us/en/ipd/topics/ipd_ipdfolder_000_ipd_detail.htm.


Computer systems today include multiple types of computers manufactured by many companies and connected together through a complex network. Although these computers perform various tasks with some dependent on other computers, most are dependent on the network.

The task of managing many individual computers from different manufacturers plus the various networking components and other peripheral devices has increased in both scope and complexity. However, one way to greatly simplify the process and enhance your systems management capabilities is to determine what needs to be accomplished and implement the right steps the first time—and implement them correctly.

Dedicated Tools Often Add Complexity

Dedicated tools, originally intended to ease the burden of remotely managing equipment, have actually contributed to complicating the task of systems management. Over time, most hardware manufacturers began delivering software tools to assist in the remote monitoring and managing of the equipment that they produced. While this is generally considered a good development, it implies that each different type of equipment from each separate manufacturer requires a different tool. Therefore, at some point, the system administrator faces an unwieldy number of incompatible tools for systems management.

Simply stated, the systems management tools from each hardware manufacturer are designed to manage systems produced by that company. This leads to several important questions:

• How can you manage these various servers from different vendors from a single console?

• How do you consolidate management consoles without losing some level of functionality?

• How can you collect valuable information, data, and events in a single place when the individual vendors' tools do not interconnect or share data?

• How can you add equipment from a new hardware vendor without losing the comfort level of using the previous vendor's tools?

The Answer Is a Console

The answer is to connect the various tools into a manager of managers console. Although the phrase is a tongue twister, it aptly describes the idea of decoupling vendor platform management from enterprise systems management. That is, it elevates the concept of systems management to a higher level.

Management Console: Your First Step to Enterprise Systems Management

One of the most common challenges for today's enterprise system administrator is how to monitor and manage systems that range from PCs to servers from different hardware manufacturers. Administrators tend to consider the individual merits of the tools supplied by the hardware vendors, but they often overlook some of the more basic concepts involved. This article takes a look at one of the first steps in managing an enterprise: the central console.

By Dana Lloyd

ENTERPRISE MANAGEMENT


Systems management encompasses many facets—and even more for enterprise systems management. However, once you divide the tasks involved in systems management into separate pieces, it becomes easier to determine those you need first and how to combine the pieces to make a systems management environment that works across multiple hardware platforms.

A Dependable Central Collection Station

First and foremost in enterprise systems management is the task of creating a dependable central collection station. (Technically, the priority list is stable hardware and stable infrastructure, numbers one and two respectively.)

A central collection station becomes the cornerstone in a systems management framework—a place where every monitored function can send events and information. It becomes the fountain of knowledge or information about all systems in the enterprise. These monitored systems can range from network infrastructure components, such as routers, to mission-critical servers and storage devices, to important applications such as e-mail or Web services.

Many operations tend to have a single console that collects all events in the enterprise, while other operations opt to have several consoles dedicated to monitoring specific items within the enterprise. Regardless of the chosen method, desktop or end-user workstations are often not monitored, which helps to reduce the overall volume of management traffic.

Centralized Console versus Multiple Consoles

Some network operation centers might want a single console as a trap receptor that can be monitored 24 hours a day. Often, help desk staff work with and monitor this type of environment, because they are generally the first line of defense in answering calls from end users, as shown in Figure 1. Although this configuration is appropriate for most small to mid-sized operations, a single console can be used effectively in larger organizations with proper planning, design, and implementation.

Other operations prefer multiple consoles dedicated tomonitoring specific equipment only. In this configuration, it iscommon for the networking staff to monitor only networkinfrastructure components from their console, while the oper-ations group may only monitor certain servers in the environ-ment. In this environment, it is also common to standardizeon a specific manufacturer for common components and relymore heavily on their specific tools. The downside is that noinformation is collected or reported centrally.

Still other operations may have consoles located at each geographic site or region. These distributed consoles may monitor all the systems, or only selected components, at the remote site. In some instances, these dedicated or distributed consoles may forward selected events and information to a master console within the network.
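The filter-and-forward behavior of a distributed console can be sketched in a few lines of Python; the severity names and the forward() callback are illustrative assumptions, not any vendor's API.

    FORWARD_SEVERITIES = {"critical", "major"}  # assumed site policy: only serious events go upstream

    def handle_local_event(event, local_log, forward):
        """event is a dict such as {'source': 'router-01', 'severity': 'critical', 'text': 'link down'}."""
        local_log.append(event)                      # every event stays on the site console
        if event["severity"] in FORWARD_SEVERITIES:  # filter, then pass up to the master console
            forward(event)

    # Example wiring: the master console is stubbed out with a print.
    site_log = []
    handle_local_event(
        {"source": "router-01", "severity": "critical", "text": "link down"},
        site_log,
        forward=lambda e: print("forwarded to master:", e),
    )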

An effective management console requires proper planning and design when deciding the type of configuration to implement.


Figure 1. Central Console: Single View of Control


Some common considerations include:

Quantity of systems. Can a single console handle the volume of work that is required? How often is it necessary to poll systems for current status?

Location of the systems: LANs versus WANs. Does the network present any issues with delivering timely data across a wide area? Is there a local staff of people to monitor a local console? Is the wide area network (WAN) dependable?

Hierarchy of consoles. Is a hierarchy of consoles possible? Does the console allow for integration of distributed consoles? Can data be filtered and passed up to a master console?

The intended role of the console(s). Does any particular group intend to monitor the console? Does a system administrator need the ability to perform initial troubleshooting from the console? Does the console need to integrate with some other system or service?

One point that cannot be overemphasized about any collection station is its ultimate importance. It must be highly available and dependable. It must be well connected to the network and protected. If a collection station cannot receive timely events or cannot keep up with the workload, it provides minimal value.

Using or Leveraging the Data

The next step is to leverage or use the central data repository. A central repository of information can be used in a variety of ways. Although many management consoles can be extended to provide additional features, most central consoles provide the following features:

Current system status. Most consoles use a color-coding scheme, such as red = bad and green = good. Color, combined with other visual effects such as blinking or scrolling banners, can enable operators to get a quick indication that all is well or not so well.

Notification and dispatch of trouble calls. Some consoles allow integration into a trouble ticket system. This can prompt a trouble ticket to be automatically generated and dispatched as soon as the offending event is received. Whether or not the console can issue a trouble ticket, many can be configured to notify or page a technician automatically when it receives an event.

Problem isolation. Using the color coding described above, along with other methods such as zooming in on alerted systems, makes it easy to spot where a network failure has occurred.

Resolution verification. An operator can quickly see if the trouble has been resolved and the symbols have returned to their normal status.

Event correlation. Some consoles use a method that prioritizes the more critical items and lists only these items when multiple failures occur. For example, when a router goes down, it is generally not necessary to be notified that every device beyond that router is unavailable (a sketch of this correlation rule follows this list).

Central launch pad. Most consoles also allow for the integration of third-party tools. Hardware vendors can use this feature to attach or integrate their tools into the management console. The tools can then be launched from the central console to further diagnose or potentially repair some systems.
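To make the event-correlation rule concrete, here is a minimal Python sketch under stated assumptions: the topology map is an invented input (real consoles discover it), and the rule suppresses any down device whose upstream router is itself down, so operators see the root cause rather than dozens of symptoms.

    # device -> the upstream router it sits behind (None = directly reachable).
    # An assumed input; a real console builds this from discovery.
    TOPOLOGY = {
        "router-01": None,
        "server-a": "router-01",
        "server-b": "router-01",
        "server-c": None,
    }

    def correlate(down_devices):
        """Report only root causes: suppress devices whose upstream router is itself down."""
        down = set(down_devices)
        return [d for d in down if TOPOLOGY.get(d) not in down]

    print(correlate(["router-01", "server-a", "server-b"]))  # -> ['router-01']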

Integration of Tools into the Central Console

The third step is to integrate the tools from various manufacturers into the manager of managers console.


Figure 2. Dell OpenManage Connections Plugs into Existing Frameworks (diagram: Dell OpenManage Connections linking into existing management frameworks, including CA Unicenter TNG, HP OpenView Network Node Manager, Tivoli Enterprise Console, Tivoli TME 10 NetView, and Microsoft SMS)


Dell works with industry-leading systems management tool providers to ensure that Dell management tools and information can be integrated into a wide variety of enterprise systems management consoles and frameworks.

One of the more important aspects of this integration is adherence to industry-standard protocols and methods. The Dell OpenManage™ tool set uses the Simple Network Management Protocol (SNMP), Desktop Management Interface (DMI), and Common Information Model (CIM) protocols to transfer information to and from the Dell systems.

The integration tools that Dell provides are collectively known as the Dell OpenManage Connections, which can be downloaded from the Internet at no cost. They are available for a variety of management consoles and operating system platforms, as shown in Figure 2.

Dell OpenManage Connections has three main components:

• The SNMP MIBs: These management information bases (MIBs) help the central console interpret events that it may receive from a discovered Dell system.

• The Dell console tools: These individual tools can be loaded onto a separate computer or launched from the manager of managers console. These tools can usually be launched from a pull-down menu or by drilling down into a system as it is displayed on the console map.

• The Connections code: The main function of this "Dell discovery code" is to determine whether a particular system is a Dell system. If the answer is yes, additional checks are made to determine the system features and which protocol to use in further communications. Additionally, icons may be created automatically to launch the appropriate Dell tools.
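Conceptually, discovery code of this kind can be as simple as reading the agent's sysObjectID and testing whether it falls under the vendor's registered enterprise branch (Dell's IANA enterprise number is 674). In this Python sketch, snmp_get() is a hypothetical helper standing in for whatever SNMP library is available; the sketch is illustrative, not Dell's actual Connections code.

    SYS_OBJECT_ID = "1.3.6.1.2.1.1.2.0"   # MIB-II sysObjectID, reported by any SNMP agent
    DELL_ENTERPRISE = "1.3.6.1.4.1.674."  # Dell's registered IANA enterprise branch

    def is_dell_system(address, snmp_get):
        """snmp_get is a hypothetical helper: returns the OID as a string, or None on timeout."""
        oid = snmp_get(address, SYS_OBJECT_ID)
        return oid is not None and oid.startswith(DELL_ENTERPRISE)

    # On a positive match, discovery code of this kind would go on to probe which
    # protocol (SNMP, DMI, or CIM) to use and create console icons for the vendor tools.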

Ask the Right Questions

Planning, selecting, and implementing a manager of managers console involves many considerations. Truly robust and reliable systems management implementations are not the result of accidental or haphazard assembly of incongruent pieces. Instead, they result from asking the right questions, assessing the needs of the business, and carefully selecting and ultimately integrating the right tools.

Many companies search for experienced partners to help design, select, and/or implement an enterprise management environment. Dell understands this need and has formed partnerships with many of the industry-leading management console providers. Additionally, Dell has a trained staff of experienced consultants in its Dell Technology Consulting Group that can assist with any or all of these steps.

For more information on how the Dell Technology Consulting Group can assist with enterprise systems monitoring, visit http://www.dell.com/us/en/biz/services/service_dtc_svc.htm. Then choose "Systems Management."

Dana Lloyd ([email protected]) is a senior project manager with the Custom Solutions Group within the Enterprise Server Division at Dell. Dana has been in the telecommunications and computing industry for over 18 years at companies including Dell, IBM, and ROLM®. He is also coauthor of a book about thin client computing.


ACQUIRING THE DELL OPENMANAGE CONNECTIONS CODE

Integration code for Dell computer systems, for a variety of industry-leading management consoles, can be downloaded at no charge from the Internet at http://www.dell.com/us/en/biz/topics/openmanage_om_main_connections_tools.htm

Note: Be sure to read any README files included with the connector code for last-minute installation or usage instructions.

Currently, several different connectors are available. Select the one that is appropriate for your environment and download it. Connectors are currently available for the following enterprise systems management applications:

• Computer Associates Unicenter TNG
• HP OpenView Network Node Manager
• Microsoft Systems Management Server (SMS)
• Tivoli TME 10™ NetView
• Tivoli Enterprise Console

Other industry-leading consoles are available that do not require any connector code from Dell; these consoles can exchange information with Dell systems on their own. Management stations such as PATROL from BMC Software and AppManager from NetIQ Corporation have their own code that enables them to receive alerts and exchange information with Dell servers. However, individual Dell tools may still be needed to perform certain operations on the Dell systems.



Oracle Fail Safe Release 3.0 extends high-availability solutions on Windows NT and Windows 2000 Advanced Server clusters to include both Oracle8i databases and applications built using Oracle Developer 6.0. Now, developers of Web-based forms and reports applications can use not only the native Java support from Oracle Developer 6.0, but also the high availability provided by Oracle Fail Safe Release 3.0. For the first time, a complete Internet business solution (the Web servers that deliver content, the Forms and Reports servers that host the application logic, and the back-end Oracle database) can be made fail-safe using Oracle Fail Safe.

Overview of Oracle Fail Safe

Oracle Fail Safe minimizes the downtime of both planned outages, such as system upgrades, and unplanned failures, such as unexpected hardware or operating system failures. When a failure occurs, the workload on the failed node is automatically restarted on the surviving node. By configuring the entire product stack for failover (that is, Web servers, Forms and Reports servers, and Oracle database servers), Oracle Fail Safe allows companies to build complete high-availability business solutions with no single point of failure.

Existing Oracle Forms and Reports applications can be made highly available without modification; there is no need to update or rebuild these applications. Moreover, Oracle Fail Safe is completely nonintrusive: without needing any knowledge of the underlying cluster, clients and end users simply connect to a single server at a fixed network address. Because Oracle Forms and Reports servers configured with Oracle Fail Safe are accessible at a single network address regardless of the cluster node on which the servers actually reside, clients simply reconnect to the same address to continue working after a failure. For example, users of a Web-based form can just click the reload/refresh button in their browsers to reconnect.

Highly Available Forms and Reports Applications with Oracle Fail Safe Release 3.0

According to studies by the Gartner Group and other research organizations, system downtime costs companies an average of over $14,000 per minute (for retail brokerages, the cost of downtime can exceed $100,000 per minute). For these businesses, nearly continuous access to their entire business solutions, to both data and application logic, is becoming a competitive necessity. For companies running their business solutions on Windows clusters, an easy-to-use option for high availability is Oracle Fail Safe. This article explains how to use Oracle Fail Safe Release 3.0 to create highly available Oracle Forms and Reports applications.

By Laurence Clarke

KNOWLEDGE MANAGEMENT



Configuring servers on clusters and setting conditions for failover and failback usually require detailed knowledge of the cluster environment. Oracle Fail Safe, however, provides an intuitive graphical user interface called Oracle Fail Safe Manager, which offers drag-and-drop cluster management, step-by-step configuration wizards, and intuitive property sheets. These property sheets simplify the setup and management of highly available resources, such as Oracle Forms and Reports servers.

The Oracle Fail Safe Manager also provides default configuration parameters optimized to ensure fast failover times, comprehensive verification tools to troubleshoot and fix common problems, and a command-line interface to allow scripting and automation of cluster administration tasks. In addition, a detailed tutorial and extensive online help system assist new users in getting up to speed.

How Oracle Forms and Reports Servers Work with Oracle Fail Safe

Oracle Fail Safe is layered over the Microsoft Cluster Server (MSCS) software (included with Windows NT Server, Enterprise Edition, and with Windows 2000 Advanced Server) and tightly integrates with the shared-nothing cluster environment. A cluster is a group of independent computer systems, or nodes, working together as a single system. Nodes may be single-CPU systems or multiple-CPU symmetric multiprocessing (SMP) systems, and they typically communicate with each other via a private heartbeat network connection.

In a shared-nothing cluster, resources such as disks or IP addresses are owned and accessed by only one node at a time. Oracle Fail Safe works with the underlying cluster software to configure and monitor these resources for high availability; when a resource becomes unavailable and cannot be automatically restarted, the software will attempt to fail the resource over to another node. (Failover refers to the transfer of control over shared resources, such as disks, from one cluster node to another.)
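The restart-then-fail-over policy can be sketched in Python as follows; the Resource class and the restart threshold are illustrative assumptions, not the MSCS or Oracle Fail Safe interfaces.

    class Resource:
        """Stand-in for a cluster resource such as a disk, an IP address, or a database service."""
        def __init__(self):
            self.healthy = False
        def is_alive(self):
            return self.healthy
        def restart(self, on):
            print(f"restart attempted on {on}")
        def start(self, on):
            print(f"brought online on {on}")
            self.healthy = True

    def supervise(resource, owner_node, standby_node, max_restarts=3):
        """Try to restart in place; fail the resource over only when restarts are exhausted."""
        if resource.is_alive():
            return owner_node
        for _ in range(max_restarts):
            resource.restart(on=owner_node)
            if resource.is_alive():
                return owner_node        # recovered in place
        resource.start(on=standby_node)  # failover: the other node takes ownership
        return standby_node

    print(supervise(Resource(), "node1", "node2"))  # -> node2 after three failed restarts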

Figure 1 shows the three major components of the Oracle Fail Safe architecture:

• Oracle Fail Safe Resource Dynamically Linked Libraries (DLLs)
• Oracle Fail Safe Server
• Oracle Fail Safe Manager

Every type of resource that can be made fail-safe (for example, databases, Web servers, and Oracle Forms and Reports servers) requires an Oracle Fail Safe Resource DLL. This DLL provides resource-specific configuration, verification, and management information, such as resource dependencies, start-up requirements, and failover policies, to the Oracle Fail Safe server.

The Oracle Fail Safe server manages internode communication and works with the underlying cluster software to ensure fast failover of highly available resources during both planned and unplanned node outages. Together, the Oracle Fail Safe Resource DLLs and the Oracle Fail Safe server provide distributed configuration, verification, and management of Oracle Forms, Reports, and Web servers across a cluster. Oracle Fail Safe Manager automates the configuration of Oracle Forms and Reports servers, and provides an easy-to-use interface for performing cluster-related management, troubleshooting, and static load balancing.

Oracle Fail Safe Concepts

A group is a logical container of cluster resources, such as disks or database servers. All resources in a group share the same availability characteristics and can be owned by only one cluster node at a time. A group is the minimal unit of failover; that is, all the resources in a group fail over together. If a group has a virtual address (composed of a network name resource and an IP address resource), it becomes a virtual server accessible at a fixed network address regardless of the physical cluster node that owns the group. For example, a virtual server containing a Web-based form or report can be accessed at the same URL both before and after a failover.
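A group and its virtual address can be modeled in a few lines of Python to show why the unit of failover matters; the field names here are illustrative assumptions, not Oracle Fail Safe's API.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Group:
        """Resources that share availability characteristics and fail over as one unit."""
        name: str
        resources: List[str] = field(default_factory=list)  # disks, listeners, Forms/Reports servers
        network_name: Optional[str] = None  # with ip_address, forms the group's virtual address
        ip_address: Optional[str] = None
        owner_node: str = "node1"

        def is_virtual_server(self):
            return self.network_name is not None and self.ip_address is not None

        def fail_over(self, to_node):
            self.owner_node = to_node  # everything moves together; the virtual address is unchanged

    forms = Group("FORMS_GROUP", ["disk-f", "web-listener", "forms-server"],
                  network_name="forms-vs", ip_address="10.0.0.50")
    forms.fail_over("node2")
    print(forms.is_virtual_server(), forms.owner_node)  # -> True node2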

No changes are required for existing client applications to work with Oracle Forms and Reports servers deployed with Oracle Fail Safe. From the client perspective, each group appears as a highly available server. Failover of Oracle Forms and Reports servers appears to clients as a quick node reboot, typically completing in seconds.

Figure 1. Oracle Fail Safe Architecture (diagram: on each cluster node, Oracle8i, Oracle Forms, and Oracle Reports Resource DLLs sit above the Oracle Fail Safe Server, which runs over the MSCS cluster software on Windows NT and the network; Oracle Fail Safe Manager connects to both nodes, and the Oracle8i database resides on shared storage)




After a failover, clients reconnect to the fixed virtual server address; for example, users can simply click the reload button in their Web browsers and resume work.
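The client-side behavior can be sketched as a simple retry loop in Python: keep requesting the same virtual address until the surviving node answers. The URL and retry policy are illustrative assumptions.

    import time
    import urllib.request

    def fetch_with_retry(url, attempts=10, delay=5):
        """Keep requesting the same virtual address until the surviving node answers."""
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    return response.read()  # same URL before and after the failover
            except OSError:
                time.sleep(delay)  # failover typically completes in seconds
        raise RuntimeError(f"{url} still unavailable after {attempts} attempts")

    # page = fetch_with_retry("http://forms-vs.example.com/orders.html")  # hypothetical virtual address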

While quick failovers ensure that applications remain almost continuously available, the transient state in Oracle Forms and Reports servers (or more specifically, in the runtime engines) will not survive a failover; uncommitted work is lost, and the application or end user will need to reissue these transactions. Jobs scheduled against a highly available Oracle Reports server, however, will survive a failover; because the job queue is placed on the shared cluster disk, it can be accessed from any cluster node.

How To Create Fail-Safe Forms and Reports Applications

Once MSCS and the Oracle Fail Safe server have been installed on each node, making a fail-safe Forms or Reports application requires the following steps:

• Ensure that Oracle Developer Server (for Forms and Reports servers) and Oracle Application Server (for Web servers) have been installed in the same location on both nodes of the cluster.

• Copy the Web pages and the Forms or Reports files (that is, the .html files and the .fmx or .rdf files) to cluster disks attached to the shared storage interconnect between the nodes.

• Invoke the Create Group wizard in the Oracle Fail Safe Manager to create a group for your Forms or Reports application. When you finish creating the group, Oracle Fail Safe will automatically invoke the Add Virtual Address to Group wizard.

• Add one or more virtual addresses to the group using the Add Virtual Address to Group wizard. Clients will always access your fail-safe application at this address.

• Invoke the Add Resource to Group wizard once to create a fail-safe Oracle Web listener that will host your fail-safe application.

• Invoke the Add Resource to Group wizard a second time to create a fail-safe Forms or Reports server, or to make an existing Forms or Reports server fail-safe.

Figure 2 illustrates several steps from the Create Group and Add Resource to Group wizards. When a resource is added to a group, Oracle Fail Safe populates a cluster group and builds the appropriate dependency tree. All specified failover and failback policies are defined and registered with the cluster software, and the group is tested to verify that the resource will correctly fail over and restart on each cluster node.

Figure 2. Create Group and Add Resource to Group Wizards

Deploying Highly Available Forms and Reports Applications

Oracle Fail Safe offers several different configuration options for deploying highly available Forms and Reports applications, depending on the customer's workload and business needs.



The four main ways to deploy highly available Forms and Reports applications are:

• Active/Passive
• Active/Active
• Partitioned
• Multitiered

Although each solution differs in the way work is allocated between the cluster nodes, all share the following features:

• One or more Oracle homes are created on a private disk (for example, the system disk) on each node.

• All necessary Oracle product executables are installed in the Oracle homes on each node.

• All database, Forms, Reports, and Web files (for example, database data files, database log files, Forms executable files, PL/SQL files, Reports definition files, and Web pages) are placed on the shared cluster disks, so they can be accessed by both nodes.

Active/Passive Configuration

In an Active/Passive (or standby) configuration (see Figure 3), one node hosts the Oracle Fail Safe server, Oracle database server, and one or more Forms, Reports, and Web servers, while the other node remains idle, ready to pick up the workload in the event of a failure. The standby configuration provides the fastest failover response time, but at the cost of requiring the second node to sit idle.

This solution is less expensive than traditional standby solutions because there is only one copy of the data (kept on the group's cluster disks); thus, there is no need to purchase a second complete disk farm and the extra network bandwidth required for real-time data replication.

Active/Active Configuration

In the Active/Active solution shown in Figure 4, an Oracle Fail Safe server, an Oracle database, and one or more Forms, Reports, and Web servers reside on each cluster node. Each node backs up the other in the event of a failure. Compared to the Active/Passive configuration, the Active/Active configuration offers better performance (higher throughput) when both nodes are operating, but slower failover and possibly reduced performance when one node is down.

If each node runs at 50 percent capacity, this solution is similar to a standby solution (the equivalent of an entire node remains idle), but it makes better use of network bandwidth, since client connections are distributed over two nodes. It is also possible to run both nodes at higher capacities by enabling failover only for mission-critical applications.

The flexible architecture of Oracle Fail Safe allows many variations on the basic Active/Active configuration. Because each cluster node can host multiple virtual servers, each database, Forms server, Reports server, and Web server running on the node can be configured with its own failover and failback policies. Users can also combine the scripting support of Oracle Fail Safe (using the FSCMD command-line interface) with a system monitoring tool, such as Oracle Enterprise Manager, to automate the movement of groups for load-balancing purposes.
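Such automation might look like the following Python sketch. FSCMD is the real command-line interface mentioned above, but the MOVEGROUP verb and switch syntax shown here are assumptions; consult the Oracle Fail Safe documentation for the exact form.

    import subprocess

    def move_group(group, cluster, target_node):
        """Hand the group to another node through FSCMD (verb and switch syntax assumed)."""
        subprocess.run(["FSCMD", "MOVEGROUP", group,
                        f"/CLUSTER={cluster}", f"/NODE={target_node}"], check=True)

    def rebalance(group, cluster, owner, standby, owner_load, standby_load, threshold=0.80):
        """Crude policy: move the group when its owner is busy and the standby is not."""
        if owner_load > threshold and standby_load < threshold:
            move_group(group, cluster, standby)
            return standby
        return owner

    # Example: a monitoring tool reports node1 at 92% and node2 at 35% utilization.
    # rebalance("REPORTS_GROUP", "WEBCLUSTER", "NODE1", "NODE2", 0.92, 0.35)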

Partitioned Workload Configuration

The partitioned workload solution, shown in Figure 5, is a variant of an Active/Active solution in which the application workload resides on one node and the database workload resides on the other. In this example, Node 1 serves the highly available Forms, Reports, and Web servers, while Node 2 serves a highly available Oracle database. Each node backs up the other in the event of failure.

Figure 3. Active/Passive (Standby) Configuration (diagram: Forms clients in Web browsers reach a Windows NT cluster over the LAN/WAN; one node runs the Oracle Fail Safe server, Web server, Forms server, and Oracle8i server against shared disks holding the Web pages, Forms files, and database files, while a standby server waits)

Figure 4. Active/Active Configuration (diagram: Forms and Reports clients in Web browsers reach a Windows NT cluster over the LAN/WAN; each node runs an Oracle Fail Safe server, Web server, Forms server, and Oracle8i server, with shared disks holding marketing files for one group and sales files for the other)



If the private heartbeat network connection between the nodes has high bandwidth, then the Oracle Forms or Reports server may be able to optimize database transaction processing by using the private network rather than the public network to communicate with the database. Because the bandwidth requirement for internode heartbeat communication is small, the application server can take advantage of what is effectively a dedicated network link to the database.

Multitiered Configuration

In a multitiered configuration, Oracle Forms and Reports servers reside on separate systems from the database. This common way to scale an Oracle Forms or Reports business solution has a single back-end database driving a number of Forms or Reports servers running on multiple machines. For multiple Forms servers, a load-balancing Web server is used to connect to the Forms servers (this Web server represents a single point of failure in traditional multitiered configurations).

The entire solution can be made highly available by configuring just the Web server for failover with Oracle Fail Safe, and relying on the Web server to detect failures and redistribute incoming requests to surviving nodes. Availability also can be improved further by making one or more of the Forms servers fail-safe.

In the case of multiple Reports servers, one of the servers is designated the master and is responsible for dividing the workload among the remaining servers. In this configuration, both the Reports master and the Web server represent single points of failure that would normally require user intervention to correct. By deploying the Reports master and Web server with Oracle Fail Safe, these single points of failure can be eliminated (see Figure 6).

The multitiered configuration also allows for very flexible architectures, with multiple clusters and platforms working together. As an example, highly available Forms and Reports servers running on Windows NT clusters in the middle tier could interface with a back-end Oracle Parallel Server database running on a UNIX system. Different servers within the application tier can even run on different platforms.

Figure 5. Partitioned Workload Configuration (diagram: Forms clients in Web browsers reach a Windows NT cluster over the LAN/WAN; one node runs the Oracle Fail Safe server, Web server, and Forms server against shared disks holding the Web pages and Forms files, while the other node runs the Oracle Fail Safe server and Oracle8i server against the database files)

Figure 6. Multitiered Oracle Reports Configuration (diagram: clients connect to a fail-safe Reports master and Web server on a Windows NT cluster, which distributes work to Reports slaves 1 through n on any platform, backed by an Oracle database)



Customers with existing Reports servers running on UNIX machines therefore need only add a single Windows NT cluster running Oracle Fail Safe to make the Reports master (and thus the entire Reports tier) highly available.

Multitiered configurations also allow for incremental deployment of high availability into a customer's business solution; for example, by first adding high availability to less reliable middle-tier application servers before modifying legacy back-end database systems.

Oracle Fail Safe Provides High Availability

Oracle Fail Safe enables rapid deployment of highly available Oracle databases and Oracle Forms and Reports applications on Windows NT (or Windows 2000) clusters. Oracle Fail Safe solutions help Oracle Developer users who want a fast and easy way to make their forms and reports highly available. A variety of configurations are available to suit specific business requirements, from single-cluster configurations that offer an attractive low-cost entry point into the high-availability space, to scalable multitiered solutions that provide high availability with support for multiple platforms and technologies. Oracle Fail Safe Release 3.1, currently in beta development, adds support for Windows 2000 Datacenter Server clusters and further expands application-tier support to include automatic configuration wizards for Oracle WebDB, Oracle Applications Release 11i, Apache Web Server, and other components.

For more information about Oracle Fail Safe solutions, visit http://www.oracle.com/nt/clusters/failsafe/. In addition, the latest Oracle Fail Safe software, online documentation, and additional technical materials are available through the Oracle Technology Network (OTN) at http://technet.oracle.com/tech/nt/failsafe/.

Laurence Clarke is a member of the Oracle Fail Safe team. Please send any questions about this article or about Oracle Fail Safe in general to [email protected].

Not sure how applications will work in the real world?

Introducing the Dell Application Solution Center (ASC): a robust laboratory environment designed to test and tune applications and solutions before your product goes live.

The Dell ASC can configure your hardware, software, and network to accurately simulate your computing environment, providing an excellent proving ground for testing changes to your current applications or potential new solutions for your enterprise.

Available to Dell customers, commercial applications developers, and customer-based applications developers.

For more information, contact your Dell sales representative or visit us on the Web at www.dell.com.

ASC LABS: • Round Rock, Texas • Florham Park, New Jersey • Limerick, Ireland • Shanghai, China



Still using server technology from the last millennium?

Now is the perfect time to upgrade and standardize on Microsoft's new and improved Windows® 2000. And since Dell offers its PowerEdge® servers with Windows 2000 preinstalled, this is an excellent opportunity to upgrade your outdated server.

Technology today is moving at warp speed. Staying ahead of the curve is a challenge, and old servers have the potential to create serious roadblocks. Dell's PowerEdge servers and Microsoft's Windows 2000 are designed to help you handle next-generation technology demands. Increased system uptime, faster performance, and greater security can provide that leading edge.

Windows 2000 and Dell PowerEdge servers. In today's high-tech world, anything else could be a technological compromise.

Take a serious look at Dell's PowerEdge servers and Microsoft Windows 2000. Introduce yourself to the possibilities.

1 800 WWW DELL
www.dell.com