Siebel 7 Integration with VERITAS High Availability Solution

Technical Integration Brief

Siebel Systems, Inc.

Contents

Integration Overview
    Business Case
    Integration Architecture
    Validation Summary

Installation and Deployment
    System Requirements
    Installation Process
    Further Customization and Configuration
    Ongoing Administration

Availability
    How to Obtain Integration Software and Services
    Technical Support

Integration Overview

This document describes the integration between VERITAS’ High Availability solution and Siebel 7. It also communicates the results of the validation work performed and educates the customer on the merits of the resulting solution. The reader is assumed to be familiar with both products.

For more information on Siebel 7 architecture and functionality, refer to the following documents available from Siebel Systems:

♦ Siebel Bookshelf CD containing Siebel product documentation

♦ Siebel Administration Guide

♦ Siebel Applications Guide

♦ Siebel Installation Guide

♦ Siebel Release Notes

♦ Siebel eBusiness Application Integration Guide

♦ Technotes on Siebel SupportWeb

For more information on the VERITAS HA Solution, refer to the following documentation available from VERITAS Software:

♦ VERITAS Volume Manager Administrator’s Guide

♦ VERITAS File System Administrator’s Guide

♦ VERITAS Cluster Server User’s Guide

♦ VERITAS Volume Replicator Administrator’s Guide

♦ VERITAS Global Cluster Manager System Administrator’s Guide

♦ VERITAS Cluster Server Agent for Siebel Gateway Server Installation and Configuration Guide

♦ VERITAS Cluster Server Agent for Siebel Server Installation and Configuration Guide

♦ Cluster Server Agent for Oracle Installation and Configuration Guide


♦ Cluster Server Agent for Volume Replicator Installation and Configuration Guide

♦ Cluster Server Agent for NFS Mount Installation and Configuration Guide

Business Case

Maintaining high levels of access to systems and data is essential as companies increasingly rely on Siebel eBusiness Applications. An unavailable system can cause immediate financial damage through lost revenue opportunities and increased operational costs, and can impair future business through customer or partner dissatisfaction, a damaged reputation, and eroded customer confidence.

Furthermore, unplanned system downtime can cost companies hundreds of thousands of dollars, which justifies an investment in high availability technology to protect the value of the enterprise.

Providing continuous and uninterrupted access to Siebel 7 across heterogeneous environments, without compromising the quality of the user experience, is a challenge. VERITAS addresses this business continuity requirement with a complete High Availability solution.

By implementing the VERITAS High Availability Solution, IT staff can build higher levels of availability for their Siebel applications and throughout the data center, at levels once thought too expensive, too complex to install, or too difficult to manage. Below are just a few benefits achieved by implementing the solution:

♦ Protects Siebel eBusiness Applications against failure, from a site-wide disaster to a local fault.

♦ Eliminates Siebel application downtime, both planned and unplanned

♦ Combines file system and disk management technology to ensure easy management of storage, optimum performance, and maximum availability of essential data

♦ Deploys one storage management architecture and HA solution that supports all major operating platforms

♦ Supports the widest range of SAN configurations and storage platforms, including EMC, Hitachi, IBM, Sun, HP, Compaq, and others


Integration Architecture

Overview

The VERITAS HA solution ensures that Siebel eBusiness Applications are always available, easy to manage, and performing optimally. It enables automatic and manual switchover of Siebel processes both locally and globally, providing comprehensive system availability and management, minimal downtime, and a reduction in IT operational costs.

The solution includes the following products:

VERITAS Foundation Suite version 3.4

The VERITAS Foundation Suite combines the industry-leading technologies of VERITAS Volume Manager™ and VERITAS File System™ to deliver powerful, online storage management for enterprise computing and SAN environments. Optimal performance tuning and sophisticated management capabilities ensure continuous availability of Siebel applications.

VERITAS Cluster Server version 2.0

VERITAS Cluster Server is an industry-leading open systems clustering solution that eliminates both planned and unplanned downtime, facilitates server consolidation, and effectively manages a wide range of applications in heterogeneous environments. Supporting up to 32 nodes, Cluster Server has the power and flexibility to protect everything from a single critical database instance to very large multi-application clusters in networked storage environments.

VERITAS Global Cluster Manager version 1.2

VERITAS Global Cluster Manager monitors and controls multiple, geographically distributed VERITAS Cluster Server configurations. The Global Cluster Manager Disaster Recovery Option combines replication solutions with clustering to serve as a critical component for enterprises requiring a complete disaster recovery strategy, with the ability to fail over an entire site to a remote location.

VERITAS Volume Replicator version 3.2

VERITAS Volume Replicator delivers the foundation for seamless and continuous data availability across sites, protecting against disasters and site failures and optimizing planned site migrations. Built upon VERITAS Volume Manager, Volume Replicator reliably and efficiently replicates data to remote locations over any standard IP network. It provides robust, storage-independent replication when your business cannot tolerate data loss or prolonged downtime.

It’s important to understand the underlying architecture and implementation strategy to appreciate the merits of this solution.


Figure 1 depicts the deployment of two Siebel application instances across three geographically dispersed data centers. The London site supports the primary Siebel 7 installation at a headquarters (HQ) office. The Chicago site supports another Siebel instance installed at a regional office. The New York site provides disaster recovery services in addition to running other non-mission-critical applications. Cluster Server provides HA services at each site, while Global Cluster Manager coordinates HA services across all the sites. Volume Replicator ensures that all shared data is synchronized between sites. While replication from the primary sites is active, replicated data volumes at the disaster recovery site are not accessible to other applications there; however, the data in a replicated volume can be made accessible by breaking off a mirror volume, a capability provided by VERITAS Volume Manager.

Figure 1. Siebel 7 High Availability Configuration
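
For illustration, breaking off a mirror of a replicated volume is an ordinary Volume Manager snapshot operation. The following sketch is hypothetical (the disk group and volume names are invented) and assumes the snapshot mirror has had time to synchronize:

    # Hypothetical names: disk group "siebdg", data volume "siebvol".
    vxassist -g siebdg snapstart siebvol          # attach and synchronize a snapshot mirror
    vxassist -g siebdg snapshot siebvol siebsnap  # break the mirror off as its own volume
    mount -F vxfs /dev/vx/dsk/siebdg/siebsnap /reports   # use the copy, e.g. for reporting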

Site Protection

In Figure 1, each site is protected by one VERITAS cluster. While this environment consists of one cluster per site, additional clusters can be, and often are, configured at each site. The HQ cluster in London represents the primary site, on which the headquarters instance of Siebel 7 executes. The disaster recovery (DR) cluster in New York is in another data center that is capable of executing the headquarters instance of Siebel 7 if the London site becomes incapacitated. The New York cluster could also represent a site strategically located on another continent, in another time zone, in order to implement a follow-the-sun strategy.

The Regional cluster in Chicago represents a remote office whose Siebel application is integrated with the headquarters instance. Figure 1 illustrates that the New York site could provide disaster recovery services for both the London and Chicago sites. The Siebel customer may choose to configure the DR site to support site migration for both the HQ and Regional instances, or may elect to provide disaster recovery services only for the main HQ Siebel instance.

Shared Storage

Shared storage is a fundamental requirement for providing HA services. In this environment, shared storage is implemented with three storage area networks (SANs). A SAN enables each node in the VERITAS cluster to access the shared volumes.

Figure 1 also depicts Volume Replicator synchronizing shared data volumes between the sites; Volume Replicator uses IP across the WAN to perform this function. If replication is configured properly, disaster recovery time can be measured in minutes rather than days or weeks, and the potential data loss from a disaster is minimized, possibly eliminated. Replication can be reversed, from the New York site back to the London site, when the Siebel application is migrated back to London. Volume Replicator supports both synchronous and asynchronous replication modes; the Siebel validation project replicated data asynchronously.
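
As an illustration of the replication setup described above, the following vradmin sketch configures and later reverses replication between London and New York. All object and host names are hypothetical, and the exact options should be confirmed against the Volume Replicator documentation:

    # On the London (primary) host: create the primary RVG from the data
    # volume and its SRL log volume (names are hypothetical).
    vradmin -g siebdg createpri sieb_rvg siebvol sieb_srl

    # Register New York as the secondary for this RVG.
    vradmin -g siebdg addsec sieb_rvg london-host ny-host

    # Start replication; -a requests automatic initial synchronization.
    # (The asynchronous mode used in validation is an attribute of the
    # RLINK between the sites.)
    vradmin -g siebdg -a startrep sieb_rvg ny-host

    # After a disaster, New York becomes primary. For the planned migration
    # back, reverse the primary role so London is primary again.
    vradmin -g siebdg migrate sieb_rvg london-host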

Cluster Server Service Groups

Each Siebel 7 instance is composed of a set of interdependent processes that run in a distributed configuration in the data center. At a minimum, each instance must include the following processes:

♦ Siebel Web Server and associated Siebel Web Component

♦ Siebel Gateway Server

♦ Siebel Application Server (one or more)

♦ Relational database

The Siebel implementation architect is free to run all of these processes on a single node, or to optimize their distribution over several nodes within the confines of the enterprise network. The processes communicate over the network via TCP/IP; as a result, they can be distributed to any node in the enterprise connected to the TCP/IP LAN.

VERITAS Cluster Server protects Siebel application processes by encapsulating them into service groups. A Cluster Server service group usually represents a major process that supports an application (e.g. database server, application server, and Web server). Generally, service groups may execute on any node in the cluster. A service group contains system resources that collaborate to provide a set of services for the application. Common service group resources include disk groups, volumes, file systems, network interfaces, IP addresses, and application processes.
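
As a concrete illustration, a service group definition in the Cluster Server configuration file (main.cf) might look like the following minimal sketch. All names, devices, and addresses are hypothetical, and the Siebel-specific resource supplied by the dedicated agent is omitted; see the agent guides for its actual attributes:

    group SiebelGateway_SG (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1 }
        )

        DiskGroup gw_dg (
            DiskGroup = siebgwdg
            )

        Mount gw_mnt (
            MountPoint = "/siebelgw"
            BlockDevice = "/dev/vx/dsk/siebgwdg/gwvol"
            FSType = vxfs
            FsckOpt = "-y"
            )

        NIC gw_nic (
            Device = hme0
            )

        IP gw_ip (
            Device = hme0
            Address = "10.1.10.20"
            )

        gw_mnt requires gw_dg
        gw_ip requires gw_nic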

Encapsulating the Siebel application resources into Cluster Server service groups that may execute on any node transforms the nodes in the cluster into containers in which the Siebel service groups may execute. Siebel application processes are no longer confined to one physical server. Using specifically developed agents, Cluster Server can start, stop, monitor, and migrate the service groups on any configured node in the cluster, whether in response to a server fault or to permit system maintenance on one server while the remaining servers continue to provide application services. Refer to the Cluster Server documentation for complete descriptions of service groups and resources.

Each service group requires a dedicated Volume Manager disk group on which to store the service group’s data and programs. Dedicating a disk group to a service group enables the service group to be independent and mobile by allowing the disk group to be imported and exported on different servers in the cluster without affecting other service groups.
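
The mobility this provides comes down to a handful of Volume Manager operations, which Cluster Server automates during a switchover. A sketch with hypothetical disk group, volume, and mount names:

    vxdg deport siebsrv1dg                       # on the releasing node
    vxdg import siebsrv1dg                       # on the receiving node
    vxvol -g siebsrv1dg startall                 # start the group's volumes
    mount -F vxfs /dev/vx/dsk/siebsrv1dg/srv1vol /SiebelServer1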

Siebel Application Service Groups

The Siebel application service groups are supported by a series of Cluster Server agents. A Cluster Server agent is an interface through which the Cluster Server engine communicates with specific Siebel application processes. The primary actions performed by an agent are to online, monitor, and offline a service group resource. The primary agent used in each service group is named in the following sections. Refer to each agent's installation and configuration guide for more configuration details and options.

Siebel Gateway Server Service Group

This service group contains the Siebel Gateway Server process. The Siebel Gateway serves as a single entry point for accessing Siebel Servers and provides enhanced scalability, load balancing, and high availability across them. Since a Siebel Server instance cannot start if the Gateway Server is unavailable, the Gateway Server could become a single point of failure. Cluster Server Agent for Siebel Gateway Server version 1.1 was installed to support the Gateway.

Siebel Server Service Group

Figure 1 depicts two Siebel Server service groups in each installation; additional Siebel Servers can be deployed if necessary to meet performance and scalability requirements. A Siebel Server is a middle-tier platform that supports both back-end and interactive processes for all Siebel application clients. The interactive processes run as components within a Siebel Server instance and support functions such as mobile client synchronization, Siebel Web client support, integration with legacy or third-party data, business process automation, and workflow management. Components can operate in background, batch, and interactive modes. Some components can be deployed on multiple Siebel Servers simultaneously, while others can be deployed on only one server in the Siebel enterprise. Cluster Server Agent for Siebel Server version 1.1 and Cluster Server Agent for NFS Client version 2.1 were installed to support each Siebel Server instance.


Siebel File System Service Group

The Siebel File System is a shared directory that is network-accessible to each Siebel Server instance. It contains the shared physical files that may be accessed through a Siebel Server. The shared directory is implemented using the NFS services provided by the operating system. The key resources managed by this service group are the physical file system, the NFS daemons, and the NFS share.
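
A minimal main.cf sketch for this service group might combine the bundled NFS, Share, Mount, and IP agents as follows. Names, paths, and addresses are hypothetical:

    group SiebelFS_SG (
        SystemList = { node1 = 0, node2 = 1 }
        )

        DiskGroup fs_dg (
            DiskGroup = siebfsdg
            )

        Mount fs_mnt (
            MountPoint = "/export/siebelfs"
            BlockDevice = "/dev/vx/dsk/siebfsdg/fsvol"
            FSType = vxfs
            FsckOpt = "-y"
            )

        NFS fs_nfs (
            )

        Share fs_share (
            PathName = "/export/siebelfs"
            )

        NIC fs_nic (
            Device = hme0
            )

        IP fs_ip (
            Device = hme0
            Address = "10.1.10.30"
            )

        fs_mnt requires fs_dg
        fs_share requires fs_nfs
        fs_share requires fs_mnt
        fs_ip requires fs_share
        fs_ip requires fs_nic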

Web Server Service Group

The Web Server service group contains the Web server in which the Siebel Web Server Extension was deployed. The Siebel Web Server Extension is a server component that enables communication between Web clients and Siebel Servers. The iPlanet 4.1 Web server was implemented for Siebel validation testing, and Cluster Server Agent for Netscape SuiteSpot 1.3.0 was installed to support it. (SuiteSpot is a former name for iPlanet.)

Replication Service Groups

Although not shown in Figure 1, replication service groups were also implemented for validation testing. These service groups contain resources that perform the data replication functions provided by Volume Replicator. Each application service group (e.g., Siebel Gateway, Siebel Server, Oracle) has one associated replication service group that is responsible for replicating the application service group's programs and data. A replication service group must always be online before its associated application service group is brought online, as sketched below. Cluster Server Agent for Volume Replicator 2.0 was installed to support the replication service groups.
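
In Cluster Server, this ordering can be expressed as an online local group dependency in main.cf: the application group may come online on a node only after its replication group is online there. A sketch with hypothetical group names:

    group SiebelServer1_SG (
        SystemList = { node1 = 0, node2 = 1 }
        )

        requires group SiebelServer1_Rep_SG online local firm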

Software Installed Locally

To create a highly available environment as depicted in Figure 1, the following software is installed on the internal disks of each node:

♦ VERITAS Foundation Suite version 3.4

♦ VERITAS Cluster Server version 2.0

♦ VERITAS Global Cluster Manager version 1.2

♦ VERITAS Volume Replicator version 3.2

♦ VERITAS Cluster Server Agents for Siebel Gateway Server and Siebel Server version 1.1

♦ VERITAS Cluster Server Agent for NFS Client 2.1

♦ VERITAS Cluster Server Agent for Volume Replicator 2.0

♦ VERITAS Cluster Server NameSwitch Agent 1.0

These components provide the foundation services that enable each node to participate in a cluster.


Networks

Three types of networks are depicted in Figure 1: public Ethernet, private Ethernet, and a storage area network. The public Ethernet provides standard TCP/IP-based access for administrators and users to reach all servers and applications. The private Ethernets are configured as point-to-point connections between the two nodes in each cluster; these private, dedicated connections carry cluster heartbeat communications, and two are configured between the nodes in a cluster to provide redundancy. The storage area network enables the shared disk environment: each node has a fibre-optic connection to a Brocade switch, and the switch has fibre connections to a disk array.
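
For illustration, the two private heartbeat links are defined to the VERITAS cluster communication layers (LLT and GAB) in small text files on each node. The sketch below is hypothetical (node name and cluster ID are invented; the qfe ports match those used in validation):

    # /etc/llttab on node1
    set-node node1
    set-cluster 7
    link qfe0 /dev/qfe:0 - ether - -
    link qfe1 /dev/qfe:1 - ether - -

    # /etc/gabtab -- start GAB and seed the cluster when both nodes are present
    /sbin/gabconfig -c -n2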

Validation Summary

Validation Testing Environment

The Siebel Systems and VERITAS validation team configured multiple clusters in the lab to simulate the environment in Figure 1. Oracle was used as the relational database, and iPlanet was implemented for the Web servers. Figure 1 serves as a good logical outline of how each cluster was configured to support the testing process.

The validation team conducted extensive testing to ensure the integration with Siebel 7 was robust and complete. The team tested application functionality using formal QA methods before, during, and after Cluster Server failover and Global Cluster Manager site migration actions were performed, to ensure that the Siebel instance was 100% operational in all configurations.

The purpose of failover testing is to determine whether a given service group can successfully switch from one node in the cluster to another in both planned and unplanned scenarios. Planned switchovers are commonly done to facilitate system maintenance on a node in the cluster. For example, the data center operations group might want to apply an operating system patch that requires a system reboot. To do so, they could switch all Siebel service groups running on node 2 to another node in the cluster (node 1 in our example) in preparation for applying the OS patch to node 2. After the patch has been applied and the subsequent reboot has completed on node 2, the operations team could then switch the affected Siebel service group(s) back to node 2. This strategy eliminates the usual scheduled downtime window and avoids the impact that such a window would have on the user community.
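
The operator-facing side of such a planned switchover is a pair of Cluster Server commands; a sketch with hypothetical group and node names:

    # Move the Siebel service group off node2 before maintenance.
    hagrp -switch SiebelServer1_SG -to node1
    hastatus -sum                    # confirm the group is ONLINE on node1

    # ...patch and reboot node2, then move the group back...
    hagrp -switch SiebelServer1_SG -to node2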

On the other hand, an unplanned switchover occurs when a node in the cluster experiences some type of system fault. For example, if node 2 experienced a hardware failure (e.g. CPU fails), Cluster Server would automatically recover by switching all critical service groups from node 2 to another node in the cluster.

While effective testing of the service group switchover process is critical, it is equally important to fully test the site migration process. Site migration is the process used to move the entire application from one cluster to another. Referring again to Figure 1, in the event of a disaster in the London data center, the operations team would move the Siebel application to the alternate disaster recovery site located in New York. For completeness, the validation team successfully tested both individual service group site migrations and complete application migration operations.

The following table summarizes the test objectives and results, demonstrating the completeness of the VERITAS HA solution applied to a Siebel instance:

Feature                                     Result

Siebel Gateway Failover                     Successfully Tested & Validated
Siebel Gateway Site Migration               Successfully Tested & Validated
Siebel Application Server Failover          Successfully Tested & Validated
Siebel Application Server Site Migration    Successfully Tested & Validated
Siebel Web Server Failover                  Successfully Tested & Validated
Siebel Web Server Site Migration            Successfully Tested & Validated
Siebel File System Failover                 Successfully Tested & Validated
Siebel File System Site Migration           Successfully Tested & Validated


Installation and Deployment

This chapter details the hardware and software configuration required to run a Siebel instance in the context of the VERITAS High Availability Solution. It presents minimum hardware and software requirements along with recommended installation steps, and concludes with configuration tips developed during the validation testing process to help optimize a production installation.

System Requirements

Hardware Requirements

The VERITAS HA Solution was validated with Siebel 7 in a Solaris 8 environment. Most major Sun server platforms are supported, including workgroup servers, midrange servers, enterprise servers, and rack-optimized servers.

In a Solaris environment, the HA Solution has the following general hardware requirements:

Item                      Description

Nodes in the cluster      SPARC systems running Solaris 2.6 or later.
CD-ROM drive              Access to a CD-ROM drive on each system.
Disks                     Typical configurations require shared disks to support
                          applications that migrate between systems in the cluster.
Ethernet controllers      In addition to the built-in public Ethernet controller,
                          Cluster Server requires at least one more Ethernet
                          interface per system; two additional interfaces are
                          highly recommended.
Fibre Channel or SCSI     Cluster Server requires at least one built-in SCSI
host bus adapters         adapter per system to access the operating system disks,
                          and at least one additional SCSI or Fibre Channel host
                          bus adapter per system for shared data disks.
RAM                       Each Cluster Server system requires at least 256 megabytes.

Table 1. General Solaris Hardware Requirements

Refer to the following guides and release notes for a complete list of supported hardware and other requirements.

♦ VERITAS Volume Manager 3.2 Hardware Notes

♦ VERITAS Volume Manager 3.2 Installation Guide

♦ VERITAS Cluster Server 2.0 Installation Guide

♦ VERITAS Cluster Server 2.0 Release Notes

♦ VERITAS Global Cluster Manager 1.2 System Administrator’s Guide

♦ VERITAS Global Cluster Manager 1.2 Release Notes

Hardware        Description

Hosts           Five Sun Ultra 10s
CPUs            One 440 MHz CPU per node
Memory          1 GB RAM
HBA Card        Emulex LP8000-N1
Shared Disks    Each host was SAN-attached to one Sun A5200 disk array (JBOD)
                containing sixteen 9 GB disks; the SAN was supported by a
                Brocade 2400 fibre switch.
Network Cards   One internal Ethernet port and one Sun Quad Ethernet card

Table 2. Hardware Used for Siebel Validation Testing

Please refer to the Siebel 2000 Supported Platforms guide for details on Siebel client and server hardware requirements.

Software Requirements

Software             Description

O/S                  Solaris 8.0, 64-bit
O/S Patches          Solaris 8 Jumbo Bundle dated December 2001.
                     The following patches were required for Volume Manager:
                     108981-03, 108806-02, SUNWsan, 109529-06, 111413-02, 108901-04
Storage Management   VERITAS Foundation Suite 3.4, which includes:
                     VERITAS Volume Manager 3.2 patch01
                     VERITAS File System 3.4 patch02
High Availability    VERITAS Cluster Server 2.0 Patch 2
                     VERITAS Global Cluster Manager 1.2
                     VERITAS Cluster Server Agent for Siebel Gateway 1.1
                     VERITAS Cluster Server Agent for Siebel Server 1.1
                     VERITAS Cluster Server Agent for Netscape SuiteSpot 1.3.0
                     VERITAS Cluster Server Agent for NFS Client 2.1
                     VERITAS Cluster Server Agent for Volume Replicator 2.0
                     VERITAS Cluster Server NameSwitch Agent 1.0
Application          Siebel 7.0.4
Database             Oracle 8.1.7.2, 32-bit (patches 1416998, ID 599369, 1390304)
Web Server           iPlanet Enterprise Edition 4.1, SP8

Table 3. Software Used for Validation Testing

Networking Requirements

Each node in the cluster requires four network connections, described below.

Primary Ethernet Connection

The primary connection is a standard Ethernet connection configured to run TCP/IP and connected to the corporate LAN. For Siebel validation testing, the hme0 port was the primary Ethernet connection.

Primary Heartbeat Connection

Each machine in the cluster must communicate a “heartbeat” status to the other cluster members. The heartbeat connection must be redundant; therefore, network administrators must configure both a primary and a secondary interface. While this communication does not use TCP/IP, it does use Ethernet as the physical layer. For validation testing, the team connected port qfe0 on each node in the cluster to a separate, private hub dedicated to this purpose.

Secondary Heartbeat Connection

Working as a backup to the primary heartbeat connection, the secondary interface is connected to a third, separate and distinct network hub. Note that the network administrator can use either a network hub or a switched hub for this purpose.

Storage Area Network Connection

The final network connection supports the shared storage. For validation testing, the team attached each system to a Brocade 2400 fibre switch using standard fibre cables. The Sun A5200 storage array is also attached to the Brocade switch to complete the SAN configuration.

Figure 2 illustrates a typical two-node cluster network diagram. Notice there are three separate network hubs. The first is the standard system network interface, which runs TCP/IP. Hubs 2 and 3 support the primary and secondary heartbeat connections.

Figure 2. Two-Node Cluster Network Diagram

Skill Requirements

As presented earlier, one of the core objectives of validating the combined solution of Siebel 7 with the VERITAS HA Solution was to perform the “heavy lifting” required to conduct an application HA assessment, identify any single points of failure, create Cluster Server agents for Siebel 7, and test the monitoring, failover, and site migration processes in a representative configuration.

Siebel 7 and VERITAS High Availability Solution 13

Installation and Deployment ALLIANCES INTEGRATION BRIEF

The following skills are considered “essential” to successfully implement the VERITAS HA Solution with Siebel 7:

♦ General knowledge of the Siebel eBusiness Applications architecture as well as Siebel application installation and configuration best practice guidelines

♦ Intermediate UNIX system administration skills, emphasizing file system concepts such as file system creation, sharing, mounting and NFS

♦ Familiarity with basic network administration concepts and procedures

♦ Prior experience with the VERITAS products included in the solution

The following skills are “nice to have” as a way to extend the core solution to protect other data center components and/or handle multiple, simultaneous Siebel instances on heterogeneous machines:

♦ More detailed knowledge of and experience with the VERITAS products included in the solution

♦ Strong understanding of high availability, disaster recovery, and data replication concepts, methods, and technologies

♦ More advanced knowledge of UNIX system administration concepts, with a deeper understanding of journaling file systems, volume replication concepts, and log-based replication

♦ Advanced network configuration skills, including DNS and IP interface administration

VERITAS Enterprise Consulting Services can provide highly qualified and experienced consultants to ensure that your implementation of the HA Solution is successful.

Installation Process

The following is an overview of the steps to install and configure the HA Solution with Siebel 7. Refer to each product's installation guide for detailed instructions.

Pre-installation Planning

1. Determine which Siebel application processes to include in the clusters.

2. Analyze Cluster Server service group configurations.

3. Design disk groups, volumes, and file systems to support service groups.

4. Determine virtual IP addresses required to support service groups.

Siebel 7 and VERITAS High Availability Solution 14

Installation and Deployment ALLIANCES INTEGRATION BRIEF

Before Siebel 7 Installation

1. Install the operating system on all nodes.

2. Configure IP network communications on each node.

3. Install recommended operating system patches.

4. Install Foundation Suite (Volume Manager and File System).

5. Enable Volume Replicator by adding the license key.

6. Install Cluster Server on each system.

7. Install Cluster Server agents (Siebel 7, database, Web, etc.).

8. Install Global Cluster Manager on each system.

9. Create required disk groups, volumes, and file systems (a command sketch follows this list).

10. Update hosts files or DNS with virtual IP addresses.

11. Within each replication service group, create Cluster Server resources for the disk group, NIC, and IP.

12. Create required Volume Replicator disk objects.

13. Within each replication service group, create Replicated Volume Group (RVG) resources.

14. Within each application service group, create mount, NIC, and IP resources.

15. Create UNIX account names dedicated to Cluster Server service groups.

16. Install database software.

17. Install database client in each Siebel Server service group file system.
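
A command sketch for steps 9 and 10 above, with hypothetical disk, disk group, volume, and hostname values:

    # Step 9: one dedicated disk group, volume, and file system per service group.
    vxdg init siebsrv1dg sieb01=c1t1d0s2
    vxassist -g siebsrv1dg make srv1vol 8g
    mkfs -F vxfs /dev/vx/rdsk/siebsrv1dg/srv1vol
    mkdir -p /SiebelServer1
    mount -F vxfs /dev/vx/dsk/siebsrv1dg/srv1vol /SiebelServer1

    # Step 10: publish the service group's virtual IP address.
    echo "10.1.10.21  siebsrv1-vip" >> /etc/hosts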

Install Siebel Application

1. Install all Siebel application processes (Server, Gateway, etc.).

2. Apply Siebel patches if required.

After Siebel Installation

1. Add resources to the application service groups to support the Siebel application processes.

2. Finish Volume Replicator configurations and start volume replication.

3. Bring all Cluster Server service groups online and evaluate memory utilization by node. Adjust service group deployment as needed.

4. Test switchovers within the cluster.


5. Duplicate the primary cluster's service group configurations at the secondary site.

6. Create global applications using Global Cluster Manager.

7. Test site migration.

Further Customization and Configuration

The following notes should be considered and factored into your implementation plan as appropriate.

♦ Use the VERITAS Nameswitch Agent to Manage the Siebel Remote Hostname

The Siebel Remote component requires that the hostname (i.e., the value returned by the command uname -n) of the physical node on which it is running remain unchanged at all times, even after the component is switched to another node in the cluster. Cluster Server meets this requirement using the NameSwitch agent, which changes the node's hostname before starting the Siebel Server instance. The validation configuration contained only one Siebel Remote component, deployed in the instance labeled Siebel Server 1. If a configuration contains multiple Siebel Remote components, the cluster must be configured to prevent them from running on the same node, which would cause a hostname conflict. The Siebel Server agent installation documentation explains this issue and the use of the NameSwitch agent in more detail.
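
Conceptually, the agent's action corresponds to the following Solaris commands; the agent's actual implementation may differ, and the hostname shown is hypothetical:

    uname -n            # the hostname that Siebel Remote expects to stay constant
    uname -S siebsrv1   # set the node name before starting the Siebel Server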

♦ Install database client connectivity software on dedicated Siebel Application Server file system

One of the steps in the Siebel Server configuration process involves installing the appropriate database client software (i.e. connectivity driver). Although the client software could be installed on one of several different file systems, it is recommended that it be installed on the same file system that is dedicated to the Siebel Server instance’s programs and data files. This file system will be in a disk group that is dedicated to the Cluster Server service group for the Siebel Server instance. This approach ensures that, as the Siebel Server service group is switched among nodes in the cluster, the database client software is always accessible to the Siebel Server instance.

♦ For higher reliability, configure the cluster to automatically restart the Siebel Application Servers when the database is restarted


For validation testing, the cluster was configured to automatically restart the Siebel Servers if, for any reason, the database experienced a switchover or a restart. This is an optional configuration step, since the Siebel Servers can be configured to automatically reestablish a lost database connection. The cluster-driven restart was selected because some Siebel Server components cannot rely on the auto-reconnect feature of the Siebel Server; it was more reliable to simply restart the Siebel Servers after the database was back online. This behavior was implemented using a post-online trigger, which the Siebel Server agent installation documentation explains in more detail.
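
For illustration only: a post-online trigger is a script that the Cluster Server engine invokes as /opt/VRTSvcs/bin/triggers/postonline with the system and group names as arguments. The sketch below restarts hypothetical Siebel Server groups after a hypothetical Oracle group comes online; the actual trigger logic is described in the Siebel Server agent documentation:

    #!/bin/sh
    # postonline <system> <group> -- illustrative sketch only
    GROUP=$2
    HAGRP=/opt/VRTSvcs/bin/hagrp

    if [ "$GROUP" = "Oracle_SG" ]; then
        for SG in SiebelServer1_SG SiebelServer2_SG; do
            # Find the node where this Siebel Server group is online.
            NODE=`$HAGRP -display $SG -attribute State | awk '/ONLINE/ {print $3; exit}'`
            if [ -n "$NODE" ]; then
                # Bounce the group so all components reconnect to the database.
                # (A production trigger would wait for the offline to complete.)
                $HAGRP -offline $SG -sys $NODE
                $HAGRP -online  $SG -sys $NODE
            fi
        done
    fi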

♦ Select logical server names NOT physical machine names during Siebel installation

During the installation of the Siebel application you are prompted to provide logical names for various objects (e.g. Siebel Enterprise, Siebel Gateway, and each Siebel Server instance). Given the virtual and mobile nature of a VERITAS-based HA environment, select names for these objects that are cluster-independent and node-independent. In other words, the object’s name should not imply any restriction about where it can be run. This will minimize confusion when viewing the objects using various system utilities and tools.

♦ Double-check hostname parameters after Siebel 7 installation

After completing the Siebel application installation, review the hostname parameters of the Gateway Server and the Siebel Servers. If necessary, use Server Manager to change a server's hostname parameter to the virtual hostname or virtual IP address assigned to that server's service group.
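
For example, the check can be made with the Siebel Server Manager command-line interface. The session below is purely illustrative: the enterprise, user, server, and parameter names are assumptions to be confirmed against the Siebel documentation:

    # Connect to the enterprise via the Gateway's virtual hostname.
    srvrmgr -g siebgw-vip -e SiebelEnt -u sadmin -p password

    # Point the server's host parameter at the service group's virtual
    # hostname ("Host" is illustrative; confirm the exact parameter name).
    srvrmgr> change param Host=siebsrv1-vip for server SiebelServer1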

♦ Use virtual hostnames for all Siebel services

When prompted to input hostnames during the Siebel application installation process, use the virtual hostnames assigned to the applicable Siebel service. This is important to ensure that all Siebel services are capable of starting on any node in the cluster and are able to communicate with other Siebel processes that are running on the same node or on another node in the cluster.

♦ Do not configure Resonate within the cluster

If implemented, Resonate should not be included in the cluster, but should instead be installed locally on each node requiring Resonate. Be aware of the possible impact on Resonate if hostname changes occur on a node running Resonate.

♦ Do not start Siebel File System service group on same node as Siebel Server


During validation testing, the NFS server supporting the Siebel File System would sometimes become unresponsive. This seemed to occur when the Siebel File System service group and a Siebel Server service group were online simultaneously on the same node. In that situation, the NFS client mount performed by the Siebel Server service group is actually remotely mounting a file system that is located on the same node on which it is executing. To avoid this instability, configure the cluster to prevent these service groups from being online on the same node at the same time. This can be achieved using Cluster Server trigger functionality or by constraining the nodes on which the service groups may execute; one possible approach is sketched below. Refer to the Cluster Server documentation for more information about these features.
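
One way to express this constraint, assuming the Cluster Server workload-management attributes (Limits on systems, Prerequisites on groups), is to give each node a single slot that both groups consume, so they can never be online on the same node together. A main.cf sketch with hypothetical names:

    system node1 (
        Limits = { FSSlot = 1 }
        )

    system node2 (
        Limits = { FSSlot = 1 }
        )

    group SiebelFS_SG (
        SystemList = { node1 = 0, node2 = 1 }
        Prerequisites = { FSSlot = 1 }
        )

    group SiebelServer1_SG (
        SystemList = { node1 = 0, node2 = 1 }
        Prerequisites = { FSSlot = 1 }
        )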

♦ Mount the Siebel File System within the Siebel Server directory tree

The recommended configuration is to mount the NFS file system within the directory used to mount the dedicated VxFS file system for the Siebel Server service group. For example, if the VxFS file system that supports the first Siebel Server instance is mounted on /SiebelServer1, then the mount point for the NFS file system could be /SiebelServer1/SiebelFileSystem. Note that the NFS file system is mounted one directory level below the VxFS mount, which means the VxFS file system must be mounted first.
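
The resulting mount order, with hypothetical device and hostname values:

    # 1. Mount the dedicated VxFS file system for the Siebel Server instance.
    mount -F vxfs /dev/vx/dsk/siebsrv1dg/srv1vol /SiebelServer1

    # 2. NFS-mount the Siebel File System one level below it.
    mount -F nfs siebfs-vip:/export/siebelfs /SiebelServer1/SiebelFileSystem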

♦ Be explicit and specify the unique NFS mount point for each Siebel Server instance

During the installation process of each Siebel Server, the Siebel installation program will prompt for the local mount point directory of the Siebel File System. Be sure to explicitly specify the unique mount point for that Siebel Server instance. Do not use the common parameters from a previous Siebel Server installation.

♦ Override directory for Siebel File System when necessary

When deploying a Siebel component on a specific Siebel Server instance, it may be necessary in some circumstances to override the component’s attribute that specifies the directory for the Siebel File System. This would potentially apply only to those components that require access to the Siebel File System.


Ongoing Administration

Ongoing system and application maintenance activities are well documented in the reference manuals that accompany both software suites. Please refer to the appropriate Siebel Systems and VERITAS documentation.

Taking a step back from the implementation details associated with the Siebel application and the VERITAS HA Solution, it is important to put the solution in a broader perspective. Wrapping the Siebel application processes with the VERITAS High Availability Solution is a critical step in an overall plan to ensure business continuity. However, the total solution includes additional important steps that require consideration:

♦ BACKUP

The starting and ending point for any business continuity plan is backup. To completely protect a production application, the data center operations team must design and execute a comprehensive backup plan to fully guarantee system recoverability. For this reason, VERITAS offers a full line of backup and recovery tools as a component of its overall business continuity suite. VERITAS backup and HSM solutions, branded as part of the NetBackup product line, deliver best-of-breed functionality for businesses of any size and for data located anywhere from the desktop to the data center.

♦ MIRROR

VERITAS and Siebel Systems advocate disaster avoidance whenever possible. As an added measure, the data center operations team should mirror all critical data components to complement the failover and disaster recovery services discussed and validated as part of the Siebel Systems and VERITAS partnership. VERITAS recommends mirroring all critical data, including the operating system, the database, and third-party application sources. VERITAS Volume Manager, bundled as part of the VERITAS Foundation Suite, is designed to ensure data availability and performance through data mirroring and striping.

♦ TRAIN

Training is the most often overlooked component of a complete business continuity plan. Make sure data center operations, systems programming staff, and application specialists are trained on both Siebel 7 and the VERITAS HA Solution.

♦ TEST

Once a baseline configuration is put into operation, it is critical to test both failover and site migration functionality periodically and regularly. Recommended practice is quarterly failover testing and semi-annual full site migration testing to ensure the configuration continues to function efficiently. It is also important to test failover and site migration functionality as part of the standard change control process. The change items of most interest are the application of software patches and fixes to the Siebel eBusiness Applications suite, the operating system, and the underlying relational database.


Availability

The following VERITAS high availability solution products are generally available.

♦ VERITAS Cluster Server 2.0

♦ VERITAS Foundation Suite 3.4

♦ VERITAS Global Cluster Manager 1.2

♦ VERITAS Volume Replicator 3.2

How to Obtain Integration Software and Services

To obtain VERITAS Enterprise Consulting Services, please contact VERITAS Software Corporation. Tel: 1-650-527-8000 (outside US) or 1-800-327-2232 (US); Fax: 650-527-8050.

Siebel eBusiness Applications

License Siebel eBusiness Applications from your Siebel Systems sales representative.

VERITAS HA Solution

For additional information about VERITAS Software, its products, or the location of an office near you, please call the VERITAS corporate headquarters or visit the VERITAS Web site at www.veritas.com.

The following VERITAS Cluster Server agents are available only through VERITAS Enterprise Consulting Services:

♦ VERITAS Cluster Server Agent for Siebel Gateway 1.1

♦ VERITAS Cluster Server Agent for Siebel Server 1.1

♦ VERITAS Cluster Server Agent for Oracle 2.0

♦ VERITAS Cluster Server Agent for Netscape SuiteSpot 1.3.0

♦ VERITAS Cluster Server Agent for NFS Client 2.1

♦ VERITAS Cluster Server Agent for Volume Replicator 2.0


♦ VERITAS Cluster Server NameSwitch Agent 1.0

For VERITAS Enterprise Consulting Services, please contact VERITAS Software Corporation. Tel: 1-650-527-8000 (outside US) or 1-800-327-2232 (US); Fax: 650-527-8050.

Technical Support

For assistance with Siebel eBusiness Applications, contact the technical support organization on the Web at http://ebusiness.siebel.com/supportweb or call toll-free at 800.214.0400 (650.295.5724). International customers should contact one of the regional technical support centers as follows:

♦ Brazil (São Paulo): +55 11 5110 0800

♦ UK (London): +44 800 072 6787 or +44 1784 494949

♦ Germany (Munich): +49 89 95718 400

♦ France (Paris): +44 800 072 6787 or +44 1784 494949

♦ Ireland (Galway): +44 800 072 6787 or +44 1784 494949

♦ Japan (Tokyo): 0120 606 750 (Japan domestic only) or +81 3 5464 7948 (outside of Japan)

♦ Singapore: +65 212 9266

For assistance with the VERITAS high availability solution products, or for information regarding VERITAS service packages, contact VERITAS Technical Support at 800-342-0652 or [email protected].

Customers from Europe, the Middle East, or Asia should visit the Technical Support Web site at http://support.veritas.com for a list of each country's contact information.
