ScienceLogic Architecture 7.3.5

ScienceLogic Architecture, ScienceLogic Version 7.3.5

Table of Contents

Overview
    Introduction
    ScienceLogic Appliance Functions
        User Interface
        Database
        Data Collection
        Message Collection
        API
    System Updates
Backup, Recovery & High Availability Options
    Overview
    Using a SAN
    Configuration Backups
    Scheduled Backups to External File Systems
    Backup of Disaster Recovery Database
    Database Replication for Disaster Recovery
    High Availability for Database Servers
    Differences Between Disaster Recovery and High Availability for Database Servers
    High Availability for Data Collection
    Restrictions
All-In-One Architecture
    Overview
    Disaster Recovery with an All-In-One Appliance
    Scheduled Backup Requirement
    Unsupported Configurations
Distributed Architecture
    Overview
    Architecture Outline
    Database Capacity
    Message Collection
    Interface Layer Requirements
    Scheduled Backup Requirements
Database Layer Configurations
    Overview
    Local Storage
    Single Database Server Using Local Disk Space
    Database with Disaster Recovery
    Single Database attached to a SAN
    Database attached to a SAN with Disaster Recovery
    Database attached to a SAN with Disaster Recovery attached to a SAN
    Clustered Databases with High Availability and SAN
    Clustered Databases with High Availability using Local Disk
    Clustered Databases with High Availability and Disaster Recovery
    Clustered Databases with High Availability and Disaster Recovery attached to a SAN
    Clustered Databases with High Availability Using Local Disk and Disaster Recovery
Interface Layer & API Configurations
    Interface Layer Configurations
    Single Administration Portal
    Multiple Administration Portals
    Load-Balanced Administration Portals
    API Layer Configurations
Collector Group Configurations
    Overview
    Using a Data Collector for Message Collection
    Using Multiple Data Collectors in a Collector Group
    How Collector Groups Handle Component Devices
    High Availability for Data Collectors
    Using Message Collectors in a Collector Group
Using Virtual Machines
    Overview
    Supported Hypervisors
    Hardware Requirements

Chapter 1: Overview

Introduction

This manual describes the architecture of ScienceLogic systems, covering all possible configurations of ScienceLogic appliances. This manual is intended for System Administrators and ScienceLogic staff who are responsible for planning the architecture and configuration of ScienceLogic systems.

    There are two types of ScienceLogic system:

• All-In-One. In this type of system, a single appliance handles all the functions of the ScienceLogic platform. The monitoring capacity of an All-In-One system cannot be increased by adding additional appliances.

• Distributed. The functions of the ScienceLogic platform are divided between multiple appliances. A Distributed system can be as small as two appliances, or can include multiple instances of each type of appliance.


ScienceLogic Appliance Functions

There are five general functions that a ScienceLogic appliance can perform. In large ScienceLogic systems, a dedicated appliance performs each function. In smaller systems, some appliances perform several functions. This section describes each function and the ScienceLogic appliances that can perform each function.

User Interface

Administrators and users access the user interface through a web browser session. In the user interface, you can view collected data and reports, define organizations and user accounts, define policies, view events, and create and view tickets, among other tasks. The appliance that provides the user interface function also generates all scheduled reports. The following appliances can provide the user interface:

• All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.

• Database Server. In small to mid-size ScienceLogic systems, a Database Server can provide the user interface in addition to its database function.

• Administration Portal. In mid-size and large ScienceLogic systems, a dedicated Administration Portal appliance provides the user interface.

    Database

    The appliance that provides the database function is responsible for:

• Storing all configuration data and data collected from managed devices.

• In a distributed system, pushing data to and retrieving data from the appliances responsible for collecting data.

• Processing and normalizing collected data.

• Allocating tasks to the other appliances in the ScienceLogic system.

• Executing automation actions in response to events.

• Sending all Email generated by the system.

• Receiving all inbound Email for events, ticketing, and round-trip Email monitoring.

    The following appliances can perform these database functions:

• All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.

• Database Server. In all distributed ScienceLogic systems, a Database Server provides all database functions. In some small to mid-size systems, a Database Server might also provide the user interface in addition to its main database function.


Data Collection

The ScienceLogic appliances that perform the data collection function retrieve data from monitored devices and applications in your network at regularly scheduled intervals. In a distributed system, appliances that perform the data collection function also perform some pre-processing of collected data and execute automation actions in response to events.

    The following appliances can perform the collection function:

• All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.

• Data Collector. In all distributed ScienceLogic systems, one or more Data Collectors provide all data collection functions. In some systems, a Data Collector can also perform the message collection function.

Message Collection

The ScienceLogic appliances that perform the message collection function receive and process inbound, asynchronous syslog and trap messages from monitored devices.

    The following appliances can perform the message collection function:

• All-In-One Appliance. In small ScienceLogic systems, the All-In-One Appliance performs all functions.

• Message Collector. In most distributed systems, dedicated Message Collector appliances perform message collection. A single Message Collector can handle syslog and trap messages from devices that are monitored by multiple Data Collectors.

• Data Collector. In some distributed systems, a Data Collector can also perform the message collection function. When a Data Collector is used for message collection, the Data Collector cannot be configured for high availability and handles fewer inbound messages than a dedicated Message Collector.

    API

The ScienceLogic platform provides a REST-based API that external systems can use to configure the platform and access collected data. For example, the API can be used to automate the provisioning of devices in the ScienceLogic platform.

The API is available through the Administration Portal, the All-In-One Appliance, and the Database Server.
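As a rough illustration, a device could be provisioned with an HTTP POST to the API. The endpoint path, field names, credential format, and hostname below are assumptions for the sake of the sketch, not the documented API schema; consult the ScienceLogic API documentation for the exact resource definitions.

```python
import json
import urllib.request

def build_create_device_request(base_url, auth_header, ip_address,
                                organization_uri, credential_uri):
    """Build a POST request that would provision a new device entry.
    All field names here are illustrative assumptions."""
    payload = {
        "ip": ip_address,
        "organization": organization_uri,
        "credentials": credential_uri,
    }
    return urllib.request.Request(
        base_url + "/api/device",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": auth_header,
                 "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical host and credentials; the request would then be sent
# with urllib.request.urlopen(request).
request = build_create_device_request(
    "https://em7.example.com", "Basic dXNlcjpwYXNz",
    "10.0.0.5", "/api/organization/0", "/api/credential/snmp/1")
```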

NOTE: In older ScienceLogic systems, a dedicated Integration Server appliance provides the API function. Although current ScienceLogic systems do not offer the dedicated Integration Server appliance, ScienceLogic will continue to support existing dedicated Integration Server appliances.


System Updates

The ScienceLogic platform includes an automatic update tool, called System Updates, that automatically installs software updates and patches. The updated software or patch is loaded onto a single ScienceLogic server, and the software is automatically sent out to each ScienceLogic server that needs it.

The System Updates page in the user interface allows you to update the software on a ScienceLogic system. You must first download the updates to the local computer (the computer where you are running the browser). You can then load the software update onto the ScienceLogic system. When the software is loaded onto the ScienceLogic system, you can install the software or schedule the software to be installed at a later time.

When you apply an update to your ScienceLogic system, the platform automatically uploads any newer versions of the default Power-Packs. If a Power-Pack included in an update is not currently installed on your system, the platform will automatically install the Power-Pack. If a Power-Pack included in an update is currently installed on your system, the platform will automatically import the Power-Pack, but will not install the Power-Pack.

    For full details on the System Update tool, see the System Administration manual.


Chapter 2: Backup, Recovery & High Availability Options

Overview

The ScienceLogic platform has multiple options for backup, recovery, and high availability. Different appliance configurations support different options; your backup, recovery, and high availability requirements will guide the configuration of your ScienceLogic system. This chapter describes each option and the requirements and exclusions for each option.

    This table summarizes which of the options listed in this chapter are supported by:

• All-In-One Appliances

• Distributed systems that do not use a SAN for storage

• Distributed systems that use a SAN for storage


Using a SAN

A Database Server can be connected to a SAN for data storage. If a Database Server uses a SAN for data storage, you can use the snapshot features provided by your SAN system to back up all ScienceLogic data.

    All-In-One Appliances cannot use a SAN for data storage.

In distributed systems, you can use a SAN for data storage only if the SAN will meet the I/O requirements of the ScienceLogic system. If you have a large system and are unsure if your SAN will meet these requirements, ScienceLogic recommends using Database Servers that include local solid-state drives (SSDs).

    If your ScienceLogic system uses a SAN for data storage, you cannot use the scheduled full backup feature.

For more information on using a SAN with the ScienceLogic platform, including SAN hardware requirements, see the Setting up a SAN manual.

Configuration Backups

The ScienceLogic platform allows you to configure a daily scheduled backup of all configuration data stored in the primary database. Configuration data includes scope and policy information, but does not include collected data, events, or logs.

The ScienceLogic platform can perform daily configuration backups while the database is running and does not suspend any ScienceLogic services.

The ScienceLogic platform saves local copies of the last seven days of configuration backups, plus the first configuration backup of the month for the current month and for each of the three previous months. Optionally, you can configure the platform to either copy the daily configuration backup to an external file system using FTP, SFTP, NFS, or SMB, or write the daily configuration backup directly to an external file system.
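The local retention rule described above can be sketched as follows. This is an illustrative sketch only; the platform's actual retention logic is internal to the product.

```python
import datetime

def backups_to_retain(backup_dates, today):
    """Which daily configuration backups are kept locally: the last
    seven days of backups, plus the first backup of the current month
    and of each of the three previous months."""
    keep = {d for d in backup_dates if 0 <= (today - d).days < 7}
    year, month = today.year, today.month
    for _ in range(4):  # current month plus three previous months
        in_month = [d for d in backup_dates
                    if (d.year, d.month) == (year, month)]
        if in_month:
            keep.add(min(in_month))  # first backup of that month
        month -= 1
        if month == 0:
            year, month = year - 1, 12
    return keep
```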

    All configurations of the ScienceLogic platform support configuration backups.

The configuration backup process automatically ensures that the backup media is large enough. The configuration backup process calculates the sum of the size of all the tables to be backed up, and then doubles that size; the resulting number is the required disk space for configuration backups. In almost all cases, the required space is less than 1 GB.
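The sizing rule above reduces to a one-line calculation:

```python
def required_backup_space(table_sizes_bytes):
    """Disk space required for configuration backups: the sum of the
    sizes of all tables to be backed up, doubled (per the rule above)."""
    return 2 * sum(table_sizes_bytes)
```

For example, tables totaling 250 MB would require 500 MB of disk space for configuration backups.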

Scheduled Backups to External File Systems

The ScienceLogic platform allows you to configure a scheduled backup of all data stored in the primary database. The platform performs scheduled backups while the database is running and does not suspend any services. By default, the platform creates the backup file on the local disk of the Database Server or All-In-One Appliance, transfers the backup file to the external system, then removes the backup file from the Database Server or All-In-One Appliance.


Optionally, if you are using an NFS or SMB mount to store the backup, the backup file can be created directly on the remote system.

    An All-In-One Appliance must meet the following requirement to use the scheduled full backup feature:

• The maximum amount of space used for data storage on the All-In-One Appliance is less than half of the total disk space on the All-In-One Appliance.

    A Distributed ScienceLogic system must meet the following requirements to use the scheduled full backup feature:

NOTE: For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied, but instead is written directly to offline storage) from the Disaster Recovery database.

• The maximum amount of space used for data storage on the Database Servers is less than half of the total disk space on the Database Servers.

• The Database Servers in the system do not use a SAN for data storage.

• A Distributed ScienceLogic system deployed on a database with disk storage must have a database smaller than 250 GB in size.
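The requirements above can be combined into a simple eligibility check. This is a sketch; whether the manual's "250 GB" means decimal GB or GiB is an assumption noted in the code.

```python
def scheduled_full_backup_supported(data_storage_bytes, total_disk_bytes,
                                    database_bytes, uses_san):
    """Check the scheduled-full-backup requirements for a Distributed
    system: no SAN for data storage, data storage under half of the
    total disk space, and a database smaller than 250 GB."""
    GB = 10 ** 9  # assumption: the manual does not specify GB vs GiB
    return (not uses_san
            and data_storage_bytes < total_disk_bytes / 2
            and database_bytes < 250 * GB)
```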

Backup of Disaster Recovery Database

For ScienceLogic systems configured for disaster recovery, you can back up the secondary Disaster Recovery database instead of backing up the primary Database Server. This backup option temporarily stops replication between the databases, performs a full backup of the secondary database, and then re-enables replication and performs a partial re-sync from the primary.

ScienceLogic recommends that you back up to an external file system when performing a DR backup.

• DR backup includes all configuration data, performance data, and log data.

• During DR backup, the primary Database Server remains online.

• DR backup is disabled by default. You can configure the ScienceLogic platform to automatically launch this backup at a frequency and time you specify.

• The backup is stored on an NFS mount or SMB mount.

Database Replication for Disaster Recovery

The ScienceLogic platform can be configured to replicate data stored on a Database Server or All-In-One Appliance to a disaster recovery appliance with the same hardware specifications. The disaster recovery appliance can be installed at the same site as the primary Database Server or All-In-One Appliance or can be installed at a different location.


If the primary Database Server or All-In-One Appliance fails for any reason, failover to the disaster recovery appliance is not automated by the ScienceLogic platform and must be performed manually.

NOTE: If the two Database Servers are not at the same site, you must also use the DRBD Proxy module to control network traffic and syncing.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.

    High Availability for Database Servers

    Database Servers can be clustered in the same location to allow for automatic failover.

A cluster includes an active Database Server and a passive Database Server. The passive Database Server provides redundancy and is dormant unless a failure occurs on the active Database Server. The ScienceLogic platform ensures that the data on each Database Server is synced and that each Database Server is ready for failover if necessary. If the active Database Server fails, the passive Database Server automatically becomes active and performs all required database tasks. The previously passive Database Server remains active until another failure occurs.

Each database cluster uses a virtual IP address. No reconfiguration of Administration Portals is required in the event of failover.

The cluster can use either DRBD replication (a software solution) or a SAN to synchronize data between the two Database Servers.

    The following requirements must be met to cluster two Database Servers:

• The Database Servers must have the same hardware configuration.

• Two network paths must be configured between the two Database Servers. One of the network paths must be a direct connection between the Database Servers using a crossover cable.

    All-In-One Appliances cannot be configured in a cluster for high availability.

For more information on database clustering for high availability, including a description of the scenarios under which failover will occur, see the Database Clustering manual.

Differences Between Disaster Recovery and High Availability for Database Servers

The ScienceLogic platform provides two solutions that allow for failover to another Database Server if the primary Database Server fails: Disaster Recovery and High Availability. There are several differences between these two distinct features:


• The primary and secondary databases in a High Availability configuration must be located together to configure the heartbeat network. In a Disaster Recovery configuration, the primary and secondary databases can be in different locations.

• In a High Availability configuration, the ScienceLogic platform performs failover automatically, although a manual failover option is available. In a Disaster Recovery configuration, failover must be performed manually.

• A High Availability configuration is not supported for All-In-One Appliances. A Disaster Recovery configuration is supported for All-In-One Appliances.

• A High Availability configuration maintains ScienceLogic system operations if failure occurs on the hardware or software on the primary Database Server. A Disaster Recovery configuration maintains ScienceLogic system operations if the data center where the primary Database Server is located has a major outage, provides a spare Database Server that can be quickly installed if the primary Database Server has a permanent hardware failure, and/or allows for rotation of ScienceLogic system operations between two data centers.

NOTE: A Distributed ScienceLogic system can be configured for both High Availability and Disaster Recovery.

High Availability for Data Collection

In a Distributed ScienceLogic system, the Data Collectors and Message Collectors are grouped into Collector Groups. A Distributed ScienceLogic system must have one or more Collector Groups configured. The Data Collectors included in a Collector Group must have the same hardware configuration.

In the ScienceLogic platform, each monitored device is aligned with a Collector Group and the platform automatically determines which Data Collector in that collector group is responsible for collecting data from the monitored device. The ScienceLogic platform evenly distributes the devices monitored by a collector group across the Data Collectors in that collector group. Each monitored device can send syslog and trap messages to any of the Message Collectors in the collector group aligned with the monitored device.

To use a Data Collector for message collection, the Data Collector must be in a collector group that contains no other Data Collectors or Message Collectors.

If you require always-available data collection, you can configure a Collector Group to include redundancy. When a Collector Group is configured for high availability (that is, to include redundancy), if one of the Data Collectors in the collector group fails, the ScienceLogic platform will automatically redistribute the devices from the failed Data Collector among the other Data Collectors in the Collector Group. Optionally, the ScienceLogic platform can automatically redistribute the devices again when the failed Data Collector is restored.

Each collector group that is configured for high availability includes a setting for Maximum Allowed Collector Outage. This setting specifies the number of Data Collectors that can fail while data collection still continues as normal. If more Data Collectors than the specified maximum fail simultaneously, some or all monitored devices will not be monitored until the failed Data Collectors are restored.
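The redistribution and outage behavior described above can be sketched as follows. This is a minimal illustration of the even-distribution and outage-threshold ideas; the platform's real balancing also accounts for device ratings, which this sketch ignores.

```python
def redistribute_devices(devices, healthy_collectors):
    """Assign each monitored device to one of the remaining healthy
    Data Collectors in round-robin order, e.g. after a collector
    failure removes one collector from the pool."""
    assignment = {collector: [] for collector in healthy_collectors}
    for index, device in enumerate(devices):
        collector = healthy_collectors[index % len(healthy_collectors)]
        assignment[collector].append(device)
    return assignment

def collection_continues(failed_count, maximum_allowed_outage):
    """Data collection continues as normal only while the number of
    failed Data Collectors stays within the configured maximum."""
    return failed_count <= maximum_allowed_outage
```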

High availability is configured per Collector Group, so a ScienceLogic system can have a mix of high availability and non-high availability collector groups, including non-high availability collector groups that contain a Data Collector that is also being used for message collection.


Restrictions

    High availability for data collection cannot be used:

• In All-In-One Appliance systems.

• For Collector Groups that include a Data Collector that is being used for message collection.

    For details on configuring Collector Groups, see the System Administration manual.

For more information on the possible configurations for a Collector Group, see the Collector Group Configurations chapter.


Chapter 3: All-In-One Architecture

Overview

In a ScienceLogic system that uses an All-In-One Appliance, a single appliance provides the user interface, performs all database functions, performs data and message collection, and provides API access.


Disaster Recovery with an All-In-One Appliance

You can configure an All-In-One Appliance to perform data replication to a secondary All-In-One Appliance for disaster recovery.

The secondary All-In-One Appliance must have the same hardware configuration as the primary All-In-One Appliance.

    Scheduled Backup Requirement

To use the scheduled full backup feature with an All-In-One Appliance, the maximum amount of space used for data storage on the All-In-One Appliance must be less than half the total disk space on the All-In-One Appliance. To use the scheduled full backup feature, the All-In-One Appliance must have a database less than 250 GB in size.

Unsupported Configurations

    The following features are not supported by All-In-One Appliances:

• Using a SAN for storage.

• High Availability for Database Servers.

• High Availability for Data Collectors.

• Additional Data Collectors, Message Collectors, or Administration Portals.


Chapter 4: Distributed Architecture

Overview

In a Distributed ScienceLogic system, the functions of the ScienceLogic platform are divided between multiple appliances. The smallest Distributed ScienceLogic system has two appliances:

• A Database Server that provides the user interface and performs all database functions.

• A Data Collector that performs data collection and message collection.

Large ScienceLogic systems can include multiple instances of each type of appliance. For a description of the appliance functions, see the Overview chapter.


    Architecture Outline

The general architecture of a distributed system includes two required layers, the database layer and the collection layer, and an optional layer, the interface layer.

The Database Layer Configurations, Interface Layer & API Configurations, and Collector Group Configurations chapters describe all the possible configurations for each layer.

    Database Capacity

In a ScienceLogic system, Database Servers and All-In-One Appliances are labeled with a Capacity. This capacity represents the total monitoring capacity of the ScienceLogic system.

For each discovered device, the ScienceLogic platform calculates a device rating based on the license that was used to install the ScienceLogic system and the amount of collection the platform is performing for the device. For example, a physical device that is polled frequently and has many aligned Dynamic Applications and monitoring policies could have a higher device rating than a component device that is polled infrequently and is aligned with only one or two Dynamic Applications.

The sum of all the rating values for all devices discovered in the system cannot exceed the Capacity for the Database Server or All-In-One Appliance. The Capacity Rating is defined by the license key issued to you by ScienceLogic. All Database Servers and All-In-One Appliances in a system must have the same capacity rating.
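The capacity constraint above amounts to a simple sum check (the rating values themselves are calculated by the platform, as described earlier):

```python
def within_capacity(device_ratings, capacity_rating):
    """The sum of the rating values of all discovered devices must not
    exceed the licensed Capacity of the Database Server or
    All-In-One Appliance."""
    return sum(device_ratings) <= capacity_rating
```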

For details on sizing and scaling your ScienceLogic system to fit your workload, contact ScienceLogic Support or your Account Manager.


Message Collection

When a Data Collector is used for message collection, the Data Collector can process approximately 20 syslog or trap messages per second.

When a Message Collector is used for message collection, the Message Collector can process approximately 100 to 300 syslog or trap messages per second. The number of syslog and trap messages that can be processed is dependent on the presence and configuration of syslog and trap event policies.
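A rough sizing calculation follows from these figures. Planning with the conservative low end of the range (100 messages per second) is an assumption of this sketch; actual throughput depends on your event policies, as noted above.

```python
import math

def message_collectors_needed(peak_messages_per_second,
                              per_collector_rate=100):
    """Rough count of dedicated Message Collectors needed, using the
    conservative low end of the stated 100-300 messages/second range."""
    return math.ceil(peak_messages_per_second / per_collector_rate)
```

For example, a site expecting a peak of 250 syslog/trap messages per second would plan for three dedicated Message Collectors at the conservative rate.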

Interface Layer Requirements

The interface layer includes one or more Administration Portals. The Administration Portal provides access to the user interface and also generates all scheduled reports. However, in some Distributed ScienceLogic systems, the interface layer is optional and the Database Server can provide all functions of the Administration Portal.

If your Distributed ScienceLogic system meets all of the following requirements, the interface layer is optional and your Database Server can provide all functions of the Administration Portal. If your system does not meet all of the following requirements, the interface layer is required and you must include at least one Administration Portal in your system:

• The ScienceLogic system will have a low number of concurrent connections to the web interface.

• The ScienceLogic system will have a low number of simultaneously running reports.

Precise requirements for concurrent connections and simultaneously running reports vary with usage patterns and report size. Typically, a dedicated Administration Portal is recommended for a ScienceLogic system with more than 50 concurrent connections to the web interface or more than 10 scheduled reports per hour.
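The rule of thumb above can be expressed directly; as stated, it is only a typical recommendation, not a hard limit.

```python
def dedicated_portal_recommended(concurrent_connections,
                                 scheduled_reports_per_hour):
    """A dedicated Administration Portal is typically recommended beyond
    50 concurrent web interface connections or 10 scheduled reports
    per hour."""
    return concurrent_connections > 50 or scheduled_reports_per_hour > 10
```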

    Scheduled Backup Requirements

The ScienceLogic platform allows you to configure a scheduled backup of all data stored in the primary database. The platform performs scheduled backups while the database is running and does not suspend any services. By default, the platform creates the backup file on the local disk of the Database Server or All-In-One Appliance, transfers the backup file to the external system, then removes the backup file from the Database Server or All-In-One Appliance.

Optionally, if you are using an NFS or SMB mount to store the backup, the backup file can be created directly on the remote system.

    An All-In-One Appliance must meet the following requirement to use the scheduled full backup feature:

• The maximum amount of space used for data storage on the All-In-One Appliance is less than half of the total disk space on the All-In-One Appliance.


    A Distributed ScienceLogic system must meet the following requirements to use the scheduled full backup feature:

NOTE: For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied, but instead is written directly to offline storage) from the Disaster Recovery database.

• The maximum amount of space used for data storage on the Database Servers is less than half of the total disk space on the Database Servers.

• The Database Servers in the system do not use a SAN for data storage.

• A Distributed ScienceLogic system deployed on a database with disk storage must have a database smaller than 250 GB in size.


Chapter 5: Database Layer Configurations

Overview

This chapter contains diagrams for all possible configurations of the database layer in a distributed ScienceLogic system. The possible configurations are:

• A single Database Server using local disk space for storage.

• One primary Database Server and one secondary Database Server for disaster recovery. Both Database Servers use local disk space for storage.

• A single Database Server using a SAN for storage.

• One primary Database Server and one secondary Database Server for disaster recovery with the primary Database Server using a SAN for storage.

• One primary Database Server and one secondary Database Server for disaster recovery with the primary Database Server and the secondary Database Server both using a SAN for storage.

• Two Database Servers in a high availability cluster using a SAN for storage.

• Two Database Servers in a high availability cluster using local disk space for storage.

• Two Database Servers in a high availability cluster using a SAN for storage with an additional Database Server for disaster recovery.

• Two Database Servers in a high availability cluster using a SAN for storage with an additional Database Server for disaster recovery that uses a SAN for storage.

• Two Database Servers in a high availability cluster using local disk for storage with an additional Database Server for disaster recovery.


    Local Storage

    ScienceLogic Database Servers include one of two types of local storage: HDD or SSD.

• For ScienceLogic systems that monitor 5,000 or more devices, Database Servers with SSDs will provide significantly improved performance.

• For ScienceLogic systems that monitor 10,000 or more devices, Database Servers with SSDs are the required design standard.

Single Database Server Using Local Disk Space


    The following restrictions apply to this configuration:

• The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

• To use the scheduled full backup feature with this configuration, the maximum amount of space used for data storage on the Database Server must be less than half the total disk space on the Database Server.

Database with Disaster Recovery

    The following restrictions apply to this configuration:

• To use the scheduled full backup feature with this configuration, the maximum amount of space used for data storage on the Database Server must be less than half the total disk space on the Database Server.

• To use the scheduled online full backup feature, a Distributed ScienceLogic system deployed on a database with disk storage must have a database less than 250 GB in size.


    l For large ScienceLogic systems,you must use the backup options for offline backups (that is, the backup is notcreated locally and copied but instead is written directly to offline storage) from the Disaster Recoverydatabase.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.
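The sizing rules above can be expressed as a quick feasibility check. The following is a minimal sketch of the stated limits, not a ScienceLogic utility; the function name and return shape are illustrative assumptions.

```python
# Minimal sketch of the backup sizing rules stated in this chapter:
# - the scheduled full backup feature requires data storage to use less
#   than half the Database Server's total disk space, and
# - the scheduled online full backup feature requires the database to be
#   less than 250 GB in size.

ONLINE_FULL_BACKUP_LIMIT_GB = 250  # limit stated in this manual

def backup_options(db_size_gb: float, total_disk_gb: float) -> dict:
    """Return which scheduled backup features the stated rules permit."""
    return {
        "scheduled_full_backup": db_size_gb < total_disk_gb / 2,
        "scheduled_online_full_backup": db_size_gb < ONLINE_FULL_BACKUP_LIMIT_GB,
    }

# Example: a 200 GB database on a 600 GB Database Server passes both rules.
print(backup_options(200, 600))
```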


    Single Database attached to a SAN

    The following restrictions apply to this configuration:

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    l The scheduled full backup feature cannot be used with this configuration. The SAN Configuration and Snapshots manual describes how to use SAN snapshots to back up the database in this configuration.


    Database attached to a SAN with Disaster Recovery

    The following restrictions apply to this configuration:

    l The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


    Database attached to a SAN with Disaster Recovery attached to a SAN

    The following restrictions apply to this configuration:

    l The scheduled full backup feature cannot be used with this configuration. In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


    Clustered Databases with High Availability and SAN

    The following restrictions apply to this configuration:

    l The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    l The scheduled full backup feature cannot be used with this configuration.

    l The backup from DR feature cannot be used with this configuration.

    For details on configuring High Availability for Database Servers, see the manual Database Clustering.


    Clustered Databases with High Availability using Local Disk

    The following restrictions apply to this configuration:

    l The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.

    l The two clustered Database Servers must use DRBD Replication to ensure data is synced between the two servers.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    l To use the scheduled online full backup feature, a Distributed ScienceLogic system deployed on a database with disk storage must have a database less than 250 GB in size.

    l The backup from DR feature cannot be used with this configuration.

    For details on configuring High Availability for Database Servers, see the manual Database Clustering.


    Clustered Databases with High Availability and Disaster Recovery

    The following restrictions apply to this configuration:

    l The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.

    l The scheduled full backup feature cannot be used with this configuration.

    l In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring High Availability for Database Servers, see the manual Database Clustering.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


    Clustered Databases with High Availability and Disaster Recovery attached to a SAN

    The following restrictions apply to this configuration:

    l The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.

    l The scheduled full backup feature cannot be used with this configuration.

    l In this configuration, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring High Availability for Database Servers, see the manual Database Clustering.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


    Clustered Databases with High Availability Using Local Disk and Disaster Recovery

    The following restrictions apply to this configuration:

    l The two clustered Database Servers must be located in the same facility and attached directly to each other with a network cable.

    l The two clustered Database Servers must use DRBD Replication to ensure data is synced between the two servers.

    l The scheduled full backup feature cannot be used with this configuration.

    l For large ScienceLogic systems, ScienceLogic recommends using the backup options for offline backups (that is, the backup is not created locally and copied but instead is written directly to offline storage) from the Disaster Recovery database.

    l The interface layer is optional only if the system meets all of the requirements listed in the Interface Layer Requirements section in the Distributed Architecture chapter.

    For details on configuring High Availability for Database Servers, see the manual Database Clustering.

    For details on configuring Disaster Recovery for Database Servers, see the manual Disaster Recovery.


    Chapter 6: Interface Layer & API Configurations

    Interface Layer Configurations

    In a Distributed ScienceLogic system, the interface layer is optional if the system meets all the requirements listed in the Distributed Architecture chapter. If the interface layer is required, it must include at least one Administration Portal.

    For all Distributed ScienceLogic systems, browser session information is stored on the main database, not on the Administration Portal currently in use by the administrator or user.


    Single Administration Portal

    In this configuration, the interface layer includes a single Administration Portal. An administrator or user can log in to the ScienceLogic platform using the Administration Portal:


    Multiple Administration Portals

    The interface layer can include multiple Administration Portals. An administrator or user can log in to the ScienceLogic platform using any of the Administration Portals:


    Load-Balanced Administration Portals

    A third-party load-balancing solution can be used to distribute traffic evenly among the Administration Portals:

    NOTE: ScienceLogic does not recommend a specific product for this purpose and does not provide technical support for configuring or maintaining a third-party load-balancing solution.


    API Layer Configurations

    The ScienceLogic platform provides an optional REST-based API that external systems can use to configure the platform and access collected data. For example, the API can be used to automate the provisioning of devices in the ScienceLogic platform.

    The API is available through the Administration Portal, the All-in-One Appliance, and the Database Server.
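As a rough illustration of automated provisioning through a REST-based API, the sketch below builds (but does not send) an authenticated HTTP request. The resource path, credentials, and payload fields are hypothetical assumptions for illustration only; consult the ScienceLogic API manual for the actual resource URIs and required fields.

```python
# Hypothetical sketch of provisioning a device via a REST-based API.
# The /api/device path, basic authentication, and payload fields below
# are illustrative assumptions, not documented values.
import base64
import json
import urllib.request

def build_create_device_request(base_url: str, user: str, password: str,
                                payload: dict) -> urllib.request.Request:
    """Build (but do not send) an authenticated JSON POST request."""
    req = urllib.request.Request(
        url=f"{base_url}/api/device",          # assumed resource path
        data=json.dumps(payload).encode(),
        method="POST",
    )
    req.add_header("Content-Type", "application/json")
    # HTTP basic authentication, assumed for this sketch.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_create_device_request(
    "https://em7.example.com", "em7admin", "secret",
    {"ip": "10.1.1.50", "name": "web-server-01"},  # illustrative fields
)
print(req.get_method(), req.full_url)  # → POST https://em7.example.com/api/device
```

In practice the request would be sent with `urllib.request.urlopen(req)` or an HTTP client of your choice.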

    NOTE: In older ScienceLogic systems, a dedicated Integration Server appliance provides the API function. Although current ScienceLogic systems do not offer the dedicated Integration Server appliance, ScienceLogic will continue to support existing dedicated Integration Server appliances.


    Chapter 7: Collector Group Configurations

    Overview

    For Distributed ScienceLogic systems, a collector group is a group of Data Collectors. Data Collectors retrieve data from managed devices and applications. This collection occurs during initial discovery, during nightly updates, and in response to policies and Dynamic Applications defined for each managed device. The collected data is used to trigger events, display data in the ScienceLogic platform, and generate graphs and reports.

    Grouping multiple Data Collectors allows you to:

    l Create a load-balanced collection system, where you can manage more devices without loss of performance. At any given time, the Data Collector with the lightest load handles the next discovered device.

    l Create a redundant, high-availability system that minimizes downtime should a failure occur. If a Data Collector fails, another Data Collector is available to handle collection until the problem is solved.


    In a Distributed ScienceLogic system, the Data Collectors and Message Collectors are organized as Collector Groups. Each monitored device is aligned with a Collector Group:

    A Distributed ScienceLogic system must have one or more Collector Groups configured. The Data Collectors included in a Collector Group must have the same hardware configuration.

    A Distributed ScienceLogic system could include Collector Groups configured using each of the possible configurations. For example, suppose an enterprise has a main data center that contains the majority of devices monitored by the ScienceLogic system. Suppose the enterprise also has a second data center where only a few devices are monitored by the ScienceLogic system. The ScienceLogic system might have two collector groups:

    l In the main data center, a Collector Group configured with high availability that contains multiple Data Collectors and Message Collectors.

    l In the second data center, a Collector Group that contains a single Data Collector that is also responsible for message collection.


    Using a Data Collector for Message Collection

    To use a Data Collector for message collection, the Data Collector must be in a collector group that contains no other Data Collectors or Message Collectors.

    NOTE: When a Data Collector is used for message collection, the Data Collector can handle fewer inbound messages than a dedicated Message Collector.


    Using Multiple Data Collectors in a Collector Group

    A Collector Group can include multiple Data Collectors to maximize the number of managed devices. In this configuration, the Collector Group is not configured for high availability:

    In this configuration:

    l All Data Collectors in the Collector Group must have the same hardware configuration.

    l If you need to collect syslog and trap messages from the devices aligned with the Collector Group, you must include a Message Collector in the Collector Group. For a description of how a Message Collector can be added to a Collector Group, see the Using Message Collectors in a Collector Group section of this chapter.

    l The ScienceLogic platform evenly distributes the devices monitored by a collector group among the Data Collectors in the collector group. Devices are distributed based on the amount of time it takes to collect data for the Dynamic Applications aligned to each device.

    l Component devices are distributed differently than physical devices; component devices are always aligned to the same Data Collector as their root device.

    NOTE: If you merge a component device with a physical device, the ScienceLogic system allows data for the merged component device and data from the physical device to be collected on different Data Collectors. Data that was aligned with the component device is always collected on the Data Collector for its root device. If necessary, data aligned with the physical device can be collected on a different Data Collector.


    How Collector Groups Handle Component Devices

    Collector Groups handle component devices differently than physical devices.

    For physical devices (as opposed to component devices), after the ScienceLogic system creates the device ID, the ScienceLogic system distributes devices, round-robin, among the Data Collectors in the specified Collector Group.

    Each component device must use the same Data Collector as its root device (the physical device that manages the component devices). The ScienceLogic platform cannot distribute component devices among the Data Collectors in the specified Collector Group.

    NOTE: If you merge a component device with a physical device, the ScienceLogic system allows data for the merged component device and data from the physical device to be collected on different Data Collectors. Data that was aligned with the component device is always collected on the Data Collector for its root device. If necessary, data aligned with the physical device can be collected on a different Data Collector.
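The distribution rules above can be illustrated with a short sketch: physical devices are placed round-robin across the Collector Group's Data Collectors, while each component device inherits the Data Collector of its root device. This is a simplified illustration with assumed names, not ScienceLogic internals.

```python
# Simplified sketch of the distribution rules described above: physical
# devices are assigned round-robin; component devices are pinned to the
# Data Collector of their root device.
from itertools import cycle

def assign_devices(collectors, physical_devices, component_devices):
    """component_devices maps component name -> its root physical device."""
    assignment = {}
    rr = cycle(collectors)
    for device in physical_devices:                  # round-robin placement
        assignment[device] = next(rr)
    for component, root in component_devices.items():
        assignment[component] = assignment[root]     # pinned to root's collector
    return assignment

result = assign_devices(
    ["DC1", "DC2"],
    ["esx-host-a", "esx-host-b"],
    {"vm-01": "esx-host-a", "vm-02": "esx-host-a"},
)
print(result)
# → {'esx-host-a': 'DC1', 'esx-host-b': 'DC2', 'vm-01': 'DC1', 'vm-02': 'DC1'}
```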

    High Availability for Data Collectors

    To configure a Collector Group for high availability, the Collector Group must include multiple Data Collectors:

    In this configuration:

    l All Data Collectors in the Collector Group must have the same hardware configuration.


    l If you need to collect syslog and trap messages from the devices monitored by a high availability Collector Group, you must include a Message Collector in the Collector Group. For a description of how a Message Collector can be added to a Collector Group, see the Using Message Collectors in a Collector Group section of this chapter.

    l Each collector group that is configured for high availability includes a setting for Maximum Allowed Collector Outage. This setting specifies the number of Data Collectors that can fail while data collection continues. If more Data Collectors than the specified maximum fail simultaneously, some or all monitored devices will not be monitored until the failed Data Collectors are restored.

    In this example, the Collector Group includes four Data Collectors. The Collector Group is configured to allow for an outage of two Data Collectors.

    When all Data Collectors are available, the ScienceLogic system evenly distributes the devices monitored by a collector group among the Data Collectors in the Collector Group. In this example, there are 200 devices monitored by the Collector Group, with each of the four Data Collectors responsible for collecting data from 50 devices. For simplicity, this example assumes that the platform spends the same amount of time collecting Dynamic Application data from every device; therefore, the devices are divided evenly across the four collectors.

    If one of the Data Collectors in the example Collector Group fails, the 50 devices that the Data Collector was monitoring are redistributed evenly among the other three Data Collectors:

    Collector Group Configurations

  • Collector Group Configurations

    If a second Data Collector in the example Collector Group fails, the 50 devices that the Data Collector was monitoring are redistributed evenly between the other two Data Collectors:

    If a third Data Collector in the example Collector Group fails, the Collector Group has exceeded its maximum allowable outage. Until one of the three failed Data Collectors becomes available, 100 devices are not monitored:
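The failover arithmetic in the example above can be sketched in a few lines. This assumes, as the example does, that every device costs the same collection time; the function name is an illustrative assumption.

```python
# Back-of-the-envelope sketch of the failover arithmetic above:
# 200 devices, four Data Collectors, and a Maximum Allowed Collector
# Outage of two.
def devices_per_collector(total_devices, collectors, max_outage, failed):
    """Devices each surviving collector handles, or None past the allowed outage."""
    if failed > max_outage:
        return None  # outage exceeded: orphaned devices go unmonitored
    return total_devices / (collectors - failed)

for failed in range(4):
    print(failed, "failed ->", devices_per_collector(200, 4, 2, failed))
# With no failures each collector handles 50 devices; after one failure,
# about 67 each; after two, 100 each; a third failure exceeds the maximum
# allowed outage and the function returns None.
```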


    Using Message Collectors in a Collector Group

    If you need to collect syslog and trap messages from the devices monitored by a Collector Group that includes multiple Data Collectors, you must include a Message Collector in the Collector Group:


    If your monitored devices generate a large amount of syslog and trap messages, a Collector Group can include multiple Message Collectors:

    In this configuration, a monitored device can send syslog and trap messages to either Message Collector.

    NOTE: Each syslog and trap message should be sent to only one Message Collector.

    A third-party load-balancing solution can be used to distribute syslog and trap messages evenly among the Message Collectors in a Collector Group:


    NOTE: ScienceLogic does not recommend a specific product for this purpose and does not provide technical support for configuring or maintaining a third-party load-balancing solution.

    One or more Message Collectors can be included in multiple Collector Groups:

    In this configuration, each managed device in Collector Group A and Collector Group B must use a unique IP address when sending syslog and trap messages. The IP address used to send syslog and trap messages is called the primary IP. For example, if a device monitored by Collector Group A and a device monitored by Collector Group B use the same primary IP address for data collection, one of the two devices must be configured to use a different IP address when sending syslog and trap messages.

    A Collector Group can have multiple Message Collectors that are also included in other Collector Groups. It is possible to include every Message Collector in your ScienceLogic system in every Collector Group in your ScienceLogic system.


    Chapter 8: Using Virtual Machines

    Overview

    Each appliance in your ScienceLogic configuration can be run as a virtual machine. This chapter provides software and hardware requirements for ScienceLogic appliances running on VMs.

    Supported Hypervisors

    The ScienceLogic platform supports the following Hypervisors:

    l VMware Version 4.1

    l VMware Version 5.1

    l XenServer Version 4.1

    l Windows Server Hyper-V, 2008 R2 SP1


    Note that:

    l Installing a Database Server or All-In-One Appliance as a virtual machine is supported only on VMware hypervisors.

    l The installation of Linux Integration Components is not supported.

    l For instructions on how to install VMware Tools and Xen Tools on a ScienceLogic appliance, see the Installation manual.

    l "Over-commit" technology is not supported for any virtual appliance.

    l Running on a virtualization server that is close to capacity will result in unexpected behavior.

    l Fixed storage is required for all appliances. Dynamically-expanding storage is not supported.

    Hardware Requirements

    The following table describes the hardware requirements for each ScienceLogic appliance and the number of supported devices. Each cell lists RAM (GB) / Cores / Disk (GB):

    Description                                | Minimum at 100 Devices | Minimum at 500 Devices | Minimum at 1,000 Devices | Recommended at 1,000 Devices
    Data Collection                            | 4 / 2 / 60             | 12 / 3 / 90            | 16 / 4 / 120             | 24 / 4 / 150
    Database Server                            | 6 / 2 / 80             | 16 / 4 / 200           | 24 / 4 / 300             | 48 / 4 / 600
    Database Server with Administration Portal | 8 / 2 / 80             | 16 / 4 / 200           | 24 / 4 / 300             | 48 / 4 / 600
    Administration Portal                      | 4 / 2 / 60             | 6 / 2 / 60             | 6 / 2 / 60               | 6 / 2 / 60
    Integration Server                         | 4 / 2 / 60             | 6 / 2 / 60             | 6 / 2 / 60               | 6 / 2 / 60
    Message Collector                          | 6 / 2 / 60             | 6 / 2 / 90             | See Notes                | See Notes
    All-in-One                                 | 6 / 2 / 80             | 24 / 4 / 180           | 24 / 4 / 300             | 48 / 6 / 600

    Notes:

    l Data Collection: Based on the number of managed devices on each collector. Data Collectors working in a network with a heavy video-device workload should have half the number of devices per core. For example, use four (4) cores for 250 devices.

    l Database Server: Based on the number of managed devices on each database. For ScienceLogic systems with more than 1,000 devices, see the table below.

    l Database Server with Administration Portal: Database server with users connecting directly to the local web server for Administration Portal operations.

    l Integration Server: APIs are available on Administration Portal, Database, and All-in-One appliances. A dedicated Integration Server is required only in cases of extremely high API load.

    l Message Collector: The Message Collector cannot take advantage of additional resources in a single system (VM or physical host) because the local database is very limited in size and will not benefit from more memory; in addition, the message processing is single-threaded and will not benefit from more cores. For a configuration of 6 GB RAM, two (2) cores is recommended. For additional capacity, add further Message Collectors behind a load balancer.

    l All-in-One: Based on the number of managed devices on each database.


    For ScienceLogic systems with more than 1,000 devices, add resources as follows:

    Description     | Minimum at 1,000 Devices (RAM GB / Cores / Disk GB) | Additional Resources per 1,000 Devices (RAM GB / Cores / Disk GB) | Notes
    Database Server | 24 / 4 / 300                                        | 16 / 2 / 300                                                      | Performance capacity diminishes after adding more than 12 cores.
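The scaling rule above can be applied arithmetically. The sketch below assumes resources are added in whole 1,000-device increments on top of the 1,000-device minimum; the function name and rounding choice are illustrative assumptions, not ScienceLogic guidance.

```python
# Sketch of the scaling arithmetic in the table above: start from the
# 1,000-device minimum (24 GB RAM, 4 cores, 300 GB disk) and add
# 16 GB RAM, 2 cores, and 300 GB disk per additional 1,000 devices.
import math

def database_server_spec(devices: int) -> dict:
    extra = max(0, math.ceil((devices - 1000) / 1000))
    return {
        "ram_gb": 24 + 16 * extra,
        "cores": 4 + 2 * extra,    # capacity gains diminish past 12 cores
        "disk_gb": 300 + 300 * extra,
    }

print(database_server_spec(3000))
# → {'ram_gb': 56, 'cores': 8, 'disk_gb': 900}
```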


    © 2003 - 2014, ScienceLogic, Inc.

    All rights reserved.

    LIMITATION OF LIABILITY AND GENERAL DISCLAIMER

    ALL INFORMATION AVAILABLE IN THIS GUIDE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED. SCIENCELOGIC AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT.

    Although ScienceLogic has attempted to provide accurate information on this Site, information on this Site may contain inadvertent technical inaccuracies or typographical errors, and ScienceLogic assumes no responsibility for the accuracy of the information. Information may be changed or updated without notice. ScienceLogic may also make improvements and / or changes in the products or services described in this Site at any time without notice.

    Copyrights and Trademarks

    ScienceLogic, the ScienceLogic logo, and EM7 are trademarks of ScienceLogic, Inc. in the United States, other countries, or both.

    Below is a list of trademarks and service marks that should be credited to ScienceLogic, Inc. The ® and ™ symbols reflect the trademark registration status in the U.S. Patent and Trademark Office and may not be appropriate for materials to be distributed outside the United States.

    l ScienceLogic
    l EM7 and em7
    l Simplify IT
    l Dynamic Application
    l Relational Infrastructure Management

    The absence of a product or service name, slogan or logo from this list does not constitute a waiver of ScienceLogic's trademark or other intellectual property rights concerning that name, slogan, or logo.

    Please note that laws concerning use of trademarks or product names vary by country. Always consult a local attorney for additional guidance.

    Other

    If any provision of this agreement shall be unlawful, void, or for any reason unenforceable, then that provision shall be deemed severable from this agreement and shall not affect the validity and enforceability of any remaining provisions. This is the entire agreement between the parties relating to the matters contained herein.

    In the U.S. and other jurisdictions, trademark owners have a duty to police the use of their marks. Therefore, if you become aware of any improper use of ScienceLogic Trademarks, including infringement or counterfeiting by third parties, report them to ScienceLogic's legal department immediately. Report as much detail as possible about the misuse, including the name of the party, contact information, and copies or photographs of the potential misuse to: [email protected]

    800-SCI-LOGIC (1-800-724-5644)

    International: +1-703-354-1010
