
Transcript of 630-00466-01_004_C3_8.1_Engineering_Guide2.pdf

  • C3 Gateway Controller Engineering Guide

    630-00466-01 Rev 004

    12/15/10

    C3TM

    Gateway Controller

    Engineering Guide

    This document provides Engineering Rules for the C3 Gateway Controller.


GENBAND, the 3-rings logo, DCO, G6, the G6 logo, GENBAND C3, G2, G9, the G9 logo, GenView, S2, S3, and S9 are all trademarks or registered trademarks of GENBAND Inc. or its affiliates in the U.S.A. and other countries. All other listed trademarks, if any, are owned by their respective companies.

© 2000-2011 GENBAND Inc. All rights reserved. All copy, reproduction, derivatives (including, without limitation, translation), modification, distribution, republication, transmission, re-transmission and public display or showing of this document, whether in whole or part, is strictly prohibited, without the prior written permission of an authorized GENBAND Inc. representative. This document, and any software of GENBAND Inc. mentioned in this document, whether delivered electronically or via other media, are the sole property of GENBAND Inc. and are available only under and pursuant to license agreement.


    Contents

1 Introduction
1.1 Scope
2 Overview
3 Cluster Configuration
3.1 Basic Clustering Rules
3.2 Clustering Summary
4 C3 Cluster Capacity Analysis
5 Protocol Stack and Application Sizing and Distribution
5.1 Sizing
5.2 Application Distribution and Duplication
5.2.1 Combined Transport and Local Access System (Class 4/5)
5.2.2 ISUP Growth
5.2.3 SIP Growth
5.2.4 Other Protocol Growth
6 Memory Engineering
7 Media Gateway Control Capacities
8 Network Configuration
8.1 Control Network Connections
8.1.1 Local control network
8.1.2 Remote control network
8.2 Quality of Service and Traffic Engineering
9 Summary


Figures

Figure 1: Typical VoIP Network deployment of C3
Figure 2: A Typical VoIP Network Configuration and Protections to Failure Cases


Tables

Table 1: C3 Application Usage
Table 2: System Group Types
Table 3: Protocol Stack and Subsystem Capacity
Table 4: QoS marking on traffic sent out from C3/G9 DSS


    1 Introduction

The GENBAND C3 Gateway Controller is a general-purpose softswitch / media gateway controller (MGC) deployed worldwide for local access and long-haul transport telephony services. See Reference [1] for a description of the C3.

    This document provides the information required to understand the sizing, scaling and configuration aspects of the C3 MGC for a broad range of applications. It is not intended to provide ordering or other sales-related information.

Caveat: Engineering rules included in this document serve mainly as guidelines for deployment of GENBAND's C3 Gateway Controller. Due to the broad range of possible deployment scenarios and call mixes, they are not intended to be complete or permanent. They are subject to change in the future as the system evolves. Please contact GENBAND for the latest information.

    1.1 Scope

    This document addresses the currently supported features and elements of the C3 effective with Release 8.1.


    2 Overview

    This document helps you determine the number and configuration of C3 MGC nodes required to handle a particular call load. It also provides information on adjustable parameters and hard limits that relate to system capacity. Use this document with the C3 System Description, C3 Product Description, G9 Product Description, and G9 Engineering Guide which provide detailed information on each element.

    The C3 MGC is a computing complex composed of C3 node pairs. Most of the MGC software applications operate as active/standby pairs. Each active/standby application is always assigned to a pair of C3 nodes in the complex. A minimum C3 MGC consists of one C3 node pair which supports call processing, signaling gateway, and element management applications. Depending upon the application and required traffic-carrying capacity, additional C3 node pairs may be deployed to provide more processing throughput in the complex. A C3 complex may have a call processing capacity of up to 5,000,000 BHCA, depending upon the mix of call types and C3 node types.

    The engineering rules presented in this document address the following areas:

    Cluster Configuration: The supported number and configuration of C3 nodes in a cluster including application distribution.

    Protocol Stack Sizing: Verified call capacity and hard limits associated with various protocol stacks.

    Application Distribution and Duplication: Explanation of internal applications and protocol stacks that may be duplicated for increased capacity or function.

    Memory Engineering: Rules and guidelines for internally tunable memory settings. Defaults as well as upper limits are explained.

    Summary: Overall application of all configuration rules.

    Note that media gateway engineering is not addressed in this document. Each gateway has specific engineering requirements that may be used in combination with this document to engineer a total solution. The GENBAND 8000 MG Engineering Guide and GENBAND G9 MG Engineering Guide address those products. IP network connections are also not covered; see Reference [3] for this information.


Figure 1: Typical VoIP Network deployment of C3

(Diagram: the C3 MGC and a co-located media gateway (G9/8000) connect through Ethernet switches and access routers to the IP core network; remote media gateways at other sites reach the core through their own access routers or a directly connected Ethernet switch; PSTN trunks terminate on the media gateways.)

Figure 1 above illustrates the typical network configurations supported by the C3. The C3 control cluster may also be split between 2 geographically redundant sites. Engineering guidelines provided here apply to both the co-located and geographically split C3 cluster configurations unless specifically stated otherwise. Access routers are customer-provided as the gateway to their core IP network for each VoIP site. Within each site, Ethernet switches may be used for Layer 2 control plane connectivity. Customers are expected to provide their own Ethernet switches which meet minimum feature, performance, and configuration specifications set by GENBAND. For detailed information on the Ethernet switch recommendation and specification, refer to [3]. Optionally, GENBAND can also furnish and configure Ethernet switches meeting these requirements.

    One C3 site consists of at least 2 controller nodes. Note that this means for a geographically distributed C3, the minimum number of nodes is 4; 2 per site. The inherent reason for geographic redundancy is to ensure maximum possible system operation despite complete loss of a single controller site. In order to meet that requirement, both sites need a minimum of 2 C3 nodes to ensure sufficient call manager application capacity at all times.

    The minimum configuration of a C3/G9 distributed switching system is comprised of a C3 cluster consisting of 2 controller nodes and a G9. Multiple media gateways and controllers can be clustered to form a large VoIP network.


    3 Cluster Configuration

    This section provides guidance on how to size a particular C3 cluster to accommodate the offered traffic. Additional impacts of splitting the cluster in a geographically redundant configuration are addressed after the general single-site case.

The internal architecture of the C3 cluster is designed to range from 1 to 16 pairs of computing nodes. Currently, configurations up to 8 node pairs have been validated in GENBAND's lab and deployed in revenue service.

Multiple generations of computing nodes are supported within the C3 cluster to accommodate system capacity expansion over time without requiring replacement of existing nodes. Each node pair must consist of the same compute node model, and all nodes must be equipped with at least 8GB of RAM and able to support the operating system version required by the C3 release, currently Sun Solaris 10.

    Individual programs that comprise the C3 application environment are configured by GENBAND to run on specific node pairs within the cluster. The first pair, usually numbered as nodes 1 and 2, will always support the OAM&P subsystem consisting of the Oracle TimesTen database, system management, and element management systems. Other applications may be either moved to or replicated on additional node pairs for increased overall system capacity.

    Individual applications, their functions, and their redundancy strategies are listed below. The applications highlighted in bold are generally available for duplication or re-distribution across extension node pairs for increased system capacity.

Table 1: C3 Application Usage

App ID | Application Name | Description | Redundancy Strategy | Typical Deployed Configuration
1 | BsPm | Platform Manager | Pooled - 1 per node | All C3 Nodes
2 | BsDm | Messaging Distribution Manager | Pooled - 1 per node | All C3 Nodes
3 | BsBootpd | G9 Boot Mgr | Pooled | OAM Nodes
4 | OamFault | System Fault Handler | Active/Standby on OAM Nodes | OAM Nodes
10 | CsSp | Subscriber Profile Manager | Active/Standby on 1 or more Node Pairs | Initial Deployment on OAM Nodes. Combined local access and transit systems should have separate CsSp instances for each function. Some transit system network deployments may also require multiple CsSp pairs.
11 | CsFm | Facility Manager | Active/Standby | OAM Nodes
12 | CsCcm | Call Data Collection Manager | Active/Standby | OAM Nodes
13 | CsPdm | Persistent Data Manager | Active/Standby on multiple Node Pairs | Typically deployed on all nodes running CpCallm.
14 | CsEcsi | External Call Service Interface Manager | Pooled | OAM Nodes
15 | CsTftpd | TFTP Manager | Pooled | All Nodes
17 | CsSipReg | SIP User Registration and Authentication | Active/Standby on 1 or more Node Pairs | Required for SIP user access and authenticated SIP trunking. Typically enabled on same node pair as user access SgwSip stack.
18 | CsEim | External Fraud Control Interface | Active/Standby on 1 or more Node Pairs | Interface to ECTel Fraud Management System
20 | CpCallm | Call Manager | Pooled | Typically deployed on all nodes. Deactivate on OAM nodes for systems with greater than 8 G9/8000 Media Gateways.
26 | CpMgcPm | H.248/MGCP Device Manager for non-G9/8000 MGs | Active/Standby on 1 or more Node Pairs | Most commonly used to control GENBAND G6 Media Gateways.
30 | SgwFtMgr | Signaling Gateway Fault Manager | Active/Standby on 1 or more Node Pairs | 1 active instance per MTP3 routing group. Typically on same nodes as SgwMtp3.
31 | SgwMtp2 | MTP2 Protocol Stack | Pooled | MTP2 is pooled across all nodes hosting linksets using Adax link interface cards.
32 | SgwMtp3 | MTP3 Protocol Stack | Active/Standby on 1 or more Node Pairs | Typically on same nodes as associated SgwIsup, SgwBicc, and SgwTcap.
33 | SgwIsup | ISUP Protocol Stack | Active/Standby on 1 or more Node Pairs | Typically on same nodes as associated SgwMtp3. Usually first or second protocol stack to be moved to an extension pair based on CPU utilization.
34 | SgwTcap | TCAP/SCCP Protocol Stack | Active/Standby on 1 or more Node Pairs | Typically on same nodes as associated SgwMtp3.
35 | SgwBicc | BICC Protocol Stack | Active/Standby on 1 or more Node Pairs | Typically on same nodes as associated SgwMtp3.
36 | SgwSip | SIP Protocol Stack | Active/Standby on 1 or more Node Pairs | Typically the first stack moved to an extension node pair based on CPU utilization.
37 | SgwH323 | H.323 Protocol Stack | Active/Standby on 1 or more Node Pairs | Any node pair with sufficient CPU availability.
38 | SgwM3ua | M3UA Protocol Stack | Active/Standby on 1 or more Node Pairs | Any node pair with sufficient CPU availability.
40 | OamCfg | Configuration Manager | Active/Standby on OAM Nodes | OAM Nodes
41 | OamTrap | SNMP Trap Manager | Active/Standby on OAM Nodes | OAM Nodes
42 | OamEsa | Emergency Standalone Manager | Pooled | Not currently in use
43 | OamPfm | Performance Data Collection Manager | Active/Standby on OAM Nodes | OAM Nodes
44 | OamNm | Node Manager | Pooled on all C3 Nodes | All C3 Nodes
45 | OamMt | Message Trace Tool | Active/Standby on OAM Nodes | OAM Nodes
46 | OamMgMgr | G9/8000 MG Download Manager | Active/Standby | OAM Nodes
47 | OamLr | Low Runner Idle Time Application | Pooled on all C3 Nodes | All C3 Nodes prior to 7.2.40.20. This was the idle task formerly used to observe remaining available CPU. This application is no longer used due to changes in how CPU usage is monitored beginning with Solaris 10.
48 | OamDbAudit | Database Audit Manager | Pooled | OAM Nodes
55 | JacOrbAgent | Object Request Broker | Pooled | OAM Nodes. Component of EMS Subsystem
56 | NamingService | CORBA Naming | Pooled | OAM Nodes. Component of EMS Subsystem
57 | NotificationService | CORBA Notification | Pooled | OAM Nodes. Component of EMS Subsystem
58 | EmsMainServer | EMS Core | Pooled | OAM Nodes. Component of EMS Subsystem
59 | CliServer | EMS Command Line Interface | Pooled | OAM Nodes. Command line component of EMS Subsystem
60 | OamAmaFmt | Billing File Formatter | Active/Standby | OAM Nodes
61 | OamAmaDist | Billing Distribution | Active/Standby | OAM Nodes
62 | OamAmaOp | Billing Operations Manager | Active/Standby | OAM Nodes
70 | OamDbMaint | Database Operations Manager | Pooled | OAM Nodes
71 | OamSysWatch | System Services Monitor | Pooled | All C3 Nodes
117 | BsSnmpMonitord | SNMP Services Monitor | Pooled | All C3 Nodes
118 | BsEsad | Emergency Standalone Node Daemon | Pooled | ESA Node only. Not currently in use.
119 | mirrord | Solaris RAID driver | Pooled | Not currently in use.
120 | timestend | TimesTen Database Main Daemon | Pooled on OAM nodes | OAM Nodes
122 | snmpdm | SNMP Research Daemon | Pooled | All C3 Nodes
123 | brassagt | SNMP Research Manager | Pooled | All C3 Nodes
124 | syslogd | Solaris System Log Manager | Pooled | All C3 Nodes
125 | BsAlm_alarmd | External Alarm Manager | Pooled on all C3 Nodes | All C3 Nodes
126 | BsAlm_rnetd | Ethernet Redundancy Manager | Pooled on all C3 Nodes | All C3 Nodes
127 | xntpd | Network Time Protocol | Pooled on all C3 Nodes | All C3 Nodes
128 | inetd | Solaris IP Networking Manager | Pooled on all C3 Nodes | All C3 Nodes


    3.1 Basic Clustering Rules

    The C3 control cluster is designed to accommodate a wide range of systems from the smallest single redundant pair of nodes up to a high throughput system consisting of 8 or more node pairs. Proper initial engineering is critical to the ease of long-term system growth.

Currently, the predominant C3 node platform is the Sun Netra T2000, which is available in 4- or 8-core models and is equipped with a minimum of 8GB of RAM. Also in wide use are the previous generation nodes, the Sun Netra 240 and Sun Netra 440. These are dual- and quad-processor UltraSPARC IIIi systems, also equipped with a minimum of 8GB of RAM. Performance characteristics of the 4- and 8-core T2000 models correlate directly to the 240 and 440. While any of the supported platforms may serve as the initial OAM&P pair, a view of the system growth plan is critical to intelligent initial deployment.

The next generation of C3 platform, the Sun Netra T5220, is introduced in release 8.1. A single model is available, equipped with a minimum of 8GB of RAM. The T5220 is minimally the processing equivalent of an 8-core T2000. The standard T5220 configuration includes 8GB RAM, 8 GbE ports, 2 x 300GB SAS disk drives, 1 DVD drive, and 1+1 redundant DC power supplies.

Any system that is anticipated to grow beyond 750K BHCA or 4 G9/8000 media gateways should deploy the largest capacity C3 nodes currently available as the OAM&P pair at initial installation. Adding or deleting extension node pairs beyond the first pair is very straightforward with no impact to carried traffic. Upgrading the OAM&P pair is still possible with no impact to carried traffic, but is much more difficult due to the required migration of the system database. This kind of upgrade will require contracted professional services from GENBAND. As a baseline, minimal systems consisting of one pair of nodes may support up to 55 or 135 calls per second for Netra 240s and 440s, respectively. A minimal system of T2000 or T5220 nodes will support up to 135 calls per second.

    In typical network deployment scenarios, the call state machine, as implemented in the Call Manager (CpCallm), and the various signaling protocols are the largest consumers of CPU resources. Monitoring the occupancy of these and other applications provides the data required to determine remaining unused capacity and pending system expansion requirements. Usage studies should always be performed in a worst case configuration at busy hour. An example would be a fully duplex system with all active applications running on either the odd or even numbered nodes. This configuration ensures that the active applications are realistically sharing the common compute resource while simultaneously keeping up with standby synchronization work.

Monitoring system usage changed with the introduction of Solaris 10 and C3 release 7.2.40.20. Changes in how the operating system monitors and reports CPU usage forced a change in how we must monitor overall system capacity. At busy hour, no node should exceed 80% CPU occupancy. In addition, no application should exceed 80% CPU occupancy on its busiest thread of execution. The preferred source of CPU usage is the performance table SYSNODEAPPPERF, which records minimum, maximum, and average busiest-thread CPU usage for each application over the configured collection period. Instant snapshot data is provided by the Solaris /usr/bin/prstat command. Each C3 node also records regular prstat snapshots in /var/log/mscMonLog. Peak daily CPU usage is recorded in /var/log/nightlyReport.
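As a quick illustration of the 80% busiest-thread rule above, a minimal sketch (Python) that flags applications exceeding the limit; the input mapping of application name to maximum busiest-thread CPU percentage, as would be read from SYSNODEAPPPERF, is an illustrative assumption rather than a defined export format.

```python
# Flag applications whose busiest-thread CPU exceeds the 80% engineering limit.
CPU_LIMIT_PCT = 80.0

def over_limit(max_busiest_thread_pct):
    """Return the applications violating the 80% busiest-thread rule."""
    return [app for app, pct in max_busiest_thread_pct.items() if pct > CPU_LIMIT_PCT]

# Hypothetical busy-hour maxima per application (percent of one CPU).
sample = {"SgwSip": 83.5, "SgwIsup": 61.0, "CpCallm": 47.2}
print(over_limit(sample))  # ['SgwSip'] -> plan capacity expansion for this stack
```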


In C3 releases prior to 7.2.40.20, node occupancy was monitored by ensuring that OamLr was always consuming at least 20% CPU. OamLr at 20% occupancy indicates 80% occupancy by revenue-generating application work. OamLr is deprecated beginning in C3 release 7.2.40.20 due to differences in CPU usage reporting in Solaris 10 and the Sun CoolThreads processor architecture.

C3 cluster expansion is always implemented with C3 node pairs. Because the Call Manager is a loadshared application, it is typically deployed on all C3 nodes in the cluster. GENBAND's patented loadsharing implementation distributes new calls across all call managers with the goal of keeping all C3 nodes relatively even in node occupancy. The typical driver for node expansion is increased capacity requirements for 1 or more signaling protocols. These capacity increases may be realized by either relocating an existing signaling stack onto the new node pair or instantiating a new instance of a particular signaling stack on the new node pair. As extension node pairs, each Netra 240 pair will typically host 1 signaling stack and each Netra 440, T2000, or T5220 pair will typically host 2 signaling stacks. A C3 design limitation restricts each node pair to hosting a maximum of 1 instance of any particular protocol stack or any other application. An example of application distribution across a 6 pair C3 cluster is illustrated in Appendix C.

    3.2 Clustering Summary

    Engineer Nodes for No Greater than 80% CPU Usage when Hosting All Active Assigned Applications

    Engineer Applications so that no application registers more than 80% CPU occupancy on the busiest thread as reported by SYSNODEAPPPERF

o In a typical system, the limiting applications will be the signaling stacks SgwSip and SgwIsup.

C3 clusters supporting 8 or more G9/8000 Media Gateways and/or exceeding 1M BHCA should be configured with all call processing tasks (CpCallm and all Sgw* applications except SgwMtp2) moved off the OAM nodes to extension nodes.

    Systems deployed with geographical redundancy shall minimally be composed of 4 C3 nodes; 2 per site. This rule is intended to ensure that in the event of a complete loss of 1 control site, sufficient call manager capacity will remain at the surviving site to handle call traffic.

    Each Netra 440, T2000, or T5220 Extension pair should host node base platform applications, CpCallm, and 2 Signaling Stacks

    Each Netra 240 Extension pair should host node base platform applications, CpCallm, and 1 Signaling Stack


    4 C3 Cluster Capacity Analysis

The C3 Signaling Controller and EMS provide a full set of performance statistics that document the operation of the C3 as well as the 8000 and G9 media gateways under its control. The focus of this section is how to use the relevant performance data to understand the current system utilization, quantify available growth capacity, and plan for future expansion. The data and methodologies described herein document the process utilized by GENBAND when analyzing a particular customer installation. The goal of this section is to empower the network operator's engineers to perform this analysis and planning exercise locally with minimal assistance from GENBAND.

The minimum data table requirements for any capacity analysis are System Group Performance (SYSGRPPERF) and System Node Application Performance (SYSNODEAPPPERF). System Group Performance provides peg counts and seconds-of-use data for each line and trunk group equipped in the system. System Node Application Performance provides the CPU and memory usage of each application on each C3 node equipped in the cluster. To allow the easiest correlation of the data between these tables, it is desirable that they record data on the same time increment, i.e., 15- or 60-minute periods. These and all Performance Measurement data may be read in real time from the EMS or via the bulk storage files. To read with the EMS GUI, click on Performance in the function ribbon and then select the desired measurement via the Resource Type dropdown. The analysis process as presented here focuses on use of the bulk data collection files, and all resource names referenced here follow those filenames. All measurement resource filenames are in all capitals and are referenced as such here. The bulk data files are stored on the C3 OAM nodes under the /stats directory. Rather than access the files directly from the C3, GENBAND strongly recommends configuring performance measurement file exporting and doing all data analysis from off-switch storage. Data exporting is configured on the EMS GUI by selecting Performance in the function ribbon and accessing the Control and Performance Config Params tabs. More detail on this process is provided in the GUI Reference Guide within the C3 / G9 / 8000 Customer Documentation.

All capacity engineering should be focused on the target high-day busy hour, which is easily derived from SYSGRPPERF. GENBAND's analysis approach generally begins with collecting these data tables for the assumed busy hour +/- 2 hours. From there, we examine each collected time period to find the one with the highest offered load to use as the focus time period for study. The highest offered load is defined as the period with the most incoming call attempts. All of the GENBAND C3/G9 system performance data is recorded in ASCII comma-separated value (CSV) files with one entry per line. This format makes the data simple to import into a spreadsheet, such as Microsoft Excel, for analysis. Using the Excel Text Import Wizard, select Delimited in Step 1 and Comma as the delimiter in Step 2, then complete the import. As an option, to minimize clutter in your spreadsheet, you may want to skip the import of column 1, the collection period starting time, in Step 3. After importing, perform a summation of the fifth column, INCOMINGATTEMPTS, to get the offered call load for the period. Once the highest traffic collection period has been identified, the analysis is focused on this time period across the multiple performance statistic tables previously collected.
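The same busy-period search can be scripted instead of done in Excel. The following minimal sketch (Python) assumes the column layout described above (column 1 = collection period start time, column 5 = INCOMINGATTEMPTS) and a data-only export with no header row; the filename is illustrative only.

```python
# Find the collection period with the highest offered load from a SYSGRPPERF export.
import csv
from collections import defaultdict

attempts_by_period = defaultdict(int)
with open("SYSGRPPERF_export.csv", newline="") as f:
    for row in csv.reader(f):
        period = row[0]                             # column 1: collection period start
        attempts_by_period[period] += int(row[4])   # column 5: INCOMINGATTEMPTS

busiest = max(attempts_by_period, key=attempts_by_period.get)
print(busiest, attempts_by_period[busiest])         # focus the analysis on this period
```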

SYSGRPPERF provides a broad range of call data including termination types, various types of errors, out-of-service data, and load controls. While reviewing this data is strongly recommended as a part of normal performance monitoring, these fields are not usually relevant to the task of overall capacity analysis. After identifying the time period for study, the user may find it useful to hide the SYSGRPPERF columns that won't be used in order to more easily focus on the task at hand. The columns used in this analysis are:

    3: SYSGRPNUM

    4: GROUPTYPE

    5: INCOMINGATTEMPTS

    6: OUTGOINGATTEMPTS

    8: TOTALUSAGESEC

C3 capacity analysis is generally focused on two general areas: overall node occupancy and individual protocol stack occupancy. While applications such as SgwMtp2 and CpCallm are loadshared across potentially all C3 nodes in the cluster, most applications operate as active/standby pairs on pairs of nodes. Under normal circumstances, node pairs are configured in numerically sequential pairs: 1/2, 3/4, 5/6, etc. As we proceed through the node occupancy analysis, we need to assume that all active application instances of a node pair are simultaneously running on the same node to ensure fully redundant capacity. The next step in analyzing individual protocol stack loading is to sort the SYSGRPPERF table output by signaling type. The signaling type for the system group in question is identified numerically in column 4. Translation to signaling type is provided in Table 2.


Table 2: System Group Types

GROUPTYPE | Signaling Protocol
0 | ISUP
1 | PRI
2 | GR303
3 | CAS Trunk
4 | Line Group
5 | ATM
6 | CAS DAL
7 | BICC Trunk
8 | Lawful Intercept Trunk (CALEA)
9 | VoIP Trunk
11 | V5
12 | SIP Trunk (SIP-T / SIP-I)
13 | SIP DAL
14 | Internode Trunk
15 | R2 Trunk
16 | R2 DAL
17 | H.323 Trunk

Excel allows data sorting by up to 3 nested parameters. Typically, the user will find it useful to sort the SYSGRPPERF data first by GROUPTYPE ascending, followed by SYSGRPNUM ascending. For deployments with multiple instances of any particular signaling protocol, the system groups must be further subdivided by the routing group that is associated with each instance of the protocol. For ISUP trunks, the gateway number under System>MGC SS7 Signaling>Gateways in the GenView EMS GUI generally correlates with the routing group. For SIP, the routing group is within the SIP Gateway provisioning at System>IP Signaling>SIP Signaling>SIP Gateway. After sorting, perform a summation of the INCOMINGATTEMPTS and OUTGOINGATTEMPTS for each routing group within each signaling type to arrive at the total number of half calls per routing group per protocol. Divide the total INCOMINGATTEMPTS and OUTGOINGATTEMPTS for each protocol routing group by the collection period in seconds to arrive at the half calls per second rate for that individual signaling stack. This is the typical unit of measurement for signaling stacks. Additionally, you can sum up the TOTALUSAGESEC for each type if you plan to do media gateway Erlang measurements. Appendix A illustrates an example of this, including some convenient formatting such as including the GROUPTYPE name along with the number.
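The same per-protocol summation can be scripted. The following minimal sketch (Python) uses the SYSGRPPERF columns listed earlier (column 4 = GROUPTYPE, columns 5 and 6 = incoming/outgoing attempts, column 8 = TOTALUSAGESEC) and a subset of the Table 2 group types; the filename, the 60-minute collection period, and the assumption that the file is already filtered to the busy period and contains data rows only are all illustrative.

```python
# Per-GROUPTYPE half-call rate and Erlangs for one collection period.
import csv
from collections import defaultdict

GROUPTYPE_NAMES = {0: "ISUP", 7: "BICC Trunk", 9: "VoIP Trunk",
                   12: "SIP Trunk (SIP-T / SIP-I)", 13: "SIP DAL", 17: "H.323 Trunk"}
PERIOD_SEC = 3600  # collection period length in seconds

half_calls = defaultdict(int)
usage_sec = defaultdict(float)
with open("SYSGRPPERF_busy_hour.csv", newline="") as f:
    for row in csv.reader(f):
        gtype = int(row[3])                              # column 4: GROUPTYPE
        half_calls[gtype] += int(row[4]) + int(row[5])   # columns 5 + 6: attempts
        usage_sec[gtype] += float(row[7])                # column 8: TOTALUSAGESEC

for gtype, hc in sorted(half_calls.items()):
    name = GROUPTYPE_NAMES.get(gtype, f"GROUPTYPE {gtype}")
    print(f"{name}: {hc / PERIOD_SEC:.1f} half calls/s, "
          f"{usage_sec[gtype] / PERIOD_SEC:.1f} Erlangs")
```

Erlangs here are simply total usage seconds divided by the period length, which feeds the media gateway measurements mentioned above; splitting by routing group would require one additional grouping key.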

The next step is correlation of the call rate against the CPU usage for the individual applications. Different network deployments of the C3 load the various C3 applications differently. Knowledge of the network-level application is critical to understanding which C3 applications require engineering focus. As discussed previously, signaling protocols are universally a point of focus for analysis. Class 5 subscriber-based deployments, or Class 4 deployments with significant Call Processing Language (CPL) usage, also require a special focus on CsSp, the subscriber profile manager and feature engine.

The CPU usage is recorded by the C3 in performance management table SYSNODEAPPPERF. Aggregate CPU usage is listed by application for each recording period and is identified as minimum, maximum, and average usage. Conservative methodology dictates that the maximum CPU occupancy for each application be used for analysis. Appendix C shows an example of a compressed and processed copy of this table. For each recording period, the table provides the C3 node number, application index, and CPU usage for each. CPU usage is indicated in 1/32768 units. An easy way to convert to CPU percentages is to simply add an additional column to the spreadsheet, divide each CPU number by 32768, and set the spreadsheet to display that column as a percentage. Additionally, it may be convenient to add the application names and C3 node type to the spreadsheet. The application IDs are listed in Table 1: C3 Application Usage. These modifications have been made to the table as displayed in Appendix C.
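The unit conversion is trivial to script as well; a minimal sketch (Python) mirroring the spreadsheet column described above:

```python
# Convert the SYSNODEAPPPERF CPU figure (reported in 1/32768 units) to a percentage.
def cpu_units_to_percent(raw_cpu: int) -> float:
    """32768 corresponds to 100% of one CPU."""
    return raw_cpu / 32768 * 100

print(cpu_units_to_percent(16384))  # 50.0 -> half of one CPU
```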

Most C3 applications operate in an active/standby redundancy model, and it is normally not known which C3 node of a pair may be hosting the active copy of an application at any given point in time. The traffic engineer must look at the data from both copies of each application and select the one with the higher CPU occupancy; that will be the active copy. A particular C3 node is likely to have both active and standby application processes running at any given moment. The example in Appendix C shows the active applications in bold. Each application must be examined for processor occupancy against the single CPU limit described earlier. The base platform and active application CPU usage on any particular pair is then totaled to assess the overall node usage. Application CPU usage is nominally linear, so extrapolation can be used to understand how much headroom remains available for any particular application or protocol.
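Putting the last two steps together, a minimal sketch (Python) that picks the active copy of a pair and linearly extrapolates the call rate the stack could carry at the 80% limit; the input figures are illustrative assumptions.

```python
# Select the active copy (higher CPU of the node pair) and extrapolate headroom.
LIMIT = 0.80  # engineering limit: 80% of one CPU

def analyze_pair(cpu_by_node, half_calls_per_sec):
    active_node = max(cpu_by_node, key=cpu_by_node.get)
    cpu = cpu_by_node[active_node]      # max busiest-thread usage, fraction of one CPU
    # CPU usage is nominally linear in call rate, so scale up to the 80% limit.
    max_rate = half_calls_per_sec * LIMIT / cpu if cpu else float("inf")
    return active_node, cpu, max_rate

node, cpu, max_rate = analyze_pair({3: 0.55, 4: 0.02}, half_calls_per_sec=300)
print(f"Active on node {node}: {cpu:.0%} CPU, ~{max_rate:.0f} half calls/s at the limit")
```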


    5 Protocol Stack and Application Sizing and Distribution

    5.1 Sizing

Protocol stack sizing is one of the most critical elements used in predicting current and future C3 node requirements. Stack capacity here is stated in half calls per second for each stack instance to allow the user to equip the proper number of each signaling stack as required by the network call model. All basic calls consist of an originating and a terminating half call. Any additional call legs will result in an additional half call applied to the signaling protocol in question. SIP is a special case where the protocol itself may introduce additional signaling load on the stack. In addition to the initial half call, any SIP REDIRECTs must be counted as additional half calls when computing the stack loading.

Note that this only applies to stacks directly implemented on the C3. The internal output of these stacks is a common internal signaling protocol. Stacks such as PRI and CAS that are implemented in the G9/8000 gateway are normalized into the internal signaling protocol in the gateway. Therefore, these protocols impart call manager state machine and subscriber profile manager load, but not C3 protocol stack load.

An additional application subject to heavy loading is the Subscriber Profile Manager (CsSp). This application supports both subscriber and transport trunking feature operations. Feature penetration and usage are the key drivers of processor usage. The numbers in Table 3 provide starting recommendations that must be monitored by the network operator.

Table 3: Protocol Stack and Subsystem Capacity

Stack / Application | Half Calls per Second on Legacy C3 Nodes | Half Calls per Second on T5220 C3 Nodes | Transactions per Second | C3 Applications | Notes
ISUP | 556 | 625 | -- | SgwIsup, SgwMtp3, SgwMtp2, SgwFtMgr |
BICC | 556 | 625 | -- | SgwBicc, SgwMtp3, SgwMtp2, SgwFtMgr |
SIP | 278 | 290 | 1355 | SgwSip, CsSipReg | SIP is driven more by messages per second than half calls per second. Experience shows that SIP deployments vary so widely that only general guidelines that act as starting points can be provided. SIP message handling is measured in performance table SYSSIPGLOBALPERF.
SIP-T / SIP-I | 195 | 236 | | |
H.323 | 278 | 290 | -- | SgwH323 |
H.248 and MGCP | 278 | 290 | -- | CpMgcPm |
TCAP | -- | 315 | | SgwTcap, SgwMtp3, SgwMtp2, SgwFtMgr | This is based upon standard memory settings and may be increased through memory parameter tuning while keeping within the capabilities of a single C3 processor. See chapter 6.
CsSp | 278 | 290 | -- | | This number can vary greatly depending on feature penetration. In particular, for transport (Class 4) systems, one CsSp instance may support up to 4 times this figure for plain featureless trunking. In all cases, it is recommended that each local access CsSp instance be limited to this number.
CsSipReg | | | | | 100,000 registered subscribers with 60 minute re-registration and 5 minute keepalive timers. Alterations in the re-registration and keepalive timers will linearly impact the number of supported registered subscribers.
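As a worked example of the half-call sizing model above (the 1,500,000 BHCA all-ISUP traffic mix is an illustrative assumption, with the stack rating taken from Table 3):

```python
# Translate a busy-hour call load into required ISUP stack instances.
import math

bhca = 1_500_000                        # busy hour call attempts, assumed all ISUP-to-ISUP
calls_per_sec = bhca / 3600             # ~417 calls per second
half_calls_per_sec = calls_per_sec * 2  # each basic call contributes 2 ISUP half calls

ISUP_RATING = 556                       # half calls/s per SgwIsup instance (legacy nodes)
instances = math.ceil(half_calls_per_sec / ISUP_RATING)
print(f"{half_calls_per_sec:.0f} ISUP half calls/s -> {instances} SgwIsup instance(s)")
# -> 833 ISUP half calls/s -> 2 SgwIsup instance(s)
```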

    5.2 Application Distribution and Duplication

    As highlighted in Table 1, the primary method of increasing system capacity is through duplication or re-distribution of signaling protocol stacks. This section will address the primary goals and recommended implementations.

    5.2.1 Combined Transport and Local Access System (Class 4/5)

The C3/G9 distributed switching system can act as both a transport trunking system and a local access system simultaneously in the same network. In both cases, the focal point of feature processing is the Subscriber Profile Manager (CsSp). Multiple CsSp pairs are supported, and for combined systems these network applications should be implemented on separate CsSp instances. This allows for independent growth of subscriber access, where feature penetration tends to be much higher, without impacting trunking services. Additional subscriber instances should be added for growth when current loading hits 225 calls per second.

On the trunking side, the largest consumer of CsSp time is Call Processing Language (CPL) feature processing. Any time new CPL features are added to an existing system, CsSp processor loading must be carefully monitored. When the trunking CsSp process exceeds 60% CPU usage, an additional CsSp should be planned and deployed.

    5.2.2 ISUP Growth

ISUP traffic growth is one of the most common drivers for expanding a C3 cluster. Typically, the first change is to move the MTP3 and ISUP stacks off of C3 nodes 1/2 and onto an expansion pair. This not only provides additional CPU for the ISUP stack, but also frees up CPU for the applications remaining on the original pair. Table 3 shows the group of applications used to support ISUP. While the C3 architecture supports placing each application on a separate node pair, the best efficiency is achieved by keeping ISUP and MTP3 together on the same pair. MTP3 typically requires about one half of the CPU capacity of ISUP for the same call rate. MTP2 is loadshared and can be spread across multiple nodes. Note that each node can support only a single MTP2 linkset.

    After moving the existing ISUP onto a separate node pair, the next growth increment will require an additional ISUP stack on a new pair. In a growing network, growth planning should commence when ISUP has grown to consume over 60% of a single CPU.

    5.2.3 SIP Growth

    SIP traffic growth is the other most common driver for expanding a C3 cluster. SIP-DAL trunking and SIP user access impart significantly different loads on the system. SIP-DAL trunking operates with provisioned endpoints through a trusted network. This alleviates most needs for SIP Registration and Keepalive processing. The Registration and Keepalive functions impart a significant continuous load on the C3 processor complex that is driven by number of registered endpoints rather than call rate.

SIP protocol processing is nominally the same for trunking and subscribers, though SIP-T / SIP-I typically requires approximately 20% additional CPU per call to process the embedded information elements. SgwSip capacity expansion is accomplished similarly to ISUP: first move the existing process onto a new node pair, with additional node pairs and SgwSip instances added as capacity requirements demand. For subscriber access, the SgwSip stack and CsSipReg processes should be maintained on a common node pair because they share a common SIP gateway Ethernet port on the node. As with other protocols, expansion planning in a growing network should commence when SIP occupancy reaches 60% of a single CPU.

    5.2.4 Other Protocol Growth

Other protocols such as H.323, MGCP, and H.248 to non-G9/8000 gateways grow in a similar manner to ISUP and SIP, and the same guidelines should be followed. Lesser-utilized protocols are commonly combined on the same node pair such that the combined half calls per second loading does not exceed the average of the stated capacities of each individual protocol.
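A minimal sketch (Python) of that combined-loading check; the offered loads are illustrative assumptions and the ratings are the legacy-node values from Table 3.

```python
# Check that two stacks sharing a node pair stay under the average of their ratings.
ratings = {"SgwH323": 278, "CpMgcPm": 278}   # half calls/s per stack (legacy nodes)
offered = {"SgwH323": 150, "CpMgcPm": 100}   # observed busy-hour half calls/s (assumed)

limit = sum(ratings.values()) / len(ratings)  # average of the stated capacities
combined = sum(offered.values())
print(f"combined {combined} vs limit {limit:.0f}: "
      f"{'OK' if combined <= limit else 'exceeds limit'}")
```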


    6 Memory Engineering

GENBAND engineers most memory settings to apply to the majority of network applications. Atypical call models may demand resizing of some internal memory settings. GENBAND STRONGLY recommends consulting with your GENBAND representative for assistance in adjusting these parameters. Adjustable parameters are listed in Appendix A. Many parameters are set to the maximum by default and are only listed for reference. The most commonly adjusted parameters are those controlling Subscribers, TCAP, and SIP.

Default settings for TCAP will support up to 275 transactions per second assuming a 30-second holding time on the TCAP memory resources. A single TCAP stack can achieve up to 700 transactions per second with a 30-second holding time with the following parameter changes:

SGW:TCAP:TCAPUSANMBDLGS = 25000
SGW:TCAP:TCAPUSANMBINVS = 25000
CS:ProfileManager:MaxTransactionIds = 25000
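As a rough dimensioning check (an inference from the 30-second holding-time assumption above, not a figure stated in the guide): 700 transactions per second held for 30 seconds implies roughly 700 x 30 = 21,000 concurrently open dialogs and invokes, which fits within the 25,000 allocations set by these parameters.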

    In addition to the basic subscriber sizing parameters, SIP requires an additional parameter for maximum number of simultaneous SIP half calls that can be managed by a single SIP gateway. The default value will support up to 48,900 simultaneous SIP half call legs, assuming a GOS of 10E-8. Maximizing that value increases the number of simultaneous SIP half call legs to 98,475, again assuming a GOS of 10E-8.

    SGW:SIP:SgwSipMaxNoOfCalls = 100000

Each Call Manager is allocated sufficient memory to handle 30,000 simultaneous full calls. This can be increased to a maximum of 50,000, assuming the node has sufficient memory.


    7 Media Gateway Control Capacities

    Typical deployed C3 Signaling Controller complexes reach full call processing capacity long prior to reaching subtending media gateway capacities. Subject to overall call throughput limits, one C3 Signaling Controller complex can support up to:

    64 G9/8000 Media Gateways

    256 H.248 Trunk Gateways

    10,000 H.248 and MGCP Line Access Gateways

    o 40,000 Line Access Ports on these H.248 and MGCP Gateways


    8 Network Configuration

This section provides an overview of the guidelines and rules that apply as the C3 is deployed in a customer's network. In this section, we focus on the C3's connectivity to the IP core network and the engineering rules applicable to network-wide features. Detailed information on this topic is available in reference [3].

The following diagram (Figure 2) depicts a typical VoIP network configuration in a real deployment case. It also presents the possible failure spots and GENBAND's high-availability solutions to each of these failure cases.

Figure 2: A Typical VoIP Network Configuration and Protections to Failure Cases

(Diagram: a VoIP deployment site with the C3 MGC (active and standby NICs), a G9 MG (redundant PAC, GEI, and VS cards), paired Ethernet switches, and customer routers running HSRP/VRRP for control traffic only, connecting with load sharing to the carrier's IP core network. Numbered failure cases and their protections: 1 - covered by C3 NIC card failover; 2 - covered by C3 nodal failover; 3 - covered by G9 PAC redundant port failover; 4 - covered by G9 PAC card failover; 5 - covered by G9 PAC port failover, T3000 NIC failover and router HSRP/VRRP failover; 6 - covered by HSRP/VRRP failover on routers; 7 - covered by G9 GEI card failover; 8 - covered by G9 VS card failover; 10 - covered by network convergence, which is beyond Tekelec's high-availability control; 11 - no interruption to active path; 12 - detected by the routers' VRRP (for control) or by G9 GEI Next-Hop Protection (for bearer).)

The network configuration related engineering rules are organized in this section in three areas: control network connections, bearer network connections, and VoIP QoS and traffic engineering. They are addressed in detail in the following subsections.

    8.1 Control Network Connections

The control network is the network between the C3 MGC and the G9 MGs over which the MGC controls and manages the MGs. G9 MGs can be co-located locally or distributed remotely across the IP core network. Dedicated Fast Ethernet links on the G9 MG and the C3 MGC are used to connect to the IP core network.

    8.1.1 Local control network

    The rules to ensure equipment level redundancy are as follows:

    C3 nodes must be deployed as nodal redundancy pairs.

    The C3 node pairs must be the same model number (Netra 240 or Netra 440).

    The media gateway control links between C3 and each G9/8000 are a redundant pair of active and standby Ethernet links.

    C3 must have primary and secondary network interfaces connected to the control network.

    Each G9 PAC card provides a pair of active and standby links connecting to the control network.

The standard configuration does not include any Ethernet switches for the control path. When co-located with C3 nodes, the G9 control links connect to the Ethernet switches deployed with the C3 nodes.

    The Ethernet switches deployed between C3 and G9 PAC must be paired for switch equipment redundancy.

    Detailed capability requirements for the Ethernet switches are provided in reference [3]. These switches are provided by the network operator or optionally by GENBAND.

    8.1.2 Remote control network

When MGs are distributed in different locations, the MGC controls them via connectivity spanning the IP core network. Such a control network is called a remotely extended control network. Routers at either the MGC site or the MG site provide such IP network access.

    Routers should be internally redundant with redundant key components such as the routing engine, forwarding blade, power, fans, etc.

    It is strongly recommended that two routers be deployed at each VoIP site for router equipment redundancy.

    VoIP traffic engineering and load sharing must be taken into consideration when multiple paths to IP networks exist.

To ensure a carrier-grade control network, the access routers which connect MGCs and MGs must follow these rules:


    The active and standby control links on the C3 must be bridged into the same subnet.

    The active and standby control links on the G9 PAC must be bridged into the same subnet.

The standard G9 configuration does not include any Ethernet switches for the control path. When the G9 is a distributed media gateway located away from its controller, its control links connect either to the Ethernet switches provided by the customer or directly to customer routers.

    The local control network must not have loops. Avoid serializing multi-staged switches in the control network.

Both the C3 and G9 are capable of detecting router failures with the Next-Hop Failure Detection mechanism implemented in the C3 and G9 software. With this capability, the C3 and G9 can steer control traffic away from a failed router without interruption. The impact to control traffic due to a router failure is minimized to the sub-second level. There is no need to run VRRP/HSRP on the pair of routers.

Guideline: Routers do not need to run VRRP/HSRP in the control network to which the C3 and/or G9 connect. A single access router failure is still protected by the Next-Hop Failure Detection mechanism.

    8.2 Quality of Service and Traffic Engineering

    Voice traffic should be differentiated from best-effort data traffic in IP networks.

The carrier's core IP network transporting VoIP traffic among connected MGs must be DiffServ-enabled and support the full range of DiffServ code points (DSCPs).

The following categorization is observed in the C3 MGC, with the appropriate priorities as below.

    Call control traffic must be handled in the network as the highest possible priority: CS7 in DSCP.

    MG management traffic is strongly suggested to be marked as CS6 in DSCP.

The following table (Table 4) summarizes the recommended DiffServ markings used for the VoIP traffic that the G9 and the C3 handle, together with the corresponding mapping to MPLS label EXP bits (if MPLS is used) and IEEE 802.1p User Priority bits (if local Ethernet traffic is tagged with a VLAN). The actual binary encoding is included in brackets in this table.

Table 4: QoS marking on traffic sent out from C3/G9 DSS

Traffic sent out from C3/G9 DSS (G9 MG and C3 MGC) | DSCP | MPLS Label EXP bits | IEEE 802.1p User Priority
VoIP Call Control | CS7 (0b111000) | 7 (0b111) | 7 (0b111)
Network Control, Signaling and Management (IP related protocols) | CS6 (0b110000) | 6 (0b110) | 6 (0b110)
VoIP bearer traffic | EF (0b101110) | 5 (0b101) | 5 (0b101)
(Reserved) | CS5 (0b101000) | - | -
Video over IP (N/A in 8000 MG) | AF4x, CS4 (0b100xx0) | 4 (0b100) | 4 (0b100)
Controlled Load (data only, N/A in 8000 MG) | AF3x, CS3 (0b011xx0) | 3 (0b011) | 3 (0b011)
Excellent Effort (data only, N/A in 8000 MG) | AF2x, CS2 (0b010xx0) | 2 (0b010) | 2 (0b010)
All others which are better than BE | AF1x, CS1 (0b001xx0) | 1 (0b001) | 1 (0b001)
Best Effort (BE) | Default (0b000000) | 0 (0b000) | 0 (0b000)

    In network configurations where control traffic is mixed with bearer traffic, the available GbE bandwidth is shared by control traffic and voice bearer traffic. Voice bearer traffic needs to be controlled under CAC (using the booking factor).
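For reference, a minimal sketch (Python) of how the DSCP names in Table 4 translate into the 6-bit code point and the byte actually carried in the IP ToS/Traffic Class field (DSCP occupies the upper 6 bits); the helper itself is illustrative, while the values follow Table 4.

```python
# Map a few of the Table 4 traffic classes to DSCP code points and ToS byte values.
DSCP = {
    "VoIP call control (CS7)": 0b111000,
    "Network control / signaling / management (CS6)": 0b110000,
    "VoIP bearer (EF)": 0b101110,
}

for name, code in DSCP.items():
    tos_byte = code << 2  # DSCP sits in the top 6 bits of the 8-bit ToS/Traffic Class field
    print(f"{name}: DSCP {code} (0b{code:06b}), ToS byte 0x{tos_byte:02X}")
```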


    9 Summary

    In summary, C3 cluster capacity is grown by:

    Adding C3 node pairs

    Spreading protocol stacks and other applications across more node pairs

    Adding additional instances of processing applications

    Ensuring sufficient working memory is allocated to key applications

    System CPU capacity is monitored through performance data table SYSNODEAPPPERF and the system logs at /var/log/mscMonLog and /var/log/nightlyReport.

    This document has provided the guidelines for engineering a C3 control complex based upon some typical assumptions. Because call models and mixes vary in every system, there is no single answer to system expansion that applies to every circumstance. The network operator should apply these guidelines to their own situation and consult with GENBAND Technical Support on any specifics as needed.

  • Appendix A. Memory Sizing Parameters

    This table defines the sizing of internal C3 Signaling Controller components. GENBAND strongly recommends consulting with Customer Service prior to modification of any of these values.

    Manager Parameter Name Description Default Min Max Updatable

    BICC biccNmBnSaps Number of Network Saps 5 0 5 No

    BICC biccNmBSaps Number of BICC Saps 5 0 5 No

    Call Manager lataNumberSize Maximum size of the LATA Number 500 1 999 No

    Call Manager lrnNumberSize Maximum size of the LRN Number 10000 1 65535 No

    Call Manager maxCicRouteIndexSize Maximum size of input cic route index 1500 1 20000 Yes

    Call Manager maxCityCodeSize maximum City Code Screening Index size 500 1 1500 Yes

    Call Manager maximiumForwardTimes Maximum Times of Call Forwarding Allowed 5 1 10 No

    Call Manager maxMsfNode Maximum Number of MSF nodes 64 1 128 Yes

    Call Manager maxOrigRouteIndexSize Maximum size of input orig route index 1500 1 20000 Yes

    Call Manager maxTodIndexSize maximum TOD Index size for TimeOfDay routing table 500 200 2000 Yes

    Call Manager maxTodRouteIndexSize Maximum size of input tod route index 1500 1 20000 Yes

    Call Manager outpulseMapIndexSize Maximum size of the digit fence index 512 1 65535 No

    Call Manager prefixTreeSelectorSize Maximum size of the prefix table selector 15 1 255 No

    Call Manager treatmentGroupIndexSize Maximum size of the treatment group index 25 0 255 No

    CDR Collection Manager CdrVoicePartitionSize Allocated CDR disk partition size (in KBytes) 4194304 Yes

    CDR Collection Manager MaxNumCdrInVoiceFile # of raw CDRs to write to a file 6000 100 100000 Yes

    Facility Manager maxBundleIndex Maximum number of bundles in the collection 255 No

    Facility Manager maxCardPerMsf Maximum number of Cards per MG node 40 No

    Facility Manager MaxConcurrentCallSessionPerPort

    Max concurrent call session per port for SIP subscriber 2 1 4 No

    Facility Manager maxCrvPerDlc Maximum number of Edgepoints per Dlc node 2048 No

    Facility Manager maxDCardPerTdmChanIf Maximum Daughter Cards per Channel Interface card 4 No

    Facility Manager maxDefaultLineGroup Maximum number of default line groups in the system 64 No

    Facility Manager maxDs3PerDCard Maximum DS3s per DS3 Daughter Card(T3 Channel Interface card) 3 No

    Facility Manager maxEptPerDefLineGroup Maximum number of edgepoints per default line group 65535 No

  • C3 Gateway Controller Engineering Guide 31

    630-00466-01 Rev 004 6/16/09

    Facility Manager maxEptPerIsupTrunkGroup Maximum number of Edgepoints per Isup Trunk Group 16384 No

    Facility Manager maxEptPerTrunkGroup Maximum number of Edgepoints per Trunk Group 65535 No

    Facility Manager maxGroupsPerBundle Maximum number of trunk groups in each bundle 64 No

    Facility Manager maxIncomingTrkGrpMeters Max # of incoming meters defined for a Trunk Group 4 4 4 No

    Facility Manager maxMeteredTrkGrps Maximum Trunk Groups that can have meters defined for them 4000 4000 4000 No

    Facility Manager maxNumDlc Maximum number of Dlc nodes in the system 8160 No

    Facility Manager maxNumDlcPerMsf Maximum number of DLC Nodes per MG node 255 No

    Facility Manager maxNumDslIadPerMsf Maximum number of Dsl IADs per MG node 65535 No

    Facility Manager maxNumMsf Maximum number of MG nodes in the system 64 1 64 No

    Facility Manager maxNumV5If Maximum number of V5If nodes in the system 64000 No

    Facility Manager maxNumV5IfPerMsf Maximum number of V5If Nodes per MG node 1000 No

    Facility Manager maxNumVirtualSpans Maximum number of virtual spans 2999 No

    Facility Manager maxOutgoingTrkGrpMeters Max # of outgoing meters defined for a Trunk Group 3 3 3 No

    Facility Manager maxReservedTrunks Maximum reserved trunks, Automatic Trunk Reservation 4096 0 4096 Yes

    Facility Manager maxRouteList Maximum number of Route Lists in the system 4096 4096 40000 No

    Facility Manager maxSipAccVirtGrpId Max # of SipAcc Virtual Groups 9999 9999 9999 No

    Facility Manager maxSpanPerMsf Maximum number of Spans per MG node 5000 No

    Facility Manager maxSpanPerT1E1Card Maximum number of Spans per T1 Channel Interface card 60 No

    Facility Manager maxTrkGrpMeterDefs Max # of Trunk Group Meter definitions defined on the system 100 100 100 No

    Facility Manager maxTrkGrpMeterPrefixGroups Max # of prefix groups defined for the system 100 100 100 No

    Facility Manager maxTrkGrpPrefixesPerGrp Max # of prefixes in a prefix group 10 10 10 No

    Facility Manager maxTrunkGroup Maximum number of Trunk Groups in the system 9999 No

    Facility Manager maxUptPerV5If Maximum number of Edgepoints per V5If node 32767 No

    Facility Manager startingVirtualSpanId Starting Id for virtual spans 7001 No

    ISUP isupNmBnSaps Number of Network Saps 5 0 5 No

    ISUP isupNmBSaps Number of ISUP Saps 5 0 5 No

    ISUP nmbCalRef Number of Call References 112896 0 112896 No

    ISUP nmbCir Number of circuits 112896 0 112896 No

    ISUP nmbCirGrp Max number of circuit groups 5000 0 5000 No

    ISUP nmbDpc Number of dpcs 1024 0 16384 No

    MTP3 slsRange SLS Range 256 0 0 No

    OamAma Infrastructure AmaAuditDskUpdInterval OamAma DB/Disk status updating interval (in Seconds) 600 600 Yes

    OamAma Infrastructure AmaAuditInterval OamAma DB/Disk auditing interval (in Seconds) 21600 60 Yes

    OamAma Infrastructure AmaDiskCriticalAlarm OamAma partition occupancy level to send critical alarm 90 0 100 Yes

    OamAma Infrastructure AmaDiskMajorAlarm OamAma partition occupancy level to send major alarm 80 0 100 Yes

    OamAma Infrastructure AmaDiskMinorAlarm OamAma partition occupancy level to send minor alarm 70 0 100 Yes

    OamAma Infrastructure AmaPartitionSize Allocated AMA disk partition size (in KBytes) 1048576 Yes

    OamAma Infrastructure TgmAmaAuditDskUpdInterval OamAma TGM DB/Disk status updating interval (in Seconds) 3600 3600 Yes

    OamAma Infrastructure TgmAmaAuditInterval OamAma TGM DB/Disk auditing interval (in Seconds) 21600 21600 Yes

    OamAma Infrastructure TgmAmaDiskCriticalAlarm OamAma TGM partition occupancy level to send critical alarm 90 0 100 Yes

    OamAma Infrastructure TgmAmaDiskMajorAlarm OamAma TGM partition occupancy level to send major alarm 80 0 100 Yes

    OamAma Infrastructure TgmAmaDiskMinorAlarm OamAma TGM partition occupancy level to send minor alarm 70 0 100 Yes

    OamAma Infrastructure TgmAmaPartitionSize Allocated TGM AMA disk partition size (in KBytes) 524288 Yes

    OamAma Infrastructure TgmAmaRetentionTime Time to keep AMA TGM 2nd files and table entries (in Seconds) 15552000 15552000 Yes

    OamAma Infrastructure TimeToKeepAmaData Time to keep AMA secondary files and table entries (in Seconds) 432000 432000 Yes

    ProfileManager AllocationRatioLevel CPU Allocation Overload Level 60 Yes

    ProfileManager AllocationRatioLevel0 CPU Allocation Overload Level0 50 Yes

    ProfileManager C4CplMaxPfmTrkGrps Max # of TrkGrps to which C4CPL features may apply 400 0 Yes

    ProfileManager CsSpCPULoadLevel CsSp CPU Overload Level 95 Yes

    ProfileManager CsSpCPULoadLevel0 CsSp CPU Overload Level0 85 Yes

    ProfileManager LocalIS41SSN Local SSN for IS41 8 Yes

    ProfileManager MaxAcctCodes Maximum number of Account Code entries 10000 1 20000 Yes

    ProfileManager MaxAniScreening Maximum Number of ANI allowed 500000 1 1000000 Yes

    ProfileManager MaxAuthCodes Maximum number of Auth Code entries 10000 1 20000 Yes

    ProfileManager MaxCallingCards Maximum number of Calling Card entries 1000 1 2000 Yes

    ProfileManager MaxCallingCardUACs Maximum number of Universal Access entries 1000 1 2000 Yes

    ProfileManager MaxCicScreening Maximum number of CIC Screening entries 500 1 1000 Yes

    ProfileManager maxCustomerGroupId Max Customer Group Id 255 0 9999 Yes

    ProfileManager MaxDefaultCSI Maximum number of Default CSI entries 100 1 500 Yes

    ProfileManager MaxFeatureObjects Maximum number of Feature object 50000 Yes

    ProfileManager MaxFeatureProfiles Maximum CPL Profile allowed 50 1 5000 Yes

    ProfileManager MaxInfoDigits Maximum number of Information Digit entries 100 1 200 Yes

    ProfileManager MaxInSwitchNumberPortability Max In-Switch Number Portability entries 10000 1 20000 No

    ProfileManager MaxLaesCases Maximum number of Lawful Intercept Cases 2248 1 2500 Yes

    ProfileManager MaxLaesProfiles Maximum number of Lawful Intercept Profiles 128 1 128 Yes

    ProfileManager MaxNpNumberRanges Maximum number of Number Portability Ranges 10000 1 20000 Yes

    ProfileManager MaxNumClassOfService Maximum Class Of Service object 100 Yes

    ProfileManager MaxNumContextParties Maximum number of Context Parties 6 Yes

    ProfileManager MaxNumMWICreateCallMsg Max Num MWI send create call 20 10 50 Yes

    ProfileManager MaxNumMWILampObjects Max Num MWI Object 5000 100 10000 Yes

    ProfileManager MaxNumOfCallForward Max Number Of CallForward in the Network 5 0 10 Yes

    ProfileManager MaxNumSubscriberGroup Maximum Subscriber Group object 500 Yes

    ProfileManager MaxRingProfileId Maximum Number of Ring Profile object 1000 0 62000 Yes

    ProfileManager MaxTransactionIds Maximum number of transaction id object 25000 Yes

    ProfileManager MaxTriggerItemAssignments Maximum Number of Trigger Item Assignments 1000 10 5000 Yes

    ProfileManager maxTriggerItemId Max AIN Trigger Item Id 511 0 65535 Yes

    ProfileManager QueueDepthRatioLevel Queue Depth Ratio Level 60 Yes

    ProfileManager QueueDepthRatioLevel0 Queue Depth Ratio Level0 50 Yes

    ProfileManager RealTimeCPULevel Real time CPU Overload Level 105 Yes

    ProfileManager RealTimeCPULevel0 Real time CPU Overload Level0 95 Yes

    SCCP nmbAdjDpc Max Number of Adjacent Point Codes 128 0 128 No

    SCCP nmbRtes Max Number of Routes 256 0 256 No

    SGWSIP MsgAllocOverloadAbated MSG Allocation Over Load Abated 5 2 5 Yes

    SGWSIP MsgAllocOverloadLevel1 MSG Allocation Over Load Level 1 15 5 25 Yes

    SGWSIP MsgAllocOverloadLevel2 MSG Allocation Over Load Level 2 20 5 25 Yes

    SGWSIP MsgQDepthOverloadAbated MSG Q Depth Over Load Abated 5 1 5 Yes

    SGWSIP MsgQDepthOverloadLevel1 MSG Q Depth Over Load Level 1 20 5 25 Yes

    SGWSIP MsgQDepthOverloadLevel2 MSG Q Depth Over Load Level 2 25 5 25 Yes

    SGWSIP RealTimeCpuOverloadAbated Real Time CPU Over Load Abated 60 50 75 Yes

    SGWSIP RealTimeCpuOverloadLevel1 Real Time CPU Over Load Level 1 90 50 100 Yes

    SGWSIP RealTimeCpuOverloadLevel2 Real Time CPU Over Load Level 2 95 50 100 Yes

    SGWSIP SgwSipAccessMaxNoOfDomains SIP Max Access Number of Domains 1000 1 1000 Yes

    SGWSIP SgwSipCpuOverloadAbated SIP CPU Over Load Abated 75 50 80 Yes

    SGWSIP SgwSipCpuOverloadLevel1 CPU Over Load Level 1 92 50 100 Yes

    SGWSIP SgwSipCpuOverloadLevel2 CPU Over Load Level 2 96 50 100 Yes

    SGWSIP SgwSipMaxNoOfCalls Max No of SIP Calls 50000 1000 100000 Yes

    SGWSIP SgwSipMaxRegistersThreshold SIP Max Registers threshold 1000 1 1000 Yes

    SGWSIP SgwSipMaxRemoteHosts Max Number of Remote SIP Hosts 1000 1 1000 Yes

    SGWSIP TimeShareCpuOverloadAbated Time Share CPU Over Load Abated 5 1 5 Yes

    SGWSIP TimeShareCpuOverloadLevel1 Time Share CPU Over Load Level 1 4 1 5 Yes

    SGWSIP TimeShareCpuOverloadLevel2 Time Share CPU Over Load Level 2 3 1 5 Yes

    SipRegistrar MaxSipSubscriber Maximum number of SIP Subscriber object 10000 1000 100000 No

    System Wide MaxNumberOfVMGs Max number of Virtual MGs per Node 34 Yes

    TCAP nmbSaps Max Num. of Saps 10 0 10 No

    TCAP TcapUSapNMBDlgs Max number of dialogs for this SAP 65500 Yes

    TCAP TCAPUSAPNMBINVS Max number of invokes 50000
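
    Before requesting a change to any of these values, it can help to sanity-check the proposed setting against the Default, Min, Max, and Updatable columns above. Below is a minimal Python sketch; the few entries shown are transcribed from the table, while the dictionary, function, and example values are illustrative only and not part of the product.

```python
# Minimal sketch: sanity-check a proposed sizing change against the Appendix A limits.
# The entries below are transcribed from the table; extend as required.
SIZING_LIMITS = {
    # parameter: (default, minimum, maximum, updatable)
    "maxCicRouteIndexSize": (1500, 1, 20000, True),
    "maxTodIndexSize": (500, 200, 2000, True),
    "SgwSipMaxNoOfCalls": (50000, 1000, 100000, True),
    "nmbDpc": (1024, 0, 16384, False),
}

def check_change(name: str, proposed: int) -> str:
    default, lo, hi, updatable = SIZING_LIMITS[name]
    if not updatable:
        return f"{name}: not updatable (fixed, default {default})"
    if not lo <= proposed <= hi:
        return f"{name}: {proposed} is outside the allowed range {lo}..{hi}"
    return f"{name}: {proposed} is within range (default {default})"

if __name__ == "__main__":
    print(check_change("maxTodIndexSize", 1800))  # allowed
    print(check_change("nmbDpc", 2048))           # rejected: not updatable
```

    Any change that passes a check like this should still be reviewed with Customer Service, as recommended at the start of this appendix.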

    Appendix B. Sample SYSGRPPERF Analysis Format

    STOP TIME STAMP   SYS GRP NUM   GROUP TYPE   INCOMING ATTEMPTS   OUTGOING ATTEMPTS   TOTAL USAGE SEC

    3/26/2009 21:00 200 0 - ISUP 0 0 0

    3/26/2009 21:00 202 0 - ISUP 0 0 0

    3/26/2009 21:00 900 0 - ISUP 0 0 0

    3/26/2009 21:00 1001 0 - ISUP 0 1169 36590

    3/26/2009 21:00 1002 0 - ISUP 12 52 27728

    3/26/2009 21:00 1003 0 - ISUP 29 73 24484

    3/26/2009 21:00 1004 0 - ISUP 15 67 10387

    3/26/2009 21:00 1005 0 - ISUP 10 72 20858

    3/26/2009 21:00 1006 0 - ISUP 38 131 37591

    3/26/2009 21:00 1007 0 - ISUP 56 150 53719

    3/26/2009 21:00 1008 0 - ISUP 12 77 13343

    172 1791 224700

    3/26/2009 21:00 4093 1 - PRI 28 153 32188

    3/26/2009 21:00 4094 1 - PRI 18 53 9626

    3/26/2009 21:00 4095 1 - PRI 18 53 10083

    3/26/2009 21:00 4096 1 - PRI 23 50 10628

    3/26/2009 21:00 4128 1 - PRI 0 0 0

    3/26/2009 21:00 4129 1 - PRI 0 10 3284

    3/26/2009 21:00 4134 1 - PRI 22 52 11920

    3/26/2009 21:00 4137 1 - PRI 6 55 7851

    3/26/2009 21:00 4138 1 - PRI 20 51 12259

    3/26/2009 21:00 4214 1 - PRI 3173 0 83646

    3/26/2009 21:00 4215 1 - PRI 3182 0 85600

    3/26/2009 21:00 4216 1 - PRI 3200 0 88487

    3/26/2009 21:00 4219 1 - PRI 0 4 2055

    9690 481 357627

    3/26/2009 21:00 9901 3 - CAS TRK 0 0 0

    3/26/2009 21:00 9902 3 - CAS TRK 0 1 85

    3/26/2009 21:00 9903 3 - CAS TRK 0 0 0

    0 1 85

    3/26/2009 21:00 1 4 - Line 0 0 0

    3/26/2009 21:00 2 4 - Line 0 0 0

    3/26/2009 21:00 3 4 - Line 0 0 0

    3/26/2009 21:00 4 4 - Line 0 0 0

    3/26/2009 21:00 5 4 - Line 0 0 0

    3/26/2009 21:00 6 4 - Line 0 0 0

    3/26/2009 21:00 7 4 - Line 0 0 0

    3/26/2009 21:00 8 4 - Line 0 0 0

    3/26/2009 21:00 9 4 - Line 0 0 0

    3/26/2009 21:00 10 4 - Line 0 0 0

    3/26/2009 21:00 290 4 - Line 4 0 1503

    4 0 1503

    3/26/2009 21:00 4957 6 - CAS DAL 60 0 4432

    3/26/2009 21:00 4958 6 - CAS DAL 0 0 0

    3/26/2009 21:00 4959 6 - CAS DAL 220 0 12387

    3/26/2009 21:00 4960 6 - CAS DAL 36 0 3760

    3/26/2009 21:00 4963 6 - CAS DAL 2533 0 104606

    3/26/2009 21:00 5023 6 - CAS DAL 0 68 2180

    3/26/2009 21:00 5025 6 - CAS DAL 0 253 4525

    3/26/2009 21:00 5045 6 - CAS DAL 0 2 770

    3/26/2009 21:00 5058 6 - CAS DAL 297 0 19052

    3/26/2009 21:00 5059 6 - CAS DAL 4 0 303

    3/26/2009 21:00 5061 6 - CAS DAL 0 44 33547

    3/26/2009 21:00 5062 6 - CAS DAL 0 0 0

    3/26/2009 21:00 5072 6 - CAS DAL 0 575 10857

    3/26/2009 21:00 5074 6 - CAS DAL 0 25 1318

    3150 967 197737

    3/26/2009 21:00 9999 8 - CALEA 0 0 0

    3/26/2009 21:00 4486 12 - SIP Trk 0 0 0

    3/26/2009 21:00 9821 12 - SIP Trk 0 0 0

    3/26/2009 21:00 4136 13 - SIP DAL 0 1269 338569

    3/26/2009 21:00 4139 13 - SIP DAL 0 73 5149

    3/26/2009 21:00 4140 13 - SIP DAL 0 5 8730

    3/26/2009 21:00 4141 13 - SIP DAL 0 1 1393

    3/26/2009 21:00 4142 13 - SIP DAL 133 155 13983

    3/26/2009 21:00 4143 13 - SIP DAL 4585 15 179117

    3/26/2009 21:00 4144 13 - SIP DAL 0 55 121742

    3/26/2009 21:00 4145 13 - SIP DAL 0 3 3

    3/26/2009 21:00 4181 13 - SIP DAL 9 4 2361

    3/26/2009 21:00 4182 13 - SIP DAL 4 0 1593

    3/26/2009 21:00 4188 13 - SIP DAL 0 155 32622

    3/26/2009 21:00 4189 13 - SIP DAL 1 0 85

    4732 1735 705347

    3/26/2009 21:00 81 14 - InterNode 0 0 400033

    3/26/2009 21:00 82 14 - InterNode 0 0 260725

    3/26/2009 21:00 83 14 - InterNode 0 0 153196

    3/26/2009 21:00 84 14 - InterNode 0 0 468231

    3/26/2009 21:00 85 14 - InterNode 0 0 202445

    3/26/2009 21:00 86 14 - InterNode 0 0 285268

    3/26/2009 21:00 87 14 - InterNode 0 0 80315

    1850213
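
    The usage column in this sample can be converted into offered load for the capacity analysis described earlier in this guide. The following is a minimal Python sketch, assuming the SYSGRPPERF rows have been exported as whitespace-separated text in the layout shown above and that the report covers a one-hour interval; the file name and interval are illustrative assumptions.

```python
# Minimal sketch: roll SYSGRPPERF rows up into attempts and Erlangs per group type.
# Assumes whitespace-separated fields in the Appendix B layout and a one-hour
# collection interval; the input file name is illustrative.
from collections import defaultdict

INTERVAL_SECONDS = 3600  # assumed one-hour report interval

def summarize(path: str) -> None:
    totals = defaultdict(lambda: [0, 0, 0])  # group type -> [incoming, outgoing, usage sec]
    with open(path) as report:
        for line in report:
            fields = line.split()
            # Data rows start with a date; header and subtotal rows do not.
            if len(fields) < 8 or "/" not in fields[0]:
                continue
            group_type = " ".join(fields[3:-3])        # e.g. "0 - ISUP", "13 - SIP DAL"
            incoming, outgoing, usage = (int(v) for v in fields[-3:])
            totals[group_type][0] += incoming
            totals[group_type][1] += outgoing
            totals[group_type][2] += usage
    for group_type, (inc, out, usage) in sorted(totals.items()):
        erlangs = usage / INTERVAL_SECONDS
        print(f"{group_type}: {inc + out} attempts, {erlangs:.1f} Erlangs")

if __name__ == "__main__":
    summarize("sysgrpperf.txt")  # illustrative export file name
```

    For the ISUP rows in the sample above, for example, 224700 usage seconds over a one-hour interval corresponds to roughly 62 Erlangs.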

    Appendix C. Sample SYSNODEAPPPERF Analysis Format

    MSC NODE NUM   APP MGR ID   Min CPU %   Max CPU %   Avg CPU %

    1 - 514 1 - BsPm 0.00% 0.03% 0.01%

    1 - 514 2 - BsDm 1.65% 5.67% 3.09%

    1 - 514 3 - BsBootpd 0.00% 0.01% 0.00%

    1 - 514 4 - OamFault 0.00% 0.00% 0.00%

    1 - 514 10 - CsSp 5.05% 12.27% 9.19%

    1 - 514 11 - CsFm 0.51% 1.67% 0.97%

    1 - 514 12 - CsCcm 0.15% 0.89% 0.41%

    1 - 514 13 - CsPdm 0.03% 0.26% 0.11%

    1 - 514 14 - CsEcsi 0.00% 0.00% 0.00%

    1 - 514 15 - CsTftpd 0.00% 0.01% 0.00%

    1 - 514 17 - CsSipReg 0.00% 0.01% 0.00%

    1 - 514 20 - CpCallm 2.55% 5.78% 4.20%

    1 - 514 36 - SgwSip 2.67% 4.00% 3.25%

    1 - 514 40 - OamCfg 0.00% 0.03% 0.00%

    1 - 514 41 - OamTrap 0.00% 0.28% 0.05%

    1 - 514 43 - OamPfm 0.00% 0.01% 0.00%

    1 - 514 44 - OamNm 0.00% 0.02% 0.00%

    1 - 514 45 - OamMt 0.00% 0.01% 0.00%

    1 - 514 47 - OamLr 43.26% 80.66% 70.99%

    1 - 514 48 - OamDbAudit 0.00% 0.04% 0.00%

    1 - 514 56 - NamingService 0.00% 0.05% 0.02%

    1 - 514 57 - NotificationService 0.03% 0.78% 0.13%

    1 - 514 58 - CorbaEmsServer 0.22% 4.85% 0.79%

    1 - 514 59 - CliServer 0.02% 0.74% 0.15%

    1 - 514 60 - OamAmaFmt 0.00% 1.78% 0.13%

    1 - 514 61 - OamAmaDist 0.00% 0.01% 0.00%

    1 - 514 62 - OamAmaOp 0.00% 0.04% 0.00%

    1 - 514 70 - OamDbMaint 0.01% 0.15% 0.05%

    1 - 514 71 - OamSysWatch 0.00% 0.02% 0.00%

    1 - 514 117 - BsSnmpMonitord 0.00% 0.01% 0.00%

    1 - 514 120 - timestend 0.01% 0.24% 0.08%

    1 - 514 122 - snmpdm 0.00% 0.04% 0.01%

    1 - 514 123 - brassagt 0.00% 0.02% 0.00%

    1 - 514 124 - syslogd 0.02% 0.25% 0.11%

    1 - 514 125 - BsAlm_alarmd 0.00% 0.00% 0.00%

    1 - 514 126 - BsAlm_rnetd 0.00% 0.00% 0.00%

    1 - 514 127 - xntpd 0.00% 0.16% 0.02%

    1 - 514 128 - inetd 0.00% 0.03% 0.00%

    2 - 514 1 - BsPm 0.00% 0.17% 0.01%

    2 - 514 2 - BsDm 0.30% 2.68% 1.00%

    2 - 514 3 - BsBootpd 0.00% 0.01% 0.00%

    2 - 514 4 - OamFault 0.00% 0.25% 0.02%

    2 - 514 10 - CsSp 0.00% 0.27% 0.01%

    2 - 514 11 - CsFm 0.00% 0.32% 0.01%

    2 - 514 12 - CsCcm 0.00% 1.17% 0.18%

    2 - 514 13 - CsPdm 0.00% 0.39% 0.04%

    2 - 514 14 - CsEcsi 0.00% 0.01% 0.00%

    2 - 514 15 - CsTftpd 0.00% 2.74% 0.18%

    2 - 514 17 - CsSipReg 0.00% 0.01% 0.00%

    2 - 514 20 - CpCallm 0.07% 4.74% 0.63%

    2 - 514 36 - SgwSip 0.00% 0.08% 0.00%

    2 - 514 40 - OamCfg 0.05% 2.91% 0.87%

    2 - 514 41 - OamTrap 0.00% 1.97% 0.18%

    2 - 514 43 - OamPfm 0.00% 0.91% 0.05%

    2 - 514 44 - OamNm 0.00% 0.02% 0.00%

    2 - 514 45 - OamMt 0.00% 0.01% 0.00%

    2 - 514 47 - OamLr 33.96% 94.95% 83.22%

    2 - 514 48 - OamDbAudit 0.00% 0.01% 0.00%

    2 - 514 56 - NamingService 0.00% 0.03% 0.00%

    2 - 514 57 - NotificationService 0.00% 0.69% 0.06%

    2 - 514 58 - CorbaEmsServer 0.00% 0.22% 0.08%

    2 - 514 59 - CliServer 0.01% 0.40% 0.09%

    2 - 514 60 - OamAmaFmt 0.00% 0.13% 0.00%

    2 - 514 61 - OamAmaDist 0.00% 0.03% 0.00%

    2 - 514 62 - OamAmaOp 0.00% 24.04% 1.39%

    2 - 514 70 - OamDbMaint 0.01% 0.11% 0.04%

    2 - 514 71 - OamSysWatch 0.00% 0.02% 0.00%

    2 - 514 117 - BsSnmpMonitord 0.00% 0.00% 0.00%

    2 - 514 120 - timestend 0.00% 0.18% 0.05%

    2 - 514 122 - snmpdm 0.00% 0.30% 0.05%

    2 - 514 123 - brassagt 0.08% 4.15% 1.31%

    2 - 514 124 - syslogd 0.01% 0.48% 0.09%

    2 - 514 125 - BsAlm_alarmd 0.00% 0.00% 0.00%

    2 - 514 126 - BsAlm_rnetd 0.00% 0.01% 0.00%

    2 - 514 127 - xntpd 0.00% 0.10% 0.01%

    2 - 514 128 - inetd 0.00% 0.03% 0.00%

    3 - 514 1 - BsPm 0.00% 0.00% 0.00%

    3 - 514 2 - BsDm 0.18% 3.23% 0.96%

    3 - 514 10 - CsSp 0.00% 0.01% 0.00%

    3 - 514 13 - CsPdm 0.00% 0.06% 0.02%

    3 - 514 17 - CsSipReg 0.00% 0.00% 0.00%

    3 - 514 20 - CpCallm 0.10% 0.66% 0.33%

    3 - 514 30 - SgwFtMgr 0.00% 0.00% 0.00%

    3 - 514 31 - SgwMtp2 0.05% 0.85% 0.42%

    3 - 514 32 - SgwMtp3 0.00% 0.02% 0.00%

    3 - 514 33 - SgwIsup 0.00% 0.39% 0.00%

    3 - 514 34 - SgwTcap 0.00% 0.00% 0.00%

    3 - 514 36 - SgwSip 1.66% 3.70% 2.51%

    3 - 514 44 - OamNm 0.00% 0.02% 0.00%

    3 - 514 47 - OamLr 89.96% 96.65% 94.33%

    3 - 514 117 - BsSnmpMonitord 0.00% 0.00% 0.00%

    3 - 514 122 - snmpdm 0.00% 0.00% 0.00%

    3 - 514 123 - brassagt 0.00% 0.00% 0.00%

    3 - 514 124 - syslogd 0.00% 0.16% 0.00%

    3 - 514 125 - BsAlm_alarmd 0.00% 0.00% 0.00%

    3 - 514 126 - BsAlm_rnetd 0.00% 0.00% 0.00%

    3 - 514 127 - xntpd 0.00% 0.01% 0.00%

    3 - 514 128 - inetd 0.00% 0.01% 0.00%

    4 - 514 1 - BsPm 0.00% 0.00% 0.00%

    4 - 514 2 - BsDm 0.36% 3.27% 0.92%

    4 - 514 10 - CsSp 0.29% 3.00% 1.40%

    4 - 514 13 - CsPdm 0.00% 0.07% 0.02%

    4 - 514 17 - CsSipReg 0.00% 0.00% 0.00%

    4 - 514 20 - CpCallm 0.28% 1.04% 0.63%

    4 - 514 30 - SgwFtMgr 0.00% 0.00% 0.00%

    4 - 514 31 - SgwMtp2 0.07% 0.83% 0.42%

    4 - 514 32 - SgwMtp3 0.11% 1.32% 0.60%

    4 - 514 33 - SgwIsup 0.28% 1.93% 1.01%

    4 - 514 34 - SgwTcap 0.00% 0.00% 0.00%

    4 - 514 36 - SgwSip 0.00% 0.01% 0.00%

    4 - 514 44 - OamNm 0.00% 0.01% 0.00%

    4 - 514 47 - OamLr 89.58% 97.19% 93.75%

    4 - 514 117 - BsSnmpMonitord 0.00% 0.00% 0.00%

    4 - 514 122 - snmpdm 0.00% 0.00% 0.00%

    4 - 514 123 - brassagt 0.00% 0.00% 0.00%

    4 - 514 124 - syslogd 0.00% 0.04% 0.00%

    4 - 514 125 - BsAlm_alarmd 0.00% 0.00% 0.00%

    4 - 514 126 - BsAlm_rnetd 0.00% 0.00% 0.00%

    4 - 514 127 - xntpd 0.00% 0.01% 0.00%

    4 - 514 128 - inetd 0.00% 0.00% 0.00%
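
    A report in this format can be scanned for applications that are consuming a disproportionate share of CPU on a node. Below is a minimal Python sketch, assuming the rows have been exported as whitespace-separated text in the layout shown above; the file name and the 50% flag threshold are illustrative assumptions, not product overload thresholds.

```python
# Minimal sketch: flag applications whose average CPU exceeds a threshold in a
# SYSNODEAPPPERF export laid out as in Appendix C.
# The file name and threshold are illustrative assumptions.
FLAG_THRESHOLD = 50.0  # percent

def flag_hot_apps(path: str, threshold: float = FLAG_THRESHOLD) -> list:
    hot = []
    with open(path) as report:
        for line in report:
            fields = line.split()
            # Data rows end with three percentage columns (min, max, avg).
            if len(fields) < 7 or not fields[-1].endswith("%"):
                continue
            node = f"{fields[0]} - {fields[2]}"          # e.g. "1 - 514"
            app = " ".join(fields[3:-3])                 # e.g. "47 - OamLr"
            avg_cpu = float(fields[-1].rstrip("%"))
            if avg_cpu >= threshold:
                hot.append((node, app, avg_cpu))
    for node, app, avg_cpu in sorted(hot, key=lambda entry: -entry[2]):
        print(f"node {node}: {app} averaging {avg_cpu:.2f}% CPU")
    return hot

if __name__ == "__main__":
    flag_hot_apps("sysnodeappperf.txt")  # illustrative export file name
```

    On the sample data above, this would flag the OamLr application on each node, since its average CPU exceeds the illustrative 50% threshold.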

    Appendix D. References

    [1] G9 Converged Media Gateway Product Description, 990-0102-101

    [2] G9 Converged Media Gateway Engineering Rules, 630-00453-01

    [3] C3/G9 IP Network Interface Specification, 630-00450-01

    [4] GENBAND 8000 Media Gateway Engineering Rules, 630-00452-01

    [5] G9 Signaling Gateway Engineering Rules, 630-00454-01

    [6] C3 Signaling Controller G9 Converged Media Gateway System Description, 630-00484-01

    [7] C3 Signaling Controller 8000 Media Gateway System Description, 630-00528-01

    Appendix E. Acronyms

    8000 GENBAND 8000 Media Gateway

    C3 GENBAND C3 Signaling Controller

    CAC Call Admission Control

    DSCP Differentiated Services Code Point

    DSP Digital Signal Processor

    DSS Distributed Switching System

    EF Expedited Forwarding

    FE Fast Ethernet

    G9 GENBAND G9 Converged Media Gateway

    GbE Gigabit Ethernet

    Gbps Gigabits per second

    GEI Gigabit Ethernet Interface card

    K Kilo (1024)

    LAN Local Area Network

    MG Media Gateway

    MGC Media Gateway Controller (Softswitch)

    MPLS Multi-Protocol Label Switching

    NI Network Interface (packet and TDM)

    NIC Network Interface Card

    PAC Packet and Control card

    PDU Power Distribution Unit

    RSVP-TE Resource reSerVation Protocol with Traffic Engineering extensions

    SC Signaling Controller

    SG Signaling Gateway

    TDM Time Division Multiplex

    VLAN Virtual LAN

    VoIP Voice over Internet Protocol

    VRRP Virtual Router Redundancy Protocol
