
1
Dealing with Crises: CSDs’ Recent Experiences
Singapore Exchange Case Studies
• SARS in Singapore
• 9/11/2001 in the U.S.
• …and a few words about Earthquakes, Typhoons, Ice Storms, Blackouts
Mary Ann Callahan, Managing Director, DTCC
ACSDA / STRATE Seminar, November 20-21, 2003

2
Business Continuity in the Wake of SARS* & 9/11
1. Double Wake Up Calls
2. SARS & Terrorism
3. SARS Hit Singapore
4. SARS Contingency Measures in SGX
5. Lessons Learned
6. BCP Best Practices Guidelines
7. Conclusion
*borrowing from Daniel Tan of Singapore Exchange at ISSA…

3
Multiple Wake Up Calls - Before, After & Since…
Prior to 9/11 and SARS (Severe Acute Respiratory Syndrome)
Disaster scenarios revolved around man-made accidents, e.g., fire, chemical leakage and power outages
BCP was positioned mainly in terms of Disaster Recovery for technology assets, networks and data
In the wake of SARS & terrorism
Many BCP assumptions proved wrong or inadequate
BCP readiness became our only protection

4
SARS & Terrorism
– The Similarities
Crippling blow to business operations
Invisible and totally unpredictable
Real or potentially staggering loss of life, including key executives (2,752 people died on 9/11), disrupting crisis management plans
Need for an alternate site to resume business operations
Severe impact on the travel industry and major economic losses

5
SARS & Terrorism
– The Subtle Differences
SARS is not linked to terrorism
Terrorism usually leads to massive loss of technology and physical assets; SARS instead requires segregating critical staff to operate from separate locations
A terrorist attack is not only unpredictable but also less controllable; SARS can be positively controlled with adequate continuity response measures

6
SARS Hit Singapore
– A Chronology of Events
1-3 Mar: Three Singaporeans admitted to hospital
17 Mar: SARS notified as an infectious disease under the Infectious Diseases Act; healthcare workers were particularly affected
22 Mar: Suspect and probable SARS cases centralized at a designated hospital
27 Mar: Childcare centres, preschools, primary and secondary schools closed

7
SARS Hit Singapore
– A Chronology of Events
10 Apr: Home quarantine orders implemented
18 Apr: Hospital visitors restricted to 1 per patient to reduce possible virus spread
20 Apr: Vegetable wholesale center closed for 10 days over fear of SARS spreading to the community
23 Apr: Thermal imaging scanners installed at the airport and main entry points
31 May: Singapore removed from WHO's list of SARS-affected areas
In the end, 162 people were infected, and 13 people died.

8
SARS – Disaster Scenario
Severe temporary or permanent loss of critical key staff
Absenteeism due to illness or serving quarantine orders
Higher incidence of sick leave
Threat of forced closure of key business operation facilities by the health authority, e.g., the trading floor (pictured on the next slide)

9
SGX Open Outcry Trading Floor

10
SARS – Contingency Measures in SGX
Staff with critical operational responsibilities segregated to work from the Emergency Operation Center (EOC), far away from the main office.
Health declaration and temperature checks made compulsory for all visitors. Particulars (name, NRIC, body temperature, etc.) recorded for contact tracing (see the sketch after this list).
Business travel ban to SARS-affected areas imposed. Staff advised against social travel; self-imposed home quarantine for those who travelled anyway.
Isolation rooms identified for handling suspected SARS cases.
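The visitor log described above is, at heart, a small record-keeping protocol: capture identifying particulars and a temperature reading, keep every record for contact tracing, and turn away anyone with a fever. A minimal Python sketch of such a record follows; the field names and the 37.5 °C fever threshold are illustrative assumptions, not details from SGX's actual system.

```python
# Minimal sketch of a visitor contact-tracing record, assuming a simple
# log like the one SGX describes; all names/thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime

FEVER_THRESHOLD_C = 37.5  # assumed cutoff; SGX's actual threshold not stated


@dataclass
class VisitorRecord:
    name: str
    nric: str              # Singapore national ID, kept for contact tracing
    temperature_c: float
    timestamp: datetime

    def admissible(self) -> bool:
        """Deny entry on fever; the record is retained either way so that
        contacts can be traced later if a case emerges."""
        return self.temperature_c < FEVER_THRESHOLD_C


if __name__ == "__main__":
    visitor = VisitorRecord("Jane Tan", "S1234567A", 36.8, datetime.now())
    print(visitor.admissible())  # True: below the assumed fever threshold
```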

11
SARS – Contingency Measures on Trading Floor
Measures to prevent SARS from spreading to the trading floor
Daily health declaration by all trading floor personnel (no fever, no cough or breathing-difficulty symptoms, no travel to SARS areas)
Body temperature check using a thermal scanner (pictured on a later slide)
No visitors allowed onto the trading floor
Disinfectant cleaning done twice a week
Alternative trading room set up; the Canadian financial industry used similar "clean teams"

12
Thermal scanner installed at SGX

13
Lessons Learned
Shared recovery solutions inadequate (multiple simultaneous activations, containment concerns)
Segregating personnel in different locations was impeded by a lack of qualified staff and the unavailability of IT infrastructure to support both the primary system and the DR site. Need for cross-training and decentralized operations?
Unmanned data centre with remote operation capability
Potential for increased focus on telecommuting
Prevention is a BCP strategy

14
The Impact of 9/11
DTCC
Source: TowerGroup

15
9/11 damage to securities industry IT infrastructure estimated at $3.2 billion
Total damage (in billions): Software $1.3, Services $1.7, Hardware $0.2
Estimated servers lost: 4,750 Unix servers, 7,750 IA servers
[chart: estimated workstations lost ('000) by station type, including research stations and sales stations; values 1.3, 4.1, 10.5, 34.2]
Source: TowerGroup

16
DTCC Before 9/11/2001
DTCC staff concentrated at the headquarters location (a well-known address) in Manhattan, with a modest staff complement in Brooklyn, outside Manhattan
Principal processing facility and depository primary data center in Manhattan
Depository and clearing corporation data centers and business recovery site split between Manhattan and Brooklyn (but very close together)

17
Significant Issues That DTCC Faced
Impact of an event affecting the greater NY metropolitan area: 9/11 changed old BCP assumptions that only a single building outage should be planned for, and assumptions about telecom and fiber routes, when 80% of inbound phone traffic to Manhattan was blocked
Site of DTCC's production data center
Location of DTCC's staff
Notification and instruction procedures during the event and recovery period
Client readiness, and their business recovery locations

18
DTCC’s Response – first 18 months
Immediately moved primary data center outside Manhattan
Implemented an additional data center, adding equipment to all sites
Relocated staff outside Manhattan; currently planning for a new operational site even farther away
Various recovery scenarios and scripts

19
DTCC’s Response – first 18 months
Implemented officer rotation
Utilized government telecom programs:
Telecommunications Service Priority (TSP)
Government Emergency Telecommunications Service (GETS)
Participant readiness:
Developed additional communications requirements for the largest clients
Required a dedicated backup connection
Required annual recovery testing, from client primary and backup sites, to all DTCC processing sites

20
DTCC Remote Data Center (RDC)
Acquired site and completed major modifications
Implemented offsite tape storage program in 2002; implemented data replication in 2003
Supports client traffic daily across all data centers
Distance requires a multi-hop process as part of daily operation (conceptual sketch below)
Systems ready and available
First chance to prove it: DTCC relied on RDC staff and components during NY's blackout in August 2003
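Why a multi-hop process? Synchronous replication degrades with distance, so a common pattern (only an assumption about what DTCC's "multi-hop" entails; the slide does not spell it out) is to replicate synchronously to a nearby intermediate site and forward asynchronously from there to the distant one. A minimal Python sketch of that two-hop pattern, with illustrative names throughout:

```python
# Illustrative two-hop replication sketch -- not DTCC's implementation.
import queue
import threading


class Site:
    """A replication target; holds an ordered log of applied records."""
    def __init__(self, name: str):
        self.name = name
        self.log = []

    def apply(self, record: str) -> None:
        self.log.append(record)


def multi_hop_writer(primary: Site, intermediate: Site, remote: Site):
    """Build a write() that commits locally, copies synchronously to the
    nearby intermediate site (hop 1), and queues an asynchronous copy to
    the distant remote site (hop 2), hiding the long-haul latency."""
    relay: "queue.Queue[str]" = queue.Queue()

    def forward() -> None:
        while True:
            record = relay.get()
            remote.apply(record)       # hop 2: async, tolerant of distance
            relay.task_done()

    threading.Thread(target=forward, daemon=True).start()

    def write(record: str) -> None:
        primary.apply(record)          # local commit
        intermediate.apply(record)     # hop 1: synchronous, short distance
        relay.put(record)              # hop 2 queued; caller is not delayed

    return write, relay


if __name__ == "__main__":
    ny, nearby, rdc = Site("NY"), Site("nearby"), Site("RDC")
    write, relay = multi_hop_writer(ny, nearby, rdc)
    for i in range(3):
        write(f"txn-{i}")
    relay.join()                       # drain the asynchronous hop
    print(rdc.log)                     # ['txn-0', 'txn-1', 'txn-2']
```

The design trade-off the sketch illustrates: the writer only ever waits on the short first hop, while the remote site still receives every record in order.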

21
RDC Management and Command Center
6 DTCC NY staff formed the initial team, then local hires
Management staff on site
Staff today covers 3 shifts, representing Network, DP Operations, Command Center, Facilities and System Support
DP Operations calls distributed among all sites
Established Command Center at RDC
Ability to command all data centers from RDC

22
Status – Data Centers
Supporting live customer traffic on a daily basis through all data centers; each customer's traffic is routed through each data center at least once per year (see the rotation sketch below)
Successful start-ups of NY production systems using multi-hop data connections
3rd (remote) data center certified May 2003 for backup of all DTC critical systems
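Routing every customer through every data center at least once a year is, in effect, a scheduling guarantee. A minimal sketch of one way to meet it follows; the site names and weekly cadence are assumptions for illustration, not DTCC's actual routing scheme.

```python
# Illustrative only: rotate customer traffic across data centers so each
# customer passes through every center at least once per year.

DATA_CENTERS = ["manhattan", "brooklyn", "rdc"]  # assumed three sites


def data_center_for(customer_id: int, week: int) -> str:
    """Pick the center carrying this customer's traffic in a given week.

    The customer_id offset spreads load across centers; advancing with
    the week number guarantees a full cycle every len(DATA_CENTERS)
    weeks, far more often than the once-per-year requirement."""
    return DATA_CENTERS[(customer_id + week) % len(DATA_CENTERS)]


if __name__ == "__main__":
    for week in range(4):
        print(f"week {week}:",
              {c: data_center_for(c, week) for c in range(3)})
```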

23
Status - Network
Existing circuits at client locations provide connectivity to RDC
Routing to RDC did not require customer involvement
Additional functionality soon:
Network engineers have completed design and implementation
of interconnected Stock Exchanges/CCP and Depository
networks, to enable all DTCC businesses to reach RDC.

24
Our Participants’ Experience Post 9/11
Firms relocated from disaster sites into temporary space (1-3 months), then looked for more permanent space (3-9 months)
It took 12 months to build out operations space and 12 to 18 months to build out trading floors
Lost 10% of Manhattan office space; not enough new space in lower Manhattan
Firms directly impacted by the climate-control breakdown saw how difficult it is to execute people recovery and system recovery simultaneously
9/11 showed people support is also needed, including an HR helpline and the ability to locate staff

25
Tokyo BCP: Focus on Natural Disasters
Earthquake vs. building failure, fire, loss of power
People plan
JASDEC’s back-up center in Osaka

26
Other Natural Disasters
Hong Kong Exchanges & Clearing’s procedures for typhoons and black
rainstorm warnings.
The Canadian Depository’s ice storm experience in the late 1990s.

27
Blackouts – since August 2003!
United States/Canada: 14 & 15 AugustRepublic of Georgia: 19 AugustLondon, UK: 28 AugustYucatan Peninsula: 2 SeptemberSweden & portions of Denmark: 2 SeptemberItaly & Switzerland: 28 SeptemberIsrael: 4 OctoberCzech Republic: 8 OctoberAustralia: 9 OctoberIsland of Guam: 13 OctoberArgentina: 16 OctoberUnited Kingdom: 17 OctoberBrazil: 29 October Bangladesh: 30 October

28
Regulatory Intervention
Monetary Authority of Singapore issued guidelines to financial institutions
In the U.S., the Federal Reserve / Securities and Exchange Commission Interagency White Paper
Other regulators, e.g., the Financial Services Authority(s), Bank of Canada

29
Industry Insights: Good Crisis Management is Flexible
Avoid the 'War and Peace' syndrome: don't confuse crisis management with business continuity planning
Think effect, not cause
Build a solid platform from which to initiate your response, involving your best and brightest people with specific roles during a disaster
Empower staff with the appropriate decision-making authority
If you involve the right people with the right authority and a basic foundation plan, you will be successful

30
Business Continuity Planning: A Process, Not A Project
Accept the fact that some percentage of what you plan for today will change two weeks from now and will continue to deteriorate over time
It is better to have a plan that is substantially complete and accurate than no plan at all

31
The Right Balance
Pre 9/11: Without an obvious and immediate threat,
complacency sometimes infiltrated the BC planning process
Post 9/11: In response to a catastrophic event, people can
overreact in BC planning
In summary, strive to develop BC plans that consider the critical, time-sensitive aspects of your business and are adaptable to the various situations that may arise

32
From “Build and React” to “Integrate, Mitigate”
BCP has traditionally been a reactive process
Due to the heightened threat environment, business recovery issues should be considered a primary operational risk
Strategies such as secondary and tertiary data centers and
diversified office space/staff locations are getting increased focus

33
Conclusions
Challenging and costly to anticipate all of these events
SGX found shared BCP recovery solutions within its own market inadequate
Increasing STP and process automation have expensive implications for BCP/DR
Expect the unexpected, plan for the unplanned: a network attack?
BCP should be part of the IT project life cycle, like DR
Readiness and prevention are the only protection
Consider recovery from beyond the shore: is there any potential for regional CSDs to work on common solutions?