
Transcript of OPC Tunnel for PlantTriage Communication.ppt

4/30/2008


OPC Tunnel for PlantTriage Communication

Benefits of an OPC Tunnel to bypass DCOM issues for secure, efficient PlantTriage communication across firewall / WAN

Chris Friedman

Introduction

Chris Friedman, SABIC Innovative Plastics

- SABIC Innovative Plastics
- Information Management Organization / IT
- Chemical Engineer
- Global and local projects
- Mt. Vernon, Indiana site
- Multi-site PlantTriage implementation


Implementation Background: Project Goals

- Implementation at numerous global sites
- Consistent and standard implementation strategy
- Separate server(s) by sites
• Site size and complexity vary considerably
• Number of loops and users vary
- Centralize architecture as much as possible
- Minimize hardware footprint and cost
- Consistent and robust access to process data
- No security risk to process network

Implementation Background

[Architecture diagram: Central Data Center at the top, connected across the WAN (Wide Area Network) to Site 1 and Site 2 at the site level (LAN, Local Area Network). At the plant / process unit level, each site's Process Network hosts a Data Historian and Units 1–3, each unit served by its own DCS / PLC.]


Architecture Concerns: Corporate IT

- Centralize to data center where possible
• Ease of maintenance and management
• Easier hardware replacement
- Consolidate servers
• Fewer servers means less hardware cost
• Less maintenance cost
- Wide Area Network
• Focus on WAN bandwidth and reliability
• Transparent server location
- Consistent and standard architecture

Architecture Concerns: Local Site IT / Control

- Process network
• Security and firewall not impacted for process network
• Minimize impact to data traffic load on process network
- Data integrity
• Performance of data transfer appropriate (collection frequency)
• Accuracy of data collection to PlantTriage
- User accessibility
• Users have robust access to appropriate web / engineering UI
• Hierarchical data visibility (users can easily find their data)


Architecture Alternatives: Location of PlantTriage server

1) Data center (off-site)
2) Shared server on site LAN (site computer center)
3) Multiple servers on site LAN (per unit / area)
4) Server on process network (per unit / area)

Considerations
- Data from DCS OPC through firewall or plant historian?
- Performance and accessibility
- Corporate IT concerns

Architecture Alternative # 1: Data Center (Off-site)

[Architecture diagram: PlantTriage Server at the Central Data Center, connected across the WAN (Wide Area Network) to the Site 1 LAN (Local Area Network) at the site level.]


Architecture Alternative # 1: Data Center (Off-site)

- Considerations
• Network performance, WAN bandwidth and latency
• Reliability and uptime of WAN
• Performance of data collection from site across WAN
• Performance of web pages across WAN
• Accessibility of PlantTriage engineering interface
- Discussion
• High bandwidth and reliability of WAN (comparable to LAN)
• Remote desktop access to engineering desktop
• Several users sharing data (proper hierarchy of data needed)
• Performance unknown, need testing!

Architecture Alternative # 2: Shared Server on Site LAN

[Architecture diagram: PlantTriage Server and Data Historian (?) on the Site 1 LAN (Local Area Network); at the plant / process unit level, the Process Network hosts Units 1–3, each with its own DCS / PLC.]


Architecture Alternative # 2: Shared Server on Site LAN

- Considerations
• Corporate IT architecture goals
• Site LAN reliability vs. process network
• Hardware management responsibility
• Data shared on same server (security, hierarchy)
• Loop count limitations per server
- Discussion
• Site LAN reliability and performance good (better than WAN)
• Site management of server less efficient than centralized
• Users accessing same server requires proper data hierarchy for efficient visibility
• No way to secure data by plant area in engineering interface

Architecture Alternative # 3: Multiple Servers on Site LAN

[Architecture diagram: Multiple PlantTriage Servers on the Site 1 LAN (Local Area Network); at the plant / process unit level, the Process Network hosts Units 1–3, each with its own DCS / PLC.]


Architecture Alternative # 3: Multiple Servers on Site LAN

- Considerations
• Would physically separate data by plant areas (security)
• High cost of hardware / maintenance for multiple servers
• Main users like their “own server”
- Discussion
• Performance would be good
• Separation of data and users would be good from a security aspect, but an Enterprise server would be needed to see all site data
• High cost of multiple servers really eliminates this option without some consolidation
• Eliminating some servers appropriately makes this more like option # 2

Architecture Alternative # 4: Server on Process Network

[Architecture diagram: PlantTriage Server placed directly on the Unit 1 Process Network alongside the DCS / PLC.]


Architecture Alternative # 4: Server on Process Network

- Considerations
• Would physically separate data by plant area / network (security)
• High cost of hardware / maintenance for multiple servers
• No server consolidation possible due to network separation
• Highest performance, robustness due to network diversity
• No need to cross firewall for data collection, but site network users would not be able to cross firewall
- Discussion
• Hardware / maintenance cost makes this a poor option
• Multiple DCS systems (some with no OPC) eliminates this as a consistent option
• Support and maintenance would fall to process network group (no IT support)

Initial Conclusions

- Option 4 – Many DCS types, some without OPC, high cost, lack of access, etc. eliminate this option
- Option 3 – Too costly for multiple site servers; server consolidation necessary
- Option 2 – Good option, may require split of servers due to loop counts
- Option 1 – Best option from cost and management standpoint if performance is comparable to on-site location
- Other topic – Data historian as standard data collector?
• Some DCS systems without OPC server
• OPC / DCOM crossing firewall presents security issue to process network
• Mixed environment if data historian not standard collector
• Robust OPC data source required for data historian
• Historian data structures, performance, configuration, load, etc. are issues


Key Decision Points

- Number of servers minimized according to loop counts
- Areas/units to share data, managed through data organization and process
- If server footprint is minimized, key decision is location: on-site or in off-site data center
- Main consideration is performance and reliability
- Corporate IT focus makes WAN reliability comparable to LAN
- Performance tests over WAN to determine final decision

Performance Testing: Criteria

- 2 second scan rates for 500, 1000, 1500, 2000 loops
- Testing to multiple (4) OPC sources and single source
- Test to multiple sites (worst WAN case)
- Data historian OPC server(s) used for testing
- Web site response testing (web page update speed)
- Compare response of local server to remote server
- Performance data such as network load, CPU use, memory use, disk speed, data processing speed, data accuracy
- Stress test to 1 second data (OPC source may be limitation)
- Considerations:
• OPC uses DCOM as underlying communication architecture
• PlantTriage optimizes traffic through OPC data change “callback”
• Tests performed with max stress, all values changing every 2 seconds
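As a rough illustration of the load this test matrix implies (a sketch only; the loop counts and scan rates come from the criteria above, and the worst-case assumption that every value changes on every scan is the stated max-stress condition), the sustained OPC update rate is simply loops divided by scan period:

```python
# Load arithmetic for the test matrix (illustrative only).
# Worst case assumed: every value changes on every scan, so each
# loop produces one data-change update per scan period.

def updates_per_second(loops: int, scan_period_s: float) -> float:
    """Sustained OPC update rate when all values change each scan."""
    return loops / scan_period_s

# 2 s scans at 500-2000 loops, plus the 1 s stress case:
for loops in (500, 1000, 1500, 2000):
    print(loops, "loops @ 2 s ->", updates_per_second(loops, 2.0), "updates/s")
print("1000 loops @ 1 s ->", updates_per_second(1000, 1.0), "updates/s")
```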


Performance Testing

[Test architecture diagram (ExperTune PlantTriage Architecture): on the Site Network, PlantTriage Server(s) running IIS / ASP.Net collect OPC-DA from the data historian, which sits behind a firewall separating it from the Process Network's DCS / PLC OPC server / data collector. Open question: WAN or LAN? — server at data center vs. local site location.]

Performance Testing: Data Center Server

- Data Collection
• 500 loops at 2 seconds from 4 sources same as 500 loops from 1 source
• At 500 loops, 2 second data points being lost every 8 to 30 seconds
• 1000 loops worse, 1500 loops fails to maintain connection, 2000 loops and PlantTriage fails to start
• Performance monitors for network, CPU, disk usage, memory usage show these are not limiting factors (on client or server side)
• Problems due to WAN performance or overloading of OPC server?
- Web site response
• Performance reasonable, compare to local response
• Latency in WAN will always make remote web server slower
• Question is how much, is it reasonable?


Performance Testing: Data Center Server (500 loops every 2 seconds)

[Charts: PlantTriage and Data Historian trends; OPC trends.]


Performance Testing: Local Server

- Data Collection
• 500, 1000 loops per 2 seconds scan with no data loss
• Stress tested to 1000 loops at 1 second with no data loss
• 2000 loops per 2 seconds reaches limitations since problems also experienced on local server (external limitation reached)
• Performance monitors for network, CPU, disk usage, memory usage similar to WAN tests
- Web site response
• Response faster than over WAN, varied by site, traffic, etc.
• Considered to be reasonable (similar to intranet sites)

Performance Testing: Data Center Server (1000 loops scanned per 1 second)

[Chart: PlantTriage and Data Historian trends.]


Test Conclusions

- Remote communication over WAN not acceptable compared to LAN
- Data historian OPC server is not the limitation
- Issue over WAN not identified as network limitation
• Performance monitors show low network, resource usage
• Network not limiting data transfer
• OPC overhead must be involved
• Research indicated that DCOM communication degrades with network latency
• DCOM is cause of issue!

DCOM

- DCOM is underlying communication protocol for OPC
- Difficult to configure, most OPC problems related to DCOM
- Web research indicates:
• DCOM works best on networks with low latency and high bandwidth
• Networking is not OPC’s strength since it is based on DCOM
• OPC was originally developed based on Microsoft COM (component object model) that runs on a single computer
• Most users have difficulty on LAN with OPC/DCOM, and wouldn’t even attempt it over a WAN
- DCOM requires high communication overhead
• Requires handshaking, security layers, user authentication
• Lots of communication sensitive to latency
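The latency sensitivity can be made concrete with a back-of-the-envelope model (a sketch only; the round-trip counts and the 1 ms LAN / 40 ms WAN latencies are assumed figures for illustration, not measurements from this project): when a chatty protocol performs synchronous round trips per exchange, total wait time scales directly with link latency, while a batched single-stream transfer pays the latency roughly once.

```python
# Illustrative model of why a chatty protocol degrades with latency.
# All numbers (round-trip counts, link latencies) are hypothetical.

def transfer_time_s(round_trips: int, latency_s: float) -> float:
    """Total wall time spent waiting on synchronous round trips."""
    return round_trips * latency_s

lan_latency = 0.001   # assumed 1 ms LAN round-trip latency
wan_latency = 0.040   # assumed 40 ms WAN round-trip latency

# A chatty exchange needing 50 round trips:
print("chatty over LAN:", transfer_time_s(50, lan_latency), "s")
print("chatty over WAN:", transfer_time_s(50, wan_latency), "s")
# A batched transfer needing 1 round trip pays latency once:
print("batched over WAN:", transfer_time_s(1, wan_latency), "s")
```

With these assumed numbers the chatty exchange takes 2 s over the WAN, i.e. it alone consumes an entire 2-second scan period, which matches the observed data loss at 2-second scan rates.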


OPC Tunnel

- Possible solution with OPC Tunnel, best described in depiction
- OPC Tunnel eliminates DCOM, making the communication over the network much more efficient

[Diagram 1: OPC Server → DCOM → Network → OPC Client. OPC is inefficient over a network due to DCOM; it is worse over WAN due to increased latency.]

[Diagram 2: OPC Server → COM → OPC Tunnel → TCP/IP across the Network → OPC Tunnel → COM → OPC Client. OPC Tunnel eliminates DCOM: OPC communicates via COM locally on each computer and efficiently passes data across the network through a standard TCP/IP port.]
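The tunnel principle in the second depiction can be sketched generically (this is not any vendor's actual wire format; the tag names and JSON framing are illustrative assumptions): many value changes are packed into one length-prefixed message and sent down a single TCP stream, rather than one DCOM exchange per update.

```python
import json
import struct

# Generic sketch of tunnel-style framing (not a real vendor protocol):
# batch many OPC-style updates into one length-prefixed TCP message.

def frame_batch(updates: list) -> bytes:
    """Serialize a batch of (tag, value, timestamp) updates into one
    message: 4-byte big-endian length prefix + JSON payload."""
    payload = json.dumps(updates).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def unframe_batch(message: bytes) -> list:
    """Reverse of frame_batch: read the length prefix, decode payload."""
    (length,) = struct.unpack(">I", message[:4])
    return json.loads(message[4:4 + length].decode("utf-8"))

# Hypothetical tags, for illustration only:
batch = [
    {"tag": "FIC101.PV", "value": 42.5, "ts": "2008-04-30T10:00:00"},
    {"tag": "TIC202.PV", "value": 187.0, "ts": "2008-04-30T10:00:00"},
]
assert unframe_batch(frame_batch(batch)) == batch
```

One stream, one firewall port, and latency paid per batch instead of per value is what makes this efficient over a WAN.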

Verification

- Tests from data center repeated using OPC Tunnel from various vendors
- Results proved the theory of DCOM as issue over WAN
- Showed performance similar to or better than OPC connection on local LAN
- 1500 loops at 2 seconds scanned into PlantTriage across WAN with no data loss
- Server resources (CPU, memory, etc.) showed slight elevation using OPC Tunnel, but no real impact
- Performance, configuration, cost, etc. vary by vendor – personal preference and testing recommended


Security

- OPC Tunnel improves OPC security since DCOM is eliminated
• No user authentication for DCOM, OPC can be locked down
• Only connections from clients with OPC Tunnel client can connect to OPC Tunnel server
- Common problems related to DCOM configuration are eliminated
- OPC Tunnel uses a single TCP/IP socket
• DCOM uses a range of TCP/IP ports for communication, making OPC difficult to control across a firewall
• With OPC Tunnel, only the single TCP/IP port needs to be allowed to pass through a firewall
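For example, on a Windows firewall the single tunnel port could be opened with one inbound rule (a sketch; the port number 21379 and the rule name are placeholders, as the actual port depends on the tunnel vendor's configuration):

```shell
# Allow only the single OPC Tunnel TCP port through the Windows firewall.
# Port 21379 and the rule name are placeholders; use your vendor's port.
netsh advfirewall firewall add rule name="OPC Tunnel" ^
    dir=in action=allow protocol=TCP localport=21379
```

Contrast this with DCOM, which would require opening port 135 plus a whole dynamic port range.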

Caveats

- Considerations to note
• OPC Tunnels vary by vendor, so although an OPC Tunnel is a valid technical solution, specific software should be tested and confirmed to meet specific needs by preference
• Some settings may need to be investigated for specific OPC Tunnels by vendor, in particular settings for “data change mechanisms” in OPC Tunnel, which can interfere with the OPC Server callback mechanism used by PlantTriage
• In other words, values that change infrequently still need to scan and pass a timestamp (Forced Update option of PlantTriage may be needed)
• Possible for OPC Tunnels to have difficulty reconnecting automatically on occasion when servers reboot or disconnect
• Still dependent on bandwidth and performance of WAN to be viable
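The forced-update caveat above can be sketched generically (a hypothetical `Subscription` class, not PlantTriage's or any vendor's actual API): a change-driven subscription is given a maximum report interval, so even an unchanged value is re-reported with a fresh timestamp instead of going silent.

```python
import time

# Generic sketch of a change-driven subscription with a forced
# update interval (hypothetical API, for illustration only).

class Subscription:
    def __init__(self, callback, forced_update_s: float):
        self.callback = callback
        self.forced_update_s = forced_update_s
        self.last_value = None
        self.last_report = None  # time of last callback

    def scan(self, value, now=None):
        """Call on each scan; fires the callback on change OR when
        the forced update interval has elapsed, so slowly changing
        tags still receive fresh timestamps."""
        now = time.monotonic() if now is None else now
        changed = value != self.last_value
        stale = (self.last_report is None
                 or now - self.last_report >= self.forced_update_s)
        if changed or stale:
            self.callback(value, now)
            self.last_value = value
            self.last_report = now

reports = []
sub = Subscription(lambda v, t: reports.append((v, t)), forced_update_s=10.0)
sub.scan(1.0, now=0.0)    # first scan: reported
sub.scan(1.0, now=2.0)    # unchanged, not stale: suppressed
sub.scan(1.0, now=12.0)   # unchanged but 10 s elapsed: forced update
assert reports == [(1.0, 0.0), (1.0, 12.0)]
```

Without the forced update, a tunnel that filters unchanged values would leave the downstream client unable to tell a quiet tag from a dead link.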


Benefits / Conclusion

- Need for DCOM for OPC communication eliminated
- Complicated DCOM configuration and common errors avoided
- Increased security and control of communication
• DCOM security configuration headaches are gone
• Good solution for OPC across firewall
- Performance and reliability of OPC data transfer across WAN greatly improved
- Enables centralized architecture for PlantTriage servers

Benefits / Conclusion: Project Goals

- Of original goals, following were realized:
• Centralized architecture for improved server support, cost, etc.
• Consistent design across sites
• Robust data collection across WAN and through firewalls
• Reliable access to engineering interface via Terminal Services
• Reasonable web access for users on site networks
• No impact to security and function of process network
• Goals of all involved parties met