WS-DREAM: A Distributed Reliability Assessment Mechanism for Web Services
Zibin Zheng, Michael R. Lyu
Department of Computer Science & Engineering
The Chinese University of Hong Kong
Hong Kong, China
DSN 2008, Anchorage, Alaska, USA, 25 June, 2008
Outline
1. Introduction
2. Design
3. Implementation
4. Experiments
5. Conclusion
1. Introduction
• Service-Oriented Architecture (SOA) is becoming popular.
– Usually built using Web services.
• Reliability of service-oriented applications becomes difficult to guarantee.
– Remote Web services may contain faults.
– Remote Web services may become unavailable.
– The Internet environment is unpredictable.
We need to know whether the target Web services are reliable before using them.
[Figure: a service-oriented application composed of Web service 1, Web service 2, …, Web service n.]
1. Introduction
• The performance of a Web service differs across user locations.
• Service-oriented applications may be deployed to different locations after development.
Distributed reliability assessment of Web services is therefore necessary, but it is:
• Difficult.
• Time consuming.
• Expensive.
1. Introduction
• WS-DREAM: A Distributed REliability Assessment Mechanism for Web Services.
– User-collaboration:
• YouTube: sharing videos.
• Wikipedia: sharing knowledge.
• WS-DREAM: sharing assessment results of target Web services.
– Obtain performance information on individual Web services from different locations, for Web service selection and ranking.
– Assess fault-tolerance replication strategies.
2. Design

[Architecture diagram. WS-DREAM server: Web Site Manager, Coordinator (Strategy Manager, Fault Injector, WSDL Analyzer, TestCase Generator, Test Coordinator, TestCase Dispatcher, TestResult Receiver, TestResult Analyzer, Result Database) and Testing Engine (Rules Manager — 1. <parallel>, 2. <sequence>, 3. <retry>, … — with TestRunner). Clients User 1 … User N invoke Web Service 1 … Web Service N.]

1. Assessment request.
2. Load Applet.
3. Create test cases.
4. Test task scheduling.
5. Client gets test plans.
6. Client runs test plans.
7. Send back results.
8. Analyze and return final results to the client.
2. Design
• Fairness. Different Web services should have a fair chance of being assessed.
• Distribution. Web services should be assessed by users in as many geographic locations as possible.
• Feasibility. Task assignment should adjust dynamically to the frequently changing numbers of users and test plans.
• Efficiency. The algorithm should be efficient and should not slow down the testing progress.
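The fairness requirement above can be sketched as a least-assessed-first dispatcher. This is a hypothetical illustration, not the paper's actual scheduling algorithm; all class and method names are invented.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a fair test-task dispatcher: hand out the test
// plan whose target Web service has been assessed the fewest times, so
// every service gets a fair chance (the fairness requirement above).
public class FairDispatcher {
    private final Map<String, Integer> assessCount = new HashMap<>();

    public FairDispatcher(Collection<String> services) {
        for (String s : services) assessCount.put(s, 0);
    }

    // Called when a client asks for work (step 4, test task scheduling).
    public String nextService() {
        String best = null;
        for (Map.Entry<String, Integer> e : assessCount.entrySet()) {
            if (best == null || e.getValue() < assessCount.get(best)) {
                best = e.getKey();
            }
        }
        assessCount.merge(best, 1, Integer::sum);  // count this dispatch
        return best;
    }
}
```

Because the counter map can grow or shrink at runtime, the same idea also covers the feasibility requirement (services and users joining or leaving).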
2. Design
• Identical and similar Web services are becoming available on the Internet, making redundant replicas for fault tolerance cheaper.
• Basic replication strategies:
1. Parallel. The application sends requests to different replicas at the same time.
2. Retry. The same Web service is tried one more time if it fails at first.
3. Recovery Block (RB). A standby Web service is tried in sequence if the primary Web service fails.

             Parallel            Retry              RB
Parallel     1. Parallel         4. Parallel+Retry  6. Parallel+RB
Retry        5. Retry+Parallel   2. Retry           8. Retry+RB
RB           7. RB+Parallel      9. RB+Retry        3. RB
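The three basic strategies can be sketched in code. This is only a reading of the definitions above, not the paper's implementation; a service call is modeled as a Supplier that throws on failure.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.function.Supplier;
import java.util.stream.Collectors;

// Illustrative sketch of the three basic replication strategies; a service
// call is modeled as a Supplier<String> that throws a RuntimeException on
// failure. Names are ours, not the paper's.
public class Replication {

    // 2. Retry: try the same Web service one more time if it fails at first.
    static String retry(Supplier<String> svc) {
        try { return svc.get(); }
        catch (RuntimeException first) { return svc.get(); }
    }

    // 3. Recovery Block: try a standby replica in sequence if the primary fails.
    static String recoveryBlock(Supplier<String> primary, Supplier<String> standby) {
        try { return primary.get(); }
        catch (RuntimeException e) { return standby.get(); }
    }

    // 1. Parallel: invoke all replicas at once; invokeAny returns the first
    // result that completes successfully.
    static String parallel(ExecutorService pool, List<Supplier<String>> replicas)
            throws InterruptedException, ExecutionException {
        return pool.invokeAny(replicas.stream()
                .map(s -> (Callable<String>) s::get)
                .collect(Collectors.toList()));
    }
}
```

The six combined strategies in the table compose these building blocks; for example, strategy 8 (Retry+RB) can be read as retry applied within a recovery block.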
2. Design
[Diagrams of the combined strategies: 4. Parallel+Retry, 5. Retry+Parallel, 6. Parallel+RB, 7. RB+Parallel, 8. Retry+RB, 9. RB+Retry.]
2. Design
• Test plan:
– Assesses the performance of different replication strategies.
– Includes several test cases.
– Created by the WS-DREAM server and executed on the client side.
– XML-based test plan design.
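The slides only state that test plans are XML-based; as an illustration, a plan for strategy 6 (Parallel+RB) might look like the following, where every element and attribute name is hypothetical.

```xml
<!-- Hypothetical test plan layout; the slides only say plans are XML-based. -->
<testPlan strategy="6">              <!-- 6 = Parallel+RB -->
  <recoveryBlock>
    <parallel timeout="10000">       <!-- primary: invoke two replicas at once -->
      <invoke service="a-us" operation="ItemSearch"/>
      <invoke service="a-uk" operation="ItemSearch"/>
    </parallel>
    <invoke service="a-de" operation="ItemSearch"/>  <!-- standby replica -->
  </recoveryBlock>
</testPlan>
```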
3. Implementation
• JDK + Eclipse
• Client side:
– Java Applet
• Server side:
– An HTTP Web site (Apache HTTP Server)
– A TestCaseGenerator (JDK 6.0 + Axis library)
– A TestCoordinator (Java Servlet + Tomcat 6.0)
– A MySQL database (records testing results)
4. Experiments
• A service user plans to employ several Web services in his commercial Web site:
– Six identical Amazon book-displaying and -selling Web services for fault tolerance (a-us, a-jp, a-de, a-ca, a-fr, and a-uk).
– A Global Weather Web service to display current weather information.
– A GeoIP Web service to obtain geographic information about Web site visitors.
4. Experiments
• Among all 5,443 failure cases:
– 2,986 are due to timeout (longer than 10 seconds).
– 2,456 are due to service unavailability (HTTP code 503).
– 1 is due to a bad gateway (HTTP code 502).
1. Assess the reliability of individual Web services.
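The breakdown above amounts to a simple classification over recorded test results. A sketch: the 10-second timeout and the HTTP codes come from the slide, but the method and field names are assumptions.

```java
// Sketch of classifying a recorded test result into the failure categories
// reported above. The 10 s threshold and HTTP codes are from the slide;
// everything else is illustrative, not the paper's code.
public class FailureClassifier {
    static final long TIMEOUT_MS = 10_000;  // failures with RTT > 10 s count as timeouts

    static String classify(long rttMs, int httpCode) {
        if (rttMs > TIMEOUT_MS) return "timeout";
        if (httpCode == 503) return "service unavailable";
        if (httpCode == 502) return "bad gateway";
        return "success";
    }
}
```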
4. Experiments
[Bar chart: number of failures (0–4000) for each Web service (a-us, a-jp, a-de, a-ca, a-fr, a-uk, GW, GIP), as observed from the five user locations (CN, AU, TW, HK, US).]
4. Experiments
• Strategy 1 (Parallel) provides the best RTT performance.
• The sequential-type strategies (2: Retry, 3: RB, 8: Retry+RB, and 9: RB+Retry) provide good RTT performance in the normal environment; however, their performance degrades in the faulty environment.
2. Measure the performance of different replication strategies.
4. Experiments
• Two replicas are enough to provide high availability in the normal Internet environment, while three replicas are needed to ensure high availability in a 5%-faulty Internet environment.
3. Determine the optimal number of replicas.
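The replica-count conclusion is consistent with simple independent-failure arithmetic: if each replica fails a request with probability p, then n parallel replicas all fail together with probability p^n, e.g. 1 − 0.05³ ≈ 99.99% availability at 5% faultiness. A quick check, assuming independent failures:

```java
// Availability of n parallel, independently failing replicas: 1 - p^n.
// This is textbook arithmetic consistent with the observation above,
// not a formula taken from the paper.
public class Availability {
    static double parallel(double p, int n) {
        return 1.0 - Math.pow(p, n);
    }
}
```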
5. Conclusion and future work
• Conclusion
– WS-DREAM: reliability assessment of individual Web services; performance assessment of fault-tolerance replication strategies.
• Experiment
– More than 1,000,000 test plans; users from five locations; Web services located in six countries.
• Future work
– Assessment of stateful Web services.
– Enhancement of system features to facilitate user test-case contributions.