Enabling Worm and Malware Investigation Using Virtualization
(Demo and poster this afternoon)
Dongyan Xu, Xuxian Jiang
CERIAS and Department of Computer Science
Purdue University
The Team
Lab FRIENDS
  Xuxian Jiang (Ph.D. student)
  Paul Ruth (Ph.D. student)
  Dongyan Xu (faculty)
CERIAS
  Eugene H. Spafford
External collaboration
  Microsoft Research
Outline
Motivation
An integrated approach
  Front-end: Collapsar (Part I)
  Back-end: vGround (Part II)
  Bringing them together
On-going work
Front-End: Collapsar
Enabling Worm/Malware Capture
* X. Jiang, D. Xu, “Collapsar: a VM-Based Architecture for Network Attack Detention Center”, 13th USENIX Security Symposium (Security’04), 2004.
Part I
General Approach
Promise of honeypots
  Providing insights into intruders' motivations, tactics, and tools
  Highly concentrated datasets w/ low noise
  Low false-positive and false-negative rates
Discovering unknown vulnerabilities/exploitations
  Example: CERT advisory CA-2002-01 (Solaris CDE subprocess control daemon, dtspcd)
Current Honeypot Operation
Individual honeypots
  Limited local view of attacks
Federation of distributed honeypots
  Deploying honeypots in different networks
  Exchanging logs and alerts
Problems
  Difficulties in distributed management
  Lack of honeypot expertise
  Inconsistency in security and management policies
    Example: log format, sharing policy, exchange frequency
Our Approach: Collapsar
Based on the HoneyFarm idea of Lance Spitzner
Achieving two (seemingly) conflicting goals
  Distributed honeypot presence
  Centralized honeypot operation
Key ideas
  Leveraging unused IP addresses in each network
  Diverting corresponding traffic to a “detention” center (transparently)
  Creating VM-based honeypots in the center
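The diversion idea can be illustrated with a minimal tunneling sketch. This is a hedged illustration, not Collapsar's redirector code (a real redirector would encapsulate live packets, e.g., GRE-style, at the network layer): the `CENTER_ADDR` constant and both helper functions are hypothetical names, and the "outer header" is reduced here to just the center's 4-byte address.

```python
import struct

CENTER_ADDR = "10.0.0.1"  # hypothetical address of the Collapsar center

def encapsulate(packet: bytes, center_ip: str) -> bytes:
    """Redirector side: wrap a packet captured for an unused local IP
    in a minimal outer header naming the detention center."""
    outer = struct.pack("!4B", *(int(o) for o in center_ip.split(".")))
    return outer + packet

def decapsulate(tunneled: bytes) -> tuple[str, bytes]:
    """Front-end side: strip the outer header and recover the original
    packet, which is then delivered to a VM-based honeypot."""
    dst = ".".join(str(b) for b in tunneled[:4])
    return dst, tunneled[4:]
```

The point of the round trip is transparency: the honeypot in the center sees the packet exactly as it arrived at the participating network.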
VM-based Honeypot
Collapsar Architecture
[Architecture diagram: redirectors in multiple production networks divert an attacker's traffic to the Collapsar center, which hosts the front-end, VM-based honeypots, a correlation engine, and a management station]
Comparison with Current Approaches
Overlay-based approach (e.g., NetBait, Domino overlay)
  Honeypots deployed in different sites
  Logs aggregated from distributed honeypots
  Data mining performed on aggregated log information
Key difference: where the attacks take place (on-site vs. off-site)
Comparison with Current Approaches
Sinkhole networking approach (e.g., iSink)
  “Dark” space to monitor Internet abnormality and commotion (e.g., MSBlaster worms)
  Limited interaction for better scalability
Key difference: contiguous large address blocks (vs. scattered addresses)
Comparison with Current Approaches
Low-interaction approach (e.g., honeyd, iSink)
  Highly scalable deployment
  Low security risks
Key difference: emulated services (vs. real things)
  Less effective at revealing unknown vulnerabilities
  Less effective at capturing 0-day worms
Collapsar Design
Functional components
  Redirector
  Collapsar Front-End
  Virtual honeypots
Assurance modules
  Logging module
  Tarpitting module
  Correlation module
Collapsar Deployment
Deployed in a local environment for a two-month period in 2003
Traffic redirected from five networks
  Three wired LANs
  One wireless LAN
  One DSL network
~50 honeypots analyzed so far
  Internet worms (MSBlaster, Enbiei, Nachi)
  Interactive intrusions (Apache, Samba)
  OS: Windows, Linux, Solaris, FreeBSD
Incident: Apache Honeypot/VMware
Vulnerabilities
  Vul 1: Apache (CERT® CA-2002-17)
  Vul 2: Ptrace (CERT® VU-6288429)
Timeline
  Deployed: 23:44:03, 11/24/03
  Compromised: 09:33:55, 11/25/03
Attack monitoring
  Detailed log: http://www.cs.purdue.edu/homes/jiangx/collapsar
Incident: Windows XP Honeypot/VMware
Vulnerability
  RPC DCOM Vul. (Microsoft Security Bulletin MS03-026)
Timeline
  Deployed: 22:10:00, 11/26/03
  MSBlaster: 00:36:47, 11/27/03
  Enbiei: 01:48:57, 11/27/03
  Nachi: 07:03:55, 11/27/03
Summary (Front-End)
A novel front-end for worm/malware capture
  Distributed presence and centralized operation of honeypots
  Good potential in attack correlation and log mining
Unique features
  Aggregation of scattered unused (dark) IP addresses
  Off-site (relative to participating networks) attack occurrences and monitoring
  Real services for unknown vulnerability revelation
Back-End: vGround
Enabling Worm/Malware Analysis
Part II
* X. Jiang, D. Xu, H. J. Wang, E. H. Spafford, “Virtual Playgrounds for Worm Behavior Investigation”, 8th International Symposium on Recent Advances in Intrusion Detection (RAID’05), 2005.
Basic Approach
A dedicated testbed
  Internet-in-a-box (IBM), Blended Threat Lab (Symantec), DETER
Goal: understanding worm behavior
  Static analysis / execution trace
  Reverse engineering (IDA Pro, GDB, …)
  Worm experiment within a limited scale
Result: only enabling relatively static analysis within a small scale
The Reality – Worm Threats
Speed, virulence, & sophistication of worms
  Flash/Warhol worms
  Polymorphic/metamorphic appearances
  Zombie networks (DDoS attacks, spam)
What we also need
  A high-fidelity, large-scale, live but safe worm playground
(Picture by Peter Szor, Symantec Corp.)
A Worm Playground
Requirements
Cost & scalability
  How about a topology with 2000+ nodes?
Confinement
  In-house private use?
Management & user convenience
  Diverse environment requirements
  Recovery from damage after a worm experiment: re-installation, re-configuration, and reboot …
Our Approach
vGround: a virtualization-based approach
Virtual entities
  Leveraging current virtual machine techniques
  Designing new virtual networking techniques
User configurability
  Customizing every node (end-hosts/routers)
  Enabling flexible experimental topologies
An Example Run: Internet Worms
[Diagram: a worm playground (virtual) mapped onto a shared infrastructure such as PlanetLab (physical)]
Full-System Virtualization
Emerging and new VM techniques
  VMware, Xen, Denali, UML
  Support for real-world services: DNS, Sendmail, Apache w/ “native” vulnerabilities
Adopted technique: UML
  Deployability
  Convenience/resource efficiency
User-Mode Linux (http://user-mode-linux.sf.net)
System-call virtualization, user-level implementation
[Diagram: UML user processes run atop a guest OS kernel, which itself runs as a ptrace-monitored process on the host OS kernel, above the device drivers, MMU, and hardware]
New network virtualization: link-layer virtualization, user-level implementation
[Diagram: virtual nodes 1 and 2 attach to a user-level virtual switch on the host OS; switches are interconnected via IP-IP tunneling]
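The link-layer forwarding idea can be sketched as a user-level learning switch. This is an illustrative sketch, not vGround's implementation: ports are plain callbacks standing in for the UNIX/UDP sockets of the real virtual switches, and the `VirtualSwitch` class and its method names are hypothetical.

```python
class VirtualSwitch:
    """Minimal user-level link-layer switch: learns which port each
    source MAC lives on, then unicasts known destinations and floods
    unknown ones, as a real learning switch would."""

    def __init__(self):
        self.ports = {}      # port_id -> deliver(src, dst, payload) callback
        self.mac_table = {}  # learned: source MAC -> port_id

    def attach(self, port_id, deliver):
        self.ports[port_id] = deliver

    def frame_in(self, port_id, src_mac, dst_mac, payload):
        self.mac_table[src_mac] = port_id        # learn source location
        out = self.mac_table.get(dst_mac)
        if out is not None and out != port_id:   # known destination: unicast
            self.ports[out](src_mac, dst_mac, payload)
        elif out is None:                        # unknown destination: flood
            for pid, deliver in self.ports.items():
                if pid != port_id:
                    deliver(src_mac, dst_mac, payload)
```

Running entirely in user space is what lets a shared infrastructure host such switches without privileged access, at the cost of extra copies per frame.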
User Configurability
Node customization
  System template
    End node (BIND, Apache, Sendmail, …)
    Router (RIP, OSPF, BGP, …)
    Firewall (iptables)
    Sniffer/IDS (Bro, Snort)
Topology customization
  Language
  Network, node toolkits
Project Planetlab-Worm

template slapper {
  image slapper.ext2
  cow enabled
  startup { /etc/rc.d/init.d/httpd start }
}
template router {
  image router.ext2
  routing ospf
  startup { /etc/rc.d/init.d/ospfd start }
}

router R1 {
  superclass router
  network eth0 { switch AS1_lan1 address 128.10.1.250/24 }
  network eth1 { switch AS1_AS2 address 128.8.1.1/24 }
}

switch AS1_lan1 { unix_sock sock/as1_lan1 host planetlab6.millennium.berkeley.edu }
switch AS1_AS2 { udp_sock 1500 host planetlab6.millennium.berkeley.edu }

node AS1_H1 {
  superclass slapper
  network eth0 { switch AS1_lan1 address 128.10.1.1/24 gateway 128.10.1.250 }
}
node AS1_H2 {
  superclass slapper
  network eth0 { switch AS1_lan1 address 128.10.1.2/24 gateway 128.10.1.250 }
}

switch AS2_lan1 { unix_sock sock/as2_lan1 host planetlab1.cs.purdue.edu }
switch AS2_AS3 { udp_sock 1500 host planetlab1.cs.purdue.edu }

node AS2_H1 {
  superclass slapper
  network eth0 { switch AS2_lan1 address 128.11.1.5/24 gateway 128.11.1.250 }
}
node AS2_H2 {
  superclass slapper
  network eth0 { switch AS2_lan1 address 128.11.1.6/24 gateway 128.11.1.250 }
}

switch AS3_lan1 { unix_sock sock/as3_lan1 host planetlab8.lcs.mit.edu }

router R2 {
  superclass router
  network eth0 { switch AS2_lan1 address 128.11.1.250/24 }
  network eth1 { switch AS1_AS2 address 128.8.1.2/24 }
  network eth2 { switch AS2_AS3 address 128.9.1.2/24 }
}

node AS3_H1 {
  superclass slapper
  network eth0 { switch AS3_lan1 address 128.12.1.5/24 gateway 128.12.1.250 }
}
node AS3_H2 {
  superclass slapper
  network eth0 { switch AS3_lan1 address 128.12.1.6/24 gateway 128.12.1.250 }
}

router R3 {
  superclass router
  network eth0 { switch AS3_lan1 address 128.12.1.250/24 }
  network eth1 { switch AS2_AS3 address 128.9.1.1/24 }
}
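As an illustration only (this is not the actual vGround toolkit), the brace-delimited configuration syntax can be handled by a small recursive-descent parser. The sketch assumes the grammar visible above: blocks of the form `keyword [name] { entries }`, where entries are either whitespace-separated key/value pairs or nested blocks.

```python
import re

def tokenize(text):
    # Braces are their own tokens; everything else splits on whitespace.
    return re.findall(r"\{|\}|[^\s{}]+", text)

def parse_block(tokens, i):
    """Parse entries until the matching '}'. Each entry is a triple
    (key, name_or_value, body): body is None for key/value pairs,
    name is None for unnamed blocks like 'startup { ... }'."""
    entries = []
    while tokens[i] != "}":
        key = tokens[i]; i += 1
        if tokens[i] == "{":                               # key { ... }
            body, i = parse_block(tokens, i + 1)
            entries.append((key, None, body))
        elif i + 1 < len(tokens) and tokens[i + 1] == "{":  # key name { ... }
            name = tokens[i]
            body, i = parse_block(tokens, i + 2)
            entries.append((key, name, body))
        else:                                              # key value
            entries.append((key, tokens[i], None)); i += 1
    return entries, i + 1

def parse(text):
    tokens = tokenize(text) + ["}"]  # sentinel closes the top level
    entries, _ = parse_block(tokens, 0)
    return entries
```

A toolkit built on such a parser can then expand `superclass` references against the templates and emit per-host launch commands.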
[Topology diagram: router R1 connects AS1 hosts AS1_H1 and AS1_H2, R2 connects AS2 hosts AS2_H1 and AS2_H2, R3 connects AS3 hosts AS3_H1 and AS3_H2; legend: networked node, network, system template]
Features
Scalability
  3000 virtual hosts in 10 physical nodes
Iterative experiment convenience
  Virtual node generation time: 60 seconds
  Boot-strap time: 90 seconds
  Tear-down time: 10 seconds
Strict confinement
High fidelity
Evaluation
Current focus
  Worm behavior reproduction
Experiments
  Probing, exploitation, payloads, and propagation
Further potentials (on-going work)
  Routing worms / stealthy worms
  Infrastructure security (BGP)
Experiment Setup
Two real-world worms
  Lion, Slapper, and their variants
A vGround topology
  10 virtual networks
  1500 virtual nodes
  10 physical machines in an ITaP cluster
Evaluation
Target Host Distribution Detailed Exploitation Steps Malicious Payloads Propagation Pattern
Probing: Target Network Selection

[Two bar charts: number of probes (total 10^5) vs. the first octet of the target IP address, one for Lion worms and one for Slapper worms; cf. the IANA registry at http://www.iana.org/assignments/ipv4-address-space]
Exploitation (Lion)

[Screenshot: 1: Probing, 2: Exploitation!, 3: Propagation!]

Exploitation (Slapper)

[Screenshot: 1: Probing, 2: Exploitation!, 3: Propagation!]
Propagation Pattern and Strategy
Address-sweeping
  Randomly choose a class B address (a.b.0.0)
  Sequentially scan hosts a.b.0.0 - a.b.255.255
Island-hopping
  Local subnet preference
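The two strategies can be sketched as target-address generators. This is an illustrative sketch of the propagation patterns described above, not actual worm code: the function names and the `p_local` parameter are hypothetical (real island-hopping worms use their own hard-coded probabilities), and the first-octet range is restricted for simplicity.

```python
import random

def address_sweep(rng=random):
    """Address-sweeping: pick a random class B network a.b.0.0, then
    sequentially scan every host a.b.0.0 - a.b.255.255."""
    a, b = rng.randrange(1, 224), rng.randrange(0, 256)  # skip multicast+
    for c in range(256):
        for d in range(256):
            yield f"{a}.{b}.{c}.{d}"

def island_hop(local_prefix, rng=random, p_local=0.75):
    """Island-hopping: with probability p_local probe the local /16
    'island'; otherwise hop to a random remote class B network."""
    while True:
        if rng.random() < p_local:
            a, b = local_prefix
        else:
            a, b = rng.randrange(1, 224), rng.randrange(0, 256)
        yield f"{a}.{b}.{rng.randrange(256)}.{rng.randrange(256)}"
```

In a vGround run, feeding these generators to a probe loop reproduces exactly the patterns visible in the sweep and island-hopping animations: sweeping fills one /16 contiguously, while hopping clusters infections around already-infected subnets.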
Propagation Pattern and Strategy
Address-sweeping (Slapper worm)

[Animation frames at 2%, 5%, and 10% infected hosts, sweeping 192.168.a.b]
Propagation Pattern and Strategy
Island-hopping

[Animation frames at 2%, 5%, and 10% infected hosts]
Summary (Back-End)
vGround, the back-end: a virtualization-based worm playground
Properties:
  High fidelity
  Strict confinement
  Good scalability: 3000 virtual hosts in 10 physical nodes
  High resource efficiency
  Flexible and efficient worm experiment control
Conclusions
An integrated virtualization-based platform for worm and malware investigation
  Front-end: Collapsar
  Back-end: vGround
Great potential for automatic
  Characterization of unknown service vulnerabilities
  Generation of 0-day worm signatures
  Tracking of worm contaminations
On-going Work
More real-world evaluation
  Stealthy worms
  Polymorphic worms
Additional capabilities
  Collapsar center federation
  On-demand honeypot customization
  Worm/malware contamination tracking
  Automated signature generation
Thank you. Stop by our poster and demo this afternoon!
For more information:
Email: [email protected]
Web: http://www.cs.purdue.edu/~dxu
Google: “Purdue Collapsar Friends”