Massive Data Storage
Storage on the "Cloud" and the Google File System
paper by: Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung
presentation by: Joshua Michalczak
COP 4810 - Topics in Computer Science
Dr. Marinescu, Spring 2011, UCF
Outline
- A history of data storage
- Defining "massive" data storage
- Defining the properties of a good storage system
- Google's storage - the Google File System (GFS)
- Cloud storage and you: present and future
A history of storage - 1970s
- Internal 4K ROM, expandable to 12K
- Used cassette tapes for external I/O
- Capacity based on read/write speed; roughly 200K of storage
A history of storage - 1980s
Floppy diskettes were first introduced in 1971
- More expensive than cassettes
- Equivalent storage capacity
- Not many computers yet offered diskette drives
Popularity rose in the early 1980s
- Many competing manufacturers (cheaper)
- Larger capacities (> 1 MB)
- Most machines offered diskette peripherals (Commodore 64), or used them exclusively (Apple II, Macintosh)
A history of storage - 1990s to present
Hard drives were first introduced in 1957
- Reserved for "macrocomputers"
- Very expensive: $3,200 / month to lease
- Cost and drive size limited adoption in households
- 1973 - IBM - first "sealed" hard drive
- 1980 - Seagate - first hard drive for microcomputers; 5 MB for $450
- 1980 - IBM - first 1 GB hard drive; the size of a refrigerator, for $40,000
Drops in size (3-1/2 inch, 1988) and cost, and the introduction of interface standards (SCSI, 1986; IDE, 1986), led to wider household adoption.
Defining "massive" data storage
Things to consider:
- File type (text documents, pictures, movies, programs, etc.)
- Cloud capacity vs. local capacity
- Transfer rate (internet speed)
A typical individual is unlikely to generate a "massive" amount of data
- At $100 / 1 TB, why not just buy another drive?
Consider instead services with a large clientele:
- Internet index & search: Google, Bing, Yahoo, etc.
- Data storage & sharing: Flickr, YouTube, Google Docs, Facebook, Dropbox, etc.
Google's storage needs:
- 2006: reported that the crawler alone uses 800 TB
- Most (~97%) of files are < 1 GB
What makes a good storage system?
Let's ask Jim Gray
- Received the Turing Award in 1998 "for seminal contributions to database and transaction processing research and technical leadership in system implementation"
- Defined "ACID" (atomicity, consistency, isolation, and durability): properties that guarantee the reliability of database transactions
- Although originally designed for databases, these terms can apply to all forms of data storage, including our "on-the-cloud" model
Atomicity - "all or nothing"
- If any portion of a transaction fails, the entire transaction fails
- Failed transactions should leave the data unchanged; only complete transactions change the system's state
- The reason for failure should not matter (hardware failure, system failure, connection loss, etc.)
- Atomic transactions can't be subdivided
Why is it important?
- Prevents data errors from transaction failure ("roll-back")
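The "all or nothing" rule can be sketched with a copy-and-swap transaction: work on a private copy of the state and commit only if every step succeeds. The account names and bank-transfer scenario here are hypothetical illustrations, not anything from the GFS paper.

```python
# A minimal sketch of atomicity via copy-and-swap: the transaction
# mutates a private copy and only "commits" if every step succeeds.
# Account names and amounts are hypothetical.

def atomic_transfer(accounts, src, dst, amount):
    """Apply a transfer all-or-nothing; on any failure, state is unchanged."""
    snapshot = dict(accounts)              # work on a private copy
    try:
        if snapshot[src] < amount:
            raise ValueError("insufficient funds")
        snapshot[src] -= amount
        snapshot[dst] += amount            # a KeyError here still leaves accounts intact
    except (KeyError, ValueError):
        return False                       # "roll back": the original dict was never touched
    accounts.clear()
    accounts.update(snapshot)              # commit: swap in the new state
    return True

balances = {"alice": 100, "bob": 50}
atomic_transfer(balances, "alice", "bob", 30)    # succeeds
atomic_transfer(balances, "alice", "bob", 500)   # fails; no partial change
```

Note that the reason for failure (bad balance, unknown account) doesn't matter: either the whole transfer lands or none of it does.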
Consistency - "database integrity"
- The database remains "truthful" to its intent
- Transactions move the database from one consistent state to another
- Only "valid" data is written; "invalid" data is handled as per the implementation requirements
- Validity is determined by a set of data constraints
Why is it important?
- When we access data, the data present matches our expectations
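The idea that "validity is determined by a set of data constraints" can be sketched as a write path that rejects any record violating the constraints. The specific constraints below (non-negative age, well-formed email) are hypothetical examples.

```python
# A hedged sketch of consistency: writes commit only if the new record
# satisfies the data constraints, so the store moves from one valid
# state to another. The constraints are illustrative.

db = []

def valid_record(rec):
    """Validity is determined by a set of data constraints."""
    return rec.get("age", -1) >= 0 and "@" in rec.get("email", "")

def insert(rec):
    if not valid_record(rec):
        return False        # "invalid" data is rejected; db stays consistent
    db.append(rec)
    return True

insert({"age": 30, "email": "a@b.com"})       # accepted
insert({"age": -1, "email": "not-an-email"})  # rejected; db unchanged
```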
Isolation - "incomplete modifications are invisible"
- Other operations cannot access data being modified by a transaction which has not yet completed
- Concurrent transactions should be unaware of each other (beyond possibly having to wait for access)
Does this mean we can't have concurrency?
- No. It just means that if we do have concurrency, we take extra precautions to prevent data contamination
- It may make implementing concurrency "harder" (the naive solution will likely not work)
Why bother?
- Prevents the database from entering inconsistent states because of transaction interleaving
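The "extra precautions" above can be as simple as a lock: each read-modify-write transaction holds it, so concurrent transactions never observe (or clobber) each other's partial updates. This is a minimal illustration, not how a real database implements isolation; the counter and thread counts are arbitrary.

```python
import threading

# A minimal sketch of isolation via locking: the naive, lock-free version
# of this code could interleave reads and writes and lose updates.

counter = {"value": 0}
lock = threading.Lock()

def increment_transaction(n):
    for _ in range(n):
        with lock:                        # other transactions wait for access
            v = counter["value"]          # read...
            counter["value"] = v + 1      # ...then write, never interleaved

threads = [threading.Thread(target=increment_transaction, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter["value"] is exactly 40_000 regardless of thread scheduling
```

Waiting for the lock is exactly the "beyond possibly having to wait for access" caveat: concurrency still happens, it is just serialized around the shared data.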
Durability - "ability to recover from failure"
- When a transaction reports back as complete and successful, any modifications it made should be impervious to system failure
- This means the system should never have to roll back completed transactions
- Partially completed transactions at the time of failure won't affect the system (atomicity)
- The state of the database should remain valid (consistency)
Why is it important?
- All systems eventually fail
- It is best to consider failure at design time rather than as an afterthought.
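One common way to get the "report success only when it will survive a crash" guarantee is to flush and fsync before acknowledging the commit. This is a hedged sketch of that pattern; the log path and record format are illustrative, not from the GFS paper.

```python
import os
import tempfile

# A hedged sketch of durability: force the write to stable storage
# before reporting the transaction as committed.

def durable_append(path, record):
    with open(path, "a") as f:
        f.write(record + "\n")
        f.flush()                  # push Python's buffer down to the OS
        os.fsync(f.fileno())       # force the OS buffers to stable storage
    return True                    # only now may we report "committed"

log_path = os.path.join(tempfile.mkdtemp(), "txn.log")
durable_append(log_path, "txn-1: alice -> bob : 30")
```

If the machine dies before `fsync` returns, the caller never saw success, so (by atomicity) the half-written record can be discarded on recovery; if the caller did see success, the record is on disk.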
"ACIDS" - Scalability - "capacity growth"
- The data capacity of the system should be expandable; adding additional storage should be possible, if not easy
- Data access / commits run at an acceptable rate, even during high system usage
- "Acceptable" will be application-specific
How dare you?! I know! I'm sorry :(
Why would you suggest such a thing?
- To comply with our "massive" requirements: large amounts of data being sent and received by many users simultaneously.
GFS - System Requirements
- Uses commodity hardware, which often fails
- Millions of files, typically > 100 MB, some > 1 GB
- Workload consists mostly of reads (large, sequential reads and small, random reads)
- Workload may also contain large, sequential writes that append data; files are seldom modified after being written
- High concurrency required; often 100+ clients appending simultaneously; appends must be atomic with minimal synchronization overhead
- High sustained bandwidth matters more than low latency; most applications process data in bulk in a non-time-sensitive manner.
GFS - Files
GFS - System Overview
GFS - Atomicity & Concurrency
GFS implements a special file operation: record append
- The client sends an append command, specifying only the data to be written (no offset, as is typical)
- The file system executes these commands atomically, which prevents fragmentation from concurrent interleaving
- The file offset is returned to the client once the data is committed (for the client's future reference)
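The record-append semantics above can be sketched as a tiny in-memory model: the client supplies only the data, the file system chooses the offset under a lock, and the offset comes back to the client. This illustrates the interface, not Google's actual implementation (GFS does this across replicated chunkservers, not one object).

```python
import threading

# A toy model of record append: the file, not the client, picks the
# offset, and appends are serialized so records never interleave.

class ChunkFile:
    def __init__(self):
        self._data = bytearray()
        self._lock = threading.Lock()

    def record_append(self, record):
        with self._lock:                  # atomic: concurrent records never fragment
            offset = len(self._data)
            self._data.extend(record)
        return offset                     # client learns where its record landed

f = ChunkFile()
f.record_append(b"log-entry-1;")   # returns the offset where this record begins
f.record_append(b"log-entry-2;")
```

Because the system assigns offsets, 100+ clients can append to the same file without coordinating among themselves, which is exactly the concurrency pattern the requirements slide describes.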
GFS - Consistency & Durability
GFS implements several fault-tolerance measures, including:
- Data redundancy; a minimum of 3 copies per chunk
- Machine redundancy; 3 master nodes which are hot-swappable should one fail
- Chunk checksums to detect data corruption; the master is notified and a new, clean copy is received from a replica
- Fast recovery
- Master reboot: the file hierarchy is persistent; chunk location metadata is rebuilt by probing the network
- Chunkserver reboot: chunkservers are "thin" (the master controls metadata); once probed by the master, they are available to the network
- Chunkservers which report frequent errors (data errors, network errors, etc.) are reported to humans for diagnosis
- Master servers are more heavily monitored (a key point of failure)
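The chunk-checksum idea can be sketched by storing a CRC alongside each chunk and verifying it on every read; a mismatch signals corruption, and the reader should fetch a clean copy from a replica. Using a single CRC-32 over a whole chunk is an illustrative simplification (GFS checksums smaller blocks within each chunk).

```python
import zlib

# A hedged sketch of chunk checksumming: verify on read, and treat a
# mismatch as silent corruption to be repaired from a replica.

def store_chunk(data):
    return {"data": data, "crc": zlib.crc32(data)}

def read_chunk(chunk):
    if zlib.crc32(chunk["data"]) != chunk["crc"]:
        raise IOError("chunk corrupted: request a clean copy from a replica")
    return chunk["data"]

chunk = store_chunk(b"some chunk contents")
read_chunk(chunk)                  # passes verification, returns the data
chunk["data"] = b"bit-flipped!!!"  # simulate silent on-disk corruption
# read_chunk(chunk) would now raise IOError
```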
GFS - Relaxing the Isolation Requirement
GFS improves concurrency performance by relaxing some of the restrictions of isolation
- Random overwrites don't interrupt reading; this creates situations where a read might see the data of an incomplete write transaction. Should that transaction fail, the data read is "bad".
Does this break our database?
- Short answer: no, if you take the problems into account
- Applications are made aware of the fact that they might be reading "bad" (incomplete) data; they can repeat the operation if needed
For Google, the point is moot
- Random overwrites are small and will fail early, if they fail at all (which is unlikely)
- Most applications only append data; random writes are rare
GFS - Scalability
At first glance, the master node appears to bottleneck the system
- All transactions must first be routed through the master for approval and chunk locations
However, clients perform the raw data transfers themselves
- The "heavy lifting" is distributed
New chunkservers are easy to add
- Install Linux; install the chunkserver application; attach to the network
- Once probed by the master, the chunkserver is available for use
Control signals travel separately from data signals
- Transferring data will not stall the control of the system
GFS - Other Topics in the Paper
- "Snapshot" - a fast file copy (using direct chunkserver-to-chunkserver communication)
- Discussion of the choice of a 64 MB chunk size
- Detailed description of metadata memory structures
- The master node's operation log
- "Leasing" and "mutation" of chunks (how chunks propagate to the replica chunkservers)
- Detailed discussion of the network architecture
- The locking implementation for concurrency
- When chunks should be replicated, where to put the replicas, and balancing resource use with replica placement
- Garbage collection of "dead" chunks
- Detection of "stale" (out-of-date) replica chunks
- Benchmarks (obviously, the system works and is fast)
Cloud Storage and YOU!
Chances are, you are already using several "cloud"-based systems, including storage
- Google Docs, YouTube, Flickr, Facebook, Dropbox, online email, accounting, photo editing, etc.
Cloud storage offers many benefits to end users
- The service provider has the burden of reliability, and they're probably doing a better job of it than you would... how many of you back up and replicate your data 3 times?
- If your hardware fails, who cares? You can just "download it again" when you fix the problem
- Your data is accessible* everywhere; work on it at home on your desktop, present it in class on your laptop
Future problems for cloud storage to consider
- Data ownership - if I make a document in Google Docs, hosted on the Google Docs servers, who owns it?
- Availability - if a service provider goes down, the internet goes down, or the connection is otherwise unreliable, how will I get my data?
- Security - service providers host a large, central database of information, which makes for a hot target (one penetration gives access to tons of information)
- Permissions - how do I guarantee who has access to my data when I don't directly control it?
- Diffusion - my data can be spread out over several providers; how will I know where to look for the data I want?
Bibliography
1. Radio Shack Catalogs, http://www.radioshackcatalogs.com/catalogs_extra/1977_rsc-01/
2. TRS-80, http://en.wikipedia.org/wiki/TRS-80
3. Floppy Disk History, http://en.wikipedia.org/wiki/Floppy_disk
4. Hard Drive History (1), http://www.duxcw.com/digest/guides/hd/hd2.htm
5. Hard Drive History (2), http://en.wikipedia.org/wiki/History_of_hard_disk_drives
6. Average Internet Speed, http://www.speedmatters.org/content/internet-speed-report
7. Google's Storage Needs, http://labs.google.com/papers/bigtable.html
8. ACID Qualities, http://en.wikipedia.org/wiki/ACID
9. The Google File System (GFS), http://labs.google.com/papers/gfs.html