A Pipeline for Lockless Processing of Sound Data
David Thall, Insomniac Games
or: How I Learned to Stop Worrying and Love Concurrent Programming
Our Goal
• Remove fixed pipeline optimizations from the sound engine
  – Stop packing runtime sound assets from dependency graphs in the builders
    • Dependency graphs only describe ‘what’ to load (an expensive proposition)
  – Loose-load sound assets using runtime statistics
    • Precache sounds that require low latency and might play soon
    • Load sounds on demand if they can withstand greater latency
    • Learn more from “Next-Gen Asset Streaming Using Runtime Statistics”
      – http://www.gdcvault.com/free/category/262/conference
Problems
• Loose-loading requires us to:
  – Load asynchronously to the main update
  – Keep file system I/O contention at a minimum
  – Defragment loaded data often enough to handle new requests
  – Relocate during playback (many sounds are indefinite length)
• Unfortunately, our middleware sound API doesn’t properly support asynchronous updates to sound data
  – We can perform relocation during playback
    • But the call blocks on a sync-point
    • …so they ask the client to perform the move/fixup in a background thread
  – And the API must lock every time sound bank data is updated
    • The implementation uses a doubly-linked list to manage loaded sound bank data
      – Therefore, the update will break if a load or unload request occurs at the same time as a relocation request (i.e., it must be synchronous)
Solutions
• Attempt #1: Polling API
  – Is it safe to move?
    • No… someone is loading or updating… skip it
    • Yes… move the data to a duplicate location
      » Tell the sound API about the new fixup locations
      » And wait for the blocking call to return…
  – But the state is still changing in the mixing thread
    • So… the sound API can still crash anyway!
  – DOESN’T WORK
Solutions
• Attempt #2: Sync-point Callback API
  – But now we need to lock on our end to make sure we don’t relocate while they are still processing a load or unload request
  – DOESN’T WORK
• Attempt #3: Synchronous Updates from a Background Thread
  – COULD WORK
Our Solution
• A solution that works
  – No blind ‘lock-and-hope’ semantics
• Is designed to be malleable
  – The sound API is an inherently sequential system
• And can run concurrent updates on data
  – Such as loads and unloads behind playback
Staged Pipeline Updates
• Each stage represents a job to be completed
• Each subsequent stage’s counter checks whether or not it has pending jobs
  – while (g_counters[LOAD_COMPLETED] < g_counters[LOAD_REQUESTED])
    • Do some work on the request… then…
    • Increment the LOAD_COMPLETED counter (must guarantee this happens last)
• Jobs can run concurrently in separate threads without locks
• And if we have a system that must run sequentially, we can manage that too
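The staged-counter pattern above can be sketched with C++ atomics. This is a minimal illustration under the assumption of one worker per stage; the names (`Stage`, `g_counters`, the two functions) are illustrative, not taken from the actual engine:

```cpp
#include <atomic>

enum Stage { LOAD_REQUESTED, LOAD_COMPLETED, STAGE_COUNT };

// One monotonically increasing counter per stage; zero-initialized.
std::atomic<unsigned> g_counters[STAGE_COUNT];

// Producer side: publish n new load requests.
void request_loads(unsigned n) {
    for (unsigned i = 0; i < n; ++i)
        g_counters[LOAD_REQUESTED].fetch_add(1, std::memory_order_release);
}

// Consumer side: drain pending jobs. LOAD_COMPLETED is incremented only
// after the work is done, so a later stage comparing the two counters
// never observes a half-finished job.
void process_loads() {
    while (g_counters[LOAD_COMPLETED].load(std::memory_order_acquire) <
           g_counters[LOAD_REQUESTED].load(std::memory_order_acquire)) {
        // ... perform the actual load work for this request ...
        g_counters[LOAD_COMPLETED].fetch_add(1, std::memory_order_release);
    }
}
```

The key ordering guarantee is that the completed counter is bumped with release semantics after the work, so the next stage's acquire load sees the finished data before it sees the new count.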
Sound Loading Algorithm
• Write load requests to a command queue (or set of low- and high-latency queues), to be processed later…
• If the staging buffer is empty, begin loading a request
• Once the file has been loaded into the staging buffer, signal that the load is complete
• Register the loaded sound file with the sound API; flag the request as ready for playback
• Copy the file from the staging buffer to the main buffer
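The loading steps above can be viewed as a per-request state machine, each tick advancing a request one stage so stages interleave with other work. A hedged sketch; all names (`LoadState`, `LoadRequest`, `tick`) are hypothetical:

```cpp
enum class LoadState {
    Queued,      // request written to the command queue
    Staging,     // file I/O into the staging buffer in flight
    Loaded,      // staging buffer holds the complete file
    Registered,  // sound API knows about the data; ready for playback
    Resident     // copied from the staging buffer to the main buffer
};

struct LoadRequest {
    const char* path;
    LoadState   state;
};

// Advance one request through the pipeline; each call performs (at most)
// one stage, mirroring the slide's one-job-per-stage design.
void tick(LoadRequest& req) {
    switch (req.state) {
    case LoadState::Queued:     /* begin async file I/O */     req.state = LoadState::Staging;    break;
    case LoadState::Staging:    /* poll I/O completion */      req.state = LoadState::Loaded;     break;
    case LoadState::Loaded:     /* register with sound API */  req.state = LoadState::Registered; break;
    case LoadState::Registered: /* copy staging -> main */     req.state = LoadState::Resident;   break;
    case LoadState::Resident:   /* nothing left to do */       break;
    }
}
```

Because each transition is a separate job, different requests can sit in different stages concurrently without any of them holding a lock.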
Sound Unloading Algorithm
• Write unload requests to a command queue, to be processed later…
• If an unload request’s file is already loaded, flag the file as ready for an unload
• Copy the file from the main buffer to the staging buffer; free the allocated memory… defrag the entire main buffer
• Begin unloading the sound file
• When the sound file is completely unloaded, flag the request as completed and the staging buffer as empty
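The "defrag the entire main buffer" step above amounts to sliding live blocks down over freed holes and recording each move so playback pointers can be fixed up. A sketch under assumed data structures; `Block`, `Fixup`, and `defrag` are illustrative, not the actual engine's layout:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <vector>

struct Block {
    std::size_t offset;  // current offset within the main buffer
    std::size_t size;
    bool        live;    // false once flagged for unload
};

struct Fixup { std::size_t from, to; };

// Compact live blocks toward offset 0. Returns the moves performed so the
// caller can notify the sound API of the new data locations.
std::vector<Fixup> defrag(unsigned char* buffer, std::vector<Block>& blocks) {
    std::vector<Fixup> fixups;
    std::size_t cursor = 0;
    for (Block& b : blocks) {
        if (!b.live) continue;  // freed block: a hole to squeeze out
        if (b.offset != cursor) {
            // memmove handles the overlapping ranges a downward slide creates
            std::memmove(buffer + cursor, buffer + b.offset, b.size);
            fixups.push_back({b.offset, cursor});
            b.offset = cursor;
        }
        cursor += b.size;
    }
    blocks.erase(std::remove_if(blocks.begin(), blocks.end(),
                                [](const Block& b) { return !b.live; }),
                 blocks.end());
    return fixups;
}
```

In the staged pipeline this would run as its own job, with the fixup list handed to the sound API only after the copy is complete.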
Lockless API Restrictions
• Message-based API
  – No state queries (immediate queries are meaningless)
  – No accessors
  – No handles to memory
  – Asynchronous
  – Unidirectional
  – Pass by value
  – Errors are deferred / propagated
  – No required client synchronization
• However, the client may request a message to its input queue for synching its own state data
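The restrictions above suggest commands as small plain values pushed through a single-producer/single-consumer ring: no pointers into client memory, no synchronous replies. A minimal sketch; `SoundMsg` and `MsgQueue` are hypothetical names, not the actual Insomniac API:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

struct SoundMsg {
    enum Op : uint8_t { Load, Unload, Play, Stop } op;
    uint32_t soundId;  // pass-by-value identifier, never a memory handle
};

// Lock-free SPSC ring: exactly one thread pushes, one pops.
// Head and tail are monotonically increasing; the ring is full when they
// differ by N.
template <std::size_t N>
class MsgQueue {
    std::array<SoundMsg, N> buf_;
    std::atomic<std::size_t> head_{0}, tail_{0};
public:
    bool push(const SoundMsg& m) {  // producer thread only
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == N) return false;
        buf_[t % N] = m;
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(SoundMsg& out) {       // consumer thread only
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;
        out = buf_[h % N];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
};
```

Because messages are copied by value and errors are deferred, the producer never needs to wait on the consumer, which is exactly the unidirectional, asynchronous contract the slide lists.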
Results
• Updates are modular, fast, and scalable
• The solution is general enough to be exported for use in other staged data-processing applications
Questions?
Thank you!