Let’s start with Core Audio
Francesco Sinopoli
Project Manager & Software Developer
http://it.linkedin.com/in/[email protected]
Let’s start with Core Audio
Outline
We will cover:
• Audio technologies on iOS
• Sound
• Audio Queue Services
• Audio Unit
Media on iOS
Audio Overview
Audio Overview
What are the alternatives to Core Audio? iOS offers the following set of tools, organized into frameworks by the features they provide:
• Media Player framework: iPod library search and playback
• AV Foundation: Objective-C wrapper for audio file playback and recording
• Core Audio: low-level audio streaming
Audio Overview
Use Media Player to play songs, audiobooks, and podcasts from the user's iPod library:

// Import the umbrella header file for the Media Player framework
#import <MediaPlayer/MediaPlayer.h>
Audio Overview
Use AV Foundation to play and record audio through a simple Objective-C interface:

#import <AVFoundation/AVFoundation.h>
Audio Overview
AVAsset is the core class in the AV Foundation framework:

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
What is Core Audio?
Core Audio is the engine behind every sound played on Mac OS X and iOS.
What is Core Audio?
Points of view:
• From an audio point of view, Core Audio is "high level" because it abstracts both the hardware and the audio format
• From a developer's point of view, it appears "low level" because you work with C-based APIs
Core Audio
When not to use Core Audio:
• To play video
• To access the user's iPod library
• To play an audio file (AVAudioPlayer)
• To record a sound (AVAudioRecorder)
"You should typically use the highest-level abstraction available that allows you to perform the tasks you want"
Core Audio
...so when?
• When we want to do something directly with the audio data, that is, when we need to process the sound in real time: mixing, metering (decibels, for example), applying effects to the audio, etc.
• When high performance and low latency are requirements
Core Audio
Core Audio is a set of frameworks for working with digital audio. This set of frameworks can be divided into two groups:
• Engines
• Helpers
Core Audio
We use the engine APIs to process the audio stream:
• Audio Units: an API for capturing and playing audio data, characterized by low latency; they let us produce effects and do mixing. Several audio units, each with its own task, can be used together, connected as in a graph. And more
• Audio Queues: also an API for recording and playing audio, built on top of audio units. They are used through a callback mechanism that supplies or receives audio buffers of the desired size. They have higher latency than audio units but support compressed formats
• OpenAL: an API for 3D audio, implemented on top of audio units. Used mainly for gaming
Core Audio
We use the helper APIs to work on the data or to carry it through the engines:
• Audio File: lets us read from and write to several audio file types while abstracting away their contents; also lets us retrieve metadata such as duration, iTunes info, etc.
• Audio File Stream: lets us read from a network stream and discover a file's encoding on the fly
• Audio Converter: lets us convert audio buffers between different encodings
• ExtAudioFile: combines the features of Audio File and Audio Converter; a unified interface for reading and writing compressed and linear PCM audio files
• Audio Session: plays a decisive role in our app's audio behavior (superseded by AVAudioSession)
Core Audio
Audio Session: take notes
• iOS manages audio behavior at the app, inter-app, and device level through the concept of an Audio Session
• Through the Audio Session we answer several questions about our app's audio behavior
• The audio environment on an iOS device is fairly complex
• An Audio Session encapsulates a set of behaviors; each set of behaviors is identified by a key called a category
• By configuring this key we establish how audio should be handled in our app
Core Audio Learning Curve
Sound
Digital Audio 101
About the sound
Sound: the main issues
• The continuous analog signal vs. the digital nature of the computer
• Sampling the signal: how the samples are represented and organized in digital form
• Problems when dealing with digital audio: buffers (and the resulting latency), formats (the various kinds of digital audio format), etc.
About the sound
Nyquist-Shannon
"to avoid altering the frequency content of a signal, the sampling rate must be greater than twice the highest frequency contained in the signal"
About the sound
The fundamental problem of digital fidelity is making the best possible approximation given the hardware limits.
Core Audio glossary
• Sample: represents the amplitude of the wave
• Sample rate: the number of samples captured for each second of audio. Measured in samples per second
• Bit depth: the number of bits of information in each sample; you can think of it as the sample's resolution. Measured in bits per sample
• Bit rate: the number of bits required to describe one second of audio. Measured in bits per second; it is the product of bit depth and sample rate
• Frame: a bundle combining one sample per channel. A frame represents all the audio channels at a given instant in time. For mono sound a frame holds one sample; for stereo, two samples
Audio Queue Services
Core Audio Engines
Audio Queue
• AudioToolbox framework
• Highest-level playback and recording API (pure C interface) in Core Audio
Audio Queues are the recommended technology for adding simple recording and playback features to our app.
Audio Queue
Audio Queue Services offer a simple model for capturing or playing audio: simple because it lets us ignore the complexity of the audio data and of the underlying hardware.
At the ends sits a transducer (microphone or speaker).
The continuous stream of data is represented by a queue (of buffers).
The buffers are filled with data coming from the hardware and are then passed to the callback function.
(Diagrams: a recording audio queue and a playback audio queue)
Example
• Capture and record the audio
• Capture the audio and apply an effect
Example
DEMO
Audio Queue - Recorder
Audio Queue is not simply an API for recording and playing a sound: it sits at a lower level.
• Configure the audio format we want to use: AudioStreamBasicDescription
• Create an audio queue: AudioQueueNewInput()
• Provide a callback function that will process the incoming audio
• Start the audio queue: AudioQueueStart()
• Stop the audio queue: AudioQueueStop()
Audio Queue - Recorder
When recording starts, the audio queue fills a buffer with the first captured data.
The audio queue sends these buffers to the callback function in the same order in which they were acquired.
After using a buffer, the callback function hands it back to the audio queue for reuse.
Example: Capture + Effect
CAStreamBasicDescription recordFormat;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mChannelsPerFrame = 2;
recordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerPacket = (recordFormat.mBitsPerChannel / 8) * recordFormat.mChannelsPerFrame;
recordFormat.mBytesPerFrame = (recordFormat.mBitsPerChannel / 8) * recordFormat.mChannelsPerFrame;
recordFormat.mFramesPerPacket = 1;
recordFormat.mSampleRate = 8000;
Format | Create | Initialize and start | Control
AudioQueueRef queue = {0};
AudioQueueNewInput(&_mRecordFormat,
                   audioQueueBufferHandler,
                   (__bridge void*)self, /* userData */
                   NULL,                 /* run loop */
                   NULL,                 /* run loop mode */
                   0,                    /* flags */
                   &queue);
Example: Capture + Effect
Format | Create | Initialize and start | Control
#define kNumbRecordBuffers 3
#define kBufferDurationSecond .5

// enough bytes for half a second
bufferByteSize = 32000;
for (i = 0; i < kNumbRecordBuffers; ++i) {
    AudioQueueAllocateBuffer(_audioQueue, bufferByteSize, &_mBuffers[i]);
    AudioQueueEnqueueBuffer(_audioQueue, _mBuffers[i], 0, NULL);
}

AudioQueueStart(_audioQueue, NULL);
Example: Capture + Effect
Format | Create | Initialize and start | Control
// AudioQueue callback function, called when an input buffer has been filled.
void audioQueueBufferHandler(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer,
                             const AudioTimeStamp *inStartTime,
                             UInt32 inNumPackets,
                             const AudioStreamPacketDescription *inPacketDesc)
{
    PMMAQCapture *capture = (__bridge PMMAQCapture *)inUserData;
    try {
        if (inNumPackets > 0) {
            // TO DO: something with the samples
            AudioQueueEnqueueBuffer(capture.audioQueue, inBuffer, 0, NULL);
        }
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}
Example: Capture + Effect
Format | Create | Initialize and start | Control
Set up the size of the buffers
How big should the buffers be? That depends on:
• Format, ASBD (bit rate, bit depth, ...)
• (Buffer) duration in seconds
• Audio Queue, kAudioConverterPropertyMaximumOutputPacketSize (compressed data)

8000 samples/second x 1 channel x 2 bytes/channel x 1 second = 16000 bytes

Example: Capture + Effect
Write to a file
@property (nonatomic, assign) AudioFileID recordFile;
@property (nonatomic, assign) SInt64 recordPacket;

...

NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent:(NSString*)CFSTR("capture.caf")];
CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
AudioFileCreateWithURL(url, kAudioFileCAFType, &recordFormat, kAudioFileFlags_EraseFile, &_recordFile);
...
AudioFileWritePackets(capture.recordFile, FALSE, inBuffer->mAudioDataByteSize, inPacketDesc, capture.recordPacket, &inNumPackets, inBuffer->mAudioData);
capture.recordPacket += inNumPackets;
Declaration
Set Up
Callback
Example: Capture + Effect
Property-driven nature of the Audio Queue
•kAudioQueueProperty_EnableLevelMetering
•kAudioQueueProperty_CurrentLevelMeter(DB)
Example: Capture + Effect
UInt32 val = 1;
XThrowIfError(AudioQueueSetProperty(_audioQueue, kAudioQueueProperty_EnableLevelMetering, &val, sizeof(UInt32)),
              "couldn't enable metering");
Example: Capture + Effect
UInt32 data_sz = sizeof(AudioQueueLevelMeterState) * 1; // [_channelNumbers count];
OSErr status = AudioQueueGetProperty(self.audioQueue, kAudioQueueProperty_CurrentLevelMeterDB, self.chan_lvls, &data_sz);

@property (nonatomic, assign) AudioQueueLevelMeterState *chan_lvls;
...
...
...
float value = (float)(self.chan_lvls[0].mAveragePower);
[self.delegate recorderEngine:self levelMeterChanged:value];
Level metering
Declaration
Set Up
Callback | Timer
Audio Queue
Playback
Audio Unit
Core Audio Engines
Audio Unit
Previously:
• iOS offers several technologies for handling digital audio: Media Player, AVFoundation, Core Audio. Each of these technologies should be used for specific needs
• Core Audio contains three engines for processing an audio data stream (Audio Unit, Audio Queue, and OpenAL). Audio Unit is the main engine; Audio Queue and OpenAL are built on top of it
Audio Unit
"One engine to rule them all, one engine to find them, one engine to bring them all and in the darkness bind them"
Audio Unit
All audio technologies in iOS are built on top of audio units.
Audio Unit
Welcome to the lowest-level functionality of Core Audio:
• An iOS developer cannot get any closer to the hardware
• You can work with raw audio data in ways that are not possible at higher levels of abstraction
• You can synthesize audio, apply effects to audio streams, capture sound from the microphone, combine all of these things with each other, and do even more (each audio unit offers specific functionality)
Audio Unit
When to use them:
• Excellent responsiveness
• Dynamic reconfiguration
Audio Unit
AUGraph
• Audio Unit works with a plug-in mechanism; each individual audio unit can be inserted into an audio processing chain (a graph)
• A helper API comes to our aid: AUGraph
Audio Unit
Audio unit life cycle:
• Instantiate
• Configure: here we configure the audio unit, as required by the type in use, to achieve the purpose of our app
• Initialize: here we prepare the audio unit to handle the audio that will flow through it
• Start
• Control: consider that at a high level we are used to handing a URL to a player or a recorder; at the audio unit level we work with callback functions that are called hundreds of times per second
• Clean up
Audio Unit
Creating an audio unit
An audio unit is created through three codes:
• Type
• Subtype
• Manufacturer
This triplet uniquely identifies an audio unit.
Audio Unit
Creating an audio unit
When using several audio units, the connections between them must be created before starting the audio processing.
Audio Unit
An audio unit is a software object that performs a certain kind of work on an audio stream:
• Generator units: create an audio stream from some source, such as a file, the network, or memory
• Music units: similar to generators, but produce an audio stream synthesized from MIDI data
• Mixer units (Multichannel Mixer, 3D Mixer): combine multiple streams into one or more streams
• Effect units (iPodEQ, Delay, ...): perform some kind of audio signal processing on a stream
• Converter units (Format Converter): perform transformations aimed not at the end user but at converting between different varieties of PCM (for example, changing the sample rate or the bit depth)
• Output units (Remote I/O, Voice Processing I/O, Generic Output): they also act as input. They are interfaces to the audio input and output hardware, letting us capture audio from the microphone or play it through the speakers
Audio Unit
Scope: a context inside an audio unit.
Element: also called a bus, a context inside an audio unit scope.
Audio Unit
How does the audio signal flow?
I/O unit: an abstraction over the audio hardware (from hardware / to hardware). Each element has an input scope and an output scope.
Input element = Element 1
Output element = Element 0
Audio Unit
How does the audio signal flow?
We receive audio from the output scope of the input element and send audio to the input scope of the output element.
Example
Capture the audio and apply an effect:
• We create an audio unit to capture audio from the microphone and show how to apply effects even without using a system effect unit provided by iOS
• How do we apply effects "manually"? We collect the samples on the output scope of bus 1 of an I/O unit and, after processing them, feed these samples to the input scope of bus 0 of the same I/O unit to be played
Example
DEMO
Example: Capture + Effect
• Direct connection on bus 0/input scope and bus 1/output scope
• AUGraph with a single node
• Render callback function on bus 0/input scope, inside which we pull from bus 1/output scope
AudioUnit rioUnit;
...
AudioComponentDescription audioComponentDesc;
audioComponentDesc.componentType = kAudioUnitType_Output;
audioComponentDesc.componentSubType = kAudioUnitSubType_RemoteIO;
audioComponentDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
audioComponentDesc.componentFlags = 0;
audioComponentDesc.componentFlagsMask = 0;
AudioComponent rioComponent = AudioComponentFindNext(NULL, &audioComponentDesc);
AudioComponentInstanceNew(rioComponent, &rioUnit);
Example: Capture + Effect
Create | Configure | Initialize and start | Control
UInt32 onFlag = 1;
AudioUnitElement bus1 = 1;
AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, bus1, &onFlag, sizeof(onFlag));

AudioUnitElement bus0 = 0;
AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, bus0, &onFlag, sizeof(onFlag));
Example: Capture + Effect
Create | Configure | Initialize and start | Control
AudioStreamBasicDescription customASBD;
memset(&customASBD, 0, sizeof(customASBD));
customASBD.mSampleRate = hardwareSampleRate;
customASBD.mFormatID = kAudioFormatLinearPCM;
customASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // kAudioFormatFlagsCanonical; signed integer samples
customASBD.mBytesPerPacket = 2;   // 4 for stereo
customASBD.mFramesPerPacket = 1;
customASBD.mBytesPerFrame = 2;    // 4 for stereo
customASBD.mChannelsPerFrame = 1; // mono; 2 for stereo!
customASBD.mBitsPerChannel = 16;
Example: Capture + Effect
Create | Configure | Initialize and start | Control
// Set ASBD for output (bus 0) on the RIO's input scope
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, bus0, &customASBD, sizeof(customASBD));
// Set ASBD for mic input (bus 1) on the RIO's output scope
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, bus1, &customASBD, sizeof(customASBD));
Example: Capture + Effect
Create | Configure | Initialize and start | Control
Example: Capture + Effect
Configure: set stream formats
// Set render proc to supply samples from the input unit
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = InputRenderCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Global, bus0, &callbackStruct, sizeof(callbackStruct));
Create | Configure | Initialize and start | Control
Example: Capture + Effect
// Initialize and start the RIO unit
AudioUnitInitialize(_effectState.rioUnit);
AudioOutputUnitStart(_effectState.rioUnit);
// Now the RemoteIO unit starts making callbacks to a
// function called InputRenderCallback
Example: Capture + Effect
Create | Configure | Initialize and start | Control
static OSStatus InputRenderCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    EffectClass *effectClass = (EffectClass*)inRefCon;
    // Just copy samples
    UInt32 bus1 = 1;
    AudioUnitRender(effectClass->rioUnit, ioActionFlags, inTimeStamp, bus1, inNumberFrames, ioData);
    roboticVoice(effectClass, ioData, inNumberFrames);
    return noErr;
}
Example: Capture + Effect
Create | Configure | Initialize and start | Control
Hey, you can find more DSP material on this site: http://musicdsp.org/
Done?
Example: Capture + Effect
Create | Configure | Initialize and start | Control
Audio App
Things to do for an app that plays sound:
• Configure your audio session
• Categorize your application
• Respond to interruptions
• Handle routing changes
Audio App
AVAudioSession *appSession = [AVAudioSession sharedInstance];
// we want to avoid sample-rate conversion (CPU intensive); 44,100 Hz
[appSession setPreferredSampleRate:44100.0 error:nil];
[appSession setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[appSession setActive:YES error:nil];
//set up AudioQueue, AVAudioPlayer or Audio Unit, etc.
Set Up the session
Audio App
Handle interruptions
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleInterruption:)
                                             name:AVAudioSessionInterruptionNotification
                                           object:[AVAudioSession sharedInstance]];

- (void)handleInterruption:(NSNotification*)notification
{
    NSDictionary *interruptionDict = notification.userInfo;
    // valueForKey: returns an NSNumber; unbox it rather than casting the pointer
    NSUInteger interruptionType = [[interruptionDict valueForKey:AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (interruptionType == AVAudioSessionInterruptionTypeBegan) {
        ...
    } else if (interruptionType == AVAudioSessionInterruptionTypeEnded) {
        ...
    }
}
Audio App
Handle route changes
// Posted on the main thread when the system's audio route changes.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(handleRouteChanging:)
                                             name:AVAudioSessionRouteChangeNotification
                                           object:[AVAudioSession sharedInstance]];

- (void)handleRouteChanging:(NSNotification*)notification
{
    UInt8 reasonValue = [[notification.userInfo valueForKey:AVAudioSessionRouteChangeReasonKey] intValue];
    switch (reasonValue) {
        case AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory: break;
        case AVAudioSessionRouteChangeReasonWakeFromSleep:              break;
        case AVAudioSessionRouteChangeReasonOverride:                   break;
        case AVAudioSessionRouteChangeReasonCategoryChange:             break;
        case AVAudioSessionRouteChangeReasonOldDeviceUnavailable:       break;
        case AVAudioSessionRouteChangeReasonNewDeviceAvailable:         break;
        case AVAudioSessionRouteChangeReasonUnknown:
        default: break;
    }
}
Resources
•Apple documentation
•Learning Core Audio (Adamson & Avila)
•Coreaudio-api mailing list
Thank you
The End