Commit 96de1d1b authored by Christophe Massiot's avatar Christophe Massiot

Aout3 developer documentation, cont'd.

parent f52ae778
@@ -14,153 +14,287 @@ The audio output's main purpose is to take sound samples from one or several decoders
(insert here a schematic of the data flow in aout3)
</para>
</sect1>
<sect1> <title> Terminology </title>
<itemizedlist>
<listitem> <para> <emphasis> Sample </emphasis> : A sample is an elementary piece of audio information, containing the values for all channels. For instance, a stream at 44100 Hz features 44100 samples per second, no matter how many channels are coded, nor the coding type of the coefficients. </para> </listitem>
<listitem> <para> <emphasis> Frame </emphasis> : A set of samples of arbitrary size. Codecs usually have a fixed frame size (for instance an A/52 frame contains 1536 samples). Frames do not have much importance in the audio output, since it can manage buffers of arbitrary sizes. However, for undecoded formats, the developer must indicate the number of bytes required to carry a frame of n samples, since it depends on the compression ratio of the stream. </para> </listitem>
<listitem> <para> <emphasis> Coefficient </emphasis> : A sample contains one coefficient per channel. For instance a stereo stream features 2 coefficients per sample. Many audio items (such as the float32 audio mixer) deal directly with the coefficients. Of course, an undecoded sample format doesn't have the notion of a "coefficient", since a sample cannot be materialized independently in the stream. </para> </listitem>
<listitem> <para> <emphasis> Resampling </emphasis> : Changing the number of samples per second of an audio stream. </para> </listitem>
<listitem> <para> <emphasis> Downmixing/upmixing </emphasis> : Changing the configuration of the channels (see below). </para> </listitem>
</itemizedlist>
</sect1>
<sect1> <title> Audio sample formats </title>
<para>
The whole audio output can be viewed as a pipeline transforming one audio format into another in successive steps. Consequently, it is essential to understand what an audio sample format is.
</para>
<para> The audio_sample_format_t structure is defined in include/audio_output.h. It contains the following members : </para>
<itemizedlist>
<listitem> <para> <emphasis> i_format </emphasis> : Defines the format of the coefficients, for instance AOUT_FMT_FLOAT32 or AOUT_FMT_S16_NE. Undecoded sample formats include AOUT_FMT_A52, AOUT_FMT_DTS and AOUT_FMT_SPDIF. An audio filter which converts one format into another is called, by definition, a "converter". Some converters play the role of a decoder (for instance a52tofloat32.c), but are in fact "audio filters". </para> </listitem>
<listitem> <para> <emphasis> i_rate </emphasis> : Defines the number of samples per second the audio output will have to deal with, in Hz. Common values are 22050, 24000, 44100 and 48000. </para> </listitem>
<listitem> <para> <emphasis> i_channels </emphasis> : Defines the channel configuration, for instance AOUT_CHAN_MONO, AOUT_CHAN_STEREO or AOUT_CHAN_3F1R. Beware : the numeric value doesn't represent the number of coefficients per sample ; use aout_FormatNbChannels() for that. The coefficients for each channel are always stored interleaved, because it is much easier for the mixer to deal with interleaved coefficients. Consequently, decoders which output planar data must implement an interleaving function. </para> </listitem>
</itemizedlist>
<note> <para>
For 16-bit integer format types, we make a distinction between big-endian and little-endian storage. Floats are of course also stored in either big-endian or little-endian order, but we do not make that distinction. The reason is that samples are hardly ever stored in float32 format in a file, or transferred from one machine to another ; so we assume float32 always uses the native endianness.
</para> <para>
Yet, samples are quite often stored as big-endian signed 16-bit integers, such as in the DVD's LPCM format. So the LPCM decoder allocates an AOUT_FMT_S16_BE input stream, and on little-endian machines an AOUT_FMT_S16_BE-&gt;AOUT_FMT_S16_NE converter is automatically invoked by the input pipeline.
</para> <para>
In most cases though, AOUT_FMT_S16_NE and AOUT_FMT_U16_NE should be used.
</para> </note>
<para>
The aout core provides macros to compare two audio sample formats. AOUT_FMTS_IDENTICAL() tests whether i_format, i_rate and i_channels are identical. AOUT_FMTS_SIMILAR() tests whether i_rate and i_channels are identical (useful to write a pure converter filter).
</para>
<para>
The audio_sample_format_t structure then contains two additional parameters, which you are not supposed to write directly, except if you're dealing with undecoded formats. For PCM formats they are automatically filled in by aout_FormatPrepare(), which is called by the core functions whenever necessary.
</para>
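The structure and the comparison macros can be sketched as follows. This is a simplified reconstruction from the description above, not the actual contents of include/audio_output.h, and the field types are assumptions :

```c
#include <assert.h>

/* Simplified sketch of audio_sample_format_t ; the real definition in
 * include/audio_output.h also contains i_frame_length and
 * i_bytes_per_frame (see below). */
typedef struct audio_sample_format_t
{
    int i_format;   /* e.g. AOUT_FMT_FLOAT32, AOUT_FMT_S16_NE, AOUT_FMT_A52 */
    int i_rate;     /* in Hz, e.g. 44100 */
    int i_channels; /* channel configuration, e.g. AOUT_CHAN_STEREO */
} audio_sample_format_t;

/* Plausible reconstructions of the comparison macros described above. */
#define AOUT_FMTS_SIMILAR( p1, p2 ) \
    ( (p1)->i_rate == (p2)->i_rate && (p1)->i_channels == (p2)->i_channels )
#define AOUT_FMTS_IDENTICAL( p1, p2 ) \
    ( (p1)->i_format == (p2)->i_format && AOUT_FMTS_SIMILAR( p1, p2 ) )
```

A pure converter filter would accept two formats for which AOUT_FMTS_SIMILAR() holds but AOUT_FMTS_IDENTICAL() does not.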
<itemizedlist>
<listitem> <para> <emphasis> i_frame_length </emphasis> : Defines the number of samples of the "natural" frame. For instance for A/52 it is 1536, since 1536 samples are compressed into one undecoded buffer. For PCM formats, the frame length is 1, because every sample in the buffer can be accessed independently. </para> </listitem>
<listitem> <para> <emphasis> i_bytes_per_frame </emphasis> : Defines the size (in bytes) of a frame. For A/52 it depends on the bitrate of the input stream (read from the sync info). For instance for stereo float32 samples, i_bytes_per_frame == 8 (with i_frame_length == 1). </para> </listitem>
</itemizedlist>
<para>
These last two fields (which are <emphasis> always </emphasis> meaningful as soon as aout_FormatPrepare() has been called) make it easy to calculate the size of an audio buffer : i_nb_samples * i_bytes_per_frame / i_frame_length.
</para>
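The buffer-size formula can be checked with a few lines of C. aout_BufferSize() is a hypothetical helper written for this illustration only, not part of the aout API :

```c
#include <assert.h>

/* Buffer size rule from the text : a buffer of i_nb_samples samples
 * occupies i_nb_samples * i_bytes_per_frame / i_frame_length bytes. */
static int aout_BufferSize( int i_nb_samples, int i_bytes_per_frame,
                            int i_frame_length )
{
    return i_nb_samples * i_bytes_per_frame / i_frame_length;
}
```

For stereo float32 (i_bytes_per_frame == 8, i_frame_length == 1), 1024 samples occupy 8192 bytes ; for the A/52 example used later in this chapter (i_bytes_per_frame == 24000, i_frame_length == 1536), one frame of 1536 samples occupies 24000 bytes.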
</sect1>
<sect1> <title> Typical runcourse </title>
<para>
The input spawns a new audio decoder, say for instance an A/52 decoder. The A/52 decoder parses the sync info for format information (e.g. it finds 48 kHz, 5.1, 192 kbit/s), and creates a new aout "input stream" with aout_InputNew(). The sample format is :
</para>
<itemizedlist>
<listitem> <para> i_format = AOUT_FMT_A52 </para> </listitem>
<listitem> <para> i_rate = 48000 </para> </listitem>
<listitem> <para> i_channels = AOUT_CHAN_3F2R | AOUT_CHAN_LFE </para> </listitem>
<listitem> <para> i_frame_length = 1536 </para> </listitem>
<listitem> <para> i_bytes_per_frame = 24000 </para> </listitem>
</itemizedlist>
<para>
This input format won't be modified, and will be stored in the aout_input_t structure corresponding to this input stream : p_aout-&gt;pp_inputs[0]-&gt;input. Since it is our first input stream, the aout core will try to configure the output device with this audio sample format (p_aout-&gt;output.output), to avoid unnecessary transformations.
</para>
<para>
The core will probe for an output module in the usual fashion, and its behavior will depend on what it finds. Either the output device has the S/PDIF capability, in which case the core will set p_aout-&gt;output.output.i_format to AOUT_FMT_SPDIF ; or it's a PCM-only device, and the core will ask for the native sample format, such as AOUT_FMT_FLOAT32 (for Darwin CoreAudio) or AOUT_FMT_S16_NE (for OSS). The output device may also have constraints on the number of channels or the rate. For instance, the p_aout-&gt;output.output structure may look like :
</para>
<itemizedlist>
<listitem> <para> i_format = AOUT_FMT_S16_NE </para> </listitem>
<listitem> <para> i_rate = 44100 </para> </listitem>
<listitem> <para> i_channels = AOUT_CHAN_STEREO </para> </listitem>
<listitem> <para> i_frame_length = 1 </para> </listitem>
<listitem> <para> i_bytes_per_frame = 4 </para> </listitem>
</itemizedlist>
<para>
Once we have an output format, we deduce the mixer format. It is strictly forbidden to change the audio sample format between the mixer and the output (because all transformations happen in the input pipeline), except for i_format. The reason is that we have only developed three mixers (float32 and S/PDIF, plus fixed32 for embedded devices which do not feature an FPU), so all other types must be cast into one of those. Still with our example, the p_aout-&gt;mixer.mixer structure looks like :
</para>
<itemizedlist>
<listitem> <para> i_format = AOUT_FMT_FLOAT32 </para> </listitem>
<listitem> <para> i_rate = 44100 </para> </listitem>
<listitem> <para> i_channels = AOUT_CHAN_STEREO </para> </listitem>
<listitem> <para> i_frame_length = 1 </para> </listitem>
<listitem> <para> i_bytes_per_frame = 8 </para> </listitem>
</itemizedlist>
<para>
The aout core will thus allocate an audio filter to convert AOUT_FMT_FLOAT32 to AOUT_FMT_S16_NE. This is the only audio filter in the output pipeline. It will also allocate a float32 mixer. Since only one input stream is present, the trivial mixer will be used (it only copies samples from the first input stream). Otherwise a more precise float32 mixer would have been used.
</para>
<para>
The last step of the initialization is to build an input pipeline. When several properties have to be changed, the aout core searches first for an audio filter capable of changing :
</para>
<orderedlist>
<listitem> <para> All parameters ; </para> </listitem>
<listitem> <para> i_format and i_channels ; </para> </listitem>
<listitem> <para> i_format ; </para> </listitem>
</orderedlist>
<para>
If the whole transformation cannot be done by only one audio filter, it will allocate a second and maybe a third filter to deal with the rest. To follow up on our example, we will allocate two filters : a52tofloat32 (which will deal with the conversion and the downmixing), and a resampler. Quite often, for undecoded formats, the converter will also deal with the downmixing, for efficiency reasons.
</para>
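The cascade strategy can be sketched as follows. This is a purely illustrative model, not the real module-probing code : FindOneFilter() and BuildPipeline() are hypothetical names, and the toy "filter inventory" only knows a converter (changes i_format and i_channels at a fixed rate) and a resampler (changes i_rate only) :

```c
#include <assert.h>
#include <stdbool.h>

typedef struct fmt_t { int i_format, i_rate, i_channels; } fmt_t;

/* Stand-in for the real module probing : can one filter do in -> out ? */
static bool FindOneFilter( const fmt_t * p_in, const fmt_t * p_out )
{
    if( p_in->i_rate == p_out->i_rate )
        return true; /* a converter can change i_format and i_channels */
    if( p_in->i_format == p_out->i_format
         && p_in->i_channels == p_out->i_channels )
        return true; /* a resampler can change i_rate */
    return false;
}

/* Returns the number of filters allocated (1 or 2), or 0 on failure. */
static int BuildPipeline( const fmt_t * p_in, const fmt_t * p_out )
{
    if( FindOneFilter( p_in, p_out ) )
        return 1;
    /* Split the transformation : convert format/channels first at the
     * input rate, then resample to the output rate. */
    fmt_t mid = *p_out;
    mid.i_rate = p_in->i_rate;
    if( FindOneFilter( p_in, &mid ) && FindOneFilter( &mid, p_out ) )
        return 2;
    return 0;
}
```

With the A/52 example above, converting 48 kHz 5.1 A/52 to 44.1 kHz stereo float32 needs two filters (converter plus resampler), whereas the same conversion at an unchanged rate needs only one.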
<para>
When this initialization is over, the "decoder" plug-in can run its main loop. Typically the decoder requests a buffer of length i_nb_samples, and copies the undecoded samples there (using GetChunk()). The buffer then goes along the input pipeline, which will do the decoding (to AOUT_FMT_FLOAT32), the downmixing and the resampling. Additional resampling will occur if complex latency issues in the output layer force us to go temporarily faster or slower to achieve perfect lipsync (this is decided on a per-buffer basis). At the end of the input pipeline, the buffer is placed in a FIFO, and the decoder thread runs the audio mixer.
</para>
<para>
The audio mixer then calculates whether it has enough samples to build a new output buffer. If it does, it mixes the input streams, and passes the buffer to the output layer. The buffer goes along the output pipeline (which in our case only contains a converter filter), and then it is put in the output FIFO for the device.
</para>
<para>
Regularly, the output device will fetch the next buffer from the output FIFO, either through a callback of the audio subsystem (Mac OS X's CoreAudio, SDL), or thanks to a dedicated audio output thread (OSS, ALSA...). This mechanism uses aout_OutputNextBuffer(), and gives the estimated playing date of the buffer. If the computed playing date isn't equal to the estimated playing date (within a small tolerance), the output layer changes the date of all buffers in the audio output module, triggering some resampling at the beginning of the input pipeline when the next buffer comes from the decoder. That way, we can resynchronize the audio and video streams. When the buffer is played, it is finally released.
</para>
</sect1>
<sect1> <title> Mutual exclusion mechanism </title>
<para>
The access to the internal structures must be carefully protected, because contrary to other objects in the VLC framework (input, video output, decoders...), the audio output doesn't have an associated thread. It means that parts of the audio output run in different threads (decoders, audio output IO thread, interface), and we do not control when the functions are called. Thus, much care must be taken to avoid concurrent access on the same part of the audio output, without creating a bottleneck which would cause latency problems at the output layer.
</para>
<para>
Consequently, we have set up a locking mechanism in four parts :
</para>
<orderedlist>
<listitem> <para> <emphasis> p_input-&gt;lock </emphasis> : This lock is taken when a decoder calls aout_BufferPlay(), as long as the buffer is in the input pipeline. The interface thread cannot change the input pipeline without holding this lock. </para> </listitem>
<listitem> <para> <emphasis> p_aout-&gt;mixer_lock </emphasis> : This lock is taken when the audio mixer is entered. The decoder thread in which the mixer runs must hold the mutex during the mixing, until the buffer comes out of the output pipeline. Without holding this mutex, the interface thread cannot change the output pipeline, and a decoder cannot add a new input stream. </para> </listitem>
<listitem> <para> <emphasis> p_aout-&gt;output_fifo_lock </emphasis> : This lock must be taken to add or remove a packet from the output FIFO, or change its dates. </para> </listitem>
<listitem> <para> <emphasis> p_aout-&gt;input_fifos_lock </emphasis> : This lock must be taken to add or remove a packet from one of the input FIFOs, or change its dates. </para> </listitem>
</orderedlist>
<para>
Having so many mutexes makes it easy to fall into deadlocks (e.g. when one thread holds the mixer lock and wants the input fifos lock, while another thread holds the input fifos lock and wants the mixer lock). We could have worked with fewer locks (or even one global lock), but then, for instance, a running mixer would block the audio output IO thread from picking up the next buffer. So for efficiency reasons we want to keep that many locks.
</para>
<para>
So we have set up a strong discipline in taking the locks. If you need several of the locks, you <emphasis> must </emphasis> take them in the order indicated above. For instance if you already hold the input fifos lock, it is <emphasis> strictly forbidden </emphasis> to try and take the mixer lock. You must first release the input fifos lock, then take the mixer lock, and finally take the input fifos lock again.
</para>
<para>
It might seem a big constraint, but the order has been chosen so that in most cases, it is the most natural order to take the locks.
</para>
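Using POSIX mutexes, the discipline can be illustrated as follows. The lock names mirror the list above, but the code is a purely illustrative sketch (AddUnderLocks() is a hypothetical function, not part of the aout API) :

```c
#include <assert.h>
#include <pthread.h>

/* Two of the locks from the ordering above ; mixer_lock comes before
 * input_fifos_lock, so it must always be taken first. */
static pthread_mutex_t mixer_lock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t input_fifos_lock = PTHREAD_MUTEX_INITIALIZER;

static int i_shared = 0;

static void AddUnderLocks( int i_val )
{
    /* Correct order : mixer_lock, then input_fifos_lock.  Taking them
     * the other way around in another thread could deadlock. */
    pthread_mutex_lock( &mixer_lock );
    pthread_mutex_lock( &input_fifos_lock );
    i_shared += i_val;
    pthread_mutex_unlock( &input_fifos_lock );
    pthread_mutex_unlock( &mixer_lock );
}
```

If a function already holds input_fifos_lock and discovers it needs mixer_lock, it must unlock input_fifos_lock, lock mixer_lock, and then relock input_fifos_lock, exactly as the text describes.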
</sect1>
<sect1> <title> Internal structures </title>
<sect2> <title> Buffers </title>
<para>
The aout_buffer_t structure is only allocated by the aout core functions, and goes from the decoder to the output device. A new aout buffer is allocated in these circumstances :
</para>
<itemizedlist>
<listitem> <para> Whenever the decoder calls aout_BufferNew(). </para> </listitem>
<listitem> <para> In the input and output pipelines, when an audio filter requests a new output buffer (i.e. when b_in_place == 0, see below). </para> </listitem>
<listitem> <para> In the audio mixer, when a new output buffer is being prepared. </para> </listitem>
</itemizedlist>
<note> <para>
Most audio filters are able to place the output result in the same buffer as the input data, so most buffers can be reused that way, and we avoid massive allocations. However, some filters require the allocation of an output buffer.
</para> <para>
The core functions are smart enough to determine whether the buffer is ephemeral (for instance if it will only be used between two audio filters, and disposed of immediately thereafter), or whether it will need to be shared among several threads (as soon as it needs to stay in an input or output FIFO).
</para> <para>
In the first case, the aout_buffer_t structure and its associated buffer will be allocated on the thread's stack (via the alloca() function), whereas in the latter case they are allocated on the process's heap (via malloc()). You, as a codec or filter developer, don't have to deal with the allocation or deallocation of the buffers.
</para> </note>
<para>
The fields you'll probably need to use are : p_buffer (pointer to the raw data), i_nb_bytes (size of the significant portion of the data), i_nb_samples, start_date and end_date.
</para>
</sect2>
<sect2> <title> Date management </title>
<para>
At first sight, you might think that to calculate the starting date of a buffer, it is enough to regularly fetch the PTS i_pts from the input, and then do : i_pts += i_nb_past_samples * 1000000 / i_rate. Well, I'm sorry to disappoint you, but you'll end up with rounding problems, resulting in an audible crack every few seconds.
</para>
<para>
Indeed, if you have 1536 samples per buffer (as is often the case for A/52) at 44.1 kHz, it gives : 1536 * 1000000 / 44100 = 34829.9319727891. The fractional part of this figure will drive you mad (note that with 48 kHz samples the result is an integer, so it will work well in many cases).
</para>
<para>
One solution could have been to work in nanoseconds instead of microseconds, but that would only make the problem 1000 times less frequent. The only exact solution is to add 34829 for every buffer, and keep the remainder of the division somewhere. For every buffer you accumulate the remainders, and when the accumulated remainder grows beyond 44100, you add 34830 instead of 34829. That way you avoid the rounding error which would otherwise build up in the long run (this is the Bresenham algorithm).
</para>
<para>
The good news is, the audio output core provides a structure (audio_date_t) and functions to deal with it :
</para>
<itemizedlist>
<listitem> <para> <emphasis> aout_DateInit( audio_date_t * p_date, u32 i_divider ) </emphasis> : Initialize the Bresenham algorithm with the divider i_divider. Usually, i_divider will be the rate of the stream. </para> </listitem>
<listitem> <para> <emphasis> aout_DateSet( audio_date_t * p_date, mtime_t new_date ) </emphasis> : Initialize the date, and set the remainder to 0. You will usually need this whenever you get a new PTS from the input. </para> </listitem>
<listitem> <para> <emphasis> aout_DateMove( audio_date_t * p_date, mtime_t difference ) </emphasis> : Add or subtract microseconds from the stored date (used by the aout core when the output layer reports a lipsync problem). </para> </listitem>
<listitem> <para> <emphasis> aout_DateGet( audio_date_t * p_date ) </emphasis> : Return the current stored date. </para> </listitem>
<listitem> <para> <emphasis> aout_DateIncrement( audio_date_t * p_date, u32 i_nb_samples ) </emphasis> : Add i_nb_samples * 1000000 / i_divider to the stored date, taking into account rounding errors, and return the result. </para> </listitem>
</itemizedlist>
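A simplified reimplementation of the Bresenham logic behind aout_DateInit(), aout_DateSet() and aout_DateIncrement() could look like this. The real code lives in the aout core and may differ in details ; the field names of audio_date_t below are assumptions :

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t mtime_t; /* microseconds, as elsewhere in VLC */

/* Sketch of audio_date_t : current date, stream rate, and the running
 * remainder of the division, as described in the text above. */
typedef struct audio_date_t
{
    mtime_t  date;
    uint32_t i_divider;
    uint32_t i_remainder;
} audio_date_t;

static void aout_DateInit( audio_date_t * p_date, uint32_t i_divider )
{
    p_date->date = 0;
    p_date->i_divider = i_divider;
    p_date->i_remainder = 0;
}

static void aout_DateSet( audio_date_t * p_date, mtime_t new_date )
{
    p_date->date = new_date;
    p_date->i_remainder = 0;
}

static mtime_t aout_DateIncrement( audio_date_t * p_date,
                                   uint32_t i_nb_samples )
{
    mtime_t i_dividend = (mtime_t)i_nb_samples * 1000000;
    p_date->date += i_dividend / p_date->i_divider;
    p_date->i_remainder += (uint32_t)( i_dividend % p_date->i_divider );
    if( p_date->i_remainder >= p_date->i_divider )
    {
        /* Bresenham correction : consume the accumulated error by
         * adding the extra microsecond(s). */
        p_date->date += p_date->i_remainder / p_date->i_divider;
        p_date->i_remainder %= p_date->i_divider;
    }
    return p_date->date;
}
```

With 1536-sample buffers at 44100 Hz, each increment adds 34829 µs and accumulates a remainder of 41100 ; after 44100 buffers the date lands exactly on 1536000000 µs, with no drift.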
</sect2>
<sect2> <title> FIFOs </title>
<para>
FIFOs are used at two places in the audio output : at the end of the input pipeline, before entering the audio mixer, to store the buffers which haven't been mixed yet ; and at the end of the output pipeline, to queue the buffers for the output device.
</para>
<para>
FIFOs store a chained list of buffers. They also keep the ending date of the last buffer, and whenever you push a new buffer, they enforce the time continuity of the stream by changing its start_date and end_date to match the FIFO's end_date (in case of a stream discontinuity, the aout core will have to reset the date). The aout core provides functions to access the FIFO. Please understand that none of these functions use mutexes to protect exclusive access, so you must deal with race conditions yourself if you want to use them directly !
</para>
<itemizedlist>
<listitem> <para> <emphasis> aout_FifoInit( aout_instance_t * p_aout, aout_fifo_t * p_fifo, u32 i_rate ) </emphasis> : Initialize the FIFO pointers, and the audio_date_t with the appropriate rate of the stream (see above for an explanation of aout dates). </para> </listitem>
<listitem> <para> <emphasis> aout_FifoPush( aout_instance_t * p_aout, aout_fifo_t * p_fifo, aout_buffer_t * p_buffer ) </emphasis> : Add p_buffer at the end of the chained list, update its start_date and end_date according to the FIFO's end_date, and update the internal end_date. </para> </listitem>
<listitem> <para> <emphasis> aout_FifoSet( aout_instance_t * p_aout, aout_fifo_t * p_fifo, mtime_t date ) </emphasis> : Trash all buffers, and set a new end_date. Used when a stream discontinuity has been detected. </para> </listitem>
<listitem> <para> <emphasis> aout_FifoMoveDates( aout_instance_t * p_aout, aout_fifo_t * p_fifo, mtime_t difference ) </emphasis> : Add or subtract microseconds from end_date and from start_date and end_date of all buffers in the FIFO. The aout core will use this function to force resampling, after lipsync issues. </para> </listitem>
<listitem> <para> <emphasis> aout_FifoNextStart( aout_instance_t * p_aout, aout_fifo_t * p_fifo ) </emphasis> : Return the start_date which will be given to the next buffer passed to aout_FifoPush(). </para> </listitem>
<listitem> <para> <emphasis> aout_FifoPop( aout_instance_t * p_aout, aout_fifo_t * p_fifo ) </emphasis> : Return the first buffer of the FIFO, and remove it from the chained list. </para> </listitem>
<listitem> <para> <emphasis> aout_FifoDestroy( aout_instance_t * p_aout, aout_fifo_t * p_fifo ) </emphasis> : Free all buffers in the FIFO. </para> </listitem>
</itemizedlist>
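A minimal sketch of the FIFO behavior described above follows : buffers are chained, and the push operation re-stamps each incoming buffer so that it starts exactly at the FIFO's current end_date. The names mimic the real API, but the signatures are simplified (no p_aout parameter, no locking) and the implementation is illustrative only :

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef int64_t mtime_t;

/* Simplified aout_buffer_t : only the chaining and date fields. */
typedef struct aout_buffer_t
{
    struct aout_buffer_t * p_next;
    mtime_t start_date, end_date;
} aout_buffer_t;

typedef struct aout_fifo_t
{
    aout_buffer_t *  p_first;
    aout_buffer_t ** pp_last;
    mtime_t end_date;
} aout_fifo_t;

static void aout_FifoInit( aout_fifo_t * p_fifo, mtime_t start_date )
{
    p_fifo->p_first = NULL;
    p_fifo->pp_last = &p_fifo->p_first;
    p_fifo->end_date = start_date;
}

static void aout_FifoPush( aout_fifo_t * p_fifo, aout_buffer_t * p_buffer )
{
    mtime_t length = p_buffer->end_date - p_buffer->start_date;
    /* Enforce the time continuity of the stream. */
    p_buffer->start_date = p_fifo->end_date;
    p_buffer->end_date   = p_fifo->end_date + length;
    p_fifo->end_date     = p_buffer->end_date;
    p_buffer->p_next = NULL;
    *p_fifo->pp_last = p_buffer;
    p_fifo->pp_last = &p_buffer->p_next;
}

static aout_buffer_t * aout_FifoPop( aout_fifo_t * p_fifo )
{
    aout_buffer_t * p_buffer = p_fifo->p_first;
    if( p_buffer != NULL )
    {
        p_fifo->p_first = p_buffer->p_next;
        if( p_fifo->p_first == NULL )
            p_fifo->pp_last = &p_fifo->p_first;
    }
    return p_buffer;
}
```

Note how a buffer pushed with a gap before its start_date is silently glued to the end of the previous one, which is exactly why the core must call the equivalent of aout_FifoSet() on a stream discontinuity.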
</sect2>
</sect1>
<!--
<sect1> <title> API for the decoders </title>
<para>
The API between the audio output and the decoders is quite simple. As soon as the decoder has the required information to fill in an audio_sample_format_t, it can call : p_dec-&gt;p_aout_input = aout_InputNew( p_dec-&gt;p_fifo, &amp;p_dec-&gt;p_aout, &amp;p_dec-&gt;output_format ).
</para>
<para>
In the next operations, the decoder will need both p_aout and p_aout_input. To retrieve a buffer, it calls : p_buffer = aout_BufferNew( p_dec-&gt;p_aout, p_dec-&gt;p_aout_input, i_nb_frames ).
</para>
<para>
The decoder must at least fill in start_date (using an audio_date_t is recommended), and then it can play the buffer : aout_BufferPlay( p_dec-&gt;p_aout, p_dec-&gt;p_aout_input, p_buffer ). In case of error, the buffer can be deleted (without being played) with aout_BufferDelete( p_dec-&gt;p_aout, p_dec-&gt;p_aout_input, p_buffer ).
</para>
<para>
When the decoder dies, or the sample format changes, the input stream must be destroyed with : aout_InputDelete( p_dec-&gt;p_aout, p_dec-&gt;p_aout_input ).
</para>
</sect1>
<!--
<sect1> <title> API for the output module </title>
</sect1>
@@ -2,7 +2,7 @@
* aout_internal.h : internal defines for audio output
*****************************************************************************
* Copyright (C) 2002 VideoLAN
- * $Id: aout_internal.h,v 1.15 2002/09/02 23:17:05 massiot Exp $
+ * $Id: aout_internal.h,v 1.16 2002/09/06 23:15:44 massiot Exp $
*
* Authors: Christophe Massiot <massiot@via.ecp.fr>
*
@@ -171,7 +171,7 @@ struct aout_instance_t
/* Locks : please note that if you need several of these locks, it is
* mandatory (to avoid deadlocks) to take them in the following order :
- * p_input->lock, mixer_lock, output_fifo_lock, input_fifo_lock.
+ * p_input->lock, mixer_lock, output_fifo_lock, input_fifos_lock.
* --Meuuh */
/* When input_fifos_lock is taken, none of the p_input->fifo structures
* can be read or modified by a third-party thread. */
@@ -2,7 +2,7 @@
* a52.c: A/52 basic parser
*****************************************************************************
* Copyright (C) 2001-2002 VideoLAN
- * $Id: a52.c,v 1.9 2002/09/02 23:17:05 massiot Exp $
+ * $Id: a52.c,v 1.10 2002/09/06 23:15:44 massiot Exp $
*
 * Authors: Stéphane Borel <stef@via.ecp.fr>
* Christophe Massiot <massiot@via.ecp.fr>
@@ -225,7 +225,12 @@ static int RunDecoder( decoder_fifo_t *p_fifo )
memcpy( p_buffer->p_buffer, p_header, 7 );
GetChunk( &p_dec->bit_stream, p_buffer->p_buffer + 7,
i_frame_size - 7 );
-        if( p_dec->p_fifo->b_die ) break;
+        if( p_dec->p_fifo->b_die )
+        {
+            aout_BufferDelete( p_dec->p_aout, p_dec->p_aout_input,
+                               p_buffer );
+            break;
+        }
/* Send the buffer to the mixer. */
aout_BufferPlay( p_dec->p_aout, p_dec->p_aout_input, p_buffer );