Archive for the ‘Audio’ Category

On Bluetooth codecs

Wednesday, December 15th, 2021

I got a strange request for an LDAC decoder as it may help to “…verify (or debunk) Sony’s quality claims.” This made me want to write a post about the known BT codecs and my opinion of them.

Bluetooth codecs can be divided into three categories: the standard ones defined in A2DP that nobody uses (MP3 and ATRAC), the standard ones that are widely used (AAC and SBC) and custom codecs supported by specific vendors.

So, let’s start with mandatory A2DP codecs:

  • SBC—a codec designed specifically for Bluetooth. It works like MPEG Audio Layer II but with 4 or 8 sub-bands and parametric bit allocation instead of fixed tables. This allows it to change bitrate at any frame (which lets it adapt to changing transmission quality). I’ve heard the opinion that it beats the newer codecs in quality at their bitrates but the standard intentionally capped it to prevent that. I don’t find that hard to believe;
  • MPEG-1,2 Audio—I’ve not heard of anybody actually using them and it’s for the best;
  • MPEG-2,4 AAC—it should give better quality than SBC but at the cost of much larger delay and decoding complexity;
  • ATRAC family—this feels like a proprietary replacement for AAC to me. I’ve not heard of anybody actually supporting any of these codecs in their products (it’s not that I’ve heard much about BT in general though).

Here I should also mention a candidate codec named LC3 (and LC3plus). Whatever audio codec FhG IIS develops, it’ll be AAC. LC3 is no exception: at first glance it looks like AAC LC with arithmetic coding and some additional coding tools glued to it.

There’s also the CVSD codec for speech transmission over BT. It’s a speech codec and that’s enough about it.

Now let’s move to the proprietary codecs:

  • aptX—a rather simple codec with a 4:1 compression ratio (four 16-bit samples into a single 16-bit word; see the sketch after this list). The codec works by splitting audio into four sub-bands, applying ADPCM and quantising each band to a fixed number of bits. Besides its inability to adapt to bad channels it should produce about the same quality as SBC (at least from a feature comparison point of view);
  • aptX HD—the same as non-HD version but works on 24-bit samples (and probably the only honest high-res/high-definition codec here);
  • aptX other variants—they exist but there’s no solid information about them;
  • LDAC—will be discussed below in more detail. For now suffice it to say it’s on MP2 level and the hi-res claims are just marketing;
  • LHDC and LLAC—not much is known about these codecs but after seeing the quality comparison picture (with a note) on the official website I don’t expect anything good;
  • Ultra Audio Transmission—there’s no information about it except for the name mentioned in the Wikipedia list of BT codecs and some marketing materials on a smartphone description page by the same vendor;
  • Samsung BT codecs—see above.
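To make the aptX item above more concrete, here’s a toy sketch of the 4:1 packing idea in Rust. The 8/4/2/2 bit split is made up for the illustration (the real aptX allocation differs), but the principle is the same: four quantised sub-band samples end up in one 16-bit word.

```rust
// A toy illustration of 4:1 packing; the 8/4/2/2 split is a made-up
// example, NOT the actual aptX bit allocation.
fn pack_word(q: [u16; 4]) -> u16 {
    // q[0] is the lowest sub-band (most bits), q[3] the highest (fewest)
    ((q[0] & 0xff) << 8) | ((q[1] & 0x0f) << 4) | ((q[2] & 0x03) << 2) | (q[3] & 0x03)
}

fn unpack_word(w: u16) -> [u16; 4] {
    [(w >> 8) & 0xff, (w >> 4) & 0x0f, (w >> 2) & 0x03, w & 0x03]
}
```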

Now let’s review LDAC specifically. I’m somewhat surprised nobody has written a decoder for it yet. It’s so easy to reconstruct the format from the open-source encoder that Paul B. Mahol could do it in a couple of days (before returning to the Bink2 decoder, hopefully). aptX has only a binary encoder and yet people have managed to RE it. I’m not going to do it myself because I don’t care much about Bluetooth codecs in general and it’s not a good fit for NihAV either.

To the technical details. The codec frame is either one long MDCT or two interlaced half-size MDCTs (just like ATSC A/52B), and the coefficients are coded as pairs, quads or larger single values (which reminds me of MP3 and MP2; the quantisation is very similar as well). Coefficients (in pairs and quads as well) are stored in bit fields; the only variable-length codes are used for coding the quantiser differences. Bit allocation information is transmitted for each frame so different coefficients can have different bit sizes (and thus precision). Nevertheless the maximum it can have is just 15 bits per coefficient (plus a sign), which makes it hardly any hi-resier than AAC LC or SBC. The only excuse that can be offered here is the one I heard about MP3 being hi-res: with large scales and coefficients you can have almost infinite precision. Disproving it is left as an exercise for the reader.
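As an illustration of that kind of coefficient coding, here’s a sketch under my assumptions (sign-magnitude storage and a bit source abstracted as an iterator; I haven’t checked the actual LDAC field layout):

```rust
// Read one run of coefficients as plain bit fields: each coefficient gets
// `width` magnitude bits from the per-frame allocation (capped at 15 as
// described above) plus a sign bit. Sign-magnitude is my assumption here.
fn read_coeffs(bits: &mut impl Iterator<Item = u32>, widths: &[u8]) -> Vec<i32> {
    let mut coeffs = Vec::with_capacity(widths.len());
    for &width in widths {
        let mut mag = 0u32;
        for _ in 0..width.min(15) {
            mag = (mag << 1) | bits.next().expect("truncated frame");
        }
        let sign = bits.next().expect("truncated frame");
        coeffs.push(if sign != 0 { -(mag as i32) } else { mag as i32 });
    }
    coeffs
}
```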

I hope now it’s clear why I don’t look at the development of Bluetooth codecs much. Back to slacking.

Looking at Voxware MetaVoice

Monday, December 13th, 2021

Since there’s not much I’d like to do with NihAV, I decided to revisit one old family of codecs.

It seems that they had several families of codecs and most (all?) of them were licensed from some other company, sometimes with some changes (there are four codecs licensed from Lernout & Hauspie, MetaSound is essentially TwinVQ with a different set of codebooks, RT2x and VR1x are essentially different flavours of the same codec, and SC3 and SC6 might be related to a Micronas codec though the Micronas SC4 decoder does not look similar at all).

So here’s a short review of those various codecs that I have some information about:

  • L&H CELP 4.8kbps—this is a rather standard CELP codec with no remarkable features (and I’ve even managed to write a working decoder for it);
  • L&H SBC 8/12/16kbps—that one is a sub-band coder with variable frame size (and amount of bits allocated per band);
  • RT24/RT28/RT29HQ and VR12/VR18—all these codecs share a common core; essentially it’s a variable-bitrate LPC-based speech codec with four different frame modes and nothing transmitted besides the frame mode, pitch information and the filter coefficients (for CELP you’d also have pulse information);
  • SC3/SC6—this one seems to be more advanced and, by the look of it, it uses an order-12 LPC filter (speech codecs usually use LPC of either order 10 or 16).

I’ll try to document it for The Wiki but don’t expect much. And I’m not going to implement decoders for these formats either (besides the already implemented 4.8k CELP one): the codecs have variable bitrate, so you need to decode a frame (at least partially) in order to tell how many bytes it takes—and I don’t want to introduce a hack in NihAV to support such a mode (either the demuxer should serve variable-length frames or the decoder should expect fixed-size frames); the even worse thing is that they are speech codecs, which I don’t understand well (and there’s a lot of obscure code in there). It took me more than a week to implement and debug the CELP decoder.

Fun story: I could not use the MPlayer2 binary loader because the codec was misdetected as MPEG Audio Layer II. The cause of that was libavformat and its “helpful” tag search: when twocc 0x0070 was not found, it tried the upper-case 0x0050 which belongs to MP2. And after I finally made it work I discovered a fun bug in the reference decoder: while calculating a cosine, the difference can overflow and thus the resulting value is somewhat wrong (it could be easily fixed by changing the “less or equal” condition to “less” in the table search refinement step).

Anyway, it’s done and now I can forget about it.

NihAV: now with lossless audio encoder

Tuesday, October 26th, 2021

Since I wanted to try my hoof at various encoding concepts, it’s no wonder that after a lossy audio encoder (IMA ADPCM with trellis encoding), a lossless video encoder (ZMBV, using my own deflate implementation for compression) and lossy video encoders (Cinepak, for playing with vector quantisation, and VP6, for playing with many other concepts) it was time for a lossless audio encoder.

To remind you, there are essentially two types of lossless audio compressors—fast and asymmetric (based on LPC filters) and slow and symmetric (based on adaptive filters, usually long LMS ones). The theory behind them is rather simple and described below.
(more…)

Playing with trellis and encoding

Sunday, August 8th, 2021

I said before that I want to play with encoding algorithms within NihAV and here’s another step (the previous major step was vector quantisation and a simple Cinepak encoder using it). Now it’s time for trellis search for encoding.

The idea by itself is very simple: you want to encode a sequence optimally (or decode a transmitted sequence with distortions), so you represent your data as a set of possible states for each sample and search for a path from one state to another with the minimum error. Since each state of a sample is connected with all states of the previous sample, the graph looks like a trellis.

The search itself is performed by selecting, for each state, the transition from a previous state that gives the minimal error, then selecting the state with the least error for the last sample and tracing back the path that led to it from the beginning. You just need to store a pointer to the previous state, the error value and whatever decoder state you require.

I’ve chosen an IMA ADPCM encoder as a test playground since it’s simple but useful. The idea of the format is very simple: you have a state consisting of the current sample value and a step size used as a multiplier for the transmitted 4-bit difference value; you reconstruct the difference, add it to the previously stored value, and correct the step size (small delta—decrease step, large delta—increase step). You have 16 possible states for each sample, which keeps the search reasonably fast.

There’s another tricky question of selecting the initial step size (it will adapt to the samples but you need to start with something). I select it to be close to the difference between the first and second samples and actually abuse the first state to store not the index of the previous state but rather a step index. This way I start with (ideally) 16 different step sizes around the current one and can use the one that gives a slightly lower error in the end.

And another fun fact: this way I only need the code for decompressing a single ADPCM sample and no actual compression code—the search already traverses all possible compressed codes.
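Here’s a minimal sketch of the whole search in Rust (an illustration of the method rather than NihAV’s actual encoder; the initial-step trick described above is omitted and a fixed starting state is passed in instead):

```rust
// Standard IMA ADPCM tables.
const STEP_TABLE: [i32; 89] = [
        7,     8,     9,    10,    11,    12,    13,    14,    16,    17,
       19,    21,    23,    25,    28,    31,    34,    37,    41,    45,
       50,    55,    60,    66,    73,    80,    88,    97,   107,   118,
      130,   143,   157,   173,   190,   209,   230,   253,   279,   307,
      337,   371,   408,   449,   494,   544,   598,   658,   724,   796,
      876,   963,  1060,  1166,  1282,  1411,  1552,  1707,  1878,  2066,
     2272,  2499,  2749,  3024,  3327,  3660,  4026,  4428,  4871,  5358,
     5894,  6484,  7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
];
const INDEX_TABLE: [i32; 8] = [-1, -1, -1, -1, 2, 4, 6, 8];

// Decode a single 4-bit code -- the only piece of codec code the search needs.
fn decode_nibble(pred: i32, step_idx: usize, code: u8) -> (i32, usize) {
    let step = STEP_TABLE[step_idx];
    let mut diff = step >> 3;
    if code & 1 != 0 { diff += step >> 2; }
    if code & 2 != 0 { diff += step >> 1; }
    if code & 4 != 0 { diff += step; }
    if code & 8 != 0 { diff = -diff; }
    let new_idx = (step_idx as i32 + INDEX_TABLE[(code & 7) as usize]).clamp(0, 88) as usize;
    ((pred + diff).clamp(-32768, 32767), new_idx)
}

#[derive(Clone, Copy)]
struct Node { pred: i32, step_idx: usize, err: u64, from: usize }

// For every sample keep 16 nodes (one per 4-bit code), each remembering the
// cheapest predecessor; at the end pick the best final node and backtrack.
fn trellis_encode(samples: &[i16], init_pred: i32, init_step: usize) -> Vec<u8> {
    assert!(!samples.is_empty());
    let start = Node { pred: init_pred, step_idx: init_step, err: 0, from: 0 };
    let mut cols: Vec<[Node; 16]> = Vec::with_capacity(samples.len());
    for (t, &sample) in samples.iter().enumerate() {
        let mut col = [Node { pred: 0, step_idx: 0, err: u64::MAX, from: 0 }; 16];
        let nprev = if t == 0 { 1 } else { 16 };
        for code in 0..16 {
            // pick the predecessor state that makes emitting this code cheapest
            for from in 0..nprev {
                let prev = if t == 0 { start } else { cols[t - 1][from] };
                if prev.err == u64::MAX { continue; }
                let (pred, step_idx) = decode_nibble(prev.pred, prev.step_idx, code as u8);
                let d = (pred - sample as i32) as i64;
                let err = prev.err + (d * d) as u64;
                if err < col[code].err {
                    col[code] = Node { pred, step_idx, err, from };
                }
            }
        }
        cols.push(col);
    }
    // pick the best final node and trace the path back to the start
    let last = cols.len() - 1;
    let mut best = (0..16).min_by_key(|&i| cols[last][i].err).unwrap();
    let mut codes = vec![0u8; samples.len()];
    for t in (0..samples.len()).rev() {
        codes[t] = best as u8;
        best = cols[t][best].from;
    }
    codes
}
```

Note how decode_nibble is the only codec-specific part here, which is exactly the fun fact above.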

I hope this demonstrates that it’s an easy method that improves quality significantly (I have not conducted proper testing but from a quick test it reduced the mean squared error for me by 10-20%).

It should also come in handy for video compression but unfortunately rate distortion optimisation does not look that easy…

Why codecs are designed like this and why they are not very interchangeable

Monday, August 2nd, 2021

Sometimes I have to explain the role of various codecs and why it’s pointless in most cases to adapt compression tricks from image codecs to audio codecs (and vice versa), or even from lossy to lossless codecs for the same kind of content. If you understand that already then you’ll find no new information here.

Yours truly
Captain Obvious
(more…)

A quick glance at Bink Audio

Tuesday, February 2nd, 2021

Since my attention was drawn to this format (and a binary specification was provided as well) I’ve briefly looked at it—and a brief look should be enough.

From what I see it’s the same Bink Audio but in its own container instead of Bink. It has a 24-byte header, a table of 16-bit audio block sizes and the actual audio data (each frame may be prefixed with 0x99 0x99 but I’m not sure since I’ve not seen a single file in this format).

Frame header:

  • 1FCB magic;
  • one byte of version (version 2 groups audio frames together, the previous one does not);
  • one byte with number of blocks per frame;
  • two-byte sampling rate;
  • four-byte variable, probably frame length in samples;
  • four-byte unknown variable, maybe suggested input buffer size?
  • four-byte unknown variable;
  • four-byte variable, number of frames in seek table.
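A quick sketch of reading that header (assuming little-endian fields and the magic stored as four ASCII characters, which matches the 24-byte total; the field names are my guesses from the list above):

```rust
use std::io::{self, Read};

struct BinkAudioHeader {
    version: u8,          // 1 or 2
    blocks_per_frame: u8,
    sample_rate: u16,
    frame_len: u32,       // probably frame length in samples
    unknown1: u32,        // maybe suggested input buffer size?
    unknown2: u32,
    seek_entries: u32,    // number of frames in the seek table
}

fn read_header<R: Read>(src: &mut R) -> io::Result<BinkAudioHeader> {
    let mut buf = [0u8; 24];
    src.read_exact(&mut buf)?;
    if &buf[0..4] != b"1FCB" {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "no 1FCB magic"));
    }
    let u32_at = |off: usize| u32::from_le_bytes(buf[off..off + 4].try_into().unwrap());
    Ok(BinkAudioHeader {
        version: buf[4],
        blocks_per_frame: buf[5],
        sample_rate: u16::from_le_bytes([buf[6], buf[7]]),
        frame_len: u32_at(8),
        unknown1: u32_at(12),
        unknown2: u32_at(16),
        seek_entries: u32_at(20),
    })
}
```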

So as expected it’s nothing special.

Looking at XVD

Saturday, January 30th, 2021

A week ago a certain XviD developer made a request to look at something more compressed called XVD and so I did.
(more…)

A look on weird audio codec

Thursday, January 7th, 2021

Since I still have nothing better to do I decided to look at ALF2CD audio codec. And it turned out to be weird.

The codec is remarkable in that while it seems to be simple transform+coefficient coding, it does that in its own unique way: the transform is some kind of integer FFT approximation and the coefficient coding is done with a CABAC-like approach. Let’s review the details of the decoder as much as I understood them (so not much).

Framing. Audio is split into sub-frames for the middle and side channels with 4096 samples per sub-frame. Sub-frame sizes are fixed for each bitrate: for 512kbps it’s 2972 bytes each, for 384kbps it’s 2230 bytes each, for 320kbps it’s 2230/1486 bytes, for 256kbps it’s 1858/1114 bytes. Each sub-frame has the following data coded in it: the first and last 16 raw samples, a DC value, and the transform coefficients.

Coding. All values except the transform coefficients are coded in this sequence: a non-zero flag, a sign, and the absolute value coded using an Elias gamma code. Transform coefficients are coded in a bit-slicing mode: you transmit the length of the region whose values may have bit 0x100000 set plus bit flags to tell which entries there actually have it set, then the additional length of the region that may have 0x80000 set, and so on. The rationale is that larger coefficients come first, so only the first N coefficients may be that large, then N+M coefficients may have the next bit set, and so on down to bit 0. Plus this way you can have a coarse or fine approximation of the coefficients to fit the fixed frame size without special tricks to change the size.
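For reference, here’s what plain Elias gamma decoding looks like (a sketch over an abstract bit source; in ALF2CD the individual bits would actually come out of the binary coder described below):

```rust
// Elias gamma: a value with N significant bits is sent as N-1 zero bits
// followed by the value itself, its top (one) bit acting as terminator.
// E.g. the bits 0 0 1 0 1 decode to 0b101 = 5.
fn read_elias_gamma(bits: &mut impl Iterator<Item = u32>) -> u32 {
    let mut extra = 0;
    while bits.next() == Some(0) {
        extra += 1;
    }
    let mut value = 1u32; // the terminating one bit is the top bit
    for _ in 0..extra {
        value = (value << 1) | bits.next().expect("truncated stream");
    }
    value
}
```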

Speaking of the coder itself, it is a context-adaptive binary range coder but not exactly the CABAC you see in ITU H.26x codecs. It has some changes, especially in the model, which is actually a combination of several smaller models in the same space, and at the edge of each sub-model you may have to flip the MPS value and maybe transition to some other sub-model. I.e. a single model is a collection of fixed probabilities of a one/zero appearing, and depending on what bit we decoded we move to another probability that suits it better (more zeroes to expect or more ones to expect). In H.26x there’s a single model for that; in ALF2CD there are several such models, so when you hit the edge state aka “expect all ones or all zeroes” you don’t simply remain in that state but may transition to another sub-model with different probabilities for the expected ones/zeroes. A nice trick I’d say.
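The general shape of such a model, as I understand it (a loose sketch with made-up fields and without the range-coding arithmetic itself; these are not the actual ALF2CD tables):

```rust
// One entry of the adaptation table: a fixed probability plus the states
// to move to after decoding the more/less probable symbol.
struct StateDesc {
    prob: u16,       // probability of the MPS in some fixed-point scale
    next_mps: usize, // state to use after decoding the MPS
    next_lps: usize, // state to use after decoding the LPS
    flip_mps: bool,  // edge state: swap the expected symbol on an LPS
}

struct Context {
    state: usize,
    mps: u8, // the currently expected ("most probable") bit value
}

fn update_context(ctx: &mut Context, table: &[StateDesc], bit: u8) {
    let desc = &table[ctx.state];
    if bit == ctx.mps {
        ctx.state = desc.next_mps;
    } else {
        if desc.flip_mps {
            ctx.mps ^= 1; // the usual CABAC-style flip at the edge state...
        }
        // ...but next_lps may point into a different sub-model instead of
        // staying within the same one as in H.26x
        ctx.state = desc.next_lps;
    }
}
```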

The coder also maintains around 30 bit states: state 0 is for coding non-zero flags, state 1 is for coding the value sign, states 2-25 are for coding the value exponent and state 26 is for coding the value mantissa (or it’s states 2-17 for the exponent and state 18 for the mantissa bits when coding the lengths of the transform coefficient regions).

Reconstruction. This is done by performing the inverse integer transform (which looks like an FFT approximation but I’ve not looked at it that closely), replacing the first and last 16 samples with the previously decoded ones (probably to deal with the effects of windowing or imperfect reconstruction), and finally undoing the mid/side stereo for both sub-frames.

Overall it’s an interesting codec since you don’t often see arithmetic coding employed in lossy audio codecs unless they’re very recent ones or BSAC. And even then I can’t remember any audio codec using a binary arithmetic coder instead of multi-symbol models. Who knows, maybe this approach will be used once again as something new. Most of those new ideas in various codecs have been implemented before, after all (e.g. spatial prediction in H.264 is just a simplified version of the spatial prediction in WMV2 X8-frames, and quadtrees were used quite often in the 90s before reappearing in H.265; the same way Opus is not so modern if you know about ITU G.722.1 and have heard that WMA Voice could have WMA Pro-coded frames in its stream).

A brief look at Sonarc

Tuesday, September 29th, 2020

Recently The Mike asked if I could look at this format. In case you didn’t know, The Multimedia Mike is one of the under-appreciated founders of open-source multimedia, involved both in reverse engineering codecs and in maintaining infrastructure for about two decades (for example, this particular blog has been here for fifteen years thanks to him and his maintenance efforts). So of course I had to look at it, if only out of sheer respect.

Sonarc is probably the first known lossless audio codec, as its copyright mentions 1992 as the earliest date (Shorten and VocPack appeared in 1993). Spoiler: it turned out to be closer to Shorten in design.

This one was harder to RE because it was larger (the decompressor is three to four times larger than VocPack’s) and the original was written in Borland Pascal with all the peculiarities that brings. By those peculiarities I mean mostly Pascal strings. The code for manipulating them is annoying to parse but not too bad; the main problem is that they are put in the same segment as the code, right before the function that uses them, and that confuses Ghidra, which for some reason assigns them to the segment with the standard library routines instead (and uninitialised variables are not assigned to any segment at all). The write() implementation is also no fun.

Side note: back in the day Turbo Pascal was probably the best programming language for DOS, and back in school at least two of my schoolmates were doing crazy things with it (and with Delphi later) which I couldn’t (I was writing in C, as I still do today). Yet somehow the popularity of the language vanished and I haven’t heard anything about them becoming famous programmers (neither did I, but they had better chances). And the only modern project written in Pascal that I’m aware of is Hedgewars.

Anyway, let’s talk about the format itself. Sonarc can compress raw PCM, .voc and .wav into either its own format or into .wav, and it supports both 8- and 16-bit audio.

From what I saw it uses the same approach as Shorten: optionally applying an LPC filter and coding the residues. Residues can be coded with two different approaches: the old one for 8-bit audio and the new one for 8- and 16-bit audio. The old 8-bit coding uses one of eight different static Huffman codebooks or can store the residues as raw bytes (I can’t remember many other codecs doing the same except for MLP and DTS-HD Lossless, probably because why compress audio in that case). The new 8-/16-bit coding still uses fixed codebooks but in a different fashion: now they simply code the number of bits for the residue. It does not look like the data is split into segments but I may be wrong (the I/O is still not the easiest thing to untangle there).
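Here’s my reading of that new mode as a sketch (the length code below is a stand-in unary code since I don’t know the actual codebooks, and the sign handling is a guess):

```rust
// A static prefix code gives the residue's bit length, then that many raw
// bits follow. The unary length code and the trailing sign bit are
// placeholders, NOT Sonarc's real tables.
fn read_residue(bits: &mut impl Iterator<Item = u32>) -> i32 {
    let mut len = 0;
    while bits.next() == Some(1) {
        len += 1;
    }
    if len == 0 {
        return 0; // a zero residue needs no further bits
    }
    let mut mag = 0u32;
    for _ in 0..len {
        mag = (mag << 1) | bits.next().expect("truncated stream");
    }
    if bits.next() == Some(1) { -(mag as i32) } else { mag as i32 }
}
```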

Overall it’s not a bad codec for its time, and e.g. FLAC has not moved that far from it in concepts (except that it uses Rice codes and has independent frames plus partitioning inside an individual frame for better compression). I hope though that there are no older lossless audio codecs out there to be discovered (CCITT G.711 µ-law with its fixed 1:1 compression does not count).

A look at VP1 and VP2

Saturday, September 26th, 2020

One of the issues with the On2 VPx family is that they started it from VP3 despite having four different TrueMotion codecs before that (it’s like the company was called Valve and not Duck at that time). But I wanted to look at some lossless audio codecs and there’s VocPack, or VP for short, which has versions 1 and 2. Bingo!

This is a very old lossless audio codec that appeared in 1993 along with Shorten and, as it turns out, originated the second approach to lossless audio compression. While Shorten was a simple format oriented towards fast decoding and thus used fixed prediction (either an LPC filter or even a fixed prediction scheme) and Rice codes for the residues (the same scheme used in FLAC and TAK), VocPack employed an adaptive filter and arithmetic coding (the approach carried on by LA, Monkey’s Audio, OptimFROG and such). And it was made for DOS and 8-bit audio! Well, version 2 added support for 16-bit but it seems to compress only the high 8 bits of each sample anyway while transmitting the low bits verbatim.
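To show what an adaptive filter means here, below is a minimal sign-sign LMS predictor sketch (VocPack’s actual filter is certainly different; this just demonstrates why the approach is symmetric: encoder and decoder update the same predictor from already-known samples, so no filter coefficients need to be transmitted):

```rust
// An 8-tap sign-sign LMS predictor with Q12 fixed-point weights
// (the length and precision are arbitrary choices for the example).
struct LmsFilter {
    history: [i32; 8],
    weights: [i32; 8],
}

impl LmsFilter {
    fn new() -> Self {
        LmsFilter { history: [0; 8], weights: [0; 8] }
    }
    fn predict(&self) -> i32 {
        let sum: i64 = self.history.iter().zip(&self.weights)
            .map(|(&h, &w)| i64::from(h) * i64::from(w)).sum();
        (sum >> 12) as i32
    }
    fn update(&mut self, sample: i32, residue: i32) {
        // nudge every weight in the direction that reduces the error
        let d = residue.signum();
        for (w, &h) in self.weights.iter_mut().zip(&self.history) {
            *w += d * h.signum();
        }
        self.history.rotate_right(1);
        self.history[0] = sample;
    }
}

// encoder: residue = sample - lms.predict(); code residue; lms.update(sample, residue);
// decoder: sample = residue + lms.predict(); lms.update(sample, residue);
```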

And it turned out to be my first real experience of using Ghidra on DOS executables. The main troubles were identifying library functions and dealing with pointers. Since it was compiled with Borland C++ 3.0 (who doesn’t remember it?) it was rather easy to decompile, but the library functions were not recognised (DOS executables don’t get much love these days…). Still, by searching the disassembly for int 21h with Ralf Brown’s interrupt list at hand it was easy to identify the calls for file operations (open/read/write/seek), from those infer the stdio library functions using them, and finally find the code using all those getc()s. And of course the segmented memory model makes decompiling fun, especially when the decompiler can’t understand segment/offset variables being used separately. As a result, sometimes you recognise an offset but have to look at the data segment yourself to see what it refers to; even worse, for some local variables Ghidra seemed to assume the wrong segment, which resulted in variables in the disassembly and decompiled output pointing to non-existent locations.

Despite all of that it was rather easy to understand what the VP1 unpacker does. For VP2 only the packer is publicly available and no unpacker (feel free to trace the author and buy a copy from him that supports unpacking); it also depends more on those wrongly understood global variables, which prevented me from understanding how encoding a residue works there. In theory you should be able to set the data segment manually but I don’t see a point in spending more than a couple of hours on REing the format.

It was a nice distraction though.