Archive for the ‘Lossless Audio’ Category

A brief look at Sonarc

Tuesday, September 29th, 2020

Recently The Mike asked if I could look at this format. In case you didn’t know, The Multimedia Mike is one of the under-appreciated founders of open-source multimedia, involved both in reverse engineering codecs and in maintaining infrastructure for about two decades (for example, this particular blog has been here for fifteen years thanks to him and his maintenance efforts). So of course I had to look at it, if only out of sheer respect.

Sonarc is probably the earliest known lossless audio codec, as its copyright notice gives 1992 as the first date (Shorten and VocPack appeared in 1993). Spoiler: it turned out to be closer to Shorten in design.

This one was harder to RE because it was larger (the decompressor is three to four times the size of VocPack’s) and because the original was written in Borland Pascal with all the peculiarities that brings. By those peculiarities I mean mostly Pascal strings. The code for manipulating them is annoying to parse but not too bad; the main problem is that they are placed in the code segment right before the function that uses them, and that confuses Ghidra, which for some reason resolves them against the segment with standard library routines instead (and uninitialised variables are not assigned to any segment at all). The write() implementation is no fun either.

Side note: back in the day Turbo Pascal was probably the best programming language for DOS, and back in school at least two of my schoolmates were doing crazy things with it (and with Delphi later) which I couldn’t (I was writing in C then, as I still do today). Yet somehow the popularity of the language vanished, and I haven’t heard anything about them becoming famous programmers (neither did I, but they had better chances). The only modern project written in Pascal that I’m aware of is Hedgewars.

Anyway, let’s talk about the format itself. Sonarc can compress raw PCM, .voc and .wav into either its own format or into .wav, and it supports both 8- and 16-bit audio.

From what I saw it uses the same approach as Shorten: optionally apply an LPC filter and code the residues. Residues can be coded in two different ways: an old one for 8-bit audio and a new one for 8- and 16-bit audio. The old 8-bit coding uses one of eight static Huffman codebooks or can store residues as raw bytes (I can’t recall many other codecs doing the same except for MLP and probably DTS-HD Lossless, because why compress audio in that case). The new 8-/16-bit coding still uses fixed codebooks but in a different fashion: now they simply code the number of bits in the residue, as sketched below. It does not look like the data is split into segments, but I may be wrong (the I/O is still not the easiest thing to untangle there).
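
To illustrate that second scheme, here is a minimal sketch of “code the bit length, then the value verbatim” decoding. The unary length code, the signed mapping and the bit reader are my assumptions for illustration; Sonarc’s actual fixed codebooks and I/O differ:

    // Minimal MSB-first bit reader, for illustration only.
    struct BitReader<'a> { data: &'a [u8], pos: usize }

    impl<'a> BitReader<'a> {
        fn new(data: &'a [u8]) -> Self { Self { data, pos: 0 } }
        fn read_bit(&mut self) -> u32 {
            let bit = (self.data[self.pos >> 3] >> (7 - (self.pos & 7))) & 1;
            self.pos += 1;
            bit as u32
        }
        fn read_bits(&mut self, n: u32) -> u32 {
            (0..n).fold(0, |acc, _| (acc << 1) | self.read_bit())
        }
    }

    // Decode one residue: a length code (assumed unary here) gives the
    // number of bits, then that many raw bits follow and are mapped back
    // to a signed value.
    fn read_residue(br: &mut BitReader) -> i32 {
        let mut nbits = 0;
        while br.read_bit() == 1 { nbits += 1; }     // assumed length code
        if nbits == 0 { return 0; }
        let val = br.read_bits(nbits);
        // assumed even/odd mapping of unsigned codes to signed residues
        if val & 1 == 1 { ((val >> 1) + 1) as i32 } else { -((val >> 1) as i32) }
    }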

Overall it’s not a bad codec for its time, and e.g. FLAC has not moved that far from it conceptually (except that it uses Rice codes and has independent frames plus partitioning inside individual frames for better compression). I hope, though, that there are no older lossless audio codecs out there to be discovered (CCITT G.711 µ-law with its fixed 2:1 compression does not count).

A look at VP1 and VP2

Saturday, September 26th, 2020

One of the issues with the On2 VPx family is that the numbering starts from VP3 even though the company had four different TrueMotion codecs before that (it’s as if the company was called Valve and not Duck at that time). But I wanted to look at some lossless audio codecs, and there’s VocPack, or VP for short, which has versions 1 and 2. Bingo!

This is a very old lossless audio codec that appeared in 1993 along with Shorten and, as it turns out, originated the second approach to lossless audio compression. While Shorten was a simple format oriented towards fast decoding and thus used fixed prediction (either an LPC filter or an even simpler fixed prediction scheme) and Rice codes for the residues (the same scheme used in FLAC and TAK), VocPack employed an adaptive filter and arithmetic coding (the approach carried on by LA, Monkey’s Audio, OptimFROG and such; see the sketch below). And it was made for DOS and 8-bit audio! Well, version 2 added support for 16 bits, but it seems to compress only the high 8 bits of each sample anyway while transmitting the low bits verbatim.
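
For contrast, here is the adaptive approach in its generic form: a sign-sign LMS predictor whose weights get nudged after every sample. This is a sketch of the technique in general, not VocPack’s actual filter (whose order, scaling and update rule I haven’t recovered):

    // Generic sign-sign LMS adaptive predictor. The order and the
    // fixed-point scale are arbitrary choices for illustration.
    struct AdaptivePredictor {
        weights: [i32; 4],
        history: [i32; 4],
    }

    impl AdaptivePredictor {
        fn new() -> Self { Self { weights: [0; 4], history: [0; 4] } }

        // Predict the next sample from the history, output the residue
        // (which would then go to the arithmetic coder) and adapt.
        fn process(&mut self, sample: i32) -> i32 {
            let pred = self.weights.iter()
                .zip(self.history.iter())
                .map(|(w, h)| w * h)
                .sum::<i32>() >> 8;
            let residue = sample - pred;
            // nudge each weight in the direction that would have
            // reduced the error (hence "sign-sign")
            for (w, h) in self.weights.iter_mut().zip(self.history.iter()) {
                *w += residue.signum() * h.signum();
            }
            self.history.rotate_right(1);
            self.history[0] = sample;
            residue
        }
    }

The decoder runs the same filter and adds the decoded residue back, so both sides stay in sync without transmitting any coefficients; that is the property LA, Monkey’s Audio and OptimFROG later pushed to the extreme.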

And it turned out to be my first real experience of using Ghidra on DOS executables. The main troubles were identifying library functions and dealing with pointers. Since it was compiled with Borland C++ 3.0 (who doesn’t remember it?) it was rather easy to decompile, but library functions were not recognised (DOS executables don’t get much love these days…). Still, by searching the disassembly for int 21h with Ralf Brown’s interrupt list at hand it was easy to identify the calls for file operations (open/read/write/seek), from those infer the stdio library functions using them, and finally find the code using all those getc()s. And of course the segmented memory model makes decompiling fun, especially when the decompiler can’t understand the segment and offset parts of a variable being used separately. As a result, sometimes you recognise an offset but have to look at the data segment yourself to see what it refers to; even worse, for some local variables Ghidra seemed to assume the wrong segment, which resulted in variables in the disassembly and decompiled output pointing to non-existent locations. Despite all of that it was rather easy to understand what the VP1 unpacker does. VP2 has only a packer and no publicly available unpacker (feel free to track down the author and buy a copy from him that supports unpacking), plus it depends more on those wrongly-resolved global variables, which prevented me from understanding how encoding a residue works there. In theory you should be able to set the data segment manually, but I don’t see a point in spending more than a couple of hours on REing the format.

It was a nice distraction though.

Lossless audio codecs were more advanced than I thought

Wednesday, September 23rd, 2020

As I’d mentioned in a previous post on lossless audio codecs, I wanted to look at some of those that are still not reverse engineered, for documentation’s sake. And I did exactly that, so now the entries on LA, OptimFROG and RK Audio are not stubs any more but contain some information on how the codecs work.

And if you look at the LA structure you see a lot of filters of various sizes and structures, plus adaptive weights used to select certain parameters. If you look at other lossless audio codecs with high compression and slow decoding, like OptimFROG or Monkey’s Audio, you’ll see the same picture: several filters of different kinds and sizes layered over each other, plus adaptive weights also used in residual coding (a sketch of that structure follows below). Of course that reminded me of AV2 and more specifically of neural networks. And what do you know, Monkey’s Audio actually calls its longer filters neural networks (hence the name NNFilter.h in the official SDK, and you can spot it in the version history as well, leaving no doubt that it’s exactly neural networks it is named after).
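
In code the analogy becomes obvious: the typical structure is a cascade where every stage is an adaptive filter trained on the fly and fed the residual of the previous stage. The stage structure and orders below are placeholders, not any particular codec’s configuration:

    // One adaptive stage: an order-1 sign-LMS filter (placeholder).
    struct AdaptiveStage { weight: i32, prev: i32 }

    impl AdaptiveStage {
        fn process(&mut self, x: i32) -> i32 {
            let pred = (self.weight * self.prev) >> 8; // assumed scale
            let err = x - pred;
            self.weight += err.signum() * self.prev.signum();
            self.prev = x;
            err
        }
    }

    // The "layers of filters" structure: each stage removes whatever
    // correlation the previous one left behind, much like layers in a
    // network refine each other's output.
    fn filter_cascade(stages: &mut [AdaptiveStage], sample: i32) -> i32 {
        let mut residue = sample;
        for stage in stages.iter_mut() {
            residue = stage.process(residue);
        }
        residue // the final residual goes to the range coder
    }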

Which leads me to the only possible conclusion: lossless audio codecs had been using neural networks for compression before it became mainstream and it gave them the best compression ratios in the class.

And if we apply all this knowledge to video coding, then maybe in AV4 we’ll finally see some kind of convolution filters processing whole tiles and then smaller blocks to remove spatial redundancy, maybe with some compaction layers like many neural network designs have (or transforms for the largest possible block size, as in H.265/AV1/AVS2) and expansion layers (well, what do you think motion interpolation actually does?), and RNNs to code the residues left over from all the prediction.

Revisiting lossless codecs…

Sunday, September 6th, 2020

I’ve decided to add a couple of lossless audio formats in preparation for a long-term goal of having a NihAV-based player (the debug tool nihav-player that I currently have can’t really count as one, especially considering that it does not play pure audio files and tends to deadlock in the SDL audio thread).

So I’ve added a nihav-llaudio crate with the four most common formats for the music I have, namely FLAC, Monkey’s Audio, TTA and WavPack. And I guess it’s time to revisit my opinion on various lossless audio formats now that I’ve (re)implemented support for some of them (I tried to summarise my views on them almost ten years ago). Let’s see what has changed since then:

  • I had a closer look at MPEG-4 ALS and it turned out to be rather interesting (and probably the only lossless audio codec with P-frames), but it also has somewhat insane options (like a maximum LPC prediction order of 1023, or coding the whole file with just one I-frame and the rest being P-frames so that no seeking is possible), RLSLMS mode is broken (the reference decoder can’t decode the official reference samples), and it got no popularity at all;
  • TTA turned out to be very simple, with a baffling rationale:

    The sample count in a TTA1 frame is a multiple to 576 (sound buffer granule). Based on this, the “frame time” is defined as a constant 1.04489795918367346939. Thus, the sample count in a regular TTA1 frame determined as: regular TTA1 frame length = frame time * sample rate.

    I’m no mathematician, so this does not form a coherent logical chain for me; I’d use something like “frame length in samples is the sample rate rounded up to a multiple of 576” instead of “sample rate multiplied by 256/245” (that constant is exactly 256/245, so e.g. 44100 Hz gives 44100 × 256 / 245 = 46080 = 80 × 576 samples per frame). The main irritating point is that the last frame contains fewer samples, and you need to signal that it’s the last frame (or merely check whether you have enough bits left to decode a full frame after you’ve decoded enough samples for the last frame). Oh, and TTA2 seems to be still in development.

  • And speaking of codecs in development, I don’t see new lossless audio codecs appearing after 2010. Either I’ve got too old to spot them or the interest has finally faded out. That might be because most people don’t buy music any more but rather rent it from some online store or use a streaming service. And those who still do probably use one of the old established codecs like FLAC.
  • And since I’ve mentioned it, my opinion on it has not changed, it only got a bit more refined now that I have a decoder for it as well. Previously I thought FLAC was a simple format with a bad bitstream format that makes seeking hard. Now I know that FLAC is a simple format (fixed predictor or LPC up to order 32 and fixed Rice codes; the only thing that improves compression is splitting the residues into partitions for which the optimal Rice parameter k can be selected) with a horribly designed bitstream format.

    Normally lossless audio formats either store offsets for each frame or have an easily recognisable frame header, but FLAC is different. It’s obvious that the author was inspired by the MPEG audio header design, except that those headers actually have the frame size coded in them. Here, in order to find where a frame ends, you need either to decode it or to calculate a CRC over the data you read (and, in the likely case of false positives, also check that the data is followed by a valid header; see the sketch after this list). One could argue that there’s often a seek table in a FLAC file, but e.g. in luckynight.flac those entries point to positions at multiples of ten seconds, making seeking to a more precise position a matter of skipping frames (which is fun, see above).

  • WavPack is still the best-designed format in my opinion, though it would be nicer to have an initial header with various metadata instead of having it stored in the first block. Other than that, still no objections.
  • And it turns out there’s lossless AAC compression that employs a wavelet transform before LPC (it’s the Chinese AAC though, so who cares).
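
To make the FLAC seeking complaint above concrete, below is a sketch of the scanning a seek has to start with. The 14-bit sync code plus the mandatory zero bit are per the FLAC format; everything a real implementation must do afterwards (parse the variable-length header fields, verify the header CRC-8 and ideally the frame CRC-16) is only described in the comments:

    // Find plausible FLAC frame header starts: the 14-bit sync code
    // 0b11111111111110 followed by a mandatory zero bit. Each candidate
    // then needs its header parsed and its CRC-8 verified, and on false
    // positives you keep scanning; that is what makes precise seeking
    // so unpleasant.
    fn find_frame_candidates(data: &[u8]) -> Vec<usize> {
        let mut hits = Vec::new();
        for i in 0..data.len().saturating_sub(1) {
            if data[i] == 0xFF && (data[i + 1] & 0xFE) == 0xF8 {
                hits.push(i);
            }
        }
        hits
    }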

I remember reading somewhere (on Hydrogenaudio most likely) a brief story about the development of several popular lossless audio codecs (possibly even told by the author of one of them, but I might be wrong). Essentially it’s not quite NIH syndrome but very close: somebody develops a format, another guy finds a minor flaw the original developer refuses to address (my memory is hazy, but I think things like no plugin for some player or no support for some tags were mentioned) and develops another format. The number of formats that came into existence because somebody wanted to create a format and could not keep it to himself is pretty large too.

But those days seem to be over, and maybe I’ll reverse engineer some of those old codecs for documentation reasons, as there’s very little risk that somebody would pick them up and make them widespread now. Alternatively I can rant on newer formats sucking as well. Though why wait, let’s do it now:

  • AAC sucks because of the countless extensions and attempts to bundle various coding approaches under the same name (fun fact: “xHE-AAC” is actually pronounced as “MPEG-D you-suck”);
  • AV1 sucks because of the organisational structure and their decisions during (and after) the design stage;
  • AV2 is not here yet but it sucks for the same reason;
  • Bluetooth audio codecs suck in various ways (except SBC, which is okay for the purpose), especially because they are marketed as high-definition and robust while in reality they rarely are;
  • Chinese codecs suck for being rip-offs of better-known codecs. It’s especially gross that one of them got standardised as IEEE 1857.2 AAC;
  • H.264 sucks because of countless extensions;
  • H.265 inherited some from H.264 and added the licensing situation on top of that;
  • MPEG-5 EVC sucks because it’s a Frankenstein monster constructed from bits from H.263-H.265;
  • Opus sucks for being designed for the streaming case and then used everywhere else;
  • Vector-based codecs suck because current tools are still not good enough to autovectorise complex shapes and recognize gradients.

Now back to doing nothing.

OptimFROG

Saturday, March 26th, 2016

You know, the greatest reverse engineer I know is Derek B. He’s managed to RE such codecs as Canopus HQX and Cineform HD in the most efficient manner ever—saying he’ll do it and patiently waiting until somebody else does it.

So here are some words about his favourite lossless audio codec. The most interesting thing about it is that it was actively developed in 2001–2006 and then suddenly resurrected in 2015. Also it’s one of the few non-standard codecs (i.e. not made into a standard) that have several articles written about them.

The codec actually consists of two different formats, seemingly an old one and a newer one (which looks like it supports the full range of sample types). The former is notable for having a signal reconstruction stage that uses floating-point maths (a thing you don’t see in codecs every day); the latter seems to employ various parameter reading and reconstruction methods. Coding is done with a low-precision range coder (large values are decoded in chunks of 8 or 12 bits). So nothing really interesting there.

P.S. I’m definitely not going to write a decoder for it. There are too many lossless audio codecs already, let all proprietary ones (in custom containers too) die in peace.

More about Monkey’s Audio filter changes

Sunday, March 3rd, 2013

In the previous post I gave a general overview of the codec changes; now I’m going to look more closely at how the filters changed over time.

  • 3950 — current mode with up to three layers of IIR filters
  • 3930 — simpler filters: no third layer (there was no insane compression level back then) and the difference between predicted and actual value was not used.

For the older versions there are differences in the implementations of the filters for the different compression modes.

Fast compression:

  • 3200 — order 2 adaptive prediction (i.e. the previously decoded value and an adjustable prediction value are used for prediction)
  • 0000 — almost the same but with different rules for adjustment factor updating

Normal compression:

  • 3800 — two layers of filters: order 4 adaptive prediction and order 2 afterwards
  • 3200 — the same structure, different rules for updating
  • 0000 — three layers with orders 3, 2 and 1 and different updating rules

High compression:

  • 3700 — first it tries first-order adaptive prediction with a delay of 2–16 (i.e. an element two or more positions back is used for prediction; see the sketch below) and normal mode decompression afterwards (with different decoding for 3800, of course)
  • 3600 — the same but delays are 2-13
  • 3200 — the same but delays are 2-7
  • 0000 — orders 5 and 4 and different updating rules
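
A sketch of what I mean by “delay filtering”, under the assumption that it is a first-order adaptive predictor looking d samples back; the fixed-point scale and the sign-based update here are my guesses, not the verified rules:

    // First-order adaptive prediction with a configurable delay d:
    // instead of the previous sample, the sample d positions back is
    // used for prediction (d = 2..16 is tried in the 3700 high mode).
    fn delay_filter(input: &[i32], d: usize) -> Vec<i32> {
        let mut weight = 0i32;
        let mut out = Vec::with_capacity(input.len());
        for i in 0..input.len() {
            let past = if i >= d { input[i - d] } else { 0 };
            let pred = (weight * past) >> 9;          // assumed scale
            let err = input[i] - pred;
            weight += err.signum() * past.signum();   // assumed update
            out.push(err);
        }
        out
    }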

Extra high compression:

  • 3830 — an IIR filter resembling the one used in the newer Monkey’s Audio versions
  • 3800 — some filter parameters were half of those in 3830 and there was no delay-2–8 filtering
  • 3600 — delay filtering plus high filtering (which is itself delay filtering plus normal filtering, so it can be expressed as a layer of filtering on top of fast filtering)
  • 0000 — essentially the same but with different prefiltering

Monkey’s Audio: noted differences between versions

Thursday, February 28th, 2013

While preparing to work on support for old APE versions, I finally mustered the courage to try to trace all the changes across the different versions. So here’s the list of internal versions and the changes they introduced:

  • 0000 — the reference version for all prehistoric versions. Before version 0000 it was fine, then it all got worse IMO.
  • 3320 — changes in the filters
  • 3600 — changes in the filters
  • 3700 — changes in the filters
  • 3800 — blocks per frame changed for extra high compression level; changes in the filters (yawn)
  • 3810 — frames start at byte boundaries now
  • 3820 — special codes extension (signalled by top bit of CRC set to one)
  • 3830 — filter lengths and some implementation details changed
  • 3840 — CRC calculation algorithm changed a bit
  • 3870 — significant changes in residue coding
  • 3890 — small changes in residue coding
  • 3900 — residue coding format has changed seriously.
  • 3910 — a small change in the residue coding (values wider than 16 bits can be coded now)
  • 3930 — significantly changed format introduced (both filtering and coding scheme were changed)
  • 3950 — filter format changed a bit (and the insane compression mode was added somewhere after that), blocks per frame changed too
  • 3960 — a small and compatible change in the bitstream (consuming the two last bytes or not)
  • 3980 — the file format changed a bit; the filtering process changed a bit too.
  • 3990 — the latest (known) format. Residue coding has changed.

Do you still wonder why I strongly dislike this format?

Preserving extinct formats

Wednesday, February 27th, 2013

At the request of one guy (who has provided samples as well) I shall work on supporting old Monkey’s Audio versions (before 3.95).

Why? Because the latest official version of Monkey’s Audio has dropped support for those files, because I’ve wanted to support such files for a really long time (I just didn’t have a good opportunity to do so), and because I definitely need a distraction from the Go2Insanity codec (I shan’t blog about it anymore).

Well, let’s see what the old versions of the worst-designed (known) lossless audio codec have to offer me.

Call for Intel Codecs

Monday, March 19th, 2012

I’ve spent two weekends and finally REd and wrote a decoder for Re* Audio Lossless Format. With news like these I can justifiably call it Intel Audio Lossless Format.

So, which codecs are we still lacking?

  • Intel Audio Coder — it’s quite similar to IMC (Intel Music Coder) but not identical.
  • Intel Layered Video Codec — probably it’s just an H.263 variant; the only thing I know is that the RealVideo 2 decoder was based on it (it’s mentioned in the doxygen comments for a Helix SDK I once saw somewhere on the Internet, and that supports the theory indirectly).
  • ClearVideo — a licensed fractal-based codec. It would be a rather simple DCT-based codec if not for one catch: it uses domain search to generate codes that are then used for block unpacking (in the decoder too, it seems). Maybe these patents will help?
  • Intel NGV — we’ll deal with it when it’s ready 🙂

Feel free to send any useful information about them, preferably working decoders of course.

After that we can claim full support of the Real and Intel codec families.

Why Lossless Audio Codecs generally suck

Saturday, November 27th, 2010

Why are there so many lossless audio codecs? Mike, obviously, had his thoughts on that subject, and I agree with another friend of mine who said: “it’s just too easy to create a lossless audio codec, that’s why everybody creates his own”.

Well, the theory is simple: you remove redundancy from the samples by predicting their values and code the residue. Coding is usually done with Rice codes or some combination of Rice codes and an additional coder (for zero runs or for finer coding of Rice codes). Prediction may be done in two major ways: FIR filters (some fixed prediction filters or LPC) or IIR filters (personally I call the latter “CPU eaters” for a certain property of the codecs using them). And of course they always invent their own container (I think in most cases that’s because they are too stupid to implement even minimal support for some existing container or even to think about how to fit the codec into one).
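
That whole theory fits in a couple of functions. Here is a sketch with a fixed order-2 predictor (one of the fixed predictors Shorten and FLAC actually offer) and plain Rice coding; parameter selection, channel decorrelation and framing, i.e. everything the formats differ in, is left out:

    // Fixed order-2 prediction: pred[n] = 2*x[n-1] - x[n-2].
    fn residuals(samples: &[i32]) -> Vec<i32> {
        let (mut p1, mut p2) = (0i32, 0i32); // assume zero history
        samples.iter().map(|&x| {
            let res = x - (2 * p1 - p2);
            p2 = p1;
            p1 = x;
            res
        }).collect()
    }

    // Rice code with parameter k: unary quotient, then k raw bits.
    // Bits are returned as a String purely for illustration.
    fn rice_encode(res: i32, k: u32) -> String {
        // zigzag-map the signed residue to an unsigned value first
        let u = ((res as u32) << 1) ^ ((res >> 31) as u32);
        let mut bits = "1".repeat((u >> k) as usize); // unary prefix
        bits.push('0');
        for i in (0..k).rev() {
            bits.push(if (u >> i) & 1 == 1 { '1' } else { '0' });
        }
        bits
    }

Everything beyond this point (choosing k per partition, smarter predictors, containers) is exactly where the dozens of formats diverge.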

Let’s iterate through the list of better-known lossless audio codecs.

  1. ALAC (by Apple) — nothing remarkable, they just needed to fit something FLAC-like into MOV so their players could handle it
  2. Bonk — one of the first lossless/lossy codecs; nobody cares about it anymore. Some FFmpeg developers intended to enhance it but nothing substantial was done. You can still find that “effort” as the Sonic codec in libavcodec.
  3. DTS-HD MA — it may employ both FIR and IIR prediction and uses Rice codes, but they totally screwed up the bitstream format. Not to mention there’s no openly available documentation for it.
  4. FLAC — the codec itself is good: it’s extremely fast and features good compression ratios. The only bad thing about it is that it’s too hard to seek properly in it, since there’s no proper frame header and you can just hope that that combination of bits and CRC is not a false positive.
  5. G.711.0 — have you ever heard about it? That’s its problem: nobody cares and nobody even tries to use it.
  6. MLP/Dolby TrueHD — it seems to be rather simple, and it exists solely because there was no standardised lossless audio codec for DVD.
  7. Monkey’s Audio — well, the only good thing about it is that it does not seem to be actively developed anymore.
  8. MPEG-4 ALS — the same problem: it may be standardised but nobody cares about it.
  9. MPEG-4 SLS — even worse, since you need a bit-exact AAC decoder to make it work.
  10. OggSquish — luckily it’s buried for good, but it also spawned one of the worst container formats possible, which still lives. And looking at its original source one should not wonder why.
  11. RealAudio Lossless Format — I always say it was named after its main developer, Ralph Wiggum. This codec is very special: they had to modify the RM container format specially for it. A quick look inside showed that it uses more than 800 (yes, more than eight hundred) Huffman tables, most of them with several hundred codes each (about 400 on average). That reminds me of RealVideo 4 with its above-average number of tables for context-dependent coding.
  12. Shorten — one of the first lossless audio codecs. Hardly anyone remembers it nowadays.
  13. TAK — it was originally called YALAC (yet another lossless audio codec) for a reason. Since it’s closed-source and fortunately not widespread (though some idiots use it for CD rip releases), it just annoys me from time to time, but I don’t think anyone will work on adding support for it to FFmpeg.
  14. TrueAudio (TTA) — I can’t say anything about it except that it seems to be quite widespread and it works. Looks like they’re still alive and working on TTA2, but who cares?
  15. WavPack — that’s a rather good codec with a sane bitstream format too. Looks like its author invested some time in its design. He also sent patches to implement some missing features in our decoder (thank you for that!).
  16. WMA Lossless — from what I know, it uses an IIR filter whose coefficients are found with a least mean squares method. It has two peculiarities: that filter is also used for inter-channel decorrelation, and the bitstream follows the WMA9 format, i.e. it has something like interframes and frame data starting at arbitrary points (hello, MP3!).

P.S. I still hope this post won’t encourage anybody to write yet another useless lossless audio codec.