Archive for the ‘Audio’ Category

NihAV, RealMedia, Rust and Everything Else

Saturday, October 13th, 2018

Looks like it’s been about two months since I last wrote anything about NihAV, but that does not mean I had nothing to write about. On the contrary, I’m glad to report significant progress in RealAudio support.

Previously I reported on RealVideo 3 and 4 support (and on RealVideo 1/2 and ClearVideo before that), so the video side was covered quite well, but the audio side was missing, and I went on to rectify the situation.

Now NihAV supports RealAudio 1.0 (speech codec), RealAudio 2.0 (speech codec), RealAudio DNET (a bit about it later), RealAudio 4.0 (speech codec from Sipro), RealAudio Cook (this one deserves a separate post so the next one should be about this codec) and RealAudio Lossless. So there are only three codecs missing now: RealAudio 8 (ATRAC3), RealAudio 9/10 (AAC) and RealVideo 6 (HD). Of course I’m going to add support for those as well.

This is actually a good time to implement those. As you might know, there is a Holy Trinity of Licensors: D.vX, D*lby and DT$. They are famous for their ‘nice’ licensing terms. While I’ve never had to deal with them, I’ve heard from people who did that they like licensing the single product they’re most famous for at outrageous prices (i.e. it’ll cost you an order of magnitude more per unit to use their technology than e.g. an H.264 decoder), and it’s a viral licence too: if you sell stuff not oriented at consumers then you have to force your customers into the same deal (it’s GPL—Greedy Private License) and you have to report your sales to them for obvious reasons. Funny how two of the companies were bought out already. Now let’s look at them in some detail:

  • D.vX This one is remarkable since it licensed a product it had nothing to do with (aka M$MPEG-4 adapted for non-ASF containers, and MPEG-4 ASP). At least it seems hardly relevant now unless I dig out some old movies.
  • D*lby This one is mostly known (outside cinema equipment) for a codec with several names: ATSC A/52, RealAudio DNET, ETSI TS 102 366, D*lby Digital and even something you can make out of the letters A, C and 3 (I heard rumours that it does not like its trademarks mentioned so I’d better avoid naming it directly). At least the last patents for that format have expired and support for it can be implemented freely. And it also owns a company that manages licensing of AAC. The fun fact is that the patents for MPEG-2 NBC have expired so I can implement an AAC-LC decoder just fine, but that does not stop them from licensing it. How do they do it? By refusing to license the separate parts and forcing a whole package of AAC-LC, HE-AACv1, HE-AACv2 and xHE-AAC onto you. I guess if the situation doesn’t change, in twenty years all the current stuff will expire, but they’ll still license it along with Ultra-Enhanced-Hyper-Expanded-Radically-Extended High-Efficiency AAC (which will have nothing to do with all those previous formats).
  • DT$ A company similar to D*lby and its (former?) prime competition. Also known for a single format with many extensions, making it essentially a homebrew AAC. At least it seems to be exclusively a DVD/Blu-ray format and I’m satisfied with Xine for playing the former and avoiding the latter completely.

And I want to talk a bit more about my RealAudio DNET decoder. Internally it’s called ts102366 for obvious reasons and I have just a primitive implementation for it (i.e. it seems to work and should handle multichannel fine, but no extended features). The extension for more than 5.1 channels seems to be HD-DVD/Blu-ray only so I don’t care; the format is quite rare in RealMedia, and other containers seem to carry it as a contiguous stream, so I’d need to introduce support for NAElementaryStream in the demuxing code plus a proper parser to split it into frames. Not worth the effort for me at the moment. Another fun fact is that the bitstream comes in 16-bit words that can have either endianness. In my case I just had to detect the proper endianness from the first two bytes and initialise the bitstream reader in BE or LE16 mode accordingly (again, it’s funnier with the DT$ format where you have three different bitstream reading modes and you might need two of them simultaneously in some cases; again, good thing I don’t have to care about that stuff). Also it’s still one of only two codecs I currently have that support multichannel audio (Cook is the second of course, and AAC will be the third).
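The detection itself is trivial since the AC-3 sync word is a fixed 0x0B77, so a byte-swapped stream starts with 0x77 0x0B instead. A minimal sketch (the type and function names here are mine, not NihAV’s actual API):

```rust
/// Word endianness of a DNET (AC-3) bitstream.
#[derive(Debug, PartialEq)]
enum BitstreamMode {
    BigEndian,
    LittleEndian16,
}

/// Guess the word endianness from the first two bytes of a frame.
/// The AC-3 sync word is 0x0B77, so a LE16-swapped stream starts
/// with 0x77 0x0B instead of 0x0B 0x77.
fn detect_endianness(buf: &[u8]) -> Option<BitstreamMode> {
    match (*buf.get(0)?, *buf.get(1)?) {
        (0x0B, 0x77) => Some(BitstreamMode::BigEndian),
        (0x77, 0x0B) => Some(BitstreamMode::LittleEndian16),
        _ => None,
    }
}
```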

And finally some words about Rust issues I had to deal with.

Rust as a language is more or less fine but the compiler sucks. I’ve run into several issues while writing code.

First, I had fixed arrays of Codebooks to initialise in the RALF decoder (one of 15 codebooks, another of 125 codebooks and yet another of 10×11 codebooks). If I simply use mem::uninitialized() and then fill the array up, it works fine. In debug mode. In release mode it segfaults at the end. Probably I should’ve used ptr::write() instead of assigning and it would have worked, but I gave up and used a vector instead of an array even if it’s not as efficient. Obviously it’s all my fault and not a Rust issue, but it was still weird.
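The vector workaround can be sketched like this (Codebook here is a dummy stand-in, not the real NihAV type):

```rust
// Hypothetical stand-in for the real codebook type.
struct Codebook {
    id: usize,
}

impl Codebook {
    fn new(id: usize) -> Self {
        Codebook { id }
    }
}

// Building the table as a Vec avoids having to conjure an
// initialised fixed-size array out of uninitialised memory.
fn build_codebooks() -> Vec<Codebook> {
    (0..15).map(Codebook::new).collect()
}
```

(These days mem::uninitialized() is deprecated in favour of MaybeUninit, which makes the original array approach expressible safely, but that postdates the problem described here.)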

Second, when I tried to create a generic codebook reader that would accept a table of codes of any primitive type (u8, u16 or u32), I ran into a funnier issue: the Rust compiler spewing weird errors like “cannot convert u16 to u32 because it’s not a primitive type”. Obviously it’s my mistake and it’s caught by a tool (that is still not in stable) so the developers don’t care (yes, Luca even bothered to file an issue on that). Still, I’d rather have a clearer error message in that case (e.g. “… because it’s X and not a primitive type”).

And finally, an example that is definitely rustc stupidity and not mine. Again, the developers don’t consider this to be an issue but I do (and Luca seemed to agree with me since he opened an issue about it). Essentially, there is a thing called DCE (dead code elimination): when a compiler sees that a certain block won’t be executed, it might print a warning and just check the inside code for syntactic validity. Current rustc might ignore the condition value and analyse the code inside even if it clearly makes no sense (to the point where it crashed because of that on some nightly version, see the issue for details). And while you might argue that one should not write such code, I had a quite plausible use case for it: a macro that took a 2- or 3-element array and did something to its values, so if a third value was present it had to do something special with it. But of course compilation failed because you tried to do if ARR.len() > 2 { a = ARR[2]; } with a two-element array. Yet when I tried to check whether I got the indexing correct by using large constants as indices, cargo check passed just fine—probably because const propagation did not go that deep into my code (it was in a function called from a long chain in some sub-sub-sub-module, and a standalone example errors out fine). This feels quite unpolished to me.
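For the record, the same check written with slice::get() compiles for both array sizes, since it sidesteps the out-of-bounds lint entirely; a minimal sketch:

```rust
// Runtime-checked variant of `if ARR.len() > 2 { a = ARR[2]; }`:
// slice::get() returns None instead of tripping the compile-time
// out-of-bounds error, so the same code works for 2- and 3-element
// arrays alike.
fn third_or_default(arr: &[u8]) -> u8 {
    match arr.get(2) {
        Some(&val) => val,
        None => 0,
    }
}
```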

Oh, and a final final fun thing: calls like foo.bar(foo.baz) would still fail the borrow check, probably because they can’t (I guess) formalise the function calling convention, i.e. “if a function is called then first its arguments are evaluated and copied if needed in a certain order, then the function address is evaluated and called with the arguments”. BTW you can still have a situation like this:

struct Foo { foo: u8 }
impl Foo {
    // mutates self, so each call returns a different value
    fn bar(&mut self) -> u8 { self.foo += 1; self.foo }
}

fn fee(a: u8, b: u8) {
    println!("{} {}", a, b);
}

fn main() {
    let mut foo = Foo { foo: 42 };
    fee(foo.bar(), foo.bar());
}

And if you don’t know what’s wrong here I’ll tell you: in C the argument evaluation order is unspecified, because back in the day there were very different calling conventions, so a compiler might need to evaluate arguments from last to first (to push them onto the stack in the right order) instead of the widespread evaluation in source order. So depending on the ABI the function would be called either as fee(43, 44) or as fee(44, 43).

Now I see two ways out of it: either detect the situation where the same object is mutably borrowed several times in one call and give an error, or, which is better IMO, define a formal calling convention so such code won’t be undefined. And fix the borrow checker while doing that.


Overall, Rust is a nice experience so far since it allows structuring code much better, but sometimes you hit such silly issues that they spoil all the fun.

Anyway, next post should be about RealAudio Cook, the Opus of its era.

Some Information on Micronas SC4 and VoxWare MetaVoice

Sunday, April 24th, 2016

So I’ve looked at them.

Micronas SC4 seems to be rather unusual as it brings elements of LPC to ADPCM. So it’s not just the old conventional “get nibble, multiply by step, output prediction, update index and step values”—it keeps a history of the last 6 decoded samples and predictions and uses them to calculate a new prediction value. Details might appear in the Wiki one day.
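For illustration only—the real SC4 coefficients and update rules are not documented anywhere I know of—the general idea of such an LPC-flavoured ADPCM predictor looks like this (all numbers below are invented):

```rust
const HISTORY: usize = 6;

/// Instead of predicting from the single previous sample, compute
/// the prediction as a fixed-point weighted sum over a short
/// history of decoded samples. The weights and the >> 8 scale are
/// invented for the sketch; SC4's real ones are unknown.
fn predict(history: &[i32; HISTORY], weights: &[i32; HISTORY]) -> i32 {
    let mut sum = 0i64;
    for i in 0..HISTORY {
        sum += i64::from(history[i]) * i64::from(weights[i]);
    }
    (sum >> 8) as i32
}
```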

VoxWare MetaVoice is three families of 2-3 codecs bundled under the same brand. I’ve not looked at the technical details but they seem to have lots and lots of tables with floating-point numbers (or just a few tables if you’ve looked at MetaSound first).
Here are the codecs:

  • RT24 2400bps “Real-Time” codec (ID is VOXa)
  • RT28 2844bps “Real-Time” codec (ID is VOXh)
  • RT29 2978bps “High Quality” codec (ID is VOXg)
  • VR12 1260bps Variable Rate codec (ID is VOXb)
  • VR15 1537bps Variable Rate codec (ID is VOXc)
  • SC3 3200bps “Embedded” codec (no ID)
  • SC6 6400bps “Embedded” codec (no ID)

Ask for support by grabbing j-b and demanding that it be supported. I know there are other players besides VLC but that’s the only project advertising that it “plays it all”, even on T-shirts. It’s time to be responsible for your own words. And ask for Bink2 too while you’re at it.

OptimFROG

Saturday, March 26th, 2016

You know, the greatest reverse engineer I know is Derek B. He’s managed to RE such codecs as Canopus HQX and Cineform HD in the most efficient manner ever—saying he’ll do it and patiently waiting until somebody else does it.

So here are some words about his favourite lossless audio codec. The most interesting thing about it is that it was actively developed in 2001–2006 and then suddenly resurrected in 2015. Also it’s one of the few non-standard codecs (i.e. not made into a standard) that have several articles written about them.

The codec actually consists of two different formats, seemingly an old one and a newer one (which looks like it supports the whole range of sample types). The former is notable for having a signal reconstruction stage that uses floating-point maths (a thing you don’t see in codecs every day); the latter seems to employ various parameter reading and reconstruction methods. Coding is done using a low-precision range coder (large values are decoded in chunks of 8 or 12 bits). So nothing really interesting there.

P.S. I’m definitely not going to write a decoder for it. There are too many lossless audio codecs already, let all proprietary ones (in custom containers too) die in peace.

A Call for Modern Audio Codec

Wednesday, February 11th, 2015

We need a proper audio codec to accompany state of the art video codecs, so here’s an outline of codec features that should be present:

  • audio codec should make more of its context, it should have a system of forward and backward reference frames like B-pyramid in H.264 or H.265;
  • it should employ tonal compensation with that — track the frequency changes from the references (e.g. it may be the same note continued or changing pitch);
  • time domain prediction via FIR or IIR filters;
  • flexible subdivision into subframes like binary tree;
  • raw (or non-transformed at least) coding mode for transients or noise;
  • integer only bitexact transform that passes for MDCT under bad light;
  • high-bitdepth sound support (up to 64 bits per sample).

The project name is transGhost (hopefully no Monty will be hurt by this).

And if you point out this is stupid — well, audio codecs should have the same rights as video codecs including PTS/DTS differences and employing similar coding methods.

Blåtand-Passande-X

Sunday, November 23rd, 2014

So, finally there’s a post about some codec.

It is a specialised codec from Oxford Germanium Television (all names are changed just in case) that has a 4:1 compression ratio and very niche use. It’s hard to find even a decoder for it, so this analysis was done on the ARM version of the encoder (maybe I’ll be able to RE something more useful next time, like VX).

The codec itself is rather simple: you take 4 samples from one channel, compress them, output the 16-bit result and repeat the same for the second channel. Encoding is rather simple too:

  1. feed input to 4-band QMF (with filter looking a lot like D4 wavelet to me);
  2. perform ADPCM on each band (this varies a bit for each band but it’s the same approach);
  3. generate output word (7 bits for band 0, 4 bits for band 1, 2 bits for band 2 and 3 plus a parity bit for them all).
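The output word generation in step 3 can be sketched as plain bit packing (the exact bit order and the parity placement are my guesses, not something confirmed from the encoder):

```rust
/// Pack the four quantised band codes into one 16-bit word:
/// 7 bits for band 0, 4 for band 1, 2 each for bands 2 and 3,
/// plus a parity bit over them all in the lowest position.
fn pack_word(b0: u16, b1: u16, b2: u16, b3: u16) -> u16 {
    let payload = ((b0 & 0x7F) << 9)
                | ((b1 & 0x0F) << 5)
                | ((b2 & 0x03) << 3)
                | ((b3 & 0x03) << 1);
    let parity = (payload.count_ones() & 1) as u16;
    payload | parity
}
```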

Since I have no samples of it, don’t expect a decoder from me any time soon (and I don’t have enough motivation to hook the Android encoder directly to make it produce data). Not that anyone cares about it either.

On Some Annoying Audio Codecs Family

Tuesday, July 29th, 2014

For the reasons I can’t disclose I really hate DTS codecs. For those who don’t know there are about three and a half codecs in this family:

  • DTS Core
  • DTS Core extensions (bitrate extension, two extensions for more channels and an extension for upsampling e.g. 48 kHz -> 96 kHz)
  • DTS Lossless (which might depend on core and extend/replace its channels)
  • DTS LBR (aka Express profile)

You need to be Jean-Baptiste Kempf to love these formats: DTS Core uses annoyingly large tables, DTS Lossless relies on the DTS Core part being decoded properly, and DTS LBR is a special beast that I’ll describe below. And the best part — all those formats are poorly documented (tables are missing for DTS Core, something was missing for the DTS Core X96k extension too, bitexact core reconstruction and some other things needed for a real lossless decoder implementation are not documented, and LBR is not much better either).

So, what makes DTS LBR special? Its coding mode, of course. This is a weird codec that employs MDCT (nothing special so far), codes tones separately (that’s not so common) and spreads it all among many chunks for different resolutions, which makes it “scalable” or whatever.

Nevertheless this post is not about how horrible all those codecs are (if you have ever worked with them it’s obvious, and Jean-Baptiste Kempf won’t believe it anyway), it’s about obscure relations with other codecs.

When I looked at the QDesign Music codec (unsupported by Libav currently) I found that it has a suspiciously familiar coding scheme for tones (QDesign Music 1/2 also use tone detection in MDCT frames) — I’ve seen it in DTS LBR. And indeed, it seems the same guy created some codec called LBpack that was the first to use that approach; then he was employed by QDesign and then by DTS. No wonder it looked similar.

Another piece of trivia — there was one guy working on a so-called adaptive prediction and transform scheme. Later the prototype known as APT100 was turned into DTS Core. But it looks like the same work gave birth to the lesser-known codec APT-X (which I’m currently REing, but that’s beside the point). And it’s not just the name — one codec employs QMF and ADPCM on subbands, the other one employs QMF and optional ADPCM on subbands.

All that makes one wonder whether DTS Lossless is related to some lossless codec outside DTS (not necessarily APT Lossless, but it might be; no details are known about that one). Currently I cannot name any other lossless codec that employs the same coding approach (block coding with different coding for large and small coefficients plus a non-adaptive filter). Of course such knowledge won’t change anything but it would still be interesting to know.

P.S. There are rumours that DTS LBR will be made scalable for adaptive streaming, what a fun that will be!
P.P.S. This post was written mainly to test how well new Mike’s setup works.

Voxware Codecs and Tags

Saturday, August 10th, 2013

If you look at the registry of WAV formats you can see this:


0x0069 WAVE_FORMAT_VOXWARE_BYTE_ALIGNED Voxware, Inc.
0x0070 WAVE_FORMAT_VOXWARE_AC8 Voxware, Inc.
0x0071 WAVE_FORMAT_VOXWARE_AC10 Voxware, Inc.
0x0072 WAVE_FORMAT_VOXWARE_AC16 Voxware, Inc.
0x0073 WAVE_FORMAT_VOXWARE_AC20 Voxware, Inc.
0x0074 WAVE_FORMAT_VOXWARE_RT24 Voxware, Inc.
0x0075 WAVE_FORMAT_VOXWARE_RT29 Voxware, Inc.
0x0076 WAVE_FORMAT_VOXWARE_RT29HW Voxware, Inc.
0x0077 WAVE_FORMAT_VOXWARE_VR12 Voxware, Inc.
0x0078 WAVE_FORMAT_VOXWARE_VR18 Voxware, Inc.
0x0079 WAVE_FORMAT_VOXWARE_TQ40 Voxware, Inc.
0x007A WAVE_FORMAT_VOXWARE_SC3 Voxware, Inc.
0x007B WAVE_FORMAT_VOXWARE_SC3 Voxware, Inc.
0x0081 WAVE_FORMAT_VOXWARE_TQ60 Voxware, Inc.

In reality there’s one codec with several variations (MetaSound) and a family of low-bitrate MetaVoice codecs. And it doesn’t really matter what ID you use — the codec extradata contains the real tag used to distinguish one codec from another. That’s why we can have the 0x0075 format reserved for the Voxware RT29 speech codec but used by MetaSound instead.

Here’s the list of internal tags:

  • VOXa — MetaVoice RT24, 8 kHz, mono, 2.4kbps
  • VOXb — MetaVoice VR12, 8 kHz, mono, 1.2kbps (variable bitrate)
  • VOXc — MetaVoice VR15, 8 kHz, mono, 2.4kbps (variable bitrate)
  • VOXg — MetaVoice RT29HQ, 8 kHz, mono, 2.98kbps (called high-quality for some reason)
  • VOXh — MetaVoice RT28, 8 kHz, mono, 2.8kbps
  • VOXi — MetaSound AC08, 8 kHz, mono, 8kbps
  • VOXj — MetaSound AC10, 11 kHz, mono, 10kbps
  • VOXk — MetaSound AC16, 16 kHz, mono, 16kbps
  • VOXL — MetaSound AC24, 22 kHz, mono, 24kbps
  • VOXq-VOXz — MetaSound mono and stereo, various formats
  • VX01 — MetaVoice SC3, 8 kHz, mono, 3.2kbps (embedded)
  • VX02 — MetaVoice SC6, 8 kHz, mono, 6.4kbps (embedded)
  • VX03 — MetaSound, 8 kHz, mono, 6kbps
  • VX04 — MetaSound, 8 kHz, stereo, 12kbps

So, maybe RT29 does not exist and it should be RT28 instead; obviously RT29HW is a typo for RT29HQ, and the second SC3 should be SC6 in the registry (and unfortunately there’s no information about TQ40/TQ60). But who is going to correct the WAVE formats list just because of facts?

P.S. It would be nice to receive samples for all MetaSound modes (encoder is still available and should work on older Windows systems).

A Quest Continues

Friday, June 28th, 2013

Well, after some distraction in the form of writing a semi-working On2 AVC decoder (it turned out that On2 had introduced some special modes there that differ only in the signal reconstruction stage; too lazy to RE them) and recovering after a heat wave, I’ve returned to the VoxWare ElenrilSound decoder.

I hate parametric codecs — no matter how you screw up the calculations you’ll still get some output, but it won’t be useful for debugging. At least I can use the MPlayer2 + binary codec loader + gdb combination to extract runtime information from the reference decoder.

Now I’m trying to make at least one mode work properly, 16kHz@16kbps mono (aka VOXk) for now. Stereo reconstruction might be trickier so I’ll leave it for later but at least most modes differ only by the tables they use. So (in theory) I’ll need to make at least this mode work, add tables for other modes, fix stereo decoding, look at 8kHz@6kbps mode, curse and forget about it.

Good news — bit allocation works properly and bits are read exactly as in the reference decoder. Bad news — reconstructed output is not even close to the expected one, so the work continues…

Some Information about VoxWare MetaSound

Wednesday, June 5th, 2013

So I’ve looked at the beast again. It seems to be close enough to the original TwinVQ (as in .VQF, not something that got into MPEG-4 Part 3 Subpart 4 Variation 3), so I’ll just try to document spotted differences.

Coding modes. The original TwinVQ had 9 modes, VoxWare has twice as many (and so twice as many codebooks!). One of them is very special (8 kHz at 6 kbps), with a cutoff of “high” frequencies. Also the mode explicitly signals the number of channels, so some modes are stereo-only and some are mono-only.

Bitstream format differences. The bitstream is packed LSB-first and the first byte is not skipped by the decoder. There’s an additional 2-bit variable right after the window type present in many modes (but not in 8kHz@6kbps or when short windows are used); my guess is that it has something to do with intensity stereo. The order of some parts seems to be shuffled (i.e. the original TwinVQ used the p_coef, g_coef, shape order while MetaSound uses the p_coef, shape, g_coef order).

Reconstruction. I’m not very familiar with TwinVQ myself but it looks like there are some small differences there as well. For instance, the pgain base is 25000 for mono and 20000 for stereo, and in bark decoding the scales 0.5, 0.4, 0.35 are used instead of 0.4, 0.35, 0.28 (not really sure about that bit).

Any help with the decoder is welcome — after all, the new decoder will reuse most of the current TwinVQ decoder plus new tables (it should take the title of the decoder with the biggest tables away from the DCA decoder).

Some notes about AVC

Saturday, April 13th, 2013

Of course you’ve heard about AVC, the famous audio codec from On2 (and not the codec used as a base for VP7 and VP8). Out of curiosity I tried to look at it and found out that it was as creative as video codecs.

So here’s how it looks:

  • It is 1024-point MDCT based codec.
  • It codes samples either in one window or in eight 128-sample windows. There are four modes because of that — long window, short windows and two transitional modes.
  • Windows are organised into bands, 49 bands for the long window, 12 bands for each short window (band widths are the same for all sampling frequencies though).
  • Frame data consists of: windowing mode, grouping information for short windows, some band flags, run-coded information about used codebooks, scales, coefficients.
  • Scales are stored this way: first scale is explicitly coded 7-bit value, other scales are coded as differences and restored as prev_scale + scale_diff - 60.
  • Codebook 0 means no coefficients coded, codebooks 1-8 code quartets of coefficients, codebooks 9-15 code pairs of coefficients, and codebook 15 additionally allows escape values to be coded.
  • Unscaling is performed as *dst++ = val * sqrt(val) * scale.
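The scale restoration and the unscaling formula above can be sketched like this (the function names are mine):

```rust
/// Restore scales from the explicit 7-bit first value and the
/// deltas, as described above: scale[i] = scale[i-1] + diff[i] - 60.
fn restore_scales(first: i32, diffs: &[i32]) -> Vec<i32> {
    let mut scales = vec![first];
    for &d in diffs {
        let prev = *scales.last().unwrap();
        scales.push(prev + d - 60);
    }
    scales
}

/// Unscaling as described: dst = val * sqrt(val) * scale,
/// i.e. a val^1.5 companding curve.
fn unscale(val: f32, scale: f32) -> f32 {
    val * val.sqrt() * scale
}
```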

Do not blame me if that reminds you of some other codec. Bitstream format is different after all. And the second letter of the codec name too.