Update on AAC progress

If you are interested in what's happening with my encoder, here's a bit of a progress report.

Simple encoding works. That means you can encode files with it now, play them back and recognize the sound. I've also separated the psychoacoustic model from the encoder itself, so the encoder calls the model to ask which windowing to use and what scaling/coefficients to encode.
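To make that split a bit more concrete, here is a very rough sketch of the idea; all names (psy_analyze, PsyDecision and so on) are illustrative placeholders, not the actual code:

    /* A very rough sketch of the encoder/model split. All names here are
     * illustrative placeholders, not the actual interface. */
    typedef enum {
        WINDOW_LONG,        /* one simple 1024-point window     */
        WINDOW_LONG_START,  /* transition window: long -> short */
        WINDOW_EIGHT_SHORT, /* sequence of eight short windows  */
        WINDOW_LONG_STOP    /* transition window: short -> long */
    } WindowSequence;

    typedef struct PsyDecision {
        WindowSequence window;    /* which windowing to use for this frame     */
        int scalefactors[64];     /* per-band scaling the encoder should apply */
        int band_keep[64];        /* whether a band's coefficients are encoded */
    } PsyDecision;

    /* the encoder asks the model what to do for a frame of input samples... */
    PsyDecision psy_analyze(const float *samples, int nb_samples);

    /* ...and then windows, transforms, quantizes and writes the bitstream
     * according to that decision */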
Can I say this concludes the task for this Summer of Code? Technically yes, but there are a few points I ought to finish.

Encoder side:

  • MDCT for the cases other than the simple 1024-point window, i.e. the eight-short-windows sequence and the two transition windows (there's a sketch of the short-window case after this list)
  • correct bitstream writing for the eight-short-windows (8SS) case
  • probably multichannel encoding (though it's useless until we have a defined multichannel audio API)
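For the eight-short-windows case the frame is not one big 2048-to-1024 MDCT but eight overlapping 256-to-128 transforms. A minimal sketch, assuming a generic mdct() helper (a placeholder, not a real library call) and with the exact placement of the short blocks inside the overlap buffer simplified:

    #define FRAME_LEN 1024
    #define SHORT_LEN  128

    /* placeholder: n windowed input samples -> n/2 spectral coefficients */
    void mdct(float *coeffs, const float *windowed, int n);

    /* Eight overlapping 256 -> 128 MDCTs instead of one 2048 -> 1024 MDCT. */
    void transform_eight_short(float *coeffs,          /* 1024 output coefficients */
                               const float *audio,     /* input samples            */
                               const float *short_win) /* 256-point short window   */
    {
        float buf[2 * SHORT_LEN];

        for (int w = 0; w < 8; w++) {
            const float *src = audio + w * SHORT_LEN;  /* consecutive blocks, 50% overlap */
            for (int i = 0; i < 2 * SHORT_LEN; i++)
                buf[i] = src[i] * short_win[i];
            mdct(coeffs + w * SHORT_LEN, buf, 2 * SHORT_LEN);
        }
    }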

Psychoacoustic model(s) side:

  • good psychoacoustic model 🙂
  • a quantizer which allows rate control (see the sketch after this list)
  • something else?
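In practice "a quantizer which allows rate control" means something like this: AAC quantizes each coefficient roughly as round((|x| * 2^(-sf/4))^(3/4)), so the simplest, crude form of rate control is an outer loop that raises the common scalefactor until the coded frame fits the bit budget. A minimal sketch, with count_bits() as a made-up placeholder for the entropy coder:

    #include <math.h>

    /* AAC's non-uniform quantizer: q = round((|x| * 2^(-sf/4))^(3/4)), signed */
    static int quantize(float coeff, int scalefactor)
    {
        float q = powf(fabsf(coeff) * powf(2.0f, -scalefactor / 4.0f), 0.75f);
        int   v = (int)(q + 0.4054f);   /* rounding offset commonly used in AAC encoders */
        return coeff < 0.0f ? -v : v;
    }

    /* placeholder: bits the entropy coder would spend on these quantized values */
    int count_bits(const int *quant, int n);

    /* Crude rate control: raise the common scalefactor (coarser quantization)
     * until the coded frame fits into the bit budget. */
    int rate_control(int *quant, const float *coeffs, int n, int bit_budget)
    {
        int sf = 0;
        for (;;) {
            for (int i = 0; i < n; i++)
                quant[i] = quantize(coeffs[i], sf);
            if (count_bits(quant, n) <= bit_budget || sf >= 255)
                return sf;
            sf++;
        }
    }

A real encoder would of course distribute scalefactors per band based on what the psychoacoustic model allows, not use a single global one, but the loop structure is the same.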

I can add more models after the work is complete too and probably tune them for my ears and the music I like to listen to. Reading the papers I've got on psychoacoustic models should help.

Back to work then.
