I’ve more or less completed the nihav::io module, so let’s look at it (or not, it’s your choice).
There are four components there: bytestream reading, bitstream reading, generic integer code reading, and codebook support for the bitstream reader. Obviously there is no writing functionality yet, but I don’t need it now and it can be added later when (or if) needed.
Bytestream I/O
This component has two essential parts. The first part is a generic interface for reading bytes and buffers (I don’t like the Read trait much because it lacks some functions I need anyway and, well, look at the project name).
pub trait ByteIO {
    fn read_buf(&mut self, buf: &mut [u8]) -> ByteIOResult<usize>;
    fn peek_buf(&mut self, buf: &mut [u8]) -> ByteIOResult<usize>;
    fn read_byte(&mut self) -> ByteIOResult<u8>;
    fn peek_byte(&mut self) -> ByteIOResult<u8>;
    fn write_buf(&mut self, buf: &[u8]) -> ByteIOResult<usize>;
    fn tell(&mut self) -> u64;
    fn seek(&mut self, pos: SeekFrom) -> ByteIOResult<u64>;
    fn is_eof(&mut self) -> bool;
    fn is_seekable(&mut self) -> bool;
}
As you can see it offers just basic functionality: reading or peeking a byte or a buffer, plus seeking. But it allows having the same interface for different data sources. I have MemoryReader and FileReader implemented, so reading from both a file and a memory buffer can be done the same way.
And the actual bytestream reader has this interface (implementation omitted for clarity):
impl<'a> ByteReader<'a> {
    pub fn new(io: &'a mut ByteIO) -> ByteReader
    pub fn read_buf(&mut self, buf: &mut [u8]) -> ByteIOResult<usize>
    pub fn peek_buf(&mut self, buf: &mut [u8]) -> ByteIOResult<usize>
    pub fn read_byte(&mut self) -> ByteIOResult<u8>
    pub fn peek_byte(&mut self) -> ByteIOResult<u8>
    pub fn read_u16be(&mut self) -> ByteIOResult<u16>
    pub fn peek_u16be(&mut self) -> ByteIOResult<u16>
    pub fn read_u24be(&mut self) -> ByteIOResult<u32>
    pub fn peek_u24be(&mut self) -> ByteIOResult<u32>
    pub fn read_u32be(&mut self) -> ByteIOResult<u32>
    pub fn peek_u32be(&mut self) -> ByteIOResult<u32>
    pub fn read_u64be(&mut self) -> ByteIOResult<u64>
    pub fn peek_u64be(&mut self) -> ByteIOResult<u64>
    pub fn read_u16le(&mut self) -> ByteIOResult<u16>
    pub fn peek_u16le(&mut self) -> ByteIOResult<u16>
    pub fn read_u24le(&mut self) -> ByteIOResult<u32>
    pub fn peek_u24le(&mut self) -> ByteIOResult<u32>
    pub fn read_u32le(&mut self) -> ByteIOResult<u32>
    pub fn peek_u32le(&mut self) -> ByteIOResult<u32>
    pub fn read_u64le(&mut self) -> ByteIOResult<u64>
    pub fn peek_u64le(&mut self) -> ByteIOResult<u64>
    pub fn read_skip(&mut self, len: usize) -> ByteIOResult<()>
    pub fn tell(&mut self) -> u64
    pub fn seek(&mut self, pos: SeekFrom) -> ByteIOResult<u64>
    pub fn is_eof(&mut self) -> bool
}
Essentially it allows reading or peeking multibyte integers regardless of the source (memory or file). I thought about adding functions for signed integers and floats but decided to postpone that until I really need them.
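To show how such multibyte readers typically compose from single-byte reads, here is a minimal sketch of my own (the names MemReader, read_u16be and read_u16le are mine, not the nihav API, and error handling is reduced to Option):

```rust
// Minimal sketch: multibyte reads built on top of a byte-level reader
// over a memory buffer. Not the actual nihav code.
struct MemReader<'a> {
    buf: &'a [u8],
    pos: usize,
}

impl<'a> MemReader<'a> {
    fn new(buf: &'a [u8]) -> Self {
        Self { buf, pos: 0 }
    }
    fn read_byte(&mut self) -> Option<u8> {
        let b = *self.buf.get(self.pos)?;
        self.pos += 1;
        Some(b)
    }
    // Big-endian: the most significant byte comes first in the stream.
    fn read_u16be(&mut self) -> Option<u16> {
        let hi = self.read_byte()? as u16;
        let lo = self.read_byte()? as u16;
        Some((hi << 8) | lo)
    }
    // Little-endian: the least significant byte comes first.
    fn read_u16le(&mut self) -> Option<u16> {
        let lo = self.read_byte()? as u16;
        let hi = self.read_byte()? as u16;
        Some((hi << 8) | lo)
    }
}
```

The wider read_u24/u32/u64 variants follow the same pattern, just with more byte fetches.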
Bitstream reader
I’ve decided to try the old boring approach with the bitstream reader being configurable at creation time; here’s the public interface:
pub enum BitReaderMode {
    BE,
    LE,
    LE16MSB,
    LE32MSB,
}

impl<'a> BitReader<'a> {
    pub fn new(src: &'a [u8], size: usize, mode: BitReaderMode) -> Self
    pub fn tell(&self) -> usize
    pub fn left(&self) -> isize
    pub fn read(&mut self, nbits: u8) -> BitReaderResult<u32>
    pub fn read_s(&mut self, nbits: u8) -> BitReaderResult<i32>
    pub fn peek(&mut self, nbits: u8) -> u32
    pub fn skip(&mut self, nbits: u32) -> BitReaderResult<()>
    pub fn seek(&mut self, nbits: u32) -> BitReaderResult<()>
}
Essentially there are two main modes of reading: most significant bit first or least significant bit first. Also, for MSB mode you sometimes have bitstreams packed into little-endian words, so I deal with those too. The bit reader keeps an internal 64-bit cache and refills it or reads from it depending on the mode. And yes, I don’t care much about speed; it was just the easiest way to implement it.
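To illustrate the cache-and-refill idea for the MSB-first mode, here is a hypothetical minimal reader of my own (not the nihav implementation; it skips error handling and the LE/LE16MSB/LE32MSB modes entirely):

```rust
// Sketch of an MSB-first bit reader with a 64-bit cache.
// Bits are kept left-aligned at the top of the cache, so reading
// nbits is a shift down from the top followed by a shift left.
struct MsbBitReader<'a> {
    src: &'a [u8],
    pos: usize, // next byte to load into the cache
    cache: u64, // valid bits occupy the top of this word
    bits: u32,  // number of valid bits currently cached
}

impl<'a> MsbBitReader<'a> {
    fn new(src: &'a [u8]) -> Self {
        Self { src, pos: 0, cache: 0, bits: 0 }
    }
    fn refill(&mut self) {
        // Top up the cache one byte at a time while there is room.
        while self.bits <= 56 && self.pos < self.src.len() {
            self.cache |= (self.src[self.pos] as u64) << (56 - self.bits);
            self.pos += 1;
            self.bits += 8;
        }
    }
    // Assumes 0 < nbits <= bits available; a real reader would check.
    fn read(&mut self, nbits: u32) -> u32 {
        self.refill();
        let val = (self.cache >> (64 - nbits)) as u32;
        self.cache <<= nbits;
        self.bits -= nbits;
        val
    }
}
```

An LSB-first variant would keep valid bits at the bottom of the cache and mask them off instead of shifting from the top; the word-packed MSB modes only change how refill() loads bytes.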
Integer codes reading
This component deals with reading various generic unbounded codes for integers. Here’s the interface:
pub enum UintCodeType {
    UnaryOnes,
    UnaryZeroes,
    Unary012,
    Unary210,
    LimitedUnary(u32, u32),
    Golomb(u8),
    Rice(u8),
    Gamma,
    GammaP,
}

pub enum IntCodeType {
    Golomb(u8),
    Rice(u8),
    Gamma,
    GammaP,
}

pub trait IntCodeReader {
    fn read_code(&mut self, t: UintCodeType) -> BitReaderResult<u32>;
    fn read_code_signed(&mut self, t: IntCodeType) -> BitReaderResult<i32>;
}
Obviously I have this trait implemented for BitReader, and as a result I can read a Golomb(5) code just like this:
let mut br = BitReader::new(src, src.len(), BitReaderMode::BE);
let value = br.read_code(UintCodeType::Golomb(5))?;
If you wonder about the names, or don’t recognize some of them, I took them from the proper data compression terminology. I wrote a post about it some time ago.
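For a flavour of what reading such codes involves, here is my own sketch of unary and Rice code reading over a plain iterator of bits (the function names are mine, not the nihav API; a real reader would work on the bit reader above and report errors):

```rust
// Unary code, "ones" flavour: count 1s until the terminating 0.
// 0 -> 0, 10 -> 1, 110 -> 2, ...
fn read_unary_ones<I: Iterator<Item = bool>>(bits: &mut I) -> u32 {
    let mut v = 0;
    while bits.next() == Some(true) {
        v += 1;
    }
    v
}

// Rice(k): a unary quotient followed by k raw bits of remainder,
// so the decoded value is (quotient << k) | remainder.
fn read_rice<I: Iterator<Item = bool>>(bits: &mut I, k: u8) -> u32 {
    let q = read_unary_ones(bits);
    let mut r = 0;
    for _ in 0..k {
        r = (r << 1) | (bits.next() == Some(true)) as u32;
    }
    (q << k) | r
}
```

Golomb codes generalise this to an arbitrary divisor instead of a power of two, and the gamma codes prefix the value with its length in unary; they all reduce to the same unary-plus-raw-bits building blocks.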
Codebooks support
This component supports codebooks (i.e. sets of codes, most likely of different lengths, corresponding to certain values).
Again, I settled on that name because it reflects what it is best. “Variable-length codes” is too generic a term: the codes handled by the previous component are variable-length too. Huffman codes are defined as optimal codes for a given set of symbols with certain probabilities, and the provided codebooks are most likely not optimal for coding the particular stream but rather for some spherical set of streams in a vacuum (and sometimes those codebooks are not optimal codes at all, since they don’t cover all possible input sequences of bits). And “codebook” in this sense seems the most appropriate: you have certain sequences of bits and you map them to certain output (it can be one integer, it can be a special triplet <run of zeroes>+<coefficient value>+<end of block flag>, or anything else). Of course there may be confusion between these codebooks and vector quantiser codebooks (i.e. several pixel values corresponding to a single codebook index) but I think I’ll manage.
Here’s the interface:
pub enum CodebookMode {
    MSB,
    LSB,
}

pub struct Codebook<S> {
    table: Vec<u32>,
    syms:  Vec<S>,
    lut_bits: u8,
}

pub trait CodebookDescReader<S> {
    fn bits(&mut self, idx: usize) -> u8;
    fn code(&mut self, idx: usize) -> u32;
    fn sym (&mut self, idx: usize) -> S;
    fn len (&mut self) -> usize;
}

pub trait CodebookReader<S> {
    fn read_cb(&mut self, cb: &Codebook<S>) -> CodebookResult<S>;
}

const MAX_LUT_BITS: u8 = 10;

impl<S: Copy> Codebook<S> {
    pub fn new(cb: &mut CodebookDescReader<S>, mode: CodebookMode) -> CodebookResult<Self>
}
It’s rather trivial code that takes the provided codebook description and generates a multi-level lookup table. The only interesting part is that it uses the CodebookDescReader trait for providing the actual code descriptions. I have two predefined description generators, FullCodebookDescReader and ShortCodebookDescReader: one of them takes a table of codes with symbols, the other uses the entry index as the symbol. When the need arises I can easily add generators that build codes and symbols just from an array of code lengths (this is the way codes are stored in deflate, JPEG and other formats). Or from frequencies for a Huffman tree. Or whatever.
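The core trick behind such lookup tables can be sketched with a single-level table (my own illustration, not the nihav code, which uses multi-level tables for codes longer than MAX_LUT_BITS): every code of length bits fills 2^(LUT_BITS - bits) adjacent entries, so peeking LUT_BITS bits from the stream indexes the table directly.

```rust
// Hypothetical single-level LUT build for an MSB-mode codebook.
// Each entry stores the decoded symbol plus how many bits to consume;
// entries left as None correspond to invalid bit sequences.
const LUT_BITS: u8 = 4;

fn build_lut(desc: &[(u32, u8, i8)]) -> Vec<Option<(i8, u8)>> {
    let mut lut = vec![None; 1usize << LUT_BITS];
    for &(code, bits, sym) in desc {
        // A code of length `bits` owns a contiguous run of entries:
        // all indices whose top `bits` bits equal the code.
        let fill = 1usize << (LUT_BITS - bits);
        let base = (code as usize) << (LUT_BITS - bits);
        for i in 0..fill {
            lut[base + i] = Some((sym, bits));
        }
    }
    lut
}
```

Decoding is then: peek LUT_BITS bits, index the table, emit the symbol and skip only the stored number of bits.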
Here’s some self-test code:
const BITS: [u8; 2] = [0b01011011, 0b10111100];
let cb_desc: Vec<FullCodebookDesc<i8>> = vec!(
    FullCodebookDesc { code: 0b0,    bits: 1, sym:  16 },
    FullCodebookDesc { code: 0b10,   bits: 2, sym:  -3 },
    FullCodebookDesc { code: 0b110,  bits: 3, sym:  42 },
    FullCodebookDesc { code: 0b1110, bits: 4, sym: -42 }
);
let buf = &BITS;
let mut br = BitReader::new(buf, buf.len(), BitReaderMode::BE);
let mut cfr = FullCodebookDescReader::new(cb_desc);
let cb = Codebook::new(&mut cfr, CodebookMode::MSB).unwrap();
assert_eq!(br.read_cb(&cb).unwrap(),  16);
assert_eq!(br.read_cb(&cb).unwrap(),  -3);
assert_eq!(br.read_cb(&cb).unwrap(),  42);
assert_eq!(br.read_cb(&cb).unwrap(), -42);
Now that this part seems to be working and good enough to use, I shall move on to other things like demuxing and decoding, and that means shaving a pen full of piglets: designing formats for codec information, streams, packets and such. I have some of it done already but there’s even more fun ahead!
How NIHing something can be fun at all?
Also the bitstream reading implementation code is missing; how is it done at all?
Tao of Programming — koan 3.3 should give you the answer.
And bitstream reading is trivial, everybody who’s REd more than one non-trivial codec has seen several variants already.
Actually I could write a post on various bitstream reading techniques and why my choice was what it was, but who would want to read that?
I’ve actually been writing my own I/O library. I had no clue what I was doing when I started (literally, I learned how to program with it), so it’s undergone a few paradigm shifts, but nowadays it uses a source-to-buffer-to-source paradigm: basically everything internally is a BitBuffer, and files/sockets/packets are input/output sources that are loaded into their own buffers.
The writing side is generally there, but it has not been tested, so watch out for that.
I keep meaning to get around to fixing the write side of things, but I keep getting distracted.
Would be nice to have that post about all the possible bitstream readers from you 🙂