I was asked to look at it and since I have nothing better to do, I finally did.
OSQ was a format used by WaveLab mastering software for storing lossless compressed audio (they seem to have dropped support for it in the more recent versions). Internally it's a clone of the ages-old Shorten, which does not even have an LPC mode, only fixed predictors.
Of course the format differs somewhat and there are more prediction modes, but they still use at most three previously decoded samples, and the filter coefficients are constant. There are other original additions, such as stereo decorrelation, adaptive Rice coding with the parameter selected based on previous statistics, and some floating-point calculations that look more complex than they should be. Overall, though, it looks like somebody took Shorten, decided not to do the hard part (LPC), maybe in order to make files save faster, and tried to compensate for it with a couple of simple tricks.
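To illustrate the kind of scheme described above, here is a minimal sketch of fixed prediction from up to three previous samples plus a statistics-driven Rice parameter choice. The predictor orders are the classic Shorten/FLAC fixed ones and the parameter selection is a generic running-mean heuristic — not OSQ's exact algorithm, just the general idea:

```python
def fixed_predict(history, order):
    """Predict the next sample from up to three previous samples
    using constant-coefficient (fixed) predictors, Shorten-style."""
    s1, s2, s3 = history[-1], history[-2], history[-3]
    if order == 0:
        return 0
    if order == 1:
        return s1
    if order == 2:
        return 2 * s1 - s2
    # order 3: third-order polynomial predictor
    return 3 * s1 - 3 * s2 + s3

def rice_param_from_stats(abs_sum, count):
    """Pick a Rice parameter k so that 2**k roughly matches the mean
    residual magnitude seen so far (a common adaptation heuristic)."""
    k = 0
    while (count << (k + 1)) < abs_sum:
        k += 1
    return k

# Toy usage: a linearly rising signal is predicted exactly by order 2,
# so the residual to be Rice-coded is zero.
hist = [10, 12, 14]
residual = 16 - fixed_predict(hist, 2)
print(residual)  # 0
```

The point is that the encoder never solves for filter coefficients as LPC would; it only picks which of a handful of constant predictors gives the smallest residuals.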
I’ll probably document it at my leisure but overall it’s a rather silly format.
It appears to support more bit depths and channel layouts; I need to encode more files to check. And it's about the original audio, not about an original encoding algorithm. What is slowing me down is the rather peculiar compiler-optimized binary, which seems to use self-invented function calling conventions.
Sure, we have somewhat different goals, as I mainly study the codecs and you mainly write decoders for them.
And the calling convention looks like rather normal C++ to me (class pointer in ECX, the arguments on the stack).
That is for the older binary, which is bigger than the newer one. I use the newer one because I managed to recover RTTI/vftables with Ghidra. And in this one the decompiler eats a single argument from a function.