As most of you don’t know and don’t care, modern advanced video codecs may use a special prediction mode called “chroma from luma” where, as is obvious from the name, the chroma components are reconstructed from the luma using some coefficients. And what do you know, I’ve found a codec that used this approach back in 1997.
So there’s a French company called Kalisto Entertainment, and back in the day it developed a codec for the cutscenes in some of its games (at least Dark Earth and Nightmare Creatures). 15-bit RGB video is split into three components and each is coded separately using a simple LZ77-like method (i.e. it’s either RLE mode, or a copy with an offset from the current or previous frame). The twist is that those components are split into tiles (usually 20×20 ones) and each tile has a coding mode and two sets of scale/offset coefficients: for each tile one of the RGB components is selected as the base one and the two others are coded as differences from the scaled (and offset) base value.
So one component plane contains the base component for each tile (which one it is may differ from tile to tile) and the other two contain the differences for the predicted non-base components (which, at least in theory, should be mostly zeroes and thus compress better). So when some people wonder if it’s time for video codecs to perform optimal component decorrelation on a per-frame basis, here’s a practical codec from the last century that did it per-tile.
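For those who prefer code to words, here’s a rough sketch of how such per-tile reconstruction might look. Mind you, the names, the fixed-point precision of the scale factor and the exact order of operations are my guesses for illustration, not the actual Kalisto bitstream semantics:

```rust
// Assumed fixed-point precision of the per-tile scale factor (a guess).
const FIX_SHIFT: u32 = 6;

// Per-tile decorrelation parameters (names and layout are hypothetical).
struct Tile {
    base: usize,       // which of the three components serves as the base
    scale: [i32; 2],   // scale for the two predicted components
    offset: [i32; 2],  // offset for the two predicted components
}

// Reconstruct one pixel of a tile from the three decoded planes:
// planes[tile.base] holds the actual base value, the other two planes
// hold differences from the scaled-and-offset prediction.
fn reconstruct_pixel(tile: &Tile, planes: [i32; 3]) -> [i32; 3] {
    let base = planes[tile.base];
    let mut out = [0i32; 3];
    out[tile.base] = base;

    // the two non-base components, in plane order
    let others: Vec<usize> = (0..3).filter(|&c| c != tile.base).collect();
    for (i, &c) in others.iter().enumerate() {
        let pred = ((base * tile.scale[i]) >> FIX_SHIFT) + tile.offset[i];
        // stored value is the difference from the prediction;
        // clamp to 5 bits since the source is 15-bit RGB
        out[c] = (planes[c] + pred).clamp(0, 31);
    }
    out
}
```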