You'll probably be waiting a long time, then. Writing codecs is a genuinely hard problem, exactly the kind of thing AI completely falls over on when tasked with it.
Compression is actually a very good use case for neural networks (i.e., don't have an LLM develop a codec; train a neural network to do the compression itself).
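For a sense of what that looks like, here's a minimal sketch (my toy, assuming PyTorch; the block and code sizes are arbitrary): an autoencoder whose bottleneck is the compressed representation. Real learned codecs additionally quantize the bottleneck and entropy-code it against a learned prior.

```python
# Toy learned codec (illustrative only): an autoencoder whose bottleneck
# is the compressed representation.
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    def __init__(self, block=64, code=8):  # 64 samples in, 8 floats out: 8x smaller
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(block, 32), nn.ReLU(), nn.Linear(32, code))
        self.decode = nn.Sequential(nn.Linear(code, 32), nn.ReLU(), nn.Linear(32, block))

    def forward(self, x):
        return self.decode(self.encode(x))

codec = TinyCodec()
x = torch.randn(16, 64)                     # a batch of 16 signal blocks
loss = nn.functional.mse_loss(codec(x), x)  # train to reconstruct the input
loss.backward()
```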
Considering AI is good at predicting things, and prediction is largely what compression is built on, I could see machine learning techniques being useful as part of a codec (which is a completely different thing from asking ChatGPT to write you one).
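To make the prediction point concrete, here's a toy sketch of mine (zlib stands in for a proper entropy coder): the better the predictor, the smaller the residuals, and small residuals entropy-code into fewer bytes. This is essentially the idea behind FLAC's linear prediction.

```python
# Prediction *is* compression: a good predictor leaves small residuals,
# and small residuals compress into fewer bytes.
import zlib
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(16_000)
# Fake "audio": a slowly varying tone plus a little noise.
signal = (1000 * np.sin(2 * np.pi * t / 400) + rng.normal(0, 5, t.size)).astype(np.int16)

raw = signal.tobytes()                          # predictor: none
residual = np.diff(signal, prepend=signal[:1])  # predictor: "same as last sample"

print("raw:     ", len(zlib.compress(raw)), "bytes")
print("residual:", len(zlib.compress(residual.tobytes())), "bytes")  # much smaller
```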
Yeah, in the future we might use some sort of learned spatial + temporal representation to compress video, and the same for audio. It's easier to imagine for audio: instead of storing the audio samples, we store text plus some feature vectors, and a model uses those to "render" the samples back.
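Back-of-the-envelope arithmetic suggests why that's attractive (all figures below are my illustrative assumptions, not measurements):

```python
# Raw PCM vs. "text + feature vectors" rendered by a local model.
pcm_bps = 48_000 * 16        # raw 16-bit, 48 kHz mono PCM: 768 kbit/s
text_bps = 2.5 * 6 * 8       # ~2.5 words/s at ~6 bytes/word: 120 bit/s
feature_bps = 50 * 16 * 32   # assumed 16-dim float32 prosody vector at 50 Hz: 25.6 kbit/s
print(f"PCM: {pcm_bps / 1000:.0f} kbit/s")
print(f"text + features: {(text_bps + feature_bps) / 1000:.1f} kbit/s")
```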
It's not absurd to think that you could send a model of your voice to the receiving party and then have your audio call essentially be encoded text that gets run through the voice generator on the local machine.
AI video could mean that essential elements are preserved (the actors?) while other elements are generated locally. Hell, digital doubles for actors could also mean only their movements get transmitted, essentially just sending the mo-cap data. The future is gonna be weird.
Yeah, I brought that up here and got some interesting responses:
> It would be interesting to see how far you could get using deepfakes as a method for video call compression.
> Train a model locally ahead of time and upload it to a server, then whenever you have a call scheduled the model is downloaded in advance by the other participants.
> Now, instead of having to send video data, you only have to send a representation of the facial movements so that the recipients can render it on their end. When the tech is a little further along, it should be possible to get good quality video using only a fraction of the bandwidth.
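A sketch of what the wire format might look like (entirely hypothetical: `extract_landmarks` stands in for a real face tracker such as mediapipe, and `render` for the pretrained talking-head model the recipient downloaded in advance):

```python
# Hypothetical wire format for "send facial movements, render locally".
from dataclasses import dataclass
import numpy as np

@dataclass
class FramePacket:
    timestamp_ms: int
    landmarks: np.ndarray        # (68, 2) float32 face keypoints

def extract_landmarks(frame: np.ndarray) -> np.ndarray:
    return np.zeros((68, 2), dtype=np.float32)      # placeholder: run a face tracker here

def render(packet: FramePacket, model: object) -> np.ndarray:
    return np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder: drive the downloaded model

# 68 points * 2 coords * 4 bytes * 30 fps ~= 16 KB/s, versus hundreds of
# KB/s for conventionally compressed webcam video.
print(68 * 2 * 4 * 30, "bytes/s of landmark data")
```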
In the future, our phone contacts will store name, address, phone number, and voice model. (The messed-up part will be that the user doesn't necessarily have to send their model; it could be crafted from previous calls.)
You could probably also transmit a low-res grayscale version of the video to "map" any local reproduction to. Kinda like how an artist could reasonably reproduce a low-resolution image if they knew who the subject was.
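Rough sketch of that guide-image idea (my numbers; the 8x downscale is an arbitrary choice): ship a tiny grayscale thumbnail alongside the motion stream and let the local model condition on it.

```python
# A tiny grayscale thumbnail as a guide for the locally generated video.
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
gray = frame.mean(axis=2).astype(np.uint8)       # drop color
guide = gray[::8, ::8]                           # 60x80 thumbnail the model conditions on
print(frame.nbytes, "->", guide.nbytes, "bytes per frame, before entropy coding")
```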