I didn't mention h264 for a reason. It's a codec that was developed 25 years ago.
The complexity of video decoders has been going up exponentially and AV2 is no exception. Throwing more tools (and thus resources) at it is the only way to increase compression ratio.
Take AV1. It has CTBs that are 128x128 pixels. For intra prediction, you need to keep track of 256 neighboring pixels above the current CTB and 128 to the left. And you need to do this for YUV. For 420, that means you need to keep track of (256+128 + 2x(128+64)) = 768 pixels. At 8 bits per component, that's 8x768=6144 flip-flops. That's just for neighboring pixel tracking, which is only a tiny fraction of what you need to do, a few % of the total resources.
These neighbor tracking flip-flops are followed by a gigantic multiplexer, which is incredibly inefficient on FPGAs and it devours LUTs and routing resources.
A Lattice ECP5-85 has 85K LUTs. The FFs alone consume roughly 7% of the FPGA. The multiplexer probably eats another 20%, and that's a conservative estimate. You haven't even started to calculate anything and your FPGA is already almost 30% full.
FWIW, for h264, the equivalent of that 128x128 pixel CTB is 16x16 pixel MB. Instead of 768 neighboring pixels, you only need 16+32+2*(8+16)=96 pixels. See the difference? AV2 retains the 128x128 CTB size of AV1 and if it adds something like MRL of h.266, the number of neighbors will more than double.
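The arithmetic above reduces to 6x the block width for 4:2:0 (above row is twice the width, plus the left column, for luma and two half-resolution chroma planes). A quick sketch to sanity-check both numbers:

```python
def neighbor_pixels(ctb: int) -> int:
    """Neighbor pixels tracked for intra prediction of one block, 4:2:0.

    The above row is 2x the block width (it extends over the top-right),
    plus one column to the left; each chroma plane is half-width/half-height.
    """
    luma = 2 * ctb + ctb                # 2*width above + height to the left
    chroma = 2 * (ctb // 2) + ctb // 2  # same shape at half resolution
    return luma + 2 * chroma            # one luma + two chroma planes

BITS = 8
print(neighbor_pixels(16))              # h264 16x16 MB   -> 96
print(neighbor_pixels(128))             # AV1 128x128 CTB -> 768
print(BITS * neighbor_pixels(128))      # flip-flops      -> 6144
```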
H264 is child's play compared to later codecs. It only has a handful of angular prediction modes, barely any pre-angular filtering, no chroma-from-luma prediction, and only a weak deblocking filter with no additional in-loop filters. It has only one DCT mode. The coding tree is trivial too. Its entropy decoder and syntax processing are low in complexity compared to later codecs. It doesn't have intra-block copy. Etc. etc.
Working on a hardware video decoder is my day job. I know exactly what I'm talking about, and, with all due respect, you clearly do not.
Hmmm, so you're ignoring the crux of my argument because it's convenient for you: h264 is comfortably small, AV1 is maybe too big, so something between them might work. Anything related to why AV1 won't fit is beside the point. They know that and are improving on it.
Your argument about the large number of flops is odd. You would only store data that way if you needed everything in the same cycle. You say there's a multiplexer after it; data storage plus a multiplexer is just a memory. You could use a BRAM or LUTRAM, which would cut that down dramatically, assuming the later processing (which you haven't defined) doesn't need it all at once, and that's a big if. And even then, that's for AV1, which isn't AV2 and may change.
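To put numbers on the memory suggestion: an ECP5 embedded block RAM (EBR) is 18 kbit, so the 6144 bits of AV1 neighbor state from earlier in the thread would fit in a single block instead of ~6K flip-flops, assuming you really don't need every pixel in the same cycle. A back-of-envelope sketch:

```python
NEIGHBOR_BITS = 768 * 8      # AV1 neighbor pixels x 8 bits, from the thread
ECP5_EBR_BITS = 18 * 1024    # one Lattice ECP5 EBR block is 18 kbit

# Ceiling division: how many EBR blocks the neighbor store would occupy.
brams_needed = -(-NEIGHBOR_BITS // ECP5_EBR_BITS)
print(brams_needed)          # -> 1: the whole neighbor store fits in one EBR
```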
I’m ignoring h264 because it’s irrelevant in a discussion about AV2, for the reasons that I already brought up in my earlier reply. It’s like having a discussion about a Zen CPU and bringing up the 8088 architecture.
Let’s cut to the chase. AV2 will not be smaller than AV1 at all. The linked article doesn’t say that. The slides don’t say that either.
The only thing that could make somebody think it's smaller is the claim that all tools have been validated for hardware efficiency. The goal of that process is to make sure none of the new tools makes the HW explode unreasonably in size, not to make the codec smaller than before. Everyone knows that's impossible if you want to increase the compression ratio.
Let's look at a couple of those new tools. MRLS adds multiple reference lines, just as I expected it would. Boom! Much more complexity for neighbor handling. I also see more directions (more angles); that adds HW too. The article mentions improved chroma from luma. Not unexpected, because h.266 already has that and AV2 needs to compete against it. AV1 has a basic 2x2 block filter; I expect AV2 to have a more complex FIR filter, which makes things significantly harder for a HW implementation.
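Rough scaling if multiple reference lines land in AV2: treat each extra reference line as roughly another full set of neighbors (an approximation; the exact counts differ a bit at the corners, and the final line count in AV2 is my assumption, not something the article states):

```python
def neighbor_ffs(ctb: int, ref_lines: int = 1, bits: int = 8) -> int:
    # 6*ctb neighbor pixels per reference line for 4:2:0
    # (above row + left column, luma plus two half-resolution chroma planes)
    return 6 * ctb * ref_lines * bits

print(neighbor_ffs(128, 1))   # AV1 today, single line     -> 6144
print(neighbor_ffs(128, 3))   # hypothetical 3 ref lines   -> 18432
```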
You are delusional if you think AV2 will be smaller than AV1.
The reason I brought up neighbor handling is because it’s so easy to estimate its resource requirements from first principles, not because it’s a huge part of a decoder. But if neighbors alone already make a smaller FPGA nearly impossible, it should be obvious that the whole decoder is ridiculous.
So… as for storing neighbors in RAM: if I brought this up at work, they'd probably send me home to take a mental health break or something.
Neighbor processing lives right inside the critical latency loop. Every clock cycle that you add in that loop impacts performance. You need to update these neighbors after predicting every coding unit. Oh, and the article mentions that the CTB size (“super block” in AV2 parlance) has been increased from 128x128 to 256x256. Good luck area reducing that. :-)
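For scale: bumping the super block to 256x256 doubles the neighbor state on its own, before any new tools are counted. Using the same 6x-width rule of thumb from earlier in the thread:

```python
def neighbor_ffs(ctb: int, bits: int = 8) -> int:
    # above row + left column, luma plus two 4:2:0 chroma planes
    return 6 * ctb * bits

print(neighbor_ffs(128))      # AV1 128x128 super block -> 6144
print(neighbor_ffs(256))      # AV2 256x256 super block -> 12288
```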