One of the things I’ve always lamented about hardware image formats is the slow pace of innovation.
meet_miyani@reddit
I wrote about how we brought POS receipt images down from 81KB to 5KB by optimizing for thermal printer constraints, without sacrificing print quality.
Read more: https://meet-miyani.medium.com/how-we-reduced-pos-receipt-image-size-by-93-from-81kb-down-to-5kb-dd7a456fcd3a
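The post doesn’t reproduce its exact pipeline here, but the big win for thermal printers plausibly comes before any codec runs: the print head is a 1-bit device, so an 8-bit grayscale scan can be thresholded and packed 8 pixels per byte, an 8x reduction on its own, before deflate ever sees it. A minimal sketch under those assumptions (synthetic receipt data and plain thresholding, not whatever dithering the article actually uses):

```python
# Sketch only (not the article's actual pipeline): threshold an 8-bit
# grayscale "receipt" to 1 bit/pixel, pack 8 pixels per byte, then deflate.
import zlib

WIDTH, HEIGHT = 384, 200           # 384 dots is a common thermal head width

# Fake 8-bit grayscale receipt: white paper with periodic dark "text" rows.
gray = bytearray(255 for _ in range(WIDTH * HEIGHT))
for y in range(20, HEIGHT, 12):                 # every 12th row: a line of text
    for x in range(10, WIDTH - 10):
        gray[y * WIDTH + x] = 30 if (x // 3) % 2 == 0 else 220

def pack_1bit(pixels, width, height, threshold=128):
    """Threshold to 1 bit/pixel and pack 8 pixels into each byte (MSB first)."""
    out = bytearray()
    for y in range(height):
        byte = 0
        for x in range(width):
            byte = (byte << 1) | (1 if pixels[y * width + x] < threshold else 0)
            if x % 8 == 7:                      # width is a multiple of 8 here
                out.append(byte)
                byte = 0
    return bytes(out)

packed = pack_1bit(gray, WIDTH, HEIGHT)
raw_kb = len(gray) / 1024
packed_kb = len(zlib.compress(packed, 9)) / 1024
print(f"8-bit raw: {raw_kb:.1f} KB -> 1-bit + deflate: {packed_kb:.1f} KB")
```

Mostly-white 1-bit data is also exactly what deflate’s run-length-friendly matching eats for breakfast, which is why the final numbers can drop so far below the raw 1-bit size.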
currentscurrents@reddit
This applies to software image formats too. PNG and JPEG (from 1992!) still reign supreme simply because they're already supported everywhere.
Wavelet-based formats from the early 2000s never found widespread adoption despite being technically superior.
Today the SOTA is neural compressors, which achieve extremely high compression ratios by exploiting prior knowledge about images, but I have doubts they will see adoption either.
inio@reddit
We're getting some evolution with phones taking photos in HEIF/HEIC/AVIF (which are just I-frames of H.264/H.265/AV1), and WebP, which is the same thing for VP8, is used extensively on the web.
Miserygut@reddit
I didn't know those formats were derived from the video codecs. TIL.
inio@reddit
Yeah, it's kinda brilliant really. Video I-frame coders are already really efficient at still images, and for hardware acceleration you get to use the same hardware accel and HALs you already need for video.
equeim@reddit
Hwaccel is not available everywhere (and when it is it's often broken in some way) and without it these formats are slow to decode.
acdha@reddit
I think this really highlights the short-sightedness of trying to milk users as much as possible right as open source became the de facto standard. If you wanted to implement JPEG 2000 you had to pay thousands of dollars for a massive spec or pay a lot of money to license someone’s codec, and because there was no good, widely available test suite you hit tons of compatibility issues with unexpected behaviors, which discouraged users from sticking with something that made their lives harder (“this looked great in Photoshop but the CMS said it was corrupt and the app using Kakadu displays a black rectangle in the middle!” “Screw it, just save it as JPEG!”).
Because usage was low, it didn’t get attention for performance and that really didn’t help, and that meant that browser adoption was doomed because nobody wanted an Uber-slow codec of dubious QA status in internet-facing code. OpenJPEG helped a lot but it was too late since the modern video codecs got a lot more optimization.
If I was trying to launch a new codec in 2026, table stakes would be a robust image suite for interoperability testing and a WASM target for browsers so the path for adoption didn’t mean forgoing easy use on the web until you can convince browser developers your new format is worth the security exposure and maintenance cost.
Rxyro@reddit
They need progressive fallbacks so old hardware and OS aren’t screwed?
mccoyn@reddit
That is tricky with compression because the whole point is to save space. If you need to store another copy, you’ll use more space.
Even for network transfers, an extra round trip might add more latency than using a legacy compression format.
elperroborrachotoo@reddit
Meme: .mng (2001) underwater.
ThemBones@reddit
I worked at a Fortune 500 company and developed a zlib (.gz and .png) library which improved compression performance 20x. The hardest part was adoption, not implementation.
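For context on what “compression performance” trades off: DEFLATE implementations expose a speed-vs-ratio dial, and Python’s stdlib zlib shows it directly. This is an illustration only, not the commenter’s library (which presumably speeds up a given level, the way zlib-ng or libdeflate do, rather than lowering it):

```python
# Illustration only: stdlib zlib's level parameter trades speed for ratio.
# Output stays bit-compatible DEFLATE at every level.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 20000

sizes = {}
for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    assert zlib.decompress(out) == data          # round-trip stays exact
    sizes[level] = len(out)
    print(f"level {level}: {len(out):7d} bytes in {dt * 1000:6.2f} ms")
```

The adoption point stands regardless: since every level emits standard DEFLATE, a faster implementation is drop-in compatible, yet even that is hard to get people to switch to.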
valarauca14@reddit
Yeah, modern image formats (HEIF/HEIC, AVIF) are just single frames of video codecs (H.264, H.265, and AV1). ffmpeg supports the workflow out of the box.

I've taken to moving a lot of my "finished" images to AVIF. Compression ratio vs. noise added is silly compared to JPEG (when measuring PSNR), meaning I'm saving ~50% file space functionally for free, and browser support is great.
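PSNR, the metric mentioned above, is simple enough to compute by hand. A minimal sketch on toy 8-bit pixel lists (a real comparison would decode both files and run this per channel over full frames):

```python
# Minimal PSNR: the higher the value in dB, the closer the compressed
# image is to the original. Toy pixel lists stand in for decoded images.
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel arrays."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return math.inf                 # identical images
    return 10 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 90, 200, 40, 60, 255]
noisy = [p + d for p, d in zip(ref, [1, -2, 2, -1, 1, 0, -2, 0])]
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```

The “~50% for free” claim then just means: at roughly half the JPEG file size, the AVIF decode scores about the same PSNR against the source.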
olivermtr@reddit
When I think about hardware-accelerated encoding and decoding I always think of video codecs, and I'd assumed that pictures used a full software path, but it makes sense that they can be accelerated as well.