Current solutions to responsive images require web designers to create multiple versions of images to suit different device viewport sizes and pixel densities, but could there be a better way?
When I was looking at the current solutions and proposals for handling responsive images, I started thinking about how we'd implement them in practice. The thought of having to resize every image on a website multiple times for various device viewport sizes and pixel densities fills me with dread. So we'd likely lean on our Gruntjs build script to automate resizing the images into the various dimensions needed.
The main problem, in my opinion, comes with user-generated images: when our customers use a content management system to add images to a website, we'd need an automated server-side process to handle those images and spit out the necessary HTML. Although entirely possible, it's something I'd prefer to avoid.
Is there a better way?
We need a solution that is simple and straightforward for clients and web designers but that also works responsively. Perhaps the existing progressive scan JPEG format could come to the rescue? Images would be saved as progressive JPEGs at the highest resolution available (within limits, of course). If an image is saved with, say, 10 scans, a mobile device might download just the first 3 scans while a higher-bandwidth desktop device would download all 10.
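To make the scan idea concrete, here is a minimal sketch (not a full JPEG parser) of how a tool might locate the byte offset of each progressive scan by searching for SOS (Start of Scan, `0xFFDA`) markers. Within entropy-coded data, `0xFF` bytes are always stuffed or form restart markers, so a bare `0xFF 0xDA` pair marks a new scan; note that embedded thumbnails inside APPn segments could still produce false positives, which a real tool would handle by walking the marker segments properly.

```python
def scan_offsets(jpeg_bytes):
    """Return the byte offset of each SOS (Start of Scan) marker.

    Simplified sketch: searches for the raw 0xFF 0xDA marker rather
    than fully parsing the JPEG segment structure.
    """
    offsets = []
    i = jpeg_bytes.find(b"\xff\xda")
    while i != -1:
        offsets.append(i)
        i = jpeg_bytes.find(b"\xff\xda", i + 2)
    return offsets

# Synthetic example: SOI marker, a fake header, then three "scans".
fake = (b"\xff\xd8" + b"HEADER"
        + b"\xff\xda" + b"SCAN1"
        + b"\xff\xda" + b"SCAN2"
        + b"\xff\xda" + b"SCAN3"
        + b"\xff\xd9")
print(scan_offsets(fake))  # [8, 15, 22]
```

These offsets are exactly the metadata a browser would need in order to request "the first N scans" of a file.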
It should be possible to implement this approach today using the HTTP Range request header, which would allow a user agent to download only a portion of the entire image file. The image file would need to contain some metadata informing browsers of the byte ranges that correspond to each progressive scan. Although less desirable, a server-side solution is also feasible: the number of scans sent to the client could be varied based on a Client Hints header (a proposal from Google) sent by the user agent. Both options could be cache-friendly by using a Vary: Range or Vary: CH header respectively.
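As a sketch of the client side, assuming the user agent already knows the scan offsets (e.g. from metadata in the file's header), it could pick a scan count for the current connection and issue a Range request for just those bytes. The bandwidth thresholds in `pick_scan_count` are invented purely for illustration:

```python
def pick_scan_count(bandwidth_mbps, total_scans=10):
    """Choose how many scans to fetch. Thresholds are illustrative only."""
    if bandwidth_mbps < 1:
        return max(1, total_scans // 5)   # e.g. 2 of 10 scans when roaming
    if bandwidth_mbps < 10:
        return max(1, total_scans // 3)   # e.g. 3 of 10 scans on mobile
    return total_scans                    # everything on a fast connection

def range_header(scan_offsets, file_size, scans_wanted):
    """Build an HTTP Range header covering the first N scans."""
    if scans_wanted >= len(scan_offsets):
        end = file_size - 1                   # whole file
    else:
        end = scan_offsets[scans_wanted] - 1  # stop just before scan N+1
    return "Range: bytes=0-%d" % end

# Hypothetical offsets of 4 scans in a 2048-byte file.
offsets = [8, 150, 420, 900]
print(range_header(offsets, 2048, 2))  # Range: bytes=0-419
print(range_header(offsets, 2048, 4))  # Range: bytes=0-2047
```

The server needs no special logic here beyond standard Range support; all the decision-making lives in the user agent.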
This JPEG progressive scan approach to responsive images has several bonus side-effects:
- The number of bytes of each image that are downloaded is determined by the device itself rather than by the web designer/developer of the particular website. So it's much more future-proof as the capabilities of devices change over time.
- The device is able to vary the file size of the images it downloads at will. For example, if I was roaming on my mobile device it could download just 2 scans of the full progressive image, but if I then switched to a fast Wi-Fi connection it could download all 10 scans.
- In the example above, where the device has switched from a low- to a high-bandwidth connection, it could download just the additional bytes of the image it needs (using the HTTP Range header) rather than downloading the image all over again, saving server bandwidth compared with other solutions.
- Browser vendors could also use any idle time once a page has loaded to improve image quality by downloading further scans of larger images.
- Browser vendors could intelligently manage the fidelity of images depending on their size in the page. If an image was actually displayed larger in the mobile layout than in the desktop layout, the browser could handle this itself and download more progressive scans as and when required, rather than the front-end developer needing to link to different files and include media queries.
- We could say goodbye to thumbnail images!
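The bandwidth-switch case in the list above can also be sketched. Having already downloaded the first N bytes, the browser would request only the remainder with a second Range request; the offsets and sizes below are hypothetical:

```python
def top_up_header(bytes_already_have, scan_offsets, file_size, scans_wanted):
    """Range header for just the missing bytes of additional scans."""
    if scans_wanted >= len(scan_offsets):
        end = file_size - 1                   # fetch up to end of file
    else:
        end = scan_offsets[scans_wanted] - 1  # stop just before scan N+1
    if bytes_already_have > end:
        return None                           # nothing more needed
    return "Range: bytes=%d-%d" % (bytes_already_have, end)

# Roaming: the first 2 of 4 scans (bytes 0-419) were downloaded.
# On switching to Wi-Fi, only the remaining bytes are requested.
offsets = [8, 150, 420, 900]
print(top_up_header(420, offsets, 2048, 4))  # Range: bytes=420-2047
```

Because the earlier scans are already cached, the second response contains no redundant bytes, which is the bandwidth saving the list item describes.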
Of course, I could probably list as many issues with this approach if I put my mind to it, but it certainly seems to have enough potential to investigate further. I should clarify that I don't see this replacing a solution like Florian's Compromise, as there are instances where a resized version of an image just won't do, such as when a logo would be illegible at a much smaller size.
I'm not the only person to notice the potential of progressive scan JPEGs for responsive images: