I kinda like the idea of the “minimal” service worker. Service Workers can be pretty damn complicated and the power of them honestly makes me a little nervous. They are middlemen between the browser and the network and I can imagine really dinking that up, myself. Not to dissuade you from using them, as they can do useful things no other technology can do.
That’s why I like the “minimal” part. I want to understand what it’s doing extremely clearly! The less code the better.
Tantek posted about that recently, with a minimal idea:
You have a service worker (and “offline” HTML page) on your personal site, installed from any page on your site, that all it does is cache the offline page, and on future requests to your site checks to see if the requested page is available, and if so serves it, otherwise it displays your offline page with a “site appears to be unreachable” message that a lot of service workers provide, AND provides an algorithmically constructed link to the page on an archive (e.g. Internet Archive) or static mirror of your site (typically at another domain).
That seems clearly useful. The bit about linking to an archive of the page though seems a smidge off to me. If the reason a user can’t see the page is because they are offline, a page that sends them to the Internet Archive isn’t going to work either. But I like the bit about caching and at least trying to do something.
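Just to make it concrete, here’s roughly what that could look like. This is my own sketch of Tantek’s idea, not his code, and the cache name and /offline.html path are placeholders:

```js
// sw.js — a minimal "offline page only" service worker (sketch)
const CACHE_NAME = "offline-fallback-v1";
const OFFLINE_URL = "/offline.html"; // placeholder path to your offline page

// On install, cache just the offline page and nothing else.
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.add(OFFLINE_URL))
  );
});

// On navigations, try the network; if that fails, serve the offline page.
self.addEventListener("fetch", (event) => {
  if (event.request.mode === "navigate") {
    event.respondWith(
      fetch(event.request).catch(() => caches.match(OFFLINE_URL))
    );
  }
});
```

Since the offline page gets served in place of the page that failed, its `location.href` is still the URL the visitor wanted, so a little inline script on that page could build Tantek’s archive link from it.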
Jeremy Keith did some thinking about this back in 2018 as well:
The logic works like this:
- If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).
- For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).
The implementation is actually just a few lines of code. A variation of it handles Tantek’s idea as well, implementing a custom offline page that could do the thing where it links off to an archive elsewhere.
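If you’re curious about the shape of it, here’s a sketch of that same logic (my paraphrase, not Jeremy’s actual code):

```js
// sw.js — network-first for HTML, cache-first-with-refresh for everything else (sketch)
const CACHE_NAME = "pages-and-assets-v1";

self.addEventListener("fetch", (event) => {
  const request = event.request;
  if (request.method !== "GET") return;

  // HTML pages: network first, fall back to the cache if the network fails.
  if ((request.headers.get("Accept") || "").includes("text/html")) {
    event.respondWith(
      fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => caches.match(request))
    );
    return;
  }

  // Everything else: serve from the cache, but refresh the cached copy in the background.
  event.respondWith(
    caches.match(request).then((cached) => {
      const refreshed = fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE_NAME).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => cached);
      return cached || refreshed;
    })
  );
});
```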
I’ll leave you with a couple more links. Have you heard the term LoFi? I’m not the biggest fan of the shortening because “Lo-fi” is a pretty established musical term, not to mention “low fidelity” is useful in all sorts of contexts. But recently in web tech it refers to “local-first”.
I see “local-first” as shifting reads and writes to an embedded database in each client via “sync engines” that facilitate data exchange between clients and servers. Applications like Figma and Linear pioneered this approach, but it’s becoming increasingly easy to do.
Some notes on Local-First Development, Kyle Matthews
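In spirit, a local-first write looks something like this. It’s a toy sketch (the /api/notes endpoint is made up, and a real sync engine does far more), but the point is that the local write is the source of truth and the sync is best-effort:

```js
// Toy local-first write: the local store is updated immediately,
// and syncing to the server happens whenever the network cooperates.
function saveNote(note) {
  // Local write first — instant, works offline.
  const notes = JSON.parse(localStorage.getItem("notes") || "{}");
  notes[note.id] = note;
  localStorage.setItem("notes", JSON.stringify(notes));

  // Sync is best-effort, not something the user waits on.
  syncLater(note);
}

function syncLater(note) {
  if (!navigator.onLine) return; // an 'online' event listener could retry later
  fetch("/api/notes", {          // made-up endpoint, for illustration only
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(note),
  }).catch(() => {
    // Still unreachable; a real sync engine would queue this and retry.
  });
}
```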
I dig the idea, honestly, and do see it as a place for technology (and companies that make technology) to step in and really make this style of working easy. Plenty of stuff already works this way. I think of the Notes app on my phone. Those notes are always available. It doesn’t (seem to) care if I’m online or offline. If I’m online, they’ll sync up with the cloud so other devices and backups will have the latest, but if not, so be it. It better as heck work that way! And I’m glad it does, but lots of stuff on the web does not (CodePen doesn’t). But I’d like to build stuff that works that way and have it not be some huge mountain to climb.
Part of the issue is that the “eh, we’ll just sync later, whenever we have network access” bit is super non-trivial. Technology could make easy/dumb choices like “last write wins”, but that tends to be dangerous data-loss territory that users don’t put up with. Instead, data needs to be intelligently merged, and that isn’t easy. Dropbox is a multi-billion-dollar company that deals with this, and they admittedly don’t always have it perfect. One of the major solutions is the concept of CRDTs, which are an impressive idea to say the least, but are complex enough that most of us will gently back away. So I’ll simply leave you with A Gentle Introduction to CRDTs.
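If you want just a taste of the trick before you go, the simplest CRDT I know of is a grow-only counter: each device only ever increments its own slot, and merging two copies is just taking the per-slot max, so everyone converges on the same total no matter what order the syncs happen in. (A toy of my own, not from that article.)

```js
// Grow-only counter (G-Counter): per-device counts that merge without conflict.
function increment(counter, deviceId) {
  return { ...counter, [deviceId]: (counter[deviceId] || 0) + 1 };
}

function merge(a, b) {
  const merged = { ...a };
  for (const [deviceId, count] of Object.entries(b)) {
    merged[deviceId] = Math.max(merged[deviceId] || 0, count);
  }
  return merged;
}

function total(counter) {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Phone and laptop each rack up edits while offline...
const phone = increment({}, "phone");                        // { phone: 1 }
const laptop = increment(increment({}, "laptop"), "laptop"); // { laptop: 2 }

// ...and merging in either order gives the same answer when they finally sync.
console.log(total(merge(phone, laptop))); // 3
console.log(total(merge(laptop, phone))); // 3
```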