
The browser innovation that could make Multi-Page apps great again

Updated: Oct 18, 2021



This proposed innovation, for Chrome, Firefox, Safari, and other browsers, could allow multi-page apps to once again be competitive with SPAs for web and mobile apps.




The problem

Full-page reloads are slow for many reasons. The first bottleneck is the browser having to re-parse all of the JavaScript before your app can load and make its first AJAX request to fetch data. We don’t want to see billions of devices around the world doing extra work to re-interpret code, putting CPUs through trillions of extra cycles.

The SPA solution

Single-page apps solved this problem by requiring only one page-load during the user session.

But multi-page apps still have some advantages

Because multi-page apps allow the front-end to be more stateless, they can be simpler to design and can use a single framework for both back-end and front-end development, e.g., Rails, Django, Phoenix, etc. Another big advantage of multi-page apps is SEO. Apps like Medium, Quora, and Reddit need different web pages for different resources so they can be indexed by search engines.

A New Solution

So here’s where you have to pay attention. Normally we can cache static assets like JavaScript files in the browser: the browser makes a new request to check whether the static asset has been updated, the server usually responds with 304 Not Modified, and the browser uses the cached version. *But* it still has to re-parse the JavaScript. What if we could cache the parsed JS in memory in the browser? Once a JS library had been parsed for a given domain, for example *.yahoo.com, it would stay in memory and could be used by any page loaded from that domain.

What this would mean is that there could be a full-page refresh, but perhaps up to 90% of the JavaScript (all the library code) would not have to be re-parsed for the same domain. Sharing files and parsed data across domains would be a security issue, but within the same domain, not so much.

It’s possible that Service Workers could be used to do this, but as far as I know they have to use asynchronous message passing to exchange data with the application thread, so they wouldn’t solve the problem, since memory is not shared.
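To illustrate that boundary, here is a minimal sketch of how a page and a service worker communicate today, using the standard postMessage / MessageChannel APIs. Everything that crosses this boundary is copied via structured clone, so a service worker cannot hand a parsed library, a function, or any live in-memory object back to the page; the message payload below is just an invented placeholder.

// Page side: ask the active service worker for some data.
// The reply arrives asynchronously, and only cloneable values can cross;
// parsed code and live objects cannot be shared this way.
const channel = new MessageChannel();

channel.port1.onmessage = (event) => {
  console.log('reply from service worker:', event.data);
};

navigator.serviceWorker.ready.then((registration) => {
  registration.active.postMessage({ type: 'get-config' }, [channel.port2]);
});

// Service worker side (sw.js): replies over the transferred port.
self.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'get-config') {
    event.ports[0].postMessage({ libs: ['lib-1', 'lib-2'] });
  }
});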

Remaining Problems

It may be difficult to implement this, since libraries may have memory leaks or be too stateful. It would only work well for stateless libraries that do no caching of their own. This would be difficult to police from a technical perspective. You could put a memory limit on each library, or a total limit per domain.

How the API would work

How would this JS object caching work? I imagine it would work by allowing application developers to declare library code that should be cached; those caches would always be loaded first, before the application code, so the application could load the libraries synchronously, using standard tools like Webpack, etc. This would be much the same as doing something like this with Node.js:

node --require "lib-1" --require "lib-2" app.js

This way, in the browser, much like in Node.js, the cached library code would be pre-loaded and would allow synchronous calls. The library code configuration might be similar to a service worker configuration: the most recent config file would override any other existing config file for the URL domain or subdomain.
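As a purely hypothetical sketch of what such a registration could look like (none of these names exist in any browser today; `parsedModuleCache`, `register`, and the option names are invented here just to make the idea concrete):

// HYPOTHETICAL API: no browser implements this today.
// The idea is to declare which library bundles should be kept parsed in
// memory for this origin, so later full-page loads can skip re-parsing them.
if ('parsedModuleCache' in navigator) {   // hypothetical feature check
  navigator.parsedModuleCache.register({
    scope: 'https://app.example.com/',    // same-origin scope only
    libraries: [
      '/static/js/react.production.min.js',
      '/static/js/lodash.min.js'
    ],
    maxBytes: 50 * 1024 * 1024            // per-domain memory cap
  });
}

// Application bundles could then treat these libraries as already parsed and
// available synchronously, much like `node --require` pre-loads modules.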

The same concepts would mostly behave the same way whether it’s JavaScript, WebAssembly, or any other language.

Current Info on Service Workers

I am not fully up to date on service workers, but if this is the extent of the caching available from them, then there is plenty left to do to fulfill the goals of this article: https://developers.google.com/web/ilt/pwa/caching-files-with-service-worker
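For reference, the caching that service workers offer today looks roughly like the sketch below (file names are placeholders). It stores raw network responses via the standard Cache API, which saves the network round trip but still leaves the browser to re-parse any cached JavaScript on every full-page load, which is exactly the gap this article is about.

// sw.js: cache raw responses with the Cache API.
const CACHE_NAME = 'static-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/index.html', '/static/js/app.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});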




