I was once working with a client, and our team was facing a mix of technical and non-technical challenges in shipping a relatively simple web application. During one of our meetings, I cracked a silly joke: "At this rate, we might as well zip up the source code and send it to anyone who wants to try it." (note for the reader: our users were non-developer colleagues from the same organization)
Of course, we chuckled and moved on. But later, I found myself aching to try that silly idea out. Or rather, a refined version of it:
Ship the source code of a "modern" frontend application as-is, without any build artifacts, and have users run it in their browsers without any additional tools.
There’s probably no practical reason to do this, unless, of course, shipping anything in your organization takes weeks 😜. But, taking the wisdom of this guy into account:
FAFO!
Trying ridiculous ideas is a great way to learn something new. In the rest of this post, I’ll share how my experiment unfolded.
What do I mean by a "Modern" Frontend Application?
Let’s start by clarifying what I mean by a modern frontend application. Since building a solution for every type of application isn’t practical, I narrowed the scope to my own interpretation. To me, a modern frontend application is modular, may use a modern framework, and is optionally written in TypeScript.
Notice that I left out a very common step in many frontend development workflows: bundling. That is because shipping a bundle is orthogonal to the goal of shipping the application's source code as-is.
Modern browsers support ES modules natively, which allows us to import JavaScript modules from no-build CDNs like esm.sh. Obviously, loading modules from a CDN is not as efficient as loading a bundle, but for our purposes, it will do just fine.
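For instance, a page like the following can pull Preact straight from esm.sh with nothing but a module script (a minimal sketch; the version pin is just illustrative):

<script type="module">
  // Import Preact directly from esm.sh; no bundler or build step involved
  import { h, render } from "https://esm.sh/[email protected]";

  render(h("h1", null, "Hello from a no-build module!"), document.body);
</script>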
Lastly, some applications may be server-side rendered, but I’ll focus on single-page applications (SPAs) that are typically deployed on a static file server. A static file server is necessary for two main reasons:
- Users need to be able to access the application.
- Many web APIs and features require a secure context, which is only available when the application is served over HTTPS.
So, to recap: to solve our little challenge, we have to find a way to support TypeScript and to serve files from a secure context.
Serving Assets from a Secure Context
By registering a Service Worker that intercepts the fetch event, and using window.showDirectoryPicker (on supported browsers) or combining the File API with a library like zip.js, we can load a directory/archive and serve the files just like a static file server.
The only catch™ is that a Service Worker can only be registered from a secure context. This means we must host a small application to register the Service Worker (which I’ll refer to as the loader from here on). Note that the loader application is separate from the web application we want to ship and can be as simple as an HTML file and less than 100 lines of JavaScript. For my experiment, I built a loader and hosted it on GitHub Pages.
I included a minimal implementation of a loader below (with some comments) that uses the window.showDirectoryPicker API to read a directory into a virtual file system (a fancy way of saying a Map of file paths and contents) and serves it from the CacheStorage via a Service Worker.
<!DOCTYPE html>
<script type="importmap">
  {
    "imports": {
      "mime/lite": "https://esm.sh/[email protected]/lite"
    }
  }
</script>
<button id="load-dir">Serve Directory</button>
<!-- Read the directory into a virtual filesystem and populate the CacheStorage -->
<script type="module">
import mime from "mime/lite";
// Create a response with the correct Content-Type header
function createResponse(fileName, contents) {
return new Response(contents, {
headers: { "Content-Type": mime.getType(fileName) },
});
};
// window.showDirectoryPicker is gated by user activation. That's why we're
// calling that API in a button click handler
document.querySelector("#load-dir").addEventListener("click", async () => {
const dirHandle = await window.showDirectoryPicker();
// Step 1: Load the directory into a virtual file system in memory
await loadDirectory(dirHandle)
// Step 2: Populate the browser CacheStorage with the contents of the directory
.then(populateCache);
// Load the index page from the directory
// The request will be intercepted and handled by the ServiceWorker
window.location.href = "/";
});
  // Read the files in the directory into a virtual file system
  async function loadDirectory(dirHandle) {
    // Virtual filesystem: { [filePath]: contents }
    const fsMap = new Map();

    const _loadDirectory = async (dirHandle, path) => {
      for await (const [name, handle] of dirHandle.entries()) {
        const filePath = `${path}/${name}`;
        if (handle.kind === "file") {
          const file = await handle.getFile();
          fsMap.set(filePath, await file.arrayBuffer());
        } else if (handle.kind === "directory") {
          await _loadDirectory(handle, filePath);
        }
      }
    };

    await _loadDirectory(dirHandle, "");
    return fsMap;
  }
  // Populate the CacheStorage with the contents of the virtual file system
  async function populateCache(fsMap) {
    // Clear a previously populated "vfs" cache, if there is one
    for (const cacheName of await caches.keys()) {
      if (cacheName !== "vfs") {
        continue;
      }
      await caches.delete(cacheName);
    }

    const cache = await caches.open("vfs");
    for (const fileName of fsMap.keys()) {
      const contents = fsMap.get(fileName);
      await cache.put(fileName, createResponse(fileName, contents));
      // Also serve index.html for the root path
      if (fileName === "/index.html") {
        await cache.put("/", createResponse(fileName, contents));
      }
    }
  }
</script>
<!-- Register the service worker -->
<script type="module">
if (!navigator.serviceWorker.controller) {
const registrations = await navigator.serviceWorker.getRegistrations();
await Promise.all(registrations.map((r) => r.unregister()));
}
await navigator.serviceWorker
.register("service-worker.js")
.then(() => console.log("Service Worker registered!"))
.catch((error) =>
console.error("Service Worker registration failed:", error),
);
</script>
// service-worker.js
// Intercept HTTP requests and serve the files in the directory from the cache storage
self.addEventListener("fetch", (event) => {
  event.respondWith(
    (async () => {
      const cache = await caches.open("vfs");
      const { pathname: fileName } = new URL(event.request.url);
      const match = await cache.match(fileName);
      if (match) {
        return match;
      }
      // Fall back to the network for anything not in the cache
      return fetch(event.request).catch((error) => {
        return new Response("You are offline.", {
          headers: { "Content-Type": "text/plain" },
        });
      });
    })(),
  );
});

self.addEventListener("install", (event) => {
  console.log("Service Worker installing...");
  self.skipWaiting();
});

self.addEventListener("activate", (event) => {
  console.log("Service Worker activated!");
  event.waitUntil(clients.claim());
});
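As an aside, on browsers that don't support window.showDirectoryPicker, the File API plus zip.js route mentioned earlier could stand in for loadDirectory. Here's a rough sketch (the esm.sh URL and the exact zip.js calls are my assumptions, not part of the loader above):

import { ZipReader, BlobReader, Uint8ArrayWriter } from "https://esm.sh/@zip.js/zip.js";

// Read a user-selected .zip archive (e.g. from an <input type="file">) into the
// same kind of virtual file system that loadDirectory produces
async function loadArchive(file) {
  const fsMap = new Map();
  const reader = new ZipReader(new BlobReader(file));

  for (const entry of await reader.getEntries()) {
    if (entry.directory) continue;
    fsMap.set(`/${entry.filename}`, await entry.getData(new Uint8ArrayWriter()));
  }

  await reader.close();
  return fsMap;
}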
For a more complete implementation, you can check the ts-app-loader repository.
TypeScript Support
Some frameworks, such as Preact, officially support no-build workflows. However, since browsers don't natively support running TypeScript modules, we need to compile them first. Luckily, the TypeScript team has already done most of the work. The TypeScript Website's monorepo includes a package called TypeScript VFS. Here's a short introduction to the package, taken from its README file:
A Map-based TypeScript Virtual File System.
Useful when you need to:
- Run TypeScript in the browser
- Run virtual TypeScript environments where files on disk aren't the source of truth
The TypeScript VFS package also exports a convenient factory function (createDefaultMapFromCDN) to create a virtual file system, pre-populated with the lib files from the TypeScript CDN.
That sounds exactly like what we need! We just need to replace our virtual file system Map with the one from the factory function and compile the TypeScript files using the TypeScript compiler API:
<!-- Our updated importmap to import TypeScript and TypeScript VFS -->
<script type="importmap">
  {
    "imports": {
      "typescript": "https://esm.sh/[email protected]",
      "@typescript/vfs": "https://esm.sh/@typescript/[email protected]",
      "mime/lite": "https://esm.sh/[email protected]/lite"
    }
  }
</script>
import * as tsvfs from "@typescript/vfs";
import ts from "typescript";
// We can make this configurable or even read it from the input directory
const tsCompilerOptions = {
  esModuleInterop: true,
  skipLibCheck: true,
  target: ts.ScriptTarget.ES2022,
  moduleResolution: ts.ModuleResolutionKind.Bundler,
  allowJs: true,
  resolveJsonModule: true,
  moduleDetection: ts.ModuleDetectionKind.Force,
  isolatedModules: true,
  verbatimModuleSyntax: true,
  module: ts.ModuleKind.ES2022,
  inlineSourceMap: true,
  inlineSources: true,
  lib: ["es2022", "dom", "dom.iterable"],
  jsx: ts.JsxEmit.ReactJSX,
};
// Our new loadDirectory function with TypeScript VFS
async function loadDirectory(dirHandle) {
  const fsMap = await tsvfs.createDefaultMapFromCDN(
    tsCompilerOptions,
    ts.version,
    true,
    ts,
  );

  const _loadDirectory = async (dirHandle, path) => {
    for await (const [name, handle] of dirHandle.entries()) {
      const filePath = `${path}/${name}`;
      if (handle.kind === "file") {
        const file = await handle.getFile();
        // For simplicity, we're reading files as text. Have a look at the
        // repository for a better implementation
        fsMap.set(filePath, await file.text());
      } else if (handle.kind === "directory") {
        await _loadDirectory(handle, filePath);
      }
    }
  };

  await _loadDirectory(dirHandle, "");
  return fsMap;
}
Next, we can update our click handler to compile the TypeScript files before loading the application:
async function buildProject(fsMap) {
  const system = tsvfs.createSystem(fsMap);
  const host = tsvfs.createVirtualCompilerHost(system, tsCompilerOptions, ts);

  const program = ts.createProgram({
    rootNames: [...fsMap.keys()],
    options: tsCompilerOptions,
    host: host.compilerHost,
  });

  // emit() writes the compiled JavaScript back into fsMap through the virtual host
  program.emit();
  return fsMap;
}

document.querySelector("#load-dir").addEventListener("click", async () => {
  const dirHandle = await window.showDirectoryPicker();

  await loadDirectory(dirHandle)
    .then(buildProject) // New step for compiling TS files
    .then(populateCache);

  window.location.href = "/";
});
And just like that, we have ourselves a TypeScript application loader.
NOTE
Some important implementation details are omitted for brevity. Make sure to check the ts-app-loader repository for more details.
The Frontend Application
We have a way to run and distribute a modern frontend application. The only thing left to do is to build one! Fortunately, with TypeScript support, building an application for our loader is not drastically different from building one with a bundler. However, there are some key differences to keep in mind:
1. No package.json
As we're taking a no-build route, we'll use an import_map.json file instead of a package.json. Below is an example of an import_map.json file:
{
"imports": {
"preact": "https://esm.sh/[email protected]",
"preact/": "https://esm.sh/[email protected]/",
"react": "https://esm.sh/[email protected]/compat",
"react/": "https://esm.sh/[email protected]/compat/",
"react-dom": "https://esm.sh/[email protected]/compat",
"preact-iso": "https://esm.sh/[email protected]?external=preact"
}
}
An important detail to highlight in the example above is the use of the external query parameter when importing modules with conflicting dependencies from esm.sh. Marking a dependency as external tells esm.sh not to bundle its own copy; preact-iso then resolves preact through our import map, so the application ends up with a single Preact instance. The official docs have more details about the behavior of the external query parameter.
2. Development Tools
To have a pleasant development experience with features like auto-complete and type checking, we'll need to use a language server that supports resolving dependencies from an import_map.json file. Deno's built-in language server is a great choice, as it natively supports import maps and provides excellent TypeScript support. I needed to make a few adjustments in the deno.json file to make it work with my Preact application:
{
"lock": false,
"importMap": "import_map.json",
"unstable": ["sloppy-imports"],
"lint": {
"rules": {
"exclude": ["no-sloppy-imports"]
}
},
"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "preact"
}
}
The first important detail here is the importMap option. This tells Deno where to find the import_map.json file. Without it, the language server won't know how to resolve the dependencies.
Another key adjustment is the use of sloppy-imports. By default, Deno expects TypeScript files to be imported with the .ts extension. However, we can only import the compiled files (with the .js extension) in the browser. With the sloppy-imports option, we can write import statements like:
import { Header } from "./components/Header.js";
Instead of:
import { Header } from "./components/Header.ts";
3. Dynamic Loading of import_map and Entry Point
Last but not least, we need to dynamically load the import_map and our application entry point in the index.html file:
<!DOCTYPE html>
<html>
  <head>
    <!-- ... -->
    <script>
      // Top-level await is only available in modules. We need to load the
      // import map before using any modules
      (async () => {
        const importMapScript = document.createElement('script');
        importMapScript.type = 'importmap';
        importMapScript.textContent = await fetch("/import_map.json").then((res) => res.text());
        document.head.appendChild(importMapScript);

        const indexScript = document.createElement('script');
        indexScript.type = 'module';
        indexScript.src = '/index.js';
        indexScript.defer = true;
        document.body.appendChild(indexScript);
      })();
    </script>
  </head>
  <body>
    <div id="app"></div>
  </body>
</html>
Pretty much everything else after this point should be similar to developing a regular Preact application with TypeScript (features that require a bundler, such as direct CSS imports, are not supported).
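For a sense of scale, a minimal entry point could look like the sketch below. This is a hypothetical index.tsx that the loader compiles to the /index.js referenced in index.html above; the Header import mirrors the earlier sloppy-imports example:

import { render } from "preact";
import { Header } from "./components/Header.js";

function App() {
  return (
    <main>
      <Header />
      <p>Hello from a no-build Preact app!</p>
    </main>
  );
}

// Mount into the <div id="app"> declared in index.html
render(<App />, document.getElementById("app")!);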
The full source code of a "loadable" Preact application can be found here.
What Now?
Now we know what it takes to ship a modern frontend application written in TypeScript without a build workflow (yay!). Also, is it just me, or does this feel like running a retro game ROM in an emulator?
Running your frontend application can be similar to running your favorite game ROM in an emulator 😉
Here comes a spicy take:
If shipping your application the way I showed in this post takes less effort than your regular release process in your organization, you might be doing something wrong.
To be honest, as impractical as it may seem, I like the idea of not having to deal with a build workflow for developing a simple application. Even as a user, I wouldn't mind waiting a few extra seconds to get a working application. In my humble opinion, not everything should be server-side rendered and loaded in a few milliseconds. The mentality of using a sledgehammer for everything is not always the best approach, and sometimes it's okay to use a nutcracker when you're dealing with nuts (no pun intended).
I had a lot of fun with this experiment, and after sharing it with a few people, I got some interesting feedback. One colleague mentioned it could be useful for teaching and learning, and a friend thought it could evolve into a platform for sharing simple applications. I'd love to hear your thoughts as well. What would you use this for?