The single-author problem
Most digital archives are run by one person and go dark when that person stops paying for hosting. I've designed this one so anyone can recover the full archive from at least one of several independent backups, with no co-operation from me.
Three independent copies
1. Public source repository
The full source lives at github.com/jonajinga/filthy-little-atheist. Public. MIT-licensed for the code, Public Domain Mark for the Conway text, CC BY-SA for the commentary (see /license/). Clone, run npm install && npm run build, and the whole site is on your laptop in thirty seconds.
If GitHub disappears, the repository is mirrored to Software Heritage.
2. Static, framework-free output
Plain HTML, plain CSS, a few small vanilla-JS files. No server-side rendering, no database, no proprietary CMS. Anyone with a static host — Cloudflare Pages, GitHub Pages, Netlify, an S3 bucket, an old Apache server — can publish _site/ as is. Build runs on any platform with Node 22 or later.
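To make the "any static host" claim concrete, here is a minimal sketch of serving _site/ with nothing but Node's standard library. It is an illustration, not part of the project; the port and the small MIME table are arbitrary choices.

```js
// Sketch: preview the built _site/ directory locally with no framework runtime.
// Port and MIME table are arbitrary; any static host does the same job in production.
import { createServer } from "node:http";
import { createReadStream, existsSync, statSync } from "node:fs";
import { extname, join, normalize } from "node:path";

const ROOT = "_site";
const TYPES = { ".html": "text/html", ".css": "text/css", ".js": "text/javascript",
                ".json": "application/json", ".xml": "application/xml" };

createServer((req, res) => {
  // Map "/" and "/foo/" to their index.html, and refuse paths that try to escape ROOT.
  const path = normalize(decodeURIComponent(req.url.split("?")[0]));
  if (path.includes("..")) { res.writeHead(400); return res.end(); }
  let file = join(ROOT, path);
  if (existsSync(file) && statSync(file).isDirectory()) file = join(file, "index.html");
  if (!existsSync(file)) { res.writeHead(404); return res.end("not found"); }
  res.writeHead(200, { "content-type": TYPES[extname(file)] ?? "application/octet-stream" });
  createReadStream(file).pipe(res);
}).listen(8080, () => console.log("previewing _site/ at http://localhost:8080"));
```

In production you would point Cloudflare Pages, GitHub Pages, or an S3 bucket at the same directory; the point is that nothing beyond a file server is required.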
3. Wayback Machine snapshots
The Internet Archive's Wayback Machine holds an independent copy of every page as rendered HTML. A reader can still read the words if both GitHub and Cloudflare Pages disappear and I never touch the project again.
A GitHub Actions workflow submits snapshots on two cadences:
- On every push to main: the homepage and twelve hub URLs (works, biography, timeline, and the rest) are sent to https://web.archive.org/save/<url>. New editorial changes are archived within minutes.
- Every Sunday at 06:30 UTC: the workflow walks sitemap.xml and submits a deterministic ~150-URL slice chosen by ISO week number. The slice rotates, so successive runs cycle through every URL; the full ~800-URL sitemap is re-snapshotted every four to six weeks.
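For readers who want to reproduce the weekly sweep, here is a minimal sketch of how a deterministic, ISO-week-rotated slice can be computed. The sitemap path, the 150-URL slice size, and the helper names are assumptions; the actual workflow script may differ.

```js
// Sketch: pick a deterministic ~150-URL slice of the sitemap, rotated by ISO week.
// SLICE_SIZE and the sitemap path are assumptions, not the workflow's real values.
import { readFileSync } from "node:fs";

const SLICE_SIZE = 150;

// ISO-8601 week number (1-53) for a given date.
function isoWeek(date) {
  const d = new Date(Date.UTC(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate()));
  d.setUTCDate(d.getUTCDate() + 4 - (d.getUTCDay() || 7)); // shift to the Thursday of this week
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  return Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
}

// Pull every <loc> out of the built sitemap.
const xml = readFileSync("_site/sitemap.xml", "utf8");
const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);

// Rotate: week 1 takes URLs 0-149, week 2 takes 150-299, and so on, wrapping around,
// so an ~800-URL sitemap is covered over roughly six weekly runs.
const slices = Math.ceil(urls.length / SLICE_SIZE);
const offset = ((isoWeek(new Date()) - 1) % slices) * SLICE_SIZE;
console.log(urls.slice(offset, offset + SLICE_SIZE).join("\n"));
```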
Requests are paced eight seconds apart to stay under Save Page Now's anonymous limit. Failures are logged and retried on the next sweep.
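And a matching sketch of the paced submission itself. The eight-second spacing comes from the description above; the exact logging and retry behaviour of the real workflow is assumed.

```js
// Sketch: submit a list of URLs to Save Page Now, one request every eight seconds.
// Failures are only logged here; the next weekly sweep picks them up again.
const PACE_MS = 8000;
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function submit(urls) {
  for (const url of urls) {
    try {
      const res = await fetch(`https://web.archive.org/save/${url}`);
      console.log(res.ok ? `saved  ${url}` : `failed ${url} (HTTP ${res.status})`);
    } catch (err) {
      console.log(`failed ${url} (${err.message})`);
    }
    await sleep(PACE_MS); // stay under Save Page Now's anonymous rate limit
  }
}
```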
To browse the existing snapshots of any page, append its URL to https://web.archive.org/web/*/. To trigger a fresh snapshot, visit https://web.archive.org/save/<url>.
Formats that age well
- Plain Markdown for every work, with YAML front-matter.
- Plain JSON for the API surfaces. OAI-PMH 2.0 is plain XML; a query sketch follows this list.
- Static HTML for the deployed site. No framework runtime.
- Public-domain text as the substrate. No paywalled sources. No licensing chain to break.
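As an illustration of how little tooling those surfaces need, here is a hedged sketch of querying the OAI-PMH endpoint from Node. The /oai path is a guess; the verb and metadataPrefix parameters are standard OAI-PMH 2.0 and carry no assumptions.

```js
// Sketch: list records from the OAI-PMH surface. Only the endpoint path is guessed;
// verb=ListRecords and metadataPrefix=oai_dc are standard OAI-PMH 2.0 parameters.
const endpoint = "https://filthylittleatheist.com/oai"; // hypothetical path; see the Data & APIs page
const params = new URLSearchParams({ verb: "ListRecords", metadataPrefix: "oai_dc" });

const res = await fetch(`${endpoint}?${params}`); // run as an ES module (top-level await)
console.log(await res.text()); // plain XML, as noted above
```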
If I'm hit by a bus
The minimum to keep the archive running:
- Clone github.com/jonajinga/filthy-little-atheist.
- Edit src/_data/site.js with your editor name and contact email (a sketch of this file follows the list).
- Run npm install && npm run build. The pipeline produces _site/, a complete static copy.
- Deploy _site/ to any static host. Cloudflare Pages runs npx eleventy directly with no buildpack; GitHub Pages, Netlify, Vercel, S3, and a plain Apache server all work as drop-in alternatives.
- Point a domain wherever you like. Editorial continuity does not depend on inheriting filthylittleatheist.com; the substance is the corpus, not the URL.
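For the second step, a sketch of what the edited src/_data/site.js might look like. The field names here are assumptions; keep whatever fields the real file already exports and change only the editor details.

```js
// Sketch of src/_data/site.js after a handover. Field names are assumptions;
// the real file may expose more, or differently named, properties.
module.exports = {
  title: "Filthy Little Atheist",
  url: "https://example.org",       // wherever you end up deploying _site/
  editor: "Your Name",              // replaces the previous editor
  contactEmail: "you@example.org",
};
```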
Stable identifiers
Every work carries an identifier in the namespace conway:vol-N:slug — e.g. conway:vol-3:about-the-holy-bible. It appears in the JSON-LD on every work page and in the OAI-PMH records. It survives URL restructuring, host migration, and a fork by a different editor. Slugs are never repurposed.
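Roughly how that identifier might surface in a work page's JSON-LD. Only the identifier scheme and the example slug are taken from above; the surrounding property names are assumptions about the markup.

```js
// Sketch: the kind of object a work page might serialize into
// <script type="application/ld+json">. Properties other than "identifier" are assumptions.
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "CreativeWork",
  identifier: "conway:vol-3:about-the-holy-bible", // the stable part; survives URL changes
  name: "About the Holy Bible",                    // assumed title, derived from the slug
  url: "https://filthylittleatheist.com/works/about-the-holy-bible/", // hypothetical current URL
};
console.log(JSON.stringify(jsonLd, null, 2));
```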
Domain
filthylittleatheist.com is on annual auto-renew with a multi-year lock. If the project changes hands, the domain transfers with it. If I lapse, the registrar's auto-renew and warning chain give at least ninety days' notice before expiry could affect the live site.
What I don't depend on
- No third-party CDN for primary content. Fonts come from Bunny and, if Bunny is unreachable, fall back to a system serif and a system substitute for Inter.
- No third-party JS frameworks on the reading surface.
- No SaaS dependency for the corpus. The contact form uses Web3Forms; reading does not.
- No analytics or tracking that, if blocked, would break the page.
How you can help
Two cheap, useful things: cite a permanent URL on this site (e.g. /works/the-gods/#p-vol1-007) instead of a screenshot, and ask the Wayback Machine to snapshot any page you mean to rely on. archive.org/save takes a URL and produces a stable snapshot in seconds. Each one is another independent copy.
Related
- License — the legal substrate of any fork.
- Governance — who decides what gets published.
- Data & APIs — every machine-readable surface, in one place.
- Manuscript watch — physical preservation of the printed Conway Edition.