
Five things I hold to

Every technical choice on this site flows from a small set of commitments. These aren't dogma — they're how I keep the archive readable, fast, and trustworthy a decade from now.

  • Static first.

    Every page is built once at deploy time and served as plain HTML. No databases at request time. No server-side rendering. No invisible runtime.

  • The reader's browser is the runtime.

    Search, notes, highlights, and reading settings all run client-side. Nothing personal leaves your device.

  • Vanilla over framework.

    No React, no Vue, no Svelte. Plain JavaScript and plain CSS. The smallest dependency that still does the job.

  • Public-domain text on a public web.

    Paine's writings are public-domain. I treat the archive the same way — open formats, RSS, sitemaps, plain Markdown source.

  • Privacy by default.

    No cookies, no fingerprinting. Fonts via GDPR-compliant Bunny Fonts. Cookieless analytics through Umami Cloud (public dashboard at /analytics/) and Cloudflare Web Analytics. Both anonymise IPs.

Every layer, named

If you want to fork this site, replicate it for a different author, or just understand what's happening, start here. Every dependency is open-source and listed below.

Build

Eleventy v3

The static site generator. ESM config (eleventy.config.js) with "type": "module" in package.json. All data files use export default. The build emits roughly 1,300 files in under 30 seconds.

  • Templating: Nunjucks (.njk) for layouts and pages, Markdown for prose.
  • Markdown: markdown-it + markdown-it-anchor for heading IDs.
  • Plugins: @11ty/eleventy-plugin-rss, @11ty/eleventy-navigation, @11ty/eleventy-plugin-syntaxhighlight.
  • Images: @11ty/eleventy-img, AVIF + WebP + JPEG fallback, lazy loading, responsive sizes.
  • HTML minification: @minify-html/node via a custom transform.
  • SVG rasterization: sharp re-renders the OG card, favicons (16/32/192/512), apple-touch-icon, and /favicon.ico from a single source SVG on every build.
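The bullets above can be sketched as a minimal ESM config. This is an illustrative shape only, not this site's actual eleventy.config.js; the directory names in particular are assumptions:

```javascript
import rss from "@11ty/eleventy-plugin-rss";
import navigation from "@11ty/eleventy-navigation";
import syntaxHighlight from "@11ty/eleventy-plugin-syntaxhighlight";

export default function (eleventyConfig) {
  // The three plugins named above.
  eleventyConfig.addPlugin(rss);
  eleventyConfig.addPlugin(navigation);
  eleventyConfig.addPlugin(syntaxHighlight);

  // Nunjucks for layouts, Markdown for prose (see "Templating").
  return {
    dir: { input: "src", output: "_site" }, // assumed directory layout
  };
}
```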

CSS

Vanilla CSS, custom properties

No framework. No bundler. CSS is split into ~50 partials by concern (reset.css, tokens.css, components, pages) and concatenated + minified at build time via lightningcss into a single global.css. Cache-busted with a ?v= query string keyed to the build timestamp.

  • Why no @import? Browser @import creates a waterfall of sequential network requests that delays first paint. Concatenation at build is one request, fully parallelised with the HTML.
  • Tokens: Three colours (paper, ink, accent), two typefaces, one spacing scale. All in src/assets/css/partials/tokens.css.
  • Type scale: WCAG-friendly editorial defaults — 18 px body, 14 px floor, 24 px qualifies as "large text" per WCAG 1.4.3. Pure rem so the reader-settings font-size control propagates proportionally.
  • Dark mode: CSS custom-property flip. No-flash inline script in <head>.
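The concatenation step is simple enough to sketch in a few lines of Node. The helper names and file shapes here are illustrative, not the site's actual build code:

```javascript
// Join CSS partials in a fixed order: one file, one request,
// no @import waterfall.
function concatPartials(partials) {
  return partials.map((p) => p.css).join("\n");
}

// Cache-bust with a query string keyed to the build timestamp.
function cacheBustedHref(href, buildTime) {
  return `${href}?v=${buildTime}`;
}
```

A real build would read the ~50 partials from disk, run the result through lightningcss, and write a single global.css.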

JavaScript

Plain JS, deferred

Every script is plain ECMAScript: vanilla, deferred, and namespaced. No bundler. No transpiler. Each script is independently inspectable in src/assets/js/.

  • main.js: nav, accordions, smooth scroll, theme toggle, instant-page hover-prefetch.
  • annotations.js: selection highlighting + per-work notes (localStorage).
  • reading-log.js / reading-resume.js: private reading log, continue-reading, heatmap calendar.
  • reader-toc.js: sticky TOC with IntersectionObserver active-section tracking.
  • reader-share.js / work-tools.js: native HTML <dialog> share & export modal + APA / MLA / Chicago / BibTeX citation generator.
  • timeline-map.js: Leaflet wrapper for /timeline/ + lecture-tour.
  • works-graph.js / build-concept-graph: D3 force-directed topic graph for the corpus.
  • tippy-init.js: glossary tooltips with mobile-tap support.
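As an illustration of the local-only pattern, per-work notes might be keyed and serialised like this. The key scheme and field names are assumptions, not annotations.js internals:

```javascript
// One localStorage key per work; the payload is plain JSON.
function noteKey(workSlug) {
  return `notes:${workSlug}`;
}

function serialiseNote(paragraphIndex, text) {
  return JSON.stringify({ p: paragraphIndex, text });
}

// In the browser:
//   localStorage.setItem(noteKey("rights-of-man"), serialiseNote(3, "…"));
// localStorage is origin-scoped and never leaves the device.
```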

Search

Pagefind

Full-text search runs entirely in the browser. Pagefind builds a per-word index at deploy time and downloads only the chunks needed for a given query: no server, no API call, no logging.

  • Where: press / from anywhere or visit /search/.
  • How it builds: eleventy.after hook runs pagefind against the just-built _site/ directory.
  • What's indexed: the <main data-pagefind-body> region, the prose, not the chrome.
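That eleventy.after hook is a one-liner in spirit. A sketch assuming the standard Pagefind CLI invocation; the site's real hook may differ:

```javascript
import { execSync } from "node:child_process";

export default function (eleventyConfig) {
  // Runs after Eleventy finishes writing _site/, so Pagefind
  // indexes exactly what will be deployed.
  eleventyConfig.on("eleventy.after", () => {
    execSync("npx pagefind --site _site", { stdio: "inherit" });
  });
}
```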

Visualisations

D3 + Leaflet

Two interactive visualisations across the site. Both load only on the pages that need them; every other page ships zero visualisation code.

  • D3 v7: the topic map on /works/, force-directed graph of 11 topics + 177 works.
  • Leaflet 1.9: the timeline map with OpenStreetMap tiles, custom pins, and clustered popups.

Typography

Bunny Fonts

EB Garamond for body and headlines, Inter for UI chrome. Served by Bunny Fonts, a privacy-first, GDPR-compliant Google Fonts mirror with no logging, no IP retention, and a permissive cache header.

Hosting

Cloudflare Pages

Zero-config, edge-cached, free for the traffic an archive of this size sees. Build runs npx eleventy directly; no Node API, no serverless, no buildpack.

  • Headers: long cache for hashed assets, short cache for HTML, set in src/_headers.
  • Redirects: declared in src/_redirects for legacy URLs.
  • Domain: filthylittleatheist.com, with a Cloudflare-managed certificate.

Privacy

Tracker-free by default

Two analytics scripts on the reading surface, both cookieless and IP-anonymised: Umami Cloud (public dashboard at /analytics/) and Cloudflare Web Analytics.

  • No cookies: the site sets none.
  • No fingerprinting: no canvas, no audio, no font enumeration.
  • Local-only state: notes, highlights, saved readings, theme, all localStorage.
  • security.txt: at /.well-known/security.txt.

Data

Markdown + JSON

Every work is a single Markdown file in src/works/ with YAML front-matter (title, year, volume, category, subtitle, excerpt). Volumes, topics, timeline, connections, glossary, bibliography, and corpus statistics are computed at build time from JS data files in src/_data/.

  • worksStats.js: total / per-volume / per-decade / per-year / per-letter buckets.
  • timeline.js, connections.js, conceptIndex.json: hand-curated relational data.
  • bibliography.js, manuscripts.js: 12 modern scholarly sources + 11 holding institutions.
  • glossary.js: 55+ contextual + Paine-specific entries.
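A work file's front-matter, using the fields listed above, might look like this (values are illustrative, not copied from the repo):

```yaml
---
title: "Rights of Man"
year: 1791
volume: 2
category: "Political"            # category labels are a guess
subtitle: "Part the First"
excerpt: "A short teaser shown on index pages."
---
```

The Markdown body of the work follows the closing `---`.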

Exports

Five formats per work

Every work ships in five formats: four machine-readable downloads generated at build time by scripts/build-exports.mjs and indexed in /api/downloads.json, plus a print view. Volume-level zip bundles ride on top.

  • .txt — plain text, the universal source-of-truth fallback.
  • .json — full structured payload (front-matter + paragraphs + footnotes).
  • .bib — BibTeX, citable directly in LaTeX / Pandoc.
  • .ris — Research Information Systems format for Zotero, EndNote, Mendeley.
  • Print view — browser-rendered, with citation footer.
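The .ris mapping is the least familiar of the four download formats, so here is a minimal sketch of the tag layout. The real generator is scripts/build-exports.mjs; this is not its code:

```javascript
// RIS is line-oriented: a two-letter tag, two spaces, "- ", value.
function toRis({ title, year }) {
  return [
    "TY  - BOOK",            // record type opens the entry
    "AU  - Paine, Thomas",
    `TI  - ${title}`,
    `PY  - ${year}`,
    "ER  - ",                // empty ER tag closes the entry
  ].join("\n");
}
```

Zotero, EndNote, and Mendeley all ingest this shape directly.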

Discovery

Library + machine harvest

The corpus is exposed through the same protocols digital-humanities tools, libraries, and search engines actually consume.

  • OAI-PMH 2.0 at /api/oai-pmh/identify.xml — Identify, ListIdentifiers, ListRecords, GetRecord, ListMetadataFormats. Dublin Core. Suitable for DPLA / Internet Archive / university harvesters.
  • JSON APIs: /api/works.json, /api/downloads.json, /api/work-history.json, /api/concordance.json, /api/glossary.json.
  • JSON Feed 1.1 for the daily "this day in Paine" rotator at /api/this-day.json.
  • Atom feeds: site-wide, works-only, this-day-only.
  • Sitemap + robots.txt with stable identifiers in the conway:vol-N:slug namespace.
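The identifier namespace reads as a simple template; here it is as a formatter, assuming the obvious interpretation of conway:vol-N:slug:

```javascript
// Stable identifier for a work: Conway volume number plus URL slug.
function conwayId(volume, slug) {
  return `conway:vol-${volume}:${slug}`;
}
```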

Structured data

JSON-LD @graph

Every page emits a single JSON-LD @graph spine that links the WebSite, Organization (publisher), Person (editor + Paine with Wikidata / VIAF / LCNAF identifiers), and Book (Conway Edition with 12 typed Volume parts) entities through stable @id IRIs.

  • CreativeWork per work, with accessMode, speakable, encoding[] for all five exports.
  • Quotation, Event, Place graphs for /quotes/ and /timeline/ (with GeoCoordinates).
  • Person entities for every figure on /connections/ with knows back to Paine.
  • FAQPage, BreadcrumbList, CollectionPage, DataCatalog, LearningResource, ImageGallery, VideoGallery as appropriate.
  • 1,797 JSON-LD blocks across 806 pages, all parse cleanly via JSON.parse.
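The spine pattern, reduced to three entities. The IRIs, names, and property choices here are illustrative; the point is that entities reference each other by @id instead of nesting copies:

```javascript
// Build a tiny @graph: WebSite, Person, Book, linked by stable @id IRIs.
function graphSpine() {
  const site = "https://filthylittleatheist.com";
  return JSON.stringify({
    "@context": "https://schema.org",
    "@graph": [
      { "@type": "WebSite", "@id": `${site}/#website`, name: "…" },
      { "@type": "Person", "@id": `${site}/#paine`, name: "Thomas Paine" },
      {
        "@type": "Book",
        "@id": `${site}/#conway-edition`,
        author: { "@id": `${site}/#paine` }, // reference, not a nested copy
      },
    ],
  });
}
```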

Reader UI

Native HTML dialogs

The share & export modal and the citation generator both use the native <dialog> element. The browser handles top-layer rendering, the ::backdrop scrim, escape-to-close, and focus management — no z-index battles, no transform hacks, no iOS Safari viewport quirks.

  • Five close paths on the share dialog (form method=dialog, inline onclick, a JS listener, capture-phase Esc, and a geometric backdrop-click check).
  • Mobile-first: 100vw + 100dvh on phones, centered card via margin: auto at ≥768 px.
  • 10 brand-share targets (X, Bluesky, LinkedIn, Mastodon, Reddit, Hacker News, etc.) + cite + print + 5 downloads.
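Two of those close paths in miniature: the method=dialog form and the geometric backdrop check (markup simplified, not the site's actual modal):

```html
<dialog id="share">
  <form method="dialog">
    <!-- Close path 1: submitting a method=dialog form closes the dialog. -->
    <button value="close">Close</button>
  </form>
</dialog>
<script>
  const dlg = document.getElementById("share");
  dlg.showModal(); // browser provides top layer, ::backdrop, focus trap
  dlg.addEventListener("click", (e) => {
    // Close path 2: a click whose coordinates fall outside the dialog's
    // box can only have landed on the backdrop.
    const r = dlg.getBoundingClientRect();
    const inside =
      e.clientX >= r.left && e.clientX <= r.right &&
      e.clientY >= r.top && e.clientY <= r.bottom;
    if (!inside) dlg.close();
  });
</script>
```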

Accessibility

WCAG 2.2 AA

Semantic HTML throughout. Skip link, visible focus rings, 44×44 px tap targets, 16:1 contrast on body text, aria-current="page" on active nav, prefers-reduced-motion respected. Type scale follows WCAG-friendly editorial defaults (18 px body, 14 px floor).

  • Statement: /accessibility/.
  • Tested with: WAVE (WebAIM), Lighthouse, manual keyboard + NVDA / VoiceOver passes.
  • Audit rig: scripts/audit-health.mjs + Playwright viewport screenshots at 360 / 480 / 768 / 1024 / 1280 / 1500 px.

Build pipeline

Per-build derived data

Several scripts run on every build to derive secondary data from the corpus. All are plain Node ESM under scripts/; none require a database or an external API.

  • build-exports.mjs — txt / json / bib / ris per work, plus volume zips.
  • build-concept-graph.mjs — chord diagram of category co-occurrence.
  • build-concordance.mjs — phrase-level entity index across the corpus.
  • build-work-history.mjs — per-work git revision history feed.
  • render-og.mjs — 177 work cards + 3 post cards via Playwright at 1200 × 630.
  • ingest-conway.mjs — splits raw Conway volumes into per-work Markdown.
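As one example of the pattern, a history feed like build-work-history.mjs might parse git log output into JSON. A sketch under stated assumptions (the format string and field names are mine, not the script's):

```javascript
// Expected input: one commit per line, fields joined by a separator,
// e.g. from `git log --follow --format=%h%x1f%aI%x1f%s -- src/works/<slug>.md`.
function parseGitLog(out, sep = "\x1f") {
  return out
    .split("\n")
    .filter(Boolean)              // drop the trailing empty line
    .map((line) => {
      const [hash, date, subject] = line.split(sep);
      return { hash, date, subject };
    });
}
```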

Read the code

The site is open-source. Fork it, audit it, propose changes, or copy the structure for your own author archive. The code is permissively licensed; the underlying texts are public-domain.

From Project Gutenberg to web page

The 1900–1902 Conway Edition is a paper book. Getting it onto the web meant working from the Project Gutenberg e-text of the volumes, careful proofreading against that source, and structuring each work as a single Markdown file with YAML metadata. I'm still verifying the transcription end-to-end; corrections are tracked in each work's revision history.

  1. Source text

    The Project Gutenberg e-text of the four-volume Conway Edition. I did not scan anything — Project Gutenberg's transcription is the source of record for this site.

  2. Ingestion script

    A Node ingestion pipeline (scripts/ingest-conway.mjs) splits each volume into individual works using a hand-tuned WORKS manifest with distinctive start markers.

  3. Markdown + front-matter

    Each work becomes a single .md file in src/works/, with YAML front-matter for title, year, volume, category, subtitle, and excerpt.

  4. Verification (ongoing)

    Spelling, capitalization, and Paine's idiosyncratic punctuation are preserved as printed. Obvious typesetting errors are corrected silently; editorial interpolations are bracketed. I'm still checking the corpus through against the printed Conway text — corrections appear in each work's /works/<slug>/history/.

  5. Build + deploy

    Cloudflare Pages runs npx eleventy on push to main; Pagefind indexes the result; the static output deploys to the global edge.
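Step 2's marker-driven split can be sketched as a pure function. The manifest shape here is an assumption; the real hand-tuned manifest lives in scripts/ingest-conway.mjs:

```javascript
// Find each work's distinctive start marker in the volume text,
// then slice from marker to marker (last work runs to the end).
function splitVolume(volumeText, manifest) {
  const starts = manifest.map((w) => ({
    ...w,
    index: volumeText.indexOf(w.marker),
  }));
  return starts.map((w, i) => ({
    slug: w.slug,
    text: volumeText
      .slice(w.index, starts[i + 1]?.index ?? volumeText.length)
      .trim(),
  }));
}
```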
