I offered to write a new voting tool for my favorite book club. It seemed a good use of winter doldrums and new tools, and I had several ideas I thought would be good for the club. I wanted another easily open-sourced portfolio site for Butterfloat, and to try to show off some of the techniques I’ve been concerned with in my professional life that aren’t always obvious because most of that code lives (and dies) behind closed doors. I also had a couple other pieces of technology I wanted to play with, either to learn them better, or because they were cheap new options, or both.

I got further along in this nights-and-weekends hobby project than I expected I would, especially as someone who has had to explicitly take a “no moonlighting” stance for most of my career as a needed boundary to avoid burnout. Some of that is the new tools at my disposal. Some of that is because this project turned out to be fun in some surprising ways.

I’ve taken technical liberties in how almost everything works, but it has also been interesting, after the major efforts were done, to seek “Product Owner approval” from the current book club admin (who is not me). I didn’t want to take over as club admin; I wanted to make this something friendly for an existing admin to work with.

With a Little Help from My (Artificial) Friends

While I kept talking about wanting to build a cool new voting site for my favorite book club, I was rubbing up against my own “no moonlighting” boundaries. As users have started to settle into the new site, I have been making a lot of jokes that this site is brought to them especially thanks to my Junior Developers on this project: GitHub Copilot and Good Scotch.

This is one of the best kinds of jokes because it is surprisingly true. There’s a style of LLM coding that has come to be called vibe coding. One January weekend I realized I had a fun relative of vibe coding of my own within this project. My Mastodon subject line (or the Welsh spelling CWbject if you prefer) was even “Weird Vibes (Software Engineering)”, convergent evolution at work in describing the difference in a new workflow. (I was not aware of “vibe coding” at the time, but now it’s such a common term.) My vibe was different from the one making the rounds as the titular “vibe coding”. That January day I realized that my vibe was distinctly “bougie” in a fun way. I’d spent most of that coding session feeling as if I was “leaned back”, in a nice bathrobe, a rocks glass of Scotch in hand, code reviewing the output of GitHub Copilot, much more than “leaned in” and directly writing code as I normally would be on a day-job project.

I loved that vibe of leaned back, bougie code review enough that I was immediately joking about starting a new consultancy focused entirely on being “What if Masterpiece Theater was about Code Reviews?” I still kind of think that would be a cool company. I don’t know how much of a market there would be for “We rent nice mahogany offices, set up a good whiskey bar, wear cool robes and smoking jackets, and only review code, we don’t write it”, but if you are an investor looking for that opportunity, hit me up, I’ve got ideas.

Moonlighting has less risk of burnout when it has a distinctly different vibe from day-job work. My current day job does pay for GitHub Copilot on my projects, but isn’t currently encouraging me to work with a good scotch or bourbon in hand. Also, most of the things I’m working on at my day job aren’t easy off-the-shelf algorithms and “basic CRUD” in the way this hobby project has been.

The Schulze (or “beatpath”) Method and Rabbit Holes of Voting System Knowledge

A lot of us get interesting hyperfixations at various times that stick with us. One of my college ones was getting deep into the trivia of Robert’s Rules of Order, out of which I generated half an idea for a meeting presentation tool more tuned for parliamentary procedure than PowerPoint. (That is maybe a fun idea to revisit with LLMs, though that project ultimately crashed because I didn’t want to get into the licensing drama of modern Robert’s Rules, and a tool sticking only to the content of Public Domain versions wouldn’t be that exciting for running contemporary meetings.) Related to that one, and both a deeper rabbit hole and maybe more important to the years that followed, was a deep dive into the more arcane mathematics, game theory/economics, and software algorithms behind the many ways it is possible to calculate the votes of a group to find the most interesting winner.

The most common voting systems in our lives are all, in one way or another, “first-past-the-post” systems where a simple majority winner takes all. The very well known failure cases of first-past-the-post systems lead to common modern problems like “two-party systems” and ugly compromises like slate voting and “never vote for a third party even if that’s really what you want, because that ‘takes away’ votes from the next best candidate in one of the only two ‘allowed’ parties”.

There are lots of ways to solve this, but there are even more ways that feel like they solve it but are really just “first-past-the-post-with-more-steps”. Ranked choice (rank some or all of the candidates from favorite to least favorite) often gives the feeling of solving it easily, with relatively “easy” ways to assign “points” and build point systems whose math fits in a spreadsheet. No offense to anyone who likes to run a voting system like that, or to the interesting work that has gone into running such votes and tweaking point systems, but that ease is deceptive: it is surprisingly rare for a “points” system to, say, let the winner be the candidate that would win the most head-to-head battles against all of the other candidates.

The nerdy mathematical name for this goal, that head-to-head battles matter, is the Condorcet winner criterion, and you might be surprised how many voting systems fail it. It sounds like an easy thing to do, but it’s a lot harder in practice, mathematically, than it sounds. (Also, I would be technically remiss if I didn’t point out that the Condorcet winner criterion is not the only possible criterion separating a “good” voting system from a “first-past-the-post” one; there are several “competing” criteria with different trade-offs. I like the Condorcet criterion best because it allows for “surprising” winners, but winners that most voters can still agree should have won, which is to say it is very good at avoiding “two-party systems” where there’s only two choices and the rest is “throwing your vote away” and “spoiling” a loser’s chance.)
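To make the gap concrete, here’s a tiny hypothetical election, sketched in TypeScript, where a Borda-style “points” count and the head-to-head (Condorcet) comparison disagree. The candidates and ballot counts are invented purely for illustration:

```typescript
// Hypothetical five-voter election over candidates A, B, C.
const candidates = ["A", "B", "C"];
const ballots = [
  ["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C"], // 3 voters: A > B > C
  ["B", "C", "A"], ["B", "C", "A"],                  // 2 voters: B > C > A
];

// A Borda-style "points" count: 2 points for 1st place, 1 for 2nd, 0 for 3rd.
const points = new Map(candidates.map((c) => [c, 0]));
for (const ballot of ballots) {
  ballot.forEach((c, i) => points.set(c, (points.get(c) ?? 0) + (2 - i)));
}
const bordaWinner = [...points.entries()].sort((x, y) => y[1] - x[1])[0][0];

// Head-to-head: how many ballots prefer `a` over `b`?
const prefers = (a: string, b: string) =>
  ballots.filter((ballot) => ballot.indexOf(a) < ballot.indexOf(b)).length;

// A Condorcet winner beats every other candidate head-to-head.
const condorcetWinner = candidates.find((a) =>
  candidates.every((b) => a === b || prefers(a, b) > prefers(b, a)),
);

console.log(bordaWinner);     // "B": 7 points to A's 6
console.log(condorcetWinner); // "A": beats B 3-2 and C 3-2 head-to-head
```

The points table crowns B, yet a straight 3-2 majority would rather have A than B in every head-to-head matchup.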

I became one of those nerds with a favorite voting system. That system is generally referred to as the Schulze method, or the “beatpath” method. (It does pass the Condorcet winner criterion.) The Schulze method is a pairwise comparison system (every candidate is ranked against every other in head-to-head battles) that mathematically accepts and flourishes with ties, so it can be presented as if it were a simpler ranked choice system that allows ties. I think this is a surprisingly big deal: no one really wants to vote on every pair of choices (think “Round Robin tournament”), but present them a list of candidates to rank every one of them 1-5 stars like they are writing Yelp Reviews and they can “secretly” do all the work of a full pairwise ranking, with interesting ties, and have fun doing it.

Unfortunately the Schulze method is easier demonstrated than described, especially the math behind it. For a while there was a cool website called Modern Ballots, may it rest in peace, that I could point to for sample votes. It was rather close to a “Survey Monkey for quick Schulze ballots”. With that website gone (but obviously not forgotten) it has gotten a lot harder again to convince people to try Schulze method voting. The math is just hard enough that no one particularly wants to do it by hand (I don’t), and it isn’t easily done in an Excel sheet or Google Sheet, either. But the math is also so juicily easy for a simple program to automate. The meat of the Schulze method uses a slight variation on a simple textbook algorithm called the Floyd-Warshall algorithm. It is one of those algorithms you learn about early in an undergraduate study of computing, because it also has deep early-computing roots from the time when “Dynamic Programming” meant “solving a problem in place in its own existing data structure” rather than anything more exciting.

It’s one of those algorithms that you may code once or twice for class assignments and wonder if you’d ever actually need in the real world. It’s one of those algorithms that code competitions love to hide as the non-obvious solution to a word problem that turns into a quickly solved “known algorithm” project once you think long and hard about it. The classic Floyd-Warshall algorithm finds best paths in a directed graph (digraph); the variation Schulze uses finds the “widest path”. Very simply: you’ve got paths from places like A to B and A to C and C to B. These paths have numbers on them, for reasons that are usually some variation of “how wide is this path” (how many people can walk it side by side, how much it costs to pay the tolls, how slow it is on a busy traffic day, and all sorts of other word-problem variations like that). Is it a wider path to go directly from A to B, or should you take the scenic route from A to C to B? These sorts of questions are surprisingly common, so that undergraduate implication of “you should learn this because it is handy” turns out to be a real-world thing sometimes.
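A minimal sketch of that widest-path variation in TypeScript. The function names and matrix shape are my own illustration, not the site’s actual code; the input is assumed to be the pairwise “beats” strengths, with 0 where a candidate does not beat another:

```typescript
// Floyd-Warshall, widest-path variant, as used by the Schulze method:
// instead of minimizing a sum of edge weights (shortest path), maximize
// the minimum edge along a path (a path is only as wide as its narrowest hop).
function widestPaths(strength: number[][]): number[][] {
  const n = strength.length;
  const p = strength.map((row) => [...row]); // don't mutate the input
  for (let k = 0; k < n; k++) {
    for (let i = 0; i < n; i++) {
      if (i === k) continue;
      for (let j = 0; j < n; j++) {
        if (j === i || j === k) continue;
        // Is the detour i -> k -> j wider than the best path so far?
        p[i][j] = Math.max(p[i][j], Math.min(p[i][k], p[k][j]));
      }
    }
  }
  return p;
}

// Schulze ranking: i beats j overall when the widest path from i to j
// is stronger than the widest path from j to i.
function schulzeWins(p: number[][], i: number, j: number): boolean {
  return p[i][j] > p[j][i];
}
```

With a direct A-to-C path of strength 3 but A-to-B and B-to-C paths of strengths 8 and 7, the widest path from A to C is the scenic route through B, at strength 7.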

The nickname for the Schulze method as the “beatpath” method comes directly from this core reliance on the Floyd-Warshall algorithm: you are looking for the widest path of candidates, no matter how convoluted, that beats the most other candidates, in whatever surprising order the graph returns. If you’ve encouraged a lot of ties along the way, as with a simple, strict 1-5 “stars” choice, sometimes the widest paths are very surprising, but in a fun way (and a Condorcet criterion way) that few can argue with.

One of the reasons this hobby project turned into “bougie vibes coding” for me was that the hardest mathematical piece of the puzzle amounted to writing a comment that I was about to use the Floyd-Warshall algorithm adapted for the Schulze method, then letting GitHub Copilot spit out almost exactly the Wikipedia definition of Floyd-Warshall, translated from Wikipedian pseudo-code into the language I was actually working in, in something like my coding style. I took the time to review that it actually matched the definition and my code style, but I was very pleased that something I had written a bunch of times before (as mentioned) was so easily plugged in for me by “my Junior Developer”, Copilot. Not all programming is “use this very well known algorithm in this rather well known use case” (I’d argue most isn’t), but it was certainly exciting to see an example play out directly here, and to code review it in a bathrobe with a good Scotch in hand.

Passkeys Are So Nearly (Sigh) the Present

Years ago, after too many near misses with database leaks and other concerns, I decided I never wanted to be in charge of storing passwords in a database ever again. I hate passwords, personally. I especially hate the risk of someone’s one-and-only password becoming compromised due to my “silly fun hobby project”. I’ve been advocating to get rid of them in one way or another for many years. I was a fan of the original Blogger’s-friend version of OpenID and was sad to see it devolve into related-in-name-only standards like OpenID Connect. I was a proponent of Mozilla Persona and remain disappointed it didn’t succeed. (One of my few other moonlighting projects back in the day, fun to compare and contrast with this book club site, used Schulze voting and Mozilla Persona. It died when Rotten Tomatoes shut down their public API almost exactly a year into running that site, but if not for that it would have died soon after, when Mozilla Persona was shut down.)

Passkeys are the current hope for a present of “no passwords in my database”. It feels like they are ready for prime time and the mainstream; we just need a bit more education, a bit more pomp and circumstance, a tiny bit more polish in edge cases. It feels like we finally have a big enough multi-vendor coalition that it isn’t destined to die like Mozilla Persona did. Technically, it even feels like a slightly more polished version of what Mozilla Persona wanted to be when it grew up (a way for websites to get an ID directly from the user’s browser with no middlemen), though it is missing some of the things that made Mozilla Persona so nice to work with. A big one for me is that Mozilla Persona was designed so that an easy-to-verify claim of a user’s email address was directly associated with the key/ID. As someone who doesn’t want to pay for a transactional email provider for a hobby project, I would love for an email attestation on Passkeys to be easy to request and easy to verify with a trustworthy third party. I also understand why that’s not currently in anyone’s Passkey plans (and why it may have played a part in cross-vendor interest in Mozilla Persona being so low).

This site was my first attempt to implement Passkeys as what security jargon calls a Relying Party (RP). I used an off-the-shelf library for this called simplewebauthn and hit some issues where, despite the name, it did not feel as simple as I would have liked. This is partly because simplewebauthn, to stay simple, acts mostly as a lego kit with a gnarly pack of step-by-step instructions, including many “DIY here” steps, and a wave for good luck. To my benefit, I wasn’t doing anything particularly exciting or different or weird with it, and I appreciated that the lego kit didn’t constrain my choice of front-end “Framework” (I’d already made my choice, which I will get to later in this post), even as I kept hoping for an even simpler solution.

As much as I could complain about how many headaches it gave me in the thick of choosing it and writing the code to glue it all together, a lot of it really was just following the lego instructions, and a lot of that benefited from letting Copilot write the first pass, then cleaning up its assumptions and fixing things specific to my backend and frontend choices. Not exactly the same “lean back” experience as writing the core voting algorithm, but something similar. This was the first big piece of the application (if you can’t log in, how can you vote?), built before even the voting code, and a lot of the procrastination leading into the project was not wanting to build this code in the first place. I’m glad I got through it, and I think I did a strong job with the strange intricacies of logging in with Passkeys and only Passkeys, as well as the bootstrap phase of registering an account’s first Passkey. I can’t say it is the most secure implementation (and given the lack of a transactional email provider to actually verify emails, it certainly isn’t), but maximum security is also possibly overkill for a “silly fun hobby project”. Either way, I succeeded in not storing anything even resembling a user password in the site’s database.

Passkeys were one of my biggest anxieties in technical choices leading into the “MVP” demo with the Product Owner. I was pleasantly surprised at how well it demoed and at the overall acceptance. I was also lucky here that the Product Owner, with a background in information security, readily agreed: “oh yeah, no passwords”.

I also expected a lot more user support/training issues and/or complaints with Passkeys, and have been mostly pleasantly surprised by general user acceptance. Again, some of that is probably the luck that this particular book club skews toward somewhat tech-friendly backgrounds overall.

The biggest issue seemed to be an old (already out of security support) version of macOS claiming to support Passkeys but failing to register a new one in any browser using the system keychain. The workaround seemed to be to register on a recent enough iOS device; I’m told the Mac eventually synced that key and worked with it.

The next biggest issue has been Windows 10, which is in a similar place of “supports Passkeys” but with quirks, and only syncs keys between Windows devices. I had planned for this issue, as my own main development machine for this project is stuck on Windows 10, so I made sure my implementation supported registering multiple keys for the same email address (which I’ve always taken as table stakes in a Passkey implementation, though it’s interesting how many still don’t).

We also found out that the Facebook (embedded) browser is generally blocked from using Passkeys and/or doesn’t implement Passkey support on every phone OS. This has been particularly frustrating because sharing links on Facebook is common for the book club, which has used an FB Group as its central communications channel since it started. It’s easy to forget that not everyone distrusts the embedded browsers in platforms like Facebook, or even understands the difference between opening a link in an embedded browser and opening it in the system browser/their regular default browser. Facebook doesn’t help by making the “open in default browser” option strangely hard to find. (Today it’s behind an ellipsis menu. Who knows where it will move tomorrow.)

I also tried to mitigate some feedback ahead of time by suggesting logging in with an iOS or Android device first, because those have subtly the strongest implementations, are slightly more likely to be up to date (given mobile OS update policies and cell carrier enforcement of some of them), and generally have the best sync behavior within their respective ecosystems. Windows 11 can do the QR code dance to log in with an iOS or Android Passkey and then help you register a Windows Passkey. Some Linux setups can do that now, too. I got some feedback on the earliest wording of that suggestion that it made the website sound like it only worked on iOS or Android; I was happy to reword it, and I still feel like I’m looking for the best way to phrase that advice (without over-explaining it, as I do here).

A lesser pet peeve I have with Passkey UX is that I’ve implemented all the markup for the best “autofill” experiences, but it doesn’t light up, and I believe the reason is that most browsers still assume a password field. I’ve wondered if adding a dummy password field might coax browsers into showing the best UX, but that seems silly for a Passkey-only website, and I certainly don’t want to confuse my users with a vestigial password field that does nothing just to autofill their email address with a cute key icon next to it. I hope browsers improve the UX for “no password” sites, as much as I understand why the UX is currently focused on the chicken-and-egg bootstrap dance of upgrading sites that still use “traditional” passwords first.

Of course, some of this user acceptance feedback still feels like “early adopter” feedback, and maybe I should still brace for more pain in future registration waves as our least technical users find time to vote, or as we find new club members with more diverse technical backgrounds. But overall I think the big takeaways are: beware old Passkey implementations that only sort of work; know your workarounds for that (allow registering the same email twice; always support multiple Passkeys per account); and I wish there were more examples of Passkey-only sites in the open source zeitgeist to double-check implementations against. Hopefully this implementation will be of use to someone else next (even if indirectly, through better GitHub Copilot vibe coding, maybe).

Web Components with Butterfloat

I wanted another public, open source portfolio project for Butterfloat. I’m still quite proud of Butterfloat. I know it’s a “Framework” on hardly anyone else’s radar (and maybe can’t be, because it isn’t churning through backwards compatibility breaks fast enough 😼), but it’s one of those things where, if I build cool things with it, maybe I slowly convince more people to try it. In particular, both of the other public, open source sites in my portfolio were migrated from Knockout and were single pages (one because it was built to be single-page-application-like, the other only because it was a small demo with no need for a second page). For this site I knew I wanted an old school multi-page app, because I wanted to use a static site generator to build as much as possible ahead of time. I also knew going into this project that I wanted to start from the perspective of a (“traditional” for a static site generator [SSG]) “flat file database” of Markdown files in folders (with some modest YAML).

I’ve had some ideas for building an SSG with Butterfloat, but this project didn’t feel right for experimenting with them, particularly given the desire to use the Markdown-files-with-frontmatter paradigm. I did have fun discovering Lume as a minimalist SSG with all the basics I expected and needed.

This seemed like the right project to finally test building Web Components with Butterfloat in a classic multi-page architecture.

Overall, I’ve been very excited by the results of building web components with Butterfloat. I’ve mentioned many times that a guide star for Butterfloat has been “modern Knockout”, and it is in building these web components that I’ve felt most like I’ve been honoring the Knockout legacy. Knockout was critical in the early “Progressive Enhancement” web, and web components, when they work well, have a beautiful way of feeling like the endgame of Progressive Enhancement. Simple DOM elements get replaced with more interesting things if JS is available, as soon as it loads. That feels a lot like the best of Knockout’s experiences in the old days. Having the ability to do it with much less “flash of unstyled content” is a strong improvement: web component elements themselves have no default content or styles, and if you place things like “noscript” warnings inside them, it is a quick matter to replace them on web component startup. Additionally, template tags are better than what Knockout was doing with programming inside comments and hiding things with display CSS at runtime. Stamps (“server-side rendering” of Butterfloat static DOM to template tags) have been in Butterfloat for some time now, but Stamps definitely shine in the context of web components, building template tags at “compile time” ready for web components to pick up as soon as they are ready.

Going into building web components with Butterfloat, I was worried it would be more complex and/or harder than it turned out to be. Given some of the web component libraries I’ve seen for other “frameworks”, I expected to need a bunch of custom adapter code or things of that nature, but the Butterfloat component model and lifecycle sort of accidentally turned out to be perfect for running inside web components, and the amount of code needed to build a Web Component from a Butterfloat Component seems almost too simple to me. I’ve documented the bones of the pattern already and hope it helps other projects looking for a lightweight alternative for a Web Component “framework”.

Of course, to be fair, part of why this book club site has had such a simple time with Web Components is that I intentionally eschewed the Shadow DOM; there’s nary a Shadow Root in sight anywhere in the project. Turns out that is something you can just do. I know a lot of Web Components tutorials and discussions get very deep into the weeds of the Shadow DOM, and I understand why many libraries that build Web Components see a need for Shadow DOM tools, but I also think the over-focus on the Shadow DOM does an injustice to how simple Web Components are without it, and how much you can do with ones that have no Shadow DOM. I’m also a fan of letting CSS do what it does best, at a global level across the page, because that is a lot of power; the Shadow DOM is partly about distrusting page styles rather than taking advantage of them, and I choose to take advantage of them.

Building Butterfloat components with GitHub Copilot has been fascinating. Obviously Copilot is trained on a ton of JSX from React projects. For the most part a lot of that just works, though stylistically it’s nice to update it for shortcuts Butterfloat supports that React doesn’t (like class over className). In a couple places Copilot has been useful in helping me find React idioms that didn’t work in Butterfloat but could (and Butterfloat was quickly upgraded to support them), and ones that I still intentionally chose not to support, preferring more Butterfloat-specific idioms. This was also a fun case of watching Copilot pick up more and more of those Butterfloat specifics as the project grew, and presumably also from rich code search of my other public, open source Butterfloat projects.

My biggest pet peeve in building these web components is that the ESM-native, properly tree-shakeable version of RxJS is apparently currently trapped behind various standards-organization working groups, because the current maintainers want to wait for “Signals” and possibly browser-native Observables proposals to shake out first. I understand the reasoning behind that (align to standards for the next SemVer major release), but I don’t agree with it, because we’ve been on this merry-go-round before, with standards bodies almost doing native Observables and then giving up after lots of hemming and hawing and giant debates. The fact that this round also includes the drama of trying to standardize Observables-but-dumber “Signals” doesn’t give me a lot of confidence that things will turn out better this time than last time, and in the meantime, as great as esbuild is, I would still love to get real treeshaking from RxJS (Butterfloat’s one and only dependency).

Deno KV and Over-Engineering a Vote Engine for a Larger Scale than Necessary

For this project, the backend database I chose was Deno KV on Deno Deploy. I evaluated a lot of “serverless” deployment tools and their various database backends. There are a lot of great options today, and a lot of options with interesting marketing budgets and “fan bases”. I started exploring Deno earlier, when I was making sure Butterfloat got a good score on JSR, and found I liked a lot of the philosophy and developer-experience feel of it. (So much so that I’m debating making Butterfloat Deno- and JSR-first and suggesting that over traditional npm installs, but I’m in no rush to do that.)

I’d pretty happily recommend Deno Deploy at this point. So far the developer experience has been great.

On paper, Deno’s Deploy product seems to have fewer features than most of the more popular/hyped options. Its primary database is “just” a “simple” key-value store without a lot of the bells and whistles of the hosted SQL databases and/or “NoSQL” document databases that are “standards” in this area.

But Deno Deploy’s focus on its documentation spoke to me. My experiences with JSR and other parts of the Deno ecosystem have all been pleasant, and the Deno team seems to show a lot of maturity in how they think about developer experience. That focus on documentation means their website leads relatively straight to the developer docs, rather than steamrolling you through marketing hype page after marketing hype page, or blind trial sign-up actions, before you can read the developer documentation, as some other hosting providers do. (It also helps that Deno Deploy seems to have a very generous free tier compared to some of its peers.) The documentation alone makes up for how relatively “young” Deno Deploy is and how many parts of it are still appropriately and visibly labeled “experimental” or “unstable”. It’s counter-intuitive, but being able to clearly see those labels helps my impression a lot: to some extent, most “serverless” hosts still feel like most of their products should be labeled “experimental” or “unstable”, and Deno admitting to it feels more mature to me.

As for Deno KV, I also knew from past experience that the border between “just” a key-value store and “a document database” is really blurry, especially if you don’t mind doing a little work up front on your “primary indexes” (your key-building patterns) and rolling your own “secondary indexes” as needed. I especially love that Deno KV brings its own invisible “path separator” for its key namespaces: keys are arrays of parts rather than single strings. This makes it easier to build smart “primary indexes” while also providing tools to avoid obvious problems like key injection attacks.
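A toy sketch of why those structured key parts matter. The IDs here are hypothetical, and only the commented-out line touches the actual Deno KV API; the rest just demonstrates the injection problem with plain functions:

```typescript
// With naive string keys, a crafted ID containing the separator can
// collide with ("inject into") a different namespace entirely:
const naiveKey = (ns: string, id: string) => `${ns}/${id}`;
const collision =
  naiveKey("ballots", "2024/admin") === naiveKey("ballots/2024", "admin");
// collision === true: two very different intents, one identical key.

// Deno KV keys are arrays of parts instead, e.g.
//   await kv.set(["ballots", "2024", userId], ballot);
// Each part is its own unit, so no separator character inside an ID
// can cross a boundary:
const partsEqual = (a: string[], b: string[]) =>
  a.length === b.length && a.every((part, i) => part === b[i]);
const safe = !partsEqual(["ballots", "2024/admin"], ["ballots/2024", "admin"]);
// safe === true: the structured keys stay distinct.
```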

There is a lot you can do with “just” a KV, especially if you are willing to over-engineer things a bit. I’ve certainly picked up familiarity over the years from complex usages of things like redis caches and Local Storage.

I’m quite proud of the voting engine in this book club site. It’s designed for a massive scale that this particular club doesn’t really need, but it was exciting to write it that way, and it also leads to more than a bit of “penny pinching” in interesting ways that help keep the site within Deno’s current generous free tier.

The Schulze method is very amenable to a classic “map/reduce” pattern: each ballot is mapped to an adjacency matrix describing the “beats” graph for that user, those adjacency matrices are reduced through a simple matrix-sum aggregate, and the final summation matrix is mapped through the Floyd-Warshall algorithm to arrive at the final matrix of the widest paths of all the votes.
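A sketch of those map and reduce steps. Candidate counts, star values, and function names here are illustrative, not the site’s actual code:

```typescript
// Map: each ballot of 1-5 star ratings becomes a pairwise "beats" matrix.
// stars[i] is this voter's rating of candidate i. Candidate i "beats"
// candidate j on a ballot when it got strictly more stars (equal stars
// is a tie and counts for neither side).
function ballotToMatrix(stars: number[]): number[][] {
  const n = stars.length;
  return Array.from({ length: n }, (_, i) =>
    Array.from({ length: n }, (_, j) => (stars[i] > stars[j] ? 1 : 0)),
  );
}

// Reduce: element-wise sum of the per-ballot matrices.
function sumMatrices(a: number[][], b: number[][]): number[][] {
  return a.map((row, i) => row.map((v, j) => v + b[i][j]));
}

// Three ballots over three candidates, ties included:
const tally = [
  [5, 3, 3], // loves candidate 0, ties candidates 1 and 2
  [2, 4, 4],
  [1, 5, 2],
].map(ballotToMatrix).reduce(sumMatrices);
// tally[i][j] now counts how many voters rated i above j; this is the
// matrix handed to the Floyd-Warshall widest-path step to finish the count.
```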

The voting site backend uses Deno Queues to orchestrate all of that work. Additionally, users’ ballots are partitioned into (currently) 32 random buckets (flexibly more as needed), reducing the number of adjacency matrices that need to be resummed when a single user votes, as a simple complexity reduction to what we call O(log n). I’m proud of this bucketing, which is accomplished through a bit of maybe silly but handy “primary index” magic: the ballot keys use the reversed string of the user’s ID. User IDs are ULIDs, which put a timestamp up front and random entropy at the back, so reversing the ID gives random buckets (as opposed to time-of-insertion buckets, which are useful in other cases, for things like clustered indexes and log-oriented appends/merges). (String reversal of these basic 32-letter-alphabet ASCII strings is also easily reversible and/or repeatable, making it still easy to do ballot lookups by User ID.) As far as hash bucketing schemes go, it’s not a very complex one, but it works well in this case.
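The reversal trick can be sketched like this. The bucket-from-first-character mapping is my own illustrative choice, not necessarily the site’s exact scheme:

```typescript
// ULIDs put a timestamp up front and random entropy at the back, so
// reversing the string yields keys that spread randomly instead of
// clustering by insertion time.
const BUCKETS = 32; // assumed bucket count, matching the site's current 32

// Crockford base32 alphabet used by ULIDs (no I, L, O, U).
const ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

const reverseId = (ulid: string) => [...ulid].reverse().join("");

// The reversed ID now begins with a random character; mapping that
// character to its alphabet index gives a stable bucket number.
const bucketOf = (ulid: string) =>
  ALPHABET.indexOf(reverseId(ulid)[0]) % BUCKETS;

// Reversal is its own inverse, so lookups by user ID stay easy.
const roundTrips = (ulid: string) => reverseId(reverseId(ulid)) === ulid;
```

Because every ballot key in a bucket shares the same leading character after reversal, the bucket a ballot lands in never changes, and re-deriving the original user ID is a single reversal away.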

“Offline First” for the MPA World

One of the things I’ve picked up from past SPAs I’ve worked on is the importance of designing for “offline first”. Some of the applications I worked on for past jobs demanded “offline first” architectures, because if you are in the field examining some remote reach of a river, you aren’t likely to have good cell service, no matter how close we feel to having ubiquitous cell coverage in the US.

But the thing I found from “offline first” is that it generally “feels better”: it often feels like how modern apps are supposed to feel.

All of the CRUD (create, read, update, delete) work in the voting site is designed to be “offline first”, using Local Storage generously and some very simple “three-way merge” techniques. (CRDTs are fun, but not what I needed, because ballots are intentionally single-user.) This “offline first” approach may seem like overkill for a multi-page application, because “the next page” isn’t likely to load when offline, but people are surprised how well MPA applications get cached, and even when you don’t expect users to come to the site while offline, the experience of going accidentally offline, or just needing to “save a draft until I come back”, is fantastic if you’ve designed for “offline first”. Offline happens unexpectedly all the time, and people often want time to draft things and come back. In a multi-page application it especially helps give that “feel” of a single page application, letting you make changes in one page and see them reflected in another, without actually being a single page application.
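A minimal sketch of the field-level three-way merge idea. The draft shape and field names are invented for illustration, and real conflict handling would need more care than this:

```typescript
// `base` is the last state both sides agreed on, `local` is the draft
// sitting in Local Storage, `remote` is what the server has now.
type Draft = Record<string, unknown>;

function threeWayMerge(base: Draft, local: Draft, remote: Draft): Draft {
  const merged: Draft = { ...remote };
  for (const key of Object.keys(local)) {
    // Keep the local value only for fields the user actually changed;
    // everything untouched locally defers to the server.
    if (local[key] !== base[key]) merged[key] = local[key];
  }
  return merged;
}

const merged = threeWayMerge(
  { title: "Dune", stars: 3 },        // base: last synced state
  { title: "Dune", stars: 5 },        // local: user bumped their rating
  { title: "Dune (1965)", stars: 3 }, // remote: title cleaned up elsewhere
);
// merged keeps both edits: { title: "Dune (1965)", stars: 5 }
```

Because ballots are single-user, the worst case is merging a user’s own draft against their own earlier save, which keeps this simple approach safe.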

(Between Butterfloat components caching well, the speedy loading of Stamp-based components, and the fast loading from local storage of “offline first” data I’ve even heard surprised remarks that some users thought it was a very well optimized single-page application. I’m sure if I add CSS view transitions that illusion will feel complete. I may add CSS view transitions. As others are also saying, it’s a good time to start writing MPAs again. I do recommend considering “offline first” as a useful tool even for an MPA.)

Takeaways and Action Items

I’m still skeptical about the long-term viability of LLMs in software engineering. They aren’t going to replace most of what I do, and I’m not the sort to “vibe code” entire hobby projects (much less professional projects), because I’m still me, saddled with a goal to over-engineer for scale and reliability beyond the bare minimum. But hobby projects start to be something I want to do again when I can lean back on a cold winter’s day with a Scotch and code review some moron Junior Developer that’s great at copypasta and cribbing solutions from Stack Overflow and Wikipedia. It makes the real engineering easier when the grunt work is done so quickly. It’s nice to have “a team” for solo projects now. That also helps hobby work feel more like “senior-level” work: explaining to junior developers what to do is a good chunk of my day job, and Copilot takes instructions in similar (though not the same) ways.

I’ve got a couple wishlist items out of this project:

I think there’s some useful Action Items for other projects coming out of this one: