My Humble Pis

Eight Raspberry Pis ran continuously for five years powering Monte Carlo trading simulations, evolutionary neural network race cars, and a headless AI agent swarm. A practical look at small-scale distributed systems, orchestration tradeoffs, and what simple hardware can teach about scale.


I bought eight Raspberry Pis in 2021 and never turned them off. Not intentionally, at least. Oh, and PG&E did a few times. I had bigger problems during those events. Why did I buy them in the first place? They were cheap, they were interesting, it was COVID, and I was bored. And I could.

If you're not familiar with them, a Raspberry Pi 4 is a single-board computer about the size of a deck of cards. Quad-core ARM processor, 8 gigabytes of RAM, gigabit Ethernet, WiFi, and it runs Linux. People who cluster them together call the result a bramble, which is what you get when engineers lean into a raspberry pun.

My setup is a pair of brambles, four Pis each, connected through two Netgear GS305P PoE switches (one per bramble). Each Pi gets both power and data via PoE (Power over Ethernet). I put Ubuntu on a 2014 Mac Mini because macOS makes a terrible server. Power of the penguin!

The Mac Mini has one job that matters: it's the only machine with persistent disk storage. The Pis use SD cards (like your GoPro), which aren't meant to thrash reads and writes. Not for long, at least. This means that, other than the operating system and any software I install, the Pis don't need to use their local disks.

I paid roughly $50 per Pi (including the PoE HATs) and $50 per switch. The Mac I had lying around doing nothing after buying a new MacBook Pro. I was $500 into the project. The two brambles and the Mac Mini (no monitor) draw a combined 75 watts from the wall continuously at full power, roughly the equivalent of a light bulb. That's about $10 a month in electricity.

An a1.xlarge with the same CPU (but faster) at AWS costs $0.10/hr retail. Let's say I got four of those (roughly the same clocks and identical memory)... in two months I'd have paid more than my hobby setup cost.

Three projects have lived on this cluster over the past five years. One of them paid for the hardware by being my Robinhood broker. Another helped me learn neural networks while simulating race cars. The most recent one wrote this article. More on that later.


Taking (and selling) Stock

My first project, takeStock, started in May 2021 as a curiosity: if you can't predict which stocks will perform, regardless of how Warren Buffett shops, can you find trading rules that work regardless of what you pick? I explored this by running Monte Carlo simulations against historical S&P 500 data, looking for repeatable and predictable buying and selling patterns that resulted in net profit across the S&P 500 and NASDAQ. The system wasn't looking for winners; it was hunting for an algorithm.
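takeStock's actual rules are staying -redacted-, but the shape of the search is easy to sketch. Here's a toy Python version: sample random buy/sell thresholds, replay each pair against a price history, and keep whatever ends up with the most money. Everything here (the dip-off-the-peak buy rule, the threshold ranges) is invented for illustration, not the real algorithm:

```python
import random

def simulate(prices, buy_dip, sell_gain, cash=1000.0):
    """Replay one rule pair against a price series: buy after the
    price dips buy_dip below its recent peak, sell after it gains
    sell_gain over the purchase price."""
    shares, last_buy, peak = 0.0, 0.0, prices[0]
    for p in prices:
        if shares == 0:
            peak = max(peak, p)
            if p <= peak * (1 - buy_dip):
                shares, cash, last_buy = cash / p, 0.0, p
        elif p >= last_buy * (1 + sell_gain):
            cash, shares, peak = shares * p, 0.0, p
    return cash + shares * prices[-1]  # mark any open position to market

def hunt(prices, trials=10_000):
    """Monte Carlo pass: random thresholds in, best survivor out."""
    best = (0.0, None, None)
    for _ in range(trials):
        dip, gain = random.uniform(0.01, 0.2), random.uniform(0.01, 0.2)
        final = simulate(prices, dip, gain)
        if final > best[0]:
            best = (final, dip, gain)
    return best
```

The real system ran rules like these across the whole index, millions of times, and only promoted patterns that kept paying off across many tickers and time windows.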

takeStock v1 (oh dangit, I foreshadowed) was written in the Nickelback of programming languages: PHP. At the time, the Mac Mini was sorting through results and determining what it wanted to know more about. It constantly shifted parameters for the Raspberries to process. They were Gru's minions, and Gru had his sights set on the moon.

A month later, the system was working. Never mind ElastiCache, Redis, RabbitMQ, and assorted other software tools' attempts to derail it. My version control comment that day read: "damn it's fast."

I traded on the recommendations for a few months. The system gave advice; I executed the trades. I contemplated automation, but doing so on reverse-engineered protocols risked many things, including account suspension. So every couple of days I'd run a script that would -redacted-, then I'd open the app and do it.

By late 2021, COVID restrictions were lifting and travel was back on the menu. That will be another blog someday. Not the food; the travel. Or both. After a few weeks in Europe, checking out yachts in Barcelona on F1 weekend, among other things, I kinda forgot to check in on takeStock. I'm sure you can't blame me. But that didn't matter, because... sorry, foreshadowing again.


A Peek Inside AI

The second project started on a Friday night in September 2025. First comment in version control at 10:06 PM: "working, need to optimize." Sounds like a 10 PM comment. With a touch of "I'm exhausted, been working too long". And I had been.

This section might run a bit long. Bear with me; I'm going to try to explain some AI concepts to you the way I wish someone had explained them to me: simply. No guarantees on accuracy, though.

I had been studying AI. Not just chatbots, but how LLMs work. It's fascinating. Really, if you haven't explored it you'll be fascinated to learn that with a simple file filled with numbers you get capabilities like prediction, regurgitation, hallucinations, and the most shocking fact (to me at least), storage that's more like memory than text on a hard drive. I had to know more.

All those numbers are often drawn in a mesh that represents something called a neural network. It's like a brain. And totally deterministic. Put the same numbers in the front, get the same numbers out the back. Mostly, but that's outside our scope today. And probably outside mine for a few more years. Anyway, the idea was to train neural networks to race cars around a simulated track using only vision. Vision was represented by rays, or lines of sight, that reported how far the car was from a wall in front of it or to the sides. Combined with speed and some other factors, the car makes a decision.

Those numbers I mentioned, and the deterministic part, mean that at 50 MPH, centered on the track, 500 ft from the wall approaching turn 8, the car would do the same thing every time (which was typically to stand on the accelerator and crash). So the trick, like the stock program, was to find the right numbers to put into that file that would tell the car something like: maybe time to start slowing down, "see" into the corner a bit, then make a new decision. Lather, rinse, repeat; complete a lap, win a prize.
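To make that concrete, here's a minimal, made-up sketch of one of those decisions: a tiny two-layer network in plain Python, no libraries. The weight lists are the "file full of numbers"; the layer sizes and the (steering, throttle) output are my invention, not the project's actual model:

```python
import math

def drive(weights_hidden, weights_out, rays, speed):
    """One deterministic forward pass: ray distances plus speed in,
    (steering, throttle) out. Same inputs, same outputs, every time."""
    inputs = list(rays) + [speed]
    # Hidden layer: each row of weights produces one activation.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in weights_hidden]
    # Output layer: squash to the -1..1 range for wheel and pedal.
    return [math.tanh(sum(w * h for w, h in zip(row, hidden)))
            for row in weights_out]
```

The car in the story is essentially this function called in a loop: rays in, wheel and pedal out, physics step, repeat.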

The training approach: start a population of cars with random neural network brains (the file of numbers, also called a model), run them all on the track, multiple times, and grant point-based rewards for things like how long they made it without crashing, how fast they went, etc. At the end of a batch (I think mine were a couple hundred models a thousand times or so each) keep the best dozen or so performers. Add to that a dozen or so mutations (change some number slightly), and then "merge" some; e.g. combine the models, possibly with mutation. That's where the "genetic" AI term comes from (cover your eyes, kids).
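That whole paragraph compresses into a surprisingly small loop. A hedged sketch of one generation, where a model is just a flat list of weights (the elite count, mutation scale, and population size are illustrative, not my actual tuning):

```python
import random

def next_generation(scored, keep=12, mutation_rate=0.1, size=200):
    """One evolutionary step: keep the top performers, add mutants,
    and fill the rest with merged (crossover) children."""
    scored.sort(key=lambda pair: pair[1], reverse=True)
    elites = [model for model, _ in scored[:keep]]
    population = list(elites)
    # Mutants: copy an elite and nudge roughly half its weights.
    for model in elites:
        population.append([w + random.gauss(0, mutation_rate)
                           if random.random() < 0.5 else w
                           for w in model])
    # Children: merge two elites weight-by-weight, coin-flip style.
    while len(population) < size:
        a, b = random.sample(elites, 2)
        population.append([random.choice(pair) for pair in zip(a, b)])
    return population
```

Run the whole population on the track, score it, feed the scores back in, and repeat until something laps.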

How best to run what equated to many millions of simulations as fast as possible? Throw them at the brambles! You may recall we sort of let takeStock die for a while, so the brambles were free, and you bet I was going to use them.

I built the initial training infrastructure on K3s. K3s is sort of the Pi go-to for launching processes on Raspberries. It's a lightweight version of Kubernetes, an orchestration platform that's become the enterprise standard for running distributed workloads at scale, as well as the source of nightmares for system operators around the world.

The K3s setup evaluated car models beautifully. It tracked performance with statistical rigor. I was a proud papa. It just never produced any actual progress, or in genetic algorithmic terms, evolution. I was a sad papa. The system was so fast that all the forty billion other parts necessary to make things work were causing bottlenecks, exhausting resources, and generally not doing what they're supposed to do, no matter how carefully I assembled them. I swear, Kubernetes is like building a ship in a bottle with welding gloves on.

So I nuked it.

Nomad replaced K3s, running the training workers as native Python processes with simple, small, and straightforward configuration files instead of layers of YAML. It's a product I've worked with before, and actually helped the developers launch 2 MILLION tasks on a distributed cluster some years back to celebrate their 1.0 release. The setup that took hours in K3s took thirty minutes in Nomad. I'm not sure you can appreciate that properly without first building the wrong thing. The wrong path wasn't wasted; it proved definitively why the simpler architecture was correct. Happy papa was back.

After getting model fitness levels that equated to not constantly crashing, I got to learn about what I can only describe as the closest thing to Darwinian tactics I've ever seen first-hand. On a computer, dethroning FailBlog by quite a bit. Cars would sit at the start-finish line and collect points for every second they didn't crash, but they weren't moving. Cars would drive around in circles. They drove backwards. They drove me insane! To counter this you apply penalties as well as rewards. So I spent the next few weeks learning about that.
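The fix amounts to editing the scoring function until the loopholes stop paying. Something like this (the weights here are illustrative; the real ones took those weeks of tuning):

```python
def fitness(distance, avg_speed, time_alive, crashed, backwards, idle_time):
    """Reward progress and speed; penalize the exploits: crashing,
    driving the wrong way, and parking at the start-finish line."""
    score = distance + 0.5 * avg_speed + 0.1 * time_alive
    if crashed:
        score -= 50                  # crashing ends the run early too
    if backwards:
        score -= 2 * distance        # worse than not moving at all
    score -= 5 * idle_time           # sitting still earns nothing now
    return score
```

Every new penalty closed one loophole and, occasionally, opened another; that back-and-forth was the few weeks.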

Eventually they started circling the track. I started creating different "cars", like a Honda CVCC, a Tesla Plaid, and others, all with their own attributes (top speed, acceleration, etc.). Hopefully it comes as no surprise that the car that consistently showed the best performance was a Mazda MX-5 Miata, aka The Answer.

I named all the components of the service after people I know: the HTTP file service, the data analytics, the organizer of all the models. It was a nice feeling to see my son working so hard. I later regretted the decision when one of those services failed and I started shouting obscenities in their names. Lesson learned.

The project ended last week when the analytics service ran out of memory and took down the Mac Mini. I didn't know that was possible, but my faith in the penguin remains, and fortunately I named that service after a friend. Fitness scores had plateaued anyway. The cars could drive. Watching the Mazda on the track was a thing of beauty. The Tesla apparently didn't have as much regenerative braking as it thought, the MotoGP bike spent more time in the gravel traps than Marc Marquez two seasons back, and the DIY go-kart would saw back and forth at full speed for the same reason I built the brambles: because he could. I'd seen enough, anyway.


Bring Me to Life

In late 2025, before my Mac was violated by AI, I looked at my Robinhood account. I don't remember why. I think it was after watching that Super Bowl ad for ai.com (or whatever; shows how good the ad was) that reminded me of all the awful crypto ads in like 2022. That, in turn, prompted me to check my fractional bitcoin on Coinbase. The Robinhood app was right next to it in my "do not open for 5 years" app group on my phone.

My portfolio had returned about 70% over roughly four years; my finance friends say that's about 14% annualized. That slaughters a savings account and is competitive with the S&P 500 over the same period. I called my broker. And fired him.

The number was a genuine result. But more interesting than the return was what it represented: the system had sat untouched through four years of strange market history, unmonitored and unmanaged, and it hadn't blown up. If the strategy required constant intervention, the dormancy would have broken it. It didn't.

The algorithm worked. Unlike YouTube's.

What followed was a ten-week sprint that rebuilt takeStock into something I call Trade-ception. You must have realized by now how bad I am at naming things. It's a multi-tier validation system where each layer independently validates the recommendations of the layer below it. If tier 3 doesn't outperform tier 2, the system sweeps the parameters again until it finds the most opportunity at the lowest risk.
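I'm not publishing the real Trade-ception internals, but the control flow is roughly this hypothetical sketch, where evaluate and sweep_params stand in for the actual simulation and parameter search:

```python
def validate(evaluate, sweep_params, tiers=3, max_sweeps=100):
    """Each tier re-scores the candidate independently; a candidate
    is accepted only if every tier outperforms the tier below it.
    Otherwise, sweep the parameter space and try again."""
    for _ in range(max_sweeps):
        params = sweep_params()
        scores = [evaluate(tier, params) for tier in range(1, tiers + 1)]
        # Strictly increasing scores means every layer validated.
        if all(hi > lo for lo, hi in zip(scores, scores[1:])):
            return params, scores
    return None  # nothing survived every tier
```

The point of the layering is that a parameter set which only looks good to one tier never reaches the portfolio.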

The philosophy had shifted too, from 2021's get-rich-quick framing to something more specific: all but the bottom 10% of simulations above zero, the midpoint above inflation, and the top 10% beating the S&P. And boy are they ever.

I'm live trading again. I get a morning report on my phone through Pushover that tells me how my sims are doing, how my portfolios are doing, and any recommended action. Did I say portfolios? takeStock v2 runs five of them instead of one. I'm currently at +2%/mo. I didn't need that broker anyway.

In short, the Pis have, in a very literal sense, paid for themselves. More than a few times.


Worker nodes? But I need workers!

I met with a business recently that asked me to write a proposal for a new product in their catalog. Needless to say, I'm excited to no end about the opportunity. I spent a fair amount of time researching before sitting down with three friends to write the doc: Claude, with whom I have the longest relationship and who is super nerdy; Gemini, who seems to be better at constructive criticism than contributing; and Jippity, who is a great writer but prefers everything done one way.

As I was working with Claude I noticed it occasionally telling me it was waiting for something. I came to realize it was talking about agents, a mysterious chatbot feature akin to the far more well-known "agentic AI". Claude was telling someone else to do the work. I thought that was my job.

So I started to explore. Invoking agents in the Claude app is not as easy as instructing it to do so, or maybe it is and I couldn't find it, but I know that my dear friend Claude Code is configured with files (as all things should be), so I'd give it a try. Turns out, yes, mkdir -p .claude/agents and you're on your way. So I went.
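For the curious, an agent definition in Claude Code is just a markdown file with a bit of YAML frontmatter dropped into that directory. Mine looked something like this (names and wording reconstructed from memory, so treat it as a sketch rather than gospel):

```markdown
---
name: researcher
description: Gathers background material on a topic and writes findings to a shared file for other agents.
tools: WebSearch, Read, Write
---

You are a researcher. Given a topic, find reliable sources, distill
what you learn, and write your notes to a file so the writer agent
can pick them up. Stick to facts; do not editorialize.
```

The body below the frontmatter is the agent's standing instructions; the frontmatter tells Claude Code when to hand work to it.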

This is probably a good time to describe what exactly an agent is. Let me explain. No, there is too much. Let me sum up. An agent is like a role. I started simple with three agents: a researcher, a writer, and an editor. I asked the researcher to research the band Tool. It wrote its research to a file. I asked the writer to summarize that research for a 2-minute story. It wrote that story to a file. I asked the editor to read the story and give me feedback. It did. My third eye was open.

Next, I created a product manager. I asked the product manager to provide me a similar story, but this time about Falling in Reverse. It had the researcher scan the web, it had the writer compose the piece, and the editor offered feedback. Same as... wait, no, then the product manager invoked the writer again to make the adjustments, consulted the editor again, then wrote the file. I just about had a breakdown.

So I went to town. First, I confirmed my crazy idea on my MacBook: I invoked the product manager "headless", meaning no interaction was possible. Its definition file included instructions to launch other agents headless as well. The objective given was to write a 5-minute piece on the Falcon 9 rocket. It went through 7 waves with task counts varying from 1 to 4. This I didn't expect. In 49 minutes it generated a doc whose first paragraph I'll just quote:

Twice a week or so, a 15-story rocket booster lands itself. It flips around at hypersonic speed, fires its engines in reverse, deploys four landing legs, and sets itself down at a walking pace on either a concrete pad in Florida or an autonomous drone ship somewhere in the Atlantic. Nobody tweets about it anymore.

This is one of the most impressive things I've ever seen. The rocket, of course, but that opener's not bad either. I was on to something. So I kept building, eventually moving to the brambles. Because of course I did.

The system now has eleven specialist agents with specific, bounded roles; no agent does another's job. An agent doing two jobs does both worse. When a task is too large or an agent wants independent perspectives on the same material, it can write new task files to the shared directory, which spawns fresh agent sessions automatically. The reader agent, for example, dispatches persona-specific sub-readers who each evaluate the piece from a different vantage point; their observations get synthesized before reaching the PM.

There's a saying about complexity, but it's really difficult to remember and I can't be bothered to look it up right now. Suffice it to say I think the complexity knob is rapidly approaching eleven, so to simplify things a bit I added a web interface that I can spawn a process from. Did it last night from bed after setting my alarm. Had it write a piece on Minecraft redstoners (the developers of the gaming world). 5 pages in 38 minutes and not half-bad. Could fool my mom. What? She reads these? Love you mom!


So that's three projects in five years on a compute platform simple enough to cost half a laptop and complex enough to simulate the cloud.

I found a way to trade stock without actually having to know anything, I learned how to get a car around a track without having to actually go to a track, and I learned how to write an article without actually writing an article.

I'm actually writing this closing at a McDonald's drive-thru. I didn't expect the system to finish its revision so quickly.
