Hosting from home
May 9, 2025
This website is now being delivered from a Raspberry Pi B+ that, as I’m writing this, sits in the back corner of the bottom rung of a bookshelf in my home office. If you aren’t familiar (you would be hard pressed to find a good reason to be familiar), a Raspberry Pi is a little computer, and I’ve owned this chunk of silicon for several years without coming up with any good use for it. I find the idea of digital independence a little bit romantic, so I’ve been searching for an excuse to administer some of my own software for a while now. Tempting as it was to buy a bunch of brand new PC parts, build out an entire server rack, and then start negotiating an AC installation with my landlord, I somehow managed to quiet the consumptive demons that rage ceaselessly in my psyche for long enough to make one economically rational decision: I am experimenting with a new hobby using a thing I already own.
Let’s talk about this computer for a second. This little guy, bless his heart, is a proper wimp. He’s got 512MB of RAM and a single CPU core running at 700MHz. To put that in perspective, an iPhone 8, the last one with a home button, the one you kept using for as long as you possibly could because you didn’t want to lose that button, the one that might still be in a junk drawer somewhere in your parents’ house, had 2GB of RAM and a 6-core CPU, where four of those cores ran at 1.4GHz, and two of them ran at 2.4GHz. If you could SSH into one of those bad boys, it would run circles around my little toy. But my toy is mine, and I love it for its weakness.
By the way, and while we’re already doing asides, I’m aware that the word “server” can trip people up because it sounds so technical. I’m going to use that word a few times in the rest of this post, and when I do, I want you to remind yourself that it always either means “computer” or “computer program.” A server’s job is to serve something. We refer to computers or programs as servers when their primary purpose is to provide info or do tasks for people on demand. An example of this is when a program’s job is to respond to HTTP requests (like the ones your browser makes) with HTML files—doing so makes that program a web server! And often, we’ll refer to the computer that is running that program as a server as well, particularly when that computer’s only real job is to run programs which are serving things. OK, now you know what a server is, and you won’t need to be impressed or intimidated the next time you’re stuck talking to some snooty software engineer.
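In fact, if you have Python installed, you’re one command away from running a web server of your very own; for example, this serves whatever directory you run it in on port 8000:
python3 -m http.server 8000
That’s it; that’s a server.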
Back to the point. Running my own server, and especially using this specific computer as a server, is not a good idea. It makes nearly everything about the site worse. My site’s previous host was far cheaper, more performant, and more reliable than Baby’s First Home Server. I’m paying for the (small, but not zero) power that the Pi draws, it will probably crash if I get any serious traffic, and the site will simply no longer exist if I lose power. Pages likely load slower than they did before, and it’s now much easier for me to break the entire setup. Bad business all around, I’d say.
But this site is a personal endeavor, and I’d like it to feel as personal as it can. What is a website, really? It’s so hard to remember that it is a real, physical thing. Data on a disk, copied over a network to someone else’s computer, displayed by an application of their choosing in a manner that they can control. In the case of this site, that disk is a real object in my home that I can see and touch! Every time I step into my office, I see my Pi, and I think, “there’s my website.” When I push the update that contains this post, I’ll be writing tens of thousands of 1s and 0s to a device that’s currently 20 feet away from me; just tossing some data upstairs, really. When you download it and read it, it’ll be as if you’ve shown up at my house and knocked on my door to see what’s up. Hey neighbor, glad you stopped by.
And besides, I’m not really worried about how weak my server is. Outgrowing the Raspberry Pi would be a great problem to have, as it would mean I’ve either got too much software to run or too large an audience. I’m certainly not anticipating the latter. Just to be extra safe, though, I do still have a CDN in front of the entire site, so the Pi doesn’t actually have to respond to every single request. Now that I’m in it, I do feel somewhat committed to, and energized by, the self-hosting lifestyle. So if hordes of adoring readers do ever overwhelm my current server, I’ll need to decide if I’d rather build/buy a beefier PC and keep hosting from the house, or rent out a VM in the cloud. Either way, I’m sure my wife will approve the expense reports.
Postscript: Technical details, for the freaks
What’s up, freaks. You want to know how this thing runs? Ok, here goes!
Migrating my site from Cloudflare Pages to the Pi was not quite as difficult as I suspected. As I’ve already discussed elsewhere, SvelteKit already builds this site into a bunch of static files. So the biggest impact of this change is that when I want to push an update to the site, I now need to copy all those files to some directory on the Raspberry Pi instead of uploading them to Cloudflare. So I whipped up a little bash script to build the site, copy the output over to a directory with the current timestamp in its name, and then update an “active” symlink to point to that new directory. The script also deletes all but the last 3 revisions of the site, so I still have an emergency backup/revert option if necessary, but don’t waste too much disk space. Those last steps look like this:
echo "Copying files to $rev_dir"
ssh "$server" "mkdir -p $rev_dir"
scp -r build/* "$server:$rev_dir"
echo "Making revision $rev the active deployment"
ssh "$server" "ln -sfT $rev_dir $active_link"
echo "Deleting all but the last 3 deployments"
ssh "$server" "cd $deployment_dir && ls | grep -E '^[0-9]+$' | sort -nr | tail -n +4 | xargs -I {} rm -rf $deployment_dir/{}" || echo 'No old deployments to delete or cleanup failed'
I’d also like to say that I wrote this script by hand, but unfortunately I can’t say that, because I did need Gemini’s help to do some CLI flag parsing (I wanted a -v flag for verbose logging).
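If you’ve never had the pleasure, bash flag parsing is exactly the kind of fiddly boilerplate you outsource. The gist is a getopts loop, roughly like this (a sketch, not the exact code Gemini produced):
verbose=false
while getopts "v" opt; do
  case "$opt" in
    v) verbose=true ;;                       # -v enables verbose logging
    *) echo "Usage: $0 [-v]" >&2; exit 1 ;;  # anything else prints usage and bails
  esac
done
shift $((OPTIND - 1))                        # drop parsed flags so $@ holds the rest

$verbose && echo "Verbose logging enabled"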
I then configured nginx to serve these files with the following configuration file:
server {
    listen 127.0.0.1:8001;
    root /home/chard/chard-ooo/deployments/active;
    index index.html;

    location / {
        try_files $uri $uri.html $uri/ =404;
        add_header Cache-Control "public, max-age=3600, no-transform";
        expires 1h;
    }

    # Optimize cache headers for static content
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 7d;
        add_header Cache-Control "public, max-age=604800";
    }
}
Note that it’s serving files from ~/chard-ooo/deployments/active, which is that symlink I was talking about before. It also includes some Cache-Control headers on all its responses; these max-age values could probably stand to be even longer, but let’s worry about that if we ever have to, eh?
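If you want to see those headers for yourself, curl will happily show you what any URL sends back (swap in a page of your choosing):
# -s keeps curl quiet, -I asks for the response headers only
curl -sI https://chard.ooo/ | grep -iE 'cache-control|expires'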
The last piece of the puzzle was getting this thing exposed to the real damn Internet, which is actually a bit of a tricky proposition from a residential address. The reason for this is that static IP addresses are somewhat hard to come by, and ISPs don’t like to give them out for free. So they actually rotate IP addresses among all their customers’ routers, to make sure they’re reserving as few as they can in the long run. So if you made a DNS record to point your domain at your current public IP address, that might only last for a day, or a week, or a month until your ISP decides it actually needs that address for someone else and gives you a new one.
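If you’re curious what address your ISP has handed you at this very moment, plenty of services will just echo it back to you, for example:
# Prints the public IP currently assigned to your connection
curl -s https://icanhazip.com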
If you’re hosting on a Big Boy Cloud Provider, you can just click a few buttons to get a static IP for your VM. But the rest of us would need to either upgrade to a Comcast Business plan, or run a DDNS process of our own to update DNS records whenever IP addresses change. I don’t really love either of these options, as either one of them would require opening up my home network to be publicly accessible - yikes. So instead, I’m using a Cloudflare Tunnel to maintain a connection from my server directly to Cloudflare (using an outbound connection rather than an inbound one, so nothing is accessing my network directly). This allows me to assign DNS records to ports on my home server without exposing the rest of my home network. It definitely takes some of the “self” out of my self-hosted setup, but that’s a compromise I’m pretty happy to make.
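For anyone who wants to replicate this, the tunnel boils down to installing Cloudflare’s cloudflared daemon on the Pi, creating a named tunnel, and pointing it at the local nginx port. The hostnames and paths below are illustrative rather than a transcript of my exact setup, but the shape of it is roughly this:
# One-time setup: authenticate with Cloudflare and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create chard-ooo

# Create a DNS record that routes a hostname through the tunnel
cloudflared tunnel route dns chard-ooo chard.ooo

# Tell cloudflared to forward tunnel traffic to the local nginx port
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: <TUNNEL-ID>   # UUID printed by "tunnel create"
credentials-file: /home/chard/.cloudflared/<TUNNEL-ID>.json
ingress:
  - hostname: chard.ooo
    service: http://127.0.0.1:8001
  - service: http_status:404   # catch-all for anything else
EOF

# Run it (in practice, install it as a service with `cloudflared service install`)
cloudflared tunnel run chard-ooo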
One last thing: I got curious about how much load my Pi can actually handle, and thought it would be fun to see it in real time. So I vibe-coded the world’s worst Datadog clone and exposed it via that same CF tunnel, and now the whole world can access it at stats.chard.ooo. Let’s all watch my server crash together!