Self-Hosted Diaries (2025.1)
Containerization (and in particular, Docker) is a pretty hot thing now (well, it has been for a little bit). I had previously dabbled with some basic hello-world containers but I didn’t get it. Truthfully, I still don’t get it, but I figure if I give it an earnest effort, maybe one day it’ll just click and make sense. Or at the least, I can play with some neat technologies.
Prior to starting down the self-hosted journey, I made the jump to a slightly more structured networking setup, so this was the next logical progression.
In the beginning...
A long time ago, I was in the market for a password manager and the big player in that space at the time was LastPass. I was no stranger to password complexity, but I was looking for a usability improvement (as compared to a local passwords file). To me, LastPass scored very highly on usability and not long after, I migrated to their Premium plan and then a little time later, their Family plan.
Generally speaking (and despite a few high-profile security breaches), I was fairly happy with the service. What I wasn’t too impressed with was the crazy-high pricing.
Pricing snapshot of some password managers, Jan 2025
But that’s just how these online services work, right? They wow you with a fantastic onboarding experience, but to get out? Well, that’s going to be a bit more involved, now that you’re invested in their ecosystem. Looking at this list now, it’s kind of wild that LastPass represents one of the ‘more affordable’ options.
Imagine my surprise when Synology announced a password service, C2 Password, in 2021 and I saw their pricing:
Pricing snapshot for Synology’s C2 Password service, Jan 2025
I had to do a double-take on the numbers. For just a tiny bit more than the monthly cost of LastPass, I would get a year’s worth of service. Sure, naysayers have/had some very valid points to consider:
This might be introductory pricing and they will jack the prices up down the road
They might decide this service isn’t worth it for them and pull the plug
Can you trust them?
At the end of the day, for that kind of a price drop, it was worth at least trying. It took me about three days to fully cut over to C2, and I never looked back. For $5 a year, it’s pretty unbeatable; no complaints.
But I had a nagging thought:
“Okay, C2 Password is great — but can I somehow cut back on that $5/year expense?”
And it was this goal of saving $5 that set me off down the rabbit hole of self-hosted services…
Getting started
Talk of self-hosting usually brings with it a few predictable questions.
What is self-hosting?
As the name suggests, it refers to running your own services, on your own hardware, on your own terms. While this isn’t an exhaustive, definitive definition, it’s pretty close, I think. A few easy examples:
A business might have an ‘on-premises’ email server
Someone might have their own multiplayer server for a game
These are pretty clear cut: you can run whatever version of the software you want, with whatever configuration you want, any way you want.
A few grey-area examples:
You pay Amazon for an EC2 instance to run your own services
You rent a dedicated game server
These are a bit more of a grey area because the amount of ultimate control you have may be limited: Amazon may discontinue your server instance, or the game server must run a specific version or configuration of the game.
Self-hosted can mean different things to different people, but as more and more restrictions are placed on when, how, where, etc. you run your software, the further it moves away from the idea of being ‘self-hosted’.
Why self-host?
Sometimes phrased as ‘what’s wrong with using <existing service>?’. The short answer: you control all aspects of the service, and that can mean different things for different people:
You control what version of the service you run. Lots of us have encountered a scenario where a company has rolled out an update we don’t necessarily agree with: either they change some functionality or, worse, they remove it. When you self-host, you have options: you can simply not update, you can skip/disable/bypass the change, or you can implement fixes/workarounds independently (see the version-pinning sketch after this list)
You control your data: if you want to take a quick backup right now? You can. If you want to purge all traces of everything and start from scratch? You can. And if you do (or don’t) want your data to feed AI training models? You’re in control. More on this in a bit.
Generally speaking, you control when your service has downtime (if applicable). For example, gamers love it when game companies roll out big patches on Friday night…
The cost angle. Services may be affordable now, but [a] they add up and [b] we all know that as soon as they can, the subscription cost will go up. On the note of free services (Google, I’m looking at you), not only are we at the mercy of features being paywalled down the road, with ‘free services’, we are the commodity: we are being profiled for ad targeting.
Self-investment. Suppose there’s a service you’re going to use anyway - something like ChatGPT, for example. Without getting into tinfoil-hat territory, the more you use their services, the more it benefits them (rightfully so) - but what if you could run your own and thus benefit your own training pool? While you’re not likely to outcompete their resources, this does go a bit towards minimizing your risk when they inevitably change the service (either the functionality or the price, or both)
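To make that version point concrete, here’s a tiny (entirely hypothetical) Docker Compose sketch; the service name and tag are made up, but pinning an explicit image tag like this is how you keep the ‘when do I update?’ decision in your own hands:

services:
  example-service:
    # A pinned tag: nothing changes until *you* change this line.
    # 'example/app:latest' would hand that decision to the vendor.
    image: example/app:1.2.3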
At a high level, just looking at the streaming industry over the last decade should give a pretty clear (and dire) picture of how the next decade will roll.
First, prices will go up and product segmentation will try to carve up the consumer base to establish new ‘norms’.
Next, companies will reduce service, coverage, quality, etc. (and spin it as an effort to be equal to everyone or how it’s ‘no longer viable to support’ — woe is them)
Repeat for more profit.
What are the downsides?
Self-hosting isn’t all rainbows either: sometimes features can be hidden, removed, changed, etc., and while it may (and likely will) be possible to mitigate these changes, not everyone has the time, resources or care to go through the damn hassle.
Truthfully, if you’re not living and breathing this space, it can be really intimidating and frustrating. I cannot begin to count the number of times I’ve yelled “JUST #&^$%! WORK” while figuring out some nuance about a service
Yes, you’re 100% on the hook for your own uptime, data security, etc. If you’re self-hosting for a business, this can mean added costs in the way of fail-over, redundancy etc. There can be a lot of upfront costs that may sting in comparison to a “small monthly fee”
You’re 100% on the hook for your own backups too. Obviously this would be covered by the previous bullet, but given how lax a lot of people are with backups, I thought it was worth its own bullet.
Worth noting: not everyone’s spouse, family etc., is as open-minded and forgiving as mine. I’m pretty lucky in that respect…
Where should I start?
For a lot of people, things that run through a web browser make for a great place to start (it’s wild, the breadth and depth of stuff that can run through a browser). I get a lot of inspiration from curated lists of self-hostable software.
My self-hosting setup
At a glance
I started self-hosting via Container Manager (essentially, an implementation of Docker) on my Synology NAS. The first container I ran was Portainer which I use as a wonderful front-end for managing all of my actual containers. There are a ton of Synology-centric guides, but I used a combination of guides from Wundertech and MariusHosting when I got started. After I got Portainer up and running, I went headfirst down the rabbit hole, voraciously trying out new tools left, right and center.
For a while now, I’ve held steady with around 100 or so containers spread across two Docker hosts. For my second Docker host, I went with a Raspberry Pi 5 for a couple of reasons:
There are more than a few services I can’t run on the Synology NAS due to its OS using an old kernel version. For my NAS, the specific error to look out for is ‘failed to get urandom’ or something related to random numbers or cryptographic seeding. Thankfully, most of the services that I couldn’t run on the NAS, I was able to run on the Pi (note the architecture difference: x86 vs ARM)
Longer term, I would like to get a dedicated (x86) mini PC to sit above my current desktop to be my primary Docker host. By moving the bulk (or ideally, all) of my containers from the NAS to this higher-performance Docker host, I can realize a few benefits:
From time to time, I need to reboot my NAS (say, for patching) — the extra load of firing up a ton of Docker containers on reboot is brutal (it can add 30 minutes to the boot time, ha)
I prefer to use my NAS purely as a storage-only device and to let computing happen independently of that
A standalone Docker host will offer a ton more performance and, given that I have dedicated connectivity (USB, power, networking, etc.) for this machine directly built into the upper cubby, it’ll be easy to interact with
In theory, I should be free from the old-Linux-kernel limitations
Flow
Looking at the flow and connectivity of my services, I’m set up something like this:
The bits and pieces of my self-hosted setup (Jan 2025)
The two big pieces of the puzzle (at least for me) were getting a reverse-proxy and local-DNS in place.
Reverse-proxy: think of a reverse-proxy as a switchboard operator for network traffic.
This is the system that routes traffic from a domain to the appropriate server (or in this case, the Docker container):
serviceA.domain.com → docker-container-A
serviceB.domain.com → docker-container-B
serviceC.domain.com → docker-container-C
You can have multiple incoming URLs point to the same server/container. This can be handy if you haven’t figured out what you want to call your services, or if you want to make a quick test mapping without affecting live mappings:
serviceA.domain.com → docker-container-A
service-A.domain.com → docker-container-A
a.domain.com → docker-container-A
a-service.domain.com → docker-container-A
The reverse proxy can also handle wrapping everything with SSL certificates (handy for mitigating the pesky “this site isn’t secure” warnings that browsers throw at you) and can be integrated with authentication services like Authelia. Some/many (but not all) services have integrated user authentication, but:
You can use a standalone authentication service like Authelia to cover the services that don’t and
You could even use Authelia to cover the services that do have their own integrated authentication, giving you the option of disabling the built-in authentication. This would turn Authelia into something of a sorta-single-sign-on.
Local-DNS: we’re familiar with what DNS is — it’s the magic that converts a memorable name like domain.com into an IP address like 1.2.3.4. Here, we use DNS to make ‘custom shortcuts’ that all point to the same domain.
serviceA.domain.com → domain.com
serviceB.domain.com → domain.com
serviceC.domain.com → domain.com
Hooking up the reverse-proxy
Of the two services, NGINX Proxy Manager (NPM) running on the NAS (specifically) was the [much] harder of the two, due to Synology quirkiness: Synology claims ports 80 and 443 for itself. You can use the built-in Synology Reverse Proxy (which I tried out - but it’s clunky and I didn’t like it). It’s a bit hazy, but at a super high level:
Create a MACVLAN that binds to a physical network port (thankfully, my NAS has several)
Create a bridge network
On the NPM Docker compose, assign both networks (there’s a sketch of this below)
Creating the MACVLAN
For me, this was the more complicated of the tasks (this was my starting point).
At a high level
Enable remote access to your NAS
SSH into your NAS
List your network cards (ifconfig) and pick one (e.g., ovs_eth1)
Identify the gateway IP for that NIC (e.g., 192.168.10.1); this will be listed in the ifconfig output
Set an IP range for the network we are creating. For me, I only plan to have NPM on this special network, so I only needed one IP (e.g., 192.168.10.15/32)
Pick a name for your network (e.g., mymacvlan)
Create the network (note: Docker wants a --subnet to go along with the gateway, so include one that matches your LAN):
sudo docker network create -d macvlan -o parent=ovs_eth1 --subnet=192.168.10.0/24 --gateway=192.168.10.1 --ip-range=192.168.10.15/32 mymacvlan
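And to cover steps 2 and 3 from the high-level list (creating the bridge network and assigning both networks), here’s a sketch of what the NPM compose file can look like, reusing the example names and IPs from above; your image tag, paths and addressing will differ:

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      # Bridge side: lets NPM reach the other containers
      mybridgenetwork:
      # MACVLAN side: NPM gets its own LAN IP, so it can own
      # ports 80/443 without fighting Synology for them
      mymacvlan:
        ipv4_address: 192.168.10.15

networks:
  # Compose can create the bridge network for us
  mybridgenetwork:
    driver: bridge
  # The MACVLAN we created by hand above is referenced as 'external'
  mymacvlan:
    external: true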
Hooking up the networking bits for Nginx Proxy Manager
Now, within NPM, you can create a proxy host for your example service:
The domain names here line up with whatever CNAMEs we’ve created in our local DNS
The forward IP just needs to match the gateway for our mybridgenetwork
The port will need to match whatever the port number is for your Docker container
Example of defining a reverse proxy entry
I am, by no means, an expert on this (not even close) and there may be better ways to do things, but this was what ended up working for me after trying a few options and more importantly, a ton of cursing. Your mileage may vary haha.
Services spotlight: what kinds of stuff do I host?
Vaultwarden
My original objective had been to self-host a password manager, so this was one of my earlier projects. I first set out to host Bitwarden, but I found that a bit more stressful to deploy than I liked, so I gave Vaultwarden a spin and it was a dream to get running. Vaultwarden is compatible with the extensions/apps made for Bitwarden, so this was a win all around.
Migrating from C2 was a different story, though: there was no easy way to export/import, so I had to do things manually. There is a script that promises to help alleviate this pain, but I found the manual route wasn’t too bad once I got underway - it was a good opportunity to rotate passwords, increase complexity and, where appropriate, close out accounts. All in all, it took me a couple of days to fully migrate over.
Having the local desktop app makes managing your credentials en masse much more pleasant - your pace is limited only by your keyboard and mouse, not by any webserver refresh/fetch.
Deploying Vaultwarden was a great intro project and allowed me to meet my goal of saving that $5/yr.
Notes
For most users, you’ll want to allow this to be exposed to the outside in some way (with all of the security gotchas that go along with that). A Cloudflare Tunnel setup can be a fairly low-effort and low-stress way to expose this service.
When connecting the Bitwarden extension to your server, note that you’ll need to switch the mode drop-down to self-hosted
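For the curious, a minimal Vaultwarden compose sketch (the host port and data path here are placeholders; pick your own):

services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      # Leave signups enabled only long enough to create your accounts
      - SIGNUPS_ALLOWED=false
    volumes:
      # All of Vaultwarden's state lives in /data: back this folder up!
      - ./vw-data:/data
    ports:
      # Vaultwarden listens on port 80 inside the container
      - 8080:80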
Actual Budget
For a few years, I’ve used YNAB as a way to track expenses and do budgeting. YNAB wasn’t cheap, and for me (at least for a time) there was some value in having transactions auto-import. At some point though, something changed: it could be the banks themselves, the import middleware or YNAB itself; frankly, I don’t know, but the process of importing became a colossal pain in the butt (mostly stuck on minutes-long loading screens). Between this frustration and YNAB increasing its prices, this was a kick in the pants for me to take a look at something — anything else.
I settled on Actual Budget and it was a breath of fresh air - they even have documentation for migrating from YNAB! Having this service run locally where everything is stupid-fast and responsive makes staying on top of transactions very easy (once you get over the initial hump of transactions to import). I made the cutover right at the end of the year so it was all kinds of perfect to start my Actual Budget journey on January 1st.
Deploying Actual Budget saved me an eye-opening $110/yr (!!) compared to YNAB.
Notes
Actual Budget doesn’t support multiple logins directly yet. For couples/families, you can have a single server/login, but multiple files. Each person sticks to their own file (there’s no connectivity between files).
If you absolutely need your files to be private, then you will need to spin up a separate Actual Budget instance for each person (also probably worth looking at Encryption settings)
I don’t see a reason to have this exposed to the outside, so for me, this is only hosted internally
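Again for the curious, a minimal Actual Budget compose sketch (the data path is a placeholder):

services:
  actual:
    image: actualbudget/actual-server:latest
    restart: unless-stopped
    volumes:
      # Budget files and server config live under /data
      - ./actual-data:/data
    ports:
      # actual-server listens on 5006 by default
      - 5006:5006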
Vikunja
Over the years I’ve struggled to find a tool for task-tracking/to-dos, and nothing quite had the features I wanted. Google Tasks was kind of serviceable until the forced integration with Calendar (and the subsequent killing of the standalone web app). The main thing I wanted was for my tasks to be rich: bullets, embedded files, links, subtasks, related tasks, categories, historical tracking, etc. The overwhelming majority of apps in this space are mobile-first and I explicitly wanted a desktop-first solution — I don’t want to be creating extensive recurring tasks, embedding files, etc. using a phone.
For me, Vikunja hit the spot for most things - I suspect for some people, the lack of a ‘see all my tasks on a calendar’ view will be a bummer (although, it’s coming!).
Notes
Something like Vikunja probably benefits from being accessible from outside your network (again, a Cloudflare Tunnel is probably a half-decent way to do this)
Vikunja has a mobile app in early development; however, the experience using the PWA is pretty decent (I primarily use it via desktop anyway)
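And one more sketch, this time for Vikunja. This assumes the all-in-one vikunja/vikunja image (newer releases bundle the API and frontend into one container); the public URL should be whatever your local DNS + reverse proxy serve it at, and the paths are placeholders:

services:
  vikunja:
    image: vikunja/vikunja:latest
    restart: unless-stopped
    environment:
      # Should match the URL your reverse proxy serves Vikunja at
      - VIKUNJA_SERVICE_PUBLICURL=https://vikunja.domain.com
      # Default DB is SQLite; point it at a mounted path so it persists
      - VIKUNJA_DATABASE_PATH=/db/vikunja.db
    volumes:
      # Attachments live here
      - ./vikunja-files:/app/vikunja/files
      - ./vikunja-db:/db
    ports:
      # Vikunja listens on 3456
      - 3456:3456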
Looking ahead
It’s funny that what started me down this path was a chuckled thought along the lines of ‘heh, I wonder if I can save $5’, and here I am, a year later, with a bazillionty containers running locally…
When I initially started, I spun up all of my services using the web view within Portainer. Since then, I’ve deployed source-control infrastructure (using Docker, no less), so I’ve begun the process of pulling the docker-compose content out of Portainer into source control and transitioning my stacks to pull from git.
In addition to more services to talk about in ‘Services Spotlight’, there are some tangential things I’d like to cover in a future post:
A closer look at a few ways of getting external access working
Juggling internal and external access to the same service
Getting SMTP working
All with a focus on trying to keep costs down where viable.
This post has been a long time in the making — I’ve mostly been hit by writers block (and churn!). Trying to keep this post (and series) from becoming too technical, but at the same time, offering some value in the technical bits… is hard!
Product links may be affiliate links: MinMaxGeek may earn a commission on any purchases made via said links without any additional cost to you.