A Smart Home should be a private one—so here’s what I did
2 August 2025 · Written by Herman
When people talk about a smart home, they usually mean colourful dimming lights, blinds that open and close at set times of day, and the ability to tell a voice assistant to play their favourite show on TV. For me, a Smart Home is much more about the technology stack underneath. In essence, a Smart Home should be a private home, one where all of your data (input and output) is yours and yours alone: independent of the public cloud, independent of Big Tech. Every time you open and close your smart blinds, you could be sharing a little more of your daily routine with their manufacturer. A real Smart Home shouldn’t trade privacy for convenience, especially when it doesn’t have to. Let me tell you how I’ve set things up.
At the heart of it all
For all of the Smart Home tasks that I run, I like to have a local private server: a computer that is always on. With my background in Machine Learning, it helps to have a powerful computer at home. With my perfectionism, I like to have a single machine that can handle as much as possible. And with my wallet occasionally disagreeing with my wishes, I need a machine that is both incredibly powerful when needed and power-efficient when idle. With the advent of capable local Large Language Models (LLMs) that can compete with ChatGPT, having large amounts of VRAM is a huge bonus. Running large LLMs on dedicated GPUs, however, would make the server quite loud and energy-inefficient. If you’re hoping to run ~70b models (for the uninitiated: “pretty decent ChatGPT-like models”), you’re looking at building a custom PC or server rack with multiple GPUs working in parallel, since most consumer GPUs “only” have up to 32GB of VRAM.
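To see why 32GB cards fall short, here’s a back-of-the-envelope sketch of my own (a rule of thumb, not an exact figure): a model’s weights take roughly parameter count times bytes per weight, before you even count the KV cache and runtime overhead.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough memory estimate for a model's weights alone.

    Ignores the KV cache and runtime overhead, which add several GB more.
    Uses decimal GB for simplicity.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70b model at common quantization levels:
for bits in (16, 8, 4):
    print(f"70b @ {bits}-bit ≈ {model_memory_gb(70, bits):.0f} GB")
```

Even aggressively quantized to 4 bits, a 70b model needs around 35GB for the weights alone, which is already more than any single consumer GPU offers.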

All of this combined, I landed on a Mac Studio with 128GB RAM as the heart of my Smart Home. This machine easily runs 70b+ parameter LLMs while keeping enough RAM free for the miscellaneous Smart Home tasks running alongside them. It also happens to be incredibly efficient and quiet when idle, which I personally value a lot. It is of course much less powerful than a GPU setup for AI purposes, but that is a trade-off I am willing to make given the practical advantages. Luckily, the RAM on Apple Silicon sits physically so close to the processor that its bandwidth is high enough to be treated as VRAM in practice. That’s why Apple Silicon machines are such a popular choice nowadays for running local LLMs.
I can already hear you say: “you mentioned your wallet occasionally disagrees with you, and yet you’ve purchased a Mac Studio with an insane 128GB of RAM. How do you justify this?” And that’s a fair point. There’s a lot of personal preference involved in designing a custom Smart Home technology stack. Personally, I’m a huge fan of CapEx over OpEx; that is, making a single large purchase upfront that costs little more over time, versus a relatively cheap start whose costs accrue over time (e.g. cloud hosting). Through the same mindset, I deliberately avoid subscriptions in my life as much as possible. Your preference may of course vary. There is no good or bad choice, although you can absolutely run a break-even analysis to figure out which option is more economical given your own personal requirements for a private server.
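To make that concrete, here’s a toy break-even calculation. The numbers are purely illustrative (not actual hardware prices or cloud quotes), but the shape of the comparison is the point:

```python
def breakeven_months(upfront_cost: float, monthly_cost: float) -> float:
    """Months until a one-off purchase (CapEx) beats a recurring fee (OpEx)."""
    return upfront_cost / monthly_cost

# Illustrative numbers only: a ~4000 euro workstation versus a
# hypothetical 120 euro/month cloud instance with comparable RAM.
months = breakeven_months(4000, 120)
print(f"Break-even after ~{months:.0f} months")
```

If you expect to keep the machine running well past the break-even point, the upfront purchase wins; if not, renting may be the better deal. Electricity and hardware depreciation shift the numbers, but not the method.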
I would like to give a shoutout to the various home server machines I’ve used in the past, all of which led me to upgrade to my current Mac Studio. Note that this is not because those machines stopped working; it’s simply that my own requirements for a home server kept expanding as I grew older and wiser.
- A Raspberry Pi (and the like) remains one of the cheapest options if you’re starting out. Many Smart Home tasks don’t actually require much processing power, and these little devices are more than capable of running local ad blockers, VPNs, Home Assistant, or Docker containers, or even serving as a Network Attached Storage (NAS) device.
- If you’re looking for a bit more oomph, the HP ProDesk and EliteDesk series are tiny yet full-fledged computers that are energy efficient and feature much more processing power than a Raspberry Pi. They’re really cheap on the used market since they’ve aged a bit and were often purchased in bulk by large companies. That doesn’t make them any worse than they were back in the day though.
- If you have a spare laptop around, why not use that? They’re designed to be energy efficient too, after all. Just make sure your laptop supports battery bypass (i.e. running directly from mains power with the battery removed). It can be dangerous to leave an older laptop with its battery in, running 24/7; always assume a battery will become a spicy pillow at some point, just to be safe.
So what does it do?
My Mac Studio is an ever-evolving home server. As of writing, I have a couple of services running, most of them in the form of Docker containers. As of reading, it’s probably already doing more than you see below.
I mainly use GitLab to track all of my projects and TODOs, as well as to host the code repositories of my development projects: a personalized chat assistant (more on this later), the website you’re reading this on, financial analyses, and other projects I can’t yet talk about 🤫.
For running LLMs locally, I’m a huge fan of Ollama with Open WebUI. My favourite models are glm4.5:106b for general ‘co-thinking’ and development, and qwen3:30b for quick answers, e.g. recalling specific terminal commands like how to unzip files.
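Open WebUI talks to Ollama over its local HTTP API, and you can script against that API directly too. A minimal sketch, assuming Ollama is running on its default port 11434 and the model has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a one-off prompt to a locally running Ollama instance."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("qwen3:30b", "How do I extract a .tar.gz archive on macOS?"))
```

Handy for small automations where spinning up a whole chat UI would be overkill.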
In order to generate images like the ones you see alongside these blog posts, I’m currently using Draw Things. I’ve tried ComfyUI and the like, but nothing beats a simple installer that just works. As for the models, I haven’t experimented much but so far “FLUX.1 [schnell]” seems pretty solid for simple image generation and “FLUX.1 KONTEXT [dev]” for context-aware adjustments (like you get with ChatGPT).
All of my photos from my childhood onward are stored on my NAS. To easily view them, I’m running Immich, my latest service addition. I’m absolutely amazed at how capable this FOSS project is. My favourite part is the semantic search: after your images are processed, simply describe what you’d like to find and you’ll get back the images that match that semantic concept. Given the modest size of the Docker containers, I’m amazed at how well this works. And since the search boils down to a vector similarity lookup over embeddings computed during preprocessing, it’s incredibly quick, too!
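The idea behind that semantic search can be sketched in a few lines. Images and text queries are embedded into the same vector space (Immich uses CLIP-style models for this), and search is just a nearest-neighbour lookup. The tiny hand-made vectors below are stand-ins for real embeddings, which have hundreds of dimensions:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy image embeddings, precomputed once at import time.
images = {
    "beach.jpg":    [0.9, 0.1, 0.0],
    "mountain.jpg": [0.1, 0.9, 0.1],
    "cat.jpg":      [0.0, 0.2, 0.9],
}

query = [0.8, 0.2, 0.1]  # pretend embedding of the text "sunny beach"

# At search time, only cheap similarity comparisons remain.
best = max(images, key=lambda name: cosine(query, images[name]))
print(best)  # beach.jpg scores highest
```

Since the expensive part (embedding every photo) happens once during preprocessing, each query is just a handful of dot products, which is why the search feels instant.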
What’s a Smart Home without Home Assistant? This is how you can control your smart devices without relying on the cloud, assuming you can communicate through the protocols your devices need (Wi-Fi, Bluetooth, Zigbee, and so on). I’ve set up the basics, though I keep coming up with new projects for my Home Assistant instance faster than I can implement them. It’s ever-growing… (does this by any chance sound familiar to you?)
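To give a flavour of what “the basics” look like, here’s a minimal Home Assistant automation in its YAML format that closes blinds at sunset; the entity ID is a made-up placeholder, yours will depend on your own devices:

```yaml
# In configuration.yaml (or automations.yaml without the top-level key).
automation:
  - alias: "Close blinds at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: cover.close_cover
        target:
          entity_id: cover.living_room_blinds  # placeholder entity
```

The nice part: this runs entirely locally, so your daily routine never leaves the house.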
For my personalized chat assistant, I’m running a local Matrix server. This is still a work in progress though. Again, more on this in another blog post soon!
Finally, in order to keep all of my containers updated, I use Watchtower. Set & forget.
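For the curious, “set & forget” amounts to a short Docker Compose service. A sketch with my kind of settings; the poll interval and cleanup flag here are illustrative choices, not a recommendation:

```yaml
# docker-compose.yml sketch: Watchtower checks for image updates once a day.
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets it manage other containers
    environment:
      - WATCHTOWER_CLEANUP=true          # remove old images after updating
      - WATCHTOWER_POLL_INTERVAL=86400   # check interval in seconds (daily)
```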
Apart from these services, I also use my Mac Studio as my personal computer. This includes video editing and gaming. macOS is notorious for gaming in the sense that many games simply won’t run on it, but luckily I’m barely a gamer; I just like to occasionally play Minecraft with shaders, and the Mac Studio handles that pretty well :-).
Where to go from here
A server at the heart of a Smart Home is only as secure as the network and the software running on it. In the next blog post, I’ll explain, from my perspective as a cybersecurity enthusiast, what to look out for when hosting services on a server that runs 24/7 from your home. When in doubt, you can always ask your private ChatGPT-like Large Language Model to help out!