This post is a sequel: it’s a follow-up to my previous post, The Appeals of Homelabbing.
Back in early May of 2025, at the end of the Spring academic semester, I was looking for something to write about while campus was silent around me. That led to the creation of “The Appeals of Homelabbing”, a post where I discussed the concept of a homelab, and “why they’re good actually”. Now that it’s been nearly a year since that post, and I haven’t written anything in longer than I would like, I’m once more looking for a topic to write about. Coincidentally, I’ve recently developed a number of things to rant about on the topic of my own lab! So today I’m writing the sequel to that previous post, and talking about why “homelabbing can be a pain actually” (and the reasons to do it anyway). This post is likely to be a bit more opinionated than the previous one.
As in my previous post, I’ll specify that I believe homelabbing can be done using Virtual Private Servers (VPSs) from platforms such as Amazon Web Services, Google Cloud, Microsoft Azure, etc. Personally, I prefer to host on my own hardware whenever possible (with the exception of a hosted Nginx proxy server), but I will touch on both hardware-specific and software-specific problems here to cover both types of hosting.
Personal Hardware
Personal Hardware, for the purposes of this post, is anything physical you can get your hands on. Found a good deal on a rack server? Maybe you got an old workstation from work? How about that old Compaq laptop you have in a drawer, with the broken screen and the long-dead battery, that still technically functions? You can go small, too: use a Raspberry Pi or an old Android phone with Termux! Anything can be a homelab device if you put your mind to it, and this is by far my preferred type of homelab. My current homelab has existed for about 5 to 6 years now, with things really ramping up 2 to 3 years ago. Before then, I dabbled with plenty of other experiments. In the past I’ve personally used: two Raspberry Pis (2B, 3), a Compaq laptop, an old Chromebook (with ChromeOS and with Windows), a Nintendo Wii, and a Samsung Galaxy S7 Active (acquired after the previously owned Galaxy Note 7 was recalled due to faulty batteries1). I can safely say anything can be used to create a homelab, because I have a fair amount of experience doing just that. This section is dedicated to issues I’ve had with my hardware and networking specifically, in my home environment.
Uptime
I want to start with the proverbial elephant in the room. Running a homelab on your own hardware can very quickly reveal certain stability issues in your environment. My servers currently sit in my house, in the laundry room, as that is the closest place to the router for an ethernet connection. The number of internet or power outages I’ve experienced since I set up monitoring is higher than I had realized. Additionally, my server is constantly being monitored by the Purdue Hackers Webring, which has its own automation ready to bring any issue to my immediate attention. This site accounts for a concerning share of those alerts.
That said, a number of these outages are by my own hand as well. I’ve failed a number of networking migrations (more on this later), causing downtime; failing to properly manage resource limitations (RAM especially) has caused services to hang; and every time I update my Nginx configurations on my VPS I’m rolling the dice on a service failure. On this last point specifically, I use NixOS on my proxy VPS. This normally works great, but it does introduce an occasional issue after certain server rebuilds where the limited-space /boot partition fills, and the whole rebuild fails until I SSH into the server and delete the files in the /boot directory. In the meantime, Nginx sits offline until I can restore the server, which can take a few minutes. [Editor’s Note: This was fixed while writing this post. See Addendum i. [Editor’s Editor’s Note: That fix was made obsolete in April. See Addendum ii. The previous note was written in February.]]
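For the curious, the eventual workaround (described in Addendum i) boils down to a single NixOS option; a minimal sketch, assuming systemd-boot as the bootloader:

```nix
{
  # Keep only the most recent generations in the bootloader menu, so old
  # kernels/initrds don't fill a small /boot partition.
  # (Sketch; assumes systemd-boot. GRUB has an equivalent
  # boot.loader.grub.configurationLimit option.)
  boot.loader.systemd-boot.configurationLimit = 3;
}
```

This trades rollback depth for space, which is exactly why it isn’t a great fix.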
I wanted to talk about Uptime first for two reasons. First, uptime is the issue that plagues me the most in my personal setup, so I have plenty to say on the matter. Second, I wanted to get it out of the way, as I honestly don’t have a good way to spin this positively. Uptime, and its inverse, downtime, are just a fact of life. Even large companies experience downtime from time to time, so at least be glad you aren’t losing millions of dollars an hour when your own servers go down.
Networking Pains
Networking gets its own category in this post, purely because it can make or break pretty much anything and everything. Making a minor mistake in a host’s local network configuration can result in that host being unable to communicate or serve any services to users or other hosts, which is nearly as “down” as a host can be. That is bad enough on its own, but then you get into issues with an upstream device, such as a router or network switch. Suddenly, multiple hosts are entirely disconnected.
While this category is broadly called “Networking Pains”, the subsections will start off with a bit about Tailscale, which serves a core connection path in my environment, then pivot into OPNsense, an open-source firewall and router. I’ll go over why I decided to start using OPNsense, and one of the main issues I subsequently encountered with it that was entirely my fault.
Over-reliance on Tailscale (or, Switching to OPNsense)
If you haven’t heard of Tailscale, it’s most simply described as a managed WireGuard mesh network provider. This is very simplified and way undersells their product, but it’s most of what I need to explain here. WireGuard, in turn, is a fast, modern VPN protocol commonly used for site-to-site tunnels.
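For reference, a bare-bones site-to-site WireGuard config looks something like this (all keys, hostnames, and addresses here are illustrative placeholders, not my actual setup):

```ini
# One side of a site-to-site WireGuard tunnel (illustrative values only).
[Interface]
Address = 10.100.0.1/24          # this side's tunnel address
ListenPort = 51820
PrivateKey = <this-host-private-key>

[Peer]
PublicKey = <other-host-public-key>
Endpoint = peer.example.com:51820
# Route the peer's tunnel IP *and* the LAN behind it through the tunnel:
AllowedIPs = 10.100.0.2/32, 192.168.50.0/24
PersistentKeepalive = 25         # keep NAT mappings alive
```

Tailscale automates away the key exchange, endpoint discovery, and NAT traversal that this file handles manually, which is a big part of its appeal.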
In my environment, Tailscale provides a crucial route between barrier-proxy and my other servers, creating the connection between the network ingress and the services. In addition to this, Tailscale is my primary access method for all my hosts, so if one isn’t working properly I can connect to it via its Tailscale IP address. However, I had come to rely on Tailscale a bit too much, and I wanted to lighten that reliance. At the same time, I was discussing OPNsense with a TA for my Wireless Networking class, and decided to give it a go. I spun up a VM on momiji-server and got to installing OPNsense, and the process went extremely smoothly! In only a few hours and some documentation reading, I had a fully installed OPNsense with a WAN upstream.
For the next week, I made preparations to switch the network over. I mirrored my work on aya-server, created a site-to-site Wireguard connection between them so containers could interact with their local addresses, and created a new Proxmox bridge to act as the OPNsense LAN. I made plenty of fallback plans and announced expected downtime to those who needed to know. I finally moved a few containers over to test, saw that they were working fine, and then moved everything else over. Happy ending!
Oh no, OPNsense broke!
… wait, why is this section here? The migration went fine, I planned this for a week! Nothing could possibly have gone wrong-… oh. I can’t connect to anything, can I?
Migrations are a pain point from the smallest network to the Internet at large. You can plan for hours, days, months… and something will inevitably go wrong. Murphy’s law: Anything that can go wrong will go wrong.2 As part of this migration, I’d been uninstalling Tailscale from the less-critical containers newly behind OPNsense, removing them from the Tailscale network and verifying that the migration worked. And for the next 24 hours, everything did work! I was happy, and since it was 23:00 at that point, I went to sleep.
The next night, at just after 23:00, the aforementioned Purdue Hackers Webring fired an alert that my blog was down. My monitoring server was silent, however, so I was in no rush to fact-check, instead finishing what I had been working on prior to the alert. About half an hour later, I tried to access my blog. Gateway Timeout. Surprised, I tried to open Uptime Kuma to check why my own monitoring hadn’t pinged me… Gateway Timeout. This is when I realized something was terribly wrong. Hoping it was a weird transient issue, especially since I could access the Proxmox Virtual Environment interface, I rebooted the node. Once it started back up, everything was working again! I went to bed, a bit relieved, and didn’t think about it much until the next night. Almost exactly 24 hours after the server started back up, the webring pinged me once more.
From these details alone, some readers may have deduced the issue. It didn’t occur to me right away, however, not until I attempted to run ip a in a container to verify its IP address. It didn’t have one. I checked network configurations, ensuring the container was connected to OPNsense correctly, before it hit me. The containers had been getting their IP addresses using DHCP, which lets OPNsense tell each container what its IP address should be. By default, OPNsense had a DHCP “lease”, or expiration time, of one day… exactly how long it was taking for the addresses to disappear and networking to break. In a moment of weakness, I decided to restart the server again to buy time to figure out a proper solution, and it wasn’t until the next day that a friend of mine (Lillith, Granter of Receipts) reminded me that I could set the lease timer to 0, which effectively disables expiration. In retrospect, that was obvious. Getting other opinions is extremely helpful sometimes! This conversation happened during an in-person Daggerheart session, funnily enough.
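OPNsense exposes this knob in its GUI, but the same setting exists in any DHCP server; in dnsmasq terms (subnet and range illustrative), the footgun and the fix look roughly like this:

```
# dnsmasq sketch of the same behavior: a 24-hour lease, after which
# clients must renew or lose their address...
dhcp-range=192.168.10.50,192.168.10.200,24h

# ...versus a lease that effectively never expires:
#dhcp-range=192.168.10.50,192.168.10.200,infinite
```

Well-behaved DHCP clients renew at half the lease time on their own; the containers here apparently weren’t renewing, which is why the lease length mattered at all.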
Well, it’s gone now!
…WHAT DO YOU MEAN IT’S GONE NOW??
Well, similarly to the proxy server issues I mentioned a bit earlier, this is a result of circumstances that changed between February and April. In the time it took to finally get around to finishing this draft, my current primary server momiji-server was renamed to momiji and had a complete OS overhaul, replacing the Proxmox installation with NixOS. The other Optiplex server, aya-server, is retaining its Proxmox installation and its name for now (See Renaming an Active Node in a Proxmox Cluster for why I would rather not change the hostname, despite aya-server not being clustered). With this change, momiji has considerably less reason to be running an OPNsense instance now.
There was a decent amount of work put into the OPNsense setup, and it gave me the opportunity to learn some uses of Suricata IDS/IPS, Squid Proxy, and more about WireGuard, but there was no reason to recreate it after the migration. Nearly every service other than Wazuh is running on the base NixOS install now, while Wazuh sits in an LXC on momiji, as it’s currently not compatible with NixOS (although there are efforts!3). I wanted to add this section to the February draft as I was reviewing it, because I think it’s a perfect representation of how homelabs are constantly evolving environments. This was a relatively aggressive change, but with some planning and announced downtime the implementation went well, and I now have the opportunity to learn a new subset of skills, as well as the ability to work in a new environment that seems to be working better for me. As much as I like Proxmox, momiji had reached a slightly overwhelming point for the hardware, given the overhead caused by 30+ containers and 3 VMs.
Why is my wallet empty?
The quick answer to this question is that I’m a Senior in college with poor spending habits and a job that makes me barely $100/wk. Ask me about “The Shelf”, and I’ll start yapping about all the plushies and figures I have; the picture I have on hand at any given moment is probably already outdated. On top of this, my hobbies mostly revolve around computers, virtual reality, and other costly topics. This post is about homelabbing however, so I’ll focus on what I can’t purchase.
The Great RAM shortage of 2026
Welcome to 2026! The United States economy is entering a new year of constant flux. I am admittedly not extremely knowledgeable about the world economy at large, but from my localized perspective it’s gotten much harder to buy things than I was previously used to. There are plenty of reasons for this that I won’t get into. Thankfully, an already plentiful resource shouldn’t become exceedingly hard to buy in a matter of a few months, right? I can think of few reasons that would cause such massive increases in the price of specific products.
Artificial Intelligence is everywhere I look these days. On my campus, I constantly see my cohort using ChatGPT, Open WebUI, Twitter’s AI model, and plenty of other platforms for the simplest of tasks. All of these different models need to run somewhere, right? That tends to resolve to large AI data centers, with large energy and hardware footprints. For a few years now, GPUs have been the component most commonly associated with running AI models, but in mid-August 2025, this started to expand. RAM started slowly increasing in price, alongside SSD storage. Samsung made a perfect example of this in December 2025, when it was widely reported that Samsung Semiconductor had rejected an order from Samsung Electronics’ Mobile Experience division for a year’s worth of DRAM chips.4 (Samsung Semiconductor and the Mobile Experience division are both part of Samsung Electronics, itself owned by Samsung.)
Author’s Stance - Artificial Intelligence
As a matter of opinion, I take a generally unfavorable view of "Artificial Intelligence" as it's seen in the mainstream these days. I have experimented with open-source models before (including training, quantizing, etc.), and in my own view I know more about how current generative models work under the hood than most of my cohort. It's a thin line these days between what I consider acceptable and what I don't. Generally, I take a strong stance against image generation and code-focused LLMs, and an unfavorable view of LLMs in general. More nuanced opinions will not be accessible via this blog at this time.
All this is to say, RAM is in short supply. The supply we do have is expensive. GPU price tags are still unseemly, storage is tracking the RAM prices, and everyone is left staring at their empty wallets as prices go up again before they can click the checkout button. The Purdue University Surplus Store that I’ve mentioned in the past has had an issue recently with their servers and desktops in this same vein - people have been going to Surplus to purchase hardware to salvage the RAM and resell it. These purchasers have been… creative in getting rid of the rest of the hardware. I’m sure most sell the hardware, but there are a couple egregious examples. One such example came during a snowfall in January, when someone was photographed using a salvaged server housing as a sled on Slayter Hill.5 I think I’ve made my point here.
Editor’s Note: This section is another example of a February draft that changed before publishing in April. The component shortage is 100% still a problem, and will be for a while longer. However, in the last handful of weeks, prices on RAM and similar hardware have started to ease up, giving some hope for the future. Maybe I’ll get a Steam Frame yet.
RAID? I need more hard drives for that
RAID, short for “Redundant Array of Inexpensive Disks”6, is a common solution to two problems in storage: data redundancy (in case of drive failure) and storage capacity. There are a few different RAID levels that provide different benefits, but the concept boils down to using multiple physical storage devices as one large “logical” device that can (depending on how it’s configured) tolerate one or more drive failures, and/or pool storage across devices for a larger filesystem. RAID has a number of benefits, and it’s highly recommended to use a form of RAID with redundancy in server applications, at least for the data-storage side of the coin.
You may be able to guess my primary concern here. Despite the term including the word “Inexpensive”, storage devices can be anything but. My current Backup/Media drive is a 4TB Western Digital Blue HDD mounted in momiji-server, split into 3 ~1.3TB partitions for each purpose. In my ideal world, I would have 2 more drives; using RAID 5 on those 3 drives, this would give me 8TB of storage and redundancy of 1 drive, meaning any one of the three drives can fail and the RAID array will still work until I can replace it. Unfortunately for me, current prices per drive at Walmart are about $105, with prices at Amazon and Newegg being comparable. This means I would be paying a bit over $200 to fill out my storage, which is a bit more than I would like.
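The capacity math above generalizes neatly across the common RAID levels; a quick sketch (equal-sized drives assumed, filesystem overhead ignored):

```python
def raid_usable_tb(num_drives: int, drive_tb: float, level: int) -> float:
    """Usable capacity for a few common RAID levels, equal-size drives assumed."""
    if level == 0:                       # striping: all capacity, no redundancy
        return num_drives * drive_tb
    if level == 1:                       # mirroring: one drive's worth
        return drive_tb
    if level == 5:                       # single parity: lose one drive's worth
        if num_drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (num_drives - 1) * drive_tb
    if level == 6:                       # double parity: lose two drives' worth
        if num_drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (num_drives - 2) * drive_tb
    raise ValueError(f"level {level} not modeled here")

# Three 4TB drives in RAID 5: 8TB usable, any one drive can fail.
print(raid_usable_tb(3, 4, level=5))  # prints 8
```

The general rule of thumb: RAID 5 always costs you one drive of capacity, RAID 6 two, regardless of how many drives are in the array.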
I do have a secondary concern here as well, however. Depending on the form factor, slotting in additional drives is no big issue. If you have a rack server, then you can buy a few 2.5” drives (or applicable size) and slot them in. Homelab applications tend to be less forgiving however, especially for smaller homelabs. In my case, my primary storage is in a Dell Optiplex. I can fit one 3.5” HDD in there, sure! Ask me to fit a second one and I’ll look at you like a sad puppy as I try to puzzle through the increasingly nonsensical solutions that run through my head.
General
For the rest of this post, I’ll be talking about more general topics that apply whether you’re hosting on a VPS or on your own hardware. There are a lot of software considerations in a homelab, and even with a VPS there are certain hardware aspects that need to be considered.
Location/Storage
This section will be quick. Location generally isn’t a big factor in homelab setups, as the location is normally “right here”, where “here” is usually your house. But circumstances always matter, so for the sake of generality it doesn’t hurt to mention location in this post.
Remember those PowerEdges? They’re offline
In my previous post, I discussed the acquisition of two Dell PowerEdge R640s. Unfortunately, those have been offline for some time. Last summer I had an internship, and the two servers were sent to live with my parents for a while. This didn’t last long, as the only place I could put them left them Wi-Fi only, and the noise of the fans was too loud in the surrounding rooms. The servers were unplugged, and since I wasn’t about to run them under my bed in my university dorm room for another year for the sake of my roommate, they have been sitting powered off for some time. This is just an unfortunate reality, and I don’t really have much more to say about it.
VPS location?
If you’re running a homelab for a few users, this shouldn’t be a concern for you… unless you really want it to be. Most cloud providers offer multiple regions for their users to create their VPS instances in, and for certain applications this can matter. If you live in Australia, you probably shouldn’t be creating a VPS in the us-east1 region; you should instead be targeting a local region (usually Sydney or Melbourne). As another example, Japan has Tokyo and Osaka as common regions. Point is, there are usually local regions you should be picking from for the fastest speeds, unless you have reason to use a server elsewhere.
Sometimes things just break
If you’ve used a computer before, you’ve probably had something inexplicably break or stop working for little to no reason. Servers aren’t much different in that regard, though they can sometimes be easier to fix; it may be experience speaking, but I arguably find it easier to work on servers than on client machines. With this in mind, I want to discuss a few issues I’ve had recently!
Wazuh my beloved
For this next section, we’ll let Fen write for a bit. Fen is the affectionately given name for my largely cybersecurity- and networking-sided interests, and is more prone to pure ranting. The following was edited into a more readable format from such a rant.
Wazuh is an open source Security Information and Event Management (SIEM) and Extended Detection and Response (XDR) system that is used in this homelab, as well as in the Purdue class CNIT 47000. Wazuh is used to monitor the general security of the homelab, but it’s also used to experiment as any good homelab should be used. This includes writing rules for services that may not have built-in rulesets7.
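Custom rules in Wazuh are plain XML dropped into the local ruleset; a minimal sketch of the shape (the rule ID, group names, and matched field here are all hypothetical, not from an actual ruleset):

```xml
<!-- local_rules.xml sketch: alert on a hypothetical service's failed logins.
     Custom rule IDs conventionally start at 100000. -->
<group name="local,homelab,">
  <rule id="100100" level="7">
    <decoded_as>json</decoded_as>
    <field name="event">login_failed</field>
    <description>Homelab service: failed login attempt</description>
  </rule>
</group>
```

The `level` controls alert severity, and rules can chain off each other (e.g. escalating after repeated matches), which is where most of the fun experimentation lives.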
What do SIEM and XDR mean? The terms SIEM and XDR are generally cybersecurity-oriented terms. Unfortunately, they are also pretty vendor-specific terms, so it’s hard to give one definition. Instead, I’ll give basic definitions and leave it as an exercise for curious readers to learn more. There is a Reddit post on r/MSP8 that serves as a good starting point.
SIEM commonly stands for Security Information and Event Management, and is the passive side of centralized security. A SIEM’s job is usually to ingest logs from a number of connected hosts, parse the data into fields, and index the data to be searchable and readable. A SIEM is essentially a specialized data pool of log data.
XDR commonly stands for eXtended Detection and Response, and is the reactive side. An XDR platform is able to perform actions on linked hosts, to automate response or perform threat hunting, for example.
Wazuh is, in this author’s opinion, great software. It’s a great adaptation of the standard ELK stack, and it’s grown quickly in popularity as of late. But nothing is infallible, and I have my fair share of issues I’ve encountered. Hilariously, of the three problems that immediately come to mind, only one is a direct grievance with Wazuh. Specifically: if the installation (or uninstallation) of Wazuh Manager or Wazuh Agent is botched, it can be difficult to recover. For user error during installation, such as a missed step in a step-by-step install or missing environment variables when installing the agent, manual intervention is required to work the system backwards to a more operable state; it’s often easier to attempt an uninstallation and start over. If the install fails due to disk space, or the system crashes or loses power mid-install, however, the Wazuh Manager package is prone to getting stuck, failing to install OR uninstall. The best solution I’ve found (on an apt system) is patching the package’s prerm/postrm scripts in dpkg to exit 0 immediately, skipping the uninstallation steps.
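To illustrate the trick without touching a real system, here’s the same edit performed on a throwaway copy of a maintainer script (on a real system the file would live under /var/lib/dpkg/info/ and you’d need root):

```shell
# Simulate a broken prerm script that always fails:
mkdir -p /tmp/dpkg-demo
printf '#!/bin/sh\nset -e\necho "cleanup step that always fails"\nexit 1\n' \
  > /tmp/dpkg-demo/wazuh-manager.prerm

# The fix: insert "exit 0" right after the shebang, so dpkg sees success:
sed -i '1a exit 0' /tmp/dpkg-demo/wazuh-manager.prerm

sh /tmp/dpkg-demo/wazuh-manager.prerm && echo "prerm now exits cleanly"
```

After patching the real prerm/postrm the same way, a normal `apt-get remove --purge wazuh-manager` can finish, and the package database is unwedged.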
The other two issues are external to Wazuh, but are relevant here. The first is once more resource exhaustion. Even on momiji-server, a 32GB RAM node, the memory ceiling, and occasionally the CPU limit, is often hit, as there is a non-insignificant number of services on that server. This generally isn’t a problem for Wazuh, as long as nobody tries to show off and attempts to visualize two entire months of constant SSHd scanning attempts on a world map… The other issue is authentication. It’s not ideal to expose a username-and-password login page for a service; instead, we should delegate that responsibility to authentik. However, authentik has also broken a handful of times, as referenced in the next section.
authentik et al.
Ok, Fen is done for now. Reining her in is a challenge, so there’s likely to be a fully cyber-oriented post soon, probably heavily about Wazuh! Look out for that. Anyway, authentik is my Single Sign-On system for my homelab and is in charge of almost everything, from Forgejo to Outline to Jellyfin. By default, you can create an account with Discord or GitHub, and you will be granted access only to Forgejo, to create issues on existing repositories. It’s great software, it’s open source, and I’ve spent plenty of time making it one of the two gateways into anything I host. But of course, I break things often. As I alluded to before, one such break occurred with Wazuh, after a migration from one authentik instance to a new one that uses an external Postgres database. At some point, the connection to Wazuh was severed, and any attempt to sign in to Wazuh resulted in an Internal Server Error as Wazuh failed to get the user token from authentik. I have yet to identify whether this is an issue with an authentik update or a Wazuh update, but a brand new Wazuh instance I spun up to test failed to work even with a completely new setup in authentik. The possibility of user error is also present.
authentik is also very well maintained. At time of writing, they have just switched from a two-month to a three-month release cycle9, with plenty of new features, both open-source and enterprise-licensed, in each release. Within these release cycles they continue to release patches for bugs, and importantly, they have a security inbox and publish Security Advisories using GitHub’s Vulnerability Reporting flow to quickly update their users and, if applicable, get a CVE assigned. But on a release cycle this quick, things are prone to breaking, especially if you use any custom CSS in your branding. I don’t have any specific examples, but once every few months I find something I need to fix after an update. In fairness to the team, authentik is the type of software you shouldn’t be updating on a whim like I do anyway.
Please hold while I back up the serv&lt;SIGKILL/Out Of Memory&gt;
Of my servers, most services run on one of four Proxmox installations. This actually makes creating backups really easy, as Proxmox has a tool called Proxmox Backup Server (PBS) that integrates cleanly with Proxmox and is quite good at deduplication, among other niceties. The majority of the backup jobs still happen on the Proxmox node, but the files are streamed to PBS as each backup runs. This can become a problem on a node that is already starved for RAM, however, especially if a service happens to be busy at the time. While backups are normally light on RAM, I have had two instances on aya-server, a particularly RAM-starved node, where PBS’s nightly backups triggered and quickly became overwhelming. This locked up the server, and since I don’t have a great way to remotely manage unresponsive nodes, I was stuck. Luckily, aya-server was still in onboarding at this point as I tested its limits with actual traffic, and I hadn’t put anything critical on it yet. Amusingly, the second time this happened I had to wait for a power outage to fix the server. When I saw the sibling node momiji-server go down during a winter storm, I sat in my bed for half an hour waiting for signs of life. Once momiji-server came back up, I quickly connected to aya-server before too many containers started, and I was able to decrease RAM usage to an acceptable level before it all locked up again.
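One mitigation worth knowing about, for what it’s worth: Proxmox’s backup tool, vzdump, reads defaults from /etc/vzdump.conf, which can throttle how hungry backup jobs get (values here are illustrative, not a recommendation):

```
# /etc/vzdump.conf: sketch of throttling backup jobs on a starved node
bwlimit: 51200   # cap backup I/O at ~50 MiB/s (value is in KiB/s)
ionice: 7        # lowest best-effort I/O priority
```

This doesn’t directly cap RAM, but slowing the I/O firehose can keep a busy node responsive enough to intervene before it locks up.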
The certificate “isn’t about to expire”? It’s almost a year old!
If you haven’t had to deal with SSL certificates for a website before, use Let’s Encrypt. They’re great. If Let’s Encrypt fails you? Uh oh. This happened to me recently, in fact. Verifying you own a domain for Let’s Encrypt often happens through some smart work by an ACME client10, usually by serving a temporary website or creating a DNS entry for the Let’s Encrypt verification server. Unfortunately, I didn’t realize until early February that my ACME setup had been broken since the previous April, leaving some long-expired certificates on barrier-proxy, my NixOS server running Nginx and Anubis. At the time, I was migrating off Cloudflare to host my own authoritative DNS, and reworking how I obtained my SSL certificates as a result. Cue the shocked Pikachu face when I discovered that 80% of my subdomains were serving expired SSL certificates, but not all 100% of them.
I ended up doing research into just how Lego (the ACME client favored by NixOS’ security.acme module) fetches and stores certificates. Checking the status of one of the many ACME systemd services, I was informed that the certificate was not within 30 days of expiration. What? I’m legitimately still unsure how exactly this happened - I’ve read through the systemd service NixOS creates and I’ve run the lego command manually, and the certificate should have been marked for renewal. But I could not manage to get the certificates to refresh for whatever reason. In my research however, I did find the storage directory for the certificates at /var/lib/acme/. Surely nothing will break if I just… delete everything in the folder? Somehow this did solve the issue.
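For anyone debugging something similar: the fastest sanity check I know of is asking openssl directly whether a certificate file is expired. A sketch using a throwaway self-signed cert (on a real box you’d point these commands at the cert files under /var/lib/acme/ instead):

```shell
# Make a throwaway self-signed cert valid for 30 days:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.local" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 30 2>/dev/null

# Print its expiry date, and exit nonzero if it has already expired:
openssl x509 -in /tmp/demo.crt -noout -enddate
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "not expired yet"
```

`-checkend N` checks whether the cert expires within the next N seconds, so `-checkend 2592000` is a rough stand-in for the “within 30 days of expiration” test the renewal service performs.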
Conclusion
Homelabbing is… a chore sometimes. Don’t let anyone trick you into thinking otherwise; there’s a reason homelabs are considered learning environments and hobbies. But! For those same reasons, having a homelab can be exceedingly beneficial for learning. Even on the days you aren’t learning, you have your own small bubble of the internet that you control, especially if your lab has any public endpoints. Whether you’re the only user or you have many, the things you host can help your day-to-day workflow and provide many different sorts of value based entirely on what you decide to run!
As I said at the start, I don’t want readers to come away from this post frightened or scared away from homelabbing; rather, I hope that some readers come away inspired. For every flaw or failure I encountered, with a bit of elbow grease and a couple of metaphorical wrenches, things turned out perfectly fine in the end, if not better for the effort. Apologies for both the length of this post and for the time it’s taken to write another one; I did not intend to spend 9 months before making another post, and I certainly did not intend for it to take another 2 months to finish drafting it. Big thanks goes out to a number of my friends who pointed out my hypocrisy during my ongoing crusade to get more people to make blogs. On that note, if you’re reading this, go make a blog!! There are two specific people I’m looking at in particular here. They know who they are. :3
Addendums
[i] - These posts can often take a while to write, and this is a perfect example of circumstances changing while I’m still drafting. However, the boot partition issue is a pretty good example of some of the challenges and subsequent workarounds you may encounter in your own homelab environments, so it felt apt to leave in place. The fix I implemented isn’t a good one anyway; I simply limited the number of “generations” (or rollbacks) kept by the bootloader to a single generation. You can see the “fix” here: TheShadowEevee/NixOS@af88b1e7.
[ii] - Unlike Addendum i, where I had fixed an issue within the week of initial drafting, this post got sidelined for… *checks notes*… two months. Hm. In that time, more things broke, some things ran great, and other things crashed and burned. Since drafting, momiji-server has been converted to a NixOS install, and barrier-proxy had to be rebuilt (See Addendum iii) and was hence renamed to reimu following the current hostname scheme of servers using names from The Touhou Project.
[iii] - An addendum in an addendum! NixOS is extremely nice, and I have used it for some time on my laptop and on some servers. But, like anything that touches system files, it can break things if you configure it wrong. barrier-proxy was officially upgraded to fail-state status on April 16th, 2026 with a change to common/system/default.nix in commit 2a9363ab27, where I mistakenly removed the block that imports server-specific configurations. This included the setting security.sudo.wheelNeedsPassword = false;, which is included specifically as an anti-lockout mechanism. It was quickly discovered that the passwords for my account and the root account on barrier-proxy were improperly set, and as a result the server could no longer be re-configured, as my user account was not privileged enough without access to the sudo command. With some help from Lillith (again), a new Oracle Cloud instance was spun up and reconfigured as reimu using the same configs as before. This was heavily aided by the Crowbar project, go check it out!
References
- Associated Press: Samsung recalls Galaxy Note 7 after battery explosions ↵
- Wikipedia: Murphy’s Law ↵
- NixOS/nixpkgs#230623 ↵
- PC World: RAM is so expensive, Samsung won’t even sell it to Samsung ↵
- Purdue Linux Users Group: Server Sled Pictures ↵
- Wikipedia: RAID ↵
- Konpeki Solutions: Konpeki-Solutions/Wazuh-Rules Repository ↵
- What’s the Difference between SIEM and XDR? at r/MSP ↵
- authentik Documentation: Release 2026.2, Release frequency change ↵
- Wikipedia: Automatic Certificate Management Environment ↵