

You don’t need -it because you aren’t running an interactive session in docker. It might be failing because you’re asking for a pseudo-terminal in an environment where it doesn’t make sense.
Seq is expecting structured logs, which yours aren’t. So you want to either convert your app’s logs into a structured format (which is generally hard for a random third-party application) or use a log collector that’s fine with non-structured logs (e.g. Loki+Grafana don’t care about the shape of your logs and you can format the output while querying).
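To make that concrete, here’s a rough sketch of the difference (the structured field names are illustrative, not Seq’s actual ingestion schema):

```python
import json
from datetime import datetime, timezone

# What a random third-party app typically prints: one flat, unstructured line.
plain = "2024-05-01 12:00:03 ERROR worker-2 failed to connect to db (attempt 3)"

# What a structured sink wants: discrete, queryable fields.
# These field names are illustrative only, not Seq's real schema.
structured = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "ERROR",
    "source": "worker-2",
    "message": "failed to connect to db",
    "attempt": 3,
}

print(plain)
print(json.dumps(structured))
```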
How does it compare to archivebox in regards to specifically saving content that’s a mix of websites and YT videos?
I’ve been using FreshRSS and Reeder (now Reeder Classic) since google reader stopped being a thing. It’s pretty great.
I have a dedicated VM for things that are crucial to the home network, either latency-critical or network-related.
That’d be my DNS resolver (I enforce it over VLANs by hijacking anyone trying to do DNS to other resolvers, like random IoT devices), homebridge for less important home automation, and my own Matter controller for the most important home automation (controlling the lights).
My router of choice is RouterOS in another VM. I tried opnsense, pfsense, vyatta, and a bunch of others (even a containerized Cisco router), and I settled on ROS because it was the only one that could do IPv6 properly (apart from Cisco, but that has other issues).
The less important things I run on k8s, and really, there are only two bits worth mentioning as essential: ArgoCD and nixhelm. Together they provide effortless and mostly automated software updates with very easy rollbacks. I don’t have to go and manually update every single bit of software, and that saves huge amounts of time.
That’s a good point. Mind that in most production environments you’d be firewalled rather hard (especially when it comes to logs processing, which oftentimes ends up handling PII). I wouldn’t trust any service that tries to use DoT or DoH in there that I couldn’t snoop on. Many deployments nowadays allow you to “punch” firewall holes based on outgoing DNS requests to an allowlisted domain, so chances are you actually want to use the glibc resolver and not try to be fancy.
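A minimal Python sketch of the “don’t be fancy” path (the hostname is a made-up placeholder): socket.getaddrinfo defers to the C library’s getaddrinfo, i.e. the glibc/nsswitch resolver, so the firewall actually sees the lookup it needs in order to punch the hole.

```python
import socket

# Resolves via the C library's getaddrinfo(), i.e. the glibc/nsswitch path,
# so the DNS request is visible to a firewall doing domain-based allowlisting.
# "logs.example.com" is a placeholder, not a real endpoint.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
    "logs.example.com", 443, proto=socket.IPPROTO_TCP
):
    print(family, sockaddr)

# A bundled DoH/DoT client would instead talk to a public resolver directly,
# which the middlebox can neither snoop on nor use for hole punching.
```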
That said, smaller images are always good in my book!
You’re nailing your goal then!
I would still steer you slightly towards documenting your architectural decisions more. It’s a good skill to have and will help you in the long run.
You have dozens of crate dependencies and only you know why they are in there. A high-level document on how your system interconnects and how the algorithms under the hood work will be a huge help to anyone who comes looking through your source code. We become better programmers not by reading the source code, but by understanding what it actually does.
Here’s some random trivia: your server depends on trust-dns-resolver. Why? Why wasn’t the stock resolver enough? Is that a design choice, or did you just want to have fun? There is no wrong answer, but without the design notes it’s hard to figure out your intent.
This looks nice, but there are plenty of free alternatives in this space, which warrants a section in the readme comparing it to other products.
You mention RAM usage, but it’s oftentimes a product of event size. Based on your numbers, your average event size is about 800 bytes; let’s call it 1 kB. That’s one million events per day. That surely sounds more promising than Elastic, but it’s not reaching Loki numbers and, if you focus on efficiency, it’s way behind VictoriaMetrics Logs (based on peeking at their benches).
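The back-of-envelope math, with the daily volume assumed here for illustration since the original figures aren’t quoted in this comment:

```python
# Assumed inputs: ~1 GB of logs per day at ~1 kB per event (800 B rounded up).
daily_volume_bytes = 1_000_000_000
avg_event_size_bytes = 1_000

events_per_day = daily_volume_bytes // avg_event_size_bytes
print(f"{events_per_day:,} events/day")  # -> 1,000,000 events/day
```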
I think the important bits you need to add are how you store the logs (i.e. which indices you build) and what your trade-offs are. Grep is an efficient log processor that barely uses any RAM but incurs dramatic I/O costs, after all.
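As a toy illustration of that trade-off (unrelated to the project’s actual storage layout): a grep-style scan costs almost no memory but touches every byte on every query, while even a tiny inverted index trades RAM for I/O.

```python
lines = [
    "level=info msg=started",
    "level=error msg=db timeout",
    "level=info msg=request done",
]

# grep-style: nothing to keep in RAM, but every query walks the whole dataset.
scan_hits = [l for l in lines if "error" in l]

# index-style: pay memory up front (token -> line numbers); queries read little data.
index: dict[str, list[int]] = {}
for i, line in enumerate(lines):
    for token in line.replace("=", " ").split():
        index.setdefault(token, []).append(i)

index_hits = [lines[i] for i in index.get("error", [])]
assert scan_hits == index_hits
```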
Enterprises will be looking at different numbers, and they have lots of SaaS products to choose from. Homelab users are absolutely your target audience, and you can win them by making a better UI than the alternatives (VictoriaMetrics Logs isn’t that comfortable to work with), by making resource usage lower (people run k8s clusters on RPis and surely wonder about every megabyte of RAM lost), or by making the deployment easier (fire and forget, and when you come back to it, it still works).
That sounds like a lot, and I don’t want to be discouraging. What you’ve started there is really nice-looking. Good job!
You can enforce an always-on VPN (at least for IPsec) via an MDM profile. This kind of feature isn’t found in the casual user setup options, but there are plenty of knobs to tune in the enterprise profile configurator.
And yes, you can easily install that profile on your phone after.
It is pretty bad. After this thread I tried using Element X again, only to learn that its “favorites” aren’t the same as Element’s “favorites”, and what’s more, you can’t set someone as a favorite in E-X at all, at least not if your server is Conduit. It’s just silently ignored.
I would absolutely recommend a file system with snapshot capabilities for a home server. A btrfs mirror, dm-raid (raid5) with btrfs, or zfs would all work. The practical differences would be negligible at this scale, so you can just pick whatever you fancy.
I’ve been having sync issues with conduit lately; it takes minutes for the mobile app to catch up. There’s no way to purge old media, or to use something S3-compatible for its storage either.
Also, Element X doesn’t support spaces, so if you want to bridge other chats into matrix, they’re all going to be messed up together.
I like matrix as a concept, but both servers and clients are in a bit of a shitshow state (same as xmpp was years ago).
For the last 10 days tailscale clocked 1% battery on my phone. I honestly didn’t even consider turning it off for battery savings.
If tailscale inside a container allows you to talk to it via a “direct” connection and not a DERP proxy, then it will offer you better service isolation (you can set tailscale ACLs for that specific service) without sacrificing performance.
Tailscale pushes for it because it just ties you in more. It allows you to utilize the ACLs better and to see your things in their service mesh, and every service will count against the free node limit.
In practice, I often do both. E.g. I’ll have my HTTP ingress exposed to tailscale and route a bunch of different services through it as a single tailscale node, where the access control is done by the services individually. But I’ll also run pod-to-pod tailscale between two k8s clusters, because the tailscale ACLs are just convenient.
ECC is slightly more important for ZFS because its ARC is generally more aggressive than the usual linux caching subsystem. That said, it’s not a hard requirement. My current NAS was converted from my old windows box (which apparently worked for years with bad RAM). ZFS uncovered the problem in the first 2 days by reporting (recoverable) data corruption in the pool. When I fixed the RAM issue and hash-checked everything against the old backup, all the data was good. So, effectively, ZFS uncovered memory corruption and remained resilient against it.
I had exactly the same use case and I ended up with a 40G DAC fiber for that case. It ended up cheaper than converting the whole lan to 10G.
That said, it feels like used 10G equipment is easier to come by than 2.5G for now, and if you have a 2G fiber uplink and only 1G past the router, then it’s a waste.
Garage is trivial to get up and running and it’s more lightweight than minio nowadays.
No. It’s my in-cluster storage that I only use for things that are easier to work with via the S3 API, and I do backups outside of the k8s scope (a bunch of various solutions that boil down to offsite zfs replication, basically). I’d suggest you take a look at garage’s replication features if you want it to be durable.
I just made a mirror out of two NVMes; they got cheap enough that the loss of capacity doesn’t bother me too much. Of course, that limits what I can put there, so I use a bit of tiered storage between my NVMe and HDD pools.
Just think in terms of data loss: are you going to be OK if you lose the data between backups? If the answer is yes, one NVMe is enough.