Just chiming in, this is not recommended for Proxmox.
The documentation (FAQ 13) actually says directly that Docker should be installed in a QEMU VM on Proxmox and that it should not be installed on the Proxmox VE host.
The amount of software I've used that lacks this kind of validation is aggravating. How hard is it to keep an object of known property names and error out when a name isn't in it?
The same goes for the command line: if the flag -z doesn't exist, you shouldn't let me run a command with it. It's clear I'm trying to do something (incorrectly), thinking -z is something it isn't; just throw an error and tell me.
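For illustration, here's a minimal sketch of the kind of strict validation I mean (in Python, with made-up key names and flags); the point is just to reject anything unrecognized instead of silently ignoring it:

```python
import argparse
import sys

# Strict config validation: only these keys are recognized.
KNOWN_KEYS = {"host", "port", "log_level"}

def validate_config(config: dict) -> None:
    unknown = set(config) - KNOWN_KEYS
    if unknown:
        sys.exit(f"error: unknown config key(s): {', '.join(sorted(unknown))}")

# argparse already behaves this way for flags: an undefined -z is a hard error,
# not something silently accepted.
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()  # running `script.py -z` exits with "unrecognized arguments: -z"

validate_config({"host": "localhost", "prot": 8080})  # typo in "port" -> immediate error
```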
Because it's universal, it works, it's multi-platform and device-agnostic, and it's simple to use on the user side.
Nothing else available really fits those criteria.
The closest these days is probably Discord or Teams, but neither of those is decentralized. XMPP could work for it, but nobody really uses it anymore, and to be honest the standard is ugly as hell to implement.
Browser notifications are ineffective and have a high chance of failing or never being seen; they're meant for real-time notices, not historical ones, and they're locked to that browser.
App notifications would be amazing for things that have apps, but not everyone wants to be forced into using their mobile device for everything, and they would again only be available from that app (unless you use something like NTFY), which generally locks them to a device.
Email sucks admin side, but there's a reason it's used.
This is also ignoring the multiple uses email allows for, such as authentication: if it's already being stored for accounts, might as well use it for notifications too.
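Part of why it works admin side is how little it takes to reuse the address you already store; here's a rough Python sketch (the relay, credentials, and addresses are placeholders, not from any particular project):

```python
import smtplib
from email.message import EmailMessage

def send_notification(to_addr: str, subject: str, body: str) -> None:
    # Reuse the email already stored for the account as the notification channel.
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"   # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)

    # Placeholder SMTP relay; swap in your own host and credentials.
    with smtplib.SMTP("mail.example.com", 587) as smtp:
        smtp.starttls()
        smtp.login("noreply@example.com", "app-password")
        smtp.send_message(msg)

send_notification("user@example.com", "Backup finished", "Nightly backup completed at 02:14.")
```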
Hard agree, I hate browser notifications with a passion; I would never see them if they swapped to that.
My only complaint about it is the lack of a clear "hey, this is going to be a major update" notice on the web UI. I ran the update command and was met with a different UI. It wasn't difficult to figure out, and I have to blame myself for not actually checking the patch notes first, but I wasn't expecting a major update from the web UI since it only said "new version available, run this command to upgrade."
The upgrade as a whole is, all in all, a great improvement.
I've recently set up a recipe archival project using Tandoor; I'm working on converting all my grandparents' fading, old-as-dust cooking recipes from their miscellaneous handwritten cursive notecards to digital.
Setup was uneventful, but it took a little research to figure out how to use a remote Postgres server; it turns out the app doesn't give an error when it can't connect to the server, it just fails to run.
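A pre-flight check like this would have saved me some head-scratching; a rough Python sketch, assuming the connection details live in the usual POSTGRES_HOST/POSTGRES_PORT environment variables (check the docs for the exact names your setup uses):

```python
import os
import socket
import sys

# Fail loudly before starting the app if the remote Postgres host is unreachable.
host = os.environ.get("POSTGRES_HOST", "localhost")
port = int(os.environ.get("POSTGRES_PORT", "5432"))

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"Postgres reachable at {host}:{port}")
except OSError as exc:
    sys.exit(f"error: cannot reach Postgres at {host}:{port}: {exc}")
```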
I have to say, how the actual program chooses to do its permissions is absolutely absurd; it breaks all convention and took quite a bit of getting used to.
This is what messed me up with ZSH for a bit; having a shell default to 1 instead of 0 was weird.
In all fairness, most developers who are still passionate about code aren't looking for a promotion anyway, because once you hit the management-level positions you do far less coding and more bureaucracy bullshit, and what code you do write is usually limited to guidance or review.
Now, being passed over for raises or laid off? That might be annoying.
Yea for sure, I plan to implement that as well when I have some free time.
Oh ok, thank you. I already use Portainer for my existing setup, so it wouldn't make much sense to fully rework it. I hadn't thought of version pinning though, so I may implement that instead; it makes sense that "breaking changes" wouldn't happen within the same major version.
Strangely, it sounds like that's correct. I was under the impression that depends_on cared about the dependency past startup as well, but it does not. There doesn't appear to be a native way of stopping containers that depend on one another when you take the dependency down. The currently recommended way of doing it is either with a Docker Compose file (which doesn't help if the process crashed or was considered unhealthy), or having a third-party script on the host monitor the dependencies and, if one is considered offline, shut the dependents down.
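For what it's worth, the host-side script doesn't have to be anything fancy; here's a rough Python sketch that shells out to the Docker CLI (the container names are made up, and you'd want to harden it before relying on it):

```python
import subprocess
import time

# Hypothetical names: if the database container stops, stop its dependents too.
DEPENDENCY = "postgres"
DEPENDENTS = ["webapp", "worker"]

def is_running(name: str) -> bool:
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip() == "true"

while True:
    if not is_running(DEPENDENCY):
        for name in DEPENDENTS:
            if is_running(name):
                print(f"{DEPENDENCY} is down, stopping {name}")
                subprocess.run(["docker", "stop", name])
    time.sleep(30)
```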
Looking into it, the concern has been raised twice now on the GitHub page, but every time it's been brought up it's been closed as stale because nobody ever replies to the question.
I don't use Watchtower myself for the same reasons described, but I was under the impression that if one container was set as a dependency of another, taking the dependency down would also take the dependent container down. Is this not actually true?
I've never heard of Komodo. I've heard a lot about Watchtower, but I found it more annoying to set up due to its labeling system. Is there any added benefit to Komodo over a standard Watchtower setup?
I haven't set up either of them, but my main concern is having a breaking change get applied automatically.
I've never used TrueNAS, but I've never had any issue with keeping up with releases. I use a Proxmox host with mostly Debian containers, and then I use Ansible for any major changes to the hosts, such as replacing certificates or upgrading packages.
That being said, my backup structure isn't the most professional: I have an 8 TB external drive that I keep plugged in via USB, and Proxmox Backup Server on the same host creates backups nightly.
My router runs OpenWrt, which supports dynamic DNS updates on its own for multiple providers; I currently go through Namecheap on it.
I just create the LXC, and if the package requires Docker I begrudgingly install Docker on the LXC. I've never had performance issues with Debian LXCs; I use one as my base template and it runs flawlessly (outside of ping not working unless run with sudo).
That being said, I don't like installing Docker a billion times, and I feel like that defeats the purpose of using an LXC in the first place, so for most small Docker containers I just put them on the same LXC, since Docker is going to handle the isolation inside those anyway.
I don't use ZFS though, I still use normal ext4, and I use PBS for backing it up to an external drive, but I'm curious whether that may be the root cause of the issues.
Right, it looks like they went back to their roots and stopped doing a curved screen, lol. Maybe they realized that not everyone wanted the curve and that it made it a pain in the ass to put any kind of screen protection on.
Unless that design includes a microSD card slot, I'm not interested.
Before I read the article, I wholeheartedly disagree with the title.
Self-hosting not only brings control back into your own hands, but also hones your skills at the same time.
OK, so after reading it I do partially agree with the regulation aspect, but from a privacy POV all of that is fixed by just not storing PII. I run multiple services in my stack, and the most info I collect on someone is their email, which they could definitely opt out of, in which case I would delete it off the system.
As for the cost and labor: it's really not that difficult. My stack consists of game servers (a mix of them, primarily survival-based like Ark), email hosting for myself and some friends plus no-reply services for other internal services, my media stack, my file server, the firewall, a reverse proxy manager, and my own programming projects/sites. Honestly, the hardest part was the networking aspect; learning how to use Proxmox was a trip because I hadn't used a containerized environment before outside of Docker.
I think this article is being disingenuous with the "no paycheck" point; there is more to value than a paycheck. I may not be paid for my self-hosting, but if I were to put my current setup on remote hosting I would probably be paying roughly $150 to $200 a month for a private VPS. This system let me spend $700 as a one-off plus minor maintenance costs if something fails, which, for a project I intend to keep running regardless, is the cheaper option.
As for the ideology of decentralization, yes, there are some issues with reliability; obviously these smaller self-hosted side projects aren't going to have the redundancies that "proper" hosting has. For example, just last night my services went down because I lost power for about an hour and a half and my battery backup only had enough capacity for about 45 minutes of it. Since most of my stuff is more personal, I'm not too concerned about the downtime, but I could definitely see it being more distasteful for a large-scale project like a Lemmy server.
I mean, I don't use the service, but $7-8 a month that gives you access to everything, versus $14 to $16 a month per streaming service everywhere else? It sounds like they're still getting a steal at a more convenient rate.
That being said, yeah, there are plenty of free options that could be used as well, so there is that argument.