

Yeah, companies have abused that to release buggy, incomplete products faster and only make the software stable and feature complete if they make a good profit.
Yeah, one thing I find these kinds of tools good for is warranty tracking if something breaks, and insurance claims if there’s a fire or robbery or something.
Personally, I find Traefik much simpler than Nginx, especially with Kubernetes but even with pure Docker, though it’s definitely not as performant. That’s balanced by the fact that it does a lot of automatic detection and has dynamic config loading, so I don’t have to break other services when changing configurations.
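For example, this is roughly what the dynamic side looks like with plain Docker. It’s a sketch with made-up hostnames and a demo container; Traefik reads the labels at runtime, so adding or changing a router rule doesn’t touch the other services.

```yaml
# docker-compose.yml sketch; hostnames and ports are placeholders
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami
    labels:
      # Picked up dynamically; no proxy restart needed when this changes.
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.home.example`)
      - traefik.http.routers.whoami.entrypoints=web
```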
Yeah, video streaming is not a good thing to put on a limited-bandwidth server, whether it’s hosted there directly or the server is just passing the data along as a VPN or proxy.
Your best bet, if you can, would be to set up a reverse proxy on your router and have it accept all inbound requests and direct them to the correct internal server and port.
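If the router can run Nginx (OpenWrt and friends), the config is just a couple of server blocks keyed on hostname. Rough sketch; the names and internal addresses are placeholders:

```nginx
server {
    listen 80;
    server_name media.home.example;

    location / {
        proxy_pass http://192.168.1.20:8096;   # internal media server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 80;
    server_name files.home.example;

    location / {
        proxy_pass http://192.168.1.21:8080;   # internal file server
        proxy_set_header Host $host;
    }
}
```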
Mobilizon works well for me. I only wish more organizers used it so I could get events from local communities without having to enter them myself.
There’s a plugin for compose, but podman itself does have some differences here and there. I’m starting to migrate my own stuff as Docker is getting more money hungry. Wonder if they’ll try to IPO in a few years; seems like that’s what these kinds of companies do after they start to decline from alienating users. Just wish that Portainer and Docker hadn’t killed all the GUIs for Docker, and that Swarm was better supported.
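For anyone starting the same migration, the rough sequence on a Fedora-style box looks like this (package names and socket paths vary by distro, so treat it as a sketch):

```sh
# Install podman, the docker CLI shim, and compose support,
# then expose the docker-compatible API socket.
sudo dnf install podman podman-docker podman-compose
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Most existing compose files work as-is.
podman compose up -d
```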
The company I work for has also required us to migrate away from Docker, as the hub and desktop app are no longer totally free. I expect more and more limitations will show up in the free versions, as is usually the case with companies like this.
If the meter is plugged into the UPS, then the UPS has nothing to do with how much power flows through the meter. Power is “pulled”, not “pushed”, to devices: a supply can limit the amount of power it provides, but it can’t increase it beyond what the devices request.
Just like with plumbing: the water company can’t force your faucets to open and use more water. It could increase the pressure and break pipes, and similarly the UPS could provide the wrong voltage and short out or burn up wires or devices, causing them to draw more, but that’s unlikely to be the issue here. As long as the voltage is constant, amperage (the other component of wattage) is pulled, not pushed. For example, at a steady 120 V a device drawing 0.5 A uses 60 W; the supply holds the voltage, but the device decides the current.
What you’re seeing in the input load, if it matches what is flowing out of the meter, is some device requesting more power and thus more power flowing into the UPS to be passed along to those devices. It’s not the UPS forcing something to use power, which isn’t possible as explained above, and it’s not the UPS’s own consumption, because the meter has no view of what the UPS itself uses, only of the things plugged into the meter.
So there must be something else using the power. Likely the devices, even if they aren’t doing anything you consider significant, are doing something: maintenance, checking for updates, the monitoring processes requesting information from the devices since the TrueNAS server is on that end, etc. You’d need to put a meter on each device to determine what specifically is drawing the power.
Also, does the power meter only display power used by devices plugged into it, or does it also include its own power usage? It could be that the plug itself is using WiFi or something to communicate with external services to log that data, but that would only be quick bursts.
Also, without putting a meter on each device, this is probably cumulative. For example, if the NAS is requesting info to monitor the network, that would spin up the processors on the RPi and cause the switch to draw more power as it transmits that information across the network. Again, this should only be small bursts, but it’s also possible the devices are not sleeping properly after whatever process wakes them, so they continue to run their processors at higher amperage for some time. Tweaking power profiles, with something like tuned on Linux, can make things sleep more aggressively, with the drawback that they take some time to spin back up when needed.
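As a sketch on a distro that ships tuned (profile names vary by distro and version):

```sh
sudo dnf install tuned
sudo systemctl enable --now tuned

tuned-adm list                     # show available profiles
sudo tuned-adm profile powersave   # trade some latency for lower draw
tuned-adm active                   # confirm what is applied
```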
If you’re talking about TOTP exclusively, that only needs the secret and the correct time on the device. The secret is cached along with the passwords on the device.
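To show how little is actually involved, here’s a rough RFC 6238-style sketch in Java; the hard-coded secret is just the RFC test value standing in for the Base32-decoded secret a real app caches:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.time.Instant;

public class TotpSketch {
    // Placeholder secret (the RFC 6238 test key); real apps store the
    // Base32-decoded secret from the provisioning QR code.
    static final byte[] SECRET = "12345678901234567890".getBytes();

    static int totp(byte[] key, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30;                 // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f;       // dynamic truncation
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % 1_000_000;                       // 6-digit code
    }

    public static void main(String[] args) throws Exception {
        // Only inputs: the cached secret and the device's clock.
        System.out.printf("%06d%n", totp(SECRET, Instant.now().getEpochSecond()));
    }
}
```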
LLMs are perfectly fine and cool tech. The problem is they’re billed as actual intelligence or as things that can replace humans. Sure, they mimic humans well enough, but it would take a lot more than just absorbing content to be good enough to replace a human rather than just aid one. Either the content needs to be manually processed to add social context, or new tech needs to be built that includes models for how to interpret content in every culture represented by every piece of content, including dead cultures whose work is available to the model. Otherwise, “hallucinations” (i.e., misinterpretation and thus miscategorization of data) will make them totally unreliable without human filtering.
That being said, there are many more targeted uses of the tech that are quite good, but always with the need for a human to verify.
There’s no need to have Vaultwarden up all of the time unless you use new devices often or create and modify entries really often. The data is cached on the device and kept encrypted locally by the app, so a little downtime shouldn’t be a big issue in the large majority of cases.
A desktop environment is a waste of resources on a system where you’ll only use it to install and occasionally upgrade a few server applications. The RAM, CPU power, and electricity used to run the desktop environment could instead be powering another couple of small applications.
Selfhosting is already inefficient with computing resources, just like everyone building their own separate infrastructure in a city would be less efficient. The difference is that city infrastructure is shared ownership, whereas most online services are not owned by their users, so selfhosting makes sense but calls for extra efficiency wherever you can get it.
How do you connect? Is there a domain? Is that domain used for email or any other way that it might circulate?
Also, it depends on whether the IP address was used for something in the past that made it a target. Then, do you use that IP address outbound a lot, i.e. do you connect to a lot of other services, websites, etc.? And finally, does your ISP have geolocation blocks or other filters in place?
It’s rare for a process to just scan through all possible IP addresses to find a vulnerable service; there are over four billion IPv4 addresses, and that would take a very long time. Usually attackers use lists of known targets or scan through the address ranges owned by certain ISPs. So if you don’t have a domain, or that domain isn’t used for anything else, and your IP address has never gotten onto a list in the past, then it’s less likely you’ll be targeted. But that’s no reason to lower your guard. Security through obscurity is only a contributory strategy; once that obscurity is broken, you’re a prime target if anything is vulnerable. New targets get the most attention, since they often fix their vulnerabilities once discovered so the opening has to be used fast, but they also tend to be the easiest to get lots of goodies out of. Like the person who lives on a side street during trick-or-treat who gives out handfuls of candy to get rid of it fast enough. Once the kids find out, they swarm. Lol
At work we have six environments other than production; at home, just one. I created a way to ease deployment of the environment from scratch using a k0sctl config and Argo CD, and the data gets backed up regularly in case I need to restore that, too.
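For anyone curious, the k0sctl piece is just a small YAML kept in git. Single-node sketch with placeholder address and key path:

```yaml
# k0sctl.yaml sketch
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: home
spec:
  hosts:
    - role: controller+worker
      ssh:
        address: 192.168.1.50
        user: root
        keyPath: ~/.ssh/id_ed25519
```

Then `k0sctl apply --config k0sctl.yaml` stands the cluster up, Argo CD syncs the apps from the repo, and restoring the data backup brings everything back.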
Note that it’s often more efficient to move infrequently accessed memory from background tasks to swap proactively, rather than having to push it out at the moment something else needs the RAM, which delays whatever application is trying to get that memory, especially on a system with lower total RAM. This is the typical behavior.
However, if you need background tasks to have more priority than foreground tasks, or it truly is a specific application that shouldn’t be using swap and should be quickly accessible at all times, or if you need the disk space, then you might benefit from reducing the swap usage. Otherwise, let it swap out and keep memory available.
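If you do decide to tune it, the usual knob on Linux is vm.swappiness (the default is typically 60). Sketch:

```sh
cat /proc/sys/vm/swappiness                  # current value
sudo sysctl vm.swappiness=10                 # swap less eagerly for this boot
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist
```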
This. Get the specific legally binding policies for personal use of their network resources in writing, not just the personal opinion of the IT people. They don’t write the legally binding policy that you are responsible for following.
I mean, in most cases this isn’t criminal law (in the US at least), so you’d have to attract enough attention from a corporation, since they’re usually the only ones who can afford the legal costs of filing DMCA requests and responses for copyright violation. And as with many other civil issues, the corporations with the money for it often don’t have standing to sue, and even if they did, they’d be required to sue each individual in the appropriate jurisdiction.
With the removal of Section 230, those costs would go down significantly: a single user’s violation could be enough to bankrupt or shut down an entire site over the violating content, or, for serious criminal violations like child porn, put the person who hosts the site in prison. And the host is much easier to identify and sue in a single jurisdiction, or arrest, than a random internet user.
Yeah, other countries have similar or even stricter requirements, so it all depends on the jurisdiction. You also have to understand that hosting something externally doesn’t mean you don’t fall under the laws of another country. It’s the internet, and if you live in a country, you may be held responsible for obeying its laws. I’m not a lawyer, so it’s something to be careful of even if you host externally.
This is especially necessary to consider if you live in the US right now. One of the things the current administration is pushing for, even harder than past administrations, is the removal of Section 230, part of the Communications Decency Act enacted in the 90s. It provides a defense against liability for the content you host as long as you make a reasonable effort to remove content that is illegal. From a censor’s point of view, the problem is that this makes content really difficult to suppress, maliciously or otherwise, because it’s hard to go after the individual poster and the liability shield keeps you from simply going after the host, or keeping the host under threat so the content never gets posted in the first place. But removing it is totally unreasonable: it would basically mean every website has to screen every piece of content manually with a legal team, so user-generated content would go away because it would be extremely expensive to host (much to the delight of the broadcast content industries).
The DMCA created a way for censors to file a complaint and have content taken down immediately, before any review, but that still means the censors have to do a lot of work to use it, so they’ve continued to push for the total elimination of Section 230. Since it’s a problematic thing for fascism, the current administration has also been working hard to build a case so the current biased Supreme Court can remove it, since legislation is unlikely to get through: legislators have to get reelected, whereas Supreme Court justices don’t have to care about their reputation.
So, check your local laws, and if you’re in the US, keep an eye on Section 230 news as well as making sure you have a proper way to handle DMCA takedown notices.
Are there any guides to using it with reverse proxies like traefik? I’ve been wanting to try it out but haven’t had time to do the research yet.
I’ve used Java Scanner objects to do this extremely efficiently, with minimal memory required, even with multiple parallel searches (rough sketch at the end of this comment). Indexing is only necessary if you want to search the data many times and don’t know exactly what the searches will be. For one-time searches it’s not going to be useful; grep is honestly going to be faster and more efficient for most one-time searches.
The initial indexing or searching of the files will be bottlenecked by the speed of the disk the files are on, no matter what you do. Indexing only helps because it lets you move future searches to faster memory.
So it greatly depends on what you need to search and how often, and the tradeoff is memory usage, which only pays off across multiple searches of the data you chose to index in the first pass.
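Roughly what I mean by the Scanner approach, as a sketch (path and search term come from the command line):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

// Streams the file line by line, so memory stays flat regardless of
// file size; nothing is indexed or kept after the pass.
public class OneShotSearch {
    public static void main(String[] args) throws FileNotFoundException {
        File file = new File(args[0]);
        String needle = args[1];
        long lineNo = 0;
        try (Scanner sc = new Scanner(file)) {
            while (sc.hasNextLine()) {
                String line = sc.nextLine();
                lineNo++;
                if (line.contains(needle)) {
                    System.out.println(lineNo + ": " + line);
                }
            }
        }
    }
}
```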