

This is also a great way to just break everything you’ve set up.
Really depends on where and how the data collection is integrated.
Browser forks mostly make changes to the application UI which wraps the engine, not to the engine itself. Browser engines are these fantastically complex things, extremely difficult to keep operational and secure, which is why there aren’t many of them and why they’re all developed by large organizations. Forking the engine is basically doomed to failure for a small project because you won’t be able to keep up: you’ll be out of date in a month and drastically insecure in a year.
…which is Gecko, which is Mozilla.
I recommend getting familiar with SMART (Self-Monitoring, Analysis, and Reporting Technology) and understanding what the various attributes mean and how they affect a drive’s performance and reliability. You may need to install smartmontools to interact with SMART, though some Linux distributions include it by default.
Some problems reported by SMART are not a big deal at low rates (like Soft Read Errors), but enterprise organizations will replace the drives anyway. Sometimes drives are simply replaced at a certain number of Power-On Hours, regardless of condition. Some problems are survivable if they’re static, like Uncorrectable Sector Count - every drive has some overhead of spare sectors for internal redundancy, so one bad sector isn’t a big deal - but if the number is increasing over time then you have a problem and should replace the drive immediately.
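To make the attribute-watching concrete, here’s a minimal sketch that parses the table printed by `smartctl -A /dev/sdX` and flags the defect counters that should stay at zero. The column layout matches smartmontools’ standard ATA attribute table, but vendors vary in which attributes they report, so treat the attribute names here as illustrative rather than exhaustive:

```python
# Sample of the table `smartctl -A` prints (illustrative values, not real drive data).
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       3
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       12345
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
"""

def parse_smart_attributes(output: str) -> dict:
    """Map attribute names to raw values from `smartctl -A` text output."""
    attrs, in_table = {}, False
    for line in output.splitlines():
        if line.startswith("ID#"):          # header row marks the start of the table
            in_table = True
            continue
        fields = line.split()
        if in_table and len(fields) >= 10 and fields[0].isdigit():
            try:
                attrs[fields[1]] = int(fields[9])   # first token of the RAW_VALUE column
            except ValueError:
                pass                                # some raw values aren't plain integers
    return attrs

def growing_defect_counters(attrs: dict) -> dict:
    """Counters that should stay at zero; any nonzero value deserves monitoring."""
    watch = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")
    return {name: attrs[name] for name in watch if attrs.get(name, 0) > 0}

print(growing_defect_counters(parse_smart_attributes(SAMPLE)))  # → {'Reallocated_Sector_Ct': 3}
```

On a real system you’d feed it the stdout of `smartctl -A` (e.g. via `subprocess.run`), record the counters over time, and treat any upward trend in those defect counters as the replace-it-now signal described above.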
Also keep in mind, hard drives are consumables. Mirroring and failover are a must if your data is important. New drives fail too. There’s nothing wrong with buying used if you’re comfortable with the drive’s condition.
And now for something completely different: https://youtu.be/ahnfLZKwnTg
Yeah, pay somebody else to be responsible for the server uptime and the bandwidth. Somebody who specializes in providing that.
I think the answer depends a lot on the use case of each business’s website and what the business owner/employees expect from it.
Is the website a storefront? You’ll be spending a lot of time maintaining integration with payment networks and ensuring that the transaction process is secure and can’t be exploited to create fake invoices or spammed with fake orders. Also probably maintaining a database of customer orders with names, emails, physical addresses, credit card info, and payment and order fulfillment records… so now you have to worry about handling and storing PII, maybe PCI DSS compliance, and you’ll end up performing some accounting tasks as well due to controlling the payment processing. HIPAA compliance too if it’s something medical like a small doctor’s office, therapist, dialysis clinic, outpatient care - basically anything that might be billable to health insurance.
Does the business have a private email server? You’ll be spending a lot of time maintaining spam filters and block lists and ensuring that their email server has a good reputation with the major email service providers.
Do the employees need user logins so that they can add or edit content on the website or perform other business tasks? Now you’re not just a web host, you’re also a sysadmin for a small enterprise which means you’ll be handling common end-user support tasks like password resets. Have fun with that.
Do they regularly upload new content? (e.g. product photos and descriptions, customer testimonials, demo videos) Now you’re a database admin too.
Does the website allow the business’s customers to upload information? (comments, reviews, pictures, etc. - i.e. is it Web 2.0 in some way?) God help you.
You’re going to expose this to the public internet. It will be crawled, and its content scraped by various bots. At some point, someone will try to install a cryptominer on it. Someone will try to use it as a C2 server. Someone will notice that you’re running multiple sites/services from one infrastructure stack and attempt to punch their way out of the webhost VM and into the main server just to poke around and see what else you’ve got there. Someone will install Mirai and try to make it part of a DDoS service provider’s network.
I’ve come up with a set of rules that describe our reactions to technologies:
- Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
- Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- Anything invented after you’re thirty-five is against the natural order of things.
- Douglas Adams
Quality control is expensive, and all they ever do is complain about how my brilliant idea to save money will kill more trees or some shit.
Is it possible that some of the discomfort comes from trying to use controls that are too small?
I also have big hands, and I find the Switch controllers uncomfortable because they feel like they were meant for baby hands, and they’re flat so it’s an effort to keep hold of them. I find the Deck very easy to hold because its grips are built like a proper controller and all the buttons are within comfortable reach. The ergonomics make a big difference.
Valve put a lot of design effort into the form of the Deck.
The issue is more that trying to upgrade everything at the same time is a recipe for disaster and a troubleshooting nightmare. Once you have a few interdependent services/VMs/containers/environments/hosts running, what you want to do is upgrade them separately, one at a time, then restart that service and anything that connects to it and make sure everything still works, then move on to updating the next thing.
If you do this shotgun approach for the sake of expediency, what happens is something halfway through the stack of upgrades breaks connectivity with something else, and then you have to go digging through the logs trying to figure out which piece needs a rollback.
Even more fun if two things in the same environment have conflicting dependencies, and one of them upgrades and installs its new dependency version and breaks whatever manual fix you did to get them to play nice together before, and good luck remembering what you did to fix it in that one environment six months ago.
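The one-at-a-time discipline described above can be sketched as a dependency-ordered loop. The service names and the `upgrade`/`health_check` hooks here are hypothetical placeholders, not any particular orchestration tool’s API - the point is just the ordering and the stop-on-first-failure behavior:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependency map: each service lists what it depends on, so
# dependencies get upgraded and verified before anything that connects to them.
DEPS = {
    "db": [],
    "cache": [],
    "api": ["db", "cache"],
    "web": ["api"],
}

def upgrade_order(deps: dict) -> list:
    """A safe order to upgrade: every service comes after its dependencies."""
    return list(TopologicalSorter(deps).static_order())

def rolling_upgrade(deps: dict, upgrade, health_check):
    """Upgrade one service at a time; stop at the first failed health check
    so you know exactly which piece needs the rollback."""
    for svc in upgrade_order(deps):
        upgrade(svc)
        if not health_check(svc):
            return svc   # the culprit - roll this one back, nothing else changed
    return None          # everything upgraded and healthy

print(upgrade_order(DEPS))
```

Contrast that with the shotgun approach: if you upgrade everything at once and then something fails, every service in the stack is a suspect, which is exactly the log-digging scenario above.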
It’s not FUD, it’s experience.