For highly technical users, containers are going to do everything we need.
For non-technical users who need separation, profiles are a standard, well-known framework.
Depends on seat count. But even a "small" (the smallest seat tier is 500) on-prem install of Confluence Data Center can run into six figures...
I get it. But the moment we invoke RAID or ZFS, we're outside what ordinary consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving Docker are well past what the bulk of the world understands.
There is an enterprise storage shelf (aka a box of drives that hooks up to a server) made by Dell that holds 1.2 PB (yes, petabytes). So there is a use, but it's not for consumers.
The requirement for everything to have a ground wire is fairly new in the scheme of home construction. Whether it's there depends on when the house was built and whether everything up to now was done to code.
Not OP, but I'll do stuff from time to time that is well below my pay grade. Mind you, management understands the pay difference and that I'm not doing my normal responsibilities if I'm helping out...
It was ok(ish). The problem is the book is one of those stories you recommend to people who like fiction. The show...not so much.
They also produced Brave New World.
See I just like LMDE. Everything works without fiddling (I want my OS to be boring). And if I feel spicy - backports.
Ask your local sysadmin/DevOps nerd what they're doing. The self-hosted stacks are easy to maintain if you do it for a living, and most of us hand out access to our friends/family.
I'm a senior IT type. My work laptop is Debian.
We like good pastries, coffee, good booze, and feeling appreciated. Go make friends with the senior IT types and the help desk manager. Trust me, it's worth it.
I or others can go into more detail, but I'm guessing you don't want a super in-depth answer?
One of the major cloud providers, AWS (aka renting a chunk of a data center), had an outage in the us-east-1 region. Because of internal dependencies on us-east-1, when that region (a cluster of data centers) has problems, it impacts services across all AWS regions. To end users, websites suddenly act strange, crash, or just don't work, because elements of their backend are having problems. Due to the raw size of AWS, when something like this happens, vast swaths of the web break.
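If you want a slightly more concrete picture, here's a minimal sketch of the failure mode. None of this is a real AWS API; the endpoint URL and function names are made up for illustration. The point is the hidden cross-region dependency: the service's own region is fine, but it quietly phones home to something hosted in us-east-1.

```python
# Minimal sketch (hypothetical, not any real AWS API): a service running in
# eu-west-1 that quietly depends on a control-plane endpoint in us-east-1.
# When us-east-1 is down, the "healthy" region starts failing too.

import urllib.error
import urllib.request

# Hypothetical endpoint for illustration only -- the real dependency could be
# an auth service, a feature-flag store, a global config table, etc.
US_EAST_1_CONFIG_URL = "https://config.us-east-1.example.com/flags"


def fetch_feature_flags(timeout_seconds: float = 2.0) -> bytes:
    """Fetch config from the us-east-1 dependency, or raise if it's unreachable."""
    with urllib.request.urlopen(US_EAST_1_CONFIG_URL, timeout=timeout_seconds) as resp:
        return resp.read()


def handle_request() -> str:
    """A request served out of eu-west-1 that still needs us-east-1 to answer."""
    try:
        flags = fetch_feature_flags()
        return f"200 OK ({len(flags)} bytes of config)"
    except (urllib.error.URLError, TimeoutError):
        # The local region is healthy, but the hidden cross-region dependency
        # is not -- so users see errors or weird, partial behavior anyway.
        return "503 Service Unavailable (us-east-1 dependency unreachable)"


if __name__ == "__main__":
    print(handle_request())
```

That's the whole story in miniature: multiply that hidden dependency across thousands of companies and you get the "half the internet is broken" days.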