Never.
wait, they shut off? who knew.
I had one Linux server that was up for over 500 days. It would have been up longer but I was organizing some cables and accidentally unplugged it.
Where I worked as a developer, we had Sun Solaris servers as desktops to do our dev work on. I would just leave mine on, even during weekends and vacations; it also hosted our backup webserver, so we just let it run 100% of the time. One day the sysadmin said, "You may want to reboot your computer, it's been over 600 days." 😆 I guess he didn't have to reboot after patching all that time, and I didn't have any issues with it.
Prod environments typically don't have downtime, save for quarterly patching that requires a host reboot.
Sometimes I don't need all the things running, so I'll kill a few Pis and disks.
I keep the stuff running 24/7 and shut things down barely once a year, pretty much just for cleanup, upgrades, or whatever. Don't mind me, I still have to get a UPS for when the electricity goes down, which hasn't happened in the past few years.
You don't (and generally shouldn't) reboot servers. People got this idea that PCs needed to be rebooted because Windows is trash and becomes more unstable the longer it runs. Server OSes don't have this problem.
It also depends a bit on the OS. And before the comments start: as with anything, the situation and a bit of luck go a long way.
But Linux-based machines can be left running for months, sometimes even years. Windows I honestly wouldn't trust beyond a few months, and even that seems like too much for my taste.
I reboot my systems monthly most of the time, usually paired with updates. But my main host is Windows Server, which gets daily reboots (power savings; I don't need it on while I sleep), and the VMs on it are frozen and unfrozen, so they stay up for about a month or more until I do the above.
I built a UPS from a 200 Ah 12 V battery with an RV inverter/charger. The power never fails, so it runs for months until I decide to put something new in… let's see.
Only if I need to move it or upgrade the components, and that happens maybe once a year, if not less.
If it weren't for that and power outages, they would have been on for 5+ years.
I don't ever shut them down "just because why not".
You're rebooting them for updates, right?
right?
Fairly frequently, but on no real schedule: every 3 weeks to 3 months, whenever I get some time to update without it being a problem. Primarily for patching and new kernels, but it has also caught the odd disk issue where btrfs was struggling and I hadn't noticed while it was running.
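If you want to spot that kind of btrfs trouble without waiting for a reboot, the standard btrfs-progs commands can be run any time; a minimal sketch, assuming the filesystem is mounted at /data (adjust the path for your setup):

```sh
# Per-device error counters (write/read/flush/corruption/generation errors)
btrfs device stats /data

# Foreground scrub: -B blocks until it finishes and prints a summary
btrfs scrub start -B /data

# Recent kernel messages mentioning btrfs
dmesg | grep -i btrfs | tail -n 20
```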
3 months? I've had Linux systems up for 3 years haha
Only when I type "shutdown" in the wrong console window, or for new hardware, or when I need to fix something.
So that's pretty rare 😂
When do I shut down?
- When the power goes out and my UPS battery drains.
- When I do a hardware upgrade.
- If I want to rearrange equipment, and also when I moved this past summer.
That's seriously about it.
I boot my big server whenever I need it; everything else is 24/7. I have had no catastrophic failures in either for the last 2-3 years, so it seems to be fine?
I have a year of uptime. I need to shut down and clean the servers out, but I haven't cared enough to do that.
Whenever there is a Proxmox kernel update. Every few years to dust them, or if I get new hardware.
In the year mine has been running... it's been offline twice. Once when upgrading the memory, the other when I upgraded the processors. The only other maintenance was a software update, which didn't require a reboot.
Out of six Cisco servers, three have auto power-on at 7 am and auto shutdown at 11 pm. The other three are 24/7.
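Scheduled power-on is usually configured in the BMC or BIOS, but on a plain Linux box you can approximate the same 11 pm off / 7 am on cycle with cron plus rtcwake. A rough sketch, assuming util-linux's rtcwake and GNU date; the script path and times are just placeholders:

```sh
#!/bin/sh
# Hypothetical /usr/local/sbin/nightly-off.sh, run from root's crontab at 23:00:
#   0 23 * * * /usr/local/sbin/nightly-off.sh
# Set the RTC alarm for 07:00 tomorrow without suspending (-m no, local clock),
# then power off; the firmware wakes the machine when the alarm fires.
rtcwake -m no -l -t "$(date -d 'tomorrow 07:00' +%s)"
shutdown -h now
```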
What are you guys even running that needs to be on?
I just got a Dell R510 and an HPE ProLiant 360 G7 and installed ESX on them, but I can't find anything that would justify running them 24/7.
I mean, besides a NAS that holds some files, I can't find anything worthwhile. I can only think of enterprise purposes, which I don't need at home.
So, to answer the question, they are always off until I want to experiment.
Never? Only when they need physical maintenance
Almost never since getting a whole home generator.
They have an off switch? who knew.
What uptimes are people looking at right now?
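For comparing numbers, the quickest checks on Linux (assuming the usual procps and coreutils tools are installed, which they almost always are):

```sh
uptime -p          # pretty-printed, e.g. "up 236 days, 4 hours"
cat /proc/uptime   # seconds since boot, handy for scripting
who -b             # last system boot time
```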
An old 486 Slackware 4.0 server I had on a big UPS made it through several dorm/apartment moves without a shutdown. Something like 7 years of uptime when I finally retired it.
It depends. I don't run anything public-facing, so security updates that need reboots are less of a concern to me.
My Windows servers are rebooted once a month for patches. My Linux servers maybe once every couple months for kernel patches or if I screwed something up. My physical proxmox hosts? Twice in the last year. Once because I moved. The other time because I upgraded to proxmox 8.
I shut down my NAS after work because I tend not to use its services outside of work yet, and saving roughly two-thirds of a day in electricity is worth it. The machines that provide services like networking and security run on UPS 24/7, up until there is a need to update or a UPS has a failure.
Only when I'm installing/removing hardware. Probably like once a year on average.
I have two hosts: a Raspberry Pi that serves as a Pi-hole and as a log of infrequent power outages, which runs 24/7, often with 100+ days of uptime (seeing the "(!)" sign in htop is so satisfying), and an SFF that shuts itself off nightly, provided nothing is happening on it (power is expensive).
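A nightly self-shutdown like that can be a cron job that bails out if the box looks busy; a minimal sketch, assuming root's crontab and that "busy" just means someone is logged in or the 15-minute load average is elevated (the 0.20 threshold is arbitrary):

```sh
#!/bin/sh
# Hypothetical /usr/local/sbin/auto-off.sh, run nightly from cron, e.g.
#   0 1 * * * /usr/local/sbin/auto-off.sh
# Skip the shutdown if anyone is logged in.
[ -n "$(who)" ] && exit 0
# Skip the shutdown if the 15-minute load average is 0.20 or higher.
awk '$3 >= 0.20 { exit 1 }' /proc/loadavg || exit 0
shutdown -h now
```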
When I'm adding hardware or decide to blow out my PC equipment (which I do way less often than I should). I have dogs and cats and their hair gets everywhere.
My OptiPlex 9010 SFF is what I use for experimenting with services and as a staging area for moving VMs to my main lab, because it's air-gapped. At max load it runs at 140 W, but it has a GTX 1650 that I use for gaming as well.
Otherwise the rest of my lab is only turned on when I'm using it, or when I forget to turn it off before I leave the house. When I get a laptop again I'll leave it on more. None of it is more than $150 to replace, though. It's a Hyve Zeus, a Cisco ISR 4331, and a Catalyst 3750-X, so nothing heavy, just a little loud.
Never really shut my mini PCs down; sometimes I restart a Proxmox node if I want it to use an updated kernel, but that's it. I don't run large servers at home.
Power failures, hardware upgrades.
Mine are running all of the time, including during power outages, and are only shut down for physical maintenance or rebooted for software maintenance.
This is a little variable, though. Windows hosts tend to require more frequent software reboots in my experience. About once a year I physically open each device, inspect it, clean out dust (fairly rare to find any in my setup, though), and perform upgrades, replacing old storage devices and such. Otherwise I leave them alone.
I usually get about 5-7 years out of the servers and 10 out of networking hardware, but sometimes a total failure occurs unexpectedly still and I just deal with it as needed.
Even though live kernel patching is a thing, I generally do a full reboot every month or two for the next big patch.
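Live patching aside, it's easy to check whether the last round of updates actually wants a reboot; a hedged example for the two common families (the flag file and tool below are the standard ones, but verify on your distro):

```sh
# Debian/Ubuntu: apt drops this flag file when a package requires a reboot
[ -f /var/run/reboot-required ] && echo "reboot required"

# RHEL/Fedora (needs-restarting ships in yum-utils/dnf-utils):
# -r exits non-zero when a reboot is recommended
needs-restarting -r
```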
Full shut downs? Are we upgrading them, dusting them, or doing any other maintenance to them? That would be the only case besides UPS failure or power outage.
Lol. 236 days and 107 days since the last reboots of my two servers.
If it is a Windows 95 server then every three days. Format and reinstall once every three months.
I have a five 9s SLA with the wife for Plex.
Changes rarely get approved anyway.
She likes to sweat those assets.
My stuff is pretty low powered so it runs 24/7 except one old machine I use as a last resort offline backup that I boot and sync to every few months.
Usually I reboot once a year, but in reality power outages limit uptime to about this anyway.
I set up a cron job to reboot once a day. It's for my security cameras and I want to ensure access. But if you don't have issues, you don't need to.
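For reference, something like that is one line in root's crontab; a minimal sketch (the 04:30 time is arbitrary):

```sh
# crontab -e as root: reboot every day at 04:30
30 4 * * * /sbin/shutdown -r now
```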
I mean, so far the longest uptime I've seen at my current job is 9 years. Yes, that host should be patched. But given its role and network access, it's fine. Running strong. It is in a DC. Server-grade hardware is designed with 24/7 operation in mind. Consumer hardware might be fine, but it wasn't designed with 24/7 critical operation in mind.
At home, I have some NUCs on 24/7, plus an R740 and an NX3230 on 24/7. The rest is a true lab environment that I only power on as needed.
Only when I swap or upgrade internal hardware.
These run 24/7/365.
Only when children break into the server room.
Once a month to install Patch Tuesday updates, because my only host is still running Microsoft Hyper-V Server 2019. Planning to switch that to Proxmox, but it's gonna take a while, so I haven't gotten around to it.
Whenever regular patching necessitates a reboot. Typically once a month.
Only shut down for maintenance or if hardware breaks. Otherwise, reboots are done to update firmware or ESXi.
Once a year for firmware updates. But my Unraid box usually needs a reboot once a month to stay stable.