
It seems to be accepted that computers that have been powered on for a long time, running any sort of complex software (i.e. an OS), tend to develop random errors and problems. Turning the device off and back on clears all volatile memory and generally fixes the problem.

First, am I just imagining that, or is it a recognized phenomenon? Is there a better description, or a word or phrase for it?

Second, how do servers deal with this? They are generally 24/7/365 machines. Multiple machines serving the same page could be taken down individually, but is that actually done in practice?

3 Answers

2

It might be the accepted norm if you are accustomed to running hardware and software that isn't very stable, but I haven't observed a particular trend of servers running poorly after extended uptime in my career. I've run many a Solaris, Linux, or BSD server well past 1000 days of uptime, and more than a handful have made it to the 1400-1500 day mark. I would update Apache or apply other patches without patching the kernel and just keep trucking. (Note: I don't advocate this as a sysadmin practice, but there are systems that customers don't want rebooted unless there is a problem.)

As to how it is done for web servers that just serve pages, you are correct: a page can be, and often is, served by redundant servers and even a content delivery network. Taking down a node shouldn't impact your site if you have redundancy and caches. High availability is all about redundancy; it isn't so important to keep a single node healthy for extended runtimes on a static web site. You really shouldn't need to depend on a single web server today, when a Linux VM can be had for $5 a month at DigitalOcean and a two-node load-balanced Linux setup can be put together cheaply.
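To make the redundancy point concrete, here is a minimal sketch (Python, with made-up backend addresses) of the idea behind a two-node load-balanced setup: a front end hands each request to one of two backends and quietly skips a node that is down for a reboot. A real deployment would use something like nginx or HAProxy rather than hand-rolled code; this only illustrates why taking one node down doesn't take the site down.

```python
# Toy round-robin HTTP front end: forwards each request to one of two
# backends and skips a backend that is down. Addresses are hypothetical.
import itertools
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]  # made-up nodes
rotation = itertools.cycle(BACKENDS)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Try each backend once, starting with the next one in rotation.
        for _ in range(len(BACKENDS)):
            backend = next(rotation)
            try:
                with urllib.request.urlopen(backend + self.path, timeout=2) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
                    return
            except (urllib.error.URLError, OSError):
                continue  # that node is down or rebooting; try the other one
        self.send_error(502, "no backend available")

if __name__ == "__main__":
    HTTPServer(("", 8000), ProxyHandler).serve_forever()
```

With two (or more) backends behind something like this, any single node can be rebooted or patched without the site ever going dark, which is the whole point of the redundancy the answer describes.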

The shift over the past 10 years has been toward many cheap servers. Back in the 1998-2000 time frame at IBM we were already running massively distributed web farms with 50-100 nodes serving a single site (Olympics, Wimbledon, US Open, Masters), and now it is commonplace, since companies like Google and Facebook have published a lot of literature on the technique.

0

Any system that has to be reset or restarted after a while to continue working is faulty. The faults range from simple things like memory leaks to more complex problems like design failures.
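To illustrate the memory-leak case this answer mentions, here is a toy Python sketch (not from the answer): a long-running process that caches per-request data and never evicts it degrades as uptime grows, even though every individual request succeeds. Restarting the process frees everything, which is exactly the work-around described below.

```python
# Toy illustration of why a long-running process can degrade until restarted:
# a per-request "cache" that is never evicted grows without bound, so memory
# use rises with uptime even though each request still returns a correct result.
request_cache = {}  # never cleared -- this is the leak

def handle_request(request_id: int, payload: bytes) -> bytes:
    # Stash the payload "for later", but nothing ever removes old entries.
    request_cache[request_id] = payload
    return payload.upper()

if __name__ == "__main__":
    # 100,000 requests retain roughly 100 MiB; scale the count up to simulate
    # months of traffic and watch the process's memory climb until restart.
    for i in range(100_000):
        handle_request(i, b"x" * 1024)
```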

Many years of poor software have had the effect of "training" users to accept faulty systems and to work around the problem by restarting. Note: "working around" a problem is not the same as fixing it.

For servers (i.e. 24/7/365 machines) this isn't possible; you need, among other things, software that isn't faulty.

  • But is that a realistic approach? In other words, is server software never faulty, and how is this ensured? Commented Sep 28, 2014 at 23:31
  • @delnan: What it really comes down to is not whether the software is faulty, but whether the user is willing to accept the faultiness (including faultiness they simply aren't aware of). If the end user doesn't know or doesn't care, then there's little incentive for developers to fix the faultiness; and if the user does know and does care (e.g. they're trying to run a 24/7/365 server and your code starts behaving badly after a while) then there's a very good reason for developers to fix it. Commented Oct 4, 2014 at 4:30
0

Well, firstly, I would expect years of uninterrupted uptime from any modern server OS (even Windows :-) ).

But there are usually external reasons for bringing down a server (often every few months!): software upgrades, hardware upgrades, changing dust filters, re-organizing your data center, applying security patches, etc.

If the system is critical and expected to run 24/7/365, then there are a few ways to deal with this.

  • Run a load-balanced cluster. You simply cycle the servers one at a time (see the sketch after this list).
  • Have a hot standby. Switch load to the standby machine when applying maintenance.
  • No real servers -- run only virtual machines. You can shift the image to another physical server quickly, with minimal downtime.
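As a rough sketch of the "cycle the servers one at a time" approach from the first bullet: the loop below drains a node, reboots it, waits for it to pass a health check, and only then moves on, so capacity is never down by more than one node. The node names and the drain / reboot / is_healthy / restore helpers are hypothetical stand-ins for whatever your load balancer and provisioning tooling actually provide, not a real API.

```python
# Hypothetical rolling-restart loop for a load-balanced cluster.
import time

NODES = ["web1", "web2", "web3"]  # made-up node names

def drain(node: str) -> None:
    print(f"removing {node} from the load balancer pool")

def reboot(node: str) -> None:
    print(f"rebooting {node} (kernel patches, hardware work, etc.)")

def is_healthy(node: str) -> bool:
    print(f"health-checking {node}")
    return True  # assume the node comes back cleanly

def restore(node: str) -> None:
    print(f"adding {node} back to the pool")

for node in NODES:
    drain(node)            # traffic now flows only to the remaining nodes
    reboot(node)
    while not is_healthy(node):
        time.sleep(5)      # wait until the node serves requests again
    restore(node)          # only then move on to the next node
```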

In practice, a robust setup will probably use a hybrid of all of the above methods: a cluster of physical servers running a set of load-balanced virtual machine images, with some hot standby servers at a remote site.

  • For fun: Google's MTBF rate for their drives - static.googleusercontent.com/media/research.google.com/en/us/… - it's not so much the software or the computer electronics (RAM, CPU) that fail, but the hard drives where state is stored and software is loaded from. Commented Sep 29, 2014 at 2:07
  • @MichaelT -- yep, disks are the least reliable piece of hardware; that's why we have RAID to reduce the impact of a single failure. More worrying are network cards, which fail reasonably often but are harder to make redundant. Worse, they can degrade rather than fail completely, which can cause severe performance issues that are very hard to diagnose if you don't know where to look. Commented Sep 29, 2014 at 2:21
