It might be the accepted norm if you are accustomed to running hardware and software that isn't that stable. But I haven't observed a particular trend of servers running poorly after extended uptime in my career. I've run many a Solaris, Linux or BSD server well past 1000 days of uptime, and more than a handful have made it to the 1400-1500 day mark. I would update Apache or apply other patches without patching the kernel and just keep trucking. (NOTE: I don't advocate this as a sys-admin practice, but there are systems that customers don't want rebooted unless there is a problem.)
As to how it is done for web servers that just serve pages, you are correct: a page can be, and often is, served by redundant servers and even a content delivery network. Taking down a node shouldn't impact your site if you have redundancy and caches. High availability is all about redundancy; it isn't so important to keep a single node healthy for extended runtimes when serving a static web site. You really shouldn't need to depend on a single web server today, when a Linux VM can be had for $5 a month at Digital Ocean and a 2-node load-balanced Linux setup can be put together for cheap.
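To make the 2-node idea concrete, here is a minimal nginx reverse-proxy sketch. The IPs, domain, and file path are hypothetical, just to illustrate the shape of the setup:

```nginx
# /etc/nginx/conf.d/load-balancer.conf  (illustrative path and IPs)
upstream web_backend {
    # Two cheap VMs behind one load balancer; nginx round-robins by default.
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

server {
    listen 80;
    server_name example.com;  # hypothetical domain

    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
    }
}
```

With something like this in place you can reboot or patch either node and traffic flows to the survivor, since nginx marks an upstream as down after failed connection attempts. Individual node uptime stops mattering; only the pool matters.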
The shift over the past 10 years has been toward many cheap servers. Back in the 1998-2000 time frame at IBM we were already running massively distributed web farms with 50-100 nodes serving up a single site (Olympics, Wimbledon, US Open, Masters), and the approach is now commonplace, helped along by companies like Google and Facebook publishing a lot of literature on the technique.