Just came across this on coding horror: http://www.codinghorror.com/blog/2011/01/24-gigabytes-of-memory-ought-to-be-enough-for-anybody.html
RAM is as fundamental a resource in a computer as CPU time itself. CPU scheduling operates on the order of milliseconds (a thread context save/restore plus CPU cache re-population), which is what makes applications appear to run simultaneously. The same cannot be said for RAM scheduling: taking RAM away from one application and giving it to another means paging memory out to disk, and because disk I/O is slow, a RAM "context switch" cannot be done in milliseconds. So while it is okay to have a CPU just good enough to keep a single application's UI responsive, a responsive computer also requires that the RAM working sets of all running applications be resident in RAM at once.
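The gap between the two kinds of "context switch" can be made concrete with a back-of-envelope calculation. All of the figures below are illustrative assumptions (a ~10 µs CPU context switch, a 500 MB working set, ~100 MB/s sustained disk throughput), not measurements:

```python
# Back-of-envelope comparison of a CPU context switch vs. a RAM
# "context switch" (paging one working set out and another in).
# All figures are assumed round numbers for illustration.

CONTEXT_SWITCH_S = 10e-6            # assume ~10 microseconds per CPU context switch
WORKING_SET_BYTES = 500 * 2**20     # assume a 500 MB application working set
DISK_BW_BYTES_PER_S = 100 * 2**20   # assume ~100 MB/s sustained disk throughput

# Swapping working sets moves the data twice: old set out, new set in.
ram_switch_s = 2 * WORKING_SET_BYTES / DISK_BW_BYTES_PER_S

print(f"CPU context switch:  ~{CONTEXT_SWITCH_S * 1e6:.0f} us")
print(f"RAM 'context switch': ~{ram_switch_s:.0f} s")
print(f"ratio: ~{ram_switch_s / CONTEXT_SWITCH_S:,.0f}x")
```

Under these assumptions the RAM switch takes about ten seconds, roughly a million times longer than the CPU switch, which is why thrashing is so catastrophic for responsiveness.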
From a developer’s point of view, CPU scheduling is generally easy to live with: they can measure and control exactly how long their single application takes to respond, and assuming their app is the only one running on the user’s system yields reasonably accurate estimates of the code path lengths that matter for responsiveness. Memory is a whole different beast. A developer has no idea how large a working set they can assume the computer will grant their app. The worst-case figure is essentially zero, so developers go by the best-case figure of having the entire system RAM to themselves. If the sum of the working sets of all running applications exceeds RAM, the computer fundamentally cannot schedule those applications effectively, leading to a phenomenon commonly referred to as “thrashing”, where the computer spends most of its time swapping RAM working sets in and out without getting any actual work done.
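Some of the measurability that exists on the CPU side is at least partially available for memory too. As a small sketch, a process can observe its own peak resident set size via the POSIX `getrusage` interface (Unix-only; note that Linux reports `ru_maxrss` in kilobytes while macOS reports bytes, so the unit is platform-dependent):

```python
# Observe this process's own peak resident set size (RSS) -- a rough
# proxy for its working set -- using the stdlib resource module.
# POSIX-only; ru_maxrss units are platform-dependent (KB on Linux).
import resource

def peak_rss() -> int:
    """Peak resident set size of this process, in platform units."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
blob = bytearray(50 * 2**20)   # allocate ~50 MB of zero-filled pages
after = peak_rss()
print(f"peak RSS grew from {before} to {after}")
```

This only tells the developer what their app consumed on one machine; it says nothing about how much the system will actually be able to keep resident, which is the core of the problem.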
The solution we have all been comfortable with so far has been to keep inflating the amount of RAM in our systems so that developers’ expectations of how much RAM they need trail behind the RAM actually present. We could instead aim to make memory closer to the CPU way of doing things, where it’s possible for developers to track the working set requirements of their applications and gracefully degrade the application experience when less RAM is available. A simple malloc() plus demand paging in the background doesn’t cut it then. We would need a more elaborate memory programming model in which the system lets the application know when it is taking RAM away from it.
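One way such a model could look is a pressure callback: instead of silently paging an app out, the system asks it to give some memory back, and the app degrades gracefully (say, by shrinking a cache). The `on_pressure` hook below is hypothetical wiring invented for illustration; real-world analogues include Linux's memory pressure (PSI) interface and Windows' `CreateMemoryResourceNotification`, but neither is used here:

```python
# Sketch of an application cooperating with a hypothetical memory-pressure
# notification: an LRU cache that releases entries when the system asks.
from collections import OrderedDict

class ShrinkableCache:
    """An LRU cache that degrades gracefully when asked to release RAM."""

    def __init__(self, max_items: int = 1024):
        self._items: OrderedDict = OrderedDict()
        self.max_items = max_items

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        while len(self._items) > self.max_items:
            self._items.popitem(last=False)   # evict least recently used

    def __len__(self):
        return len(self._items)

    def on_pressure(self, fraction: float):
        """Hypothetical system callback: drop ~`fraction` of cached entries."""
        keep = int(len(self._items) * (1.0 - fraction))
        while len(self._items) > keep:
            self._items.popitem(last=False)

cache = ShrinkableCache()
for i in range(100):
    cache.put(i, i * i)
cache.on_pressure(0.5)    # system reclaims roughly half the cache's RAM
print(len(cache))         # -> 50
```

The point of the sketch is the contract, not the cache: the application, which knows which bytes are cheap to recreate, decides what to drop, rather than the pager guessing which pages to evict.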