Multi-Core Microprocessors Too Powerful for Modern Software

@ 2009/01/29
The relentless doubling of processing cores per chip will drive the total core counts of upcoming server generations to peaks well above the levels for which key software has been engineered, according to Gartner. Operating systems, middleware, virtualization tools and applications will all be affected, leaving organizations facing difficult decisions, hurried migrations to new versions and performance challenges as a consequence of this evolution.
"Looking at the specifications for these software products, it is clear that many will be challenged to support the hardware configurations possible today and those that will be accelerating in the future. The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it," said Carl Claunch, vice president and distinguished analyst at Gartner.
On average, organizations get double the number of processing cores in each chip generation, approximately every two years. Each generation of microprocessor, with its doubling of processor counts through some combination of more cores and more threads per core, turns the same number of sockets into twice as many processors. In this way a 32-socket, high-end server with eight-core chips in the sockets would deliver 256 cores in 2009. In two years, with 16 cores per socket appearing on the market, the machine swells to 512 cores in total. Four years from now, with 32 cores per socket shipping, that machine would host 1024 cores.
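The projection above is simple compounding: cores per socket double roughly every two years while the socket count stays fixed. A small sketch of that arithmetic (the helper function is illustrative, not from the article):

```python
def total_cores(sockets, cores_per_socket, years_from_now):
    """Total core count, assuming cores per socket double every two years."""
    doublings = years_from_now // 2
    return sockets * cores_per_socket * (2 ** doublings)

# The 32-socket, high-end server from the article:
print(total_cores(32, 8, 0))  # 256 cores in 2009
print(total_cores(32, 8, 2))  # 512 cores two years later
print(total_cores(32, 8, 4))  # 1024 cores four years out
```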

Comment from Wolf2000me @ 2009/01/29
I wouldn't really say a huge number of cores is that much of a disaster; I think the article overly dramatizes it all.

For one, multithreaded applications have been written for quite a while, and software concurrency is a problem that has been battled for decades. A simple client-server application with a database, for example, already has to deal with an unknown number of concurrent read/write operations. A good locking mechanism is needed for that, no matter how many threads are involved.
I'm not saying it's a trivial task, far from it, but it's nothing new.
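A minimal sketch of the kind of locking this refers to: any number of concurrent writers updating shared state must serialize through a lock, whether there are 2 threads or 512. The `Counter` class and the counts here are made up for illustration:

```python
import threading

class Counter:
    """Shared state guarded by a lock, so concurrent updates aren't lost."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # without the lock, read-modify-write races lose updates
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 8000 — correct no matter how many threads run
```

The same discipline scales to any thread count; what changes with hundreds of cores is how much contention that one lock sees, not the correctness argument.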

At the system level, the operating system will bear a bigger responsibility for dividing threads among the different cores, and virtual machines can be helpful in dividing resources as well. For example, the Sun Java virtual machine implementing updated scheduling routines might already solve quite a few problems in one hit. If your application can't handle different scheduling routines, it's already in more trouble than it's worth.
One can also opt to dedicate a certain number of cores to a particular application or task.
Running one app on 1024 dedicated cores is a bit overkill for 99% of applications if you ask me ^^
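Dedicating cores to a task is something the OS already exposes today. A Linux-only sketch using processor affinity (`os.sched_setaffinity` is not available on all platforms; the half-the-cores split is an arbitrary example):

```python
import os

# Ask the kernel which cores this process may currently run on (Linux only),
# then pin the process to an arbitrary subset — here, the first half.
available = sorted(os.sched_getaffinity(0))
subset = set(available[: max(1, len(available) // 2)])
os.sched_setaffinity(0, subset)
print(os.sched_getaffinity(0) == subset)  # True: scheduler now confines us to the subset
```

Tools like `taskset` and cgroup cpusets offer the same kind of partitioning from the command line.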

Older applications based on older technologies won't be run on shiny new state-of-the-art servers anyway. There are technical incompatibilities even before the increased number of cores is taken into consideration.

However, just laughing it off and resting easy isn't an option either, but it's not that dramatic.
Comment from npp @ 2009/01/29
The sad story is that most computational problems aren't embarrassingly parallel - i.e., divide your HD frame into 8 parts and there you go... Most real problems simply can't be decomposed into a large number of tasks to be executed in parallel, and even if they can, synchronization easily becomes a nightmare, even with far fewer than 512 threads... Raise the count to 1024 and I really can't think of any problem of practical significance that can be successfully decomposed to make use of 1024 execution units... Something has to change on a very different plane, I suspect.
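For contrast, the frame-splitting case mentioned above really is the easy one: each strip is processed independently and the only "synchronization" is reassembling the results. A sketch, with a made-up `brighten` filter standing in for real image processing:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(strip):
    """Process one strip of rows independently — no shared state, no locks."""
    return [[min(255, px + 10) for px in row] for row in strip]

frame = [[100] * 1920 for _ in range(1080)]          # fake 1080p frame
rows_per_strip = 1080 // 8                           # 135 rows per strip
strips = [frame[i * rows_per_strip:(i + 1) * rows_per_strip] for i in range(8)]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(brighten, strips))       # order is preserved

processed = [row for strip in results for row in strip]
print(len(processed), processed[0][0])  # 1080 110
```

The hard problems are precisely the ones where the tasks are *not* independent like this, and the strips would need to exchange data mid-computation.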
Comment from jmke @ 2009/01/29
Writing software that runs different threads on separate processors, where one thread has to wait for another to finish, with correct timing and so on, is not that easy.
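A sketch of the waiting the comment describes: one thread must not proceed until another has produced its result. Polling or sleeping gets the timing wrong; an explicit signal such as `threading.Event` makes the wait correct (names here are illustrative):

```python
import threading

result = {}
ready = threading.Event()

def producer():
    result["value"] = sum(range(1000))  # some computed result (499500)
    ready.set()                         # signal: the result now exists

def consumer(out):
    ready.wait()                        # block until the producer signals
    out.append(result["value"] * 2)

out = []
threads = [
    threading.Thread(target=consumer, args=(out,)),
    threading.Thread(target=producer),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(out[0])  # 999000, regardless of which thread the scheduler runs first
```

Even this two-thread case needs care; scaling such dependencies across hundreds of threads is where it gets genuinely hard.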

On paper the PS3 is hugely powerful, but in practice you need highly scalable code that can run on its specialized processing units.