Multi-Core Microprocessors Too Powerful for Modern Software
29th January 2009, 12:03   #1
jmke

The relentless doubling of processing cores per chip will drive the total core counts of upcoming server generations to peaks well above the levels for which key software has been engineered, according to Gartner. Operating systems, middleware, virtualization tools and applications will all be affected, leaving organizations facing difficult decisions, hurried migrations to new versions and performance challenges as a consequence of this evolution.
"Looking at the specifications for these software products, it is clear that many will be challenged to support the hardware configurations possible today and those that will be accelerating in the future. The impact is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it," said Carl Claunch, vice president and distinguished analyst at Gartner.
On average, organizations get double the number of processing cores in each chip generation, approximately every two years. Each generation of microprocessor, with its doubling of processor counts through some combination of more cores and more threads per core, turns the same number of sockets into twice as many processors. In this way, a 32-socket high-end server with eight-core chips in the sockets would deliver 256 cores in 2009. In two years, with 16 cores per socket appearing on the market, the machine swells to 512 cores in total. Four years from now, with 32 cores per socket shipping, that machine would host 1,024 cores.

http://www.xbitlabs.com/news/cpu/dis...re_Report.html
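
The arithmetic behind those numbers is just one doubling per two-year generation from the 32-socket, eight-core baseline; a trivial sketch that reproduces the article's figures:

Code:
public class CoreGrowth {
    public static void main(String[] args) {
        int sockets = 32;
        int coresPerSocket = 8;              // the 2009 baseline from the article
        for (int year = 2009; year <= 2013; year += 2) {
            // 2009: 256 cores, 2011: 512 cores, 2013: 1024 cores
            System.out.println(year + ": " + sockets * coresPerSocket + " cores");
            coresPerSocket *= 2;             // one doubling per generation
        }
    }
}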
29th January 2009, 12:04   #2
jmke

Writing software that splits work across threads running on separate processors, makes one thread wait for another to finish, gets the timing right, etc. etc. - it's not that easy.
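
A minimal Java sketch of just the wait-for-the-other-thread part (the worker's loop is filler standing in for real work); forgetting the join() is the classic timing bug:

Code:
public class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                long sum = 0;                          // simulate work on another core
                for (int i = 0; i < 1000000; i++) sum += i;
                System.out.println("worker finished: " + sum);
            }
        });
        worker.start();
        worker.join();  // main blocks here until the worker is done
        System.out.println("main resumes only after the worker");
    }
}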

On paper the PS3 is hugely powerful, but in practice you need very scalable code that can run on its specialized processing units (the Cell's SPEs).
29th January 2009, 20:53   #3
npp

The sad story is that most computational problems aren't embarrassingly parallel - i.e., divide your HD frame into 8 parts and there you go... Most real problems simply can't be decomposed into a large number of tasks to be executed in parallel, and even if they can, synchronization easily becomes a nightmare, even with far fewer than 512 threads... Raise the count to 1024 and I really can't think of any problem of practical significance that can be successfully decomposed to make use of 1024 execution units... Something has to change on a very different plane, I suspect.
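
The easy case really is easy, which is the point; a rough Java sketch of the split-your-frame-into-8-strips idea (the frame size and the per-pixel operation are made up):

Code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FrameSplit {
    public static void main(String[] args) throws InterruptedException {
        final int height = 1080, width = 1920;   // one HD frame
        final int parts = 8;
        final byte[][] frame = new byte[height][width];

        ExecutorService pool = Executors.newFixedThreadPool(parts);
        for (int p = 0; p < parts; p++) {
            final int from = p * height / parts;
            final int to = (p + 1) * height / parts;
            pool.execute(new Runnable() {
                public void run() {
                    // each strip touches only its own rows: no locks, no ordering
                    for (int y = from; y < to; y++)
                        for (int x = 0; x < width; x++)
                            frame[y][x] = (byte) ~frame[y][x];  // e.g. invert pixels
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}

The moment the strips need their neighbours' pixels, that independence is gone and the synchronization nightmare starts.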
29th January 2009, 21:26   #4
Wolf2000me

I wouldn't really say a huge number of cores is that much of a disaster; I think the article overly dramatizes it all.

For one, multithreaded applications have been written for quite a while, and software concurrency is a problem that has been battled for decades. A simple client-server application with a database, for example, already has to deal with an unknown number of concurrent read/write operations. A good locking mechanism is needed for that, no matter how many threads - see the sketch below.
I'm not saying it's a trivial task, far from it, but it's nothing new.
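
Something like the standard ReentrantReadWriteLock pattern in Java is what I mean (the cache itself is a made-up example): readers share the lock, writers get it exclusively, and it behaves the same on 2 cores or 512:

Code:
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A cache safe for an unknown number of concurrent readers and writers.
public class SharedCache {
    private final Map<String, String> data = new HashMap<String, String>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();       // many readers may hold this at once
        try {
            return data.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        lock.writeLock().lock();      // writers get exclusive access
        try {
            data.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}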

On the system level, the operating system will bear more responsibility for dividing threads among the different cores, and virtual machines can help divide resources as well. For example, the Sun Java virtual machine implementing updated scheduling routines might already solve quite a few problems in one hit. If your application can't handle different scheduling routines, it's already in more trouble than it's worth.
One can also opt to dedicate a certain number of cores to a certain application, task or any atomic operation - a rough sketch follows below.
Running one app on 1024 dedicated cores is overkill for 99% of applications, if you ask me ^^
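
Standard Java can't actually pin threads to cores (that takes OS tools like taskset on Linux), so the closest portable move is giving an application a fixed core budget - a rough sketch, with the quarter-of-the-machine budget entirely made up:

Code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreBudget {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // Give this app a fixed slice of the machine instead of letting
        // hundreds of threads fight the scheduler for all the cores.
        int budget = Math.max(1, cores / 4);
        ExecutorService pool = Executors.newFixedThreadPool(budget);
        System.out.println("machine reports " + cores
                + " logical processors; this app uses " + budget);
        pool.shutdown();
    }
}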

Older applications built on older technologies won't be run on shiny new state-of-the-art servers either; there are technical incompatibilities even before the increased number of cores is taken into consideration.

However, just laughing it off and resting easy isn't an option either, but it's not that dramatic.