Madshrimps Forum Madness

-   -   ELSA Multi-GPU Applications powered by Hydra (https://www.madshrimps.be/vbulletin/f22/elsa-multi-gpu-applications-powered-hydra-61358/)

jmke 12th February 2009 19:00

ELSA Multi-GPU Applications powered by Hydra
 
Elsa Japan, a supplier of various graphics solutions, has unveiled the world’s first multi-GPU setup powered by Hydra technology from LucidLogix, a startup backed by Intel Capital. The solution, powered by four graphics processing units, is aimed at broadcast, medical and other markets that require high-performance real-time graphics processing or GPGPU-based high-performance computing.


The Hydra engine sits between the chipset and several GPUs and acts like a dispatch processor, distributing tasks among the chips. The technology drives the GPUs to perform scalable rendering of a particular image or scene, relying on “unique adaptive decomposition and acceleration algorithms to overcome bottlenecks”. The Hydra engine combines a PCI Express 1.1 system-on-chip (built around a Tensilica Diamond 212GP programmable general-purpose processor) with exclusive software technologies that load-balance graphics processing tasks, delivering near-linear to above-linear performance with two, three or more graphics cards, according to the company’s promises.

Lucid originally said that the first commercial Hydra-based products would be available in Q4 2008, but it now looks like the first visualization solution will only hit the market in Q2 2009.

http://www.xbitlabs.com/news/video/d...lications.html

npp 12th February 2009 19:57

But can it play Crysis? :D


Joke aside, I am impressed that this thing made it into the wild. It would be nice to see whether it really lives up to expectations. As far as I understand, its main feature is to provide coarse-grained parallelism by distributing the load amongst the GPUs installed in the system (four in this case). I am not sure whether it is currently possible to address a particular GPU in a multi-GPU configuration using CUDA, or whether only one GPU is used no matter the total count; if the former is the case, the usefulness of such a device is fairly doubtful, since manual partitioning of the data would also be possible. If one has data and algorithms that run well on a GPU, dividing the data into four parts is the least of the problems... The idea of distributing commands on the fly amongst the GPUs sounds much healthier for applications like games, but the device is targeted at GPGPU applications right now, which confuses me a bit.
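To illustrate why "dividing the data in 4 parts is the least of the problems", here is a minimal Python sketch of that manual partitioning step. The `partition` helper is hypothetical (not from any GPU library); the per-device dispatch it feeds into would be done through whatever multi-GPU API the framework exposes, which is exactly the open question above.

```python
def partition(data, n_devices):
    """Split `data` into n_devices contiguous chunks of near-equal size.

    Hypothetical helper: each chunk would then be handed to one GPU,
    e.g. one chunk per device in a four-GPU box like the ELSA unit.
    """
    chunk = (len(data) + n_devices - 1) // n_devices  # ceiling division
    return [data[i * chunk:(i + 1) * chunk] for i in range(n_devices)]

# Example: divide 10 work items across 4 GPUs.
chunks = partition(list(range(10)), 4)
# -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

The point is that the host-side split is trivial; the hard part Hydra claims to solve is deciding *at runtime* which GPU gets which work and merging the results without the application doing this bookkeeping itself.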


