
Madshrimps Forum Madness (https://www.madshrimps.be/vbulletin/)
-   Articles & Howto's (https://www.madshrimps.be/vbulletin/f6/)
-   -   DDR3 Roundup: New Elpida Kits from OCZ, Mushkin and Corsair (https://www.madshrimps.be/vbulletin/f6/ddr3-roundup-new-elpida-kits-ocz-mushkin-corsair-65497/)

jmke 17th August 2009 15:51

Quote:

Originally Posted by leeghoofd (Post 242637)
Stock is for MAC users...

and people who care about integrity of their data :D

Kougar 17th August 2009 17:12

Quote:

Originally Posted by leeghoofd (Post 242621)
Don't say the Linpack word, bah bah bah. It has been proven over and over again that it needs a 64-bit OS, and there are loads of instabilities with these programs (usually shell- and not core-related). I can pass IBT or LinX and yet get a reboot or freeze on HyperPi. One of the reasons we discovered B2B: it failed even a simple SuperPi yet passed the IBTs here (maxmem selected).

A very good RAM test for i7: HyperPi 32M with 8 threads. For all platforms, HCI MemTest in Windows is also a good test.

Okay, I need help with the acronyms... B2B? And HCI Memtest?? Do you mean the memtest built into the Windows Vista/7 CD?

Thanks, I will give HyperPI a try!

Quote:

Originally Posted by thorgal (Post 242629)
I use the multithreaded wprime (you can see it in the screenshots) and occasionally hyperpi 0.99b for i7, as some motherboards dislike superpi 32M for unknown reasons.

Good info to know, I will try this one as well. Thanks!

Quote:

Originally Posted by jmke (Post 242631)
all depends on what you call "stable", if everything runs fine (apps, games) and only HyperPi crashes after several hours.... no big deal;)

Stable as in rock stable, stock stable, etc. Anything less than completely stable just leads to data corruption and unexplained errors.

Quote:

Originally Posted by jmke (Post 242635)
or just run at stock speeds :D

And waste a ton of performance? It's like buying a Tesla Roadster and capping it at 55mph. :D And I've seen stock systems still be unstable for various reasons, regardless.

Quote:

Originally Posted by leeghoofd (Post 242637)
And it's for a good cause :) no wasted cycles in silly test programs... but GPU folding is far faster these days...

One Core i7 920 @ 4.2GHz folding four Linux SMP clients (each inside one of four 64-bit Ubuntu virtual machines) takes about 19 hours for all four SMP clients to cycle once. Call it 20 hours: 4 x 1,920 = 7,680 points every 20 hours. PPD results are better than many GPUs. ;-)
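Spelled out, the math looks roughly like this (a rough sketch using the per-WU points and cycle time quoted above; the 24-hour extrapolation is just an illustration, and real PPD comes out lower once downtime is counted):

Code:

# Rough sketch of the PPD arithmetic above; 1,920 points per SMP work unit
# and the ~20-hour cycle are the figures quoted in this post, not constants.
clients = 4
points_per_wu = 1920
cycle_hours = 20

points_per_cycle = clients * points_per_wu         # 7,680 points per ~20h cycle
ppd = points_per_cycle * 24 / cycle_hours           # naive 24h extrapolation (~9,216)
print(points_per_cycle, round(ppd))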

But yes, it's even better to use a GPU or two with the Core i7 as it doesn't affect performance much. This is partly why I am asking; I've been having nonstop issues with the GPU client, suspecting the GPU core or compute shaders on my GTX 260 might not be stable, but I have not been able to prove it. :-/ Could just be more F@H GPU code or CUDA driver issues...

Quote:

Originally Posted by jmke (Post 242638)
and people who care about integrity of their data :D

That's what my RAID'd NAS is for. I've lost too much data to RAID arrays on my desktop; Intel's RAID is no protection at all.

leeghoofd 17th August 2009 17:36

B2B: Back-to-Back CAS delay (I think the Asus, MSI and now Gigabyte BIOSes have it)

Read up here: B2B investigated Massman Style

And here: B2B on Gene II

HCI MemTest: download here: hci memtest

I never went deep into the PPD stuff, but my GTX 285 folds about 3-5 cores per night... my 8 cores on the i7 can't keep up with that! Maybe the CPU gets other, heavier assignments, dunno. Not an expert on the matter.

Kougar 17th August 2009 18:11

Ah, I had read that article on Back-to-Back CAS delay. :) Gigabyte has not released any BIOSes for the EX58-UD5 in months and I could not find a setting for it; was it under a different name? (I see they just released one, will update and check again.)

Thanks for the links, will download the HCI program and play with it.

I'd recommend http://fahmon.net/. Just point it at the GPU directory and the CPU F@H directory to get an idea of your PPD figures... Two Windows SMP clients only fold half as much as four Linux SMP clients; the Folding@home code is not optimized very well. It's kinda hard to compare cores as the GPU folds quite a few different types, but your GPU should fall between 6,000 and 8,000 PPD... same as my Core i7 920. ;)

jmke 17th August 2009 18:46

Quote:

Originally Posted by Kougar (Post 242644)
And waste a ton of performance?

what TON are you referring to? 2.5% at the most; OCing memory doesn't improve performance noticeably...

Quote:

That's what my RAID'd NAS is for. I've lost too much data to RAID arrays on my desktop; Intel's RAID is no protection at all.
Not quite; if your system is unstable and you can't SAVE the data CORRECTLY, you can have all the RAID, NAS, SAN, etc. in the world, and it won't restore DATA corrupted by a SYSTEM crash and INSTABILITY.

Choosing between a ~2% "potential" performance increase and ROCK STABLE (doesn't crash, doesn't bluescreen, always works, always saves data) IS A NO-BRAINER :)
And to be honest, any dual-core system at 3GHz is plenty fast for anything you throw at it; the only way to increase performance noticeably at this moment is an SSD; that will make a difference.
OCing your CPU/VGA/MEM is not worth the risk of an unstable system for a few % of performance increase. In my opinion; to each his/her own :)

Kougar 17th August 2009 19:15

leeghoofd, nice catch! That just-released BIOS does unlock the B2B memory setting... it doesn't say what it is set to by default, though. I guess I'll be spending a bit of time playing with it. :woot:

jmke, you are assuming I was referring to OCing the memory; I am not. That's not worth the time, headache, or risks involved for the tiny gain. All I am doing is tightening the timings, but I wanted something a bit more accurate than Prime95 Blend to double-check my settings.

A 1,560MHz CPU overclock is not "a few % increase", especially for programs that use all four cores / eight threads. :)
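The rough math behind that (a sketch assuming the i7 920's 2.66GHz stock clock and the 4.2GHz overclock mentioned earlier; exact figures depend on bclk and multiplier):

Code:

# Assumed figures: i7 920 stock clock vs. the 4.2GHz overclock from this thread.
stock_ghz = 2.66
oc_ghz = 4.2

delta_mhz = (oc_ghz - stock_ghz) * 1000    # ~1,540 MHz added
gain_pct = (oc_ghz / stock_ghz - 1) * 100  # ~58% higher clock speed
print(round(delta_mhz), round(gain_pct))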

jmke 17th August 2009 19:41

That's why I finished my post with "in my opinion"; I have zero applications which use more than 2 cores, and of the apps I do have, few need more than one core. I noticed no real difference between a C2D E8200 running at stock and one overclocked to ~4GHz; running two identical systems side by side, one overclocked to 4GHz and the other at stock speed, I couldn't tell which was which until I started a CPU-dependent benchmark.

While I agree with you that single-threaded performance increases are the way forward to make our systems faster, we have run into bottlenecks where a faster CPU no longer overcomes the slowdown. "In ye olde days" one could overclock a 300MHz CPU to 600MHz and the difference in overall computing would be night and day. Today, if we take a 2.66GHz CPU to 4GHz, there is an increase in performance... but nowhere near comparable to the past; we have reached a point of diminishing returns.

The single biggest factor at this point is storage; make that super speedy and you'll be able to feed that overclocked CPU data it can actually do something with. Your i7 @ 4.2GHz is a monster, but with normal storage you're throwing peanuts at it; by switching to an SSD/ramdisk you'll be providing it with a real meal, and the outcome will be an eye-opener.

Kougar 18th August 2009 08:25

Well, it sounds like we mostly agree; even I can't tell whether my system is at stock or partly overclocked unless I start up an intensive program or several programs at once.

With things like VMware and its four virtual machines I easily notice the smoother loading and responsiveness between stock and overclocked states. Game levels also tend to load faster even with a basic HDD; oddly, I'm usually the first to load into 24-player TF2 servers or 8-player L4D games, but not in more disk-intensive games. Folding@home is only minimally (at best) affected by disk performance and to a slightly larger extent by RAM performance; it's almost completely CPU/GPU bound.

That said, an SSD upgrade has been on my list for that very reason; I've just been waiting for the new G2's to be stocked so I can get one for 20% off with Bing cashback. In any case, a system is only as fast as its slowest bottleneck; it wouldn't make much sense to buy an SSD for a Pentium 4 or a system with 1GB of RAM. Just been waiting for the new 2nd-gen drives and some price cuts. :)

---

I'll post what I found about Gigabyte's B2B in the appropriate thread. HCI MemTest seems to be stuck at a 2GB limit, so three instances must be run simultaneously to cover all the RAM. I don't understand what wPrime does that SuperPi and HyperPi don't already?

Massman 18th August 2009 08:29

Wprime stresses all cores as you can manually set the number of threads.

leeghoofd 18th August 2009 08:50

So does HyperPi :) wPrime is more of a raw CPU test, while HyperPi and co. stress the RAM and IMC more...

HCI MemTest is limited in max RAM usage (I think it's just the freeware edition), so yes, I run several instances too and even assign core affinity... you run two 2048MB instances and one with the rest of the unused memory (auto).
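As a sketch of that split (the free-RAM figure below is hypothetical; the ~2048MB per-instance cap is the freeware limit discussed above, and the size still has to be typed into each instance's window):

Code:

# Hypothetical split of free RAM across HCI MemTest instances.
free_mb = 5500   # e.g. free RAM on a 6GB triple-channel i7 build (assumed)
cap_mb = 2048    # per-instance limit of the freeware edition

full_instances, remainder = divmod(free_mb, cap_mb)
sizes = [cap_mb] * full_instances + ([remainder] if remainder else [])
print(sizes)     # -> [2048, 2048, 1404]: two full instances plus the rest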

