The GPU2 code beta test is going well so far. While there are some issues to deal with, it looks like this client is significantly easier to use than GPU1. Also, we're excited since the science in GPU2 makes a big step forward.
There have been lots of suggestions (tweaks to the client, a visualizer, PPD changes), and we've been compiling lists of requests and going over what makes sense. Right now, I'd say that everything is on the table, but some suggestions are easier than others. We'll prioritize the easy & important additions first, of course (and the hard & less important changes last). Also, we're working on a visualizer that will give real-time updates like the PS3's, as well as a console version of the GPU2 client. We may hit a show stopper, but both are on our roadmap.
We're excited about the turnout so far and are looking forward to doing exciting work together with this new technology.
it's great :-)
will there be an Ubuntu version of the GPU client?
Posted by: Einstein-Rosen-Podolsky | April 11, 2008 at 09:17 AM
First there must be usable drivers and a library set for Linux. Only after those two exist can the GPU FAH client be made. But there seems to be hope, as ATI is heavily pushing OSS GPU drivers and libraries for them. http://ati.amd.com/technology/streamcomputing/faq.html#8
Posted by: Ivoshiee | April 11, 2008 at 11:13 AM
It seems like OS X would be an obvious direction since all new iMacs and Mac Pros come with ATI 2xxx standard.
Posted by: Stephen | April 11, 2008 at 10:53 PM
Works great for me so far.
ATI 3850 @ 740/829
AMD AM2+ @ 2.75GHz
It doesn't use 100% of the GPU, though (which I realize a lot of people are noticing); it runs at around 74% usage.
Posted by: tyler | April 12, 2008 at 09:13 AM
I very much doubt that AMD is willing to port CAL to MacOS X.
Posted by: Ivoshiee | April 12, 2008 at 10:24 AM
Actually, the 20" iMacs come with an HD 2400, which is very low-end and wouldn't be all that useful for GPU2. The 24" ones come with an HD 2600, but only with 256MB VRAM, which would still be kind of limited for FAH use, though much less so than the 2400. Linux and Windows are where the real GPUs are going to be, because that's what gamers use (mostly Windows).
Posted by: rab38505 | April 12, 2008 at 10:38 AM
I suppose the HD 2600 is fine, as a 3650 (512MB VRAM) gets about 950 ppd (O/C'd to 920/720, though), and it seems like video memory size & speed aren't that important for this client.
But of course, a Linux version is much more interesting - keeping in mind that it can be installed as a 2nd OS on almost any PC.
Posted by: Hil | April 12, 2008 at 11:56 AM
ATI 2600XT
AMD X2 4000+ @2.4GHz
Works fine and at 99% usage.
Posted by: hoyy | April 12, 2008 at 11:58 AM
AMD Athlon64 X2 6400+ @3.2GHz
ATI Radeon HD 3870 X2
GPU core 1 @ 17% (inactive for folding)
GPU core 2 @ 77% (active for folding)
Completing Project 2799 WUs at a rate of 1% every 2 minutes. This seems just a tad bit slow to me, though...
Posted by: Team PD 86565 | April 12, 2008 at 01:21 PM
Would be nice to have a table of optimal processor/video card pairings. For example: ATI 2600XT (default clocks) and AMD X2 4000+ @ 2.4GHz equals 99% video card utilization, etc.
I think that would make it easy for newcomers to GPU folding to understand the best use of their system: be it SMP, GPU, or regular folding.
Posted by: LarryPalmer | April 13, 2008 at 04:42 AM
That would be cool! You download a smart tool that analyzes your hardware and then says, "It's recommended that you use the GPU folding client for best performance on your system."
Posted by: AoD | April 13, 2008 at 05:30 AM
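The pairing table and smart-tool ideas above could start out as nothing more than a lookup over reported results. Here's a minimal, purely illustrative sketch in Python: the utilization figures come from the comments in this thread, but the 90% threshold and the recommendation wording are arbitrary assumptions, not anything Stanford has built.

```python
# Sketch of the proposed "smart tool": recommend a client based on how
# well a given CPU keeps a given GPU fed. Utilization figures are the
# ones reported in this comment thread; the threshold is arbitrary.

REPORTED_UTILIZATION = {
    # (GPU, CPU) -> reported GPU utilization
    ("HD 2600XT", "X2 4000+ @ 2.4GHz"): 0.99,
    ("HD 3850", "AM2+ @ 2.75GHz"): 0.74,
    ("HD 3870", "X2 6000+ @ 3.0GHz"): 0.67,
}

def recommend(gpu, cpu, threshold=0.90):
    """Suggest GPU folding only when the pairing keeps the GPU busy."""
    util = REPORTED_UTILIZATION.get((gpu, cpu))
    if util is None:
        return "unknown pairing - run a test work unit first"
    pct = int(round(util * 100))
    if util >= threshold:
        return f"GPU client recommended (GPU ~{pct}% utilized)"
    return f"GPU is CPU-limited (~{pct}%); consider pairing with SMP/CPU folding"

print(recommend("HD 2600XT", "X2 4000+ @ 2.4GHz"))
```

A real tool would of course measure the hardware directly rather than rely on a hand-entered table, but the decision logic could stay this simple.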
How does the GPU2 client determine how many stream processors to use, and how does it manage the parallel processing?
From the above comments, it seems that 120 stream processors get taxed to 99% (Radeon 2600XT), but 320 stream processors only chalk up 74-77% utilization.
Does the GPU2 client pre-define the number of stream processors to use for each different model of GPU, or does it determine that info by itself (maybe through some CAL GPUID routine or something)?
Posted by: WJS | April 13, 2008 at 08:04 AM
I haven't seen any GPU code, but hints here and there indicate that the shader count is being detected at run time and the 74% utilization bottleneck is the CPU.
Posted by: Ivoshiee | April 13, 2008 at 09:43 AM
I believe that to be the case - the CPU is the bottleneck. As I've been experimenting with my overclock settings, I've gone from 57% (at 2.5GHz) to ~74% (at 2.75GHz) GPU utilization.
Posted by: Tyler | April 13, 2008 at 09:50 AM
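Tyler's two data points fit the general shape of a serial-overhead model: each simulation step needs a fixed chunk of GPU compute plus some CPU-side work that gets faster as the CPU is clocked higher, so GPU utilization rises with CPU clock. The following sketch is an editorial illustration only; the model form and its constants are assumptions, not anything taken from the actual client, and it is not fitted to Tyler's exact numbers.

```python
# Toy model of a CPU-fed GPU: each simulation step needs `gpu_ms` of GPU
# compute plus some serial CPU-side work whose duration shrinks as the
# CPU clock rises. Utilization = GPU busy time / total step time.

def gpu_utilization(cpu_ghz, gpu_ms=2.0, cpu_ms_at_1ghz=2.0):
    """Fraction of time the GPU is busy, under the serial-overhead model."""
    cpu_ms = cpu_ms_at_1ghz / cpu_ghz  # serial CPU work speeds up with clock
    return gpu_ms / (gpu_ms + cpu_ms)

for ghz in (2.5, 2.75, 3.2):
    print(f"{ghz} GHz -> GPU {gpu_utilization(ghz):.0%} utilized")
```

The qualitative behavior matches the reports: utilization climbs with CPU clock and only approaches 100% when the serial CPU portion becomes negligible.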
Well, of course it's the CPU. There are a lot of stats already, the O/C results above, and also my own test running 2 GPU2 clients on a single 3850 card (where 1 client gets about 1450 ppd due to a "slow" X2 5600+ @ 3200 MHz CPU, and two clients get about 1980-2000 ppd - 990+ each).
Posted by: Hil | April 13, 2008 at 12:45 PM
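Hil's figures admit a quick sanity check: if one client leaves the GPU partly idle because the CPU can't feed it fast enough, a second client can soak up the idle time. A back-of-the-envelope calculation using only the numbers quoted in the comment above (taking the low end, 990 ppd per client, of the quoted range):

```python
# Back-of-the-envelope check on Hil's figures: one client on an HD 3850
# gets ~1450 ppd (CPU-limited), while two clients sharing the same card
# get ~990 ppd each.
single_client_ppd = 1450
two_client_total_ppd = 2 * 990

gain = two_client_total_ppd / single_client_ppd
print(f"A second client adds roughly {(gain - 1):.0%} aggregate throughput")
```

So on this particular CPU/GPU pairing, doubling up recovers about a third more total points, which is consistent with the CPU, not the GPU, being the limiting factor.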
No offense intended, but this discussion of Points and Performance is somewhat premature. The client is only 3 days old, and we've only run ONE very small test work unit.
IMO, performance will change as the client develops and Stanford makes more tweaks based on good feedback. Any conclusions drawn now would be invalidated by the next revision of the beta client, or by some good-sized work units.
The feedback is good, but don't bet the farm just yet.
Posted by: 7im | April 13, 2008 at 04:44 PM
Video card overheating?!
Sapphire 3850, 512MB, PCI-E
Intel E6750 CPU
Vista Ultimate 64-bit
4 GB RAM
When I put the CPU usage at 100%, the temperature is 90 to 96 degrees Celsius... wow.
And when I put it at 60%, the temperature is 76 to 82 degrees Celsius.
So, isn't the video card temperature too high? I don't think my video card can run like this often without breaking.
Posted by: Philippe | April 13, 2008 at 07:42 PM
Athlon 64 X2 6000+ @ 3GHz
Radeon HD3870
Vista Ultimate 64-bit
Hi, it works fine, but:
GPU usage tops out at 67%, and my CPU is at 80% on average (2 cores used).
Is it possible to get more performance with a 64-bit compiled client? :)
Posted by: Jim | April 15, 2008 at 03:21 AM
Please make the GPU2 console client a priority. I find it difficult to track the progress of the GPU2 client. On the CPU console client I can see plainly where it stands. Also, I can only run the GPU2 client as Administrator. Is there a fundamental reason why it can't run without Administrator privileges?
Posted by: Neal | July 12, 2008 at 03:18 PM
OS X is where the graphic arts professionals are these days. And they're pretty darn serious about their video cards.
Posted by: Stephen | August 17, 2008 at 11:35 PM
The GPU client is way faster than the CPU one. I have them both running at the same time. The difference is pretty clear.
Posted by: Shaheer | January 01, 2009 at 06:42 PM