
May 29, 2008


Team PD 86565

That last sentence is the perfect way to get people excited about contributing as much as possible to F@H, in my opinion. Nobody seems to care that there is no prize for having tons of points; they just care about having the points.


Nah, I think aspiring to be a top 500 team or donor is a great thing to do, and while you're doing that, you're contributing to a very worthy project.


Does the upcoming nVidia client take advantage of SLI to increase processing power? Or must SLI be disabled, like how CrossFire needs to be disabled for the ATI GPU2 client to be able to use both GPUs individually?


Rebalancing points to reflect scientific output would be the way to go. It would be good for users and Stanford.


Differences in points between hardware and WU projects can be viewed two ways:
1) Some are given a bonus for being more efficient or faster than the baseline system; OR
2) Some are penalised for not having the same hardware as the baseline system.
You can’t give a reward without others being penalised whichever method you use.
I’m sure people will be happy if the extra PPD outweighs what they lose from the CPU cores feeding the GPU, or the opportunity cost of spending that GPU money on a PS3 or on more/faster CPUs, etc.
However, comparing different hardware, work units and points over time involves too many moving targets. CPUs get better perf/$ over time. GPU perf/$ may increase at a different rate than CPUs'. Console performance doesn’t change frequently, but consoles get cheaper over time.


Why not calibrate the points system based on approximate GFLOPs rather than against a reference machine? That way we would have a direct linear measure of scientific contribution from different platforms, and we wouldn't end up with lopsided point awards in which SMP cores churning out an average of 1 GFLOP score 3000 PPD while GPU2 cores churning out an average of over 20 GFLOPs get only 1200 PPD.
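The imbalance this comment describes can be made concrete with a quick points-per-GFLOP calculation. This is just a sketch using the figures quoted in the comment (1 GFLOP / 3000 PPD for SMP, 20 GFLOPs / 1200 PPD for GPU2); these are a commenter's estimates, not official Folding@home numbers, and the function name is made up for illustration.

```python
def ppd_per_gflop(ppd, gflops):
    """Points per day earned per GFLOP of sustained throughput."""
    return ppd / gflops

# Figures as quoted in the comment above (commenter estimates, not official)
smp = ppd_per_gflop(3000, 1)    # SMP core: 3000 PPD at ~1 GFLOP
gpu2 = ppd_per_gflop(1200, 20)  # GPU2 core: 1200 PPD at ~20 GFLOPs

print(f"SMP:  {smp:.0f} PPD per GFLOP")
print(f"GPU2: {gpu2:.0f} PPD per GFLOP")
print(f"ratio: {smp / gpu2:.0f}x")
```

With these figures the SMP client earns 50 times more points per GFLOP than GPU2. A GFLOP-calibrated system (points proportional to GFLOPs delivered) would make that ratio 1 by construction, which is exactly what the comment is proposing.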


Because GFLOPs do not necessarily reflect scientific value. For example, while CPUs may not have the GFLOP power that a GPU has, they calculate different types of work units.

From the Extremetech interview with Dr. Pande...

"Dr. Pande: The CPU clients do a somewhat different calculation ("explicit solvent") vs. the GPU/PS3 (which do "implicit solvent"). All of this depends on how we deal with water. Do we deal with water as individual molecules ("explicit") or as a mathematical continuum ("implicit")? Both have various pros and cons. Implicit solvent maps better to the PS3 & GPUs (at least with today's hardware)."

By the way, a CPU running SMP and getting 3000 PPD is running much faster than 1 GFLOP. My overclocked Q6600 gets about 3000 PPD and is running a little over 9 GFLOPs according to the Linux Folding@home client.



You discuss the quality of GPU1 and not the improved capability of GPU2. Also, "By the way, a CPU running SMP, and getting 3000 PPD is running much faster than 1 GFLOP. My OCd Q6600 gets about 3000 PPD, and is running a little over 9 GFLOPs" only goes to prove the imbalance: 2.5x the points for less than half the FLOPS!!


Again, they are running different types of calculations. The CPU is using explicit solvent, which is more complex, so the science is more useful on the CPU running SMP. We will see if that holds true after the points get adjusted; the sheer power of a high-end GPU2 may outweigh the CPU's explicit calculations.


Will the Nvidia and ATI GPUs be running the same projects, or would it make sense to separate them (some proteins might run faster on Nvidia hardware, some on ATI hardware)? If the projects are separate, then you could have TWO reference machines. 8)
