
May 27, 2008

Comments

Stephen Dewey

Agreed. The point system should be designed to encourage people to make contributions that are scientifically useful, at least indirectly if not directly.

HP_Raider

I am not a GPU cruncher, and I am interested in advancing research. If you want to take less of a PR hit, then slowly (or quickly) give fewer and fewer points for GPU1 clients until it just doesn't matter. Just a thought - it might be the best of all worlds! Complainers never win and winners never complain! :)

DocJonz

Anyone who uses the higher-performance beta clients knows that they will be superseded at some point - I think this is just a sign of (rapid) progress on the GPU front. I'm sure that most people won't want to crunch WUs just for crunching's sake - but I guess that people who Folded on older hardware (I used to Fold on an X1950XTX) may be a bit lost at not being able to contribute anymore ... sounds like a great excuse for an upgrade ;-)

kwyjibo

What was wrong with DirectX in the wild? Were cards returning different results given the same work unit?

penguinusaf

How about letting those who have x1900 cards run GPU2 so GPU1 can be shut off? Aren't they similar?

Stephen Dewey

I'd agree that part of the problem is probably that certain hardware could run GPU1 but can't run GPU2. If you avoided the hardware obsolescence issue, there might be fewer complaints. That might not be scientifically/computationally feasible, however.

S

This reminds me of what Thomas Edison said: Genius is 1% inspiration and 99% perspiration.

aki

Are there clear scientific criteria for validating the folding results? I am afraid that so many results from the other clients will end up being judged unreliable. You have probably already established methods for validating folding results; could you point me to a reference? Any links would be fine.

b

I can answer penguinusaf's question ("How about letting those who have x1900 cards run GPU2 so GPU1 can be shut off? Aren't they similar?"):

ATI provides a software interface to the 2xxx and 3xxx series called "CAL", which is what GPU2 uses. CAL doesn't support the x1900 cards. The x1900 cards instead used the DirectX interface for GPU1, and that does not work reliably.
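As a rough illustration only (this is not actual Folding@home client code, and the names are made up), the core-selection logic described above boils down to something like this:

    // Hypothetical sketch of the decision described above, NOT real FAH code:
    // HD 2xxx/3xxx (R6xx-class) parts expose CAL and can run GPU2, while
    // x1900-class (R5xx) parts only have the DirectX path that GPU1 used.
    #include <iostream>
    #include <string>

    enum class AtiFamily { R5xx, R6xx, Other };  // R5xx = x1900 era, R6xx = HD 2xxx/3xxx

    std::string chooseCore(AtiFamily family) {
        switch (family) {
            case AtiFamily::R6xx: return "GPU2 core (CAL)";      // CAL available
            case AtiFamily::R5xx: return "GPU1 core (DirectX)";  // no CAL on these cards
            default:              return "CPU client";           // no supported GPU path
        }
    }

    int main() {
        std::cout << chooseCore(AtiFamily::R6xx) << "\n";  // e.g. an HD 3850
        std::cout << chooseCore(AtiFamily::R5xx) << "\n";  // e.g. an X1900
    }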

Sneakers55

I think F@H learned with GPU1 that they needed a GPU2. The whole concept of getting GPGPU code to run through DirectX (where nearly anything could come along and blow away the DX context) seemed like pushing it all along.
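To make that concrete, here is a hedged sketch of a plain Direct3D 9 device-lost check (not GPU1's actual code): any fullscreen game, screensaver, or user switch can put the device into a lost state, and with it goes any GPU-resident work.

    // Sketch only: how a D3D9-based GPGPU client would notice a lost device.
    // d3dDevice and presentParams are assumed to be set up elsewhere.
    #include <d3d9.h>

    bool deviceStillUsable(IDirect3DDevice9* d3dDevice,
                           D3DPRESENT_PARAMETERS* presentParams) {
        HRESULT hr = d3dDevice->TestCooperativeLevel();
        if (hr == D3D_OK)
            return true;                        // device fine, keep computing
        if (hr == D3DERR_DEVICENOTRESET)
            d3dDevice->Reset(presentParams);    // try to reclaim the device...
        // ...but even if Reset() succeeds, the textures holding intermediate
        // results were released, so the work has to restart from a checkpoint.
        return false;
    }

That fragility is presumably part of why moving away from DirectX for GPU2 was attractive.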

WJS

So, when is the nVidia client coming out, and what sort of performance can we expect? I got an overclocked Radeon 3850 primarily for F@H, but so far it hasn't really satisfied me for games... it would be great if nVidia's performance is at least as good as ATI's, and hopefully better. Then it would make sense to switch to an nVidia GPU that's great for both folding and games.

DocJonz

Any sign of a GPU client for Linux-64 ???

smASHer88

Good on those who folded with GPU1 - it appears you've significantly helped Stanford advance their research so that GPU2 could be possible. Without you guys GPU2 wouldn't be where it is, and I think that's a great contribution to the project.

Thank you

Neil Rieck

You made the correct decision. Encouraging people to waste electricity for bad science is illogical. On the flip side, maybe some of the x1000 series people would consider purchasing newer graphics cards in order to help all of humanity.

Neil Rieck

"DirectX (DX -- what GPU1 is based on) works much better in the lab than in the wild"

I'm curious about this comment. Could this be due to "overclocking" or "flaky memory combined with lack of parity checking"?
