Posted at 09:10 AM | Permalink
A key FAH server is down right now and stats updates have been suspended until it is back up. As always, stats are kept on the Work Servers (WSs), so even if an update hasn't been run, points are still being accumulated as WUs come in; it's only a matter of updating the database so that donors can see them.
We don't have an ETA on this right now, but our team is working on it.
Posted at 04:19 PM | Permalink
We’ve updated Core 17 with OpenMM 5.1, so check out the release video for more info:
A live Q&A is available on reddit.
Some of the key highlights are:
- Up to 120,000 PPD on GTX Titan, and 110,000 PPD on HD 7970
- Support for more diverse simulations
- Linux support on NVIDIA cards and 64-bit OSes
- FAHBench updated to use the latest OpenMM and display version information
Full Transcript of the Talk:
Hi, I’m Yutong, and I’m a GPU core developer here at Folding@home. Today I want to give you guys an update on what we’ve been working on over the past few months. Let’s take a look at the three major components of GPU core development. First off, we have OpenMM, our open source library for MD simulations. It’s used by both FAHBench and Core17. FAHBench is our official benchmarking tool for GPUs, and it supports all OpenCL compatible devices. We’re very happy to tell you guys that it’s been recently added to Anandtech’s GPU test suite. And Core17 is what your Folding@home clients use to do science. By the way, all those arrows just mean that the entire development process is interconnected.
So let’s take a step back in time.
Last year in October, we conceived Core 17, and we had three major goals in mind. We wanted a core that was going to be faster, more stable, and able to support more types of simulations than just implicit solvent. But because of how our old cores 15 and 16 were written, it was in fact easier for us to write the new core from scratch.
So in November, we started rewriting some of the key parts to replace pre-existing functionality. Two months later, in January, things started to come together. Our work server, assignment server, and client were modified to support Core 17. We also started an internal test team, for the first time ever, using an IRC channel on freenode to provide real-time testing feedback.
In February, Core 17 had a public beta with over 1,000 GPUs, and we learned a lot of valuable things. One of them was that the core didn’t seem all that much faster on NVIDIA, though on AMD things certainly looked brighter. Things still crashed occasionally, and bugs were certainly still present. So we went back to the drawing board to improve the core.
In April, we added a lot of new optimizations and bug fixes to OpenMM. We tested a Linux core on GPUs for the first time ever. And our internal testing team had grown to over 30 people. And that brings us to today.
We now support many more types of simulations, ranging from explicit solvent to large systems of up to 100,000 atoms. We improved the stability of our cores. We now have a sustainable code base. We added support for Linux for the first time. It’s also really fast, so I’m sure the burning question on your mind is: just how fast is it? Well, let’s take a look. On the GTX Titan, we saw it go from 50,000 points per day to over 120,000 points per day. On the GTX 680, we saw it go from 30,000 points per day to over 80,000 points per day. On the AMD HD 7970, we saw it go from 10,000 points per day to over 110,000 points per day. On the AMD HD 7870, we saw it jump from 5,000 points per day to over 50,000 points per day.
We never want to rest on our laurels for too long. We are already planning support for more Intel devices in the future, such as the i7s, integrated graphics cards, and Xeon Phis. We plan to add more projects to Folding@home as time goes on, so researchers within our group can investigate more systems of interest. And as always, we want things to be faster.
Now let’s go back to the beginning again, and here’s how you guys can help us. If you’re a programmer, we invite you to contribute to the open source OpenMM project (available at the end of the month on github.com/simtk/openmm). If you’re an enthusiast and like to build state-of-the-art computers, we encourage you to run FAHBench and join our internal testing team on freenode. If you’re a donor, we’d like you to help us spread the word about Folding@home and bring in more people, and their machines of course. Now before I wrap things up, there are some people I’d like to thank. Our internal testers are on the right-hand side, and they’ve been instrumental in providing me with real-time feedback regarding our tests. We couldn’t have done it this fast without them. On the left-hand side are people within the Pande Group: Joseph and Peter are also programmers like me, Diwakar and TJ helped set up many of our projects, and Christian and Robert have always been there for support and feedback.
But wait, one last thing. This week, I’ll be doing a question-and-answer session on reddit at reddit.com/r/folding. So if you’ve got questions, come drop by and hang out with us. Thanks, and bye-bye.
Posted at 05:27 PM | Permalink
Here's a guest post from Vickie Curtis, a research student at the UK's Centre for Research in Education and Educational Technology.
I am a doctoral student at the Institute for Educational Technology at the Open University in the UK. I am looking at how digital technologies are changing the way scientists interact with members of the wider public, and I am particularly interested in online 'citizen science' projects such as Folding@home.
A few weeks ago we launched an online survey to learn a little more about why people contribute to the Folding@home community, their views about the project, and about ‘citizen science’ projects in general. We’ve had a great response so far, but would like to keep the survey open for a couple more weeks so that we can capture the views of participants who haven’t yet had a chance to take part (we would love to hear from more women who contribute to Folding@home).
The survey should take about 10 minutes, and the feedback will eventually be shared with you via the website and blog. All the information you supply will be kept on a secure server and not passed to any third parties. If you would like to take part, please follow the link below.
Many thanks to those who have already contributed!
Posted at 10:22 AM | Permalink
Today, we have a guest blog post by Vickie Curtis, a research student in the UK's Centre for Research in Education and Educational Technology. She's working with the Folding@home team to glean more feedback from donors.
Would you like to learn more about the Folding@home community and your contribution to it? I am a doctoral student at the Institute for Educational Technology at the Open University in the UK. I am looking at how digital technologies are changing the way scientists interact with members of the wider public, and I am particularly interested in online 'citizen science' projects such as Folding@home.
Folding@home is one of the longest-running and most successful online citizen science projects, and it would be great to know a little more about why people contribute to the Folding@home community, their views about the project, and about these types of project in general. I have prepared an online survey for participants, which should take about 10-15 minutes to complete. The feedback will be shared with the Folding@home team and may help them to make improvements to the project. I will also share the findings with you via the website and blog.
All the information you supply will be kept on a secure server and not passed to any third parties. If you would like to take part, please follow the link below.
Posted at 08:00 AM | Permalink
We have been aggressively working on OpenMM (the key code used in the FAH GPU cores), creating new algorithms to increase performance on NVIDIA and AMD GPUs. The results have been pretty exciting. With OpenMM 5.1 (vs OpenMM 5.0, used in the current core 17 release), we are getting about a 2x speed up on typical FAH WU calculations, which will lead to an automatic 2x increase in PPD once this software is out of beta testing and integrated into core 17.
There's a lot of testing to do and it's very possible that these numbers will change, but the results were so exciting that I wanted to give donors a heads-up. Here are some of the numbers that we're seeing:
OpenCL running on the GTX 680: the first two columns are nanoseconds per day (i.e. how much science gets done in a GPU-day) and the third column is the speedup of 5.1 over 5.0.

Type of Calculation | OpenMM 5.0 (ns/day) | OpenMM 5.1 (ns/day) | Speedup
Implicit hbonds | 92 | 134 | 1.46
Implicit hangles | 153 | 209 | 1.36
RF hbonds | 31.4 | 78.1 | 2.49
RF hangles | 58 | 113.0 | 1.95
PME hbonds | 19.6 | 41.5 | 2.12
PME hangles | 37.3 | 66.9 | 1.79
OpenCL running on a Radeon HD 7970: the first two columns are nanoseconds per day (i.e. how much science gets done in a GPU-day) and the third column is the speedup of 5.1 over 5.0.

Type of Calculation | OpenMM 5.0 (ns/day) | OpenMM 5.1 (ns/day) | Speedup
Implicit hbonds | 87 | 120 | 1.38
Implicit hangles | 96 | 104 | 1.09
RF hbonds | 33.5 | 83.5 | 2.49
RF hangles | 51.8 | 90.2 | 1.74
PME hbonds | 21.8 | 49.3 | 2.26
PME hangles | 34.6 | 63.0 | 1.82
Note that "PME hbonds" is likely the most common calculation that we plan to run in the near term with core 17. We're very excited about the way this is shaping up and think that donors would be curious to know where this is going.
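For context, "PME hbonds" describes an explicit-solvent calculation that uses Particle Mesh Ewald electrostatics with bonds to hydrogen constrained, which is what allows the usual 2 fs timestep. Below is a minimal, hypothetical sketch of setting up that kind of system with OpenMM's Python layer on the OpenCL platform; the input file, force field choices, cutoff, and run length are placeholders for illustration, not an actual FAH project:

```python
# Minimal sketch of a "PME hbonds"-style setup with OpenMM's Python layer
# (simtk.openmm namespace, as used in the OpenMM 5.x era). The PDB file,
# force field, and step count are placeholders, not a real FAH work unit.
from simtk.openmm import app
from simtk import openmm, unit

pdb = app.PDBFile('solvated_protein.pdb')                 # hypothetical input
forcefield = app.ForceField('amber99sbildn.xml', 'tip3p.xml')

# Explicit solvent, PME electrostatics, bonds to hydrogen constrained ("hbonds"),
# which permits the 2 fs timestep used below.
system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=0.9*unit.nanometer,
                                 constraints=app.HBonds)

integrator = openmm.LangevinIntegrator(300*unit.kelvin,
                                       1.0/unit.picosecond,
                                       2.0*unit.femtoseconds)

platform = openmm.Platform.getPlatformByName('OpenCL')    # the platform FAHBench exercises
simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()
simulation.step(5000)                                     # a short illustrative run
```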
Posted at 10:00 AM | Permalink
Guest post from Dr. Greg Bowman, UC Berkeley
Prof. Vince Voelz’s lab has published an exciting paper on their recent successes with predicting the structures of protein-like molecules called peptoids (here). Peptoids are similar to proteins but with a rearrangement in their chemistry (see example below). Their similarity to proteins allows peptoids to function like proteins. However, the alteration in peptoid chemistry relative to proteins effectively makes them invisible to the parts of the immune system designed to recognize foreign proteins. Therefore, peptoids are an attractive option for drug design. To fully realize this potential, we need to be able to predict the structures of peptoids and design them to perform specific functions. The Voelz lab’s work demonstrates that computer simulations can provide this sort of information by presenting predicted structures of a number of peptoids along with experimental structures confirming the accuracy of their predictions (see example below).
Peptide vs. peptoid chemistry. In peptoids, a group of atoms (called R) is moved from a carbon to an adjacent nitrogen (N).
An example of one of the Voelz lab's predicted structures (in green) overlaid with the experimental structure (in white).
Posted at 06:00 AM | Permalink
We’ve released FAHBench 1.0, with a slick new GUI that should make it much more accessible to newcomers. Click on the FAHBench link above or the image below to try it out! Don’t worry, it maintains backwards compatibility with the old command line interface.
More info at http://fahbench.com/
Posted at 12:37 PM | Permalink
The Quick Return Bonus (QRB) gives more points when WUs are completed quickly, which helps keep the points in line with the science. Now that the GPU core is maturing, our plan is to treat all WUs identically, i.e. benchmark them on a single benchmark machine (SMP) and use those points. Now that we can do just about any calculation on any piece of hardware, it's strange to benchmark GPU and SMP WUs separately. That wasn't the case before, when the capabilities of the GPU and SMP cores were very different.
With the new GPU core (17), we'll have that matching capability. Our plan is to introduce QRB to GPUs with the rollout of production core 17 WUs.
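To make the mechanism concrete, here is a small sketch of how a quick-return multiplier of this general shape works: the faster a WU comes back relative to its deadline, the larger the bonus. The formula mirrors the commonly published FAH bonus scheme, but the base points, k constant, and deadline below are illustrative assumptions, not the official benchmarking code.

```python
import math

def qrb_points(base_points, k, deadline_days, elapsed_days):
    """Quick Return Bonus sketch: faster completion earns a larger multiplier.

    Follows the general shape of the published FAH bonus formula
    (points = base * sqrt(k * deadline / elapsed), never less than base),
    with made-up numbers used below for illustration only.
    """
    multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base_points * multiplier

base = 500.0      # hypothetical benchmark value for a WU
k = 0.75          # hypothetical per-project bonus constant
deadline = 6.0    # days allowed before the WU expires

# The same WU returned at different speeds: quicker returns score more points.
for elapsed in (0.25, 0.5, 1.0, 3.0, 6.0):
    print(f"returned in {elapsed:>4} days -> {qrb_points(base, k, deadline, elapsed):7.1f} points")
```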
Posted at 02:10 PM | Permalink
We often have to make difficult decisions on what hardware to support in the future, including adding new platforms or removing existing ones. Removing existing platforms always leads to a lot of disruptive change for donors, so we try to do this as rarely as we can. In particular, in the GPU1 to GPU2 transition, there was a big change done quickly, which was extremely hard on donors.
For GPUs in particular, the central issue is that GPU technology keeps progressing, and GPU manufacturers come up with new ways to do things that make the old ones obsolete. So it's probably safe to say that, barring a change in how hardware design evolves, older GPUs will *eventually* become obsolete for FAH. We try to keep as many GPUs working for as long as possible, but eventually it becomes a losing battle: we have only a fixed number of programmers, and more GPU types (even from a given vendor, say different CUDA capability levels) require more programmers to keep up, and eventually we run out of resources.
Right now, the dividing line falls at the Fermi cards: Fermi and later cards have powerful new capabilities that the older cards do not have. So I can imagine that eventually we'll run out of tricks to support the older cards. I can't predict when that will be, as it depends on lots of things, but I can say we're trying hard to support everything for as long as we can.
One of the biggest issues is that scientific needs can change based on where the science takes us, and that's particularly hard for us to predict. We'll try to let donors know as soon as we can if there will be any changes, but some donors' questions prompted this blog post, so that at least donors have some sense of how our decisions are made internally.
Posted at 02:09 PM | Permalink