
Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 7:57 am
by Simplex0
In a post on the News page on July 8 here https://foldingathome.org/2020/07/08/ci ... -covid-19/ it says.....

"An unprecedented 0.1 seconds of simulation of the viral proteome reveal how the spike complex uses conformational masking to evade an immune response"

Does that mean that a COVID-19 work unit that commonly takes several hours to process only covers 0.1 seconds in real life?

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 9:00 am
by Hopfgeist
It is a lot worse than that. A single work unit is more on the order of a few nanoseconds. They are talking about millions of work units, in combination simulating a total of 100 milliseconds of real time.

Which is what makes this result unprecedented. Never before have there been atomic-scale protein simulations for such a long timespan. Typically they only simulate microseconds up to a few milliseconds.

Chemical reactions are unbelievably fast, because the constituents involved are unbelievably small.

For reference: measuring the timing of chemical reactions in the real world takes femtosecond resolution. Take a look at this presentation on how awesome femtosecond x-ray lasers are.

There are 100 trillion femtoseconds in 0.1 seconds. I think F@H may use a slightly coarser timescale, but probably not by much, and it takes an extraordinarily large number of steps to simulate 0.1 seconds.
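
As a quick sanity check in Python (nothing F@H-specific, just the unit conversion):

    fs_per_s = 10**15          # femtoseconds in one second
    total_fs = fs_per_s // 10  # 0.1 s expressed in femtoseconds
    print(f"{total_fs:,}")     # 100,000,000,000,000 -> 100 trillion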

Bernd

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 9:05 am
by uyaem
Simplex0 wrote:Does that mean that a COVID-19 work unit that commonly takes several hours to process only covers 0.1 seconds in real life?
I think it's actually much "worse" than that: it's the combination of several thousand hours of computing that resulted in those 0.1 seconds.

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 11:02 am
by Neil-B
If I understand it correctly, "several" might be quite a large number tbh? ... either that, or thousands might be millions!! ... or I might have misunderstood

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 1:10 pm
by Joe_H
If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps, that covered a length of 1 nanosecond.
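
A minimal sketch of that arithmetic (the 2 fs step size and the 500,000-step WU are the figures recalled above, not official numbers):

    step_fs = 2                                # femtoseconds per step (some GPU tests use 4 fs)
    steps_per_wu = 500_000                     # steps in one work unit
    ns_per_wu = step_fs * steps_per_wu / 1e6   # femtoseconds -> nanoseconds
    print(ns_per_wu)                           # 1.0 ns of simulated time per WU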

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 9:25 pm
by Simplex0
Thank you all for helping me understand this.

Re: Question regarding the News post on July 8

Posted: Fri Jul 10, 2020 9:50 pm
by Neil-B
Joe_H wrote:If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps, that covered a length of 1 nanosecond.
So, scratches head to get grey matter working ... 0.1 secs would be about 100 million WUs (or 50 million for the twice-as-long steps) based on what you recall ... grief!!
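
Spelled out as a sketch (using the 1 ns/WU figure recalled above):

    target_s = 0.1                       # the 0.1 s from the news post
    ns_per_wu = 1                        # from the 2 fs x 500,000-step figure above
    wus = target_s / (ns_per_wu * 1e-9)  # seconds / (seconds per WU)
    print(f"{wus:,.0f}")                 # 100,000,000 WUs; roughly half with 4 fs steps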

Re: Question regarding the News post on July 8

Posted: Sat Jul 11, 2020 7:49 am
by Hopfgeist
Neil-B wrote:
Joe_H wrote:If I remember the timescale correctly, for most projects each "step" listed in the log for a WU is 2 femtoseconds. In some of the GPU projects they were testing the use of steps that were twice as long. But for the 2 femtosecond time step, if the WU ran for a total of 500,000 steps, that covered a length of 1 nanosecond.
So, scratches head to get grey matter working ... 0.1 secs would be about 100 million WUs (or 50 million for the twice-as-long steps) based on what you recall ... grief!!
Yes, I did the math, too. But I didn't post it because I'm still a bit sceptical. According to the stats on extremeoverclocking.org there have only been 915 million WUs total, so this project alone would have used between 5 and 10% of the whole effort. Given the total number of projects, I'm not sure that is right, but I cannot find statistics on how many work units each project has finished.

And that also assumes roughly equally-sized work units, which isn't the case, but still. Mind blown.

Cheers,
HG.

Re: Question regarding the News post on July 8

Posted: Sat Jul 11, 2020 2:07 pm
by JimF
Hopfgeist wrote:According to the stats on extremeoverclocking.org there have only been 915 million WUs total, so this project alone would have used between 5 and 10% of the whole effort. Given the total number of projects, I'm not sure that is right, but I cannot find statistics on how many work units each project has finished.
That seems reasonable enough to me. Folding is one of the oldest distributed computing projects around, and they have done a LOT of stuff, long before COVID came around.
They are also one of the few that do GPU work, which attracts a lot of crunchers. And the work supply has been steady; they really don't run out. So it all adds up.

Re: Question regarding the News post on July 8

Posted: Sun Jul 12, 2020 8:59 am
by Simplex0
After reading parts of the full text here https://www.biorxiv.org/content/10.1101 ... 430v1.full
it says.....

"simulating every protein that is relevant to SARS-CoV-2 for biologically relevant timescales would require compute resources on an unprecedented scale."

But I have not found any information on exactly how many proteins "every protein that is relevant to SARS-CoV-2" actually amounts to. Does anyone have any information on this?

They also say that.....

"Using this resource, we constructed quantitative maps of the structural ensembles of over two dozen proteins and complexes that pertain to SARS-CoV-2."

Should I take this to mean that Folding@home has so far covered a little more than two dozen proteins out of all the proteins that need to be covered?

Re: Question regarding the News post on July 8

Posted: Mon Jul 13, 2020 5:43 pm
by uyaem
Okay, some clarification on the matter... I remembered this being answered indirectly on Discord a while ago, and I finally worked the search function correctly:
SlinkyDolphinClock wrote:@Grayfox @Uyaem nanoseconds may not seem like a lot, but atoms move around really quickly and the client computes/updates the forces/positions between atoms (i.e. a new "snapshot") either every 2 or 4 femtoseconds. Those snapshots are usually saved every 1-100 picoseconds, and all those frames/snapshots constitute the trajectory that is sent back, so a lot can actually happen in a couple nanoseconds of simulation
Link to Discord screenshot here: https://ibb.co/FxRyLmk.

So every snapshot is already 1+ picoseconds.
With a GPU project normally consisting of 100 snapshots (assuming a snapshot here is the same as a viewer snapshot), we have at least 100 ps/WU.

So that's 0.1 s = 100 ms = 100,000 µs = 100,000,000 ns = 100,000,000,000 ps, so 1 bn WUs.
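
The same estimate as a sketch (assuming 100 ps of simulated time per WU, as above):

    total_ps = 10**11                    # 0.1 s expressed in picoseconds
    ps_per_wu = 100                      # ~100 snapshots at ~1 ps each
    print(f"{total_ps // ps_per_wu:,}")  # 1,000,000,000 WUs (1 bn)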

Based on a comment from PantherX on Discord the day before
PantherX wrote:[...]and I think it is few nanoseconds which allows the researchers to have a good "feel" for the project.
I would guess those 0.1s are the sum of trajectories and not a single one.

Re: Question regarding the News post on July 8

Posted: Mon Jul 13, 2020 5:56 pm
by Brad_C
By comparison, here's an article about using a supercomputer for 100 days in 2010 to set a new record simulating a protein for one millisecond.
https://www.nature.com/news/2010/101014 ... ews.2010.5

Re: Question regarding the News post on July 8

Posted: Tue Jul 14, 2020 6:21 am
by Hopfgeist
uyaem wrote:Okay, some clarification on the matter... I remembered this being answered indirectly on Discord a while ago, and I finally worked the search function correctly:
[...]
So every snapshot is already 1+ picoseconds.
With a GPU project normally consisting of 100 snapshots (assuming a snapshot here is the same as a viewer snapshot), we have at least 100 ps/WU.

So that's 0.1 s = 100 ms = 100,000 µs = 100,000,000 ns = 100,000,000,000 ps, so 1 bn WUs.
That doesn't work out, since the (preprint) paper says explicitly that the step size is 4 femtoseconds. So 100 ms would require 25,000,000,000,000 steps, and with a typical work unit consisting of 250,000 steps that would be "only" 100,000,000 work units, or 50 million WUs of 500,000 steps.
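
Spelled out as a sketch (the 4 fs step size is from the preprint; the WU sizes are the assumed figures from this thread):

    step_fs = 4                      # femtoseconds per step, per the preprint
    total_steps = 10**14 // step_fs  # 0.1 s = 1e14 fs -> 25,000,000,000,000 steps
    print(f"{total_steps:,}")
    for steps_per_wu in (250_000, 500_000):
        print(f"{total_steps // steps_per_wu:,} WUs at {steps_per_wu:,} steps/WU")
    # -> 100,000,000 WUs at 250,000 steps/WU, or 50,000,000 WUs at 500,000 steps/WU
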
I would guess those 0.1s are the sum of trajectories and not a single one.
Yes, quite clearly, when one actually reads the paper.

Cheers,
HG

Re: Question regarding the News post on July 8

Posted: Tue Jul 14, 2020 1:57 pm
by Joe_H
... and with a typical work unit consisting of 250,000 steps...
250,000 steps is a typical size for a CPU project WU; GPU projects typically run 1,000,000 or more steps in each WU.

Re: Question regarding the News post on July 8

Posted: Fri Jul 17, 2020 2:50 pm
by uyaem
Using the combined info from the postings of Hopfgeist and Joe_H, let's take an average of 500k steps per WU (across CPU and GPU WUs combined):
That would mean 50 million WUs to get 100 ms.
With an assignment rate of above 100k WUs/h at the time (as seen on https://apps.foldingathome.org/serverstats), that would mean roughly 2.5 million WUs per day, so about 20 days - I think that could check out given the overall time frame :) (while still keeping in mind that some WUs were dumped, faulty, assigned to other projects, or expired, plus server downtimes, ...)
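
As a rough sketch of that estimate (the WU count, average step count, and assignment rate are the assumptions discussed above, not official figures):

    total_wus = 50_000_000       # 100 ms at ~500k steps per WU on average (4 fs steps)
    wus_per_hour = 100_000       # approximate assignment rate at the time
    wus_per_day = wus_per_hour * 24
    print(f"{wus_per_day:,}")                  # 2,400,000 -> "roughly 2.5 million" per day
    print(round(total_wus / wus_per_day, 1))   # ~20.8 days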