Bigadv points change

Moderators: Site Moderators, FAHC Science Team

orion
Posts: 135
Joined: Sun Dec 02, 2007 12:45 pm
Hardware configuration: 4p/4 MC ES @ 3.0GHz/32GB
4p/4x6128 @ 2.47GHz/32GB
2p/2 IL ES @ 2.7GHz/16GB
1p/8150/8GB
1p/1090T/4GB
Location: neither here nor there

Re: point system is getting ridiculous...

Post by orion »

Nothing to see here...move along...move along
Last edited by orion on Thu Jun 09, 2011 10:45 pm, edited 1 time in total.
iustus quia...
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: point system is getting ridiculous...

Post by ChasR »

These were my thoughts when the QRB was first proposed: a much flatter curve, using higher base values and lower values for K.
ChasR wrote:
I'd rather see a formula that uses the preferred deadline in the calculation, something like this (Excel format):

multiplier = 1+IF(TO<(PDL-1),(PDL-1-TO)*K,0)

TO = actual time to produce the WU, from start of download to finish of upload
PDL = preferred deadline (in this formula you have to beat the preferred deadline by one day to earn any bonus. Personally I think you should have to beat it by more than that to earn a bonus. For uniprocessor WUs, you would subtract far more than 1.)
The K value determines the percentage of bonus. Assuming a 3-day preferred deadline and K set to .25, a WU completed in .25 days would earn a 43.75% bonus. A WU completed in .5 days would earn a 37.5% bonus; 1 day, 25%.
This was written assuming base values of A3 WUs would be set to match A2 production, at the time 4300 PPD on a Q6600 @ 3.0 GHz. In the above scenario the maximum multiplier would be < 1.5.
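
A minimal sketch of that multiplier in Python (the function and variable names are mine, purely for illustration; times are in days):

def chasr_multiplier(to_days, pdl_days, k=0.25, margin=1.0):
    # Linear bonus on the time saved relative to (deadline - margin);
    # no bonus at all once you miss that cutoff.
    saved = (pdl_days - margin) - to_days
    return 1.0 + max(saved, 0.0) * k

# Reproduces the worked examples above: 3-day deadline, K = 0.25
for to in (0.25, 0.5, 1.0):
    print(to, chasr_multiplier(to, 3.0))  # 1.4375, 1.375, 1.25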
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: point system is getting ridiculous...

Post by VijayPande »

7im wrote:I thought I had been specific, and used a pretty good visual aid too.

Either lower the slope of the QRB bonus curve, or cap the upper limit in some way if the function doesn't change slope well (fixing the upper end blows out the lower end). I don't think a computer that is 10x faster should be getting 1000x the points. And 6 months from now, the 20x computer gets 20,000x the points? Again, as the curve moves to the right, it approaches infinity. That points model is not sustainable. Moore's Law moves right along that graph too quickly! You already have 1-million-point work units, and they'll be 10 million by year's end.

QRB is a must for ALL client types, or the GPU client will be dead within the year. The top teams already recommend against running SMP and GPU at the same time, and (almost) all new computers are SMP ready.

Same goes for the CPU client. If all of the CPU work units are not on the QRB system soon (a4 core), you might as well issue the EOL warning now, and stop making CPU work units by 2012.

You need to redefine the small, normal, large settings in the clients. NONE of your researchers makes small work units any more (well, GPU WUs are small, but they are not designated that way). It should change to Normal, Large, and XLarge, with some bumps in the sizes: less than 10 MB, 10-25 MB, over 25 MB (and even this might be too small; 10-50 and 50+ might work better).

The points gap between SMP and -bigadv needs to be reduced. You have people almost burning up X6 systems chasing the -bigadv carrot. Not good for the X6, even worse for their power bill to overclock so much (diminishing returns, with exponential power increases), and they're slowing down the results. (NOT picking on AMD here, so don't even start; i7s are almost the same thing.)

And you need a new way to benchmark that does not depend on multiple benchmark computers at Stanford that continue to age too quickly, and that seemingly never match my configuration any more. You should be able to benchmark my computer, understand how much science it can process in a given amount of time, and reward my computer accordingly. My least favorite thing to say, but other DC projects do it, so should FAH.

If I need to be more specific than that, just ask. ;)

I was thinking of something much more specific in terms of equations, factors, benchmark machine specs, benchmark machine PPD, etc. In other words, something that I could hand over to my team to try out immediately, rather than a good set of ideas without specific, detailed implementation details. Your posts make good points, but they don't go far enough in explaining how you would determine the points for a specific WU.

The other benefit of giving all of these details is that other donors can debate the details more directly, since in the end, much of this is all in the details. Also, I'd love to see how your formula would give points to existing project WUs. I think that would go a long way to giving people a sense of how the suggested points system would work.
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

For the record, I am hardware agnostic, like the forum. Personally, I have contempt for both Intel and AMD, for different reasons, but that has nothing to do with this topic.

As for specifics, I don't have time to get into those right now, but there are plenty of people here who enjoy that, so I will defer to them. I think one possible K formula change was suggested above, but we'd have to put some actual numbers in there to be sure.

When my 2 current projects mature this month, I'll come back to this if nobody else has.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
VijayPande
Pande Group Member
Posts: 2058
Joined: Fri Nov 30, 2007 6:25 am
Location: Stanford

Re: point system is getting ridiculous...

Post by VijayPande »

7im wrote: As for specifics, I don't have time to get into those right now, but there are plenty of people here who enjoy that, so I will defer to them. I think one possible K formula change was suggested above, but we'd have to put some actual numbers in there to be sure.

When my 2 current projects mature this month, I'll come back to this if nobody else has.
Thanks. I welcome specific suggestions here from all.

However, based on personal experience, this is where it gets very challenging to design a new points system. The flaws of a given system with well-defined details are easy enough to point out, but making specific plans for how to improve the system is not trivial. Also, with such details, we can retroactively calculate what the points would have been for older projects and see if people think that looks right.

So, thanks for the suggestions, but for those people making suggestions, please consider this level of detail the immediate goal: anything short of it isn't a real suggestion in my mind, just a good idea without any practical plan for implementation, whose downsides are hard to debate (and which may thus hide some unseen issues).
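
For reference, the current QRB multiplier (as published in the points FAQ) has this shape, written in the same plain-text style as the formulas above:

final_points = base_points * max(1, sqrt(k * deadline_length / elapsed_time))

With that form, a machine N times faster returns N times as many WUs, each worth roughly sqrt(N) more, so PPD grows like N^1.5 (ignoring base-point differences between projects).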
Prof. Vijay Pande, PhD
Departments of Chemistry, Structural Biology, and Computer Science
Chair, Biophysics
Director, Folding@home Distributed Computing Project
Stanford University
Punchy
Posts: 125
Joined: Fri Feb 19, 2010 1:49 am

Re: point system is getting ridiculous...

Post by Punchy »

ChasR wrote:These were my thoughts when the QRB was first proposed: a much flatter curve, using higher base values and lower values for K.

multiplier = 1+IF(TO<(PDL-1),(PDL-1-TO)*K,0)
Graphs, please - it's much easier for me to think about formulas with pictures :oops:
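
A quick matplotlib sketch that plots ChasR's proposed multiplier against completion time (K, PDL, and the plotting range are assumed from his example; purely illustrative):

import numpy as np
import matplotlib.pyplot as plt

K, PDL = 0.25, 3.0                   # ChasR's example values
to = np.linspace(0.05, PDL, 200)     # time to complete the WU, in days
mult = 1 + np.maximum((PDL - 1) - to, 0) * K

plt.plot(to, mult)
plt.xlabel("time to complete WU (days)")
plt.ylabel("points multiplier")
plt.title("ChasR's flat QRB: 3-day deadline, K = 0.25")
plt.show()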
Punchy
Posts: 125
Joined: Fri Feb 19, 2010 1:49 am

Re: point system is getting ridiculous...

Post by Punchy »

7im wrote:You should be able to benchmark my computer, understand how much science it can process in a given amount of time, and reward my computer accordingly. My least favorite thing to say, but other DC projects do it, so should FAH.
For the other DC projects that benchmark the donor's computer and reward accordingly, are the rewards linear with the benchmark or non-linear? I.e., if system A has a benchmark score of 1 and system B has a score of 2 (bigger being better), does system B get the same points for doing the same amount of computation as system A and just complete the work twice as fast, or get higher points and complete the work faster, or...?
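
For scale, here is a toy comparison of the two shapes, using the published sqrt-form QRB for the non-linear case (base points, k, deadline, and the reference time are illustrative values, not real project numbers):

import math

base, k, deadline = 100, 2.0, 3.0    # illustrative only
t_ref = 1.0                          # reference machine: 1 day per WU

for speedup in (1, 2, 10):
    elapsed = t_ref / speedup
    linear_ppd = base * speedup      # same points per WU, more WUs per day
    qrb_ppd = base * max(1, math.sqrt(k * deadline / elapsed)) * speedup
    print(f"{speedup:>2}x faster: linear {linear_ppd:>5.0f} PPD, QRB {qrb_ppd:>6.0f} PPD")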
patonb
Posts: 348
Joined: Thu Oct 23, 2008 2:42 am
Hardware configuration: WooHoo= SR-2 -- L5639 @ ?? -- Evga 560ti FPB -- 12Gig Corsair XMS3 -- Corsair 1050hx -- Blackhawk Ultra

Foldie = @3.2Ghz -- Noctua NH-U12 -- BFG GTX 260-216 -- 6Gig OCZ Gold -- x58a-ud3r -- 6Gig OCZ Gold -- hx520

Re: point system is getting ridiculous...

Post by patonb »

One thing about a linear system: getting a faster system doesn't come at a linear price... so having a curve is a good reward.

If you've ever done research, you know that cutting an experiment from 3 days to 2 is an immense improvement... 2 days to 1 is even better... Most of you are not really talking about % time, just straight time. Sure, knocking 5 hrs off 30 hrs doesn't seem like much, but it's a 17% improvement. If it requires an expenditure of $1500, why not give a nice reward?

We're currently in a bad spot for CPUs. We got lucky that the i7s were fantastic and fit a gray area between SMP and bigadv. But it seems now that people are peeved because their great system doesn't cut it on unreleased units, and the people who have the proper systems are finally not getting cheated by the gray-area i7/X6 folks.

I agree that the unicores are pretty much not worth the power, and the right side of the curve is unreal, but it's not as much of an issue. I only started 4 years ago, but I did run in the 130-points-per-week days.

Why not just translate the graph further to the right? Give a small QRB AND decrease GPU times, and move the unicores to the a4 core.
WooHoo = L5639 @ 3.3Ghz Evga SR-2 6x2gb Corsair XMS3 CM 212+ Corsair 1050hx Blackhawk Ultra EVGA 560ti

Foldie = i7 950@ 4.0Ghz x58a-ud3r 216-216 @ 850/2000 3x2gb OCZ Gold NH-u12 Heatsink Corsair hx520 Antec 900
soya_crack
Posts: 11
Joined: Tue Dec 15, 2009 4:13 pm

Re: point system is getting ridiculous...

Post by soya_crack »

I am happy to see that the discussion has developed this far and wasn't smashed to the ground.

@patonb
Concerning your first paragraph:
The reason there is normally no such reward is that you are leaving the sweet spot, and that's how it is basically everywhere in life. If you want the last ten percent of performance, you have to pay an exponentially higher price.
patonb
Posts: 348
Joined: Thu Oct 23, 2008 2:42 am
Hardware configuration: WooHoo= SR-2 -- L5639 @ ?? -- Evga 560ti FPB -- 12Gig Corsair XMS3 -- Corsair 1050hx -- Blackhawk Ultra

Foldie = @3.2Ghz -- Noctua NH-U12 -- BFG GTX 260-216 -- 6Gig OCZ Gold -- x58a-ud3r -- 6Gig OCZ Gold -- hx520

Re: point system is getting ridiculous...

Post by patonb »

I meant the killer HT quads and hex cores that didn't exist when the QRB was introduced here... not the PPD.
WooHoo = L5639 @ 3.3Ghz Evga SR-2 6x2gb Corsair XMS3 CM 212+ Corsair 1050hx Blackhawk Ultra EVGA 560ti

Foldie = i7 950@ 4.0Ghz x58a-ud3r 216-216 @ 850/2000 3x2gb OCZ Gold NH-u12 Heatsink Corsair hx520 Antec 900
PantherX
Site Moderator
Posts: 7020
Joined: Wed Dec 23, 2009 9:33 am
Hardware configuration: V7.6.21 -> Multi-purpose 24/7
Windows 10 64-bit
CPU:2/3/4/6 -> Intel i7-6700K
GPU:1 -> Nvidia GTX 1080 Ti
§
Retired:
2x Nvidia GTX 1070
Nvidia GTX 675M
Nvidia GTX 660 Ti
Nvidia GTX 650 SC
Nvidia GTX 260 896 MB SOC
Nvidia 9600GT 1 GB OC
Nvidia 9500M GS
Nvidia 8800GTS 320 MB

Intel Core i7-860
Intel Core i7-3840QM
Intel i3-3240
Intel Core 2 Duo E8200
Intel Core 2 Duo E6550
Intel Core 2 Duo T8300
Intel Pentium E5500
Intel Pentium E5400
Location: Land Of The Long White Cloud

Re: point system is getting ridiculous...

Post by PantherX »

A problem that I see is that while PG uses an 8-core (or recently 12-core) system for benchmarking, the top-end donors are using 48 cores (with overclocking?), so it greatly skews the result. I would strongly suggest that PG use two different benchmark machines, one for normal SMP and the other for bigadv.

Moreover, what happens to the PPD if bigadv was benchmarked on a 16- or 24-core machine? How much would the 48-core machine earn? Would that PPD be more acceptable? Would you have to use a new formula? The issue is that hardware vendors develop faster and better CPUs every year while the PG benchmark machine stays "old" (from a PC enthusiast's point of view), so if PG wants to continue the bigadv trial program, they might want to consider upgrading their hardware whenever significantly better hardware is available.

So the question is, what is the proposed benchmark system for the bigadv trial program?

BTW, regarding normal SMP, I haven't noticed any complaints about the PPD, so would it be fair to assume that the benchmark machine is sufficient, or does it, too, need to be upgraded?
ETA:
Now ↞ Very Soon ↔ Soon ↔ Soon-ish ↔ Not Soon ↠ End Of Time

Welcome To The F@H Support Forum Ӂ Troubleshooting Bad WUs Ӂ Troubleshooting Server Connectivity Issues
Jester
Posts: 102
Joined: Sun Mar 30, 2008 1:03 pm

Re: point system is getting ridiculous...

Post by Jester »

This has always been an issue, even before any bonus scheme for WU size (up/downloaded or needing more system memory) was introduced: an equitable benchmarking formula. As has been said in defence of the current benchmarking for years, "it's not perfect but generally does a good job." It's not so much the hardware of the benchmarking rig as the software under test. The benchmark rig assigns "X" points for a given WU; if the home system is identical you'll get the same points, and if the home system is much faster you'd expect many more points. That is usually true, but only if the software in question is able to use the extra speed of a faster machine. It's a little like graphics software, games for one, where a significant boost can be achieved by running 2 or more GPUs in SLI mode, but only if the software is able to use it. Lots of times in a new series of WUs there are one or two that always bring complaints about low PPD compared to others in the series, but I feel it's not so much that they are slow, more that they are "normal" and other WUs in the same series are better able to use the extra speed of the home machine and so are "faster".

It all comes down to an equitable points system where every Folder believes they are duly rewarded for their level of contribution, not only in up-front hardware costs but also in the ever-increasing cost of powering that hardware. I don't have the "magic solution". It's impractical to start running multiple benchmark machines to mirror what contributors run, as that is too diverse. So do we look at a "mean average" system, where for every series of WUs there's a base point figure, and bonus points are awarded against a rolling average of the WUs' return times? For instance: on average, 5x bonus; twice as fast as average, 10x bonus; twice as slow as average, 2.5x bonus; and so on. For a series of WUs such as bigadv, where the goal is fast return times, that would encourage and reward those with high-end hardware for being faster than average, and may serve to discourage those with "marginal" hardware that just finishes a WU inside the deadlines.

As I've said, I don't have a magic solution, and I don't think there is one that will keep all of the people happy all of the time, but discussions such as this may find a solution which is as equitable as possible, and nobody can really ask for more than that...
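
A minimal sketch of that rolling-average idea in Python (the class name, window size, and 5x-at-average scaling constant are my assumptions; the bonus is linear in the speed ratio, per the examples above):

from collections import deque

class RollingBonus:
    # Bonus scales with how fast a WU comes back relative to the rolling
    # average return time for the series: 5x at average, 10x at half the
    # average time, 2.5x at twice the average time.
    def __init__(self, window=100, at_average=5.0):
        self.times = deque(maxlen=window)
        self.at_average = at_average

    def score(self, base_points, return_time):
        avg = sum(self.times) / len(self.times) if self.times else return_time
        self.times.append(return_time)
        return base_points * self.at_average * (avg / return_time)

rb = RollingBonus()
for t in (2.0, 2.0, 1.0, 4.0):       # return times in days
    print(rb.score(100, t))          # 500, 500, 1000 (2x faster), ~208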
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengence (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

PantherX wrote:A problem that I see is that while PG uses an 8-core (or recently 12-core) system for benchmarking, the top-end donors are using 48 cores (with overclocking?), so it greatly skews the result. I would strongly suggest that PG use two different benchmark machines, one for normal SMP and the other for bigadv.

Moreover, what happens to the PPD if bigadv was benchmarked on a 16- or 24-core machine? How much would the 48-core machine earn? Would that PPD be more acceptable? Would you have to use a new formula? The issue is that hardware vendors develop faster and better CPUs every year while the PG benchmark machine stays "old" (from a PC enthusiast's point of view), so if PG wants to continue the bigadv trial program, they might want to consider upgrading their hardware whenever significantly better hardware is available.

So the question is, what is the proposed benchmark system for the bigadv trial program?

BTW, regarding normal SMP, I haven't noticed any complaints about the PPD, so would it be fair to assume that the benchmark machine is sufficient, or does it, too, need to be upgraded?

You're still thinking about the old way of benchmarking. Adding or changing more machines? That's a dead end.

Yes, for the time being, PG probably does need to benchmark some of those WUs on a 48-core system, just to be sure their points curve is well grounded. One, the curve is probably off (too steep), and two, PG needs to confirm how well SMP scales up to that many cores in terms of science/points production. Guessing at it, and throwing more machines and more points at it, is not a long-term solution.
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
Leonardo
Posts: 261
Joined: Tue Dec 04, 2007 5:09 am
Hardware configuration: GPU slots on home-built, purpose-built PCs.
Location: Eagle River, Alaska

Re: point system is getting ridiculous...

Post by Leonardo »

Dr. Pande, you asked for specific recommendations. Although I don't have hard numbers for you, I think I can be more specific than my last post in this thread.

Why not just dispense with the K factor altogether? Let the points scale according to the complexity of the work unit. For work unit series that provide the most valuable input to the research, just score those units accordingly. Sure, you will want to provide an incentive for completing the more advanced, complex units quickly: simply adhere to strict completion times on those units and award zero points or half value if they are completed late. Those with the higher-powered units (higher personal and monetary investment from the users) will still reap the rewards and will surpass those with a more casual approach. So is this reverting back to the original scoring method? Well, yes and no. Yes, in that it's simpler and resistant to inflation, relatively speaking. No, in that points scaling could be significantly higher for the more complex units. There will always be a great disparity in production between garden-variety machines and purpose-built, high-performance units, but at least we could eliminate (or come close to eliminating) the mad inflation.

These numbers are only to illustrate my concept:

Garden variety, simple unit: 100 acorns
Exotic variety units: 200 acorns
Perishable, prized units: 500 acorns if the aggressive deadline is met, 150 if late, 0 if very late (there's an incentive also to not waste prized units on slower machines)
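
In code terms, that tier scheme might look like the following sketch (Python; the tier names and acorn values are from the list above, while the one-day "very late" cutoff is my assumption, purely for illustration):

def tiered_points(unit_class, days_late):
    # Fixed value per unit class, no K factor; prized units are docked
    # for lateness to keep the incentive for fast returns.
    if unit_class == "garden":
        return 100
    if unit_class == "exotic":
        return 200
    if unit_class == "prized":
        if days_late <= 0:
            return 500                       # aggressive deadline met
        return 150 if days_late <= 1 else 0  # assumed 1-day grace before "very late"
    raise ValueError(unit_class)

print(tiered_points("prized", -0.5))  # deadline met: 500
print(tiered_points("prized", 0.5))   # late: 150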
Leonardo
Posts: 261
Joined: Tue Dec 04, 2007 5:09 am
Hardware configuration: GPU slots on home-built, purpose-built PCs.
Location: Eagle River, Alaska

Re: point system is getting ridiculous...

Post by Leonardo »

Another thought: momentum and popularity are important to the program, perhaps even as much as the cadre of high-performance competitors. You might also consider adding a 'flat rate' award for each work unit, so those with modest systems see recognition for participating and understand that every work unit completed is appreciated. The flat-rate award would be the same for every work unit, regardless of complexity. I think that would keep engaged a number of users who otherwise might leave in frustration. A rough metaphor for this would be the runner at a 5K race who finishes way behind most of the others: she still gets a t-shirt, and she decides to return for the next 5K.


EDIT: So that I haven't confused everyone, the 'flat rate' award would be in addition to the value points of the work unit.
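
In terms of the sketch above, the flat award simply adds on top of the unit's value points (the 25-acorn figure is mine, purely for illustration):

FLAT_AWARD = 25                                    # same for every WU, value assumed
total = FLAT_AWARD + tiered_points("garden", 0)    # 125 for a garden-variety unit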