Bigadv points change


7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Hardware configuration: Intel i7-4770K @ 4.5 GHz, 16 GB DDR3-2133 Corsair Vengeance (black/red), EVGA GTX 760 @ 1200 MHz, on an Asus Maximus VI Hero MB (black/red), in a blacked out Antec P280 Tower, with a Xigmatek Night Hawk (black) HSF, Seasonic 760w Platinum (black case, sleeves, wires), 4 SilenX 120mm Case fans with silicon fan gaskets and silicon mounts (all black), a 512GB Samsung SSD (black), and a 2TB Black Western Digital HD (silver/black).
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

P5-133XL wrote:Putting a cap on the points is contrary to the points being proportional to the science, for there is no corresponding cap on the science value. All a cap does is create a distortion in the proportionality.

From a purely theoretical point of view, that might be true. However, the current formula's exponential curve makes the far right end far too extreme. It is NOT proportional, it is exponential. The far right continues on toward infinity, and that is not feasible in any way. It cannot continue to infinity, so it has to be capped somewhere.

Return a 6901 in 2 days and you get 41,567.11 PPD. In 36 hours, 63,996.76 PPD. In 1 day's time, 117,569.55 PPD.

The first 12-hour jump is only 20K+ points. Is getting the results turned in another 12 hours sooner really worth 60K+ extra points?

What if you could complete the WU in 12 hours? 332,536.91 PPD. Is that next 12 hours really worth 200K+ PPD?

What if you could complete the WU in 6 hours? 1,185,236.55 PPD. Is that next 6 hours really worth almost 1 million PPD?

3 hours? 3,352,355.2 PPD.
1 hour? 13,251,347.45 PPD. Is that 2-hour difference really worth 10 million PPD? Does the value of the science REALLY go up 10 MILLION PPD in two hours' time?
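For reference, those PPD figures fall straight out of the published QRB formula, final credit = base × sqrt(k × deadline / elapsed). Here is a minimal sketch of the curve; the base credit, k-factor, and deadline below are placeholders, NOT p6901's real values, since only the shape of the curve matters:

[code]
# Minimal sketch of the published Quick Return Bonus (QRB) formula.
# BASE_CREDIT, K_FACTOR, and DEADLINE are placeholder values.
from math import sqrt

BASE_CREDIT = 1000.0   # hypothetical base points for the WU
K_FACTOR    = 26.4     # hypothetical k-factor
DEADLINE    = 6.0      # hypothetical final deadline, in days

def wu_points(elapsed_days):
    """Credit for one WU returned after elapsed_days."""
    bonus = max(1.0, sqrt(K_FACTOR * DEADLINE / elapsed_days))
    return BASE_CREDIT * bonus

def ppd(elapsed_days):
    """Points per day: WU credit divided by days per WU."""
    return wu_points(elapsed_days) / elapsed_days

for days in (2.0, 1.5, 1.0, 0.5, 0.25, 0.125):
    print(f"{days:5.3f} days -> {ppd(days):12.2f} PPD (x{ppd(days) / ppd(2.0):.2f})")
[/code]

Inside the bonus region, PPD scales as elapsed^-1.5, so every halving of the turnaround time multiplies PPD by 2^1.5 ≈ 2.83, exactly the ratio between the figures above (117,569.55 / 41,567.11 ≈ 2.83).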


IMO, the answer is no. The curve of the points graph is too steep as it moves to the right. That's why there should be a cap on SMP bonus points that is tied to the core count contributing the work. More cores equals a higher potential bonus. The points should be, in some way, tied proportionally to the hardware donating to the project.


P.S. Yes, I said core counts. And yes, that is a somewhat simplified view, but not overly simplified. One does not need to count threads, which only contribute a few extra percent of performance. An oversimplification would be to call the processing units of an NV GPU "cores."

I could have said FPU count, because the FPU does the majority of the work. That way both Intel and AMD are on level ground. But if people want to give Intel chips a higher potential bonus because they use HT, we can discuss that too. ;)
How to provide enough information to get helpful support
Tell me and I forget. Teach me and I remember. Involve me and I learn.
ChasR
Posts: 402
Joined: Sun Dec 02, 2007 5:36 am
Location: Atlanta, GA

Re: point system is getting ridiculous...

Post by ChasR »

7im wrote: And it's not so much points inflation as it is Moore's law affecting how fast we can fold. Throw in a quad or 8 core processor, and we have 120x or 240x processing power today. What is 240x the old 110 PPD? The answer is not that far off what we are actually getting in PPD.

Moore's law plays a part, no doubt. However, I got out one of my old P4 640s, a $2000+ machine when I built it, clocked it up to 3.8 GHz, and set it to run SMP so as to compare the FAH desktop processing power of old to the desktop performance of today, a 2600K @ 4.8 GHz. In terms of folding performance, the 2600K is roughly 12 times faster than the P4 on the same regular SMP WU (p6072, ~24:00/frame on the P4 vs ~2:00/frame on the 2600K). That's the Moore's law part of the issue. As I see it, the real problem has something to do with Moore's law, and much to do with points inflation. The 2600K is awarded ~400 times the PPD of the P4, comparing uniprocessor PPD on the P4 to -bigadv PPD on the 2600K. If the real value to science is represented by the PPD produced, why does PG even bother with uniprocessor work?
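A quick back-of-the-envelope split of those numbers (a sketch; the frame times and the ~400x figure are the ones quoted above):

[code]
# How much of the 2600K's PPD advantage is raw speed (Moore's law),
# and how much is left over as "points inflation"?
p4_frame_min = 24.0   # ~24:00/frame on the P4 (figure quoted above)
i7_frame_min = 2.0    # ~2:00/frame on the 2600K (figure quoted above)
ppd_ratio    = 400.0  # ~400x the PPD, uniprocessor P4 vs -bigadv 2600K

speed_ratio = p4_frame_min / i7_frame_min   # 12x faster hardware
inflation   = ppd_ratio / speed_ratio       # ~33x not explained by speed
print(f"speedup {speed_ratio:.0f}x, PPD ratio {ppd_ratio:.0f}x, "
      f"unexplained factor ~{inflation:.0f}x")
[/code]

A 12x hardware speedup carrying a 400x points ratio leaves a factor of roughly 33x that Moore's law cannot account for.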

The exponential QRB, with or without a cap, devalues all the work we did before, and it will do the same to all the work we do today (Moore's law).
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

ChasR, please run one of the new a4 CPU work units that has a QRB on your P4, and compare those points.

Let's try an apples-to-apples comparison: non-QRB CPU to non-QRB SMP, or QRB CPU to QRB SMP. Or old points model to old points model, vs. new to new.

IMO, the CPU client doesn't look so bad with a QRB bonus. ;)
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: point system is getting ridiculous...

Post by bruce »

k1wi wrote:I guess what you're saying, grandpa, is that GPUs are terribly inefficient at realising their theoretical computational power, and are thus being overcompensated in points?
GPUs may be somewhat inefficient in reaching their theoretical computational power, but it's more than that. The GPU cores are limited in the types of analysis they can run, so if you measure GFLOPS or some other theoretical measure of computational power, FAH can do more science with the same number of CPU GFLOPS.

My first hand-held calculator did a good job with simple calculations. My second hand-held calculator had a square root button. As long as I never needed to find square roots, the two were equivalent but that one extra button was really important if that's what I needed to do. Think of the extra software in the SMP client as a square root button.
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

There is no overcompensation of GPUs. If anything, GPUs are undercompensated. They have hundreds of processing units, and are 10x to 30x faster than CPUs at the same task. However, GPUs don't have a square root button, to continue the example, so they can't do all of the same tasks a CPU can. This limited ability is why the points from a GPU are significantly lower than their apparent speed would suggest. GPUs do some science REALLY fast, but can't do some science at all. So the points were "averaged," so to speak, based, as always, on the value of scientific production.

Read the GPU history section on the GPU 2 FAQ sometime, and pay attention to the speed comparisons for more detail. ;)
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

ChasR wrote:The exponential QRB, with or without a cap, devalues all the work we did before, and it will do the same to all the work we do today (Moore's law).
Yes, that is why accumulated points have a half-life. They are only significant relative to recent work, and become insignificant over time as the total amount of work done increases. :wink:

7im's point about the impossible-to-achieve vertical limit of the exponential curve only applies if you don't adjust the function (adjust it, not cap it). :roll:

Obviously, the intention is to reward doubling the speed of return. The "normal" speed of return, of course, increases over time. :wink:

...Keeping the returns on the left and not right of the exponential curve.

I thought this was being done already.
Transparency and Accountability, the necessary foundation of any great endeavor!
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: point system is getting ridiculous...

Post by k1wi »

I would love for the cap to be at 1x per core, but I suspect that is just too simple. The hassle of limiting bigadv to >8 cores is a case in point: there will be many difficulties in implementing it.

Why not set the limit at 10x or 20x (or wherever the curve becomes too steep), and take advantage of flags to create 'classes' of projects in which a given computer is unlikely to reach the top multiplier? It would then be up to users to select the most appropriate class; if they are capped at one level, they should move to the next. That is basically what happens now (users opt in to bigadv because the points premium is about 100%).

Or perhaps replace the exponential curve with a multiplier set at >1.0, so there is a linear benefit: still an incentive to return work units faster, but not to the point where the benefit is astronomical.
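To make the two alternatives concrete, here is a sketch of the three bonus shapes side by side. The cap level, k, deadline, and linear slope are made-up numbers; the uncapped curve is the published QRB expression:

[code]
from math import sqrt

K, DEADLINE = 26.4, 6.0   # hypothetical project constants
CAP         = 10.0        # hypothetical cap on the multiplier
LINEAR_RATE = 0.5         # hypothetical slope for the linear variant

def qrb_multiplier(elapsed_days):
    """The current curve: grows without bound as returns get faster."""
    return max(1.0, sqrt(K * DEADLINE / elapsed_days))

def capped_multiplier(elapsed_days):
    """First idea: the same curve, hard-capped at 10x (or 20x)."""
    return min(CAP, qrb_multiplier(elapsed_days))

def linear_multiplier(elapsed_days):
    """Second idea: a modest linear reward for early return."""
    return 1.0 + LINEAR_RATE * max(0.0, DEADLINE - elapsed_days)

for d in (4.0, 2.0, 1.0, 0.5, 0.1):
    print(f"{d:4.1f} days: QRB x{qrb_multiplier(d):5.1f}, "
          f"capped x{capped_multiplier(d):4.1f}, linear x{linear_multiplier(d):4.2f}")
[/code]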

I think the QRB for uniproc is a good move (for an *average* computer it roughly doubles the ppd), which brings it more in line with the rest of the QRB system.

As someone who has worked very hard to bring in donors (over 250 for my team) through competitions etc., I've noted that the 'average' user is probably sitting on a Core 2 Duo. A LOT are sitting on Pentiums, and the churn from these guys as they see their points blitzed is really high. They see their 300 ppd as a pittance compared with the 30,000 ppd of an i7, so they just turn it off (faster than the previous rate of churn). That's fine while the really fast machines are offsetting it, but if enough folders turn them off then it has a negative effect on the amount of science being computed.
orion
Posts: 135
Joined: Sun Dec 02, 2007 12:45 pm
Hardware configuration: 4p/4 MC ES @ 3.0GHz/32GB
4p/4x6128 @ 2.47GHz/32GB
2p/2 IL ES @ 2.7GHz/16GB
1p/8150/8GB
1p/1090T/4GB
Location: neither here nor there

Re: point system is getting ridiculous...

Post by orion »

k1wi wrote:I would love for the cap to be at 1x per core, but I suspect that is just too simple. The hassle of limiting bigadv to >8 cores is a case in point: there will be many difficulties in implementing it.

Why not set the limit at 10x or 20x (or wherever the curve becomes too steep), and take advantage of flags to create 'classes' of projects in which a given computer is unlikely to reach the top multiplier? It would then be up to users to select the most appropriate class; if they are capped at one level, they should move to the next. That is basically what happens now (users opt in to bigadv because the points premium is about 100%).

Or perhaps replace the exponential curve with a multiplier set at >1.0, so there is a linear benefit: still an incentive to return work units faster, but not to the point where the benefit is astronomical.
To make that work, PG would need to decrease the preferred and final deadlines. It wouldn't hurt to make the base points for a project smaller while increasing the k-factor. That would make it "un-profitable" for people to use non-qualified systems.

Not that anyone around here has used a work around to do just that. ;)
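For what it's worth, here is a sketch of why the smaller-base, bigger-k tuning squeezes slow systems while leaving fast ones untouched. The numbers are made up, chosen so that base × sqrt(k) is identical in both tunings (which pins the fast-machine credit), and the formula is the published QRB credit:

[code]
from math import sqrt

DEADLINE = 6.0   # hypothetical final deadline, in days

def points(base, k, elapsed):
    """Published QRB credit; the bonus multiplier never drops below 1."""
    return base * max(1.0, sqrt(k * DEADLINE / elapsed))

# Two tunings with identical fast-machine credit (base * sqrt(k) is equal):
old_base, old_k = 1000.0, 0.5   # hypothetical "old" tuning
new_base, new_k = 250.0,  8.0   # smaller base, bigger k

for label, t in [("fast rig, 0.5 days", 0.5), ("slow rig, 5.0 days", 5.0)]:
    print(f"{label}: old {points(old_base, old_k, t):7.1f}, "
          f"new {points(new_base, new_k, t):7.1f}")
# fast rig: ~2449 points under either tuning
# slow rig: ~1000 points under the old tuning, ~775 under the new one;
# a slow system's floor is the (now much smaller) base credit
[/code]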
iustus quia...
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

mdk777 wrote:...Keeping the returns on the left and not right of the exponential curve.

I thought this was being done already.
QRB is a rather new bonus program, and -bigadv even newer. What led you to think this?


@ k1wi - I am not a fan of dividing folding up into classes. Your own comment speaks well against it... "The hassle of limiting bigadv to >8 cores is a case in point: there will be many difficulties in implementing it."

The same applies to classes. Where do you divide the classes? At 8 cores, or 8 threads? How well do the points curves overlap from one class to the next? The current large points gap (non-overlap) from SMP to -bigadv makes class division look unappealing, as does your implementation problem.

And what happens when Intel or AMD comes out with a new class of chip that blurs the lines between classes? For example, when -bigadv first started, there were no i7s with 4 cores and 8 threads, only 8-core systems and higher. In another month or so, the line will blur even further with AMD's Bulldozer, which has 8 partial cores and 8 threads, but only 4 FPUs like the i7. What class would you put that one in, bigadv or not?
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

mdk777 wrote:...Keeping the returns on the left and not right of the exponential curve.

I thought this was being done already.



7im wrote:QRB is a rather new bonus program, and -bigadv even newer. What led you to think this?
From the definition of the program.
Since the K factor is an arbitrary number, it can be adjusted.

viewtopic.php?p=105038#p105038
For project 2681, we will initially set k=2. We may adjust k as necessary.
k1wi
Posts: 910
Joined: Tue Sep 22, 2009 10:48 pm

Re: point system is getting ridiculous...

Post by k1wi »

I never said I would divide it based on cores or threads (I just used an example of what is currently being used). I'd leave it as a fluid division based on self-selected flags: -small, -normal, -large. A guide to what sort of hardware suits each class could be made, but people would be responsible for selecting the flag that best suited their hardware.

Like I said, I would arrange the points structure so that each class was capped at 10x or 20x; that way, if someone consistently found they were being capped at that level, they could change their flag and fold in the next class. You could tighten up the deadlines and thus manage the largest class users could fold in for a given speed of folding.

e.g. Machine A earns 100,000 ppd raw on -small SMP, which equals a hypothetical 20x QRB.
Machine A running -small only earns 50,000 ppd after the 10x QRB cap is applied (we're talking hypothetical numbers here).
If Machine A switches to -normal, it earns 100,000 ppd raw, which equals a hypothetical 13x QRB.
Machine A running -normal only earns 80,000 after the 10x QRB cap is applied.
If Machine A switches again to -big, it earns 100,000 ppd raw, which equals a hypothetical 6x QRB.
Machine A running -big earns the full 100,000 ppd after the 10x QRB cap is applied, because it didn't get capped...

At the same time, Machine B running -big hits a hypothetical 12x QRB and maxes out the bonus, but faster machines still get higher ppd as they complete work units faster.

Stanford can then potentially introduce new classes if users in the top class start earning a raw QRB above 20x (which I *think* would make it beneficial to run 2 clients simultaneously).

After all, there is still an incentive to fold fast: faster work units = higher ppd even with a capped QRB, because you are completing more work units sooner. Plus, at the top end there is only a very small number of folders.
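Assuming PPD scales linearly with the multiplier (so a cap rescales PPD by min(multiplier, cap) / multiplier), Machine A's numbers above work out as in this sketch; the exact 76,923 is what the round 80,000 stands in for:

[code]
def capped_ppd(raw_ppd, raw_multiplier, cap=10.0):
    """PPD after capping the QRB multiplier. PPD scales linearly with
    the multiplier, so the cap rescales it by min(mult, cap) / mult."""
    return raw_ppd * min(raw_multiplier, cap) / raw_multiplier

for flag, mult in [("-small", 20.0), ("-normal", 13.0), ("-big", 6.0)]:
    print(f"Machine A on {flag:>7}: {capped_ppd(100_000.0, mult):9,.0f} PPD")
# -small  ->  50,000 PPD (capped hard: move up a class)
# -normal ->  76,923 PPD (still capped, ~80,000 in round numbers)
# -big    -> 100,000 PPD (uncapped: this is the right class)
[/code]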
John_Weatherman
Posts: 289
Joined: Sun Dec 02, 2007 4:31 am
Location: Carrizo Plain National Monument, California

Re: point system is getting ridiculous...

Post by John_Weatherman »

k1wi wrote: A LOT are sitting on Pentiums, and the churn from these guys as they see their points blitzed is really high. They see their 300 ppd as a pittance compared with the 30,000 ppd of an i7, so they just turn it off (faster than the previous rate of churn). That's fine while the really fast machines are offsetting it, but if enough folders turn them off then it has a negative effect on the amount of science being computed.
200 PPD is good going for a P4 - unless people are using 2 clients on a single-core machine, which is not smiled upon... Just to back up what I posted earlier, I see Steve Jobs said this in today's NYT: "We are going to demote the PC to just be a device. We are going to move the digital hub, the center of your digital life, into the cloud."
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

mdk777 wrote:
mdk777 wrote:...Keeping the returns on the left and not right of the exponential curve.

I thought this was being done already.



7im wrote:QRB is a rather new bonus program, and -bigadv even newer. What led you to think this?
From the definition of the program.
Since the K factor is an arbitrary number, it can be adjusted.

viewtopic.php?p=105038#p105038
For project 2681, we will initially set k=2. We may adjust k as necessary.

K is not arbitrary. It is not selected from thin air.

But for the sake of argument, let's assume I'm wrong and K is arbitrary. What works for an 8-core system with regard to bonus points does not work well for a 64-core system. Adjusting the value of K does not prevent the right-hand side of the curve from continuing to approach infinity. And the more you move the curve to adjust for the very few bleeding-edge 64-core systems, the more you destroy the points for the much larger base of 8-core users.
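In formula terms (assuming the published bonus expression, with d the deadline length and t the return time):

PPD(t) = base * sqrt(k * d / t) / t = [base * sqrt(k * d)] * t^(-3/2)

Changing K only rescales the whole curve by sqrt(K); the t^(-3/2) blow-up as t approaches zero is untouched, which is why no choice of K flattens the fast-return end.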


P.S. The post you linked does not contain any form of the word "arbitrary" nor does it convey that meaning. If you feel it does, please quote the specific text. ;)
mdk777
Posts: 480
Joined: Fri Dec 21, 2007 4:12 am

Re: point system is getting ridiculous...

Post by mdk777 »

7im wrote:Adjusting the value of K does not prevent the right-hand side of the curve from continuing to approach infinity.
Correct; it adjusts the left side, the area of interest.
7im wrote:And the more you move the curve to adjust for the very few bleeding-edge 64-core systems, the more you destroy the points for the much larger base of 8-core users.
Correct. If you want to avoid a "ridiculous" bonus for systems that are 16x as powerful, you reduce the bonus for systems that are merely 2x as powerful as the average system.
(Since 4 cores are now ubiquitous, as compared to 1 core.)

Regarding the definition of arbitrary versus a constant, that is merely a semantic discussion. You pick your constant to yield the desired result, so it is by definition arbitrary.

However, I don't disagree with k1wi that more than one class of bonus could exist.
7im
Posts: 10189
Joined: Thu Nov 29, 2007 4:30 pm
Location: Arizona

Re: point system is getting ridiculous...

Post by 7im »

ar·bi·trar·y / ˈärbiˌtrerē / Adjective
1. Based on random choice or personal whim, rather than any reason or system.

It is not a semantic difference. Arbitrary is the wrong word to use. There is nothing whimsical about how the K value is selected. The selection is very constrained, and there is reason behind it, with a specific desired result, not a random one. Pick a different word.
mdk777 wrote:Correct. If you want to avoid a "ridiculous" bonus for systems that are 16x as powerful, you reduce the bonus for systems that are merely 2x as powerful as the average system. (Since 4 cores are now ubiquitous, as compared to 1 core.)
Yes, but this is the wrong solution. Crushing the points for the ubiquitous to reel in the points of the few on the far-right fringe is not a solution. Sliding the existing curve does not work. The slope of the curve needs to change. The function needs adjustment to provide better overlap with the SMP points, and to allow for growth into 48, 64, even 80 cores in the next year.

If not fixed, we'll soon have systems that produce more points in one day than some people have produced in 7 years' time. And no offense to anyone, but NO work unit, no matter how large or how fast it was returned, should ever be worth years of contribution to the project.

Proportional, not Exponential.

Look at the graph below. We used to expect the red line; we even demanded it. Linear points for linear speed improvements: a 4 GHz system should get twice as many points as a 2 GHz system.

But now the -bigadv points look like the green line, when they should look more like the blue line. (Actually, the blue line is much too steep as well.)

[image: points vs. machine speed; red = linear, green = current -bigadv curve, blue = proposed curve]
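A sketch of how a graph like the one above can be generated; the curve shapes are illustrative stand-ins rather than PG's data (linear for red, the QRB's roughly speed^1.5 PPD growth for green, and a gentler power for blue):

[code]
# Illustrative stand-ins for the three curves described above.
import numpy as np
import matplotlib.pyplot as plt

speed = np.linspace(1, 16, 200)   # machine speed, multiples of a baseline
plt.plot(speed, speed,      "r-", label="linear (what we used to expect)")
plt.plot(speed, speed**1.5, "g-", label="current -bigadv QRB (~speed^1.5)")
plt.plot(speed, speed**1.2, "b-", label="proposed: milder super-linear")
plt.xlabel("machine speed (x baseline)")
plt.ylabel("PPD (x baseline)")
plt.legend()
plt.show()
[/code]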


And how is it fair that GPU clients are still stuck on the red line, as are most CPU clients?

If not corrected, the current path is unsustainable. I fear more people will leave the project, as ChasR noted.