
Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Sun Sep 20, 2020 10:26 pm
by ir_cow
MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4, also need to be tested.
I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even saturate PCIe 3.0 x8 for folding. It doesn't use the full x16 in games either. That 0.5% uplift comes from how the bits are encoded, which lowers the link overhead. Funnily enough, you "gain" 0.5% with PCIe 4.0, but on an AMD CPU at lower resolutions you lose 15-30% FPS depending on the game. It is only at 4K that the CPU stops mattering much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.
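
For reference, the encoding math behind that overhead figure, as a quick sketch (note the 8b/10b to 128b/130b change happened at Gen 3; Gen 4 keeps 128b/130b and doubles the raw signalling rate):

Code: Select all
# Encoding efficiency per PCIe generation.
ENCODINGS = {
    "1.0/2.0 (8b/10b)": 8 / 10,
    "3.0/4.0 (128b/130b)": 128 / 130,
}

for gen, efficiency in ENCODINGS.items():
    print(f"PCIe {gen}: {1 - efficiency:.2%} encoding overhead")
# 8b/10b wastes 20% of the raw bits; 128b/130b wastes only ~1.54%.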

Now what needs to happen is FAHCores being written for Tensor cores. If the RTX 2080 Ti gets ~3.5 million PPD, an extra 4,000+ CUDA cores plus higher clock and memory frequencies should make the 3080 at least 50% faster. But at 4.5 million tops, it is only about 28% faster. This tells me things still need to be optimized for the current CUDA path. Then add Tensor WUs.
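
A quick back-of-the-envelope check on that scaling claim, using the rough PPD figures quoted above (illustrative numbers, not formal benchmarks):

Code: Select all
# Rough PPD figures from the post above.
rtx_2080ti_ppd = 3.5e6  # "~3.5 million"
rtx_3080_ppd = 4.5e6    # "4.5 million tops"

observed_uplift = rtx_3080_ppd / rtx_2080ti_ppd - 1
print(f"Observed uplift: {observed_uplift:.1%}")  # ~28.6%, well short of
# the >=50% you'd expect from ~4,000 extra CUDA cores and higher clocks,
# hence the argument that the current CUDA path is not yet optimized.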

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Sun Sep 20, 2020 11:33 pm
by ipkh
The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 0.0.13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that about 30% of the instructions were INT32, so expect some reduction to the doubling of performance.
FAH has a difficult time here, as it has to split the work over many more cores (effective FP32 cores), and smaller WUs will be very inefficient on large GPUs. We already see this disparity in the gaps between the 900 series, 10 series and 20 series. But I have no doubt that they are working on it. I'm sure Nvidia has a vested interest in helping as well.
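
To make that "some reduction" concrete, here is a minimal toy issue model, assuming Turing pairs one FP32 pipe with one INT32 pipe while Ampere pairs one FP32 pipe with a shared FP32/INT32 pipe (a simplification of the real SM, not a measurement):

Code: Select all
# Toy per-SM issue model; real GPUs are far more complicated.
def cycles_turing(n_instr, int_frac):
    # Dedicated FP32 pipe and dedicated INT32 pipe: the busier one dominates.
    return max((1 - int_frac) * n_instr, int_frac * n_instr)

def cycles_ampere(n_instr, int_frac):
    # INT32 is confined to the shared pipe; FP32 can use both pipes.
    return max(n_instr / 2, int_frac * n_instr)

n = 1_000_000
for int_frac in (0.0, 0.3):
    speedup = cycles_turing(n, int_frac) / cycles_ampere(n, int_frac)
    print(f"INT32 fraction {int_frac:.0%}: ~{speedup:.2f}x Turing")
# 0% INT32 gives the full 2.00x doubling; at the ~30% INT32 typical of
# games, the model drops to ~1.40x -- "some reduction to the doubling".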

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Mon Sep 21, 2020 12:00 am
by kiore
What we're seeing, with F@H not seeming to make the most of new hardware, has happened before with new generations. It can come down to a number of factors: project cores not yet aligned to new standards, drivers not yet fully utilizing the hardware's capabilities, or a combination of the two. However, the project seems to be ahead of the curve this time, with new core versions coming online, new benchmarking, and new ways to use the new generations of hardware, such as running multiple work units on a single GPU, under development. I am optimistic that the work underway will bring significant optimization improvements in the not-too-distant future.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Mon Sep 21, 2020 8:34 am
by PantherX
F@H can't use all the new GPU features, since it doesn't render anything. Instead, it will use whatever features help it in protein simulation. There are some really cool ideas floating around, and some are easier to implement than others. Time will tell what happens next, but it is definitely a good thing for F@H, since new and exciting times lie ahead for GPU folding :)

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Mon Sep 21, 2020 10:10 am
by HaloJones
will be very interested to see what 0.0.13 can do with a 3080

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Tue Sep 22, 2020 3:59 am
by PantherX
HaloJones wrote:will be very interested to see what 0.0.13 can do with a 3080
Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I do expect the numbers to get better once the drivers have matured a bit, generally in about six months. By that time, we might have a new version of FahCore_22 that unlocks more performance too!
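
For what it's worth, the CUDA-over-OpenCL uplift implied by those figures (just arithmetic on the numbers above):

Code: Select all
# PPD figures from Project 11765 as posted above.
ppd = {
    "GTX 1080 Ti": {"OpenCL": 1.554e6, "CUDA": 2.253e6},
    "RTX 2080 Ti": {"OpenCL": 2.826e6, "CUDA": 3.981e6},
    "RTX 3080": {"OpenCL": 4.489e6, "CUDA": 5.618e6},
}

for gpu, results in ppd.items():
    uplift = results["CUDA"] / results["OpenCL"] - 1
    print(f"{gpu}: CUDA is {uplift:.0%} faster than OpenCL")
# ~45% on the 1080 Ti, ~41% on the 2080 Ti, ~25% on the 3080.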

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Tue Sep 22, 2020 4:28 pm
by MeeLee
ir_cow wrote:
MeeLee wrote:@flarbear: I tend to agree with you.
The question I would ask here is whether PCIe 4.0 bandwidth also consumes 1.5% more energy than 3.0. If your system runs at 350W, the extra 3.5W may be worth it, but it may not be if the power draw is more like 10W higher...
Performance and power draw on PCIe 4.0 vs 3.0, and at x16 vs x8 vs 4.0 x4, also need to be tested.
I don't see how the PCIe slot can "consume" more power. I also tried the Founders Edition, which has a limit of 370 watts. No difference in PPD, just those massive swings depending on the WU. Also, the RTX 3080 doesn't even saturate PCIe 3.0 x8 for folding. It doesn't use the full x16 in games either. That 0.5% uplift comes from how the bits are encoded, which lowers the link overhead. Funnily enough, you "gain" 0.5% with PCIe 4.0, but on an AMD CPU at lower resolutions you lose 15-30% FPS depending on the game. It is only at 4K that the CPU stops mattering much. But we are talking folding here, and I don't see any reason why PCIe 4.0 would help in folding.
11th gen Intel CPUs support PCIe Gen 4.
While the primary PCIe x16 slot is generally wired directly to the CPU and should have very little wattage overhead,
other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
They actually use a controller that requires active cooling (a tiny 40mm fan in most cases, so I'd estimate ~15-20W max).
You make a fair point about AMD CPUs being slower than Intel CPUs in PCIe data transfer.
Even though a 2080 Ti doesn't need more than a PCIe 3.0 x8 link, connecting it to an x16 slot gives a marginal performance improvement (<10%, usually between 1-5%).
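
For context, the spec-level bandwidth math behind the lane discussion (ideal links; only 128b/130b encoding overhead is accounted for, so real-world throughput is lower):

Code: Select all
# Per-direction link bandwidth: raw GT/s per lane x encoding x lane count.
GENS = {"3.0": 8.0, "4.0": 16.0}  # GT/s per lane; both use 128b/130b
ENCODING = 128 / 130

def link_gb_per_s(gen, lanes):
    return GENS[gen] * ENCODING * lanes / 8  # bits -> bytes

for gen, lanes in [("3.0", 8), ("3.0", 16), ("4.0", 4), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_gb_per_s(gen, lanes):.1f} GB/s")
# 3.0 x8 and 4.0 x4 both land at ~7.9 GB/s, which is why a 4.0 x4 link
# is often treated as the bandwidth twin of a 3.0 x8 link.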

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Thu Sep 24, 2020 1:33 am
by bruce
ipkh wrote:The Nvidia driver interprets the OpenCL and CUDA (Core 22 version 0.0.13) instructions, so it is up to Nvidia's optimizations to make the dual FP32 work. For games, the basic rule was that about 30% of the instructions were INT32, so expect some reduction to the doubling of performance.
It's impossible to write code without integers, but I'd expect the INT-to-FP32 ratio in a game to differ from FAH's ... though the benchmarking results will be examined carefully, and then the drivers will be improved, making those results obsolete. 8-)

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Thu Sep 24, 2020 10:37 pm
by MeeLee
I don't think there'll be a lot of people running the 3090.
Its theoretical performance is at most 20-25% higher than the 3080's, at twice the price.
I think the 3080 will be the best GPU for most people looking for a new high-performance GPU.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Fri Sep 25, 2020 1:52 am
by road-runner
Yeah, at the price of those, I can buy a lot of electricity for the 1080 Ti.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Fri Sep 25, 2020 8:27 am
by gunnarre
MeeLee wrote: Other slots (especially x4 slots or M.2 slots) can go via a PCIe bridge chip, consuming extra power.
They actually use a controller that requires active cooling (a tiny 40mm fan in most cases, so I'd estimate ~15-20W max).
This is not inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel or AMD, we might see PCIe 4 support with lower power dissipation.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Fri Sep 25, 2020 4:34 pm
by MeeLee
gunnarre wrote: This is not inherent to the PCIe Gen 4 standard, right? It has more to do with having to use a less power-efficient chip for the X570 chipset, which made an active chipset cooling fan necessary. In future chipsets from ASMedia, Intel or AMD, we might see PCIe 4 support with lower power dissipation.
I'm not sure.
I think it'll be like the USB 3.0 protocol.
It does use more power than USB 2.0, but data also moves at a higher rate.
However, the question would be: if you stick a USB 3.0 stick running at USB 2.0 speeds into a USB 3.0 port, will it draw more or less power than in a USB 2.0 port?
My estimate is that a PCIe 4.0 x4 port uses nearly the same power as a PCIe 3.0 x8 port.
It saves a bit of power by using fewer lanes, but spends more to feed the GPU at a faster data rate.
It saves power again, because faster transactions mean the PCIe interface idles sooner.
But it uses more power both at idle and under load.

If the load isn't 100% but a constant 25%, PCIe 4.0 should have slightly higher power consumption than a modern 3.0 implementation.

I think power consumption will ultimately depend on the CPU, and on the process node it is made on.
As with many chips, a "10nm" CPU doesn't mean the entire die is made on a 10nm process; parts are sometimes still 14nm, or even 28nm.

So I think a new PCIe 4.0 port will consume less power than an old 3.0 port.
Things will get more interesting when comparing 4.0 to 3.0 on CPUs of the same node.

In the grand scheme of things, the answers to these questions will more than likely be moot, as we're moving to PCIe 4.0 regardless, and PCIe 5.0 and 6.0 are already on the table.
Both 5.0 and 6.0 may make it hard to find good risers that can support those speeds.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Sat Sep 26, 2020 1:25 am
by Lockheed_Tvr
PantherX wrote:
HaloJones wrote:will be very interested to see what 0.0.13 can do with a 3080
Some quick numbers from Project 11765 in Linux:

TPF 73s - GTX 1080 Ti running OpenCL / 1.554 M PPD
TPF 57s - GTX 1080 Ti running CUDA / 2.253 M PPD
TPF 49s - RTX 2080 Ti running OpenCL / 2.826 M PPD
TPF 39s - RTX 2080 Ti running CUDA / 3.981 M PPD
TPF 36s - RTX 3080 running OpenCL / 4.489 M PPD
TPF 31s - RTX 3080 running CUDA / 5.618 M PPD

I do expect the numbers to get better once the drivers have matured a bit, generally in about six months. By that time, we might have a new version of FahCore_22 that unlocks more performance too!
Is there any way to force it to use CUDA, or is that just for the new beta core that recently came out?

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Sat Sep 26, 2020 1:52 am
by kiore
Only with the new core. The new core is under beta-level testing; it still has a few bugs, it seems, as some WUs "escaped" to general users and some issues were found. Serious progress for optimization, though. Let's see; I am optimistic.

Re: GeForce RTX 3080 and 3090 support enabled !

Posted: Sat Sep 26, 2020 4:09 am
by PantherX
Lockheed_Tvr wrote:...Is there any way to force it to use CUDA, or is that just for the new beta core that recently came out?
In addition to what kiore mentioned, do note that you can't "force" it to use CUDA. Upon initialization, FahCore_22 follows this logic (simplified steps; sketched in code below):
1) Let me see how many platforms I have access to
2) Let me try to use CUDA, since you're on an Nvidia GPU
3) Okay, I tried to use CUDA and failed, so let me try to use OpenCL
4) Oh no, I can't use any platform; let me collect all the information in an error report and send it back for debugging

Do note that AMD GPUs would skip step 2 since CUDA isn't present.
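
A minimal sketch of that fallback logic, purely illustrative: the names here (select_platform, init_platform) are hypothetical stand-ins mirroring the simplified steps above, not the actual FahCore_22 implementation.

Code: Select all
def init_platform(name):
    # Hypothetical stand-in for real platform setup; assume it raises
    # RuntimeError when the driver or runtime is missing or broken.
    print(f"initializing {name}")
    return name

def select_platform(gpu_vendor, platforms):
    # Step 1: 'platforms' is the list of platforms we have access to.
    # Step 2: prefer CUDA on Nvidia GPUs (AMD GPUs skip this entirely).
    if gpu_vendor == "NVIDIA" and "CUDA" in platforms:
        try:
            return init_platform("CUDA")
        except RuntimeError:
            pass  # Step 3: CUDA failed, fall through to OpenCL
    if "OpenCL" in platforms:
        try:
            return init_platform("OpenCL")
        except RuntimeError:
            pass
    # Step 4: nothing worked; gather diagnostics for an error report.
    raise RuntimeError("no usable compute platform; sending error report")

select_platform("NVIDIA", ["CUDA", "OpenCL"])  # tries CUDA first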