
Trillion transistor chip, 8^2x larger than a GA100!

Posted: Fri May 28, 2021 8:43 pm
by MeeLee
Would be interesting to see if FAH could work together with this company to create workable WUs for this hardware.
It's even faster than a supercomputer on deep-learning models.
For more info, see this vid:
https://www.youtube.com/watch?v=NQGyd2kuctA

Re: Trillion transistor chip, 8^2x larger than a GA100!

Posted: Sat May 29, 2021 10:49 pm
by bruce
FAH doesn't use all the capabilities of GPUs. It outputs no video, so all of the components that generate high-resolution, high-frame-rate output take up their share of chip real estate that is essentially wasted space from FAH's perspective. The same goes for the components that generate tensors (for AI) and do half-precision math. Even on-chip RAM provides very little boost in FAH production.

FAH makes extensive use of single-precision math (FP32), moderate use of double-precision math (FP64), and bits and pieces of other math hardware. What is really valuable are the math components, when the calculations can be reordered to be highly parallel. Except for the GPU's ability to do those highly parallel calculations, it isn't doing anything that couldn't have been done on a massive number of Pentium CPUs, given enough time.
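
To make that concrete, here's a toy numpy sketch (not actual FAH code; the function names and the force law are made up for illustration) of why molecular dynamics maps so well onto a GPU: within a single timestep, every pairwise force term is independent of every other term, so they can all be computed at once, while the timesteps themselves must run one after another.

import numpy as np

# Toy illustration only -- not FAH code. Within one timestep, the pairwise
# force terms are independent of each other, so a GPU can compute them all
# in parallel across thousands of cores.
def forces(positions):
    diff = positions[:, None, :] - positions[None, :, :]  # all N*N pair vectors at once
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(positions))  # eye() avoids divide-by-zero on the diagonal
    return (diff / dist[..., None] ** 3).sum(axis=1)  # toy inverse-square force

# The timesteps themselves are strictly serial: step t+1 needs the result
# of step t, so extra silicon can't run them concurrently.
def simulate(positions, velocities, dt=1e-3, steps=100):
    for _ in range(steps):
        velocities += forces(positions) * dt
        positions += velocities * dt
    return positions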

The fundamental limitation isn't the chip size, but rather the serial nature of segments of the computation, where you can't reorder the operations to run them in parallel.
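
That limit is Amdahl's law: if a fraction p of the work can run in parallel, the best possible speedup on n processors is 1 / ((1 - p) + p / n). A quick back-of-the-envelope in Python shows that even a small serial fraction puts a hard ceiling on what any amount of hardware can buy you:

# Amdahl's law: with parallel fraction p and n processors,
# speedup = 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With a 5% serial fraction, the ceiling is 1 / 0.05 = 20x,
# no matter how many processors you throw at it.
for n in (8, 1024, 1_000_000):
    print(f"p=0.95, n={n:>9,}: speedup = {amdahl_speedup(0.95, n):5.2f}x")

Run that and you get roughly 5.93x on 8 processors, 19.64x on 1,024, and 20.00x on a million: past a certain point, a bigger chip simply stops helping.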