Trillion transistor chip, 8^2x larger than a GA100!

A forum for discussing FAH-related hardware choices and info on actual products (not speculation).

Moderator: Site Moderators

Forum rules
Please read the forum rules before posting.
MeeLee
Posts: 1375
Joined: Tue Feb 19, 2019 10:16 pm

Trillion transistor chip, 8^2x larger than a GA100!

Post by MeeLee »

Would be interesting to see if FAH could work together with this company to create workable WUs for this hardware.
It's even faster than a supercomputer in deep learning models.
For more info, see this vid:
https://www.youtube.com/watch?v=NQGyd2kuctA
bruce
Posts: 20910
Joined: Thu Nov 29, 2007 10:13 pm
Location: So. Cal.

Re: Trillion transistor chip, 8^2x larger than a GA100!

Post by bruce »

FAH doesn't use all the capabilities of GPUs. It outputs no video, so all of the components that generate high-resolution, high-frame-rate output take up their share of chip real estate but are essentially wasted space from FAH's perspective. The same goes for components that generate tensors (for AI) and do half-precision math. Even on-chip RAM provides very little boost in FAH production. FAH makes extensive use of single-precision math (FP32), moderate use of double-precision math (FP64), and bits and pieces of other math hardware. What is really valuable are the math components when the calculations can be reordered to be highly parallel. Except for the GPU's ability to do those highly parallel calculations, it's not doing anything that couldn't have been done on a massive number of Pentium CPUs, given enough time.

The fundamental limitation isn't the chip size, but rather the serial nature of segments of the computation, where you can't reorder the operations to run them in parallel.
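That limit is essentially Amdahl's law: if some fraction of the work is inherently serial, adding more parallel hardware stops helping no matter how big the chip is. A minimal sketch in plain Python (the fractions and unit counts are illustrative, not measured FAH numbers):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Overall speedup when only parallel_fraction of the work
    can be spread across n_units compute units; the rest runs serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Even if 95% of a WU's math parallelizes perfectly, a million
# compute units cap out at roughly 20x over a single unit:
print(round(amdahl_speedup(0.95, 1_000_000), 2))  # → 20.0
```

So a wafer-scale part with far more cores than a GA100 only pays off for FAH if the serial fraction of a work unit is tiny; otherwise most of that silicon sits idle.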