
Why are Nvidia's tensor cores so powerful

Posted: Tue Jan 31, 2023 9:05 pm
by GTX56O
The RTX 4090 has H100-style tensor cores: https://www.techpowerup.com/299092/nvid ... t-a-glance

And this is very useful for chemical sequences, for example on page 70 of the white paper:

https://resources.nvidia.com/en-us-tensor-core

https://en.wikipedia.org/wiki/Smith%E2% ... _algorithm
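
For reference, Smith-Waterman is just an integer dynamic-programming recurrence over two sequences, which is the kind of workload the linked white paper and Wikipedia article describe accelerating. Below is a minimal CPU sketch in Python; the function name and the match/mismatch/gap scores are only illustrative values I chose, not anything from Nvidia's implementation:

    def smith_waterman_score(a, b, match=3, mismatch=-3, gap=-2):
        # H[i][j] = best local-alignment score ending at a[i-1], b[j-1]
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                up = H[i - 1][j] + gap
                left = H[i][j - 1] + gap
                H[i][j] = max(0, diag, up, left)  # local alignment never drops below 0
                best = max(best, H[i][j])
        return best

    print(smith_waterman_score("GGTTGACTA", "TGTTACGG"))  # 13 for this classic example pair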

The RTX 4090 is too expensive for my budget, but I am sure they wanted guaranteed sales after the pandemic, because it is a big jump in power, one that has left ATI behind.

It is curious how price speculation, driven by the low supply and high prices of the pandemic, left a large stock of previous-generation cards unsold, and those cards are now being given away at bargain prices.

I hope that in the future they make two formats of card, one air-cooled and one liquid-cooled, because the air cooler makes the card disproportionately large.

This could be dangerous: if ATI or AMD does not release anything decent this year that can compete with Nvidia, Nvidia will have a monopoly and will charge whatever it wants for its cards, and neither Nvidia nor the intermediaries who speculate on supply and demand are known for keeping prices low.

AMD is wasting time developing CPUs with more threads, because the GPU already fulfils that function. I think that since they cannot compete with Nvidia's graphics cards, they are opening up the market with CPUs, which is very ambitious, because it means they also have to compete with Intel.

Re: Why are Nvidia's tensor cores so powerful

Posted: Tue Jan 31, 2023 11:08 pm
by Joe_H
The tensor cores may speed up some calculations, but as implemented by Nvidia they are based on 16-bit floating point calculations and are not as accurate. F@h uses 32-bit calculations, and for some critical ones uses 64-bit floating point. So currently not useful for F@h. The very large number of shader cores in a 4090 can give impressive speedups if the molecular system being simulated is large enough.
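
As a rough illustration of that precision gap (this is not F@h code, just a numpy sketch): FP16 keeps roughly 3 decimal digits, FP32 about 7, FP64 about 15-16, so small contributions simply vanish in FP16.

    import numpy as np

    x = 1.0 + 1e-4                     # needs more precision than FP16 can hold
    print(np.float16(x))               # 1.0    -> the 1e-4 contribution is lost
    print(np.float32(x))               # 1.0001
    print(np.float64(x))               # 1.0001

    # Naively accumulating many small terms (as force sums do) is even worse:
    acc = np.float16(0.0)
    for _ in range(10_000):
        acc = np.float16(acc + np.float16(1e-4))
    print(acc)                         # stalls around 0.25 instead of reaching ~1.0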

Re: Why are Nvidia's tensor cores so powerful

Posted: Fri Feb 03, 2023 5:15 pm
by GTX56O
https://www.youtube.com/watch?v=jiMZYJ--cT8
https://developer.nvidia.com/hpc-applic ... erformance

NAMD (Molecular Dynamics)
Designed for high-performance simulation of large molecular systems
Version: GPU / AMD CPU: v3.0a13; Intel CPU: v2.15a AVX-512

Nowadays, molecular dynamics simulation applications such as AMBER, GROMACS, NAMD and LAMMPS:
https://www.azken.com/blog/sistemas-rec ... molecular/
http://www.mdtutorials.com/gmx/index.html

https://www.youtube.com/watch?v=rYZ1p5l ... haelPapili
https://www.youtube.com/watch?v=DH25pKy ... nformatics

Why does the RX 7900 XTX only get 4 million points when the RTX 4090 gets 30 million, even though they sell for the same price?

Although Nvidia has implemented CUDA FP16 in consumer gaming graphics, it seems it has implemented the artificial intelligence features for data centers. I don't understand why they have abandoned us like that. What could be done at the software level?

It seems that Nvidia designed the Tensor Cores for data centers running artificial intelligence.

Do home folding applications need 32-bit or 64-bit double precision to stay accurate?

Could AlphaFold reduce processor time?

https://www.youtube.com/watch?v=mTjYvIU ... nformatics
https://www.youtube.com/watch?v=lLFEqKl3sm4
https://www.youtube.com/watch?v=Uz7ucmqjZ08
https://www.youtube.com/watch?v=f8FAJXPBdOg