Source: youtube

Where an FP32 core can do 2 FLOPs/cycle, a tensor core on the Titan RTX can do 128 FLOPs/cycle. That works out to roughly a 64x per-core speedup (128 / 2 = 64) for matrix multiply-accumulate operations.

Use 16-bit precision, or look up "mixed precision training"; a sketch of what that looks like in code follows below.
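As a minimal sketch of mixed-precision training in PyTorch, assuming torch.cuda.amp is available (the model, sizes, and loss here are illustrative placeholders, not from the original article):

```python
import torch
from torch import nn

# Placeholder model and data, just to show the AMP pattern.
model = nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 gradient underflow

x = torch.randn(256, 1024, device="cuda")
target = torch.randn(256, 1024, device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():  # runs eligible ops in FP16 on tensor cores
    output = model(x)
    loss = nn.functional.mse_loss(output, target)

scaler.scale(loss).backward()  # backward pass on the scaled loss
scaler.step(optimizer)         # unscales gradients; skips the step on inf/NaN
scaler.update()                # adjusts the scale factor for the next iteration
```

The autocast context is what routes matrix multiplies through the FP16 tensor-core path, while the GradScaler keeps small gradients from flushing to zero in half precision.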

Due to the underlying hardware being based on powers of two, these are the most…
