The bfloat16 numerical format

bfloat16 is a custom 16-bit floating point format for machine learning that is composed of:

  • one sign bit,
  • eight exponent bits, and
  • seven mantissa bits.

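The layout above can be checked directly, since bfloat16 is simply the top 16 bits of the IEEE float32 encoding. The sketch below (an illustration, not a library API; the function names are mine) converts a Python float to bfloat16 bits with round-to-nearest-even and splits out the three fields:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float32 value to its 16-bit bfloat16 encoding.

    bfloat16 keeps the upper 16 bits of the float32 pattern, so the
    conversion is a rounded truncation of the low 16 mantissa bits.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Round to nearest even before dropping the low 16 bits.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def decompose(b16: int) -> tuple[int, int, int]:
    sign = (b16 >> 15) & 0x1      # one sign bit
    exponent = (b16 >> 7) & 0xFF  # eight exponent bits (bias 127, as in float32)
    mantissa = b16 & 0x7F         # seven mantissa bits
    return sign, exponent, mantissa

# 1.0 encodes as sign=0, biased exponent=127, mantissa=0.
print(decompose(float32_to_bfloat16_bits(1.0)))   # (0, 127, 0)
print(decompose(float32_to_bfloat16_bits(-2.0)))  # (1, 128, 0)
```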
The following diagram shows the internals of three floating point formats:

  • float32: IEEE single-precision,
  • float16: IEEE half-precision, and
  • bfloat16.

[Figure: bit layout of the float32, float16, and bfloat16 formats]

The dynamic ranges of bfloat16 and float32 are equivalent because both formats use eight exponent bits; bfloat16, however, takes up half the memory, trading the shorter mantissa's precision for that saving. float16 makes the opposite trade: its five-bit exponent gives a much narrower range (a maximum of about 65504) in exchange for a longer ten-bit mantissa.
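The range claim can be demonstrated with the standard library: a value near the float32 maximum survives rounding to bfloat16 but overflows float16. This is a sketch using `struct` (the `e` format code is IEEE half precision); `to_bfloat16` is my own helper name:

```python
import struct

def to_bfloat16(x: float) -> float:
    # bfloat16 is the top 16 bits of float32; zero the rest and decode.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return y

big = 3.0e38  # near the float32 maximum of ~3.4e38

# Representable: bfloat16 shares float32's eight-bit exponent.
print(to_bfloat16(big))

try:
    struct.pack("<e", big)  # float16 has only five exponent bits
except OverflowError:
    print("float16 overflows")
```

Only the magnitude changes slightly under bfloat16 rounding; the value stays finite, whereas float16 cannot represent it at all.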
