"We need to talk about arithmetic", Tim Fernandez-Hart, Brunel University London

As computational demand grows and the effect of Moore’s law fades, there is increasing interest in energy efficient computation. While neuromorphic computing itself is a response to this need, other areas have begun to use reduced precision and alternative numerical representations to address the problem.

The benefits of reduced bit-width arithmetic are numerous and affect all aspects of system performance and behaviour. Fewer bits mean lower communication overhead, reduced memory usage, faster computation, and decreased energy consumption. However, such developments have been largely overlooked by the neuromorphic community, where floating-point (FP) is typically chosen when accuracy is paramount and fixed-point (FxP) when energy efficiency is the overriding design constraint. These choices often rest on the assumption that FxP arithmetic is inherently more efficient than FP; this can be misleading, because it ignores the numerical capabilities of each format.
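To illustrate why bit-width alone is a poor basis for comparison, here is a minimal Python sketch (illustrative only, not from the talk; the quantizer and test values are assumptions) contrasting a 16-bit fixed-point format with 16-bit floating point on values spanning several orders of magnitude:

```python
import numpy as np

def to_fixed(x, frac_bits, total_bits=16):
    """Quantize x to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))          # most negative representable integer
    hi = 2 ** (total_bits - 1) - 1         # most positive representable integer
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

# Magnitudes spanning six decades, loosely mimicking the spread of
# training quantities such as gradients and membrane potentials.
xs = np.array([1e-4, 1e-2, 1.0, 100.0])

fx16 = to_fixed(xs, frac_bits=8)                   # 16-bit FxP (Q7.8 layout)
fp16 = xs.astype(np.float16).astype(np.float64)    # 16-bit FP

rel_err_fx = np.abs(fx16 - xs) / xs
rel_err_fp = np.abs(fp16 - xs) / xs
```

At the same 16-bit budget, the FxP format flushes 1e-4 to zero (100% relative error) because its step size is fixed at 2^-8, while FP's relative error stays small across the whole range. Matching FP's dynamic range in FxP requires spending many more bits, which is consistent with the 64-bit FxP versus 32-bit FP parity reported here.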

While much of the literature compares formats purely by bit-width, here we propose a different perspective: establishing functional parity, rather than bit-width equivalence, before evaluating efficiency. Among other results, we show that for e-prop training, functional equivalence occurs at a 16-bit posit, a 32-bit FP number, and a 64-bit FxP number.