Description
In the current code base, the BFloat16 data type is named shxxxx (e.g. shgemm), and the related build flag is BUILD_HALF. It seems we simply treat BF16 as the half precision of float. This is not true according to the IEEE standard definition: half precision is FP16, which differs from BFloat16 in both format and content. OpenBLAS may want to support both BFloat16 and FP16, as they are valuable for different domains -- BFloat16 mostly for Deep Learning and Machine Learning, while FP16 is more valuable for traditional scientific computation and telecom processing.
I suggest changing the data type naming and build flag to bxxxx (e.g. bgemm) and BUILD_BF16. We could then leave shxxxx for the real half precision data type -- FP16 -- or even rename it to hxxxx (e.g. hgemm). Either way, using shxxxx and BUILD_HALF for BFloat16 is quite confusing to the community; other math libs such as Eigen and oneDNN (previously mkldnn) use keywords like bf or bf16.
I can submit a PR to make this change if we agree on it.