* [RFC PATCH 0/8] Implement bfloat16 in softfloat
@ 2020-07-12 23:45 LIU Zhiwei
2020-07-12 23:45 ` [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition LIU Zhiwei
` (7 more replies)
0 siblings, 8 replies; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
As bfloat16 is becoming more and more popular across architectures,
implement bfloat16 interfaces in softfloat, so that each architecture can
add its bfloat16 insns on top of the bfloat16 interfaces here.
This patch set mostly mirrors the existing float16 code rather than
defining genuinely new interfaces or implementations.
Any thoughts are welcome!
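To illustrate the intended use, here is a minimal sketch of how a target
might sit a helper on top of these interfaces (the helper name and vector
layout are hypothetical, not part of this series):

#include "qemu/osdep.h"
#include "fpu/softfloat.h"

/* Hypothetical target helper: element-wise bfloat16 add over a packed
 * vector, with rounding and exception accounting going through the
 * shared float_status. */
void helper_bf16_vadd(bfloat16 *d, const bfloat16 *a, const bfloat16 *b,
                      int n, float_status *fpst)
{
    for (int i = 0; i < n; i++) {
        d[i] = bfloat16_add(a[i], b[i], fpst);
    }
}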
LIU Zhiwei (8):
fpu/softfloat: fix up float16 nan recognition
fpu/softfloat: use the similar logic to recognize sNaN and qNaN
fpu/softfloat: add FloatFmt for bfloat16
fpu/softfloat: add pack and unpack interfaces for bfloat16
fpu/softfloat: define brain floating-point types
fpu/softfloat: define operation for bfloat16
fpu/softfloat: define convert operation for bfloat16
fpu/softfloat: define misc operation for bfloat16
fpu/softfloat-specialize.inc.c | 50 ++++-
fpu/softfloat.c | 393 ++++++++++++++++++++++++++++++++-
include/fpu/softfloat-types.h | 8 +
include/fpu/softfloat.h | 133 +++++++++++
4 files changed, 577 insertions(+), 7 deletions(-)
--
2.23.0
* [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:15 ` Richard Henderson
2020-07-13 19:55 ` Alex Bennée
2020-07-12 23:45 ` [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN LIU Zhiwei
` (6 subsequent siblings)
7 siblings, 2 replies; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat-specialize.inc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fpu/softfloat-specialize.inc.c b/fpu/softfloat-specialize.inc.c
index 44f5b661f8..034d18199c 100644
--- a/fpu/softfloat-specialize.inc.c
+++ b/fpu/softfloat-specialize.inc.c
@@ -254,7 +254,7 @@ bool float16_is_quiet_nan(float16 a_, float_status *status)
if (snan_bit_is_one(status)) {
return (((a >> 9) & 0x3F) == 0x3E) && (a & 0x1FF);
} else {
- return ((a & ~0x8000) >= 0x7C80);
+ return ((a >> 9) & 0x3F) == 0x3F;
}
#endif
}
@@ -271,7 +271,7 @@ bool float16_is_signaling_nan(float16 a_, float_status *status)
#else
uint16_t a = float16_val(a_);
if (snan_bit_is_one(status)) {
- return ((a & ~0x8000) >= 0x7C80);
+ return ((a >> 9) & 0x3F) == 0x3F;
} else {
return (((a >> 9) & 0x3F) == 0x3E) && (a & 0x1FF);
}
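For reference, float16 is 1 sign + 5 exponent + 10 fraction bits: a NaN
has an all-ones exponent and a non-zero fraction, and (in the usual
snan_bit_is_one == false convention) a quiet NaN additionally has the
fraction MSB, bit 9, set, so bits [14:9] are all ones exactly for a qNaN.
The replaced test (a & ~0x8000) >= 0x7C80 also accepted signaling NaNs
such as 0x7C80, which is the bug being fixed here. A standalone
restatement of the new predicate, illustrative only:

#include <stdint.h>
#include <stdbool.h>

/* Exponent bits [14:10] all ones and fraction MSB (bit 9) set,
 * i.e. bits [14:9] == 0x3F. */
static bool f16_is_qnan(uint16_t a)
{
    return ((a >> 9) & 0x3F) == 0x3F;
}

/* Examples: a quiet NaN such as 0x7E00 passes; +infinity 0x7C00 and
 * the sNaN 0x7C80 do not. */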
--
2.23.0
* [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
2020-07-12 23:45 ` [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:17 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16 LIU Zhiwei
` (5 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat-specialize.inc.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fpu/softfloat-specialize.inc.c b/fpu/softfloat-specialize.inc.c
index 034d18199c..6b778a7830 100644
--- a/fpu/softfloat-specialize.inc.c
+++ b/fpu/softfloat-specialize.inc.c
@@ -292,7 +292,7 @@ bool float32_is_quiet_nan(float32 a_, float_status *status)
if (snan_bit_is_one(status)) {
return (((a >> 22) & 0x1FF) == 0x1FE) && (a & 0x003FFFFF);
} else {
- return ((uint32_t)(a << 1) >= 0xFF800000);
+ return ((a >> 22) & 0x1FF) == 0x1FF;
}
#endif
}
@@ -309,7 +309,7 @@ bool float32_is_signaling_nan(float32 a_, float_status *status)
#else
uint32_t a = float32_val(a_);
if (snan_bit_is_one(status)) {
- return ((uint32_t)(a << 1) >= 0xFF800000);
+ return ((a >> 22) & 0x1FF) == 0x1FF;
} else {
return (((a >> 22) & 0x1FF) == 0x1FE) && (a & 0x003FFFFF);
}
@@ -647,7 +647,7 @@ bool float64_is_quiet_nan(float64 a_, float_status *status)
return (((a >> 51) & 0xFFF) == 0xFFE)
&& (a & 0x0007FFFFFFFFFFFFULL);
} else {
- return ((a << 1) >= 0xFFF0000000000000ULL);
+ return ((a >> 51) & 0xFFF) == 0xFFF;
}
#endif
}
@@ -664,7 +664,7 @@ bool float64_is_signaling_nan(float64 a_, float_status *status)
#else
uint64_t a = float64_val(a_);
if (snan_bit_is_one(status)) {
- return ((a << 1) >= 0xFFF0000000000000ULL);
+ return ((a >> 51) & 0xFFF) == 0xFFF;
} else {
return (((a >> 51) & 0xFFF) == 0xFFE)
&& (a & UINT64_C(0x0007FFFFFFFFFFFF));
--
2.23.0
* [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
2020-07-12 23:45 ` [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition LIU Zhiwei
2020-07-12 23:45 ` [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:18 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 4/8] fpu/softfloat: add pack and unpack interfaces " LIU Zhiwei
` (4 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index 79be4f5840..1ef07d9160 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -554,6 +554,10 @@ static const FloatFmt float16_params_ahp = {
.arm_althp = true
};
+static const FloatFmt bfloat16_params = {
+ FLOAT_PARAMS(8, 7)
+};
+
static const FloatFmt float32_params = {
FLOAT_PARAMS(8, 23)
};
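FLOAT_PARAMS(8, 7) encodes that bfloat16 keeps float32's 8-bit exponent
and truncates the fraction to 7 bits; a bfloat16 is simply the top half
of a float32. A hedged illustration of that relationship (the series
itself converts through FloatParts so the rounding mode and status flags
are honored):

#include <stdint.h>
#include <string.h>

/* Round-toward-zero float32 -> bfloat16 is just a shift, because
 * bfloat16 keeps the top 16 bits (1 sign + 8 exponent + 7 fraction).
 * Illustrative only. */
static uint16_t f32_bits_to_bf16_truncate(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));
    return (uint16_t)(bits >> 16);
}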
--
2.23.0
* [RFC PATCH 4/8] fpu/softfloat: add pack and unpack interfaces for bfloat16
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
` (2 preceding siblings ...)
2020-07-12 23:45 ` [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16 LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:22 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types LIU Zhiwei
` (3 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index 1ef07d9160..54fc889446 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -584,6 +584,11 @@ static inline FloatParts float16_unpack_raw(float16 f)
return unpack_raw(float16_params, f);
}
+static inline FloatParts bfloat16_unpack_raw(bfloat16 f)
+{
+ return unpack_raw(bfloat16_params, f);
+}
+
static inline FloatParts float32_unpack_raw(float32 f)
{
return unpack_raw(float32_params, f);
@@ -607,6 +612,11 @@ static inline float16 float16_pack_raw(FloatParts p)
return make_float16(pack_raw(float16_params, p));
}
+static inline bfloat16 bfloat16_pack_raw(FloatParts p)
+{
+ return make_bfloat16(pack_raw(bfloat16_params, p));
+}
+
static inline float32 float32_pack_raw(FloatParts p)
{
return make_float32(pack_raw(float32_params, p));
@@ -824,6 +834,11 @@ static FloatParts float16_unpack_canonical(float16 f, float_status *s)
return float16a_unpack_canonical(f, s, &float16_params);
}
+static FloatParts bfloat16_unpack_canonical(bfloat16 f, float_status *s)
+{
+ return sf_canonicalize(bfloat16_unpack_raw(f), &bfloat16_params, s);
+}
+
static float16 float16a_round_pack_canonical(FloatParts p, float_status *s,
const FloatFmt *params)
{
@@ -835,6 +850,11 @@ static float16 float16_round_pack_canonical(FloatParts p, float_status *s)
return float16a_round_pack_canonical(p, s, &float16_params);
}
+static bfloat16 bfloat16_round_pack_canonical(FloatParts p, float_status *s)
+{
+ return float16a_round_pack_canonical(p, s, &bfloat16_params);
+}
+
static FloatParts float32_unpack_canonical(float32 f, float_status *s)
{
return sf_canonicalize(float32_unpack_raw(f), &float32_params, s);
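For reference, the raw field split that the new unpack helper performs
for bfloat16 looks like the following; the struct and function names are
hypothetical, shown only to make the layout concrete:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool sign;      /* bit 15 */
    int exp;        /* bits [14:7], biased by 127 as in float32 */
    uint64_t frac;  /* bits [6:0] */
} Bf16RawFields;

static Bf16RawFields bf16_split(uint16_t f)
{
    return (Bf16RawFields){
        .sign = f >> 15,
        .exp  = (f >> 7) & 0xFF,
        .frac = f & 0x7F,
    };
}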
--
2.23.0
* [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
` (3 preceding siblings ...)
2020-07-12 23:45 ` [RFC PATCH 4/8] fpu/softfloat: add pack and unpack interfaces " LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:26 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 6/8] fpu/softfloat: define operation for bfloat16 LIU Zhiwei
` (2 subsequent siblings)
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
include/fpu/softfloat-types.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index 7680193ebc..8f8fdfeecf 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -112,6 +112,14 @@ typedef struct {
#define make_float128(high_, low_) ((float128) { .high = high_, .low = low_ })
#define make_float128_init(high_, low_) { .high = high_, .low = low_ }
+/*
+ * Software brain floating-point types
+ */
+typedef uint16_t bfloat16;
+#define bfloat16_val(x) (x)
+#define make_bfloat16(x) (x)
+#define const_bfloat16(x) (x)
+
/*
* Software IEC/IEEE floating-point underflow tininess-detection mode.
*/
--
2.23.0
* [RFC PATCH 6/8] fpu/softfloat: define operation for bfloat16
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
` (4 preceding siblings ...)
2020-07-12 23:45 ` [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:31 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 7/8] fpu/softfloat: define convert " LIU Zhiwei
2020-07-12 23:45 ` [RFC PATCH 8/8] fpu/softfloat: define misc " LIU Zhiwei
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat.c | 146 +++++++++++++++++++++++++++++++++++++++-
include/fpu/softfloat.h | 44 ++++++++++++
2 files changed, 189 insertions(+), 1 deletion(-)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index 54fc889446..9a58107be3 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -1182,6 +1182,28 @@ float64_sub(float64 a, float64 b, float_status *s)
return float64_addsub(a, b, s, hard_f64_sub, soft_f64_sub);
}
+/*
+ * Returns the result of adding or subtracting the brain floating-point
+ * values `a' and `b'.
+ */
+bfloat16 QEMU_FLATTEN bfloat16_add(bfloat16 a, bfloat16 b, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pb = bfloat16_unpack_canonical(b, status);
+ FloatParts pr = addsub_floats(pa, pb, false, status);
+
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
+bfloat16 QEMU_FLATTEN bfloat16_sub(bfloat16 a, bfloat16 b, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pb = bfloat16_unpack_canonical(b, status);
+ FloatParts pr = addsub_floats(pa, pb, true, status);
+
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*
* Returns the result of multiplying the floating-point values `a' and
* `b'. The operation is performed according to the IEC/IEEE Standard
@@ -1284,6 +1306,20 @@ float64_mul(float64 a, float64 b, float_status *s)
f64_is_zon2, f64_addsubmul_post);
}
+/*
+ * Returns the result of multiplying the brain floating-point
+ * values `a' and `b'.
+ */
+
+bfloat16 QEMU_FLATTEN bfloat16_mul(bfloat16 a, bfloat16 b, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pb = bfloat16_unpack_canonical(b, status);
+ FloatParts pr = mul_floats(pa, pb, status);
+
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*
* Returns the result of multiplying the floating-point values `a' and
* `b' then adding 'c', with no intermediate rounding step after the
@@ -1666,6 +1702,23 @@ float64_muladd(float64 xa, float64 xb, float64 xc, int flags, float_status *s)
return soft_f64_muladd(ua.s, ub.s, uc.s, flags, s);
}
+/*
+ * Returns the result of multiplying the brain floating-point values `a'
+ * and `b' then adding 'c', with no intermediate rounding step after the
+ * multiplication.
+ */
+
+bfloat16 QEMU_FLATTEN bfloat16_muladd(bfloat16 a, bfloat16 b, bfloat16 c,
+ int flags, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pb = bfloat16_unpack_canonical(b, status);
+ FloatParts pc = bfloat16_unpack_canonical(c, status);
+ FloatParts pr = muladd_floats(pa, pb, pc, flags, status);
+
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*
* Returns the result of dividing the floating-point value `a' by the
* corresponding value `b'. The operation is performed according to
@@ -1832,6 +1885,20 @@ float64_div(float64 a, float64 b, float_status *s)
f64_div_pre, f64_div_post);
}
+/*
+ * Returns the result of dividing the brain floating-point
+ * value `a' by the corresponding value `b'.
+ */
+
+bfloat16 bfloat16_div(bfloat16 a, bfloat16 b, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pb = bfloat16_unpack_canonical(b, status);
+ FloatParts pr = div_floats(pa, pb, status);
+
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*
* Float to Float conversions
*
@@ -2871,6 +2938,25 @@ MINMAX(64, maxnummag, false, true, true)
#undef MINMAX
+#define BF16_MINMAX(name, ismin, isiee, ismag) \
+bfloat16 bfloat16_ ## name(bfloat16 a, bfloat16 b, float_status *s) \
+{ \
+ FloatParts pa = bfloat16_unpack_canonical(a, s); \
+ FloatParts pb = bfloat16_unpack_canonical(b, s); \
+ FloatParts pr = minmax_floats(pa, pb, ismin, isiee, ismag, s); \
+ \
+ return bfloat16_round_pack_canonical(pr, s); \
+}
+
+BF16_MINMAX(min, true, false, false)
+BF16_MINMAX(minnum, true, true, false)
+BF16_MINMAX(minnummag, true, true, true)
+BF16_MINMAX(max, false, false, false)
+BF16_MINMAX(maxnum, false, true, false)
+BF16_MINMAX(maxnummag, false, true, true)
+
+#undef BF16_MINMAX
+
/* Floating point compare */
static FloatRelation compare_floats(FloatParts a, FloatParts b, bool is_quiet,
float_status *s)
@@ -3032,6 +3118,24 @@ FloatRelation float64_compare_quiet(float64 a, float64 b, float_status *s)
return f64_compare(a, b, true, s);
}
+static int QEMU_FLATTEN
+soft_bf16_compare(bfloat16 a, bfloat16 b, bool is_quiet, float_status *s)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, s);
+ FloatParts pb = bfloat16_unpack_canonical(b, s);
+ return compare_floats(pa, pb, is_quiet, s);
+}
+
+int bfloat16_compare(bfloat16 a, bfloat16 b, float_status *s)
+{
+ return soft_bf16_compare(a, b, false, s);
+}
+
+int bfloat16_compare_quiet(bfloat16 a, bfloat16 b, float_status *s)
+{
+ return soft_bf16_compare(a, b, true, s);
+}
+
/* Multiply A by 2 raised to the power N. */
static FloatParts scalbn_decomposed(FloatParts a, int n, float_status *s)
{
@@ -3039,7 +3143,7 @@ static FloatParts scalbn_decomposed(FloatParts a, int n, float_status *s)
return return_nan(a, s);
}
if (a.cls == float_class_normal) {
- /* The largest float type (even though not supported by FloatParts)
+ /* The largest float type (even though nt supported by FloatParts)
* is float128, which has a 15 bit exponent. Bounding N to 16 bits
* still allows rounding to infinity, without allowing overflow
* within the int32_t that backs FloatParts.exp.
@@ -3071,6 +3175,13 @@ float64 float64_scalbn(float64 a, int n, float_status *status)
return float64_round_pack_canonical(pr, status);
}
+bfloat16 bfloat16_scalbn(bfloat16 a, int n, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pr = scalbn_decomposed(pa, n, status);
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*
* Square Root
*
@@ -3221,6 +3332,13 @@ float64 QEMU_FLATTEN float64_sqrt(float64 xa, float_status *s)
return soft_f64_sqrt(ua.s, s);
}
+bfloat16 QEMU_FLATTEN bfloat16_sqrt(bfloat16 a, float_status *status)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, status);
+ FloatParts pr = sqrt_float(pa, status, &bfloat16_params);
+ return bfloat16_round_pack_canonical(pr, status);
+}
+
/*----------------------------------------------------------------------------
| The pattern for a default generated NaN.
*----------------------------------------------------------------------------*/
@@ -3263,6 +3381,13 @@ float128 float128_default_nan(float_status *status)
return r;
}
+bfloat16 bfloat16_default_nan(float_status *status)
+{
+ FloatParts p = parts_default_nan(status);
+ p.frac >>= bfloat16_params.frac_shift;
+ return bfloat16_pack_raw(p);
+}
+
/*----------------------------------------------------------------------------
| Returns a quiet NaN from a signalling NaN for the floating point value `a'.
*----------------------------------------------------------------------------*/
@@ -3294,6 +3419,14 @@ float64 float64_silence_nan(float64 a, float_status *status)
return float64_pack_raw(p);
}
+bfloat16 bfloat16_silence_nan(bfloat16 a, float_status *status)
+{
+ FloatParts p = bfloat16_unpack_raw(a);
+ p.frac <<= bfloat16_params.frac_shift;
+ p = parts_silence_nan(p, status);
+ p.frac >>= bfloat16_params.frac_shift;
+ return bfloat16_pack_raw(p);
+}
/*----------------------------------------------------------------------------
| If `a' is denormal and we are in flush-to-zero mode then set the
@@ -3343,6 +3476,17 @@ float64 float64_squash_input_denormal(float64 a, float_status *status)
return a;
}
+bfloat16 bfloat16_squash_input_denormal(bfloat16 a, float_status *status)
+{
+ if (status->flush_inputs_to_zero) {
+ FloatParts p = bfloat16_unpack_raw(a);
+ if (parts_squash_denormal(p, status)) {
+ return bfloat16_set_sign(bfloat16_zero, p.sign);
+ }
+ }
+ return a;
+}
+
/*----------------------------------------------------------------------------
| Takes a 64-bit fixed-point value `absZ' with binary point between bits 6
| and 7, and returns the properly rounded 32-bit integer corresponding to the
diff --git a/include/fpu/softfloat.h b/include/fpu/softfloat.h
index ff4e2605b1..07020eafad 100644
--- a/include/fpu/softfloat.h
+++ b/include/fpu/softfloat.h
@@ -239,6 +239,37 @@ bool float16_is_quiet_nan(float16, float_status *status);
bool float16_is_signaling_nan(float16, float_status *status);
float16 float16_silence_nan(float16, float_status *status);
+/*----------------------------------------------------------------------------
+| Software brain floating-point operations.
+*----------------------------------------------------------------------------*/
+
+bfloat16 bfloat16_add(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_sub(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_mul(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_div(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_muladd(bfloat16, bfloat16, bfloat16, int,
+ float_status *status);
+bfloat16 bfloat16_scalbn(bfloat16, int, float_status *status);
+bfloat16 bfloat16_min(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_max(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_minnum(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_maxnum(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_minnummag(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_maxnummag(bfloat16, bfloat16, float_status *status);
+bfloat16 bfloat16_sqrt(bfloat16, float_status *status);
+int bfloat16_compare(bfloat16, bfloat16, float_status *status);
+int bfloat16_compare_quiet(bfloat16, bfloat16, float_status *status);
+int bfloat16_unordered_quiet(bfloat16, bfloat16, float_status *status);
+int bfloat16_le(bfloat16, bfloat16, float_status *status);
+int bfloat16_lt(bfloat16, bfloat16, float_status *status);
+int bfloat16_eq_quiet(bfloat16, bfloat16, float_status *status);
+
+int bfloat16_is_quiet_nan(bfloat16, float_status *status);
+int bfloat16_is_signaling_nan(bfloat16, float_status *status);
+bfloat16 bfloat16_silence_nan(bfloat16, float_status *status);
+bfloat16 bfloat16_default_nan(float_status *status);
+bfloat16 bfloat16_squash_input_denormal(bfloat16 a, float_status *status);
+
static inline bool float16_is_any_nan(float16 a)
{
return ((float16_val(a) & ~0x8000) > 0x7c00);
@@ -293,6 +324,19 @@ static inline float16 float16_set_sign(float16 a, int sign)
#define float16_three make_float16(0x4200)
#define float16_infinity make_float16(0x7c00)
+static inline bfloat16 bfloat16_set_sign(bfloat16 a, int sign)
+{
+ return make_bfloat16((bfloat16_val(a) & 0x7fff) | (sign << 15));
+}
+
+#define bfloat16_zero make_bfloat16(0)
+#define bfloat16_half make_bfloat16(0x3f00)
+#define bfloat16_one make_bfloat16(0x3f80)
+#define bfloat16_one_point_five make_bfloat16(0x3fc0)
+#define bfloat16_two make_bfloat16(0x4000)
+#define bfloat16_three make_bfloat16(0x4040)
+#define bfloat16_infinity make_bfloat16(0x7f80)
+
/*----------------------------------------------------------------------------
| The pattern for a default generated half-precision NaN.
*----------------------------------------------------------------------------*/
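A minimal smoke test of the new arithmetic entry points, assuming this
series is applied (the test function itself is hypothetical); both
results are exactly representable, so no rounding is involved:

#include "qemu/osdep.h"
#include "fpu/softfloat.h"

static void bf16_arith_smoke(void)
{
    float_status s = { 0 };

    /* 1.0 + 2.0 == 3.0: 0x3F80 + 0x4000 -> 0x4040 */
    g_assert(bfloat16_add(bfloat16_one, bfloat16_two, &s) == bfloat16_three);

    /* 0.5 * 2.0 == 1.0: 0x3F00 * 0x4000 -> 0x3F80 */
    g_assert(bfloat16_mul(bfloat16_half, bfloat16_two, &s) == bfloat16_one);
}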
--
2.23.0
* [RFC PATCH 7/8] fpu/softfloat: define convert operation for bfloat16
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
` (5 preceding siblings ...)
2020-07-12 23:45 ` [RFC PATCH 6/8] fpu/softfloat: define operation for bfloat16 LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:34 ` Richard Henderson
2020-07-12 23:45 ` [RFC PATCH 8/8] fpu/softfloat: define misc " LIU Zhiwei
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat.c | 223 ++++++++++++++++++++++++++++++++++++++++
include/fpu/softfloat.h | 48 +++++++++
2 files changed, 271 insertions(+)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index 9a58107be3..b6002d6856 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -2014,6 +2014,34 @@ float32 float64_to_float32(float64 a, float_status *s)
return float32_round_pack_canonical(pr, s);
}
+float32 bfloat16_to_float32(bfloat16 a, float_status *s)
+{
+ FloatParts p = bfloat16_unpack_canonical(a, s);
+ FloatParts pr = float_to_float(p, &float32_params, s);
+ return float32_round_pack_canonical(pr, s);
+}
+
+float64 bfloat16_to_float64(bfloat16 a, float_status *s)
+{
+ FloatParts p = bfloat16_unpack_canonical(a, s);
+ FloatParts pr = float_to_float(p, &float64_params, s);
+ return float64_round_pack_canonical(pr, s);
+}
+
+bfloat16 float32_to_bfloat16(float32 a, float_status *s)
+{
+ FloatParts p = float32_unpack_canonical(a, s);
+ FloatParts pr = float_to_float(p, &bfloat16_params, s);
+ return bfloat16_round_pack_canonical(pr, s);
+}
+
+bfloat16 float64_to_bfloat16(float64 a, float_status *s)
+{
+ FloatParts p = float64_unpack_canonical(a, s);
+ FloatParts pr = float_to_float(p, &bfloat16_params, s);
+ return bfloat16_round_pack_canonical(pr, s);
+}
+
/*
* Rounds the floating-point value `a' to an integer, and returns the
* result as a floating-point value. The operation is performed
@@ -2143,6 +2171,18 @@ float64 float64_round_to_int(float64 a, float_status *s)
return float64_round_pack_canonical(pr, s);
}
+/*
+ * Rounds the brain floating-point value `a' to an integer, and returns the
+ * result as a brain floating-point value.
+ */
+
+bfloat16 bfloat16_round_to_int(bfloat16 a, float_status *s)
+{
+ FloatParts pa = bfloat16_unpack_canonical(a, s);
+ FloatParts pr = round_to_int(pa, s->float_rounding_mode, 0, s);
+ return bfloat16_round_pack_canonical(pr, s);
+}
+
/*
* Returns the result of converting the floating-point value `a' to
* the two's complement integer format. The conversion is performed
@@ -2353,6 +2393,62 @@ int64_t float64_to_int64_round_to_zero(float64 a, float_status *s)
return float64_to_int64_scalbn(a, float_round_to_zero, 0, s);
}
+/*
+ * Returns the result of converting the floating-point value `a' to
+ * the two's complement integer format.
+ */
+
+int16_t bfloat16_to_int16_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_int_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, INT16_MIN, INT16_MAX, s);
+}
+
+int32_t bfloat16_to_int32_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_int_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, INT32_MIN, INT32_MAX, s);
+}
+
+int64_t bfloat16_to_int64_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_int_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, INT64_MIN, INT64_MAX, s);
+}
+
+int16_t bfloat16_to_int16(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int16_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+int32_t bfloat16_to_int32(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int32_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+int64_t bfloat16_to_int64(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int64_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+int16_t bfloat16_to_int16_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int16_scalbn(a, float_round_to_zero, 0, s);
+}
+
+int32_t bfloat16_to_int32_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int32_scalbn(a, float_round_to_zero, 0, s);
+}
+
+int64_t bfloat16_to_int64_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_int64_scalbn(a, float_round_to_zero, 0, s);
+}
+
/*
* Returns the result of converting the floating-point value `a' to
* the unsigned integer format. The conversion is performed according
@@ -2566,6 +2662,62 @@ uint64_t float64_to_uint64_round_to_zero(float64 a, float_status *s)
return float64_to_uint64_scalbn(a, float_round_to_zero, 0, s);
}
+/*
+ * Returns the result of converting the brain floating-point value `a' to
+ * the unsigned integer format.
+ */
+
+uint16_t bfloat16_to_uint16_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_uint_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, UINT16_MAX, s);
+}
+
+uint32_t bfloat16_to_uint32_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_uint_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, UINT32_MAX, s);
+}
+
+uint64_t bfloat16_to_uint64_scalbn(bfloat16 a, int rmode, int scale,
+ float_status *s)
+{
+ return round_to_uint_and_pack(bfloat16_unpack_canonical(a, s),
+ rmode, scale, UINT64_MAX, s);
+}
+
+uint16_t bfloat16_to_uint16(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint16_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+uint32_t bfloat16_to_uint32(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint32_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+uint64_t bfloat16_to_uint64(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint64_scalbn(a, s->float_rounding_mode, 0, s);
+}
+
+uint16_t bfloat16_to_uint16_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint16_scalbn(a, float_round_to_zero, 0, s);
+}
+
+uint32_t bfloat16_to_uint32_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint32_scalbn(a, float_round_to_zero, 0, s);
+}
+
+uint64_t bfloat16_to_uint64_round_to_zero(bfloat16 a, float_status *s)
+{
+ return bfloat16_to_uint64_scalbn(a, float_round_to_zero, 0, s);
+}
+
/*
* Integer to float conversions
*
@@ -2692,6 +2844,41 @@ float64 int16_to_float64(int16_t a, float_status *status)
return int64_to_float64_scalbn(a, 0, status);
}
+/*
+ * Returns the result of converting the two's complement integer `a'
+ * to the brain floating-point format.
+ */
+
+bfloat16 int64_to_bfloat16_scalbn(int64_t a, int scale, float_status *status)
+{
+ FloatParts pa = int_to_float(a, scale, status);
+ return bfloat16_round_pack_canonical(pa, status);
+}
+
+bfloat16 int32_to_bfloat16_scalbn(int32_t a, int scale, float_status *status)
+{
+ return int64_to_bfloat16_scalbn(a, scale, status);
+}
+
+bfloat16 int16_to_bfloat16_scalbn(int16_t a, int scale, float_status *status)
+{
+ return int64_to_bfloat16_scalbn(a, scale, status);
+}
+
+bfloat16 int64_to_bfloat16(int64_t a, float_status *status)
+{
+ return int64_to_bfloat16_scalbn(a, 0, status);
+}
+
+bfloat16 int32_to_bfloat16(int32_t a, float_status *status)
+{
+ return int64_to_bfloat16_scalbn(a, 0, status);
+}
+
+bfloat16 int16_to_bfloat16(int16_t a, float_status *status)
+{
+ return int64_to_bfloat16_scalbn(a, 0, status);
+}
/*
* Unsigned Integer to float conversions
@@ -2817,6 +3004,42 @@ float64 uint16_to_float64(uint16_t a, float_status *status)
return uint64_to_float64_scalbn(a, 0, status);
}
+/*
+ * Returns the result of converting the unsigned integer `a' to the
+ * brain floating-point format.
+ */
+
+bfloat16 uint64_to_bfloat16_scalbn(uint64_t a, int scale, float_status *status)
+{
+ FloatParts pa = uint_to_float(a, scale, status);
+ return bfloat16_round_pack_canonical(pa, status);
+}
+
+bfloat16 uint32_to_bfloat16_scalbn(uint32_t a, int scale, float_status *status)
+{
+ return uint64_to_bfloat16_scalbn(a, scale, status);
+}
+
+bfloat16 uint16_to_bfloat16_scalbn(uint16_t a, int scale, float_status *status)
+{
+ return uint64_to_bfloat16_scalbn(a, scale, status);
+}
+
+bfloat16 uint64_to_bfloat16(uint64_t a, float_status *status)
+{
+ return uint64_to_bfloat16_scalbn(a, 0, status);
+}
+
+bfloat16 uint32_to_bfloat16(uint32_t a, float_status *status)
+{
+ return uint64_to_bfloat16_scalbn(a, 0, status);
+}
+
+bfloat16 uint16_to_bfloat16(uint16_t a, float_status *status)
+{
+ return uint64_to_bfloat16_scalbn(a, 0, status);
+}
+
/* Float Min/Max */
/* min() and max() functions. These can't be implemented as
* 'compare and pick one input' because that would mishandle
diff --git a/include/fpu/softfloat.h b/include/fpu/softfloat.h
index 07020eafad..6590850253 100644
--- a/include/fpu/softfloat.h
+++ b/include/fpu/softfloat.h
@@ -270,6 +270,54 @@ bfloat16 bfloat16_silence_nan(bfloat16, float_status *status);
bfloat16 bfloat16_default_nan(float_status *status);
bfloat16 bfloat16_squash_input_denormal(bfloat16 a, float_status *status);
+/*----------------------------------------------------------------------------
+| Software brain floating-point conversion routines.
+*----------------------------------------------------------------------------*/
+
+bfloat16 bfloat16_round_to_int(bfloat16, float_status *status);
+bfloat16 float32_to_bfloat16(float32, float_status *status);
+float32 bfloat16_to_float32(bfloat16, float_status *status);
+bfloat16 float64_to_bfloat16(float64 a, float_status *status);
+float64 bfloat16_to_float64(bfloat16 a, float_status *status);
+
+int16_t bfloat16_to_int16_scalbn(bfloat16, int, int, float_status *status);
+int32_t bfloat16_to_int32_scalbn(bfloat16, int, int, float_status *status);
+int64_t bfloat16_to_int64_scalbn(bfloat16, int, int, float_status *status);
+
+int16_t bfloat16_to_int16(bfloat16, float_status *status);
+int32_t bfloat16_to_int32(bfloat16, float_status *status);
+int64_t bfloat16_to_int64(bfloat16, float_status *status);
+
+int16_t bfloat16_to_int16_round_to_zero(bfloat16, float_status *status);
+int32_t bfloat16_to_int32_round_to_zero(bfloat16, float_status *status);
+int64_t bfloat16_to_int64_round_to_zero(bfloat16, float_status *status);
+
+uint16_t bfloat16_to_uint16_scalbn(bfloat16 a, int, int, float_status *status);
+uint32_t bfloat16_to_uint32_scalbn(bfloat16 a, int, int, float_status *status);
+uint64_t bfloat16_to_uint64_scalbn(bfloat16 a, int, int, float_status *status);
+
+uint16_t bfloat16_to_uint16(bfloat16 a, float_status *status);
+uint32_t bfloat16_to_uint32(bfloat16 a, float_status *status);
+uint64_t bfloat16_to_uint64(bfloat16 a, float_status *status);
+
+uint16_t bfloat16_to_uint16_round_to_zero(bfloat16 a, float_status *status);
+uint32_t bfloat16_to_uint32_round_to_zero(bfloat16 a, float_status *status);
+uint64_t bfloat16_to_uint64_round_to_zero(bfloat16 a, float_status *status);
+
+bfloat16 int16_to_bfloat16_scalbn(int16_t a, int, float_status *status);
+bfloat16 int32_to_bfloat16_scalbn(int32_t a, int, float_status *status);
+bfloat16 int64_to_bfloat16_scalbn(int64_t a, int, float_status *status);
+bfloat16 uint16_to_bfloat16_scalbn(uint16_t a, int, float_status *status);
+bfloat16 uint32_to_bfloat16_scalbn(uint32_t a, int, float_status *status);
+bfloat16 uint64_to_bfloat16_scalbn(uint64_t a, int, float_status *status);
+
+bfloat16 int16_to_bfloat16(int16_t a, float_status *status);
+bfloat16 int32_to_bfloat16(int32_t a, float_status *status);
+bfloat16 int64_to_bfloat16(int64_t a, float_status *status);
+bfloat16 uint16_to_bfloat16(uint16_t a, float_status *status);
+bfloat16 uint32_to_bfloat16(uint32_t a, float_status *status);
+bfloat16 uint64_to_bfloat16(uint64_t a, float_status *status);
+
static inline bool float16_is_any_nan(float16 a)
{
return ((float16_val(a) & ~0x8000) > 0x7c00);
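A hedged sketch exercising the new conversion routines (the demo
function is hypothetical; the routines are the ones added above):

#include "qemu/osdep.h"
#include "fpu/softfloat.h"

static void bf16_convert_demo(void)
{
    float_status s = { .float_rounding_mode = float_round_nearest_even };

    /* 1.0f (0x3F800000) is exactly representable: top 16 bits, 0x3F80. */
    g_assert(float32_to_bfloat16(make_float32(0x3F800000), &s)
             == bfloat16_one);

    /* Widening bfloat16 to float32 is always exact: 2.0 -> 0x40000000. */
    g_assert(float32_val(bfloat16_to_float32(bfloat16_two, &s))
             == 0x40000000);

    /* Float-to-integer conversion honors the rounding mode: 1.0 -> 1. */
    g_assert(bfloat16_to_int16(bfloat16_one, &s) == 1);
}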
--
2.23.0
* [RFC PATCH 8/8] fpu/softfloat: define misc operation for bfloat16
2020-07-12 23:45 [RFC PATCH 0/8] Implement bfloat16 in softfloat LIU Zhiwei
` (6 preceding siblings ...)
2020-07-12 23:45 ` [RFC PATCH 7/8] fpu/softfloat: define convert " LIU Zhiwei
@ 2020-07-12 23:45 ` LIU Zhiwei
2020-07-13 19:37 ` Richard Henderson
7 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-12 23:45 UTC (permalink / raw)
To: qemu-devel
Cc: alex.bennee, wenmeng_zhang, richard.henderson, LIU Zhiwei,
wxy194768
Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
fpu/softfloat-specialize.inc.c | 38 +++++++++++++++++++++++++++++++
include/fpu/softfloat.h | 41 ++++++++++++++++++++++++++++++++++
2 files changed, 79 insertions(+)
diff --git a/fpu/softfloat-specialize.inc.c b/fpu/softfloat-specialize.inc.c
index 6b778a7830..ff17f11f0c 100644
--- a/fpu/softfloat-specialize.inc.c
+++ b/fpu/softfloat-specialize.inc.c
@@ -259,6 +259,25 @@ bool float16_is_quiet_nan(float16 a_, float_status *status)
#endif
}
+/*----------------------------------------------------------------------------
+| Returns 1 if the brain floating point value `a' is a quiet
+| NaN; otherwise returns 0.
+*----------------------------------------------------------------------------*/
+
+int bfloat16_is_quiet_nan(bfloat16 a_, float_status *status)
+{
+#ifdef NO_SIGNALING_NANS
+ return bfloat16_is_any_nan(a_);
+#else
+ uint16_t a = bfloat16_val(a_);
+ if (snan_bit_is_one(status)) {
+ return (((a >> 6) & 0x1FF) == 0x1FE) && (a & 0x3F);
+ } else {
+ return ((a >> 6) & 0x1FF) == 0x1FF;
+ }
+#endif
+}
+
/*----------------------------------------------------------------------------
| Returns 1 if the half-precision floating-point value `a' is a signaling
| NaN; otherwise returns 0.
@@ -278,6 +297,25 @@ bool float16_is_signaling_nan(float16 a_, float_status *status)
#endif
}
+/*----------------------------------------------------------------------------
+| Returns 1 if the brain floating point value `a' is a signaling
+| NaN; otherwise returns 0.
+*----------------------------------------------------------------------------*/
+
+int bfloat16_is_signaling_nan(bfloat16 a_, float_status *status)
+{
+#ifdef NO_SIGNALING_NANS
+ return 0;
+#else
+ uint16_t a = bfloat16_val(a_);
+ if (snan_bit_is_one(status)) {
+ return ((a >> 6) & 0x1FF) == 0x1FF;
+ } else {
+ return (((a >> 6) & 0x1FF) == 0x1FE) && (a & 0x3F);
+ }
+#endif
+}
+
/*----------------------------------------------------------------------------
| Returns 1 if the single-precision floating-point value `a' is a quiet
| NaN; otherwise returns 0.
diff --git a/include/fpu/softfloat.h b/include/fpu/softfloat.h
index 6590850253..d2c3f5fbe0 100644
--- a/include/fpu/softfloat.h
+++ b/include/fpu/softfloat.h
@@ -372,6 +372,47 @@ static inline float16 float16_set_sign(float16 a, int sign)
#define float16_three make_float16(0x4200)
#define float16_infinity make_float16(0x7c00)
+static inline int bfloat16_is_any_nan(bfloat16 a)
+{
+ return ((bfloat16_val(a) & ~0x8000) > 0x7F80);
+}
+
+static inline int bfloat16_is_neg(bfloat16 a)
+{
+ return bfloat16_val(a) >> 15;
+}
+
+static inline int bfloat16_is_infinity(bfloat16 a)
+{
+ return (bfloat16_val(a) & 0x7fff) == 0x7F80;
+}
+
+static inline int bfloat16_is_zero(bfloat16 a)
+{
+ return (bfloat16_val(a) & 0x7fff) == 0;
+}
+
+static inline int bfloat16_is_zero_or_denormal(bfloat16 a)
+{
+ return (bfloat16_val(a) & 0x7F80) == 0;
+}
+
+static inline bfloat16 bfloat16_abs(bfloat16 a)
+{
+ /* Note that abs does *not* handle NaN specially, nor does
+ * it flush denormal inputs to zero.
+ */
+ return make_bfloat16(bfloat16_val(a) & 0x7fff);
+}
+
+static inline bfloat16 bfloat16_chs(bfloat16 a)
+{
+ /* Note that chs does *not* handle NaN specially, nor does
+ * it flush denormal inputs to zero.
+ */
+ return make_bfloat16(bfloat16_val(a) ^ 0x8000);
+}
+
static inline bfloat16 bfloat16_set_sign(bfloat16 a, int sign)
{
return make_bfloat16((bfloat16_val(a) & 0x7fff) | (sign << 15));
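Worked examples of the encodings these predicates classify, in the usual
snan_bit_is_one == false convention (standalone restatement, illustrative
only):

#include <stdint.h>
#include <stdbool.h>

/* Exponent bits [14:7] and fraction MSB (bit 6) all ones. */
static bool bf16_is_qnan(uint16_t a)
{
    return ((a >> 6) & 0x1FF) == 0x1FF;
}

/* Exponent all ones, fraction MSB clear, fraction non-zero. */
static bool bf16_is_snan(uint16_t a)
{
    return (((a >> 6) & 0x1FF) == 0x1FE) && (a & 0x3F);
}

/* Examples: 0x7F80 (+inf) is neither; 0x7FC0 (a quiet NaN) is quiet;
 * 0x7FA0 is signaling. */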
--
2.23.0
* Re: [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition
2020-07-12 23:45 ` [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition LIU Zhiwei
@ 2020-07-13 19:15 ` Richard Henderson
2020-07-13 19:55 ` Alex Bennée
1 sibling, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:15 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat-specialize.inc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
This one should go into 5.1. Are you collecting, Alex?
r~
* Re: [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN
2020-07-12 23:45 ` [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN LIU Zhiwei
@ 2020-07-13 19:17 ` Richard Henderson
2020-07-13 20:15 ` LIU Zhiwei
0 siblings, 1 reply; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:17 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat-specialize.inc.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/fpu/softfloat-specialize.inc.c b/fpu/softfloat-specialize.inc.c
> index 034d18199c..6b778a7830 100644
> --- a/fpu/softfloat-specialize.inc.c
> +++ b/fpu/softfloat-specialize.inc.c
> @@ -292,7 +292,7 @@ bool float32_is_quiet_nan(float32 a_, float_status *status)
> if (snan_bit_is_one(status)) {
> return (((a >> 22) & 0x1FF) == 0x1FE) && (a & 0x003FFFFF);
> } else {
> - return ((uint32_t)(a << 1) >= 0xFF800000);
> + return ((a >> 22) & 0x1FF) == 0x1FF;
> }
> #endif
> }
I don't see a reason for this. The previous was a bug, but this isn't.
r~
* Re: [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16
2020-07-12 23:45 ` [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16 LIU Zhiwei
@ 2020-07-13 19:18 ` Richard Henderson
2020-07-13 19:20 ` Richard Henderson
0 siblings, 1 reply; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:18 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat.c | 4 ++++
> 1 file changed, 4 insertions(+)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
* Re: [RFC PATCH 3/8] fpu/softfloat: add FloatFmt for bfloat16
2020-07-13 19:18 ` Richard Henderson
@ 2020-07-13 19:20 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:20 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/13/20 12:18 PM, Richard Henderson wrote:
> On 7/12/20 4:45 PM, LIU Zhiwei wrote:
>> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
>> ---
>> fpu/softfloat.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Actually, it occurs to me that clang probably warns about the unused
variable. This will need merging with a patch that uses it.
r~
* Re: [RFC PATCH 4/8] fpu/softfloat: add pack and unpack interfaces for bfloat16
2020-07-12 23:45 ` [RFC PATCH 4/8] fpu/softfloat: add pack and unpack interfaces " LIU Zhiwei
@ 2020-07-13 19:22 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:22 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat.c | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
Similarly, all of the static inlines are unused and clang will warn.
This needs merging with the subsequent patches.
Otherwise the actual code looks good.
r~
* Re: [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types
2020-07-12 23:45 ` [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types LIU Zhiwei
@ 2020-07-13 19:26 ` Richard Henderson
2020-07-13 20:22 ` LIU Zhiwei
0 siblings, 1 reply; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:26 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> include/fpu/softfloat-types.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
> index 7680193ebc..8f8fdfeecf 100644
> --- a/include/fpu/softfloat-types.h
> +++ b/include/fpu/softfloat-types.h
> @@ -112,6 +112,14 @@ typedef struct {
> #define make_float128(high_, low_) ((float128) { .high = high_, .low = low_ })
> #define make_float128_init(high_, low_) { .high = high_, .low = low_ }
>
> +/*
> + * Software brain floating-point types
> + */
> +typedef uint16_t bfloat16;
> +#define bfloat16_val(x) (x)
> +#define make_bfloat16(x) (x)
> +#define const_bfloat16(x) (x)
I do not like the val/make/const macros. I've been meaning to get rid of
them everywhere.
The word "brain" is better translated as "neural-network" in English.
r~
* Re: [RFC PATCH 6/8] fpu/softfloat: define operation for bfloat16
2020-07-12 23:45 ` [RFC PATCH 6/8] fpu/softfloat: define operation for bfloat16 LIU Zhiwei
@ 2020-07-13 19:31 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:31 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> @@ -3039,7 +3143,7 @@ static FloatParts scalbn_decomposed
> return return_nan(a, s);
> }
> if (a.cls == float_class_normal) {
> - /* The largest float type (even though not supported by FloatParts)
> + /* The largest float type (even though nt supported by FloatParts)
Oops.
Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
r~
* Re: [RFC PATCH 7/8] fpu/softfloat: define convert operation for bfloat16
2020-07-12 23:45 ` [RFC PATCH 7/8] fpu/softfloat: define convert " LIU Zhiwei
@ 2020-07-13 19:34 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:34 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat.c | 223 ++++++++++++++++++++++++++++++++++++++++
> include/fpu/softfloat.h | 48 +++++++++
> 2 files changed, 271 insertions(+)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Some "brain" references in here too. In these cases, I think just s/brain
floating-point/bfloat16/.
r~
* Re: [RFC PATCH 8/8] fpu/softfloat: define misc operation for bfloat16
2020-07-12 23:45 ` [RFC PATCH 8/8] fpu/softfloat: define misc " LIU Zhiwei
@ 2020-07-13 19:37 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 19:37 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/12/20 4:45 PM, LIU Zhiwei wrote:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
> ---
> fpu/softfloat-specialize.inc.c | 38 +++++++++++++++++++++++++++++++
> include/fpu/softfloat.h | 41 ++++++++++++++++++++++++++++++++++
> 2 files changed, 79 insertions(+)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
s/brain floating-point/bfloat16/.
r~
* Re: [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition
2020-07-12 23:45 ` [RFC PATCH 1/8] fpu/softfloat: fix up float16 nan recognition LIU Zhiwei
2020-07-13 19:15 ` Richard Henderson
@ 2020-07-13 19:55 ` Alex Bennée
1 sibling, 0 replies; 22+ messages in thread
From: Alex Bennée @ 2020-07-13 19:55 UTC (permalink / raw)
To: LIU Zhiwei; +Cc: wenmeng_zhang, richard.henderson, qemu-devel, wxy194768
LIU Zhiwei <zhiwei_liu@c-sky.com> writes:
> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Queued to misc/for-5.1-rc0, thanks.
--
Alex Bennée
* Re: [RFC PATCH 2/8] fpu/softfloat: use the similar logic to recognize sNaN and qNaN
2020-07-13 19:17 ` Richard Henderson
@ 2020-07-13 20:15 ` LIU Zhiwei
0 siblings, 0 replies; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-13 20:15 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 2020/7/14 3:17, Richard Henderson wrote:
> On 7/12/20 4:45 PM, LIU Zhiwei wrote:
>> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
>> ---
>> fpu/softfloat-specialize.inc.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/fpu/softfloat-specialize.inc.c b/fpu/softfloat-specialize.inc.c
>> index 034d18199c..6b778a7830 100644
>> --- a/fpu/softfloat-specialize.inc.c
>> +++ b/fpu/softfloat-specialize.inc.c
>> @@ -292,7 +292,7 @@ bool float32_is_quiet_nan(float32 a_, float_status *status)
>> if (snan_bit_is_one(status)) {
>> return (((a >> 22) & 0x1FF) == 0x1FE) && (a & 0x003FFFFF);
>> } else {
>> - return ((uint32_t)(a << 1) >= 0xFF800000);
>> + return ((a >> 22) & 0x1FF) == 0x1FF;
>> }
>> #endif
>> }
> I don't see a reason for this. The previous was a bug, but this isn't.
It's not a bug, just a cleanup.
As you can see, we have already recognized a quiet NaN by
if (snan_bit_is_one(status)) {
return (((a >> 22) & 0x1FF) == 0x1FE) && (a & 0x003FFFFF);
}
We don't need to give another method to recognize it again.
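For what it's worth, the two forms can be checked to agree exhaustively;
a throwaway sketch (not part of the series):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Both qNaN predicates (snan_bit_is_one == false case) accept
     * exactly the patterns with bits [30:22] all ones. */
    for (uint64_t i = 0; i <= UINT32_MAX; i++) {
        uint32_t a = (uint32_t)i;
        int q_old = (uint32_t)(a << 1) >= 0xFF800000u;
        int q_new = ((a >> 22) & 0x1FF) == 0x1FF;
        assert(q_old == q_new);
    }
    return 0;
}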
Zhiwei
>
> r~
* Re: [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types
2020-07-13 19:26 ` Richard Henderson
@ 2020-07-13 20:22 ` LIU Zhiwei
2020-07-13 21:48 ` Richard Henderson
0 siblings, 1 reply; 22+ messages in thread
From: LIU Zhiwei @ 2020-07-13 20:22 UTC (permalink / raw)
To: Richard Henderson, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 2020/7/14 3:26, Richard Henderson wrote:
> On 7/12/20 4:45 PM, LIU Zhiwei wrote:
>> Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
>> ---
>> include/fpu/softfloat-types.h | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
>> index 7680193ebc..8f8fdfeecf 100644
>> --- a/include/fpu/softfloat-types.h
>> +++ b/include/fpu/softfloat-types.h
>> @@ -112,6 +112,14 @@ typedef struct {
>> #define make_float128(high_, low_) ((float128) { .high = high_, .low = low_ })
>> #define make_float128_init(high_, low_) { .high = high_, .low = low_ }
>>
>> +/*
>> + * Software brain floating-point types
>> + */
>> +typedef uint16_t bfloat16;
>> +#define bfloat16_val(x) (x)
>> +#define make_bfloat16(x) (x)
>> +#define const_bfloat16(x) (x)
> I do not like the val/make/const macros. I've been meaning to get rid of them everywhere.
Yes, but they have already spread everywhere.
Should we just make bfloat16 different or remove all other references?
> The word "brain" is better translated as "neural-network" in English.
Do you mean the comment here should be
+/*
+ * Software neural-network floating-point types
+ */
Zhiwei
>
> r~
* Re: [RFC PATCH 5/8] fpu/softfloat: define brain floating-point types
2020-07-13 20:22 ` LIU Zhiwei
@ 2020-07-13 21:48 ` Richard Henderson
0 siblings, 0 replies; 22+ messages in thread
From: Richard Henderson @ 2020-07-13 21:48 UTC (permalink / raw)
To: LIU Zhiwei, qemu-devel; +Cc: wenmeng_zhang, alex.bennee, wxy194768
On 7/13/20 1:22 PM, LIU Zhiwei wrote:
> Should we just make bfloat16 different or remove all other references?
If you have time to do a global remove, I would be grateful. Otherwise, let's
just make bfloat16 different.
>> The word "brain" is better translated as "neural-network" in English.
> Do you mean the comment here should be
>
> +/*
> + * Software neural-network floating-point types
> + */
Yes, thanks.
r~