* [Qemu-devel] [PATCH v2 0/2] A64: Neon support, fourth set
@ 2014-02-11 13:51 Peter Maydell
2014-02-11 13:51 ` [Qemu-devel] [PATCH v2 1/2] softfloat: Support halving the result of muladd operation Peter Maydell
2014-02-11 13:51 ` [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions Peter Maydell
0 siblings, 2 replies; 5+ messages in thread
From: Peter Maydell @ 2014-02-11 13:51 UTC (permalink / raw)
To: qemu-devel
Cc: patches, Michael Matz, Alexander Graf, Claudio Fontana,
Dirk Mueller, Will Newton, Laurent Desnogues, Alex Bennée,
kvmarm, Christoffer Dall, Richard Henderson
I've applied patches 1..6 of the previous (eight-patch) version of
this set to target-arm.next, since they passed code review. v2
therefore contains just the old patches 7 and 8 (and 8 has already
been reviewed).
Changes v1->v2:
* handle the halving correctly in the "zero + something" case
by using roundAndPackFloat32() rather than trying to do it
incorrectly by hand
NB: forgot to mention first time round, but the softfloat patch
is licensed under either the softfloat-2a or -2b license, at your
option.
Peter Maydell (2):
softfloat: Support halving the result of muladd operation
target-arm: A64: Implement remaining 3-same instructions
fpu/softfloat.c | 32 +++++++++++++++++++++++++
include/fpu/softfloat.h | 3 +++
target-arm/helper-a64.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
target-arm/helper-a64.h | 4 ++++
target-arm/helper.h | 2 ++
target-arm/neon_helper.c | 16 +++++++++++++
target-arm/translate-a64.c | 52 ++++++++++++++++++++++++++++++++++++----
7 files changed, 165 insertions(+), 4 deletions(-)
--
1.8.5
^ permalink raw reply [flat|nested] 5+ messages in thread
* [Qemu-devel] [PATCH v2 1/2] softfloat: Support halving the result of muladd operation
2014-02-11 13:51 [Qemu-devel] [PATCH v2 0/2] A64: Neon support, fourth set Peter Maydell
@ 2014-02-11 13:51 ` Peter Maydell
2014-02-11 13:51 ` [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions Peter Maydell
1 sibling, 0 replies; 5+ messages in thread
From: Peter Maydell @ 2014-02-11 13:51 UTC (permalink / raw)
To: qemu-devel
Cc: patches, Michael Matz, Alexander Graf, Claudio Fontana,
Dirk Mueller, Will Newton, Laurent Desnogues, Alex Bennée,
kvmarm, Christoffer Dall, Richard Henderson
The ARMv8 instruction set includes a fused floating point
reciprocal square root step instruction which demands an
"(x * y + z) / 2" fused operation. Support this by adding
a flag to the softfloat muladd operations which requests
that the result is halved before rounding.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
fpu/softfloat.c | 32 ++++++++++++++++++++++++++++++++
include/fpu/softfloat.h | 3 +++
2 files changed, 35 insertions(+)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index e0ea599..b815356 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -2372,6 +2372,14 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
}
}
/* Zero plus something non-zero : just return the something */
+ if (flags & float_muladd_halve_result) {
+ if (cExp == 0) {
+ normalizeFloat32Subnormal(cSig, &cExp, &cSig);
+ }
+ cExp--;
+ cSig = (cSig | 0x00800000) << 7;
+ return roundAndPackFloat32(cSign ^ signflip, cExp, cSig STATUS_VAR);
+ }
return packFloat32(cSign ^ signflip, cExp, cSig);
}
@@ -2408,6 +2416,9 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
/* Throw out the special case of c being an exact zero now */
shift64RightJamming(pSig64, 32, &pSig64);
pSig = pSig64;
+ if (flags & float_muladd_halve_result) {
+ pExp--;
+ }
return roundAndPackFloat32(zSign, pExp - 1,
pSig STATUS_VAR);
}
@@ -2472,6 +2483,10 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
zSig64 <<= shiftcount;
zExp -= shiftcount;
}
+ if (flags & float_muladd_halve_result) {
+ zExp--;
+ }
+
shift64RightJamming(zSig64, 32, &zSig64);
return roundAndPackFloat32(zSign, zExp, zSig64 STATUS_VAR);
}
@@ -4088,6 +4103,14 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
}
}
/* Zero plus something non-zero : just return the something */
+ if (flags & float_muladd_halve_result) {
+ if (cExp == 0) {
+ normalizeFloat64Subnormal(cSig, &cExp, &cSig);
+ }
+ cExp--;
+ cSig = (cSig | 0x0010000000000000ULL) << 10;
+ return roundAndPackFloat64(cSign ^ signflip, cExp, cSig STATUS_VAR);
+ }
return packFloat64(cSign ^ signflip, cExp, cSig);
}
@@ -4123,6 +4146,9 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
if (!cSig) {
/* Throw out the special case of c being an exact zero now */
shift128RightJamming(pSig0, pSig1, 64, &pSig0, &pSig1);
+ if (flags & float_muladd_halve_result) {
+ pExp--;
+ }
return roundAndPackFloat64(zSign, pExp - 1,
pSig1 STATUS_VAR);
}
@@ -4159,6 +4185,9 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
zExp--;
}
shift128RightJamming(zSig0, zSig1, 64, &zSig0, &zSig1);
+ if (flags & float_muladd_halve_result) {
+ zExp--;
+ }
return roundAndPackFloat64(zSign, zExp, zSig1 STATUS_VAR);
} else {
/* Subtraction */
@@ -4209,6 +4238,9 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
zExp -= (shiftcount + 64);
}
}
+ if (flags & float_muladd_halve_result) {
+ zExp--;
+ }
return roundAndPackFloat64(zSign, zExp, zSig0 STATUS_VAR);
}
}
diff --git a/include/fpu/softfloat.h b/include/fpu/softfloat.h
index 806ae13..4b4df88 100644
--- a/include/fpu/softfloat.h
+++ b/include/fpu/softfloat.h
@@ -249,11 +249,14 @@ void float_raise( int8 flags STATUS_PARAM);
| Using these differs from negating an input or output before calling
| the muladd function in that this means that a NaN doesn't have its
| sign bit inverted before it is propagated.
+| We also support halving the result before rounding, as a special
+| case to support the ARM fused-sqrt-step instruction FRSQRTS.
*----------------------------------------------------------------------------*/
enum {
float_muladd_negate_c = 1,
float_muladd_negate_product = 2,
float_muladd_negate_result = 4,
+ float_muladd_halve_result = 8,
};
/*----------------------------------------------------------------------------
--
1.8.5
* [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions
2014-02-11 13:51 [Qemu-devel] [PATCH v2 0/2] A64: Neon support, fourth set Peter Maydell
2014-02-11 13:51 ` [Qemu-devel] [PATCH v2 1/2] softfloat: Support halving the result of muladd operation Peter Maydell
@ 2014-02-11 13:51 ` Peter Maydell
2014-02-11 15:40 ` Richard Henderson
1 sibling, 1 reply; 5+ messages in thread
From: Peter Maydell @ 2014-02-11 13:51 UTC (permalink / raw)
To: qemu-devel
Cc: patches, Michael Matz, Alexander Graf, Claudio Fontana,
Dirk Mueller, Will Newton, Laurent Desnogues, Alex Bennée,
kvmarm, Christoffer Dall, Richard Henderson
Implement the remaining instructions in the SIMD 3-reg-same
and scalar-3-reg-same groups: FMULX, FRECPS, FRSQRTS, FACGE,
FACGT, FMLA and FMLS.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
---
fpu/softfloat.c | 4 ++--
target-arm/helper-a64.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
target-arm/helper-a64.h | 4 ++++
target-arm/helper.h | 2 ++
target-arm/neon_helper.c | 16 +++++++++++++
target-arm/translate-a64.c | 52 ++++++++++++++++++++++++++++++++++++----
6 files changed, 132 insertions(+), 6 deletions(-)
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index b815356..1b49b70 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -2376,7 +2376,7 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
if (cExp == 0) {
normalizeFloat32Subnormal(cSig, &cExp, &cSig);
}
- cExp--;
+ cExp -= 2;
cSig = (cSig | 0x00800000) << 7;
return roundAndPackFloat32(cSign ^ signflip, cExp, cSig STATUS_VAR);
}
@@ -4107,7 +4107,7 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
if (cExp == 0) {
normalizeFloat64Subnormal(cSig, &cExp, &cSig);
}
- cExp--;
+ cExp -= 2;
cSig = (cSig | 0x0010000000000000ULL) << 10;
return roundAndPackFloat64(cSign ^ signflip, cExp, cSig STATUS_VAR);
}
diff --git a/target-arm/helper-a64.c b/target-arm/helper-a64.c
index b4cab51..c2ce33e 100644
--- a/target-arm/helper-a64.c
+++ b/target-arm/helper-a64.c
@@ -198,3 +198,63 @@ uint64_t HELPER(neon_cgt_f64)(float64 a, float64 b, void *fpstp)
float_status *fpst = fpstp;
return -float64_lt(b, a, fpst);
}
+
+/* Reciprocal step and sqrt step. Note that unlike the A32/T32
+ * versions, these do a fully fused multiply-add or
+ * multiply-add-and-halve.
+ */
+#define float32_two make_float32(0x40000000)
+#define float32_three make_float32(0x40400000)
+#define float32_one_point_five make_float32(0x3fc00000)
+
+#define float64_two make_float64(0x4000000000000000ULL)
+#define float64_three make_float64(0x4008000000000000ULL)
+#define float64_one_point_five make_float64(0x3FF8000000000000ULL)
+
+float32 HELPER(recpsf_f32)(float32 a, float32 b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+
+ a = float32_chs(a);
+ if ((float32_is_infinity(a) && float32_is_zero(b)) ||
+ (float32_is_infinity(b) && float32_is_zero(a))) {
+ return float32_two;
+ }
+ return float32_muladd(a, b, float32_two, 0, fpst);
+}
+
+float64 HELPER(recpsf_f64)(float64 a, float64 b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+
+ a = float64_chs(a);
+ if ((float64_is_infinity(a) && float64_is_zero(b)) ||
+ (float64_is_infinity(b) && float64_is_zero(a))) {
+ return float64_two;
+ }
+ return float64_muladd(a, b, float64_two, 0, fpst);
+}
+
+float32 HELPER(rsqrtsf_f32)(float32 a, float32 b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+
+ a = float32_chs(a);
+ if ((float32_is_infinity(a) && float32_is_zero(b)) ||
+ (float32_is_infinity(b) && float32_is_zero(a))) {
+ return float32_one_point_five;
+ }
+ return float32_muladd(a, b, float32_three, float_muladd_halve_result, fpst);
+}
+
+float64 HELPER(rsqrtsf_f64)(float64 a, float64 b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+
+ a = float64_chs(a);
+ if ((float64_is_infinity(a) && float64_is_zero(b)) ||
+ (float64_is_infinity(b) && float64_is_zero(a))) {
+ return float64_one_point_five;
+ }
+ return float64_muladd(a, b, float64_three, float_muladd_halve_result, fpst);
+}
diff --git a/target-arm/helper-a64.h b/target-arm/helper-a64.h
index bf20466..ab9933c 100644
--- a/target-arm/helper-a64.h
+++ b/target-arm/helper-a64.h
@@ -32,3 +32,7 @@ DEF_HELPER_FLAGS_3(vfp_mulxd, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
DEF_HELPER_FLAGS_3(neon_ceq_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
DEF_HELPER_FLAGS_3(neon_cge_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
DEF_HELPER_FLAGS_3(neon_cgt_f64, TCG_CALL_NO_RWG, i64, i64, i64, ptr)
+DEF_HELPER_FLAGS_3(recpsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
+DEF_HELPER_FLAGS_3(recpsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
+DEF_HELPER_FLAGS_3(rsqrtsf_f32, TCG_CALL_NO_RWG, f32, f32, f32, ptr)
+DEF_HELPER_FLAGS_3(rsqrtsf_f64, TCG_CALL_NO_RWG, f64, f64, f64, ptr)
diff --git a/target-arm/helper.h b/target-arm/helper.h
index 951e6ad..7c60121 100644
--- a/target-arm/helper.h
+++ b/target-arm/helper.h
@@ -382,6 +382,8 @@ DEF_HELPER_3(neon_cge_f32, i32, i32, i32, ptr)
DEF_HELPER_3(neon_cgt_f32, i32, i32, i32, ptr)
DEF_HELPER_3(neon_acge_f32, i32, i32, i32, ptr)
DEF_HELPER_3(neon_acgt_f32, i32, i32, i32, ptr)
+DEF_HELPER_3(neon_acge_f64, i64, i64, i64, ptr)
+DEF_HELPER_3(neon_acgt_f64, i64, i64, i64, ptr)
/* iwmmxt_helper.c */
DEF_HELPER_2(iwmmxt_maddsq, i64, i64, i64)
diff --git a/target-arm/neon_helper.c b/target-arm/neon_helper.c
index b4c8690..13752ba 100644
--- a/target-arm/neon_helper.c
+++ b/target-arm/neon_helper.c
@@ -1823,6 +1823,22 @@ uint32_t HELPER(neon_acgt_f32)(uint32_t a, uint32_t b, void *fpstp)
return -float32_lt(f1, f0, fpst);
}
+uint64_t HELPER(neon_acge_f64)(uint64_t a, uint64_t b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+ float64 f0 = float64_abs(make_float64(a));
+ float64 f1 = float64_abs(make_float64(b));
+ return -float64_le(f1, f0, fpst);
+}
+
+uint64_t HELPER(neon_acgt_f64)(uint64_t a, uint64_t b, void *fpstp)
+{
+ float_status *fpst = fpstp;
+ float64 f0 = float64_abs(make_float64(a));
+ float64 f1 = float64_abs(make_float64(b));
+ return -float64_lt(f1, f0, fpst);
+}
+
#define ELEM(V, N, SIZE) (((V) >> ((N) * (SIZE))) & ((1ull << (SIZE)) - 1))
void HELPER(neon_qunzip8)(CPUARMState *env, uint32_t rd, uint32_t rm)
diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index e5de1ec..2861b13 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
@@ -6045,18 +6045,33 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element(s, tcg_op2, rm, pass, MO_64);
switch (fpopcode) {
+ case 0x39: /* FMLS */
+ /* As usual for ARM, separate negation for fused multiply-add */
+ gen_helper_vfp_negd(tcg_op1, tcg_op1);
+ /* fall through */
+ case 0x19: /* FMLA */
+ read_vec_element(s, tcg_res, rd, pass, MO_64);
+ gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
+ tcg_res, fpst);
+ break;
case 0x18: /* FMAXNM */
gen_helper_vfp_maxnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x1a: /* FADD */
gen_helper_vfp_addd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x1b: /* FMULX */
+ gen_helper_vfp_mulxd(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x1e: /* FMAX */
gen_helper_vfp_maxd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x1f: /* FRECPS */
+ gen_helper_recpsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x38: /* FMINNM */
gen_helper_vfp_minnumd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6066,12 +6081,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3e: /* FMIN */
gen_helper_vfp_mind(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x3f: /* FRSQRTS */
+ gen_helper_rsqrtsf_f64(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x5b: /* FMUL */
gen_helper_vfp_muld(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x5d: /* FACGE */
+ gen_helper_neon_acge_f64(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x5f: /* FDIV */
gen_helper_vfp_divd(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6082,6 +6103,9 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x7d: /* FACGT */
+ gen_helper_neon_acgt_f64(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
default:
g_assert_not_reached();
}
@@ -6101,15 +6125,30 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
switch (fpopcode) {
+ case 0x39: /* FMLS */
+ /* As usual for ARM, separate negation for fused multiply-add */
+ gen_helper_vfp_negs(tcg_op1, tcg_op1);
+ /* fall through */
+ case 0x19: /* FMLA */
+ read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+ gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
+ tcg_res, fpst);
+ break;
case 0x1a: /* FADD */
gen_helper_vfp_adds(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x1b: /* FMULX */
+ gen_helper_vfp_mulxs(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x1c: /* FCMEQ */
gen_helper_neon_ceq_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x1e: /* FMAX */
gen_helper_vfp_maxs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x1f: /* FRECPS */
+ gen_helper_recpsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x18: /* FMAXNM */
gen_helper_vfp_maxnums(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6122,12 +6161,18 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x3e: /* FMIN */
gen_helper_vfp_mins(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x3f: /* FRSQRTS */
+ gen_helper_rsqrtsf_f32(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x5b: /* FMUL */
gen_helper_vfp_muls(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5c: /* FCMGE */
gen_helper_neon_cge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x5d: /* FACGE */
+ gen_helper_neon_acge_f32(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
case 0x5f: /* FDIV */
gen_helper_vfp_divs(tcg_res, tcg_op1, tcg_op2, fpst);
break;
@@ -6138,6 +6183,9 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
case 0x7c: /* FCMGT */
gen_helper_neon_cgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
break;
+ case 0x7d: /* FACGT */
+ gen_helper_neon_acgt_f32(tcg_res, tcg_op1, tcg_op2, fpst);
+ break;
default:
g_assert_not_reached();
}
@@ -6192,8 +6240,6 @@ static void disas_simd_scalar_three_reg_same(DisasContext *s, uint32_t insn)
case 0x3f: /* FRSQRTS */
case 0x5d: /* FACGE */
case 0x7d: /* FACGT */
- unsupported_encoding(s, insn);
- return;
case 0x1c: /* FCMEQ */
case 0x5c: /* FCMGE */
case 0x7c: /* FCMGT */
@@ -7303,8 +7349,6 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
case 0x7d: /* FACGT */
case 0x19: /* FMLA */
case 0x39: /* FMLS */
- unsupported_encoding(s, insn);
- return;
case 0x18: /* FMAXNM */
case 0x1a: /* FADD */
case 0x1c: /* FCMEQ */
--
1.8.5
* Re: [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions
2014-02-11 13:51 ` [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions Peter Maydell
@ 2014-02-11 15:40 ` Richard Henderson
2014-02-11 15:55 ` Peter Maydell
0 siblings, 1 reply; 5+ messages in thread
From: Richard Henderson @ 2014-02-11 15:40 UTC (permalink / raw)
To: Peter Maydell, qemu-devel
Cc: patches, Michael Matz, Alexander Graf, Claudio Fontana,
Dirk Mueller, Will Newton, Laurent Desnogues, Alex Bennée,
kvmarm, Christoffer Dall
On 02/11/2014 05:51 AM, Peter Maydell wrote:
> @@ -2376,7 +2376,7 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
> if (cExp == 0) {
> normalizeFloat32Subnormal(cSig, &cExp, &cSig);
> }
> - cExp--;
> + cExp -= 2;
> cSig = (cSig | 0x00800000) << 7;
> return roundAndPackFloat32(cSign ^ signflip, cExp, cSig STATUS_VAR);
> }
> @@ -4107,7 +4107,7 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
> if (cExp == 0) {
> normalizeFloat64Subnormal(cSig, &cExp, &cSig);
> }
> - cExp--;
> + cExp -= 2;
> cSig = (cSig | 0x0010000000000000ULL) << 10;
> return roundAndPackFloat64(cSign ^ signflip, cExp, cSig STATUS_VAR);
> }
This should obviously be folded into the previous patch.
Probably with a comment reminding about roundAndPackFloat wanting an off-by-one
exponent.
Otherwise both get
Reviewed-by: Richard Henderson <rth@twiddle.net>
r~
* Re: [Qemu-devel] [PATCH v2 2/2] target-arm: A64: Implement remaining 3-same instructions
2014-02-11 15:40 ` Richard Henderson
@ 2014-02-11 15:55 ` Peter Maydell
0 siblings, 0 replies; 5+ messages in thread
From: Peter Maydell @ 2014-02-11 15:55 UTC (permalink / raw)
To: Richard Henderson
Cc: Patch Tracking, Michael Matz, Alexander Graf, QEMU Developers,
Claudio Fontana, Dirk Mueller, Will Newton, Laurent Desnogues,
Alex Bennée, kvmarm@lists.cs.columbia.edu, Christoffer Dall
On 11 February 2014 15:40, Richard Henderson <rth@twiddle.net> wrote:
> On 02/11/2014 05:51 AM, Peter Maydell wrote:
>> @@ -2376,7 +2376,7 @@ float32 float32_muladd(float32 a, float32 b, float32 c, int flags STATUS_PARAM)
>> if (cExp == 0) {
>> normalizeFloat32Subnormal(cSig, &cExp, &cSig);
>> }
>> - cExp--;
>> + cExp -= 2;
>> cSig = (cSig | 0x00800000) << 7;
>> return roundAndPackFloat32(cSign ^ signflip, cExp, cSig STATUS_VAR);
>> }
>> @@ -4107,7 +4107,7 @@ float64 float64_muladd(float64 a, float64 b, float64 c, int flags STATUS_PARAM)
>> if (cExp == 0) {
>> normalizeFloat64Subnormal(cSig, &cExp, &cSig);
>> }
>> - cExp--;
>> + cExp -= 2;
>> cSig = (cSig | 0x0010000000000000ULL) << 10;
>> return roundAndPackFloat64(cSign ^ signflip, cExp, cSig STATUS_VAR);
>> }
>
> This should obviously be folded into the previous patch.
>
> Probably with a comment reminding about roundAndPackFloat wanting an off-by-one
> exponent.
>
> Otherwise both get
>
> Reviewed-by: Richard Henderson <rth@twiddle.net>
Doh; thanks for the catch. Yeah, I was halfway to putting
in a comment about why we were subtracting two, should have
followed my first instinct.
Will make trivial fixups and put into target-arm.next.
(comment text: /* Subtract one to halve and one again because
roundAndPackFloat wants one less than the true exponent. */)
thanks
-- PMM