linux-bluetooth.vger.kernel.org archive mirror
* [PATCH 0/5] SBC encoder optimizations for ARM processors
@ 2010-07-02 12:25 Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 1/5] sbc: ARM NEON optimized joint stereo processing in SBC encoder Siarhei Siamashka
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

This patch series adds a bunch of ARM assembly optimizations.

Now all the functions from 'sbc_primitives.c' have NEON optimized
variants. As benchmarked with the common A2DP case (44.1 kHz audio
with bitpool set to 53, 8 subbands, joint stereo), SBC encoding is
now approximately 1.6x faster overall compared to bluez-4.66.
Some room for improvement still remains, though.

For ARMv6 processors, only the analysis filter has been implemented
(using dual 16-bit multiply-accumulate instructions). But that is
the most important optimization, and it already doubles performance.
Older processors such as ARM11 are much slower to begin with, so
they benefit even more on a relative scale (Nokia N800/N810 users
may find this update useful).

All the optimizations are bit-exact: given the same input, they
produce the same output as the SBC encoder from previous bluez
versions.

Patches are also available in the branch 'sbc-arm-optimizations' here:
git://gitorious.org/system-performance/bluez-sbc.git

Siarhei Siamashka (5):
  sbc: ARM NEON optimized joint stereo processing in SBC encoder
  sbc: ARM NEON optimizations for input permutation in SBC encoder
  sbc: slightly faster 'sbc_calc_scalefactors_neon'
  sbc: faster 'sbc_calculate_bits' function
  sbc: ARMv6 optimized version of analysis filter for SBC encoder

 Makefile.am                |    3 +-
 sbc/sbc.c                  |   43 ++-
 sbc/sbc_primitives.c       |    4 +
 sbc/sbc_primitives_armv6.c |  299 +++++++++++++++++++++
 sbc/sbc_primitives_armv6.h |   52 ++++
 sbc/sbc_primitives_neon.c  |  618 ++++++++++++++++++++++++++++++++++++++++++-
 6 files changed, 988 insertions(+), 31 deletions(-)
 create mode 100644 sbc/sbc_primitives_armv6.c
 create mode 100644 sbc/sbc_primitives_armv6.h



* [PATCH 1/5] sbc: ARM NEON optimized joint stereo processing in SBC encoder
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
@ 2010-07-02 12:25 ` Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 2/5] sbc: ARM NEON optimizations for input permutation " Siarhei Siamashka
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

Improves SBC encoding performance when joint stereo is used, which
is a typical A2DP configuration.

Benchmarked on ARM Cortex-A8:

== Before: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m5.239s
user    0m4.805s
sys     0m0.430s

samples  %        image name               symbol name
26083    25.0856  sbcenc                   sbc_pack_frame
21548    20.7240  sbcenc                   sbc_calc_scalefactors_j
19910    19.1486  sbcenc                   sbc_analyze_4b_8s_neon
14377    13.8272  sbcenc                   sbc_calculate_bits
9990      9.6080  sbcenc                   sbc_enc_process_input_8s_be
8667      8.3356  no-vmlinux               /no-vmlinux
2263      2.1765  sbcenc                   sbc_encode
696       0.6694  libc-2.10.1.so           memcpy

== After: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m4.389s
user    0m3.969s
sys     0m0.422s

samples  %        image name               symbol name
26234    29.9625  sbcenc                   sbc_pack_frame
20057    22.9076  sbcenc                   sbc_analyze_4b_8s_neon
14306    16.3393  sbcenc                   sbc_calculate_bits
9866     11.2682  sbcenc                   sbc_enc_process_input_8s_be
8506      9.7149  no-vmlinux               /no-vmlinux
5219      5.9608  sbcenc                   sbc_calc_scalefactors_j_neon
2280      2.6040  sbcenc                   sbc_encode
661       0.7549  libc-2.10.1.so           memcpy
---
 sbc/sbc_primitives_neon.c |  243 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 243 insertions(+), 0 deletions(-)

diff --git a/sbc/sbc_primitives_neon.c b/sbc/sbc_primitives_neon.c
index 2a4cdf0..c6a16ac 100644
--- a/sbc/sbc_primitives_neon.c
+++ b/sbc/sbc_primitives_neon.c
@@ -294,11 +294,254 @@ static void sbc_calc_scalefactors_neon(
 	}
 }
 
+int sbc_calc_scalefactors_j_neon(
+	int32_t sb_sample_f[16][2][8],
+	uint32_t scale_factor[2][8],
+	int blocks, int subbands)
+{
+	static SBC_ALIGNED int32_t joint_bits_mask[8] = {
+		8,   4,  2,  1, 128, 64, 32, 16
+	};
+	int joint, i;
+	int32_t  *in0, *in1;
+	int32_t  *in = &sb_sample_f[0][0][0];
+	uint32_t *out0, *out1;
+	uint32_t *out = &scale_factor[0][0];
+	int32_t  *consts = joint_bits_mask;
+
+	i = subbands;
+
+	asm volatile (
+		/*
+		 * constants: q13 = (31 - SCALE_OUT_BITS), q14 = 1
+		 * input:     q0  = ((1 << SCALE_OUT_BITS) + 1)
+		 *            %[in0] - samples for channel 0
+		 *            %[in1] - samples for channel 1
+		 * output:    q0, q1 - scale factors without joint stereo
+		 *            q2, q3 - scale factors with joint stereo
+		 *            q15    - joint stereo selection mask
+		 */
+		".macro calc_scalefactors\n"
+			"vmov.s32  q1, q0\n"
+			"vmov.s32  q2, q0\n"
+			"vmov.s32  q3, q0\n"
+			"mov       %[i], %[blocks]\n"
+		"1:\n"
+			"vld1.32   {d18, d19}, [%[in1], :128], %[inc]\n"
+			"vbic.s32  q11, q9,  q14\n"
+			"vld1.32   {d16, d17}, [%[in0], :128], %[inc]\n"
+			"vhadd.s32 q10, q8,  q11\n"
+			"vhsub.s32 q11, q8,  q11\n"
+			"vabs.s32  q8,  q8\n"
+			"vabs.s32  q9,  q9\n"
+			"vabs.s32  q10, q10\n"
+			"vabs.s32  q11, q11\n"
+			"vmax.s32  q0,  q0,  q8\n"
+			"vmax.s32  q1,  q1,  q9\n"
+			"vmax.s32  q2,  q2,  q10\n"
+			"vmax.s32  q3,  q3,  q11\n"
+			"subs      %[i], %[i], #1\n"
+			"bgt       1b\n"
+			"vsub.s32  q0,  q0,  q14\n"
+			"vsub.s32  q1,  q1,  q14\n"
+			"vsub.s32  q2,  q2,  q14\n"
+			"vsub.s32  q3,  q3,  q14\n"
+			"vclz.s32  q0,  q0\n"
+			"vclz.s32  q1,  q1\n"
+			"vclz.s32  q2,  q2\n"
+			"vclz.s32  q3,  q3\n"
+			"vsub.s32  q0,  q13, q0\n"
+			"vsub.s32  q1,  q13, q1\n"
+			"vsub.s32  q2,  q13, q2\n"
+			"vsub.s32  q3,  q13, q3\n"
+		".endm\n"
+		/*
+		 * constants: q14 = 1
+		 * input: q15    - joint stereo selection mask
+		 *        %[in0] - value set by calc_scalefactors macro
+		 *        %[in1] - value set by calc_scalefactors macro
+		 */
+		".macro update_joint_stereo_samples\n"
+			"sub       %[out1], %[in1], %[inc]\n"
+			"sub       %[out0], %[in0], %[inc]\n"
+			"sub       %[in1], %[in1], %[inc], asl #1\n"
+			"sub       %[in0], %[in0], %[inc], asl #1\n"
+			"vld1.32   {d18, d19}, [%[in1], :128]\n"
+			"vbic.s32  q11, q9,  q14\n"
+			"vld1.32   {d16, d17}, [%[in0], :128]\n"
+			"vld1.32   {d2, d3}, [%[out1], :128]\n"
+			"vbic.s32  q3,  q1,  q14\n"
+			"vld1.32   {d0, d1}, [%[out0], :128]\n"
+			"vhsub.s32 q10, q8,  q11\n"
+			"vhadd.s32 q11, q8,  q11\n"
+			"vhsub.s32 q2,  q0,  q3\n"
+			"vhadd.s32 q3,  q0,  q3\n"
+			"vbif.s32  q10, q9,  q15\n"
+			"vbif.s32  d22, d16, d30\n"
+			"sub       %[inc], %[zero], %[inc], asl #1\n"
+			"sub       %[i], %[blocks], #2\n"
+		"2:\n"
+			"vbif.s32  d23, d17, d31\n"
+			"vst1.32   {d20, d21}, [%[in1], :128], %[inc]\n"
+			"vbif.s32  d4,  d2,  d30\n"
+			"vld1.32   {d18, d19}, [%[in1], :128]\n"
+			"vbif.s32  d5,  d3,  d31\n"
+			"vst1.32   {d22, d23}, [%[in0], :128], %[inc]\n"
+			"vbif.s32  d6,  d0,  d30\n"
+			"vld1.32   {d16, d17}, [%[in0], :128]\n"
+			"vbif.s32  d7,  d1,  d31\n"
+			"vst1.32   {d4, d5}, [%[out1], :128], %[inc]\n"
+			"vbic.s32  q11, q9,  q14\n"
+			"vld1.32   {d2, d3}, [%[out1], :128]\n"
+			"vst1.32   {d6, d7}, [%[out0], :128], %[inc]\n"
+			"vbic.s32  q3,  q1,  q14\n"
+			"vld1.32   {d0, d1}, [%[out0], :128]\n"
+			"vhsub.s32 q10, q8,  q11\n"
+			"vhadd.s32 q11, q8,  q11\n"
+			"vhsub.s32 q2,  q0,  q3\n"
+			"vhadd.s32 q3,  q0,  q3\n"
+			"vbif.s32  q10, q9,  q15\n"
+			"vbif.s32  d22, d16, d30\n"
+			"subs      %[i], %[i], #2\n"
+			"bgt       2b\n"
+			"sub       %[inc], %[zero], %[inc], asr #1\n"
+			"vbif.s32  d23, d17, d31\n"
+			"vst1.32   {d20, d21}, [%[in1], :128]\n"
+			"vbif.s32  q2,  q1,  q15\n"
+			"vst1.32   {d22, d23}, [%[in0], :128]\n"
+			"vbif.s32  q3,  q0,  q15\n"
+			"vst1.32   {d4, d5}, [%[out1], :128]\n"
+			"vst1.32   {d6, d7}, [%[out0], :128]\n"
+		".endm\n"
+
+		"vmov.s32  q14, #1\n"
+		"vmov.s32  q13, %[c2]\n"
+
+		"cmp   %[i], #4\n"
+		"bne   8f\n"
+
+	"4:\n" /* 4 subbands */
+		"add   %[in0], %[in], #0\n"
+		"add   %[in1], %[in], #32\n"
+		"add   %[out0], %[out], #0\n"
+		"add   %[out1], %[out], #32\n"
+		"vmov.s32  q0, %[c1]\n"
+		"vadd.s32  q0, q0, q14\n"
+
+		"calc_scalefactors\n"
+
+		/* check whether to use joint stereo for subbands 0, 1, 2 */
+		"vadd.s32  q15, q0,  q1\n"
+		"vadd.s32  q9,  q2,  q3\n"
+		"vmov.s32  d31[1], %[zero]\n" /* last subband -> no joint */
+		"vld1.32   {d16, d17}, [%[consts], :128]!\n"
+		"vcgt.s32  q15, q15, q9\n"
+
+		/* calculate and save to memory 'joint' variable */
+		/* update and save scale factors to memory */
+		"  vand.s32  q8, q8, q15\n"
+		"vbit.s32  q0,  q2,  q15\n"
+		"  vpadd.s32 d16, d16, d17\n"
+		"vbit.s32  q1,  q3,  q15\n"
+		"  vpadd.s32 d16, d16, d16\n"
+		"vst1.32   {d0, d1}, [%[out0], :128]\n"
+		"vst1.32   {d2, d3}, [%[out1], :128]\n"
+		"  vst1.32   {d16[0]}, [%[joint]]\n"
+
+		"update_joint_stereo_samples\n"
+		"b     9f\n"
+
+	"8:\n" /* 8 subbands */
+		"add   %[in0], %[in], #16\n\n"
+		"add   %[in1], %[in], #48\n"
+		"add   %[out0], %[out], #16\n\n"
+		"add   %[out1], %[out], #48\n"
+		"vmov.s32  q0, %[c1]\n"
+		"vadd.s32  q0, q0, q14\n"
+
+		"calc_scalefactors\n"
+
+		/* check whether to use joint stereo for subbands 4, 5, 6 */
+		"vadd.s32  q15, q0,  q1\n"
+		"vadd.s32  q9,  q2,  q3\n"
+		"vmov.s32  d31[1], %[zero]\n"  /* last subband -> no joint */
+		"vld1.32   {d16, d17}, [%[consts], :128]!\n"
+		"vcgt.s32  q15, q15, q9\n"
+
+		/* calculate part of 'joint' variable and save it to d24 */
+		/* update and save scale factors to memory */
+		"  vand.s32  q8, q8, q15\n"
+		"vbit.s32  q0,  q2,  q15\n"
+		"  vpadd.s32 d16, d16, d17\n"
+		"vbit.s32  q1,  q3,  q15\n"
+		"vst1.32   {d0, d1}, [%[out0], :128]\n"
+		"vst1.32   {d2, d3}, [%[out1], :128]\n"
+		"  vpadd.s32 d24, d16, d16\n"
+
+		"update_joint_stereo_samples\n"
+
+		"add   %[in0], %[in], #0\n"
+		"add   %[in1], %[in], #32\n"
+		"add   %[out0], %[out], #0\n\n"
+		"add   %[out1], %[out], #32\n"
+		"vmov.s32  q0, %[c1]\n"
+		"vadd.s32  q0, q0, q14\n"
+
+		"calc_scalefactors\n"
+
+		/* check whether to use joint stereo for subbands 0, 1, 2, 3 */
+		"vadd.s32  q15, q0,  q1\n"
+		"vadd.s32  q9,  q2,  q3\n"
+		"vld1.32   {d16, d17}, [%[consts], :128]!\n"
+		"vcgt.s32  q15, q15, q9\n"
+
+		/* combine last part of 'joint' with d24 and save to memory */
+		/* update and save scale factors to memory */
+		"  vand.s32  q8, q8, q15\n"
+		"vbit.s32  q0,  q2,  q15\n"
+		"  vpadd.s32 d16, d16, d17\n"
+		"vbit.s32  q1,  q3,  q15\n"
+		"  vpadd.s32 d16, d16, d16\n"
+		"vst1.32   {d0, d1}, [%[out0], :128]\n"
+		"  vadd.s32  d16, d16, d24\n"
+		"vst1.32   {d2, d3}, [%[out1], :128]\n"
+		"  vst1.32   {d16[0]}, [%[joint]]\n"
+
+		"update_joint_stereo_samples\n"
+	"9:\n"
+		".purgem calc_scalefactors\n"
+		".purgem update_joint_stereo_samples\n"
+		:
+		  [i]      "+&r" (i),
+		  [in]     "+&r" (in),
+		  [in0]    "=&r" (in0),
+		  [in1]    "=&r" (in1),
+		  [out]    "+&r" (out),
+		  [out0]   "=&r" (out0),
+		  [out1]   "=&r" (out1),
+		  [consts] "+&r" (consts)
+		:
+		  [inc]      "r" ((char *) &sb_sample_f[1][0][0] -
+				 (char *) &sb_sample_f[0][0][0]),
+		  [blocks]   "r" (blocks),
+		  [joint]    "r" (&joint),
+		  [c1]       "i" (1 << SCALE_OUT_BITS),
+		  [c2]       "i" (31 - SCALE_OUT_BITS),
+		  [zero]     "r" (0)
+		: "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7",
+		  "d16", "d17", "d18", "d19", "d20", "d21", "d22",
+		  "d23", "d24", "d25", "d26", "d27", "d28", "d29",
+		  "d30", "d31", "cc", "memory");
+
+	return joint;
+}
+
 void sbc_init_primitives_neon(struct sbc_encoder_state *state)
 {
 	state->sbc_analyze_4b_4s = sbc_analyze_4b_4s_neon;
 	state->sbc_analyze_4b_8s = sbc_analyze_4b_8s_neon;
 	state->sbc_calc_scalefactors = sbc_calc_scalefactors_neon;
+	state->sbc_calc_scalefactors_j = sbc_calc_scalefactors_j_neon;
 	state->implementation_info = "NEON";
 }
 
-- 
1.6.4.4



* [PATCH 2/5] sbc: ARM NEON optimizations for input permutation in SBC encoder
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 1/5] sbc: ARM NEON optimized joint stereo processing in SBC encoder Siarhei Siamashka
@ 2010-07-02 12:25 ` Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 3/5] sbc: slightly faster 'sbc_calc_scalefactors_neon' Siarhei Siamashka
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

Using SIMD optimizations for the 'sbc_enc_process_input_*' functions
provides a modest but consistent speedup in all SBC encoding cases.

Benchmarked on ARM Cortex-A8:

== Before: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m4.389s
user    0m3.969s
sys     0m0.422s

samples  %        image name               symbol name
26234    29.9625  sbcenc                   sbc_pack_frame
20057    22.9076  sbcenc                   sbc_analyze_4b_8s_neon
14306    16.3393  sbcenc                   sbc_calculate_bits
9866     11.2682  sbcenc                   sbc_enc_process_input_8s_be
8506      9.7149  no-vmlinux               /no-vmlinux
5219      5.9608  sbcenc                   sbc_calc_scalefactors_j_neon
2280      2.6040  sbcenc                   sbc_encode
661       0.7549  libc-2.10.1.so           memcpy

== After: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m3.989s
user    0m3.602s
sys     0m0.391s

samples  %        image name               symbol name
26057    32.6128  sbcenc                   sbc_pack_frame
20003    25.0357  sbcenc                   sbc_analyze_4b_8s_neon
14220    17.7977  sbcenc                   sbc_calculate_bits
8498     10.6361  no-vmlinux               /no-vmlinux
5300      6.6335  sbcenc                   sbc_calc_scalefactors_j_neon
3235      4.0489  sbcenc                   sbc_enc_process_input_8s_be_neon
2172      2.7185  sbcenc                   sbc_encode
---
 sbc/sbc_primitives_neon.c |  350 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 350 insertions(+), 0 deletions(-)

diff --git a/sbc/sbc_primitives_neon.c b/sbc/sbc_primitives_neon.c
index c6a16ac..7713759 100644
--- a/sbc/sbc_primitives_neon.c
+++ b/sbc/sbc_primitives_neon.c
@@ -536,12 +536,362 @@ int sbc_calc_scalefactors_j_neon(
 	return joint;
 }
 
+#define PERM_BE(a, b, c, d) {             \
+		(a * 2) + 1, (a * 2) + 0, \
+		(b * 2) + 1, (b * 2) + 0, \
+		(c * 2) + 1, (c * 2) + 0, \
+		(d * 2) + 1, (d * 2) + 0  \
+	}
+#define PERM_LE(a, b, c, d) {             \
+		(a * 2) + 0, (a * 2) + 1, \
+		(b * 2) + 0, (b * 2) + 1, \
+		(c * 2) + 0, (c * 2) + 1, \
+		(d * 2) + 0, (d * 2) + 1  \
+	}
+
+static SBC_ALWAYS_INLINE int sbc_enc_process_input_4s_neon_internal(
+	int position,
+	const uint8_t *pcm, int16_t X[2][SBC_X_BUFFER_SIZE],
+	int nsamples, int nchannels, int big_endian)
+{
+	static SBC_ALIGNED uint8_t perm_be[2][8] = {
+		PERM_BE(7, 3, 6, 4),
+		PERM_BE(0, 2, 1, 5)
+	};
+	static SBC_ALIGNED uint8_t perm_le[2][8] = {
+		PERM_LE(7, 3, 6, 4),
+		PERM_LE(0, 2, 1, 5)
+	};
+	/* handle X buffer wraparound */
+	if (position < nsamples) {
+		int16_t *dst = &X[0][SBC_X_BUFFER_SIZE - 40];
+		int16_t *src = &X[0][position];
+		asm volatile (
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0}, [%[src], :64]!\n"
+			"vst1.16 {d0}, [%[dst], :64]!\n"
+			:
+			  [dst] "+r" (dst),
+			  [src] "+r" (src)
+			: : "memory", "d0", "d1", "d2", "d3");
+		if (nchannels > 1) {
+			dst = &X[1][SBC_X_BUFFER_SIZE - 40];
+			src = &X[1][position];
+			asm volatile (
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0}, [%[src], :64]!\n"
+				"vst1.16 {d0}, [%[dst], :64]!\n"
+				:
+				  [dst] "+r" (dst),
+				  [src] "+r" (src)
+				: : "memory", "d0", "d1", "d2", "d3");
+		}
+		position = SBC_X_BUFFER_SIZE - 40;
+	}
+
+	if ((nchannels > 1) && ((uintptr_t)pcm & 1)) {
+		/* poor 'pcm' alignment */
+		int16_t *x = &X[0][position];
+		int16_t *y = &X[1][position];
+		asm volatile (
+			"vld1.8  {d0, d1}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #16\n"
+			"sub     %[y], %[y], #16\n"
+			"sub     %[position], %[position], #8\n"
+			"vld1.8  {d4, d5}, [%[pcm]]!\n"
+			"vuzp.16 d4,  d5\n"
+			"vld1.8  {d20, d21}, [%[pcm]]!\n"
+			"vuzp.16 d20, d21\n"
+			"vswp    d5,  d20\n"
+			"vtbl.8  d16, {d4, d5}, d0\n"
+			"vtbl.8  d17, {d4, d5}, d1\n"
+			"vtbl.8  d18, {d20, d21}, d0\n"
+			"vtbl.8  d19, {d20, d21}, d1\n"
+			"vst1.16 {d16, d17}, [%[x], :128]\n"
+			"vst1.16 {d18, d19}, [%[y], :128]\n"
+			"subs    %[nsamples], %[nsamples], #8\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [y]        "+r" (y),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19",
+			  "d20", "d21", "d22", "d23");
+	} else if (nchannels > 1) {
+		/* proper 'pcm' alignment */
+		int16_t *x = &X[0][position];
+		int16_t *y = &X[1][position];
+		asm volatile (
+			"vld1.8  {d0, d1}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #16\n"
+			"sub     %[y], %[y], #16\n"
+			"sub     %[position], %[position], #8\n"
+			"vld2.16 {d4, d5}, [%[pcm]]!\n"
+			"vld2.16 {d20, d21}, [%[pcm]]!\n"
+			"vswp    d5, d20\n"
+			"vtbl.8  d16, {d4, d5}, d0\n"
+			"vtbl.8  d17, {d4, d5}, d1\n"
+			"vtbl.8  d18, {d20, d21}, d0\n"
+			"vtbl.8  d19, {d20, d21}, d1\n"
+			"vst1.16 {d16, d17}, [%[x], :128]\n"
+			"vst1.16 {d18, d19}, [%[y], :128]\n"
+			"subs    %[nsamples], %[nsamples], #8\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [y]        "+r" (y),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19",
+			  "d20", "d21", "d22", "d23");
+	} else {
+		int16_t *x = &X[0][position];
+		asm volatile (
+			"vld1.8  {d0, d1}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #16\n"
+			"sub     %[position], %[position], #8\n"
+			"vld1.8  {d4, d5}, [%[pcm]]!\n"
+			"vtbl.8  d16, {d4, d5}, d0\n"
+			"vtbl.8  d17, {d4, d5}, d1\n"
+			"vst1.16 {d16, d17}, [%[x], :128]\n"
+			"subs    %[nsamples], %[nsamples], #8\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19");
+	}
+	return position;
+}
+
+static SBC_ALWAYS_INLINE int sbc_enc_process_input_8s_neon_internal(
+	int position,
+	const uint8_t *pcm, int16_t X[2][SBC_X_BUFFER_SIZE],
+	int nsamples, int nchannels, int big_endian)
+{
+	static SBC_ALIGNED uint8_t perm_be[4][8] = {
+		PERM_BE(15, 7, 14, 8),
+		PERM_BE(13, 9, 12, 10),
+		PERM_BE(11, 3, 6,  0),
+		PERM_BE(5,  1, 4,  2)
+	};
+	static SBC_ALIGNED uint8_t perm_le[4][8] = {
+		PERM_LE(15, 7, 14, 8),
+		PERM_LE(13, 9, 12, 10),
+		PERM_LE(11, 3, 6,  0),
+		PERM_LE(5,  1, 4,  2)
+	};
+	/* handle X buffer wraparound */
+	if (position < nsamples) {
+		int16_t *dst = &X[0][SBC_X_BUFFER_SIZE - 72];
+		int16_t *src = &X[0][position];
+		asm volatile (
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+			"vld1.16 {d0, d1}, [%[src], :128]!\n"
+			"vst1.16 {d0, d1}, [%[dst], :128]!\n"
+			:
+			  [dst] "+r" (dst),
+			  [src] "+r" (src)
+			: : "memory", "d0", "d1", "d2", "d3");
+		if (nchannels > 1) {
+			dst = &X[1][SBC_X_BUFFER_SIZE - 72];
+			src = &X[1][position];
+			asm volatile (
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0, d1, d2, d3}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1, d2, d3}, [%[dst], :128]!\n"
+				"vld1.16 {d0, d1}, [%[src], :128]!\n"
+				"vst1.16 {d0, d1}, [%[dst], :128]!\n"
+				:
+				  [dst] "+r" (dst),
+				  [src] "+r" (src)
+				: : "memory", "d0", "d1", "d2", "d3");
+		}
+		position = SBC_X_BUFFER_SIZE - 72;
+	}
+
+	if ((nchannels > 1) && ((uintptr_t)pcm & 1)) {
+		/* poor 'pcm' alignment */
+		int16_t *x = &X[0][position];
+		int16_t *y = &X[1][position];
+		asm volatile (
+			"vld1.8  {d0, d1, d2, d3}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #32\n"
+			"sub     %[y], %[y], #32\n"
+			"sub     %[position], %[position], #16\n"
+			"vld1.8  {d4, d5, d6, d7}, [%[pcm]]!\n"
+			"vuzp.16 q2,  q3\n"
+			"vld1.8  {d20, d21, d22, d23}, [%[pcm]]!\n"
+			"vuzp.16 q10, q11\n"
+			"vswp    q3,  q10\n"
+			"vtbl.8  d16, {d4, d5, d6, d7}, d0\n"
+			"vtbl.8  d17, {d4, d5, d6, d7}, d1\n"
+			"vtbl.8  d18, {d4, d5, d6, d7}, d2\n"
+			"vtbl.8  d19, {d4, d5, d6, d7}, d3\n"
+			"vst1.16 {d16, d17, d18, d19}, [%[x], :128]\n"
+			"vtbl.8  d16, {d20, d21, d22, d23}, d0\n"
+			"vtbl.8  d17, {d20, d21, d22, d23}, d1\n"
+			"vtbl.8  d18, {d20, d21, d22, d23}, d2\n"
+			"vtbl.8  d19, {d20, d21, d22, d23}, d3\n"
+			"vst1.16 {d16, d17, d18, d19}, [%[y], :128]\n"
+			"subs    %[nsamples], %[nsamples], #16\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [y]        "+r" (y),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19",
+			  "d20", "d21", "d22", "d23");
+	} else if (nchannels > 1) {
+		/* proper 'pcm' alignment */
+		int16_t *x = &X[0][position];
+		int16_t *y = &X[1][position];
+		asm volatile (
+			"vld1.8  {d0, d1, d2, d3}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #32\n"
+			"sub     %[y], %[y], #32\n"
+			"sub     %[position], %[position], #16\n"
+			"vld2.16  {d4, d5, d6, d7}, [%[pcm]]!\n"
+			"vld2.16  {d20, d21, d22, d23}, [%[pcm]]!\n"
+			"vswp    q3, q10\n"
+			"vtbl.8  d16, {d4, d5, d6, d7}, d0\n"
+			"vtbl.8  d17, {d4, d5, d6, d7}, d1\n"
+			"vtbl.8  d18, {d4, d5, d6, d7}, d2\n"
+			"vtbl.8  d19, {d4, d5, d6, d7}, d3\n"
+			"vst1.16 {d16, d17, d18, d19}, [%[x], :128]\n"
+			"vtbl.8  d16, {d20, d21, d22, d23}, d0\n"
+			"vtbl.8  d17, {d20, d21, d22, d23}, d1\n"
+			"vtbl.8  d18, {d20, d21, d22, d23}, d2\n"
+			"vtbl.8  d19, {d20, d21, d22, d23}, d3\n"
+			"vst1.16 {d16, d17, d18, d19}, [%[y], :128]\n"
+			"subs    %[nsamples], %[nsamples], #16\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [y]        "+r" (y),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19",
+			  "d20", "d21", "d22", "d23");
+	} else {
+		int16_t *x = &X[0][position];
+		asm volatile (
+			"vld1.8  {d0, d1, d2, d3}, [%[perm], :128]\n"
+		"1:\n"
+			"sub     %[x], %[x], #32\n"
+			"sub     %[position], %[position], #16\n"
+			"vld1.8  {d4, d5, d6, d7}, [%[pcm]]!\n"
+			"vtbl.8  d16, {d4, d5, d6, d7}, d0\n"
+			"vtbl.8  d17, {d4, d5, d6, d7}, d1\n"
+			"vtbl.8  d18, {d4, d5, d6, d7}, d2\n"
+			"vtbl.8  d19, {d4, d5, d6, d7}, d3\n"
+			"vst1.16 {d16, d17, d18, d19}, [%[x], :128]\n"
+			"subs    %[nsamples], %[nsamples], #16\n"
+			"bgt     1b\n"
+			:
+			  [x]        "+r" (x),
+			  [pcm]      "+r" (pcm),
+			  [nsamples] "+r" (nsamples),
+			  [position] "+r" (position)
+			:
+			  [perm]      "r" (big_endian ? perm_be : perm_le)
+			: "cc", "memory", "d0", "d1", "d2", "d3", "d4",
+			  "d5", "d6", "d7", "d16", "d17", "d18", "d19");
+	}
+	return position;
+}
+
+#undef PERM_BE
+#undef PERM_LE
+
+static int sbc_enc_process_input_4s_be_neon(int position, const uint8_t *pcm,
+					int16_t X[2][SBC_X_BUFFER_SIZE],
+					int nsamples, int nchannels)
+{
+	return sbc_enc_process_input_4s_neon_internal(
+		position, pcm, X, nsamples, nchannels, 1);
+}
+
+static int sbc_enc_process_input_4s_le_neon(int position, const uint8_t *pcm,
+					int16_t X[2][SBC_X_BUFFER_SIZE],
+					int nsamples, int nchannels)
+{
+	return sbc_enc_process_input_4s_neon_internal(
+		position, pcm, X, nsamples, nchannels, 0);
+}
+
+static int sbc_enc_process_input_8s_be_neon(int position, const uint8_t *pcm,
+					int16_t X[2][SBC_X_BUFFER_SIZE],
+					int nsamples, int nchannels)
+{
+	return sbc_enc_process_input_8s_neon_internal(
+		position, pcm, X, nsamples, nchannels, 1);
+}
+
+static int sbc_enc_process_input_8s_le_neon(int position, const uint8_t *pcm,
+					int16_t X[2][SBC_X_BUFFER_SIZE],
+					int nsamples, int nchannels)
+{
+	return sbc_enc_process_input_8s_neon_internal(
+		position, pcm, X, nsamples, nchannels, 0);
+}
+
 void sbc_init_primitives_neon(struct sbc_encoder_state *state)
 {
 	state->sbc_analyze_4b_4s = sbc_analyze_4b_4s_neon;
 	state->sbc_analyze_4b_8s = sbc_analyze_4b_8s_neon;
 	state->sbc_calc_scalefactors = sbc_calc_scalefactors_neon;
 	state->sbc_calc_scalefactors_j = sbc_calc_scalefactors_j_neon;
+	state->sbc_enc_process_input_4s_le = sbc_enc_process_input_4s_le_neon;
+	state->sbc_enc_process_input_4s_be = sbc_enc_process_input_4s_be_neon;
+	state->sbc_enc_process_input_8s_le = sbc_enc_process_input_8s_le_neon;
+	state->sbc_enc_process_input_8s_be = sbc_enc_process_input_8s_be_neon;
 	state->implementation_info = "NEON";
 }
 
-- 
1.6.4.4



* [PATCH 3/5] sbc: slightly faster 'sbc_calc_scalefactors_neon'
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 1/5] sbc: ARM NEON optimized joint stereo processing in SBC encoder Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 2/5] sbc: ARM NEON optimizations for input permutation " Siarhei Siamashka
@ 2010-07-02 12:25 ` Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 4/5] sbc: faster 'sbc_calculate_bits' function Siarhei Siamashka
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

The previous variant was basically derived from the C and MMX
implementations. The new variant makes use of the 'vmax' instruction,
which is available in NEON and does this job faster. The same method
of calculating scale factors is also used in
'sbc_calc_scalefactors_j_neon'.

Benchmarked without joint stereo on ARM Cortex-A8:

== Before: ==

$ time ./sbcenc -b53 -s8 test.au > /dev/null

real    0m3.851s
user    0m3.375s
sys     0m0.469s

samples  %        image name               symbol name
26260    34.2672  sbcenc                   sbc_pack_frame
20013    26.1154  sbcenc                   sbc_analyze_4b_8s_neon
13796    18.0027  sbcenc                   sbc_calculate_bits
8388     10.9457  no-vmlinux               /no-vmlinux
3229      4.2136  sbcenc                   sbc_enc_process_input_8s_be_neon
2408      3.1422  sbcenc                   sbc_calc_scalefactors_neon
2093      2.7312  sbcenc                   sbc_encode

== After: ==

$ time ./sbcenc -b53 -s8 test.au > /dev/null

real    0m3.796s
user    0m3.344s
sys     0m0.438s

samples  %        image name               symbol name
26582    34.8726  sbcenc                   sbc_pack_frame
20032    26.2797  sbcenc                   sbc_analyze_4b_8s_neon
13808    18.1146  sbcenc                   sbc_calculate_bits
8374     10.9858  no-vmlinux               /no-vmlinux
3187      4.1810  sbcenc                   sbc_enc_process_input_8s_be_neon
2027      2.6592  sbcenc                   sbc_encode
1766      2.3168  sbcenc                   sbc_calc_scalefactors_neon
---
 sbc/sbc_primitives_neon.c |   25 ++++++++++---------------
 1 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/sbc/sbc_primitives_neon.c b/sbc/sbc_primitives_neon.c
index 7713759..0572158 100644
--- a/sbc/sbc_primitives_neon.c
+++ b/sbc/sbc_primitives_neon.c
@@ -248,8 +248,11 @@ static void sbc_calc_scalefactors_neon(
 			int blk = blocks;
 			int32_t *in = &sb_sample_f[0][ch][sb];
 			asm volatile (
-				"vmov.s32  q0, %[c1]\n"
+				"vmov.s32  q0, #0\n"
 				"vmov.s32  q1, %[c1]\n"
+				"vmov.s32  q14, #1\n"
+				"vmov.s32  q15, %[c2]\n"
+				"vadd.s32  q1, q1, q14\n"
 			"1:\n"
 				"vld1.32   {d16, d17}, [%[in], :128], %[inc]\n"
 				"vabs.s32  q8,  q8\n"
@@ -259,22 +262,14 @@ static void sbc_calc_scalefactors_neon(
 				"vabs.s32  q10, q10\n"
 				"vld1.32   {d22, d23}, [%[in], :128], %[inc]\n"
 				"vabs.s32  q11, q11\n"
-				"vcgt.s32  q12, q8,  #0\n"
-				"vcgt.s32  q13, q9,  #0\n"
-				"vcgt.s32  q14, q10, #0\n"
-				"vcgt.s32  q15, q11, #0\n"
-				"vadd.s32  q8,  q8,  q12\n"
-				"vadd.s32  q9,  q9,  q13\n"
-				"vadd.s32  q10, q10, q14\n"
-				"vadd.s32  q11, q11, q15\n"
-				"vorr.s32  q0,  q0,  q8\n"
-				"vorr.s32  q1,  q1,  q9\n"
-				"vorr.s32  q0,  q0,  q10\n"
-				"vorr.s32  q1,  q1,  q11\n"
+				"vmax.s32  q0,  q0,  q8\n"
+				"vmax.s32  q1,  q1,  q9\n"
+				"vmax.s32  q0,  q0,  q10\n"
+				"vmax.s32  q1,  q1,  q11\n"
 				"subs      %[blk], %[blk], #4\n"
 				"bgt       1b\n"
-				"vorr.s32  q0,  q0, q1\n"
-				"vmov.s32  q15, %[c2]\n"
+				"vmax.s32  q0,  q0,  q1\n"
+				"vsub.s32  q0,  q0,  q14\n"
 				"vclz.s32  q0,  q0\n"
 				"vsub.s32  q0,  q15, q0\n"
 				"vst1.32   {d0, d1}, [%[out], :128]\n"
-- 
1.6.4.4



* [PATCH 4/5] sbc: faster 'sbc_calculate_bits' function
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
                   ` (2 preceding siblings ...)
  2010-07-02 12:25 ` [PATCH 3/5] sbc: slightly faster 'sbc_calc_scalefactors_neon' Siarhei Siamashka
@ 2010-07-02 12:25 ` Siarhei Siamashka
  2010-07-02 12:25 ` [PATCH 5/5] sbc: ARMv6 optimized version of analysis filter for SBC encoder Siarhei Siamashka
  2010-07-02 19:04 ` [PATCH 0/5] SBC encoder optimizations for ARM processors Johan Hedberg
  5 siblings, 0 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

By using the SBC_ALWAYS_INLINE trick, the implementation of the
'sbc_calculate_bits' function is split into two branches, each having the
'subbands' variable value known at compile time. This helps the compiler
generate more optimal code by saving at least one extra register, and it
also provides more obvious opportunities for loop unrolling.

Benchmarked on ARM Cortex-A8:

== Before: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m3.989s
user    0m3.602s
sys     0m0.391s

samples  %        image name               symbol name
26057    32.6128  sbcenc                   sbc_pack_frame
20003    25.0357  sbcenc                   sbc_analyze_4b_8s_neon
14220    17.7977  sbcenc                   sbc_calculate_bits
8498     10.6361  no-vmlinux               /no-vmlinux
5300      6.6335  sbcenc                   sbc_calc_scalefactors_j_neon
3235      4.0489  sbcenc                   sbc_enc_process_input_8s_be_neon
2172      2.7185  sbcenc                   sbc_encode

== After: ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m3.652s
user    0m3.195s
sys     0m0.445s

samples  %        image name               symbol name
26207    36.0095  sbcenc                   sbc_pack_frame
19820    27.2335  sbcenc                   sbc_analyze_4b_8s_neon
8629     11.8566  no-vmlinux               /no-vmlinux
6988      9.6018  sbcenc                   sbc_calculate_bits
5094      6.9994  sbcenc                   sbc_calc_scalefactors_j_neon
3351      4.6044  sbcenc                   sbc_enc_process_input_8s_be_neon
2182      2.9982  sbcenc                   sbc_encode
---
 sbc/sbc.c |   43 ++++++++++++++++++++++++++++---------------
 1 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/sbc/sbc.c b/sbc/sbc.c
index 1921585..a6391ae 100644
--- a/sbc/sbc.c
+++ b/sbc/sbc.c
@@ -160,7 +160,8 @@ static uint8_t sbc_crc8(const uint8_t *data, size_t len)
  * Takes a pointer to the frame in question, a pointer to the bits array and
  * the sampling frequency (as 2 bit integer)
  */
-static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
+static SBC_ALWAYS_INLINE void sbc_calculate_bits_internal(
+		const struct sbc_frame *frame, int (*bits)[8], int subbands)
 {
 	uint8_t sf = frame->frequency;
 
@@ -171,17 +172,17 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 		for (ch = 0; ch < frame->channels; ch++) {
 			max_bitneed = 0;
 			if (frame->allocation == SNR) {
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					bitneed[ch][sb] = frame->scale_factor[ch][sb];
 					if (bitneed[ch][sb] > max_bitneed)
 						max_bitneed = bitneed[ch][sb];
 				}
 			} else {
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					if (frame->scale_factor[ch][sb] == 0)
 						bitneed[ch][sb] = -5;
 					else {
-						if (frame->subbands == 4)
+						if (subbands == 4)
 							loudness = frame->scale_factor[ch][sb] - sbc_offset4[sf][sb];
 						else
 							loudness = frame->scale_factor[ch][sb] - sbc_offset8[sf][sb];
@@ -202,7 +203,7 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 				bitslice--;
 				bitcount += slicecount;
 				slicecount = 0;
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					if ((bitneed[ch][sb] > bitslice + 1) && (bitneed[ch][sb] < bitslice + 16))
 						slicecount++;
 					else if (bitneed[ch][sb] == bitslice + 1)
@@ -215,7 +216,7 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 				bitslice--;
 			}
 
-			for (sb = 0; sb < frame->subbands; sb++) {
+			for (sb = 0; sb < subbands; sb++) {
 				if (bitneed[ch][sb] < bitslice + 2)
 					bits[ch][sb] = 0;
 				else {
@@ -225,7 +226,8 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 				}
 			}
 
-			for (sb = 0; bitcount < frame->bitpool && sb < frame->subbands; sb++) {
+			for (sb = 0; bitcount < frame->bitpool &&
+							sb < subbands; sb++) {
 				if ((bits[ch][sb] >= 2) && (bits[ch][sb] < 16)) {
 					bits[ch][sb]++;
 					bitcount++;
@@ -235,7 +237,8 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 				}
 			}
 
-			for (sb = 0; bitcount < frame->bitpool && sb < frame->subbands; sb++) {
+			for (sb = 0; bitcount < frame->bitpool &&
+							sb < subbands; sb++) {
 				if (bits[ch][sb] < 16) {
 					bits[ch][sb]++;
 					bitcount++;
@@ -251,7 +254,7 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 		max_bitneed = 0;
 		if (frame->allocation == SNR) {
 			for (ch = 0; ch < 2; ch++) {
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					bitneed[ch][sb] = frame->scale_factor[ch][sb];
 					if (bitneed[ch][sb] > max_bitneed)
 						max_bitneed = bitneed[ch][sb];
@@ -259,11 +262,11 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 			}
 		} else {
 			for (ch = 0; ch < 2; ch++) {
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					if (frame->scale_factor[ch][sb] == 0)
 						bitneed[ch][sb] = -5;
 					else {
-						if (frame->subbands == 4)
+						if (subbands == 4)
 							loudness = frame->scale_factor[ch][sb] - sbc_offset4[sf][sb];
 						else
 							loudness = frame->scale_factor[ch][sb] - sbc_offset8[sf][sb];
@@ -286,7 +289,7 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 			bitcount += slicecount;
 			slicecount = 0;
 			for (ch = 0; ch < 2; ch++) {
-				for (sb = 0; sb < frame->subbands; sb++) {
+				for (sb = 0; sb < subbands; sb++) {
 					if ((bitneed[ch][sb] > bitslice + 1) && (bitneed[ch][sb] < bitslice + 16))
 						slicecount++;
 					else if (bitneed[ch][sb] == bitslice + 1)
@@ -301,7 +304,7 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 		}
 
 		for (ch = 0; ch < 2; ch++) {
-			for (sb = 0; sb < frame->subbands; sb++) {
+			for (sb = 0; sb < subbands; sb++) {
 				if (bitneed[ch][sb] < bitslice + 2) {
 					bits[ch][sb] = 0;
 				} else {
@@ -325,7 +328,8 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 			if (ch == 1) {
 				ch = 0;
 				sb++;
-				if (sb >= frame->subbands) break;
+				if (sb >= subbands)
+					break;
 			} else
 				ch = 1;
 		}
@@ -340,7 +344,8 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 			if (ch == 1) {
 				ch = 0;
 				sb++;
-				if (sb >= frame->subbands) break;
+				if (sb >= subbands)
+					break;
 			} else
 				ch = 1;
 		}
@@ -349,6 +354,14 @@ static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
 
 }
 
+static void sbc_calculate_bits(const struct sbc_frame *frame, int (*bits)[8])
+{
+	if (frame->subbands == 4)
+		sbc_calculate_bits_internal(frame, bits, 4);
+	else
+		sbc_calculate_bits_internal(frame, bits, 8);
+}
+
 /*
  * Unpacks a SBC frame at the beginning of the stream in data,
  * which has at most len bytes into frame.
-- 
1.6.4.4



* [PATCH 5/5] sbc: ARMv6 optimized version of analysis filter for SBC encoder
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
                   ` (3 preceding siblings ...)
  2010-07-02 12:25 ` [PATCH 4/5] sbc: faster 'sbc_calculate_bits' function Siarhei Siamashka
@ 2010-07-02 12:25 ` Siarhei Siamashka
  2010-07-02 19:04 ` [PATCH 0/5] SBC encoder optimizations for ARM processors Johan Hedberg
  5 siblings, 0 replies; 7+ messages in thread
From: Siarhei Siamashka @ 2010-07-02 12:25 UTC (permalink / raw)
  To: linux-bluetooth; +Cc: Siarhei Siamashka

From: Siarhei Siamashka <siarhei.siamashka@nokia.com>

The optimized filter gets enabled when the code is compiled with
-mcpu=/-march options set to target processors which support ARMv6
instructions. This code is also disabled when NEON is used (which is
a much better alternative). For additional safety, ARM EABI is
required and Thumb mode must not be used.

Benchmarks from ARM11:

== 8 subbands ==

$ time ./sbcenc -b53 -s8 -j test.au > /dev/null

real    0m 35.65s
user    0m 34.17s
sys     0m 1.28s

$ time ./sbcenc.armv6 -b53 -s8 -j test.au > /dev/null

real    0m 17.29s
user    0m 15.47s
sys     0m 0.67s

== 4 subbands ==

$ time ./sbcenc -b53 -s4 -j test.au > /dev/null

real    0m 25.28s
user    0m 23.76s
sys     0m 1.32s

$ time ./sbcenc.armv6 -b53 -s4 -j test.au > /dev/null

real    0m 18.64s
user    0m 15.78s
sys     0m 2.22s
---
 Makefile.am                |    3 +-
 sbc/sbc_primitives.c       |    4 +
 sbc/sbc_primitives_armv6.c |  299 ++++++++++++++++++++++++++++++++++++++++++++
 sbc/sbc_primitives_armv6.h |   52 ++++++++
 4 files changed, 357 insertions(+), 1 deletions(-)
 create mode 100644 sbc/sbc_primitives_armv6.c
 create mode 100644 sbc/sbc_primitives_armv6.h

diff --git a/Makefile.am b/Makefile.am
index 36ffde3..9ed3f89 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -65,7 +65,8 @@ noinst_LTLIBRARIES += sbc/libsbc.la
 sbc_libsbc_la_SOURCES = sbc/sbc.h sbc/sbc.c sbc/sbc_math.h sbc/sbc_tables.h \
 			sbc/sbc_primitives.h sbc/sbc_primitives.c \
 			sbc/sbc_primitives_mmx.h sbc/sbc_primitives_mmx.c \
-			sbc/sbc_primitives_neon.h sbc/sbc_primitives_neon.c
+			sbc/sbc_primitives_neon.h sbc/sbc_primitives_neon.c \
+			sbc/sbc_primitives_armv6.h sbc/sbc_primitives_armv6.c
 
 sbc_libsbc_la_CFLAGS = -finline-functions -fgcse-after-reload \
 					-funswitch-loops -funroll-loops
diff --git a/sbc/sbc_primitives.c b/sbc/sbc_primitives.c
index c73fb1c..f87fb5a 100644
--- a/sbc/sbc_primitives.c
+++ b/sbc/sbc_primitives.c
@@ -34,6 +34,7 @@
 #include "sbc_primitives.h"
 #include "sbc_primitives_mmx.h"
 #include "sbc_primitives_neon.h"
+#include "sbc_primitives_armv6.h"
 
 /*
  * A reference C code of analysis filter with SIMD-friendly tables
@@ -540,6 +541,9 @@ void sbc_init_primitives(struct sbc_encoder_state *state)
 #endif
 
 	/* ARM optimizations */
+#ifdef SBC_BUILD_WITH_ARMV6_SUPPORT
+	sbc_init_primitives_armv6(state);
+#endif
 #ifdef SBC_BUILD_WITH_NEON_SUPPORT
 	sbc_init_primitives_neon(state);
 #endif
diff --git a/sbc/sbc_primitives_armv6.c b/sbc/sbc_primitives_armv6.c
new file mode 100644
index 0000000..9586098
--- /dev/null
+++ b/sbc/sbc_primitives_armv6.c
@@ -0,0 +1,299 @@
+/*
+ *
+ *  Bluetooth low-complexity, subband codec (SBC) library
+ *
+ *  Copyright (C) 2008-2010  Nokia Corporation
+ *  Copyright (C) 2004-2010  Marcel Holtmann <marcel@holtmann.org>
+ *  Copyright (C) 2004-2005  Henryk Ploetz <henryk@ploetzli.ch>
+ *  Copyright (C) 2005-2006  Brad Midgley <bmidgley@xmission.com>
+ *
+ *
+ *  This library is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU Lesser General Public
+ *  License as published by the Free Software Foundation; either
+ *  version 2.1 of the License, or (at your option) any later version.
+ *
+ *  This library is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  Lesser General Public License for more details.
+ *
+ *  You should have received a copy of the GNU Lesser General Public
+ *  License along with this library; if not, write to the Free Software
+ *  Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ *
+ */
+
+#include <stdint.h>
+#include <limits.h>
+#include "sbc.h"
+#include "sbc_math.h"
+#include "sbc_tables.h"
+
+#include "sbc_primitives_armv6.h"
+
+/*
+ * ARMv6 optimizations. The instructions are scheduled for ARM11 pipeline.
+ */
+
+#ifdef SBC_BUILD_WITH_ARMV6_SUPPORT
+
+static void __attribute__((naked)) sbc_analyze_four_armv6()
+{
+	/* r0 = in, r1 = out, r2 = consts */
+	asm volatile (
+		"push   {r1, r4-r7, lr}\n"
+		"push   {r8-r11}\n"
+		"ldrd   r4,  r5,  [r0, #0]\n"
+		"ldrd   r6,  r7,  [r2, #0]\n"
+		"ldrd   r8,  r9,  [r0, #16]\n"
+		"ldrd   r10, r11, [r2, #16]\n"
+		"mov    r14, #0x8000\n"
+		"smlad  r3,  r4,  r6,  r14\n"
+		"smlad  r12, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r0, #32]\n"
+		"ldrd   r6,  r7,  [r2, #32]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #48]\n"
+		"ldrd   r10, r11, [r2, #48]\n"
+		"smlad  r3,  r4,  r6,  r3\n"
+		"smlad  r12, r5,  r7,  r12\n"
+		"ldrd   r4,  r5,  [r0, #64]\n"
+		"ldrd   r6,  r7,  [r2, #64]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #8]\n"
+		"ldrd   r10, r11, [r2, #8]\n"
+		"smlad  r3,  r4,  r6,  r3\n"      /* t1[0] is done */
+		"smlad  r12, r5,  r7,  r12\n"     /* t1[1] is done */
+		"ldrd   r4,  r5,  [r0, #24]\n"
+		"ldrd   r6,  r7,  [r2, #24]\n"
+		"pkhtb  r3,  r12, r3, asr #16\n"  /* combine t1[0] and t1[1] */
+		"smlad  r12, r8,  r10, r14\n"
+		"smlad  r14, r9,  r11, r14\n"
+		"ldrd   r8,  r9,  [r0, #40]\n"
+		"ldrd   r10, r11, [r2, #40]\n"
+		"smlad  r12, r4,  r6,  r12\n"
+		"smlad  r14, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r0, #56]\n"
+		"ldrd   r6,  r7,  [r2, #56]\n"
+		"smlad  r12, r8,  r10, r12\n"
+		"smlad  r14, r9,  r11, r14\n"
+		"ldrd   r8,  r9,  [r0, #72]\n"
+		"ldrd   r10, r11, [r2, #72]\n"
+		"smlad  r12, r4,  r6,  r12\n"
+		"smlad  r14, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r2, #80]\n"    /* start loading cos table */
+		"smlad  r12, r8,  r10, r12\n"     /* t1[2] is done */
+		"smlad  r14, r9,  r11, r14\n"     /* t1[3] is done */
+		"ldrd   r6,  r7,  [r2, #88]\n"
+		"ldrd   r8,  r9,  [r2, #96]\n"
+		"ldrd   r10, r11, [r2, #104]\n"   /* cos table fully loaded */
+		"pkhtb  r12, r14, r12, asr #16\n" /* combine t1[2] and t1[3] */
+		"smuad  r4,  r3,  r4\n"
+		"smuad  r5,  r3,  r5\n"
+		"smlad  r4,  r12, r8,  r4\n"
+		"smlad  r5,  r12, r9,  r5\n"
+		"smuad  r6,  r3,  r6\n"
+		"smuad  r7,  r3,  r7\n"
+		"smlad  r6,  r12, r10, r6\n"
+		"smlad  r7,  r12, r11, r7\n"
+		"pop    {r8-r11}\n"
+		"stmia  r1, {r4, r5, r6, r7}\n"
+		"pop    {r1, r4-r7, pc}\n"
+	);
+}
+
+#define sbc_analyze_four(in, out, consts) \
+	((void (*)(int16_t *, int32_t *, const FIXED_T*)) \
+		sbc_analyze_four_armv6)((in), (out), (consts))
+
+static void __attribute__((naked)) sbc_analyze_eight_armv6()
+{
+	/* r0 = in, r1 = out, r2 = consts */
+	asm volatile (
+		"push   {r1, r4-r7, lr}\n"
+		"push   {r8-r11}\n"
+		"ldrd   r4,  r5,  [r0, #24]\n"
+		"ldrd   r6,  r7,  [r2, #24]\n"
+		"ldrd   r8,  r9,  [r0, #56]\n"
+		"ldrd   r10, r11, [r2, #56]\n"
+		"mov    r14, #0x8000\n"
+		"smlad  r3,  r4,  r6,  r14\n"
+		"smlad  r12, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r0, #88]\n"
+		"ldrd   r6,  r7,  [r2, #88]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #120]\n"
+		"ldrd   r10, r11, [r2, #120]\n"
+		"smlad  r3,  r4,  r6,  r3\n"
+		"smlad  r12, r5,  r7,  r12\n"
+		"ldrd   r4,  r5,  [r0, #152]\n"
+		"ldrd   r6,  r7,  [r2, #152]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #16]\n"
+		"ldrd   r10, r11, [r2, #16]\n"
+		"smlad  r3,  r4,  r6,  r3\n"      /* t1[6] is done */
+		"smlad  r12, r5,  r7,  r12\n"     /* t1[7] is done */
+		"ldrd   r4,  r5,  [r0, #48]\n"
+		"ldrd   r6,  r7,  [r2, #48]\n"
+		"pkhtb  r3,  r12, r3, asr #16\n"  /* combine t1[6] and t1[7] */
+		"str    r3,  [sp, #-4]!\n"        /* save to stack */
+		"smlad  r3,  r8,  r10, r14\n"
+		"smlad  r12, r9,  r11, r14\n"
+		"ldrd   r8,  r9,  [r0, #80]\n"
+		"ldrd   r10, r11, [r2, #80]\n"
+		"smlad  r3,  r4,  r6,  r3\n"
+		"smlad  r12, r5,  r7,  r12\n"
+		"ldrd   r4,  r5,  [r0, #112]\n"
+		"ldrd   r6,  r7,  [r2, #112]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #144]\n"
+		"ldrd   r10, r11, [r2, #144]\n"
+		"smlad  r3,  r4,  r6,  r3\n"
+		"smlad  r12, r5,  r7,  r12\n"
+		"ldrd   r4,  r5,  [r0, #0]\n"
+		"ldrd   r6,  r7,  [r2, #0]\n"
+		"smlad  r3,  r8,  r10, r3\n"      /* t1[4] is done */
+		"smlad  r12, r9,  r11, r12\n"     /* t1[5] is done */
+		"ldrd   r8,  r9,  [r0, #32]\n"
+		"ldrd   r10, r11, [r2, #32]\n"
+		"pkhtb  r3,  r12, r3, asr #16\n"  /* combine t1[4] and t1[5] */
+		"str    r3,  [sp, #-4]!\n"        /* save to stack */
+		"smlad  r3,  r4,  r6,  r14\n"
+		"smlad  r12, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r0, #64]\n"
+		"ldrd   r6,  r7,  [r2, #64]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #96]\n"
+		"ldrd   r10, r11, [r2, #96]\n"
+		"smlad  r3,  r4,  r6,  r3\n"
+		"smlad  r12, r5,  r7,  r12\n"
+		"ldrd   r4,  r5,  [r0, #128]\n"
+		"ldrd   r6,  r7,  [r2, #128]\n"
+		"smlad  r3,  r8,  r10, r3\n"
+		"smlad  r12, r9,  r11, r12\n"
+		"ldrd   r8,  r9,  [r0, #8]\n"
+		"ldrd   r10, r11, [r2, #8]\n"
+		"smlad  r3,  r4,  r6,  r3\n"      /* t1[0] is done */
+		"smlad  r12, r5,  r7,  r12\n"     /* t1[1] is done */
+		"ldrd   r4,  r5,  [r0, #40]\n"
+		"ldrd   r6,  r7,  [r2, #40]\n"
+		"pkhtb  r3,  r12, r3, asr #16\n"  /* combine t1[0] and t1[1] */
+		"smlad  r12, r8,  r10, r14\n"
+		"smlad  r14, r9,  r11, r14\n"
+		"ldrd   r8,  r9,  [r0, #72]\n"
+		"ldrd   r10, r11, [r2, #72]\n"
+		"smlad  r12, r4,  r6,  r12\n"
+		"smlad  r14, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r0, #104]\n"
+		"ldrd   r6,  r7,  [r2, #104]\n"
+		"smlad  r12, r8,  r10, r12\n"
+		"smlad  r14, r9,  r11, r14\n"
+		"ldrd   r8,  r9,  [r0, #136]\n"
+		"ldrd   r10, r11, [r2, #136]!\n"
+		"smlad  r12, r4,  r6,  r12\n"
+		"smlad  r14, r5,  r7,  r14\n"
+		"ldrd   r4,  r5,  [r2, #(160 - 136 + 0)]\n"
+		"smlad  r12, r8,  r10, r12\n"     /* t1[2] is done */
+		"smlad  r14, r9,  r11, r14\n"     /* t1[3] is done */
+		"ldrd   r6,  r7,  [r2, #(160 - 136 + 8)]\n"
+		"smuad  r4,  r3,  r4\n"
+		"smuad  r5,  r3,  r5\n"
+		"pkhtb  r12, r14, r12, asr #16\n" /* combine t1[2] and t1[3] */
+						  /* r3  = t2[0:1] */
+						  /* r12 = t2[2:3] */
+		"pop    {r0, r14}\n"              /* t2[4:5], t2[6:7] */
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 32)]\n"
+		"smuad  r6,  r3,  r6\n"
+		"smuad  r7,  r3,  r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 40)]\n"
+		"smlad  r4,  r12, r8,  r4\n"
+		"smlad  r5,  r12, r9,  r5\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 64)]\n"
+		"smlad  r6,  r12, r10, r6\n"
+		"smlad  r7,  r12, r11, r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 72)]\n"
+		"smlad  r4,  r0,  r8,  r4\n"
+		"smlad  r5,  r0,  r9,  r5\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 96)]\n"
+		"smlad  r6,  r0,  r10, r6\n"
+		"smlad  r7,  r0,  r11, r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 104)]\n"
+		"smlad  r4,  r14, r8,  r4\n"
+		"smlad  r5,  r14, r9,  r5\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 16 + 0)]\n"
+		"smlad  r6,  r14, r10, r6\n"
+		"smlad  r7,  r14, r11, r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 16 + 8)]\n"
+		"stmia  r1!, {r4, r5}\n"
+		"smuad  r4,  r3,  r8\n"
+		"smuad  r5,  r3,  r9\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 16 + 32)]\n"
+		"stmia  r1!, {r6, r7}\n"
+		"smuad  r6,  r3,  r10\n"
+		"smuad  r7,  r3,  r11\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 16 + 40)]\n"
+		"smlad  r4,  r12, r8,  r4\n"
+		"smlad  r5,  r12, r9,  r5\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 16 + 64)]\n"
+		"smlad  r6,  r12, r10, r6\n"
+		"smlad  r7,  r12, r11, r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 16 + 72)]\n"
+		"smlad  r4,  r0,  r8,  r4\n"
+		"smlad  r5,  r0,  r9,  r5\n"
+		"ldrd   r8,  r9,  [r2, #(160 - 136 + 16 + 96)]\n"
+		"smlad  r6,  r0,  r10, r6\n"
+		"smlad  r7,  r0,  r11, r7\n"
+		"ldrd   r10, r11, [r2, #(160 - 136 + 16 + 104)]\n"
+		"smlad  r4,  r14, r8,  r4\n"
+		"smlad  r5,  r14, r9,  r5\n"
+		"smlad  r6,  r14, r10, r6\n"
+		"smlad  r7,  r14, r11, r7\n"
+		"pop    {r8-r11}\n"
+		"stmia  r1!, {r4, r5, r6, r7}\n"
+		"pop    {r1, r4-r7, pc}\n"
+	);
+}
+
+#define sbc_analyze_eight(in, out, consts) \
+	((void (*)(int16_t *, int32_t *, const FIXED_T*)) \
+		sbc_analyze_eight_armv6)((in), (out), (consts))
+
+static void sbc_analyze_4b_4s_armv6(int16_t *x, int32_t *out, int out_stride)
+{
+	/* Analyze blocks */
+	sbc_analyze_four(x + 12, out, analysis_consts_fixed4_simd_odd);
+	out += out_stride;
+	sbc_analyze_four(x + 8, out, analysis_consts_fixed4_simd_even);
+	out += out_stride;
+	sbc_analyze_four(x + 4, out, analysis_consts_fixed4_simd_odd);
+	out += out_stride;
+	sbc_analyze_four(x + 0, out, analysis_consts_fixed4_simd_even);
+}
+
+static void sbc_analyze_4b_8s_armv6(int16_t *x, int32_t *out, int out_stride)
+{
+	/* Analyze blocks */
+	sbc_analyze_eight(x + 24, out, analysis_consts_fixed8_simd_odd);
+	out += out_stride;
+	sbc_analyze_eight(x + 16, out, analysis_consts_fixed8_simd_even);
+	out += out_stride;
+	sbc_analyze_eight(x + 8, out, analysis_consts_fixed8_simd_odd);
+	out += out_stride;
+	sbc_analyze_eight(x + 0, out, analysis_consts_fixed8_simd_even);
+}
+
+void sbc_init_primitives_armv6(struct sbc_encoder_state *state)
+{
+	state->sbc_analyze_4b_4s = sbc_analyze_4b_4s_armv6;
+	state->sbc_analyze_4b_8s = sbc_analyze_4b_8s_armv6;
+	state->implementation_info = "ARMv6 SIMD";
+}
+
+#endif
diff --git a/sbc/sbc_primitives_armv6.h b/sbc/sbc_primitives_armv6.h
new file mode 100644
index 0000000..1862aed
--- /dev/null
+++ b/sbc/sbc_primitives_armv6.h
@@ -0,0 +1,52 @@
+/*
+ *
+ *  Bluetooth low-complexity, subband codec (SBC) library
+ *
+ *  Copyright (C) 2008-2010  Nokia Corporation
+ *  Copyright (C) 2004-2010  Marcel Holtmann <marcel@holtmann.org>
+ *  Copyright (C) 2004-2005  Henryk Ploetz <henryk@ploetzli.ch>
+ *  Copyright (C) 2005-2006  Brad Midgley <bmidgley@xmission.com>
+ *
+ *
+ *  This library is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU Lesser General Public
+ *  License as published by the Free Software Foundation; either
+ *  version 2.1 of the License, or (at your option) any later version.
+ *
+ *  This library is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  Lesser General Public License for more details.
+ *
+ *  You should have received a copy of the GNU Lesser General Public
+ *  License along with this library; if not, write to the Free Software
+ *  Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+ *
+ */
+
+#ifndef __SBC_PRIMITIVES_ARMV6_H
+#define __SBC_PRIMITIVES_ARMV6_H
+
+#include "sbc_primitives.h"
+
+#if defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || \
+	defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || \
+	defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) || \
+	defined(__ARM_ARCH_6M__) || defined(__ARM_ARCH_7__) || \
+	defined(__ARM_ARCH_7A__) || defined(__ARM_ARCH_7R__) || \
+	defined(__ARM_ARCH_7M__)
+#define SBC_HAVE_ARMV6 1
+#endif
+
+#if !defined(SBC_HIGH_PRECISION) && (SCALE_OUT_BITS == 15) && \
+	defined(__GNUC__) && defined(SBC_HAVE_ARMV6) && \
+	defined(__ARM_EABI__) && !defined(__thumb__) && \
+	!defined(__ARM_NEON__)
+
+#define SBC_BUILD_WITH_ARMV6_SUPPORT
+
+void sbc_init_primitives_armv6(struct sbc_encoder_state *encoder_state);
+
+#endif
+
+#endif
-- 
1.6.4.4



* Re: [PATCH 0/5] SBC encoder optimizations for ARM processors
  2010-07-02 12:25 [PATCH 0/5] SBC encoder optimizations for ARM processors Siarhei Siamashka
                   ` (4 preceding siblings ...)
  2010-07-02 12:25 ` [PATCH 5/5] sbc: ARMv6 optimized version of analysis filter for SBC encoder Siarhei Siamashka
@ 2010-07-02 19:04 ` Johan Hedberg
  5 siblings, 0 replies; 7+ messages in thread
From: Johan Hedberg @ 2010-07-02 19:04 UTC (permalink / raw)
  To: Siarhei Siamashka; +Cc: linux-bluetooth, Siarhei Siamashka

Hi Siarhei,

On Fri, Jul 02, 2010, Siarhei Siamashka wrote:
> This patch series adds a bunch of ARM assembly optimizations.
> 
> Now all the functions from 'sbc_primitives.c' got NEON optimized
> variants. As benchmarked with the common A2DP case (44100kHz audio
> with bitpool set to 53, 8 subbands, joint stereo), SBC encoding is
> now approximately 1.6x faster overall when compared to bluez-4.66.
> Some more room for improvement still exists though.
> 
> For ARMv6 processors, only analysis filter has been implemented
> (using dual 16-bit multiply-accumulate instructions). But that's
> the most important optimization and it doubles performance already.
> And older processors such as ARM11 are much slower, so they
> definitely benefit more on a relative scale (Nokia N800/N810 users
> may find this update useful).
> 
> All the optimizations are bitexact. Given the same input, they provide
> the same output as the SBC encoder from the previous bluez versions.
> 
> Patches are also available in the branch 'sbc-arm-optimizations' here:
> git://gitorious.org/system-performance/bluez-sbc.git
> 
> Siarhei Siamashka (5):
>   sbc: ARM NEON optimized joint stereo processing in SBC encoder
>   sbc: ARM NEON optimizations for input permutation in SBC encoder
>   sbc: slightly faster 'sbc_calc_scalefactors_neon'
>   sbc: faster 'sbc_calculate_bits' function
>   sbc: ARMv6 optimized version of analysis filter for SBC encoder
> 
>  Makefile.am                |    3 +-
>  sbc/sbc.c                  |   43 ++-
>  sbc/sbc_primitives.c       |    4 +
>  sbc/sbc_primitives_armv6.c |  299 +++++++++++++++++++++
>  sbc/sbc_primitives_armv6.h |   52 ++++
>  sbc/sbc_primitives_neon.c  |  618 ++++++++++++++++++++++++++++++++++++++++++-
>  6 files changed, 988 insertions(+), 31 deletions(-)
>  create mode 100644 sbc/sbc_primitives_armv6.c
>  create mode 100644 sbc/sbc_primitives_armv6.h

Thanks for these! They've all been pushed upstream now.

Johan

