* [PATCH bpf 0/5] Couple of BPF JIT fixes
@ 2017-12-14 20:07 Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
` (5 more replies)
0 siblings, 6 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
Two fixes that deal with buggy usage of bpf_helper_changes_pkt_data():
the affected JITs also reload cached skb data when the context is not
an skb but, for example, an xdp one. Also included is a fix where skb
metadata is reloaded out of the wrong register on a helper call; the
rest are test cases and a verifier-side check making sure that there
is always the guarantee that ctx sits in r1. Thanks!
Daniel Borkmann (5):
bpf, s390x: do not reload skb pointers in non-skb context
bpf, ppc64: do not reload skb pointers in non-skb context
bpf: guarantee r1 to be ctx in case of bpf_helper_changes_pkt_data
bpf, sparc: fix usage of wrong reg for load_skb_regs after call
bpf: add test case for ld_abs and helper changing pkt data
arch/powerpc/net/bpf_jit_comp64.c | 6 ++--
arch/s390/net/bpf_jit_comp.c | 11 ++++----
arch/sparc/net/bpf_jit_comp_64.c | 6 ++--
kernel/bpf/verifier.c | 6 ++++
lib/test_bpf.c | 43 +++++++++++++++++++++++++++++
tools/testing/selftests/bpf/test_verifier.c | 24 ++++++++++++++++
6 files changed, 86 insertions(+), 10 deletions(-)
--
2.9.5
* [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
@ 2017-12-14 20:07 ` Daniel Borkmann
2018-01-09 14:20 ` Michael Holzheu
2017-12-14 20:07 ` [PATCH bpf 2/5] bpf, ppc64: " Daniel Borkmann
` (4 subsequent siblings)
5 siblings, 1 reply; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
The assumption that skb pointers need to be reloaded unconditionally
on BPF helper calls for which bpf_helper_changes_pkt_data() holds
true is wrong. The helper can be called from different contexts,
such as XDP, where the program's context is a struct xdp_buff
instead of a struct sk_buff; reloading skb data there would access
garbage.
JITs only ever need to deal with cached skb pointer reload
when ld_abs/ind was seen, therefore guard the reload behind
SEEN_SKB only. Tested on s390x.
Fixes: 9db7f2b81880 ("s390/bpf: recache skb->data/hlen for skb_vlan_push/pop")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
---
arch/s390/net/bpf_jit_comp.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index e81c168..9557d8b 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -55,8 +55,7 @@ struct bpf_jit {
#define SEEN_LITERAL 8 /* code uses literals */
#define SEEN_FUNC 16 /* calls C functions */
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
-#define SEEN_SKB_CHANGE 64 /* code changes skb data */
-#define SEEN_REG_AX 128 /* code uses constant blinding */
+#define SEEN_REG_AX 64 /* code uses constant blinding */
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
/*
@@ -448,12 +447,12 @@ static void bpf_jit_prologue(struct bpf_jit *jit, u32 stack_depth)
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0,
REG_15, 152);
}
- if (jit->seen & SEEN_SKB)
+ if (jit->seen & SEEN_SKB) {
emit_load_skb_data_hlen(jit);
- if (jit->seen & SEEN_SKB_CHANGE)
/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, BPF_REG_1, REG_0, REG_15,
STK_OFF_SKBP);
+ }
}
/*
@@ -983,8 +982,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
EMIT2(0x0d00, REG_14, REG_W1);
/* lgr %b0,%r2: load return value into %b0 */
EMIT4(0xb9040000, BPF_REG_0, REG_2);
- if (bpf_helper_changes_pkt_data((void *)func)) {
- jit->seen |= SEEN_SKB_CHANGE;
+ if ((jit->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data((void *)func)) {
/* lg %b1,ST_OFF_SKBP(%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0004, BPF_REG_1, REG_0,
REG_15, STK_OFF_SKBP);
--
2.9.5
* [PATCH bpf 2/5] bpf, ppc64: do not reload skb pointers in non-skb context
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
@ 2017-12-14 20:07 ` Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 3/5] bpf: guarantee r1 to be ctx in case of bpf_helper_changes_pkt_data Daniel Borkmann
` (3 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
The assumption that skb pointers need to be reloaded unconditionally
on BPF helper calls for which bpf_helper_changes_pkt_data() holds
true is wrong. The helper can be called from different contexts,
such as XDP, where the program's context is a struct xdp_buff
instead of a struct sk_buff; reloading skb data there would access
garbage.
JITs only ever need to deal with cached skb pointer reload
when ld_abs/ind was seen, therefore guard the reload behind
SEEN_SKB.
Fixes: 156d0e290e96 ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
---
arch/powerpc/net/bpf_jit_comp64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 46d74e8..d183b48 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -763,7 +763,8 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
func = (u8 *) __bpf_call_base + imm;
/* Save skb pointer if we need to re-cache skb data */
- if (bpf_helper_changes_pkt_data(func))
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func))
PPC_BPF_STL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_func_call(image, ctx, (u64)func);
@@ -772,7 +773,8 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
PPC_MR(b2p[BPF_REG_0], 3);
/* refresh skb cache */
- if (bpf_helper_changes_pkt_data(func)) {
+ if ((ctx->seen & SEEN_SKB) &&
+ bpf_helper_changes_pkt_data(func)) {
/* reload skb pointer to r3 */
PPC_BPF_LL(3, 1, bpf_jit_stack_local(ctx));
bpf_jit_emit_skb_loads(image, ctx);
--
2.9.5
* [PATCH bpf 3/5] bpf: guarantee r1 to be ctx in case of bpf_helper_changes_pkt_data
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 2/5] bpf, ppc64: " Daniel Borkmann
@ 2017-12-14 20:07 ` Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 4/5] bpf, sparc: fix usage of wrong reg for load_skb_regs after call Daniel Borkmann
` (2 subsequent siblings)
5 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
Some JITs don't cache the skb context on the stack in the prologue,
so when LD_ABS/IND is used and a helper call for which
bpf_helper_changes_pkt_data() yields true occurs, they temporarily
save/restore the skb pointer. However, the assumption that skb
always has to be in r1 is a bit of a gamble. Right now it turns out
to be true for all helpers listed in bpf_helper_changes_pkt_data(),
but let's enforce this from the verifier side, so that we make it a
guarantee and bail out if the func proto is ever misconfigured in
future helpers.
In case of BPF helper calls from cBPF, bpf_helper_changes_pkt_data()
is completely irrelevant here (since cBPF is context read-only) and
therefore always false.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
kernel/bpf/verifier.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d459357..e39b013 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1674,7 +1674,13 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
return -EINVAL;
}
+ /* With LD_ABS/IND some JITs save/restore skb from r1. */
changes_data = bpf_helper_changes_pkt_data(fn->func);
+ if (changes_data && fn->arg1_type != ARG_PTR_TO_CTX) {
+ verbose(env, "kernel subsystem misconfigured func %s#%d: r1 != ctx\n",
+ func_id_name(func_id), func_id);
+ return -EINVAL;
+ }
memset(&meta, 0, sizeof(meta));
meta.pkt_access = fn->pkt_access;
--
2.9.5
* [PATCH bpf 4/5] bpf, sparc: fix usage of wrong reg for load_skb_regs after call
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
` (2 preceding siblings ...)
2017-12-14 20:07 ` [PATCH bpf 3/5] bpf: guarantee r1 to be ctx in case of bpf_helper_changes_pkt_data Daniel Borkmann
@ 2017-12-14 20:07 ` Daniel Borkmann
2017-12-14 20:07 ` [PATCH bpf 5/5] bpf: add test case for ld_abs and helper changing pkt data Daniel Borkmann
2017-12-15 17:28 ` [PATCH bpf 0/5] Couple of BPF JIT fixes Alexei Starovoitov
5 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
When LD_ABS/IND is used in the program and a BPF helper call that
changes packet data occurs (bpf_helper_changes_pkt_data() returns
true), the sparc JIT tries to reload the cached skb data from
bpf2sparc[BPF_REG_6]. However, there is no guarantee that the skb
still sits in R6 at this point; all helpers changing skb data only
guarantee that the skb sits in R1. Therefore, store BPF R1 in L7
temporarily and, after the procedure call, use L7 to reload the
cached skb data. The skb sitting in R6 only holds at the time
LD_ABS/IND itself is executed.
Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
arch/sparc/net/bpf_jit_comp_64.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 5765e7e..ff5f9cb 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1245,14 +1245,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
u8 *func = ((u8 *)__bpf_call_base) + imm;
ctx->saw_call = true;
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ emit_reg_move(bpf2sparc[BPF_REG_1], L7, ctx);
emit_call((u32 *)func, ctx);
emit_nop(ctx);
emit_reg_move(O0, bpf2sparc[BPF_REG_0], ctx);
- if (bpf_helper_changes_pkt_data(func) && ctx->saw_ld_abs_ind)
- load_skb_regs(ctx, bpf2sparc[BPF_REG_6]);
+ if (ctx->saw_ld_abs_ind && bpf_helper_changes_pkt_data(func))
+ load_skb_regs(ctx, L7);
break;
}
--
2.9.5
* [PATCH bpf 5/5] bpf: add test case for ld_abs and helper changing pkt data
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
` (3 preceding siblings ...)
2017-12-14 20:07 ` [PATCH bpf 4/5] bpf, sparc: fix usage of wrong reg for load_skb_regs after call Daniel Borkmann
@ 2017-12-14 20:07 ` Daniel Borkmann
2017-12-15 17:28 ` [PATCH bpf 0/5] Couple of BPF JIT fixes Alexei Starovoitov
5 siblings, 0 replies; 8+ messages in thread
From: Daniel Borkmann @ 2017-12-14 20:07 UTC (permalink / raw)
To: ast; +Cc: holzheu, naveen.n.rao, davem, netdev, Daniel Borkmann
Add a test that i) uses LD_ABS, ii) zeroes R6 before the call,
iii) calls a helper that triggers a reload of cached skb data, and
iv) uses LD_ABS again. It is added to test_bpf in order to do runtime
testing after JITing, as well as to test_verifier to ensure that the
sequence is allowed.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
lib/test_bpf.c | 43 +++++++++++++++++++++++++++++
tools/testing/selftests/bpf/test_verifier.c | 24 ++++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index aa8812a..9e97480 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -435,6 +435,41 @@ static int bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
return 0;
}
+static int bpf_fill_ld_abs_vlan_push_pop2(struct bpf_test *self)
+{
+ struct bpf_insn *insn;
+
+ insn = kmalloc_array(16, sizeof(*insn), GFP_KERNEL);
+ if (!insn)
+ return -ENOMEM;
+
+ /* Due to func address being non-const, we need to
+ * assemble this here.
+ */
+ insn[0] = BPF_MOV64_REG(R6, R1);
+ insn[1] = BPF_LD_ABS(BPF_B, 0);
+ insn[2] = BPF_LD_ABS(BPF_H, 0);
+ insn[3] = BPF_LD_ABS(BPF_W, 0);
+ insn[4] = BPF_MOV64_REG(R7, R6);
+ insn[5] = BPF_MOV64_IMM(R6, 0);
+ insn[6] = BPF_MOV64_REG(R1, R7);
+ insn[7] = BPF_MOV64_IMM(R2, 1);
+ insn[8] = BPF_MOV64_IMM(R3, 2);
+ insn[9] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ bpf_skb_vlan_push_proto.func - __bpf_call_base);
+ insn[10] = BPF_MOV64_REG(R6, R7);
+ insn[11] = BPF_LD_ABS(BPF_B, 0);
+ insn[12] = BPF_LD_ABS(BPF_H, 0);
+ insn[13] = BPF_LD_ABS(BPF_W, 0);
+ insn[14] = BPF_MOV64_IMM(R0, 42);
+ insn[15] = BPF_EXIT_INSN();
+
+ self->u.ptr.insns = insn;
+ self->u.ptr.len = 16;
+
+ return 0;
+}
+
static int bpf_fill_jump_around_ld_abs(struct bpf_test *self)
{
unsigned int len = BPF_MAXINSNS;
@@ -6066,6 +6101,14 @@ static struct bpf_test tests[] = {
{},
{ {0x1, 0x42 } },
},
+ {
+ "LD_ABS with helper changing skb data",
+ { },
+ INTERNAL,
+ { 0x34 },
+ { { ETH_HLEN, 42 } },
+ .fill_helper = bpf_fill_ld_abs_vlan_push_pop2,
+ },
};
static struct net_device dev;
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 3c64f30..b03ecfd 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -6117,6 +6117,30 @@ static struct bpf_test tests[] = {
.result = ACCEPT,
},
{
+ "ld_abs: tests on r6 and skb data reload helper",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+ BPF_LD_ABS(BPF_B, 0),
+ BPF_LD_ABS(BPF_H, 0),
+ BPF_LD_ABS(BPF_W, 0),
+ BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
+ BPF_MOV64_IMM(BPF_REG_6, 0),
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+ BPF_MOV64_IMM(BPF_REG_2, 1),
+ BPF_MOV64_IMM(BPF_REG_3, 2),
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_skb_vlan_push),
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
+ BPF_LD_ABS(BPF_B, 0),
+ BPF_LD_ABS(BPF_H, 0),
+ BPF_LD_ABS(BPF_W, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 42),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ .result = ACCEPT,
+ },
+ {
"ld_ind: check calling conv, r1",
.insns = {
BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
--
2.9.5
* Re: [PATCH bpf 0/5] Couple of BPF JIT fixes
2017-12-14 20:07 [PATCH bpf 0/5] Couple of BPF JIT fixes Daniel Borkmann
` (4 preceding siblings ...)
2017-12-14 20:07 ` [PATCH bpf 5/5] bpf: add test case for ld_abs and helper changing pkt data Daniel Borkmann
@ 2017-12-15 17:28 ` Alexei Starovoitov
5 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2017-12-15 17:28 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, holzheu, naveen.n.rao, davem, netdev
On Thu, Dec 14, 2017 at 09:07:22PM +0100, Daniel Borkmann wrote:
> Two fixes that deal with buggy usage of bpf_helper_changes_pkt_data():
> the affected JITs also reload cached skb data when the context is not
> an skb but, for example, an xdp one. Also included is a fix where skb
> metadata is reloaded out of the wrong register on a helper call; the
> rest are test cases and a verifier-side check making sure that there
> is always the guarantee that ctx sits in r1. Thanks!
Applied, thanks Daniel!
* Re: [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context
2017-12-14 20:07 ` [PATCH bpf 1/5] bpf, s390x: do not reload skb pointers in non-skb context Daniel Borkmann
@ 2018-01-09 14:20 ` Michael Holzheu
0 siblings, 0 replies; 8+ messages in thread
From: Michael Holzheu @ 2018-01-09 14:20 UTC (permalink / raw)
To: Daniel Borkmann; +Cc: ast, naveen.n.rao, davem, netdev
Am Thu, 14 Dec 2017 21:07:23 +0100
schrieb Daniel Borkmann <daniel@iogearbox.net>:
> The assumption that skb pointers need to be reloaded unconditionally
> on BPF helper calls for which bpf_helper_changes_pkt_data() holds
> true is wrong. The helper can be called from different contexts,
> such as XDP, where the program's context is a struct xdp_buff
> instead of a struct sk_buff; reloading skb data there would access
> garbage.
>
> JITs only ever need to deal with cached skb pointer reload
> when ld_abs/ind was seen, therefore guard the reload behind
> SEEN_SKB only. Tested on s390x.
Hello Daniel,
Sorry for the late answer, I have been on vacation until now.
Thanks for fixing / testing this for s390x.
Michael