linux-pm.vger.kernel.org archive mirror
* [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency
@ 2024-03-21 19:28 Rafael J. Wysocki
  2024-03-21 19:29 ` [PATCH v1 1/5] cpufreq: intel_pstate: Drop redundant locking from intel_pstate_driver_cleanup() Rafael J. Wysocki
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:28 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

Hi Everyone,

This series consists of a few cleanups of the intel_pstate driver, mostly
related to spinlock locking and synchronization.

The patches are not expected to alter functionality in a visible way.

Please see individual patch changelogs for details.

Thanks!




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 1/5] cpufreq: intel_pstate: Drop redundant locking from intel_pstate_driver_cleanup()
  2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
@ 2024-03-21 19:29 ` Rafael J. Wysocki
  2024-03-21 19:30 ` [PATCH v1 2/5] cpufreq: intel_pstate: Simplify spinlock locking Rafael J. Wysocki
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:29 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Remove the spinlock locking from intel_pstate_driver_cleanup() as it is
not necessary because no other code accessing all_cpu_data[] can run in
parallel with that function.

Had the locking been necessary, though, it would have been incorrect
because the lock in question is acquired from a hardirq handler and
it cannot be acquired from thread context without disabling interrupts.
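
For background, a minimal sketch of the pattern referred to above, with
hypothetical names rather than actual intel_pstate code:

    #include <linux/interrupt.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(shared_lock);

    static irqreturn_t example_irq_handler(int irq, void *data)
    {
            /* Interrupts are already disabled in hardirq context. */
            spin_lock(&shared_lock);
            /* ... */
            spin_unlock(&shared_lock);
            return IRQ_HANDLED;
    }

    static void example_thread_path(void)
    {
            unsigned long flags;

            /*
             * A plain spin_lock() here would be a deadlock risk: the
             * hardirq handler above could interrupt this CPU while the
             * lock is held and then spin on it forever.  Taking the
             * lock with interrupts disabled avoids that.
             */
            spin_lock_irqsave(&shared_lock, flags);
            /* ... */
            spin_unlock_irqrestore(&shared_lock, flags);
    }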

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |    2 --
 1 file changed, 2 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -3135,10 +3135,8 @@ static void intel_pstate_driver_cleanup(
 			if (intel_pstate_driver == &intel_pstate)
 				intel_pstate_clear_update_util_hook(cpu);
 
-			spin_lock(&hwp_notify_lock);
 			kfree(all_cpu_data[cpu]);
 			WRITE_ONCE(all_cpu_data[cpu], NULL);
-			spin_unlock(&hwp_notify_lock);
 		}
 	}
 	cpus_read_unlock();




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 2/5] cpufreq: intel_pstate: Simplify spinlock locking
  2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
  2024-03-21 19:29 ` [PATCH v1 1/5] cpufreq: intel_pstate: Drop redundant locking from intel_pstate_driver_cleanup() Rafael J. Wysocki
@ 2024-03-21 19:30 ` Rafael J. Wysocki
  2024-03-21 19:32 ` [PATCH v1 3/5] cpufreq: intel_pstate: Wait for canceled delayed work to complete Rafael J. Wysocki
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:30 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Because intel_pstate_enable_hwp_interrupt() and
intel_pstate_disable_hwp_interrupt() are only called from thread context,
with interrupts guaranteed to be enabled when they run, they need not save
and restore the IRQ flags when taking the spinlock.  Make them use
spin_lock_irq()/spin_unlock_irq() instead of spin_lock_irqsave() and
spin_unlock_irqrestore().
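
As a general reference, a minimal sketch (hypothetical names, not
intel_pstate code) of when each locking form applies:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    /* Caller's interrupt state is unknown: save it and restore it. */
    static void example_any_context(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);
            /* ... */
            spin_unlock_irqrestore(&example_lock, flags);
    }

    /*
     * Caller is known to run with interrupts enabled (e.g. plain thread
     * context): spin_unlock_irq() unconditionally re-enables interrupts,
     * so this pair is only correct when they were enabled on entry.
     */
    static void example_thread_context(void)
    {
            spin_lock_irq(&example_lock);
            /* ... */
            spin_unlock_irq(&example_lock);
    }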

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -1682,30 +1682,26 @@ ack_intr:
 
 static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 {
-	unsigned long flags;
-
 	if (!boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
 
-	spin_lock_irqsave(&hwp_notify_lock, flags);
+	spin_lock_irq(&hwp_notify_lock);
 	if (cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask))
 		cancel_delayed_work(&cpudata->hwp_notify_work);
-	spin_unlock_irqrestore(&hwp_notify_lock, flags);
+	spin_unlock_irq(&hwp_notify_lock);
 }
 
 static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
 {
 	/* Enable HWP notification interrupt for guaranteed performance change */
 	if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&hwp_notify_lock, flags);
+		spin_lock_irq(&hwp_notify_lock);
 		INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
 		cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
-		spin_unlock_irqrestore(&hwp_notify_lock, flags);
+		spin_unlock_irq(&hwp_notify_lock);
 
 		/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
 		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x01);




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 3/5] cpufreq: intel_pstate: Wait for canceled delayed work to complete
  2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
  2024-03-21 19:29 ` [PATCH v1 1/5] cpufreq: intel_pstate: Drop redundant locking from intel_pstate_driver_cleanup() Rafael J. Wysocki
  2024-03-21 19:30 ` [PATCH v1 2/5] cpufreq: intel_pstate: Simplify spinlock locking Rafael J. Wysocki
@ 2024-03-21 19:32 ` Rafael J. Wysocki
  2024-03-21 19:33 ` [PATCH v1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations Rafael J. Wysocki
  2024-03-21 19:34 ` [PATCH v1 5/5] cpufreq: intel_pstate: Use __ro_after_init for three variables Rafael J. Wysocki
  4 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:32 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Make intel_pstate_disable_hwp_interrupt() wait for the canceled delayed
work to complete, so that no leftover work item is still running when it
returns, which may happen during driver unregistration and could confuse
things going forward.  Because cancel_delayed_work_sync() may sleep, the
cancellation is moved outside the hwp_notify_lock-protected section.
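
For reference, a minimal sketch (hypothetical names, not intel_pstate
code) contrasting the two cancellation helpers:

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    static void example_work_fn(struct work_struct *work)
    {
            /* ... */
    }

    static DECLARE_DELAYED_WORK(example_dwork, example_work_fn);

    static void example_teardown(void)
    {
            schedule_delayed_work(&example_dwork, msecs_to_jiffies(10));

            /*
             * cancel_delayed_work() only removes a pending item; an
             * instance that has already started keeps running.
             */
            cancel_delayed_work(&example_dwork);

            /*
             * cancel_delayed_work_sync() also waits for a running
             * instance to finish, so nothing is left executing after it
             * returns.  It may sleep, so it must not be called under a
             * spinlock.
             */
            cancel_delayed_work_sync(&example_dwork);
    }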

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -1682,6 +1682,8 @@ ack_intr:
 
 static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 {
+	bool cancel_work;
+
 	if (!boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
@@ -1689,9 +1691,11 @@ static void intel_pstate_disable_hwp_int
 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
 
 	spin_lock_irq(&hwp_notify_lock);
-	if (cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask))
-		cancel_delayed_work(&cpudata->hwp_notify_work);
+	cancel_work = cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask);
 	spin_unlock_irq(&hwp_notify_lock);
+
+	if (cancel_work)
+		cancel_delayed_work_sync(&cpudata->hwp_notify_work);
 }
 
 static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations
  2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
                   ` (2 preceding siblings ...)
  2024-03-21 19:32 ` [PATCH v1 3/5] cpufreq: intel_pstate: Wait for canceled delayed work to complete Rafael J. Wysocki
@ 2024-03-21 19:33 ` Rafael J. Wysocki
  2024-03-25 16:45   ` [Update][PATCH v1.1 " Rafael J. Wysocki
  2024-03-21 19:34 ` [PATCH v1 5/5] cpufreq: intel_pstate: Use __ro_after_init for three variables Rafael J. Wysocki
  4 siblings, 1 reply; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:33 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Drop a redundant check involving READ_ONCE() from notify_hwp_interrupt()
and make it check hwp_active without READ_ONCE(), which is not necessary
because that variable is only set once, during the early initialization of
the driver.

In order to make that clear, annotate hwp_active with __ro_after_init.
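
For reference, a minimal sketch (hypothetical names, not intel_pstate
code) of the __ro_after_init pattern relied on here:

    #include <linux/cache.h>
    #include <linux/init.h>

    /* Written once during init, effectively read-only afterwards. */
    static bool example_feature_active __ro_after_init;

    static int __init example_driver_init(void)
    {
            example_feature_active = true;
            return 0;
    }

    static void example_hot_path(void)
    {
            /*
             * No READ_ONCE() is needed: there is no concurrent writer
             * to race with once initialization is complete.
             */
            if (!example_feature_active)
                    return;
            /* ... */
    }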

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |   14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -292,7 +292,7 @@ struct pstate_funcs {
 
 static struct pstate_funcs pstate_funcs __read_mostly;
 
-static int hwp_active __read_mostly;
+static bool hwp_active __ro_after_init;
 static int hwp_mode_bdw __read_mostly;
 static bool per_cpu_limits __read_mostly;
 static bool hwp_boost __read_mostly;
@@ -1640,7 +1640,7 @@ void notify_hwp_interrupt(void)
 	unsigned long flags;
 	u64 value;
 
-	if (!READ_ONCE(hwp_active) || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	rdmsrl_safe(MSR_HWP_STATUS, &value);
@@ -1653,14 +1653,6 @@ void notify_hwp_interrupt(void)
 		goto ack_intr;
 
 	/*
-	 * Currently we never free all_cpu_data. And we can't reach here
-	 * without this allocated. But for safety for future changes, added
-	 * check.
-	 */
-	if (unlikely(!READ_ONCE(all_cpu_data)))
-		goto ack_intr;
-
-	/*
 	 * The free is done during cleanup, when cpufreq registry is failed.
 	 * We wouldn't be here if it fails on init or switch status. But for
 	 * future changes, added check.
@@ -3464,7 +3456,7 @@ static int __init intel_pstate_init(void
 		 * deal with it.
 		 */
 		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
-			WRITE_ONCE(hwp_active, 1);
+			hwp_active = true;
 			hwp_mode_bdw = id->driver_data;
 			intel_pstate.attr = hwp_cpufreq_attrs;
 			intel_cpufreq.attr = hwp_cpufreq_attrs;




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v1 5/5] cpufreq: intel_pstate: Use __ro_after_init for three variables
  2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
                   ` (3 preceding siblings ...)
  2024-03-21 19:33 ` [PATCH v1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations Rafael J. Wysocki
@ 2024-03-21 19:34 ` Rafael J. Wysocki
  4 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-21 19:34 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

There are at least three variables in intel_pstate that do not get updated
after they have been initialized, so annotate them with __ro_after_init.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpufreq/intel_pstate.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -293,10 +293,10 @@ struct pstate_funcs {
 static struct pstate_funcs pstate_funcs __read_mostly;
 
 static bool hwp_active __ro_after_init;
-static int hwp_mode_bdw __read_mostly;
-static bool per_cpu_limits __read_mostly;
+static int hwp_mode_bdw __ro_after_init;
+static bool per_cpu_limits __ro_after_init;
+static bool hwp_forced __ro_after_init;
 static bool hwp_boost __read_mostly;
-static bool hwp_forced __read_mostly;
 
 static struct cpufreq_driver *intel_pstate_driver __read_mostly;
 




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [Update][PATCH v1.1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations
  2024-03-21 19:33 ` [PATCH v1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations Rafael J. Wysocki
@ 2024-03-25 16:45   ` Rafael J. Wysocki
  2024-03-28 19:00     ` [Update][PATCH v1.2 " Rafael J. Wysocki
  0 siblings, 1 reply; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-25 16:45 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Drop two redundant checks involving READ_ONCE() from notify_hwp_interrupt()
and make it check hwp_active without READ_ONCE(), which is not necessary
because that variable is only set once, during the early initialization of
the driver.

In order to make that clear, annotate hwp_active with __ro_after_init.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

v1 -> v1.1:
   * There are two redundant checks in notify_hwp_interrupt() that can
     be dropped, so drop them both.

---
 drivers/cpufreq/intel_pstate.c |   23 +++--------------------
 1 file changed, 3 insertions(+), 20 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -292,7 +292,7 @@ struct pstate_funcs {
 
 static struct pstate_funcs pstate_funcs __read_mostly;
 
-static int hwp_active __read_mostly;
+static bool hwp_active __ro_after_init;
 static int hwp_mode_bdw __read_mostly;
 static bool per_cpu_limits __read_mostly;
 static bool hwp_boost __read_mostly;
@@ -1640,7 +1640,7 @@ void notify_hwp_interrupt(void)
 	unsigned long flags;
 	u64 value;
 
-	if (!READ_ONCE(hwp_active) || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	rdmsrl_safe(MSR_HWP_STATUS, &value);
@@ -1652,23 +1652,6 @@ void notify_hwp_interrupt(void)
 	if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
 		goto ack_intr;
 
-	/*
-	 * Currently we never free all_cpu_data. And we can't reach here
-	 * without this allocated. But for safety for future changes, added
-	 * check.
-	 */
-	if (unlikely(!READ_ONCE(all_cpu_data)))
-		goto ack_intr;
-
-	/*
-	 * The free is done during cleanup, when cpufreq registry is failed.
-	 * We wouldn't be here if it fails on init or switch status. But for
-	 * future changes, added check.
-	 */
-	cpudata = READ_ONCE(all_cpu_data[this_cpu]);
-	if (unlikely(!cpudata))
-		goto ack_intr;
-
 	schedule_delayed_work(&cpudata->hwp_notify_work, msecs_to_jiffies(10));
 
 	spin_unlock_irqrestore(&hwp_notify_lock, flags);
@@ -3464,7 +3447,7 @@ static int __init intel_pstate_init(void
 		 * deal with it.
 		 */
 		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
-			WRITE_ONCE(hwp_active, 1);
+			hwp_active = true;
 			hwp_mode_bdw = id->driver_data;
 			intel_pstate.attr = hwp_cpufreq_attrs;
 			intel_cpufreq.attr = hwp_cpufreq_attrs;




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [Update][PATCH v1.2 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations
  2024-03-25 16:45   ` [Update][PATCH v1.1 " Rafael J. Wysocki
@ 2024-03-28 19:00     ` Rafael J. Wysocki
  0 siblings, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2024-03-28 19:00 UTC (permalink / raw)
  To: Linux PM; +Cc: LKML, Srinivas Pandruvada

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Drop two redundant checks involving READ_ONCE() from notify_hwp_interrupt()
and make it check hwp_active without READ_ONCE(), which is not necessary
because that variable is only set once, during the early initialization of
the driver.

In order to make that clear, annotate hwp_active with __ro_after_init.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

v1.1 -> v1.2: Fix uninitialized memory access introduced in v1.1: the cpudata
   local was still used by schedule_delayed_work() after the statement
   initializing it had been removed.

---
 drivers/cpufreq/intel_pstate.c |   27 +++++----------------------
 1 file changed, 5 insertions(+), 22 deletions(-)

Index: linux-pm/drivers/cpufreq/intel_pstate.c
===================================================================
--- linux-pm.orig/drivers/cpufreq/intel_pstate.c
+++ linux-pm/drivers/cpufreq/intel_pstate.c
@@ -292,7 +292,7 @@ struct pstate_funcs {
 
 static struct pstate_funcs pstate_funcs __read_mostly;
 
-static int hwp_active __read_mostly;
+static bool hwp_active __ro_after_init;
 static int hwp_mode_bdw __read_mostly;
 static bool per_cpu_limits __read_mostly;
 static bool hwp_boost __read_mostly;
@@ -1636,11 +1636,10 @@ static cpumask_t hwp_intr_enable_mask;
 void notify_hwp_interrupt(void)
 {
 	unsigned int this_cpu = smp_processor_id();
-	struct cpudata *cpudata;
 	unsigned long flags;
 	u64 value;
 
-	if (!READ_ONCE(hwp_active) || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	rdmsrl_safe(MSR_HWP_STATUS, &value);
@@ -1652,24 +1651,8 @@ void notify_hwp_interrupt(void)
 	if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
 		goto ack_intr;
 
-	/*
-	 * Currently we never free all_cpu_data. And we can't reach here
-	 * without this allocated. But for safety for future changes, added
-	 * check.
-	 */
-	if (unlikely(!READ_ONCE(all_cpu_data)))
-		goto ack_intr;
-
-	/*
-	 * The free is done during cleanup, when cpufreq registry is failed.
-	 * We wouldn't be here if it fails on init or switch status. But for
-	 * future changes, added check.
-	 */
-	cpudata = READ_ONCE(all_cpu_data[this_cpu]);
-	if (unlikely(!cpudata))
-		goto ack_intr;
-
-	schedule_delayed_work(&cpudata->hwp_notify_work, msecs_to_jiffies(10));
+	schedule_delayed_work(&all_cpu_data[this_cpu]->hwp_notify_work,
+			      msecs_to_jiffies(10));
 
 	spin_unlock_irqrestore(&hwp_notify_lock, flags);
 
@@ -3464,7 +3447,7 @@ static int __init intel_pstate_init(void
 		 * deal with it.
 		 */
 		if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
-			WRITE_ONCE(hwp_active, 1);
+			hwp_active = true;
 			hwp_mode_bdw = id->driver_data;
 			intel_pstate.attr = hwp_cpufreq_attrs;
 			intel_cpufreq.attr = hwp_cpufreq_attrs;




^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2024-03-28 19:00 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-21 19:28 [PATCH v1 0/5] intel_pstate: Some code cleanups, mostly related to concurrency Rafael J. Wysocki
2024-03-21 19:29 ` [PATCH v1 1/5] cpufreq: intel_pstate: Drop redundant locking from intel_pstate_driver_cleanup() Rafael J. Wysocki
2024-03-21 19:30 ` [PATCH v1 2/5] cpufreq: intel_pstate: Simplify spinlock locking Rafael J. Wysocki
2024-03-21 19:32 ` [PATCH v1 3/5] cpufreq: intel_pstate: Wait for canceled delayed work to complete Rafael J. Wysocki
2024-03-21 19:33 ` [PATCH v1 4/5] cpufreq: intel_pstate: Get rid of unnecessary READ_ONCE() annotations Rafael J. Wysocki
2024-03-25 16:45   ` [Update][PATCH v1.1 " Rafael J. Wysocki
2024-03-28 19:00     ` [Update][PATCH v1.2 " Rafael J. Wysocki
2024-03-21 19:34 ` [PATCH v1 5/5] cpufreq: intel_pstate: Use __ro_after_init for three variables Rafael J. Wysocki

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).