* [PATCH v4 0/3] selinux: speed up avc_search_node() with large number of avc nodes
@ 2025-10-23 11:27 Hongru Zhang
2025-10-23 11:29 ` [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable Hongru Zhang
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Hongru Zhang @ 2025-10-23 11:27 UTC (permalink / raw)
To: paul, stephen.smalley.work, omosnace; +Cc: linux-kernel, selinux, zhanghongru
Under high load on mobile devices, permission checks can occur more than
90,000 times per second (on an 8-core system). With the default
configuration of 512 cache nodes, AVC cache misses happen more often and
occasionally lead to long (>2 ms) IRQs-off periods on both big and little
cores, which degrades the system's real-time responsiveness.
Although we can raise the number of AVC nodes through
/sys/fs/selinux/avc/cache_threshold to mitigate the long IRQs-off periods,
the fixed number of cache slots means hash collisions lengthen the average
bucket chain, increasing avc_search_node() latency.
This patch series aims to allow expanding the number of AVC nodes without
significantly increasing avc_search_node() latency.
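As a back-of-the-envelope illustration of why both knobs matter (assuming a
reasonably uniform hash), the expected average chain length is roughly
active_nodes / slots: with the default 512 slots, raising cache_threshold to
8192 nodes means about 16 entries per bucket even under an ideal hash,
whereas matching the slot count to the threshold (8192 buckets, i.e.
CONFIG_SECURITY_SELINUX_AVC_HASH_BITS=13) keeps the expectation near 1.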
This is v4 of the patch series.
Changes since v3:
- use MurmurHash3 to improve avc_hash()
- add a config to adjust avc cache slot size instead of kernel parameter
Changes since v2:
- update changelog and fix warnings for commit 000bce8f11d0 ("selinux:
improve bucket distribution uniformity of avc_hash()")
Changes since v1:
- improve hash algorithm in avc_hash()
- define avc_cache_slots as a static variable
v1 discussion:
https://lore.kernel.org/selinux/20250905100454.685866-1-zhanghongru@xiaomi.com/
v2 discussion:
https://lore.kernel.org/selinux/cover.1758633723.git.zhanghongru@xiaomi.com/
v3 discussion:
https://lore.kernel.org/selinux/cover.1758859391.git.zhanghongru@xiaomi.com/
Hongru Zhang (3):
selinux: Introduce a new config to make avc cache slot size adjustable
selinux: Move avtab_hash() to a shared location for future reuse
selinux: improve bucket distribution uniformity of avc_hash()
security/selinux/Kconfig | 11 ++++++++
security/selinux/avc.c | 9 ++++---
security/selinux/include/hash.h | 47 +++++++++++++++++++++++++++++++++
security/selinux/ss/avtab.c | 39 +++------------------------
4 files changed, 66 insertions(+), 40 deletions(-)
create mode 100644 security/selinux/include/hash.h
base-commit: c6f0a830ef4bffc8eedd7ddb307bfba4929cd3e1
--
2.43.0
* [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable
2025-10-23 11:27 [PATCH v4 0/3] selinux: speed up avc_search_node() with large number of avc nodes Hongru Zhang
@ 2025-10-23 11:29 ` Hongru Zhang
2025-10-23 22:24 ` Paul Moore
2025-10-23 11:29 ` [PATCH v4 2/3] selinux: Move avtab_hash() to a shared location for future reuse Hongru Zhang
2025-10-23 11:30 ` [PATCH v4 3/3] selinux: improve bucket distribution uniformity of avc_hash() Hongru Zhang
2 siblings, 1 reply; 7+ messages in thread
From: Hongru Zhang @ 2025-10-23 11:29 UTC (permalink / raw)
To: paul, stephen.smalley.work, omosnace; +Cc: linux-kernel, selinux, zhanghongru
From: Hongru Zhang <zhanghongru@xiaomi.com>
Under high load on mobile devices, permission checks can occur more than
90,000 times per second (on an 8-core system). With the default
configuration of 512 cache nodes, AVC cache misses happen more often and
occasionally lead to long (>2 ms) IRQs-off periods on both big and little
cores, which degrades the system's real-time responsiveness.
An actual call stack is as follows:
=> avc_compute_av
=> avc_perm_nonode
=> avc_has_perm_noaudit
=> selinux_capable
=> security_capable
=> capable
=> __sched_setscheduler
=> do_sched_setscheduler
=> __arm64_sys_sched_setscheduler
=> invoke_syscall
=> el0_svc_common
=> do_el0_svc
=> el0_svc
=> el0t_64_sync_handler
=> el0t_64_sync
Although we can raise the number of AVC nodes through
/sys/fs/selinux/avc/cache_threshold to mitigate the long IRQs-off periods,
the fixed number of cache slots means hash collisions lengthen the average
bucket chain, increasing avc_search_node() latency.
So introduce a new config option that also makes the number of AVC cache
slots configurable; with some tuning, we can mitigate the long IRQs-off
periods at the cost of only a slight avc_search_node() performance
regression.
In theory, the main overhead is the additional memory consumption.
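A rough estimate of that overhead (a sketch, assuming the AVC keeps one
hlist_head plus one spinlock_t per slot, i.e. roughly 12 bytes per slot
without lock debugging): the default 2^9 = 512 slots take on the order of
6 KiB, while the maximum 2^14 = 16384 slots take on the order of 192 KiB,
in addition to whatever extra avc_node entries a raised cache_threshold
allows.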
Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
---
security/selinux/Kconfig | 11 +++++++++++
security/selinux/avc.c | 6 +++---
2 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/security/selinux/Kconfig b/security/selinux/Kconfig
index 61abc1e094a8..5588c4d573f6 100644
--- a/security/selinux/Kconfig
+++ b/security/selinux/Kconfig
@@ -69,6 +69,17 @@ config SECURITY_SELINUX_SID2STR_CACHE_SIZE
If unsure, keep the default value.
+config SECURITY_SELINUX_AVC_HASH_BITS
+ int "SELinux avc hashtable size"
+ depends on SECURITY_SELINUX
+ range 9 14
+ default 9
+ help
+ This option sets the number of buckets used in the AVC hash table
+ to 2^SECURITY_SELINUX_AVC_HASH_BITS. A higher value helps maintain
+ shorter chain lengths especially when expanding AVC nodes via
+ /sys/fs/selinux/avc/cache_threshold.
+
config SECURITY_SELINUX_DEBUG
bool "SELinux kernel debugging support"
depends on SECURITY_SELINUX
diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index 430b0e23ee00..c12d45e46db6 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -34,9 +34,9 @@
#define CREATE_TRACE_POINTS
#include <trace/events/avc.h>
-#define AVC_CACHE_SLOTS 512
-#define AVC_DEF_CACHE_THRESHOLD 512
-#define AVC_CACHE_RECLAIM 16
+#define AVC_CACHE_SLOTS (1 << CONFIG_SECURITY_SELINUX_AVC_HASH_BITS)
+#define AVC_DEF_CACHE_THRESHOLD AVC_CACHE_SLOTS
+#define AVC_CACHE_RECLAIM 16
#ifdef CONFIG_SECURITY_SELINUX_AVC_STATS
#define avc_cache_stats_incr(field) this_cpu_inc(avc_cache_stats.field)
--
2.43.0
* [PATCH v4 2/3] selinux: Move avtab_hash() to a shared location for future reuse
2025-10-23 11:27 [PATCH v4 0/3] selinux: speed up avc_search_node() with large number of avc nodes Hongru Zhang
2025-10-23 11:29 ` [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable Hongru Zhang
@ 2025-10-23 11:29 ` Hongru Zhang
2025-10-23 22:24 ` Paul Moore
2025-10-23 11:30 ` [PATCH v4 3/3] selinux: improve bucket distribution uniformity of avc_hash() Hongru Zhang
2 siblings, 1 reply; 7+ messages in thread
From: Hongru Zhang @ 2025-10-23 11:29 UTC (permalink / raw)
To: paul, stephen.smalley.work, omosnace; +Cc: linux-kernel, selinux, zhanghongru
From: Hongru Zhang <zhanghongru@xiaomi.com>
This is a preparation patch, no functional change.
Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
---
security/selinux/include/hash.h | 46 +++++++++++++++++++++++++++++++++
security/selinux/ss/avtab.c | 41 +----------------------------
2 files changed, 47 insertions(+), 40 deletions(-)
create mode 100644 security/selinux/include/hash.h
diff --git a/security/selinux/include/hash.h b/security/selinux/include/hash.h
new file mode 100644
index 000000000000..5b429a873eb6
--- /dev/null
+++ b/security/selinux/include/hash.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _SELINUX_HASH_H_
+#define _SELINUX_HASH_H_
+
+/* Based on MurmurHash3, written by Austin Appleby and placed in the
+ * public domain.
+ */
+static inline u32 avtab_hash(const struct avtab_key *keyp, u32 mask)
+{
+ static const u32 c1 = 0xcc9e2d51;
+ static const u32 c2 = 0x1b873593;
+ static const u32 r1 = 15;
+ static const u32 r2 = 13;
+ static const u32 m = 5;
+ static const u32 n = 0xe6546b64;
+
+ u32 hash = 0;
+
+#define mix(input) \
+ do { \
+ u32 v = input; \
+ v *= c1; \
+ v = (v << r1) | (v >> (32 - r1)); \
+ v *= c2; \
+ hash ^= v; \
+ hash = (hash << r2) | (hash >> (32 - r2)); \
+ hash = hash * m + n; \
+ } while (0)
+
+ mix(keyp->target_class);
+ mix(keyp->target_type);
+ mix(keyp->source_type);
+
+#undef mix
+
+ hash ^= hash >> 16;
+ hash *= 0x85ebca6b;
+ hash ^= hash >> 13;
+ hash *= 0xc2b2ae35;
+ hash ^= hash >> 16;
+
+ return hash & mask;
+}
+
+#endif /* _SELINUX_HASH_H_ */
diff --git a/security/selinux/ss/avtab.c b/security/selinux/ss/avtab.c
index c2c31521cace..15e89d9b5d72 100644
--- a/security/selinux/ss/avtab.c
+++ b/security/selinux/ss/avtab.c
@@ -20,50 +20,11 @@
#include <linux/errno.h>
#include "avtab.h"
#include "policydb.h"
+#include "hash.h"
static struct kmem_cache *avtab_node_cachep __ro_after_init;
static struct kmem_cache *avtab_xperms_cachep __ro_after_init;
-/* Based on MurmurHash3, written by Austin Appleby and placed in the
- * public domain.
- */
-static inline u32 avtab_hash(const struct avtab_key *keyp, u32 mask)
-{
- static const u32 c1 = 0xcc9e2d51;
- static const u32 c2 = 0x1b873593;
- static const u32 r1 = 15;
- static const u32 r2 = 13;
- static const u32 m = 5;
- static const u32 n = 0xe6546b64;
-
- u32 hash = 0;
-
-#define mix(input) \
- do { \
- u32 v = input; \
- v *= c1; \
- v = (v << r1) | (v >> (32 - r1)); \
- v *= c2; \
- hash ^= v; \
- hash = (hash << r2) | (hash >> (32 - r2)); \
- hash = hash * m + n; \
- } while (0)
-
- mix(keyp->target_class);
- mix(keyp->target_type);
- mix(keyp->source_type);
-
-#undef mix
-
- hash ^= hash >> 16;
- hash *= 0x85ebca6b;
- hash ^= hash >> 13;
- hash *= 0xc2b2ae35;
- hash ^= hash >> 16;
-
- return hash & mask;
-}
-
static struct avtab_node *avtab_insert_node(struct avtab *h,
struct avtab_node **dst,
const struct avtab_key *key,
--
2.43.0
* [PATCH v4 3/3] selinux: improve bucket distribution uniformity of avc_hash()
2025-10-23 11:27 [PATCH v4 0/3] selinux: speed up avc_search_node() with large number of avc nodes Hongru Zhang
2025-10-23 11:29 ` [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable Hongru Zhang
2025-10-23 11:29 ` [PATCH v4 2/3] selinux: Move avtab_hash() to a shared location for future reuse Hongru Zhang
@ 2025-10-23 11:30 ` Hongru Zhang
2025-10-23 22:24 ` Paul Moore
2 siblings, 1 reply; 7+ messages in thread
From: Hongru Zhang @ 2025-10-23 11:30 UTC (permalink / raw)
To: paul, stephen.smalley.work, omosnace; +Cc: linux-kernel, selinux, zhanghongru
From: Hongru Zhang <zhanghongru@xiaomi.com>
Reuse the MurmurHash3-based hash already implemented for avtab_hash(). Under
heavy stress testing (an 8-core system sustaining over 50,000 authentication
events per second), samples were taken once per second and the mean over
1800 samples is reported:
1. Bucket utilization rate and length of longest chain
+--------------------------+-----------------------------------------+
| | bucket utilization rate / longest chain |
| +--------------------+--------------------+
| | no-patch | with-patch |
+--------------------------+--------------------+--------------------+
| 512 nodes, 512 buckets | 52.5%/7.5 | 60.2%/5.7 |
+--------------------------+--------------------+--------------------+
| 1024 nodes, 512 buckets | 68.9%/12.1 | 80.2%/9.7 |
+--------------------------+--------------------+--------------------+
| 2048 nodes, 512 buckets | 83.7%/19.4 | 93.4%/16.3 |
+--------------------------+--------------------+--------------------+
| 8192 nodes, 8192 buckets | 49.5%/11.4 | 60.3%/7.4 |
+--------------------------+--------------------+--------------------+
2. avc_search_node latency (total latency of hash operation and table
lookup)
+--------------------------+-----------------------------------------+
| | latency of function avc_search_node |
| +--------------------+--------------------+
| | no-patch | with-patch |
+--------------------------+--------------------+--------------------+
| 512 nodes, 512 buckets | 87ns | 84ns |
+--------------------------+--------------------+--------------------+
| 1024 nodes, 512 buckets | 97ns | 96ns |
+--------------------------+--------------------+--------------------+
| 2048 nodes, 512 buckets | 118ns | 113ns |
+--------------------------+--------------------+--------------------+
| 8192 nodes, 8192 buckets | 106ns | 99ns |
+--------------------------+--------------------+--------------------+
Although MurmurHash3 has higher overhead than the bitwise operations in the
original algorithm, the data shows that MurmurHash3 achieves a better
distribution, reducing the average lookup time. Consequently, the total
latency of hashing plus table lookup is lower than before.
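For anyone who wants to reproduce the distribution difference outside the
kernel, here is a minimal userspace sketch (not part of this patch; the
512-slot table and the clustered key ranges are made up for illustration)
that feeds low-entropy keys, similar in shape to SIDs and class indices,
through the old XOR/shift hash and the MurmurHash3-style mix and prints how
many buckets each one touches:

#include <stdio.h>
#include <stdint.h>

#define SLOTS 512

/* the pre-patch avc_hash() */
static uint32_t old_hash(uint32_t ssid, uint32_t tsid, uint16_t tclass)
{
	return (ssid ^ (tsid << 2) ^ (tclass << 4)) & (SLOTS - 1);
}

/* one MurmurHash3 mix round, as in av_hash()'s mix() macro */
static uint32_t mur_mix(uint32_t hash, uint32_t v)
{
	v *= 0xcc9e2d51;
	v = (v << 15) | (v >> 17);
	v *= 0x1b873593;
	hash ^= v;
	hash = (hash << 13) | (hash >> 19);
	return hash * 5 + 0xe6546b64;
}

/* the post-patch avc_hash(), i.e. av_hash(ssid, tsid, tclass, mask) */
static uint32_t new_hash(uint32_t ssid, uint32_t tsid, uint16_t tclass)
{
	uint32_t hash = 0;

	hash = mur_mix(hash, ssid);
	hash = mur_mix(hash, tsid);
	hash = mur_mix(hash, tclass);

	/* finalization */
	hash ^= hash >> 16;
	hash *= 0x85ebca6b;
	hash ^= hash >> 13;
	hash *= 0xc2b2ae35;
	hash ^= hash >> 16;

	return hash & (SLOTS - 1);
}

int main(void)
{
	static int old_buckets[SLOTS], new_buckets[SLOTS];
	int old_used = 0, new_used = 0;

	/* synthetic, clustered keys: small SIDs, a handful of classes */
	for (uint32_t ssid = 1; ssid <= 64; ssid++)
		for (uint32_t tsid = 1; tsid <= 16; tsid++)
			for (uint16_t tclass = 1; tclass <= 4; tclass++) {
				old_buckets[old_hash(ssid, tsid, tclass)]++;
				new_buckets[new_hash(ssid, tsid, tclass)]++;
			}

	for (int i = 0; i < SLOTS; i++) {
		old_used += !!old_buckets[i];
		new_used += !!new_buckets[i];
	}
	printf("buckets used: old=%d new=%d (of %d)\n",
	       old_used, new_used, SLOTS);
	return 0;
}

With inputs this clustered, the XOR/shift variant can only ever reach the
low 128 buckets, while the Murmur-style mix spreads the same keys across
nearly the whole table; the utilization and chain-length numbers above show
the same effect on the real workload.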
Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
---
security/selinux/avc.c | 3 ++-
security/selinux/include/hash.h | 11 ++++++-----
security/selinux/ss/avtab.c | 6 ++++++
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index c12d45e46db6..8f77b9a732e1 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -30,6 +30,7 @@
#include "avc.h"
#include "avc_ss.h"
#include "classmap.h"
+#include "hash.h"
#define CREATE_TRACE_POINTS
#include <trace/events/avc.h>
@@ -124,7 +125,7 @@ static struct kmem_cache *avc_xperms_cachep __ro_after_init;
static inline u32 avc_hash(u32 ssid, u32 tsid, u16 tclass)
{
- return (ssid ^ (tsid<<2) ^ (tclass<<4)) & (AVC_CACHE_SLOTS - 1);
+ return av_hash(ssid, tsid, (u32)tclass, (u32)(AVC_CACHE_SLOTS - 1));
}
/**
diff --git a/security/selinux/include/hash.h b/security/selinux/include/hash.h
index 5b429a873eb6..18956dbef8ff 100644
--- a/security/selinux/include/hash.h
+++ b/security/selinux/include/hash.h
@@ -3,10 +3,11 @@
#ifndef _SELINUX_HASH_H_
#define _SELINUX_HASH_H_
-/* Based on MurmurHash3, written by Austin Appleby and placed in the
+/*
+ * Based on MurmurHash3, written by Austin Appleby and placed in the
* public domain.
*/
-static inline u32 avtab_hash(const struct avtab_key *keyp, u32 mask)
+static inline u32 av_hash(u32 key1, u32 key2, u32 key3, u32 mask)
{
static const u32 c1 = 0xcc9e2d51;
static const u32 c2 = 0x1b873593;
@@ -28,9 +29,9 @@ static inline u32 avtab_hash(const struct avtab_key *keyp, u32 mask)
hash = hash * m + n; \
} while (0)
- mix(keyp->target_class);
- mix(keyp->target_type);
- mix(keyp->source_type);
+ mix(key1);
+ mix(key2);
+ mix(key3);
#undef mix
diff --git a/security/selinux/ss/avtab.c b/security/selinux/ss/avtab.c
index 15e89d9b5d72..7d44b546ab45 100644
--- a/security/selinux/ss/avtab.c
+++ b/security/selinux/ss/avtab.c
@@ -25,6 +25,12 @@
static struct kmem_cache *avtab_node_cachep __ro_after_init;
static struct kmem_cache *avtab_xperms_cachep __ro_after_init;
+static inline u32 avtab_hash(const struct avtab_key *keyp, u32 mask)
+{
+ return av_hash((u32)keyp->target_class, (u32)keyp->target_type,
+ (u32)keyp->source_type, mask);
+}
+
static struct avtab_node *avtab_insert_node(struct avtab *h,
struct avtab_node **dst,
const struct avtab_key *key,
--
2.43.0
* Re: [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable
2025-10-23 11:29 ` [PATCH v4 1/3] selinux: Introduce a new config to make avc cache slot size adjustable Hongru Zhang
@ 2025-10-23 22:24 ` Paul Moore
0 siblings, 0 replies; 7+ messages in thread
From: Paul Moore @ 2025-10-23 22:24 UTC (permalink / raw)
To: Hongru Zhang, stephen.smalley.work, omosnace
Cc: linux-kernel, selinux, zhanghongru
On Oct 23, 2025 Hongru Zhang <zhanghongru06@gmail.com> wrote:
>
> Under high load on mobile devices, permission checks can occur more than
> 90,000 times per second (on an 8-core system). With the default
> configuration of 512 cache nodes, AVC cache misses happen more often and
> occasionally lead to long (>2 ms) IRQs-off periods on both big and little
> cores, which degrades the system's real-time responsiveness.
>
> An actual call stack is as follows:
> => avc_compute_av
> => avc_perm_nonode
> => avc_has_perm_noaudit
> => selinux_capable
> => security_capable
> => capable
> => __sched_setscheduler
> => do_sched_setscheduler
> => __arm64_sys_sched_setscheduler
> => invoke_syscall
> => el0_svc_common
> => do_el0_svc
> => el0_svc
> => el0t_64_sync_handler
> => el0t_64_sync
>
> Although we can raise the number of AVC nodes through
> /sys/fs/selinux/avc/cache_threshold to mitigate the long IRQs-off periods,
> the fixed number of cache slots means hash collisions lengthen the average
> bucket chain, increasing avc_search_node() latency.
>
> So introduce a new config option that also makes the number of AVC cache
> slots configurable; with some tuning, we can mitigate the long IRQs-off
> periods at the cost of only a slight avc_search_node() performance
> regression.
>
> In theory, the main overhead is the additional memory consumption.
>
> Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
> ---
> security/selinux/Kconfig | 11 +++++++++++
> security/selinux/avc.c | 6 +++---
> 2 files changed, 14 insertions(+), 3 deletions(-)
Merged into selinux/dev, thanks!
--
paul-moore.com
* Re: [PATCH v4 2/3] selinux: Move avtab_hash() to a shared location for future reuse
2025-10-23 11:29 ` [PATCH v4 2/3] selinux: Move avtab_hash() to a shared location for future reuse Hongru Zhang
@ 2025-10-23 22:24 ` Paul Moore
0 siblings, 0 replies; 7+ messages in thread
From: Paul Moore @ 2025-10-23 22:24 UTC (permalink / raw)
To: Hongru Zhang, stephen.smalley.work, omosnace
Cc: linux-kernel, selinux, zhanghongru
On Oct 23, 2025 Hongru Zhang <zhanghongru06@gmail.com> wrote:
>
> This is a preparation patch, no functional change.
>
> Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
> ---
> security/selinux/include/hash.h | 46 +++++++++++++++++++++++++++++++++
> security/selinux/ss/avtab.c | 41 +----------------------------
> 2 files changed, 47 insertions(+), 40 deletions(-)
> create mode 100644 security/selinux/include/hash.h
Merged into selinux/dev, thanks.
--
paul-moore.com
* Re: [PATCH v4 3/3] selinux: improve bucket distribution uniformity of avc_hash()
2025-10-23 11:30 ` [PATCH v4 3/3] selinux: improve bucket distribution uniformity of avc_hash() Hongru Zhang
@ 2025-10-23 22:24 ` Paul Moore
0 siblings, 0 replies; 7+ messages in thread
From: Paul Moore @ 2025-10-23 22:24 UTC (permalink / raw)
To: Hongru Zhang, stephen.smalley.work, omosnace
Cc: linux-kernel, selinux, zhanghongru
On Oct 23, 2025 Hongru Zhang <zhanghongru06@gmail.com> wrote:
>
> Reuse the MurmurHash3-based hash already implemented for avtab_hash(). Under
> heavy stress testing (an 8-core system sustaining over 50,000 authentication
> events per second), samples were taken once per second and the mean over
> 1800 samples is reported:
>
> 1. Bucket utilization rate and length of longest chain
> +--------------------------+-----------------------------------------+
> | | bucket utilization rate / longest chain |
> | +--------------------+--------------------+
> | | no-patch | with-patch |
> +--------------------------+--------------------+--------------------+
> | 512 nodes, 512 buckets | 52.5%/7.5 | 60.2%/5.7 |
> +--------------------------+--------------------+--------------------+
> | 1024 nodes, 512 buckets | 68.9%/12.1 | 80.2%/9.7 |
> +--------------------------+--------------------+--------------------+
> | 2048 nodes, 512 buckets | 83.7%/19.4 | 93.4%/16.3 |
> +--------------------------+--------------------+--------------------+
> | 8192 nodes, 8192 buckets | 49.5%/11.4 | 60.3%/7.4 |
> +--------------------------+--------------------+--------------------+
>
> 2. avc_search_node latency (total latency of hash operation and table
> lookup)
> +--------------------------+-----------------------------------------+
> | | latency of function avc_search_node |
> | +--------------------+--------------------+
> | | no-patch | with-patch |
> +--------------------------+--------------------+--------------------+
> | 512 nodes, 512 buckets | 87ns | 84ns |
> +--------------------------+--------------------+--------------------+
> | 1024 nodes, 512 buckets | 97ns | 96ns |
> +--------------------------+--------------------+--------------------+
> | 2048 nodes, 512 buckets | 118ns | 113ns |
> +--------------------------+--------------------+--------------------+
> | 8192 nodes, 8192 buckets | 106ns | 99ns |
> +--------------------------+--------------------+--------------------+
>
> Although MurmurHash3 has higher overhead than the bitwise operations in
> the original algorithm, the data shows that MurmurHash3 achieves a better
> distribution, reducing the average lookup time. Consequently, the total
> latency of hashing plus table lookup is lower than before.
>
> Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
> ---
> security/selinux/avc.c | 3 ++-
> security/selinux/include/hash.h | 11 ++++++-----
> security/selinux/ss/avtab.c | 6 ++++++
> 3 files changed, 14 insertions(+), 6 deletions(-)
Merged into selinux/dev, thanks!
--
paul-moore.com