* CMA: test_pages_isolated failures in alloc_contig_range
@ 2014-10-26 21:09 Laurent Pinchart
2014-10-27 20:38 ` Laura Abbott
2014-10-28 12:38 ` Michal Nazarewicz
0 siblings, 2 replies; 8+ messages in thread
From: Laurent Pinchart @ 2014-10-26 21:09 UTC (permalink / raw)
To: linux-mm
Cc: linux-kernel, linux-sh, Michal Nazarewicz,
Bartlomiej Zolnierkiewicz, Minchan Kim
Hello,
I've run into a CMA-related issue while testing a DMA engine driver with
dmatest on a Renesas R-Car ARM platform.
When allocating contiguous memory through CMA the kernel prints the following
messages to the kernel log.
[ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
[ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
[ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
[ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
[ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
[ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
[ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
I've stripped the dmatest module down as much as possible to remove any
hardware dependencies and came up with the following implementation.
-----------------------------------------------------------------------------
/*
* CMA test module
*
* Copyright (C) 2014 Laurent Pinchart
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/wait.h>
static unsigned int num_threads = 4;
module_param(num_threads, uint, S_IRUGO | S_IWUSR);
static unsigned int iterations = 100000;
module_param(iterations, uint, S_IRUGO | S_IWUSR);
struct cma_test_thread {
struct list_head node;
struct task_struct *task;
bool done;
};
static DECLARE_WAIT_QUEUE_HEAD(thread_wait);
static LIST_HEAD(threads);
static int cma_test_thread(void *data)
{
struct cma_test_thread *thread = data;
unsigned int i = 0;
set_freezable();
while (!kthread_should_stop() && i < iterations) {
dma_addr_t dma;
void *mem;
mem = dma_alloc_coherent(NULL, 32, &dma, GFP_KERNEL);
usleep_range(1000, 2000);
if (mem)
dma_free_coherent(NULL, 32, mem, dma);
else
printk(KERN_INFO "allocation error @%u\n", i);
++i;
}
thread->done = true;
wake_up(&thread_wait);
return 0;
}
static bool cma_test_threads_done(void)
{
struct cma_test_thread *thread;
list_for_each_entry(thread, &threads, node) {
if (!thread->done)
return false;
}
return true;
}
static int cma_test_init(void)
{
struct cma_test_thread *thread, *_thread;
unsigned int i;
for (i = 0; i < num_threads; ++i) {
thread = kzalloc(sizeof(*thread), GFP_KERNEL);
if (!thread) {
pr_warn("No memory for thread %u\n", i);
break;
}
thread->task = kthread_create(cma_test_thread, thread,
"cmatest-%u", i);
if (IS_ERR(thread->task)) {
pr_warn("Failed to create thread %u\n", i);
kfree(thread);
break;
}
get_task_struct(thread->task);
list_add_tail(&thread->node, &threads);
wake_up_process(thread->task);
}
wait_event(thread_wait, cma_test_threads_done());
list_for_each_entry_safe(thread, _thread, &threads, node) {
kthread_stop(thread->task);
put_task_struct(thread->task);
list_del(&thread->node);
kfree(thread);
}
return 0;
}
module_init(cma_test_init);
static void cma_test_exit(void)
{
}
module_exit(cma_test_exit);
MODULE_AUTHOR("Laurent Pinchart");
MODULE_LICENSE("GPL v2");
-----------------------------------------------------------------------------
Loading the module will start 4 threads that will allocate and free DMA
coherent memory in a tight loop and eventually produce the error. It seems
like the probability of occurrence grows with the number of threads, which
could indicate a race condition.
The tests have been run on 3.18-rc1, but previous tests on 3.16 exhibited
the same behaviour.
I'm not that familiar with the CMA internals; any help debugging the problem
would be appreciated.
--
Regards,
Laurent Pinchart
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-26 21:09 CMA: test_pages_isolated failures in alloc_contig_range Laurent Pinchart
@ 2014-10-27 20:38 ` Laura Abbott
2014-10-28 15:12 ` Laurent Pinchart
2014-10-28 12:38 ` Michal Nazarewicz
1 sibling, 1 reply; 8+ messages in thread
From: Laura Abbott @ 2014-10-27 20:38 UTC (permalink / raw)
To: Laurent Pinchart, linux-mm
Cc: linux-kernel, linux-sh, Michal Nazarewicz,
Bartlomiej Zolnierkiewicz, Minchan Kim
On 10/26/2014 2:09 PM, Laurent Pinchart wrote:
> Hello,
>
> I've run into a CMA-related issue while testing a DMA engine driver with
> dmatest on a Renesas R-Car ARM platform.
>
> When allocating contiguous memory through CMA the kernel prints the following
> messages to the kernel log.
>
> [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>
> I've stripped the dmatest module down as much as possible to remove any
> hardware dependencies and came up with the following implementation.
>
...
>
> Loading the module will start 4 threads that will allocate and free DMA
> coherent memory in a tight loop and eventually produce the error. It seems
> like the probability of occurrence grows with the number of threads, which
> could indicate a race condition.
>
> The tests have been run on 3.18-rc1, but previous tests on 3.16 did exhibit
> the same behaviour.
>
> I'm not that familiar with the CMA internals, help would be appreciated to
> debug the problem.
>
Are you actually seeing allocation failures or is it just the messages?
The messages themselves may be harmless if the allocation is succeeding.
It's an indication that the particular range could not be isolated and
therefore another range should be used for the CMA allocation. Joonsoo
Kim had a patch series[1] that was designed to correct some problems with
isolation, and from my testing it helps fix some CMA-related errors. You
might try picking that up to see if it helps.
Thanks,
Laura
[1] https://lkml.org/lkml/2014/10/23/90
--
Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-26 21:09 CMA: test_pages_isolated failures in alloc_contig_range Laurent Pinchart
2014-10-27 20:38 ` Laura Abbott
@ 2014-10-28 12:38 ` Michal Nazarewicz
2014-10-28 13:48 ` Peter Hurley
1 sibling, 1 reply; 8+ messages in thread
From: Michal Nazarewicz @ 2014-10-28 12:38 UTC (permalink / raw)
To: Laurent Pinchart, linux-mm
Cc: linux-kernel, linux-sh, Bartlomiej Zolnierkiewicz, Minchan Kim
On Sun, Oct 26 2014, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
> Hello,
>
> I've run into a CMA-related issue while testing a DMA engine driver with
> dmatest on a Renesas R-Car ARM platform.
>
> When allocating contiguous memory through CMA the kernel prints the following
> messages to the kernel log.
>
> [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>
> I've stripped the dmatest module down as much as possible to remove any
> hardware dependencies and came up with the following implementation.
Like Laura wrote, the message is not (should not be) a problem in
itself:
mm/page_alloc.c:
int alloc_contig_range(unsigned long start, unsigned long end,
unsigned migratetype)
{
[…]
/* Make sure the range is really isolated. */
if (test_pages_isolated(outer_start, end, false)) {
pr_warn("alloc_contig_range test_pages_isolated(%lx, %lx) failed\n",
outer_start, end);
ret = -EBUSY;
goto done;
}
[…]
done:
undo_isolate_page_range(pfn_max_align_down(start),
pfn_max_align_up(end), migratetype);
return ret;
}
mm/cma.c:
struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
{
[…]
for (;;) {
bitmap_no = bitmap_find_next_zero_area(cma->bitmap,
bitmap_maxno, start, bitmap_count, mask);
if (bitmap_no >= bitmap_maxno)
break;
bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
if (ret == 0) {
page = pfn_to_page(pfn);
break;
}
cma_clear_bitmap(cma, pfn, count);
if (ret != -EBUSY)
break;
pr_debug("%s(): memory range at %p is busy, retrying\n",
__func__, pfn_to_page(pfn));
/* try again with a bit different memory target */
start = bitmap_no + mask + 1;
}
[…]
}
So as you can see, cma_alloc will try another part of the CMA region if
test_pages_isolated fails.
Obviously, if the CMA region is fragmented or there's only enough space for
one allocation of the required size, isolation failures will cause allocation
failures, so it's best to avoid them, but they are not always avoidable.
To debug this, you would probably want to print more information about the
page that failed isolation (i.e. data from struct page) after the
pr_warn in alloc_contig_range.
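For instance, something along these lines right after the existing pr_warn
(an untested sketch on top of 3.18-rc1, not a proper patch; pick whichever
struct page fields you find interesting):

	/* Untested sketch: dump a few struct page fields for the first page
	 * of the range that failed isolation. */
	if (test_pages_isolated(outer_start, end, false)) {
		struct page *page = pfn_to_page(outer_start);

		pr_warn("alloc_contig_range test_pages_isolated(%lx, %lx) failed\n",
			outer_start, end);
		pr_warn("  pfn %lx: flags %#lx count %d mapcount %d migratetype %d\n",
			outer_start, page->flags, page_count(page),
			page_mapcount(page), get_pageblock_migratetype(page));
		ret = -EBUSY;
		goto done;
	}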
--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-28 12:38 ` Michal Nazarewicz
@ 2014-10-28 13:48 ` Peter Hurley
2014-10-28 16:57 ` Michal Nazarewicz
2014-10-28 18:59 ` Laurent Pinchart
0 siblings, 2 replies; 8+ messages in thread
From: Peter Hurley @ 2014-10-28 13:48 UTC (permalink / raw)
To: Michal Nazarewicz, Laurent Pinchart, linux-mm
Cc: linux-kernel, linux-sh, Bartlomiej Zolnierkiewicz, Minchan Kim,
Andrew Morton
[ +cc Andrew Morton ]
On 10/28/2014 08:38 AM, Michal Nazarewicz wrote:
> On Sun, Oct 26 2014, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:
>> Hello,
>>
>> I've run into a CMA-related issue while testing a DMA engine driver with
>> dmatest on a Renesas R-Car ARM platform.
>>
>> When allocating contiguous memory through CMA the kernel prints the following
>> messages to the kernel log.
>>
>> [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
>> [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
>> [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>> [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>> [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
>> [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
>> [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
>>
>> I've stripped the dmatest module down as much as possible to remove any
>> hardware dependencies and came up with the following implementation.
>
> Like Laura wrote, the message is not (should not be) a problem in
> itself:
[...]
> So as you can see cma_alloc will try another part of the cma region if
> test_pages_isolated fails.
>
> Obviously, if CMA region is fragmented or there's enough space for only
> one allocation of required size isolation failures will cause allocation
> failures, so it's best to avoid them, but they are not always avoidable.
>
> To debug you would probably want to add more debug information about the
> page (i.e. data from struct page) that failed isolation after the
> pr_warn in alloc_contig_range.
If the message does not indicate an actual problem, then its printk level is
too high. These messages have been reported when using 3.16+ distro kernels.
Regards,
Peter Hurley
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-27 20:38 ` Laura Abbott
@ 2014-10-28 15:12 ` Laurent Pinchart
0 siblings, 0 replies; 8+ messages in thread
From: Laurent Pinchart @ 2014-10-28 15:12 UTC (permalink / raw)
To: Laura Abbott
Cc: linux-mm, linux-kernel, linux-sh, Michal Nazarewicz,
Bartlomiej Zolnierkiewicz, Minchan Kim, Joonsoo Kim
Hi Laura,
On Monday 27 October 2014 13:38:19 Laura Abbott wrote:
> On 10/26/2014 2:09 PM, Laurent Pinchart wrote:
> > Hello,
> >
> > I've run into a CMA-related issue while testing a DMA engine driver with
> > dmatest on a Renesas R-Car ARM platform.
> >
> > When allocating contiguous memory through CMA the kernel prints the
> > following messages to the kernel log.
> >
> > [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> > [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> > [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed
> > [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed
> >
> > I've stripped the dmatest module down as much as possible to remove any
> > hardware dependencies and came up with the following implementation.
>
> ...
>
> > Loading the module will start 4 threads that will allocate and free DMA
> > coherent memory in a tight loop and eventually produce the error. It seems
> > like the probability of occurrence grows with the number of threads, which
> > could indicate a race condition.
> >
> > The tests have been run on 3.18-rc1, but previous tests on 3.16 did
> > exhibit the same behaviour.
> >
> > I'm not that familiar with the CMA internals, help would be appreciated to
> > debug the problem.
>
> Are you actually seeing allocation failures or is it just the messages?
It's just the messages, I haven't noticed allocation failures.
> The messages themselves may be harmless if the allocation is succeeding.
> It's an indication that the particular range could not be isolated and
> therefore another range should be used for the CMA allocation. Joonsoo
> Kim had a patch series[1] that was designed to correct some problems with
> isolation and from my testing it helps fix some CMA related errors. You
> might try picking that up to see if it helps.
>
> Thanks,
> Laura
>
> [1] https://lkml.org/lkml/2014/10/23/90
I've tested the patches but they don't seem to have any influence on the
isolation test failures.
--
Regards,
Laurent Pinchart
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-28 13:48 ` Peter Hurley
@ 2014-10-28 16:57 ` Michal Nazarewicz
2014-11-04 3:38 ` Peter Hurley
2014-10-28 18:59 ` Laurent Pinchart
1 sibling, 1 reply; 8+ messages in thread
From: Michal Nazarewicz @ 2014-10-28 16:57 UTC (permalink / raw)
To: Peter Hurley, Laurent Pinchart, linux-mm
Cc: linux-kernel, linux-sh, Bartlomiej Zolnierkiewicz, Minchan Kim,
Andrew Morton
> On 10/28/2014 08:38 AM, Michal Nazarewicz wrote:
>> Like Laura wrote, the message is not (should not be) a problem in
>> itself:
>
> [...]
>
>> So as you can see cma_alloc will try another part of the cma region if
>> test_pages_isolated fails.
>>
>> Obviously, if CMA region is fragmented or there's enough space for only
>> one allocation of required size isolation failures will cause allocation
>> failures, so it's best to avoid them, but they are not always avoidable.
>>
>> To debug you would probably want to add more debug information about the
>> page (i.e. data from struct page) that failed isolation after the
>> pr_warn in alloc_contig_range.
On Tue, Oct 28 2014, Peter Hurley <peter@hurleysoftware.com> wrote:
> If the message does not indicate an actual problem, then its printk level is
> too high. These messages have been reported when using 3.16+ distro kernels.
I think it could be argued both ways. The condition is not an error,
since in many cases cma_alloc will be able to continue, but it *is* an
undesired state. As such it's not an error but feels to me a bit more
than just information, hence a warning. I don't care either way, though.
--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-28 13:48 ` Peter Hurley
2014-10-28 16:57 ` Michal Nazarewicz
@ 2014-10-28 18:59 ` Laurent Pinchart
1 sibling, 0 replies; 8+ messages in thread
From: Laurent Pinchart @ 2014-10-28 18:59 UTC (permalink / raw)
To: Peter Hurley
Cc: Michal Nazarewicz, linux-mm, linux-kernel, linux-sh,
Bartlomiej Zolnierkiewicz, Minchan Kim, Andrew Morton
Hello,
On Tuesday 28 October 2014 09:48:26 Peter Hurley wrote:
> [ +cc Andrew Morton ]
>
> On 10/28/2014 08:38 AM, Michal Nazarewicz wrote:
> > On Sun, Oct 26 2014, Laurent Pinchart wrote:
> >> Hello,
> >>
> >> I've run into a CMA-related issue while testing a DMA engine driver with
> >> dmatest on a Renesas R-Car ARM platform.
> >>
> >> When allocating contiguous memory through CMA the kernel prints the
> >> following messages to the kernel log.
> >>
> >> [ 99.770000] alloc_contig_range test_pages_isolated(6b843, 6b844)
> >> failed
> >> [ 124.220000] alloc_contig_range test_pages_isolated(6b843, 6b844)
> >> failed
> >> [ 127.550000] alloc_contig_range test_pages_isolated(6b845, 6b846)
> >> failed
> >> [ 132.850000] alloc_contig_range test_pages_isolated(6b845, 6b846)
> >> failed
> >> [ 151.390000] alloc_contig_range test_pages_isolated(6b843, 6b844)
> >> failed
> >> [ 166.490000] alloc_contig_range test_pages_isolated(6b843, 6b844)
> >> failed
> >> [ 181.450000] alloc_contig_range test_pages_isolated(6b845, 6b846)
> >> failed
> >>
> >> I've stripped the dmatest module down as much as possible to remove any
> >> hardware dependencies and came up with the following implementation.
> >
> > Like Laura wrote, the message is not (should not be) a problem in
> > itself:
>
> [...]
>
> > So as you can see cma_alloc will try another part of the cma region if
> > test_pages_isolated fails.
> >
> > Obviously, if CMA region is fragmented or there's enough space for only
> > one allocation of required size isolation failures will cause allocation
> > failures, so it's best to avoid them, but they are not always avoidable.
> >
> > To debug you would probably want to add more debug information about the
> > page (i.e. data from struct page) that failed isolation after the
> > pr_warn in alloc_contig_range.
[ 94.730000] __test_page_isolated_in_pageblock: failed at pfn 6b845: buddy 0 count 0 migratetype 4 poison 0
[ 94.740000] alloc_contig_range test_pages_isolated(6b845, 6b846) failed (-16)
[ 202.140000] __test_page_isolated_in_pageblock: failed at pfn 6b843: buddy 0 count 0 migratetype 4 poison 0
[ 202.150000] alloc_contig_range test_pages_isolated(6b843, 6b844) failed (-16)
(4 is MIGRATE_CMA)
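For reference, the __test_page_isolated_in_pageblock lines come from an
ad-hoc debug print added in mm/page_isolation.c, roughly along these lines
(a sketch only; the exact fields printed may differ):

	/* Sketch of the debug print (mm/page_isolation.c, 3.18-rc1); the
	 * statement actually used may differ slightly. */
	pr_warn("%s: failed at pfn %lx: buddy %d count %d migratetype %d poison %d\n",
		__func__, pfn, PageBuddy(page), page_count(page),
		get_freepage_migratetype(page), PageHWPoison(page));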
> If the message does not indicate an actual problem, then its printk level is
> too high. These messages have been reported when using 3.16+ distro kernels.
The messages got me worried, and if there's nothing to worry about, that's bad
:-)
--
Regards,
Laurent Pinchart
* Re: CMA: test_pages_isolated failures in alloc_contig_range
2014-10-28 16:57 ` Michal Nazarewicz
@ 2014-11-04 3:38 ` Peter Hurley
0 siblings, 0 replies; 8+ messages in thread
From: Peter Hurley @ 2014-11-04 3:38 UTC (permalink / raw)
To: Michal Nazarewicz, Laurent Pinchart, linux-mm
Cc: linux-kernel, linux-sh, Bartlomiej Zolnierkiewicz, Minchan Kim,
Andrew Morton
On 10/28/2014 12:57 PM, Michal Nazarewicz wrote:
>> On 10/28/2014 08:38 AM, Michal Nazarewicz wrote:
>>> Like Laura wrote, the message is not (should not be) a problem in
>>> itself:
>>
>> [...]
>>
>>> So as you can see cma_alloc will try another part of the cma region if
>>> test_pages_isolated fails.
>>>
>>> Obviously, if CMA region is fragmented or there's enough space for only
>>> one allocation of required size isolation failures will cause allocation
>>> failures, so it's best to avoid them, but they are not always avoidable.
>>>
>>> To debug you would probably want to add more debug information about the
>>> page (i.e. data from struct page) that failed isolation after the
>>> pr_warn in alloc_contig_range.
>
> On Tue, Oct 28 2014, Peter Hurley <peter@hurleysoftware.com> wrote:
>> If the message does not indicate an actual problem, then its printk level is
>> too high. These messages have been reported when using 3.16+ distro kernels.
>
> I think it could be argued both ways. The condition is not an error,
> since in many cases cma_alloc will be able to continue, but it *is* an
> undesired state. As such it's not an error but feels to me a bit more
> than just information, hence a warning. I don't care either way, though.
This "undesired state" is trivially reproducible on 3.16.y on the x86 arch;
a smattering of these will show up just building a distro kernel.