stable.vger.kernel.org archive mirror
* [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
@ 2017-09-22 17:16 Eduardo Valentin
  2017-09-22 17:31 ` Greg KH
  2017-09-22 17:31 ` Greg KH
  0 siblings, 2 replies; 8+ messages in thread
From: Eduardo Valentin @ 2017-09-22 17:16 UTC (permalink / raw)
  To: gregkh; +Cc: stable, shli, neilb, colyli

Hello GregKH,

I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
Currently, if one performs:

# fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
# rm -rf /media/nvme-raid0 
# time fstrim -vvv -a 
real	3m41.102s 
user	0m0.000s 
sys	0m4.964s 
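
For reference, the raid0 array and filesystem used in the reproduction above look roughly like this (the device names, array geometry and mount point are illustrative assumptions, not the exact layout I tested):

# mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# mkfs.ext4 /dev/md0
# mount /dev/md0 /mount/raid0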

And on the latest Linus master (in fact, starting with v4.12), we get:

# time fstrim -vvv -a 
/mount/raid0: 3.5 TiB (3798132523008 bytes) trimmed

real	0m1.953s
user	0m0.004s
sys	0m0.004s

So, I performed a git bisect and found this patch:
commit 29efc390b9462582ae95eb9a0b8cd17ab956afc0
Author: Shaohua Li <shli@fb.com>
Date:   Sun May 7 17:36:24 2017 -0700

    md/md0: optimize raid0 discard handling
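
For completeness, the bisect was roughly along these lines, building, booting and timing fstrim at each step (the boundary tags below are from memory, so treat them as an approximation; the "fast" kernel is marked bad so the bisection converges on the commit that changed the behaviour):

# git bisect start
# git bisect bad v4.12
# git bisect good v4.11
... build, boot, time fstrim, then "git bisect good" or "git bisect bad" accordingly ...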

However, the above patch depends on several other patches. I have backported the dependent patches and I confirm
that I get the performance boost after applying all of them. The list of patches is below. I also have a branch with the backports
here:

https://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux.git/log/?h=backports/v4.9.y/raid0/fstrim
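
If it makes review easier, the branch can be fetched with plain git, e.g. (the local branch name below is just an example):

# git fetch https://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux.git backports/v4.9.y/raid0/fstrim
# git checkout -b raid0-fstrim-backports FETCH_HEAD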


And here is the list of patches:
797476b block: Add 'zoned' queue limit
c4aebd0 block: remove bio_is_rw
bd1c1c2 block: REQ_NOMERGE is common to the bio and request
188bd2b block: move REQ_RAHEAD to common flags
8d2bbd4 block: replace REQ_THROTTLED with a bio flag
e806402 block: split out request-only flags into a new namespace
ef295ec block: better op and flags encoding
8737417 block: add a proper block layer data direction encoding
e73c23f block: add async variant of blkdev_issue_zeroout
a6f0788 block: add support for REQ_OP_WRITE_ZEROES
bef1331 block: don't try to discard from __blkdev_issue_zeroout
eeeefd4 block: don't try Write Same from __blkdev_issue_zeroout
35b785f md: add bad block support for external metadata
688834e md/failfast: add failfast flag for md to be used by some personalities.
504634f md: add blktrace event for writes to superblock
85c9ccd md/bitmap: Don't write bitmap while earlier writes might be in-flight
91a6c4a md: wake up personality thread after array state update
060b068 md: perform async updates for metadata where possible.
be306c2 md: define mddev flags, recovery flags and r1bio state bits using enums
46533ff md: Use REQ_FAILFAST_* on metadata writes where appropriate
6995f0b md: takeover should clear unrelated bits
394ed8e md: cleanup mddev flag clear for takeover
32cd7cb md/raid5: Use correct IS_ERR() variation on pointer check
109e376 md: add block tracing for bio_remapping
2648381 md: disable WRITE SAME if it fails in underlayer disks
3deff1a md: support REQ_OP_WRITE_ZEROES
f00d7c8 md/raid0: fix up bio splitting.
29efc39 md/md0: optimize raid0 discard handling

The above patches were pulled into the backport set due to either static or semantic dependencies.

Please let me know if it is possible to include these patches in stable v4.9.y.

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 17:16 [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0 Eduardo Valentin
@ 2017-09-22 17:31 ` Greg KH
  2017-09-22 17:39   ` Eduardo Valentin
  2017-09-22 17:31 ` Greg KH
  1 sibling, 1 reply; 8+ messages in thread
From: Greg KH @ 2017-09-22 17:31 UTC (permalink / raw)
  To: Eduardo Valentin; +Cc: stable, shli, neilb, colyli

On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> Hello GregKH,
> 
> I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> Currently, if one performs:
> 
> # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> # rm -rf /media/nvme-raid0 
> # time fstrim -vvv -a 
> real	3m41.102s 
> user	0m0.000s 
> sys	0m4.964s 
> 
> And on the latest Linus master (in fact, starting with v4.12), we get:
> 
> # time fstrim -vvv -a 
> /mount/raid0: 3.5 TiB (3798132523008 bytes) trimmed
> 
> real	0m1.953s
> user	0m0.004s
> sys	0m0.004s
> 
> So, I performed a git bisect and found this patch:
> commit 29efc390b9462582ae95eb9a0b8cd17ab956afc0
> Author: Shaohua Li <shli@fb.com>
> Date:   Sun May 7 17:36:24 2017 -0700
> 
>     md/md0: optimize raid0 discard handling
> 
> However, the above patch depends on several other patches. I have backported the dependent patches and I confirm
> that I get the performance boost after applying all of them. The list of patches is below. I also have a branch with the backports
> here:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux.git/log/?h=backports/v4.9.y/raid0/fstrim
> 
> 
> And here is the list of patches:
> 797476b block: Add 'zoned' queue limit
> c4aebd0 block: remove bio_is_rw
> bd1c1c2 block: REQ_NOMERGE is common to the bio and request
> 188bd2b block: move REQ_RAHEAD to common flags
> 8d2bbd4 block: replace REQ_THROTTLED with a bio flag
> e806402 block: split out request-only flags into a new namespace
> ef295ec block: better op and flags encoding
> 8737417 block: add a proper block layer data direction encoding
> e73c23f block: add async variant of blkdev_issue_zeroout
> a6f0788 block: add support for REQ_OP_WRITE_ZEROES
> bef1331 block: don't try to discard from __blkdev_issue_zeroout
> eeeefd4 block: don't try Write Same from __blkdev_issue_zeroout
> 35b785f md: add bad block support for external metadata
> 688834e md/failfast: add failfast flag for md to be used by some personalities.
> 504634f md: add blktrace event for writes to superblock
> 85c9ccd md/bitmap: Don't write bitmap while earlier writes might be in-flight
> 91a6c4a md: wake up personality thread after array state update
> 060b068 md: perform async updates for metadata where possible.
> be306c2 md: define mddev flags, recovery flags and r1bio state bits using enums
> 46533ff md: Use REQ_FAILFAST_* on metadata writes where appropriate
> 6995f0b md: takeover should clear unrelated bits
> 394ed8e md: cleanup mddev flag clear for takeover
> 32cd7cb md/raid5: Use correct IS_ERR() variation on pointer check
> 109e376 md: add block tracing for bio_remapping
> 2648381 md: disable WRITE SAME if it fails in underlayer disks
> 3deff1a md: support REQ_OP_WRITE_ZEROES
> f00d7c8 md/raid0: fix up bio splitting.
> 29efc39 md/md0: optimize raid0 discard handling
> 
> The above patches were pulled into the backport set due to either static or semantic dependencies.
> 
> Please let me know if it is possible to include these patches in stable v4.9.y.

That's a lot of patches; I'd like verification from their
maintainers/developers that this is OK to do and that they have no
objection to it.

Otherwise, why not just use 4.13 or 4.14 when it comes out?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 17:16 [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0 Eduardo Valentin
  2017-09-22 17:31 ` Greg KH
@ 2017-09-22 17:31 ` Greg KH
  2017-09-22 17:40   ` Eduardo Valentin
  1 sibling, 1 reply; 8+ messages in thread
From: Greg KH @ 2017-09-22 17:31 UTC (permalink / raw)
  To: Eduardo Valentin; +Cc: stable, shli, neilb, colyli

On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> Hello GregKH,
> 
> I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> Currently, if one performs:
> 
> # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> # rm -rf /media/nvme-raid0 
> # time fstrim -vvv -a 
> real	3m41.102s 
> user	0m0.000s 
> sys	0m4.964s 

Also, is this a regression from older kernels?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 17:31 ` Greg KH
@ 2017-09-22 17:39   ` Eduardo Valentin
  0 siblings, 0 replies; 8+ messages in thread
From: Eduardo Valentin @ 2017-09-22 17:39 UTC (permalink / raw)
  To: Greg KH; +Cc: Eduardo Valentin, stable, shli, neilb, colyli, axboe

Hello

On Fri, Sep 22, 2017 at 07:31:29PM +0200, Greg KH wrote:
> On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> > Hello GregKH,
> > 
> > I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> > Currently, if one performs:
> > 
> > # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> > # rm -rf /media/nvme-raid0 
> > # time fstrim -vvv -a 
> > real	3m41.102s 
> > user	0m0.000s 
> > sys	0m4.964s 
> > 
> > And on the latest Linus master (in fact, starting with v4.12), we get:
> > 
> > # time fstrim -vvv -a 
> > /mount/raid0: 3.5 TiB (3798132523008 bytes) trimmed
> > 
> > real	0m1.953s
> > user	0m0.004s
> > sys	0m0.004s
> > 
> > So, I performed a git bisect and found this patch:
> > commit 29efc390b9462582ae95eb9a0b8cd17ab956afc0
> > Author: Shaohua Li <shli@fb.com>
> > Date:   Sun May 7 17:36:24 2017 -0700
> > 
> >     md/md0: optimize raid0 discard handling
> > 
> > However, the above patch depends on several other patches. I have backported the dependent patches and I confirm
> > that I get the performance boost after applying all of them. The list of patches is below. I also have a branch with the backports
> > here:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/evalenti/linux.git/log/?h=backports/v4.9.y/raid0/fstrim
> > 
> > 
> > And here is the list of patches:
> > 797476b block: Add 'zoned' queue limit
> > c4aebd0 block: remove bio_is_rw
> > bd1c1c2 block: REQ_NOMERGE is common to the bio and request
> > 188bd2b block: move REQ_RAHEAD to common flags
> > 8d2bbd4 block: replace REQ_THROTTLED with a bio flag
> > e806402 block: split out request-only flags into a new namespace
> > ef295ec block: better op and flags encoding
> > 8737417 block: add a proper block layer data direction encoding
> > e73c23f block: add async variant of blkdev_issue_zeroout
> > a6f0788 block: add support for REQ_OP_WRITE_ZEROES
> > bef1331 block: don't try to discard from __blkdev_issue_zeroout
> > eeeefd4 block: don't try Write Same from __blkdev_issue_zeroout
> > 35b785f md: add bad block support for external metadata
> > 688834e md/failfast: add failfast flag for md to be used by some personalities.
> > 504634f md: add blktrace event for writes to superblock
> > 85c9ccd md/bitmap: Don't write bitmap while earlier writes might be in-flight
> > 91a6c4a md: wake up personality thread after array state update
> > 060b068 md: perform async updates for metadata where possible.
> > be306c2 md: define mddev flags, recovery flags and r1bio state bits using enums
> > 46533ff md: Use REQ_FAILFAST_* on metadata writes where appropriate
> > 6995f0b md: takeover should clear unrelated bits
> > 394ed8e md: cleanup mddev flag clear for takeover
> > 32cd7cb md/raid5: Use correct IS_ERR() variation on pointer check
> > 109e376 md: add block tracing for bio_remapping
> > 2648381 md: disable WRITE SAME if it fails in underlayer disks
> > 3deff1a md: support REQ_OP_WRITE_ZEROES
> > f00d7c8 md/raid0: fix up bio splitting.
> > 29efc39 md/md0: optimize raid0 discard handling
> > 
> > The above patches were pulled into the backport set due to either static or semantic dependencies.
> > 
> > Please let me know if it is possible to include these patches in stable v4.9.y.
> 
> That's a lot of patches; I'd like verification from their
> maintainers/developers that this is OK to do and that they have no
> objection to it.
> 

I agree here. I have copied the original author of the commit that provides the fix, and everyone who signed off on it.
I would be glad if the maintainers could confirm whether these backports make sense.

> Otherwise, why not just use 4.13 or 4.14 when it comes out?

That is an option when one can deploy new kernels, but the issue still remains in stable v4.9.y and affects whoever is based on that stable kernel.

> 
> thanks,
> 
> greg k-h

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 17:31 ` Greg KH
@ 2017-09-22 17:40   ` Eduardo Valentin
  2017-09-22 18:11     ` Willy Tarreau
  0 siblings, 1 reply; 8+ messages in thread
From: Eduardo Valentin @ 2017-09-22 17:40 UTC (permalink / raw)
  To: Greg KH; +Cc: Eduardo Valentin, stable, shli, neilb, colyli

On Fri, Sep 22, 2017 at 07:31:45PM +0200, Greg KH wrote:
> On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> > Hello GregKH,
> > 
> > I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> > Currently, if one performs:
> > 
> > # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> > # rm -rf /media/nvme-raid0 
> > # time fstrim -vvv -a 
> > real	3m41.102s 
> > user	0m0.000s 
> > sys	0m4.964s 
> 
> Also, is this a regression from older kernels?

I personally did not try kernels older than 4.9, but looking at the git history and the commit message of the fix, it looks like this is a long-known issue that got fixed in v4.12.
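
For what it's worth, the relevant history can be listed with a plain git query along these lines (the file list is my guess at where the relevant changes live, not something taken from the fix itself):

# git log --oneline v4.9..v4.12 -- drivers/md/raid0.c block/blk-lib.c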

> 
> thanks,
> 
> greg k-h
> 

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 17:40   ` Eduardo Valentin
@ 2017-09-22 18:11     ` Willy Tarreau
  2017-09-22 18:38       ` Eduardo Valentin
  0 siblings, 1 reply; 8+ messages in thread
From: Willy Tarreau @ 2017-09-22 18:11 UTC (permalink / raw)
  To: Eduardo Valentin; +Cc: Greg KH, stable, shli, neilb, colyli

On Fri, Sep 22, 2017 at 10:40:48AM -0700, Eduardo Valentin wrote:
> On Fri, Sep 22, 2017 at 07:31:45PM +0200, Greg KH wrote:
> > On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> > > Hello GregKH,
> > > 
> > > I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> > > Currently, if one performs:
> > > 
> > > # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> > > # rm -rf /media/nvme-raid0 
> > > # time fstrim -vvv -a 
> > > real	3m41.102s 
> > > user	0m0.000s 
> > > sys	0m4.964s 
> > 
> > Also, is this a regression from older kernels?
> 
> I personally did not try kernels older than 4.9, but looking at the git
> history and the commit message of the fix, it looks like this is a
> long-known issue that got fixed in v4.12.

Or it may simply be something that couldn't be achieved without significantly
improving the underlying infrastructure using the patches you've spotted (and
possibly a few others that you didn't notice but are required for stability
or to avoid breaking other subsystems).

While I use 4.9 on many of my machines, I'd rather favor maximal stability
over an optimization for some operations that don't appear *that* often.
That said, if the relevant maintainers consider it safe to backport, I'll
certainly welcome some performance improvements on my machines, but that's
not what I'm primarily looking for in stable kernels.

And if we start to backport performance improvements into LTS kernels,
what will encourage users to upgrade to the next LTS?

Cheers,
Willy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 18:11     ` Willy Tarreau
@ 2017-09-22 18:38       ` Eduardo Valentin
  2017-09-22 19:05         ` Greg KH
  0 siblings, 1 reply; 8+ messages in thread
From: Eduardo Valentin @ 2017-09-22 18:38 UTC (permalink / raw)
  To: Willy Tarreau; +Cc: Eduardo Valentin, Greg KH, stable, shli, neilb, colyli

Hello Willy,

On Fri, Sep 22, 2017 at 08:11:22PM +0200, Willy Tarreau wrote:
> On Fri, Sep 22, 2017 at 10:40:48AM -0700, Eduardo Valentin wrote:
> > On Fri, Sep 22, 2017 at 07:31:45PM +0200, Greg KH wrote:
> > > On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> > > > Hello GregKH,
> > > > 
> > > > I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> > > > Currently, if one performs:
> > > > 
> > > > # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> > > > # rm -rf /media/nvme-raid0 
> > > > # time fstrim -vvv -a 
> > > > real	3m41.102s 
> > > > user	0m0.000s 
> > > > sys	0m4.964s 
> > > 
> > > Also, is this a regression from older kernels?
> > 
> > I personally did not try kernels older than 4.9, but looking at the git
> > history and the commit message of the fix, it looks like this is a
> > long-known issue that got fixed in v4.12.
> 
> Or it may simply be something that couldn't be achieved without significantly
> improving the underlying infrastructure using the patches you've spotted (and
> possibly a few others that you didn't notice but are required for stability
> or to avoid breaking other subsystems).
> 
> While I use 4.9 on many of my machines, I'd rather favor maximal stability
> over an optimization for some operations that don't appear *that* often.
> > That said, if the relevant maintainers consider it safe to backport, I'll
> certainly welcome some performance improvements on my machines, but that's
> not what I'm primarily looking for in stable kernels.
> 
> And if we start to backport performance improvements into LTS kernels,
> > what will encourage users to upgrade to the next LTS?

Yeah, I pondered this for several days before opening up this backport request, exactly because of the concerns you raise above.

But what made me decide to share the backport for the community's consideration is the way I see this. To me, this is *not only* a matter of a performance boost, but also a real fix: systems running v4.9.y become unresponsive while performing fstrim because of the kernel CPU consumption of the fstrim implementation in raid0. Depending on how the system is deployed, how you use your filesystem, and how you lay out your raid, this can be a real problem on production systems.

So, yeah, I hope you consider this not only from a performance perspective, but as a fix for a real issue.

> 
> Cheers,
> Willy
> 

-- 
All the best,
Eduardo Valentin

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0
  2017-09-22 18:38       ` Eduardo Valentin
@ 2017-09-22 19:05         ` Greg KH
  0 siblings, 0 replies; 8+ messages in thread
From: Greg KH @ 2017-09-22 19:05 UTC (permalink / raw)
  To: Eduardo Valentin; +Cc: Willy Tarreau, stable, shli, neilb, colyli

On Fri, Sep 22, 2017 at 11:38:59AM -0700, Eduardo Valentin wrote:
> Hello Willy,
> 
> On Fri, Sep 22, 2017 at 08:11:22PM +0200, Willy Tarreau wrote:
> > On Fri, Sep 22, 2017 at 10:40:48AM -0700, Eduardo Valentin wrote:
> > > On Fri, Sep 22, 2017 at 07:31:45PM +0200, Greg KH wrote:
> > > > On Fri, Sep 22, 2017 at 10:16:06AM -0700, Eduardo Valentin wrote:
> > > > > Hello GregKH,
> > > > > 
> > > > > I have been seeing several reports of performance issues with raid0 while performing fstrim on v4.9.y.
> > > > > Currently, if one performs:
> > > > > 
> > > > > # fio --name fio_test_file --direct=1 --rw=randwrite --bs=4k --size=5G --numjobs=8 --group_reporting --directory=/mount/raid0 
> > > > > # rm -rf /media/nvme-raid0 
> > > > > # time fstrim -vvv -a 
> > > > > real	3m41.102s 
> > > > > user	0m0.000s 
> > > > > sys	0m4.964s 
> > > > 
> > > > Also, is this a regression from older kernels?
> > > 
> > > I personally did not try kernels older than 4.9, but looking at the git
> > > history and the commit message of the fix, it looks like this is a
> > > long-known issue that got fixed in v4.12.
> > 
> > Or it may simply be something that couldn't be achieved without significantly
> > improving the underlying infrastructure using the patches you've spotted (and
> > possibly a few others that you didn't notice but are required for stability
> > or to avoid breaking other subsystems).
> > 
> > While I use 4.9 on many of my machines, I'd rather favor maximal stability
> > over an optimization for some operations that don't appear *that* often.
> > > That said, if the relevant maintainers consider it safe to backport, I'll
> > certainly welcome some performance improvements on my machines, but that's
> > not what I'm primarily looking for in stable kernels.
> > 
> > And if we start to backport performance improvements into LTS kernels,
> > > what will encourage users to upgrade to the next LTS?
> 
> Yeah, I pondered this for several days before opening up this backport
> request, exactly because of the concerns you raise above.
> 
> But what made me decide to share the backport for the community's
> consideration is the way I see this. To me, this is *not only* a
> matter of a performance boost, but also a real fix: systems running
> v4.9.y become unresponsive while performing fstrim because of the
> kernel CPU consumption of the fstrim implementation in raid0.
> Depending on how the system is deployed, how you use your filesystem,
> and how you lay out your raid, this can be a real problem on
> production systems.
> 
> So, yeah, I hope you consider this not only from a performance
> perspective, but as a fix for a real issue.

It might be a "fix", but given that this isn't a regression, the case
for backporting it is pretty weak.  If you want faster I/O, upgrading to
a newer kernel sounds like a good answer to me (not to mention more
features and more bugfixes as well!) :)

So I would need some solid explanations from the developers involved here
as to why I should take it (which is what has caused me to take such
things in the past for other subsystems...)

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread

Thread overview: 8+ messages
2017-09-22 17:16 [stable v4.9.y] Backports to fix fstrim time / CPU load on raid0 Eduardo Valentin
2017-09-22 17:31 ` Greg KH
2017-09-22 17:39   ` Eduardo Valentin
2017-09-22 17:31 ` Greg KH
2017-09-22 17:40   ` Eduardo Valentin
2017-09-22 18:11     ` Willy Tarreau
2017-09-22 18:38       ` Eduardo Valentin
2017-09-22 19:05         ` Greg KH
