* [TOPIC] scsi-queue tree past and future
From: Christoph Hellwig @ 2015-03-05 13:31 UTC
To: lsf, lsf-pc, linux-scsi
For about 8 months I've merged almost every SCSI commit through the
scsi-queue staging tree, and it seems to have worked out well enough.

I've been too busy for the next cycle, so 4.1 will probably have to live
without it. I'd like to get feedback on how the tree worked for contributors
and driver maintainers, and brainstorm how to move forward with it, preferably
toward some form of real team maintenance that avoids single points of failure.
* Re: [TOPIC] scsi-queue tree past and future
From: James Bottomley @ 2015-03-05 14:48 UTC
To: Christoph Hellwig; +Cc: lsf, lsf-pc, linux-scsi

On Thu, 2015-03-05 at 14:31 +0100, Christoph Hellwig wrote:
> For about 8 months I've merged almost every SCSI commit through the
> scsi-queue staging tree, and it seems to have worked out well enough.
>
> I've been too busy for the next cycle, so 4.1 will probably have to live
> without it. I'd like to get feedback on how the tree worked for contributors
> and driver maintainers, and brainstorm how to move forward with it, preferably
> toward some form of real team maintenance that avoids single points of failure.

I'd like to thank Christoph for doing this; it's been an enormous help.

Here's what we'll do for 4.1: I need all the current maintainers to
collect the patches and reviews in their area and send them to the list
as a series. We'll be adhering to the guidelines Christoph laid down
for inclusion:

 - the patch needs at least two positive reviews (non-author signoff,
   Reviewed-by or Acked-by tags). In practice this means it had at
   least one and I added another one. As an exception I also take
   trivial and important fixes if they only have a Tested-by: instead
   of a second review.
 - the patch has no negative review on the mailing list
 - the patch applies cleanly
 - the patch compiles (drivers for architectures I can't test excluded)
 - for the core branch: the patch survives a full xfstests run

For the last requirement, the 0-day kernel test project will be checking
this, which means negative reports from the 0-day project on a patch
will be grounds for removal.

I'll try to curate the patches in areas without maintainers (like the
core). Remember, in all cases, you get an email from my automation
infrastructure when a patch is added to (or removed from) any of the
SCSI trees, so if you haven't seen the email, the patch isn't in the
tree. You can also see the state of the git trees here:

http://git.kernel.org/cgit/linux/kernel/git/jejb/scsi.git/

with the misc branch being for 4.1 and the fixes branch being for
4.0-rc.

James
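A minimal sketch of how the tag and clean-apply criteria above might be
checked automatically before queueing a patch. This script is an
illustrative assumption, not the actual scsi-queue automation; it
approximates the review rule by counting Reviewed-by/Acked-by tags (or
one review plus a Tested-by:) and uses "git apply --check" for the
clean-apply test:

#!/usr/bin/env python3
# Hypothetical pre-queue check; mirrors the criteria listed above,
# not any tooling described in this thread.
import re
import subprocess
import sys

REVIEW_TAGS = re.compile(r"^(?:Reviewed-by|Acked-by):\s*(.+)$", re.M)
TESTED_TAG = re.compile(r"^Tested-by:\s*.+$", re.M)
AUTHOR = re.compile(r"^From:\s*(.+)$", re.M)

def ok_to_queue(path):
    text = open(path, encoding="utf-8", errors="replace").read()
    m = AUTHOR.search(text)
    author = m.group(1).strip() if m else ""
    # Count positive reviews that did not come from the patch author
    # (non-author Signed-off-by lines would also count; this
    # simplified check ignores them).
    reviews = [r.group(1).strip() for r in REVIEW_TAGS.finditer(text)
               if r.group(1).strip() != author]
    tested = TESTED_TAG.search(text) is not None
    enough_review = len(reviews) >= 2 or (len(reviews) == 1 and tested)
    # "the patch applies cleanly": dry-run against the current tree.
    applies = subprocess.run(["git", "apply", "--check", path],
                             capture_output=True).returncode == 0
    return enough_review and applies

if __name__ == "__main__":
    if ok_to_queue(sys.argv[1]):
        print("meets the queueing criteria")
    else:
        print("needs more review or a rebase")
        sys.exit(1)

(The "patch compiles" and xfstests criteria still need a real build and
test run; no static check covers those.)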
* Re: [Lsf] [TOPIC] scsi-queue tree past and future
From: Davidlohr Bueso @ 2015-03-07 4:22 UTC
To: James Bottomley; +Cc: Christoph Hellwig, lsf, lsf-pc, linux-scsi

On Thu, 2015-03-05 at 06:48 -0800, James Bottomley wrote:
> On Thu, 2015-03-05 at 14:31 +0100, Christoph Hellwig wrote:
> > For about 8 months I've merged almost every SCSI commit through the
> > scsi-queue staging tree, and it seems to have worked out well enough.
> >
> > I've been too busy for the next cycle, so 4.1 will probably have to live
> > without it. I'd like to get feedback on how the tree worked for contributors
> > and driver maintainers, and brainstorm how to move forward with it, preferably
> > toward some form of real team maintenance that avoids single points of failure.
>
> I'd like to thank Christoph for doing this; it's been an enormous help.
>
> Here's what we'll do for 4.1: I need all the current maintainers to
> collect the patches and reviews in their area and send them to the list
> as a series. We'll be adhering to the guidelines Christoph laid down
> for inclusion:
>
>  - the patch needs at least two positive reviews (non-author signoff,
>    Reviewed-by or Acked-by tags). In practice this means it had at
>    least one and I added another one. As an exception I also take
>    trivial and important fixes if they only have a Tested-by: instead
>    of a second review.
>  - the patch has no negative review on the mailing list
>  - the patch applies cleanly
>  - the patch compiles (drivers for architectures I can't test excluded)
>  - for the core branch: the patch survives a full xfstests run

This should be pretty standard in all subsystems, no? And I know this
has been discussed many times, but I see no reason not to also consider
trinity -- which has a tendency of kicking you in the nuts when you
least expect it to. At least in MM we are trying to be a bit more
proactive about this; perhaps Sasha or Dave would disagree with me ;)
But in general it would also help other subsystems.

Thanks,
Davidlohr
* Re: [Lsf] [TOPIC] scsi-queue tree past and future
From: James Bottomley @ 2015-03-07 4:43 UTC
To: Davidlohr Bueso; +Cc: Christoph Hellwig, lsf, lsf-pc, linux-scsi

On Fri, 2015-03-06 at 20:22 -0800, Davidlohr Bueso wrote:
> On Thu, 2015-03-05 at 06:48 -0800, James Bottomley wrote:
> > On Thu, 2015-03-05 at 14:31 +0100, Christoph Hellwig wrote:
> > > For about 8 months I've merged almost every SCSI commit through the
> > > scsi-queue staging tree, and it seems to have worked out well enough.
> > >
> > > I've been too busy for the next cycle, so 4.1 will probably have to live
> > > without it. I'd like to get feedback on how the tree worked for contributors
> > > and driver maintainers, and brainstorm how to move forward with it, preferably
> > > toward some form of real team maintenance that avoids single points of failure.
> >
> > I'd like to thank Christoph for doing this; it's been an enormous help.
> >
> > Here's what we'll do for 4.1: I need all the current maintainers to
> > collect the patches and reviews in their area and send them to the list
> > as a series. We'll be adhering to the guidelines Christoph laid down
> > for inclusion:
> >
> >  - the patch needs at least two positive reviews (non-author signoff,
> >    Reviewed-by or Acked-by tags). In practice this means it had at
> >    least one and I added another one. As an exception I also take
> >    trivial and important fixes if they only have a Tested-by: instead
> >    of a second review.
> >  - the patch has no negative review on the mailing list
> >  - the patch applies cleanly
> >  - the patch compiles (drivers for architectures I can't test excluded)
> >  - for the core branch: the patch survives a full xfstests run
>
> This should be pretty standard in all subsystems, no? And I know this
> has been discussed many times, but I see no reason not to also consider
> trinity -- which has a tendency of kicking you in the nuts when you
> least expect it to. At least in MM we are trying to be a bit more
> proactive about this; perhaps Sasha or Dave would disagree with me ;)
> But in general it would also help other subsystems.

Well, to clarify what's happening: I'm not running the tests; I asked
the 0-day kernel testing project to run them on all the patches in my
tree. The 0-day project has a bunch of standard tests for all trees and
then some optional ones (like xfstests), which I asked Fengguang to turn
on in our case.

Trinity is part of the 0-day project tests, so I could ask for it to be
turned on too, but I'm not sure it would be so useful for SCSI: trinity
is a syscall fuzzing tool, and the syscall exposure of SCSI is pretty
small. xfstests, which exercises the filesystem stack above us, provides
a much wider range of testing.

James
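For reference, a "full xfstests run" of the kind described above looks
roughly like the following sketch. The checkout location, device paths,
and mount points are assumptions for illustration, and the wrapper
itself is hypothetical -- xfstests is normally driven directly via its
check script:

#!/usr/bin/env python3
# Hypothetical wrapper around an xfstests run; all paths here are
# illustrative assumptions, not taken from this thread.
import os
import subprocess

XFSTESTS_DIR = "/var/lib/xfstests"    # assumed xfstests checkout

env = dict(os.environ,
           TEST_DEV="/dev/sdb1",      # device holding the fs under test
           TEST_DIR="/mnt/test",
           SCRATCH_DEV="/dev/sdc1",   # scratch device xfstests may mkfs
           SCRATCH_MNT="/mnt/scratch")

# "check" is the standard xfstests driver; "-g auto" runs the broad
# default group, while "-g quick" would give a shorter smoke test.
result = subprocess.run([os.path.join(XFSTESTS_DIR, "check"), "-g", "auto"],
                        cwd=XFSTESTS_DIR, env=env)
raise SystemExit(result.returncode)

Because xfstests exercises the whole filesystem and block stack sitting
on top of SCSI, a clean run covers core changes far more broadly than a
syscall fuzzer could.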
* Re: [Lsf] [TOPIC] scsi-queue tree past and future
From: Davidlohr Bueso @ 2015-03-07 4:47 UTC
To: James Bottomley; +Cc: lsf, lsf-pc, Christoph Hellwig, linux-scsi

On Fri, 2015-03-06 at 20:43 -0800, James Bottomley wrote:
> Well, to clarify what's happening: I'm not running the tests; I asked
> the 0-day kernel testing project to run them on all the patches in my
> tree. The 0-day project has a bunch of standard tests for all trees and
> then some optional ones (like xfstests), which I asked Fengguang to turn
> on in our case.
>
> Trinity is part of the 0-day project tests, so I could ask for it to be
> turned on too,

Oh, I was not aware of that. I tend to consider LTP something
completely unrelated.

> but I'm not sure it would be so useful for SCSI: trinity
> is a syscall fuzzing tool, and the syscall exposure of SCSI is pretty
> small. xfstests, which exercises the filesystem stack above us, provides
> a much wider range of testing.

Yeah, I'm really talking at a generic level, just like LTP.

Thanks,
Davidlohr
* Re: [Lsf] [TOPIC] scsi-queue tree past and future
From: Sagi Grimberg @ 2015-03-07 3:10 UTC
To: Christoph Hellwig, lsf, lsf-pc, linux-scsi

On 3/5/2015 3:31 PM, Christoph Hellwig wrote:
> For about 8 months I've merged almost every SCSI commit through the
> scsi-queue staging tree, and it seems to have worked out well enough.
>
> I've been too busy for the next cycle, so 4.1 will probably have to live
> without it. I'd like to get feedback on how the tree worked for contributors
> and driver maintainers, and brainstorm how to move forward with it, preferably
> toward some form of real team maintenance that avoids single points of failure.

+1

I think this approach can make a lot of sense for other subsystems as
well.

Sagi.
* Re: [TOPIC] scsi-queue tree past and future
From: Tomas Henzl @ 2015-03-10 12:37 UTC
To: Christoph Hellwig, lsf, lsf-pc, linux-scsi

On 03/05/2015 02:31 PM, Christoph Hellwig wrote:
> For about 8 months I've merged almost every SCSI commit through the
> scsi-queue staging tree, and it seems to have worked out well enough.

From my user perspective, scsi-queue has been an important help and I
have been using it a lot. I hope it will have a future.

Thanks,
Tomas

> I've been too busy for the next cycle, so 4.1 will probably have to live
> without it. I'd like to get feedback on how the tree worked for contributors
> and driver maintainers, and brainstorm how to move forward with it, preferably
> toward some form of real team maintenance that avoids single points of failure.