* QEMU Summit Minutes 2023
@ 2023-07-13 13:21 Peter Maydell
  2023-08-17  6:13 ` Thomas Huth
  2023-11-21 17:11 ` Alex Bennée
  0 siblings, 2 replies; 18+ messages in thread
From: Peter Maydell @ 2023-07-13 13:21 UTC (permalink / raw)
  To: QEMU Developers

QEMU Summit Minutes 2023
========================

As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
invite-only meeting for the most active maintainers and submaintainers
in the project, and we discuss various project-wide issues, usually
process stuff. We then post the minutes of the meeting to the list as
a jumping off point for wider discussion and for those who weren't
able to attend.

Attendees
=========

"Peter Maydell" <peter.maydell@linaro.org>
"Alex Bennée" <alex.bennee@linaro.org>
"Kevin Wolf" <kwolf@redhat.com>
"Thomas Huth" <thuth@redhat.com>
"Markus Armbruster" <armbru@redhat.com>
"Mark Cave-Ayland" <mark.cave-ayland@ilande.co.uk>
"Philippe Mathieu-Daudé" <philmd@linaro.org>
"Daniel P. Berrangé" <berrange@redhat.com>
"Richard Henderson" <richard.henderson@linaro.org>
"Michael S. Tsirkin" <mst@redhat.com>
"Stefan Hajnoczi" <stefanha@redhat.com>
"Alex Graf" <agraf@csgraf.de>
"Gerd Hoffmann" <kraxel@redhat.com>
"Paolo Bonzini" <pbonzini@redhat.com>
"Michael Roth" <michael.roth@amd.com>

Topic 1: Dealing with tree wide changes
=======================================

Mark Cave-Ayland raised concerns that tree-wide changes often get
stuck because maintainers are conservative about merging code that
touches other subsystems and hasn't been reviewed.  He mentioned a
couple of cases of PC refactoring which had been held up and
languished on the list due to lack of review time. It can be hard to
get everything in the change reviewed, and then hard to get the change
merged, especially if it still has parts that weren't reviewed by
anybody.

Alex Bennée mentioned that maintainers can always give an Acked-by and
then rely on someone else doing the review. But even getting Acked-bys
can take time, and we still have a problem with absent maintainers.
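
(A practical aside on collecting those acks: for a tree-wide series it
helps to make sure every affected maintainer is CC'd in the first
place. A minimal sketch, assuming the series has been exported to a
patches/ directory and using the tree's scripts/get_maintainer.pl as
the cc-cmd; the branch name here is only illustrative:

  $ git format-patch -o patches/ --cover-letter master..treewide-cleanup
  $ git send-email --to=qemu-devel@nongnu.org \
        --cc-cmd='scripts/get_maintainer.pl' patches/*.patch

git send-email then runs get_maintainer.pl on each patch and adds the
maintainers and lists it reports as CCs automatically.)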

In a brief diversion Markus mused that having more automated checking
for things like QAPI changes would help reduce the maintainer load for
the more mechanical changes.

It was pointed out that we should be more accepting of merging changes
without explicit maintainer approval where the changes are
surface-level, system-wide API changes rather than touching the guts
of any particular subsystem. This avoids the sometimes onerous task of
splitting mechanical tree-wide changes along subsystem boundaries.
A delay of one week, followed by a ping and a further week, was
suggested as sufficient time for maintainers to reply if they care
specifically about the series.

Alex Graf suggested that we should hold code to different review
requirements depending on the importance of the subsystem. We should
not hold up code because a minor, underused subsystem didn't get
sign-off. We already do this informally, but we don't make it very
clear, so it can be hard to tell what is and isn't OK to let through
without review.

Topic 2: Are we happy with the email workflow?
==============================================

This was a topic to see if there was any consensus among maintainers
about the long-term acceptability of sticking with email for patch
submission and review -- in five years' time, if we're still doing it
the same way, how would we feel about it?

One area where we did get consensus was that now that we're doing CI
on gitlab we can switch maintainer pull requests from email to gitlab
merge requests. This would hopefully mean that instead of
the release-manager having to tell gitlab to do a merge and then
reporting back the results of any CI failures, the maintainer
could directly see the CI results and deal with fixing up failures
and resubmitting without involving the release manager. (This
may have the disbenefit that there isn't a single person any
more who looks at all the CI results and gets a sense of whether
particular test cases have pre-existing intermittent failures.)
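
(For concreteness, a minimal sketch of what the maintainer's side of
this might look like, assuming a personal fork on gitlab and GitLab's
merge-request push options; the fork path and branch name here are
purely illustrative:

  $ git push https://gitlab.com/some-maintainer/qemu.git pull-foo-20230713 \
        -o merge_request.create \
        -o merge_request.target=master \
        -o merge_request.title='foo queue'

The CI pipeline can then run against the merge request and the
maintainer watches the results directly, rather than round-tripping
them through the release manager.)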

There was less agreement on the main problem of reviewing code.
On the positive side:
 - everyone acknowledged that the email workflow was a barrier to new
   contributors
 - email is not awkward just for newcomers -- many regular
   developers have to deal with corporate mail systems, firewalls,
   etc, that make the email workflow more awkward than it was
   when Linux (and subsequently QEMU) first adopted it decades ago
 - a web UI means that unreviewed submissions are easier to track,
   rather than being simply ignored on the mailing list
But on the negative side:
 - gitlab is not set up for a "submaintainer tree" kind of workflow,
   so patches would go directly into the main tree and get no
   per-subsystem testing beyond whatever our CI can cover
 - gitlab doesn't handle adding Reviewed-by: and similar tags
 - email provides an automatic archive of the historical code
   review conversation; gitlab doesn't do this as well
 - it would increase the degree to which we might have a lock-in
   problem with gitlab (we could switch away, but it gets more painful)
 - it has the potential to be a bigger barrier to new contributors
   getting started with reviewing, compared to "just send an email"
 - it would probably burn the project's CI minutes more quickly
   as we would do runs per-submission, not just per-pullreq
 - might increase the awkwardness of the "some contributors/bug
   reporters/people interested in a patch are only notifiable
   by gitlab handle, and some only by email, and you can't
   email a handle and you can't tag an email on gitlab" split
 - offline working is trickier/impossible
 - many people were somewhere between "not enthusiastic" and
   "deal-breaker" about the web UI experience for code review
   (some of this can be dealt with via tooling)

So on net, there is no current consensus that we should make a change
in the project's patch submission and code review workflow.

Topic 3: Should we split responsibility for managing CoC reports?
=================================================================

The QEMU project happily does not have to deal with many Code of
Conduct (CoC) reports, but we could do a better job with managing the
ones we do get.  At the moment CoC reports go to the QEMU Leadership
Committee; Paolo proposed that it would be better to decouple CoC
handling to a separate team: although the CoC itself seems good,
asking the Leadership Committee to deal with the reports has not been
working so well.  The model for this is that Linux also initially had
its tech advisory board be the contact for CoC reports before
switching to a dedicated team for them.

There was general consensus that we should try the separate-team
approach. We plan to ask on the mailing list for volunteers who would
be interested in helping out with this.

(As always, the existence of a CoC policy and separate CoC team
doesn't remove the responsibility of established developers for
dealing with poor behaviour on the mailing lists when we see it. But
we can't see everything and the existence of a formal channel for
escalating problems is important.)

Topic 4: Size of the QEMU tarballs
==================================

Stefan began by outlining how the issue was noticed after Rackspace
pulled their open source funding, leading to a sudden rise in hosting
bills. Fortunately we have been able to deal with the immediate
problem by first using Azure and then migrating to GNOME's CDN
service.  However, the tarballs are still big, with firmware source
code (which we suppose most people never look at) taking up a
significant chunk of the size. (In particular, EDK2 sources are half
the tarball!)
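
(To see this for yourself, a rough sketch: unpack a release tarball
and compare the size of the shipped firmware sources against the whole
tree. The release version is only an example, and this assumes the
submodule sources ship under roms/ as in current tarballs:

  $ tar xJf qemu-8.0.0.tar.xz
  $ du -sh qemu-8.0.0 qemu-8.0.0/roms qemu-8.0.0/roms/edk2

The exact numbers vary by release, but the roms/ share is large.)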

We do need to be careful about GPL compliance (making sure users
have the source if we provide them the compiled firmware blob
for a GPL'd piece of firmware); but we don't necessarily need to
ship the sources in the exact same tarball as the blob.

Peter said we should check with the downstream consumers of our
tarballs what would be most useful to them; or at least figure
out what we think the common use-cases are. At the moment what
we do is not very useful to anybody:
 * most end users, CI systems, etc building from source tarballs
   don't care about the firmware sources and only want the
   binary blobs to be present
 * most downstream distros doing rebuilds want to rebuild the
   firmware from sources anyway, and will use the 'upstream'
   firmware sources rather than the ones we have in the tarballs

Users of QEMU from git don't get a great firmware experience either,
since the firmware is in submodules, with all the usual git submodule
problems. Plus we could do better in these days of Docker containers
than "somebody builds the firmware blob on their machine and sends a
patch with the binary blob to commit to git". So we should consider
whether we can improve things here while we're working on the firmware
problem.
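
(As a partial mitigation of the submodule pain, note that you don't
have to initialise every submodule. A minimal sketch of fetching just
the firmware trees you care about, shallowly where possible:

  $ git submodule update --init --depth 1 roms/edk2 roms/seabios roms/openbios

The --depth 1 may need to be dropped if the pinned commit can't be
fetched shallowly, and edk2 in particular has nested submodules that
need --recursive if you actually want to build it.)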

Mark Cave-Ayland mentioned that he's already automated the "build
OpenBIOS blobs" aspect, so we could look at that as a model.

There definitely seemed to be consensus that it was worth trying
to improve what we do here -- hopefully somebody will have the
time to attempt something :-)


thanks
-- PMM



* Re: QEMU Summit Minutes 2023
  2023-07-13 13:21 QEMU Summit Minutes 2023 Peter Maydell
@ 2023-08-17  6:13 ` Thomas Huth
  2023-11-21 17:11 ` Alex Bennée
  1 sibling, 0 replies; 18+ messages in thread
From: Thomas Huth @ 2023-08-17  6:13 UTC (permalink / raw)
  To: Peter Maydell, QEMU Developers; +Cc: Paolo Bonzini

On 13/07/2023 15.21, Peter Maydell wrote:
> QEMU Summit Minutes 2023
> ========================
...
> Topic 3: Should we split responsibility for managing CoC reports?
> =================================================================
> 
> The QEMU project happily does not have to deal with many Code of
> Conduct (CoC) reports, but we could do a better job with managing the
> ones we do get.  At the moment CoC reports go to the QEMU Leadership
> Committee; Paolo proposed that it would be better to decouple CoC
> handling to a separate team: although the CoC itself seems good,
> asking the Leadership Committee to deal with the reports has not been
> working so well.  The model for this is that Linux also initially had
> its tech advisory board be the contact for CoC reports before
> switching to a dedicated team for them.
> 
> There was general consensus that we should try the separate-team
> approach. We plan to ask on the mailing list for volunteers who would
> be interested in helping out with this.

So who is going to drive this now? I haven't seen any mail on the mailing 
list with that question yet...

  Thomas





* Re: QEMU Summit Minutes 2023
  2023-07-13 13:21 QEMU Summit Minutes 2023 Peter Maydell
  2023-08-17  6:13 ` Thomas Huth
@ 2023-11-21 17:11 ` Alex Bennée
  2023-11-28 17:54   ` Cédric Le Goater
  1 sibling, 1 reply; 18+ messages in thread
From: Alex Bennée @ 2023-11-21 17:11 UTC (permalink / raw)
  To: Peter Maydell; +Cc: QEMU Developers

Peter Maydell <peter.maydell@linaro.org> writes:

> QEMU Summit Minutes 2023
> ========================
>
> As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
> invite-only meeting for the most active maintainers and submaintainers
> in the project, and we discuss various project-wide issues, usually
> process stuff. We then post the minutes of the meeting to the list as
> a jumping off point for wider discussion and for those who weren't
> able to attend.
>
<snip>
>
> Topic 2: Are we happy with the email workflow?
> ==============================================
>
> This was a topic to see if there was any consensus among maintainers
> about the long-term acceptability of sticking with email for patch
> submission and review -- in five years' time, if we're still doing it
> the same way, how would we feel about it?
>
> One area where we did get consensus was that now that we're doing CI
> on gitlab we can change pull requests from maintainers from via-email
> to gitlab merge requests. This would hopefully mean that instead of
> the release-manager having to tell gitlab to do a merge and then
> reporting back the results of any CI failures, the maintainer
> could directly see the CI results and deal with fixing up failures
> and resubmitting without involving the release manager. (This
> may have the disbenefit that there isn't a single person any
> more who looks at all the CI results and gets a sense of whether
> particular test cases have pre-existing intermittent failures.)

If we are keen to start processing merge requests for the 9.0 release we
really should consider how it is going to work before we open up the
taps post 8.2-final going out.

Does anyone want to have a go at writing an updated process for
docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
we can discuss it and be ready early in the cycle? Ideally someone who
already has experience with the workflow with other gitlab hosted
projects.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: QEMU Summit Minutes 2023
  2023-11-21 17:11 ` Alex Bennée
@ 2023-11-28 17:54   ` Cédric Le Goater
  2023-11-28 18:05     ` Peter Maydell
  2023-11-28 18:06     ` Daniel P. Berrangé
  0 siblings, 2 replies; 18+ messages in thread
From: Cédric Le Goater @ 2023-11-28 17:54 UTC (permalink / raw)
  To: Alex Bennée, Peter Maydell; +Cc: QEMU Developers

On 11/21/23 18:11, Alex Bennée wrote:
> Peter Maydell <peter.maydell@linaro.org> writes:
> 
>> QEMU Summit Minutes 2023
>> ========================
>>
>> As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
>> invite-only meeting for the most active maintainers and submaintainers
>> in the project, and we discuss various project-wide issues, usually
>> process stuff. We then post the minutes of the meeting to the list as
>> a jumping off point for wider discussion and for those who weren't
>> able to attend.
>>
> <snip>
>>
>> Topic 2: Are we happy with the email workflow?
>> ==============================================
>>
>> This was a topic to see if there was any consensus among maintainers
>> about the long-term acceptability of sticking with email for patch
>> submission and review -- in five years' time, if we're still doing it
>> the same way, how would we feel about it?
>>
>> One area where we did get consensus was that now that we're doing CI
>> on gitlab we can change pull requests from maintainers from via-email
>> to gitlab merge requests. This would hopefully mean that instead of
>> the release-manager having to tell gitlab to do a merge and then
>> reporting back the results of any CI failures, the maintainer
>> could directly see the CI results and deal with fixing up failures
>> and resubmitting without involving the release manager. (This
>> may have the disbenefit that there isn't a single person any
>> more who looks at all the CI results and gets a sense of whether
>> particular test cases have pre-existing intermittent failures.)
> 
> If we are keen to start processing merge requests for the 9.0 release we
> really should consider how it is going to work before we open up the
> taps post 8.2-final going out.
>
> Does anyone want to have a go at writing an updated process for
> docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> we can discuss it and be ready early in the cycle? Ideally someone who
> already has experience with the workflow with other gitlab hosted
> projects.


Reading the Topic 2 paragraph above, I understand that a maintainer
of a subsystem would be able to merge its '-next' branch in the main
repository when CI is all green. Correct ?

It seems to me that we should also have a group of people approving
the MR.

Thanks,

C.




* Re: QEMU Summit Minutes 2023
  2023-11-28 17:54   ` Cédric Le Goater
@ 2023-11-28 18:05     ` Peter Maydell
  2023-11-28 18:09       ` Daniel P. Berrangé
  2023-11-28 18:06     ` Daniel P. Berrangé
  1 sibling, 1 reply; 18+ messages in thread
From: Peter Maydell @ 2023-11-28 18:05 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Alex Bennée, QEMU Developers

On Tue, 28 Nov 2023 at 17:54, Cédric Le Goater <clg@redhat.com> wrote:
>
> On 11/21/23 18:11, Alex Bennée wrote:
> > Peter Maydell <peter.maydell@linaro.org> writes:
> >> Topic 2: Are we happy with the email workflow?
> >> ==============================================
> >>
> >> This was a topic to see if there was any consensus among maintainers
> >> about the long-term acceptability of sticking with email for patch
> >> submission and review -- in five years' time, if we're still doing it
> >> the same way, how would we feel about it?
> >>
> >> One area where we did get consensus was that now that we're doing CI
> >> on gitlab we can change pull requests from maintainers from via-email
> >> to gitlab merge requests. This would hopefully mean that instead of
> >> the release-manager having to tell gitlab to do a merge and then
> >> reporting back the results of any CI failures, the maintainer
> >> could directly see the CI results and deal with fixing up failures
> >> and resubmitting without involving the release manager. (This
> >> may have the disbenefit that there isn't a single person any
> >> more who looks at all the CI results and gets a sense of whether
> >> particular test cases have pre-existing intermittent failures.)
> >
> > If we are keen to start processing merge requests for the 9.0 release we
> > really should consider how it is going to work before we open up the
> > taps post 8.2-final going out.
> >
> > Does anyone want to have a go at writing an updated process for
> > docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> > we can discuss it and be ready early in the cycle? Ideally someone who
> > already has experience with the workflow with other gitlab hosted
> > projects.
>
>
> Reading the Topic 2 paragraph above, I understand that a maintainer
> of a subsystem would be able to merge its '-next' branch in the main
> repository when CI is all green. Correct ?

I think my intention when writing that was to say that the submaintainer
kicks things off and deals with resubmitting and rerunning if there
are failures, but actually doing "merge this successfully tested
pullreq" is still the release-manager's job.

> It seems to me that we should also have a group of people approving
> the MR.

I do think something like this is probably where we want to get to
eventually, where there's a group of people with the rights to
approve a merge, and maybe the rules about how many approvals
or whose approval is needed can differ between "normal development"
and "in freeze" periods. But the idea of the above text I think
was that the first step is to change from how the release manager
receives "please merge this" requests from the current "here's an
email, you need to test it" to "here's a thing in the gitlab UI
that has already passed the tests and is ready to go".

thanks
-- PMM



* Re: QEMU Summit Minutes 2023
  2023-11-28 17:54   ` Cédric Le Goater
  2023-11-28 18:05     ` Peter Maydell
@ 2023-11-28 18:06     ` Daniel P. Berrangé
  2023-11-29 14:21       ` Philippe Mathieu-Daudé
  2023-11-29 15:50       ` Warner Losh
  1 sibling, 2 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2023-11-28 18:06 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Alex Bennée, Peter Maydell, QEMU Developers

On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:
> On 11/21/23 18:11, Alex Bennée wrote:
> > Peter Maydell <peter.maydell@linaro.org> writes:
> > 
> > > QEMU Summit Minutes 2023
> > > ========================
> > > 
> > > As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
> > > invite-only meeting for the most active maintainers and submaintainers
> > > in the project, and we discuss various project-wide issues, usually
> > > process stuff. We then post the minutes of the meeting to the list as
> > > a jumping off point for wider discussion and for those who weren't
> > > able to attend.
> > > 
> > <snip>
> > > 
> > > Topic 2: Are we happy with the email workflow?
> > > ==============================================
> > > 
> > > This was a topic to see if there was any consensus among maintainers
> > > about the long-term acceptability of sticking with email for patch
> > > submission and review -- in five years' time, if we're still doing it
> > > the same way, how would we feel about it?
> > > 
> > > One area where we did get consensus was that now that we're doing CI
> > > on gitlab we can change pull requests from maintainers from via-email
> > > to gitlab merge requests. This would hopefully mean that instead of
> > > the release-manager having to tell gitlab to do a merge and then
> > > reporting back the results of any CI failures, the maintainer
> > > could directly see the CI results and deal with fixing up failures
> > > and resubmitting without involving the release manager. (This
> > > may have the disbenefit that there isn't a single person any
> > > more who looks at all the CI results and gets a sense of whether
> > > particular test cases have pre-existing intermittent failures.)
> > 
> > If we are keen to start processing merge requests for the 9.0 release we
> > really should consider how it is going to work before we open up the
> > taps post 8.2-final going out.
> > 
> > Does anyone want to have a go at writing an updated process for
> > docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> > we can discuss it and be ready early in the cycle? Ideally someone who
> > already has experience with the workflow with other gitlab hosted
> > projects.

If no one else beats me to it, I can try and write up something,
since I'm pretty familiar with gitlab PR from libvirt & other
projects.

> Reading the Topic 2 paragraph above, I understand that a maintainer
> of a subsystem would be able to merge its '-next' branch in the main
> repository when CI is all green. Correct ?

A maintainer would have their own fork of qemu-project/qemu, under
their namespace, or if there are maintainers collaborating, they
might have a separate group namespace for their subsystem,
e.g. qemu-block-subsys/qemu, or we could use sub-groups perhaps
so  qemu-project/block-subsys/qemu  for official subsystem
trees.

Anyway, when a maintainer wants to merge a tree, I would expect to
have a MR opened against 'master' in qemu-project/qemu.  The CI
ought to then run and if it is all green, then someone would approve
it to merge to master.

> It seems to me that we should also have a group of people approving
> the MR.

Yes, while we could have one designated gatekeeper approving all
MRs, that would defeat some of the benefit of MRs. So it would
likely be good to have a pool, and also set up the config so that
the owner of an MR is not allowed to approve their own MR, to
guarantee there is always a 2nd pair of eyes as a sanity check.

We might also need to consider enabling 'merge trains', so that
we get a serialized CI run again after the MR is approved, in
case 'master' has moved onwards since the initial CI pipeline when
the MR was opened.
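
(As a sketch of what the approval step might look like from the
command line, assuming the GitLab CLI ('glab') and an illustrative MR
number:

  $ glab mr approve 1234
  $ glab mr merge 1234

The same actions are available from the web UI, and the approval rules
themselves -- who may approve, and how many approvals are needed --
are project settings rather than anything stored in the MR.)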

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: QEMU Summit Minutes 2023
  2023-11-28 18:05     ` Peter Maydell
@ 2023-11-28 18:09       ` Daniel P. Berrangé
  0 siblings, 0 replies; 18+ messages in thread
From: Daniel P. Berrangé @ 2023-11-28 18:09 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Cédric Le Goater, Alex Bennée, QEMU Developers

On Tue, Nov 28, 2023 at 06:05:25PM +0000, Peter Maydell wrote:
> On Tue, 28 Nov 2023 at 17:54, Cédric Le Goater <clg@redhat.com> wrote:
> >
> > On 11/21/23 18:11, Alex Bennée wrote:
> > > Peter Maydell <peter.maydell@linaro.org> writes:
> > >> Topic 2: Are we happy with the email workflow?
> > >> ==============================================
> > >>
> > >> This was a topic to see if there was any consensus among maintainers
> > >> about the long-term acceptability of sticking with email for patch
> > >> submission and review -- in five years' time, if we're still doing it
> > >> the same way, how would we feel about it?
> > >>
> > >> One area where we did get consensus was that now that we're doing CI
> > >> on gitlab we can change pull requests from maintainers from via-email
> > >> to gitlab merge requests. This would hopefully mean that instead of
> > >> the release-manager having to tell gitlab to do a merge and then
> > >> reporting back the results of any CI failures, the maintainer
> > >> could directly see the CI results and deal with fixing up failures
> > >> and resubmitting without involving the release manager. (This
> > >> may have the disbenefit that there isn't a single person any
> > >> more who looks at all the CI results and gets a sense of whether
> > >> particular test cases have pre-existing intermittent failures.)
> > >
> > > If we are keen to start processing merge requests for the 9.0 release we
> > > really should consider how it is going to work before we open up the
> > > taps post 8.2-final going out.
> > >
> > > Does anyone want to have a go at writing an updated process for
> > > docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> > > we can discuss it and be ready early in the cycle? Ideally someone who
> > > already has experience with the workflow with other gitlab hosted
> > > projects.
> >
> >
> > Reading the Topic 2 paragraph above, I understand that a maintainer
> > of a subsystem would be able to merge its '-next' branch in the main
> > repository when CI is all green. Correct ?
> 
> I think my intention when writing that was to say that the submaintainer
> kicks things off and deals with resubmitting and rerunning if there
> are failures, but actually doing "merge this successfully tested
> pullreq" is still the release-manager's job.
> 
> > It seems to me that we should also have a group of people approving
> > the MR.
> 
> I do think something like this is probably where we want to get to
> eventually, where there's a group of people with the rights to
> approve a merge, and maybe the rules about how many approvals
> or whose approval is needed can differ between "normal development"
> and "in freeze" periods. But the idea of the above text I think
> was that the first step is to change from how the release manager
> receives "please merge this" requests from the current "here's an
> email, you need to test it" to "here's a thing in the gitlab UI
> that has already passed the tests and is ready to go".

If we setup ACL rules to require the release manager only to
start with, it is easy enough to expand the ACL rules later
once we're comfortable with more people doing the work.


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: QEMU Summit Minutes 2023
  2023-11-28 18:06     ` Daniel P. Berrangé
@ 2023-11-29 14:21       ` Philippe Mathieu-Daudé
  2023-11-29 15:45         ` Warner Losh
  2023-11-29 15:47         ` Stefan Hajnoczi
  2023-11-29 15:50       ` Warner Losh
  1 sibling, 2 replies; 18+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-11-29 14:21 UTC (permalink / raw)
  To: Daniel P. Berrangé, Cédric Le Goater
  Cc: Alex Bennée, Peter Maydell, QEMU Developers

On 28/11/23 19:06, Daniel P. Berrangé wrote:
> On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:

> Anyway, when a maintainer wants to merge a tree, I would expect to
> have a MR opened against 'master' in qemu-project/qemu.  The CI
> ought to then run and if it is all green, then someone would approve
> it to merge to master.
> 
>> It seems to me that we should also have a group of people approving
>> the MR.
> 
> Yes, while we could have one designated gate keeper approving all
> MRs, that would defeat some of the benefit of MRs. So likely would
> be good to have a pool, and also setup the config so that the owner
> of an MR is not allow to approve their own MR, to guarantee there
> is always a 2nd pair of eyes as sanity check.

Are all our tests already on GitLab? Last time I remember Peter still
had manual tests.



* Re: QEMU Summit Minutes 2023
  2023-11-29 14:21       ` Philippe Mathieu-Daudé
@ 2023-11-29 15:45         ` Warner Losh
  2023-11-29 15:47         ` Stefan Hajnoczi
  1 sibling, 0 replies; 18+ messages in thread
From: Warner Losh @ 2023-11-29 15:45 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Daniel P. Berrangé, Cédric Le Goater, Alex Bennée,
	Peter Maydell, QEMU Developers


On Wed, Nov 29, 2023 at 8:33 AM Philippe Mathieu-Daudé <philmd@linaro.org>
wrote:

> On 28/11/23 19:06, Daniel P. Berrangé wrote:
> > On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:
>
> > Anyway, when a maintainer wants to merge a tree, I would expect to
> > have a MR opened against 'master' in qemu-project/qemu.  The CI
> > ought to then run and if it is all green, then someone would approve
> > it to merge to master.
> >
> >> It seems to me that we should also have a group of people approving
> >> the MR.
> >
> > Yes, while we could have one designated gate keeper approving all
> > MRs, that would defeat some of the benefit of MRs. So likely would
> > be good to have a pool, and also setup the config so that the owner
> > of an MR is not allow to approve their own MR, to guarantee there
> > is always a 2nd pair of eyes as sanity check.
>
> Are all our tests already on GitLab? Last time I remember Peter still
> had manual tests.
>

As a low-volume maintainer, I'd love nothing more than to push my PR
asynchronously to the release cycle. I'll get immediate yes/no feedback and
have a chance to fix the 'no' from the CI and/or reviewers. I'd know early
in the review when CI tests break, which I can deal with in parallel. All as
part of the normal process. Now I have to publish in email, and push to
gitlab, and it's very manual, not integrated, and a large source of friction
for me as someone who does things from time to time rather than all the
time (since it's the most radically different set of processes from
anything else I contribute to). This way, I don't have to care about
freezes or whatever. During the non-freeze times it goes in once whatever
criteria are ticked (reviewers and no objections, my say so, CI working,
etc). During the freeze times the release engineer ticks another box for it
to go in... or not... and after the freeze, we'll have a battle royale of
accumulated MRs that will go in, though not all queued at once since we'll
have to re-run the CI with the new changes.

And maybe we could consider just branching for release. Freeze master for
as long as it takes to branch (which needn't be tip) and then master goes
on with life and the release engineer lands bug fixes to the release branch
like we do now in frozen master. That way we don't get the big in-rush
effects when the freeze lifts. FreeBSD went to this a decade ago and makes
releases so much easier.

Warner



* Re: QEMU Summit Minutes 2023
  2023-11-29 14:21       ` Philippe Mathieu-Daudé
  2023-11-29 15:45         ` Warner Losh
@ 2023-11-29 15:47         ` Stefan Hajnoczi
  2023-11-29 15:53           ` Stefan Hajnoczi
  1 sibling, 1 reply; 18+ messages in thread
From: Stefan Hajnoczi @ 2023-11-29 15:47 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Daniel P. Berrangé, Cédric Le Goater, Alex Bennée,
	Peter Maydell, QEMU Developers

On Wed, 29 Nov 2023 at 09:22, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>
> On 28/11/23 19:06, Daniel P. Berrangé wrote:
> > On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:
>
> > Anyway, when a maintainer wants to merge a tree, I would expect to
> > have a MR opened against 'master' in qemu-project/qemu.  The CI
> > ought to then run and if it is all green, then someone would approve
> > it to merge to master.
> >
> >> It seems to me that we should also have a group of people approving
> >> the MR.
> >
> > Yes, while we could have one designated gate keeper approving all
> > MRs, that would defeat some of the benefit of MRs. So likely would
> > be good to have a pool, and also setup the config so that the owner
> > of an MR is not allow to approve their own MR, to guarantee there
> > is always a 2nd pair of eyes as sanity check.
>
> Are all our tests already on GitLab? Last time I remember Peter still
> had manual tests.

Hi Philippe,
QEMU no longer depends on those manual tests even if they still exist.
I did not run any manual tests during the 8.2 release cycle.

I want to highlight that the CI is not yet reliable. It fails due to
intermittent issues more often than it passes. Most of the issues are
related to unreliable test cases. Some of the issues are related to
temporary infrastructure outages where the tests fail when
initializing the environment (e.g. failure to download dependencies).

I am willing to review the CI failure history for the past two weeks
and submit patches to disable unreliable tests. The test owners can
investigate and fix those tests if they want to re-enable them.

Stefan



* Re: QEMU Summit Minutes 2023
  2023-11-28 18:06     ` Daniel P. Berrangé
  2023-11-29 14:21       ` Philippe Mathieu-Daudé
@ 2023-11-29 15:50       ` Warner Losh
  2023-11-29 16:49         ` Daniel P. Berrangé
  1 sibling, 1 reply; 18+ messages in thread
From: Warner Losh @ 2023-11-29 15:50 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Cédric Le Goater, Alex Bennée, Peter Maydell,
	QEMU Developers


On Wed, Nov 29, 2023 at 8:44 AM Daniel P. Berrangé <berrange@redhat.com>
wrote:

> On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:
> > On 11/21/23 18:11, Alex Bennée wrote:
> > > Peter Maydell <peter.maydell@linaro.org> writes:
> > >
> > > > QEMU Summit Minutes 2023
> > > > ========================
> > > >
> > > > As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
> > > > invite-only meeting for the most active maintainers and
> submaintainers
> > > > in the project, and we discuss various project-wide issues, usually
> > > > process stuff. We then post the minutes of the meeting to the list as
> > > > a jumping off point for wider discussion and for those who weren't
> > > > able to attend.
> > > >
> > > <snip>
> > > >
> > > > Topic 2: Are we happy with the email workflow?
> > > > ==============================================
> > > >
> > > > This was a topic to see if there was any consensus among maintainers
> > > > about the long-term acceptability of sticking with email for patch
> > > > submission and review -- in five years' time, if we're still doing it
> > > > the same way, how would we feel about it?
> > > >
> > > > One area where we did get consensus was that now that we're doing CI
> > > > on gitlab we can change pull requests from maintainers from via-email
> > > > to gitlab merge requests. This would hopefully mean that instead of
> > > > the release-manager having to tell gitlab to do a merge and then
> > > > reporting back the results of any CI failures, the maintainer
> > > > could directly see the CI results and deal with fixing up failures
> > > > and resubmitting without involving the release manager. (This
> > > > may have the disbenefit that there isn't a single person any
> > > > more who looks at all the CI results and gets a sense of whether
> > > > particular test cases have pre-existing intermittent failures.)
> > >
> > > If we are keen to start processing merge requests for the 9.0 release
> we
> > > really should consider how it is going to work before we open up the
> > > taps post 8.2-final going out.
> > >
> > > Does anyone want to have a go at writing an updated process for
> > > docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> > > we can discuss it and be ready early in the cycle? Ideally someone who
> > > already has experience with the workflow with other gitlab hosted
> > > projects.
>
> If no one else beats me to it, I can try and write up something,
> since I'm pretty familiar with gitlab PR from libvirt & other
> projects.
>
> > Reading the Topic 2 paragraph above, I understand that a maintainer
> > of a subsystem would be able to merge its '-next' branch in the main
> > repository when CI is all green. Correct ?
>
> A maintainer would have their own fork of qemu-project/qemu, under
> their namespace, or if there are maintainers collaborating, they
> might have a separate group nmamespace for their subsystem.
> eg qemu-block-subsys/qemu, or we could use sub-groups perhaps
> so  qemu-project/block-subsys/qemu  for official subsystem
> trees.
>
> Anyway, when a maintainer wants to merge a tree, I would expect to
> have a MR opened against 'master' in qemu-project/qemu.  The CI
> ought to then run and if it is all green, then someone would approve
> it to merge to master.
>
> > It seems to me that we should also have a group of people approving
> > the MR.
>
> Yes, while we could have one designated gate keeper approving all
> MRs, that would defeat some of the benefit of MRs. So likely would
> be good to have a pool, and also setup the config so that the owner
> of an MR is not allow to approve their own MR, to guarantee there
> is always a 2nd pair of eyes as sanity check.
>
> We might also need to consider enabling 'merge trains', so that
> we get a serialized CI run again after hte MR is approved, in
> case 'master' moved onwards since the initial CI pipeline when
> the MR was opened.
>

I'd honestly optimize for 'frequent merges of smaller things' rather than
'infrequent merges of larger things'. The latter has caused most of the
issues for me. It's harder to contribute because the overhead of doing so
is so large you want to batch everything. Let's not optimize for that
workaround for the high-friction submission process we have now. If there's
always smaller bits of work going in all the time, you'll find few commit
races... though the CI pipeline is rather large, so having a ci-staging
branch to land the MRs that have completed CI onto, but without CI on the
tip, might not be bad... but the resolution of conflicts can be tricky,
though infrequent, so if a ci-staging branch bounces, all MRs would need to
be manually requeued after humans look at why and think through who needs
to talk to whom, or whether it's just a case of 'other things landed before
you could get yours in' and it's not ci-staging being full of other
people's commits that is at fault.

Warner


> With regards,
> Daniel
> --
> |: https://berrange.com      -o-
> https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-
> https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-
> https://www.instagram.com/dberrange :|
>
>
>



* Re: QEMU Summit Minutes 2023
  2023-11-29 15:47         ` Stefan Hajnoczi
@ 2023-11-29 15:53           ` Stefan Hajnoczi
  2023-11-29 16:46             ` Daniel P. Berrangé
  2023-11-29 16:57             ` Alex Bennée
  0 siblings, 2 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2023-11-29 15:53 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Daniel P. Berrangé, Cédric Le Goater, Alex Bennée,
	Peter Maydell, QEMU Developers

To give a picture of the state of the CI, I'd say it fails 80% of the
time. Usually 2 or 3 of the tests fail randomly from a group of <10
tests that commonly fail randomly.

In order for the CI to be usable to submaintainers I think it should
_pass_ at least 90% of the time.

There is still some way to go but I think this goal is achievable in
the next 2 or 3 months because the set of problematic tests is not
that large.

Stefan
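
(For reference, a rough way to eyeball the recent pass rate from the
GitLab API -- a sketch only, assuming the public qemu-project/qemu
project and that jq is installed; it tallies the status of the last
100 pipelines:

  $ curl -s 'https://gitlab.com/api/v4/projects/qemu-project%2Fqemu/pipelines?per_page=100' \
        | jq -r '.[].status' | sort | uniq -c

Statuses include 'success', 'failed' and 'canceled', so this gives a
quick, if crude, failure-rate estimate.)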



* Re: QEMU Summit Minutes 2023
  2023-11-29 15:53           ` Stefan Hajnoczi
@ 2023-11-29 16:46             ` Daniel P. Berrangé
  2023-11-29 16:49               ` Stefan Hajnoczi
  2023-11-29 16:57             ` Alex Bennée
  1 sibling, 1 reply; 18+ messages in thread
From: Daniel P. Berrangé @ 2023-11-29 16:46 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Philippe Mathieu-Daudé, Cédric Le Goater,
	Alex Bennée, Peter Maydell, QEMU Developers

On Wed, Nov 29, 2023 at 10:53:00AM -0500, Stefan Hajnoczi wrote:
> To give a picture of the state of the CI, I'd say it fails 80% of the
> time. Usually 2 or 3 of the tests fail randomly from a group of <10
> tests that commonly fail randomly.
> 
> In order for the CI to be usable to submaintainers I think it should
> _pass_ at least 90% of the time.
> 
> There is still some way to go but I think this goal is achievable in
> the next 2 or 3 months because the set of problematic tests is not
> that large.

FWIW, also bear in mind that when a pipeline fails, it is advisable to
*NOT* re-run the pipeline, as that increases your odds of hitting another
non-deterministic bug. Instead re-run only the individual job(s) that
failed.

None the less, we should of course identify and fix non-deterministic
test failures.


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: QEMU Summit Minutes 2023
  2023-11-29 16:46             ` Daniel P. Berrangé
@ 2023-11-29 16:49               ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2023-11-29 16:49 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Philippe Mathieu-Daudé, Cédric Le Goater,
	Alex Bennée, Peter Maydell, QEMU Developers

On Wed, 29 Nov 2023 at 11:46, Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Wed, Nov 29, 2023 at 10:53:00AM -0500, Stefan Hajnoczi wrote:
> > To give a picture of the state of the CI, I'd say it fails 80% of the
> > time. Usually 2 or 3 of the tests fail randomly from a group of <10
> > tests that commonly fail randomly.
> >
> > In order for the CI to be usable to submaintainers I think it should
> > _pass_ at least 90% of the time.
> >
> > There is still some way to go but I think this goal is achievable in
> > the next 2 or 3 months because the set of problematic tests is not
> > that large.
>
> FWIW, also bear in mind that when a pipeline fails, it is advisible to
> *NOT* re-run the pipeline, as that increases your odds of hitting another
> non-deterministic bug. Instead re-run only the individual job(s) that
> failed.

Yes, that's the approach I take. It's way too risky to re-run the
pipeline because it will probably fail again somewhere else :-).

Stefan



* Re: QEMU Summit Minutes 2023
  2023-11-29 15:50       ` Warner Losh
@ 2023-11-29 16:49         ` Daniel P. Berrangé
  2023-11-29 19:41           ` Peter Maydell
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel P. Berrangé @ 2023-11-29 16:49 UTC (permalink / raw)
  To: Warner Losh
  Cc: Cédric Le Goater, Alex Bennée, Peter Maydell,
	QEMU Developers

On Wed, Nov 29, 2023 at 08:50:06AM -0700, Warner Losh wrote:
> On Wed, Nov 29, 2023 at 8:44 AM Daniel P. Berrangé <berrange@redhat.com>
> wrote:
> 
> > On Tue, Nov 28, 2023 at 06:54:42PM +0100, Cédric Le Goater wrote:
> > > On 11/21/23 18:11, Alex Bennée wrote:
> > > > Peter Maydell <peter.maydell@linaro.org> writes:
> > > >
> > > > > QEMU Summit Minutes 2023
> > > > > ========================
> > > > >
> > > > > As usual, we held a QEMU Summit meeting at KVM Forum.  This is an
> > > > > invite-only meeting for the most active maintainers and
> > submaintainers
> > > > > in the project, and we discuss various project-wide issues, usually
> > > > > process stuff. We then post the minutes of the meeting to the list as
> > > > > a jumping off point for wider discussion and for those who weren't
> > > > > able to attend.
> > > > >
> > > > <snip>
> > > > >
> > > > > Topic 2: Are we happy with the email workflow?
> > > > > ==============================================
> > > > >
> > > > > This was a topic to see if there was any consensus among maintainers
> > > > > about the long-term acceptability of sticking with email for patch
> > > > > submission and review -- in five years' time, if we're still doing it
> > > > > the same way, how would we feel about it?
> > > > >
> > > > > One area where we did get consensus was that now that we're doing CI
> > > > > on gitlab we can change pull requests from maintainers from via-email
> > > > > to gitlab merge requests. This would hopefully mean that instead of
> > > > > the release-manager having to tell gitlab to do a merge and then
> > > > > reporting back the results of any CI failures, the maintainer
> > > > > could directly see the CI results and deal with fixing up failures
> > > > > and resubmitting without involving the release manager. (This
> > > > > may have the disbenefit that there isn't a single person any
> > > > > more who looks at all the CI results and gets a sense of whether
> > > > > particular test cases have pre-existing intermittent failures.)
> > > >
> > > > If we are keen to start processing merge requests for the 9.0 release
> > we
> > > > really should consider how it is going to work before we open up the
> > > > taps post 8.2-final going out.
> > > >
> > > > Does anyone want to have a go at writing an updated process for
> > > > docs/devel/submitting-a-pull-request.rst (or I guess merge-request) so
> > > > we can discuss it and be ready early in the cycle? Ideally someone who
> > > > already has experience with the workflow with other gitlab hosted
> > > > projects.
> >
> > If no one else beats me to it, I can try and write up something,
> > since I'm pretty familiar with gitlab PR from libvirt & other
> > projects.
> >
> > > Reading the Topic 2 paragraph above, I understand that a maintainer
> > > of a subsystem would be able to merge its '-next' branch in the main
> > > repository when CI is all green. Correct ?
> >
> > A maintainer would have their own fork of qemu-project/qemu, under
> > their namespace, or if there are maintainers collaborating, they
> > might have a separate group nmamespace for their subsystem.
> > eg qemu-block-subsys/qemu, or we could use sub-groups perhaps
> > so  qemu-project/block-subsys/qemu  for official subsystem
> > trees.
> >
> > Anyway, when a maintainer wants to merge a tree, I would expect to
> > have a MR opened against 'master' in qemu-project/qemu.  The CI
> > ought to then run and if it is all green, then someone would approve
> > it to merge to master.
> >
> > > It seems to me that we should also have a group of people approving
> > > the MR.
> >
> > Yes, while we could have one designated gate keeper approving all
> > MRs, that would defeat some of the benefit of MRs. So likely would
> > be good to have a pool, and also setup the config so that the owner
> > of an MR is not allow to approve their own MR, to guarantee there
> > is always a 2nd pair of eyes as sanity check.
> >
> > We might also need to consider enabling 'merge trains', so that
> > we get a serialized CI run again after hte MR is approved, in
> > case 'master' moved onwards since the initial CI pipeline when
> > the MR was opened.
> >
> 
> I'd honestly optimize for 'frequent merges of smaller things' rather than
> 'infrequent merges of larger things'. The latter has caused most of the
> issues for me. It's harder to contribute because the overhead of doing so
> is so large you want to batch everything. Let's not optimize for that
> workaround for the high-friction submission process we have now. If there's
> always smaller bits of work going in all the time, you'll find few commit
> races... though the CI pipeline is rather large, so having a ci-staging
> branch to land the MRs to that have completed CI, but not CI on the tip,
> might not be bad... but the resolution of conflicts can be tricky, though
> infrequent, so if a ci-staging branch bounces, all MRs would need to be
> manually requeued after humans look at why and think through who needs to
> talk to whom, or if it's just a 'other things landed before you could get
> yours in and it's not the ci-staging being full of other people's commits
> that is at fault.

I agree. Right now we tend to have fairly large pull requests, because
people are conscious of each pull request consuming non-trivial resources
from the release maintainer. If this new MR based approach reduces the
load on the release maintainer, then sending frequent small pull requests
is almost certainly going to be better.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: QEMU Summit Minutes 2023
  2023-11-29 15:53           ` Stefan Hajnoczi
  2023-11-29 16:46             ` Daniel P. Berrangé
@ 2023-11-29 16:57             ` Alex Bennée
  2023-11-29 18:24               ` Stefan Hajnoczi
  1 sibling, 1 reply; 18+ messages in thread
From: Alex Bennée @ 2023-11-29 16:57 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Philippe Mathieu-Daudé, Daniel P. Berrangé,
	Cédric Le Goater, Peter Maydell, QEMU Developers

Stefan Hajnoczi <stefanha@gmail.com> writes:

> To give a picture of the state of the CI, I'd say it fails 80% of the
> time. Usually 2 or 3 of the tests fail randomly from a group of <10
> tests that commonly fail randomly.

Do you have a list anywhere?

>
> In order for the CI to be usable to submaintainers I think it should
> _pass_ at least 90% of the time.
>
> There is still some way to go but I think this goal is achievable in
> the next 2 or 3 months because the set of problematic tests is not
> that large.
>
> Stefan

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



* Re: QEMU Summit Minutes 2023
  2023-11-29 16:57             ` Alex Bennée
@ 2023-11-29 18:24               ` Stefan Hajnoczi
  0 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2023-11-29 18:24 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Philippe Mathieu-Daudé, Daniel P. Berrangé,
	Cédric Le Goater, Peter Maydell, QEMU Developers

On Wed, 29 Nov 2023 at 11:57, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Stefan Hajnoczi <stefanha@gmail.com> writes:
>
> > To give a picture of the state of the CI, I'd say it fails 80% of the
> > time. Usually 2 or 3 of the tests fail randomly from a group of <10
> > tests that commonly fail randomly.
>
> Do you have a list anywhere?

No, I will need to review the test results from the past two weeks.

Tests I remember causing trouble:
- avocado tests: plain unreliable, most of the time the CI job fails
but the test output doesn't show hard failures, just a bunch of
warnings
- netdev-socket: UNIX domain socket tests are non-deterministic
- qemu-iotests/161: randomly complains that another QEMU may be
running (file locking cleanup is probably racy)
- cfi and s390 jobs are particularly prone to random failures/timeouts

There are more that I don't remember.

Stefan



* Re: QEMU Summit Minutes 2023
  2023-11-29 16:49         ` Daniel P. Berrangé
@ 2023-11-29 19:41           ` Peter Maydell
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Maydell @ 2023-11-29 19:41 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: Warner Losh, Cédric Le Goater, Alex Bennée,
	QEMU Developers

On Wed, 29 Nov 2023 at 16:49, Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Wed, Nov 29, 2023 at 08:50:06AM -0700, Warner Losh wrote:
> > I'd honestly optimize for 'frequent merges of smaller things' rather than
> > 'infrequent merges of larger things'. The latter has caused most of the
> > issues for me. It's harder to contribute because the overhead of doing so
> > is so large you want to batch everything. Let's not optimize for that
> > workaround for the high-friction submission process we have now. If there's
> > always smaller bits of work going in all the time, you'll find few commit
> > races... though the CI pipeline is rather large, so having a ci-staging
> > branch to land the MRs to that have completed CI, but not CI on the tip,
> > might not be bad... but the resolution of conflicts can be tricky, though
> > infrequent, so if a ci-staging branch bounces, all MRs would need to be
> > manually requeued after humans look at why and think through who needs to
> > talk to whom, or if it's just a 'other things landed before you could get
> > yours in and it's not the ci-staging being full of other people's commits
> > that is at fault.
>
> I agree. Right now we tend to have fairly large pull requests, because
> people are concious of each pull request consuming non-trivial resources
> from the release maintainer. If this new MR based approach reduces the
> load on the release maintainer, then sending frequent small pull requests
> is almost certainly going to be better.

FWIW, in the current system also I would recommend submitting
smaller pullrequests rather than large ones. For me when I'm doing
release handling, pushing pullreqs through the process is not
that difficult. The problems come when there are test failures
that need to be tracked down, and those are more likely to be
problematic if they're in the middle of a huge pullreq than
if they're in a smaller one.

Plus waiting means things don't get into the tree as soon, which
just extends the timelines on things.

thanks
-- PMM


