public inbox for linux-media@vger.kernel.org
 help / color / mirror / Atom feed
* [ANN] Media Summit September 16th: Draft Agenda (v5)
@ 2024-09-05  7:16 Hans Verkuil
  2024-09-05  9:25 ` Mehdi Djait
  2024-09-06  8:11 ` Mauro Carvalho Chehab
  0 siblings, 2 replies; 20+ messages in thread
From: Hans Verkuil @ 2024-09-05  7:16 UTC (permalink / raw)
  To: Linux Media Mailing List
  Cc: Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab,
	Sebastian Fricke, Martin Hecht, Tommaso Merciai, Jacopo Mondi,
	Benjamin Mugnier, Laurent Pinchart, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

Hi all,

Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
it is subject to change and all times are guesstimates!

The media summit will be held on Monday September 16th. Avnet Silica has very
kindly offered to host this summit at their Vienna office, which is about 35
minutes by public transport from the Open Source Summit Europe venue
(https://events.linuxfoundation.org/open-source-summit-europe/OSSE).

Avnet Silica Office Location:

Schönbrunner Str. 297/307, 1120 Vienna, Austria

https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu

Refreshments are available during the day.

Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
Norway.

Regarding the face mask policy: we will follow the same guidance that the
Linux Foundation gives for the EOSS conference:

https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety


In-Person Attendees:

Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
Steve Cho <stevecho@chromium.org> (Google)
Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
Martin Hecht <martin.hecht@avnet.eu> (Avnet)
Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
Ricardo Ribalda <ribalda@chromium.org> (Google)
Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
Sean Young <sean@mess.org>
Jerry W Hu <jerry.w.hu@intel.com> (Intel)

Remote Attendees (using MS Teams):

Rishikesh Donadkar <r-donadkar@ti.com> (TI)
Tomasz Figa <tfiga@chromium.org> (Google)
Hidenori Kobayashi <hidenorik@chromium.org> (Google)
Devarsh Thakkar <devarsht@ti.com> (TI)

Note: information on how to connect remotely will come later.

If any information above is incorrect, or if I missed someone, then please let me know.

We are currently 17 confirmed in-person participants, so we're pretty much full.
If you want to join remotely, then contact me and I'll add you to that list.

Draft agenda:

8:45-9:15: get settled :-)

9:15-9:25: Hans: Quick introduction

9:25-11:00: Ricardo: multi-committer model using gitlab

11:00-11:15: break

11:15-12:15: Jacopo: Multi-context support in V4L2

12:15-13:30: Lunch

13:30-14:00: Tomasz: Current state of videobuf2, its limitations and the paths forward.

14:00-14:45: Laurent: subdevs, state, and usage of the media controller device to submit requests.

14:45-15:00: break

15:00-15:30: Sean: new tooling for infrared:

- What it is and what it can do (love to hear any feedback of course)
- Where it should be hosted? (I hope gitlab fdo, who do I ask)
- What needs to be in place for a release?
- This tool replaces ir-ctl and ir-keytable. How we phase them out?

15:30-16:00: Daniel: Rust in the media subsystem

16:00-16:15: break

16:15-16:30: Hans: UVC maintenance

16:30-17:00: Steve Cho:

- V4L2 testing on Chromium using virtual video decode driver (VISL)
- V4L2 video decoding testing with KernelCI

17:00-17:30: Laurent: Should media drivers depend on CONFIG_PM?
See here for more info:
https://lore.kernel.org/linux-media/20240825222455.GA24390@pendragon.ideasonboard.com/

17:30-18:00: TBD

Please reply with corrections, questions, etc. to this email. I'll update the agenda
over time. Again, these times are very preliminary.

Regards,

	Hans


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-05  7:16 [ANN] Media Summit September 16th: Draft Agenda (v5) Hans Verkuil
@ 2024-09-05  9:25 ` Mehdi Djait
  2024-09-05  9:29   ` Hans Verkuil
  2024-09-06  8:11 ` Mauro Carvalho Chehab
  1 sibling, 1 reply; 20+ messages in thread
From: Mehdi Djait @ 2024-09-05  9:25 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Sakari Ailus, Daniel Almeida,
	Mauro Carvalho Chehab, Sebastian Fricke, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

Hi Hans,

On Thu, Sep 05, 2024 at 09:16:27AM GMT, Hans Verkuil wrote:
> Hi all,
> 
> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
> it is subject to change and all times are guesstimates!
> 
> The media summit will be held on Monday September 16th. Avnet Silica has very
> kindly offered to host this summit at their Vienna office, which is about 35
> minutes by public transport from the Open Source Summit Europe venue
> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
> 
> Avnet Silica Office Location:
> 
> Schönbrunner Str. 297/307, 1120 Vienna, Austria
> 
> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
> 
> Refreshments are available during the day.
> 
> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
> Norway.
> 
> Regarding the face mask policy: we will follow the same guidance that the
> Linux Foundation gives for the EOSS conference:
> 
> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
> 
> 
> In-Person Attendees:
> 
> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
> Steve Cho <stevecho@chromium.org> (Google)
> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
> Ricardo Ribalda <ribalda@chromium.org> (Google)
> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
> Sean Young <sean@mess.org>
> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
> 
> Remote Attendees (using MS Teams):
> 
> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> Tomasz Figa <tfiga@chromium.org> (Google)
> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> Devarsh Thakkar <devarsht@ti.com> (TI)
> 
> Note: information on how to connect remotely will come later.
> 
> If any information above is incorrect, or if I missed someone, then please let me know.
> 
> We are currently 17 confirmed in-person participants, so we're pretty much full.
> If you want to join remotely, then contact me and I'll add you to that list.

I would be happy to join remotely.

Thank you for the organisation

--
Kind Regards
Mehdi Djait

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-05  9:25 ` Mehdi Djait
@ 2024-09-05  9:29   ` Hans Verkuil
  2024-09-06 14:23     ` Nicolas Dufresne
  0 siblings, 1 reply; 20+ messages in thread
From: Hans Verkuil @ 2024-09-05  9:29 UTC (permalink / raw)
  To: Mehdi Djait
  Cc: Linux Media Mailing List, Sakari Ailus, Daniel Almeida,
	Mauro Carvalho Chehab, Sebastian Fricke, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On 05/09/2024 11:25, Mehdi Djait wrote:
> Hi Hans,
> 
> On Thu, Sep 05, 2024 at 09:16:27AM GMT, Hans Verkuil wrote:
>> Hi all,
>>
>> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
>> it is subject to change and all times are guesstimates!
>>
>> The media summit will be held on Monday September 16th. Avnet Silica has very
>> kindly offered to host this summit at their Vienna office, which is about 35
>> minutes by public transport from the Open Source Summit Europe venue
>> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
>>
>> Avnet Silica Office Location:
>>
>> Schönbrunner Str. 297/307, 1120 Vienna, Austria
>>
>> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
>>
>> Refreshments are available during the day.
>>
>> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
>> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
>> Norway.
>>
>> Regarding the face mask policy: we will follow the same guidance that the
>> Linux Foundation gives for the EOSS conference:
>>
>> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
>>
>>
>> In-Person Attendees:
>>
>> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
>> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
>> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
>> Steve Cho <stevecho@chromium.org> (Google)
>> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
>> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
>> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
>> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
>> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
>> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
>> Ricardo Ribalda <ribalda@chromium.org> (Google)
>> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
>> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
>> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
>> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
>> Sean Young <sean@mess.org>
>> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
>>
>> Remote Attendees (using MS Teams):
>>
>> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
>> Tomasz Figa <tfiga@chromium.org> (Google)
>> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
>> Devarsh Thakkar <devarsht@ti.com> (TI)
>>
>> Note: information on how to connect remotely will come later.
>>
>> If any information above is incorrect, or if I missed someone, then please let me know.
>>
>> We are currently 17 confirmed in-person participants, so we're pretty much full.
>> If you want to join remotely, then contact me and I'll add you to that list.
> 
> I would be happy to join remotely.
> 
> Thank you for the organisation

I added you to the remote attendees list!

Regards,

	Hans

> 
> --
> Kind Regards
> Mehdi Djait


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-05  7:16 [ANN] Media Summit September 16th: Draft Agenda (v5) Hans Verkuil
  2024-09-05  9:25 ` Mehdi Djait
@ 2024-09-06  8:11 ` Mauro Carvalho Chehab
  2024-09-06 10:36   ` Sebastian Fricke
                     ` (2 more replies)
  1 sibling, 3 replies; 20+ messages in thread
From: Mauro Carvalho Chehab @ 2024-09-06  8:11 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Sakari Ailus, Daniel Almeida,
	Mauro Carvalho Chehab, Sebastian Fricke, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

Em Thu, 5 Sep 2024 09:16:27 +0200
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> Hi all,
> 
> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
> it is subject to change and all times are guesstimates!
> 
> The media summit will be held on Monday September 16th. Avnet Silica has very
> kindly offered to host this summit at their Vienna office, which is about 35
> minutes by public transport from the Open Source Summit Europe venue
> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
> 
> Avnet Silica Office Location:
> 
> Schönbrunner Str. 297/307, 1120 Vienna, Austria
> 
> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
> 
> Refreshments are available during the day.
> 
> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
> Norway.
> 
> Regarding the face mask policy: we will follow the same guidance that the
> Linux Foundation gives for the EOSS conference:
> 
> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
> 
> 
> In-Person Attendees:
> 
> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
> Steve Cho <stevecho@chromium.org> (Google)
> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
> Ricardo Ribalda <ribalda@chromium.org> (Google)
> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
> Sean Young <sean@mess.org>
> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
> 
> Remote Attendees (using MS Teams):
> 
> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> Tomasz Figa <tfiga@chromium.org> (Google)
> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> Devarsh Thakkar <devarsht@ti.com> (TI)
> 
> Note: information on how to connect remotely will come later.
> 
> If any information above is incorrect, or if I missed someone, then please let me know.
> 
> We are currently 17 confirmed in-person participants, so we're pretty much full.
> If you want to join remotely, then contact me and I'll add you to that list.
> 
> Draft agenda:
> 
> 8:45-9:15: get settled :-)
> 
> 9:15-9:25: Hans: Quick introduction
> 
> 9:25-11:00: Ricardo: multi-committer model using gitlab

As part of such discussion, IMO some topics that should be covered:

1. All committers shall use a common procedure for all merges.

   This is easy said than done. So, IMO, it is needed some common scripts
   to be used by all committers. On my tests when merging two PRs there,
   those seems to be the minimal set of scripts that are needed:

   a) script to create a new topic branch at linux-media/users/<user>
      The input parameter is the message-ID, e. g. something like:

	create_media_staging_topic <topic_name> <message_id>

      (eventually with an extra parameter with the name of the tree)

      It shall use patchwork REST interface to get the patches - or at least
      to check if all patches are there (and then use b4).

      such script needs to work with a single patch, a patch series and a
      pull request.

      the message ID of every patch, including the PR should be stored at
      the MR, as this will be needed to later update patchwork.

   b) once gitlab CI runs, there are two possible outcomes: patches may
      pass or not. If they pass, a MR will be created and eventually be
      merged.

      Either merged or not (because something failed or the patches require
      more work), the patchwork status of the patch require changes to
      reflect what happened. IMO, another script is needed to update the
      patch/patch series/PR on patchwork on a consistent way.

      This is actually a *big* gap we have here. I have a script that 
      manually check patchwork status and the gap is huge. currently,
      there are 73 patches that seems to be merged, but patchwork was not
      updated.

      From what I noticed, some PR submitters almost never update patchwork
      after the merges.

      So another script to solve this gap is needed, doing updates on all 
      patches that were picked by the first script. Something like:

      update_patchwork_from_topic <topic_name> <new_status>

      This would likely need to use some logic to pick the message IDs
      of the patch inside the MR.

      Such script could also check for previous versions of the patch
      and for identical patches (it is somewhat common to receive identical
      patches with trivial fixes from different developers).

   Someone needs to work on such script, as otherwise the multi committers
   model may fail, and we risk needing to return back to the current model.

2. The mailbomb script that notifies when a patch is merged at media-stage
   we're using right now doesn't work with well with multiple committers.

   Right now, the tree at linuxtv runs it, but it might end sending patches
   to the author and to linuxtv-commits ML that reached upstream from other
   trees. It has some logic to prevent that, but it is not bulletproof.
 
   A replacement script is needed. Perhaps this can be executed together
   with the patchwork script (1B) at the committer's machine.

3. Staging require different rules, as smatch/spatch/sparse/checkpatch
   warnings and errors can be acceptable.

4. We need to have some sort of "honour code": if undesired behavior
   is noticed, maintainers may temporarily (or permanently) revoke
   committer rights.

   Hopefully, this will never happen, but, if it happens, a rebase
   of media-staging tree may be needed.

5. The procedure for fixes wil remain the same. We'll have already enough
   things to make it work. Let's not add fixes complexity just yet.
   Depending on how well the new multi-committers experimental model
   works, we may think using it for fixes as well.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06  8:11 ` Mauro Carvalho Chehab
@ 2024-09-06 10:36   ` Sebastian Fricke
  2024-09-06 11:10     ` Mauro Carvalho Chehab
  2024-09-06 13:32   ` Laurent Pinchart
  2024-09-07  8:02   ` Hans Verkuil
  2 siblings, 1 reply; 20+ messages in thread
From: Sebastian Fricke @ 2024-09-06 10:36 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

Hey Mauro,

On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:
>Em Thu, 5 Sep 2024 09:16:27 +0200
>Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>
>> Hi all,
>>
>> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
>> it is subject to change and all times are guesstimates!
>>
>> The media summit will be held on Monday September 16th. Avnet Silica has very
>> kindly offered to host this summit at their Vienna office, which is about 35
>> minutes by public transport from the Open Source Summit Europe venue
>> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
>>
>> Avnet Silica Office Location:
>>
>> Schönbrunner Str. 297/307, 1120 Vienna, Austria
>>
>> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
>>
>> Refreshments are available during the day.
>>
>> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
>> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
>> Norway.
>>
>> Regarding the face mask policy: we will follow the same guidance that the
>> Linux Foundation gives for the EOSS conference:
>>
>> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
>>
>>
>> In-Person Attendees:
>>
>> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
>> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
>> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
>> Steve Cho <stevecho@chromium.org> (Google)
>> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
>> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
>> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
>> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
>> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
>> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
>> Ricardo Ribalda <ribalda@chromium.org> (Google)
>> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
>> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
>> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
>> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
>> Sean Young <sean@mess.org>
>> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
>>
>> Remote Attendees (using MS Teams):
>>
>> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
>> Tomasz Figa <tfiga@chromium.org> (Google)
>> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
>> Devarsh Thakkar <devarsht@ti.com> (TI)
>>
>> Note: information on how to connect remotely will come later.
>>
>> If any information above is incorrect, or if I missed someone, then please let me know.
>>
>> We are currently 17 confirmed in-person participants, so we're pretty much full.
>> If you want to join remotely, then contact me and I'll add you to that list.
>>
>> Draft agenda:
>>
>> 8:45-9:15: get settled :-)
>>
>> 9:15-9:25: Hans: Quick introduction
>>
>> 9:25-11:00: Ricardo: multi-committer model using gitlab
>
>As part of such discussion, IMO some topics that should be covered:
>
>1. All committers shall use a common procedure for all merges.
>
>   This is easy said than done. So, IMO, it is needed some common scripts
>   to be used by all committers. On my tests when merging two PRs there,
>   those seems to be the minimal set of scripts that are needed:
>
>   a) script to create a new topic branch at linux-media/users/<user>
>      The input parameter is the message-ID, e. g. something like:
>
>	create_media_staging_topic <topic_name> <message_id>
>
>      (eventually with an extra parameter with the name of the tree)
>
>      It shall use patchwork REST interface to get the patches - or at least
>      to check if all patches are there (and then use b4).
>
>      such script needs to work with a single patch, a patch series and a
>      pull request.
>
>      the message ID of every patch, including the PR should be stored at
>      the MR, as this will be needed to later update patchwork.
>
>   b) once gitlab CI runs, there are two possible outcomes: patches may
>      pass or not. If they pass, a MR will be created and eventually be
>      merged.
>
>      Either merged or not (because something failed or the patches require
>      more work), the patchwork status of the patch require changes to
>      reflect what happened. IMO, another script is needed to update the
>      patch/patch series/PR on patchwork on a consistent way.
>
>      This is actually a *big* gap we have here. I have a script that
>      manually check patchwork status and the gap is huge. currently,
>      there are 73 patches that seems to be merged, but patchwork was not
>      updated.
>
>      From what I noticed, some PR submitters almost never update patchwork
>      after the merges.

Interesting I thought that is how it should be? I mean if I send a PR,
the one taking the PR must decide whether he wants to pull it, so the
decision whether the set is accepted should be on the one pulling the
changes and not on the one pushing them. Right?

>
>      So another script to solve this gap is needed, doing updates on all
>      patches that were picked by the first script. Something like:

Shouldn't the update happen after the MR has been created instead,
because only after running the CI we know whether the tests fail or
pass? From what you say above it sounds like the first script merely
prepares a topic branch to be tested.

>
>      update_patchwork_from_topic <topic_name> <new_status>
>
>      This would likely need to use some logic to pick the message IDs
>      of the patch inside the MR.
>
>      Such script could also check for previous versions of the patch
>      and for identical patches (it is somewhat common to receive identical
>      patches with trivial fixes from different developers).
>
>   Someone needs to work on such script, as otherwise the multi committers
>   model may fail, and we risk needing to return back to the current model.

Just my personal thought: This sounds a bit extreme to me, I mean we are
currently in the first test run as preparation for the Media-summit
where we actually want to discuss policies and further requirements.

>
>2. The mailbomb script that notifies when a patch is merged at media-stage
>   we're using right now doesn't work with well with multiple committers.
>
>   Right now, the tree at linuxtv runs it, but it might end sending patches
>   to the author and to linuxtv-commits ML that reached upstream from other
>   trees. It has some logic to prevent that, but it is not bulletproof.
>
>   A replacement script is needed. Perhaps this can be executed together
>   with the patchwork script (1B) at the committer's machine.
>
>3. Staging require different rules, as smatch/spatch/sparse/checkpatch
>   warnings and errors can be acceptable.

I thought that we came to a consensus that only warnings are acceptable?
If we accept errors in the staging but not in media master, does that
mean that we ought to send patches to the media staging tree then?
I'd vote for only allowing patches without errors into the staging tree
and in the worst case add a list with acceptable errors.

>
>4. We need to have some sort of "honour code": if undesired behavior
>   is noticed, maintainers may temporarily (or permanently) revoke
>   committer rights.

That sounds like something to discuss on the media summit, revoking
commit rights shouldn't be hard as you just have to remove push rights
for that GitLab tree.

>
>   Hopefully, this will never happen, but, if it happens, a rebase
>   of media-staging tree may be needed.
>
>5. The procedure for fixes wil remain the same. We'll have already enough
>   things to make it work. Let's not add fixes complexity just yet.
>   Depending on how well the new multi-committers experimental model
>   works, we may think using it for fixes as well.
>
>Thanks,
>Mauro
>

Regards,
Sebastian Fricke
Consultant Software Engineer

Collabora Ltd
Platinum Building, St John's Innovation Park, Cambridge CB4 0DS, UK
Registered in England & Wales no 5513718.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 10:36   ` Sebastian Fricke
@ 2024-09-06 11:10     ` Mauro Carvalho Chehab
  2024-09-06 13:29       ` Laurent Pinchart
  0 siblings, 1 reply; 20+ messages in thread
From: Mauro Carvalho Chehab @ 2024-09-06 11:10 UTC (permalink / raw)
  To: Sebastian Fricke
  Cc: Hans Verkuil, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

Em Fri, 6 Sep 2024 12:36:58 +0200
Sebastian Fricke <sebastian.fricke@collabora.com> escreveu:

> Hey Mauro,
> 
> On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:
> >Em Thu, 5 Sep 2024 09:16:27 +0200
> >Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> >  
...
> >1. All committers shall use a common procedure for all merges.
> >
> >   This is easy said than done. So, IMO, it is needed some common scripts
> >   to be used by all committers. On my tests when merging two PRs there,
> >   those seems to be the minimal set of scripts that are needed:
> >
> >   a) script to create a new topic branch at linux-media/users/<user>
> >      The input parameter is the message-ID, e. g. something like:
> >
> >	create_media_staging_topic <topic_name> <message_id>
> >
> >      (eventually with an extra parameter with the name of the tree)
> >
> >      It shall use patchwork REST interface to get the patches - or at least
> >      to check if all patches are there (and then use b4).
> >
> >      such script needs to work with a single patch, a patch series and a
> >      pull request.
> >
> >      the message ID of every patch, including the PR should be stored at
> >      the MR, as this will be needed to later update patchwork.
> >
> >   b) once gitlab CI runs, there are two possible outcomes: patches may
> >      pass or not. If they pass, a MR will be created and eventually be
> >      merged.
> >
> >      Either merged or not (because something failed or the patches require
> >      more work), the patchwork status of the patch require changes to
> >      reflect what happened. IMO, another script is needed to update the
> >      patch/patch series/PR on patchwork on a consistent way.
> >
> >      This is actually a *big* gap we have here. I have a script that
> >      manually check patchwork status and the gap is huge. currently,
> >      there are 73 patches that seems to be merged, but patchwork was not
> >      updated.
> >
> >      From what I noticed, some PR submitters almost never update patchwork
> >      after the merges.  
> 
> Interesting I thought that is how it should be? I mean if I send a PR,
> the one taking the PR must decide whether he wants to pull it, so the
> decision whether the set is accepted should be on the one pulling the
> changes and not on the one pushing them. Right?

Yes, but you still need to update previous attempts to submit the
work. So, let's see a patch series at v9 got a PR. It is up to you
to cleanup everything on patchwork from v1 to v8.

Now, the committer handing PR for v9 should be in charge of updating
the status of it and the patches that belong to it, once the patch
is merged or once he takes a decision otherwise.

> >
> >      So another script to solve this gap is needed, doing updates on all
> >      patches that were picked by the first script. Something like:  
> 
> Shouldn't the update happen after the MR has been created instead,
> because only after running the CI we know whether the tests fail or
> pass? From what you say above it sounds like the first script merely
> prepares a topic branch to be tested.

the first script I mentioned prepares and pushes the topic branch. The
patchwork update script (the second one) can be used on several parts
of the workflow:

- before MR: if the committer decides to not merge the changes, because
  it didn't pass on his review;
- after MR pipeline failures;
- after MR being merged: once it reaches media-staging master.

> >      update_patchwork_from_topic <topic_name> <new_status>
> >
> >      This would likely need to use some logic to pick the message IDs
> >      of the patch inside the MR.
> >
> >      Such script could also check for previous versions of the patch
> >      and for identical patches (it is somewhat common to receive identical
> >      patches with trivial fixes from different developers).
> >
> >   Someone needs to work on such script, as otherwise the multi committers
> >   model may fail, and we risk needing to return back to the current model.  
> 
> Just my personal thought: This sounds a bit extreme to me, I mean we are
> currently in the first test run as preparation for the Media-summit
> where we actually want to discuss policies and further requirements.

What I'm saying is that multiple-committers model will only work if
all committers follow a common procedure (still variants between
committers is possible, provided that the minimal set is followed).

If this doesn't happens, maintainers may be forced to do rebases and other
manual fixes, with will actually make life worse for everyone. That's
what I want to avoid by having a common set of scripts for the very basic
tasks: create a topic branch for CI to test and update patchwork.

> >2. The mailbomb script that notifies when a patch is merged at media-stage
> >   we're using right now doesn't work with well with multiple committers.
> >
> >   Right now, the tree at linuxtv runs it, but it might end sending patches
> >   to the author and to linuxtv-commits ML that reached upstream from other
> >   trees. It has some logic to prevent that, but it is not bulletproof.
> >
> >   A replacement script is needed. Perhaps this can be executed together
> >   with the patchwork script (1B) at the committer's machine.
> >
> >3. Staging require different rules, as smatch/spatch/sparse/checkpatch
> >   warnings and errors can be acceptable.  
> 
> I thought that we came to a consensus that only warnings are acceptable?
> If we accept errors in the staging but not in media master, does that
> mean that we ought to send patches to the media staging tree then?
> I'd vote for only allowing patches without errors into the staging tree
> and in the worst case add a list with acceptable errors.

No, you got me wrong. By staging, I'm referring to drivers/staging/
not to the multi-committers tree.

The bar for sending stuff to drivers/staging is lower than for
drivers/media. According with Kernel's documentation at
Documentation/process/2.Process.rst:

	"Current rules require that drivers contributed to staging
	 must, at a minimum, compile properly."

We actually try to be better than that, but still when new stuff is
added to staging, while we try to also ensure no static code checkers
would fail, we may need to relax the rules, as it is not uncommon to
receive drivers that need extra care to follow Kernel coding style
there.

> >4. We need to have some sort of "honour code": if undesired behavior
> >   is noticed, maintainers may temporarily (or permanently) revoke
> >   committer rights.  
> 
> That sounds like something to discuss on the media summit, 

Sure. That's why I'm asking to explicitly add it to the topics to be
discussed there ;-)

It would be nice if someone could come up with a proposal for 
it, although the discussions of such proposal should likely happen via 
email.

Also, the ones to be added there likely need to explicitly ack with
such code before being added to gitlab's permission group.

> revoking
> commit rights shouldn't be hard as you just have to remove push rights
> for that GitLab tree.

Technically yes, but in practice this is usually very hard, as we're
dealing with people that will very likely be very unhappy of getting banned.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 11:10     ` Mauro Carvalho Chehab
@ 2024-09-06 13:29       ` Laurent Pinchart
  2024-09-06 17:43         ` Mauro Carvalho Chehab
  0 siblings, 1 reply; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-06 13:29 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sebastian Fricke, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
> Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:
> > On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:
> > > Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> > >  
> ...
> > > 1. All committers shall use a common procedure for all merges.
> > >
> > >   This is easy said than done. So, IMO, it is needed some common scripts
> > >   to be used by all committers. On my tests when merging two PRs there,
> > >   those seems to be the minimal set of scripts that are needed:
> > >
> > >   a) script to create a new topic branch at linux-media/users/<user>
> > >      The input parameter is the message-ID, e. g. something like:
> > >
> > >	create_media_staging_topic <topic_name> <message_id>
> > >
> > >      (eventually with an extra parameter with the name of the tree)
> > >
> > >      It shall use patchwork REST interface to get the patches - or at least
> > >      to check if all patches are there (and then use b4).
> > >
> > >      such script needs to work with a single patch, a patch series and a
> > >      pull request.
> > >
> > >      the message ID of every patch, including the PR should be stored at
> > >      the MR, as this will be needed to later update patchwork.

I'm sure individual committers will want to optimize their workflow
using scripts, and possibly share them, but what's wrong with simply
using b4 ? With -l it will add a link to lore, so the message ID will be
available.

I'd rather first focus on what we want to see included in commit
messages, and then on how to make sure it gets there.

> > >
> > >   b) once gitlab CI runs, there are two possible outcomes: patches may
> > >      pass or not. If they pass, a MR will be created and eventually be
> > >      merged.
> > >
> > >      Either merged or not (because something failed or the patches require
> > >      more work), the patchwork status of the patch require changes to
> > >      reflect what happened. IMO, another script is needed to update the
> > >      patch/patch series/PR on patchwork on a consistent way.
> > >
> > >      This is actually a *big* gap we have here. I have a script that
> > >      manually check patchwork status and the gap is huge. currently,
> > >      there are 73 patches that seems to be merged, but patchwork was not
> > >      updated.
> > >
> > >      From what I noticed, some PR submitters almost never update patchwork
> > >      after the merges.  
> > 
> > Interesting I thought that is how it should be? I mean if I send a PR,
> > the one taking the PR must decide whether he wants to pull it, so the
> > decision whether the set is accepted should be on the one pulling the
> > changes and not on the one pushing them. Right?
> 
> Yes, but you still need to update previous attempts to submit the
> work. So, let's see a patch series at v9 got a PR. It is up to you
> to cleanup everything on patchwork from v1 to v8.

That should be done before though, when v2 is submitted, v1 should be
marked as superseded. Isn't there a patchwork bot that can help with
this ? It's not perfect in the sense that it doesn't always match new
versions with previous ones, but if it can lower the manual burden by
even 80%, it's a big win.

> Now, the committer handing PR for v9 should be in charge of updating
> the status of it and the patches that belong to it, once the patch
> is merged or once he takes a decision otherwise.

Today most authors don't have a patchwork account, so they can't update
the status themselves. The responsibility mostly falls on you and Hans.
I'm fine with moving this to committers, which will make your life
easier. The patchwork update when merging a branch should ideally be
done automatically by the gitlab instance. If we have lore links in the
commit messages, that sounds doable.

> > >
> > >      So another script to solve this gap is needed, doing updates on all
> > >      patches that were picked by the first script. Something like:  
> > 
> > Shouldn't the update happen after the MR has been created instead,
> > because only after running the CI we know whether the tests fail or
> > pass? From what you say above it sounds like the first script merely
> > prepares a topic branch to be tested.
> 
> the first script I mentioned prepares and pushes the topic branch. The
> patchwork update script (the second one) can be used on several parts
> of the workflow:
> 
> - before MR: if the committer decides to not merge the changes, because
>   it didn't pass on his review;
> - after MR pipeline failures;
> - after MR being merged: once it reaches media-staging master.
> 
> > >      update_patchwork_from_topic <topic_name> <new_status>
> > >
> > >      This would likely need to use some logic to pick the message IDs
> > >      of the patch inside the MR.
> > >
> > >      Such script could also check for previous versions of the patch
> > >      and for identical patches (it is somewhat common to receive identical
> > >      patches with trivial fixes from different developers).
> > >
> > >   Someone needs to work on such script, as otherwise the multi committers
> > >   model may fail, and we risk needing to return back to the current model.  
> > 
> > Just my personal thought: This sounds a bit extreme to me, I mean we are
> > currently in the first test run as preparation for the Media-summit
> > where we actually want to discuss policies and further requirements.
> 
> What I'm saying is that multiple-committers model will only work if
> all committers follow a common procedure (still variants between
> committers is possible, provided that the minimal set is followed).
> 
> If this doesn't happens, maintainers may be forced to do rebases and other
> manual fixes, with will actually make life worse for everyone. That's
> what I want to avoid by having a common set of scripts for the very basic
> tasks: create a topic branch for CI to test and update patchwork.

The issues you've listed above are about updating status of patches in
patchwork. That's important (or otherwise we should drop patchwork if we
think it's not important, but I don't think that's the case), but I
hardly see how missing updates to patchwork would call for a rebase :-)

There's a need to make sure that what lands in the shared tree is
proper, and for that we need a clearly documented procedure. At this
point I don't think it requires a committer tool script, even if in the
future developing such a tool could help.

> > > 2. The mailbomb script that notifies when a patch is merged at media-stage
> > >    we're using right now doesn't work with well with multiple committers.
> > >
> > >    Right now, the tree at linuxtv runs it, but it might end sending patches
> > >    to the author and to linuxtv-commits ML that reached upstream from other
> > >    trees. It has some logic to prevent that, but it is not bulletproof.
> > >
> > >    A replacement script is needed. Perhaps this can be executed together
> > >    with the patchwork script (1B) at the committer's machine.

This could possibly be done with 'b4 ty'. Another option is to rely on
patchwork notifications instead. If a patch is marked as merged, the
notification e-mail sounds enough to notify the submitter.

> > > 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
> > >    warnings and errors can be acceptable.  
> > 
> > I thought that we came to a consensus that only warnings are acceptable?
> > If we accept errors in the staging but not in media master, does that
> > mean that we ought to send patches to the media staging tree then?
> > I'd vote for only allowing patches without errors into the staging tree
> > and in the worst case add a list with acceptable errors.
> 
> No, you got me wrong. By staging, I'm referring to drivers/staging/
> not to the multi-committers tree.

Can we please rename the shared tree ? media-staging creates confusion
with drivers/staging/media/.

> The bar for sending stuff to drivers/staging is lower than for
> drivers/media. According with Kernel's documentation at
> Documentation/process/2.Process.rst:
> 
> 	"Current rules require that drivers contributed to staging
> 	 must, at a minimum, compile properly."
> 
> We actually try to be better than that, but still when new stuff is
> added to staging, while we try to also ensure no static code checkers
> would fail, we may need to relax the rules, as it is not uncommon to
> receive drivers that need extra care to follow Kernel coding style
> there.

Why can't we tell the submitter to fix the warnings first ? Especially
if we can point them to a CI report, that would seem the best course of
action to me. drivers/staging/ is meant for code that need more time to
be considered stable enough for the kernel. The stabilization doesn't
happen by magic, it requires someone actively working on it. If the
submitter can't bother to fix the warnings, it doesn't forebode anything
good and wouldn't make me confident that the code would ever get out of
staging (other than simply being deleted).

> > > 4. We need to have some sort of "honour code": if undesired behavior
> > >    is noticed, maintainers may temporarily (or permanently) revoke
> > >    committer rights.  
> > 
> > That sounds like something to discuss on the media summit, 
> 
> Sure. That's why I'm asking to explicitly add it to the topics to be
> discussed there ;-)
> 
> It would be nice if someone could come up with a proposal for 
> it, although the discussions of such proposal should likely happen via 
> email.
> 
> Also, the ones to be added there likely need to explicitly ack with
> such code before being added to gitlab's permission group.

This sounds fairly straightforward, I think we'll easily agree on rules.

> > revoking
> > commit rights shouldn't be hard as you just have to remove push rights
> > for that GitLab tree.
> 
> Technically yes, but in practice this is usually very hard, as we're
> dealing with people that will very likely be very unhappy of getting banned.

Why would that make it hard to remove push rights ?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06  8:11 ` Mauro Carvalho Chehab
  2024-09-06 10:36   ` Sebastian Fricke
@ 2024-09-06 13:32   ` Laurent Pinchart
  2024-09-07  8:02   ` Hans Verkuil
  2 siblings, 0 replies; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-06 13:32 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Hans Verkuil, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Sebastian Fricke,
	Martin Hecht, Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On Fri, Sep 06, 2024 at 10:11:48AM +0200, Mauro Carvalho Chehab wrote:
> Em Thu, 5 Sep 2024 09:16:27 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
> > Hi all,
> > 
> > Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
> > it is subject to change and all times are guesstimates!
> > 
> > The media summit will be held on Monday September 16th. Avnet Silica has very
> > kindly offered to host this summit at their Vienna office, which is about 35
> > minutes by public transport from the Open Source Summit Europe venue
> > (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
> > 
> > Avnet Silica Office Location:
> > 
> > Schönbrunner Str. 297/307, 1120 Vienna, Austria
> > 
> > https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
> > 
> > Refreshments are available during the day.
> > 
> > Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
> > to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
> > Norway.
> > 
> > Regarding the face mask policy: we will follow the same guidance that the
> > Linux Foundation gives for the EOSS conference:
> > 
> > https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
> > 
> > 
> > In-Person Attendees:
> > 
> > Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
> > Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
> > Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
> > Steve Cho <stevecho@chromium.org> (Google)
> > Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
> > Martin Hecht <martin.hecht@avnet.eu> (Avnet)
> > Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
> > Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
> > Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
> > Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
> > Ricardo Ribalda <ribalda@chromium.org> (Google)
> > Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
> > Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
> > Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
> > Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
> > Sean Young <sean@mess.org>
> > Jerry W Hu <jerry.w.hu@intel.com> (Intel)
> > 
> > Remote Attendees (using MS Teams):
> > 
> > Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> > Tomasz Figa <tfiga@chromium.org> (Google)
> > Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> > Devarsh Thakkar <devarsht@ti.com> (TI)
> > 
> > Note: information on how to connect remotely will come later.
> > 
> > If any information above is incorrect, or if I missed someone, then please let me know.
> > 
> > We are currently 17 confirmed in-person participants, so we're pretty much full.
> > If you want to join remotely, then contact me and I'll add you to that list.
> > 
> > Draft agenda:
> > 
> > 8:45-9:15: get settled :-)
> > 
> > 9:15-9:25: Hans: Quick introduction
> > 
> > 9:25-11:00: Ricardo: multi-committer model using gitlab
> 
> As part of such discussion, IMO some topics that should be covered:
> 
> 1. All committers shall use a common procedure for all merges.
> 
>    This is easy said than done. So, IMO, it is needed some common scripts
>    to be used by all committers. On my tests when merging two PRs there,
>    those seems to be the minimal set of scripts that are needed:
> 
>    a) script to create a new topic branch at linux-media/users/<user>
>       The input parameter is the message-ID, e. g. something like:
> 
> 	create_media_staging_topic <topic_name> <message_id>
> 
>       (eventually with an extra parameter with the name of the tree)
> 
>       It shall use patchwork REST interface to get the patches - or at least
>       to check if all patches are there (and then use b4).
> 
>       such script needs to work with a single patch, a patch series and a
>       pull request.
> 
>       the message ID of every patch, including the PR should be stored at
>       the MR, as this will be needed to later update patchwork.
> 
>    b) once gitlab CI runs, there are two possible outcomes: patches may
>       pass or not. If they pass, a MR will be created and eventually be
>       merged.
> 
>       Either merged or not (because something failed or the patches require
>       more work), the patchwork status of the patch require changes to
>       reflect what happened. IMO, another script is needed to update the
>       patch/patch series/PR on patchwork on a consistent way.
> 
>       This is actually a *big* gap we have here. I have a script that 
>       manually check patchwork status and the gap is huge. currently,
>       there are 73 patches that seems to be merged, but patchwork was not
>       updated.
> 
>       From what I noticed, some PR submitters almost never update patchwork
>       after the merges.
> 
>       So another script to solve this gap is needed, doing updates on all 
>       patches that were picked by the first script. Something like:
> 
>       update_patchwork_from_topic <topic_name> <new_status>
> 
>       This would likely need to use some logic to pick the message IDs
>       of the patch inside the MR.
> 
>       Such script could also check for previous versions of the patch
>       and for identical patches (it is somewhat common to receive identical
>       patches with trivial fixes from different developers).
> 
>    Someone needs to work on such script, as otherwise the multi committers
>    model may fail, and we risk needing to return back to the current model.
> 
> 2. The mailbomb script that notifies when a patch is merged at media-stage
>    we're using right now doesn't work with well with multiple committers.
> 
>    Right now, the tree at linuxtv runs it, but it might end sending patches
>    to the author and to linuxtv-commits ML that reached upstream from other
>    trees. It has some logic to prevent that, but it is not bulletproof.
>  
>    A replacement script is needed. Perhaps this can be executed together
>    with the patchwork script (1B) at the committer's machine.
> 
> 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
>    warnings and errors can be acceptable.
> 
> 4. We need to have some sort of "honour code": if undesired behavior
>    is noticed, maintainers may temporarily (or permanently) revoke
>    committer rights.
> 
>    Hopefully, this will never happen, but, if it happens, a rebase
>    of media-staging tree may be needed.
> 
> 5. The procedure for fixes wil remain the same. We'll have already enough
>    things to make it work. Let's not add fixes complexity just yet.
>    Depending on how well the new multi-committers experimental model
>    works, we may think using it for fixes as well.

I think this last point should still be discussed in Vienna. I want to
design a clear workflow that covers -next and -fixes. I'm fine if we
decide to then implement part of the workflow only as an initial step,
if there are good enough reasons to do so.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-05  9:29   ` Hans Verkuil
@ 2024-09-06 14:23     ` Nicolas Dufresne
  0 siblings, 0 replies; 20+ messages in thread
From: Nicolas Dufresne @ 2024-09-06 14:23 UTC (permalink / raw)
  To: Hans Verkuil, Mehdi Djait
  Cc: Linux Media Mailing List, Sakari Ailus, Daniel Almeida,
	Mauro Carvalho Chehab, Sebastian Fricke, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

Hi Hans,

Le jeudi 05 septembre 2024 à 11:29 +0200, Hans Verkuil a écrit :
> On 05/09/2024 11:25, Mehdi Djait wrote:
> > Hi Hans,
> > > 
> > > Remote Attendees (using MS Teams):
> > > 
> > > Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> > > Tomasz Figa <tfiga@chromium.org> (Google)
> > > Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> > > Devarsh Thakkar <devarsht@ti.com> (TI)

I'd be happy get the MS Team links too. I'll try and join from 11:15 discussion
(5:15 my time) as multi-context MC is also something I foresee as a solution to
fulfill our gaps in regard to support Vulkan Video standard.

I'll try and stay listener only as I understand too much remote interaction
breaks the flow, ask me questions if needed.

Nicolas

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 13:29       ` Laurent Pinchart
@ 2024-09-06 17:43         ` Mauro Carvalho Chehab
  2024-09-06 18:04           ` Laurent Pinchart
  2024-09-07  7:53           ` Hans Verkuil
  0 siblings, 2 replies; 20+ messages in thread
From: Mauro Carvalho Chehab @ 2024-09-06 17:43 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sebastian Fricke, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

Em Fri, 6 Sep 2024 16:29:59 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
> > Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:  
> > > On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:  
> > > > Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> > > >    
> > ...  
> > > > 1. All committers shall use a common procedure for all merges.
> > > >
> > > >   This is easy said than done. So, IMO, it is needed some common scripts
> > > >   to be used by all committers. On my tests when merging two PRs there,
> > > >   those seems to be the minimal set of scripts that are needed:
> > > >
> > > >   a) script to create a new topic branch at linux-media/users/<user>
> > > >      The input parameter is the message-ID, e. g. something like:
> > > >
> > > >	create_media_staging_topic <topic_name> <message_id>
> > > >
> > > >      (eventually with an extra parameter with the name of the tree)
> > > >
> > > >      It shall use patchwork REST interface to get the patches - or at least
> > > >      to check if all patches are there (and then use b4).
> > > >
> > > >      such script needs to work with a single patch, a patch series and a
> > > >      pull request.
> > > >
> > > >      the message ID of every patch, including the PR should be stored at
> > > >      the MR, as this will be needed to later update patchwork.  
> 
> I'm sure individual committers will want to optimize their workflow
> using scripts, and possibly share them, but what's wrong with simply
> using b4 ? With -l it will add a link to lore, so the message ID will be
> available.
> 
> I'd rather first focus on what we want to see included in commit
> messages, and then on how to make sure it gets there.

It is not just running b4. There's the need of preparing a MR message that
matches the content of patch 0, very likely with the message IDs of patches
and PRs, and check if everything is in place on patchwork, as this is the 
main tool to track the pending queue. Surely b4 can be called on such script.

It doesn't need to be complex, but it should automate the basic steps that
all committers will use.
 
> > > >
> > > >   b) once gitlab CI runs, there are two possible outcomes: patches may
> > > >      pass or not. If they pass, a MR will be created and eventually be
> > > >      merged.
> > > >
> > > >      Either merged or not (because something failed or the patches require
> > > >      more work), the patchwork status of the patch require changes to
> > > >      reflect what happened. IMO, another script is needed to update the
> > > >      patch/patch series/PR on patchwork on a consistent way.
> > > >
> > > >      This is actually a *big* gap we have here. I have a script that
> > > >      manually check patchwork status and the gap is huge. currently,
> > > >      there are 73 patches that seems to be merged, but patchwork was not
> > > >      updated.
> > > >
> > > >      From what I noticed, some PR submitters almost never update patchwork
> > > >      after the merges.    
> > > 
> > > Interesting I thought that is how it should be? I mean if I send a PR,
> > > the one taking the PR must decide whether he wants to pull it, so the
> > > decision whether the set is accepted should be on the one pulling the
> > > changes and not on the one pushing them. Right?  
> > 
> > Yes, but you still need to update previous attempts to submit the
> > work. So, let's see a patch series at v9 got a PR. It is up to you
> > to cleanup everything on patchwork from v1 to v8.  
> 
> That should be done before though, when v2 is submitted, v1 should be
> marked as superseded. 

Agreed. Still most of the ones sending PRs those days don't do that.

> Isn't there a patchwork bot that can help with this ? 

As far as I'm aware, no.

> It's not perfect in the sense that it doesn't always match new
> versions with previous ones, but if it can lower the manual burden by
> even 80%, it's a big win.
> 
> > Now, the committer handing PR for v9 should be in charge of updating
> > the status of it and the patches that belong to it, once the patch
> > is merged or once he takes a decision otherwise.  
> 
> Today most authors don't have a patchwork account, so they can't update
> the status themselves. The responsibility mostly falls on you and Hans.

The responsibility is for the ones that submit PRs. Unfortunately, this
is a process that it is not working right now. 

See, if you're reviewing a v2 of a patch series, you know that v1 is
obsolete. It should be easy to run a script that would take the v1
patch series and update patchwork to mark all v1 patches obsoleted.

> I'm fine with moving this to committers, which will make your life
> easier. The patchwork update when merging a branch should ideally be
> done automatically by the gitlab instance. If we have lore links in the
> commit messages, that sounds doable.

With multi-committer, it won't be possible anymore for Hans and I do
do such updates, as the workflow will be handled by the committers.

So, all committers will need to cleanup patch status on patchwork.

> > > >
> > > >      So another script to solve this gap is needed, doing updates on all
> > > >      patches that were picked by the first script. Something like:    
> > > 
> > > Shouldn't the update happen after the MR has been created instead,
> > > because only after running the CI we know whether the tests fail or
> > > pass? From what you say above it sounds like the first script merely
> > > prepares a topic branch to be tested.  
> > 
> > the first script I mentioned prepares and pushes the topic branch. The
> > patchwork update script (the second one) can be used on several parts
> > of the workflow:
> > 
> > - before MR: if the committer decides to not merge the changes, because
> >   it didn't pass on his review;
> > - after MR pipeline failures;
> > - after MR being merged: once it reaches media-staging master.
> >   
> > > >      update_patchwork_from_topic <topic_name> <new_status>
> > > >
> > > >      This would likely need to use some logic to pick the message IDs
> > > >      of the patch inside the MR.
> > > >
> > > >      Such script could also check for previous versions of the patch
> > > >      and for identical patches (it is somewhat common to receive identical
> > > >      patches with trivial fixes from different developers).
> > > >
> > > >   Someone needs to work on such script, as otherwise the multi committers
> > > >   model may fail, and we risk needing to return back to the current model.    
> > > 
> > > Just my personal thought: This sounds a bit extreme to me, I mean we are
> > > currently in the first test run as preparation for the Media-summit
> > > where we actually want to discuss policies and further requirements.  
> > 
> > What I'm saying is that multiple-committers model will only work if
> > all committers follow a common procedure (still variants between
> > committers is possible, provided that the minimal set is followed).
> > 
> > If this doesn't happens, maintainers may be forced to do rebases and other
> > manual fixes, with will actually make life worse for everyone. That's
> > what I want to avoid by having a common set of scripts for the very basic
> > tasks: create a topic branch for CI to test and update patchwork.  
> 
> The issues you've listed above are about updating status of patches in
> patchwork. That's important (or otherwise we should drop patchwork if we
> think it's not important, but I don't think that's the case), but I
> hardly see how missing updates to patchwork would call for a rebase :-)

The need for rebase typically won't be due to patchwork[1], but to
broken processes to validate patches that may happen if the patches
don't reach first a tree under linux-media/users; if someone uses a wrong
version of compliance utils; if it has a broken topic branch, etc.
E. g. it may happen if the agreed process isn't be followed to the
letter.

[1] Still, rebases might still be needed if the developer is not taking
    enough care on patchwork. See, if a developer sends a patch X with a
    change, and some days/weeks later another developer sends a patch Y
    identical to X, except for the author (and eventually description), 
    if patch Y is merged, there will be a need to drop it and pick X, to
    properly give credits to the right person.

> There's a need to make sure that what lands in the shared tree is
> proper, and for that we need a clearly documented procedure. At this
> point I don't think it requires a committer tool script, even if in the
> future developing such a tool could help.
> 
> > > > 2. The mailbomb script that notifies when a patch is merged at media-stage
> > > >    we're using right now doesn't work with well with multiple committers.
> > > >
> > > >    Right now, the tree at linuxtv runs it, but it might end sending patches
> > > >    to the author and to linuxtv-commits ML that reached upstream from other
> > > >    trees. It has some logic to prevent that, but it is not bulletproof.
> > > >
> > > >    A replacement script is needed. Perhaps this can be executed together
> > > >    with the patchwork script (1B) at the committer's machine.  
> 
> This could possibly be done with 'b4 ty'.

Call b4 ty after having patches merged at media staging tree works for me. 
Again placing it on a script that would for instance call pwclient and
b4 ty afterwards sounds the right thing to do.

> Another option is to rely on patchwork notifications instead. If a patch is 
> marked as merged, the notification e-mail sounds enough to notify the
> submitter.

Patchwork notification e-mails is about patch reviews, being optional.
Most authors won't create accounts at patchwork, but are interested only
at the final result. "b4 ty" or similar sounds more appropriate to me.

Also, having such patches c/c to linuxtv-commits help maintainers,
committers and developers to keep track with tree updates.

> > The bar for sending stuff to drivers/staging is lower than for
> > drivers/media. According with Kernel's documentation at
> > Documentation/process/2.Process.rst:
> > 
> > 	"Current rules require that drivers contributed to staging
> > 	 must, at a minimum, compile properly."
> > 
> > We actually try to be better than that, but still when new stuff is
> > added to staging, while we try to also ensure no static code checkers
> > would fail, we may need to relax the rules, as it is not uncommon to
> > receive drivers that need extra care to follow Kernel coding style
> > there.  
> 
> Why can't we tell the submitter to fix the warnings first ? Especially
> if we can point them to a CI report, that would seem the best course of
> action to me. drivers/staging/ is meant for code that need more time to
> be considered stable enough for the kernel. The stabilization doesn't
> happen by magic, it requires someone actively working on it. If the
> submitter can't bother to fix the warnings, it doesn't forebode anything
> good and wouldn't make me confident that the code would ever get out of
> staging (other than simply being deleted).

Surely we can ask fixes, but sometimes the driver is in so bad shape
that it may take some time, specially when it is a driver that came
from a developer without Linux upstream development experience. That
happened, for instance, when we merged lirc drivers, several USB non-uvc
camera drivers, atomisp, etc.

So, I'd say this will happen case by case, but, from the media CI PoV, 
an option to relax the checks for stuff under drivers/staging is needed.

> > > > 4. We need to have some sort of "honour code": if undesired behavior
> > > >    is noticed, maintainers may temporarily (or permanently) revoke
> > > >    committer rights.    
> > > 
> > > That sounds like something to discuss on the media summit,   
> > 
> > Sure. That's why I'm asking to explicitly add it to the topics to be
> > discussed there ;-)
> > 
> > It would be nice if someone could come up with a proposal for 
> > it, although the discussions of such proposal should likely happen via 
> > email.
> > 
> > Also, the ones to be added there likely need to explicitly ack with
> > such code before being added to gitlab's permission group.  
> 
> This sounds fairly straightforward, I think we'll easily agree on rules.

Sure, but a document with those and an explicit ack sounds an important
measure to avoid potential future problems.

> > 5. The procedure for fixes wil remain the same. We'll have already enough
> >    things to make it work. Let's not add fixes complexity just yet.
> >    Depending on how well the new multi-committers experimental model
> >    works, we may think using it for fixes as well.  
> 
> I think this last point should still be discussed in Vienna. I want to
> design a clear workflow that covers -next and -fixes. I'm fine if we
> decide to then implement part of the workflow only as an initial step,
> if there are good enough reasons to do so.

I don't see any need. There will be enough things for discussion there
just for -next, which is where 99% of the patches sit.

Handling -fixes require a different procedure, as maintainers need to
prepare an special PR to send them. Also, it is up to the maintainers 
to decide to either accept a patch as fixes or postpone to next,
as sometimes it is not a black and white decision if a patch belongs
to -fixes or if they will be postponed to the next merge week.

For instance, at -rc7, we usually postpone more complex fixes to
the merge window, as it is usually not a good idea to do last minute 
complex changes. If a committer write a fix patch during -rc7 and get
it merged, but the maintainers decide to postpone to the merge window,
the fix branch will require rebase.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 17:43         ` Mauro Carvalho Chehab
@ 2024-09-06 18:04           ` Laurent Pinchart
  2024-09-06 21:22             ` Kieran Bingham
                               ` (2 more replies)
  2024-09-07  7:53           ` Hans Verkuil
  1 sibling, 3 replies; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-06 18:04 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Sebastian Fricke, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar, Kieran Bingham

Hi Mauro,

(CC'ing Kieran, for a question below)

First of all, I'd like to say that I think we agree on more points than
may be apparent from the conversation. I believe this new process will
lower the administrative burden on everybody, which should also come as
a relief.

On Fri, Sep 06, 2024 at 07:43:10PM +0200, Mauro Carvalho Chehab wrote:
> Em Fri, 6 Sep 2024 16:29:59 +0300 Laurent Pinchart escreveu:
> > On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
> > > Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:  
> > > > On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:  
> > > > > Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> > > > >    
> > > ...  
> > > > > 1. All committers shall use a common procedure for all merges.
> > > > >
> > > > >   This is easy said than done. So, IMO, it is needed some common scripts
> > > > >   to be used by all committers. On my tests when merging two PRs there,
> > > > >   those seems to be the minimal set of scripts that are needed:
> > > > >
> > > > >   a) script to create a new topic branch at linux-media/users/<user>
> > > > >      The input parameter is the message-ID, e. g. something like:
> > > > >
> > > > >	create_media_staging_topic <topic_name> <message_id>
> > > > >
> > > > >      (eventually with an extra parameter with the name of the tree)
> > > > >
> > > > >      It shall use patchwork REST interface to get the patches - or at least
> > > > >      to check if all patches are there (and then use b4).
> > > > >
> > > > >      such script needs to work with a single patch, a patch series and a
> > > > >      pull request.
> > > > >
> > > > >      the message ID of every patch, including the PR should be stored at
> > > > >      the MR, as this will be needed to later update patchwork.  
> > 
> > I'm sure individual committers will want to optimize their workflow
> > using scripts, and possibly share them, but what's wrong with simply
> > using b4 ? With -l it will add a link to lore, so the message ID will be
> > available.
> > 
> > I'd rather first focus on what we want to see included in commit
> > messages, and then on how to make sure it gets there.
> 
> It is not just running b4. There's the need of preparing a MR message that
> matches the content of patch 0, very likely with the message IDs of patches
> and PRs, and check if everything is in place on patchwork, as this is the 
> main tool to track the pending queue. Surely b4 can be called on such script.

Is that needed, or can we rely on the Link: tag of individual patches to
find the corresponding patch in patchwork ?

> It doesn't need to be complex, but it should automate the basic steps that
> all committers will use.
>  
> > > > >
> > > > >   b) once gitlab CI runs, there are two possible outcomes: patches may
> > > > >      pass or not. If they pass, a MR will be created and eventually be
> > > > >      merged.
> > > > >
> > > > >      Either merged or not (because something failed or the patches require
> > > > >      more work), the patchwork status of the patch require changes to
> > > > >      reflect what happened. IMO, another script is needed to update the
> > > > >      patch/patch series/PR on patchwork on a consistent way.
> > > > >
> > > > >      This is actually a *big* gap we have here. I have a script that
> > > > >      manually check patchwork status and the gap is huge. currently,
> > > > >      there are 73 patches that seems to be merged, but patchwork was not
> > > > >      updated.
> > > > >
> > > > >      From what I noticed, some PR submitters almost never update patchwork
> > > > >      after the merges.    
> > > > 
> > > > Interesting I thought that is how it should be? I mean if I send a PR,
> > > > the one taking the PR must decide whether he wants to pull it, so the
> > > > decision whether the set is accepted should be on the one pulling the
> > > > changes and not on the one pushing them. Right?  
> > > 
> > > Yes, but you still need to update previous attempts to submit the
> > > work. So, let's see a patch series at v9 got a PR. It is up to you
> > > to cleanup everything on patchwork from v1 to v8.  
> > 
> > That should be done before though, when v2 is submitted, v1 should be
> > marked as superseded. 
> 
> Agreed. Still most of the ones sending PRs those days don't do that.
> 
> > Isn't there a patchwork bot that can help with this ? 
> 
> As far as I'm aware, no.

Kieran, aren't you running such a bot for libcamera ?

> > It's not perfect in the sense that it doesn't always match new
> > versions with previous ones, but if it can lower the manual burden by
> > even 80%, it's a big win.
> > 
> > > Now, the committer handing PR for v9 should be in charge of updating
> > > the status of it and the patches that belong to it, once the patch
> > > is merged or once he takes a decision otherwise.  
> > 
> > Today most authors don't have a patchwork account, so they can't update
> > the status themselves. The responsibility mostly falls on you and Hans.
> 
> The responsibility is for the ones that submit PRs. Unfortunately, this
> is a process that it is not working right now. 
> 
> See, if you're reviewing a v2 of a patch series, you know that v1 is
> obsolete. It should be easy to run a script that would take the v1
> patch series and update patchwork to mark all v1 patches obsoleted.

In many cases that could be automated, but sometimes locating the
previous version automatically won't work. There will likely be some
amount of manual work. Tooling could help.

> > I'm fine with moving this to committers, which will make your life
> > easier. The patchwork update when merging a branch should ideally be
> > done automatically by the gitlab instance. If we have lore links in the
> > commit messages, that sounds doable.
> 
> With multi-committer, it won't be possible anymore for Hans and I do
> do such updates, as the workflow will be handled by the committers.
> 
> So, all committers will need to cleanup patch status on patchwork.

As I wrote above, that's fine with me, and hopefully that will also make
your life easier (unless you tell me you enjoy updating patchwork
manually :-)).

> > > > >
> > > > >      So another script to solve this gap is needed, doing updates on all
> > > > >      patches that were picked by the first script. Something like:    
> > > > 
> > > > Shouldn't the update happen after the MR has been created instead,
> > > > because only after running the CI we know whether the tests fail or
> > > > pass? From what you say above it sounds like the first script merely
> > > > prepares a topic branch to be tested.  
> > > 
> > > the first script I mentioned prepares and pushes the topic branch. The
> > > patchwork update script (the second one) can be used on several parts
> > > of the workflow:
> > > 
> > > - before MR: if the committer decides to not merge the changes, because
> > >   it didn't pass on his review;
> > > - after MR pipeline failures;
> > > - after MR being merged: once it reaches media-staging master.
> > >   
> > > > >      update_patchwork_from_topic <topic_name> <new_status>
> > > > >
> > > > >      This would likely need to use some logic to pick the message IDs
> > > > >      of the patch inside the MR.
> > > > >
> > > > >      Such script could also check for previous versions of the patch
> > > > >      and for identical patches (it is somewhat common to receive identical
> > > > >      patches with trivial fixes from different developers).
> > > > >
> > > > >   Someone needs to work on such script, as otherwise the multi committers
> > > > >   model may fail, and we risk needing to return back to the current model.    
> > > > 
> > > > Just my personal thought: This sounds a bit extreme to me, I mean we are
> > > > currently in the first test run as preparation for the Media-summit
> > > > where we actually want to discuss policies and further requirements.  
> > > 
> > > What I'm saying is that multiple-committers model will only work if
> > > all committers follow a common procedure (still variants between
> > > committers is possible, provided that the minimal set is followed).
> > > 
> > > If this doesn't happens, maintainers may be forced to do rebases and other
> > > manual fixes, with will actually make life worse for everyone. That's
> > > what I want to avoid by having a common set of scripts for the very basic
> > > tasks: create a topic branch for CI to test and update patchwork.  
> > 
> > The issues you've listed above are about updating status of patches in
> > patchwork. That's important (or otherwise we should drop patchwork if we
> > think it's not important, but I don't think that's the case), but I
> > hardly see how missing updates to patchwork would call for a rebase :-)
> 
> The need for rebase typically won't be due to patchwork[1], but to
> broken processes to validate patches that may happen if the patches
> don't reach first a tree under linux-media/users; if someone uses a wrong
> version of compliance utils; if it has a broken topic branch, etc.
> E. g. it may happen if the agreed process isn't be followed to the
> letter.

I think we all agree on the need of a clearly documented and simple to
understand process.

> [1] Still, rebases might still be needed if the developer is not taking
>     enough care on patchwork. See, if a developer sends a patch X with a
>     change, and some days/weeks later another developer sends a patch Y
>     identical to X, except for the author (and eventually description), 
>     if patch Y is merged, there will be a need to drop it and pick X, to
>     properly give credits to the right person.

No, that doesn't call for a rebase. If we really want to credit patch X
in the git history, we can revert Y and apply X. In most cases the
author of patch X will not mind that Y got applied.

> > There's a need to make sure that what lands in the shared tree is
> > proper, and for that we need a clearly documented procedure. At this
> > point I don't think it requires a committer tool script, even if in the
> > future developing such a tool could help.
> > 
> > > > > 2. The mailbomb script that notifies when a patch is merged at media-stage
> > > > >    we're using right now doesn't work with well with multiple committers.
> > > > >
> > > > >    Right now, the tree at linuxtv runs it, but it might end sending patches
> > > > >    to the author and to linuxtv-commits ML that reached upstream from other
> > > > >    trees. It has some logic to prevent that, but it is not bulletproof.
> > > > >
> > > > >    A replacement script is needed. Perhaps this can be executed together
> > > > >    with the patchwork script (1B) at the committer's machine.  
> > 
> > This could possibly be done with 'b4 ty'.
> 
> Call b4 ty after having patches merged at media staging tree works for me. 
> Again placing it on a script that would for instance call pwclient and
> b4 ty afterwards sounds the right thing to do.
> 
> > Another option is to rely on patchwork notifications instead. If a patch is 
> > marked as merged, the notification e-mail sounds enough to notify the
> > submitter.
> 
> Patchwork notification e-mails is about patch reviews, being optional.
> Most authors won't create accounts at patchwork, but are interested only
> at the final result. "b4 ty" or similar sounds more appropriate to me.

Does patchwork send notification to the author if they don't have an
account ?

> Also, having such patches c/c to linuxtv-commits help maintainers,
> committers and developers to keep track with tree updates.

Who uses linuxtv-commits, and for what purpose ? I know I never look at
it, but I'm interested in hearing if anyone uses it.

> > > The bar for sending stuff to drivers/staging is lower than for
> > > drivers/media. According with Kernel's documentation at
> > > Documentation/process/2.Process.rst:
> > > 
> > > 	"Current rules require that drivers contributed to staging
> > > 	 must, at a minimum, compile properly."
> > > 
> > > We actually try to be better than that, but still when new stuff is
> > > added to staging, while we try to also ensure no static code checkers
> > > would fail, we may need to relax the rules, as it is not uncommon to
> > > receive drivers that need extra care to follow Kernel coding style
> > > there.  
> > 
> > Why can't we tell the submitter to fix the warnings first ? Especially
> > if we can point them to a CI report, that would seem the best course of
> > action to me. drivers/staging/ is meant for code that need more time to
> > be considered stable enough for the kernel. The stabilization doesn't
> > happen by magic, it requires someone actively working on it. If the
> > submitter can't bother to fix the warnings, it doesn't forebode anything
> > good and wouldn't make me confident that the code would ever get out of
> > staging (other than simply being deleted).
> 
> Surely we can ask fixes, but sometimes the driver is in so bad shape
> that it may take some time, specially when it is a driver that came
> from a developer without Linux upstream development experience. That
> happened, for instance, when we merged lirc drivers, several USB non-uvc
> camera drivers, atomisp, etc.
> 
> So, I'd say this will happen case by case, but, from the media CI PoV, 
> an option to relax the checks for stuff under drivers/staging is needed.

We *may* decide to accept a new driver that comes with warnings, but
that would be a rare operation. Subsequent commits for the driver should
not introduce new warnings.

> > > > > 4. We need to have some sort of "honour code": if undesired behavior
> > > > >    is noticed, maintainers may temporarily (or permanently) revoke
> > > > >    committer rights.    
> > > > 
> > > > That sounds like something to discuss on the media summit,   
> > > 
> > > Sure. That's why I'm asking to explicitly add it to the topics to be
> > > discussed there ;-)
> > > 
> > > It would be nice if someone could come up with a proposal for 
> > > it, although the discussions of such proposal should likely happen via 
> > > email.
> > > 
> > > Also, the ones to be added there likely need to explicitly ack with
> > > such code before being added to gitlab's permission group.  
> > 
> > This sounds fairly straightforward, I think we'll easily agree on rules.
> 
> Sure, but a document with those and an explicit ack sounds an important
> measure to avoid potential future problems.

Yes, it will be explained in a public document.

> > > 5. The procedure for fixes wil remain the same. We'll have already enough
> > >    things to make it work. Let's not add fixes complexity just yet.
> > >    Depending on how well the new multi-committers experimental model
> > >    works, we may think using it for fixes as well.  
> > 
> > I think this last point should still be discussed in Vienna. I want to
> > design a clear workflow that covers -next and -fixes. I'm fine if we
> > decide to then implement part of the workflow only as an initial step,
> > if there are good enough reasons to do so.
> 
> I don't see any need. There will be enough things for discussion there
> just for -next, which is where 99% of the patches sit.
> 
> Handling -fixes require a different procedure, as maintainers need to
> prepare an special PR to send them. Also, it is up to the maintainers 
> to decide to either accept a patch as fixes or postpone to next,
> as sometimes it is not a black and white decision if a patch belongs
> to -fixes or if they will be postponed to the next merge week.
> 
> For instance, at -rc7, we usually postpone more complex fixes to
> the merge window, as it is usually not a good idea to do last minute 
> complex changes. If a committer write a fix patch during -rc7 and get
> it merged, but the maintainers decide to postpone to the merge window,
> the fix branch will require rebase.

-next and -fixes are not independent. Even if they are handled through
separate tree and processes, they need to take each other into account,
as the -fixes branch may need to be back-merged in -next from time to
time, and patches should never be applied to both -fixes and -next. We
therefore need to agree on a process that will cover both. It doesn't
mean -fixes will be handled through the shared tree right away if we
decide there are good reasons not to do so.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 18:04           ` Laurent Pinchart
@ 2024-09-06 21:22             ` Kieran Bingham
  2024-09-07  7:56             ` Hans Verkuil
  2024-09-07 20:43             ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 20+ messages in thread
From: Kieran Bingham @ 2024-09-06 21:22 UTC (permalink / raw)
  To: Laurent Pinchart, Mauro Carvalho Chehab
  Cc: Sebastian Fricke, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

Quoting Laurent Pinchart (2024-09-06 19:04:39)
> Hi Mauro,
> 
> (CC'ing Kieran, for a question below)
> 
> First of all, I'd like to say that I think we agree on more points than
> may be apparent from the conversation. I believe this new process will
> lower the administrative burden on everybody, which should also come as
> a relief.
> 
> On Fri, Sep 06, 2024 at 07:43:10PM +0200, Mauro Carvalho Chehab wrote:
> > Em Fri, 6 Sep 2024 16:29:59 +0300 Laurent Pinchart escreveu:
> > > On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
> > > > Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:  
> > > > > On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:  
> > > > > > Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> > > > > >    
> > > > ...  
> > > > > > 1. All committers shall use a common procedure for all merges.
> > > > > >
> > > > > >   This is easy said than done. So, IMO, it is needed some common scripts
> > > > > >   to be used by all committers. On my tests when merging two PRs there,
> > > > > >   those seems to be the minimal set of scripts that are needed:
> > > > > >
> > > > > >   a) script to create a new topic branch at linux-media/users/<user>
> > > > > >      The input parameter is the message-ID, e. g. something like:
> > > > > >
> > > > > >       create_media_staging_topic <topic_name> <message_id>
> > > > > >
> > > > > >      (eventually with an extra parameter with the name of the tree)
> > > > > >
> > > > > >      It shall use patchwork REST interface to get the patches - or at least
> > > > > >      to check if all patches are there (and then use b4).
> > > > > >
> > > > > >      such script needs to work with a single patch, a patch series and a
> > > > > >      pull request.
> > > > > >
> > > > > >      the message ID of every patch, including the PR should be stored at
> > > > > >      the MR, as this will be needed to later update patchwork.  
> > > 
> > > I'm sure individual committers will want to optimize their workflow
> > > using scripts, and possibly share them, but what's wrong with simply
> > > using b4 ? With -l it will add a link to lore, so the message ID will be
> > > available.
> > > 
> > > I'd rather first focus on what we want to see included in commit
> > > messages, and then on how to make sure it gets there.
> > 
> > It is not just running b4. There's the need of preparing a MR message that
> > matches the content of patch 0, very likely with the message IDs of patches
> > and PRs, and check if everything is in place on patchwork, as this is the 
> > main tool to track the pending queue. Surely b4 can be called on such script.
> 
> Is that needed, or can we rely on the Link: tag of individual patches to
> find the corresponding patch in patchwork ?
> 
> > It doesn't need to be complex, but it should automate the basic steps that
> > all committers will use.
> >  
> > > > > >
> > > > > >   b) once gitlab CI runs, there are two possible outcomes: patches may
> > > > > >      pass or not. If they pass, a MR will be created and eventually be
> > > > > >      merged.
> > > > > >
> > > > > >      Either merged or not (because something failed or the patches require
> > > > > >      more work), the patchwork status of the patch require changes to
> > > > > >      reflect what happened. IMO, another script is needed to update the
> > > > > >      patch/patch series/PR on patchwork on a consistent way.
> > > > > >
> > > > > >      This is actually a *big* gap we have here. I have a script that
> > > > > >      manually check patchwork status and the gap is huge. currently,
> > > > > >      there are 73 patches that seems to be merged, but patchwork was not
> > > > > >      updated.
> > > > > >
> > > > > >      From what I noticed, some PR submitters almost never update patchwork
> > > > > >      after the merges.    
> > > > > 
> > > > > Interesting I thought that is how it should be? I mean if I send a PR,
> > > > > the one taking the PR must decide whether he wants to pull it, so the
> > > > > decision whether the set is accepted should be on the one pulling the
> > > > > changes and not on the one pushing them. Right?  
> > > > 
> > > > Yes, but you still need to update previous attempts to submit the
> > > > work. So, let's see a patch series at v9 got a PR. It is up to you
> > > > to cleanup everything on patchwork from v1 to v8.  
> > > 
> > > That should be done before though, when v2 is submitted, v1 should be
> > > marked as superseded. 
> > 
> > Agreed. Still most of the ones sending PRs those days don't do that.
> > 
> > > Isn't there a patchwork bot that can help with this ? 
> > 
> > As far as I'm aware, no.
> 
> Kieran, aren't you running such a bot for libcamera ?

Yes, but I find it a bit hit-and-miss for superseeding and haven't checked through or
updated it lately. It's still running somewhat daily though, and it
catches patches being merged quite well - but it's just based on commit
title/subject.

It's derived from the kernel patchwork bot:

 - https://korg.docs.kernel.org/patchwork/pwbot.html
 - https://git.kernel.org/pub/scm/linux/kernel/git/mricon/korg-helpers.git/tree/git-patchwork-bot.py

--
Kieran


> > > It's not perfect in the sense that it doesn't always match new
> > > versions with previous ones, but if it can lower the manual burden by
> > > even 80%, it's a big win.
> > > 
> > > > Now, the committer handing PR for v9 should be in charge of updating
> > > > the status of it and the patches that belong to it, once the patch
> > > > is merged or once he takes a decision otherwise.  
> > > 
> > > Today most authors don't have a patchwork account, so they can't update
> > > the status themselves. The responsibility mostly falls on you and Hans.
> > 
> > The responsibility is for the ones that submit PRs. Unfortunately, this
> > is a process that it is not working right now. 
> > 
> > See, if you're reviewing a v2 of a patch series, you know that v1 is
> > obsolete. It should be easy to run a script that would take the v1
> > patch series and update patchwork to mark all v1 patches obsoleted.
> 
> In many cases that could be automated, but sometimes locating the
> previous version automatically won't work. There will likely be some
> amount of manual work. Tooling could help.
> 
> > > I'm fine with moving this to committers, which will make your life
> > > easier. The patchwork update when merging a branch should ideally be
> > > done automatically by the gitlab instance. If we have lore links in the
> > > commit messages, that sounds doable.
> > 
> > With multi-committer, it won't be possible anymore for Hans and I do
> > do such updates, as the workflow will be handled by the committers.
> > 
> > So, all committers will need to cleanup patch status on patchwork.
> 
> As I wrote above, that's fine with me, and hopefully that will also make
> your life easier (unless you tell me you enjoy updating patchwork
> manually :-)).
> 
> > > > > >
> > > > > >      So another script to solve this gap is needed, doing updates on all
> > > > > >      patches that were picked by the first script. Something like:    
> > > > > 
> > > > > Shouldn't the update happen after the MR has been created instead,
> > > > > because only after running the CI we know whether the tests fail or
> > > > > pass? From what you say above it sounds like the first script merely
> > > > > prepares a topic branch to be tested.  
> > > > 
> > > > the first script I mentioned prepares and pushes the topic branch. The
> > > > patchwork update script (the second one) can be used on several parts
> > > > of the workflow:
> > > > 
> > > > - before MR: if the committer decides to not merge the changes, because
> > > >   it didn't pass on his review;
> > > > - after MR pipeline failures;
> > > > - after MR being merged: once it reaches media-staging master.
> > > >   
> > > > > >      update_patchwork_from_topic <topic_name> <new_status>
> > > > > >
> > > > > >      This would likely need to use some logic to pick the message IDs
> > > > > >      of the patch inside the MR.
> > > > > >
> > > > > >      Such script could also check for previous versions of the patch
> > > > > >      and for identical patches (it is somewhat common to receive identical
> > > > > >      patches with trivial fixes from different developers).
> > > > > >
> > > > > >   Someone needs to work on such script, as otherwise the multi committers
> > > > > >   model may fail, and we risk needing to return back to the current model.    
> > > > > 
> > > > > Just my personal thought: This sounds a bit extreme to me, I mean we are
> > > > > currently in the first test run as preparation for the Media-summit
> > > > > where we actually want to discuss policies and further requirements.  
> > > > 
> > > > What I'm saying is that multiple-committers model will only work if
> > > > all committers follow a common procedure (still variants between
> > > > committers is possible, provided that the minimal set is followed).
> > > > 
> > > > If this doesn't happens, maintainers may be forced to do rebases and other
> > > > manual fixes, with will actually make life worse for everyone. That's
> > > > what I want to avoid by having a common set of scripts for the very basic
> > > > tasks: create a topic branch for CI to test and update patchwork.  
> > > 
> > > The issues you've listed above are about updating status of patches in
> > > patchwork. That's important (or otherwise we should drop patchwork if we
> > > think it's not important, but I don't think that's the case), but I
> > > hardly see how missing updates to patchwork would call for a rebase :-)
> > 
> > The need for rebase typically won't be due to patchwork[1], but to
> > broken processes to validate patches that may happen if the patches
> > don't reach first a tree under linux-media/users; if someone uses a wrong
> > version of compliance utils; if it has a broken topic branch, etc.
> > E. g. it may happen if the agreed process isn't be followed to the
> > letter.
> 
> I think we all agree on the need of a clearly documented and simple to
> understand process.
> 
> > [1] Still, rebases might still be needed if the developer is not taking
> >     enough care on patchwork. See, if a developer sends a patch X with a
> >     change, and some days/weeks later another developer sends a patch Y
> >     identical to X, except for the author (and eventually description), 
> >     if patch Y is merged, there will be a need to drop it and pick X, to
> >     properly give credits to the right person.
> 
> No, that doesn't call for a rebase. If we really want to credit patch X
> in the git history, we can revert Y and apply X. In most cases the
> author of patch X will not mind that Y got applied.
> 
> > > There's a need to make sure that what lands in the shared tree is
> > > proper, and for that we need a clearly documented procedure. At this
> > > point I don't think it requires a committer tool script, even if in the
> > > future developing such a tool could help.
> > > 
> > > > > > 2. The mailbomb script that notifies when a patch is merged at media-stage
> > > > > >    we're using right now doesn't work with well with multiple committers.
> > > > > >
> > > > > >    Right now, the tree at linuxtv runs it, but it might end sending patches
> > > > > >    to the author and to linuxtv-commits ML that reached upstream from other
> > > > > >    trees. It has some logic to prevent that, but it is not bulletproof.
> > > > > >
> > > > > >    A replacement script is needed. Perhaps this can be executed together
> > > > > >    with the patchwork script (1B) at the committer's machine.  
> > > 
> > > This could possibly be done with 'b4 ty'.
> > 
> > Call b4 ty after having patches merged at media staging tree works for me. 
> > Again placing it on a script that would for instance call pwclient and
> > b4 ty afterwards sounds the right thing to do.
> > 
> > > Another option is to rely on patchwork notifications instead. If a patch is 
> > > marked as merged, the notification e-mail sounds enough to notify the
> > > submitter.
> > 
> > Patchwork notification e-mails is about patch reviews, being optional.
> > Most authors won't create accounts at patchwork, but are interested only
> > at the final result. "b4 ty" or similar sounds more appropriate to me.
> 
> Does patchwork send notification to the author if they don't have an
> account ?
> 
> > Also, having such patches c/c to linuxtv-commits help maintainers,
> > committers and developers to keep track with tree updates.
> 
> Who uses linuxtv-commits, and for what purpose ? I know I never look at
> it, but I'm interested in hearing if anyone uses it.
> 
> > > > The bar for sending stuff to drivers/staging is lower than for
> > > > drivers/media. According with Kernel's documentation at
> > > > Documentation/process/2.Process.rst:
> > > > 
> > > >   "Current rules require that drivers contributed to staging
> > > >    must, at a minimum, compile properly."
> > > > 
> > > > We actually try to be better than that, but still when new stuff is
> > > > added to staging, while we try to also ensure no static code checkers
> > > > would fail, we may need to relax the rules, as it is not uncommon to
> > > > receive drivers that need extra care to follow Kernel coding style
> > > > there.  
> > > 
> > > Why can't we tell the submitter to fix the warnings first ? Especially
> > > if we can point them to a CI report, that would seem the best course of
> > > action to me. drivers/staging/ is meant for code that need more time to
> > > be considered stable enough for the kernel. The stabilization doesn't
> > > happen by magic, it requires someone actively working on it. If the
> > > submitter can't bother to fix the warnings, it doesn't forebode anything
> > > good and wouldn't make me confident that the code would ever get out of
> > > staging (other than simply being deleted).
> > 
> > Surely we can ask fixes, but sometimes the driver is in so bad shape
> > that it may take some time, specially when it is a driver that came
> > from a developer without Linux upstream development experience. That
> > happened, for instance, when we merged lirc drivers, several USB non-uvc
> > camera drivers, atomisp, etc.
> > 
> > So, I'd say this will happen case by case, but, from the media CI PoV, 
> > an option to relax the checks for stuff under drivers/staging is needed.
> 
> We *may* decide to accept a new driver that comes with warnings, but
> that would be a rare operation. Subsequent commits for the driver should
> not introduce new warnings.
> 
> > > > > > 4. We need to have some sort of "honour code": if undesired behavior
> > > > > >    is noticed, maintainers may temporarily (or permanently) revoke
> > > > > >    committer rights.    
> > > > > 
> > > > > That sounds like something to discuss on the media summit,   
> > > > 
> > > > Sure. That's why I'm asking to explicitly add it to the topics to be
> > > > discussed there ;-)
> > > > 
> > > > It would be nice if someone could come up with a proposal for 
> > > > it, although the discussions of such proposal should likely happen via 
> > > > email.
> > > > 
> > > > Also, the ones to be added there likely need to explicitly ack with
> > > > such code before being added to gitlab's permission group.  
> > > 
> > > This sounds fairly straightforward, I think we'll easily agree on rules.
> > 
> > Sure, but a document with those and an explicit ack sounds an important
> > measure to avoid potential future problems.
> 
> Yes, it will be explained in a public document.
> 
> > > > 5. The procedure for fixes wil remain the same. We'll have already enough
> > > >    things to make it work. Let's not add fixes complexity just yet.
> > > >    Depending on how well the new multi-committers experimental model
> > > >    works, we may think using it for fixes as well.  
> > > 
> > > I think this last point should still be discussed in Vienna. I want to
> > > design a clear workflow that covers -next and -fixes. I'm fine if we
> > > decide to then implement part of the workflow only as an initial step,
> > > if there are good enough reasons to do so.
> > 
> > I don't see any need. There will be enough things for discussion there
> > just for -next, which is where 99% of the patches sit.
> > 
> > Handling -fixes require a different procedure, as maintainers need to
> > prepare an special PR to send them. Also, it is up to the maintainers 
> > to decide to either accept a patch as fixes or postpone to next,
> > as sometimes it is not a black and white decision if a patch belongs
> > to -fixes or if they will be postponed to the next merge week.
> > 
> > For instance, at -rc7, we usually postpone more complex fixes to
> > the merge window, as it is usually not a good idea to do last minute 
> > complex changes. If a committer write a fix patch during -rc7 and get
> > it merged, but the maintainers decide to postpone to the merge window,
> > the fix branch will require rebase.
> 
> -next and -fixes are not independent. Even if they are handled through
> separate tree and processes, they need to take each other into account,
> as the -fixes branch may need to be back-merged in -next from time to
> time, and patches should never be applied to both -fixes and -next. We
> therefore need to agree on a process that will cover both. It doesn't
> mean -fixes will be handled through the shared tree right away if we
> decide there are good reasons not to do so.
> 
> -- 
> Regards,
> 
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 17:43         ` Mauro Carvalho Chehab
  2024-09-06 18:04           ` Laurent Pinchart
@ 2024-09-07  7:53           ` Hans Verkuil
  2024-09-07 11:44             ` Laurent Pinchart
  1 sibling, 1 reply; 20+ messages in thread
From: Hans Verkuil @ 2024-09-07  7:53 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Laurent Pinchart
  Cc: Sebastian Fricke, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

On 06/09/2024 19:43, Mauro Carvalho Chehab wrote:
> Em Fri, 6 Sep 2024 16:29:59 +0300
> Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:
> 
>> On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
>>> Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:  
>>>> On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:  
>>>>> Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
>>>>>    
>>> ...  
>>>>> 1. All committers shall use a common procedure for all merges.
>>>>>
>>>>>   This is easy said than done. So, IMO, it is needed some common scripts
>>>>>   to be used by all committers. On my tests when merging two PRs there,
>>>>>   those seems to be the minimal set of scripts that are needed:
>>>>>
>>>>>   a) script to create a new topic branch at linux-media/users/<user>
>>>>>      The input parameter is the message-ID, e. g. something like:
>>>>>
>>>>> 	create_media_staging_topic <topic_name> <message_id>
>>>>>
>>>>>      (eventually with an extra parameter with the name of the tree)
>>>>>
>>>>>      It shall use patchwork REST interface to get the patches - or at least
>>>>>      to check if all patches are there (and then use b4).
>>>>>
>>>>>      such script needs to work with a single patch, a patch series and a
>>>>>      pull request.
>>>>>
>>>>>      the message ID of every patch, including the PR should be stored at
>>>>>      the MR, as this will be needed to later update patchwork.  
>>
>> I'm sure individual committers will want to optimize their workflow
>> using scripts, and possibly share them, but what's wrong with simply
>> using b4 ? With -l it will add a link to lore, so the message ID will be
>> available.
>>
>> I'd rather first focus on what we want to see included in commit
>> messages, and then on how to make sure it gets there.
> 
> It is not just running b4. There's the need of preparing a MR message that
> matches the content of patch 0, very likely with the message IDs of patches
> and PRs, and check if everything is in place on patchwork, as this is the 
> main tool to track the pending queue. Surely b4 can be called on such script.
> 
> It doesn't need to be complex, but it should automate the basic steps that
> all committers will use.
>  
>>>>>
>>>>>   b) once gitlab CI runs, there are two possible outcomes: patches may
>>>>>      pass or not. If they pass, a MR will be created and eventually be
>>>>>      merged.
>>>>>
>>>>>      Either merged or not (because something failed or the patches require
>>>>>      more work), the patchwork status of the patch require changes to
>>>>>      reflect what happened. IMO, another script is needed to update the
>>>>>      patch/patch series/PR on patchwork on a consistent way.
>>>>>
>>>>>      This is actually a *big* gap we have here. I have a script that
>>>>>      manually check patchwork status and the gap is huge. currently,
>>>>>      there are 73 patches that seems to be merged, but patchwork was not
>>>>>      updated.
>>>>>
>>>>>      From what I noticed, some PR submitters almost never update patchwork
>>>>>      after the merges.    
>>>>
>>>> Interesting I thought that is how it should be? I mean if I send a PR,
>>>> the one taking the PR must decide whether he wants to pull it, so the
>>>> decision whether the set is accepted should be on the one pulling the
>>>> changes and not on the one pushing them. Right?  
>>>
>>> Yes, but you still need to update previous attempts to submit the
>>> work. So, let's see a patch series at v9 got a PR. It is up to you
>>> to cleanup everything on patchwork from v1 to v8.  
>>
>> That should be done before though, when v2 is submitted, v1 should be
>> marked as superseded. 
> 
> Agreed. Still most of the ones sending PRs those days don't do that.
> 
>> Isn't there a patchwork bot that can help with this ? 
> 
> As far as I'm aware, no.
> 
>> It's not perfect in the sense that it doesn't always match new
>> versions with previous ones, but if it can lower the manual burden by
>> even 80%, it's a big win.
>>
>>> Now, the committer handing PR for v9 should be in charge of updating
>>> the status of it and the patches that belong to it, once the patch
>>> is merged or once he takes a decision otherwise.  
>>
>> Today most authors don't have a patchwork account, so they can't update
>> the status themselves. The responsibility mostly falls on you and Hans.
> 
> The responsibility is for the ones that submit PRs. Unfortunately, this
> is a process that it is not working right now. 
> 
> See, if you're reviewing a v2 of a patch series, you know that v1 is
> obsolete. It should be easy to run a script that would take the v1
> patch series and update patchwork to mark all v1 patches obsoleted.
> 
>> I'm fine with moving this to committers, which will make your life
>> easier. The patchwork update when merging a branch should ideally be
>> done automatically by the gitlab instance. If we have lore links in the
>> commit messages, that sounds doable.
> 
> With multi-committer, it won't be possible anymore for Hans and I do
> do such updates, as the workflow will be handled by the committers.
> 
> So, all committers will need to cleanup patch status on patchwork.

It would be really nice if patchwork would be kept in a better state.

Two or three times per kernel cycle I go through patchwork to hoover up
all the 'random patches' that people post, and at that time I also try
to clean up superseded patches etc. But sometimes it isn't clear, esp. if
the v2 renames the series title. When in doubt, I won't mark it as superseded.

So I agree with Mauro that as committer updating patchwork would be one of
the tasks. That implies that those with commit rights also need patchwork
rights.

For me it has always been easiest to keep all the patches I want to commit
in a bundle. Then, once it is in, I update all patches in the bundle.

And whenever I post a new version, I also update the old version in patchwork.
It's not much work as long as you stay on top of it, it is a lot more work
if you let it slip and you have to figure it out later.

The most annoying part of dealing with PRs from others has been to find the
patches in patchwork. I have a script that tries to find them based on the hash,
which works well, except when the patch was modified before ending in the PR,
then the hash no longer matches and I have to search manually.

> 
>>>>>
>>>>>      So another script to solve this gap is needed, doing updates on all
>>>>>      patches that were picked by the first script. Something like:    
>>>>
>>>> Shouldn't the update happen after the MR has been created instead,
>>>> because only after running the CI we know whether the tests fail or
>>>> pass? From what you say above it sounds like the first script merely
>>>> prepares a topic branch to be tested.  
>>>
>>> the first script I mentioned prepares and pushes the topic branch. The
>>> patchwork update script (the second one) can be used on several parts
>>> of the workflow:
>>>
>>> - before MR: if the committer decides to not merge the changes, because
>>>   it didn't pass on his review;
>>> - after MR pipeline failures;
>>> - after MR being merged: once it reaches media-staging master.
>>>   
>>>>>      update_patchwork_from_topic <topic_name> <new_status>
>>>>>
>>>>>      This would likely need to use some logic to pick the message IDs
>>>>>      of the patch inside the MR.
>>>>>
>>>>>      Such script could also check for previous versions of the patch
>>>>>      and for identical patches (it is somewhat common to receive identical
>>>>>      patches with trivial fixes from different developers).
>>>>>
>>>>>   Someone needs to work on such script, as otherwise the multi committers
>>>>>   model may fail, and we risk needing to return back to the current model.    
>>>>
>>>> Just my personal thought: This sounds a bit extreme to me, I mean we are
>>>> currently in the first test run as preparation for the Media-summit
>>>> where we actually want to discuss policies and further requirements.  
>>>
>>> What I'm saying is that multiple-committers model will only work if
>>> all committers follow a common procedure (still variants between
>>> committers is possible, provided that the minimal set is followed).
>>>
>>> If this doesn't happens, maintainers may be forced to do rebases and other
>>> manual fixes, with will actually make life worse for everyone. That's
>>> what I want to avoid by having a common set of scripts for the very basic
>>> tasks: create a topic branch for CI to test and update patchwork.  
>>
>> The issues you've listed above are about updating status of patches in
>> patchwork. That's important (or otherwise we should drop patchwork if we
>> think it's not important, but I don't think that's the case), but I
>> hardly see how missing updates to patchwork would call for a rebase :-)
> 
> The need for rebase typically won't be due to patchwork[1], but to
> broken processes to validate patches that may happen if the patches
> don't reach first a tree under linux-media/users; if someone uses a wrong
> version of compliance utils; if it has a broken topic branch, etc.
> E. g. it may happen if the agreed process isn't be followed to the
> letter.

Well, it shouldn't be possible to merge code unless the process was followed.
There is also a certain level of trust implied by giving someone commit rights.

And if they violate that trust, then we can take away those rights.

> 
> [1] Still, rebases might still be needed if the developer is not taking
>     enough care on patchwork. See, if a developer sends a patch X with a
>     change, and some days/weeks later another developer sends a patch Y
>     identical to X, except for the author (and eventually description), 
>     if patch Y is merged, there will be a need to drop it and pick X, to
>     properly give credits to the right person.
> 
>> There's a need to make sure that what lands in the shared tree is
>> proper, and for that we need a clearly documented procedure. At this
>> point I don't think it requires a committer tool script, even if in the
>> future developing such a tool could help.
>>
>>>>> 2. The mailbomb script that notifies when a patch is merged at media-stage
>>>>>    we're using right now doesn't work with well with multiple committers.
>>>>>
>>>>>    Right now, the tree at linuxtv runs it, but it might end sending patches
>>>>>    to the author and to linuxtv-commits ML that reached upstream from other
>>>>>    trees. It has some logic to prevent that, but it is not bulletproof.
>>>>>
>>>>>    A replacement script is needed. Perhaps this can be executed together
>>>>>    with the patchwork script (1B) at the committer's machine.  
>>
>> This could possibly be done with 'b4 ty'.
> 
> Call b4 ty after having patches merged at media staging tree works for me. 
> Again placing it on a script that would for instance call pwclient and
> b4 ty afterwards sounds the right thing to do.
> 
>> Another option is to rely on patchwork notifications instead. If a patch is 
>> marked as merged, the notification e-mail sounds enough to notify the
>> submitter.
> 
> Patchwork notification e-mails is about patch reviews, being optional.
> Most authors won't create accounts at patchwork, but are interested only
> at the final result. "b4 ty" or similar sounds more appropriate to me.
> 
> Also, having such patches c/c to linuxtv-commits help maintainers,
> committers and developers to keep track with tree updates.
> 
>>> The bar for sending stuff to drivers/staging is lower than for
>>> drivers/media. According with Kernel's documentation at
>>> Documentation/process/2.Process.rst:
>>>
>>> 	"Current rules require that drivers contributed to staging
>>> 	 must, at a minimum, compile properly."
>>>
>>> We actually try to be better than that, but still when new stuff is
>>> added to staging, while we try to also ensure no static code checkers
>>> would fail, we may need to relax the rules, as it is not uncommon to
>>> receive drivers that need extra care to follow Kernel coding style
>>> there.  
>>
>> Why can't we tell the submitter to fix the warnings first ? Especially
>> if we can point them to a CI report, that would seem the best course of
>> action to me. drivers/staging/ is meant for code that need more time to
>> be considered stable enough for the kernel. The stabilization doesn't
>> happen by magic, it requires someone actively working on it. If the
>> submitter can't bother to fix the warnings, it doesn't forebode anything
>> good and wouldn't make me confident that the code would ever get out of
>> staging (other than simply being deleted).
> 
> Surely we can ask fixes, but sometimes the driver is in so bad shape
> that it may take some time, specially when it is a driver that came
> from a developer without Linux upstream development experience. That
> happened, for instance, when we merged lirc drivers, several USB non-uvc
> camera drivers, atomisp, etc.
> 
> So, I'd say this will happen case by case, but, from the media CI PoV, 
> an option to relax the checks for stuff under drivers/staging is needed.

As I mentioned elsewhere, I think this should be under a CI option: by
default staging shouldn't have warnings, but it can be relaxed by setting
a CI option. But I also think this should only be allowed for new files,
not when adding patches on top.

Regards,

	Hans

> 
>>>>> 4. We need to have some sort of "honour code": if undesired behavior
>>>>>    is noticed, maintainers may temporarily (or permanently) revoke
>>>>>    committer rights.    
>>>>
>>>> That sounds like something to discuss on the media summit,   
>>>
>>> Sure. That's why I'm asking to explicitly add it to the topics to be
>>> discussed there ;-)
>>>
>>> It would be nice if someone could come up with a proposal for 
>>> it, although the discussions of such proposal should likely happen via 
>>> email.
>>>
>>> Also, the ones to be added there likely need to explicitly ack with
>>> such code before being added to gitlab's permission group.  
>>
>> This sounds fairly straightforward, I think we'll easily agree on rules.
> 
> Sure, but a document with those and an explicit ack sounds an important
> measure to avoid potential future problems.
> 
>>> 5. The procedure for fixes wil remain the same. We'll have already enough
>>>    things to make it work. Let's not add fixes complexity just yet.
>>>    Depending on how well the new multi-committers experimental model
>>>    works, we may think using it for fixes as well.  
>>
>> I think this last point should still be discussed in Vienna. I want to
>> design a clear workflow that covers -next and -fixes. I'm fine if we
>> decide to then implement part of the workflow only as an initial step,
>> if there are good enough reasons to do so.
> 
> I don't see any need. There will be enough things for discussion there
> just for -next, which is where 99% of the patches sit.
> 
> Handling -fixes require a different procedure, as maintainers need to
> prepare an special PR to send them. Also, it is up to the maintainers 
> to decide to either accept a patch as fixes or postpone to next,
> as sometimes it is not a black and white decision if a patch belongs
> to -fixes or if they will be postponed to the next merge week.
> 
> For instance, at -rc7, we usually postpone more complex fixes to
> the merge window, as it is usually not a good idea to do last minute 
> complex changes. If a committer write a fix patch during -rc7 and get
> it merged, but the maintainers decide to postpone to the merge window,
> the fix branch will require rebase.
> 
> Thanks,
> Mauro


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 18:04           ` Laurent Pinchart
  2024-09-06 21:22             ` Kieran Bingham
@ 2024-09-07  7:56             ` Hans Verkuil
  2024-09-07 20:43             ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 20+ messages in thread
From: Hans Verkuil @ 2024-09-07  7:56 UTC (permalink / raw)
  To: Laurent Pinchart, Mauro Carvalho Chehab
  Cc: Sebastian Fricke, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar, Kieran Bingham

On 06/09/2024 20:04, Laurent Pinchart wrote:
> Hi Mauro,
> 

<snip>

>> Patchwork notification e-mails is about patch reviews, being optional.
>> Most authors won't create accounts at patchwork, but are interested only
>> at the final result. "b4 ty" or similar sounds more appropriate to me.
> 
> Does patchwork send notification to the author if they don't have an
> account ?
> 
>> Also, having such patches c/c to linuxtv-commits help maintainers,
>> committers and developers to keep track with tree updates.
> 
> Who uses linuxtv-commits, and for what purpose ? I know I never look at
> it, but I'm interested in hearing if anyone uses it.

I do. It is easy to search if I want to know if a specific patch was merged,
faster than checking with git. Also I can see immediately when a patch was
merged.

I'd call it a convenience for me, rather than an essential part of my workflow.
I.e., I could do without it, but it is nice to have.

Regards,

	Hans


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06  8:11 ` Mauro Carvalho Chehab
  2024-09-06 10:36   ` Sebastian Fricke
  2024-09-06 13:32   ` Laurent Pinchart
@ 2024-09-07  8:02   ` Hans Verkuil
  2024-09-07 11:46     ` Laurent Pinchart
  2 siblings, 1 reply; 20+ messages in thread
From: Hans Verkuil @ 2024-09-07  8:02 UTC (permalink / raw)
  To: Mauro Carvalho Chehab
  Cc: Linux Media Mailing List, Sakari Ailus, Daniel Almeida,
	Mauro Carvalho Chehab, Sebastian Fricke, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Laurent Pinchart,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On 06/09/2024 10:11, Mauro Carvalho Chehab wrote:
> Em Thu, 5 Sep 2024 09:16:27 +0200
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> 
>> Hi all,
>>
>> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
>> it is subject to change and all times are guesstimates!
>>
>> The media summit will be held on Monday September 16th. Avnet Silica has very
>> kindly offered to host this summit at their Vienna office, which is about 35
>> minutes by public transport from the Open Source Summit Europe venue
>> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
>>
>> Avnet Silica Office Location:
>>
>> Schönbrunner Str. 297/307, 1120 Vienna, Austria
>>
>> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
>>
>> Refreshments are available during the day.
>>
>> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
>> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
>> Norway.
>>
>> Regarding the face mask policy: we will follow the same guidance that the
>> Linux Foundation gives for the EOSS conference:
>>
>> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
>>
>>
>> In-Person Attendees:
>>
>> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
>> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
>> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
>> Steve Cho <stevecho@chromium.org> (Google)
>> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
>> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
>> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
>> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
>> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
>> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
>> Ricardo Ribalda <ribalda@chromium.org> (Google)
>> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
>> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
>> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
>> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
>> Sean Young <sean@mess.org>
>> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
>>
>> Remote Attendees (using MS Teams):
>>
>> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
>> Tomasz Figa <tfiga@chromium.org> (Google)
>> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
>> Devarsh Thakkar <devarsht@ti.com> (TI)
>>
>> Note: information on how to connect remotely will come later.
>>
>> If any information above is incorrect, or if I missed someone, then please let me know.
>>
>> We are currently 17 confirmed in-person participants, so we're pretty much full.
>> If you want to join remotely, then contact me and I'll add you to that list.
>>
>> Draft agenda:
>>
>> 8:45-9:15: get settled :-)
>>
>> 9:15-9:25: Hans: Quick introduction
>>
>> 9:25-11:00: Ricardo: multi-committer model using gitlab
> 
> As part of such discussion, IMO some topics that should be covered:
> 
> 1. All committers shall use a common procedure for all merges.
> 
>    This is easy said than done. So, IMO, it is needed some common scripts
>    to be used by all committers. On my tests when merging two PRs there,
>    those seems to be the minimal set of scripts that are needed:
> 
>    a) script to create a new topic branch at linux-media/users/<user>
>       The input parameter is the message-ID, e. g. something like:
> 
> 	create_media_staging_topic <topic_name> <message_id>
> 
>       (eventually with an extra parameter with the name of the tree)
> 
>       It shall use patchwork REST interface to get the patches - or at least
>       to check if all patches are there (and then use b4).
> 
>       such script needs to work with a single patch, a patch series and a
>       pull request.
> 
>       the message ID of every patch, including the PR should be stored at
>       the MR, as this will be needed to later update patchwork.
> 
>    b) once gitlab CI runs, there are two possible outcomes: patches may
>       pass or not. If they pass, a MR will be created and eventually be
>       merged.
> 
>       Either merged or not (because something failed or the patches require
>       more work), the patchwork status of the patch require changes to
>       reflect what happened. IMO, another script is needed to update the
>       patch/patch series/PR on patchwork on a consistent way.
> 
>       This is actually a *big* gap we have here. I have a script that 
>       manually check patchwork status and the gap is huge. currently,
>       there are 73 patches that seems to be merged, but patchwork was not
>       updated.
> 
>       From what I noticed, some PR submitters almost never update patchwork
>       after the merges.
> 
>       So another script to solve this gap is needed, doing updates on all 
>       patches that were picked by the first script. Something like:
> 
>       update_patchwork_from_topic <topic_name> <new_status>
> 
>       This would likely need to use some logic to pick the message IDs
>       of the patch inside the MR.
> 
>       Such script could also check for previous versions of the patch
>       and for identical patches (it is somewhat common to receive identical
>       patches with trivial fixes from different developers).
> 
>    Someone needs to work on such script, as otherwise the multi committers
>    model may fail, and we risk needing to return back to the current model.
> 
> 2. The mailbomb script that notifies when a patch is merged at media-stage
>    we're using right now doesn't work with well with multiple committers.
> 
>    Right now, the tree at linuxtv runs it, but it might end sending patches
>    to the author and to linuxtv-commits ML that reached upstream from other
>    trees. It has some logic to prevent that, but it is not bulletproof.
>  
>    A replacement script is needed. Perhaps this can be executed together
>    with the patchwork script (1B) at the committer's machine.
> 
> 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
>    warnings and errors can be acceptable.
> 
> 4. We need to have some sort of "honour code": if undesired behavior
>    is noticed, maintainers may temporarily (or permanently) revoke
>    committer rights.
> 
>    Hopefully, this will never happen, but, if it happens, a rebase
>    of media-staging tree may be needed.
> 
> 5. The procedure for fixes wil remain the same. We'll have already enough
>    things to make it work. Let's not add fixes complexity just yet.
>    Depending on how well the new multi-committers experimental model
>    works, we may think using it for fixes as well.

6. Since now the committer has to collect the necessary A-by/R-by tags,
   how do we handle that? Today it is implicit by posting a PR: the patches
   will be signed off by me or Mauro when we process the PR. Now you need
   to collect the tags by asking others. I'd like to formalize this in some
   way.

Regards,

	Hans

> 
> Thanks,
> Mauro


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-07  7:53           ` Hans Verkuil
@ 2024-09-07 11:44             ` Laurent Pinchart
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-07 11:44 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mauro Carvalho Chehab, Sebastian Fricke, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar

On Sat, Sep 07, 2024 at 09:53:49AM +0200, Hans Verkuil wrote:
> On 06/09/2024 19:43, Mauro Carvalho Chehab wrote:
> > Em Fri, 6 Sep 2024 16:29:59 +0300 Laurent Pinchart escreveu:
> >> On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:
> >>> Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:  
> >>>> On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:  
> >>>>> Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> >>>>>    
> >>> ...  
> >>>>> 1. All committers shall use a common procedure for all merges.
> >>>>>
> >>>>>   This is easy said than done. So, IMO, it is needed some common scripts
> >>>>>   to be used by all committers. On my tests when merging two PRs there,
> >>>>>   those seems to be the minimal set of scripts that are needed:
> >>>>>
> >>>>>   a) script to create a new topic branch at linux-media/users/<user>
> >>>>>      The input parameter is the message-ID, e. g. something like:
> >>>>>
> >>>>> 	create_media_staging_topic <topic_name> <message_id>
> >>>>>
> >>>>>      (eventually with an extra parameter with the name of the tree)
> >>>>>
> >>>>>      It shall use patchwork REST interface to get the patches - or at least
> >>>>>      to check if all patches are there (and then use b4).
> >>>>>
> >>>>>      such script needs to work with a single patch, a patch series and a
> >>>>>      pull request.
> >>>>>
> >>>>>      the message ID of every patch, including the PR should be stored at
> >>>>>      the MR, as this will be needed to later update patchwork.  
> >>
> >> I'm sure individual committers will want to optimize their workflow
> >> using scripts, and possibly share them, but what's wrong with simply
> >> using b4 ? With -l it will add a link to lore, so the message ID will be
> >> available.
> >>
> >> I'd rather first focus on what we want to see included in commit
> >> messages, and then on how to make sure it gets there.
> > 
> > It is not just running b4. There's the need of preparing a MR message that
> > matches the content of patch 0, very likely with the message IDs of patches
> > and PRs, and check if everything is in place on patchwork, as this is the 
> > main tool to track the pending queue. Surely b4 can be called on such script.
> > 
> > It doesn't need to be complex, but it should automate the basic steps that
> > all committers will use.
> >  
> >>>>>
> >>>>>   b) once gitlab CI runs, there are two possible outcomes: patches may
> >>>>>      pass or not. If they pass, a MR will be created and eventually be
> >>>>>      merged.
> >>>>>
> >>>>>      Either merged or not (because something failed or the patches require
> >>>>>      more work), the patchwork status of the patch require changes to
> >>>>>      reflect what happened. IMO, another script is needed to update the
> >>>>>      patch/patch series/PR on patchwork on a consistent way.
> >>>>>
> >>>>>      This is actually a *big* gap we have here. I have a script that
> >>>>>      manually check patchwork status and the gap is huge. currently,
> >>>>>      there are 73 patches that seems to be merged, but patchwork was not
> >>>>>      updated.
> >>>>>
> >>>>>      From what I noticed, some PR submitters almost never update patchwork
> >>>>>      after the merges.    
> >>>>
> >>>> Interesting I thought that is how it should be? I mean if I send a PR,
> >>>> the one taking the PR must decide whether he wants to pull it, so the
> >>>> decision whether the set is accepted should be on the one pulling the
> >>>> changes and not on the one pushing them. Right?  
> >>>
> >>> Yes, but you still need to update previous attempts to submit the
> >>> work. So, let's see a patch series at v9 got a PR. It is up to you
> >>> to cleanup everything on patchwork from v1 to v8.  
> >>
> >> That should be done before though, when v2 is submitted, v1 should be
> >> marked as superseded. 
> > 
> > Agreed. Still most of the ones sending PRs those days don't do that.
> > 
> >> Isn't there a patchwork bot that can help with this ? 
> > 
> > As far as I'm aware, no.
> > 
> >> It's not perfect in the sense that it doesn't always match new
> >> versions with previous ones, but if it can lower the manual burden by
> >> even 80%, it's a big win.
> >>
> >>> Now, the committer handing PR for v9 should be in charge of updating
> >>> the status of it and the patches that belong to it, once the patch
> >>> is merged or once he takes a decision otherwise.  
> >>
> >> Today most authors don't have a patchwork account, so they can't update
> >> the status themselves. The responsibility mostly falls on you and Hans.
> > 
> > The responsibility is for the ones that submit PRs. Unfortunately, this
> > is a process that it is not working right now. 
> > 
> > See, if you're reviewing a v2 of a patch series, you know that v1 is
> > obsolete. It should be easy to run a script that would take the v1
> > patch series and update patchwork to mark all v1 patches obsoleted.
> > 
> >> I'm fine with moving this to committers, which will make your life
> >> easier. The patchwork update when merging a branch should ideally be
> >> done automatically by the gitlab instance. If we have lore links in the
> >> commit messages, that sounds doable.
> > 
> > With multi-committer, it won't be possible anymore for Hans and I do
> > do such updates, as the workflow will be handled by the committers.
> > 
> > So, all committers will need to cleanup patch status on patchwork.
> 
> It would be really nice if patchwork would be kept in a better state.
> 
> Two or three times per kernel cycle I go through patchwork to hoover up
> all the 'random patches' that people post, and at that time I also try
> to clean up superseded patches etc. But sometimes it isn't clear, esp. if
> the v2 renames the series title. When in doubt, I won't mark it as superseded.
> 
> So I agree with Mauro that as committer updating patchwork would be one of
> the tasks. That implies that those with commit rights also need patchwork
> rights.

This implies more work for me, and I'm fine with that. It also implies a
lower burden for Mauro and you, which I hope you will enjoy :-)

> For me it has always been easiest to keep all the patches I want to commit
> in a bundle. Then, once it is in, I update all patches in the bundle.
> 
> And whenever I post a new version, I also update the old version in patchwork.
> It's not much work as long as you stay on top of it, it is a lot more work
> if you let it slip and you have to figure it out later.
> 
> The most annoying part of dealing with PRs from others has been to find the
> patches in patchwork. I have a script that tries to find them based on the hash,
> which works well, except when the patch was modified before ending in the PR,
> then the hash no longer matches and I have to search manually.

I apply patches from the list using 'b4 shazam -lst'. This adds a
'Link:' tag to the commit message, which includes the message ID. From
that it should be possible to locate the corresponding patch in
patchwork, even when making local modifications.

What this doesn't solve is locating previous versions. That process can
be partly but not fully automated with a bot, as Kieran explained in a
separate mail in this thread. I don't think we can fully automate a
solution for this problem, but if we combine automated handling of the
last version with automation-assisted handling of previous versions, the
manual burden should be significantly lower.

> >>>>>
> >>>>>      So another script to solve this gap is needed, doing updates on all
> >>>>>      patches that were picked by the first script. Something like:    
> >>>>
> >>>> Shouldn't the update happen after the MR has been created instead,
> >>>> because only after running the CI we know whether the tests fail or
> >>>> pass? From what you say above it sounds like the first script merely
> >>>> prepares a topic branch to be tested.  
> >>>
> >>> the first script I mentioned prepares and pushes the topic branch. The
> >>> patchwork update script (the second one) can be used on several parts
> >>> of the workflow:
> >>>
> >>> - before MR: if the committer decides to not merge the changes, because
> >>>   it didn't pass on his review;
> >>> - after MR pipeline failures;
> >>> - after MR being merged: once it reaches media-staging master.
> >>>   
> >>>>>      update_patchwork_from_topic <topic_name> <new_status>
> >>>>>
> >>>>>      This would likely need to use some logic to pick the message IDs
> >>>>>      of the patch inside the MR.
> >>>>>
> >>>>>      Such script could also check for previous versions of the patch
> >>>>>      and for identical patches (it is somewhat common to receive identical
> >>>>>      patches with trivial fixes from different developers).
> >>>>>
> >>>>>   Someone needs to work on such script, as otherwise the multi committers
> >>>>>   model may fail, and we risk needing to return back to the current model.    
> >>>>
> >>>> Just my personal thought: This sounds a bit extreme to me, I mean we are
> >>>> currently in the first test run as preparation for the Media-summit
> >>>> where we actually want to discuss policies and further requirements.  
> >>>
> >>> What I'm saying is that multiple-committers model will only work if
> >>> all committers follow a common procedure (still variants between
> >>> committers is possible, provided that the minimal set is followed).
> >>>
> >>> If this doesn't happens, maintainers may be forced to do rebases and other
> >>> manual fixes, with will actually make life worse for everyone. That's
> >>> what I want to avoid by having a common set of scripts for the very basic
> >>> tasks: create a topic branch for CI to test and update patchwork.  
> >>
> >> The issues you've listed above are about updating status of patches in
> >> patchwork. That's important (or otherwise we should drop patchwork if we
> >> think it's not important, but I don't think that's the case), but I
> >> hardly see how missing updates to patchwork would call for a rebase :-)
> > 
> > The need for rebase typically won't be due to patchwork[1], but to
> > broken processes to validate patches that may happen if the patches
> > don't reach first a tree under linux-media/users; if someone uses a wrong
> > version of compliance utils; if it has a broken topic branch, etc.
> > E. g. it may happen if the agreed process isn't be followed to the
> > letter.
> 
> Well, it shouldn't be possible to merge code unless the process was followed.
> There is also a certain level of trust implied by giving someone commit rights.

CI will need to run successfully to merge a branch. There's a certain
level of human errors that are still possible, such as picking up an
older version of a series for instance. This can already happen today.
The new process increases the safeguards, it doesn't introduce new
issues.

> And if they violate that trust, then we can take away those rights.

Absolutely, and I don't foresee that becoming part of our day-to-day
activities.

> > [1] Still, rebases might still be needed if the developer is not taking
> >     enough care on patchwork. See, if a developer sends a patch X with a
> >     change, and some days/weeks later another developer sends a patch Y
> >     identical to X, except for the author (and eventually description), 
> >     if patch Y is merged, there will be a need to drop it and pick X, to
> >     properly give credits to the right person.
> > 
> >> There's a need to make sure that what lands in the shared tree is
> >> proper, and for that we need a clearly documented procedure. At this
> >> point I don't think it requires a committer tool script, even if in the
> >> future developing such a tool could help.
> >>
> >>>>> 2. The mailbomb script that notifies when a patch is merged at media-stage
> >>>>>    we're using right now doesn't work with well with multiple committers.
> >>>>>
> >>>>>    Right now, the tree at linuxtv runs it, but it might end sending patches
> >>>>>    to the author and to linuxtv-commits ML that reached upstream from other
> >>>>>    trees. It has some logic to prevent that, but it is not bulletproof.
> >>>>>
> >>>>>    A replacement script is needed. Perhaps this can be executed together
> >>>>>    with the patchwork script (1B) at the committer's machine.  
> >>
> >> This could possibly be done with 'b4 ty'.
> > 
> > Call b4 ty after having patches merged at media staging tree works for me. 
> > Again placing it on a script that would for instance call pwclient and
> > b4 ty afterwards sounds the right thing to do.
> > 
> >> Another option is to rely on patchwork notifications instead. If a patch is 
> >> marked as merged, the notification e-mail sounds enough to notify the
> >> submitter.
> > 
> > Patchwork notification e-mails is about patch reviews, being optional.
> > Most authors won't create accounts at patchwork, but are interested only
> > at the final result. "b4 ty" or similar sounds more appropriate to me.
> > 
> > Also, having such patches c/c to linuxtv-commits help maintainers,
> > committers and developers to keep track with tree updates.
> > 
> >>> The bar for sending stuff to drivers/staging is lower than for
> >>> drivers/media. According with Kernel's documentation at
> >>> Documentation/process/2.Process.rst:
> >>>
> >>> 	"Current rules require that drivers contributed to staging
> >>> 	 must, at a minimum, compile properly."
> >>>
> >>> We actually try to be better than that, but still when new stuff is
> >>> added to staging, while we try to also ensure no static code checkers
> >>> would fail, we may need to relax the rules, as it is not uncommon to
> >>> receive drivers that need extra care to follow Kernel coding style
> >>> there.  
> >>
> >> Why can't we tell the submitter to fix the warnings first ? Especially
> >> if we can point them to a CI report, that would seem the best course of
> >> action to me. drivers/staging/ is meant for code that need more time to
> >> be considered stable enough for the kernel. The stabilization doesn't
> >> happen by magic, it requires someone actively working on it. If the
> >> submitter can't bother to fix the warnings, it doesn't forebode anything
> >> good and wouldn't make me confident that the code would ever get out of
> >> staging (other than simply being deleted).
> > 
> > Surely we can ask fixes, but sometimes the driver is in so bad shape
> > that it may take some time, specially when it is a driver that came
> > from a developer without Linux upstream development experience. That
> > happened, for instance, when we merged lirc drivers, several USB non-uvc
> > camera drivers, atomisp, etc.
> > 
> > So, I'd say this will happen case by case, but, from the media CI PoV, 
> > an option to relax the checks for stuff under drivers/staging is needed.
> 
> As I mentioned elsewhere, I think this should be under a CI option: by
> default staging shouldn't have warnings, but it can be relaxed by setting
> a CI option. But I also think this should only be allowed for new files,
> not when adding patches on top.

We could do that, or we could require such cases to be handled by a
maintainer who will have direct push access. Known warnings need to be
recorded in the CI scripts if I recall correctly, so there will be a
need to involve CI maintainers anyway if we add new ones. My preference
would be to simply fix the warnings before the code lands in staging. As
I said, if the submitter can't do that, what are the chances they will
do all the rest of the work to get the driver eventually out of staging
?

> >>>>> 4. We need to have some sort of "honour code": if undesired behavior
> >>>>>    is noticed, maintainers may temporarily (or permanently) revoke
> >>>>>    committer rights.    
> >>>>
> >>>> That sounds like something to discuss on the media summit,   
> >>>
> >>> Sure. That's why I'm asking to explicitly add it to the topics to be
> >>> discussed there ;-)
> >>>
> >>> It would be nice if someone could come up with a proposal for 
> >>> it, although the discussions of such proposal should likely happen via 
> >>> email.
> >>>
> >>> Also, the ones to be added there likely need to explicitly ack with
> >>> such code before being added to gitlab's permission group.  
> >>
> >> This sounds fairly straightforward, I think we'll easily agree on rules.
> > 
> > Sure, but a document with those and an explicit ack sounds an important
> > measure to avoid potential future problems.
> > 
> >>> 5. The procedure for fixes wil remain the same. We'll have already enough
> >>>    things to make it work. Let's not add fixes complexity just yet.
> >>>    Depending on how well the new multi-committers experimental model
> >>>    works, we may think using it for fixes as well.  
> >>
> >> I think this last point should still be discussed in Vienna. I want to
> >> design a clear workflow that covers -next and -fixes. I'm fine if we
> >> decide to then implement part of the workflow only as an initial step,
> >> if there are good enough reasons to do so.
> > 
> > I don't see any need. There will be enough things for discussion there
> > just for -next, which is where 99% of the patches sit.
> > 
> > Handling -fixes require a different procedure, as maintainers need to
> > prepare an special PR to send them. Also, it is up to the maintainers 
> > to decide to either accept a patch as fixes or postpone to next,
> > as sometimes it is not a black and white decision if a patch belongs
> > to -fixes or if they will be postponed to the next merge week.
> > 
> > For instance, at -rc7, we usually postpone more complex fixes to
> > the merge window, as it is usually not a good idea to do last minute 
> > complex changes. If a committer write a fix patch during -rc7 and get
> > it merged, but the maintainers decide to postpone to the merge window,
> > the fix branch will require rebase.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-07  8:02   ` Hans Verkuil
@ 2024-09-07 11:46     ` Laurent Pinchart
  2024-09-07 11:55       ` Hans Verkuil
  0 siblings, 1 reply; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-07 11:46 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mauro Carvalho Chehab, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Sebastian Fricke,
	Martin Hecht, Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On Sat, Sep 07, 2024 at 10:02:07AM +0200, Hans Verkuil wrote:
> On 06/09/2024 10:11, Mauro Carvalho Chehab wrote:
> > Em Thu, 5 Sep 2024 09:16:27 +0200
> > Hans Verkuil <hverkuil@xs4all.nl> escreveu:
> > 
> >> Hi all,
> >>
> >> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
> >> it is subject to change and all times are guesstimates!
> >>
> >> The media summit will be held on Monday September 16th. Avnet Silica has very
> >> kindly offered to host this summit at their Vienna office, which is about 35
> >> minutes by public transport from the Open Source Summit Europe venue
> >> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
> >>
> >> Avnet Silica Office Location:
> >>
> >> Schönbrunner Str. 297/307, 1120 Vienna, Austria
> >>
> >> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
> >>
> >> Refreshments are available during the day.
> >>
> >> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
> >> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
> >> Norway.
> >>
> >> Regarding the face mask policy: we will follow the same guidance that the
> >> Linux Foundation gives for the EOSS conference:
> >>
> >> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
> >>
> >>
> >> In-Person Attendees:
> >>
> >> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
> >> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
> >> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
> >> Steve Cho <stevecho@chromium.org> (Google)
> >> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
> >> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
> >> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
> >> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
> >> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
> >> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
> >> Ricardo Ribalda <ribalda@chromium.org> (Google)
> >> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
> >> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
> >> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
> >> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
> >> Sean Young <sean@mess.org>
> >> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
> >>
> >> Remote Attendees (using MS Teams):
> >>
> >> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> >> Tomasz Figa <tfiga@chromium.org> (Google)
> >> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> >> Devarsh Thakkar <devarsht@ti.com> (TI)
> >>
> >> Note: information on how to connect remotely will come later.
> >>
> >> If any information above is incorrect, or if I missed someone, then please let me know.
> >>
> >> We are currently 17 confirmed in-person participants, so we're pretty much full.
> >> If you want to join remotely, then contact me and I'll add you to that list.
> >>
> >> Draft agenda:
> >>
> >> 8:45-9:15: get settled :-)
> >>
> >> 9:15-9:25: Hans: Quick introduction
> >>
> >> 9:25-11:00: Ricardo: multi-committer model using gitlab
> > 
> > As part of such discussion, IMO some topics that should be covered:
> > 
> > 1. All committers shall use a common procedure for all merges.
> > 
> >    This is easy said than done. So, IMO, it is needed some common scripts
> >    to be used by all committers. On my tests when merging two PRs there,
> >    those seems to be the minimal set of scripts that are needed:
> > 
> >    a) script to create a new topic branch at linux-media/users/<user>
> >       The input parameter is the message-ID, e. g. something like:
> > 
> > 	create_media_staging_topic <topic_name> <message_id>
> > 
> >       (eventually with an extra parameter with the name of the tree)
> > 
> >       It shall use patchwork REST interface to get the patches - or at least
> >       to check if all patches are there (and then use b4).
> > 
> >       such script needs to work with a single patch, a patch series and a
> >       pull request.
> > 
> >       the message ID of every patch, including the PR should be stored at
> >       the MR, as this will be needed to later update patchwork.
> > 
> >    b) once gitlab CI runs, there are two possible outcomes: patches may
> >       pass or not. If they pass, a MR will be created and eventually be
> >       merged.
> > 
> >       Either merged or not (because something failed or the patches require
> >       more work), the patchwork status of the patch require changes to
> >       reflect what happened. IMO, another script is needed to update the
> >       patch/patch series/PR on patchwork on a consistent way.
> > 
> >       This is actually a *big* gap we have here. I have a script that 
> >       manually check patchwork status and the gap is huge. currently,
> >       there are 73 patches that seems to be merged, but patchwork was not
> >       updated.
> > 
> >       From what I noticed, some PR submitters almost never update patchwork
> >       after the merges.
> > 
> >       So another script to solve this gap is needed, doing updates on all 
> >       patches that were picked by the first script. Something like:
> > 
> >       update_patchwork_from_topic <topic_name> <new_status>
> > 
> >       This would likely need to use some logic to pick the message IDs
> >       of the patch inside the MR.
> > 
> >       Such script could also check for previous versions of the patch
> >       and for identical patches (it is somewhat common to receive identical
> >       patches with trivial fixes from different developers).
> > 
> >    Someone needs to work on such script, as otherwise the multi committers
> >    model may fail, and we risk needing to return back to the current model.
> > 
> > 2. The mailbomb script that notifies when a patch is merged at media-stage
> >    we're using right now doesn't work with well with multiple committers.
> > 
> >    Right now, the tree at linuxtv runs it, but it might end sending patches
> >    to the author and to linuxtv-commits ML that reached upstream from other
> >    trees. It has some logic to prevent that, but it is not bulletproof.
> >  
> >    A replacement script is needed. Perhaps this can be executed together
> >    with the patchwork script (1B) at the committer's machine.
> > 
> > 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
> >    warnings and errors can be acceptable.
> > 
> > 4. We need to have some sort of "honour code": if undesired behavior
> >    is noticed, maintainers may temporarily (or permanently) revoke
> >    committer rights.
> > 
> >    Hopefully, this will never happen, but, if it happens, a rebase
> >    of media-staging tree may be needed.
> > 
> > 5. The procedure for fixes wil remain the same. We'll have already enough
> >    things to make it work. Let's not add fixes complexity just yet.
> >    Depending on how well the new multi-committers experimental model
> >    works, we may think using it for fixes as well.
> 
> 6. Since now the committer has to collect the necessary A-by/R-by tags,
>    how do we handle that? Today it is implicit by posting a PR: the patches
>    will be signed off by me or Mauro when we process the PR. Now you need
>    to collect the tags by asking others. I'd like to formalize this in some
>    way.

Tags should be sent to the list as part of the review process, right ?
In that case they can be collected from there. b4 does so automatically.
We also sometimes give Rb tags in IRC as a shortcut, they can be added
manually, or we can decide that tags always have to be posted to the
list.

I don't really see the issue, am I missing something ?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-07 11:46     ` Laurent Pinchart
@ 2024-09-07 11:55       ` Hans Verkuil
  2024-09-07 16:29         ` Laurent Pinchart
  0 siblings, 1 reply; 20+ messages in thread
From: Hans Verkuil @ 2024-09-07 11:55 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Mauro Carvalho Chehab, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Sebastian Fricke,
	Martin Hecht, Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On 07/09/2024 13:46, Laurent Pinchart wrote:
> On Sat, Sep 07, 2024 at 10:02:07AM +0200, Hans Verkuil wrote:
>> On 06/09/2024 10:11, Mauro Carvalho Chehab wrote:
>>> Em Thu, 5 Sep 2024 09:16:27 +0200
>>> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>>>
>>>> Hi all,
>>>>
>>>> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
>>>> it is subject to change and all times are guesstimates!
>>>>
>>>> The media summit will be held on Monday September 16th. Avnet Silica has very
>>>> kindly offered to host this summit at their Vienna office, which is about 35
>>>> minutes by public transport from the Open Source Summit Europe venue
>>>> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
>>>>
>>>> Avnet Silica Office Location:
>>>>
>>>> Schönbrunner Str. 297/307, 1120 Vienna, Austria
>>>>
>>>> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
>>>>
>>>> Refreshments are available during the day.
>>>>
>>>> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
>>>> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
>>>> Norway.
>>>>
>>>> Regarding the face mask policy: we will follow the same guidance that the
>>>> Linux Foundation gives for the EOSS conference:
>>>>
>>>> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
>>>>
>>>>
>>>> In-Person Attendees:
>>>>
>>>> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
>>>> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
>>>> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
>>>> Steve Cho <stevecho@chromium.org> (Google)
>>>> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
>>>> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
>>>> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
>>>> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
>>>> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
>>>> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
>>>> Ricardo Ribalda <ribalda@chromium.org> (Google)
>>>> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
>>>> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
>>>> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
>>>> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
>>>> Sean Young <sean@mess.org>
>>>> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
>>>>
>>>> Remote Attendees (using MS Teams):
>>>>
>>>> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
>>>> Tomasz Figa <tfiga@chromium.org> (Google)
>>>> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
>>>> Devarsh Thakkar <devarsht@ti.com> (TI)
>>>>
>>>> Note: information on how to connect remotely will come later.
>>>>
>>>> If any information above is incorrect, or if I missed someone, then please let me know.
>>>>
>>>> We are currently 17 confirmed in-person participants, so we're pretty much full.
>>>> If you want to join remotely, then contact me and I'll add you to that list.
>>>>
>>>> Draft agenda:
>>>>
>>>> 8:45-9:15: get settled :-)
>>>>
>>>> 9:15-9:25: Hans: Quick introduction
>>>>
>>>> 9:25-11:00: Ricardo: multi-committer model using gitlab
>>>
>>> As part of such discussion, IMO some topics that should be covered:
>>>
>>> 1. All committers shall use a common procedure for all merges.
>>>
>>>    This is easy said than done. So, IMO, it is needed some common scripts
>>>    to be used by all committers. On my tests when merging two PRs there,
>>>    those seems to be the minimal set of scripts that are needed:
>>>
>>>    a) script to create a new topic branch at linux-media/users/<user>
>>>       The input parameter is the message-ID, e. g. something like:
>>>
>>> 	create_media_staging_topic <topic_name> <message_id>
>>>
>>>       (eventually with an extra parameter with the name of the tree)
>>>
>>>       It shall use patchwork REST interface to get the patches - or at least
>>>       to check if all patches are there (and then use b4).
>>>
>>>       such script needs to work with a single patch, a patch series and a
>>>       pull request.
>>>
>>>       the message ID of every patch, including the PR should be stored at
>>>       the MR, as this will be needed to later update patchwork.
>>>
>>>    b) once gitlab CI runs, there are two possible outcomes: patches may
>>>       pass or not. If they pass, a MR will be created and eventually be
>>>       merged.
>>>
>>>       Either merged or not (because something failed or the patches require
>>>       more work), the patchwork status of the patch require changes to
>>>       reflect what happened. IMO, another script is needed to update the
>>>       patch/patch series/PR on patchwork on a consistent way.
>>>
>>>       This is actually a *big* gap we have here. I have a script that 
>>>       manually check patchwork status and the gap is huge. currently,
>>>       there are 73 patches that seems to be merged, but patchwork was not
>>>       updated.
>>>
>>>       From what I noticed, some PR submitters almost never update patchwork
>>>       after the merges.
>>>
>>>       So another script to solve this gap is needed, doing updates on all 
>>>       patches that were picked by the first script. Something like:
>>>
>>>       update_patchwork_from_topic <topic_name> <new_status>
>>>
>>>       This would likely need to use some logic to pick the message IDs
>>>       of the patch inside the MR.
>>>
>>>       Such script could also check for previous versions of the patch
>>>       and for identical patches (it is somewhat common to receive identical
>>>       patches with trivial fixes from different developers).
>>>
>>>    Someone needs to work on such script, as otherwise the multi committers
>>>    model may fail, and we risk needing to return back to the current model.
>>>
>>> 2. The mailbomb script that notifies when a patch is merged at media-stage
>>>    we're using right now doesn't work with well with multiple committers.
>>>
>>>    Right now, the tree at linuxtv runs it, but it might end sending patches
>>>    to the author and to linuxtv-commits ML that reached upstream from other
>>>    trees. It has some logic to prevent that, but it is not bulletproof.
>>>  
>>>    A replacement script is needed. Perhaps this can be executed together
>>>    with the patchwork script (1B) at the committer's machine.
>>>
>>> 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
>>>    warnings and errors can be acceptable.
>>>
>>> 4. We need to have some sort of "honour code": if undesired behavior
>>>    is noticed, maintainers may temporarily (or permanently) revoke
>>>    committer rights.
>>>
>>>    Hopefully, this will never happen, but, if it happens, a rebase
>>>    of media-staging tree may be needed.
>>>
>>> 5. The procedure for fixes wil remain the same. We'll have already enough
>>>    things to make it work. Let's not add fixes complexity just yet.
>>>    Depending on how well the new multi-committers experimental model
>>>    works, we may think using it for fixes as well.
>>
>> 6. Since now the committer has to collect the necessary A-by/R-by tags,
>>    how do we handle that? Today it is implicit by posting a PR: the patches
>>    will be signed off by me or Mauro when we process the PR. Now you need
>>    to collect the tags by asking others. I'd like to formalize this in some
>>    way.
> 
> Tags should be sent to the list as part of the review process, right ?
> In that case they can be collected from there. b4 does so automatically.
> We also sometimes give Rb tags in IRC as a shortcut, they can be added
> manually, or we can decide that tags always have to be posted to the
> list.
> 
> I don't really see the issue, am I missing something ?
> 

It's not the collecting of given tags, it is knowing that I need to review
a patch so it can be given a A-by or R-by tag. Today a PR implies that I
will look at it (to varying degrees) and sign off on it, but now you need
to actively request that I look at e.g. a v4l2-core patch so you can have
the required minimum number A/R-by tags.

There is no clear process for that.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-07 11:55       ` Hans Verkuil
@ 2024-09-07 16:29         ` Laurent Pinchart
  0 siblings, 0 replies; 20+ messages in thread
From: Laurent Pinchart @ 2024-09-07 16:29 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Mauro Carvalho Chehab, Linux Media Mailing List, Sakari Ailus,
	Daniel Almeida, Mauro Carvalho Chehab, Sebastian Fricke,
	Martin Hecht, Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier,
	Ricardo Ribalda, Michael Tretter, Alain Volmat, Sean Young,
	Steve Cho, Tomasz Figa, Hidenori Kobayashi, Hu, Jerry W,
	Suresh Vankadara, Devarsh Thakkar, r-donadkar

On Sat, Sep 07, 2024 at 01:55:47PM +0200, Hans Verkuil wrote:
> On 07/09/2024 13:46, Laurent Pinchart wrote:
> > On Sat, Sep 07, 2024 at 10:02:07AM +0200, Hans Verkuil wrote:
> >> On 06/09/2024 10:11, Mauro Carvalho Chehab wrote:
> >>> Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> >>>
> >>>> Hi all,
> >>>>
> >>>> Here is my fifth (and likely final) stab at an agenda for the media summit. As always,
> >>>> it is subject to change and all times are guesstimates!
> >>>>
> >>>> The media summit will be held on Monday September 16th. Avnet Silica has very
> >>>> kindly offered to host this summit at their Vienna office, which is about 35
> >>>> minutes by public transport from the Open Source Summit Europe venue
> >>>> (https://events.linuxfoundation.org/open-source-summit-europe/OSSE).
> >>>>
> >>>> Avnet Silica Office Location:
> >>>>
> >>>> Schönbrunner Str. 297/307, 1120 Vienna, Austria
> >>>>
> >>>> https://www.google.com/maps/place/Avnet+EMG+Elektronische+Bauteile+GmbH+(Silica)/@48.183203,16.3100937,15z/data=!4m6!3m5!1s0x476da80e20b26d5b:0x2c5d2a77bbd43334!8m2!3d48.1832035!4d16.320372!16s%2Fg%2F1tcy32vt?entry=ttu
> >>>>
> >>>> Refreshments are available during the day.
> >>>>
> >>>> Lunch is held at Schönbrunner Stöckl (https://www.schoenbrunnerstoeckl.com/), close
> >>>> to the Avnet Silica office. The lunch is sponsored by Ideas on Board and Cisco Systems
> >>>> Norway.
> >>>>
> >>>> Regarding the face mask policy: we will follow the same guidance that the
> >>>> Linux Foundation gives for the EOSS conference:
> >>>>
> >>>> https://events.linuxfoundation.org/open-source-summit-europe/attend/health-and-safety/#onsite-health-and-safety
> >>>>
> >>>>
> >>>> In-Person Attendees:
> >>>>
> >>>> Sakari Ailus <sakari.ailus@linux.intel.com> (Intel)
> >>>> Daniel Almeida <daniel.almeida@collabora.com> (Collabora)
> >>>> Mauro Carvalho Chehab <mchehab@kernel.org> (Huawei, Media Kernel Maintainer)
> >>>> Steve Cho <stevecho@chromium.org> (Google)
> >>>> Sebastian Fricke <sebastian.fricke@collabora.com> (Collabora)
> >>>> Martin Hecht <martin.hecht@avnet.eu> (Avnet)
> >>>> Tommaso Merciai <tomm.merciai@gmail.com> (Avnet)
> >>>> Jacopo Mondi <jacopo.mondi@ideasonboard.com> (Ideas On Board)
> >>>> Benjamin Mugnier <benjamin.mugnier@foss.st.com> (ST Electronics)
> >>>> Laurent Pinchart <laurent.pinchart@ideasonboard.com> (Ideas On Board)
> >>>> Ricardo Ribalda <ribalda@chromium.org> (Google)
> >>>> Michael Tretter <m.tretter@pengutronix.de> (Pengutronix)
> >>>> Suresh Vankadara <svankada@qti.qualcomm.com> (Qualcomm)
> >>>> Hans Verkuil <hverkuil-cisco@xs4all.nl> (Cisco Systems Norway)
> >>>> Alain Volmat <alain.volmat@foss.st.com> (ST Electronics)
> >>>> Sean Young <sean@mess.org>
> >>>> Jerry W Hu <jerry.w.hu@intel.com> (Intel)
> >>>>
> >>>> Remote Attendees (using MS Teams):
> >>>>
> >>>> Rishikesh Donadkar <r-donadkar@ti.com> (TI)
> >>>> Tomasz Figa <tfiga@chromium.org> (Google)
> >>>> Hidenori Kobayashi <hidenorik@chromium.org> (Google)
> >>>> Devarsh Thakkar <devarsht@ti.com> (TI)
> >>>>
> >>>> Note: information on how to connect remotely will come later.
> >>>>
> >>>> If any information above is incorrect, or if I missed someone, then please let me know.
> >>>>
> >>>> We are currently 17 confirmed in-person participants, so we're pretty much full.
> >>>> If you want to join remotely, then contact me and I'll add you to that list.
> >>>>
> >>>> Draft agenda:
> >>>>
> >>>> 8:45-9:15: get settled :-)
> >>>>
> >>>> 9:15-9:25: Hans: Quick introduction
> >>>>
> >>>> 9:25-11:00: Ricardo: multi-committer model using gitlab
> >>>
> >>> As part of such discussion, IMO some topics that should be covered:
> >>>
> >>> 1. All committers shall use a common procedure for all merges.
> >>>
> >>>    This is easy said than done. So, IMO, it is needed some common scripts
> >>>    to be used by all committers. On my tests when merging two PRs there,
> >>>    those seems to be the minimal set of scripts that are needed:
> >>>
> >>>    a) script to create a new topic branch at linux-media/users/<user>
> >>>       The input parameter is the message-ID, e. g. something like:
> >>>
> >>> 	create_media_staging_topic <topic_name> <message_id>
> >>>
> >>>       (eventually with an extra parameter with the name of the tree)
> >>>
> >>>       It shall use patchwork REST interface to get the patches - or at least
> >>>       to check if all patches are there (and then use b4).
> >>>
> >>>       such script needs to work with a single patch, a patch series and a
> >>>       pull request.
> >>>
> >>>       the message ID of every patch, including the PR should be stored at
> >>>       the MR, as this will be needed to later update patchwork.
> >>>
> >>>    b) once gitlab CI runs, there are two possible outcomes: patches may
> >>>       pass or not. If they pass, a MR will be created and eventually be
> >>>       merged.
> >>>
> >>>       Either merged or not (because something failed or the patches require
> >>>       more work), the patchwork status of the patch require changes to
> >>>       reflect what happened. IMO, another script is needed to update the
> >>>       patch/patch series/PR on patchwork on a consistent way.
> >>>
> >>>       This is actually a *big* gap we have here. I have a script that 
> >>>       manually check patchwork status and the gap is huge. currently,
> >>>       there are 73 patches that seems to be merged, but patchwork was not
> >>>       updated.
> >>>
> >>>       From what I noticed, some PR submitters almost never update patchwork
> >>>       after the merges.
> >>>
> >>>       So another script to solve this gap is needed, doing updates on all 
> >>>       patches that were picked by the first script. Something like:
> >>>
> >>>       update_patchwork_from_topic <topic_name> <new_status>
> >>>
> >>>       This would likely need to use some logic to pick the message IDs
> >>>       of the patch inside the MR.
> >>>
> >>>       Such script could also check for previous versions of the patch
> >>>       and for identical patches (it is somewhat common to receive identical
> >>>       patches with trivial fixes from different developers).
> >>>
> >>>    Someone needs to work on such script, as otherwise the multi committers
> >>>    model may fail, and we risk needing to return back to the current model.
> >>>
> >>> 2. The mailbomb script that notifies when a patch is merged at media-stage
> >>>    we're using right now doesn't work with well with multiple committers.
> >>>
> >>>    Right now, the tree at linuxtv runs it, but it might end sending patches
> >>>    to the author and to linuxtv-commits ML that reached upstream from other
> >>>    trees. It has some logic to prevent that, but it is not bulletproof.
> >>>  
> >>>    A replacement script is needed. Perhaps this can be executed together
> >>>    with the patchwork script (1B) at the committer's machine.
> >>>
> >>> 3. Staging require different rules, as smatch/spatch/sparse/checkpatch
> >>>    warnings and errors can be acceptable.
> >>>
> >>> 4. We need to have some sort of "honour code": if undesired behavior
> >>>    is noticed, maintainers may temporarily (or permanently) revoke
> >>>    committer rights.
> >>>
> >>>    Hopefully, this will never happen, but, if it happens, a rebase
> >>>    of media-staging tree may be needed.
> >>>
> >>> 5. The procedure for fixes wil remain the same. We'll have already enough
> >>>    things to make it work. Let's not add fixes complexity just yet.
> >>>    Depending on how well the new multi-committers experimental model
> >>>    works, we may think using it for fixes as well.
> >>
> >> 6. Since now the committer has to collect the necessary A-by/R-by tags,
> >>    how do we handle that? Today it is implicit by posting a PR: the patches
> >>    will be signed off by me or Mauro when we process the PR. Now you need
> >>    to collect the tags by asking others. I'd like to formalize this in some
> >>    way.
> > 
> > Tags should be sent to the list as part of the review process, right ?
> > In that case they can be collected from there. b4 does so automatically.
> > We also sometimes give Rb tags in IRC as a shortcut, they can be added
> > manually, or we can decide that tags always have to be posted to the
> > list.
> > 
> > I don't really see the issue, am I missing something ?
> 
> It's not the collecting of given tags, it is knowing that I need to review
> a patch so it can be given a A-by or R-by tag. Today a PR implies that I
> will look at it (to varying degrees) and sign off on it, but now you need
> to actively request that I look at e.g. a v4l2-core patch so you can have
> the required minimum number A/R-by tags.

All the pull requests I've sent so far that included V4L2 core changes
were assuming that you had reviewed the patches already, or had a chance
to review them and decided not to. I don't recall a case where you
refused such a pull request (but my memory may fail me).

I think we need to give any interested party a chance to review the
changes they're interested in. As far as I'm concerned, you review my
patches on the list in a timely manner, and when some fall through the
cracks, if I think you would be interested in reviewing them, I ping
you. That has worked quite well so far.

I'm all for discussing the review process as part of the overall multi
committer model, the two are not independent.

> There is no clear process for that.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [ANN] Media Summit September 16th: Draft Agenda (v5)
  2024-09-06 18:04           ` Laurent Pinchart
  2024-09-06 21:22             ` Kieran Bingham
  2024-09-07  7:56             ` Hans Verkuil
@ 2024-09-07 20:43             ` Mauro Carvalho Chehab
  2 siblings, 0 replies; 20+ messages in thread
From: Mauro Carvalho Chehab @ 2024-09-07 20:43 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sebastian Fricke, Hans Verkuil, Linux Media Mailing List,
	Sakari Ailus, Daniel Almeida, Mauro Carvalho Chehab, Martin Hecht,
	Tommaso Merciai, Jacopo Mondi, Benjamin Mugnier, Ricardo Ribalda,
	Michael Tretter, Alain Volmat, Sean Young, Steve Cho, Tomasz Figa,
	Hidenori Kobayashi, Hu, Jerry W, Suresh Vankadara,
	Devarsh Thakkar, r-donadkar, Kieran Bingham

Em Fri, 6 Sep 2024 21:04:39 +0300
Laurent Pinchart <laurent.pinchart@ideasonboard.com> escreveu:

> Hi Mauro,
> 
> (CC'ing Kieran, for a question below)
> 
> First of all, I'd like to say that I think we agree on more points than
> may be apparent from the conversation. I believe this new process will
> lower the administrative burden on everybody, which should also come as
> a relief.
> 
> On Fri, Sep 06, 2024 at 07:43:10PM +0200, Mauro Carvalho Chehab wrote:
> > Em Fri, 6 Sep 2024 16:29:59 +0300 Laurent Pinchart escreveu:  
> > > On Fri, Sep 06, 2024 at 01:10:30PM +0200, Mauro Carvalho Chehab wrote:  
> > > > Em Fri, 6 Sep 2024 12:36:58 +0200 Sebastian Fricke escreveu:    
> > > > > On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:    
> > > > > > Em Thu, 5 Sep 2024 09:16:27 +0200 Hans Verkuil escreveu:
> > > > > >      
> > > > ...    
> > > > > > 1. All committers shall use a common procedure for all merges.
> > > > > >
> > > > > >   This is easy said than done. So, IMO, it is needed some common scripts
> > > > > >   to be used by all committers. On my tests when merging two PRs there,
> > > > > >   those seems to be the minimal set of scripts that are needed:
> > > > > >
> > > > > >   a) script to create a new topic branch at linux-media/users/<user>
> > > > > >      The input parameter is the message-ID, e. g. something like:
> > > > > >
> > > > > >	create_media_staging_topic <topic_name> <message_id>
> > > > > >
> > > > > >      (eventually with an extra parameter with the name of the tree)
> > > > > >
> > > > > >      It shall use patchwork REST interface to get the patches - or at least
> > > > > >      to check if all patches are there (and then use b4).
> > > > > >
> > > > > >      such script needs to work with a single patch, a patch series and a
> > > > > >      pull request.
> > > > > >
> > > > > >      the message ID of every patch, including the PR should be stored at
> > > > > >      the MR, as this will be needed to later update patchwork.    
> > > 
> > > I'm sure individual committers will want to optimize their workflow
> > > using scripts, and possibly share them, but what's wrong with simply
> > > using b4 ? With -l it will add a link to lore, so the message ID will be
> > > available.
> > > 
> > > I'd rather first focus on what we want to see included in commit
> > > messages, and then on how to make sure it gets there.  
> > 
> > It is not just running b4. There's the need of preparing a MR message that
> > matches the content of patch 0, very likely with the message IDs of patches
> > and PRs, and check if everything is in place on patchwork, as this is the 
> > main tool to track the pending queue. Surely b4 can be called on such script.  
> 
> Is that needed, or can we rely on the Link: tag of individual patches to
> find the corresponding patch in patchwork ?

Only if all patches get links, which is not the case. Patches received
via pull requests won't have, and people may still forget o use -l option.

That's why we need a script. It can be a simple as calling b4 with -l for
normal patches, but it would likely need to store PR message IDs somewhere
too, for merges done via PRs. We'll still have those even after
multi-commiters, for people that opt to not be a committer and for certain
types of patches like uAPI, core and more complex changes.

See, the problem is that there will be a time gap between submitting a
topic branch, wait for CI to run, create a MR, wait for CI to run and
finally merge the patch.

> 
> > It doesn't need to be complex, but it should automate the basic steps that
> > all committers will use.
> >    
> > > > > >
> > > > > >   b) once gitlab CI runs, there are two possible outcomes: patches may
> > > > > >      pass or not. If they pass, a MR will be created and eventually be
> > > > > >      merged.
> > > > > >
> > > > > >      Either merged or not (because something failed or the patches require
> > > > > >      more work), the patchwork status of the patch require changes to
> > > > > >      reflect what happened. IMO, another script is needed to update the
> > > > > >      patch/patch series/PR on patchwork on a consistent way.
> > > > > >
> > > > > >      This is actually a *big* gap we have here. I have a script that
> > > > > >      manually check patchwork status and the gap is huge. currently,
> > > > > >      there are 73 patches that seems to be merged, but patchwork was not
> > > > > >      updated.
> > > > > >
> > > > > >      From what I noticed, some PR submitters almost never update patchwork
> > > > > >      after the merges.      
> > > > > 
> > > > > Interesting I thought that is how it should be? I mean if I send a PR,
> > > > > the one taking the PR must decide whether he wants to pull it, so the
> > > > > decision whether the set is accepted should be on the one pulling the
> > > > > changes and not on the one pushing them. Right?    
> > > > 
> > > > Yes, but you still need to update previous attempts to submit the
> > > > work. So, let's see a patch series at v9 got a PR. It is up to you
> > > > to cleanup everything on patchwork from v1 to v8.    
> > > 
> > > That should be done before though, when v2 is submitted, v1 should be
> > > marked as superseded.   
> > 
> > Agreed. Still most of the ones sending PRs those days don't do that.
> >   
> > > Isn't there a patchwork bot that can help with this ?   
> > 
> > As far as I'm aware, no.  
> 
> Kieran, aren't you running such a bot for libcamera ?
> 
> > > It's not perfect in the sense that it doesn't always match new
> > > versions with previous ones, but if it can lower the manual burden by
> > > even 80%, it's a big win.
> > >   
> > > > Now, the committer handing PR for v9 should be in charge of updating
> > > > the status of it and the patches that belong to it, once the patch
> > > > is merged or once he takes a decision otherwise.    
> > > 
> > > Today most authors don't have a patchwork account, so they can't update
> > > the status themselves. The responsibility mostly falls on you and Hans.  
> > 
> > The responsibility is for the ones that submit PRs. Unfortunately, this
> > is a process that it is not working right now. 
> > 
> > See, if you're reviewing a v2 of a patch series, you know that v1 is
> > obsolete. It should be easy to run a script that would take the v1
> > patch series and update patchwork to mark all v1 patches obsoleted.  
> 
> In many cases that could be automated, but sometimes locating the
> previous version automatically won't work. There will likely be some
> amount of manual work. Tooling could help.

I have a tool that could be used as basis. It is hint based and test
some conditions. Not perfect, but picks several cases. It worked fine
when I was the only one committing patches, as I had a better idea
about what it was merged, but now, with Hans and I, it is very hard
to detect false positives. Yet, it could serve as basis for a check
inside the committer's script.

> 
> > > I'm fine with moving this to committers, which will make your life
> > > easier. The patchwork update when merging a branch should ideally be
> > > done automatically by the gitlab instance. If we have lore links in the
> > > commit messages, that sounds doable.  
> > 
> > With multi-committer, it won't be possible anymore for Hans and I do
> > do such updates, as the workflow will be handled by the committers.
> > 
> > So, all committers will need to cleanup patch status on patchwork.  
> 
> As I wrote above, that's fine with me, and hopefully that will also make
> your life easier (unless you tell me you enjoy updating patchwork
> manually :-)).
> 
> > > > > >
> > > > > >      So another script to solve this gap is needed, doing updates on all
> > > > > >      patches that were picked by the first script. Something like:      
> > > > > 
> > > > > Shouldn't the update happen after the MR has been created instead,
> > > > > because only after running the CI we know whether the tests fail or
> > > > > pass? From what you say above it sounds like the first script merely
> > > > > prepares a topic branch to be tested.    
> > > > 
> > > > the first script I mentioned prepares and pushes the topic branch. The
> > > > patchwork update script (the second one) can be used on several parts
> > > > of the workflow:
> > > > 
> > > > - before MR: if the committer decides to not merge the changes, because
> > > >   it didn't pass on his review;
> > > > - after MR pipeline failures;
> > > > - after MR being merged: once it reaches media-staging master.
> > > >     
> > > > > >      update_patchwork_from_topic <topic_name> <new_status>
> > > > > >
> > > > > >      This would likely need to use some logic to pick the message IDs
> > > > > >      of the patch inside the MR.
> > > > > >
> > > > > >      Such script could also check for previous versions of the patch
> > > > > >      and for identical patches (it is somewhat common to receive identical
> > > > > >      patches with trivial fixes from different developers).
> > > > > >
> > > > > >   Someone needs to work on such script, as otherwise the multi committers
> > > > > >   model may fail, and we risk needing to return back to the current model.      
> > > > > 
> > > > > Just my personal thought: This sounds a bit extreme to me, I mean we are
> > > > > currently in the first test run as preparation for the Media-summit
> > > > > where we actually want to discuss policies and further requirements.    
> > > > 
> > > > What I'm saying is that multiple-committers model will only work if
> > > > all committers follow a common procedure (still variants between
> > > > committers is possible, provided that the minimal set is followed).
> > > > 
> > > > If this doesn't happens, maintainers may be forced to do rebases and other
> > > > manual fixes, with will actually make life worse for everyone. That's
> > > > what I want to avoid by having a common set of scripts for the very basic
> > > > tasks: create a topic branch for CI to test and update patchwork.    
> > > 
> > > The issues you've listed above are about updating status of patches in
> > > patchwork. That's important (or otherwise we should drop patchwork if we
> > > think it's not important, but I don't think that's the case), but I
> > > hardly see how missing updates to patchwork would call for a rebase :-)  
> > 
> > The need for rebase typically won't be due to patchwork[1], but to
> > broken processes to validate patches that may happen if the patches
> > don't reach first a tree under linux-media/users; if someone uses a wrong
> > version of compliance utils; if it has a broken topic branch, etc.
> > E. g. it may happen if the agreed process isn't be followed to the
> > letter.  
> 
> I think we all agree on the need of a clearly documented and simple to
> understand process.
> 
> > [1] Still, rebases might still be needed if the developer is not taking
> >     enough care on patchwork. See, if a developer sends a patch X with a
> >     change, and some days/weeks later another developer sends a patch Y
> >     identical to X, except for the author (and eventually description), 
> >     if patch Y is merged, there will be a need to drop it and pick X, to
> >     properly give credits to the right person.  
> 
> No, that doesn't call for a rebase. If we really want to credit patch X
> in the git history, we can revert Y and apply X. In most cases the
> author of patch X will not mind that Y got applied.

I'd say it depends. For trivial changes, this could be ok, as patches X 
and Y could be independently produced. But a more complex patch that could
be considered plagiarism, I would do a rebase.
> 
> > > There's a need to make sure that what lands in the shared tree is
> > > proper, and for that we need a clearly documented procedure. At this
> > > point I don't think it requires a committer tool script, even if in the
> > > future developing such a tool could help.
> > >   
> > > > > > 2. The mailbomb script that notifies when a patch is merged at media-stage
> > > > > >    we're using right now doesn't work with well with multiple committers.
> > > > > >
> > > > > >    Right now, the tree at linuxtv runs it, but it might end sending patches
> > > > > >    to the author and to linuxtv-commits ML that reached upstream from other
> > > > > >    trees. It has some logic to prevent that, but it is not bulletproof.
> > > > > >
> > > > > >    A replacement script is needed. Perhaps this can be executed together
> > > > > >    with the patchwork script (1B) at the committer's machine.    
> > > 
> > > This could possibly be done with 'b4 ty'.  
> > 
> > Call b4 ty after having patches merged at media staging tree works for me. 
> > Again placing it on a script that would for instance call pwclient and
> > b4 ty afterwards sounds the right thing to do.
> >   
> > > Another option is to rely on patchwork notifications instead. If a patch is 
> > > marked as merged, the notification e-mail sounds enough to notify the
> > > submitter.  
> > 
> > Patchwork notification e-mails is about patch reviews, being optional.
> > Most authors won't create accounts at patchwork, but are interested only
> > at the final result. "b4 ty" or similar sounds more appropriate to me.  
> 
> Does patchwork send notification to the author if they don't have an
> account ?

No. Even if the author has an account, notifications are opt-in.

> > Also, having such patches c/c to linuxtv-commits help maintainers,
> > committers and developers to keep track with tree updates.  
> 
> Who uses linuxtv-commits, and for what purpose ? I know I never look at
> it, but I'm interested in hearing if anyone uses it.

Currently, there are 42 people subscribed to it, receiving regular
updates. That's twice the number of people that will go to the media 
summit, so I suspect it is popular among developers and other people
interested on tracking the subsystem changes.

Also, authors receive e-mails from their own patches via the same
logic used by the mailing list, and people may still look at the
mailman archives when needed.

In my case, I use, when I suspect something wrong might have happened.
> 
> > > > The bar for sending stuff to drivers/staging is lower than for
> > > > drivers/media. According with Kernel's documentation at
> > > > Documentation/process/2.Process.rst:
> > > > 
> > > > 	"Current rules require that drivers contributed to staging
> > > > 	 must, at a minimum, compile properly."
> > > > 
> > > > We actually try to be better than that, but still when new stuff is
> > > > added to staging, while we try to also ensure no static code checkers
> > > > would fail, we may need to relax the rules, as it is not uncommon to
> > > > receive drivers that need extra care to follow Kernel coding style
> > > > there.    
> > > 
> > > Why can't we tell the submitter to fix the warnings first ? Especially
> > > if we can point them to a CI report, that would seem the best course of
> > > action to me. drivers/staging/ is meant for code that need more time to
> > > be considered stable enough for the kernel. The stabilization doesn't
> > > happen by magic, it requires someone actively working on it. If the
> > > submitter can't bother to fix the warnings, it doesn't forebode anything
> > > good and wouldn't make me confident that the code would ever get out of
> > > staging (other than simply being deleted).  
> > 
> > Surely we can ask fixes, but sometimes the driver is in so bad shape
> > that it may take some time, specially when it is a driver that came
> > from a developer without Linux upstream development experience. That
> > happened, for instance, when we merged lirc drivers, several USB non-uvc
> > camera drivers, atomisp, etc.
> > 
> > So, I'd say this will happen case by case, but, from the media CI PoV, 
> > an option to relax the checks for stuff under drivers/staging is needed.  
> 
> We *may* decide to accept a new driver that comes with warnings, but
> that would be a rare operation. Subsequent commits for the driver should
> not introduce new warnings.

Nobody (including me) wants to add drivers with warnings, but we still
accept them when really needed. So, from CI PoV, support for it is
needed, as it should be a discretionary decision for drivers/staging.

New warnings may appear naturally with either toolchain updates or with
changes that will reveal existing issues.

> > > > > > 4. We need to have some sort of "honour code": if undesired behavior
> > > > > >    is noticed, maintainers may temporarily (or permanently) revoke
> > > > > >    committer rights.      
> > > > > 
> > > > > That sounds like something to discuss on the media summit,     
> > > > 
> > > > Sure. That's why I'm asking to explicitly add it to the topics to be
> > > > discussed there ;-)
> > > > 
> > > > It would be nice if someone could come up with a proposal for 
> > > > it, although the discussions of such proposal should likely happen via 
> > > > email.
> > > > 
> > > > Also, the ones to be added there likely need to explicitly ack with
> > > > such code before being added to gitlab's permission group.    
> > > 
> > > This sounds fairly straightforward, I think we'll easily agree on rules.  
> > 
> > Sure, but a document with those and an explicit ack sounds an important
> > measure to avoid potential future problems.  
> 
> Yes, it will be explained in a public document.
> 
> > > > 5. The procedure for fixes wil remain the same. We'll have already enough
> > > >    things to make it work. Let's not add fixes complexity just yet.
> > > >    Depending on how well the new multi-committers experimental model
> > > >    works, we may think using it for fixes as well.    
> > > 
> > > I think this last point should still be discussed in Vienna. I want to
> > > design a clear workflow that covers -next and -fixes. I'm fine if we
> > > decide to then implement part of the workflow only as an initial step,
> > > if there are good enough reasons to do so.  
> > 
> > I don't see any need. There will be enough things for discussion there
> > just for -next, which is where 99% of the patches sit.
> > 
> > Handling -fixes require a different procedure, as maintainers need to
> > prepare an special PR to send them. Also, it is up to the maintainers 
> > to decide to either accept a patch as fixes or postpone to next,
> > as sometimes it is not a black and white decision if a patch belongs
> > to -fixes or if they will be postponed to the next merge week.
> > 
> > For instance, at -rc7, we usually postpone more complex fixes to
> > the merge window, as it is usually not a good idea to do last minute 
> > complex changes. If a committer write a fix patch during -rc7 and get
> > it merged, but the maintainers decide to postpone to the merge window,
> > the fix branch will require rebase.  
> 
> -next and -fixes are not independent. Even if they are handled through
> separate tree and processes, they need to take each other into account,
> as the -fixes branch may need to be back-merged in -next from time to
> time, and patches should never be applied to both -fixes and -next. We
> therefore need to agree on a process that will cover both. It doesn't
> mean -fixes will be handled through the shared tree right away if we
> decide there are good reasons not to do so.

Nothing changes on that regards, as we'll keep following upstream
best practices for -fixes, e. g.:

1) Fixes should be submitted to maintainers via PR;

2) Committers shall not merge patches that went to -fix inside -next.
   The maintainers will merge them from a -rc kernel that contains
   them, if needed.

Thanks,
Mauro

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2024-09-07 20:43 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-09-05  7:16 [ANN] Media Summit September 16th: Draft Agenda (v5) Hans Verkuil
2024-09-05  9:25 ` Mehdi Djait
2024-09-05  9:29   ` Hans Verkuil
2024-09-06 14:23     ` Nicolas Dufresne
2024-09-06  8:11 ` Mauro Carvalho Chehab
2024-09-06 10:36   ` Sebastian Fricke
2024-09-06 11:10     ` Mauro Carvalho Chehab
2024-09-06 13:29       ` Laurent Pinchart
2024-09-06 17:43         ` Mauro Carvalho Chehab
2024-09-06 18:04           ` Laurent Pinchart
2024-09-06 21:22             ` Kieran Bingham
2024-09-07  7:56             ` Hans Verkuil
2024-09-07 20:43             ` Mauro Carvalho Chehab
2024-09-07  7:53           ` Hans Verkuil
2024-09-07 11:44             ` Laurent Pinchart
2024-09-06 13:32   ` Laurent Pinchart
2024-09-07  8:02   ` Hans Verkuil
2024-09-07 11:46     ` Laurent Pinchart
2024-09-07 11:55       ` Hans Verkuil
2024-09-07 16:29         ` Laurent Pinchart

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox