Intel-GFX Archive on lore.kernel.org
* Discussion: Moving away from Patchwork for Intel i915/Xe CI
@ 2025-03-05 16:51 Knop, Ryszard
  2025-03-05 17:30 ` Lucas De Marchi
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Knop, Ryszard @ 2025-03-05 16:51 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
  Cc: rk@dragonic.eu, De Marchi, Lucas, daniel@fooishbar.org

Hey everyone,

Patchwork has been having lots of issues recently, dropping patches,
being unusably slow and generally starting to become more of a pain
than help. Over on the CI side we are also not super fond of it and we
don't have enough resources to maintain it properly. Lucas has
suggested using public-inbox archives directly, and with some recent
tools like b4/lei making common ML workflows feasible without
reinventing the wheel, we are considering setting up a bridge between
MLs and GitHub/GitLab to instead drive CI using more modern tools.

We have not decided whether to drop Patchwork yet - this is a thread to
gather people's thoughts on whether this sounds like a good idea.

The workflow would look like this:

- A drm-tip mirror would be set up on GitHub/fd.o GitLab, automatically
pulling all changes from the upstream drm-tip on fd.o GitLab, acting as
a secondary source.
- For each new series on lore.kernel.org a bridge would create a PR by
taking the latest mirrored drm-tip source, then applying a new series
with `b4 shazam`.
- That PR would then go through the normal CI flow, with CI checks
being reported on that PR, instead of sending all the reports to the
mailing list.
- On the mailing list, the bridge would send an ack that a series has
been seen and where its results are. You would no longer receive
multiple emails with KBs of logs in your email client, but everything
would be available from PR checks (as status checks and links to full
logs only, no trimming and "last 1000 lines only").
- Mirrors, PRs and checks for public mailing lists would be public,
much like on the current public Patchwork instance.
- Logs behind links will be stored for a few months (3-6, depends on
traffic and how the situation evolves). GitHub Checks themselves (check
status, shortlogs and links) have a hard retention period of 400 days.
- Not sure about PR retention: we need a mechanism to correctly
identify merged series somehow, then to trim these from the list.
Expected retention time?
- Not sure about reporting on "CI finished": Maybe we could send one
more email with a summary of checks when the entire workflow has
finished?
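
To make the bridge concrete, its core step could be roughly the
following (an untested sketch; the repo/org name, branch scheme and PR
wording are placeholders I made up, but `b4 shazam` and the GitHub CLI
are the actual tools it would lean on):

	# hypothetical bridge step for one new series
	msgid="$1"              # message-id of the series on lore.kernel.org
	cd drm-tip-mirror
	git fetch origin
	# real code would sanitize the message-id before using it as a branch name
	git checkout -b "series/$msgid" origin/drm-tip
	b4 shazam "$msgid"      # grab the series from lore and apply it here
	git push origin "series/$msgid"
	gh pr create --repo intel-ci/drm-tip-mirror \
	    --base drm-tip --head "series/$msgid" \
	    --title "CI: $msgid" \
	    --body "Automated CI PR for https://lore.kernel.org/r/$msgid"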

On GitHub vs fd.o GitLab: I'm thinking more of GitHub here:

- GitHub generally performs better with large repositories.
- Extra fallback drm-tip source for fd.o downtime periods.
- Bonus points: We can store public Intel CI tags directly in that
mirror for moderate periods of time without abusing fd.o servers.

Either option would work fine though, so opinions here would be
appreciated :)

CI itself would not run in the native forge CI either way, but rather
on our Jenkins infra, as it does today. If there's ever a need to
switch forges, it would require reworking the bridging/reporting bits,
but not *all* the bits.

We don't want to self-host another forge as we don't have enough people
and free time to do that properly. Codeberg etc. are not an option due
to the drm-tip repo size.

And that's the initial idea. Thoughts? Do you like any of this, or does
it sound like a downgrade from what we have today?

Thanks, Ryszard


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 16:51 Discussion: Moving away from Patchwork for Intel i915/Xe CI Knop, Ryszard
@ 2025-03-05 17:30 ` Lucas De Marchi
  2025-03-05 17:52 ` Jani Nikula
  2025-03-13 10:22 ` Jani Nikula
  2 siblings, 0 replies; 13+ messages in thread
From: Lucas De Marchi @ 2025-03-05 17:30 UTC (permalink / raw)
  To: Knop, Ryszard
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	rk@dragonic.eu, daniel@fooishbar.org

On Wed, Mar 05, 2025 at 10:51:20AM -0600, Knop, Ryszard wrote:
>And that's the initial idea. Thoughts? Do you like any of this, or does
>it sound like a downgrade from what we have today?

I like it and the pros hugely outweigh the cons IMO. One thing to be
careful about is not to create a maze of links to get to useful
information that was once available as a reply to the email. I.e. it
would be bad if the user only had a link to the PR and needed to find a
way to get to the CI check status, which links to somewhere in Jenkins,
which needs some extra steps/auth or leads to a 404, etc. But I think
that is fixable even if we get it wrong in the beginning.

No opinion on the gitlab vs github question, as it's only used as an
interface to the real CI that is happening. I think you can use what
makes more sense for the CI team, but hopefully build it in a way that
would not make it a huge effort to use something else if the need
arises.

thanks
Lucas De Marchi



* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 16:51 Discussion: Moving away from Patchwork for Intel i915/Xe CI Knop, Ryszard
  2025-03-05 17:30 ` Lucas De Marchi
@ 2025-03-05 17:52 ` Jani Nikula
  2025-03-05 19:33   ` Konstantin Ryabitsev
                     ` (2 more replies)
  2025-03-13 10:22 ` Jani Nikula
  2 siblings, 3 replies; 13+ messages in thread
From: Jani Nikula @ 2025-03-05 17:52 UTC (permalink / raw)
  To: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org
  Cc: rk@dragonic.eu, De Marchi, Lucas, daniel@fooishbar.org,
	Sima Vetter

On Wed, 05 Mar 2025, "Knop, Ryszard" <ryszard.knop@intel.com> wrote:
> Hey everyone,
>
> Patchwork has been having lots of issues recently, dropping patches,
> being unusably slow and generally starting to become more of a pain
> than help. Over on the CI side we are also not super fond of it and we
> don't have enough resources to maintain it properly. Lucas has
> suggested using public-inbox archives directly, and with some recent
> tools like b4/lei making common ML workflows feasible without
> reinventing the wheel, we are considering setting up a bridge between
> MLs and GitHub/GitLab to instead drive CI using more modern tools.

I am supportive of this change.

> We have not decided whether to drop Patchwork yet - this is a thread to
> gather people's thoughts on whether this sounds like a good idea.
>
> The workflow would look like this:
>
> - A drm-tip mirror would be set up on GitHub/fd.o GitLab, automatically
> pulling all changes from the upstream drm-tip on fd.o GitLab, acting as
> a secondary source.
> - For each new series on lore.kernel.org a bridge would create a PR by
> taking the latest mirrored drm-tip source, then applying a new series
> with `b4 shazam`.

There's a small catch here. Patchwork is currently more clever about
handling series revisions when only some of the patches in a series are
updated by way of replying to the individual patch. For example [1][2].

I've been meaning to report it to upstream b4, but haven't gotten around
to it yet. But I wouldn't consider this a blocker.

[1] https://lore.kernel.org/r/20250305114820.3523077-2-imre.deak@intel.com
[2] https://patchwork.freedesktop.org/series/145782/

> - That PR would then go through the normal CI flow, with CI checks
> being reported on that PR, instead of sending all the reports to the
> mailing list.
> - On the mailing list, the bridge would send an ack that a series has
> been seen and where its results are. You would no longer receive
> multiple emails with KBs of logs in your email client, but everything
> would be available from PR checks (as status checks and links to full
> logs only, no trimming and "last 1000 lines only").

\o/

> - Mirrors, PRs and checks for public mailing lists would be public,
> much like on the current public Patchwork instance.
> - Logs behind links will be stored for a few months (3-6, depends on
> traffic and how the situation evolves). GitHub Checks themselves (check
> status, shortlogs and links) have a hard retention period of 400 days.
> - Not sure about PR retention: we need a mechanism to correctly
> identify merged series somehow, then to trim these from the list.
> Expected retention time?

With the velocity of the driver development, the test results go stale
in a matter of a week or two. I normally wouldn't merge patches older
than that without a fresh test round.

IMO a long retention isn't necessary. At most a few months? The patches
will still stay on the list forever.

> - Not sure about reporting on "CI finished": Maybe we could send one
> more email with a summary of checks when the entire workflow has
> finished?

I've previously suggested one short mail as an acknowledgement with a
URL to the results, and another mail when testing has ended one way or
another. I think it would be a good starting point.

> On GitHub vs fd.o GitLab: I'm thinking more of GitHub here:
>
> - GitHub generally performs better with large repositories.
> - Extra fallback drm-tip source for fd.o downtime periods.
> - Bonus points: We can store public Intel CI tags directly in that
> mirror for moderate periods of time without abusing fd.o servers.
>
> Either option would work fine though, so opinions here would be
> appreciated :)

I think eventually we will want to consider accepting contributions via
gitlab merge requests directly.

It would also be interesting if maintainers/committers could merge the
contributions via the gitlab UI once CI has applied the patches from the
mailing list and created the merge request.

In the merge request case, they'd have to be against individual repos
that feed into drm-tip, *not* merge requests against drm-tip
directly. So for testing CI would have to recreate drm-tip the same way
as 'dim push-branch' currently does.

I don't think these are hard requests at this time, and they should not
block forward progress, but it's maybe something you want to consider so
you're not inadvertently creating problems for your future self!

> CI itself would not run in the native forge CI either way, but rather
> on our Jenkins infra, as it does today. If there's ever a need to
> switch forges, it would require reworking the bridging/reporting bits,
> but not *all* the bits.
>
> We don't want to self-host another forge as we don't have enough people
> and free time to do that properly. Codeberg etc. are not an option due
> to the drm-tip repo size.
>
> And that's the initial idea. Thoughts? Do you like any of this, or does
> it sound like a downgrade from what we have today?

I think it sounds good overall. I don't like the flood of mails, and
they don't have complete information anyway. I'm hopeful using
github/gitlab would make the whole CI slightly more transparent too.

I wouldn't mind sunsetting fdo patchwork at all.


BR,
Jani.

-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 17:52 ` Jani Nikula
@ 2025-03-05 19:33   ` Konstantin Ryabitsev
  2025-03-06 10:42     ` Jani Nikula
  2025-03-05 19:54   ` Ryszard Knop
  2025-03-05 20:32   ` Lucas De Marchi
  2 siblings, 1 reply; 13+ messages in thread
From: Konstantin Ryabitsev @ 2025-03-05 19:33 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu, De Marchi, Lucas,
	daniel@fooishbar.org, Sima Vetter

On Wed, Mar 05, 2025 at 07:52:31PM +0200, Jani Nikula wrote:
> > - For each new series on lore.kernel.org a bridge would create a PR by
> > taking the latest mirrored drm-tip source, then applying a new series
> > with `b4 shazam`.
> 
> There's a small catch here. Patchwork is currently more clever about
> handling series revisions when only some of the patches in a series are
> updated by way of replying to the individual patch. For example [1][2].

FWIW, b4 does partial rerolls already. E.g., using your own example:

	$ b4 am -o/tmp 20250305114820.3523077-2-imre.deak@intel.com
	[...]
	---
	  ✓ [PATCH v5->v6 1/6] drm/i915/hpd: Track HPD pins instead of ports for HPD pulse events
		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
	  ✓ [PATCH v5->v6 2/6] drm/i915/hpd: Let an HPD pin be in the disabled state when handling missed IRQs
		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
	  ✓ [PATCH     v6 3/6] drm/i915/hpd: Add support for blocking the IRQ handling on an HPD pin
	  ✓ [PATCH v5->v6 4/6] drm/i915/dp: Fix link training interrupted by a short HPD pulse
		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
	  ✓ [PATCH     v6 5/6] drm/i915/dp: Queue a link check after link training is complete
	  ✓ [PATCH v5->v6 6/6] drm/i915/crt: Use intel_hpd_block/unblock() instead of intel_hpd_disable/enable()
	  ---
	  ✓ Signed: DKIM/intel.com
	---
	[...]
	WARNING: v6 is a partial reroll from previous revisions
			 Please carefully review the resulting series to ensure correctness
			 Pass --no-partial-reroll to disable

-K


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 17:52 ` Jani Nikula
  2025-03-05 19:33   ` Konstantin Ryabitsev
@ 2025-03-05 19:54   ` Ryszard Knop
  2025-03-06 10:48     ` Jani Nikula
  2025-03-05 20:32   ` Lucas De Marchi
  2 siblings, 1 reply; 13+ messages in thread
From: Ryszard Knop @ 2025-03-05 19:54 UTC (permalink / raw)
  To: Jani Nikula, Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org
  Cc: De Marchi, Lucas, daniel@fooishbar.org, Sima Vetter

On Wed, 2025-03-05 at 19:52 +0200, Jani Nikula wrote:
> I think eventually we will want to consider accepting contributions via
> gitlab merge requests directly.
> 
> It would also be interesting if maintainers/committers could merge the
> contributions via the gitlab UI once CI has applied the patches from the
> mailing list and created the merge request.
> 
> In the merge request case, they'd have to be against individual repos
> that feed into drm-tip, *not* merge requests against drm-tip
> directly. So for testing CI would have to recreate drm-tip the same way
> as 'dim push-branch' currently does.

This is doable, but perf-wise it is not going to be great. We would have
to check out all the trees pulled into drm/tip for each build, as listed
in the latest integration-manifest, replace the target tree with the MR
tree, then provide results from that. We'll see how this works out in
practice. (It should be just `dim rebuild-tip` after pointing all the
branches at the required commits?)

This also means having a backup drm/tip source for when fd.o is offline
is out; it's wired into too many places if dim gets used.
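
Roughly, per MR, I'd expect something along these lines (hand-wavy and
untested; the remote name and MR number are made up, and I'm glossing
over how exactly dim resolves the branches from the manifest):

	# hypothetical: rebuild drm/tip with one tree swapped for the MR head
	cd drm-tip
	git fetch mr-remote refs/merge-requests/1234/head   # fake MR iid
	git branch -f drm-intel-next FETCH_HEAD   # point the target tree at MR
	dim rebuild-tip                           # redo the tip integration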

Ryszard



* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 17:52 ` Jani Nikula
  2025-03-05 19:33   ` Konstantin Ryabitsev
  2025-03-05 19:54   ` Ryszard Knop
@ 2025-03-05 20:32   ` Lucas De Marchi
  2025-03-06  8:20     ` Jani Nikula
  2 siblings, 1 reply; 13+ messages in thread
From: Lucas De Marchi @ 2025-03-05 20:32 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu,
	daniel@fooishbar.org, Sima Vetter

On Wed, Mar 05, 2025 at 07:52:31PM +0200, Jani Nikula wrote:
>There's a small catch here. Patchwork is currently more clever about

for some notion of clever. Try giving this kind of feedback in the
mailing list:

"oh, in addition to what you did, you also need this:

----8<----
<diff>
----8<----"

It will a) mangle the author for the entire series, b) not do the right
thing with the patch, and the series won't apply anymore (AFAIR it tries
to replace the patch with what you gave as a diff). Also, what should go
in the subject? Is it v{n}, v{n+1} or v{n}.1? There may be an answer,
not documented anywhere, but for me relying on "this is what b4 does"
rather than on a specific behavior in this forked patchwork instance is
much better. At least with b4 we can set expectations or have hope of
eventually tweaking it.

Lucas De Marchi



* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 20:32   ` Lucas De Marchi
@ 2025-03-06  8:20     ` Jani Nikula
  0 siblings, 0 replies; 13+ messages in thread
From: Jani Nikula @ 2025-03-06  8:20 UTC (permalink / raw)
  To: Lucas De Marchi
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu,
	daniel@fooishbar.org, Sima Vetter, Konstantin Ryabitsev

On Wed, 05 Mar 2025, Lucas De Marchi <lucas.demarchi@intel.com> wrote:
> On Wed, Mar 05, 2025 at 07:52:31PM +0200, Jani Nikula wrote:
>>There's a small catch here. Patchwork is currently more clever about
>
> for some notion of clever. Try giving this kind of feedback in the
> mailing list:
>
> "oh, in addition to what you did, you also need this:
>
> ----8<----
> <diff>
> ----8<----"
>
> It will a) mangle the author for the entire series, b) not do the right
> thing with the patch, and the series won't apply anymore (AFAIR it tries
> to replace the patch with what you gave as a diff). Also, what should go
> in the subject? Is it v{n}, v{n+1} or v{n}.1? There may be an answer,
> not documented anywhere, but for me relying on "this is what b4 does"
> rather than on a specific behavior in this forked patchwork instance is
> much better. At least with b4 we can set expectations or have hope of
> eventually tweaking it.

Agreed.

And as Konstantin noted, b4 already does better than what I claimed
(maybe I need to upgrade).


BR,
Jani.


-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 19:33   ` Konstantin Ryabitsev
@ 2025-03-06 10:42     ` Jani Nikula
  2025-03-06 16:44       ` Konstantin Ryabitsev
  0 siblings, 1 reply; 13+ messages in thread
From: Jani Nikula @ 2025-03-06 10:42 UTC (permalink / raw)
  To: Konstantin Ryabitsev
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu, De Marchi, Lucas,
	daniel@fooishbar.org, Sima Vetter

On Wed, 05 Mar 2025, Konstantin Ryabitsev <konstantin@linuxfoundation.org> wrote:
> On Wed, Mar 05, 2025 at 07:52:31PM +0200, Jani Nikula wrote:
>> > - For each new series on lore.kernel.org a bridge would create a PR by
>> > taking the latest mirrored drm-tip source, then applying a new series
>> > with `b4 shazam`.
>> 
>> There's a small catch here. Patchwork is currently more clever about
>> handling series revisions when only some of the patches in a series are
>> updated by way of replying to the individual patch. For example [1][2].
>
> FWIW, b4 does partial rerolls already. E.g., using your own example:

Yay, I upgraded to 0.14 and so it does. Thanks!

The point I made is moot, and I agree with Lucas that we should align
with what b4 does.

> 	$ b4 am -o/tmp 20250305114820.3523077-2-imre.deak@intel.com
> 	[...]
> 	---
> 	  ✓ [PATCH v5->v6 1/6] drm/i915/hpd: Track HPD pins instead of ports for HPD pulse events
> 		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
> 	  ✓ [PATCH v5->v6 2/6] drm/i915/hpd: Let an HPD pin be in the disabled state when handling missed IRQs
> 		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
> 	  ✓ [PATCH     v6 3/6] drm/i915/hpd: Add support for blocking the IRQ handling on an HPD pin
> 	  ✓ [PATCH v5->v6 4/6] drm/i915/dp: Fix link training interrupted by a short HPD pulse
> 		+ Reviewed-by: Jani Nikula <jani.nikula@intel.com> (✓ DKIM/intel.com)
> 	  ✓ [PATCH     v6 5/6] drm/i915/dp: Queue a link check after link training is complete
> 	  ✓ [PATCH v5->v6 6/6] drm/i915/crt: Use intel_hpd_block/unblock() instead of intel_hpd_disable/enable()
> 	  ---
> 	  ✓ Signed: DKIM/intel.com

Side note, I often pipe messages from my MUA (notmuch-emacs) to b4, as
it nicely parses the mails and picks up the message-id from
there. Overall it works great. However, b4 seems to err on the side of
writing color codes to pipes, and I get this as output:

---
  [32m✓[0m [PATCH v5->v6 1/6] drm/i915/hpd: Track HPD pins instead of ports for HPD pulse events
    + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
  [32m✓[0m [PATCH v5->v6 2/6] drm/i915/hpd: Let an HPD pin be in the disabled state when handling missed IRQs
    + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
  [32m✓[0m [PATCH     v6 3/6] drm/i915/hpd: Add support for blocking the IRQ handling on an HPD pin
  [32m✓[0m [PATCH v5->v6 4/6] drm/i915/dp: Fix link training interrupted by a short HPD pulse
    + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
  [32m✓[0m [PATCH     v6 5/6] drm/i915/dp: Queue a link check after link training is complete
  [32m✓[0m [PATCH v5->v6 6/6] drm/i915/crt: Use intel_hpd_block/unblock() instead of intel_hpd_disable/enable()
  ---
  [32m✓[0m Signed: DKIM/intel.com
---

I haven't had the time to dig into b4 source on this, but it would be
great if it could automatically detect whether sending colors is the
right thing to do or not. Basically only emit color codes to interactive
terminals, unless forced also for pipes.

(Alternatively I could try to figure out how to enable colors on emacs
pipe output, but that's another rabbit hole...)
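
For reference, the usual fix is just an isatty() check; in Python terms
(a sketch of the common idiom, not b4's actual code) something like:

	import os
	import sys

	def want_color(force: bool = False) -> bool:
	    # color only on interactive terminals, unless explicitly forced;
	    # also honor the informal NO_COLOR convention
	    if force:
	        return True
	    if os.environ.get("NO_COLOR"):
	        return False
	    return sys.stdout.isatty()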


Thanks,
Jani.


-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 19:54   ` Ryszard Knop
@ 2025-03-06 10:48     ` Jani Nikula
  0 siblings, 0 replies; 13+ messages in thread
From: Jani Nikula @ 2025-03-06 10:48 UTC (permalink / raw)
  To: Ryszard Knop, Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org
  Cc: De Marchi, Lucas, daniel@fooishbar.org, Sima Vetter

On Wed, 05 Mar 2025, Ryszard Knop <rk@dragonic.eu> wrote:
> On Wed, 2025-03-05 at 19:52 +0200, Jani Nikula wrote:
>> I think eventually we will want to consider accepting contributions via
>> gitlab merge requests directly.
>> 
>> It would also be interesting if maintainers/committers could merge the
>> contributions via the gitlab UI once CI has applied the patches from the
>> mailing list and created the merge request.
>> 
>> In the merge request case, they'd have to be against individual repos
>> that feed into drm-tip, *not* merge requests against drm-tip
>> directly. So for testing CI would have to recreate drm-tip the same way
>> as 'dim push-branch' currently does.
>
> This is doable, but perf-wise it is not going to be great. We would have
> to check out all the trees pulled into drm/tip for each build, as listed
> in the latest integration-manifest, replace the target tree with the MR
> tree, then provide results from that. We'll see how this works out in
> practice. (It should be just `dim rebuild-tip` after pointing all the
> branches at the required commits?)
>
> This also means having a backup drm/tip source for when fd.o is offline
> is out; it's wired into too many places if dim gets used.

I think the short answer is, just go ahead with what you're planning
now, but keep the above in the back of your mind. I'm not sure we have
definitive answers without a bunch of planning yet either.

BR,
Jani.


-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-06 10:42     ` Jani Nikula
@ 2025-03-06 16:44       ` Konstantin Ryabitsev
  2025-03-07  9:23         ` Jani Nikula
  0 siblings, 1 reply; 13+ messages in thread
From: Konstantin Ryabitsev @ 2025-03-06 16:44 UTC (permalink / raw)
  To: Jani Nikula
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu, De Marchi, Lucas,
	daniel@fooishbar.org, Sima Vetter

On Thu, Mar 06, 2025 at 12:42:07PM +0200, Jani Nikula wrote:
> Side note, I often pipe messages from my MUA (notmuch-emacs) to b4, as
> it nicely parses the mails and picks up the message-id from
> there. Overall it works great. However, b4 seems to err on the side of
> writing color codes to pipes, and I get this as output:
> 
> ---
>   [32m✓[0m [PATCH v5->v6 1/6] drm/i915/hpd: Track HPD pins instead of ports for HPD pulse events
>     + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
>   [32m✓[0m [PATCH v5->v6 2/6] drm/i915/hpd: Let an HPD pin be in the disabled state when handling missed IRQs
>     + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
>   [32m✓[0m [PATCH     v6 3/6] drm/i915/hpd: Add support for blocking the IRQ handling on an HPD pin
>   [32m✓[0m [PATCH v5->v6 4/6] drm/i915/dp: Fix link training interrupted by a short HPD pulse
>     + Reviewed-by: Jani Nikula <jani.nikula@intel.com> ([32m✓[0m DKIM/intel.com)
>   [32m✓[0m [PATCH     v6 5/6] drm/i915/dp: Queue a link check after link training is complete
>   [32m✓[0m [PATCH v5->v6 6/6] drm/i915/crt: Use intel_hpd_block/unblock() instead of intel_hpd_disable/enable()
>   ---
>   [32m✓[0m Signed: DKIM/intel.com
> ---
> 
> I haven't had the time to dig into b4 source on this, but it would be
> great if it could automatically detect whether sending colors is the
> right thing to do or not. Basically only emit color codes to interactive
> terminals, unless forced also for pipes.

Yes, it should do that automatically. Please send a bug report to
tools@kernel.org and I'll work in an automated switch to "simple"
attestation marks when we don't have a terminal.

-K


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-06 16:44       ` Konstantin Ryabitsev
@ 2025-03-07  9:23         ` Jani Nikula
  0 siblings, 0 replies; 13+ messages in thread
From: Jani Nikula @ 2025-03-07  9:23 UTC (permalink / raw)
  To: Konstantin Ryabitsev
  Cc: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, rk@dragonic.eu, De Marchi, Lucas,
	daniel@fooishbar.org, Sima Vetter

On Thu, 06 Mar 2025, Konstantin Ryabitsev <konstantin@linuxfoundation.org> wrote:
> On Thu, Mar 06, 2025 at 12:42:07PM +0200, Jani Nikula wrote:
>> I haven't had the time to dig into b4 source on this, but it would be
>> great if it could automatically detect whether sending colors is the
>> right thing to do or not. Basically only emit color codes to interactive
>> terminals, unless forced also for pipes.
>
> Yes, it should do that automatically. Please send a bug report to
> tools@kernel.org and I'll work in an automated switch to "simple"
> attestation marks when we don't have a terminal.

Done. Link for posterity [1].

BR,
Jani.


[1] https://lore.kernel.org/r/87ecz9i4eo.fsf@intel.com


-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-05 16:51 Discussion: Moving away from Patchwork for Intel i915/Xe CI Knop, Ryszard
  2025-03-05 17:30 ` Lucas De Marchi
  2025-03-05 17:52 ` Jani Nikula
@ 2025-03-13 10:22 ` Jani Nikula
  2025-03-13 10:40   ` Jani Nikula
  2 siblings, 1 reply; 13+ messages in thread
From: Jani Nikula @ 2025-03-13 10:22 UTC (permalink / raw)
  To: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org
  Cc: rk@dragonic.eu, De Marchi, Lucas, daniel@fooishbar.org

On Wed, 05 Mar 2025, "Knop, Ryszard" <ryszard.knop@intel.com> wrote:
> The workflow would look like this:
>
> - A drm-tip mirror would be set up on GitHub/fd.o GitLab, automatically
> pulling all changes from the upstream drm-tip on fd.o GitLab, acting as
> a secondary source.
> - For each new series on lore.kernel.org a bridge would create a PR by
> taking the latest mirrored drm-tip source, then applying a new series
> with `b4 shazam`.
> - That PR would then go through the normal CI flow, with CI checks
> being reported on that PR, instead of sending all the reports to the
> mailing list.
> - On the mailing list, the bridge would send an ack that a series has
> been seen and where its results are. You would no longer receive
> multiple emails with KBs of logs in your email client, but everything
> would be available from PR checks (as status checks and links to full
> logs only, no trimming and "last 1000 lines only").
> - Mirrors, PRs and checks for public mailing lists would be public,
> much like on the current public Patchwork instance.
> - Logs behind links will be stored for a few months (3-6, depends on
> traffic and how the situation evolves). GitHub Checks themselves (check
> status, shortlogs and links) have a hard retention period of 400 days.
> - Not sure about PR retention: we need a mechanism to correctly
> identify merged series somehow, then to trim these from the list.
> Expected retention time?

There's one feature I like about FDO patchwork that I'd like to be able
to retain. You can find the patches and thus the test results via a
message-id like this:

https://patchwork.freedesktop.org/patch/msgid/<message-id>

I use that from my MUA 100x more often than the series URL sent by
patchwork:

https://patchwork.freedesktop.org/series/<id>/

So I'd like to have a way to get from the patch/cover-letter message-id
to the github/gitlab MR or wherever the results are.
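
If the bridge recorded the lore link in the PR description, that lookup
could stay trivial; e.g. with the GitHub CLI (repo name hypothetical,
message-id reused from the example earlier in the thread):

	gh pr list --repo intel-ci/drm-tip-mirror \
	    --search "20250305114820.3523077-2-imre.deak@intel.com in:body"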

We currently add Link: tags to commits pointing at patchwork. There
have already been requests to switch to Lore links instead, and I think
we should probably do that.

Finally, on sunsetting patchwork, I think a redirector from:

https://patchwork.freedesktop.org/patch/msgid/<message-id>

to:

https://lore.kernel.org/r/<message-id>

would be a nice thing to do, considering how many patchwork links we
have in commit messages.
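
In nginx terms, purely as an illustration (I have no idea what actually
serves patchwork.freedesktop.org), the whole redirector is one rule:

	# map /patch/msgid/<message-id> to the lore archive
	location ~ ^/patch/msgid/(?<msgid>.+)$ {
	    return 301 https://lore.kernel.org/r/$msgid;
	}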


BR,
Jani.


-- 
Jani Nikula, Intel


* Re: Discussion: Moving away from Patchwork for Intel i915/Xe CI
  2025-03-13 10:22 ` Jani Nikula
@ 2025-03-13 10:40   ` Jani Nikula
  0 siblings, 0 replies; 13+ messages in thread
From: Jani Nikula @ 2025-03-13 10:40 UTC (permalink / raw)
  To: Knop, Ryszard, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org
  Cc: rk@dragonic.eu, De Marchi, Lucas, daniel@fooishbar.org

On Thu, 13 Mar 2025, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> We currently add Link: tags to commits pointing at patchwork. There
> have already been requests to switch to Lore links instead, and I think
> we should probably do that.

I created an MR to do just that.

https://gitlab.freedesktop.org/drm/maintainer-tools/-/merge_requests/74

BR,
Jani.


-- 
Jani Nikula, Intel



Thread overview: 13+ messages
2025-03-05 16:51 Discussion: Moving away from Patchwork for Intel i915/Xe CI Knop, Ryszard
2025-03-05 17:30 ` Lucas De Marchi
2025-03-05 17:52 ` Jani Nikula
2025-03-05 19:33   ` Konstantin Ryabitsev
2025-03-06 10:42     ` Jani Nikula
2025-03-06 16:44       ` Konstantin Ryabitsev
2025-03-07  9:23         ` Jani Nikula
2025-03-05 19:54   ` Ryszard Knop
2025-03-06 10:48     ` Jani Nikula
2025-03-05 20:32   ` Lucas De Marchi
2025-03-06  8:20     ` Jani Nikula
2025-03-13 10:22 ` Jani Nikula
2025-03-13 10:40   ` Jani Nikula
