Intel-XE Archive on lore.kernel.org
From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: "Summers, Stuart" <stuart.summers@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Lin, Shuicheng" <shuicheng.lin@intel.com>
Cc: "Winiarski, Michal" <michal.winiarski@intel.com>
Subject: Re: [PATCH] drm/xe/pf: Fix MMIO access using PF view instead of VF view during migration
Date: Wed, 29 Apr 2026 22:43:50 +0200	[thread overview]
Message-ID: <4975abb3-990b-462a-8eef-a376a6c14aad@intel.com> (raw)
In-Reply-To: <a2b00865f74aca893aa0d9a1af179b42f8dece63.camel@intel.com>



On 4/29/2026 10:25 PM, Summers, Stuart wrote:
> On Wed, 2026-04-29 at 19:22 +0000, Shuicheng Lin wrote:
>> pf_migration_mmio_save() and pf_migration_mmio_restore() initialize a
>> local VF-specific MMIO view via xe_mmio_init_vf_view() but then pass
>> &gt->mmio (the PF base) to all xe_mmio_read32()/xe_mmio_write32()
>> calls instead of the local &mmio. This causes the PF's own SW flag
>> registers to be saved/restored rather than the target VF's registers,
>> silently corrupting migration state.
>>
>> Use the VF MMIO view for all register accesses, matching the correct
>> pattern used in pf_clear_vf_scratch_regs().
>>
>> Fixes: b7c1b990f719 ("drm/xe/pf: Handle MMIO migration data as part of PF control")
>> Cc: Michał Winiarski <michal.winiarski@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Assisted-by: Claude:claude-opus-4.6
>> Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>

Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

>> ---
>>  drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c | 8 ++++----
>>  1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
>> index 87a164efcc33..01fe03b9efe8 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
>> @@ -385,10 +385,10 @@ static int pf_migration_mmio_save(struct xe_gt *gt, unsigned int vfid, void *buf
>>  
>>         if (xe_gt_is_media_type(gt))
>>                 for (n = 0; n < MED_VF_SW_FLAG_COUNT; n++)
>> -                       regs[n] = xe_mmio_read32(&gt->mmio, MED_VF_SW_FLAG(n));
>> +                       regs[n] = xe_mmio_read32(&mmio, MED_VF_SW_FLAG(n));
> 
> Good to get feedback from Michal Wa/Michal Wi here, but I don't see any
> usage of these MMIOs from the VF in the driver. Are these even exposed
> to the VF? This seems unsafe from what I can see...

see xe_guc_mmio_send_recv()

> 
> Can you show the error you are seeing specifically?

it might be hard to catch the VF driver in the middle of the
MMIO communication, which is usually done only during probe

> 
> Thanks,
> Stuart
> 
>>         else
>>                 for (n = 0; n < VF_SW_FLAG_COUNT; n++)
>> -                       regs[n] = xe_mmio_read32(&gt->mmio, VF_SW_FLAG(n));
>> +                       regs[n] = xe_mmio_read32(&mmio, VF_SW_FLAG(n));
>>  
>>         return 0;
>>  }
>> @@ -407,10 +407,10 @@ static int pf_migration_mmio_restore(struct xe_gt *gt, unsigned int vfid,
>>  
>>         if (xe_gt_is_media_type(gt))
>>                 for (n = 0; n < MED_VF_SW_FLAG_COUNT; n++)
>> -                       xe_mmio_write32(&gt->mmio, MED_VF_SW_FLAG(n), regs[n]);
>> +                       xe_mmio_write32(&mmio, MED_VF_SW_FLAG(n), regs[n]);
>>         else
>>                 for (n = 0; n < VF_SW_FLAG_COUNT; n++)
>> -                       xe_mmio_write32(&gt->mmio, VF_SW_FLAG(n), regs[n]);
>> +                       xe_mmio_write32(&mmio, VF_SW_FLAG(n), regs[n]);
>>  
>>         return 0;
>>  }
> 


Thread overview: 9+ messages
2026-04-29 19:22 [PATCH] drm/xe/pf: Fix MMIO access using PF view instead of VF view during migration Shuicheng Lin
2026-04-29 19:41 ` ✓ CI.KUnit: success for " Patchwork
2026-04-29 20:25 ` [PATCH] " Summers, Stuart
2026-04-29 20:40   ` Lin, Shuicheng
2026-04-29 20:43   ` Michal Wajdeczko [this message]
2026-04-29 20:55     ` Summers, Stuart
2026-04-29 21:07 ` ✓ Xe.CI.BAT: success for " Patchwork
2026-04-30  8:54 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-04-30 15:42   ` Lin, Shuicheng
