From: "Purkait, Soham" <soham.purkait@intel.com>
To: Andi Shyti <andi.shyti@linux.intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <riana.tauro@intel.com>,
<anshuman.gupta@intel.com>, <aravind.iddamsetty@linux.intel.com>,
<badal.nilawar@intel.com>, <raag.jadav@intel.com>,
<ravi.kishore.koppuravuri@intel.com>,
<mallesh.koujalagi@intel.com>, <anoop.c.vijay@intel.com>
Subject: Re: [PATCH v1 2/2] drm/xe/xe_ras: Add RAS support for GPU health indicator
Date: Mon, 20 Apr 2026 21:49:32 +0530 [thread overview]
Message-ID: <bd5d4112-36c1-4e9f-b06e-08213922c157@intel.com> (raw)
In-Reply-To: <aeDN6g6HWd3v-V72@ashyti-mobl2.lan>
Hi Andi,
On 16-04-2026 17:24, Andi Shyti wrote:
> Hi Soham,
>
> On Thu, Apr 16, 2026 at 03:06:10PM +0530, Soham Purkait wrote:
>> GPU health indicator exposes a single sysfs interface, gpu_health,
>> at the device level, allowing administrators and management tools to
>> query the GPU health status. The interface permits both read and write
>> operations on PF and native functions, while on VFs it is exposed as
>> read-only.
> Can you describe better the interfaces? The input and output
> values?
Sure, I missed it in this series. Wouldn't it be better to put it in
the cover letter?
FYI, the sysfs file (gpu_health) lives at the device level and behaves
as follows:
$ cat /sys/.../device/gpu_health
[ok] warning critical
$ echo critical > /sys/.../device/gpu_health
$ cat /sys/.../device/gpu_health
ok warning [critical]
>
>> v1:
>> - gpu_health is read-write on PFs and native functions. It is read-only
>> on VFs. VF write attempts are rejected.
> Are you adding a changelog for V1?
>
>> Signed-off-by: Soham Purkait <soham.purkait@intel.com>
> ...
>
>> +static const char * const gpu_health_states[] = { "ok", "warning", "critical" };
>> +static const char * const gpu_health_fmt[] = {
>> + "[%s] %s %s\n",
>> + "%s [%s] %s\n",
>> + "%s %s [%s]\n",
>> +};
> Please, don't use complex sentences in sysfs outputs. Use a
> single string/character/value
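Ack, will switch to a single token. Roughly something like this (untested
sketch; assumes the same response validation as in the patch, only the
output line changes):

```c
static ssize_t gpu_health_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	/* ... fetch and validate response as before ... */

	/* Print only the current state, e.g. "critical\n". */
	return sysfs_emit(buf, "%s\n",
			  gpu_health_states[response.current_health]);
}
```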
>
> ...
>
>> +static ssize_t gpu_health_show(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> + struct xe_device *xe = kdev_to_xe_device(dev);
>> + struct xe_sysctrl_mailbox_command command = {0};
>> + struct xe_ras_health_get_response response = {0};
>> + struct xe_ras_health_get_input request = {0};
>> + u8 health;
>> + int ret;
>> + size_t rlen = 0;
>> +
>> + prepare_sysctrl_command(&command, XE_SYSCTRL_CMD_GET_HEALTH, &request,
>> + sizeof(request), &response, sizeof(response));
>> + ret = xe_sysctrl_send_command(&xe->sc, &command, &rlen);
>> + if (ret) {
>> + xe_err(xe, "[RAS]: Sysctrl error ret %d\n", ret);
>> + return -EIO;
> why not return ret? (same goes for the rest of the function and
> the store() function).
>
>> + }
>> + if (rlen != sizeof(response)) {
>> + xe_err(xe,
>> + "[RAS]: invalid Sysctrl response length %zu (expected %zu)\n",
>> + rlen, sizeof(response));
>> + return -EIO;
>> + }
>> + if (response.current_health >= ARRAY_SIZE(gpu_health_states)) {
>> + xe_err(xe, "[RAS]: invalid health state %u from Sysctrl\n",
>> + response.current_health);
>> + return -EIO;
>> + }
> ...
>
>> +static ssize_t gpu_health_store(struct device *dev, struct device_attribute *attr,
>> + const char *buf, size_t count)
>> +{
> ...
>
>> + if (IS_SRIOV_VF(xe)) {
>> + xe_dbg(xe, "[RAS]: GPU health state update rejected on VF\n");
>> + return -EPERM;
>> + }
> This is redundant as this function wouldn't be used for sriov.
>
>> + state = sysfs_match_string(gpu_health_states,
>> + buf);
>> + if (state < 0)
>> + return -EINVAL;
>> +
>> + request.new_health = (xe_ras_health_status_t)state;
>> +
>> + prepare_sysctrl_command(&command, XE_SYSCTRL_CMD_SET_HEALTH, &request,
>> + sizeof(request), &response, sizeof(response));
>> + ret = xe_sysctrl_send_command(&xe->sc, &command, &rlen);
>> + if (ret) {
>> + xe_err(xe, "[RAS]: Sysctrl error ret %d\n", ret);
>> + return -EIO;
>> + }
>> + if (rlen != sizeof(response)) {
>> + xe_err(xe,
>> + "[RAS]: invalid Sysctrl response length %zu (expected %zu)\n",
>> + rlen, sizeof(response));
>> + return -EIO;
>> + }
>> + if (response.current_health >= ARRAY_SIZE(gpu_health_states)) {
>> + xe_err(xe, "[RAS]: invalid health state %u from Sysctrl\n",
>> + response.current_health);
>> + return -EIO;
>> + }
>> +
>> + health = response.current_health;
>> +
>> + xe_dbg(xe, "[RAS]: current GPU health state=%d (%s)\n",
>> + health, gpu_health_states[health]);
>> +
> BTW, why do we need the field response.operation_status that is
> not used at all here?
This is just for debugging purposes. "response.current_health" is the
health status after the change.
Thanks,
Soham
>
>> + return count;
>> +}
>> +
>> +static struct device_attribute dev_attr_gpu_health_rw =
>> + __ATTR_RW_MODE(gpu_health, 0600);
>> +
>> +static struct device_attribute dev_attr_gpu_health_ro =
>> + __ATTR_RO_MODE(gpu_health, 0400);
>> +
>> +static struct device_attribute *gpu_health_attr(struct xe_device *xe)
>> +{
>> + return IS_SRIOV_VF(xe) ? &dev_attr_gpu_health_ro : &dev_attr_gpu_health_rw;
>> +}
> ...
>
>> +static void gpu_health_indicator_sysfs_init(struct xe_device *xe)
>> +{
>> + struct device *dev = xe->drm.dev;
>> + int err;
>> +
>> + err = device_create_file(dev, gpu_health_attr(xe));
>> + if (err)
>> + goto err;
> Please, don't use goto this way. If you need only one log out of
> the outcome of this function, print it in _init(): make this
> an int function, return err and check err in the calling
> function.
>
> Andi
>
>> +
>> + err = devm_add_action_or_reset(dev, gpu_health_sysfs_fini, dev);
>> + if (err)
>> + goto err;
>> +
>> + return;
>> +
>> +err:
>> + xe_err(xe, "[RAS]: failed to initialize GPU health sysfs, err=%d\n", err);
>> +}
>> +
>> +/**
>> + * xe_ras_init - Initialize Xe RAS
>> + * @xe: xe device instance
>> + *
>> + * Initialize Xe RAS
>> + */
>> +void xe_ras_init(struct xe_device *xe)
>> +{
>> + if (!xe->info.has_sysctrl)
>> + return;
>> +
>> + gpu_health_indicator_sysfs_init(xe);
>> +}
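Agreed, will drop the goto, make the init function return an error and
log it in the caller. Something like this (sketch, untested):

```c
static int gpu_health_indicator_sysfs_init(struct xe_device *xe)
{
	struct device *dev = xe->drm.dev;
	int err;

	err = device_create_file(dev, gpu_health_attr(xe));
	if (err)
		return err;

	return devm_add_action_or_reset(dev, gpu_health_sysfs_fini, dev);
}

void xe_ras_init(struct xe_device *xe)
{
	int err;

	if (!xe->info.has_sysctrl)
		return;

	err = gpu_health_indicator_sysfs_init(xe);
	if (err)
		xe_err(xe, "[RAS]: failed to initialize GPU health sysfs, err=%d\n",
		       err);
}
```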
Thread overview: 18+ messages
2026-04-16 9:36 [PATCH v1 0/2] drm/xe: Add support for GPU health indicator Soham Purkait
2026-04-16 9:36 ` [PATCH v1 1/2] drm/xe/xe_ras: Add structures and commands for RAS " Soham Purkait
2026-04-16 11:39 ` Andi Shyti
2026-04-17 14:45 ` Rodrigo Vivi
2026-04-16 9:36 ` [PATCH v1 2/2] drm/xe/xe_ras: Add RAS support for " Soham Purkait
2026-04-16 11:54 ` Andi Shyti
2026-04-17 14:51 ` Rodrigo Vivi
2026-04-20 15:26 ` Andi Shyti
2026-04-20 19:51 ` Rodrigo Vivi
2026-04-21 12:56 ` Andi Shyti
2026-04-21 13:21 ` Rodrigo Vivi
2026-04-22 6:05 ` Purkait, Soham
2026-04-20 16:19 ` Purkait, Soham [this message]
2026-04-20 17:35 ` Andi Shyti
2026-04-16 9:55 ` ✗ CI.checkpatch: warning for drm/xe: Add " Patchwork
2026-04-16 9:56 ` ✓ CI.KUnit: success " Patchwork
2026-04-16 10:58 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-16 12:01 ` ✗ Xe.CI.FULL: failure " Patchwork