From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: He Lugang <helugang@uniontech.com>
Cc: <lucas.demarchi@intel.com>, <thomas.hellstrom@linux.intel.com>,
<maarten.lankhorst@linux.intel.com>, <mripard@kernel.org>,
<tzimmermann@suse.de>, <airlied@gmail.com>, <simona@ffwll.ch>,
<intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v2] drm/xe: use devm_add_action_or_reset() helper
Date: Mon, 16 Sep 2024 12:38:25 -0400 [thread overview]
Message-ID: <ZuhfAXjk93eXLOSh@intel.com> (raw)
In-Reply-To: <9631BC17D1E028A2+20240911102215.84865-1-helugang@uniontech.com>
On Wed, Sep 11, 2024 at 06:22:15PM +0800, He Lugang wrote:
> Use devm_add_action_or_reset() instead of devm_add_action(): if
> registering the action fails, the cleanup function is called
> automatically, so the resource is not leaked on the error path.
>
> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Signed-off-by: He Lugang <helugang@uniontech.com>
> ---
> v2: move devm_add_action_or_reset() after sysfs_create_files() to avoid
> removing sysfs files that had not been created.
> ---
pushed to drm-xe-next, thanks for the patch
> drivers/gpu/drm/xe/xe_gt_freq.c | 4 ++--
> drivers/gpu/drm/xe/xe_gt_sysfs.c | 2 +-
> 2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
> index 68a5778b4319..ab76973f3e1e 100644
> --- a/drivers/gpu/drm/xe/xe_gt_freq.c
> +++ b/drivers/gpu/drm/xe/xe_gt_freq.c
> @@ -237,11 +237,11 @@ int xe_gt_freq_init(struct xe_gt *gt)
> if (!gt->freq)
> return -ENOMEM;
>
> - err = devm_add_action(xe->drm.dev, freq_fini, gt->freq);
> + err = sysfs_create_files(gt->freq, freq_attrs);
> if (err)
> return err;
>
> - err = sysfs_create_files(gt->freq, freq_attrs);
> + err = devm_add_action_or_reset(xe->drm.dev, freq_fini, gt->freq);
> if (err)
> return err;
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_sysfs.c b/drivers/gpu/drm/xe/xe_gt_sysfs.c
> index a05c3699e8b9..ec2b8246204b 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sysfs.c
> @@ -51,5 +51,5 @@ int xe_gt_sysfs_init(struct xe_gt *gt)
>
> gt->sysfs = &kg->base;
>
> - return devm_add_action(xe->drm.dev, gt_sysfs_fini, gt);
> + return devm_add_action_or_reset(xe->drm.dev, gt_sysfs_fini, gt);
> }
> --
> 2.45.2
>
Thread overview: 10+ messages
2024-09-11 10:22 [PATCH v2] drm/xe: use devm_add_action_or_reset() helper He Lugang
2024-09-13 21:07 ` ✓ CI.Patch_applied: success for " Patchwork
2024-09-13 21:08 ` ✓ CI.checkpatch: " Patchwork
2024-09-13 21:09 ` ✓ CI.KUnit: " Patchwork
2024-09-13 21:20 ` ✓ CI.Build: " Patchwork
2024-09-13 21:23 ` ✓ CI.Hooks: " Patchwork
2024-09-13 21:24 ` ✓ CI.checksparse: " Patchwork
2024-09-13 21:42 ` ✓ CI.BAT: " Patchwork
2024-09-14 19:10 ` ✗ CI.FULL: failure " Patchwork
2024-09-16 16:38 ` Rodrigo Vivi [this message]