From: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>,
intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH v2 07/10] drm/xe: Support SOC NONFATAL error handling for PVC.
Date: Thu, 19 Oct 2023 13:56:14 +0530 [thread overview]
Message-ID: <856fc2f8-2e56-9ec6-5340-68a55b909a6e@linux.intel.com> (raw)
In-Reply-To: <20231018040033.1227494-8-himal.prasad.ghimiray@intel.com>
On 18/10/23 09:30, Himal Prasad Ghimiray wrote:
> Report SOC nonfatal hardware errors and update the counters that are
> incremented when an error occurs.
>
> v2
> - Use xe_assign_hw_err_regs to initialize registers.
> - Don't use the counters if the error is reported by second-level
> registers.
> - Fix number of IEHs to 2.
> - Follow the source_typeoferror_errorname convention for enums and error
> reporting. (Aravind)
>
> Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_hw_error.c | 70 +++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_hw_error.h | 39 ++++++++++++++++++
> 2 files changed, 108 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_hw_error.c b/drivers/gpu/drm/xe/xe_hw_error.c
> index 55f8613e8b6d..8b968b117c18 100644
> --- a/drivers/gpu/drm/xe/xe_hw_error.c
> +++ b/drivers/gpu/drm/xe/xe_hw_error.c
> @@ -258,6 +258,67 @@ static const struct err_name_index_pair pvc_soc_mstr_lcl_err_reg_fatal[] = {
> [14 ... 31] = {"Undefined", XE_HW_ERR_SOC_FATAL_UNKNOWN},
> };
>
> +static const struct err_name_index_pair pvc_soc_mstr_glbl_err_reg_nonfatal[] = {
> + [0] = {"MASTER LOCAL Reported", XE_HW_ERR_TILE_UNSPEC},
> + [1] = {"SLAVE GLOBAL Reported", XE_HW_ERR_TILE_UNSPEC},
> + [2] = {"HBM SS0: Channel0", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL0},
> + [3] = {"HBM SS0: Channel1", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL1},
> + [4] = {"HBM SS0: Channel2", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL2},
> + [5] = {"HBM SS0: Channel3", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL3},
> + [6] = {"HBM SS0: Channel4", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL4},
> + [7] = {"HBM SS0: Channel5", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL5},
> + [8] = {"HBM SS0: Channel6", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL6},
> + [9] = {"HBM SS0: Channel7", XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL7},
> + [10] = {"HBM SS1: Channel0", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL0},
> + [11] = {"HBM SS1: Channel1", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL1},
> + [12] = {"HBM SS1: Channel2", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL2},
> + [13] = {"HBM SS1: Channel3", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL3},
> + [14] = {"HBM SS1: Channel4", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL4},
> + [15] = {"HBM SS1: Channel5", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL5},
> + [16] = {"HBM SS1: Channel6", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL6},
> + [17] = {"HBM SS1: Channel7", XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL7},
> + [18 ... 31] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> +};
> +
> +static const struct err_name_index_pair pvc_soc_slave_glbl_err_reg_nonfatal[] = {
> + [0] = {"SLAVE LOCAL Reported", XE_HW_ERR_TILE_UNSPEC},
> + [1] = {"HBM SS2: Channel0", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL0},
> + [2] = {"HBM SS2: Channel1", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL1},
> + [3] = {"HBM SS2: Channel2", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL2},
> + [4] = {"HBM SS2: Channel3", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL3},
> + [5] = {"HBM SS2: Channel4", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL4},
> + [6] = {"HBM SS2: Channel5", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL5},
> + [7] = {"HBM SS2: Channel6", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL6},
> + [8] = {"HBM SS2: Channel7", XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL7},
> + [9] = {"HBM SS3: Channel0", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL0},
> + [10] = {"HBM SS3: Channel1", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL1},
> + [11] = {"HBM SS3: Channel2", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL2},
> + [12] = {"HBM SS3: Channel3", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL3},
> + [13] = {"HBM SS3: Channel4", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL4},
> + [14] = {"HBM SS3: Channel5", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL5},
> + [15] = {"HBM SS3: Channel6", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL6},
> + [16] = {"HBM SS3: Channel7", XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL7},
> + [17] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> + [18] = {"ANR MDFI", XE_HW_ERR_SOC_NONFATAL_ANR_MDFI},
> + [19 ... 31] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> +};
> +
> +static const struct err_name_index_pair pvc_soc_slave_lcl_err_reg_nonfatal[] = {
> + [0 ... 31] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> +};
> +
> +static const struct err_name_index_pair pvc_soc_mstr_lcl_err_reg_nonfatal[] = {
> + [0 ... 3] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> + [4] = {"Base Die MDFI T2T", XE_HW_ERR_SOC_NONFATAL_MDFI_T2T},
> + [5] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> + [6] = {"Base Die MDFI T2C", XE_HW_ERR_SOC_NONFATAL_MDFI_T2C},
> + [7] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> + [8] = {"Invalid CSC PSF Command Parity", XE_HW_ERR_SOC_NONFATAL_CSC_PSF_CMD},
> + [9] = {"Invalid CSC PSF Unexpected Completion", XE_HW_ERR_SOC_NONFATAL_CSC_PSF_CMP},
> + [10] = {"Invalid CSC PSF Unsupported Request", XE_HW_ERR_SOC_NONFATAL_CSC_PSF_REQ},
> + [11 ... 31] = {"Undefined", XE_HW_ERR_SOC_NONFATAL_UNKNOWN},
> +};
> +
> void xe_assign_hw_err_regs(struct xe_device *xe)
> {
> const struct err_name_index_pair **dev_err_stat = xe->hw_err_regs.dev_err_stat;
> @@ -295,6 +356,10 @@ void xe_assign_hw_err_regs(struct xe_device *xe)
> soc_mstr_lcl[HARDWARE_ERROR_FATAL] = pvc_soc_mstr_lcl_err_reg_fatal;
> soc_slave_glbl[HARDWARE_ERROR_FATAL] = pvc_soc_slave_glbl_err_reg_fatal;
> soc_slave_lcl[HARDWARE_ERROR_FATAL] = pvc_soc_slave_lcl_err_reg_fatal;
> + soc_mstr_glbl[HARDWARE_ERROR_NONFATAL] = pvc_soc_mstr_glbl_err_reg_nonfatal;
> + soc_mstr_lcl[HARDWARE_ERROR_NONFATAL] = pvc_soc_mstr_lcl_err_reg_nonfatal;
> + soc_slave_glbl[HARDWARE_ERROR_NONFATAL] = pvc_soc_slave_glbl_err_reg_nonfatal;
> + soc_slave_lcl[HARDWARE_ERROR_NONFATAL] = pvc_soc_slave_lcl_err_reg_nonfatal;
> }
>
> }
> @@ -578,7 +643,10 @@ xe_soc_hw_error_handler(struct xe_tile *tile, const enum hardware_error hw_err)
>
> lockdep_assert_held(&tile_to_xe(tile)->irq.lock);
>
> - if ((tile_to_xe(tile)->info.platform != XE_PVC) || hw_err != HARDWARE_ERROR_FATAL)
> + if (tile_to_xe(tile)->info.platform != XE_PVC)
> + return;
> +
> + if (hw_err == HARDWARE_ERROR_CORRECTABLE)
nit: this check could be combined with the platform check above.
> return;
>
> base = SOC_PVC_BASE;
> diff --git a/drivers/gpu/drm/xe/xe_hw_error.h b/drivers/gpu/drm/xe/xe_hw_error.h
> index 700474aed171..59b331f52783 100644
> --- a/drivers/gpu/drm/xe/xe_hw_error.h
> +++ b/drivers/gpu/drm/xe/xe_hw_error.h
> @@ -112,6 +112,45 @@ enum xe_soc_hw_errors {
> XE_HW_ERR_SOC_FATAL_PCIE_PSF_CMD,
> XE_HW_ERR_SOC_FATAL_PCIE_PSF_CMP,
> XE_HW_ERR_SOC_FATAL_PCIE_PSF_REQ,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL0,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL1,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL2,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL3,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL4,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL5,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL6,
> + XE_HW_ERR_SOC_NONFATAL_HBM0_CHNL7,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL0,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL1,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL2,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL3,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL4,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL5,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL6,
> + XE_HW_ERR_SOC_NONFATAL_HBM1_CHNL7,
> + XE_HW_ERR_SOC_NONFATAL_UNKNOWN,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL0,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL1,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL2,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL3,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL4,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL5,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL6,
> + XE_HW_ERR_SOC_NONFATAL_HBM2_CHNL7,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL0,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL1,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL2,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL3,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL4,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL5,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL6,
> + XE_HW_ERR_SOC_NONFATAL_HBM3_CHNL7,
> + XE_HW_ERR_SOC_NONFATAL_ANR_MDFI,
> + XE_HW_ERR_SOC_NONFATAL_MDFI_T2T,
> + XE_HW_ERR_SOC_NONFATAL_MDFI_T2C,
> + XE_HW_ERR_SOC_NONFATAL_CSC_PSF_CMD,
> + XE_HW_ERR_SOC_NONFATAL_CSC_PSF_CMP,
> + XE_HW_ERR_SOC_NONFATAL_CSC_PSF_REQ,
> XE_TILE_HW_ERROR_MAX,
> };
Same comment as on the earlier patches: these could be moved into a common enum.
>
Reviewed-by: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
Thanks,
Aravind.