linux-arm-kernel.lists.infradead.org archive mirror
From: Mathieu Poirier <mathieu.poirier@linaro.org>
To: Leonard Crestez <cdleonard@gmail.com>
Cc: Tanmay Shah <tanmay.shah@amd.com>,
	Bjorn Andersson <andersson@kernel.org>,
	Michal Simek <michal.simek@amd.com>,
	Sarangdhar Joshi <spjoshi@codeaurora.org>,
	Siddharth Gupta <sidgup@codeaurora.org>,
	Rishabh Bhatnagar <rishabhb@codeaurora.org>,
	linux-remoteproc@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] remoteproc: zynqmp: Add coredump support
Date: Mon, 18 Mar 2024 10:52:32 -0600	[thread overview]
Message-ID: <ZfhxUJjrcYfqt9Nd@p14s> (raw)
In-Reply-To: <d4556268-8274-4089-949f-3b97d67793c7@gmail.com>

Hi Leonard,

I have queued patches for this driver that will break this patch.  Please
re-submit when v6.9-rc1 is out and rproc-next has been updated, which should be
around the middle of next week.

Thanks,
Mathieu

On Sat, Mar 16, 2024 at 08:16:42PM +0200, Leonard Crestez wrote:
> Supporting remoteproc coredump requires the platform-specific driver to
> register coredump segments to be dumped. Do this by calling
> rproc_coredump_add_segment for every carveout.
> 
> Also call rproc_coredump_set_elf_info when the rproc is created. If the
> ELFCLASS parameter is not provided then coredump fails with an error.
> Other drivers seem to pass EM_NONE for the machine argument, but for me
> this shows a warning in gdb. Pass EM_ARM because this is an ARM R5.
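[Editor's note, not part of the patch: the dump remoteproc emits is a standard ELF file, so the class and machine set here are exactly what debuggers key on. A minimal sketch of how a tool reads the e_machine field, using the constants from the ELF specification (EM_NONE = 0, EM_ARM = 40):]

```python
import struct

# ELF constants from the spec; EM_NONE is what some drivers pass,
# EM_ARM (40) is what this patch passes for the Cortex-R5.
ELFCLASS32, ELFDATA2LSB, EV_CURRENT = 1, 1, 1
ET_CORE, EM_NONE, EM_ARM = 4, 0, 40

def elf_machine(header: bytes) -> int:
    """Return e_machine; it sits at offset 18 in both ELF32 and ELF64."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    return struct.unpack_from("<H", header, 18)[0]

def fake_core_header(machine: int) -> bytes:
    """Build just enough of an ELF32 core file header for illustration."""
    e_ident = b"\x7fELF" + bytes([ELFCLASS32, ELFDATA2LSB, EV_CURRENT, 0]) + b"\x00" * 8
    return e_ident + struct.pack("<HHI", ET_CORE, machine, EV_CURRENT)

print(elf_machine(fake_core_header(EM_ARM)))  # 40 -> gdb identifies ARM
```

[With EM_NONE the same field reads back 0, which is what triggers the gdb warning mentioned above.]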
> 
> Signed-off-by: Leonard Crestez <cdleonard@gmail.com>
> ---
> 
> Tests were done by triggering a deliberate crash using remoteproc
> debugfs: echo 2 > /sys/kernel/debug/remoteproc/remoteproc0/crash
> 
> This was tested using RPU apps which use RAM for everything so TCM dump
> was not verified. The freertos-gdb script package showed credible data:
> 
> https://github.com/espressif/freertos-gdb
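[Editor's note, not from the patch: after the crash above and the subsequent recovery, the coredump surfaces through the kernel's generic devcoredump interface. A minimal sketch of fetching it from sysfs, assuming the standard devcoredump node layout and that a dump is actually pending on your board:]

```python
import glob
import shutil

def save_rproc_coredump(out_path="rpu-core.elf"):
    """Copy the first pending devcoredump blob, if any, to out_path."""
    nodes = sorted(glob.glob("/sys/class/devcoredump/devcd*/data"))
    if not nodes:
        # No dump pending: either no crash happened, or it already timed out.
        print("no devcoredump pending (did the rproc crash and recover?)")
        return None
    shutil.copyfile(nodes[0], out_path)
    return out_path

save_rproc_coredump()
```

[The resulting file is the ELF core described above and can be loaded into gdb alongside the RPU firmware ELF.]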
> 
> The R5 cache is not flushed, so RAM might be out of date. This is
> actually very bad, because the information most relevant to determining
> the cause of a crash is lost. A possible workaround would be to flush
> caches in some sort of R5 crash handler; I don't think Linux can do
> anything about this limitation.
> 
> The generated coredump doesn't contain registers; this seems to be a
> limitation shared with other rproc coredumps. It's not clear how the APU
> could access RPU registers on ZynqMP. My only idea would be to use the
> CoreSight DAP, but that sounds difficult.
> 
> ---
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index 4395edea9a64..cfbd97b89c26 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -484,10 +484,11 @@ static int add_mem_regions_carveout(struct rproc *rproc)
>  			of_node_put(it.node);
>  			return -ENOMEM;
>  		}
>  
>  		rproc_add_carveout(rproc, rproc_mem);
> +		rproc_coredump_add_segment(rproc, rmem->base, rmem->size);
>  
>  		dev_dbg(&rproc->dev, "reserved mem carveout %s addr=%llx, size=0x%llx",
>  			it.node->name, rmem->base, rmem->size);
>  		i++;
>  	}
> @@ -595,10 +596,11 @@ static int add_tcm_carveout_split_mode(struct rproc *rproc)
>  			zynqmp_pm_release_node(pm_domain_id);
>  			goto release_tcm_split;
>  		}
>  
>  		rproc_add_carveout(rproc, rproc_mem);
> +		rproc_coredump_add_segment(rproc, da, bank_size);
>  	}
>  
>  	return 0;
>  
>  release_tcm_split:
> @@ -674,10 +676,11 @@ static int add_tcm_carveout_lockstep_mode(struct rproc *rproc)
>  			goto release_tcm_lockstep;
>  		}
>  
>  		/* If registration is success, add carveouts */
>  		rproc_add_carveout(rproc, rproc_mem);
> +		rproc_coredump_add_segment(rproc, da, bank_size);
>  
>  		dev_dbg(dev, "TCM carveout lockstep mode %s addr=0x%llx, da=0x%x, size=0x%lx",
>  			bank_name, bank_addr, da, bank_size);
>  	}
>  
> @@ -851,10 +854,12 @@ static struct zynqmp_r5_core *zynqmp_r5_add_rproc_core(struct device *cdev)
>  	if (!r5_rproc) {
>  		dev_err(cdev, "failed to allocate memory for rproc instance\n");
>  		return ERR_PTR(-ENOMEM);
>  	}
>  
> +	rproc_coredump_set_elf_info(r5_rproc, ELFCLASS32, EM_ARM);
> +
>  	r5_rproc->auto_boot = false;
>  	r5_core = r5_rproc->priv;
>  	r5_core->dev = cdev;
>  	r5_core->np = dev_of_node(cdev);
>  	if (!r5_core->np) {
> -- 
> 2.34.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


Thread overview: 7+ messages
2024-03-16 18:16 [PATCH] remoteproc: zynqmp: Add coredump support Leonard Crestez
2024-03-18 16:52 ` Mathieu Poirier [this message]
2024-03-28  8:17   ` Leonard Crestez
2024-03-28 14:53     ` Mathieu Poirier
2024-04-04 20:14 ` Tanmay Shah
2024-04-06 18:28   ` Leonard Crestez
2024-04-08 16:57   ` Mathieu Poirier
