From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: gdawar@amd.com, elic@nvidia.com,
virtualization@lists.linux-foundation.org, tanuj.kamde@amd.com,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 5/5] vdpa: mlx5: support per virtqueue dma device
Date: Fri, 3 Feb 2023 04:33:21 -0500 [thread overview]
Message-ID: <20230203043307-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20230119061525.75068-6-jasowang@redhat.com>
On Thu, Jan 19, 2023 at 02:15:25PM +0800, Jason Wang wrote:
> This patch implements a per-virtqueue DMA device for mlx5_vdpa. This
> is needed for virtio_vdpa to work with the CVQ, which is backed by
> vringh rather than by DMA. We advertise the vDPA device itself as the
> DMA device for the CVQ, so the DMA API simply uses PAs and the
> identity (1:1) mapping for the CVQ keeps working. Otherwise the
> identity mapping breaks when the platform IOMMU is enabled, since the
> IOVA is allocated on demand and is not necessarily equal to the PA.
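To make the distinction above concrete, here is a minimal sketch (not
taken from the patch; map_cvq_buf is a made-up helper) of what the CVQ
buffer mapping boils down to depending on which struct device is used
for DMA:

#include <linux/dma-mapping.h>

/* Hypothetical helper, for illustration only. */
static dma_addr_t map_cvq_buf(struct device *dma_dev, void *buf, size_t len)
{
	/*
	 * If dma_dev is the parent mlx5 PCI device behind a translating
	 * IOMMU, this returns an on-demand IOVA that generally differs
	 * from virt_to_phys(buf).
	 *
	 * If dma_dev is the vDPA device itself (no IOMMU attached, so
	 * direct mapping), the returned handle equals the PA, and the
	 * 1:1 iotlb that vringh uses for the CVQ stays valid.
	 */
	return dma_map_single(dma_dev, buf, len, DMA_BIDIRECTIONAL);
}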
>
> This fixes the following crash when the mlx5 vDPA device is bound to
> virtio-vdpa with the platform IOMMU enabled but not in passthrough
> mode:
>
> BUG: unable to handle page fault for address: ff2fb3063deb1002
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 1393001067 P4D 1393002067 PUD 0
> Oops: 0000 [#1] PREEMPT SMP NOPTI
> CPU: 55 PID: 8923 Comm: kworker/u112:3 Kdump: loaded Not tainted 6.1.0+ #7
> Hardware name: Dell Inc. PowerEdge R750/0PJ80M, BIOS 1.5.4 12/17/2021
> Workqueue: mlx5_vdpa_wq mlx5_cvq_kick_handler [mlx5_vdpa]
> RIP: 0010:vringh_getdesc_iotlb+0x93/0x1d0 [vringh]
> Code: 14 25 40 ef 01 00 83 82 c0 0a 00 00 01 48 2b 05 93 5a 1b ea 8b 4c 24 14 48 c1 f8 06 48 c1 e0 0c 48 03 05 90 5a 1b ea 48 01 c8 <0f> b7 00 83 aa c0 0a 00 00 01 65 ff 0d bc e4 41 3f 0f 84 05 01 00
> RSP: 0018:ff46821ba664fdf8 EFLAGS: 00010282
> RAX: ff2fb3063deb1002 RBX: 0000000000000a20 RCX: 0000000000000002
> RDX: ff2fb318d2f94380 RSI: 0000000000000002 RDI: 0000000000000001
> RBP: ff2fb3065e832410 R08: ff46821ba664fe00 R09: 0000000000000001
> R10: 0000000000000000 R11: 000000000000000d R12: ff2fb3065e832488
> R13: ff2fb3065e8324a8 R14: ff2fb3065e8324c8 R15: ff2fb3065e8324a8
> FS: 0000000000000000(0000) GS:ff2fb3257fac0000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: ff2fb3063deb1002 CR3: 0000001392010006 CR4: 0000000000771ee0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
> <TASK>
> mlx5_cvq_kick_handler+0x89/0x2b0 [mlx5_vdpa]
> process_one_work+0x1e2/0x3b0
> ? rescuer_thread+0x390/0x390
> worker_thread+0x50/0x3a0
> ? rescuer_thread+0x390/0x390
> kthread+0xd6/0x100
> ? kthread_complete_and_exit+0x20/0x20
> ret_from_fork+0x1f/0x30
> </TASK>
>
> Reviewed-by: Eli Cohen <elic@nvidia.com>
> Tested-by: Eli Cohen <elic@nvidia.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Jason, how about a Fixes tag here?
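(For reference, the tag goes above the Signed-off-by line and follows
the usual form

Fixes: <first 12+ chars of the sha1> ("subject of the offending commit")

with the specific commit left for Jason to identify.)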
> ---
> Changes since V1:
> - make mlx5_get_vq_dma_dev() static
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 6632651b1e54..97d1ada7f4db 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2682,6 +2682,16 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
> return err;
> }
>
> +static struct device *mlx5_get_vq_dma_dev(struct vdpa_device *vdev, u16 idx)
> +{
> + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> +
> + if (is_ctrl_vq_idx(mvdev, idx))
> + return &vdev->dev;
> +
> + return mvdev->vdev.dma_dev;
> +}
> +
> static void mlx5_vdpa_free(struct vdpa_device *vdev)
> {
> struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> @@ -2897,6 +2907,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
> .get_generation = mlx5_vdpa_get_generation,
> .set_map = mlx5_vdpa_set_map,
> .set_group_asid = mlx5_set_group_asid,
> + .get_vq_dma_dev = mlx5_get_vq_dma_dev,
> .free = mlx5_vdpa_free,
> .suspend = mlx5_vdpa_suspend,
> };
> --
> 2.25.1
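The new op is only useful once a bus driver asks for it. Below is a
rough sketch of how a consumer such as virtio_vdpa could pick the DMA
device per virtqueue; the helper name vq_dma_dev is illustrative and
patches 1-3/5 of this series carry the real plumbing, while
vdpa_get_dma_dev() is the existing device-wide accessor:

#include <linux/vdpa.h>

/* Illustrative only; not part of the posted patch. */
static struct device *vq_dma_dev(struct vdpa_device *vdpa, u16 idx)
{
	const struct vdpa_config_ops *ops = vdpa->config;

	/* Prefer the per-virtqueue DMA device when the parent provides one. */
	if (ops->get_vq_dma_dev)
		return ops->get_vq_dma_dev(vdpa, idx);

	/* Otherwise fall back to the device-wide DMA device. */
	return vdpa_get_dma_dev(vdpa);
}

With the mlx5 implementation above, such a caller would get &vdev->dev
for the control virtqueue (so the DMA API degenerates to PAs for the
vringh-backed ring) and the usual parent DMA device for the data
virtqueues.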
Thread overview:
2023-01-19 6:15 [PATCH V2 0/5] virtio_ring: per virtqueue DMA device Jason Wang
2023-01-19 6:15 ` [PATCH V2 1/5] virtio_ring: per virtqueue dma device Jason Wang
2023-01-19 6:15 ` [PATCH V2 2/5] vdpa: introduce get_vq_dma_device() Jason Wang
2023-01-19 6:15 ` [PATCH V2 3/5] virtio-vdpa: support per vq dma device Jason Wang
2023-01-19 6:15 ` [PATCH V2 4/5] vdpa: set dma mask for vDPA device Jason Wang
2023-01-19 6:15 ` [PATCH V2 5/5] vdpa: mlx5: support per virtqueue dma device Jason Wang
2023-02-03 9:33 ` Michael S. Tsirkin [this message]