From: Leon Romanovsky <leon@kernel.org>
To: Max Gurtovoy <mgurtovoy@nvidia.com>
Cc: axboe@kernel.dk, linux-block@vger.kernel.org,
kvm@vger.kernel.org, mst@redhat.com, israelr@nvidia.com,
virtualization@lists.linux-foundation.org, hch@infradead.org,
nitzanc@nvidia.com, stefanha@redhat.com, oren@nvidia.com
Subject: Re: [PATCH 2/2] virtio-blk: set NUMA affinity for a tagset
Date: Mon, 27 Sep 2021 14:34:10 +0300
Message-ID: <YVGsMsIjD2+aS3eC@unreal>
In-Reply-To: <20210926145518.64164-2-mgurtovoy@nvidia.com>
On Sun, Sep 26, 2021 at 05:55:18PM +0300, Max Gurtovoy wrote:
> To optimize performance, set the affinity of the block device tagset
> according to the virtio device affinity.
>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
> drivers/block/virtio_blk.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 9b3bd083b411..1c68c3e0ebf9 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -774,7 +774,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
> vblk->tag_set.ops = &virtio_mq_ops;
> vblk->tag_set.queue_depth = queue_depth;
> - vblk->tag_set.numa_node = NUMA_NO_NODE;
> + vblk->tag_set.numa_node = virtio_dev_to_node(vdev);
I'm afraid that by doing this you will increase the chances of seeing
an OOM: with NUMA_NO_NODE the MM will try to allocate memory from the
whole system, while in the latter mode it will allocate only from the
specific NUMA node, which can be depleted.
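
To illustrate the concern, here is a rough sketch (not the exact
blk-mq code) of how the tag set's numa_node ends up steering
allocations through node-aware allocators such as kzalloc_node():

    #include <linux/blk-mq.h>
    #include <linux/slab.h>

    /*
     * Sketch only: blk-mq hands tag_set->numa_node to node-aware
     * allocations. NUMA_NO_NODE lets the allocator take memory from
     * any node; a specific node means the allocation targets that
     * node's memory, which can be depleted on its own.
     */
    static void *alloc_on_tagset_node(struct blk_mq_tag_set *set,
                                      size_t size)
    {
            return kzalloc_node(size, GFP_KERNEL, set->numa_node);
    }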
Thanks
> vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
> vblk->tag_set.cmd_size =
> sizeof(struct virtblk_req) +
> --
> 2.18.1
>
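For reference, the virtio_dev_to_node() helper introduced in patch
1/2 presumably resolves the virtio device to the NUMA node of its
parent (e.g. the underlying PCI device). A minimal sketch of what
such a helper might look like, assuming it simply wraps
dev_to_node():

    #include <linux/device.h>
    #include <linux/numa.h>
    #include <linux/virtio.h>

    /*
     * Sketch: map a virtio device to its parent's NUMA node,
     * falling back to NUMA_NO_NODE when no parent is present.
     */
    static inline int virtio_dev_to_node(struct virtio_device *vdev)
    {
            struct device *parent = vdev->dev.parent;

            if (!parent)
                    return NUMA_NO_NODE;

            return dev_to_node(parent);
    }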