From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: Junichi Uekawa <uekawa@chromium.org>,
	kvm@vger.kernel.org, netdev@vger.kernel.org,
	Bobby Eshleman <bobby.eshleman@gmail.com>,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Eric Dumazet <edumazet@google.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	davem@davemloft.net
Subject: Re: [PATCH] vhost/vsock: Use kvmalloc/kvfree for larger packets.
Date: Thu, 29 Sep 2022 03:47:08 -0400
Message-ID: <20220929034552-mutt-send-email-mst@kernel.org>
In-Reply-To: <20220929074010.37mksjmwr3l4wlwt@sgarzare-redhat>

On Thu, Sep 29, 2022 at 09:40:10AM +0200, Stefano Garzarella wrote:
> On Wed, Sep 28, 2022 at 04:02:12PM -0400, Michael S. Tsirkin wrote:
> > On Wed, Sep 28, 2022 at 05:11:35PM +0200, Stefano Garzarella wrote:
> > > On Wed, Sep 28, 2022 at 05:31:58AM -0400, Michael S. Tsirkin wrote:
> > > > On Wed, Sep 28, 2022 at 10:28:23AM +0200, Stefano Garzarella wrote:
> > > > > On Wed, Sep 28, 2022 at 03:45:38PM +0900, Junichi Uekawa wrote:
> > > > > > When copying a large file over sftp over vsock, the data size is usually 32kB,
> > > > > > and kmalloc can fail when it tries to allocate 32 contiguous 32kB regions.
> > > > > >
> > > > > > Call Trace:
> > > > > >  [<ffffffffb6a0df64>] dump_stack+0x97/0xdb
> > > > > >  [<ffffffffb68d6aed>] warn_alloc_failed+0x10f/0x138
> > > > > >  [<ffffffffb68d868a>] ? __alloc_pages_direct_compact+0x38/0xc8
> > > > > >  [<ffffffffb664619f>] __alloc_pages_nodemask+0x84c/0x90d
> > > > > >  [<ffffffffb6646e56>] alloc_kmem_pages+0x17/0x19
> > > > > >  [<ffffffffb6653a26>] kmalloc_order_trace+0x2b/0xdb
> > > > > >  [<ffffffffb66682f3>] __kmalloc+0x177/0x1f7
> > > > > >  [<ffffffffb66e0d94>] ? copy_from_iter+0x8d/0x31d
> > > > > >  [<ffffffffc0689ab7>] vhost_vsock_handle_tx_kick+0x1fa/0x301 [vhost_vsock]
> > > > > >  [<ffffffffc06828d9>] vhost_worker+0xf7/0x157 [vhost]
> > > > > >  [<ffffffffb683ddce>] kthread+0xfd/0x105
> > > > > >  [<ffffffffc06827e2>] ? vhost_dev_set_owner+0x22e/0x22e [vhost]
> > > > > >  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > > >  [<ffffffffb6eb332e>] ret_from_fork+0x4e/0x80
> > > > > >  [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > > >
> > > > > > Work around this by using kvmalloc instead.
> > > > > >
> > > > > > Signed-off-by: Junichi Uekawa <uekawa@chromium.org>
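
For context, the shape of the proposed change is roughly this (a sketch only,
not the exact hunks from the patch; the allocation presumably happens in
vhost_vsock_alloc_pkt(), per the trace above):

	pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);	/* was: kmalloc() */
	...
	/* and on the matching free path */
	kvfree(pkt->buf);				/* was: kfree() */

kvmalloc() tries kmalloc() first and falls back to vmalloc() when enough
contiguous pages are not available, which is what avoids the higher-order
allocation failure shown in the trace.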
> > > >
> > > > My worry here is that this is more of a workaround.
> > > > It would be better to not allocate memory so aggressively:
> > > > if we are so short on memory we should probably process
> > > > packets one at a time. Is that very hard to implement?
> > > 
> > > Currently the "virtio_vsock_pkt" is allocated in the "handle_kick" callback
> > > of the TX virtqueue. The packet is then multiplexed onto the right socket queue,
> > > and user space can dequeue it whenever it wants.
> > > 
> > > So maybe we can stop processing the virtqueue if we are short on memory, but
> > > when can we restart the TX virtqueue processing?
> > 
> > Assuming you added at least one buffer, the time to restart would be
> > after that buffer has been used.
> 
> Yes, but we might still not have enough contiguous pages to allocate, so I
> would use kvmalloc anyway.


you would do something like
	if (is_vmalloc_addr())
		stop adding buffers.
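
A rough sketch of that idea, assuming the check sits right after the packet
buffer is allocated in the TX kick handler (the restart_tx flag and the exact
placement are hypothetical, not taken from any posted patch):

	/* If kvmalloc() had to fall back to vmalloc(), we are short on
	 * contiguous memory: stop draining the TX virtqueue for now and
	 * resume only after this buffer has been used and freed.
	 */
	if (is_vmalloc_addr(pkt->buf)) {
		restart_tx = true;	/* hypothetical: schedule a re-kick later */
		break;
	}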



> I agree that we should do better. I hope that moving to sk_buff will allow
> us to manage allocation better. Maybe after we merge that part we should
> spend some time solving these problems.
> 
> Thanks,
> Stefano

