From: Juan Quintela <quintela@redhat.com>
To: "Wang, Wei W" <wei.w.wang@intel.com>
Cc: "Wang, Lei4" <lei4.wang@intel.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"peterx@redhat.com" <peterx@redhat.com>,
"leobras@redhat.com" <leobras@redhat.com>,
"Daniel Berrange" <berrange@redhat.com>
Subject: Re: [PATCH] multifd: Set a higher "backlog" default value for listen()
Date: Fri, 19 May 2023 13:22:20 +0200 [thread overview]
Message-ID: <87jzx4y39v.fsf@secure.mitica> (raw)
In-Reply-To: <DS0PR11MB637345417B81FF5637B2D7D8DC7C9@DS0PR11MB6373.namprd11.prod.outlook.com> (Wei W. Wang's message of "Fri, 19 May 2023 02:44:16 +0000")
"Wang, Wei W" <wei.w.wang@intel.com> wrote:
> On Friday, May 19, 2023 9:31 AM, Wang, Lei4 wrote:
>> On 5/18/2023 17:16, Juan Quintela wrote:
>> > Lei Wang <lei4.wang@intel.com> wrote:
>> >> When the destination VM is launched, the "backlog" parameter for
>> >> listen() is set to 1 by default in
>> >> socket_start_incoming_migration_internal(), which leads to socket
>> >> connection errors (the queue of pending connections is full) when
>> >> "multifd" and "multifd-channels" are set later on and a high number
>> >> of channels is used. Set it to a hard-coded higher default value of
>> >> 512 to fix this issue.
>> >>
>> >> Reported-by: Wei Wang <wei.w.wang@intel.com>
>> >> Signed-off-by: Lei Wang <lei4.wang@intel.com>
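
[Illustrative sketch, not the QEMU code: a minimal C listener that
passes backlog 1 to listen(), showing the pending-connection queue the
patch is about. The port is hypothetical and error checking is omitted
for brevity.]

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(4444);    /* hypothetical port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /*
     * backlog = 1: the kernel queues at most one established
     * connection awaiting accept().  A burst of multifd channel
     * connects can overflow this queue and fail; a backlog of at
     * least the channel count avoids that.
     */
    listen(fd, 1);

    /* accept() loop omitted */
    close(fd);
    return 0;
}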
>> >
>> > [cc'd Daniel, who is the maintainer of qio]
>> >
>> > My understanding of that value is that 230 or something like that
>> > would be more than enough. The maximum number of multifd channels
>> > is 256.
>>
>> You are right, the "multifd-channels" parameter expects uint8_t, so
>> 256 is enough.
>>
>
> We can change it to uint16_t or uint32_t, but we need to see whether
> listening with a larger backlog value is OK with everyone.
If we need more than 256 channels for migration, we are doing
something really weird. We can saturate a 100 Gigabit network
relatively easily with 10 channels. 256 channels would mean that we
have at least 2 Tbit/s of networking. I am not expecting that really
soon. And as soon as that happens I would expect CPUs to easily
handle more than 10 Gigabit/second.
> The man page of listen() mentions that the maximum length of the
> queue for incomplete connections can be set using
> /proc/sys/net/ipv4/tcp_max_syn_backlog, and it is 4096 by default on
> my machine.
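
[For reference, assuming a Linux host, the value quoted above can be
inspected and raised with sysctl; 8192 is only an example. Note that
the backlog argument of listen() is additionally capped by
net.core.somaxconn.]

$ cat /proc/sys/net/ipv4/tcp_max_syn_backlog
4096
$ sysctl -w net.ipv4.tcp_max_syn_backlog=8192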
I think that the current code is ok. We just need to enforce that we
use defer, so that multifd capabilities and parameters are set before
listen() is called. For example:
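
(The port and channel count below are just examples.)

qemu-system-x86_64 ... -incoming defer
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_parameter multifd-channels 16
(qemu) migrate_incoming tcp:0:4444

With defer, the destination already knows the channel count by the
time listen() happens.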
Later, Juan.