From: Mukul Sinha <mukul.sinha@broadcom.com>
Date: Fri, 6 Dec 2024 04:28:22 +0530
Subject: Re: GCP cloud : Virtio-PMD performance Issue
To: Maxime Coquelin <maxime.coquelin@redhat.com>, dev@dpdk.org
Cc: chenbox@nvidia.com, jeroendb@google.com, rushilg@google.com, joshwash@google.com,
 Srinivasa Srikanth Podila, Tathagat Priyadarshi, Samar Yadav, Varun LA

GCP-dev team @jeroendb@google.com @rushilg@google.com @joshwash@google.com
Please do check on this & get back.

On Fri, Dec 6, 2024 at 4:24 AM Mukul Sinha <mukul.sinha@broadcom.com> wrote:

> Thanks @maxime.coquelin@redhat.com
> Have included dev@dpdk.org
>
>
> On Fri, Dec 6, 2024 at 2:11 AM Maxime Coquelin <maxime.coquelin@redhat.com>
> wrote:
>
>> Hi Mukul,
>>
>> DPDK upstream mailing lists should be added to this e-mail.
>> I am not allowed to provide off-list support; all discussions should
>> happen upstream.
>>
>> If this is reproduced with the downstream DPDK provided with RHEL and you
>> have a RHEL subscription, please use the Red Hat issue tracker.
>>
>> Thanks for your understanding,
>> Maxime
>>
>> On 12/5/24 21:36, Mukul Sinha wrote:
>> > + Varun
>> >
>> > On Fri, Dec 6, 2024 at 2:04 AM Mukul Sinha <mukul.sinha@broadcom.com>
>> > wrote:
>> >
>> >     Hi GCP & Virtio-PMD dev teams,
>> >     We are from the VMware NSX Advanced Load Balancer team. In GCP cloud
>> >     (custom-8-8192 VM instance type, 8 cores / 8 GB) we are triaging a
>> >     TCP-profile application throughput issue: with a single dispatcher
>> >     core and a single Rx/Tx queue (queue depth 2048), the throughput we
>> >     get with the dpdk-22.11 virtio-PMD is significantly degraded compared
>> >     to the dpdk-20.05 PMD.
>> >     We see the Tx packet drop counter incrementing rapidly on the virtio
>> >     NIC, pointing to the GCP hypervisor side being unable to drain the
>> >     packets fast enough (no drops are seen on the Rx side).
>> >     The behavior is as follows:
>> >     Using dpdk-22.11:
>> >     At only 75% CPU usage we already see a huge number of Tx packet drops
>> >     reported (no Rx drops), causing TCP retransmissions and eventually
>> >     bringing down the effective throughput numbers.
>> >     Using dpdk-20.05:
>> >     Even at ~95% CPU usage, without any packet drops (neither Rx nor Tx),
>> >     we are able to get much better throughput.
>> >
>> >     To improve the dpdk-22.11 numbers we tried increasing the queue depth
>> >     to 4096, but that didn't help.
>> >     If, with dpdk-22.11, we move from a single core with Rx/Tx queue=1 to
>> >     a single core with Rx/Tx queue=2, we get slightly better numbers (but
>> >     still not matching dpdk-20.05 with a single core and Rx/Tx queue=1).
>> >     This again corroborates that the GCP hypervisor is the bottleneck
>> >     here.
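>> >
>> >     (For reference, a minimal sketch of how the port-level Tx counters
>> >     can be polled alongside our own application counters is shown below.
>> >     Port id 0 and the 1-second poll interval are illustrative
>> >     assumptions, not our production code.)
>> >
>> >     #include <inttypes.h>
>> >     #include <stdio.h>
>> >     #include <unistd.h>
>> >     #include <rte_ethdev.h>
>> >
>> >     /* Poll basic port statistics once a second and report how many
>> >      * Tx errors/drops (oerrors) the port accumulated in that second. */
>> >     static void poll_tx_drops(uint16_t port_id)
>> >     {
>> >         struct rte_eth_stats prev = {0}, cur;
>> >
>> >         for (;;) {
>> >             if (rte_eth_stats_get(port_id, &cur) != 0)
>> >                 break;
>> >             printf("port %u: opackets=%" PRIu64 " oerrors +%" PRIu64 "\n",
>> >                    (unsigned)port_id, cur.opackets,
>> >                    cur.oerrors - prev.oerrors);
>> >             prev = cur;
>> >             sleep(1);
>> >         }
>> >     }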
>> >
>> >     To root-cause this issue we were able to replicate the behavior
>> >     using native DPDK testpmd as shown below (commands used):
>> >     Hugepage size: 2 MB
>> >     ./app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1 --txd=2048
>> >     --rxd=2048 --rxq=1 --txq=1 --portmask=0x3
>> >     set fwd mac
>> >     set fwd flowgen
>> >     set txpkts 1518
>> >     start
>> >     stop
>> >
>> >     Testpmd traffic run (for packet size 1518) for the exact same
>> >     time interval of 15 seconds:
>> >
>> >     22.11
>> >     ---------------------- Forward statistics for port 0 ----------------------
>> >     RX-packets: 2             RX-dropped: 0             RX-total: 2
>> >     TX-packets: 19497570      TX-dropped: 364674686     TX-total: 384172256
>> >     ----------------------------------------------------------------------------
>> >
>> >     20.05
>> >     ---------------------- Forward statistics for port 0 ----------------------
>> >     RX-packets: 3             RX-dropped: 0             RX-total: 3
>> >     TX-packets: 19480319      TX-dropped: 0             TX-total: 19480319
>> >     ----------------------------------------------------------------------------
>> >
>> >     As you can see:
>> >     dpdk-22.11: packets generated: ~384 million; packets serviced: ~19.5
>> >     million; Tx-dropped: ~364 million.
>> >     dpdk-20.05: packets generated: ~19.5 million; packets serviced: ~19.5
>> >     million; Tx-dropped: 0.
>> >
>> >     The actual serviced traffic remains almost the same between the two
>> >     versions (implying the underlying GCP hypervisor is only capable of
>> >     handling that much), but with dpdk-22.11 the PMD is pushing almost
>> >     20x the traffic compared to dpdk-20.05.
>> >     The same pattern can be seen even if we run traffic for a longer
>> >     duration.
>> >     ==========================================================================
>> >
>> >     Following are our queries:
>> >
>> >     @ Virtio-dev team
>> >     1. Why, with dpdk-22.11 and the virtio PMD, is the testpmd
>> >     application able to pump 20 times the Tx traffic towards the
>> >     hypervisor compared to dpdk-20.05? What has changed, either in the
>> >     virtio-PMD or in the communication between the virtio-PMD and the
>> >     underlying hypervisor, to cause this behavior?
>> >     The actual traffic serviced by the hypervisor remains almost on par
>> >     with dpdk-20.05, but it is the enormous drop count that can be
>> >     detrimental to any DPDK application running a TCP traffic profile.
>> >     Is there a way to slow down the number of packets sent towards the
>> >     hypervisor (through either a code change in the virtio-PMD or a
>> >     config setting) and make it on par with dpdk-20.05 performance?
>> >     (A sketch of the kind of application-side pacing we mean follows
>> >     after query 2.)
>> >     2. In the published Virtio performance report for release 22.11 we
>> >     see no qualification of throughput numbers on GCP cloud. Do you have
>> >     any internal performance benchmark numbers for GCP cloud, and if
>> >     yes, could you please share them so we can check whether there are
>> >     any configs/knobs/settings you used to get optimum performance?
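>> >
>> >     (As referenced in query 1, a minimal illustrative sketch, not our
>> >     production code, of application-side pacing: hold the unsent tail
>> >     and retry instead of dropping when rte_eth_tx_burst() accepts fewer
>> >     packets than offered. The retry budget and delay are assumptions.)
>> >
>> >     #include <rte_cycles.h>
>> >     #include <rte_ethdev.h>
>> >     #include <rte_mbuf.h>
>> >
>> >     /* Send a burst, retrying the unsent tail with a short pause instead
>> >      * of dropping it immediately. Returns how many packets were sent. */
>> >     static uint16_t
>> >     tx_burst_with_backoff(uint16_t port, uint16_t queue,
>> >                           struct rte_mbuf **pkts, uint16_t nb)
>> >     {
>> >         uint16_t sent = 0;
>> >         int retries = 8;                  /* assumed retry budget */
>> >
>> >         while (sent < nb && retries-- > 0) {
>> >             sent += rte_eth_tx_burst(port, queue, pkts + sent, nb - sent);
>> >             if (sent < nb)
>> >                 rte_delay_us_block(5);    /* brief pause before retrying */
>> >         }
>> >         if (sent < nb)                    /* anything left is dropped */
>> >             rte_pktmbuf_free_bulk(pkts + sent, nb - sent);
>> >         return sent;
>> >     }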
>> >
>> >     @ GCP-cloud dev team
>> >     As we can see, any amount of traffic greater than what can be
>> >     successfully serviced by the GCP hypervisor is getting dropped, hence
>> >     we need help from your side to reproduce this issue in your in-house
>> >     setup, preferably using the same VM instance type as highlighted
>> >     before.
>> >     We need further investigation from your side at the GCP host level
>> >     to check parameters such as running out of Tx buffers, queue-full
>> >     conditions on the virtio NIC, or the number of NIC Rx/Tx kernel
>> >     threads, to understand what is causing the hypervisor not to keep up
>> >     with the traffic load pumped in by dpdk-22.11.
>> >     Based on your debugging we would additionally need inputs on what
>> >     can be tweaked, or which knobs/settings can be configured at the
>> >     GCP-VM level, to get better performance numbers.
>> >
>> >     Please feel free to reach out to us for any further queries.
>> >
>> >     Additional outputs for debugging:
>> >     lspci | grep Eth
>> >     00:06.0 Ethernet controller: Red Hat, Inc. Virtio network device
>> >     root@dcg15-se-ecmyw:/home/admin/dpdk/build# ethtool -i eth0
>> >     driver: virtio_net
>> >     version: 1.0.0
>> >     firmware-version:
>> >     expansion-rom-version:
>> >     bus-info: 0000:00:06.0
>> >     supports-statistics: yes
>> >     supports-test: no
>> >     supports-eeprom-access: no
>> >     supports-register-dump: no
>> >     supports-priv-flags: no
>> >
>> >     testpmd> show port info all
>> >     ********************* Infos for port 0 *********************
>> >     MAC address: 42:01:0A:98:A0:0F
>> >     Device name: 0000:00:06.0
>> >     Driver name: net_virtio
>> >     Firmware-version: not available
>> >     Connect to socket: 0
>> >     memory allocation on the socket: 0
>> >     Link status: up
>> >     Link speed: Unknown
>> >     Link duplex: full-duplex
>> >     Autoneg status: On
>> >     MTU: 1500
>> >     Promiscuous mode: disabled
>> >     Allmulticast mode: disabled
>> >     Maximum number of MAC addresses: 64
>> >     Maximum number of MAC addresses of hash filtering: 0
>> >     VLAN offload:
>> >       strip off, filter off, extend off, qinq strip off
>> >     No RSS offload flow type is supported.
>> >     Minimum size of RX buffer: 64
>> >     Maximum configurable length of RX packet: 9728
>> >     Maximum configurable size of LRO aggregated packet: 0
>> >     Current number of RX queues: 1
>> >     Max possible RX queues: 2
>> >     Max possible number of RXDs per queue: 32768
>> >     Min possible number of RXDs per queue: 32
>> >     RXDs number alignment: 1
>> >     Current number of TX queues: 1
>> >     Max possible TX queues: 2
>> >     Max possible number of TXDs per queue: 32768
>> >     Min possible number of TXDs per queue: 32
>> >     TXDs number alignment: 1
>> >     Max segment number per packet: 65535
>> >     Max segment number per MTU/TSO: 65535
>> >     Device capabilities: 0x0( )
>> >     Device error handling mode: none
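>> >
>> >     (If it helps the host-side investigation, below is a small sketch of
>> >     how we can additionally dump the driver's extended statistics; port 0
>> >     is assumed and the available xstat names depend on the PMD. The same
>> >     data is available in testpmd via "show port xstats all".)
>> >
>> >     #include <inttypes.h>
>> >     #include <stdio.h>
>> >     #include <rte_ethdev.h>
>> >
>> >     /* Print every extended statistic exposed by the port's driver. */
>> >     static void dump_xstats(uint16_t port_id)
>> >     {
>> >         int n = rte_eth_xstats_get_names(port_id, NULL, 0);
>> >         if (n <= 0)
>> >             return;
>> >
>> >         struct rte_eth_xstat_name names[n];
>> >         struct rte_eth_xstat xstats[n];
>> >
>> >         if (rte_eth_xstats_get_names(port_id, names, n) != n ||
>> >             rte_eth_xstats_get(port_id, xstats, n) != n)
>> >             return;
>> >
>> >         for (int i = 0; i < n; i++)
>> >             printf("%s: %" PRIu64 "\n",
>> >                    names[xstats[i].id].name, xstats[i].value);
>> >     }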