From: Alexandre DERUMIER <aderumier@odiso.com>
To: "Stefan Priebe, Profihost AG" <s.priebe@profihost.ag>
Cc: qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] dropped pkts with Qemu on tap interface (RX)
Date: Wed, 3 Jan 2018 09:14:10 +0100 (CET) [thread overview]
Message-ID: <826605310.756359.1514967250373.JavaMail.zimbra@oxygem.tv> (raw)
In-Reply-To: <77f0c119-236d-9d4e-0a99-3519ecc26b23@profihost.ag>
Hi Stefan,
>>The tap devices on the target VM show dropped RX packets on BOTH tap
>>interfaces - strangely with the same number of pkts?
That's strange indeed.
If you tcpdump the tap interfaces, do you see incoming traffic on only one interface, or spread randomly across both?
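Something like this, just a sketch (the tap names come from your ifconfig output below, and the filter assumes 192.168.0.2 sits behind these taps - adjust as needed):

  # on the KVM host, watch each tap separately
  tcpdump -ni tap317i0 'host 192.168.0.2 and port 22'
  tcpdump -ni tap317i1 'host 192.168.0.2 and port 22'

If only one tap ever sees the SSH traffic, the identical drop counters on both would be even more surprising.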
(Can you provide the guest's network configuration for both interfaces?)
I see that you have enabled multiqueue on one of the interfaces; have you set up the multiqueue side correctly inside the guest?
Do you have enough vCPUs to handle all the queues?
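To check from inside the guest, something along these lines (a rough sketch; I'm assuming the interface is called eth0 there and that ethtool is installed):

  # show current vs. maximum queue ("channel") counts for the virtio NIC
  ethtool -l eth0
  # enable e.g. 4 combined queues - should not exceed the guest's vCPU count
  ethtool -L eth0 combined 4
  # number of vCPUs visible in the guest
  nproc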
----- Original Message -----
From: "Stefan Priebe, Profihost AG" <s.priebe@profihost.ag>
To: "qemu-devel" <qemu-devel@nongnu.org>
Sent: Tuesday, 2 January 2018 12:17:29
Subject: [Qemu-devel] dropped pkts with Qemu on tap interface (RX)
Hello,
currently I'm trying to fix a problem where we have "random" missing
packets.
We're doing an SSH connection from machine A to machine B every 5 minutes
via rsync and ssh.
Sometimes it happens that we get this cron message:
"Connection to 192.168.0.2 closed by remote host.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.2]
ssh: connect to host 192.168.0.2 port 22: Connection refused"
The tap devices on the target VM show dropped RX packets on BOTH tap
interfaces - strangely with the same number of pkts?
# ifconfig tap317i0; ifconfig tap317i1
tap317i0 Link encap:Ethernet HWaddr 6e:cb:65:94:bb:bf
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:2238445 errors:0 dropped:13159 overruns:0 frame:0
TX packets:9655853 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:177991267 (169.7 MiB) TX bytes:910412749 (868.2 MiB)
tap317i1 Link encap:Ethernet HWaddr 96:f8:b5:d0:9a:07
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:1516085 errors:0 dropped:13159 overruns:0 frame:0
TX packets:1446964 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1597564313 (1.4 GiB) TX bytes:3517734365 (3.2 GiB)
Any ideas how to inspect this issue?
Greets,
Stefan
Thread overview: 10+ messages
2018-01-02 11:17 [Qemu-devel] dropped pkts with Qemu on tap interface (RX) Stefan Priebe - Profihost AG
2018-01-02 14:20 ` Wei Xu
2018-01-02 15:24 ` Stefan Priebe - Profihost AG
2018-01-02 17:04 ` Wei Xu
2018-01-02 21:17 ` Stefan Priebe - Profihost AG
2018-01-03 3:57 ` Wei Xu
2018-01-03 15:07 ` Stefan Priebe - Profihost AG
2018-01-04 3:09 ` Wei Xu
2018-01-03 8:14 ` Alexandre DERUMIER [this message]
2018-01-03 15:10 ` Stefan Priebe - Profihost AG