* [PULL 00/12] Net patches
From: Jason Wang @ 2021-06-11 6:00 UTC (permalink / raw)
To: qemu-devel, peter.maydell; +Cc: Jason Wang
The following changes since commit 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d:
Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-migration-20210609a' into staging (2021-06-09 16:40:21 +0100)
are available in the git repository at:
https://github.com/jasowang/qemu.git tags/net-pull-request
for you to fetch changes up to 5a2d9929ac1f01a1e8ef2a3f56f69e6069863dad:
Fixed calculation error of pkt->header_size in fill_pkt_tcp_info() (2021-06-11 10:30:13 +0800)
----------------------------------------------------------------
----------------------------------------------------------------
Jason Wang (4):
vhost-vdpa: skip ram device from the IOTLB mapping
vhost-vdpa: map virtqueue notification area if possible
vhost-vdpa: don't initialize backend_features
vhost-vdpa: remove the unused vhost_vdpa_get_acked_features()
Paolo Bonzini (1):
netdev: add more commands to preconfig mode
Rao, Lei (7):
Remove some duplicate trace code.
Fix the qemu crash when guest shutdown during checkpoint
Optimize the function of filter_send
Remove migrate_set_block_enabled in checkpoint
Add a function named packet_new_nocopy for COLO.
Add the function of colo_compare_cleanup
Fixed calculation error of pkt->header_size in fill_pkt_tcp_info()
hmp-commands.hx | 2 +
hw/virtio/vhost-vdpa.c | 100 +++++++++++++++++++++++++++++++++++------
include/hw/virtio/vhost-vdpa.h | 6 +++
include/net/vhost-vdpa.h | 1 -
migration/colo.c | 6 ---
migration/migration.c | 4 ++
net/colo-compare.c | 25 +++++------
net/colo-compare.h | 1 +
net/colo.c | 25 +++++++----
net/colo.h | 1 +
net/filter-mirror.c | 8 ++--
net/filter-rewriter.c | 3 +-
net/net.c | 4 ++
net/vhost-vdpa.c | 9 ----
qapi/net.json | 6 ++-
softmmu/runstate.c | 1 +
16 files changed, 143 insertions(+), 59 deletions(-)
* Re: [PULL 00/12] Net patches
From: Peter Maydell @ 2021-06-11 12:02 UTC (permalink / raw)
To: Jason Wang; +Cc: QEMU Developers
On Fri, 11 Jun 2021 at 07:00, Jason Wang <jasowang@redhat.com> wrote:
>
> The following changes since commit 7fe7fae8b48e3f9c647fd685e5155ebc8e6fb84d:
>
> Merge remote-tracking branch 'remotes/dgilbert-gitlab/tags/pull-migration-20210609a' into staging (2021-06-09 16:40:21 +0100)
>
> are available in the git repository at:
>
> https://github.com/jasowang/qemu.git tags/net-pull-request
>
> for you to fetch changes up to 5a2d9929ac1f01a1e8ef2a3f56f69e6069863dad:
>
> Fixed calculation error of pkt->header_size in fill_pkt_tcp_info() (2021-06-11 10:30:13 +0800)
>
> ----------------------------------------------------------------
>
> ----------------------------------------------------------------
> Jason Wang (4):
> vhost-vdpa: skip ram device from the IOTLB mapping
> vhost-vdpa: map virtqueue notification area if possible
> vhost-vdpa: don't initialize backend_features
> vhost-vdpa: remove the unused vhost_vdpa_get_acked_features()
>
> Paolo Bonzini (1):
> netdev: add more commands to preconfig mode
>
> Rao, Lei (7):
> Remove some duplicate trace code.
> Fix the qemu crash when guest shutdown during checkpoint
> Optimize the function of filter_send
> Remove migrate_set_block_enabled in checkpoint
> Add a function named packet_new_nocopy for COLO.
> Add the function of colo_compare_cleanup
> Fixed calculation error of pkt->header_size in fill_pkt_tcp_info()
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/6.1
for any user-visible changes.
-- PMM
* Re: [PULL 00/12] Net patches
From: no-reply @ 2021-06-14 22:43 UTC (permalink / raw)
To: jasowang; +Cc: peter.maydell, jasowang, qemu-devel
Patchew URL: https://patchew.org/QEMU/20210611060024.46763-1-jasowang@redhat.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Type: series
Message-id: 20210611060024.46763-1-jasowang@redhat.com
Subject: [PULL 00/12] Net patches
=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
- [tag update] patchew/20210614200940.2056770-1-philmd@redhat.com -> patchew/20210614200940.2056770-1-philmd@redhat.com
* [new tag] patchew/20210614202842.581640-1-mathieu.poirier@linaro.org -> patchew/20210614202842.581640-1-mathieu.poirier@linaro.org
Switched to a new branch 'test'
=== OUTPUT BEGIN ===
checkpatch.pl: no revisions returned for revlist 'base..'
=== OUTPUT END ===
Test command exited with code: 255
The full log is available at
http://patchew.org/logs/20210611060024.46763-1-jasowang@redhat.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* [PULL 00/12] Net patches
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Jason Wang
The following changes since commit e3debd5e7d0ce031356024878a0a18b9d109354a:
Merge tag 'pull-request-2023-03-24' of https://gitlab.com/thuth/qemu into staging (2023-03-24 16:08:46 +0000)
are available in the git repository at:
https://github.com/jasowang/qemu.git tags/net-pull-request
for you to fetch changes up to fba7c3b788dfcb99a3f9253f7d99cc0d217d6d3c:
igb: respect VMVIR and VMOLR for VLAN (2023-03-28 13:10:55 +0800)
----------------------------------------------------------------
----------------------------------------------------------------
Akihiko Odaki (4):
igb: Save more Tx states
igb: Fix DMA requester specification for Tx packet
hw/net/net_tx_pkt: Ignore ECN bit
hw/net/net_tx_pkt: Align l3_hdr
Sriram Yagnaraman (8):
MAINTAINERS: Add Sriram Yagnaraman as a igb reviewer
igb: handle PF/VF reset properly
igb: add ICR_RXDW
igb: implement VFRE and VFTE registers
igb: check oversized packets for VMDq
igb: respect E1000_VMOLR_RSSE
igb: implement VF Tx and Rx stats
igb: respect VMVIR and VMOLR for VLAN
MAINTAINERS | 1 +
hw/net/e1000e_core.c | 6 +-
hw/net/e1000x_regs.h | 4 +
hw/net/igb.c | 26 ++++--
hw/net/igb_core.c | 256 ++++++++++++++++++++++++++++++++++++++-------------
hw/net/igb_core.h | 9 +-
hw/net/igb_regs.h | 6 ++
hw/net/net_tx_pkt.c | 30 +++---
hw/net/net_tx_pkt.h | 3 +-
hw/net/trace-events | 2 +
hw/net/vmxnet3.c | 4 +-
11 files changed, 254 insertions(+), 93 deletions(-)
* [PULL 01/12] igb: Save more Tx states
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Akihiko Odaki, Sriram Yagnaraman, Jason Wang
From: Akihiko Odaki <akihiko.odaki@daynix.com>
The current implementation of igb uses only part of an advanced Tx
context descriptor and first data descriptor because it lacks some
features and infers the packet's characteristics instead of respecting
the packet type specified in the descriptor. However, we will certainly
need the entire Tx context descriptor when we update igb to respect
these ignored fields. Save the entire context descriptor and first
data descriptor, except the buffer address, to prepare for such a change.
This also introduces the distinction between contexts with different
indexes, which exists in igb but not in e1000e.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb.c | 26 +++++++++++++++++++-------
hw/net/igb_core.c | 39 +++++++++++++++++++--------------------
hw/net/igb_core.h | 8 +++-----
3 files changed, 41 insertions(+), 32 deletions(-)
diff --git a/hw/net/igb.c b/hw/net/igb.c
index c6d753d..51a7e91 100644
--- a/hw/net/igb.c
+++ b/hw/net/igb.c
@@ -502,16 +502,28 @@ static int igb_post_load(void *opaque, int version_id)
return igb_core_post_load(&s->core);
}
-static const VMStateDescription igb_vmstate_tx = {
- .name = "igb-tx",
+static const VMStateDescription igb_vmstate_tx_ctx = {
+ .name = "igb-tx-ctx",
.version_id = 1,
.minimum_version_id = 1,
.fields = (VMStateField[]) {
- VMSTATE_UINT16(vlan, struct igb_tx),
- VMSTATE_UINT16(mss, struct igb_tx),
- VMSTATE_BOOL(tse, struct igb_tx),
- VMSTATE_BOOL(ixsm, struct igb_tx),
- VMSTATE_BOOL(txsm, struct igb_tx),
+ VMSTATE_UINT32(vlan_macip_lens, struct e1000_adv_tx_context_desc),
+ VMSTATE_UINT32(seqnum_seed, struct e1000_adv_tx_context_desc),
+ VMSTATE_UINT32(type_tucmd_mlhl, struct e1000_adv_tx_context_desc),
+ VMSTATE_UINT32(mss_l4len_idx, struct e1000_adv_tx_context_desc),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+static const VMStateDescription igb_vmstate_tx = {
+ .name = "igb-tx",
+ .version_id = 2,
+ .minimum_version_id = 2,
+ .fields = (VMStateField[]) {
+ VMSTATE_STRUCT_ARRAY(ctx, struct igb_tx, 2, 0, igb_vmstate_tx_ctx,
+ struct e1000_adv_tx_context_desc),
+ VMSTATE_UINT32(first_cmd_type_len, struct igb_tx),
+ VMSTATE_UINT32(first_olinfo_status, struct igb_tx),
VMSTATE_BOOL(first, struct igb_tx),
VMSTATE_BOOL(skip_cp, struct igb_tx),
VMSTATE_END_OF_LIST()
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index a7c7bfd..7708333 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -389,8 +389,10 @@ igb_rss_parse_packet(IGBCore *core, struct NetRxPkt *pkt, bool tx,
static bool
igb_setup_tx_offloads(IGBCore *core, struct igb_tx *tx)
{
- if (tx->tse) {
- if (!net_tx_pkt_build_vheader(tx->tx_pkt, true, true, tx->mss)) {
+ if (tx->first_cmd_type_len & E1000_ADVTXD_DCMD_TSE) {
+ uint32_t idx = (tx->first_olinfo_status >> 4) & 1;
+ uint32_t mss = tx->ctx[idx].mss_l4len_idx >> 16;
+ if (!net_tx_pkt_build_vheader(tx->tx_pkt, true, true, mss)) {
return false;
}
@@ -399,13 +401,13 @@ igb_setup_tx_offloads(IGBCore *core, struct igb_tx *tx)
return true;
}
- if (tx->txsm) {
+ if (tx->first_olinfo_status & E1000_ADVTXD_POTS_TXSM) {
if (!net_tx_pkt_build_vheader(tx->tx_pkt, false, true, 0)) {
return false;
}
}
- if (tx->ixsm) {
+ if (tx->first_olinfo_status & E1000_ADVTXD_POTS_IXSM) {
net_tx_pkt_update_ip_hdr_checksum(tx->tx_pkt);
}
@@ -527,7 +529,7 @@ igb_process_tx_desc(IGBCore *core,
{
struct e1000_adv_tx_context_desc *tx_ctx_desc;
uint32_t cmd_type_len;
- uint32_t olinfo_status;
+ uint32_t idx;
uint64_t buffer_addr;
uint16_t length;
@@ -538,20 +540,19 @@ igb_process_tx_desc(IGBCore *core,
E1000_ADVTXD_DTYP_DATA) {
/* advanced transmit data descriptor */
if (tx->first) {
- olinfo_status = le32_to_cpu(tx_desc->read.olinfo_status);
-
- tx->tse = !!(cmd_type_len & E1000_ADVTXD_DCMD_TSE);
- tx->ixsm = !!(olinfo_status & E1000_ADVTXD_POTS_IXSM);
- tx->txsm = !!(olinfo_status & E1000_ADVTXD_POTS_TXSM);
-
+ tx->first_cmd_type_len = cmd_type_len;
+ tx->first_olinfo_status = le32_to_cpu(tx_desc->read.olinfo_status);
tx->first = false;
}
} else if ((cmd_type_len & E1000_ADVTXD_DTYP_CTXT) ==
E1000_ADVTXD_DTYP_CTXT) {
/* advanced transmit context descriptor */
tx_ctx_desc = (struct e1000_adv_tx_context_desc *)tx_desc;
- tx->vlan = le32_to_cpu(tx_ctx_desc->vlan_macip_lens) >> 16;
- tx->mss = le32_to_cpu(tx_ctx_desc->mss_l4len_idx) >> 16;
+ idx = (le32_to_cpu(tx_ctx_desc->mss_l4len_idx) >> 4) & 1;
+ tx->ctx[idx].vlan_macip_lens = le32_to_cpu(tx_ctx_desc->vlan_macip_lens);
+ tx->ctx[idx].seqnum_seed = le32_to_cpu(tx_ctx_desc->seqnum_seed);
+ tx->ctx[idx].type_tucmd_mlhl = le32_to_cpu(tx_ctx_desc->type_tucmd_mlhl);
+ tx->ctx[idx].mss_l4len_idx = le32_to_cpu(tx_ctx_desc->mss_l4len_idx);
return;
} else {
/* unknown descriptor type */
@@ -575,8 +576,10 @@ igb_process_tx_desc(IGBCore *core,
if (cmd_type_len & E1000_TXD_CMD_EOP) {
if (!tx->skip_cp && net_tx_pkt_parse(tx->tx_pkt)) {
if (cmd_type_len & E1000_TXD_CMD_VLE) {
- net_tx_pkt_setup_vlan_header_ex(tx->tx_pkt, tx->vlan,
- core->mac[VET] & 0xffff);
+ idx = (tx->first_olinfo_status >> 4) & 1;
+ uint16_t vlan = tx->ctx[idx].vlan_macip_lens >> 16;
+ uint16_t vet = core->mac[VET] & 0xffff;
+ net_tx_pkt_setup_vlan_header_ex(tx->tx_pkt, vlan, vet);
}
if (igb_tx_pkt_send(core, tx, queue_index)) {
igb_on_tx_done_update_stats(core, tx->tx_pkt);
@@ -4024,11 +4027,7 @@ static void igb_reset(IGBCore *core, bool sw)
for (i = 0; i < ARRAY_SIZE(core->tx); i++) {
tx = &core->tx[i];
net_tx_pkt_reset(tx->tx_pkt);
- tx->vlan = 0;
- tx->mss = 0;
- tx->tse = false;
- tx->ixsm = false;
- tx->txsm = false;
+ memset(tx->ctx, 0, sizeof(tx->ctx));
tx->first = true;
tx->skip_cp = false;
}
diff --git a/hw/net/igb_core.h b/hw/net/igb_core.h
index 814c1e2..8914e0b 100644
--- a/hw/net/igb_core.h
+++ b/hw/net/igb_core.h
@@ -72,11 +72,9 @@ struct IGBCore {
QEMUTimer *autoneg_timer;
struct igb_tx {
- uint16_t vlan; /* VLAN Tag */
- uint16_t mss; /* Maximum Segment Size */
- bool tse; /* TCP/UDP Segmentation Enable */
- bool ixsm; /* Insert IP Checksum */
- bool txsm; /* Insert TCP/UDP Checksum */
+ struct e1000_adv_tx_context_desc ctx[2];
+ uint32_t first_cmd_type_len;
+ uint32_t first_olinfo_status;
bool first;
bool skip_cp;
--
2.7.4
* [PULL 02/12] igb: Fix DMA requester specification for Tx packet
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Akihiko Odaki, Jason Wang
From: Akihiko Odaki <akihiko.odaki@daynix.com>
igb used to specify the PF as DMA requester when reading Tx packets.
This caused Tx requests from VFs to be performed in the address space of
the PF, defeating the purpose of SR-IOV. Add some logic to change the
requester depending on the queue, which can be assigned to a VF.
Fixes: 3a977deebe ("Intrdocue igb device emulation")
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/e1000e_core.c | 6 +++---
hw/net/igb_core.c | 13 ++++++++-----
hw/net/net_tx_pkt.c | 3 ++-
hw/net/net_tx_pkt.h | 3 ++-
hw/net/vmxnet3.c | 4 ++--
5 files changed, 17 insertions(+), 12 deletions(-)
diff --git a/hw/net/e1000e_core.c b/hw/net/e1000e_core.c
index 4d9679c..c0c09b6 100644
--- a/hw/net/e1000e_core.c
+++ b/hw/net/e1000e_core.c
@@ -765,7 +765,7 @@ e1000e_process_tx_desc(E1000ECore *core,
}
tx->skip_cp = false;
- net_tx_pkt_reset(tx->tx_pkt);
+ net_tx_pkt_reset(tx->tx_pkt, core->owner);
tx->sum_needed = 0;
tx->cptse = 0;
@@ -3447,7 +3447,7 @@ e1000e_core_pci_uninit(E1000ECore *core)
qemu_del_vm_change_state_handler(core->vmstate);
for (i = 0; i < E1000E_NUM_QUEUES; i++) {
- net_tx_pkt_reset(core->tx[i].tx_pkt);
+ net_tx_pkt_reset(core->tx[i].tx_pkt, core->owner);
net_tx_pkt_uninit(core->tx[i].tx_pkt);
}
@@ -3572,7 +3572,7 @@ static void e1000e_reset(E1000ECore *core, bool sw)
e1000x_reset_mac_addr(core->owner_nic, core->mac, core->permanent_mac);
for (i = 0; i < ARRAY_SIZE(core->tx); i++) {
- net_tx_pkt_reset(core->tx[i].tx_pkt);
+ net_tx_pkt_reset(core->tx[i].tx_pkt, core->owner);
memset(&core->tx[i].props, 0, sizeof(core->tx[i].props));
core->tx[i].skip_cp = false;
}
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 7708333..78d3073 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -523,6 +523,7 @@ igb_on_tx_done_update_stats(IGBCore *core, struct NetTxPkt *tx_pkt)
static void
igb_process_tx_desc(IGBCore *core,
+ PCIDevice *dev,
struct igb_tx *tx,
union e1000_adv_tx_desc *tx_desc,
int queue_index)
@@ -588,7 +589,7 @@ igb_process_tx_desc(IGBCore *core,
tx->first = true;
tx->skip_cp = false;
- net_tx_pkt_reset(tx->tx_pkt);
+ net_tx_pkt_reset(tx->tx_pkt, dev);
}
}
@@ -803,6 +804,8 @@ igb_start_xmit(IGBCore *core, const IGB_TxRing *txr)
d = core->owner;
}
+ net_tx_pkt_reset(txr->tx->tx_pkt, d);
+
while (!igb_ring_empty(core, txi)) {
base = igb_ring_head_descr(core, txi);
@@ -811,7 +814,7 @@ igb_start_xmit(IGBCore *core, const IGB_TxRing *txr)
trace_e1000e_tx_descr((void *)(intptr_t)desc.read.buffer_addr,
desc.read.cmd_type_len, desc.wb.status);
- igb_process_tx_desc(core, txr->tx, &desc, txi->idx);
+ igb_process_tx_desc(core, d, txr->tx, &desc, txi->idx);
igb_ring_advance(core, txi, 1);
eic |= igb_txdesc_writeback(core, base, &desc, txi);
}
@@ -3828,7 +3831,7 @@ igb_core_pci_realize(IGBCore *core,
core->vmstate = qemu_add_vm_change_state_handler(igb_vm_state_change, core);
for (i = 0; i < IGB_NUM_QUEUES; i++) {
- net_tx_pkt_init(&core->tx[i].tx_pkt, core->owner, E1000E_MAX_TX_FRAGS);
+ net_tx_pkt_init(&core->tx[i].tx_pkt, NULL, E1000E_MAX_TX_FRAGS);
}
net_rx_pkt_init(&core->rx_pkt);
@@ -3853,7 +3856,7 @@ igb_core_pci_uninit(IGBCore *core)
qemu_del_vm_change_state_handler(core->vmstate);
for (i = 0; i < IGB_NUM_QUEUES; i++) {
- net_tx_pkt_reset(core->tx[i].tx_pkt);
+ net_tx_pkt_reset(core->tx[i].tx_pkt, NULL);
net_tx_pkt_uninit(core->tx[i].tx_pkt);
}
@@ -4026,7 +4029,7 @@ static void igb_reset(IGBCore *core, bool sw)
for (i = 0; i < ARRAY_SIZE(core->tx); i++) {
tx = &core->tx[i];
- net_tx_pkt_reset(tx->tx_pkt);
+ net_tx_pkt_reset(tx->tx_pkt, NULL);
memset(tx->ctx, 0, sizeof(tx->ctx));
tx->first = true;
tx->skip_cp = false;
diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index 986a3ad..cb606cc 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -443,7 +443,7 @@ void net_tx_pkt_dump(struct NetTxPkt *pkt)
#endif
}
-void net_tx_pkt_reset(struct NetTxPkt *pkt)
+void net_tx_pkt_reset(struct NetTxPkt *pkt, PCIDevice *pci_dev)
{
int i;
@@ -467,6 +467,7 @@ void net_tx_pkt_reset(struct NetTxPkt *pkt)
pkt->raw[i].iov_len, DMA_DIRECTION_TO_DEVICE, 0);
}
}
+ pkt->pci_dev = pci_dev;
pkt->raw_frags = 0;
pkt->hdr_len = 0;
diff --git a/hw/net/net_tx_pkt.h b/hw/net/net_tx_pkt.h
index f57b4e0..e5ce6f2 100644
--- a/hw/net/net_tx_pkt.h
+++ b/hw/net/net_tx_pkt.h
@@ -148,9 +148,10 @@ void net_tx_pkt_dump(struct NetTxPkt *pkt);
* reset tx packet private context (needed to be called between packets)
*
* @pkt: packet
+ * @dev: PCI device processing the next packet
*
*/
-void net_tx_pkt_reset(struct NetTxPkt *pkt);
+void net_tx_pkt_reset(struct NetTxPkt *pkt, PCIDevice *dev);
/**
* Send packet to qemu. handles sw offloads if vhdr is not supported.
diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 1068b80..f7b874c 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -678,7 +678,7 @@ static void vmxnet3_process_tx_queue(VMXNET3State *s, int qidx)
vmxnet3_complete_packet(s, qidx, txd_idx);
s->tx_sop = true;
s->skip_current_tx_pkt = false;
- net_tx_pkt_reset(s->tx_pkt);
+ net_tx_pkt_reset(s->tx_pkt, PCI_DEVICE(s));
}
}
}
@@ -1159,7 +1159,7 @@ static void vmxnet3_deactivate_device(VMXNET3State *s)
{
if (s->device_active) {
VMW_CBPRN("Deactivating vmxnet3...");
- net_tx_pkt_reset(s->tx_pkt);
+ net_tx_pkt_reset(s->tx_pkt, PCI_DEVICE(s));
net_tx_pkt_uninit(s->tx_pkt);
net_rx_pkt_uninit(s->rx_pkt);
s->device_active = false;
--
2.7.4
* [PULL 03/12] hw/net/net_tx_pkt: Ignore ECN bit
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Akihiko Odaki, Jason Wang
From: Akihiko Odaki <akihiko.odaki@daynix.com>
No segmentation should be performed if the gso type is
VIRTIO_NET_HDR_GSO_NONE, even if the ECN bit is set.
Fixes: e263cd49c7 ("Packet abstraction for VMWARE network devices")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1544
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/net_tx_pkt.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index cb606cc..efe80b1 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -796,11 +796,13 @@ bool net_tx_pkt_send_custom(struct NetTxPkt *pkt, bool offload,
{
assert(pkt);
+ uint8_t gso_type = pkt->virt_hdr.gso_type & ~VIRTIO_NET_HDR_GSO_ECN;
+
/*
* Since underlying infrastructure does not support IP datagrams longer
* than 64K we should drop such packets and don't even try to send
*/
- if (VIRTIO_NET_HDR_GSO_NONE != pkt->virt_hdr.gso_type) {
+ if (VIRTIO_NET_HDR_GSO_NONE != gso_type) {
if (pkt->payload_len >
ETH_MAX_IP_DGRAM_LEN -
pkt->vec[NET_TX_PKT_L3HDR_FRAG].iov_len) {
@@ -808,7 +810,7 @@ bool net_tx_pkt_send_custom(struct NetTxPkt *pkt, bool offload,
}
}
- if (offload || pkt->virt_hdr.gso_type == VIRTIO_NET_HDR_GSO_NONE) {
+ if (offload || gso_type == VIRTIO_NET_HDR_GSO_NONE) {
if (!offload && pkt->virt_hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
net_tx_pkt_do_sw_csum(pkt, &pkt->vec[NET_TX_PKT_L2HDR_FRAG],
pkt->payload_frags + NET_TX_PKT_PL_START_FRAG - 1,
--
2.7.4
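
For readers unfamiliar with the virtio-net header, the following is a
minimal standalone sketch, not QEMU code, of why the ECN flag must be
masked before the GSO_NONE comparison the patch above performs. The two
constants are redefined locally from the virtio-net specification so the
example is self-contained.

/* gso_ecn.c - illustration only, not part of QEMU. */
#include <stdint.h>
#include <stdio.h>

#define VIRTIO_NET_HDR_GSO_NONE 0      /* no segmentation requested */
#define VIRTIO_NET_HDR_GSO_ECN  0x80   /* flag bit OR'd into gso_type */

int main(void)
{
    uint8_t gso_type = VIRTIO_NET_HDR_GSO_NONE | VIRTIO_NET_HDR_GSO_ECN;

    /* Comparing the raw field treats a GSO_NONE + ECN packet as GSO... */
    printf("raw compare says GSO: %d\n",
           gso_type != VIRTIO_NET_HDR_GSO_NONE);

    /* ...masking the ECN flag first, as the patch does, classifies it
     * correctly as "no segmentation". */
    uint8_t masked = gso_type & ~VIRTIO_NET_HDR_GSO_ECN;
    printf("masked compare says GSO: %d\n",
           masked != VIRTIO_NET_HDR_GSO_NONE);

    return 0;
}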
* [PULL 04/12] hw/net/net_tx_pkt: Align l3_hdr
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Akihiko Odaki, Jason Wang
From: Akihiko Odaki <akihiko.odaki@daynix.com>
Align the l3_hdr member of NetTxPkt by defining it as a union of
ip_header, ip6_header, and an array of octets.
Fixes: e263cd49c7 ("Packet abstraction for VMWARE network devices")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1544
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/net_tx_pkt.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index efe80b1..8dc8568 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -43,7 +43,11 @@ struct NetTxPkt {
struct iovec *vec;
uint8_t l2_hdr[ETH_MAX_L2_HDR_LEN];
- uint8_t l3_hdr[ETH_MAX_IP_DGRAM_LEN];
+ union {
+ struct ip_header ip;
+ struct ip6_header ip6;
+ uint8_t octets[ETH_MAX_IP_DGRAM_LEN];
+ } l3_hdr;
uint32_t payload_len;
@@ -89,16 +93,14 @@ void net_tx_pkt_update_ip_hdr_checksum(struct NetTxPkt *pkt)
{
uint16_t csum;
assert(pkt);
- struct ip_header *ip_hdr;
- ip_hdr = pkt->vec[NET_TX_PKT_L3HDR_FRAG].iov_base;
- ip_hdr->ip_len = cpu_to_be16(pkt->payload_len +
+ pkt->l3_hdr.ip.ip_len = cpu_to_be16(pkt->payload_len +
pkt->vec[NET_TX_PKT_L3HDR_FRAG].iov_len);
- ip_hdr->ip_sum = 0;
- csum = net_raw_checksum((uint8_t *)ip_hdr,
+ pkt->l3_hdr.ip.ip_sum = 0;
+ csum = net_raw_checksum(pkt->l3_hdr.octets,
pkt->vec[NET_TX_PKT_L3HDR_FRAG].iov_len);
- ip_hdr->ip_sum = cpu_to_be16(csum);
+ pkt->l3_hdr.ip.ip_sum = cpu_to_be16(csum);
}
void net_tx_pkt_update_ip_checksums(struct NetTxPkt *pkt)
@@ -832,15 +834,14 @@ void net_tx_pkt_fix_ip6_payload_len(struct NetTxPkt *pkt)
{
struct iovec *l2 = &pkt->vec[NET_TX_PKT_L2HDR_FRAG];
if (eth_get_l3_proto(l2, 1, l2->iov_len) == ETH_P_IPV6) {
- struct ip6_header *ip6 = (struct ip6_header *) pkt->l3_hdr;
/*
* TODO: if qemu would support >64K packets - add jumbo option check
* something like that:
* 'if (ip6->ip6_plen == 0 && !has_jumbo_option(ip6)) {'
*/
- if (ip6->ip6_plen == 0) {
+ if (pkt->l3_hdr.ip6.ip6_plen == 0) {
if (pkt->payload_len <= ETH_MAX_IP_DGRAM_LEN) {
- ip6->ip6_plen = htons(pkt->payload_len);
+ pkt->l3_hdr.ip6.ip6_plen = htons(pkt->payload_len);
}
/*
* TODO: if qemu would support >64K packets
--
2.7.4
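
The following is a minimal standalone sketch, not QEMU code, of why the
union introduced above fixes the alignment problem: a union is aligned
to the strictest alignment among its members, so embedding the header
structs next to the byte array guarantees that the buffer may be
accessed through them, unlike a bare uint8_t array. Here "struct hdr" is
a simplified stand-in for the real ip_header/ip6_header types.

/* l3_align.c - illustration only, not part of QEMU. */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

struct hdr {
    uint32_t word;   /* forces 4-byte alignment, like real header fields */
    uint16_t half;
};

union l3_hdr {
    struct hdr h;        /* typed, aligned view */
    uint8_t octets[64];  /* raw byte view, like the old uint8_t l3_hdr[] */
};

int main(void)
{
    /* A plain byte array only guarantees 1-byte alignment... */
    printf("alignof(uint8_t[64]) = %zu\n", alignof(uint8_t[64]));
    /* ...while the union inherits the strictest member alignment. */
    printf("alignof(union l3_hdr) = %zu\n", alignof(union l3_hdr));
    return 0;
}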
* [PULL 05/12] MAINTAINERS: Add Sriram Yagnaraman as a igb reviewer
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
I would like to review and be informed of changes to the igb device.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 34b50b2..ef45b5e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2252,6 +2252,7 @@ F: tests/qtest/libqos/e1000e.*
igb
M: Akihiko Odaki <akihiko.odaki@daynix.com>
+R: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
S: Maintained
F: docs/system/devices/igb.rst
F: hw/net/igb*
--
2.7.4
* [PULL 06/12] igb: handle PF/VF reset properly
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Use PFRSTD to reset the RSTI bit for VFs, and raise a VFLRE interrupt
when a VF is reset.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 38 ++++++++++++++++++++++++++------------
hw/net/igb_regs.h | 3 +++
hw/net/trace-events | 2 ++
3 files changed, 31 insertions(+), 12 deletions(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 78d3073..6ba9696 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -1898,14 +1898,6 @@ static void igb_set_eims(IGBCore *core, int index, uint32_t val)
igb_update_interrupt_state(core);
}
-static void igb_vf_reset(IGBCore *core, uint16_t vfn)
-{
- /* TODO: Reset of the queue enable and the interrupt registers of the VF. */
-
- core->mac[V2PMAILBOX0 + vfn] &= ~E1000_V2PMAILBOX_RSTI;
- core->mac[V2PMAILBOX0 + vfn] = E1000_V2PMAILBOX_RSTD;
-}
-
static void mailbox_interrupt_to_vf(IGBCore *core, uint16_t vfn)
{
uint32_t ent = core->mac[VTIVAR_MISC + vfn];
@@ -1983,6 +1975,17 @@ static void igb_set_vfmailbox(IGBCore *core, int index, uint32_t val)
}
}
+static void igb_vf_reset(IGBCore *core, uint16_t vfn)
+{
+ /* disable Rx and Tx for the VF*/
+ core->mac[VFTE] &= ~BIT(vfn);
+ core->mac[VFRE] &= ~BIT(vfn);
+ /* indicate VF reset to PF */
+ core->mac[VFLRE] |= BIT(vfn);
+ /* VFLRE and mailbox use the same interrupt cause */
+ mailbox_interrupt_to_pf(core);
+}
+
static void igb_w1c(IGBCore *core, int index, uint32_t val)
{
core->mac[index] &= ~val;
@@ -2237,14 +2240,20 @@ igb_set_status(IGBCore *core, int index, uint32_t val)
static void
igb_set_ctrlext(IGBCore *core, int index, uint32_t val)
{
- trace_e1000e_link_set_ext_params(!!(val & E1000_CTRL_EXT_ASDCHK),
- !!(val & E1000_CTRL_EXT_SPD_BYPS));
-
- /* TODO: PFRSTD */
+ trace_igb_link_set_ext_params(!!(val & E1000_CTRL_EXT_ASDCHK),
+ !!(val & E1000_CTRL_EXT_SPD_BYPS),
+ !!(val & E1000_CTRL_EXT_PFRSTD));
/* Zero self-clearing bits */
val &= ~(E1000_CTRL_EXT_ASDCHK | E1000_CTRL_EXT_EE_RST);
core->mac[CTRL_EXT] = val;
+
+ if (core->mac[CTRL_EXT] & E1000_CTRL_EXT_PFRSTD) {
+ for (int vfn = 0; vfn < IGB_MAX_VF_FUNCTIONS; vfn++) {
+ core->mac[V2PMAILBOX0 + vfn] &= ~E1000_V2PMAILBOX_RSTI;
+ core->mac[V2PMAILBOX0 + vfn] |= E1000_V2PMAILBOX_RSTD;
+ }
+ }
}
static void
@@ -4027,6 +4036,11 @@ static void igb_reset(IGBCore *core, bool sw)
e1000x_reset_mac_addr(core->owner_nic, core->mac, core->permanent_mac);
+ for (int vfn = 0; vfn < IGB_MAX_VF_FUNCTIONS; vfn++) {
+ /* Set RSTI, so VF can identify a PF reset is in progress */
+ core->mac[V2PMAILBOX0 + vfn] |= E1000_V2PMAILBOX_RSTI;
+ }
+
for (i = 0; i < ARRAY_SIZE(core->tx); i++) {
tx = &core->tx[i];
net_tx_pkt_reset(tx->tx_pkt, NULL);
diff --git a/hw/net/igb_regs.h b/hw/net/igb_regs.h
index 00934d4..a658f9b 100644
--- a/hw/net/igb_regs.h
+++ b/hw/net/igb_regs.h
@@ -240,6 +240,9 @@ union e1000_adv_rx_desc {
/* from igb/e1000_defines.h */
+/* Physical Func Reset Done Indication */
+#define E1000_CTRL_EXT_PFRSTD 0x00004000
+
#define E1000_IVAR_VALID 0x80
#define E1000_GPIE_NSICR 0x00000001
#define E1000_GPIE_MSIX_MODE 0x00000010
diff --git a/hw/net/trace-events b/hw/net/trace-events
index 6575341..d35554f 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -280,6 +280,8 @@ igb_core_mdic_read_unhandled(uint32_t addr) "MDIC READ: PHY[%u] UNHANDLED"
igb_core_mdic_write(uint32_t addr, uint32_t data) "MDIC WRITE: PHY[%u] = 0x%x"
igb_core_mdic_write_unhandled(uint32_t addr) "MDIC WRITE: PHY[%u] UNHANDLED"
+igb_link_set_ext_params(bool asd_check, bool speed_select_bypass, bool pfrstd) "Set extended link params: ASD check: %d, Speed select bypass: %d, PF reset done: %d"
+
igb_rx_desc_buff_size(uint32_t b) "buffer size: %u"
igb_rx_desc_buff_write(uint64_t addr, uint16_t offset, const void* source, uint32_t len) "addr: 0x%"PRIx64", offset: %u, from: %p, length: %u"
--
2.7.4
* [PULL 07/12] igb: add ICR_RXDW
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
IGB uses the RXDW ICR bit to indicate that an Rx descriptor has been
written back. This is the same bit position as RXT0 in older hardware.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/e1000x_regs.h | 4 ++++
hw/net/igb_core.c | 2 +-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/hw/net/e1000x_regs.h b/hw/net/e1000x_regs.h
index c0832fa..6d3c4c6 100644
--- a/hw/net/e1000x_regs.h
+++ b/hw/net/e1000x_regs.h
@@ -335,6 +335,7 @@
#define E1000_ICR_RXDMT0 0x00000010 /* rx desc min. threshold (0) */
#define E1000_ICR_RXO 0x00000040 /* rx overrun */
#define E1000_ICR_RXT0 0x00000080 /* rx timer intr (ring 0) */
+#define E1000_ICR_RXDW 0x00000080 /* rx desc written back */
#define E1000_ICR_MDAC 0x00000200 /* MDIO access complete */
#define E1000_ICR_RXCFG 0x00000400 /* RX /c/ ordered set */
#define E1000_ICR_GPI_EN0 0x00000800 /* GP Int 0 */
@@ -378,6 +379,7 @@
#define E1000_ICS_RXDMT0 E1000_ICR_RXDMT0 /* rx desc min. threshold */
#define E1000_ICS_RXO E1000_ICR_RXO /* rx overrun */
#define E1000_ICS_RXT0 E1000_ICR_RXT0 /* rx timer intr */
+#define E1000_ICS_RXDW E1000_ICR_RXDW /* rx desc written back */
#define E1000_ICS_MDAC E1000_ICR_MDAC /* MDIO access complete */
#define E1000_ICS_RXCFG E1000_ICR_RXCFG /* RX /c/ ordered set */
#define E1000_ICS_GPI_EN0 E1000_ICR_GPI_EN0 /* GP Int 0 */
@@ -407,6 +409,7 @@
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* rx desc min. threshold */
#define E1000_IMS_RXO E1000_ICR_RXO /* rx overrun */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* rx timer intr */
+#define E1000_IMS_RXDW E1000_ICR_RXDW /* rx desc written back */
#define E1000_IMS_MDAC E1000_ICR_MDAC /* MDIO access complete */
#define E1000_IMS_RXCFG E1000_ICR_RXCFG /* RX /c/ ordered set */
#define E1000_IMS_GPI_EN0 E1000_ICR_GPI_EN0 /* GP Int 0 */
@@ -441,6 +444,7 @@
#define E1000_IMC_RXDMT0 E1000_ICR_RXDMT0 /* rx desc min. threshold */
#define E1000_IMC_RXO E1000_ICR_RXO /* rx overrun */
#define E1000_IMC_RXT0 E1000_ICR_RXT0 /* rx timer intr */
+#define E1000_IMC_RXDW E1000_ICR_RXDW /* rx desc written back */
#define E1000_IMC_MDAC E1000_ICR_MDAC /* MDIO access complete */
#define E1000_IMC_RXCFG E1000_ICR_RXCFG /* RX /c/ ordered set */
#define E1000_IMC_GPI_EN0 E1000_ICR_GPI_EN0 /* GP Int 0 */
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 6ba9696..9ab90e8 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -1583,7 +1583,7 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
continue;
}
- n |= E1000_ICR_RXT0;
+ n |= E1000_ICR_RXDW;
igb_rx_fix_l4_csum(core, core->rx_pkt);
igb_write_packet_to_guest(core, core->rx_pkt, &rxr, &rss_info);
--
2.7.4
* [PULL 08/12] igb: implement VFRE and VFTE registers
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Also introduce:
- Checks for RXDCTL/TXDCTL queue enable bits
- IGB_NUM_VM_POOLS enum (Sec 1.5: Table 1-7)
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 38 +++++++++++++++++++++++++++++++-------
hw/net/igb_core.h | 1 +
hw/net/igb_regs.h | 3 +++
3 files changed, 35 insertions(+), 7 deletions(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 9ab90e8..753f17b 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -784,6 +784,18 @@ igb_txdesc_writeback(IGBCore *core, dma_addr_t base,
return igb_tx_wb_eic(core, txi->idx);
}
+static inline bool
+igb_tx_enabled(IGBCore *core, const E1000E_RingInfo *txi)
+{
+ bool vmdq = core->mac[MRQC] & 1;
+ uint16_t qn = txi->idx;
+ uint16_t pool = qn % IGB_NUM_VM_POOLS;
+
+ return (core->mac[TCTL] & E1000_TCTL_EN) &&
+ (!vmdq || core->mac[VFTE] & BIT(pool)) &&
+ (core->mac[TXDCTL0 + (qn * 16)] & E1000_TXDCTL_QUEUE_ENABLE);
+}
+
static void
igb_start_xmit(IGBCore *core, const IGB_TxRing *txr)
{
@@ -793,8 +805,7 @@ igb_start_xmit(IGBCore *core, const IGB_TxRing *txr)
const E1000E_RingInfo *txi = txr->i;
uint32_t eic = 0;
- /* TODO: check if the queue itself is enabled too. */
- if (!(core->mac[TCTL] & E1000_TCTL_EN)) {
+ if (!igb_tx_enabled(core, txi)) {
trace_e1000e_tx_disabled();
return;
}
@@ -872,6 +883,9 @@ igb_can_receive(IGBCore *core)
for (i = 0; i < IGB_NUM_QUEUES; i++) {
E1000E_RxRing rxr;
+ if (!(core->mac[RXDCTL0 + (i * 16)] & E1000_RXDCTL_QUEUE_ENABLE)) {
+ continue;
+ }
igb_rx_ring_init(core, &rxr, i);
if (igb_ring_enabled(core, rxr.i) && igb_has_rxbufs(core, rxr.i, 1)) {
@@ -938,7 +952,7 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
if (core->mac[MRQC] & 1) {
if (is_broadcast_ether_addr(ehdr->h_dest)) {
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < IGB_NUM_VM_POOLS; i++) {
if (core->mac[VMOLR0 + i] & E1000_VMOLR_BAM) {
queues |= BIT(i);
}
@@ -972,7 +986,7 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
f = ta_shift[(rctl >> E1000_RCTL_MO_SHIFT) & 3];
f = (((ehdr->h_dest[5] << 8) | ehdr->h_dest[4]) >> f) & 0xfff;
if (macp[f >> 5] & (1 << (f & 0x1f))) {
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < IGB_NUM_VM_POOLS; i++) {
if (core->mac[VMOLR0 + i] & E1000_VMOLR_ROMPE) {
queues |= BIT(i);
}
@@ -995,7 +1009,7 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
}
}
} else {
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < IGB_NUM_VM_POOLS; i++) {
if (core->mac[VMOLR0 + i] & E1000_VMOLR_AUPE) {
mask |= BIT(i);
}
@@ -1011,6 +1025,7 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
queues = BIT(def_pl >> E1000_VT_CTL_DEFAULT_POOL_SHIFT);
}
+ queues &= core->mac[VFRE];
igb_rss_parse_packet(core, core->rx_pkt, external_tx != NULL, rss_info);
if (rss_info->queue & 1) {
queues <<= 8;
@@ -1571,7 +1586,8 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
e1000x_fcs_len(core->mac);
for (i = 0; i < IGB_NUM_QUEUES; i++) {
- if (!(queues & BIT(i))) {
+ if (!(queues & BIT(i)) ||
+ !(core->mac[RXDCTL0 + (i * 16)] & E1000_RXDCTL_QUEUE_ENABLE)) {
continue;
}
@@ -1977,9 +1993,16 @@ static void igb_set_vfmailbox(IGBCore *core, int index, uint32_t val)
static void igb_vf_reset(IGBCore *core, uint16_t vfn)
{
+ uint16_t qn0 = vfn;
+ uint16_t qn1 = vfn + IGB_NUM_VM_POOLS;
+
/* disable Rx and Tx for the VF*/
- core->mac[VFTE] &= ~BIT(vfn);
+ core->mac[RXDCTL0 + (qn0 * 16)] &= ~E1000_RXDCTL_QUEUE_ENABLE;
+ core->mac[RXDCTL0 + (qn1 * 16)] &= ~E1000_RXDCTL_QUEUE_ENABLE;
+ core->mac[TXDCTL0 + (qn0 * 16)] &= ~E1000_TXDCTL_QUEUE_ENABLE;
+ core->mac[TXDCTL0 + (qn1 * 16)] &= ~E1000_TXDCTL_QUEUE_ENABLE;
core->mac[VFRE] &= ~BIT(vfn);
+ core->mac[VFTE] &= ~BIT(vfn);
/* indicate VF reset to PF */
core->mac[VFLRE] |= BIT(vfn);
/* VFLRE and mailbox use the same interrupt cause */
@@ -3914,6 +3937,7 @@ igb_phy_reg_init[] = {
static const uint32_t igb_mac_reg_init[] = {
[LEDCTL] = 2 | (3 << 8) | BIT(15) | (6 << 16) | (7 << 24),
[EEMNGCTL] = BIT(31),
+ [TXDCTL0] = E1000_TXDCTL_QUEUE_ENABLE,
[RXDCTL0] = E1000_RXDCTL_QUEUE_ENABLE | (1 << 16),
[RXDCTL1] = 1 << 16,
[RXDCTL2] = 1 << 16,
diff --git a/hw/net/igb_core.h b/hw/net/igb_core.h
index 8914e0b..9cbbfd5 100644
--- a/hw/net/igb_core.h
+++ b/hw/net/igb_core.h
@@ -47,6 +47,7 @@
#define IGB_MSIX_VEC_NUM (10)
#define IGBVF_MSIX_VEC_NUM (3)
#define IGB_NUM_QUEUES (16)
+#define IGB_NUM_VM_POOLS (8)
typedef struct IGBCore IGBCore;
diff --git a/hw/net/igb_regs.h b/hw/net/igb_regs.h
index a658f9b..c5c5b3c 100644
--- a/hw/net/igb_regs.h
+++ b/hw/net/igb_regs.h
@@ -160,6 +160,9 @@ union e1000_adv_rx_desc {
#define E1000_MRQC_RSS_FIELD_IPV6_UDP 0x00800000
#define E1000_MRQC_RSS_FIELD_IPV6_UDP_EX 0x01000000
+/* Additional Transmit Descriptor Control definitions */
+#define E1000_TXDCTL_QUEUE_ENABLE 0x02000000 /* Enable specific Tx Queue */
+
/* Additional Receive Descriptor Control definitions */
#define E1000_RXDCTL_QUEUE_ENABLE 0x02000000 /* Enable specific Rx Queue */
--
2.7.4
* [PULL 09/12] igb: check oversized packets for VMDq
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 41 ++++++++++++++++++++++++++++++++++++-----
1 file changed, 36 insertions(+), 5 deletions(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 753f17b..38aa459 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -921,12 +921,26 @@ igb_rx_l4_cso_enabled(IGBCore *core)
return !!(core->mac[RXCSUM] & E1000_RXCSUM_TUOFLD);
}
+static bool
+igb_rx_is_oversized(IGBCore *core, uint16_t qn, size_t size)
+{
+ uint16_t pool = qn % IGB_NUM_VM_POOLS;
+ bool lpe = !!(core->mac[VMOLR0 + pool] & E1000_VMOLR_LPE);
+ int max_ethernet_lpe_size =
+ core->mac[VMOLR0 + pool] & E1000_VMOLR_RLPML_MASK;
+ int max_ethernet_vlan_size = 1522;
+
+ return size > (lpe ? max_ethernet_lpe_size : max_ethernet_vlan_size);
+}
+
static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
- E1000E_RSSInfo *rss_info, bool *external_tx)
+ size_t size, E1000E_RSSInfo *rss_info,
+ bool *external_tx)
{
static const int ta_shift[] = { 4, 3, 2, 0 };
uint32_t f, ra[2], *macp, rctl = core->mac[RCTL];
uint16_t queues = 0;
+ uint16_t oversized = 0;
uint16_t vid = lduw_be_p(&PKT_GET_VLAN_HDR(ehdr)->h_tci) & VLAN_VID_MASK;
bool accepted = false;
int i;
@@ -1026,9 +1040,26 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
}
queues &= core->mac[VFRE];
- igb_rss_parse_packet(core, core->rx_pkt, external_tx != NULL, rss_info);
- if (rss_info->queue & 1) {
- queues <<= 8;
+ if (queues) {
+ for (i = 0; i < IGB_NUM_VM_POOLS; i++) {
+ if ((queues & BIT(i)) && igb_rx_is_oversized(core, i, size)) {
+ oversized |= BIT(i);
+ }
+ }
+ /* 8.19.37 increment ROC if packet is oversized for all queues */
+ if (oversized == queues) {
+ trace_e1000x_rx_oversized(size);
+ e1000x_inc_reg_if_not_full(core->mac, ROC);
+ }
+ queues &= ~oversized;
+ }
+
+ if (queues) {
+ igb_rss_parse_packet(core, core->rx_pkt,
+ external_tx != NULL, rss_info);
+ if (rss_info->queue & 1) {
+ queues <<= 8;
+ }
}
} else {
switch (net_rx_pkt_get_packet_type(core->rx_pkt)) {
@@ -1576,7 +1607,7 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
e1000x_vlan_enabled(core->mac),
core->mac[VET] & 0xffff);
- queues = igb_receive_assign(core, ehdr, &rss_info, external_tx);
+ queues = igb_receive_assign(core, ehdr, size, &rss_info, external_tx);
if (!queues) {
trace_e1000e_rx_flt_dropped();
return orig_size;
--
2.7.4
* [PULL 10/12] igb: respect E1000_VMOLR_RSSE
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
RSS for VFs is only enabled if VMOLR[n].RSSE is set.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 38aa459..fd61c6c 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -1057,8 +1057,15 @@ static uint16_t igb_receive_assign(IGBCore *core, const struct eth_header *ehdr,
if (queues) {
igb_rss_parse_packet(core, core->rx_pkt,
external_tx != NULL, rss_info);
+ /* Sec 8.26.1: PQn = VFn + VQn*8 */
if (rss_info->queue & 1) {
- queues <<= 8;
+ for (i = 0; i < IGB_NUM_VM_POOLS; i++) {
+ if ((queues & BIT(i)) &&
+ (core->mac[VMOLR0 + i] & E1000_VMOLR_RSSE)) {
+ queues |= BIT(i + IGB_NUM_VM_POOLS);
+ queues &= ~BIT(i);
+ }
+ }
}
}
} else {
--
2.7.4
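
As a side note on the queue/pool arithmetic used in this and the
surrounding patches, the following is a minimal standalone sketch, not
QEMU code, of the VMDq mapping: queue n is served by pool n % 8, and per
the "PQn = VFn + VQn*8" rule cited above, RSS steers a packet from a
pool's first queue (VQn = 0) to its second queue (VQn = 1, i.e.
pool + 8). The two constants mirror IGB_NUM_QUEUES and IGB_NUM_VM_POOLS
from hw/net/igb_core.h.

/* vmdq_map.c - illustration only, not part of QEMU. */
#include <stdio.h>

#define IGB_NUM_QUEUES   16
#define IGB_NUM_VM_POOLS  8

int main(void)
{
    for (int qn = 0; qn < IGB_NUM_QUEUES; qn++) {
        int pool = qn % IGB_NUM_VM_POOLS;          /* owning VM pool */
        int rss_queue = pool + IGB_NUM_VM_POOLS;   /* PQn with VQn = 1 */
        printf("queue %2d -> pool %d (RSS second queue %d)\n",
               qn, pool, rss_queue);
    }
    return 0;
}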
* [PULL 11/12] igb: implement VF Tx and Rx stats
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Please note that loopback counters for VM-to-VM traffic are not
implemented yet: VFGOTLBC, VFGPTLBC, VFGORLBC and VFGPRLBC.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 26 ++++++++++++++++++++++----
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index fd61c6c..162ba8b 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -492,7 +492,7 @@ igb_tx_pkt_send(IGBCore *core, struct igb_tx *tx, int queue_index)
}
static void
-igb_on_tx_done_update_stats(IGBCore *core, struct NetTxPkt *tx_pkt)
+igb_on_tx_done_update_stats(IGBCore *core, struct NetTxPkt *tx_pkt, int qn)
{
static const int PTCregs[6] = { PTC64, PTC127, PTC255, PTC511,
PTC1023, PTC1522 };
@@ -519,6 +519,13 @@ igb_on_tx_done_update_stats(IGBCore *core, struct NetTxPkt *tx_pkt)
core->mac[GPTC] = core->mac[TPT];
core->mac[GOTCL] = core->mac[TOTL];
core->mac[GOTCH] = core->mac[TOTH];
+
+ if (core->mac[MRQC] & 1) {
+ uint16_t pool = qn % IGB_NUM_VM_POOLS;
+
+ core->mac[PVFGOTC0 + (pool * 64)] += tot_len;
+ core->mac[PVFGPTC0 + (pool * 64)]++;
+ }
}
static void
@@ -583,7 +590,7 @@ igb_process_tx_desc(IGBCore *core,
net_tx_pkt_setup_vlan_header_ex(tx->tx_pkt, vlan, vet);
}
if (igb_tx_pkt_send(core, tx, queue_index)) {
- igb_on_tx_done_update_stats(core, tx->tx_pkt);
+ igb_on_tx_done_update_stats(core, tx->tx_pkt, queue_index);
}
}
@@ -1409,7 +1416,8 @@ igb_write_to_rx_buffers(IGBCore *core,
}
static void
-igb_update_rx_stats(IGBCore *core, size_t data_size, size_t data_fcs_size)
+igb_update_rx_stats(IGBCore *core, const E1000E_RingInfo *rxi,
+ size_t data_size, size_t data_fcs_size)
{
e1000x_update_rx_total_stats(core->mac, data_size, data_fcs_size);
@@ -1425,6 +1433,16 @@ igb_update_rx_stats(IGBCore *core, size_t data_size, size_t data_fcs_size)
default:
break;
}
+
+ if (core->mac[MRQC] & 1) {
+ uint16_t pool = rxi->idx % IGB_NUM_VM_POOLS;
+
+ core->mac[PVFGORC0 + (pool * 64)] += data_size + 4;
+ core->mac[PVFGPRC0 + (pool * 64)]++;
+ if (net_rx_pkt_get_packet_type(core->rx_pkt) == ETH_PKT_MCAST) {
+ core->mac[PVFMPRC0 + (pool * 64)]++;
+ }
+ }
}
static inline bool
@@ -1526,7 +1544,7 @@ igb_write_packet_to_guest(IGBCore *core, struct NetRxPkt *pkt,
} while (desc_offset < total_size);
- igb_update_rx_stats(core, size, total_size);
+ igb_update_rx_stats(core, rxi, size, total_size);
}
static inline void
--
2.7.4
* [PULL 12/12] igb: respect VMVIR and VMOLR for VLAN
From: Jason Wang @ 2023-03-28 5:19 UTC (permalink / raw)
To: qemu-devel; +Cc: Sriram Yagnaraman, Jason Wang
From: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Add support for stripping/inserting VLAN tags for VFs.
The checksum calculation had to move back into the for loop, since
packet data is pulled inside the loop based on the strip-VLAN decision
for each VF. net_rx_pkt_fix_l4_csum should be extended to accept a
buffer instead for igb; that is work for a future patch.
Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
hw/net/igb_core.c | 62 +++++++++++++++++++++++++++++++++++++++++++------------
1 file changed, 49 insertions(+), 13 deletions(-)
diff --git a/hw/net/igb_core.c b/hw/net/igb_core.c
index 162ba8b..d733fed 100644
--- a/hw/net/igb_core.c
+++ b/hw/net/igb_core.c
@@ -386,6 +386,28 @@ igb_rss_parse_packet(IGBCore *core, struct NetRxPkt *pkt, bool tx,
info->queue = E1000_RSS_QUEUE(&core->mac[RETA], info->hash);
}
+static void
+igb_tx_insert_vlan(IGBCore *core, uint16_t qn, struct igb_tx *tx,
+ uint16_t vlan, bool insert_vlan)
+{
+ if (core->mac[MRQC] & 1) {
+ uint16_t pool = qn % IGB_NUM_VM_POOLS;
+
+ if (core->mac[VMVIR0 + pool] & E1000_VMVIR_VLANA_DEFAULT) {
+ /* always insert default VLAN */
+ insert_vlan = true;
+ vlan = core->mac[VMVIR0 + pool] & 0xffff;
+ } else if (core->mac[VMVIR0 + pool] & E1000_VMVIR_VLANA_NEVER) {
+ insert_vlan = false;
+ }
+ }
+
+ if (insert_vlan && e1000x_vlan_enabled(core->mac)) {
+ net_tx_pkt_setup_vlan_header_ex(tx->tx_pkt, vlan,
+ core->mac[VET] & 0xffff);
+ }
+}
+
static bool
igb_setup_tx_offloads(IGBCore *core, struct igb_tx *tx)
{
@@ -583,12 +605,11 @@ igb_process_tx_desc(IGBCore *core,
if (cmd_type_len & E1000_TXD_CMD_EOP) {
if (!tx->skip_cp && net_tx_pkt_parse(tx->tx_pkt)) {
- if (cmd_type_len & E1000_TXD_CMD_VLE) {
- idx = (tx->first_olinfo_status >> 4) & 1;
- uint16_t vlan = tx->ctx[idx].vlan_macip_lens >> 16;
- uint16_t vet = core->mac[VET] & 0xffff;
- net_tx_pkt_setup_vlan_header_ex(tx->tx_pkt, vlan, vet);
- }
+ idx = (tx->first_olinfo_status >> 4) & 1;
+ igb_tx_insert_vlan(core, queue_index, tx,
+ tx->ctx[idx].vlan_macip_lens >> 16,
+ !!(cmd_type_len & E1000_TXD_CMD_VLE));
+
if (igb_tx_pkt_send(core, tx, queue_index)) {
igb_on_tx_done_update_stats(core, tx->tx_pkt, queue_index);
}
@@ -1547,6 +1568,20 @@ igb_write_packet_to_guest(IGBCore *core, struct NetRxPkt *pkt,
igb_update_rx_stats(core, rxi, size, total_size);
}
+static bool
+igb_rx_strip_vlan(IGBCore *core, const E1000E_RingInfo *rxi)
+{
+ if (core->mac[MRQC] & 1) {
+ uint16_t pool = rxi->idx % IGB_NUM_VM_POOLS;
+ /* Sec 7.10.3.8: CTRL.VME is ignored, only VMOLR/RPLOLR is used */
+ return (net_rx_pkt_get_packet_type(core->rx_pkt) == ETH_PKT_MCAST) ?
+ core->mac[RPLOLR] & E1000_RPLOLR_STRVLAN :
+ core->mac[VMOLR0 + pool] & E1000_VMOLR_STRVLAN;
+ }
+
+ return e1000x_vlan_enabled(core->mac);
+}
+
static inline void
igb_rx_fix_l4_csum(IGBCore *core, struct NetRxPkt *pkt)
{
@@ -1627,10 +1662,7 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
ehdr = PKT_GET_ETH_HDR(filter_buf);
net_rx_pkt_set_packet_type(core->rx_pkt, get_eth_packet_type(ehdr));
-
- net_rx_pkt_attach_iovec_ex(core->rx_pkt, iov, iovcnt, iov_ofs,
- e1000x_vlan_enabled(core->mac),
- core->mac[VET] & 0xffff);
+ net_rx_pkt_set_protocols(core->rx_pkt, filter_buf, size);
queues = igb_receive_assign(core, ehdr, size, &rss_info, external_tx);
if (!queues) {
@@ -1638,9 +1670,6 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
return orig_size;
}
- total_size = net_rx_pkt_get_total_len(core->rx_pkt) +
- e1000x_fcs_len(core->mac);
-
for (i = 0; i < IGB_NUM_QUEUES; i++) {
if (!(queues & BIT(i)) ||
!(core->mac[RXDCTL0 + (i * 16)] & E1000_RXDCTL_QUEUE_ENABLE)) {
@@ -1649,6 +1678,13 @@ igb_receive_internal(IGBCore *core, const struct iovec *iov, int iovcnt,
igb_rx_ring_init(core, &rxr, i);
+ net_rx_pkt_attach_iovec_ex(core->rx_pkt, iov, iovcnt, iov_ofs,
+ igb_rx_strip_vlan(core, rxr.i),
+ core->mac[VET] & 0xffff);
+
+ total_size = net_rx_pkt_get_total_len(core->rx_pkt) +
+ e1000x_fcs_len(core->mac);
+
if (!igb_has_rxbufs(core, rxr.i, total_size)) {
n |= E1000_ICS_RXO;
trace_e1000e_rx_not_written_to_guest(rxr.i->idx);
--
2.7.4
* Re: [PULL 00/12] Net patches
From: Peter Maydell @ 2023-03-28 16:00 UTC (permalink / raw)
To: Jason Wang; +Cc: qemu-devel
On Tue, 28 Mar 2023 at 06:21, Jason Wang <jasowang@redhat.com> wrote:
>
> The following changes since commit e3debd5e7d0ce031356024878a0a18b9d109354a:
>
> Merge tag 'pull-request-2023-03-24' of https://gitlab.com/thuth/qemu into staging (2023-03-24 16:08:46 +0000)
>
> are available in the git repository at:
>
> https://github.com/jasowang/qemu.git tags/net-pull-request
>
> for you to fetch changes up to fba7c3b788dfcb99a3f9253f7d99cc0d217d6d3c:
>
> igb: respect VMVIR and VMOLR for VLAN (2023-03-28 13:10:55 +0800)
>
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/8.0
for any user-visible changes.
-- PMM
* [PULL 00/12] Net patches
From: Jason Wang @ 2025-07-21 5:59 UTC (permalink / raw)
To: qemu-devel; +Cc: Jason Wang
The following changes since commit e82989544e38062beeeaad88c175afbeed0400f8:
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2025-07-18 14:10:02 -0400)
are available in the Git repository at:
https://github.com/jasowang/qemu.git tags/net-pull-request
for you to fetch changes up to ae9b09972bbf8ff49ae0edf3241fb413391b15ce:
net/vhost-user: Remove unused "err" from chr_closed_bh() (CID 1612365) (2025-07-21 10:23:17 +0800)
----------------------------------------------------------------
----------------------------------------------------------------
Laurent Vivier (6):
net/passt: Remove unused "err" from passt_vhost_user_event() (CID 1612375)
net/vhost-user: Remove unused "err" from net_vhost_user_event() (CID 1612372)
net/passt: Remove dead code in passt_vhost_user_start error path (CID 1612371)
net/passt: Check return value of g_remove() in net_passt_cleanup() (CID 1612369)
net/passt: Initialize "error" variable in net_passt_send() (CID 1612368)
net/vhost-user: Remove unused "err" from chr_closed_bh() (CID 1612365)
Peter Maydell (4):
hw/net/npcm_gmac.c: Send the right data for second packet in a row
hw/net/npcm_gmac.c: Unify length and prev_buf_size variables
hw/net/npcm_gmac.c: Correct test for when to reallocate packet buffer
hw/net/npcm_gmac.c: Drop 'buf' local variable
Steve Sistare (1):
tap: fix net_init_tap() return code
Vladimir Sementsov-Ogievskiy (1):
net/tap: drop too small packets
hw/net/npcm_gmac.c | 26 ++++++++++++--------------
net/passt.c | 22 +++++++---------------
net/tap.c | 9 +++++++--
net/vhost-user.c | 9 ---------
4 files changed, 26 insertions(+), 40 deletions(-)
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PULL 00/12] Net patches
From: Stefan Hajnoczi @ 2025-07-21 13:59 UTC (permalink / raw)
To: Jason Wang; +Cc: qemu-devel, Jason Wang
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/10.1 for any user-visible changes.