* [RFC 0/2] Convert skb to use scatterlist
@ 2007-07-05 23:14 Stephen Hemminger
2007-07-05 23:14 ` [RFC 1/2] skbuff: " Stephen Hemminger
2007-07-05 23:14 ` [RFC 2/2] shrink size of scatterlist on common i386/x86-64 Stephen Hemminger
0 siblings, 2 replies; 16+ messages in thread
From: Stephen Hemminger @ 2007-07-05 23:14 UTC (permalink / raw)
To: David Miller; +Cc: netdev
This topic came up at the first Netconf. This patch series changes skbuff
to use scatterlist. Why? Devices can then use the pci_dma_sg
routines to map the fraglist in one operation. This allows
for better error handling (less unwinding), and some IOMMUs
(PPC?) can be smarter.
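For illustration, a driver transmit path could then map every
fragment in one call (a rough sketch only; it assumes the frags
array has become a struct scatterlist[] and "pdev" stands for the
driver's PCI device):

	struct scatterlist *sg = skb_shinfo(skb)->frags;
	int nents = skb_shinfo(skb)->nr_frags;

	/* One call maps all fragments; 0 means nothing was mapped,
	 * so there is nothing to unwind on failure. */
	if (pci_map_sg(pdev, sg, nents, PCI_DMA_TODEVICE) == 0)
		return NETDEV_TX_BUSY;

	/* ... fill descriptors from sg_dma_address()/sg_dma_len() ... */

	pci_unmap_sg(pdev, sg, nents, PCI_DMA_TODEVICE);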
--
Stephen Hemminger <shemminger@linux-foundation.org>
* [RFC 1/2] skbuff: use scatterlist
2007-07-05 23:14 [RFC 0/2] Convert skb to use scatterlist Stephen Hemminger
@ 2007-07-05 23:14 ` Stephen Hemminger
2007-07-05 23:14 ` [RFC 2/2] shrink size of scatterlist on common i386/x86-64 Stephen Hemminger
1 sibling, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2007-07-05 23:14 UTC (permalink / raw)
To: David Miller; +Cc: netdev
[-- Attachment #1: skb-sg.patch --]
[-- Type: TEXT/PLAIN, Size: 60988 bytes --]
Replace the skb frag list with the common scatterlist definition.
This allows device drivers to use DMA scatter/gather operations, which
may be faster on some platforms. As a side benefit, it is easier to
unwind DMA mappings on error.
This idea came up long ago but was never implemented.
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
---
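The driver conversion is mechanical: only the field names change
(page_offset becomes offset, size becomes length). A typical mapping
loop then reads (illustration only, not part of the diff below):

	skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

	/* was: frag->page_offset, frag->size */
	mapping = pci_map_page(pdev, frag->page, frag->offset,
			       frag->length, PCI_DMA_TODEVICE);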
drivers/atm/he.c | 2 -
drivers/infiniband/hw/amso1100/c2.c | 2 -
drivers/net/3c59x.c | 8 ++--
drivers/net/8139cp.c | 5 +-
drivers/net/acenic.c | 8 ++--
drivers/net/atl1/atl1_main.c | 8 ++--
drivers/net/bnx2.c | 8 ++--
drivers/net/cassini.c | 19 ++++-----
drivers/net/chelsio/sge.c | 8 ++--
drivers/net/cxgb3/adapter.h | 2 -
drivers/net/cxgb3/sge.c | 25 ++++++------
drivers/net/e1000/e1000_main.c | 8 ++--
drivers/net/ehea/ehea_main.c | 7 +--
drivers/net/forcedeth.c | 22 ++++++-----
drivers/net/ibm_emac/ibm_emac_core.c | 2 -
drivers/net/ibmveth.c | 2 -
drivers/net/ixgb/ixgb_main.c | 6 +--
drivers/net/mv643xx_eth.c | 2 -
drivers/net/myri10ge/myri10ge.c | 26 ++++++-------
drivers/net/netxen/netxen_nic_main.c | 6 +--
drivers/net/ns83820.c | 9 ++--
drivers/net/qla3xxx.c | 6 +--
drivers/net/r8169.c | 4 +-
drivers/net/s2io.c | 18 +++++----
drivers/net/sk98lin/skge.c | 8 ++--
drivers/net/skge.c | 8 ++--
drivers/net/sky2.c | 16 ++++----
drivers/net/starfire.c | 9 +++-
drivers/net/sungem.c | 4 +-
drivers/net/sunhme.c | 4 +-
drivers/net/tg3.c | 14 +++----
drivers/net/tsi108_eth.c | 2 -
drivers/net/typhoon.c | 2 -
drivers/net/via-velocity.c | 2 -
include/linux/skbuff.h | 21 ++++------
net/appletalk/ddp.c | 4 +-
net/core/datagram.c | 9 ++--
net/core/pktgen.c | 42 ++++++++++-----------
net/core/skbuff.c | 64 ++++++++++++++++----------------
net/core/sock.c | 8 ++--
net/core/user_dma.c | 2 -
net/ipv4/ip_fragment.c | 4 +-
net/ipv4/ip_output.c | 9 +++-
net/ipv4/tcp.c | 9 ++--
net/ipv4/tcp_output.c | 8 ++--
net/ipv6/ip6_output.c | 7 ++-
net/ipv6/netfilter/nf_conntrack_reasm.c | 2 -
net/ipv6/reassembly.c | 2 -
net/xfrm/xfrm_algo.c | 4 +-
49 files changed, 239 insertions(+), 238 deletions(-)
--- a/include/linux/skbuff.h 2007-07-05 14:21:36.000000000 -0700
+++ b/include/linux/skbuff.h 2007-07-05 14:53:11.000000000 -0700
@@ -21,6 +21,7 @@
#include <asm/atomic.h>
#include <asm/types.h>
+#include <asm/scatterlist.h>
#include <linux/spinlock.h>
#include <linux/net.h>
#include <linux/textsearch.h>
@@ -122,13 +123,7 @@ struct sk_buff;
/* To allow 64K frame to be packed as single skb without frag_list */
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
-typedef struct skb_frag_struct skb_frag_t;
-
-struct skb_frag_struct {
- struct page *page;
- __u16 page_offset;
- __u16 size;
-};
+typedef struct scatterlist skb_frag_t;
/* This data is invariant across clones and lives at
* the end of the header data, ie. at skb->end.
@@ -813,7 +808,7 @@ static inline int skb_pagelen(const stru
int i, len = 0;
for (i = (int)skb_shinfo(skb)->nr_frags - 1; i >= 0; i--)
- len += skb_shinfo(skb)->frags[i].size;
+ len += skb_shinfo(skb)->frags[i].length;
return len + skb_headlen(skb);
}
@@ -822,9 +817,9 @@ static inline void skb_fill_page_desc(st
{
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- frag->page = page;
- frag->page_offset = off;
- frag->size = size;
+ frag->page = page;
+ frag->offset = off;
+ frag->length = size;
skb_shinfo(skb)->nr_frags = i + 1;
}
@@ -1390,10 +1385,10 @@ static inline int skb_can_coalesce(struc
struct page *page, int off)
{
if (i) {
- struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[i - 1];
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
return page == frag->page &&
- off == frag->page_offset + frag->size;
+ off == frag->offset + frag->length;
}
return 0;
}
--- a/drivers/atm/he.c 2007-06-05 13:27:30.000000000 -0700
+++ b/drivers/atm/he.c 2007-07-05 14:38:04.000000000 -0700
@@ -2803,7 +2803,7 @@ he_send(struct atm_vcc *vcc, struct sk_b
}
tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev,
- (void *) page_address(frag->page) + frag->page_offset,
+ (void *) page_address(frag->page) + frag->offset,
frag->size, PCI_DMA_TODEVICE);
tpd->iovec[slot].len = frag->size;
++slot;
--- a/drivers/infiniband/hw/amso1100/c2.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/infiniband/hw/amso1100/c2.c 2007-07-05 14:38:04.000000000 -0700
@@ -801,7 +801,7 @@ static int c2_xmit_frame(struct sk_buff
maplen = frag->size;
mapaddr =
pci_map_page(c2dev->pcidev, frag->page,
- frag->page_offset, maplen,
+ frag->offset, maplen,
PCI_DMA_TODEVICE);
elem = elem->next;
--- a/drivers/net/3c59x.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/3c59x.c 2007-07-05 15:21:08.000000000 -0700
@@ -2102,13 +2102,13 @@ boomerang_start_xmit(struct sk_buff *skb
vp->tx_ring[entry].frag[i+1].addr =
cpu_to_le32(pci_map_single(VORTEX_PCI(vp),
- (void*)page_address(frag->page) + frag->page_offset,
- frag->size, PCI_DMA_TODEVICE));
+ (void*)page_address(frag->page) + frag->offset,
+ frag->length, PCI_DMA_TODEVICE));
if (i == skb_shinfo(skb)->nr_frags-1)
- vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size|LAST_FRAG);
+ vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->length|LAST_FRAG);
else
- vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->size);
+ vp->tx_ring[entry].frag[i+1].length = cpu_to_le32(frag->length);
}
}
#else
--- a/drivers/net/8139cp.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/8139cp.c 2007-07-05 15:36:16.000000000 -0700
@@ -831,14 +831,13 @@ static int cp_start_xmit (struct sk_buff
for (frag = 0; frag < skb_shinfo(skb)->nr_frags; frag++) {
skb_frag_t *this_frag = &skb_shinfo(skb)->frags[frag];
- u32 len;
+ u32 len = this_frag->length;
u32 ctrl;
dma_addr_t mapping;
- len = this_frag->size;
mapping = pci_map_single(cp->pdev,
((void *) page_address(this_frag->page) +
- this_frag->page_offset),
+ this_frag->offset),
len, PCI_DMA_TODEVICE);
eor = (entry == (CP_TX_RING_SIZE - 1)) ? RingEnd : 0;
--- a/drivers/net/acenic.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/acenic.c 2007-07-05 15:23:30.000000000 -0700
@@ -2528,15 +2528,15 @@ restart:
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
struct tx_ring_info *info;
- len += frag->size;
+ len += frag->length;
info = ap->skb->tx_skbuff + idx;
desc = ap->tx_ring + idx;
mapping = pci_map_page(ap->pdev, frag->page,
- frag->page_offset, frag->size,
+ frag->offset, frag->length,
PCI_DMA_TODEVICE);
- flagsize = (frag->size << 16);
+ flagsize = (frag->length << 16);
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
@@ -2555,7 +2555,7 @@ restart:
info->skb = NULL;
}
pci_unmap_addr_set(info, mapping, mapping);
- pci_unmap_len_set(info, maplen, frag->size);
+ pci_unmap_len_set(info, maplen, frag->length);
ace_load_tx_bd(ap, desc, mapping, flagsize, vlan_tag);
}
}
--- a/drivers/net/atl1/atl1_main.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/atl1/atl1_main.c 2007-07-05 15:01:22.000000000 -0700
@@ -1384,11 +1384,11 @@ static void atl1_tx_map(struct atl1_adap
}
for (f = 0; f < nr_frags; f++) {
- struct skb_frag_struct *frag;
+ skb_frag_t *frag;
u16 lenf, i, m;
frag = &skb_shinfo(skb)->frags[f];
- lenf = frag->size;
+ lenf = frag->length;
m = (lenf + MAX_TX_BUF_LEN - 1) / MAX_TX_BUF_LEN;
for (i = 0; i < m; i++) {
@@ -1401,7 +1401,7 @@ static void atl1_tx_map(struct atl1_adap
lenf -= buffer_info->length;
buffer_info->dma =
pci_map_page(adapter->pdev, frag->page,
- frag->page_offset + i * MAX_TX_BUF_LEN,
+ frag->offset + i * MAX_TX_BUF_LEN,
buffer_info->length, PCI_DMA_TODEVICE);
if (++tpd_next_to_use == tpd_ring->count)
@@ -1516,7 +1516,7 @@ static int atl1_xmit_frame(struct sk_buf
/* nr_frags will be nonzero if we're doing scatter/gather (SG) */
nr_frags = skb_shinfo(skb)->nr_frags;
for (f = 0; f < nr_frags; f++) {
- frag_size = skb_shinfo(skb)->frags[f].size;
+ frag_size = skb_shinfo(skb)->frags[f].length;
if (frag_size)
count +=
(frag_size + MAX_TX_BUF_LEN - 1) / MAX_TX_BUF_LEN;
--- a/drivers/net/bnx2.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/bnx2.c 2007-07-05 15:24:36.000000000 -0700
@@ -2038,7 +2038,7 @@ bnx2_tx_int(struct bnx2 *bp)
pci_unmap_addr(
&bp->tx_buf_ring[TX_RING_IDX(sw_cons)],
mapping),
- skb_shinfo(skb)->frags[i].size,
+ skb_shinfo(skb)->frags[i].length,
PCI_DMA_TODEVICE);
}
@@ -4001,7 +4001,7 @@ bnx2_free_tx_skbs(struct bnx2 *bp)
tx_buf = &bp->tx_buf_ring[i + j + 1];
pci_unmap_page(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
- skb_shinfo(skb)->frags[j].size,
+ skb_shinfo(skb)->frags[j].length,
PCI_DMA_TODEVICE);
}
dev_kfree_skb(skb);
@@ -4922,8 +4922,8 @@ bnx2_start_xmit(struct sk_buff *skb, str
ring_prod = TX_RING_IDX(prod);
txbd = &bp->tx_desc_ring[ring_prod];
- len = frag->size;
- mapping = pci_map_page(bp->pdev, frag->page, frag->page_offset,
+ len = frag->length;
+ mapping = pci_map_page(bp->pdev, frag->page, frag->offset,
len, PCI_DMA_TODEVICE);
pci_unmap_addr_set(&bp->tx_buf_ring[ring_prod],
mapping, mapping);
--- a/drivers/net/cassini.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/cassini.c 2007-07-05 15:20:37.000000000 -0700
@@ -2067,8 +2067,8 @@ static int cas_rx_process_pkt(struct cas
get_page(page->buffer);
cas_buffer_inc(page);
frag->page = page->buffer;
- frag->page_offset = off;
- frag->size = hlen - swivel;
+ frag->offset = off;
+ frag->length = hlen - swivel;
/* any more data? */
if ((words[0] & RX_COMP1_SPLIT_PKT) && ((dlen -= hlen) > 0)) {
@@ -2092,8 +2092,8 @@ static int cas_rx_process_pkt(struct cas
get_page(page->buffer);
cas_buffer_inc(page);
frag->page = page->buffer;
- frag->page_offset = 0;
- frag->size = hlen;
+ frag->offset = 0;
+ frag->length = hlen;
RX_USED_ADD(page, hlen + cp->crc_size);
}
@@ -2860,12 +2860,11 @@ static inline int cas_xmit_tx_ringN(stru
for (frag = 0; frag < nr_frags; frag++) {
skb_frag_t *fragp = &skb_shinfo(skb)->frags[frag];
- len = fragp->size;
- mapping = pci_map_page(cp->pdev, fragp->page,
- fragp->page_offset, len,
- PCI_DMA_TODEVICE);
+ len = fragp->length;
+ mapping = pci_map_page(cp->pdev, fragp->page, fragp->offset,
+ len, PCI_DMA_TODEVICE);
- tabort = cas_calc_tabort(cp, fragp->page_offset, len);
+ tabort = cas_calc_tabort(cp, fragp->offset, len);
if (unlikely(tabort)) {
void *addr;
@@ -2876,7 +2875,7 @@ static inline int cas_xmit_tx_ringN(stru
addr = cas_page_map(fragp->page);
memcpy(tx_tiny_buf(cp, ring, entry),
- addr + fragp->page_offset + len - tabort,
+ addr + fragp->offset + len - tabort,
tabort);
cas_page_unmap(addr);
mapping = tx_tiny_map(cp, ring, entry, tentry);
--- a/drivers/net/chelsio/sge.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/chelsio/sge.c 2007-07-05 15:02:00.000000000 -0700
@@ -1130,7 +1130,7 @@ static inline unsigned int compute_large
}
for (i = 0; nfrags--; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- len = frag->size;
+ len = frag->length;
while (len > SGE_TX_DESC_MAX_PLEN) {
count++;
len -= SGE_TX_DESC_MAX_PLEN;
@@ -1272,10 +1272,10 @@ static inline void write_tx_descs(struct
}
mapping = pci_map_page(adapter->pdev, frag->page,
- frag->page_offset, frag->size,
+ frag->offset, frag->length,
PCI_DMA_TODEVICE);
desc_mapping = mapping;
- desc_len = frag->size;
+ desc_len = frag->length;
pidx = write_large_page_tx_descs(pidx, &e1, &ce, &gen,
&desc_mapping, &desc_len,
@@ -1285,7 +1285,7 @@ static inline void write_tx_descs(struct
nfrags == 0);
ce->skb = NULL;
pci_unmap_addr_set(ce, dma_addr, mapping);
- pci_unmap_len_set(ce, dma_len, frag->size);
+ pci_unmap_len_set(ce, dma_len, frag->length);
}
ce->skb = skb;
wmb();
--- a/drivers/net/cxgb3/sge.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/cxgb3/sge.c 2007-07-05 15:17:10.000000000 -0700
@@ -246,7 +246,7 @@ static inline void unmap_skb(struct sk_b
while (frag_idx < nfrags && curflit < WR_FLITS) {
pci_unmap_page(pdev, be64_to_cpu(sgp->addr[j]),
- skb_shinfo(skb)->frags[frag_idx].size,
+ skb_shinfo(skb)->frags[frag_idx].length,
PCI_DMA_TODEVICE);
j ^= 1;
if (j == 0) {
@@ -433,8 +433,8 @@ static void refill_fl(struct adapter *ad
q->alloc_failed++;
break;
} else {
- p->frag.size = RX_PAGE_SIZE;
- p->frag.page_offset = 0;
+ p->frag.length = RX_PAGE_SIZE;
+ p->frag.offset = 0;
p->va = page_address(p->frag.page);
}
}
@@ -442,10 +442,10 @@ static void refill_fl(struct adapter *ad
memcpy(&sd->t, p, sizeof(*p));
va = p->va;
- p->frag.page_offset += RX_PAGE_SIZE;
- BUG_ON(p->frag.page_offset > PAGE_SIZE);
+ p->frag.offset += RX_PAGE_SIZE;
+ BUG_ON(p->frag.offset > PAGE_SIZE);
p->va += RX_PAGE_SIZE;
- if (p->frag.page_offset == PAGE_SIZE)
+ if (p->frag.offset == PAGE_SIZE)
p->frag.page = NULL;
else
get_page(p->frag.page);
@@ -716,9 +716,9 @@ static inline unsigned int make_sgl(cons
for (i = 0; i < nfrags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- mapping = pci_map_page(pdev, frag->page, frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
- sgp->len[j] = cpu_to_be32(frag->size);
+ mapping = pci_map_page(pdev, frag->page, frag->offset,
+ frag->length, PCI_DMA_TODEVICE);
+ sgp->len[j] = cpu_to_be32(frag->length);
sgp->addr[j] = cpu_to_be64(mapping);
j ^= 1;
if (j == 0)
@@ -1270,7 +1270,7 @@ static void deferred_unmap_destructor(st
si = skb_shinfo(skb);
for (i = 0; i < si->nr_frags; i++)
- pci_unmap_page(dui->pdev, *p++, si->frags[i].size,
+ pci_unmap_page(dui->pdev, *p++, si->frags[i].length,
PCI_DMA_TODEVICE);
}
@@ -1728,9 +1728,8 @@ static void skb_data_init(struct sk_buff
} else {
skb_copy_to_linear_data(skb, p->va, SKB_DATA_SIZE);
skb_shinfo(skb)->frags[0].page = p->frag.page;
- skb_shinfo(skb)->frags[0].page_offset =
- p->frag.page_offset + SKB_DATA_SIZE;
- skb_shinfo(skb)->frags[0].size = len - SKB_DATA_SIZE;
+ skb_shinfo(skb)->frags[0].offset = p->frag.offset + SKB_DATA_SIZE;
+ skb_shinfo(skb)->frags[0].length = len - SKB_DATA_SIZE;
skb_shinfo(skb)->nr_frags = 1;
skb->data_len = len - SKB_DATA_SIZE;
skb->tail += SKB_DATA_SIZE;
--- a/drivers/net/e1000/e1000_main.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/e1000/e1000_main.c 2007-07-05 15:17:16.000000000 -0700
@@ -3049,11 +3049,11 @@ e1000_tx_map(struct e1000_adapter *adapt
}
for (f = 0; f < nr_frags; f++) {
- struct skb_frag_struct *frag;
+ skb_frag_t *frag;
frag = &skb_shinfo(skb)->frags[f];
- len = frag->size;
- offset = frag->page_offset;
+ len = frag->length;
+ offset = frag->offset;
while (len) {
buffer_info = &tx_ring->buffer_info[i];
@@ -3358,7 +3358,7 @@ e1000_xmit_frame(struct sk_buff *skb, st
nr_frags = skb_shinfo(skb)->nr_frags;
for (f = 0; f < nr_frags; f++)
- count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].size,
+ count += TXD_USE_COUNT(skb_shinfo(skb)->frags[f].length,
max_txd_pwr);
if (adapter->pcix_82544)
count += nr_frags;
--- a/drivers/net/ehea/ehea_main.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/ehea/ehea_main.c 2007-07-05 14:38:04.000000000 -0700
@@ -1390,7 +1390,7 @@ static inline void write_swqe2_data(stru
sg1entry->l_key = lkey;
sg1entry->len = frag->size;
tmp_addr = (u64)(page_address(frag->page)
- + frag->page_offset);
+ + frag->offset);
sg1entry->vaddr = tmp_addr;
swqe->descriptors++;
sg1entry_contains_frag_data = 1;
@@ -1404,8 +1404,7 @@ static inline void write_swqe2_data(stru
sgentry->l_key = lkey;
sgentry->len = frag->size;
- tmp_addr = (u64)(page_address(frag->page)
- + frag->page_offset);
+ tmp_addr = (u64)(page_address(frag->page) + frag->offset);
sgentry->vaddr = tmp_addr;
swqe->descriptors++;
}
@@ -1789,7 +1788,7 @@ static void ehea_xmit3(struct sk_buff *s
for (i = 0; i < nfrags; i++) {
frag = &skb_shinfo(skb)->frags[i];
memcpy(imm_data,
- page_address(frag->page) + frag->page_offset,
+ page_address(frag->page) + frag->offset,
frag->size);
imm_data += frag->size;
}
--- a/drivers/net/forcedeth.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/forcedeth.c 2007-07-05 15:35:30.000000000 -0700
@@ -1649,8 +1649,8 @@ static int nv_start_xmit(struct sk_buff
/* add fragments to entries count */
for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
+ entries += (skb_shinfo(skb)->frags[i].length >> NV_TX2_TSO_MAX_SHIFT) +
+ ((skb_shinfo(skb)->frags[i].length & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
}
empty_slots = nv_get_empty_tx_slots(np);
@@ -1687,15 +1687,16 @@ static int nv_start_xmit(struct sk_buff
/* setup the fragments */
for (i = 0; i < fragments; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
+ u32 size = frag->length;
offset = 0;
do {
prev_tx = put_tx;
prev_tx_ctx = np->put_tx_ctx;
bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
+ np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page,
+ frag->offset + offset,
+ bcnt, PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma);
put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags);
@@ -1765,8 +1766,8 @@ static int nv_start_xmit_optimized(struc
/* add fragments to entries count */
for (i = 0; i < fragments; i++) {
- entries += (skb_shinfo(skb)->frags[i].size >> NV_TX2_TSO_MAX_SHIFT) +
- ((skb_shinfo(skb)->frags[i].size & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
+ entries += (skb_shinfo(skb)->frags[i].length >> NV_TX2_TSO_MAX_SHIFT) +
+ ((skb_shinfo(skb)->frags[i].length & (NV_TX2_TSO_MAX_SIZE-1)) ? 1 : 0);
}
empty_slots = nv_get_empty_tx_slots(np);
@@ -1804,15 +1805,16 @@ static int nv_start_xmit_optimized(struc
/* setup the fragments */
for (i = 0; i < fragments; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- u32 size = frag->size;
+ u32 size = frag->length;
offset = 0;
do {
prev_tx = put_tx;
prev_tx_ctx = np->put_tx_ctx;
bcnt = (size > NV_TX2_TSO_MAX_SIZE) ? NV_TX2_TSO_MAX_SIZE : size;
- np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
- PCI_DMA_TODEVICE);
+ np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page,
+ frag->offset + offset,
+ bcnt, PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
put_tx->bufhigh = cpu_to_le64(np->put_tx_ctx->dma) >> 32;
put_tx->buflow = cpu_to_le64(np->put_tx_ctx->dma) & 0x0FFFFFFFF;
--- a/drivers/net/ibm_emac/ibm_emac_core.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/ibm_emac/ibm_emac_core.c 2007-07-05 14:38:04.000000000 -0700
@@ -1165,7 +1165,7 @@ static int emac_start_xmit_sg(struct sk_
if (unlikely(dev->tx_cnt + mal_tx_chunks(len) >= NUM_TX_BUFF))
goto undo_frame;
- pd = dma_map_page(dev->ldev, frag->page, frag->page_offset, len,
+ pd = dma_map_page(dev->ldev, frag->page, frag->offset, len,
DMA_TO_DEVICE);
slot = emac_xmit_split(dev, slot, pd, len, i == nr_frags - 1,
--- a/drivers/net/ibmveth.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/ibmveth.c 2007-07-05 14:38:04.000000000 -0700
@@ -698,7 +698,7 @@ static int ibmveth_start_xmit(struct sk_
skb_frag_t *frag = &skb_shinfo(skb)->frags[curfrag];
desc[curfrag+1].fields.address
= dma_map_single(&adapter->vdev->dev,
- page_address(frag->page) + frag->page_offset,
+ page_address(frag->page) + frag->offset,
frag->size, DMA_TO_DEVICE);
desc[curfrag+1].fields.length = frag->size;
desc[curfrag+1].fields.valid = 1;
--- a/drivers/net/ixgb/ixgb_main.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/ixgb/ixgb_main.c 2007-07-05 15:15:47.000000000 -0700
@@ -1314,10 +1314,10 @@ ixgb_tx_map(struct ixgb_adapter *adapter
}
for(f = 0; f < nr_frags; f++) {
- struct skb_frag_struct *frag;
+ skb_frag_t *frag;
frag = &skb_shinfo(skb)->frags[f];
- len = frag->size;
+ len = frag->length;
offset = 0;
while(len) {
@@ -1334,7 +1334,7 @@ ixgb_tx_map(struct ixgb_adapter *adapter
buffer_info->dma =
pci_map_page(adapter->pdev,
frag->page,
- frag->page_offset + offset,
+ frag->offset + offset,
size,
PCI_DMA_TODEVICE);
buffer_info->time_stamp = jiffies;
--- a/drivers/net/mv643xx_eth.c 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/mv643xx_eth.c 2007-07-05 14:38:04.000000000 -0700
@@ -1114,7 +1114,7 @@ static void eth_tx_fill_frag_descs(struc
desc->l4i_chk = 0;
desc->byte_cnt = this_frag->size;
desc->buf_ptr = dma_map_page(NULL, this_frag->page,
- this_frag->page_offset,
+ this_frag->offset,
this_frag->size,
DMA_TO_DEVICE);
}
--- a/drivers/net/myri10ge/myri10ge.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/myri10ge/myri10ge.c 2007-07-05 15:23:04.000000000 -0700
@@ -877,9 +877,9 @@ static inline void myri10ge_vlan_ip_csum
static inline void
myri10ge_rx_skb_build(struct sk_buff *skb, u8 * va,
- struct skb_frag_struct *rx_frags, int len, int hlen)
+ skb_frag_t *rx_frags, int len, int hlen)
{
- struct skb_frag_struct *skb_frags;
+ skb_frag_t *skb_frags;
skb->len = skb->data_len = len;
skb->truesize = len + sizeof(struct sk_buff);
@@ -888,7 +888,7 @@ myri10ge_rx_skb_build(struct sk_buff *sk
skb_frags = skb_shinfo(skb)->frags;
while (len > 0) {
memcpy(skb_frags, rx_frags, sizeof(*skb_frags));
- len -= rx_frags->size;
+ len -= rx_frags->length;
skb_frags++;
rx_frags++;
skb_shinfo(skb)->nr_frags++;
@@ -899,8 +899,8 @@ myri10ge_rx_skb_build(struct sk_buff *sk
* the beginning of the packet in skb_headlen(), move it
* manually */
skb_copy_to_linear_data(skb, va, hlen);
- skb_shinfo(skb)->frags[0].page_offset += hlen;
- skb_shinfo(skb)->frags[0].size -= hlen;
+ skb_shinfo(skb)->frags[0].offset += hlen;
+ skb_shinfo(skb)->frags[0].length -= hlen;
skb->data_len -= hlen;
skb->tail += hlen;
skb_pull(skb, MXGEFW_PAD);
@@ -994,7 +994,7 @@ myri10ge_rx_done(struct myri10ge_priv *m
int bytes, int len, __wsum csum)
{
struct sk_buff *skb;
- struct skb_frag_struct rx_frags[MYRI10GE_MAX_FRAGS_PER_FRAME];
+ skb_frag_t rx_frags[MYRI10GE_MAX_FRAGS_PER_FRAME];
int i, idx, hlen, remainder;
struct pci_dev *pdev = mgp->pdev;
struct net_device *dev = mgp->dev;
@@ -1008,11 +1008,11 @@ myri10ge_rx_done(struct myri10ge_priv *m
for (i = 0, remainder = len; remainder > 0; i++) {
myri10ge_unmap_rx_page(pdev, &rx->info[idx], bytes);
rx_frags[i].page = rx->info[idx].page;
- rx_frags[i].page_offset = rx->info[idx].page_offset;
+ rx_frags[i].offset = rx->info[idx].page_offset;
if (remainder < MYRI10GE_ALLOC_SIZE)
- rx_frags[i].size = remainder;
+ rx_frags[i].length = remainder;
else
- rx_frags[i].size = MYRI10GE_ALLOC_SIZE;
+ rx_frags[i].length = MYRI10GE_ALLOC_SIZE;
rx->cnt++;
idx = rx->cnt & rx->mask;
remainder -= MYRI10GE_ALLOC_SIZE;
@@ -1034,7 +1034,7 @@ myri10ge_rx_done(struct myri10ge_priv *m
/* Attach the pages to the skb, and trim off any padding */
myri10ge_rx_skb_build(skb, va, rx_frags, len, hlen);
- if (skb_shinfo(skb)->frags[0].size <= 0) {
+ if (skb_shinfo(skb)->frags[0].length <= 0) {
put_page(skb_shinfo(skb)->frags[0].page);
skb_shinfo(skb)->nr_frags = 0;
}
@@ -2026,7 +2026,7 @@ static int myri10ge_xmit(struct sk_buff
struct myri10ge_priv *mgp = netdev_priv(dev);
struct mcp_kreq_ether_send *req;
struct myri10ge_tx_buf *tx = &mgp->tx;
- struct skb_frag_struct *frag;
+ skb_frag_t *frag;
dma_addr_t bus;
u32 low;
__be32 high_swapped;
@@ -2214,8 +2214,8 @@ again:
idx = (count + tx->req) & tx->mask;
frag = &skb_shinfo(skb)->frags[frag_idx];
frag_idx++;
- len = frag->size;
- bus = pci_map_page(mgp->pdev, frag->page, frag->page_offset,
+ len = frag->length;
+ bus = pci_map_page(mgp->pdev, frag->page, frag->offset,
len, PCI_DMA_TODEVICE);
pci_unmap_addr_set(&tx->info[idx], bus, bus);
pci_unmap_len_set(&tx->info[idx], len, len);
--- a/drivers/net/netxen/netxen_nic_main.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/netxen/netxen_nic_main.c 2007-07-05 15:17:32.000000000 -0700
@@ -996,7 +996,7 @@ static int netxen_nic_xmit_frame(struct
hwdesc->addr_buffer1 = cpu_to_le64(buffrag->dma);
for (i = 1, k = 1; i < frag_count; i++, k++) {
- struct skb_frag_struct *frag;
+ skb_frag_t *frag;
int len, temp_len;
unsigned long offset;
dma_addr_t temp_dma;
@@ -1010,8 +1010,8 @@ static int netxen_nic_xmit_frame(struct
memset(hwdesc, 0, sizeof(struct cmd_desc_type0));
}
frag = &skb_shinfo(skb)->frags[i - 1];
- len = frag->size;
- offset = frag->page_offset;
+ len = frag->length;
+ offset = frag->offset;
temp_len = len;
temp_dma = pci_map_page(adapter->pdev, frag->page, offset,
--- a/drivers/net/ns83820.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/ns83820.c 2007-07-05 15:23:47.000000000 -0700
@@ -1187,13 +1187,12 @@ again:
if (!nr_frags)
break;
- buf = pci_map_page(dev->pci_dev, frag->page,
- frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
+ buf = pci_map_page(dev->pci_dev, frag->page, frag->offset,
+ frag->length, PCI_DMA_TODEVICE);
dprintk("frag: buf=%08Lx page=%08lx offset=%08lx\n",
(long long)buf, (long) page_to_pfn(frag->page),
- frag->page_offset);
- len = frag->size;
+ frag->offset);
+ len = frag->length;
frag++;
nr_frags--;
}
--- a/drivers/net/qla3xxx.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/qla3xxx.c 2007-07-05 15:35:53.000000000 -0700
@@ -2548,7 +2548,7 @@ static int ql_send_map(struct ql3_adapte
map =
pci_map_page(qdev->pdev, frag->page,
- frag->page_offset, frag->size,
+ frag->offset, frag->length,
PCI_DMA_TODEVICE);
err = pci_dma_mapping_error(map);
@@ -2560,10 +2560,10 @@ static int ql_send_map(struct ql3_adapte
oal_entry->dma_lo = cpu_to_le32(LS_64BITS(map));
oal_entry->dma_hi = cpu_to_le32(MS_64BITS(map));
- oal_entry->len = cpu_to_le32(frag->size);
+ oal_entry->len = cpu_to_le32(frag->length);
pci_unmap_addr_set(&tx_cb->map[seg], mapaddr, map);
pci_unmap_len_set(&tx_cb->map[seg], maplen,
- frag->size);
+ frag->length);
}
/* Terminate the last segment. */
oal_entry->len =
--- a/drivers/net/r8169.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/r8169.c 2007-07-05 15:36:00.000000000 -0700
@@ -2243,8 +2243,8 @@ static int rtl8169_xmit_frags(struct rtl
entry = (entry + 1) % NUM_TX_DESC;
txd = tp->TxDescArray + entry;
- len = frag->size;
- addr = ((void *) page_address(frag->page)) + frag->page_offset;
+ len = frag->length;
+ addr = ((void *) page_address(frag->page)) + frag->offset;
mapping = pci_map_single(tp->pci_dev, addr, len, PCI_DMA_TODEVICE);
/* anti gcc 2.95.3 bugware (sic) */
--- a/drivers/net/s2io.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/s2io.c 2007-07-05 15:36:39.000000000 -0700
@@ -2141,9 +2141,9 @@ static struct sk_buff *s2io_txdl_getskb(
skb_frag_t *frag = &skb_shinfo(skb)->frags[j];
if (!txds->Buffer_Pointer)
break;
- pci_unmap_page(nic->pdev, (dma_addr_t)
- txds->Buffer_Pointer,
- frag->size, PCI_DMA_TODEVICE);
+ pci_unmap_page(nic->pdev,
+ (dma_addr_t) txds->Buffer_Pointer,
+ frag->length, PCI_DMA_TODEVICE);
}
}
memset(txdlp,0, (sizeof(struct TxD) * fifo_data->max_txds));
@@ -4087,13 +4087,15 @@ static int s2io_xmit(struct sk_buff *skb
for (i = 0; i < frg_cnt; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
/* A '0' length fragment will be ignored */
- if (!frag->size)
+ if (!frag->length)
continue;
txdp++;
- txdp->Buffer_Pointer = (u64) pci_map_page
- (sp->pdev, frag->page, frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
- txdp->Control_1 = TXD_BUFFER0_SIZE(frag->size);
+ txdp->Buffer_Pointer = (u64) pci_map_page(sp->pdev,
+ frag->page,
+ frag->offset,
+ frag->length,
+ PCI_DMA_TODEVICE);
+ txdp->Control_1 = TXD_BUFFER0_SIZE(frag->length);
if (offload_type == SKB_GSO_UDP)
txdp->Control_1 |= TXD_UFO_EN;
}
--- a/drivers/net/sk98lin/skge.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/sk98lin/skge.c 2007-07-05 15:23:10.000000000 -0700
@@ -1721,15 +1721,15 @@ struct sk_buff *pMessage) /* pointer to
*/
PhysAddr = (SK_U64) pci_map_page(pAC->PciDev,
sk_frag->page,
- sk_frag->page_offset,
- sk_frag->size,
+ sk_frag->offset,
+ sk_frag->length,
PCI_DMA_TODEVICE);
pTxd->VDataLow = (SK_U32) (PhysAddr & 0xffffffff);
pTxd->VDataHigh = (SK_U32) (PhysAddr >> 32);
pTxd->pMBuf = pMessage;
- pTxd->TBControl = Control | BMU_OWN | sk_frag->size;
+ pTxd->TBControl = Control | BMU_OWN | sk_frag->length;
/*
** Do we have the last fragment?
@@ -1745,7 +1745,7 @@ struct sk_buff *pMessage) /* pointer to
pTxdLst = pTxd;
pTxd = pTxd->pNextTxd;
pTxPort->TxdRingFree--;
- BytesSend += sk_frag->size;
+ BytesSend += sk_frag->length;
}
/*
--- a/drivers/net/skge.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/skge.c 2007-07-05 15:33:23.000000000 -0700
@@ -2684,8 +2684,8 @@ static int skge_xmit_frame(struct sk_buf
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- map = pci_map_page(hw->pdev, frag->page, frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
+ map = pci_map_page(hw->pdev, frag->page, frag->offset,
+ frag->length, PCI_DMA_TODEVICE);
e = e->next;
e->skb = skb;
@@ -2695,9 +2695,9 @@ static int skge_xmit_frame(struct sk_buf
tf->dma_lo = map;
tf->dma_hi = (u64) map >> 32;
pci_unmap_addr_set(e, mapaddr, map);
- pci_unmap_len_set(e, maplen, frag->size);
+ pci_unmap_len_set(e, maplen, frag->length);
- tf->control = BMU_OWN | BMU_SW | control | frag->size;
+ tf->control = BMU_OWN | BMU_SW | control | frag->length;
}
tf->control |= BMU_EOF | BMU_IRQ_EOF;
}
--- a/drivers/net/sky2.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/sky2.c 2007-07-05 15:34:18.000000000 -0700
@@ -912,8 +912,8 @@ static void sky2_rx_map_skb(struct pci_d
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
re->frag_addr[i] = pci_map_page(pdev,
skb_shinfo(skb)->frags[i].page,
- skb_shinfo(skb)->frags[i].page_offset,
- skb_shinfo(skb)->frags[i].size,
+ skb_shinfo(skb)->frags[i].offset,
+ skb_shinfo(skb)->frags[i].length,
PCI_DMA_FROMDEVICE);
}
@@ -927,7 +927,7 @@ static void sky2_rx_unmap_skb(struct pci
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
pci_unmap_page(pdev, re->frag_addr[i],
- skb_shinfo(skb)->frags[i].size,
+ skb_shinfo(skb)->frags[i].length,
PCI_DMA_FROMDEVICE);
}
@@ -1457,8 +1457,8 @@ static int sky2_xmit_frame(struct sk_buf
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- mapping = pci_map_page(hw->pdev, frag->page, frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
+ mapping = pci_map_page(hw->pdev, frag->page, frag->offset,
+ frag->length, PCI_DMA_TODEVICE);
addr64 = high32(mapping);
if (addr64 != sky2->tx_addr64) {
le = get_tx_le(sky2);
@@ -1470,14 +1470,14 @@ static int sky2_xmit_frame(struct sk_buf
le = get_tx_le(sky2);
le->addr = cpu_to_le32((u32) mapping);
- le->length = cpu_to_le16(frag->size);
+ le->length = cpu_to_le16(frag->length);
le->ctrl = ctrl;
le->opcode = OP_BUFFER | HW_OWNER;
re = tx_le_re(sky2, le);
re->skb = skb;
pci_unmap_addr_set(re, mapaddr, mapping);
- pci_unmap_len_set(re, maplen, frag->size);
+ pci_unmap_len_set(re, maplen, frag->length);
}
le->ctrl |= EOP;
@@ -2002,7 +2002,7 @@ static void skb_put_frags(struct sk_buff
} else {
size = min(length, (unsigned) PAGE_SIZE);
- frag->size = size;
+ frag->length = size;
skb->data_len += size;
skb->truesize += size;
skb->len += size;
--- a/drivers/net/starfire.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/starfire.c 2007-07-05 15:34:54.000000000 -0700
@@ -1262,9 +1262,12 @@ static int start_tx(struct sk_buff *skb,
pci_map_single(np->pci_dev, skb->data, skb_first_frag_len(skb), PCI_DMA_TODEVICE);
} else {
skb_frag_t *this_frag = &skb_shinfo(skb)->frags[i - 1];
- status |= this_frag->size;
+ status |= this_frag->length;
np->tx_info[entry].mapping =
- pci_map_single(np->pci_dev, page_address(this_frag->page) + this_frag->page_offset, this_frag->size, PCI_DMA_TODEVICE);
+ pci_map_single(np->pci_dev,
+ page_address(this_frag->page) + this_frag->offset,
+ this_frag->length,
+ PCI_DMA_TODEVICE);
}
np->tx_ring[entry].addr = cpu_to_dma(np->tx_info[entry].mapping);
@@ -1362,7 +1365,7 @@ static irqreturn_t intr_handler(int irq,
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
pci_unmap_single(np->pci_dev,
np->tx_info[entry].mapping,
- skb_shinfo(skb)->frags[i].size,
+ skb_shinfo(skb)->frags[i].length,
PCI_DMA_TODEVICE);
np->dirty_tx++;
entry++;
--- a/drivers/net/sungem.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/sungem.c 2007-07-05 15:19:49.000000000 -0700
@@ -1101,10 +1101,10 @@ static int gem_start_xmit(struct sk_buff
dma_addr_t mapping;
u64 this_ctrl;
- len = this_frag->size;
+ len = this_frag->length;
mapping = pci_map_page(gp->pdev,
this_frag->page,
- this_frag->page_offset,
+ this_frag->offset,
len, PCI_DMA_TODEVICE);
this_ctrl = ctrl;
if (frag == skb_shinfo(skb)->nr_frags - 1)
--- a/drivers/net/sunhme.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/sunhme.c 2007-07-05 15:16:45.000000000 -0700
@@ -2313,10 +2313,10 @@ static int happy_meal_start_xmit(struct
skb_frag_t *this_frag = &skb_shinfo(skb)->frags[frag];
u32 len, mapping, this_txflags;
- len = this_frag->size;
+ len = this_frag->length;
mapping = hme_dma_map(hp,
((void *) page_address(this_frag->page) +
- this_frag->page_offset),
+ this_frag->offset),
len, DMA_TODEVICE);
this_txflags = tx_flags;
if (frag == skb_shinfo(skb)->nr_frags - 1)
--- a/drivers/net/tg3.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/tg3.c 2007-07-05 15:24:11.000000000 -0700
@@ -3095,7 +3095,7 @@ static void tg3_tx(struct tg3 *tp)
pci_unmap_page(tp->pdev,
pci_unmap_addr(ri, mapping),
- skb_shinfo(skb)->frags[i].size,
+ skb_shinfo(skb)->frags[i].length,
PCI_DMA_TODEVICE);
sw_idx = NEXT_TX(sw_idx);
@@ -3835,7 +3835,7 @@ static int tigon3_dma_hwbug_workaround(s
if (i == 0)
len = skb_headlen(skb);
else
- len = skb_shinfo(skb)->frags[i-1].size;
+ len = skb_shinfo(skb)->frags[i-1].length;
pci_unmap_single(tp->pdev,
pci_unmap_addr(&tp->tx_buffers[entry], mapping),
len, PCI_DMA_TODEVICE);
@@ -3962,10 +3962,10 @@ static int tg3_start_xmit(struct sk_buff
for (i = 0; i <= last; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- len = frag->size;
+ len = frag->length;
mapping = pci_map_page(tp->pdev,
frag->page,
- frag->page_offset,
+ frag->offset,
len, PCI_DMA_TODEVICE);
tp->tx_buffers[entry].skb = NULL;
@@ -4144,10 +4144,10 @@ static int tg3_start_xmit_dma_bug(struct
for (i = 0; i <= last; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- len = frag->size;
+ len = frag->length;
mapping = pci_map_page(tp->pdev,
frag->page,
- frag->page_offset,
+ frag->offset,
len, PCI_DMA_TODEVICE);
tp->tx_buffers[entry].skb = NULL;
@@ -4321,7 +4321,7 @@ static void tg3_free_rings(struct tg3 *t
txp = &tp->tx_buffers[i & (TG3_TX_RING_SIZE - 1)];
pci_unmap_page(tp->pdev,
pci_unmap_addr(txp, mapping),
- skb_shinfo(skb)->frags[j].size,
+ skb_shinfo(skb)->frags[j].length,
PCI_DMA_TODEVICE);
i++;
}
--- a/drivers/net/tsi108_eth.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/tsi108_eth.c 2007-07-05 14:38:04.000000000 -0700
@@ -715,7 +715,7 @@ static int tsi108_send_packet(struct sk_
skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
data->txring[tx].buf0 =
- dma_map_page(NULL, frag->page, frag->page_offset,
+ dma_map_page(NULL, frag->page, frag->offset,
frag->size, DMA_TO_DEVICE);
data->txring[tx].len = frag->size;
}
--- a/drivers/net/typhoon.c 2007-06-05 13:27:36.000000000 -0700
+++ b/drivers/net/typhoon.c 2007-07-05 14:38:04.000000000 -0700
@@ -874,7 +874,7 @@ typhoon_start_tx(struct sk_buff *skb, st
len = frag->size;
frag_addr = (void *) page_address(frag->page) +
- frag->page_offset;
+ frag->offset;
skb_dma = pci_map_single(tp->tx_pdev, frag_addr, len,
PCI_DMA_TODEVICE);
txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
--- a/drivers/net/via-velocity.c 2007-07-05 14:21:36.000000000 -0700
+++ b/drivers/net/via-velocity.c 2007-07-05 14:38:04.000000000 -0700
@@ -1966,7 +1966,7 @@ static int velocity_xmit(struct sk_buff
for (i = 0; i < nfrags; i++) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
- void *addr = ((void *) page_address(frag->page + frag->page_offset));
+ void *addr = ((void *) page_address(frag->page + frag->offset));
tdinfo->skb_dma[i + 1] = pci_map_single(vptr->pdev, addr, frag->size, PCI_DMA_TODEVICE);
--- a/net/appletalk/ddp.c 2007-06-05 13:27:45.000000000 -0700
+++ b/net/appletalk/ddp.c 2007-07-05 15:01:28.000000000 -0700
@@ -957,7 +957,7 @@ static unsigned long atalk_sum_skb(const
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
u8 *vaddr;
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
@@ -965,7 +965,7 @@ static unsigned long atalk_sum_skb(const
if (copy > len)
copy = len;
vaddr = kmap_skb_frag(frag);
- sum = atalk_sum_partial(vaddr + frag->page_offset +
+ sum = atalk_sum_partial(vaddr + frag->offset +
offset - start, copy, sum);
kunmap_skb_frag(vaddr);
--- a/net/core/datagram.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/core/datagram.c 2007-07-05 15:07:51.000000000 -0700
@@ -267,7 +267,7 @@ int skb_copy_datagram_iovec(const struct
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
int err;
u8 *vaddr;
@@ -277,7 +277,7 @@ int skb_copy_datagram_iovec(const struct
if (copy > len)
copy = len;
vaddr = kmap(page);
- err = memcpy_toiovec(to, vaddr + frag->page_offset +
+ err = memcpy_toiovec(to, vaddr + frag->offset +
offset - start, copy);
kunmap(page);
if (err)
@@ -348,7 +348,7 @@ static int skb_copy_and_csum_datagram(co
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
__wsum csum2;
int err = 0;
@@ -359,8 +359,7 @@ static int skb_copy_and_csum_datagram(co
if (copy > len)
copy = len;
vaddr = kmap(page);
- csum2 = csum_and_copy_to_user(vaddr +
- frag->page_offset +
+ csum2 = csum_and_copy_to_user(vaddr + frag->offset +
offset - start,
to, copy, 0, &err);
kunmap(page);
--- a/net/core/skbuff.c 2007-07-05 14:21:36.000000000 -0700
+++ b/net/core/skbuff.c 2007-07-05 15:10:06.000000000 -0700
@@ -837,14 +837,14 @@ int ___pskb_trim(struct sk_buff *skb, un
goto drop_pages;
for (; i < nfrags; i++) {
- int end = offset + skb_shinfo(skb)->frags[i].size;
+ int end = offset + skb_shinfo(skb)->frags[i].length;
if (end < len) {
offset = end;
continue;
}
- skb_shinfo(skb)->frags[i++].size = len - offset;
+ skb_shinfo(skb)->frags[i++].length = len - offset;
drop_pages:
skb_shinfo(skb)->nr_frags = i;
@@ -952,9 +952,9 @@ unsigned char *__pskb_pull_tail(struct s
/* Estimate size of pulled pages. */
eat = delta;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- if (skb_shinfo(skb)->frags[i].size >= eat)
+ if (skb_shinfo(skb)->frags[i].length >= eat)
goto pull_pages;
- eat -= skb_shinfo(skb)->frags[i].size;
+ eat -= skb_shinfo(skb)->frags[i].length;
}
/* If we need update frag list, we are in troubles.
@@ -1018,14 +1018,14 @@ pull_pages:
eat = delta;
k = 0;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- if (skb_shinfo(skb)->frags[i].size <= eat) {
+ if (skb_shinfo(skb)->frags[i].length <= eat) {
put_page(skb_shinfo(skb)->frags[i].page);
- eat -= skb_shinfo(skb)->frags[i].size;
+ eat -= skb_shinfo(skb)->frags[i].length;
} else {
skb_shinfo(skb)->frags[k] = skb_shinfo(skb)->frags[i];
if (eat) {
- skb_shinfo(skb)->frags[k].page_offset += eat;
- skb_shinfo(skb)->frags[k].size -= eat;
+ skb_shinfo(skb)->frags[k].offset += eat;
+ skb_shinfo(skb)->frags[k].length -= eat;
eat = 0;
}
k++;
@@ -1065,7 +1065,7 @@ int skb_copy_bits(const struct sk_buff *
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
u8 *vaddr;
@@ -1074,7 +1074,7 @@ int skb_copy_bits(const struct sk_buff *
vaddr = kmap_skb_frag(&skb_shinfo(skb)->frags[i]);
memcpy(to,
- vaddr + skb_shinfo(skb)->frags[i].page_offset+
+ vaddr + skb_shinfo(skb)->frags[i].offset +
offset - start, copy);
kunmap_skb_frag(vaddr);
@@ -1152,7 +1152,7 @@ int skb_store_bits(struct sk_buff *skb,
BUG_TRAP(start <= offset + len);
- end = start + frag->size;
+ end = start + frag->length;
if ((copy = end - offset) > 0) {
u8 *vaddr;
@@ -1160,7 +1160,7 @@ int skb_store_bits(struct sk_buff *skb,
copy = len;
vaddr = kmap_skb_frag(frag);
- memcpy(vaddr + frag->page_offset + offset - start,
+ memcpy(vaddr + frag->offset + offset - start,
from, copy);
kunmap_skb_frag(vaddr);
@@ -1229,7 +1229,7 @@ __wsum skb_checksum(const struct sk_buff
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
__wsum csum2;
u8 *vaddr;
@@ -1238,7 +1238,7 @@ __wsum skb_checksum(const struct sk_buff
if (copy > len)
copy = len;
vaddr = kmap_skb_frag(frag);
- csum2 = csum_partial(vaddr + frag->page_offset +
+ csum2 = csum_partial(vaddr + frag->offset +
offset - start, copy, 0);
kunmap_skb_frag(vaddr);
csum = csum_block_add(csum, csum2, pos);
@@ -1306,7 +1306,7 @@ __wsum skb_copy_and_csum_bits(const stru
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
__wsum csum2;
u8 *vaddr;
@@ -1316,7 +1316,7 @@ __wsum skb_copy_and_csum_bits(const stru
copy = len;
vaddr = kmap_skb_frag(frag);
csum2 = csum_partial_copy_nocheck(vaddr +
- frag->page_offset +
+ frag->offset +
offset - start, to,
copy, 0);
kunmap_skb_frag(vaddr);
@@ -1574,7 +1574,7 @@ static inline void skb_split_no_header(s
skb->data_len = len - pos;
for (i = 0; i < nfrags; i++) {
- int size = skb_shinfo(skb)->frags[i].size;
+ int size = skb_shinfo(skb)->frags[i].length;
if (pos + size > len) {
skb_shinfo(skb1)->frags[k] = skb_shinfo(skb)->frags[i];
@@ -1589,9 +1589,9 @@ static inline void skb_split_no_header(s
* 2. Split is accurately. We make this.
*/
get_page(skb_shinfo(skb)->frags[i].page);
- skb_shinfo(skb1)->frags[0].page_offset += len - pos;
- skb_shinfo(skb1)->frags[0].size -= len - pos;
- skb_shinfo(skb)->frags[i].size = len - pos;
+ skb_shinfo(skb1)->frags[0].offset += len - pos;
+ skb_shinfo(skb1)->frags[0].length -= len - pos;
+ skb_shinfo(skb)->frags[i].length = len - pos;
skb_shinfo(skb)->nr_frags++;
}
k++;
@@ -1685,13 +1685,13 @@ next_skb:
while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) {
frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
- block_limit = frag->size + st->stepped_offset;
+ block_limit = frag->length + st->stepped_offset;
if (abs_offset < block_limit) {
if (!st->frag_data)
st->frag_data = kmap_skb_frag(frag);
- *data = (u8 *) st->frag_data + frag->page_offset +
+ *data = (u8 *) st->frag_data + frag->offset +
(abs_offset - st->stepped_offset);
return block_limit - abs_offset;
@@ -1703,7 +1703,7 @@ next_skb:
}
st->frag_idx++;
- st->stepped_offset += frag->size;
+ st->stepped_offset += frag->length;
}
if (st->frag_data) {
@@ -1829,18 +1829,18 @@ int skb_append_datato_frags(struct sock
frag = &skb_shinfo(skb)->frags[frg_cnt - 1];
/* copy the user data to page */
- left = PAGE_SIZE - frag->page_offset;
+ left = PAGE_SIZE - frag->offset;
copy = (length > left)? left : length;
ret = getfrag(from, (page_address(frag->page) +
- frag->page_offset + frag->size),
+ frag->offset + frag->length),
offset, copy, 0, skb);
if (ret < 0)
return -EFAULT;
/* copy was successful so update the size parameters */
sk->sk_sndmsg_off += copy;
- frag->size += copy;
+ frag->length += copy;
skb->len += copy;
skb->data_len += copy;
offset += copy;
@@ -1964,11 +1964,11 @@ struct sk_buff *skb_segment(struct sk_bu
*frag = skb_shinfo(skb)->frags[i];
get_page(frag->page);
- size = frag->size;
+ size = frag->length;
if (pos < offset) {
- frag->page_offset += offset - pos;
- frag->size -= offset - pos;
+ frag->offset += offset - pos;
+ frag->length -= offset - pos;
}
k++;
@@ -1977,7 +1977,7 @@ struct sk_buff *skb_segment(struct sk_bu
i++;
pos += size;
} else {
- frag->size -= pos + size - (offset + len);
+ frag->length -= pos + size - (offset + len);
break;
}
@@ -2051,14 +2051,14 @@ skb_to_sgvec(struct sk_buff *skb, struct
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
if (copy > len)
copy = len;
sg[elt].page = frag->page;
- sg[elt].offset = frag->page_offset+offset-start;
+ sg[elt].offset = frag->offset+offset-start;
sg[elt].length = copy;
elt++;
if (!(len -= copy))
--- a/net/core/sock.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/core/sock.c 2007-07-05 15:03:43.000000000 -0700
@@ -1215,10 +1215,10 @@ static struct sk_buff *sock_alloc_send_p
frag = &skb_shinfo(skb)->frags[i];
frag->page = page;
- frag->page_offset = 0;
- frag->size = (data_len >= PAGE_SIZE ?
- PAGE_SIZE :
- data_len);
+ frag->offset = 0;
+ frag->length = (data_len >= PAGE_SIZE ?
+ PAGE_SIZE :
+ data_len);
data_len -= PAGE_SIZE;
}
--- a/net/core/user_dma.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/core/user_dma.c 2007-07-05 14:38:04.000000000 -0700
@@ -83,7 +83,7 @@ int dma_skb_copy_datagram_iovec(struct d
copy = len;
cookie = dma_memcpy_pg_to_iovec(chan, to, pinned_list, page,
- frag->page_offset + offset - start, copy);
+ frag->offset + offset - start, copy);
if (cookie < 0)
goto fault;
len -= copy;
--- a/drivers/net/cxgb3/adapter.h 2007-06-05 13:27:35.000000000 -0700
+++ b/drivers/net/cxgb3/adapter.h 2007-07-05 15:02:48.000000000 -0700
@@ -75,7 +75,7 @@ struct rx_desc;
struct rx_sw_desc;
struct sge_fl_page {
- struct skb_frag_struct frag;
+ skb_frag_t frag;
unsigned char *va;
};
--- a/net/core/pktgen.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/core/pktgen.c 2007-07-05 15:12:00.000000000 -0700
@@ -6,7 +6,7 @@
*
* Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
* Ben Greear <greearb@candelatech.com>
- * Jens Låås <jens.laas@data.slu.se>
+ * Jens Låås <jens.laas@data.slu.se>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -2415,12 +2415,12 @@ static struct sk_buff *fill_packet_ipv4(
while (datalen > 0) {
struct page *page = alloc_pages(GFP_KERNEL, 0);
skb_shinfo(skb)->frags[i].page = page;
- skb_shinfo(skb)->frags[i].page_offset = 0;
- skb_shinfo(skb)->frags[i].size =
+ skb_shinfo(skb)->frags[i].offset = 0;
+ skb_shinfo(skb)->frags[i].length =
(datalen < PAGE_SIZE ? datalen : PAGE_SIZE);
- datalen -= skb_shinfo(skb)->frags[i].size;
- skb->len += skb_shinfo(skb)->frags[i].size;
- skb->data_len += skb_shinfo(skb)->frags[i].size;
+ datalen -= skb_shinfo(skb)->frags[i].length;
+ skb->len += skb_shinfo(skb)->frags[i].length;
+ skb->data_len += skb_shinfo(skb)->frags[i].length;
i++;
skb_shinfo(skb)->nr_frags = i;
}
@@ -2431,20 +2431,20 @@ static struct sk_buff *fill_packet_ipv4(
if (i == 0)
break;
- rem = skb_shinfo(skb)->frags[i - 1].size / 2;
+ rem = skb_shinfo(skb)->frags[i - 1].length / 2;
if (rem == 0)
break;
- skb_shinfo(skb)->frags[i - 1].size -= rem;
+ skb_shinfo(skb)->frags[i - 1].length -= rem;
skb_shinfo(skb)->frags[i] =
skb_shinfo(skb)->frags[i - 1];
get_page(skb_shinfo(skb)->frags[i].page);
skb_shinfo(skb)->frags[i].page =
skb_shinfo(skb)->frags[i - 1].page;
- skb_shinfo(skb)->frags[i].page_offset +=
- skb_shinfo(skb)->frags[i - 1].size;
- skb_shinfo(skb)->frags[i].size = rem;
+ skb_shinfo(skb)->frags[i].offset +=
+ skb_shinfo(skb)->frags[i - 1].length;
+ skb_shinfo(skb)->frags[i].length = rem;
i++;
skb_shinfo(skb)->nr_frags = i;
}
@@ -2763,12 +2763,12 @@ static struct sk_buff *fill_packet_ipv6(
while (datalen > 0) {
struct page *page = alloc_pages(GFP_KERNEL, 0);
skb_shinfo(skb)->frags[i].page = page;
- skb_shinfo(skb)->frags[i].page_offset = 0;
- skb_shinfo(skb)->frags[i].size =
+ skb_shinfo(skb)->frags[i].offset = 0;
+ skb_shinfo(skb)->frags[i].length =
(datalen < PAGE_SIZE ? datalen : PAGE_SIZE);
- datalen -= skb_shinfo(skb)->frags[i].size;
- skb->len += skb_shinfo(skb)->frags[i].size;
- skb->data_len += skb_shinfo(skb)->frags[i].size;
+ datalen -= skb_shinfo(skb)->frags[i].length;
+ skb->len += skb_shinfo(skb)->frags[i].length;
+ skb->data_len += skb_shinfo(skb)->frags[i].length;
i++;
skb_shinfo(skb)->nr_frags = i;
}
@@ -2779,20 +2779,20 @@ static struct sk_buff *fill_packet_ipv6(
if (i == 0)
break;
- rem = skb_shinfo(skb)->frags[i - 1].size / 2;
+ rem = skb_shinfo(skb)->frags[i - 1].length / 2;
if (rem == 0)
break;
- skb_shinfo(skb)->frags[i - 1].size -= rem;
+ skb_shinfo(skb)->frags[i - 1].length -= rem;
skb_shinfo(skb)->frags[i] =
skb_shinfo(skb)->frags[i - 1];
get_page(skb_shinfo(skb)->frags[i].page);
skb_shinfo(skb)->frags[i].page =
skb_shinfo(skb)->frags[i - 1].page;
- skb_shinfo(skb)->frags[i].page_offset +=
- skb_shinfo(skb)->frags[i - 1].size;
- skb_shinfo(skb)->frags[i].size = rem;
+ skb_shinfo(skb)->frags[i].offset +=
+ skb_shinfo(skb)->frags[i - 1].length;
+ skb_shinfo(skb)->frags[i].length = rem;
i++;
skb_shinfo(skb)->nr_frags = i;
}
--- a/net/ipv4/ip_fragment.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/ipv4/ip_fragment.c 2007-07-05 15:04:38.000000000 -0700
@@ -647,8 +647,8 @@ static struct sk_buff *ip_frag_reasm(str
head->next = clone;
skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
skb_shinfo(head)->frag_list = NULL;
- for (i=0; i<skb_shinfo(head)->nr_frags; i++)
- plen += skb_shinfo(head)->frags[i].size;
+ for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
+ plen += skb_shinfo(head)->frags[i].length;
clone->len = clone->data_len = head->data_len - plen;
head->data_len -= clone->len;
head->len -= clone->len;
--- a/net/ipv4/ip_output.c 2007-07-05 14:21:36.000000000 -0700
+++ b/net/ipv4/ip_output.c 2007-07-05 15:36:39.000000000 -0700
@@ -1019,12 +1019,15 @@ alloc_new_skb:
err = -EMSGSIZE;
goto error;
}
- if (getfrag(from, page_address(frag->page)+frag->page_offset+frag->size, offset, copy, skb->len, skb) < 0) {
+ if (getfrag(from,
+ page_address(frag->page)
+ + frag->offset + frag->length,
+ offset, copy, skb->len, skb) < 0) {
err = -EFAULT;
goto error;
}
sk->sk_sndmsg_off += copy;
- frag->size += copy;
+ frag->length += copy;
skb->len += copy;
skb->data_len += copy;
}
@@ -1152,7 +1155,7 @@ ssize_t ip_append_page(struct sock *sk,
if (len > size)
len = size;
if (skb_can_coalesce(skb, i, page, offset)) {
- skb_shinfo(skb)->frags[i-1].size += len;
+ skb_shinfo(skb)->frags[i-1].length += len;
} else if (i < MAX_SKB_FRAGS) {
get_page(page);
skb_fill_page_desc(skb, i, page, offset, len);
--- a/net/ipv4/tcp.c 2007-07-05 14:21:36.000000000 -0700
+++ b/net/ipv4/tcp.c 2007-07-05 15:13:23.000000000 -0700
@@ -558,7 +558,7 @@ new_segment:
goto wait_for_memory;
if (can_coalesce) {
- skb_shinfo(skb)->frags[i - 1].size += copy;
+ skb_shinfo(skb)->frags[i - 1].length += copy;
} else {
get_page(page);
skb_fill_page_desc(skb, i, page, offset, copy);
@@ -799,10 +799,9 @@ new_segment:
}
/* Update the skb. */
- if (merge) {
- skb_shinfo(skb)->frags[i - 1].size +=
- copy;
- } else {
+ if (merge)
+ skb_shinfo(skb)->frags[i - 1].length += copy;
+ else {
skb_fill_page_desc(skb, i, page, off, copy);
if (TCP_PAGE(sk)) {
get_page(page);
--- a/net/ipv4/tcp_output.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/ipv4/tcp_output.c 2007-07-05 15:21:44.000000000 -0700
@@ -720,14 +720,14 @@ static void __pskb_trim_head(struct sk_b
eat = len;
k = 0;
for (i=0; i<skb_shinfo(skb)->nr_frags; i++) {
- if (skb_shinfo(skb)->frags[i].size <= eat) {
+ if (skb_shinfo(skb)->frags[i].length <= eat) {
put_page(skb_shinfo(skb)->frags[i].page);
- eat -= skb_shinfo(skb)->frags[i].size;
+ eat -= skb_shinfo(skb)->frags[i].length;
} else {
skb_shinfo(skb)->frags[k] = skb_shinfo(skb)->frags[i];
if (eat) {
- skb_shinfo(skb)->frags[k].page_offset += eat;
- skb_shinfo(skb)->frags[k].size -= eat;
+ skb_shinfo(skb)->frags[k].offset += eat;
+ skb_shinfo(skb)->frags[k].length -= eat;
eat = 0;
}
k++;
--- a/net/ipv6/ip6_output.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/ipv6/ip6_output.c 2007-07-05 15:36:39.000000000 -0700
@@ -1314,12 +1314,15 @@ alloc_new_skb:
err = -EMSGSIZE;
goto error;
}
- if (getfrag(from, page_address(frag->page)+frag->page_offset+frag->size, offset, copy, skb->len, skb) < 0) {
+ if (getfrag(from,
+ page_address(frag->page)
+ + frag->offset + frag->length,
+ offset, copy, skb->len, skb) < 0) {
err = -EFAULT;
goto error;
}
sk->sk_sndmsg_off += copy;
- frag->size += copy;
+ frag->length += copy;
skb->len += copy;
skb->data_len += copy;
}
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c 2007-07-05 15:13:29.000000000 -0700
@@ -612,7 +612,7 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_que
skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
skb_shinfo(head)->frag_list = NULL;
for (i=0; i<skb_shinfo(head)->nr_frags; i++)
- plen += skb_shinfo(head)->frags[i].size;
+ plen += skb_shinfo(head)->frags[i].length;
clone->len = clone->data_len = head->data_len - plen;
head->data_len -= clone->len;
head->len -= clone->len;
--- a/net/ipv6/reassembly.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/ipv6/reassembly.c 2007-07-05 15:22:18.000000000 -0700
@@ -634,7 +634,7 @@ static int ip6_frag_reasm(struct frag_qu
skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
skb_shinfo(head)->frag_list = NULL;
for (i=0; i<skb_shinfo(head)->nr_frags; i++)
- plen += skb_shinfo(head)->frags[i].size;
+ plen += skb_shinfo(head)->frags[i].length;
clone->len = clone->data_len = head->data_len - plen;
head->data_len -= clone->len;
head->len -= clone->len;
--- a/net/xfrm/xfrm_algo.c 2007-06-05 13:27:46.000000000 -0700
+++ b/net/xfrm/xfrm_algo.c 2007-07-05 15:36:47.000000000 -0700
@@ -570,7 +570,7 @@ int skb_icv_walk(const struct sk_buff *s
BUG_TRAP(start <= offset + len);
- end = start + skb_shinfo(skb)->frags[i].size;
+ end = start + skb_shinfo(skb)->frags[i].length;
if ((copy = end - offset) > 0) {
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
@@ -578,7 +578,7 @@ int skb_icv_walk(const struct sk_buff *s
copy = len;
sg.page = frag->page;
- sg.offset = frag->page_offset + offset-start;
+ sg.offset = frag->offset + offset-start;
sg.length = copy;
err = icv_update(desc, &sg, copy);
--
Stephen Hemminger <shemminger@linux-foundation.org>
* [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-05 23:14 [RFC 0/2] Convert skb to use scatterlist Stephen Hemminger
2007-07-05 23:14 ` [RFC 1/2] skbuff: " Stephen Hemminger
@ 2007-07-05 23:14 ` Stephen Hemminger
2007-07-05 23:32 ` Roland Dreier
2007-07-05 23:43 ` David Miller
1 sibling, 2 replies; 16+ messages in thread
From: Stephen Hemminger @ 2007-07-05 23:14 UTC (permalink / raw)
To: David Miller; +Cc: netdev
[-- Attachment #1: scatterlist-shrink.patch --]
[-- Type: text/plain, Size: 1032 bytes --]
On i386 and x86-64, the scatterlist needs only 16 bits each for
length and offset because PAGE_SIZE is 4K.
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
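The assumption, stated as a compile-time check (sketch only; it
presumes an entry never covers more than a single page):

	/* A u16 holds up to 65535, so with 4K pages both the offset
	 * (<= 4095) and a one-page length (<= 4096) fit. */
	BUILD_BUG_ON(PAGE_SIZE > 0xffff);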
--- a/include/asm-i386/scatterlist.h 2007-07-05 14:37:11.000000000 -0700
+++ b/include/asm-i386/scatterlist.h 2007-07-05 15:44:51.000000000 -0700
@@ -5,9 +5,9 @@
struct scatterlist {
struct page *page;
- unsigned int offset;
dma_addr_t dma_address;
- unsigned int length;
+ u16 offset;
+ u16 length;
};
/* These macros should be used after a pci_map_sg call has been done
--- a/include/asm-x86_64/scatterlist.h 2007-07-05 14:37:11.000000000 -0700
+++ b/include/asm-x86_64/scatterlist.h 2007-07-05 15:46:49.000000000 -0700
@@ -5,10 +5,10 @@
struct scatterlist {
struct page *page;
- unsigned int offset;
- unsigned int length;
dma_addr_t dma_address;
unsigned int dma_length;
+ u16 offset;
+ u16 length;
};
#define ISA_DMA_THRESHOLD (0x00ffffff)
--
Stephen Hemminger <shemminger@linux-foundation.org>
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-05 23:14 ` [RFC 2/2] shrink size of scatterlist on common i386/x86-64 Stephen Hemminger
@ 2007-07-05 23:32 ` Roland Dreier
2007-07-05 23:43 ` David Miller
1 sibling, 0 replies; 16+ messages in thread
From: Roland Dreier @ 2007-07-05 23:32 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: David Miller, netdev
> --- a/include/asm-i386/scatterlist.h 2007-07-05 14:37:11.000000000 -0700
> +++ b/include/asm-i386/scatterlist.h 2007-07-05 15:44:51.000000000 -0700
> @@ -5,9 +5,9 @@
>
> struct scatterlist {
> struct page *page;
> - unsigned int offset;
> dma_addr_t dma_address;
> - unsigned int length;
> + u16 offset;
> + u16 length;
> };
Actually this struct layout could be even better, since pointers are
32 bits but dma_addr_t may be 64 bits... having
struct scatterlist {
dma_addr_t dma_address;
struct page *page;
u16 offset;
u16 length;
};
would allow struct scatterlist to be 16 bytes. Seems like a good thing...
- R.
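A quick userspace sketch makes the packing easy to check (stand-in
types here, not kernel code; whether the reorder actually saves bytes
depends on the ABI's alignment of 64-bit types, which plain i386 puts
at 4 and x86-64 at 8):
/* Sketch only: stand-ins for struct page * and dma_addr_t so the two
 * field orders can be compared with sizeof() on any host; build with
 * gcc -m32 to model a 32-bit target. */
#include <stdio.h>
#include <stdint.h>
typedef uint64_t dma_addr_t;		/* assumed 64-bit DMA addresses */
struct sg_page_first {			/* field order from the patch */
	void *page;
	dma_addr_t dma_address;
	uint16_t offset;
	uint16_t length;
};
struct sg_dma_first {			/* field order suggested above */
	dma_addr_t dma_address;
	void *page;
	uint16_t offset;
	uint16_t length;
};
int main(void)
{
	printf("page first: %zu bytes\n", sizeof(struct sg_page_first));
	printf("dma first:  %zu bytes\n", sizeof(struct sg_dma_first));
	return 0;
}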
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-05 23:14 ` [RFC 2/2] shrink size of scatterlist on common i386/x86-64 Stephen Hemminger
2007-07-05 23:32 ` Roland Dreier
@ 2007-07-05 23:43 ` David Miller
2007-07-06 0:00 ` Stephen Hemminger
1 sibling, 1 reply; 16+ messages in thread
From: David Miller @ 2007-07-05 23:43 UTC (permalink / raw)
To: shemminger; +Cc: netdev
From: Stephen Hemminger <shemminger@linux-foundation.org>
Date: Thu, 05 Jul 2007 16:14:14 -0700
> The scatterlist only needs 16 bits for length/offset because
> PAGE_SIZE is 4K
>
> Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Unfortunately I don't think this can be done, even on i386.
It is legal to use order!=0 pages and multi-page areas in a single
scatterlist entry.
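The arithmetic behind that, as a small sketch: one order-4 compound
page on a 4K-page system already needs 17 bits of length, so a u16
field silently wraps (illustrative userspace code, not from the patch):
/* PAGE_SIZE << 4 is 65536, one past the largest value a 16-bit
 * length field can hold, so the cast truncates it to 0. */
#include <stdio.h>
#include <stdint.h>
#define PAGE_SIZE 4096u
int main(void)
{
	unsigned int order = 4;			/* order!=0 allocation */
	unsigned int len = PAGE_SIZE << order;	/* 65536 bytes */
	uint16_t sg_len = (uint16_t)len;	/* wraps to 0 */
	printf("real length %u, u16 length %u\n", len, sg_len);
	return 0;
}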
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-05 23:43 ` David Miller
@ 2007-07-06 0:00 ` Stephen Hemminger
2007-07-06 0:15 ` David Miller
0 siblings, 1 reply; 16+ messages in thread
From: Stephen Hemminger @ 2007-07-06 0:00 UTC (permalink / raw)
To: David Miller; +Cc: netdev
On Thu, 05 Jul 2007 16:43:08 -0700 (PDT)
David Miller <davem@davemloft.net> wrote:
> From: Stephen Hemminger <shemminger@linux-foundation.org>
> Date: Thu, 05 Jul 2007 16:14:14 -0700
>
> > The scatterlist only needs 16 bits for length/offset because
> > PAGE_SIZE is 4K
> >
> > Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
>
> Unfortunately I don't think this can be done, even on i386.
>
> It is legal to use order!=0 pages and multi-page areas in a single
> scatterlist entry.
>
Okay, but then using SG lists makes skbuffs much bigger.
         fraglist  scatterlist  per skbuff
32 bit       8         20       +12 * 18 = +216!
64 bit      16         32       +16 * 18 = +288
So never mind...
I'll do a fraglist-to-scatterlist set of routines, but I'm not sure
if it's worth it.
--
Stephen Hemminger <shemminger@linux-foundation.org>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 0:00 ` Stephen Hemminger
@ 2007-07-06 0:15 ` David Miller
2007-07-06 7:43 ` Rusty Russell
2007-07-06 17:14 ` Williams, Mitch A
0 siblings, 2 replies; 16+ messages in thread
From: David Miller @ 2007-07-06 0:15 UTC (permalink / raw)
To: shemminger; +Cc: netdev
From: Stephen Hemminger <shemminger@linux-foundation.org>
Date: Thu, 5 Jul 2007 17:00:51 -0700
> On Thu, 05 Jul 2007 16:43:08 -0700 (PDT)
> David Miller <davem@davemloft.net> wrote:
>
> > From: Stephen Hemminger <shemminger@linux-foundation.org>
> > Date: Thu, 05 Jul 2007 16:14:14 -0700
> >
> > > The scatterlist only needs 16 bits for length/offset because
> > > PAGE_SIZE is 4K
> > >
> > > Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
> >
> > Unfortunately I don't think this can be done, even on i386.
> >
> > It is legal to use order!=0 pages and multi-page areas in a single
> > scatterlist entry.
> >
>
> Okay, but then using SG lists makes skbuffs much bigger.
>
>          fraglist  scatterlist  per skbuff
> 32 bit       8         20       +12 * 18 = +216!
> 64 bit      16         32       +16 * 18 = +288
>
> So never mind...
I know, this is why nobody ever really tries to tackle this.
> I'll do a fraglist-to-scatterlist set of routines, but I'm not sure
> if it's worth it.
It's better to add dma_map_skb() et al. interfaces to be honest.
Also even with the scatterlist idea, we'd still need to do two
map calls, one for skb->data and one for the page vector.
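No dma_map_skb() exists in the tree; purely as a sketch of what such
an interface might look like (the struct and all details are
hypothetical, using the 2.6.22-era one-argument dma_mapping_error()
and the current frag field names), with the unwind-on-error behavior
the cover letter wants:
/* Hypothetical kernel-side sketch, not a real API: map the linear
 * area plus every frag, unwinding everything on failure. */
struct skb_dma_map {
	dma_addr_t head;
	dma_addr_t frags[MAX_SKB_FRAGS];
};
static int dma_map_skb(struct device *dev, struct sk_buff *skb,
		       struct skb_dma_map *map)
{
	int i;
	map->head = dma_map_single(dev, skb->data, skb_headlen(skb),
				   DMA_TO_DEVICE);
	if (dma_mapping_error(map->head))
		return -ENOMEM;
	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *f = &skb_shinfo(skb)->frags[i];
		map->frags[i] = dma_map_page(dev, f->page, f->page_offset,
					     f->size, DMA_TO_DEVICE);
		if (dma_mapping_error(map->frags[i]))
			goto unwind;
	}
	return 0;
unwind:
	while (--i >= 0) {
		skb_frag_t *f = &skb_shinfo(skb)->frags[i];
		dma_unmap_page(dev, map->frags[i], f->size, DMA_TO_DEVICE);
	}
	dma_unmap_single(dev, map->head, skb_headlen(skb), DMA_TO_DEVICE);
	return -ENOMEM;
}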
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 0:15 ` David Miller
@ 2007-07-06 7:43 ` Rusty Russell
2007-07-06 8:54 ` David Miller
2007-07-06 17:14 ` Williams, Mitch A
1 sibling, 1 reply; 16+ messages in thread
From: Rusty Russell @ 2007-07-06 7:43 UTC (permalink / raw)
To: David Miller; +Cc: shemminger, netdev
On Thu, 2007-07-05 at 17:15 -0700, David Miller wrote:
> Also even with the scatterlist idea, we'd still need to do two
> map calls, one for skb->data and one for the page vector.
We could make skb_shinfo(skb)->frags[0] the first segment and deprecate
skb->data and skb->len.
OK, you can stop hitting me now...
Rusty.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 7:43 ` Rusty Russell
@ 2007-07-06 8:54 ` David Miller
0 siblings, 0 replies; 16+ messages in thread
From: David Miller @ 2007-07-06 8:54 UTC (permalink / raw)
To: rusty; +Cc: shemminger, netdev
From: Rusty Russell <rusty@rustcorp.com.au>
Date: Fri, 06 Jul 2007 17:43:46 +1000
> On Thu, 2007-07-05 at 17:15 -0700, David Miller wrote:
> > Also even with the scatterlist idea, we'd still need to do two
> > map calls, one for skb->data and one for the page vector.
>
> We could make skb_shinfo(skb)->frags[0] the first segment and deprecate
> skb->data and skb->len.
>
> OK, you can stop hitting me now...
The fly in that ointment is that referencing the backing
page of kmalloc() data is not safe currently, especially
with SLUB.
Jens Axboe ran into this while trying to implement sendfile()
via splice() several weeks ago; it's an ongoing battle that
even C. Lameter is involved in now :-)
^ permalink raw reply [flat|nested] 16+ messages in thread
* RE: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 0:15 ` David Miller
2007-07-06 7:43 ` Rusty Russell
@ 2007-07-06 17:14 ` Williams, Mitch A
2007-07-06 19:20 ` David Miller
2007-07-08 16:11 ` Muli Ben-Yehuda
1 sibling, 2 replies; 16+ messages in thread
From: Williams, Mitch A @ 2007-07-06 17:14 UTC (permalink / raw)
To: David Miller, shemminger; +Cc: netdev
David Miller wrote:
>> Okay, but then using SG lists makes skbuffs much bigger.
>>
>>          fraglist  scatterlist  per skbuff
>> 32 bit       8         20       +12 * 18 = +216!
>> 64 bit      16         32       +16 * 18 = +288
>>
>> So never mind...
>
>I know, this is why nobody ever really tries to tackle this.
>
>> I'll do a fraglist-to-scatterlist set of routines, but I'm not sure
>> if it's worth it.
>
>It's better to add dma_map_skb() et al. interfaces to be honest.
>
>Also even with the scatterlist idea, we'd still need to do two
>map calls, one for skb->data and one for the page vector.
FWIW, I tried this about a year ago to try to improve e1000 performance
on pSeries. I was hoping to simplify the driver transmit code and make
IOMMU mapping easier. This was on 2.6.16 or thereabouts.
Net result: zilch. No performance increase, no noticeable CPU
utilization benefits. Nothing. So I dropped it.
Slightly off topic:
The real problem that I saw on pSeries is lock contention for the IOMMU.
It's architected with a single table per slot, which is great in that
two boards in separate slots won't have lock contention. However, this
all goes out the window when you drop a quad-port gigabit adapter in
there.
The time spent waiting for the IOMMU table lock goes up exponentially
as you activate each additional port.
In my opinion, IOMMU table locking is the major issue with this type of
architecture. Since both Intel and AMD are touting IOMMUs for
virtualization support, this is an issue that's going to need a lot of
scrutiny.
-Mitch
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 17:14 ` Williams, Mitch A
@ 2007-07-06 19:20 ` David Miller
2007-07-08 16:17 ` Muli Ben-Yehuda
2007-07-08 16:11 ` Muli Ben-Yehuda
1 sibling, 1 reply; 16+ messages in thread
From: David Miller @ 2007-07-06 19:20 UTC (permalink / raw)
To: mitch.a.williams; +Cc: shemminger, netdev
From: "Williams, Mitch A" <mitch.a.williams@intel.com>
Date: Fri, 6 Jul 2007 10:14:56 -0700
> In my opinion, IOMMU table locking is the major issue with this type of
> architecture. Since both Intel and AMD are touting IOMMUs for
> virtualization support, this is an issue that's going to need a lot of
> scrutiny.
For the allocation of IOMMU entries themselves you can play tricks
using atomic operations on 64-bit words of the allocator bitmap
to avoid locking that.
You can use per-cpu salts to determine where to start the search
and avoid hitting the same cachelines as other cpus working on
the same table.
But you'll need to lock in order to flush the IOMMU tlb I'm afraid.
The way to mitigate that is to only flush the IOMMU tlb once per
allocator generation.
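In sketch form (hypothetical code, not from any in-tree allocator),
the bitmap trick: claim one entry with cmpxchg() on a word of the
bitmap, starting the search at a per-cpu salt:
/* Lock-free claim of a single IOMMU entry; assumes nbits is a
 * multiple of BITS_PER_LONG.  cmpxchg() retries if another CPU raced
 * us on the same word.  A real allocator also needs multi-entry runs
 * and the IOMMU tlb flush handling discussed above. */
static long iommu_alloc_entry(unsigned long *bitmap, unsigned long nbits)
{
	unsigned long nwords = nbits / BITS_PER_LONG;
	unsigned long salt = raw_smp_processor_id() % nwords;
	unsigned long i, w, old, new;
	for (i = 0; i < nwords; i++) {
		w = (salt + i) % nwords;
		old = bitmap[w];
		while (~old) {			/* some bit still clear */
			unsigned long bit = ffz(old);
			new = old | (1UL << bit);
			if (cmpxchg(&bitmap[w], old, new) == old)
				return w * BITS_PER_LONG + bit;
			old = bitmap[w];	/* lost the race, retry */
		}
	}
	return -1;				/* table full */
}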
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 17:14 ` Williams, Mitch A
2007-07-06 19:20 ` David Miller
@ 2007-07-08 16:11 ` Muli Ben-Yehuda
2007-07-11 23:46 ` Williams, Mitch A
1 sibling, 1 reply; 16+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-08 16:11 UTC (permalink / raw)
To: Williams, Mitch A; +Cc: David Miller, shemminger, netdev
On Fri, Jul 06, 2007 at 10:14:56AM -0700, Williams, Mitch A wrote:
> David Miller wrote:
> >> Okay, but then using SG lists makes skbuffs much bigger.
> >>
> >>          fraglist  scatterlist  per skbuff
> >> 32 bit       8         20       +12 * 18 = +216!
> >> 64 bit      16         32       +16 * 18 = +288
> >>
> >> So never mind...
> >
> >I know, this is why nobody ever really tries to tackle this.
> >
> >> I'll do a fraglist-to-scatterlist set of routines, but I'm not sure
> >> if it's worth it.
> >
> >It's better to add dma_map_skb() et al. interfaces to be honest.
> >
> >Also even with the scatterlist idea, we'd still need to do two
> >map calls, one for skb->data and one for the page vector.
>
> FWIW, I tried this about a year ago to try to improve e1000
> performance on pSeries. I was hoping to simplify the driver
> transmit code and make IOMMU mapping easier. This was on 2.6.16 or
> thereabouts.
>
> Net result: zilch. No performance increase, no noticeable CPU
> utilization benefits. Nothing. So I dropped it.
Do you have pointers to the patches perchance?
> Slightly off topic:
> The real problem that I saw on pSeries is lock contention for the IOMMU.
> It's architected with a single table per slot, which is great in that
> two boards in separate slots won't have lock contention. However, this
> all goes out the window when you drop a quad-port gigabit adapter in
> there.
> The time spent waiting for the IOMMU table lock goes up exponentially
> as you activate each additional port.
>
> In my opinion, IOMMU table locking is the major issue with this type
> of architecture. Since both Intel and AMD are touting IOMMUs for
> virtualization support, this is an issue that's going to need a
> lot of scrutiny.
Agreed.
Cheers,
Muli
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-06 19:20 ` David Miller
@ 2007-07-08 16:17 ` Muli Ben-Yehuda
2007-07-09 7:06 ` David Miller
0 siblings, 1 reply; 16+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-08 16:17 UTC (permalink / raw)
To: David Miller; +Cc: mitch.a.williams, shemminger, netdev
On Fri, Jul 06, 2007 at 12:20:19PM -0700, David Miller wrote:
> From: "Williams, Mitch A" <mitch.a.williams@intel.com>
> Date: Fri, 6 Jul 2007 10:14:56 -0700
>
> > In my opinion, IOMMU table locking is the major issue with this
> > type of architecture. Since both Intel and AMD are touting IOMMUs
> > for virtualization support, this is an issue that's going to
> > need a lot of scrutiny.
>
> For the allocation of IOMMU entries themselves you can play tricks
> using atomic operations on 64-bit words of the allocator bitmap to
> avoid locking that.
Hmm, any pointers?
> You can use per-cpu salts to determine where to start the search and
> avoid hitting the same cachelines as other cpus working on the same
> table.
>
> But you'll need to lock in order to flush the IOMMU tlb I'm afraid.
> The way to mitigate that is to only flush the IOMMU tlb once per
> allocator generation.
That works, but isn't optimal when you have an isolation-capable IOMMU
and you want the full isolation properties of the IOMMU. If you only
flush the IOTLB when the allocator wraps around, a stale entry in the
IOTLB can allow a DMA to go through for an IO entry that has already
been unmapped. One way to mitigate that and still retain full
isolation is to make sure no one else gets to use the frames that are
the targets of the DMA until the translation has been flushed out of
the IOTLB, but that requires pretty deep surgery.
Cheers,
Muli
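The flush-per-generation scheme and the stale window described above,
sketched with hypothetical names (iommu_flush_iotlb() stands in for
whatever the hardware needs):
/* Flush the IOTLB only when the allocation cursor wraps, i.e. one
 * flush per generation rather than per unmap.  Allocation always
 * moves forward, so a freed entry is not handed out again until
 * after the wrap-time flush. */
struct iommu_table {
	unsigned long *bitmap;
	unsigned long nbits;
	unsigned long next;	/* allocation cursor */
	spinlock_t lock;
};
static long iommu_alloc_one(struct iommu_table *tbl)
{
	unsigned long n, flags;
	spin_lock_irqsave(&tbl->lock, flags);
	n = find_next_zero_bit(tbl->bitmap, tbl->nbits, tbl->next);
	if (n >= tbl->nbits) {
		iommu_flush_iotlb(tbl);	/* once per generation */
		n = find_next_zero_bit(tbl->bitmap, tbl->nbits, 0);
		if (n >= tbl->nbits) {
			spin_unlock_irqrestore(&tbl->lock, flags);
			return -1;	/* table genuinely full */
		}
	}
	__set_bit(n, tbl->bitmap);
	tbl->next = n + 1;
	spin_unlock_irqrestore(&tbl->lock, flags);
	return n;
}
static void iommu_free_one(struct iommu_table *tbl, unsigned long n)
{
	/* No flush here: the translation may stay cached in the IOTLB
	 * until the next wrap, which is exactly the isolation gap, so
	 * the freed frame must not be reused for anything sensitive
	 * until then. */
	clear_bit(n, tbl->bitmap);
}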
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-08 16:17 ` Muli Ben-Yehuda
@ 2007-07-09 7:06 ` David Miller
2007-07-09 8:53 ` Muli Ben-Yehuda
0 siblings, 1 reply; 16+ messages in thread
From: David Miller @ 2007-07-09 7:06 UTC (permalink / raw)
To: muli; +Cc: mitch.a.williams, shemminger, netdev
From: Muli Ben-Yehuda <muli@il.ibm.com>
Date: Sun, 8 Jul 2007 19:17:30 +0300
> On Fri, Jul 06, 2007 at 12:20:19PM -0700, David Miller wrote:
> > From: "Williams, Mitch A" <mitch.a.williams@intel.com>
> > Date: Fri, 6 Jul 2007 10:14:56 -0700
> >
> > > In my opinion, IOMMU table locking is the major issue with this
> > > type of architecture. Since both Intel and AMD are touting IOMMUs
> > > for virtualization support, this is an issue that's going to
> > > need a lot of scrutiny.
> >
> > For the allocation of IOMMU entries themselves you can play tricks
> > using atomic operations on 64-bit words of the allocator bitmap to
> > avoid locking that.
>
> Hmm, any pointers?
I've never implemented it but it is certainly possible to do.
> That works, but isn't optimal when you have an isolation-capable IOMMU
> and you want the full isolation properties of the IOMMU. If you only
> flush the IOTLB when the allocator wraps around, a stale entry in the
> IOTLB can allow a DMA to go through for an IO entry that has already
> been unmapped. One way to mitigate that and still retain full
> isolation is to make sure no one else gets to use the frames that are
> the targets of the DMA until the translation has been flushed out of
> the IOTLB, but that requires pretty deep surgery.
Virtualization sucks, doesn't it? :-)
It's one of the worst aspects of all of this virtualization business.
In my view it makes no sense to split up the physical hardware. Just
give control nodes complete access to everything and instead of
playing games partializing real hardware, just give virtual instances
to everybody and be done with all of this complexity.
Anyways, hypervisors et al. have already decided to do this braindamage,
so you will need to find some way to make it go fast now, won't you? :)
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-09 7:06 ` David Miller
@ 2007-07-09 8:53 ` Muli Ben-Yehuda
0 siblings, 0 replies; 16+ messages in thread
From: Muli Ben-Yehuda @ 2007-07-09 8:53 UTC (permalink / raw)
To: David Miller; +Cc: mitch.a.williams, shemminger, netdev
On Mon, Jul 09, 2007 at 12:06:40AM -0700, David Miller wrote:
> > That works, but isn't optimal when you have an isolation-capable
> > IOMMU and you want the full isolation properties of the IOMMU. If
> > you only flush the IOTLB when the allocator wraps around, a stale
> > entry in the IOTLB can allow a DMA to go through for an IO entry
> > that has already been unmapped. One way to mitigate that and still
> > retain full isolation is to make sure no one else gets to use the
> > frames that are the targets of the DMA until the translation has
> > been flushed out of the IOTLB, but that requires pretty deep
> > surgery.
>
> Virtualization sucks doesn't it? :-)
no comment :-) FWIW, isolation-capable IOMMUs are also useful to catch
DMA errors when drivers program devices to DMA where they
shouldn't. Sure, drivers can trash the kernel in other ways, but a
printk beats random memory corruption any day of the week.
> It's one of the worst aspects of all of this virtualization
> business. In my view it makes no sense to split up the physical
> hardware. Just give control nodes complete access to everything and
> instead of playing games partializing real hardware, just give
> virtual instances to everybody and be done with all of this
> complexity.
Virtual instances have a non-negligible performance cost and
development cost - you need a virtual driver for any device or class
of devices out there. Not to say that they aren't useful, just that
direct access (aka "passthrough") has its uses too.
> Anyways, hypervisors et al. have already decided to do this
> braindamage so you will need to find some way to make it go fast now
> won't you? :)
Working on it :-)
Cheers,
Muli
^ permalink raw reply [flat|nested] 16+ messages in thread
* RE: [RFC 2/2] shrink size of scatterlist on common i386/x86-64
2007-07-08 16:11 ` Muli Ben-Yehuda
@ 2007-07-11 23:46 ` Williams, Mitch A
0 siblings, 0 replies; 16+ messages in thread
From: Williams, Mitch A @ 2007-07-11 23:46 UTC (permalink / raw)
To: Muli Ben-Yehuda; +Cc: David Miller, shemminger, netdev
Muli Ben-Yehuda wrote:
>> Net result: zilch. No performance increase, no noticeable CPU
>> utilization
>> benefits. Nothing. So I dropped it.
>
>Do you have pointers to the patches perchance?
Muli, I've been looking for this code and it looks like it's gone.
I was using a Power5 system that I had borrowed and the folks I
borrowed it from have reformatted the drive. I never bothered
to hang on to the patches because they were neither particularly
pretty nor particularly useful. Sorry.
-Mitch
^ permalink raw reply [flat|nested] 16+ messages in thread