From: Alexander Lobakin <alexandr.lobakin@intel.com>
To: intel-wired-lan@osuosl.org
Subject: [Intel-wired-lan] [PATCH net-next 10/19] iecm: alloc vport RX resources
Date: Thu, 3 Feb 2022 19:29:40 +0100 [thread overview]
Message-ID: <20220203182940.17916-1-alexandr.lobakin@intel.com> (raw)
In-Reply-To: <CO1PR11MB51863E27CA1EDD4B45CB9EED8F289@CO1PR11MB5186.namprd11.prod.outlook.com>
From: Alan Brady <alan.brady@intel.com>
Date: Thu, 3 Feb 2022 01:13:23 +0100
> > -----Original Message-----
> > From: Lobakin, Alexandr <alexandr.lobakin@intel.com>
> > Sent: Friday, January 28, 2022 6:16 AM
> > To: Brady, Alan <alan.brady@intel.com>
> > Cc: Lobakin, Alexandr <alexandr.lobakin@intel.com>; intel-wired-
> > lan at lists.osuosl.org; Burra, Phani R <phani.r.burra@intel.com>; Chittim, Madhu
> > <madhu.chittim@intel.com>; Linga, Pavan Kumar
> > <pavan.kumar.linga@intel.com>
> > Subject: Re: [Intel-wired-lan] [PATCH net-next 10/19] iecm: alloc vport RX
> > resources
> >
> > From: Alan Brady <alan.brady@intel.com>
> > Date: Thu, 27 Jan 2022 16:10:00 -0800
> >
> > > This finishes what we need to do for open by adding RX resource
> > > allocations.
> > >
> > > As noted in the TX alloc patch, the splitq model is unique in
> > > introducing the concept of queue groups, which also applies to RX,
> > > albeit in a slightly different way. For RX we also split the queue
> > > between descriptor handling and buffer handling. We have some number
> > > of RX completion queues associated with up to two buffer queues in a
> > > given queue group. Once buffers are cleaned and recycled, they're
> > > given the buffer queues which then posts the buffers back to hardware.
> > > To enable this in a lockless way, there's also the concept of 'refill
> > > queues' introduced. Recycled buffers are put onto refill queues which is what
> > the buffer queues clean to get buffers back.
> > >
> > > Signed-off-by: Phani Burra <phani.r.burra@intel.com>
> > > Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
> > > Signed-off-by: Madhu Chittim <madhu.chittim@intel.com>
> > > Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
> > > Signed-off-by: Alan Brady <alan.brady@intel.com>
> > > ---
> > > drivers/net/ethernet/intel/iecm/iecm_txrx.c | 769 ++++++++++++++++++
> > > .../net/ethernet/intel/include/iecm_txrx.h | 7 +
> > > 2 files changed, 776 insertions(+)
> > >
--- 8< ---
> > > + rxq->hbuf_pages.nr_pages = nr_pages;
> > > + for (i = 0; i < nr_pages; i++) {
> > > + if (iecm_alloc_page(rxq, &rxq->hbuf_pages.pages[i]))
> >
> > And here you allocate pages with GFP_ATOMIC in process context.
> > Atomic allocations must not be used if the function may sleep.
> > Please add gfp_t gfp argument to iecm_alloc_page() and use GFP_KERNEL here
> > (and GFP_ATOMIC on buffer refill hotpath).
> >
>
> Perhaps I am confused here, but it's my understanding that we need
> GFP_ATOMIC when the allocation may happen in a context where we can't
> sleep, as it signals the memory allocator not to sleep. Not the other
> way around; it's not that we can't sleep because we took memory with
> GFP_ATOMIC. We use it in the hotpath, as you said, where we can't
> sleep. What it really means for us is a higher chance of allocation
> failure, since the kernel isn't allowed to sleep to free up some
> memory.
GFP_ATOMIC must be used only in atomic contexts, i.e. hardirq and
softirq processing (using it inside spinlocks is debatable). With
this flag set, there are fewer resources available to allocate from,
and you can think of those resources as reserves for critical
situations (hardirqs, for example, are of that kind). This mostly
comes from GFP_NOWAIT being embedded in GFP_ATOMIC.
This function allocates all the resources on ifup; the context is
non-critical and it's okay to sleep there, so using GFP_ATOMIC is
an excessive waste of critical memory under e.g. memory pressure.
So we use GFP_ATOMIC for buffer refilling only inside the NAPI
context itself.
> > > + goto unroll_buf_alloc;
> > > + }
> > > +
> > > + page_info = &rxq->hbuf_pages.pages[0];
--- 8< ---
> > > +
> > > + buf++;
> > > + nta++;
> > > + if (unlikely(nta == rxbufq->desc_count)) {
> >
> > if (unlikely(++nta == ...)) { /* Just in one line */
> >
>
> Yes, but pre-increments are gross and hard for humans to grok.
Could you please elaborate on "pre-increments are gross"? I don't
get it at all.
Also, "hard for humans to grok" doesn't seem quite right to me here
(and in all the other places you used it), since it reflects your
personal opinion rather than a mathematical fact, I believe?
>
> > > + buf = rxbufq->rx_buf.buf;
> > > + nta = 0;
> > > + }
> > > +
> > > + alloc_count--;
> > > + } while (alloc_count);
> >
> > } while (alloc_count--); /* Just in one line */
> >
>
> I believe
>
> } while (--alloc_count);
>
> would be accurate, but pre-increment/decrement are hard for humans to grok (as evidenced here).
Right, sorry, it should be a pre-decrement there.
Evidenced what, and where?
If you mean my off-by-one, it doesn't mean anything; I use
pre-increments much more often than post-condition loops (usually
it's either a for-loop or `while () {}`).
>
> > > +
> > > + return !!alloc_count;
> > > +}
> > > +
> > > +/**
> > > + * iecm_rx_post_buf_desc - Post buffer to bufq descriptor ring
--- 8< ---
> > > + iecm_rx_buf_hw_update(bufq, bufq->next_to_alloc &
> > > +~(bufq->rx_buf_stride - 1));
> >
> > 87-cols line.
> > Please test all your patches with `checkpatch --strict --codespell`.
> >
>
> Just an FYI: all of these patches do mostly pass checkpatch, since
> elsewhere in the kernel (except net, apparently) 100 columns are now
> acceptable. You must also add `--max-line-length=80` to get a warning
> about 80 columns now.
Netdev maintainers still prefer 79/80 columns (and so do I), hence
my pointing out all those long lines here.
I wasn't sure whether checkpatch detects that a patch touches
networking code and applies the corresponding limit, since I can't
recall the last time I went past 79 columns.
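To illustrate the limit difference being discussed: this is not checkpatch itself, just a tiny stand-in that flags lines over a given width, showing why an 88-column line passes the current 100-column default but would warn under the netdev-style 80-column limit (the temp-file path is arbitrary):

```shell
# Flag any line in a file longer than a column limit.
check_cols() {
    local limit="$1" file="$2"
    awk -v max="$limit" \
        'length($0) > max { print FILENAME ": line " NR " is " length($0) " cols" }' \
        "$file"
}

printf '%88s\n' x > /tmp/long_line.txt   # one 88-column line
check_cols 100 /tmp/long_line.txt        # default limit: no warning
check_cols 80  /tmp/long_line.txt        # netdev-style limit: warns
```

With a real kernel tree, the equivalent invocation would be `./scripts/checkpatch.pl --strict --codespell --max-line-length=80 <patch>`.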
>
> > > +}
> > > +
> > > +/**
> > > + * iecm_rx_buf_alloc_all - Allocate memory for all buffer resources
--- 8< ---
> > > + if (bufq) {
> > > + rxq->size = rxq->desc_count *
> > > + sizeof(struct virtchnl2_splitq_rx_buf_desc);
> > > + } else {
> > > + rxq->size = rxq->desc_count *
> > > + sizeof(union virtchnl2_rx_desc);
> > > + }
> >
> > Oneliners, braces are unneeded.
> >
>
> Keeping because multi-line with line wrap.
Please see my previous messages.
>
> > For counting the array sizes it's required to use array_size():
--- 8< ---
> > > +
> > > + rxq->next_to_alloc = 0;
> > > + rxq->next_to_clean = 0;
> > > + rxq->next_to_use = 0;
> >
> > You allocate rxq with kzalloc() (or derivative) IIRC, 'z'-versions zero the memory
> > before returning. These initializers are redundant.
> >
>
> This is allocating descriptors which can change. If we change the descriptor ring it's probably a good idea to reset the queue indexes.
Again, this function takes a freshly zero-allocated ring structure,
so those fields are zero in 100% of cases. Please correct me if I'm
wrong.
Also, while searching for iecm_rx_desc_alloc() usages, I spotted
that both it and iecm_rx_desc_alloc_all() are used only inside the
file they're declared in, so they should be static. Please correct
me if not.
>
> > > + set_bit(__IECM_Q_GEN_CHK, rxq->flags);
> > > +
> > > + /* Allocate buffers for a rx queue if the q_model is single OR if it
--- 8< ---
> > > --
> > > 2.33.0
> >
> > Thanks,
> > Al
Thanks,
Al