From: Chas Williams <3chas3@gmail.com>
Subject: Re: [PATCH v2] mbuf: use refcnt = 0 when debugging
Date: Wed, 06 Sep 2017 06:46:13 -0400
To: Radu Nicolau, dev@dpdk.org
Cc: olivier.matz@6wind.com, cw817q@att.com
List-Id: DPDK patches and discussions

[Note: My former email address is going away eventually. I am moving
the conversation to my other email address, which is a bit more
permanent.]

On Mon, 2017-09-04 at 15:27 +0100, Radu Nicolau wrote:
> 
> On 8/7/2017 5:11 PM, Charles (Chas) Williams wrote:
> > After commit 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool") it is
> > much harder to detect a "double free". If the developer makes a copy
> > of an mbuf pointer and frees it twice, this condition is never detected
> > and the mbuf gets returned to the pool twice.
> >
> > Since this requires extra work to track, make this behavior conditional
> > on CONFIG_RTE_LIBRTE_MBUF_DEBUG.
> >
> > Signed-off-by: Chas Williams
> > ---
> >
> > @@ -1304,10 +1329,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >  			m->next = NULL;
> >  			m->nb_segs = 1;
> >  		}
> > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> > +#endif
> >
> >  		return m;
> >
> > -	} else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {
> Why replace the use of atomic operation?

It doesn't. rte_mbuf_refcnt_update() is also atomic(ish), but it is
slightly more optimal. This whole section is a little hazy, actually.
It looks like rte_pktmbuf_prefree_seg() open-codes
rte_mbuf_refcnt_update() so it can avoid setting the refcnt when the
refcnt is already the 'correct' value.

> >
> >  		if (RTE_MBUF_INDIRECT(m))
> > @@ -1317,7 +1345,7 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >  			m->next = NULL;
> >  			m->nb_segs = 1;
> >  		}
> > -		rte_mbuf_refcnt_set(m, 1);
> > +		rte_mbuf_refcnt_set(m, RTE_MBUF_UNUSED_CNT);
> >
> >  		return m;
> >  	}
> Reviewed-by: Radu Nicolau

Thanks for the review.