From mboxrd@z Thu Jan 1 00:00:00 1970
From: Patrick McHardy
Subject: Re: [PATCH 7/9] rhashtable: Per bucket locks & deferred expansion/shrinking
Date: Fri, 16 Jan 2015 20:53:12 +0000
Message-ID: <20150116205311.GA26996@acer.localdomain>
References: <20150116155835.GA15052@casper.infradead.org>
 <20150116160354.GI30132@acer.localdomain>
 <20150116161530.GC15052@casper.infradead.org>
 <20150116163202.GJ30132@acer.localdomain>
 <063D6719AE5E284EB5DD2968C1650D6D1CACADAF@AcuExch.aculab.com>
 <20150116165302.GE15052@casper.infradead.org>
 <20150116183626.GS30132@acer.localdomain>
 <20150116191831.GA26730@casper.infradead.org>
 <20150116193557.GU30132@acer.localdomain>
 <20150116204644.GA2232@salvia>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Thomas Graf, David Laight, "davem@davemloft.net",
 "netdev@vger.kernel.org", "herbert@gondor.apana.org.au",
 "paulmck@linux.vnet.ibm.com", "edumazet@google.com",
 "john.r.fastabend@intel.com", "josh@joshtriplett.org",
 "netfilter-devel@vger.kernel.org"
To: Pablo Neira Ayuso
Return-path: 
Content-Disposition: inline
In-Reply-To: <20150116204644.GA2232@salvia>
Sender: netfilter-devel-owner@vger.kernel.org
List-Id: netdev.vger.kernel.org

On 16.01, Pablo Neira Ayuso wrote:
> On Fri, Jan 16, 2015 at 07:35:57PM +0000, Patrick McHardy wrote:
> > On 16.01, Thomas Graf wrote:
> > > On 01/16/15 at 06:36pm, Patrick McHardy wrote:
> > > >
> > > > Well, we do have a problem with interrupted dumps. As you know, once
> > > > the netlink message buffer is full, we return to userspace and
> > > > continue dumping during the next read. Expanding obviously changes
> > > > the order since we rehash from bucket N to N and 2N, so this will
> > > > indeed cause duplicate (doesn't matter) and missed entries.
> > >
> > > Right, but that's a Netlink dump issue and not specific to rhashtable.
> >
> > Well, rhashtable (or generally resizing) will make it a lot worse.
> > Usually we at worst miss entries which were added during the dump,
> > which is made up for by the notifications.
> >
> > With resizing we might miss anything; it's completely nondeterministic.
> >
> > > Putting the sequence number check in place should be sufficient
> > > for sets, right?
> >
> > I don't see how. The problem is that the ordering of the hash changes
> > and it will skip different entries than those that have already been
> > dumped.
> 
> I think the generation counter should catch this sort of problem.
> The resizing is triggered by a new/deletion element, which bumps it
> once the transaction is handled.

I don't think so, it tracks only two generations; we can have an
arbitrary number of changes while performing a dump.