Date: Wed, 20 Jul 2011 16:59:01 +0200
From: "Roedel, Joerg"
To: Neil Horman
CC: "linux-kernel@vger.kernel.org", Divy LeRay, Stanislaw Gruszka, Arnd Bergmann
Subject: Re: [PATCH] dma-debug: hash_bucket_find needs to allow for offsets within an entry
Message-ID: <20110720145901.GE21948@amd.com>
In-Reply-To: <20110720143222.GB12349@hmsreliant.think-freely.org>
References: <1311097278-30841-1-git-send-email-nhorman@tuxdriver.com> <20110720103813.GL15108@amd.com> <20110720111156.GA12349@hmsreliant.think-freely.org> <20110720132925.GB21948@amd.com> <20110720143222.GB12349@hmsreliant.think-freely.org>

On Wed, Jul 20, 2011 at 10:32:22AM -0400, Neil Horman wrote:
> On Wed, Jul 20, 2011 at 03:29:25PM +0200, Roedel, Joerg wrote:
> > You are right. We need to scan
> >
> > 	0 <= idx <= hash_fn(rstart)
> >
> > Probably we can fix that with a better hash-function. Any ideas? Using
> > the device is not an option because then all entries would end up in
> > only a few buckets. This will impact scanning performance too much.
>
> Unfortunately I don't have any ideas for a better hash function here, but I
> had been thinking about fixing this in add_dma_entry. We could detect there
> that a debug entry to be added crosses one or more hash-bucket boundaries
> and, if it does, split it along those boundaries into multiple entries,
> hashing each of them in separately. The check_unmap and check_sync routines
> would of course then potentially have to do multiple lookups as well to
> ensure that they find all of the correct entries to validate/remove. It
> would work in all cases, but it might be overkill. What do you think?

Interesting. I discussed that with a colleague an hour ago and he came up
with the same idea :-) I like it because we still need to scan only one
hash-bucket, so this seems like the best solution.

> > For now, the partial syncs seem to happen rarely enough so that we can
> > make it a slow-path. It is probably best to do the exact scan first
> > and do the full scan only if the exact scan fails (until we come up with
> > a better solution).
>
> Agreed, if you don't like my above idea, I'll get to work on this this
> afternoon.

I think this idea is the better solution.

Thanks,

	Joerg

-- 
AMD Operating System Research Center

Advanced Micro Devices GmbH
Einsteinring 24
85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632
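[Editorial note: the two-pass bucket lookup discussed in this thread — an exact scan first, with a full "containing" scan of the same bucket as the slow path — can be sketched in simplified, userspace C. The struct fields, list layout, and function names below are illustrative only, not the kernel's actual dma-debug internals.]

```c
#include <stddef.h>

/* Simplified stand-in for a dma-debug entry (hypothetical fields). */
struct dma_debug_entry {
	unsigned long dev_addr;           /* start of the mapped range */
	size_t size;                      /* length of the mapped range */
	struct dma_debug_entry *next;     /* next entry in the same hash bucket */
};

/* Fast path: the sync/unmap address matches an entry's start exactly. */
static struct dma_debug_entry *find_exact(struct dma_debug_entry *bucket,
					  unsigned long addr)
{
	for (struct dma_debug_entry *e = bucket; e; e = e->next)
		if (e->dev_addr == addr)
			return e;
	return NULL;
}

/* Slow path: accept any entry whose mapped range contains the address,
 * which covers partial syncs that point into the middle of a mapping. */
static struct dma_debug_entry *find_containing(struct dma_debug_entry *bucket,
					       unsigned long addr)
{
	for (struct dma_debug_entry *e = bucket; e; e = e->next)
		if (addr >= e->dev_addr && addr < e->dev_addr + e->size)
			return e;
	return NULL;
}

/* Two-pass lookup: try the exact scan first and fall back to the
 * containing scan only when the exact scan comes up empty. */
struct dma_debug_entry *bucket_find(struct dma_debug_entry *bucket,
				    unsigned long addr)
{
	struct dma_debug_entry *e = find_exact(bucket, addr);
	return e ? e : find_containing(bucket, addr);
}
```

Since partial syncs are rare, most lookups finish in the first pass; only a miss there pays for the range checks of the second pass, and both passes still touch a single hash bucket.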