From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <554CDA71.8060606@redhat.com>
Date: Fri, 08 May 2015 17:46:57 +0200
From: Max Reitz
In-Reply-To: <38be38957aff5457fd811db71706a6f645df5c7e.1430919406.git.berto@igalia.com>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH 5/7] qcow2: use a hash to look for entries in the L2 cache
To: Alberto Garcia, qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, qemu-block@nongnu.org

On 06.05.2015 15:39, Alberto Garcia wrote:
> The current cache algorithm always traverses the array starting from
> the beginning, so the average number of comparisons needed to perform
> a lookup is proportional to the size of the array.
>
> By using a hash of the offset as the starting point, lookups are
> faster and independent of the array size.
>
> The hash is computed using the cluster number of the table, multiplied
> by 4 to make it perform better when there are collisions.
>
> In my tests, using a cache with 2048 entries, this reduces the average
> number of comparisons per lookup from 430 to 2.5.
>
> Signed-off-by: Alberto Garcia
> ---
>  block/qcow2-cache.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)

So it's a direct-mapped cache, unless the entry has been used, in which
case it becomes a fully associative cache...

Let's assume the cache is full. Now this hash algorithm (direct-mapped
cache) basically becomes futile, because the LRU algorithm (fully
associative cache) takes over, and any entry (the LRU entry) is
replaced, independently of the hash. Thus, the entry will most probably
not be placed at its hashed position.

So once the cache is full, I guess this algorithm doesn't help a lot,
does it?

(I'm sorry if you had this discussion before...)

Max
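
[Editor's note: to make the scheme under discussion concrete, here is a
minimal sketch of a lookup that starts at a hashed position and wraps
around the array. It is not the actual block/qcow2-cache.c code from the
patch; the names (SketchEntry, sketch_hash, sketch_lookup, SKETCH_CACHE_SIZE,
CLUSTER_SIZE) are made up for illustration, while the "cluster number * 4"
hash and the 2048-entry cache size come from the commit message quoted
above.]

/* Illustrative sketch only -- not the patch's code. */
#include <stdint.h>
#include <stddef.h>

#define SKETCH_CACHE_SIZE 2048    /* matches the 2048-entry example above */
#define CLUSTER_SIZE      65536   /* assumed 64 KiB clusters */

typedef struct {
    uint64_t offset;       /* on-disk offset of the cached table, 0 if unused */
    uint64_t lru_counter;  /* used by the (separate) LRU replacement policy */
} SketchEntry;

static SketchEntry entries[SKETCH_CACHE_SIZE];

/* Hash: cluster number of the table, multiplied by 4 (as described in the
 * commit message), modulo the number of cache entries. */
static size_t sketch_hash(uint64_t offset)
{
    return (offset / CLUSTER_SIZE * 4) % SKETCH_CACHE_SIZE;
}

/* Look for a cached table: start the scan at the hashed position and wrap
 * around the whole array. Returns the entry index on a hit, -1 on a miss. */
static int sketch_lookup(uint64_t offset)
{
    size_t start = sketch_hash(offset);
    size_t i = start;

    do {
        if (entries[i].offset == offset) {
            return (int)i;
        }
        i = (i + 1) % SKETCH_CACHE_SIZE;
    } while (i != start);

    return -1;  /* miss: the caller evicts the LRU entry, wherever it sits */
}

[On a miss, replacement picks the least-recently-used entry, which can be
anywhere in the array; that is the situation Max's comments describe: once
the cache is full, entries mostly no longer sit at their hashed positions,
so a lookup may still have to scan past many slots before finding them.]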