From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <50F844A3.9020300@warr.net>
Date: Thu, 17 Jan 2013 12:36:19 -0600
From: Jason Warr
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0
MIME-Version: 1.0
To: Amit Kale
CC: "thornber@redhat.com", device-mapper development, "kent.overstreet@gmail.com", Mike Snitzer, LKML, "linux-bcache@vger.kernel.org"
Subject: Re: [dm-devel] Announcement: STEC EnhanceIO SSD caching software for Linux kernel
References: <20130116104546.GA3869@raspberrypi> <20130117132620.GA2438@raspberrypi>
In-Reply-To:
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On 01/17/2013 11:53 AM, Amit Kale wrote:
>>> 9. Performance - Throughput is generally most important. Latency is
>>> also one more performance comparison point. Performance under
>>> different load classes can be measured.
>>
>> I think latency is more important than throughput. Spindles are
>> pretty good at throughput. In fact the mq policy tries to spot when
>> we're doing large linear ios and stops hit counting; best leave this
>> stuff on the spindle.
>
> I disagree. Latency is taken care of automatically when the number of application threads rises.

Can you explain what you mean by that in a little more detail?
As an enterprise-level user I see both as important overall. However, the biggest driving factor in wanting a cache device in front of any sort of target, in my use cases, is hiding latency as the number of threads reading and writing to the backing device goes up. So for me the cache is essentially a tier stage: how long you can afford to keep dirty blocks on it is determined by the specific use case.
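To illustrate the latency-versus-threads point being debated above, here is a back-of-the-envelope sketch based on Little's law (concurrency = throughput x latency). The device numbers (a ~200 IOPS spindle, a ~50k IOPS SSD) are illustrative assumptions of mine, not measurements from this thread; the point is only that once the backing device saturates, per-request latency grows linearly with thread count, which is exactly what a front-side cache hides.

```python
def throughput_iops(threads, service_time_s, max_iops):
    """Open-loop estimate: each thread keeps one I/O in flight.

    Below saturation, aggregate throughput scales as threads / service_time.
    At saturation the device caps out and extra threads only add queueing.
    """
    return min(threads / service_time_s, max_iops)

def avg_latency_s(threads, service_time_s, max_iops):
    """Little's law rearranged: latency = concurrency / throughput."""
    return threads / throughput_iops(threads, service_time_s, max_iops)

# Hypothetical devices: 10 ms spindle vs 0.1 ms SSD cache (assumed figures).
for threads in (1, 8, 64, 512):
    hdd = avg_latency_s(threads, 0.010, 200)       # spindle: ~200 random IOPS
    ssd = avg_latency_s(threads, 0.0001, 50_000)   # SSD: ~50k IOPS
    print(f"{threads:4d} threads  hdd {hdd*1000:8.1f} ms  ssd {ssd*1000:6.2f} ms")
```

With one thread both devices deliver their native service time, but at 512 threads the saturated spindle's effective latency is seconds while the SSD is still around 10 ms; that is the "hide latency as thread count goes up" effect, under these assumptions.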