From mboxrd@z Thu Jan 1 00:00:00 1970
From: Grant Grundler
Subject: Re: libata / scsi separation
Date: Wed, 10 Dec 2008 09:11:34 -0800
Message-ID: 
References: <493B022B.3050406@ru.mvista.com> <1228662298.3501.19.camel@localhost.localdomain> <20081209222113.GU25548@parisc-linux.org> <493F2151.6010702@gmail.com> <493F2DA9.7040008@gmail.com> <493F3B33.8010607@gmail.com> <20081210152445.GW25548@parisc-linux.org> <493FE15C.4060907@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
In-Reply-To: <493FE15C.4060907@gmail.com>
List-Id: linux-ide@vger.kernel.org
To: Tejun Heo
Cc: Matthew Wilcox, James Bottomley, linux-ide@vger.kernel.org, linux-scsi@vger.kernel.org

On Wed, Dec 10, 2008 at 7:33 AM, Tejun Heo wrote:
...
> And yeap we definitely should try to do that too but I don't think
> RW_SG would be as useless as jumbo frames (much less compatibility
> problem and no loss of functionality),

Jumbo frames aren't useless. They just don't apply to the "small
message passing overhead" problem. Users doing bulk data transfer
(NAS, FTP, etc.) are pretty happy with TCP Segmentation Offload,
a cousin of Jumbo Frames.
> and the actual hardware overhead of issuing separate commands for
> each 4k segment is way higher than anything we do along the block
> and low level driver layers in terms of IO access, host bus and ATA
> (or SAS) bus overhead.

That's true, and it was also true of gigabit NICs in the '90s. NIC HW
vendors have figured out how to avoid doing MMIO reads/writes during
normal IO. Infiniband has an even more efficient interface that's
mostly host-RAM based (just a few MMIO writes). Last time I measured
(~2006), the TCP stack was 4x the CPU cost of the HW interface.

I don't know what the current ratio is for any given SATA controller
vs the libata/SCSI stack, but I'm certain it will change as new
controllers are introduced.

hth,
grant