From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nick Piggin
Subject: Re: [rfc] fsync_range?
Date: Wed, 21 Jan 2009 04:52:09 +0100
Message-ID: <20090121035209.GH24891@wotan.suse.de>
References: <20090120164726.GA24891@wotan.suse.de>
	<20090120183120.GD27464@shareable.org>
	<20090121012900.GD24891@wotan.suse.de>
	<20090121032520.GA2816@shareable.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linux-fsdevel@vger.kernel.org
To: Jamie Lokier
Return-path:
Received: from ns2.suse.de ([195.135.220.15]:46023 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755601AbZAUDwK (ORCPT );
	Tue, 20 Jan 2009 22:52:10 -0500
Content-Disposition: inline
In-Reply-To: <20090121032520.GA2816@shareable.org>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Wed, Jan 21, 2009 at 03:25:20AM +0000, Jamie Lokier wrote:
> Nick Piggin wrote:
> > > For database writes, you typically write a bunch of stuff in various
> > > regions of a big file (or multiple files), then ideally fdatasync
> > > some/all of the written ranges - with writes committed to disk in the
> > > best order determined by the OS and I/O scheduler.
> >
> > Do you know which databases do this? It will be nice to ask their
> > input and see whether it helps them (I presume it is an OSS database
> > because the "big" ones just use direct IO and manage their own
> > buffers, right?)
>
> I just found this:
>
>     http://markmail.org/message/injyo7coein7o3xz
>     (Postgresql)
>
> Tom Lane writes (on org.postgresql.pgsql-hackers):
> >Greg Stark writes:
> >> Come to think of it I wonder whether there's anything to be gained by
> >> using smaller files for tables. Instead of 1G files maybe 256M files
> >> or something like that to reduce the hit of fsyncing a file.
> >
> > Actually probably not. The weak part of our current approach is that
> > we tell the kernel "sync this file", then "sync that file", etc, in a
> > more or less random order. This leads to a probably non-optimal
> > sequence of disk accesses to complete a checkpoint. What we would
> > really like is a way to tell the kernel "sync all these files, and let
> > me know when you're done" --- then the kernel and hardware have some
> > shot at scheduling all the writes in an intelligent fashion.
> >
> > sync_file_range() is not that exactly, but since it lets you request
> > syncing and then go back and wait for the syncs later, we could get
> > the desired effect with two passes over the file list. (If the file
> > list is longer than our allowed number of open files, though, the
> > extra opens/closes could hurt.)
> >
> > Smaller files would make the I/O scheduling problem worse not better.

Interesting.

> So if you can make
> commit-to-multiple-files-in-optimal-I/O-scheduling-order work, that
> would be even better ;-)

fsyncv? Send multiple inode,range tuples to the kernel to sync.
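
For illustration, here is a minimal userspace sketch of the two-pass
sync_file_range() approach described in the quoted text above: a first
pass that only asks the kernel to start writeback on each dirty range,
then a second pass that waits for all of those writes to complete. The
struct file_range type and the sync_ranges_two_pass() helper are
invented for this sketch; only sync_file_range() and its
SYNC_FILE_RANGE_* flags are existing interfaces.

	/*
	 * Two-pass range syncing over an arbitrary set of files:
	 * pass 1 starts writeback everywhere without blocking,
	 * pass 2 waits for it, so the I/O scheduler sees all the
	 * work at once instead of one file at a time.
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>	/* sync_file_range(), SYNC_FILE_RANGE_* */
	#include <stddef.h>

	struct file_range {	/* hypothetical: one dirty region of one file */
		int   fd;
		off_t offset;
		off_t nbytes;
	};

	static int sync_ranges_two_pass(const struct file_range *r, size_t n)
	{
		size_t i;

		/* Pass 1: kick off writeback on every range, non-blocking. */
		for (i = 0; i < n; i++)
			if (sync_file_range(r[i].fd, r[i].offset, r[i].nbytes,
					    SYNC_FILE_RANGE_WRITE) < 0)
				return -1;

		/*
		 * Pass 2: wait for the writes.  WAIT_BEFORE|WRITE|WAIT_AFTER
		 * also picks up pages dirtied again since pass 1.
		 */
		for (i = 0; i < n; i++)
			if (sync_file_range(r[i].fd, r[i].offset, r[i].nbytes,
					    SYNC_FILE_RANGE_WAIT_BEFORE |
					    SYNC_FILE_RANGE_WRITE |
					    SYNC_FILE_RANGE_WAIT_AFTER) < 0)
				return -1;
		return 0;
	}

Note that sync_file_range() only drives page writeback for the given
byte ranges; it does not commit file metadata or flush the drive's
write cache, which is part of the motivation for an fsync_range()-style
call in this thread.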
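
As for "fsyncv", nothing like it exists today; the following is a
purely hypothetical sketch of what a vectored interface might look like
from userspace, assuming the caller passes an array of (fd, offset,
nbytes) tuples and the kernel is free to schedule writeback across all
of them before reporting completion. All names and the prototype are
invented for illustration.

	#include <sys/types.h>

	struct fsync_range_vec {	/* hypothetical: one (fd, range) tuple */
		int   fd;		/* open file to sync */
		off_t offset;		/* start of the range */
		off_t nbytes;		/* length of the range, 0 = to end of file */
	};

	/*
	 * Hypothetical prototype: sync every listed range, letting the
	 * kernel order the I/O, and return 0 once all of it is durable.
	 */
	int fsyncv(const struct fsync_range_vec *vec, unsigned int nr_ranges,
		   unsigned int flags);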