From: Hans Reiser <reiser@namesys.com>
To: jshankar <jshankar@CS.ColoState.EDU>
Cc: "Richard B. Johnson" <root@chaos.analogic.com>,
Mike Fedyk <mfedyk@matchmail.com>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: ext3 file system
Date: Thu, 18 Dec 2003 04:25:24 +0300
Message-ID: <3FE10204.2030708@namesys.com>
In-Reply-To: <3FF18FD8@webmail.colostate.edu>
jshankar wrote:
>Hello,
>
>Please provide some more insight.
>
>Suppose a filesystem issues a write command to the disk with around ten 4K
>blocks to be written. From the SCSI device's point of view, I don't see
>where the parallel I/O is: there is only one write command, and if someone
>else sends a write request, it has to be queued. The next question is how
>the write data is handled. Does the SCSI device give no response per block
>of data written? In other words, is the response given only after all the
>blocks of a single write request have been written?
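The per-command completion can be sketched with a toy Python model (this is an illustration, not real SCSI: a real controller with tagged command queuing can keep several commands in flight at once, which is where the parallelism comes from). The point it shows is that each queued WRITE command gets exactly one completion, delivered only after all of its blocks are written:

```python
from queue import Queue
from threading import Thread

# Toy model (NOT real SCSI): one "device" thread services one WRITE
# command at a time.  A later command waits behind the current one,
# and each command gets a single completion, sent only after all of
# its blocks have been written.
def device(cmds, done):
    while True:
        cmd = cmds.get()
        if cmd is None:
            return
        name, nblocks = cmd
        blocks_written = nblocks          # pretend we wrote every block
        done.put((name, blocks_written))  # one completion per command

cmds, done = Queue(), Queue()
Thread(target=device, args=(cmds, done)).start()
cmds.put(("A", 10))  # ten 4K blocks in a single command
cmds.put(("B", 4))   # queued behind A
cmds.put(None)       # shut the toy device down

completions = [done.get(), done.get()]
print(completions)   # [('A', 10), ('B', 4)]
```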
>
>Thanks
>Jay
>
>>===== Original Message From Mike Fedyk <mfedyk@matchmail.com> =====
>>On Wed, Dec 17, 2003 at 05:25:49PM -0500, Richard B. Johnson wrote:
>>
>>
>>>to the physical media. There are special file-systems (journaling)
>>>that guarantee that something, enough to recover the data, is
>>>written at periodic intervals.
>>>
>>>
>>Most journaling filesystems make guarantees about the filesystem metadata,
>>but not about the data. Some, like ext3 and reiserfs (with SUSE's
>>journaling patch), can journal the data too, or order writes so that the
>>data reaches the disk before any pointers to it (i.e. metadata) do, making
>>it harder to lose data.
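For ext3 these behaviours are selected with the data= mount option (the device and mount point below are placeholders):

```shell
# ext3 data journaling modes, per mount(8):
#   data=journal   - data and metadata both go through the journal
#   data=ordered   - data is forced out before its metadata commits (default)
#   data=writeback - only metadata is journaled; data ordering not preserved
mount -o data=journal   /dev/sda1 /mnt   # safest, slowest
mount -o data=ordered   /dev/sda1 /mnt   # the ext3 default
mount -o data=writeback /dev/sda1 /mnt   # fastest, weakest guarantees
```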
>>-
>>To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
>>the body of a message to majordomo@vger.kernel.org
>>More majordomo info at http://vger.kernel.org/majordomo-info.html
Filesystems don't usually wait for the I/O to complete before submitting
more I/O in response to the next write() syscall. They can do this by
batching a whole bunch of operations into one committed transaction.
In reiser4 we do this more carefully than other filesystems such as
reiserfs v3, and as a result every fs operation is fully atomic.
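That split is visible even from user space: write() returns as soon as the data is in the page cache, and only fsync() (or, inside the filesystem, a journal commit) blocks until the device reports the data stable. A minimal sketch:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"ten 4K blocks of data")  # returns once data is in the page cache
os.fsync(fd)                            # blocks until the device has it
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
print(data)  # b'ten 4K blocks of data'
```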
--
Hans
Thread overview: 10+ messages
2003-12-17 23:25 ext3 file system jshankar
2003-12-17 23:59 ` Brad Boyer
2003-12-18 1:25 ` Hans Reiser [this message]
2003-12-18 14:17 ` Richard B. Johnson
-- strict thread matches above, loose matches on Subject: below --
2003-12-18 4:47 jshankar
2003-12-18 8:39 ` Mike Fedyk
2003-12-18 10:41 ` Hans Reiser
2003-12-17 22:13 jshankar
2003-12-17 22:25 ` Richard B. Johnson
2003-12-17 23:02 ` Mike Fedyk