public inbox for linux-kernel@vger.kernel.org
From: Rajagopal Ananthanarayanan <ananth@sgi.com>
To: Benjamin LaHaise <bcrl@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>,
	Linux Kernel List <linux-kernel@vger.kernel.org>,
	linux-aio@kvack.org, kiobuf-io-devel@lists.sourceforge.net,
	"Stephen C. Tweedie" <sct@redhat.com>
Subject: Re: [Kiobuf-io-devel] Re: 1st glance at kiobuf overhead in kernelaio vs  pread vs user aio
Date: Fri, 02 Feb 2001 15:19:58 -0800	[thread overview]
Message-ID: <3A7B409E.F63CA509@sgi.com> (raw)
In-Reply-To: <Pine.LNX.4.30.0102021523430.5491-100000@today.toronto.redhat.com>

Benjamin LaHaise wrote:
> 
> Hey Ingo,
> 
> On Fri, 2 Feb 2001, Ingo Molnar wrote:
> 
> > - first of all, great patch! I've got a conceptual question: exactly how
> > does the AIO code prevent filesystem-related scheduling in the issuing
> > process' context? I'd like to use (and test) your AIO code for TUX, but i
> > do not see where it's guaranteed that the process that does the aio does
> > not block - from the patch this is not yet clear to me. (Right now TUX
> > uses separate 'async IO' kernel threads to avoid this problem.) Or if it's
> > not yet possible, what are the plans to handle this?
> 
> Thanks!  Right now the code does the page cache lookup allocations and
> lookups in the caller's thread, the write path then attempts to lock all
> pages sequentially during io using the async page locking function
> wtd_lock_page.  I've tried to get this close to some of the ideas proposed
> by Jeff Merkey, and have implemented async page and buffer locking
> mechanisms so far.  The down in the write path is still synchronous,
> mostly because I want some feedback before going much further down this
> path.  The read path verifies the up2date state of individual pages, and
> if it encounters one which is not, then it queues the request for the
> worker thread which calls readpage on all the pages that need updating.

[ Ben, good to see you have a patch to send, something I've been requesting
  from you for some time now ;-) ]

Do you really have worker threads? From my reading of the patch it seems
that the wtd is serviced by keventd. And by using mapped kiobufs you've
avoided issues such as:

	a. requiring the requestor's process context to perform the copy
	   (copy-out on read, for example)
	b. the requestor's (user) page being unmapped while
	   __iodesc_read_finish is executing.

These are two major improvements over my earlier KAIO patch
(obURL: http://oss.sgi.com/projects/kaio/) that I'm glad to see ... of course,
several abstractions, including kiobufs & the more generic task queues in 2.4,
have made this easier, which is a good thing.

I see several similarities to the KAIO patch too: things like splitting the
generic_read routine (which you have now expanded to cover the write path
as well), and the handling of RAW devices.

A nice addition in your patch is the introduction of the kiobuf as a common
container of pages; the KAIO patch handled this with an ad-hoc (page *)
vector for the non-RAW case & kiobufs for the RAW case.

One point which is not clear is how one would implement aio_suspend(...),
which waits for any ONE of N aiocbs to complete. The aio_complete(...)
routine in your patch expects a particular idx to wait on, so I assume
that, as is, only one aiocb can be waited upon. Am I correct? This
particular case is solved in the KAIO patch ...

Also, can you put out a library that goes with the kernel patch?
I can imagine what it would look like, but ...

Cheers,

ananth.

--------------------------------------------------------------------------
Rajagopal Ananthanarayanan ("ananth")
Member Technical Staff, SGI.
--------------------------------------------------------------------------


Thread overview: 7+ messages
2001-02-02 19:32 1st glance at kiobuf overhead in kernel aio vs pread vs user aio bcrl
2001-02-02 20:18 ` Ingo Molnar
2001-02-02 20:45   ` [Kiobuf-io-devel] " Benjamin LaHaise
2001-02-02 23:14     ` Ingo Molnar
2001-02-02 23:19     ` Rajagopal Ananthanarayanan [this message]
2001-02-02 23:30       ` [Kiobuf-io-devel] Re: 1st glance at kiobuf overhead in kernelaio " Ingo Molnar
2001-02-03  0:37         ` [Kiobuf-io-devel] Re: 1st glance at kiobuf overhead in kernelaiovs " Rajagopal Ananthanarayanan
