From: Suparna Bhattacharya <suparna@in.ibm.com>
To: linux-aio@kvack.org, linux-kernel@vger.kernel.org
Cc: linux-osdl@osdl.org
Subject: Re: [PATCH 22/22] Fix stalls with the AIO context switch patch
Date: Fri, 2 Jul 2004 22:14:37 +0530
Message-ID: <20040702164437.GL3450@in.ibm.com>
In-Reply-To: <20040702130030.GA4256@in.ibm.com>
On Fri, Jul 02, 2004 at 06:30:30PM +0530, Suparna Bhattacharya wrote:
>
> The patchset contains modifications and fixes to the AIO core
> to support the full retry model, an implementation of AIO
> support for buffered filesystem AIO reads and O_SYNC writes
> (the latter courtesy of O_SYNC speedup changes from Andrew Morton),
> an implementation of AIO reads and writes to pipes (from
> Chris Mason) and AIO poll (again from Chris Mason).
>
> Full retry infrastructure and fixes
> [1] aio-retry.patch
> [2] 4g4g-aio-hang-fix.patch
> [3] aio-retry-elevated-refcount.patch
> [4] aio-splice-runlist.patch
>
> FS AIO read
> [5] aio-wait-page.patch
> [6] aio-fs_read.patch
> [7] aio-upfront-readahead.patch
>
> AIO for pipes
> [8] aio-cancel-fix.patch
> [9] aio-read-immediate.patch
> [10] aio-pipe.patch
> [11] aio-context-switch.patch
>
> Concurrent O_SYNC write speedups using radix-tree walks
> [12] writepages-range.patch
> [13] fix-writeback-range.patch
> [14] fix-writepages-range.patch
> [15] fdatawrite-range.patch
> [16] O_SYNC-speedup.patch
>
> AIO O_SYNC write
> [17] aio-wait_on_page_writeback_range.patch
> [18] aio-O_SYNC.patch
> [19] O_SYNC-write-fix.patch
>
> AIO poll
> [20] aio-poll.patch
>
> Infrastructure fixes
> [21] aio-putioctx-flushworkqueue.patch
> [22] aio-context-stall.patch
>
--
Suparna Bhattacharya (suparna@in.ibm.com)
Linux Technology Center
IBM Software Lab, India
------------------------------------------------------
From: Chris Mason
aio.c | 39 +++++++++++++++++++++++++++++++++++----
1 files changed, 35 insertions(+), 4 deletions(-)
--- aio/fs/aio.c 2004-06-21 13:35:34.024355464 -0700
+++ aio-context-stall/fs/aio.c 2004-06-21 13:50:59.246700288 -0700
@@ -367,6 +367,7 @@ void fastcall __put_ioctx(struct kioctx
if (unlikely(ctx->reqs_active))
BUG();
+ cancel_delayed_work(&ctx->wq);
flush_workqueue(aio_wq);
aio_free_ring(ctx);
mmdrop(ctx->mm);
@@ -788,6 +789,22 @@ static int __aio_run_iocbs(struct kioctx
return 0;
}
+static void aio_queue_work(struct kioctx * ctx)
+{
+ unsigned long timeout;
+ /*
+ * if someone is waiting, get the work started right
+ * away, otherwise, use a longer delay
+ */
+ smp_mb();
+ if (waitqueue_active(&ctx->wait))
+ timeout = 1;
+ else
+ timeout = HZ/10;
+ queue_delayed_work(aio_wq, &ctx->wq, timeout);
+}
+
+
/*
* aio_run_iocbs:
* Process all pending retries queued on the ioctx
@@ -804,8 +821,18 @@ static inline void aio_run_iocbs(struct
requeue = __aio_run_iocbs(ctx);
spin_unlock_irq(&ctx->ctx_lock);
if (requeue)
- queue_work(aio_wq, &ctx->wq);
+ aio_queue_work(ctx);
+}
+/*
+ * just like aio_run_iocbs, but keeps running them until
+ * the list stays empty
+ */
+static inline void aio_run_all_iocbs(struct kioctx *ctx)
+{
+ spin_lock_irq(&ctx->ctx_lock);
+ while (__aio_run_iocbs(ctx));
+ spin_unlock_irq(&ctx->ctx_lock);
}
/*
@@ -830,6 +857,9 @@ static void aio_kick_handler(void *data)
unuse_mm(ctx->mm);
spin_unlock_irq(&ctx->ctx_lock);
set_fs(oldfs);
+ /*
+ * we're in a worker thread already; don't use queue_delayed_work
+ */
if (requeue)
queue_work(aio_wq, &ctx->wq);
}
@@ -852,7 +882,7 @@ void queue_kicked_iocb(struct kiocb *ioc
run = __queue_kicked_iocb(iocb);
spin_unlock_irqrestore(&ctx->ctx_lock, flags);
if (run) {
- queue_delayed_work(aio_wq, &ctx->wq, HZ/10);
+ aio_queue_work(ctx);
aio_wakeups++;
}
}
@@ -1119,7 +1149,7 @@ retry:
/* racey check, but it gets redone */
if (!retry && unlikely(!list_empty(&ctx->run_list))) {
retry = 1;
- aio_run_iocbs(ctx);
+ aio_run_all_iocbs(ctx);
goto retry;
}
@@ -1522,7 +1552,8 @@ int fastcall io_submit_one(struct kioctx
spin_lock_irq(&ctx->ctx_lock);
list_add_tail(&req->ki_run_list, &ctx->run_list);
- __aio_run_iocbs(ctx);
+ /* drain the run list */
+ while (__aio_run_iocbs(ctx));
spin_unlock_irq(&ctx->ctx_lock);
aio_put_req(req); /* drop extra ref to req */
return 0;
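For readers skimming the diff, the two behavioral changes are: an adaptive kick delay (1 tick when a waiter is blocked in io_getevents, HZ/10 otherwise, per aio_queue_work), and draining the run list until it stays empty (aio_run_all_iocbs), since __aio_run_iocbs can requeue work while it runs. A user-space sketch of that logic follows; the function names, and the HZ value, are illustrative assumptions, not kernel code:

```python
# Illustrative user-space sketch of the patch's two ideas; names and the
# HZ constant are assumptions for illustration, not kernel API.

HZ = 100  # assumed tick rate


def kick_timeout(waiter_present):
    """Mirror aio_queue_work(): kick almost immediately (1 tick) when
    someone is waiting on the ioctx, otherwise batch work with HZ/10."""
    return 1 if waiter_present else HZ // 10


def run_all_iocbs(run_list, retry_one):
    """Mirror aio_run_all_iocbs(): retries may requeue iocbs while we
    process them, so keep looping until the list stays empty."""
    passes = 0
    while run_list:
        batch = list(run_list)   # take the current batch
        run_list.clear()
        for iocb in batch:
            requeued = retry_one(iocb)
            if requeued is not None:  # retry not yet complete: requeue it
                run_list.append(requeued)
        passes += 1
    return passes
```

The point of the adaptive delay is the trade-off the comment in the diff describes: a short delay keeps latency low for a blocked waiter, while the longer delay batches retries and cuts worker-thread context switches when nobody is waiting.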
Thread overview: 34+ messages
2004-07-02 13:00 [PATCH 0/22] fsaio, pipe aio and aio poll upgraded to 2.6.7 Suparna Bhattacharya
2004-07-02 13:07 ` [PATCH 1/22] High-level AIO retry infrastructure and fixes Suparna Bhattacharya
2004-07-02 13:11 ` [PATCH 2/22] use_mm fix (helps AIO hangs on 4:4 split) Suparna Bhattacharya
2004-07-02 13:14 ` [PATCH 3/22] Refcounting fixes Suparna Bhattacharya
2004-07-02 13:15 ` [PATCH 4/22] Splice ioctx runlist for fairness Suparna Bhattacharya
2004-07-02 13:16 ` [PATCH 5/22] AIO wait on page support Suparna Bhattacharya
2004-07-02 13:18 ` [PATCH 6/22] FS AIO read Suparna Bhattacharya
2004-07-02 13:19 ` [PATCH 7/22] Upfront readahead to help streaming AIO reads Suparna Bhattacharya
2004-07-02 13:20 ` [PATCH 8/22] AIO cancellation fix Suparna Bhattacharya
2004-07-02 13:23 ` [PATCH 9/22] AIO immediate read (needed for AIO pipes & sockets) Suparna Bhattacharya
2004-07-02 13:23 ` [PATCH 10/22] AIO pipe support Suparna Bhattacharya
2004-07-02 13:26 ` [PATCH 11/22] Reduce AIO worker context switches Suparna Bhattacharya
2004-07-02 16:05 ` [PATCH 12/22] Writeback page range hint Suparna Bhattacharya
2004-07-02 16:18 ` [PATCH 13/22] Fix writeback page range to use exact limits Suparna Bhattacharya
2004-07-02 16:22 ` [PATCH 14/22] mpage writepages range limit fix Suparna Bhattacharya
2004-07-02 16:25 ` [PATCH 15/22] filemap_fdatawrite range interface Suparna Bhattacharya
2004-07-02 16:27 ` [PATCH 16/22] Concurrent O_SYNC write support Suparna Bhattacharya
2004-07-02 16:31 ` [PATCH 17/22] AIO wait on writeback Suparna Bhattacharya
2004-07-02 16:33 ` [PATCH 18/22] AIO O_SYNC write Suparna Bhattacharya
2004-07-02 16:34 ` [PATCH 19/22] Fix math error in AIO wait on writeback Suparna Bhattacharya
2004-07-02 16:39 ` [PATCH 20/22] AIO poll Suparna Bhattacharya
2004-07-29 15:19 ` Jeff Moyer
2004-07-29 16:02 ` Avi Kivity
2004-07-29 16:16 ` Arjan van de Ven
2004-07-29 16:37 ` Benjamin LaHaise
2004-07-29 17:23 ` William Lee Irwin III
2004-07-29 17:10 ` William Lee Irwin III
2004-07-29 17:24 ` Avi Kivity
2004-07-29 17:26 ` William Lee Irwin III
2004-07-29 17:30 ` Avi Kivity
2004-07-29 17:32 ` William Lee Irwin III
2004-07-02 16:42 ` [PATCH 21/22] fix: flush workqueue on put_ioctx Suparna Bhattacharya
2004-07-02 16:44 ` Suparna Bhattacharya [this message]
2004-07-05 9:24 ` [PATCH 0/22] fsaio, pipe aio and aio poll upgraded to 2.6.7 Christoph Hellwig