public inbox for linux-kernel@vger.kernel.org
* Re: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-16 22:28 Ananiev, Leonid I
@ 2005-10-17  3:41 ` Randy.Dunlap
  0 siblings, 0 replies; 12+ messages in thread
From: Randy.Dunlap @ 2005-10-17  3:41 UTC (permalink / raw)
  To: Ananiev, Leonid I; +Cc: linux-kernel

On Mon, 17 Oct 2005 02:28:09 +0400 Ananiev, Leonid I wrote:

> > Put <...> around the email address.
> Fixed
> > Ugh.  Does exchange (server) add all of those extra lines?
> I do not see extra lines in my "sent box". Below is one more attempt. The
> text is plain.

It's a lot better, but there are still some lines that were broken
in the outbound mail body that should not have been broken (split).
See below.

You should send email to yourself (going out of intel.com and back
into it) and then be able to apply the patch cleanly.

First error is:
patch: **** malformed patch at line 39: *rq)

So I join lines 38 & 39 and try again:
patch: **** malformed patch at line 66: *rq)

Repeat...
patch: **** malformed patch at line 162: *rq)

and:
patch: **** malformed patch at line 234: *cfqq)
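Randy's dry-run workflow is easy to reproduce outside a kernel tree. The sketch below uses hypothetical file names (a tiny file and diff, not the actual patch from this thread) and assumes GNU patch: `--dry-run` validates a patch without touching the tree, and a mangled hunk is rejected with a malformed-patch error just like the ones quoted here.

```shell
# Build a one-line file and a matching unified diff (hypothetical names).
mkdir -p patch-demo
printf 'hello\n' > patch-demo/file.txt
cat > patch-demo/good.diff <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+world
EOF

# --dry-run checks applicability without modifying file.txt
( cd patch-demo && patch -p1 --dry-run < good.diff ) && echo "applies cleanly"

# Mangle the diff (drop the @@ hunk header, as a line-wrapping mailer might)
grep -v '^@@' patch-demo/good.diff > patch-demo/broken.diff
( cd patch-demo && patch -p1 --dry-run < broken.diff ) || echo "rejected, as expected"
```

Mailing the patch to yourself and running this dry run before posting catches the wrapped lines before a reviewer has to.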


> @@ -945,7 +945,7 @@ static void update_write_batch(struct as
>   */
>  static void as_completed_request(request_queue_t *q, struct request
> *rq)     <<<<<<<<<<< this should not be a separate line <<<<<<<<<<
>  {
> -	struct as_data *ad = q->elevator->elevator_data;
> +	struct as_data *ad = q->elevator.elevator_data;
>  	struct as_rq *arq = RQ_DATA(rq);
>  
>  	WARN_ON(!list_empty(&rq->queuelist));
> @@ -1465,7 +1465,7 @@ static void as_add_request(struct as_dat
>  
>  static void as_deactivate_request(request_queue_t *q, struct request
> *rq)     <<<<<<<<<<<<<<< Should not be a separate line. <<<<<<<<<<<
>  {
> -	struct as_data *ad = q->elevator->elevator_data;
> +	struct as_data *ad = q->elevator.elevator_data;
>  	struct as_rq *arq = RQ_DATA(rq);
>  
>  	if (arq) {

> diff -rup linux-2.6.14-rc2/drivers/block/cfq-iosched.c
> linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c
> --- linux-2.6.14-rc2/drivers/block/cfq-iosched.c	2005-09-24
> 09:13:54.000000000 +0400
> +++ linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c	2005-10-13
> 04:18:12.000000000 +0400
> @@ -678,7 +678,7 @@ out:
>  
>  static void cfq_deactivate_request(request_queue_t *q, struct request
> *rq)       <<<<<<<<<<<<<<<<<< Same <<<<<<<<<<<<<<<<<<
>  {

There are (lots) more, but I'm not pointing to every one of them.

After joining about 20 lines, I did get it to apply successfully
(to 2.6.14-rc4).  You should also make sure that it applies cleanly
to the current kernel version.

Sometimes it works to send email as text/plain attachments.
Are you using a decent email client?   :)

---
~Randy

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-10-17 16:12 Ananiev, Leonid I
  2005-10-18  2:44 ` Randy.Dunlap
  0 siblings, 1 reply; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-10-17 16:12 UTC (permalink / raw)
  To: Randy.Dunlap; +Cc: lkml

Randy,
You are right. The lines are broken if I send the patch outside Intel
(I've tried sending it to @mail.ru).
Inside Intel the lines are not broken, as I see in response mails.
I had used "Plain text" before, but the flag "Use MS Word 2003 to edit
e-mail messages" was not turned off.
Now this flag is turned off. Once more I have opened the diff created
on Linux using WordPad and pasted it into this mail.
Is it OK in your mail client?
But when I sent a long patch line to @mail.ru, the lines were still
broken.
It is not permitted to use a mail client other than MS Outlook in our
office.

Randy writes
>You should also make sure that it applies cleanly
> to the current kernel version

I've applied it to linux-2.6.14-rc4
----------------------------------------------------
>From Leonid Ananiev

      Fully modular IO schedulers with online switching between them
were introduced in Linux 2.6.10, but as a result the percentage of CPU
time used by the kernel increased, and a performance degradation is
noticeable on Itanium. The cause of the degradation is the extra steps
required for the indirect, IO-scheduler-specific function calls.
      The patch eliminates 45 indirect function calls in 16 elevator
functions. Sysbench fileio benchmark throughput increased by 2% for the
noop elevator after patching.

Signed-off-by: Leonid Ananiev <leonid.i.ananiev@intel.com>
----
diff -rup linux-2.6.14-rc2/drivers/block/as-iosched.c linux-2.6.14-rc2elv1/drivers/block/as-iosched.c
--- linux-2.6.14-rc2/drivers/block/as-iosched.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/as-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -614,7 +614,7 @@ static void as_antic_stop(struct as_data
 static void as_antic_timeout(unsigned long data)
 {
 	struct request_queue *q = (struct request_queue *)data;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
@@ -945,7 +945,7 @@ static void update_write_batch(struct as
  */
 static void as_completed_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	WARN_ON(!list_empty(&rq->queuelist));
@@ -1030,7 +1030,7 @@ static void as_remove_queued_request(req
 {
 	struct as_rq *arq = RQ_DATA(rq);
 	const int data_dir = arq->is_sync;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 
 	WARN_ON(arq->state != AS_RQ_QUEUED);
 
@@ -1361,7 +1361,7 @@ fifo_expired:
 
 static struct request *as_next_request(request_queue_t *q)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct request *rq = NULL;
 
 	/*
@@ -1465,7 +1465,7 @@ static void as_add_request(struct as_dat
 
 static void as_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (arq) {
@@ -1510,7 +1510,7 @@ static void as_account_queued_request(st
 static void
 as_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (arq) {
@@ -1563,7 +1563,7 @@ as_insert_request(request_queue_t *q, st
  */
 static int as_queue_empty(request_queue_t *q)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 
 	if (!list_empty(&ad->fifo_list[REQ_ASYNC])
 		|| !list_empty(&ad->fifo_list[REQ_SYNC])
@@ -1602,7 +1602,7 @@ as_latter_request(request_queue_t *q, st
 static int
 as_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
 	int ret;
@@ -1657,7 +1657,7 @@ out_insert:
 
 static void as_merged_request(request_queue_t *q, struct request *req)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(req);
 
 	/*
@@ -1702,7 +1702,7 @@ static void
 as_merged_requests(request_queue_t *q, struct request *req,
 			 struct request *next)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(req);
 	struct as_rq *anext = RQ_DATA(next);
 
@@ -1789,7 +1789,7 @@ static void as_work_handler(void *data)
 
 static void as_put_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (!arq) {
@@ -1809,7 +1809,7 @@ static void as_put_request(request_queue
 static int as_set_request(request_queue_t *q, struct request *rq,
 			  struct bio *bio, int gfp_mask)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = mempool_alloc(ad->arq_pool, gfp_mask);
 
 	if (arq) {
@@ -1831,7 +1831,7 @@ static int as_set_request(request_queue_
 static int as_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
 	int ret = ELV_MQUEUE_MAY;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct io_context *ioc;
 	if (ad->antic_status == ANTIC_WAIT_REQ ||
 			ad->antic_status == ANTIC_WAIT_NEXT) {
diff -rup linux-2.6.14-rc2/drivers/block/cfq-iosched.c linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c
--- linux-2.6.14-rc2/drivers/block/cfq-iosched.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -364,7 +364,7 @@ static inline void cfq_schedule_dispatch
 
 static int cfq_queue_empty(request_queue_t *q)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 
 	return !cfq_pending_requests(cfqd);
 }
@@ -678,7 +678,7 @@ out:
 
 static void cfq_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(rq);
 
 	if (crq) {
@@ -724,7 +724,7 @@ static void cfq_remove_request(request_q
 static int
 cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request *__rq;
 	int ret;
 
@@ -756,7 +756,7 @@ out_insert:
 
 static void cfq_merged_request(request_queue_t *q, struct request *req)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(req);
 
 	cfq_del_crq_hash(crq);
@@ -999,7 +999,7 @@ static int cfq_arm_slice_timer(struct cf
  */
 static void cfq_dispatch_sort(request_queue_t *q, struct cfq_rq *crq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_queue *cfqq = crq->cfq_queue;
 	struct list_head *head = &q->queue_head, *entry = head;
 	struct request *__rq;
@@ -1196,7 +1196,7 @@ __cfq_dispatch_requests(struct cfq_data 
 static int
 cfq_dispatch_requests(request_queue_t *q, int max_dispatch, int force)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_queue *cfqq;
 
 	if (!cfqd->busy_queues)
@@ -1270,7 +1270,7 @@ cfq_account_completion(struct cfq_queue 
 
 static struct request *cfq_next_request(request_queue_t *q)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request *rq;
 
 	if (!list_empty(&q->queue_head)) {
@@ -1840,7 +1840,7 @@ static void cfq_enqueue(struct cfq_data 
 static void
 cfq_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 
 	switch (where) {
 		case ELEVATOR_INSERT_BACK:
@@ -2006,7 +2006,7 @@ __cfq_may_queue(struct cfq_data *cfqd, s
 
 static int cfq_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct task_struct *tsk = current;
 	struct cfq_queue *cfqq;
 
@@ -2029,7 +2029,7 @@ static int cfq_may_queue(request_queue_t
 
 static void cfq_check_waiters(request_queue_t *q, struct cfq_queue *cfqq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request_list *rl = &q->rq;
 
 	if (cfqq->allocated[READ] <= cfqd->max_queued || cfqd->rq_starved) {
@@ -2050,7 +2050,7 @@ static void cfq_check_waiters(request_qu
  */
 static void cfq_put_request(request_queue_t *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(rq);
 
 	if (crq) {
@@ -2077,7 +2077,7 @@ static int
 cfq_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		int gfp_mask)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct task_struct *tsk = current;
 	struct cfq_io_context *cic;
 	const int rw = rq_data_dir(rq);
@@ -2153,7 +2153,7 @@ queue_fail:
 static void cfq_kick_queue(void *data)
 {
 	request_queue_t *q = data;
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
@@ -2263,7 +2263,7 @@ static void cfq_put_cfqd(struct cfq_data
 	blk_put_queue(q);
 
 	cfq_shutdown_timer_wq(cfqd);
-	q->elevator->elevator_data = NULL;
+	q->elevator.elevator_data = NULL;
 
 	mempool_destroy(cfqd->crq_pool);
 	kfree(cfqd->crq_hash);
diff -rup linux-2.6.14-rc2/drivers/block/deadline-iosched.c linux-2.6.14-rc2elv1/drivers/block/deadline-iosched.c
--- linux-2.6.14-rc2/drivers/block/deadline-iosched.c	2005-09-24 09:16:32.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/deadline-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -289,7 +289,7 @@ deadline_find_first_drq(struct deadline_
 static inline void
 deadline_add_request(struct request_queue *q, struct request *rq)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	const int data_dir = rq_data_dir(drq->request);
@@ -317,7 +317,7 @@ static void deadline_remove_request(requ
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	if (drq) {
-		struct deadline_data *dd = q->elevator->elevator_data;
+		struct deadline_data *dd = q->elevator.elevator_data;
 
 		list_del_init(&drq->fifo);
 		deadline_remove_merge_hints(q, drq);
@@ -328,7 +328,7 @@ static void deadline_remove_request(requ
 static int
 deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct request *__rq;
 	int ret;
 
@@ -383,7 +383,7 @@ out_insert:
 
 static void deadline_merged_request(request_queue_t *q, struct request *req)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(req);
 
 	/*
@@ -407,7 +407,7 @@ static void
 deadline_merged_requests(request_queue_t *q, struct request *req,
 			 struct request *next)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(req);
 	struct deadline_rq *dnext = RQ_DATA(next);
 
@@ -599,7 +599,7 @@ dispatch_request:
 
 static struct request *deadline_next_request(request_queue_t *q)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct request *rq;
 
 	/*
@@ -620,7 +620,7 @@ dispatch:
 static void
 deadline_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 
 	/* barriers must flush the reorder queue */
 	if (unlikely(rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)
@@ -648,7 +648,7 @@ deadline_insert_request(request_queue_t 
 
 static int deadline_queue_empty(request_queue_t *q)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 
 	if (!list_empty(&dd->fifo_list[WRITE])
 	    || !list_empty(&dd->fifo_list[READ])
@@ -745,7 +745,7 @@ static int deadline_init_queue(request_q
 
 static void deadline_put_request(request_queue_t *q, struct request *rq)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	if (drq) {
@@ -758,7 +758,7 @@ static int
 deadline_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		     int gfp_mask)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq;
 
 	drq = mempool_alloc(dd->drq_pool, gfp_mask);
diff -rup linux-2.6.14-rc2/drivers/block/elevator.c linux-2.6.14-rc2elv1/drivers/block/elevator.c
--- linux-2.6.14-rc2/drivers/block/elevator.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/elevator.c	2005-10-13 04:18:12.000000000 +0400
@@ -130,18 +130,17 @@ static struct elevator_type *elevator_ge
 	return e;
 }
 
-static int elevator_attach(request_queue_t *q, struct elevator_type *e,
-			   struct elevator_queue *eq)
+static int elevator_attach(request_queue_t *q, struct elevator_type *e)
 {
 	int ret = 0;
+	struct elevator_queue *eq;
 
-	memset(eq, 0, sizeof(*eq));
+	eq = &q->elevator;
 	eq->ops = &e->ops;
 	eq->elevator_type = e;
 
 	INIT_LIST_HEAD(&q->queue_head);
 	q->last_merge = NULL;
-	q->elevator = eq;
 
 	if (eq->ops->elevator_init_fn)
 		ret = eq->ops->elevator_init_fn(q, eq);
@@ -183,7 +182,6 @@ __setup("elevator=", elevator_setup);
 int elevator_init(request_queue_t *q, char *name)
 {
 	struct elevator_type *e = NULL;
-	struct elevator_queue *eq;
 	int ret = 0;
 
 	elevator_setup_default();
@@ -195,15 +193,8 @@ int elevator_init(request_queue_t *q, ch
 	if (!e)
 		return -EINVAL;
 
-	eq = kmalloc(sizeof(struct elevator_queue), GFP_KERNEL);
-	if (!eq) {
-		elevator_put(e->elevator_type);
-		return -ENOMEM;
-	}
-
-	ret = elevator_attach(q, e, eq);
+	ret = elevator_attach(q, e);
 	if (ret) {
-		kfree(eq);
 		elevator_put(e->elevator_type);
 	}
 
@@ -217,12 +208,11 @@ void elevator_exit(elevator_t *e)
 
 	elevator_put(e->elevator_type);
 	e->elevator_type = NULL;
-	kfree(e);
 }
 
 int elv_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_merge_fn)
 		return e->ops->elevator_merge_fn(q, req, bio);
@@ -232,7 +222,7 @@ int elv_merge(request_queue_t *q, struct
 
 void elv_merged_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_merged_fn)
 		e->ops->elevator_merged_fn(q, rq);
@@ -241,7 +231,7 @@ void elv_merged_request(request_queue_t 
 void elv_merge_requests(request_queue_t *q, struct request *rq,
 			     struct request *next)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (q->last_merge == next)
 		q->last_merge = NULL;
@@ -258,7 +248,7 @@ void elv_merge_requests(request_queue_t 
  */
 void elv_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * it already went through dequeue, we need to decrement the
@@ -296,8 +286,8 @@ void elv_requeue_request(request_queue_t
 	 * if iosched has an explicit requeue hook, then use that. otherwise
 	 * just put the request at the front of the queue
 	 */
-	if (q->elevator->ops->elevator_requeue_req_fn)
-		q->elevator->ops->elevator_requeue_req_fn(q, rq);
+	if (q->elevator.ops->elevator_requeue_req_fn)
+		q->elevator.ops->elevator_requeue_req_fn(q, rq);
 	else
 		__elv_add_request(q, rq, ELEVATOR_INSERT_FRONT, 0);
 }
@@ -318,7 +308,7 @@ void __elv_add_request(request_queue_t *
 	rq->q = q;
 
 	if (!test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags)) {
-		q->elevator->ops->elevator_add_req_fn(q, rq, where);
+		q->elevator.ops->elevator_add_req_fn(q, rq, where);
 
 		if (blk_queue_plugged(q)) {
 			int nrq = q->rq.count[READ] + q->rq.count[WRITE]
@@ -348,7 +338,7 @@ void elv_add_request(request_queue_t *q,
 
 static inline struct request *__elv_next_request(request_queue_t *q)
 {
-	struct request *rq = q->elevator->ops->elevator_next_req_fn(q);
+	struct request *rq = q->elevator.ops->elevator_next_req_fn(q);
 
 	/*
 	 * if this is a barrier write and the device has to issue a
@@ -418,7 +408,7 @@ struct request *elv_next_request(request
 
 void elv_remove_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * the time frame between a request being removed from the lists
@@ -446,7 +436,7 @@ void elv_remove_request(request_queue_t 
 
 int elv_queue_empty(request_queue_t *q)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_queue_empty_fn)
 		return e->ops->elevator_queue_empty_fn(q);
@@ -458,7 +448,7 @@ struct request *elv_latter_request(reque
 {
 	struct list_head *next;
 
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_latter_req_fn)
 		return e->ops->elevator_latter_req_fn(q, rq);
@@ -474,7 +464,7 @@ struct request *elv_former_request(reque
 {
 	struct list_head *prev;
 
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_former_req_fn)
 		return e->ops->elevator_former_req_fn(q, rq);
@@ -489,7 +479,7 @@ struct request *elv_former_request(reque
 int elv_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		    int gfp_mask)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_set_req_fn)
 		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
@@ -500,7 +490,7 @@ int elv_set_request(request_queue_t *q, 
 
 void elv_put_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_put_req_fn)
 		e->ops->elevator_put_req_fn(q, rq);
@@ -508,7 +498,7 @@ void elv_put_request(request_queue_t *q,
 
 int elv_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_may_queue_fn)
 		return e->ops->elevator_may_queue_fn(q, rw, bio);
@@ -518,7 +508,7 @@ int elv_may_queue(request_queue_t *q, in
 
 void elv_completed_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * request is released from the driver, io must be done
@@ -532,7 +522,7 @@ void elv_completed_request(request_queue
 
 int elv_register_queue(struct request_queue *q)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	e->kobj.parent = kobject_get(&q->kobj);
 	if (!e->kobj.parent)
@@ -547,7 +537,7 @@ int elv_register_queue(struct request_qu
 void elv_unregister_queue(struct request_queue *q)
 {
 	if (q) {
-		elevator_t *e = q->elevator;
+		elevator_t *e = &q->elevator;
 		kobject_unregister(&e->kobj);
 		kobject_put(&q->kobj);
 	}
@@ -590,12 +580,8 @@ EXPORT_SYMBOL_GPL(elv_unregister);
  */
 static void elevator_switch(request_queue_t *q, struct elevator_type *new_e)
 {
-	elevator_t *e = kmalloc(sizeof(elevator_t), GFP_KERNEL);
 	struct elevator_type *noop_elevator = NULL;
-	elevator_t *old_elevator;
-
-	if (!e)
-		goto error;
+	elevator_t old_elevator;
 
 	/*
 	 * first step, drain requests from the block freelist
@@ -615,7 +601,7 @@ static void elevator_switch(request_queu
  	 */
 	noop_elevator = elevator_get("noop");
 	spin_lock_irq(q->queue_lock);
-	elevator_attach(q, noop_elevator, e);
+	elevator_attach(q, noop_elevator);
 	spin_unlock_irq(q->queue_lock);
 
 	blk_wait_queue_drained(q, 1);
@@ -623,7 +609,7 @@ static void elevator_switch(request_queu
 	/*
 	 * attach and start new elevator
 	 */
-	if (elevator_attach(q, new_e, e))
+	if (elevator_attach(q, new_e))
 		goto fail;
 
 	if (elv_register_queue(q))
@@ -632,7 +618,7 @@ static void elevator_switch(request_queu
 	/*
 	 * finally exit old elevator and start queue again
 	 */
-	elevator_exit(old_elevator);
+	elevator_exit(&old_elevator);
 	blk_finish_queue_drain(q);
 	elevator_put(noop_elevator);
 	return;
@@ -642,14 +628,12 @@ fail_register:
 	 * switch failed, exit the new io scheduler and reattach the old
 	 * one again (along with re-adding the sysfs dir)
 	 */
-	elevator_exit(e);
+	elevator_exit(&q->elevator);
 fail:
 	q->elevator = old_elevator;
 	elv_register_queue(q);
 	blk_finish_queue_drain(q);
-error:
-	if (noop_elevator)
-		elevator_put(noop_elevator);
+	elevator_put(noop_elevator);
 	elevator_put(new_e);
 	printk(KERN_ERR "elevator: switch to %s failed\n",new_e->elevator_name);
 }
@@ -671,7 +655,7 @@ ssize_t elv_iosched_store(request_queue_
 		return -EINVAL;
 	}
 
-	if (!strcmp(elevator_name, q->elevator->elevator_type->elevator_name))
+	if (!strcmp(elevator_name, q->elevator.elevator_type->elevator_name))
 		return count;
 
 	elevator_switch(q, e);
@@ -680,7 +664,7 @@ ssize_t elv_iosched_store(request_queue_
 
 ssize_t elv_iosched_show(request_queue_t *q, char *name)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 	struct elevator_type *elv = e->elevator_type;
 	struct list_head *entry;
 	int len = 0;
diff -rup linux-2.6.14-rc2/drivers/block/ll_rw_blk.c linux-2.6.14-rc2elv1/drivers/block/ll_rw_blk.c
--- linux-2.6.14-rc2/drivers/block/ll_rw_blk.c	2005-09-24 09:16:32.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/ll_rw_blk.c	2005-10-13 04:18:12.000000000 +0400
@@ -1613,8 +1613,7 @@ void blk_cleanup_queue(request_queue_t *
 	if (!atomic_dec_and_test(&q->refcnt))
 		return;
 
-	if (q->elevator)
-		elevator_exit(q->elevator);
+	elevator_exit(&q->elevator);
 
 	blk_sync_queue(q);
 
diff -rup linux-2.6.14-rc2/include/linux/blkdev.h linux-2.6.14-rc2elv1/include/linux/blkdev.h
--- linux-2.6.14-rc2/include/linux/blkdev.h	2005-09-24 09:16:47.000000000 +0400
+++ linux-2.6.14-rc2elv1/include/linux/blkdev.h	2005-10-13 04:18:12.000000000 +0400
@@ -312,7 +312,7 @@ struct request_queue
 	 */
 	struct list_head	queue_head;
 	struct request		*last_merge;
-	elevator_t		*elevator;
+	elevator_t		elevator;
 
 	/*
 	 * the queue request freelist, one for reads and one for writes


* Re:[PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-10-17 17:01 Ananiev, Leonid I
  2005-10-17 17:58 ` [PATCH " Jens Axboe
  0 siblings, 1 reply; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-10-17 17:01 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-kernel

Jens Axboe writes

> I don't really see the patch doing what you describe - the indirect
> function calls are the same.

For example, on Pentium 4 in the function elv_next_request(), the line
            struct request *rq = q->elevator->ops->elevator_next_req_fn(q);
before the patch required 11% of the function's running time, as
oprofile reports:
           %
    26  0.0457 :c0270ecb:       mov    0xc(%edi),%eax
  3455  6.0670 :c0270ece:       mov    (%eax),%eax
  2848  5.0011 :c0270ed0:       mov    %edi,(%esp)
  1538  2.7008 :c0270ed3:       call   *0xc(%eax)

	A patch which would delete all indirect calls was also tried:
        struct request *rq = q->elevator_cp.ops.elevator_next_req_fn(q);
     9  0.0224 :c0270eea:       mov    %edi,(%esp)
  3814  9.4793 :c0270eed:       call   *0x18(%edi)

But additional memory would be needed for 'ops' in each queue. The
intermediate (proposed) patch has the same timing effect but saves some
memory:
	struct request *rq = q->elevator_cp.ops->elevator_next_req_fn(q);
drivers/block/elevator.c:351
ffffffff802a8b97:       49 8b 44 24 18          mov    0x18(%r12),%rax
ffffffff802a8b9c:       4c 89 e7                mov    %r12,%rdi
ffffffff802a8b9f:       ff 50 18                callq  *0x18(%rax)

For Itanium the difference is huge:
	Before patch:
drivers/block/elevator.c:351
a0000001002cbb60:       0d f0 00 4c 18 10       [MFI]   ld8 r30=[r38]
a0000001002cbb66:       00 00 00 02 00 c0               nop.f 0x0
a0000001002cbb6c:       05 00 01 84                     mov r46=r32;;
a0000001002cbb70:       0b e8 00 3c 18 10       [MMI]   ld8 r29=[r30];;
a0000001002cbb76:       c0 c1 74 00 42 00               adds r28=24,r29
a0000001002cbb7c:       00 00 04 00                     nop.i 0x0;;
a0000001002cbb80:       0b d0 00 38 18 10       [MMI]   ld8 r26=[r28];;
a0000001002cbb86:       b0 41 68 30 28 00               ld8 r27=[r26],8
a0000001002cbb8c:       00 00 04 00                     nop.i 0x0;;
a0000001002cbb90:       11 08 00 34 18 10       [MIB]   ld8 r1=[r26]
a0000001002cbb96:       70 d8 04 80 03 00               mov b7=r27
a0000001002cbb9c:       78 00 80 10                     br.call.sptk.many

	After patching there is no contiguous object code for the line in
question; it is scattered and interleaved with code from other source
lines.

Leonid


* Re: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-17 17:01 Re:[PATCH 1/1] indirect function calls elimination in IO scheduler Ananiev, Leonid I
@ 2005-10-17 17:58 ` Jens Axboe
  2005-10-17 19:25   ` Chen, Kenneth W
  0 siblings, 1 reply; 12+ messages in thread
From: Jens Axboe @ 2005-10-17 17:58 UTC (permalink / raw)
  To: Ananiev, Leonid I; +Cc: linux-kernel

On Mon, Oct 17 2005, Ananiev, Leonid I wrote:
> Jens Axboe writes
> 
> > I don't really see the patch doing what you describe - the indirect
> > function calls are the same.
> 
> For example on Pentium4 in the function elv_next_request() the line
>             struct request *rq =
> q->elevator->ops->elevator_next_req_fn(q);
> before patch had required 11% of function running time as oprofile
> reports
>            %
>     26  0.0457 :c0270ecb:       mov    0xc(%edi),%eax
>   3455  6.0670 :c0270ece:       mov    (%eax),%eax
>   2848  5.0011 :c0270ed0:       mov    %edi,(%esp)
>   1538  2.7008 :c0270ed3:       call   *0xc(%eax)
> 
> 	A patch which would delete all indirect calls was tryed
>         struct request *rq = q->elevator_cp.ops.elevator_next_req_fn(q);
>      9  0.0224 :c0270eea:       mov    %edi,(%esp)
>   3814  9.4793 :c0270eed:       call   *0x18(%edi)
> 
> But additional memory would be needed for 'ops' in each queue. The
> intermediate (proposed) patch has the same timing effect but saves some
> memory:
> 	struct request *rq =
> q->elevator_cp.ops->elevator_next_req_fn(q);
> drivers/block/elevator.c:351
> ffffffff802a8b97:       49 8b 44 24 18          mov    0x18(%r12),%rax
> ffffffff802a8b9c:       4c 89 e7                mov    %r12,%rdi
> ffffffff802a8b9f:       ff 50 18                callq  *0x18(%rax)

But with the proposed patch, the function call is still indirect. You
are only eliminating a dereference of elevator->, since the elevator is
now embedded in the queue. That matches your asm: you eliminate one mov
there.

I'm guessing you are testing this with your NULL driver, which is why
the difference is so 'huge' in profiling. And you are probably using
noop, correct? To be honest, I don't see a lot of real-world relevance
in this testing; the io path isn't completely lean with the regular io
schedulers either, and I bet this would be noise in real-world testing.
Micro benchmarks are all fine, but they only say so much. And as I
originally stated, this patch is a no-go from the beginning, since you
cannot ref count a statically embedded structure. It has to be
dynamically allocated.

So if you are really interested in this and have a valid reason to
pursue it, please think more about other ways to solve this.

-- 
Jens Axboe



* RE: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-17 17:58 ` [PATCH " Jens Axboe
@ 2005-10-17 19:25   ` Chen, Kenneth W
  2005-10-17 19:40     ` Jens Axboe
  0 siblings, 1 reply; 12+ messages in thread
From: Chen, Kenneth W @ 2005-10-17 19:25 UTC (permalink / raw)
  To: 'Jens Axboe', Ananiev, Leonid I; +Cc: linux-kernel

Jens Axboe wrote on Monday, October 17, 2005 10:59 AM
> you cannot ref count a statically embedded structure. It has to be
> dynamically allocated.

I'm confused.  For every block device queue, there is one unique
elevator_t structure allocated via kmalloc.  And vice versa, each
elevator_t has only one request_queue pointing to it.  This elevator_t
structure is per-q, since it has a pointer to per-queue elevator
private data.

Since it is always a one-to-one relationship, the ref count is
predictable and static.  I see there is a ref count on q->elevator, but
it is always 2: one from object instantiation and one from adding a
sysfs hierarchy directory.  In this case, I don't see the difference.
Am I missing something?

- Ken


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-17 19:25   ` Chen, Kenneth W
@ 2005-10-17 19:40     ` Jens Axboe
  0 siblings, 0 replies; 12+ messages in thread
From: Jens Axboe @ 2005-10-17 19:40 UTC (permalink / raw)
  To: Chen, Kenneth W; +Cc: Ananiev, Leonid I, linux-kernel

On Mon, Oct 17 2005, Chen, Kenneth W wrote:
> Jens Axboe wrote on Monday, October 17, 2005 10:59 AM
> > you cannot ref count a statically embedded structure. It has to be
> > dynamically allocated.
> 
> I'm confused.  For every block device queue, there is one unique
> elevator_t structure allocated via kmalloc.  And vice versa, each
> elevator_t has only one request_queue pointing to it.  This elevator_t
> structure is per-q, since it has a pointer to per-queue elevator
> private data.

For every _non_ stacked queue there is an elevator_t structure attached.
That's a separate point, but it means that embedding the elevator inside
the queue wastes memory (104 bytes per queue on the box I'm typing on)
for dm/md devices.

> Since it is always a one-to-one relationship, the ref count is
> predictable and static.  I see there is a ref count on q->elevator, but
> it is always 2: one from object instantiation and one from adding a
> sysfs hierarchy directory.  In this case, I don't see the difference.
> Am I missing something?

The reference count is used outside of the queue being gotten/put, the
elevator switch being one such user. Tejun has patches for improving the
switching, so it would be possible to keep two schedulers alive for the
queue for the duration of the switch.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-17 16:12 Ananiev, Leonid I
@ 2005-10-18  2:44 ` Randy.Dunlap
  0 siblings, 0 replies; 12+ messages in thread
From: Randy.Dunlap @ 2005-10-18  2:44 UTC (permalink / raw)
  To: Ananiev, Leonid I; +Cc: linux-kernel

On Mon, 17 Oct 2005 20:12:18 +0400 Ananiev, Leonid I wrote:

> Randy,
> You are right. The lines are broken if I send the patch outside Intel
> (I've tried sending to @mail.ru).
> Inside Intel the lines are not broken, as I see in response mails.
> I'd used "Plain text" before, but the flag "Use MS Word 2003 to edit
> e-mail messages" was not turned off.
> Now this flag is turned off. Once more I opened the diff text created
> on Linux in WordPad and pasted it into this mail.
> Is it OK in your mail client?

No:

patch: **** malformed patch at line 52: *rq)

Fix one, try again:

patch: **** malformed patch at line 79: *rq)

again:

patch: **** malformed patch at line 175: *rq)

again:
patch: **** malformed patch at line 247: *cfqq)

In general:  copy-paste often has problems in Linux.  I don't
know about Windows.

In general:  you might be better off trying to use attachments.


> But I've sent a long patch line to @mail.ru and I've seen the lines
> still broken.
> It is not permitted to use a mail client other than MS Outlook in our
> office.

That needs to be fixed.  There are some decent email clients for
Windows, like Netscape/Mozilla, Thunderbird, sylpheed (beta),
even Eudora.

> Randy writes
> >You should also make sure that it applies cleanly
> > to the current kernel version
> 
> I've applied it to linux-2.6.14-rc4
> ----------------------------------------------------

---
~Randy

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-10-18 13:15 Ananiev, Leonid I
  0 siblings, 0 replies; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-10-18 13:15 UTC (permalink / raw)
  To: Jens Axboe; +Cc: lkml

Jens Axboe writes 
> But with the patch proposed, the function call is still indirect.
> That matches your asm, you eliminate one mov there.

Yes, I've eliminated only one of the two additional movs, and that is
enough to increase SysBench fileio throughput by 2% and more on Itanium.
Deleting the other indirection step had no influence on SysBench fileio
throughput.

Kernel object code for Itanium is reduced by 856 bytes after patching.

> I'm guessing you are testing this with your NULL driver.
The NULL driver was used only for profiling the elevator functions.
It is not considered as grounds for the patch.

> this patch is a no-go from the beginning since
> you cannot ref count a statically embedded structure. It has to be
> dynamically allocated.

There was no 'ref count' on the elevator structure in 2.6.9, and no
'ref count' was added when modular and online switching was enabled.

> So if you are really interested in this and have a valid reason
The SysBench fileio test results show noop degradation after 2.6.9.

> That's a separate point, but it means that embedding the elevator inside
> the queue wastes memory (104 bytes per queue on this box I'm typing on)
> for dm/md devices.

845 bytes of kernel code are deleted, but 104 bytes per disk (except the
first disk) are wasted. In sum, with 9 disks we break even, but kernel
run time and performance are better.

> The reference count does exist outside of the queue getting gotten/put,
> the switching being one of them. Tejun has patches for improving the
> switching, so it would be possible to keep two schedulers alive for the
> queue for the duration of the switch.

It is not possible for two schedulers to function simultaneously while
switching. But the ioscheduler switcher will still return to the
previous ioscheduler if the new ioscheduler fails to attach.

Leonid
-----Original Message-----
From: Jens Axboe [mailto:axboe@suse.de] 
Sent: Monday, October 17, 2005 9:59 PM
To: Ananiev, Leonid I
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/1] indirect function calls elimination in IO
scheduler

On Mon, Oct 17 2005, Ananiev, Leonid I wrote:
> Jens Axboe writes
> 
> > I don't really see the patch doing what you describe - the indirect
> > function calls are the same.
> 
> For example on Pentium4, in the function elv_next_request(), the line
>             struct request *rq = q->elevator->ops->elevator_next_req_fn(q);
> before the patch required 11% of the function's running time, as
> oprofile reports:
>            %
>     26  0.0457 :c0270ecb:       mov    0xc(%edi),%eax
>   3455  6.0670 :c0270ece:       mov    (%eax),%eax
>   2848  5.0011 :c0270ed0:       mov    %edi,(%esp)
>   1538  2.7008 :c0270ed3:       call   *0xc(%eax)
> 
> 	A patch which would delete all indirect calls was tried:
>         struct request *rq = q->elevator_cp.ops.elevator_next_req_fn(q);
>      9  0.0224 :c0270eea:       mov    %edi,(%esp)
>   3814  9.4793 :c0270eed:       call   *0x18(%edi)
> 
> But additional memory would be needed for 'ops' in each queue. The
> intermediate (proposed) patch has the same timing effect but saves some
> memory:
> 	struct request *rq = q->elevator_cp.ops->elevator_next_req_fn(q);
> drivers/block/elevator.c:351
> ffffffff802a8b97:       49 8b 44 24 18          mov    0x18(%r12),%rax
> ffffffff802a8b9c:       4c 89 e7                mov    %r12,%rdi
> ffffffff802a8b9f:       ff 50 18                callq  *0x18(%rax)

But with the patch proposed, the function call is still indirect. You
are only eliminating a dereference of elevator->, since that is now
inlined in the queue. That matches your asm: you eliminate one mov
there.

I'm guessing you are testing this with your NULL driver, which is why
the difference is so 'huge' in profiling. And you are probably using
noop, correct? I don't see a lot of real world relevance to this
testing to be honest, the io path isn't completely lean with the regular
io schedulers either and I bet this would be noise on real world
testing. Micro benchmarks are all fine, but they only say so much. And
as I originally stated, this patch is a no-go from the beginning since
you cannot ref count a statically embedded structure. It has to be
dynamically allocated.

So if you are really interested in this and have a valid reason to
pursue it, please think more about other ways to solve this.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-10-19 13:08 Ananiev, Leonid I
  2005-10-19 13:56 ` Arjan van de Ven
  0 siblings, 1 reply; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-10-19 13:08 UTC (permalink / raw)
  To: linux-kernel; +Cc: rdunlap

>From Leonid Ananiev

      Fully modular io schedulers with online switching between them
were introduced in Linux 2.6.10, but as a result the percentage of CPU
used by the kernel increased and a performance degradation is seen on
Itanium. The cause of the degradation is the extra steps in the indirect
IO-scheduler-type-specific function calls.
      The patch eliminates 45 indirect function calls in 16 elevator
functions. SysBench fileio benchmark throughput increased by 2% for the
noop elevator after patching.

Signed-off-by: Leonid Ananiev <leonid.i.ananiev@intel.com>

----
diff -rup linux-2.6.14-rc2/drivers/block/as-iosched.c linux-2.6.14-rc2elv1/drivers/block/as-iosched.c
--- linux-2.6.14-rc2/drivers/block/as-iosched.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/as-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -614,7 +614,7 @@ static void as_antic_stop(struct as_data
 static void as_antic_timeout(unsigned long data)
 {
 	struct request_queue *q = (struct request_queue *)data;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
@@ -945,7 +945,7 @@ static void update_write_batch(struct as
  */
 static void as_completed_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	WARN_ON(!list_empty(&rq->queuelist));
@@ -1030,7 +1030,7 @@ static void as_remove_queued_request(req
 {
 	struct as_rq *arq = RQ_DATA(rq);
 	const int data_dir = arq->is_sync;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 
 	WARN_ON(arq->state != AS_RQ_QUEUED);
 
@@ -1361,7 +1361,7 @@ fifo_expired:
 
 static struct request *as_next_request(request_queue_t *q)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct request *rq = NULL;
 
 	/*
@@ -1465,7 +1465,7 @@ static void as_add_request(struct as_dat
 
 static void as_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (arq) {
@@ -1510,7 +1510,7 @@ static void as_account_queued_request(st
 static void
 as_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (arq) {
@@ -1563,7 +1563,7 @@ as_insert_request(request_queue_t *q, st
  */
 static int as_queue_empty(request_queue_t *q)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 
 	if (!list_empty(&ad->fifo_list[REQ_ASYNC])
 		|| !list_empty(&ad->fifo_list[REQ_SYNC])
@@ -1602,7 +1602,7 @@ as_latter_request(request_queue_t *q, st
 static int
 as_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	sector_t rb_key = bio->bi_sector + bio_sectors(bio);
 	struct request *__rq;
 	int ret;
@@ -1657,7 +1657,7 @@ out_insert:
 
 static void as_merged_request(request_queue_t *q, struct request *req)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(req);
 
 	/*
@@ -1702,7 +1702,7 @@ static void
 as_merged_requests(request_queue_t *q, struct request *req,
 			 struct request *next)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(req);
 	struct as_rq *anext = RQ_DATA(next);
 
@@ -1789,7 +1789,7 @@ static void as_work_handler(void *data)
 
 static void as_put_request(request_queue_t *q, struct request *rq)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = RQ_DATA(rq);
 
 	if (!arq) {
@@ -1809,7 +1809,7 @@ static void as_put_request(request_queue
 static int as_set_request(request_queue_t *q, struct request *rq,
 			  struct bio *bio, int gfp_mask)
 {
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct as_rq *arq = mempool_alloc(ad->arq_pool, gfp_mask);
 
 	if (arq) {
@@ -1831,7 +1831,7 @@ static int as_set_request(request_queue_
 static int as_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
 	int ret = ELV_MQUEUE_MAY;
-	struct as_data *ad = q->elevator->elevator_data;
+	struct as_data *ad = q->elevator.elevator_data;
 	struct io_context *ioc;
 	if (ad->antic_status == ANTIC_WAIT_REQ ||
 			ad->antic_status == ANTIC_WAIT_NEXT) {
diff -rup linux-2.6.14-rc2/drivers/block/cfq-iosched.c linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c
--- linux-2.6.14-rc2/drivers/block/cfq-iosched.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/cfq-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -364,7 +364,7 @@ static inline void cfq_schedule_dispatch
 
 static int cfq_queue_empty(request_queue_t *q)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 
 	return !cfq_pending_requests(cfqd);
 }
@@ -678,7 +678,7 @@ out:
 
 static void cfq_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(rq);
 
 	if (crq) {
@@ -724,7 +724,7 @@ static void cfq_remove_request(request_q
 static int
 cfq_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request *__rq;
 	int ret;
 
@@ -756,7 +756,7 @@ out_insert:
 
 static void cfq_merged_request(request_queue_t *q, struct request *req)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(req);
 
 	cfq_del_crq_hash(crq);
@@ -999,7 +999,7 @@ static int cfq_arm_slice_timer(struct cf
  */
 static void cfq_dispatch_sort(request_queue_t *q, struct cfq_rq *crq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_queue *cfqq = crq->cfq_queue;
 	struct list_head *head = &q->queue_head, *entry = head;
 	struct request *__rq;
@@ -1196,7 +1196,7 @@ __cfq_dispatch_requests(struct cfq_data 
 static int
 cfq_dispatch_requests(request_queue_t *q, int max_dispatch, int force)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_queue *cfqq;
 
 	if (!cfqd->busy_queues)
@@ -1270,7 +1270,7 @@ cfq_account_completion(struct cfq_queue 
 
 static struct request *cfq_next_request(request_queue_t *q)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request *rq;
 
 	if (!list_empty(&q->queue_head)) {
@@ -1840,7 +1840,7 @@ static void cfq_enqueue(struct cfq_data 
 static void
 cfq_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 
 	switch (where) {
 		case ELEVATOR_INSERT_BACK:
@@ -2006,7 +2006,7 @@ __cfq_may_queue(struct cfq_data *cfqd, s
 
 static int cfq_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct task_struct *tsk = current;
 	struct cfq_queue *cfqq;
 
@@ -2029,7 +2029,7 @@ static int cfq_may_queue(request_queue_t
 
 static void cfq_check_waiters(request_queue_t *q, struct cfq_queue *cfqq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct request_list *rl = &q->rq;
 
 	if (cfqq->allocated[READ] <= cfqd->max_queued || cfqd->rq_starved) {
@@ -2050,7 +2050,7 @@ static void cfq_check_waiters(request_qu
  */
 static void cfq_put_request(request_queue_t *q, struct request *rq)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct cfq_rq *crq = RQ_DATA(rq);
 
 	if (crq) {
@@ -2077,7 +2077,7 @@ static int
 cfq_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		int gfp_mask)
 {
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	struct task_struct *tsk = current;
 	struct cfq_io_context *cic;
 	const int rw = rq_data_dir(rq);
@@ -2153,7 +2153,7 @@ queue_fail:
 static void cfq_kick_queue(void *data)
 {
 	request_queue_t *q = data;
-	struct cfq_data *cfqd = q->elevator->elevator_data;
+	struct cfq_data *cfqd = q->elevator.elevator_data;
 	unsigned long flags;
 
 	spin_lock_irqsave(q->queue_lock, flags);
@@ -2263,7 +2263,7 @@ static void cfq_put_cfqd(struct cfq_data
 	blk_put_queue(q);
 
 	cfq_shutdown_timer_wq(cfqd);
-	q->elevator->elevator_data = NULL;
+	q->elevator.elevator_data = NULL;
 
 	mempool_destroy(cfqd->crq_pool);
 	kfree(cfqd->crq_hash);
diff -rup linux-2.6.14-rc2/drivers/block/deadline-iosched.c linux-2.6.14-rc2elv1/drivers/block/deadline-iosched.c
--- linux-2.6.14-rc2/drivers/block/deadline-iosched.c	2005-09-24 09:16:32.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/deadline-iosched.c	2005-10-13 04:18:12.000000000 +0400
@@ -289,7 +289,7 @@ deadline_find_first_drq(struct deadline_
 static inline void
 deadline_add_request(struct request_queue *q, struct request *rq)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	const int data_dir = rq_data_dir(drq->request);
@@ -317,7 +317,7 @@ static void deadline_remove_request(requ
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	if (drq) {
-		struct deadline_data *dd = q->elevator->elevator_data;
+		struct deadline_data *dd = q->elevator.elevator_data;
 
 		list_del_init(&drq->fifo);
 		deadline_remove_merge_hints(q, drq);
@@ -328,7 +328,7 @@ static void deadline_remove_request(requ
 static int
 deadline_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct request *__rq;
 	int ret;
 
@@ -383,7 +383,7 @@ out_insert:
 
 static void deadline_merged_request(request_queue_t *q, struct request *req)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(req);
 
 	/*
@@ -407,7 +407,7 @@ static void
 deadline_merged_requests(request_queue_t *q, struct request *req,
 			 struct request *next)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(req);
 	struct deadline_rq *dnext = RQ_DATA(next);
 
@@ -599,7 +599,7 @@ dispatch_request:
 
 static struct request *deadline_next_request(request_queue_t *q)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct request *rq;
 
 	/*
@@ -620,7 +620,7 @@ dispatch:
 static void
 deadline_insert_request(request_queue_t *q, struct request *rq, int where)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 
 	/* barriers must flush the reorder queue */
 	if (unlikely(rq->flags & (REQ_SOFTBARRIER | REQ_HARDBARRIER)
@@ -648,7 +648,7 @@ deadline_insert_request(request_queue_t 
 
 static int deadline_queue_empty(request_queue_t *q)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 
 	if (!list_empty(&dd->fifo_list[WRITE])
 	    || !list_empty(&dd->fifo_list[READ])
@@ -745,7 +745,7 @@ static int deadline_init_queue(request_q
 
 static void deadline_put_request(request_queue_t *q, struct request *rq)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq = RQ_DATA(rq);
 
 	if (drq) {
@@ -758,7 +758,7 @@ static int
 deadline_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		     int gfp_mask)
 {
-	struct deadline_data *dd = q->elevator->elevator_data;
+	struct deadline_data *dd = q->elevator.elevator_data;
 	struct deadline_rq *drq;
 
 	drq = mempool_alloc(dd->drq_pool, gfp_mask);
diff -rup linux-2.6.14-rc2/drivers/block/elevator.c linux-2.6.14-rc2elv1/drivers/block/elevator.c
--- linux-2.6.14-rc2/drivers/block/elevator.c	2005-09-24 09:13:54.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/elevator.c	2005-10-13 04:18:12.000000000 +0400
@@ -130,18 +130,17 @@ static struct elevator_type *elevator_ge
 	return e;
 }
 
-static int elevator_attach(request_queue_t *q, struct elevator_type *e,
-			   struct elevator_queue *eq)
+static int elevator_attach(request_queue_t *q, struct elevator_type *e)
 {
 	int ret = 0;
+	struct elevator_queue *eq;
 
-	memset(eq, 0, sizeof(*eq));
+	eq = &q->elevator;
 	eq->ops = &e->ops;
 	eq->elevator_type = e;
 
 	INIT_LIST_HEAD(&q->queue_head);
 	q->last_merge = NULL;
-	q->elevator = eq;
 
 	if (eq->ops->elevator_init_fn)
 		ret = eq->ops->elevator_init_fn(q, eq);
@@ -183,7 +182,6 @@ __setup("elevator=", elevator_setup);
 int elevator_init(request_queue_t *q, char *name)
 {
 	struct elevator_type *e = NULL;
-	struct elevator_queue *eq;
 	int ret = 0;
 
 	elevator_setup_default();
@@ -195,15 +193,8 @@ int elevator_init(request_queue_t *q, ch
 	if (!e)
 		return -EINVAL;
 
-	eq = kmalloc(sizeof(struct elevator_queue), GFP_KERNEL);
-	if (!eq) {
-		elevator_put(e->elevator_type);
-		return -ENOMEM;
-	}
-
-	ret = elevator_attach(q, e, eq);
+	ret = elevator_attach(q, e);
 	if (ret) {
-		kfree(eq);
 		elevator_put(e->elevator_type);
 	}
 
@@ -217,12 +208,11 @@ void elevator_exit(elevator_t *e)
 
 	elevator_put(e->elevator_type);
 	e->elevator_type = NULL;
-	kfree(e);
 }
 
 int elv_merge(request_queue_t *q, struct request **req, struct bio *bio)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_merge_fn)
 		return e->ops->elevator_merge_fn(q, req, bio);
@@ -232,7 +222,7 @@ int elv_merge(request_queue_t *q, struct
 
 void elv_merged_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_merged_fn)
 		e->ops->elevator_merged_fn(q, rq);
@@ -241,7 +231,7 @@ void elv_merged_request(request_queue_t 
 void elv_merge_requests(request_queue_t *q, struct request *rq,
 			     struct request *next)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (q->last_merge == next)
 		q->last_merge = NULL;
@@ -258,7 +248,7 @@ void elv_merge_requests(request_queue_t 
  */
 void elv_deactivate_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * it already went through dequeue, we need to decrement the
@@ -296,8 +286,8 @@ void elv_requeue_request(request_queue_t
 	 * if iosched has an explicit requeue hook, then use that. otherwise
 	 * just put the request at the front of the queue
 	 */
-	if (q->elevator->ops->elevator_requeue_req_fn)
-		q->elevator->ops->elevator_requeue_req_fn(q, rq);
+	if (q->elevator.ops->elevator_requeue_req_fn)
+		q->elevator.ops->elevator_requeue_req_fn(q, rq);
 	else
 		__elv_add_request(q, rq, ELEVATOR_INSERT_FRONT, 0);
 }
@@ -318,7 +308,7 @@ void __elv_add_request(request_queue_t *
 	rq->q = q;
 
 	if (!test_bit(QUEUE_FLAG_DRAIN, &q->queue_flags)) {
-		q->elevator->ops->elevator_add_req_fn(q, rq, where);
+		q->elevator.ops->elevator_add_req_fn(q, rq, where);
 
 		if (blk_queue_plugged(q)) {
 			int nrq = q->rq.count[READ] + q->rq.count[WRITE]
@@ -348,7 +338,7 @@ void elv_add_request(request_queue_t *q,
 
 static inline struct request *__elv_next_request(request_queue_t *q)
 {
-	struct request *rq = q->elevator->ops->elevator_next_req_fn(q);
+	struct request *rq = q->elevator.ops->elevator_next_req_fn(q);
 
 	/*
 	 * if this is a barrier write and the device has to issue a
@@ -418,7 +408,7 @@ struct request *elv_next_request(request
 
 void elv_remove_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * the time frame between a request being removed from the lists
@@ -446,7 +436,7 @@ void elv_remove_request(request_queue_t 
 
 int elv_queue_empty(request_queue_t *q)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_queue_empty_fn)
 		return e->ops->elevator_queue_empty_fn(q);
@@ -458,7 +448,7 @@ struct request *elv_latter_request(reque
 {
 	struct list_head *next;
 
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_latter_req_fn)
 		return e->ops->elevator_latter_req_fn(q, rq);
@@ -474,7 +464,7 @@ struct request *elv_former_request(reque
 {
 	struct list_head *prev;
 
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_former_req_fn)
 		return e->ops->elevator_former_req_fn(q, rq);
@@ -489,7 +479,7 @@ struct request *elv_former_request(reque
 int elv_set_request(request_queue_t *q, struct request *rq, struct bio *bio,
 		    int gfp_mask)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_set_req_fn)
 		return e->ops->elevator_set_req_fn(q, rq, bio, gfp_mask);
@@ -500,7 +490,7 @@ int elv_set_request(request_queue_t *q, 
 
 void elv_put_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_put_req_fn)
 		e->ops->elevator_put_req_fn(q, rq);
@@ -508,7 +498,7 @@ void elv_put_request(request_queue_t *q,
 
 int elv_may_queue(request_queue_t *q, int rw, struct bio *bio)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	if (e->ops->elevator_may_queue_fn)
 		return e->ops->elevator_may_queue_fn(q, rw, bio);
@@ -518,7 +508,7 @@ int elv_may_queue(request_queue_t *q, in
 
 void elv_completed_request(request_queue_t *q, struct request *rq)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	/*
 	 * request is released from the driver, io must be done
@@ -532,7 +522,7 @@ void elv_completed_request(request_queue
 
 int elv_register_queue(struct request_queue *q)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 
 	e->kobj.parent = kobject_get(&q->kobj);
 	if (!e->kobj.parent)
@@ -547,7 +537,7 @@ int elv_register_queue(struct request_qu
 void elv_unregister_queue(struct request_queue *q)
 {
 	if (q) {
-		elevator_t *e = q->elevator;
+		elevator_t *e = &q->elevator;
 		kobject_unregister(&e->kobj);
 		kobject_put(&q->kobj);
 	}
@@ -590,12 +580,8 @@ EXPORT_SYMBOL_GPL(elv_unregister);
  */
 static void elevator_switch(request_queue_t *q, struct elevator_type *new_e)
 {
-	elevator_t *e = kmalloc(sizeof(elevator_t), GFP_KERNEL);
 	struct elevator_type *noop_elevator = NULL;
-	elevator_t *old_elevator;
-
-	if (!e)
-		goto error;
+	elevator_t old_elevator;
 
 	/*
 	 * first step, drain requests from the block freelist
@@ -615,7 +601,7 @@ static void elevator_switch(request_queu
  	 */
 	noop_elevator = elevator_get("noop");
 	spin_lock_irq(q->queue_lock);
-	elevator_attach(q, noop_elevator, e);
+	elevator_attach(q, noop_elevator);
 	spin_unlock_irq(q->queue_lock);
 
 	blk_wait_queue_drained(q, 1);
@@ -623,7 +609,7 @@ static void elevator_switch(request_queu
 	/*
 	 * attach and start new elevator
 	 */
-	if (elevator_attach(q, new_e, e))
+	if (elevator_attach(q, new_e))
 		goto fail;
 
 	if (elv_register_queue(q))
@@ -632,7 +618,7 @@ static void elevator_switch(request_queu
 	/*
 	 * finally exit old elevator and start queue again
 	 */
-	elevator_exit(old_elevator);
+	elevator_exit(&old_elevator);
 	blk_finish_queue_drain(q);
 	elevator_put(noop_elevator);
 	return;
@@ -642,14 +628,12 @@ fail_register:
 	 * switch failed, exit the new io scheduler and reattach the old
 	 * one again (along with re-adding the sysfs dir)
 	 */
-	elevator_exit(e);
+	elevator_exit(&q->elevator);
 fail:
 	q->elevator = old_elevator;
 	elv_register_queue(q);
 	blk_finish_queue_drain(q);
-error:
-	if (noop_elevator)
-		elevator_put(noop_elevator);
+	elevator_put(noop_elevator);
 	elevator_put(new_e);
 	printk(KERN_ERR "elevator: switch to %s failed\n",new_e->elevator_name);
 }
@@ -671,7 +655,7 @@ ssize_t elv_iosched_store(request_queue_
 		return -EINVAL;
 	}
 
-	if (!strcmp(elevator_name, q->elevator->elevator_type->elevator_name))
+	if (!strcmp(elevator_name, q->elevator.elevator_type->elevator_name))
 		return count;
 
 	elevator_switch(q, e);
@@ -680,7 +664,7 @@ ssize_t elv_iosched_store(request_queue_
 
 ssize_t elv_iosched_show(request_queue_t *q, char *name)
 {
-	elevator_t *e = q->elevator;
+	elevator_t *e = &q->elevator;
 	struct elevator_type *elv = e->elevator_type;
 	struct list_head *entry;
 	int len = 0;
diff -rup linux-2.6.14-rc2/drivers/block/ll_rw_blk.c linux-2.6.14-rc2elv1/drivers/block/ll_rw_blk.c
--- linux-2.6.14-rc2/drivers/block/ll_rw_blk.c	2005-09-24 09:16:32.000000000 +0400
+++ linux-2.6.14-rc2elv1/drivers/block/ll_rw_blk.c	2005-10-13 04:18:12.000000000 +0400
@@ -1613,8 +1613,7 @@ void blk_cleanup_queue(request_queue_t *
 	if (!atomic_dec_and_test(&q->refcnt))
 		return;
 
-	if (q->elevator)
-		elevator_exit(q->elevator);
+	elevator_exit(&q->elevator);
 
 	blk_sync_queue(q);
 
diff -rup linux-2.6.14-rc2/include/linux/blkdev.h linux-2.6.14-rc2elv1/include/linux/blkdev.h
--- linux-2.6.14-rc2/include/linux/blkdev.h	2005-09-24 09:16:47.000000000 +0400
+++ linux-2.6.14-rc2elv1/include/linux/blkdev.h	2005-10-13 04:18:12.000000000 +0400
@@ -312,7 +312,7 @@ struct request_queue
 	 */
 	struct list_head	queue_head;
 	struct request		*last_merge;
-	elevator_t		*elevator;
+	elevator_t		elevator;
 
 	/*
 	 * the queue request freelist, one for reads and one for writes

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/1] indirect function calls elimination in IO scheduler
  2005-10-19 13:08 Ananiev, Leonid I
@ 2005-10-19 13:56 ` Arjan van de Ven
  0 siblings, 0 replies; 12+ messages in thread
From: Arjan van de Ven @ 2005-10-19 13:56 UTC (permalink / raw)
  To: Ananiev, Leonid I; +Cc: linux-kernel, rdunlap

On Wed, 2005-10-19 at 17:08 +0400, Ananiev, Leonid I wrote:
> >From Leonid Ananiev
> 
>       Fully modular io schedulers with online switching between them
> were introduced in Linux 2.6.10, but as a result the percentage of CPU
> used by the kernel increased and a performance degradation is seen on
> Itanium. The cause of the degradation is the extra steps in the indirect
> IO-scheduler-type-specific function calls.
>       The patch eliminates 45 indirect function calls in 16 elevator
> functions.

does it really reduce those function calls? I thought it only reduced a
few pointers to be followed, but not the actual indirect function calls!




^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-10-20  5:11 Ananiev, Leonid I
  0 siblings, 0 replies; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-10-20  5:11 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, Chen, Kenneth W

Arjan van de Ven writes
> does it really reduce those function calls?

This patch does not delete indirect function calls. It only reduces the
number of indirection steps, i.e. pointers to be followed. You are right.
I should have said:
	The patch eliminates one of the indirection steps in 45 function
calls in 16 elevator functions.
Deleting only one indirection step is a good compromise between memory
waste and run time.
	From the very beginning the patch deleted two additional steps in
the indirect function calls and returned to the 2.6.9 style. Kenneth
Chen proposed to delete only one indirection step and test it:
> Leonid,
> It looks reasonable to me, though your patch significantly increases the
> size of request_queue structure.  This in turn increases kernel cache
> footprint since you are embedding entire elevator_ops and elevator_queue
> inside each and every block layer request queue
> (increased from 712 to 936 bytes).
>
> Do you have any benchmark result to show that saving pointer indirection
> is a win over larger memory foot-print for request_queue structure?
> I don't think it's a win-win case on large system where it might have
> hundreds or more disk queues etc.  But if you have benchmark result, let
> the data speak for itself.
> - Ken
	Testing with SysBench fileio showed that there is no throughput
increase when one more indirection step is deleted; the hardware is able
to hide two of the three MOV instructions before the function call.
	It should be noted that the ops structure size differs between
I/O schedulers, so embedding the 'ops' structure would require using the
maximal structure size. Embedding only the first 'elevator' structure
increases the wasted memory by just 104 bytes per disk, decreases the
Itanium kernel object code by 856 bytes, and increases throughput by at
least 2%.
Leonid

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [PATCH 1/1] indirect function calls elimination in IO scheduler
@ 2005-12-01 17:18 Ananiev, Leonid I
  0 siblings, 0 replies; 12+ messages in thread
From: Ananiev, Leonid I @ 2005-12-01 17:18 UTC (permalink / raw)
  To: linux-kernel; +Cc: axboe

Jens,
 
You wrote about the patch:
>> This breaks reference counting of said
>> structure, so it's not really something that can be applied.
 
>> this patch is a no-go from the beginning since
>> you cannot ref count a statically embedded structure. It has to be
>> dynamically allocated.
 
My answer:
> There was no 'ref count' elevator structure in 2.6.9, and no
> 'ref count' was added when modular and online switching was enabled.
 
>> The does exist outside of the queue getting gotten/put,
>> the switching being one of them. Tejun has patches for improving the
>> switching, so it would be possible to keep two schedulers alive for the
>> queue for the duration of the switch.
 
The proposed patch does not modify the "gotten/put" module order. Before
the patch, the elevator_attach() function dynamically fills kmalloc'ed
memory with the elevator structure; after the patch, it fills the
embedded substructure dynamically as well.
There is no reference count problem: we can return to the old scheduler
if the new one fails.
Both the old and the new scheduler are kept for the duration of the
switch.
 
Leonid

 



^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2005-12-01 17:18 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-10-17 17:01 Re:[PATCH 1/1] indirect function calls elimination in IO scheduler Ananiev, Leonid I
2005-10-17 17:58 ` [PATCH " Jens Axboe
2005-10-17 19:25   ` Chen, Kenneth W
2005-10-17 19:40     ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2005-12-01 17:18 Ananiev, Leonid I
2005-10-20  5:11 Ananiev, Leonid I
2005-10-19 13:08 Ananiev, Leonid I
2005-10-19 13:56 ` Arjan van de Ven
2005-10-18 13:15 Ananiev, Leonid I
2005-10-17 16:12 Ananiev, Leonid I
2005-10-18  2:44 ` Randy.Dunlap
2005-10-16 22:28 Ananiev, Leonid I
2005-10-17  3:41 ` [PATCH " Randy.Dunlap

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox