From mboxrd@z Thu Jan 1 00:00:00 1970
From: Li Yu
Subject: Re: [PATCH] aio: using hash table for active requests
Date: Mon, 20 Dec 2010 09:57:08 +0800
Message-ID: <4D0EB7F4.6070903@gmail.com>
References: <4D082FE9.7080407@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Cc: bcrl@kvack.org, viro@zeniv.linux.org.uk, linux-aio@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
To: Jeff Moyer
Return-path:
In-Reply-To:
Sender: owner-linux-aio@kvack.org
List-Id: linux-fsdevel.vger.kernel.org

On 2010-12-16 06:02, Jeff Moyer wrote:
> Li Yu writes:
>
>> This patch removes a TODO in fs/aio.c, which is to use a hash table
>> for active requests.
>
> It's remained a TODO item for this long because it hasn't really
> mattered in the past.  Was there other motivation for this patch
> besides the TODO comment?
>

:), indeed, solving the TODO was my main motivation.

>> I prefer to add an iocb at the tail of the collision chain, so I do
>> not use hlist here.
>
> Why do you prefer to add to the tail?  It makes more sense to add
> collisions at the head, since if you're going to cancel everything, you
> have a better chance of cancelling stuff that was queued later than
> earlier (assuming you can cancel anything at all, which for most cases,
> you can't).  Also, you're halving the number of buckets, so you should
> have a good reason.
>

I misunderstood what this code mainly buys us ... it can only speed up
the io_cancel() syscall a bit.

Also, the "add to tail" design is wrong here: I had assumed each hash
bucket acts as a FIFO queue, so adding an element at the tail of the
list would reduce dequeue time.  (A sketch of the head-insertion hlist
variant you describe follows at the end of this message.)

> What sort of testing did you do?
>
> I've made some cursory comments below.  I'll more fully review this if
> you can provide a bit more justification and some proof of testing.
> I'd be really surprised if this helped any real workloads, though.

Yes, I ran some tests on this patch; its performance impact seems
negligible.  I tested with fio, iodepth=65535, size=1024m.

Thanks for your review time!

Yu

>
> Cheers,
> Jeff
>
>> +static int ioctx_active_reqs_init(struct kioctx *ctx)
>> +{
>> +	int i;
>> +
>> +	ctx->active_reqs_table = kmalloc(AIO_ACTREQ_BUCKETS*sizeof(struct list_head), GFP_KERNEL);
>
> Fit this into 80 columns, please.  You may just want to run checkpatch
> over the whole thing.
>
>> +static inline void aio_cancel_one(struct kioctx *ctx, struct kiocb *iocb)
>
> I see no good reason to inline this.
>
>> @@ -465,10 +506,12 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
>>  	/* Check if the completion queue has enough free space to
>>  	 * accept an event from this io.
>>  	 */
>> +	bucket = hash_long((unsigned long)tohash, AIO_ACTREQ_BUCKETS_SHIFT);
>
> hash_ptr?
>
>> -	struct list_head active_reqs;	/* used for cancellation */
>> +	struct list_head* active_reqs_table;	/* used for cancellation */
>
> Coding Style
>

--
To unsubscribe, send a message with 'unsubscribe linux-aio' in the body
to majordomo@kvack.org.  For more info on Linux AIO,
see: http://www.kvack.org/aio/
Don't email: aart@kvack.org
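
P.S. For concreteness, below is a minimal, untested sketch of the
head-insertion hlist scheme suggested in the review above.  It reuses
the posted patch's names where they appear (active_reqs_table,
AIO_ACTREQ_BUCKETS, AIO_ACTREQ_BUCKETS_SHIFT); everything else (the
ki_hash field, the bucket count, the helper names) is an illustrative
assumption, not code from the patch or from any kernel tree.

/*
 * Sketch only.  Assumes two hypothetical new fields:
 *   struct kioctx { ... struct hlist_head *active_reqs_table; ... };
 *   struct kiocb  { ... struct hlist_node ki_hash; ... };
 */
#include <linux/aio.h>
#include <linux/hash.h>
#include <linux/list.h>
#include <linux/slab.h>

#define AIO_ACTREQ_BUCKETS_SHIFT	9	/* illustrative size */
#define AIO_ACTREQ_BUCKETS		(1UL << AIO_ACTREQ_BUCKETS_SHIFT)

static int ioctx_active_reqs_init(struct kioctx *ctx)
{
	unsigned long i;

	ctx->active_reqs_table = kmalloc(AIO_ACTREQ_BUCKETS *
					 sizeof(struct hlist_head),
					 GFP_KERNEL);
	if (!ctx->active_reqs_table)
		return -ENOMEM;
	for (i = 0; i < AIO_ACTREQ_BUCKETS; i++)
		INIT_HLIST_HEAD(&ctx->active_reqs_table[i]);
	return 0;
}

/* hash_ptr(), as suggested, instead of open-coding hash_long() on a cast. */
static struct hlist_head *actreq_bucket(struct kioctx *ctx, struct kiocb *iocb)
{
	return &ctx->active_reqs_table[hash_ptr(iocb, AIO_ACTREQ_BUCKETS_SHIFT)];
}

/* Head insertion: the most recently queued (and most plausibly
 * cancellable) requests sit at the front of each chain. */
static void aio_hash_kiocb(struct kioctx *ctx, struct kiocb *iocb)
{
	hlist_add_head(&iocb->ki_hash, actreq_bucket(ctx, iocb));
}

/* Called when the request completes or is cancelled. */
static void aio_unhash_kiocb(struct kiocb *iocb)
{
	hlist_del_init(&iocb->ki_hash);
}

/* io_cancel() lookup: walk one bucket instead of the whole
 * active_reqs list. */
static struct kiocb *aio_find_kiocb(struct kioctx *ctx, struct kiocb *key)
{
	struct hlist_node *node;

	for (node = actreq_bucket(ctx, key)->first; node; node = node->next) {
		struct kiocb *iocb = hlist_entry(node, struct kiocb, ki_hash);

		if (iocb == key)
			return iocb;
	}
	return NULL;
}

Since hlist_head is a single pointer while list_head is two, the same
table memory holds twice as many buckets this way, which is the point
Jeff makes about halving the number of buckets.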