From mboxrd@z Thu Jan  1 00:00:00 1970
From: Miao Xie
Subject: Re: [Update PATCH 2/2] aio, mem-hotplug: Add memory barrier to aio ring page migration.
Date: Tue, 04 Mar 2014 13:35:29 +0800
Message-ID: <53156621.60900@cn.fujitsu.com>
References: <1393497616-16428-1-git-send-email-tangchen@cn.fujitsu.com>
 <1393497616-16428-3-git-send-email-tangchen@cn.fujitsu.com>
 <530F2A2D.50307@jp.fujitsu.com>
 <530F3327.8020205@jp.fujitsu.com>
In-Reply-To: <530F3327.8020205@jp.fujitsu.com>
Reply-To: miaox@cn.fujitsu.com
To: Yasuaki Ishimatsu, Tang Chen, bcrl@kvack.org
Cc: viro@zeniv.linux.org.uk, jmoyer@redhat.com, kosaki.motohiro@gmail.com,
 kosaki.motohiro@jp.fujitsu.com, guz.fnst@cn.fujitsu.com,
 linux-fsdevel@vger.kernel.org, linux-aio@kvack.org, linux-kernel@vger.kernel.org
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-2022-JP
Content-Transfer-Encoding: 7bit
Sender: owner-linux-aio@kvack.org
List-Id: linux-fsdevel.vger.kernel.org

On Thu, 27 Feb 2014 21:44:23 +0900, Yasuaki Ishimatsu wrote:
> When doing aio ring page migration, we migrate the page and then update
> ctx->ring_pages[], like the following:
>
> aio_migratepage()
>  |-> migrate_page_copy(new, old)
>  |   ......                      /* Need barrier here */
>  |-> ctx->ring_pages[idx] = new
>
> Actually, we need a memory barrier between these two operations.
> Otherwise, if ctx->ring_pages[] is updated before the memory copy due to
> compiler optimization, other processes may get a chance to access the
> not fully initialized new ring page.
>
> So add a wmb and rmb to synchronize them.
>
> Signed-off-by: Tang Chen
> Signed-off-by: Yasuaki Ishimatsu
>
> ---
>  fs/aio.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index 50c089c..8d9b82b 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -327,6 +327,14 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
>  	pgoff_t idx;
>  	spin_lock_irqsave(&ctx->completion_lock, flags);
>  	migrate_page_copy(new, old);
> +
> +	/*
> +	 * Ensure memory copy is finished before updating
> +	 * ctx->ring_pages[]. Otherwise other processes may access to
> +	 * new ring pages which are not fully initialized.
> +	 */
> +	smp_wmb();
> +
>  	idx = old->index;
>  	if (idx < (pgoff_t)ctx->nr_pages) {
>  		/* And only do the move if things haven't changed */
> @@ -1074,6 +1082,12 @@ static long aio_read_events_ring(struct kioctx *ctx,
>  		page = ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE];
>  		pos %= AIO_EVENTS_PER_PAGE;
>
> +		/*
> +		 * Ensure that the page's data was copied from old one by
> +		 * aio_migratepage().
> +		 */
> +		smp_rmb();
> +

smp_read_barrier_depends() is better.

"One could place an smp_rmb() primitive between the pointer fetch and
dereference. However, this imposes unneeded overhead on systems (such as
i386, IA64, PPC, and SPARC) that respect data dependencies on the read
side. A smp_read_barrier_depends() primitive has been added to the
Linux 2.6 kernel to eliminate overhead on these systems."
	-- From Chapter 7.1, written by Paul E. McKenney

Thanks,
Miao

>  		ev = kmap(page);
>  		copy_ret = copy_to_user(event + ret, ev + pos,
>  					sizeof(*ev) * avail);
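
For illustration, here is a minimal userspace sketch of the publish/read
pattern discussed in this thread, written with C11 atomics as stand-ins for
the kernel primitives: memory_order_release plays the role of smp_wmb() on
the migration side, and memory_order_consume expresses the data dependency
that smp_read_barrier_depends() relies on. The names (ring_page,
publish_page, read_page) are hypothetical and only mirror the shape of the
aio code; this is not the kernel implementation.

/*
 * Userspace analogue of the barrier pairing discussed above.
 * Writer: fully initialize the new page, then publish the pointer
 * with release ordering (kernel: memcpy + smp_wmb() + store).
 * Reader: load the pointer with consume ordering, then dereference
 * (kernel: load + smp_read_barrier_depends() + dereference).
 */
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

struct ring_page {
	char data[64];
};

/* Stand-in for ctx->ring_pages[]; a single published slot here. */
static _Atomic(struct ring_page *) ring_pages[1];

/* "Migration" side: copy the contents first, then publish the new page. */
static void publish_page(struct ring_page *new, const struct ring_page *old)
{
	memcpy(new->data, old->data, sizeof(new->data)); /* migrate_page_copy() */
	atomic_store_explicit(&ring_pages[0], new,
			      memory_order_release);     /* smp_wmb() + update */
}

/* Reader side: the dereference depends on the loaded pointer, which is
 * exactly the dependency smp_read_barrier_depends() orders in the kernel. */
static void read_page(char *out, size_t len)
{
	struct ring_page *page =
		atomic_load_explicit(&ring_pages[0], memory_order_consume);
	memcpy(out, page->data, len);                    /* kmap()/copy path */
}

int main(void)
{
	static struct ring_page old = { .data = "old contents" }, new;
	char buf[64];

	ring_pages[0] = &old;      /* initial ring page */
	publish_page(&new, &old);  /* in the kernel, this races with readers */
	read_page(buf, sizeof(buf));
	printf("%s\n", buf);
	return 0;
}

In the kernel the writer and reader run concurrently on different CPUs; the
single-threaded main() above only shows how the two halves of the pattern
pair up.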