From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 26 Jan 2011 17:08:24 -0800
From: Andrew Morton
To: KAMEZAWA Hiroyuki
Cc: Greg Thelen, Johannes Weiner, David Rientjes, KOSAKI Motohiro,
	Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [BUGFIX] memcg: fix res_counter_read_u64 lock aware (Was Re: [PATCH] oom: handle overflow in mem_cgroup_out_of_memory()
Message-Id: <20110126170824.ef2ab571.akpm@linux-foundation.org>
In-Reply-To: <20110127095342.3d81cf5f.kamezawa.hiroyu@jp.fujitsu.com>
References: <1296030555-3594-1-git-send-email-gthelen@google.com>
	<20110126170713.GA2401@cmpxchg.org>
	<20110126183023.GB2401@cmpxchg.org>
	<20110126142909.0b710a0c.akpm@linux-foundation.org>
	<20110127092434.df18c7a6.kamezawa.hiroyu@jp.fujitsu.com>
	<20110127095342.3d81cf5f.kamezawa.hiroyu@jp.fujitsu.com>

On Thu, 27 Jan 2011 09:53:42 +0900 KAMEZAWA Hiroyuki wrote:

> res_counter_read_u64 reads u64 value without lock. It's dangerous
> in 32bit environment. This patch adds lock.
> 
> Signed-off-by: KAMEZAWA Hiroyuki
> ---
>  include/linux/res_counter.h |   13 ++++++++++++-
>  kernel/res_counter.c        |    2 +-
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> Index: mmotm-0125/include/linux/res_counter.h
> ===================================================================
> --- mmotm-0125.orig/include/linux/res_counter.h
> +++ mmotm-0125/include/linux/res_counter.h
> @@ -68,7 +68,18 @@ struct res_counter {
>   * @pos: and the offset.
>   */
> 
> -u64 res_counter_read_u64(struct res_counter *counter, int member);
> +u64 res_counter_read_u64_locked(struct res_counter *counter, int member);
> +
> +static inline u64 res_counter_read_u64(struct res_counter *counter, int member)
> +{
> +	unsigned long flags;
> +	u64 ret;
> +
> +	spin_lock_irqsave(&counter->lock, flags);
> +	ret = res_counter_read_u64_locked(counter, member);
> +	spin_unlock_irqrestore(&counter->lock, flags);
> +	return ret;
> +}
> 
>  ssize_t res_counter_read(struct res_counter *counter, int member,
>  		const char __user *buf, size_t nbytes, loff_t *pos,
> Index: mmotm-0125/kernel/res_counter.c
> ===================================================================
> --- mmotm-0125.orig/kernel/res_counter.c
> +++ mmotm-0125/kernel/res_counter.c
> @@ -126,7 +126,7 @@ ssize_t res_counter_read(struct res_coun
>  		pos, buf, s - buf);
>  }
> 
> -u64 res_counter_read_u64(struct res_counter *counter, int member)
> +u64 res_counter_read_u64_locked(struct res_counter *counter, int member)
>  {
>  	return *res_counter_member(counter, member);
>  }

We don't need the lock on 64-bit platforms!  And there's zero benefit
to inlining the spin_lock/unlock(), given that the function will always
be making a function call anyway.

See i_size_read() for inspiration.