From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bob Liu
Subject: [PATCH 04/14] tmem: bugfix in obj allocate path
Date: Wed, 18 Dec 2013 14:52:31 +0800
Message-ID: <1387349561-27923-5-git-send-email-bob.liu@oracle.com>
References: <1387349561-27923-1-git-send-email-bob.liu@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Return-path:
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135]) by
	lists.xen.org with esmtp (Exim 4.72) (envelope-from ) id 1VtB1T-0006T2-R6
	for xen-devel@lists.xenproject.org; Wed, 18 Dec 2013 06:54:24 +0000
Received: by mail-pb0-f52.google.com with SMTP id uo5so8047192pbc.39 for ;
	Tue, 17 Dec 2013 22:54:20 -0800 (PST)
In-Reply-To: <1387349561-27923-1-git-send-email-bob.liu@oracle.com>
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xenproject.org
Cc: james.harper@bendigoit.com.au, keir@xen.org, ian.campbell@citrix.com,
	andrew.cooper3@citrix.com, JBeulich@suse.com
List-Id: xen-devel@lists.xenproject.org

There is a bug in the obj allocate path. If parallel callers allocate an
obj and insert it into pool->obj_rb_root at the same time, obj_new() can
return an unexpected obj. This patch fixes it by checking the return value
of obj_rb_insert(). It also renames obj_new() to obj_alloc() and moves the
insert/spinlock code out of the allocation path.
Signed-off-by: Bob Liu
---
 xen/common/tmem.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 6ed91b4..61dfd62 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -959,12 +959,11 @@ static int obj_rb_insert(struct rb_root *root, struct tmem_object_root *obj)
  * allocate, initialize, and insert an tmem_object_root
  * (should be called only if find failed)
  */
-static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oidp)
+static struct tmem_object_root * obj_alloc(struct tmem_pool *pool, struct oid *oidp)
 {
     struct tmem_object_root *obj;
 
     ASSERT(pool != NULL);
-    ASSERT_WRITELOCK(&pool->pool_rwlock);
     if ( (obj = tmem_malloc(sizeof(struct tmem_object_root), pool)) == NULL )
         return NULL;
     pool->obj_count++;
@@ -979,9 +978,6 @@ static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oid
     obj->objnode_count = 0;
     obj->pgp_count = 0;
     obj->last_client = TMEM_CLI_ID_NULL;
-    spin_lock(&obj->obj_spinlock);
-    obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj);
-    ASSERT_SPINLOCK(&obj->obj_spinlock);
     return obj;
 }
 
@@ -1552,10 +1548,13 @@ static int do_tmem_put(struct tmem_pool *pool,
 
     ASSERT(pool != NULL);
     client = pool->client;
+    ASSERT(client != NULL);
     ret = client->frozen ? -EFROZEN : -ENOMEM;
     pool->puts++;
+
+refind:
     /* does page already exist (dup)? if so, handle specially */
-    if ( (obj = obj_find(pool,oidp)) != NULL )
+    if ( (obj = obj_find(pool, oidp)) != NULL )
     {
         if ((pgp = pgp_lookup_in_obj(obj, index)) != NULL)
         {
@@ -1573,12 +1572,22 @@ static int do_tmem_put(struct tmem_pool *pool,
     /* no puts allowed into a frozen pool (except dup puts) */
     if ( client->frozen )
         return ret;
+    if ( (obj = obj_alloc(pool, oidp)) == NULL )
+        return -ENOMEM;
+
     write_lock(&pool->pool_rwlock);
-    if ( (obj = obj_new(pool,oidp)) == NULL )
+    /*
+     * Parallel callers may have already allocated an obj and inserted it
+     * into obj_rb_root before us.
+     */
+    if (!obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj))
     {
+        tmem_free(obj, pool);
         write_unlock(&pool->pool_rwlock);
-        return -ENOMEM;
+        goto refind;
     }
+
+    spin_lock(&obj->obj_spinlock);
     newobj = 1;
     write_unlock(&pool->pool_rwlock);
 }
-- 
1.7.10.4