xen-devel.lists.xenproject.org archive mirror
* [PATCH v4 00/15] xen: continue to cleanup tmem
@ 2013-12-12 11:05 Bob Liu
  2013-12-12 11:05 ` [PATCH v4 01/15] tmem: cleanup: drop unused sub command Bob Liu
                   ` (14 more replies)
  0 siblings, 15 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

This is v4 of my tmem cleanup series, rebased on commit
d96392361cd05a66b385f0153e398128b196e480:

  xen: arm: correct return value of raw_copy_{to/from}_guest_*,
  raw_clear_guest (2013-12-09 15:31:39 +0000)

Konrad, when you send out the pull request, please add the correct CIDs to
patch [PATCH v4 08/15] tmem: cleanup: drop tmem_lock_all
and
[PATCH v4 15/15] tmem: check the return value of copy to guest

Changelog v4:
 * Drop patch 'tmem: cleanup: drop runtime statistics' which is not suitable
   for 4.4
 * Add a new patch [15/15] to check the return value of copy to guest

Changelog v3:
 * Change 'void *tmem' to 'struct client *tmem_client' in struct domain (Andrew)
 * Add more comments in the commit logs (Konrad)

Changelog v2:
 * Fix the public header file issue introduced by commit 006a687ba4de74
 * Fix some issues based on the feedback from Konrad Wilk

Bob Liu (15):
  tmem: cleanup: drop unused sub command
  tmem: cleanup: drop some debug code
  tmem: cleanup: drop useless function 'tmem_copy_page'
  tmem: cleanup: drop useless parameters from put/get page
  tmem: cleanup: reorg function do_tmem_put()
  tmem: drop unneeded is_ephemeral() and is_private()
  tmem: cleanup: rm useless EXPORT/FORWARD define
  tmem: cleanup: drop tmem_lock_all
  tmem: cleanup: refactor the alloc/free path
  tmem: cleanup: __tmem_alloc_page: drop unneed parameters
  tmem: cleanup: drop useless functions from head file
  tmem: refactor function tmem_ensure_avail_pages()
  tmem: cleanup: rename tmem_relinquish_npages()
  tmem: cleanup: rm unused tmem_freeze_all()
  tmem: check the return value of copy to guest

 xen/common/domain.c        |    4 +-
 xen/common/tmem.c          |  855 +++++++++++++++++++-------------------------
 xen/common/tmem_xen.c      |  147 ++------
 xen/include/public/tmem.h  |    4 +-
 xen/include/xen/sched.h    |    2 +-
 xen/include/xen/tmem_xen.h |  144 +-------
 6 files changed, 398 insertions(+), 758 deletions(-)

-- 
1.7.10.4


* [PATCH v4 01/15] tmem: cleanup: drop unused sub command
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 02/15] tmem: cleanup: drop some debug code Bob Liu
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

TMEM_READ/TMEM_WRITE/TMEM_XCHG/TMEM_NEW_PAGE are never used, so drop them to
make things simpler and cleaner.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c         |   23 +----------------------
 xen/include/public/tmem.h |    4 +++-
 2 files changed, 4 insertions(+), 23 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 3d15ead..0991eeb 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2753,11 +2753,6 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
         rc = do_tmem_new_pool(TMEM_CLI_ID_NULL, 0, op.u.creat.flags,
                               op.u.creat.uuid[0], op.u.creat.uuid[1]);
         break;
-    case TMEM_NEW_PAGE:
-        tmem_ensure_avail_pages();
-        rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn, 0, 0, 0,
-                         tmem_cli_buf_null);
-        break;
     case TMEM_PUT_PAGE:
         tmem_ensure_avail_pages();
         rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn, 0, 0,
@@ -2783,25 +2778,9 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
         flush = 1;
         rc = do_tmem_destroy_pool(op.pool_id);
         break;
-    case TMEM_READ:
-        rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
-                         op.u.gen.tmem_offset, op.u.gen.pfn_offset,
-                         op.u.gen.len, tmem_cli_buf_null);
-        break;
-    case TMEM_WRITE:
-        rc = do_tmem_put(pool, oidp,
-                         op.u.gen.index, op.u.gen.cmfn,
-                         op.u.gen.tmem_offset, op.u.gen.pfn_offset,
-                         op.u.gen.len, tmem_cli_buf_null);
-        break;
-    case TMEM_XCHG:
-        /* need to hold global lock to ensure xchg is atomic */
-        tmem_client_warn("tmem_xchg op not implemented yet\n");
-        rc = 0;
-        break;
     default:
         tmem_client_warn("tmem: op %d not implemented\n", op.cmd);
-        rc = 0;
+        rc = -ENOSYS;
         break;
     }
 
diff --git a/xen/include/public/tmem.h b/xen/include/public/tmem.h
index 5eb2fb4..4fd2fc6 100644
--- a/xen/include/public/tmem.h
+++ b/xen/include/public/tmem.h
@@ -36,14 +36,16 @@
 #define TMEM_CONTROL               0
 #define TMEM_NEW_POOL              1
 #define TMEM_DESTROY_POOL          2
-#define TMEM_NEW_PAGE              3
 #define TMEM_PUT_PAGE              4
 #define TMEM_GET_PAGE              5
 #define TMEM_FLUSH_PAGE            6
 #define TMEM_FLUSH_OBJECT          7
+#if __XEN_INTERFACE_VERSION__ < 0x00040400
+#define TMEM_NEW_PAGE              3
 #define TMEM_READ                  8
 #define TMEM_WRITE                 9
 #define TMEM_XCHG                 10
+#endif
 
 /* Privileged commands to HYPERVISOR_tmem_op() */
 #define TMEM_AUTH                 101 
-- 
1.7.10.4


* [PATCH v4 02/15] tmem: cleanup: drop some debug code
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
  2013-12-12 11:05 ` [PATCH v4 01/15] tmem: cleanup: drop unused sub command Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 03/15] tmem: cleanup: drop useless function 'tmem_copy_page' Bob Liu
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

"SENTINELS" and "DECL_CYC_COUNTER" are hacky code for debugging, there are not
suitable exist in upstream code.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |  192 +++++++++-----------------------------------
 xen/common/tmem_xen.c      |    5 --
 xen/include/xen/tmem_xen.h |   51 ------------
 3 files changed, 36 insertions(+), 212 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 0991eeb..cdc8826 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -29,39 +29,6 @@
 
 #define TMEM_SPEC_VERSION 1
 
-/************ DEBUG and STATISTICS (+ some compression testing) *******/
-
-#ifndef NDEBUG
-#define SENTINELS
-#define NOINLINE noinline
-#else
-#define NOINLINE
-#endif
-
-#ifdef SENTINELS
-#define DECL_SENTINEL unsigned long sentinel;
-#define SET_SENTINEL(_x,_y) _x->sentinel = _y##_SENTINEL
-#define INVERT_SENTINEL(_x,_y) _x->sentinel = ~_y##_SENTINEL
-#define ASSERT_SENTINEL(_x,_y) \
-    ASSERT(_x->sentinel != ~_y##_SENTINEL);ASSERT(_x->sentinel == _y##_SENTINEL)
-#if defined(CONFIG_ARM)
-#define POOL_SENTINEL 0x87658765
-#define OBJ_SENTINEL 0x12345678
-#define OBJNODE_SENTINEL 0xfedcba09
-#define PGD_SENTINEL  0x43214321
-#else
-#define POOL_SENTINEL 0x8765876587658765
-#define OBJ_SENTINEL 0x1234567812345678
-#define OBJNODE_SENTINEL 0xfedcba0987654321
-#define PGD_SENTINEL  0x4321432143214321
-#endif
-#else
-#define DECL_SENTINEL
-#define SET_SENTINEL(_x,_y) do { } while (0)
-#define ASSERT_SENTINEL(_x,_y) do { } while (0)
-#define INVERT_SENTINEL(_x,_y) do { } while (0)
-#endif
-
 /* global statistics (none need to be locked) */
 static unsigned long total_tmem_ops = 0;
 static unsigned long errored_tmem_ops = 0;
@@ -83,16 +50,6 @@ static unsigned long failed_copies;
 static unsigned long pcd_tot_tze_size = 0;
 static unsigned long pcd_tot_csize = 0;
 
-DECL_CYC_COUNTER(succ_get);
-DECL_CYC_COUNTER(succ_put);
-DECL_CYC_COUNTER(non_succ_get);
-DECL_CYC_COUNTER(non_succ_put);
-DECL_CYC_COUNTER(flush);
-DECL_CYC_COUNTER(flush_obj);
-EXTERN_CYC_COUNTER(pg_copy);
-DECL_CYC_COUNTER(compress);
-DECL_CYC_COUNTER(decompress);
-
 /************ CORE DATA STRUCTURES ************************************/
 
 #define MAX_POOLS_PER_DOMAIN 16
@@ -166,7 +123,6 @@ struct tmem_pool {
     unsigned long gets, found_gets;
     unsigned long flushs, flushs_found;
     unsigned long flush_objs, flush_objs_found;
-    DECL_SENTINEL
 };
 
 #define is_persistent(_p)  (_p->persistent)
@@ -179,7 +135,6 @@ struct oid {
 };
 
 struct tmem_object_root {
-    DECL_SENTINEL
     struct oid oid;
     struct rb_node rb_tree_node; /* protected by pool->pool_rwlock */
     unsigned long objnode_count; /* atomicity depends on obj_spinlock */
@@ -193,7 +148,6 @@ struct tmem_object_root {
 
 struct tmem_object_node {
     struct tmem_object_root *obj;
-    DECL_SENTINEL
     struct radix_tree_node rtn;
 };
 
@@ -228,7 +182,6 @@ struct tmem_page_descriptor {
         uint64_t timestamp;
         uint32_t pool_id;  /* used for invalid list only */
     };
-    DECL_SENTINEL
 };
 
 #define PCD_TZE_MAX_SIZE (PAGE_SIZE - (PAGE_SIZE/64))
@@ -299,7 +252,7 @@ static atomic_t global_rtree_node_count = ATOMIC_INIT(0);
 
 
 /************ MEMORY ALLOCATION INTERFACE *****************************/
-static NOINLINE void *tmem_malloc(size_t size, struct tmem_pool *pool)
+static void *tmem_malloc(size_t size, struct tmem_pool *pool)
 {
     void *v = NULL;
 
@@ -318,7 +271,7 @@ static NOINLINE void *tmem_malloc(size_t size, struct tmem_pool *pool)
     return v;
 }
 
-static NOINLINE void tmem_free(void *p, struct tmem_pool *pool)
+static void tmem_free(void *p, struct tmem_pool *pool)
 {
     if ( pool == NULL || !is_persistent(pool) )
     {
@@ -332,7 +285,7 @@ static NOINLINE void tmem_free(void *p, struct tmem_pool *pool)
     }
 }
 
-static NOINLINE struct page_info *tmem_page_alloc(struct tmem_pool *pool)
+static struct page_info *tmem_page_alloc(struct tmem_pool *pool)
 {
     struct page_info *pfp = NULL;
 
@@ -347,7 +300,7 @@ static NOINLINE struct page_info *tmem_page_alloc(struct tmem_pool *pool)
     return pfp;
 }
 
-static NOINLINE void tmem_page_free(struct tmem_pool *pool, struct page_info *pfp)
+static void tmem_page_free(struct tmem_pool *pool, struct page_info *pfp)
 {
     ASSERT(pfp);
     if ( pool == NULL || !is_persistent(pool) )
@@ -361,7 +314,7 @@ static NOINLINE void tmem_page_free(struct tmem_pool *pool, struct page_info *pf
 
 #define NOT_SHAREABLE ((uint16_t)-1UL)
 
-static NOINLINE int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
+static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
 {
     uint8_t firstbyte = pgp->firstbyte;
     struct tmem_page_content_descriptor *pcd;
@@ -385,7 +338,7 @@ static NOINLINE int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descript
 
 /* ensure pgp no longer points to pcd, nor vice-versa */
 /* take pcd rwlock unless have_pcd_rwlock is set, always unlock when done */
-static NOINLINE void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool *pool, bool_t have_pcd_rwlock)
+static void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool *pool, bool_t have_pcd_rwlock)
 {
     struct tmem_page_content_descriptor *pcd = pgp->pcd;
     struct page_info *pfp = pgp->pcd->pfp;
@@ -448,7 +401,7 @@ static NOINLINE void pcd_disassociate(struct tmem_page_descriptor *pgp, struct t
 }
 
 
-static NOINLINE int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize_t csize)
+static int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize_t csize)
 {
     struct rb_node **new, *parent = NULL;
     struct rb_root *root;
@@ -585,7 +538,7 @@ unlock:
 /************ PAGE DESCRIPTOR MANIPULATION ROUTINES *******************/
 
 /* allocate a struct tmem_page_descriptor and associate it with an object */
-static NOINLINE struct tmem_page_descriptor *pgp_alloc(struct tmem_object_root *obj)
+static struct tmem_page_descriptor *pgp_alloc(struct tmem_object_root *obj)
 {
     struct tmem_page_descriptor *pgp;
     struct tmem_pool *pool;
@@ -608,7 +561,6 @@ static NOINLINE struct tmem_page_descriptor *pgp_alloc(struct tmem_object_root *
     pgp->size = -1;
     pgp->index = -1;
     pgp->timestamp = get_cycles();
-    SET_SENTINEL(pgp,PGD);
     atomic_inc_and_max(global_pgp_count);
     atomic_inc_and_max(pool->pgp_count);
     return pgp;
@@ -618,13 +570,11 @@ static struct tmem_page_descriptor *pgp_lookup_in_obj(struct tmem_object_root *o
 {
     ASSERT(obj != NULL);
     ASSERT_SPINLOCK(&obj->obj_spinlock);
-    ASSERT_SENTINEL(obj,OBJ);
     ASSERT(obj->pool != NULL);
-    ASSERT_SENTINEL(obj->pool,POOL);
     return radix_tree_lookup(&obj->tree_root, index);
 }
 
-static NOINLINE void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *pool)
+static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *pool)
 {
     pagesize_t pgp_size = pgp->size;
 
@@ -645,14 +595,11 @@ static NOINLINE void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem
     pgp->size = -1;
 }
 
-static NOINLINE void pgp_free(struct tmem_page_descriptor *pgp, int from_delete)
+static void pgp_free(struct tmem_page_descriptor *pgp, int from_delete)
 {
     struct tmem_pool *pool = NULL;
 
-    ASSERT_SENTINEL(pgp,PGD);
     ASSERT(pgp->us.obj != NULL);
-    ASSERT_SENTINEL(pgp->us.obj,OBJ);
-    ASSERT_SENTINEL(pgp->us.obj->pool,POOL);
     ASSERT(pgp->us.obj->pool->client != NULL);
     if ( from_delete )
         ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
@@ -673,19 +620,15 @@ static NOINLINE void pgp_free(struct tmem_page_descriptor *pgp, int from_delete)
         pgp->pool_id = pool->pool_id;
         return;
     }
-    INVERT_SENTINEL(pgp,PGD);
     pgp->us.obj = NULL;
     pgp->index = -1;
     tmem_free(pgp, pool);
 }
 
-static NOINLINE void pgp_free_from_inv_list(struct client *client, struct tmem_page_descriptor *pgp)
+static void pgp_free_from_inv_list(struct client *client, struct tmem_page_descriptor *pgp)
 {
     struct tmem_pool *pool = client->pools[pgp->pool_id];
 
-    ASSERT_SENTINEL(pool,POOL);
-    ASSERT_SENTINEL(pgp,PGD);
-    INVERT_SENTINEL(pgp,PGD);
     pgp->us.obj = NULL;
     pgp->index = -1;
     tmem_free(pgp, pool);
@@ -733,7 +676,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
 }
 
 /* remove page from lists (but not from parent object) and free it */
-static NOINLINE void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
+static void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
 {
     uint64_t life;
 
@@ -747,7 +690,7 @@ static NOINLINE void pgp_delete(struct tmem_page_descriptor *pgp, bool_t no_eph_
 }
 
 /* called only indirectly by radix_tree_destroy */
-static NOINLINE void pgp_destroy(void *v)
+static void pgp_destroy(void *v)
 {
     struct tmem_page_descriptor *pgp = (struct tmem_page_descriptor *)v;
 
@@ -770,15 +713,13 @@ static int pgp_add_to_obj(struct tmem_object_root *obj, uint32_t index, struct t
     return ret;
 }
 
-static NOINLINE struct tmem_page_descriptor *pgp_delete_from_obj(struct tmem_object_root *obj, uint32_t index)
+static struct tmem_page_descriptor *pgp_delete_from_obj(struct tmem_object_root *obj, uint32_t index)
 {
     struct tmem_page_descriptor *pgp;
 
     ASSERT(obj != NULL);
     ASSERT_SPINLOCK(&obj->obj_spinlock);
-    ASSERT_SENTINEL(obj,OBJ);
     ASSERT(obj->pool != NULL);
-    ASSERT_SENTINEL(obj->pool,POOL);
     pgp = radix_tree_delete(&obj->tree_root, index);
     if ( pgp != NULL )
         obj->pgp_count--;
@@ -790,19 +731,16 @@ static NOINLINE struct tmem_page_descriptor *pgp_delete_from_obj(struct tmem_obj
 /************ RADIX TREE NODE MANIPULATION ROUTINES *******************/
 
 /* called only indirectly from radix_tree_insert */
-static NOINLINE struct radix_tree_node *rtn_alloc(void *arg)
+static struct radix_tree_node *rtn_alloc(void *arg)
 {
     struct tmem_object_node *objnode;
     struct tmem_object_root *obj = (struct tmem_object_root *)arg;
 
-    ASSERT_SENTINEL(obj,OBJ);
     ASSERT(obj->pool != NULL);
-    ASSERT_SENTINEL(obj->pool,POOL);
     objnode = tmem_malloc(sizeof(struct tmem_object_node),obj->pool);
     if (objnode == NULL)
         return NULL;
     objnode->obj = obj;
-    SET_SENTINEL(objnode,OBJNODE);
     memset(&objnode->rtn, 0, sizeof(struct radix_tree_node));
     if (++obj->pool->objnode_count > obj->pool->objnode_count_max)
         obj->pool->objnode_count_max = obj->pool->objnode_count;
@@ -819,14 +757,10 @@ static void rtn_free(struct radix_tree_node *rtn, void *arg)
 
     ASSERT(rtn != NULL);
     objnode = container_of(rtn,struct tmem_object_node,rtn);
-    ASSERT_SENTINEL(objnode,OBJNODE);
-    INVERT_SENTINEL(objnode,OBJNODE);
     ASSERT(objnode->obj != NULL);
     ASSERT_SPINLOCK(&objnode->obj->obj_spinlock);
-    ASSERT_SENTINEL(objnode->obj,OBJ);
     pool = objnode->obj->pool;
     ASSERT(pool != NULL);
-    ASSERT_SENTINEL(pool,POOL);
     pool->objnode_count--;
     objnode->obj->objnode_count--;
     objnode->obj = NULL;
@@ -872,7 +806,7 @@ unsigned oid_hash(struct oid *oidp)
 }
 
 /* searches for object==oid in pool, returns locked object if found */
-static NOINLINE struct tmem_object_root * obj_find(struct tmem_pool *pool, struct oid *oidp)
+static struct tmem_object_root * obj_find(struct tmem_pool *pool, struct oid *oidp)
 {
     struct rb_node *node;
     struct tmem_object_root *obj;
@@ -910,14 +844,13 @@ restart_find:
 }
 
 /* free an object that has no more pgps in it */
-static NOINLINE void obj_free(struct tmem_object_root *obj, int no_rebalance)
+static void obj_free(struct tmem_object_root *obj, int no_rebalance)
 {
     struct tmem_pool *pool;
     struct oid old_oid;
 
     ASSERT_SPINLOCK(&obj->obj_spinlock);
     ASSERT(obj != NULL);
-    ASSERT_SENTINEL(obj,OBJ);
     ASSERT(obj->pgp_count == 0);
     pool = obj->pool;
     ASSERT(pool != NULL);
@@ -929,7 +862,6 @@ static NOINLINE void obj_free(struct tmem_object_root *obj, int no_rebalance)
     ASSERT(obj->tree_root.rnode == NULL);
     pool->obj_count--;
     ASSERT(pool->obj_count >= 0);
-    INVERT_SENTINEL(obj,OBJ);
     obj->pool = NULL;
     old_oid = obj->oid;
     oid_set_invalid(&obj->oid);
@@ -942,7 +874,7 @@ static NOINLINE void obj_free(struct tmem_object_root *obj, int no_rebalance)
     tmem_free(obj, pool);
 }
 
-static NOINLINE int obj_rb_insert(struct rb_root *root, struct tmem_object_root *obj)
+static int obj_rb_insert(struct rb_root *root, struct tmem_object_root *obj)
 {
     struct rb_node **new, *parent = NULL;
     struct tmem_object_root *this;
@@ -973,7 +905,7 @@ static NOINLINE int obj_rb_insert(struct rb_root *root, struct tmem_object_root
  * allocate, initialize, and insert an tmem_object_root
  * (should be called only if find failed)
  */
-static NOINLINE struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oidp)
+static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oidp)
 {
     struct tmem_object_root *obj;
 
@@ -993,7 +925,6 @@ static NOINLINE struct tmem_object_root * obj_new(struct tmem_pool *pool, struct
     obj->objnode_count = 0;
     obj->pgp_count = 0;
     obj->last_client = TMEM_CLI_ID_NULL;
-    SET_SENTINEL(obj,OBJ);
     tmem_spin_lock(&obj->obj_spinlock);
     obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj);
     obj->no_evict = 1;
@@ -1002,7 +933,7 @@ static NOINLINE struct tmem_object_root * obj_new(struct tmem_pool *pool, struct
 }
 
 /* free an object after destroying any pgps in it */
-static NOINLINE void obj_destroy(struct tmem_object_root *obj, int no_rebalance)
+static void obj_destroy(struct tmem_object_root *obj, int no_rebalance)
 {
     ASSERT_WRITELOCK(&obj->pool->pool_rwlock);
     radix_tree_destroy(&obj->tree_root, pgp_destroy);
@@ -1066,14 +997,11 @@ static struct tmem_pool * pool_alloc(void)
     pool->flushs_found = pool->flushs = 0;
     pool->flush_objs_found = pool->flush_objs = 0;
     pool->is_dying = 0;
-    SET_SENTINEL(pool,POOL);
     return pool;
 }
 
-static NOINLINE void pool_free(struct tmem_pool *pool)
+static void pool_free(struct tmem_pool *pool)
 {
-    ASSERT_SENTINEL(pool,POOL);
-    INVERT_SENTINEL(pool,POOL);
     pool->client = NULL;
     list_del(&pool->pool_list);
     xfree(pool);
@@ -1097,7 +1025,7 @@ static int shared_pool_join(struct tmem_pool *pool, struct client *new_client)
 }
 
 /* reassign "ownership" of the pool to another client that shares this pool */
-static NOINLINE void shared_pool_reassign(struct tmem_pool *pool)
+static void shared_pool_reassign(struct tmem_pool *pool)
 {
     struct share_list *sl;
     int poolid;
@@ -1128,7 +1056,7 @@ static NOINLINE void shared_pool_reassign(struct tmem_pool *pool)
 
 /* destroy all objects with last_client same as passed cli_id,
    remove pool's cli_id from list of sharers of this pool */
-static NOINLINE int shared_pool_quit(struct tmem_pool *pool, domid_t cli_id)
+static int shared_pool_quit(struct tmem_pool *pool, domid_t cli_id)
 {
     struct share_list *sl;
     int s_poolid;
@@ -1374,12 +1302,10 @@ static int tmem_evict(void)
 
 found:
     ASSERT(pgp != NULL);
-    ASSERT_SENTINEL(pgp,PGD);
     obj = pgp->us.obj;
     ASSERT(obj != NULL);
     ASSERT(obj->no_evict == 0);
     ASSERT(obj->pool != NULL);
-    ASSERT_SENTINEL(obj,OBJ);
     pool = obj->pool;
 
     ASSERT_SPINLOCK(&obj->obj_spinlock);
@@ -1454,13 +1380,12 @@ static inline void tmem_ensure_avail_pages(void)
 
 /************ TMEM CORE OPERATIONS ************************************/
 
-static NOINLINE int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
+static int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
                                          tmem_cli_va_param_t clibuf)
 {
     void *dst, *p;
     size_t size;
     int ret = 0;
-    DECL_LOCAL_CYC_COUNTER(compress);
     
     ASSERT(pgp != NULL);
     ASSERT(pgp->us.obj != NULL);
@@ -1470,7 +1395,6 @@ static NOINLINE int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_p
 
     if ( pgp->pfp != NULL )
         pgp_free_data(pgp, pgp->us.obj->pool);
-    START_CYC_COUNTER(compress);
     ret = tmem_compress_from_client(cmfn, &dst, &size, clibuf);
     if ( ret <= 0 )
         goto out;
@@ -1493,11 +1417,10 @@ static NOINLINE int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_p
     ret = 1;
 
 out:
-    END_CYC_COUNTER(compress);
     return ret;
 }
 
-static NOINLINE int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
+static int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
        pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len,
        tmem_cli_va_param_t clibuf)
 {
@@ -1587,7 +1510,7 @@ cleanup:
 }
 
 
-static NOINLINE int do_tmem_put(struct tmem_pool *pool,
+static int do_tmem_put(struct tmem_pool *pool,
               struct oid *oidp, uint32_t index,
               xen_pfn_t cmfn, pagesize_t tmem_offset,
               pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t clibuf)
@@ -1731,14 +1654,13 @@ free:
     return ret;
 }
 
-static NOINLINE int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
+static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
               xen_pfn_t cmfn, pagesize_t tmem_offset,
               pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t clibuf)
 {
     struct tmem_object_root *obj;
     struct tmem_page_descriptor *pgp;
     struct client *client = pool->client;
-    DECL_LOCAL_CYC_COUNTER(decompress);
     int rc;
 
     if ( !_atomic_read(pool->pgp_count) )
@@ -1766,10 +1688,8 @@ static NOINLINE int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32
         rc = pcd_copy_to_client(cmfn, pgp);
     else if ( pgp->size != 0 )
     {
-        START_CYC_COUNTER(decompress);
         rc = tmem_decompress_to_client(cmfn, pgp->cdata,
                                       pgp->size, clibuf);
-        END_CYC_COUNTER(decompress);
     }
     else
         rc = tmem_copy_to_client(cmfn, pgp->pfp, tmem_offset,
@@ -1818,7 +1738,7 @@ bad_copy:
     return rc;
 }
 
-static NOINLINE int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t index)
+static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t index)
 {
     struct tmem_object_root *obj;
     struct tmem_page_descriptor *pgp;
@@ -1853,7 +1773,7 @@ out:
         return 1;
 }
 
-static NOINLINE int do_tmem_flush_object(struct tmem_pool *pool, struct oid *oidp)
+static int do_tmem_flush_object(struct tmem_pool *pool, struct oid *oidp)
 {
     struct tmem_object_root *obj;
 
@@ -1873,7 +1793,7 @@ out:
         return 1;
 }
 
-static NOINLINE int do_tmem_destroy_pool(uint32_t pool_id)
+static int do_tmem_destroy_pool(uint32_t pool_id)
 {
     struct client *client = tmem_client_from_current();
     struct tmem_pool *pool;
@@ -1889,7 +1809,7 @@ static NOINLINE int do_tmem_destroy_pool(uint32_t pool_id)
     return 1;
 }
 
-static NOINLINE int do_tmem_new_pool(domid_t this_cli_id,
+static int do_tmem_new_pool(domid_t this_cli_id,
                                      uint32_t d_poolid, uint32_t flags,
                                      uint64_t uuid_lo, uint64_t uuid_hi)
 {
@@ -2169,7 +2089,6 @@ static int tmemc_list_shared(tmem_cli_va_param_t buf, int off, uint32_t len,
     return sum;
 }
 
-#ifdef TMEM_PERF
 static int tmemc_list_global_perf(tmem_cli_va_param_t buf, int off,
                                   uint32_t len, bool_t use_long)
 {
@@ -2177,15 +2096,6 @@ static int tmemc_list_global_perf(tmem_cli_va_param_t buf, int off,
     int n = 0, sum = 0;
 
     n = scnprintf(info+n,BSIZE-n,"T=");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,succ_get,"G");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,succ_put,"P");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,non_succ_get,"g");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,non_succ_put,"p");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,flush,"F");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,flush_obj,"O");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,pg_copy,"C");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,compress,"c");
-    n += SCNPRINTF_CYC_COUNTER(info+n,BSIZE-n,decompress,"d");
     n--; /* overwrite trailing comma */
     n += scnprintf(info+n,BSIZE-n,"\n");
     if ( sum + n >= len )
@@ -2194,9 +2104,6 @@ static int tmemc_list_global_perf(tmem_cli_va_param_t buf, int off,
     sum += n;
     return sum;
 }
-#else
-#define tmemc_list_global_perf(_buf,_off,_len,_use) (0)
-#endif
 
 static int tmemc_list_global(tmem_cli_va_param_t buf, int off, uint32_t len,
                               bool_t use_long)
@@ -2304,7 +2211,7 @@ static int tmemc_set_var(domid_t cli_id, uint32_t subop, uint32_t arg1)
     return 0;
 }
 
-static NOINLINE int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
+static int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
                                   uint64_t uuid_hi, bool_t auth)
 {
     struct client *client;
@@ -2341,7 +2248,7 @@ static NOINLINE int tmemc_shared_pool_auth(domid_t cli_id, uint64_t uuid_lo,
     return 1;
 }
 
-static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id,
+static int tmemc_save_subop(int cli_id, uint32_t pool_id,
                         uint32_t subop, tmem_cli_va_param_t buf, uint32_t arg1)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
@@ -2430,7 +2337,7 @@ static NOINLINE int tmemc_save_subop(int cli_id, uint32_t pool_id,
     return rc;
 }
 
-static NOINLINE int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
+static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
                         tmem_cli_va_param_t buf, uint32_t bufsize)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
@@ -2485,7 +2392,7 @@ out:
     return ret;
 }
 
-static NOINLINE int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_param_t buf,
+static int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_param_t buf,
                         uint32_t bufsize)
 {
     struct client *client = tmem_client_from_cli_id(cli_id);
@@ -2551,7 +2458,7 @@ static int tmemc_restore_flush_page(int cli_id, uint32_t pool_id, struct oid *oi
     return do_tmem_flush_page(pool,oidp,index);
 }
 
-static NOINLINE int do_tmem_control(struct tmem_op *op)
+static int do_tmem_control(struct tmem_op *op)
 {
     int ret;
     uint32_t pool_id = op->pool_id;
@@ -2638,12 +2545,6 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
     bool_t non_succ_get = 0, non_succ_put = 0;
     bool_t flush = 0, flush_obj = 0;
     bool_t tmem_write_lock_set = 0, tmem_read_lock_set = 0;
-    DECL_LOCAL_CYC_COUNTER(succ_get);
-    DECL_LOCAL_CYC_COUNTER(succ_put);
-    DECL_LOCAL_CYC_COUNTER(non_succ_get);
-    DECL_LOCAL_CYC_COUNTER(non_succ_put);
-    DECL_LOCAL_CYC_COUNTER(flush);
-    DECL_LOCAL_CYC_COUNTER(flush_obj);
 
     if ( !tmem_initialized )
         return -ENODEV;
@@ -2661,13 +2562,6 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
             spin_lock(&tmem_spinlock);
     }
 
-    START_CYC_COUNTER(succ_get);
-    DUP_START_CYC_COUNTER(succ_put,succ_get);
-    DUP_START_CYC_COUNTER(non_succ_get,succ_get);
-    DUP_START_CYC_COUNTER(non_succ_put,succ_get);
-    DUP_START_CYC_COUNTER(flush,succ_get);
-    DUP_START_CYC_COUNTER(flush_obj,succ_get);
-
     if ( client != NULL && tmem_client_is_dying(client) )
     {
         rc = -ENODEV;
@@ -2743,7 +2637,6 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
             rc = -ENODEV;
             goto out;
         }
-        ASSERT_SENTINEL(pool,POOL);
     }
 
     oidp = (struct oid *)&op.u.gen.oid[0];
@@ -2787,19 +2680,6 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
 out:
     if ( rc < 0 )
         errored_tmem_ops++;
-    if ( succ_get )
-        END_CYC_COUNTER_CLI(succ_get,client);
-    else if ( succ_put )
-        END_CYC_COUNTER_CLI(succ_put,client);
-    else if ( non_succ_get )
-        END_CYC_COUNTER_CLI(non_succ_get,client);
-    else if ( non_succ_put )
-        END_CYC_COUNTER_CLI(non_succ_put,client);
-    else if ( flush )
-        END_CYC_COUNTER_CLI(flush,client);
-    else if ( flush_obj )
-        END_CYC_COUNTER_CLI(flush_obj,client);
-
     if ( tmem_lock_all )
     {
         if ( tmem_lock_all > 1 )
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index d6e2e0d..1e002ae 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -36,8 +36,6 @@ integer_param("tmem_lock", opt_tmem_lock);
 
 EXPORT atomic_t freeable_page_count = ATOMIC_INIT(0);
 
-DECL_CYC_COUNTER(pg_copy);
-
 /* these are a concurrency bottleneck, could be percpu and dynamically
  * allocated iff opt_tmem_compress */
 #define LZO_WORKMEM_BYTES LZO1X_1_MEM_COMPRESS
@@ -48,10 +46,7 @@ static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page);
 
 void tmem_copy_page(char *to, char*from)
 {
-    DECL_LOCAL_CYC_COUNTER(pg_copy);
-    START_CYC_COUNTER(pg_copy);
     memcpy(to, from, PAGE_SIZE);
-    END_CYC_COUNTER(pg_copy);
 }
 
 #if defined(CONFIG_ARM)
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index cccc98e..a477960 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -401,55 +401,4 @@ extern void tmem_persistent_pool_page_put(void *page_va);
 #define tmem_client_warn(fmt, args...) printk(XENLOG_G_WARNING fmt, ##args)
 #define tmem_client_info(fmt, args...) printk(XENLOG_G_INFO fmt, ##args)
 
-#define TMEM_PERF
-#ifdef TMEM_PERF
-#define DECL_CYC_COUNTER(x) \
-    uint64_t x##_sum_cycles = 0, x##_count = 0; \
-    uint32_t x##_min_cycles = 0x7fffffff, x##_max_cycles = 0;
-#define EXTERN_CYC_COUNTER(x) \
-    extern uint64_t x##_sum_cycles, x##_count; \
-    extern uint32_t x##_min_cycles, x##_max_cycles;
-#define DECL_LOCAL_CYC_COUNTER(x) \
-    int64_t x##_start = 0
-#define START_CYC_COUNTER(x) x##_start = get_cycles()
-#define DUP_START_CYC_COUNTER(x,y) x##_start = y##_start
-/* following might race, but since its advisory only, don't care */
-#define END_CYC_COUNTER(x) \
-    do { \
-      x##_start = get_cycles() - x##_start; \
-      if (x##_start > 0 && x##_start < 1000000000) { \
-       x##_sum_cycles += x##_start; x##_count++; \
-       if ((uint32_t)x##_start < x##_min_cycles) x##_min_cycles = x##_start; \
-       if ((uint32_t)x##_start > x##_max_cycles) x##_max_cycles = x##_start; \
-      } \
-    } while (0)
-#define END_CYC_COUNTER_CLI(x,y) \
-    do { \
-      x##_start = get_cycles() - x##_start; \
-      if (x##_start > 0 && x##_start < 1000000000) { \
-       x##_sum_cycles += x##_start; x##_count++; \
-       if ((uint32_t)x##_start < x##_min_cycles) x##_min_cycles = x##_start; \
-       if ((uint32_t)x##_start > x##_max_cycles) x##_max_cycles = x##_start; \
-       y->total_cycles += x##_start; \
-      } \
-    } while (0)
-#define RESET_CYC_COUNTER(x) { x##_sum_cycles = 0, x##_count = 0; \
-  x##_min_cycles = 0x7fffffff, x##_max_cycles = 0; }
-#define SCNPRINTF_CYC_COUNTER(buf,size,x,tag) \
-  scnprintf(buf,size, \
-  tag"n:%"PRIu64","tag"t:%"PRIu64","tag"x:%"PRId32","tag"m:%"PRId32",", \
-  x##_count,x##_sum_cycles,x##_max_cycles,x##_min_cycles)
-#else
-#define DECL_CYC_COUNTER(x)
-#define EXTERN_CYC_COUNTER(x) \
-    extern uint64_t x##_sum_cycles, x##_count; \
-    extern uint32_t x##_min_cycles, x##_max_cycles;
-#define DECL_LOCAL_CYC_COUNTER(x) do { } while (0)
-#define START_CYC_COUNTER(x) do { } while (0)
-#define DUP_START_CYC_COUNTER(x) do { } while (0)
-#define END_CYC_COUNTER(x) do { } while (0)
-#define SCNPRINTF_CYC_COUNTER(buf,size,x,tag) (0)
-#define RESET_CYC_COUNTER(x) do { } while (0)
-#endif
-
 #endif /* __XEN_TMEM_XEN_H__ */
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 03/15] tmem: cleanup: drop useless function 'tmem_copy_page'
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
  2013-12-12 11:05 ` [PATCH v4 01/15] tmem: cleanup: drop unused sub command Bob Liu
  2013-12-12 11:05 ` [PATCH v4 02/15] tmem: cleanup: drop some debug code Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 04/15] tmem: cleanup: drop useless parameters from put/get page Bob Liu
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Use memcpy directly.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem_xen.c |    9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index 1e002ae..af67703 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -44,11 +44,6 @@ static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, workmem);
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned char *, dstmem);
 static DEFINE_PER_CPU_READ_MOSTLY(void *, scratch_page);
 
-void tmem_copy_page(char *to, char*from)
-{
-    memcpy(to, from, PAGE_SIZE);
-}
-
 #if defined(CONFIG_ARM)
 static inline void *cli_get_page(xen_pfn_t cmfn, unsigned long *pcli_mfn,
                                  struct page_info **pcli_pfp, bool_t cli_write)
@@ -135,7 +130,7 @@ EXPORT int tmem_copy_from_client(struct page_info *pfp,
     }
     smp_mb();
     if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va )
-        tmem_copy_page(tmem_va, cli_va);
+        memcpy(tmem_va, cli_va, PAGE_SIZE);
     else if ( (tmem_offset+len <= PAGE_SIZE) &&
               (pfn_offset+len <= PAGE_SIZE) )
     {
@@ -206,7 +201,7 @@ EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     tmem_mfn = page_to_mfn(pfp);
     tmem_va = map_domain_page(tmem_mfn);
     if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va )
-        tmem_copy_page(cli_va, tmem_va);
+        memcpy(cli_va, tmem_va, PAGE_SIZE);
     else if ( (tmem_offset+len <= PAGE_SIZE) && (pfn_offset+len <= PAGE_SIZE) )
     {
         if ( cli_va )
-- 
1.7.10.4


* [PATCH v4 04/15] tmem: cleanup: drop useless parameters from put/get page
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (2 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 03/15] tmem: cleanup: drop useless function 'tmem_copy_page' Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 05/15] tmem: cleanup: reorg function do_tmem_put() Bob Liu
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Tmem only operates on whole pages, so the parameters tmem_offset, pfn_offset
and len in do_tmem_put/get() are meaningless. All callers pass the same values
(tmem_offset=0, pfn_offset=0, len=PAGE_SIZE).

This patch simplifies tmem by dropping those useless parameters and using the
default values directly.
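With the offset/len parameters gone, the copy helpers reduce to "copy a whole
page or fail". A toy sketch of that shape (PAGE_SIZE stand-in and function
name are illustrative, not Xen's actual code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch only: the post-patch helper either moves a whole
 * page or fails.  SKETCH_PAGE_SIZE and the error value are stand-ins. */
#define SKETCH_PAGE_SIZE 4096

static int copy_page_sketch(char *tmem_va, const char *cli_va)
{
    if ( cli_va == NULL )                       /* no client mapping */
        return -22;                             /* -EINVAL, as in the patch */
    memcpy(tmem_va, cli_va, SKETCH_PAGE_SIZE);  /* always a full page */
    return 1;                                   /* tmem convention: 1 = ok */
}
```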

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |   47 +++++++++++++++++++-------------------------
 xen/common/tmem_xen.c      |   45 +++++++++---------------------------------
 xen/include/xen/tmem_xen.h |    6 ++----
 3 files changed, 31 insertions(+), 67 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index cdc8826..da2326b 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -330,8 +330,7 @@ static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
     else if ( tmem_tze_enabled() && pcd->size < PAGE_SIZE )
         ret = tmem_copy_tze_to_client(cmfn, pcd->tze, pcd->size);
     else
-        ret = tmem_copy_to_client(cmfn, pcd->pfp, 0, 0, PAGE_SIZE,
-                                 tmem_cli_buf_null);
+        ret = tmem_copy_to_client(cmfn, pcd->pfp, tmem_cli_buf_null);
     tmem_read_unlock(&pcd_tree_rwlocks[firstbyte]);
     return ret;
 }
@@ -1421,7 +1420,6 @@ out:
 }
 
 static int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
-       pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len,
        tmem_cli_va_param_t clibuf)
 {
     struct tmem_pool *pool;
@@ -1442,7 +1440,7 @@ static int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
     if ( client->live_migrating )
         goto failed_dup; /* no dups allowed when migrating */
     /* can we successfully manipulate pgp to change out the data? */
-    if ( len != 0 && client->compress && pgp->size != 0 )
+    if ( client->compress && pgp->size != 0 )
     {
         ret = do_tmem_put_compress(pgp, cmfn, clibuf);
         if ( ret == 1 )
@@ -1461,9 +1459,7 @@ copy_uncompressed:
     if ( ( pgp->pfp = tmem_page_alloc(pool) ) == NULL )
         goto failed_dup;
     pgp->size = 0;
-    /* tmem_copy_from_client properly handles len==0 and offsets != 0 */
-    ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_offset, pfn_offset, len,
-                               tmem_cli_buf_null);
+    ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_cli_buf_null);
     if ( ret < 0 )
         goto bad_copy;
     if ( tmem_dedup_enabled() && !is_persistent(pool) )
@@ -1509,11 +1505,9 @@ cleanup:
     return ret;
 }
 
-
 static int do_tmem_put(struct tmem_pool *pool,
               struct oid *oidp, uint32_t index,
-              xen_pfn_t cmfn, pagesize_t tmem_offset,
-              pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t clibuf)
+              xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
     struct tmem_object_root *obj = NULL, *objfound = NULL, *objnew = NULL;
     struct tmem_page_descriptor *pgp = NULL, *pgpdel = NULL;
@@ -1527,8 +1521,7 @@ static int do_tmem_put(struct tmem_pool *pool,
     {
         ASSERT_SPINLOCK(&objfound->obj_spinlock);
         if ((pgp = pgp_lookup_in_obj(objfound, index)) != NULL)
-            return do_tmem_dup_put(pgp, cmfn, tmem_offset, pfn_offset, len,
-                                   clibuf);
+            return do_tmem_dup_put(pgp, cmfn, clibuf);
     }
 
     /* no puts allowed into a frozen pool (except dup puts) */
@@ -1560,7 +1553,7 @@ static int do_tmem_put(struct tmem_pool *pool,
     pgp->index = index;
     pgp->size = 0;
 
-    if ( len != 0 && client->compress )
+    if ( client->compress )
     {
         ASSERT(pgp->pfp == NULL);
         ret = do_tmem_put_compress(pgp, cmfn, clibuf);
@@ -1586,9 +1579,7 @@ copy_uncompressed:
         ret = -ENOMEM;
         goto delete_and_free;
     }
-    /* tmem_copy_from_client properly handles len==0 (TMEM_NEW_PAGE) */
-    ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_offset, pfn_offset, len,
-                               clibuf);
+    ret = tmem_copy_from_client(pgp->pfp, cmfn, clibuf);
     if ( ret < 0 )
         goto bad_copy;
     if ( tmem_dedup_enabled() && !is_persistent(pool) )
@@ -1655,8 +1646,7 @@ free:
 }
 
 static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
-              xen_pfn_t cmfn, pagesize_t tmem_offset,
-              pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t clibuf)
+              xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
     struct tmem_object_root *obj;
     struct tmem_page_descriptor *pgp;
@@ -1688,12 +1678,10 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
         rc = pcd_copy_to_client(cmfn, pgp);
     else if ( pgp->size != 0 )
     {
-        rc = tmem_decompress_to_client(cmfn, pgp->cdata,
-                                      pgp->size, clibuf);
+        rc = tmem_decompress_to_client(cmfn, pgp->cdata, pgp->size, clibuf);
     }
     else
-        rc = tmem_copy_to_client(cmfn, pgp->pfp, tmem_offset,
-                                pfn_offset, len, clibuf);
+        rc = tmem_copy_to_client(cmfn, pgp->pfp, clibuf);
     if ( rc <= 0 )
         goto bad_copy;
 
@@ -2385,7 +2373,7 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     h.index = pgp->index;
     tmem_copy_to_client_buf(buf, &h, 1);
     tmem_client_buf_add(buf, sizeof(h));
-    ret = do_tmem_get(pool, &oid, pgp->index, 0, 0, 0, pagesize, buf);
+    ret = do_tmem_get(pool, &oid, pgp->index, 0, buf);
 
 out:
     tmem_spin_unlock(&pers_lists_spinlock);
@@ -2443,7 +2431,12 @@ static int tmemc_restore_put_page(int cli_id, uint32_t pool_id, struct oid *oidp
 
     if ( pool == NULL )
         return -1;
-    return do_tmem_put(pool, oidp, index, 0, 0, 0, bufsize, buf);
+    if ( bufsize != PAGE_SIZE ) {
+        tmem_client_err("tmem: %s: invalid parameter bufsize(%d) != (%ld)\n",
+                        __func__, bufsize, PAGE_SIZE);
+        return -EINVAL;
+    }
+    return do_tmem_put(pool, oidp, index, 0, buf);
 }
 
 static int tmemc_restore_flush_page(int cli_id, uint32_t pool_id, struct oid *oidp,
@@ -2648,14 +2641,14 @@ EXPORT long do_tmem_op(tmem_cli_op_t uops)
         break;
     case TMEM_PUT_PAGE:
         tmem_ensure_avail_pages();
-        rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn, 0, 0,
-                         PAGE_SIZE, tmem_cli_buf_null);
+        rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
+                        tmem_cli_buf_null);
         if (rc == 1) succ_put = 1;
         else non_succ_put = 1;
         break;
     case TMEM_GET_PAGE:
         rc = do_tmem_get(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
-                         0, 0, PAGE_SIZE, tmem_cli_buf_null);
+                        tmem_cli_buf_null);
         if (rc == 1) succ_get = 1;
         else non_succ_get = 1;
         break;
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index af67703..efc2259 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -100,25 +100,16 @@ static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
 #endif
 
 EXPORT int tmem_copy_from_client(struct page_info *pfp,
-    xen_pfn_t cmfn, pagesize_t tmem_offset,
-    pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t clibuf)
+    xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
     unsigned long tmem_mfn, cli_mfn = 0;
     char *tmem_va, *cli_va = NULL;
     struct page_info *cli_pfp = NULL;
     int rc = 1;
 
-    if ( tmem_offset > PAGE_SIZE || pfn_offset > PAGE_SIZE || len > PAGE_SIZE )
-        return -EINVAL;
     ASSERT(pfp != NULL);
     tmem_mfn = page_to_mfn(pfp);
     tmem_va = map_domain_page(tmem_mfn);
-    if ( tmem_offset == 0 && pfn_offset == 0 && len == 0 )
-    {
-        memset(tmem_va, 0, PAGE_SIZE);
-        unmap_domain_page(tmem_va);
-        return 1;
-    }
     if ( guest_handle_is_null(clibuf) )
     {
         cli_va = cli_get_page(cmfn, &cli_mfn, &cli_pfp, 0);
@@ -129,21 +120,13 @@ EXPORT int tmem_copy_from_client(struct page_info *pfp,
         }
     }
     smp_mb();
-    if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va )
-        memcpy(tmem_va, cli_va, PAGE_SIZE);
-    else if ( (tmem_offset+len <= PAGE_SIZE) &&
-              (pfn_offset+len <= PAGE_SIZE) )
+    if ( cli_va )
     {
-        if ( cli_va )
-            memcpy(tmem_va + tmem_offset, cli_va + pfn_offset, len);
-        else if ( copy_from_guest_offset(tmem_va + tmem_offset, clibuf,
-                                         pfn_offset, len) )
-            rc = -EFAULT;
+        memcpy(tmem_va, cli_va, PAGE_SIZE);
+        cli_put_page(cli_va, cli_pfp, cli_mfn, 0);
     }
-    else if ( len )
+    else
         rc = -EINVAL;
-    if ( cli_va )
-        cli_put_page(cli_va, cli_pfp, cli_mfn, 0);
     unmap_domain_page(tmem_va);
     return rc;
 }
@@ -181,7 +164,6 @@ EXPORT int tmem_compress_from_client(xen_pfn_t cmfn,
 }
 
 EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
-    pagesize_t tmem_offset, pagesize_t pfn_offset, pagesize_t len,
     tmem_cli_va_param_t clibuf)
 {
     unsigned long tmem_mfn, cli_mfn = 0;
@@ -189,8 +171,6 @@ EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     struct page_info *cli_pfp = NULL;
     int rc = 1;
 
-    if ( tmem_offset > PAGE_SIZE || pfn_offset > PAGE_SIZE || len > PAGE_SIZE )
-        return -EINVAL;
     ASSERT(pfp != NULL);
     if ( guest_handle_is_null(clibuf) )
     {
@@ -200,21 +180,14 @@ EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     }
     tmem_mfn = page_to_mfn(pfp);
     tmem_va = map_domain_page(tmem_mfn);
-    if ( len == PAGE_SIZE && !tmem_offset && !pfn_offset && cli_va )
-        memcpy(cli_va, tmem_va, PAGE_SIZE);
-    else if ( (tmem_offset+len <= PAGE_SIZE) && (pfn_offset+len <= PAGE_SIZE) )
+    if ( cli_va )
     {
-        if ( cli_va )
-            memcpy(cli_va + pfn_offset, tmem_va + tmem_offset, len);
-        else if ( copy_to_guest_offset(clibuf, pfn_offset,
-                                       tmem_va + tmem_offset, len) )
-            rc = -EFAULT;
+        memcpy(cli_va, tmem_va, PAGE_SIZE);
+        cli_put_page(cli_va, cli_pfp, cli_mfn, 1);
     }
-    else if ( len )
+    else
         rc = -EINVAL;
     unmap_domain_page(tmem_va);
-    if ( cli_va )
-        cli_put_page(cli_va, cli_pfp, cli_mfn, 1);
     smp_mb();
     return rc;
 }
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index a477960..d842374 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -387,11 +387,9 @@ int tmem_decompress_to_client(xen_pfn_t, void *, size_t,
 int tmem_compress_from_client(xen_pfn_t, void **, size_t *,
 			     tmem_cli_va_param_t);
 
-int tmem_copy_from_client(struct page_info *, xen_pfn_t, pagesize_t tmem_offset,
-    pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t);
+int tmem_copy_from_client(struct page_info *, xen_pfn_t, tmem_cli_va_param_t);
 
-int tmem_copy_to_client(xen_pfn_t, struct page_info *, pagesize_t tmem_offset,
-    pagesize_t pfn_offset, pagesize_t len, tmem_cli_va_param_t);
+int tmem_copy_to_client(xen_pfn_t, struct page_info *, tmem_cli_va_param_t);
 
 extern int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va, pagesize_t len);
 extern void *tmem_persistent_pool_page_get(unsigned long size);
-- 
1.7.10.4


* [PATCH v4 05/15] tmem: cleanup: reorg function do_tmem_put()
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (3 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 04/15] tmem: cleanup: drop useless parameters from put/get page Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 06/15] tmem: drop unneeded is_ephemeral() and is_private() Bob Liu
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Reorganise the code logic of do_tmem_put() to make it more readable and
cleaner.
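The reworked error path is a single goto ladder: each later failure point
falls through the earlier cleanup labels, so resources are released in
reverse order of acquisition. A toy model of that unwinding (flags stand in
for pgp_delete()/obj_free(); all names here are illustrative, not the actual
Xen code):

```c
#include <assert.h>

/* Toy model of do_tmem_put()'s reworked unwinding.  fail_at selects a
 * failure point; the flags record which cleanup steps ran. */
static int put_sketch(int fail_at, int newobj, int *freed_pgp, int *freed_obj)
{
    int ret = -12;                /* -ENOMEM */

    if ( fail_at == 1 )
        goto unlock_obj;          /* pgp_alloc() failed: nothing else to free */
    if ( fail_at == 2 )
        goto free_pgp;            /* pgp_add_to_obj()/copy failed */
    return 1;                     /* success */

 free_pgp:
    *freed_pgp = 1;               /* pgp_delete(pgp, 0) */
 unlock_obj:
    if ( newobj )
        *freed_obj = 1;           /* obj_free() an object this put created */
    return ret;
}
```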

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   86 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 49 insertions(+), 37 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index da2326b..bd17893 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1509,47 +1509,54 @@ static int do_tmem_put(struct tmem_pool *pool,
               struct oid *oidp, uint32_t index,
               xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
-    struct tmem_object_root *obj = NULL, *objfound = NULL, *objnew = NULL;
-    struct tmem_page_descriptor *pgp = NULL, *pgpdel = NULL;
-    struct client *client = pool->client;
-    int ret = client->frozen ? -EFROZEN : -ENOMEM;
+    struct tmem_object_root *obj = NULL;
+    struct tmem_page_descriptor *pgp = NULL;
+    struct client *client;
+    int ret, newobj = 0;
 
     ASSERT(pool != NULL);
+    client = pool->client;
+    ret = client->frozen ? -EFROZEN : -ENOMEM;
     pool->puts++;
     /* does page already exist (dup)?  if so, handle specially */
-    if ( (obj = objfound = obj_find(pool,oidp)) != NULL )
+    if ( (obj = obj_find(pool,oidp)) != NULL )
     {
-        ASSERT_SPINLOCK(&objfound->obj_spinlock);
-        if ((pgp = pgp_lookup_in_obj(objfound, index)) != NULL)
+        if ((pgp = pgp_lookup_in_obj(obj, index)) != NULL)
+        {
             return do_tmem_dup_put(pgp, cmfn, clibuf);
+        }
+        else
+        {
+            /* no puts allowed into a frozen pool (except dup puts) */
+            if ( client->frozen )
+                goto unlock_obj;
+        }
     }
-
-    /* no puts allowed into a frozen pool (except dup puts) */
-    if ( client->frozen )
-        goto free;
-
-    if ( (objfound == NULL) )
+    else
     {
+        /* no puts allowed into a frozen pool (except dup puts) */
+        if ( client->frozen )
+            return ret;
         tmem_write_lock(&pool->pool_rwlock);
-        if ( (obj = objnew = obj_new(pool,oidp)) == NULL )
+        if ( (obj = obj_new(pool,oidp)) == NULL )
         {
             tmem_write_unlock(&pool->pool_rwlock);
             return -ENOMEM;
         }
-        ASSERT_SPINLOCK(&objnew->obj_spinlock);
+        newobj = 1;
         tmem_write_unlock(&pool->pool_rwlock);
     }
 
-    ASSERT((obj != NULL)&&((objnew==obj)||(objfound==obj))&&(objnew!=objfound));
+    /* When we arrive here, we hold a spinlocked obj to use. */
     ASSERT_SPINLOCK(&obj->obj_spinlock);
     if ( (pgp = pgp_alloc(obj)) == NULL )
-        goto free;
+        goto unlock_obj;
 
     ret = pgp_add_to_obj(obj, index, pgp);
     if ( ret == -ENOMEM  )
         /* warning, may result in partially built radix tree ("stump") */
-        goto free;
-    ASSERT(ret != -EEXIST);
+        goto free_pgp;
+
     pgp->index = index;
     pgp->size = 0;
 
@@ -1562,7 +1569,7 @@ static int do_tmem_put(struct tmem_pool *pool,
         if ( ret == -ENOMEM )
         {
             client->compress_nomem++;
-            goto delete_and_free;
+            goto del_pgp_from_obj;
         }
         if ( ret == 0 )
         {
@@ -1577,15 +1584,16 @@ copy_uncompressed:
     if ( ( pgp->pfp = tmem_page_alloc(pool) ) == NULL )
     {
         ret = -ENOMEM;
-        goto delete_and_free;
+        goto del_pgp_from_obj;
     }
     ret = tmem_copy_from_client(pgp->pfp, cmfn, clibuf);
     if ( ret < 0 )
         goto bad_copy;
+
     if ( tmem_dedup_enabled() && !is_persistent(pool) )
     {
         if ( pcd_associate(pgp,NULL,0) == -ENOMEM )
-            goto delete_and_free;
+            goto del_pgp_from_obj;
     }
 
 insert_page:
@@ -1601,18 +1609,23 @@ insert_page:
         if (++client->eph_count > client->eph_count_max)
             client->eph_count_max = client->eph_count;
         tmem_spin_unlock(&eph_lists_spinlock);
-    } else { /* is_persistent */
+    }
+    else
+    { /* is_persistent */
         tmem_spin_lock(&pers_lists_spinlock);
         list_add_tail(&pgp->us.pool_pers_pages,
             &pool->persistent_page_list);
         tmem_spin_unlock(&pers_lists_spinlock);
     }
-    ASSERT( ((objnew==obj)||(objfound==obj)) && (objnew!=objfound));
+
     if ( is_shared(pool) )
         obj->last_client = client->cli_id;
     obj->no_evict = 0;
+
+    /* release the obj spinlock */
     tmem_spin_unlock(&obj->obj_spinlock);
     pool->good_puts++;
+
     if ( is_persistent(pool) )
         client->succ_pers_puts++;
     else
@@ -1622,25 +1635,24 @@ insert_page:
 bad_copy:
     failed_copies++;
 
-delete_and_free:
+del_pgp_from_obj:
     ASSERT((obj != NULL) && (pgp != NULL) && (pgp->index != -1));
-    pgpdel = pgp_delete_from_obj(obj, pgp->index);
-    ASSERT(pgp == pgpdel);
+    pgp_delete_from_obj(obj, pgp->index);
 
-free:
-    if ( pgp )
-        pgp_delete(pgp,0);
-    if ( objfound )
-    {
-        objfound->no_evict = 0;
-        tmem_spin_unlock(&objfound->obj_spinlock);
-    }
-    if ( objnew )
+free_pgp:
+    pgp_delete(pgp, 0);
+unlock_obj:
+    if ( newobj )
     {
         tmem_write_lock(&pool->pool_rwlock);
-        obj_free(objnew,0);
+        obj_free(obj, 0);
         tmem_write_unlock(&pool->pool_rwlock);
     }
+    else
+    {
+        obj->no_evict = 0;
+        tmem_spin_unlock(&obj->obj_spinlock);
+    }
     pool->no_mem_puts++;
     return ret;
 }
-- 
1.7.10.4


* [PATCH v4 06/15] tmem: drop unneeded is_ephemeral() and is_private()
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (4 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 05/15] tmem: cleanup: reorg function do_tmem_put() Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 07/15] tmem: cleanup: rm useless EXPORT/FORWARD define Bob Liu
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

They can be replaced directly with !is_persistent() and !is_shared().
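A minimal model of the remaining pool predicates: the dropped
is_ephemeral()/is_private() macros were just the negations of the two kept
ones, so every call site can apply `!` directly (the struct below is a
stand-in for Xen's struct tmem_pool):

```c
#include <assert.h>

/* Stand-in for struct tmem_pool; only the two flag fields matter here. */
struct pool_sketch { int persistent; int shared; };

/* The two predicates that survive the patch. */
#define is_persistent(p) ((p)->persistent)
#define is_shared(p)     ((p)->shared)
```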

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index bd17893..7311487 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -126,9 +126,7 @@ struct tmem_pool {
 };
 
 #define is_persistent(_p)  (_p->persistent)
-#define is_ephemeral(_p)   (!(_p->persistent))
 #define is_shared(_p)      (_p->shared)
-#define is_private(_p)     (!(_p->shared))
 
 struct oid {
     uint64_t oid[3];
@@ -604,7 +602,7 @@ static void pgp_free(struct tmem_page_descriptor *pgp, int from_delete)
         ASSERT(pgp_lookup_in_obj(pgp->us.obj,pgp->index) == NULL);
     ASSERT(pgp->us.obj->pool != NULL);
     pool = pgp->us.obj->pool;
-    if ( is_ephemeral(pool) )
+    if ( !is_persistent(pool) )
     {
         ASSERT(list_empty(&pgp->global_eph_pages));
         ASSERT(list_empty(&pgp->us.client_eph_pages));
@@ -643,7 +641,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     ASSERT(pgp->us.obj->pool != NULL);
     client = pgp->us.obj->pool->client;
     ASSERT(client != NULL);
-    if ( is_ephemeral(pgp->us.obj->pool) )
+    if ( !is_persistent(pgp->us.obj->pool) )
     {
         if ( !no_eph_lock )
             tmem_spin_lock(&eph_lists_spinlock);
@@ -1597,7 +1595,7 @@ copy_uncompressed:
     }
 
 insert_page:
-    if ( is_ephemeral(pool) )
+    if ( !is_persistent(pool) )
     {
         tmem_spin_lock(&eph_lists_spinlock);
         list_add_tail(&pgp->global_eph_pages,
@@ -1697,9 +1695,9 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     if ( rc <= 0 )
         goto bad_copy;
 
-    if ( is_ephemeral(pool) )
+    if ( !is_persistent(pool) )
     {
-        if ( is_private(pool) )
+        if ( !is_shared(pool) )
         {
             pgp_delete(pgp,0);
             if ( obj->pgp_count == 0 )
@@ -1725,10 +1723,10 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
         tmem_spin_unlock(&obj->obj_spinlock);
     }
     pool->found_gets++;
-    if ( is_ephemeral(pool) )
-        client->succ_eph_gets++;
-    else
+    if ( is_persistent(pool) )
         client->succ_pers_gets++;
+    else
+        client->succ_eph_gets++;
     return 1;
 
 bad_copy:
@@ -2349,7 +2347,7 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     struct tmem_handle h;
     unsigned int pagesize;
 
-    if ( pool == NULL || is_ephemeral(pool) )
+    if ( pool == NULL || !is_persistent(pool) )
         return -1;
 
     pagesize = 1 << (pool->pageshift + 12);
-- 
1.7.10.4


* [PATCH v4 07/15] tmem: cleanup: rm useless EXPORT/FORWARD define
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (5 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 06/15] tmem: drop unneeded is_ephemeral() and is_private() Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 08/15] tmem: cleanup: drop tmem_lock_all Bob Liu
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

The EXPORT/FORWARD defines are meaningless, and nothing uses them.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c     |   16 ++++++----------
 xen/common/tmem_xen.c |   38 ++++++++++++++++++--------------------
 2 files changed, 24 insertions(+), 30 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 7311487..c31141c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -24,9 +24,6 @@
 #include <xen/list.h>
 #include <xen/init.h>
 
-#define EXPORT /* indicates code other modules are dependent upon */
-#define FORWARD
-
 #define TMEM_SPEC_VERSION 1
 
 /* global statistics (none need to be locked) */
@@ -212,8 +209,8 @@ static int tmem_initialized = 0;
 
 /************ CONCURRENCY  ***********************************************/
 
-EXPORT DEFINE_SPINLOCK(tmem_spinlock);  /* used iff tmem_lock_all */
-EXPORT DEFINE_RWLOCK(tmem_rwlock);      /* used iff !tmem_lock_all */
+DEFINE_SPINLOCK(tmem_spinlock);  /* used iff tmem_lock_all */
+DEFINE_RWLOCK(tmem_rwlock);      /* used iff !tmem_lock_all */
 static DEFINE_SPINLOCK(eph_lists_spinlock); /* protects global AND clients */
 static DEFINE_SPINLOCK(pers_lists_spinlock);
 
@@ -2537,7 +2534,7 @@ static int do_tmem_control(struct tmem_op *op)
 
 /************ EXPORTed FUNCTIONS **************************************/
 
-EXPORT long do_tmem_op(tmem_cli_op_t uops)
+long do_tmem_op(tmem_cli_op_t uops)
 {
     struct tmem_op op;
     struct client *client = tmem_client_from_current();
@@ -2702,7 +2699,7 @@ out:
 }
 
 /* this should be called when the host is destroying a client */
-EXPORT void tmem_destroy(void *v)
+void tmem_destroy(void *v)
 {
     struct client *client = (struct client *)v;
 
@@ -2731,7 +2728,7 @@ EXPORT void tmem_destroy(void *v)
 }
 
 /* freezing all pools guarantees that no additional memory will be consumed */
-EXPORT void tmem_freeze_all(unsigned char key)
+void tmem_freeze_all(unsigned char key)
 {
     static int freeze = 0;
  
@@ -2750,8 +2747,7 @@ EXPORT void tmem_freeze_all(unsigned char key)
 }
 
 #define MAX_EVICTS 10  /* should be variable or set via TMEMC_ ?? */
-
-EXPORT void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
+void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
 {
     struct page_info *pfp;
     unsigned long evicts_per_relinq = 0;
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index efc2259..fbd1acc 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -14,27 +14,25 @@
 #include <xen/cpu.h>
 #include <xen/init.h>
 
-#define EXPORT /* indicates code other modules are dependent upon */
-
-EXPORT bool_t __read_mostly opt_tmem = 0;
+bool_t __read_mostly opt_tmem = 0;
 boolean_param("tmem", opt_tmem);
 
-EXPORT bool_t __read_mostly opt_tmem_compress = 0;
+bool_t __read_mostly opt_tmem_compress = 0;
 boolean_param("tmem_compress", opt_tmem_compress);
 
-EXPORT bool_t __read_mostly opt_tmem_dedup = 0;
+bool_t __read_mostly opt_tmem_dedup = 0;
 boolean_param("tmem_dedup", opt_tmem_dedup);
 
-EXPORT bool_t __read_mostly opt_tmem_tze = 0;
+bool_t __read_mostly opt_tmem_tze = 0;
 boolean_param("tmem_tze", opt_tmem_tze);
 
-EXPORT bool_t __read_mostly opt_tmem_shared_auth = 0;
+bool_t __read_mostly opt_tmem_shared_auth = 0;
 boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
 
-EXPORT int __read_mostly opt_tmem_lock = 0;
+int __read_mostly opt_tmem_lock = 0;
 integer_param("tmem_lock", opt_tmem_lock);
 
-EXPORT atomic_t freeable_page_count = ATOMIC_INIT(0);
+atomic_t freeable_page_count = ATOMIC_INIT(0);
 
 /* these are a concurrency bottleneck, could be percpu and dynamically
  * allocated iff opt_tmem_compress */
@@ -99,7 +97,7 @@ static inline void cli_put_page(void *cli_va, struct page_info *cli_pfp,
 }
 #endif
 
-EXPORT int tmem_copy_from_client(struct page_info *pfp,
+int tmem_copy_from_client(struct page_info *pfp,
     xen_pfn_t cmfn, tmem_cli_va_param_t clibuf)
 {
     unsigned long tmem_mfn, cli_mfn = 0;
@@ -131,7 +129,7 @@ EXPORT int tmem_copy_from_client(struct page_info *pfp,
     return rc;
 }
 
-EXPORT int tmem_compress_from_client(xen_pfn_t cmfn,
+int tmem_compress_from_client(xen_pfn_t cmfn,
     void **out_va, size_t *out_len, tmem_cli_va_param_t clibuf)
 {
     int ret = 0;
@@ -163,7 +161,7 @@ EXPORT int tmem_compress_from_client(xen_pfn_t cmfn,
     return 1;
 }
 
-EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
+int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     tmem_cli_va_param_t clibuf)
 {
     unsigned long tmem_mfn, cli_mfn = 0;
@@ -192,7 +190,7 @@ EXPORT int tmem_copy_to_client(xen_pfn_t cmfn, struct page_info *pfp,
     return rc;
 }
 
-EXPORT int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
+int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
                                     size_t size, tmem_cli_va_param_t clibuf)
 {
     unsigned long cli_mfn = 0;
@@ -221,7 +219,7 @@ EXPORT int tmem_decompress_to_client(xen_pfn_t cmfn, void *tmem_va,
     return 1;
 }
 
-EXPORT int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
+int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
                                     pagesize_t len)
 {
     void *cli_va;
@@ -245,12 +243,12 @@ EXPORT int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
 
 /******************  XEN-SPECIFIC MEMORY ALLOCATION ********************/
 
-EXPORT struct xmem_pool *tmem_mempool = 0;
-EXPORT unsigned int tmem_mempool_maxalloc = 0;
+struct xmem_pool *tmem_mempool = 0;
+unsigned int tmem_mempool_maxalloc = 0;
 
-EXPORT DEFINE_SPINLOCK(tmem_page_list_lock);
-EXPORT PAGE_LIST_HEAD(tmem_page_list);
-EXPORT unsigned long tmem_page_list_pages = 0;
+DEFINE_SPINLOCK(tmem_page_list_lock);
+PAGE_LIST_HEAD(tmem_page_list);
+unsigned long tmem_page_list_pages = 0;
 
 static noinline void *tmem_mempool_page_get(unsigned long size)
 {
@@ -352,7 +350,7 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
-EXPORT int __init tmem_init(void)
+int __init tmem_init(void)
 {
     unsigned int cpu;
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 08/15] tmem: cleanup: drop tmem_lock_all
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (6 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 07/15] tmem: cleanup: rm useless EXPORT/FORWARD define Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 09/15] tmem: cleanup: refactor the alloc/free path Bob Liu
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

tmem_lock_all is used only for debugging; remove it from upstream to make
the tmem source code more readable and easier to maintain.
no_evict is meaningless without tmem_lock_all, so this patch removes it
as well.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |  275 ++++++++++++++++----------------------------
 xen/common/tmem_xen.c      |    3 -
 xen/include/xen/tmem_xen.h |    8 --
 3 files changed, 100 insertions(+), 186 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index c31141c..d072d8c 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -138,7 +138,6 @@ struct tmem_object_root {
     struct tmem_pool *pool;
     domid_t last_client;
     spinlock_t obj_spinlock;
-    bool_t no_evict; /* if globally locked, pseudo-locks against eviction */
 };
 
 struct tmem_object_node {
@@ -207,24 +206,12 @@ static bool_t global_shared_auth = 0;
 static atomic_t client_weight_total = ATOMIC_INIT(0);
 static int tmem_initialized = 0;
 
-/************ CONCURRENCY  ***********************************************/
-
-DEFINE_SPINLOCK(tmem_spinlock);  /* used iff tmem_lock_all */
-DEFINE_RWLOCK(tmem_rwlock);      /* used iff !tmem_lock_all */
+DEFINE_RWLOCK(tmem_rwlock);
 static DEFINE_SPINLOCK(eph_lists_spinlock); /* protects global AND clients */
 static DEFINE_SPINLOCK(pers_lists_spinlock);
 
-#define tmem_spin_lock(_l)  do {if (!tmem_lock_all) spin_lock(_l);}while(0)
-#define tmem_spin_unlock(_l)  do {if (!tmem_lock_all) spin_unlock(_l);}while(0)
-#define tmem_read_lock(_l)  do {if (!tmem_lock_all) read_lock(_l);}while(0)
-#define tmem_read_unlock(_l)  do {if (!tmem_lock_all) read_unlock(_l);}while(0)
-#define tmem_write_lock(_l)  do {if (!tmem_lock_all) write_lock(_l);}while(0)
-#define tmem_write_unlock(_l)  do {if (!tmem_lock_all) write_unlock(_l);}while(0)
-#define tmem_write_trylock(_l)  ((tmem_lock_all)?1:write_trylock(_l))
-#define tmem_spin_trylock(_l)  (tmem_lock_all?1:spin_trylock(_l))
-
-#define ASSERT_SPINLOCK(_l) ASSERT(tmem_lock_all || spin_is_locked(_l))
-#define ASSERT_WRITELOCK(_l) ASSERT(tmem_lock_all || rw_is_write_locked(_l))
+#define ASSERT_SPINLOCK(_l) ASSERT(spin_is_locked(_l))
+#define ASSERT_WRITELOCK(_l) ASSERT(rw_is_write_locked(_l))
 
 /* global counters (should use long_atomic_t access) */
 static long global_eph_count = 0; /* atomicity depends on eph_lists_spinlock */
@@ -316,7 +303,7 @@ static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
     int ret;
 
     ASSERT(tmem_dedup_enabled());
-    tmem_read_lock(&pcd_tree_rwlocks[firstbyte]);
+    read_lock(&pcd_tree_rwlocks[firstbyte]);
     pcd = pgp->pcd;
     if ( pgp->size < PAGE_SIZE && pgp->size != 0 &&
          pcd->size < PAGE_SIZE && pcd->size != 0 )
@@ -326,7 +313,7 @@ static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
         ret = tmem_copy_tze_to_client(cmfn, pcd->tze, pcd->size);
     else
         ret = tmem_copy_to_client(cmfn, pcd->pfp, tmem_cli_buf_null);
-    tmem_read_unlock(&pcd_tree_rwlocks[firstbyte]);
+    read_unlock(&pcd_tree_rwlocks[firstbyte]);
     return ret;
 }
 
@@ -350,14 +337,14 @@ static void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool
     if ( have_pcd_rwlock )
         ASSERT_WRITELOCK(&pcd_tree_rwlocks[firstbyte]);
     else
-        tmem_write_lock(&pcd_tree_rwlocks[firstbyte]);
+        write_lock(&pcd_tree_rwlocks[firstbyte]);
     list_del_init(&pgp->pcd_siblings);
     pgp->pcd = NULL;
     pgp->firstbyte = NOT_SHAREABLE;
     pgp->size = -1;
     if ( --pcd->pgp_ref_count )
     {
-        tmem_write_unlock(&pcd_tree_rwlocks[firstbyte]);
+        write_unlock(&pcd_tree_rwlocks[firstbyte]);
         return;
     }
 
@@ -391,7 +378,7 @@ static void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool
             pcd_tot_csize -= PAGE_SIZE;
         tmem_page_free(pool,pfp);
     }
-    tmem_write_unlock(&pcd_tree_rwlocks[firstbyte]);
+    write_unlock(&pcd_tree_rwlocks[firstbyte]);
 }
 
 
@@ -423,7 +410,7 @@ static int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize
         ASSERT(pfp_size <= PAGE_SIZE);
         ASSERT(!(pfp_size & (sizeof(uint64_t)-1)));
     }
-    tmem_write_lock(&pcd_tree_rwlocks[firstbyte]);
+    write_lock(&pcd_tree_rwlocks[firstbyte]);
 
     /* look for page match */
     root = &pcd_tree_roots[firstbyte];
@@ -525,7 +512,7 @@ match:
     pgp->pcd = pcd;
 
 unlock:
-    tmem_write_unlock(&pcd_tree_rwlocks[firstbyte]);
+    write_unlock(&pcd_tree_rwlocks[firstbyte]);
     return ret;
 }
 
@@ -641,7 +628,7 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
     if ( !is_persistent(pgp->us.obj->pool) )
     {
         if ( !no_eph_lock )
-            tmem_spin_lock(&eph_lists_spinlock);
+            spin_lock(&eph_lists_spinlock);
         if ( !list_empty(&pgp->us.client_eph_pages) )
             client->eph_count--;
         ASSERT(client->eph_count >= 0);
@@ -651,20 +638,20 @@ static void pgp_delist(struct tmem_page_descriptor *pgp, bool_t no_eph_lock)
         ASSERT(global_eph_count >= 0);
         list_del_init(&pgp->global_eph_pages);
         if ( !no_eph_lock )
-            tmem_spin_unlock(&eph_lists_spinlock);
+            spin_unlock(&eph_lists_spinlock);
     } else {
         if ( client->live_migrating )
         {
-            tmem_spin_lock(&pers_lists_spinlock);
+            spin_lock(&pers_lists_spinlock);
             list_add_tail(&pgp->client_inv_pages,
                           &client->persistent_invalidated_list);
             if ( pgp != pgp->us.obj->pool->cur_pgp )
                 list_del_init(&pgp->us.pool_pers_pages);
-            tmem_spin_unlock(&pers_lists_spinlock);
+            spin_unlock(&pers_lists_spinlock);
         } else {
-            tmem_spin_lock(&pers_lists_spinlock);
+            spin_lock(&pers_lists_spinlock);
             list_del_init(&pgp->us.pool_pers_pages);
-            tmem_spin_unlock(&pers_lists_spinlock);
+            spin_unlock(&pers_lists_spinlock);
         }
     }
 }
@@ -806,7 +793,7 @@ static struct tmem_object_root * obj_find(struct tmem_pool *pool, struct oid *oi
     struct tmem_object_root *obj;
 
 restart_find:
-    tmem_read_lock(&pool->pool_rwlock);
+    read_lock(&pool->pool_rwlock);
     node = pool->obj_rb_root[oid_hash(oidp)].rb_node;
     while ( node )
     {
@@ -814,17 +801,12 @@ restart_find:
         switch ( oid_compare(&obj->oid, oidp) )
         {
             case 0: /* equal */
-                if ( tmem_lock_all )
-                    obj->no_evict = 1;
-                else
+                if ( !spin_trylock(&obj->obj_spinlock) )
                 {
-                    if ( !tmem_spin_trylock(&obj->obj_spinlock) )
-                    {
-                        tmem_read_unlock(&pool->pool_rwlock);
-                        goto restart_find;
-                    }
-                    tmem_read_unlock(&pool->pool_rwlock);
+                    read_unlock(&pool->pool_rwlock);
+                    goto restart_find;
                 }
+                read_unlock(&pool->pool_rwlock);
                 return obj;
             case -1:
                 node = node->rb_left;
@@ -833,7 +815,7 @@ restart_find:
                 node = node->rb_right;
         }
     }
-    tmem_read_unlock(&pool->pool_rwlock);
+    read_unlock(&pool->pool_rwlock);
     return NULL;
 }
 
@@ -864,7 +846,7 @@ static void obj_free(struct tmem_object_root *obj, int no_rebalance)
     /* use no_rebalance only if all objects are being destroyed anyway */
     if ( !no_rebalance )
         rb_erase(&obj->rb_tree_node,&pool->obj_rb_root[oid_hash(&old_oid)]);
-    tmem_spin_unlock(&obj->obj_spinlock);
+    spin_unlock(&obj->obj_spinlock);
     tmem_free(obj, pool);
 }
 
@@ -919,9 +901,8 @@ static struct tmem_object_root * obj_new(struct tmem_pool *pool, struct oid *oid
     obj->objnode_count = 0;
     obj->pgp_count = 0;
     obj->last_client = TMEM_CLI_ID_NULL;
-    tmem_spin_lock(&obj->obj_spinlock);
+    spin_lock(&obj->obj_spinlock);
     obj_rb_insert(&pool->obj_rb_root[oid_hash(oidp)], obj);
-    obj->no_evict = 1;
     ASSERT_SPINLOCK(&obj->obj_spinlock);
     return obj;
 }
@@ -941,7 +922,7 @@ static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t
     struct tmem_object_root *obj;
     int i;
 
-    tmem_write_lock(&pool->pool_rwlock);
+    write_lock(&pool->pool_rwlock);
     pool->is_dying = 1;
     for (i = 0; i < OBJ_HASH_BUCKETS; i++)
     {
@@ -949,19 +930,18 @@ static void pool_destroy_objs(struct tmem_pool *pool, bool_t selective, domid_t
         while ( node != NULL )
         {
             obj = container_of(node, struct tmem_object_root, rb_tree_node);
-            tmem_spin_lock(&obj->obj_spinlock);
+            spin_lock(&obj->obj_spinlock);
             node = rb_next(node);
-            ASSERT(obj->no_evict == 0);
             if ( !selective )
                 /* FIXME: should be obj,1 but walking/erasing rbtree is racy */
                 obj_destroy(obj,0);
             else if ( obj->last_client == cli_id )
                 obj_destroy(obj,0);
             else
-                tmem_spin_unlock(&obj->obj_spinlock);
+                spin_unlock(&obj->obj_spinlock);
         }
     }
-    tmem_write_unlock(&pool->pool_rwlock);
+    write_unlock(&pool->pool_rwlock);
 }
 
 
@@ -1229,9 +1209,7 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
 
     if ( pool->is_dying )
         return 0;
-    if ( tmem_lock_all && !obj->no_evict )
-       return 1;
-    if ( tmem_spin_trylock(&obj->obj_spinlock) )
+    if ( spin_trylock(&obj->obj_spinlock) )
     {
         if ( tmem_dedup_enabled() )
         {
@@ -1239,7 +1217,7 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
             if ( firstbyte ==  NOT_SHAREABLE )
                 goto obj_unlock;
             ASSERT(firstbyte < 256);
-            if ( !tmem_write_trylock(&pcd_tree_rwlocks[firstbyte]) )
+            if ( !write_trylock(&pcd_tree_rwlocks[firstbyte]) )
                 goto obj_unlock;
             if ( pgp->pcd->pgp_ref_count > 1 && !pgp->eviction_attempted )
             {
@@ -1253,15 +1231,15 @@ static bool_t tmem_try_to_evict_pgp(struct tmem_page_descriptor *pgp, bool_t *ho
         }
         if ( obj->pgp_count > 1 )
             return 1;
-        if ( tmem_write_trylock(&pool->pool_rwlock) )
+        if ( write_trylock(&pool->pool_rwlock) )
         {
             *hold_pool_rwlock = 1;
             return 1;
         }
 pcd_unlock:
-        tmem_write_unlock(&pcd_tree_rwlocks[firstbyte]);
+        write_unlock(&pcd_tree_rwlocks[firstbyte]);
 obj_unlock:
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     }
     return 0;
 }
@@ -1276,7 +1254,7 @@ static int tmem_evict(void)
     bool_t hold_pool_rwlock = 0;
 
     evict_attempts++;
-    tmem_spin_lock(&eph_lists_spinlock);
+    spin_lock(&eph_lists_spinlock);
     if ( (client != NULL) && client_over_quota(client) &&
          !list_empty(&client->ephemeral_page_list) )
     {
@@ -1298,7 +1276,6 @@ found:
     ASSERT(pgp != NULL);
     obj = pgp->us.obj;
     ASSERT(obj != NULL);
-    ASSERT(obj->no_evict == 0);
     ASSERT(obj->pool != NULL);
     pool = obj->pool;
 
@@ -1317,14 +1294,14 @@ found:
         obj_free(obj,0);
     }
     else
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     if ( hold_pool_rwlock )
-        tmem_write_unlock(&pool->pool_rwlock);
+        write_unlock(&pool->pool_rwlock);
     evicted_pgs++;
     ret = 1;
 
 out:
-    tmem_spin_unlock(&eph_lists_spinlock);
+    spin_unlock(&eph_lists_spinlock);
     return ret;
 }
 
@@ -1467,8 +1444,7 @@ done:
     /* successfully replaced data, clean up and return success */
     if ( is_shared(pool) )
         obj->last_client = client->cli_id;
-    obj->no_evict = 0;
-    tmem_spin_unlock(&obj->obj_spinlock);
+    spin_unlock(&obj->obj_spinlock);
     pool->dup_puts_replaced++;
     pool->good_puts++;
     if ( is_persistent(pool) )
@@ -1489,12 +1465,11 @@ cleanup:
     pgp_delete(pgpfound,0);
     if ( obj->pgp_count == 0 )
     {
-        tmem_write_lock(&pool->pool_rwlock);
+        write_lock(&pool->pool_rwlock);
         obj_free(obj,0);
-        tmem_write_unlock(&pool->pool_rwlock);
+        write_unlock(&pool->pool_rwlock);
     } else {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     }
     pool->dup_puts_flushed++;
     return ret;
@@ -1532,14 +1507,14 @@ static int do_tmem_put(struct tmem_pool *pool,
         /* no puts allowed into a frozen pool (except dup puts) */
         if ( client->frozen )
             return ret;
-        tmem_write_lock(&pool->pool_rwlock);
+        write_lock(&pool->pool_rwlock);
         if ( (obj = obj_new(pool,oidp)) == NULL )
         {
-            tmem_write_unlock(&pool->pool_rwlock);
+            write_unlock(&pool->pool_rwlock);
             return -ENOMEM;
         }
         newobj = 1;
-        tmem_write_unlock(&pool->pool_rwlock);
+        write_unlock(&pool->pool_rwlock);
     }
 
     /* When arrive here, we have a spinlocked obj for use */
@@ -1594,7 +1569,7 @@ copy_uncompressed:
 insert_page:
     if ( !is_persistent(pool) )
     {
-        tmem_spin_lock(&eph_lists_spinlock);
+        spin_lock(&eph_lists_spinlock);
         list_add_tail(&pgp->global_eph_pages,
             &global_ephemeral_page_list);
         if (++global_eph_count > global_eph_count_max)
@@ -1603,22 +1578,21 @@ insert_page:
             &client->ephemeral_page_list);
         if (++client->eph_count > client->eph_count_max)
             client->eph_count_max = client->eph_count;
-        tmem_spin_unlock(&eph_lists_spinlock);
+        spin_unlock(&eph_lists_spinlock);
     }
     else
     { /* is_persistent */
-        tmem_spin_lock(&pers_lists_spinlock);
+        spin_lock(&pers_lists_spinlock);
         list_add_tail(&pgp->us.pool_pers_pages,
             &pool->persistent_page_list);
-        tmem_spin_unlock(&pers_lists_spinlock);
+        spin_unlock(&pers_lists_spinlock);
     }
 
     if ( is_shared(pool) )
         obj->last_client = client->cli_id;
-    obj->no_evict = 0;
 
     /* free the obj spinlock */
-    tmem_spin_unlock(&obj->obj_spinlock);
+    spin_unlock(&obj->obj_spinlock);
     pool->good_puts++;
 
     if ( is_persistent(pool) )
@@ -1639,14 +1613,13 @@ free_pgp:
 unlock_obj:
     if ( newobj )
     {
-        tmem_write_lock(&pool->pool_rwlock);
+        write_lock(&pool->pool_rwlock);
         obj_free(obj, 0);
-        tmem_write_unlock(&pool->pool_rwlock);
+        write_unlock(&pool->pool_rwlock);
     }
     else
     {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     }
     pool->no_mem_puts++;
     return ret;
@@ -1675,8 +1648,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
         pgp = pgp_delete_from_obj(obj, index);
     if ( pgp == NULL )
     {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
         return 0;
     }
     ASSERT(pgp->size != -1);
@@ -1699,25 +1671,24 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
             pgp_delete(pgp,0);
             if ( obj->pgp_count == 0 )
             {
-                tmem_write_lock(&pool->pool_rwlock);
+                write_lock(&pool->pool_rwlock);
                 obj_free(obj,0);
                 obj = NULL;
-                tmem_write_unlock(&pool->pool_rwlock);
+                write_unlock(&pool->pool_rwlock);
             }
         } else {
-            tmem_spin_lock(&eph_lists_spinlock);
+            spin_lock(&eph_lists_spinlock);
             list_del(&pgp->global_eph_pages);
             list_add_tail(&pgp->global_eph_pages,&global_ephemeral_page_list);
             list_del(&pgp->us.client_eph_pages);
             list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
-            tmem_spin_unlock(&eph_lists_spinlock);
+            spin_unlock(&eph_lists_spinlock);
             obj->last_client = tmem_get_cli_id_from_current();
         }
     }
     if ( obj != NULL )
     {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     }
     pool->found_gets++;
     if ( is_persistent(pool) )
@@ -1727,8 +1698,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
     return 1;
 
 bad_copy:
-    obj->no_evict = 0;
-    tmem_spin_unlock(&obj->obj_spinlock);
+    spin_unlock(&obj->obj_spinlock);
     failed_copies++;
     return rc;
 }
@@ -1745,19 +1715,17 @@ static int do_tmem_flush_page(struct tmem_pool *pool, struct oid *oidp, uint32_t
     pgp = pgp_delete_from_obj(obj, index);
     if ( pgp == NULL )
     {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
         goto out;
     }
     pgp_delete(pgp,0);
     if ( obj->pgp_count == 0 )
     {
-        tmem_write_lock(&pool->pool_rwlock);
+        write_lock(&pool->pool_rwlock);
         obj_free(obj,0);
-        tmem_write_unlock(&pool->pool_rwlock);
+        write_unlock(&pool->pool_rwlock);
     } else {
-        obj->no_evict = 0;
-        tmem_spin_unlock(&obj->obj_spinlock);
+        spin_unlock(&obj->obj_spinlock);
     }
     pool->flushs_found++;
 
@@ -1776,10 +1744,10 @@ static int do_tmem_flush_object(struct tmem_pool *pool, struct oid *oidp)
     obj = obj_find(pool,oidp);
     if ( obj == NULL )
         goto out;
-    tmem_write_lock(&pool->pool_rwlock);
+    write_lock(&pool->pool_rwlock);
     obj_destroy(obj,0);
     pool->flush_objs_found++;
-    tmem_write_unlock(&pool->pool_rwlock);
+    write_unlock(&pool->pool_rwlock);
 
 out:
     if ( pool->client->frozen )
@@ -2351,7 +2319,7 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     if ( bufsize < pagesize + sizeof(struct tmem_handle) )
         return -ENOMEM;
 
-    tmem_spin_lock(&pers_lists_spinlock);
+    spin_lock(&pers_lists_spinlock);
     if ( list_empty(&pool->persistent_page_list) )
     {
         ret = -1;
@@ -2383,7 +2351,7 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     ret = do_tmem_get(pool, &oid, pgp->index, 0, buf);
 
 out:
-    tmem_spin_unlock(&pers_lists_spinlock);
+    spin_unlock(&pers_lists_spinlock);
     return ret;
 }
 
@@ -2399,7 +2367,7 @@ static int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_param_t buf,
         return 0;
     if ( bufsize < sizeof(struct tmem_handle) )
         return 0;
-    tmem_spin_lock(&pers_lists_spinlock);
+    spin_lock(&pers_lists_spinlock);
     if ( list_empty(&client->persistent_invalidated_list) )
         goto out;
     if ( client->cur_pgp == NULL )
@@ -2425,7 +2393,7 @@ static int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_param_t buf,
     tmem_copy_to_client_buf(buf, &h, 1);
     ret = 1;
 out:
-    tmem_spin_unlock(&pers_lists_spinlock);
+    spin_unlock(&pers_lists_spinlock);
     return ret;
 }
 
@@ -2544,7 +2512,7 @@ long do_tmem_op(tmem_cli_op_t uops)
     bool_t succ_get = 0, succ_put = 0;
     bool_t non_succ_get = 0, non_succ_put = 0;
     bool_t flush = 0, flush_obj = 0;
-    bool_t tmem_write_lock_set = 0, tmem_read_lock_set = 0;
+    bool_t write_lock_set = 0, read_lock_set = 0;
 
     if ( !tmem_initialized )
         return -ENODEV;
@@ -2554,19 +2522,9 @@ long do_tmem_op(tmem_cli_op_t uops)
 
     total_tmem_ops++;
 
-    if ( tmem_lock_all )
-    {
-        if ( tmem_lock_all > 1 )
-            spin_lock_irq(&tmem_spinlock);
-        else
-            spin_lock(&tmem_spinlock);
-    }
-
     if ( client != NULL && tmem_client_is_dying(client) )
     {
         rc = -ENODEV;
-        if ( tmem_lock_all )
-            goto out;
  simple_error:
         errored_tmem_ops++;
         return rc;
@@ -2576,26 +2534,24 @@ long do_tmem_op(tmem_cli_op_t uops)
     {
         tmem_client_err("tmem: can't get tmem struct from %s\n", tmem_client_str);
         rc = -EFAULT;
-        if ( !tmem_lock_all )
-            goto simple_error;
-        goto out;
+        goto simple_error;
     }
 
     if ( op.cmd == TMEM_CONTROL )
     {
-        tmem_write_lock(&tmem_rwlock);
-        tmem_write_lock_set = 1;
+        write_lock(&tmem_rwlock);
+        write_lock_set = 1;
         rc = do_tmem_control(&op);
         goto out;
     } else if ( op.cmd == TMEM_AUTH ) {
-        tmem_write_lock(&tmem_rwlock);
-        tmem_write_lock_set = 1;
+        write_lock(&tmem_rwlock);
+        write_lock_set = 1;
         rc = tmemc_shared_pool_auth(op.u.creat.arg1,op.u.creat.uuid[0],
                          op.u.creat.uuid[1],op.u.creat.flags);
         goto out;
     } else if ( op.cmd == TMEM_RESTORE_NEW ) {
-        tmem_write_lock(&tmem_rwlock);
-        tmem_write_lock_set = 1;
+        write_lock(&tmem_rwlock);
+        write_lock_set = 1;
         rc = do_tmem_new_pool(op.u.creat.arg1, op.pool_id, op.u.creat.flags,
                          op.u.creat.uuid[0], op.u.creat.uuid[1]);
         goto out;
@@ -2604,8 +2560,8 @@ long do_tmem_op(tmem_cli_op_t uops)
     /* create per-client tmem structure dynamically on first use by client */
     if ( client == NULL )
     {
-        tmem_write_lock(&tmem_rwlock);
-        tmem_write_lock_set = 1;
+        write_lock(&tmem_rwlock);
+        write_lock_set = 1;
         if ( (client = client_create(tmem_get_cli_id_from_current())) == NULL )
         {
             tmem_client_err("tmem: can't create tmem structure for %s\n",
@@ -2617,18 +2573,18 @@ long do_tmem_op(tmem_cli_op_t uops)
 
     if ( op.cmd == TMEM_NEW_POOL || op.cmd == TMEM_DESTROY_POOL )
     {
-        if ( !tmem_write_lock_set )
+        if ( !write_lock_set )
         {
-            tmem_write_lock(&tmem_rwlock);
-            tmem_write_lock_set = 1;
+            write_lock(&tmem_rwlock);
+            write_lock_set = 1;
         }
     }
     else
     {
-        if ( !tmem_write_lock_set )
+        if ( !write_lock_set )
         {
-            tmem_read_lock(&tmem_rwlock);
-            tmem_read_lock_set = 1;
+            read_lock(&tmem_rwlock);
+            read_lock_set = 1;
         }
         if ( ((uint32_t)op.pool_id >= MAX_POOLS_PER_DOMAIN) ||
              ((pool = client->pools[op.pool_id]) == NULL) )
@@ -2680,20 +2636,12 @@ long do_tmem_op(tmem_cli_op_t uops)
 out:
     if ( rc < 0 )
         errored_tmem_ops++;
-    if ( tmem_lock_all )
-    {
-        if ( tmem_lock_all > 1 )
-            spin_unlock_irq(&tmem_spinlock);
-        else
-            spin_unlock(&tmem_spinlock);
-    } else {
-        if ( tmem_write_lock_set )
-            write_unlock(&tmem_rwlock);
-        else if ( tmem_read_lock_set )
-            read_unlock(&tmem_rwlock);
-        else 
-            ASSERT(0);
-    }
+    if ( write_lock_set )
+        write_unlock(&tmem_rwlock);
+    else if ( read_lock_set )
+        read_unlock(&tmem_rwlock);
+    else 
+        ASSERT(0);
 
     return rc;
 }
@@ -2712,38 +2660,26 @@ void tmem_destroy(void *v)
         return;
     }
 
-    if ( tmem_lock_all )
-        spin_lock(&tmem_spinlock);
-    else
-        write_lock(&tmem_rwlock);
+    write_lock(&tmem_rwlock);
 
     printk("tmem: flushing tmem pools for %s=%d\n",
            tmem_cli_id_str, client->cli_id);
     client_flush(client, 1);
 
-    if ( tmem_lock_all )
-        spin_unlock(&tmem_spinlock);
-    else
-        write_unlock(&tmem_rwlock);
+    write_unlock(&tmem_rwlock);
 }
 
 /* freezing all pools guarantees that no additional memory will be consumed */
 void tmem_freeze_all(unsigned char key)
 {
     static int freeze = 0;
- 
-    if ( tmem_lock_all )
-        spin_lock(&tmem_spinlock);
-    else
-        write_lock(&tmem_rwlock);
+
+    write_lock(&tmem_rwlock);
 
     freeze = !freeze;
     tmemc_freeze_pools(TMEM_CLI_ID_NULL,freeze);
 
-    if ( tmem_lock_all )
-        spin_unlock(&tmem_spinlock);
-    else
-        write_unlock(&tmem_rwlock);
+    write_unlock(&tmem_rwlock);
 }
 
 #define MAX_EVICTS 10  /* should be variable or set via TMEMC_ ?? */
@@ -2766,12 +2702,7 @@ void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
     }
 
     if ( tmem_called_from_tmem(memflags) )
-    {
-        if ( tmem_lock_all )
-            spin_lock(&tmem_spinlock);
-        else
-            read_lock(&tmem_rwlock);
-    }
+        read_lock(&tmem_rwlock);
 
     while ( (pfp = tmem_alloc_page(NULL,1)) == NULL )
     {
@@ -2789,12 +2720,7 @@ void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
     }
 
     if ( tmem_called_from_tmem(memflags) )
-    {
-        if ( tmem_lock_all )
-            spin_unlock(&tmem_spinlock);
-        else
-            read_unlock(&tmem_rwlock);
-    }
+        read_unlock(&tmem_rwlock);
 
     return pfp;
 }
@@ -2820,9 +2746,8 @@ static int __init init_tmem(void)
 
     if ( tmem_init() )
     {
-        printk("tmem: initialized comp=%d dedup=%d tze=%d global-lock=%d\n",
-            tmem_compression_enabled(), tmem_dedup_enabled(), tmem_tze_enabled(),
-            tmem_lock_all);
+        printk("tmem: initialized comp=%d dedup=%d tze=%d\n",
+            tmem_compression_enabled(), tmem_dedup_enabled(), tmem_tze_enabled());
         if ( tmem_dedup_enabled()&&tmem_compression_enabled()&&tmem_tze_enabled() )
         {
             tmem_tze_disable();
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index fbd1acc..bc8e249 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -29,9 +29,6 @@ boolean_param("tmem_tze", opt_tmem_tze);
 bool_t __read_mostly opt_tmem_shared_auth = 0;
 boolean_param("tmem_shared_auth", opt_tmem_shared_auth);
 
-int __read_mostly opt_tmem_lock = 0;
-integer_param("tmem_lock", opt_tmem_lock);
-
 atomic_t freeable_page_count = ATOMIC_INIT(0);
 
 /* these are a concurrency bottleneck, could be percpu and dynamically
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index d842374..0628ef4 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -34,11 +34,6 @@ extern spinlock_t tmem_page_list_lock;
 extern unsigned long tmem_page_list_pages;
 extern atomic_t freeable_page_count;
 
-extern spinlock_t tmem_lock;
-extern spinlock_t tmem_spinlock;
-extern rwlock_t tmem_rwlock;
-
-extern void tmem_copy_page(char *to, char*from);
 extern int tmem_init(void);
 #define tmem_hash hash_long
 
@@ -77,8 +72,6 @@ static inline bool_t tmem_enabled(void)
     return opt_tmem;
 }
 
-extern int opt_tmem_lock;
-
 /*
  * Memory free page list management
  */
@@ -182,7 +175,6 @@ static inline unsigned long tmem_free_mb(void)
     return (tmem_page_list_pages + total_free_pages()) >> (20 - PAGE_SHIFT);
 }
 
-#define tmem_lock_all  opt_tmem_lock
 #define tmem_called_from_tmem(_memflags) (_memflags & MEMF_tmem)
 
 /*  "Client" (==domain) abstraction */
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 09/15] tmem: cleanup: refactor the alloc/free path
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (7 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 08/15] tmem: cleanup: drop tmem_lock_all Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 10/15] tmem: cleanup: __tmem_alloc_page: drop unneed parameters Bob Liu
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

There are two allocation paths, one each for persistent and ephemeral pools.

This patch refactors those allocation/free functions with better names and a
more readable call hierarchy, and adds more comments.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |  114 +++++++++++++++++++++++++++++++++++++-------
 xen/common/tmem_xen.c      |   63 ------------------------
 xen/include/xen/tmem_xen.h |   20 ++------
 3 files changed, 102 insertions(+), 95 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index d072d8c..07d62d7 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -206,6 +206,13 @@ static bool_t global_shared_auth = 0;
 static atomic_t client_weight_total = ATOMIC_INIT(0);
 static int tmem_initialized = 0;
 
+struct xmem_pool *tmem_mempool = 0;
+unsigned int tmem_mempool_maxalloc = 0;
+
+DEFINE_SPINLOCK(tmem_page_list_lock);
+PAGE_LIST_HEAD(tmem_page_list);
+unsigned long tmem_page_list_pages = 0;
+
 DEFINE_RWLOCK(tmem_rwlock);
 static DEFINE_SPINLOCK(eph_lists_spinlock); /* protects global AND clients */
 static DEFINE_SPINLOCK(pers_lists_spinlock);
@@ -233,7 +240,29 @@ static atomic_t global_rtree_node_count = ATOMIC_INIT(0);
 } while (0)
 
 
-/************ MEMORY ALLOCATION INTERFACE *****************************/
+/*
+ * There are two types of memory allocation interfaces in tmem.
+ * One is based on xmem_pool and the other is used to allocate a whole page.
+ * Both are based on the low-level functions __tmem_alloc_page/_thispool().
+ * The call trace of the alloc path is as below.
+ * Persistent pool:
+ *     1.tmem_malloc()
+ *         > xmem_pool_alloc()
+ *             > tmem_persistent_pool_page_get()
+ *                 > __tmem_alloc_page_thispool()
+ *     2.tmem_alloc_page()
+ *         > __tmem_alloc_page_thispool()
+ *
+ * Ephemeral pool:
+ *     1.tmem_malloc()
+ *         > xmem_pool_alloc()
+ *             > tmem_mempool_page_get()
+ *                 > __tmem_alloc_page()
+ *     2.tmem_alloc_page()
+ *         > __tmem_alloc_page()
+ *
+ * The free path is done in the same manner.
+ */
 static void *tmem_malloc(size_t size, struct tmem_pool *pool)
 {
     void *v = NULL;
@@ -267,14 +296,14 @@ static void tmem_free(void *p, struct tmem_pool *pool)
     }
 }
 
-static struct page_info *tmem_page_alloc(struct tmem_pool *pool)
+static struct page_info *tmem_alloc_page(struct tmem_pool *pool)
 {
     struct page_info *pfp = NULL;
 
     if ( pool != NULL && is_persistent(pool) )
-        pfp = tmem_alloc_page_thispool(pool->client->domain);
+        pfp = __tmem_alloc_page_thispool(pool->client->domain);
     else
-        pfp = tmem_alloc_page(pool,0);
+        pfp = __tmem_alloc_page(pool,0);
     if ( pfp == NULL )
         alloc_page_failed++;
     else
@@ -282,18 +311,68 @@ static struct page_info *tmem_page_alloc(struct tmem_pool *pool)
     return pfp;
 }
 
-static void tmem_page_free(struct tmem_pool *pool, struct page_info *pfp)
+static void tmem_free_page(struct tmem_pool *pool, struct page_info *pfp)
 {
     ASSERT(pfp);
     if ( pool == NULL || !is_persistent(pool) )
-        tmem_free_page(pfp);
+        __tmem_free_page(pfp);
     else
-        tmem_free_page_thispool(pfp);
+        __tmem_free_page_thispool(pfp);
     atomic_dec_and_assert(global_page_count);
 }
 
-/************ PAGE CONTENT DESCRIPTOR MANIPULATION ROUTINES ***********/
+static noinline void *tmem_mempool_page_get(unsigned long size)
+{
+    struct page_info *pi;
 
+    ASSERT(size == PAGE_SIZE);
+    if ( (pi = __tmem_alloc_page(NULL,0)) == NULL )
+        return NULL;
+    ASSERT(IS_VALID_PAGE(pi));
+    return page_to_virt(pi);
+}
+
+static void tmem_mempool_page_put(void *page_va)
+{
+    ASSERT(IS_PAGE_ALIGNED(page_va));
+    __tmem_free_page(virt_to_page(page_va));
+}
+
+static int __init tmem_mempool_init(void)
+{
+    tmem_mempool = xmem_pool_create("tmem", tmem_mempool_page_get,
+        tmem_mempool_page_put, PAGE_SIZE, 0, PAGE_SIZE);
+    if ( tmem_mempool )
+        tmem_mempool_maxalloc = xmem_pool_maxalloc(tmem_mempool);
+    return tmem_mempool != NULL;
+}
+
+/* persistent pools are per-domain */
+static void *tmem_persistent_pool_page_get(unsigned long size)
+{
+    struct page_info *pi;
+    struct domain *d = current->domain;
+
+    ASSERT(size == PAGE_SIZE);
+    if ( (pi = __tmem_alloc_page_thispool(d)) == NULL )
+        return NULL;
+    ASSERT(IS_VALID_PAGE(pi));
+    return page_to_virt(pi);
+}
+
+static void tmem_persistent_pool_page_put(void *page_va)
+{
+    struct page_info *pi;
+
+    ASSERT(IS_PAGE_ALIGNED(page_va));
+    pi = mfn_to_page(virt_to_mfn(page_va));
+    ASSERT(IS_VALID_PAGE(pi));
+    __tmem_free_page_thispool(pi);
+}
+
+/*
+ * Page content descriptor manipulation routines
+ */
 #define NOT_SHAREABLE ((uint16_t)-1UL)
 
 static int pcd_copy_to_client(xen_pfn_t cmfn, struct tmem_page_descriptor *pgp)
@@ -376,7 +455,7 @@ static void pcd_disassociate(struct tmem_page_descriptor *pgp, struct tmem_pool
             pcd_tot_tze_size -= PAGE_SIZE;
         if ( tmem_compression_enabled() )
             pcd_tot_csize -= PAGE_SIZE;
-        tmem_page_free(pool,pfp);
+        tmem_free_page(pool,pfp);
     }
     write_unlock(&pcd_tree_rwlocks[firstbyte]);
 }
@@ -455,7 +534,7 @@ static int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize
             /* match! if not compressed, free the no-longer-needed page */
             /* but if compressed, data is assumed static so don't free! */
             if ( cdata == NULL )
-                tmem_page_free(pgp->us.obj->pool,pgp->pfp);
+                tmem_free_page(pgp->us.obj->pool,pgp->pfp);
             deduped_puts++;
             goto match;
         }
@@ -492,7 +571,7 @@ static int pcd_associate(struct tmem_page_descriptor *pgp, char *cdata, pagesize
         tmem_tze_copy_from_pfp(pcd->tze,pgp->pfp,pfp_size);
         pcd->size = pfp_size;
         pcd_tot_tze_size += pfp_size;
-        tmem_page_free(pgp->us.obj->pool,pgp->pfp);
+        tmem_free_page(pgp->us.obj->pool,pgp->pfp);
     } else {
         pcd->pfp = pgp->pfp;
         pcd->size = PAGE_SIZE;
@@ -566,7 +645,7 @@ static void pgp_free_data(struct tmem_page_descriptor *pgp, struct tmem_pool *po
     else if ( pgp_size )
         tmem_free(pgp->cdata, pool);
     else
-        tmem_page_free(pgp->us.obj->pool,pgp->pfp);
+        tmem_free_page(pgp->us.obj->pool,pgp->pfp);
     if ( pool != NULL && pgp_size )
     {
         pool->client->compressed_pages--;
@@ -1369,7 +1448,7 @@ static int do_tmem_put_compress(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn
     ret = tmem_compress_from_client(cmfn, &dst, &size, clibuf);
     if ( ret <= 0 )
         goto out;
-    else if ( (size == 0) || (size >= tmem_subpage_maxsize()) ) {
+    else if ( (size == 0) || (size >= tmem_mempool_maxalloc) ) {
         ret = 0;
         goto out;
     } else if ( tmem_dedup_enabled() && !is_persistent(pgp->us.obj->pool) ) {
@@ -1428,7 +1507,7 @@ static int do_tmem_dup_put(struct tmem_page_descriptor *pgp, xen_pfn_t cmfn,
 copy_uncompressed:
     if ( pgp->pfp )
         pgp_free_data(pgp, pool);
-    if ( ( pgp->pfp = tmem_page_alloc(pool) ) == NULL )
+    if ( ( pgp->pfp = tmem_alloc_page(pool) ) == NULL )
         goto failed_dup;
     pgp->size = 0;
     ret = tmem_copy_from_client(pgp->pfp, cmfn, tmem_cli_buf_null);
@@ -1551,7 +1630,7 @@ static int do_tmem_put(struct tmem_pool *pool,
     }
 
 copy_uncompressed:
-    if ( ( pgp->pfp = tmem_page_alloc(pool) ) == NULL )
+    if ( ( pgp->pfp = tmem_alloc_page(pool) ) == NULL )
     {
         ret = -ENOMEM;
         goto del_pgp_from_obj;
@@ -2704,7 +2783,7 @@ void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
     if ( tmem_called_from_tmem(memflags) )
         read_lock(&tmem_rwlock);
 
-    while ( (pfp = tmem_alloc_page(NULL,1)) == NULL )
+    while ( (pfp = __tmem_alloc_page(NULL,1)) == NULL )
     {
         if ( (max_evictions-- <= 0) || !tmem_evict())
             break;
@@ -2744,6 +2823,9 @@ static int __init init_tmem(void)
             rwlock_init(&pcd_tree_rwlocks[i]);
         }
 
+    if ( !tmem_mempool_init() )
+        return 0;
+
     if ( tmem_init() )
     {
         printk("tmem: initialized comp=%d dedup=%d tze=%d\n",
diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
index bc8e249..5ef131b 100644
--- a/xen/common/tmem_xen.c
+++ b/xen/common/tmem_xen.c
@@ -238,67 +238,7 @@ int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va,
     return 1;
 }
 
-/******************  XEN-SPECIFIC MEMORY ALLOCATION ********************/
-
-struct xmem_pool *tmem_mempool = 0;
-unsigned int tmem_mempool_maxalloc = 0;
-
-DEFINE_SPINLOCK(tmem_page_list_lock);
-PAGE_LIST_HEAD(tmem_page_list);
-unsigned long tmem_page_list_pages = 0;
-
-static noinline void *tmem_mempool_page_get(unsigned long size)
-{
-    struct page_info *pi;
-
-    ASSERT(size == PAGE_SIZE);
-    if ( (pi = tmem_alloc_page(NULL,0)) == NULL )
-        return NULL;
-    ASSERT(IS_VALID_PAGE(pi));
-    return page_to_virt(pi);
-}
-
-static void tmem_mempool_page_put(void *page_va)
-{
-    ASSERT(IS_PAGE_ALIGNED(page_va));
-    tmem_free_page(virt_to_page(page_va));
-}
-
-static int __init tmem_mempool_init(void)
-{
-    tmem_mempool = xmem_pool_create("tmem", tmem_mempool_page_get,
-        tmem_mempool_page_put, PAGE_SIZE, 0, PAGE_SIZE);
-    if ( tmem_mempool )
-        tmem_mempool_maxalloc = xmem_pool_maxalloc(tmem_mempool);
-    return tmem_mempool != NULL;
-}
-
-/* persistent pools are per-domain */
-
-void *tmem_persistent_pool_page_get(unsigned long size)
-{
-    struct page_info *pi;
-    struct domain *d = current->domain;
-
-    ASSERT(size == PAGE_SIZE);
-    if ( (pi = tmem_alloc_page_thispool(d)) == NULL )
-        return NULL;
-    ASSERT(IS_VALID_PAGE(pi));
-    return page_to_virt(pi);
-}
-
-void tmem_persistent_pool_page_put(void *page_va)
-{
-    struct page_info *pi;
-
-    ASSERT(IS_PAGE_ALIGNED(page_va));
-    pi = mfn_to_page(virt_to_mfn(page_va));
-    ASSERT(IS_VALID_PAGE(pi));
-    tmem_free_page_thispool(pi);
-}
-
 /******************  XEN-SPECIFIC HOST INITIALIZATION ********************/
-
 static int dstmem_order, workmem_order;
 
 static int cpu_callback(
@@ -351,9 +291,6 @@ int __init tmem_init(void)
 {
     unsigned int cpu;
 
-    if ( !tmem_mempool_init() )
-        return 0;
-
     dstmem_order = get_order_from_pages(LZO_DSTMEM_PAGES);
     workmem_order = get_order_from_bytes(LZO1X_1_MEM_COMPRESS);
 
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 0628ef4..073fc1b 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -27,8 +27,6 @@ typedef uint32_t pagesize_t;  /* like size_t, must handle largest PAGE_SIZE */
   ((void *)((((unsigned long)addr + (PAGE_SIZE - 1)) & PAGE_MASK)) == addr)
 #define IS_VALID_PAGE(_pi)  ( mfn_valid(page_to_mfn(_pi)) )
 
-extern struct xmem_pool *tmem_mempool;
-extern unsigned int tmem_mempool_maxalloc;
 extern struct page_list_head tmem_page_list;
 extern spinlock_t tmem_page_list_lock;
 extern unsigned long tmem_page_list_pages;
@@ -100,7 +98,7 @@ static inline void tmem_page_list_put(struct page_info *pi)
 /*
  * Memory allocation for persistent data 
  */
-static inline struct page_info *tmem_alloc_page_thispool(struct domain *d)
+static inline struct page_info *__tmem_alloc_page_thispool(struct domain *d)
 {
     struct page_info *pi;
 
@@ -128,7 +126,7 @@ out:
     return pi;
 }
 
-static inline void tmem_free_page_thispool(struct page_info *pi)
+static inline void __tmem_free_page_thispool(struct page_info *pi)
 {
     struct domain *d = page_get_owner(pi);
 
@@ -146,7 +144,7 @@ static inline void tmem_free_page_thispool(struct page_info *pi)
 /*
  * Memory allocation for ephemeral (non-persistent) data
  */
-static inline struct page_info *tmem_alloc_page(void *pool, int no_heap)
+static inline struct page_info *__tmem_alloc_page(void *pool, int no_heap)
 {
     struct page_info *pi = tmem_page_list_get();
 
@@ -158,18 +156,13 @@ static inline struct page_info *tmem_alloc_page(void *pool, int no_heap)
     return pi;
 }
 
-static inline void tmem_free_page(struct page_info *pi)
+static inline void __tmem_free_page(struct page_info *pi)
 {
     ASSERT(IS_VALID_PAGE(pi));
     tmem_page_list_put(pi);
     atomic_dec(&freeable_page_count);
 }
 
-static inline unsigned int tmem_subpage_maxsize(void)
-{
-    return tmem_mempool_maxalloc;
-}
-
 static inline unsigned long tmem_free_mb(void)
 {
     return (tmem_page_list_pages + total_free_pages()) >> (20 - PAGE_SHIFT);
@@ -375,17 +368,12 @@ static inline void tmem_copy_to_client_buf_offset(tmem_cli_va_param_t clibuf,
 
 int tmem_decompress_to_client(xen_pfn_t, void *, size_t,
 			     tmem_cli_va_param_t);
-
 int tmem_compress_from_client(xen_pfn_t, void **, size_t *,
 			     tmem_cli_va_param_t);
 
 int tmem_copy_from_client(struct page_info *, xen_pfn_t, tmem_cli_va_param_t);
-
 int tmem_copy_to_client(xen_pfn_t, struct page_info *, tmem_cli_va_param_t);
-
 extern int tmem_copy_tze_to_client(xen_pfn_t cmfn, void *tmem_va, pagesize_t len);
-extern void *tmem_persistent_pool_page_get(unsigned long size);
-extern void tmem_persistent_pool_page_put(void *page_va);
 
 #define tmem_client_err(fmt, args...)  printk(XENLOG_G_ERR fmt, ##args)
 #define tmem_client_warn(fmt, args...) printk(XENLOG_G_WARNING fmt, ##args)
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 10/15] tmem: cleanup: __tmem_alloc_page: drop unneed parameters
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (8 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 09/15] tmem: cleanup: refactor the alloc/free path Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file Bob Liu
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

The two parameters of __tmem_alloc_page() can be dropped.
tmem_called_from_tmem() was also dropped by this patch in favour of
checking MEMF_tmem directly.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |   11 +++++------
 xen/include/xen/tmem_xen.h |   11 +++++------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 07d62d7..5bf5f04 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -303,7 +303,7 @@ static struct page_info *tmem_alloc_page(struct tmem_pool *pool)
     if ( pool != NULL && is_persistent(pool) )
         pfp = __tmem_alloc_page_thispool(pool->client->domain);
     else
-        pfp = __tmem_alloc_page(pool,0);
+        pfp = __tmem_alloc_page();
     if ( pfp == NULL )
         alloc_page_failed++;
     else
@@ -326,9 +326,8 @@ static noinline void *tmem_mempool_page_get(unsigned long size)
     struct page_info *pi;
 
     ASSERT(size == PAGE_SIZE);
-    if ( (pi = __tmem_alloc_page(NULL,0)) == NULL )
+    if ( (pi = __tmem_alloc_page()) == NULL )
         return NULL;
-    ASSERT(IS_VALID_PAGE(pi));
     return page_to_virt(pi);
 }
 
@@ -2780,10 +2779,10 @@ void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
         return NULL;
     }
 
-    if ( tmem_called_from_tmem(memflags) )
+    if ( memflags & MEMF_tmem )
         read_lock(&tmem_rwlock);
 
-    while ( (pfp = __tmem_alloc_page(NULL,1)) == NULL )
+    while ( (pfp = tmem_page_list_get()) == NULL )
     {
         if ( (max_evictions-- <= 0) || !tmem_evict())
             break;
@@ -2798,7 +2797,7 @@ void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
         relinq_pgs++;
     }
 
-    if ( tmem_called_from_tmem(memflags) )
+    if ( memflags & MEMF_tmem )
         read_unlock(&tmem_rwlock);
 
     return pfp;
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 073fc1b..9cfa73f 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -144,15 +144,16 @@ static inline void __tmem_free_page_thispool(struct page_info *pi)
 /*
  * Memory allocation for ephemeral (non-persistent) data
  */
-static inline struct page_info *__tmem_alloc_page(void *pool, int no_heap)
+static inline struct page_info *__tmem_alloc_page(void)
 {
     struct page_info *pi = tmem_page_list_get();
 
-    if ( pi == NULL && !no_heap )
+    if ( pi == NULL)
         pi = alloc_domheap_pages(0,0,MEMF_tmem);
-    ASSERT((pi == NULL) || IS_VALID_PAGE(pi));
-    if ( pi != NULL && !no_heap )
+
+    if ( pi )
         atomic_inc(&freeable_page_count);
+    ASSERT((pi == NULL) || IS_VALID_PAGE(pi));
     return pi;
 }
 
@@ -168,8 +169,6 @@ static inline unsigned long tmem_free_mb(void)
     return (tmem_page_list_pages + total_free_pages()) >> (20 - PAGE_SHIFT);
 }
 
-#define tmem_called_from_tmem(_memflags) (_memflags & MEMF_tmem)
-
 /*  "Client" (==domain) abstraction */
 
 struct client;
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (9 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 10/15] tmem: cleanup: __tmem_alloc_page: drop unneed parameters Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-13 16:44   ` Konrad Rzeszutek Wilk
  2013-12-12 11:05 ` [PATCH v4 12/15] tmem: refator function tmem_ensure_avail_pages() Bob Liu
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

There are several one-line wrapper functions in tmem_xen.h which add no
value; this patch inlines them into tmem.c directly.
Also change 'void *tmem' in struct domain to 'struct client *tmem_client'
in order to make things more straightforward.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/domain.c        |    4 ++--
 xen/common/tmem.c          |   24 ++++++++++++------------
 xen/include/xen/sched.h    |    2 +-
 xen/include/xen/tmem_xen.h |   30 +-----------------------------
 4 files changed, 16 insertions(+), 44 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2cbc489..2636fc9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -528,9 +528,9 @@ int domain_kill(struct domain *d)
         spin_barrier(&d->domain_lock);
         evtchn_destroy(d);
         gnttab_release_mappings(d);
-        tmem_destroy(d->tmem);
+        tmem_destroy(d->tmem_client);
         domain_set_outstanding_pages(d, 0);
-        d->tmem = NULL;
+        d->tmem_client = NULL;
         /* fallthrough */
     case DOMDYING_dying:
         rc = domain_relinquish_resources(d);
diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 5bf5f04..2659651 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1206,7 +1206,7 @@ static struct client *client_create(domid_t cli_id)
         goto fail;
     }
     if ( !d->is_dying ) {
-        d->tmem = client;
+        d->tmem_client = client;
 	client->domain = d;
     }
     rcu_unlock_domain(d);
@@ -1324,7 +1324,7 @@ obj_unlock:
 
 static int tmem_evict(void)
 {
-    struct client *client = tmem_client_from_current();
+    struct client *client = current->domain->tmem_client;
     struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
     struct tmem_object_root *obj;
     struct tmem_pool *pool;
@@ -1761,7 +1761,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
             list_del(&pgp->us.client_eph_pages);
             list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
             spin_unlock(&eph_lists_spinlock);
-            obj->last_client = tmem_get_cli_id_from_current();
+            obj->last_client = current->domain->domain_id;
         }
     }
     if ( obj != NULL )
@@ -1836,7 +1836,7 @@ out:
 
 static int do_tmem_destroy_pool(uint32_t pool_id)
 {
-    struct client *client = tmem_client_from_current();
+    struct client *client = current->domain->tmem_client;
     struct tmem_pool *pool;
 
     if ( client->pools == NULL )
@@ -1867,7 +1867,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
     int i;
 
     if ( this_cli_id == TMEM_CLI_ID_NULL )
-        cli_id = tmem_get_cli_id_from_current();
+        cli_id = current->domain->domain_id;
     else
         cli_id = this_cli_id;
     tmem_client_info("tmem: allocating %s-%s tmem pool for %s=%d...",
@@ -1908,7 +1908,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
     }
     else
     {
-        client = tmem_client_from_current();
+        client = current->domain->tmem_client;
         ASSERT(client != NULL);
         for ( d_poolid = 0; d_poolid < MAX_POOLS_PER_DOMAIN; d_poolid++ )
             if ( client->pools[d_poolid] == NULL )
@@ -2511,7 +2511,7 @@ static int do_tmem_control(struct tmem_op *op)
     uint32_t subop = op->u.ctrl.subop;
     struct oid *oidp = (struct oid *)(&op->u.ctrl.oid[0]);
 
-    if (!tmem_current_is_privileged())
+    if ( xsm_tmem_control(XSM_PRIV) )
         return -EPERM;
 
     switch(subop)
@@ -2583,7 +2583,7 @@ static int do_tmem_control(struct tmem_op *op)
 long do_tmem_op(tmem_cli_op_t uops)
 {
     struct tmem_op op;
-    struct client *client = tmem_client_from_current();
+    struct client *client = current->domain->tmem_client;
     struct tmem_pool *pool = NULL;
     struct oid *oidp;
     int rc = 0;
@@ -2595,12 +2595,12 @@ long do_tmem_op(tmem_cli_op_t uops)
     if ( !tmem_initialized )
         return -ENODEV;
 
-    if ( !tmem_current_permitted() )
+    if ( xsm_tmem_op(XSM_HOOK) )
         return -EPERM;
 
     total_tmem_ops++;
 
-    if ( client != NULL && tmem_client_is_dying(client) )
+    if ( client != NULL && client->domain->is_dying )
     {
         rc = -ENODEV;
  simple_error:
@@ -2640,7 +2640,7 @@ long do_tmem_op(tmem_cli_op_t uops)
     {
         write_lock(&tmem_rwlock);
         write_lock_set = 1;
-        if ( (client = client_create(tmem_get_cli_id_from_current())) == NULL )
+        if ( (client = client_create(current->domain->domain_id)) == NULL )
         {
             tmem_client_err("tmem: can't create tmem structure for %s\n",
                            tmem_client_str);
@@ -2732,7 +2732,7 @@ void tmem_destroy(void *v)
     if ( client == NULL )
         return;
 
-    if ( !tmem_client_is_dying(client) )
+    if ( !client->domain->is_dying )
     {
         printk("tmem: tmem_destroy can only destroy dying client\n");
         return;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cbdf377..53ad32f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -400,7 +400,7 @@ struct domain
     spinlock_t hypercall_deadlock_mutex;
 
     /* transcendent memory, auto-allocated on first tmem op by each domain */
-    void *tmem;
+    struct client *tmem_client;
 
     struct lock_profile_qhead profile_head;
 
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 9cfa73f..11f4c2d 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -171,45 +171,17 @@ static inline unsigned long tmem_free_mb(void)
 
 /*  "Client" (==domain) abstraction */
 
-struct client;
 static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
 {
     struct client *c;
     struct domain *d = rcu_lock_domain_by_id(cli_id);
     if (d == NULL)
         return NULL;
-    c = (struct client *)(d->tmem);
+    c = d->tmem_client;
     rcu_unlock_domain(d);
     return c;
 }
 
-static inline struct client *tmem_client_from_current(void)
-{
-    return (struct client *)(current->domain->tmem);
-}
-
-#define tmem_client_is_dying(_client) (!!_client->domain->is_dying)
-
-static inline domid_t tmem_get_cli_id_from_current(void)
-{
-    return current->domain->domain_id;
-}
-
-static inline struct domain *tmem_get_cli_ptr_from_current(void)
-{
-    return current->domain;
-}
-
-static inline bool_t tmem_current_permitted(void)
-{
-    return !xsm_tmem_op(XSM_HOOK);
-}
-
-static inline bool_t tmem_current_is_privileged(void)
-{
-    return !xsm_tmem_control(XSM_PRIV);
-}
-
 static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
 {
     const uint8_t *p = __map_domain_page(pfp);
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 12/15] tmem: refator function tmem_ensure_avail_pages()
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (10 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 13/15] tmem: cleanup: rename tmem_relinquish_npages() Bob Liu
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

tmem_ensure_avail_pages() doesn't return a value, which is incorrect
because the caller needs to confirm whether enough memory is available.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |   32 ++++++++++++++++++++------------
 xen/include/xen/tmem_xen.h |    6 ------
 2 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 2659651..685efef 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1409,22 +1409,28 @@ static unsigned long tmem_relinquish_npages(unsigned long n)
     return avail_pages;
 }
 
-/* Under certain conditions (e.g. if each client is putting pages for exactly
+/*
+ * Under certain conditions (e.g. if each client is putting pages for exactly
  * one object), once locks are held, freeing up memory may
  * result in livelocks and very long "put" times, so we try to ensure there
  * is a minimum amount of memory (1MB) available BEFORE any data structure
- * locks are held */
-static inline void tmem_ensure_avail_pages(void)
+ * locks are held.
+ */
+static inline bool_t tmem_ensure_avail_pages(void)
 {
     int failed_evict = 10;
+    unsigned long free_mem;
 
-    while ( !tmem_free_mb() )
-    {
-        if ( tmem_evict() )
-            continue;
-        else if ( failed_evict-- <= 0 )
-            break;
-    }
+    do {
+        free_mem = (tmem_page_list_pages + total_free_pages())
+                        >> (20 - PAGE_SHIFT);
+        if ( free_mem )
+            return 1;
+        if ( !tmem_evict() )
+            failed_evict--;
+    } while ( failed_evict > 0 );
+
+    return 0;
 }
 
 /************ TMEM CORE OPERATIONS ************************************/
@@ -2681,9 +2687,11 @@ long do_tmem_op(tmem_cli_op_t uops)
                               op.u.creat.uuid[0], op.u.creat.uuid[1]);
         break;
     case TMEM_PUT_PAGE:
-        tmem_ensure_avail_pages();
-        rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
+        if (tmem_ensure_avail_pages())
+            rc = do_tmem_put(pool, oidp, op.u.gen.index, op.u.gen.cmfn,
                         tmem_cli_buf_null);
+        else
+            rc = -ENOMEM;
         if (rc == 1) succ_put = 1;
         else non_succ_put = 1;
         break;
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 11f4c2d..4e6c234 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -164,13 +164,7 @@ static inline void __tmem_free_page(struct page_info *pi)
     atomic_dec(&freeable_page_count);
 }
 
-static inline unsigned long tmem_free_mb(void)
-{
-    return (tmem_page_list_pages + total_free_pages()) >> (20 - PAGE_SHIFT);
-}
-
 /*  "Client" (==domain) abstraction */
-
 static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
 {
     struct client *c;
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 13/15] tmem: cleanup: rename tmem_relinquish_npages()
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (11 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 12/15] tmem: refator function tmem_ensure_avail_pages() Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 14/15] tmem: cleanup: rm unused tmem_freeze_all() Bob Liu
  2013-12-12 11:05 ` [PATCH v4 15/15] tmem: check the return value of copy to guest Bob Liu
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Rename tmem_relinquish_npages() to tmem_flush_npages() to
distinguish it from tmem_relinquish_pages().

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 685efef..9269ebf 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -1383,7 +1383,7 @@ out:
     return ret;
 }
 
-static unsigned long tmem_relinquish_npages(unsigned long n)
+static unsigned long tmem_flush_npages(unsigned long n)
 {
     unsigned long avail_pages = 0;
 
@@ -2028,7 +2028,7 @@ static int tmemc_flush_mem(domid_t cli_id, uint32_t kb)
     }
     /* convert kb to pages, rounding up if necessary */
     npages = (kb + ((1 << (PAGE_SHIFT-10))-1)) >> (PAGE_SHIFT-10);
-    flushed_pages = tmem_relinquish_npages(npages);
+    flushed_pages = tmem_flush_npages(npages);
     flushed_kb = flushed_pages << (PAGE_SHIFT-10);
     return flushed_kb;
 }
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 14/15] tmem: cleanup: rm unused tmem_freeze_all()
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (12 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 13/15] tmem: cleanup: rename tmem_relinquish_npages() Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  2013-12-12 11:05 ` [PATCH v4 15/15] tmem: check the return value of copy to guest Bob Liu
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Nobody uses tmem_freeze_all(), so remove it.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c |   13 -------------
 1 file changed, 13 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index 9269ebf..fc75229 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2755,19 +2755,6 @@ void tmem_destroy(void *v)
     write_unlock(&tmem_rwlock);
 }
 
-/* freezing all pools guarantees that no additional memory will be consumed */
-void tmem_freeze_all(unsigned char key)
-{
-    static int freeze = 0;
-
-    write_lock(&tmem_rwlock);
-
-    freeze = !freeze;
-    tmemc_freeze_pools(TMEM_CLI_ID_NULL,freeze);
-
-    write_unlock(&tmem_rwlock);
-}
-
 #define MAX_EVICTS 10  /* should be variable or set via TMEMC_ ?? */
 void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
 {
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v4 15/15] tmem: check the return value of copy to guest
  2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
                   ` (13 preceding siblings ...)
  2013-12-12 11:05 ` [PATCH v4 14/15] tmem: cleanup: rm unused tmem_freeze_all() Bob Liu
@ 2013-12-12 11:05 ` Bob Liu
  14 siblings, 0 replies; 23+ messages in thread
From: Bob Liu @ 2013-12-12 11:05 UTC (permalink / raw)
  To: xen-devel; +Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich

Use copy_to_guest_offset()/copy_to_guest() directly and check their
return values.

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 xen/common/tmem.c          |   34 ++++++++++++++++++++--------------
 xen/include/xen/tmem_xen.h |   14 --------------
 2 files changed, 20 insertions(+), 28 deletions(-)

diff --git a/xen/common/tmem.c b/xen/common/tmem.c
index fc75229..d9e912b 100644
--- a/xen/common/tmem.c
+++ b/xen/common/tmem.c
@@ -2063,8 +2063,8 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
              c->eph_count, c->eph_count_max,
              c->compressed_pages, c->compressed_sum_size,
              c->compress_poor, c->compress_nomem);
-    tmem_copy_to_client_buf_offset(buf,off+sum,info,n+1);
-    sum += n;
+    if ( !copy_to_guest_offset(buf, off + sum, info, n + 1) )
+        sum += n;
     for ( i = 0; i < MAX_POOLS_PER_DOMAIN; i++ )
     {
         if ( (p = c->pools[i]) == NULL )
@@ -2091,8 +2091,8 @@ static int tmemc_list_client(struct client *c, tmem_cli_va_param_t buf,
              p->flushs_found, p->flushs, p->flush_objs_found, p->flush_objs);
         if ( sum + n >= len )
             return sum;
-        tmem_copy_to_client_buf_offset(buf,off+sum,info,n+1);
-        sum += n;
+        if ( !copy_to_guest_offset(buf, off + sum, info, n + 1) )
+            sum += n;
     }
     return sum;
 }
@@ -2130,8 +2130,8 @@ static int tmemc_list_shared(tmem_cli_va_param_t buf, int off, uint32_t len,
              p->flushs_found, p->flushs, p->flush_objs_found, p->flush_objs);
         if ( sum + n >= len )
             return sum;
-        tmem_copy_to_client_buf_offset(buf,off+sum,info,n+1);
-        sum += n;
+        if ( !copy_to_guest_offset(buf, off + sum, info, n + 1) )
+            sum += n;
     }
     return sum;
 }
@@ -2147,8 +2147,8 @@ static int tmemc_list_global_perf(tmem_cli_va_param_t buf, int off,
     n += scnprintf(info+n,BSIZE-n,"\n");
     if ( sum + n >= len )
         return sum;
-    tmem_copy_to_client_buf_offset(buf,off+sum,info,n+1);
-    sum += n;
+    if ( !copy_to_guest_offset(buf, off + sum, info, n + 1) )
+        sum += n;
     return sum;
 }
 
@@ -2179,8 +2179,8 @@ static int tmemc_list_global(tmem_cli_va_param_t buf, int off, uint32_t len,
          tot_good_eph_puts,deduped_puts,pcd_tot_tze_size,pcd_tot_csize);
     if ( sum + n >= len )
         return sum;
-    tmem_copy_to_client_buf_offset(buf,off+sum,info,n+1);
-    sum += n;
+    if ( !copy_to_guest_offset(buf, off + sum, info, n + 1) )
+        sum += n;
     return sum;
 }
 
@@ -2366,8 +2366,9 @@ static int tmemc_save_subop(int cli_id, uint32_t pool_id,
     case TMEMC_SAVE_GET_POOL_UUID:
          if ( pool == NULL )
              break;
-        tmem_copy_to_client_buf(buf, pool->uuid, 2);
         rc = 0;
+        if ( copy_to_guest(guest_handle_cast(buf, void), pool->uuid, 2) )
+            rc = -EFAULT;
         break;
     case TMEMC_SAVE_END:
         if ( client == NULL )
@@ -2430,8 +2431,12 @@ static int tmemc_save_get_next_page(int cli_id, uint32_t pool_id,
     BUILD_BUG_ON(sizeof(h.oid) != sizeof(oid));
     memcpy(h.oid, oid.oid, sizeof(h.oid));
     h.index = pgp->index;
-    tmem_copy_to_client_buf(buf, &h, 1);
-    tmem_client_buf_add(buf, sizeof(h));
+    if ( copy_to_guest(guest_handle_cast(buf, void), &h, 1) )
+    {
+        ret = -EFAULT;
+        goto out;
+    }
+    guest_handle_add_offset(buf, sizeof(h));
     ret = do_tmem_get(pool, &oid, pgp->index, 0, buf);
 
 out:
@@ -2474,8 +2479,9 @@ static int tmemc_save_get_next_inv(int cli_id, tmem_cli_va_param_t buf,
     BUILD_BUG_ON(sizeof(h.oid) != sizeof(pgp->inv_oid));
     memcpy(h.oid, pgp->inv_oid.oid, sizeof(h.oid));
     h.index = pgp->index;
-    tmem_copy_to_client_buf(buf, &h, 1);
     ret = 1;
+    if ( copy_to_guest(guest_handle_cast(buf, void), &h, 1) )
+        ret = -EFAULT;
 out:
     spin_unlock(&pers_lists_spinlock);
     return ret;
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index 4e6c234..885ee21 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -313,21 +313,7 @@ static inline int tmem_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 }
 
 #define tmem_cli_buf_null guest_handle_from_ptr(NULL, char)
-
-static inline void tmem_copy_to_client_buf_offset(tmem_cli_va_param_t clibuf,
-						 int off,
-						 char *tmembuf, int len)
-{
-    copy_to_guest_offset(clibuf,off,tmembuf,len);
-}
-
-#define tmem_copy_to_client_buf(clibuf, tmembuf, cnt) \
-    copy_to_guest(guest_handle_cast(clibuf, void), tmembuf, cnt)
-
-#define tmem_client_buf_add guest_handle_add_offset
-
 #define TMEM_CLI_ID_NULL ((domid_t)((domid_t)-1L))
-
 #define tmem_cli_id_str "domid"
 #define tmem_client_str "domain"
 
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 23+ messages in thread
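
The fixed tmemc_list_* hunks above all share one discipline: `sum` is advanced only when `copy_to_guest_offset()` actually succeeded, so a faulting guest buffer can no longer inflate the reported length. A minimal self-contained C model of that pattern (the `copy_to_guest_offset()` stub and `list_one()` helper below are illustrative stand-ins, not Xen's real primitives):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for Xen's guest-copy primitive: like the real
 * one, it returns the number of bytes NOT copied, i.e. 0 on success. */
static char guest_buf[256];
static int fault_injected;

static int copy_to_guest_offset(char *buf, int off, const char *src, int len)
{
    if (fault_injected)
        return len;            /* simulate a guest page fault */
    memcpy(buf + off, src, len);
    return 0;
}

/* Mirrors the fixed tmemc_list_* pattern: advance 'sum' only if the
 * copy actually reached the guest, so a fault cannot over-report the
 * number of bytes written to the buffer. */
static int list_one(char *buf, int off, int len, int sum, const char *info)
{
    int n = (int)strlen(info);

    if (sum + n >= len)
        return sum;
    if (!copy_to_guest_offset(buf, off + sum, info, n + 1))
        sum += n;
    return sum;
}
```

On a fault the caller simply sees an unchanged `sum`, matching the patch's behaviour of silently truncating the listing rather than returning an error from these stat paths.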

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2013-12-12 11:05 ` [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file Bob Liu
@ 2013-12-13 16:44   ` Konrad Rzeszutek Wilk
  2014-01-07 10:44     ` Bob Liu
  2014-01-07 15:51     ` Keir Fraser
  0 siblings, 2 replies; 23+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-12-13 16:44 UTC (permalink / raw)
  To: Bob Liu, Keir Fraser
  Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich, xen-devel

On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> There are several one-line functions in tmem_xen.h which are useless; this
> patch embeds them into tmem.c directly.
> Also modify void *tmem in struct domain to struct client *tmem_client in order
> to make things more straightforward.
> 
> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> ---
>  xen/common/domain.c        |    4 ++--
>  xen/common/tmem.c          |   24 ++++++++++++------------
>  xen/include/xen/sched.h    |    2 +-
>  xen/include/xen/tmem_xen.h |   30 +-----------------------------

Keir, are you OK with this simple name change?

Thanks!
>  4 files changed, 16 insertions(+), 44 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 2cbc489..2636fc9 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -528,9 +528,9 @@ int domain_kill(struct domain *d)
>          spin_barrier(&d->domain_lock);
>          evtchn_destroy(d);
>          gnttab_release_mappings(d);
> -        tmem_destroy(d->tmem);
> +        tmem_destroy(d->tmem_client);
>          domain_set_outstanding_pages(d, 0);
> -        d->tmem = NULL;
> +        d->tmem_client = NULL;
>          /* fallthrough */
>      case DOMDYING_dying:
>          rc = domain_relinquish_resources(d);
> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
> index 5bf5f04..2659651 100644
> --- a/xen/common/tmem.c
> +++ b/xen/common/tmem.c
> @@ -1206,7 +1206,7 @@ static struct client *client_create(domid_t cli_id)
>          goto fail;
>      }
>      if ( !d->is_dying ) {
> -        d->tmem = client;
> +        d->tmem_client = client;
>  	client->domain = d;
>      }
>      rcu_unlock_domain(d);
> @@ -1324,7 +1324,7 @@ obj_unlock:
>  
>  static int tmem_evict(void)
>  {
> -    struct client *client = tmem_client_from_current();
> +    struct client *client = current->domain->tmem_client;
>      struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
>      struct tmem_object_root *obj;
>      struct tmem_pool *pool;
> @@ -1761,7 +1761,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct oid *oidp, uint32_t index,
>              list_del(&pgp->us.client_eph_pages);
>              list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
>              spin_unlock(&eph_lists_spinlock);
> -            obj->last_client = tmem_get_cli_id_from_current();
> +            obj->last_client = current->domain->domain_id;
>          }
>      }
>      if ( obj != NULL )
> @@ -1836,7 +1836,7 @@ out:
>  
>  static int do_tmem_destroy_pool(uint32_t pool_id)
>  {
> -    struct client *client = tmem_client_from_current();
> +    struct client *client = current->domain->tmem_client;
>      struct tmem_pool *pool;
>  
>      if ( client->pools == NULL )
> @@ -1867,7 +1867,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>      int i;
>  
>      if ( this_cli_id == TMEM_CLI_ID_NULL )
> -        cli_id = tmem_get_cli_id_from_current();
> +        cli_id = current->domain->domain_id;
>      else
>          cli_id = this_cli_id;
>      tmem_client_info("tmem: allocating %s-%s tmem pool for %s=%d...",
> @@ -1908,7 +1908,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>      }
>      else
>      {
> -        client = tmem_client_from_current();
> +        client = current->domain->tmem_client;
>          ASSERT(client != NULL);
>          for ( d_poolid = 0; d_poolid < MAX_POOLS_PER_DOMAIN; d_poolid++ )
>              if ( client->pools[d_poolid] == NULL )
> @@ -2511,7 +2511,7 @@ static int do_tmem_control(struct tmem_op *op)
>      uint32_t subop = op->u.ctrl.subop;
>      struct oid *oidp = (struct oid *)(&op->u.ctrl.oid[0]);
>  
> -    if (!tmem_current_is_privileged())
> +    if ( xsm_tmem_control(XSM_PRIV) )
>          return -EPERM;
>  
>      switch(subop)
> @@ -2583,7 +2583,7 @@ static int do_tmem_control(struct tmem_op *op)
>  long do_tmem_op(tmem_cli_op_t uops)
>  {
>      struct tmem_op op;
> -    struct client *client = tmem_client_from_current();
> +    struct client *client = current->domain->tmem_client;
>      struct tmem_pool *pool = NULL;
>      struct oid *oidp;
>      int rc = 0;
> @@ -2595,12 +2595,12 @@ long do_tmem_op(tmem_cli_op_t uops)
>      if ( !tmem_initialized )
>          return -ENODEV;
>  
> -    if ( !tmem_current_permitted() )
> +    if ( xsm_tmem_op(XSM_HOOK) )
>          return -EPERM;
>  
>      total_tmem_ops++;
>  
> -    if ( client != NULL && tmem_client_is_dying(client) )
> +    if ( client != NULL && client->domain->is_dying )
>      {
>          rc = -ENODEV;
>   simple_error:
> @@ -2640,7 +2640,7 @@ long do_tmem_op(tmem_cli_op_t uops)
>      {
>          write_lock(&tmem_rwlock);
>          write_lock_set = 1;
> -        if ( (client = client_create(tmem_get_cli_id_from_current())) == NULL )
> +        if ( (client = client_create(current->domain->domain_id)) == NULL )
>          {
>              tmem_client_err("tmem: can't create tmem structure for %s\n",
>                             tmem_client_str);
> @@ -2732,7 +2732,7 @@ void tmem_destroy(void *v)
>      if ( client == NULL )
>          return;
>  
> -    if ( !tmem_client_is_dying(client) )
> +    if ( !client->domain->is_dying )
>      {
>          printk("tmem: tmem_destroy can only destroy dying client\n");
>          return;
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index cbdf377..53ad32f 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -400,7 +400,7 @@ struct domain
>      spinlock_t hypercall_deadlock_mutex;
>  
>      /* transcendent memory, auto-allocated on first tmem op by each domain */
> -    void *tmem;
> +    struct client *tmem_client;
>  
>      struct lock_profile_qhead profile_head;
>  
> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
> index 9cfa73f..11f4c2d 100644
> --- a/xen/include/xen/tmem_xen.h
> +++ b/xen/include/xen/tmem_xen.h
> @@ -171,45 +171,17 @@ static inline unsigned long tmem_free_mb(void)
>  
>  /*  "Client" (==domain) abstraction */
>  
> -struct client;
>  static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
>  {
>      struct client *c;
>      struct domain *d = rcu_lock_domain_by_id(cli_id);
>      if (d == NULL)
>          return NULL;
> -    c = (struct client *)(d->tmem);
> +    c = d->tmem_client;
>      rcu_unlock_domain(d);
>      return c;
>  }
>  
> -static inline struct client *tmem_client_from_current(void)
> -{
> -    return (struct client *)(current->domain->tmem);
> -}
> -
> -#define tmem_client_is_dying(_client) (!!_client->domain->is_dying)
> -
> -static inline domid_t tmem_get_cli_id_from_current(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
> -static inline struct domain *tmem_get_cli_ptr_from_current(void)
> -{
> -    return current->domain;
> -}
> -
> -static inline bool_t tmem_current_permitted(void)
> -{
> -    return !xsm_tmem_op(XSM_HOOK);
> -}
> -
> -static inline bool_t tmem_current_is_privileged(void)
> -{
> -    return !xsm_tmem_control(XSM_PRIV);
> -}
> -
>  static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
>  {
>      const uint8_t *p = __map_domain_page(pfp);
> -- 
> 1.7.10.4
> 

^ permalink raw reply	[flat|nested] 23+ messages in thread
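
The `void *tmem` to `struct client *tmem_client` change in the quoted patch is a plain type-safety cleanup: the untyped pointer forced a cast at every use site (and motivated one-line cast helpers like `tmem_client_from_current()`), while a typed field lets the compiler check each access directly. A small standalone illustration (hypothetical structs, not the real Xen `struct domain`):

```c
#include <assert.h>

struct client { int id; };

/* Before the patch: an untyped field forces a cast at every use site. */
struct domain_old { void *tmem; };

/* After the patch: the field carries its real type, so the compiler
 * checks every access and the one-line cast helpers become unnecessary. */
struct domain_new { struct client *tmem_client; };

static int client_id_old(struct domain_old *d)
{
    return ((struct client *)d->tmem)->id;  /* cast needed, unchecked */
}

static int client_id_new(struct domain_new *d)
{
    return d->tmem_client->id;              /* direct, type-checked */
}
```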

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2013-12-13 16:44   ` Konrad Rzeszutek Wilk
@ 2014-01-07 10:44     ` Bob Liu
  2014-01-07 14:27       ` Konrad Rzeszutek Wilk
  2014-01-07 15:51     ` Keir Fraser
  1 sibling, 1 reply; 23+ messages in thread
From: Bob Liu @ 2014-01-07 10:44 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Bob Liu, Keir Fraser, ian.campbell, andrew.cooper3, james.harper,
	JBeulich, xen-devel


On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> There are several one-line functions in tmem_xen.h which are useless; this
>> patch embeds them into tmem.c directly.
>> Also modify void *tmem in struct domain to struct client *tmem_client in order
>> to make things more straightforward.
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/domain.c        |    4 ++--
>>  xen/common/tmem.c          |   24 ++++++++++++------------
>>  xen/include/xen/sched.h    |    2 +-
>>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> 
> Keir, are you OK with this simple name change?
> 

Ping..

Thanks!
.. and Happy New Year
-Bob

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2014-01-07 10:44     ` Bob Liu
@ 2014-01-07 14:27       ` Konrad Rzeszutek Wilk
  2014-01-07 14:32         ` Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-01-07 14:27 UTC (permalink / raw)
  To: Bob Liu
  Cc: Bob Liu, Keir Fraser, ian.campbell, andrew.cooper3, james.harper,
	JBeulich, xen-devel

On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> 
> On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> >> There are several one-line functions in tmem_xen.h which are useless; this
> >> patch embeds them into tmem.c directly.
> >> Also modify void *tmem in struct domain to struct client *tmem_client in order
> >> to make things more straightforward.
> >>
> >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> >> ---
> >>  xen/common/domain.c        |    4 ++--
> >>  xen/common/tmem.c          |   24 ++++++++++++------------
> >>  xen/include/xen/sched.h    |    2 +-
> >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> > 
> > Keir, are you OK with this simple name change?
> > 
> 
> Ping..

Let's make sure his email is on the 'To' line (I don't see it
in my email?)
> 
> Thanks!
> .. and Happy New Year
> -Bob

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2014-01-07 14:27       ` Konrad Rzeszutek Wilk
@ 2014-01-07 14:32         ` Ian Campbell
  2014-01-07 14:44           ` Jan Beulich
  0 siblings, 1 reply; 23+ messages in thread
From: Ian Campbell @ 2014-01-07 14:32 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Bob Liu, keir, andrew.cooper3, james.harper, JBeulich, xen-devel

On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> > 
> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> > >> There are several one-line functions in tmem_xen.h which are useless; this
> > >> patch embeds them into tmem.c directly.
> > >> Also modify void *tmem in struct domain to struct client *tmem_client in order
> > >> to make things more straightforward.
> > >>
> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> > >> ---
> > >>  xen/common/domain.c        |    4 ++--
> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
> > >>  xen/include/xen/sched.h    |    2 +-
> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> > > 
> > > Keir, are you OK with this simple name change?
> > > 
> > 
> > Ping..
> 
> Let's make sure his email is on the 'To' line (I don't see it
> in my email?)

I haven't reviewed the patch or anything but it says "cleanup" -- I
think we are past that point of the release process, aren't we?

Ian.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2014-01-07 14:32         ` Ian Campbell
@ 2014-01-07 14:44           ` Jan Beulich
  2014-01-07 14:52             ` Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Jan Beulich @ 2014-01-07 14:44 UTC (permalink / raw)
  To: Ian Campbell, Konrad Rzeszutek Wilk
  Cc: Bob Liu, keir, andrew.cooper3, james.harper, xen-devel

>>> On 07.01.14 at 15:32, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
>> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
>> > 
>> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
>> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> > >> There are several one-line functions in tmem_xen.h which are useless;
>> > >> this patch embeds them into tmem.c directly.
>> > >> Also modify void *tmem in struct domain to struct client *tmem_client
>> > >> in order to make things more straightforward.
>> > >>
>> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> > >> ---
>> > >>  xen/common/domain.c        |    4 ++--
>> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
>> > >>  xen/include/xen/sched.h    |    2 +-
>> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
>> > > 
>> > > Keir, are you OK with this simple name change?
>> > > 
>> > 
>> > Ping..
>> 
>> Let's make sure his email is on the 'To' line (I don't see it
>> in my email?)
> 
> I haven't reviewed the patch or anything but it says "cleanup" -- I
> think we are past that point of the release process, aren't we?

It has been pending for quite a while, and tmem isn't critical code,
so I would tend towards taking it if we can get Keir's ack (if there
wasn't that relatively trivial change to common code, I would have
pulled the set already).

Jan

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2014-01-07 14:44           ` Jan Beulich
@ 2014-01-07 14:52             ` Ian Campbell
  0 siblings, 0 replies; 23+ messages in thread
From: Ian Campbell @ 2014-01-07 14:52 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Bob Liu, keir, andrew.cooper3, james.harper, xen-devel

On Tue, 2014-01-07 at 14:44 +0000, Jan Beulich wrote:
> >>> On 07.01.14 at 15:32, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-07 at 09:27 -0500, Konrad Rzeszutek Wilk wrote:
> >> On Tue, Jan 07, 2014 at 06:44:51PM +0800, Bob Liu wrote:
> >> > 
> >> > On 12/14/2013 12:44 AM, Konrad Rzeszutek Wilk wrote:
> >> > > On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
> >> > >> There are several one-line functions in tmem_xen.h which are useless;
> >> > >> this patch embeds them into tmem.c directly.
> >> > >> Also modify void *tmem in struct domain to struct client *tmem_client
> >> > >> in order to make things more straightforward.
> >> > >>
> >> > >> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> >> > >> ---
> >> > >>  xen/common/domain.c        |    4 ++--
> >> > >>  xen/common/tmem.c          |   24 ++++++++++++------------
> >> > >>  xen/include/xen/sched.h    |    2 +-
> >> > >>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> >> > > 
> >> > > Keir, are you OK with this simple name change?
> >> > > 
> >> > 
> >> > Ping..
> >> 
> >> Let's make sure his email is on the 'To' line (I don't see it
> >> in my email?)
> > 
> > I haven't reviewed the patch or anything but it says "cleanup" -- I
> > think we are past that point of the release process, aren't we?
> 
> It has been pending for quite a while, and tmem isn't critical code,
> so I would tend towards taking it if we can get Keir's ack (if there
> wasn't that relatively trivial change to common code, I would have
> pulled the set already).

OK, I'm happy to follow your lead on that decision.

Ian.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file
  2013-12-13 16:44   ` Konrad Rzeszutek Wilk
  2014-01-07 10:44     ` Bob Liu
@ 2014-01-07 15:51     ` Keir Fraser
  1 sibling, 0 replies; 23+ messages in thread
From: Keir Fraser @ 2014-01-07 15:51 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Bob Liu
  Cc: james.harper, ian.campbell, andrew.cooper3, JBeulich, xen-devel

On 13/12/2013 16:44, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:

> On Thu, Dec 12, 2013 at 07:05:11PM +0800, Bob Liu wrote:
>> There are several one-line functions in tmem_xen.h which are useless; this
>> patch embeds them into tmem.c directly.
>> Also modify void *tmem in struct domain to struct client *tmem_client in
>> order
>> to make things more straightforward.
>> 
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/domain.c        |    4 ++--
>>  xen/common/tmem.c          |   24 ++++++++++++------------
>>  xen/include/xen/sched.h    |    2 +-
>>  xen/include/xen/tmem_xen.h |   30 +-----------------------------
> 
> Keir, are you OK with this simple name change?

Yes.

Acked-by: Keir Fraser <keir@xen.org>

> Thanks!
>>  4 files changed, 16 insertions(+), 44 deletions(-)
>> 
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 2cbc489..2636fc9 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -528,9 +528,9 @@ int domain_kill(struct domain *d)
>>          spin_barrier(&d->domain_lock);
>>          evtchn_destroy(d);
>>          gnttab_release_mappings(d);
>> -        tmem_destroy(d->tmem);
>> +        tmem_destroy(d->tmem_client);
>>          domain_set_outstanding_pages(d, 0);
>> -        d->tmem = NULL;
>> +        d->tmem_client = NULL;
>>          /* fallthrough */
>>      case DOMDYING_dying:
>>          rc = domain_relinquish_resources(d);
>> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
>> index 5bf5f04..2659651 100644
>> --- a/xen/common/tmem.c
>> +++ b/xen/common/tmem.c
>> @@ -1206,7 +1206,7 @@ static struct client *client_create(domid_t cli_id)
>>          goto fail;
>>      }
>>      if ( !d->is_dying ) {
>> -        d->tmem = client;
>> +        d->tmem_client = client;
>> client->domain = d;
>>      }
>>      rcu_unlock_domain(d);
>> @@ -1324,7 +1324,7 @@ obj_unlock:
>>  
>>  static int tmem_evict(void)
>>  {
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_page_descriptor *pgp = NULL, *pgp2, *pgp_del;
>>      struct tmem_object_root *obj;
>>      struct tmem_pool *pool;
>> @@ -1761,7 +1761,7 @@ static int do_tmem_get(struct tmem_pool *pool, struct
>> oid *oidp, uint32_t index,
>>              list_del(&pgp->us.client_eph_pages);
>>              
>> list_add_tail(&pgp->us.client_eph_pages,&client->ephemeral_page_list);
>>              spin_unlock(&eph_lists_spinlock);
>> -            obj->last_client = tmem_get_cli_id_from_current();
>> +            obj->last_client = current->domain->domain_id;
>>          }
>>      }
>>      if ( obj != NULL )
>> @@ -1836,7 +1836,7 @@ out:
>>  
>>  static int do_tmem_destroy_pool(uint32_t pool_id)
>>  {
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_pool *pool;
>>  
>>      if ( client->pools == NULL )
>> @@ -1867,7 +1867,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>>      int i;
>>  
>>      if ( this_cli_id == TMEM_CLI_ID_NULL )
>> -        cli_id = tmem_get_cli_id_from_current();
>> +        cli_id = current->domain->domain_id;
>>      else
>>          cli_id = this_cli_id;
>>      tmem_client_info("tmem: allocating %s-%s tmem pool for %s=%d...",
>> @@ -1908,7 +1908,7 @@ static int do_tmem_new_pool(domid_t this_cli_id,
>>      }
>>      else
>>      {
>> -        client = tmem_client_from_current();
>> +        client = current->domain->tmem_client;
>>          ASSERT(client != NULL);
>>          for ( d_poolid = 0; d_poolid < MAX_POOLS_PER_DOMAIN; d_poolid++ )
>>              if ( client->pools[d_poolid] == NULL )
>> @@ -2511,7 +2511,7 @@ static int do_tmem_control(struct tmem_op *op)
>>      uint32_t subop = op->u.ctrl.subop;
>>      struct oid *oidp = (struct oid *)(&op->u.ctrl.oid[0]);
>>  
>> -    if (!tmem_current_is_privileged())
>> +    if ( xsm_tmem_control(XSM_PRIV) )
>>          return -EPERM;
>>  
>>      switch(subop)
>> @@ -2583,7 +2583,7 @@ static int do_tmem_control(struct tmem_op *op)
>>  long do_tmem_op(tmem_cli_op_t uops)
>>  {
>>      struct tmem_op op;
>> -    struct client *client = tmem_client_from_current();
>> +    struct client *client = current->domain->tmem_client;
>>      struct tmem_pool *pool = NULL;
>>      struct oid *oidp;
>>      int rc = 0;
>> @@ -2595,12 +2595,12 @@ long do_tmem_op(tmem_cli_op_t uops)
>>      if ( !tmem_initialized )
>>          return -ENODEV;
>>  
>> -    if ( !tmem_current_permitted() )
>> +    if ( xsm_tmem_op(XSM_HOOK) )
>>          return -EPERM;
>>  
>>      total_tmem_ops++;
>>  
>> -    if ( client != NULL && tmem_client_is_dying(client) )
>> +    if ( client != NULL && client->domain->is_dying )
>>      {
>>          rc = -ENODEV;
>>   simple_error:
>> @@ -2640,7 +2640,7 @@ long do_tmem_op(tmem_cli_op_t uops)
>>      {
>>          write_lock(&tmem_rwlock);
>>          write_lock_set = 1;
>> -        if ( (client = client_create(tmem_get_cli_id_from_current())) ==
>> NULL )
>> +        if ( (client = client_create(current->domain->domain_id)) == NULL )
>>          {
>>              tmem_client_err("tmem: can't create tmem structure for %s\n",
>>                             tmem_client_str);
>> @@ -2732,7 +2732,7 @@ void tmem_destroy(void *v)
>>      if ( client == NULL )
>>          return;
>>  
>> -    if ( !tmem_client_is_dying(client) )
>> +    if ( !client->domain->is_dying )
>>      {
>>          printk("tmem: tmem_destroy can only destroy dying client\n");
>>          return;
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index cbdf377..53ad32f 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -400,7 +400,7 @@ struct domain
>>      spinlock_t hypercall_deadlock_mutex;
>>  
>>      /* transcendent memory, auto-allocated on first tmem op by each domain
>> */
>> -    void *tmem;
>> +    struct client *tmem_client;
>>  
>>      struct lock_profile_qhead profile_head;
>>  
>> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
>> index 9cfa73f..11f4c2d 100644
>> --- a/xen/include/xen/tmem_xen.h
>> +++ b/xen/include/xen/tmem_xen.h
>> @@ -171,45 +171,17 @@ static inline unsigned long tmem_free_mb(void)
>>  
>>  /*  "Client" (==domain) abstraction */
>>  
>> -struct client;
>>  static inline struct client *tmem_client_from_cli_id(domid_t cli_id)
>>  {
>>      struct client *c;
>>      struct domain *d = rcu_lock_domain_by_id(cli_id);
>>      if (d == NULL)
>>          return NULL;
>> -    c = (struct client *)(d->tmem);
>> +    c = d->tmem_client;
>>      rcu_unlock_domain(d);
>>      return c;
>>  }
>>  
>> -static inline struct client *tmem_client_from_current(void)
>> -{
>> -    return (struct client *)(current->domain->tmem);
>> -}
>> -
>> -#define tmem_client_is_dying(_client) (!!_client->domain->is_dying)
>> -
>> -static inline domid_t tmem_get_cli_id_from_current(void)
>> -{
>> -    return current->domain->domain_id;
>> -}
>> -
>> -static inline struct domain *tmem_get_cli_ptr_from_current(void)
>> -{
>> -    return current->domain;
>> -}
>> -
>> -static inline bool_t tmem_current_permitted(void)
>> -{
>> -    return !xsm_tmem_op(XSM_HOOK);
>> -}
>> -
>> -static inline bool_t tmem_current_is_privileged(void)
>> -{
>> -    return !xsm_tmem_control(XSM_PRIV);
>> -}
>> -
>>  static inline uint8_t tmem_get_first_byte(struct page_info *pfp)
>>  {
>>      const uint8_t *p = __map_domain_page(pfp);
>> -- 
>> 1.7.10.4
>> 

^ permalink raw reply	[flat|nested] 23+ messages in thread
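
Patch 15, whose tail appears at the top of this page, converts `tmemc_save_get_next_page()` to a check-then-advance discipline: copy the header to the guest, bail out with -EFAULT on fault, and only on success advance the guest handle before copying the page itself. A rough standalone model (the `guest_handle` struct and `copy_to_guest()` stub are simplified stand-ins for Xen's guest-access primitives):

```c
#include <assert.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of a guest buffer handle that can be advanced,
 * mirroring guest_handle_add_offset() in the patch. */
struct guest_handle { char *base; size_t off; };

static int guest_faults;  /* set to simulate copy_to_guest() failing */

static int copy_to_guest(struct guest_handle *h, const void *src, size_t len)
{
    if (guest_faults)
        return (int)len;               /* bytes not copied */
    memcpy(h->base + h->off, src, len);
    return 0;
}

struct page_header { uint64_t oid[3]; uint32_t index; };

/* Header first; only on success advance the handle and copy the page.
 * A fault on the header copy must not leave the handle advanced. */
static int save_get_next_page(struct guest_handle *buf,
                              const struct page_header *h,
                              const char *page, size_t page_len)
{
    if (copy_to_guest(buf, h, sizeof(*h)))
        return -14;                    /* -EFAULT */
    buf->off += sizeof(*h);            /* guest_handle_add_offset */
    if (copy_to_guest(buf, page, page_len))
        return -14;
    return 0;
}
```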

end of thread, other threads:[~2014-01-07 15:52 UTC | newest]

Thread overview: 23+ messages
2013-12-12 11:05 [PATCH v4 00/15] xen: continue to cleanup tmem Bob Liu
2013-12-12 11:05 ` [PATCH v4 01/15] tmem: cleanup: drop unused sub command Bob Liu
2013-12-12 11:05 ` [PATCH v4 02/15] tmem: cleanup: drop some debug code Bob Liu
2013-12-12 11:05 ` [PATCH v4 03/15] tmem: cleanup: drop useless function 'tmem_copy_page' Bob Liu
2013-12-12 11:05 ` [PATCH v4 04/15] tmem: cleanup: drop useless parameters from put/get page Bob Liu
2013-12-12 11:05 ` [PATCH v4 05/15] tmem: cleanup: reorg function do_tmem_put() Bob Liu
2013-12-12 11:05 ` [PATCH v4 06/15] tmem: drop unneeded is_ephemeral() and is_private() Bob Liu
2013-12-12 11:05 ` [PATCH v4 07/15] tmem: cleanup: rm useless EXPORT/FORWARD define Bob Liu
2013-12-12 11:05 ` [PATCH v4 08/15] tmem: cleanup: drop tmem_lock_all Bob Liu
2013-12-12 11:05 ` [PATCH v4 09/15] tmem: cleanup: refactor the alloc/free path Bob Liu
2013-12-12 11:05 ` [PATCH v4 10/15] tmem: cleanup: __tmem_alloc_page: drop unneed parameters Bob Liu
2013-12-12 11:05 ` [PATCH v4 11/15] tmem: cleanup: drop useless functions from head file Bob Liu
2013-12-13 16:44   ` Konrad Rzeszutek Wilk
2014-01-07 10:44     ` Bob Liu
2014-01-07 14:27       ` Konrad Rzeszutek Wilk
2014-01-07 14:32         ` Ian Campbell
2014-01-07 14:44           ` Jan Beulich
2014-01-07 14:52             ` Ian Campbell
2014-01-07 15:51     ` Keir Fraser
2013-12-12 11:05 ` [PATCH v4 12/15] tmem: refator function tmem_ensure_avail_pages() Bob Liu
2013-12-12 11:05 ` [PATCH v4 13/15] tmem: cleanup: rename tmem_relinquish_npages() Bob Liu
2013-12-12 11:05 ` [PATCH v4 14/15] tmem: cleanup: rm unused tmem_freeze_all() Bob Liu
2013-12-12 11:05 ` [PATCH v4 15/15] tmem: check the return value of copy to guest Bob Liu
