public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write]
@ 2013-01-16  8:08 Joonsoo Kim
  2013-01-16  8:08 ` [PATCH 2/3] staging, zcache: use zs_mem_[read/write] Joonsoo Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Joonsoo Kim @ 2013-01-16  8:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, devel, js1304, Seth Jennings, Konrad Rzeszutek Wilk,
	Xiao Guangrong, Dan Magenheimer, Nitin Gupta, Minchan Kim,
	Joonsoo Kim

If an object lies on a page boundary, zs_map_object() copies the
object's content into a pre-allocated page and returns the virtual
address of that page. If the user instead tells zsmalloc which region
to copy, this copy overhead can be avoided.
This patch implements two APIs that take the copy region directly, so
the memcpy can be done without the extra copy.

For the USE_PGTABLE_MAPPING case, these APIs also avoid the cache
flush and TLB flush overhead.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
These are [RFC] patches because I have not tested them; I don't have a
test environment yet, and only a compile test has been done.
If the feedback is positive, I will set up a test environment and
verify correctness.
These are based on v3.8-rc3.
If a rebase is needed, please tell me which tree I should rebase onto.

diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
index 09a9d35..e3ef5a5 100644
--- a/drivers/staging/zsmalloc/zsmalloc-main.c
+++ b/drivers/staging/zsmalloc/zsmalloc-main.c
@@ -1045,6 +1045,118 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
+void zs_mem_read(struct zs_pool *pool, unsigned long handle,
+			void *dest, unsigned long src_off, size_t n)
+{
+	struct page *page;
+	unsigned long obj_idx, off;
+
+	unsigned int class_idx;
+	enum fullness_group fg;
+	struct size_class *class;
+	struct page *pages[2];
+	int sizes[2];
+	void *addr;
+
+	BUG_ON(!handle);
+
+	/*
+	 * Because we use per-cpu mapping areas shared among the
+	 * pools/users, we can't allow mapping in interrupt context
+	 * because it can corrupt another user's mappings.
+	 */
+	BUG_ON(in_interrupt());
+
+	obj_handle_to_location(handle, &page, &obj_idx);
+	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+	class = &pool->size_class[class_idx];
+	off = obj_idx_to_offset(page, obj_idx, class->size);
+	off += src_off;
+
+	BUG_ON(class->size < n);
+
+	if (off + n <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		addr = kmap_atomic(page);
+		memcpy(dest, addr + off, n);
+		kunmap_atomic(addr);
+		return;
+	}
+
+	/* this object spans two pages */
+	pages[0] = page;
+	pages[1] = get_next_page(page);
+	BUG_ON(!pages[1]);
+
+	sizes[0] = PAGE_SIZE - off;
+	sizes[1] = n - sizes[0];
+
+	addr = kmap_atomic(pages[0]);
+	memcpy(dest, addr + off, sizes[0]);
+	kunmap_atomic(addr);
+
+	addr = kmap_atomic(pages[1]);
+	memcpy(dest + sizes[0], addr, sizes[1]);
+	kunmap_atomic(addr);
+}
+EXPORT_SYMBOL_GPL(zs_mem_read);
+
+void zs_mem_write(struct zs_pool *pool, unsigned long handle,
+			const void *src, unsigned long dest_off, size_t n)
+{
+	struct page *page;
+	unsigned long obj_idx, off;
+
+	unsigned int class_idx;
+	enum fullness_group fg;
+	struct size_class *class;
+	struct page *pages[2];
+	int sizes[2];
+	void *addr;
+
+	BUG_ON(!handle);
+
+	/*
+	 * Because we use per-cpu mapping areas shared among the
+	 * pools/users, we can't allow mapping in interrupt context
+	 * because it can corrupt another user's mappings.
+	 */
+	BUG_ON(in_interrupt());
+
+	obj_handle_to_location(handle, &page, &obj_idx);
+	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
+	class = &pool->size_class[class_idx];
+	off = obj_idx_to_offset(page, obj_idx, class->size);
+	off += dest_off;
+
+	BUG_ON(class->size < n);
+
+	if (off + n <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		addr = kmap_atomic(page);
+		memcpy(addr + off, src, n);
+		kunmap_atomic(addr);
+		return;
+	}
+
+	/* this object spans two pages */
+	pages[0] = page;
+	pages[1] = get_next_page(page);
+	BUG_ON(!pages[1]);
+
+	sizes[0] = PAGE_SIZE - off;
+	sizes[1] = n - sizes[0];
+
+	addr = kmap_atomic(pages[0]);
+	memcpy(addr + off, src, sizes[0]);
+	kunmap_atomic(addr);
+
+	addr = kmap_atomic(pages[1]);
+	memcpy(addr, src + sizes[0], sizes[1]);
+	kunmap_atomic(addr);
+}
+EXPORT_SYMBOL_GPL(zs_mem_write);
+
 u64 zs_get_total_size_bytes(struct zs_pool *pool)
 {
 	int i;
diff --git a/drivers/staging/zsmalloc/zsmalloc.h b/drivers/staging/zsmalloc/zsmalloc.h
index de2e8bf..fb70f00 100644
--- a/drivers/staging/zsmalloc/zsmalloc.h
+++ b/drivers/staging/zsmalloc/zsmalloc.h
@@ -38,6 +38,11 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 			enum zs_mapmode mm);
 void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
 
+void zs_mem_read(struct zs_pool *pool, unsigned long handle,
+			void *dest, unsigned long src_off, size_t n);
+void zs_mem_write(struct zs_pool *pool, unsigned long handle,
+			const void *src, unsigned long dest_off, size_t n);
+
 u64 zs_get_total_size_bytes(struct zs_pool *pool);
 
 #endif
-- 
1.7.9.5



* [PATCH 2/3] staging, zcache: use zs_mem_[read/write]
  2013-01-16  8:08 [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Joonsoo Kim
@ 2013-01-16  8:08 ` Joonsoo Kim
  2013-01-16  8:08 ` [PATCH 3/3] staging, zram: " Joonsoo Kim
  2013-01-16 23:59 ` [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Minchan Kim
  2 siblings, 0 replies; 6+ messages in thread
From: Joonsoo Kim @ 2013-01-16  8:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, devel, js1304, Seth Jennings, Konrad Rzeszutek Wilk,
	Xiao Guangrong, Dan Magenheimer, Nitin Gupta, Minchan Kim,
	Joonsoo Kim

Now we have newly introduced APIs which reduce zsmalloc's copy
overhead for objects on a page boundary, so use them in zcache.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 52b43b7..d1dee76 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -704,7 +704,7 @@ static unsigned long zv_create(struct zs_pool *pool, uint32_t pool_id,
 				struct tmem_oid *oid, uint32_t index,
 				void *cdata, unsigned clen)
 {
-	struct zv_hdr *zv;
+	struct zv_hdr zv;
 	u32 size = clen + sizeof(struct zv_hdr);
 	int chunks = (size + (CHUNK_SIZE - 1)) >> CHUNK_SHIFT;
 	unsigned long handle = 0;
@@ -716,14 +716,13 @@ static unsigned long zv_create(struct zs_pool *pool, uint32_t pool_id,
 		goto out;
 	atomic_inc(&zv_curr_dist_counts[chunks]);
 	atomic_inc(&zv_cumul_dist_counts[chunks]);
-	zv = zs_map_object(pool, handle, ZS_MM_WO);
-	zv->index = index;
-	zv->oid = *oid;
-	zv->pool_id = pool_id;
-	zv->size = clen;
-	SET_SENTINEL(zv, ZVH);
-	memcpy((char *)zv + sizeof(struct zv_hdr), cdata, clen);
-	zs_unmap_object(pool, handle);
+	zv.index = index;
+	zv.oid = *oid;
+	zv.pool_id = pool_id;
+	zv.size = clen;
+	SET_SENTINEL((&zv), ZVH);
+	zs_mem_write(pool, handle, &zv, 0, sizeof(struct zv_hdr));
+	zs_mem_write(pool, handle, cdata, sizeof(struct zv_hdr), clen);
 out:
 	return handle;
 }
@@ -731,15 +730,13 @@ out:
 static void zv_free(struct zs_pool *pool, unsigned long handle)
 {
 	unsigned long flags;
-	struct zv_hdr *zv;
+	struct zv_hdr zv;
 	uint16_t size;
 	int chunks;
 
-	zv = zs_map_object(pool, handle, ZS_MM_RW);
-	ASSERT_SENTINEL(zv, ZVH);
-	size = zv->size + sizeof(struct zv_hdr);
-	INVERT_SENTINEL(zv, ZVH);
-	zs_unmap_object(pool, handle);
+	zs_mem_read(pool, handle, &zv, 0, sizeof(struct zv_hdr));
+	ASSERT_SENTINEL((&zv), ZVH);
+	size = zv.size + sizeof(struct zv_hdr);
 
 	chunks = (size + (CHUNK_SIZE - 1)) >> CHUNK_SHIFT;
 	BUG_ON(chunks >= NCHUNKS);
-- 
1.7.9.5



* [PATCH 3/3] staging, zram: use zs_mem_[read/write]
  2013-01-16  8:08 [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Joonsoo Kim
  2013-01-16  8:08 ` [PATCH 2/3] staging, zcache: use zs_mem_[read/write] Joonsoo Kim
@ 2013-01-16  8:08 ` Joonsoo Kim
  2013-01-16 23:59 ` [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Minchan Kim
  2 siblings, 0 replies; 6+ messages in thread
From: Joonsoo Kim @ 2013-01-16  8:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: linux-kernel, devel, js1304, Seth Jennings, Konrad Rzeszutek Wilk,
	Xiao Guangrong, Dan Magenheimer, Nitin Gupta, Minchan Kim,
	Joonsoo Kim

Now we have newly introduced APIs which reduce zsmalloc's copy
overhead for objects on a page boundary, so use them in zram.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index fb4a7c9..554a742 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -195,13 +195,14 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
 		return 0;
 	}
 
-	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
 	if (zram->table[index].size == PAGE_SIZE)
-		memcpy(mem, cmem, PAGE_SIZE);
-	else
+		zs_mem_read(zram->mem_pool, handle, mem, 0, PAGE_SIZE);
+	else {
+		cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
 		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
 						mem, &clen);
-	zs_unmap_object(zram->mem_pool, handle);
+		zs_unmap_object(zram->mem_pool, handle);
+	}
 
 	/* Should NEVER happen. Return bio error if it does. */
 	if (unlikely(ret != LZO_E_OK)) {
@@ -269,7 +270,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	size_t clen;
 	unsigned long handle;
 	struct page *page;
-	unsigned char *user_mem, *cmem, *src, *uncmem = NULL;
+	unsigned char *user_mem, *src, *uncmem = NULL;
 
 	page = bvec->bv_page;
 	src = zram->compress_buffer;
@@ -343,11 +344,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		ret = -ENOMEM;
 		goto out;
 	}
-	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
-
-	memcpy(cmem, src, clen);
 
-	zs_unmap_object(zram->mem_pool, handle);
+	zs_mem_write(zram->mem_pool, handle, src, 0, clen);
 
 	zram->table[index].handle = handle;
 	zram->table[index].size = clen;
-- 
1.7.9.5



* Re: [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write]
  2013-01-16  8:08 [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Joonsoo Kim
  2013-01-16  8:08 ` [PATCH 2/3] staging, zcache: use zs_mem_[read/write] Joonsoo Kim
  2013-01-16  8:08 ` [PATCH 3/3] staging, zram: " Joonsoo Kim
@ 2013-01-16 23:59 ` Minchan Kim
  2013-01-21  7:16   ` Joonsoo Kim
  2013-01-22 20:32   ` Konrad Rzeszutek Wilk
  2 siblings, 2 replies; 6+ messages in thread
From: Minchan Kim @ 2013-01-16 23:59 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Greg Kroah-Hartman, linux-kernel, devel, js1304, Seth Jennings,
	Konrad Rzeszutek Wilk, Xiao Guangrong, Dan Magenheimer,
	Nitin Gupta

Hi Joonsoo,

On Wed, Jan 16, 2013 at 05:08:55PM +0900, Joonsoo Kim wrote:
> If an object lies on a page boundary, zs_map_object() copies the
> object's content into a pre-allocated page and returns the virtual
> address of

IMHO, to make this easier for reviewers, it would be better to use an
explicit term instead of an abstract one:

pre-allocated pages : vm_buf, the reserved pages for zsmalloc
    
> that page. If the user instead tells zsmalloc which region to copy,
> this copy overhead can be avoided.

That's a good idea!

> This patch implements two APIs that take the copy region directly, so
> the memcpy can be done without the extra copy.

For clarification:

                          we can reduce the copy overhead with this patch
                          in the !USE_PGTABLE_MAPPING case.

> 
> For the USE_PGTABLE_MAPPING case, these APIs also avoid the cache
> flush and TLB flush overhead.

Yeb!

> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
> These are [RFC] patches because I have not tested them; I don't have a
> test environment yet, and only a compile test has been done.
> If the feedback is positive, I will set up a test environment and
> verify correctness.
> These are based on v3.8-rc3.
> If a rebase is needed, please tell me which tree I should rebase onto.

Whenever you send zsmalloc/zram/zcache patches, you should base them on
recent linux-next. But I hope we will soon be sending the patches
straight to akpm after promotion. :(

> 
> diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/drivers/staging/zsmalloc/zsmalloc-main.c
> index 09a9d35..e3ef5a5 100644
> --- a/drivers/staging/zsmalloc/zsmalloc-main.c
> +++ b/drivers/staging/zsmalloc/zsmalloc-main.c
> @@ -1045,6 +1045,118 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
>  }
>  EXPORT_SYMBOL_GPL(zs_unmap_object);
>  

It's an exported function. Please write a description.

> +void zs_mem_read(struct zs_pool *pool, unsigned long handle,
> +			void *dest, unsigned long src_off, size_t n)

n is not descriptive; please use meaningful names. How about this?
                        void *buf, unsigned long offset, size_t count

> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct page *pages[2];
> +	int sizes[2];
> +	void *addr;
> +
> +	BUG_ON(!handle);
> +
> +	/*
> +	 * Because we use per-cpu mapping areas shared among the
> +	 * pools/users, we can't allow mapping in interrupt context
> +	 * because it can corrupt another user's mappings.
> +	 */
> +	BUG_ON(in_interrupt());
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +	off += src_off;
> +
> +	BUG_ON(class->size < n);
> +
> +	if (off + n <= PAGE_SIZE) {
> +		/* this object is contained entirely within a page */
> +		addr = kmap_atomic(page);
> +		memcpy(dest, addr + off, n);
> +		kunmap_atomic(addr);
> +		return;
> +	}
> +
> +	/* this object spans two pages */
> +	pages[0] = page;
> +	pages[1] = get_next_page(page);
> +	BUG_ON(!pages[1]);
> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = n - sizes[0];
> +
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(dest, addr + off, sizes[0]);
> +	kunmap_atomic(addr);
> +
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(dest + sizes[0], addr, sizes[1]);
> +	kunmap_atomic(addr);
> +}
> +EXPORT_SYMBOL_GPL(zs_mem_read);
> +

Ditto. Please write a description.

> +void zs_mem_write(struct zs_pool *pool, unsigned long handle,
> +			const void *src, unsigned long dest_off, size_t n)
> +{
> +	struct page *page;
> +	unsigned long obj_idx, off;
> +
> +	unsigned int class_idx;
> +	enum fullness_group fg;
> +	struct size_class *class;
> +	struct page *pages[2];
> +	int sizes[2];
> +	void *addr;
> +
> +	BUG_ON(!handle);
> +
> +	/*
> +	 * Because we use per-cpu mapping areas shared among the
> +	 * pools/users, we can't allow mapping in interrupt context
> +	 * because it can corrupt another user's mappings.
> +	 */
> +	BUG_ON(in_interrupt());
> +
> +	obj_handle_to_location(handle, &page, &obj_idx);
> +	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
> +	class = &pool->size_class[class_idx];
> +	off = obj_idx_to_offset(page, obj_idx, class->size);
> +	off += dest_off;
> +
> +	BUG_ON(class->size < n);
> +
> +	if (off + n <= PAGE_SIZE) {
> +		/* this object is contained entirely within a page */
> +		addr = kmap_atomic(page);
> +		memcpy(addr + off, src, n);
> +		kunmap_atomic(addr);
> +		return;
> +	}
> +
> +	/* this object spans two pages */
> +	pages[0] = page;
> +	pages[1] = get_next_page(page);
> +	BUG_ON(!pages[1]);
> +
> +	sizes[0] = PAGE_SIZE - off;
> +	sizes[1] = n - sizes[0];
> +
> +	addr = kmap_atomic(pages[0]);
> +	memcpy(addr + off, src, sizes[0]);
> +	kunmap_atomic(addr);
> +
> +	addr = kmap_atomic(pages[1]);
> +	memcpy(addr, src + sizes[0], sizes[1]);
> +	kunmap_atomic(addr);
> +}
> +EXPORT_SYMBOL_GPL(zs_mem_write);

The two APIs have the same logic and differ only in the memcpy argument
order for input/output, so you can factor out the common logic.

Patch looks good to me, but I have a concern.
I'd like to promote zram/zsmalloc as soon as possible.
My last hurdle was a LOCKDEP complaint, so I decided to stop sending
promotion patches until it was solved. At the same time, Nitin was
sending patches for the zram metadata diet and a critical bug fix. Of
course, they conflicted, so we should line the patches up as follows:

1. Critical bug fix and merge <- merged two days ago.
2. Nitin diet patch merge <- pending
3. Minchan Lockdep patch merge <- pending

And then, my plan was to try promoting again.
But unfortunately, I was not convinced of 2 at that time, while we all
agreed on 3. So it will take some time to discuss 2 again and finally
merge it. I would like to merge the lockdep patch as top priority, and
then Joonsoo/Nitin/Seth could send their patches to staging.
(Seth already has a patch that solves the lockdep problem simply in his
zswap series, but I don't like it, although I didn't reply to his patch.)

If anyone has objection, please raise your hand.
I will do my best to send the lockdep patch by early next week.

-- 
Kind regards,
Minchan Kim


* Re: [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write]
  2013-01-16 23:59 ` [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Minchan Kim
@ 2013-01-21  7:16   ` Joonsoo Kim
  2013-01-22 20:32   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 6+ messages in thread
From: Joonsoo Kim @ 2013-01-21  7:16 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Greg Kroah-Hartman, linux-kernel, devel, Seth Jennings,
	Konrad Rzeszutek Wilk, Xiao Guangrong, Dan Magenheimer,
	Nitin Gupta

Hello, Minchan.

On Thu, Jan 17, 2013 at 08:59:22AM +0900, Minchan Kim wrote:
> Hi Joonsoo,
>
> [...]
>
> If anyone has objection, please raise your hand.
> I will do my best to send the lockdep patch by early next week.

Okay.
I will try to re-send this patchset after promotion is complete.
And v2 will reflect your comments.
Thanks for reviewing this.

> -- 
> Kind regards,
> Minchan Kim


* Re: [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write]
  2013-01-16 23:59 ` [RFC PATCH 1/3] staging, zsmalloc: introduce zs_mem_[read/write] Minchan Kim
  2013-01-21  7:16   ` Joonsoo Kim
@ 2013-01-22 20:32   ` Konrad Rzeszutek Wilk
  1 sibling, 0 replies; 6+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-01-22 20:32 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Joonsoo Kim, Greg Kroah-Hartman, linux-kernel, devel, js1304,
	Seth Jennings, Xiao Guangrong, Dan Magenheimer, Nitin Gupta

> > +EXPORT_SYMBOL_GPL(zs_mem_write);
> 
> > The two APIs have the same logic and differ only in the memcpy argument
> > order for input/output, so you can factor out the common logic.

Or have one function but just use an extra argument to define the direction of the
copy.

> 
> [...]
> 
> If anyone has objection, please raise your hand.
> I will do my best to send the lockdep patch by early next week.

No objections here.
> 
> -- 
> Kind regards,
> Minchan Kim

