* [PATCH v3 1/5] introduce zero filled pages handler
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
@ 2013-03-15 2:34 ` Wanpeng Li
2013-03-15 2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
` (4 subsequent siblings)
5 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-15 2:34 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
Minchan Kim, linux-mm, linux-kernel, Wanpeng Li
Introduce zero-filled page handlers: one to detect whether a page is
zero-filled and one to fill a page frame with zeros when such a page is
retrieved.
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
drivers/staging/zcache/zcache-main.c | 26 ++++++++++++++++++++++++++
1 files changed, 26 insertions(+), 0 deletions(-)
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 328898e..d73dd4b 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -460,6 +460,32 @@ static void zcache_obj_free(struct tmem_obj *obj, struct tmem_pool *pool)
kmem_cache_free(zcache_obj_cache, obj);
}
+static bool page_is_zero_filled(void *ptr)
+{
+ unsigned int pos;
+ unsigned long *page;
+
+ page = (unsigned long *)ptr;
+
+ for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++) {
+ if (page[pos])
+ return false;
+ }
+
+ return true;
+}
+
+static void handle_zero_filled_page(void *page)
+{
+ void *user_mem;
+
+ user_mem = kmap_atomic(page);
+ memset(user_mem, 0, PAGE_SIZE);
+ kunmap_atomic(user_mem);
+
+ flush_dcache_page(page);
+}
+
static struct tmem_hostops zcache_hostops = {
.obj_alloc = zcache_obj_alloc,
.obj_free = zcache_obj_free,
--
1.7.7.6
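A note on the calling convention of the two helpers above: page_is_zero_filled()
expects the kernel virtual address of an already-mapped page, while
handle_zero_filled_page() is handed the struct page itself (typed void *) and
refills it with zeros. The open-coded word-by-word scan is equivalent to a
memchr_inv() check; a sketch of that alternative, for illustration only and not
what the patch uses:

#include <linux/string.h>

/*
 * Illustrative alternative only, not part of the patch above:
 * memchr_inv() returns NULL when every byte in the range equals the
 * given byte, so a NULL result here means the page is all zeroes.
 */
static bool page_is_zero_filled_alt(void *ptr)
{
	return memchr_inv(ptr, 0, PAGE_SIZE) == NULL;
}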
* [PATCH v3 2/5] zero-filled pages awareness
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
2013-03-15 2:34 ` [PATCH v3 1/5] introduce zero filled pages handler Wanpeng Li
@ 2013-03-15 2:34 ` Wanpeng Li
2013-03-16 14:12 ` Bob Liu
2013-03-19 0:50 ` Greg Kroah-Hartman
2013-03-15 2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
` (3 subsequent siblings)
5 siblings, 2 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-15 2:34 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
Minchan Kim, linux-mm, linux-kernel, Wanpeng Li
Compression of zero-filled pages can unnecessarily cause internal
fragmentation, and thus waste memory. This special case can be
optimized.
This patch captures zero-filled pages and marks their corresponding
zcache backing page entry as zero-filled. Whenever such a zero-filled
page is retrieved, the page frame is filled with zeros.
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
drivers/staging/zcache/zcache-main.c | 81 +++++++++++++++++++++++++++++++---
1 files changed, 75 insertions(+), 6 deletions(-)
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index d73dd4b..6c35c7d 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -59,6 +59,12 @@ static inline void frontswap_tmem_exclusive_gets(bool b)
}
#endif
+/*
+ * mark pampd to special value in order that later
+ * retrieve will identify zero-filled pages
+ */
+#define ZERO_FILLED 0x2
+
/* enable (or fix code) when Seth's patches are accepted upstream */
#define zcache_writeback_enabled 0
@@ -543,7 +549,23 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
{
void *pampd = NULL, *cdata = data;
unsigned clen = size;
+ bool zero_filled = false;
struct page *page = (struct page *)(data), *newpage;
+ char *user_mem;
+
+ user_mem = kmap_atomic(page);
+
+ /*
+ * Compressing zero-filled pages will waste memory and introduce
+ * serious fragmentation, skip it to avoid overhead
+ */
+ if (page_is_zero_filled(user_mem)) {
+ kunmap_atomic(user_mem);
+ clen = 0;
+ zero_filled = true;
+ goto got_pampd;
+ }
+ kunmap_atomic(user_mem);
if (!raw) {
zcache_compress(page, &cdata, &clen);
@@ -592,6 +614,8 @@ got_pampd:
zcache_eph_zpages_max = zcache_eph_zpages;
if (ramster_enabled && raw)
ramster_count_foreign_pages(true, 1);
+ if (zero_filled)
+ pampd = (void *)ZERO_FILLED;
out:
return pampd;
}
@@ -601,14 +625,31 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
{
void *pampd = NULL, *cdata = data;
unsigned clen = size;
+ bool zero_filled = false;
struct page *page = (struct page *)(data), *newpage;
unsigned long zbud_mean_zsize;
unsigned long curr_pers_zpages, total_zsize;
+ char *user_mem;
if (data == NULL) {
BUG_ON(!ramster_enabled);
goto create_pampd;
}
+
+ user_mem = kmap_atomic(page);
+
+ /*
+ * Compressing zero-filled pages will waste memory and introduce
+ * serious fragmentation, skip it to avoid overhead
+ */
+ if (page_is_zero_filled(user_mem)) {
+ kunmap_atomic(user_mem);
+ clen = 0;
+ zero_filled = true;
+ goto got_pampd;
+ }
+ kunmap_atomic(user_mem);
+
curr_pers_zpages = zcache_pers_zpages;
/* FIXME CONFIG_RAMSTER... subtract atomic remote_pers_pages here? */
if (!raw)
@@ -674,6 +715,8 @@ got_pampd:
zcache_pers_zbytes_max = zcache_pers_zbytes;
if (ramster_enabled && raw)
ramster_count_foreign_pages(false, 1);
+ if (zero_filled)
+ pampd = (void *)ZERO_FILLED;
out:
return pampd;
}
@@ -735,7 +778,8 @@ out:
*/
void zcache_pampd_create_finish(void *pampd, bool eph)
{
- zbud_create_finish((struct zbudref *)pampd, eph);
+ if (pampd != (void *)ZERO_FILLED)
+ zbud_create_finish((struct zbudref *)pampd, eph);
}
/*
@@ -780,6 +824,14 @@ static int zcache_pampd_get_data(char *data, size_t *sizep, bool raw,
BUG_ON(preemptible());
BUG_ON(eph); /* fix later if shared pools get implemented */
BUG_ON(pampd_is_remote(pampd));
+
+ if (pampd == (void *)ZERO_FILLED) {
+ handle_zero_filled_page(data);
+ if (!raw)
+ *sizep = PAGE_SIZE;
+ return 0;
+ }
+
if (raw)
ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
sizep, eph);
@@ -801,12 +853,21 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
struct tmem_oid *oid, uint32_t index)
{
int ret;
- bool eph = !is_persistent(pool);
+ bool eph = !is_persistent(pool), zero_filled = false;
struct page *page = NULL;
unsigned int zsize, zpages;
BUG_ON(preemptible());
BUG_ON(pampd_is_remote(pampd));
+
+ if (pampd == (void *)ZERO_FILLED) {
+ handle_zero_filled_page(data);
+ zero_filled = true;
+ if (!raw)
+ *sizep = PAGE_SIZE;
+ goto zero_fill;
+ }
+
if (raw)
ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
sizep, eph);
@@ -818,6 +879,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
}
page = zbud_free_and_delist((struct zbudref *)pampd, eph,
&zsize, &zpages);
+zero_fill:
if (eph) {
if (page)
zcache_eph_pageframes =
@@ -837,7 +899,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
}
if (!is_local_client(pool->client))
ramster_count_foreign_pages(eph, -1);
- if (page)
+ if (page && !zero_filled)
zcache_free_page(page);
return ret;
}
@@ -851,16 +913,23 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
{
struct page *page = NULL;
unsigned int zsize, zpages;
+ bool zero_filled = false;
BUG_ON(preemptible());
- if (pampd_is_remote(pampd)) {
+
+ if (pampd == (void *)ZERO_FILLED)
+ zero_filled = true;
+
+ if (pampd_is_remote(pampd) && !zero_filled) {
+
BUG_ON(!ramster_enabled);
pampd = ramster_pampd_free(pampd, pool, oid, index, acct);
if (pampd == NULL)
return;
}
if (is_ephemeral(pool)) {
- page = zbud_free_and_delist((struct zbudref *)pampd,
+ if (!zero_filled)
+ page = zbud_free_and_delist((struct zbudref *)pampd,
true, &zsize, &zpages);
if (page)
zcache_eph_pageframes =
@@ -883,7 +952,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
}
if (!is_local_client(pool->client))
ramster_count_foreign_pages(is_ephemeral(pool), -1);
- if (page)
+ if (page && !zero_filled)
zcache_free_page(page);
}
--
1.7.7.6
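The sentinel comparison above is repeated in the create, get and free paths;
purely as an illustration (this helper is hypothetical and not part of the
series), those checks could be wrapped as:

/*
 * Hypothetical helper, not part of this series: wraps the repeated
 * (pampd == (void *)ZERO_FILLED) test used in the paths above.
 */
static inline bool pampd_is_zero_filled(void *pampd)
{
	return pampd == (void *)ZERO_FILLED;
}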
* Re: [PATCH v3 2/5] zero-filled pages awareness
2013-03-15 2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
@ 2013-03-16 14:12 ` Bob Liu
2013-03-17 0:04 ` Wanpeng Li
2013-03-17 0:04 ` Wanpeng Li
2013-03-19 0:50 ` Greg Kroah-Hartman
1 sibling, 2 replies; 18+ messages in thread
From: Bob Liu @ 2013-03-16 14:12 UTC (permalink / raw)
To: Wanpeng Li
Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On 03/15/2013 10:34 AM, Wanpeng Li wrote:
> Compression of zero-filled pages can unneccessarily cause internal
> fragmentation, and thus waste memory. This special case can be
> optimized.
>
> This patch captures zero-filled pages, and marks their corresponding
> zcache backing page entry as zero-filled. Whenever such zero-filled
> page is retrieved, we fill the page frame with zero.
>
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
> drivers/staging/zcache/zcache-main.c | 81 +++++++++++++++++++++++++++++++---
> 1 files changed, 75 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index d73dd4b..6c35c7d 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -59,6 +59,12 @@ static inline void frontswap_tmem_exclusive_gets(bool b)
> }
> #endif
>
> +/*
> + * mark pampd to special value in order that later
> + * retrieve will identify zero-filled pages
> + */
> +#define ZERO_FILLED 0x2
> +
> /* enable (or fix code) when Seth's patches are accepted upstream */
> #define zcache_writeback_enabled 0
>
> @@ -543,7 +549,23 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
> {
> void *pampd = NULL, *cdata = data;
> unsigned clen = size;
> + bool zero_filled = false;
> struct page *page = (struct page *)(data), *newpage;
> + char *user_mem;
> +
> + user_mem = kmap_atomic(page);
> +
> + /*
> + * Compressing zero-filled pages will waste memory and introduce
> + * serious fragmentation, skip it to avoid overhead
> + */
> + if (page_is_zero_filled(user_mem)) {
> + kunmap_atomic(user_mem);
> + clen = 0;
> + zero_filled = true;
> + goto got_pampd;
> + }
> + kunmap_atomic(user_mem);
>
> if (!raw) {
> zcache_compress(page, &cdata, &clen);
> @@ -592,6 +614,8 @@ got_pampd:
> zcache_eph_zpages_max = zcache_eph_zpages;
> if (ramster_enabled && raw)
> ramster_count_foreign_pages(true, 1);
> + if (zero_filled)
> + pampd = (void *)ZERO_FILLED;
> out:
> return pampd;
> }
> @@ -601,14 +625,31 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
> {
> void *pampd = NULL, *cdata = data;
> unsigned clen = size;
> + bool zero_filled = false;
> struct page *page = (struct page *)(data), *newpage;
> unsigned long zbud_mean_zsize;
> unsigned long curr_pers_zpages, total_zsize;
> + char *user_mem;
>
> if (data == NULL) {
> BUG_ON(!ramster_enabled);
> goto create_pampd;
> }
> +
> + user_mem = kmap_atomic(page);
> +
> + /*
> + * Compressing zero-filled pages will waste memory and introduce
> + * serious fragmentation, skip it to avoid overhead
> + */
> + if (page_is_zero_filled(page)) {
> + kunmap_atomic(user_mem);
> + clen = 0;
> + zero_filled = true;
> + goto got_pampd;
> + }
> + kunmap_atomic(user_mem);
> +
Maybe we can add a function for this code? It seems a bit duplicated.
> curr_pers_zpages = zcache_pers_zpages;
> /* FIXME CONFIG_RAMSTER... subtract atomic remote_pers_pages here? */
> if (!raw)
> @@ -674,6 +715,8 @@ got_pampd:
> zcache_pers_zbytes_max = zcache_pers_zbytes;
> if (ramster_enabled && raw)
> ramster_count_foreign_pages(false, 1);
> + if (zero_filled)
> + pampd = (void *)ZERO_FILLED;
> out:
> return pampd;
> }
> @@ -735,7 +778,8 @@ out:
> */
> void zcache_pampd_create_finish(void *pampd, bool eph)
> {
> - zbud_create_finish((struct zbudref *)pampd, eph);
> + if (pampd != (void *)ZERO_FILLED)
> + zbud_create_finish((struct zbudref *)pampd, eph);
> }
>
> /*
> @@ -780,6 +824,14 @@ static int zcache_pampd_get_data(char *data, size_t *sizep, bool raw,
> BUG_ON(preemptible());
> BUG_ON(eph); /* fix later if shared pools get implemented */
> BUG_ON(pampd_is_remote(pampd));
> +
> + if (pampd == (void *)ZERO_FILLED) {
> + handle_zero_filled_page(data);
> + if (!raw)
> + *sizep = PAGE_SIZE;
> + return 0;
> + }
> +
> if (raw)
> ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
> sizep, eph);
> @@ -801,12 +853,21 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
> struct tmem_oid *oid, uint32_t index)
> {
> int ret;
> - bool eph = !is_persistent(pool);
> + bool eph = !is_persistent(pool), zero_filled = false;
> struct page *page = NULL;
> unsigned int zsize, zpages;
>
> BUG_ON(preemptible());
> BUG_ON(pampd_is_remote(pampd));
> +
> + if (pampd == (void *)ZERO_FILLED) {
> + handle_zero_filled_page(data);
> + zero_filled = true;
> + if (!raw)
> + *sizep = PAGE_SIZE;
> + goto zero_fill;
> + }
> +
> if (raw)
> ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
> sizep, eph);
> @@ -818,6 +879,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
> }
> page = zbud_free_and_delist((struct zbudref *)pampd, eph,
> &zsize, &zpages);
> +zero_fill:
> if (eph) {
> if (page)
> zcache_eph_pageframes =
> @@ -837,7 +899,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
> }
> if (!is_local_client(pool->client))
> ramster_count_foreign_pages(eph, -1);
> - if (page)
> + if (page && !zero_filled)
> zcache_free_page(page);
> return ret;
> }
> @@ -851,16 +913,23 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
> {
> struct page *page = NULL;
> unsigned int zsize, zpages;
> + bool zero_filled = false;
>
> BUG_ON(preemptible());
> - if (pampd_is_remote(pampd)) {
> +
> + if (pampd == (void *)ZERO_FILLED)
> + zero_filled = true;
> +
> + if (pampd_is_remote(pampd) && !zero_filled) {
> +
> BUG_ON(!ramster_enabled);
> pampd = ramster_pampd_free(pampd, pool, oid, index, acct);
> if (pampd == NULL)
> return;
> }
> if (is_ephemeral(pool)) {
> - page = zbud_free_and_delist((struct zbudref *)pampd,
> + if (!zero_filled)
> + page = zbud_free_and_delist((struct zbudref *)pampd,
> true, &zsize, &zpages);
> if (page)
> zcache_eph_pageframes =
> @@ -883,7 +952,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
> }
> if (!is_local_client(pool->client))
> ramster_count_foreign_pages(is_ephemeral(pool), -1);
> - if (page)
> + if (page && !zero_filled)
> zcache_free_page(page);
> }
>
>
--
Regards,
-Bob
* Re: [PATCH v3 2/5] zero-filled pages awareness
2013-03-16 14:12 ` Bob Liu
@ 2013-03-17 0:04 ` Wanpeng Li
2013-03-17 0:04 ` Wanpeng Li
1 sibling, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-17 0:04 UTC (permalink / raw)
To: Bob Liu
Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On Sat, Mar 16, 2013 at 10:12:25PM +0800, Bob Liu wrote:
>
>On 03/15/2013 10:34 AM, Wanpeng Li wrote:
>> Compression of zero-filled pages can unneccessarily cause internal
>> fragmentation, and thus waste memory. This special case can be
>> optimized.
>>
>> This patch captures zero-filled pages, and marks their corresponding
>> zcache backing page entry as zero-filled. Whenever such zero-filled
>> page is retrieved, we fill the page frame with zero.
>>
>> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>> drivers/staging/zcache/zcache-main.c | 81 +++++++++++++++++++++++++++++++---
>> 1 files changed, 75 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
>> index d73dd4b..6c35c7d 100644
>> --- a/drivers/staging/zcache/zcache-main.c
>> +++ b/drivers/staging/zcache/zcache-main.c
>> @@ -59,6 +59,12 @@ static inline void frontswap_tmem_exclusive_gets(bool b)
>> }
>> #endif
>>
>> +/*
>> + * mark pampd to special value in order that later
>> + * retrieve will identify zero-filled pages
>> + */
>> +#define ZERO_FILLED 0x2
>> +
>> /* enable (or fix code) when Seth's patches are accepted upstream */
>> #define zcache_writeback_enabled 0
>>
>> @@ -543,7 +549,23 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
>> {
>> void *pampd = NULL, *cdata = data;
>> unsigned clen = size;
>> + bool zero_filled = false;
>> struct page *page = (struct page *)(data), *newpage;
>> + char *user_mem;
>> +
>> + user_mem = kmap_atomic(page);
>> +
>> + /*
>> + * Compressing zero-filled pages will waste memory and introduce
>> + * serious fragmentation, skip it to avoid overhead
>> + */
>> + if (page_is_zero_filled(user_mem)) {
>> + kunmap_atomic(user_mem);
>> + clen = 0;
>> + zero_filled = true;
>> + goto got_pampd;
>> + }
>> + kunmap_atomic(user_mem);
>>
>> if (!raw) {
>> zcache_compress(page, &cdata, &clen);
>> @@ -592,6 +614,8 @@ got_pampd:
>> zcache_eph_zpages_max = zcache_eph_zpages;
>> if (ramster_enabled && raw)
>> ramster_count_foreign_pages(true, 1);
>> + if (zero_filled)
>> + pampd = (void *)ZERO_FILLED;
>> out:
>> return pampd;
>> }
>> @@ -601,14 +625,31 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
>> {
>> void *pampd = NULL, *cdata = data;
>> unsigned clen = size;
>> + bool zero_filled = false;
>> struct page *page = (struct page *)(data), *newpage;
>> unsigned long zbud_mean_zsize;
>> unsigned long curr_pers_zpages, total_zsize;
>> + char *user_mem;
>>
>> if (data == NULL) {
>> BUG_ON(!ramster_enabled);
>> goto create_pampd;
>> }
>> +
>> + user_mem = kmap_atomic(page);
>> +
>> + /*
>> + * Compressing zero-filled pages will waste memory and introduce
>> + * serious fragmentation, skip it to avoid overhead
>> + */
>> + if (page_is_zero_filled(page)) {
>> + kunmap_atomic(user_mem);
>> + clen = 0;
>> + zero_filled = true;
>> + goto got_pampd;
>> + }
>> + kunmap_atomic(user_mem);
>> +
>
Hi Bob,
>Maybe we can add a function for this code? It seems a bit duplicated.
>
Great point! I will introduce a separate function to handle the
zero-filled page capture.
Regards,
Wanpeng Li
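For illustration, one possible shape of such a helper, folding the duplicated
kmap/check/kunmap sequence from both create paths into one place; the name and
exact form below are assumptions, not what a later revision necessarily added:

/*
 * Sketch only; the name and placement are assumptions.  Maps the page,
 * runs the zero check from patch 1/5, and unmaps it again.
 */
static bool zcache_page_zero_filled(struct page *page)
{
	bool zero;
	char *user_mem = kmap_atomic(page);

	zero = page_is_zero_filled(user_mem);
	kunmap_atomic(user_mem);
	return zero;
}

Both zcache_pampd_eph_create() and zcache_pampd_pers_create() could then do the
clen = 0 / zero_filled = true / goto got_pampd sequence after a single call to it.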
>> curr_pers_zpages = zcache_pers_zpages;
>> /* FIXME CONFIG_RAMSTER... subtract atomic remote_pers_pages here? */
>> if (!raw)
>> @@ -674,6 +715,8 @@ got_pampd:
>> zcache_pers_zbytes_max = zcache_pers_zbytes;
>> if (ramster_enabled && raw)
>> ramster_count_foreign_pages(false, 1);
>> + if (zero_filled)
>> + pampd = (void *)ZERO_FILLED;
>> out:
>> return pampd;
>> }
>> @@ -735,7 +778,8 @@ out:
>> */
>> void zcache_pampd_create_finish(void *pampd, bool eph)
>> {
>> - zbud_create_finish((struct zbudref *)pampd, eph);
>> + if (pampd != (void *)ZERO_FILLED)
>> + zbud_create_finish((struct zbudref *)pampd, eph);
>> }
>>
>> /*
>> @@ -780,6 +824,14 @@ static int zcache_pampd_get_data(char *data, size_t *sizep, bool raw,
>> BUG_ON(preemptible());
>> BUG_ON(eph); /* fix later if shared pools get implemented */
>> BUG_ON(pampd_is_remote(pampd));
>> +
>> + if (pampd == (void *)ZERO_FILLED) {
>> + handle_zero_filled_page(data);
>> + if (!raw)
>> + *sizep = PAGE_SIZE;
>> + return 0;
>> + }
>> +
>> if (raw)
>> ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
>> sizep, eph);
>> @@ -801,12 +853,21 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>> struct tmem_oid *oid, uint32_t index)
>> {
>> int ret;
>> - bool eph = !is_persistent(pool);
>> + bool eph = !is_persistent(pool), zero_filled = false;
>> struct page *page = NULL;
>> unsigned int zsize, zpages;
>>
>> BUG_ON(preemptible());
>> BUG_ON(pampd_is_remote(pampd));
>> +
>> + if (pampd == (void *)ZERO_FILLED) {
>> + handle_zero_filled_page(data);
>> + zero_filled = true;
>> + if (!raw)
>> + *sizep = PAGE_SIZE;
>> + goto zero_fill;
>> + }
>> +
>> if (raw)
>> ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
>> sizep, eph);
>> @@ -818,6 +879,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>> }
>> page = zbud_free_and_delist((struct zbudref *)pampd, eph,
>> &zsize, &zpages);
>> +zero_fill:
>> if (eph) {
>> if (page)
>> zcache_eph_pageframes =
>> @@ -837,7 +899,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>> }
>> if (!is_local_client(pool->client))
>> ramster_count_foreign_pages(eph, -1);
>> - if (page)
>> + if (page && !zero_filled)
>> zcache_free_page(page);
>> return ret;
>> }
>> @@ -851,16 +913,23 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>> {
>> struct page *page = NULL;
>> unsigned int zsize, zpages;
>> + bool zero_filled = false;
>>
>> BUG_ON(preemptible());
>> - if (pampd_is_remote(pampd)) {
>> +
>> + if (pampd == (void *)ZERO_FILLED)
>> + zero_filled = true;
>> +
>> + if (pampd_is_remote(pampd) && !zero_filled) {
>> +
>> BUG_ON(!ramster_enabled);
>> pampd = ramster_pampd_free(pampd, pool, oid, index, acct);
>> if (pampd == NULL)
>> return;
>> }
>> if (is_ephemeral(pool)) {
>> - page = zbud_free_and_delist((struct zbudref *)pampd,
>> + if (!zero_filled)
>> + page = zbud_free_and_delist((struct zbudref *)pampd,
>> true, &zsize, &zpages);
>> if (page)
>> zcache_eph_pageframes =
>> @@ -883,7 +952,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>> }
>> if (!is_local_client(pool->client))
>> ramster_count_foreign_pages(is_ephemeral(pool), -1);
>> - if (page)
>> + if (page && !zero_filled)
>> zcache_free_page(page);
>> }
>>
>>
>
>--
>Regards,
>-Bob
* Re: [PATCH v3 2/5] zero-filled pages awareness
2013-03-15 2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
2013-03-16 14:12 ` Bob Liu
@ 2013-03-19 0:50 ` Greg Kroah-Hartman
2013-03-19 1:23 ` Wanpeng Li
2013-03-19 1:23 ` Wanpeng Li
1 sibling, 2 replies; 18+ messages in thread
From: Greg Kroah-Hartman @ 2013-03-19 0:50 UTC (permalink / raw)
To: Wanpeng Li
Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On Fri, Mar 15, 2013 at 10:34:17AM +0800, Wanpeng Li wrote:
> Compression of zero-filled pages can unneccessarily cause internal
> fragmentation, and thus waste memory. This special case can be
> optimized.
>
> This patch captures zero-filled pages, and marks their corresponding
> zcache backing page entry as zero-filled. Whenever such zero-filled
> page is retrieved, we fill the page frame with zero.
>
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
This patch applies with a bunch of fuzz, meaning it wasn't made against
the latest tree, which worries me. Care to redo it, and the rest of the
series, and resend it?
thanks,
greg k-h
* Re: [PATCH v3 2/5] zero-filled pages awareness
2013-03-19 0:50 ` Greg Kroah-Hartman
@ 2013-03-19 1:23 ` Wanpeng Li
2013-03-19 1:23 ` Wanpeng Li
1 sibling, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-19 1:23 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On Mon, Mar 18, 2013 at 05:50:23PM -0700, Greg Kroah-Hartman wrote:
>On Fri, Mar 15, 2013 at 10:34:17AM +0800, Wanpeng Li wrote:
>> Compression of zero-filled pages can unneccessarily cause internal
>> fragmentation, and thus waste memory. This special case can be
>> optimized.
>>
>> This patch captures zero-filled pages, and marks their corresponding
>> zcache backing page entry as zero-filled. Whenever such zero-filled
>> page is retrieved, we fill the page frame with zero.
>>
>> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>
>This patch applies with a bunch of fuzz, meaning it wasn't made against
>the latest tree, which worries me. Care to redo it, and the rest of the
>series, and resend it?
Ok, sorry for the confusion, I will do it today, thanks Greg. :-)
Regards,
Wanpeng Li
>
>thanks,
>
>greg k-h
>
* [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
2013-03-15 2:34 ` [PATCH v3 1/5] introduce zero filled pages handler Wanpeng Li
2013-03-15 2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
@ 2013-03-15 2:34 ` Wanpeng Li
2013-03-16 13:11 ` Konrad Rzeszutek Wilk
2013-03-15 2:34 ` [PATCH v3 4/5] introduce zero-filled page stat count Wanpeng Li
` (2 subsequent siblings)
5 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2013-03-15 2:34 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
Minchan Kim, linux-mm, linux-kernel, Wanpeng Li
Increment/decrement zcache_[eph|pers]_zpages for zero-filled pages.
The main point of the zpages and pageframes counters is to be able
to calculate density == zpages/pageframes. A zero-filled page
becomes a zpage that "compresses" to zero bytes and, as a result,
requires zero pageframes for storage, so the zpages counter should
be increased but the pageframes counter should not.
[Dan Magenheimer <dan.magenheimer@oracle.com>: patch description]
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
drivers/staging/zcache/zcache-main.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 6c35c7d..ef8c960 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -863,6 +863,8 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
if (pampd == (void *)ZERO_FILLED) {
handle_zero_filled_page(data);
zero_filled = true;
+ zsize = 0;
+ zpages = 1;
if (!raw)
*sizep = PAGE_SIZE;
goto zero_fill;
@@ -917,8 +919,11 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
BUG_ON(preemptible());
- if (pampd == (void *)ZERO_FILLED)
+ if (pampd == (void *)ZERO_FILLED) {
zero_filled = true;
+ zsize = 0;
+ zpages = 1;
+ }
if (pampd_is_remote(pampd) && !zero_filled) {
--
1.7.7.6
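To make the accounting concrete: with the zpages counter incremented but no
pageframe consumed, a zero-filled put raises the zpages/pageframes density
beyond what zbud packing alone allows. A sketch of that density, scaled by 100
to stay in integer arithmetic (illustrative only; this helper is not part of
the patch, though the two counters are the existing zcache globals):

/*
 * Illustrative only, not part of the patch.  A zero-filled put bumps
 * zcache_eph_zpages but allocates no pageframe, so this value can
 * exceed the usual two-zpages-per-pageframe zbud bound.
 */
static unsigned long zcache_eph_density_x100(void)
{
	if (zcache_eph_pageframes == 0)
		return 0;
	return zcache_eph_zpages * 100 / zcache_eph_pageframes;
}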
* Re: [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page
2013-03-15 2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
@ 2013-03-16 13:11 ` Konrad Rzeszutek Wilk
2013-03-17 0:05 ` Wanpeng Li
2013-03-17 0:05 ` Wanpeng Li
0 siblings, 2 replies; 18+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-03-16 13:11 UTC (permalink / raw)
To: Wanpeng Li
Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer, Seth Jennings,
Minchan Kim, linux-mm, linux-kernel
On Fri, Mar 15, 2013 at 10:34:18AM +0800, Wanpeng Li wrote:
> Increment/decrement zcache_[eph|pers]_zpages for zero-filled pages,
> the main point of the counters for zpages and pageframes is to be
> able to calculate density == zpages/pageframes. A zero-filled page
> becomes a zpage that "compresses" to zero bytes and, as a result,
> requires zero pageframes for storage. So the zpages counter should
> be increased but the pageframes counter should not.
>
> [Dan Magenheimer <dan.magenheimer@oracle.com>: patch description]
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
> drivers/staging/zcache/zcache-main.c | 7 ++++++-
> 1 files changed, 6 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index 6c35c7d..ef8c960 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -863,6 +863,8 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
> if (pampd == (void *)ZERO_FILLED) {
> handle_zero_filled_page(data);
> zero_filled = true;
> + zsize = 0;
> + zpages = 1;
> if (!raw)
> *sizep = PAGE_SIZE;
> goto zero_fill;
> @@ -917,8 +919,11 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>
> BUG_ON(preemptible());
>
> - if (pampd == (void *)ZERO_FILLED)
> + if (pampd == (void *)ZERO_FILLED) {
> zero_filled = true;
> + zsize = 0;
> + zpages = 1;
> + }
>
> if (pampd_is_remote(pampd) && !zero_filled) {
>
> --
> 1.7.7.6
>
* Re: [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page
2013-03-16 13:11 ` Konrad Rzeszutek Wilk
@ 2013-03-17 0:05 ` Wanpeng Li
2013-03-17 0:05 ` Wanpeng Li
1 sibling, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-17 0:05 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer, Seth Jennings,
Minchan Kim, linux-mm, linux-kernel
On Sat, Mar 16, 2013 at 09:11:06AM -0400, Konrad Rzeszutek Wilk wrote:
>On Fri, Mar 15, 2013 at 10:34:18AM +0800, Wanpeng Li wrote:
>> Increment/decrement zcache_[eph|pers]_zpages for zero-filled pages,
>> the main point of the counters for zpages and pageframes is to be
>> able to calculate density == zpages/pageframes. A zero-filled page
>> becomes a zpage that "compresses" to zero bytes and, as a result,
>> requires zero pageframes for storage. So the zpages counter should
>> be increased but the pageframes counter should not.
>>
>> [Dan Magenheimer <dan.magenheimer@oracle.com>: patch description]
>> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
>
>Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thanks for your review Konrad. :-)
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>> drivers/staging/zcache/zcache-main.c | 7 ++++++-
>> 1 files changed, 6 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
>> index 6c35c7d..ef8c960 100644
>> --- a/drivers/staging/zcache/zcache-main.c
>> +++ b/drivers/staging/zcache/zcache-main.c
>> @@ -863,6 +863,8 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>> if (pampd == (void *)ZERO_FILLED) {
>> handle_zero_filled_page(data);
>> zero_filled = true;
>> + zsize = 0;
>> + zpages = 1;
>> if (!raw)
>> *sizep = PAGE_SIZE;
>> goto zero_fill;
>> @@ -917,8 +919,11 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>>
>> BUG_ON(preemptible());
>>
>> - if (pampd == (void *)ZERO_FILLED)
>> + if (pampd == (void *)ZERO_FILLED) {
>> zero_filled = true;
>> + zsize = 0;
>> + zpages = 1;
>> + }
>>
>> if (pampd_is_remote(pampd) && !zero_filled) {
>>
>> --
>> 1.7.7.6
>>
* [PATCH v3 4/5] introduce zero-filled page stat count
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
` (2 preceding siblings ...)
2013-03-15 2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
@ 2013-03-15 2:34 ` Wanpeng Li
2013-03-15 2:34 ` [PATCH v3 5/5] clean TODO list Wanpeng Li
2013-03-19 0:23 ` [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Greg Kroah-Hartman
5 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-15 2:34 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
Minchan Kim, linux-mm, linux-kernel, Wanpeng Li
Introduce zero-filled page statistics to monitor the number of
zero-filled pages.
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
drivers/staging/zcache/zcache-main.c | 7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index ef8c960..bc7ccbb 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -197,6 +197,7 @@ static ssize_t zcache_eph_nonactive_puts_ignored;
static ssize_t zcache_pers_nonactive_puts_ignored;
static ssize_t zcache_writtenback_pages;
static ssize_t zcache_outstanding_writeback_pages;
+static ssize_t zcache_zero_filled_pages;
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
@@ -258,6 +259,7 @@ static int zcache_debugfs_init(void)
zdfs("outstanding_writeback_pages", S_IRUGO, root,
&zcache_outstanding_writeback_pages);
zdfs("writtenback_pages", S_IRUGO, root, &zcache_writtenback_pages);
+ zdfs("zero_filled_pages", S_IRUGO, root, &zcache_zero_filled_pages);
return 0;
}
#undef zdebugfs
@@ -327,6 +329,7 @@ void zcache_dump(void)
pr_info("zcache: outstanding_writeback_pages=%zd\n",
zcache_outstanding_writeback_pages);
pr_info("zcache: writtenback_pages=%zd\n", zcache_writtenback_pages);
+ pr_info("zcache: zero_filled_pages=%zd\n", zcache_zero_filled_pages);
}
#endif
@@ -563,6 +566,7 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
kunmap_atomic(user_mem);
clen = 0;
zero_filled = true;
+ zcache_zero_filled_pages++;
goto got_pampd;
}
kunmap_atomic(user_mem);
@@ -646,6 +650,7 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
kunmap_atomic(user_mem);
clen = 0;
zero_filled = true;
+ zcache_zero_filled_pages++;
goto got_pampd;
}
kunmap_atomic(user_mem);
@@ -867,6 +872,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
zpages = 1;
if (!raw)
*sizep = PAGE_SIZE;
+ zcache_zero_filled_pages--;
goto zero_fill;
}
@@ -923,6 +929,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
zero_filled = true;
zsize = 0;
zpages = 1;
+ zcache_zero_filled_pages--;
}
if (pampd_is_remote(pampd) && !zero_filled) {
--
1.7.7.6
* [PATCH v3 5/5] clean TODO list
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
` (3 preceding siblings ...)
2013-03-15 2:34 ` [PATCH v3 4/5] introduce zero-filled page stat count Wanpeng Li
@ 2013-03-15 2:34 ` Wanpeng Li
2013-03-19 0:23 ` [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Greg Kroah-Hartman
5 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-15 2:34 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
Minchan Kim, linux-mm, linux-kernel, Wanpeng Li
Clean up the TODO list, since supporting zero-filled pages more efficiently
has already been done by this patch set.
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
drivers/staging/zcache/TODO | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/drivers/staging/zcache/TODO b/drivers/staging/zcache/TODO
index c1e26d4..9e755d3 100644
--- a/drivers/staging/zcache/TODO
+++ b/drivers/staging/zcache/TODO
@@ -65,5 +65,4 @@ ZCACHE FUTURE NEW FUNCTIONALITY
A. Support zsmalloc as an alternative high-density allocator
(See https://lkml.org/lkml/2013/1/23/511)
-B. Support zero-filled pages more efficiently
-C. Possibly support three zbuds per pageframe when space allows
+B. Possibly support three zbuds per pageframe when space allows
--
1.7.7.6
* Re: [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently
2013-03-15 2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
` (4 preceding siblings ...)
2013-03-15 2:34 ` [PATCH v3 5/5] clean TODO list Wanpeng Li
@ 2013-03-19 0:23 ` Greg Kroah-Hartman
2013-03-19 1:13 ` Wanpeng Li
2013-03-19 1:13 ` Wanpeng Li
5 siblings, 2 replies; 18+ messages in thread
From: Greg Kroah-Hartman @ 2013-03-19 0:23 UTC (permalink / raw)
To: Wanpeng Li
Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On Fri, Mar 15, 2013 at 10:34:15AM +0800, Wanpeng Li wrote:
> Changelog:
> v2 -> v3:
> * increment/decrement zcache_[eph|pers]_zpages for zero-filled pages, spotted by Dan
> * replace "zero" or "zero page" by "zero_filled_page", spotted by Dan
> v1 -> v2:
> * avoid changing tmem.[ch] entirely, spotted by Dan.
> * don't accumulate [eph|pers]pageframe and [eph|pers]zpages for
> zero-filled pages, spotted by Dan
> * cleanup TODO list
> * add Dan Acked-by.
In the future, please make the subject: lines have "staging: zcache:" in
them, so I don't have to edit them by hand.
thanks,
greg k-h
* Re: [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently
2013-03-19 0:23 ` [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Greg Kroah-Hartman
@ 2013-03-19 1:13 ` Wanpeng Li
2013-03-19 1:13 ` Wanpeng Li
1 sibling, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2013-03-19 1:13 UTC (permalink / raw)
To: Greg Kroah-Hartman
Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel
On Mon, Mar 18, 2013 at 05:23:59PM -0700, Greg Kroah-Hartman wrote:
>On Fri, Mar 15, 2013 at 10:34:15AM +0800, Wanpeng Li wrote:
>> Changelog:
>> v2 -> v3:
>> * increment/decrement zcache_[eph|pers]_zpages for zero-filled pages, spotted by Dan
>> * replace "zero" or "zero page" by "zero_filled_page", spotted by Dan
>> v1 -> v2:
>> * avoid changing tmem.[ch] entirely, spotted by Dan.
>> * don't accumulate [eph|pers]pageframe and [eph|pers]zpages for
>> zero-filled pages, spotted by Dan
>> * cleanup TODO list
>> * add Dan Acked-by.
>
>In the future, please make the subject: lines have "staging: zcache:" in
>them, so I don't have to edit them by hand.
Ok, I will do that, thanks Greg. :-)
Regards,
Wanpeng Li
>
>thanks,
>
>greg k-h