diff for duplicates of <20120907143751.GB4670@phenom.dumpdata.com>

diff --git a/a/1.txt b/N1/1.txt
index 2f9a59d..7a365e3 100644
--- a/a/1.txt
+++ b/N1/1.txt
@@ -29,3 +29,72 @@ My TODO's were:
  - Work out automatic benchmarks in three categories: database (I am going to use
   swing for that), compile (that one is easy), and overloading Firefox with
   browser tabs.
+
+
+From bd85d5fa0cc231f2779f3209ee62b755caf3aa9b Mon Sep 17 00:00:00 2001
+From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+Date: Fri, 7 Sep 2012 10:21:01 -0400
+Subject: [PATCH] zsmalloc/zcache: TODO list.
+
+Adding in comments by Dan.
+
+Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+---
+ drivers/staging/zcache/TODO   |   21 +++++++++++++++++++++
+ drivers/staging/zsmalloc/TODO |   17 +++++++++++++++++
+ 2 files changed, 38 insertions(+), 0 deletions(-)
+ create mode 100644 drivers/staging/zcache/TODO
+ create mode 100644 drivers/staging/zsmalloc/TODO
+
+diff --git a/drivers/staging/zcache/TODO b/drivers/staging/zcache/TODO
+new file mode 100644
+index 0000000..bf19a01
+--- /dev/null
++++ b/drivers/staging/zcache/TODO
+@@ -0,0 +1,21 @@
++
++A) Andrea Arcangeli pointed out, and after some deep thinking I came
++   to agree, that zcache _must_ have some "backdoor exit" for frontswap
++   pages [2], else bad things will eventually happen in many workloads.
++   This requires some kind of reaper of frontswap'ed zpages [1] which
++   "evicts" the data to the actual swap disk.  This reaper must ensure it
++   can reclaim _full_ pageframes (not just zpages) or it has little value.
++   Further, the reaper should pick pageframes to reap in an LRU-ish (not
++   random) order; a sketch follows this list.
++
++B) Zcache uses zbud(v1) for cleancache pages and includes a shrinker which
++   reclaims pairs of zpages to release whole pageframes, but there is no
++   attempt to shrink/reclaim cleancache pageframes in LRU order.  It would
++   also be nice if single-cleancache-pageframe reclaim could be implemented;
++   see the shrinker sketch below.
++
++C) Offer a mechanism to select whether zbud or zsmalloc should be used,
++   independently for cleancache and frontswap pages.  That means four
++   combinations: cleancache and frontswap using zbud; cleancache and
++   frontswap using zsmalloc; cleancache using zsmalloc with frontswap
++   using zbud; and cleancache using zbud with frontswap using zsmalloc
++   (see the selection sketch below).
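
As a reaper sketch for item A: a minimal, hypothetical shape in which
frontswap pageframes sit on an LRU and the reaper always frees whole
pageframes from the cold end, pushing their zpages out to the real swap
device.  None of these structures, locks, or helpers exist in the tree;
writeback_zpages_to_swap() in particular is assumed.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct zframe {
	struct list_head lru;		/* cold pageframes at the tail */
	struct page *page;		/* the backing pageframe */
};

static LIST_HEAD(zframe_lru);
static DEFINE_SPINLOCK(zframe_lru_lock);

void writeback_zpages_to_swap(struct page *page);	/* assumed helper */

/*
 * Reclaim up to @nr whole pageframes, coldest first.  Every zpage in a
 * victim frame takes the "backdoor exit" to the real swap device, so the
 * full pageframe -- not just individual zpages -- returns to the page
 * allocator.
 */
static int zframe_reap(int nr)
{
	int reaped = 0;

	spin_lock(&zframe_lru_lock);
	while (reaped < nr && !list_empty(&zframe_lru)) {
		struct zframe *zf = list_entry(zframe_lru.prev,
					       struct zframe, lru);

		list_del_init(&zf->lru);
		spin_unlock(&zframe_lru_lock);

		writeback_zpages_to_swap(zf->page);
		__free_page(zf->page);
		kfree(zf);
		reaped++;

		spin_lock(&zframe_lru_lock);
	}
	spin_unlock(&zframe_lru_lock);
	return reaped;
}
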
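For item B, LRU-ordered cleancache reclaim could hang off the kernel's
shrinker interface so the VM drives it under memory pressure.  Sketched
here against the count_objects/scan_objects form of that API; the counter
and the reclaim helper are invented for illustration.

#include <linux/shrinker.h>
#include <linux/atomic.h>

static atomic_long_t zcache_nr_cleancache_frames;	/* maintained by zcache */

/* assumed helper: frees up to @nr whole pageframes, oldest first */
unsigned long zcache_reclaim_cleancache_frames(unsigned long nr);

static unsigned long zcache_shrink_count(struct shrinker *shrinker,
					 struct shrink_control *sc)
{
	/* whole cleancache pageframes we could free right now */
	return atomic_long_read(&zcache_nr_cleancache_frames);
}

static unsigned long zcache_shrink_scan(struct shrinker *shrinker,
					struct shrink_control *sc)
{
	/* single-pageframe reclaim, in LRU order */
	return zcache_reclaim_cleancache_frames(sc->nr_to_scan);
}

static struct shrinker zcache_shrinker = {
	.count_objects	= zcache_shrink_count,
	.scan_objects	= zcache_shrink_scan,
	.seeks		= DEFAULT_SEEKS,
};

This would be registered once at init with register_shrinker(&zcache_shrinker).
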
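And for item C, one plausible selection mechanism is a small ops table
chosen per pool at init time from module parameters, so cleancache and
frontswap pick their allocator independently -- two parameters cover all
four combinations above.  Everything below is illustrative, not an
existing interface.

#include <linux/module.h>
#include <linux/string.h>
#include <linux/gfp.h>

struct zpage_allocator_ops {
	void *(*create_pool)(gfp_t gfp);
	unsigned long (*zmalloc)(void *pool, size_t size);
	void (*zfree)(void *pool, unsigned long handle);
};

static const struct zpage_allocator_ops zbud_allocator_ops;	/* wraps zbud */
static const struct zpage_allocator_ops zsmalloc_allocator_ops;	/* wraps zsmalloc */

static char *cleancache_allocator = "zbud";
module_param(cleancache_allocator, charp, 0444);
MODULE_PARM_DESC(cleancache_allocator, "zbud|zsmalloc for cleancache pages");

static char *frontswap_allocator = "zbud";
module_param(frontswap_allocator, charp, 0444);
MODULE_PARM_DESC(frontswap_allocator, "zbud|zsmalloc for frontswap pages");

/* anything other than "zsmalloc" falls back to the conservative zbud */
static const struct zpage_allocator_ops *pick_allocator(const char *name)
{
	return strcmp(name, "zsmalloc") ? &zbud_allocator_ops
					: &zsmalloc_allocator_ops;
}
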
+diff --git a/drivers/staging/zsmalloc/TODO b/drivers/staging/zsmalloc/TODO
+new file mode 100644
+index 0000000..b1debad
+--- /dev/null
++++ b/drivers/staging/zsmalloc/TODO
+@@ -0,0 +1,17 @@
++
++A) Zsmalloc has potentially far superior density to zbud because zsmalloc can
++   pack more zpages into each pageframe and allows for zpages that cross pageframe
++   boundaries.  But, (i) this is very data dependent... the average compression
++   for LZO is about 2x.  The frontswap'ed pages in the kernel compile benchmark
++   compress to about 4x, which is impressive but probably not representative of
++   a wide range of zpages and workloads.  And (ii) there are many historical
++   discussions going back to Knuth and mainframes about tight packing of data...
++   high density has some advantages but also brings many disadvantages related to
++   fragmentation and compaction.  Zbud is much less aggressive (max two zpages
++   per pageframe) but has a similar density on average data, without the
++   disadvantages of high density.
++
++   So zsmalloc may blow zbud away on a kernel compile benchmark but, if both were
++   runners, zsmalloc is a sprinter and zbud is a marathoner.  Perhaps the best
++   solution is to offer both?
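
To put rough numbers on the density argument (assuming 4 KiB pageframes;
the ratios are illustrative): at the ~2x LZO average a zpage is ~2 KiB, so
zbud's two-zpages-per-pageframe cap already matches zsmalloc; at the ~4x
seen in the kernel compile benchmark a zpage is ~1 KiB, and zsmalloc can
pack roughly four zpages per pageframe while zbud stays capped at two.
That is the sense in which zsmalloc only pulls ahead when the data
compresses well.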
++
+-- 
+1.7.7.6
