* RE: [PATCHv6 4/8] zswap: add to mm/ [not found] ` <1361397888-14863-5-git-send-email-sjenning@linux.vnet.ibm.com> @ 2013-02-28 18:13 ` Dan Magenheimer 2013-02-28 19:50 ` Dan Magenheimer 0 siblings, 1 reply; 5+ messages in thread From: Dan Magenheimer @ 2013-02-28 18:13 UTC (permalink / raw) To: Seth Jennings, Andrew Morton Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Joonsoo Kim, Cody P Schafer, linux-mm, linux-kernel, devel > From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com] > Subject: [PATCHv6 4/8] zswap: add to mm/ > > +/* > + * Maximum compression ratio, as a percentage, for an acceptable > + * compressed page. Any pages that do not compress by at least > + * this ratio will be rejected. > +*/ > +static unsigned int zswap_max_compression_ratio = 80; > +module_param_named(max_compression_ratio, > + zswap_max_compression_ratio, uint, 0644); Unless this is a complete coincidence, I believe that the default value "80" is actually: (100 * (1L >> ZS_MAX_ZSPAGE_ORDER)) / ((1L >> ZS_MAX_ZSPAGE_ORDER) + 1) (though the constant ZS_MAX_ZSPAGE_ORDER is not currently defined outside of zsmalloc.c) because pages that compress less efficiently than this always require a full pageframe in zsmalloc. True? If this change were made, is there any real reason for this to be a user-selectable parameter, i.e. given the compression-internals knowledge necessary to understand what value should be selected, would any mortal sysadmin ever want to change it or know what would be a reasonable value to change it to? -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: email@kvack.org ^ permalink raw reply [flat|nested] 5+ messages in thread
* RE: [PATCHv6 4/8] zswap: add to mm/ 2013-02-28 18:13 ` [PATCHv6 4/8] zswap: add to mm/ Dan Magenheimer @ 2013-02-28 19:50 ` Dan Magenheimer 0 siblings, 0 replies; 5+ messages in thread From: Dan Magenheimer @ 2013-02-28 19:50 UTC (permalink / raw) To: Dan Magenheimer, Seth Jennings, Andrew Morton Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Joonsoo Kim, Cody P Schafer, linux-mm, linux-kernel, devel > From: Dan Magenheimer > Sent: Thursday, February 28, 2013 11:13 AM > To: Seth Jennings; Andrew Morton > Cc: Greg Kroah-Hartman; Nitin Gupta; Minchan Kim; Konrad Rzeszutek Wilk; Dan Magenheimer; Robert Jennings; Jenifer Hopper; Mel Gorman; Johannes Weiner; Rik van Riel; Larry Woodman; Benjamin Herrenschmidt; Dave Hansen; Joe Perches; Joonsoo Kim; Cody P Schafer; linux-mm@kvack.org; linux-kernel@vger.kernel.org; devel@driverdev.osuosl.org > Subject: RE: [PATCHv6 4/8] zswap: add to mm/ > > > From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com] > > Subject: [PATCHv6 4/8] zswap: add to mm/ > > > > +/* > > + * Maximum compression ratio, as a percentage, for an acceptable > > + * compressed page. Any pages that do not compress by at least > > + * this ratio will be rejected. > > +*/ > > +static unsigned int zswap_max_compression_ratio = 80; > > +module_param_named(max_compression_ratio, > > + zswap_max_compression_ratio, uint, 0644); > > Unless this is a complete coincidence, I believe that > the default value "80" is actually: > > (100 * (1L >> ZS_MAX_ZSPAGE_ORDER)) / > ((1L >> ZS_MAX_ZSPAGE_ORDER) + 1) Doh! If it wasn't obvious, those should be left shift operators, not right shift. So.... (100 * (1L << ZS_MAX_ZSPAGE_ORDER)) / ((1L << ZS_MAX_ZSPAGE_ORDER) + 1) Sorry for that.
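Dan's corrected expression can be checked outside the kernel. A minimal user-space sketch of the arithmetic, assuming ZS_MAX_ZSPAGE_ORDER is 2 (the constant is private to zsmalloc.c, so that value is an assumption here, not taken from the patch):

```c
#include <assert.h>

/*
 * ASSUMPTION: ZS_MAX_ZSPAGE_ORDER is not exported by zsmalloc.c;
 * the value 2 is used purely for illustration.
 */
#define ZS_MAX_ZSPAGE_ORDER 2

static unsigned int default_max_compression_ratio(void)
{
	/*
	 * A maximal zspage spans (1 << order) pageframes. A page that
	 * compresses worse than order-pages-out-of-(order + 1) would
	 * effectively cost a full extra pageframe, so the cutoff, as a
	 * percentage, is:
	 */
	return (100 * (1L << ZS_MAX_ZSPAGE_ORDER)) /
	       ((1L << ZS_MAX_ZSPAGE_ORDER) + 1);
}
```

With an order of 2 this evaluates to (100 * 4) / 5 = 80, matching the hard-coded default in the patch, which is the coincidence Dan is pointing at.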
* [PATCHv6 0/8] zswap: compressed swap caching @ 2013-02-20 22:04 Seth Jennings 2013-02-20 22:04 ` [PATCHv6 4/8] zswap: add to mm/ Seth Jennings 0 siblings, 1 reply; 5+ messages in thread From: Seth Jennings @ 2013-02-20 22:04 UTC (permalink / raw) To: Andrew Morton Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Joonsoo Kim, Cody P Schafer, linux-mm, linux-kernel, devel Changelog: v6: * fix access-after-free regression introduced in v5 (rb_erase() outside the lock) * fix improper freeing of rbtree (Cody) * fix comment typo (Rik) * add comments about ZS_MM_WO usage and page mapping mode (Joonsoo) * don't use page->object (Joonsoo) * remove DEBUG (Joonsoo) * rebase to v3.8 v5: * zsmalloc patch converted from promotion to "new code" (for review only, see note in [1/8]) * promote zsmalloc to mm/ instead of /lib * add more documentation everywhere * convert USE_PGTABLE_MAPPING to kconfig option, thanks to Minchan * s/flush/writeback/ * #define pr_fmt() for formatting messages (Joe) * checkpatch fixups * lots of changes suggested by Minchan v4: * Added Acks (Minchan) * Separated flushing functionality into standalone patch for easier review (Minchan) * fix comment on zswap enabled attribute (Minchan) * add TODO for dynamic mempool size (Minchan) * add check for NULL in zswap_free_page() (Minchan) * add missing zs_free() in error path (Minchan) * TODO: add comments for flushing/refcounting (Minchan) v3: * Dropped the zsmalloc patches from the set, except the promotion patch which has been converted to a rename patch (vs full diff). The dropped patches have been Acked and are going into Greg's staging tree soon.
* Separated [PATCHv2 7/9] into two patches since it makes changes for two different reasons (Minchan) * Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik) * Rebase to v3.8-rc5. linux-next is a little volatile with the swapper_space per type changes which will affect this patchset. * TODO: Move some stats from debugfs to sysfs. Which ones? (Rik) v2: * Rename zswap_fs_* functions to zswap_frontswap_* to avoid confusion with "filesystem" * Add comment about what the tree lock protects * Remove "#if 0" code (should have been done before) * Break out changes to existing swap code into separate patch * Fix blank line EOF warning on documentation file * Rebase to next-20130107 Zswap Overview: Zswap is a lightweight compressed cache for swap pages. It takes pages that are in the process of being swapped out and attempts to compress them into a dynamically allocated RAM-based memory pool. If this process is successful, the writeback to the swap device is deferred and, in many cases, avoided completely. This results in a significant I/O reduction and performance gains for systems that are swapping. The results of a kernel building benchmark indicate a runtime reduction of 53% and an I/O reduction of 76% with zswap vs normal swapping with a kernel build under heavy memory pressure (see Performance section for more). Some additional performance metrics regarding the performance improvements and I/O reductions that can be achieved using zswap as measured by SPECjbb are provided here: http://ibm.co/VCgHvM These results include runs on x86 and new results on Power7+ with hardware compression acceleration. Of particular note is that zswap is able to evict pages from the compressed cache, on an LRU basis, to the backing swap device when the compressed pool reaches its size limit or the pool is unable to obtain additional pages from the buddy allocator. This eviction functionality had been identified as a requirement in prior community discussions.
Patchset Structure: 1-2: add zsmalloc and documentation 3: add atomic_t get/set to debugfs 4: add basic zswap functionality 5,6: changes to existing swap code for zswap 7: add zswap writeback support 8: add zswap documentation Rationale: Zswap provides compressed swap caching that basically trades CPU cycles for reduced swap I/O. This trade-off can result in a significant performance improvement as reads from/writes to the compressed cache are almost always faster than reading from a swap device, which incurs the latency of an asynchronous block I/O read. Some potential benefits: * Desktop/laptop users with limited RAM capacities can mitigate the performance impact of swapping. * Overcommitted guests that share a common I/O resource can dramatically reduce their swap I/O pressure, avoiding heavy-handed I/O throttling by the hypervisor. This allows more work to get done with less impact to the guest workload and guests sharing the I/O subsystem. * Users with SSDs as swap devices can extend the life of the device by drastically reducing life-shortening writes. Compressed swap is also provided in zcache, along with page cache compression and RAM clustering through RAMster. Zswap seeks to deliver the benefit of swap compression to users in a discrete function. This design decision is akin to the Unix design philosophy of doing one thing well; it leaves file cache compression and other features for separate code. Design: Zswap receives pages for compression through the Frontswap API and is able to evict pages from its own compressed pool on an LRU basis and write them back to the backing swap device in the case that the compressed pool is full or unable to secure additional pages from the buddy allocator. Zswap makes use of zsmalloc for managing the compressed memory pool. This is because zsmalloc is specifically designed to minimize fragmentation on large (> PAGE_SIZE/2) allocation sizes. Each allocation in zsmalloc is not directly accessible by address.
Rather, a handle is returned by the allocation routine and that handle must be mapped before being accessed. The compressed memory pool grows on demand and shrinks as compressed pages are freed. The pool is not preallocated. When a swap page is passed from frontswap to zswap, zswap maintains a mapping of the swap entry, a combination of the swap type and swap offset, to the zsmalloc handle that references that compressed swap page. This mapping is achieved with a red-black tree per swap type. The swap offset is the search key for the tree nodes. Zswap seeks to be simple in its policies. Sysfs attributes allow for two user-controlled policies: * max_compression_ratio - Maximum compression ratio, as a percentage, for an acceptable compressed page. Any page that does not compress by at least this ratio will be rejected. * max_pool_percent - The maximum percentage of memory that the compressed pool can occupy. To enable zswap, the "enabled" attribute must be set to 1 at boot time. Zswap allows the compressor to be selected at kernel boot time by setting the "compressor" attribute. The default compressor is lzo. A debugfs interface is provided for various statistics about pool size, number of pages stored, and various counters for the reasons pages are rejected. Performance, Kernel Building: Setup ======== Gentoo w/ kernel v3.7-rc7 Quad-core i5-2500 @ 3.3GHz 512MB DDR3 1600MHz (limited with mem=512m on boot) Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t) majflt are major page faults reported by the time command pswpin/out is the delta of pswpin/out from /proc/vmstat before and after the make -jN Summary ======== * Zswap reduces I/O and improves performance at all swap pressure levels. * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving over 1.5GB of I/O, and cut runtime in half.
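The two policies above come down to simple integer arithmetic. A user-space sketch, assuming a 4 KiB page size (the function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of zswap's two tunable policies, as described in the cover
 * letter.  ASSUMPTIONS: PAGE_SIZE of 4096 and the function names are
 * for illustration only; the kernel uses its own PAGE_SIZE and
 * totalram_pages.
 */
#define PAGE_SIZE 4096UL

static unsigned int max_compression_ratio = 80;	/* percent of a page */
static unsigned int max_pool_percent = 20;	/* percent of total RAM */

/*
 * A compressed page is accepted only if its compressed length, as a
 * percentage of PAGE_SIZE, does not exceed the ratio cutoff.
 */
static bool compressed_page_acceptable(unsigned long dlen)
{
	return (dlen * 100 / PAGE_SIZE) <= max_compression_ratio;
}

/* Cap on pool size, in pageframes, given total RAM in pageframes. */
static unsigned long max_pool_pages(unsigned long totalram_pages)
{
	return max_pool_percent * totalram_pages / 100;
}
```

With the defaults, a 4 KiB page must compress to roughly 3276 bytes or less to be stored, and on a machine with 1 GiB of RAM (262144 pageframes) the pool is capped at about 52428 pageframes.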
Details
========
I/O (in pages)
        base                                  zswap                                 change  change
N     pswpin  pswpout  majflt  I/O sum      pswpin  pswpout  majflt  I/O sum        %I/O    MB
8          1      335     291      627           0        0     249      249        -60%       1
12      3688    14315    5290    23293         123      860    5954     6937        -70%      64
16     12711    46179   16803    75693        2936     7390   46092    56418        -25%      75
20     42178   133781   49898   225857        9460    28382   92951   130793        -42%     371
24     96079   357280  105242   558601        7719    18484  109309   135512        -76%    1653

Runtime (in seconds)
N     base  zswap  %change
8      107    107       0%
12     128    110     -14%
16     191    179      -6%
20     371    240     -35%
24     570    267     -53%

%CPU utilization (out of 400% on 4 cpus)
N     base  zswap  %change
8      317    319       1%
12     267    311      16%
16     179    191       7%
20      94    143      52%
24      60    128     113%

Seth Jennings (8):
  zsmalloc: add to mm/
  zsmalloc: add documentation
  debugfs: add get/set for atomic types
  zswap: add to mm/
  mm: break up swap_writepage() for frontswap backends
  mm: allow for outstanding swap writeback accounting
  zswap: add swap page writeback support
  zswap: add documentation

 Documentation/vm/zsmalloc.txt |   68 +++
 Documentation/vm/zswap.txt    |   82 +++
 fs/debugfs/file.c             |   42 ++
 include/linux/debugfs.h       |    2 +
 include/linux/swap.h          |    4 +
 include/linux/zsmalloc.h      |   56 ++
 mm/Kconfig                    |   39 ++
 mm/Makefile                   |    2 +
 mm/page_io.c                  |   22 +-
 mm/swap_state.c               |    2 +-
 mm/zsmalloc.c                 | 1117 +++++++++++++++++++++++++++++++++++++++
 mm/zswap.c                    | 1156 +++++++++++++++++++++++++++++++++++++++++
 12 files changed, 2586 insertions(+), 6 deletions(-)
 create mode 100644 Documentation/vm/zsmalloc.txt
 create mode 100644 Documentation/vm/zswap.txt
 create mode 100644 include/linux/zsmalloc.h
 create mode 100644 mm/zsmalloc.c
 create mode 100644 mm/zswap.c

--
1.8.1.1
* [PATCHv6 4/8] zswap: add to mm/ 2013-02-20 22:04 [PATCHv6 0/8] zswap: compressed swap caching Seth Jennings @ 2013-02-20 22:04 ` Seth Jennings 2013-02-25 4:35 ` Joonsoo Kim 0 siblings, 1 reply; 5+ messages in thread From: Seth Jennings @ 2013-02-20 22:04 UTC (permalink / raw) To: Andrew Morton Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Joonsoo Kim, Cody P Schafer, linux-mm, linux-kernel, devel zswap is a thin compression backend for frontswap. It receives pages from frontswap and attempts to store them in a compressed memory pool, resulting in an effective partial memory reclaim and dramatically reduced swap device I/O. Additionally, in most cases, pages can be retrieved from this compressed store much more quickly than reading from traditional swap devices, resulting in faster performance for many workloads. This patch adds the zswap driver to mm/ Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com> --- mm/Kconfig | 15 ++ mm/Makefile | 1 + mm/zswap.c | 665 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 681 insertions(+) create mode 100644 mm/zswap.c diff --git a/mm/Kconfig b/mm/Kconfig index 25b8f38..f9f35b7 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -470,3 +470,18 @@ config PGTABLE_MAPPING You can check speed with zsmalloc benchmark[1]. [1] https://github.com/spartacus06/zsmalloc + +config ZSWAP + bool "In-kernel swap page compression" + depends on FRONTSWAP && CRYPTO + select CRYPTO_LZO + select ZSMALLOC + default n + help + Zswap is a backend for the frontswap mechanism in the VMM. + It receives pages from frontswap and attempts to store them + in a compressed memory pool, resulting in an effective + partial memory reclaim. In addition, pages can be retrieved + from this compressed store much faster than most traditional + swap devices, resulting in reduced I/O and faster performance + for many workloads. diff --git a/mm/Makefile b/mm/Makefile index 0f6ef0a..1e0198f 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o obj-$(CONFIG_BOUNCE) += bounce.o obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o obj-$(CONFIG_FRONTSWAP) += frontswap.o +obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o obj-$(CONFIG_NUMA) += mempolicy.o diff --git a/mm/zswap.c b/mm/zswap.c new file mode 100644 index 0000000..d3b4943 --- /dev/null +++ b/mm/zswap.c @@ -0,0 +1,665 @@ +/* + * zswap.c - zswap driver file + * + * zswap is a backend for frontswap that takes pages that are in the + * process of being swapped out and attempts to compress them and store + * them in a RAM-based memory pool. This results in a significant I/O + * reduction on the real swap device and, in the case of a slow swap + * device, can also improve workload performance. + * + * Copyright (C) 2012 Seth Jennings <sjenning@linux.vnet.ibm.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details.
+*/ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include <linux/module.h> +#include <linux/cpu.h> +#include <linux/highmem.h> +#include <linux/slab.h> +#include <linux/spinlock.h> +#include <linux/types.h> +#include <linux/atomic.h> +#include <linux/frontswap.h> +#include <linux/rbtree.h> +#include <linux/swap.h> +#include <linux/crypto.h> +#include <linux/mempool.h> +#include <linux/zsmalloc.h> + +/********************************* +* statistics +**********************************/ +/* Number of memory pages used by the compressed pool */ +static atomic_t zswap_pool_pages = ATOMIC_INIT(0); +/* The number of compressed pages currently stored in zswap */ +static atomic_t zswap_stored_pages = ATOMIC_INIT(0); + +/* + * The statistics below are not protected from concurrent access for + * performance reasons so they may not be 100% accurate. However, + * they do provide useful information on roughly how many times a + * certain event is occurring. +*/ +static u64 zswap_pool_limit_hit; +static u64 zswap_reject_compress_poor; +static u64 zswap_reject_zsmalloc_fail; +static u64 zswap_reject_kmemcache_fail; +static u64 zswap_duplicate_entry; + +/********************************* +* tunables +**********************************/ +/* Enable/disable zswap (disabled by default, fixed at boot for now) */ +static bool zswap_enabled; +module_param_named(enabled, zswap_enabled, bool, 0); + +/* Compressor to be used by zswap (fixed at boot for now) */ +#define ZSWAP_COMPRESSOR_DEFAULT "lzo" +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; +module_param_named(compressor, zswap_compressor, charp, 0); + +/* The maximum percentage of memory that the compressed pool can occupy */ +static unsigned int zswap_max_pool_percent = 20; +module_param_named(max_pool_percent, + zswap_max_pool_percent, uint, 0644); + +/* + * Maximum compression ratio, as a percentage, for an acceptable + * compressed page. 
Any pages that do not compress by at least + * this ratio will be rejected. +*/ +static unsigned int zswap_max_compression_ratio = 80; +module_param_named(max_compression_ratio, + zswap_max_compression_ratio, uint, 0644); + +/********************************* +* compression functions +**********************************/ +/* per-cpu compression transforms */ +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms; + +enum comp_op { + ZSWAP_COMPOP_COMPRESS, + ZSWAP_COMPOP_DECOMPRESS +}; + +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen, + u8 *dst, unsigned int *dlen) +{ + struct crypto_comp *tfm; + int ret; + + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu()); + switch (op) { + case ZSWAP_COMPOP_COMPRESS: + ret = crypto_comp_compress(tfm, src, slen, dst, dlen); + break; + case ZSWAP_COMPOP_DECOMPRESS: + ret = crypto_comp_decompress(tfm, src, slen, dst, dlen); + break; + default: + ret = -EINVAL; + } + + put_cpu(); + return ret; +} + +static int __init zswap_comp_init(void) +{ + if (!crypto_has_comp(zswap_compressor, 0, 0)) { + pr_info("%s compressor not available\n", zswap_compressor); + /* fall back to default compressor */ + zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; + if (!crypto_has_comp(zswap_compressor, 0, 0)) + /* can't even load the default compressor */ + return -ENODEV; + } + pr_info("using %s compressor\n", zswap_compressor); + + /* alloc percpu transforms */ + zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *); + if (!zswap_comp_pcpu_tfms) + return -ENOMEM; + return 0; +} + +static void zswap_comp_exit(void) +{ + /* free percpu transforms */ + if (zswap_comp_pcpu_tfms) + free_percpu(zswap_comp_pcpu_tfms); +} + +/********************************* +* data structures +**********************************/ +struct zswap_entry { + struct rb_node rbnode; + unsigned type; + pgoff_t offset; + unsigned long handle; + unsigned int length; +}; + +struct zswap_tree { + struct rb_root rbroot; + spinlock_t lock; + struct 
zs_pool *pool; +}; + +static struct zswap_tree *zswap_trees[MAX_SWAPFILES]; + +/********************************* +* zswap entry functions +**********************************/ +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache" +static struct kmem_cache *zswap_entry_cache; + +static inline int zswap_entry_cache_create(void) +{ + zswap_entry_cache = + kmem_cache_create(ZSWAP_KMEM_CACHE_NAME, + sizeof(struct zswap_entry), 0, 0, NULL); + return (zswap_entry_cache == NULL); +} + +static inline void zswap_entry_cache_destory(void) +{ + kmem_cache_destroy(zswap_entry_cache); +} + +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) +{ + struct zswap_entry *entry; + entry = kmem_cache_alloc(zswap_entry_cache, gfp); + if (!entry) + return NULL; + return entry; +} + +static inline void zswap_entry_cache_free(struct zswap_entry *entry) +{ + kmem_cache_free(zswap_entry_cache, entry); +} + +/********************************* +* rbtree functions +**********************************/ +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset) +{ + struct rb_node *node = root->rb_node; + struct zswap_entry *entry; + + while (node) { + entry = rb_entry(node, struct zswap_entry, rbnode); + if (entry->offset > offset) + node = node->rb_left; + else if (entry->offset < offset) + node = node->rb_right; + else + return entry; + } + return NULL; +} + +/* + * In the case that an entry with the same offset is found, a pointer to + * the existing entry is stored in dupentry and the function returns -EEXIST +*/ +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry, + struct zswap_entry **dupentry) +{ + struct rb_node **link = &root->rb_node, *parent = NULL; + struct zswap_entry *myentry; + + while (*link) { + parent = *link; + myentry = rb_entry(parent, struct zswap_entry, rbnode); + if (myentry->offset > entry->offset) + link = &(*link)->rb_left; + else if (myentry->offset < entry->offset) + link = &(*link)->rb_right; + else { + 
*dupentry = myentry; + return -EEXIST; + } + } + rb_link_node(&entry->rbnode, parent, link); + rb_insert_color(&entry->rbnode, root); + return 0; +} + +/********************************* +* per-cpu code +**********************************/ +static DEFINE_PER_CPU(u8 *, zswap_dstmem); + +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu) +{ + struct crypto_comp *tfm; + u8 *dst; + + switch (action) { + case CPU_UP_PREPARE: + tfm = crypto_alloc_comp(zswap_compressor, 0, 0); + if (IS_ERR(tfm)) { + pr_err("can't allocate compressor transform\n"); + return NOTIFY_BAD; + } + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm; + dst = (u8 *)__get_free_pages(GFP_KERNEL, 1); + if (!dst) { + pr_err("can't allocate compressor buffer\n"); + crypto_free_comp(tfm); + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; + return NOTIFY_BAD; + } + per_cpu(zswap_dstmem, cpu) = dst; + break; + case CPU_DEAD: + case CPU_UP_CANCELED: + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu); + if (tfm) { + crypto_free_comp(tfm); + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; + } + dst = per_cpu(zswap_dstmem, cpu); + if (dst) { + free_pages((unsigned long)dst, 1); + per_cpu(zswap_dstmem, cpu) = NULL; + } + break; + default: + break; + } + return NOTIFY_OK; +} + +static int zswap_cpu_notifier(struct notifier_block *nb, + unsigned long action, void *pcpu) +{ + unsigned long cpu = (unsigned long)pcpu; + return __zswap_cpu_notifier(action, cpu); +} + +static struct notifier_block zswap_cpu_notifier_block = { + .notifier_call = zswap_cpu_notifier +}; + +static int zswap_cpu_init(void) +{ + unsigned long cpu; + + get_online_cpus(); + for_each_online_cpu(cpu) + if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK) + goto cleanup; + register_cpu_notifier(&zswap_cpu_notifier_block); + put_online_cpus(); + return 0; + +cleanup: + for_each_online_cpu(cpu) + __zswap_cpu_notifier(CPU_UP_CANCELED, cpu); + put_online_cpus(); + return -ENOMEM; +} + +/********************************* +* 
zsmalloc callbacks +**********************************/ +static mempool_t *zswap_page_pool; + +static inline unsigned int zswap_max_pool_pages(void) +{ + return zswap_max_pool_percent * totalram_pages / 100; +} + +static inline int zswap_page_pool_create(void) +{ + /* TODO: dynamically size mempool */ + zswap_page_pool = mempool_create_page_pool(256, 0); + if (!zswap_page_pool) + return -ENOMEM; + return 0; +} + +static inline void zswap_page_pool_destroy(void) +{ + mempool_destroy(zswap_page_pool); +} + +static struct page *zswap_alloc_page(gfp_t flags) +{ + struct page *page; + + if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) { + zswap_pool_limit_hit++; + return NULL; + } + page = mempool_alloc(zswap_page_pool, flags); + if (page) + atomic_inc(&zswap_pool_pages); + return page; +} + +static void zswap_free_page(struct page *page) +{ + if (!page) + return; + mempool_free(page, zswap_page_pool); + atomic_dec(&zswap_pool_pages); +} + +static struct zs_ops zswap_zs_ops = { + .alloc = zswap_alloc_page, + .free = zswap_free_page +}; + +/********************************* +* frontswap hooks +**********************************/ +/* attempts to compress and store a single page */ +static int zswap_frontswap_store(unsigned type, pgoff_t offset, + struct page *page) +{ + struct zswap_tree *tree = zswap_trees[type]; + struct zswap_entry *entry, *dupentry; + int ret; + unsigned int dlen = PAGE_SIZE; + unsigned long handle; + char *buf; + u8 *src, *dst; + + if (!tree) { + ret = -ENODEV; + goto reject; + } + + /* compress */ + dst = get_cpu_var(zswap_dstmem); + src = kmap_atomic(page); + ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen); + kunmap_atomic(src); + if (ret) { + ret = -EINVAL; + goto putcpu; + } + if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) { + zswap_reject_compress_poor++; + ret = -E2BIG; + goto putcpu; + } + + /* store */ + handle = zs_malloc(tree->pool, dlen, + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC | 
+ __GFP_NOWARN); + if (!handle) { + zswap_reject_zsmalloc_fail++; + ret = -ENOMEM; + goto putcpu; + } + + buf = zs_map_object(tree->pool, handle, ZS_MM_WO); + memcpy(buf, dst, dlen); + zs_unmap_object(tree->pool, handle); + put_cpu_var(zswap_dstmem); + + /* allocate entry */ + entry = zswap_entry_cache_alloc(GFP_KERNEL); + if (!entry) { + zs_free(tree->pool, handle); + zswap_reject_kmemcache_fail++; + ret = -ENOMEM; + goto reject; + } + + /* populate entry */ + entry->type = type; + entry->offset = offset; + entry->handle = handle; + entry->length = dlen; + + /* map */ + spin_lock(&tree->lock); + do { + ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry); + if (ret == -EEXIST) { + zswap_duplicate_entry++; + + /* remove from rbtree */ + rb_erase(&dupentry->rbnode, &tree->rbroot); + + /* free */ + zs_free(tree->pool, dupentry->handle); + zswap_entry_cache_free(dupentry); + atomic_dec(&zswap_stored_pages); + } + } while (ret == -EEXIST); + spin_unlock(&tree->lock); + + /* update stats */ + atomic_inc(&zswap_stored_pages); + + return 0; + +putcpu: + put_cpu_var(zswap_dstmem); +reject: + return ret; +} + +/* + * returns 0 if the page was successfully decompressed + * return -1 on entry not found or error +*/ +static int zswap_frontswap_load(unsigned type, pgoff_t offset, + struct page *page) +{ + struct zswap_tree *tree = zswap_trees[type]; + struct zswap_entry *entry; + u8 *src, *dst; + unsigned int dlen; + + /* find */ + spin_lock(&tree->lock); + entry = zswap_rb_search(&tree->rbroot, offset); + spin_unlock(&tree->lock); + + /* decompress */ + dlen = PAGE_SIZE; + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO); + dst = kmap_atomic(page); + zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length, + dst, &dlen); + kunmap_atomic(dst); + zs_unmap_object(tree->pool, entry->handle); + + return 0; +} + +/* invalidates a single page */ +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset) +{ + struct zswap_tree *tree = zswap_trees[type]; + 
struct zswap_entry *entry; + + /* find */ + spin_lock(&tree->lock); + entry = zswap_rb_search(&tree->rbroot, offset); + + /* remove from rbtree */ + rb_erase(&entry->rbnode, &tree->rbroot); + spin_unlock(&tree->lock); + + /* free */ + zs_free(tree->pool, entry->handle); + zswap_entry_cache_free(entry); + atomic_dec(&zswap_stored_pages); +} + +/* invalidates all pages for the given swap type */ +static void zswap_frontswap_invalidate_area(unsigned type) +{ + struct zswap_tree *tree = zswap_trees[type]; + struct rb_node *node; + struct zswap_entry *entry; + + if (!tree) + return; + + /* walk the tree and free everything */ + spin_lock(&tree->lock); + /* + * TODO: Even though this code should not be executed because + * the try_to_unuse() in swapoff should have emptied the tree, + * it is very wasteful to rebalance the tree after every + * removal when we are freeing the whole tree. + * + * If post-order traversal code is ever added to the rbtree + * implementation, it should be used here. 
+ */ + while ((node = rb_first(&tree->rbroot))) { + entry = rb_entry(node, struct zswap_entry, rbnode); + rb_erase(&entry->rbnode, &tree->rbroot); + zs_free(tree->pool, entry->handle); + zswap_entry_cache_free(entry); + } + tree->rbroot = RB_ROOT; + spin_unlock(&tree->lock); +} + +/* NOTE: this is called in atomic context from swapon and must not sleep */ +static void zswap_frontswap_init(unsigned type) +{ + struct zswap_tree *tree; + + tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT); + if (!tree) + goto err; + tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops); + if (!tree->pool) + goto freetree; + tree->rbroot = RB_ROOT; + spin_lock_init(&tree->lock); + zswap_trees[type] = tree; + return; + +freetree: + kfree(tree); +err: + pr_err("alloc failed, zswap disabled for swap type %d\n", type); +} + +static struct frontswap_ops zswap_frontswap_ops = { + .store = zswap_frontswap_store, + .load = zswap_frontswap_load, + .invalidate_page = zswap_frontswap_invalidate_page, + .invalidate_area = zswap_frontswap_invalidate_area, + .init = zswap_frontswap_init +}; + +/********************************* +* debugfs functions +**********************************/ +#ifdef CONFIG_DEBUG_FS +#include <linux/debugfs.h> + +static struct dentry *zswap_debugfs_root; + +static int __init zswap_debugfs_init(void) +{ + if (!debugfs_initialized()) + return -ENODEV; + + zswap_debugfs_root = debugfs_create_dir("zswap", NULL); + if (!zswap_debugfs_root) + return -ENOMEM; + + debugfs_create_u64("pool_limit_hit", S_IRUGO, + zswap_debugfs_root, &zswap_pool_limit_hit); + debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO, + zswap_debugfs_root, &zswap_reject_zsmalloc_fail); + debugfs_create_u64("reject_kmemcache_fail", S_IRUGO, + zswap_debugfs_root, &zswap_reject_kmemcache_fail); + debugfs_create_u64("reject_compress_poor", S_IRUGO, + zswap_debugfs_root, &zswap_reject_compress_poor); + debugfs_create_u64("duplicate_entry", S_IRUGO, + zswap_debugfs_root, &zswap_duplicate_entry); + 
debugfs_create_atomic_t("pool_pages", S_IRUGO, + zswap_debugfs_root, &zswap_pool_pages); + debugfs_create_atomic_t("stored_pages", S_IRUGO, + zswap_debugfs_root, &zswap_stored_pages); + + return 0; +} + +static void __exit zswap_debugfs_exit(void) +{ + debugfs_remove_recursive(zswap_debugfs_root); +} +#else +static inline int __init zswap_debugfs_init(void) +{ + return 0; +} + +static inline void __exit zswap_debugfs_exit(void) { } +#endif + +/********************************* +* module init and exit +**********************************/ +static int __init init_zswap(void) +{ + if (!zswap_enabled) + return 0; + + pr_info("loading zswap\n"); + if (zswap_entry_cache_create()) { + pr_err("entry cache creation failed\n"); + goto error; + } + if (zswap_page_pool_create()) { + pr_err("page pool initialization failed\n"); + goto pagepoolfail; + } + if (zswap_comp_init()) { + pr_err("compressor initialization failed\n"); + goto compfail; + } + if (zswap_cpu_init()) { + pr_err("per-cpu initialization failed\n"); + goto pcpufail; + } + frontswap_register_ops(&zswap_frontswap_ops); + if (zswap_debugfs_init()) + pr_warn("debugfs initialization failed\n"); + return 0; +pcpufail: + zswap_comp_exit(); +compfail: + zswap_page_pool_destroy(); +pagepoolfail: + zswap_entry_cache_destory(); +error: + return -ENOMEM; +} +/* must be late so crypto has time to come up */ +late_initcall(init_zswap); + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>"); +MODULE_DESCRIPTION("Compressed cache for swap pages"); -- 1.8.1.1 -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCHv6 4/8] zswap: add to mm/ 2013-02-20 22:04 ` [PATCHv6 4/8] zswap: add to mm/ Seth Jennings @ 2013-02-25 4:35 ` Joonsoo Kim 2013-02-25 17:21 ` Seth Jennings 0 siblings, 1 reply; 5+ messages in thread From: Joonsoo Kim @ 2013-02-25 4:35 UTC (permalink / raw) To: Seth Jennings Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Cody P Schafer, linux-mm, linux-kernel, devel Hello, Seth. Here comes minor comments. On Wed, Feb 20, 2013 at 04:04:44PM -0600, Seth Jennings wrote: > zswap is a thin compression backend for frontswap. It receives > pages from frontswap and attempts to store them in a compressed > memory pool, resulting in an effective partial memory reclaim and > dramatically reduced swap device I/O. > > Additionally, in most cases, pages can be retrieved from this > compressed store much more quickly than reading from tradition > swap devices resulting in faster performance for many workloads. > > This patch adds the zswap driver to mm/ > > Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com> > --- > mm/Kconfig | 15 ++ > mm/Makefile | 1 + > mm/zswap.c | 665 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ > 3 files changed, 681 insertions(+) > create mode 100644 mm/zswap.c > > diff --git a/mm/Kconfig b/mm/Kconfig > index 25b8f38..f9f35b7 100644 > --- a/mm/Kconfig > +++ b/mm/Kconfig > @@ -470,3 +470,18 @@ config PGTABLE_MAPPING > > You can check speed with zsmalloc benchmark[1]. > [1] https://github.com/spartacus06/zsmalloc > + > +config ZSWAP > + bool "In-kernel swap page compression" > + depends on FRONTSWAP && CRYPTO > + select CRYPTO_LZO > + select ZSMALLOC > + default n > + help > + Zswap is a backend for the frontswap mechanism in the VMM. 
> + It receives pages from frontswap and attempts to store them > + in a compressed memory pool, resulting in an effective > + partial memory reclaim. In addition, pages and be retrieved > + from this compressed store much faster than most tradition > + swap devices resulting in reduced I/O and faster performance > + for many workloads. > diff --git a/mm/Makefile b/mm/Makefile > index 0f6ef0a..1e0198f 100644 > --- a/mm/Makefile > +++ b/mm/Makefile > @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o > obj-$(CONFIG_BOUNCE) += bounce.o > obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o > obj-$(CONFIG_FRONTSWAP) += frontswap.o > +obj-$(CONFIG_ZSWAP) += zswap.o > obj-$(CONFIG_HAS_DMA) += dmapool.o > obj-$(CONFIG_HUGETLBFS) += hugetlb.o > obj-$(CONFIG_NUMA) += mempolicy.o > diff --git a/mm/zswap.c b/mm/zswap.c > new file mode 100644 > index 0000000..d3b4943 > --- /dev/null > +++ b/mm/zswap.c > @@ -0,0 +1,665 @@ > +/* > + * zswap.c - zswap driver file > + * > + * zswap is a backend for frontswap that takes pages that are in the > + * process of being swapped out and attempts to compress them and store > + * them in a RAM-based memory pool. This results in a significant I/O > + * reduction on the real swap device and, in the case of a slow swap > + * device, can also improve workload performance. > + * > + * Copyright (C) 2012 Seth Jennings <sjenning@linux.vnet.ibm.com> > + * > + * This program is free software; you can redistribute it and/or > + * modify it under the terms of the GNU General Public License > + * as published by the Free Software Foundation; either version 2 > + * of the License, or (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. 
> +*/ > + > +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt > + > +#include <linux/module.h> > +#include <linux/cpu.h> > +#include <linux/highmem.h> > +#include <linux/slab.h> > +#include <linux/spinlock.h> > +#include <linux/types.h> > +#include <linux/atomic.h> > +#include <linux/frontswap.h> > +#include <linux/rbtree.h> > +#include <linux/swap.h> > +#include <linux/crypto.h> > +#include <linux/mempool.h> > +#include <linux/zsmalloc.h> > + > +/********************************* > +* statistics > +**********************************/ > +/* Number of memory pages used by the compressed pool */ > +static atomic_t zswap_pool_pages = ATOMIC_INIT(0); > +/* The number of compressed pages currently stored in zswap */ > +static atomic_t zswap_stored_pages = ATOMIC_INIT(0); > + > +/* > + * The statistics below are not protected from concurrent access for > + * performance reasons so they may not be a 100% accurate. However, > + * they do provide useful information on roughly how many times a > + * certain event is occurring. 
> +*/ > +static u64 zswap_pool_limit_hit; > +static u64 zswap_reject_compress_poor; > +static u64 zswap_reject_zsmalloc_fail; > +static u64 zswap_reject_kmemcache_fail; > +static u64 zswap_duplicate_entry; > + > +/********************************* > +* tunables > +**********************************/ > +/* Enable/disable zswap (disabled by default, fixed at boot for now) */ > +static bool zswap_enabled; > +module_param_named(enabled, zswap_enabled, bool, 0); > + > +/* Compressor to be used by zswap (fixed at boot for now) */ > +#define ZSWAP_COMPRESSOR_DEFAULT "lzo" > +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; > +module_param_named(compressor, zswap_compressor, charp, 0); > + > +/* The maximum percentage of memory that the compressed pool can occupy */ > +static unsigned int zswap_max_pool_percent = 20; > +module_param_named(max_pool_percent, > + zswap_max_pool_percent, uint, 0644); > + > +/* > + * Maximum compression ratio, as as percentage, for an acceptable > + * compressed page. Any pages that do not compress by at least > + * this ratio will be rejected. 
> +*/ > +static unsigned int zswap_max_compression_ratio = 80; > +module_param_named(max_compression_ratio, > + zswap_max_compression_ratio, uint, 0644); > + > +/********************************* > +* compression functions > +**********************************/ > +/* per-cpu compression transforms */ > +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms; > + > +enum comp_op { > + ZSWAP_COMPOP_COMPRESS, > + ZSWAP_COMPOP_DECOMPRESS > +}; > + > +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen, > + u8 *dst, unsigned int *dlen) > +{ > + struct crypto_comp *tfm; > + int ret; > + > + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu()); > + switch (op) { > + case ZSWAP_COMPOP_COMPRESS: > + ret = crypto_comp_compress(tfm, src, slen, dst, dlen); > + break; > + case ZSWAP_COMPOP_DECOMPRESS: > + ret = crypto_comp_decompress(tfm, src, slen, dst, dlen); > + break; > + default: > + ret = -EINVAL; > + } > + > + put_cpu(); > + return ret; > +} > + > +static int __init zswap_comp_init(void) > +{ > + if (!crypto_has_comp(zswap_compressor, 0, 0)) { > + pr_info("%s compressor not available\n", zswap_compressor); > + /* fall back to default compressor */ > + zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; > + if (!crypto_has_comp(zswap_compressor, 0, 0)) > + /* can't even load the default compressor */ > + return -ENODEV; > + } > + pr_info("using %s compressor\n", zswap_compressor); > + > + /* alloc percpu transforms */ > + zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *); > + if (!zswap_comp_pcpu_tfms) > + return -ENOMEM; > + return 0; > +} > + > +static void zswap_comp_exit(void) > +{ > + /* free percpu transforms */ > + if (zswap_comp_pcpu_tfms) > + free_percpu(zswap_comp_pcpu_tfms); > +} > + > +/********************************* > +* data structures > +**********************************/ > +struct zswap_entry { > + struct rb_node rbnode; > + unsigned type; > + pgoff_t offset; > + unsigned long handle; > + unsigned int length; > +}; > + > 
+struct zswap_tree { > + struct rb_root rbroot; > + spinlock_t lock; > + struct zs_pool *pool; > +}; > + > +static struct zswap_tree *zswap_trees[MAX_SWAPFILES]; > + > +/********************************* > +* zswap entry functions > +**********************************/ > +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache" > +static struct kmem_cache *zswap_entry_cache; > + > +static inline int zswap_entry_cache_create(void) > +{ > + zswap_entry_cache = > + kmem_cache_create(ZSWAP_KMEM_CACHE_NAME, > + sizeof(struct zswap_entry), 0, 0, NULL); > + return (zswap_entry_cache == NULL); > +} > + > +static inline void zswap_entry_cache_destory(void) > +{ > + kmem_cache_destroy(zswap_entry_cache); > +} > + > +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) > +{ > + struct zswap_entry *entry; > + entry = kmem_cache_alloc(zswap_entry_cache, gfp); > + if (!entry) > + return NULL; > + return entry; > +} > + > +static inline void zswap_entry_cache_free(struct zswap_entry *entry) > +{ > + kmem_cache_free(zswap_entry_cache, entry); > +} > + > +/********************************* > +* rbtree functions > +**********************************/ > +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset) > +{ > + struct rb_node *node = root->rb_node; > + struct zswap_entry *entry; > + > + while (node) { > + entry = rb_entry(node, struct zswap_entry, rbnode); > + if (entry->offset > offset) > + node = node->rb_left; > + else if (entry->offset < offset) > + node = node->rb_right; > + else > + return entry; > + } > + return NULL; > +} > + > +/* > + * In the case that a entry with the same offset is found, it a pointer to > + * the existing entry is stored in dupentry and the function returns -EEXIST > +*/ > +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry, > + struct zswap_entry **dupentry) > +{ > + struct rb_node **link = &root->rb_node, *parent = NULL; > + struct zswap_entry *myentry; > + > + while (*link) { > + parent 
= *link; > + myentry = rb_entry(parent, struct zswap_entry, rbnode); > + if (myentry->offset > entry->offset) > + link = &(*link)->rb_left; > + else if (myentry->offset < entry->offset) > + link = &(*link)->rb_right; > + else { > + *dupentry = myentry; > + return -EEXIST; > + } > + } > + rb_link_node(&entry->rbnode, parent, link); > + rb_insert_color(&entry->rbnode, root); > + return 0; > +} > + > +/********************************* > +* per-cpu code > +**********************************/ > +static DEFINE_PER_CPU(u8 *, zswap_dstmem); > + > +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu) > +{ > + struct crypto_comp *tfm; > + u8 *dst; > + > + switch (action) { > + case CPU_UP_PREPARE: > + tfm = crypto_alloc_comp(zswap_compressor, 0, 0); > + if (IS_ERR(tfm)) { > + pr_err("can't allocate compressor transform\n"); > + return NOTIFY_BAD; > + } > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm; > + dst = (u8 *)__get_free_pages(GFP_KERNEL, 1); Is order 1 really needed? The following code uses only PAGE_SIZE, not 2 * PAGE_SIZE. 
> + if (!dst) { > + pr_err("can't allocate compressor buffer\n"); > + crypto_free_comp(tfm); > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; > + return NOTIFY_BAD; > + } > + per_cpu(zswap_dstmem, cpu) = dst; > + break; > + case CPU_DEAD: > + case CPU_UP_CANCELED: > + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu); > + if (tfm) { > + crypto_free_comp(tfm); > + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; > + } > + dst = per_cpu(zswap_dstmem, cpu); > + if (dst) { > + free_pages((unsigned long)dst, 1); > + per_cpu(zswap_dstmem, cpu) = NULL; > + } > + break; > + default: > + break; > + } > + return NOTIFY_OK; > +} > + > +static int zswap_cpu_notifier(struct notifier_block *nb, > + unsigned long action, void *pcpu) > +{ > + unsigned long cpu = (unsigned long)pcpu; > + return __zswap_cpu_notifier(action, cpu); > +} > + > +static struct notifier_block zswap_cpu_notifier_block = { > + .notifier_call = zswap_cpu_notifier > +}; > + > +static int zswap_cpu_init(void) > +{ > + unsigned long cpu; > + > + get_online_cpus(); > + for_each_online_cpu(cpu) > + if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK) > + goto cleanup; > + register_cpu_notifier(&zswap_cpu_notifier_block); > + put_online_cpus(); > + return 0; > + > +cleanup: > + for_each_online_cpu(cpu) > + __zswap_cpu_notifier(CPU_UP_CANCELED, cpu); > + put_online_cpus(); > + return -ENOMEM; > +} > + > +/********************************* > +* zsmalloc callbacks > +**********************************/ > +static mempool_t *zswap_page_pool; > + > +static inline unsigned int zswap_max_pool_pages(void) > +{ > + return zswap_max_pool_percent * totalram_pages / 100; > +} > + > +static inline int zswap_page_pool_create(void) > +{ > + /* TODO: dynamically size mempool */ > + zswap_page_pool = mempool_create_page_pool(256, 0); > + if (!zswap_page_pool) > + return -ENOMEM; > + return 0; > +} > + > +static inline void zswap_page_pool_destroy(void) > +{ > + mempool_destroy(zswap_page_pool); > +} > + > +static struct page 
*zswap_alloc_page(gfp_t flags) > +{ > + struct page *page; > + > + if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) { > + zswap_pool_limit_hit++; > + return NULL; > + } > + page = mempool_alloc(zswap_page_pool, flags); > + if (page) > + atomic_inc(&zswap_pool_pages); > + return page; > +} > + > +static void zswap_free_page(struct page *page) > +{ > + if (!page) > + return; > + mempool_free(page, zswap_page_pool); > + atomic_dec(&zswap_pool_pages); > +} > + > +static struct zs_ops zswap_zs_ops = { > + .alloc = zswap_alloc_page, > + .free = zswap_free_page > +}; > + > +/********************************* > +* frontswap hooks > +**********************************/ > +/* attempts to compress and store an single page */ > +static int zswap_frontswap_store(unsigned type, pgoff_t offset, > + struct page *page) > +{ > + struct zswap_tree *tree = zswap_trees[type]; > + struct zswap_entry *entry, *dupentry; > + int ret; > + unsigned int dlen = PAGE_SIZE; > + unsigned long handle; > + char *buf; > + u8 *src, *dst; > + > + if (!tree) { > + ret = -ENODEV; > + goto reject; > + } > + > + /* compress */ > + dst = get_cpu_var(zswap_dstmem); > + src = kmap_atomic(page); > + ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen); > + kunmap_atomic(src); > + if (ret) { > + ret = -EINVAL; > + goto putcpu; > + } > + if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) { > + zswap_reject_compress_poor++; > + ret = -E2BIG; > + goto putcpu; > + } > + > + /* store */ > + handle = zs_malloc(tree->pool, dlen, > + __GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC | > + __GFP_NOWARN); > + if (!handle) { > + zswap_reject_zsmalloc_fail++; > + ret = -ENOMEM; > + goto putcpu; > + } > + > + buf = zs_map_object(tree->pool, handle, ZS_MM_WO); > + memcpy(buf, dst, dlen); > + zs_unmap_object(tree->pool, handle); > + put_cpu_var(zswap_dstmem); > + > + /* allocate entry */ > + entry = zswap_entry_cache_alloc(GFP_KERNEL); > + if (!entry) { > + zs_free(tree->pool, handle); 
> + zswap_reject_kmemcache_fail++; > + ret = -ENOMEM; > + goto reject; > + } How about moving up zswap_entry_cache_alloc()? It can save compression processing time if zswap_entry_cache_alloc() is failed. > + > + /* populate entry */ > + entry->type = type; > + entry->offset = offset; > + entry->handle = handle; > + entry->length = dlen; > + > + /* map */ > + spin_lock(&tree->lock); > + do { > + ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry); > + if (ret == -EEXIST) { > + zswap_duplicate_entry++; > + > + /* remove from rbtree */ > + rb_erase(&dupentry->rbnode, &tree->rbroot); > + > + /* free */ > + zs_free(tree->pool, dupentry->handle); > + zswap_entry_cache_free(dupentry); > + atomic_dec(&zswap_stored_pages); > + } > + } while (ret == -EEXIST); > + spin_unlock(&tree->lock); > + > + /* update stats */ > + atomic_inc(&zswap_stored_pages); > + > + return 0; > + > +putcpu: > + put_cpu_var(zswap_dstmem); > +reject: > + return ret; > +} > + > +/* > + * returns 0 if the page was successfully decompressed > + * return -1 on entry not found or error > +*/ > +static int zswap_frontswap_load(unsigned type, pgoff_t offset, > + struct page *page) > +{ > + struct zswap_tree *tree = zswap_trees[type]; > + struct zswap_entry *entry; > + u8 *src, *dst; > + unsigned int dlen; > + > + /* find */ > + spin_lock(&tree->lock); > + entry = zswap_rb_search(&tree->rbroot, offset); > + spin_unlock(&tree->lock); > + > + /* decompress */ > + dlen = PAGE_SIZE; > + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO); > + dst = kmap_atomic(page); > + zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length, > + dst, &dlen); > + kunmap_atomic(dst); > + zs_unmap_object(tree->pool, entry->handle); > + > + return 0; > +} > + > +/* invalidates a single page */ > +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset) > +{ > + struct zswap_tree *tree = zswap_trees[type]; > + struct zswap_entry *entry; > + > + /* find */ > + spin_lock(&tree->lock); > + entry = 
zswap_rb_search(&tree->rbroot, offset); > + > + /* remove from rbtree */ > + rb_erase(&entry->rbnode, &tree->rbroot); > + spin_unlock(&tree->lock); > + > + /* free */ > + zs_free(tree->pool, entry->handle); > + zswap_entry_cache_free(entry); > + atomic_dec(&zswap_stored_pages); > +} > + > +/* invalidates all pages for the given swap type */ > +static void zswap_frontswap_invalidate_area(unsigned type) > +{ > + struct zswap_tree *tree = zswap_trees[type]; > + struct rb_node *node; > + struct zswap_entry *entry; > + > + if (!tree) > + return; > + > + /* walk the tree and free everything */ > + spin_lock(&tree->lock); > + /* > + * TODO: Even though this code should not be executed because > + * the try_to_unuse() in swapoff should have emptied the tree, > + * it is very wasteful to rebalance the tree after every > + * removal when we are freeing the whole tree. > + * > + * If post-order traversal code is ever added to the rbtree > + * implementation, it should be used here. > + */ > + while ((node = rb_first(&tree->rbroot))) { > + entry = rb_entry(node, struct zswap_entry, rbnode); > + rb_erase(&entry->rbnode, &tree->rbroot); > + zs_free(tree->pool, entry->handle); > + zswap_entry_cache_free(entry); > + } You should decrease zswap_stored_pages in the while loop. 
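The invariant behind this comment can be shown with a minimal userspace sketch. All names here are illustrative (a linked list stands in for the rbtree, malloc/free for zs_malloc/zs_free): every stored entry bumps a global counter, so a bulk-free path must decrement it once per freed entry — exactly as zswap_frontswap_invalidate_page does — or the global stat leaks.

```c
#include <stdlib.h>

static long stored_pages;              /* stands in for zswap_stored_pages */

struct entry { struct entry *next; };  /* stands in for a tree node */

/* store n entries, bumping the counter as zswap_frontswap_store does */
static struct entry *store_n(int n)
{
	struct entry *head = NULL;

	while (n-- > 0) {
		struct entry *e = malloc(sizeof(*e));

		if (!e)
			exit(1);
		e->next = head;
		head = e;
		stored_pages++;
	}
	return head;
}

/* bulk free, as invalidate_area should: one decrement per freed entry */
static void invalidate_area(struct entry *head)
{
	while (head) {
		struct entry *e = head;

		head = e->next;
		free(e);            /* zs_free() + zswap_entry_cache_free() */
		stored_pages--;     /* the decrement being asked for here */
	}
}
```

Without the decrement inside the loop, the counter would still report the freed entries as stored after swapoff tears the tree down.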
> + tree->rbroot = RB_ROOT; > + spin_unlock(&tree->lock); > +} > + > +/* NOTE: this is called in atomic context from swapon and must not sleep */ > +static void zswap_frontswap_init(unsigned type) > +{ > + struct zswap_tree *tree; > + > + tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT); > + if (!tree) > + goto err; > + tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops); > + if (!tree->pool) > + goto freetree; > + tree->rbroot = RB_ROOT; > + spin_lock_init(&tree->lock); > + zswap_trees[type] = tree; > + return; > + > +freetree: > + kfree(tree); > +err: > + pr_err("alloc failed, zswap disabled for swap type %d\n", type); > +} > + > +static struct frontswap_ops zswap_frontswap_ops = { > + .store = zswap_frontswap_store, > + .load = zswap_frontswap_load, > + .invalidate_page = zswap_frontswap_invalidate_page, > + .invalidate_area = zswap_frontswap_invalidate_area, > + .init = zswap_frontswap_init > +}; > + > +/********************************* > +* debugfs functions > +**********************************/ > +#ifdef CONFIG_DEBUG_FS > +#include <linux/debugfs.h> > + > +static struct dentry *zswap_debugfs_root; > + > +static int __init zswap_debugfs_init(void) > +{ > + if (!debugfs_initialized()) > + return -ENODEV; > + > + zswap_debugfs_root = debugfs_create_dir("zswap", NULL); > + if (!zswap_debugfs_root) > + return -ENOMEM; > + > + debugfs_create_u64("pool_limit_hit", S_IRUGO, > + zswap_debugfs_root, &zswap_pool_limit_hit); > + debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO, > + zswap_debugfs_root, &zswap_reject_zsmalloc_fail); > + debugfs_create_u64("reject_kmemcache_fail", S_IRUGO, > + zswap_debugfs_root, &zswap_reject_kmemcache_fail); > + debugfs_create_u64("reject_compress_poor", S_IRUGO, > + zswap_debugfs_root, &zswap_reject_compress_poor); > + debugfs_create_u64("duplicate_entry", S_IRUGO, > + zswap_debugfs_root, &zswap_duplicate_entry); > + debugfs_create_atomic_t("pool_pages", S_IRUGO, > + zswap_debugfs_root, &zswap_pool_pages); > + 
debugfs_create_atomic_t("stored_pages", S_IRUGO, > + zswap_debugfs_root, &zswap_stored_pages); > + > + return 0; > +} > + > +static void __exit zswap_debugfs_exit(void) > +{ > + debugfs_remove_recursive(zswap_debugfs_root); > +} > +#else > +static inline int __init zswap_debugfs_init(void) > +{ > + return 0; > +} > + > +static inline void __exit zswap_debugfs_exit(void) { } > +#endif > + > +/********************************* > +* module init and exit > +**********************************/ > +static int __init init_zswap(void) > +{ > + if (!zswap_enabled) > + return 0; > + > + pr_info("loading zswap\n"); > + if (zswap_entry_cache_create()) { > + pr_err("entry cache creation failed\n"); > + goto error; > + } > + if (zswap_page_pool_create()) { > + pr_err("page pool initialization failed\n"); > + goto pagepoolfail; > + } > + if (zswap_comp_init()) { > + pr_err("compressor initialization failed\n"); > + goto compfail; > + } > + if (zswap_cpu_init()) { > + pr_err("per-cpu initialization failed\n"); > + goto pcpufail; > + } > + frontswap_register_ops(&zswap_frontswap_ops); > + if (zswap_debugfs_init()) > + pr_warn("debugfs initialization failed\n"); > + return 0; > +pcpufail: > + zswap_comp_exit(); > +compfail: > + zswap_page_pool_destroy(); > +pagepoolfail: > + zswap_entry_cache_destory(); > +error: > + return -ENOMEM; > +} > +/* must be late so crypto has time to come up */ > +late_initcall(init_zswap); > + > +MODULE_LICENSE("GPL"); > +MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>"); > +MODULE_DESCRIPTION("Compressed cache for swap pages"); > -- > 1.8.1.1 > > -- > To unsubscribe, send a message with 'unsubscribe linux-mm' in > the body to majordomo@kvack.org. For more info on Linux MM, > see: http://www.linux-mm.org/ . > Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . 
* Re: [PATCHv6 4/8] zswap: add to mm/ 2013-02-25 4:35 ` Joonsoo Kim @ 2013-02-25 17:21 ` Seth Jennings 0 siblings, 0 replies; 5+ messages in thread From: Seth Jennings @ 2013-02-25 17:21 UTC (permalink / raw) To: Joonsoo Kim Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, Joe Perches, Cody P Schafer, linux-mm, linux-kernel, devel On 02/24/2013 10:35 PM, Joonsoo Kim wrote: > Hello, Seth. > Here comes minor comments. > <snip> >> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu) >> +{ >> + struct crypto_comp *tfm; >> + u8 *dst; >> + >> + switch (action) { >> + case CPU_UP_PREPARE: >> + tfm = crypto_alloc_comp(zswap_compressor, 0, 0); >> + if (IS_ERR(tfm)) { >> + pr_err("can't allocate compressor transform\n"); >> + return NOTIFY_BAD; >> + } >> + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm; >> + dst = (u8 *)__get_free_pages(GFP_KERNEL, 1); > > Order 1 is really needed? > Following code uses only PAGE_SIZE, not 2 * PAGE_SIZE. Yes, probably should add a comment here. Some compression modules in the kernel, notably LZO, do not guard against buffer overrun during compression. In cases where LZO tries to compress a page with high entropy (e.g. a page containing already compressed data like JPEG), the compressed result can actually be larger than the original data. In this case, if the compression buffer is only one page, we overrun. I actually encountered this during development. 
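The hazard described above can be demonstrated with a small userspace sketch. This is not the kernel code: a toy run-length encoder stands in for LZO, sharing the property that high-entropy input can expand rather than shrink (here by at most a factor of two). Sizing the destination buffer for the compressor's worst case means compression itself can never overrun; a poor result is instead rejected afterwards by a ratio check like zswap's.

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_RATIO 80                  /* like zswap_max_compression_ratio */

/* Toy RLE "compressor": emits (run length, byte) pairs, so its worst
 * case is 2 output bytes per input byte -- like LZO, it can expand
 * high-entropy input rather than shrink it. */
static size_t toy_compress(const unsigned char *src, size_t slen,
			   unsigned char *dst)
{
	size_t i = 0, o = 0;

	while (i < slen) {
		unsigned char b = src[i];
		size_t run = 1;

		while (i + run < slen && src[i + run] == b && run < 255)
			run++;
		dst[o++] = (unsigned char)run;
		dst[o++] = b;
		i += run;
	}
	return o;
}

/* Mirrors the store side: dst is sized for the compressor's worst case
 * (two pages here), so compression can never overrun; a poor result is
 * rejected afterwards, like the -E2BIG path in zswap_frontswap_store.
 * Returns 0 if the page compressed acceptably, -1 if rejected. */
static int try_store(const unsigned char *page, size_t *dlen_out)
{
	unsigned char dst[2 * PAGE_SIZE];    /* worst-case sized buffer */
	size_t dlen = toy_compress(page, PAGE_SIZE, dst);

	*dlen_out = dlen;
	if (dlen * 100 / PAGE_SIZE > MAX_RATIO)
		return -1;
	return 0;
}
```

With a one-page dst, the high-entropy case would scribble past the end of the buffer during compression — the same overrun Seth describes hitting with LZO during development.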
> >> + if (!dst) { >> + pr_err("can't allocate compressor buffer\n"); >> + crypto_free_comp(tfm); >> + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; >> + return NOTIFY_BAD; >> + } <snip> >> + buf = zs_map_object(tree->pool, handle, ZS_MM_WO); >> + memcpy(buf, dst, dlen); >> + zs_unmap_object(tree->pool, handle); >> + put_cpu_var(zswap_dstmem); >> + >> + /* allocate entry */ >> + entry = zswap_entry_cache_alloc(GFP_KERNEL); >> + if (!entry) { >> + zs_free(tree->pool, handle); >> + zswap_reject_kmemcache_fail++; >> + ret = -ENOMEM; >> + goto reject; >> + } > > How about moving up zswap_entry_cache_alloc()? > It can save compression processing time > if zswap_entry_cache_alloc() is failed. Will do. > >> + >> + /* populate entry */ >> + entry->type = type; >> + entry->offset = offset; >> + entry->handle = handle; >> + entry->length = dlen; >> + <snip> >> +/* invalidates all pages for the given swap type */ >> +static void zswap_frontswap_invalidate_area(unsigned type) >> +{ >> + struct zswap_tree *tree = zswap_trees[type]; >> + struct rb_node *node; >> + struct zswap_entry *entry; >> + >> + if (!tree) >> + return; >> + >> + /* walk the tree and free everything */ >> + spin_lock(&tree->lock); >> + /* >> + * TODO: Even though this code should not be executed because >> + * the try_to_unuse() in swapoff should have emptied the tree, >> + * it is very wasteful to rebalance the tree after every >> + * removal when we are freeing the whole tree. >> + * >> + * If post-order traversal code is ever added to the rbtree >> + * implementation, it should be used here. >> + */ >> + while ((node = rb_first(&tree->rbroot))) { >> + entry = rb_entry(node, struct zswap_entry, rbnode); >> + rb_erase(&entry->rbnode, &tree->rbroot); >> + zs_free(tree->pool, entry->handle); >> + zswap_entry_cache_free(entry); >> + } > > You should decrease zswap_stored_pages in while loop. Yes. Will do. 
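The first of the two changes agreed to in this reply — moving zswap_entry_cache_alloc() ahead of compression — can be sketched as a hypothetical userspace analogue (counters and stubs below are illustrative, not kernel APIs): attempting the cheap, fallible metadata allocation before the expensive compression step means an allocation failure wastes no compression work.

```c
#include <stdbool.h>

static int  compress_calls;   /* counts expensive compression attempts */
static bool alloc_ok;         /* simulates kmem_cache alloc success/failure */

static bool alloc_entry(void)   { return alloc_ok; }
static void compress_page(void) { compress_calls++; }

/* ordering in the posted patch: compress, then allocate the entry */
static int store_compress_first(void)
{
	compress_page();
	if (!alloc_entry())
		return -1;    /* the compression effort was wasted */
	return 0;
}

/* suggested ordering: allocate first, so a failure is cheap */
static int store_alloc_first(void)
{
	if (!alloc_entry())
		return -1;    /* fail fast: no compression done */
	compress_page();
	return 0;
}
```

Under memory pressure — exactly when the entry allocation is most likely to fail — the reordered path bails out before burning CPU on a page it cannot store anyway.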
Thanks, Seth