* RE: [PATCHv2 8/9] zswap: add to mm/
      [not found] ` <1357590280-31535-9-git-send-email-sjenning@linux.vnet.ibm.com>
@ 2013-01-10 22:16 ` Dan Magenheimer
  0 siblings, 0 replies; 10+ messages in thread
From: Dan Magenheimer @ 2013-01-10 22:16 UTC (permalink / raw)
To: Seth Jennings
Cc: Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
    Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
    Rik van Riel, Larry Woodman, linux-mm, linux-kernel, devel,
    Greg Kroah-Hartman, Andrew Morton

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: [PATCHv2 8/9] zswap: add to mm/
>
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
>
> Additionally, in most cases, pages can be retrieved from this
> compressed store much more quickly than reading from traditional
> swap devices, resulting in faster performance for many workloads.
>
> This patch adds the zswap driver to mm/
>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>

I've implemented the equivalent of zswap_flush_* in zcache. It looks
much better than my earlier attempt at similar code to move zpages to
swap. Nice work and thanks! But... (isn't there always a "but" ;-)...

> +/*
> + * This limit is arbitrary for now until a better
> + * policy can be implemented. This is so we don't
> + * eat all of RAM decompressing pages for writeback.
> + */
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> +	if (atomic_read(&zswap_outstanding_flushes) >
> +			ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +		return;

From what I can see, zcache is in some ways more aggressive in some
circumstances in "flushing" (zcache calls it "unuse"), and in some ways
less aggressive. But with significant exercise, I can always cause the
kernel to OOM when it is under heavy memory pressure and the
flush/unuse code is being used.
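For what it's worth, one shape a "better policy" could take is deriving the cap from current memory conditions rather than hard-coding it. A rough userspace sketch of the idea follows; the function name and the 1%-of-free-pages and floor-of-16 constants are invented for illustration and are not anything in zswap or zcache:

```c
#include <assert.h>

/*
 * Hypothetical alternative to a fixed ZSWAP_MAX_OUTSTANDING_FLUSHES:
 * cap in-flight flushes at a small fraction of currently free pages,
 * with a floor so writeback can still make progress under pressure.
 * Purely illustrative; the constants here are guesses, and in-kernel
 * code would read the free-page count itself rather than take it as
 * a parameter.
 */
static long max_outstanding_flushes(long free_pages)
{
	long cap = free_pages / 100;	/* at most 1% of free pages */

	return cap > 16 ? cap : 16;	/* floor of 16 in-flight flushes */
}
```

The point is just that the throttle would tighten automatically as free memory shrinks, instead of allowing the same 64 decompressed pages in flight regardless of pressure.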
Have you given any further thought to "a better policy" (see the comment
in the snippet above)? I'm going to try a smaller number than 64 to see
if the OOMs go away, but choosing a random number for this throttling
doesn't seem like a good plan for moving forward.

Thanks,
Dan

P.S. I know you, like me, often use something kernbench-ish to exercise
your code. I've found that compiling a kernel, then switching to another
kernel directory, doing a git pull, and compiling that kernel, causes a
lot of flushes/unuses and the OOMs. (This with 1GB RAM booting RHEL6
with a full GUI.)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCHv2 0/9] zswap: compressed swap caching
@ 2013-01-07 20:24 Seth Jennings
2013-01-07 20:24 ` [PATCHv2 8/9] zswap: add to mm/ Seth Jennings
0 siblings, 1 reply; 10+ messages in thread
From: Seth Jennings @ 2013-01-07 20:24 UTC (permalink / raw)
To: Greg Kroah-Hartman, Andrew Morton
Cc: Seth Jennings, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk,
Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman,
Johannes Weiner, Rik van Riel, Larry Woodman, linux-mm,
linux-kernel, devel
Changelog:
v2:
* Rename zswap_fs_* functions to zswap_frontswap_* to avoid
confusion with "filesystem"
* Add comment about what the tree lock protects
* Remove "#if 0" code (should have been done before)
* Break out changes to existing swap code into separate patch
* Fix blank line EOF warning on documentation file
* Rebase to next-20130107
Zswap Overview:
Zswap is a lightweight compressed cache for swap pages. It takes
pages that are in the process of being swapped out and attempts to
compress them into a dynamically allocated RAM-based memory pool.
If this process is successful, the writeback to the swap device is
deferred and, in many cases, avoided completely. This results in
a significant I/O reduction and performance gains for systems that
are swapping. The results of a kernel building benchmark indicate a
runtime reduction of 53% and an I/O reduction of 76% with zswap vs. normal
swapping with a kernel build under heavy memory pressure (see
Performance section for more).
Patchset Structure:
1-4: improvements/changes to zsmalloc
5: add atomic_t get/set to debugfs
6: promote zsmalloc to /lib
7-9: add zswap and documentation
Targeting this for linux-next.
Rationale:
Zswap provides compressed swap caching that basically trades CPU cycles
for reduced swap I/O. This trade-off can result in a significant
performance improvement as reads from and writes to the compressed
cache are almost always faster than reading from a swap device,
which incurs the latency of an asynchronous block I/O read.
Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
dramatically reduce their swap I/O pressure, avoiding heavy
handed I/O throttling by the hypervisor. This allows more work
to get done with less impact on the guest workload and on other
guests sharing the I/O subsystem.
* Users with SSDs as swap devices can extend the life of the device by
drastically reducing life-shortening writes.
Zswap evicts pages from the compressed cache on an LRU basis to the backing
swap device when the compressed pool reaches its size limit or the pool is
unable to obtain additional pages from the buddy allocator. This
requirement had been identified in prior community discussions.
Compressed swap is also provided in zcache, along with page cache
compression and RAM clustering through RAMSter. Zswap seeks to deliver
the benefit of swap compression to users in a discrete function.
This design decision is akin to the Unix philosophy of doing one
thing well: it leaves file cache compression and other features
for separate code.
Design:
Zswap receives pages for compression through the Frontswap API and
is able to evict pages from its own compressed pool on an LRU basis
and write them back to the backing swap device in the case that the
compressed pool is full or unable to secure additional pages from
the buddy allocator.
Zswap makes use of zsmalloc for managing the compressed memory
pool. This is because zsmalloc is specifically designed to minimize
fragmentation on large (> PAGE_SIZE/2) allocation sizes. Each
allocation in zsmalloc is not directly accessible by address.
Rather, a handle is returned by the allocation routine and that handle
must be mapped before being accessed. The compressed memory pool grows
on demand and shrinks as compressed pages are freed. The pool is
not preallocated.
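The handle-based access pattern can be modeled in plain userspace C. The toy_zs_* names below are stand-ins I made up to mirror the shape of the zsmalloc interface (opaque handle, map before access, unmap after); a handle here is just a boxed pointer, whereas real zsmalloc packs objects into size classes across pages:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy model of zsmalloc's handle interface, for illustration only:
 * allocations are referenced by an opaque handle rather than a raw
 * pointer, and the handle must be mapped before the memory is touched.
 */
typedef unsigned long zs_handle;

static zs_handle toy_zs_malloc(size_t len)
{
	return (zs_handle)malloc(len);	/* opaque to the caller */
}

static void *toy_zs_map(zs_handle h)
{
	return (void *)h;		/* the real API may kmap here */
}

static void toy_zs_unmap(zs_handle h)
{
	(void)h;			/* real API releases the mapping */
}

static void toy_zs_free(zs_handle h)
{
	free((void *)h);
}
```

The indirection is what lets the real allocator relocate or specially place objects: callers never hold a raw pointer except between map and unmap.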
When a swap page is passed from frontswap to zswap, zswap maintains
a mapping of the swap entry, a combination of the swap type and swap
offset, to the zsmalloc handle that references that compressed swap
page. This mapping is achieved with a red-black tree per swap type.
The swap offset is the search key for the tree nodes.
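The lookup described above can be sketched in self-contained C. The search logic mirrors zswap's zswap_rb_search(); note that the kernel code uses a self-balancing red-black tree (struct rb_node), whereas this illustration uses a plain unbalanced BST with hypothetical field names:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the per-swap-type mapping: the swap offset is the search
 * key, and the stored value is the zsmalloc handle referencing the
 * compressed page. Unbalanced BST for illustration; zswap uses an
 * rbtree with the same comparison structure.
 */
struct entry {
	unsigned long offset;		/* search key: swap offset */
	unsigned long handle;		/* zsmalloc handle */
	struct entry *left, *right;
};

static struct entry *search(struct entry *node, unsigned long offset)
{
	while (node) {
		if (node->offset > offset)
			node = node->left;
		else if (node->offset < offset)
			node = node->right;
		else
			return node;	/* exact offset match */
	}
	return NULL;			/* page not in zswap */
}
```

A miss (NULL) corresponds to the frontswap load path returning -1 because the page was never stored or has been flushed to the backing device.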
Zswap seeks to be simple in its policies. Sysfs attributes allow for
two user controlled policies:
* max_compression_ratio - Maximum compression ratio, as a percentage,
for an acceptable compressed page. Any page that does not compress
by at least this ratio will be rejected.
* max_pool_percent - The maximum percentage of memory that the compressed
pool can occupy.
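The max_compression_ratio policy reduces to one line of integer arithmetic. A minimal userspace restatement of the acceptance test (PAGE_SIZE assumed to be 4096; the helper name is mine, the expression matches the check in zswap_frontswap_store()):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/*
 * A compressed page of length dlen is accepted only if it occupies at
 * most max_compression_ratio percent of PAGE_SIZE. With the default of
 * 80, a page that does not shrink to 80% or less is rejected (zswap
 * returns -E2BIG and the page goes to the swap device uncompressed).
 */
static int page_acceptable(size_t dlen, unsigned int max_compression_ratio)
{
	return dlen * 100 / PAGE_SIZE <= max_compression_ratio;
}
```

So with the defaults, a page that compresses to 2 KB is stored, while one that only compresses to 4000 bytes is passed through to the backing swap device.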
To enable zswap, the "enabled" attribute must be set to 1 at boot time.
Zswap allows the compressor to be selected at kernel boot time by
setting the "compressor" attribute. The default compressor is lzo.
A debugfs interface is provided for various statistics about pool size,
number of pages stored, and various counters for the reasons pages
are rejected.
Performance, Kernel Building:
Setup
========
Gentoo w/ kernel v3.7-rc7
Quad-core i5-2500 @ 3.3GHz
512MB DDR3 1600MHz (limited with mem=512m on boot)
Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
majflt are major page faults reported by the time command
pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
the make -jN
Summary
========
* Zswap reduces I/O and improves performance at all swap pressure levels.
* Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
over 1.5GB of I/O, and cut runtime in half.
Details
========
I/O (in pages)
base zswap change change
N pswpin pswpout majflt I/O sum pswpin pswpout majflt I/O sum %I/O MB
8 1 335 291 627 0 0 249 249 -60% 1
12 3688 14315 5290 23293 123 860 5954 6937 -70% 64
16 12711 46179 16803 75693 2936 7390 46092 56418 -25% 75
20 42178 133781 49898 225857 9460 28382 92951 130793 -42% 371
24 96079 357280 105242 558601 7719 18484 109309 135512 -76% 1653
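The derived %I/O and MB columns above can be reproduced from the raw "I/O sum" page counts, assuming 4 KiB pages and round-to-nearest (function names are mine, for illustration):

```c
#include <assert.h>

/*
 * Recomputes the derived columns of the I/O table from the base and
 * zswap "I/O sum" page counts: percent change in I/O, and MB of I/O
 * avoided (4096-byte pages, rounded to the nearest integer).
 */
static long round_nearest(double x)
{
	return (long)(x >= 0 ? x + 0.5 : x - 0.5);
}

static long pct_change(long base_pages, long zswap_pages)
{
	return round_nearest((zswap_pages - base_pages) * 100.0 / base_pages);
}

static long mb_saved(long base_pages, long zswap_pages)
{
	return round_nearest((base_pages - zswap_pages) * 4096.0
			     / (1024 * 1024));
}
```

For the 24-thread row, pct_change(558601, 135512) gives -76 and mb_saved(558601, 135512) gives 1653, matching the table.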
Runtime (in seconds)
N base zswap %change
8 107 107 0%
12 128 110 -14%
16 191 179 -6%
20 371 240 -35%
24 570 267 -53%
%CPU utilization (out of 400% on 4 cpus)
N base zswap %change
8 317 319 1%
12 267 311 16%
16 179 191 7%
20 94 143 52%
24 60 128 113%
Seth Jennings (9):
staging: zsmalloc: add gfp flags to zs_create_pool
staging: zsmalloc: remove unsed pool name
staging: zsmalloc: add page alloc/free callbacks
staging: zsmalloc: make CLASS_DELTA relative to PAGE_SIZE
debugfs: add get/set for atomic types
zsmalloc: promote to lib/
mm: break up swap_writepage() for frontswap backends
zswap: add to mm/
zswap: add documentation
Documentation/vm/zswap.txt | 73 ++
drivers/staging/Kconfig | 2 -
drivers/staging/Makefile | 1 -
drivers/staging/zcache/zcache-main.c | 7 +-
drivers/staging/zram/zram_drv.c | 4 +-
drivers/staging/zram/zram_drv.h | 3 +-
drivers/staging/zsmalloc/Kconfig | 10 -
drivers/staging/zsmalloc/Makefile | 3 -
drivers/staging/zsmalloc/zsmalloc-main.c | 1064 -----------------------------
drivers/staging/zsmalloc/zsmalloc.h | 43 --
fs/debugfs/file.c | 42 ++
include/linux/debugfs.h | 2 +
include/linux/swap.h | 4 +
include/linux/zsmalloc.h | 49 ++
lib/Kconfig | 18 +
lib/Makefile | 1 +
lib/zsmalloc.c | 1076 ++++++++++++++++++++++++++++++
mm/Kconfig | 15 +
mm/Makefile | 1 +
mm/page_io.c | 22 +-
mm/swap_state.c | 2 +-
mm/zswap.c | 1066 +++++++++++++++++++++++++++++
22 files changed, 2371 insertions(+), 1137 deletions(-)
create mode 100644 Documentation/vm/zswap.txt
delete mode 100644 drivers/staging/zsmalloc/Kconfig
delete mode 100644 drivers/staging/zsmalloc/Makefile
delete mode 100644 drivers/staging/zsmalloc/zsmalloc-main.c
delete mode 100644 drivers/staging/zsmalloc/zsmalloc.h
create mode 100644 include/linux/zsmalloc.h
create mode 100644 lib/zsmalloc.c
create mode 100644 mm/zswap.c
--
1.7.9.5
^ permalink raw reply [flat|nested] 10+ messages in thread* [PATCHv2 8/9] zswap: add to mm/ 2013-01-07 20:24 [PATCHv2 0/9] zswap: compressed swap caching Seth Jennings @ 2013-01-07 20:24 ` Seth Jennings 2013-01-08 17:15 ` Dave Hansen 2013-01-25 22:44 ` Rik van Riel 0 siblings, 2 replies; 10+ messages in thread From: Seth Jennings @ 2013-01-07 20:24 UTC (permalink / raw) To: Greg Kroah-Hartman, Andrew Morton Cc: Seth Jennings, Nitin Gupta, Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman, linux-mm, linux-kernel, devel zswap is a thin compression backend for frontswap. It receives pages from frontswap and attempts to store them in a compressed memory pool, resulting in an effective partial memory reclaim and dramatically reduced swap device I/O. Additional, in most cases, pages can be retrieved from this compressed store much more quickly than reading from tradition swap devices resulting in faster performance for many workloads. This patch adds the zswap driver to mm/ Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com> --- mm/Kconfig | 15 + mm/Makefile | 1 + mm/zswap.c | 1066 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 1082 insertions(+) create mode 100644 mm/zswap.c diff --git a/mm/Kconfig b/mm/Kconfig index 278e3ab..14b9acb 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -446,3 +446,18 @@ config FRONTSWAP and swap data is stored as normal on the matching swap device. If unsure, say Y to enable frontswap. + +config ZSWAP + bool "In-kernel swap page compression" + depends on FRONTSWAP && CRYPTO + select CRYPTO_LZO + select ZSMALLOC + default n + help + Zswap is a backend for the frontswap mechanism in the VMM. + It receives pages from frontswap and attempts to store them + in a compressed memory pool, resulting in an effective + partial memory reclaim. 
In addition, pages and be retrieved + from this compressed store much faster than most tradition + swap devices resulting in reduced I/O and faster performance + for many workloads. diff --git a/mm/Makefile b/mm/Makefile index 3a46287..1b1ed5c 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o obj-$(CONFIG_BOUNCE) += bounce.o obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o obj-$(CONFIG_FRONTSWAP) += frontswap.o +obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o obj-$(CONFIG_NUMA) += mempolicy.o diff --git a/mm/zswap.c b/mm/zswap.c new file mode 100644 index 0000000..e76dd0d --- /dev/null +++ b/mm/zswap.c @@ -0,0 +1,1066 @@ +/* + * zswap-drv.c - zswap driver file + * + * zswap is a backend for frontswap that takes pages that are in the + * process of being swapped out and attempts to compress them and store + * them in a RAM-based memory pool. This results in a significant I/O + * reduction on the real swap device and, in the case of a slow swap + * device, can also improve workload performance. + * + * Copyright (C) 2012 Seth Jennings <sjenning@linux.vnet.ibm.com> + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version 2 + * of the License, or (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+*/ + +#include <linux/module.h> +#include <linux/cpu.h> +#include <linux/highmem.h> +#include <linux/slab.h> +#include <linux/spinlock.h> +#include <linux/types.h> +#include <linux/atomic.h> +#include <linux/frontswap.h> +#include <linux/rbtree.h> +#include <linux/swap.h> +#include <linux/crypto.h> +#include <linux/mempool.h> +#include <linux/zsmalloc.h> + +#include <linux/mm_types.h> +#include <linux/page-flags.h> +#include <linux/swapops.h> +#include <linux/writeback.h> +#include <linux/pagemap.h> + +/********************************* +* statistics +**********************************/ +/* Number of memory pages used by the compressed pool */ +static atomic_t zswap_pool_pages = ATOMIC_INIT(0); +/* The number of compressed pages currently stored in zswap */ +static atomic_t zswap_stored_pages = ATOMIC_INIT(0); +/* The number of outstanding pages awaiting writeback */ +static atomic_t zswap_outstanding_flushes = ATOMIC_INIT(0); + +/* + * The statistics below are not protected from concurrent access for + * performance reasons so they may not be a 100% accurate. However, + * the do provide useful information on roughly how many times a + * certain event is occurring. 
+*/ +static u64 zswap_flushed_pages; +static u64 zswap_reject_compress_poor; +static u64 zswap_flush_attempted; +static u64 zswap_reject_tmppage_fail; +static u64 zswap_reject_flush_fail; +static u64 zswap_reject_zsmalloc_fail; +static u64 zswap_reject_kmemcache_fail; +static u64 zswap_saved_by_flush; +static u64 zswap_duplicate_entry; + +/********************************* +* tunables +**********************************/ +/* Enable/disable zswap (enabled by default, fixed at boot for now) */ +static bool zswap_enabled; +module_param_named(enabled, zswap_enabled, bool, 0); + +/* Compressor to be used by zswap (fixed at boot for now) */ +#define ZSWAP_COMPRESSOR_DEFAULT "lzo" +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; +module_param_named(compressor, zswap_compressor, charp, 0); + +/* The maximum percentage of memory that the compressed pool can occupy */ +static unsigned int zswap_max_pool_percent = 20; +module_param_named(max_pool_percent, + zswap_max_pool_percent, uint, 0644); + +/* + * Maximum compression ratio, as as percentage, for an acceptable + * compressed page. Any pages that do not compress by at least + * this ratio will be rejected. 
+*/ +static unsigned int zswap_max_compression_ratio = 80; +module_param_named(max_compression_ratio, + zswap_max_compression_ratio, uint, 0644); + +/********************************* +* compression functions +**********************************/ +/* per-cpu compression transforms */ +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms; + +enum comp_op { + ZSWAP_COMPOP_COMPRESS, + ZSWAP_COMPOP_DECOMPRESS +}; + +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen, + u8 *dst, unsigned int *dlen) +{ + struct crypto_comp *tfm; + int ret; + + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu()); + switch (op) { + case ZSWAP_COMPOP_COMPRESS: + ret = crypto_comp_compress(tfm, src, slen, dst, dlen); + break; + case ZSWAP_COMPOP_DECOMPRESS: + ret = crypto_comp_decompress(tfm, src, slen, dst, dlen); + break; + default: + ret = -EINVAL; + } + + put_cpu(); + return ret; +} + +static int __init zswap_comp_init(void) +{ + if (!crypto_has_comp(zswap_compressor, 0, 0)) { + pr_info("zswap: %s compressor not available\n", + zswap_compressor); + /* fall back to default compressor */ + zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT; + if (!crypto_has_comp(zswap_compressor, 0, 0)) + /* can't even load the default compressor */ + return -ENODEV; + } + pr_info("zswap: using %s compressor\n", zswap_compressor); + + /* alloc percpu transforms */ + zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *); + if (!zswap_comp_pcpu_tfms) + return -ENOMEM; + return 0; +} + +static void zswap_comp_exit(void) +{ + /* free percpu transforms */ + if (zswap_comp_pcpu_tfms) + free_percpu(zswap_comp_pcpu_tfms); +} + +/********************************* +* data structures +**********************************/ +struct zswap_entry { + struct rb_node rbnode; + struct list_head lru; + int refcount; + unsigned type; + pgoff_t offset; + unsigned long handle; + unsigned int length; +}; + +/* + * The tree lock in the zswap_tree struct protects a few things: + * - the rbtree + * - the lru 
list + * - the refcount field of each entry in the tree + */ +struct zswap_tree { + struct rb_root rbroot; + struct list_head lru; + spinlock_t lock; + struct zs_pool *pool; +}; + +static struct zswap_tree *zswap_trees[MAX_SWAPFILES]; + +/********************************* +* zswap entry functions +**********************************/ +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache" +static struct kmem_cache *zswap_entry_cache; + +static inline int zswap_entry_cache_create(void) +{ + zswap_entry_cache = + kmem_cache_create(ZSWAP_KMEM_CACHE_NAME, + sizeof(struct zswap_entry), 0, 0, NULL); + return (zswap_entry_cache == NULL); +} + +static inline void zswap_entry_cache_destory(void) +{ + kmem_cache_destroy(zswap_entry_cache); +} + +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp) +{ + struct zswap_entry *entry; + entry = kmem_cache_alloc(zswap_entry_cache, gfp); + if (!entry) + return NULL; + INIT_LIST_HEAD(&entry->lru); + entry->refcount = 1; + return entry; +} + +static inline void zswap_entry_cache_free(struct zswap_entry *entry) +{ + kmem_cache_free(zswap_entry_cache, entry); +} + +static inline void zswap_entry_get(struct zswap_entry *entry) +{ + entry->refcount++; +} + +static inline int zswap_entry_put(struct zswap_entry *entry) +{ + entry->refcount--; + return entry->refcount; +} + +/********************************* +* rbtree functions +**********************************/ +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset) +{ + struct rb_node *node = root->rb_node; + struct zswap_entry *entry; + + while (node) { + entry = rb_entry(node, struct zswap_entry, rbnode); + if (entry->offset > offset) + node = node->rb_left; + else if (entry->offset < offset) + node = node->rb_right; + else + return entry; + } + return NULL; +} + +/* + * In the case that a entry with the same offset is found, it a pointer to + * the existing entry is stored in dupentry and the function returns -EEXIST +*/ +static int 
zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry, + struct zswap_entry **dupentry) +{ + struct rb_node **link = &root->rb_node, *parent = NULL; + struct zswap_entry *myentry; + + while (*link) { + parent = *link; + myentry = rb_entry(parent, struct zswap_entry, rbnode); + if (myentry->offset > entry->offset) + link = &(*link)->rb_left; + else if (myentry->offset < entry->offset) + link = &(*link)->rb_right; + else { + *dupentry = myentry; + return -EEXIST; + } + } + rb_link_node(&entry->rbnode, parent, link); + rb_insert_color(&entry->rbnode, root); + return 0; +} + +/********************************* +* per-cpu code +**********************************/ +static DEFINE_PER_CPU(u8 *, zswap_dstmem); + +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu) +{ + struct crypto_comp *tfm; + u8 *dst; + + switch (action) { + case CPU_UP_PREPARE: + tfm = crypto_alloc_comp(zswap_compressor, 0, 0); + if (IS_ERR(tfm)) { + pr_err("zswap: can't allocate compressor transform\n"); + return NOTIFY_BAD; + } + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm; + dst = (u8 *)__get_free_pages(GFP_KERNEL, 1); + if (!dst) { + pr_err("zswap: can't allocate compressor buffer\n"); + crypto_free_comp(tfm); + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; + return NOTIFY_BAD; + } + per_cpu(zswap_dstmem, cpu) = dst; + break; + case CPU_DEAD: + case CPU_UP_CANCELED: + tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu); + if (tfm) { + crypto_free_comp(tfm); + *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL; + } + dst = per_cpu(zswap_dstmem, cpu); + if (dst) { + free_pages((unsigned long)dst, 1); + per_cpu(zswap_dstmem, cpu) = NULL; + } + break; + default: + break; + } + return NOTIFY_OK; +} + +static int zswap_cpu_notifier(struct notifier_block *nb, + unsigned long action, void *pcpu) +{ + unsigned long cpu = (unsigned long)pcpu; + return __zswap_cpu_notifier(action, cpu); +} + +static struct notifier_block zswap_cpu_notifier_block = { + .notifier_call = 
zswap_cpu_notifier +}; + +static int zswap_cpu_init(void) +{ + unsigned long cpu; + + get_online_cpus(); + for_each_online_cpu(cpu) + if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK) + goto cleanup; + register_cpu_notifier(&zswap_cpu_notifier_block); + put_online_cpus(); + return 0; + +cleanup: + for_each_online_cpu(cpu) + __zswap_cpu_notifier(CPU_UP_CANCELED, cpu); + put_online_cpus(); + return -ENOMEM; +} + +/********************************* +* zsmalloc callbacks +**********************************/ +static mempool_t *zswap_page_pool; + +static u64 zswap_pool_limit_hit; + +static inline unsigned int zswap_max_pool_pages(void) +{ + return zswap_max_pool_percent * totalram_pages / 100; +} + +static inline int zswap_page_pool_create(void) +{ + zswap_page_pool = mempool_create_page_pool(256, 0); + if (!zswap_page_pool) + return -ENOMEM; + return 0; +} + +static inline void zswap_page_pool_destroy(void) +{ + mempool_destroy(zswap_page_pool); +} + +static struct page *zswap_alloc_page(gfp_t flags) +{ + struct page *page; + + if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) { + zswap_pool_limit_hit++; + return NULL; + } + page = mempool_alloc(zswap_page_pool, flags); + if (page) + atomic_inc(&zswap_pool_pages); + return page; +} + +static void zswap_free_page(struct page *page) +{ + mempool_free(page, zswap_page_pool); + atomic_dec(&zswap_pool_pages); +} + +static struct zs_ops zswap_zs_ops = { + .alloc = zswap_alloc_page, + .free = zswap_free_page +}; + +/********************************* +* flush code +**********************************/ +static void zswap_end_swap_write(struct bio *bio, int err) +{ + end_swap_bio_write(bio, err); + atomic_dec(&zswap_outstanding_flushes); + zswap_flushed_pages++; +} + +/* + * zswap_get_swap_cache_page + * + * This is an adaption of read_swap_cache_async() + * + * If success, page is returned in retpage + * Returns 0 if page was already in the swap cache, page is not locked + * Returns 1 if the new page needs 
to be populated, page is locked + */ +static int zswap_get_swap_cache_page(swp_entry_t entry, + struct page **retpage) +{ + struct page *found_page, *new_page = NULL; + int err; + + *retpage = NULL; + do { + /* + * First check the swap cache. Since this is normally + * called after lookup_swap_cache() failed, re-calling + * that would confuse statistics. + */ + found_page = find_get_page(&swapper_space, entry.val); + if (found_page) + break; + + /* + * Get a new page to read into from swap. + */ + if (!new_page) { + new_page = alloc_page(GFP_KERNEL); + if (!new_page) + break; /* Out of memory */ + } + + /* + * call radix_tree_preload() while we can wait. + */ + err = radix_tree_preload(GFP_KERNEL); + if (err) + break; + + /* + * Swap entry may have been freed since our caller observed it. + */ + err = swapcache_prepare(entry); + if (err == -EEXIST) { /* seems racy */ + radix_tree_preload_end(); + continue; + } + if (err) { /* swp entry is obsolete ? */ + radix_tree_preload_end(); + break; + } + + /* May fail (-ENOMEM) if radix-tree node allocation failed. */ + __set_page_locked(new_page); + SetPageSwapBacked(new_page); + err = __add_to_swap_cache(new_page, entry); + if (likely(!err)) { + radix_tree_preload_end(); + lru_cache_add_anon(new_page); + *retpage = new_page; + return 1; + } + radix_tree_preload_end(); + ClearPageSwapBacked(new_page); + __clear_page_locked(new_page); + /* + * add_to_swap_cache() doesn't return -EEXIST, so we can safely + * clear SWAP_HAS_CACHE flag. 
+ */ + swapcache_free(entry, NULL); + } while (err != -ENOMEM); + + if (new_page) + page_cache_release(new_page); + if (!found_page) + return -ENOMEM; + *retpage = found_page; + return 0; +} + +static int zswap_flush_entry(struct zswap_entry *entry) +{ + unsigned long type = entry->type; + struct zswap_tree *tree = zswap_trees[type]; + struct page *page; + swp_entry_t swpentry; + u8 *src, *dst; + unsigned int dlen; + int ret, refcount; + struct writeback_control wbc = { + .sync_mode = WB_SYNC_NONE, + }; + + /* get/allocate page in the swap cache */ + swpentry = swp_entry(type, entry->offset); + ret = zswap_get_swap_cache_page(swpentry, &page); + if (ret < 0) + return ret; + else if (ret) { + /* decompress */ + dlen = PAGE_SIZE; + src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO); + dst = kmap_atomic(page); + ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length, + dst, &dlen); + kunmap_atomic(dst); + zs_unmap_object(tree->pool, entry->handle); + BUG_ON(ret); + BUG_ON(dlen != PAGE_SIZE); + SetPageUptodate(page); + } else { + /* page is already in the swap cache, ignore for now */ + spin_lock(&tree->lock); + refcount = zswap_entry_put(entry); + spin_unlock(&tree->lock); + + if (likely(refcount)) + return 0; + + /* if the refcount is zero, invalidate must have come in */ + /* free */ + zs_free(tree->pool, entry->handle); + zswap_entry_cache_free(entry); + atomic_dec(&zswap_stored_pages); + + return 0; + } + + /* start writeback */ + SetPageReclaim(page); + /* + * Return value is ignored here because it doesn't change anything + * for us. Page is returned unlocked. 
+ */ + (void)__swap_writepage(page, &wbc, zswap_end_swap_write); + page_cache_release(page); + atomic_inc(&zswap_outstanding_flushes); + + /* remove */ + spin_lock(&tree->lock); + refcount = zswap_entry_put(entry); + if (refcount > 1) { + /* load in progress, load will free */ + spin_unlock(&tree->lock); + return 0; + } + if (refcount == 1) + /* no invalidate yet, remove from rbtree */ + rb_erase(&entry->rbnode, &tree->rbroot); + spin_unlock(&tree->lock); + + /* free */ + zs_free(tree->pool, entry->handle); + zswap_entry_cache_free(entry); + atomic_dec(&zswap_stored_pages); + + return 0; +} + +static void zswap_flush_entries(unsigned type, int nr) +{ + struct zswap_tree *tree = zswap_trees[type]; + struct zswap_entry *entry; + int i, ret; + +/* + * This limits is arbitrary for now until a better + * policy can be implemented. This is so we don't + * eat all of RAM decompressing pages for writeback. + */ +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64 + if (atomic_read(&zswap_outstanding_flushes) > + ZSWAP_MAX_OUTSTANDING_FLUSHES) + return; + + for (i = 0; i < nr; i++) { + /* dequeue from lru */ + spin_lock(&tree->lock); + if (list_empty(&tree->lru)) { + spin_unlock(&tree->lock); + break; + } + entry = list_first_entry(&tree->lru, + struct zswap_entry, lru); + list_del(&entry->lru); + zswap_entry_get(entry); + spin_unlock(&tree->lock); + ret = zswap_flush_entry(entry); + if (ret) { + /* put back on the lru */ + spin_lock(&tree->lock); + list_add(&entry->lru, &tree->lru); + spin_unlock(&tree->lock); + } else { + if (atomic_read(&zswap_outstanding_flushes) > + ZSWAP_MAX_OUTSTANDING_FLUSHES) + break; + } + } +} + +/******************************************* +* page pool for temporary compression result +********************************************/ +#define ZSWAP_TMPPAGE_POOL_PAGES 16 +static LIST_HEAD(zswap_tmppage_list); +static DEFINE_SPINLOCK(zswap_tmppage_lock); + +static void zswap_tmppage_pool_destroy(void) +{ + struct page *page, *tmppage; + + 
spin_lock(&zswap_tmppage_lock); + list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) { + list_del(&page->lru); + __free_pages(page, 1); + } + spin_unlock(&zswap_tmppage_lock); +} + +static int zswap_tmppage_pool_create(void) +{ + int i; + struct page *page; + + for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) { + page = alloc_pages(GFP_KERNEL, 1); + if (!page) { + zswap_tmppage_pool_destroy(); + return -ENOMEM; + } + spin_lock(&zswap_tmppage_lock); + list_add(&page->lru, &zswap_tmppage_list); + spin_unlock(&zswap_tmppage_lock); + } + return 0; +} + +static inline struct page *zswap_tmppage_alloc(void) +{ + struct page *page; + + spin_lock(&zswap_tmppage_lock); + if (list_empty(&zswap_tmppage_list)) { + spin_unlock(&zswap_tmppage_lock); + return NULL; + } + page = list_first_entry(&zswap_tmppage_list, struct page, lru); + list_del(&page->lru); + spin_unlock(&zswap_tmppage_lock); + return page; +} + +static inline void zswap_tmppage_free(struct page *page) +{ + spin_lock(&zswap_tmppage_lock); + list_add(&page->lru, &zswap_tmppage_list); + spin_unlock(&zswap_tmppage_lock); +} + +/********************************* +* frontswap hooks +**********************************/ +/* attempts to compress and store an single page */ +static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *page) +{ + struct zswap_tree *tree = zswap_trees[type]; + struct zswap_entry *entry, *dupentry; + int ret; + unsigned int dlen = PAGE_SIZE; + unsigned long handle; + char *buf; + u8 *src, *dst, *tmpdst; + struct page *tmppage; + bool flush_attempted = 0; + + if (!tree) { + ret = -ENODEV; + goto reject; + } + + /* compress */ + dst = get_cpu_var(zswap_dstmem); + src = kmap_atomic(page); + ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen); + kunmap_atomic(src); + if (ret) { + ret = -EINVAL; + goto freepage; + } + if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) { + zswap_reject_compress_poor++; + ret = -E2BIG; + goto freepage; + } + 
+	/* store */
+	handle = zs_malloc(tree->pool, dlen,
+		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+			__GFP_NOWARN);
+	if (!handle) {
+		zswap_flush_attempted++;
+		/*
+		 * Copy compressed buffer out of per-cpu storage so
+		 * we can re-enable preemption.
+		 */
+		tmppage = zswap_tmppage_alloc();
+		if (!tmppage) {
+			zswap_reject_tmppage_fail++;
+			ret = -ENOMEM;
+			goto freepage;
+		}
+		flush_attempted = 1;
+		tmpdst = page_address(tmppage);
+		memcpy(tmpdst, dst, dlen);
+		dst = tmpdst;
+		put_cpu_var(zswap_dstmem);
+
+		/* try to free up some space */
+		/* TODO: replace with more targeted policy */
+		zswap_flush_entries(type, 16);
+		/* try again, allowing wait */
+		handle = zs_malloc(tree->pool, dlen,
+			__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+				__GFP_NOWARN);
+		if (!handle) {
+			/* still no space, fail */
+			zswap_reject_zsmalloc_fail++;
+			ret = -ENOMEM;
+			goto freepage;
+		}
+		zswap_saved_by_flush++;
+	}
+
+	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
+	memcpy(buf, dst, dlen);
+	zs_unmap_object(tree->pool, handle);
+	if (flush_attempted)
+		zswap_tmppage_free(tmppage);
+	else
+		put_cpu_var(zswap_dstmem);
+
+	/* allocate entry */
+	entry = zswap_entry_cache_alloc(GFP_KERNEL);
+	if (!entry) {
+		zswap_reject_kmemcache_fail++;
+		ret = -ENOMEM;
+		goto reject;
+	}
+
+	/* populate entry */
+	entry->type = type;
+	entry->offset = offset;
+	entry->handle = handle;
+	entry->length = dlen;
+
+	/* map */
+	spin_lock(&tree->lock);
+	do {
+		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
+		if (ret == -EEXIST) {
+			zswap_duplicate_entry++;
+			/* remove from rbtree and lru */
+			rb_erase(&dupentry->rbnode, &tree->rbroot);
+			if (dupentry->lru.next != LIST_POISON1)
+				list_del(&dupentry->lru);
+			if (!zswap_entry_put(dupentry)) {
+				/* free */
+				zs_free(tree->pool, dupentry->handle);
+				zswap_entry_cache_free(dupentry);
+				atomic_dec(&zswap_stored_pages);
+			}
+		}
+	} while (ret == -EEXIST);
+	list_add_tail(&entry->lru, &tree->lru);
+	spin_unlock(&tree->lock);
+
+	/* update stats */
+	atomic_inc(&zswap_stored_pages);
+
+	return 0;
+
+freepage:
+	if (flush_attempted)
+		zswap_tmppage_free(tmppage);
+	else
+		put_cpu_var(zswap_dstmem);
+reject:
+	return ret;
+}
+
+/*
+ * returns 0 if the page was successfully decompressed
+ * returns -1 on entry not found or error
+ */
+static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry;
+	u8 *src, *dst;
+	unsigned int dlen;
+	int refcount;
+
+	/* find */
+	spin_lock(&tree->lock);
+	entry = zswap_rb_search(&tree->rbroot, offset);
+	if (!entry) {
+		/* entry was flushed */
+		spin_unlock(&tree->lock);
+		return -1;
+	}
+	zswap_entry_get(entry);
+
+	/* remove from lru */
+	if (entry->lru.next != LIST_POISON1)
+		list_del(&entry->lru);
+	spin_unlock(&tree->lock);
+
+	/* decompress */
+	dlen = PAGE_SIZE;
+	src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
+	dst = kmap_atomic(page);
+	zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+		dst, &dlen);
+	kunmap_atomic(dst);
+	zs_unmap_object(tree->pool, entry->handle);
+
+	spin_lock(&tree->lock);
+	refcount = zswap_entry_put(entry);
+	if (likely(refcount)) {
+		list_add_tail(&entry->lru, &tree->lru);
+		spin_unlock(&tree->lock);
+		return 0;
+	}
+	spin_unlock(&tree->lock);
+
+	/*
+	 * We don't have to unlink from the rbtree because zswap_flush_entry()
+	 * or zswap_frontswap_invalidate_page() has already done this for us
+	 * if we are the last reference.
+	 */
+	/* free */
+	zs_free(tree->pool, entry->handle);
+	zswap_entry_cache_free(entry);
+	atomic_dec(&zswap_stored_pages);
+
+	return 0;
+}
+
+/* invalidates a single page */
+static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry;
+	int refcount;
+
+	if (!tree)
+		return;
+
+	/* find */
+	spin_lock(&tree->lock);
+	entry = zswap_rb_search(&tree->rbroot, offset);
+	if (!entry) {
+		/* entry was flushed */
+		spin_unlock(&tree->lock);
+		return;
+	}
+
+	/* remove from rbtree and lru */
+	rb_erase(&entry->rbnode, &tree->rbroot);
+	if (entry->lru.next != LIST_POISON1)
+		list_del(&entry->lru);
+	refcount = zswap_entry_put(entry);
+	spin_unlock(&tree->lock);
+	if (refcount) {
+		/* must be flushing */
+		return;
+	}
+
+	/* free */
+	zs_free(tree->pool, entry->handle);
+	zswap_entry_cache_free(entry);
+	atomic_dec(&zswap_stored_pages);
+}
+
+/* invalidates all pages for the given swap type */
+static void zswap_frontswap_invalidate_area(unsigned type)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct rb_node *node, *next;
+	struct zswap_entry *entry;
+
+	if (!tree)
+		return;
+
+	/* walk the tree and free everything */
+	spin_lock(&tree->lock);
+	node = rb_first(&tree->rbroot);
+	while (node) {
+		entry = rb_entry(node, struct zswap_entry, rbnode);
+		zs_free(tree->pool, entry->handle);
+		next = rb_next(node);
+		zswap_entry_cache_free(entry);
+		node = next;
+	}
+	tree->rbroot = RB_ROOT;
+	INIT_LIST_HEAD(&tree->lru);
+	spin_unlock(&tree->lock);
+}
+
+/* NOTE: this is called in atomic context from swapon and must not sleep */
+static void zswap_frontswap_init(unsigned type)
+{
+	struct zswap_tree *tree;
+
+	tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT);
+	if (!tree)
+		goto err;
+	tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
+	if (!tree->pool)
+		goto freetree;
+	tree->rbroot = RB_ROOT;
+	INIT_LIST_HEAD(&tree->lru);
+	spin_lock_init(&tree->lock);
+	zswap_trees[type] = tree;
+	return;
+
+freetree:
+	kfree(tree);
+err:
+	pr_err("zswap: alloc failed, zswap disabled for swap type %d\n", type);
+}
+
+static struct frontswap_ops zswap_frontswap_ops = {
+	.store = zswap_frontswap_store,
+	.load = zswap_frontswap_load,
+	.invalidate_page = zswap_frontswap_invalidate_page,
+	.invalidate_area = zswap_frontswap_invalidate_area,
+	.init = zswap_frontswap_init
+};
+
+/*********************************
+* debugfs functions
+**********************************/
+#ifdef CONFIG_DEBUG_FS
+#include <linux/debugfs.h>
+
+static struct dentry *zswap_debugfs_root;
+
+static int __init zswap_debugfs_init(void)
+{
+	if (!debugfs_initialized())
+		return -ENODEV;
+
+	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
+	if (!zswap_debugfs_root)
+		return -ENOMEM;
+
+	debugfs_create_u64("saved_by_flush", S_IRUGO,
+			zswap_debugfs_root, &zswap_saved_by_flush);
+	debugfs_create_u64("pool_limit_hit", S_IRUGO,
+			zswap_debugfs_root, &zswap_pool_limit_hit);
+	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
+			zswap_debugfs_root, &zswap_flush_attempted);
+	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_tmppage_fail);
+	debugfs_create_u64("reject_flush_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_flush_fail);
+	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
+	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
+	debugfs_create_u64("reject_compress_poor", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_compress_poor);
+	debugfs_create_u64("flushed_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_flushed_pages);
+	debugfs_create_u64("duplicate_entry", S_IRUGO,
+			zswap_debugfs_root, &zswap_duplicate_entry);
+	debugfs_create_atomic_t("pool_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_pool_pages);
+	debugfs_create_atomic_t("stored_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_stored_pages);
+	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
+			zswap_debugfs_root, &zswap_outstanding_flushes);
+
+	return 0;
+}
+
+static void __exit zswap_debugfs_exit(void)
+{
+	if (zswap_debugfs_root)
+		debugfs_remove_recursive(zswap_debugfs_root);
+}
+#else
+static inline int __init zswap_debugfs_init(void)
+{
+	return 0;
+}
+
+static inline void __exit zswap_debugfs_exit(void) { }
+#endif
+
+/*********************************
+* module init and exit
+**********************************/
+static int __init init_zswap(void)
+{
+	if (!zswap_enabled)
+		return 0;
+
+	pr_info("loading zswap\n");
+	if (zswap_entry_cache_create()) {
+		pr_err("zswap: entry cache creation failed\n");
+		goto error;
+	}
+	if (zswap_page_pool_create()) {
+		pr_err("zswap: page pool initialization failed\n");
+		goto pagepoolfail;
+	}
+	if (zswap_tmppage_pool_create()) {
+		pr_err("zswap: workmem pool initialization failed\n");
+		goto tmppoolfail;
+	}
+	if (zswap_comp_init()) {
+		pr_err("zswap: compressor initialization failed\n");
+		goto compfail;
+	}
+	if (zswap_cpu_init()) {
+		pr_err("zswap: per-cpu initialization failed\n");
+		goto pcpufail;
+	}
+	frontswap_register_ops(&zswap_frontswap_ops);
+	if (zswap_debugfs_init())
+		pr_warn("zswap: debugfs initialization failed\n");
+	return 0;
+pcpufail:
+	zswap_comp_exit();
+compfail:
+	zswap_tmppage_pool_destroy();
+tmppoolfail:
+	zswap_page_pool_destroy();
+pagepoolfail:
+	zswap_entry_cache_destory();
+error:
+	return -ENOMEM;
+}
+/* must be late so crypto has time to come up */
+late_initcall(init_zswap);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>");
+MODULE_DESCRIPTION("Compression backend for frontswap pages");
-- 
1.7.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [PATCHv2 8/9] zswap: add to mm/
From: Dave Hansen @ 2013-01-08 17:15 UTC (permalink / raw)
To: Seth Jennings
Cc: Greg Kroah-Hartman, Andrew Morton, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, linux-mm, linux-kernel, devel

On 01/07/2013 12:24 PM, Seth Jennings wrote:
> +struct zswap_tree {
> +	struct rb_root rbroot;
> +	struct list_head lru;
> +	spinlock_t lock;
> +	struct zs_pool *pool;
> +};

BTW, I spent some time trying to get this lock contended.  You thought
the anon_vma locks would dominate and this spinlock would not end up
very contended.

I figured that if I hit zswap from a bunch of CPUs that _didn't_ use
anonymous memory (and thus the anon_vma locks), some more contention
would pop up.  I did that with a bunch of CPUs writing to tmpfs, and
this lock was still well down below anon_vma.  The anon_vma contention
was obviously coming from _other_ anonymous memory around.

IOW, I feel a bit better about this lock.  I only tested on 16 cores on
a system with relatively light NUMA characteristics.  It might be the
bottleneck if all the anonymous memory on the system is mlock()'d and
you're pounding on tmpfs, but that's pretty contrived.
* RE: [PATCHv2 8/9] zswap: add to mm/
From: Dan Magenheimer @ 2013-01-08 17:54 UTC (permalink / raw)
To: Dave Hansen, Seth Jennings
Cc: Greg Kroah-Hartman, Andrew Morton, Nitin Gupta, Minchan Kim,
	Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman,
	Johannes Weiner, Rik van Riel, Larry Woodman, linux-mm,
	linux-kernel, devel

> From: Dave Hansen [mailto:dave@linux.vnet.ibm.com]
> Sent: Tuesday, January 08, 2013 10:15 AM
> To: Seth Jennings
> Subject: Re: [PATCHv2 8/9] zswap: add to mm/
>
> On 01/07/2013 12:24 PM, Seth Jennings wrote:
> > +struct zswap_tree {
> > +	struct rb_root rbroot;
> > +	struct list_head lru;
> > +	spinlock_t lock;
> > +	struct zs_pool *pool;
> > +};
>
> BTW, I spent some time trying to get this lock contended.  You thought
> the anon_vma locks would dominate and this spinlock would not end up
> very contended.
>
> I figured that if I hit zswap from a bunch of CPUs that _didn't_ use
> anonymous memory (and thus the anon_vma locks) that some more contention
> would pop up.  I did that with a bunch of CPUs writing to tmpfs, and
> this lock was still well down below anon_vma.  The anon_vma contention
> was obviously coming from _other_ anonymous memory around.
>
> IOW, I feel a bit better about this lock.  I only tested on 16 cores on
> a system with relatively light NUMA characteristics, and it might be the
> bottleneck if all the anonymous memory on the system is mlock()'d and
> you're pounding on tmpfs, but that's pretty contrived.
IIUC, Seth's current "flush" code only gets called in the context of a
frontswap_store and is very limited in what it does, whereas the goal
is for flushing to run as an independent thread and to do more complex
things (e.g. so that whole pages can be reclaimed rather than random
zpages).  So it will be interesting to re-test contention when zswap
is complete.

Dan

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.
* Re: [PATCHv2 8/9] zswap: add to mm/
From: Rik van Riel @ 2013-01-25 22:44 UTC (permalink / raw)
To: Seth Jennings
Cc: Greg Kroah-Hartman, Andrew Morton, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Larry Woodman,
	linux-mm, linux-kernel, devel

On 01/07/2013 03:24 PM, Seth Jennings wrote:
> zswap is a thin compression backend for frontswap.  It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
>
> Additionally, in most cases, pages can be retrieved from this
> compressed store much more quickly than reading from traditional
> swap devices, resulting in faster performance for many workloads.
>
> This patch adds the zswap driver to mm/
>
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>

I like the approach of flushing pages into actual disk-based
swap when compressed swap is full.  I would like it if that
were advertised more prominently in the changelog :)

The code looks mostly good; complaints are at the nitpick level.

One worry is that the pool can grow to whatever maximum was
decided, and there is no way to shrink it when memory is
required for something else.

Would it be an idea to add a shrinker for the zcache pool,
that can also shrink the zcache pool when required?

Of course, that does lead to the question of how to balance
the pressure from that shrinker with the new memory entering
zcache from the swap side.  I have no clear answers here, just
something to think about...
> +static void zswap_flush_entries(unsigned type, int nr)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +	int i, ret;
> +
> +/*
> + * This limit is arbitrary for now until a better
> + * policy can be implemented. This is so we don't
> + * eat all of RAM decompressing pages for writeback.
> + */
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> +	if (atomic_read(&zswap_outstanding_flushes) >
> +			ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +		return;

Having this #define right in the middle of the function is
rather ugly.  It might be worth moving it to the top.

> +static int __init zswap_debugfs_init(void)
> +{
> +	if (!debugfs_initialized())
> +		return -ENODEV;
> +
> +	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
> +	if (!zswap_debugfs_root)
> +		return -ENOMEM;
> +
> +	debugfs_create_u64("saved_by_flush", S_IRUGO,
> +			zswap_debugfs_root, &zswap_saved_by_flush);
> +	debugfs_create_u64("pool_limit_hit", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_limit_hit);
> +	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
> +			zswap_debugfs_root, &zswap_flush_attempted);
> +	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_tmppage_fail);
> +	debugfs_create_u64("reject_flush_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_flush_fail);
> +	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> +	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> +	debugfs_create_u64("reject_compress_poor", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_compress_poor);
> +	debugfs_create_u64("flushed_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_flushed_pages);
> +	debugfs_create_u64("duplicate_entry", S_IRUGO,
> +			zswap_debugfs_root, &zswap_duplicate_entry);
> +	debugfs_create_atomic_t("pool_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_pages);
> +	debugfs_create_atomic_t("stored_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_stored_pages);
> +	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
> +			zswap_debugfs_root, &zswap_outstanding_flushes);
> +

Some of these statistics would be very useful to system
administrators, who will not be mounting debugfs on
production systems.

Would it make sense to export some of these statistics
through sysfs?

-- 
All rights reversed
* RE: [PATCHv2 8/9] zswap: add to mm/
From: Dan Magenheimer @ 2013-01-25 23:15 UTC (permalink / raw)
To: Rik van Riel, Seth Jennings
Cc: Greg Kroah-Hartman, Andrew Morton, Nitin Gupta, Minchan Kim,
	Konrad Wilk, Robert Jennings, Jenifer Hopper, Mel Gorman,
	Johannes Weiner, Larry Woodman, linux-mm, linux-kernel, devel

> From: Rik van Riel [mailto:riel@redhat.com]
> Subject: Re: [PATCHv2 8/9] zswap: add to mm/
>
> On 01/07/2013 03:24 PM, Seth Jennings wrote:
> > zswap is a thin compression backend for frontswap.  It receives
> > pages from frontswap and attempts to store them in a compressed
> > memory pool, resulting in an effective partial memory reclaim and
> > dramatically reduced swap device I/O.
> >
> > Additionally, in most cases, pages can be retrieved from this
> > compressed store much more quickly than reading from traditional
> > swap devices, resulting in faster performance for many workloads.
> >
> > This patch adds the zswap driver to mm/
> >
> > Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>
> I like the approach of flushing pages into actual disk-based
> swap when compressed swap is full.  I would like it if that
> were advertised more prominently in the changelog :)
>
> The code looks mostly good; complaints are at the nitpick level.
>
> One worry is that the pool can grow to whatever maximum was
> decided, and there is no way to shrink it when memory is
> required for something else.
>
> Would it be an idea to add a shrinker for the zcache pool,
> that can also shrink the zcache pool when required?
>
> Of course, that does lead to the question of how to balance
> the pressure from that shrinker with the new memory entering
> zcache from the swap side.  I have no clear answers here, just
> something to think about...

Hey Rik --

A shrinker needs to be able to free up whole pages.
I think Seth is working on this with zsmalloc, but it's quite a bit
harder when pursuing high density and page-crossing, which are both
the benefit and part of the curse of zsmalloc.

I have some ideas on how to do pressure balancing and plan to propose
a topic for LSF/MM to discuss various questions involving in-kernel
compression, with this sub-topic included.  Hopefully all the
developers contributing various in-kernel compression solutions will
be able to attend and participate, and we can start converging on
upstreaming (and/or promoting) some of them.

Dan

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.
* Re: [PATCHv2 8/9] zswap: add to mm/
From: Seth Jennings @ 2013-01-28 15:27 UTC (permalink / raw)
To: Rik van Riel
Cc: Greg Kroah-Hartman, Andrew Morton, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Larry Woodman,
	linux-mm, linux-kernel, devel

On 01/25/2013 04:44 PM, Rik van Riel wrote:
> On 01/07/2013 03:24 PM, Seth Jennings wrote:
>> zswap is a thin compression backend for frontswap.  It receives
>> pages from frontswap and attempts to store them in a compressed
>> memory pool, resulting in an effective partial memory reclaim and
>> dramatically reduced swap device I/O.
>>
>> Additionally, in most cases, pages can be retrieved from this
>> compressed store much more quickly than reading from traditional
>> swap devices, resulting in faster performance for many workloads.
>>
>> This patch adds the zswap driver to mm/
>>
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>
> I like the approach of flushing pages into actual disk-based
> swap when compressed swap is full.  I would like it if that
> were advertised more prominently in the changelog :)

Thanks so much for the review!

> The code looks mostly good; complaints are at the nitpick level.
>
> One worry is that the pool can grow to whatever maximum was
> decided, and there is no way to shrink it when memory is
> required for something else.
>
> Would it be an idea to add a shrinker for the zcache pool,
> that can also shrink the zcache pool when required?
>
> Of course, that does lead to the question of how to balance
> the pressure from that shrinker with the new memory entering
> zcache from the swap side.  I have no clear answers here, just
> something to think about...
Yes, I prototyped a shrinker interface for zswap, but, as we both
figured, it shrinks the zswap compressed pool too aggressively, to the
point of being useless.

Right now I'm working on a zswap thread that will "leak" pages out to
the swap device on an LRU basis over time.  That way, if a page is
rarely accessed, it will eventually be written out to the swap device
and its memory freed, even if the zswap pool isn't full.

Would this address your concerns?

>> +static void zswap_flush_entries(unsigned type, int nr)
>> +{
>> +	struct zswap_tree *tree = zswap_trees[type];
>> +	struct zswap_entry *entry;
>> +	int i, ret;
>> +
>> +/*
>> + * This limit is arbitrary for now until a better
>> + * policy can be implemented. This is so we don't
>> + * eat all of RAM decompressing pages for writeback.
>> + */
>> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
>> +	if (atomic_read(&zswap_outstanding_flushes) >
>> +			ZSWAP_MAX_OUTSTANDING_FLUSHES)
>> +		return;
>
> Having this #define right in the middle of the function is
> rather ugly.  It might be worth moving it to the top.

Yes.  In my mind, this policy was going to be replaced by a better one
soon.  Checking may_write_to_queue() was my idea.  I didn't spend too
much time making that part pretty.
>> +static int __init zswap_debugfs_init(void)
>> +{
>> +	if (!debugfs_initialized())
>> +		return -ENODEV;
>> +
>> +	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
>> +	if (!zswap_debugfs_root)
>> +		return -ENOMEM;
>> +
>> +	debugfs_create_u64("saved_by_flush", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_saved_by_flush);
>> +	debugfs_create_u64("pool_limit_hit", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_pool_limit_hit);
>> +	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_flush_attempted);
>> +	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_tmppage_fail);
>> +	debugfs_create_u64("reject_flush_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_flush_fail);
>> +	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
>> +	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
>> +	debugfs_create_u64("reject_compress_poor", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_compress_poor);
>> +	debugfs_create_u64("flushed_pages", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_flushed_pages);
>> +	debugfs_create_u64("duplicate_entry", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_duplicate_entry);
>> +	debugfs_create_atomic_t("pool_pages", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_pool_pages);
>> +	debugfs_create_atomic_t("stored_pages", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_stored_pages);
>> +	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_outstanding_flushes);
>> +
>
> Some of these statistics would be very useful to system
> administrators, who will not be mounting debugfs on
> production systems.
>
> Would it make sense to export some of these statistics
> through sysfs?

That's fine.  Which of these stats do you think should be in sysfs?

Thanks again for taking the time to look at this!
Seth
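The LRU "leak" policy Seth describes above can be sketched as a pure
selection function: walk the LRU from oldest to youngest and pick entries
older than a cutoff, capped per pass.  This is only an illustration of the
idea; the helper name and parameters are hypothetical, not from the patch.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical sketch: decide how many of the oldest compressed entries
 * to write back to the swap device this period.  ages[] is the result of
 * an LRU walk, oldest entry first; entries older than max_age_secs are
 * candidates, capped at batch_max so a single pass cannot monopolize
 * the swap device with decompress-and-write work.
 */
static size_t lru_leak_candidates(const unsigned ages[], size_t n,
				  unsigned max_age_secs, size_t batch_max)
{
	size_t picked = 0;

	for (size_t i = 0; i < n && picked < batch_max; i++) {
		if (ages[i] <= max_age_secs)
			break;	/* LRU order: everything after is younger */
		picked++;
	}
	return picked;
}
```

A periodic thread would call this, write back that many entries, and go
back to sleep, so cold pages drain out even when the pool is not full.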
* Re: [PATCHv2 8/9] zswap: add to mm/
From: Lord Glauber Costa of Sealand @ 2013-01-29 10:21 UTC (permalink / raw)
To: Seth Jennings
Cc: Rik van Riel, Greg Kroah-Hartman, Andrew Morton, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Larry Woodman, linux-mm, linux-kernel, devel

On 01/28/2013 07:27 PM, Seth Jennings wrote:
> Yes, I prototyped a shrinker interface for zswap, but, as we both
> figured, it shrinks the zswap compressed pool too aggressively, to the
> point of being useless.

Can't you advertise a smaller number of objects than you actually have?
Since the shrinker would never try to shrink more objects than you
advertised, you could control pressure this way.
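Glauber's suggestion amounts to clamping the object count reported to the
shrinker core, since the core never asks a cache to scan more objects than
it advertised.  A toy model of that clamp (the function name and parameters
here are illustrative, not part of the real shrinker API):

```c
#include <assert.h>

/*
 * Report at most `cap` reclaimable objects to the shrinker core.
 * Because reclaim requests are bounded by the advertised count, the
 * cap acts as a throttle on how hard the compressed pool can be shrunk
 * in one round, regardless of how many objects it really holds.
 */
static unsigned long zswap_shrinker_count(unsigned long pool_objects,
					  unsigned long cap)
{
	return pool_objects < cap ? pool_objects : cap;
}
```

The policy question Seth raises next is how to choose `cap`: a fixed value
still applies continuous pressure, just more slowly.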
* Re: [PATCHv2 8/9] zswap: add to mm/
From: Seth Jennings @ 2013-02-07 16:13 UTC (permalink / raw)
To: Lord Glauber Costa of Sealand
Cc: Rik van Riel, Greg Kroah-Hartman, Andrew Morton, Nitin Gupta,
	Minchan Kim, Konrad Rzeszutek Wilk, Dan Magenheimer,
	Robert Jennings, Jenifer Hopper, Mel Gorman, Johannes Weiner,
	Larry Woodman, linux-mm, linux-kernel, devel

On 01/29/2013 04:21 AM, Lord Glauber Costa of Sealand wrote:
> On 01/28/2013 07:27 PM, Seth Jennings wrote:
>> Yes, I prototyped a shrinker interface for zswap, but, as we both
>> figured, it shrinks the zswap compressed pool too aggressively, to the
>> point of being useless.
>
> Can't you advertise a smaller number of objects than you actually have?

Thanks for looking at the code!

An interesting idea.  I'm just not sure how you would manage the
underlying policy of how aggressively zswap allows itself to be
shrunk.  The fact that zswap _only_ operates under memory pressure
makes that policy difficult, because it is under continuous shrinking
pressure, unlike other shrinkable caches in the kernel that spend most
of their time operating in unconstrained or lightly/intermittently
strained conditions.

Thanks,
Seth

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.
* RE: [PATCHv2 8/9] zswap: add to mm/
From: Dan Magenheimer @ 2013-02-11 19:13 UTC (permalink / raw)
To: Seth Jennings, Lord Glauber Costa of Sealand
Cc: Rik van Riel, Greg Kroah-Hartman, Andrew Morton, Nitin Gupta,
	Minchan Kim, Konrad Wilk, Robert Jennings, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Larry Woodman, linux-mm,
	linux-kernel, devel

> From: Seth Jennings [mailto:sjenning@linux.vnet.ibm.com]
> Subject: Re: [PATCHv2 8/9] zswap: add to mm/
>
> On 01/29/2013 04:21 AM, Lord Glauber Costa of Sealand wrote:
> > On 01/28/2013 07:27 PM, Seth Jennings wrote:
> >> Yes, I prototyped a shrinker interface for zswap, but, as we both
> >> figured, it shrinks the zswap compressed pool too aggressively, to the
> >> point of being useless.
> > Can't you advertise a smaller number of objects than you actually have?
>
> Thanks for looking at the code!
>
> An interesting idea.  I'm just not sure how you would manage the
> underlying policy of how aggressively zswap allows itself to be
> shrunk.  The fact that zswap _only_ operates under memory pressure
> makes that policy difficult, because it is under continuous shrinking
> pressure, unlike other shrinkable caches in the kernel that spend most
> of their time operating in unconstrained or lightly/intermittently
> strained conditions.

Hi Seth --

Zswap (as well as zcache) doesn't "_only_ operate under memory
pressure".  It _grows_ only under memory pressure, but it can get
smaller via frontswap loads and frontswap invalidates at other times.

I agree that writeback (from zswap to the real swap disk, what zswap
calls "flush") need only occur under memory pressure, but that is
exactly when a shrinker is called.

FYI, the way that zcache does this (for swap pages) is that the zcache
shrinker drives the number of whole pages used to store zpages down to
match the number of whole pages used for anonymous pages.
In zswap terms, that means you would call zswap_flush_entry() in a
zswap shrinker thread continually until:

	zswap_pool_pages <=
		global_page_state(NR_LRU_BASE + LRU_ACTIVE_ANON) +
		global_page_state(NR_LRU_BASE + LRU_INACTIVE_ANON)

The zcache shrinker (currently) ignores nr_to_scan entirely; the fact
that the zcache shrinker is called at all is the signal for
zswap/zcache to start flush/writeback (moving compressed pages out to
the swap disk).

This isn't a great match for the system shrinker API, but it seems to
avoid the "aggressively to the point of being useless" problem, so it
is at least a step in the right direction.

Dan

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.
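Dan's stopping condition can be restated as a pure function: flush only
the excess of the compressed pool over the anonymous working set.  The
sketch below is userspace-testable; the parameters stand in for
zswap_pool_pages and the two anon LRU counters in his inequality.

```c
#include <assert.h>

/*
 * How many compressed-pool pages to flush so the pool shrinks to the
 * size of the anonymous working set (active + inactive anon LRU pages).
 * Returns 0 when the pool is already at or below that target, i.e. when
 * Dan's inequality already holds and no writeback is needed.
 */
static unsigned long zswap_flush_target(unsigned long pool_pages,
					unsigned long active_anon,
					unsigned long inactive_anon)
{
	unsigned long anon = active_anon + inactive_anon;

	return pool_pages > anon ? pool_pages - anon : 0;
}
```

A shrinker-driven thread would flush this many entries per invocation,
independently of nr_to_scan, matching the zcache behavior described above.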