* [PATCH v3 01/31] usercopy: Prepare for usercopy whitelisting
From: Kees Cook @ 2017-09-20 20:45 UTC (permalink / raw)
To: linux-kernel
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm, linux-xfs,
linux-fsdevel, netdev, kernel-hardening
From: David Windsor <dave@nullcore.net>
This patch prepares the slab allocator to handle caches having annotations
(useroffset and usersize) defining usercopy regions.
This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on
my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.
Currently, hardened usercopy performs dynamic bounds checking on slab
cache objects. This is good, but still leaves a lot of kernel memory
available to be copied to/from userspace in the face of bugs. To further
restrict what memory is available for copying, this creates a way to
whitelist specific areas of a given slab cache object for copying to/from
userspace, allowing much finer granularity of access control. Slab caches
that are never exposed to userspace can declare no whitelist for their
objects, thereby keeping them unavailable to userspace via dynamic copy
operations. (Note, an implicit form of whitelisting is the use of constant
sizes in usercopy operations and get_user()/put_user(); these bypass
hardened usercopy checks since these sizes cannot change at runtime.)
To support this whitelist annotation, usercopy region offset and size
members are added to struct kmem_cache. The slab allocator receives a
new function, kmem_cache_create_usercopy(), that creates a new cache
with a usercopy region defined, suitable for declaring spans of fields
within the objects that get copied to/from userspace.
In this patch, the default kmem_cache_create() marks the entire allocation
as whitelisted, leaving it semantically unchanged. Once all fine-grained
whitelists have been added (in subsequent patches), this will be changed
to a usersize of 0, making caches created with kmem_cache_create() not
copyable to/from userspace.
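As an illustration (the cache, field, and variable names below are
hypothetical, not part of this patch), a cache that only ever exposes a
single field to userspace could declare its whitelist either explicitly
or via the new helper macro:

struct foo {
        unsigned long internal_state;   /* never copied to/from userspace */
        char data[64];                  /* the only field copied to/from userspace */
        void *private;                  /* never copied to/from userspace */
};

static struct kmem_cache *foo_cachep;

static int __init foo_init(void)
{
        /* Explicit form: whitelist only foo.data. */
        foo_cachep = kmem_cache_create_usercopy("foo", sizeof(struct foo),
                                0, SLAB_HWCACHE_ALIGN,
                                offsetof(struct foo, data),
                                sizeof_field(struct foo, data),
                                NULL);

        /* Equivalent shorthand (passes __alignof__(struct foo) for alignment):
         * foo_cachep = KMEM_CACHE_USERCOPY(foo, SLAB_HWCACHE_ALIGN, data);
         */
        return foo_cachep ? 0 : -ENOMEM;
}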
After the entire usercopy whitelist series is applied, less than 15%
of the slab cache memory remains exposed to potential usercopy bugs
after a fresh boot:
Total Slab Memory:      48074720
Usercopyable Memory:     6367532  13.2%

  task_struct           0.2%      4480/1630720
  RAW                   0.3%       300/96000
  RAWv6                 2.1%      1408/64768
  ext4_inode_cache      3.0%    269760/8740224
  dentry               11.1%    585984/5273856
  mm_struct            29.1%     54912/188448
  kmalloc-8           100.0%     24576/24576
  kmalloc-16          100.0%     28672/28672
  kmalloc-32          100.0%     81920/81920
  kmalloc-192         100.0%     96768/96768
  kmalloc-128         100.0%    143360/143360
  names_cache         100.0%    163840/163840
  kmalloc-64          100.0%    167936/167936
  kmalloc-256         100.0%    339968/339968
  kmalloc-512         100.0%    350720/350720
  kmalloc-96          100.0%    455616/455616
  kmalloc-8192        100.0%    655360/655360
  kmalloc-1024        100.0%    812032/812032
  kmalloc-4096        100.0%    819200/819200
  kmalloc-2048        100.0%   1310720/1310720
After some kernel build workloads, the percentage (mainly driven by
dentry and inode caches expanding) drops under 10%:
Total Slab Memory:      95516184
Usercopyable Memory:     8497452   8.8%

  task_struct           0.2%      4000/1456000
  RAW                   0.3%       300/96000
  RAWv6                 2.1%      1408/64768
  ext4_inode_cache      3.0%   1217280/39439872
  dentry               11.1%   1623200/14608800
  mm_struct            29.1%     73216/251264
  kmalloc-8           100.0%     24576/24576
  kmalloc-16          100.0%     28672/28672
  kmalloc-32          100.0%     94208/94208
  kmalloc-192         100.0%     96768/96768
  kmalloc-128         100.0%    143360/143360
  names_cache         100.0%    163840/163840
  kmalloc-64          100.0%    245760/245760
  kmalloc-256         100.0%    339968/339968
  kmalloc-512         100.0%    350720/350720
  kmalloc-96          100.0%    563520/563520
  kmalloc-8192        100.0%    655360/655360
  kmalloc-1024        100.0%    794624/794624
  kmalloc-4096        100.0%    819200/819200
  kmalloc-2048        100.0%   1257472/1257472
Signed-off-by: David Windsor <dave@nullcore.net>
[kees: adjust commit log, split out a few extra kmalloc hunks]
[kees: add field names to function declarations]
[kees: convert BUGs to WARNs and fail closed]
[kees: add attack surface reduction analysis to commit log]
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
include/linux/slab.h | 27 +++++++++++++++++++++------
include/linux/slab_def.h | 3 +++
include/linux/slub_def.h | 3 +++
include/linux/stddef.h | 2 ++
mm/slab.c | 2 +-
mm/slab.h | 5 ++++-
mm/slab_common.c | 46 ++++++++++++++++++++++++++++++++++++++--------
mm/slub.c | 11 +++++++++--
8 files changed, 81 insertions(+), 18 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 41473df6dfb0..8b6cb384f8b6 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -126,9 +126,13 @@ struct mem_cgroup;
void __init kmem_cache_init(void);
bool slab_is_available(void);
-struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
- unsigned long,
- void (*)(void *));
+struct kmem_cache *kmem_cache_create(const char *name, size_t size,
+ size_t align, unsigned long flags,
+ void (*ctor)(void *));
+struct kmem_cache *kmem_cache_create_usercopy(const char *name,
+ size_t size, size_t align, unsigned long flags,
+ size_t useroffset, size_t usersize,
+ void (*ctor)(void *));
void kmem_cache_destroy(struct kmem_cache *);
int kmem_cache_shrink(struct kmem_cache *);
@@ -144,9 +148,20 @@ void memcg_destroy_kmem_caches(struct mem_cgroup *);
* f.e. add ____cacheline_aligned_in_smp to the struct declaration
* then the objects will be properly aligned in SMP configurations.
*/
-#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
- sizeof(struct __struct), __alignof__(struct __struct),\
- (__flags), NULL)
+#define KMEM_CACHE(__struct, __flags) \
+ kmem_cache_create(#__struct, sizeof(struct __struct), \
+ __alignof__(struct __struct), (__flags), NULL)
+
+/*
+ * To whitelist a single field for copying to/from usercopy, use this
+ * macro instead for KMEM_CACHE() above.
+ */
+#define KMEM_CACHE_USERCOPY(__struct, __flags, __field) \
+ kmem_cache_create_usercopy(#__struct, \
+ sizeof(struct __struct), \
+ __alignof__(struct __struct), (__flags), \
+ offsetof(struct __struct, __field), \
+ sizeof_field(struct __struct, __field), NULL)
/*
* Common kmalloc functions provided by all allocators
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 4ad2c5a26399..03eef0df8648 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -84,6 +84,9 @@ struct kmem_cache {
unsigned int *random_seq;
#endif
+ size_t useroffset; /* Usercopy region offset */
+ size_t usersize; /* Usercopy region size */
+
struct kmem_cache_node *node[MAX_NUMNODES];
};
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 0783b622311e..62866a1a767c 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -134,6 +134,9 @@ struct kmem_cache {
struct kasan_cache kasan_info;
#endif
+ size_t useroffset; /* Usercopy region offset */
+ size_t usersize; /* Usercopy region size */
+
struct kmem_cache_node *node[MAX_NUMNODES];
};
diff --git a/include/linux/stddef.h b/include/linux/stddef.h
index 9c61c7cda936..f00355086fb2 100644
--- a/include/linux/stddef.h
+++ b/include/linux/stddef.h
@@ -18,6 +18,8 @@ enum {
#define offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER)
#endif
+#define sizeof_field(structure, field) sizeof((((structure *)0)->field))
+
/**
* offsetofend(TYPE, MEMBER)
*
diff --git a/mm/slab.c b/mm/slab.c
index 04dec48c3ed7..87b6e5e0cdaf 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1281,7 +1281,7 @@ void __init kmem_cache_init(void)
create_boot_cache(kmem_cache, "kmem_cache",
offsetof(struct kmem_cache, node) +
nr_node_ids * sizeof(struct kmem_cache_node *),
- SLAB_HWCACHE_ALIGN);
+ SLAB_HWCACHE_ALIGN, 0, 0);
list_add(&kmem_cache->list, &slab_caches);
slab_state = PARTIAL;
diff --git a/mm/slab.h b/mm/slab.h
index 073362816acc..044755ff9632 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -21,6 +21,8 @@ struct kmem_cache {
unsigned int size; /* The aligned/padded/added on size */
unsigned int align; /* Alignment as calculated */
unsigned long flags; /* Active flags on the slab */
+ size_t useroffset; /* Usercopy region offset */
+ size_t usersize; /* Usercopy region size */
const char *name; /* Slab name for sysfs */
int refcount; /* Use counter */
void (*ctor)(void *); /* Called on object slot creation */
@@ -97,7 +99,8 @@ extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
unsigned long flags);
extern void create_boot_cache(struct kmem_cache *, const char *name,
- size_t size, unsigned long flags);
+ size_t size, unsigned long flags, size_t useroffset,
+ size_t usersize);
int slab_unmergeable(struct kmem_cache *s);
struct kmem_cache *find_mergeable(size_t size, size_t align,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 904a83be82de..36408f5f2a34 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -272,6 +272,9 @@ int slab_unmergeable(struct kmem_cache *s)
if (s->ctor)
return 1;
+ if (s->usersize)
+ return 1;
+
/*
* We may have set a slab to be unmergeable during bootstrap.
*/
@@ -357,12 +360,16 @@ unsigned long calculate_alignment(unsigned long flags,
static struct kmem_cache *create_cache(const char *name,
size_t object_size, size_t size, size_t align,
- unsigned long flags, void (*ctor)(void *),
+ unsigned long flags, size_t useroffset,
+ size_t usersize, void (*ctor)(void *),
struct mem_cgroup *memcg, struct kmem_cache *root_cache)
{
struct kmem_cache *s;
int err;
+ if (WARN_ON(useroffset + usersize > object_size))
+ useroffset = usersize = 0;
+
err = -ENOMEM;
s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
if (!s)
@@ -373,6 +380,8 @@ static struct kmem_cache *create_cache(const char *name,
s->size = size;
s->align = align;
s->ctor = ctor;
+ s->useroffset = useroffset;
+ s->usersize = usersize;
err = init_memcg_params(s, memcg, root_cache);
if (err)
@@ -397,11 +406,13 @@ static struct kmem_cache *create_cache(const char *name,
}
/*
- * kmem_cache_create - Create a cache.
+ * kmem_cache_create_usercopy - Create a cache.
* @name: A string which is used in /proc/slabinfo to identify this cache.
* @size: The size of objects to be created in this cache.
* @align: The required alignment for the objects.
* @flags: SLAB flags
+ * @useroffset: Usercopy region offset
+ * @usersize: Usercopy region size
* @ctor: A constructor for the objects.
*
* Returns a ptr to the cache on success, NULL on failure.
@@ -421,8 +432,9 @@ static struct kmem_cache *create_cache(const char *name,
* as davem.
*/
struct kmem_cache *
-kmem_cache_create(const char *name, size_t size, size_t align,
- unsigned long flags, void (*ctor)(void *))
+kmem_cache_create_usercopy(const char *name, size_t size, size_t align,
+ unsigned long flags, size_t useroffset, size_t usersize,
+ void (*ctor)(void *))
{
struct kmem_cache *s = NULL;
const char *cache_name;
@@ -453,7 +465,13 @@ kmem_cache_create(const char *name, size_t size, size_t align,
*/
flags &= CACHE_CREATE_MASK;
- s = __kmem_cache_alias(name, size, align, flags, ctor);
+ /* Fail closed on bad usersize of useroffset values. */
+ if (WARN_ON(!usersize && useroffset) ||
+ WARN_ON(size < usersize || size - usersize < useroffset))
+ usersize = useroffset = 0;
+
+ if (!usersize)
+ s = __kmem_cache_alias(name, size, align, flags, ctor);
if (s)
goto out_unlock;
@@ -465,7 +483,7 @@ kmem_cache_create(const char *name, size_t size, size_t align,
s = create_cache(cache_name, size, size,
calculate_alignment(flags, align, size),
- flags, ctor, NULL, NULL);
+ flags, useroffset, usersize, ctor, NULL, NULL);
if (IS_ERR(s)) {
err = PTR_ERR(s);
kfree_const(cache_name);
@@ -491,6 +509,15 @@ kmem_cache_create(const char *name, size_t size, size_t align,
}
return s;
}
+EXPORT_SYMBOL(kmem_cache_create_usercopy);
+
+struct kmem_cache *
+kmem_cache_create(const char *name, size_t size, size_t align,
+ unsigned long flags, void (*ctor)(void *))
+{
+ return kmem_cache_create_usercopy(name, size, align, flags, 0, size,
+ ctor);
+}
EXPORT_SYMBOL(kmem_cache_create);
static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
@@ -603,6 +630,7 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
s = create_cache(cache_name, root_cache->object_size,
root_cache->size, root_cache->align,
root_cache->flags & CACHE_CREATE_MASK,
+ root_cache->useroffset, root_cache->usersize,
root_cache->ctor, memcg, root_cache);
/*
* If we could not create a memcg cache, do not complain, because
@@ -870,13 +898,15 @@ bool slab_is_available(void)
#ifndef CONFIG_SLOB
/* Create a cache during boot when no slab services are available yet */
void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t size,
- unsigned long flags)
+ unsigned long flags, size_t useroffset, size_t usersize)
{
int err;
s->name = name;
s->size = s->object_size = size;
s->align = calculate_alignment(flags, ARCH_KMALLOC_MINALIGN, size);
+ s->useroffset = useroffset;
+ s->usersize = usersize;
slab_init_memcg_params(s);
@@ -897,7 +927,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
if (!s)
panic("Out of memory when creating slab %s\n", name);
- create_boot_cache(s, name, size, flags);
+ create_boot_cache(s, name, size, flags, 0, size);
list_add(&s->list, &slab_caches);
memcg_link_cache(s);
s->refcount = 1;
diff --git a/mm/slub.c b/mm/slub.c
index 163352c537ab..fae637726c44 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4201,7 +4201,7 @@ void __init kmem_cache_init(void)
kmem_cache = &boot_kmem_cache;
create_boot_cache(kmem_cache_node, "kmem_cache_node",
- sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN);
+ sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0);
register_hotmemory_notifier(&slab_memory_callback_nb);
@@ -4211,7 +4211,7 @@ void __init kmem_cache_init(void)
create_boot_cache(kmem_cache, "kmem_cache",
offsetof(struct kmem_cache, node) +
nr_node_ids * sizeof(struct kmem_cache_node *),
- SLAB_HWCACHE_ALIGN);
+ SLAB_HWCACHE_ALIGN, 0, 0);
kmem_cache = bootstrap(&boot_kmem_cache);
@@ -5081,6 +5081,12 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
SLAB_ATTR_RO(cache_dma);
#endif
+static ssize_t usersize_show(struct kmem_cache *s, char *buf)
+{
+ return sprintf(buf, "%zu\n", s->usersize);
+}
+SLAB_ATTR_RO(usersize);
+
static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
{
return sprintf(buf, "%d\n", !!(s->flags & SLAB_TYPESAFE_BY_RCU));
@@ -5455,6 +5461,7 @@ static struct attribute *slab_attrs[] = {
#ifdef CONFIG_FAILSLAB
&failslab_attr.attr,
#endif
+ &usersize_attr.attr,
NULL
};
--
2.7.4
* [PATCH v3 02/31] usercopy: Enforce slab cache usercopy region boundaries
From: Kees Cook @ 2017-09-20 20:45 UTC (permalink / raw)
To: linux-kernel
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Laura Abbott,
Ingo Molnar, Mark Rutland, linux-mm, linux-xfs, linux-fsdevel,
netdev, kernel-hardening
From: David Windsor <dave@nullcore.net>
This patch adds the enforcement component of usercopy cache whitelisting,
and is modified from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting
code in the last public patch of grsecurity/PaX based on my understanding
of the code. Changes or omissions from the original code are mine and
don't reflect the original grsecurity/PaX code.
The SLAB and SLUB allocators are modified to deny all copy operations
in which the kernel heap memory being copied to or from userspace falls
outside of the cache's defined usercopy region.
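Conceptually, for a copy of "n" bytes starting at byte "offset" within a
slab object, the added check is equivalent to the following sketch (the
helper name is illustrative; in this patch the logic is open-coded in each
allocator's __check_heap_object()):

/*
 * Reject the copy unless the n bytes at offset fall entirely within the
 * cache's whitelisted [useroffset, useroffset + usersize) region.
 * Returns the cache name to report a violation, or NULL to allow the copy.
 */
static const char *check_usercopy_region(struct kmem_cache *s,
                                         unsigned long offset, unsigned long n)
{
        if (offset < s->useroffset)
                return s->name;         /* copy starts before the region */
        if (offset - s->useroffset > s->usersize)
                return s->name;         /* copy starts past the end of the region */
        if (n > s->useroffset - offset + s->usersize)
                return s->name;         /* copy runs past the end of the region */
        return NULL;
}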
Signed-off-by: David Windsor <dave@nullcore.net>
[kees: adjust commit log and comments]
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
mm/slab.c | 16 +++++++++++-----
mm/slub.c | 18 +++++++++++-------
mm/usercopy.c | 12 ++++++++++++
3 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 87b6e5e0cdaf..df268999cf02 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4408,7 +4408,9 @@ module_init(slab_proc_init);
#ifdef CONFIG_HARDENED_USERCOPY
/*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
*
* Returns NULL if check passes, otherwise const char * to name of cache
* to indicate an error.
@@ -4428,11 +4430,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
/* Find offset within object. */
offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
- /* Allow address range falling entirely within object size. */
- if (offset <= cachep->object_size && n <= cachep->object_size - offset)
- return NULL;
+ /* Make sure object falls entirely within cache's usercopy region. */
+ if (offset < cachep->useroffset)
+ return cachep->name;
+ if (offset - cachep->useroffset > cachep->usersize)
+ return cachep->name;
+ if (n > cachep->useroffset - offset + cachep->usersize)
+ return cachep->name;
- return cachep->name;
+ return NULL;
}
#endif /* CONFIG_HARDENED_USERCOPY */
diff --git a/mm/slub.c b/mm/slub.c
index fae637726c44..bbf73024be3a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3833,7 +3833,9 @@ EXPORT_SYMBOL(__kmalloc_node);
#ifdef CONFIG_HARDENED_USERCOPY
/*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
*
* Returns NULL if check passes, otherwise const char * to name of cache
* to indicate an error.
@@ -3843,11 +3845,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
{
struct kmem_cache *s;
unsigned long offset;
- size_t object_size;
/* Find object and usable object size. */
s = page->slab_cache;
- object_size = slab_ksize(s);
/* Reject impossible pointers. */
if (ptr < page_address(page))
@@ -3863,11 +3863,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
offset -= s->red_left_pad;
}
- /* Allow address range falling entirely within object size. */
- if (offset <= object_size && n <= object_size - offset)
- return NULL;
+ /* Make sure object falls entirely within cache's usercopy region. */
+ if (offset < s->useroffset)
+ return s->name;
+ if (offset - s->useroffset > s->usersize)
+ return s->name;
+ if (n > s->useroffset - offset + s->usersize)
+ return s->name;
- return s->name;
+ return NULL;
}
#endif /* CONFIG_HARDENED_USERCOPY */
diff --git a/mm/usercopy.c b/mm/usercopy.c
index a9852b24715d..cbffde670c49 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -58,6 +58,18 @@ static noinline int check_stack_object(const void *obj, unsigned long len)
return GOOD_STACK;
}
+/*
+ * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an
+ * unexpected state during a copy_from_user() or copy_to_user() call.
+ * There are several checks being performed on the buffer by the
+ * __check_object_size() function. Normal stack buffer usage should never
+ * trip the checks, and kernel text addressing will always trip the check.
+ * For cache objects, it is checking that only the whitelisted range of
+ * bytes for a given cache is being accessed (via the cache's usersize and
+ * useroffset fields). To adjust a cache whitelist, use the usercopy-aware
+ * kmem_cache_create_usercopy() function to create the cache (and
+ * carefully audit the whitelist range).
+ */
static void report_usercopy(const void *ptr, unsigned long len,
bool to_user, const char *type)
{
--
2.7.4
* [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
From: Kees Cook @ 2017-09-20 20:45 UTC (permalink / raw)
To: linux-kernel
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm, linux-xfs,
linux-fsdevel, netdev, kernel-hardening
From: David Windsor <dave@nullcore.net>
Mark the kmalloc slab caches as entirely whitelisted. These caches
are frequently used to fulfill kernel allocations that contain data
to be copied to/from userspace. Internal-only uses are also common,
but are scattered in the kernel. For now, mark all the kmalloc caches
as whitelisted.
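A typical pattern that relies on this whitelisting looks roughly like the
sketch below (illustrative only; the function and variable names are not
taken from the kernel). Because "len" is not a compile-time constant, the
copy is checked against the containing kmalloc cache's usercopy region:

static long example_copy_in(void __user *ubuf, size_t len)
{
        char *buf;
        long ret = 0;

        /* Object comes from one of the kmalloc-* caches. */
        buf = kmalloc(len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;

        /* Runtime-sized copy: hardened usercopy validates it against the
         * cache's useroffset/usersize. */
        if (copy_from_user(buf, ubuf, len))
                ret = -EFAULT;

        /* ... use buf ... */
        kfree(buf);
        return ret;
}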
This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on my
understanding of the code. Changes or omissions from the original code are
mine and don't reflect the original grsecurity/PaX code.
Signed-off-by: David Windsor <dave@nullcore.net>
[kees: merged in moved kmalloc hunks, adjust commit log]
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
mm/slab.c | 3 ++-
mm/slab.h | 3 ++-
mm/slab_common.c | 10 ++++++----
3 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index df268999cf02..9af16f675927 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
*/
kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
kmalloc_info[INDEX_NODE].name,
- kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
+ kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
+ 0, kmalloc_size(INDEX_NODE));
slab_state = PARTIAL_NODE;
setup_kmalloc_cache_index_table();
diff --git a/mm/slab.h b/mm/slab.h
index 044755ff9632..2e0fe357d777 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -97,7 +97,8 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t);
extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
- unsigned long flags);
+ unsigned long flags, size_t useroffset,
+ size_t usersize);
extern void create_boot_cache(struct kmem_cache *, const char *name,
size_t size, unsigned long flags, size_t useroffset,
size_t usersize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 36408f5f2a34..d4e6442f9bbc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -920,14 +920,15 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
}
struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
- unsigned long flags)
+ unsigned long flags, size_t useroffset,
+ size_t usersize)
{
struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
if (!s)
panic("Out of memory when creating slab %s\n", name);
- create_boot_cache(s, name, size, flags, 0, size);
+ create_boot_cache(s, name, size, flags, useroffset, usersize);
list_add(&s->list, &slab_caches);
memcg_link_cache(s);
s->refcount = 1;
@@ -1081,7 +1082,8 @@ void __init setup_kmalloc_cache_index_table(void)
static void __init new_kmalloc_cache(int idx, unsigned long flags)
{
kmalloc_caches[idx] = create_kmalloc_cache(kmalloc_info[idx].name,
- kmalloc_info[idx].size, flags);
+ kmalloc_info[idx].size, flags, 0,
+ kmalloc_info[idx].size);
}
/*
@@ -1122,7 +1124,7 @@ void __init create_kmalloc_caches(unsigned long flags)
BUG_ON(!n);
kmalloc_dma_caches[i] = create_kmalloc_cache(n,
- size, SLAB_CACHE_DMA | flags);
+ size, SLAB_CACHE_DMA | flags, 0, 0);
}
}
#endif
--
2.7.4
* [PATCH v3 15/31] xfs: Define usercopy region in xfs_inode slab cache
From: Kees Cook @ 2017-09-20 20:45 UTC (permalink / raw)
To: linux-kernel
Cc: Kees Cook, David Windsor, Darrick J. Wong, linux-xfs,
linux-fsdevel, netdev, linux-mm, kernel-hardening
From: David Windsor <dave@nullcore.net>
The XFS inline inode data, stored in struct xfs_inode_t field
i_df.if_u2.if_inline_data and therefore contained in the xfs_inode slab
cache, needs to be copied to/from userspace.
cache object allocation:
fs/xfs/xfs_icache.c:
xfs_inode_alloc(...):
...
ip = kmem_zone_alloc(xfs_inode_zone, KM_SLEEP);
fs/xfs/libxfs/xfs_inode_fork.c:
xfs_init_local_fork(...):
...
if (mem_size <= sizeof(ifp->if_u2.if_inline_data))
ifp->if_u1.if_data = ifp->if_u2.if_inline_data;
...
fs/xfs/xfs_symlink.c:
xfs_symlink(...):
...
xfs_init_local_fork(ip, XFS_DATA_FORK, target_path, pathlen);
example usage trace:
readlink_copy+0x43/0x70
vfs_readlink+0x62/0x110
SyS_readlinkat+0x100/0x130
fs/xfs/xfs_iops.c:
(via inode->i_op->get_link)
xfs_vn_get_link_inline(...):
...
return XFS_I(inode)->i_df.if_u1.if_data;
fs/namei.c:
readlink_copy(..., link):
...
copy_to_user(..., link, len);
generic_readlink(dentry, ...):
struct inode *inode = d_inode(dentry);
const char *link = inode->i_link;
...
if (!link) {
link = inode->i_op->get_link(dentry, inode, &done);
...
readlink_copy(..., link);
In support of usercopy hardening, this patch defines a region in the
xfs_inode slab cache in which userspace copy operations are allowed.
This region is known as the slab cache's usercopy region. Slab caches can
now check that each copy operation involving cache-managed memory falls
entirely within the slab's usercopy region.
This patch is modified from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on my
understanding of the code. Changes or omissions from the original code are
mine and don't reflect the original grsecurity/PaX code.
Signed-off-by: David Windsor <dave@nullcore.net>
[kees: adjust commit log, provide usage trace]
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
---
fs/xfs/kmem.h | 10 ++++++++++
fs/xfs/xfs_super.c | 7 +++++--
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/kmem.h b/fs/xfs/kmem.h
index 4d85992d75b2..08358f38dee6 100644
--- a/fs/xfs/kmem.h
+++ b/fs/xfs/kmem.h
@@ -110,6 +110,16 @@ kmem_zone_init_flags(int size, char *zone_name, unsigned long flags,
return kmem_cache_create(zone_name, size, 0, flags, construct);
}
+static inline kmem_zone_t *
+kmem_zone_init_flags_usercopy(int size, char *zone_name, unsigned long flags,
+ size_t useroffset, size_t usersize,
+ void (*construct)(void *))
+{
+ return kmem_cache_create_usercopy(zone_name, size, 0, flags,
+ useroffset, usersize, construct);
+}
+
+
static inline void
kmem_zone_free(kmem_zone_t *zone, void *ptr)
{
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index c996f4ae4a5f..1b4b67194538 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1846,9 +1846,12 @@ xfs_init_zones(void)
goto out_destroy_efd_zone;
xfs_inode_zone =
- kmem_zone_init_flags(sizeof(xfs_inode_t), "xfs_inode",
+ kmem_zone_init_flags_usercopy(sizeof(xfs_inode_t), "xfs_inode",
KM_ZONE_HWALIGN | KM_ZONE_RECLAIM | KM_ZONE_SPREAD |
- KM_ZONE_ACCOUNT, xfs_fs_inode_init_once);
+ KM_ZONE_ACCOUNT,
+ offsetof(xfs_inode_t, i_df.if_u2.if_inline_data),
+ sizeof_field(xfs_inode_t, i_df.if_u2.if_inline_data),
+ xfs_fs_inode_init_once);
if (!xfs_inode_zone)
goto out_destroy_efi_zone;
--
2.7.4
* Re: [PATCH v3 01/31] usercopy: Prepare for usercopy whitelisting
From: Christopher Lameter @ 2017-09-21 15:21 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, David Windsor, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, linux-mm, linux-xfs, linux-fsdevel,
netdev, kernel-hardening
On Wed, 20 Sep 2017, Kees Cook wrote:
> diff --git a/include/linux/stddef.h b/include/linux/stddef.h
> index 9c61c7cda936..f00355086fb2 100644
> --- a/include/linux/stddef.h
> +++ b/include/linux/stddef.h
> @@ -18,6 +18,8 @@ enum {
> #define offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER)
> #endif
>
> +#define sizeof_field(structure, field) sizeof((((structure *)0)->field))
> +
> /**
> * offsetofend(TYPE, MEMBER)
> *
Hmmm.. Is that really necessary? Code knows the type of field and can
use sizeof type.
Also this is a non slab change hidden in the patchset.
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 904a83be82de..36408f5f2a34 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -272,6 +272,9 @@ int slab_unmergeable(struct kmem_cache *s)
> if (s->ctor)
> return 1;
>
> + if (s->usersize)
> + return 1;
> +
> /*
> * We may have set a slab to be unmergeable during bootstrap.
> */
This will ultimately make all slabs unmergeable at the end of your
patchset? Lots of space will be wasted. Is there any way to make this
feature optional?
#ifdef CONFIG_HARDENED around this?
> @@ -491,6 +509,15 @@ kmem_cache_create(const char *name, size_t size, size_t align,
> }
> return s;
> }
> +EXPORT_SYMBOL(kmem_cache_create_usercopy);
> +
> +struct kmem_cache *
> +kmem_cache_create(const char *name, size_t size, size_t align,
> + unsigned long flags, void (*ctor)(void *))
> +{
> + return kmem_cache_create_usercopy(name, size, align, flags, 0, size,
> + ctor);
> +}
> EXPORT_SYMBOL(kmem_cache_create);
Well this makes the slab created unmergeable.
> @@ -897,7 +927,7 @@ struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
> if (!s)
> panic("Out of memory when creating slab %s\n", name);
>
> - create_boot_cache(s, name, size, flags);
> + create_boot_cache(s, name, size, flags, 0, size);
Ok this makes the kmalloc array unmergeable.
> @@ -5081,6 +5081,12 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
> SLAB_ATTR_RO(cache_dma);
> #endif
>
> +static ssize_t usersize_show(struct kmem_cache *s, char *buf)
> +{
> + return sprintf(buf, "%zu\n", s->usersize);
> +}
> +SLAB_ATTR_RO(usersize);
> +
> static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
> {
> return sprintf(buf, "%d\n", !!(s->flags & SLAB_TYPESAFE_BY_RCU));
> @@ -5455,6 +5461,7 @@ static struct attribute *slab_attrs[] = {
> #ifdef CONFIG_FAILSLAB
> &failslab_attr.attr,
> #endif
> + &usersize_attr.attr,
So useroffset is not exposed?
* Re: [PATCH v3 02/31] usercopy: Enforce slab cache usercopy region boundaries
From: Christopher Lameter @ 2017-09-21 15:23 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, David Windsor, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, Laura Abbott, Ingo Molnar,
Mark Rutland, linux-mm, linux-xfs, linux-fsdevel, netdev,
kernel-hardening
On Wed, 20 Sep 2017, Kees Cook wrote:
> diff --git a/mm/slab.c b/mm/slab.c
> index 87b6e5e0cdaf..df268999cf02 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -4408,7 +4408,9 @@ module_init(slab_proc_init);
>
> #ifdef CONFIG_HARDENED_USERCOPY
> /*
> - * Rejects objects that are incorrectly sized.
> + * Rejects incorrectly sized objects and objects that are to be copied
> + * to/from userspace but do not fall entirely within the containing slab
> + * cache's usercopy region.
> *
> * Returns NULL if check passes, otherwise const char * to name of cache
> * to indicate an error.
> @@ -4428,11 +4430,15 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
> /* Find offset within object. */
> offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
>
> - /* Allow address range falling entirely within object size. */
> - if (offset <= cachep->object_size && n <= cachep->object_size - offset)
> - return NULL;
> + /* Make sure object falls entirely within cache's usercopy region. */
> + if (offset < cachep->useroffset)
> + return cachep->name;
> + if (offset - cachep->useroffset > cachep->usersize)
> + return cachep->name;
> + if (n > cachep->useroffset - offset + cachep->usersize)
> + return cachep->name;
>
> - return cachep->name;
> + return NULL;
> }
> #endif /* CONFIG_HARDENED_USERCOPY */
Looks like this is almost the same for all allocators. Can we put this
into mm/slab_common.c?
* Re: [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
From: Christopher Lameter @ 2017-09-21 15:27 UTC (permalink / raw)
To: Kees Cook
Cc: linux-kernel, David Windsor, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton, linux-mm, linux-xfs, linux-fsdevel,
netdev, kernel-hardening
On Wed, 20 Sep 2017, Kees Cook wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
> */
> kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
> kmalloc_info[INDEX_NODE].name,
> - kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
> + kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
> + 0, kmalloc_size(INDEX_NODE));
> slab_state = PARTIAL_NODE;
> setup_kmalloc_cache_index_table();
Ok this presumes that at some point we will be able to restrict the number
of bytes writeable and thus set the offset and size field to different
values. Is that realistic?
We already whitelist all kmalloc caches (see first patch).
So what is the point of this patch?
* Re: [kernel-hardening] Re: [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
From: Kees Cook @ 2017-09-21 15:40 UTC (permalink / raw)
To: Christopher Lameter
Cc: LKML, David Windsor, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Linux-MM, linux-xfs, linux-fsdevel@vger.kernel.org,
Network Development, kernel-hardening@lists.openwall.com
On Thu, Sep 21, 2017 at 8:27 AM, Christopher Lameter <cl@linux.com> wrote:
> On Wed, 20 Sep 2017, Kees Cook wrote:
>
>> --- a/mm/slab.c
>> +++ b/mm/slab.c
>> @@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
>> */
>> kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
>> kmalloc_info[INDEX_NODE].name,
>> - kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
>> + kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
>> + 0, kmalloc_size(INDEX_NODE));
>> slab_state = PARTIAL_NODE;
>> setup_kmalloc_cache_index_table();
>
> Ok this presumes that at some point we will be able to restrict the number
> of bytes writeable and thus set the offset and size field to different
> values. Is that realistic?
>
> We already whitelist all kmalloc caches (see first patch).
>
> So what is the point of this patch?
The DMA kmalloc caches are not whitelisted:
>> kmalloc_dma_caches[i] = create_kmalloc_cache(n,
>> - size, SLAB_CACHE_DMA | flags);
>> + size, SLAB_CACHE_DMA | flags, 0, 0);
So this is creating the distinction between the kmallocs that go to
userspace and those that don't. The expectation is that future work
can start to distinguish between "for userspace" and "only kernel"
kmalloc allocations, as is already done here for DMA.
-Kees
--
Kees Cook
Pixel Security
* Re: [kernel-hardening] Re: [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
From: Christopher Lameter @ 2017-09-21 16:04 UTC (permalink / raw)
To: Kees Cook
Cc: LKML, David Windsor, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Linux-MM, linux-xfs, linux-fsdevel@vger.kernel.org,
Network Development, kernel-hardening@lists.openwall.com
On Thu, 21 Sep 2017, Kees Cook wrote:
> > So what is the point of this patch?
>
> The DMA kmalloc caches are not whitelisted:
The DMA kmalloc caches are pretty obsolete and mostly there for obscure
drivers.
??
> >> kmalloc_dma_caches[i] = create_kmalloc_cache(n,
> >> - size, SLAB_CACHE_DMA | flags);
> >> + size, SLAB_CACHE_DMA | flags, 0, 0);
>
> So this is creating the distinction between the kmallocs that go to
> userspace and those that don't. The expectation is that future work
> can start to distinguish between "for userspace" and "only kernel"
> kmalloc allocations, as is already done here for DMA.
The creation of the kmalloc caches in earlier patches already setup the
"whitelisting". Why do it twice?
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [kernel-hardening] Re: [PATCH v3 03/31] usercopy: Mark kmalloc caches as usercopy caches
2017-09-21 16:04 ` Christopher Lameter
@ 2017-09-21 18:26 ` Kees Cook
0 siblings, 0 replies; 10+ messages in thread
From: Kees Cook @ 2017-09-21 18:26 UTC (permalink / raw)
To: Christopher Lameter
Cc: LKML, David Windsor, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton, Linux-MM, linux-xfs, linux-fsdevel@vger.kernel.org,
Network Development, kernel-hardening@lists.openwall.com
On Thu, Sep 21, 2017 at 9:04 AM, Christopher Lameter <cl@linux.com> wrote:
> On Thu, 21 Sep 2017, Kees Cook wrote:
>
>> > So what is the point of this patch?
>>
>> The DMA kmalloc caches are not whitelisted:
>
> The DMA kmalloc caches are pretty obsolete and mostly there for obscure
> drivers.
>
> ??
They may be obsolete, but they're still in the kernel, and they aren't
copied to userspace, so we can mark them.
>> >> kmalloc_dma_caches[i] = create_kmalloc_cache(n,
>> >> - size, SLAB_CACHE_DMA | flags);
>> >> + size, SLAB_CACHE_DMA | flags, 0, 0);
>>
>> So this is creating the distinction between the kmallocs that go to
>> userspace and those that don't. The expectation is that future work
>> can start to distinguish between "for userspace" and "only kernel"
>> kmalloc allocations, as is already done here for DMA.
>
> The creation of the kmalloc caches in earlier patches already setup the
> "whitelisting". Why do it twice?
Patch 1 is to allow for things to mark their whitelists. Patch 30
disables the full whitelisting, since then we've defined them all, so
the kmalloc caches need to mark themselves as whitelisted.
Patch 1 leaves unmarked things whitelisted so we can progressively
tighten the restriction and have a bisectable series. (i.e. if there
is something wrong with one of the whitelists in the series, it will
bisect to that one, not the one that removes the global whitelist from
patch 1.)
-Kees
--
Kees Cook
Pixel Security