* [Qemu-devel] [PATCH repost 0/4] add mitigation against buffer overflows
From: Michael S. Tsirkin @ 2015-09-27 10:14 UTC
  To: qemu-devel; +Cc: Peter Maydell, Paolo Bonzini

Multiple places in QEMU map guest memory, then access it
directly. Unfortunately, since we are using C, there's always
a chance that we'll miss a bounds check when we do this.
This has the potential to corrupt QEMU memory.

As a mitigation strategy against such exploits,
allocate a page in HVA space on top of each RAM chunk
with PROT_NONE protection.

Buffer overflows will now cause QEMU to crash.
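
To illustrate the idea, here is a minimal self-contained sketch (not code
from this series; the sizes and construction are made up for the example):
a RAM-like region is followed by a PROT_NONE guard page, so writing one
byte past the end faults immediately instead of silently corrupting
adjacent memory.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t page = getpagesize();
        size_t ram_size = 16 * page;

        /* one mapping covering the RAM area plus one trailing page */
        char *ram = mmap(NULL, ram_size + page, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* revoke all access to the trailing guard page */
        if (mprotect(ram + ram_size, page, PROT_NONE) < 0) {
            perror("mprotect");
            return 1;
        }

        ram[ram_size - 1] = 0x55;  /* last RAM byte: fine */
        ram[ram_size] = 0x55;      /* first guard byte: SIGSEGV */

        printf("not reached\n");
        return 0;
    }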

This is a repost, combining separate patches into a single
series. No changes to the patches themselves.

Michael S. Tsirkin (4):
  oslib: rework anonymous RAM allocation
  oslib: allocate PROT_NONE pages on top of RAM
  exec: allocate PROT_NONE pages on top of RAM
  exec: factor out duplicate mmap code

 include/qemu/mmap-alloc.h | 10 +++++++++
 exec.c                    | 19 ++++++++++++-----
 util/mmap-alloc.c         | 52 +++++++++++++++++++++++++++++++++++++++++++++++
 util/oslib-posix.c        | 20 ++++--------------
 util/Makefile.objs        |  2 +-
 5 files changed, 81 insertions(+), 22 deletions(-)
 create mode 100644 include/qemu/mmap-alloc.h
 create mode 100644 util/mmap-alloc.c

-- 
MST

* [Qemu-devel] [PATCH repost 1/4] oslib: rework anonymous RAM allocation
From: Michael S. Tsirkin @ 2015-09-27 10:14 UTC
  To: qemu-devel; +Cc: Kevin Wolf, Peter Maydell, Michael Tokarev, Paolo Bonzini

At the moment we first allocate RAM, sometimes more than necessary for
alignment reasons.  We then free the extra RAM.

Rework this to avoid the temporary allocation: reserve the
range by mapping it with PROT_NONE, then use just the
necessary range with MAP_FIXED.
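
Roughly, the pattern is the following self-contained sketch (not the patch
itself: the alloc_aligned name is made up, and the patch below reserves
size + align - getpagesize() and keeps slightly different tail bookkeeping):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    /* Hypothetical helper, for illustration only: return `size` bytes
     * whose start address is aligned to `align` (a power of two, multiple
     * of the page size), without a throwaway read/write over-allocation. */
    static void *alloc_aligned(size_t size, size_t align)
    {
        size_t total = size + align;

        /* 1. reserve address space only: PROT_NONE, nothing accessible */
        char *ptr = mmap(NULL, total, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ptr == MAP_FAILED) {
            return NULL;
        }

        /* 2. pick the first aligned address inside the reservation and
         *    map the usable range read/write over it with MAP_FIXED */
        size_t offset = (-(uintptr_t)ptr) & (align - 1);
        void *area = mmap(ptr + offset, size, PROT_READ | PROT_WRITE,
                          MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED) {
            munmap(ptr, total);
            return NULL;
        }

        /* 3. give back the unused head and tail of the reservation */
        if (offset > 0) {
            munmap(ptr, offset);
        }
        if (total - offset > size) {
            munmap(ptr + offset + size, total - offset - size);
        }
        return area;
    }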

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 util/oslib-posix.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index 3ae4987..27972d4 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -129,9 +129,9 @@ void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
 {
     size_t align = QEMU_VMALLOC_ALIGN;
     size_t total = size + align - getpagesize();
-    void *ptr = mmap(0, total, PROT_READ | PROT_WRITE,
-                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
     size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
+    void *ptr1;
 
     if (ptr == MAP_FAILED) {
         return NULL;
@@ -140,6 +140,14 @@ void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
     if (alignment) {
         *alignment = align;
     }
+
+    ptr1 = mmap(ptr + offset, size, PROT_READ | PROT_WRITE,
+                MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    if (ptr1 == MAP_FAILED) {
+        munmap(ptr, total);
+        return NULL;
+    }
+
     ptr += offset;
     total -= offset;
 
-- 
MST

* [Qemu-devel] [PATCH repost 2/4] oslib: allocate PROT_NONE pages on top of RAM
From: Michael S. Tsirkin @ 2015-09-27 10:14 UTC
  To: qemu-devel; +Cc: Kevin Wolf, Peter Maydell, Michael Tokarev, Paolo Bonzini

This inserts a read- and write-protected page between RAM and QEMU
memory. This makes it harder to exploit QEMU bugs resulting from buffer
overflows in devices using variants of cpu_physical_memory_map,
dma_memory_map, etc.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 util/oslib-posix.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index 27972d4..a0fcdc2 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -128,7 +128,7 @@ void *qemu_memalign(size_t alignment, size_t size)
 void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
 {
     size_t align = QEMU_VMALLOC_ALIGN;
-    size_t total = size + align - getpagesize();
+    size_t total = size + align;
     void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
     size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
     void *ptr1;
@@ -154,8 +154,8 @@ void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
     if (offset > 0) {
         munmap(ptr - offset, offset);
     }
-    if (total > size) {
-        munmap(ptr + size, total - size);
+    if (total > size + getpagesize()) {
+        munmap(ptr + size + getpagesize(), total - size - getpagesize());
     }
 
     trace_qemu_anon_ram_alloc(size, ptr);
@@ -172,7 +172,7 @@ void qemu_anon_ram_free(void *ptr, size_t size)
 {
     trace_qemu_anon_ram_free(ptr, size);
     if (ptr) {
-        munmap(ptr, size);
+        munmap(ptr, size + getpagesize());
     }
 }
 
-- 
MST

* [Qemu-devel] [PATCH repost 3/4] exec: allocate PROT_NONE pages on top of RAM
From: Michael S. Tsirkin @ 2015-09-27 10:14 UTC
  To: qemu-devel; +Cc: Peter Maydell, Paolo Bonzini

This inserts a read- and write-protected page between RAM and QEMU
memory, for file-backed RAM.
This makes it harder to exploit QEMU bugs resulting from buffer
overflows in devices using variants of cpu_physical_memory_map,
dma_memory_map, etc.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 exec.c | 42 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/exec.c b/exec.c
index 47ada31..7d90a52 100644
--- a/exec.c
+++ b/exec.c
@@ -84,6 +84,9 @@ static MemoryRegion io_mem_unassigned;
  */
 #define RAM_RESIZEABLE (1 << 2)
 
+/* An extra page is mapped on top of this RAM.
+ */
+#define RAM_EXTRA (1 << 3)
 #endif
 
 struct CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
@@ -1185,10 +1188,13 @@ static void *file_ram_alloc(RAMBlock *block,
     char *filename;
     char *sanitized_name;
     char *c;
+    void *ptr;
     void *area = NULL;
     int fd;
     uint64_t hpagesize;
+    uint64_t total;
     Error *local_err = NULL;
+    size_t offset;
 
     hpagesize = gethugepagesize(path, &local_err);
     if (local_err) {
@@ -1232,6 +1238,7 @@ static void *file_ram_alloc(RAMBlock *block,
     g_free(filename);
 
     memory = ROUND_UP(memory, hpagesize);
+    total = memory + hpagesize;
 
     /*
      * ftruncate is not supported by hugetlbfs in older
@@ -1243,16 +1250,40 @@ static void *file_ram_alloc(RAMBlock *block,
         perror("ftruncate");
     }
 
-    area = mmap(0, memory, PROT_READ | PROT_WRITE,
-                (block->flags & RAM_SHARED ? MAP_SHARED : MAP_PRIVATE),
+    ptr = mmap(0, total, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS,
+                -1, 0);
+    if (ptr == MAP_FAILED) {
+        error_setg_errno(errp, errno,
+                         "unable to allocate memory range for hugepages");
+        close(fd);
+        goto error;
+    }
+
+    offset = QEMU_ALIGN_UP((uintptr_t)ptr, hpagesize) - (uintptr_t)ptr;
+
+    area = mmap(ptr + offset, memory, PROT_READ | PROT_WRITE,
+                (block->flags & RAM_SHARED ? MAP_SHARED : MAP_PRIVATE) |
+                MAP_FIXED,
                 fd, 0);
     if (area == MAP_FAILED) {
         error_setg_errno(errp, errno,
                          "unable to map backing store for hugepages");
+        munmap(ptr, total);
         close(fd);
         goto error;
     }
 
+    if (offset > 0) {
+        munmap(ptr, offset);
+    }
+    ptr += offset;
+    total -= offset;
+
+    if (total > memory + getpagesize()) {
+        munmap(ptr + memory + getpagesize(),
+               total - memory - getpagesize());
+    }
+
     if (mem_prealloc) {
         os_mem_prealloc(fd, area, memory);
     }
@@ -1570,6 +1601,7 @@ ram_addr_t qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
     new_block->used_length = size;
     new_block->max_length = size;
     new_block->flags = share ? RAM_SHARED : 0;
+    new_block->flags |= RAM_EXTRA;
     new_block->host = file_ram_alloc(new_block, size,
                                      mem_path, errp);
     if (!new_block->host) {
@@ -1671,7 +1703,11 @@ static void reclaim_ramblock(RAMBlock *block)
         xen_invalidate_map_cache_entry(block->host);
 #ifndef _WIN32
     } else if (block->fd >= 0) {
-        munmap(block->host, block->max_length);
+        if (block->flags & RAM_EXTRA) {
+            munmap(block->host, block->max_length + getpagesize());
+        } else {
+            munmap(block->host, block->max_length);
+        }
         close(block->fd);
 #endif
     } else {
-- 
MST

* [Qemu-devel] [PATCH repost 4/4] exec: factor out duplicate mmap code
From: Michael S. Tsirkin @ 2015-09-27 10:14 UTC
  To: qemu-devel; +Cc: Peter Maydell, Paolo Bonzini

Anonymous and file-backed RAM allocation are now almost exactly the same.

Reduce code duplication by moving RAM mmap code out of oslib-posix.c and
exec.c.
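
A rough sketch of the resulting interface, as the call sites in the hunks
below use it (the wrapper names here are hypothetical, and the snippet is
only meaningful inside the QEMU tree where QEMU_VMALLOC_ALIGN and the new
header exist):

    #include "qemu/mmap-alloc.h"

    /* anonymous RAM caller, as in qemu_anon_ram_alloc() below */
    void *alloc_anon(size_t size)
    {
        /* fd == -1: anonymous memory, aligned to QEMU_VMALLOC_ALIGN */
        return qemu_ram_mmap(-1, size, QEMU_VMALLOC_ALIGN);
    }

    /* file-backed RAM caller, as in file_ram_alloc() below */
    void *alloc_hugepage_file(int fd, size_t memory, size_t hpagesize)
    {
        /* fd >= 0: map the file, aligned to the huge page size */
        return qemu_ram_mmap(fd, memory, hpagesize);
    }

    /* either mapping is released with the matching helper, which also
     * unmaps the extra guard page on top of the RAM */
    void free_ram(void *ptr, size_t size)
    {
        qemu_ram_munmap(ptr, size);
    }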

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/qemu/mmap-alloc.h | 10 +++++++++
 exec.c                    | 47 +++++++++---------------------------------
 util/mmap-alloc.c         | 52 +++++++++++++++++++++++++++++++++++++++++++++++
 util/oslib-posix.c        | 28 ++++---------------------
 util/Makefile.objs        |  2 +-
 5 files changed, 77 insertions(+), 62 deletions(-)
 create mode 100644 include/qemu/mmap-alloc.h
 create mode 100644 util/mmap-alloc.c

diff --git a/include/qemu/mmap-alloc.h b/include/qemu/mmap-alloc.h
new file mode 100644
index 0000000..3400e14
--- /dev/null
+++ b/include/qemu/mmap-alloc.h
@@ -0,0 +1,10 @@
+#ifndef QEMU_MMAP_ALLOC
+#define QEMU_MMAP_ALLOC
+
+#include "qemu-common.h"
+
+void *qemu_ram_mmap(int fd, size_t size, size_t align);
+
+void qemu_ram_munmap(void *ptr, size_t size);
+
+#endif
diff --git a/exec.c b/exec.c
index 7d90a52..437634b 100644
--- a/exec.c
+++ b/exec.c
@@ -55,6 +55,9 @@
 #include "exec/ram_addr.h"
 
 #include "qemu/range.h"
+#ifndef _WIN32
+#include "qemu/mmap-alloc.h"
+#endif
 
 //#define DEBUG_SUBPAGE
 
@@ -84,9 +87,9 @@ static MemoryRegion io_mem_unassigned;
  */
 #define RAM_RESIZEABLE (1 << 2)
 
-/* An extra page is mapped on top of this RAM.
+/* RAM is backed by an mmapped file.
  */
-#define RAM_EXTRA (1 << 3)
+#define RAM_FILE (1 << 3)
 #endif
 
 struct CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
@@ -1188,13 +1191,10 @@ static void *file_ram_alloc(RAMBlock *block,
     char *filename;
     char *sanitized_name;
     char *c;
-    void *ptr;
-    void *area = NULL;
+    void *area;
     int fd;
     uint64_t hpagesize;
-    uint64_t total;
     Error *local_err = NULL;
-    size_t offset;
 
     hpagesize = gethugepagesize(path, &local_err);
     if (local_err) {
@@ -1238,7 +1238,6 @@ static void *file_ram_alloc(RAMBlock *block,
     g_free(filename);
 
     memory = ROUND_UP(memory, hpagesize);
-    total = memory + hpagesize;
 
     /*
      * ftruncate is not supported by hugetlbfs in older
@@ -1250,40 +1249,14 @@ static void *file_ram_alloc(RAMBlock *block,
         perror("ftruncate");
     }
 
-    ptr = mmap(0, total, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS,
-                -1, 0);
-    if (ptr == MAP_FAILED) {
-        error_setg_errno(errp, errno,
-                         "unable to allocate memory range for hugepages");
-        close(fd);
-        goto error;
-    }
-
-    offset = QEMU_ALIGN_UP((uintptr_t)ptr, hpagesize) - (uintptr_t)ptr;
-
-    area = mmap(ptr + offset, memory, PROT_READ | PROT_WRITE,
-                (block->flags & RAM_SHARED ? MAP_SHARED : MAP_PRIVATE) |
-                MAP_FIXED,
-                fd, 0);
+    area = qemu_ram_mmap(fd, memory, hpagesize);
     if (area == MAP_FAILED) {
         error_setg_errno(errp, errno,
                          "unable to map backing store for hugepages");
-        munmap(ptr, total);
         close(fd);
         goto error;
     }
 
-    if (offset > 0) {
-        munmap(ptr, offset);
-    }
-    ptr += offset;
-    total -= offset;
-
-    if (total > memory + getpagesize()) {
-        munmap(ptr + memory + getpagesize(),
-               total - memory - getpagesize());
-    }
-
     if (mem_prealloc) {
         os_mem_prealloc(fd, area, memory);
     }
@@ -1601,7 +1574,7 @@ ram_addr_t qemu_ram_alloc_from_file(ram_addr_t size, MemoryRegion *mr,
     new_block->used_length = size;
     new_block->max_length = size;
     new_block->flags = share ? RAM_SHARED : 0;
-    new_block->flags |= RAM_EXTRA;
+    new_block->flags |= RAM_FILE;
     new_block->host = file_ram_alloc(new_block, size,
                                      mem_path, errp);
     if (!new_block->host) {
@@ -1703,8 +1676,8 @@ static void reclaim_ramblock(RAMBlock *block)
         xen_invalidate_map_cache_entry(block->host);
 #ifndef _WIN32
     } else if (block->fd >= 0) {
-        if (block->flags & RAM_EXTRA) {
-            munmap(block->host, block->max_length + getpagesize());
+        if (block->flags & RAM_FILE) {
+            qemu_ram_munmap(block->host, block->max_length);
         } else {
             munmap(block->host, block->max_length);
         }
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
new file mode 100644
index 0000000..05c8b4b
--- /dev/null
+++ b/util/mmap-alloc.c
@@ -0,0 +1,52 @@
+/* 
+ * Support for RAM backed by mmaped host memory.
+ *
+ * Copyright (c) 2015 Red Hat, Inc.
+ *
+ * Authors:
+ *  Michael S. Tsirkin <mst@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later.  See the COPYING file in the top-level directory.
+ */
+#include <qemu/mmap-alloc.h>
+#include <sys/types.h>
+#include <sys/mman.h>
+
+void *qemu_ram_mmap(int fd, size_t size, size_t align)
+{
+    size_t total = size + align;
+    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
+    void *ptr1;
+
+    if (ptr == MAP_FAILED) {
+        return NULL;
+    }
+
+    ptr1 = mmap(ptr + offset, size, PROT_READ | PROT_WRITE,
+                MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, fd, 0);
+    if (ptr1 == MAP_FAILED) {
+        munmap(ptr, total);
+        return NULL;
+    }
+
+    ptr += offset;
+    total -= offset;
+
+    if (offset > 0) {
+        munmap(ptr - offset, offset);
+    }
+    if (total > size + getpagesize()) {
+        munmap(ptr + size + getpagesize(), total - size - getpagesize());
+    }
+
+    return ptr;
+}
+
+void qemu_ram_munmap(void *ptr, size_t size)
+{
+    if (ptr) {
+        munmap(ptr, size + getpagesize());
+    }
+}
diff --git a/util/oslib-posix.c b/util/oslib-posix.c
index a0fcdc2..72a6bc1 100644
--- a/util/oslib-posix.c
+++ b/util/oslib-posix.c
@@ -72,6 +72,8 @@ extern int daemon(int, int);
 #include <sys/sysctl.h>
 #endif
 
+#include <qemu/mmap-alloc.h>
+
 int qemu_get_thread_id(void)
 {
 #if defined(__linux__)
@@ -128,10 +130,7 @@ void *qemu_memalign(size_t alignment, size_t size)
 void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
 {
     size_t align = QEMU_VMALLOC_ALIGN;
-    size_t total = size + align;
-    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
-    size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
-    void *ptr1;
+    void *ptr = qemu_ram_mmap(-1, size, align);
 
     if (ptr == MAP_FAILED) {
         return NULL;
@@ -141,23 +140,6 @@ void *qemu_anon_ram_alloc(size_t size, uint64_t *alignment)
         *alignment = align;
     }
 
-    ptr1 = mmap(ptr + offset, size, PROT_READ | PROT_WRITE,
-                MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
-    if (ptr1 == MAP_FAILED) {
-        munmap(ptr, total);
-        return NULL;
-    }
-
-    ptr += offset;
-    total -= offset;
-
-    if (offset > 0) {
-        munmap(ptr - offset, offset);
-    }
-    if (total > size + getpagesize()) {
-        munmap(ptr + size + getpagesize(), total - size - getpagesize());
-    }
-
     trace_qemu_anon_ram_alloc(size, ptr);
     return ptr;
 }
@@ -171,9 +153,7 @@ void qemu_vfree(void *ptr)
 void qemu_anon_ram_free(void *ptr, size_t size)
 {
     trace_qemu_anon_ram_free(ptr, size);
-    if (ptr) {
-        munmap(ptr, size + getpagesize());
-    }
+    qemu_ram_munmap(ptr, size);
 }
 
 void qemu_set_block(int fd)
diff --git a/util/Makefile.objs b/util/Makefile.objs
index 114d657..372e037 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -1,6 +1,6 @@
 util-obj-y = osdep.o cutils.o unicode.o qemu-timer-common.o
 util-obj-$(CONFIG_WIN32) += oslib-win32.o qemu-thread-win32.o event_notifier-win32.o
-util-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o event_notifier-posix.o qemu-openpty.o
+util-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o event_notifier-posix.o qemu-openpty.o mmap-alloc.o
 util-obj-y += envlist.o path.o module.o
 util-obj-$(call lnot,$(CONFIG_INT128)) += host-utils.o
 util-obj-y += bitmap.o bitops.o hbitmap.o
-- 
MST

* Re: [Qemu-devel] [PATCH repost 2/4] oslib: allocate PROT_NONE pages on top of RAM
From: Paolo Bonzini @ 2015-09-28 10:59 UTC
  To: Michael S. Tsirkin, qemu-devel; +Cc: Kevin Wolf, Peter Maydell, Michael Tokarev



On 27/09/2015 12:14, Michael S. Tsirkin wrote:
> -    if (total > size) {
> -        munmap(ptr + size, total - size);
> +    if (total > size + getpagesize()) {
> +        munmap(ptr + size + getpagesize(), total - size - getpagesize());
>      }
>  

Please add a comment here, also noting that "total" always includes at
least one extra page, even if size is already aligned.

>      if (ptr) {
> -        munmap(ptr, size);
> +        munmap(ptr, size + getpagesize());
>      }

And another comment here.

Paolo

* Re: [Qemu-devel] [PATCH repost 0/4] add mitigation against buffer overflows
From: Paolo Bonzini @ 2015-09-28 11:01 UTC
  To: Michael S. Tsirkin, qemu-devel; +Cc: Peter Maydell



On 27/09/2015 12:14, Michael S. Tsirkin wrote:
> Multiple places in QEMU map guest memory, then access it
> directly. Unfortunately, since we are using C, there's always
> a chance that we'll miss a bounds check when we do this.
> This has the potential to corrupt QEMU memory.
> 
> As a mitigation strategy against such exploits,
> allocate a page in HVA space on top of each RAM chunk
> with PROT_NONE protection.
> 
> Buffer overflows will now cause QEMU to crash.
> 
> This is a repost, combining separate patches into a single
> series. No changes to the patches themselves.
> 
> Michael S. Tsirkin (4):
>   oslib: rework anonymous RAM allocation
>   oslib: allocate PROT_NONE pages on top of RAM
>   exec: allocate PROT_NONE pages on top of RAM
>   exec: factor out duplicate mmap code
> 
>  include/qemu/mmap-alloc.h | 10 +++++++++
>  exec.c                    | 19 ++++++++++++-----
>  util/mmap-alloc.c         | 52 +++++++++++++++++++++++++++++++++++++++++++++++
>  util/oslib-posix.c        | 20 ++++--------------
>  util/Makefile.objs        |  2 +-
>  5 files changed, 81 insertions(+), 22 deletions(-)
>  create mode 100644 include/qemu/mmap-alloc.h
>  create mode 100644 util/mmap-alloc.c
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Regarding my request to add comments in patch 2, feel free to add them
directly in patch 4 instead.

Paolo

* Re: [Qemu-devel] [PATCH repost 4/4] exec: factor out duplicate mmap code
From: Marc-André Lureau @ 2015-09-30 13:12 UTC
  To: Michael S. Tsirkin; +Cc: Peter Maydell, QEMU, Paolo Bonzini

Hi

On Sun, Sep 27, 2015 at 12:14 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> Anonymous and file-backed RAM allocation are now almost exactly the same.
>
> Reduce code duplication by moving RAM mmap code out of oslib-posix.c and
> exec.c.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

This patch is failing vhost-user-test:

x86_64/vhost-user/read-guest-mem: **
ERROR:tests/vhost-user-test.c:248:read_guest_mem: assertion failed (a
== b): (4026597203 == 0)


-- 
Marc-André Lureau
