* [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated)
@ 2008-11-20 11:30 Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
` (12 more replies)
0 siblings, 13 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
The kmemleak (visible) activity has been pretty quiet for the past
year. I've actually been working on implementing some of the comments
received, adding support for the slob and slub allocators and trying
it on various kernel versions. I found myself spending a significant
amount of time identifying false positives caused by pointer
aliasing. Because of that, I decided to track incoming pointers to any
location inside an allocated block rather than just the declared
aliases, leading to cleaner code and far fewer annotations for false
positives.
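To illustrate the difference (a user-space sketch only; the struct and
helper names below are hypothetical, not the patch's API), treating any
value landing inside [start, start + size) as a reference looks like this:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical record for one tracked allocation. */
struct object {
	unsigned long start;	/* block start address */
	size_t size;		/* block size in bytes */
};

/*
 * An incoming value references the object if it points anywhere
 * inside the block, not only at its start address.
 */
static int points_into(const struct object *obj, unsigned long ptr)
{
	return ptr >= obj->start && ptr < obj->start + obj->size;
}
```

The kernel code below implements this interval lookup with a priority
search tree keyed on [start, last].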
Kmemleak can also be found in a branch on this git tree:
git://linux-arm.org/linux-2.6.git kmemleak
The main changes (for those who remember the original features):
- it now uses a priority search tree, making it easier to look up
intervals rather than just fixed values (the initial implementation
used a radix tree, later changed to a hash array because of the
kmem_cache_alloc calls in the former)
- internal memory allocator to avoid recursive calls into
kmemleak. This is a simple lock-free, per-cpu allocator using
pages. The number of pages allocated is bounded, though there could
be (very unlikely) situations on SMP systems where page occupancy
isn't optimal
- support for all three memory allocators - slab, slob and slub
- finer-grained locking - there is no global lock held during memory
scanning
- more information reported for leaked objects - current task's
command line and pid, jiffies and the stack trace
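As a rough sketch of how such a page-based allocator sizes its objects
(the constants and helper names here are invented for illustration; the
real code uses L1_CACHE_ALIGN and a struct fast_cache_page header):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE	4096
#define CACHE_LINE	64

/*
 * Each object is a free-list link plus the payload, rounded up to a
 * cache line so that objects freed from other CPUs don't share lines.
 */
static size_t aligned_obj_size(size_t payload)
{
	size_t raw = 2 * sizeof(void *) + payload;	/* link + payload */

	return (raw + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
}

/* Objects carved from one page, after a (hypothetical) page header. */
static size_t objs_per_page(size_t payload, size_t header)
{
	return (PAGE_SIZE - header) / aligned_obj_size(payload);
}
```

Because the object size is fixed per cache, a whole page can be returned
to the system as soon as all of its objects are back on a free list.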
Things still to be done:
- kernel thread to scan and report leaked objects periodically
(currently done only when reading the /sys/kernel/debug/memleak
file)
- run-time and boot-time configuration like task stacks scanning,
disabling kmemleak, enabling/disabling the automatic scanning
Scanning time and the number of false negatives could be improved by
only scanning locations containing outgoing pointers. I did some tests
(not finished yet) to automatically ignore, in subsequent scans, areas
of memory that were found not to contain pointer-like values (or NULL)
during a first scan.
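That experiment can be sketched in user space as a per-word mask
recorded on the first pass (all names and sizes below are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_WORDS	1024

/*
 * Hypothetical per-block bitmap: a set bit means the word held a
 * pointer-like value (or NULL) on the first scan and must be
 * revisited by subsequent scans; clear bits are skipped.
 */
static unsigned char scan_mask[MAX_WORDS / 8];

static void mask_set(size_t i)	{ scan_mask[i / 8] |= 1u << (i % 8); }
static int  mask_test(size_t i)	{ return scan_mask[i / 8] & (1u << (i % 8)); }

/* First pass: remember which words look like pointers (or NULL). */
static void first_scan(const unsigned long *block, size_t nwords,
		       unsigned long min_addr, unsigned long max_addr)
{
	size_t i;

	memset(scan_mask, 0, sizeof(scan_mask));
	for (i = 0; i < nwords; i++)
		if (block[i] == 0 ||
		    (block[i] >= min_addr && block[i] < max_addr))
			mask_set(i);
}

/* Subsequent passes would only visit the marked words. */
static size_t rescan_count(size_t nwords)
{
	size_t i, n = 0;

	for (i = 0; i < nwords; i++)
		if (mask_test(i))
			n++;
	return n;
}
```

NULL words stay marked because they may hold a real pointer later,
while words holding non-pointer data can be skipped on rescans.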
Thanks for your comments.
Catalin Marinas (11):
kmemleak: Add the corresponding MAINTAINERS entry
kmemleak: Simple testing module for kmemleak
kmemleak: Keep the __init functions after initialization
kmemleak: Enable the building of the memory leak detector
kmemleak: Remove some of the kmemleak false positives
kmemleak: Add support for ARM
kmemleak: Add support for i386
kmemleak: Add modules support
kmemleak: Add the memory allocation/freeing hooks
kmemleak: Add documentation on the memory leak detector
kmemleak: Add the base support
Documentation/kmemleak.txt | 125 +++++
MAINTAINERS | 6
arch/arm/kernel/vmlinux.lds.S | 2
arch/x86/kernel/vmlinux_32.lds.S | 1
drivers/char/vt.c | 5
include/linux/init.h | 6
include/linux/memleak.h | 60 ++
include/linux/percpu.h | 5
init/main.c | 4
kernel/module.c | 50 ++
lib/Kconfig.debug | 46 ++
mm/Makefile | 2
mm/memleak-test.c | 102 ++++
mm/memleak.c | 1012 ++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 3
mm/slab.c | 9
mm/slob.c | 15 -
mm/slub.c | 3
mm/vmalloc.c | 25 +
19 files changed, 1473 insertions(+), 8 deletions(-)
create mode 100644 Documentation/kmemleak.txt
create mode 100644 include/linux/memleak.h
create mode 100644 mm/memleak-test.c
create mode 100644 mm/memleak.c
--
Catalin
* [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
@ 2008-11-20 11:30 ` Catalin Marinas
2008-11-20 11:58 ` Ingo Molnar
` (2 more replies)
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 02/11] kmemleak: Add documentation on the memory leak detector Catalin Marinas
` (11 subsequent siblings)
12 siblings, 3 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
This patch adds the base support for the kernel memory leak
detector. It traces memory allocation/freeing in a way similar to
Boehm's conservative garbage collector, the difference being that
the unreferenced objects are not freed but only reported via
/sys/kernel/debug/memleak. Enabling this feature introduces an
overhead to memory allocations.
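The tracing scheme can be sketched in user space as follows (all names
are hypothetical; the real scanner uses the object search tree and
per-object locking):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical tracked object, mirroring the white/gray colouring:
 * white = count < ref_count (possible leak), gray = referenced.
 */
struct tracked {
	unsigned long start;
	size_t size;
	int ref_count;	/* minimum expected references */
	int count;	/* references found by the scan */
};

/*
 * One conservative pass over a root area: any word whose value lands
 * inside a tracked block counts as a reference to that block.
 */
static void scan_roots(const unsigned long *roots, size_t nwords,
		       struct tracked *objs, size_t nobjs)
{
	size_t i, j;

	for (j = 0; j < nobjs; j++)
		objs[j].count = 0;
	for (i = 0; i < nwords; i++)
		for (j = 0; j < nobjs; j++)
			if (roots[i] >= objs[j].start &&
			    roots[i] <  objs[j].start + objs[j].size)
				objs[j].count++;
}

/* An object is reported (not freed) when it stays white. */
static int is_leak(const struct tracked *obj)
{
	return obj->count < obj->ref_count;
}
```

Unlike Boehm's collector, a white object is merely reported through the
debugfs file; nothing is reclaimed behind the allocator's back.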
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
include/linux/memleak.h | 60 +++
init/main.c | 4
mm/memleak.c | 1012 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 1075 insertions(+), 1 deletions(-)
create mode 100644 include/linux/memleak.h
create mode 100644 mm/memleak.c
diff --git a/include/linux/memleak.h b/include/linux/memleak.h
new file mode 100644
index 0000000..29b3ecb
--- /dev/null
+++ b/include/linux/memleak.h
@@ -0,0 +1,60 @@
+/*
+ * include/linux/memleak.h
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __MEMLEAK_H
+#define __MEMLEAK_H
+
+#ifdef CONFIG_DEBUG_MEMLEAK
+
+extern void memleak_init(void);
+extern void memleak_alloc(const void *ptr, size_t size, int ref_count);
+extern void memleak_free(const void *ptr);
+extern void memleak_padding(const void *ptr, unsigned long offset, size_t size);
+extern void memleak_not_leak(const void *ptr);
+extern void memleak_ignore(const void *ptr);
+extern void memleak_scan_area(const void *ptr, unsigned long offset, size_t length);
+
+static inline void memleak_erase(void **ptr)
+{
+ *ptr = NULL;
+}
+
+#else
+
+static inline void memleak_padding(const void *ptr, unsigned long offset, size_t size) { }
+
+static inline void memleak_init(void)
+{ }
+static inline void memleak_alloc(const void *ptr, size_t size, int ref_count)
+{ }
+static inline void memleak_free(const void *ptr)
+{ }
+static inline void memleak_not_leak(const void *ptr)
+{ }
+static inline void memleak_ignore(const void *ptr)
+{ }
+static inline void memleak_scan_area(const void *ptr, unsigned long offset, size_t length)
+{ }
+static inline void memleak_erase(void **ptr)
+{ }
+
+#endif /* CONFIG_DEBUG_MEMLEAK */
+
+#endif /* __MEMLEAK_H */
diff --git a/init/main.c b/init/main.c
index 7e117a2..e7f4d8c 100644
--- a/init/main.c
+++ b/init/main.c
@@ -63,6 +63,7 @@
#include <linux/signal.h>
#include <linux/idr.h>
#include <linux/ftrace.h>
+#include <linux/memleak.h>
#include <asm/io.h>
#include <asm/bugs.h>
@@ -652,6 +653,8 @@ asmlinkage void __init start_kernel(void)
mem_init();
enable_debug_pagealloc();
cpu_hotplug_init();
+ prio_tree_init();
+ memleak_init();
kmem_cache_init();
debug_objects_mem_init();
idr_init_cache();
@@ -662,7 +665,6 @@ asmlinkage void __init start_kernel(void)
calibrate_delay();
pidmap_init();
pgtable_cache_init();
- prio_tree_init();
anon_vma_init();
#ifdef CONFIG_X86
if (efi_enabled)
diff --git a/mm/memleak.c b/mm/memleak.c
new file mode 100644
index 0000000..8fb5260
--- /dev/null
+++ b/mm/memleak.c
@@ -0,0 +1,1012 @@
+/*
+ * mm/memleak.c
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/jiffies.h>
+#include <linux/module.h>
+#include <linux/prio_tree.h>
+#include <linux/gfp.h>
+#include <linux/kallsyms.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/cpumask.h>
+#include <linux/spinlock.h>
+#include <linux/rcupdate.h>
+#include <linux/stacktrace.h>
+#include <linux/cache.h>
+#include <linux/percpu.h>
+#include <linux/lockdep.h>
+
+#include <asm/sections.h>
+#include <asm/processor.h>
+#include <asm/thread_info.h>
+#include <asm/atomic.h>
+
+#include <linux/memleak.h>
+
+/*
+ * kmemleak configuration and common defines
+ */
+#define MAX_TRACE 16 /* stack trace length */
+#define REPORT_THLD 1 /* unreferenced reporting threshold */
+#define REPORTS_NR 100 /* maximum number of reported leaks */
+#undef SCAN_TASK_STACKS /* scan the task kernel stacks */
+#undef REPORT_ORPHAN_FREEING /* notify when freeing orphan objects */
+#undef FAST_CACHE_STATS /* fast_cache statistics */
+
+#define BYTES_PER_WORD sizeof(void *)
+#define MSECS_SCAN_YIELD 100
+
+/*
+ * Simple lock-free memory allocator using pages. Objects are
+ * allocated on a CPU but can be freed on a different one
+ */
+struct fast_cache {
+ struct list_head free_list[NR_CPUS];
+ size_t obj_size;
+ int objs_per_page;
+#ifdef FAST_CACHE_STATS
+ int pages_nr[NR_CPUS];
+ int free_nr[NR_CPUS];
+#endif
+};
+
+/*
+ * Cache page information
+ */
+struct fast_cache_page {
+ int free_nr[NR_CPUS];
+ char data[0] ____cacheline_aligned_in_smp;
+};
+
+#define entry_to_page(entry) \
+ ((struct fast_cache_page *)((unsigned long)(entry) & ~(PAGE_SIZE - 1)))
+
+#ifdef CONFIG_SMP
+#define cache_line_align(x) L1_CACHE_ALIGN(x)
+#else
+#define cache_line_align(x) (x)
+#endif
+
+#ifdef FAST_CACHE_STATS
+#define fast_cache_inc_pages(cache, cpu) ((cache)->pages_nr[cpu]++)
+#define fast_cache_dec_pages(cache, cpu) ((cache)->pages_nr[cpu]--)
+#define fast_cache_inc_free(cache, cpu) ((cache)->free_nr[cpu]++)
+#define fast_cache_dec_free(cache, cpu) ((cache)->free_nr[cpu]--)
+static void fast_cache_dump_stats(struct fast_cache *cache, const char *name)
+{
+ unsigned int cpu = get_cpu();
+ unsigned long flags;
+
+ local_irq_save(flags);
+ pr_info("kmemleak: %s statistics\n", name);
+ pr_info(" obj_size: %zu\n", cache->obj_size);
+ pr_info(" objs_per_page: %d\n", cache->objs_per_page);
+ for_each_online_cpu(cpu) {
+ pr_info(" CPU: %d\n", cpu);
+ pr_info(" pages_nr: %d\n", cache->pages_nr[cpu]);
+ pr_info(" free_nr: %d\n", cache->free_nr[cpu]);
+ }
+ local_irq_restore(flags);
+ put_cpu();
+}
+#else
+#define fast_cache_inc_pages(cache, cpu)
+#define fast_cache_dec_pages(cache, cpu)
+#define fast_cache_inc_free(cache, cpu)
+#define fast_cache_dec_free(cache, cpu)
+static inline void fast_cache_dump_stats(struct fast_cache *cache, const char *name)
+{ }
+#endif
+
+/*
+ * Initialize the cache. This function must be called before any
+ * tracked memory allocations take place
+ */
+static void fast_cache_init(struct fast_cache *cache, size_t size)
+{
+ unsigned int cpu;
+
+ for_each_possible_cpu(cpu) {
+ INIT_LIST_HEAD(&cache->free_list[cpu]);
+#ifdef FAST_CACHE_STATS
+ cache->pages_nr[cpu] = 0;
+ cache->free_nr[cpu] = 0;
+#endif
+ }
+ cache->obj_size = cache_line_align(sizeof(struct list_head) + size);
+ cache->objs_per_page = (PAGE_SIZE - sizeof(struct fast_cache_page)) /
+ cache->obj_size;
+}
+
+/*
+ * Expand the free list for the current CPU. Called with interrupts
+ * and preemption disabled
+ */
+static inline void __fast_cache_grow(struct fast_cache *cache, unsigned int cpu)
+{
+ struct fast_cache_page *page;
+ void *pos, *last;
+ unsigned int c;
+
+ page = (struct fast_cache_page *)__get_free_page(GFP_ATOMIC);
+ if (!page)
+ panic("kmemleak: cannot allocate page for fast_cache\n");
+ fast_cache_inc_pages(cache, cpu);
+
+ for_each_possible_cpu(c)
+ page->free_nr[c] = 0;
+ page->free_nr[cpu] = cache->objs_per_page;
+
+ last = (void *)page + PAGE_SIZE - cache->obj_size;
+ for (pos = page->data; pos <= last; pos += cache->obj_size) {
+ struct list_head *entry = pos;
+ list_add_tail(entry, &cache->free_list[cpu]);
+ fast_cache_inc_free(cache, cpu);
+ }
+}
+
+/*
+ * Shrink the free list for the current CPU and free the specified
+ * page. Called with interrupts and preemption disabled
+ */
+static inline void __fast_cache_shrink(struct fast_cache *cache, unsigned int cpu,
+ struct fast_cache_page *page)
+{
+ void *pos;
+ void *last = (void *)page + PAGE_SIZE - cache->obj_size;
+
+ for (pos = page->data; pos <= last; pos += cache->obj_size) {
+ struct list_head *entry = pos;
+ /* entry present only in cache->free_list[cpu] */
+ list_del(entry);
+ fast_cache_dec_free(cache, cpu);
+ }
+
+ free_page((unsigned long)page);
+ fast_cache_dec_pages(cache, cpu);
+}
+
+/*
+ * Object allocation
+ */
+static void *fast_cache_alloc(struct fast_cache *cache)
+{
+ unsigned int cpu = get_cpu();
+ unsigned long flags;
+ struct list_head *entry;
+ struct fast_cache_page *page;
+
+ local_irq_save(flags);
+
+ if (list_empty(&cache->free_list[cpu]))
+ __fast_cache_grow(cache, cpu);
+
+ entry = cache->free_list[cpu].next;
+ page = entry_to_page(entry);
+ list_del(entry);
+ page->free_nr[cpu]--;
+ BUG_ON(page->free_nr[cpu] < 0);
+ fast_cache_dec_free(cache, cpu);
+
+ local_irq_restore(flags);
+ put_cpu_no_resched();
+
+ return (void *)(entry + 1);
+}
+
+/*
+ * Object freeing
+ */
+static void fast_cache_free(struct fast_cache *cache, void *obj)
+{
+ unsigned int cpu = get_cpu();
+ unsigned long flags;
+ struct list_head *entry = (struct list_head *)obj - 1;
+ struct fast_cache_page *page = entry_to_page(entry);
+
+ local_irq_save(flags);
+
+ list_add(entry, &cache->free_list[cpu]);
+ page->free_nr[cpu]++;
+ BUG_ON(page->free_nr[cpu] > cache->objs_per_page);
+ fast_cache_inc_free(cache, cpu);
+
+ if (page->free_nr[cpu] == cache->objs_per_page)
+ __fast_cache_shrink(cache, cpu, page);
+
+ local_irq_restore(flags);
+ put_cpu_no_resched();
+}
+
+/* scanning area inside a memory block */
+struct memleak_scan_area {
+ struct hlist_node node;
+ unsigned long offset;
+ size_t length;
+};
+
+/* the main allocation tracking object */
+struct memleak_object {
+ spinlock_t lock;
+ unsigned long flags;
+ struct list_head object_list;
+ struct list_head gray_list;
+ struct prio_tree_node tree_node;
+ struct rcu_head rcu; /* used for object_list lockless traversal */
+ atomic_t use_count; /* internal usage count */
+ unsigned long pointer;
+ size_t size;
+ int ref_count; /* the minimum encounters of the pointer */
+ int count; /* the encounters of the pointer */
+ int report_thld; /* the unreferenced reporting threshold */
+ struct hlist_head area_list; /* areas to be scanned (or empty for all) */
+ unsigned long trace[MAX_TRACE];
+ unsigned int trace_len;
+ unsigned long jiffies; /* creation timestamp */
+ pid_t pid; /* pid of the current task */
+ char comm[TASK_COMM_LEN]; /* executable name */
+};
+
+/* The list of all allocated objects */
+static LIST_HEAD(object_list);
+static DEFINE_SPINLOCK(object_list_lock);
+/* The list of the gray objects */
+static LIST_HEAD(gray_list);
+/* prio search tree for object boundaries */
+static struct prio_tree_root object_tree_root;
+static DEFINE_RWLOCK(object_tree_lock);
+
+/* allocation pools */
+static struct fast_cache object_cache;
+static struct fast_cache scan_area_cache;
+
+static atomic_t memleak_enabled = ATOMIC_INIT(0);
+static int reported_leaks;
+
+/* minimum and maximum address that may be valid pointers */
+static unsigned long min_addr = ~0;
+static unsigned long max_addr;
+
+/* used for yielding the CPU to other tasks during scanning */
+static unsigned long next_scan_yield;
+
+/* object flags */
+#define OBJECT_ALLOCATED 0x1
+
+/*
+ * Object colors, encoded with count and ref_count:
+ * - white - orphan object, i.e. not enough references to it (ref_count >= 1)
+ * - gray - referred at least once and therefore non-orphan (ref_count == 0)
+ * - black - ignore; it doesn't contain references (text section) (ref_count == -1)
+ */
+static inline int color_white(const struct memleak_object *object)
+{
+ return object->count != -1 && object->count < object->ref_count;
+}
+
+static inline int color_gray(const struct memleak_object *object)
+{
+ return object->ref_count != -1 && object->count >= object->ref_count;
+}
+
+static inline int color_black(const struct memleak_object *object)
+{
+ return object->ref_count == -1;
+}
+
+static void dump_object_info(struct memleak_object *object)
+{
+ struct stack_trace trace;
+
+ trace.nr_entries = object->trace_len;
+ trace.entries = object->trace;
+
+ pr_notice("kmemleak: object 0x%08lx (size %zu):\n",
+ object->tree_node.start, object->size);
+ pr_notice(" comm \"%s\", pid %d, jiffies %lu\n",
+ object->comm, object->pid, object->jiffies);
+ pr_notice(" ref_count = %d\n", object->ref_count);
+ pr_notice(" count = %d\n", object->count);
+ pr_notice(" backtrace:\n");
+ print_stack_trace(&trace, 4);
+}
+
+static struct memleak_object *lookup_object(unsigned long ptr, int alias)
+{
+ struct prio_tree_node *node;
+ struct prio_tree_iter iter;
+ struct memleak_object *object;
+
+ prio_tree_iter_init(&iter, &object_tree_root, ptr, ptr);
+ node = prio_tree_next(&iter);
+ if (node) {
+ object = prio_tree_entry(node, struct memleak_object, tree_node);
+ if (!alias && object->pointer != ptr) {
+ pr_warning("kmemleak: found object by alias\n");
+ object = NULL;
+ }
+ } else
+ object = NULL;
+
+ return object;
+}
+
+/*
+ * return 1 if successful or 0 otherwise
+ */
+static inline int get_object(struct memleak_object *object)
+{
+ return atomic_inc_not_zero(&object->use_count);
+}
+
+static void free_object_rcu(struct rcu_head *rcu)
+{
+ struct hlist_node *elem, *tmp;
+ struct memleak_scan_area *area;
+ struct memleak_object *object =
+ container_of(rcu, struct memleak_object, rcu);
+
+ /* once use_count is 0, there is no code accessing the object */
+ hlist_for_each_entry_safe(area, elem, tmp, &object->area_list, node) {
+ hlist_del(elem);
+ fast_cache_free(&scan_area_cache, area);
+ }
+ fast_cache_free(&object_cache, object);
+}
+
+static void put_object(struct memleak_object *object)
+{
+ unsigned long flags;
+
+ if (!atomic_dec_and_test(&object->use_count))
+ return;
+
+ /* should only get here after delete_object was called */
+ BUG_ON(object->flags & OBJECT_ALLOCATED);
+
+ spin_lock_irqsave(&object_list_lock, flags);
+ /* the last reference to this object */
+ list_del_rcu(&object->object_list);
+ call_rcu(&object->rcu, free_object_rcu);
+ spin_unlock_irqrestore(&object_list_lock, flags);
+}
+
+static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+
+ read_lock_irqsave(&object_tree_lock, flags);
+ object = lookup_object(ptr, alias);
+ if (object)
+ get_object(object);
+ read_unlock_irqrestore(&object_tree_lock, flags);
+
+ return object;
+}
+
+/*
+ * Insert a pointer into the object search tree
+ */
+static inline void create_object(unsigned long ptr, size_t size, int ref_count)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+ struct prio_tree_node *node;
+ struct stack_trace trace;
+
+ object = fast_cache_alloc(&object_cache);
+ if (!object)
+ panic("kmemleak: cannot allocate a memleak_object structure\n");
+
+ INIT_LIST_HEAD(&object->object_list);
+ INIT_LIST_HEAD(&object->gray_list);
+ INIT_HLIST_HEAD(&object->area_list);
+ spin_lock_init(&object->lock);
+ atomic_set(&object->use_count, 1);
+ object->flags = OBJECT_ALLOCATED;
+ object->pointer = ptr;
+ object->size = size;
+ object->ref_count = ref_count;
+ object->count = -1; /* black color initially */
+ object->report_thld = REPORT_THLD;
+ object->jiffies = jiffies;
+ if (in_irq()) {
+ object->pid = 0;
+ strncpy(object->comm, "hardirq", TASK_COMM_LEN);
+ } else if (in_softirq()) {
+ object->pid = 0;
+ strncpy(object->comm, "softirq", TASK_COMM_LEN);
+ } else {
+ object->pid = current->pid;
+ strncpy(object->comm, current->comm, TASK_COMM_LEN);
+ }
+
+ trace.max_entries = MAX_TRACE;
+ trace.nr_entries = 0;
+ trace.entries = object->trace;
+ trace.skip = 1;
+ save_stack_trace(&trace);
+
+ object->trace_len = trace.nr_entries;
+
+ INIT_PRIO_TREE_NODE(&object->tree_node);
+ object->tree_node.start = ptr;
+ object->tree_node.last = ptr + size - 1;
+
+ if (ptr < min_addr)
+ min_addr = ptr;
+ if (ptr + size > max_addr)
+ max_addr = ptr + size;
+ /* update the boundaries before inserting the object in the
+ * prio search tree */
+ smp_mb();
+
+ write_lock_irqsave(&object_tree_lock, flags);
+ node = prio_tree_insert(&object_tree_root, &object->tree_node);
+ if (node != &object->tree_node) {
+ unsigned long flags;
+
+ pr_warning("kmemleak: existing pointer\n");
+ dump_stack();
+
+ object = lookup_object(ptr, 1);
+ spin_lock_irqsave(&object->lock, flags);
+ dump_object_info(object);
+ spin_unlock_irqrestore(&object->lock, flags);
+
+ panic("kmemleak: cannot insert 0x%lx into the object search tree\n",
+ ptr);
+ }
+ write_unlock_irqrestore(&object_tree_lock, flags);
+
+ spin_lock_irqsave(&object_list_lock, flags);
+ list_add_tail_rcu(&object->object_list, &object_list);
+ spin_unlock_irqrestore(&object_list_lock, flags);
+}
+
+/*
+ * Remove a pointer from the object search tree
+ */
+static inline void delete_object(unsigned long ptr)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+
+ write_lock_irqsave(&object_tree_lock, flags);
+ object = lookup_object(ptr, 0);
+ if (!object) {
+ pr_warning("kmemleak: freeing unknown object at 0x%08lx\n", ptr);
+ dump_stack();
+ write_unlock_irqrestore(&object_tree_lock, flags);
+ return;
+ }
+ prio_tree_remove(&object_tree_root, &object->tree_node);
+ write_unlock_irqrestore(&object_tree_lock, flags);
+
+ BUG_ON(!(object->flags & OBJECT_ALLOCATED));
+
+ spin_lock_irqsave(&object->lock, flags);
+ object->flags &= ~OBJECT_ALLOCATED;
+#ifdef REPORT_ORPHAN_FREEING
+ if (color_white(object)) {
+ pr_warning("kmemleak: freeing orphan object 0x%08lx\n", ptr);
+ dump_stack();
+ dump_object_info(object);
+ }
+#endif
+ object->pointer = 0;
+ spin_unlock_irqrestore(&object->lock, flags);
+
+ put_object(object);
+}
+
+/*
+ * Make an object permanently gray (false positive)
+ */
+static inline void make_gray_object(unsigned long ptr)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+
+ object = find_and_get_object(ptr, 0);
+ if (!object) {
+ dump_stack();
+ panic("kmemleak: graying unknown object at 0x%08lx\n", ptr);
+ }
+
+ spin_lock_irqsave(&object->lock, flags);
+ object->ref_count = 0;
+ spin_unlock_irqrestore(&object->lock, flags);
+ put_object(object);
+}
+
+/*
+ * Mark the object as black
+ */
+static inline void make_black_object(unsigned long ptr)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+
+ object = find_and_get_object(ptr, 0);
+ if (!object) {
+ dump_stack();
+ panic("kmemleak: blacking unknown object at 0x%08lx\n", ptr);
+ }
+
+ spin_lock_irqsave(&object->lock, flags);
+ object->ref_count = -1;
+ spin_unlock_irqrestore(&object->lock, flags);
+ put_object(object);
+}
+
+/*
+ * Add a scanning area to the object
+ */
+static inline void add_scan_area(unsigned long ptr, unsigned long offset, size_t length)
+{
+ unsigned long flags;
+ struct memleak_object *object;
+ struct memleak_scan_area *area;
+
+ object = find_and_get_object(ptr, 0);
+ if (!object) {
+ dump_stack();
+ panic("kmemleak: adding scan area to unknown object at 0x%08lx\n", ptr);
+ }
+
+ spin_lock_irqsave(&object->lock, flags);
+ if (offset + length > object->size) {
+ dump_stack();
+ dump_object_info(object);
+ panic("kmemleak: scan area larger than object 0x%08lx\n", ptr);
+ }
+
+ area = fast_cache_alloc(&scan_area_cache);
+ if (!area)
+ panic("kmemleak: cannot allocate a scan area\n");
+
+ INIT_HLIST_NODE(&area->node);
+ area->offset = offset;
+ area->length = length;
+
+ hlist_add_head(&area->node, &object->area_list);
+ spin_unlock_irqrestore(&object->lock, flags);
+ put_object(object);
+}
+
+/*
+ * Allocation function hook
+ */
+void memleak_alloc(const void *ptr, size_t size, int ref_count)
+{
+ pr_debug("%s(0x%p, %zu, %d)\n", __FUNCTION__, ptr, size, ref_count);
+
+ if (!atomic_read(&memleak_enabled))
+ return;
+ if (!ptr)
+ return;
+
+ create_object((unsigned long)ptr, size, ref_count);
+}
+EXPORT_SYMBOL_GPL(memleak_alloc);
+
+/*
+ * Freeing function hook
+ */
+void memleak_free(const void *ptr)
+{
+ pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
+
+ if (!atomic_read(&memleak_enabled))
+ return;
+ if (!ptr)
+ return;
+
+ delete_object((unsigned long)ptr);
+}
+EXPORT_SYMBOL_GPL(memleak_free);
+
+/*
+ * Mark an object as a false positive
+ */
+void memleak_not_leak(const void *ptr)
+{
+ pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
+
+ if (!atomic_read(&memleak_enabled))
+ return;
+ if (!ptr)
+ return;
+
+ make_gray_object((unsigned long)ptr);
+}
+EXPORT_SYMBOL(memleak_not_leak);
+
+/*
+ * Ignore this memory object
+ */
+void memleak_ignore(const void *ptr)
+{
+ pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
+
+ if (!atomic_read(&memleak_enabled))
+ return;
+ if (!ptr)
+ return;
+
+ make_black_object((unsigned long)ptr);
+}
+EXPORT_SYMBOL(memleak_ignore);
+
+/*
+ * Add a scanning area to an object
+ */
+void memleak_scan_area(const void *ptr, unsigned long offset, size_t length)
+{
+ pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
+
+ if (!atomic_read(&memleak_enabled))
+ return;
+ if (!ptr)
+ return;
+
+ add_scan_area((unsigned long)ptr, offset, length);
+}
+EXPORT_SYMBOL(memleak_scan_area);
+
+static inline void scan_yield(void)
+{
+ BUG_ON(in_atomic());
+
+ if (time_is_before_eq_jiffies(next_scan_yield)) {
+ schedule();
+ next_scan_yield = jiffies + msecs_to_jiffies(MSECS_SCAN_YIELD);
+ }
+}
+
+/*
+ * Scan a block of memory (exclusive range) for pointers and move
+ * those found to the gray list
+ */
+static void scan_block(void *_start, void *_end, struct memleak_object *scanned)
+{
+ unsigned long *ptr;
+ unsigned long *start = PTR_ALIGN(_start, BYTES_PER_WORD);
+ unsigned long *end = _end - (BYTES_PER_WORD - 1);
+
+ for (ptr = start; ptr < end; ptr++) {
+ unsigned long flags;
+ unsigned long pointer = *ptr;
+ struct memleak_object *object;
+
+ if (!scanned)
+ scan_yield();
+
+ /* the boundaries check doesn't need to be precise
+ * (hence no locking) since orphan objects need to
+ * pass a scanning threshold before being reported */
+ if (pointer < min_addr || pointer >= max_addr)
+ continue;
+
+ object = find_and_get_object(pointer, 1);
+ if (!object)
+ continue;
+ if (object == scanned) {
+ /* self referenced */
+ put_object(object);
+ continue;
+ }
+
+ /* avoid the lockdep recursive warning on object->lock
+ * being previously acquired in scan_object(). These
+ * locks are enclosed by a mutex acquired in seq_open */
+ spin_lock_irqsave_nested(&object->lock, flags, SINGLE_DEPTH_NESTING);
+ if (!color_white(object)) {
+ /* non-orphan or ignored */
+ spin_unlock_irqrestore(&object->lock, flags);
+ put_object(object);
+ continue;
+ }
+
+ object->count++;
+ if (color_gray(object)) {
+ /* the object became gray, add it to the list */
+ object->report_thld++;
+ list_add_tail(&object->gray_list, &gray_list);
+ } else
+ put_object(object);
+ spin_unlock_irqrestore(&object->lock, flags);
+ }
+}
+
+/*
+ * Scan a memory block represented by a memleak_object
+ */
+static inline void scan_object(struct memleak_object *object)
+{
+ struct memleak_scan_area *area;
+ struct hlist_node *elem;
+ unsigned long flags;
+
+ spin_lock_irqsave(&object->lock, flags);
+
+ /* freed object */
+ if (!(object->flags & OBJECT_ALLOCATED))
+ goto out;
+
+ if (hlist_empty(&object->area_list))
+ scan_block((void *)object->pointer,
+ (void *)(object->pointer + object->size), object);
+ else
+ hlist_for_each_entry(area, elem, &object->area_list, node)
+ scan_block((void *)(object->pointer + area->offset),
+ (void *)(object->pointer + area->offset
+ + area->length), object);
+
+ out:
+ spin_unlock_irqrestore(&object->lock, flags);
+}
+
+/*
+ * Scan the memory and print the orphan objects
+ */
+static void memleak_scan(void)
+{
+ unsigned long flags;
+ struct memleak_object *object, *tmp;
+ int i;
+#ifdef SCAN_TASK_STACKS
+ struct task_struct *task;
+#endif
+
+ fast_cache_dump_stats(&object_cache, "object_cache");
+ fast_cache_dump_stats(&scan_area_cache, "scan_area_cache");
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(object, &object_list, object_list) {
+ spin_lock_irqsave(&object->lock, flags);
+
+#ifdef DEBUG
+ /* with a few exceptions there should be a maximum of
+ * 1 reference to any object at this point */
+ if (atomic_read(&object->use_count) > 1) {
+ pr_debug("kmemleak: object->use_count = %d\n",
+ atomic_read(&object->use_count));
+ dump_object_info(object);
+ }
+#endif
+
+ /* reset the reference count (whiten the object) */
+ object->count = 0;
+ if (color_gray(object) && get_object(object))
+ list_add_tail(&object->gray_list, &gray_list);
+ else
+ object->report_thld--;
+
+ spin_unlock_irqrestore(&object->lock, flags);
+ }
+ rcu_read_unlock();
+
+ /* data/bss scanning */
+ scan_block(_sdata, _edata, NULL);
+ scan_block(__bss_start, __bss_stop, NULL);
+
+#ifdef CONFIG_SMP
+ /* per-cpu scanning */
+ for_each_possible_cpu(i)
+ scan_block(__per_cpu_start + per_cpu_offset(i),
+ __per_cpu_end + per_cpu_offset(i), NULL);
+#endif
+
+ /* mem_map scanning */
+ for_each_online_node(i) {
+ struct page *page, *end;
+
+ page = NODE_MEM_MAP(i);
+ end = page + NODE_DATA(i)->node_spanned_pages;
+
+ scan_block(page, end, NULL);
+ }
+
+#ifdef SCAN_TASK_STACKS
+ read_lock(&tasklist_lock);
+ for_each_process(task)
+ scan_block(task_stack_page(task),
+ task_stack_page(task) + THREAD_SIZE, NULL);
+ read_unlock(&tasklist_lock);
+#endif
+
+ /* scan the objects already referenced. More objects will be
+ * referenced and, if there are no memory leaks, all the
+ * objects will be scanned. The list traversal is safe for
+ * both tail additions and removals from inside the loop. The
+ * memleak objects cannot be freed from outside the loop
+ * because their use_count was increased */
+ object = list_entry(gray_list.next, typeof(*object), gray_list);
+ while (&object->gray_list != &gray_list) {
+ scan_yield();
+
+ /* may add new objects to the list */
+ scan_object(object);
+
+ tmp = list_entry(object->gray_list.next, typeof(*object),
+ gray_list);
+
+ /* remove the object from the list and release it */
+ list_del(&object->gray_list);
+ put_object(object);
+
+ object = tmp;
+ }
+ BUG_ON(!list_empty(&gray_list));
+}
+
+static void *memleak_seq_start(struct seq_file *seq, loff_t *pos)
+{
+ struct memleak_object *object;
+ loff_t n = *pos;
+
+ if (!n) {
+ memleak_scan();
+ reported_leaks = 0;
+ }
+ if (reported_leaks >= REPORTS_NR)
+ return NULL;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(object, &object_list, object_list) {
+ if (n-- > 0)
+ continue;
+
+ if (get_object(object))
+ goto out;
+ }
+ object = NULL;
+ out:
+ rcu_read_unlock();
+ return object;
+}
+
+static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ struct list_head *n;
+ struct memleak_object *next = NULL;
+ unsigned long flags;
+
+ ++(*pos);
+ if (reported_leaks >= REPORTS_NR)
+ goto out;
+
+ spin_lock_irqsave(&object_list_lock, flags);
+ n = ((struct memleak_object *)v)->object_list.next;
+ if (n != &object_list) {
+ next = list_entry(n, struct memleak_object, object_list);
+ get_object(next);
+ }
+ spin_unlock_irqrestore(&object_list_lock, flags);
+
+ out:
+ put_object(v);
+ return next;
+}
+
+static void memleak_seq_stop(struct seq_file *seq, void *v)
+{
+ if (v)
+ put_object(v);
+}
+
+static int memleak_seq_show(struct seq_file *seq, void *v)
+{
+ struct memleak_object *object = v;
+ unsigned long flags;
+ char namebuf[KSYM_NAME_LEN + 1] = "";
+ char *modname;
+ unsigned long symsize;
+ unsigned long offset = 0;
+ int i;
+
+ spin_lock_irqsave(&object->lock, flags);
+
+ if (!color_white(object))
+ goto out;
+ /* freed in the meantime (false positive) or just allocated */
+ if (!(object->flags & OBJECT_ALLOCATED))
+ goto out;
+ if (object->report_thld >= 0)
+ goto out;
+
+ reported_leaks++;
+ seq_printf(seq, "unreferenced object 0x%08lx (size %u):\n",
+ object->pointer, object->size);
+ seq_printf(seq, " comm \"%s\", pid %d, jiffies %lu\n",
+ object->comm, object->pid, object->jiffies);
+ seq_printf(seq, " backtrace:\n");
+
+ for (i = 0; i < object->trace_len; i++) {
+ unsigned long trace = object->trace[i];
+
+ kallsyms_lookup(trace, &symsize, &offset, &modname, namebuf);
+ seq_printf(seq, " [<%08lx>] %s\n", trace, namebuf);
+ }
+
+ out:
+ spin_unlock_irqrestore(&object->lock, flags);
+ return 0;
+}
+
+static struct seq_operations memleak_seq_ops = {
+ .start = memleak_seq_start,
+ .next = memleak_seq_next,
+ .stop = memleak_seq_stop,
+ .show = memleak_seq_show,
+};
+
+static int memleak_seq_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &memleak_seq_ops);
+}
+
+static struct file_operations memleak_fops = {
+ .owner = THIS_MODULE,
+ .open = memleak_seq_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+/*
+ * Kmemleak initialization
+ */
+void __init memleak_init(void)
+{
+ fast_cache_init(&object_cache, sizeof(struct memleak_object));
+ fast_cache_init(&scan_area_cache, sizeof(struct memleak_scan_area));
+
+ INIT_PRIO_TREE_ROOT(&object_tree_root);
+
+ /* this is the point where tracking allocations is safe.
+ * Scanning is only available later */
+ atomic_set(&memleak_enabled, 1);
+}
+
+/*
+ * Late initialization function
+ */
+int __init memleak_late_init(void)
+{
+ struct dentry *dentry;
+
+ dentry = debugfs_create_file("memleak", S_IRUGO, NULL, NULL,
+ &memleak_fops);
+ if (!dentry)
+ return -ENOMEM;
+
+ pr_info("Kernel memory leak detector initialized\n");
+
+ return 0;
+}
+late_initcall(memleak_late_init);
^ permalink raw reply related [flat|nested] 37+ messages in thread
* [PATCH 2.6.28-rc5 02/11] kmemleak: Add documentation on the memory leak detector
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
@ 2008-11-20 11:30 ` Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks Catalin Marinas
` (10 subsequent siblings)
12 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
This patch adds the Documentation/kmemleak.txt file with some
information about how kmemleak works.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
Documentation/kmemleak.txt | 125 ++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 125 insertions(+), 0 deletions(-)
create mode 100644 Documentation/kmemleak.txt
diff --git a/Documentation/kmemleak.txt b/Documentation/kmemleak.txt
new file mode 100644
index 0000000..bf985df
--- /dev/null
+++ b/Documentation/kmemleak.txt
@@ -0,0 +1,125 @@
+Kernel Memory Leak Detector
+===========================
+
+Introduction
+------------
+
+Kmemleak provides a way of detecting possible kernel memory leaks in a
+way similar to a tracing garbage collector
+(http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tracing_garbage_collectors),
+with the difference that the orphan objects are not freed but only
+reported via /sys/kernel/debug/memleak. A similar method is used by
+the Valgrind tool (memcheck --leak-check) to detect the memory leaks
+in user-space applications.
+
+Usage
+-----
+
+CONFIG_DEBUG_MEMLEAK in "Kernel hacking" has to be enabled. To display
+the possible memory leaks:
+
+ # mount -t debugfs nodev /sys/kernel/debug/
+ # cat /sys/kernel/debug/memleak
+
+In order to reduce the run-time overhead, memory scanning is only
+performed when reading the /sys/kernel/debug/memleak file. Note that
+the orphan objects are listed in the order they were allocated, and an
+object at the beginning of the list may cause subsequent objects to be
+reported as orphans.
+
+Basic Algorithm
+---------------
+
+The memory allocations via kmalloc, vmalloc, kmem_cache_alloc and
+friends are tracked and the pointers, together with additional
+information like size and stack trace, are stored in a prio search
+tree. The corresponding freeing function calls are tracked and the
+pointers removed from the kmemleak data structures.
+
+An allocated block of memory is considered orphan if no pointer to its
+start address or to any location inside the block can be found by
+scanning the memory (including saved registers). This means that there
+might be no way for the kernel to pass the address of the allocated
+block to a freeing function and therefore the block is considered a
+memory leak.
+
+The scanning algorithm steps:
+
+ 1. mark all objects as white (remaining white objects will later be
+ considered orphan)
+ 2. scan the memory starting with the data section and stacks,
+ checking the values against the addresses stored in the prio
+ search tree. If a pointer to a white object is found, the object
+ is added to the grey list
+ 3. scan the grey objects for matching addresses (some white objects
+ can become grey and are added at the end of the grey list) until
+ the grey set is exhausted
+ 4. the remaining white objects are considered orphan and reported
+ via /sys/kernel/debug/memleak
+
+Some allocated memory blocks have pointers stored in the kernel's
+internal data structures and cannot be detected as orphans. To avoid
+false positives, kmemleak can also record the minimum number of values
+pointing to an address inside the block's address range that need to
+be found during scanning so that the block is not considered a leak.
+One example is __vmalloc().
+
+Limitations and Drawbacks
+-------------------------
+
+The biggest drawback is the reduced performance of memory allocation
+and freeing. To avoid other penalties, the memory scanning is only
+performed when the /sys/kernel/debug/memleak file is read. However,
+this tool is intended for debugging, where performance is usually not
+the most important requirement.
+
+To keep the algorithm simple, kmemleak scans for values pointing to
+any address inside a block's address range. This may lead to an
+increased number of false negatives. However, it is likely that a
+real memory leak will eventually become visible.
+
+Another source of false negatives is the data stored in non-pointer
+values. In a future version, kmemleak could scan only the pointer
+members in the allocated structures. This feature would solve many of
+the false negative cases described above.
+
+The tool can report false positives. These are cases where an
+allocated block doesn't need to be freed (some cases in the init_call
+functions), the pointer is calculated by methods other than the usual
+container_of macro, or the pointer is stored in a location not scanned
+by kmemleak.
+
+Page allocations and ioremap are not tracked. Only the ARM and i386
+architectures are currently supported.
+
+Kmemleak API
+------------
+
+See the include/linux/memleak.h header for the function prototypes.
+
+memleak_init - initialize kmemleak
+memleak_alloc - notify of a memory block allocation
+memleak_free - notify of a memory block freeing
+memleak_not_leak - mark an object as not a leak
+memleak_ignore - do not scan or report an object as leak
+memleak_scan_area - add scan areas inside a memory block
+memleak_erase - erase an old value in a pointer variable
+
+Dealing with false positives/negatives
+--------------------------------------
+
+To reduce the false negatives, kmemleak provides the memleak_ignore,
+memleak_scan_area and memleak_erase functions. The task stacks also
+increase the number of false negatives, so their scanning is not
+enabled by default.
+
+For objects known not to be leaks, kmemleak provides the
+memleak_not_leak function. The memleak_ignore function can also be
+used if the memory block is known not to contain other pointers; it
+will no longer be scanned.
+
+Some of the reported leaks are only transient because of pointers
+temporarily stored in CPU registers or on stacks. Kmemleak defines
+REPORT_THLD (defaulting to 1), the detection threshold that must be
+exceeded before a block is reported as a memory leak. This value may
+need to be increased if transient leaks are reported frequently,
+especially on SMP systems.
* [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 02/11] kmemleak: Add documentation on the memory leak detector Catalin Marinas
@ 2008-11-20 11:30 ` Catalin Marinas
2008-11-20 12:00 ` Ingo Molnar
2008-11-20 19:30 ` Pekka Enberg
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support Catalin Marinas
` (9 subsequent siblings)
12 siblings, 2 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
This patch adds the callbacks to memleak_(alloc|free) functions from
kmalloc/kfree, kmem_cache_(alloc|free), vmalloc/vfree etc.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
mm/page_alloc.c | 3 +++
mm/slab.c | 9 +++++++++
mm/slob.c | 15 +++++++++++----
mm/slub.c | 3 +++
mm/vmalloc.c | 25 ++++++++++++++++++++++---
5 files changed, 48 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d8ac014..90e7dbd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -46,6 +46,7 @@
#include <linux/page-isolation.h>
#include <linux/page_cgroup.h>
#include <linux/debugobjects.h>
+#include <linux/memleak.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
if (_hash_mask)
*_hash_mask = (1 << log2qty) - 1;
+ memleak_alloc(table, size, 1);
+
return table;
}
diff --git a/mm/slab.c b/mm/slab.c
index 0918751..ea76bcb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -112,6 +112,7 @@
#include <linux/rtmutex.h>
#include <linux/reciprocal_div.h>
#include <linux/debugobjects.h>
+#include <linux/memleak.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
@@ -2610,6 +2611,9 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
/* Slab management obj is off-slab. */
slabp = kmem_cache_alloc_node(cachep->slabp_cache,
local_flags & ~GFP_THISNODE, nodeid);
+ /* only scan the list member to avoid false negatives */
+ memleak_scan_area(slabp, offsetof(struct slab, list),
+ sizeof(struct list_head));
if (!slabp)
return NULL;
} else {
@@ -3195,6 +3199,8 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
STATS_INC_ALLOCMISS(cachep);
objp = cache_alloc_refill(cachep, flags);
}
+ /* avoid false negatives */
+ memleak_erase(&ac->entry[ac->avail]);
return objp;
}
@@ -3412,6 +3418,7 @@ __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
out:
local_irq_restore(save_flags);
ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
+ memleak_alloc(ptr, obj_size(cachep), 1);
if (unlikely((flags & __GFP_ZERO) && ptr))
memset(ptr, 0, obj_size(cachep));
@@ -3465,6 +3472,7 @@ __cache_alloc(struct kmem_cache *cachep, gfp_t flags, void *caller)
objp = __do_cache_alloc(cachep, flags);
local_irq_restore(save_flags);
objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
+ memleak_alloc(objp, obj_size(cachep), 1);
prefetchw(objp);
if (unlikely((flags & __GFP_ZERO) && objp))
@@ -3580,6 +3588,7 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp)
struct array_cache *ac = cpu_cache_get(cachep);
check_irq_off();
+ memleak_free(objp);
objp = cache_free_debugcheck(cachep, objp, __builtin_return_address(0));
/*
diff --git a/mm/slob.c b/mm/slob.c
index cb675d1..062e967 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -65,6 +65,7 @@
#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
+#include <linux/memleak.h>
#include <asm/atomic.h>
/*
@@ -463,6 +464,7 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
{
unsigned int *m;
int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+ void *ret;
if (size < PAGE_SIZE - align) {
if (!size)
@@ -472,18 +474,18 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
if (!m)
return NULL;
*m = size;
- return (void *)m + align;
+ ret = (void *)m + align;
} else {
- void *ret;
-
ret = slob_new_page(gfp | __GFP_COMP, get_order(size), node);
if (ret) {
struct page *page;
page = virt_to_page(ret);
page->private = size;
}
- return ret;
}
+
+ memleak_alloc(ret, size, 1);
+ return ret;
}
EXPORT_SYMBOL(__kmalloc_node);
@@ -493,6 +495,7 @@ void kfree(const void *block)
if (unlikely(ZERO_OR_NULL_PTR(block)))
return;
+ memleak_free(block);
sp = (struct slob_page *)virt_to_page(block);
if (slob_page(sp)) {
@@ -555,12 +558,14 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
} else if (flags & SLAB_PANIC)
panic("Cannot create slab cache %s\n", name);
+ memleak_alloc(c, sizeof(struct kmem_cache), 1);
return c;
}
EXPORT_SYMBOL(kmem_cache_create);
void kmem_cache_destroy(struct kmem_cache *c)
{
+ memleak_free(c);
slob_free(c, sizeof(struct kmem_cache));
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -577,6 +582,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
if (c->ctor)
c->ctor(b);
+ memleak_alloc(b, c->size, 1);
return b;
}
EXPORT_SYMBOL(kmem_cache_alloc_node);
@@ -599,6 +605,7 @@ static void kmem_rcu_free(struct rcu_head *head)
void kmem_cache_free(struct kmem_cache *c, void *b)
{
+ memleak_free(b);
if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
struct slob_rcu *slob_rcu;
slob_rcu = b + (c->size - sizeof(struct slob_rcu));
diff --git a/mm/slub.c b/mm/slub.c
index 7ad489a..e84ed0d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -24,6 +24,7 @@
#include <linux/kallsyms.h>
#include <linux/memory.h>
#include <linux/math64.h>
+#include <linux/memleak.h>
/*
* Lock order:
@@ -1608,6 +1609,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
if (unlikely((gfpflags & __GFP_ZERO) && object))
memset(object, 0, objsize);
+ memleak_alloc(object, objsize, 1);
return object;
}
@@ -1710,6 +1712,7 @@ static __always_inline void slab_free(struct kmem_cache *s,
struct kmem_cache_cpu *c;
unsigned long flags;
+ memleak_free(x);
local_irq_save(flags);
c = get_cpu_slab(s, smp_processor_id());
debug_check_no_locks_freed(object, c->objsize);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ba6b0f5..e053875 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -23,6 +23,7 @@
#include <linux/rbtree.h>
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
+#include <linux/memleak.h>
#include <asm/atomic.h>
#include <asm/uaccess.h>
@@ -1173,6 +1174,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
void vfree(const void *addr)
{
BUG_ON(in_interrupt());
+
+ memleak_free(addr);
+
__vunmap(addr, 1);
}
EXPORT_SYMBOL(vfree);
@@ -1282,8 +1286,15 @@ fail:
void *__vmalloc_area(struct vm_struct *area, gfp_t gfp_mask, pgprot_t prot)
{
- return __vmalloc_area_node(area, gfp_mask, prot, -1,
- __builtin_return_address(0));
+ void *addr = __vmalloc_area_node(area, gfp_mask, prot, -1,
+ __builtin_return_address(0));
+
+ /* this needs ref_count = 2 since vm_struct also contains a
+ * pointer to this address. The guard page is also subtracted
+ * from the size */
+ memleak_alloc(addr, area->size - PAGE_SIZE, 2);
+
+ return addr;
}
/**
@@ -1302,6 +1313,8 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
int node, void *caller)
{
struct vm_struct *area;
+ void *addr;
+ unsigned long real_size = size;
size = PAGE_ALIGN(size);
if (!size || (size >> PAGE_SHIFT) > num_physpages)
@@ -1313,7 +1326,13 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
if (!area)
return NULL;
- return __vmalloc_area_node(area, gfp_mask, prot, node, caller);
+ addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller);
+
+ /* this needs ref_count = 2 since the vm_struct also contains
+ * a pointer to this address */
+ memleak_alloc(addr, real_size, 2);
+
+ return addr;
}
void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
* [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (2 preceding siblings ...)
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks Catalin Marinas
@ 2008-11-20 11:30 ` Catalin Marinas
2008-11-20 12:03 ` Ingo Molnar
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386 Catalin Marinas
` (8 subsequent siblings)
12 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
This patch handles the kmemleak operations needed for modules loading so
that memory allocations from inside a module are properly tracked.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
kernel/module.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 50 insertions(+), 0 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index 1f4cc00..85a773b 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -51,6 +51,7 @@
#include <asm/sections.h>
#include <linux/tracepoint.h>
#include <linux/ftrace.h>
+#include <linux/memleak.h>
#if 0
#define DEBUGP printk
@@ -409,6 +410,7 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
unsigned long extra;
unsigned int i;
void *ptr;
+ int cpu;
if (align > PAGE_SIZE) {
printk(KERN_WARNING "%s: per-cpu alignment %li > %li\n",
@@ -438,6 +440,10 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
if (!split_block(i, size))
return NULL;
+ /* add the per-cpu scanning areas */
+ for_each_possible_cpu(cpu)
+ memleak_alloc(ptr + per_cpu_offset(cpu), size, 0);
+
/* Mark allocated */
pcpu_size[i] = -pcpu_size[i];
return ptr;
@@ -452,6 +458,7 @@ static void percpu_modfree(void *freeme)
{
unsigned int i;
void *ptr = __per_cpu_start + block_size(pcpu_size[0]);
+ int cpu;
/* First entry is core kernel percpu data. */
for (i = 1; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
@@ -463,6 +470,10 @@ static void percpu_modfree(void *freeme)
BUG();
free:
+ /* remove the per-cpu scanning areas */
+ for_each_possible_cpu(cpu)
+ memleak_free(freeme + per_cpu_offset(cpu));
+
/* Merge with previous? */
if (pcpu_size[i-1] >= 0) {
pcpu_size[i-1] += pcpu_size[i];
@@ -1833,6 +1844,35 @@ static void *module_alloc_update_bounds(unsigned long size)
return ret;
}
+#ifdef CONFIG_DEBUG_MEMLEAK
+static void memleak_load_module(struct module *mod, Elf_Ehdr *hdr,
+ Elf_Shdr *sechdrs, char *secstrings)
+{
+ unsigned int i;
+
+ /* only scan the sections containing data */
+ memleak_scan_area(mod->module_core,
+ (unsigned long)mod - (unsigned long)mod->module_core,
+ sizeof(struct module));
+
+ for (i = 1; i < hdr->e_shnum; i++) {
+ if (!(sechdrs[i].sh_flags & SHF_ALLOC))
+ continue;
+ if (strncmp(secstrings + sechdrs[i].sh_name, ".data", 5) != 0
+ && strncmp(secstrings + sechdrs[i].sh_name, ".bss", 4) != 0)
+ continue;
+
+ memleak_scan_area(mod->module_core,
+ sechdrs[i].sh_addr - (unsigned long)mod->module_core,
+ sechdrs[i].sh_size);
+ }
+}
+#else
+static inline void memleak_load_module(struct module *mod, Elf_Ehdr *hdr,
+ Elf_Shdr *sechdrs, char *secstrings)
+{ }
+#endif
+
/* Allocate and load the module: note that size of section 0 is always
zero, and we rely on this for optional sections. */
static noinline struct module *load_module(void __user *umod,
@@ -2011,6 +2051,10 @@ static noinline struct module *load_module(void __user *umod,
/* Do the allocs. */
ptr = module_alloc_update_bounds(mod->core_size);
+ /* the pointer to this block is stored in the module structure
+ * which is inside the block. Just mark it as not being a
+ * leak */
+ memleak_not_leak(ptr);
if (!ptr) {
err = -ENOMEM;
goto free_percpu;
@@ -2019,6 +2063,11 @@ static noinline struct module *load_module(void __user *umod,
mod->module_core = ptr;
ptr = module_alloc_update_bounds(mod->init_size);
+ /* the pointer to this block is stored in the module structure
+ * which is inside the block. This block doesn't need to be
+ * scanned as it contains data and code that will be freed
+ * after the module is initialized */
+ memleak_ignore(ptr);
if (!ptr && mod->init_size) {
err = -ENOMEM;
goto free_core;
@@ -2049,6 +2098,7 @@ static noinline struct module *load_module(void __user *umod,
}
/* Module has been moved. */
mod = (void *)sechdrs[modindex].sh_addr;
+ memleak_load_module(mod, hdr, sechdrs, secstrings);
/* Now we've moved module, initialize linked lists, etc. */
module_unload_init(mod);
* [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (3 preceding siblings ...)
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support Catalin Marinas
@ 2008-11-20 11:30 ` Catalin Marinas
2008-11-20 12:16 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 06/11] kmemleak: Add support for ARM Catalin Marinas
` (7 subsequent siblings)
12 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:30 UTC (permalink / raw)
To: linux-kernel
This patch adds the kmemleak-related entries to the vmlinux.lds.S
linker script.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/x86/kernel/vmlinux_32.lds.S | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
index a9b8560..b5d2b49 100644
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -62,6 +62,7 @@ SECTIONS
/* writeable */
. = ALIGN(PAGE_SIZE);
+ _sdata = .; /* Start of data section */
.data : AT(ADDR(.data) - LOAD_OFFSET) { /* Data */
DATA_DATA
CONSTRUCTORS
* [PATCH 2.6.28-rc5 06/11] kmemleak: Add support for ARM
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (4 preceding siblings ...)
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386 Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
` (6 subsequent siblings)
12 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
This patch adds the kmemleak-related entries to the vmlinux.lds.S
linker script.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
arch/arm/kernel/vmlinux.lds.S | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 4898bdc..3cf1d44 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -120,6 +120,7 @@ SECTIONS
.data : AT(__data_loc) {
__data_start = .; /* address in memory */
+ _sdata = .;
/*
* first, the init task union, aligned
@@ -171,6 +172,7 @@ SECTIONS
__bss_start = .; /* BSS */
*(.bss)
*(COMMON)
+ __bss_stop = .;
_end = .;
}
/* Stabs debugging sections. */
* [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (5 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 06/11] kmemleak: Add support for ARM Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 12:09 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 08/11] kmemleak: Enable the building of the memory leak detector Catalin Marinas
` (5 subsequent siblings)
12 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
There are allocations for which the main pointer cannot be found but
they are not memory leaks. This patch fixes some of them. For more
information on false positives, see Documentation/kmemleak.txt.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
drivers/char/vt.c | 5 +++++
include/linux/percpu.h | 5 +++++
2 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/drivers/char/vt.c b/drivers/char/vt.c
index a5af607..179960b 100644
--- a/drivers/char/vt.c
+++ b/drivers/char/vt.c
@@ -104,6 +104,7 @@
#include <linux/io.h>
#include <asm/system.h>
#include <linux/uaccess.h>
+#include <linux/memleak.h>
#define MAX_NR_CON_DRIVER 16
@@ -2882,6 +2883,10 @@ static int __init con_init(void)
*/
for (currcons = 0; currcons < MIN_NR_CONSOLES; currcons++) {
vc_cons[currcons].d = vc = alloc_bootmem(sizeof(struct vc_data));
+ /* kmemleak does not track the memory allocated via
+ * alloc_bootmem() but this block contains pointers to
+ * other blocks allocated via kmalloc */
+ memleak_alloc(vc, sizeof(struct vc_data), 1);
INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);
visual_init(vc, currcons, 1);
vc->vc_screenbuf = (unsigned short *)alloc_bootmem(vc->vc_screenbuf_size);
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 9f2a375..4d1ce18 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -69,7 +69,12 @@ struct percpu_data {
void *ptrs[1];
};
+/* pointer disguising messes up the kmemleak objects tracking */
+#ifndef CONFIG_DEBUG_MEMLEAK
#define __percpu_disguise(pdata) (struct percpu_data *)~(unsigned long)(pdata)
+#else
+#define __percpu_disguise(pdata) (struct percpu_data *)(pdata)
+#endif
/*
* Use this to get to a cpu's version of the per-cpu object dynamically
* allocated. Non-atomic access to the current CPU's version should
* [PATCH 2.6.28-rc5 08/11] kmemleak: Enable the building of the memory leak detector
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (6 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 09/11] kmemleak: Keep the __init functions after initialization Catalin Marinas
` (4 subsequent siblings)
12 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
This patch adds the Kconfig.debug and Makefile entries needed for
building kmemleak into the kernel.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
lib/Kconfig.debug | 23 +++++++++++++++++++++++
mm/Makefile | 1 +
2 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b0f239e..1e59827 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -290,6 +290,29 @@ config SLUB_STATS
out which slabs are relevant to a particular load.
Try running: slabinfo -DA
+config DEBUG_MEMLEAK
+ bool "Kernel memory leak detector"
+ default n
+ depends on EXPERIMENTAL
+ select DEBUG_SLAB if SLAB
+ select SLUB_DEBUG if SLUB
+ select DEBUG_FS
+ select STACKTRACE
+ select FRAME_POINTER
+ select KALLSYMS
+ help
+ Say Y here if you want to enable the memory leak
+ detector. The memory allocation/freeing is traced in a way
+ similar to the Boehm's conservative garbage collector, the
+ difference being that the orphan objects are not freed but
+ only shown in /sys/kernel/debug/memleak. Enabling this
+ feature will introduce an overhead to memory
+ allocations. See Documentation/kmemleak.txt for more
+ details.
+
+ In order to access the memleak file, debugfs needs to be
+ mounted (usually at /sys/kernel/debug).
+
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)
diff --git a/mm/Makefile b/mm/Makefile
index c06b45a..3e43536 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -34,3 +34,4 @@ obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_SMP) += allocpercpu.o
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_DEBUG_MEMLEAK) += memleak.o
* [PATCH 2.6.28-rc5 09/11] kmemleak: Keep the __init functions after initialization
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (7 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 08/11] kmemleak: Enable the building of the memory leak detector Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak Catalin Marinas
` (3 subsequent siblings)
12 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
This patch adds the CONFIG_DEBUG_KEEP_INIT option which preserves the
.init.text section after initialization. Memory leaks happening during this
phase can be more easily tracked.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
include/linux/init.h | 6 ++++++
lib/Kconfig.debug | 12 ++++++++++++
2 files changed, 18 insertions(+), 0 deletions(-)
diff --git a/include/linux/init.h b/include/linux/init.h
index 68cb026..41321ad 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -40,9 +40,15 @@
/* These are for everybody (although not all archs will actually
discard it in modules) */
+#ifdef CONFIG_DEBUG_KEEP_INIT
+#define __init
+#define __initdata
+#define __initconst
+#else
#define __init __section(.init.text) __cold notrace
#define __initdata __section(.init.data)
#define __initconst __section(.init.rodata)
+#endif
#define __exitdata __section(.exit.data)
#define __exit_call __used __section(.exitcall.exit)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1e59827..72cde77 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -313,6 +313,18 @@ config DEBUG_MEMLEAK
In order to access the memleak file, debugfs needs to be
mounted (usually at /sys/kernel/debug).
+config DEBUG_KEEP_INIT
+ bool "Do not free the __init code/data"
+ default n
+ depends on DEBUG_MEMLEAK
+ help
+ This option moves the __init code/data out of the
+ .init.text/.init.data sections. It is useful for identifying
+ memory leaks happening during the kernel or modules
+ initialization.
+
+ If unsure, say N.
+
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)
* [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (8 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 09/11] kmemleak: Keep the __init functions after initialization Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 12:11 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 11/11] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
` (2 subsequent siblings)
12 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
This patch adds a loadable module that deliberately leaks memory. It
is used to test various memory-leak scenarios.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
lib/Kconfig.debug | 11 ++++++
mm/Makefile | 1 +
mm/memleak-test.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 114 insertions(+), 0 deletions(-)
create mode 100644 mm/memleak-test.c
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 72cde77..205c1da 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -313,6 +313,17 @@ config DEBUG_MEMLEAK
In order to access the memleak file, debugfs needs to be
mounted (usually at /sys/kernel/debug).
+config DEBUG_MEMLEAK_TEST
+ tristate "Test the kernel memory leak detector"
+ default n
+ depends on DEBUG_MEMLEAK
+ help
+ Say Y or M here to build a test for the kernel memory leak
+ detector. This option enables a module that explicitly leaks
+ memory.
+
+ If unsure, say N.
+
config DEBUG_KEEP_INIT
bool "Do not free the __init code/data"
default n
diff --git a/mm/Makefile b/mm/Makefile
index 3e43536..deb5935 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -35,3 +35,4 @@ obj-$(CONFIG_SMP) += allocpercpu.o
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
obj-$(CONFIG_DEBUG_MEMLEAK) += memleak.o
+obj-$(CONFIG_DEBUG_MEMLEAK_TEST) += memleak-test.o
diff --git a/mm/memleak-test.c b/mm/memleak-test.c
new file mode 100644
index 0000000..211219e
--- /dev/null
+++ b/mm/memleak-test.c
@@ -0,0 +1,102 @@
+/*
+ * mm/memleak-test.c
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/list.h>
+#include <linux/percpu.h>
+
+#include <linux/memleak.h>
+
+struct test_node {
+ long header[25];
+ struct list_head list;
+ long footer[25];
+};
+
+static LIST_HEAD(test_list);
+static DEFINE_PER_CPU(void *, test_pointer);
+
+/* Some very simple testing. This function needs to be extended for
+ * proper testing */
+static int __init memleak_test_init(void)
+{
+ struct test_node *elem;
+ int i;
+
+ printk(KERN_INFO "Kmemleak testing\n");
+
+ /* make some orphan objects */
+ pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
+ pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
+#ifndef CONFIG_MODULES
+ pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
+ kmem_cache_alloc(files_cachep, GFP_KERNEL));
+ pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
+ kmem_cache_alloc(files_cachep, GFP_KERNEL));
+#endif
+ pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+ pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+ pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+ pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+ pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+
+ /* add elements to a list. They should only appear as orphans
+ * after the module is removed */
+ for (i = 0; i < 10; i++) {
+ elem = kmalloc(sizeof(*elem), GFP_KERNEL);
+ pr_info("kmemleak: kmalloc(sizeof(*elem)) = %p\n", elem);
+ if (!elem)
+ return -ENOMEM;
+ memset(elem, 0, sizeof(*elem));
+ INIT_LIST_HEAD(&elem->list);
+
+ list_add_tail(&elem->list, &test_list);
+ }
+
+ for_each_possible_cpu(i) {
+ per_cpu(test_pointer, i) = kmalloc(129, GFP_KERNEL);
+ pr_info("kmemleak: kmalloc(129) = %p\n", per_cpu(test_pointer, i));
+ }
+
+ return 0;
+}
+module_init(memleak_test_init);
+
+static void __exit memleak_test_exit(void)
+{
+ struct test_node *elem, *tmp;
+
+ /* remove the list elements without actually freeing the memory */
+ list_for_each_entry_safe(elem, tmp, &test_list, list)
+ list_del(&elem->list);
+}
+module_exit(memleak_test_exit);
+
+MODULE_LICENSE("GPL");
* [PATCH 2.6.28-rc5 11/11] kmemleak: Add the corresponding MAINTAINERS entry
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (9 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak Catalin Marinas
@ 2008-11-20 11:31 ` Catalin Marinas
2008-11-20 12:10 ` [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Ingo Molnar
2008-11-20 12:22 ` Ingo Molnar
12 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 11:31 UTC (permalink / raw)
To: linux-kernel
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
MAINTAINERS | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 627e4c8..93400a6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2502,6 +2502,12 @@ L: kernel-janitors@vger.kernel.org
W: http://www.kerneljanitors.org/
S: Maintained
+KERNEL MEMORY LEAK DETECTOR
+P: Catalin Marinas
+M: catalin.marinas@arm.com
+W: http://www.procode.org/kmemleak/
+S: Maintained
+
KERNEL NFSD, SUNRPC, AND LOCKD SERVERS
P: J. Bruce Fields
M: bfields@fieldses.org
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
@ 2008-11-20 11:58 ` Ingo Molnar
2008-11-20 19:35 ` Pekka Enberg
2008-12-03 18:12 ` Paul E. McKenney
2 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 11:58 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> --- a/init/main.c
> +++ b/init/main.c
> @@ -63,6 +63,7 @@
> #include <linux/signal.h>
> #include <linux/idr.h>
> #include <linux/ftrace.h>
> +#include <linux/memleak.h>
small request: could you please move this #include line up by 5-7
lines so that it auto-merges fine with the ftrace-next tree?
Ingo
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks Catalin Marinas
@ 2008-11-20 12:00 ` Ingo Molnar
2008-11-20 19:30 ` Pekka Enberg
1 sibling, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:00 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -112,6 +112,7 @@
> #include <linux/rtmutex.h>
> #include <linux/reciprocal_div.h>
> #include <linux/debugobjects.h>
> +#include <linux/memleak.h>
please move this line up 5-7 lines so that it auto-merges fine with
the kmemcheck-next tree.
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -24,6 +24,7 @@
> #include <linux/kallsyms.h>
> #include <linux/memory.h>
> #include <linux/math64.h>
> +#include <linux/memleak.h>
ditto.
> - return __vmalloc_area_node(area, gfp_mask, prot, -1,
> - __builtin_return_address(0));
> + void *addr = __vmalloc_area_node(area, gfp_mask, prot, -1,
> + __builtin_return_address(0));
> +
> + /* this needs ref_count = 2 since vm_struct also contains a
> + * pointer to this address. The guard page is also subtracted
> + * from the size */
> + memleak_alloc(addr, area->size - PAGE_SIZE, 2);
please use the customary comment style:
/*
* Comment .....
* ...... goes here:
*/
Thanks,
Ingo
* Re: [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support Catalin Marinas
@ 2008-11-20 12:03 ` Ingo Molnar
0 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:03 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
small nit:
> +#else
> +static inline void memleak_load_module(struct module *mod, Elf_Ehdr *hdr,
> + Elf_Shdr *sechdrs, char *secstrings)
> +{ }
> +#endif
this looks nicer:
static inline void
memleak_load_module(struct module *mod, Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
char *secstrings)
{
}
We don't use "{ }" in other places in the kernel, so let's not invent
a new style if possible.
> + /* the pointer to this block is stored in the module structure
> + * which is inside the block. Just mark it as not being a
> + * leak */
Comment style.
> + /* the pointer to this block is stored in the module structure
> + * which is inside the block. This block doesn't need to be
> + * scanned as it contains data and code that will be freed
> + * after the module is initialized */
ditto.
Thanks,
Ingo
* Re: [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
@ 2008-11-20 12:09 ` Ingo Molnar
0 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:09 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> for (currcons = 0; currcons < MIN_NR_CONSOLES; currcons++) {
> vc_cons[currcons].d = vc = alloc_bootmem(sizeof(struct vc_data));
> + /* kmemleak does not track the memory allocated via
> + * alloc_bootmem() but this block contains pointers to
> + * other blocks allocated via kmalloc */
> + memleak_alloc(vc, sizeof(struct vc_data), 1);
Comment style - and that is true for other bits of your patchset too.
Ingo
* Re: [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated)
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (10 preceding siblings ...)
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 11/11] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
@ 2008-11-20 12:10 ` Ingo Molnar
2008-11-20 17:54 ` Catalin Marinas
2008-11-20 12:22 ` Ingo Molnar
12 siblings, 1 reply; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:10 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
[-- Attachment #1: Type: text/plain, Size: 1768 bytes --]
got a couple of build errors and warnings on x86 with the attached
config:
mm/memleak.c: In function 'dump_object_info':
mm/memleak.c:325: warning: format '%u' expects type 'unsigned int', but argument 3 has type 'size_t'
mm/memleak.c: In function 'create_object':
mm/memleak.c:435: error: implicit declaration of function 'in_irq'
mm/memleak.c:438: error: implicit declaration of function 'in_softirq'
mm/memleak.c: In function 'memleak_alloc':
mm/memleak.c:605: warning: format '%u' expects type 'unsigned int', but argument 4 has type 'size_t'
mm/memleak.c: In function 'scan_yield':
mm/memleak.c:682: error: implicit declaration of function 'in_atomic'
mm/memleak.c: In function 'memleak_scan':
mm/memleak.c:828: error: implicit declaration of function 'NODE_MEM_MAP'
mm/memleak.c:828: warning: assignment makes pointer from integer without a cast
mm/memleak.c: In function 'memleak_seq_show':
mm/memleak.c:944: warning: format '%u' expects type 'unsigned int', but argument 4 has type 'size_t'
I fixed one - see it below.
Ingo
------------>
From bc8c69e82ba3a4e9ec2105a464362ac9bc44ef63 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@elte.hu>
Date: Thu, 20 Nov 2008 13:13:24 +0100
Subject: [PATCH] kmemleak: build fix
fix:
mm/memleak.c: In function 'scan_yield':
mm/memleak.c:682: error: implicit declaration of function 'in_atomic'
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
mm/memleak.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/mm/memleak.c b/mm/memleak.c
index 8fb5260..3adb5a4 100644
--- a/mm/memleak.c
+++ b/mm/memleak.c
@@ -36,6 +36,7 @@
#include <linux/cache.h>
#include <linux/percpu.h>
#include <linux/lockdep.h>
+#include <linux/hardirq.h>
#include <asm/sections.h>
#include <asm/processor.h>
[-- Attachment #2: config --]
[-- Type: text/plain, Size: 86259 bytes --]
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.28-rc5
# Thu Nov 20 13:14:11 2008
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_GENERIC_TIME=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_FAST_CMPXCHG_LOCAL=y
CONFIG_MMU=y
CONFIG_ZONE_DMA=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_GPIO=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
# CONFIG_RWSEM_XCHGADD_ALGORITHM is not set
CONFIG_ARCH_HAS_CPU_IDLE_WAIT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_HAVE_CPUMASK_OF_CPU_MAP=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_ARCH_POPULATES_NODE_MAP=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_X86_SMP=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_X86_BIOS_REBOOT=y
CONFIG_X86_TRAMPOLINE=y
# CONFIG_KTIME_SCALAR is not set
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
# General setup
#
CONFIG_EXPERIMENTAL=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_TREE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=21
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_NS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_GROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_USER_SCHED=y
# CONFIG_CGROUP_SCHED is not set
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MM_OWNER=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_SYSFS_DEPRECATED=y
CONFIG_SYSFS_DEPRECATED_V2=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_RELAY=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_EMBEDDED=y
CONFIG_UID16=y
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_EXTRA_PASS=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_COMPAT_BRK=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_ANON_INODES=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_PCI_QUIRKS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_MARKERS=y
CONFIG_OPROFILE=y
CONFIG_OPROFILE_IBS=y
CONFIG_HAVE_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_KRETPROBES=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_KMOD=y
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLOCK_COMPAT=y
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_AS is not set
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_CLASSIC_RCU=y
CONFIG_FREEZER=y
#
# Processor type and features
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_SMP=y
CONFIG_X86_FIND_SMP_CONFIG=y
CONFIG_X86_MPPARSE=y
CONFIG_X86_PC=y
# CONFIG_X86_ELAN is not set
# CONFIG_X86_VOYAGER is not set
# CONFIG_X86_GENERICARCH is not set
# CONFIG_X86_VSMP is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_PARAVIRT_GUEST=y
CONFIG_XEN=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=32
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_KVM_CLOCK=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_PARAVIRT_DEBUG=y
CONFIG_MEMTEST=y
# CONFIG_M386 is not set
# CONFIG_M486 is not set
# CONFIG_M586 is not set
# CONFIG_M586TSC is not set
# CONFIG_M586MMX is not set
# CONFIG_M686 is not set
# CONFIG_MPENTIUMII is not set
# CONFIG_MPENTIUMIII is not set
# CONFIG_MPENTIUMM is not set
# CONFIG_MPENTIUM4 is not set
# CONFIG_MK6 is not set
# CONFIG_MK7 is not set
# CONFIG_MK8 is not set
# CONFIG_MCRUSOE is not set
# CONFIG_MEFFICEON is not set
# CONFIG_MWINCHIPC6 is not set
# CONFIG_MWINCHIP3D is not set
# CONFIG_MGEODEGX1 is not set
# CONFIG_MGEODE_LX is not set
# CONFIG_MCYRIXIII is not set
# CONFIG_MVIAC3_2 is not set
# CONFIG_MVIAC7 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_CPU=y
CONFIG_X86_L1_CACHE_BYTES=128
CONFIG_X86_INTERNODE_CACHE_BYTES=128
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=7
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR_64=y
CONFIG_X86_DS=y
CONFIG_X86_PTRACE_BTS=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_AMD_IOMMU=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_I8K=y
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_K8_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
# CONFIG_FLATMEM_MANUAL is not set
# CONFIG_DISCONTIGMEM_MANUAL is not set
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
#
# Memory hotplug is currently incompatible with Software Suspend
#
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MIGRATION=y
CONFIG_RESOURCES_64BIT=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_UNEVICTABLE_LRU=y
CONFIG_MMU_NOTIFIER=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW_64K=y
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_EFI=y
CONFIG_SECCOMP=y
CONFIG_CC_STACKPROTECTOR_ALL=y
CONFIG_CC_STACKPROTECTOR=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_PHYSICAL_START=0x200000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_HOTPLUG_CPU=y
CONFIG_COMPAT_VDSO=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE=""
CONFIG_CMDLINE_OVERRIDE=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID=y
#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_VERBOSE=y
CONFIG_CAN_PM_TRACE=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_SLEEP=y
CONFIG_SUSPEND=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
CONFIG_ACPI_PROCFS_POWER=y
CONFIG_ACPI_SYSFS_POWER=y
CONFIG_ACPI_PROC_EVENT=y
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=y
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
CONFIG_ACPI_WMI=y
CONFIG_ACPI_ASUS=y
CONFIG_ACPI_TOSHIBA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
CONFIG_ACPI_BLACKLIST_YEAR=0
CONFIG_ACPI_DEBUG=y
CONFIG_ACPI_DEBUG_FUNC_TRACE=y
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_SYSTEM=y
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_SBS=y
#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_TABLE=y
CONFIG_CPU_FREQ_DEBUG=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
#
# CPUFreq processor drivers
#
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_POWERNOW_K8=y
CONFIG_X86_POWERNOW_K8_ACPI=y
CONFIG_X86_SPEEDSTEP_CENTRINO=y
CONFIG_X86_P4_CLOCKMOD=y
#
# shared options
#
CONFIG_X86_ACPI_CPUFREQ_PROC_INTF=y
CONFIG_X86_SPEEDSTEP_LIB=y
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
#
# Memory power savings
#
CONFIG_I7300_IDLE_IOAT_CHANNEL=y
CONFIG_I7300_IDLE=y
#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_DOMAINS=y
CONFIG_DMAR=y
CONFIG_DMAR_GFX_WA=y
CONFIG_DMAR_FLOPPY_WA=y
CONFIG_INTR_REMAP=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEBUG=y
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
CONFIG_PCI_LEGACY=y
CONFIG_PCI_DEBUG=y
CONFIG_HT_IRQ=y
CONFIG_ISA_DMA_API=y
CONFIG_K8_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA_DEBUG=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_PCMCIA_IOCTL=y
CONFIG_CARDBUS=y
#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
CONFIG_PD6729=y
CONFIG_I82092=y
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_FAKE=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=y
CONFIG_HOTPLUG_PCI_CPCI=y
CONFIG_HOTPLUG_PCI_CPCI_ZT5550=y
CONFIG_HOTPLUG_PCI_CPCI_GENERIC=y
CONFIG_HOTPLUG_PCI_SHPC=y
#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_IA32_EMULATION=y
CONFIG_IA32_AOUT=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_NET=y
#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_MMAP=y
CONFIG_UNIX=y
CONFIG_XFRM=y
CONFIG_XFRM_USER=y
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_IPCOMP=y
CONFIG_NET_KEY=y
CONFIG_NET_KEY_MIGRATE=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_ASK_IP_FIB_HASH=y
# CONFIG_IP_FIB_TRIE is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE=y
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_ARPD=y
CONFIG_SYN_COOKIES=y
CONFIG_INET_AH=y
CONFIG_INET_ESP=y
CONFIG_INET_IPCOMP=y
CONFIG_INET_XFRM_TUNNEL=y
CONFIG_INET_TUNNEL=y
CONFIG_INET_XFRM_MODE_TRANSPORT=y
CONFIG_INET_XFRM_MODE_TUNNEL=y
CONFIG_INET_XFRM_MODE_BEET=y
CONFIG_INET_LRO=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=y
CONFIG_TCP_CONG_HTCP=y
CONFIG_TCP_CONG_HSTCP=y
CONFIG_TCP_CONG_HYBLA=y
CONFIG_TCP_CONG_VEGAS=y
CONFIG_TCP_CONG_SCALABLE=y
CONFIG_TCP_CONG_LP=y
CONFIG_TCP_CONG_VENO=y
CONFIG_TCP_CONG_YEAH=y
CONFIG_TCP_CONG_ILLINOIS=y
# CONFIG_DEFAULT_BIC is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_HTCP is not set
# CONFIG_DEFAULT_VEGAS is not set
# CONFIG_DEFAULT_WESTWOOD is not set
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_INET6_XFRM_TUNNEL=y
CONFIG_INET6_TUNNEL=y
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=y
CONFIG_IPV6_SIT=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_DEBUG=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y
#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_QUEUE=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CT_ACCT=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_DSCP=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_NOTRACK=y
CONFIG_NETFILTER_XT_TARGET_RATEEST=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_DCCP=y
CONFIG_NETFILTER_XT_MATCH_DSCP=y
CONFIG_NETFILTER_XT_MATCH_ESP=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
CONFIG_NETFILTER_XT_MATCH_OWNER=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_RATEEST=y
CONFIG_NETFILTER_XT_MATCH_REALM=y
CONFIG_NETFILTER_XT_MATCH_RECENT=y
CONFIG_NETFILTER_XT_MATCH_RECENT_PROC_COMPAT=y
CONFIG_NETFILTER_XT_MATCH_SCTP=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TCPMSS=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_IP_VS=y
CONFIG_IP_VS_IPV6=y
CONFIG_IP_VS_DEBUG=y
CONFIG_IP_VS_TAB_BITS=12
#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
#
# IPVS scheduler
#
CONFIG_IP_VS_RR=y
CONFIG_IP_VS_WRR=y
CONFIG_IP_VS_LC=y
CONFIG_IP_VS_WLC=y
CONFIG_IP_VS_LBLC=y
CONFIG_IP_VS_LBLCR=y
CONFIG_IP_VS_DH=y
CONFIG_IP_VS_SH=y
CONFIG_IP_VS_SED=y
CONFIG_IP_VS_NQ=y
#
# IPVS application helper
#
CONFIG_IP_VS_FTP=y
#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_QUEUE=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_ADDRTYPE=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_LOG=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT=y
CONFIG_NF_NAT_NEEDED=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_NF_NAT_SNMP_BASIC=y
CONFIG_NF_NAT_PROTO_DCCP=y
CONFIG_NF_NAT_PROTO_GRE=y
CONFIG_NF_NAT_PROTO_UDPLITE=y
CONFIG_NF_NAT_PROTO_SCTP=y
CONFIG_NF_NAT_FTP=y
CONFIG_NF_NAT_IRC=y
CONFIG_NF_NAT_TFTP=y
CONFIG_NF_NAT_AMANDA=y
CONFIG_NF_NAT_PPTP=y
CONFIG_NF_NAT_H323=y
CONFIG_NF_NAT_SIP=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_TARGET_CLUSTERIP=y
CONFIG_IP_NF_TARGET_ECN=y
CONFIG_IP_NF_TARGET_TTL=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_SECURITY=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
#
# IPv6: Netfilter Configuration
#
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_QUEUE=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_AH=y
CONFIG_IP6_NF_MATCH_EUI64=y
CONFIG_IP6_NF_MATCH_FRAG=y
CONFIG_IP6_NF_MATCH_OPTS=y
CONFIG_IP6_NF_MATCH_HL=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_MATCH_MH=y
CONFIG_IP6_NF_MATCH_RT=y
CONFIG_IP6_NF_TARGET_LOG=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_TARGET_HL=y
CONFIG_IP6_NF_RAW=y
CONFIG_IP6_NF_SECURITY=y
#
# DECnet: Netfilter Configuration
#
CONFIG_DECNET_NF_GRABULATOR=y
CONFIG_BRIDGE_NF_EBTABLES=y
CONFIG_BRIDGE_EBT_BROUTE=y
CONFIG_BRIDGE_EBT_T_FILTER=y
CONFIG_BRIDGE_EBT_T_NAT=y
CONFIG_BRIDGE_EBT_802_3=y
CONFIG_BRIDGE_EBT_AMONG=y
CONFIG_BRIDGE_EBT_ARP=y
CONFIG_BRIDGE_EBT_IP=y
CONFIG_BRIDGE_EBT_IP6=y
CONFIG_BRIDGE_EBT_LIMIT=y
CONFIG_BRIDGE_EBT_MARK=y
CONFIG_BRIDGE_EBT_PKTTYPE=y
CONFIG_BRIDGE_EBT_STP=y
CONFIG_BRIDGE_EBT_VLAN=y
CONFIG_BRIDGE_EBT_ARPREPLY=y
CONFIG_BRIDGE_EBT_DNAT=y
CONFIG_BRIDGE_EBT_MARK_T=y
CONFIG_BRIDGE_EBT_REDIRECT=y
CONFIG_BRIDGE_EBT_SNAT=y
CONFIG_BRIDGE_EBT_LOG=y
CONFIG_BRIDGE_EBT_ULOG=y
CONFIG_BRIDGE_EBT_NFLOG=y
CONFIG_IP_DCCP=y
CONFIG_INET_DCCP_DIAG=y
CONFIG_IP_DCCP_ACKVEC=y
#
# DCCP CCIDs Configuration (EXPERIMENTAL)
#
CONFIG_IP_DCCP_CCID2=y
CONFIG_IP_DCCP_CCID2_DEBUG=y
CONFIG_IP_DCCP_CCID3=y
CONFIG_IP_DCCP_CCID3_DEBUG=y
CONFIG_IP_DCCP_CCID3_RTO=100
CONFIG_IP_DCCP_TFRC_LIB=y
CONFIG_IP_DCCP_TFRC_DEBUG=y
#
# DCCP Kernel Hacking
#
CONFIG_IP_DCCP_DEBUG=y
CONFIG_NET_DCCPPROBE=y
CONFIG_IP_SCTP=y
CONFIG_SCTP_DBG_MSG=y
CONFIG_SCTP_DBG_OBJCNT=y
# CONFIG_SCTP_HMAC_NONE is not set
# CONFIG_SCTP_HMAC_SHA1 is not set
CONFIG_SCTP_HMAC_MD5=y
CONFIG_TIPC=y
CONFIG_TIPC_ADVANCED=y
CONFIG_TIPC_ZONES=3
CONFIG_TIPC_CLUSTERS=1
CONFIG_TIPC_NODES=255
CONFIG_TIPC_SLAVE_NODES=0
CONFIG_TIPC_PORTS=8191
CONFIG_TIPC_LOG=0
CONFIG_TIPC_DEBUG=y
CONFIG_ATM=y
CONFIG_ATM_CLIP=y
CONFIG_ATM_CLIP_NO_ICMP=y
CONFIG_ATM_LANE=y
CONFIG_ATM_MPOA=y
CONFIG_ATM_BR2684=y
CONFIG_ATM_BR2684_IPFILTER=y
CONFIG_STP=y
CONFIG_GARP=y
CONFIG_BRIDGE=y
CONFIG_NET_DSA=y
CONFIG_NET_DSA_TAG_DSA=y
CONFIG_NET_DSA_TAG_EDSA=y
CONFIG_NET_DSA_TAG_TRAILER=y
CONFIG_NET_DSA_MV88E6XXX=y
CONFIG_NET_DSA_MV88E6060=y
CONFIG_NET_DSA_MV88E6XXX_NEED_PPU=y
CONFIG_NET_DSA_MV88E6131=y
CONFIG_NET_DSA_MV88E6123_61_65=y
CONFIG_VLAN_8021Q=y
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_DECNET=y
CONFIG_DECNET_ROUTER=y
CONFIG_LLC=y
CONFIG_LLC2=y
CONFIG_IPX=y
CONFIG_IPX_INTERN=y
CONFIG_ATALK=y
CONFIG_DEV_APPLETALK=y
CONFIG_IPDDP=y
CONFIG_IPDDP_ENCAP=y
CONFIG_IPDDP_DECAP=y
CONFIG_X25=y
CONFIG_LAPB=y
CONFIG_ECONET=y
CONFIG_ECONET_AUNUDP=y
CONFIG_ECONET_NATIVE=y
CONFIG_WAN_ROUTER=y
CONFIG_NET_SCHED=y
#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_HFSC=y
CONFIG_NET_SCH_ATM=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_SCH_MULTIQ=y
CONFIG_NET_SCH_RED=y
CONFIG_NET_SCH_SFQ=y
CONFIG_NET_SCH_TEQL=y
CONFIG_NET_SCH_TBF=y
CONFIG_NET_SCH_GRED=y
CONFIG_NET_SCH_DSMARK=y
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_INGRESS=y
#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=y
CONFIG_NET_CLS_TCINDEX=y
CONFIG_NET_CLS_ROUTE4=y
CONFIG_NET_CLS_ROUTE=y
CONFIG_NET_CLS_FW=y
CONFIG_NET_CLS_U32=y
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=y
CONFIG_NET_CLS_RSVP6=y
CONFIG_NET_CLS_FLOW=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=y
CONFIG_NET_EMATCH_NBYTE=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_EMATCH_META=y
CONFIG_NET_EMATCH_TEXT=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_NET_ACT_IPT=y
CONFIG_NET_ACT_NAT=y
CONFIG_NET_ACT_PEDIT=y
CONFIG_NET_ACT_SIMP=y
CONFIG_NET_ACT_SKBEDIT=y
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
#
# Network testing
#
CONFIG_NET_PKTGEN=y
CONFIG_NET_TCPPROBE=y
CONFIG_HAMRADIO=y
#
# Packet Radio protocols
#
CONFIG_AX25=y
CONFIG_AX25_DAMA_SLAVE=y
CONFIG_NETROM=y
CONFIG_ROSE=y
#
# AX.25 network device drivers
#
CONFIG_MKISS=y
CONFIG_6PACK=y
CONFIG_BPQETHER=y
CONFIG_BAYCOM_SER_FDX=y
CONFIG_BAYCOM_SER_HDX=y
CONFIG_BAYCOM_PAR=y
CONFIG_YAM=y
CONFIG_CAN=y
CONFIG_CAN_RAW=y
CONFIG_CAN_BCM=y
#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=y
CONFIG_CAN_DEBUG_DEVICES=y
CONFIG_IRDA=y
#
# IrDA protocols
#
CONFIG_IRLAN=y
CONFIG_IRNET=y
CONFIG_IRCOMM=y
CONFIG_IRDA_ULTRA=y
#
# IrDA options
#
CONFIG_IRDA_CACHE_LAST_LSAP=y
CONFIG_IRDA_FAST_RR=y
CONFIG_IRDA_DEBUG=y
#
# Infrared-port device drivers
#
#
# SIR device drivers
#
CONFIG_IRTTY_SIR=y
#
# Dongle support
#
CONFIG_DONGLE=y
CONFIG_ESI_DONGLE=y
CONFIG_ACTISYS_DONGLE=y
CONFIG_TEKRAM_DONGLE=y
CONFIG_TOIM3232_DONGLE=y
CONFIG_LITELINK_DONGLE=y
CONFIG_MA600_DONGLE=y
CONFIG_GIRBIL_DONGLE=y
CONFIG_MCP2120_DONGLE=y
CONFIG_OLD_BELKIN_DONGLE=y
CONFIG_ACT200L_DONGLE=y
CONFIG_KINGSUN_DONGLE=y
CONFIG_KSDAZZLE_DONGLE=y
CONFIG_KS959_DONGLE=y
#
# FIR device drivers
#
CONFIG_USB_IRDA=y
CONFIG_SIGMATEL_FIR=y
CONFIG_NSC_FIR=y
CONFIG_WINBOND_FIR=y
CONFIG_SMC_IRCC_FIR=y
CONFIG_ALI_FIR=y
CONFIG_VLSI_FIR=y
CONFIG_VIA_FIR=y
CONFIG_MCS_FIR=y
CONFIG_BT=y
CONFIG_BT_L2CAP=y
CONFIG_BT_SCO=y
CONFIG_BT_RFCOMM=y
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=y
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_CMTP=y
CONFIG_BT_HIDP=y
#
# Bluetooth device drivers
#
CONFIG_BT_HCIBTUSB=y
CONFIG_BT_HCIBTSDIO=y
CONFIG_BT_HCIUART=y
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIBCM203X=y
CONFIG_BT_HCIBPA10X=y
CONFIG_BT_HCIBFUSB=y
CONFIG_BT_HCIDTL1=y
CONFIG_BT_HCIBT3C=y
CONFIG_BT_HCIBLUECARD=y
CONFIG_BT_HCIBTUART=y
CONFIG_BT_HCIVHCI=y
CONFIG_AF_RXRPC=y
CONFIG_AF_RXRPC_DEBUG=y
CONFIG_RXKAD=y
CONFIG_PHONET=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_CFG80211=y
CONFIG_NL80211=y
CONFIG_WIRELESS_OLD_REGULATORY=y
CONFIG_WIRELESS_EXT=y
CONFIG_WIRELESS_EXT_SYSFS=y
CONFIG_MAC80211=y
#
# Rate control algorithm selection
#
CONFIG_MAC80211_RC_PID=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_PID=y
# CONFIG_MAC80211_RC_DEFAULT_MINSTREL is not set
CONFIG_MAC80211_RC_DEFAULT="pid"
CONFIG_MAC80211_MESH=y
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
CONFIG_MAC80211_DEBUG_MENU=y
CONFIG_MAC80211_DEBUG_PACKET_ALIGNMENT=y
CONFIG_MAC80211_NOINLINE=y
CONFIG_MAC80211_VERBOSE_DEBUG=y
CONFIG_MAC80211_HT_DEBUG=y
CONFIG_MAC80211_TKIP_DEBUG=y
CONFIG_MAC80211_IBSS_DEBUG=y
CONFIG_MAC80211_VERBOSE_PS_DEBUG=y
CONFIG_MAC80211_VERBOSE_MPL_DEBUG=y
CONFIG_MAC80211_DEBUG_COUNTERS=y
CONFIG_MAC80211_VERBOSE_SPECT_MGMT_DEBUG=y
CONFIG_IEEE80211=y
CONFIG_IEEE80211_DEBUG=y
CONFIG_IEEE80211_CRYPT_WEP=y
CONFIG_IEEE80211_CRYPT_CCMP=y
CONFIG_IEEE80211_CRYPT_TKIP=y
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
CONFIG_RFKILL_LEDS=y
#
# Device Drivers
#
#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_DEBUG_DRIVER=y
CONFIG_DEBUG_DEVRES=y
# CONFIG_SYS_HYPERVISOR is not set
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_PARPORT_SERIAL=y
CONFIG_PARPORT_PC_FIFO=y
CONFIG_PARPORT_PC_SUPERIO=y
CONFIG_PARPORT_PC_PCMCIA=y
# CONFIG_PARPORT_GSC is not set
CONFIG_PARPORT_AX88796=y
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y
#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_FD=y
# CONFIG_PARIDE is not set
CONFIG_BLK_CPQ_DA=y
CONFIG_BLK_CPQ_CISS_DA=y
CONFIG_CISS_SCSI_TAPE=y
CONFIG_BLK_DEV_DAC960=y
CONFIG_BLK_DEV_UMEM=y
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=y
CONFIG_BLK_DEV_NBD=y
CONFIG_BLK_DEV_SX8=y
CONFIG_BLK_DEV_UB=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_XIP=y
CONFIG_CDROM_PKTCDVD=y
CONFIG_CDROM_PKTCDVD_BUFFERS=8
CONFIG_CDROM_PKTCDVD_WCACHE=y
CONFIG_ATA_OVER_ETH=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_VIRTIO_BLK=y
CONFIG_BLK_DEV_HD=y
CONFIG_MISC_DEVICES=y
CONFIG_IBM_ASM=y
CONFIG_PHANTOM=y
CONFIG_EEPROM_93CX6=y
CONFIG_SGI_IOC4=y
CONFIG_TIFM_CORE=y
CONFIG_TIFM_7XX1=y
CONFIG_ACER_WMI=y
CONFIG_FUJITSU_LAPTOP=y
CONFIG_FUJITSU_LAPTOP_DEBUG=y
CONFIG_HP_WMI=y
CONFIG_ICS932S401=y
CONFIG_MSI_LAPTOP=y
CONFIG_PANASONIC_LAPTOP=y
CONFIG_COMPAL_LAPTOP=y
CONFIG_SONY_LAPTOP=y
CONFIG_SONYPI_COMPAT=y
CONFIG_THINKPAD_ACPI=y
CONFIG_THINKPAD_ACPI_DEBUG=y
CONFIG_THINKPAD_ACPI_BAY=y
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
CONFIG_INTEL_MENLOW=y
CONFIG_EEEPC_LAPTOP=y
CONFIG_ENCLOSURE_SERVICES=y
CONFIG_SGI_XP=y
CONFIG_HP_ILO=y
CONFIG_SGI_GRU=y
CONFIG_SGI_GRU_DEBUG=y
CONFIG_C2PORT=y
CONFIG_C2PORT_DURAMAR_2150=y
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set
#
# SCSI device support
#
CONFIG_RAID_ATTRS=y
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y
#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_CHR_DEV_OSST=y
CONFIG_BLK_DEV_SR=y
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=y
CONFIG_CHR_DEV_SCH=y
CONFIG_SCSI_ENCLOSURE=y
#
# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
#
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_WAIT_SCAN=m
#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=y
CONFIG_SCSI_SAS_ATTRS=y
CONFIG_SCSI_SAS_LIBSAS=y
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SAS_LIBSAS_DEBUG=y
CONFIG_SCSI_SRP_ATTRS=y
CONFIG_SCSI_SRP_TGT_ATTRS=y
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=y
CONFIG_BLK_DEV_3W_XXXX_RAID=y
CONFIG_SCSI_3W_9XXX=y
CONFIG_SCSI_ACARD=y
CONFIG_SCSI_AACRAID=y
CONFIG_SCSI_AIC7XXX=y
CONFIG_AIC7XXX_CMDS_PER_DEVICE=32
CONFIG_AIC7XXX_RESET_DELAY_MS=5000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC7XXX_OLD=y
CONFIG_SCSI_AIC79XX=y
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=5000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=y
CONFIG_AIC94XX_DEBUG=y
CONFIG_SCSI_DPT_I2O=y
CONFIG_SCSI_ADVANSYS=y
CONFIG_SCSI_ARCMSR=y
CONFIG_SCSI_ARCMSR_AER=y
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=y
CONFIG_MEGARAID_MAILBOX=y
CONFIG_MEGARAID_LEGACY=y
CONFIG_MEGARAID_SAS=y
CONFIG_SCSI_HPTIOP=y
CONFIG_SCSI_BUSLOGIC=y
CONFIG_SCSI_DMX3191D=y
CONFIG_SCSI_EATA=y
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=y
CONFIG_SCSI_GDTH=y
CONFIG_SCSI_IPS=y
CONFIG_SCSI_INITIO=y
CONFIG_SCSI_INIA100=y
CONFIG_SCSI_PPA=y
CONFIG_SCSI_IMM=y
CONFIG_SCSI_IZIP_EPP16=y
CONFIG_SCSI_IZIP_SLOW_CTR=y
CONFIG_SCSI_MVSAS=y
CONFIG_SCSI_STEX=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=y
CONFIG_SCSI_IPR_TRACE=y
CONFIG_SCSI_IPR_DUMP=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_SCSI_QLA_FC=y
CONFIG_SCSI_QLA_ISCSI=y
CONFIG_SCSI_LPFC=y
CONFIG_SCSI_DC395x=y
CONFIG_SCSI_DC390T=y
# CONFIG_SCSI_DEBUG is not set
CONFIG_SCSI_SRP=y
CONFIG_SCSI_LOWLEVEL_PCMCIA=y
CONFIG_PCMCIA_FDOMAIN=m
CONFIG_PCMCIA_QLOGIC=m
CONFIG_PCMCIA_SYM53C500=m
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
CONFIG_ATA=y
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_ACPI=y
CONFIG_SATA_PMP=y
CONFIG_SATA_AHCI=y
CONFIG_SATA_SIL24=y
CONFIG_ATA_SFF=y
CONFIG_SATA_SVW=y
CONFIG_ATA_PIIX=y
CONFIG_SATA_MV=y
CONFIG_SATA_NV=y
CONFIG_PDC_ADMA=y
CONFIG_SATA_QSTOR=y
CONFIG_SATA_PROMISE=y
CONFIG_SATA_SX4=y
CONFIG_SATA_SIL=y
CONFIG_SATA_SIS=y
CONFIG_SATA_ULI=y
CONFIG_SATA_VIA=y
CONFIG_SATA_VITESSE=y
CONFIG_SATA_INIC162X=y
CONFIG_PATA_ACPI=y
CONFIG_PATA_ALI=y
CONFIG_PATA_AMD=y
CONFIG_PATA_ARTOP=y
CONFIG_PATA_ATIIXP=y
CONFIG_PATA_CMD640_PCI=y
CONFIG_PATA_CMD64X=y
CONFIG_PATA_CS5520=y
CONFIG_PATA_CS5530=y
CONFIG_PATA_CYPRESS=y
CONFIG_PATA_EFAR=y
CONFIG_ATA_GENERIC=y
CONFIG_PATA_HPT366=y
CONFIG_PATA_HPT37X=y
CONFIG_PATA_HPT3X2N=y
CONFIG_PATA_HPT3X3=y
CONFIG_PATA_HPT3X3_DMA=y
CONFIG_PATA_IT821X=y
CONFIG_PATA_IT8213=y
CONFIG_PATA_JMICRON=y
CONFIG_PATA_TRIFLEX=y
CONFIG_PATA_MARVELL=y
CONFIG_PATA_MPIIX=y
CONFIG_PATA_OLDPIIX=y
CONFIG_PATA_NETCELL=y
CONFIG_PATA_NINJA32=y
CONFIG_PATA_NS87410=y
CONFIG_PATA_NS87415=y
CONFIG_PATA_OPTI=y
CONFIG_PATA_OPTIDMA=y
CONFIG_PATA_PCMCIA=y
CONFIG_PATA_PDC_OLD=y
CONFIG_PATA_RADISYS=y
CONFIG_PATA_RZ1000=y
CONFIG_PATA_SC1200=y
CONFIG_PATA_SERVERWORKS=y
CONFIG_PATA_PDC2027X=y
CONFIG_PATA_SIL680=y
CONFIG_PATA_SIS=y
CONFIG_PATA_VIA=y
CONFIG_PATA_WINBOND=y
CONFIG_PATA_PLATFORM=y
CONFIG_PATA_SCH=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
CONFIG_MD_RAID5_RESHAPE=y
CONFIG_MD_MULTIPATH=y
CONFIG_MD_FAULTY=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_DEBUG=y
CONFIG_DM_CRYPT=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_MIRROR=y
CONFIG_DM_ZERO=y
CONFIG_DM_MULTIPATH=y
CONFIG_DM_DELAY=y
CONFIG_DM_UEVENT=y
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=y
CONFIG_FUSION_SAS=y
CONFIG_FUSION_MAX_SGE=128
CONFIG_FUSION_CTL=y
CONFIG_FUSION_LAN=y
CONFIG_FUSION_LOGGING=y
#
# IEEE 1394 (FireWire) support
#
#
# Enable only one of the two stacks, unless you know what you are doing
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=y
CONFIG_FIREWIRE_OHCI_DEBUG=y
CONFIG_FIREWIRE_SBP2=y
CONFIG_IEEE1394=y
CONFIG_IEEE1394_OHCI1394=y
CONFIG_IEEE1394_PCILYNX=y
CONFIG_IEEE1394_SBP2=y
CONFIG_IEEE1394_SBP2_PHYS_DMA=y
CONFIG_IEEE1394_ETH1394_ROM_ENTRY=y
CONFIG_IEEE1394_ETH1394=y
CONFIG_IEEE1394_RAWIO=y
CONFIG_IEEE1394_VIDEO1394=y
CONFIG_IEEE1394_DV1394=y
CONFIG_IEEE1394_VERBOSEDEBUG=y
CONFIG_I2O=y
CONFIG_I2O_LCT_NOTIFY_ON_CHANGES=y
CONFIG_I2O_EXT_ADAPTEC=y
CONFIG_I2O_EXT_ADAPTEC_DMA64=y
CONFIG_I2O_BUS=y
CONFIG_I2O_BLOCK=y
CONFIG_I2O_SCSI=y
CONFIG_I2O_PROC=y
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_IFB=y
CONFIG_DUMMY=y
CONFIG_BONDING=y
CONFIG_MACVLAN=y
CONFIG_EQUALIZER=y
CONFIG_TUN=y
CONFIG_VETH=y
CONFIG_NET_SB1000=y
CONFIG_ARCNET=y
CONFIG_ARCNET_1201=y
CONFIG_ARCNET_1051=y
CONFIG_ARCNET_RAW=y
CONFIG_ARCNET_CAP=y
CONFIG_ARCNET_COM90xx=y
CONFIG_ARCNET_COM90xxIO=y
CONFIG_ARCNET_RIM_I=y
CONFIG_ARCNET_COM20020=y
CONFIG_ARCNET_COM20020_PCI=y
CONFIG_PHYLIB=y
#
# MII PHY device drivers
#
CONFIG_MARVELL_PHY=y
CONFIG_DAVICOM_PHY=y
CONFIG_QSEMI_PHY=y
CONFIG_LXT_PHY=y
CONFIG_CICADA_PHY=y
CONFIG_VITESSE_PHY=y
CONFIG_SMSC_PHY=y
CONFIG_BROADCOM_PHY=y
CONFIG_ICPLUS_PHY=y
CONFIG_REALTEK_PHY=y
CONFIG_FIXED_PHY=y
CONFIG_MDIO_BITBANG=y
CONFIG_NET_ETHERNET=y
CONFIG_MII=y
CONFIG_HAPPYMEAL=y
CONFIG_SUNGEM=y
CONFIG_CASSINI=y
CONFIG_NET_VENDOR_3COM=y
CONFIG_VORTEX=y
CONFIG_TYPHOON=y
CONFIG_ENC28J60=y
CONFIG_ENC28J60_WRITEVERIFY=y
CONFIG_NET_TULIP=y
CONFIG_DE2104X=y
CONFIG_TULIP=y
CONFIG_TULIP_MWI=y
CONFIG_TULIP_MMIO=y
CONFIG_TULIP_NAPI=y
CONFIG_TULIP_NAPI_HW_MITIGATION=y
CONFIG_DE4X5=y
CONFIG_WINBOND_840=y
CONFIG_DM9102=y
CONFIG_ULI526X=y
CONFIG_PCMCIA_XIRCOM=y
CONFIG_HP100=y
# CONFIG_IBM_NEW_EMAC_ZMII is not set
# CONFIG_IBM_NEW_EMAC_RGMII is not set
# CONFIG_IBM_NEW_EMAC_TAH is not set
# CONFIG_IBM_NEW_EMAC_EMAC4 is not set
# CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set
# CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set
# CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set
CONFIG_NET_PCI=y
CONFIG_PCNET32=y
CONFIG_AMD8111_ETH=y
CONFIG_ADAPTEC_STARFIRE=y
CONFIG_B44=y
CONFIG_B44_PCI_AUTOSELECT=y
CONFIG_B44_PCICORE_AUTOSELECT=y
CONFIG_B44_PCI=y
CONFIG_FORCEDETH=y
CONFIG_FORCEDETH_NAPI=y
CONFIG_EEPRO100=y
CONFIG_E100=y
CONFIG_FEALNX=y
CONFIG_NATSEMI=y
CONFIG_NE2K_PCI=y
CONFIG_8139CP=y
CONFIG_8139TOO=y
CONFIG_8139TOO_PIO=y
CONFIG_8139TOO_TUNE_TWISTER=y
CONFIG_8139TOO_8129=y
CONFIG_8139_OLD_RX_RESET=y
CONFIG_R6040=y
CONFIG_SIS900=y
CONFIG_EPIC100=y
CONFIG_SUNDANCE=y
CONFIG_SUNDANCE_MMIO=y
CONFIG_TLAN=y
CONFIG_VIA_RHINE=y
CONFIG_VIA_RHINE_MMIO=y
CONFIG_SC92031=y
CONFIG_NET_POCKET=y
CONFIG_ATP=y
CONFIG_DE600=y
CONFIG_DE620=y
CONFIG_ATL2=y
CONFIG_NETDEV_1000=y
CONFIG_ACENIC=y
CONFIG_ACENIC_OMIT_TIGON_I=y
CONFIG_DL2K=y
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_IP1000=y
CONFIG_IGB=y
CONFIG_IGB_LRO=y
CONFIG_IGB_DCA=y
CONFIG_NS83820=y
CONFIG_HAMACHI=y
CONFIG_YELLOWFIN=y
CONFIG_R8169=y
CONFIG_R8169_VLAN=y
CONFIG_SIS190=y
CONFIG_SKGE=y
CONFIG_SKGE_DEBUG=y
CONFIG_SKY2=y
CONFIG_SKY2_DEBUG=y
CONFIG_VIA_VELOCITY=y
CONFIG_TIGON3=y
CONFIG_BNX2=y
CONFIG_QLA3XXX=y
CONFIG_ATL1=y
CONFIG_ATL1E=y
CONFIG_JME=y
CONFIG_NETDEV_10000=y
CONFIG_CHELSIO_T1=y
CONFIG_CHELSIO_T1_1G=y
CONFIG_CHELSIO_T3=y
CONFIG_ENIC=y
CONFIG_IXGBE=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGB=y
CONFIG_S2IO=y
CONFIG_MYRI10GE=y
CONFIG_MYRI10GE_DCA=y
CONFIG_NETXEN_NIC=y
CONFIG_NIU=y
CONFIG_MLX4_EN=y
CONFIG_MLX4_CORE=y
CONFIG_MLX4_DEBUG=y
CONFIG_TEHUTI=y
CONFIG_BNX2X=y
CONFIG_QLGE=y
CONFIG_SFC=y
CONFIG_TR=y
CONFIG_IBMOL=y
CONFIG_3C359=y
CONFIG_TMS380TR=y
CONFIG_TMSPCI=y
CONFIG_ABYSS=y
#
# Wireless LAN
#
CONFIG_WLAN_PRE80211=y
CONFIG_STRIP=y
CONFIG_PCMCIA_WAVELAN=y
CONFIG_PCMCIA_NETWAVE=y
CONFIG_WLAN_80211=y
CONFIG_PCMCIA_RAYCS=y
CONFIG_IPW2100=y
CONFIG_IPW2100_MONITOR=y
CONFIG_IPW2100_DEBUG=y
CONFIG_IPW2200=y
CONFIG_IPW2200_MONITOR=y
CONFIG_IPW2200_RADIOTAP=y
CONFIG_IPW2200_PROMISCUOUS=y
CONFIG_IPW2200_QOS=y
CONFIG_IPW2200_DEBUG=y
CONFIG_LIBERTAS=y
CONFIG_LIBERTAS_USB=y
CONFIG_LIBERTAS_CS=y
CONFIG_LIBERTAS_SDIO=y
CONFIG_LIBERTAS_DEBUG=y
CONFIG_LIBERTAS_THINFIRM=y
CONFIG_LIBERTAS_THINFIRM_USB=y
CONFIG_AIRO=y
CONFIG_HERMES=y
CONFIG_PLX_HERMES=y
CONFIG_TMD_HERMES=y
CONFIG_NORTEL_HERMES=y
CONFIG_PCI_HERMES=y
CONFIG_PCMCIA_HERMES=y
CONFIG_PCMCIA_SPECTRUM=y
CONFIG_ATMEL=y
CONFIG_PCI_ATMEL=y
CONFIG_PCMCIA_ATMEL=y
CONFIG_AIRO_CS=y
CONFIG_PCMCIA_WL3501=y
CONFIG_PRISM54=y
CONFIG_USB_ZD1201=y
CONFIG_USB_NET_RNDIS_WLAN=y
CONFIG_RTL8180=y
CONFIG_RTL8187=y
CONFIG_ADM8211=y
CONFIG_MAC80211_HWSIM=y
CONFIG_P54_COMMON=y
CONFIG_P54_USB=y
CONFIG_P54_PCI=y
CONFIG_ATH5K=y
CONFIG_ATH5K_DEBUG=y
CONFIG_ATH9K=y
CONFIG_IWLWIFI=y
CONFIG_IWLCORE=y
CONFIG_IWLWIFI_LEDS=y
CONFIG_IWLWIFI_RFKILL=y
CONFIG_IWLWIFI_DEBUG=y
CONFIG_IWLWIFI_DEBUGFS=y
CONFIG_IWLAGN=y
CONFIG_IWLAGN_SPECTRUM_MEASUREMENT=y
CONFIG_IWLAGN_LEDS=y
CONFIG_IWL4965=y
CONFIG_IWL5000=y
CONFIG_IWL3945=y
CONFIG_IWL3945_RFKILL=y
CONFIG_IWL3945_SPECTRUM_MEASUREMENT=y
CONFIG_IWL3945_LEDS=y
CONFIG_IWL3945_DEBUG=y
CONFIG_HOSTAP=y
CONFIG_HOSTAP_FIRMWARE=y
CONFIG_HOSTAP_FIRMWARE_NVRAM=y
CONFIG_HOSTAP_PLX=y
CONFIG_HOSTAP_PCI=y
CONFIG_HOSTAP_CS=y
CONFIG_B43=y
CONFIG_B43_PCI_AUTOSELECT=y
CONFIG_B43_PCICORE_AUTOSELECT=y
CONFIG_B43_PCMCIA=y
CONFIG_B43_PIO=y
CONFIG_B43_LEDS=y
CONFIG_B43_RFKILL=y
CONFIG_B43_DEBUG=y
CONFIG_B43_FORCE_PIO=y
CONFIG_B43LEGACY=y
CONFIG_B43LEGACY_PCI_AUTOSELECT=y
CONFIG_B43LEGACY_PCICORE_AUTOSELECT=y
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_RFKILL=y
CONFIG_B43LEGACY_DEBUG=y
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
CONFIG_ZD1211RW=y
CONFIG_ZD1211RW_DEBUG=y
#
# USB Network Adapters
#
CONFIG_USB_CATC=y
CONFIG_USB_KAWETH=y
CONFIG_USB_PEGASUS=y
CONFIG_USB_RTL8150=y
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_CDCETHER=y
CONFIG_USB_NET_DM9601=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB_NET_GL620A=y
CONFIG_USB_NET_NET1080=y
CONFIG_USB_NET_PLUSB=y
CONFIG_USB_NET_MCS7830=y
CONFIG_USB_NET_RNDIS_HOST=y
CONFIG_USB_NET_CDC_SUBSET=y
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_KC2190=y
CONFIG_USB_NET_ZAURUS=y
CONFIG_USB_HSO=y
CONFIG_NET_PCMCIA=y
CONFIG_PCMCIA_3C589=y
CONFIG_PCMCIA_3C574=y
CONFIG_PCMCIA_FMVJ18X=y
CONFIG_PCMCIA_PCNET=y
CONFIG_PCMCIA_NMCLAN=y
CONFIG_PCMCIA_SMC91C92=y
CONFIG_PCMCIA_XIRC2PS=y
CONFIG_PCMCIA_AXNET=y
CONFIG_ARCNET_COM20020_CS=y
CONFIG_PCMCIA_IBMTR=y
CONFIG_WAN=y
CONFIG_LANMEDIA=y
CONFIG_HDLC=y
CONFIG_HDLC_RAW=y
CONFIG_HDLC_RAW_ETH=y
CONFIG_HDLC_CISCO=y
CONFIG_HDLC_FR=y
CONFIG_HDLC_PPP=y
CONFIG_HDLC_X25=y
CONFIG_PCI200SYN=y
CONFIG_WANXL=y
CONFIG_PC300TOO=y
CONFIG_FARSYNC=y
CONFIG_DSCC4=m
CONFIG_DSCC4_PCISYNC=y
CONFIG_DSCC4_PCI_RST=y
CONFIG_DLCI=y
CONFIG_DLCI_MAX=8
CONFIG_WAN_ROUTER_DRIVERS=y
CONFIG_CYCLADES_SYNC=y
CONFIG_CYCLOMX_X25=y
CONFIG_LAPBETHER=y
CONFIG_X25_ASY=y
CONFIG_SBNI=y
CONFIG_SBNI_MULTILINE=y
CONFIG_ATM_DRIVERS=y
CONFIG_ATM_DUMMY=y
CONFIG_ATM_TCP=y
CONFIG_ATM_LANAI=y
CONFIG_ATM_ENI=y
CONFIG_ATM_ENI_DEBUG=y
CONFIG_ATM_ENI_TUNE_BURST=y
CONFIG_ATM_ENI_BURST_TX_16W=y
CONFIG_ATM_ENI_BURST_TX_8W=y
CONFIG_ATM_ENI_BURST_TX_4W=y
CONFIG_ATM_ENI_BURST_TX_2W=y
CONFIG_ATM_ENI_BURST_RX_16W=y
CONFIG_ATM_ENI_BURST_RX_8W=y
CONFIG_ATM_ENI_BURST_RX_4W=y
CONFIG_ATM_ENI_BURST_RX_2W=y
CONFIG_ATM_FIRESTREAM=y
CONFIG_ATM_ZATM=y
CONFIG_ATM_ZATM_DEBUG=y
CONFIG_ATM_IDT77252=y
CONFIG_ATM_IDT77252_DEBUG=y
CONFIG_ATM_IDT77252_RCV_ALL=y
CONFIG_ATM_IDT77252_USE_SUNI=y
CONFIG_ATM_AMBASSADOR=y
CONFIG_ATM_AMBASSADOR_DEBUG=y
CONFIG_ATM_HORIZON=y
CONFIG_ATM_HORIZON_DEBUG=y
CONFIG_ATM_IA=y
CONFIG_ATM_IA_DEBUG=y
CONFIG_ATM_FORE200E=y
CONFIG_ATM_FORE200E_USE_TASKLET=y
CONFIG_ATM_FORE200E_TX_RETRY=16
CONFIG_ATM_FORE200E_DEBUG=0
CONFIG_ATM_HE=y
CONFIG_ATM_HE_USE_SUNI=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_FDDI=y
CONFIG_DEFXX=y
CONFIG_DEFXX_MMIO=y
CONFIG_SKFP=y
CONFIG_HIPPI=y
CONFIG_ROADRUNNER=y
CONFIG_ROADRUNNER_LARGE_RINGS=y
CONFIG_PLIP=y
CONFIG_PPP=y
CONFIG_PPP_MULTILINK=y
CONFIG_PPP_FILTER=y
CONFIG_PPP_ASYNC=y
CONFIG_PPP_SYNC_TTY=y
CONFIG_PPP_DEFLATE=y
CONFIG_PPP_BSDCOMP=y
CONFIG_PPP_MPPE=y
CONFIG_PPPOE=y
CONFIG_PPPOATM=y
CONFIG_PPPOL2TP=y
CONFIG_SLIP=y
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLHC=y
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
CONFIG_NET_FC=y
CONFIG_NETCONSOLE=y
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_VIRTIO_NET=y
CONFIG_ISDN=y
CONFIG_ISDN_I4L=y
CONFIG_ISDN_PPP=y
CONFIG_ISDN_PPP_VJ=y
CONFIG_ISDN_MPP=y
CONFIG_IPPP_FILTER=y
CONFIG_ISDN_PPP_BSDCOMP=y
CONFIG_ISDN_AUDIO=y
CONFIG_ISDN_TTY_FAX=y
CONFIG_ISDN_X25=y
#
# ISDN feature submodules
#
CONFIG_ISDN_DIVERSION=y
#
# ISDN4Linux hardware drivers
#
#
# Passive cards
#
CONFIG_ISDN_DRV_HISAX=y
#
# D-channel protocol features
#
CONFIG_HISAX_EURO=y
CONFIG_DE_AOC=y
CONFIG_HISAX_NO_SENDCOMPLETE=y
CONFIG_HISAX_NO_LLC=y
CONFIG_HISAX_NO_KEYPAD=y
CONFIG_HISAX_1TR6=y
CONFIG_HISAX_NI1=y
CONFIG_HISAX_MAX_CARDS=8
#
# HiSax supported cards
#
CONFIG_HISAX_16_3=y
CONFIG_HISAX_TELESPCI=y
CONFIG_HISAX_S0BOX=y
CONFIG_HISAX_FRITZPCI=y
CONFIG_HISAX_AVM_A1_PCMCIA=y
CONFIG_HISAX_ELSA=y
CONFIG_HISAX_DIEHLDIVA=y
CONFIG_HISAX_SEDLBAUER=y
CONFIG_HISAX_NETJET=y
CONFIG_HISAX_NETJET_U=y
CONFIG_HISAX_NICCY=y
CONFIG_HISAX_BKM_A4T=y
CONFIG_HISAX_SCT_QUADRO=y
CONFIG_HISAX_GAZEL=y
CONFIG_HISAX_HFC_PCI=y
CONFIG_HISAX_W6692=y
CONFIG_HISAX_HFC_SX=y
CONFIG_HISAX_ENTERNOW_PCI=y
CONFIG_HISAX_DEBUG=y
#
# HiSax PCMCIA card service modules
#
CONFIG_HISAX_SEDLBAUER_CS=y
CONFIG_HISAX_ELSA_CS=y
CONFIG_HISAX_AVM_A1_CS=y
CONFIG_HISAX_TELES_CS=y
#
# HiSax sub driver modules
#
CONFIG_HISAX_ST5481=y
CONFIG_HISAX_HFCUSB=y
CONFIG_HISAX_HFC4S8S=y
CONFIG_HISAX_FRITZ_PCIPNP=y
CONFIG_HISAX_HDLC=y
#
# Active cards
#
CONFIG_HYSDN=m
CONFIG_HYSDN_CAPI=y
CONFIG_ISDN_DRV_GIGASET=y
CONFIG_GIGASET_BASE=y
CONFIG_GIGASET_M105=y
CONFIG_GIGASET_M101=y
CONFIG_GIGASET_DEBUG=y
CONFIG_GIGASET_UNDOCREQ=y
CONFIG_ISDN_CAPI=y
CONFIG_ISDN_DRV_AVMB1_VERBOSE_REASON=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_ISDN_CAPI_CAPI20=y
CONFIG_ISDN_CAPI_CAPIFS_BOOL=y
CONFIG_ISDN_CAPI_CAPIFS=y
CONFIG_ISDN_CAPI_CAPIDRV=y
#
# CAPI hardware drivers
#
CONFIG_CAPI_AVM=y
CONFIG_ISDN_DRV_AVMB1_B1PCI=y
CONFIG_ISDN_DRV_AVMB1_B1PCIV4=y
CONFIG_ISDN_DRV_AVMB1_B1PCMCIA=y
CONFIG_ISDN_DRV_AVMB1_AVM_CS=y
CONFIG_ISDN_DRV_AVMB1_T1PCI=y
CONFIG_ISDN_DRV_AVMB1_C4=y
CONFIG_CAPI_EICON=y
CONFIG_ISDN_DIVAS=y
CONFIG_ISDN_DIVAS_BRIPCI=y
CONFIG_ISDN_DIVAS_PRIPCI=y
CONFIG_ISDN_DIVAS_DIVACAPI=y
CONFIG_ISDN_DIVAS_USERIDI=y
CONFIG_ISDN_DIVAS_MAINT=m
CONFIG_PHONE=y
#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=y
CONFIG_INPUT_POLLDEV=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_EVBUG=y
CONFIG_XEN_KBDDEV_FRONTEND=y
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_SUNKBD=y
CONFIG_KEYBOARD_LKKBD=y
CONFIG_KEYBOARD_XTKBD=y
CONFIG_KEYBOARD_NEWTON=y
CONFIG_KEYBOARD_STOWAWAY=y
CONFIG_KEYBOARD_GPIO=y
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_TOUCHKIT=y
CONFIG_MOUSE_SERIAL=y
CONFIG_MOUSE_APPLETOUCH=y
CONFIG_MOUSE_BCM5974=y
CONFIG_MOUSE_VSXXXAA=y
CONFIG_MOUSE_GPIO=y
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=y
CONFIG_JOYSTICK_A3D=y
CONFIG_JOYSTICK_ADI=y
CONFIG_JOYSTICK_COBRA=y
CONFIG_JOYSTICK_GF2K=y
CONFIG_JOYSTICK_GRIP=y
CONFIG_JOYSTICK_GRIP_MP=y
CONFIG_JOYSTICK_GUILLEMOT=y
CONFIG_JOYSTICK_INTERACT=y
CONFIG_JOYSTICK_SIDEWINDER=y
CONFIG_JOYSTICK_TMDC=y
CONFIG_JOYSTICK_IFORCE=y
CONFIG_JOYSTICK_IFORCE_USB=y
CONFIG_JOYSTICK_IFORCE_232=y
CONFIG_JOYSTICK_WARRIOR=y
CONFIG_JOYSTICK_MAGELLAN=y
CONFIG_JOYSTICK_SPACEORB=y
CONFIG_JOYSTICK_SPACEBALL=y
CONFIG_JOYSTICK_STINGER=y
CONFIG_JOYSTICK_TWIDJOY=y
CONFIG_JOYSTICK_ZHENHUA=y
CONFIG_JOYSTICK_DB9=y
CONFIG_JOYSTICK_GAMECON=y
CONFIG_JOYSTICK_TURBOGRAFX=y
CONFIG_JOYSTICK_JOYDUMP=y
CONFIG_JOYSTICK_XPAD=y
CONFIG_JOYSTICK_XPAD_FF=y
CONFIG_JOYSTICK_XPAD_LEDS=y
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=y
CONFIG_TABLET_USB_AIPTEK=y
CONFIG_TABLET_USB_GTCO=y
CONFIG_TABLET_USB_KBTAB=y
CONFIG_TABLET_USB_WACOM=y
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_ADS7846=y
CONFIG_TOUCHSCREEN_FUJITSU=y
CONFIG_TOUCHSCREEN_GUNZE=y
CONFIG_TOUCHSCREEN_ELO=y
CONFIG_TOUCHSCREEN_MTOUCH=y
CONFIG_TOUCHSCREEN_INEXIO=y
CONFIG_TOUCHSCREEN_MK712=y
CONFIG_TOUCHSCREEN_PENMOUNT=y
CONFIG_TOUCHSCREEN_TOUCHRIGHT=y
CONFIG_TOUCHSCREEN_TOUCHWIN=y
CONFIG_TOUCHSCREEN_UCB1400=y
CONFIG_TOUCHSCREEN_WM97XX=y
CONFIG_TOUCHSCREEN_WM9705=y
CONFIG_TOUCHSCREEN_WM9712=y
CONFIG_TOUCHSCREEN_WM9713=y
CONFIG_TOUCHSCREEN_USB_COMPOSITE=y
CONFIG_TOUCHSCREEN_USB_EGALAX=y
CONFIG_TOUCHSCREEN_USB_PANJIT=y
CONFIG_TOUCHSCREEN_USB_3M=y
CONFIG_TOUCHSCREEN_USB_ITM=y
CONFIG_TOUCHSCREEN_USB_ETURBO=y
CONFIG_TOUCHSCREEN_USB_GUNZE=y
CONFIG_TOUCHSCREEN_USB_DMC_TSC10=y
CONFIG_TOUCHSCREEN_USB_IRTOUCH=y
CONFIG_TOUCHSCREEN_USB_IDEALTEK=y
CONFIG_TOUCHSCREEN_USB_GENERAL_TOUCH=y
CONFIG_TOUCHSCREEN_USB_GOTOP=y
CONFIG_TOUCHSCREEN_TOUCHIT213=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_PCSPKR=y
CONFIG_INPUT_APANEL=y
CONFIG_INPUT_ATLAS_BTNS=y
CONFIG_INPUT_ATI_REMOTE=y
CONFIG_INPUT_ATI_REMOTE2=y
CONFIG_INPUT_KEYSPAN_REMOTE=y
CONFIG_INPUT_POWERMATE=y
CONFIG_INPUT_YEALINK=y
CONFIG_INPUT_CM109=y
CONFIG_INPUT_UINPUT=y
#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
CONFIG_SERIO_CT82C710=y
CONFIG_SERIO_PARKBD=y
CONFIG_SERIO_PCIPS2=y
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=y
CONFIG_GAMEPORT=y
CONFIG_GAMEPORT_NS558=y
CONFIG_GAMEPORT_L4=y
CONFIG_GAMEPORT_EMU10K1=y
CONFIG_GAMEPORT_FM801=y
#
# Character devices
#
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_DEVKMEM=y
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_COMPUTONE=y
CONFIG_ROCKETPORT=y
CONFIG_CYCLADES=y
CONFIG_CYZ_INTR=y
CONFIG_DIGIEPCA=y
CONFIG_MOXA_INTELLIO=y
CONFIG_MOXA_SMARTIO=y
CONFIG_ISI=y
CONFIG_SYNCLINK=y
CONFIG_SYNCLINKMP=y
CONFIG_SYNCLINK_GT=y
CONFIG_N_HDLC=y
CONFIG_RISCOM8=y
CONFIG_SPECIALIX=y
CONFIG_SX=y
CONFIG_RIO=y
CONFIG_RIO_OLDPCI=y
CONFIG_STALDRV=y
CONFIG_STALLION=y
CONFIG_ISTALLION=y
CONFIG_NOZOMI=y
#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CS=y
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
CONFIG_SERIAL_8250_DETECT_IRQ=y
CONFIG_SERIAL_8250_RSA=y
#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
CONFIG_SERIAL_JSM=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
CONFIG_PRINTER=y
CONFIG_LP_CONSOLE=y
CONFIG_PPDEV=y
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=y
CONFIG_IPMI_PANIC_EVENT=y
CONFIG_IPMI_PANIC_STRING=y
CONFIG_IPMI_DEVICE_INTERFACE=y
CONFIG_IPMI_SI=y
CONFIG_IPMI_WATCHDOG=y
CONFIG_IPMI_POWEROFF=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_INTEL=y
CONFIG_HW_RANDOM_AMD=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_NVRAM=y
CONFIG_R3964=y
CONFIG_APPLICOM=y
#
# PCMCIA character devices
#
CONFIG_SYNCLINK_CS=y
CONFIG_CARDMAN_4000=y
CONFIG_CARDMAN_4040=y
CONFIG_IPWIRELESS=y
CONFIG_MWAVE=y
CONFIG_PC8736x_GPIO=y
CONFIG_NSC_GPIO=y
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=256
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
CONFIG_HANGCHECK_TIMER=y
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=y
CONFIG_TCG_NSC=y
CONFIG_TCG_ATMEL=y
CONFIG_TCG_INFINEON=y
CONFIG_TELCLOCK=y
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=y
#
# I2C Hardware Bus support
#
#
# PC SMBus host controller drivers
#
CONFIG_I2C_ALI1535=y
CONFIG_I2C_ALI1563=y
CONFIG_I2C_ALI15X3=y
CONFIG_I2C_AMD756=y
CONFIG_I2C_AMD8111=y
CONFIG_I2C_I801=y
CONFIG_I2C_ISCH=y
CONFIG_I2C_PIIX4=y
CONFIG_I2C_NFORCE2=y
CONFIG_I2C_SIS5595=y
CONFIG_I2C_SIS630=y
CONFIG_I2C_SIS96X=y
CONFIG_I2C_VIA=y
CONFIG_I2C_VIAPRO=y
#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
CONFIG_I2C_GPIO=y
CONFIG_I2C_OCORES=y
CONFIG_I2C_SIMTEC=y
#
# External I2C/SMBus adapter drivers
#
CONFIG_I2C_PARPORT=y
CONFIG_I2C_PARPORT_LIGHT=y
CONFIG_I2C_TAOS_EVM=y
CONFIG_I2C_TINY_USB=y
#
# Graphics adapter I2C/DDC channel drivers
#
CONFIG_I2C_VOODOO3=y
#
# Other I2C/SMBus bus drivers
#
CONFIG_I2C_PCA_PLATFORM=y
CONFIG_I2C_STUB=m
#
# Miscellaneous I2C Chip support
#
CONFIG_DS1682=y
CONFIG_AT24=y
CONFIG_SENSORS_EEPROM=y
CONFIG_SENSORS_PCF8591=y
CONFIG_TPS65010=y
CONFIG_SENSORS_MAX6875=y
CONFIG_SENSORS_TSL2550=y
CONFIG_I2C_DEBUG_CORE=y
CONFIG_I2C_DEBUG_ALGO=y
CONFIG_I2C_DEBUG_BUS=y
CONFIG_I2C_DEBUG_CHIP=y
CONFIG_SPI=y
CONFIG_SPI_DEBUG=y
CONFIG_SPI_MASTER=y
#
# SPI Master Controller Drivers
#
CONFIG_SPI_BITBANG=y
CONFIG_SPI_BUTTERFLY=y
CONFIG_SPI_LM70_LLP=y
#
# SPI Protocol Masters
#
CONFIG_SPI_AT25=y
CONFIG_SPI_SPIDEV=y
CONFIG_SPI_TLE62X0=y
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIOLIB=y
CONFIG_DEBUG_GPIO=y
CONFIG_GPIO_SYSFS=y
#
# Memory mapped GPIO expanders:
#
#
# I2C GPIO expanders:
#
CONFIG_GPIO_MAX732X=y
CONFIG_GPIO_PCA953X=y
CONFIG_GPIO_PCF857X=y
#
# PCI GPIO expanders:
#
#
# SPI GPIO expanders:
#
CONFIG_GPIO_MAX7301=y
CONFIG_GPIO_MCP23S08=y
CONFIG_W1=y
CONFIG_W1_CON=y
#
# 1-wire Bus Masters
#
CONFIG_W1_MASTER_MATROX=y
CONFIG_W1_MASTER_DS2490=y
CONFIG_W1_MASTER_DS2482=y
CONFIG_W1_MASTER_GPIO=y
#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=y
CONFIG_W1_SLAVE_SMEM=y
CONFIG_W1_SLAVE_DS2433=y
CONFIG_W1_SLAVE_DS2433_CRC=y
CONFIG_W1_SLAVE_DS2760=y
CONFIG_W1_SLAVE_BQ27000=y
CONFIG_POWER_SUPPLY=y
CONFIG_POWER_SUPPLY_DEBUG=y
CONFIG_PDA_POWER=y
CONFIG_BATTERY_DS2760=y
CONFIG_BATTERY_WM97XX=y
CONFIG_BATTERY_BQ27x00=y
CONFIG_HWMON=y
CONFIG_HWMON_VID=y
CONFIG_SENSORS_ABITUGURU=y
CONFIG_SENSORS_ABITUGURU3=y
CONFIG_SENSORS_AD7414=y
CONFIG_SENSORS_AD7418=y
CONFIG_SENSORS_ADCXX=y
CONFIG_SENSORS_ADM1021=y
CONFIG_SENSORS_ADM1025=y
CONFIG_SENSORS_ADM1026=y
CONFIG_SENSORS_ADM1029=y
CONFIG_SENSORS_ADM1031=y
CONFIG_SENSORS_ADM9240=y
CONFIG_SENSORS_ADT7462=y
CONFIG_SENSORS_ADT7470=y
CONFIG_SENSORS_ADT7473=y
CONFIG_SENSORS_K8TEMP=y
CONFIG_SENSORS_ASB100=y
CONFIG_SENSORS_ATXP1=y
CONFIG_SENSORS_DS1621=y
CONFIG_SENSORS_I5K_AMB=y
CONFIG_SENSORS_F71805F=y
CONFIG_SENSORS_F71882FG=y
CONFIG_SENSORS_F75375S=y
CONFIG_SENSORS_FSCHER=y
CONFIG_SENSORS_FSCPOS=y
CONFIG_SENSORS_FSCHMD=y
CONFIG_SENSORS_GL518SM=y
CONFIG_SENSORS_GL520SM=y
CONFIG_SENSORS_CORETEMP=y
CONFIG_SENSORS_IBMAEM=y
CONFIG_SENSORS_IBMPEX=y
CONFIG_SENSORS_IT87=y
CONFIG_SENSORS_LM63=y
CONFIG_SENSORS_LM70=y
CONFIG_SENSORS_LM75=y
CONFIG_SENSORS_LM77=y
CONFIG_SENSORS_LM78=y
CONFIG_SENSORS_LM80=y
CONFIG_SENSORS_LM83=y
CONFIG_SENSORS_LM85=y
CONFIG_SENSORS_LM87=y
CONFIG_SENSORS_LM90=y
CONFIG_SENSORS_LM92=y
CONFIG_SENSORS_LM93=y
CONFIG_SENSORS_MAX1111=y
CONFIG_SENSORS_MAX1619=y
CONFIG_SENSORS_MAX6650=y
CONFIG_SENSORS_PC87360=y
CONFIG_SENSORS_PC87427=y
CONFIG_SENSORS_SIS5595=y
CONFIG_SENSORS_DME1737=y
CONFIG_SENSORS_SMSC47M1=y
CONFIG_SENSORS_SMSC47M192=y
CONFIG_SENSORS_SMSC47B397=y
CONFIG_SENSORS_ADS7828=y
CONFIG_SENSORS_THMC50=y
CONFIG_SENSORS_VIA686A=y
CONFIG_SENSORS_VT1211=y
CONFIG_SENSORS_VT8231=y
CONFIG_SENSORS_W83781D=y
CONFIG_SENSORS_W83791D=y
CONFIG_SENSORS_W83792D=y
CONFIG_SENSORS_W83793=y
CONFIG_SENSORS_W83L785TS=y
CONFIG_SENSORS_W83L786NG=y
CONFIG_SENSORS_W83627HF=y
CONFIG_SENSORS_W83627EHF=y
CONFIG_SENSORS_HDAPS=y
CONFIG_SENSORS_LIS3LV02D=y
CONFIG_SENSORS_APPLESMC=y
CONFIG_HWMON_DEBUG_CHIP=y
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_NOWAYOUT=y
#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=y
CONFIG_ACQUIRE_WDT=y
CONFIG_ADVANTECH_WDT=y
CONFIG_ALIM1535_WDT=y
CONFIG_ALIM7101_WDT=y
CONFIG_SC520_WDT=y
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=y
CONFIG_IBMASR=y
CONFIG_WAFER_WDT=y
CONFIG_I6300ESB_WDT=y
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=y
CONFIG_IT87_WDT=y
CONFIG_HP_WATCHDOG=y
CONFIG_SC1200_WDT=y
CONFIG_PC87413_WDT=y
CONFIG_60XX_WDT=y
CONFIG_SBC8360_WDT=y
CONFIG_CPU5_WDT=y
CONFIG_SMSC37B787_WDT=y
CONFIG_W83627HF_WDT=y
CONFIG_W83697HF_WDT=y
CONFIG_W83697UG_WDT=y
CONFIG_W83877F_WDT=y
CONFIG_W83977F_WDT=y
CONFIG_MACHZ_WDT=y
CONFIG_SBC_EPX_C3_WATCHDOG=y
#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=y
CONFIG_WDTPCI=y
CONFIG_WDT_501_PCI=y
#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=y
CONFIG_SSB_POSSIBLE=y
#
# Sonics Silicon Backplane
#
CONFIG_SSB=y
CONFIG_SSB_SPROM=y
CONFIG_SSB_BLOCKIO=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_B43_PCI_BRIDGE=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
CONFIG_SSB_PCMCIAHOST=y
CONFIG_SSB_SILENT=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
CONFIG_MFD_SM501=y
CONFIG_MFD_SM501_GPIO=y
CONFIG_HTC_PASIC3=y
CONFIG_UCB1400_CORE=y
# CONFIG_MFD_TMIO is not set
CONFIG_PMIC_DA903X=y
CONFIG_MFD_WM8400=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_DEBUG=y
# CONFIG_REGULATOR_FIXED_VOLTAGE is not set
CONFIG_REGULATOR_VIRTUAL_CONSUMER=y
CONFIG_REGULATOR_BQ24022=y
CONFIG_REGULATOR_WM8400=y
CONFIG_REGULATOR_DA903X=y
#
# Multimedia devices
#
#
# Multimedia core support
#
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2_COMMON=y
CONFIG_VIDEO_ALLOW_V4L1=y
CONFIG_VIDEO_V4L1_COMPAT=y
CONFIG_DVB_CORE=y
CONFIG_VIDEO_MEDIA=y
#
# Multimedia drivers
#
CONFIG_VIDEO_SAA7146=y
CONFIG_VIDEO_SAA7146_VV=y
CONFIG_MEDIA_ATTACH=y
CONFIG_MEDIA_TUNER=y
CONFIG_MEDIA_TUNER_CUSTOMIZE=y
CONFIG_MEDIA_TUNER_SIMPLE=y
CONFIG_MEDIA_TUNER_TDA8290=y
CONFIG_MEDIA_TUNER_TDA827X=y
CONFIG_MEDIA_TUNER_TDA18271=y
CONFIG_MEDIA_TUNER_TDA9887=y
CONFIG_MEDIA_TUNER_TEA5761=y
CONFIG_MEDIA_TUNER_TEA5767=y
CONFIG_MEDIA_TUNER_MT20XX=y
CONFIG_MEDIA_TUNER_MT2060=y
CONFIG_MEDIA_TUNER_MT2266=y
CONFIG_MEDIA_TUNER_MT2131=y
CONFIG_MEDIA_TUNER_QT1010=y
CONFIG_MEDIA_TUNER_XC2028=y
CONFIG_MEDIA_TUNER_XC5000=y
CONFIG_MEDIA_TUNER_MXL5005S=y
CONFIG_MEDIA_TUNER_MXL5007T=y
CONFIG_VIDEO_V4L2=y
CONFIG_VIDEO_V4L1=y
CONFIG_VIDEOBUF_GEN=y
CONFIG_VIDEOBUF_DMA_SG=y
CONFIG_VIDEOBUF_VMALLOC=y
CONFIG_VIDEOBUF_DMA_CONTIG=y
CONFIG_VIDEOBUF_DVB=y
CONFIG_VIDEO_BTCX=y
CONFIG_VIDEO_IR=y
CONFIG_VIDEO_TVEEPROM=y
CONFIG_VIDEO_TUNER=y
CONFIG_VIDEO_CAPTURE_DRIVERS=y
CONFIG_VIDEO_ADV_DEBUG=y
CONFIG_VIDEO_FIXED_MINOR_RANGES=y
CONFIG_VIDEO_HELPER_CHIPS_AUTO=y
CONFIG_VIDEO_IR_I2C=y
CONFIG_VIDEO_TVAUDIO=y
CONFIG_VIDEO_TDA7432=y
CONFIG_VIDEO_TDA9840=y
CONFIG_VIDEO_TDA9875=y
CONFIG_VIDEO_TEA6415C=y
CONFIG_VIDEO_TEA6420=y
CONFIG_VIDEO_MSP3400=y
CONFIG_VIDEO_CS5345=y
CONFIG_VIDEO_CS53L32A=y
CONFIG_VIDEO_M52790=y
CONFIG_VIDEO_WM8775=y
CONFIG_VIDEO_WM8739=y
CONFIG_VIDEO_VP27SMPX=y
CONFIG_VIDEO_BT819=y
CONFIG_VIDEO_BT856=y
CONFIG_VIDEO_KS0127=y
CONFIG_VIDEO_OV7670=y
CONFIG_VIDEO_SAA7110=y
CONFIG_VIDEO_SAA7111=y
CONFIG_VIDEO_SAA7114=y
CONFIG_VIDEO_SAA711X=y
CONFIG_VIDEO_SAA717X=y
CONFIG_VIDEO_TVP5150=y
CONFIG_VIDEO_VPX3220=y
CONFIG_VIDEO_CX25840=y
CONFIG_VIDEO_CX2341X=y
CONFIG_VIDEO_SAA7127=y
CONFIG_VIDEO_SAA7185=y
CONFIG_VIDEO_ADV7170=y
CONFIG_VIDEO_ADV7175=y
CONFIG_VIDEO_UPD64031A=y
CONFIG_VIDEO_UPD64083=y
CONFIG_VIDEO_VIVI=y
CONFIG_VIDEO_BT848=y
CONFIG_VIDEO_BT848_DVB=y
CONFIG_VIDEO_SAA6588=y
CONFIG_VIDEO_BWQCAM=y
CONFIG_VIDEO_CQCAM=y
CONFIG_VIDEO_W9966=y
CONFIG_VIDEO_CPIA=y
CONFIG_VIDEO_CPIA_PP=y
CONFIG_VIDEO_CPIA_USB=y
CONFIG_VIDEO_CPIA2=y
CONFIG_VIDEO_SAA5246A=y
CONFIG_VIDEO_SAA5249=y
CONFIG_VIDEO_STRADIS=y
CONFIG_VIDEO_ZORAN=y
CONFIG_VIDEO_ZORAN_DC30=y
CONFIG_VIDEO_ZORAN_ZR36060=y
CONFIG_VIDEO_ZORAN_BUZ=y
CONFIG_VIDEO_ZORAN_DC10=y
CONFIG_VIDEO_ZORAN_LML33=y
CONFIG_VIDEO_ZORAN_LML33R10=y
CONFIG_VIDEO_ZORAN_AVS6EYES=y
CONFIG_VIDEO_MEYE=y
CONFIG_VIDEO_SAA7134=y
CONFIG_VIDEO_SAA7134_ALSA=y
CONFIG_VIDEO_SAA7134_DVB=y
CONFIG_VIDEO_MXB=y
CONFIG_VIDEO_HEXIUM_ORION=y
CONFIG_VIDEO_HEXIUM_GEMINI=y
CONFIG_VIDEO_CX23885=y
CONFIG_VIDEO_AU0828=y
CONFIG_VIDEO_IVTV=y
CONFIG_VIDEO_FB_IVTV=y
CONFIG_VIDEO_CX18=y
CONFIG_VIDEO_CAFE_CCIC=y
CONFIG_SOC_CAMERA=y
CONFIG_SOC_CAMERA_MT9M001=y
CONFIG_MT9M001_PCA9536_SWITCH=y
CONFIG_SOC_CAMERA_MT9M111=y
CONFIG_SOC_CAMERA_MT9V022=y
CONFIG_MT9V022_PCA9536_SWITCH=y
CONFIG_SOC_CAMERA_PLATFORM=y
CONFIG_VIDEO_SH_MOBILE_CEU=y
CONFIG_V4L_USB_DRIVERS=y
CONFIG_USB_VIDEO_CLASS=y
CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y
CONFIG_USB_GSPCA=y
CONFIG_USB_M5602=y
CONFIG_USB_GSPCA_CONEX=y
CONFIG_USB_GSPCA_ETOMS=y
CONFIG_USB_GSPCA_FINEPIX=y
CONFIG_USB_GSPCA_MARS=y
CONFIG_USB_GSPCA_OV519=y
CONFIG_USB_GSPCA_PAC207=y
CONFIG_USB_GSPCA_PAC7311=y
CONFIG_USB_GSPCA_SONIXB=y
CONFIG_USB_GSPCA_SONIXJ=y
CONFIG_USB_GSPCA_SPCA500=y
CONFIG_USB_GSPCA_SPCA501=y
CONFIG_USB_GSPCA_SPCA505=y
CONFIG_USB_GSPCA_SPCA506=y
CONFIG_USB_GSPCA_SPCA508=y
CONFIG_USB_GSPCA_SPCA561=y
CONFIG_USB_GSPCA_STK014=y
CONFIG_USB_GSPCA_SUNPLUS=y
CONFIG_USB_GSPCA_T613=y
CONFIG_USB_GSPCA_TV8532=y
CONFIG_USB_GSPCA_VC032X=y
CONFIG_USB_GSPCA_ZC3XX=y
CONFIG_VIDEO_PVRUSB2=y
CONFIG_VIDEO_PVRUSB2_SYSFS=y
CONFIG_VIDEO_PVRUSB2_DVB=y
CONFIG_VIDEO_PVRUSB2_DEBUGIFC=y
CONFIG_VIDEO_EM28XX=y
CONFIG_VIDEO_EM28XX_ALSA=y
CONFIG_VIDEO_EM28XX_DVB=y
CONFIG_VIDEO_USBVISION=y
CONFIG_VIDEO_USBVIDEO=y
CONFIG_USB_VICAM=y
CONFIG_USB_IBMCAM=y
CONFIG_USB_KONICAWC=y
CONFIG_USB_QUICKCAM_MESSENGER=y
CONFIG_USB_ET61X251=y
CONFIG_VIDEO_OVCAMCHIP=y
CONFIG_USB_W9968CF=y
CONFIG_USB_OV511=y
CONFIG_USB_SE401=y
CONFIG_USB_SN9C102=y
CONFIG_USB_STV680=y
CONFIG_USB_ZC0301=y
CONFIG_USB_PWC=y
CONFIG_USB_PWC_DEBUG=y
CONFIG_USB_ZR364XX=y
CONFIG_USB_STKWEBCAM=y
CONFIG_USB_S2255=y
CONFIG_RADIO_ADAPTERS=y
CONFIG_RADIO_GEMTEK_PCI=y
CONFIG_RADIO_MAXIRADIO=y
CONFIG_RADIO_MAESTRO=y
CONFIG_USB_DSBR=y
CONFIG_USB_SI470X=y
CONFIG_USB_MR800=y
CONFIG_DVB_CAPTURE_DRIVERS=y
#
# Supported SAA7146 based PCI Adapters
#
CONFIG_TTPCI_EEPROM=y
CONFIG_DVB_AV7110=y
CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET_CORE=y
CONFIG_DVB_BUDGET=y
CONFIG_DVB_BUDGET_CI=y
CONFIG_DVB_BUDGET_AV=y
CONFIG_DVB_BUDGET_PATCH=y
#
# Supported USB Adapters
#
CONFIG_DVB_TTUSB_BUDGET=y
CONFIG_DVB_TTUSB_DEC=y
CONFIG_DVB_SIANO_SMS1XXX=y
CONFIG_DVB_SIANO_SMS1XXX_SMS_IDS=y
#
# Supported FlexCopII (B2C2) Adapters
#
CONFIG_DVB_B2C2_FLEXCOP=y
CONFIG_DVB_B2C2_FLEXCOP_PCI=y
CONFIG_DVB_B2C2_FLEXCOP_USB=y
CONFIG_DVB_B2C2_FLEXCOP_DEBUG=y
#
# Supported BT878 Adapters
#
CONFIG_DVB_BT8XX=y
#
# Supported Pluto2 Adapters
#
CONFIG_DVB_PLUTO2=y
#
# Supported SDMC DM1105 Adapters
#
#
# Supported DVB Frontends
#
#
# Customise DVB Frontends
#
CONFIG_DVB_FE_CUSTOMISE=y
#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=y
CONFIG_DVB_CX24123=y
CONFIG_DVB_MT312=y
CONFIG_DVB_S5H1420=y
CONFIG_DVB_STV0288=y
CONFIG_DVB_STB6000=y
CONFIG_DVB_STV0299=y
CONFIG_DVB_TDA8083=y
CONFIG_DVB_TDA10086=y
CONFIG_DVB_VES1X93=y
CONFIG_DVB_TUNER_ITD1000=y
CONFIG_DVB_TDA826X=y
CONFIG_DVB_TUA6100=y
CONFIG_DVB_CX24116=y
CONFIG_DVB_SI21XX=y
#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP8870=y
CONFIG_DVB_SP887X=y
CONFIG_DVB_CX22700=y
CONFIG_DVB_CX22702=y
CONFIG_DVB_DRX397XD=y
CONFIG_DVB_L64781=y
CONFIG_DVB_TDA1004X=y
CONFIG_DVB_NXT6000=y
CONFIG_DVB_MT352=y
CONFIG_DVB_ZL10353=y
CONFIG_DVB_DIB3000MB=y
CONFIG_DVB_DIB3000MC=y
CONFIG_DVB_DIB7000M=y
CONFIG_DVB_DIB7000P=y
CONFIG_DVB_TDA10048=y
#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=y
CONFIG_DVB_TDA10021=y
CONFIG_DVB_TDA10023=y
CONFIG_DVB_STV0297=y
#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=y
CONFIG_DVB_OR51211=y
CONFIG_DVB_OR51132=y
CONFIG_DVB_BCM3510=y
CONFIG_DVB_LGDT330X=y
CONFIG_DVB_S5H1409=y
CONFIG_DVB_AU8522=y
CONFIG_DVB_S5H1411=y
#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=y
CONFIG_DVB_TUNER_DIB0070=y
#
# SEC control devices for DVB-S
#
CONFIG_DVB_LNBP21=y
CONFIG_DVB_ISL6405=y
CONFIG_DVB_ISL6421=y
CONFIG_DVB_LGS8GL5=y
#
# Tools to develop new frontends
#
CONFIG_DVB_DUMMY_FE=y
CONFIG_DVB_AF9013=y
CONFIG_DAB=y
CONFIG_USB_DABUSB=y
#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_DRM=y
CONFIG_DRM_TDFX=y
CONFIG_DRM_R128=y
CONFIG_DRM_RADEON=y
CONFIG_DRM_I810=y
CONFIG_DRM_I830=y
# CONFIG_DRM_I915 is not set
CONFIG_DRM_MGA=y
CONFIG_DRM_SIS=y
CONFIG_DRM_VIA=y
CONFIG_DRM_SAVAGE=y
CONFIG_VGASTATE=y
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_DDC=y
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
CONFIG_FB_FOREIGN_ENDIAN=y
CONFIG_FB_BOTH_ENDIAN=y
# CONFIG_FB_BIG_ENDIAN is not set
# CONFIG_FB_LITTLE_ENDIAN is not set
CONFIG_FB_SYS_FOPS=y
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_HECUBA=y
CONFIG_FB_SVGALIB=y
# CONFIG_FB_MACMODES is not set
CONFIG_FB_BACKLIGHT=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
CONFIG_FB_PM2=y
CONFIG_FB_PM2_FIFO_DISCONNECT=y
CONFIG_FB_CYBER2000=y
CONFIG_FB_ARC=y
# CONFIG_FB_ASILIANT is not set
CONFIG_FB_IMSTT=y
# CONFIG_FB_VGA16 is not set
CONFIG_FB_UVESA=y
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
CONFIG_FB_N411=y
CONFIG_FB_HGA=y
CONFIG_FB_HGA_ACCEL=y
CONFIG_FB_S1D13XXX=y
CONFIG_FB_NVIDIA=y
CONFIG_FB_NVIDIA_I2C=y
CONFIG_FB_NVIDIA_DEBUG=y
CONFIG_FB_NVIDIA_BACKLIGHT=y
CONFIG_FB_RIVA=y
CONFIG_FB_RIVA_I2C=y
CONFIG_FB_RIVA_DEBUG=y
CONFIG_FB_RIVA_BACKLIGHT=y
CONFIG_FB_LE80578=y
CONFIG_FB_CARILLO_RANCH=y
CONFIG_FB_INTEL=y
CONFIG_FB_INTEL_DEBUG=y
CONFIG_FB_INTEL_I2C=y
CONFIG_FB_MATROX=y
CONFIG_FB_MATROX_MILLENIUM=y
CONFIG_FB_MATROX_MYSTIQUE=y
CONFIG_FB_MATROX_G=y
CONFIG_FB_MATROX_I2C=y
CONFIG_FB_MATROX_MAVEN=y
CONFIG_FB_MATROX_MULTIHEAD=y
# CONFIG_FB_RADEON is not set
CONFIG_FB_ATY128=y
CONFIG_FB_ATY128_BACKLIGHT=y
CONFIG_FB_ATY=y
CONFIG_FB_ATY_CT=y
CONFIG_FB_ATY_GENERIC_LCD=y
CONFIG_FB_ATY_GX=y
CONFIG_FB_ATY_BACKLIGHT=y
CONFIG_FB_S3=y
CONFIG_FB_SAVAGE=y
CONFIG_FB_SAVAGE_I2C=y
CONFIG_FB_SAVAGE_ACCEL=y
CONFIG_FB_SIS=y
CONFIG_FB_SIS_300=y
CONFIG_FB_SIS_315=y
CONFIG_FB_VIA=y
CONFIG_FB_NEOMAGIC=y
CONFIG_FB_KYRO=y
CONFIG_FB_3DFX=y
CONFIG_FB_3DFX_ACCEL=y
CONFIG_FB_VOODOO1=y
CONFIG_FB_VT8623=y
CONFIG_FB_TRIDENT=y
CONFIG_FB_TRIDENT_ACCEL=y
CONFIG_FB_ARK=y
CONFIG_FB_PM3=y
CONFIG_FB_CARMINE=y
CONFIG_FB_CARMINE_DRAM_EVAL=y
# CONFIG_CARMINE_DRAM_CUSTOM is not set
CONFIG_FB_GEODE=y
CONFIG_FB_GEODE_LX=y
CONFIG_FB_GEODE_GX=y
CONFIG_FB_GEODE_GX1=y
CONFIG_FB_SM501=y
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=y
CONFIG_FB_METRONOME=y
CONFIG_FB_MB862XX=y
CONFIG_FB_MB862XX_PCI_GDC=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_LTV350QV=y
CONFIG_LCD_ILI9320=y
CONFIG_LCD_TDO24M=y
CONFIG_LCD_VGG2432A4=y
CONFIG_LCD_PLATFORM=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_CORGI=y
CONFIG_BACKLIGHT_PROGEAR=y
CONFIG_BACKLIGHT_CARILLO_RANCH=y
CONFIG_BACKLIGHT_DA903X=y
CONFIG_BACKLIGHT_MBP_NVIDIA=y
CONFIG_BACKLIGHT_SAHARA=y
#
# Display device support
#
CONFIG_DISPLAY_SUPPORT=y
#
# Display hardware drivers
#
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
CONFIG_DUMMY_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE is not set
CONFIG_FONT_8x16=y
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
CONFIG_SOUND=y
CONFIG_SOUND_OSS_CORE=y
CONFIG_SND=y
CONFIG_SND_TIMER=y
CONFIG_SND_PCM=y
CONFIG_SND_HWDEP=y
CONFIG_SND_RAWMIDI=y
CONFIG_SND_SEQUENCER=y
CONFIG_SND_SEQ_DUMMY=y
CONFIG_SND_OSSEMUL=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
CONFIG_SND_PCM_OSS_PLUGINS=y
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_SUPPORT_OLD_API=y
CONFIG_SND_VERBOSE_PROCFS=y
CONFIG_SND_VERBOSE_PRINTK=y
CONFIG_SND_DEBUG=y
CONFIG_SND_DEBUG_VERBOSE=y
CONFIG_SND_PCM_XRUN_DEBUG=y
CONFIG_SND_VMASTER=y
CONFIG_SND_MPU401_UART=y
CONFIG_SND_OPL3_LIB=y
CONFIG_SND_VX_LIB=y
CONFIG_SND_AC97_CODEC=y
CONFIG_SND_DRIVERS=y
CONFIG_SND_PCSP=y
CONFIG_SND_DUMMY=y
CONFIG_SND_VIRMIDI=y
CONFIG_SND_MTPAV=y
CONFIG_SND_MTS64=y
CONFIG_SND_SERIAL_U16550=y
CONFIG_SND_MPU401=y
CONFIG_SND_PORTMAN2X4=y
CONFIG_SND_AC97_POWER_SAVE=y
CONFIG_SND_AC97_POWER_SAVE_DEFAULT=0
CONFIG_SND_SB_COMMON=y
CONFIG_SND_SB16_DSP=y
CONFIG_SND_PCI=y
CONFIG_SND_AD1889=y
CONFIG_SND_ALS300=y
CONFIG_SND_ALS4000=y
CONFIG_SND_ALI5451=y
CONFIG_SND_ATIIXP=y
CONFIG_SND_ATIIXP_MODEM=y
CONFIG_SND_AU8810=y
CONFIG_SND_AU8820=y
CONFIG_SND_AU8830=y
CONFIG_SND_AW2=y
CONFIG_SND_AZT3328=y
CONFIG_SND_BT87X=y
CONFIG_SND_BT87X_OVERCLOCK=y
CONFIG_SND_CA0106=y
CONFIG_SND_CMIPCI=y
CONFIG_SND_OXYGEN_LIB=y
CONFIG_SND_OXYGEN=y
CONFIG_SND_CS4281=y
CONFIG_SND_CS46XX=y
CONFIG_SND_CS46XX_NEW_DSP=y
CONFIG_SND_CS5530=y
CONFIG_SND_DARLA20=y
CONFIG_SND_GINA20=y
CONFIG_SND_LAYLA20=y
CONFIG_SND_DARLA24=y
CONFIG_SND_GINA24=y
CONFIG_SND_LAYLA24=y
CONFIG_SND_MONA=y
CONFIG_SND_MIA=y
CONFIG_SND_ECHO3G=y
CONFIG_SND_INDIGO=y
CONFIG_SND_INDIGOIO=y
CONFIG_SND_INDIGODJ=y
CONFIG_SND_EMU10K1=y
CONFIG_SND_EMU10K1X=y
CONFIG_SND_ENS1370=y
CONFIG_SND_ENS1371=y
CONFIG_SND_ES1938=y
CONFIG_SND_ES1968=y
CONFIG_SND_FM801=y
CONFIG_SND_FM801_TEA575X_BOOL=y
CONFIG_SND_FM801_TEA575X=y
CONFIG_SND_HDA_INTEL=y
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_CODEC_REALTEK=y
CONFIG_SND_HDA_CODEC_ANALOG=y
CONFIG_SND_HDA_CODEC_SIGMATEL=y
CONFIG_SND_HDA_CODEC_VIA=y
CONFIG_SND_HDA_CODEC_ATIHDMI=y
CONFIG_SND_HDA_CODEC_NVHDMI=y
CONFIG_SND_HDA_CODEC_CONEXANT=y
CONFIG_SND_HDA_CODEC_CMEDIA=y
CONFIG_SND_HDA_CODEC_SI3054=y
CONFIG_SND_HDA_GENERIC=y
CONFIG_SND_HDA_POWER_SAVE=y
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
CONFIG_SND_HDSP=y
CONFIG_SND_HDSPM=y
CONFIG_SND_HIFIER=y
CONFIG_SND_ICE1712=y
CONFIG_SND_ICE1724=y
CONFIG_SND_INTEL8X0=y
CONFIG_SND_INTEL8X0M=y
CONFIG_SND_KORG1212=y
CONFIG_SND_MAESTRO3=y
CONFIG_SND_MIXART=y
CONFIG_SND_NM256=y
CONFIG_SND_PCXHR=y
CONFIG_SND_RIPTIDE=y
CONFIG_SND_RME32=y
CONFIG_SND_RME96=y
CONFIG_SND_RME9652=y
CONFIG_SND_SONICVIBES=y
CONFIG_SND_TRIDENT=y
CONFIG_SND_VIA82XX=y
CONFIG_SND_VIA82XX_MODEM=y
CONFIG_SND_VIRTUOSO=y
CONFIG_SND_VX222=y
CONFIG_SND_YMFPCI=y
CONFIG_SND_SPI=y
CONFIG_SND_USB=y
CONFIG_SND_USB_AUDIO=y
CONFIG_SND_USB_USX2Y=y
CONFIG_SND_USB_CAIAQ=y
CONFIG_SND_USB_CAIAQ_INPUT=y
CONFIG_SND_USB_US122L=y
CONFIG_SND_PCMCIA=y
CONFIG_SND_VXPOCKET=y
CONFIG_SND_PDAUDIOCF=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_ALL_CODECS=y
CONFIG_SND_SOC_AD73311=y
CONFIG_SND_SOC_AK4535=y
CONFIG_SND_SOC_CS4270=y
CONFIG_SND_SOC_SSM2602=y
CONFIG_SND_SOC_TLV320AIC23=y
CONFIG_SND_SOC_TLV320AIC26=y
CONFIG_SND_SOC_TLV320AIC3X=y
CONFIG_SND_SOC_UDA1380=y
CONFIG_SND_SOC_WM8510=y
CONFIG_SND_SOC_WM8580=y
CONFIG_SND_SOC_WM8731=y
CONFIG_SND_SOC_WM8750=y
CONFIG_SND_SOC_WM8753=y
CONFIG_SND_SOC_WM8900=y
CONFIG_SND_SOC_WM8903=y
CONFIG_SND_SOC_WM8971=y
CONFIG_SND_SOC_WM8990=y
CONFIG_SOUND_PRIME=y
CONFIG_SOUND_OSS=y
CONFIG_SOUND_TRACEINIT=y
CONFIG_SOUND_DMAP=y
CONFIG_SOUND_SSCAPE=y
CONFIG_SOUND_VMIDI=y
CONFIG_SOUND_TRIX=y
CONFIG_SOUND_MSS=y
CONFIG_SOUND_MPU401=y
CONFIG_SOUND_PAS=y
CONFIG_PAS_JOYSTICK=y
CONFIG_SOUND_PSS=y
CONFIG_PSS_MIXER=y
CONFIG_SOUND_SB=y
CONFIG_SOUND_YM3812=y
CONFIG_SOUND_UART6850=y
CONFIG_SOUND_AEDSP16=y
CONFIG_SC6600=y
CONFIG_SC6600_JOY=y
CONFIG_SC6600_CDROM=4
CONFIG_SC6600_CDROMBASE=0
CONFIG_SOUND_KAHLUA=y
CONFIG_AC97_BUS=y
CONFIG_HID_SUPPORT=y
CONFIG_HID=y
CONFIG_HID_DEBUG=y
CONFIG_HIDRAW=y
#
# USB Input Devices
#
CONFIG_USB_HID=y
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
#
# Special HID drivers
#
CONFIG_HID_COMPAT=y
CONFIG_HID_A4TECH=y
CONFIG_HID_APPLE=y
CONFIG_HID_BELKIN=y
CONFIG_HID_BRIGHT=y
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
CONFIG_HID_CYPRESS=y
CONFIG_HID_DELL=y
CONFIG_HID_EZKEY=y
CONFIG_HID_GYRATION=y
CONFIG_HID_LOGITECH=y
CONFIG_LOGITECH_FF=y
CONFIG_LOGIRUMBLEPAD2_FF=y
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_HID_PANTHERLORD=y
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=y
CONFIG_HID_SAMSUNG=y
CONFIG_HID_SONY=y
CONFIG_HID_SUNPLUS=y
CONFIG_THRUSTMASTER_FF=y
CONFIG_ZEROPLUS_FF=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
CONFIG_USB_ARCH_HAS_EHCI=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
#
# Miscellaneous USB options
#
CONFIG_USB_DEVICEFS=y
CONFIG_USB_DEVICE_CLASS=y
CONFIG_USB_DYNAMIC_MINORS=y
CONFIG_USB_SUSPEND=y
# CONFIG_USB_OTG is not set
CONFIG_USB_OTG_WHITELIST=y
CONFIG_USB_OTG_BLACKLIST_HUB=y
CONFIG_USB_MON=y
CONFIG_USB_WUSB=y
CONFIG_USB_WUSB_CBAF=y
CONFIG_USB_WUSB_CBAF_DEBUG=y
#
# USB Host Controller Drivers
#
CONFIG_USB_C67X00_HCD=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_ISP116X_HCD=y
CONFIG_USB_ISP1760_HCD=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_SSB=y
# CONFIG_USB_OHCI_BIG_ENDIAN_DESC is not set
# CONFIG_USB_OHCI_BIG_ENDIAN_MMIO is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_U132_HCD=y
CONFIG_USB_SL811_HCD=y
CONFIG_USB_SL811_CS=y
CONFIG_USB_R8A66597_HCD=y
CONFIG_USB_HWA_HCD=y
#
# USB Device Class drivers
#
CONFIG_USB_ACM=y
CONFIG_USB_PRINTER=y
CONFIG_USB_WDM=y
CONFIG_USB_TMC=y
#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed;
# see USB_STORAGE Help for more information
#
CONFIG_USB_STORAGE=y
CONFIG_USB_STORAGE_DEBUG=y
CONFIG_USB_STORAGE_DATAFAB=y
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_DPCM=y
CONFIG_USB_STORAGE_USBAT=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
CONFIG_USB_STORAGE_ALAUDA=y
CONFIG_USB_STORAGE_ONETOUCH=y
CONFIG_USB_STORAGE_KARMA=y
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_LIBUSUAL=y
#
# USB Imaging devices
#
CONFIG_USB_MDC800=y
CONFIG_USB_MICROTEK=y
#
# USB port drivers
#
CONFIG_USB_USS720=y
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_CONSOLE=y
CONFIG_USB_EZUSB=y
CONFIG_USB_SERIAL_GENERIC=y
CONFIG_USB_SERIAL_AIRCABLE=y
CONFIG_USB_SERIAL_ARK3116=y
CONFIG_USB_SERIAL_BELKIN=y
CONFIG_USB_SERIAL_CH341=y
CONFIG_USB_SERIAL_WHITEHEAT=y
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=y
CONFIG_USB_SERIAL_CP2101=y
CONFIG_USB_SERIAL_CYPRESS_M8=y
CONFIG_USB_SERIAL_EMPEG=y
CONFIG_USB_SERIAL_FTDI_SIO=y
CONFIG_USB_SERIAL_FUNSOFT=y
CONFIG_USB_SERIAL_VISOR=y
CONFIG_USB_SERIAL_IPAQ=y
CONFIG_USB_SERIAL_IR=y
CONFIG_USB_SERIAL_EDGEPORT=y
CONFIG_USB_SERIAL_EDGEPORT_TI=y
CONFIG_USB_SERIAL_GARMIN=y
CONFIG_USB_SERIAL_IPW=y
CONFIG_USB_SERIAL_IUU=y
CONFIG_USB_SERIAL_KEYSPAN_PDA=y
CONFIG_USB_SERIAL_KEYSPAN=y
CONFIG_USB_SERIAL_KEYSPAN_MPR=y
CONFIG_USB_SERIAL_KEYSPAN_USA28=y
CONFIG_USB_SERIAL_KEYSPAN_USA28X=y
CONFIG_USB_SERIAL_KEYSPAN_USA28XA=y
CONFIG_USB_SERIAL_KEYSPAN_USA28XB=y
CONFIG_USB_SERIAL_KEYSPAN_USA19=y
CONFIG_USB_SERIAL_KEYSPAN_USA18X=y
CONFIG_USB_SERIAL_KEYSPAN_USA19W=y
CONFIG_USB_SERIAL_KEYSPAN_USA19QW=y
CONFIG_USB_SERIAL_KEYSPAN_USA19QI=y
CONFIG_USB_SERIAL_KEYSPAN_USA49W=y
CONFIG_USB_SERIAL_KEYSPAN_USA49WLC=y
CONFIG_USB_SERIAL_KLSI=y
CONFIG_USB_SERIAL_KOBIL_SCT=y
CONFIG_USB_SERIAL_MCT_U232=y
CONFIG_USB_SERIAL_MOS7720=y
CONFIG_USB_SERIAL_MOS7840=y
CONFIG_USB_SERIAL_MOTOROLA=y
CONFIG_USB_SERIAL_NAVMAN=y
CONFIG_USB_SERIAL_PL2303=y
CONFIG_USB_SERIAL_OTI6858=y
CONFIG_USB_SERIAL_SPCP8X5=y
CONFIG_USB_SERIAL_HP4X=y
CONFIG_USB_SERIAL_SAFE=y
CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_SIERRAWIRELESS=y
CONFIG_USB_SERIAL_TI=y
CONFIG_USB_SERIAL_CYBERJACK=y
CONFIG_USB_SERIAL_XIRCOM=y
CONFIG_USB_SERIAL_OPTION=y
CONFIG_USB_SERIAL_OMNINET=y
CONFIG_USB_SERIAL_DEBUG=y
#
# USB Miscellaneous drivers
#
CONFIG_USB_EMI62=y
CONFIG_USB_EMI26=y
CONFIG_USB_ADUTUX=y
CONFIG_USB_SEVSEG=y
CONFIG_USB_RIO500=y
CONFIG_USB_LEGOTOWER=y
CONFIG_USB_LCD=y
CONFIG_USB_BERRY_CHARGE=y
CONFIG_USB_LED=y
CONFIG_USB_CYPRESS_CY7C63=y
CONFIG_USB_CYTHERM=y
CONFIG_USB_PHIDGET=y
CONFIG_USB_PHIDGETKIT=y
CONFIG_USB_PHIDGETMOTORCONTROL=y
CONFIG_USB_PHIDGETSERVO=y
CONFIG_USB_IDMOUSE=y
CONFIG_USB_FTDI_ELAN=y
CONFIG_USB_APPLEDISPLAY=y
CONFIG_USB_SISUSBVGA=y
CONFIG_USB_SISUSBVGA_CON=y
CONFIG_USB_LD=y
CONFIG_USB_TRANCEVIBRATOR=y
CONFIG_USB_IOWARRIOR=y
CONFIG_USB_TEST=y
CONFIG_USB_ISIGHTFW=y
CONFIG_USB_VST=y
CONFIG_USB_ATM=y
CONFIG_USB_SPEEDTOUCH=y
CONFIG_USB_CXACRU=y
CONFIG_USB_UEAGLEATM=y
CONFIG_USB_XUSBATM=y
CONFIG_UWB=y
CONFIG_UWB_HWA=y
CONFIG_UWB_WHCI=y
CONFIG_UWB_WLP=y
CONFIG_UWB_I1480U=y
CONFIG_UWB_I1480U_WLP=y
CONFIG_MMC=y
CONFIG_MMC_DEBUG=y
CONFIG_MMC_UNSAFE_RESUME=y
#
# MMC/SD/SDIO Card Drivers
#
CONFIG_MMC_BLOCK=y
CONFIG_MMC_BLOCK_BOUNCE=y
CONFIG_SDIO_UART=y
CONFIG_MMC_TEST=y
#
# MMC/SD/SDIO Host Controller Drivers
#
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PCI=y
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_WBSD=y
CONFIG_MMC_TIFM_SD=y
CONFIG_MMC_SPI=y
CONFIG_MMC_SDRICOH_CS=y
CONFIG_MEMSTICK=y
CONFIG_MEMSTICK_DEBUG=y
#
# MemoryStick drivers
#
CONFIG_MEMSTICK_UNSAFE_RESUME=y
CONFIG_MSPRO_BLOCK=y
#
# MemoryStick Host Controller Drivers
#
CONFIG_MEMSTICK_TIFM_MS=y
CONFIG_MEMSTICK_JMICRON_38X=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
#
# LED drivers
#
CONFIG_LEDS_PCA9532=y
CONFIG_LEDS_GPIO=y
CONFIG_LEDS_HP_DISK=y
CONFIG_LEDS_CLEVO_MAIL=y
CONFIG_LEDS_PCA955X=y
CONFIG_LEDS_DA903X=y
#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=y
CONFIG_LEDS_TRIGGER_HEARTBEAT=y
CONFIG_LEDS_TRIGGER_BACKLIGHT=y
CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
CONFIG_ACCESSIBILITY=y
CONFIG_A11Y_BRAILLE_CONSOLE=y
CONFIG_INFINIBAND=y
CONFIG_INFINIBAND_USER_MAD=y
CONFIG_INFINIBAND_USER_ACCESS=y
CONFIG_INFINIBAND_USER_MEM=y
CONFIG_INFINIBAND_ADDR_TRANS=y
CONFIG_INFINIBAND_MTHCA=y
CONFIG_INFINIBAND_MTHCA_DEBUG=y
CONFIG_INFINIBAND_IPATH=y
CONFIG_INFINIBAND_AMSO1100=y
CONFIG_INFINIBAND_AMSO1100_DEBUG=y
CONFIG_INFINIBAND_CXGB3=y
CONFIG_INFINIBAND_CXGB3_DEBUG=y
CONFIG_MLX4_INFINIBAND=y
CONFIG_INFINIBAND_NES=y
CONFIG_INFINIBAND_NES_DEBUG=y
CONFIG_INFINIBAND_IPOIB=y
CONFIG_INFINIBAND_IPOIB_CM=y
CONFIG_INFINIBAND_IPOIB_DEBUG=y
CONFIG_INFINIBAND_IPOIB_DEBUG_DATA=y
CONFIG_INFINIBAND_SRP=y
CONFIG_INFINIBAND_ISER=y
CONFIG_EDAC=y
#
# Reporting subsystems
#
CONFIG_EDAC_DEBUG=y
CONFIG_EDAC_MM_EDAC=y
CONFIG_EDAC_E752X=y
CONFIG_EDAC_I82975X=y
CONFIG_EDAC_I3000=y
CONFIG_EDAC_X38=y
CONFIG_EDAC_I5000=y
CONFIG_EDAC_I5100=y
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
CONFIG_RTC_DEBUG=y
#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
CONFIG_RTC_INTF_DEV_UIE_EMUL=y
CONFIG_RTC_DRV_TEST=y
#
# I2C RTC drivers
#
CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_DS1374=y
CONFIG_RTC_DRV_DS1672=y
CONFIG_RTC_DRV_MAX6900=y
CONFIG_RTC_DRV_RS5C372=y
CONFIG_RTC_DRV_ISL1208=y
CONFIG_RTC_DRV_X1205=y
CONFIG_RTC_DRV_PCF8563=y
CONFIG_RTC_DRV_PCF8583=y
CONFIG_RTC_DRV_M41T80=y
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_S35390A=y
CONFIG_RTC_DRV_FM3130=y
CONFIG_RTC_DRV_RX8581=y
#
# SPI RTC drivers
#
CONFIG_RTC_DRV_M41T94=y
CONFIG_RTC_DRV_DS1305=y
CONFIG_RTC_DRV_DS1390=y
CONFIG_RTC_DRV_MAX6902=y
CONFIG_RTC_DRV_R9701=y
CONFIG_RTC_DRV_RS5C348=y
CONFIG_RTC_DRV_DS3234=y
#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=y
CONFIG_RTC_DRV_DS1511=y
CONFIG_RTC_DRV_DS1553=y
CONFIG_RTC_DRV_DS1742=y
CONFIG_RTC_DRV_STK17TA8=y
CONFIG_RTC_DRV_M48T86=y
CONFIG_RTC_DRV_M48T35=y
CONFIG_RTC_DRV_M48T59=y
CONFIG_RTC_DRV_BQ4802=y
CONFIG_RTC_DRV_V3020=y
#
# on-CPU RTC drivers
#
CONFIG_DMADEVICES=y
#
# DMA Devices
#
CONFIG_INTEL_IOATDMA=y
CONFIG_DMA_ENGINE=y
#
# DMA Clients
#
CONFIG_NET_DMA=y
CONFIG_DMATEST=y
CONFIG_DCA=y
CONFIG_AUXDISPLAY=y
CONFIG_KS0108=y
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_CFAG12864B=y
CONFIG_CFAG12864B_RATE=20
CONFIG_UIO=y
CONFIG_UIO_CIF=y
CONFIG_UIO_PDRV=y
CONFIG_UIO_PDRV_GENIRQ=y
CONFIG_UIO_SMX=y
CONFIG_UIO_SERCOS3=y
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SCRUB_PAGES=y
# CONFIG_STAGING is not set
CONFIG_STAGING_EXCLUDE_BUILD=y
#
# Firmware Drivers
#
CONFIG_EDD=y
CONFIG_EDD_OFF=y
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_EFI_VARS=y
CONFIG_DELL_RBU=y
CONFIG_DCDBAS=y
CONFIG_DMIID=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=y
#
# File systems
#
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=y
CONFIG_EXT4DEV_COMPAT=y
CONFIG_EXT4_FS_XATTR=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_FS_XIP=y
CONFIG_JBD=y
CONFIG_JBD_DEBUG=y
CONFIG_JBD2=y
CONFIG_JBD2_DEBUG=y
CONFIG_FS_MBCACHE=y
CONFIG_REISERFS_FS=y
CONFIG_REISERFS_CHECK=y
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=y
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
CONFIG_JFS_DEBUG=y
CONFIG_JFS_STATISTICS=y
CONFIG_FS_POSIX_ACL=y
CONFIG_FILE_LOCKING=y
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
CONFIG_XFS_DEBUG=y
CONFIG_GFS2_FS=y
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=y
CONFIG_OCFS2_FS_O2CB=y
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=y
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
CONFIG_OCFS2_DEBUG_FS=y
CONFIG_OCFS2_COMPAT_JBD=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
CONFIG_QFMT_V1=y
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_AUTOFS_FS=y
CONFIG_AUTOFS4_FS=y
CONFIG_FUSE_FS=y
CONFIG_GENERIC_ACL=y
#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=y
CONFIG_UDF_NLS=y
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
CONFIG_NTFS_DEBUG=y
CONFIG_NTFS_RW=y
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=y
#
# Miscellaneous filesystems
#
CONFIG_ADFS_FS=y
CONFIG_ADFS_FS_RW=y
CONFIG_AFFS_FS=y
CONFIG_ECRYPT_FS=y
CONFIG_HFS_FS=y
CONFIG_HFSPLUS_FS=y
CONFIG_BEFS_FS=y
CONFIG_BEFS_DEBUG=y
CONFIG_BFS_FS=y
CONFIG_EFS_FS=y
CONFIG_CRAMFS=y
CONFIG_VXFS_FS=y
CONFIG_MINIX_FS=y
CONFIG_OMFS_FS=y
CONFIG_HPFS_FS=y
CONFIG_QNX4FS_FS=y
CONFIG_ROMFS_FS=y
CONFIG_SYSV_FS=y
CONFIG_UFS_FS=y
CONFIG_UFS_FS_WRITE=y
CONFIG_UFS_DEBUG=y
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFSD=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_EXPORTFS=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
CONFIG_SUNRPC_XPRT_RDMA=y
CONFIG_SUNRPC_REGISTER_V4=y
CONFIG_RPCSEC_GSS_KRB5=y
CONFIG_RPCSEC_GSS_SPKM3=y
CONFIG_SMB_FS=y
CONFIG_SMB_NLS_DEFAULT=y
CONFIG_SMB_NLS_REMOTE="cp437"
CONFIG_CIFS=y
CONFIG_CIFS_STATS=y
CONFIG_CIFS_STATS2=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG2=y
CONFIG_CIFS_EXPERIMENTAL=y
CONFIG_CIFS_DFS_UPCALL=y
CONFIG_NCP_FS=y
CONFIG_NCPFS_PACKET_SIGNING=y
CONFIG_NCPFS_IOCTL_LOCKING=y
CONFIG_NCPFS_STRONG=y
CONFIG_NCPFS_NFS_NS=y
CONFIG_NCPFS_OS2_NS=y
CONFIG_NCPFS_SMALLDOS=y
CONFIG_NCPFS_NLS=y
CONFIG_NCPFS_EXTRAS=y
CONFIG_CODA_FS=y
CONFIG_AFS_FS=y
CONFIG_AFS_DEBUG=y
#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
CONFIG_ACORN_PARTITION=y
CONFIG_ACORN_PARTITION_CUMANA=y
CONFIG_ACORN_PARTITION_EESOX=y
CONFIG_ACORN_PARTITION_ICS=y
CONFIG_ACORN_PARTITION_ADFS=y
CONFIG_ACORN_PARTITION_POWERTEC=y
CONFIG_ACORN_PARTITION_RISCIX=y
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
CONFIG_ATARI_PARTITION=y
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
CONFIG_SGI_PARTITION=y
CONFIG_ULTRIX_PARTITION=y
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-1"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y
CONFIG_NLS_CODEPAGE_850=y
CONFIG_NLS_CODEPAGE_852=y
CONFIG_NLS_CODEPAGE_855=y
CONFIG_NLS_CODEPAGE_857=y
CONFIG_NLS_CODEPAGE_860=y
CONFIG_NLS_CODEPAGE_861=y
CONFIG_NLS_CODEPAGE_862=y
CONFIG_NLS_CODEPAGE_863=y
CONFIG_NLS_CODEPAGE_864=y
CONFIG_NLS_CODEPAGE_865=y
CONFIG_NLS_CODEPAGE_866=y
CONFIG_NLS_CODEPAGE_869=y
CONFIG_NLS_CODEPAGE_936=y
CONFIG_NLS_CODEPAGE_950=y
CONFIG_NLS_CODEPAGE_932=y
CONFIG_NLS_CODEPAGE_949=y
CONFIG_NLS_CODEPAGE_874=y
CONFIG_NLS_ISO8859_8=y
CONFIG_NLS_CODEPAGE_1250=y
CONFIG_NLS_CODEPAGE_1251=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=y
CONFIG_NLS_ISO8859_3=y
CONFIG_NLS_ISO8859_4=y
CONFIG_NLS_ISO8859_5=y
CONFIG_NLS_ISO8859_6=y
CONFIG_NLS_ISO8859_7=y
CONFIG_NLS_ISO8859_9=y
CONFIG_NLS_ISO8859_13=y
CONFIG_NLS_ISO8859_14=y
CONFIG_NLS_ISO8859_15=y
CONFIG_NLS_KOI8_R=y
CONFIG_NLS_KOI8_U=y
CONFIG_NLS_UTF8=y
CONFIG_DLM=y
CONFIG_DLM_DEBUG=y
#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_ALLOW_WARNINGS=y
CONFIG_ENABLE_WARN_DEPRECATED=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=2048
CONFIG_MAGIC_SYSRQ=y
CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_FS=y
CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_DETECT_SOFTLOCKUP=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_SELFTEST=y
CONFIG_DEBUG_OBJECTS_FREE=y
CONFIG_DEBUG_OBJECTS_TIMERS=y
CONFIG_SLUB_DEBUG_ON=y
CONFIG_SLUB_STATS=y
CONFIG_DEBUG_MEMLEAK=y
CONFIG_DEBUG_MEMLEAK_TEST=y
CONFIG_DEBUG_KEEP_INIT=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_PI_LIST=y
CONFIG_RT_MUTEX_TESTER=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCKDEP=y
CONFIG_LOCK_STAT=y
CONFIG_DEBUG_LOCKDEP=y
CONFIG_TRACE_IRQFLAGS=y
CONFIG_DEBUG_SPINLOCK_SLEEP=y
CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
CONFIG_DEBUG_VM=y
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_WRITECOUNT=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
CONFIG_FRAME_POINTER=y
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_RCU_TORTURE_TEST=y
CONFIG_RCU_TORTURE_TEST_RUNNABLE=y
CONFIG_RCU_CPU_STALL_DETECTOR=y
CONFIG_KPROBES_SANITY_TEST=y
CONFIG_BACKTRACE_SELF_TEST=y
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
CONFIG_LKDTM=y
CONFIG_FAULT_INJECTION=y
CONFIG_FAILSLAB=y
CONFIG_FAIL_PAGE_ALLOC=y
CONFIG_FAIL_MAKE_REQUEST=y
CONFIG_FAIL_IO_TIMEOUT=y
CONFIG_FAULT_INJECTION_DEBUG_FS=y
CONFIG_LATENCYTOP=y
CONFIG_SYSCTL_SYSCALL_CHECK=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_RING_BUFFER=y
CONFIG_TRACING=y
#
# Tracers
#
CONFIG_FUNCTION_TRACER=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_SYSPROF_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_BOOT_TRACER=y
CONFIG_TRACE_BRANCH_PROFILING=y
CONFIG_TRACING_BRANCHES=y
CONFIG_BRANCH_TRACER=y
CONFIG_STACK_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
CONFIG_FIREWIRE_OHCI_REMOTE_DMA=y
CONFIG_BUILD_DOCSRC=y
CONFIG_DYNAMIC_PRINTK_DEBUG=y
CONFIG_SAMPLES=y
CONFIG_SAMPLE_MARKERS=m
CONFIG_SAMPLE_TRACEPOINTS=m
CONFIG_SAMPLE_KOBJECT=y
CONFIG_SAMPLE_KPROBES=m
CONFIG_SAMPLE_KRETPROBES=m
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
CONFIG_KGDB_TESTS=y
# CONFIG_KGDB_TESTS_ON_BOOT is not set
CONFIG_STRICT_DEVMEM=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_DEBUG_PER_CPU_MAPS=y
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_NX_TEST=m
CONFIG_IOMMU_DEBUG=y
CONFIG_IOMMU_LEAK=y
CONFIG_MMIOTRACE=y
CONFIG_MMIOTRACE_TEST=m
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
CONFIG_CPA_DEBUG=y
CONFIG_OPTIMIZE_INLINING=y
#
# Security options
#
CONFIG_KEYS=y
CONFIG_KEYS_DEBUG_PROC_KEYS=y
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_FILE_CAPABILITIES=y
# CONFIG_SECURITY_ROOTPLUG is not set
CONFIG_SECURITY_DEFAULT_MMAP_MIN_ADDR=0
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_ENABLE_SECMARK_DEFAULT is not set
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX=y
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX_VALUE=19
CONFIG_SECURITY_SMACK=y
CONFIG_XOR_BLOCKS=y
CONFIG_ASYNC_CORE=y
CONFIG_ASYNC_MEMCPY=y
CONFIG_ASYNC_XOR=y
CONFIG_CRYPTO=y
#
# Crypto core or helper
#
CONFIG_CRYPTO_FIPS=y
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_TEST=m
#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_SEQIV=y
#
# Block modes
#
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_PCBC=y
CONFIG_CRYPTO_XTS=y
#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=y
#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=y
CONFIG_CRYPTO_MD4=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=y
CONFIG_CRYPTO_RMD128=y
CONFIG_CRYPTO_RMD160=y
CONFIG_CRYPTO_RMD256=y
CONFIG_CRYPTO_RMD320=y
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_TGR192=y
CONFIG_CRYPTO_WP512=y
#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=y
CONFIG_CRYPTO_ANUBIS=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_BLOWFISH=y
CONFIG_CRYPTO_CAMELLIA=y
CONFIG_CRYPTO_CAST5=y
CONFIG_CRYPTO_CAST6=y
CONFIG_CRYPTO_DES=y
CONFIG_CRYPTO_FCRYPT=y
CONFIG_CRYPTO_KHAZAD=y
CONFIG_CRYPTO_SALSA20=y
CONFIG_CRYPTO_SALSA20_X86_64=y
CONFIG_CRYPTO_SEED=y
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_TEA=y
CONFIG_CRYPTO_TWOFISH=y
CONFIG_CRYPTO_TWOFISH_COMMON=y
CONFIG_CRYPTO_TWOFISH_X86_64=y
#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_HIFN_795X=y
CONFIG_CRYPTO_DEV_HIFN_795X_RNG=y
CONFIG_HAVE_KVM=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_KVM_AMD=y
CONFIG_KVM_TRACE=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_FIND_NEXT_BIT=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
CONFIG_CRC7=y
CONFIG_LIBCRC32C=y
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=y
CONFIG_TEXTSEARCH_BM=y
CONFIG_TEXTSEARCH_FSM=y
CONFIG_PLIST=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
^ permalink raw reply related [flat|nested] 37+ messages in thread
* Re: [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak Catalin Marinas
@ 2008-11-20 12:11 ` Ingo Molnar
0 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:11 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> This patch adds a loadable module that deliberately leaks memory. It
> is used for testing various memory leaking scenarios.
please also trigger a kmemleak pass about 60 seconds after bootup and
trigger a warning that gets into the kernel log - and which can be
picked up by automated testing. I'd like to give kmemleak a try on x86
if it has such a debug feature.
Ingo
* Re: [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386 Catalin Marinas
@ 2008-11-20 12:16 ` Ingo Molnar
0 siblings, 0 replies; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:16 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> This patch adds the kmemleak-related entries to the vmlinux.lds.S
> linker script.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> arch/x86/kernel/vmlinux_32.lds.S | 1 +
a small detail: a better changelog would be:
| x86: provide _sdata
|
| Impact: provide generic symbol
|
| _sdata is a common symbol defined by many architectures and made
| available to the core kernel via asm-generic/sections.h.
|
| Add it to x86 too, so that kmemleak can make use of it.
it is more informative this way, and would be in the customary x86
commit style.
Ingo
* Re: [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated)
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
` (11 preceding siblings ...)
2008-11-20 12:10 ` [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Ingo Molnar
@ 2008-11-20 12:22 ` Ingo Molnar
2008-11-20 18:10 ` Catalin Marinas
12 siblings, 1 reply; 37+ messages in thread
From: Ingo Molnar @ 2008-11-20 12:22 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
* Catalin Marinas <catalin.marinas@arm.com> wrote:
> The main changes (for those who remember the original features):
>
> - it now uses a priority search tree to make it easier for looking up
> intervals rather than just fixed values (the initial implementation
> was with radix tree and changed to hash array because of
> kmem_cache_alloc calls in the former)
> - internal memory allocator to avoid recursive calls into
> kmemleak. This is a simple lock-free, per-cpu allocator using
> pages. The number of pages allocated is bounded, though there could
> be (very unlikely) situations on SMP systems where page occupation
> isn't optimal
> - support for all three memory allocators - slab, slob and slub
> - finer-grained locking - there is no global lock held during memory
> scanning
> - more information reported for leaked objects - current task's
> command line and pid, jiffies and the stack trace
these are very nice improvements! In particular the sharp reduction in
false positives and annotations is encouraging.
I'd like to try it in the -tip automated testing setup, provided the
few details i just commented on are solved, and provided that these
things are addressed as well:
> Things still to be done:
>
> - kernel thread to scan and report leaked objects periodically
> (currently done only when reading the /sys/kernel/debug/memleak
> file)
> - run-time and boot-time configuration like task stacks scanning,
> disabling kmemleak, enabling/disabling the automatic scanning
the .config driven automatic "report currently known/suspected leaks
60 seconds after bootup" feature would be nice to have. Should be
fairly easy to add, right? Otherwise i'd have no good way of getting a
leak report out of it, in an automated way.
Ingo
* Re: [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated)
2008-11-20 12:10 ` [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Ingo Molnar
@ 2008-11-20 17:54 ` Catalin Marinas
0 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 17:54 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
On Thu, 2008-11-20 at 13:10 +0100, Ingo Molnar wrote:
> got a couple of build errors and warnings on x86 with the attached
> config:
Thanks. I haven't got any of these on ARM but the headers were probably
included via other headers. I'll give it a try with your .config.
--
Catalin
* Re: [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated)
2008-11-20 12:22 ` Ingo Molnar
@ 2008-11-20 18:10 ` Catalin Marinas
0 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-20 18:10 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel
On Thu, 2008-11-20 at 13:22 +0100, Ingo Molnar wrote:
> * Catalin Marinas <catalin.marinas@arm.com> wrote:
> I'd like to try it in the -tip automated testing setup, provided the
> few details i just commented on are solved, and provided that these
> things are addressed as well:
>
> > Things still to be done:
> >
> > - kernel thread to scan and report leaked objects periodically
> > (currently done only when reading the /sys/kernel/debug/memleak
> > file)
> > - run-time and boot-time configuration like task stacks scanning,
> > disabling kmemleak, enabling/disabling the automatic scanning
>
> the .config driven automatic "report currently known/suspected leaks
> 60 seconds after bootup" feature would be nice to have. Should be
> fairly easy to add, right? Otherwise i'd have no good way of getting a
> leak report out of it, in an automated way.
This can easily be done and it will be part of the automatic scanning
mentioned above (after the first scan, I think a scan every 10 min would
be enough). The only issue is that currently a leak is reported only
after it has been found a certain number of times (the 2nd time by
default) to avoid transient reports where, for example, a pointer was
held only in registers. A better approach might be to check the
allocation jiffies and ignore very recently allocated objects.
As for reporting, I think it should print a diff with the previous scan
otherwise you may end up with duplicated information in the log.
I'll look at implementing this over the following days and re-post.
Thanks for the other comments as well.
--
Catalin
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks Catalin Marinas
2008-11-20 12:00 ` Ingo Molnar
@ 2008-11-20 19:30 ` Pekka Enberg
2008-11-21 11:07 ` Catalin Marinas
1 sibling, 1 reply; 37+ messages in thread
From: Pekka Enberg @ 2008-11-20 19:30 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
Hi Catalin,
On Thu, Nov 20, 2008 at 1:30 PM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> This patch adds the callbacks to memleak_(alloc|free) functions from
> kmalloc/kfree, kmem_cache_(alloc|free), vmalloc/vfree etc.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[snip]
> @@ -2610,6 +2611,9 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
> /* Slab management obj is off-slab. */
> slabp = kmem_cache_alloc_node(cachep->slabp_cache,
> local_flags & ~GFP_THISNODE, nodeid);
> + /* only scan the list member to avoid false negatives */
> + memleak_scan_area(slabp, offsetof(struct slab, list),
> + sizeof(struct list_head));
I find this comment somewhat confusing. Does it mean we _must_ scan
the list members to avoid false negatives (i.e. leaks that happened
but were not reported) or that if we scan the whole of struct slab, we
get false negatives?
> if (!slabp)
> return NULL;
> } else {
Other than that, the SLAB, SLUB, and SLOB hooks look good to me. You
might want to split up the patch a bit and CC Matt for the SLOB and
Christoph for the SLUB hooks and me for all of the three.
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
2008-11-20 11:58 ` Ingo Molnar
@ 2008-11-20 19:35 ` Pekka Enberg
2008-11-21 12:07 ` Catalin Marinas
2008-12-03 18:12 ` Paul E. McKenney
2 siblings, 1 reply; 37+ messages in thread
From: Pekka Enberg @ 2008-11-20 19:35 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
Hi Catalin,
On Thu, Nov 20, 2008 at 1:30 PM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> +#ifdef CONFIG_SMP
> +#define cache_line_align(x) L1_CACHE_ALIGN(x)
> +#else
> +#define cache_line_align(x) (x)
> +#endif
Maybe this should be put in <linux/cache.h> and called cache_line_align_in_smp()?
> +/*
> + * Object allocation
> + */
> +static void *fast_cache_alloc(struct fast_cache *cache)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> + struct list_head *entry;
> + struct fast_cache_page *page;
> +
> + local_irq_save(flags);
> +
> + if (list_empty(&cache->free_list[cpu]))
> + __fast_cache_grow(cache, cpu);
> +
> + entry = cache->free_list[cpu].next;
> + page = entry_to_page(entry);
> + list_del(entry);
> + page->free_nr[cpu]--;
> + BUG_ON(page->free_nr[cpu] < 0);
> + fast_cache_dec_free(cache, cpu);
> +
> + local_irq_restore(flags);
> + put_cpu_no_resched();
> +
> + return (void *)(entry + 1);
> +}
The slab allocators are pretty fast as well. Is there a reason you
can't use kmalloc() or kmem_cache_alloc() for this? You can fix the
recursion problem by adding a new GFP_NOLEAKTRACK flag that makes sure
memleak hooks are not invoked if it's set.
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-20 19:30 ` Pekka Enberg
@ 2008-11-21 11:07 ` Catalin Marinas
2008-11-24 8:19 ` Pekka Enberg
0 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-21 11:07 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
Hi Pekka,
On Thu, 2008-11-20 at 21:30 +0200, Pekka Enberg wrote:
> > @@ -2610,6 +2611,9 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
> > /* Slab management obj is off-slab. */
> > slabp = kmem_cache_alloc_node(cachep->slabp_cache,
> > local_flags & ~GFP_THISNODE, nodeid);
> > + /* only scan the list member to avoid false negatives */
> > + memleak_scan_area(slabp, offsetof(struct slab, list),
> > + sizeof(struct list_head));
>
> I find this comment somewhat confusing. Does it mean we _must_ scan
> the list members to avoid false negatives (i.e. leaks that happened
> but were not reported) or that if we scan the whole of struct slab, we
> get false negatives?
It's been some time since I first added this and I may not remember the
full details, but it's the latter case - it should avoid scanning
slabp->s_mem because my understanding is that it may contain a pointer
to an allocated block. Kmemleak only allows specifying which sections to
scan, so in this case only the list_head is relevant.
Let me know if my understanding is correct and I'll make the comment
more clear.
Thanks.
--
Catalin
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-20 19:35 ` Pekka Enberg
@ 2008-11-21 12:07 ` Catalin Marinas
2008-11-24 8:16 ` Pekka Enberg
0 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-21 12:07 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-kernel
On Thu, 2008-11-20 at 21:35 +0200, Pekka Enberg wrote:
> On Thu, Nov 20, 2008 at 1:30 PM, Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > +#ifdef CONFIG_SMP
> > +#define cache_line_align(x) L1_CACHE_ALIGN(x)
> > +#else
> > +#define cache_line_align(x) (x)
> > +#endif
>
> Maybe we should be put to <linux/cache.h> and call it cache_line_align_in_smp()?
Yes, it makes sense.
> > +/*
> > + * Object allocation
> > + */
> > +static void *fast_cache_alloc(struct fast_cache *cache)
[...]
> The slab allocators are pretty fast as well. Is there a reason you
> can't use kmalloc() or kmem_cache_alloc() for this?
The reason for the internal allocator wasn't speed; I would be happy to
use the main one. The past kmemleak versions used the slab allocator and
I was getting lockdep reports about the l3->list_lock via the
cache_grow() and memleak_alloc() functions, IIRC. At that time I also
had another lock held during radix_tree_insert (this function calling
kmem_cache_alloc), so some of these problems might have gone away now
with a finer-grained locking and some RCU usage (actually no locks
should be held when calling the alloc functions from kmemleak).
It seems that the flags are propagated inside the slab allocator so it
is also possible to miss some slab-internal allocations we would like to
track (like slabmgmt objects in a list) or get too many recursive calls
via memleak_alloc().
> You can fix the
> recursion problem by adding a new GFP_NOLEAKTRACK flag that makes sure
> memleak hooks are not invoked if it's set.
But this flag doesn't get passed to kfree. An option would be to use a
kmem_cache and a SLAB_NOLEAKTRACE bit so that it can be checked via
kmem_cache_free().
If you don't think the above issues are real problems, I'm happy to give
it a try.
--
Catalin
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-21 12:07 ` Catalin Marinas
@ 2008-11-24 8:16 ` Pekka Enberg
2008-11-24 8:19 ` Pekka Enberg
0 siblings, 1 reply; 37+ messages in thread
From: Pekka Enberg @ 2008-11-24 8:16 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel, cl
Hi Catalin,
On Fri, 2008-11-21 at 12:07 +0000, Catalin Marinas wrote:
> > > +/*
> > > + * Object allocation
> > > + */
> > > +static void *fast_cache_alloc(struct fast_cache *cache)
> [...]
> > The slab allocators are pretty fast as well. Is there a reason you
> > can't use kmalloc() or kmem_cache_alloc() for this?
>
> The reason for the internal allocator wasn't speed, I would be happy to
> use the main one. The past kmemleak versions used the slab allocator and
> I was getting lockdep reports about the l3->list_lock via the
> cache_grow() and memleak_alloc() functions, IIRC. At that time I also
> had another lock held during radix_tree_insert (this function calling
> kmem_cache_alloc), so some of these problems might have gone away now
> with a finer-grained locking and some RCU usage (actually no locks
> should be held when calling the alloc functions from kmemleak).
OK, but I don't really see any fundamental reason why we couldn't do
this. I mean, from my point of view, it would be better to add
specialized hooks (i.e. separate allocation paths for the kmemleak hooks
that avoid any issues) inside the SLAB allocators rather than invent a
separate allocator. As an example,
On Fri, 2008-11-21 at 12:07 +0000, Catalin Marinas wrote:
> It seems that the flags are propagated inside the slab allocator so it
> is also possible to miss some slab-internal allocations we would like to
> track (like slabmgmt objects in a list) or get too many recursive calls
> via memleak_alloc().
I'm not sure I understand this. Which flags are you talking about? I do
see you might run into locking trouble with calling kmalloc() within
kmalloc() but most of that should go away if kmemleak hooks use a
separate cache, no?
On Fri, 2008-11-21 at 12:07 +0000, Catalin Marinas wrote:
> > You can fix the
> > recursion problem by adding a new GFP_NOLEAKTRACK flag that makes sure
> > memleak hooks are not invoked if it's set.
>
> But this flag doesn't get passed to kfree. An option would be to use a
> kmem_cache and a SLAB_NOLEAKTRACE bit so that it can be checked via
> kmem_cache_free().
Right, of course. And I guess that's better for kmemleak anyway, as it
has fixed size objects.
However, just to drive my point home, another option would be to reuse
the non-tracing kmalloc functions we did for kmemtrace which needs to
tackle the recursion problem as well:
http://git.kernel.org/?p=linux/kernel/git/penberg/slab-2.6.git;a=shortlog;h=topic/kmemtrace
On Fri, 2008-11-21 at 12:07 +0000, Catalin Marinas wrote:
> If you don't think the above issues are real problems, I'm happy to give
> it a try.
Yes, please. If you run into problems, let me know if I can help out.
Pekka
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-21 11:07 ` Catalin Marinas
@ 2008-11-24 8:19 ` Pekka Enberg
2008-11-24 10:18 ` Catalin Marinas
0 siblings, 1 reply; 37+ messages in thread
From: Pekka Enberg @ 2008-11-24 8:19 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
Hi Catalin,
On Fri, 2008-11-21 at 11:07 +0000, Catalin Marinas wrote:
> Hi Pekka,
>
> On Thu, 2008-11-20 at 21:30 +0200, Pekka Enberg wrote:
> > > @@ -2610,6 +2611,9 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
> > > /* Slab management obj is off-slab. */
> > > slabp = kmem_cache_alloc_node(cachep->slabp_cache,
> > > local_flags & ~GFP_THISNODE, nodeid);
> > > + /* only scan the list member to avoid false negatives */
> > > + memleak_scan_area(slabp, offsetof(struct slab, list),
> > > + sizeof(struct list_head));
> >
> > I find this comment somewhat confusing. Does it mean we _must_ scan
> > the list members to avoid false negatives (i.e. leaks that happened
> > but were not reported) or that if we scan the whole of struct slab, we
> > get false negatives?
>
> It's been some time since I first added this and I may not remember the
> full details but it's the latter case - it should avoid scanning
> slabp->s_mem because (my understanding) is that it may contain a pointer
> to an allocated block. Kmemleak only allows adding what sections to
> scan, so in this case only the list_head is relevant.
>
> Let me know if my understanding is correct and I'll make the comment
> more clear.
Well, slab->s_mem simply points to the slab (i.e. page) itself. So I
suppose we need to ignore ->s_mem to avoid scanning the same slab twice?
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-24 8:16 ` Pekka Enberg
@ 2008-11-24 8:19 ` Pekka Enberg
0 siblings, 0 replies; 37+ messages in thread
From: Pekka Enberg @ 2008-11-24 8:19 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel, cl
On Mon, Nov 24, 2008 at 10:16 AM, Pekka Enberg <penberg@cs.helsinki.fi> wrote:
> OK, but I don't really see any fundamental reason why we couldn't do
> this. I mean, from my point of view, it would be better to add
> specialized hooks (i.e. separate allocation paths for the kmemleak hooks
> that avoid any issues) inside the SLAB allocators rather than invent a
> separate allocator. As an example,
Oops, I don't know where that "as an example" part came from so just
ignore it. :-)
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-24 8:19 ` Pekka Enberg
@ 2008-11-24 10:18 ` Catalin Marinas
2008-11-24 10:35 ` Pekka Enberg
0 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-11-24 10:18 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
Pekka,
On Mon, 2008-11-24 at 10:19 +0200, Pekka Enberg wrote:
> On Fri, 2008-11-21 at 11:07 +0000, Catalin Marinas wrote:
> > On Thu, 2008-11-20 at 21:30 +0200, Pekka Enberg wrote:
> > > > @@ -2610,6 +2611,9 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
> > > > /* Slab management obj is off-slab. */
> > > > slabp = kmem_cache_alloc_node(cachep->slabp_cache,
> > > > local_flags & ~GFP_THISNODE, nodeid);
> > > > + /* only scan the list member to avoid false negatives */
> > > > + memleak_scan_area(slabp, offsetof(struct slab, list),
> > > > + sizeof(struct list_head));
> > >
> > > I find this comment somewhat confusing. Does it mean we _must_ scan
> > > the list members to avoid false negatives (i.e. leaks that happened
> > > but were not reported) or that if we scan the whole of struct slab, we
> > > get false negatives?
> >
> > It's been some time since I first added this and I may not remember the
> > full details but it's the latter case - it should avoid scanning
> > slabp->s_mem because (my understanding) is that it may contain a pointer
> > to an allocated block. Kmemleak only allows adding what sections to
> > scan, so in this case only the list_head is relevant.
> >
> > Let me know if my understanding is correct and I'll make the comment
> > more clear.
>
> Well, slab->s_mem simply points to the slab (i.e. page) itself. So I
> suppose we need to ignore ->s_mem to avoid scanning the same slab twice?
Kmemleak never scans a block twice (well, within a single scanning
session); it just increases a reference count for the objects referred
to via the scanned object. Since slab structures are allocated via
kmem_cache_alloc_node, they'll be tracked by kmemleak and scanned
(otherwise they would be memory leaks).
My understanding is that the ->s_mem value points to the slab itself but
the same pointer might actually be the beginning of an allocated memory
block, hence we get at least one reference to this block.
--
Catalin
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-24 10:18 ` Catalin Marinas
@ 2008-11-24 10:35 ` Pekka Enberg
2008-11-24 10:43 ` Catalin Marinas
0 siblings, 1 reply; 37+ messages in thread
From: Pekka Enberg @ 2008-11-24 10:35 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
Hi Catalin,
On Mon, 2008-11-24 at 10:18 +0000, Catalin Marinas wrote:
> My understanding is that the ->s_mem value points to the slab itself but
> the same pointer might actually be the beginning of an allocated memory
> block, hence we get at least one reference to this block.
Yes, ->s_mem points to the first object in the slab if CONFIG_DEBUG_SLAB
is disabled.
So if I understood this right, in case the first object in the slab is
leaked (it's allocated but no one references to it), we want to make
sure kmemleak doesn't see the ->s_mem link which would cause a false
negative (i.e. a leak that is not reported).
Did I get it correct this time?
Pekka
* Re: [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks
2008-11-24 10:35 ` Pekka Enberg
@ 2008-11-24 10:43 ` Catalin Marinas
0 siblings, 0 replies; 37+ messages in thread
From: Catalin Marinas @ 2008-11-24 10:43 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-kernel, Matt Mackall, Christoph Lameter
On Mon, 2008-11-24 at 12:35 +0200, Pekka Enberg wrote:
> On Mon, 2008-11-24 at 10:18 +0000, Catalin Marinas wrote:
> > My understanding is that the ->s_mem value points to the slab itself but
> > the same pointer might actually be the beginning of an allocated memory
> > block, hence we get at least one reference to this block.
>
> Yes, ->s_mem points to the first object in the slab if CONFIG_DEBUG_SLAB
> is disabled.
>
> So if I understood this right, in case the first object in the slab is
> leaked (it's allocated but no one references to it), we want to make
> sure kmemleak doesn't see the ->s_mem link which would cause a false
> negative (i.e. a leak that is not reported).
>
> Did I get it correct this time?
Yes.
--
Catalin
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
2008-11-20 11:58 ` Ingo Molnar
2008-11-20 19:35 ` Pekka Enberg
@ 2008-12-03 18:12 ` Paul E. McKenney
2008-12-04 12:14 ` Catalin Marinas
2 siblings, 1 reply; 37+ messages in thread
From: Paul E. McKenney @ 2008-12-03 18:12 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
On Thu, Nov 20, 2008 at 11:30:34AM +0000, Catalin Marinas wrote:
> This patch adds the base support for the kernel memory leak
> detector. It traces the memory allocation/freeing in a way similar to
> the Boehm's conservative garbage collector, the difference being that
> the unreferenced objects are not freed but only shown in
> /sys/kernel/debug/memleak. Enabling this feature introduces an
> overhead to memory allocations.
Hello, Catalin,
I have some concerns about your locking/RCU design, please see
interspersed questions and comments.
Thanx, Paul
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
> include/linux/memleak.h | 60 +++
> init/main.c | 4
> mm/memleak.c | 1012 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 1075 insertions(+), 1 deletions(-)
> create mode 100644 include/linux/memleak.h
> create mode 100644 mm/memleak.c
>
> diff --git a/include/linux/memleak.h b/include/linux/memleak.h
> new file mode 100644
> index 0000000..29b3ecb
> --- /dev/null
> +++ b/include/linux/memleak.h
> @@ -0,0 +1,60 @@
> +/*
> + * include/linux/memleak.h
> + *
> + * Copyright (C) 2008 ARM Limited
> + * Written by Catalin Marinas <catalin.marinas@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> + */
> +
> +#ifndef __MEMLEAK_H
> +#define __MEMLEAK_H
> +
> +#ifdef CONFIG_DEBUG_MEMLEAK
> +
> +extern void memleak_init(void);
> +extern void memleak_alloc(const void *ptr, size_t size, int ref_count);
> +extern void memleak_free(const void *ptr);
> +extern void memleak_padding(const void *ptr, unsigned long offset, size_t size);
> +extern void memleak_not_leak(const void *ptr);
> +extern void memleak_ignore(const void *ptr);
> +extern void memleak_scan_area(const void *ptr, unsigned long offset, size_t length);
> +
> +static inline void memleak_erase(void **ptr)
> +{
> + *ptr = NULL;
> +}
> +
> +#else
> +
> +#define DECLARE_MEMLEAK_OFFSET(name, type, member)
> +
> +static inline void memleak_init(void)
> +{ }
> +static inline void memleak_alloc(const void *ptr, size_t size, int ref_count)
> +{ }
> +static inline void memleak_free(const void *ptr)
> +{ }
> +static inline void memleak_not_leak(const void *ptr)
> +{ }
> +static inline void memleak_ignore(const void *ptr)
> +{ }
> +static inline void memleak_scan_area(const void *ptr, unsigned long offset, size_t length)
> +{ }
> +static inline void memleak_erase(void **ptr)
> +{ }
> +
> +#endif /* CONFIG_DEBUG_MEMLEAK */
> +
> +#endif /* __MEMLEAK_H */
> diff --git a/init/main.c b/init/main.c
> index 7e117a2..e7f4d8c 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -63,6 +63,7 @@
> #include <linux/signal.h>
> #include <linux/idr.h>
> #include <linux/ftrace.h>
> +#include <linux/memleak.h>
>
> #include <asm/io.h>
> #include <asm/bugs.h>
> @@ -652,6 +653,8 @@ asmlinkage void __init start_kernel(void)
> mem_init();
> enable_debug_pagealloc();
> cpu_hotplug_init();
> + prio_tree_init();
> + memleak_init();
> kmem_cache_init();
> debug_objects_mem_init();
> idr_init_cache();
> @@ -662,7 +665,6 @@ asmlinkage void __init start_kernel(void)
> calibrate_delay();
> pidmap_init();
> pgtable_cache_init();
> - prio_tree_init();
> anon_vma_init();
> #ifdef CONFIG_X86
> if (efi_enabled)
> diff --git a/mm/memleak.c b/mm/memleak.c
> new file mode 100644
> index 0000000..8fb5260
> --- /dev/null
> +++ b/mm/memleak.c
> @@ -0,0 +1,1012 @@
> +/*
> + * mm/memleak.c
> + *
> + * Copyright (C) 2008 ARM Limited
> + * Written by Catalin Marinas <catalin.marinas@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> + */
> +
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/sched.h>
> +#include <linux/jiffies.h>
> +#include <linux/module.h>
> +#include <linux/prio_tree.h>
> +#include <linux/gfp.h>
> +#include <linux/kallsyms.h>
> +#include <linux/debugfs.h>
> +#include <linux/seq_file.h>
> +#include <linux/cpumask.h>
> +#include <linux/spinlock.h>
> +#include <linux/rcupdate.h>
> +#include <linux/stacktrace.h>
> +#include <linux/cache.h>
> +#include <linux/percpu.h>
> +#include <linux/lockdep.h>
> +
> +#include <asm/sections.h>
> +#include <asm/processor.h>
> +#include <asm/thread_info.h>
> +#include <asm/atomic.h>
> +
> +#include <linux/memleak.h>
> +
> +/*
> + * kmemleak configuration and common defines
> + */
> +#define MAX_TRACE 16 /* stack trace length */
> +#define REPORT_THLD 1 /* unreferenced reporting threshold */
> +#define REPORTS_NR 100 /* maximum number of reported leaks */
> +#undef SCAN_TASK_STACKS /* scan the task kernel stacks */
> +#undef REPORT_ORPHAN_FREEING /* notify when freeing orphan objects */
> +#undef FAST_CACHE_STATS /* fast_cache statistics */
> +
> +#define BYTES_PER_WORD sizeof(void *)
> +#define MSECS_SCAN_YIELD 100
> +
> +/*
> + * Simple lock-free memory allocator using pages. Objects are
> + * allocated on a CPU but can be freed on a different one
> + */
> +struct fast_cache {
> + struct list_head free_list[NR_CPUS];
> + size_t obj_size;
> + int objs_per_page;
> +#ifdef FAST_CACHE_STATS
> + int pages_nr[NR_CPUS];
> + int free_nr[NR_CPUS];
> +#endif
> +};
> +
> +/*
> + * Cache page information
> + */
> +struct fast_cache_page {
> + int free_nr[NR_CPUS];
> + char data[0] ____cacheline_aligned_in_smp;
> +};
> +
> +#define entry_to_page(entry) \
> + ((struct fast_cache_page *)((unsigned long)(entry) & ~(PAGE_SIZE - 1)))
> +
> +#ifdef CONFIG_SMP
> +#define cache_line_align(x) L1_CACHE_ALIGN(x)
> +#else
> +#define cache_line_align(x) (x)
> +#endif
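[Editorial note: the entry_to_page() masking trick above is worth seeing in isolation. Since every cache page comes from __get_free_page() and is therefore page-aligned, the page header can be recovered from any interior pointer just by clearing the low bits. A userspace sketch of the same idea; the 4096-byte PAGE_SIZE is an assumption for illustration:]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hard-coded for illustration; the kernel gets this from the
 * architecture headers. */
#define PAGE_SIZE 4096UL

/* Recover the start of the enclosing page-aligned block from any
 * pointer inside it, mirroring kmemleak's entry_to_page() macro. */
static void *entry_to_page(void *entry)
{
	return (void *)((uintptr_t)entry & ~((uintptr_t)PAGE_SIZE - 1));
}
```

[This only works because the allocation is both page-sized and page-aligned; an object straddling a page boundary would defeat the mask.]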
> +
> +#ifdef FAST_CACHE_STATS
> +#define fast_cache_inc_pages(cache, cpu) ((cache)->pages_nr[cpu]++)
> +#define fast_cache_dec_pages(cache, cpu) ((cache)->pages_nr[cpu]--)
> +#define fast_cache_inc_free(cache, cpu) ((cache)->free_nr[cpu]++)
> +#define fast_cache_dec_free(cache, cpu) ((cache)->free_nr[cpu]--)
> +static void fast_cache_dump_stats(struct fast_cache *cache, const char *name)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> +
> + local_irq_save(flags);
> + pr_info("kmemleak: %s statistics\n", name);
> + pr_info(" obj_size: %zu\n", cache->obj_size);
> + pr_info(" objs_per_page: %d\n", cache->objs_per_page);
> + for_each_online_cpu(cpu) {
> + pr_info(" CPU: %d\n", cpu);
> + pr_info(" pages_nr: %d\n", cache->pages_nr[cpu]);
> + pr_info(" free_nr: %d\n", cache->free_nr[cpu]);
> + }
> + local_irq_restore(flags);
> + put_cpu();
> +}
> +#else
> +#define fast_cache_inc_pages(cache, cpu)
> +#define fast_cache_dec_pages(cache, cpu)
> +#define fast_cache_inc_free(cache, cpu)
> +#define fast_cache_dec_free(cache, cpu)
> +static inline void fast_cache_dump_stats(struct fast_cache *cache, const char *name)
> +{ }
> +#endif
> +
> +/*
> + * Initialize the cache. This function must be called before any
> + * tracked memory allocations take place
> + */
> +static void fast_cache_init(struct fast_cache *cache, size_t size)
> +{
> + unsigned int cpu;
> +
> + for_each_possible_cpu(cpu) {
> + INIT_LIST_HEAD(&cache->free_list[cpu]);
> +#ifdef FAST_CACHE_STATS
> + cache->pages_nr[cpu] = 0;
> + cache->free_nr[cpu] = 0;
> +#endif
> + }
> + cache->obj_size = cache_line_align(sizeof(struct list_head) + size);
> + cache->objs_per_page = (PAGE_SIZE - sizeof(struct fast_cache_page)) /
> + cache->obj_size;
> +}
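[Editorial note: the layout arithmetic in fast_cache_init() can be sketched on its own. Each object is a cache-line-aligned (list_head + payload) carved out of whatever is left of a page after the struct fast_cache_page header. The PAGE_SIZE and L1_CACHE_BYTES values and the two-pointer list_head size below are assumptions, not taken from any particular architecture:]

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE	4096
#define L1_CACHE_BYTES	64

/* Round up to the next cache-line boundary, as L1_CACHE_ALIGN() does. */
static size_t cache_line_align(size_t x)
{
	return (x + L1_CACHE_BYTES - 1) & ~(size_t)(L1_CACHE_BYTES - 1);
}

/* How many objects of a given payload size fit in one cache page,
 * assuming a list_head of two pointers prepended to each object. */
static size_t objs_per_page(size_t header_size, size_t payload_size)
{
	size_t obj_size = cache_line_align(2 * sizeof(void *) + payload_size);

	return (PAGE_SIZE - header_size) / obj_size;
}
```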
> +
> +/*
> + * Expand the free list for the current CPU. Called with interrupts
> + * and preemption disabled
> + */
> +static inline void __fast_cache_grow(struct fast_cache *cache, unsigned int cpu)
> +{
> + struct fast_cache_page *page;
> + void *pos, *last;
> + unsigned int c;
> +
> + page = (struct fast_cache_page *)__get_free_page(GFP_ATOMIC);
> + if (!page)
> + panic("kmemleak: cannot allocate page for fast_cache\n");
> + fast_cache_inc_pages(cache, cpu);
> +
> + for_each_possible_cpu(c)
> + page->free_nr[c] = 0;
> + page->free_nr[cpu] = cache->objs_per_page;
> +
> + last = (void *)page + PAGE_SIZE - cache->obj_size;
> + for (pos = page->data; pos <= last; pos += cache->obj_size) {
> + struct list_head *entry = pos;
> + list_add_tail(entry, &cache->free_list[cpu]);
> + fast_cache_inc_free(cache, cpu);
> + }
> +}
> +
> +/*
> + * Shrink the free list for the current CPU and free the specified
> + * page. Called with interrupts and preemption disabled
> + */
> +static inline void __fast_cache_shrink(struct fast_cache *cache, unsigned int cpu,
> + struct fast_cache_page *page)
> +{
> + void *pos;
> + void *last = (void *)page + PAGE_SIZE - cache->obj_size;
> +
> + for (pos = page->data; pos <= last; pos += cache->obj_size) {
> + struct list_head *entry = pos;
> + /* entry present only in cache->free_list[cpu] */
> + list_del(entry);
> + fast_cache_dec_free(cache, cpu);
> + }
> +
> + free_page((unsigned long)page);
> + fast_cache_dec_pages(cache, cpu);
> +}
> +
> +/*
> + * Object allocation
> + */
> +static void *fast_cache_alloc(struct fast_cache *cache)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> + struct list_head *entry;
> + struct fast_cache_page *page;
> +
> + local_irq_save(flags);
> +
> + if (list_empty(&cache->free_list[cpu]))
> + __fast_cache_grow(cache, cpu);
> +
> + entry = cache->free_list[cpu].next;
> + page = entry_to_page(entry);
> + list_del(entry);
> + page->free_nr[cpu]--;
> + BUG_ON(page->free_nr[cpu] < 0);
> + fast_cache_dec_free(cache, cpu);
> +
> + local_irq_restore(flags);
> + put_cpu_no_resched();
> +
> + return (void *)(entry + 1);
> +}
> +
> +/*
> + * Object freeing
> + */
> +static void fast_cache_free(struct fast_cache *cache, void *obj)
> +{
> + unsigned int cpu = get_cpu();
> + unsigned long flags;
> + struct list_head *entry = (struct list_head *)obj - 1;
> + struct fast_cache_page *page = entry_to_page(entry);
> +
> + local_irq_save(flags);
> +
> + list_add(entry, &cache->free_list[cpu]);
> + page->free_nr[cpu]++;
> + BUG_ON(page->free_nr[cpu] > cache->objs_per_page);
> + fast_cache_inc_free(cache, cpu);
> +
> + if (page->free_nr[cpu] == cache->objs_per_page)
> + __fast_cache_shrink(cache, cpu, page);
> +
> + local_irq_restore(flags);
> + put_cpu_no_resched();
> +}
> +
> +/* scanning area inside a memory block */
> +struct memleak_scan_area {
> + struct hlist_node node;
> + unsigned long offset;
> + size_t length;
> +};
> +
> +/* the main allocation tracking object */
> +struct memleak_object {
> + spinlock_t lock;
> + unsigned long flags;
> + struct list_head object_list;
> + struct list_head gray_list;
> + struct prio_tree_node tree_node;
> + struct rcu_head rcu; /* used for object_list lockless traversal */
> + atomic_t use_count; /* internal usage count */
> + unsigned long pointer;
> + size_t size;
> + int ref_count; /* the minimum number of references expected */
> + int count; /* the number of references found while scanning */
> + int report_thld; /* the unreferenced reporting threshold */
> + struct hlist_head area_list; /* areas to be scanned (or empty for all) */
> + unsigned long trace[MAX_TRACE];
> + unsigned int trace_len;
> + unsigned long jiffies; /* creation timestamp */
> + pid_t pid; /* pid of the current task */
> + char comm[TASK_COMM_LEN]; /* executable name */
> +};
> +
> +/* The list of all allocated objects */
> +static LIST_HEAD(object_list);
> +static DEFINE_SPINLOCK(object_list_lock);
> +/* The list of the gray objects */
> +static LIST_HEAD(gray_list);
> +/* prio search tree for object boundaries */
> +static struct prio_tree_root object_tree_root;
> +static DEFINE_RWLOCK(object_tree_lock);
> +
> +/* allocation pools */
> +static struct fast_cache object_cache;
> +static struct fast_cache scan_area_cache;
> +
> +static atomic_t memleak_enabled = ATOMIC_INIT(0);
> +static int reported_leaks;
> +
> +/* minimum and maximum address that may be valid pointers */
> +static unsigned long min_addr = ~0;
> +static unsigned long max_addr;
> +
> +/* used for yielding the CPU to other tasks during scanning */
> +static unsigned long next_scan_yield;
> +
> +/* object flags */
> +#define OBJECT_ALLOCATED 0x1
> +
> +/*
> + * Object colors, encoded with count and ref_count:
> + * - white - orphan object, i.e. not enough references to it (count < ref_count)
> + * - gray - referred to enough times and therefore non-orphan (count >= ref_count)
> + * - black - ignore; it doesn't contain references (e.g. text section) (ref_count == -1)
> + */
> +static inline int color_white(const struct memleak_object *object)
> +{
> + return object->count != -1 && object->count < object->ref_count;
> +}
> +
> +static inline int color_gray(const struct memleak_object *object)
> +{
> + return object->ref_count != -1 && object->count >= object->ref_count;
> +}
> +
> +static inline int color_black(const struct memleak_object *object)
> +{
> + return object->ref_count == -1;
> +}
> +
> +static void dump_object_info(struct memleak_object *object)
> +{
> + struct stack_trace trace;
> +
> + trace.nr_entries = object->trace_len;
> + trace.entries = object->trace;
> +
> + pr_notice("kmemleak: object 0x%08lx (size %zu):\n",
> + object->tree_node.start, object->size);
> + pr_notice(" comm \"%s\", pid %d, jiffies %lu\n",
> + object->comm, object->pid, object->jiffies);
> + pr_notice(" ref_count = %d\n", object->ref_count);
> + pr_notice(" count = %d\n", object->count);
> + pr_notice(" backtrace:\n");
> + print_stack_trace(&trace, 4);
> +}
> +
> +static struct memleak_object *lookup_object(unsigned long ptr, int alias)
> +{
> + struct prio_tree_node *node;
> + struct prio_tree_iter iter;
> + struct memleak_object *object;
> +
> + prio_tree_iter_init(&iter, &object_tree_root, ptr, ptr);
> + node = prio_tree_next(&iter);
> + if (node) {
> + object = prio_tree_entry(node, struct memleak_object, tree_node);
> + if (!alias && object->pointer != ptr) {
> + pr_warning("kmemleak: found object by alias\n");
> + object = NULL;
> + }
> + } else
> + object = NULL;
> +
> + return object;
> +}
> +
> +/*
> + * return 1 if successful or 0 otherwise
> + */
> +static inline int get_object(struct memleak_object *object)
> +{
> + return atomic_inc_not_zero(&object->use_count);
> +}
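[Editorial note: get_object() hinges on the atomic_inc_not_zero() pattern: take a new reference only if the count has not already reached zero, so an object on which the final put_object() has run can never be resurrected. A userspace sketch of that primitive using C11 atomics:]

```c
#include <assert.h>
#include <stdatomic.h>

/* Increment *count unless it is zero.  Returns 1 if a reference was
 * taken, 0 if the object is already on its way to being freed. */
static int inc_not_zero(atomic_int *count)
{
	int old = atomic_load(count);

	while (old != 0)
		/* On failure, old is reloaded and the zero check repeats. */
		if (atomic_compare_exchange_weak(count, &old, old + 1))
			return 1;	/* reference taken */
	return 0;			/* dying object, caller must not touch it */
}
```

[A caller that sees 0 here must simply give up; combined with the RCU grace period discussed below the object's memory is still safe to have touched, just not to keep.]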
> +
> +static void free_object_rcu(struct rcu_head *rcu)
> +{
> + struct hlist_node *elem, *tmp;
> + struct memleak_scan_area *area;
> + struct memleak_object *object =
> + container_of(rcu, struct memleak_object, rcu);
> +
> + /* once use_count is 0, there is no code accessing the object */
OK, so we won't pass free_object_rcu() to call_rcu() until use_count
is equal to zero. And once use_count is zero, it is never incremented.
So the point of the RCU grace period is to ensure that all tasks
who didn't quite call get_object() soon enough get done failing
before we free up the object, correct?
Which means that get_object() needs to be under rcu_read_lock()...
> + hlist_for_each_entry_safe(area, elem, tmp, &object->area_list, node) {
> + hlist_del(elem);
> + fast_cache_free(&scan_area_cache, area);
> + }
> + fast_cache_free(&object_cache, object);
> +}
> +
> +static void put_object(struct memleak_object *object)
> +{
> + unsigned long flags;
> +
> + if (!atomic_dec_and_test(&object->use_count))
> + return;
> +
> + /* should only get here after delete_object was called */
> + BUG_ON(object->flags & OBJECT_ALLOCATED);
> +
> + spin_lock_irqsave(&object_list_lock, flags);
We also need to write-hold the object_tree_lock, do we not?
> + /* the last reference to this object */
> + list_del_rcu(&object->object_list);
> + call_rcu(&object->rcu, free_object_rcu);
> + spin_unlock_irqrestore(&object_list_lock, flags);
> +}
> +
> +static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> +
> + read_lock_irqsave(&object_tree_lock, flags);
Use of read_lock_irqsave() will work, but only if -all- deletions are
protected by write-acquiring object_tree_lock.
Or unless there is an enclosing rcu_read_lock() around all calls to
find_and_get_object().
> + object = lookup_object(ptr, alias);
> + if (object)
> + get_object(object);
> + read_unlock_irqrestore(&object_tree_lock, flags);
> +
> + return object;
> +}
> +
> +/*
> + * Insert a pointer into the object search (prio) tree
> + */
> +static inline void create_object(unsigned long ptr, size_t size, int ref_count)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> + struct prio_tree_node *node;
> + struct stack_trace trace;
> +
> + object = fast_cache_alloc(&object_cache);
> + if (!object)
> + panic("kmemleak: cannot allocate a memleak_object structure\n");
> +
> + INIT_LIST_HEAD(&object->object_list);
> + INIT_LIST_HEAD(&object->gray_list);
> + INIT_HLIST_HEAD(&object->area_list);
> + spin_lock_init(&object->lock);
> + atomic_set(&object->use_count, 1);
> + object->flags = OBJECT_ALLOCATED;
> + object->pointer = ptr;
> + object->size = size;
> + object->ref_count = ref_count;
> + object->count = -1; /* black color initially */
> + object->report_thld = REPORT_THLD;
> + object->jiffies = jiffies;
> + if (in_irq()) {
> + object->pid = 0;
> + strncpy(object->comm, "hardirq", TASK_COMM_LEN);
> + } else if (in_softirq()) {
> + object->pid = 0;
> + strncpy(object->comm, "softirq", TASK_COMM_LEN);
> + } else {
> + object->pid = current->pid;
> + strncpy(object->comm, current->comm, TASK_COMM_LEN);
> + }
> +
> + trace.max_entries = MAX_TRACE;
> + trace.nr_entries = 0;
> + trace.entries = object->trace;
> + trace.skip = 1;
> + save_stack_trace(&trace);
> +
> + object->trace_len = trace.nr_entries;
> +
> + INIT_PRIO_TREE_NODE(&object->tree_node);
> + object->tree_node.start = ptr;
> + object->tree_node.last = ptr + size - 1;
> +
> + if (ptr < min_addr)
> + min_addr = ptr;
> + if (ptr + size > max_addr)
> + max_addr = ptr + size;
> + /* update the boundaries before inserting the object in the
> + * prio search tree */
> + smp_mb();
> +
> + write_lock_irqsave(&object_tree_lock, flags);
> + node = prio_tree_insert(&object_tree_root, &object->tree_node);
> + if (node != &object->tree_node) {
> + unsigned long flags;
> +
> + pr_warning("kmemleak: existing pointer\n");
> + dump_stack();
> +
> + object = lookup_object(ptr, 1);
> + spin_lock_irqsave(&object->lock, flags);
> + dump_object_info(object);
> + spin_unlock_irqrestore(&object->lock, flags);
> +
> + panic("kmemleak: cannot insert 0x%lx into the object search tree\n",
> + ptr);
> + }
> + write_unlock_irqrestore(&object_tree_lock, flags);
In theory, you are OK here, but only because the code below is adding
an element (not removing one). But this code is looking pretty dodgy.
What exactly does object_tree_lock protect? What exactly does
object_list_lock protect?
> +
> + spin_lock_irqsave(&object_list_lock, flags);
> + list_add_tail_rcu(&object->object_list, &object_list);
> + spin_unlock_irqrestore(&object_list_lock, flags);
> +}
> +
> +/*
> + * Remove a pointer from the object search (prio) tree
> + */
> +static inline void delete_object(unsigned long ptr)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> +
> + write_lock_irqsave(&object_tree_lock, flags);
> + object = lookup_object(ptr, 0);
> + if (!object) {
> + pr_warning("kmemleak: freeing unknown object at 0x%08lx\n", ptr);
> + dump_stack();
> + write_unlock_irqrestore(&object_tree_lock, flags);
> + return;
> + }
> + prio_tree_remove(&object_tree_root, &object->tree_node);
> + write_unlock_irqrestore(&object_tree_lock, flags);
> +
> + BUG_ON(!(object->flags & OBJECT_ALLOCATED));
> +
> + spin_lock_irqsave(&object->lock, flags);
> + object->flags &= ~OBJECT_ALLOCATED;
> +#ifdef REPORT_ORPHAN_FREEING
> + if (color_white(object)) {
> + pr_warning("kmemleak: freeing orphan object 0x%08lx\n", ptr);
> + dump_stack();
> + dump_object_info(object);
> + }
> +#endif
> + object->pointer = 0;
> + spin_unlock_irqrestore(&object->lock, flags);
> +
> + put_object(object);
> +}
> +
> +/*
> + * Make an object permanently gray (false positive)
> + */
> +static inline void make_gray_object(unsigned long ptr)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> +
> + object = find_and_get_object(ptr, 0);
> + if (!object) {
> + dump_stack();
> + panic("kmemleak: graying unknown object at 0x%08lx\n", ptr);
> + }
> +
> + spin_lock_irqsave(&object->lock, flags);
> + object->ref_count = 0;
> + spin_unlock_irqrestore(&object->lock, flags);
> + put_object(object);
> +}
> +
> +/*
> + * Mark the object as black
> + */
> +static inline void make_black_object(unsigned long ptr)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> +
> + object = find_and_get_object(ptr, 0);
> + if (!object) {
> + dump_stack();
> + panic("kmemleak: blacking unknown object at 0x%08lx\n", ptr);
> + }
> +
> + spin_lock_irqsave(&object->lock, flags);
> + object->ref_count = -1;
> + spin_unlock_irqrestore(&object->lock, flags);
> + put_object(object);
> +}
> +
> +/*
> + * Add a scanning area to the object
> + */
> +static inline void add_scan_area(unsigned long ptr, unsigned long offset, size_t length)
> +{
> + unsigned long flags;
> + struct memleak_object *object;
> + struct memleak_scan_area *area;
> +
> + object = find_and_get_object(ptr, 0);
> + if (!object) {
> + dump_stack();
> + panic("kmemleak: adding scan area to unknown object at 0x%08lx\n", ptr);
> + }
> +
> + spin_lock_irqsave(&object->lock, flags);
> + if (offset + length > object->size) {
> + dump_stack();
> + dump_object_info(object);
> + panic("kmemleak: scan area larger than object 0x%08lx\n", ptr);
> + }
> +
> + area = fast_cache_alloc(&scan_area_cache);
> + if (!area)
> + panic("kmemleak: cannot allocate a scan area\n");
> +
> + INIT_HLIST_NODE(&area->node);
> + area->offset = offset;
> + area->length = length;
> +
> + hlist_add_head(&area->node, &object->area_list);
> + spin_unlock_irqrestore(&object->lock, flags);
> + put_object(object);
> +}
> +
> +/*
> + * Allocation function hook
> + */
> +void memleak_alloc(const void *ptr, size_t size, int ref_count)
> +{
> + pr_debug("%s(0x%p, %u, %d)\n", __FUNCTION__, ptr, size, ref_count);
> +
> + if (!atomic_read(&memleak_enabled))
> + return;
> + if (!ptr)
> + return;
> +
> + create_object((unsigned long)ptr, size, ref_count);
> +}
> +EXPORT_SYMBOL_GPL(memleak_alloc);
> +
> +/*
> + * Freeing function hook
> + */
> +void memleak_free(const void *ptr)
> +{
> + pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
> +
> + if (!atomic_read(&memleak_enabled))
> + return;
> + if (!ptr)
> + return;
> +
> + delete_object((unsigned long)ptr);
> +}
> +EXPORT_SYMBOL_GPL(memleak_free);
> +
> +/*
> + * Mark an object as a false positive
> + */
> +void memleak_not_leak(const void *ptr)
> +{
> + pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
> +
> + if (!atomic_read(&memleak_enabled))
> + return;
> + if (!ptr)
> + return;
> +
> + make_gray_object((unsigned long)ptr);
> +}
> +EXPORT_SYMBOL(memleak_not_leak);
> +
> +/*
> + * Ignore this memory object
> + */
> +void memleak_ignore(const void *ptr)
> +{
> + pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
> +
> + if (!atomic_read(&memleak_enabled))
> + return;
> + if (!ptr)
> + return;
> +
> + make_black_object((unsigned long)ptr);
> +}
> +EXPORT_SYMBOL(memleak_ignore);
> +
> +/*
> + * Add a scanning area to an object
> + */
> +void memleak_scan_area(const void *ptr, unsigned long offset, size_t length)
> +{
> + pr_debug("%s(0x%p)\n", __FUNCTION__, ptr);
> +
> + if (!atomic_read(&memleak_enabled))
> + return;
> + if (!ptr)
> + return;
> +
> + add_scan_area((unsigned long)ptr, offset, length);
> +}
> +EXPORT_SYMBOL(memleak_scan_area);
> +
> +static inline void scan_yield(void)
> +{
> + BUG_ON(in_atomic());
> +
> + if (time_is_before_eq_jiffies(next_scan_yield)) {
> + schedule();
> + next_scan_yield = jiffies + msecs_to_jiffies(MSECS_SCAN_YIELD);
> + }
> +}
> +
> +/*
> + * Scan a block of memory (exclusive range) for pointers and move
> + * those found to the gray list
> + */
> +static void scan_block(void *_start, void *_end, struct memleak_object *scanned)
> +{
> + unsigned long *ptr;
> + unsigned long *start = PTR_ALIGN(_start, BYTES_PER_WORD);
> + unsigned long *end = _end - (BYTES_PER_WORD - 1);
> +
> + for (ptr = start; ptr < end; ptr++) {
> + unsigned long flags;
> + unsigned long pointer = *ptr;
> + struct memleak_object *object;
> +
> + if (!scanned)
> + scan_yield();
> +
> + /* the boundaries check doesn't need to be precise
> + * (hence no locking) since orphan objects need to
> + * pass a scanning threshold before being reported */
> + if (pointer < min_addr || pointer >= max_addr)
> + continue;
> +
> + object = find_and_get_object(pointer, 1);
> + if (!object)
> + continue;
> + if (object == scanned) {
> + /* self referenced */
> + put_object(object);
> + continue;
> + }
> +
> + /* avoid the lockdep recursive warning on object->lock
> + * being previously acquired in scan_object(). These
> + * locks are enclosed by a mutex acquired in seq_open */
> + spin_lock_irqsave_nested(&object->lock, flags, SINGLE_DEPTH_NESTING);
> + if (!color_white(object)) {
> + /* non-orphan or ignored */
> + spin_unlock_irqrestore(&object->lock, flags);
> + put_object(object);
> + continue;
> + }
> +
> + object->count++;
> + if (color_gray(object)) {
> + /* the object became gray, add it to the list */
> + object->report_thld++;
> + list_add_tail(&object->gray_list, &gray_list);
> + } else
> + put_object(object);
> + spin_unlock_irqrestore(&object->lock, flags);
> + }
> +}
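[Editorial note: the heart of scan_block() is a conservative, word-by-word walk over a memory range with a cheap min_addr/max_addr pre-filter applied before the (more expensive) prio tree lookup. That filter can be sketched standalone; the addresses in the test are made-up values, not real pointers:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Walk a word-aligned block and count the values that could be
 * pointers into the tracked [min_addr, max_addr) range -- the same
 * imprecise boundary check scan_block() performs without locking. */
static size_t count_candidates(const uintptr_t *start, const uintptr_t *end,
			       uintptr_t min_addr, uintptr_t max_addr)
{
	const uintptr_t *p;
	size_t hits = 0;

	for (p = start; p < end; p++)
		if (*p >= min_addr && *p < max_addr)
			hits++;
	return hits;
}
```

[As in the patch, a value passing this filter is only a candidate; whether it really aliases a tracked object is decided by the tree lookup.]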
> +
> +/*
> + * Scan a memory block represented by a memleak_object
> + */
> +static inline void scan_object(struct memleak_object *object)
> +{
> + struct memleak_scan_area *area;
> + struct hlist_node *elem;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&object->lock, flags);
> +
> + /* freed object */
> + if (!(object->flags & OBJECT_ALLOCATED))
> + goto out;
> +
> + if (hlist_empty(&object->area_list))
> + scan_block((void *)object->pointer,
> + (void *)(object->pointer + object->size), object);
> + else
> + hlist_for_each_entry(area, elem, &object->area_list, node)
> + scan_block((void *)(object->pointer + area->offset),
> + (void *)(object->pointer + area->offset
> + + area->length), object);
> +
> + out:
> + spin_unlock_irqrestore(&object->lock, flags);
> +}
> +
> +/*
> + * Scan the memory and print the orphan objects
> + */
> +static void memleak_scan(void)
> +{
> + unsigned long flags;
> + struct memleak_object *object, *tmp;
> + int i;
> +#ifdef SCAN_TASK_STACKS
> + struct task_struct *task;
> +#endif
> +
> + fast_cache_dump_stats(&object_cache, "object_cache");
> + fast_cache_dump_stats(&scan_area_cache, "scan_area_cache");
> +
> + rcu_read_lock();
> + list_for_each_entry_rcu(object, &object_list, object_list) {
> + spin_lock_irqsave(&object->lock, flags);
> +
> +#ifdef DEBUG
> + /* with a few exceptions there should be a maximum of
> + * 1 reference to any object at this point */
> + if (atomic_read(&object->use_count) > 1) {
> + pr_debug("kmemleak: object->use_count = %d\n",
> + atomic_read(&object->use_count));
> + dump_object_info(object);
> + }
> +#endif
> +
> + /* reset the reference count (whiten the object) */
> + object->count = 0;
> + if (color_gray(object) && get_object(object))
> + list_add_tail(&object->gray_list, &gray_list);
What prevents other tasks from concurrently adding other objects to
gray_list? Preventing such concurrent adds is absolutely required,
otherwise gray_list will be corrupted.
> + else
> + object->report_thld--;
> +
> + spin_unlock_irqrestore(&object->lock, flags);
> + }
> + rcu_read_unlock();
> +
> + /* data/bss scanning */
> + scan_block(_sdata, _edata, NULL);
> + scan_block(__bss_start, __bss_stop, NULL);
> +
> +#ifdef CONFIG_SMP
> + /* per-cpu scanning */
> + for_each_possible_cpu(i)
> + scan_block(__per_cpu_start + per_cpu_offset(i),
> + __per_cpu_end + per_cpu_offset(i), NULL);
> +#endif
> +
> + /* mem_map scanning */
> + for_each_online_node(i) {
> + struct page *page, *end;
> +
> + page = NODE_MEM_MAP(i);
> + end = page + NODE_DATA(i)->node_spanned_pages;
> +
> + scan_block(page, end, NULL);
> + }
> +
> +#ifdef SCAN_TASK_STACKS
> + read_lock(&tasklist_lock);
> + for_each_process(task)
> + scan_block(task_stack_page(task),
> + task_stack_page(task) + THREAD_SIZE, NULL);
> + read_unlock(&tasklist_lock);
> +#endif
> +
> + /* scan the objects already referenced. More objects will be
> + * referenced and, if there are no memory leaks, all the
> + * objects will be scanned. The list traversal is safe for
> + * both tail additions and removals from inside the loop. The
> + * memleak objects cannot be freed from outside the loop
> + * because their use_count was increased */
> + object = list_entry(gray_list.next, typeof(*object), gray_list);
> + while (&object->gray_list != &gray_list) {
> + scan_yield();
> +
> + /* may add new objects to the list */
> + scan_object(object);
> +
> + tmp = list_entry(object->gray_list.next, typeof(*object),
> + gray_list);
> +
> + /* remove the object from the list and release it */
> + list_del(&object->gray_list);
> + put_object(object);
> +
> + object = tmp;
> + }
> + BUG_ON(!list_empty(&gray_list));
> +}
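[Editorial note: the gray_list loop above is a worklist: scanning an object may append newly-grayed objects at the tail, and the traversal continues until the list drains. A minimal singly-linked sketch of that pattern, where "scanning" the first node discovers one extra node (the list layout here is invented for illustration, not kmemleak's intrusive list_head):]

```c
#include <assert.h>
#include <stddef.h>

struct node {
	struct node *next;
};

/* Visit every node; the first visit may append one extra node at the
 * tail, which must also be visited before the traversal finishes. */
static int drain(struct node *head, struct node *extra)
{
	int visited = 0;

	while (head) {
		visited++;
		if (extra) {		/* "scanning" discovered a new node */
			struct node *t = head;

			while (t->next)
				t = t->next;
			extra->next = NULL;
			t->next = extra;
			extra = NULL;
		}
		head = head->next;	/* re-read: may now see the new tail */
	}
	return visited;
}
```

[The key property, as in memleak_scan(), is that tail additions made during the walk are still reached, so with no leaks every reachable object eventually gets scanned.]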
> +
> +static void *memleak_seq_start(struct seq_file *seq, loff_t *pos)
> +{
> + struct memleak_object *object;
> + loff_t n = *pos;
> +
> + if (!n) {
> + memleak_scan();
> + reported_leaks = 0;
> + }
> + if (reported_leaks >= REPORTS_NR)
> + return NULL;
> +
> + rcu_read_lock();
> + list_for_each_entry_rcu(object, &object_list, object_list) {
> + if (n-- > 0)
> + continue;
> +
> + if (get_object(object))
> + goto out;
> + }
> + object = NULL;
> + out:
> + rcu_read_unlock();
> + return object;
> +}
> +
> +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> +{
> + struct list_head *n;
> + struct memleak_object *next = NULL;
> + unsigned long flags;
> +
> + ++(*pos);
> + if (reported_leaks >= REPORTS_NR)
> + goto out;
> +
> + spin_lock_irqsave(&object_list_lock, flags);
Using a spinlock instead of rcu_read_lock() is OK, but only if all
updates are also protected by this same spinlock. Which means that,
given that find_and_get_object read-acquires object_tree_lock, deletions
must be protected both by object_list_lock and by write-acquiring
object_tree_lock. Or all calls to memleak_seq_next need to be covered
by rcu_read_lock().
> + n = ((struct memleak_object *)v)->object_list.next;
> + if (n != &object_list) {
> + next = list_entry(n, struct memleak_object, object_list);
> + get_object(next);
> + }
> + spin_unlock_irqrestore(&object_list_lock, flags);
> +
> + out:
> + put_object(v);
> + return next;
> +}
> +
> +static void memleak_seq_stop(struct seq_file *seq, void *v)
> +{
> + if (v)
> + put_object(v);
> +}
> +
> +static int memleak_seq_show(struct seq_file *seq, void *v)
> +{
> + struct memleak_object *object = v;
> + unsigned long flags;
> + char namebuf[KSYM_NAME_LEN + 1] = "";
> + char *modname;
> + unsigned long symsize;
> + unsigned long offset = 0;
> + int i;
> +
> + spin_lock_irqsave(&object->lock, flags);
> +
> + if (!color_white(object))
> + goto out;
> + /* freed in the meantime (false positive) or just allocated */
> + if (!(object->flags & OBJECT_ALLOCATED))
> + goto out;
> + if (object->report_thld >= 0)
> + goto out;
> +
> + reported_leaks++;
> + seq_printf(seq, "unreferenced object 0x%08lx (size %zu):\n",
> + object->pointer, object->size);
> + seq_printf(seq, " comm \"%s\", pid %d, jiffies %lu\n",
> + object->comm, object->pid, object->jiffies);
> + seq_printf(seq, " backtrace:\n");
> +
> + for (i = 0; i < object->trace_len; i++) {
> + unsigned long trace = object->trace[i];
> +
> + kallsyms_lookup(trace, &symsize, &offset, &modname, namebuf);
> + seq_printf(seq, " [<%08lx>] %s\n", trace, namebuf);
> + }
> +
> + out:
> + spin_unlock_irqrestore(&object->lock, flags);
> + return 0;
> +}
> +
> +static struct seq_operations memleak_seq_ops = {
> + .start = memleak_seq_start,
> + .next = memleak_seq_next,
> + .stop = memleak_seq_stop,
> + .show = memleak_seq_show,
> +};
> +
> +static int memleak_seq_open(struct inode *inode, struct file *file)
> +{
> + return seq_open(file, &memleak_seq_ops);
> +}
> +
> +static struct file_operations memleak_fops = {
> + .owner = THIS_MODULE,
> + .open = memleak_seq_open,
> + .read = seq_read,
> + .llseek = seq_lseek,
> + .release = seq_release,
> +};
> +
> +/*
> + * Kmemleak initialization
> + */
> +void __init memleak_init(void)
> +{
> + fast_cache_init(&object_cache, sizeof(struct memleak_object));
> + fast_cache_init(&scan_area_cache, sizeof(struct memleak_scan_area));
> +
> + INIT_PRIO_TREE_ROOT(&object_tree_root);
> +
> + /* this is the point where tracking allocations is safe.
> + * Scanning is only available later */
> + atomic_set(&memleak_enabled, 1);
> +}
> +
> +/*
> + * Late initialization function
> + */
> +int __init memleak_late_init(void)
> +{
> + struct dentry *dentry;
> +
> + dentry = debugfs_create_file("memleak", S_IRUGO, NULL, NULL,
> + &memleak_fops);
> + if (!dentry)
> + return -ENOMEM;
> +
> + pr_info("Kernel memory leak detector initialized\n");
> +
> + return 0;
> +}
> +late_initcall(memleak_late_init);
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-12-03 18:12 ` Paul E. McKenney
@ 2008-12-04 12:14 ` Catalin Marinas
2008-12-04 16:55 ` Paul E. McKenney
0 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-12-04 12:14 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
Hi Paul,
On Wed, 2008-12-03 at 10:12 -0800, Paul E. McKenney wrote:
> On Thu, Nov 20, 2008 at 11:30:34AM +0000, Catalin Marinas wrote:
> > This patch adds the base support for the kernel memory leak
> > detector. It traces the memory allocation/freeing in a way similar to
> > the Boehm's conservative garbage collector, the difference being that
> > the unreferenced objects are not freed but only shown in
> > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > overhead to memory allocations.
>
> I have some concerns about your locking/RCU design, please see
> interspersed questions and comments.
Thanks for reviewing the locking. This was probably the hardest part of
kmemleak. It went through several incarnations and slowly moved towards
finer-grained locking.
I'll comment below as well but I'll first show my logic which I added as
comment at the top of the memleak.c file:
* The following locks are used by kmemleak:
*
* - object_list_lock (spinlock): protects the object_list. This is the main
* list holding the metadata (struct memleak_object) for the allocated
* objects. The memleak_object structures are added to the list in the
* create_object() function called from the memleak_alloc() callback. They
* are removed from the list in put_object() if the object->use_count is 0
* - object_tree_lock (rwlock): protects the object_tree_root. When the
* metadata is created in create_object(), it is added to the object prio
* search tree and removed in delete_object() with this lock held
* (write_lock). This lock is also acquired (read_lock) in
* find_and_get_object() when an object needs to be looked up by a pointer
* value (either during scanning or when changing its properties like
* marking as false positive)
* - memleak_object.lock (spinlock): protects a memleak_object. Modifications
* of the metadata (e.g. count) are protected by this lock. Note that some
* members of this structure may be protected by other means (atomic or
* object_list lock). This lock is also held when scanning the corresponding
* object to avoid the kernel freeing it via the memleak_free() callback.
* This is less heavyweight than holding a global lock like object_list_lock
* during scanning
*
* The only mutex used is scan_mutex. This ensures that only one thread may
* scan the memory for unreferenced objects at a time. The gray_list contains
* the objects which are already referenced or marked as false positives and
* need to be scanned. This list is only modified during a scanning episode
* when the scan_mutex is held. At the end of a scan, the gray_list is always
* empty. Note that the memleak_object.use_count is incremented when an object
* is added to the gray_list and therefore cannot be freed.
*
* Freeing a memleak_object is done via an RCU callback invoked from
* put_object() when its use_count is 0 and after removing it from the
* object_list. One of the reasons for the RCU is to delay the freeing and
* avoid a recursive call into the allocator via kmem_cache_free(). Another
* reason is to allow lock-less object_list traversal during memleak_scan().
> > +static void free_object_rcu(struct rcu_head *rcu)
> > +{
> > + struct hlist_node *elem, *tmp;
> > + struct memleak_scan_area *area;
> > + struct memleak_object *object =
> > + container_of(rcu, struct memleak_object, rcu);
> > +
> > + /* once use_count is 0, there is no code accessing the object */
>
> OK, so we won't pass free_object_rcu() to call_rcu() until use_count
> is equal to zero. And once use_count is zero, it is never incremented.
> So the point of the RCU grace period is to ensure that all tasks
> who didn't quite call get_object() soon enough get done failing
> before we free up the object, correct?
>
> Which means that get_object() needs to be under rcu_read_lock()...
My view here is that if use_count is 0, no other thread would be able to
use this object. It will also be removed from the object_list and hence
no other way to get to this object. I think it would have worked
pretty much OK without the RCU *but* the main reason for the grace
period is to avoid a recursive call into kmem_cache_free() since
put_object() is quite likely called via kmem_cache_free() ->
memleak_free() -> delete_object(). The alternative is to use a work
queue or something else but the RCU already had the infrastructure and I
also gained a lock-less object_list traversal in memleak_scan().
> > +static void put_object(struct memleak_object *object)
> > +{
> > + unsigned long flags;
> > +
> > + if (!atomic_dec_and_test(&object->use_count))
> > + return;
> > +
> > + /* should only get here after delete_object was called */
> > + BUG_ON(object->flags & OBJECT_ALLOCATED);
> > +
> > + spin_lock_irqsave(&object_list_lock, flags);
>
> We also need to write-hold the object_tree_lock, not?
Not here, the memleak_object is removed from the object_tree in the
delete_object() function (called from the memleak_free callback). If it is
in the object_tree, it should have a use_count >= 1.
> > + /* the last reference to this object */
> > + list_del_rcu(&object->object_list);
> > + call_rcu(&object->rcu, free_object_rcu);
> > + spin_unlock_irqrestore(&object_list_lock, flags);
> > +}
> > +
> > +static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
> > +{
> > + unsigned long flags;
> > + struct memleak_object *object;
> > +
> > + read_lock_irqsave(&object_tree_lock, flags);
>
> Use of read_lock_irqsave() will work, but only if -all- deletions are
> protected by write-acquiring object_tree_lock.
Yes, deletions (in delete_object) are protected by
write_lock(object_tree_lock).
> Or unless there is an enclosing rcu_read_lock() around all calls to
> find_and_get_object().
Not needed in my opinion since if it is in the object_tree, it has
use_count >= 1 anyway and the RCU callback for freeing won't be invoked.
> > +static inline void create_object(unsigned long ptr, size_t size, int ref_count)
[...]
> > + write_lock_irqsave(&object_tree_lock, flags);
> > + node = prio_tree_insert(&object_tree_root, &object->tree_node);
> > + if (node != &object->tree_node) {
> > + unsigned long flags;
> > +
> > + pr_warning("kmemleak: existing pointer\n");
> > + dump_stack();
> > +
> > + object = lookup_object(ptr, 1);
> > + spin_lock_irqsave(&object->lock, flags);
> > + dump_object_info(object);
> > + spin_unlock_irqrestore(&object->lock, flags);
> > +
> > + panic("kmemleak: cannot insert 0x%lx into the object search tree\n",
> > + ptr);
> > + }
> > + write_unlock_irqrestore(&object_tree_lock, flags);
>
> In theory, you are OK here, but only because the below is adding
> an element (not removing it). But this code is looking pretty dodgy.
>
> What exactly does object_tree_lock protect? What exactly does
> object_list protect?
See my explanation at the beginning. I separated the locking for the
object_list and the object_tree_root (prio search tree). I think I could
have held the object_tree_lock only for the prio_tree_insert() call.
> > +static void memleak_scan(void)
[...]
> > + /* reset the reference count (whiten the object) */
> > + object->count = 0;
> > + if (color_gray(object) && get_object(object))
> > + list_add_tail(&object->gray_list, &gray_list);
>
> What prevents other tasks from concurrently adding other objects to
> gray_list? Preventing such concurrent adds is absolutely required,
> otherwise gray_list will be corrupted.
There is scan_mutex for this.
> > +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> > +{
> > + struct list_head *n;
> > + struct memleak_object *next = NULL;
> > + unsigned long flags;
> > +
> > + ++(*pos);
> > + if (reported_leaks >= REPORTS_NR)
> > + goto out;
> > +
> > + spin_lock_irqsave(&object_list_lock, flags);
>
> Using a spinlock instead of rcu_read_lock() is OK, but only if all
> updates are also protected by this same spinlock. Which means that,
> given that find_and_get_object read-acquires object_tree_lock, deletions
> must be protected both by object_list_lock and by write-acquiring
> object_tree_lock. Or all calls to memleak_seq_next need to be covered
> by rcu_read_lock().
The spin_lock here is only to retrieve the next object in the list but I
agree that even if the object_list modifications are protected by
object_list_lock, put_object() could actually drop the use_count to 0,
and the get_object() return value in this function isn't checked. If get_object() returns
successfully, I don't think an rcu_read_lock() is needed since
put_object() can no longer invoke free_object_rcu().
Thanks for your time.
--
Catalin
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-12-04 12:14 ` Catalin Marinas
@ 2008-12-04 16:55 ` Paul E. McKenney
2008-12-06 23:07 ` Catalin Marinas
0 siblings, 1 reply; 37+ messages in thread
From: Paul E. McKenney @ 2008-12-04 16:55 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
On Thu, Dec 04, 2008 at 12:14:26PM +0000, Catalin Marinas wrote:
> Hi Paul,
>
> On Wed, 2008-12-03 at 10:12 -0800, Paul E. McKenney wrote:
> > On Thu, Nov 20, 2008 at 11:30:34AM +0000, Catalin Marinas wrote:
> > > This patch adds the base support for the kernel memory leak
> > > detector. It traces the memory allocation/freeing in a way similar to
> > > the Boehm's conservative garbage collector, the difference being that
> > > the unreferenced objects are not freed but only shown in
> > > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > > overhead to memory allocations.
> >
> > I have some concerns about your locking/RCU design, please see
> > interspersed questions and comments.
>
> Thanks for reviewing the locking. This was probably the hardest part of
> kmemleak. It had several incarnations and slowly got towards a finer
> grained locking.
And I should have also pointed out that a memory-leak detector is an
extremely useful thing!!! I look forward to its acceptance.
> I'll comment below as well but I'll first show my logic which I added as
> comment at the top of the memleak.c file:
This is very helpful, thank you! (And please accept my apologies if I
missed seeing it in my earlier review.)
> * The following locks are used by kmemleak:
> *
> * - object_list_lock (spinlock): protects the object_list. This is the main
> * list holding the metadata (struct memleak_object) for the allocated
> * objects. The memleak_object structures are added to the list in the
> * create_object() function called from the memleak_alloc() callback. They
> * are removed from the list in put_object() if the object->use_count is 0
This part sounds good. I would also add that once object->use_count is
zero, no one is allowed to increment it. Also, attempts to increment
object->use_count must be under the protection of rcu_read_lock(),
correct?
> * - object_tree_lock (rwlock): protects the object_tree_root. When the
> * metadata is created in create_object(), it is added to the object prio
> * search tree and removed in delete_object() with this lock held
> * (write_lock). This lock is also acquired (read_lock) in
> * find_and_get_object() when an object needs to be looked up by a pointer
> * value (either during scanning or when changing its properties like
> * marking as false positive)
Looks OK. I must confess that I am a bit fuzzy on the purpose of
object_tree_root vs. object_list.
> * - memleak_object.lock (spinlock): protects a memleak_object. Modifications
> * of the metadata (e.g. count) are protected by this lock. Note that some
> * members of this structure may be protected by other means (atomic or
> * object_list lock). This lock is also held when scanning the corresponding
> * object to avoid the kernel freeing it via the memleak_free() callback.
> * This is less heavyweight than holding a global lock like object_list_lock
> * during scanning
OK, holding an object's lock can protect that object from deletion,
but only after you actually acquire the lock. There must be some other
mechanism preventing the object from being freed during the actual
acquisition of the lock.
Now this might be the object_list_lock, object_tree_lock, RCU, or some
combination of the three, for example it might depend on how that object
is looked up.
> * The only mutex used is scan_mutex. This ensures that only one thread may
> * scan the memory for unreferenced objects at a time. The gray_list contains
> * the objects which are already referenced or marked as false positives and
> * need to be scanned. This list is only modified during a scanning episode
> * when the scan_mutex is held. At the end of a scan, the gray_list is always
> * empty. Note that the memleak_object.use_count is incremented when an object
> * is added to the gray_list and therefore cannot be freed.
This is quite helpful -- I totally missed this mutex.
> * Freeing a memleak_object is done via an RCU callback invoked from
> * put_object() when its use_count is 0 and after removing it from the
> * object_list. One of the reasons for the RCU is to delay the freeing and
> * avoid a recursive call into the allocator via kmem_cache_free(). Another
> * reason is to allow lock-less object_list traversal during memleak_scan().
I did figure out the lock-less object_list traversal, but totally missed
the fact that you were using RCU to prevent infinite recursion. Cute!
Also, the memleak_object must have been removed from object_tree before
its use_count can possibly go to 0, correct?
> > > +static void free_object_rcu(struct rcu_head *rcu)
> > > +{
> > > + struct hlist_node *elem, *tmp;
> > > + struct memleak_scan_area *area;
> > > + struct memleak_object *object =
> > > + container_of(rcu, struct memleak_object, rcu);
> > > +
> > > + /* once use_count is 0, there is no code accessing the object */
> >
> > OK, so we won't pass free_object_rcu() to call_rcu() until use_count
> > is equal to zero. And once use_count is zero, it is never incremented.
> > So the point of the RCU grace period is to ensure that all tasks
> > who didn't quite call get_object() soon enough get done failing
> > before we free up the object, correct?
> >
> > Which means that get_object() needs to be under rcu_read_lock()...
>
> My view here is that if use_count is 0, no other thread would be able to
> use this object. It will also be removed from the object_list and hence
> > no other way to get to this object.
What if some other CPU picked up a pointer to the object just before it
was removed from the list? If that CPU was not under the protection of
rcu_read_lock(), and if that CPU was delayed, then the object could be
freed (and possibly re-allocated as something else) before the CPU got
around to doing the atomic_inc_not_zero().
Keep in mind that spinlocks can be preempted in -rt kernels, so it
is possible for the CPU to be delayed a -long- time.
So I believe you really do need all get_object() calls to either be under
the protection of rcu_read_lock() or under the protection of some lock
that excludes both deletion -and- lookup-for-deletion of that object.
It looks to me that the code currently does the right thing here, just
want to make sure I understand the locking and that we don't end up
tempting someone later to break it. ;-)
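For reference, the increment-only-if-nonzero step under discussion can be sketched with C11 atomics. This is an illustrative userspace model, not the kernel's atomic_inc_not_zero() implementation:

```c
#include <stdatomic.h>

/*
 * Illustrative model of atomic_inc_not_zero(): take a reference only
 * if the count is currently non-zero.  Returns 1 on success, 0 if the
 * object is already being torn down (count has reached zero).
 */
static int inc_not_zero(atomic_int *count)
{
	int old = atomic_load(count);

	while (old != 0) {
		/* on failure, 'old' is reloaded with the current value */
		if (atomic_compare_exchange_weak(count, &old, old + 1))
			return 1;
	}
	return 0;
}
```

A caller that gets 0 back must give up on the object; the rcu_read_lock() around the look-up plus the inc-not-zero attempt is what guarantees the memory itself is still valid while the attempt is made.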
> I think it would have worked
> pretty much OK without the RCU *but* the main reason for the grace
> period is to avoid a recursive call into kmem_cache_free() since
> put_object() is quite likely called via kmem_cache_free() ->
> memleak_free() -> delete_object(). The alternative is to use a work
> queue or something else but the RCU already had the infrastructure and I
> also gained a lock-less object_list traversal in memleak_scan().
Yep, as noted earlier, I missed this prevent-infinite-recursion aspect,
and it is pretty cute.
> > > +static void put_object(struct memleak_object *object)
> > > +{
> > > + unsigned long flags;
> > > +
> > > + if (!atomic_dec_and_test(&object->use_count))
> > > + return;
> > > +
> > > + /* should only get here after delete_object was called */
> > > + BUG_ON(object->flags & OBJECT_ALLOCATED);
> > > +
> > > + spin_lock_irqsave(&object_list_lock, flags);
> >
> > We also need to write-hold the object_tree_lock, not?
>
> Not here, the memleak_object is removed from the object_tree in the
> > delete_object() function (called from the memleak_free callback). If it is
> in the object_tree, it should have a use_count >= 1.
So the code never calls the last put_object() without first having
called delete_object() to remove it from the object_tree? The "last"
put_object() being the one that decrements object->use_count to zero.
> > > + /* the last reference to this object */
> > > + list_del_rcu(&object->object_list);
> > > + call_rcu(&object->rcu, free_object_rcu);
> > > + spin_unlock_irqrestore(&object_list_lock, flags);
> > > +}
> > > +
> > > +static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
> > > +{
> > > + unsigned long flags;
> > > + struct memleak_object *object;
> > > +
> > > + read_lock_irqsave(&object_tree_lock, flags);
> >
> > Use of read_lock_irqsave() will work, but only if -all- deletions are
> > protected by write-acquiring object_tree_lock.
>
> Yes, deletions (in delete_object) are protected by
> write_lock(object_tree_lock).
>
> > Or unless there is an enclosing rcu_read_lock() around all calls to
> > find_and_get_object().
>
> Not needed in my opinion since if it is in the object_tree, it has
> use_count >= 1 anyway and the RCU callback for freeing won't be invoked.
Yep, if all deletions are protected by write_lock(object_tree_lock),
then you -don't- need rcu_read_lock() around all find_and_get_object().
> > > +static inline void create_object(unsigned long ptr, size_t size, int ref_count)
> [...]
> > > + write_lock_irqsave(&object_tree_lock, flags);
> > > + node = prio_tree_insert(&object_tree_root, &object->tree_node);
> > > + if (node != &object->tree_node) {
> > > + unsigned long flags;
> > > +
> > > + pr_warning("kmemleak: existing pointer\n");
> > > + dump_stack();
> > > +
> > > + object = lookup_object(ptr, 1);
> > > + spin_lock_irqsave(&object->lock, flags);
> > > + dump_object_info(object);
> > > + spin_unlock_irqrestore(&object->lock, flags);
> > > +
> > > + panic("kmemleak: cannot insert 0x%lx into the object search tree\n",
> > > + ptr);
> > > + }
> > > + write_unlock_irqrestore(&object_tree_lock, flags);
> >
> > In theory, you are OK here, but only because the below is adding
> > an element (not removing it). But this code is looking pretty dodgy.
> >
> > What exactly does object_tree_lock protect? What exactly does
> > object_list protect?
>
> See my explanation at the beginning. I separated the locking for the
> object_list and the object_tree_root (prio search tree). I think I could
> have held the object_tree_lock only for the prio_tree_insert() call.
I can't say I understand the code well enough to say for sure, but
if that is true, it would make the locking easier for me to understand.
> > > +static void memleak_scan(void)
> [...]
> > > + /* reset the reference count (whiten the object) */
> > > + object->count = 0;
> > > + if (color_gray(object) && get_object(object))
> > > + list_add_tail(&object->gray_list, &gray_list);
> >
> > What prevents other tasks from concurrently adding other objects to
> > gray_list? Preventing such concurrent adds is absolutely required,
> > otherwise gray_list will be corrupted.
>
> There is scan_mutex for this.
Good point -- I totally missed the fact that scan_mutex even existed.
> > > +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> > > +{
> > > + struct list_head *n;
> > > + struct memleak_object *next = NULL;
> > > + unsigned long flags;
> > > +
> > > + ++(*pos);
> > > + if (reported_leaks >= REPORTS_NR)
> > > + goto out;
> > > +
> > > + spin_lock_irqsave(&object_list_lock, flags);
> >
> > Using a spinlock instead of rcu_read_lock() is OK, but only if all
> > updates are also protected by this same spinlock. Which means that,
> > given that find_and_get_object read-acquires object_tree_lock, deletions
> > must be protected both by object_list_lock and by write-acquiring
> > object_tree_lock. Or all calls to memleak_seq_next need to be covered
> > by rcu_read_lock().
>
> The spin_lock here is only to retrieve the next object in the list but I
> agree that even if the object_list modifications are protected by
> > object_list_lock, put_object() could actually drop the use_count to 0,
> > and the get_object() return value in this function isn't checked. If get_object() returns
> successfully, I don't think an rcu_read_lock() is needed since
> put_object() can no longer invoke free_object_rcu().
OK, so let me see if I understand:
The memleak_object passed in via the "v" argument to
memleak_seq_next() was get_object()-ed by some prior
call, either an earlier memleak_seq_next() or presumably
by memleak_seq_start().
memleak_seq_start() -does- do its scan under RCU protection,
so looks OK.
I believe you also need RCU protection in memleak_seq_next()
to prevent the next memleak_object from disappearing
during the traversal. Yes, you do greatly decrease the
odds of this happening by having irqs disabled, but the
fact is that RCU is within its rights to end a grace
period during this time.
Assuming that I do understand, as you say, if the get_object() in
memleak_seq_next() fails, we could end up accessing freed-up memory on
the next call to memleak_seq_next(), or even during the current one,
assuming an aggressive RCU or an extended NMI, SMI, burst of ECC errors
or some other delay. So I agree that it is necessary to check the return
value of get_object().
Also, I do see the put_object() call in memleak_seq_stop(), but it looks
to me that this only does a put_object() on the last memleak_object.
Who does the put_object() on the earlier memleak_object structures that
were scanned by memleak_seq_next()? Or is there never more than one
such object in a given list?
Thanx, Paul
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-12-04 16:55 ` Paul E. McKenney
@ 2008-12-06 23:07 ` Catalin Marinas
2008-12-07 23:19 ` Paul E. McKenney
0 siblings, 1 reply; 37+ messages in thread
From: Catalin Marinas @ 2008-12-06 23:07 UTC (permalink / raw)
To: paulmck; +Cc: linux-kernel
On Thu, 2008-12-04 at 08:55 -0800, Paul E. McKenney wrote:
> On Thu, Dec 04, 2008 at 12:14:26PM +0000, Catalin Marinas wrote:
> > On Wed, 2008-12-03 at 10:12 -0800, Paul E. McKenney wrote:
> > > On Thu, Nov 20, 2008 at 11:30:34AM +0000, Catalin Marinas wrote:
> > > > This patch adds the base support for the kernel memory leak
> > > > detector. It traces the memory allocation/freeing in a way similar to
> > > > the Boehm's conservative garbage collector, the difference being that
> > > > the unreferenced objects are not freed but only shown in
> > > > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > > > overhead to memory allocations.
> > >
> > > I have some concerns about your locking/RCU design, please see
> > > interspersed questions and comments.
[...]
> > I'll comment below as well but I'll first show my logic which I added as
> > comment at the top of the memleak.c file:
>
> This is very helpful, thank you! (And please accept my apologies if I
> missed seeing it in my earlier review.)
You haven't missed it, that's the first time I posted this text.
> > * The following locks are used by kmemleak:
> > *
> > * - object_list_lock (spinlock): protects the object_list. This is the main
> > * list holding the metadata (struct memleak_object) for the allocated
> > * objects. The memleak_object structures are added to the list in the
> > * create_object() function called from the memleak_alloc() callback. They
> > * are removed from the list in put_object() if the object->use_count is 0
>
> This part sounds good. I would also add that once object->use_count is
> zero, no one is allowed to increment it. Also, attempts to increment
> object->use_count must be under the protection of rcu_read_lock(),
> correct?
So, to make sure I understand it correctly, the rcu_read_lock() is
needed to protect the window between the point where the object pointer
was obtained and the get_object() call. Would it also work if
spin_lock_irqsave(object_list_lock) is used instead of rcu_read_lock()?
The call_rcu() in put_object is bracketed with object_list_lock.
BTW, I'll have a look if I could remove an object from the object_list
in delete_object() rather than waiting until put_object().
> > * - object_tree_lock (rwlock): protects the object_tree_root. When the
> > * metadata is created in create_object(), it is added to the object prio
> > * search tree and removed in delete_object() with this lock held
> > * (write_lock). This lock is also acquired (read_lock) in
> > * find_and_get_object() when an object needs to be looked up by a pointer
> > * value (either during scanning or when changing its properties like
> > * marking as false positive)
>
> Looks OK. I must confess that I am a bit fuzzy on the purpose of
> object_tree_root vs. object_list.
object_list holds all the memleak_objects in the system and it is
traversed when preparing the scanning and also when reporting the leaks.
object_tree_root is used to look up memleak_objects by a pointer into
the allocated memory block. In the past, this used to be a radix tree
(with some lockdep problems) and later a hash. I now use a prio tree
because it allows look-ups by pointer ranges.
Kmemleak could probably iterate over the object_tree_root when reporting
but it is more convenient to report the leaks in the order they were
allocated (preserved by object_list) since one leak may trigger many
subsequent reports but they disappear once the first one is solved.
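As a rough illustration of the range look-up the prio tree provides, here is a minimal userspace sketch (hypothetical names; a linear search stands in for the prio search tree, purely for clarity):

```c
#include <stddef.h>

struct obj {
	unsigned long start;	/* base address of the tracked block */
	size_t size;		/* block size */
};

/*
 * Return the object whose [start, start + size) range contains ptr,
 * or NULL if no tracked block covers it.  Matching any pointer into
 * the block, not just the base address, is what a range look-up buys.
 */
static struct obj *lookup_range(struct obj *objs, int n, unsigned long ptr)
{
	int i;

	for (i = 0; i < n; i++)
		if (ptr >= objs[i].start && ptr < objs[i].start + objs[i].size)
			return &objs[i];
	return NULL;
}
```

Matching interior pointers as well as base pointers is what removes the pointer-aliasing false positives mentioned in the cover letter.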
> > * - memleak_object.lock (spinlock): protects a memleak_object. Modifications
> > * of the metadata (e.g. count) are protected by this lock. Note that some
> > * members of this structure may be protected by other means (atomic or
> > * object_list lock). This lock is also held when scanning the corresponding
> > * object to avoid the kernel freeing it via the memleak_free() callback.
> > * This is less heavyweight than holding a global lock like object_list_lock
> > * during scanning
>
> OK, holding an object's lock can protect that object from deletion,
> but only after you actually acquire the lock. There must be some other
> mechanism preventing the object from being freed during the actual
> acquisition of the lock.
>
> Now this might be the object_list_lock, object_tree_lock, RCU, or some
> combination of the three, for example it might depend on how that object
> is looked up.
Correct. I'll have to review this again.
> > * Freeing a memleak_object is done via an RCU callback invoked from
> > * put_object() when its use_count is 0 and after removing it from the
> > * object_list. One of the reasons for the RCU is to delay the freeing and
> > * avoid a recursive call into the allocator via kmem_cache_free(). Another
> > * reason is to allow lock-less object_list traversal during memleak_scan().
>
> I did figure out the lock-less object_list traversal, but totally missed
> the fact that you were using RCU to prevent infinite recursion. Cute!
It wasn't documented, so pretty hard to guess.
> Also, the memleak_object must have been removed from object_tree before
> its use_count can possibly go to 0, correct?
Yes.
> > > > +static void free_object_rcu(struct rcu_head *rcu)
> > > > +{
> > > > + struct hlist_node *elem, *tmp;
> > > > + struct memleak_scan_area *area;
> > > > + struct memleak_object *object =
> > > > + container_of(rcu, struct memleak_object, rcu);
> > > > +
> > > > + /* once use_count is 0, there is no code accessing the object */
> > >
> > > OK, so we won't pass free_object_rcu() to call_rcu() until use_count
> > > is equal to zero. And once use_count is zero, it is never incremented.
> > > So the point of the RCU grace period is to ensure that all tasks
> > > who didn't quite call get_object() soon enough get done failing
> > > before we free up the object, correct?
> > >
> > > Which means that get_object() needs to be under rcu_read_lock()...
> >
> > My view here is that if use_count is 0, no other thread would be able to
> > use this object. It will also be removed from the object_list and hence
> > no other way to get to this object.
>
> What if some other CPU picked up a pointer to the object just before it
> was removed from the list? If that CPU was not under the protection of
> rcu_read_lock(), and if that CPU was delayed, then the object could be
> freed (and possibly re-allocated as something else) before the CPU got
> around to doing the atomic_inc_not_zero().
OK, I got it now.
> It looks to me that the code currently does the right thing here, just
> want to make sure I understand the locking and that we don't end up
> tempting someone later to break it. ;-)
I'll document it better and make sure it's clear for me as well.
> > > > +static void put_object(struct memleak_object *object)
> > > > +{
> > > > + unsigned long flags;
> > > > +
> > > > + if (!atomic_dec_and_test(&object->use_count))
> > > > + return;
> > > > +
> > > > + /* should only get here after delete_object was called */
> > > > + BUG_ON(object->flags & OBJECT_ALLOCATED);
> > > > +
> > > > + spin_lock_irqsave(&object_list_lock, flags);
> > >
> > > We also need to write-hold the object_tree_lock, not?
> >
> > Not here, the memleak_object is removed from the object_tree in the
> > delete_object() function (called from the memleak_free callback). If it is
> > in the object_tree, it should have a use_count >= 1.
>
> So the code never calls the last put_object() without first having
> called delete_object() to remove it from the object_tree? The "last"
> put_object() being the one that decrements object->use_count to zero.
Yes.
> > > > +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> > > > +{
> > > > + struct list_head *n;
> > > > + struct memleak_object *next = NULL;
> > > > + unsigned long flags;
> > > > +
> > > > + ++(*pos);
> > > > + if (reported_leaks >= REPORTS_NR)
> > > > + goto out;
> > > > +
> > > > + spin_lock_irqsave(&object_list_lock, flags);
> > >
> > > Using a spinlock instead of rcu_read_lock() is OK, but only if all
> > > updates are also protected by this same spinlock. Which means that,
> > > given that find_and_get_object read-acquires object_tree_lock, deletions
> > > must be protected both by object_list_lock and by write-acquiring
> > > object_tree_lock. Or all calls to memleak_seq_next need to be covered
> > > by rcu_read_lock().
> >
> > The spin_lock here is only to retrieve the next object in the list but I
> > agree that even if the object_list modifications are protected by
> > > object_list_lock, put_object() could actually drop the use_count to 0,
> > > and the get_object() return value in this function isn't checked. If get_object() returns
> > successfully, I don't think an rcu_read_lock() is needed since
> > put_object() can no longer invoke free_object_rcu().
>
> OK, so let me see if I understand:
>
> The memleak_object passed in via the "v" argument to
> memleak_seq_next() was get_object()-ed by some prior
> call, either an earlier memleak_seq_next() or presumably
> by memleak_seq_start().
Yes.
> memleak_seq_start() -does- do its scan under RCU protection,
> so looks OK.
>
> I believe you also need RCU protection in memleak_seq_next()
> to prevent the next memleak_object from disappearing
> during the traversal. Yes, you do greatly decrease the
> odds of this happening by having irqs disabled, but the
> fact is that RCU is within its rights to end a grace
> period during this time.
I'll try to make the memleak_seq_next() function use rcu_dereference()
and rcu_read_lock(). At the moment, the "v" object has use_count >= 1
(from a previous get_object) and the next pointer is accessed under
object_list_lock, so no modifications to the list can happen
concurrently (even put_object acquires this lock when invoking
call_rcu). There is still the bug of not checking the get_object()
return value.
> Assuming that I do understand, as you say, if the get_object() in
> memleak_seq_next() fails, we could end up accessing freed-up memory on
> the next call to memleak_seq_next(), or even during the current one,
> assuming an aggressive RCU or an extended NMI, SMI, burst of ECC errors
> or some other delay. So I agree that it is necessary to check the return
> value of get_object().
Yes
> Also, I do see the put_object() call in memleak_seq_stop(), but it looks
> to me that this only does a put_object() on the last memleak_object.
> Who does the put_object() on the earlier memleak_object structures that
> were scanned by memleak_seq_next()? Or is there never more than one
> such object in a given list?
The previous object's use_count is decremented in memleak_seq_next()
just before returning "next", so between seq_start-seq_next and
seq_next-seq_stop there is only one object with an incremented
use_count. The memleak_seq_next() function may hold two such objects for
a short period of time.
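The hand-over-hand referencing described here (take a reference on the successor before dropping the one held on the current object) can be modelled in a small userspace sketch (hypothetical code; a plain integer stands in for the atomic use_count):

```c
#include <stddef.h>

struct node {
	struct node *next;
	int use_count;	/* simplified, non-atomic stand-in */
};

/*
 * Advance the iterator: take a reference on the successor before
 * dropping the one held on the current node, so at most two nodes
 * have an elevated use_count at any instant.
 */
static struct node *seq_advance(struct node *cur)
{
	struct node *next = cur->next;

	if (next)
		next->use_count++;	/* get_object() on the successor */
	cur->use_count--;		/* put_object() on the predecessor */
	return next;
}
```

Between the two updates inside seq_advance(), both nodes briefly carry an extra reference, which is the short window with two referenced objects noted above.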
I actually added a test for this in memleak_scan() (if DEBUG is defined)
and I've never got any reports. There may be some situations when for
very short periods of time the use_count is > 1 at the beginning of a
scan, usually when one of the memleak_scan_area or memleak_ignore
callbacks are invoked.
I'll revise the locking a bit and re-post the patches this week.
Thanks.
--
Catalin
* Re: [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support
2008-12-06 23:07 ` Catalin Marinas
@ 2008-12-07 23:19 ` Paul E. McKenney
0 siblings, 0 replies; 37+ messages in thread
From: Paul E. McKenney @ 2008-12-07 23:19 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-kernel
On Sat, Dec 06, 2008 at 11:07:30PM +0000, Catalin Marinas wrote:
> On Thu, 2008-12-04 at 08:55 -0800, Paul E. McKenney wrote:
> > On Thu, Dec 04, 2008 at 12:14:26PM +0000, Catalin Marinas wrote:
> > > On Wed, 2008-12-03 at 10:12 -0800, Paul E. McKenney wrote:
> > > > On Thu, Nov 20, 2008 at 11:30:34AM +0000, Catalin Marinas wrote:
> > > > > This patch adds the base support for the kernel memory leak
> > > > > detector. It traces the memory allocation/freeing in a way similar to
> > > > > the Boehm's conservative garbage collector, the difference being that
> > > > > the unreferenced objects are not freed but only shown in
> > > > > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > > > > overhead to memory allocations.
> > > >
> > > > I have some concerns about your locking/RCU design, please see
> > > > interspersed questions and comments.
> [...]
> > > I'll comment below as well but I'll first show my logic which I added as
> > > comment at the top of the memleak.c file:
> >
> > This is very helpful, thank you! (And please accept my apologies if I
> > missed seeing it in my earlier review.)
>
> You haven't missed it, that's the first time I posted this text.
Whew! ;-)
> > > * The following locks are used by kmemleak:
> > > *
> > > * - object_list_lock (spinlock): protects the object_list. This is the main
> > > * list holding the metadata (struct memleak_object) for the allocated
> > > * objects. The memleak_object structures are added to the list in the
> > > * create_object() function called from the memleak_alloc() callback. They
> > > * are removed from the list in put_object() if the object->use_count is 0
> >
> > This part sounds good. I would also add that once object->use_count is
> > zero, no one is allowed to increment it. Also, attempts to increment
> > object->use_count must be under the protection of rcu_read_lock(),
> > correct?
>
> So, to make sure I understand it correctly, the rcu_read_lock() is
> needed to protect the window between the point where the object
> pointer is obtained and the call to get_object().
More generally, it ensures that an RCU-protected object stays around
until the matching rcu_read_unlock() is reached. The guarantee call_rcu()
provides is to invoke the specified function only after all pre-existing
RCU read-side critical sections have completed, in other words, only
after any task that previously executed rcu_read_lock() has executed
the matching rcu_read_unlock().
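In kmemleak's terms, the lookup pattern being described might be sketched as follows (an illustrative kernel-style sketch, not the actual find_and_get_object() from the patch; lookup_object() is a hypothetical helper standing in for the prio tree search):

```c
/* Sketch of the RCU-protected lookup discussed above -- not the
 * posted kmemleak code.  rcu_read_lock() guarantees that even if the
 * object is concurrently deleted, free_object_rcu() cannot run until
 * the matching rcu_read_unlock(), so the atomic_inc_not_zero() inside
 * get_object() never touches freed memory. */
struct memleak_object *object;

rcu_read_lock();
object = lookup_object(ptr);       /* hypothetical prio tree search */
if (object && !get_object(object)) /* atomic_inc_not_zero() inside  */
	object = NULL;             /* use_count already 0: dying    */
rcu_read_unlock();
/* if object is non-NULL, we hold a reference; drop it later with
 * put_object() */
```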
> Would it also work if
> spin_lock_irqsave(object_list_lock) is used instead of rcu_read_lock()?
> The call_rcu() in put_object is bracketed with object_list_lock.
For some RCU implementations, but only by accident.
Just please don't do this!
Instead, consider wrapping rcu_read_lock() and rcu_read_unlock() around
the region of interest, if need be. The RCU read-side primitives are
quite cheap, so you are losing almost nothing (and in some cases exactly
nothing) by using them.
> BTW, I'll have a look if I could remove an object from the object_list
> in delete_object() rather than waiting until put_object().
If by doing this, you exclude all readers while removing, you are set.
> > > * - object_tree_lock (rwlock): protects the object_tree_root. When the
> > > * metadata is created in create_object(), it is added to the object prio
> > > * search tree and removed in delete_object() with this lock held
> > > * (write_lock). This lock is also acquired (read_lock) in
> > > * find_and_get_object() when an object needs to be looked up by a pointer
> > > * value (either during scanning or when changing its properties like
> > > * marking as false positive)
> >
> > Looks OK. I must confess that I am a bit fuzzy on the purpose of
> > object_tree_root vs. object_list.
>
> object_list holds all the memleak_objects in the system and it is
> traversed when preparing the scanning and also when reporting the leaks.
>
> object_tree_root is used to look up memleak_objects by the allocated
> memory block. In the past, this used to be a radix tree (with some
> lockdep problems) and later a hash. I now use a prio tree because it
> allows pointer ranges.
>
> Kmemleak could probably iterate over the object_tree_root when reporting
> but it is more convenient to report the leaks in the order they were
> allocated (preserved by object_list) since one leak may trigger many
> subsequent reports but they disappear once the first one is solved.
Very helpful, thank you!
> > > * - memleak_object.lock (spinlock): protects a memleak_object. Modifications
> > > * of the metadata (e.g. count) are protected by this lock. Note that some
> > > * members of this structure may be protected by other means (atomic or
> > > * object_list lock). This lock is also held when scanning the corresponding
> > > * object to avoid the kernel freeing it via the memleak_free() callback.
> > > * This is less heavyweight than holding a global lock like object_list_lock
> > > * during scanning
> >
> > OK, holding an object's lock can protect that object from deletion,
> > but only after you actually acquire the lock. There must be some other
> > mechanism preventing the object from being freed during the actual
> > acquisition of the lock.
> >
> > Now this might be the object_list_lock, object_tree_lock, RCU, or some
> > combination of the three, for example it might depend on how that object
> > is looked up.
>
> Correct. I'll have to review this again.
Fair enough!
> > > * Freeing a memleak_object is done via an RCU callback invoked from
> > > * put_object() when its use_count is 0 and after removing it from the
> > > * object_list. One of the reasons for the RCU is to delay the freeing and
> > > * avoid a recursive call into the allocator via kmem_cache_free(). Another
> > > * reason is to allow lock-less object_list traversal during memleak_scan().
> >
> > I did figure out the lock-less object_list traversal, but totally missed
> > the fact that you were using RCU to prevent infinite recursion. Cute!
>
> It wasn't documented, so pretty hard to guess.
I guess I don't feel quite so bad, then. ;-)
> > Also, the memleak_object must have been removed from object_tree before
> > its use_count can possibly go to 0, correct?
>
> Yes.
Good!
> > > > > +static void free_object_rcu(struct rcu_head *rcu)
> > > > > +{
> > > > > + struct hlist_node *elem, *tmp;
> > > > > + struct memleak_scan_area *area;
> > > > > + struct memleak_object *object =
> > > > > + container_of(rcu, struct memleak_object, rcu);
> > > > > +
> > > > > + /* once use_count is 0, there is no code accessing the object */
> > > >
> > > > OK, so we won't pass free_object_rcu() to call_rcu() until use_count
> > > > is equal to zero. And once use_count is zero, it is never incremented.
> > > > So the point of the RCU grace period is to ensure that all tasks
> > > > who didn't quite call get_object() soon enough get done failing
> > > > before we free up the object, correct?
> > > >
> > > > Which means that get_object() needs to be under rcu_read_lock()...
> > >
> > > My view here is that if use_count is 0, no other thread would be able to
> > > use this object. It will also be removed from the object_list and hence
> > > no other way to get this object.
> >
> > What if some other CPU picked up a pointer to the object just before it
> > was removed from the list? If that CPU was not under the protection of
> > rcu_read_lock(), and if that CPU was delayed, then the object could be
> > freed (and possibly re-allocated as something else) before the CPU got
> > around to doing the atomic_inc_not_zero().
>
> OK, I got it now.
It can indeed be a bit subtle, to be sure.
> > It looks to me that the code currently does the right thing here, just
> > want to make sure I understand the locking and that we don't end up
> > tempting someone later to break it. ;-)
>
> I'll document it better and make sure it's clear for me as well.
Sounds good, look forward to seeing the next version!
> > > > > +static void put_object(struct memleak_object *object)
> > > > > +{
> > > > > + unsigned long flags;
> > > > > +
> > > > > + if (!atomic_dec_and_test(&object->use_count))
> > > > > + return;
> > > > > +
> > > > > + /* should only get here after delete_object was called */
> > > > > + BUG_ON(object->flags & OBJECT_ALLOCATED);
> > > > > +
> > > > > + spin_lock_irqsave(&object_list_lock, flags);
> > > >
> > > > We also need to write-hold the object_tree_lock, not?
> > >
> > > Not here, the memleak_object is removed from the object_tree in the
> > > delete_object() function (via the memleak_free() callback). If it is
> > > in the object_tree, it should have a use_count >= 1.
> >
> > So the code never calls the last put_object() without first having
> > called delete_object() to remove it from the object_tree? The "last"
> > put_object() being the one that decrements object->use_count to zero.
>
> Yes.
Good!
> > > > > +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> > > > > +{
> > > > > + struct list_head *n;
> > > > > + struct memleak_object *next = NULL;
> > > > > + unsigned long flags;
> > > > > +
> > > > > + ++(*pos);
> > > > > + if (reported_leaks >= REPORTS_NR)
> > > > > + goto out;
> > > > > +
> > > > > + spin_lock_irqsave(&object_list_lock, flags);
> > > >
> > > > Using a spinlock instead of rcu_read_lock() is OK, but only if all
> > > > updates are also protected by this same spinlock. Which means that,
> > > > given that find_and_get_object read-acquires object_tree_lock, deletions
> > > > must be protected both by object_list_lock and by write-acquiring
> > > > object_tree_lock. Or all calls to memleak_seq_next need to be covered
> > > > by rcu_read_lock().
> > >
> > > The spin_lock here is only to retrieve the next object in the list but I
> > > agree that even if the object_list modifications are protected by
> > > object_list_lock, put_object() could actually drop the use_count to 0, and
> > > the get_object() return in this function isn't checked. If get_object() returns
> > > successfully, I don't think an rcu_read_lock() is needed since
> > > put_object() can no longer invoke free_object_rcu().
> >
> > OK, so let me see if I understand:
> >
> > The memleak_object passed in via the "v" argument to
> > memleak_seq_next() was get_object()-ed by some prior
> > call, either an earlier memleak_seq_next() or presumably
> > by memleak_seq_start().
>
> Yes.
OK, good! I might be (slowly) catching on here.
> > memleak_seq_start() -does- do its scan under RCU protection,
> > so looks OK.
> >
> > I believe you also need RCU protection in memleak_seq_next()
> > to prevent the next memleak_object from disappearing
> > during the traversal. Yes, you do greatly decrease the
> > odds of this happening by having irqs disabled, but the
> > fact is that RCU is within its rights to end a grace
> > period during this time.
>
> I'll try to make the memleak_seq_next() function use rcu_dereference()
> and rcu_read_lock(). ATM, the "v" object has use_count >= 1 (from a
> previous get_object) and the next pointer is accessed under
> object_list_lock, so no modifications to the list (even put_object
> acquires this lock when invoking call_rcu). There is still the bug with
> not checking the get_object() return.
Fair enough!
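The fix being agreed on here might be sketched roughly as follows (an illustrative reconstruction, not the posted patch; the names object_list, get_object() and put_object() are taken from the thread, and list_for_each_continue_rcu() is assumed to be the list primitive used):

```c
/* Sketch of memleak_seq_next() with the two fixes under discussion:
 * RCU protection for the list walk, and checking the get_object()
 * return so that objects whose use_count already reached zero are
 * skipped rather than dereferenced after they are freed. */
static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	struct memleak_object *prev = v, *next = NULL;
	struct list_head *n = &prev->object_list;

	++(*pos);

	rcu_read_lock();
	list_for_each_continue_rcu(n, &object_list) {
		next = list_entry(n, struct memleak_object, object_list);
		if (get_object(next))	/* may fail: use_count was 0 */
			break;		/* got a reference, stop here */
		next = NULL;		/* dying object, keep walking */
	}
	rcu_read_unlock();

	put_object(prev);		/* drop the previous reference */
	return next;
}
```

Between the get_object() on "next" and the put_object() on "prev" there are briefly two referenced objects, matching the "two such objects for a short period of time" observation above.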
> > Assuming that I do understand, as you say, if the get_object() in
> > memleak_seq_next() fails, we could end up accessing freed-up memory on
> > the next call to memleak_seq_next(), or even during the current one,
> > assuming an aggressive RCU or an extended NMI, SMI, burst of ECC errors
> > or some other delay. So I agree that it is necessary to check the return
> > value of get_object().
>
> Yes
Whew!
> > Also, I do see the put_object() call in memleak_seq_stop(), but it looks
> > to me that this only does a put_object() on the last memleak_object.
> > Where is the put_object() for the earlier memleak_object structures that
> > were scanned by memleak_seq_next()? Or is there never more than one
> > such object in a given list?
>
> The previous objects' use_count are decremented in memleak_seq_next()
> just before returning "next", so between seq_start-seq_next and
> seq_next-seq_stop, there is only one object with an incremented
> use_count. The memleak_seq_next() function may have two such objects for
> a short period of time.
>
> I actually added a test for this in memleak_scan() (if DEBUG is defined)
> and I've never got any reports. There may be some situations when for
> very short periods of time the use_count is > 1 at the beginning of a
> scan, usually when one of the memleak_scan_area or memleak_ignore
> callbacks is invoked.
>
> I'll revise the locking a bit and re-post the patches this week.
I look forward to seeing them!
Thanx, Paul
Thread overview: 37+ messages
2008-11-20 11:30 [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 01/11] kmemleak: Add the base support Catalin Marinas
2008-11-20 11:58 ` Ingo Molnar
2008-11-20 19:35 ` Pekka Enberg
2008-11-21 12:07 ` Catalin Marinas
2008-11-24 8:16 ` Pekka Enberg
2008-11-24 8:19 ` Pekka Enberg
2008-12-03 18:12 ` Paul E. McKenney
2008-12-04 12:14 ` Catalin Marinas
2008-12-04 16:55 ` Paul E. McKenney
2008-12-06 23:07 ` Catalin Marinas
2008-12-07 23:19 ` Paul E. McKenney
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 02/11] kmemleak: Add documentation on the memory leak detector Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 03/11] kmemleak: Add the memory allocation/freeing hooks Catalin Marinas
2008-11-20 12:00 ` Ingo Molnar
2008-11-20 19:30 ` Pekka Enberg
2008-11-21 11:07 ` Catalin Marinas
2008-11-24 8:19 ` Pekka Enberg
2008-11-24 10:18 ` Catalin Marinas
2008-11-24 10:35 ` Pekka Enberg
2008-11-24 10:43 ` Catalin Marinas
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 04/11] kmemleak: Add modules support Catalin Marinas
2008-11-20 12:03 ` Ingo Molnar
2008-11-20 11:30 ` [PATCH 2.6.28-rc5 05/11] kmemleak: Add support for i386 Catalin Marinas
2008-11-20 12:16 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 06/11] kmemleak: Add support for ARM Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 07/11] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
2008-11-20 12:09 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 08/11] kmemleak: Enable the building of the memory leak detector Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 09/11] kmemleak: Keep the __init functions after initialization Catalin Marinas
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 10/11] kmemleak: Simple testing module for kmemleak Catalin Marinas
2008-11-20 12:11 ` Ingo Molnar
2008-11-20 11:31 ` [PATCH 2.6.28-rc5 11/11] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
2008-11-20 12:10 ` [PATCH 2.6.28-rc5 00/11] Kernel memory leak detector (updated) Ingo Molnar
2008-11-20 17:54 ` Catalin Marinas
2008-11-20 12:22 ` Ingo Molnar
2008-11-20 18:10 ` Catalin Marinas