public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* splat in kretprobe in get_task_mm(current)
@ 2014-06-03 17:39 Peter Moody
  2014-06-03 21:53 ` Peter Moody
  0 siblings, 1 reply; 8+ messages in thread
From: Peter Moody @ 2014-06-03 17:39 UTC (permalink / raw)
  To: ananth, anil.s.keshavamurthy, davem, masami.hiramatsu.pt
  Cc: linux-kernel, Kees Cook

[-- Attachment #1: Type: text/plain, Size: 1956 bytes --]

Hi folks,

I've managed to trigger the following splat on 3.15.0-rc8 using the attached kretprobe module.

[  339.208634] BUG: spinlock cpu recursion on CPU#4, rm/8733
[  339.208643]  lock: kretprobe_table_locks+0x600/0x2000, .magic: dead4ead, .owner: rm/8734, .owner_cpu: 4
[  339.208647] CPU: 4 PID: 8733 Comm: rm Tainted: G          IOE 3.15.0-rc8-splat+ #14
[  339.208648] Hardware name: Dell Inc. Precision WorkStation T3500  /09KPNV, BIOS A10 01/21/2011
[  339.208650]  ffff88044d3a2570 ffff880454e63d48 ffffffff81773413 0000000000000007
[  339.208654]  ffffffff82842080 ffff880454e63d68 ffffffff8176ec74 ffffffff82842080
[  339.208658]  ffffffff81a8b6a6 ffff880454e63d88 ffffffff8176ec9f ffffffff82842080
[  339.208662] Call Trace:
[  339.208667]  [<ffffffff81773413>] dump_stack+0x46/0x58
[  339.208671]  [<ffffffff8176ec74>] spin_dump+0x8f/0x94
[  339.208674]  [<ffffffff8176ec9f>] spin_bug+0x26/0x2b
[  339.208678]  [<ffffffff810c4195>] do_raw_spin_lock+0x105/0x190
[  339.208683]  [<ffffffff8177c7c0>] _raw_spin_lock_irqsave+0x70/0x90
[  339.208687]  [<ffffffff817839dc>] ? kretprobe_hash_lock+0x6c/0x80
[  339.208690]  [<ffffffff8177a86e>] ? mutex_unlock+0xe/0x10
[  339.208693]  [<ffffffff817839dc>] kretprobe_hash_lock+0x6c/0x80
[  339.208697]  [<ffffffff8177f16d>] trampoline_handler+0x3d/0x220
[  339.208700]  [<ffffffff811bc537>] ? kfree+0x147/0x190
[  339.208703]  [<ffffffff8177f0fe>] kretprobe_trampoline+0x25/0x57
[  339.208707]  [<ffffffff811e28e8>] ? do_execve+0x18/0x20
[  339.208710]  [<ffffffff817862a9>] stub_execve+0x69/0xa0

Unfortunately, triggering the splat is kind of a pain: so far I've only been able to trigger it by building ChromeOS. The CrOS build process kicks off dozens of processes (32 in this case) to fetch/build the various packages for the system image. I can try to come up with a better reproducer if this splat and module aren't enough.

If I remove the get_task_mm()/mmput() calls from the handler, I avoid the splat.

Cheers,
peter


[-- Attachment #2: exec_mm_probe.c --]
[-- Type: text/x-csrc, Size: 951 bytes --]

#include <linux/version.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/kprobes.h>

static int cntr;

/* Return-probe handler: invoked from kretprobe_trampoline when the
 * probed do_execve() returns. */
static int exec_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
  int ret = (int)regs_return_value(regs);
  if (likely(!IS_ERR_VALUE(ret))) {
     struct mm_struct *mm = get_task_mm(current);
     if (mm)
       mmput(mm);
  }
  return 0;
}

static struct kretprobe exec_kretprobe = {
  .kp.symbol_name = "do_execve",
  .handler = exec_handler,
};

static int __init probe_init(void)
{
  int err;
  err = register_kretprobe(&exec_kretprobe);
  if (err) {
    pr_err("register failed: %d\n", err);
    return err;
  }
  pr_info("exec_mm_probe loaded.\n");
  cntr = 0;
  return 0;
}

static void __exit probe_exit(void)
{
  unregister_kretprobe(&exec_kretprobe);
  pr_info("exec_mm_probe unloaded.\n");
}

MODULE_LICENSE("GPL v2");

module_init(probe_init);
module_exit(probe_exit);


end of thread, other threads:[~2014-06-04 23:01 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-03 17:39 splat in kretprobe in get_task_mm(current) Peter Moody
2014-06-03 21:53 ` Peter Moody
2014-06-04  8:03   ` Masami Hiramatsu
2014-06-04 14:07     ` Masami Hiramatsu
2014-06-04 15:23       ` Peter Moody
2014-06-04 22:49         ` Masami Hiramatsu
2014-06-04 23:00           ` Peter Moody
2014-06-04 16:05       ` (ltc-kernel 9473) " Masami Hiramatsu
