public inbox for linux-kernel@vger.kernel.org
From: Chris Wright <chrisw@sous-sol.org>
To: Andi Kleen <ak@suse.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>,
	Dave Jones <davej@redhat.com>, Chuck Ebbert <cebbert@redhat.com>,
	linux-kernel <linux-kernel@vger.kernel.org>
Subject: [PATCH] x86: fix oprofile double free (was Re: Multiple free during oprofile unload)
Date: Fri, 25 May 2007 19:27:42 -0700	[thread overview]
Message-ID: <20070526022741.GD3390@sequoia.sous-sol.org> (raw)
In-Reply-To: <20070525193737.34609de2@the-village.bc.nu>

* Alan Cox (alan@lxorguk.ukuu.org.uk) wrote:
> I'd agree entirely with Dave - if you are applying a fix to something
> that is currently totally broken which may make it work and which doesn't
> affect any other bit of code then it goes into the stable tree.

And, in this case we're in luck.  It's not released in any -stable tree
yet (it's queued for the next release), so there's plenty of time to
fix it up before the next -stable release.

Something like below should fix it.

thanks,
-chris
--

Subject: [PATCH] x86: fix oprofile double free

From: Chris Wright <chrisw@sous-sol.org>

Chuck reports that the recent fix from Andi to oprofile, commit
6c977aad03a18019015035958c65b6729cd0574c, introduces a double
free.  Each cpu's cpu_msrs is set up to point to cpu 0's arrays, which
causes free_msrs() to free cpu 0's pointers once for each possible cpu.
Rather than copy the pointers, do a deep copy instead.

Signed-off-by: Chris Wright <chrisw@sous-sol.org>
---

 arch/i386/oprofile/nmi_int.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/i386/oprofile/nmi_int.c b/arch/i386/oprofile/nmi_int.c
index a7c0783..0c39443 100644
--- a/arch/i386/oprofile/nmi_int.c
+++ b/arch/i386/oprofile/nmi_int.c
@@ -211,8 +211,14 @@ static int nmi_setup(void)
 	/* Assume saved/restored counters are the same on all CPUs */
 	model->fill_in_addresses(&cpu_msrs[0]);
 	for_each_possible_cpu (cpu) {
-		if (cpu != 0)
-			cpu_msrs[cpu] = cpu_msrs[0];
+		if (cpu != 0) {
+			memcpy(cpu_msrs[cpu].counters, cpu_msrs[0].counters,
+				sizeof(struct op_msr) * model->num_counters);
+				
+			memcpy(cpu_msrs[cpu].controls, cpu_msrs[0].controls,
+				sizeof(struct op_msr) * model->num_controls);
+		}
+
 	}
 	on_each_cpu(nmi_save_registers, NULL, 0, 1);
 	on_each_cpu(nmi_cpu_setup, NULL, 0, 1);


Thread overview: 10+ messages
2007-05-25 15:00 Multiple free during oprofile unload Chuck Ebbert
2007-05-25 15:42 ` Andi Kleen
2007-05-25 16:18   ` Dave Jones
2007-05-25 17:13     ` Andi Kleen
2007-05-25 17:38       ` Dave Jones
2007-05-25 18:02         ` Andi Kleen
2007-05-25 18:37           ` Alan Cox
2007-05-26  2:27             ` Chris Wright [this message]
2007-05-26  7:43               ` [PATCH] x86: fix oprofile double free (was Re: Multiple free during oprofile unload) Andi Kleen
2007-05-26 14:03                 ` Chris Wright
