public inbox for linux-kernel@vger.kernel.org
From: Mark Rutland <mark.rutland@arm.com>
To: Pavel Machek <pavel@ucw.cz>
Cc: Kees Cook <keescook@chromium.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	kernel list <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	"kernel-hardening@lists.openwall.com" 
	<kernel-hardening@lists.openwall.com>
Subject: Re: [kernel-hardening] rowhammer protection [was Re: Getting interrupt every million cache misses]
Date: Fri, 28 Oct 2016 15:05:22 +0100	[thread overview]
Message-ID: <20161028140522.GH5806@leverpostej> (raw)
In-Reply-To: <20161028112136.GA5635@amd>

Hi,

On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > Has this been tested on a system vulnerable to rowhammer, and if so, was
> > it reliable in mitigating the issue?
> > 
> > Which particular attack codebase was it tested against?
> 
> I have rowhammer-test here,
> 
> commit 9824453fff76e0a3f5d1ac8200bc6c447c4fff57
> Author: Mark Seaborn <mseaborn@chromium.org>

... from which repo?

> I do not have vulnerable machine near me, so no "real" tests, but
> I'm pretty sure it will make the error no longer reproducible with the
> newer version. [Help welcome ;-)]

Even if we hope this works, I think we have to be very careful with that
kind of assertion. Until we have data as to its efficacy, I don't think
we should claim that this is an effective mitigation.

> > > +struct perf_event_attr rh_attr = {
> > > +	.type	= PERF_TYPE_HARDWARE,
> > > +	.config = PERF_COUNT_HW_CACHE_MISSES,
> > > +	.size	= sizeof(struct perf_event_attr),
> > > +	.pinned	= 1,
> > > +	/* FIXME: it is 1000000 per cpu. */
> > > +	.sample_period = 500000,
> > > +};
> > 
> > I'm not sure that this is general enough to live in core code, because:
> 
> Well, I'd like to postpone debate 'where does it live' to the later
> stage. The problem is not arch-specific, the solution is not too
> arch-specific either. I believe we can use Kconfig to hide it from
> users where it does not apply. Anyway, lets decide if it works and
> where, first.

You seem to have forgotten the Drammer case here, which this would not
have protected against. I'm not sure, but I suspect that we could have
similar issues with mappings using other attributes (e.g. write-through),
as these would cause the memory traffic without cache miss events.

> > * the precise semantics of performance counter events varies drastically
> >   across implementations. PERF_COUNT_HW_CACHE_MISSES, might only map to
> >   one particular level of cache, and/or may not be implemented on all
> >   cores.
> 
> If it maps to one particular cache level, we are fine (or maybe will
> trigger protection too often). If some cores are not counted, that's bad.

Perhaps, but that depends on a number of implementation details. If "too
often" means "all the time", people will turn this off when they could
otherwise have been protected (e.g. if we can accurately monitor the
last level of cache).

> > * On some implementations, it may be that the counters are not
> >   interchangeable, and for those this would take away
> >   PERF_COUNT_HW_CACHE_MISSES from existing users.
> 
> Yup. Note that with this kind of protection, one missing performance
> counter is likely to be small problem.

That depends. Who chooses when to turn this on? If it's down to the
distro, this can adversely affect users with perfectly safe DRAM.

> > > +	/* FIXME msec per usec, reverse logic? */
> > > +	if (delta < 64 * NSEC_PER_MSEC)
> > > +		mdelay(56);
> > > +}
> > 
> > If I round-robin my attack across CPUs, how much does this help?
> 
> See below for new explanation. With 2 CPUs, we are fine. On monster
> big-little 8-core machines, we'd probably trigger protection too
> often.

We see larger core counts in mobile devices these days. In China,
octa-core phones are popular, for example. Servers go much larger.

> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index e24e981..c6ffcaf 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -315,6 +315,7 @@ config PGTABLE_LEVELS
>  
>  source "init/Kconfig"
>  source "kernel/Kconfig.freezer"
> +source "kernel/events/Kconfig"
>  
>  menu "Processor type and features"
>  
> diff --git a/kernel/events/Kconfig b/kernel/events/Kconfig
> new file mode 100644
> index 0000000..7359427
> --- /dev/null
> +++ b/kernel/events/Kconfig
> @@ -0,0 +1,9 @@
> +config NOHAMMER
> +        tristate "Rowhammer protection"
> +        help
> +	  Enable rowhammer attack prevention. Will degrade system
> +	  performance under attack so much that attack should not
> +	  be feasible.


I think that this must make it clear that this is a best-effort approach
(i.e. it does not guarantee that an attack is impossible), and should
also make clear that the performance penalty may occur in other
situations.

[...]

> +static struct perf_event_attr rh_attr = {
> +	.type	= PERF_TYPE_HARDWARE,
> +	.config = PERF_COUNT_HW_CACHE_MISSES,
> +	.size	= sizeof(struct perf_event_attr),
> +	.pinned	= 1,
> +	.sample_period = 10000,
> +};

What kind of overhead (just from taking the interrupt) will this come
with?

> +/*
> + * How often is the DRAM refreshed. Setting it too high is safe.
> + */

Stale comment? Given the check against delta below, this doesn't look to
be true.

> +static int dram_refresh_msec = 64;
> +
> +static DEFINE_PER_CPU(struct perf_event *, rh_event);
> +static DEFINE_PER_CPU(u64, rh_timestamp);
> +
> +static void rh_overflow(struct perf_event *event, struct perf_sample_data *data, struct pt_regs *regs)
> +{
> +	u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
> +	u64 now = ktime_get_mono_fast_ns();
> +	s64 delta = now - *ts;
> +
> +	*ts = now;
> +
> +	if (delta < dram_refresh_msec * NSEC_PER_MSEC)
> +		mdelay(dram_refresh_msec);
> +}

[...]

> +/*
> + * DRAM is shared between CPUs, but these performance counters are per-CPU.
> + */
> +	int max_attacking_cpus = 2;

As above, many systems today have more than two CPUs. In the Drammer
paper, it looks like the majority had four.

Thanks
Mark.

Thread overview: 44+ messages
2016-10-26 20:54 Getting interrupt every million cache misses Pavel Machek
2016-10-27  8:28 ` Peter Zijlstra
2016-10-27  8:46   ` Pavel Machek
2016-10-27  9:15     ` Peter Zijlstra
2016-10-27  9:11   ` Pavel Machek
2016-10-27  9:33     ` Peter Zijlstra
2016-10-27 20:40       ` Kees Cook
2016-10-27 21:27         ` rowhammer protection [was Re: Getting interrupt every million cache misses] Pavel Machek
2016-10-28  7:07           ` Ingo Molnar
2016-10-28  8:50             ` Pavel Machek
2016-10-28  8:59               ` Ingo Molnar
2016-10-28 11:55                 ` Pavel Machek
2016-10-28  9:04               ` Peter Zijlstra
2016-10-28  9:27                 ` Vegard Nossum
2016-10-28  9:35                   ` Ingo Molnar
2016-10-28  9:47                     ` Vegard Nossum
2016-10-28  9:53                     ` [kernel-hardening] " Mark Rutland
2016-10-28 11:27                 ` Pavel Machek
2016-10-28  9:51           ` [kernel-hardening] " Mark Rutland
2016-10-28 11:21             ` Pavel Machek
2016-10-28 14:05               ` Mark Rutland [this message]
2016-10-28 14:18                 ` Peter Zijlstra
2016-10-28 18:30                   ` Pavel Machek
2016-10-28 18:48                     ` Peter Zijlstra
2016-11-02 18:13                   ` Pavel Machek
2016-10-28 17:27                 ` Pavel Machek
2016-10-29 13:06                   ` Daniel Gruss
2016-10-29 19:42                     ` Pavel Machek
2016-10-29 20:05                       ` Daniel Gruss
2016-10-29 21:05                         ` Pavel Machek
2016-10-29 21:07                           ` Daniel Gruss
2016-10-29 21:45                             ` Pavel Machek
2016-10-29 21:49                               ` Daniel Gruss
2016-10-29 22:01                                 ` Pavel Machek
2016-10-29 22:02                                   ` Daniel Gruss
2016-10-31  8:27                 ` Pavel Machek
2016-10-31 14:47                   ` Mark Rutland
2016-10-31 21:13                     ` Pavel Machek
2016-10-31 22:09                       ` Mark Rutland
2016-11-01  6:33                   ` Ingo Molnar
2016-11-01  7:20                     ` Daniel Micay
2016-11-01  7:53                     ` Daniel Gruss
2016-11-01  8:10                     ` Pavel Machek
2016-11-01  8:13                       ` Daniel Gruss
