From: catalin.marinas@arm.com (Catalin Marinas)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/3] ARM: cacheflush: don't bother rounding to nearest vma
Date: Wed, 27 Mar 2013 13:08:36 +0000 [thread overview]
Message-ID: <20130327130836.GA1863@MacBook-Pro.local> (raw)
In-Reply-To: <20130327124352.GC18429@mudshark.cambridge.arm.com>
On Wed, Mar 27, 2013 at 12:43:52PM +0000, Will Deacon wrote:
> On Wed, Mar 27, 2013 at 12:21:59PM +0000, Catalin Marinas wrote:
> > On Wed, Mar 27, 2013 at 12:15:12PM +0000, Will Deacon wrote:
> > > On Wed, Mar 27, 2013 at 11:09:38AM +0000, Catalin Marinas wrote:
> > > > While this would work, it introduces the possibility of a DoS where
> > > > an application passes a bigger valid range (e.g. the kernel linear
> > > > mapping) and the kernel code would not be preempted (with
> > > > CONFIG_PREEMPT disabled). IIRC, that's why Russell rejected such a
> > > > patch a while back.
> > >
> > > Hmm, I'm not sure I buy that argument. Firstly, you can't just pass a kernel
> > > linear mapping address -- we'll fault straight away because it's not a
> > > userspace address.
> >
> > Fault where?
>
> I was expecting something like an access_ok check, but you're right, we
> don't have one (and I guess that's not strictly needed given that flushing
> isn't destructive). I still find it a bit scary that we allow userspace to
> pass kernel addresses through, though -- especially if there's something
> like a DMA or CPU suspend operation running on another core.
Currently we don't allow kernel addresses since we require a valid vma.
If we drop the vma search, we should add an access_ok check.
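As a rough illustration only (the helper name and calling convention below
are my assumptions, not the actual code), the check could sit at the top of
the syscall handler:

#include <linux/uaccess.h>	/* access_ok() */

/*
 * Hypothetical sketch: reject kernel addresses up front once the vma
 * lookup is gone. The function name and arguments are assumed for
 * illustration.
 */
static inline int cacheflush_check_range(unsigned long start,
					 unsigned long end)
{
	if (end < start ||
	    !access_ok(VERIFY_READ, (void __user *)start, end - start))
		return -EFAULT;

	return 0;
}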
> > > Secondly, what's to stop an application from mmaping a large area into
> > > a single VMA and giving rise to the same situation? Finally,
> > > interrupts are enabled during this operation, so I don't understand
> > > how you can trigger a DoS, irrespective of the preempt configuration.
> >
> > You can prevent context switching to other threads. But I agree, with a
> > large vma (which is already faulted in), you can get similar behaviour.
>
> So the easy fix is to split the range up into chunks and call cond_resched
> after processing each one.
This would work.
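Something along those lines, as a minimal sketch of the chunking idea (the
chunk size and helper names here are my assumptions rather than the final
patch):

#include <linux/kernel.h>	/* min() */
#include <linux/sched.h>	/* cond_resched() */
#include <asm/cacheflush.h>	/* flush_cache_user_range(), assumed helper */

static int do_cflush_range(unsigned long start, unsigned long end)
{
	int ret = 0;

	do {
		unsigned long chunk = min(end, start + PAGE_SIZE);

		/* flush one chunk of the user range */
		ret = flush_cache_user_range(start, chunk);
		if (ret)
			break;

		/* let other threads run before the next chunk */
		cond_resched();
		start = chunk;
	} while (start < end);

	return ret;
}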
--
Catalin
Thread overview: 13+ messages
2013-03-25 18:18 [PATCH 0/3] Optimise cache-flushing system call and add iovec variant Will Deacon
2013-03-25 18:18 ` [PATCH 1/3] ARM: cacheflush: don't round address range up to nearest page Will Deacon
2013-03-27 11:05 ` Catalin Marinas
2013-03-25 18:18 ` [PATCH 2/3] ARM: cacheflush: don't bother rounding to nearest vma Will Deacon
2013-03-27 11:09 ` Catalin Marinas
2013-03-27 12:15 ` Will Deacon
2013-03-27 12:21 ` Catalin Marinas
2013-03-27 12:43 ` Will Deacon
2013-03-27 13:08 ` Catalin Marinas [this message]
2013-03-25 18:18 ` [PATCH 3/3] ARM: cacheflush: add new iovec-based cache flushing system call Will Deacon
2013-03-27 11:12 ` Catalin Marinas
2013-05-23 10:52 ` Will Deacon
2013-03-25 18:44 ` [PATCH 0/3] Optimise cache-flushing system call and add iovec variant Jonathan Austin