From: Dave Martin <Dave.Martin@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: arm64: unhandled level 0 translation fault
Date: Fri, 15 Dec 2017 17:11:30 +0000
Message-ID: <20171215171129.GS22781@e103592.cambridge.arm.com>
In-Reply-To: <CAMuHMdWio0KnJc3DQeQyf-MHpDC=tc3cJLNK7MmaL=MdDz45UQ@mail.gmail.com>

On Fri, Dec 15, 2017 at 02:30:00PM +0100, Geert Uytterhoeven wrote:
> Hi Dave,
> 
> On Fri, Dec 15, 2017 at 12:23 PM, Dave Martin <Dave.Martin@arm.com> wrote:
> > On Thu, Dec 14, 2017 at 07:08:27PM +0100, Geert Uytterhoeven wrote:
> >> On Thu, Dec 14, 2017 at 4:24 PM, Dave P Martin <Dave.Martin@arm.com> wrote:

[...]

> >> > Good work on the bisect -- I'll need to have a think about this...
> >> >
> >> > That patch fixes a genuine problem so we can't just revert it.
> >> >
> >> > What if you revert _just this function_ back to what it was in v4.14?
> >>
> >> With fpsimd_update_current_state() reverted to v4.14, and
> >>
> >> -               __this_cpu_write(fpsimd_last_state, st);
> >> +               __this_cpu_write(fpsimd_last_state.st, st);
> >>
> >> to make it build, the problem seems to be fixed, too.
> 
> > Interesting. If I apply that to v4.14 and then flatten the new code for
> > CONFIG_ARM64_SVE=n, I get:
> >
> > Working:
> >
> > void fpsimd_update_current_state(struct fpsimd_state *state)
> > {
> >         local_bh_disable();
> >
> >         fpsimd_load_state(state);
> >         if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
> >                 struct fpsimd_state *st = &current->thread.fpsimd_state;
> >
> >                 __this_cpu_write(fpsimd_last_state.st, st);
> >                 st->cpu = smp_processor_id();
> >         }
> >
> >         local_bh_enable();
> > }
> >
> > Broken:
> >
> > void fpsimd_update_current_state(struct fpsimd_state *state)
> > {
> >         struct fpsimd_last_state_struct *last;
> >         struct fpsimd_state *st;
> >
> >         local_bh_disable();
> >
> >         current->thread.fpsimd_state = *state;
> >         fpsimd_load_state(&current->thread.fpsimd_state);
> >
> >         if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
> >                 last = this_cpu_ptr(&fpsimd_last_state);
> >                 st = &current->thread.fpsimd_state;
> >
> >                 last->st = st;
> >                 last->sve_in_use = test_thread_flag(TIF_SVE);
> >                 st->cpu = smp_processor_id();
> >         }
> >
> >         local_bh_enable();
> > }
> >
> > Can you try my flattened "broken" version by itself and see if that does
> > reproduce the bug?  If not, my flattening may be making bad assumptions...
> >
> > Assuming the "broken" version reproduces the bug, I can't yet see exactly
> > where the breakage comes from.
> 
> Correct: the "Working" version above works, and the "Broken" one
> reproduces the bug.
> 
> > The two important differences here seem to be
> >
> > 1) Staging the state via current->thread.fpsimd_state instead of loading
> > directly:
> >
> > -       fpsimd_load_state(state);
> > +       current->thread.fpsimd_state = *state;
> > +       fpsimd_load_state(&current->thread.fpsimd_state);
> 
> The change above introduces the breakage.
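
That matches my working theory.  From memory of the v4.14 sources (so
treat the details below as a sketch, not a quote): the sigreturn path
in arch/arm64/kernel/signal.c builds the state on its own stack and
fills in only the register payload, never the cpu field, roughly:

	struct fpsimd_state fpsimd;	/* on the stack; .cpu never written */

	/* copy the FP and status/control registers from the frame */
	err = __copy_from_user(fpsimd.vregs, ctx->vregs,
			       sizeof(fpsimd.vregs));
	__get_user_error(fpsimd.fpsr, &ctx->fpsr, err);
	__get_user_error(fpsimd.fpcr, &ctx->fpcr, err);

	if (!err)
		fpsimd_update_current_state(&fpsimd);

So the whole-struct copy in the "broken" version also overwrites
current->thread.fpsimd_state.cpu with stack junk, and when
TIF_FOREIGN_FPSTATE happens to be clear, the branch that would repair
st->cpu never runs.  If the junk value later matches a CPU whose
per-cpu fpsimd_last_state still points at this task's state, that CPU
can switch the task in without setting TIF_FOREIGN_FPSTATE, and the
task runs with stale registers.  (This is essentially the scenario
Will describes in his separate reply -- see below.)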
> 
> > 2) Using this_cpu_ptr() + assignment instead of __this_cpu_write() when
> > reassociating the task's fpsimd context with the cpu:
> >
> >  {
> > +       struct fpsimd_last_state_struct *last;
> > +       struct fpsimd_state *st;
> >
> > [...]
> >
> >         if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
> > -               struct fpsimd_state *st = &current->thread.fpsimd_state;
> > -
> > -               __this_cpu_write(fpsimd_last_state.st, st);
> > -               st->cpu = smp_processor_id();
> > +               last = this_cpu_ptr(&fpsimd_last_state);
> > +               st = &current->thread.fpsimd_state;
> > +
> > +               last->st = st;
> > +               last->sve_in_use = test_thread_flag(TIF_SVE);
> > +               st->cpu = smp_processor_id();
> >         }
> 
> The change above is fine.
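
Agreed -- that's expected.  With softirqs (and, on a non-PREEMPT_RT
kernel, preemption) disabled by local_bh_disable(), the two accessors
are interchangeable; the this_cpu_ptr() form just keeps a stable
pointer around for the extra sve_in_use write.  i.e., these two are
equivalent here:

	__this_cpu_write(fpsimd_last_state.st, st);

	last = this_cpu_ptr(&fpsimd_last_state);	/* pointer stays valid   */
	last->st = st;					/* while preemption is off */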

Thanks for this.

Will came up with a convincing hypothesis for how the dodgy change broke
things here -- see the diff in his separate reply.

I'll cook up a more complete fix, but the diff Will provided should at
least get things working.
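
For the archives, the shape of that fix (paraphrased from memory --
see Will's reply for the real diff) is to copy only the user-visible
register state, so that the cpu field is left alone:

-	current->thread.fpsimd_state = *state;
+	current->thread.fpsimd_state.user_fpsimd = state->user_fpsimd;
 	fpsimd_load_state(&current->thread.fpsimd_state);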

Cheers
---Dave
