From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>, "Zheng, Lv" <lv.zheng@intel.com>,
"Wysocki, Rafael J" <rafael.j.wysocki@intel.com>,
"Moore, Robert" <robert.moore@intel.com>,
Jörg Rödel <joro@8bytes.org>, lkml <linux-kernel@vger.kernel.org>,
Linux ACPI <linux-acpi@vger.kernel.org>
Subject: Re: 174cc7187e6f ACPICA: Tables: Back port acpi_get_table_with_size() and early_acpi_os_unmap_memory() from Linux kernel
Date: Mon, 9 Jan 2017 16:21:02 -0800
Message-ID: <20170110002102.GI3800@linux.vnet.ibm.com>
In-Reply-To: <CAJZ5v0hnperiMk7tz7G6hqwuo4XTkzCjR-N2KC43QuVQoV6AgA@mail.gmail.com>
On Tue, Jan 10, 2017 at 12:42:47AM +0100, Rafael J. Wysocki wrote:
> On Tue, Jan 10, 2017 at 12:32 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > On Tue, Jan 10, 2017 at 12:15:01AM +0100, Borislav Petkov wrote:
> >> On Mon, Jan 09, 2017 at 02:18:31PM -0800, Paul E. McKenney wrote:
> >> > @@ -690,6 +690,8 @@ void synchronize_rcu_expedited(void)
> >> > {
> >> > struct rcu_state *rsp = rcu_state_p;
> >> >
> >> > + if (!rcu_scheduler_active)
> >> > + return;
> >> > _synchronize_rcu_expedited(rsp, sync_rcu_exp_handler);
> >> > }
> >> > EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
> >>
> >> That doesn't work and it is because of those damn what goes before what
> >> boot sequence issues :-\
> >>
> >> We have:
> >>
> >> rest_init()
> >> |-> rcu_scheduler_starting() ---> that sets rcu_scheduler_active = 1;
> >> |-> kernel_thread(kernel_init, NULL, CLONE_FS);
> >> |-> kernel_init()
> >> |-> kernel_init_freeable()
> >> |-> native_smp_prepare_cpus(setup_max_cpus)
> >> |-> default_setup_apic_routing
> >> |-> enable_IR_x2apic
> >> |-> irq_remapping_prepare
> >> |-> amd_iommu_prepare
> >> |-> iommu_go_to_state
> >> |-> acpi_put_table(ivrs_base);
> >> |-> acpi_tb_put_table(table_desc);
> >> |-> acpi_tb_invalidate_table(table_desc);
> >> |-> acpi_tb_release_table(...)
> >> |-> acpi_os_unmap_memory
> >> |-> acpi_os_unmap_iomem
> >> |-> acpi_os_map_cleanup
> >> |-> synchronize_rcu_expedited()
> >>
> >> Now here we have rcu_scheduler_active already set so the test doesn't
> >> hit and we hang.
> >>
> >> So we must do it differently.
> >
> > Yeah, there is a window just as the scheduler is starting where things don't
> > work.
> >
> > We could move rcu_scheduler_starting() later, as long as there
> > is no chance of preemption or context switch before it is invoked.
> > Would that help in this case, or are we already context switching before
> > acpi_os_map_cleanup() is invoked?
>
> In the particular AMD IOMMU case it doesn't look like we are, but we
> do in other cases.
>
> > (If we are already context switching,
> > short-circuiting synchronize_rcu_expedited() would be a bug.)
>
> It may be easier to make the caller avoid RCU synchronization
> altogether if that's not necessary and the caller should actually be
> able to figure out when that's the case.
>
> The patch from Lv at https://patchwork.kernel.org/patch/9504277/ goes
> in the right direction IMO, but I'm not yet convinced that this is the
> right one.
From the RCU end, I could force expedited grace periods to translate to
normal grace periods during that window of time, and then make sure that
RCU's grace-period kthreads are spawned beforehand. Looking into this...
Thanx, Paul