From: Laszlo Ersek <lersek@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: boris.ostrovsky@oracle.com, qemu-devel@nongnu.org,
	aaron.young@oracle.com
Subject: Re: [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM
Date: Tue, 8 Sep 2020 11:35:43 +0200
Message-ID: <83fc69bf-cf64-06f9-522f-e5b3adebd7b7@redhat.com>
In-Reply-To: <20200908093933.1b33ab5b@redhat.com>

On 09/08/20 09:39, Igor Mammedov wrote:
> On Mon, 7 Sep 2020 17:17:52 +0200
> Laszlo Ersek <lersek@redhat.com> wrote:

>>    1              Method (CSCN, 0, Serialized)
>>    2              {
>>    3                  Acquire (\_SB.PCI0.PRES.CPLK, 0xFFFF)
>>    4                  Name (CNEW, Package (0xFF){})
>>    5                  Local_Uid = Zero
>>    6                  Local_HasJob = One
>>    7                  While ((Local_HasJob == One))
>>    8                  {
>>    9                      Local_HasJob = Zero
>>   10                      Local_HasEvent = One
>>   11                      Local_NumAddedCpus = Zero
>>   12                      While (((Local_HasEvent == One) && (Local_Uid < 0x08)))
>>   13                      {
>>   14                          Local_HasEvent = Zero
>>   15                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>   16                          \_SB.PCI0.PRES.CCMD = Zero
>>   17                          If ((\_SB.PCI0.PRES.CDAT < Local_Uid))
>>   18                          {
>>   19                              Break
>>   20                          }
>>   21
>>   22                          If ((Local_NumAddedCpus == 0xFF))
>>   23                          {
>>   24                              Local_HasJob = One
>>   25                              Break
>>   26                          }
>>   27
>>   28                          Local_Uid = \_SB.PCI0.PRES.CDAT
>>   29                          If ((\_SB.PCI0.PRES.CINS == One))
>>   30                          {
>>   31                              CNEW [Local_NumAddedCpus] = Local_Uid
>>   32                              Local_NumAddedCpus++
>>   33                              Local_HasEvent = One
>>   34                          }
>>   35                          ElseIf ((\_SB.PCI0.PRES.CRMV == One))
>>   36                          {
>>   37                              CTFY (Local_Uid, 0x03)
>>   38                              \_SB.PCI0.PRES.CRMV = One
>>   39                              Local_HasEvent = One
>>   40                          }
>>   41
>>   42                          Local_Uid++
>>   43                      }
>>   44
>>   45                      If ((Local_NumAddedCpus > Zero))
>>   46                      {
>>   47                          \_SB.PCI0.SMI0.SMIC = 0x04
>>   48                      }
>>   49
>>   50                      Local_CpuIdx = Zero
>>   51                      While ((Local_CpuIdx < Local_NumAddedCpus))
>>   52                      {
>>   53                          Local_Uid = DerefOf (CNEW [Local_CpuIdx])
>>   54                          CTFY (Local_Uid, One)
>>   55                          Debug = Local_Uid
>>   56                          \_SB.PCI0.PRES.CSEL = Local_Uid
>>   57                          \_SB.PCI0.PRES.CINS = One
>>   58                          Local_CpuIdx++
>>   59                      }
>>   60                  }
>>   61
>>   62                  Release (\_SB.PCI0.PRES.CPLK)
>>   63              }
>>
>> When we take the Break on line 25, then:
>>
>> (a) on line 25, the following equality holds:
>>
>>   Local_Uid == CNEW[Local_NumAddedCpus - 1] + 1
>>
>> (b) on line 60, the following equality holds:
>>
>>   Local_Uid == CNEW[Local_NumAddedCpus - 1]
>>
>> This means that, when we write Local_Uid to CSEL on line 15 again, then:
>>
>> - we effectively re-investigate the last-cached CPU (with selector value
>>   CNEW[Local_NumAddedCpus - 1])
>>
>> - rather than resuming the scanning right after it -- that is, with
>>   selector value (CNEW[Local_NumAddedCpus - 1] + 1) --, in the spirit of
>>   line 42.
>>
>> My question is: is this "rewind" intentional?
>>
>> Now, I don't see a functionality problem with this, as on line 57, we
>> clear the pending insert event for the last-cached CPU, so when we
>> re-check it, the "get pending" command will simply seek past it.
>>
>> But I'd like to understand if this is *precisely* your intent, or if
>> it's an oversight and it just ends up working OK.
> it's the latter (it should not have any adverse effects), so I didn't care
> much about restarting from the last processed CPU.
>
> how about moving
>
>   22                          If ((Local_NumAddedCpus == 0xFF))
>   23                          {
>   24                              Local_HasJob = One
>   25                              Break
>   26                          }
>
> right after
>   40                          }
>   41
>   42                          Local_Uid++
>
> instead of adding an extra 'if' at the end of the outer loop?

That only seems to save a CSEL write on line 15 during the first
iteration of the outer loop; we would still re-select the last
selector from CNEW in the second iteration of the outer loop.

But, again, there's no bug; I just wanted to understand your intent.
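
To convince myself, here is a minimal C model of the scan. This is just a
sketch of mine, not the QEMU code, with the CSEL / CCMD / CDAT / CINS
behavior reduced to what the ASL above relies on. It shows why re-selecting
CNEW[Local_NumAddedCpus - 1] at the start of the next pass is harmless: the
"get pending event" command only reports CPUs whose insert flag is still
set, so the re-checked CPU simply falls through.

/* Minimal model of the CSCN scan (sketch only; not the QEMU implementation).
 * cpu[].ins stands in for the per-CPU pending insert event that writing
 * CINS = 1 clears. get_pending() models "CCMD = 0, then read CDAT": it
 * returns the first UID >= the selector with a pending event, or -1
 * (think "CDAT < selector") when nothing is left.
 */
#include <stdbool.h>
#include <stdio.h>

#define NCPUS 8

static struct vcpu { bool ins; } cpu[NCPUS];

static int get_pending(int sel)
{
    for (int uid = sel; uid < NCPUS; uid++) {
        if (cpu[uid].ins) {
            return uid;
        }
    }
    return -1;
}

int main(void)
{
    cpu[2].ins = cpu[3].ins = true;     /* pretend CPUs 2 and 3 were hotplugged */

    int uid = get_pending(0);
    while (uid >= 0) {
        printf("notify CPU %d\n", uid);
        cpu[uid].ins = false;           /* "CINS = 1": clear the insert event */
        /* Deliberately resume at "uid", not "uid + 1" (the "rewind"): the
         * event for "uid" is already cleared, so the scan seeks past it. */
        uid = get_pending(uid);
    }
    return 0;
}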

Can you please squash the following patch:

> diff --git a/hw/acpi/cpu.c b/hw/acpi/cpu.c
> index 12839720018e..8dd4d8ebbf55 100644
> --- a/hw/acpi/cpu.c
> +++ b/hw/acpi/cpu.c
> @@ -601,6 +601,15 @@ void build_cpus_aml(Aml *table, MachineState *machine, CPUHotplugFeatures opts,
>                      aml_append(while_ctx, aml_increment(cpu_idx));
>                  }
>                  aml_append(while_ctx2, while_ctx);
> +                /*
> +                 * If another batch is needed, then it will resume scanning
> +                 * exactly at -- and not after -- the last CPU that's currently
> +                 * in CPU_ADDED_LIST. In other words, the last CPU in
> +                 * CPU_ADDED_LIST is going to be re-checked. That's OK: we've
> +                 * just cleared the insert event for *all* CPUs in
> +                 * CPU_ADDED_LIST, including the last one. So the scan will
> +                 * simply seek past it.
> +                 */
>              }
>              aml_append(method, while_ctx2);
>              aml_append(method, aml_release(ctrl_lock));

With that:

Reviewed-by: Laszlo Ersek <lersek@redhat.com>

I'll also follow up with test results for this patch (meaning a lowered
"max_cpus_per_pass").

Thanks!
Laszlo




Thread overview: 25+ messages
2020-09-07 11:23 [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 01/10] x86: lpc9: let firmware negotiate 'CPU hotplug with SMI' features Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 02/10] x86: cpuhp: prevent guest crash on CPU hotplug when broadcast SMI is in use Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 03/10] x86: cpuhp: refuse cpu hot-unplug request earlier if not supported Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 04/10] acpi: add aml_land() and aml_break() primitives Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 05/10] tests: acpi: mark to be changed tables in bios-tables-test-allowed-diff Igor Mammedov
2020-09-07 11:23 ` [PATCH v5 06/10] x86: ich9: expose "smi_negotiated_features" as a QOM property Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 07/10] x86: acpi: introduce AcpiPmInfo::smi_on_cpuhp Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 08/10] x86: acpi: introduce the PCI0.SMI0 ACPI device Igor Mammedov
2020-09-07 12:49   ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 09/10] x68: acpi: trigger SMI before sending hotplug Notify event to OSPM Igor Mammedov
2020-09-07 15:17   ` Laszlo Ersek
2020-09-08  7:39     ` Igor Mammedov
2020-09-08  9:35       ` Laszlo Ersek [this message]
2020-09-08 11:00         ` Laszlo Ersek
2020-09-08 14:24   ` [PATCH v5 9/10] fixup! " Igor Mammedov
2020-09-08 18:09     ` Laszlo Ersek
2020-09-07 11:23 ` [PATCH v5 10/10] tests: acpi: update acpi blobs with new AML Igor Mammedov
2020-09-08 14:29 ` [PATCH v5 00/10] x86: fix cpu hotplug with secure boot Igor Mammedov
2020-09-08 18:08   ` Laszlo Ersek
2020-09-21 11:46 ` Igor Mammedov
2020-09-21 12:34   ` Michael S. Tsirkin
2020-09-23  9:48     ` Igor Mammedov
