From: Simon Horman <horms@kernel.org>
To: Ross Philipson <ross.philipson@oracle.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
linux-integrity@vger.kernel.org, linux-doc@vger.kernel.org,
linux-crypto@vger.kernel.org, iommu@lists.linux-foundation.org,
kexec@lists.infradead.org, linux-efi@vger.kernel.org,
dpsmith@apertussolutions.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, hpa@zytor.com, ardb@kernel.org,
mjg59@srcf.ucam.org, James.Bottomley@hansenpartnership.com,
luto@amacapital.net, nivedita@alum.mit.edu,
kanth.ghatraju@oracle.com, trenchboot-devel@googlegroups.com
Subject: Re: [PATCH v6 08/14] x86: Secure Launch kernel late boot stub
Date: Fri, 5 May 2023 19:52:39 +0200
Message-ID: <ZFVCZ1EDxQgdyocc@kernel.org>
In-Reply-To: <20230504145023.835096-9-ross.philipson@oracle.com>
On Thu, May 04, 2023 at 02:50:17PM +0000, Ross Philipson wrote:
> The routine slaunch_setup is called out of the x86 specific setup_arch
> routine during early kernel boot. After determining what platform is
> present, various operations specific to that platform occur. This
> includes finalizing settings for the platform late launch and verifying
> that memory protections are in place.
>
> For TXT, this code also reserves the original compressed kernel setup
> area where the APs were left looping so that this memory cannot be used.
>
> Signed-off-by: Ross Philipson <ross.philipson@oracle.com>
Hi Ross,
a few nits from my side.
> +/*
> + * The TXT heap is too big to map all at once with early_ioremap
> + * so it is done a table at a time.
> + */
> +static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
> + u32 bytes)
> +{
> + u64 base, size, offset = 0;
> + void *heap;
> + int i;
> +
> + if (type > TXT_SINIT_TABLE_MAX)
> + slaunch_txt_reset(txt,
> + "Error invalid table type for early heap walk\n",
> + SL_ERROR_HEAP_WALK);
nit: the indentation should align to the opening '('.
        slaunch_txt_reset(txt,
                          "Error invalid table type for early heap walk\n",
                          SL_ERROR_HEAP_WALK);
Likewise in a few other places in this patch.
...
> +static void __init slaunch_txt_reserve_range(u64 base, u64 size)
> +{
> + int type;
> +
> + type = e820__get_entry_type(base, base + size - 1);
> + if (type == E820_TYPE_RAM) {
> + pr_info("memblock reserve base: %llx size: %llx\n", base, size);
> + memblock_reserve(base, size);
> + }
> +}
> +
> +/*
> + * For Intel, certain regions of memory must be marked as reserved by putting
> + * them on the memblock reserved list if they are not already e820 reserved.
> + * This includes:
> + * - The TXT HEAP
> + * - The ACM area
> + * - The TXT private register bank
> + * - The MDR list sent to the MLE by the ACM (see TXT specification)
> + * (Normally the above are properly reserved by firmware but if it was not
> + * done, reserve them now)
> + * - The AP wake block
> + * - TPM log external to the TXT heap
> + *
> + * Also if the low PMR doesn't cover all memory < 4G, any RAM regions above
> + * the low PMR must be reservered too.
nit: s/reservered/reserved/
> + */
> +static void __init slaunch_txt_reserve(void __iomem *txt)
> +{
> + struct txt_sinit_memory_descriptor_record *mdr;
> + struct txt_sinit_mle_data *sinit_mle_data;
> + u64 base, size, heap_base, heap_size;
> + u32 mdrnum, mdroffset, mdrslen;
> + u32 field_offset, i;
> + void *mdrs;
> +
> + base = TXT_PRIV_CONFIG_REGS_BASE;
> + size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
> + slaunch_txt_reserve_range(base, size);
> +
> + memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
> + memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
> + slaunch_txt_reserve_range(heap_base, heap_size);
> +
> + memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
> + memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
> + slaunch_txt_reserve_range(base, size);
> +
> + field_offset = offsetof(struct txt_sinit_mle_data,
> + sinit_vtd_dmar_table_size);
> + sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> + field_offset);
> +
> + mdrnum = sinit_mle_data->num_of_sinit_mdrs;
> + mdroffset = sinit_mle_data->sinit_mdrs_table_offset;
> +
> + txt_early_put_heap_table(sinit_mle_data, field_offset);
> +
> + if (!mdrnum)
> + goto nomdr;
> +
> + mdrslen = mdrnum * sizeof(struct txt_sinit_memory_descriptor_record);
> +
> + mdrs = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> + mdroffset + mdrslen - 8);
> +
> + mdr = mdrs + mdroffset - 8;
> +
> + for (i = 0; i < mdrnum; i++, mdr++) {
> + /* Spec says some entries can have length 0, ignore them */
> + if (mdr->type > 0 && mdr->length > 0)
> + slaunch_txt_reserve_range(mdr->address, mdr->length);
> + }
> +
> + txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);
> +
> +nomdr:
> + slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
> + ap_wake_info.ap_wake_block_size);
> +
> + /*
> + * Earlier checks ensured that the event log was properly situated
> + * either inside the TXT heap or outside. This is a check to see if the
> + * event log needs to be reserved. If it is in the TXT heap, it is
> + * already reserved.
> + */
> + if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
> + slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
> +
> + for (i = 0; i < e820_table->nr_entries; i++) {
> + base = e820_table->entries[i].addr;
> + size = e820_table->entries[i].size;
> + if ((base >= vtd_pmr_lo_size) && (base < 0x100000000ULL))
nit: unnecessary parentheses
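Perhaps, entirely untested, something like:

        if (base >= vtd_pmr_lo_size && base < 0x100000000ULL)
                slaunch_txt_reserve_range(base, size);
        else if (base < vtd_pmr_lo_size &&
                 base + size > vtd_pmr_lo_size)
                slaunch_txt_reserve_range(vtd_pmr_lo_size,
                                          base + size - vtd_pmr_lo_size);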
> + slaunch_txt_reserve_range(base, size);
> + else if ((base < vtd_pmr_lo_size) &&
> + (base + size > vtd_pmr_lo_size))
> + slaunch_txt_reserve_range(vtd_pmr_lo_size,
> + base + size - vtd_pmr_lo_size);
> + }
> +}
> +
> +/*
> + * TXT stashes a safe copy of the DMAR ACPI table to prevent tampering.
> + * It is stored in the TXT heap. Fetch it from there and make it available
> + * to the IOMMU driver.
> + */
> +static void __init slaunch_copy_dmar_table(void __iomem *txt)
> +{
> + struct txt_sinit_mle_data *sinit_mle_data;
> + u32 field_offset, dmar_size, dmar_offset;
> + void *dmar;
> +
> + memset(&txt_dmar, 0, PAGE_SIZE);
> +
> + field_offset = offsetof(struct txt_sinit_mle_data,
> + processor_scrtm_status);
> + sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> + field_offset);
> +
> + dmar_size = sinit_mle_data->sinit_vtd_dmar_table_size;
> + dmar_offset = sinit_mle_data->sinit_vtd_dmar_table_offset;
> +
> + txt_early_put_heap_table(sinit_mle_data, field_offset);
> +
> + if (!dmar_size || !dmar_offset)
> + slaunch_txt_reset(txt,
> + "Error invalid DMAR table values\n",
> + SL_ERROR_HEAP_INVALID_DMAR);
> +
> + if (unlikely(dmar_size > PAGE_SIZE))
> + slaunch_txt_reset(txt,
> + "Error DMAR too big to store\n",
> + SL_ERROR_HEAP_DMAR_SIZE);
> +
> +
nit: one blank line is enough
> + dmar = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> + dmar_offset + dmar_size - 8);
> + if (!dmar)
> + slaunch_txt_reset(txt,
> + "Error early_ioremap of DMAR\n",
> + SL_ERROR_HEAP_DMAR_MAP);
> +
> + memcpy(&txt_dmar[0], dmar + dmar_offset - 8, dmar_size);
> +
> + txt_early_put_heap_table(dmar, dmar_offset + dmar_size - 8);
> +}
...
> +/*
> + * Intel TXT specific late stub setup and validation.
> + */
> +void __init slaunch_setup_txt(void)
> +{
> + u64 one = TXT_REGVALUE_ONE, val;
> + void __iomem *txt;
> +
> + if (!boot_cpu_has(X86_FEATURE_SMX))
> + return;
> +
> + /*
> + * If booted through secure launch entry point, the loadflags
> + * option will be set.
> + */
> + if (!(boot_params.hdr.loadflags & SLAUNCH_FLAG))
> + return;
> +
> + /*
> + * See if SENTER was done by reading the status register in the
> + * public space. If the public register space cannot be read, TXT may
> + * be disabled.
> + */
> + txt = early_ioremap(TXT_PUB_CONFIG_REGS_BASE,
> + TXT_NR_CONFIG_PAGES * PAGE_SIZE);
> + if (!txt)
> + return;
> +
> + memcpy_fromio(&val, txt + TXT_CR_STS, sizeof(val));
> + early_iounmap(txt, TXT_NR_CONFIG_PAGES * PAGE_SIZE);
> +
> + /* SENTER should have been done */
> + if (!(val & TXT_SENTER_DONE_STS))
> + panic("Error TXT.STS SENTER_DONE not set\n");
> +
> + /* SEXIT should have been cleared */
> + if (val & TXT_SEXIT_DONE_STS)
> + panic("Error TXT.STS SEXIT_DONE set\n");
> +
> + /* Now we want to use the private register space */
> + txt = early_ioremap(TXT_PRIV_CONFIG_REGS_BASE,
> + TXT_NR_CONFIG_PAGES * PAGE_SIZE);
> + if (!txt) {
> + /* This is really bad, nowhere to go from here */
> + panic("Error early_ioremap of TXT priv registers\n");
> + }
> +
> + /*
> + * Try to read the Intel VID from the TXT private registers to see if
> + * TXT measured launch happened properly and the private space is
> + * available.
> + */
> + memcpy_fromio(&val, txt + TXT_CR_DIDVID, sizeof(val));
> + if ((val & 0xffff) != 0x8086) {
> + /*
> + * Can't do a proper TXT reset since it appears something is
> + * wrong even though SENTER happened and it should be in SMX
> + * mode.
> + */
> + panic("Invalid TXT vendor ID, not in SMX mode\n");
> + }
> +
> + /* Set flags so subsequent code knows the status of the launch */
> + sl_flags |= (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT);
nit: spaces around '|'
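i.e.

        sl_flags |= (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT);

or, arguably, the parentheses could be dropped here too:

        sl_flags |= SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;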
...