public inbox for linux-sgx@vger.kernel.org
From: Jarkko Sakkinen <jarkko@kernel.org>
To: Zhiquan Li <zhiquan1.li@intel.com>
Cc: linux-sgx@vger.kernel.org, tony.luck@intel.com,
	dave.hansen@linux.intel.com, seanjc@google.com,
	kai.huang@intel.com, fan.du@intel.com, cathy.zhang@intel.com
Subject: Re: [PATCH v4 0/3] x86/sgx: fine grained SGX MCA behavior
Date: Wed, 8 Jun 2022 11:10:23 +0300	[thread overview]
Message-ID: <YqBZbyWW4jTkn7qH@iki.fi> (raw)
In-Reply-To: <20220608032654.1764936-1-zhiquan1.li@intel.com>

On Wed, Jun 08, 2022 at 11:26:51AM +0800, Zhiquan Li wrote:
> V3: https://lore.kernel.org/linux-sgx/41704e5d4c03b49fcda12e695595211d950cfb08.camel@kernel.org/T/#t
> 
> Changes since V3:
> - Moved the definition of the EPC page flag SGX_EPC_PAGE_KVM_GUEST out
>   of Cathy's third patch of the SGX rebootless recovery patch set,
>   discarding the irrelevant portions, since that set may need more
>   time to rework and these are two different features.
>   Link: https://lore.kernel.org/linux-sgx/41704e5d4c03b49fcda12e695595211d950cfb08.camel@kernel.org/T/#m9782d23496cacecb7da07a67daa79f4b322ae170
> 
> V2: https://lore.kernel.org/linux-sgx/694234d7-6a0d-e85f-f2f9-e52b4a61e1ec@intel.com/T/#t
> 
> Changes since V2:
> - Repurpose the owner field as the virtual address of virtual EPC page
> - Remove struct sgx_vepc_page and relevant code.
> - Remove patch 01 as the changes are not necessary in new design.
> - Rework patch 02 suggested by Jarkko.
> - Adapt patch 03 and 04 since struct sgx_vepc_page was discarded.
> - Replace EPC page flag SGX_EPC_PAGE_IS_VEPC with
>   SGX_EPC_PAGE_KVM_GUEST as they are duplicated.
>   Link: https://lore.kernel.org/linux-sgx/eb95b32ecf3d44a695610cf7f2816785@intel.com/T/#u
> 
> V1: https://lore.kernel.org/linux-sgx/443cb425-009c-2784-56f4-5e707122de76@intel.com/T/#t
> 
> Changes since V1:
> - Updated cover letter and commit messages, added valuable
>   information from Jarkko, Tony and Kai’s comments.
> - Added documentation for struct sgx_vepc and
>   struct sgx_vepc_page.
> 
> Hi everyone,
> 
> This series contains a few patches to make SGX MCA behavior more
> fine-grained.
> 
> When a VM guest accesses an SGX EPC page that has a memory failure,
> the current behavior kills the whole guest, while the expectation is
> to kill only the SGX application inside it.
> 
> To fix this, we send SIGBUS with code BUS_MCEERR_AR plus some extra
> information, so that the hypervisor can inject #MC information into
> the guest; this is helpful in the SGX virtualization case.
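As a rough sketch of what the receiving side of that signal looks like: the
handler body and its reaction below are hypothetical, but SIGBUS, SA_SIGINFO
and BUS_MCEERR_AR are the standard Linux signal API the cover letter names.

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>

/* Hypothetical handler: what a hypervisor or enclave runtime might do
 * with the SIGBUS described above.  si_addr carries the faulting
 * virtual address (the HVA that e.g. QEMU would translate to a GPA
 * before injecting #MC into the guest). */
static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;
	if (info->si_code == BUS_MCEERR_AR)
		fprintf(stderr, "action-required MCE at %p\n", info->si_addr);
}

/* Install the handler; returns 0 on success, like sigaction() itself. */
static int install_sigbus_handler(void)
{
	struct sigaction act = { 0 };

	act.sa_sigaction = sigbus_handler;
	act.sa_flags = SA_SIGINFO;
	return sigaction(SIGBUS, &act, NULL);
}
```

A process that maps enclave memory would install this once at startup; the
kernel then delivers the extra MCE information through the siginfo fields.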
> 
> The rest is handled on the guest side. Hypervisors such as QEMU
> already have mature facilities to convert an HVA to a GPA and inject
> #MC into the guest OS.
> 
> We then extend the solution to the normal SGX case, so that the task
> has an opportunity to make further decisions when an EPC page has a
> memory failure.
> 
> However, when a page triggers a machine check, only the PFN is
> reported. In order for the hypervisor to inject #MC into the guest,
> the virtual address is required. We therefore repurpose the "owner"
> field as the virtual address of the virtual EPC page so that
> arch_memory_failure() can easily retrieve it.
> 
> Add a new EPC page flag, SGX_EPC_PAGE_KVM_GUEST, to indicate how the
> field should be interpreted.
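The owner-field overloading described above can be illustrated in plain C;
the struct layout and flag value here are stand-ins, not the kernel's exact
definitions.

```c
#include <stddef.h>

#define SGX_EPC_PAGE_KVM_GUEST 0x4	/* hypothetical flag bit */

/* Illustrative stand-in for struct sgx_epc_page: field names follow
 * the cover letter, the layout is made up. */
struct epc_page {
	unsigned long flags;
	void *owner;	/* enclave owner, or vaddr of the virtual EPC page */
};

/* What arch_memory_failure() needs: the owner field holds a guest
 * virtual address only when the page backs a KVM guest's virtual EPC,
 * which is exactly what the new flag records. */
static void *epc_page_guest_vaddr(struct epc_page *page)
{
	if (page->flags & SGX_EPC_PAGE_KVM_GUEST)
		return page->owner;
	return NULL;
}
```

Without the flag, a reader of the owner field could not tell a host enclave
owner pointer apart from a guest virtual address, hence the extra bit.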
> 
> Suppose an enclave is shared by multiple processes. When an enclave
> page triggers a machine check, the enclave is disabled so that it
> cannot be entered again. Killing the other processes that have the
> same enclave mapped would perhaps be overkill, but they will find
> that the enclave is "dead" the next time they try to use it. Thanks
> to Jarkko's heads-up and Tony's clarification on this point.
> 
> Our intention is to provide additional information so that the
> application has more choices. The current behavior is gentle enough,
> and we don't want to change it.
> 
> If you expect the other processes to be informed in such a case,
> then you're looking for an MCA "early kill" feature, which is worth
> a separate patch set.
> 
> Unlike host enclaves, a virtual EPC instance cannot be shared by
> multiple VMs, because how enclaves are created is entirely up to the
> guest. Sharing a virtual EPC instance would very likely break the
> enclaves in all of the VMs unexpectedly.
> 
> The SGX virtual EPC driver doesn't explicitly prevent a virtual EPC
> instance from being shared by multiple VMs via fork(). However, KVM
> doesn't support running a VM across multiple mm structures, and the
> de facto userspace hypervisor (QEMU) doesn't use fork() to create a
> new VM, so in practice this should not happen.
> 
> This series is based on tip/x86/sgx.
> 
> Tests:
> 1. MCE injection test for SGX in a VM.
>    As expected, the application was killed while the VM stayed alive.
> 2. MCE injection test for SGX on the host.
>    As expected, the application received SIGBUS with the extra info.
> 3. Kernel selftest/sgx: PASS
> 4. Internal SGX stress test: PASS
> 5. kmemleak test: No memory leakage detected.
> 
> Your feedback is much appreciated.
> 
> Best Regards,
> Zhiquan
> 
> Zhiquan Li (3):
>   x86/sgx: Repurpose the owner field as the virtual address of virtual
>     EPC page
>   x86/sgx: Fine grained SGX MCA behavior for virtualization
>   x86/sgx: Fine grained SGX MCA behavior for normal case
> 
>  arch/x86/kernel/cpu/sgx/main.c | 27 +++++++++++++++++++++++++--
>  arch/x86/kernel/cpu/sgx/sgx.h  |  2 ++
>  arch/x86/kernel/cpu/sgx/virt.c |  4 +++-
>  3 files changed, 30 insertions(+), 3 deletions(-)
> 
> -- 
> 2.25.1
> 

LGTM, I'll have to check whether I'm able to trigger an MCE with
/sys/devices/system/memory/hard_offline_page, as hinted by Tony.

Just trying to think how to get a legit PFN. I guess one workable way
is to attach a kretprobe to sgx_alloc_epc_page(), do a conversion
similar to sgx_get_epc_phys_addr() on ((struct sgx_epc_page
*)retval), and print it out.
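The conversion can be mimicked in plain C. The struct layouts below are
stand-ins (the real definitions live in arch/x86/kernel/cpu/sgx/sgx.h), but
the arithmetic follows the shape of sgx_get_epc_phys_addr(): section base
plus the page's index in the metadata array times PAGE_SIZE.

```c
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for struct sgx_epc_page / struct sgx_epc_section. */
struct epc_page { unsigned long flags; void *owner; };

struct epc_section {
	uint64_t phys_addr;	/* base physical address of the section */
	struct epc_page *pages;	/* metadata array, one entry per EPC page */
};

/* Same index arithmetic as sgx_get_epc_phys_addr(): the page's offset
 * within the metadata array is its index, and index * PAGE_SIZE is its
 * offset from the section's physical base. */
static uint64_t epc_phys_addr(struct epc_section *sec, struct epc_page *page)
{
	unsigned long index = page - sec->pages;

	return sec->phys_addr + index * PAGE_SIZE;
}
```

Printing this value from the kretprobe should yield a physical address that
can then be echoed into /sys/devices/system/memory/hard_offline_page to
poison that specific EPC page.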

BR, Jarkko


Thread overview: 12+ messages
2022-06-08  3:26 [PATCH v4 0/3] x86/sgx: fine grained SGX MCA behavior Zhiquan Li
2022-06-08  3:26 ` [PATCH v4 1/3] x86/sgx: Repurpose the owner field as the virtual address of virtual EPC page Zhiquan Li
2022-06-08  3:45   ` Zhiquan Li
2022-06-08  3:54   ` Kai Huang
2022-06-08  3:26 ` [PATCH v4 2/3] x86/sgx: Fine grained SGX MCA behavior for virtualization Zhiquan Li
2022-06-08  3:52   ` Kai Huang
2022-06-08  8:13     ` Jarkko Sakkinen
2022-06-08  8:33       ` Zhiquan Li
2022-06-08  3:26 ` [PATCH v4 3/3] x86/sgx: Fine grained SGX MCA behavior for normal case Zhiquan Li
2022-06-08  8:10 ` Jarkko Sakkinen [this message]
2022-06-08  9:12   ` [PATCH v4 0/3] x86/sgx: fine grained SGX MCA behavior Jarkko Sakkinen
2022-06-08  9:48   ` Zhiquan Li
