From: Sean Christopherson <seanjc@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Sean Christopherson <seanjc@google.com>
Subject: [GIT PULL] KVM: Common changes for 6.12
Date: Fri, 13 Sep 2024 18:13:42 -0700 [thread overview]
Message-ID: <20240914011348.2558415-2-seanjc@google.com> (raw)
In-Reply-To: <20240914011348.2558415-1-seanjc@google.com>
Fix a long-standing goof in the coalesced IO code, and a lurking bug in
kvm_clear_guest().
The following changes since commit 47ac09b91befbb6a235ab620c32af719f8208399:
Linux 6.11-rc4 (2024-08-18 13:17:27 -0700)
are available in the Git repository at:
https://github.com/kvm-x86/linux.git tags/kvm-x86-generic-6.12
for you to fetch changes up to 025dde582bbf31e7618f9283594ef5e2408e384b:
KVM: Harden guest memory APIs against out-of-bounds accesses (2024-09-09 20:15:34 -0700)
----------------------------------------------------------------
KVM generic changes for 6.12:
- Fix a bug that results in KVM prematurely exiting to userspace for coalesced
MMIO/PIO in many cases, clean up the related code, and add a testcase.
- Fix a bug in kvm_clear_guest() where it would trigger a buffer overflow _if_
the gpa+len crosses a page boundary, which thankfully is guaranteed to not
happen in the current code base. Add WARNs in more helpers that read/write
guest memory to detect similar bugs.
----------------------------------------------------------------
Ilias Stamatis (1):
KVM: Fix coalesced_mmio_has_room() to avoid premature userspace exit
Sean Christopherson (4):
KVM: selftests: Add a test for coalesced MMIO (and PIO on x86)
KVM: Clean up coalesced MMIO ring full check
KVM: Write the per-page "segment" when clearing (part of) a guest page
KVM: Harden guest memory APIs against out-of-bounds accesses
tools/testing/selftests/kvm/Makefile | 3 +
tools/testing/selftests/kvm/coalesced_io_test.c | 236 ++++++++++++++++++++++++
tools/testing/selftests/kvm/include/kvm_util.h | 26 +++
virt/kvm/coalesced_mmio.c | 31 +---
virt/kvm/kvm_main.c | 11 +-
5 files changed, 283 insertions(+), 24 deletions(-)
create mode 100644 tools/testing/selftests/kvm/coalesced_io_test.c
2024-09-14 1:13 [GIT PULL] KVM: x86 pull requests for 6.12 Sean Christopherson
2024-09-14 1:13 ` Sean Christopherson [this message]
2024-09-14 13:35 ` [GIT PULL] KVM: Common changes " Paolo Bonzini
2024-09-14 1:13 ` [GIT PULL] KVM: x86: Misc " Sean Christopherson
2024-09-14 1:13 ` [GIT PULL] KVM: x86: MMU " Sean Christopherson
2024-09-14 13:50 ` Paolo Bonzini
2024-09-14 1:13 ` [GIT PULL] KVM: x86: VMX and PAT MSRs cleanup Sean Christopherson
2024-09-14 1:13 ` [GIT PULL] KVM: Selftests changes for 6.12 Sean Christopherson
2024-09-14 1:13 ` [GIT PULL] KVM: x86: SVM " Sean Christopherson
2024-09-14 1:13 ` [GIT PULL] KVM: x86: VMX " Sean Christopherson
2024-09-14 14:54 ` [GIT PULL] KVM: x86 pull requests " Paolo Bonzini
2024-09-15 6:32 ` Paolo Bonzini
2024-09-16 18:24 ` Sean Christopherson