From: Paolo Bonzini
Date: Sun, 17 Aug 2014 07:21:16 +0200
Subject: Re: [Qemu-devel] QEMU, self-modifying code, and Windows 7 64-bit (no KVM)
Message-ID: <53F03BCC.705@redhat.com>
In-Reply-To: <9BA52E25-E3BF-42FF-B080-86B7926D8B80@ll.mit.edu>
To: "Hulin, Patrick - 0559 - MITLL"
Cc: "qemu-devel@nongnu.org"

On 15/08/2014 23:49, Hulin, Patrick - 0559 - MITLL wrote:
>>> In this case, the write is 8 bytes and unaligned, so it gets split
>>> into 8 single-byte writes. In stock QEMU, these writes are done in
>>> reverse order (see the loop in softmmu_template.h, line 402). The
>>> third decryption xor from Kernel Patch Protection should hit 4 bytes
>>> that are in the current TB and 4 bytes in the TB afterwards, in
>>> linear order. Since the writes happen in reverse order, and the last
>>> 4 bytes of the write do not intersect the current TB, those writes
>>> succeed and QEMU's memory is modified. The 4th byte in linear order
>>> (the 5th in temporal order) then triggers the current_tb_modified
>>> flag and cpu_restore_state, longjmp'ing out.
>>>
>> Would it work to just call tb_invalidate_phys_page_range before the
>> helper_ret_stb loop?
>
> Maybe. I think there's another issue, which is that QEMU ends up in
> the I/O read/write code instead of the normal memory read/write path.
> This could be QEMU messing up, it could be PatchGuard doing something
> weird, or it could be me misunderstanding what's going on. I never
> really figured out how the control flow works here.

That's okay. Everything in the slow path goes through io_mem_read/write
(in this case TLB_NOTDIRTY is set for dirty-page tracking, and that is
what makes QEMU choose the I/O path).

Try making a self-contained test case using the kvm-unit-tests harness
(git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git).

Paolo
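
For readers without the QEMU tree at hand, the behaviour described in
the quoted text can be modelled with the self-contained C sketch below.
It is only an illustration of the reverse-order byte split under the
assumption of a little-endian byte extract; guest_mem, store_byte and
unaligned_store8 are invented names for this example, not the actual
softmmu_template.h code or the helper_ret_stb_mmu helper.

#include <stdint.h>
#include <stdio.h>

/* "Guest RAM": offsets 0..7 stand in for the page bytes covered by the
 * currently executing TB, offsets 8..15 for the TB after it. */
static uint8_t guest_mem[16];

/* Hypothetical stand-in for QEMU's single-byte store helper; it just
 * records the order in which the byte stores are issued. */
static void store_byte(size_t addr, uint8_t val)
{
    printf("store 0x%02x at offset %zu\n", val, addr);
    guest_mem[addr] = val;
}

/* Simplified model of the unaligned-store slow path discussed above:
 * an unaligned 8-byte store is split into eight single-byte stores,
 * issued in reverse address order, extracting bytes little-endian. */
static void unaligned_store8(size_t addr, uint64_t val)
{
    for (int i = 7; i >= 0; i--) {
        uint8_t val8 = val >> (i * 8);
        store_byte(addr + i, val8);
    }
}

int main(void)
{
    /* An 8-byte store at offset 4 straddles the boundary at offset 8:
     * bytes 4..7 fall inside the "current TB", bytes 8..11 fall after
     * it.  Because the loop runs high-to-low, offsets 11..8 are written
     * first and succeed; the store at offset 7 is the point where QEMU
     * would notice the current TB was modified (current_tb_modified)
     * and longjmp out via cpu_restore_state, with the four later bytes
     * already committed to memory. */
    unaligned_store8(4, 0x1122334455667788ULL);
    return 0;
}

Running the sketch prints the stores for offsets 11 down to 4, which is
the temporal order in which the PatchGuard xor described above would
hit guest memory.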