From: Radim Krčmář
Subject: Re: [RFC][PATCH] KVM: SVM: Sync g_pat with guest-written PAT value
Date: Mon, 20 Apr 2015 19:16:36 +0200
Message-ID: <20150420171635.GC26491@potion.brq.redhat.com>
References: <552B5128.4010909@siemens.com> <552B6923.3020602@siemens.com> <20150420161401.GB26491@potion.brq.redhat.com>
In-Reply-To: <20150420161401.GB26491@potion.brq.redhat.com>
To: Jan Kiszka
Cc: kvm, Joel Schopp

2015-04-20 18:14+0200, Radim Krčmář:
> Tested-by: Radim Krčmář

Uncached accesses were roughly 20x slower.

In case anyone wants to reproduce it, I used this as a kvm-unit-test:

---
#include "libcflat.h"
#include "processor.h"

#define NR_TOP_LOOPS 24
#define NR_MEM_LOOPS 10
#define MEM_ELEMENTS 1024

static volatile u64 pat_test_memory[MEM_ELEMENTS];

static void flush_tlb(void)
{
	/* Reload CR3 so subsequent accesses pick up the new PAT types. */
	write_cr3(read_cr3());
}

static void set_pat(u64 val)
{
	wrmsr(0x277, val); /* IA32_PAT */
	flush_tlb();
}

static u64 time_memory_accesses(void)
{
	u64 tsc_before = rdtsc();

	for (unsigned loop = 0; loop < NR_MEM_LOOPS; loop++)
		for (unsigned i = 0; i < MEM_ELEMENTS; i++)
			pat_test_memory[i]++;

	return rdtsc() - tsc_before;
}

int main(int argc, char **argv)
{
	unsigned error = 0;

	for (unsigned loop = 0; loop < NR_TOP_LOOPS; loop++) {
		u64 time_uc, time_wb;

		/* All PAT entries set to type 0 (UC). */
		set_pat(0);
		time_uc = time_memory_accesses();

		/* All PAT entries set to type 6 (WB). */
		set_pat(0x0606060606060606ULL);
		time_wb = time_memory_accesses();

		/* UC is expected to be at least 4x slower than WB. */
		if (time_uc < time_wb * 4)
			error++;

		printf("%02u uc: %10lld wb: %8lld\n", loop, time_uc, time_wb);
	}

	report("guest PAT", !error);

	return report_summary();
}
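
(For reference, the two magic values come from the IA32_PAT layout: eight
8-bit entries, with architectural type 0x00 meaning UC and 0x06 meaning WB.
A throwaway helper like the one below, not part of the test and with a name
of my choosing, would build such values:)

/*
 * Hypothetical helper (not used by the test above): replicate one
 * architectural memory type into all eight IA32_PAT entries.
 * pat_all(0x00) yields 0 (all UC) and pat_all(0x06) yields
 * 0x0606060606060606ULL (all WB), the two values passed to set_pat().
 */
static u64 pat_all(u8 type)
{
	u64 val = 0;

	for (int i = 0; i < 8; i++)
		val |= (u64)type << (8 * i);

	return val;
}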