From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jeremy Fitzhardinge
Subject: Re: [PATCH 3/16] read/write_crX, clts and wbinvd for 64-bit paravirt
Date: Wed, 31 Oct 2007 21:48:39 -0700
Message-ID: <47295AA7.9090507@goop.org>
References: <1193858101367-git-send-email-gcosta@redhat.com>
 <11938581073775-git-send-email-gcosta@redhat.com>
 <11938581133479-git-send-email-gcosta@redhat.com>
 <1193858118284-git-send-email-gcosta@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Glauber de Oliveira Costa
Cc: zach-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org,
 lguest-mnsaURCQ41sdnm+yROfE0A@public.gmane.org,
 kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
 avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org,
 ak-l3A5Bk7waGM@public.gmane.org,
 akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
 virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
In-Reply-To: <1193858118284-git-send-email-gcosta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Errors-To: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Glauber de Oliveira Costa wrote:
> This patch introduces, and patches callers when needed, native
> versions for read/write_crX functions, clts and wbinvd.
>
> Signed-off-by: Glauber de Oliveira Costa
> Signed-off-by: Steven Rostedt
> Acked-by: Jeremy Fitzhardinge
> ---
>  arch/x86/mm/pageattr_64.c   |    3 +-
>  include/asm-x86/system_64.h |   60 ++++++++++++++++++++++++++++++------------
>  2 files changed, 45 insertions(+), 18 deletions(-)
>
> diff --git a/arch/x86/mm/pageattr_64.c b/arch/x86/mm/pageattr_64.c
> index c40afba..59a52b0 100644
> --- a/arch/x86/mm/pageattr_64.c
> +++ b/arch/x86/mm/pageattr_64.c
> @@ -12,6 +12,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  pte_t *lookup_address(unsigned long address)
>  {
> @@ -77,7 +78,7 @@ static void flush_kernel_map(void *arg)
>  	   much cheaper than WBINVD. */
>  	/* clflush is still broken. Disable for now. */
>  	if (1 || !cpu_has_clflush)
> -		asm volatile("wbinvd" ::: "memory");
> +		wbinvd();
>  	else list_for_each_entry(pg, l, lru) {
>  		void *adr = page_address(pg);
>  		clflush_cache_range(adr, PAGE_SIZE);
> diff --git a/include/asm-x86/system_64.h b/include/asm-x86/system_64.h
> index 4cb2384..b558cb2 100644
> --- a/include/asm-x86/system_64.h
> +++ b/include/asm-x86/system_64.h
> @@ -65,53 +65,62 @@ extern void load_gs_index(unsigned);
>  /*
>   * Clear and set 'TS' bit respectively
>   */
> -#define clts() __asm__ __volatile__ ("clts")
> +static inline void native_clts(void)
> +{
> +	asm volatile ("clts");
> +}
>
> -static inline unsigned long read_cr0(void)
> -{
> +static inline unsigned long native_read_cr0(void)
> +{
>  	unsigned long cr0;
>  	asm volatile("movq %%cr0,%0" : "=r" (cr0));
>  	return cr0;
>  }

This is a pre-existing bug, but it seems to me that these read/write crX
asms should have a constraint to stop the compiler from reordering them
with respect to each other.
The brute-force approach would be to add "memory" clobbers, but the
subtler fix would be to add a variable which is used only for
sequencing:

static int __cr_seq;

static inline unsigned long native_read_cr0(void)
{
	unsigned long cr0;
	asm volatile("mov %%cr0, %0" : "=r" (cr0), "=m" (__cr_seq));
	return cr0;
}

static inline void native_write_cr0(unsigned long val)
{
	asm volatile("mov %1, %%cr0" : "+m" (__cr_seq) : "r" (val));
}

    J