From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:55093) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1agZiM-0001R5-6v for qemu-devel@nongnu.org; Thu, 17 Mar 2016 11:19:58 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1agZiL-00070o-Bl for qemu-devel@nongnu.org; Thu, 17 Mar 2016 11:19:54 -0400
Received: from mail-qg0-x22d.google.com ([2607:f8b0:400d:c04::22d]:35324) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1agZiL-00070e-5F for qemu-devel@nongnu.org; Thu, 17 Mar 2016 11:19:53 -0400
Received: by mail-qg0-x22d.google.com with SMTP id y89so75296182qge.2 for ; Thu, 17 Mar 2016 08:19:52 -0700 (PDT)
MIME-Version: 1.0
Date: Thu, 17 Mar 2016 16:19:52 +0100
Message-ID: 
From: Mohammed Gamal 
Content-Type: text/plain; charset=UTF-8
Subject: [Qemu-devel] CPU topology and hyperthreading
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: qemu-devel@nongnu.org

Hi All,

I have a question regarding the way CPU topology is exposed to the guest.

On a 4-core Amazon AWS VM, I can see the CPU topology exposed to the
guest in the following manner:

# lstopo
Machine (7480MB)
  Socket L#0 + L3 L#0 (25MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#2)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#1)
      PU L#3 (P#3)
[...]

Now, trying to emulate this topology in qemu/kvm using the following
command-line options:

-cpu Haswell,+ht -smp 4,sockets=1,cores=2,maxcpus=64,threads=2

as well as

-cpu kvm64,+ht -smp 4,sockets=1,cores=2,maxcpus=64,threads=2

shows me something like this:

# lstopo
Machine (1870MB)
  Socket L#0
    L2 L#0 (4096KB) + Core L#0
      L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0)
      L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1)
    L2 L#1 (4096KB) + Core L#1
      L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2)
      L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3)
[...]
In other words, qemu exposes each hyperthread as if it had its own
private L1 data and instruction caches, rather than sharing them with
its sibling thread on the same core. Is this the correct behavior? In
all cases, what gets exposed in the guest's /proc/cpuinfo is the same,
so I wonder why the topology differs.

Regards,
Mohammed
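For what it's worth, the guest derives thread siblings from the APIC
IDs the hypervisor hands out, while cache sharing comes separately from
the CPUID cache leaves; so the siblings can be correct even when lstopo
shows per-thread caches. Below is a rough, illustrative sketch (not
qemu's actual code) of the usual x86 APIC ID packing, where each
topology level gets a power-of-two-aligned bit field:

```python
# Illustrative sketch of x86 APIC ID packing for an -smp topology.
# Field widths are rounded up to whole bits per level, in the style of
# qemu's topology handling; this is an assumption for illustration,
# not a copy of qemu's implementation.

def level_bits(count):
    # Bits needed to encode `count` IDs at one topology level.
    return (count - 1).bit_length()

def apic_id(socket, core, thread, cores_per_socket, threads_per_core):
    t_bits = level_bits(threads_per_core)
    c_bits = level_bits(cores_per_socket)
    return (socket << (c_bits + t_bits)) | (core << t_bits) | thread

# -smp 4,sockets=1,cores=2,threads=2 as in the command lines above:
for s in range(1):
    for c in range(2):
        for t in range(2):
            print(f"socket {s} core {c} thread {t} "
                  f"-> APIC ID {apic_id(s, c, t, 2, 2)}")
```

With cores=2,threads=2 this yields APIC IDs 0-3, where IDs 0/1 and 2/3
differ only in the thread bit, i.e. they are core siblings; whether the
guest also sees them sharing an L1 depends on what the emulated CPUID
cache leaves report.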