* [Qemu-devel] CPU topology and hyperthreading
From: Mohammed Gamal @ 2016-03-17 15:19 UTC
To: qemu-devel
Hi All,
I have a question regarding the way CPU topology is exposed to the guest.
On a 4-core Amazon AWS VM I can see the CPU topology exposed to the
guest in the following manner:
# lstopo
Machine (7480MB)
  Socket L#0 + L3 L#0 (25MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#2)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#1)
      PU L#3 (P#3)
[...]
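As a cross-check, the guest kernel's own view of cache sharing can be read straight out of sysfs, independent of how lstopo renders it; a quick sketch using the standard Linux sysfs paths:

# Which logical CPUs share each cache? With the pairing above, the
# L1d/L1i entries should list both hyperthread siblings of a core
# (e.g. "0,2"), not a single CPU per cache.
grep . /sys/devices/system/cpu/cpu*/cache/index*/shared_cpu_list

# Thread siblings per core, as the guest kernel derived them from CPUID.
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list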
Now, trying to emulate this topology in qemu/kvm using the following
command-line options:
-cpu Haswell,+ht -smp 4,sockets=1,cores=2,maxcpus=64,threads=2
as well as
-cpu kvm64,+ht -smp 4,sockets=1,cores=2,maxcpus=64,threads=2
shows me something like this:
# lstopo
Machine (1870MB)
  Socket L#0
    L2 L#0 (4096KB) + Core L#0
      L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0)
      L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1)
    L2 L#1 (4096KB) + Core L#1
      L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2)
      L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3)
[...]
In other words, qemu exposes each hyperthread as if it had its own
private L1 data and instruction caches, whereas on the AWS guest the
two hyperthreads of a core share a single L1d/L1i pair. Is this the
correct behavior?
In both cases what gets exposed in the guest's /proc/cpuinfo is the
same, so I wonder why the cache topology differs.
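As far as I understand, the cache-sharing information lstopo draws comes
from CPUID leaf 4 (deterministic cache parameters), where for each cache
EAX bits 25:14 encode the maximum number of logical processors sharing
it, minus one. Assuming the cpuid(1) utility is installed in the guest,
that leaf can be dumped directly to see what qemu actually advertises,
e.g.:

# Decode CPUID leaf 4, subleaf 0 (typically the L1d cache) for a
# single CPU; repeat with -s 1, -s 2, ... for the other cache levels.
# In the decoded output, the "maximum IDs for CPUs sharing cache"
# field should be 1 (i.e. two logical CPUs) if the L1 caches were
# advertised as shared between hyperthread siblings, and 0 if each
# hyperthread gets private L1 caches.
cpuid -1 -l 4 -s 0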
Regards,
Mohammed
* Re: [Qemu-devel] CPU topology and hyperthreading
From: Mohammed Gamal @ 2016-03-21 10:39 UTC
To: qemu-devel
Any ideas?
On Thu, Mar 17, 2016 at 4:19 PM, Mohammed Gamal <m.gamal005@gmail.com> wrote:
> [...]