qemu-devel.nongnu.org archive mirror
From: cauchy-love <cauchy-love@163.com>
To: Jan Kiszka <jan.kiszka@web.de>
Cc: qemu-devel@nongnu.org
Subject: [Qemu-devel] IO performance difference on different kernels
Date: Tue, 30 Jun 2015 16:29:01 +0800 (CST)	[thread overview]
Message-ID: <e753601.1b480.14e4395657e.Coremail.cauchy-love@163.com> (raw)
In-Reply-To: <558FA5A8.4000700@web.de>



I am using QEMU 2.3.0 to start a guest on two different Linux kernels (2.6.33 and 2.6.39). The QEMU command line is:
       kvm -m 2g -hda guest.img -enable-kvm
Experiments show that the guest's disk write bandwidth on 2.6.33.3 is about 10 times that on 2.6.39. The period of paio_submit (the time difference between two consecutive callbacks) on 2.6.39 is around 10 times that on 2.6.33, but the time spent in the aio_worker function does not differ much between the two kernels. I hope you can provide some help with debugging this problem.
Yi



--
Sent from my NetEase Mail mobile client


On 2015-06-28 15:43:36, "Jan Kiszka" <jan.kiszka@web.de> wrote:
>Hi David,
>
>On 2015-06-26 17:32, David kiarie wrote:
>> Hi all,
>> 
>> Efforts to emulate the AMD IOMMU have been going on over the past few months.
>> 
>> In real hardware the AMD IOMMU is implemented as a PCI function. When
>> emulating it in QEMU we want to allocate MMIO space for it, but the real
>> AMD IOMMU manages to reserve memory without making a BAR request,
>> probably through a static address written by the device. (This is
>> similar to what non-PCI bus devices do.) Trying to reserve memory via a
>> BAR request results in address conflicts (in Linux), even though all
>> other PCI devices reserve platform resources via BAR requests.
>
>The AMD IOMMU spec makes it even clearer:
>
>"3 Registers
>
>The IOMMU is configured and controlled via two sets of registers — one
>in the PCI configuration space and another set mapped in system address
>space. [...]
>
>3.1 PCI Resources
>
>[...] A PCI Function containing an IOMMU capability block does not
>include PCI BAR registers."
>
>> 
>> I would like to hear suggestions on how to reserve a memory region for
>> the device without making a BAR request.
>
>I see two approaches:
>
> - Let the IOMMU sit on two buses, PCI and system, i.e. become a PCI
>   and SysBus device at the same time - I suspect, though, that this
>   cannot be modeled with QOM right now.
>
> - Model the MMIO registers via the BAR interface but overwrite the
>   PCI config space so that no BAR becomes visible and make sure that
>   writes to the PCI command register cannot disable this region (which
>   would be the case with normal BARs). Hackish, but it seems feasible.
>
>Jan
>
>


Thread overview: 3+ messages
2015-06-26 15:32 [Qemu-devel] Allocate PCI MMIO without BAR requests David kiarie
2015-06-28  7:43 ` Jan Kiszka
2015-06-30  8:29   ` cauchy-love [this message]
