From: Michael Moese
Date: Tue, 10 Sep 2013 18:10:43 +0200
Subject: [Qemu-devel] PCI device waiting for some external (shared memory) events
To: qemu-devel@nongnu.org

Hello dear QEMU developers,

I have run into a strange problem with a PCI device I have developed. I use shared memory to decouple the QEMU PCI device from another process running a SystemC model of the device (the SystemC side should not matter here), and forward all register accesses through that shared segment.

In my PCI read and write implementations I lock a mutex on the shared memory, write the address, the data, and a read/write flag into it, and then poll a completion flag.

This works fine for writes, which behave like posted writes: I complete the operation immediately. But when the guest performs a PCI memory read, my read function is still invoked, and then QEMU gets stuck.
With gdb I found that QEMU is blocked on some pthread mutexes (I do not use pthread mutexes myself, so there should be no conflict there), and it stays blocked seemingly forever (I gave it about 30 minutes).

I hope this describes my situation clearly enough.

My question is: am I doing something forbidden by busy-waiting on the completion flag? Is there another way to "stop" the simulated CPU for the duration of the transfer and resume it afterwards?

Thanks for taking the time to read this. I would be happy to get any suggestions, and if required I can post some code from my experiments.

Michael

--
Michael Moese
Baumgartenweg 1
91452 Wilhermsdorf
Mobil: +49 176 61 05 94 99
Fax: +49 3212 11 42 49 7