From: Frederic Konrad <fred.konrad@greensocs.com>
To: Alexander Spyridakis <a.spyridakis@virtualopensystems.com>,
mttcg@listserver.greensocs.com,
Mark Burton <mark.burton@greensocs.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>,
"QEMU Developers" <qemu-devel@nongnu.org>,
"Alvise Rigo" <a.rigo@virtualopensystems.com>
Subject: Re: [Qemu-devel] TCG baremetal tests repo
Date: Thu, 25 Jun 2015 18:01:29 +0200
Message-ID: <558C25D9.80300@greensocs.com>
In-Reply-To: <CAJRNFKLMU9GNGbtRT28ziJKf-+iQnBTmW1CkvFkp+k60PajcAQ@mail.gmail.com>
On 22/06/2015 12:54, Alexander Spyridakis wrote:
> Hello all,
>
> You can find the latest tcg atomic test payload in the following repo:
> > git clone https://git.virtualopensystems.com/dev/tcg_baremetal_tests.git
>
> You also need an arm baremetal cross-compiler like arm-none-gnueabi-
> (arm) and the usual aarch64-linux-gnu- (arm64). Due to a PSCI bug in
> the current multithreading tcg repo, the atomic test was modified to
> work also on the vexpress machine model.
>
> To run it:
> > make vexpress (or virt/virt64 for other targets)
> > ../mttcg/arm-softmmu/qemu-system-arm -nographic -M vexpress-a15
> -kernel build-vexpress/image-vexpress.axf -smp 4
>
> On my machine it takes around 30 seconds for one run of the test and
> the results vary from as low as 5 to 30 errors per vCPU per 10 million
> iterations (no errors with KVM). It is also very interesting to note
> that the current test finishes faster on upstream qemu than
> multithreaded qemu.
>
> Best regards.
Hi,
I just tested this with vexpress. It seems ATOMIC is not defined by default,
so it uses:
void non_atomic_lock(int *lock_var)
{
    while (*lock_var != 0);   /* racy: another CPU can pass this check... */
    *lock_var = 1;            /* ...before this store, so both take the lock */
}

void non_atomic_unlock(int *lock_var)
{
    *lock_var = 0;
}
instead of:
void atomic_lock(int *lock_var)
{
    while (__sync_lock_test_and_set(lock_var, 1));
}

void atomic_unlock(int *lock_var)
{
    __sync_lock_release(lock_var);
}
This causes no errors upstream but a lot of errors on mttcg, and mttcg is
faster in this case.
I don't get any errors when I use ATOMIC, like this:
diff --git a/helpers.h b/helpers.h
index b5810ad..427659f 100644
--- a/helpers.h
+++ b/helpers.h
@@ -36,13 +36,8 @@
 #define SYS_CFGCTR_WRITE 0x40000000
 #define SYS_CFG_SHUTDOWN 0x00800000
 
-#ifdef ATOMIC
 #define LOCK atomic_lock
 #define UNLOCK atomic_unlock
-#else
-#define LOCK non_atomic_lock
-#define UNLOCK non_atomic_unlock
-#endif
 
 int online_cpus;
 int global_lock;
but it's slower than upstream, which I think is normal. In mttcg two vCPUs can
fight for the lock at the same time, but not upstream, as vCPUs don't run
simultaneously.
Fred
Thread overview: 7+ messages
2015-06-22 10:54 [Qemu-devel] TCG baremetal tests repo Alexander Spyridakis
2015-06-22 12:59 ` Alex Bennée
2015-06-24 16:39 ` Alex Bennée
2015-06-24 19:09 ` Peter Maydell
2015-06-25 6:39 ` Alex Bennée
2015-06-25 16:01 ` Frederic Konrad [this message]
2015-06-26 0:26 ` Alexander Spyridakis