From: "Pavel Dovgalyuk" <dovgaluk@ispras.ru>
To: "'Alex Bennée'" <alex.bennee@linaro.org>,
"'Aleksandr Bezzubikov'" <zuban32s@gmail.com>
Cc: 'QEMU Developers' <qemu-devel@nongnu.org>,
'Pranith Kumar' <bobby.prani+qemu@gmail.com>,
pavel.dovgaluk@ispras.ru,
'Aleksandr Bezzubikov' <abezzubikov@ispras.ru>,
'Paolo Bonzini' <pbonzini@redhat.com>,
'Igor R' <boost.lists@gmail.com>
Subject: Re: [Qemu-devel] What is the best commit for record-replay?
Date: Wed, 20 Sep 2017 09:17:41 +0300 [thread overview]
Message-ID: <000401d331d8$2a94d410$7fbe7c30$@ru> (raw)
In-Reply-To: <878thaepe2.fsf@linaro.org>
> From: Alex Bennée [mailto:alex.bennee@linaro.org]
> >
> >>>
> >>>> >
> >>>> > Hope you've already found the solution (as the last post was on 2 May)
> >>>> > and it just got missed by the mailing list.
> >>>
> >>> As far as I know, RR is still broken in the current version.
> >>> The breakage was caused by the MTTCG implementation.
> >>> Alex Bennée tried to fix RR. Alex, have you found a solution?
> >>>
> >>> We are also trying to find a way to fix RR. It seems that we will reinvent the BQL for RR.
> >>
> >> I think the method outlined in my RFC is the way to go, essentially the
> >> RR mutex taking over for what the BQL did. The RFC patch hadn't
> >> hoisted the mutex for the additional devices so I'm just re-basing now
> >> and I'll see if I can make the changes for Igor's test case.
> >>
> >> --
> >> Alex Bennée
>
> Could you try:
>
> https://github.com/stsquad/qemu/tree/bql-and-replay-locks-v2
>
> And report back?
Most of the code looks reasonable.
Isn't it better to take the lock before touching the icount fields in the following function?
static void prepare_icount_for_run(CPUState *cpu)
{
    if (use_icount) {
        int insns_left;

        /* These should always be cleared by process_icount_data after
         * each vCPU execution. However u16.high can be raised
         * asynchronously by cpu_exit/cpu_interrupt/tcg_handle_interrupt
         */
        g_assert(cpu->icount_decr.u16.low == 0);
        g_assert(cpu->icount_extra == 0);

        cpu->icount_budget = tcg_get_icount_limit();
        insns_left = MIN(0xffff, cpu->icount_budget);
        cpu->icount_decr.u16.low = insns_left;
        cpu->icount_extra = cpu->icount_budget - insns_left;

        if (replay_mode != REPLAY_MODE_NONE) {
            replay_mutex_lock();
        }
    }
}
Pavel Dovgalyuk
Thread overview: 16+ messages
2017-03-23 8:05 [Qemu-devel] What is the best commit for record-replay? Igor R
2017-04-09 6:55 ` Pranith Kumar
2017-04-25 19:16 ` Igor R
2017-04-26 13:48 ` Alex Bennée
2017-05-02 12:42 ` Igor R
2017-09-18 12:02 ` Aleksandr Bezzubikov
2017-09-18 12:09 ` Aleksandr Bezzubikov
2017-09-19 9:17 ` Pavel Dovgalyuk
2017-09-19 9:30 ` Alex Bennée
2017-09-19 12:26 ` Aleksandr Bezzubikov
2017-09-19 14:25 ` Alex Bennée
2017-09-19 15:46 ` Alexander Bezzubikov
2017-09-20 6:17 ` Pavel Dovgalyuk [this message]
2017-10-31 11:27 ` Pavel Dovgalyuk
2017-10-31 14:27 ` Alex Bennée
2017-11-01 5:14 ` Pavel Dovgalyuk