From: dovgaluk <dovgaluk@ispras.ru>
To: "Lluís Vilanova" <vilanova@ac.upc.edu>
Cc: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>,
	mttcg@greensocs.com, "Paolo Bonzini" <pbonzini@redhat.com>,
	"Claudio Fontana" <claudio.fontana@huawei.com>,
	"Mark Burton" <mark.burton@greensocs.com>,
	"Alvise Rigo" <a.rigo@virtualopensystems.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	"Emilio G. Cota" <cota@braap.org>,
	"Alexander Spyridakis" <a.spyridakis@virtualopensystems.com>,
	"Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
	mttcg-request@listserver.greensocs.com,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"KONRAD Frédéric" <fred.konrad@greensocs.com>
Subject: Re: [Qemu-devel] MTTCG Tasks (kvmforum summary)
Date: Fri, 04 Sep 2015 16:10:05 +0300
Message-ID: <34d8946d03abc7568bb92085bf406de0@ispras.ru>
In-Reply-To: <87pp1ynxv2.fsf@fimbulvetr.bsc.es>

Lluís Vilanova wrote on 2015-09-04 16:00:
> Mark Burton writes:
> [...]
>>>>> * What to do about icount?
>>>>> 
>>>>> What is the impact of multi-thread on icount? Do we need to
>>>>> disable it for MTTCG or can it be correct per-cpu? Can it be
>>>>> updated lock-step?
>>>>> 
>>>>> We need some input from the guys that use icount the most.
>>>> 
>>>> That means Edgar. :)
>>> 
>>> Hi!
>>> 
>>> IMO it would be nice if we could run the cores in some kind of
>>> lock-step, with a configurable number of instructions (X) that they
>>> can run ahead of time.
>>> 
>>> For example, if X is 10000, every thread/core would checkpoint at
>>> 10000-insn boundaries and wait for the other cores. Between these
>>> checkpoints, the cores will not be in sync. We might need to
>>> consider synchronizing at I/O accesses as well, to avoid weird
>>> timing issues when reading counter registers, for example.
>>> 
>>> Of course the devil will be in the details but an approach roughly
>>> like that sounds useful to me.
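
Just to make the checkpointing idea concrete, it could be built around a
plain barrier: every vCPU thread burns through its quantum and then
waits for the rest. This is only a self-contained sketch with made-up
names, not actual QEMU code (compile with -pthread):

/* Each simulated vCPU runs QUANTUM_INSNS "instructions", then waits at
 * a barrier, so no core ever gets a full quantum ahead of another. */
#include <inttypes.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS       4
#define QUANTUM_INSNS 10000   /* the configurable X */
#define NR_QUANTA     3

static pthread_barrier_t checkpoint;

static void *vcpu_thread(void *arg)
{
    int cpu = (int)(intptr_t)arg;
    uint64_t icount = 0;

    for (int q = 0; q < NR_QUANTA; q++) {
        /* stand-in for running TCG until the budget is exhausted;
         * between checkpoints the cores are not in sync */
        for (int i = 0; i < QUANTUM_INSNS; i++) {
            icount++;
        }
        /* checkpoint at the quantum boundary: wait for all cores */
        pthread_barrier_wait(&checkpoint);
    }
    printf("cpu %d stopped at icount %" PRIu64 "\n", cpu, icount);
    return NULL;
}

int main(void)
{
    pthread_t threads[NR_CPUS];

    pthread_barrier_init(&checkpoint, NULL, NR_CPUS);
    for (int i = 0; i < NR_CPUS; i++) {
        pthread_create(&threads[i], NULL, vcpu_thread,
                       (void *)(intptr_t)i);
    }
    for (int i = 0; i < NR_CPUS; i++) {
        pthread_join(threads[i], NULL);
    }
    pthread_barrier_destroy(&checkpoint);
    return 0;
}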
> 
>> And "works" in other domains.
>> Theoretically we don't need to sync at I/O (dynamic quanta); for most
>> systems that have 'normal' I/O it's usually less efficient, I believe.
>> However, the trouble is that the user typically doesn't know, and
>> mucking about with quantum lengths, dynamic quantum switches etc. is
>> probably a royal pain in the butt. And if you don't set your quantum
>> right, the thing will run really slowly (or will break)...
> 
>> The choice is between a rock and a hard place. Dynamic quanta risk
>> being slow (you'll be forcing an expensive 'sync' - all CPUs will
>> have to exit, etc.) on each I/O access from each core... not great.
>> Syncing with host time (e.g. each CPU tries to sync with the host
>> clock as best it can) will fail when one CPU or another can't keep
>> up... In the end you're left handing the user a nice long bit of
>> string and a message saying "hang yourself here".
> 
> That price would not be paid when icount is disabled. Well, the code
> complexity price is always paid... I meant runtime :)
> 
> Then, I think this depends on what type of guarantees you require from
> icount. I see two possible semantics:
> 
> * All CPUs are *exactly* synchronized at icount granularity
> 
>   This means that every icount instructions, everyone has to stop and
>   synchronize.
> 
> * All CPUs are *loosely* synchronized at icount granularity
> 
>   You can implement it in a way that ensures that every cpu has *at least*
>   reached a certain timestamp. So cpus can keep on running nonetheless.
> 
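
The loose variant can be as cheap as having every cpu publish its own
counter and reporting the minimum as the guest clock, so nobody ever
blocks. Another sketch with invented names, not QEMU code:

/* The reported clock is the minimum over all per-cpu counters, i.e. a
 * timestamp that every cpu is guaranteed to have at least reached. */
#include <stdatomic.h>
#include <stdint.h>

#define NR_CPUS 4

/* each vcpu publishes its own instruction counter as it runs */
static _Atomic uint64_t per_cpu_icount[NR_CPUS];

static uint64_t loose_virtual_icount(void)
{
    uint64_t min = UINT64_MAX;

    for (int i = 0; i < NR_CPUS; i++) {
        uint64_t v = atomic_load(&per_cpu_icount[i]);
        if (v < min) {
            min = v;
        }
    }
    return min;
}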

Does this third possibility look sane?

* All CPUs synchronize at shared memory operations.
   When somebody tries to read/write shared memory, it has to wait
   until all the others reach the same icount (see the sketch below).
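
Roughly like this, building on the per_cpu_icount fragment above
(spinning is only for illustration; a real implementation would park
the thread instead):

#include <sched.h>

/* A cpu about to touch shared memory blocks until every other cpu has
 * caught up to its own icount; slower cpus keep running, so this
 * terminates. Afterwards the access is deterministically ordered. */
static void sync_before_shared_access(int cpu)
{
    uint64_t mine = atomic_load(&per_cpu_icount[cpu]);

    for (int i = 0; i < NR_CPUS; i++) {
        if (i == cpu) {
            continue;
        }
        while (atomic_load(&per_cpu_icount[i]) < mine) {
            sched_yield();
        }
    }
}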

> The downside is that the latter loses the ability for reproducible
> runs, which IMHO are useful. A more complex option is to merge both:
> icount sets the "synchronization granularity" and another parameter
> sets the maximum delta between cpus (i.e., set it to 0 to have the
> first option, and infinite for the second).
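
That merged option would only add one check on top of the fragments
above: at every granularity boundary a cpu stalls while it is more than
max_delta ahead of the slowest one. Again a sketch with invented names:

/* max_delta == 0 gives exact lock-step at every boundary; UINT64_MAX
 * gives the fully loose behaviour. Uses loose_virtual_icount() and
 * per_cpu_icount from the fragments above. */
static void maybe_throttle(int cpu, uint64_t granularity,
                           uint64_t max_delta)
{
    uint64_t mine = atomic_load(&per_cpu_icount[cpu]);

    if (mine % granularity != 0) {
        return; /* only check at granularity boundaries */
    }
    /* mine >= the minimum, so the subtraction cannot wrap */
    while (mine - loose_virtual_icount() > max_delta) {
        sched_yield();
    }
}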


Pavel Dovgalyuk
