From: KONRAD Frederic <fred.konrad@greensocs.com>
To: "Alex Bennée" <alex.bennee@linaro.org>
Cc: MTTCG Devel <mttcg@listserver.greensocs.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Pranith Kumar <bobby.prani@gmail.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	alvise rigo <a.rigo@virtualopensystems.com>
Subject: Re: [Qemu-devel] Status of my hacks on the MTTCG WIP branch
Date: Fri, 15 Jan 2016 15:49:54 +0100	[thread overview]
Message-ID: <56990712.2050107@greensocs.com> (raw)
In-Reply-To: <87a8o6dhog.fsf@linaro.org>



On 15/01/2016 15:46, Alex Bennée wrote:
> KONRAD Frederic <fred.konrad@greensocs.com> writes:
>
>> On 15/01/2016 15:24, Pranith Kumar wrote:
>>> Hi Alex,
>>>
>>> On Fri, Jan 15, 2016 at 8:53 AM, Alex Bennée <alex.bennee@linaro.org
>>> <mailto:alex.bennee@linaro.org>> wrote:
>>>> Can you try this branch:
>>>>
>>>>
>>> https://github.com/stsquad/qemu/tree/mttcg/multi_tcg_v8_wip_ajb_fix_locks-r1
>>>> I think I've caught all the things likely to screw up addressing.
>>>>
>>> I tried this branch and the boot hangs like follows:
>>>
>>> [    2.001083] random: systemd-udevd urandom read with 1 bits of
>>> entropy available
>>> main-loop: WARNING: I/O thread spun for 1000 iterations
>>> [   23.778970] INFO: rcu_sched detected stalls on CPUs/tasks: {}
>>> (detected by 0, t=2102 jiffies, g=-165, c=-166, q=83)
>>> [   23.780265] All QSes seen, last rcu_sched kthread activity 2101
>>> (4294939656-4294937555), jiffies_till_next_fqs=1, root ->qsmask 0x0
>>> [   23.781228] swapper/0       R  running task        0 0      0
>>> 0x00000080
>>> [   23.781977] Call trace:
>>> [   23.782375] [<ffffffc00008a4cc>] dump_backtrace+0x0/0x170
>>> [   23.782852] [<ffffffc00008a65c>] show_stack+0x20/0x2c
>>> [   23.783279] [<ffffffc0000c6ba0>] sched_show_task+0x9c/0xf0
>>> [   23.783746] [<ffffffc0000f244c>] rcu_check_callbacks+0x7b8/0x828
>>> [   23.784230] [<ffffffc0000f75c4>] update_process_times+0x40/0x74
>>> [   23.784723] [<ffffffc000107a60>] tick_sched_handle.isra.15+0x38/0x7c
>>> [   23.785247] [<ffffffc000107aec>] tick_sched_timer+0x48/0x84
>>> [   23.785705] [<ffffffc0000f7bb0>] __run_hrtimer+0x90/0x200
>>> [   23.786148] [<ffffffc0000f874c>] hrtimer_interrupt+0xec/0x268
>>> [   23.786612] [<ffffffc0003d9304>] arch_timer_handler_virt+0x38/0x48
>>> [   23.787120] [<ffffffc0000e9ac4>] handle_percpu_devid_irq+0x90/0x12c
>>> [   23.787621] [<ffffffc0000e53fc>] generic_handle_irq+0x38/0x54
>>> [   23.788093] [<ffffffc0000e5744>] __handle_domain_irq+0x68/0xc4
>>> [   23.788578] [<ffffffc000082478>] gic_handle_irq+0x38/0x84
>>> [   23.789035] Exception stack(0xffffffc00073bde0 to 0xffffffc00073bf00)
>>> [   23.789650] bde0: 00738000 ffffffc0 0073e71c ffffffc0 0073bf20
>>> ffffffc0 00086948 ffffffc0
>>> [   23.790356] be00: 000d848c ffffffc0 00000000 00000000 3ffcdb0c
>>> ffffffc0 00000000 01000000
>>> [   23.791030] be20: 38b97100 ffffffc0 0073bea0 ffffffc0 67f6e000
>>> 00000005 567f1c33 00000000
>>> [   23.791744] be40: 00748cf0 ffffffc0 0073be70 ffffffc0 c1e2e4a0
>>> ffffffbd 3a801148 ffffffc0
>>> [   23.792406] be60: 00000000 00000040 0073e000 ffffffc0 3a801168
>>> ffffffc0 97bbb588 0000007f
>>> [   23.793055] be80: 0021d7e8 ffffffc0 97b3d6ec 0000007f c37184d0
>>> 0000007f 00738000 ffffffc0
>>> [   23.793720] bea0: 0073e71c ffffffc0 006ff7e8 ffffffc0 007c8000
>>> ffffffc0 0073e680 ffffffc0
>>> [   23.794373] bec0: 0072fac0 ffffffc0 00000001 00000000 0073bf30
>>> ffffffc0 0050e9e8 ffffffc0
>>> [   23.795025] bee0: 00000000 00000000 0073bf20 ffffffc0 00086944
>>> ffffffc0 0073bf20 ffffffc0
>>> [   23.795721] [<ffffffc0000855a4>] el1_irq+0x64/0xc0
>>> [   23.796131] [<ffffffc0000d8488>] cpu_startup_entry+0x130/0x204
>>> [   23.796605] [<ffffffc0004fba38>] rest_init+0x78/0x84
>>> [   23.797028] [<ffffffc0006ca99c>] start_kernel+0x3a0/0x3b8
>>> [   23.797528] rcu_sched kthread starved for 2101 jiffies!
>>>
>>>
>>> I will try to debug and see where it is hanging.
>>>
>>> Thanks!
>>> --
>>> Pranith
>> Hi Pranith,
>>
>> I don't have time today to look into that.
>>
>> But I missed a tb_find_physical which happens while tb_lock is not held.
>> This hack should fix that (and probably slow things down):
>>
>> diff --git a/cpu-exec.c b/cpu-exec.c
>> index 903126f..25a005a 100644
>> --- a/cpu-exec.c
>> +++ b/cpu-exec.c
>> @@ -252,9 +252,9 @@ static TranslationBlock *tb_find_physical(CPUState *cpu,
>>        }
>>
>>        /* Move the TB to the head of the list */
>> -    *ptb1 = tb->phys_hash_next;
>> -    tb->phys_hash_next = tcg_ctx.tb_ctx.tb_phys_hash[h];
>> -    tcg_ctx.tb_ctx.tb_phys_hash[h] = tb;
>> +//    *ptb1 = tb->phys_hash_next;
>> +//    tb->phys_hash_next = tcg_ctx.tb_ctx.tb_phys_hash[h];
>> +//    tcg_ctx.tb_ctx.tb_phys_hash[h] = tb;
>>        return tb;
>>    }
> Hmm, not in my build; cpu_exec has:
>
>                  ...
>                  tb_lock();
>                  tb = tb_find_fast(cpu);
>                  ...
>
> Which I think is right. I can see that if the lock weren't held, breakage
> could occur when you manipulate the lookup, but I think we should keep
> the lock there and, if it proves to be a performance hit, come up with a
> safe optimisation. I think Paolo talked about using RCU-type locks.
That's definitely a performance hit.
OK, we should talk about that on Monday.
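
For reference, here is a minimal standalone sketch (not QEMU code; the node
and helper names are invented for illustration) of the RCU-style direction
mentioned above: readers walk the hash chain without taking any lock and
never modify it, while insertions stay serialised behind a tb_lock-like
mutex. Dropping the move-to-head reordering on the read path is exactly what
makes the traversal safe, which is all the hack above does.

/*
 * Standalone sketch, not QEMU code: an RCU-style lookup on a hash
 * chain.  All node and helper names are invented for illustration.
 * Readers traverse the chain with no lock and never modify it; the
 * only writer operation is insertion at the head, serialised by a
 * mutex playing the role of tb_lock.  Node removal/freeing is left
 * out on purpose: that is where a real grace period would be needed.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define HASH_SIZE 256

typedef struct Node {
    uint32_t key;
    struct Node *_Atomic next;
} Node;

static Node *_Atomic hash_table[HASH_SIZE];
static pthread_mutex_t writer_lock = PTHREAD_MUTEX_INITIALIZER; /* ~tb_lock */

/* Read side: lock-free, and crucially it never reorders the list. */
static Node *lookup(uint32_t key)
{
    Node *n = atomic_load_explicit(&hash_table[key % HASH_SIZE],
                                   memory_order_acquire);
    while (n) {
        if (n->key == key) {
            return n;
        }
        n = atomic_load_explicit(&n->next, memory_order_acquire);
    }
    return NULL;
}

/* Write side: insertion at the head of the chain, under the lock. */
static void insert(uint32_t key)
{
    unsigned h = key % HASH_SIZE;
    Node *n = malloc(sizeof(*n));

    n->key = key;
    pthread_mutex_lock(&writer_lock);
    atomic_store_explicit(&n->next,
                          atomic_load_explicit(&hash_table[h],
                                               memory_order_relaxed),
                          memory_order_relaxed);
    /* Publish the fully initialised node to concurrent readers. */
    atomic_store_explicit(&hash_table[h], n, memory_order_release);
    pthread_mutex_unlock(&writer_lock);
}

int main(void)
{
    insert(42);
    printf("lookup(42): %s\n", lookup(42) ? "found" : "missing");
    return 0;
}

In QEMU proper the read side would presumably sit under
rcu_read_lock()/rcu_read_unlock() so that blocks freed by a flush are only
reclaimed after a grace period; that is the part this toy version leaves out.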

Fred
>
> --
> Alex Bennée

Thread overview: 20+ messages
2016-01-12 17:29 [Qemu-devel] Status of my hacks on the MTTCG WIP branch Alex Bennée
2016-01-12 20:23 ` Pranith Kumar
2016-01-13 10:28   ` Alex Bennée
2016-01-14 13:10     ` Alex Bennée
2016-01-14 13:12       ` KONRAD Frederic
2016-01-14 13:58         ` Alex Bennée
2016-01-15 13:53   ` Alex Bennée
2016-01-15 14:24     ` Pranith Kumar
2016-01-15 14:30       ` KONRAD Frederic
2016-01-15 14:46         ` Alex Bennée
2016-01-15 14:49           ` KONRAD Frederic [this message]
2016-01-15 16:02             ` Paolo Bonzini
2016-01-15 14:32       ` alvise rigo
2016-01-15 14:51         ` Alex Bennée
2016-01-15 15:08           ` alvise rigo
2016-01-15 15:25             ` Alex Bennée
2016-01-15 16:34               ` alvise rigo
2016-01-15 16:59                 ` Alex Bennée
2016-01-18 19:09                   ` Alex Bennée
2016-01-19  8:31                     ` alvise rigo
