* [Qemu-devel] The details of round robin of multi-vcpu in TCG mode
@ 2013-04-16 1:51 puckbee
2013-04-16 2:53 ` li guang
0 siblings, 1 reply; 3+ messages in thread
From: puckbee @ 2013-04-16 1:51 UTC (permalink / raw)
To: qemu-devel
Hi there:
Sorry to post this question a second time; perhaps it was not
described clearly before.
I'm studying the execution details of multi-vcpu in TCG mode.
According to some articles about Qemu, the vCPUs in TCG mode are
executed one by one, in a sequential round-robin fashion.
I read the function [tcg_exec_all] in Qemu 1.3.0, shown below, but
the implementation is not what I expected.
[tcg_exec_all] finally calls [cpu_exec], which runs a loop executing
the TBs in the code cache.
So, how do these functions control the running time of each vCPU?
That is, when does the execution of one vCPU return, so that the
next_cpu in the loop of [tcg_exec_all] can be executed?
Is it done using the alarm timer, or by some other method?
static void tcg_exec_all(void)
{
    int r;

    /* Account partial waits to the vm_clock. */
    qemu_clock_warp(vm_clock);

    if (next_cpu == NULL) {
        next_cpu = first_cpu;
    }
    for (; next_cpu != NULL && !exit_request; next_cpu = next_cpu->next_cpu) {
        CPUArchState *env = next_cpu;
        CPUState *cpu = ENV_GET_CPU(env);

        qemu_clock_enable(vm_clock,
                          (env->singlestep_enabled & SSTEP_NOTIMER) == 0);

        if (cpu_can_run(cpu)) {
            r = tcg_cpu_exec(env);
            if (r == EXCP_DEBUG) {
                cpu_handle_guest_debug(env);
                break;
            }
        } else if (cpu->stop || cpu->stopped) {
            break;
        }
    }
    exit_request = 0;
}
Yours
Puck
* Re: [Qemu-devel] The details of round robin of multi-vcpu in TCG mode
2013-04-16 1:51 [Qemu-devel] The details of round robin of multi-vcpu in TCG mode puckbee
@ 2013-04-16 2:53 ` li guang
2013-04-18 11:54 ` [Qemu-devel] Is Qemu now support x2apic in TCG mode? puckbee
0 siblings, 1 reply; 3+ messages in thread
From: li guang @ 2013-04-16 2:53 UTC (permalink / raw)
To: puckbee; +Cc: qemu-devel
You can find each chance for the main loop to exit by looking at
every call to cpu_loop_exit() in cpu_exec().
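To illustrate that mechanism, here is a minimal, self-contained
sketch (not QEMU code; fake_cpu_exec and the other names here are
invented stand-ins for cpu_exec()/tcg_exec_all()): a host alarm
signal sets an exit_request flag, the per-vCPU execution loop polls
the flag and returns, and the outer round-robin loop then advances
to the next vCPU.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t exit_request;

/* The alarm signal asks the currently running vCPU to yield. */
static void alarm_handler(int sig)
{
    (void)sig;
    exit_request = 1;
}

/* Stand-in for cpu_exec(): run "translated blocks" until asked to stop. */
static void fake_cpu_exec(int cpu_index)
{
    while (!exit_request) {
        /* ... execute one translation block ... */
    }
    printf("vCPU %d yielded\n", cpu_index);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = alarm_handler;
    sigaction(SIGALRM, &sa, NULL);

    /* Stand-in for tcg_exec_all(): round-robin over two vCPUs. */
    for (int cpu = 0; cpu < 2; cpu++) {
        exit_request = 0;
        alarm(1);              /* like QEMU's periodic alarm timer */
        fake_cpu_exec(cpu);
    }
    return 0;
}

The shape in QEMU 1.3 is the same: the alarm timer's signal handler
ends up calling cpu_exit() on the running vCPU, the generated code
notices the request at the next TB boundary (or cpu_loop_exit()
longjmps out), tcg_cpu_exec() returns, and the for loop in
tcg_exec_all() moves on to next_cpu.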
On Tue, 2013-04-16 at 09:51 +0800, puckbee wrote:
> Hi there:
>
> Sorry to post this question a second time; perhaps it was not
> described clearly before.
>
> I'm studying the execution details of multi-vcpu in TCG mode.
>
> According to some articles about Qemu, the vCPUs in TCG mode are
> executed one by one, in a sequential round-robin fashion.
>
> I read the function [tcg_exec_all] in Qemu 1.3.0, shown below, but
> the implementation is not what I expected.
>
> [tcg_exec_all] finally calls [cpu_exec], which runs a loop
> executing the TBs in the code cache.
>
> So, how do these functions control the running time of each vCPU?
>
> That is, when does the execution of one vCPU return, so that the
> next_cpu in the loop of [tcg_exec_all] can be executed?
>
> Is it done using the alarm timer, or by some other method?
>
> static void tcg_exec_all(void)
> {
>     int r;
>
>     /* Account partial waits to the vm_clock. */
>     qemu_clock_warp(vm_clock);
>
>     if (next_cpu == NULL) {
>         next_cpu = first_cpu;
>     }
>     for (; next_cpu != NULL && !exit_request; next_cpu = next_cpu->next_cpu) {
>         CPUArchState *env = next_cpu;
>         CPUState *cpu = ENV_GET_CPU(env);
>
>         qemu_clock_enable(vm_clock,
>                           (env->singlestep_enabled & SSTEP_NOTIMER) == 0);
>
>         if (cpu_can_run(cpu)) {
>             r = tcg_cpu_exec(env);
>             if (r == EXCP_DEBUG) {
>                 cpu_handle_guest_debug(env);
>                 break;
>             }
>         } else if (cpu->stop || cpu->stopped) {
>             break;
>         }
>     }
>     exit_request = 0;
> }
>
> Yours
> Puck