From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <516CAE8F.3030905@gmail.com>
Date: Tue, 16 Apr 2013 09:51:11 +0800
From: puckbee
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] The details of round robin of multi-vcpu in TCG mode

Hi there:

    Sorry to post this problem a second time; I thought my question was not described clearly the first time.

    I'm studying the execution details of multiple vCPUs in TCG mode.

    According to some articles about QEMU, the vCPUs in TCG mode are executed one by one, in a sequential (round-robin) way.

    I read the function [tcg_exec_all] in QEMU 1.3.0, shown below, but the implementation is not what I expected.

    [tcg_exec_all] will eventually call [cpu_exec], which executes a loop running the TBs in the code cache.

    So, how do these functions control the running time of each vCPU?

    That is, when does the execution of one vCPU return, so that the next_cpu in the loop of [tcg_exec_all] can run?

    Is this done with an alarm timer, or by some other method?

static void tcg_exec_all(void)
{
    int r;

    /* Account partial waits to the vm_clock.  */
    qemu_clock_warp(vm_clock);

    if (next_cpu == NULL) {
        next_cpu = first_cpu;
    }
    for (; next_cpu != NULL && !exit_request; next_cpu = next_cpu->next_cpu) {
        CPUArchState *env = next_cpu;
        CPUState *cpu = ENV_GET_CPU(env);

        qemu_clock_enable(vm_clock,
                          (env->singlestep_enabled & SSTEP_NOTIMER) == 0);

        if (cpu_can_run(cpu)) {
            r = tcg_cpu_exec(env);
            if (r == EXCP_DEBUG) {
                cpu_handle_guest_debug(env);
                break;
            }
        } else if (cpu->stop || cpu->stopped) {
            break;
        }
    }
    exit_request = 0;
}

Yours
Puck