* [Qemu-devel] The reason behind block linking constraint?
@ 2011-08-18 6:33 陳韋任
2011-08-18 9:31 ` Max Filippov
0 siblings, 1 reply; 12+ messages in thread
From: 陳韋任 @ 2011-08-18 6:33 UTC (permalink / raw)
To: qemu-devel
Hi, all
I am trying to figure out why QEMU puts some constraints on block
linking (chaining). Taking x86 as an example, constraints are imposed
in two places: gen_goto_tb and cpu_exec.
----------------- gen_goto_tb (target-i386/translate.c) ---------------
    /* NOTE: we handle the case where the TB spans two pages here */
    if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
        (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK)) {
        /* jump to same page: we can use a direct jump */
        tcg_gen_goto_tb(tb_num);
        gen_jmp_im(eip);
        tcg_gen_exit_tb((tcg_target_long)tb + tb_num);
    } else {
        /* jump to another page: currently not optimized */
        gen_jmp_im(eip);
        gen_eob(s);
    }
-----------------------------------------------------------------------
----------------------- cpu_exec (cpu-exec.c) -------------------------
    /* see if we can patch the calling TB. When the TB
       spans two pages, we cannot safely do a direct
       jump. */
    if (next_tb != 0 && tb->page_addr[1] == -1) {
        tb_add_jump((TranslationBlock *)(next_tb & ~3), next_tb & 3, tb);
    }
-----------------------------------------------------------------------
Is it just that block linking across a page boundary is not optimized,
or are there correctness/safety issues to consider?
I did some experiments myself. First, I removed the if-else condition
in gen_goto_tb (always taking the if branch) and left cpu_exec alone. In this
case, user mode works fine, but system mode crashes while booting Linux.
Then I removed the "tb->page_addr[1]" check and left gen_goto_tb
alone. This time, both user mode and system mode work fine. I used the
disk image and user-mode tests downloaded from the website as the test
cases.
Could someone kindly explain why there are constraints on block
linking? Thanks!
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-18 6:33 [Qemu-devel] The reason behind block linking constraint? 陳韋任
@ 2011-08-18 9:31 ` Max Filippov
2011-08-18 9:39 ` 陳韋任
2011-08-20 20:54 ` Rob Landley
0 siblings, 2 replies; 12+ messages in thread
From: Max Filippov @ 2011-08-18 9:31 UTC (permalink / raw)
To: 陳韋任; +Cc: qemu-devel
> Hi, all
>
> I am trying to figure out why QEMU put some constraints on block
> linking (chaining). Take x86 as an example, there are two places
> put constraints on block linking, gen_goto_tb and cpu_exec.
>
> ----------------- gen_goto_tb (target-i386/translate.c) ---------------
> /* NOTE: we handle the case where the TB spans two pages here */
> if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
> (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK)) {
> /* jump to same page: we can use a direct jump */
> tcg_gen_goto_tb(tb_num);
> gen_jmp_im(eip);
> tcg_gen_exit_tb((tcg_target_long)tb + tb_num);
> } else {
> /* jump to another page: currently not optimized */
> gen_jmp_im(eip);
> gen_eob(s);
> }
> -----------------------------------------------------------------------
>
> ----------------------- cpu_exec (cpu-exec.c) -------------------------
> /* see if we can patch the calling TB. When the TB
> spans two pages, we cannot safely do a direct
> jump. */
> if (next_tb != 0 && tb->page_addr[1] == -1) {
> tb_add_jump((TranslationBlock *)(next_tb & ~3), next_tb & 3, tb);
> }
> -----------------------------------------------------------------------
>
> Is it just because we cannot optimize block linking which crosses page
> boundary, or there are some correctness/safety issues should be considered?
If we link a TB with another TB from a different page, then the
second TB may disappear when the memory mapping changes, and the
subsequent direct jump from the first TB will crash QEMU.
I guess this usually does not happen in user mode, because the
guest would not normally modify the memory mapping of executable code.
However, I suppose that it is still possible.
--
Thanks.
-- Max
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-18 9:31 ` Max Filippov
@ 2011-08-18 9:39 ` 陳韋任
2011-08-18 10:04 ` Max Filippov
2011-08-20 20:54 ` Rob Landley
1 sibling, 1 reply; 12+ messages in thread
From: 陳韋任 @ 2011-08-18 9:39 UTC (permalink / raw)
To: Max Filippov; +Cc: qemu-devel, 陳韋任
> If we link a TB with another TB from the different page, then the
> second TB may disappear when the memory mapping changes and the
> subsequent direct jump from the first TB will crash qemu.
Perhaps the guest OS swaps the second TB out of guest memory;
is that what you mean?
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-18 9:39 ` 陳韋任
@ 2011-08-18 10:04 ` Max Filippov
2011-09-24 7:00 ` 陳韋任
0 siblings, 1 reply; 12+ messages in thread
From: Max Filippov @ 2011-08-18 10:04 UTC (permalink / raw)
To: 陳韋任; +Cc: qemu-devel
>> If we link a TB with another TB from the different page, then the
>> second TB may disappear when the memory mapping changes and the
>> subsequent direct jump from the first TB will crash qemu.
>
> Perhaps the guest OS swap the second TB out of the guest memory,
> is it what you mean?
I meant a TLB change by, e.g., tlb_set_page. If you change a single page's
mapping, then all TBs in that page will be gone.
This may be the result of, e.g., page swapping or a task switch.
If there's no direct link between TBs then softmmu will be used during
the target TB search and softmmu will generate an appropriate guest
exception. See cpu_exec -> tb_find_fast -> tb_find_slow ->
get_page_addr_code.
But if there is a direct link, then softmmu has no chance to do it.
--
Thanks.
-- Max
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-18 9:31 ` Max Filippov
2011-08-18 9:39 ` 陳韋任
@ 2011-08-20 20:54 ` Rob Landley
2011-09-27 3:13 ` 陳韋任
1 sibling, 1 reply; 12+ messages in thread
From: Rob Landley @ 2011-08-20 20:54 UTC (permalink / raw)
To: Max Filippov; +Cc: qemu-devel, 陳韋任
On 08/18/2011 04:31 AM, Max Filippov wrote:
>> Hi, all
>>
>> I am trying to figure out why QEMU put some constraints on block
>> linking (chaining). Take x86 as an example, there are two places
>> put constraints on block linking, gen_goto_tb and cpu_exec.
>>
>> ----------------- gen_goto_tb (target-i386/translate.c) ---------------
>> /* NOTE: we handle the case where the TB spans two pages here */
>> if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
>> (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK)) {
>> /* jump to same page: we can use a direct jump */
>> tcg_gen_goto_tb(tb_num);
>> gen_jmp_im(eip);
>> tcg_gen_exit_tb((tcg_target_long)tb + tb_num);
>> } else {
>> /* jump to another page: currently not optimized */
>> gen_jmp_im(eip);
>> gen_eob(s);
>> }
>> -----------------------------------------------------------------------
>>
>> ----------------------- cpu_exec (cpu-exec.c) -------------------------
>> /* see if we can patch the calling TB. When the TB
>> spans two pages, we cannot safely do a direct
>> jump. */
>> if (next_tb != 0 && tb->page_addr[1] == -1) {
>> tb_add_jump((TranslationBlock *)(next_tb & ~3), next_tb & 3, tb);
>> }
>> -----------------------------------------------------------------------
>>
>> Is it just because we cannot optimize block linking which crosses page
>> boundary, or there are some correctness/safety issues should be considered?
>
> If we link a TB with another TB from the different page, then the
> second TB may disappear when the memory mapping changes and the
> subsequent direct jump from the first TB will crash qemu.
>
> I guess that this usually does not happen in usermode, because the
> guest would not modify executable code memory mapping. However I
> suppose that this is also possible.
Dynamic linking modifies guest code, requiring the page to be
retranslated. With lazy binding this can happen at any time, and
without PIE executables this can happen to just about any executable page.
Rob
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-18 10:04 ` Max Filippov
@ 2011-09-24 7:00 ` 陳韋任
2011-09-25 21:47 ` Max Filippov
0 siblings, 1 reply; 12+ messages in thread
From: 陳韋任 @ 2011-09-24 7:00 UTC (permalink / raw)
To: Max Filippov; +Cc: qemu-devel, 陳韋任
Hi, Max
> I meant TLB change by e.g. tlb_set_page. If you change single page
> mapping then all TBs in that page will be gone.
> This may be the result of e.g. a page swapping, or a task switch.
You said "all TBs in that page will be gone". Does it mean QEMU will
invalidate those TBs via, for example, tb_invalidate_phys_page_range? Or
does it just leave those TBs there (without executing them)? Because I don't
see functions like tb_invalidate_phys_page_range being called after
tlb_set_page, I am not sure what "gone" actually means.
> If there's no direct link between TBs then softmmu will be used during
> the target TB search and softmmu will generate an appropriate guest
> exception. See cpu_exec -> tb_find_fast -> tb_find_slow ->
> get_page_addr_code.
>
> But if there is a direct link, then softmmu has no chance to do it.
Let me try to describe the flow. Correct me if I am wrong. Assume tb1
and tb2 belong to different guest pages.
If there's NO direct link between tb1 and tb2, then after executing tb1,
control is transferred back to QEMU (cpu_exec), and QEMU then calls
tb_find_fast to find the next TB, i.e., tb2.
I assume that "all TBs in that page will be gone" means QEMU will
invalidate those TBs. If not, I think tb_find_fast will return tb2,
which should not be executed. So tb_find_fast calls tb_find_slow,
and tb_find_slow calls get_page_addr_code.
get_page_addr_code returns the guest physical address which
corresponds to the pc (of tb2). It looks up the TLB (env1->tlb_table) and
gets a TLB miss, since tlb_set_page has changed the mapping.
    if (unlikely(env1->tlb_table[mmu_idx][page_index].addr_code !=
                 (addr & TARGET_PAGE_MASK))) {
        ldub_code(addr);
    }
But I am not sure what happens after the TLB miss. You said the
softmmu will generate a guest exception. Taking x86 as an example,
do you mean the raise_exception_err in tlb_fill?
void tlb_fill(target_ulong addr, ...) {
    ret = cpu_x86_handle_mmu_fault(env, addr, is_write, mmu_idx, 1);
    if (ret) {
        if (retaddr) {
            ...
        }
        raise_exception_err(env->exception_index, env->error_code);
    }
}
Thanks!
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-09-24 7:00 ` 陳韋任
@ 2011-09-25 21:47 ` Max Filippov
2011-09-26 10:49 ` 陳韋任
0 siblings, 1 reply; 12+ messages in thread
From: Max Filippov @ 2011-09-25 21:47 UTC (permalink / raw)
To: 陳韋任; +Cc: qemu-devel
> > I meant TLB change by e.g. tlb_set_page. If you change single page
> > mapping then all TBs in that page will be gone.
> > This may be the result of e.g. a page swapping, or a task switch.
>
> You said "all TBs in that page will be gone". Does it mean QEMU will
> invalidate those TBs by for example, tb_invalidate_phys_page_range? Or
> it just leave those TBs there (without executing them)? Because I don't
> see functions like tb_invalidate_phys_page_range are called after
> tlb_set_page, I am not sure what "gone" actually means.
Well, my explanation sucks. Let me say it another way, more precisely:
- you have two pieces of code in different pages, and one of them jumps to the other;
- and you have two TBs, tb1 for the first piece and tb2 for the second;
- and you link them, so there's a direct jump from tb1 to tb2;
- now you change the mapping of the code page that contains the second piece of code;
- after that, there's other code (or no code at all) at the place where the second piece of code used to be;
- but the jump to tb2 still remains in tb1.
> > If there's no direct link between TBs then softmmu will be used during
> > the target TB search and softmmu will generate an appropriate guest
> > exception. See cpu_exec -> tb_find_fast -> tb_find_slow ->
> > get_page_addr_code.
> >
> > But if there is a direct link, then softmmu has no chance to do it.
>
> Let me try to describe the flow. Correct me if I am wrong. Assume tb1
> and tb2 belong to different guest pages.
>
> If there's NO direct link between tb1 and tb2. After executing tb1,
> the control is transfered back to QEMU (cpu_exec), QEMU then call
> tb_find_fast to find the next TB, i.e., tb2.
Right.
> I assume that "all TBs in that page will be gone" means QEMU will
> invalidate those TBs.
No, it won't. I should have said "all code in that page will be gone", sorry for the confusion.
> If not, I think tb_find_fast will return tb2 which should not be executed.
It won't either. tb_find_fast searches for the TB this way:
tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
but a 'page mapping change' implies a TLB flush, at least for that page.
Both tlb_flush and tlb_flush_page clear env->tb_jmp_cache, so tb_find_fast will have to call tb_find_slow.
> So, tb_find_fast calls tb_find_slow,
> then tb_find_slow calls get_page_addr_code.
>
> get_page_addr_code return the guest physical address which
> corresponds to the pc (tb2). It looks up TLB (env1->tlb_table), and
> get a TLB miss since tlb_set_page has changed the mapping.
>
> if (unlikely(env1->tlb_table[mmu_idx][page_index].addr_code !=
> (addr & TARGET_PAGE_MASK))) {
> ldub_code(addr);
> }
>
> But I am not sure what happen after the TLB miss. You said the
> softmmu will generate a guest exception. Take x86 as an example,
> do you mean the raise_exception_err in tlb_fill?
>
> void tlb_fill(target_ulong addr, ...) {
> ret = cpu_x86_handle_mmu_fault(env, addr, is_write, mmu_idx, 1);
> if (ret) {
> if (retaddr) {
> }
> }
> raise_exception_err(env->exception_index, env->error_code);
> }
Exactly. The exception will be raised inside the guest and the guest will execute its page fault handler or whatever.
Thanks.
-- Max
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-09-25 21:47 ` Max Filippov
@ 2011-09-26 10:49 ` 陳韋任
2011-09-26 11:41 ` Max Filippov
0 siblings, 1 reply; 12+ messages in thread
From: 陳韋任 @ 2011-09-26 10:49 UTC (permalink / raw)
To: Max Filippov; +Cc: qemu-devel, 陳韋任
Hi, Max
Sorry, I have to make sure whether what you're talking about is guest or host.
Let me try.
> Well, my explanation sucks. Let's say it other way, more precisely:
> - you have two pieces of code in different pages, one of them jumps to the other;
guest code in different guest pages.
> - and you have two TBs, tb1 for the first piece and tb2 for the second;
tb1 and tb2 are in the code cache (host binary).
> - and you link them and there's direct jump from tb1 to tb2;
> - now you change the mapping of the code page that contains second piece of code;
change the mapping of the guest page which contains the second piece of
guest binary. Mapping the guest page to what? A host virtual address?
> - after that there's another code (or no code at all) at the place where the second piece of code used to be;
> - but the jump to tb2 still remains in tb1.
there's other code (or no code at all) in the guest page which
used to contain the second piece of guest binary.
So if we execute tb2, it might make wrong memory accesses through
the mapping of the guest page. Am I right?
> > I assume that "all TBs in that page will be gone" means QEMU will
> > invalidate those TBs.
>
> No, it won't. I had to say "all code in that page will be gone", sorry for the confusion.
O.K., here a TB is in the code cache, a page is a guest page, and code is guest
binary. So the second piece of guest binary in that guest page will be
gone, but the TBs related to that guest page still remain in the code cache.
No invalidation here.
> > If not, I think tb_find_fast will return tb2 which should not be executed.
>
> It won't either. tb_find_fast searches tb this way:
>
> tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
>
> but 'page mapping change' implies TLB flush, at least for that page.
> Both tlb_flush and tlb_flush_page will clear env->tb_jmp_cache and tb_find_fast will have to call tb_find_slow.
Yes, I see QEMU uses memset to clear env->tb_jmp_cache while doing
tlb_flush.
> Exactly. The exception will be raised inside the guest and the guest will execute its page fault handler or whatever.
Thanks, Max. Although I still don't totally understand how softmmu
works in QEMU, the whole picture is much clearer to me now. And
about the page-boundary constraint on (direct) block linking,
if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
    (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK)) {
I guess this is because it would be too complicated to track all the links
that jump into a given (guest) page. A guest page might contain hundreds of TBs.
If the guest page is gone, then unlinking them is not an easy thing to do.
Does this make sense?
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-09-26 10:49 ` 陳韋任
@ 2011-09-26 11:41 ` Max Filippov
2011-09-27 2:40 ` 陳韋任
0 siblings, 1 reply; 12+ messages in thread
From: Max Filippov @ 2011-09-26 11:41 UTC (permalink / raw)
To: 陳韋任; +Cc: qemu-devel
> Sorry, I have to be sure what you talked about is guest or host.
> Let me try.
>
>> Well, my explanation sucks. Let's say it other way, more precisely:
>> - you have two pieces of code in different pages, one of them jumps to the other;
>
> guest code in different guest pages.
Right.
>> - and you have two TBs, tb1 for the first piece and tb2 for the second;
>
> tb1 and tb2 are in the code cache (host binary).
Right.
>> - and you link them and there's direct jump from tb1 to tb2;
>> - now you change the mapping of the code page that contains second piece of code;
>
> change the mapping of the guest page which contains second piece of
> guest binary. Mapping guest page to what? Host virtual address?
Mapping of guest physical memory to guest virtual memory. Change in
the guest TLB. If we're talking about i386 guest that's change in the
page table + TLB flush, for the changed page or for the whole TLB.
>> - after that there's another code (or no code at all) at the place where the second piece of code used to be;
>> - but the jump to tb2 still remains in tb1.
>
> there's another code (or no code at all) at the guest page which
> used to contain second piece of guest binary.
At the virtual addresses of that guest page, right.
> So if we execute tb2, it might have wrong memory access through
> the mapping of guest page. Am I right?
If we execute tb2, it's not what guest would expect us to do at least.
>> > I assume that "all TBs in that page will be gone" means QEMU will
>> > invalidate those TBs.
>>
>> No, it won't. I had to say "all code in that page will be gone", sorry for the confusion.
>
> O.K., here TB is in the code cache, page is guest page, code is guest
> binary. So the second piece of guest binary in that guest page will be
> gone, but TBs related to the guest page still remain in the code cache.
> No invalidation here.
Right.
>> > If not, I think tb_find_fast will return tb2 which should not be executed.
>>
>> It won't either. tb_find_fast searches tb this way:
>>
>> tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
>>
>> but 'page mapping change' implies TLB flush, at least for that page.
>> Both tlb_flush and tlb_flush_page will clear env->tb_jmp_cache and tb_find_fast will have to call tb_find_slow.
>
> Yes, I see QEMU use memset to clear env->tb_jmp_cache while doing
> tlb_flush.
>
>> Exactly. The exception will be raised inside the guest and the guest will execute its page fault handler or whatever.
>
> Thanks, Max. Although I still doesn't totally understand how softmmu
> is done in QEMU, but the whole picture is much clear to me now. And
> about the page boundary restraint of (direct) block linking,
>
> if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
> (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK)) {
>
> I guess this is because it'll be too complicated to track all links
> jump to this (guest) page. A guest page might contains hundreds of TBs.
> If the guest page is gone, then it's not a easy thing to do unlinking.
> Does this make sense?
I'm not familiar with the motivation for the current implementation.
I guess that tracking otherwise-linkable cross-page jumps just isn't
worth it, because such jumps are rare.
I don't have any numbers, though, but I think that qemu profiling may
be used to get them.
--
Thanks.
-- Max
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-09-26 11:41 ` Max Filippov
@ 2011-09-27 2:40 ` 陳韋任
0 siblings, 0 replies; 12+ messages in thread
From: 陳韋任 @ 2011-09-27 2:40 UTC (permalink / raw)
To: Max Filippov; +Cc: qemu-devel, 陳韋任
O.K., now I have to make sure whether it's guest virtual or guest physical.
Correct me if I am wrong.
> >> - now you change the mapping of the code page that contains second piece of code;
> >
> > change the mapping of the guest page which contains second piece of
> > guest binary. Mapping guest page to what? Host virtual address?
>
> Mapping of guest physical memory to guest virtual memory. Change in
> the guest TLB. If we're talking about i386 guest that's change in the
> page table + TLB flush, for the changed page or for the whole TLB.
The guest OS might swap out a guest physical page; then it has to change
the guest page table (the mapping between guest virtual and guest physical).
Here, guest TLB means env->tlb_table, right? So how the page table is changed
is left to the guest OS, and QEMU takes care of the guest TLB (env->tlb_table).
> >> - after that there's another code (or no code at all) at the place where the second piece of code used to be;
> >> - but the jump to tb2 still remains in tb1.
> >
> > there's another code (or no code at all) at the guest page which
> > used to contain second piece of guest binary.
>
> At the virtual addresses of that guest page, right.
Let's assume first that there is only the (guest) page table, no TLB (env->tlb_table).
When we use the virtual address of that guest page to index the page table,
there should be a (guest) page fault, since the (guest) physical page
has been swapped out.
Then here comes the TLB (env->tlb_table). After tracing the code, I
think the TLB is used to do GVA (guest virtual address) -> HVA (host virtual
address) translation. We use the virtual address of that guest page to
index the TLB first, but since the guest page table change comes along
with a TLB flush, QEMU will try to walk the guest page table and then raise a
(guest) page fault.
> > So if we execute tb2, it might have wrong memory access through
> > the mapping of guest page. Am I right?
>
> If we execute tb2, it's not what guest would expect us to do at least.
At least it should trigger a (guest) page fault.
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-08-20 20:54 ` Rob Landley
@ 2011-09-27 3:13 ` 陳韋任
2011-09-27 13:27 ` Rob Landley
0 siblings, 1 reply; 12+ messages in thread
From: 陳韋任 @ 2011-09-27 3:13 UTC (permalink / raw)
To: Rob Landley; +Cc: Max Filippov, qemu-devel, 陳韋任
Hi, Rob
> >> Is it just because we cannot optimize block linking which crosses page
> >> boundary, or there are some correctness/safety issues should be considered?
> >
> > If we link a TB with another TB from the different page, then the
> > second TB may disappear when the memory mapping changes and the
> > subsequent direct jump from the first TB will crash qemu.
> >
> > I guess that this usually does not happen in usermode, because the
> > guest would not modify executable code memory mapping. However I
> > suppose that this is also possible.
>
> Dynamic linking modifies guest code, requiring the page to be
> retranslated. With lazy binding this can happen at any time, and
> without PIE executables this can happen to just about any executable page.
Max and I have had some discussion about the page-boundary constraint
on block linking. Maybe it's not worth tracking cross-page block
links for possible later block unchaining, so there is a page-boundary
constraint.
You said dynamic linking requires the page to be retranslated.
Does that imply that if there were NO page-boundary constraint, user
mode might crash? If so, does it occur frequently? Maybe small programs
just work fine without such a constraint, and I have to run something
big to make QEMU crash?
Thanks!
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
* Re: [Qemu-devel] The reason behind block linking constraint?
2011-09-27 3:13 ` 陳韋任
@ 2011-09-27 13:27 ` Rob Landley
0 siblings, 0 replies; 12+ messages in thread
From: Rob Landley @ 2011-09-27 13:27 UTC (permalink / raw)
To: 陳韋任; +Cc: Max Filippov, qemu-devel
On 09/26/2011 10:13 PM, 陳韋任 wrote:
> Hi, Rob
>
>>>> Is it just because we cannot optimize block linking which crosses page
>>>> boundary, or there are some correctness/safety issues should be considered?
>>>
>>> If we link a TB with another TB from the different page, then the
>>> second TB may disappear when the memory mapping changes and the
>>> subsequent direct jump from the first TB will crash qemu.
>>>
>>> I guess that this usually does not happen in usermode, because the
>>> guest would not modify executable code memory mapping. However I
>>> suppose that this is also possible.
>>
>> Dynamic linking modifies guest code, requiring the page to be
>> retranslated. With lazy binding this can happen at any time, and
>> without PIE executables this can happen to just about any executable page.
>
> Max and I have some discussion about the page boundary constraint
> of block linking. Maybe it's not worth to track cross-page block
> linking, for latter possible block unchaining. So there is a page
> boundary constraint.
>
> You said dynamic linking requires the page to be retranslated.
> Does that imply if there is NO page boundary constraint, user
> mode might crash? If so, does it occur frequently? Maybe small program
> just works fine without such constraint, I have to run something
> big to make QEMU crash?
The constraints you're talking about are on the translated code; dynamic
linking happens on the target code. Changes to the target code require
regenerating the translated code, which happens with page granularity.
Rob