* [PATCH] 0/2 Patches to further split kvm_init
@ 2007-11-29 8:16 Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA394B2-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Zhang, Xiantao @ 2007-11-29 8:16 UTC (permalink / raw)
To: avi-atKUWr5tajBWk0Htik3J/w
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
[1/2] Fix missing bad_page free logic for possible failures of kvm_init.
[2/2] Move kvm_vcpu_cache to x86.c, since it belongs to the x86-specific part.
Signed-off-by: Zhang Xiantao <xiantao.zhang-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
-------------------------------------------------------------------------
SF.Net email is sponsored by: The Future of Linux Business White Paper
from Novell. From the desktop to the data center, Linux is going
mainstream. Let it simplify your IT future.
http://altfarm.mediaplex.com/ad/ck/8857-50307-18918-4
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA394B2-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-11-29 9:59 ` Christian Ehrhardt
[not found] ` <474E8D88.4090508-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Christian Ehrhardt @ 2007-11-29 9:59 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA,
avi-atKUWr5tajBWk0Htik3J/w
Hi Xiantao,
it looks good to me to move kvm_vcpu_cache out to the x86-specific code, but I'd like to suggest going a bit further.
After your patch the kvm_vcpu_cache structure is only used in x86/svm/vmx.c, so we could prevent mistakes in two ways.
I'm sending two extension patches that fit on top of your two-patch queue as a suggestion, and therefore call them 3/2 and 4/2.
[3/2] move_kvm_cpu_cache_to_x86_header
To prevent misuse of this x86 structure in generic code, the definition moves from kvm.h to x86.h.
[4/2] rename_kvm_cpu_cache_x86
Renamed the kvm_vcpu_cache structure to kvm_x86_vcpu_cache to make it clear to anyone who sees that variable in the code in future that it's x86 only.
Zhang, Xiantao wrote:
> [1/2] Fix missing bad_page free logic for possible failures of kvm_init.
> [2/2] Move kvm_vcpu_cache to x86.c, since it belongs to the x86-specific part.
>
> Signed-off-by: Zhang Xiantao <xiantao.zhang-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
>
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel
--
Grüsse / regards,
Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization
+49 7031/16-3385
Ehrhardt-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org
Ehrhardt-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org
IBM Deutschland Entwicklung GmbH
Vorsitzender des Aufsichtsrats: Johann Weihen
Geschäftsführung: Herbert Kircher
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474E8D88.4090508-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org>
@ 2007-11-30 7:43 ` Avi Kivity
[not found] ` <474FBF0D.7020601-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 7:43 UTC (permalink / raw)
To: Christian Ehrhardt
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA,
Zhang, Xiantao
Christian Ehrhardt wrote:
> Hi Xiantao,
> it looks good to me to move kvm_vcpu_cache out to the x86 specific code
Why is that? Do other archs not want kvm_vcpu_cache, or is it just the
alignment?
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FBF0D.7020601-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 8:27 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397B9-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-11-30 11:52 ` Carsten Otte
1 sibling, 1 reply; 19+ messages in thread
From: Zhang, Xiantao @ 2007-11-30 8:27 UTC (permalink / raw)
To: Avi Kivity, Christian Ehrhardt
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Avi Kivity wrote:
> Christian Ehrhardt wrote:
>> Hi Xiantao,
>> it looks good to me to move kvm_vcpu_cache out to the x86 specific
>> code
>
> Why is that? Do other archs not want kvm_vcpu_cache, or is it just
> the alignment?
At least we haven't run across similar alignment requirements on IA64.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397B9-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-11-30 8:36 ` Avi Kivity
[not found] ` <474FCB79.2010008-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 8:36 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Zhang, Xiantao wrote:
> Avi Kivity wrote:
>
>> Christian Ehrhardt wrote:
>>
>>> Hi Xiantao,
>>> it looks good to me to move kvm_vcpu_cache out to the x86 specific
>>> code
>>>
>> Why is that? Do other archs not want kvm_vcpu_cache, or is it just
>> the alignment?
>>
> At least we haven't run across similar alignment requirements on IA64.
>
What I mean is, other archs do require kvm_vcpu_cache (without the
alignment), so why move the code? Just make the alignment arch
dependent with a #define.
Oh, and since the code is written as
> - /* A kmem cache lets us meet the alignment requirements of
> fx_save. */
> - kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size,
> - __alignof__(struct kvm_vcpu),
> - 0, NULL);
> - if (!kvm_vcpu_cache) {
If other archs don't require special alignment for kvm_vcpu,
__alignof__(struct kvm_vcpu) will return the natural alignment for that
arch, and no memory will be wasted.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FCB79.2010008-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 8:50 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397CF-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Zhang, Xiantao @ 2007-11-30 8:50 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> Christian Ehrhardt wrote:
>>>
>>>> Hi Xiantao,
>>>> it looks good to me to move kvm_vcpu_cache out to the x86 specific
>>>> code
>>>>
>>> Why is that? Do other archs not want kvm_vcpu_cache, or is it just
>>> the alignment?
>>>
>> At least we haven't run across similar alignment requirements on IA64.
>>
>
> What I mean is, other archs do require kvm_vcpu_cache (without the
> alignment), so why move the code? Just make the alignment arch
> dependent with a #define.
I think IA64 doesn't need this logic at all, hence the move. :)
> Oh, and since the code is written as
>
>> - /* A kmem cache lets us meet the alignment requirements of
>> fx_save. */
>> - kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size,
>> - __alignof__(struct kvm_vcpu),
>> - 0, NULL);
>> - if (!kvm_vcpu_cache) {
>
> If other archs don't require special alignment for kvm_vcpu,
> __alignof__(struct kvm_vcpu) will return the natural alignment for
> that arch, and no memory will be wasted.
We use a very different method to allocate kvm_vcpu memory on the IA64
side, so we have to set vcpu_size to zero. But if vcpu_size is zero,
kmem_cache_create returns an error, this logic can't handle that error,
and kvm_init is aborted. Otherwise, it wastes memory.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397CF-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-11-30 9:04 ` Avi Kivity
[not found] ` <474FD21E.8030900-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 9:04 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Zhang, Xiantao wrote:
> Avi Kivity wrote:
>
>> Zhang, Xiantao wrote:
>>
>>> Avi Kivity wrote:
>>>
>>>
>>>> Christian Ehrhardt wrote:
>>>>
>>>>
>>>>> Hi Xiantao,
>>>>> it looks good to me to move kvm_vcpu_cache out to the x86 specific
>>>>> code
>>>>>
>>>>>
>>>> Why is that? Do other archs not want kvm_vcpu_cache, or is it just
>>>> the alignment?
>>>>
>>>>
>>> At least we haven't run across similar alignment requirements on IA64.
>>>
>>>
>> What I mean is, other archs do require kvm_vcpu_cache (without the
>> alignment), so why move the code? Just make the alignment arch
>> dependent with a #define.
>>
>
> I think IA64 doesn't need this logic at all, hence the move. :)
>
>
Ah, I see. It isn't just the alignment. How do you allocate kvm_vcpu, then?
What about s390 and powerpc? I imagine they don't have an alignment
issue, but do they have a totally unique way of allocating vcpus as well?
Maybe we should just #ifndef CONFIG_IA64 (or #ifdef
CONFIG_HAVE_SPECIAL_VCPU_ALLOC) this bit instead of duplicating it for
s390 and ppc.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FD21E.8030900-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 9:14 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397EA-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-11-30 9:52 ` Christian Ehrhardt
1 sibling, 1 reply; 19+ messages in thread
From: Zhang, Xiantao @ 2007-11-30 9:14 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>> Avi Kivity wrote:
>>>>
>>>>
>>>>> Christian Ehrhardt wrote:
>>>>>
>>>>>
>>>>>> Hi Xiantao,
>>>>>> it looks good to me to move kvm_vcpu_cache out to the x86
>>>>>> specific code
>>>>>>
>>>>>>
>>>>> Why is that? Do other archs not want kvm_vcpu_cache, or is it
>>>>> just the alignment?
>>>>>
>>>>>
>>>> At least we haven't run across similar alignment requirements on IA64.
>>>>
>>>>
>>> What I mean is, other archs do require kvm_vcpu_cache (without the
>>> alignment), so why move the code? Just make the alignment arch
>>> dependent with a #define.
>>>
>>
>> I think IA64 doesn't need this logic at all, hence the move. :)
>>
>>
>
> Ah, I see. It isn't just the alignment. How do you allocate
> kvm_vcpu, then?
For every VM, we allocate a big chunk of memory for structure
allocation. A vcpu is always 64k aligned through our allocation
mechanism, so we don't care about the alignment issue. :)
> What about s390 and powerpc? I imagine they don't have an alignment
> issue, but do they have a totally unique way of allocating vcpus as
> well?
>
> Maybe we should just #ifndef CONFIG_IA64 (or #ifdef
> CONFIG_HAVE_SPECIAL_VCPU_ALLOC) this bit instead of duplicating it for
> s390 and ppc.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA397EA-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-11-30 9:51 ` Avi Kivity
[not found] ` <474FDD34.9020807-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 9:51 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Zhang, Xiantao wrote:
>>>
>> Ah, I see. It isn't just the alignment. How do you allocate
>> kvm_vcpu, then?
>>
>
> For every VM, we allocate a big chunk of memory for structure
> allocation. A vcpu is always 64k aligned through our allocation
> mechanism, so we don't care about the alignment issue. :)
>
I see. Can you explain why you do that? Do you have a special
allocator in your guest-resident vmm module?
I don't think other archs need that, so a CONFIG_KVM_ARCH_VCPU_ALLOC
approach seems better than copying all that code.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FD21E.8030900-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-30 9:14 ` Zhang, Xiantao
@ 2007-11-30 9:52 ` Christian Ehrhardt
1 sibling, 0 replies; 19+ messages in thread
From: Christian Ehrhardt @ 2007-11-30 9:52 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
carsteno-tA70FqPdS9bQT0dZR+AlfA, Hollis Blanchard,
Zhang, Xiantao
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>> Avi Kivity wrote:
>>>>
>>>>
>>>>> Christian Ehrhardt wrote:
>>>>>
>>>>>
>>>>>> Hi Xiantao,
>>>>>> it looks good to me to move kvm_vcpu_cache out to the x86 specific
>>>>>> code
>>>>>>
>>>>>>
>>>>> Why is that? Do other archs not want kvm_vcpu_cache, or is it just
>>>>> the alignment?
>>>>>
>>>>>
>>>> At least we haven't run across similar alignment requirements on IA64.
>>>>
>>>>
>>> What I mean is, other archs do require kvm_vcpu_cache (without the
>>> alignment), so why move the code? Just make the alignment arch
>>> dependent with a #define.
>>>
>> I think IA64 doesn't need this logic at all, hence the move. :)
>>
>>
>
> Ah, I see. It isn't just the alignment. How do you allocate kvm_vcpu, then?
>
>
> What about s390 and powerpc? I imagine they don't have an alignment
> issue, but do they have a totally unique way of allocating vcpus as well?
On one hand we don't have "these" alignment issues, but on the other hand we have some complex offset logic to integrate structures and handler vectors & code (which need special alignment).
The major problem is that our prototype currently supports only one vcpu per guest, so we haven't yet thought much about e.g. a kmem_cache for vcpu structures.
From my current point of view we may be able to use a kmem_cache and do all the sophisticated ppc stuff in an arch function filling the arch part of the vcpu, but that opinion may change once we look further into it while implementing multi-vcpu support per guest.
Because of that I think your CONFIG_HAVE_SPECIAL_VCPU_ALLOC suggestion would be nice for now; with it we could go either way later without restructuring the generic code too much.
I added Hollis to the direct CC list, because this ppc code is his creation and he might be able to give us a much clearer insight into how ppc vcpu allocation might look in future.
> Maybe we should just #ifndef CONFIG_IA64 (or #ifdef
> CONFIG_HAVE_SPECIAL_VCPU_ALLOC) this bit instead of duplicating it for
> s390 and ppc.
>
--
Grüsse / regards,
Christian Ehrhardt
IBM Linux Technology Center, Open Virtualization
+49 7031/16-3385
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FDD34.9020807-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 10:03 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA39817-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Zhang, Xiantao @ 2007-11-30 10:03 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>>>>
>>> Ah, I see. It isn't just the alignment. How do you allocate
>>> kvm_vcpu, then?
>>>
>>
>> For every VM, we allocate a big chunk of memory for structure
>> allocation. A vcpu is always 64k aligned through our allocation
>> mechanism, so we don't care about the alignment issue. :)
>>
>
> I see. Can you explain why you do that? Do you have a special
> allocator in your guest-resident vmm module?
Our VMM module and the KVM module share the kvm and vcpu structures,
but the VMM module has a different address space, so we have to use a
fixed allocation method to handle this sharing. For example, we
allocate 1M of memory (1M aligned) for every VM in the kvm module; the
first 64k is used for the guest's first vcpu, the second 64k for the
second vcpu, and so on for the other vcpus. You can call it a special
allocator or whatever you like. :) This is dictated by the IA64
virtualization architecture and hard to work around in this host-based
VM model. :(
Xiantao
> I don't think other archs need that, so a CONFIG_KVM_ARCH_VCPU_ALLOC
> approach seems better than copying all that code.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FBF0D.7020601-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-30 8:27 ` Zhang, Xiantao
@ 2007-11-30 11:52 ` Carsten Otte
[not found] ` <474FF970.9060404-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
1 sibling, 1 reply; 19+ messages in thread
From: Carsten Otte @ 2007-11-30 11:52 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, hollisb-r/Jw6+rmf7HQT0dZR+AlfA,
Zhang, Xiantao
Avi Kivity wrote:
> Why is that? Do other archs not want kvm_vcpu_cache, or is it just the
> alignment?
On s390, our nice colleagues in the hardware department take care of
caching vcpu-related data on a physical one. No need for us to do
anything in that area, except enjoy the benefits. This time we don't
even have to call an instruction for it. :-)
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FF970.9060404-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
@ 2007-11-30 11:55 ` Avi Kivity
[not found] ` <474FFA26.6020302-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 11:55 UTC (permalink / raw)
To: carsteno-tA70FqPdS9bQT0dZR+AlfA
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
hollisb-r/Jw6+rmf7HQT0dZR+AlfA, Zhang, Xiantao
Carsten Otte wrote:
> Avi Kivity wrote:
>> Why is that? Do other archs not want kvm_vcpu_cache, or is it just the
>> alignment?
> On s390, our nice colleagues in the hardware department take care of
> caching vcpu-related data on a physical one. No need for us to do
> anything in that area, except enjoy the benefits. This time we don't
> even have to call an instruction for it. :-)
>
But you do need the vcpu cache, right?
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <474FFA26.6020302-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 12:49 ` Carsten Otte
[not found] ` <475006D5.9060504-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Carsten Otte @ 2007-11-30 12:49 UTC (permalink / raw)
To: Avi Kivity
Cc: carsteno-tA70FqPdS9bQT0dZR+AlfA, Christian Ehrhardt,
hollisb-r/Jw6+rmf7HQT0dZR+AlfA, Zhang, Xiantao,
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Avi Kivity wrote:
> But you do need the vcpu cache, right?
I'm thinking about organizing our SIE control blocks in it, just like vmx
and svm do with their hardware structures backing a vcpu state.
They're 512 bytes in size and need to start on a 512-byte boundary.
Sorry about my previous answer, I was confused by vcpu_cache /
vcpu_decache for x86. It's Friday...
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <475006D5.9060504-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
@ 2007-11-30 14:50 ` Avi Kivity
[not found] ` <4750234D.6000504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 14:50 UTC (permalink / raw)
To: carsteno-tA70FqPdS9bQT0dZR+AlfA
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
hollisb-r/Jw6+rmf7HQT0dZR+AlfA, Zhang, Xiantao
Carsten Otte wrote:
> Avi Kivity wrote:
>> But you do need the vcpu cache, right?
> I'm thinking about organizing our SIE control blocks in it, just like vmx
> and svm do with their hardware structures backing a vcpu state.
> They're 512 bytes in size and need to start on a 512-byte boundary.
> Sorry about my previous answer, I was confused by vcpu_cache /
> vcpu_decache for x86. It's Friday...
Ah, so you even need the alignment (which happens to be exactly the x86
fpu alignment).
So we have two archs needing special allocation, and two archs using a
common allocator.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDCA39817-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-11-30 17:29 ` Hollis Blanchard
2007-11-30 20:36 ` Avi Kivity
0 siblings, 1 reply; 19+ messages in thread
From: Hollis Blanchard @ 2007-11-30 17:29 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, Zhang, Xiantao
On Fri, 2007-11-30 at 18:03 +0800, Zhang, Xiantao wrote:
> Avi Kivity wrote:
> > Zhang, Xiantao wrote:
> >>>>
> >>> Ah, I see. It isn't just the alignment. How do you allocate
> >>> kvm_vcpu, then?
> >>>
> >>
> >> For every VM, we allocate a big chunk of memory for structure
> >> allocation. A vcpu is always 64k aligned through our allocation
> >> mechanism, so we don't care about the alignment issue. :)
> >>
> >
> > I see. Can you explain why you do that? Do you have a special
> > allocator in your guest-resident vmm module?
>
> Our VMM module and the KVM module share the kvm and vcpu structures,
> but the VMM module has a different address space, so we have to use a
> fixed allocation method to handle this sharing. For example, we
> allocate 1M of memory (1M aligned) for every VM in the kvm module; the
> first 64k is used for the guest's first vcpu, the second 64k for the
> second vcpu, and so on for the other vcpus. You can call it a special
> allocator or whatever you like. :) This is dictated by the IA64
> virtualization architecture and hard to work around in this host-based
> VM model. :(
We're doing something similar with very large allocations.
Currently, PowerPC's "vcpu" is actually a copy of the exception
handlers, plus the real vcpu data structure at a higher offset. Since
our exception handlers can't span 64KB regions, we allocate a full 64KB
for each vcpu. I'm not sure what benefit a kmem_cache would have in this
situation...
--
Hollis Blanchard
IBM Linux Technology Center
* Re: [PATCH] 0/2 Patches to further split kvm_init
[not found] ` <4750234D.6000504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-30 18:18 ` Hollis Blanchard
2007-11-30 20:34 ` Avi Kivity
0 siblings, 1 reply; 19+ messages in thread
From: Hollis Blanchard @ 2007-11-30 18:18 UTC (permalink / raw)
To: Avi Kivity
Cc: carsteno-tA70FqPdS9bQT0dZR+AlfA, Christian Ehrhardt,
Zhang, Xiantao, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Fri, 2007-11-30 at 16:50 +0200, Avi Kivity wrote:
> Carsten Otte wrote:
> > Avi Kivity wrote:
> >> But you do need the vcpu cache, right?
> > I'm thinking about organizing our SIE control blocks in it, just like vmx
> > and svm do with their hardware structures backing a vcpu state.
> > They're 512 bytes in size and need to start on a 512-byte boundary.
> > Sorry about my previous answer, I was confused by vcpu_cache /
> > vcpu_decache for x86. It's Friday...
>
> Ah, so you even need the alignment (which happen to be exactly the x86
> fpu alignment).
>
> So we have two archs needing special allocation, and two archs using a
> common allocator.
I think it's clear that this is an area that is likely to be very
architecture-specific, and I don't think that duplicating
kmem_cache_create() for each architecture is a big deal at all.
In fact the x86 split today is already pretty weird, since the code that
actually *uses* the cache (and the only code that knows what size the
vcpu actually is!), isn't the code creating it.
--
Hollis Blanchard
IBM Linux Technology Center
* Re: [PATCH] 0/2 Patches to further split kvm_init
2007-11-30 18:18 ` Hollis Blanchard
@ 2007-11-30 20:34 ` Avi Kivity
0 siblings, 0 replies; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 20:34 UTC (permalink / raw)
To: Hollis Blanchard
Cc: carsteno-tA70FqPdS9bQT0dZR+AlfA, Christian Ehrhardt,
Zhang, Xiantao, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Hollis Blanchard wrote:
> On Fri, 2007-11-30 at 16:50 +0200, Avi Kivity wrote:
>
>> Carsten Otte wrote:
>>
>>> Avi Kivity wrote:
>>>
>>>> But you do need the vcpu cache, right?
>>>>
>>> I'm thinking about organizing our SIE control blocks in it, just like vmx
>>> and svm do with their hardware structures backing a vcpu state.
>>> They're 512 bytes in size and need to start on a 512-byte boundary.
>>> Sorry about my previous answer, I was confused by vcpu_cache /
>>> vcpu_decache for x86. It's Friday...
>>>
>> Ah, so you even need the alignment (which happens to be exactly the x86
>> fpu alignment).
>>
>> So we have two archs needing special allocation, and two archs using a
>> common allocator.
>>
>
> I think it's clear that this is an area that is likely to be very
> architecture-specific, and I don't think that duplicating
> kmem_cache_create() for each architecture is a big deal at all.
>
No, it isn't, but having a KVM_ARCH_HAS_VCPU_ALLOC isn't either.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH]0/2 Patches to furthure split kvm_init
2007-11-30 17:29 ` Hollis Blanchard
@ 2007-11-30 20:36 ` Avi Kivity
0 siblings, 0 replies; 19+ messages in thread
From: Avi Kivity @ 2007-11-30 20:36 UTC (permalink / raw)
To: Hollis Blanchard
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Christian Ehrhardt,
carsteno-tA70FqPdS9bQT0dZR+AlfA, Zhang, Xiantao
Hollis Blanchard wrote:
> On Fri, 2007-11-30 at 18:03 +0800, Zhang, Xiantao wrote:
>
>> Avi Kivity wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>>> Ah, I see. It isn't just the alignment. How do you allocate
>>>>> kvm_vcpu, then?
>>>>>
>>>>>
>>>> For every vm, we allocate a big chunk of memory for structure
>>>> allocation. Each vcpu is always 64KB aligned through our
>>>> allocation mechanism, so we don't care about its alignment issue :)
>>>>
>>>>
>>> I see. Can you explain why you do that? Do you have a special
>>> allocator in your guest-resident vmm module?
>>>
>> Our VMM module and the KVM module share the kvm and vcpu
>> structures, but the VMM module has a different address space, so we
>> have to use a fixed allocation method to handle this sharing. For
>> example, the kvm module allocates 1MB of memory (1MB aligned) for
>> every vm for this purpose; the first 64KB is used for the first
>> vcpu of the guest, the second 64KB for the second vcpu, and so on
>> for the other vcpus. You can call it a special allocator or
>> whatever :) This is dictated by the IA64 virtualization
>> architecture, and it is hard to work around in this host-based vm
>> model. :(
>>
>
> We're doing something similar with very large allocations.
>
> Currently, PowerPC's "vcpu" is actually a copy of the exception
> handlers, plus the real vcpu data structure at a higher offset. Since
> our exception handlers can't span 64KB regions, we allocate a full 64KB
> for each vcpu. I'm not sure what benefit a kmem_cache would have in this
> situation...
>
>
A kmem_cache is useful for specifying alignment, and as a general
bookkeeping system. It's nice to see how many objects you have
allocated in /proc/slabinfo, and it will automatically inform you if you
have a leak when you unload the module.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
Thread overview: 19+ messages
2007-11-29 8:16 [PATCH]0/2 Patches to furthure split kvm_init Zhang, Xiantao
2007-11-29 9:59 ` Christian Ehrhardt
2007-11-30 7:43 ` Avi Kivity
2007-11-30 8:27 ` Zhang, Xiantao
2007-11-30 8:36 ` Avi Kivity
2007-11-30 8:50 ` Zhang, Xiantao
2007-11-30 9:04 ` Avi Kivity
2007-11-30 9:14 ` Zhang, Xiantao
2007-11-30 9:51 ` Avi Kivity
2007-11-30 10:03 ` Zhang, Xiantao
2007-11-30 17:29 ` Hollis Blanchard
2007-11-30 20:36 ` Avi Kivity
2007-11-30 9:52 ` Christian Ehrhardt
2007-11-30 11:52 ` Carsten Otte
2007-11-30 11:55 ` Avi Kivity
2007-11-30 12:49 ` Carsten Otte
2007-11-30 14:50 ` Avi Kivity
2007-11-30 18:18 ` Hollis Blanchard
2007-11-30 20:34 ` Avi Kivity