* [Qemu-devel] qemu-kvm vs. qemu: Terminate cpu loop on reset?
@ 2011-01-07 15:57 Jan Kiszka
  2011-01-07 16:53 ` [Qemu-devel] " Gleb Natapov
  0 siblings, 1 reply; 12+ messages in thread
From: Jan Kiszka @ 2011-01-07 15:57 UTC (permalink / raw)
To: kvm; +Cc: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 655 bytes --]

Hi,

does anyone immediately know if this hunk from vl.c

@@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void)
     } else {
         reset_requested = 1;
     }
+    if (cpu_single_env) {
+        cpu_single_env->stopped = 1;
+        cpu_exit(cpu_single_env);
+    }
     qemu_notify_event();
 }

is (semantically) relevant for upstream as well? IIUC, it ensures that
the kvm cpu loop is not continued if an IO access called into
qemu_system_reset_request.

If yes, then it would be a good time to push a patch: these bits will
fall to dust on the next merge from upstream (vl.c no longer has access
to the cpu state).

Jan

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 259 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 15:57 [Qemu-devel] qemu-kvm vs. qemu: Terminate cpu loop on reset? Jan Kiszka @ 2011-01-07 16:53 ` Gleb Natapov 2011-01-07 16:59 ` Jan Kiszka 0 siblings, 1 reply; 12+ messages in thread From: Gleb Natapov @ 2011-01-07 16:53 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, kvm On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: > Hi, > > does anyone immediately know if this hunk from vl.c > > @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) > } else { > reset_requested = 1; > } > + if (cpu_single_env) { > + cpu_single_env->stopped = 1; > + cpu_exit(cpu_single_env); > + } > qemu_notify_event(); > } > > is (semantically) relevant for upstream as well? IIUC, it ensures that > the kvm cpu loop is not continued if an IO access called into > qemu_system_reset_request. > I don't know TCG enough to tell. If TCG can continue vcpu execution after io without checking reset_requested then it is relevant for upstream too. > If yes, then it would be a good time to push a patch: these bits will > fall to dust on next merge from upstream (vl.c no longer has access to > the cpu state). > On a next merge cpu state will have to be exposed to vl.c then. This code cannot be dropped in qemu-kvm. -- Gleb. ^ permalink raw reply [flat|nested] 12+ messages in thread
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 16:53 ` [Qemu-devel] " Gleb Natapov @ 2011-01-07 16:59 ` Jan Kiszka 2011-01-07 17:16 ` Gleb Natapov 0 siblings, 1 reply; 12+ messages in thread From: Jan Kiszka @ 2011-01-07 16:59 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, kvm [-- Attachment #1: Type: text/plain, Size: 1548 bytes --] Am 07.01.2011 17:53, Gleb Natapov wrote: > On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: >> Hi, >> >> does anyone immediately know if this hunk from vl.c >> >> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) >> } else { >> reset_requested = 1; >> } >> + if (cpu_single_env) { >> + cpu_single_env->stopped = 1; >> + cpu_exit(cpu_single_env); >> + } >> qemu_notify_event(); >> } >> >> is (semantically) relevant for upstream as well? IIUC, it ensures that >> the kvm cpu loop is not continued if an IO access called into >> qemu_system_reset_request. >> > I don't know TCG enough to tell. If TCG can continue vcpu execution > after io without checking reset_requested then it is relevant for > upstream too. I was first of all thinking about kvm upstream, but their handling differ much less upstream than in current qemu-kvm. Anyway, need to dig into the details. > >> If yes, then it would be a good time to push a patch: these bits will >> fall to dust on next merge from upstream (vl.c no longer has access to >> the cpu state). >> > On a next merge cpu state will have to be exposed to vl.c then. This > code cannot be dropped in qemu-kvm. I think a cleaner approach, even if it's only temporarily required, is to move that code to cpus.c. That's likely also the way when we need it upstream. If upstream does not need it, we have to understand why and maybe adopt its pattern (the ultimate goal is unification anyway). Jan [-- Attachment #2: OpenPGP digital signature --] [-- Type: application/pgp-signature, Size: 259 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 16:59 ` Jan Kiszka @ 2011-01-07 17:16 ` Gleb Natapov 2011-01-07 17:30 ` Jan Kiszka 0 siblings, 1 reply; 12+ messages in thread From: Gleb Natapov @ 2011-01-07 17:16 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, kvm On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: > Am 07.01.2011 17:53, Gleb Natapov wrote: > > On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: > >> Hi, > >> > >> does anyone immediately know if this hunk from vl.c > >> > >> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) > >> } else { > >> reset_requested = 1; > >> } > >> + if (cpu_single_env) { > >> + cpu_single_env->stopped = 1; > >> + cpu_exit(cpu_single_env); > >> + } > >> qemu_notify_event(); > >> } > >> > >> is (semantically) relevant for upstream as well? IIUC, it ensures that > >> the kvm cpu loop is not continued if an IO access called into > >> qemu_system_reset_request. > >> > > I don't know TCG enough to tell. If TCG can continue vcpu execution > > after io without checking reset_requested then it is relevant for > > upstream too. > > I was first of all thinking about kvm upstream, but their handling > differ much less upstream than in current qemu-kvm. Anyway, need to dig > into the details. > > > > >> If yes, then it would be a good time to push a patch: these bits will > >> fall to dust on next merge from upstream (vl.c no longer has access to > >> the cpu state). > >> > > On a next merge cpu state will have to be exposed to vl.c then. This > > code cannot be dropped in qemu-kvm. > > I think a cleaner approach, even if it's only temporarily required, is > to move that code to cpus.c. That's likely also the way when we need it > upstream. It doesn't matter where the code resides as long as it is called on reset. > If upstream does not need it, we have to understand why and > maybe adopt its pattern (the ultimate goal is unification anyway). 
>
I don't consider kvm upstream a working product. The goal should be
moving to the qemu-kvm code in upstream, preserving all the knowledge we
acquired while making it production-grade code.

--
			Gleb.
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 17:16 ` Gleb Natapov @ 2011-01-07 17:30 ` Jan Kiszka 2011-01-07 17:53 ` Gleb Natapov 0 siblings, 1 reply; 12+ messages in thread From: Jan Kiszka @ 2011-01-07 17:30 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, kvm [-- Attachment #1: Type: text/plain, Size: 2685 bytes --] Am 07.01.2011 18:16, Gleb Natapov wrote: > On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: >> Am 07.01.2011 17:53, Gleb Natapov wrote: >>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: >>>> Hi, >>>> >>>> does anyone immediately know if this hunk from vl.c >>>> >>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) >>>> } else { >>>> reset_requested = 1; >>>> } >>>> + if (cpu_single_env) { >>>> + cpu_single_env->stopped = 1; >>>> + cpu_exit(cpu_single_env); >>>> + } >>>> qemu_notify_event(); >>>> } >>>> >>>> is (semantically) relevant for upstream as well? IIUC, it ensures that >>>> the kvm cpu loop is not continued if an IO access called into >>>> qemu_system_reset_request. >>>> >>> I don't know TCG enough to tell. If TCG can continue vcpu execution >>> after io without checking reset_requested then it is relevant for >>> upstream too. >> >> I was first of all thinking about kvm upstream, but their handling >> differ much less upstream than in current qemu-kvm. Anyway, need to dig >> into the details. >> >>> >>>> If yes, then it would be a good time to push a patch: these bits will >>>> fall to dust on next merge from upstream (vl.c no longer has access to >>>> the cpu state). >>>> >>> On a next merge cpu state will have to be exposed to vl.c then. This >>> code cannot be dropped in qemu-kvm. >> >> I think a cleaner approach, even if it's only temporarily required, is >> to move that code to cpus.c. That's likely also the way when we need it >> upstream. > It doesn't matter where the code resides as long as it is called on > reset. 
It technically matters for the build process (vl.c is built once these
days, cpus.c is built per target).

In any case, we apparently need to fix upstream; I'm playing with an
approach.

>
>> If upstream does not need it, we have to understand why and
>> maybe adopt its pattern (the ultimate goal is unification anyway).
>>
> I don't consider kvm upstream as working product. The goal should be
> moving to qemu-kvm code in upstream preserving all the knowledge we
> acquired while making it production grade code.

We had this discussion before. My goal remains to filter the remaining
upstream fixes out of the noise, adjust both versions so that they are
apparently identical, and then switch to a single version.

We are on a good track now. I predict that we will be left with only one
or two major additional features in qemu-kvm in a few months from now,
no more duplications with subtle differences, and production-grade kvm
upstream stability.

Jan
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 17:30 ` Jan Kiszka @ 2011-01-07 17:53 ` Gleb Natapov 2011-01-07 18:24 ` Jan Kiszka 0 siblings, 1 reply; 12+ messages in thread From: Gleb Natapov @ 2011-01-07 17:53 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, kvm On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote: > Am 07.01.2011 18:16, Gleb Natapov wrote: > > On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: > >> Am 07.01.2011 17:53, Gleb Natapov wrote: > >>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: > >>>> Hi, > >>>> > >>>> does anyone immediately know if this hunk from vl.c > >>>> > >>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) > >>>> } else { > >>>> reset_requested = 1; > >>>> } > >>>> + if (cpu_single_env) { > >>>> + cpu_single_env->stopped = 1; > >>>> + cpu_exit(cpu_single_env); > >>>> + } > >>>> qemu_notify_event(); > >>>> } > >>>> > >>>> is (semantically) relevant for upstream as well? IIUC, it ensures that > >>>> the kvm cpu loop is not continued if an IO access called into > >>>> qemu_system_reset_request. > >>>> > >>> I don't know TCG enough to tell. If TCG can continue vcpu execution > >>> after io without checking reset_requested then it is relevant for > >>> upstream too. > >> > >> I was first of all thinking about kvm upstream, but their handling > >> differ much less upstream than in current qemu-kvm. Anyway, need to dig > >> into the details. > >> > >>> > >>>> If yes, then it would be a good time to push a patch: these bits will > >>>> fall to dust on next merge from upstream (vl.c no longer has access to > >>>> the cpu state). > >>>> > >>> On a next merge cpu state will have to be exposed to vl.c then. This > >>> code cannot be dropped in qemu-kvm. > >> > >> I think a cleaner approach, even if it's only temporarily required, is > >> to move that code to cpus.c. That's likely also the way when we need it > >> upstream. 
> > It doesn't matter where the code resides as long as it is called on
> > reset.
>
> It technically matters for the build process (vl.c is built once these
> days, cpus.c is built per target).
>
Yes, I understand the build requirement. Runtime behaviour should not
change.

> In any case, we apparently need to fix upstream, I'm playing with some
> approach.
>
> >
> >> If upstream does not need it, we have to understand why and
> >> maybe adopt its pattern (the ultimate goal is unification anyway).
> >>
> > I don't consider kvm upstream as working product. The goal should be
> > moving to qemu-kvm code in upstream preserving all the knowledge we
> > acquired while making it production grade code.
>
> We had this discussion before. My goal remains to filter the remaining
> upstream fixes out of the noise, adjust both versions so that they are
> apparently identical, and then switch to a single version.
>
I thought there was an agreement to accept the qemu-kvm implementation
as is into upstream (without some parts like device assignment). If you
look at qemu-kvm you'll see that the upstream implementation is marked
as OBSOLETE_KVM_IMPL.

> We are on a good track now. I predict that we will be left with only one
> or two major additional features in qemu-kvm in a few months from now,
> no more duplications with subtle differences, and production-grade kvm
> upstream stability.
>
You are optimistic. My prediction is that it will take at least one
major RHEL release until such a merged code base becomes
production-grade. That is when most of the bugs introduced by
eliminating subtle differences between the working and non-working
versions will be found :)

BTW, do you have a plan for how to move upstream to thread per vcpu?

--
			Gleb.
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 17:53 ` Gleb Natapov @ 2011-01-07 18:24 ` Jan Kiszka 2011-01-07 18:32 ` Jan Kiszka 2011-01-07 19:10 ` Gleb Natapov 0 siblings, 2 replies; 12+ messages in thread From: Jan Kiszka @ 2011-01-07 18:24 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, kvm [-- Attachment #1: Type: text/plain, Size: 4698 bytes --] Am 07.01.2011 18:53, Gleb Natapov wrote: > On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote: >> Am 07.01.2011 18:16, Gleb Natapov wrote: >>> On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: >>>> Am 07.01.2011 17:53, Gleb Natapov wrote: >>>>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: >>>>>> Hi, >>>>>> >>>>>> does anyone immediately know if this hunk from vl.c >>>>>> >>>>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) >>>>>> } else { >>>>>> reset_requested = 1; >>>>>> } >>>>>> + if (cpu_single_env) { >>>>>> + cpu_single_env->stopped = 1; >>>>>> + cpu_exit(cpu_single_env); >>>>>> + } >>>>>> qemu_notify_event(); >>>>>> } >>>>>> >>>>>> is (semantically) relevant for upstream as well? IIUC, it ensures that >>>>>> the kvm cpu loop is not continued if an IO access called into >>>>>> qemu_system_reset_request. >>>>>> >>>>> I don't know TCG enough to tell. If TCG can continue vcpu execution >>>>> after io without checking reset_requested then it is relevant for >>>>> upstream too. >>>> >>>> I was first of all thinking about kvm upstream, but their handling >>>> differ much less upstream than in current qemu-kvm. Anyway, need to dig >>>> into the details. >>>> >>>>> >>>>>> If yes, then it would be a good time to push a patch: these bits will >>>>>> fall to dust on next merge from upstream (vl.c no longer has access to >>>>>> the cpu state). >>>>>> >>>>> On a next merge cpu state will have to be exposed to vl.c then. This >>>>> code cannot be dropped in qemu-kvm. 
>>>>
>>>> I think a cleaner approach, even if it's only temporarily required, is
>>>> to move that code to cpus.c. That's likely also the way when we need it
>>>> upstream.
>>> It doesn't matter where the code resides as long as it is called on
>>> reset.
>>
>> It technically matters for the build process (vl.c is built once these
>> days, cpus.c is built per target).
>>
> Yes, I understand the build requirement. Runtime behaviour should not
> change.

Yep, for sure.

BTW, the self-IPI on pending exit request is there for a reason, I bet.
In order to complete half-done string-io or something like that? Would
be the next patch for upstream then.

>
>> In any case, we apparently need to fix upstream, I'm playing with some
>> approach.
>>
>>>
>>>> If upstream does not need it, we have to understand why and
>>>> maybe adopt its pattern (the ultimate goal is unification anyway).
>>>>
>>> I don't consider kvm upstream as working product. The goal should be
>>> moving to qemu-kvm code in upstream preserving all the knowledge we
>>> acquired while making it production grade code.
>>
>> We had this discussion before. My goal remains to filter the remaining
>> upstream fixes out of the noise, adjust both versions so that they are
>> apparently identical, and then switch to a single version.
>>
> I thought there was an agreement to accept qemu-kvm implementation as is
> into upstream (without some parts like device assignment). If you look
> at qemu-kvm you'll see that upstream implementation is marked as
> OBSOLETE_KVM_IMPL.

You can't merge both trees without introducing regressions, either in
the kvm part or some other section that qemu-kvm did not stress. IMO,
there is no way around understanding all the nice little "fixes" that
piled up over the years and translating them into proper, documented
patches.

>
>> We are on a good track now.
>> I predict that we will be left with only one
>> or two major additional features in qemu-kvm in a few months from now,
>> no more duplications with subtle differences, and production-grade kvm
>> upstream stability.
>>
> You are optimistic. My prediction is that it will take at least one major RHEL
> release until such merged code base will become production-grade. That
> is when most bugs that were introduced by eliminating subtle differences
> between working and non-working version will be found :)

The more upstream code qemu-kvm stresses, the faster this convergence
will happen. And there is really not that much left. E.g., I've a
qemu-kvm-x86.c here that is <400 LOC.

>
> BTW Do you have a plan how to move upstream to thread per vcpu?

Upstream has this already, but it's - once again - a different
implementation. Understanding those differences is one of the next
steps.

In fact, as posted recently, unifying the execution model
implementations is the only big problem I see. In-kernel irqchips and
device assignment are things that can live in qemu-kvm without many
conflicts until they are finally mergeable.

Jan
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 18:24 ` Jan Kiszka @ 2011-01-07 18:32 ` Jan Kiszka 2011-01-07 19:10 ` Gleb Natapov 1 sibling, 0 replies; 12+ messages in thread From: Jan Kiszka @ 2011-01-07 18:32 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, kvm [-- Attachment #1: Type: text/plain, Size: 2496 bytes --] Am 07.01.2011 19:24, Jan Kiszka wrote: > Am 07.01.2011 18:53, Gleb Natapov wrote: >> On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote: >>> Am 07.01.2011 18:16, Gleb Natapov wrote: >>>> On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: >>>>> Am 07.01.2011 17:53, Gleb Natapov wrote: >>>>>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: >>>>>>> Hi, >>>>>>> >>>>>>> does anyone immediately know if this hunk from vl.c >>>>>>> >>>>>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) >>>>>>> } else { >>>>>>> reset_requested = 1; >>>>>>> } >>>>>>> + if (cpu_single_env) { >>>>>>> + cpu_single_env->stopped = 1; >>>>>>> + cpu_exit(cpu_single_env); >>>>>>> + } >>>>>>> qemu_notify_event(); >>>>>>> } >>>>>>> >>>>>>> is (semantically) relevant for upstream as well? IIUC, it ensures that >>>>>>> the kvm cpu loop is not continued if an IO access called into >>>>>>> qemu_system_reset_request. >>>>>>> >>>>>> I don't know TCG enough to tell. If TCG can continue vcpu execution >>>>>> after io without checking reset_requested then it is relevant for >>>>>> upstream too. >>>>> >>>>> I was first of all thinking about kvm upstream, but their handling >>>>> differ much less upstream than in current qemu-kvm. Anyway, need to dig >>>>> into the details. >>>>> >>>>>> >>>>>>> If yes, then it would be a good time to push a patch: these bits will >>>>>>> fall to dust on next merge from upstream (vl.c no longer has access to >>>>>>> the cpu state). >>>>>>> >>>>>> On a next merge cpu state will have to be exposed to vl.c then. This >>>>>> code cannot be dropped in qemu-kvm. 
>>>>>
>>>>> I think a cleaner approach, even if it's only temporarily required, is
>>>>> to move that code to cpus.c. That's likely also the way when we need it
>>>>> upstream.
>>>> It doesn't matter where the code resides as long as it is called on
>>>> reset.
>>>
>>> It technically matters for the build process (vl.c is built once these
>>> days, cpus.c is built per target).
>>>
>> Yes, I understand the build requirement. Runtime behaviour should not
>> change.
>
> Yep, for sure.
>
> BTW, the self-IPI on pending exit request is there for a reason, I bet.
> In order to complete half-done string-io or something like that? Would
> be the next patch for upstream then.
>
Yeah, it is, just found the confirming commit. It was just not pushed
upstream as well.

Jan
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 18:24 ` Jan Kiszka 2011-01-07 18:32 ` Jan Kiszka @ 2011-01-07 19:10 ` Gleb Natapov 2011-01-07 19:33 ` Jan Kiszka 1 sibling, 1 reply; 12+ messages in thread From: Gleb Natapov @ 2011-01-07 19:10 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, kvm On Fri, Jan 07, 2011 at 07:24:00PM +0100, Jan Kiszka wrote: > Am 07.01.2011 18:53, Gleb Natapov wrote: > > On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote: > >> Am 07.01.2011 18:16, Gleb Natapov wrote: > >>> On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote: > >>>> Am 07.01.2011 17:53, Gleb Natapov wrote: > >>>>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote: > >>>>>> Hi, > >>>>>> > >>>>>> does anyone immediately know if this hunk from vl.c > >>>>>> > >>>>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void) > >>>>>> } else { > >>>>>> reset_requested = 1; > >>>>>> } > >>>>>> + if (cpu_single_env) { > >>>>>> + cpu_single_env->stopped = 1; > >>>>>> + cpu_exit(cpu_single_env); > >>>>>> + } > >>>>>> qemu_notify_event(); > >>>>>> } > >>>>>> > >>>>>> is (semantically) relevant for upstream as well? IIUC, it ensures that > >>>>>> the kvm cpu loop is not continued if an IO access called into > >>>>>> qemu_system_reset_request. > >>>>>> > >>>>> I don't know TCG enough to tell. If TCG can continue vcpu execution > >>>>> after io without checking reset_requested then it is relevant for > >>>>> upstream too. > >>>> > >>>> I was first of all thinking about kvm upstream, but their handling > >>>> differ much less upstream than in current qemu-kvm. Anyway, need to dig > >>>> into the details. > >>>> > >>>>> > >>>>>> If yes, then it would be a good time to push a patch: these bits will > >>>>>> fall to dust on next merge from upstream (vl.c no longer has access to > >>>>>> the cpu state). > >>>>>> > >>>>> On a next merge cpu state will have to be exposed to vl.c then. 
> >>>>> This code cannot be dropped in qemu-kvm.
> >>>>
> >>>> I think a cleaner approach, even if it's only temporarily required, is
> >>>> to move that code to cpus.c. That's likely also the way when we need it
> >>>> upstream.
> >>> It doesn't matter where the code resides as long as it is called on
> >>> reset.
> >>
> >> It technically matters for the build process (vl.c is built once these
> >> days, cpus.c is built per target).
> >>
> > Yes, I understand the build requirement. Runtime behaviour should not
> > change.
>
> Yep, for sure.
>
> BTW, the self-IPI on pending exit request is there for a reason, I bet.
> In order to complete half-done string-io or something like that? Would
> be the next patch for upstream then.
>
The (documented) rule of KVM is that if an exit to userspace happens
during instruction emulation, KVM_RUN has to be called again to complete
the instruction emulation.

>
> >> In any case, we apparently need to fix upstream, I'm playing with some
> >> approach.
> >>
Note to self: need to write a unit test to check that the vcpu is not
executed after it issues a reset by doing pio.

> >>>
> >>>> If upstream does not need it, we have to understand why and
> >>>> maybe adopt its pattern (the ultimate goal is unification anyway).
> >>>>
> >>> I don't consider kvm upstream as working product. The goal should be
> >>> moving to qemu-kvm code in upstream preserving all the knowledge we
> >>> acquired while making it production grade code.
> >>
> >> We had this discussion before. My goal remains to filter the remaining
> >> upstream fixes out of the noise, adjust both versions so that they are
> >> apparently identical, and then switch to a single version.
> >>
> > I thought there was an agreement to accept qemu-kvm implementation as is
> > into upstream (without some parts like device assignment). If you look
> > at qemu-kvm you'll see that upstream implementation is marked as
> > OBSOLETE_KVM_IMPL.
>
> You can't merge both trees without introducing regressions, either in
> the kvm part or some other section that qemu-kvm did not stress. IMO,
> there is no way around understanding all the nice little "fixes" that
> piled up over the years and translate them into proper, documented
> patches.
OBSOLETE_KVM_IMPL should just be dropped, not merged.

>
> >
> >> We are on a good track now. I predict that we will be left with only one
> >> or two major additional features in qemu-kvm in a few months from now,
> >> no more duplications with subtle differences, and production-grade kvm
> >> upstream stability.
> >>
> > You are optimistic. My prediction is that it will take at least one major RHEL
> > release until such merged code base will become production-grade. That
> > is when most bugs that were introduced by eliminating subtle differences
> > between working and non-working version will be found :)
>
> The more upstream code qemu-kvm stresses, the faster this convergence
> will become. And there is really not that much left. E.g, I've a
> qemu-kvm-x86.c here that is <400 LOC.
>
That's what I don't get. Why should working qemu-kvm stress non-working
upstream code? Just remove the upstream code and replace it with the
qemu-kvm version.

>
> > BTW Do you have a plan how to move upstream to thread per vcpu?
>
> Upstream has this already, but it's - once again - a different
> implementation. Understanding those differences is one of the next steps.
>
I see only two threads on upstream no matter how many vcpus I configure.

> In fact, as posted recently, unifying the execution model
> implementations is the only big problem I see. In-kernel irqchips and
> device assignment are things that can live in qemu-kvm without much
> conflicts until they are finally mergable.
>
Upstream kvm is kinda useless without in-kernel irqchips.

--
			Gleb.
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 19:10 ` Gleb Natapov @ 2011-01-07 19:33 ` Jan Kiszka 2011-01-07 21:19 ` Gleb Natapov 0 siblings, 1 reply; 12+ messages in thread From: Jan Kiszka @ 2011-01-07 19:33 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, kvm [-- Attachment #1: Type: text/plain, Size: 2416 bytes --] Am 07.01.2011 20:10, Gleb Natapov wrote: >>>> We are on a good track now. I predict that we will be left with only one >>>> or two major additional features in qemu-kvm in a few months from now, >>>> no more duplications with subtle differences, and production-grade kvm >>>> upstream stability. >>>> >>> You are optimistic. My prediction is that it will take at least one major RHEL >>> release until such merged code base will become production-grade. That >>> is when most bugs that were introduced by eliminating subtle differences >>> between working and non-working version will be found :) >> >> The more upstream code qemu-kvm stresses, the faster this convergence >> will become. And there is really not that much left. E.g, I've a >> qemu-kvm-x86.c here that is <400 LOC. >> > That's what I don't get. Why working qemu-kvm should stress non working > upstream code? Just remove upstream code and replace it with qemu-kvm > version. We are 3/4 (if not more) done with refactoring qemu-kvm into a clean state, removing lots of cruft from libkvm days and early kvm modules. We achieved this by creating a "fork of the fork": upstream kvm. We may argue a lot about pros and cons of this approach, but it is a fact now. And a lot of effort would be wasted as well by throwing this away. Moreover, taking off the x86 glasses: ppc and s390 rely on upstream kvm. So it is impossible to drop those bits without breaking all non-x86 kvm archs. > >>> >>> BTW Do you have a plan how to move upstream to thread per vcpu? >> >> Upstream has this already, but it's - once again - a different >> implementation. 
>> Understanding those differences is one of the next steps.
>>
> I see only two threads on upstream no matter how much vcpus I configure.

/me sees a lot of them. Did you enable io-thread support? Otherwise kvm
is run just like tcg in single-thread mode.

>
>> In fact, as posted recently, unifying the execution model
>> implementations is the only big problem I see. In-kernel irqchips and
>> device assignment are things that can live in qemu-kvm without much
>> conflicts until they are finally mergable.
>>
> Upstream kvm is kinda useless without in-kernel irqchips.

Not if its code serves the rest of qemu-kvm without further patches (and
merge conflicts). And we only need to sort out the execution loop and
threading stuff to get there.

Jan
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset? 2011-01-07 19:33 ` Jan Kiszka @ 2011-01-07 21:19 ` Gleb Natapov 2011-01-08 9:12 ` Jan Kiszka 0 siblings, 1 reply; 12+ messages in thread From: Gleb Natapov @ 2011-01-07 21:19 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, kvm On Fri, Jan 07, 2011 at 08:33:20PM +0100, Jan Kiszka wrote: > Am 07.01.2011 20:10, Gleb Natapov wrote: > >>>> We are on a good track now. I predict that we will be left with only one > >>>> or two major additional features in qemu-kvm in a few months from now, > >>>> no more duplications with subtle differences, and production-grade kvm > >>>> upstream stability. > >>>> > >>> You are optimistic. My prediction is that it will take at least one major RHEL > >>> release until such merged code base will become production-grade. That > >>> is when most bugs that were introduced by eliminating subtle differences > >>> between working and non-working version will be found :) > >> > >> The more upstream code qemu-kvm stresses, the faster this convergence > >> will become. And there is really not that much left. E.g, I've a > >> qemu-kvm-x86.c here that is <400 LOC. > >> > > That's what I don't get. Why working qemu-kvm should stress non working > > upstream code? Just remove upstream code and replace it with qemu-kvm > > version. > > We are 3/4 (if not more) done with refactoring qemu-kvm into a clean > state, removing lots of cruft from libkvm days and early kvm modules. We > achieved this by creating a "fork of the fork": upstream kvm. We may > argue a lot about pros and cons of this approach, but it is a fact now. > And a lot of effort would be wasted as well by throwing this away. > Upstream kvm was not "fork of the fork". It was something much worse then that. It was (bad) reimplementation of kvm that was unfortunately merged upstream. This slowed proper kvm inclusion into upstream for more then 2 years now (and counting). 
Glauber and you did (and do) a great job trying to sort out this mess,
and nobody proposes to throw away what was done so far. qemu-kvm and
qemu upstream share a lot of common code. We can either try hard to
consolidate even more code, or at some point just merge qemu-kvm and
drop the upstream functions that are not used by qemu-kvm (ifdef'ed as
obsolete in the qemu-kvm tree).

> Moreover, taking off the x86 glasses: ppc and s390 rely on upstream kvm.
> So it is impossible to drop those bits without breaking all non-x86 kvm
> archs.
>
I do not propose to drop bits from upstream that are used in qemu-kvm,
obviously.

> >
> >>>
> >>>>>
> >>>>> BTW Do you have a plan how to move upstream to thread per vcpu?
> >>>>
> >>>> Upstream has this already, but it's - once again - a different
> >>>> implementation. Understanding those differences is one of the next steps.
> >>>
> > I see only two threads on upstream no matter how much vcpus I configure.
>
> /me sees a lot of them. Did you enable io-thread support? Otherwise kvm
> is run just like tcg in single-thread mode.
>
No, I didn't. Does io-thread work properly with TCG? IIRC there were
problems with io-thread + TCG.

> >
> >> In fact, as posted recently, unifying the execution model
> >> implementations is the only big problem I see. In-kernel irqchips and
> >> device assignment are things that can live in qemu-kvm without much
> >> conflicts until they are finally mergable.
> >>
> Upstream kvm is kinda useless without in-kernel irqchips.
>
> Not if its code serves the rest of qemu-kvm without further patches (and
> merge conflicts). And we only need to sort out the execution loop and
> threading stuff to get there.
>
This could have been achieved by not introducing upstream kvm in the
first place :). Many if not most merging problems were the result of the
rival kvm implementation in upstream. I thought the goal was to get rid
of the qemu-kvm fork entirely by having fully functional kvm support in
upstream.

--
	Gleb.

^ permalink raw reply	[flat|nested] 12+ messages in thread
* [Qemu-devel] Re: qemu-kvm vs. qemu: Terminate cpu loop on reset?
  2011-01-07 21:19 ` Gleb Natapov
@ 2011-01-08  9:12 ` Jan Kiszka
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Kiszka @ 2011-01-08 9:12 UTC (permalink / raw)
To: Gleb Natapov; +Cc: qemu-devel, kvm

[-- Attachment #1: Type: text/plain, Size: 4980 bytes --]

Am 07.01.2011 22:19, Gleb Natapov wrote:
> On Fri, Jan 07, 2011 at 08:33:20PM +0100, Jan Kiszka wrote:
>> Am 07.01.2011 20:10, Gleb Natapov wrote:
>>>>>> We are on a good track now. I predict that we will be left with only one
>>>>>> or two major additional features in qemu-kvm in a few months from now,
>>>>>> no more duplications with subtle differences, and production-grade kvm
>>>>>> upstream stability.
>>>>>>
>>>>> You are optimistic. My prediction is that it will take at least one major RHEL
>>>>> release until such merged code base will become production-grade. That
>>>>> is when most bugs that were introduced by eliminating subtle differences
>>>>> between working and non-working version will be found :)
>>>>
>>>> The more upstream code qemu-kvm stresses, the faster this convergence
>>>> will become. And there is really not that much left. E.g, I've a
>>>> qemu-kvm-x86.c here that is <400 LOC.
>>>>
>>> That's what I don't get. Why working qemu-kvm should stress non working
>>> upstream code? Just remove upstream code and replace it with qemu-kvm
>>> version.
>>
>> We are 3/4 (if not more) done with refactoring qemu-kvm into a clean
>> state, removing lots of cruft from libkvm days and early kvm modules. We
>> achieved this by creating a "fork of the fork": upstream kvm. We may
>> argue a lot about pros and cons of this approach, but it is a fact now.
>> And a lot of effort would be wasted as well by throwing this away.
>>
> Upstream kvm was not "fork of the fork". It was something much worse
> then that. It was (bad) reimplementation of kvm that was unfortunately
> merged upstream.

Not everything is black or white.
> This slowed proper kvm inclusion into upstream for more
> then 2 years now (and counting). Glauber and you did (and do) a great
> job trying to sort this mess and nobody propose to throw what was done
> so far. qemu-kvm and qemu upstream uses a lot of common code. We can
> either try hard to consolidate even mode code, or at some point just
> merge qemu-kvm and drop upstream functions that are not used by qemu-kvm
> (ifdefed as obsolete in qemu-kvm tree).

Just take a look at the code: this is no longer that easy due to
upstream code being actively used even when removing current x86
support. I'm convinced we can't get around consolidating anymore.

>
>> Moreover, taking off the x86 glasses: ppc and s390 rely on upstream kvm.
>> So it is impossible to drop those bits without breaking all non-x86 kvm
>> archs.
>>
> I do not propose to drop bits from upstream that are used in qemu-kvm
> obviously.
>
>>>
>>>>>
>>>>> BTW Do you have a plan how to move upstream to thread per vcpu?
>>>>
>>>> Upstream has this already, but it's - once again - a different
>>>> implementation. Understanding those differences is one of the next steps.
>>>>
>>> I see only two threads on upstream no matter how much vcpus I configure.
>>
>> /me sees a lot of them. Did you enable io-thread support? Otherwise kvm
>> is run just like tcg in single-thread mode.
>>
> No, I didn't. Does io-thread work properly with TCG? IIRC there were
> problems with io thread + TCG.

I'm not using TCG heavily, so I can't say for sure if there are still
issues remaining with the I/O thread. Quite a few were fixed last year,
and I'm currently not aware of open issues.

>
>>>
>>>> In fact, as posted recently, unifying the execution model
>>>> implementations is the only big problem I see. In-kernel irqchips and
>>>> device assignment are things that can live in qemu-kvm without much
>>>> conflicts until they are finally mergable.
>>>>
>>> Upstream kvm is kinda useless without in-kernel irqchips.
>>
>> Not if its code serves the rest of qemu-kvm without further patches (and
>> merge conflicts). And we only need to sort out the execution loop and
>> threading stuff to get there.
>>
> This could have been achieved by not introducing upstream kvm in the
> first place :). Many if not most merging problems were result of rival
> kvm implementation in upstream. I thought the goal is to get rid of
> qemu-kvm fork at all by having fully functional kvm in upstream.

I'm quite sure that, by the time kvm upstream was merged, qemu-kvm was
still too far away from a mergable state - not so much its core, but its
hooks into and extensions of qemu. So, as far as I understood (Anthony
may correct me), the upstream flavor originally served as an early
teaser for the QEMU folks, opening their minds to the needs and
possibilities of virtualization. However, at the latest by the time ppc
adopted this teaser, it became more than that. And I'm also not sure we
would be this far along now if we had tried to dress up qemu-kvm
directly for a merge.

What went wrong, IMHO, was that we did not merge aggressively enough,
specifically once we reached the point where consolidating individual
parts became as easy as it is now. That likely cost more than it saved.

Jan

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 259 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread
end of thread, other threads:[~2011-01-08  9:13 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-01-07 15:57 [Qemu-devel] qemu-kvm vs. qemu: Terminate cpu loop on reset? Jan Kiszka
2011-01-07 16:53 ` [Qemu-devel] " Gleb Natapov
2011-01-07 16:59   ` Jan Kiszka
2011-01-07 17:16     ` Gleb Natapov
2011-01-07 17:30       ` Jan Kiszka
2011-01-07 17:53         ` Gleb Natapov
2011-01-07 18:24           ` Jan Kiszka
2011-01-07 18:32             ` Jan Kiszka
2011-01-07 19:10               ` Gleb Natapov
2011-01-07 19:33                 ` Jan Kiszka
2011-01-07 21:19                   ` Gleb Natapov
2011-01-08  9:12                     ` Jan Kiszka