From: Gerd Hoffmann <kraxel@redhat.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v3 5/5] switch -drive to QemuOpts.
Date: Thu, 16 Jul 2009 21:32:32 +0200 [thread overview]
Message-ID: <4A5F8050.1090308@redhat.com> (raw)
In-Reply-To: <4A5F7975.8020207@codemonkey.ws>
On 07/16/09 21:03, Anthony Liguori wrote:
> Gerd Hoffmann wrote:
>> Quick fix (incremental) attached.
>> Oh, and a leftover debug line ...
>
> Great, wait a day or two and please resend the series.
I'll be offline tomorrow, over the weekend, and on Monday next week, so I
can have a closer look on Tuesday (merge the bugfix, rebase, fix up
whatever shows up).
> One thing that bothers me is that there is a really high rate of change
> in the qdev stuff. These series touch a lot of code and therefore cause
> quite a lot of conflicts. That concerns me that the long term
> maintenance of stable-0.11 is going to be really painful.
By far the worst offender is the property refactoring, and that one has
seen no fundamental changes in quite a while. Lots of little tweaks
though, mostly due to conflicts and due to devices being converted to
qdev introducing build failures.
Btw: blueswirl converted more SPARC stuff to qdev, thus adding more
build failures with the property rework patch applied.
http://git.et.redhat.com/?p=qemu-kraxel.git;a=shortlog;h=refs/heads/qdev.v13
has fixes (the four topmost patches); you might want to cherry-pick them
if you find your tree not building.
All other patches are not *that* intrusive.
> So I'm thinking of making an exception for some of the more intrusive
> qdev changes and to continue pulling some of them in post freeze. It
> would have to be handled on a case by case basis but I'm specifically
> thinking of things like the Property refactoring you just did.
I see the property refactoring in your queue already.
And, yes, having that in 0.11 will most likely simplify backports a lot.
cheers,
Gerd
Thread overview: 21+ messages
2009-07-16 14:56 [Qemu-devel] [PATCH v3 0/5] cleanup drive handling Gerd Hoffmann
2009-07-16 14:57 ` [Qemu-devel] [PATCH v3 1/5] kill drives_table Gerd Hoffmann
2009-07-16 14:57 ` [Qemu-devel] [PATCH v3 2/5] add support for drive ids Gerd Hoffmann
2009-07-16 14:57 ` [Qemu-devel] [PATCH v3 3/5] kill drives_opt Gerd Hoffmann
2009-07-16 14:57 ` [Qemu-devel] [PATCH v3 4/5] QemuOpts: framework for storing and parsing options Gerd Hoffmann
2009-07-16 16:35 ` [Qemu-devel] " Jan Kiszka
2009-07-16 18:50 ` Gerd Hoffmann
2009-07-17 7:03 ` [Qemu-devel] " Kevin Wolf
2009-07-21 7:25 ` Gerd Hoffmann
2009-07-21 7:42 ` Kevin Wolf
2009-07-21 13:59 ` Gerd Hoffmann
2009-07-21 15:58 ` Kevin Wolf
2009-07-22 6:58 ` Gerd Hoffmann
2009-07-22 7:31 ` Kevin Wolf
2009-07-22 7:55 ` Gerd Hoffmann
2009-07-16 14:57 ` [Qemu-devel] [PATCH v3 5/5] switch -drive to QemuOpts Gerd Hoffmann
2009-07-16 16:07 ` Anthony Liguori
2009-07-16 18:55 ` Gerd Hoffmann
2009-07-16 19:03 ` Anthony Liguori
2009-07-16 19:32 ` Gerd Hoffmann [this message]
2009-07-16 20:08 ` Anthony Liguori