public inbox for kvm@vger.kernel.org
* KVM call minutes for Sept 21
@ 2010-09-21 18:05 Chris Wright
  2010-09-21 18:23 ` Anthony Liguori
  2010-09-22  0:04 ` Nadav Har'El
  0 siblings, 2 replies; 22+ messages in thread
From: Chris Wright @ 2010-09-21 18:05 UTC (permalink / raw)
  To: kvm; +Cc: qemu-devel

Nested VMX
- looking for forward progress and better collaboration between the
  Intel and IBM teams
- needs more review (not a new issue)
- use cases
- work todo
  - merge baseline patch
    - looks pretty good
    - review is finding mostly small things at this point
    - need some correctness verification (both review from Intel and testing)
  - need a test suite
    - test suite harness will help here
      - a few dozen nested SVM tests are there, can follow for nested VMX
  - nested EPT
  - optimize (reduce vmreads and vmwrites)
- has a long-term maintainer

Hotplug
- unplug is a request to the guest...guest may or may not respond
- guest can't be trusted to be direct part of request/response loop
- solve at QMP level
- human monitor issues (multiple successive commands to complete a
  single unplug)
  - should be a GUI interface design decision, human monitor is not a
    good design point
    - digression into GUI interface

Drive caching
- need to formalize the meanings in terms of data integrity guarantees
- guest write cache (does it directly reflect the host write cache?)
  - live migration, underlying block dev changes, so need to decouple the two
- O_DIRECT + O_DSYNC
  - O_DSYNC needed based on whether disk cache is available
  - also issues with sparse files (e.g. O_DIRECT to unallocated extent)
  - how to manage w/out flushing every write, which is slow
- perhaps start with O_DIRECT on raw, non-sparse files only?
- backend needs to open the backing store to match the guest's disk cache state
- O_DIRECT itself has inconsistent integrity guarantees
  - works well with a fully allocated file; dependent on the disk cache being
    disabled (or fs-specific flushing)
- filesystem specific warnings (ext4 w/ barriers on, btrfs)
- need to be able to open w/ O_DSYNC depending on guest's write cache mode
- make write cache visible to guest (need a knob for this)
- qemu default is cache=writethrough, do we need to revisit that?
- just present user with option whether or not to use host page cache
- allow guest OS to choose disk write cache setting
  - set up host backend accordingly
- would be nice to preserve write cache settings across boots (outgrowing CMOS
  storage)
- maybe some host fs-level optimization possible
  - e.g. O_DSYNC to allocated O_DIRECT extent becomes no-op
- conclusion
  - one direct user tunable, "use host page cache or not"
  - one guest OS tunable, "enable disk cache"



Thread overview: 22+ messages
2010-09-21 18:05 KVM call minutes for Sept 21 Chris Wright
2010-09-21 18:23 ` Anthony Liguori
2010-09-22  0:04 ` Nadav Har'El
2010-09-22  1:48   ` Chris Wright
2010-09-22 17:49     ` Nadav Har'El
2010-09-22 18:03       ` Anthony Liguori
2010-09-22 19:34         ` Joerg Roedel
2010-09-22 19:48       ` Joerg Roedel
2010-09-22  9:02   ` Gleb Natapov
2010-09-22 16:29     ` Nadav Har'El
2010-09-22 17:47       ` Gleb Natapov
2010-09-22 19:20         ` Joerg Roedel
2010-09-22 20:18           ` Gleb Natapov
2010-09-22 23:00             ` Nadav Har'El
2010-09-26 14:03           ` Avi Kivity
2010-09-26 20:25             ` Joerg Roedel
2010-09-27  8:36               ` Avi Kivity
2010-09-27 14:18                 ` Gleb Natapov
2010-09-27 14:22                   ` Avi Kivity
2010-09-26 13:27   ` Avi Kivity
2010-09-26 14:28     ` Nadav Har'El
2010-09-26 14:50       ` Avi Kivity
