* [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
@ 2016-03-03 10:46 Liang Li
2016-03-03 10:46 ` [Qemu-devel] [RFC kernel 1/2] mm: Add the functions used to get free pages information Liang Li
2016-03-03 10:46 ` [Qemu-devel] [RFC kernel 2/2] virtio-balloon: extend balloon driver to support a new feature Liang Li
0 siblings, 2 replies; 6+ messages in thread
From: Liang Li @ 2016-03-03 10:46 UTC (permalink / raw)
To: mst, linux-kernel
Cc: ehabkost, kvm, quintela, Liang Li, qemu-devel, virtualization,
linux-mm, amit.shah, pbonzini, akpm, dgilbert, rth
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirty in the ram bulk stage; all of these pages
will be processed, and that takes quite a lot of CPU cycles.

From the guest's point of view, the content of free pages does not
matter. We can make use of this fact and skip processing the free
pages in the ram bulk stage; this saves many CPU cycles, reduces
the network traffic significantly, and speeds up the live migration
process noticeably.
This patch set is the kernel-side implementation.

It gets the free pages information by traversing
zone->free_area[order].free_list and constructs a free pages bitmap.
The virtio-balloon driver is extended to send the free pages
bitmap to QEMU for live migration optimization.
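On the QEMU side (posted as a separate patch set), the received bitmap
is used to filter pages out of the migration bitmap during the ram bulk
stage. As a rough illustration of that filtering step, here is a minimal
C sketch; the function and variable names are illustrative assumptions,
not the actual QEMU code:

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Illustrative sketch only -- not the actual QEMU implementation.
 * Clear every page the guest reported as free from the migration
 * dirty bitmap, so the ram bulk stage never sends it. */
static void filter_guest_free_pages(unsigned long *migration_bitmap,
                                    const unsigned long *free_page_bitmap,
                                    unsigned long nr_pages)
{
        unsigned long i;
        unsigned long nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

        for (i = 0; i < nr_longs; i++)
                /* a set bit in free_page_bitmap means "free, do not send" */
                migration_bitmap[i] &= ~free_page_bitmap[i];
}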
Performance data
================
Test environment:
CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Host RAM: 64GB
Host Linux Kernel: 4.2.0
Host OS: CentOS 7.1
Guest Linux Kernel: 4.5-rc6
Guest OS: CentOS 6.6
Network: X540-AT2 with 10 Gigabit connection
Guest RAM: 8GB
Case 1: Idle guest, just booted:

============================================
                     | original |    pv
--------------------------------------------
total time (ms)      |    1894  |    421
--------------------------------------------
transferred ram (KB) |  398017  |  353242
============================================
Case 2: The guest ran a memory-consuming workload, which was
terminated just before live migration.

============================================
                     | original |    pv
--------------------------------------------
total time (ms)      |    7436  |    552
--------------------------------------------
transferred ram (KB) | 8146291  |  361375
============================================
Liang Li (2):
mm: Add the functions used to get free pages information
virtio-balloon: extend balloon driver to support a new feature
drivers/virtio/virtio_balloon.c | 108 ++++++++++++++++++++++++++++++++++--
include/uapi/linux/virtio_balloon.h | 1 +
mm/page_alloc.c | 58 +++++++++++++++++++
3 files changed, 162 insertions(+), 5 deletions(-)
--
1.8.3.1
* [Qemu-devel] [RFC kernel 1/2] mm: Add the functions used to get free pages information
2016-03-03 10:46 [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization Liang Li
@ 2016-03-03 10:46 ` Liang Li
2016-03-03 10:46 ` [Qemu-devel] [RFC kernel 2/2] virtio-balloon: extend balloon driver to support a new feature Liang Li
1 sibling, 0 replies; 6+ messages in thread
From: Liang Li @ 2016-03-03 10:46 UTC (permalink / raw)
To: mst, linux-kernel
Cc: ehabkost, kvm, quintela, Liang Li, qemu-devel, virtualization,
linux-mm, amit.shah, pbonzini, akpm, dgilbert, rth
get_total_pages_count() computes the page count of the system RAM.

get_free_pages() is intended to construct a free pages bitmap by
traversing the free_list of every populated zone.

The free pages information will be sent to QEMU through virtio
and used for live migration optimization.
Signed-off-by: Liang Li <liang.z.li@intel.com>
---
mm/page_alloc.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 838ca8bb..81922e6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3860,6 +3860,63 @@ void show_free_areas(unsigned int filter)
         show_swap_cache_info();
 }
 
+#define PFN_4G  (0x100000000 >> PAGE_SHIFT)
+
+unsigned long get_total_pages_count(unsigned long low_mem)
+{
+        if (max_pfn >= PFN_4G) {
+                unsigned long pfn_gap = PFN_4G - (low_mem >> PAGE_SHIFT);
+
+                return max_pfn - pfn_gap;
+        } else
+                return max_pfn;
+}
+EXPORT_SYMBOL(get_total_pages_count);
+
+static void mark_free_pages_bitmap(struct zone *zone,
+        unsigned long *free_page_bitmap, unsigned long pfn_gap)
+{
+        unsigned long pfn, flags, i;
+        unsigned int order, t;
+        struct list_head *curr;
+
+        if (zone_is_empty(zone))
+                return;
+
+        spin_lock_irqsave(&zone->lock, flags);
+
+        for_each_migratetype_order(order, t) {
+                list_for_each(curr, &zone->free_area[order].free_list[t]) {
+
+                        pfn = page_to_pfn(list_entry(curr, struct page, lru));
+                        for (i = 0; i < (1UL << order); i++) {
+                                if ((pfn + i) >= PFN_4G)
+                                        set_bit_le(pfn + i - pfn_gap,
+                                                   free_page_bitmap);
+                                else
+                                        set_bit_le(pfn + i, free_page_bitmap);
+                        }
+                }
+        }
+
+        spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+void get_free_pages(unsigned long *free_page_bitmap,
+        unsigned long *free_pages_count,
+        unsigned long low_mem)
+{
+        struct zone *zone;
+        unsigned long pfn_gap;
+
+        pfn_gap = PFN_4G - (low_mem >> PAGE_SHIFT);
+        for_each_populated_zone(zone)
+                mark_free_pages_bitmap(zone, free_page_bitmap, pfn_gap);
+
+        *free_pages_count = global_page_state(NR_FREE_PAGES);
+}
+EXPORT_SYMBOL(get_free_pages);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
         zoneref->zone = zone;
--
1.8.3.1
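The PFN_4G / pfn_gap arithmetic above is easiest to follow with concrete
numbers. The standalone C sketch below uses an assumed layout -- an 8GB
guest whose low memory ends at 3GB, leaving a 1GB hole below the 4GB
boundary -- which is an example, not something the patch specifies:

#include <stdio.h>

#define PAGE_SHIFT 12                            /* 4KB pages */
#define PFN_4G (0x100000000UL >> PAGE_SHIFT)     /* pfn at the 4GB boundary */

int main(void)
{
        unsigned long low_mem = 3UL << 30;                  /* RAM below the hole: 3GB */
        unsigned long max_pfn = (9UL << 30) >> PAGE_SHIFT;  /* 8GB RAM + 1GB hole -> top at 9GB */
        unsigned long pfn_gap = PFN_4G - (low_mem >> PAGE_SHIFT);
        unsigned long first_high_pfn = PFN_4G;              /* first page above 4GB */

        /* The gap covers exactly the unbacked hole: 1GB = 262144 pfns. */
        printf("pfn_gap          = %lu\n", pfn_gap);

        /* get_total_pages_count(): max_pfn minus the hole = 2097152 = 8GB/4KB. */
        printf("total page count = %lu\n", max_pfn - pfn_gap);

        /* mark_free_pages_bitmap() shifts pfns above 4GB down by pfn_gap,
         * so the bitmap stays contiguous and wastes no bits on the hole. */
        printf("bit for first high page = %lu\n", first_high_pfn - pfn_gap);
        return 0;
}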
* [Qemu-devel] [RFC kernel 2/2] virtio-balloon: extend balloon driver to support a new feature
2016-03-03 10:46 [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization Liang Li
2016-03-03 10:46 ` [Qemu-devel] [RFC kernel 1/2] mm: Add the functions used to get free pages information Liang Li
@ 2016-03-03 10:46 ` Liang Li
1 sibling, 0 replies; 6+ messages in thread
From: Liang Li @ 2016-03-03 10:46 UTC (permalink / raw)
To: mst, linux-kernel
Cc: ehabkost, kvm, quintela, Liang Li, qemu-devel, virtualization,
linux-mm, amit.shah, pbonzini, akpm, dgilbert, rth
Extend the virtio balloon driver to support the new feature
VIRTIO_BALLOON_F_GET_FREE_PAGES, so that it can be used to send the
free pages information from the guest to QEMU and optimize the
live migration process.
Signed-off-by: Liang Li <liang.z.li@intel.com>
---
drivers/virtio/virtio_balloon.c | 106 ++++++++++++++++++++++++++++++++++--
include/uapi/linux/virtio_balloon.h | 1 +
2 files changed, 102 insertions(+), 5 deletions(-)
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 0c3691f..7461d3e 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -45,9 +45,18 @@ static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 
+extern void get_free_pages(unsigned long *free_page_bitmap,
+        unsigned long *free_pages_num,
+        unsigned long lowmem);
+extern unsigned long get_total_pages_count(unsigned long lowmem);
+
+struct mem_layout {
+        unsigned long low_mem;
+};
+
 struct virtio_balloon {
         struct virtio_device *vdev;
-        struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+        struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_pages_vq;
 
         /* Where the ballooning thread waits for config to change. */
         wait_queue_head_t config_change;
@@ -75,6 +84,11 @@ struct virtio_balloon {
         unsigned int num_pfns;
         u32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
 
+        unsigned long *free_pages;
+        unsigned long free_pages_len;
+        unsigned long free_pages_num;
+        struct mem_layout mem_config;
+
         /* Memory statistics */
         int need_stats_update;
         struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
@@ -245,6 +259,34 @@ static void update_balloon_stats(struct virtio_balloon *vb)
                                 pages_to_bytes(i.totalram));
 }
 
+static void update_free_pages_stats(struct virtio_balloon *vb)
+{
+        unsigned long total_page_count, bitmap_bytes;
+
+        total_page_count = get_total_pages_count(vb->mem_config.low_mem);
+        bitmap_bytes = ALIGN(total_page_count, BITS_PER_LONG) / 8;
+
+        if (!vb->free_pages)
+                vb->free_pages = kzalloc(bitmap_bytes, GFP_KERNEL);
+        else {
+                if (bitmap_bytes < vb->free_pages_len)
+                        memset(vb->free_pages, 0, bitmap_bytes);
+                else {
+                        kfree(vb->free_pages);
+                        vb->free_pages = kzalloc(bitmap_bytes, GFP_KERNEL);
+                }
+        }
+        if (!vb->free_pages) {
+                vb->free_pages_len = 0;
+                vb->free_pages_num = 0;
+                return;
+        }
+
+        vb->free_pages_len = bitmap_bytes;
+        get_free_pages(vb->free_pages, &vb->free_pages_num,
+                vb->mem_config.low_mem);
+}
+
 /*
  * While most virtqueues communicate guest-initiated requests to the hypervisor,
  * the stats queue operates in reverse. The driver initializes the virtqueue
@@ -278,6 +320,39 @@ static void stats_handle_request(struct virtio_balloon *vb)
         virtqueue_kick(vq);
 }
 
+static void free_pages_handle_rq(struct virtio_balloon *vb)
+{
+        struct virtqueue *vq;
+        struct scatterlist sg[3];
+        unsigned int len;
+        struct mem_layout *ptr_mem_layout;
+        struct scatterlist sg_in;
+
+        vq = vb->free_pages_vq;
+        ptr_mem_layout = virtqueue_get_buf(vq, &len);
+
+        if (!ptr_mem_layout)
+                return;
+        update_free_pages_stats(vb);
+        sg_init_table(sg, 3);
+        sg_set_buf(&sg[0], &(vb->free_pages_num), sizeof(vb->free_pages_num));
+        sg_set_buf(&sg[1], &(vb->free_pages_len), sizeof(vb->free_pages_len));
+        sg_set_buf(&sg[2], vb->free_pages, vb->free_pages_len);
+
+        sg_init_one(&sg_in, &vb->mem_config, sizeof(vb->mem_config));
+
+        virtqueue_add_outbuf(vq, &sg[0], 3, vb, GFP_KERNEL);
+        virtqueue_add_inbuf(vq, &sg_in, 1, &vb->mem_config, GFP_KERNEL);
+        virtqueue_kick(vq);
+}
+
+static void free_pages_rq(struct virtqueue *vq)
+{
+        struct virtio_balloon *vb = vq->vdev->priv;
+
+        free_pages_handle_rq(vb);
+}
+
 static void virtballoon_changed(struct virtio_device *vdev)
 {
         struct virtio_balloon *vb = vdev->priv;
@@ -386,16 +461,22 @@ static int balloon(void *_vballoon)
 
 static int init_vqs(struct virtio_balloon *vb)
 {
-        struct virtqueue *vqs[3];
-        vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-        static const char * const names[] = { "inflate", "deflate", "stats" };
+        struct virtqueue *vqs[4];
+        vq_callback_t *callbacks[] = { balloon_ack, balloon_ack,
+                stats_request, free_pages_rq };
+        const char *names[] = { "inflate", "deflate", "stats", "free_pages" };
         int err, nvqs;
 
         /*
          * We expect two virtqueues: inflate and deflate, and
          * optionally stat.
          */
-        nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
+        if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_GET_FREE_PAGES))
+                nvqs = 4;
+        else
+                nvqs = virtio_has_feature(vb->vdev,
+                        VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
+
         err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names);
         if (err)
                 return err;
@@ -416,6 +497,16 @@ static int init_vqs(struct virtio_balloon *vb)
                         BUG();
                 virtqueue_kick(vb->stats_vq);
         }
+        if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_GET_FREE_PAGES)) {
+                struct scatterlist sg_in;
+
+                vb->free_pages_vq = vqs[3];
+                sg_init_one(&sg_in, &vb->mem_config, sizeof(vb->mem_config));
+                if (virtqueue_add_inbuf(vb->free_pages_vq, &sg_in, 1,
+                        &vb->mem_config, GFP_KERNEL) < 0)
+                        BUG();
+                virtqueue_kick(vb->free_pages_vq);
+        }
 
         return 0;
 }
@@ -505,6 +596,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
         init_waitqueue_head(&vb->acked);
         vb->vdev = vdev;
         vb->need_stats_update = 0;
+        vb->free_pages_num = 0;
+        vb->free_pages_len = 0;
+        vb->free_pages = NULL;
 
         balloon_devinfo_init(&vb->vb_dev_info);
 #ifdef CONFIG_BALLOON_COMPACTION
@@ -561,6 +655,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
         unregister_oom_notifier(&vb->nb);
         kthread_stop(vb->thread);
         remove_common(vb);
+        kfree(vb->free_pages);
         kfree(vb);
 }
 
@@ -599,6 +694,7 @@ static unsigned int features[] = {
         VIRTIO_BALLOON_F_MUST_TELL_HOST,
         VIRTIO_BALLOON_F_STATS_VQ,
         VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+        VIRTIO_BALLOON_F_GET_FREE_PAGES,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index d7f1cbc..54aaf20 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ       1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_GET_FREE_PAGES 3 /* Get free pages bitmap */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
--
1.8.3.1
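For completeness: the host side of this virtqueue lives in the separate
QEMU series, where QEMU pops the guest's reply and reads the three
elements posted by free_pages_handle_rq() in order. The sketch below is
hypothetical host-side code, written under two assumptions that this
RFC's ad-hoc format implicitly requires -- the three scatterlist entries
arrive flattened into one contiguous, suitably aligned buffer, and guest
and host agree on sizeof(unsigned long) and endianness. None of the
names are real QEMU API:

#include <stddef.h>
#include <string.h>

/* Parsed form of the guest's reply: page count, bitmap length, bitmap. */
struct free_pages_reply {
        unsigned long free_pages_num;   /* guest snapshot of NR_FREE_PAGES */
        unsigned long free_pages_len;   /* bitmap size in bytes */
        const unsigned long *bitmap;    /* little-endian free-page bitmap */
};

/* Hypothetical parser -- a sketch, not actual QEMU code. */
static int parse_free_pages_reply(const void *buf, size_t len,
                                  struct free_pages_reply *out)
{
        const unsigned char *p = buf;
        size_t hdr = 2 * sizeof(unsigned long);

        if (len < hdr)
                return -1;              /* too short for the two counters */

        memcpy(&out->free_pages_num, p, sizeof(unsigned long));
        memcpy(&out->free_pages_len, p + sizeof(unsigned long),
               sizeof(unsigned long));

        if (len - hdr < out->free_pages_len)
                return -1;              /* truncated bitmap */

        out->bitmap = (const unsigned long *)(const void *)(p + hdr);
        return 0;
}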
* Re: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
@ 2016-03-10 7:01 Jitendra Kolhe
2016-03-10 7:22 ` Li, Liang Z
2016-03-10 7:30 ` Amit Shah
0 siblings, 2 replies; 6+ messages in thread
From: Jitendra Kolhe @ 2016-03-10 7:01 UTC (permalink / raw)
To: amit.shah
Cc: ehabkost, kvm, quintela, qemu-devel, liang.z.li, dgilbert,
linux-kernel, linux-mm, mst, mohan_parthasarathy, simhan,
pbonzini, akpm, virtualization, rth
On 3/8/2016 4:44 PM, Amit Shah wrote:
> On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
>>>>
>>>> * Liang Li (liang.z.li@intel.com) wrote:
>>>>> The current QEMU live migration implementation mark the all the
>>>>> guest's RAM pages as dirtied in the ram bulk stage, all these pages
>>>>> will be processed and that takes quit a lot of CPU cycles.
>>>>>
>>>>> From guest's point of view, it doesn't care about the content in free
>>>>> pages. We can make use of this fact and skip processing the free pages
>>>>> in the ram bulk stage, it can save a lot CPU cycles and reduce the
>>>>> network traffic significantly while speed up the live migration
>>>>> process obviously.
>>>>>
>>>>> This patch set is the QEMU side implementation.
>>>>>
>>>>> The virtio-balloon is extended so that QEMU can get the free pages
>>>>> information from the guest through virtio.
>>>>>
>>>>> After getting the free pages information (a bitmap), QEMU can use it
>>>>> to filter out the guest's free pages in the ram bulk stage. This make
>>>>> the live migration process much more efficient.
>>>>
>>>> Hi,
>>>> An interesting solution; I know a few different people have been looking at
>>>> how to speed up ballooned VM migration.
>>>>
>>>
>>> Ooh, different solutions for the same purpose, and both based on the balloon.
>>
>> We were also tying to address similar problem, without actually needing to modify
>> the guest driver. Please find patch details under mail with subject.
>> migration: skip sending ram pages released by virtio-balloon driver
>
> The scope of this patch series seems to be wider: don't send free
> pages to a dest at all, vs. don't send pages that are ballooned out.
>
> Amit
Hi,
Thanks for your response. The scope of this patch series doesn't seem to take care
of ballooned-out pages. To balloon out a guest RAM page, the guest balloon driver does
an alloc_page() and then returns the guest pfn to QEMU, so ballooned-out pages will not
be seen as free RAM pages by the guest.
Thus we will still end up scanning ballooned-out pages (looking for zero pages) during
migration. It would be ideal if we could have both solutions.
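In other words, inflation takes pages off the very free lists that
mark_free_pages_bitmap() walks. A simplified sketch of that path,
modeled loosely on fill_balloon() in drivers/virtio/virtio_balloon.c
(batching and error handling elided, so treat it as illustration only):

/* Simplified sketch, loosely modeled on fill_balloon(). */
static void balloon_out_one_page(struct virtio_balloon *vb)
{
        /* Allocates the page, removing it from the buddy free lists
         * that mark_free_pages_bitmap() traverses... */
        struct page *page = balloon_page_enqueue(&vb->vb_dev_info);

        if (!page)
                return;

        /* ...then only the pfn is reported to the host via the inflate
         * vq. To the guest MM the page is "in use", so it can never be
         * set in the free-page bitmap. */
        set_page_pfns(vb->pfns, page);
        vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
        tell_host(vb, vb->inflate_vq);
}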
Thanks,
- Jitendra
* Re: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
2016-03-10 7:01 [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization Jitendra Kolhe
@ 2016-03-10 7:22 ` Li, Liang Z
2016-03-10 7:30 ` Amit Shah
1 sibling, 0 replies; 6+ messages in thread
From: Li, Liang Z @ 2016-03-10 7:22 UTC (permalink / raw)
To: Jitendra Kolhe, amit.shah@redhat.com
Cc: ehabkost@redhat.com, kvm@vger.kernel.org, quintela@redhat.com,
qemu-devel@nongnu.org, mst@redhat.com,
linux-kernel@vger.kernel.org, dgilbert@redhat.com,
linux-mm@kvack.org, mohan_parthasarathy@hpe.com, simhan@hpe.com,
pbonzini@redhat.com, akpm@linux-foundation.org,
virtualization@lists.linux-foundation.org, rth@twiddle.net
> On 3/8/2016 4:44 PM, Amit Shah wrote:
> > On (Fri) 04 Mar 2016 [15:02:47], Jitendra Kolhe wrote:
> >>>>
> >>>> * Liang Li (liang.z.li@intel.com) wrote:
> >>>>> The current QEMU live migration implementation mark the all the
> >>>>> guest's RAM pages as dirtied in the ram bulk stage, all these
> >>>>> pages will be processed and that takes quit a lot of CPU cycles.
> >>>>>
> >>>>> From guest's point of view, it doesn't care about the content in
> >>>>> free pages. We can make use of this fact and skip processing the
> >>>>> free pages in the ram bulk stage, it can save a lot CPU cycles and
> >>>>> reduce the network traffic significantly while speed up the live
> >>>>> migration process obviously.
> >>>>>
> >>>>> This patch set is the QEMU side implementation.
> >>>>>
> >>>>> The virtio-balloon is extended so that QEMU can get the free pages
> >>>>> information from the guest through virtio.
> >>>>>
> >>>>> After getting the free pages information (a bitmap), QEMU can use
> >>>>> it to filter out the guest's free pages in the ram bulk stage.
> >>>>> This make the live migration process much more efficient.
> >>>>
> >>>> Hi,
> >>>> An interesting solution; I know a few different people have been
> >>>> looking at how to speed up ballooned VM migration.
> >>>>
> >>>
> >>> Ooh, different solutions for the same purpose, and both based on the
> balloon.
> >>
> >> We were also tying to address similar problem, without actually
> >> needing to modify the guest driver. Please find patch details under mail
> with subject.
> >> migration: skip sending ram pages released by virtio-balloon driver
> >
> > The scope of this patch series seems to be wider: don't send free
> > pages to a dest at all, vs. don't send pages that are ballooned out.
> >
> > Amit
>
> Hi,
>
> Thanks for your response. The scope of this patch series doesn’t seem to
> take care of ballooned out pages. To balloon out a guest ram page the guest
> balloon driver does a alloc_page() and then return the guest pfn to Qemu, so
> ballooned out pages will not be seen as free ram pages by the guest.
> Thus we will still end up scanning (for zero page) for ballooned out pages
> during migration. It would be ideal if we could have both solutions.
>
Agreed. For users who care about performance, just skipping the free pages is the way to go;
for users who have already turned on virtio-balloon, your solution can take effect as well.
Liang
> Thanks,
> - Jitendra
* Re: [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization
2016-03-10 7:01 [Qemu-devel] [RFC kernel 0/2] A PV solution for KVM live migration optimization Jitendra Kolhe
2016-03-10 7:22 ` Li, Liang Z
@ 2016-03-10 7:30 ` Amit Shah
1 sibling, 0 replies; 6+ messages in thread
From: Amit Shah @ 2016-03-10 7:30 UTC (permalink / raw)
To: Jitendra Kolhe
Cc: ehabkost, kvm, quintela, qemu-devel, liang.z.li, dgilbert,
linux-kernel, linux-mm, mst, mohan_parthasarathy, simhan,
pbonzini, akpm, virtualization, rth
On (Thu) 10 Mar 2016 [12:31:32], Jitendra Kolhe wrote:
> On 3/8/2016 4:44 PM, Amit Shah wrote:
> >>>> Hi,
> >>>> An interesting solution; I know a few different people have been looking at
> >>>> how to speed up ballooned VM migration.
> >>>>
> >>>
> >>> Ooh, different solutions for the same purpose, and both based on the balloon.
> >>
> >> We were also tying to address similar problem, without actually needing to modify
> >> the guest driver. Please find patch details under mail with subject.
> >> migration: skip sending ram pages released by virtio-balloon driver
> >
> > The scope of this patch series seems to be wider: don't send free
> > pages to a dest at all, vs. don't send pages that are ballooned out.
>
> Hi,
>
> Thanks for your response. The scope of this patch series doesn’t seem to take care
> of ballooned out pages. To balloon out a guest ram page the guest balloon driver does
> a alloc_page() and then return the guest pfn to Qemu, so ballooned out pages will not
> be seen as free ram pages by the guest.
> Thus we will still end up scanning (for zero page) for ballooned out pages during
> migration. It would be ideal if we could have both solutions.
Yes, of course it would be nice to have both solutions. My response was to the line:
> >>> Ooh, different solutions for the same purpose, and both based on the balloon.
which sounded misleading to me for a couple of reasons: first, as you
describe, the pages considered by this patchset and by yours are
different; and second, as I mentioned in the other mail, this patchset
doesn't really depend on the balloon, and I believe it should not.
Amit