* [PATCH 0/6] vmw_balloon: 64-bit limit support, compaction, shrinker
From: Nadav Amit @ 2019-02-06 23:57 UTC
To: Greg Kroah-Hartman
Cc: Arnd Bergmann, linux-kernel, Julien Freche, Nadav Amit,
    Michael S. Tsirkin, Jason Wang, linux-mm, virtualization

Various enhancements for the VMware balloon, some of which are left over
from a previous patch-set.

Patch 1: Drop the version number
Patch 2: Add support for a 64-bit memory limit
Patches 3-4: Support for compaction
Patch 5: Support for a memory shrinker - disabled by default
Patch 6: Split refused pages to improve performance

This is, in effect, a resend: patches 2-6 were never delivered, because the
mail server rejected the set when Xavier, whose email address has been
deactivated, was mistakenly cc'd. Patch 1 was changed according to Greg's
feedback.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: linux-mm@kvack.org
Cc: virtualization@lists.linux-foundation.org

Nadav Amit (5):
  vmw_balloon: remove the version number
  mm/balloon_compaction: list interfaces
  vmw_balloon: compaction support
  vmw_balloon: add memory shrinker
  vmw_balloon: split refused pages

Xavier Deguillard (1):
  vmw_balloon: support 64-bit memory limit

 drivers/misc/Kconfig               |   1 +
 drivers/misc/vmw_balloon.c         | 510 ++++++++++++++++++++++++++---
 include/linux/balloon_compaction.h |   4 +
 mm/balloon_compaction.c            | 139 +++++---
 4 files changed, 565 insertions(+), 89 deletions(-)

-- 
2.17.1
* [PATCH 3/6] mm/balloon_compaction: list interfaces
From: Nadav Amit @ 2019-02-06 23:57 UTC
To: Greg Kroah-Hartman
Cc: Arnd Bergmann, linux-kernel, Julien Freche, Nadav Amit,
    Michael S. Tsirkin, Jason Wang, linux-mm, virtualization

Introduce interfaces for enqueueing and dequeueing a list of balloon
pages. These interfaces reduce the overhead of storing and restoring
IRQs by batching the operations. In addition, they do not panic if the
list of pages is empty.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: linux-mm@kvack.org
Cc: virtualization@lists.linux-foundation.org
Reviewed-by: Xavier Deguillard <xdeguillard@vmware.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 include/linux/balloon_compaction.h |   4 +
 mm/balloon_compaction.c            | 139 +++++++++++++++++++++--------
 2 files changed, 105 insertions(+), 38 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 53051f3d8f25..2c5a8e09e413 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -72,6 +72,10 @@ extern struct page *balloon_page_alloc(void);
 extern void balloon_page_enqueue(struct balloon_dev_info *b_dev_info,
 				 struct page *page);
 extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
+extern void balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
+				      struct list_head *pages);
+extern int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
+				     struct list_head *pages, int n_req_pages);
 
 static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
 {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index ef858d547e2d..b8e82864f82c 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -10,6 +10,100 @@
 #include <linux/export.h>
 #include <linux/balloon_compaction.h>
 
+static int balloon_page_enqueue_one(struct balloon_dev_info *b_dev_info,
+				    struct page *page)
+{
+	/*
+	 * Block others from accessing the 'page' when we get around to
+	 * establishing additional references. We should be the only one
+	 * holding a reference to the 'page' at this point.
+	 */
+	if (!trylock_page(page)) {
+		WARN_ONCE(1, "balloon inflation failed to enqueue page\n");
+		return -EFAULT;
+	}
+	list_del(&page->lru);
+	balloon_page_insert(b_dev_info, page);
+	unlock_page(page);
+	__count_vm_event(BALLOON_INFLATE);
+	return 0;
+}
+
+/**
+ * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
+ *				 list.
+ * @b_dev_info: balloon device descriptor where we will insert the new pages
+ * @pages: pages to enqueue - allocated using balloon_page_alloc.
+ *
+ * Driver must call it to properly enqueue balloon pages before definitively
+ * removing them from the guest system.
+ */
+void balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
+			       struct list_head *pages)
+{
+	struct page *page, *tmp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+	list_for_each_entry_safe(page, tmp, pages, lru)
+		balloon_page_enqueue_one(b_dev_info, page);
+	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+}
+EXPORT_SYMBOL_GPL(balloon_page_list_enqueue);
+
+/**
+ * balloon_page_list_dequeue() - removes pages from the balloon's page list
+ *				 and returns a list of the pages.
+ * @b_dev_info: balloon device descriptor where we will grab a page from.
+ * @pages: pointer to the list of pages that would be returned to the caller.
+ * @n_req_pages: number of requested pages.
+ *
+ * Driver must call it to properly de-allocate previously enlisted balloon
+ * pages before definitively releasing them back to the guest system. This
+ * function tries to remove @n_req_pages from the ballooned pages and return
+ * them to the caller in the @pages list.
+ *
+ * Note that this function may fail to dequeue some pages temporarily if they
+ * have been isolated for compaction.
+ *
+ * Return: number of pages that were added to the @pages list.
+ */
+int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
+			      struct list_head *pages, int n_req_pages)
+{
+	struct page *page, *tmp;
+	unsigned long flags;
+	int n_pages = 0;
+
+	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
+		/*
+		 * Block others from accessing the 'page' while we get around
+		 * establishing additional references and preparing the 'page'
+		 * to be released by the balloon driver.
+		 */
+		if (!trylock_page(page))
+			continue;
+
+		if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
+		    PageIsolated(page)) {
+			/* raced with isolation */
+			unlock_page(page);
+			continue;
+		}
+		balloon_page_delete(page);
+		__count_vm_event(BALLOON_DEFLATE);
+		unlock_page(page);
+		list_add(&page->lru, pages);
+		if (++n_pages >= n_req_pages)
+			break;
+	}
+	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
+	return n_pages;
+}
+EXPORT_SYMBOL_GPL(balloon_page_list_dequeue);
+
 /*
  * balloon_page_alloc - allocates a new page for insertion into the balloon
  * page list.
@@ -43,17 +137,9 @@ void balloon_page_enqueue(struct balloon_dev_info *b_dev_info,
 {
 	unsigned long flags;
 
-	/*
-	 * Block others from accessing the 'page' when we get around to
-	 * establishing additional references. We should be the only one
-	 * holding a reference to the 'page' at this point.
-	 */
-	BUG_ON(!trylock_page(page));
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	balloon_page_insert(b_dev_info, page);
-	__count_vm_event(BALLOON_INFLATE);
+	balloon_page_enqueue_one(b_dev_info, page);
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-	unlock_page(page);
 }
 EXPORT_SYMBOL_GPL(balloon_page_enqueue);
 
@@ -70,36 +156,13 @@ EXPORT_SYMBOL_GPL(balloon_page_enqueue);
  */
 struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 {
-	struct page *page, *tmp;
 	unsigned long flags;
-	bool dequeued_page;
+	LIST_HEAD(pages);
+	int n_pages;
 
-	dequeued_page = false;
-	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
-		/*
-		 * Block others from accessing the 'page' while we get around
-		 * establishing additional references and preparing the 'page'
-		 * to be released by the balloon driver.
-		 */
-		if (trylock_page(page)) {
-#ifdef CONFIG_BALLOON_COMPACTION
-			if (PageIsolated(page)) {
-				/* raced with isolation */
-				unlock_page(page);
-				continue;
-			}
-#endif
-			balloon_page_delete(page);
-			__count_vm_event(BALLOON_DEFLATE);
-			unlock_page(page);
-			dequeued_page = true;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+	n_pages = balloon_page_list_dequeue(b_dev_info, &pages, 1);
 
-	if (!dequeued_page) {
+	if (n_pages != 1) {
 		/*
 		 * If we are unable to dequeue a balloon page because the page
 		 * list is empty and there is no isolated pages, then something
@@ -112,9 +175,9 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 			     !b_dev_info->isolated_pages))
 			BUG();
 		spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-		page = NULL;
+		return NULL;
 	}
-	return page;
+	return list_first_entry(&pages, struct page, lru);
 }
 EXPORT_SYMBOL_GPL(balloon_page_dequeue);
 
-- 
2.17.1
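(Illustration only, not part of the series: a minimal sketch of how a balloon
driver might batch an inflate and a deflate cycle with the interfaces added
above. The my_balloon_* helpers and the hypervisor-notification steps are
assumptions for the example; only balloon_page_alloc(),
balloon_page_list_enqueue() and balloon_page_list_dequeue() come from this
patch.)

/*
 * Sketch only: batching one inflate and one deflate cycle with the list
 * interfaces. The my_balloon_* names and the "tell the hypervisor" steps
 * are hypothetical; handling of pages the hypervisor refuses is omitted.
 */
#include <linux/balloon_compaction.h>
#include <linux/list.h>
#include <linux/gfp.h>

static void my_balloon_inflate(struct balloon_dev_info *b_dev_info,
			       int nr_pages)
{
	LIST_HEAD(pages);
	struct page *page;
	int i;

	/* Allocate the whole batch first, without holding pages_lock. */
	for (i = 0; i < nr_pages; i++) {
		page = balloon_page_alloc();
		if (!page)
			break;
		list_add(&page->lru, &pages);
	}

	/* ... report the batch to the hypervisor here ... */

	/* A single lock acquisition enqueues the whole batch. */
	balloon_page_list_enqueue(b_dev_info, &pages);
}

static int my_balloon_deflate(struct balloon_dev_info *b_dev_info,
			      int nr_pages)
{
	LIST_HEAD(pages);
	struct page *page, *tmp;
	int n_pages;

	/* May return fewer pages than requested, e.g. if some are isolated. */
	n_pages = balloon_page_list_dequeue(b_dev_info, &pages, nr_pages);

	/* ... tell the hypervisor the pages are being given back here ... */

	list_for_each_entry_safe(page, tmp, &pages, lru) {
		list_del(&page->lru);
		__free_page(page);
	}

	return n_pages;
}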
* Re: [PATCH 3/6] mm/balloon_compaction: list interfaces
From: Michael S. Tsirkin @ 2019-02-07 0:32 UTC
To: Nadav Amit
Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, Julien Freche,
    Jason Wang, linux-mm, virtualization

On Wed, Feb 06, 2019 at 03:57:03PM -0800, Nadav Amit wrote:
> Introduce interfaces for enqueueing and dequeueing a list of balloon
> pages. These interfaces reduce the overhead of storing and restoring
> IRQs by batching the operations. In addition, they do not panic if the
> list of pages is empty.

[...]

> +void balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
> +			       struct list_head *pages)
> +{
> +	struct page *page, *tmp;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> +	list_for_each_entry_safe(page, tmp, pages, lru)
> +		balloon_page_enqueue_one(b_dev_info, page);
> +	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);

As this is scanning pages one by one anyway, it would be useful to have
this return the number of pages enqueued.

> +}
> +EXPORT_SYMBOL_GPL(balloon_page_list_enqueue);

[...]

> +int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
> +			      struct list_head *pages, int n_req_pages)

Are we sure this int never overflows? Why not just use u64 or size_t
straight away?

> +{
> +	struct page *page, *tmp;
> +	unsigned long flags;
> +	int n_pages = 0;
> +
> +	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> +	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {

[...]

> +	return n_pages;
> +}
> +EXPORT_SYMBOL_GPL(balloon_page_list_dequeue);

This looks quite reasonable. In fact virtio can be reworked to use this
too, and then the original one can be dropped. Have the time?

[...]
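(Illustration only: a rough sketch of what the two suggestions above could
look like. This is one possible shape, written here as an assumption, not
the code that was eventually applied.)

/* 1) Have the batched enqueue report how many pages it actually took. */
size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
				 struct list_head *pages)
{
	struct page *page, *tmp;
	unsigned long flags;
	size_t n_pages = 0;

	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
	list_for_each_entry_safe(page, tmp, pages, lru) {
		if (balloon_page_enqueue_one(b_dev_info, page))
			continue;	/* page stays on the caller's list */
		n_pages++;
	}
	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);

	return n_pages;
}

/* 2) Use size_t for the requested and returned page counts in dequeue. */
size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
				 struct list_head *pages,
				 size_t n_req_pages);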
* Re: [PATCH 3/6] mm/balloon_compaction: list interfaces
From: Nadav Amit @ 2019-02-07 0:43 UTC
To: Michael S. Tsirkin
Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel@vger.kernel.org,
    Julien Freche, Jason Wang, linux-mm@kvack.org,
    virtualization@lists.linux-foundation.org

> On Feb 6, 2019, at 4:32 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Feb 06, 2019 at 03:57:03PM -0800, Nadav Amit wrote:
>> Introduce interfaces for enqueueing and dequeueing a list of balloon
>> pages. These interfaces reduce the overhead of storing and restoring
>> IRQs by batching the operations. In addition, they do not panic if the
>> list of pages is empty.

[Snip]

First, thanks for the quick feedback.

>> +	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
>> +	list_for_each_entry_safe(page, tmp, pages, lru)
>> +		balloon_page_enqueue_one(b_dev_info, page);
>> +	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
>
> As this is scanning pages one by one anyway, it would be useful to have
> this return the number of pages enqueued.

Sure.

>> +int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
>> +			      struct list_head *pages, int n_req_pages)
>
> Are we sure this int never overflows? Why not just use u64 or size_t
> straight away?

size_t it is.

>> +	return n_pages;
>> +}
>> +EXPORT_SYMBOL_GPL(balloon_page_list_dequeue);
>
> This looks quite reasonable. In fact virtio can be reworked to use this
> too, and then the original one can be dropped. Have the time?

Obviously not, but I am willing to make the time. What I cannot "make" is
approval to send patches for other hypervisors. Let me run a quick check
with our FOSS people here.

Anyhow, I hope this will not prevent the patches from getting into the
next release.
* Re: [PATCH 3/6] mm/balloon_compaction: list interfaces
From: Michael S. Tsirkin @ 2019-02-07 1:26 UTC
To: Nadav Amit
Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel@vger.kernel.org,
    Julien Freche, Jason Wang, linux-mm@kvack.org,
    virtualization@lists.linux-foundation.org

On Thu, Feb 07, 2019 at 12:43:51AM +0000, Nadav Amit wrote:

[...]

> > This looks quite reasonable. In fact virtio can be reworked to use this
> > too, and then the original one can be dropped. Have the time?
>
> Obviously not, but I am willing to make the time. What I cannot "make" is
> approval to send patches for other hypervisors. Let me run a quick check
> with our FOSS people here.
>
> Anyhow, I hope this will not prevent the patches from getting into the
> next release.

No, that's not a blocker.

-- 
MST