From: David Hildenbrand <david@redhat.com>
To: Janosch Frank <frankja@linux.vnet.ibm.com>, kvm@vger.kernel.org
Cc: schwidefsky@de.ibm.com, borntraeger@de.ibm.com,
dominik.dingel@gmail.com, linux-s390@vger.kernel.org
Subject: Re: [RFC/PATCH 22/22] RFC: s390/mm: Add gmap lock classes
Date: Wed, 15 Nov 2017 11:10:09 +0100 [thread overview]
Message-ID: <91442796-11c7-6c41-fe52-1e33bc5107dc@redhat.com> (raw)
In-Reply-To: <1510007400-42493-23-git-send-email-frankja@linux.vnet.ibm.com>
On 06.11.2017 23:30, Janosch Frank wrote:
> A shadow gmap and its parent are locked right after each other when
> doing VSIE management. Lockdep can't differentiate between the two
> classes without some help.
>
> TODO: Not sure yet if I have to annotate all and if gmap_pmd_walk will
> be used by both shadow and parent
Why is that new? I thought we already had the same situation before,
and lockdep didn't complain.
(Worrying if we are now seeing a real problem and trying to hide it.)
>
> Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
> ---
> arch/s390/include/asm/gmap.h | 6 ++++++
> arch/s390/mm/gmap.c | 40 ++++++++++++++++++++--------------------
> 2 files changed, 26 insertions(+), 20 deletions(-)
>
> diff --git a/arch/s390/include/asm/gmap.h b/arch/s390/include/asm/gmap.h
> index 15a7834..31fad98 100644
> --- a/arch/s390/include/asm/gmap.h
> +++ b/arch/s390/include/asm/gmap.h
> @@ -19,6 +19,12 @@
> #define _SEGMENT_ENTRY_GMAP_UC 0x4000 /* user dirty (migration) */
> #define _SEGMENT_ENTRY_GMAP_VSIE 0x8000 /* vsie bit */
>
> +
> +enum gmap_lock_class {
> + GMAP_LOCK_PARENT,
> + GMAP_LOCK_SHADOW
> +};
> +
> /**
> * struct gmap_struct - guest address space
> * @list: list head for the mm->context gmap list
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 3cc2765..b17f424 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -198,7 +198,7 @@ static void gmap_free(struct gmap *gmap)
> gmap_radix_tree_free(&gmap->host_to_guest);
>
> /* Free split pmd page tables */
> - spin_lock(&gmap->guest_table_lock);
> + spin_lock_nested(&gmap->guest_table_lock, GMAP_LOCK_PARENT);
> list_for_each_entry_safe(page, next, &gmap->split_list, lru)
> page_table_free_pgste(page);
> spin_unlock(&gmap->guest_table_lock);
> @@ -1385,7 +1385,7 @@ static int gmap_protect_rmap_large(struct gmap *sg, struct gmap_rmap *rmap,
> if (pmd_val(*pmdp) & _SEGMENT_ENTRY_INVALID)
> return -EAGAIN;
>
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> rc = gmap_protect_large(sg->parent, paddr, vmaddr, pmdp, hpmdp,
> prot, GMAP_ENTRY_VSIE);
> if (!rc)
> @@ -1413,7 +1413,7 @@ static int gmap_protect_rmap_pte(struct gmap *sg, struct gmap_rmap *rmap,
> ptep = pte_alloc_map_lock(sg->parent->mm, pmdp, paddr, &ptl);
>
> if (ptep) {
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> rc = ptep_force_prot(sg->parent->mm, paddr, ptep, prot,
> PGSTE_VSIE_BIT);
> if (!rc)
> @@ -1932,7 +1932,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
> /* only allow one real-space gmap shadow */
> list_for_each_entry(sg, &parent->children, list) {
> if (sg->orig_asce & _ASCE_REAL_SPACE) {
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> gmap_unshadow(sg);
> spin_unlock(&sg->guest_table_lock);
> list_del(&sg->list);
> @@ -2004,7 +2004,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
> page->index |= GMAP_SHADOW_FAKE_TABLE;
> s_r2t = (unsigned long *) page_to_phys(page);
> /* Install shadow region second table */
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */
> if (!table) {
> rc = -EAGAIN; /* Race with unshadow */
> @@ -2037,7 +2037,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
> offset = ((r2t & _REGION_ENTRY_OFFSET) >> 6) * PAGE_SIZE;
> len = ((r2t & _REGION_ENTRY_LENGTH) + 1) * PAGE_SIZE - offset;
> rc = gmap_protect_rmap(sg, raddr, origin + offset, len, PROT_READ);
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (!rc) {
> table = gmap_table_walk(sg, saddr, 4);
> if (!table || (*table & _REGION_ENTRY_ORIGIN) !=
> @@ -2088,7 +2088,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
> page->index |= GMAP_SHADOW_FAKE_TABLE;
> s_r3t = (unsigned long *) page_to_phys(page);
> /* Install shadow region second table */
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */
> if (!table) {
> rc = -EAGAIN; /* Race with unshadow */
> @@ -2120,7 +2120,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
> offset = ((r3t & _REGION_ENTRY_OFFSET) >> 6) * PAGE_SIZE;
> len = ((r3t & _REGION_ENTRY_LENGTH) + 1) * PAGE_SIZE - offset;
> rc = gmap_protect_rmap(sg, raddr, origin + offset, len, PROT_READ);
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (!rc) {
> table = gmap_table_walk(sg, saddr, 3);
> if (!table || (*table & _REGION_ENTRY_ORIGIN) !=
> @@ -2171,7 +2171,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
> page->index |= GMAP_SHADOW_FAKE_TABLE;
> s_sgt = (unsigned long *) page_to_phys(page);
> /* Install shadow region second table */
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */
> if (!table) {
> rc = -EAGAIN; /* Race with unshadow */
> @@ -2204,7 +2204,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
> offset = ((sgt & _REGION_ENTRY_OFFSET) >> 6) * PAGE_SIZE;
> len = ((sgt & _REGION_ENTRY_LENGTH) + 1) * PAGE_SIZE - offset;
> rc = gmap_protect_rmap(sg, raddr, origin + offset, len, PROT_READ);
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (!rc) {
> table = gmap_table_walk(sg, saddr, 2);
> if (!table || (*table & _REGION_ENTRY_ORIGIN) !=
> @@ -2260,7 +2260,7 @@ int gmap_shadow_sgt_lookup(struct gmap *sg, unsigned long saddr,
> int rc = -EAGAIN;
>
> BUG_ON(!gmap_is_shadow(sg));
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (sg->asce & _ASCE_TYPE_MASK) {
> /* >2 GB guest */
> r3e = (unsigned long *) gmap_table_walk(sg, saddr, 2);
> @@ -2327,7 +2327,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
> page->index |= GMAP_SHADOW_FAKE_TABLE;
> s_pgt = (unsigned long *) page_to_phys(page);
> /* Install shadow page table */
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
> if (!table) {
> rc = -EAGAIN; /* Race with unshadow */
> @@ -2355,7 +2355,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt,
> raddr = (saddr & _SEGMENT_MASK) | _SHADOW_RMAP_SEGMENT;
> origin = pgt & _SEGMENT_ENTRY_ORIGIN & PAGE_MASK;
> rc = gmap_protect_rmap(sg, raddr, origin, PAGE_SIZE, PROT_READ);
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (!rc) {
> table = gmap_table_walk(sg, saddr, 1);
> if (!table || (*table & _SEGMENT_ENTRY_ORIGIN) !=
> @@ -2417,7 +2417,7 @@ int gmap_shadow_segment(struct gmap *sg, unsigned long saddr, pmd_t pmd)
> if (spmdp) {
> if (!pmd_large(*spmdp))
> BUG();
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> /* Get shadow segment table pointer */
> tpmdp = (pmd_t *) gmap_table_walk(sg, saddr, 1);
> if (!tpmdp) {
> @@ -2513,7 +2513,7 @@ int gmap_shadow_page(struct gmap *sg, unsigned long saddr, pte_t pte)
> rc = -EAGAIN;
> spmdp = gmap_pmd_op_walk(parent, paddr);
> if (spmdp && !(pmd_val(*spmdp) & _SEGMENT_ENTRY_INVALID)) {
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> /* Get page table pointer */
> tptep = (pte_t *) gmap_table_walk(sg, saddr, 0);
> if (!tptep) {
> @@ -2597,7 +2597,7 @@ static void gmap_shadow_notify_pmd(struct gmap *sg, unsigned long vmaddr,
>
> BUG_ON(!gmap_is_shadow(sg));
>
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (sg->removed) {
> spin_unlock(&sg->guest_table_lock);
> return;
> @@ -2658,7 +2658,7 @@ static void gmap_shadow_notify(struct gmap *sg, unsigned long vmaddr,
>
> BUG_ON(!gmap_is_shadow(sg));
>
> - spin_lock(&sg->guest_table_lock);
> + spin_lock_nested(&sg->guest_table_lock, GMAP_LOCK_SHADOW);
> if (sg->removed) {
> spin_unlock(&sg->guest_table_lock);
> return;
> @@ -2899,7 +2899,7 @@ static void gmap_pmdp_clear(struct mm_struct *mm, unsigned long vmaddr,
>
> rcu_read_lock();
> list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
> - spin_lock(&gmap->guest_table_lock);
> + spin_lock_nested(&gmap->guest_table_lock, GMAP_LOCK_PARENT);
> pmdp = (pmd_t *)radix_tree_delete(&gmap->host_to_guest,
> vmaddr >> PMD_SHIFT);
> if (pmdp) {
> @@ -2949,7 +2949,7 @@ void gmap_pmdp_idte_local(struct mm_struct *mm, unsigned long vmaddr)
>
> rcu_read_lock();
> list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
> - spin_lock(&gmap->guest_table_lock);
> + spin_lock_nested(&gmap->guest_table_lock, GMAP_LOCK_PARENT);
> entry = radix_tree_delete(&gmap->host_to_guest,
> vmaddr >> PMD_SHIFT);
> if (entry) {
> @@ -2984,7 +2984,7 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr)
>
> rcu_read_lock();
> list_for_each_entry_rcu(gmap, &mm->context.gmap_list, list) {
> - spin_lock(&gmap->guest_table_lock);
> + spin_lock_nested(&gmap->guest_table_lock, GMAP_LOCK_PARENT);
> entry = radix_tree_delete(&gmap->host_to_guest,
> vmaddr >> PMD_SHIFT);
> if (entry) {
>
--
Thanks,
David / dhildenb