From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xie Yongji <xieyongji@bytedance.com>
To: mst@redhat.com, jasowang@redhat.com, stefanha@redhat.com,
	sgarzare@redhat.com, parav@nvidia.com, akpm@linux-foundation.org,
	rdunlap@infradead.org, willy@infradead.org, viro@zeniv.linux.org.uk,
	axboe@kernel.dk, bcrl@kvack.org, corbet@lwn.net
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
	kvm@vger.kernel.org, linux-aio@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [RFC v2 11/13] vduse/iova_domain: Support reclaiming bounce pages
Date: Tue, 22 Dec 2020 22:52:19 +0800
Message-Id: <20201222145221.711-12-xieyongji@bytedance.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201222145221.711-1-xieyongji@bytedance.com>
References: <20201222145221.711-1-xieyongji@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce vduse_domain_reclaim() to support reclaiming bounce pages
when necessary. Reclaiming is done chunk by chunk, and only iova
chunks that are not currently in use are reclaimed.

Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
---
 drivers/vdpa/vdpa_user/iova_domain.c | 83 ++++++++++++++++++++++++++++++++++--
 drivers/vdpa/vdpa_user/iova_domain.h | 10 +++++
 2 files changed, 89 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 27022157abc6..c438cc85d33d 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -29,6 +29,8 @@ struct vduse_mmap_vma {
 	struct list_head list;
 };
 
+struct percpu_counter vduse_total_bounce_pages;
+
 static inline struct page *
 vduse_domain_get_bounce_page(struct vduse_iova_domain *domain,
 				unsigned long iova)
@@ -48,6 +50,13 @@ vduse_domain_set_bounce_page(struct vduse_iova_domain *domain,
 	unsigned long chunkoff = iova & ~IOVA_CHUNK_MASK;
 	unsigned long pgindex = chunkoff >> PAGE_SHIFT;
 
+	if (page) {
+		domain->chunks[index].used_bounce_pages++;
+		percpu_counter_inc(&vduse_total_bounce_pages);
+	} else {
+		domain->chunks[index].used_bounce_pages--;
+		percpu_counter_dec(&vduse_total_bounce_pages);
+	}
 	domain->chunks[index].bounce_pages[pgindex] = page;
 }
 
@@ -175,6 +184,29 @@ void vduse_domain_remove_mapping(struct vduse_iova_domain *domain,
 	}
 }
 
+static bool vduse_domain_try_unmap(struct vduse_iova_domain *domain,
+				unsigned long iova, size_t size)
+{
+	struct vduse_mmap_vma *mmap_vma;
+	unsigned long uaddr;
+	bool unmap = true;
+
+	mutex_lock(&domain->vma_lock);
+	list_for_each_entry(mmap_vma, &domain->vma_list, list) {
+		if (!mmap_read_trylock(mmap_vma->vma->vm_mm)) {
+			unmap = false;
+			break;
+		}
+
+		uaddr = iova + mmap_vma->vma->vm_start;
+		zap_page_range(mmap_vma->vma, uaddr, size);
+		mmap_read_unlock(mmap_vma->vma->vm_mm);
+	}
+	mutex_unlock(&domain->vma_lock);
+
+	return unmap;
+}
+
 void
 vduse_domain_unmap(struct vduse_iova_domain *domain,
 			unsigned long iova, size_t size)
 {
@@ -302,6 +334,32 @@ bool vduse_domain_is_direct_map(struct vduse_iova_domain *domain,
 	return atomic_read(&chunk->map_type) == TYPE_DIRECT_MAP;
 }
 
+int vduse_domain_reclaim(struct vduse_iova_domain *domain)
+{
+	struct vduse_iova_chunk *chunk;
+	int i, freed = 0;
+
+	for (i = domain->chunk_num - 1; i >= 0; i--) {
+		chunk = &domain->chunks[i];
+		if (!chunk->used_bounce_pages)
+			continue;
+
+		if (atomic_cmpxchg(&chunk->state, 0, INT_MIN) != 0)
+			continue;
+
+		if (!vduse_domain_try_unmap(domain,
+			chunk->start, IOVA_CHUNK_SIZE)) {
+			atomic_sub(INT_MIN, &chunk->state);
+			break;
+		}
+		freed += vduse_domain_free_bounce_pages(domain,
+				chunk->start, IOVA_CHUNK_SIZE);
+		atomic_sub(INT_MIN, &chunk->state);
+	}
+
+	return freed;
+}
+
 unsigned long vduse_domain_alloc_iova(struct vduse_iova_domain *domain,
 				size_t size, enum iova_map_type type)
 {
@@ -319,10 +377,13 @@ unsigned long vduse_domain_alloc_iova(struct vduse_iova_domain *domain,
 		if (atomic_read(&chunk->map_type) != type)
 			continue;
 
-		iova = gen_pool_alloc_algo(chunk->pool, size,
+		if (atomic_fetch_inc(&chunk->state) >= 0) {
+			iova = gen_pool_alloc_algo(chunk->pool, size,
 					gen_pool_first_fit_align, &data);
-		if (iova)
-			break;
+			if (iova)
+				break;
+		}
+		atomic_dec(&chunk->state);
 	}
 
 	return iova;
@@ -335,6 +396,7 @@ void vduse_domain_free_iova(struct vduse_iova_domain *domain,
 	struct vduse_iova_chunk *chunk = &domain->chunks[index];
 
 	gen_pool_free(chunk->pool, iova, size);
+	atomic_dec(&chunk->state);
 }
 
 static void vduse_iova_chunk_cleanup(struct vduse_iova_chunk *chunk)
@@ -351,7 +413,8 @@ void vduse_iova_domain_destroy(struct vduse_iova_domain *domain)
 
 	for (i = 0; i < domain->chunk_num; i++) {
 		chunk = &domain->chunks[i];
-		vduse_domain_free_bounce_pages(domain,
+		if (chunk->used_bounce_pages)
+			vduse_domain_free_bounce_pages(domain,
 				chunk->start, IOVA_CHUNK_SIZE);
 		vduse_iova_chunk_cleanup(chunk);
 	}
@@ -390,8 +453,10 @@ static int vduse_iova_chunk_init(struct vduse_iova_chunk *chunk,
 	if (!chunk->iova_map)
 		goto err;
 
+	chunk->used_bounce_pages = 0;
 	chunk->start = addr;
 	atomic_set(&chunk->map_type, TYPE_NONE);
+	atomic_set(&chunk->state, 0);
 
 	return 0;
 err:
@@ -440,3 +505,13 @@ struct vduse_iova_domain *vduse_iova_domain_create(size_t size)
 
 	return NULL;
 }
+
+int vduse_domain_init(void)
+{
+	return percpu_counter_init(&vduse_total_bounce_pages, 0, GFP_KERNEL);
+}
+
+void vduse_domain_exit(void)
+{
+	percpu_counter_destroy(&vduse_total_bounce_pages);
+}
diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
index fe1816287f5f..6815b00629d2 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.h
+++ b/drivers/vdpa/vdpa_user/iova_domain.h
@@ -31,8 +31,10 @@ struct vduse_iova_chunk {
 	struct gen_pool *pool;
 	struct page **bounce_pages;
 	struct vduse_iova_map **iova_map;
+	int used_bounce_pages;
 	unsigned long start;
 	atomic_t map_type;
+	atomic_t state;
 };
 
 struct vduse_iova_domain {
@@ -44,6 +46,8 @@ struct vduse_iova_domain {
 	struct list_head vma_list;
 };
 
+extern struct percpu_counter vduse_total_bounce_pages;
+
 int vduse_domain_add_vma(struct vduse_iova_domain *domain,
 			struct vm_area_struct *vma);
 
@@ -77,6 +81,8 @@ int vduse_domain_bounce_map(struct vduse_iova_domain *domain,
 bool vduse_domain_is_direct_map(struct vduse_iova_domain *domain,
 				unsigned long iova);
 
+int vduse_domain_reclaim(struct vduse_iova_domain *domain);
+
 unsigned long
 vduse_domain_alloc_iova(struct vduse_iova_domain *domain,
 			size_t size, enum iova_map_type type);
 
@@ -90,4 +96,8 @@ void vduse_iova_domain_destroy(struct vduse_iova_domain *domain);
 
 struct vduse_iova_domain *vduse_iova_domain_create(size_t size);
 
+int vduse_domain_init(void);
+
+void vduse_domain_exit(void);
+
 #endif /* _VDUSE_IOVA_DOMAIN_H */
-- 
2.11.0
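
A note on the chunk->state gating used in this patch: vduse_domain_alloc_iova()
takes a reference on a chunk with atomic_fetch_inc() and only proceeds if the
old value was non-negative (dropping the reference again on failure, or later
in vduse_domain_free_iova() on success), while vduse_domain_reclaim() claims a
chunk exclusively by cmpxchg()ing the counter from 0 to INT_MIN, so reclaim can
only win when no allocation holds a reference. The stand-alone C11 sketch below
models just that gate; it is illustrative only, and the names alloc_get(),
alloc_put(), reclaim_try_park() and reclaim_unpark() are not part of this patch
or of any kernel API.

/*
 * Minimal user-space model of the chunk->state gate (illustrative only).
 * Allocators increment the counter and proceed if it was >= 0; the
 * reclaimer "parks" an idle chunk by moving the counter from 0 to INT_MIN
 * so that new allocators back off until it is unparked.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int state;	/* stands in for chunk->state */

/* allocation path: take a reference unless reclaim has parked the chunk */
static bool alloc_get(void)
{
	if (atomic_fetch_add(&state, 1) >= 0)
		return true;			/* reference held */
	atomic_fetch_sub(&state, 1);		/* reclaim in progress, back off */
	return false;
}

/* free path: drop the reference taken by a successful alloc_get() */
static void alloc_put(void)
{
	atomic_fetch_sub(&state, 1);
}

/* reclaim path: succeeds only when no allocation holds a reference */
static bool reclaim_try_park(void)
{
	int expected = 0;

	return atomic_compare_exchange_strong(&state, &expected, INT_MIN);
}

/* reclaim done (or aborted): let allocators in again */
static void reclaim_unpark(void)
{
	atomic_fetch_sub(&state, INT_MIN);
}

int main(void)
{
	if (alloc_get()) {
		printf("reclaim while chunk is in use: %s\n",
		       reclaim_try_park() ? "parked" : "refused");
		alloc_put();
	}
	if (reclaim_try_park()) {
		printf("reclaim on an idle chunk: parked\n");
		reclaim_unpark();
	}
	return 0;
}

The same ordering is why vduse_domain_free_iova() now ends with atomic_dec():
a chunk only becomes reclaimable once every outstanding allocation on it has
been freed.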