From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dmitry Osipenko <dmitry.osipenko@collabora.com>
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
	Boris Brezillon, Emma Anholt, Melissa Wen
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kernel@collabora.com, virtualization@lists.linux-foundation.org
Subject: [PATCH v18 13/26] drm/shmem-helper: Add drm_gem_shmem_put_pages()
Date: Mon, 30 Oct 2023 02:01:52 +0300
Message-ID: <20231029230205.93277-14-dmitry.osipenko@collabora.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
References: <20231029230205.93277-1-dmitry.osipenko@collabora.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We're going to move away from having an implicit get_pages() done by
get_pages_sgt(), to simplify the refcount handling. Drivers will manage
get/put_pages() by themselves. Add drm_gem_shmem_put_pages().

Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 20 ++++++++++++++++++++
 include/drm/drm_gem_shmem_helper.h     |  1 +
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ca6f422c0dfc..f371ebc6f85c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -217,6 +217,7 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
  * @shmem: shmem GEM object
  *
  * This function decreases the use count and puts the backing pages when use drops to zero.
+ * Caller must hold GEM's reservation lock.
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -227,6 +228,25 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
+/*
+ * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This function decreases the use count and puts the backing pages when use drops to zero.
+ * It's the unlocked version of drm_gem_shmem_put_pages_locked(); the caller must not
+ * hold the GEM's reservation lock.
+ */
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+{
+	if (refcount_dec_not_one(&shmem->pages_use_count))
+		return;
+
+	dma_resv_lock(shmem->base.resv, NULL);
+	drm_gem_shmem_put_pages_locked(shmem);
+	dma_resv_unlock(shmem->base.resv);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
+
 /*
  * drm_gem_shmem_get_pages - Increase use count on the backing pages for a shmem GEM object
  * @shmem: shmem GEM object

diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 45cd293e10a4..6aad3e27d7ee 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -111,6 +111,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
 void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
 
 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
-- 
2.41.0
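
A minimal driver-side sketch of the calling pattern this change expects,
once get_pages_sgt() no longer takes the implicit pages reference. The
foo_bo_populate()/foo_bo_unpopulate() names are hypothetical, and the
exact error handling will depend on the rest of this series:

#include <drm/drm_gem_shmem_helper.h>
#include <linux/err.h>

static int foo_bo_populate(struct drm_gem_shmem_object *shmem)
{
	struct sg_table *sgt;
	int err;

	/* Take an explicit reference on the backing pages. */
	err = drm_gem_shmem_get_pages(shmem);
	if (err)
		return err;

	sgt = drm_gem_shmem_get_pages_sgt(shmem);
	if (IS_ERR(sgt)) {
		/* Drop the reference; the pages are put once it reaches zero. */
		drm_gem_shmem_put_pages(shmem);
		return PTR_ERR(sgt);
	}

	return 0;
}

static void foo_bo_unpopulate(struct drm_gem_shmem_object *shmem)
{
	/* Drop the driver's reference once the pages are no longer used. */
	drm_gem_shmem_put_pages(shmem);
}

Note that drm_gem_shmem_put_pages() takes the reservation lock only for
the final reference: refcount_dec_not_one() decrements pages_use_count
without locking unless the count is 1, in which case the lock is taken
and drm_gem_shmem_put_pages_locked() drops the last reference and
releases the pages.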