From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
    Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
    Dave Airlie, Daniel Almeida, Koen Koning,
    dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
    rust-for-linux@vger.kernel.org, Nikola Djukic, Maarten Lankhorst,
    Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
    Jonathan Corbet, Alex Deucher, Christian König, Jani Nikula,
    Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, Huang Rui,
    Matthew Auld, Matthew Brost, Lucas De Marchi, Thomas Hellström,
    Helge Deller, Alex Gaynor, Boqun Feng, John Hubbard,
    Alistair Popple, Timur Tabi, Edwin Peer, Alexandre Courbot,
    Andrea Righi, Andy Ritger, Zhi Wang, Balbir Singh, Philipp Stanner,
    Elle Rhumsaa, alexeyi@nvidia.com, Eliot Courtney,
    joel@joelfernandes.org, linux-doc@vger.kernel.org,
    amd-gfx@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
    intel-xe@lists.freedesktop.org, linux-fbdev@vger.kernel.org,
    Joel Fernandes
Subject: [reference PATCH v11 2/4] gpu: Move DRM buddy allocator one level up (part two)
Date: Tue, 24 Feb 2026 17:40:03 -0500
Message-Id: <20260224224005.3232841-3-joelagnelf@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260224224005.3232841-1-joelagnelf@nvidia.com>
References: <20260224224005.3232841-1-joelagnelf@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Move the DRM buddy allocator one level up so that it can be used by
GPU drivers (for example, nova-core) that have use cases other than
DRM (such as VFIO vGPU support). Modify the API, structures and
Kconfigs to use "gpu_buddy" terminology, and adapt the drivers and
tests to the new API.

This commit cannot be split further without breaking bisectability;
no functional change is intended. Verified by running the KUnit tests
and by build-testing various configurations.
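A minimal caller-side sketch of the conversion (illustrative only: the
variables below are made up, while the types, functions and flags are
the new API introduced by this patch; the old names come from
<drm/drm_buddy.h>, the new ones from <linux/gpu_buddy.h>):

    /* Before: DRM-specific buddy allocator API. */
    struct drm_buddy mm;
    LIST_HEAD(blocks);

    drm_buddy_init(&mm, size, chunk_size);
    drm_buddy_alloc_blocks(&mm, start, end, alloc_size, min_block_size,
                           &blocks, DRM_BUDDY_RANGE_ALLOCATION);
    drm_buddy_free_list(&mm, &blocks, DRM_BUDDY_CLEARED);
    drm_buddy_fini(&mm);

    /* After: subsystem-neutral GPU buddy allocator API. */
    struct gpu_buddy mm;
    LIST_HEAD(blocks);

    gpu_buddy_init(&mm, size, chunk_size);
    gpu_buddy_alloc_blocks(&mm, start, end, alloc_size, min_block_size,
                           &blocks, GPU_BUDDY_RANGE_ALLOCATION);
    gpu_buddy_free_list(&mm, &blocks, GPU_BUDDY_CLEARED);
    gpu_buddy_fini(&mm);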
Signed-off-by: Joel Fernandes
Reviewed-by: Dave Airlie
[airlied: I've split this into two so git can find copies easier. I've
also just nuked the drm_random library; that stuff needs to be done
elsewhere, and only the buddy tests seem to be using it.]
Signed-off-by: Dave Airlie
---
 Documentation/gpu/drm-mm.rst                  |   6 +
 MAINTAINERS                                   |   8 +-
 drivers/gpu/Kconfig                           |  13 +
 drivers/gpu/Makefile                          |   1 +
 drivers/gpu/buddy.c                           | 556 +++++++++---------
 drivers/gpu/drm/Kconfig                       |   1 +
 drivers/gpu/drm/Makefile                      |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c       |   2 +-
 .../gpu/drm/amd/amdgpu/amdgpu_res_cursor.h    |  12 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c  |  79 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h  |  18 +-
 drivers/gpu/drm/drm_buddy.c                   |  77 +++
 drivers/gpu/drm/i915/i915_scatterlist.c       |   8 +-
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.c |  55 +-
 drivers/gpu/drm/i915/i915_ttm_buddy_manager.h |   4 +-
 .../drm/i915/selftests/intel_memory_region.c  |  20 +-
 .../gpu/drm/ttm/tests/ttm_bo_validate_test.c  |   4 +-
 drivers/gpu/drm/ttm/tests/ttm_mock_manager.c  |  18 +-
 drivers/gpu/drm/ttm/tests/ttm_mock_manager.h  |   2 +-
 drivers/gpu/drm/xe/xe_res_cursor.h            |  34 +-
 drivers/gpu/drm/xe/xe_svm.c                   |  12 +-
 drivers/gpu/drm/xe/xe_ttm_vram_mgr.c          |  71 +--
 drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h    |   2 +-
 drivers/gpu/tests/Makefile                    |   2 +-
 drivers/gpu/tests/gpu_buddy_test.c            | 412 ++++++-------
 drivers/gpu/tests/gpu_random.c                |  16 +-
 drivers/gpu/tests/gpu_random.h                |  18 +-
 drivers/video/Kconfig                         |   1 +
 include/drm/drm_buddy.h                       |  18 +
 include/linux/gpu_buddy.h                     | 120 ++--
 30 files changed, 853 insertions(+), 739 deletions(-)
 create mode 100644 drivers/gpu/Kconfig
 create mode 100644 drivers/gpu/drm/drm_buddy.c
 create mode 100644 include/drm/drm_buddy.h

diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index ceee0e663237..32fb506db05b 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -532,6 +532,12 @@ Buddy Allocator Function References (GPU buddy)
 .. kernel-doc:: drivers/gpu/buddy.c
    :export:
 
+DRM Buddy Specific Logging Function References
+----------------------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
+   :export:
+
 DRM Cache Handling and Fast WC memcpy()
 =======================================
 
diff --git a/MAINTAINERS b/MAINTAINERS
index 0dc318c94c99..f1e5a8011d2f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8905,15 +8905,17 @@ T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	drivers/gpu/drm/ttm/
 F:	include/drm/ttm/
 
-DRM BUDDY ALLOCATOR
+GPU BUDDY ALLOCATOR
 M:	Matthew Auld
 M:	Arun Pravin
 R:	Christian Koenig
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
-F:	drivers/gpu/drm/drm_buddy.c
-F:	drivers/gpu/drm/tests/drm_buddy_test.c
+F:	drivers/gpu/drm_buddy.c
+F:	drivers/gpu/buddy.c
+F:	drivers/gpu/tests/gpu_buddy_test.c
+F:	include/linux/gpu_buddy.h
 F:	include/drm/drm_buddy.h
 
 DRM AUTOMATED TESTING
diff --git a/drivers/gpu/Kconfig b/drivers/gpu/Kconfig
new file mode 100644
index 000000000000..ebb2ad4b7ea0
--- /dev/null
+++ b/drivers/gpu/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0
+
+config GPU_BUDDY
+	bool
+	help
+	  A page based buddy allocator for GPU memory.
+
+config GPU_BUDDY_KUNIT_TEST
+	tristate "KUnit tests for GPU buddy allocator" if !KUNIT_ALL_TESTS
+	depends on GPU_BUDDY && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  KUnit tests for the GPU buddy allocator.
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index c5292ee2c852..5cd54d06e262 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -6,3 +6,4 @@ obj-y += host1x/ drm/ vga/ tests/
 obj-$(CONFIG_IMX_IPUV3_CORE) += ipu-v3/
 obj-$(CONFIG_TRACE_GPU_MEM) += trace/
 obj-$(CONFIG_NOVA_CORE) += nova-core/
+obj-$(CONFIG_GPU_BUDDY) += buddy.o
diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
index 4cc63d961d26..603c59a2013a 100644
--- a/drivers/gpu/buddy.c
+++ b/drivers/gpu/buddy.c
@@ -11,27 +11,17 @@
 #include
 #include
 
-#include
-
-enum drm_buddy_free_tree {
-	DRM_BUDDY_CLEAR_TREE = 0,
-	DRM_BUDDY_DIRTY_TREE,
-	DRM_BUDDY_MAX_FREE_TREES,
-};
-
 static struct kmem_cache *slab_blocks;
 
-#define for_each_free_tree(tree) \
-	for ((tree) = 0; (tree) < DRM_BUDDY_MAX_FREE_TREES; (tree)++)
-
-static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
-					       struct drm_buddy_block *parent,
+static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
+					       struct gpu_buddy_block *parent,
 					       unsigned int order, u64 offset)
 {
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 
-	BUG_ON(order > DRM_BUDDY_MAX_ORDER);
+	BUG_ON(order > GPU_BUDDY_MAX_ORDER);
 
 	block = kmem_cache_zalloc(slab_blocks, GFP_KERNEL);
 	if (!block)
@@ -43,30 +33,30 @@ static struct drm_buddy_block *drm_block_alloc(struct drm_buddy *mm,
 
 	RB_CLEAR_NODE(&block->rb);
 
-	BUG_ON(block->header & DRM_BUDDY_HEADER_UNUSED);
+	BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
 
 	return block;
 }
 
-static void drm_block_free(struct drm_buddy *mm,
-			   struct drm_buddy_block *block)
+static void gpu_block_free(struct gpu_buddy *mm,
+			   struct gpu_buddy_block *block)
 {
 	kmem_cache_free(slab_blocks, block);
 }
 
-static enum drm_buddy_free_tree
-get_block_tree(struct drm_buddy_block *block)
+static enum gpu_buddy_free_tree
+get_block_tree(struct gpu_buddy_block *block)
 {
-	return drm_buddy_block_is_clear(block) ?
-	       DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+	return gpu_buddy_block_is_clear(block) ?
+	       GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
 }
 
-static struct drm_buddy_block *
+static struct gpu_buddy_block *
 rbtree_get_free_block(const struct rb_node *node)
 {
-	return node ? rb_entry(node, struct drm_buddy_block, rb) : NULL;
+	return node ? rb_entry(node, struct gpu_buddy_block, rb) : NULL;
 }
 
-static struct drm_buddy_block *
+static struct gpu_buddy_block *
 rbtree_last_free_block(struct rb_root *root)
 {
 	return rbtree_get_free_block(rb_last(root));
@@ -77,33 +67,33 @@ static bool rbtree_is_empty(struct rb_root *root)
 	return RB_EMPTY_ROOT(root);
 }
 
-static bool drm_buddy_block_offset_less(const struct drm_buddy_block *block,
-					const struct drm_buddy_block *node)
+static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
+					const struct gpu_buddy_block *node)
 {
-	return drm_buddy_block_offset(block) < drm_buddy_block_offset(node);
+	return gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node);
 }
 
 static bool rbtree_block_offset_less(struct rb_node *block,
 				     const struct rb_node *node)
 {
-	return drm_buddy_block_offset_less(rbtree_get_free_block(block),
+	return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
 					   rbtree_get_free_block(node));
 }
 
-static void rbtree_insert(struct drm_buddy *mm,
-			  struct drm_buddy_block *block,
-			  enum drm_buddy_free_tree tree)
+static void rbtree_insert(struct gpu_buddy *mm,
+			  struct gpu_buddy_block *block,
+			  enum gpu_buddy_free_tree tree)
 {
 	rb_add(&block->rb,
-	       &mm->free_trees[tree][drm_buddy_block_order(block)],
+	       &mm->free_trees[tree][gpu_buddy_block_order(block)],
 	       rbtree_block_offset_less);
 }
 
-static void rbtree_remove(struct drm_buddy *mm,
-			  struct drm_buddy_block *block)
+static void rbtree_remove(struct gpu_buddy *mm,
+			  struct gpu_buddy_block *block)
 {
-	unsigned int order = drm_buddy_block_order(block);
-	enum drm_buddy_free_tree tree;
+	unsigned int order = gpu_buddy_block_order(block);
+	enum gpu_buddy_free_tree tree;
 	struct rb_root *root;
 
 	tree = get_block_tree(block);
@@ -113,42 +103,42 @@ static void rbtree_remove(struct drm_buddy *mm,
 	RB_CLEAR_NODE(&block->rb);
 }
 
-static void clear_reset(struct drm_buddy_block *block)
+static void clear_reset(struct gpu_buddy_block *block)
 {
-	block->header &= ~DRM_BUDDY_HEADER_CLEAR;
+	block->header &= ~GPU_BUDDY_HEADER_CLEAR;
 }
 
-static void mark_cleared(struct drm_buddy_block *block)
+static void mark_cleared(struct gpu_buddy_block *block)
 {
-	block->header |= DRM_BUDDY_HEADER_CLEAR;
+	block->header |= GPU_BUDDY_HEADER_CLEAR;
 }
 
-static void mark_allocated(struct drm_buddy *mm,
-			   struct drm_buddy_block *block)
+static void mark_allocated(struct gpu_buddy *mm,
+			   struct gpu_buddy_block *block)
 {
-	block->header &= ~DRM_BUDDY_HEADER_STATE;
-	block->header |= DRM_BUDDY_ALLOCATED;
+	block->header &= ~GPU_BUDDY_HEADER_STATE;
+	block->header |= GPU_BUDDY_ALLOCATED;
 
 	rbtree_remove(mm, block);
 }
 
-static void mark_free(struct drm_buddy *mm,
-		      struct drm_buddy_block *block)
+static void mark_free(struct gpu_buddy *mm,
+		      struct gpu_buddy_block *block)
 {
-	enum drm_buddy_free_tree tree;
+	enum gpu_buddy_free_tree tree;
 
-	block->header &= ~DRM_BUDDY_HEADER_STATE;
-	block->header |= DRM_BUDDY_FREE;
+	block->header &= ~GPU_BUDDY_HEADER_STATE;
+	block->header |= GPU_BUDDY_FREE;
 
 	tree = get_block_tree(block);
 	rbtree_insert(mm, block, tree);
 }
 
-static void mark_split(struct drm_buddy *mm,
-		       struct drm_buddy_block *block)
+static void mark_split(struct gpu_buddy *mm,
+		       struct gpu_buddy_block *block)
 {
-	block->header &= ~DRM_BUDDY_HEADER_STATE;
-	block->header |= DRM_BUDDY_SPLIT;
+	block->header &= ~GPU_BUDDY_HEADER_STATE;
+	block->header |= GPU_BUDDY_SPLIT;
 
 	rbtree_remove(mm, block);
 }
@@ -163,10 +153,10 @@ static inline bool contains(u64 s1, u64 e1, u64 s2, u64 e2)
 	return s1 <= s2 && e1 >= e2;
 }
 
-static struct drm_buddy_block *
-__get_buddy(struct drm_buddy_block *block)
+static struct gpu_buddy_block *
+__get_buddy(struct gpu_buddy_block *block)
 {
-	struct drm_buddy_block *parent;
+	struct gpu_buddy_block *parent;
 
 	parent = block->parent;
 	if (!parent)
@@ -178,19 +168,19 @@ __get_buddy(struct drm_buddy_block *block)
 	return parent->left;
 }
 
-static unsigned int __drm_buddy_free(struct drm_buddy *mm,
-				     struct drm_buddy_block *block,
+static unsigned int __gpu_buddy_free(struct gpu_buddy *mm,
+				     struct gpu_buddy_block *block,
 				     bool force_merge)
 {
-	struct drm_buddy_block *parent;
+	struct gpu_buddy_block *parent;
 	unsigned int order;
 
 	while ((parent = block->parent)) {
-		struct drm_buddy_block *buddy;
+		struct gpu_buddy_block *buddy;
 
 		buddy = __get_buddy(block);
 
-		if (!drm_buddy_block_is_free(buddy))
+		if (!gpu_buddy_block_is_free(buddy))
 			break;
 
 		if (!force_merge) {
@@ -198,31 +188,31 @@ static unsigned int __drm_buddy_free(struct drm_buddy *mm,
 			 * Check the block and its buddy clear state and exit
 			 * the loop if they both have the dissimilar state.
 			 */
-			if (drm_buddy_block_is_clear(block) !=
-			    drm_buddy_block_is_clear(buddy))
+			if (gpu_buddy_block_is_clear(block) !=
+			    gpu_buddy_block_is_clear(buddy))
 				break;
 
-			if (drm_buddy_block_is_clear(block))
+			if (gpu_buddy_block_is_clear(block))
 				mark_cleared(parent);
 		}
 
 		rbtree_remove(mm, buddy);
-		if (force_merge && drm_buddy_block_is_clear(buddy))
-			mm->clear_avail -= drm_buddy_block_size(mm, buddy);
+		if (force_merge && gpu_buddy_block_is_clear(buddy))
+			mm->clear_avail -= gpu_buddy_block_size(mm, buddy);
 
-		drm_block_free(mm, block);
-		drm_block_free(mm, buddy);
+		gpu_block_free(mm, block);
+		gpu_block_free(mm, buddy);
 
 		block = parent;
 	}
 
-	order = drm_buddy_block_order(block);
+	order = gpu_buddy_block_order(block);
 	mark_free(mm, block);
 
 	return order;
 }
 
-static int __force_merge(struct drm_buddy *mm,
+static int __force_merge(struct gpu_buddy *mm,
 			 u64 start,
 			 u64 end,
 			 unsigned int min_order)
@@ -241,7 +231,7 @@ static int __force_merge(struct drm_buddy *mm,
 		struct rb_node *iter = rb_last(&mm->free_trees[tree][i]);
 
 		while (iter) {
-			struct drm_buddy_block *block, *buddy;
+			struct gpu_buddy_block *block, *buddy;
 			u64 block_start, block_end;
 
 			block = rbtree_get_free_block(iter);
@@ -250,18 +240,18 @@ static int __force_merge(struct drm_buddy *mm,
 			if (!block || !block->parent)
 				continue;
 
-			block_start = drm_buddy_block_offset(block);
-			block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+			block_start = gpu_buddy_block_offset(block);
+			block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
 
 			if (!contains(start, end, block_start, block_end))
 				continue;
 
 			buddy = __get_buddy(block);
-			if (!drm_buddy_block_is_free(buddy))
+			if (!gpu_buddy_block_is_free(buddy))
 				continue;
 
-			WARN_ON(drm_buddy_block_is_clear(block) ==
-				drm_buddy_block_is_clear(buddy));
+			WARN_ON(gpu_buddy_block_is_clear(block) ==
+				gpu_buddy_block_is_clear(buddy));
 
 			/*
 			 * Advance to the next node when the current node is the buddy,
@@ -271,10 +261,10 @@ static int __force_merge(struct drm_buddy *mm,
 				iter = rb_prev(iter);
 
 			rbtree_remove(mm, block);
-			if (drm_buddy_block_is_clear(block))
-				mm->clear_avail -= drm_buddy_block_size(mm, block);
+			if (gpu_buddy_block_is_clear(block))
+				mm->clear_avail -= gpu_buddy_block_size(mm, block);
 
-			order = __drm_buddy_free(mm, block, true);
+			order = __gpu_buddy_free(mm, block, true);
 			if (order >= min_order)
 				return 0;
 		}
@@ -285,9 +275,9 @@ static int __force_merge(struct drm_buddy *mm,
 }
 
 /**
- * drm_buddy_init - init memory manager
+ * gpu_buddy_init - init memory manager
 *
- * @mm: DRM buddy manager to initialize
+ * @mm: GPU buddy manager to initialize
 * @size: size in bytes to manage
 * @chunk_size: minimum page size in bytes for our allocations
 *
@@ -296,7 +286,7 @@ static int __force_merge(struct drm_buddy *mm,
 * Returns:
 * 0 on success, error code on failure.
 */
-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size)
 {
 	unsigned int i, j, root_count = 0;
 	u64 offset = 0;
@@ -318,9 +308,9 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 	mm->chunk_size = chunk_size;
 	mm->max_order = ilog2(size) - ilog2(chunk_size);
 
-	BUG_ON(mm->max_order > DRM_BUDDY_MAX_ORDER);
+	BUG_ON(mm->max_order > GPU_BUDDY_MAX_ORDER);
 
-	mm->free_trees = kmalloc_array(DRM_BUDDY_MAX_FREE_TREES,
+	mm->free_trees = kmalloc_array(GPU_BUDDY_MAX_FREE_TREES,
 				       sizeof(*mm->free_trees),
 				       GFP_KERNEL);
 	if (!mm->free_trees)
@@ -340,7 +330,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 	mm->n_roots = hweight64(size);
 
 	mm->roots = kmalloc_array(mm->n_roots,
-				  sizeof(struct drm_buddy_block *),
+				  sizeof(struct gpu_buddy_block *),
 				  GFP_KERNEL);
 	if (!mm->roots)
 		goto out_free_tree;
@@ -350,21 +340,21 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 	 * not itself a power-of-two.
 	 */
 	do {
-		struct drm_buddy_block *root;
+		struct gpu_buddy_block *root;
 		unsigned int order;
 		u64 root_size;
 
 		order = ilog2(size) - ilog2(chunk_size);
 		root_size = chunk_size << order;
 
-		root = drm_block_alloc(mm, NULL, order, offset);
+		root = gpu_block_alloc(mm, NULL, order, offset);
 		if (!root)
 			goto out_free_roots;
 
 		mark_free(mm, root);
 
 		BUG_ON(root_count > mm->max_order);
-		BUG_ON(drm_buddy_block_size(mm, root) < chunk_size);
+		BUG_ON(gpu_buddy_block_size(mm, root) < chunk_size);
 
 		mm->roots[root_count] = root;
 
@@ -377,7 +367,7 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 
 out_free_roots:
 	while (root_count--)
-		drm_block_free(mm, mm->roots[root_count]);
+		gpu_block_free(mm, mm->roots[root_count]);
 	kfree(mm->roots);
 out_free_tree:
 	while (i--)
@@ -385,16 +375,16 @@ int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size)
 	kfree(mm->free_trees);
 	return -ENOMEM;
 }
-EXPORT_SYMBOL(drm_buddy_init);
+EXPORT_SYMBOL(gpu_buddy_init);
 
 /**
- * drm_buddy_fini - tear down the memory manager
+ * gpu_buddy_fini - tear down the memory manager
 *
- * @mm: DRM buddy manager to free
+ * @mm: GPU buddy manager to free
 *
 * Cleanup memory manager resources and the freetree
 */
-void drm_buddy_fini(struct drm_buddy *mm)
+void gpu_buddy_fini(struct gpu_buddy *mm)
 {
 	u64 root_size, size, start;
 	unsigned int order;
@@ -404,13 +394,13 @@ void drm_buddy_fini(struct drm_buddy *mm)
 
 	for (i = 0; i < mm->n_roots; ++i) {
 		order = ilog2(size) - ilog2(mm->chunk_size);
-		start = drm_buddy_block_offset(mm->roots[i]);
+		start = gpu_buddy_block_offset(mm->roots[i]);
 		__force_merge(mm, start, start + size, order);
 
-		if (WARN_ON(!drm_buddy_block_is_free(mm->roots[i])))
+		if (WARN_ON(!gpu_buddy_block_is_free(mm->roots[i])))
 			kunit_fail_current_test("buddy_fini() root");
 
-		drm_block_free(mm, mm->roots[i]);
+		gpu_block_free(mm, mm->roots[i]);
 
 		root_size = mm->chunk_size << order;
 		size -= root_size;
@@ -423,31 +413,31 @@ void drm_buddy_fini(struct drm_buddy *mm)
 	kfree(mm->free_trees);
 	kfree(mm->roots);
 }
-EXPORT_SYMBOL(drm_buddy_fini);
+EXPORT_SYMBOL(gpu_buddy_fini);
 
-static int split_block(struct drm_buddy *mm,
-		       struct drm_buddy_block *block)
+static int split_block(struct gpu_buddy *mm,
+		       struct gpu_buddy_block *block)
 {
-	unsigned int block_order = drm_buddy_block_order(block) - 1;
-	u64 offset = drm_buddy_block_offset(block);
+	unsigned int block_order = gpu_buddy_block_order(block) - 1;
+	u64 offset = gpu_buddy_block_offset(block);
 
-	BUG_ON(!drm_buddy_block_is_free(block));
-	BUG_ON(!drm_buddy_block_order(block));
+	BUG_ON(!gpu_buddy_block_is_free(block));
+	BUG_ON(!gpu_buddy_block_order(block));
 
-	block->left = drm_block_alloc(mm, block, block_order, offset);
+	block->left = gpu_block_alloc(mm, block, block_order, offset);
 	if (!block->left)
 		return -ENOMEM;
 
-	block->right = drm_block_alloc(mm, block, block_order,
+	block->right = gpu_block_alloc(mm, block, block_order,
 				       offset + (mm->chunk_size << block_order));
 	if (!block->right) {
-		drm_block_free(mm, block->left);
+		gpu_block_free(mm, block->left);
 		return -ENOMEM;
 	}
 
 	mark_split(mm, block);
 
-	if (drm_buddy_block_is_clear(block)) {
+	if (gpu_buddy_block_is_clear(block)) {
 		mark_cleared(block->left);
 		mark_cleared(block->right);
 		clear_reset(block);
@@ -460,34 +450,34 @@ static int split_block(struct drm_buddy *mm,
 }
 
 /**
- * drm_get_buddy - get buddy address
+ * gpu_get_buddy - get buddy address
 *
- * @block: DRM buddy block
+ * @block: GPU buddy block
 *
 * Returns the corresponding buddy block for @block, or NULL
 * if this is a root block and can't be merged further.
 * Requires some kind of locking to protect against
 * any concurrent allocate and free operations.
 */
-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block)
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block)
 {
 	return __get_buddy(block);
 }
-EXPORT_SYMBOL(drm_get_buddy);
+EXPORT_SYMBOL(gpu_get_buddy);
 
 /**
- * drm_buddy_reset_clear - reset blocks clear state
+ * gpu_buddy_reset_clear - reset blocks clear state
 *
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
 * @is_clear: blocks clear state
 *
 * Reset the clear state based on @is_clear value for each block
 * in the freetree.
 */
-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear)
 {
-	enum drm_buddy_free_tree src_tree, dst_tree;
+	enum gpu_buddy_free_tree src_tree, dst_tree;
 	u64 root_size, size, start;
 	unsigned int order;
 	int i;
@@ -495,60 +485,60 @@ void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
 	size = mm->size;
 	for (i = 0; i < mm->n_roots; ++i) {
 		order = ilog2(size) - ilog2(mm->chunk_size);
-		start = drm_buddy_block_offset(mm->roots[i]);
+		start = gpu_buddy_block_offset(mm->roots[i]);
 		__force_merge(mm, start, start + size, order);
 
 		root_size = mm->chunk_size << order;
 		size -= root_size;
 	}
 
-	src_tree = is_clear ? DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
-	dst_tree = is_clear ? DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+	src_tree = is_clear ? GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
+	dst_tree = is_clear ? GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
 
 	for (i = 0; i <= mm->max_order; ++i) {
 		struct rb_root *root = &mm->free_trees[src_tree][i];
-		struct drm_buddy_block *block, *tmp;
+		struct gpu_buddy_block *block, *tmp;
 
 		rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
 			rbtree_remove(mm, block);
 			if (is_clear) {
 				mark_cleared(block);
-				mm->clear_avail += drm_buddy_block_size(mm, block);
+				mm->clear_avail += gpu_buddy_block_size(mm, block);
 			} else {
 				clear_reset(block);
-				mm->clear_avail -= drm_buddy_block_size(mm, block);
+				mm->clear_avail -= gpu_buddy_block_size(mm, block);
 			}
 
 			rbtree_insert(mm, block, dst_tree);
 		}
 	}
 }
-EXPORT_SYMBOL(drm_buddy_reset_clear);
+EXPORT_SYMBOL(gpu_buddy_reset_clear);
 
 /**
- * drm_buddy_free_block - free a block
+ * gpu_buddy_free_block - free a block
 *
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
 * @block: block to be freed
 */
-void drm_buddy_free_block(struct drm_buddy *mm,
-			  struct drm_buddy_block *block)
+void gpu_buddy_free_block(struct gpu_buddy *mm,
+			  struct gpu_buddy_block *block)
 {
-	BUG_ON(!drm_buddy_block_is_allocated(block));
-	mm->avail += drm_buddy_block_size(mm, block);
-	if (drm_buddy_block_is_clear(block))
-		mm->clear_avail += drm_buddy_block_size(mm, block);
+	BUG_ON(!gpu_buddy_block_is_allocated(block));
+	mm->avail += gpu_buddy_block_size(mm, block);
+	if (gpu_buddy_block_is_clear(block))
+		mm->clear_avail += gpu_buddy_block_size(mm, block);
 
-	__drm_buddy_free(mm, block, false);
+	__gpu_buddy_free(mm, block, false);
 }
-EXPORT_SYMBOL(drm_buddy_free_block);
+EXPORT_SYMBOL(gpu_buddy_free_block);
 
-static void __drm_buddy_free_list(struct drm_buddy *mm,
+static void __gpu_buddy_free_list(struct gpu_buddy *mm,
 				  struct list_head *objects,
 				  bool mark_clear,
 				  bool mark_dirty)
 {
-	struct drm_buddy_block *block, *on;
+	struct gpu_buddy_block *block, *on;
 
 	WARN_ON(mark_dirty && mark_clear);
 
@@ -557,13 +547,13 @@ static void __drm_buddy_free_list(struct drm_buddy *mm,
 			mark_cleared(block);
 		else if (mark_dirty)
 			clear_reset(block);
-		drm_buddy_free_block(mm, block);
+		gpu_buddy_free_block(mm, block);
 		cond_resched();
 	}
 	INIT_LIST_HEAD(objects);
 }
 
-static void drm_buddy_free_list_internal(struct drm_buddy *mm,
+static void gpu_buddy_free_list_internal(struct gpu_buddy *mm,
 					 struct list_head *objects)
 {
 	/*
	 * Don't touch the clear/dirty bit, since allocation is still internal
	 * at this point. For example we might have just failed part of the
	 * allocation.
	 */
-	__drm_buddy_free_list(mm, objects, false, false);
+	__gpu_buddy_free_list(mm, objects, false, false);
 }
 
 /**
- * drm_buddy_free_list - free blocks
+ * gpu_buddy_free_list - free blocks
 *
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
 * @objects: input list head to free blocks
- * @flags: optional flags like DRM_BUDDY_CLEARED
+ * @flags: optional flags like GPU_BUDDY_CLEARED
 */
-void drm_buddy_free_list(struct drm_buddy *mm,
+void gpu_buddy_free_list(struct gpu_buddy *mm,
 			 struct list_head *objects,
 			 unsigned int flags)
 {
-	bool mark_clear = flags & DRM_BUDDY_CLEARED;
+	bool mark_clear = flags & GPU_BUDDY_CLEARED;
 
-	__drm_buddy_free_list(mm, objects, mark_clear, !mark_clear);
+	__gpu_buddy_free_list(mm, objects, mark_clear, !mark_clear);
 }
-EXPORT_SYMBOL(drm_buddy_free_list);
+EXPORT_SYMBOL(gpu_buddy_free_list);
 
-static bool block_incompatible(struct drm_buddy_block *block, unsigned int flags)
+static bool block_incompatible(struct gpu_buddy_block *block, unsigned int flags)
 {
-	bool needs_clear = flags & DRM_BUDDY_CLEAR_ALLOCATION;
+	bool needs_clear = flags & GPU_BUDDY_CLEAR_ALLOCATION;
 
-	return needs_clear != drm_buddy_block_is_clear(block);
+	return needs_clear != gpu_buddy_block_is_clear(block);
 }
 
-static struct drm_buddy_block *
-__alloc_range_bias(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__alloc_range_bias(struct gpu_buddy *mm,
 		   u64 start, u64 end,
 		   unsigned int order,
 		   unsigned long flags,
 		   bool fallback)
 {
 	u64 req_size = mm->chunk_size << order;
-	struct drm_buddy_block *block;
-	struct drm_buddy_block *buddy;
+	struct gpu_buddy_block *block;
+	struct gpu_buddy_block *buddy;
 	LIST_HEAD(dfs);
 	int err;
 	int i;
@@ -622,23 +612,23 @@ __alloc_range_bias(struct drm_buddy *mm,
 		u64 block_end;
 
 		block = list_first_entry_or_null(&dfs,
-						 struct drm_buddy_block,
+						 struct gpu_buddy_block,
 						 tmp_link);
 		if (!block)
 			break;
 
 		list_del(&block->tmp_link);
 
-		if (drm_buddy_block_order(block) < order)
+		if (gpu_buddy_block_order(block) < order)
 			continue;
 
-		block_start = drm_buddy_block_offset(block);
-		block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+		block_start = gpu_buddy_block_offset(block);
+		block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
 
 		if (!overlaps(start, end, block_start, block_end))
 			continue;
 
-		if (drm_buddy_block_is_allocated(block))
+		if (gpu_buddy_block_is_allocated(block))
 			continue;
 
 		if (block_start < start || block_end > end) {
@@ -654,17 +644,17 @@ __alloc_range_bias(struct drm_buddy *mm,
 			continue;
 
 		if (contains(start, end, block_start, block_end) &&
-		    order == drm_buddy_block_order(block)) {
+		    order == gpu_buddy_block_order(block)) {
 			/*
			 * Find the free block within the range.
			 */
-			if (drm_buddy_block_is_free(block))
+			if (gpu_buddy_block_is_free(block))
 				return block;
 
 			continue;
 		}
 
-		if (!drm_buddy_block_is_split(block)) {
+		if (!gpu_buddy_block_is_split(block)) {
 			err = split_block(mm, block);
 			if (unlikely(err))
 				goto err_undo;
@@ -684,19 +674,19 @@ __alloc_range_bias(struct drm_buddy *mm,
	 */
 	buddy = __get_buddy(block);
 	if (buddy &&
-	    (drm_buddy_block_is_free(block) &&
-	     drm_buddy_block_is_free(buddy)))
-		__drm_buddy_free(mm, block, false);
+	    (gpu_buddy_block_is_free(block) &&
+	     gpu_buddy_block_is_free(buddy)))
+		__gpu_buddy_free(mm, block, false);
 	return ERR_PTR(err);
 }
 
-static struct drm_buddy_block *
-__drm_buddy_alloc_range_bias(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_range_bias(struct gpu_buddy *mm,
 			     u64 start, u64 end,
 			     unsigned int order,
 			     unsigned long flags)
 {
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	bool fallback = false;
 
 	block = __alloc_range_bias(mm, start, end, order,
@@ -708,12 +698,12 @@ __drm_buddy_alloc_range_bias(struct drm_buddy *mm,
 	return block;
 }
 
-static struct drm_buddy_block *
-get_maxblock(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+get_maxblock(struct gpu_buddy *mm,
 	     unsigned int order,
-	     enum drm_buddy_free_tree tree)
+	     enum gpu_buddy_free_tree tree)
 {
-	struct drm_buddy_block *max_block = NULL, *block = NULL;
+	struct gpu_buddy_block *max_block = NULL, *block = NULL;
 	struct rb_root *root;
 	unsigned int i;
 
@@ -728,8 +718,8 @@ get_maxblock(struct drm_buddy *mm,
 			continue;
 		}
 
-		if (drm_buddy_block_offset(block) >
-		    drm_buddy_block_offset(max_block)) {
+		if (gpu_buddy_block_offset(block) >
+		    gpu_buddy_block_offset(max_block)) {
 			max_block = block;
 		}
 	}
@@ -737,25 +727,25 @@ get_maxblock(struct drm_buddy *mm,
 	return max_block;
 }
 
-static struct drm_buddy_block *
-alloc_from_freetree(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+alloc_from_freetree(struct gpu_buddy *mm,
 		    unsigned int order,
 		    unsigned long flags)
 {
-	struct drm_buddy_block *block = NULL;
+	struct gpu_buddy_block *block = NULL;
 	struct rb_root *root;
-	enum drm_buddy_free_tree tree;
+	enum gpu_buddy_free_tree tree;
 	unsigned int tmp;
 	int err;
 
-	tree = (flags & DRM_BUDDY_CLEAR_ALLOCATION) ?
-	       DRM_BUDDY_CLEAR_TREE : DRM_BUDDY_DIRTY_TREE;
+	tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
+	       GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
 
-	if (flags & DRM_BUDDY_TOPDOWN_ALLOCATION) {
+	if (flags & GPU_BUDDY_TOPDOWN_ALLOCATION) {
 		block = get_maxblock(mm, order, tree);
 		if (block)
 			/* Store the obtained block order */
-			tmp = drm_buddy_block_order(block);
+			tmp = gpu_buddy_block_order(block);
 	} else {
 		for (tmp = order; tmp <= mm->max_order; ++tmp) {
 			/* Get RB tree root for this order and tree */
@@ -768,8 +758,8 @@ alloc_from_freetree(struct drm_buddy *mm,
 
 	if (!block) {
 		/* Try allocating from the other tree */
-		tree = (tree == DRM_BUDDY_CLEAR_TREE) ?
-			DRM_BUDDY_DIRTY_TREE : DRM_BUDDY_CLEAR_TREE;
+		tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
+			GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
 
 		for (tmp = order; tmp <= mm->max_order; ++tmp) {
 			root = &mm->free_trees[tree][tmp];
@@ -782,7 +772,7 @@ alloc_from_freetree(struct drm_buddy *mm,
 			return ERR_PTR(-ENOSPC);
 	}
 
-	BUG_ON(!drm_buddy_block_is_free(block));
+	BUG_ON(!gpu_buddy_block_is_free(block));
 
 	while (tmp != order) {
 		err = split_block(mm, block);
@@ -796,18 +786,18 @@ alloc_from_freetree(struct drm_buddy *mm,
 
 err_undo:
 	if (tmp != order)
-		__drm_buddy_free(mm, block, false);
+		__gpu_buddy_free(mm, block, false);
 	return ERR_PTR(err);
 }
 
-static int __alloc_range(struct drm_buddy *mm,
+static int __alloc_range(struct gpu_buddy *mm,
 			 struct list_head *dfs,
 			 u64 start, u64 size,
 			 struct list_head *blocks,
 			 u64 *total_allocated_on_err)
 {
-	struct drm_buddy_block *block;
-	struct drm_buddy_block *buddy;
+	struct gpu_buddy_block *block;
+	struct gpu_buddy_block *buddy;
 	u64 total_allocated = 0;
 	LIST_HEAD(allocated);
 	u64 end;
@@ -820,31 +810,31 @@ static int __alloc_range(struct drm_buddy *mm,
 		u64 block_end;
 
 		block = list_first_entry_or_null(dfs,
-						 struct drm_buddy_block,
+						 struct gpu_buddy_block,
 						 tmp_link);
 		if (!block)
 			break;
 
 		list_del(&block->tmp_link);
 
-		block_start = drm_buddy_block_offset(block);
-		block_end = block_start + drm_buddy_block_size(mm, block) - 1;
+		block_start = gpu_buddy_block_offset(block);
+		block_end = block_start + gpu_buddy_block_size(mm, block) - 1;
 
 		if (!overlaps(start, end, block_start, block_end))
 			continue;
 
-		if (drm_buddy_block_is_allocated(block)) {
+		if (gpu_buddy_block_is_allocated(block)) {
 			err = -ENOSPC;
 			goto err_free;
 		}
 
 		if (contains(start, end, block_start, block_end)) {
-			if (drm_buddy_block_is_free(block)) {
+			if (gpu_buddy_block_is_free(block)) {
 				mark_allocated(mm, block);
-				total_allocated += drm_buddy_block_size(mm, block);
-				mm->avail -= drm_buddy_block_size(mm, block);
-				if (drm_buddy_block_is_clear(block))
-					mm->clear_avail -= drm_buddy_block_size(mm, block);
+				total_allocated += gpu_buddy_block_size(mm, block);
+				mm->avail -= gpu_buddy_block_size(mm, block);
+				if (gpu_buddy_block_is_clear(block))
+					mm->clear_avail -= gpu_buddy_block_size(mm, block);
 				list_add_tail(&block->link, &allocated);
 				continue;
 			} else if (!mm->clear_avail) {
 				err = -ENOSPC;
 				goto err_free;
 			}
 		}
 
-		if (!drm_buddy_block_is_split(block)) {
+		if (!gpu_buddy_block_is_split(block)) {
 			err = split_block(mm, block);
 			if (unlikely(err))
 				goto err_undo;
@@ -880,22 +870,22 @@ static int __alloc_range(struct drm_buddy *mm,
	 */
 	buddy = __get_buddy(block);
 	if (buddy &&
-	    (drm_buddy_block_is_free(block) &&
-	     drm_buddy_block_is_free(buddy)))
-		__drm_buddy_free(mm, block, false);
+	    (gpu_buddy_block_is_free(block) &&
+	     gpu_buddy_block_is_free(buddy)))
+		__gpu_buddy_free(mm, block, false);
 
 err_free:
 	if (err == -ENOSPC && total_allocated_on_err) {
 		list_splice_tail(&allocated, blocks);
 		*total_allocated_on_err = total_allocated;
 	} else {
-		drm_buddy_free_list_internal(mm, &allocated);
+		gpu_buddy_free_list_internal(mm, &allocated);
 	}
 
 	return err;
 }
 
-static int __drm_buddy_alloc_range(struct drm_buddy *mm,
+static int __gpu_buddy_alloc_range(struct gpu_buddy *mm,
 				   u64 start,
 				   u64 size,
 				   u64 *total_allocated_on_err,
@@ -911,13 +901,13 @@ static int __drm_buddy_alloc_range(struct drm_buddy *mm,
 			     blocks, total_allocated_on_err);
 }
 
-static int __alloc_contig_try_harder(struct drm_buddy *mm,
+static int __alloc_contig_try_harder(struct gpu_buddy *mm,
 				     u64 size,
 				     u64 min_block_size,
 				     struct list_head *blocks)
 {
 	u64 rhs_offset, lhs_offset, lhs_size, filled;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	unsigned int tree, order;
 	LIST_HEAD(blocks_lhs);
 	unsigned long pages;
@@ -943,8 +933,8 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 		block = rbtree_get_free_block(iter);
 
 		/* Allocate blocks traversing RHS */
-		rhs_offset = drm_buddy_block_offset(block);
-		err = __drm_buddy_alloc_range(mm, rhs_offset, size,
+		rhs_offset = gpu_buddy_block_offset(block);
+		err = __gpu_buddy_alloc_range(mm, rhs_offset, size,
 					      &filled, blocks);
 		if (!err || err != -ENOSPC)
 			return err;
@@ -954,18 +944,18 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 		lhs_size = round_up(lhs_size, min_block_size);
 
 		/* Allocate blocks traversing LHS */
-		lhs_offset = drm_buddy_block_offset(block) - lhs_size;
-		err = __drm_buddy_alloc_range(mm, lhs_offset, lhs_size,
+		lhs_offset = gpu_buddy_block_offset(block) - lhs_size;
+		err = __gpu_buddy_alloc_range(mm, lhs_offset, lhs_size,
 					      NULL, &blocks_lhs);
 		if (!err) {
 			list_splice(&blocks_lhs, blocks);
 			return 0;
 		} else if (err != -ENOSPC) {
-			drm_buddy_free_list_internal(mm, blocks);
+			gpu_buddy_free_list_internal(mm, blocks);
 			return err;
 		}
 		/* Free blocks for the next iteration */
-		drm_buddy_free_list_internal(mm, blocks);
+		gpu_buddy_free_list_internal(mm, blocks);
 
 		iter = rb_prev(iter);
 	}
@@ -975,9 +965,9 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 }
 
 /**
- * drm_buddy_block_trim - free unused pages
+ * gpu_buddy_block_trim - free unused pages
 *
- * @mm: DRM buddy manager
+ * @mm: GPU buddy manager
 * @start: start address to begin the trimming.
 * @new_size: original size requested
 * @blocks: Input and output list of allocated blocks.
@@ -993,13 +983,13 @@ static int __alloc_contig_try_harder(struct drm_buddy *mm,
 * Returns:
 * 0 on success, error code on failure.
 */
-int drm_buddy_block_trim(struct drm_buddy *mm,
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
 			 u64 *start,
 			 u64 new_size,
 			 struct list_head *blocks)
 {
-	struct drm_buddy_block *parent;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *parent;
+	struct gpu_buddy_block *block;
 	u64 block_start, block_end;
 	LIST_HEAD(dfs);
 	u64 new_start;
@@ -1009,22 +999,22 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
 		return -EINVAL;
 
 	block = list_first_entry(blocks,
-				 struct drm_buddy_block,
+				 struct gpu_buddy_block,
 				 link);
 
-	block_start = drm_buddy_block_offset(block);
-	block_end = block_start + drm_buddy_block_size(mm, block);
+	block_start = gpu_buddy_block_offset(block);
+	block_end = block_start + gpu_buddy_block_size(mm, block);
 
-	if (WARN_ON(!drm_buddy_block_is_allocated(block)))
+	if (WARN_ON(!gpu_buddy_block_is_allocated(block)))
 		return -EINVAL;
 
-	if (new_size > drm_buddy_block_size(mm, block))
+	if (new_size > gpu_buddy_block_size(mm, block))
 		return -EINVAL;
 
 	if (!new_size || !IS_ALIGNED(new_size, mm->chunk_size))
 		return -EINVAL;
 
-	if (new_size == drm_buddy_block_size(mm, block))
+	if (new_size == gpu_buddy_block_size(mm, block))
 		return 0;
 
 	new_start = block_start;
@@ -1043,9 +1033,9 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
 	list_del(&block->link);
 	mark_free(mm, block);
-	mm->avail += drm_buddy_block_size(mm, block);
-	if (drm_buddy_block_is_clear(block))
-		mm->clear_avail += drm_buddy_block_size(mm, block);
+	mm->avail += gpu_buddy_block_size(mm, block);
+	if (gpu_buddy_block_is_clear(block))
+		mm->clear_avail += gpu_buddy_block_size(mm, block);
 
 	/* Prevent recursively freeing this node */
 	parent = block->parent;
@@ -1055,26 +1045,26 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
 	err = __alloc_range(mm, &dfs, new_start, new_size, blocks, NULL);
 	if (err) {
 		mark_allocated(mm, block);
-		mm->avail -= drm_buddy_block_size(mm, block);
-		if (drm_buddy_block_is_clear(block))
-			mm->clear_avail -= drm_buddy_block_size(mm, block);
+		mm->avail -= gpu_buddy_block_size(mm, block);
+		if (gpu_buddy_block_is_clear(block))
+			mm->clear_avail -= gpu_buddy_block_size(mm, block);
 		list_add(&block->link, blocks);
 	}
 
 	block->parent = parent;
 	return err;
 }
-EXPORT_SYMBOL(drm_buddy_block_trim);
+EXPORT_SYMBOL(gpu_buddy_block_trim);
 
-static struct drm_buddy_block *
-__drm_buddy_alloc_blocks(struct drm_buddy *mm,
+static struct gpu_buddy_block *
+__gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 			 u64 start, u64 end,
 			 unsigned int order,
 			 unsigned long flags)
 {
-	if (flags & DRM_BUDDY_RANGE_ALLOCATION)
+	if (flags & GPU_BUDDY_RANGE_ALLOCATION)
 		/* Allocate traversing within the range */
-		return __drm_buddy_alloc_range_bias(mm, start, end,
+		return __gpu_buddy_alloc_range_bias(mm, start, end,
 						    order, flags);
 	else
 		/* Allocate from freetree */
@@ -1082,15 +1072,15 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
 }
 
 /**
- * drm_buddy_alloc_blocks - allocate power-of-two blocks
+ * gpu_buddy_alloc_blocks - allocate power-of-two blocks
 *
- * @mm: DRM buddy manager to allocate from
+ * @mm: GPU buddy manager to allocate from
 * @start: start of the allowed range for this block
 * @end: end of the allowed range for this block
 * @size: size of the allocation in bytes
 * @min_block_size: alignment of the allocation
 * @blocks: output list head to add allocated blocks
- * @flags: DRM_BUDDY_*_ALLOCATION flags
+ * @flags: GPU_BUDDY_*_ALLOCATION flags
 *
 * alloc_range_bias() called on range limitations, which traverses
 * the tree and returns the desired block.
@@ -1101,13 +1091,13 @@ __drm_buddy_alloc_blocks(struct drm_buddy *mm,
 * Returns:
 * 0 on success, error code on failure.
 */
-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 			   u64 start, u64 end, u64 size,
 			   u64 min_block_size,
 			   struct list_head *blocks,
 			   unsigned long flags)
 {
-	struct drm_buddy_block *block = NULL;
+	struct gpu_buddy_block *block = NULL;
 	u64 original_size, original_min_size;
 	unsigned int min_order, order;
 	LIST_HEAD(allocated);
@@ -1137,14 +1127,14 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 		if (!IS_ALIGNED(start | end, min_block_size))
 			return -EINVAL;
 
-		return __drm_buddy_alloc_range(mm, start, size, NULL, blocks);
+		return __gpu_buddy_alloc_range(mm, start, size, NULL, blocks);
 	}
 
 	original_size = size;
 	original_min_size = min_block_size;
 
 	/* Roundup the size to power of 2 */
-	if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) {
+	if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
 		size = roundup_pow_of_two(size);
 		min_block_size = size;
 	/* Align size value to min_block_size */
@@ -1157,8 +1147,8 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 	min_order = ilog2(min_block_size) - ilog2(mm->chunk_size);
 
 	if (order > mm->max_order || size > mm->size) {
-		if ((flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) &&
-		    !(flags & DRM_BUDDY_RANGE_ALLOCATION))
+		if ((flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) &&
+		    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
 			return __alloc_contig_try_harder(mm,
 							 original_size,
 							 original_min_size,
 							 blocks);
@@ -1171,7 +1161,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 	BUG_ON(order < min_order);
 
 	do {
-		block = __drm_buddy_alloc_blocks(mm, start,
+		block = __gpu_buddy_alloc_blocks(mm, start,
 						 end,
 						 order,
 						 flags);
@@ -1182,7 +1172,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 
 		/* Try allocation through force merge method */
 		if (mm->clear_avail &&
 		    !__force_merge(mm, start, end, min_order)) {
-			block = __drm_buddy_alloc_blocks(mm, start,
+			block = __gpu_buddy_alloc_blocks(mm, start,
 							 end,
 							 min_order,
 							 flags);
@@ -1196,8 +1186,8 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
		 * Try contiguous block allocation through
		 * try harder method.
		 */
-		if (flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION &&
-		    !(flags & DRM_BUDDY_RANGE_ALLOCATION))
+		if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
+		    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
 			return __alloc_contig_try_harder(mm,
 							 original_size,
 							 original_min_size,
@@ -1208,9 +1198,9 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 	} while (1);
 
 	mark_allocated(mm, block);
-	mm->avail -= drm_buddy_block_size(mm, block);
-	if (drm_buddy_block_is_clear(block))
-		mm->clear_avail -= drm_buddy_block_size(mm, block);
+	mm->avail -= gpu_buddy_block_size(mm, block);
+	if (gpu_buddy_block_is_clear(block))
+		mm->clear_avail -= gpu_buddy_block_size(mm, block);
 	kmemleak_update_trace(block);
 	list_add_tail(&block->link, &allocated);
@@ -1221,7 +1211,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 	} while (1);
 
 	/* Trim the allocated block to the required size */
-	if (!(flags & DRM_BUDDY_TRIM_DISABLE) &&
+	if (!(flags & GPU_BUDDY_TRIM_DISABLE) &&
 	    original_size != size) {
 		struct list_head *trim_list;
 		LIST_HEAD(temp);
@@ -1234,11 +1224,11 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 			block = list_last_entry(&allocated, typeof(*block), link);
 			list_move(&block->link, &temp);
 			trim_list = &temp;
-			trim_size = drm_buddy_block_size(mm, block) -
+			trim_size = gpu_buddy_block_size(mm, block) -
 				    (size - original_size);
 		}
 
-		drm_buddy_block_trim(mm,
+		gpu_buddy_block_trim(mm,
 				     NULL,
 				     trim_size,
 				     trim_list);
@@ -1251,44 +1241,42 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
 	return 0;
 
 err_free:
-	drm_buddy_free_list_internal(mm, &allocated);
+	gpu_buddy_free_list_internal(mm, &allocated);
 	return err;
 }
-EXPORT_SYMBOL(drm_buddy_alloc_blocks);
+EXPORT_SYMBOL(gpu_buddy_alloc_blocks);
 
 /**
- * drm_buddy_block_print - print block information
+ * gpu_buddy_block_print - print block information
 *
- * @mm: DRM buddy manager
- * @block: DRM buddy block
- * @p: DRM printer to use
+ * @mm: GPU buddy manager
+ * @block: GPU buddy block
 */
-void drm_buddy_block_print(struct drm_buddy *mm,
-			   struct drm_buddy_block *block,
-			   struct drm_printer *p)
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+			   struct gpu_buddy_block *block)
 {
-	u64 start = drm_buddy_block_offset(block);
-	u64 size = drm_buddy_block_size(mm, block);
+	u64 start = gpu_buddy_block_offset(block);
+	u64 size = gpu_buddy_block_size(mm, block);
 
-	drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size);
+	pr_info("%#018llx-%#018llx: %llu\n", start, start + size, size);
 }
-EXPORT_SYMBOL(drm_buddy_block_print);
+EXPORT_SYMBOL(gpu_buddy_block_print);
 
 /**
- * drm_buddy_print - print allocator state
+ * gpu_buddy_print - print allocator state
 *
- * @mm: DRM buddy manager
- * @p: DRM printer to use
+ * @mm: GPU buddy manager
+ * @p: GPU printer to use
 */
-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
+void gpu_buddy_print(struct gpu_buddy *mm)
 {
 	int order;
 
-	drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
-		   mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
+	pr_info("chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n",
+		mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20);
 
 	for (order = mm->max_order; order >= 0; order--) {
-		struct drm_buddy_block *block, *tmp;
+		struct gpu_buddy_block *block, *tmp;
 		struct rb_root *root;
 		u64 count = 0, free;
 		unsigned int tree;
@@ -1297,40 +1285,38 @@ void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p)
 			root = &mm->free_trees[tree][order];
 			rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) {
-				BUG_ON(!drm_buddy_block_is_free(block));
+				BUG_ON(!gpu_buddy_block_is_free(block));
 				count++;
 			}
 		}
 
-		drm_printf(p, "order-%2d ", order);
-
 		free = count * (mm->chunk_size << order);
 		if (free < SZ_1M)
-			drm_printf(p, "free: %8llu KiB", free >> 10);
+			pr_info("order-%2d free: %8llu KiB, blocks: %llu\n",
+				order, free >> 10, count);
 		else
-			drm_printf(p, "free: %8llu MiB", free >> 20);
-
-		drm_printf(p, ", blocks: %llu\n", count);
+			pr_info("order-%2d free: %8llu MiB, blocks: %llu\n",
+				order, free >> 20, count);
 	}
 }
-EXPORT_SYMBOL(drm_buddy_print);
+EXPORT_SYMBOL(gpu_buddy_print);
 
-static void drm_buddy_module_exit(void)
+static void gpu_buddy_module_exit(void)
 {
 	kmem_cache_destroy(slab_blocks);
 }
 
-static int __init drm_buddy_module_init(void)
+static int __init gpu_buddy_module_init(void)
 {
-	slab_blocks = KMEM_CACHE(drm_buddy_block, 0);
+	slab_blocks = KMEM_CACHE(gpu_buddy_block, 0);
 	if (!slab_blocks)
 		return -ENOMEM;
 
 	return 0;
 }
 
-module_init(drm_buddy_module_init);
-module_exit(drm_buddy_module_exit);
+module_init(gpu_buddy_module_init);
+module_exit(gpu_buddy_module_exit);
 
-MODULE_DESCRIPTION("DRM Buddy Allocator");
+MODULE_DESCRIPTION("GPU Buddy Allocator");
 MODULE_LICENSE("Dual MIT/GPL");
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ca2a2801e77f..f48d00fe28cc 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -220,6 +220,7 @@ config DRM_GPUSVM
 config DRM_BUDDY
 	tristate
 	depends on DRM
+	select GPU_BUDDY
 	help
 	  A page based buddy allocator
 
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 5c86bc908955..6ff0f6f10b58 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -113,7 +113,7 @@ drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \
 
 obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
 
-obj-$(CONFIG_DRM_BUDDY) += ../buddy.o
+obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
 
 drm_dma_helper-y := drm_gem_dma_helper.o
 drm_dma_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fbdev_dma.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index f582113d78b7..149f8f942eae 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -5663,7 +5663,7 @@ int amdgpu_ras_add_critical_region(struct amdgpu_device *adev,
 	struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
 	struct amdgpu_vram_mgr_resource *vres;
 	struct ras_critical_region *region;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	int ret = 0;
 
 	if (!bo || !bo->tbo.resource)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
index be2e56ce1355..8908d9e08a30 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
@@ -55,7 +55,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 				    uint64_t start, uint64_t size,
 				    struct amdgpu_res_cursor *cur)
 {
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	struct list_head *head, *next;
 	struct drm_mm_node *node;
 
@@ -71,7 +71,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 		head = &to_amdgpu_vram_mgr_resource(res)->blocks;
 
 		block = list_first_entry_or_null(head,
-						 struct drm_buddy_block,
+						 struct gpu_buddy_block,
 						 link);
 		if (!block)
 			goto fallback;
@@ -81,7 +81,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 			next = block->link.next;
 			if (next != head)
-				block = list_entry(next, struct drm_buddy_block, link);
+				block = list_entry(next, struct gpu_buddy_block, link);
 		}
 
 		cur->start = amdgpu_vram_mgr_block_start(block) + start;
@@ -125,7 +125,7 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
 */
static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
{
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	struct drm_mm_node *node;
 	struct list_head *next;
 
@@ -146,7 +146,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
 		block = cur->node;
 
 		next = block->link.next;
-		block = list_entry(next, struct drm_buddy_block, link);
+		block = list_entry(next, struct gpu_buddy_block, link);
 
 		cur->node = block;
 		cur->start = amdgpu_vram_mgr_block_start(block);
@@ -175,7 +175,7 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
 */
static inline bool amdgpu_res_cleared(struct amdgpu_res_cursor *cur)
{
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 
 	switch (cur->mem_type) {
 	case TTM_PL_VRAM:
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 9d934c07fa6b..cd94f6efb7cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 #include "amdgpu.h"
 #include "amdgpu_vm.h"
@@ -52,15 +53,15 @@ to_amdgpu_device(struct amdgpu_vram_mgr *mgr)
 	return container_of(mgr, struct amdgpu_device, mman.vram_mgr);
 }
 
-static inline struct drm_buddy_block *
+static inline struct gpu_buddy_block *
 amdgpu_vram_mgr_first_block(struct list_head *list)
 {
-	return list_first_entry_or_null(list, struct drm_buddy_block, link);
+	return list_first_entry_or_null(list, struct gpu_buddy_block, link);
 }
 
 static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
 {
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	u64 start, size;
 
 	block = amdgpu_vram_mgr_first_block(head);
@@ -71,7 +72,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
 		start = amdgpu_vram_mgr_block_start(block);
 		size = amdgpu_vram_mgr_block_size(block);
 
-		block = list_entry(block->link.next, struct drm_buddy_block, link);
+		block = list_entry(block->link.next, struct gpu_buddy_block, link);
 		if (start + size != amdgpu_vram_mgr_block_start(block))
 			return false;
 	}
@@ -81,7 +82,7 @@ static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
 
 static inline u64 amdgpu_vram_mgr_blocks_size(struct list_head *head)
 {
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	u64 size = 0;
 
 	list_for_each_entry(block, head, link)
@@ -254,7 +255,7 @@ const struct attribute_group amdgpu_vram_mgr_attr_group = {
 * Calculate how many bytes of the DRM BUDDY block are inside visible VRAM
 */
static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
-				    struct drm_buddy_block *block)
+				    struct gpu_buddy_block *block)
{
	u64 start = amdgpu_vram_mgr_block_start(block);
	u64 end = start + amdgpu_vram_mgr_block_size(block);
@@ -279,7 +280,7 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct ttm_resource *res = bo->tbo.resource;
 	struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	u64 usage = 0;
 
 	if (amdgpu_gmc_vram_full_visible(&adev->gmc))
@@ -299,15 +300,15 @@ static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
{
	struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
	struct amdgpu_device *adev = to_amdgpu_device(mgr);
-	struct drm_buddy *mm = &mgr->mm;
+	struct gpu_buddy *mm = &mgr->mm;
	struct amdgpu_vram_reservation *rsv, *temp;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
	uint64_t vis_usage;
 
	list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, blocks) {
-		if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
+		if (gpu_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
 					   rsv->size, mm->chunk_size, &rsv->allocated,
-					   DRM_BUDDY_RANGE_ALLOCATION))
+					   GPU_BUDDY_RANGE_ALLOCATION))
 			continue;
 
 		block = amdgpu_vram_mgr_first_block(&rsv->allocated);
@@ -403,7 +404,7 @@ int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr,
 	uint64_t address, struct amdgpu_vram_block_info *info)
{
	struct amdgpu_vram_mgr_resource *vres;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
	u64 start, size;
	int ret = -ENOENT;
 
@@ -450,8 +451,8 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	struct amdgpu_vram_mgr_resource *vres;
 	u64 size, remaining_size, lpfn, fpfn;
 	unsigned int adjust_dcc_size = 0;
-	struct drm_buddy *mm = &mgr->mm;
-	struct drm_buddy_block *block;
+	struct gpu_buddy *mm = &mgr->mm;
+	struct gpu_buddy_block *block;
 	unsigned long pages_per_block;
 	int r;
 
@@ -493,17 +494,17 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	INIT_LIST_HEAD(&vres->blocks);
 
 	if (place->flags & TTM_PL_FLAG_TOPDOWN)
-		vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
+		vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION;
 
 	if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS)
-		vres->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+		vres->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION;
 
 	if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CLEARED)
-		vres->flags |= DRM_BUDDY_CLEAR_ALLOCATION;
+		vres->flags |= GPU_BUDDY_CLEAR_ALLOCATION;
 
 	if (fpfn || lpfn != mgr->mm.size)
 		/* Allocate blocks in desired range */
-		vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
+		vres->flags |= GPU_BUDDY_RANGE_ALLOCATION;
 
 	if (bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC &&
 	    adev->gmc.gmc_funcs->get_dcc_alignment)
@@ -516,7 +517,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 		dcc_size = roundup_pow_of_two(vres->base.size + adjust_dcc_size);
 		remaining_size = (u64)dcc_size;
 
-		vres->flags |= DRM_BUDDY_TRIM_DISABLE;
+		vres->flags |= GPU_BUDDY_TRIM_DISABLE;
 	}
 
 	mutex_lock(&mgr->lock);
@@ -536,7 +537,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 
 		BUG_ON(min_block_size < mm->chunk_size);
 
-		r = drm_buddy_alloc_blocks(mm, fpfn,
+		r = gpu_buddy_alloc_blocks(mm, fpfn,
 					   lpfn,
 					   size,
 					   min_block_size,
@@ -545,7 +546,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 		if (unlikely(r == -ENOSPC) && pages_per_block == ~0ul &&
 		    !(place->flags & TTM_PL_FLAG_CONTIGUOUS)) {
-			vres->flags &= ~DRM_BUDDY_CONTIGUOUS_ALLOCATION;
+			vres->flags &= ~GPU_BUDDY_CONTIGUOUS_ALLOCATION;
 			pages_per_block = max_t(u32, 2UL << (20UL - PAGE_SHIFT),
 						tbo->page_alignment);
 
@@ -566,7 +567,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 	list_add_tail(&vres->vres_node, &mgr->allocated_vres_list);
 
 	if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
-		struct drm_buddy_block *dcc_block;
+		struct gpu_buddy_block *dcc_block;
 		unsigned long dcc_start;
 		u64 trim_start;
 
@@ -576,7 +577,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
 			roundup((unsigned long)amdgpu_vram_mgr_block_start(dcc_block),
 				adjust_dcc_size);
 		trim_start = (u64)dcc_start;
-		drm_buddy_block_trim(mm,
&trim_start, + gpu_buddy_block_trim(mm, &trim_start, (u64)vres->base.size, &vres->blocks); } @@ -614,7 +615,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man, return 0; error_free_blocks: - drm_buddy_free_list(mm, &vres->blocks, 0); + gpu_buddy_free_list(mm, &vres->blocks, 0); mutex_unlock(&mgr->lock); error_fini: ttm_resource_fini(man, &vres->base); @@ -637,8 +638,8 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man, struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res); struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); struct amdgpu_device *adev = to_amdgpu_device(mgr); - struct drm_buddy *mm = &mgr->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = &mgr->mm; + struct gpu_buddy_block *block; uint64_t vis_usage = 0; mutex_lock(&mgr->lock); @@ -649,7 +650,7 @@ static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man, list_for_each_entry(block, &vres->blocks, link) vis_usage += amdgpu_vram_mgr_vis_size(adev, block); - drm_buddy_free_list(mm, &vres->blocks, vres->flags); + gpu_buddy_free_list(mm, &vres->blocks, vres->flags); amdgpu_vram_mgr_do_reserve(man); mutex_unlock(&mgr->lock); @@ -688,7 +689,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev, if (!*sgt) return -ENOMEM; - /* Determine the number of DRM_BUDDY blocks to export */ + /* Determine the number of GPU_BUDDY blocks to export */ amdgpu_res_first(res, offset, length, &cursor); while (cursor.remaining) { num_entries++; @@ -704,10 +705,10 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev, sg->length = 0; /* - * Walk down DRM_BUDDY blocks to populate scatterlist nodes - * @note: Use iterator api to get first the DRM_BUDDY block + * Walk down GPU_BUDDY blocks to populate scatterlist nodes + * @note: Use iterator api to get first the GPU_BUDDY block * and the number of bytes from it. 
Access the following - * DRM_BUDDY block(s) if more buffer needs to exported + * GPU_BUDDY block(s) if more buffer needs to be exported */ amdgpu_res_first(res, offset, length, &cursor); for_each_sgtable_sg((*sgt), sg, i) { @@ -792,10 +793,10 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr) void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev) { struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr; - struct drm_buddy *mm = &mgr->mm; + struct gpu_buddy *mm = &mgr->mm; mutex_lock(&mgr->lock); - drm_buddy_reset_clear(mm, false); + gpu_buddy_reset_clear(mm, false); mutex_unlock(&mgr->lock); } @@ -815,7 +816,7 @@ static bool amdgpu_vram_mgr_intersects(struct ttm_resource_manager *man, size_t size) { struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res); - struct drm_buddy_block *block; + struct gpu_buddy_block *block; /* Check each drm buddy block individually */ list_for_each_entry(block, &mgr->blocks, link) { @@ -848,7 +849,7 @@ static bool amdgpu_vram_mgr_compatible(struct ttm_resource_manager *man, size_t size) { struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res); - struct drm_buddy_block *block; + struct gpu_buddy_block *block; /* Check each drm buddy block individually */ list_for_each_entry(block, &mgr->blocks, link) { @@ -877,7 +878,7 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man, struct drm_printer *printer) { struct amdgpu_vram_mgr *mgr = to_vram_mgr(man); - struct drm_buddy *mm = &mgr->mm; + struct gpu_buddy *mm = &mgr->mm; struct amdgpu_vram_reservation *rsv; drm_printf(printer, " vis usage:%llu\n", @@ -930,7 +931,7 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev) mgr->default_page_size = PAGE_SIZE; man->func = &amdgpu_vram_mgr_func; - err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE); + err = gpu_buddy_init(&mgr->mm, man->size, PAGE_SIZE); if (err) return err; @@ -965,11 +966,11 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev) kfree(rsv); list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, blocks) { - drm_buddy_free_list(&mgr->mm, &rsv->allocated, 0); + gpu_buddy_free_list(&mgr->mm, &rsv->allocated, 0); kfree(rsv); } if (!adev->gmc.is_app_apu) - drm_buddy_fini(&mgr->mm); + gpu_buddy_fini(&mgr->mm); mutex_unlock(&mgr->lock); ttm_resource_manager_cleanup(man); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h index 874779618056..429a21a2e9b2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h @@ -28,7 +28,7 @@ struct amdgpu_vram_mgr { struct ttm_resource_manager manager; - struct drm_buddy mm; + struct gpu_buddy mm; /* protects access to buffer objects */ struct mutex lock; struct list_head reservations_pending; @@ -57,19 +57,19 @@ struct amdgpu_vram_mgr_resource { struct amdgpu_vres_task task; }; -static inline u64 amdgpu_vram_mgr_block_start(struct drm_buddy_block *block) +static inline u64 amdgpu_vram_mgr_block_start(struct gpu_buddy_block *block) { - return drm_buddy_block_offset(block); + return gpu_buddy_block_offset(block); } -static inline u64 amdgpu_vram_mgr_block_size(struct drm_buddy_block *block) +static inline u64 amdgpu_vram_mgr_block_size(struct gpu_buddy_block *block) { - return (u64)PAGE_SIZE << drm_buddy_block_order(block); + return (u64)PAGE_SIZE << gpu_buddy_block_order(block); } -static inline bool amdgpu_vram_mgr_is_cleared(struct drm_buddy_block *block) +static inline bool amdgpu_vram_mgr_is_cleared(struct gpu_buddy_block *block) { -
return drm_buddy_block_is_clear(block); + return gpu_buddy_block_is_clear(block); } static inline struct amdgpu_vram_mgr_resource * @@ -82,8 +82,8 @@ static inline void amdgpu_vram_mgr_set_cleared(struct ttm_resource *res) { struct amdgpu_vram_mgr_resource *ares = to_amdgpu_vram_mgr_resource(res); - WARN_ON(ares->flags & DRM_BUDDY_CLEARED); - ares->flags |= DRM_BUDDY_CLEARED; + WARN_ON(ares->flags & GPU_BUDDY_CLEARED); + ares->flags |= GPU_BUDDY_CLEARED; } int amdgpu_vram_mgr_query_address_block_info(struct amdgpu_vram_mgr *mgr, diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c new file mode 100644 index 000000000000..841f3de5f307 --- /dev/null +++ b/drivers/gpu/drm/drm_buddy.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: MIT +/* + * Copyright © 2021 Intel Corporation + */ + +#include + +#include +#include +#include +#include + +#include +#include +#include + +/** + * drm_buddy_block_print - print block information + * + * @mm: DRM buddy manager + * @block: DRM buddy block + * @p: DRM printer to use + */ +void drm_buddy_block_print(struct gpu_buddy *mm, + struct gpu_buddy_block *block, + struct drm_printer *p) +{ + u64 start = gpu_buddy_block_offset(block); + u64 size = gpu_buddy_block_size(mm, block); + + drm_printf(p, "%#018llx-%#018llx: %llu\n", start, start + size, size); +} +EXPORT_SYMBOL(drm_buddy_block_print); + +/** + * drm_buddy_print - print allocator state + * + * @mm: DRM buddy manager + * @p: DRM printer to use + */ +void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p) +{ + int order; + + drm_printf(p, "chunk_size: %lluKiB, total: %lluMiB, free: %lluMiB, clear_free: %lluMiB\n", + mm->chunk_size >> 10, mm->size >> 20, mm->avail >> 20, mm->clear_avail >> 20); + + for (order = mm->max_order; order >= 0; order--) { + struct gpu_buddy_block *block, *tmp; + struct rb_root *root; + u64 count = 0, free; + unsigned int tree; + + for_each_free_tree(tree) { + root = &mm->free_trees[tree][order]; + + rbtree_postorder_for_each_entry_safe(block, tmp, root, rb) { + BUG_ON(!gpu_buddy_block_is_free(block)); + count++; + } + } + + drm_printf(p, "order-%2d ", order); + + free = count * (mm->chunk_size << order); + if (free < SZ_1M) + drm_printf(p, "free: %8llu KiB", free >> 10); + else + drm_printf(p, "free: %8llu MiB", free >> 20); + + drm_printf(p, ", blocks: %llu\n", count); + } +} +EXPORT_SYMBOL(drm_buddy_print); + +MODULE_DESCRIPTION("DRM-specific GPU Buddy Allocator Print Helpers"); +MODULE_LICENSE("Dual MIT/GPL"); diff --git a/drivers/gpu/drm/i915/i915_scatterlist.c b/drivers/gpu/drm/i915/i915_scatterlist.c index 30246f02bcfe..6a34dae13769 100644 --- a/drivers/gpu/drm/i915/i915_scatterlist.c +++ b/drivers/gpu/drm/i915/i915_scatterlist.c @@ -167,9 +167,9 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); const u64 size = res->size; const u32 max_segment = round_down(UINT_MAX, page_alignment); - struct drm_buddy *mm = bman_res->mm; + struct gpu_buddy *mm = bman_res->mm; struct list_head *blocks = &bman_res->blocks; - struct drm_buddy_block *block; + struct gpu_buddy_block *block; struct i915_refct_sgt *rsgt; struct scatterlist *sg; struct sg_table *st; @@ -202,8 +202,8 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, list_for_each_entry(block, blocks, link) { u64 block_size, offset; - block_size = min_t(u64, size, drm_buddy_block_size(mm, block)); - offset = drm_buddy_block_offset(block); + block_size = min_t(u64, size, 
gpu_buddy_block_size(mm, block)); + offset = gpu_buddy_block_offset(block); while (block_size) { u64 len; diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c index 6b256d95badd..c5ca90088705 100644 --- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c +++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c @@ -6,6 +6,7 @@ #include #include +#include #include #include #include @@ -16,7 +17,7 @@ struct i915_ttm_buddy_manager { struct ttm_resource_manager manager; - struct drm_buddy mm; + struct gpu_buddy mm; struct list_head reserved; struct mutex lock; unsigned long visible_size; @@ -38,7 +39,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man, { struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); struct i915_ttm_buddy_resource *bman_res; - struct drm_buddy *mm = &bman->mm; + struct gpu_buddy *mm = &bman->mm; unsigned long n_pages, lpfn; u64 min_page_size; u64 size; @@ -57,13 +58,13 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man, bman_res->mm = mm; if (place->flags & TTM_PL_FLAG_TOPDOWN) - bman_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION; + bman_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION; if (place->flags & TTM_PL_FLAG_CONTIGUOUS) - bman_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION; + bman_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION; if (place->fpfn || lpfn != man->size) - bman_res->flags |= DRM_BUDDY_RANGE_ALLOCATION; + bman_res->flags |= GPU_BUDDY_RANGE_ALLOCATION; GEM_BUG_ON(!bman_res->base.size); size = bman_res->base.size; @@ -89,7 +90,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man, goto err_free_res; } - err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT, + err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT, (u64)lpfn << PAGE_SHIFT, (u64)n_pages << PAGE_SHIFT, min_page_size, @@ -101,15 +102,15 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man, if (lpfn <= bman->visible_size) { bman_res->used_visible_size = PFN_UP(bman_res->base.size); } else { - struct drm_buddy_block *block; + struct gpu_buddy_block *block; list_for_each_entry(block, &bman_res->blocks, link) { unsigned long start = - drm_buddy_block_offset(block) >> PAGE_SHIFT; + gpu_buddy_block_offset(block) >> PAGE_SHIFT; if (start < bman->visible_size) { unsigned long end = start + - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT); bman_res->used_visible_size += min(end, bman->visible_size) - start; @@ -126,7 +127,7 @@ static int i915_ttm_buddy_man_alloc(struct ttm_resource_manager *man, return 0; err_free_blocks: - drm_buddy_free_list(mm, &bman_res->blocks, 0); + gpu_buddy_free_list(mm, &bman_res->blocks, 0); mutex_unlock(&bman->lock); err_free_res: ttm_resource_fini(man, &bman_res->base); @@ -141,7 +142,7 @@ static void i915_ttm_buddy_man_free(struct ttm_resource_manager *man, struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); mutex_lock(&bman->lock); - drm_buddy_free_list(&bman->mm, &bman_res->blocks, 0); + gpu_buddy_free_list(&bman->mm, &bman_res->blocks, 0); bman->visible_avail += bman_res->used_visible_size; mutex_unlock(&bman->lock); @@ -156,8 +157,8 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man, { struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); - struct drm_buddy *mm = &bman->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = &bman->mm; + struct 
gpu_buddy_block *block; if (!place->fpfn && !place->lpfn) return true; @@ -176,9 +177,9 @@ static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man, /* Check each drm buddy block individually */ list_for_each_entry(block, &bman_res->blocks, link) { unsigned long fpfn = - drm_buddy_block_offset(block) >> PAGE_SHIFT; + gpu_buddy_block_offset(block) >> PAGE_SHIFT; unsigned long lpfn = fpfn + - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT); if (place->fpfn < lpfn && place->lpfn > fpfn) return true; @@ -194,8 +195,8 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man, { struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res); struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); - struct drm_buddy *mm = &bman->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = &bman->mm; + struct gpu_buddy_block *block; if (!place->fpfn && !place->lpfn) return true; @@ -209,9 +210,9 @@ static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man, /* Check each drm buddy block individually */ list_for_each_entry(block, &bman_res->blocks, link) { unsigned long fpfn = - drm_buddy_block_offset(block) >> PAGE_SHIFT; + gpu_buddy_block_offset(block) >> PAGE_SHIFT; unsigned long lpfn = fpfn + - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT); if (fpfn < place->fpfn || lpfn > place->lpfn) return false; @@ -224,7 +225,7 @@ static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man, struct drm_printer *printer) { struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); - struct drm_buddy_block *block; + struct gpu_buddy_block *block; mutex_lock(&bman->lock); drm_printf(printer, "default_page_size: %lluKiB\n", @@ -293,7 +294,7 @@ int i915_ttm_buddy_man_init(struct ttm_device *bdev, if (!bman) return -ENOMEM; - err = drm_buddy_init(&bman->mm, size, chunk_size); + err = gpu_buddy_init(&bman->mm, size, chunk_size); if (err) goto err_free_bman; @@ -333,7 +334,7 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type) { struct ttm_resource_manager *man = ttm_manager_type(bdev, type); struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); - struct drm_buddy *mm = &bman->mm; + struct gpu_buddy *mm = &bman->mm; int ret; ttm_resource_manager_set_used(man, false); @@ -345,8 +346,8 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev, unsigned int type) ttm_set_driver_manager(bdev, type, NULL); mutex_lock(&bman->lock); - drm_buddy_free_list(mm, &bman->reserved, 0); - drm_buddy_fini(mm); + gpu_buddy_free_list(mm, &bman->reserved, 0); + gpu_buddy_fini(mm); bman->visible_avail += bman->visible_reserved; WARN_ON_ONCE(bman->visible_avail != bman->visible_size); mutex_unlock(&bman->lock); @@ -371,15 +372,15 @@ int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man, u64 start, u64 size) { struct i915_ttm_buddy_manager *bman = to_buddy_manager(man); - struct drm_buddy *mm = &bman->mm; + struct gpu_buddy *mm = &bman->mm; unsigned long fpfn = start >> PAGE_SHIFT; unsigned long flags = 0; int ret; - flags |= DRM_BUDDY_RANGE_ALLOCATION; + flags |= GPU_BUDDY_RANGE_ALLOCATION; mutex_lock(&bman->lock); - ret = drm_buddy_alloc_blocks(mm, start, + ret = gpu_buddy_alloc_blocks(mm, start, start + size, size, mm->chunk_size, &bman->reserved, diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h index d64620712830..1cff018c1689 100644 --- 
a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h +++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h @@ -13,7 +13,7 @@ struct ttm_device; struct ttm_resource_manager; -struct drm_buddy; +struct gpu_buddy; /** * struct i915_ttm_buddy_resource @@ -33,7 +33,7 @@ struct i915_ttm_buddy_resource { struct list_head blocks; unsigned long flags; unsigned long used_visible_size; - struct drm_buddy *mm; + struct gpu_buddy *mm; }; /** diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c index 7b856b5090f9..8307390943a2 100644 --- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c +++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c @@ -6,7 +6,7 @@ #include #include -#include +#include #include "../i915_selftest.h" @@ -371,7 +371,7 @@ static int igt_mock_splintered_region(void *arg) struct drm_i915_private *i915 = mem->i915; struct i915_ttm_buddy_resource *res; struct drm_i915_gem_object *obj; - struct drm_buddy *mm; + struct gpu_buddy *mm; unsigned int expected_order; LIST_HEAD(objects); u64 size; @@ -447,8 +447,8 @@ static int igt_mock_max_segment(void *arg) struct drm_i915_private *i915 = mem->i915; struct i915_ttm_buddy_resource *res; struct drm_i915_gem_object *obj; - struct drm_buddy_block *block; - struct drm_buddy *mm; + struct gpu_buddy_block *block; + struct gpu_buddy *mm; struct list_head *blocks; struct scatterlist *sg; I915_RND_STATE(prng); @@ -487,8 +487,8 @@ static int igt_mock_max_segment(void *arg) mm = res->mm; size = 0; list_for_each_entry(block, blocks, link) { - if (drm_buddy_block_size(mm, block) > size) - size = drm_buddy_block_size(mm, block); + if (gpu_buddy_block_size(mm, block) > size) + size = gpu_buddy_block_size(mm, block); } if (size < max_segment) { pr_err("%s: Failed to create a huge contiguous block [> %u], largest block %lld\n", @@ -527,14 +527,14 @@ static u64 igt_object_mappable_total(struct drm_i915_gem_object *obj) struct intel_memory_region *mr = obj->mm.region; struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(obj->mm.res); - struct drm_buddy *mm = bman_res->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = bman_res->mm; + struct gpu_buddy_block *block; u64 total; total = 0; list_for_each_entry(block, &bman_res->blocks, link) { - u64 start = drm_buddy_block_offset(block); - u64 end = start + drm_buddy_block_size(mm, block); + u64 start = gpu_buddy_block_offset(block); + u64 end = start + gpu_buddy_block_size(mm, block); if (start < resource_size(&mr->io)) total += min_t(u64, end, resource_size(&mr->io)) - start; diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c index 6d95447a989d..e32f3c8d7b84 100644 --- a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c +++ b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c @@ -251,7 +251,7 @@ static void ttm_bo_validate_basic(struct kunit *test) NULL, &dummy_ttm_bo_destroy); KUNIT_EXPECT_EQ(test, err, 0); - snd_place = ttm_place_kunit_init(test, snd_mem, DRM_BUDDY_TOPDOWN_ALLOCATION); + snd_place = ttm_place_kunit_init(test, snd_mem, GPU_BUDDY_TOPDOWN_ALLOCATION); snd_placement = ttm_placement_kunit_init(test, snd_place, 1); err = ttm_bo_validate(bo, snd_placement, &ctx_val); @@ -263,7 +263,7 @@ static void ttm_bo_validate_basic(struct kunit *test) KUNIT_EXPECT_TRUE(test, ttm_tt_is_populated(bo->ttm)); KUNIT_EXPECT_EQ(test, bo->resource->mem_type, snd_mem); KUNIT_EXPECT_EQ(test, bo->resource->placement, - DRM_BUDDY_TOPDOWN_ALLOCATION); + 
GPU_BUDDY_TOPDOWN_ALLOCATION); ttm_bo_fini(bo); ttm_mock_manager_fini(priv->ttm_dev, snd_mem); diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c index dd395229e388..294d56d9067e 100644 --- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c +++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.c @@ -31,7 +31,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man, { struct ttm_mock_manager *manager = to_mock_mgr(man); struct ttm_mock_resource *mock_res; - struct drm_buddy *mm = &manager->mm; + struct gpu_buddy *mm = &manager->mm; u64 lpfn, fpfn, alloc_size; int err; @@ -47,14 +47,14 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man, INIT_LIST_HEAD(&mock_res->blocks); if (place->flags & TTM_PL_FLAG_TOPDOWN) - mock_res->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION; + mock_res->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION; if (place->flags & TTM_PL_FLAG_CONTIGUOUS) - mock_res->flags |= DRM_BUDDY_CONTIGUOUS_ALLOCATION; + mock_res->flags |= GPU_BUDDY_CONTIGUOUS_ALLOCATION; alloc_size = (uint64_t)mock_res->base.size; mutex_lock(&manager->lock); - err = drm_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size, + err = gpu_buddy_alloc_blocks(mm, fpfn, lpfn, alloc_size, manager->default_page_size, &mock_res->blocks, mock_res->flags); @@ -67,7 +67,7 @@ static int ttm_mock_manager_alloc(struct ttm_resource_manager *man, return 0; error_free_blocks: - drm_buddy_free_list(mm, &mock_res->blocks, 0); + gpu_buddy_free_list(mm, &mock_res->blocks, 0); ttm_resource_fini(man, &mock_res->base); mutex_unlock(&manager->lock); @@ -79,10 +79,10 @@ static void ttm_mock_manager_free(struct ttm_resource_manager *man, { struct ttm_mock_manager *manager = to_mock_mgr(man); struct ttm_mock_resource *mock_res = to_mock_mgr_resource(res); - struct drm_buddy *mm = &manager->mm; + struct gpu_buddy *mm = &manager->mm; mutex_lock(&manager->lock); - drm_buddy_free_list(mm, &mock_res->blocks, 0); + gpu_buddy_free_list(mm, &mock_res->blocks, 0); mutex_unlock(&manager->lock); ttm_resource_fini(man, res); @@ -106,7 +106,7 @@ int ttm_mock_manager_init(struct ttm_device *bdev, u32 mem_type, u32 size) mutex_init(&manager->lock); - err = drm_buddy_init(&manager->mm, size, PAGE_SIZE); + err = gpu_buddy_init(&manager->mm, size, PAGE_SIZE); if (err) { kfree(manager); @@ -142,7 +142,7 @@ void ttm_mock_manager_fini(struct ttm_device *bdev, u32 mem_type) ttm_resource_manager_set_used(man, false); mutex_lock(&mock_man->lock); - drm_buddy_fini(&mock_man->mm); + gpu_buddy_fini(&mock_man->mm); mutex_unlock(&mock_man->lock); ttm_set_driver_manager(bdev, mem_type, NULL); diff --git a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h index 96ea8c9aae34..08710756fd8e 100644 --- a/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h +++ b/drivers/gpu/drm/ttm/tests/ttm_mock_manager.h @@ -9,7 +9,7 @@ struct ttm_mock_manager { struct ttm_resource_manager man; - struct drm_buddy mm; + struct gpu_buddy mm; u64 default_page_size; /* protects allocations of mock buffer objects */ struct mutex lock; diff --git a/drivers/gpu/drm/xe/xe_res_cursor.h b/drivers/gpu/drm/xe/xe_res_cursor.h index 4e00008b7081..5f4ab08c0686 100644 --- a/drivers/gpu/drm/xe/xe_res_cursor.h +++ b/drivers/gpu/drm/xe/xe_res_cursor.h @@ -58,7 +58,7 @@ struct xe_res_cursor { /** @dma_addr: Current element in a struct drm_pagemap_addr array */ const struct drm_pagemap_addr *dma_addr; /** @mm: Buddy allocator for VRAM cursor */ - struct drm_buddy *mm; + struct gpu_buddy *mm; /** * @dma_start: DMA 
start address for the current segment. * This may be different to @dma_addr.addr since elements in @@ -69,7 +69,7 @@ struct xe_res_cursor { u64 dma_seg_size; }; -static struct drm_buddy *xe_res_get_buddy(struct ttm_resource *res) +static struct gpu_buddy *xe_res_get_buddy(struct ttm_resource *res) { struct ttm_resource_manager *mgr; @@ -104,30 +104,30 @@ static inline void xe_res_first(struct ttm_resource *res, case XE_PL_STOLEN: case XE_PL_VRAM0: case XE_PL_VRAM1: { - struct drm_buddy_block *block; + struct gpu_buddy_block *block; struct list_head *head, *next; - struct drm_buddy *mm = xe_res_get_buddy(res); + struct gpu_buddy *mm = xe_res_get_buddy(res); head = &to_xe_ttm_vram_mgr_resource(res)->blocks; block = list_first_entry_or_null(head, - struct drm_buddy_block, + struct gpu_buddy_block, link); if (!block) goto fallback; - while (start >= drm_buddy_block_size(mm, block)) { - start -= drm_buddy_block_size(mm, block); + while (start >= gpu_buddy_block_size(mm, block)) { + start -= gpu_buddy_block_size(mm, block); next = block->link.next; if (next != head) - block = list_entry(next, struct drm_buddy_block, + block = list_entry(next, struct gpu_buddy_block, link); } cur->mm = mm; - cur->start = drm_buddy_block_offset(block) + start; - cur->size = min(drm_buddy_block_size(mm, block) - start, + cur->start = gpu_buddy_block_offset(block) + start; + cur->size = min(gpu_buddy_block_size(mm, block) - start, size); cur->remaining = size; cur->node = block; @@ -259,7 +259,7 @@ static inline void xe_res_first_dma(const struct drm_pagemap_addr *dma_addr, */ static inline void xe_res_next(struct xe_res_cursor *cur, u64 size) { - struct drm_buddy_block *block; + struct gpu_buddy_block *block; struct list_head *next; u64 start; @@ -295,18 +295,18 @@ static inline void xe_res_next(struct xe_res_cursor *cur, u64 size) block = cur->node; next = block->link.next; - block = list_entry(next, struct drm_buddy_block, link); + block = list_entry(next, struct gpu_buddy_block, link); - while (start >= drm_buddy_block_size(cur->mm, block)) { - start -= drm_buddy_block_size(cur->mm, block); + while (start >= gpu_buddy_block_size(cur->mm, block)) { + start -= gpu_buddy_block_size(cur->mm, block); next = block->link.next; - block = list_entry(next, struct drm_buddy_block, link); + block = list_entry(next, struct gpu_buddy_block, link); } - cur->start = drm_buddy_block_offset(block) + start; - cur->size = min(drm_buddy_block_size(cur->mm, block) - start, + cur->start = gpu_buddy_block_offset(block) + start; + cur->size = min(gpu_buddy_block_size(cur->mm, block) - start, cur->remaining); cur->node = block; break; diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c index 213f0334518a..cda3bf7e2418 100644 --- a/drivers/gpu/drm/xe/xe_svm.c +++ b/drivers/gpu/drm/xe/xe_svm.c @@ -747,7 +747,7 @@ static u64 block_offset_to_pfn(struct drm_pagemap *dpagemap, u64 offset) return PHYS_PFN(offset + xpagemap->hpa_base); } -static struct drm_buddy *vram_to_buddy(struct xe_vram_region *vram) +static struct gpu_buddy *vram_to_buddy(struct xe_vram_region *vram) { return &vram->ttm.mm; } @@ -758,17 +758,17 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati struct xe_bo *bo = to_xe_bo(devmem_allocation); struct ttm_resource *res = bo->ttm.resource; struct list_head *blocks = &to_xe_ttm_vram_mgr_resource(res)->blocks; - struct drm_buddy_block *block; + struct gpu_buddy_block *block; int j = 0; list_for_each_entry(block, blocks, link) { struct xe_vram_region *vr = block->private; - 
struct drm_buddy *buddy = vram_to_buddy(vr); + struct gpu_buddy *buddy = vram_to_buddy(vr); u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap, - drm_buddy_block_offset(block)); + gpu_buddy_block_offset(block)); int i; - for (i = 0; i < drm_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i) + for (i = 0; i < gpu_buddy_block_size(buddy, block) >> PAGE_SHIFT; ++i) pfn[j++] = block_pfn + i; } @@ -1033,7 +1033,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap, struct dma_fence *pre_migrate_fence = NULL; struct xe_device *xe = vr->xe; struct device *dev = xe->drm.dev; - struct drm_buddy_block *block; + struct gpu_buddy_block *block; struct xe_validation_ctx vctx; struct list_head *blocks; struct drm_exec exec; diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c index 6553a19f7cf2..d119217d566a 100644 --- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c +++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c @@ -6,6 +6,7 @@ #include #include +#include #include #include @@ -16,16 +17,16 @@ #include "xe_ttm_vram_mgr.h" #include "xe_vram_types.h" -static inline struct drm_buddy_block * +static inline struct gpu_buddy_block * xe_ttm_vram_mgr_first_block(struct list_head *list) { - return list_first_entry_or_null(list, struct drm_buddy_block, link); + return list_first_entry_or_null(list, struct gpu_buddy_block, link); } -static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm, +static inline bool xe_is_vram_mgr_blocks_contiguous(struct gpu_buddy *mm, struct list_head *head) { - struct drm_buddy_block *block; + struct gpu_buddy_block *block; u64 start, size; block = xe_ttm_vram_mgr_first_block(head); @@ -33,12 +34,12 @@ static inline bool xe_is_vram_mgr_blocks_contiguous(struct drm_buddy *mm, return false; while (head != block->link.next) { - start = drm_buddy_block_offset(block); - size = drm_buddy_block_size(mm, block); + start = gpu_buddy_block_offset(block); + size = gpu_buddy_block_size(mm, block); - block = list_entry(block->link.next, struct drm_buddy_block, + block = list_entry(block->link.next, struct gpu_buddy_block, link); - if (start + size != drm_buddy_block_offset(block)) + if (start + size != gpu_buddy_block_offset(block)) return false; } @@ -52,7 +53,7 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man, { struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man); struct xe_ttm_vram_mgr_resource *vres; - struct drm_buddy *mm = &mgr->mm; + struct gpu_buddy *mm = &mgr->mm; u64 size, min_page_size; unsigned long lpfn; int err; @@ -79,10 +80,10 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man, INIT_LIST_HEAD(&vres->blocks); if (place->flags & TTM_PL_FLAG_TOPDOWN) - vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION; + vres->flags |= GPU_BUDDY_TOPDOWN_ALLOCATION; if (place->fpfn || lpfn != man->size >> PAGE_SHIFT) - vres->flags |= DRM_BUDDY_RANGE_ALLOCATION; + vres->flags |= GPU_BUDDY_RANGE_ALLOCATION; if (WARN_ON(!vres->base.size)) { err = -EINVAL; @@ -118,27 +119,27 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man, lpfn = max_t(unsigned long, place->fpfn + (size >> PAGE_SHIFT), lpfn); } - err = drm_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT, + err = gpu_buddy_alloc_blocks(mm, (u64)place->fpfn << PAGE_SHIFT, (u64)lpfn << PAGE_SHIFT, size, min_page_size, &vres->blocks, vres->flags); if (err) goto error_unlock; if (place->flags & TTM_PL_FLAG_CONTIGUOUS) { - if (!drm_buddy_block_trim(mm, NULL, vres->base.size, &vres->blocks)) + if (!gpu_buddy_block_trim(mm, NULL, 
vres->base.size, &vres->blocks)) size = vres->base.size; } if (lpfn <= mgr->visible_size >> PAGE_SHIFT) { vres->used_visible_size = size; } else { - struct drm_buddy_block *block; + struct gpu_buddy_block *block; list_for_each_entry(block, &vres->blocks, link) { - u64 start = drm_buddy_block_offset(block); + u64 start = gpu_buddy_block_offset(block); if (start < mgr->visible_size) { - u64 end = start + drm_buddy_block_size(mm, block); + u64 end = start + gpu_buddy_block_size(mm, block); vres->used_visible_size += min(end, mgr->visible_size) - start; @@ -158,11 +159,11 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man, * the object. */ if (vres->base.placement & TTM_PL_FLAG_CONTIGUOUS) { - struct drm_buddy_block *block = list_first_entry(&vres->blocks, + struct gpu_buddy_block *block = list_first_entry(&vres->blocks, typeof(*block), link); - vres->base.start = drm_buddy_block_offset(block) >> PAGE_SHIFT; + vres->base.start = gpu_buddy_block_offset(block) >> PAGE_SHIFT; } else { vres->base.start = XE_BO_INVALID_OFFSET; } @@ -184,10 +185,10 @@ static void xe_ttm_vram_mgr_del(struct ttm_resource_manager *man, struct xe_ttm_vram_mgr_resource *vres = to_xe_ttm_vram_mgr_resource(res); struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man); - struct drm_buddy *mm = &mgr->mm; + struct gpu_buddy *mm = &mgr->mm; mutex_lock(&mgr->lock); - drm_buddy_free_list(mm, &vres->blocks, 0); + gpu_buddy_free_list(mm, &vres->blocks, 0); mgr->visible_avail += vres->used_visible_size; mutex_unlock(&mgr->lock); @@ -200,7 +201,7 @@ static void xe_ttm_vram_mgr_debug(struct ttm_resource_manager *man, struct drm_printer *printer) { struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man); - struct drm_buddy *mm = &mgr->mm; + struct gpu_buddy *mm = &mgr->mm; mutex_lock(&mgr->lock); drm_printf(printer, "default_page_size: %lluKiB\n", @@ -223,8 +224,8 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man, struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man); struct xe_ttm_vram_mgr_resource *vres = to_xe_ttm_vram_mgr_resource(res); - struct drm_buddy *mm = &mgr->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = &mgr->mm; + struct gpu_buddy_block *block; if (!place->fpfn && !place->lpfn) return true; @@ -234,9 +235,9 @@ static bool xe_ttm_vram_mgr_intersects(struct ttm_resource_manager *man, list_for_each_entry(block, &vres->blocks, link) { unsigned long fpfn = - drm_buddy_block_offset(block) >> PAGE_SHIFT; + gpu_buddy_block_offset(block) >> PAGE_SHIFT; unsigned long lpfn = fpfn + - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT); if (place->fpfn < lpfn && place->lpfn > fpfn) return true; @@ -253,8 +254,8 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man, struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man); struct xe_ttm_vram_mgr_resource *vres = to_xe_ttm_vram_mgr_resource(res); - struct drm_buddy *mm = &mgr->mm; - struct drm_buddy_block *block; + struct gpu_buddy *mm = &mgr->mm; + struct gpu_buddy_block *block; if (!place->fpfn && !place->lpfn) return true; @@ -264,9 +265,9 @@ static bool xe_ttm_vram_mgr_compatible(struct ttm_resource_manager *man, list_for_each_entry(block, &vres->blocks, link) { unsigned long fpfn = - drm_buddy_block_offset(block) >> PAGE_SHIFT; + gpu_buddy_block_offset(block) >> PAGE_SHIFT; unsigned long lpfn = fpfn + - (drm_buddy_block_size(mm, block) >> PAGE_SHIFT); + (gpu_buddy_block_size(mm, block) >> PAGE_SHIFT); if (fpfn < place->fpfn || lpfn > place->lpfn) return false; @@ 
-296,7 +297,7 @@ static void xe_ttm_vram_mgr_fini(struct drm_device *dev, void *arg) WARN_ON_ONCE(mgr->visible_avail != mgr->visible_size); - drm_buddy_fini(&mgr->mm); + gpu_buddy_fini(&mgr->mm); ttm_resource_manager_cleanup(&mgr->manager); @@ -327,7 +328,7 @@ int __xe_ttm_vram_mgr_init(struct xe_device *xe, struct xe_ttm_vram_mgr *mgr, mgr->visible_avail = io_size; ttm_resource_manager_init(man, &xe->ttm, size); - err = drm_buddy_init(&mgr->mm, man->size, default_page_size); + err = gpu_buddy_init(&mgr->mm, man->size, default_page_size); if (err) return err; @@ -375,7 +376,7 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe, if (!*sgt) return -ENOMEM; - /* Determine the number of DRM_BUDDY blocks to export */ + /* Determine the number of GPU_BUDDY blocks to export */ xe_res_first(res, offset, length, &cursor); while (cursor.remaining) { num_entries++; @@ -392,10 +393,10 @@ int xe_ttm_vram_mgr_alloc_sgt(struct xe_device *xe, sg->length = 0; /* - * Walk down DRM_BUDDY blocks to populate scatterlist nodes - * @note: Use iterator api to get first the DRM_BUDDY block + * Walk down GPU_BUDDY blocks to populate scatterlist nodes + * @note: Use iterator api to get first the GPU_BUDDY block * and the number of bytes from it. Access the following - * DRM_BUDDY block(s) if more buffer needs to exported + * GPU_BUDDY block(s) if more buffer needs to be exported */ xe_res_first(res, offset, length, &cursor); for_each_sgtable_sg((*sgt), sg, i) { diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h index babeec5511d9..9106da056b49 100644 --- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h +++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr_types.h @@ -18,7 +18,7 @@ struct xe_ttm_vram_mgr { /** @manager: Base TTM resource manager */ struct ttm_resource_manager manager; /** @mm: DRM buddy allocator which manages the VRAM */ - struct drm_buddy mm; + struct gpu_buddy mm; /** @visible_size: Proped size of the CPU visible portion */ u64 visible_size; /** @visible_avail: CPU visible portion still unallocated */ diff --git a/drivers/gpu/tests/Makefile b/drivers/gpu/tests/Makefile index 8e7654e87d82..4183e6e2de45 100644 --- a/drivers/gpu/tests/Makefile +++ b/drivers/gpu/tests/Makefile @@ -1,4 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 gpu_buddy_tests-y = gpu_buddy_test.o gpu_random.o -obj-$(CONFIG_DRM_KUNIT_TEST) += gpu_buddy_tests.o +obj-$(CONFIG_GPU_BUDDY_KUNIT_TEST) += gpu_buddy_tests.o diff --git a/drivers/gpu/tests/gpu_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c index b905932da990..450e71deed90 100644 --- a/drivers/gpu/tests/gpu_buddy_test.c +++ b/drivers/gpu/tests/gpu_buddy_test.c @@ -21,9 +21,9 @@ static inline u64 get_size(int order, u64 chunk_size) return (1 << order) * chunk_size; } -static void drm_test_buddy_fragmentation_performance(struct kunit *test) +static void gpu_test_buddy_fragmentation_performance(struct kunit *test) { - struct drm_buddy_block *block, *tmp; + struct gpu_buddy_block *block, *tmp; int num_blocks, i, ret, count = 0; LIST_HEAD(allocated_blocks); unsigned long elapsed_ms; @@ -32,7 +32,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) LIST_HEAD(clear_list); LIST_HEAD(dirty_list); LIST_HEAD(free_list); - struct drm_buddy mm; + struct gpu_buddy mm; u64 mm_size = SZ_4G; ktime_t start, end; @@ -47,7 +47,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) * quickly the allocator can satisfy larger, aligned requests from a pool of * highly fragmented space. 
*/ - KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K), + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K), "buddy_init failed\n"); num_blocks = mm_size / SZ_64K; @@ -55,7 +55,7 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) start = ktime_get(); /* Allocate with maximum fragmentation - 8K blocks with 64K alignment */ for (i = 0; i < num_blocks; i++) - KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, &allocated_blocks, 0), "buddy_alloc hit an error size=%u\n", SZ_8K); @@ -68,21 +68,21 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) } /* Free with different flags to ensure no coalescing */ - drm_buddy_free_list(&mm, &clear_list, DRM_BUDDY_CLEARED); - drm_buddy_free_list(&mm, &dirty_list, 0); + gpu_buddy_free_list(&mm, &clear_list, GPU_BUDDY_CLEARED); + gpu_buddy_free_list(&mm, &dirty_list, 0); for (i = 0; i < num_blocks; i++) - KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K, + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_64K, SZ_64K, &test_blocks, 0), "buddy_alloc hit an error size=%u\n", SZ_64K); - drm_buddy_free_list(&mm, &test_blocks, 0); + gpu_buddy_free_list(&mm, &test_blocks, 0); end = ktime_get(); elapsed_ms = ktime_to_ms(ktime_sub(end, start)); kunit_info(test, "Fragmented allocation took %lu ms\n", elapsed_ms); - drm_buddy_fini(&mm); + gpu_buddy_fini(&mm); /* * Reverse free order under fragmentation @@ -96,13 +96,13 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) * deallocation occurs in the opposite order of allocation, exposing the * cost difference between a linear freelist scan and an ordered tree lookup. 
*/ - ret = drm_buddy_init(&mm, mm_size, SZ_4K); + ret = gpu_buddy_init(&mm, mm_size, SZ_4K); KUNIT_ASSERT_EQ(test, ret, 0); start = ktime_get(); /* Allocate maximum fragmentation */ for (i = 0; i < num_blocks; i++) - KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_64K, &allocated_blocks, 0), "buddy_alloc hit an error size=%u\n", SZ_8K); @@ -111,28 +111,28 @@ static void drm_test_buddy_fragmentation_performance(struct kunit *test) list_move_tail(&block->link, &free_list); count++; } - drm_buddy_free_list(&mm, &free_list, DRM_BUDDY_CLEARED); + gpu_buddy_free_list(&mm, &free_list, GPU_BUDDY_CLEARED); list_for_each_entry_safe_reverse(block, tmp, &allocated_blocks, link) list_move(&block->link, &reverse_list); - drm_buddy_free_list(&mm, &reverse_list, DRM_BUDDY_CLEARED); + gpu_buddy_free_list(&mm, &reverse_list, GPU_BUDDY_CLEARED); end = ktime_get(); elapsed_ms = ktime_to_ms(ktime_sub(end, start)); kunit_info(test, "Reverse-ordered free took %lu ms\n", elapsed_ms); - drm_buddy_fini(&mm); + gpu_buddy_fini(&mm); } -static void drm_test_buddy_alloc_range_bias(struct kunit *test) +static void gpu_test_buddy_alloc_range_bias(struct kunit *test) { u32 mm_size, size, ps, bias_size, bias_start, bias_end, bias_rem; - DRM_RND_STATE(prng, random_seed); + GPU_RND_STATE(prng, random_seed); unsigned int i, count, *order; - struct drm_buddy_block *block; + struct gpu_buddy_block *block; unsigned long flags; - struct drm_buddy mm; + struct gpu_buddy mm; LIST_HEAD(allocated); bias_size = SZ_1M; @@ -142,11 +142,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test) kunit_info(test, "mm_size=%u, ps=%u\n", mm_size, ps); - KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps), + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps), "buddy_init failed\n"); count = mm_size / bias_size; - order = drm_random_order(count, &prng); + order = gpu_random_order(count, &prng); KUNIT_EXPECT_TRUE(test, order); /* @@ -166,79 +166,79 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test) /* internal round_up too big */ KUNIT_ASSERT_TRUE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, bias_size + ps, bias_size, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, bias_size, bias_size); /* size too big */ KUNIT_ASSERT_TRUE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, bias_size + ps, ps, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, bias_size + ps, ps); /* bias range too small for size */ KUNIT_ASSERT_TRUE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start + ps, + gpu_buddy_alloc_blocks(&mm, bias_start + ps, bias_end, bias_size, ps, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", bias_start + ps, bias_end, bias_size, ps); /* bias misaligned */ KUNIT_ASSERT_TRUE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start + ps, + gpu_buddy_alloc_blocks(&mm, bias_start + ps, bias_end - ps, bias_size >> 1, bias_size >> 1, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc h didn't fail with bias(%x-%x), size=%u, ps=%u\n", bias_start + ps, 
bias_end - ps, bias_size >> 1, bias_size >> 1); /* single big page */ KUNIT_ASSERT_FALSE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, bias_size, bias_size, &tmp, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc i failed with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, bias_size, bias_size); - drm_buddy_free_list(&mm, &tmp, 0); + gpu_buddy_free_list(&mm, &tmp, 0); /* single page with internal round_up */ KUNIT_ASSERT_FALSE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, ps, bias_size, &tmp, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, ps, bias_size); - drm_buddy_free_list(&mm, &tmp, 0); + gpu_buddy_free_list(&mm, &tmp, 0); /* random size within */ size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps); if (size) KUNIT_ASSERT_FALSE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, size, ps, &tmp, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, size, ps); bias_rem -= size; /* too big for current avail */ KUNIT_ASSERT_TRUE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, bias_rem + ps, ps, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, bias_rem + ps, ps); @@ -248,10 +248,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test) size = max(size, ps); KUNIT_ASSERT_FALSE_MSG(test, - drm_buddy_alloc_blocks(&mm, bias_start, + gpu_buddy_alloc_blocks(&mm, bias_start, bias_end, size, ps, &allocated, - DRM_BUDDY_RANGE_ALLOCATION), + GPU_BUDDY_RANGE_ALLOCATION), "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", bias_start, bias_end, size, ps); /* @@ -259,15 +259,15 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test) * unallocated, and ideally not always on the bias * boundaries. */ - drm_buddy_free_list(&mm, &tmp, 0); + gpu_buddy_free_list(&mm, &tmp, 0); } else { list_splice_tail(&tmp, &allocated); } } kfree(order); - drm_buddy_free_list(&mm, &allocated, 0); - drm_buddy_fini(&mm); + gpu_buddy_free_list(&mm, &allocated, 0); + gpu_buddy_fini(&mm); /* * Something more free-form. Idea is to pick a random starting bias @@ -278,7 +278,7 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test) * allocated nodes in the middle of the address space. 
 	 */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
 			       "buddy_init failed\n");

 	bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -290,10 +290,10 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
 		u32 size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);

 		KUNIT_ASSERT_FALSE_MSG(test,
-				       drm_buddy_alloc_blocks(&mm, bias_start,
+				       gpu_buddy_alloc_blocks(&mm, bias_start,
 							      bias_end, size, ps,
 							      &allocated,
-							      DRM_BUDDY_RANGE_ALLOCATION),
+							      GPU_BUDDY_RANGE_ALLOCATION),
 				       "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n",
 				       bias_start, bias_end, size, ps);
 		bias_rem -= size;
@@ -319,24 +319,24 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
 	KUNIT_ASSERT_EQ(test, bias_start, 0);
 	KUNIT_ASSERT_EQ(test, bias_end, mm_size);
 	KUNIT_ASSERT_TRUE_MSG(test,
-			      drm_buddy_alloc_blocks(&mm, bias_start, bias_end,
+			      gpu_buddy_alloc_blocks(&mm, bias_start, bias_end,
 						     ps, ps, &allocated,
-						     DRM_BUDDY_RANGE_ALLOCATION),
+						     GPU_BUDDY_RANGE_ALLOCATION),
 			      "buddy_alloc passed with bias(%x-%x), size=%u\n",
 			      bias_start, bias_end, ps);

-	drm_buddy_free_list(&mm, &allocated, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &allocated, 0);
+	gpu_buddy_fini(&mm);

 	/*
-	 * Allocate cleared blocks in the bias range when the DRM buddy's clear avail is
+	 * Allocate cleared blocks in the bias range when the GPU buddy's clear avail is
 	 * zero. This will validate the bias range allocation in scenarios like system boot
 	 * when no cleared blocks are available and exercise the fallback path too. The resulting
 	 * blocks should always be dirty.
 	 */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, ps),
 			       "buddy_init failed\n");

 	bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps);
@@ -344,11 +344,11 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
 	bias_end = max(bias_end, bias_start + ps);
 	bias_rem = bias_end - bias_start;

-	flags = DRM_BUDDY_CLEAR_ALLOCATION | DRM_BUDDY_RANGE_ALLOCATION;
+	flags = GPU_BUDDY_CLEAR_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION;
 	size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps);

 	KUNIT_ASSERT_FALSE_MSG(test,
-			       drm_buddy_alloc_blocks(&mm, bias_start,
+			       gpu_buddy_alloc_blocks(&mm, bias_start,
 						      bias_end, size, ps,
 						      &allocated,
 						      flags),
@@ -356,27 +356,27 @@ static void drm_test_buddy_alloc_range_bias(struct kunit *test)
 			       bias_start, bias_end, size, ps);

 	list_for_each_entry(block, &allocated, link)
-		KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);

-	drm_buddy_free_list(&mm, &allocated, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &allocated, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_clear(struct kunit *test)
+static void gpu_test_buddy_alloc_clear(struct kunit *test)
 {
 	unsigned long n_pages, total, i = 0;
 	const unsigned long ps = SZ_4K;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	const int max_order = 12;
 	LIST_HEAD(allocated);
-	struct drm_buddy mm;
+	struct gpu_buddy mm;
 	unsigned int order;
 	u32 mm_size, size;
 	LIST_HEAD(dirty);
 	LIST_HEAD(clean);

 	mm_size = SZ_4K << max_order;
-	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));

 	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);

@@ -389,11 +389,11 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
 	 * is indeed all dirty pages and vice versa. Free it all again,
 	 * keeping the dirty/clear status.
 	 */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							    5 * ps, ps, &allocated,
-							    DRM_BUDDY_TOPDOWN_ALLOCATION),
+							    GPU_BUDDY_TOPDOWN_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 5 * ps);
-	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);

 	n_pages = 10;
 	do {
@@ -406,37 +406,37 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
 			flags = 0;
 		} else {
 			list = &clean;
-			flags = DRM_BUDDY_CLEAR_ALLOCATION;
+			flags = GPU_BUDDY_CLEAR_ALLOCATION;
 		}

-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 								    ps, ps, list, flags),
 				       "buddy_alloc hit an error size=%lu\n", ps);
 	} while (++i < n_pages);

 	list_for_each_entry(block, &clean, link)
-		KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), true);
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), true);

 	list_for_each_entry(block, &dirty, link)
-		KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);

-	drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);

 	/*
 	 * Trying to go over the clear limit for some allocation.
 	 * The allocation should never fail with reasonable page-size.
 	 */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							    10 * ps, ps, &clean,
-							    DRM_BUDDY_CLEAR_ALLOCATION),
+							    GPU_BUDDY_CLEAR_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 10 * ps);

-	drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
-	drm_buddy_free_list(&mm, &dirty, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &dirty, 0);
+	gpu_buddy_fini(&mm);

-	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));

 	/*
 	 * Create a new mm. Intentionally fragment the address space by creating
@@ -458,34 +458,34 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
 		else
 			list = &clean;

-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 								    ps, ps, list, 0),
 				       "buddy_alloc hit an error size=%lu\n", ps);
 	} while (++i < n_pages);

-	drm_buddy_free_list(&mm, &clean, DRM_BUDDY_CLEARED);
-	drm_buddy_free_list(&mm, &dirty, 0);
+	gpu_buddy_free_list(&mm, &clean, GPU_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &dirty, 0);

 	order = 1;
 	do {
 		size = SZ_4K << order;

-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 								    size, size, &allocated,
-								    DRM_BUDDY_CLEAR_ALLOCATION),
+								    GPU_BUDDY_CLEAR_ALLOCATION),
 				       "buddy_alloc hit an error size=%u\n", size);
 		total = 0;
 		list_for_each_entry(block, &allocated, link) {
 			if (size != mm_size)
-				KUNIT_EXPECT_EQ(test, drm_buddy_block_is_clear(block), false);
-			total += drm_buddy_block_size(&mm, block);
+				KUNIT_EXPECT_EQ(test, gpu_buddy_block_is_clear(block), false);
+			total += gpu_buddy_block_size(&mm, block);
 		}
 		KUNIT_EXPECT_EQ(test, total, size);

-		drm_buddy_free_list(&mm, &allocated, 0);
+		gpu_buddy_free_list(&mm, &allocated, 0);
 	} while (++order <= max_order);

-	drm_buddy_fini(&mm);
+	gpu_buddy_fini(&mm);

 	/*
 	 * Create a new mm with a non power-of-two size. Allocate a random size from each
@@ -494,44 +494,44 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
 	 */
 	mm_size = (SZ_4K << max_order) + (SZ_4K << (max_order - 2));

-	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));
 	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
 							    4 * ps, ps, &allocated,
-							    DRM_BUDDY_RANGE_ALLOCATION),
+							    GPU_BUDDY_RANGE_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 4 * ps);
-	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);

-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, SZ_4K << max_order,
 							    2 * ps, ps, &allocated,
-							    DRM_BUDDY_CLEAR_ALLOCATION),
+							    GPU_BUDDY_CLEAR_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 2 * ps);
-	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+	gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);

-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, SZ_4K << max_order, mm_size,
 							    ps, ps, &allocated,
-							    DRM_BUDDY_RANGE_ALLOCATION),
+							    GPU_BUDDY_RANGE_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", ps);
-	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &allocated, GPU_BUDDY_CLEARED);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_contiguous(struct kunit *test)
+static void gpu_test_buddy_alloc_contiguous(struct kunit *test)
 {
 	const unsigned long ps = SZ_4K, mm_size = 16 * 3 * SZ_4K;
 	unsigned long i, n_pages, total;
-	struct drm_buddy_block *block;
-	struct drm_buddy mm;
+	struct gpu_buddy_block *block;
+	struct gpu_buddy mm;
 	LIST_HEAD(left);
 	LIST_HEAD(middle);
 	LIST_HEAD(right);
 	LIST_HEAD(allocated);

-	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, mm_size, ps));

 	/*
 	 * Idea is to fragment the address space by alternating block
 	 * allocations between three different lists; one for left, middle and
 	 * right. We can then free a list to simulate fragmentation. In
-	 * particular we want to exercise the DRM_BUDDY_CONTIGUOUS_ALLOCATION,
+	 * particular we want to exercise the GPU_BUDDY_CONTIGUOUS_ALLOCATION,
 	 * including the try_harder path.
 	 */
@@ -548,66 +548,66 @@ static void drm_test_buddy_alloc_contiguous(struct kunit *test)
 		else
 			list = &right;
 		KUNIT_ASSERT_FALSE_MSG(test,
-				       drm_buddy_alloc_blocks(&mm, 0, mm_size,
+				       gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							      ps, ps, list, 0),
 				       "buddy_alloc hit an error size=%lu\n", ps);
 	} while (++i < n_pages);

-	KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							   3 * ps, ps, &allocated,
-							   DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							   GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			      "buddy_alloc didn't error size=%lu\n", 3 * ps);

-	drm_buddy_free_list(&mm, &middle, 0);
-	KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	gpu_buddy_free_list(&mm, &middle, 0);
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							   3 * ps, ps, &allocated,
-							   DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							   GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			      "buddy_alloc didn't error size=%lu\n", 3 * ps);
-	KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							   2 * ps, ps, &allocated,
-							   DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							   GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			      "buddy_alloc didn't error size=%lu\n", 2 * ps);

-	drm_buddy_free_list(&mm, &right, 0);
-	KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	gpu_buddy_free_list(&mm, &right, 0);
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							   3 * ps, ps, &allocated,
-							   DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							   GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			      "buddy_alloc didn't error size=%lu\n", 3 * ps);
 	/*
 	 * At this point we should have enough contiguous space for 2 blocks,
 	 * however they are never buddies (since we freed middle and right) so
 	 * will require the try_harder logic to find them.
 	 */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							    2 * ps, ps, &allocated,
-							    DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							    GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 2 * ps);

-	drm_buddy_free_list(&mm, &left, 0);
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+	gpu_buddy_free_list(&mm, &left, 0);
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size,
 							    3 * ps, ps, &allocated,
-							    DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							    GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			       "buddy_alloc hit an error size=%lu\n", 3 * ps);

 	total = 0;
 	list_for_each_entry(block, &allocated, link)
-		total += drm_buddy_block_size(&mm, block);
+		total += gpu_buddy_block_size(&mm, block);

 	KUNIT_ASSERT_EQ(test, total, ps * 2 + ps * 3);

-	drm_buddy_free_list(&mm, &allocated, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &allocated, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_pathological(struct kunit *test)
+static void gpu_test_buddy_alloc_pathological(struct kunit *test)
 {
 	u64 mm_size, size, start = 0;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	const int max_order = 3;
 	unsigned long flags = 0;
 	int order, top;
-	struct drm_buddy mm;
+	struct gpu_buddy mm;
 	LIST_HEAD(blocks);
 	LIST_HEAD(holes);
 	LIST_HEAD(tmp);
@@ -620,7 +620,7 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
 	 */
 	mm_size = SZ_4K << max_order;
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
 			       "buddy_init failed\n");

 	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
@@ -630,18 +630,18 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)
 		block = list_first_entry_or_null(&blocks, typeof(*block), link);
 		if (block) {
 			list_del(&block->link);
-			drm_buddy_free_block(&mm, block);
+			gpu_buddy_free_block(&mm, block);
 		}

 		for (order = top; order--;) {
 			size = get_size(order, mm.chunk_size);
-			KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start,
+			KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start,
 									    mm_size, size, size,
 									    &tmp, flags),
 					       "buddy_alloc hit -ENOMEM with order=%d, top=%d\n",
 					       order, top);

-			block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+			block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 			KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 			list_move_tail(&block->link, &blocks);
@@ -649,45 +649,45 @@ static void drm_test_buddy_alloc_pathological(struct kunit *test)

 		/* There should be one final page for this sub-allocation */
 		size = get_size(0, mm.chunk_size);
-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								    size, size, &tmp, flags),
 				       "buddy_alloc hit -ENOMEM for hole\n");

-		block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+		block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 		KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 		list_move_tail(&block->link, &holes);

 		size = get_size(top, mm.chunk_size);
-		KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								   size, size, &tmp, flags),
 				      "buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!",
 				      top, max_order);
 	}

-	drm_buddy_free_list(&mm, &holes, 0);
+	gpu_buddy_free_list(&mm, &holes, 0);

 	/* Nothing larger than blocks of chunk_size now available */
 	for (order = 1; order <= max_order; order++) {
 		size = get_size(order, mm.chunk_size);
-		KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								   size, size, &tmp, flags),
 				      "buddy_alloc unexpectedly succeeded at order %d, it should be full!",
 				      order);
 	}

 	list_splice_tail(&holes, &blocks);
-	drm_buddy_free_list(&mm, &blocks, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &blocks, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_pessimistic(struct kunit *test)
 {
 	u64 mm_size, size, start = 0;
-	struct drm_buddy_block *block, *bn;
+	struct gpu_buddy_block *block, *bn;
 	const unsigned int max_order = 16;
 	unsigned long flags = 0;
-	struct drm_buddy mm;
+	struct gpu_buddy mm;
 	unsigned int order;
 	LIST_HEAD(blocks);
 	LIST_HEAD(tmp);
@@ -699,19 +699,19 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)
 	 */
 	mm_size = SZ_4K << max_order;
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
 			       "buddy_init failed\n");

 	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);

 	for (order = 0; order < max_order; order++) {
 		size = get_size(order, mm.chunk_size);
-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								    size, size, &tmp, flags),
 				       "buddy_alloc hit -ENOMEM with order=%d\n",
 				       order);

-		block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+		block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 		KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 		list_move_tail(&block->link, &blocks);
@@ -719,11 +719,11 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)

 	/* And now the last remaining block available */
 	size = get_size(0, mm.chunk_size);
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 							    size, size, &tmp, flags),
 			       "buddy_alloc hit -ENOMEM on final alloc\n");

-	block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+	block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 	KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 	list_move_tail(&block->link, &blocks);
@@ -731,58 +731,58 @@ static void drm_test_buddy_alloc_pessimistic(struct kunit *test)

 	/* Should be completely full! */
 	for (order = max_order; order--;) {
 		size = get_size(order, mm.chunk_size);
-		KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								   size, size, &tmp, flags),
 				      "buddy_alloc unexpectedly succeeded, it should be full!");
 	}

 	block = list_last_entry(&blocks, typeof(*block), link);
 	list_del(&block->link);
-	drm_buddy_free_block(&mm, block);
+	gpu_buddy_free_block(&mm, block);

 	/* As we free in increasing size, we make available larger blocks */
 	order = 1;
 	list_for_each_entry_safe(block, bn, &blocks, link) {
 		list_del(&block->link);
-		drm_buddy_free_block(&mm, block);
+		gpu_buddy_free_block(&mm, block);

 		size = get_size(order, mm.chunk_size);
-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								    size, size, &tmp, flags),
 				       "buddy_alloc hit -ENOMEM with order=%d\n",
 				       order);

-		block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+		block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 		KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 		list_del(&block->link);
-		drm_buddy_free_block(&mm, block);
+		gpu_buddy_free_block(&mm, block);

 		order++;
 	}

 	/* To confirm, now the whole mm should be available */
 	size = get_size(max_order, mm.chunk_size);
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 							    size, size, &tmp, flags),
 			       "buddy_alloc (realloc) hit -ENOMEM with order=%d\n",
 			       max_order);

-	block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+	block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 	KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 	list_del(&block->link);
-	drm_buddy_free_block(&mm, block);
-	drm_buddy_free_list(&mm, &blocks, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_block(&mm, block);
+	gpu_buddy_free_list(&mm, &blocks, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_optimistic(struct kunit *test)
+static void gpu_test_buddy_alloc_optimistic(struct kunit *test)
 {
 	u64 mm_size, size, start = 0;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	unsigned long flags = 0;
 	const int max_order = 16;
-	struct drm_buddy mm;
+	struct gpu_buddy mm;
 	LIST_HEAD(blocks);
 	LIST_HEAD(tmp);
 	int order;
@@ -794,19 +794,19 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)

 	mm_size = SZ_4K * ((1 << (max_order + 1)) - 1);

-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
 			       "buddy_init failed\n");

 	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);

 	for (order = 0; order <= max_order; order++) {
 		size = get_size(order, mm.chunk_size);
-		KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 								    size, size, &tmp, flags),
 				       "buddy_alloc hit -ENOMEM with order=%d\n",
 				       order);

-		block = list_first_entry_or_null(&tmp, struct drm_buddy_block, link);
+		block = list_first_entry_or_null(&tmp, struct gpu_buddy_block, link);
 		KUNIT_ASSERT_TRUE_MSG(test, block, "alloc_blocks has no blocks\n");

 		list_move_tail(&block->link, &blocks);
@@ -814,115 +814,115 @@ static void drm_test_buddy_alloc_optimistic(struct kunit *test)

 	/* Should be completely full! */
 	size = get_size(0, mm.chunk_size);
-	KUNIT_ASSERT_TRUE_MSG(test, drm_buddy_alloc_blocks(&mm, start, mm_size,
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, start, mm_size,
 							   size, size, &tmp, flags),
 			      "buddy_alloc unexpectedly succeeded, it should be full!");

-	drm_buddy_free_list(&mm, &blocks, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &blocks, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_limit(struct kunit *test)
+static void gpu_test_buddy_alloc_limit(struct kunit *test)
 {
 	u64 size = U64_MAX, start = 0;
-	struct drm_buddy_block *block;
+	struct gpu_buddy_block *block;
 	unsigned long flags = 0;
 	LIST_HEAD(allocated);
-	struct drm_buddy mm;
+	struct gpu_buddy mm;

-	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, size, SZ_4K));
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_init(&mm, size, SZ_4K));

-	KUNIT_EXPECT_EQ_MSG(test, mm.max_order, DRM_BUDDY_MAX_ORDER,
+	KUNIT_EXPECT_EQ_MSG(test, mm.max_order, GPU_BUDDY_MAX_ORDER,
 			    "mm.max_order(%d) != %d\n", mm.max_order,
-			    DRM_BUDDY_MAX_ORDER);
+			    GPU_BUDDY_MAX_ORDER);

 	size = mm.chunk_size << mm.max_order;
-	KUNIT_EXPECT_FALSE(test, drm_buddy_alloc_blocks(&mm, start, size, size,
+	KUNIT_EXPECT_FALSE(test, gpu_buddy_alloc_blocks(&mm, start, size, size,
 							mm.chunk_size, &allocated, flags));

-	block = list_first_entry_or_null(&allocated, struct drm_buddy_block, link);
+	block = list_first_entry_or_null(&allocated, struct gpu_buddy_block, link);
 	KUNIT_EXPECT_TRUE(test, block);

-	KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_order(block), mm.max_order,
+	KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_order(block), mm.max_order,
 			    "block order(%d) != %d\n",
-			    drm_buddy_block_order(block), mm.max_order);
+			    gpu_buddy_block_order(block), mm.max_order);

-	KUNIT_EXPECT_EQ_MSG(test, drm_buddy_block_size(&mm, block),
+	KUNIT_EXPECT_EQ_MSG(test, gpu_buddy_block_size(&mm, block),
 			    BIT_ULL(mm.max_order) * mm.chunk_size,
 			    "block size(%llu) != %llu\n",
-			    drm_buddy_block_size(&mm, block),
+			    gpu_buddy_block_size(&mm, block),
 			    BIT_ULL(mm.max_order) * mm.chunk_size);

-	drm_buddy_free_list(&mm, &allocated, 0);
-	drm_buddy_fini(&mm);
+	gpu_buddy_free_list(&mm, &allocated, 0);
+	gpu_buddy_fini(&mm);
 }

-static void drm_test_buddy_alloc_exceeds_max_order(struct kunit *test)
+static void gpu_test_buddy_alloc_exceeds_max_order(struct kunit *test)
 {
 	u64 mm_size = SZ_8G + SZ_2G, size = SZ_8G + SZ_1G, min_block_size = SZ_8G;
-	struct drm_buddy mm;
+	struct gpu_buddy mm;
 	LIST_HEAD(blocks);
 	int err;

-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, SZ_4K),
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
 			       "buddy_init failed\n");

 	/* CONTIGUOUS allocation should succeed via try_harder fallback */
-	KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size, size,
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, size,
 							    SZ_4K, &blocks,
-							    DRM_BUDDY_CONTIGUOUS_ALLOCATION),
+							    GPU_BUDDY_CONTIGUOUS_ALLOCATION),
 			       "buddy_alloc hit an error size=%llu\n", size);
-	drm_buddy_free_list(&mm, &blocks, 0);
+	gpu_buddy_free_list(&mm, &blocks, 0);

 	/* Non-CONTIGUOUS with large min_block_size should return -EINVAL */
-	err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks, 0);
+	err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks, 0);
 	KUNIT_EXPECT_EQ(test, err, -EINVAL);

 	/* Non-CONTIGUOUS + RANGE with large min_block_size should return -EINVAL */
-	err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks,
-				     DRM_BUDDY_RANGE_ALLOCATION);
+	err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, min_block_size, &blocks,
+				     GPU_BUDDY_RANGE_ALLOCATION);
 	KUNIT_EXPECT_EQ(test, err, -EINVAL);

 	/* CONTIGUOUS + RANGE should return -EINVAL (no try_harder for RANGE) */
-	err = drm_buddy_alloc_blocks(&mm, 0, mm_size, size, SZ_4K, &blocks,
-				     DRM_BUDDY_CONTIGUOUS_ALLOCATION | DRM_BUDDY_RANGE_ALLOCATION);
+	err = gpu_buddy_alloc_blocks(&mm, 0, mm_size, size, SZ_4K, &blocks,
+				     GPU_BUDDY_CONTIGUOUS_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION);
 	KUNIT_EXPECT_EQ(test, err, -EINVAL);

-	drm_buddy_fini(&mm);
+	gpu_buddy_fini(&mm);
 }

-static int drm_buddy_suite_init(struct kunit_suite *suite)
+static int gpu_buddy_suite_init(struct kunit_suite *suite)
 {
 	while (!random_seed)
 		random_seed = get_random_u32();

-	kunit_info(suite, "Testing DRM buddy manager, with random_seed=0x%x\n",
+	kunit_info(suite, "Testing GPU buddy manager, with random_seed=0x%x\n",
 		   random_seed);

 	return 0;
 }

-static struct kunit_case drm_buddy_tests[] = {
-	KUNIT_CASE(drm_test_buddy_alloc_limit),
-	KUNIT_CASE(drm_test_buddy_alloc_optimistic),
-	KUNIT_CASE(drm_test_buddy_alloc_pessimistic),
-	KUNIT_CASE(drm_test_buddy_alloc_pathological),
-	KUNIT_CASE(drm_test_buddy_alloc_contiguous),
-	KUNIT_CASE(drm_test_buddy_alloc_clear),
-	KUNIT_CASE(drm_test_buddy_alloc_range_bias),
-	KUNIT_CASE(drm_test_buddy_fragmentation_performance),
-	KUNIT_CASE(drm_test_buddy_alloc_exceeds_max_order),
+static struct kunit_case gpu_buddy_tests[] = {
+	KUNIT_CASE(gpu_test_buddy_alloc_limit),
+	KUNIT_CASE(gpu_test_buddy_alloc_optimistic),
+	KUNIT_CASE(gpu_test_buddy_alloc_pessimistic),
+	KUNIT_CASE(gpu_test_buddy_alloc_pathological),
+	KUNIT_CASE(gpu_test_buddy_alloc_contiguous),
+	KUNIT_CASE(gpu_test_buddy_alloc_clear),
+	KUNIT_CASE(gpu_test_buddy_alloc_range_bias),
+	KUNIT_CASE(gpu_test_buddy_fragmentation_performance),
+	KUNIT_CASE(gpu_test_buddy_alloc_exceeds_max_order),
 	{}
 };

-static struct kunit_suite drm_buddy_test_suite = {
-	.name = "drm_buddy",
-	.suite_init = drm_buddy_suite_init,
-	.test_cases = drm_buddy_tests,
+static struct kunit_suite gpu_buddy_test_suite = {
+	.name = "gpu_buddy",
+	.suite_init = gpu_buddy_suite_init,
+	.test_cases = gpu_buddy_tests,
 };
-kunit_test_suite(drm_buddy_test_suite);
+kunit_test_suite(gpu_buddy_test_suite);

 MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kunit test for drm_buddy functions");
+MODULE_DESCRIPTION("Kunit test for gpu_buddy functions");
 MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/tests/gpu_random.c b/drivers/gpu/tests/gpu_random.c
index ddd1f594b5d5..6356372f7e52 100644
--- a/drivers/gpu/tests/gpu_random.c
+++ b/drivers/gpu/tests/gpu_random.c
@@ -8,26 +8,26 @@
 #include "gpu_random.h"

-u32 drm_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
+u32 gpu_prandom_u32_max_state(u32 ep_ro, struct rnd_state *state)
 {
 	return upper_32_bits((u64)prandom_u32_state(state) * ep_ro);
 }
-EXPORT_SYMBOL(drm_prandom_u32_max_state);
+EXPORT_SYMBOL(gpu_prandom_u32_max_state);

-void drm_random_reorder(unsigned int *order, unsigned int count,
+void gpu_random_reorder(unsigned int *order, unsigned int count,
 			struct rnd_state *state)
 {
 	unsigned int i, j;

 	for (i = 0; i < count; ++i) {
 		BUILD_BUG_ON(sizeof(unsigned int) > sizeof(u32));
-		j = drm_prandom_u32_max_state(count, state);
+		j = gpu_prandom_u32_max_state(count, state);
 		swap(order[i], order[j]);
 	}
 }
-EXPORT_SYMBOL(drm_random_reorder);
+EXPORT_SYMBOL(gpu_random_reorder);

-unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
+unsigned int *gpu_random_order(unsigned int count, struct rnd_state *state)
 {
 	unsigned int *order, i;
@@ -38,7 +38,7 @@ unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
 	for (i = 0; i < count; i++)
 		order[i] = i;

-	drm_random_reorder(order, count, state);
+	gpu_random_reorder(order, count, state);

 	return order;
 }
-EXPORT_SYMBOL(drm_random_order);
+EXPORT_SYMBOL(gpu_random_order);
diff --git a/drivers/gpu/tests/gpu_random.h b/drivers/gpu/tests/gpu_random.h
index 9f827260a89d..b68cf3448264 100644
--- a/drivers/gpu/tests/gpu_random.h
+++ b/drivers/gpu/tests/gpu_random.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __DRM_RANDOM_H__
-#define __DRM_RANDOM_H__
+#ifndef __GPU_RANDOM_H__
+#define __GPU_RANDOM_H__

 /* This is a temporary home for a couple of utility functions that should
  * be transposed to lib/ at the earliest convenience.
@@ -8,21 +8,21 @@
 #include

-#define DRM_RND_STATE_INITIALIZER(seed__) ({				\
+#define GPU_RND_STATE_INITIALIZER(seed__) ({				\
 	struct rnd_state state__;					\
 	prandom_seed_state(&state__, (seed__));				\
 	state__;							\
 })

-#define DRM_RND_STATE(name__, seed__) \
-	struct rnd_state name__ = DRM_RND_STATE_INITIALIZER(seed__)
+#define GPU_RND_STATE(name__, seed__) \
+	struct rnd_state name__ = GPU_RND_STATE_INITIALIZER(seed__)

-unsigned int *drm_random_order(unsigned int count,
+unsigned int *gpu_random_order(unsigned int count,
 			       struct rnd_state *state);
-void drm_random_reorder(unsigned int *order,
+void gpu_random_reorder(unsigned int *order,
 			unsigned int count,
 			struct rnd_state *state);
-u32 drm_prandom_u32_max_state(u32 ep_ro,
+u32 gpu_prandom_u32_max_state(u32 ep_ro,
 			      struct rnd_state *state);

-#endif /* !__DRM_RANDOM_H__ */
+#endif /* !__GPU_RANDOM_H__ */
diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index 9884f003247d..a7144d275f54 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -37,6 +37,7 @@
 source "drivers/char/agp/Kconfig"

 source "drivers/gpu/vga/Kconfig"
+source "drivers/gpu/Kconfig"
 source "drivers/gpu/host1x/Kconfig"
 source "drivers/gpu/ipu-v3/Kconfig"
 source "drivers/gpu/nova-core/Kconfig"
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
new file mode 100644
index 000000000000..3054369bebff
--- /dev/null
+++ b/include/drm/drm_buddy.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#ifndef __DRM_BUDDY_H__
+#define __DRM_BUDDY_H__
+
+#include
+
+struct drm_printer;
+
+/* DRM-specific GPU Buddy Allocator print helpers */
+void drm_buddy_print(struct gpu_buddy *mm, struct drm_printer *p);
+void drm_buddy_block_print(struct gpu_buddy *mm,
+			   struct gpu_buddy_block *block,
+			   struct drm_printer *p);
+#endif
diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
index b909fa8f810a..07ac65db6d2e 100644
--- a/include/linux/gpu_buddy.h
+++ b/include/linux/gpu_buddy.h
@@ -3,8 +3,8 @@
  * Copyright © 2021 Intel Corporation
  */

-#ifndef __DRM_BUDDY_H__
-#define __DRM_BUDDY_H__
+#ifndef __GPU_BUDDY_H__
+#define __GPU_BUDDY_H__

 #include
 #include
@@ -12,38 +12,45 @@
 #include
 #include

-struct drm_printer;
+#define GPU_BUDDY_RANGE_ALLOCATION		BIT(0)
+#define GPU_BUDDY_TOPDOWN_ALLOCATION		BIT(1)
+#define GPU_BUDDY_CONTIGUOUS_ALLOCATION		BIT(2)
+#define GPU_BUDDY_CLEAR_ALLOCATION		BIT(3)
+#define GPU_BUDDY_CLEARED			BIT(4)
+#define GPU_BUDDY_TRIM_DISABLE			BIT(5)

-#define DRM_BUDDY_RANGE_ALLOCATION		BIT(0)
-#define DRM_BUDDY_TOPDOWN_ALLOCATION		BIT(1)
-#define DRM_BUDDY_CONTIGUOUS_ALLOCATION		BIT(2)
-#define DRM_BUDDY_CLEAR_ALLOCATION		BIT(3)
-#define DRM_BUDDY_CLEARED			BIT(4)
-#define DRM_BUDDY_TRIM_DISABLE			BIT(5)
+enum gpu_buddy_free_tree {
+	GPU_BUDDY_CLEAR_TREE = 0,
+	GPU_BUDDY_DIRTY_TREE,
+	GPU_BUDDY_MAX_FREE_TREES,
+};

-struct drm_buddy_block {
-#define DRM_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
-#define DRM_BUDDY_HEADER_STATE  GENMASK_ULL(11, 10)
-#define   DRM_BUDDY_ALLOCATED	   (1 << 10)
-#define   DRM_BUDDY_FREE	   (2 << 10)
-#define   DRM_BUDDY_SPLIT	   (3 << 10)
-#define DRM_BUDDY_HEADER_CLEAR  GENMASK_ULL(9, 9)
+#define for_each_free_tree(tree)					\
+	for ((tree) = 0; (tree) < GPU_BUDDY_MAX_FREE_TREES; (tree)++)
+
+struct gpu_buddy_block {
+#define GPU_BUDDY_HEADER_OFFSET GENMASK_ULL(63, 12)
+#define GPU_BUDDY_HEADER_STATE  GENMASK_ULL(11, 10)
+#define   GPU_BUDDY_ALLOCATED	   (1 << 10)
+#define   GPU_BUDDY_FREE	   (2 << 10)
+#define   GPU_BUDDY_SPLIT	   (3 << 10)
+#define GPU_BUDDY_HEADER_CLEAR  GENMASK_ULL(9, 9)
 /* Free to be used, if needed in the future */
-#define DRM_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
-#define DRM_BUDDY_HEADER_ORDER  GENMASK_ULL(5, 0)
+#define GPU_BUDDY_HEADER_UNUSED GENMASK_ULL(8, 6)
+#define GPU_BUDDY_HEADER_ORDER  GENMASK_ULL(5, 0)
 	u64 header;

-	struct drm_buddy_block *left;
-	struct drm_buddy_block *right;
-	struct drm_buddy_block *parent;
+	struct gpu_buddy_block *left;
+	struct gpu_buddy_block *right;
+	struct gpu_buddy_block *parent;

 	void *private; /* owned by creator */

 	/*
-	 * While the block is allocated by the user through drm_buddy_alloc*,
+	 * While the block is allocated by the user through gpu_buddy_alloc*,
 	 * the user has ownership of the link, for example to maintain within
 	 * a list, if so desired. As soon as the block is freed with
-	 * drm_buddy_free* ownership is given back to the mm.
+	 * gpu_buddy_free* ownership is given back to the mm.
 	 */
 	union {
 		struct rb_node rb;
@@ -54,15 +61,15 @@ struct drm_buddy_block {
 };

 /* Order-zero must be at least SZ_4K */
-#define DRM_BUDDY_MAX_ORDER (63 - 12)
+#define GPU_BUDDY_MAX_ORDER (63 - 12)

 /*
  * Binary Buddy System.
  *
  * Locking should be handled by the user, a simple mutex around
- * drm_buddy_alloc* and drm_buddy_free* should suffice.
+ * gpu_buddy_alloc* and gpu_buddy_free* should suffice.
  */
-struct drm_buddy {
+struct gpu_buddy {
 	/* Maintain a free list for each order. */
 	struct rb_root **free_trees;
@@ -73,7 +80,7 @@ struct drm_buddy {
 	 * block. Nodes are either allocated or free, in which case they will
 	 * also exist on the respective free list.
 	 */
-	struct drm_buddy_block **roots;
+	struct gpu_buddy_block **roots;

 	/*
 	 * Anything from here is public, and remains static for the lifetime of
@@ -90,82 +97,81 @@
 };

 static inline u64
-drm_buddy_block_offset(const struct drm_buddy_block *block)
+gpu_buddy_block_offset(const struct gpu_buddy_block *block)
 {
-	return block->header & DRM_BUDDY_HEADER_OFFSET;
+	return block->header & GPU_BUDDY_HEADER_OFFSET;
 }

 static inline unsigned int
-drm_buddy_block_order(struct drm_buddy_block *block)
+gpu_buddy_block_order(struct gpu_buddy_block *block)
 {
-	return block->header & DRM_BUDDY_HEADER_ORDER;
+	return block->header & GPU_BUDDY_HEADER_ORDER;
 }

 static inline unsigned int
-drm_buddy_block_state(struct drm_buddy_block *block)
+gpu_buddy_block_state(struct gpu_buddy_block *block)
 {
-	return block->header & DRM_BUDDY_HEADER_STATE;
+	return block->header & GPU_BUDDY_HEADER_STATE;
 }

 static inline bool
-drm_buddy_block_is_allocated(struct drm_buddy_block *block)
+gpu_buddy_block_is_allocated(struct gpu_buddy_block *block)
 {
-	return drm_buddy_block_state(block) == DRM_BUDDY_ALLOCATED;
+	return gpu_buddy_block_state(block) == GPU_BUDDY_ALLOCATED;
 }

 static inline bool
-drm_buddy_block_is_clear(struct drm_buddy_block *block)
+gpu_buddy_block_is_clear(struct gpu_buddy_block *block)
 {
-	return block->header & DRM_BUDDY_HEADER_CLEAR;
+	return block->header & GPU_BUDDY_HEADER_CLEAR;
 }

 static inline bool
-drm_buddy_block_is_free(struct drm_buddy_block *block)
+gpu_buddy_block_is_free(struct gpu_buddy_block *block)
 {
-	return drm_buddy_block_state(block) == DRM_BUDDY_FREE;
+	return gpu_buddy_block_state(block) == GPU_BUDDY_FREE;
 }

 static inline bool
-drm_buddy_block_is_split(struct drm_buddy_block *block)
+gpu_buddy_block_is_split(struct gpu_buddy_block *block)
 {
-	return drm_buddy_block_state(block) == DRM_BUDDY_SPLIT;
+	return gpu_buddy_block_state(block) == GPU_BUDDY_SPLIT;
 }

 static inline u64
-drm_buddy_block_size(struct drm_buddy *mm,
-		     struct drm_buddy_block *block)
+gpu_buddy_block_size(struct gpu_buddy *mm,
+		     struct gpu_buddy_block *block)
 {
-	return mm->chunk_size << drm_buddy_block_order(block);
+	return mm->chunk_size << gpu_buddy_block_order(block);
 }

-int drm_buddy_init(struct drm_buddy *mm, u64 size, u64 chunk_size);
+int gpu_buddy_init(struct gpu_buddy *mm, u64 size, u64 chunk_size);

-void drm_buddy_fini(struct drm_buddy *mm);
+void gpu_buddy_fini(struct gpu_buddy *mm);

-struct drm_buddy_block *
-drm_get_buddy(struct drm_buddy_block *block);
+struct gpu_buddy_block *
+gpu_get_buddy(struct gpu_buddy_block *block);

-int drm_buddy_alloc_blocks(struct drm_buddy *mm,
+int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 			   u64 start, u64 end, u64 size,
 			   u64 min_page_size,
 			   struct list_head *blocks,
 			   unsigned long flags);

-int drm_buddy_block_trim(struct drm_buddy *mm,
+int gpu_buddy_block_trim(struct gpu_buddy *mm,
 			 u64 *start,
 			 u64 new_size,
 			 struct list_head *blocks);

-void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear);
+void gpu_buddy_reset_clear(struct gpu_buddy *mm, bool is_clear);

-void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
+void gpu_buddy_free_block(struct gpu_buddy *mm, struct gpu_buddy_block *block);

-void drm_buddy_free_list(struct drm_buddy *mm,
+void gpu_buddy_free_list(struct gpu_buddy *mm,
 			 struct list_head *objects,
 			 unsigned int flags);

-void drm_buddy_print(struct drm_buddy *mm, struct drm_printer *p);
-void drm_buddy_block_print(struct drm_buddy *mm,
-			   struct drm_buddy_block *block,
-			   struct drm_printer *p);
+void gpu_buddy_print(struct gpu_buddy *mm);
+void gpu_buddy_block_print(struct gpu_buddy *mm,
+			   struct gpu_buddy_block *block);
 #endif
-- 
2.34.1
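
A note for reviewers (not part of the patch): below is a minimal usage
sketch of the relocated allocator, based only on the gpu_buddy_*
declarations in include/linux/gpu_buddy.h above. Per the header's locking
comment, a single mutex around the alloc/free calls suffices; the vram_*
names and the global mutex are illustrative, not taken from any driver.

	#include <linux/gpu_buddy.h>
	#include <linux/list.h>
	#include <linux/mutex.h>
	#include <linux/sizes.h>

	static DEFINE_MUTEX(vram_lock);	/* hypothetical; per-device in a real driver */
	static struct gpu_buddy vram_mm;

	/* One-time setup: size must be a multiple of the SZ_4K chunk size. */
	static int vram_mm_create(u64 vram_size)
	{
		return gpu_buddy_init(&vram_mm, vram_size, SZ_4K);
	}

	/* Physically contiguous allocation constrained to [start, end). */
	static int vram_alloc_contiguous(u64 start, u64 end, u64 size,
					 struct list_head *blocks)
	{
		int err;

		mutex_lock(&vram_lock);
		err = gpu_buddy_alloc_blocks(&vram_mm, start, end, size, SZ_4K,
					     blocks, GPU_BUDDY_CONTIGUOUS_ALLOCATION);
		mutex_unlock(&vram_lock);
		return err;
	}

	/* Return blocks to the mm; pass GPU_BUDDY_CLEARED if they were scrubbed. */
	static void vram_free(struct list_head *blocks)
	{
		mutex_lock(&vram_lock);
		gpu_buddy_free_list(&vram_mm, blocks, 0);
		mutex_unlock(&vram_lock);
	}

	/* Teardown: every block must have been freed first. */
	static void vram_mm_destroy(void)
	{
		gpu_buddy_fini(&vram_mm);
	}

The flags combine as in the tests above: for instance,
GPU_BUDDY_CLEAR_ALLOCATION | GPU_BUDDY_RANGE_ALLOCATION requests
already-zeroed blocks within the given range, falling back to dirty
blocks when no cleared ones are available.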