Date: Tue, 24 Jun 2025 10:01:09 -0700
From: Matthew Brost
To: Satyanarayana K V P
CC: Michal Wajdeczko, Michał Winiarski, Tomasz Lis, Matthew Auld
Subject: Re: [PATCH v9 1/3] drm/xe/vf: Create contexts for CCS read write
In-Reply-To: <20250624100010.12254-2-satyanarayana.k.v.p@intel.com>
References: <20250624100010.12254-1-satyanarayana.k.v.p@intel.com>
 <20250624100010.12254-2-satyanarayana.k.v.p@intel.com>
List-Id: Intel Xe graphics driver

On Tue, Jun 24, 2025 at 03:30:08PM +0530, Satyanarayana K V P wrote:
> Create two LRCs to handle CCS meta data read / write from CCS pool in the
> VM. Read context is used to hold GPU instructions to be executed at save
> time and write context is used to hold GPU instructions to be executed at
> the restore time.
>
> Allocate batch buffer pool using suballocator for both read and write
> contexts.
>
> Migration framework is reused to create LRCAs for read and write.
>

One more thing.

> Signed-off-by: Satyanarayana K V P
> Cc: Michal Wajdeczko
> Cc: Matthew Brost
> Cc: Michał Winiarski
> Acked-by: Matthew Brost
> ---
> Cc: Tomasz Lis
> Cc: Matthew Auld
>
> V8 -> V9:
> - Initialized CCS read write contexts for only root tile (Matthew Brost).
>
> V7 -> V8:
> - None.
>
> V6 -> V7:
> - Fixed review comments (Michal Wajdeczko & Matthew Brost).
>
> V5 -> V6:
> - Added id field in the xe_tile_vf_ccs structure for self identification.
>
> V4 -> V5:
> - Modified read/write contexts to enums from #defines (Matthew Brost).
> - The CCS BB pool size is calculated based on the system memory size (Michal
>   Wajdeczko & Matthew Brost).
>
> V3 -> V4:
> - Fixed issues reported by patchworks.
>
> V2 -> V3:
> - Added new variable which denotes the initialization of contexts.
>
> V1 -> V2:
> - Fixed review comments.
> ---
>  drivers/gpu/drm/xe/Makefile                |   1 +
>  drivers/gpu/drm/xe/xe_device.c             |   4 +
>  drivers/gpu/drm/xe/xe_device_types.h       |   4 +
>  drivers/gpu/drm/xe/xe_gt_debugfs.c         |  36 ++++
>  drivers/gpu/drm/xe/xe_sriov.c              |  19 ++
>  drivers/gpu/drm/xe/xe_sriov.h              |   1 +
>  drivers/gpu/drm/xe/xe_sriov_types.h        |   5 +
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 208 +++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.h       |  13 ++
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  45 +++++
>  10 files changed, 336 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index eee6bac01a00..853970ab1314 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -141,6 +141,7 @@ xe-y += \
>  	xe_memirq.o \
>  	xe_sriov.o \
>  	xe_sriov_vf.o \
> +	xe_sriov_vf_ccs.o \
>  	xe_tile_sriov_vf.o
>
>  xe-$(CONFIG_PCI_IOV) += \
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index e160e7be84f0..b7922668741c 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -929,6 +929,10 @@ int xe_device_probe(struct xe_device *xe)
>
>  	xe_vsec_init(xe);
>
> +	err = xe_sriov_late_init(xe);
> +	if (err)
> +		goto err_unregister_display;
> +
>  	return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe);
>
>  err_unregister_display:
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index 6aca4b1a2824..1b52db967ace 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -22,6 +22,7 @@
>  #include "xe_pmu_types.h"
>  #include "xe_pt_types.h"
>  #include "xe_sriov_types.h"
> +#include "xe_sriov_vf_ccs_types.h"
>  #include "xe_step_types.h"
>  #include "xe_survivability_mode_types.h"
>  #include "xe_ttm_vram_mgr_types.h"
> @@ -235,6 +236,9 @@ struct xe_tile {
>  		struct {
>  			/** @sriov.vf.ggtt_balloon: GGTT regions excluded from use. */
>  			struct xe_ggtt_node *ggtt_balloon[2];
> +
> +			/** @sriov.vf.ccs: CCS read and write contexts for VF. */
> +			struct xe_tile_vf_ccs ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>  		} vf;
>  	} sriov;
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
> index 848618acdca8..404844515523 100644
> --- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
> @@ -134,6 +134,30 @@ static int sa_info(struct xe_gt *gt, struct drm_printer *p)
>  	return 0;
>  }
>
> +static int sa_info_vf_ccs(struct xe_gt *gt, struct drm_printer *p)
> +{
> +	struct xe_tile *tile = gt_to_tile(gt);
> +	struct xe_sa_manager *bb_pool;
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +
> +	if (!IS_VF_CCS_READY(gt_to_xe(gt)))
> +		return 0;
> +
> +	xe_pm_runtime_get(gt_to_xe(gt));
> +
> +	for_each_ccs_rw_ctx(ctx_id) {
> +		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
> +		drm_printf(p, "-------------------------\n");
> +		bb_pool = tile->sriov.vf.ccs[ctx_id].mem.ccs_bb_pool;
> +		drm_suballoc_dump_debug_info(&bb_pool->base, p, bb_pool->gpu_addr);
> +		drm_puts(p, "\n");
> +	}
> +
> +	xe_pm_runtime_put(gt_to_xe(gt));
> +
> +	return 0;
> +}
> +
>  static int topology(struct xe_gt *gt, struct drm_printer *p)
>  {
>  	xe_pm_runtime_get(gt_to_xe(gt));
> @@ -303,6 +327,13 @@ static const struct drm_info_list vf_safe_debugfs_list[] = {
>  	{"hwconfig", .show = xe_gt_debugfs_simple_show, .data = hwconfig},
>  };
>
> +/*
> + * only for GT debugfs files which are valid on VF. Not valid on PF.
> + */
> +static const struct drm_info_list vf_only_debugfs_list[] = {
> +	{"sa_info_vf_ccs", .show = xe_gt_debugfs_simple_show, .data = sa_info_vf_ccs},
> +};
> +
>  /* everything else should be added here */
>  static const struct drm_info_list pf_only_debugfs_list[] = {
>  	{"hw_engines", .show = xe_gt_debugfs_simple_show, .data = hw_engines},
> @@ -419,6 +450,11 @@ void xe_gt_debugfs_register(struct xe_gt *gt)
>  		drm_debugfs_create_files(pf_only_debugfs_list,
>  					 ARRAY_SIZE(pf_only_debugfs_list),
>  					 root, minor);
> +	else
> +		drm_debugfs_create_files(vf_only_debugfs_list,
> +					 ARRAY_SIZE(vf_only_debugfs_list),
> +					 root, minor);
> +
>
>  	xe_uc_debugfs_register(&gt->uc, root);
>
> diff --git a/drivers/gpu/drm/xe/xe_sriov.c b/drivers/gpu/drm/xe/xe_sriov.c
> index a0eab44c0e76..87911fb4eea7 100644
> --- a/drivers/gpu/drm/xe/xe_sriov.c
> +++ b/drivers/gpu/drm/xe/xe_sriov.c
> @@ -15,6 +15,7 @@
>  #include "xe_sriov.h"
>  #include "xe_sriov_pf.h"
>  #include "xe_sriov_vf.h"
> +#include "xe_sriov_vf_ccs.h"
>
>  /**
>   * xe_sriov_mode_to_string - Convert enum value to string.
> @@ -157,3 +158,21 @@ const char *xe_sriov_function_name(unsigned int n, char *buf, size_t size)
>  	strscpy(buf, "PF", size);
>  	return buf;
>  }
> +
> +/**
> + * xe_sriov_late_init() - SR-IOV late initialization functions.
> + * @xe: the &xe_device to initialize
> + *
> + * On VF this function will initialize code for CCS migration.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_late_init(struct xe_device *xe)
> +{
> +	int err = 0;
> +
> +	if (IS_VF_CCS_INIT_NEEDED(xe))
> +		err = xe_sriov_vf_ccs_init(xe);
> +
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov.h b/drivers/gpu/drm/xe/xe_sriov.h
> index 688fbabf08f1..0e0c1abf2d14 100644
> --- a/drivers/gpu/drm/xe/xe_sriov.h
> +++ b/drivers/gpu/drm/xe/xe_sriov.h
> @@ -18,6 +18,7 @@ const char *xe_sriov_function_name(unsigned int n, char *buf, size_t len);
>  void xe_sriov_probe_early(struct xe_device *xe);
>  void xe_sriov_print_info(struct xe_device *xe, struct drm_printer *p);
>  int xe_sriov_init(struct xe_device *xe);
> +int xe_sriov_late_init(struct xe_device *xe);
>
>  static inline enum xe_sriov_mode xe_device_sriov_mode(const struct xe_device *xe)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_sriov_types.h b/drivers/gpu/drm/xe/xe_sriov_types.h
> index ca94382a721e..8abfdb2c5ead 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_types.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_types.h
> @@ -71,6 +71,11 @@ struct xe_device_vf {
>  		/** @migration.gt_flags: Per-GT request flags for VF migration recovery */
>  		unsigned long gt_flags;
>  	} migration;
> +
> +	struct {
> +		/** @initialized: Initilalization of vf ccs is completed or not */
> +		bool initialized;
> +	} ccs;
>  };
>
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> new file mode 100644
> index 000000000000..9000d618978d
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> @@ -0,0 +1,208 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#include "instructions/xe_mi_commands.h"
> +#include "instructions/xe_gpu_commands.h"
> +#include "xe_bo.h"
> +#include "xe_device.h"
> +#include "xe_migrate.h"
> +#include "xe_sa.h"
> +#include "xe_sriov_printk.h"
> +#include "xe_sriov_vf_ccs.h"
> +#include "xe_sriov_vf_ccs_types.h"
> +
> +/**
> + * DOC: VF save/restore of compression Meta Data
> + *
> + * VF KMD registers two special contexts/LRCAs.
> + *
> + * Save Context/LRCA: contain necessary cmds+page table to trigger Meta data /
> + * compression control surface (Aka CCS) save in regular System memory in VM.
> + *
> + * Restore Context/LRCA: contain necessary cmds+page table to trigger Meta data /
> + * compression control surface (Aka CCS) Restore from regular System memory in
> + * VM to corresponding CCS pool.
> + *
> + * Below diagram explain steps needed for VF save/Restore of compression Meta Data::
> + *
> + *  CCS Save           CCS Restore           VF KMD                            Guc       BCS
> + *    LRCA                LRCA
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |  Create Save LRCA                       |                              |         |
> + *    [ ]<--------------------------------------[ ]                            |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |   Register save LRCA         |         |
> + *     |                   |                     |   with Guc                   |         |
> + *     |                   |                    [ ]--------------------------->[ ]       |
> + *     |                   |                     |                              |         |
> + *     |                   | Create restore LRCA |                              |         |
> + *     |                  [ ]<------------------[ ]                            |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |   Register restore LRCA      |         |
> + *     |                   |                     |   with Guc                   |         |
> + *     |                   |                    [ ]--------------------------->[ ]       |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                    [ ]-------------------------    |         |
> + *     |                   |                    [ ] Allocate main memory.  |    |         |
> + *     |                   |                    [ ] Allocate CCS memory.   |    |         |
> + *     |                   |                    [ ] Update Main memory &   |    |         |
> + *    [ ]<--------------------------------------[ ] CCS pages PPGTT + BB   |    |         |
> + *     |                  [ ]<------------------[ ] cmds to save & restore.|    |         |
> + *     |                   |                    [ ]<------------------------    |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     :                   :                     :                              :         :
> + *  ---------------------------------------- VF Paused --------------------------------------
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |Schedule |
> + *     |                   |                     |                              |CCS Save |
> + *     |                   |                     |                              | LRCA    |
> + *     |                   |                     |                             [ ]------>[ ]
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |CCS save |
> + *     |                   |                     |                              |completed|
> + *     |                   |                     |                             [ ]<------[ ]
> + *     |                   |                     |                              |         |
> + *     :                   :                     :                              :         :
> + *  --------------------------------------- VM Migrated -------------------------------------
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     :                   :                     :                              :         :
> + *  --------------------------------------- VF Resumed --------------------------------------
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                    [ ]--------------               |         |
> + *     |                   |                    [ ] Fix up GGTT |               |         |
> + *     |                   |                    [ ]<-------------               |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |   Notify VF_RESFIX_DONE      |         |
> + *     |                   |                    [ ]--------------------------->[ ]       |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |Schedule |
> + *     |                   |                     |                              |CCS      |
> + *     |                   |                     |                              |Restore  |
> + *     |                   |                     |                              |LRCA     |
> + *     |                   |                     |                             [ ]------>[ ]
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |CCS      |
> + *     |                   |                     |                              |restore  |
> + *     |                   |                     |                              |completed|
> + *     |                   |                     |                             [ ]<------[ ]
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |   VF_RESFIX_DONE complete    |         |
> + *     |                   |                     |   notification               |         |
> + *     |                   |                    [ ]<---------------------------[ ]       |
> + *     |                   |                     |                              |         |
> + *     |                   |                     |                              |         |
> + *     :                   :                     :                              :         :
> + *  ----------------------------------- Continue VM restore ---------------------------------
> + */
> +
> +static u64 get_ccs_bb_pool_size(struct xe_device *xe)
> +{
> +	u64 sys_mem_size, ccs_mem_size, ptes, bb_pool_size;
> +	struct sysinfo si;
> +
> +	si_meminfo(&si);
> +	sys_mem_size = si.totalram * si.mem_unit;
> +	ccs_mem_size = sys_mem_size / NUM_BYTES_PER_CCS_BYTE(xe);
> +	ptes = DIV_ROUND_UP(sys_mem_size + ccs_mem_size, XE_PAGE_SIZE);
s/DIV_ROUND_UP/DIV_ROUND_UP_ULL

I'm pretty sure this is the CI hooks failure.

Matt

> +
> +	/**
> +	 * We need below BB size to hold PTE mappings and some DWs for copy
> +	 * command. In reality, we need space for many copy commands. So, let
> +	 * us allocate double the calculated size which is enough to holds GPU
> +	 * instructions for the whole region.
> +	 */
> +	bb_pool_size = ptes * sizeof(u32);
> +
> +	return round_up(bb_pool_size * 2, SZ_1M);
> +}
> +
> +static int alloc_bb_pool(struct xe_tile *tile, struct xe_tile_vf_ccs *ctx)
> +{
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_sa_manager *sa_manager;
> +	u64 bb_pool_size;
> +	int offset, err;
> +
> +	bb_pool_size = get_ccs_bb_pool_size(xe);
> +	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
> +		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
> +
> +	sa_manager = xe_sa_bo_manager_init(tile, bb_pool_size, SZ_16);
> +
> +	if (IS_ERR(sa_manager)) {
> +		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
> +			     sa_manager);
> +		err = PTR_ERR(sa_manager);
> +		return err;
> +	}
> +
> +	offset = 0;
> +	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
> +		      bb_pool_size);
> +
> +	offset = bb_pool_size - sizeof(u32);
> +	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +
> +	ctx->mem.ccs_bb_pool = sa_manager;
> +
> +	return 0;
> +}
> +
> +/**
> + * xe_sriov_vf_ccs_init - Setup LRCA for save & restore.
> + * @xe: the &xe_device to start recovery on
> + *
> + * This function shall be called only by VF. It initializes
> + * LRCA and suballocator needed for CCS save & restore.
> + *
> + * Return: 0 on success. Negative error code on failure.
> + */
> +int xe_sriov_vf_ccs_init(struct xe_device *xe)
> +{
> +	struct xe_tile *tile = xe_device_get_root_tile(xe);
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	struct xe_migrate *migrate;
> +	struct xe_tile_vf_ccs *ctx;
> +	int err;
> +
> +	xe_assert(xe, IS_SRIOV_VF(xe));
> +	xe_assert(xe, !IS_DGFX(xe));
> +	xe_assert(xe, xe_device_has_flat_ccs(xe));
> +
> +	for_each_ccs_rw_ctx(ctx_id) {
> +		ctx = &tile->sriov.vf.ccs[ctx_id];
> +		ctx->ctx_id = ctx_id;
> +
> +		migrate = xe_migrate_init(tile);
> +		if (IS_ERR(migrate)) {
> +			err = PTR_ERR(migrate);
> +			goto err_ret;
> +		}
> +		ctx->migrate = migrate;
> +
> +		err = alloc_bb_pool(tile, ctx);
> +		if (err)
> +			goto err_ret;
> +	}
> +
> +	xe->sriov.vf.ccs.initialized = 1;
> +
> +	return 0;
> +
> +err_ret:
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
> new file mode 100644
> index 000000000000..5df9ba028d14
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_SRIOV_VF_CCS_H_
> +#define _XE_SRIOV_VF_CCS_H_
> +
> +struct xe_device;
> +
> +int xe_sriov_vf_ccs_init(struct xe_device *xe);
> +
> +#endif
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> new file mode 100644
> index 000000000000..6dc279d206ec
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_SRIOV_VF_CCS_TYPES_H_
> +#define _XE_SRIOV_VF_CCS_TYPES_H_
> +
> +#define for_each_ccs_rw_ctx(id__) \
> +	for ((id__) = 0; (id__) < XE_SRIOV_VF_CCS_CTX_COUNT; (id__)++)
> +
> +#define IS_VF_CCS_READY(xe) ({ \
> +	struct xe_device *___xe = (xe); \
> +	xe_assert(___xe, IS_SRIOV_VF(___xe)); \
> +	___xe->sriov.vf.ccs.initialized; \
> +	})
> +
> +#define IS_VF_CCS_INIT_NEEDED(xe) ({\
> +	struct xe_device *___xe = (xe); \
> +	IS_SRIOV_VF(___xe) && !IS_DGFX(___xe) && \
> +	xe_device_has_flat_ccs(___xe) && GRAPHICS_VER(___xe) >= 20; \
> +	})
> +
> +enum xe_sriov_vf_ccs_rw_ctxs {
> +	XE_SRIOV_VF_CCS_READ_CTX,
> +	XE_SRIOV_VF_CCS_WRITE_CTX,
> +	XE_SRIOV_VF_CCS_CTX_COUNT
> +};
> +
> +struct xe_migrate;
> +struct xe_sa_manager;
> +
> +struct xe_tile_vf_ccs {
> +	/** @id: Id to which context it belongs to */
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	/** @migrate: Migration helper for save/restore of CCS data */
> +	struct xe_migrate *migrate;
> +
> +	struct {
> +		/** @ccs_rw_bb_pool: Pool from which batch buffers are allocated. */
> +		struct xe_sa_manager *ccs_bb_pool;
> +	} mem;
> +};
> +
> +#endif
> --
> 2.43.0
>