From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 20 Jun 2025 09:20:31 -0700
From: Matthew Brost
To: Satyanarayana K V P
Cc: Michal Wajdeczko, Michał Winiarski, Tomasz Lis, Matthew Auld
Subject: Re: [PATCH v8 1/3] drm/xe/vf: Create contexts for CCS read write
References: <20250619080459.27731-1-satyanarayana.k.v.p@intel.com> <20250619080459.27731-2-satyanarayana.k.v.p@intel.com>
In-Reply-To: <20250619080459.27731-2-satyanarayana.k.v.p@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
List-Id: Intel Xe graphics driver
Sender: "Intel-xe"

On Thu, Jun 19, 2025 at 01:34:57PM +0530, Satyanarayana K V P wrote:
> Create two LRCs to handle CCS meta data read / write from CCS pool in the
> VM. Read context is used to hold GPU instructions to be executed at save
> time and write context is used to hold GPU instructions to be executed at
> the restore time.
>
> Allocate batch buffer pool using suballocator for both read and write
> contexts.
>
> Migration framework is reused to create LRCAs for read and write.
>
> Signed-off-by: Satyanarayana K V P
> Cc: Michal Wajdeczko
> Cc: Matthew Brost
> Cc: Michał Winiarski
> ---
> Cc: Tomasz Lis
> Cc: Matthew Auld
>
> V7 -> V8:
> - None.
>
> V6 -> V7:
> - Fixed review comments (Michal Wajdeczko & Matthew Brost).
>
> V5 -> V6:
> - Added id field in the xe_tile_vf_ccs structure for self identification.
>
> V4 -> V5:
> - Modified read/write contexts to enums from #defines (Matthew Brost).
> - The CCS BB pool size is calculated based on the system memory size (Michal
>   Wajdeczko & Matthew Brost).
>
> V3 -> V4:
> - Fixed issues reported by patchworks.
>
> V2 -> V3:
> - Added new variable which denotes the initialization of contexts.
>
> V1 -> V2:
> - Fixed review comments.
> ---
>  drivers/gpu/drm/xe/Makefile                |   1 +
>  drivers/gpu/drm/xe/xe_device.c             |   4 +
>  drivers/gpu/drm/xe/xe_device_types.h       |   4 +
>  drivers/gpu/drm/xe/xe_gt_debugfs.c         |  36 ++++
>  drivers/gpu/drm/xe/xe_sriov.c              |  19 ++
>  drivers/gpu/drm/xe/xe_sriov.h              |   1 +
>  drivers/gpu/drm/xe/xe_sriov_types.h        |   5 +
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c       | 210 +++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs.h       |  13 ++
>  drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h |  45 +++++
>  10 files changed, 338 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
>  create mode 100644 drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index f5f5775acdc0..3b5241937742 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -140,6 +140,7 @@ xe-y += \
>  	xe_memirq.o \
>  	xe_sriov.o \
>  	xe_sriov_vf.o \
> +	xe_sriov_vf_ccs.o \
>  	xe_tile_sriov_vf.o
>
>  xe-$(CONFIG_PCI_IOV) += \
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 8cfcfff250ca..f1335c1d0183 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -926,6 +926,10 @@ int xe_device_probe(struct xe_device *xe)
>
>  	xe_vsec_init(xe);
>
> +	err = xe_sriov_late_init(xe);
> +	if (err)
> +		goto err_unregister_display;
> +
>  	return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe);
>
>  err_unregister_display:
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index 003afb279a5e..5d2d87cc1c20 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -22,6 +22,7 @@
>  #include "xe_pmu_types.h"
>  #include "xe_pt_types.h"
>  #include "xe_sriov_types.h"
> +#include "xe_sriov_vf_ccs_types.h"
>  #include "xe_step_types.h"
>  #include "xe_survivability_mode_types.h"
>  #include "xe_ttm_vram_mgr_types.h"
> @@ -234,6 +235,9 @@ struct xe_tile {
>  	struct {
>  		/** @sriov.vf.ggtt_balloon: GGTT regions excluded from use. */
>  		struct xe_ggtt_node *ggtt_balloon[2];
> +
> +		/** @sriov.vf.ccs: CCS read and write contexts for VF. */
> +		struct xe_tile_vf_ccs ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
>  	} vf;
>  } sriov;
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
> index 848618acdca8..404844515523 100644
> --- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
> @@ -134,6 +134,30 @@ static int sa_info(struct xe_gt *gt, struct drm_printer *p)
>  	return 0;
>  }
>
> +static int sa_info_vf_ccs(struct xe_gt *gt, struct drm_printer *p)
> +{
> +	struct xe_tile *tile = gt_to_tile(gt);
> +	struct xe_sa_manager *bb_pool;
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +
> +	if (!IS_VF_CCS_READY(gt_to_xe(gt)))
> +		return 0;
> +
> +	xe_pm_runtime_get(gt_to_xe(gt));
> +
> +	for_each_ccs_rw_ctx(ctx_id) {
> +		drm_printf(p, "ccs %s bb suballoc info\n", ctx_id ? "write" : "read");
> +		drm_printf(p, "-------------------------\n");
> +		bb_pool = tile->sriov.vf.ccs[ctx_id].mem.ccs_bb_pool;
> +		drm_suballoc_dump_debug_info(&bb_pool->base, p, bb_pool->gpu_addr);
> +		drm_puts(p, "\n");
> +	}
> +
> +	xe_pm_runtime_put(gt_to_xe(gt));
> +
> +	return 0;
> +}
> +
>  static int topology(struct xe_gt *gt, struct drm_printer *p)
>  {
>  	xe_pm_runtime_get(gt_to_xe(gt));
> @@ -303,6 +327,13 @@ static const struct drm_info_list vf_safe_debugfs_list[] = {
>  	{"hwconfig", .show = xe_gt_debugfs_simple_show, .data = hwconfig},
>  };
>
> +/*
> + * only for GT debugfs files which are valid on VF. Not valid on PF.
> + */
> +static const struct drm_info_list vf_only_debugfs_list[] = {
> +	{"sa_info_vf_ccs", .show = xe_gt_debugfs_simple_show, .data = sa_info_vf_ccs},
> +};
> +
>  /* everything else should be added here */
>  static const struct drm_info_list pf_only_debugfs_list[] = {
>  	{"hw_engines", .show = xe_gt_debugfs_simple_show, .data = hw_engines},
> @@ -419,6 +450,11 @@ void xe_gt_debugfs_register(struct xe_gt *gt)
>  		drm_debugfs_create_files(pf_only_debugfs_list,
>  					 ARRAY_SIZE(pf_only_debugfs_list),
>  					 root, minor);
> +	else
> +		drm_debugfs_create_files(vf_only_debugfs_list,
> +					 ARRAY_SIZE(vf_only_debugfs_list),
> +					 root, minor);
> +
>
>  	xe_uc_debugfs_register(&gt->uc, root);
>
> diff --git a/drivers/gpu/drm/xe/xe_sriov.c b/drivers/gpu/drm/xe/xe_sriov.c
> index a0eab44c0e76..87911fb4eea7 100644
> --- a/drivers/gpu/drm/xe/xe_sriov.c
> +++ b/drivers/gpu/drm/xe/xe_sriov.c
> @@ -15,6 +15,7 @@
>  #include "xe_sriov.h"
>  #include "xe_sriov_pf.h"
>  #include "xe_sriov_vf.h"
> +#include "xe_sriov_vf_ccs.h"
>
>  /**
>   * xe_sriov_mode_to_string - Convert enum value to string.
> @@ -157,3 +158,21 @@ const char *xe_sriov_function_name(unsigned int n, char *buf, size_t size)
>  	strscpy(buf, "PF", size);
>  	return buf;
>  }
> +
> +/**
> + * xe_sriov_late_init() - SR-IOV late initialization functions.
> + * @xe: the &xe_device to initialize
> + *
> + * On VF this function will initialize code for CCS migration.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_late_init(struct xe_device *xe)
> +{
> +	int err = 0;
> +
> +	if (IS_VF_CCS_INIT_NEEDED(xe))
> +		err = xe_sriov_vf_ccs_init(xe);
> +
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov.h b/drivers/gpu/drm/xe/xe_sriov.h
> index 688fbabf08f1..0e0c1abf2d14 100644
> --- a/drivers/gpu/drm/xe/xe_sriov.h
> +++ b/drivers/gpu/drm/xe/xe_sriov.h
> @@ -18,6 +18,7 @@ const char *xe_sriov_function_name(unsigned int n, char *buf, size_t len);
>  void xe_sriov_probe_early(struct xe_device *xe);
>  void xe_sriov_print_info(struct xe_device *xe, struct drm_printer *p);
>  int xe_sriov_init(struct xe_device *xe);
> +int xe_sriov_late_init(struct xe_device *xe);
>
>  static inline enum xe_sriov_mode xe_device_sriov_mode(const struct xe_device *xe)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_sriov_types.h b/drivers/gpu/drm/xe/xe_sriov_types.h
> index ca94382a721e..8abfdb2c5ead 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_types.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_types.h
> @@ -71,6 +71,11 @@ struct xe_device_vf {
>  		/** @migration.gt_flags: Per-GT request flags for VF migration recovery */
>  		unsigned long gt_flags;
>  	} migration;
> +
> +	struct {
> +		/** @initialized: Initialization of vf ccs is completed or not */
> +		bool initialized;
> +	} ccs;
>  };
>
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> new file mode 100644
> index 000000000000..ff5ad472eb96
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> @@ -0,0 +1,210 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#include "instructions/xe_mi_commands.h"
> +#include "instructions/xe_gpu_commands.h"
> +#include "xe_bo.h"
> +#include "xe_device.h"
> +#include "xe_migrate.h"
> +#include "xe_sa.h"
> +#include "xe_sriov_printk.h"
> +#include "xe_sriov_vf_ccs.h"
> +#include "xe_sriov_vf_ccs_types.h"
> +
> +/**
> + * DOC: VF save/restore of compression Meta Data
> + *
> + * VF KMD registers two special contexts/LRCAs.
> + *
> + * Save Context/LRCA: contain necessary cmds+page table to trigger Meta data /
> + * compression control surface (Aka CCS) save in regular System memory in VM.
> + *
> + * Restore Context/LRCA: contain necessary cmds+page table to trigger Meta data /
> + * compression control surface (Aka CCS) Restore from regular System memory in
> + * VM to corresponding CCS pool.
> + *
> + * Below diagram explain steps needed for VF save/Restore of compression Meta Data::
> + *
> + *   CCS Save           CCS Restore          VF KMD             GuC       BCS
> + *     LRCA                LRCA
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |           Create Save LRCA           |                 |         |
> + *     [ ]<----------------------------------[ ]                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   | Register save   |         |
> + *      |                   |                   | LRCA with Guc   |         |
> + *      |                   |                  [ ]--------------->[ ]       |
> + *      |                   |                   |                 |         |
> + *      |          Create restore LRCA          |                 |         |
> + *      |                  [ ]<----------------[ ]                |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   | Register restore|         |
> + *      |                   |                   | LRCA with Guc   |         |
> + *      |                   |                  [ ]--------------->[ ]       |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                  [ ]-------------------       |
> + *      |                   |                  [ ] Allocate main   |        |
> + *      |                   |                  [ ] memory.         |        |
> + *      |                   |                  [ ] Allocate CCS    |        |
> + *      |                   |                  [ ] memory.         |        |
> + *      |                   |                  [ ] Update Main     |        |
> + *     [ ]<----------------------------------[ ] memory & CCS     |        |
> + *      |                  [ ]<----------------[ ] pages PPGTT + BB |        |
> + *      |                   |                  [ ] cmds to save &   |        |
> + *      |                   |                  [ ] restore.         |        |
> + *      |                   |                  [ ]<------------------       |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      :                   :                   :                 :         :
> + *  ---------------------------- VF Paused -------------------------------------
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |Schedule |
> + *      |                   |                   |                 |CCS Save |
> + *      |                   |                   |                 | LRCA    |
> + *      |                   |                   |                [ ]------>[ ]
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |CCS save |
> + *      |                   |                   |                 |completed|
> + *      |                   |                   |                [ ]<------[ ]
> + *      |                   |                   |                 |         |
> + *      :                   :                   :                 :         :
> + *  ---------------------------- VM Migrated -----------------------------------
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      :                   :                   :                 :         :
> + *  ---------------------------- VF Resumed ------------------------------------
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                  [ ]--------------  |         |
> + *      |                   |                  [ ] Fix up GGTT |  |         |
> + *      |                   |                  [ ]<-------------  |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   | Notify          |         |
> + *      |                   |                   | VF_RESFIX_DONE  |         |
> + *      |                   |                  [ ]--------------->[ ]       |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |Schedule |
> + *      |                   |                   |                 |CCS      |
> + *      |                   |                   |                 |Restore  |
> + *      |                   |                   |                 |LRCA     |
> + *      |                   |                   |                [ ]------>[ ]
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |CCS      |
> + *      |                   |                   |                 |restore  |
> + *      |                   |                   |                 |completed|
> + *      |                   |                   |                [ ]<------[ ]
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      |                   |                   | VF_RESFIX_DONE  |         |
> + *      |                   |                   | complete        |         |
> + *      |                   |                   | notification    |         |
> + *      |                   |                  [ ]<---------------[ ]       |
> + *      |                   |                   |                 |         |
> + *      |                   |                   |                 |         |
> + *      :                   :                   :                 :         :
> + *  ------------------------- Continue VM restore ------------------------------
> + */
> +
> +static u64 get_ccs_bb_pool_size(struct xe_device *xe)
> +{
> +	u64 sys_mem_size, ccs_mem_size, ptes, bb_pool_size;
> +	struct sysinfo si;
> +
> +	si_meminfo(&si);
> +	sys_mem_size = si.totalram * si.mem_unit;
> +	ccs_mem_size = sys_mem_size / NUM_BYTES_PER_CCS_BYTE(xe);
> +	ptes = DIV_ROUND_UP(sys_mem_size + ccs_mem_size, XE_PAGE_SIZE);
> +
> +	/**
> +	 * We need below BB size to hold PTE mappings and some DWs for copy
> +	 * command. In reality, we need space for many copy commands. So, let
> +	 * us allocate double the calculated size which is enough to hold GPU
> +	 * instructions for the whole region.
> +	 */
> +	bb_pool_size = ptes * sizeof(u32);
> +
> +	return round_up(bb_pool_size * 2, SZ_1M);
> +}
> +
> +static int alloc_bb_pool(struct xe_tile *tile, struct xe_tile_vf_ccs *ctx)
> +{
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_sa_manager *sa_manager;
> +	u64 bb_pool_size;
> +	int offset, err;
> +
> +	bb_pool_size = get_ccs_bb_pool_size(xe);
> +	xe_sriov_info(xe, "Allocating %s CCS BB pool size = %lldMB\n",
> +		      ctx->ctx_id ? "Restore" : "Save", bb_pool_size / SZ_1M);
> +
> +	sa_manager = xe_sa_bo_manager_init(tile, bb_pool_size, SZ_16);
> +
> +	if (IS_ERR(sa_manager)) {
> +		xe_sriov_err(xe, "Suballocator init failed with error: %pe\n",
> +			     sa_manager);
> +		err = PTR_ERR(sa_manager);
> +		return err;
> +	}
> +
> +	offset = 0;
> +	xe_map_memset(xe, &sa_manager->bo->vmap, offset, MI_NOOP,
> +		      bb_pool_size);
> +
> +	offset = bb_pool_size - sizeof(u32);
> +	xe_map_wr(xe, &sa_manager->bo->vmap, offset, u32, MI_BATCH_BUFFER_END);
> +
> +	ctx->mem.ccs_bb_pool = sa_manager;
> +
> +	return 0;
> +}
> +
> +/**
> + * xe_sriov_vf_ccs_init - Setup LRCA for save & restore.
> + * @xe: the &xe_device to start recovery on
> + *
> + * This function shall be called only by VF. It initializes
> + * LRCA and suballocator needed for CCS save & restore.
> + *
> + * Return: 0 on success. Negative error code on failure.
> + */
> +int xe_sriov_vf_ccs_init(struct xe_device *xe)
> +{
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	struct xe_migrate *migrate;
> +	struct xe_tile_vf_ccs *ctx;
> +	struct xe_tile *tile;
> +	int tile_id, err;
> +
> +	xe_assert(xe, IS_SRIOV_VF(xe));
> +	xe_assert(xe, !IS_DGFX(xe));
> +	xe_assert(xe, xe_device_has_flat_ccs(xe));
> +
> +	for_each_tile(tile, xe, tile_id) {

Nit: This only needs to be done for one tile. All iGPUs are single-tile,
so this works, but you could rewrite this entire series to avoid the
loops and use xe_device_get_root_tile() instead. That might be a bit
more future-proof.

Nit aside, this LGTM and you can keep my previous:

Acked-by: Matthew Brost

> +		for_each_ccs_rw_ctx(ctx_id) {
> +			ctx = &tile->sriov.vf.ccs[ctx_id];
> +			ctx->ctx_id = ctx_id;
> +
> +			migrate = xe_migrate_init(tile);
> +			if (IS_ERR(migrate)) {
> +				err = PTR_ERR(migrate);
> +				goto err_ret;
> +			}
> +			ctx->migrate = migrate;
> +
> +			err = alloc_bb_pool(tile, ctx);
> +			if (err)
> +				goto err_ret;
> +		}
> +	}
> +
> +	xe->sriov.vf.ccs.initialized = 1;
> +
> +	return 0;
> +
> +err_ret:
> +	return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
> new file mode 100644
> index 000000000000..5df9ba028d14
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_SRIOV_VF_CCS_H_
> +#define _XE_SRIOV_VF_CCS_H_
> +
> +struct xe_device;
> +
> +int xe_sriov_vf_ccs_init(struct xe_device *xe);
> +
> +#endif
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> new file mode 100644
> index 000000000000..6dc279d206ec
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs_types.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_SRIOV_VF_CCS_TYPES_H_
> +#define _XE_SRIOV_VF_CCS_TYPES_H_
> +
> +#define for_each_ccs_rw_ctx(id__) \
> +	for ((id__) = 0; (id__) < XE_SRIOV_VF_CCS_CTX_COUNT; (id__)++)
> +
> +#define IS_VF_CCS_READY(xe) ({ \
> +	struct xe_device *___xe = (xe); \
> +	xe_assert(___xe, IS_SRIOV_VF(___xe)); \
> +	___xe->sriov.vf.ccs.initialized; \
> +	})
> +
> +#define IS_VF_CCS_INIT_NEEDED(xe) ({\
> +	struct xe_device *___xe = (xe); \
> +	IS_SRIOV_VF(___xe) && !IS_DGFX(___xe) && \
> +	xe_device_has_flat_ccs(___xe) && GRAPHICS_VER(___xe) >= 20; \
> +	})
> +
> +enum xe_sriov_vf_ccs_rw_ctxs {
> +	XE_SRIOV_VF_CCS_READ_CTX,
> +	XE_SRIOV_VF_CCS_WRITE_CTX,
> +	XE_SRIOV_VF_CCS_CTX_COUNT
> +};
> +
> +struct xe_migrate;
> +struct xe_sa_manager;
> +
> +struct xe_tile_vf_ccs {
> +	/** @ctx_id: Id to which context it belongs to */
> +	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> +	/** @migrate: Migration helper for save/restore of CCS data */
> +	struct xe_migrate *migrate;
> +
> +	struct {
> +		/** @ccs_bb_pool: Pool from which batch buffers are allocated. */
> +		struct xe_sa_manager *ccs_bb_pool;
> +	} mem;
> +};
> +
> +#endif
> --
> 2.43.0
>