From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1d2d4596-93d6-4d87-babc-084b8d6c2d98@amd.com>
Date: Fri, 3 Apr 2026 15:55:22 +0530
From: K Prateek Nayak
To: Peter Zijlstra
CC: John Stultz, LKML, Joel Fernandes, Qais Yousef, Ingo Molnar,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
 Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Mel Gorman, Will Deacon,
 Waiman Long, Boqun Feng, Paul E. McKenney, Metin Kaya, Xuewen Yan,
 Thomas Gleixner, Daniel Lezcano, Suleiman Souhlal, kuyo chang, hupu
Subject: Re: [PATCH v26 00/10] Simple Donor Migration for Proxy Execution
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20260324191337.1841376-1-jstultz@google.com>
 <36e96f87-a682-436e-aefc-13e2e5810019@amd.com>
 <20260327114844.GQ2872@noisy.programming.kicks-ass.net>
 <33e60181-1809-44e1-bc4c-8ac7f79d49d6@amd.com>
 <20260327160017.GK3738010@noisy.programming.kicks-ass.net>
 <1515d405-62fc-4952-842f-b69e2bf192c0@amd.com>
 <20260402155055.GV3738010@noisy.programming.kicks-ass.net>
 <20260403095225.GY3738010@noisy.programming.kicks-ass.net>
In-Reply-To: <20260403095225.GY3738010@noisy.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

Hello Peter,

On 4/3/2026 3:22 PM, Peter Zijlstra wrote:
>>>> +	if (p->se.sched_delayed)
>>>> +		enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
>>>
>>> I can't precisely remember the details now, but I believe we need to
>>> handle enqueueing sched_delayed tasks before handling blocked_on
>>> tasks.
>>
>> So proxy_deactivate() can still delay the task, leading to
>> task_on_rq_queued() and the wakeup coming to ttwu_runnable(), so either
>> we can dequeue it fully in proxy_deactivate() or we need to teach
>> block_task() to add a DEQUEUE_DELAYED flag when task_is_blocked().
>>
>> I think the former is cleaner but we don't decay lag for fair tasks :-(
>>
>> We can't simply re-enqueue it either since proxy migration might have
>> put it on a CPU outside its affinity mask, so we need to take a full
>> dequeue + wakeup in ttwu_runnable().
>
> Right, sanest option is to have ttwu_runnable() deal with this.

Ack! For now I've used John's original approach of doing the re-enqueue
before the dequeue if we find a delayed + blocked_on task.

>
>>>> -static void proxy_force_return(struct rq *rq, struct rq_flags *rf,
>>>> -			       struct task_struct *p)
>>>> -	__must_hold(__rq_lockp(rq))
>>>> -{
>>>> -}
>>>> -
>>
>> Went a little heavy on the delete there, did you? :-)
>
> Well, I thought that was the whole idea, have ttwu() handle this :-)
>
>>>> /*
>>>>  * Find runnable lock owner to proxy for mutex blocked donor
>>>>  *
>>>> @@ -6777,7 +6723,7 @@ find_proxy_task(struct rq *rq, struct ta
>>>>  		clear_task_blocked_on(p, PROXY_WAKING);
>>>>  		return p;
>>>>  	}
>>>> -	goto force_return;
>>>> +	goto deactivate;
>>>> }
>>
>> This makes sense if we preserve the !TASK_RUNNING + p->blocked_on
>> invariant since we'll definitely get a wakeup here.
>
> Right, so TASK_RUNNING must imply !->blocked_on.
>
>>>>
>>>> /*
>>>> @@ -6812,7 +6758,7 @@ find_proxy_task(struct rq *rq, struct ta
>>>>  		__clear_task_blocked_on(p, NULL);
>>>>  		return p;
>>>>  	}
>>>> -	goto force_return;
>>>> +	goto deactivate;
>>
>> This too makes sense considering !owner implies some task will be woken
>> up but ... if we take this task off and another task steals the mutex,
>> this task will no longer be able to proxy it since it is completely
>> blocked now.
>>
>> Probably not desired. We should at least let it run and see if it can
>> get the mutex, and evaluate "p->blocked_on" again, since !owner is a
>> limbo state.
>
> I need to go re-read the mutex side of things, but doesn't that do
> hand-off way more aggressively?

Ack, but we have optimistic spinning enabled for performance reasons, so
there is still a chance that the task may not get the mutex. Now that I
think about it though, it will definitely receive a wakeup, so it should
be able to re-establish the chain when it gets on a CPU again.

> Anyway, one thing that is completely missing is a fast path for when the
> task is still inside its valid mask. I suspect adding that back will
> cure some of these issues.
>
>> So I added the following on top of Peter's diff on top of
>> queue:sched/core, and it hasn't crashed and burnt yet when running a
>> handful of instances of sched-messaging with a mix of fair and
>> SCHED_RR priority:
>>
>> (Includes John's findings from the parallel thread)
>>
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 5b2b2451720a..e845e3a8ae65 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2160,7 +2160,7 @@ void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
>>  	dequeue_task(rq, p, flags);
>>  }
>>
>> -static bool block_task(struct rq *rq, struct task_struct *p, unsigned long task_state)
>> +static void block_task(struct rq *rq, struct task_struct *p, unsigned long task_state)
>>  {
>>  	int flags = DEQUEUE_NOCLOCK;
>>
>> @@ -3696,6 +3696,20 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
>>  	}
>>  }
>>
>> +static void zap_balance_callbacks(struct rq *rq);
>> +
>> +static inline void proxy_reset_donor(struct rq *rq)
>> +{
>> +#ifdef CONFIG_SCHED_PROXY_EXEC
>> +	WARN_ON_ONCE(rq->curr == rq->donor);
>> +
>> +	put_prev_set_next_task(rq, rq->donor, rq->curr);
>> +	rq_set_donor(rq, rq->curr);
>> +	zap_balance_callbacks(rq);
>> +	resched_curr(rq);
>> +#endif
>> +}
>
> This one hurts my brain :-)
>
>> /*
>>  * Consider @p being inside a wait loop:
>>  *
>> @@ -3730,6 +3744,8 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>>  		return 0;
>>
>>  	update_rq_clock(rq);
>> +	if (p->se.sched_delayed)
>> +		enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
>
> Right, this works but seems wasteful, might be better to add
> DEQUEUE_DELAYED in the blocked_on case.
>
>>  	if (sched_proxy_exec() && p->blocked_on) {
>
> So I had doubts about this lockless test of ->blocked_on, I still cannot
> convince myself it is correct.

Let me give it a try: a task's "blocked_on" starts off as a valid mutex
and can optionally be transitioned to PROXY_WAKING (!= NULL) before
being cleared.
If blocked_on is cleared directly, the PROXY_WAKING transition never
happens even if someone does set_task_blocked_on_waking(), since we bail
out early if !p->blocked_on. All "p->blocked_on" transitions happen with
the "blocked_on_lock" held.

So that begs the question: when is "blocked_on" actually cleared?

1) If the task is task_on_rq_queued(), we either clear it in schedule()
   (find_proxy_task() to be precise) or in ttwu_runnable() - both with
   the rq_lock held.

2) *NEW* If the task is off the rq and is waking up, it means there was
   a ttwu_state_match() and, without proxy, the task would have woken up
   and executed on the CPU. Since the task is completely off the rq,
   schedule() cannot clear p->blocked_on. The only other remote
   transition possible is to PROXY_WAKING (!= NULL).

So *inspecting* the p->blocked_on relation without the blocked_on_lock
held should be fine to know whether the task has a blocked_on relation.
Only the task itself can set "p->blocked_on" to a valid mutex, while
running on the CPU, so it is out of the question that we suddenly see a
transition to a new mutex while we are in schedule() or in the middle of
waking the task.

>
>> 		guard(raw_spinlock)(&p->blocked_lock);
>> 		struct mutex *lock = p->blocked_on;
>> @@ -3738,15 +3754,20 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>>  			 * TASK_WAKING is a special state and results in
>>  			 * DEQUEUE_SPECIAL such that the task will actually be
>>  			 * forced from the runqueue.
>> +			 *
>> +			 * XXX: All of this is now equivalent of
>> +			 * proxy_needs_return() from John's series :-)
>>  			 */
>> -			block_task(rq, p, TASK_WAKING);
>>  			p->blocked_on = NULL;
>> +			if (task_current(rq, p))
>> +				goto out;
>
> Right, fair enough :-) This could also be done when rq->cpu is inside
> p->cpus_ptr mask, because in that case we don't strictly need a
> migration. Thinking about that was on the todo list.

Ack. One concern there is that the task was out of the load balancer's
purview while "p->blocked_on" was set, and this could be a good spot for
a balance during wakeup.
>
>> +			if (task_current_donor(rq, p))
>> +				proxy_reset_donor(rq);
>
> Fun fun fun :-)
>
>> +			block_task(rq, p, TASK_WAKING);
>>  			return 0;
>>  		}
>>  	}
>> -
>> -	if (p->se.sched_delayed)
>> -		enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
>> +out:
>>  	if (!task_on_cpu(rq, p)) {
>>  		/*
>>  		 * When on_rq && !on_cpu the task is preempted, see if
>> @@ -4256,6 +4277,15 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>>  	 */
>>  	smp_cond_load_acquire(&p->on_cpu, !VAL);
>>
>> +	/*
>> +	 * We never clear the blocked_on relation on proxy_deactivate.
>> +	 * If we don't clear it here, we have TASK_RUNNING + p->blocked_on
>> +	 * when waking up. Since this is a fully blocked, off CPU task
>> +	 * waking up, it should be safe to clear the blocked_on relation.
>> +	 */
>> +	if (task_is_blocked(p))
>> +		clear_task_blocked_on(p, NULL);
>> +
>
> Aah, yes! This is when find_proxy_task() hits deactivate() for us and we
> skip ttwu_runnable(). We still need to clear ->blocked_on.
>
> I am once again not sure on the lockless nature of accessing
> ->blocked_on.

I hope I have convinced you with the short analysis above :-)

>
>> 	cpu = select_task_rq(p, p->wake_cpu, &wake_flags);
>> 	if (task_cpu(p) != cpu) {
>> 		if (p->in_iowait) {
>> @@ -6977,6 +7007,10 @@ static void __sched notrace __schedule(int sched_mode)
>>  		switch_count = &prev->nvcsw;
>>  	}
>>
>> +	/* See: https://github.com/kudureranganath/linux/commit/0d6a01bb19db39f045d6f0f5fb4d196500091637 */
>> +	if (!prev_state && task_is_blocked(prev))
>> +		clear_task_blocked_on(prev, NULL);
>> +
>
> This one confuses me, ttwu() should never result in ->blocked_on being
> set.

This comes from signal_pending_state() in try_to_block_task() putting
prev into TASK_RUNNING while it still has p->blocked_on set. It is
expected that the task executes and re-evaluates whether it needs to
block on the mutex again, or simply returns -EINTR from
mutex_lock_interruptible().
>
>> pick_again:
>> 	assert_balance_callbacks_empty(rq);
>> 	next = pick_next_task(rq, rq->donor, &rf);

-- 
Thanks and Regards,
Prateek