Date: Thu, 28 Mar 2024 10:30:08 +0000
From: Quentin Perret
To: Colton Lewis
Cc: kvm@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
    james.morse@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com,
    catalin.marinas@arm.com, will@kernel.org, pbonzini@redhat.com,
    mingo@redhat.com, peterz@infradead.org, juri.lelli@redhat.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
    bristot@redhat.com, vschneid@redhat.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] KVM: arm64: Add KVM_CAP to control WFx trapping

Hi Colton,

On Monday 25 Mar 2024 at 20:12:04 (+0000), Colton Lewis wrote:
> Thanks for the feedback.
>
> Quentin Perret writes:
>
> > On Friday 22 Mar 2024 at 14:24:35 (+0000), Quentin Perret wrote:
> > > On Tuesday 19 Mar 2024 at 16:43:41 (+0000), Colton Lewis wrote:
> > > > Add a KVM_CAP to control WFx (WFI or WFE) trapping based on scheduler
> > > > runqueue depth. This is so they can be passed through if the runqueue
> > > > is shallow or the CPU has support for direct interrupt injection. They
> > > > may always be trapped by setting this value to 0. Technically this
> > > > means traps will be cleared when the runqueue depth is 0, but that
> > > > implies nothing is running anyway, so there is no reason to care. The
> > > > default value is 1 to preserve the previous behavior before adding this
> > > > option.
> > >
> > > I recently discovered that this was enabled by default, but it's not
> > > obvious to me that everyone will want this enabled, so I'm in favour of
> > > figuring out a way to turn it off (in fact we might want to make this
> > > feature opt-in, as the status quo used to be to always trap).
>
> Setting the introduced threshold to zero will cause it to trap whenever
> something is running. Is there a problem with doing it that way?

No problem per se, I was simply hoping we could set the default to zero
to revert to the old behaviour. I don't think removing WFx traps was a
universally desirable behaviour, so it probably should have been opt-in
from the start.

> I'd also be interested to get more input before changing the current
> default behavior.

Ack, that is my personal opinion.
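To make the semantics concrete, the check being discussed could look roughly
like the sketch below. This is illustrative only, not the code from the patch:
nr_running_this_rq() and wfx_trap_rq_depth are made-up names, and
vcpu_set_wfx_traps()/vcpu_clear_wfx_traps() stand in for whatever actually
toggles the HCR_EL2.TWI/TWE bits on vcpu load.

	/*
	 * Illustrative sketch: on vcpu load, compare this CPU's runqueue
	 * depth against a per-VM threshold configured via the new cap.
	 * With a threshold of 0 the comparison holds whenever anything is
	 * runnable, so WFx is effectively always trapped; with the default
	 * of 1, traps are only set when the vcpu shares the CPU with at
	 * least one other runnable task.
	 */
	static void update_wfx_traps(struct kvm_vcpu *vcpu)
	{
		/* nr_running_this_rq() and wfx_trap_rq_depth are hypothetical */
		if (nr_running_this_rq() > vcpu->kvm->arch.wfx_trap_rq_depth)
			vcpu_set_wfx_traps(vcpu);	/* trap WFI/WFE to the host */
		else
			vcpu_clear_wfx_traps(vcpu);	/* let the guest execute WFx */
	}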
> > > There are a few potential issues I see with having this enabled:
> > >
> > > - a lone vcpu thread on a CPU will completely screw up the host
> > > scheduler's load tracking metrics if the vCPU actually spends a
> > > significant amount of time in WFI (the PELT signal will no longer
> > > be a good proxy for "how much CPU time does this task need");
> > >
> > > - the scheduler's decision will massively impact the behaviour of the
> > > vcpu task itself. Co-scheduling a task with a vcpu task (or not) will
> > > massively impact the perceived behaviour of the vcpu task in a way
> > > that is entirely unpredictable to the scheduler;
> > >
> > > - while the above problems might be OK for some users, I don't think
> > > this will always be true, e.g. when running on big.LITTLE systems the
> > > above sounds nightmare-ish;
> > >
> > > - the guest spending long periods of time in WFI prevents the host from
> > > being able to enter deeper idle states, which will impact power very
> > > negatively;
> > >
> > > And probably a whole bunch of other things.
> > >
> > > > Think about this option as a threshold. The instruction will be trapped
> > > > if the runqueue depth is higher than the threshold.
> > >
> > > So talking about the exact interface, I'm not sure exposing this to
> > > userspace is really appropriate. The current rq depth is next to
> > > impossible for userspace to control well.
>
> Using runqueue depth is going off of a suggestion from Oliver [1], who I've
> also talked to internally at Google a few times about this.
>
> But hearing your comment makes me lean more towards having some
> enumeration of behaviors like TRAP_ALWAYS, TRAP_NEVER,
> TRAP_IF_MULTIPLE_TASKS.

Do you guys really expect to set TRAP_IF_MULTIPLE_TASKS? Again, the rq
depth is quite hard to reason about from userspace, so I'm not sure
anybody will really want that. A simple on/off switch might be simpler.

> > > My gut feeling tells me we might want to gate all of this on
> > > PREEMPT_FULL instead, since PREEMPT_FULL is pretty much a way to say
> > > "I'm willing to give up scheduler tracking accuracy to gain throughput
> > > when I've got a task running alone on a CPU". Thoughts?
>
> > And obviously I meant s/PREEMPT_FULL/NOHZ_FULL, but hopefully that was
> > clear :-)
>
> Sounds good to me, but I've not touched anything scheduling-related before.

Do you guys use NOHZ_FULL in prod? If not, that idea might very well be a
non-starter, because switching to NOHZ_FULL would be a big ask. So, yeah,
I'm curious :)

Thanks,
Quentin
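For concreteness, the enumerated-policy interface floated above could look
something like the following from the VMM side. Every name here is
hypothetical: the patch under review defines a numeric runqueue-depth
threshold, not these constants, and the final cap name and semantics are
exactly what is being debated.

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/*
	 * Hypothetical UAPI constants -- not defined by the patch under
	 * review; shown only to illustrate the TRAP_ALWAYS / TRAP_NEVER /
	 * TRAP_IF_MULTIPLE_TASKS idea mentioned in the thread.
	 */
	#define KVM_CAP_ARM_WFX_TRAP_POLICY		240	/* made-up cap number */
	#define KVM_ARM_WFX_TRAP_ALWAYS			0	/* pre-existing behaviour */
	#define KVM_ARM_WFX_TRAP_IF_MULTIPLE_TASKS	1	/* current default */
	#define KVM_ARM_WFX_TRAP_NEVER			2	/* always pass WFx through */

	/* Sketch of a VMM selecting a WFx trap policy for a VM. */
	static int set_wfx_trap_policy(int vm_fd, __u64 policy)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_ARM_WFX_TRAP_POLICY,
			.args = { policy },
		};

		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}

A VMM that wants the old behaviour back would then call
set_wfx_trap_policy(vm_fd, KVM_ARM_WFX_TRAP_ALWAYS) at VM creation, which is
the opt-in/opt-out question the thread is really about.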