Date: Tue, 2 May 2023 20:03:07 -0700
From: Jakub Kicinski
To: Manish Chopra
Cc: Stephen Hemminger, Sudarsana Kalluru, "David S. Miller", netdev@vger.kernel.org
Subject: Re: [PATCH v3 net] qed/qede: Fix scheduling while atomic
Message-ID: <20230502200307.11bbe4ef@kernel.org>
In-Reply-To: <20230428102651.01215795@hermes.local>
References: <20230428161337.8485-1-manishc@marvell.com> <20230428102651.01215795@hermes.local>

On Fri, 28 Apr 2023 10:26:51 -0700 Stephen Hemminger wrote:

> On Fri, 28 Apr 2023 09:13:37 -0700
> Manish Chopra wrote:
>
> > -		usleep_range(1000, 2000);
> > +
> > +		if (is_atomic)
> > +			udelay(QED_BAR_ACQUIRE_TIMEOUT_UDELAY);
> > +		else
> > +			usleep_range(QED_BAR_ACQUIRE_TIMEOUT_USLEEP,
> > +				     QED_BAR_ACQUIRE_TIMEOUT_USLEEP * 2);
> >  }
>
> This is a variant of conditional locking, which is an ugly design pattern.
> It breaks static checking tools and is a source of more bugs.
>
> Better to fix the infrastructure or the caller so it does not spin, or to
> have two different functions.

FWIW the most common way to solve this issue is to use a delayed work item
which reads out the stats periodically from non-atomic context, and to
return a stashed copy from get_stats64.
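
For illustration only, a minimal sketch of the pattern Jakub describes, not
the actual qed/qede code: the structure layout, the my_* names, and the
refresh interval are all assumptions. The delayed work refreshes a cached
snapshot from process context (where it may sleep), and .ndo_get_stats64
only copies that snapshot, so it stays safe in atomic context.

    #include <linux/netdevice.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    #define MY_STATS_REFRESH_JIFFIES	(5 * HZ)	/* hypothetical interval */

    struct my_edev {
    	struct net_device *ndev;
    	struct delayed_work stats_work;
    	spinlock_t stats_lock;			/* protects cached_stats */
    	struct rtnl_link_stats64 cached_stats;
    };

    /* Hypothetical stand-in for the slow BAR/register reads; may sleep,
     * so it must only run from process context (here: the workqueue). */
    static void my_hw_read_stats(struct my_edev *edev,
    			     struct rtnl_link_stats64 *stats)
    {
    	/* sleeping hardware access would go here */
    }

    /* Workqueue callback: refresh the snapshot, then re-arm itself. */
    static void my_stats_work(struct work_struct *work)
    {
    	struct my_edev *edev = container_of(to_delayed_work(work),
    					    struct my_edev, stats_work);
    	struct rtnl_link_stats64 stats = {};

    	my_hw_read_stats(edev, &stats);

    	spin_lock_bh(&edev->stats_lock);
    	edev->cached_stats = stats;
    	spin_unlock_bh(&edev->stats_lock);

    	schedule_delayed_work(&edev->stats_work, MY_STATS_REFRESH_JIFFIES);
    }

    /* .ndo_get_stats64 may be called from atomic context, so it only
     * copies the stashed snapshot and never touches the hardware. */
    static void my_get_stats64(struct net_device *dev,
    			   struct rtnl_link_stats64 *stats)
    {
    	struct my_edev *edev = netdev_priv(dev);

    	spin_lock_bh(&edev->stats_lock);
    	*stats = edev->cached_stats;
    	spin_unlock_bh(&edev->stats_lock);
    }

    /* In probe/open (sketch):
     *	spin_lock_init(&edev->stats_lock);
     *	INIT_DELAYED_WORK(&edev->stats_work, my_stats_work);
     *	schedule_delayed_work(&edev->stats_work, 0);
     */

On teardown the driver would also need cancel_delayed_work_sync() on
stats_work before freeing the private data, so the work cannot run after
the device goes away.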