From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 23 Mar 2022 09:15:19 +0100
From: Takashi Iwai
To: Amadeusz Sławiński
Cc: alsa-devel@alsa-project.org, Hu Jiahui, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] ALSA: pcm: Fix races among concurrent prepare and
 hw_params/hw_free calls
References: <20220322170720.3529-1-tiwai@suse.de>
	<20220322170720.3529-4-tiwai@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 23 Mar 2022 09:08:25 +0100,
Amadeusz Sławiński wrote:
> 
> On 3/22/2022 6:07 PM, Takashi Iwai wrote:
> > Like the previous fixes to the hw_params and hw_free ioctl races, we need
> > to paper over the concurrent prepare ioctl calls against hw_params and
> > hw_free, too.
> >
> > This patch implements the locking with the existing
> > runtime->buffer_mutex for prepare ioctls.  Unlike the previous case
> > for snd_pcm_hw_params() and snd_pcm_hw_free(), snd_pcm_prepare() is
> > performed on the linked streams, hence the lock can't be applied
> > simply at the top.  For tracking the lock in each linked substream, we
> > modify snd_pcm_action_group() slightly and apply the buffer_mutex for
> > the stream_lock=false case (formerly no lock was applied there).
> >
> > Cc:
> > Signed-off-by: Takashi Iwai
> > ---
> >   sound/core/pcm_native.c | 32 ++++++++++++++++++--------------
> >   1 file changed, 18 insertions(+), 14 deletions(-)
> >
> > diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> > index 266895374b83..0e4fbf5fd87b 100644
> > --- a/sound/core/pcm_native.c
> > +++ b/sound/core/pcm_native.c
> > @@ -1190,15 +1190,17 @@ struct action_ops {
> >   static int snd_pcm_action_group(const struct action_ops *ops,
> >                                   struct snd_pcm_substream *substream,
> >                                   snd_pcm_state_t state,
> > -                                bool do_lock)
> > +                                bool stream_lock)
> >   {
> >           struct snd_pcm_substream *s = NULL;
> >           struct snd_pcm_substream *s1;
> >           int res = 0, depth = 1;
> >
> >           snd_pcm_group_for_each_entry(s, substream) {
> > -                if (do_lock && s != substream) {
> > -                        if (s->pcm->nonatomic)
> > +                if (s != substream) {
> > +                        if (!stream_lock)
> > +                                mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> > +                        else if (s->pcm->nonatomic)
> >                                   mutex_lock_nested(&s->self_group.mutex, depth);
> >                           else
> >                                   spin_lock_nested(&s->self_group.lock, depth);
> 
> Maybe
> 	if (!stream_lock)
> 		mutex_lock_nested(&s->runtime->buffer_mutex, depth);
> 	else
> 		snd_pcm_group_lock(&s->self_group, s->pcm->nonatomic);
> ?

No, it must take the nested locks with the given subclass.  That's why
it has been open-coded here, too.
> > @@ -1226,18 +1228,18 @@ static int snd_pcm_action_group(const struct action_ops *ops,
> >                   ops->post_action(s, state);
> >           }
> >    _unlock:
> > -        if (do_lock) {
> > -                /* unlock streams */
> > -                snd_pcm_group_for_each_entry(s1, substream) {
> > -                        if (s1 != substream) {
> > -                                if (s1->pcm->nonatomic)
> > -                                        mutex_unlock(&s1->self_group.mutex);
> > -                                else
> > -                                        spin_unlock(&s1->self_group.lock);
> > -                        }
> > -                        if (s1 == s) /* end */
> > -                                break;
> > +        /* unlock streams */
> > +        snd_pcm_group_for_each_entry(s1, substream) {
> > +                if (s1 != substream) {
> > +                        if (!stream_lock)
> > +                                mutex_unlock(&s1->runtime->buffer_mutex);
> > +                        else if (s1->pcm->nonatomic)
> > +                                mutex_unlock(&s1->self_group.mutex);
> > +                        else
> > +                                spin_unlock(&s1->self_group.lock);
> 
> And similarly to above, use snd_pcm_group_unlock() here?

This side could use that macro, but it's still better to keep the
call pattern consistent with the lock side.

thanks,

Takashi