From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 56C2B33F1
	for ; Sun, 16 Jul 2023 20:49:09 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CEDE3C433C8;
	Sun, 16 Jul 2023 20:49:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1689540549;
	bh=qR7rO5nwXi0mDVrdMGgtYQfZ2HXMTY6iCmypUQKDuvk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=gBz8UWrxQCIox5fY0C0BkRKFTtUkcvVWrQyFhKa85AOZSaKnOMasJGRvaXEO9rboQ
	 1LWoRbgg7tNdpue3lcvBOkN5mycC8v5cxG1lPV42aIA/4rZ8AqUjHDU6TpyUwG0WQC
	 CSpQVFl//MQskUcjjqxBKsSlA0rSUVAG4FalJ4x8=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	BassCheck,
	Tuo Li,
	Jaroslav Kysela,
	Takashi Iwai
Subject: [PATCH 6.1 375/591] ALSA: pcm: Fix potential data race at PCM memory allocation helpers
Date: Sun, 16 Jul 2023 21:48:34 +0200
Message-ID: <20230716194933.619163136@linuxfoundation.org>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230716194923.861634455@linuxfoundation.org>
References: <20230716194923.861634455@linuxfoundation.org>
User-Agent: quilt/0.67
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Takashi Iwai

commit bd55842ed998a622ba6611fe59b3358c9f76773d upstream.

The PCM memory allocation helpers have a sanity check against too many
buffer allocations.  However, the check is performed without a proper
lock and the allocations aren't serialized; this allows a user to
allocate more memory than the predefined max size.

Practically seen, this isn't really a big problem, as it's more or less
a "soft limit" that serves as a sanity check, and it's not possible to
allocate without bound.  But it's still better to address this for more
consistent behavior.

The patch covers the size check in do_alloc_pages() with
card->memory_mutex and increases the accounted allocation size there to
prevent further overflow.  When the actual allocation fails, the size
is decreased accordingly.
Reported-by: BassCheck
Reported-by: Tuo Li
Link: https://lore.kernel.org/r/CADm8Tek6t0WedK+3Y6rbE5YEt19tML8BUL45N2ji4ZAz1KcN_A@mail.gmail.com
Reviewed-by: Jaroslav Kysela
Cc:
Link: https://lore.kernel.org/r/20230703112430.30634-1-tiwai@suse.de
Signed-off-by: Takashi Iwai
Signed-off-by: Greg Kroah-Hartman
---
 sound/core/pcm_memory.c | 44 ++++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 8 deletions(-)

--- a/sound/core/pcm_memory.c
+++ b/sound/core/pcm_memory.c
@@ -31,15 +31,41 @@ static unsigned long max_alloc_per_card
 module_param(max_alloc_per_card, ulong, 0644);
 MODULE_PARM_DESC(max_alloc_per_card, "Max total allocation bytes per card.");
 
+static void __update_allocated_size(struct snd_card *card, ssize_t bytes)
+{
+	card->total_pcm_alloc_bytes += bytes;
+}
+
+static void update_allocated_size(struct snd_card *card, ssize_t bytes)
+{
+	mutex_lock(&card->memory_mutex);
+	__update_allocated_size(card, bytes);
+	mutex_unlock(&card->memory_mutex);
+}
+
+static void decrease_allocated_size(struct snd_card *card, size_t bytes)
+{
+	mutex_lock(&card->memory_mutex);
+	WARN_ON(card->total_pcm_alloc_bytes < bytes);
+	__update_allocated_size(card, -(ssize_t)bytes);
+	mutex_unlock(&card->memory_mutex);
+}
+
 static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
 			  int str, size_t size, struct snd_dma_buffer *dmab)
 {
 	enum dma_data_direction dir;
 	int err;
 
+	/* check and reserve the requested size */
+	mutex_lock(&card->memory_mutex);
 	if (max_alloc_per_card &&
-	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
+	    card->total_pcm_alloc_bytes + size > max_alloc_per_card) {
+		mutex_unlock(&card->memory_mutex);
 		return -ENOMEM;
+	}
+	__update_allocated_size(card, size);
+	mutex_unlock(&card->memory_mutex);
 
 	if (str == SNDRV_PCM_STREAM_PLAYBACK)
 		dir = DMA_TO_DEVICE;
@@ -47,9 +73,14 @@ static int do_alloc_pages(struct snd_car
 		dir = DMA_FROM_DEVICE;
 	err = snd_dma_alloc_dir_pages(type, dev, dir, size, dmab);
 	if (!err) {
-		mutex_lock(&card->memory_mutex);
-		card->total_pcm_alloc_bytes += dmab->bytes;
-		mutex_unlock(&card->memory_mutex);
+		/* the actual allocation size might be bigger than requested,
+		 * and we need to correct the account
+		 */
+		if (dmab->bytes != size)
+			update_allocated_size(card, dmab->bytes - size);
+	} else {
+		/* take back on allocation failure */
+		decrease_allocated_size(card, size);
 	}
 	return err;
 }
@@ -58,10 +89,7 @@ static void do_free_pages(struct snd_car
 {
 	if (!dmab->area)
 		return;
-	mutex_lock(&card->memory_mutex);
-	WARN_ON(card->total_pcm_alloc_bytes < dmab->bytes);
-	card->total_pcm_alloc_bytes -= dmab->bytes;
-	mutex_unlock(&card->memory_mutex);
+	decrease_allocated_size(card, dmab->bytes);
 	snd_dma_free_pages(dmab);
 	dmab->area = NULL;
 }
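
For illustration only (not part of the patch above): a minimal, self-contained
userspace sketch of the check-and-reserve accounting pattern the commit message
describes.  A pthread mutex stands in for card->memory_mutex, a plain counter
for total_pcm_alloc_bytes, and the helper names (reserve_bytes, release_bytes,
alloc_accounted) are hypothetical.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_TOTAL_BYTES (64 * 1024)	/* stand-in for max_alloc_per_card */

static pthread_mutex_t account_lock = PTHREAD_MUTEX_INITIALIZER;
static size_t total_bytes;		/* stand-in for total_pcm_alloc_bytes */

/* Check the limit and reserve "size" bytes under the same lock. */
static int reserve_bytes(size_t size)
{
	int err = 0;

	pthread_mutex_lock(&account_lock);
	if (total_bytes + size > MAX_TOTAL_BYTES)
		err = -1;		/* would exceed the soft limit */
	else
		total_bytes += size;	/* reserve before allocating */
	pthread_mutex_unlock(&account_lock);
	return err;
}

/* Give a reservation back, e.g. when the real allocation fails. */
static void release_bytes(size_t size)
{
	pthread_mutex_lock(&account_lock);
	total_bytes -= size;
	pthread_mutex_unlock(&account_lock);
}

/* Allocate a buffer only if it fits into the accounted budget. */
static void *alloc_accounted(size_t size)
{
	void *buf;

	if (reserve_bytes(size))
		return NULL;		/* over budget, like -ENOMEM */
	buf = malloc(size);
	if (!buf)
		release_bytes(size);	/* take the reservation back */
	return buf;
}

int main(void)
{
	void *a = alloc_accounted(48 * 1024);
	void *b = alloc_accounted(48 * 1024);	/* pushed over the 64 KiB cap */

	printf("first: %s, second: %s, accounted: %zu bytes\n",
	       a ? "ok" : "rejected", b ? "ok" : "rejected", total_bytes);
	free(a);
	free(b);
	return 0;
}

Built with "cc -pthread", the second 48 KiB request is rejected while the first
succeeds: because the check and the counter update happen under one lock, two
racing callers can no longer both pass the check and together exceed the cap,
which mirrors the -ENOMEM path now taken in do_alloc_pages().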