From: Benjamin Bara - SKIDATA
To: Robin Gong
Subject: RE: pcm|dmaengine|imx-sdma race condition on i.MX6
Date: Mon, 17 Aug 2020 07:28:31 +0000
Message-ID: <61498763c60e488a825e8dd270732b62@skidata.com>
References: <20200813112258.GA327172@pcleri>
Cc: alsa-devel@alsa-project.org, timur@kernel.org, linux-kernel@vger.kernel.org, nicoleotsuka@gmail.com, vkoul@kernel.org, dl-linux-imx, kernel@pengutronix.de, dmaengine@vger.kernel.org, dan.j.williams@intel.com, shawnguo@kernel.org, linux-arm-kernel@lists.infradead.org, Richard Leitner - SKIDATA

We think this is not an i.MX6-specific problem, but a problem of how the PCM layer uses the dmaengine API. In case of an XRUN, the DMA channel is never closed; instead, a SNDRV_PCM_TRIGGER_STOP followed by a SNDRV_PCM_TRIGGER_START is triggered. SNDRV_PCM_TRIGGER_STOP simply executes dmaengine_terminate_async() [1] but does not await the termination by calling dmaengine_synchronize(), which is required as stated by the documentation [2]. However, we are not able to fix this in the pcm_dmaengine layer, either at the end of SNDRV_PCM_TRIGGER_STOP (called from the DMA completion interrupt handler) or at the beginning of SNDRV_PCM_TRIGGER_START (called from a PCM ioctl), since dmaengine_synchronize() requires a non-atomic context.
Based on my understanding, most DMA implementations don't even implement device_synchronize, and where they do, it might not really be necessary, since their terminate_all operation is synchronous. On the i.MX6 it looks different: since [4], the terminate_all operation schedules a worker which waits the required ~1 ms and then frees the context. Now the ioctl(SNDRV_PCM_IOCTL_PREPARE) and the following ioctl(SNDRV_PCM_IOCTL_READI_FRAMES), which are issued from user space to recover from an XRUN, race with the terminate_worker. If the terminate_worker finishes first, everything is fine. Otherwise, sdma_prep_dma_cyclic() is called and sets everything up, and as soon as the reader is scheduled out to wait for data, the terminate_worker runs and kills the new transfer. In that case we wait in [5] until the timeout is reached and return -EIO.

Based on our understanding, there are two possible fixing approaches: the pcm_dmaengine layer could handle this by either synchronizing with the DMA on a trigger or terminating it synchronously. However, as we are in an atomic context, we either have to give up the atomic context of the PCM to finish the termination, or we have to design a synchronous terminate variant that is callable from an atomic context.

For the first option, which is potentially more performant, we would have to leave the atomic PCM context, and we are not sure whether we are allowed to. For the second option, we would have to split the dma_device terminate_all into an atomic synchronous variant and an asynchronous one, which would align with the dmaengine API by giving it the option to guarantee termination in an atomic context. Based on my understanding, most implementations are synchronous anyway; for the currently asynchronous ones we would have to implement busy waits. However, with this approach we reach the WARN_ON [6] inside an atomic context, indicating that we might not be doing the right thing.
Regarding the failure log (at the bottom): I haven't added the ioctl syscalls, but this is basically the output, with additional prints to follow the code execution path. An XRUN (buffer size is 480 but 960 available) leads to a SNDRV_PCM_TRIGGER_STOP. This leads to terminate_async, which starts the terminate_worker. Next, the XRUN recovery triggers SNDRV_PCM_TRIGGER_START, which calls sdma_prep_dma_cyclic and then waits for the DMA in wait_for_avail(). Next we see the two freeings, first the old descriptor, then the newly added one; the terminate_worker is back at work. Now the DMA is terminated while we are still waiting on data from it.

What do you think about it? Is either of the proposed solutions practicable? If you need further information or additional logging, feel free to ask.

Best regards
Benjamin

[1] https://elixir.bootlin.com/linux/latest/source/sound/core/pcm_dmaengine.c#L209
[2] https://www.kernel.org/doc/html/latest/driver-api/dmaengine/client.html#further-apis
[3] https://elixir.bootlin.com/linux/latest/source/sound/core/pcm_dmaengine.c#L189
[4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b8603d2a5795c42f78998e70dc792336e0dc20c9
[5] https://elixir.bootlin.com/linux/v5.8/source/sound/core/pcm_lib.c#L1875
[6] https://elixir.bootlin.com/linux/latest/source/kernel/dma/mapping.c#L306

*Failure Log from latest 5.4 LTS kernel:*

[  535.201598] imx-sgtl5000 sound: snd_pcm_period_elapsed()
[  535.201610] imx-sgtl5000 sound: snd_pcm_period_elapsed: calls snd_pcm_update_hw_ptr0()
[  535.201626] imx-sdma 20ec000.sdma: sdma_tx_status channel: 2
[  535.201640] snd_pcm_capture_avail: hw_ptr: 960, appl_ptr: 0, avail: 960, boundary: 2013265920
[  535.201655] imx-sgtl5000 sound: snd_dmaengine_pcm_trigger command: 0
[  535.201664] imx-sdma 20ec000.sdma: sdma_disable_channel_async channel: 2
[  535.201672] imx-sdma 20ec000.sdma: sdma_disable_channel channel: 2
[  535.201752] imx-sgtl5000 sound: wait_for_avail: tout=999, state=4
[  535.201760] imx-sdma 20ec000.sdma: sdma_channel_terminate_work channel: 2
[  535.201877] imx-sgtl5000 sound: snd_pcm_do_reset: ioctl SNDRV_PCM_IOCTL1_RESET
[  535.201888] imx-sgtl5000 sound: snd_pcm_lib_ioctl_reset: calls snd_pcm_update_hw_ptr()
[  535.201912] imx-sgtl5000 sound: snd_dmaengine_pcm_trigger command: 1
[  535.201922] imx-sdma 20ec000.sdma: sdma_prep_dma_cyclic channel: 2
[  535.201931] imx-sdma 20ec000.sdma: sdma_config_write channel: 1
[  535.201939] imx-sdma 20ec000.sdma: sdma_config_channel channel: 2
[  535.201948] imx-sdma 20ec000.sdma: sdma_disable_channel channel: 2
[  535.201959] imx-sdma 20ec000.sdma: sdma_load_context channel: 2
[  535.201967] imx-sdma 20ec000.sdma: sdma_transfer_init channel: 2
[  535.201983] imx-sdma 20ec000.sdma: sdma_load_context channel: 2
[  535.201995] imx-sdma 20ec000.sdma: entry 0: count: 256 dma: 0x4a300000 intr
[  535.202005] imx-sdma 20ec000.sdma: entry 1: count: 256 dma: 0x4a300100 intr
[  535.202014] imx-sdma 20ec000.sdma: entry 2: count: 256 dma: 0x4a300200 intr
[  535.202023] imx-sdma 20ec000.sdma: entry 3: count: 256 dma: 0x4a300300 intr
[  535.202033] imx-sdma 20ec000.sdma: entry 4: count: 256 dma: 0x4a300400 intr
[  535.202042] imx-sdma 20ec000.sdma: entry 5: count: 256 dma: 0x4a300500 intr
[  535.202050] imx-sdma 20ec000.sdma: entry 6: count: 256 dma: 0x4a300600 intr
[  535.202059] imx-sdma 20ec000.sdma: entry 7: count: 256 dma: 0x4a300700 intr
[  535.202067] imx-sdma 20ec000.sdma: entry 8: count: 256 dma: 0x4a300800 intr
[  535.202077] imx-sdma 20ec000.sdma: entry 9: count: 256 dma: 0x4a300900 intr
[  535.202086] imx-sdma 20ec000.sdma: entry 10: count: 256 dma: 0x4a300a00 intr
[  535.202094] imx-sdma 20ec000.sdma: entry 11: count: 256 dma: 0x4a300b00 intr
[  535.202103] imx-sdma 20ec000.sdma: entry 12: count: 256 dma: 0x4a300c00 intr
[  535.202111] imx-sdma 20ec000.sdma: entry 13: count: 256 dma: 0x4a300d00 intr
[  535.202120] imx-sdma 20ec000.sdma: entry 14: count: 256 dma: 0x4a300e00 wrap intr
[  535.202135] imx-sdma 20ec000.sdma: vchan 8aa58994: txd 0a262722[8]: submitted
[  535.202145] imx-sdma 20ec000.sdma: sdma_issue_pending channel: 2
[  535.202181] snd_pcm_capture_avail: hw_ptr: 0, appl_ptr: 0, avail: 0, boundary: 2013265920
[  535.202192] snd_pcm_capture_avail: hw_ptr: 0, appl_ptr: 0, avail: 0, boundary: 2013265920
[  535.202202] imx-sgtl5000 sound: wait_for_avail: avail=0, state=3, twake=64
[  535.203182] imx-sdma 20ec000.sdma: txd 19499aa8: freeing
[  535.203207] imx-sdma 20ec000.sdma: txd 0a262722: freeing
[  545.766059] imx-sgtl5000 sound: wait_for_avail: tout=0, state=3
[  545.766075] imx-sgtl5000 sound: capture write error (DMA or IRQ trouble?)

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel