From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 25 Apr 2008 03:41:06 +0200
From: "andrzej zaborowski"
Subject: Re: [Qemu-devel] [4249] Improve audio api use in WM8750.
In-Reply-To: <48112D47.6050500@web.de>
References: <4810FBF9.1040308@web.de> <48111E61.9050002@web.de> <4811213C.7020507@web.de> <48112992.2000602@web.de> <48112D47.6050500@web.de>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

On 25/04/2008, Jan Kiszka wrote:
> OK, it's late... The real issue here is that the wm8750's internal cache
> correlates with the MusicPal guest's internal buffering threshold - it
> happens to be 4K as well. Thus, the line above just pushes the problem
> to guests that play with 2K buffers. And this demonstrates nicely that
> the current caching is fragile, only suitable for a subset of guests.
>
> Back to square #1, only cache what piles up during controllable periods:
> inside the callback.
> In my case, I _depend_ on flushing after the
> callback, because this is where data gets transferred to the Wolfson, and
> it gets transferred in larger chunks as well. Thus, flushing later
> easily causes buffer underruns.
>
> And for those scenarios where data arrives asynchronously in smaller
> chunks between the callbacks, we may also flush before entering the
> subordinated callback.
>
> But, frankly, how many cycles does all this caching actually save us? Did
> you measure it? I doubt it is relevant.

It's not really caching but rather an attempt to emulate the hardware's
FIFOs so that we get the same behavior. But I see the problem: the FIFO
on the wm8750 side was used to avoid buffering data more than once for
the Spitz and the Neo1973 machines. They use an I2S interface, which is
totally different from MusicPal's (they both have a hw register through
which all samples have to go, rather than a kind of DMA).

What we need is an api to let the cpu explicitly flush the data, because
in the model with DMA, only the CPU knows when it's a good moment to do
that (e.g. at the end of audio_callback in hw/musicpal.c). I'll try to
come up with something like that. An arbitrary threshold in wm8750 won't
work for all machines.

Regards