Linux Sound subsystem development
From: Liam Girdwood <lgirdwood@gmail.com>
To: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>,
	Mark Brown <broonie@kernel.org>
Cc: Linux-ALSA <linux-sound@vger.kernel.org>, Lars-Peter <lars@metafoo.de>
Subject: Re: ASoC: soc-pcm2.c ?
Date: Tue, 3 Feb 2026 18:07:00 +0000	[thread overview]
Message-ID: <b55cce82-1bdb-4e99-86ba-024f73a0aef6@gmail.com> (raw)
In-Reply-To: <87zf5q1gve.wl-kuninori.morimoto.gx@renesas.com>

On 2/3/26 06:27, Kuninori Morimoto wrote:
> 
> Hi Mark
> 
>>> (A)	  soc-pcm.c
>>> (B)	+ soc-pcm2.c
>>> 	  ...
>>> 	  audio-graph-card2.c
>>> (C)	+ audio-graph-card3.c
>>
>>> All existing drivers use the existing soc-pcm.c (A). Nothing changed.
>>> And create a new soc-pcm2.c (B) and a new card driver, for example
>>> audio-graph-card3.c (C).
>>> An audio-graph-card3 (C) user can only access soc-pcm2 (B).
>>
>>> In this case, existing users will get zero damage from the new soc-pcm2.c (B).
>>> We can just remove the (B) and (C) files in the worst case; it is easy to roll back.
>>> But what do you think ?
>>
>> So, as we discussed in person, on the one hand this is obviously not an
>> ideal way of doing things but I think in this case there's so much
>> fragility with the current code that it's difficult to do anything
>> safely which makes this much more viable.  It minimises disruption to
>> users of the existing code so we get much less stress all round,
>> hopefully in the long term people will convert to this code and we can
>> retire the old things.

Morimoto-san, first of all I think it's really good you are taking time 
to improve things. However, there are still some things we need to watch 
in order to make sure we maintain functionality and account for the 
differences in audio integration ownership (i.e. who does what) between 
a device shipped with Linux and one where Linux is installed after 
Windows.

Btw, I'm assuming pcm2 will reuse the existing DAI and codec drivers but 
without the DAPM/DPCM flows? i.e. the same codec/DAI binaries shipped by 
distros can be used by both pcm and pcm2?

> 
> Thanks.
> Yes, (B) should not have any effect on the existing (A) environment, and
> should be kept independent from it. If this premise is maintained, we can do
> anything on the new soc-pcm2 (B).
> 
>> Well, the big issue is that it's sort of partially using DAPM but DAPM
>> doesn't have full awareness of the system, with no knowledge of things
>> like SRC.
> 
> So we want to expand it or create a new one.
> DAPM uses a "string match" method to check paths/routing, but I want to
> use a pointer style for it.

It should hopefully be easy to convert to an integer/pointer format here 
with some search/replace; however, the mux enum controls reuse the 
strings in the kcontrol names IIRC, so you may need to keep those.

> 
>> I'm not sure I'd go that far.  The trouble with setting a default path
>> and volume is working out what a suitable setting is for a given piece
>> of hardware - the really common case where this is a problem is that
>> many headphone drivers can also be line outputs but the volume levels
>> needed for the two are vastly different.  That's why we currently just
>> go with hardware defaults for everything, it avoids trying to code in
>> settings for a specific system.  It's clearly not ideally helpful for
>> users but it's a lot easier for them to fix problems with UCM than it is
>> for them to fix problems using the kernel.
>>
>> But that said once the code has a better understanding of how things are
>> wired up perhaps we can have some algorithm that can try to guess a good
>> default, possibly something people enable optionally.
> 
> OK. I think default routing / volume is set by Card instead of CPU/Codec,
> because it depends on Board, not chip.
> 
> I have mentioned that I want to reduce amixer settings, but that doesn't
> mean soc-pcm2 doesn't allow using it.
> 
> Having a default volume or not should not be mandatory; it should just be an
> option. We want to have a flexible framework: some boards may want to use the
> same style as before (= set up full routing etc via amixer), and/or some
> boards may want to have default routing/volumes, etc.

I think hiding any kcontrols, or hard coding kcontrols that change audio 
processing or routing, will break all client use cases. Yes, this may be 
fine on IoT, where there is no sound server or userspace audio 
infrastructure and only very simple audio is needed, but on client 
devices we need the sound server, e.g. PipeWire, CRAS or an audio HAL, 
to configure and control the use case/policy.

It's also far simpler for regular users to make changes in userspace, 
whether that be PipeWire configuration, UCM or audio HAL config, than to 
make similar audio configuration changes hard coded in the kernel.

The other thing to consider is that the audio integration is fully done 
by the OEM when you buy a device with an OS pre-installed, i.e. if you 
buy a laptop with Linux or Windows then the OEM will fully integrate the 
audio driver, including DSPs, DAIs, DMAs, codecs, amps and jacks, 
alongside any userspace configuration like topology and UCM.

When a user buys a laptop with Windows and then installs Linux, there is 
no integration done by the OEM. The user has to rely on the existing 
kernel audio drivers for the DSP, codec, DAIs, DMA, amps, etc to probe() 
and take a reasonable guess at the integration configuration. Sadly, the 
hardware schematics required to perform the full audio integration are 
not public: yes, the codec/DSP/DMA/DAI parts are known, but it's not 
known how they are all connected to each other and to board-specific 
amps/jacks. Hence the alsamixer and kernel quirk methods are often the 
simplest way to enable such devices, rather than a new machine driver to 
manage the kcontrols and config (as I think you may have been 
proposing?). I appreciate this is not perfect, but without HW schematics 
we are in a difficult position to begin with.

> 
> 
>> The graph is generally fixed, where there's anything changing it's
>> usually just something being plugged into a port, but there's systems
>> where the routing can be more dynamic.  For a really simple example a
>> tablet might flip left and right speakers depending on which way up it's
>> being held.  You could do that in software with a DSP but you could also
>> do it through routing in the hardware (and depending on how the DSP
>> looks it might look more like hardware to the host computer...).  You
>> also get things like routing around some effects blocks depending on use
>> case, just turning the effect on or off might mean the effect is still
>> causing latency so instead it's disabled by not routing the audio to the
>> IP at all.
>>
>> My impression is that the systems that need to do dynamic routing beyond
>> just turning on and off some paths are getting less common but they do
>> still exist, and older hardware tends to stick around for a long time at
>> lower price points (eg, you'll see what was once a cutting edge CODEC on
>> a board with a low end SoC because these days the packaging it uses is
>> much more easy to assemble).
> 
> Ah, OK.
> It should allow the graph itself to be updated at runtime.
> And, I think amixer can still work for dynamic routing ?
> These can be good examples for test cases, I think.

Fwiw, we have a dynamic graph (at probe time) today, and Peter is 
looking into support for a dynamic graph at runtime, e.g. where PipeWire 
could build a graph like PCM -> Volume -> DRC -> Speaker.

> 
>> I think that's a good place to start TBH - it's pretty much what we have
>> for DAPM with the DAPM lock.  We can always go back and rework things to
>> make the locking finer grained but hopefully that's not needed, we just
>> don't need to do things that take the lock for long enough or often
>> enough that people get worried about contention.
> 
> Thanks.
> Let's start with big lock, first.
> 
>> Yes, being able to work on this without needing some specific hardware
>> would be great - it's one of the big drawbacks with DPCM and one of the
>> things that make it fragile.  If we can get a good development and
>> testing setup that allows people to do work without needing to worry so
>> much about what other hardware will be impacted that would be a massive
>> win.
> 
> Nice to know
> 
>>> I have raised many random topics, but these are just my ideas.
>>> My motivation is that I want to clean up the ASoC framework, but I can't
>>> on soc-pcm, especially around DPCM. There is no guarantee I can create
>>> soc-pcm2, but I am interested. But what do you think ?
>>
>> I'm super grateful that you want to spend time tackling these things,
>> I think this is heading in roughly the right direction.
> 
> I'm busy for a while, but want to do this this year.
> I can enjoy this :)
> 
> I think the initial design is very important for soc-pcm2.
> So I will not care about detailed things first (= sampling rate, channels,
> etc.), but will focus on the graph / routing things. I think this is the
> main part of it.
> 
> Thank you for your help !!

One other thing I would say is that the audio hardware landscape is 
changing for client devices, in that they are transitioning to SoundWire 
SDCA compliance over the next few years. This will probably impact IoT 
too as codec prices come down over time, so I would recommend focusing 
more on SoundWire integration than on, say, legacy HDA in pcm2, as this 
would probably help more in the long term.

Thanks

Liam


Thread overview: 22+ messages
2026-01-26  6:41 ASoC: soc-pcm2.c ? Kuninori Morimoto
2026-01-31 13:29 ` Mark Brown
2026-02-03  6:27   ` Kuninori Morimoto
2026-02-03 13:34     ` Pierre-Louis Bossart
2026-02-03 15:16       ` Mark Brown
2026-02-04  6:19         ` Kuninori Morimoto
2026-02-04 12:55           ` Mark Brown
2026-02-03 18:07     ` Liam Girdwood [this message]
2026-02-03 18:17       ` Mark Brown
2026-02-04  5:59         ` Kuninori Morimoto
2026-02-04  8:50         ` Takashi Iwai
2026-02-04 12:10           ` Mark Brown
2026-02-04 12:21             ` Takashi Iwai
2026-02-04 12:33               ` Jaroslav Kysela
2026-02-04 12:45                 ` Mark Brown
2026-02-03  7:45 ` Péter Ujfalusi
2026-02-03 10:11   ` Péter Ujfalusi
2026-02-04  7:28     ` Kuninori Morimoto
2026-02-04 13:07       ` Péter Ujfalusi
2026-02-03 16:18   ` Mark Brown
2026-02-04 10:18     ` Péter Ujfalusi
2026-02-04 12:39       ` Mark Brown
