From: Hans de Goede <hdegoede@redhat.com>
To: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>, alsa-devel@alsa-project.org
Subject: [RFC SOF 1/2] topology: Add sof-byt-codec-tdm4.m4 file
Date: Sun, 6 Dec 2020 13:46:25 +0100
Message-ID: <20201206124626.13932-2-hdegoede@redhat.com>
In-Reply-To: <20201206124626.13932-1-hdegoede@redhat.com>

Some BYT/CHT boards (mostly Cherry Trail) use a TDM format with 4 slots
of 24 bits as the wire format to the codec, rather than the standard
2-channel 24-bit I2S format. Add a new m4 file for this. It is a copy
of sof-byt-codec.m4 with the following changes:
@@ -1,4 +1,4 @@
-`# Topology for generic' PLATFORM `board with' CODEC `on SSP' SSP_NUM
+`# Topology for generic' PLATFORM `board with' CODEC `on SSP' SSP_NUM `using TDM 4 slots 24 bit'
# Include topology builder
include(`utils.m4')
@@ -97,8 +97,8 @@
# BE configurations - overrides config in ACPI if present
#
DAI_CONFIG(SSP, SSP_NUM, 0, SSP2-Codec,
- SSP_CONFIG(I2S, SSP_CLOCK(mclk, 19200000, codec_mclk_in),
- SSP_CLOCK(bclk, 2400000, codec_slave),
+ SSP_CONFIG(DSP_B, SSP_CLOCK(mclk, 19200000, codec_mclk_in),
+ SSP_CLOCK(bclk, 4800000, codec_slave),
SSP_CLOCK(fsync, 48000, codec_slave),
- SSP_TDM(2, 25, 3, 3),
+ SSP_TDM(4, 25, 3, 3),
SSP_CONFIG_DATA(SSP, SSP_NUM, 24)))
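
To summarize the new values: DSP_B selects a TDM frame format where the
first data bit is aligned with the frame-sync edge, and SSP_TDM's
arguments are slot count, slot width in bits and the tx/rx slot masks
(assuming the usual ssp.m4 signature). With 4 slots of 25 bits at a
48 kHz fsync the bit clock becomes 4 * 25 * 48000 = 4800000 Hz, exactly
double the 2 * 25 * 48000 = 2400000 Hz of the 2-slot I2S configuration,
hence the bclk change from 2400000 to 4800000.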
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
---
tools/topology/sof-byt-codec-tdm4.m4 | 104 +++++++++++++++++++++++++++
1 file changed, 104 insertions(+)
create mode 100644 tools/topology/sof-byt-codec-tdm4.m4
diff --git a/tools/topology/sof-byt-codec-tdm4.m4 b/tools/topology/sof-byt-codec-tdm4.m4
new file mode 100644
index 000000000..dabf8342b
--- /dev/null
+++ b/tools/topology/sof-byt-codec-tdm4.m4
@@ -0,0 +1,104 @@
+`# Topology for generic' PLATFORM `board with' CODEC `on SSP' SSP_NUM `using TDM 4 slots 24 bit'
+
+# Include topology builder
+include(`utils.m4')
+include(`dai.m4')
+include(`pipeline.m4')
+include(`ssp.m4')
+
+# Include TLV library
+include(`common/tlv.m4')
+
+# Include Token library
+include(`sof/tokens.m4')
+
+# Include DSP configuration
+include(`platform/intel/'PLATFORM`.m4')
+
+#
+# Define the pipelines
+#
+# PCM0 -----> volume -------v
+# low latency mixer ----> volume ----> SSP2
+# PCM1 -----> volume -------^
+# PCM0 <---- Volume <---- SSP2
+#
+
+# Low Latency capture pipeline 2 on PCM 0 using max 2 channels of s32le.
+# 1000us deadline on core 0 with priority 0
+PIPELINE_PCM_ADD(sof/pipe-low-latency-capture.m4,
+ 2, 0, 2, s32le,
+ 1000, 0, 0,
+ 48000, 48000, 48000)
+
+#
+# DAI configuration
+#
+# SSP port 2 is our only pipeline DAI
+#
+
+# playback DAI is SSP2 using 2 periods
+# Buffers use s24le format, 1000us deadline on core 0 with priority 1
+# this defines pipeline 1. The 'NOT_USED_IGNORED' is due to ordering
+# dependencies; the connection is made later by an explicit dapm line.
+DAI_ADD(sof/pipe-mixer-dai-playback.m4,
+ 1, SSP, SSP_NUM, SSP2-Codec,
+ NOT_USED_IGNORED, 2, s24le,
+ 1000, 1, 0, SCHEDULE_TIME_DOMAIN_DMA,
+ 2, 48000)
+
+# PCM Playback pipeline 3 on PCM 0 using max 2 channels of s32le.
+# 1000us deadline on core 0 with priority 0
+# this is connected to pipeline DAI 1
+PIPELINE_PCM_ADD(sof/pipe-host-volume-playback.m4,
+ 3, 0, 2, s32le,
+ 1000, 0, 0,
+ 48000, 48000, 48000,
+ SCHEDULE_TIME_DOMAIN_DMA,
+ PIPELINE_PLAYBACK_SCHED_COMP_1)
+
+# PCM Playback pipeline 4 on PCM 1 using max 2 channels of s32le.
+# 5ms deadline on core 0 with priority 0
+# this is connected to pipeline DAI 1
+PIPELINE_PCM_ADD(sof/pipe-host-volume-playback.m4,
+ 4, 1, 2, s32le,
+ 5000, 0, 0,
+ 48000, 48000, 48000,
+ SCHEDULE_TIME_DOMAIN_DMA,
+ PIPELINE_PLAYBACK_SCHED_COMP_1)
+
+# Connect pipelines together
+SectionGraph."PIPE_NAME" {
+ index "0"
+
+ lines [
+ # PCM pipeline 3 to DAI pipeline 1
+ dapm(PIPELINE_MIXER_1, PIPELINE_SOURCE_3)
+ # PCM pipeline 4 to DAI pipeline 1
+ dapm(PIPELINE_MIXER_1, PIPELINE_SOURCE_4)
+
+ ]
+}
+
+# capture DAI is SSP2 using 2 periods
+# Buffers use s24le format, 1000us deadline on core 0 with priority 0
+# this is part of pipeline 2
+DAI_ADD(sof/pipe-dai-capture.m4,
+ 2, SSP, SSP_NUM, SSP2-Codec,
+ PIPELINE_SINK_2, 2, s24le,
+ 1000, 0, 0, SCHEDULE_TIME_DOMAIN_DMA)
+
+
+# PCM definitions
+PCM_DUPLEX_ADD(PCM, 0, PIPELINE_PCM_3, PIPELINE_PCM_2)
+PCM_PLAYBACK_ADD(PCM Deep Buffer, 1, PIPELINE_PCM_4)
+
+#
+# BE configurations - overrides config in ACPI if present
+#
+DAI_CONFIG(SSP, SSP_NUM, 0, SSP2-Codec,
+ SSP_CONFIG(DSP_B, SSP_CLOCK(mclk, 19200000, codec_mclk_in),
+ SSP_CLOCK(bclk, 4800000, codec_slave),
+ SSP_CLOCK(fsync, 48000, codec_slave),
+ SSP_TDM(4, 25, 3, 3),
+ SSP_CONFIG_DATA(SSP, SSP_NUM, 24)))
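
Board topologies are expected to consume this file the same way they
use sof-byt-codec.m4: define the PLATFORM, CODEC and SSP_NUM macros
(in a wrapper file or via m4 -D flags from the build system) and then
include it. A minimal illustrative sketch with placeholder values; the
actual nau8824 wrapper is added in patch 2/2:

    define(`PLATFORM', `cht')
    define(`CODEC', `nau8824')
    define(`SSP_NUM', `2')
    include(`sof-byt-codec-tdm4.m4')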
--
2.28.0