From: Mauro Carvalho Chehab <mchehab@kernel.org>
To: Takashi Iwai <tiwai@suse.de>
Cc: nicoleotsuka@gmail.com, alsa-devel@alsa-project.org,
lgirdwood@gmail.com,
Sebastian Fricke <sebastian.fricke@collabora.com>,
Xiubo.Lee@gmail.com, festevam@gmail.com,
Shengjiu Wang <shengjiu.wang@nxp.com>,
tiwai@suse.com, linux-kernel@vger.kernel.org, tfiga@chromium.org,
hverkuil@xs4all.nl, linuxppc-dev@lists.ozlabs.org,
Mark Brown <broonie@kernel.org>,
sakari.ailus@iki.fi, perex@perex.cz, linux-media@vger.kernel.org,
shengjiu.wang@gmail.com, m.szyprowski@samsung.com
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework
Date: Thu, 2 May 2024 10:26:43 +0100 [thread overview]
Message-ID: <20240502102643.4ee7f6c2@sal.lan> (raw)
In-Reply-To: <20240502095956.0a8c5b26@sal.lan>
On Thu, 2 May 2024 09:59:56 +0100
Mauro Carvalho Chehab <mchehab@kernel.org> wrote:
> On Thu, 02 May 2024 09:46:14 +0200
> Takashi Iwai <tiwai@suse.de> wrote:
>
> > On Wed, 01 May 2024 03:56:15 +0200,
> > Mark Brown wrote:
> > >
> > > On Tue, Apr 30, 2024 at 05:27:52PM +0100, Mauro Carvalho Chehab wrote:
> > > > Mark Brown <broonie@kernel.org> wrote:
> > > > > On Tue, Apr 30, 2024 at 10:21:12AM +0200, Sebastian Fricke wrote:
> > >
> > > > > The discussion around this originally was that all the audio APIs are
> > > > > very much centered around real time operations rather than completely
> > >
> > > > The media subsystem is also centered around real time. Without real
> > > > time, you can't have a decent video conference system. Having
> > > > mem2mem transfers actually helps reduce real-time delays, as it
> > > > avoids extra latency due to CPU congestion and/or data transfers
> > > > from/to userspace.
> > >
> > > Real time means strongly tied to wall clock times rather than fast - the
> > > issue was that all the ALSA APIs are based around pushing data through
> > > the system based on a clock.
> > >
> > > > > That doesn't sound like an immediate solution to maintainer overload
> > > > > issues... if something like this is going to happen the DRM solution
> > > > > does seem more general but I'm not sure the amount of stop energy is
> > > > > proportionate.
> > >
> > > > I don't think maintainer overload is the issue here. The main
> > > > point is to avoid a fork at the audio uAPI, plus the burden
> > > > of re-inventing the wheel with new codes for audio formats,
> > > > new documentation for them, etc.
> > >
> > > I thought that discussion had been had already at one of the earlier
> > > versions? TBH I've not really been paying attention to this since the
> > > very early versions where I raised some similar "why is this in media"
> > > points and I thought everyone had decided that this did actually make
> > > sense.
> >
> > Yeah, it was discussed in v1 and v2 threads, e.g.
> > https://patchwork.kernel.org/project/linux-media/cover/1690265540-25999-1-git-send-email-shengjiu.wang@nxp.com/#25485573
> >
> > My argument at that time was about how the operation would work, and
> > the point was that it'd be a "batch-like" operation via M2M without any
> > timing control. It'd be a very special usage for ALSA, and if anything,
> > it'd be hwdep -- that is, a very hardware-specific API implementation --
> > or trying the compress-offload API, which looks dubious.
> >
> > OTOH, the argument was that there is already a framework for M2M in
> > the media API and that it also fits the batch-like operation. That is
> > how the thread evolved until now.
>
> M2M transfers are not a hardware-specific API, and such transfers are
> not new either. Old media devices like bttv internally have a way to
> do PCI2PCI transfers, allowing media streams to be transferred
> directly without involving the CPU. The media driver supports it for
> video, as this made a huge difference in performance back then.
>
> In the embedded world, this is a pretty common scenario: different
> media IP blocks can communicate with each other directly via memory.
> This can happen for video capture, video display and audio.
>
> With M2M, most of the control is offloaded to the hardware.
>
> There is still time control associated with it, as audio and video
> need to be in sync. This is done by controlling the buffer size and
> can be fine-tuned by checking when the buffer transfer is done.
>
> On the media side, M2M buffer transfers are started via VIDIOC_QBUF,
> which is a request to do a frame transfer. A similar ioctl
> (VIDIOC_DQBUF) is used to monitor when the hardware finishes
> transferring the buffer. In other words, the CPU is responsible
> for time control.
Just complementing: in media, we do this per video buffer (or
per half video buffer). A typical use case on cameras is to have
buffers transferred 30 times per second, if the video is streamed
at 30 frames per second.
I would assume that, in an audio/video stream, the audio data
transfer will be programmed to also happen at a regular interval.
So, if the video stream is programmed at a 30 frames per second
rate, I would assume that the associated audio stream will also be
programmed to be grouped into 30 data transfers per second. In such
a scenario, if the audio is sampled at 48 kHz, it means that:
1) each M2M transfer commanded by the CPU will copy 1600 samples;
2) the time between each sample will remain 1/48000 s;
3) a notification event telling that 1600 samples were transferred
will be generated when the last sample happens;
4) the CPU will do time control by looking at the notification events.
> In other words, this is still real time. The main difference
> from a "sync" transfer is that the CPU doesn't need to copy data
> from/to different devices, as such operations are offloaded to the
> hardware.
>
> Regards,
> Mauro
Thread overview: 49+ messages
2024-03-19 7:50 [PATCH v15 00/16] Add audio support in v4l2 framework Shengjiu Wang
2024-03-19 7:50 ` [PATCH v15 01/16] media: v4l2-ctrls: add support for fraction_bits Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 02/16] ASoC: fsl_asrc: define functions for memory to memory usage Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 03/16] ASoC: fsl_easrc: " Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 04/16] ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 05/16] ASoC: fsl_asrc: register m2m platform device Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 06/16] ASoC: fsl_easrc: " Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 07/16] media: uapi: Add V4L2_CAP_AUDIO_M2M capability flag Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 08/16] media: v4l2: Add audio capture and output support Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 09/16] media: uapi: Define audio sample format fourcc type Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 10/16] media: uapi: Add V4L2_CTRL_CLASS_M2M_AUDIO Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 11/16] media: uapi: Add audio rate controls support Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 12/16] media: uapi: Declare interface types for Audio Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 13/16] media: uapi: Add an entity type for audio resampler Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 14/16] media: vivid: add fixed point test controls Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 15/16] media: imx-asrc: Add memory to memory driver Shengjiu Wang
2024-03-19 7:51 ` [PATCH v15 16/16] media: vim2m-audio: add virtual driver for audio memory to memory Shengjiu Wang
[not found] ` <20240430082112.jrovosb6lgblgpfg@basti-XPS-13-9310>
2024-04-30 8:47 ` [PATCH v15 00/16] Add audio support in v4l2 framework Hans Verkuil
2024-04-30 13:52 ` Mauro Carvalho Chehab
2024-04-30 14:46 ` Mark Brown
2024-04-30 15:03 ` Jaroslav Kysela
2024-04-30 16:27 ` Mauro Carvalho Chehab
2024-05-01 1:56 ` Mark Brown
2024-05-02 7:46 ` Takashi Iwai
2024-05-02 8:59 ` Mauro Carvalho Chehab
2024-05-02 9:26 ` Mauro Carvalho Chehab [this message]
2024-05-03 1:47 ` Mark Brown
2024-05-03 8:42 ` Mauro Carvalho Chehab
2024-05-06 8:49 ` Shengjiu Wang
2024-05-06 9:42 ` Jaroslav Kysela
2024-05-08 8:00 ` Hans Verkuil
2024-05-08 8:13 ` Amadeusz Sławiński
2024-05-09 9:36 ` Shengjiu Wang
2024-05-09 9:50 ` Amadeusz Sławiński
2024-05-09 10:12 ` Shengjiu Wang
2024-05-09 10:28 ` Amadeusz Sławiński
2024-05-09 10:44 ` Shengjiu Wang
2024-05-09 11:13 ` Jaroslav Kysela
2024-05-13 11:56 ` Jaroslav Kysela
2024-05-15 9:17 ` Hans Verkuil
2024-05-15 9:50 ` Jaroslav Kysela
2024-05-15 10:19 ` Takashi Iwai
2024-05-15 10:46 ` Jaroslav Kysela
2024-05-15 13:34 ` Shengjiu Wang
2024-05-16 14:58 ` Jaroslav Kysela
2024-05-15 20:33 ` Nicolas Dufresne
2024-05-16 14:50 ` Jaroslav Kysela
2024-05-27 7:24 ` Jaroslav Kysela
2024-05-15 14:04 ` Pierre-Louis Bossart