From: Shengjiu Wang <shengjiu.wang@gmail.com>
To: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: nicoleotsuka@gmail.com, alsa-devel@alsa-project.org,
	linuxppc-dev@lists.ozlabs.org,
	Sebastian Fricke <sebastian.fricke@collabora.com>,
	Xiubo.Lee@gmail.com, Takashi Iwai <tiwai@suse.de>,
	lgirdwood@gmail.com, Shengjiu Wang <shengjiu.wang@nxp.com>,
	tiwai@suse.com, linux-kernel@vger.kernel.org, tfiga@chromium.org,
	hverkuil@xs4all.nl, Mark Brown <broonie@kernel.org>,
	sakari.ailus@iki.fi, perex@perex.cz, linux-media@vger.kernel.org,
	festevam@gmail.com, m.szyprowski@samsung.com
Subject: Re: [PATCH v15 00/16] Add audio support in v4l2 framework
Date: Mon, 6 May 2024 16:49:31 +0800	[thread overview]
Message-ID: <CAA+D8APfM3ayXHAPadHLty52PYE9soQM6o780=mZs+R4px-AOQ@mail.gmail.com> (raw)
In-Reply-To: <20240503094225.47fe4836@sal.lan>

On Fri, May 3, 2024 at 4:42 PM Mauro Carvalho Chehab <mchehab@kernel.org> wrote:
>
> Em Fri, 3 May 2024 10:47:19 +0900
> Mark Brown <broonie@kernel.org> escreveu:
>
> > On Thu, May 02, 2024 at 10:26:43AM +0100, Mauro Carvalho Chehab wrote:
> > > Mauro Carvalho Chehab <mchehab@kernel.org> escreveu:
> >
> > > > There is still timing control associated with it, as audio and video
> > > > need to be in sync. This is done by controlling the buffer size
> > > > and can be fine-tuned by checking when the buffer transfer is done.
> >
> > ...
> >
> > > Just complementing: on media, we do this per video buffer (or
> > > per half video buffer). A typical use case on cameras is to have
> > > buffers transferred 30 times per second, if the video was streamed
> > > at 30 frames per second.
> >
> > IIRC some big use case for this hardware was transcoding so there was a
> > desire to just go at whatever rate the hardware could support as there
> > is no interactive user consuming the output as it is generated.
>
> Indeed, codecs could be used just for transcoding, but I would
> expect that to be a corner use case. See, as the chipsets implementing
> codecs are typically the ones used on mobiles, I would expect the
> major use cases to be watching audio and video and participating
> in audio/video conferences.
>
> Going further, the codec API may end up supporting not only transcoding
> (which is something the CPU can usually handle without too much
> processing) but also audio processing that may require more
> complex algorithms - even deep learning ones - such as background noise
> removal, echo detection/removal, automatic gain control, audio
> enhancement and so on.
>
> In other words, the typical use cases will have either the input
> or the output be a physical device (a microphone or a speaker).
>

Everyone, thanks for taking the time to discuss this; it seems we are
back at the starting point of this topic again.

Our main requirement is that there is a hardware sample rate converter
on the chip, and we want users to be able to use it from user space as
a component, just like a software sample rate converter. It would most
likely run as a GStreamer plugin, so it is a memory-to-memory component.
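
To make the user-space model concrete, here is a minimal sketch of how
a software sample rate converter is used in a GStreamer pipeline today.
This is only an illustration with the existing audioresample element;
the rates and element choice are examples, and a hardware mem2mem
converter would slot into the same position as an equivalent element:

/*
 * Minimal sketch: the existing software resampler (audioresample)
 * converting a 48 kHz test tone to 44.1 kHz.  A hardware memory-to-
 * memory rate converter would be exposed as an equivalent element
 * in the same position of the pipeline.
 *
 * Build: gcc src-demo.c $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
	GstElement *pipeline;
	GstBus *bus;
	GstMessage *msg;
	GError *err = NULL;

	gst_init(&argc, &argv);

	pipeline = gst_parse_launch(
		"audiotestsrc num-buffers=300 ! audio/x-raw,rate=48000 ! "
		"audioresample ! audio/x-raw,rate=44100 ! autoaudiosink",
		&err);
	if (!pipeline) {
		g_printerr("failed to build pipeline: %s\n", err->message);
		return 1;
	}

	gst_element_set_state(pipeline, GST_STATE_PLAYING);

	/* wait until the stream finishes or an error is reported */
	bus = gst_element_get_bus(pipeline);
	msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
					 GST_MESSAGE_EOS | GST_MESSAGE_ERROR);

	gst_message_unref(msg);
	gst_object_unref(bus);
	gst_element_set_state(pipeline, GST_STATE_NULL);
	gst_object_unref(pipeline);
	return 0;
}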

I did not find an API in ALSA for this purpose; the best option I
found in the kernel is the V4L2 memory-to-memory framework.
As Hans said, it is well designed for memory-to-memory operation.
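
For reference, the mem2mem flow is the same one video codec and scaler
drivers already use: an OUTPUT queue where userspace feeds source
buffers in and a CAPTURE queue where the converted buffers come back.
A rough sketch of that sequence follows; it uses the existing video
buffer types and ioctls, which only stand in for the audio buffer types
and sample-format fourccs this series adds, and error handling is
omitted:

/*
 * Rough sketch of the generic V4L2 memory-to-memory sequence, as video
 * codecs and scalers use it today.  The audio M2M support follows the
 * same pattern; the VIDEO_* buffer types here only stand in for the
 * audio equivalents, and error checking is omitted for brevity.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int m2m_setup(const char *devnode)
{
	struct v4l2_capability cap;
	struct v4l2_format fmt;
	struct v4l2_requestbuffers req;
	enum v4l2_buf_type type;
	int fd = open(devnode, O_RDWR);

	if (fd < 0)
		return -1;

	/* the device advertises an M2M capability flag */
	ioctl(fd, VIDIOC_QUERYCAP, &cap);

	/* source side: what userspace feeds in (OUTPUT queue);
	 * a real application fills in the format fields here */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* destination side: what the hardware hands back (CAPTURE queue) */
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* allocate a few buffers on each queue */
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.memory = V4L2_MEMORY_MMAP;
	req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	/* start both queues; the steady state is then:
	 * QBUF source data on OUTPUT, QBUF empty buffers on CAPTURE,
	 * DQBUF the converted data from CAPTURE.
	 */
	type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	ioctl(fd, VIDIOC_STREAMON, &type);
	type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	return fd;
}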

And I think audio is part of 'media'. As far as I can see, part of the
radio functionality is in ALSA and part of it is in V4L2; part of the
HDMI functionality is in DRM and part of it is in ALSA...
So handling audio in V4L2 is not new from this point of view.

Even now I still think V4L2 is the best option, but it looks like
there are a lot of objections. If we develop a new ALSA mem2mem API
instead, that is also a duplication of code (a bigger duplication, I
think, than just adding audio support to V4L2).

Best regards
Shengjiu Wang.

> > > I would assume that, on an audio/video stream, the audio data
> > > transfer will be programmed to also happen at a regular interval.
> >
> > With audio the API is very much "wake userspace every Xms".
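
For comparison, the period-driven ALSA model described above looks
roughly like this from user space (a minimal capture sketch with
libasound; the device name, rate and sizes are just examples):

/*
 * Minimal ALSA capture loop: the kernel wakes userspace roughly once
 * per period.  With the example values below that is every 10 ms
 * (480 frames at 48 kHz).  Device name and parameters are examples.
 *
 * Build: gcc alsa-demo.c -lasound
 */
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	short buf[480 * 2];	/* 480 frames, 2 channels, S16_LE */
	int i;

	if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
		return 1;

	/* 48 kHz, stereo, S16_LE, about 50 ms of total buffering */
	if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
			       SND_PCM_ACCESS_RW_INTERLEAVED,
			       2, 48000, 1, 50000) < 0)
		return 1;

	for (i = 0; i < 100; i++) {
		/* blocks until 480 frames (10 ms) have been captured */
		snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 480);

		if (n < 0)
			snd_pcm_recover(pcm, n, 0);
		/* ...hand the data to the consumer... */
	}

	snd_pcm_close(pcm);
	return 0;
}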


Thread overview: 49+ messages
2024-03-19  7:50 [PATCH v15 00/16] Add audio support in v4l2 framework Shengjiu Wang
2024-03-19  7:50 ` [PATCH v15 01/16] media: v4l2-ctrls: add support for fraction_bits Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 02/16] ASoC: fsl_asrc: define functions for memory to memory usage Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 03/16] ASoC: fsl_easrc: " Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 04/16] ASoC: fsl_asrc: move fsl_asrc_common.h to include/sound Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 05/16] ASoC: fsl_asrc: register m2m platform device Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 06/16] ASoC: fsl_easrc: " Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 07/16] media: uapi: Add V4L2_CAP_AUDIO_M2M capability flag Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 08/16] media: v4l2: Add audio capture and output support Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 09/16] media: uapi: Define audio sample format fourcc type Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 10/16] media: uapi: Add V4L2_CTRL_CLASS_M2M_AUDIO Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 11/16] media: uapi: Add audio rate controls support Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 12/16] media: uapi: Declare interface types for Audio Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 13/16] media: uapi: Add an entity type for audio resampler Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 14/16] media: vivid: add fixed point test controls Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 15/16] media: imx-asrc: Add memory to memory driver Shengjiu Wang
2024-03-19  7:51 ` [PATCH v15 16/16] media: vim2m-audio: add virtual driver for audio memory to memory Shengjiu Wang
     [not found] ` <20240430082112.jrovosb6lgblgpfg@basti-XPS-13-9310>
2024-04-30  8:47   ` [PATCH v15 00/16] Add audio support in v4l2 framework Hans Verkuil
2024-04-30 13:52     ` Mauro Carvalho Chehab
2024-04-30 14:46   ` Mark Brown
2024-04-30 15:03     ` Jaroslav Kysela
2024-04-30 16:27     ` Mauro Carvalho Chehab
2024-05-01  1:56       ` Mark Brown
2024-05-02  7:46         ` Takashi Iwai
2024-05-02  8:59           ` Mauro Carvalho Chehab
2024-05-02  9:26             ` Mauro Carvalho Chehab
2024-05-03  1:47               ` Mark Brown
2024-05-03  8:42                 ` Mauro Carvalho Chehab
2024-05-06  8:49                   ` Shengjiu Wang [this message]
2024-05-06  9:42                     ` Jaroslav Kysela
2024-05-08  8:00                     ` Hans Verkuil
2024-05-08  8:13                       ` Amadeusz Sławiński
2024-05-09  9:36                         ` Shengjiu Wang
2024-05-09  9:50                           ` Amadeusz Sławiński
2024-05-09 10:12                             ` Shengjiu Wang
2024-05-09 10:28                               ` Amadeusz Sławiński
2024-05-09 10:44                                 ` Shengjiu Wang
2024-05-09 11:13                                   ` Jaroslav Kysela
2024-05-13 11:56                                     ` Jaroslav Kysela
2024-05-15  9:17                                       ` Hans Verkuil
2024-05-15  9:50                                         ` Jaroslav Kysela
2024-05-15 10:19                                           ` Takashi Iwai
2024-05-15 10:46                                             ` Jaroslav Kysela
2024-05-15 13:34                                               ` Shengjiu Wang
2024-05-16 14:58                                                 ` Jaroslav Kysela
2024-05-15 20:33                                               ` Nicolas Dufresne
2024-05-16 14:50                                                 ` Jaroslav Kysela
2024-05-27  7:24                                                   ` Jaroslav Kysela
2024-05-15 14:04                                     ` Pierre-Louis Bossart
