From: Vinod Koul <vkoul@kernel.org>
To: Sameer Pujar <spujar@nvidia.com>
Cc: dan.j.williams@intel.com, tiwai@suse.com, jonathanh@nvidia.com,
dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
sharadg@nvidia.com, rlokhande@nvidia.com, dramesh@nvidia.com,
mkumard@nvidia.com
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Mon, 24 Jun 2019 11:56:09 +0530 [thread overview]
Message-ID: <20190624062609.GV2962@vkoul-mobl> (raw)
In-Reply-To: <23474b74-3c26-3083-be21-4de7731a0e95@nvidia.com>
On 20-06-19, 15:59, Sameer Pujar wrote:
> > > > So can you explain to me what the difference is here, such that the
> > > > peripheral cannot configure and use the burst size rather than passing
> > > > the FIFO depth?
> > > Say, for example, FIFO_THRESHOLD is programmed as 16 WORDS and BURST_SIZE
> > > as 8 WORDS. ADMAIF does not push data to AHUB (operation [2]) until the
> > > threshold of 16 WORDS is reached in the ADMAIF FIFO; hence two burst
> > > transfers are needed to reach the threshold. As mentioned earlier, the
> > > threshold here just indicates when data transfer to the AHUB modules can
> > > happen.
> > So we have ADMA, AHUB, and the peripheral. You are talking to AHUB, which
> > is _not_ the peripheral, and if I have guessed right the FIFO depth is for
> > AHUB, right?
> Yes, the communication is between ADMA and AHUB. ADMAIF is the interface
> between ADMA and AHUB. An ADMA channel sending data to AHUB pairs with an
> ADMAIF TX channel; similarly, an ADMA channel receiving data from AHUB
> pairs with an ADMAIF RX channel. The FIFO DEPTH we are talking about is
> per ADMAIF TX/RX channel, and it is configurable.
> DMA transfers happen to/from the ADMAIF FIFOs, and whenever data (per
> WORD) is popped/pushed out of ADMAIF to/from AHUB, an assertion is made
> to ADMA. ADMA keeps counters based on these assertions. By knowing the
> FIFO DEPTH and these counters, ADMA knows when to wait and when to
> transfer data.
Where does the ADMAIF driver reside in the kernel, and who configures it
for normal DMA transactions?
Also, it would have helped this long discussion if that part had been
made clear earlier, rather than talking about the peripheral all this
time :(
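For context, the RFC in the Subject line proposes carrying this per-channel FIFO depth through the dmaengine slave configuration. The fragment below is a paraphrased sketch of that idea, not the literal patch; only `src_maxburst`/`dst_maxburst` are real existing members of `struct dma_slave_config`, and the exact name and placement of the new member are as proposed in the RFC under discussion:

```c
/* Illustrative fragment, not the literal diff from the RFC. */
struct dma_slave_config {
	/* ... existing members: direction, src/dst addresses,
	 * src_maxburst, dst_maxburst, etc. ... */
	u32 fifo_size;	/* proposed: client (e.g. ADMAIF) FIFO depth */
};
```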
--
~Vinod
Thread overview: 60+ messages
2019-04-30 11:30 [RFC] dmaengine: add fifo_size member Sameer Pujar
2019-04-30 11:30 ` [PATCH] " Sameer Pujar
2019-05-02 6:04 ` Vinod Koul
2019-05-02 6:04 ` [PATCH] " Vinod Koul
2019-05-02 10:53 ` Sameer Pujar
2019-05-02 12:25 ` Vinod Koul
2019-05-02 13:29 ` Sameer Pujar
2019-05-03 19:10 ` Peter Ujfalusi
2019-05-04 10:23 ` Vinod Koul
2019-05-06 13:04 ` Sameer Pujar
2019-05-06 15:50 ` Vinod Koul
2019-06-06 3:49 ` Sameer Pujar
2019-06-06 6:00 ` Peter Ujfalusi
2019-06-06 6:41 ` Sameer Pujar
2019-06-06 7:14 ` Jon Hunter
2019-06-06 10:22 ` Peter Ujfalusi
2019-06-06 10:49 ` Jon Hunter
2019-06-06 11:54 ` Peter Ujfalusi
2019-06-06 12:37 ` Jon Hunter
2019-06-06 13:45 ` Dmitry Osipenko
2019-06-06 13:55 ` Dmitry Osipenko
2019-06-06 14:26 ` Jon Hunter
2019-06-06 14:36 ` Jon Hunter
2019-06-06 14:36 ` Dmitry Osipenko
2019-06-06 14:47 ` Jon Hunter
2019-06-06 14:25 ` Jon Hunter
2019-06-06 15:18 ` Dmitry Osipenko
2019-06-06 16:32 ` Jon Hunter
2019-06-06 16:44 ` Dmitry Osipenko
2019-06-06 16:53 ` Jon Hunter
2019-06-06 17:25 ` Dmitry Osipenko
2019-06-06 17:56 ` Dmitry Osipenko
2019-06-07 9:24 ` Jon Hunter
2019-06-07 5:50 ` Peter Ujfalusi
2019-06-07 9:18 ` Jon Hunter
2019-06-07 10:27 ` Jon Hunter
2019-06-07 12:17 ` Peter Ujfalusi
2019-06-07 12:58 ` Jon Hunter
2019-06-07 13:35 ` Peter Ujfalusi
2019-06-07 20:53 ` Dmitry Osipenko
2019-06-10 8:01 ` Jon Hunter
2019-06-10 7:59 ` Jon Hunter
2019-06-13 4:43 ` Vinod Koul
2019-06-17 7:07 ` Sameer Pujar
2019-06-18 4:33 ` Vinod Koul
2019-06-20 10:29 ` Sameer Pujar
2019-06-24 6:26 ` Vinod Koul [this message]
2019-06-25 2:57 ` Sameer Pujar
2019-07-05 6:15 ` Sameer Pujar
2019-07-15 15:42 ` Sameer Pujar
2019-07-19 5:04 ` Vinod Koul
2019-07-23 5:54 ` Sameer Pujar
2019-07-29 6:10 ` Vinod Koul
2019-07-31 9:48 ` Jon Hunter
2019-07-31 15:16 ` Vinod Koul
2019-08-02 8:51 ` Jon Hunter
2019-08-08 12:38 ` Vinod Koul
2019-08-19 15:56 ` Jon Hunter
2019-08-20 11:05 ` Vinod Koul
2019-09-16 9:02 ` Sameer Pujar