From: Christoph Hellwig <hch@infradead.org>
To: Andrew Grover <andrew.grover@intel.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	john.ronciak@intel.com, christopher.leech@intel.com
Subject: Re: [RFC] [PATCH 1/3] ioat: DMA subsystem
Date: Thu, 24 Nov 2005 11:48:01 +0000
Message-ID: <20051124114801.GA20244@infradead.org>
In-Reply-To: <Pine.LNX.4.44.0511231207410.32487-100000@isotope.jf.intel.com>

> +++ b/drivers/dma/cb_list.h
> @@ -0,0 +1,12 @@
> +/* Extra macros that build on <linux/list.h> */
> +#ifndef CB_LIST_H
> +#define CB_LIST_H
> +
> +#include <linux/list.h>
> +
> +/* Provide some safety to list_add, whose arguments are easy to swap by mistake */
> +
> +#define list_add_entry(pos, head, member)      list_add(&pos->member, head)
> +#define list_add_entry_tail(pos, head, member) list_add_tail(&pos->member, head)

These macros seem rather useless.  If you disagree, please submit a patch to
<linux/list.h> - if it gets accepted, fine; otherwise just remove this usage.
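For comparison, the open-coded form at a call site is just as clear; a
sketch using the field names from the channel list quoted further down
(not a claim about what the actual call sites look like):

	list_add_tail(&chan->device_node, &device->channels);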


> +	struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);

What about a

#define to_dma_chan(cd) \
	container_of(cd, struct dma_chan, class_dev)

helper as we have in many subsystems?
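The attribute callbacks then read like this (a sketch only - the
function name and the attribute are made up, just the struct members
come from the quoted patch):

	static ssize_t show_in_use(struct class_device *cd, char *buf)
	{
		struct dma_chan *chan = to_dma_chan(cd);

		return sprintf(buf, "%d\n", chan->client ? 1 : 0);
	}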

> +static void
> +dma_class_release(struct class_device *cd)
> +{
> +	/* do something */
> +}

Umm, yeah.. :)  An empty release callback with a /* do something */
placeholder clearly still needs to be filled in.
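For reference, the release callback is normally where the refcounted
object gets freed; a sketch, assuming each dma_chan is individually
kmalloc'd (which may not match how the driver actually allocates them):

	static void
	dma_class_release(struct class_device *cd)
	{
		struct dma_chan *chan = to_dma_chan(cd);

		kfree(chan);
	}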

> +static struct dma_chan *
> +dma_client_chan_alloc(struct dma_client *client)
> +{
> +	struct dma_device *device;
> +	struct dma_chan *chan;
> +
> +	BUG_ON(!client);
> +
> +	/* Find a channel, any DMA engine will do */
> +	list_for_each_entry(device, &dma_device_list, global_node) {
> +		list_for_each_entry(chan, &device->channels, device_node) {
> +			if (chan->client)
> +				continue;

Couldn't you use the normal device model list for this instead of the
private global dma_device_list?
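Something along these lines would avoid the private global list
entirely; a rough sketch that assumes the subsystem registers a
"dma_class" struct class and walks the driver model's own child list
(class->children of class_devices linked via ->node, under class->sem):

	struct class_device *cd;

	down(&dma_class.sem);
	list_for_each_entry(cd, &dma_class.children, node) {
		struct dma_chan *chan = to_dma_chan(cd);

		if (!chan->client) {
			/* found an unclaimed channel */
		}
	}
	up(&dma_class.sem);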

> +static void
> +dma_client_chan_free(struct dma_chan *chan)
> +{
> +	BUG_ON(!chan);
> +
> +	chan->device->device_free_chan_resources(chan);

You'll get an oops from the chan->device dereference right below anyway,
so there's no need for the BUG_ON.

> +			chan = list_entry(client->channels.next, struct dma_chan, client_node);

Please shorten the line to fit 80-column terminals.
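E.g. simply wrap it:

	chan = list_entry(client->channels.next,
			  struct dma_chan, client_node);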

> +static int __init dma_bus_init(void)
> +{
> +	int cpu;
> +
> +	dma_wait_wq = create_workqueue("dmapoll");
> +	for_each_online_cpu(cpu) {
> +		init_completion(&per_cpu(kick_dma_poll, cpu));

You either need to make this for_each_possible_cpu or add a CPU hotplug
notifier.  The first is probably a lot easier :)
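A sketch of the first option, with the rest of dma_bus_init() unchanged:

	for_each_possible_cpu(cpu)
		init_completion(&per_cpu(kick_dma_poll, cpu));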
