From: Ingo Oeser <netdev@axxeo.de>
To: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Cc: "David S. Miller" <davem@davemloft.net>,
	jengelh@linux01.gwdg.de, christopher.leech@intel.com,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [PATCH 0/8] Intel I/O Acceleration Technology (I/OAT)
Date: Tue, 7 Mar 2006 10:43:59 +0100
Message-ID: <200603071043.59479.netdev@axxeo.de>
In-Reply-To: <20060307074438.GA22672@2ka.mipt.ru>

Evgeniy Polyakov wrote:
> On Mon, Mar 06, 2006 at 06:44:07PM +0100, Ingo Oeser (netdev@axxeo.de) wrote:
> > Hmm, so I should resurrect my user page table walker abstraction?
> > 
> > There I would hand each page to a "recording" function, which
> > can drop the page from the collection or coalesce it in the collector
> > if your scatter gather implementation allows it.
> 
> It depends on where the performance growth stops.
> At first glance it does not look like find_extend_vma();
> it is probably the follow_page() fault and thus __handle_mm_fault().
> I cannot say for sure, but if that is true and the performance growth
> stops because of the increased number of faults and their processing,
> your approach will hit this problem too, won't it?

My approach reduced the number of loops performed and the amount
of memory needed, at the expense of doing more work in the main
loop of get_user_pages().
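
For illustration only, the shape of that walker was roughly the following;
walk_user_pages(), record_fn_t and sg_collector are names invented for this
sketch, not the code from the old patch:

/* Hypothetical sketch of the walker interface; not real kernel code. */
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/scatterlist.h>

/* The walker hands every pinned user page to this caller-supplied
 * "recording" callback, which may drop it or coalesce it into the
 * caller's own scatter/gather structure. */
typedef int (*record_fn_t)(void *priv, struct page *page,
                           struct vm_area_struct *vma, unsigned long addr);

int walk_user_pages(struct task_struct *tsk, struct mm_struct *mm,
                    unsigned long start, int nr_pages, int write,
                    record_fn_t record, void *priv);

/* Example callback: merge physically contiguous pages into one entry
 * (page reference handling elided for brevity). */
struct sg_collector {
        struct scatterlist *sg;
        int nents;
};

static int record_into_sg(void *priv, struct page *page,
                          struct vm_area_struct *vma, unsigned long addr)
{
        struct sg_collector *c = priv;
        struct scatterlist *prev = c->nents ? &c->sg[c->nents - 1] : NULL;

        if (prev && prev->page + (prev->length >> PAGE_SHIFT) == page) {
                prev->length += PAGE_SIZE;              /* coalesce */
        } else {
                c->sg[c->nents].page   = page;
                c->sg[c->nents].offset = 0;
                c->sg[c->nents].length = PAGE_SIZE;
                c->nents++;
        }
        return 0;
}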

This was mitigated for the common case of getting just one page by 
providing a get_one_user_page() function.
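
Roughly like this (a minimal sketch only; the actual helper in the old
patch may have looked different):

/* Sketch of the single-page fast path; caller holds mm->mmap_sem. */
static inline struct page *get_one_user_page(struct task_struct *tsk,
                                             struct mm_struct *mm,
                                             unsigned long addr, int write,
                                             struct vm_area_struct **vma)
{
        struct page *page;
        int ret;

        /* One page: no vector to set up and no loop over the result. */
        ret = get_user_pages(tsk, mm, addr & PAGE_MASK, 1, write, 0,
                             &page, vma);
        return ret == 1 ? page : NULL;
}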

The whole reason we need multiple loops like this is that we have
no common container object for "IO vector + additional data".

So callers always end up looping over the vector returned by
get_user_pages(). The bigger that vector, the bigger the impact.
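
A typical user looks roughly like this (hypothetical code, ioat_map_user_buf()
is an invented name): one loop inside get_user_pages() to pin the pages, then
a second loop in the caller just to repackage the vector:

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/scatterlist.h>

static int ioat_map_user_buf(unsigned long uaddr, int nr_pages,
                             struct page **pages, struct scatterlist *sg)
{
        int i, n;

        down_read(&current->mm->mmap_sem);
        n = get_user_pages(current, current->mm, uaddr & PAGE_MASK,
                           nr_pages, 1 /* write */, 0 /* force */,
                           pages, NULL);
        up_read(&current->mm->mmap_sem);
        if (n <= 0)
                return n;

        /* Second pass over the vector get_user_pages() just filled. */
        for (i = 0; i < n; i++) {
                sg[i].page   = pages[i];
                sg[i].offset = 0;
                sg[i].length = PAGE_SIZE;
        }
        return n;
}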

Maybe something as simple as providing get_user_pages() with some offsetof()
and container_of() hackery will work these days, without the disadvantages
my old get_user_pages() work had.

The idea is that you provide a vector (described calloc-style, as element
size and count) and two offsets: one saying where within each element to
store the page and one saying where to store the vma.

If an offset has a special value (e.g. ULONG_MAX), you don't store there at all.
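
Purely hypothetical, but it could look something like this
(get_user_pages_into(), GUP_DONT_STORE and struct ioat_desc are invented
names for the sketch):

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/stddef.h>               /* offsetof() */

#define GUP_DONT_STORE  ((unsigned long)-1)     /* "don't store" offset */

/* Hypothetical interface: the caller passes one array of 'nmemb'
 * elements of 'size' bytes (calloc-style) plus two byte offsets saying
 * where inside each element the page and the vma should be written. */
int get_user_pages_into(struct task_struct *tsk, struct mm_struct *mm,
                        unsigned long start, int write, int force,
                        void *vec, size_t size, int nmemb,
                        unsigned long page_off, unsigned long vma_off);

/* Example: fill a driver-private descriptor array in a single pass. */
struct ioat_desc {
        struct page *page;      /* written via page_off */
        dma_addr_t dma;         /* driver-private, left untouched */
        unsigned int len;
};

static int map_user_descs(unsigned long uaddr, struct ioat_desc *descs, int n)
{
        return get_user_pages_into(current, current->mm, uaddr, 1, 0,
                                   descs, sizeof(*descs), n,
                                   offsetof(struct ioat_desc, page),
                                   GUP_DONT_STORE /* no vma wanted */);
}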

But if the performance problem really is get_user_pages() itself 
(and not its callers), then my approach won't help at all.


Regards

Ingo Oeser


Thread overview: 50+ messages
2006-03-03 21:40 [PATCH 0/8] Intel I/O Acceleration Technology (I/OAT) Chris Leech
2006-03-03 21:42 ` [PATCH 1/8] [I/OAT] DMA memcpy subsystem Chris Leech
2006-03-04  1:40   ` David S. Miller
2006-03-06 19:39     ` Chris Leech
2006-03-04 19:20   ` Benjamin LaHaise
2006-03-06 19:48     ` Chris Leech
2006-03-03 21:42 ` [PATCH 3/8] [I/OAT] Setup the networking subsystem as a DMA client Chris Leech
2006-03-03 21:42 ` [PATCH 4/8] [I/OAT] Utility functions for offloading sk_buff to iovec copies Chris Leech
2006-03-05  7:15   ` Andrew Morton
2006-03-03 21:42 ` [PATCH 5/8] [I/OAT] Structure changes for TCP recv offload to I/OAT Chris Leech
2006-03-05  7:19   ` Andrew Morton
2006-03-03 21:42 ` [PATCH 6/8] [I/OAT] Rename cleanup_rbuf to tcp_cleanup_rbuf and make non-static Chris Leech
2006-03-03 21:42 ` [PATCH 7/8] [I/OAT] Add a sysctl for tuning the I/OAT offloaded I/O threshold Chris Leech
2006-03-04 11:22   ` Alexey Dobriyan
2006-03-05  7:21   ` Andrew Morton
2006-03-03 21:42 ` [PATCH 8/8] [I/OAT] TCP recv offload to I/OAT Chris Leech
2006-03-04 16:39   ` Pavel Machek
2006-03-04 23:18   ` Greg KH
2006-03-06 19:28     ` Chris Leech
2006-03-05  7:30   ` Andrew Morton
2006-03-05  8:45   ` Andrew Morton
2006-03-05 10:27     ` David S. Miller
2006-03-06 19:36     ` Chris Leech
2006-03-03 22:27 ` [PATCH 0/8] Intel I/O Acceleration Technology (I/OAT) Jeff Garzik
2006-03-03 22:39   ` Chris Leech
2006-03-03 22:45     ` Jeff Garzik
2006-03-04 11:35     ` Evgeniy Polyakov
2006-03-05  8:09     ` Andrew Morton
2006-03-05  9:02       ` Discourage duplicate symbols in the kernel? [Was: Intel I/O Acc...] Sam Ravnborg
2006-03-05  9:18         ` Andrew Morton
2006-03-06 19:56           ` Chris Leech
2006-03-03 22:58 ` [PATCH 0/8] Intel I/O Acceleration Technology (I/OAT) Kumar Gala
2006-03-03 23:32   ` Chris Leech
2006-03-04 18:46 ` Jan Engelhardt
2006-03-04 21:41   ` David S. Miller
2006-03-04 22:05     ` Gene Heskett
2006-03-04 22:16       ` David S. Miller
2006-03-05 13:45         ` Jan Engelhardt
2006-03-05 13:55           ` Arjan van de Ven
2006-03-05 16:14         ` Matthieu CASTET
2006-03-05 16:30           ` Jeff Garzik
2006-03-06 19:24           ` Chris Leech
2006-03-06 19:15       ` Chris Leech
2006-03-05  1:43     ` Evgeniy Polyakov
2006-03-05  2:08       ` David S. Miller
2006-03-06 17:44       ` Ingo Oeser
2006-03-07  7:44         ` Evgeniy Polyakov
2006-03-07  9:43           ` Ingo Oeser [this message]
2006-03-07 10:16             ` Evgeniy Polyakov
  -- strict thread matches above, loose matches on Subject: below --
2006-03-11  2:27 Chris Leech
