public inbox for linux-rdma@vger.kernel.org
From: Roland Dreier <rdreier-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
To: Rui Machado <ruimario-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Ralph Campbell
	<ralph.campbell-h88ZbnxC6KDQT0dZR+AlfA@public.gmane.org>,
	linux-rdma <linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
Subject: Re: yet again the atomic operations
Date: Fri, 06 Aug 2010 08:49:16 -0700	[thread overview]
Message-ID: <adaocdf2137.fsf@roland-alpha.cisco.com> (raw)
In-Reply-To: <AANLkTimVTjHOubAaJ5oRHju6C7k7FZMqqq0Mvb4SBztB-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org> (Rui Machado's message of "Fri, 6 Aug 2010 13:43:22 +0200")

 > So if the CPU writes/reads to/from the same address, even atomically
 > (lock), there might be room for some inconsistency on the values? It
 > is not really atomic from the whole system point of view, just for the
 > HCA? If so, is there any possibility to make the whole operation
 > 'system-wide' atomic?

PCI does not have any capability for atomic operations until PCI Express
3.0 (not yet available in any real devices).  So any current HCA
performing atomic operations across a PCI bus always has to do a
read-modify-write sequence, which leaves a window for the CPU to mess
things up if it accesses the same location.

You can work around this by creating a loopback connection (i.e. an RC
connection from the local HCA to itself) and posting atomic operations
to that QP instead of accessing the memory directly with the CPU.  That
way every update goes through the HCA, which serializes them.
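For reference, posting such an operation with libibverbs looks roughly
like the sketch below.  It assumes you have already set up the loopback
RC QP, registered the counter's MR with IBV_ACCESS_REMOTE_ATOMIC, and
registered an 8-byte landing buffer for the old value; all function and
variable names here are illustrative, and this obviously only runs on a
machine with an RDMA device.

```c
/* Sketch: post a fetch-and-add on a loopback RC QP so the HCA, not
 * the CPU, performs every update to the counter. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_fetch_add(struct ibv_qp *qp,
		   struct ibv_mr *mr, uint64_t *counter,
		   struct ibv_mr *result_mr, uint64_t *result,
		   uint64_t add)
{
	struct ibv_sge sge;
	struct ibv_send_wr wr, *bad_wr;

	memset(&sge, 0, sizeof sge);
	sge.addr   = (uintptr_t) result;	/* old value lands here */
	sge.length = sizeof *result;		/* atomics are 64-bit   */
	sge.lkey   = result_mr->lkey;

	memset(&wr, 0, sizeof wr);
	wr.opcode	= IBV_WR_ATOMIC_FETCH_AND_ADD;
	wr.send_flags	= IBV_SEND_SIGNALED;
	wr.sg_list	= &sge;
	wr.num_sge	= 1;
	wr.wr.atomic.remote_addr = (uintptr_t) counter;
	wr.wr.atomic.rkey	 = mr->rkey;
	wr.wr.atomic.compare_add = add;		/* value to add */

	return ibv_post_send(qp, &wr, &bad_wr);
}
```

Poll the send CQ for the completion before reading *result; since all
updates flow through the same HCA, they are serialized with respect to
atomics from remote peers as well.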

 - R.
-- 
Roland Dreier <rolandd-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org> || For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/index.html


Thread overview: 8+ messages
2010-08-04 12:32 yet again the atomic operations Rui Machado
     [not found] ` <AANLkTikQX+CKvs_pjkH_Ap358jfjp4dYKYyKG95+eZmt-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-08-05 18:41   ` Ralph Campbell
     [not found]     ` <1281033688.7414.30.camel-/vjeY7uYZjrPXfVEPVhPGq6RkeBMCJyt@public.gmane.org>
2010-08-06 11:43       ` Rui Machado
     [not found]         ` <AANLkTimVTjHOubAaJ5oRHju6C7k7FZMqqq0Mvb4SBztB-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-08-06 15:49           ` Roland Dreier [this message]
     [not found]             ` <adaocdf2137.fsf-BjVyx320WGW9gfZ95n9DRSW4+XlvGpQz@public.gmane.org>
2010-08-10 11:50               ` Rui Machado
2010-08-06 17:26           ` Ralph Campbell
2010-08-10 11:46       ` Rui Machado
     [not found]         ` <AANLkTikPAqopgFR_vSnj9qkQu77S1RWixgM0POUQ5LM9-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2010-08-10 17:26           ` Ralph Campbell
