From: Horms <horms@verge.net.au>
To: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Miller <davem@davemloft.net>,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
Martin Schwidefsky <schwidefsky@de.ibm.com>,
Wensong Zhang <wensong@linux-vs.org>
Subject: Re: [patch] ipvs: force read of atomic_t in while loop
Date: Wed, 8 Aug 2007 18:45:44 +0900 [thread overview]
Message-ID: <20070808094542.GA5901@verge.net.au> (raw)
In-Reply-To: <20070808093300.GA14530@osiris.boeblingen.de.ibm.com>
On Wed, Aug 08, 2007 at 11:33:00AM +0200, Heiko Carstens wrote:
> From: Heiko Carstens <heiko.carstens@de.ibm.com>
>
> For architectures that don't have a volatile atomic_t, constructs like
> while (atomic_read(&something)); might result in endless loops, since
> the barrier() that would force the compiler to generate code that
> actually re-reads memory contents is missing.
> Fix this in ipvs by using the IP_VS_WAIT_WHILE macro, which resolves to
> while (expr) { cpu_relax(); }
> (why isn't this open coded btw?)
>
> Cc: Wensong Zhang <wensong@linux-vs.org>
> Cc: Simon Horman <horms@verge.net.au>
> Cc: David Miller <davem@davemloft.net>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
> ---
>
> Just saw this while grepping for atomic_reads in a while loops.
> Maybe we should re-add the volatile to atomic_t. Not sure.
This looks good to me. A little while back I noticed a few places
where IP_VS_WAIT_WHILE seemed to be curiously unused, then I got
distracted...
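For reference, a minimal sketch of the difference in the ip_vs_edit_dest()
context (illustrative only; the macro body below is paraphrased from the
description quoted above, not copied verbatim from ip_vs.h):

	/* Without a compiler barrier, svc->usecnt may be read once,
	 * kept in a register, and the loop can spin forever: */
	while (atomic_read(&svc->usecnt) > 1) {};

	/* cpu_relax() acts as a compiler barrier here, so the atomic_t
	 * is re-read from memory on every iteration: */
	#define IP_VS_WAIT_WHILE(expr)	while (expr) { cpu_relax(); }

	IP_VS_WAIT_WHILE(atomic_read(&svc->usecnt) > 1);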
Signed-off-by: Simon Horman <horms@verge.net.au>
>
> net/ipv4/ipvs/ip_vs_ctl.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux-2.6/net/ipv4/ipvs/ip_vs_ctl.c
> ===================================================================
> --- linux-2.6.orig/net/ipv4/ipvs/ip_vs_ctl.c
> +++ linux-2.6/net/ipv4/ipvs/ip_vs_ctl.c
> @@ -909,7 +909,7 @@ ip_vs_edit_dest(struct ip_vs_service *sv
> write_lock_bh(&__ip_vs_svc_lock);
>
> /* Wait until all other svc users go away */
> - while (atomic_read(&svc->usecnt) > 1) {};
> + IP_VS_WAIT_WHILE(atomic_read(&svc->usecnt) > 1);
>
> /* call the update_service, because server weight may be changed */
> svc->scheduler->update_service(svc);
--
Horms
H: http://www.vergenet.net/~horms/
W: http://www.valinux.co.jp/en/
Thread overview: 13+ messages
2007-08-08 9:33 [patch] ipvs: force read of atomic_t in while loop Heiko Carstens
2007-08-08 9:45 ` Horms [this message]
2007-08-08 10:21 ` David Miller
2007-08-08 10:28 ` Heiko Carstens
2007-08-08 21:08 ` Chris Snook
2007-08-08 21:31 ` Andrew Morton
2007-08-08 22:27 ` Heiko Carstens
2007-08-08 22:38 ` Chris Snook
2007-08-09 0:15 ` Andi Kleen
2007-08-09 12:35 ` Michael Buesch
2007-08-09 12:40 ` Chris Snook
2007-08-09 12:49 ` Martin Schwidefsky
2007-08-09 13:36 ` Andi Kleen