From: Ian Campbell <ian.campbell@citrix.com>
To: Malcolm Crossley <malcolm.crossley@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
Jan Beulich <jbeulich@suse.com>,
xen-devel <xen-devel@lists.xen.org>
Subject: missing lock in percpu_rwlock? (Was: Re: New Defects reported by Coverity Scan for XenProject)
Date: Wed, 3 Feb 2016 10:45:49 +0000
Message-ID: <1454496349.25207.54.camel@citrix.com>
In-Reply-To: <56b180c017d5f_214fb5b3143623f@ss1435.mail>
On Tue, 2016-02-02 at 20:23 -0800, scan-admin@coverity.com wrote:
> * CID 1351223: Concurrent data access violations (MISSING_LOCK)
> /xen/include/xen/spinlock.h: 362 in _percpu_write_unlock()
Coverity seems to think this is new in 41b0aa569adb..9937763265d,
presumably due to
commit f9dd43dddc0a31a4343a58072935c1b5c0cbbee
Author: Malcolm Crossley <malcolm.crossley@citrix.com>
Date: Fri Jan 22 16:04:41 2016 +0100
rwlock: add per-cpu reader-writer lock infrastructure
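For context, the general idea behind such a per-cpu reader-writer lock is that readers normally only touch per-cpu state, and only fall back to the underlying rwlock while a writer has raised writer_activating; the writer clears the flag again on unlock. Below is a minimal, self-contained sketch of that pattern in plain C with pthreads -- all names and structure are illustrative only, not the Xen implementation:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_READERS 64

typedef struct {
    pthread_rwlock_t rwlock;                /* shared slow-path lock      */
    atomic_bool writer_activating;          /* writer gate                */
    atomic_bool reader_active[MAX_READERS]; /* per-reader "in CS" flags   */
} sketch_rwlock_t;

void sketch_init(sketch_rwlock_t *l)
{
    pthread_rwlock_init(&l->rwlock, NULL);
    atomic_store(&l->writer_activating, false);
    for ( int i = 0; i < MAX_READERS; i++ )
        atomic_store(&l->reader_active[i], false);
}

/* Returns true if the fast (per-reader flag) path was taken. */
bool sketch_read_lock(sketch_rwlock_t *l, int self)
{
    atomic_store(&l->reader_active[self], true);
    if ( !atomic_load(&l->writer_activating) )
        return true;                        /* no writer: fast path       */
    /* Writer pending: step off the fast path and take the shared lock.  */
    atomic_store(&l->reader_active[self], false);
    pthread_rwlock_rdlock(&l->rwlock);
    return false;
}

void sketch_read_unlock(sketch_rwlock_t *l, int self, bool fast)
{
    if ( fast )
        atomic_store(&l->reader_active[self], false);
    else
        pthread_rwlock_unlock(&l->rwlock);
}

void sketch_write_lock(sketch_rwlock_t *l)
{
    pthread_rwlock_wrlock(&l->rwlock);
    atomic_store(&l->writer_activating, true);
    /* Wait for readers already inside a fast-path critical section.     */
    for ( int i = 0; i < MAX_READERS; i++ )
        while ( atomic_load(&l->reader_active[i]) )
            sched_yield();
}

void sketch_write_unlock(sketch_rwlock_t *l)
{
    /* Clear the gate while the shared lock is still held, then release. */
    atomic_store(&l->writer_activating, false);
    pthread_rwlock_unlock(&l->rwlock);
}

The point of the exercise is that uncontended readers never touch the shared rwlock cache line at all.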
> ________________________________________________________________
> *** CID 1351223: Concurrent data access violations (MISSING_LOCK)
> /xen/include/xen/spinlock.h: 362 in _percpu_write_unlock()
> 356 percpu_rwlock_t *percpu_rwlock)
> 357 {
> 358 /* Validate the correct per_cpudata variable has been provided. */
> 359 _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
> 360
> 361 ASSERT(percpu_rwlock->writer_activating);
> >>> CID 1351223: Concurrent data access violations (MISSING_LOCK)
> >>> Accessing "percpu_rwlock->writer_activating" without holding lock "percpu_rwlock.rwlock". Elsewhere, "percpu_rwlock.writer_activating" is accessed with "percpu_rwlock.rwlock" held 1 out of 2 times (1 of these accesses strongly imply that it is necessary).
> 362 percpu_rwlock->writer_activating = 0;
> 363 write_unlock(&percpu_rwlock->rwlock);
> 364 }
> 365
> 366 #define percpu_rw_is_write_locked(l) _rw_is_write_locked(&((l)->rwlock))
> 367
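For what it's worth, line 362 sits between the write-side acquire and the write_unlock() on line 363, so the lock looks to still be held at that point; presumably the checker flags it because MISSING_LOCK works statistically and the matching write_lock() happens in a different function. A tiny sketch of the shape in question, with illustrative names only (not the Xen code):

#include <pthread.h>

typedef struct {
    pthread_rwlock_t rwlock;
    int writer_activating;
} example_lock_t;

void example_write_lock(example_lock_t *l)
{
    pthread_rwlock_wrlock(&l->rwlock);
    l->writer_activating = 1;   /* lock acquired one line above           */
    /* ... wait for fast-path readers to drain ... */
}

void example_write_unlock(example_lock_t *l)
{
    l->writer_activating = 0;   /* lock still held; only released below   */
    pthread_rwlock_unlock(&l->rwlock);
}

A checker that only counts lock-held accesses per function sees one guarded write (in the lock routine) and one apparently unguarded write (in the unlock routine), hence the "1 out of 2 times" in the report above.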
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel