From: Manoj Kumar <manoj@linux.vnet.ibm.com>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>,
"Matthew R. Ochs" <mrochs@linux.vnet.ibm.com>
Cc: linux-scsi@vger.kernel.org,
James.Bottomley@HansenPartnership.com, nab@linux-iscsi.org,
brking@linux.vnet.ibm.com, wenxiong@linux.vnet.ibm.com,
hch@infradead.org, mikey@neuling.org, imunsie@au1.ibm.com,
dja@ozlabs.au.ibm.com
Subject: Re: [PATCH v4 2/3] cxlflash: Superpipe support
Date: Tue, 11 Aug 2015 17:21:56 -0500
Message-ID: <55CA7584.4020505@linux.vnet.ibm.com>
In-Reply-To: <1439270955.14448.91.camel@kernel.crashing.org>
Ben:
Comments inline below.
On 8/11/2015 12:29 AM, Benjamin Herrenschmidt wrote:
> So in a similar vein to the previous review, I am missing a lot of
> context here but a few things did spring to me eyes:
Thanks for your review.
>> + list_for_each_entry_safe(lli, temp, &cfg->lluns, list)
>> + if (!memcmp(lli->wwid, wwid, DK_CXLFLASH_MANAGE_LUN_WWID_LEN)) {
>> + lli->newly_created = false;
>
> This is weird ... a lookup effectively changes the state of the object
> looked up... what for ? There is something oddball here.
>
> It might be legit but in that case, you should really add a comment
> explaining the logic around that 'newly_created' field.
As you suggest further down, will rename these functions in v5 and add
a comment explaining the 'newly_created' logic.
> Also you drop the lock right below but you have no refcounting, are
> these objects ever disposed of ?
These objects are long lived. The local LUN info structure lives as
long as the card is available, and the global LUN info is kept around
as long as the module is loaded.
> In general, can you provide a clearer explanation of what are "global"
> vs "local" LUNs ?
This is a good idea. Will clarify in v5.
> Same ...
Same as above. Will address by renaming.
> Make the function name more explicit: find_or_create_lun() for example.
> I very much dislike a function called "lookup" that has side effects.
Good point. Will rename in v5.
>> + lli = create_local(sdev, wwid);
>> + if (unlikely(!lli))
>> + goto out;
>
> Similar question to earlier, you have no refcounting, should I assume
> these things never get removed ?
Right, these are long lived.
>> + lli->parent = gli;
>> + spin_lock_irqsave(&cfg->slock, lock_flags);
>> + list_add(&lli->list, &cfg->lluns);
>> + spin_unlock_irqrestore(&cfg->slock, lock_flags);
>> +
>> + spin_lock_irqsave(&global.slock, lock_flags);
>> + list_add(&gli->list, &global.gluns);
>> + spin_unlock_irqrestore(&global.slock, lock_flags);
>
> Your locks are extremely fine grained... too much ? Any reason why you
> don't have a simple/single lock handling all these ? IE, do you expect
> frequent accesses ?
>
> Also, a function called "lookup_something" that has the side effect of
> adding that something to two lists doesn't look great to me. You may
> want to review the function naming a bit.
>
> Finally, what happens if two processes call this trying to effectively
> create the same global LUN simultaneously ?
>
> IE, can't you have a case where both lookup fail, then they both hit the
> create_global() case for the same WWID ? Should you have a single lock
> or a mutex wrapping the whole thing ? That would make the code a lot
> simpler to review as well...
Good catch. Will look into simplifying to a mutex in v5, wrapping the
whole lookup/create sequence.
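For v5, the shape would be something like the sketch below: a single
mutex held across both the lookup and the insertion, so two racing
callers can never both miss the lookup and create duplicate entries
for the same WWID. (Userspace pthreads and a plain singly-linked list
are used here purely to illustrate the pattern; lun_node,
lun_table_mutex and find_or_create_lun are made-up names, not the
actual driver code.)

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the single-mutex find-or-create pattern;
 * all names are invented for illustration. */
struct lun_node {
	char wwid[16];
	struct lun_node *next;
};

static pthread_mutex_t lun_table_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct lun_node *lun_list;

/* Holding one mutex across lookup and insertion closes the race
 * where two callers both miss the lookup and create duplicates. */
struct lun_node *find_or_create_lun(const char *wwid)
{
	struct lun_node *node;

	pthread_mutex_lock(&lun_table_mutex);
	for (node = lun_list; node; node = node->next)
		if (!memcmp(node->wwid, wwid, sizeof(node->wwid)))
			goto out;

	node = calloc(1, sizeof(*node));
	if (node) {
		memcpy(node->wwid, wwid, sizeof(node->wwid));
		node->next = lun_list;
		lun_list = node;
	}
out:
	pthread_mutex_unlock(&lun_table_mutex);
	return node;
}
```

The name also then says what the function does, per your earlier
comment about "lookup" having side effects.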
>> +void cxlflash_list_init(void)
>> +{
>> + INIT_LIST_HEAD(&global.gluns);
>> + spin_lock_init(&global.slock);
>> + global.err_page = NULL;
>> +}
>
> Wouldn't it make the code nicer to have all that LUN management in a
> separate file ?
Good suggestion. Will look at moving this LUN management code to a
separate file.
>> + rc = mutex_lock_interruptible(&cfg->ctx_tbl_list_mutex);
>> + if (rc)
>> + goto out;
>
> This can be interrupted by any signal, I assume your userspace deals
> with it ?
That is correct. We do want to be interrupted by any signal (SIGTERM,
SIGKILL, SIGINT, etc.) at this point. We fail the context validation
and ultimately the ioctl, and leave it to user-space to deal with it.
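On the user-space side that just means the usual EINTR retry loop
around the ioctl, roughly as below. (do_ioctl() here is a stand-in
for the real ioctl(2) call on the cxlflash device, with a counter
simulating an interrupted call so the shape is demonstrable; none of
these names are from the driver.)

```c
#include <errno.h>

/* Simulate one or more signal interruptions before success; a
 * stand-in for ioctl(2) returning -1/EINTR. */
static int fail_first;

static int do_ioctl(void)
{
	if (fail_first-- > 0) {
		errno = EINTR;
		return -1;
	}
	return 0;
}

/* The caller simply retries when the kernel side bailed out of
 * mutex_lock_interruptible() due to a signal. */
int issue_cmd_retrying(void)
{
	int rc;

	do {
		rc = do_ioctl();
	} while (rc == -1 && errno == EINTR);
	return rc;
}
```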
>> + rc = mutex_trylock(&ctxi->mutex);
>> + mutex_unlock(&cfg->ctx_tbl_list_mutex);
>> + if (!rc)
>> + goto retry;
>
> Ouch.. that's a nasty one. Are you using the above construct to avoid
> an A->B/B->A deadlock scenario where somebody else might be taking
> the list mutex while holding the context one ?
No, this is not addressing a lock ordering issue. Will clarify with a
comment why the list mutex is being released and reacquired in the
retry loop.
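To sketch what the comment will say: the table mutex is only held long
enough to find the context, the context mutex is trylock'ed, and on
contention everything is dropped and the lookup restarted, so we never
sleep on a context mutex while holding the table mutex. Roughly (again
userspace pthreads, invented names, not the driver code):

```c
#include <pthread.h>
#include <stddef.h>

/* Invented stand-ins for the context table and context object. */
struct ctx {
	pthread_mutex_t mutex;
};

static pthread_mutex_t ctx_tbl_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct ctx *ctx_table[4];

/* On success, returns the context with ctxi->mutex held. */
struct ctx *get_context_locked(int id)
{
	struct ctx *ctxi;

retry:
	pthread_mutex_lock(&ctx_tbl_mutex);
	ctxi = ctx_table[id];
	if (!ctxi) {
		pthread_mutex_unlock(&ctx_tbl_mutex);
		return NULL;
	}
	/* Trylock so the table mutex is never held across a sleep. */
	if (pthread_mutex_trylock(&ctxi->mutex)) {
		pthread_mutex_unlock(&ctx_tbl_mutex);
		goto retry;	/* context busy; start the lookup over */
	}
	pthread_mutex_unlock(&ctx_tbl_mutex);
	return ctxi;
}
```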
Thread overview: 10+ messages
2015-08-10 17:09 [PATCH v4 2/3] cxlflash: Superpipe support Matthew R. Ochs
2015-08-11 5:23 ` Michael Neuling
2015-08-11 21:51 ` Matthew R. Ochs
2015-08-12 3:54 ` Michael Neuling
2015-08-12 17:05 ` Matthew R. Ochs
2015-08-11 5:29 ` Benjamin Herrenschmidt
2015-08-11 22:21 ` Manoj Kumar [this message]
2015-08-12 3:20 ` wenxiong
2015-08-12 4:18 ` wenxiong
2015-08-12 17:02 ` Matthew R. Ochs