From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Peter Teoh <htmldeveloper@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Tejun Heo <htejun@gmail.com>,
	Dipankar Sarma <dipankar@in.ibm.com>
Subject: Re: per cpun+ spin locks coexistence?
Date: Fri, 14 Mar 2008 10:54:22 -0700	[thread overview]
Message-ID: <47DABBCE.5010803@goop.org> (raw)
In-Reply-To: <804dabb00803120917w451b16e6q685016d464a2edde@mail.gmail.com>

Peter Teoh wrote:
> Help me out this one - in fs/file.c, there is a function free_fdtable_rcu():
>
> void free_fdtable_rcu(struct rcu_head *rcu)
> {
>        struct fdtable *fdt = container_of(rcu, struct fdtable, rcu);
>        struct fdtable_defer *fddef;
>
>        BUG_ON(!fdt);
>
>        if (fdt->max_fds <= NR_OPEN_DEFAULT) {
>                /*
>                 * This fdtable is embedded in the files structure and that
>                 * structure itself is getting destroyed.
>                 */
>                kmem_cache_free(files_cachep,
>                                container_of(fdt, struct files_struct, fdtab));
>                return;
>        }
>        if (fdt->max_fds <= (PAGE_SIZE / sizeof(struct file *))) {
>                kfree(fdt->fd);
>                kfree(fdt->open_fds);
>                kfree(fdt);
>        } else {
>                fddef = &get_cpu_var(fdtable_defer_list);
>                spin_lock(&fddef->lock);
>                fdt->next = fddef->next;
>                fddef->next = fdt;
>                /* vmallocs are handled from the workqueue context */
>                schedule_work(&fddef->wq);
>                spin_unlock(&fddef->lock);
>                put_cpu_var(fdtable_defer_list);
>        }
> }
>
> Notice above that get_cpu_var() is followed by spin_lock().   Does this
> make sense?   get_cpu_var() will return a variable that is only
> accessible by the current CPU - guaranteed it will not be touched (read
> or written) by another CPU, right? 

No, not true.  percpu is for stuff which is generally only touched by 
one CPU, but there's nothing stopping other processors from accessing it 
with per_cpu(var, cpu).
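
For instance, something along these lines (a made-up illustration, not 
code from fs/file.c) is perfectly legal:

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    /* one copy of the counter per CPU */
    static DEFINE_PER_CPU(unsigned long, my_counter);

    static void bump_local_counter(void)
    {
            get_cpu_var(my_counter)++;      /* disables preemption, this CPU's copy */
            put_cpu_var(my_counter);
    }

    static unsigned long sum_all_counters(void)
    {
            unsigned long sum = 0;
            int cpu;

            /* any CPU may read (or write) every other CPU's copy */
            for_each_possible_cpu(cpu)
                    sum += per_cpu(my_counter, cpu);
            return sum;
    }

So "per cpu" only keeps the common case CPU-local; it isn't an exclusion 
mechanism by itself.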

Besides, the lock isn't locking the percpu list head, but the thing on 
the head of the list, presumably to prevent races with the workqueue.  
(Though the list structure is nonstandard, so it's not completely clear.)
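
From memory, the other side of that race looks roughly like this (a 
simplified sketch, not a verbatim copy of fs/file.c; the exact freeing 
calls may differ):

    /* Work handler scheduled by free_fdtable_rcu() above.  It may run
     * later, possibly on a different CPU, so it takes the same
     * fddef->lock to detach the deferred list before freeing it. */
    static void free_fdtable_work(struct work_struct *work)
    {
            struct fdtable_defer *fddef =
                    container_of(work, struct fdtable_defer, wq);
            struct fdtable *fdt;

            spin_lock_bh(&fddef->lock);
            fdt = fddef->next;
            fddef->next = NULL;
            spin_unlock_bh(&fddef->lock);

            /* free the vmalloc'd tables outside the lock */
            while (fdt) {
                    struct fdtable *next = fdt->next;
                    vfree(fdt->fd);
                    vfree(fdt->open_fds);
                    kfree(fdt);
                    fdt = next;
            }
    }

Whichever way you read it, the work handler isn't guaranteed to run on 
the CPU that queued the entries, so the per-cpu head alone doesn't give 
you exclusion.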

    J
