From: Jeff Layton <jlayton@kernel.org>
To: Jerome Glisse <jglisse@redhat.com>, Matthew Wilcox <willy@infradead.org>
Cc: Dave Chinner <david@fromorbit.com>,
	Michal Hocko <mhocko@kernel.org>,
	linux-block@vger.kernel.org, linux-mm@kvack.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	lsf-pc@lists.linux-foundation.org
Subject: Re: [Lsf-pc] [LSF/MM] schedule suggestion
Date: Thu, 19 Apr 2018 12:58:39 -0400	[thread overview]
Message-ID: <1524157119.2943.6.camel@kernel.org> (raw)
In-Reply-To: <20180419163036.GC3519@redhat.com>

On Thu, 2018-04-19 at 12:30 -0400, Jerome Glisse wrote:
> On Thu, Apr 19, 2018 at 07:43:56AM -0700, Matthew Wilcox wrote:
> > On Thu, Apr 19, 2018 at 10:38:25AM -0400, Jerome Glisse wrote:
> > > Oh, can I get one more small slot for fs? I want to ask if there are
> > > any people against having a callback every time a struct file is added
> > > to a task_struct, and also having a secondary array so that special
> > > files like device files can store something opaque per task_struct per
> > > struct file.
> > 
> > Do you really want something per _thread_, and not per _mm_?
> 
> Well, per mm would be fine, but I do not see how to make that happen with
> a reasonable structure. The issue is that you can have multiple tasks with
> the same mm but different file descriptors (or am I wrong here?), thus there
> would be no easy way, given a struct file, to look up the per-mm struct.
> 
> So, as an imperfect solution, I see a new array in filedes which would
> allow a device driver to store a pointer to its per-mm data structure.
> To be fair, usually you will only have a single fd in a single task for
> a given device.
> 
> If you see an easy way to get a per-mm, per-inode pointer stored somewhere
> with easy lookup, I am all ears :)
> 

I may be misunderstanding, but to be clear: struct files don't get
added to a thread, per se.

When userland calls open() or similar, the struct file gets added to
the files_struct. Those are generally shared with other threads within
the same process. The files_struct can also be shared with other
processes if you clone() with the right flags.

Doing something per-thread on every open may be rather difficult.
-- 
Jeff Layton <jlayton@kernel.org>
_______________________________________________
Lsf-pc mailing list
Lsf-pc@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/lsf-pc


Thread overview: 48+ messages
2018-04-18 21:19 [LSF/MM] schedule suggestion Jerome Glisse
2018-04-19  0:48 ` Dave Chinner
2018-04-19  1:55 ` [Lsf-pc] " Dave Chinner
2018-04-19 14:38   ` Jerome Glisse
2018-04-19 14:43     ` Matthew Wilcox
2018-04-19 16:30       ` Jerome Glisse
2018-04-19 16:58         ` Jeff Layton [this message]
2018-04-19 17:26           ` Jerome Glisse
2018-04-19 18:31             ` Jeff Layton
2018-04-19 19:31               ` Jerome Glisse
2018-04-19 19:56                 ` Matthew Wilcox
2018-04-19 20:15                   ` Jerome Glisse
2018-04-19 20:25                     ` Matthew Wilcox
2018-04-19 20:39                       ` Al Viro
2018-04-19 21:08                         ` Jerome Glisse
2018-04-19 20:51                     ` Al Viro
2018-04-19 20:33             ` Al Viro
2018-04-19 20:58               ` Jerome Glisse
2018-04-19 21:21                 ` Al Viro
2018-04-19 21:47                   ` Jerome Glisse
2018-04-19 22:13                     ` Al Viro
2018-04-19 14:51   ` Chris Mason
2018-04-19 15:07     ` Martin K. Petersen
