From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Ric Wheeler <rwheeler@redhat.com>
Cc: Mike Snitzer <snitzer@redhat.com>,
	linux-fsdevel@vger.kernel.org, Eric Sandeen <sandeen@redhat.com>,
	Jeff Moyer <jmoyer@redhat.com>,
	lsf10-pc@lists.linuxfoundation.org
Subject: Re: [Lsf10-pc] [ATTEND] I'd like to attend ;)
Date: Thu, 17 Jun 2010 09:02:13 -0500	[thread overview]
Message-ID: <1276783333.7285.6.camel@mulgrave.site> (raw)
In-Reply-To: <4C1A29A5.2070901@redhat.com>

On Thu, 2010-06-17 at 09:56 -0400, Ric Wheeler wrote:
> On 06/17/2010 09:46 AM, Mike Snitzer wrote:
> > On Tue, Mar 2, 2010 at 9:40 AM, Ric Wheeler <rwheeler@redhat.com> wrote:
> >
> >> On 03/02/2010 08:37 AM, Jeff Moyer wrote:
> >>
> >>> Ric Wheeler <rwheeler@redhat.com> writes:
> >>>
> >>>> On 03/01/2010 01:07 PM, Eric Sandeen wrote:
> >>>>
> >>>>> I still need to decide which specific topic to bring/promote, but I'd
> >>>>> certainly
> >>>>> like to attend, as I have in the past.
> >>>>>
> >>>>> Generally would like to talk about...
> >>>>>
> >>>>> Trim/discard plans in filesystems - lately I'm thinking a batch
> >>>>> mechanism may be better than trim-as-you-go (see the sketch after
> >>>>> this quoted message).
> >>>>>
> >>>>> Advancements in generic test infrastructures since last year.
> >>>>>
> >>>>> Could talk a bit about the advancements in proper alignment detection &
> >>>>> setup for storage & filesystems, but that's not a very big topic.
> >>>>>
> >>>>> Jan's suggestion of some writeback sanity sounds good to me too.
> >>>>>
> >>>>> Thanks,
> >>>>> -Eric
> >>>>>
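
For context on Eric's batched-trim point above: the batch mechanism he is
weighing later took shape as the FITRIM ioctl (merged in 2.6.37, after this
thread), which sweeps a filesystem's free space in one pass rather than
trimming on every delete. A minimal userspace sketch, with error handling
kept short:

  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>                 /* FITRIM, struct fstrim_range */

  int main(int argc, char **argv)
  {
          /* fd may point at any file or directory on the filesystem */
          int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);
          struct fstrim_range range = {
                  .start  = 0,
                  .len    = UINT64_MAX,  /* cover the whole filesystem */
                  .minlen = 0,           /* no minimum extent size */
          };

          if (fd < 0 || ioctl(fd, FITRIM, &range) < 0) {
                  perror("FITRIM");
                  return 1;
          }
          /* the kernel writes back how many bytes it actually trimmed */
          printf("trimmed %llu bytes\n", (unsigned long long)range.len);
          close(fd);
          return 0;
  }

The filesystem walks its free-space map and issues a few large discards per
sweep, which is exactly the trade-off being weighed against per-delete trims.
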
> >>>> I think that all of the above are great topics for this time around.
> >>>>
> >>>> Specifically, the alignment work has been interesting in that we have
> >>>> tried to get not just the kernel bits but the entire stack to work
> >>>> properly.
> >>>>
> >>>> Both this and the discard work should still be quite topical if we can
> >>>> get the various storage vendors to evaluate our current upstream bits
> >>>> in time :-)
> >>>>
> >>> As you know, I've been picking away at the discard support from the file
> >>> system all the way down to the storage.  I think I should have some good
> >>> numbers and guidance come time for the workshop.
> >>>
> >>> I am also interested in discussing all things I/O scheduler.
> >>>
> >>> Oh, and I'd like to attend as well.  ;)
> >>>
> >>> -Jeff
> >>>
> >> One topic that I think might be interesting is to talk about how well
> >> we did integrating all of these new features into the stack that cross
> >> IO/FS/scheduler boundaries. In a way, we got ahead of the hardware vendors
> >> with both the discard support and the various topology bits but did manage
> >> to get that supported not just in the kernel but up the tool chain as well.
> >>
> >> Might be interesting to see where we ended up, think about next steps and
> >> if/how we would improve on what we have....
> >>      
> > I would like to attend this year's LSF.
> >
> > I helped implement and coordinate the development of the I/O Topology
> > support through the entire storage stack.  My specific contribution
> > was in DM and LVM2 but I also worked with Martin Petersen and the
> > developers of the layers above the block layer (partition tools,
> > filesystems, virtio+qemu).
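
The topology plumbing Mike describes is visible from userspace through
sysfs, which is where the partition tools and mkfs utilities pick up their
alignment hints. A small sketch that dumps the real queue attributes; the
device name "sda" is only an example:

  #include <stdio.h>

  /* print one attribute from /sys/block/sda/queue/ (exported since 2.6.31) */
  static void show(const char *attr)
  {
          char path[128], buf[32];
          FILE *f;

          snprintf(path, sizeof(path), "/sys/block/sda/queue/%s", attr);
          f = fopen(path, "r");
          if (f && fgets(buf, sizeof(buf), f))
                  printf("%-20s %s", attr, buf);   /* value ends in '\n' */
          if (f)
                  fclose(f);
  }

  int main(void)
  {
          show("logical_block_size");
          show("physical_block_size");
          show("minimum_io_size");      /* preferred minimum request size */
          show("optimal_io_size");      /* preferred streaming I/O size */
          /* per-device misalignment is one level up, in
           * /sys/block/sda/alignment_offset */
          return 0;
  }
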
> >
> > I'm now looking to implement comprehensive discard support for DM.
> > There is a need for various discard support changes in the block layer
> > (and below) so it'd be interesting if we touched on this.  But my hope
> > is that at least some patches will make their way upstream before LSF
> > (for 2.6.36 inclusion) -- e.g.: hch's patches to push a discard's
> > payload allocation down to the LLD (SCSI or ATA).
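
For reference, a discard can already be driven from userspace with the
BLKDISCARD ioctl, in the block layer since 2.6.28; requests like this are
what DM has to learn to split and forward across its targets. A minimal,
destructive sketch (the device node is a placeholder, not a suggestion):

  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>                 /* BLKDISCARD */

  int main(void)
  {
          uint64_t range[2] = { 0, 1024 * 1024 };  /* { offset, length } */
          int fd = open("/dev/sdX", O_WRONLY);     /* placeholder device */

          if (fd < 0 || ioctl(fd, BLKDISCARD, &range) < 0) {
                  perror("BLKDISCARD");
                  return 1;
          }
          close(fd);
          return 0;
  }
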
> >
> > I'll also be prepared to discuss aspects of DM if there is any interest.
> >
> > Mike
> >    
> 
> I think that these are all great topics - getting discard finished for 
> DM will be an important goal for the coming year (especially given the 
> huge surge in SSD devices that really, really can start to use this!!!)

Just a procedural note: since I'm compiling the attendee request list,
replying to someone else's request is not such a good idea, because I went
by the thread heads when I compiled it ...

James



Thread overview: 8+ messages
     [not found] <4B8C027E.2090709@redhat.com>
2010-03-01 18:33 ` [Lsf10-pc] [ATTEND] I'd like to attend ;) Ric Wheeler
2010-03-02  0:15   ` Sorin Faibish
2010-03-02 13:37   ` Jeff Moyer
2010-03-02 13:40     ` Ric Wheeler
2010-06-17 13:46       ` Mike Snitzer
2010-06-17 13:56         ` Ric Wheeler
2010-06-17 14:02           ` James Bottomley [this message]
     [not found] <Pine.LNX.4.64.1004142112430.19850@cobra.newdream.net>
2010-04-15 13:58 ` [Lsf10-pc] [ATTEND] i'd like to attend! Ric Wheeler
