linux-raid.vger.kernel.org archive mirror
From: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>
To: NeilBrown <neilb@suse.de>
Cc: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>,
	linux-raid@vger.kernel.org
Subject: Re: stripe cache question
Date: Sun, 27 Feb 2011 12:37:11 +0100	[thread overview]
Message-ID: <20110227113710.GA16286@lazy.lzy> (raw)
In-Reply-To: <20110227154350.4448d731@notabene.brown>

> > > Ideally the cache should be automatically sized based on demand and memory
> > > size - with maybe just a tunable to select between "use as much memory as you
> > > need - within reason" versus "use as little memory as you can manage with".
> > > 
> > > But that requires thought and design and code and .... it just never seemed
> > > like a priority.
> > 
> > You're a bit contradicting your philosophy of
> > "let's do the smart things in user space"... :-)
> > 
> > IMHO, if really necessary, it could be enough to
> > have this "upper limit" available in sysfs.
> > 
> > Then user space can decide what to do with it.
> > 
> > For example, at boot the amount of memory is checked
> > and the upper limit set.
> > I see a duplication here, maybe better just remove
> > the upper limit and let user space deal with that.
> 
> 
> Maybe....  I still feel I want some sort of built-in protection...

As I wrote, I think a second sysfs entry, holding the upper
limit, could be enough.
It allows flexibility and, to some degree, protection.
Breaking the limit would require two _coordinated_ accesses
to sysfs, which is unlikely to happen at random.
That is, at boot /sys/block/mdX/md/stripe_cache_limit would
be 32768 and the "cache_size" would be 256.
Whoever wants to play with the cache size will be able to
raise it up to 32768. Beyond that, the first entry has to be
changed to a higher value (its minimum should be the current
cache_size).

This is, of course, a duplication, but it enforces a certain
process (two accesses), thus giving some degree of protection.
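To make the two-access rule concrete, here is a user-space sketch of
the check I have in mind. Note that "stripe_cache_limit" is my
proposed, hypothetical entry; only stripe_cache_size exists in the
kernel today, so plain shell variables stand in for the sysfs files:

```shell
# Proposed rule: a new cache size is accepted only if it does not
# exceed the separate limit entry; going beyond it needs two writes.
limit=32768        # boot default of the proposed stripe_cache_limit
cache_size=256     # boot default of stripe_cache_size

set_cache_size() {
    if [ "$1" -le "$limit" ]; then
        cache_size=$1
        echo "cache_size = $cache_size"
    else
        echo "rejected: $1 > limit ($limit)"
        return 1
    fi
}

set_cache_size 4096       # fine: below the limit
set_cache_size 65536      # rejected: a single access is not enough
limit=65536               # first coordinated access: raise the limit
set_cache_size 65536      # second access: now accepted
```

A random or buggy writer is unlikely to hit both files in the right
order, which is exactly the protection I mean.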

I guess, but you're the expert, that this should be easier
than other solutions.
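The boot-time policy I mentioned (check the amount of memory, then
set the limit) could look like this in user space. The 1% RAM budget
and the cost of one page per member disk per cache entry are my
assumptions for the sketch, not something from this thread:

```shell
# Assumed policy: let the stripe cache use at most 1% of RAM.
# Each stripe_cache_size entry costs roughly PAGE_SIZE bytes per
# member disk (plus some bookkeeping, ignored here).
page_size=4096
ndisks=4
mem_kb=8388608     # example: 8 GiB; at boot, read MemTotal from /proc/meminfo

budget=$(( mem_kb * 1024 / 100 ))            # 1% of RAM, in bytes
limit=$(( budget / (page_size * ndisks) ))   # maximum cache entries
echo "stripe_cache_limit = $limit"
```

An init script would then write the computed value into the (proposed)
stripe_cache_limit entry once, and never touch it again.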

> Maybe if I did all the allocations with "__GFP_WAIT" clear so that it would
> only allocate memory that is easily available.  It wouldn't be a hard
> guarantee against running out, but it might help..

Again, I think you're over-designing it.

BTW, I hope that is unswappable memory, right?

> Maybe you could try removing the limit and see what actually happens when
> you set a ridiculously large size.??

Yes and no. The home PC has a RAID-10f2 and the work PC has
a RAID-5, but I do not want to play with the kernel on the latter.
I guess using loop devices would not be meaningful.

As soon as I manage to build the RAID-6 NAS I could give it
a try, but there is no "schedule" for that right now.

bye,

-- 

piergiorgio

Thread overview: 6+ messages
2011-02-24 21:06 stripe cache question Piergiorgio Sartor
2011-02-25  3:51 ` NeilBrown
2011-02-26 10:21   ` Piergiorgio Sartor
2011-02-27  4:43     ` NeilBrown
2011-02-27 11:37       ` Piergiorgio Sartor [this message]
2011-03-06 20:08       ` Piergiorgio Sartor
