From: Dave Hansen <dave@linux.vnet.ibm.com>
To: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
gregkh@suse.de, devel@driverdev.osuosl.org, ngupta@vflare.org,
cascardo@holoscopio.com, rdunlap@xenotime.net,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] staging: zcache: xcfmalloc support
Date: Thu, 01 Sep 2011 16:44:23 -0700
Message-ID: <1314920663.6393.18.camel@nimitz>
In-Reply-To: <4E6000C4.7030007@linux.vnet.ibm.com>
On Thu, 2011-09-01 at 17:01 -0500, Seth Jennings wrote:
> I was seeing n as the number of allocations. Since
> XCF_MAX_BLOCKS_PER_ALLOC and XCF_NUM_FREELISTS are constant (i.e.
> not increasing with the number of allocations) wouldn't it be
> O(1)?
It's the difference between your implementation and the _algorithm_
you've chosen. If someone doubled XCF_MAX_BLOCKS_PER_ALLOC and
XCF_NUM_FREELISTS, you'd see the time quadruple, not stay constant.
That's a property of the _algorithm_.
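To make that concrete, here's a rough sketch (not the actual xcfmalloc
code; the freelist helpers are made up) of the kind of allocation loop
I mean.  The per-call work is bounded by the two constants, so it looks
O(1) in the number of allocations, but double both constants and the
loop nest does four times the work:

/*
 * Hypothetical sketch only -- not the real xcfmalloc code.  The point
 * is the loop bounds: work per call is
 * O(XCF_MAX_BLOCKS_PER_ALLOC * XCF_NUM_FREELISTS), i.e. constant in
 * the number of allocations, but it grows with those constants.
 */
#include <stddef.h>

#define XCF_MAX_BLOCKS_PER_ALLOC  8
#define XCF_NUM_FREELISTS        64

/* invented stand-ins for the allocator's freelist state */
extern size_t freelist_largest_block(int list);
/* takes at most 'want' bytes from 'list', returns how much it took */
extern size_t freelist_take(int list, size_t want);

static int xcf_alloc_sketch(size_t size)
{
	size_t remaining = size;
	int blocks, list;

	for (blocks = 0;
	     blocks < XCF_MAX_BLOCKS_PER_ALLOC && remaining; blocks++) {
		/* index into the freelists, largest class first */
		for (list = XCF_NUM_FREELISTS - 1; list >= 0; list--) {
			if (freelist_largest_block(list)) {
				remaining -= freelist_take(list, remaining);
				break;
			}
		}
	}
	return remaining ? -1 : 0;
}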
> > xcfmalloc's big compromise is that it doesn't do any searching or
> > fitting. It might needlessly split larger blocks when two small ones
> > would have worked, for instance.
>
> Splitting a larger block is the last option. I might not
> be understanding you correctly, but find_remove_block() does try to
> find the optimal block to use, which is "searching and fitting" in my
> mind.
I don't want to split hairs on the wording. It's obvious, though, that
xcfmalloc does not find _optimal_ fits. It also doesn't use the
smallest-possible blocks to fit the allocation. Consider if you wanted a
1000 byte allocation (with 10 100-byte buckets and no metadata for
simplicity), and had 4 blocks:
900
500,500,500
I think it would split a 500 into 100,400, and leave the 400:
500,500
400
It took the largest (most valuable) block, and split a 500 block when it
didn't have to. The reason it does this is that it doesn't
_search_. It just indexes and guesses. That's *fast*, but it errs on
the side of speed rather than being optimal. That's OK, we do it all
the time, but it *is* a compromise. We should at least be thinking of
the cases when this doesn't perform well.
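To spell out the arithmetic, here's a toy user-space model of that
"take the largest, then split" behaviour.  It is not xcfmalloc, just a
replay of the example above, but it shows the leftover 400 where an
exact 500+500 fit was sitting right there:

/*
 * Toy model of the example above -- not xcfmalloc itself.  A greedy
 * "largest block first" pick splits a 500 when two 500s would have
 * fit the 1000-byte request exactly.
 */
#include <stdio.h>

int main(void)
{
	int blocks[] = { 900, 500, 500, 500 };
	int nblocks = 4, want = 1000;
	int i;

	/* greedy: always take the largest remaining block */
	while (want > 0) {
		int largest = 0;

		for (i = 1; i < nblocks; i++)
			if (blocks[i] > blocks[largest])
				largest = i;

		if (blocks[largest] <= want) {
			want -= blocks[largest];
			blocks[largest] = 0;		/* used whole block */
		} else {
			blocks[largest] -= want;	/* split, leave remainder */
			want = 0;
		}
	}

	/* prints "0 400 500 500": the 900 is gone and a 500 got split,
	 * even though 500+500 was an exact fit. */
	for (i = 0; i < nblocks; i++)
		printf("%d ", blocks[i]);
	printf("\n");
	return 0;
}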
-- Dave