* zsmalloc/lzo compressibility vs entropy
@ 2013-03-27 21:42 Dan Magenheimer
From: Dan Magenheimer @ 2013-03-27 21:42 UTC (permalink / raw)
  To: Seth Jennings, Konrad Wilk, Minchan Kim, Bob Liu, Robert Jennings,
	Nitin Gupta, Wanpeng Li, Andrew Morton, Mel Gorman
  Cc: linux-mm, linux-kernel

This might be obvious to those of you who are better
mathematicians than I, but I ran some experiments
to confirm the relationship between entropy and compressibility
and thought I should report the results to the list.

Using the LZO code in the kernel via zsmalloc and some
hacks in zswap, I measured the compression of pages
generated by get_random_bytes and then of pages
where half the page is generated by get_random_bytes()
and the other half-page is zero-filled.
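
In case anyone wants to reproduce the basic measurement
outside the kernel, here is a rough userspace sketch using
liblzo2 rather than the in-kernel lzo1x_1_compress()/zswap/
zsmalloc path; it fills a 4096-byte "page" from /dev/urandom,
compresses it with LZO1X-1, and reports the compressed size
(zsize):

/*
 * Rough userspace approximation, using liblzo2 rather than the
 * in-kernel lzo1x_1_compress()/zswap/zsmalloc path: fill a
 * 4096-byte "page" from /dev/urandom, compress it with LZO1X-1,
 * and report the compressed size (zsize).  Link with -llzo2.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lzo/lzo1x.h>

#define PAGE_SIZE 4096

static size_t zsize_of(const unsigned char *page)
{
        /* scratch memory required by the LZO1X-1 compressor */
        static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
        /* worst case: output may be slightly larger than the input */
        unsigned char dst[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
        lzo_uint dst_len = sizeof(dst);

        if (lzo1x_1_compress(page, PAGE_SIZE, dst, &dst_len, wrkmem) != LZO_E_OK) {
                fprintf(stderr, "lzo1x_1_compress failed\n");
                exit(1);
        }
        return dst_len;
}

int main(void)
{
        unsigned char page[PAGE_SIZE];
        FILE *rnd = fopen("/dev/urandom", "rb");

        if (!rnd || lzo_init() != LZO_E_OK)
                return 1;
        if (fread(page, 1, PAGE_SIZE, rnd) != PAGE_SIZE)
                return 1;

        /* fully random page: expect zsize a bit above PAGE_SIZE */
        printf("fully random: zsize = %zu\n", zsize_of(page));

        /* half-random page: first half overwritten with zeroes */
        memset(page, 0, PAGE_SIZE / 2);
        printf("half random:  zsize = %zu\n", zsize_of(page));

        fclose(rnd);
        return 0;
}

(A single page per run is enough to see the effect; the
numbers below are means over many pages.)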

For a fully random page, one would expect the number
of zeroes and ones generated to be equal (highest
entropy), and that proved true:  the mean number of
one-bits in the fully random page was 16384, half of
the 32768 bits in an x86 page (PAGE_SIZE=4096 bytes *
8 bits/byte), with a stddev of 93 (sample size >
500000).  For this sample of pages, zsize had a mean
of 4116 and a stddev of 16.
So for fully random pages, LZO compression results
in "negative" compression... the size of the compressed
page is slightly larger than a page.
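
As a quick sanity check, the stddev of the one-bit count
is about what independent fair bits would give:

  n               = 4096 bytes * 8 bits/byte = 32768 bits
  expected stddev = sqrt(n * 1/2 * 1/2) = sqrt(8192) ~= 90.5

which is close to the measured 93.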

For a "half random page" -- a fully random page with
the first half of the page overwritten with zeros --
zsize mean is 2077 with a stddev of 6.  So a half-random
page compresses by about a factor of 2.  (Just to
be sure, I reran the experiment with the first half
of the page overwritten with ones instead of zeroes,
and the result was approximately the same.)

For extra credit, I ran a "quarter random page"...
zsize mean is 1052 with a stddev of 45.

For more extra credit, I tried a fully-random page
with every OTHER byte forced to zero, so half the
bytes are random and half are zero.  The result:
mean zsize is 3841 with a stddev of 33.  Then I
tried a fully-random page with every other PAIR of
bytes forced to zero.  The result: zsize mean is 4029
with a stddev of 67. (Worse!)
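
For anyone replaying these layouts with the sketch above,
the variants amount to the following helpers ('page' starts
out fully random; the quarter-random page is taken here as
three quarters zero and one quarter random, which is how
the 1052-byte zsize reads):

/* e.g. (1, 2) -> half-random page, (3, 4) -> quarter-random page */
static void zero_leading_fraction(unsigned char *page, size_t num, size_t den)
{
        memset(page, 0, PAGE_SIZE * num / den);
}

/*
 * Zero 'run' bytes, keep 'run' random bytes, and repeat:
 * run == 1 -> every other byte zero
 * run == 2 -> every other pair of bytes zero
 */
static void zero_interleaved(unsigned char *page, size_t run)
{
        size_t i, j;

        for (i = 0; i < PAGE_SIZE; i += 2 * run)
                for (j = 0; j < run && i + j < PAGE_SIZE; j++)
                        page[i + j] = 0;
}

Calling zero_interleaved(page, 1) or zero_interleaved(page, 2)
before zsize_of() reproduces the two interleaved cases above.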

So LZO page compression works better when there are many
more zeroes than ones in a page (or vice-versa), but it
works best when long runs of bytes are identical.  That
fits how LZO operates: it is a byte-oriented LZ77-style
compressor, so it gains from repeated multi-byte sequences,
and single zero bytes interleaved with random bytes give
it little to match.

All this still leaves open the question of what the
page-entropy (and zsize distribution) will be over a
large set of pages and over a large set of workloads
AND across different classes of data (e.g. frontswap
pages vs cleancache pages), but at least we have
some theory to guide us.

