Date: Wed, 27 Mar 2013 14:42:08 -0700 (PDT)
From: Dan Magenheimer
Subject: zsmalloc/lzo compressibility vs entropy
To: Seth Jennings, Konrad Wilk, Minchan Kim, Bob Liu, Robert Jennings,
    Nitin Gupta, Wanpeng Li, Andrew Morton, Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org

This might be obvious to those of you who are better mathematicians than
I am, but I ran some experiments to confirm the relationship between
entropy and compressibility and thought I should report the results to
the list.

Using the LZO code in the kernel via zsmalloc and some hacks in zswap, I
measured the compression of pages generated by get_random_bytes(), and
then of pages where half the page is generated by get_random_bytes() and
the other half-page is zero-filled.

For a fully random page, one would expect the number of zeroes and ones
generated to be equal (highest entropy), and that proved true: the mean
number of one-bits in the fully random page was 16384 (x86, so
PAGE_SIZE=4096 * 8 bits/byte = 32768 bits) with a stddev of 93 (sample
size > 500000). For this sample of pages, zsize had a mean of 4116 and a
stddev of 16. So for fully random pages, LZO compression results in
"negative" compression... the size of the compressed page is slightly
larger than a page.

For a "half random page" -- a fully random page with the first half of
the page overwritten with zeroes -- the zsize mean is 2077 with a stddev
of 6. So a half-random page compresses by about a factor of 2. (Just to
be sure, I reran the experiment with the first half of the page
overwritten with ones instead of zeroes, and the result was
approximately the same.)

For extra credit, I ran a "quarter random page"... the zsize mean is
1052 with a stddev of 45.

For more extra credit, I tried a fully random page with every OTHER byte
forced to zero, so half the bytes are random and half are zero. The
result: the zsize mean is 3841 with a stddev of 33. Then I tried a fully
random page with every other PAIR of bytes forced to zero. The result:
the zsize mean is 4029 with a stddev of 67. (Worse!)

So LZO page compression works better when there are many more zeroes
than ones in a page (or vice versa), but it works best when long runs of
bytes are the same; scattering an equal number of zero bytes through the
page helps much less, since LZO matches repeated byte sequences rather
than individual bits.

All this still leaves open the question of what the page entropy (and
zsize distribution) will be over a large set of pages, over a large set
of workloads, AND across different classes of data (e.g. frontswap pages
vs. cleancache pages), but at least we have some theory to guide us.
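
In case anyone wants to poke at this outside the kernel, below is a
minimal userspace sketch of the "half random page" measurement. It is an
approximation, not the harness used for the numbers above: it assumes
liblzo2 is installed and uses its lzo1x_1_compress() (the same LZO1X-1
algorithm as the kernel's lib/lzo), with /dev/urandom standing in for
get_random_bytes(). Build with something like: gcc -O2 zsize_test.c
-llzo2 -lm (the file name is arbitrary).

/*
 * Minimal userspace sketch of the "half random page" measurement.
 * Assumes liblzo2; /dev/urandom stands in for get_random_bytes().
 */
#include <lzo/lzo1x.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    10000		/* far fewer than the >500000-page sample */

int main(void)
{
	static unsigned char page[PAGE_SIZE];
	/* LZO can expand incompressible input, so oversize the output buffer */
	static unsigned char dst[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	/* properly aligned work memory, as the liblzo2 docs recommend */
	static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
				  / sizeof(lzo_align_t)];
	double sum = 0.0, sumsq = 0.0;
	FILE *urandom = fopen("/dev/urandom", "rb");
	int i;

	if (lzo_init() != LZO_E_OK || !urandom)
		return 1;

	for (i = 0; i < NPAGES; i++) {
		lzo_uint zsize = sizeof(dst);

		if (fread(page, 1, PAGE_SIZE, urandom) != PAGE_SIZE)
			return 1;
		/* "half random page": zero-fill the first half of the page */
		memset(page, 0, PAGE_SIZE / 2);

		if (lzo1x_1_compress(page, PAGE_SIZE, dst, &zsize, wrkmem)
		    != LZO_E_OK)
			return 1;

		sum += zsize;
		sumsq += (double)zsize * zsize;
	}
	fclose(urandom);

	printf("zsize mean %.1f stddev %.1f\n", sum / NPAGES,
	       sqrt(sumsq / NPAGES - (sum / NPAGES) * (sum / NPAGES)));
	return 0;
}

Replacing the memset() with a loop that zeroes every other byte (or
every other pair of bytes) reproduces the interleaved variants described
above; the in-kernel numbers should be in the same ballpark, since the
compressor is the same algorithm.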