* Compression filter for Loopback device
@ 2004-07-22 19:27 Lei Yang
2004-07-22 19:44 ` Luiz Fernando N. Capitulino
2004-07-23 11:16 ` Paulo Marques
0 siblings, 2 replies; 9+ messages in thread
From: Lei Yang @ 2004-07-22 19:27 UTC (permalink / raw)
To: linux-kernel
Hi all,
Is there anything like 'losetup', which allows choosing an encryption
algorithm for a loopback device, that could do the same for compression
algorithms? In other words, when data passes through the loopback
device to a real storage device, it would be filtered, with the filter
being compression instead of encryption; when the kernel asks for data
from a storage device that is mounted through such a loopback device, it
would be filtered again -- decompressed.
If there is not a ready-to-use method for this, is there any way I can
implement the idea?
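To make the idea concrete, here is a minimal userspace sketch of the round trip, with plain zlib standing in for the filter (the function names are hypothetical; a real implementation would hook into the loop driver's transfer path instead):

```python
import zlib

def filter_write(data: bytes) -> bytes:
    # Filter applied as data passes through the loopback device to real
    # storage: compression instead of encryption.
    return zlib.compress(data)

def filter_read(stored: bytes) -> bytes:
    # Filter applied when the kernel asks for the data back: decompression.
    return zlib.decompress(stored)

original = b"some file contents, fairly repetitive " * 100
stored = filter_write(original)

assert filter_read(stored) == original   # the round trip is lossless
assert len(stored) < len(original)       # repetitive data really shrinks
```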
TIA! Really appreciate any comments.
Lei
* Re: Compression filter for Loopback device
2004-07-22 19:27 Lei Yang
@ 2004-07-22 19:44 ` Luiz Fernando N. Capitulino
2004-07-23 11:16 ` Paulo Marques
1 sibling, 0 replies; 9+ messages in thread
From: Luiz Fernando N. Capitulino @ 2004-07-22 19:44 UTC (permalink / raw)
To: Lei Yang; +Cc: linux-kernel
Hi Lei,
On Thu, Jul 22, 2004 at 03:27:17PM -0400, Lei Yang wrote:
| Is there anything like 'losetup' that allows choosing encryption
| algorithm for a loopback device that can be used on compression
| algorithms? Or in other words, when the data passes through loopback
| device to a real storage device, it can be filtered and the filter is
| compression instead of encryption; when kernel ask for data in a storage
| device that is mounted to a loopback device with compression, it will be
| filtered again -- decompressed.
|
| If there is not a ready-to-use method for this, is there any way I can
| implement the idea?
|
| TIA! Really appreciate any comments.
Maybe you can try a mix: cloop + cryptloop (or something like
that), but it would be a kludge.
--
Luiz Fernando N. Capitulino
<http://www.telecentros.sp.gov.br>
* Re: Compression filter for Loopback device
2004-07-22 19:27 Lei Yang
2004-07-22 19:44 ` Luiz Fernando N. Capitulino
@ 2004-07-23 11:16 ` Paulo Marques
1 sibling, 0 replies; 9+ messages in thread
From: Paulo Marques @ 2004-07-23 11:16 UTC (permalink / raw)
To: Lei Yang; +Cc: linux-kernel@vger.kernel.org
On Thu, 2004-07-22 at 20:27, Lei Yang wrote:
> Hi all,
>
> Is there anything like 'losetup' that allows choosing encryption
> algorithm for a loopback device that can be used on compression
> algorithms? Or in other words, when the data passes through loopback
> device to a real storage device, it can be filtered and the filter is
> compression instead of encryption; when kernel ask for data in a storage
> device that is mounted to a loopback device with compression, it will be
> filtered again -- decompressed.
>
> If there is not a ready-to-use method for this, is there any way I can
> implement the idea?
There is cloop. The only problem with cloop is that it is read-only.
If you want read-write, there is no solution AFAIK(*).
I did start working on something like that a while ago. I even
registered for a project on sourceforge:
http://sourceforge.net/projects/zloop/
I stopped working on it because:
1 - I didn't have the time
2 - There are some nasty issues with this concept:
- The image file would ideally shrink and grow according to the
achieved compression ratio. In the worst case it would have to grow to
more than the size of the "block device" (if you wrote a bunch of
already compressed files to the device, for instance), because it has to
keep some metadata. This reduces the scenarios where this sort of
compressed loopback device could be used.
- Respecting the sequence of writes to the block device is tricky. This
is important because you have to guarantee that a journaled filesystem
on top of the block device can still assure data integrity. This is worse
than on a normal loopback because you have to make sure the "block
device metadata" is also written at appropriate times. I actually
conceived an algorithm that could accomplish this with little overhead,
but never got around to implementing it.
- The block device doesn't understand anything about files. This is
an advantage because it will compress the filesystem metadata
transparently, but it is bad because it compresses "unused" blocks of
data. This could probably be avoided with a patch I saw floating around
a while ago that zeroed deleted ext2 files. Zeroed blocks didn't occupy
any space at all in my compressed scheme, only metadata (just 2 bytes
per block).
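A sketch of that bookkeeping, as I understand it from the description above (illustrative only, not the actual zloop code): one 2-byte table entry per block, with a reserved value marking an all-zero block that stores no payload:

```python
import zlib

BLOCK = 32 * 1024
ZERO_ENTRY = 0xFFFF  # reserved 2-byte value: "all-zero block, no payload stored"

def store_block(block: bytes):
    # Returns (2-byte metadata entry, payload) for one device block.
    if block.count(0) == len(block):         # block is entirely zeros
        return ZERO_ENTRY, b""               # costs only the table entry
    payload = zlib.compress(block)
    return len(payload), payload             # entry records the compressed size

entry, payload = store_block(b"\x00" * BLOCK)
assert entry == ZERO_ENTRY and payload == b""
entry, payload = store_block(b"some data\x00" * (BLOCK // 10))
assert 0 < entry == len(payload) < 0xFFFF    # fits the 2-byte entry
```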
3 - There didn't seem to be much interest from the community in
something like this.
If interest rises now, I'll probably have the time to resume the project
where I left off.
I did a proof of concept using an NBD server. This way I could test
everything in user space.
With this NBD server I tested the compression ratios that my scheme
could achieve, and they were much better than those achieved by cramfs,
and close to tar.gz ratios. This I wasn't expecting, but it was a nice
surprise :)
Anyway, any feedback, suggestions, ideas on this will be greatly
appreciated.
--
Paulo Marques - www.grupopie.com
"In a world without walls and fences who needs windows and gates?"
(*) There is JFFS2, which you can mount on top of an mtd-block device. I
never tried it personally, because it seemed a bit of a kludge, and it
didn't give the shrink-and-grow effect that I wanted from the compressed
loopback. There is also an old ext2 "compression extension" that seemed
not to have been maintained for a long time, last time I checked.
* Re: Compression filter for Loopback device
@ 2004-07-23 18:20 Phillip Lougher
2004-07-26 12:38 ` Paulo Marques
0 siblings, 1 reply; 9+ messages in thread
From: Phillip Lougher @ 2004-07-23 18:20 UTC (permalink / raw)
To: linux-kernel; +Cc: pmarques
On Fri, 2004-07-23, Paulo Marques wrote:
>
>I did start working on something like that a while ago. I even
>registered for a project on sourceforge:
>
>http://sourceforge.net/projects/zloop/
>
> - The block device doesn't understand anything about files. This is
>an advantage because it will compress the filesystem metadata
>transparently, but it is bad because it compresses "unused" blocks of
>data. This could probably be avoided with a patch I saw floating around
>a while ago that zero'ed delete ext2 files. Zero'ed blocks didn't accupy
>any space at all in my compressed scheme, only metadata (only 2 bytes
>per block).
>
The fact that the block device doesn't understand anything about the
filesystem is a *major* disadvantage. Cloop takes an I/O and seeking
performance hit because it doesn't understand the filesystem, and this
will be far worse for write compression. Every time a block update is
seen by your block layer you'll have to recompress the block, and it is
going to be difficult to cache the block because you're below the block
cache (any writes you see shouldn't be cached). If you use a
compressed block size larger than the block size, you'll also have to
decompress each compressed block to obtain the missing data to
recompress. Obviously Linux I/O scheduling has a large part to play,
and you had better hope to see bursts of writes to consecutive disk blocks.
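The read-modify-write cycle described above can be sketched like this (hypothetical names; it assumes a 32K compression unit receiving 4K updates):

```python
import zlib

UNIT = 32 * 1024   # size of one compression unit on the device
PAGE = 4 * 1024    # size of a typical block-layer update

def apply_update(compressed_unit: bytes, offset: int, page: bytes) -> bytes:
    # Every small write below the block cache forces a full cycle:
    plain = bytearray(zlib.decompress(compressed_unit))  # 1. inflate the whole unit
    plain[offset:offset + len(page)] = page              # 2. patch 4K out of 32K
    return zlib.compress(bytes(plain))                   # 3. deflate the whole unit

unit = zlib.compress(b"\x00" * UNIT)
updated = apply_update(unit, PAGE, b"x" * PAGE)
result = zlib.decompress(updated)
assert result[PAGE:2 * PAGE] == b"x" * PAGE   # the update landed
assert len(result) == UNIT                    # the rest of the unit survived
```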
>I did a proof of concept using a nbd server. This way I could test
>everything in user space.
>
>With this NBD server I tested the compression ratios that my scheme
>could achieve, and they were much better than those achieved by cramfs,
>and close to tar.gz ratios. This I wasn't expecting, but it was a nice
>surprise :)
I'm very surprised you got ratios better than cramfs that were close
to tar.gz. Cramfs is actually quite efficient in its use of metadata;
what lets cramfs down is that it compresses in units of the page size,
or 4K blocks. Cloop/Squashfs/tar.gz use much larger blocks, which obtain
much better compression ratios.
What block size did you compress with, and what compression
algorithm did you use? There is a dramatic performance trade-off here.
If you used blocks larger than 4K, then every time your compressing block
device is presented with a (probably 4K) block update you need to
decompress the larger compression block, which is very slow. If you used 4K
blocks, then I cannot see how you obtained better compression than cramfs.
Phillip
* Re: Compression filter for Loopback device
2004-07-23 18:20 Phillip Lougher
@ 2004-07-26 12:38 ` Paulo Marques
0 siblings, 0 replies; 9+ messages in thread
From: Paulo Marques @ 2004-07-26 12:38 UTC (permalink / raw)
To: Phillip Lougher; +Cc: linux-kernel@vger.kernel.org
On Fri, 2004-07-23 at 19:20, Phillip Lougher wrote:
> On Thu, 2004-07-23 Paulo Marques wrote:
> >
> >I did start working on something like that a while ago. I even
> >registered for a project on sourceforge:
> >
> >http://sourceforge.net/projects/zloop/
> >
> > - The block device doesn't understand anything about files. This is
> >an advantage because it will compress the filesystem metadata
> >transparently, but it is bad because it compresses "unused" blocks of
> >data. This could probably be avoided with a patch I saw floating around
> >a while ago that zero'ed delete ext2 files. Zero'ed blocks didn't accupy
> >any space at all in my compressed scheme, only metadata (only 2 bytes
> >per block).
> >
>
> The fact the block device doesn't understand anything about the
> filesystem is a *major* disadvantage. Cloop has a I/O and seeking
> performance hit because it doesn't understand the filesystem, and this
> will be far worse for write compression. Every time a block update is
> seen by your block layer you'll have to recompress the block, it is
> going to be difficult to cache the block because you're below the block
> cache (any writes you see shouldn't be cached). If you use a larger
> compressed block size than the block size, you'll also have to
> decompress each compressed block to obtain the missing data to
> recompress. Obviously Linux I/O scheduling has a large part to play,
> and you better hope to see bursts of writes to consecutive disk blocks.
Yes, I agree it is a major disadvantage. That is why I listed it as
one of the reasons to drop the project altogether :)
Anyway, my main concern was compression ratio, not performance.
Seek times are very bad for live CD distros, but are not so bad for
flash or RAM media.
> >I did a proof of concept using a nbd server. This way I could test
> >everything in user space.
> >
> >With this NBD server I tested the compression ratios that my scheme
> >could achieve, and they were much better than those achieved by cramfs,
> >and close to tar.gz ratios. This I wasn't expecting, but it was a nice
> >surprise :)
>
> I'm very surprised you got ratios better than CramFS, which were close
> to tar.gz. Cramfs is actually quite efficient in it's use of metadata,
> what lets cramfs down is that it compresses in units of the page size or
> 4K blocks. Cloop/Squashfs/tar.gz use much larger blocks which obtain
> much better compression ratios.
>
> What size blocks did you do your compression and/or what compression
> algorithm did you use? There is a dramatic performance trade-off here.
> If you used larger than 4K blocks every time your compressing block
> device is presented with a (probably 4K) block update, you need to
> decompress your larger compression block, very slow. If you used 4K
> blocks then I cannot see how you obtained better compression than cramfs.
You are absolutely correct. I was using a 32K block size with a 512-byte
"sector size": a 32K block has to compress into an integer number
of 512-byte sectors. Most of the wasted space came from this, but I was
assuming that this would have to work over a real block device, so I
tried as much as possible to keep every read/write request to the
underlying file 512-byte aligned.
The compression algorithm was simply the standard zlib deflate.
But as I said before, my major concern was compression ratio.
I left the block size selectable in mk.zloop, so that I could test
several block sizes and measure compression ratio / performance.
From what I remember, 4K block sizes really hurt compression ratio; 32K
was almost as good as 128K or higher.
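The sector rounding is easy to reproduce with a sketch (plain zlib deflate as above; the data here is just illustrative, not one of my test sets):

```python
import zlib

SECTOR = 512

def stored_size(block: bytes) -> int:
    # On-disk cost of one block: deflate, then round up to whole sectors.
    compressed = len(zlib.compress(block))
    return -(-compressed // SECTOR) * SECTOR   # ceiling to a 512-byte multiple

def device_usage(data: bytes, block_size: int) -> int:
    return sum(stored_size(data[i:i + block_size])
               for i in range(0, len(data), block_size))

data = b"the quick brown fox jumps over the lazy dog\n" * 4096
# Larger blocks give deflate more context and amortize the sector rounding:
assert device_usage(data, 32 * 1024) < device_usage(data, 4 * 1024)
assert device_usage(data, 4 * 1024) % SECTOR == 0
```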
What I would really like to know is whether anyone has real-world
applications for a compression scheme like this, or whether this is just
a waste of time...
--
Paulo Marques - www.grupopie.com
"In a world without walls and fences who needs windows and gates?"
* RE: Compression filter for Loopback device
@ 2004-07-26 12:48 Lei Yang
2004-08-27 10:34 ` BAIN
0 siblings, 1 reply; 9+ messages in thread
From: Lei Yang @ 2004-07-26 12:48 UTC (permalink / raw)
To: pmarques, Phillip Lougher; +Cc: linux-kernel
Hmm, I am a bit surprised to see this...
Since I am the one who posted the question, could anyone please give me
some clue about what is going on? Or something like a summary. Many
thanks!!
Lei
* Re: Compression filter for Loopback device
2004-07-26 12:48 Compression filter for Loopback device Lei Yang
@ 2004-08-27 10:34 ` BAIN
2004-08-27 10:38 ` BAIN
0 siblings, 1 reply; 9+ messages in thread
From: BAIN @ 2004-08-27 10:34 UTC (permalink / raw)
To: Lei Yang; +Cc: pmarques, Phillip Lougher, linux-kernel
Hi people,
I missed this mail for almost a month, but anyway I was also looking
for something similar and failed to find anything (I don't know why
zloop didn't show up in the SourceForge search).
My main reason for looking into something like this was the discussion
on lkml about the necessity of swap.
The idea I had was to mount a swap partition on a compressed block
device implemented in RAM; that way a few of my biggest problems in
current projects would be solved:
1. The project is an embedded project and runs out of RAM quite
frequently. I have a bunch of monitoring stuff in userspace which is
triggered only at moderate intervals and does not need to be in memory
all the time. Swap is what is required here, but unfortunately I have
no backing store for it. (Not all the tasks kick in at one time anyway,
so swap would be just fine.)
2. This also seems to be a good idea on top of IMHO otherwise silly
stuff like this:
http://kerneltrap.org/node/view/3660 [ using RAM as swap : kerneltrap.org ]
3. And according to a discussion on lkml a few months back, the kernel
is supposed to work better if swap is enabled?
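A toy model of that compressed-RAM backing store (my own illustration, not code from any of the projects mentioned): pages swapped out are deflated into memory and inflated again on swap-in:

```python
import zlib

PAGE = 4096

class CompressedRamSwap:
    # Toy swap backing store: pages live deflated in RAM instead of on disk.
    def __init__(self):
        self.slots = {}

    def swap_out(self, slot: int, page: bytes) -> None:
        assert len(page) == PAGE
        self.slots[slot] = zlib.compress(page)   # keep only the deflated page

    def swap_in(self, slot: int) -> bytes:
        return zlib.decompress(self.slots.pop(slot))

swap = CompressedRamSwap()
page = b"idle-task data " * 273 + b"\x00"        # exactly one 4096-byte page
swap.swap_out(7, page)
assert len(swap.slots[7]) < PAGE                 # deflated page costs less RAM
assert swap.swap_in(7) == page                   # intact again on swap-in
```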
Any progress any of you have made so far?
I was kind of alone doing this, so it is going very slowly, but things
will speed up if I am backed up :).
BAIN
On Mon, 26 Jul 2004 08:48:21 -0400, Lei Yang <leiyang@nec-labs.com> wrote:
> Hmm, I am a bit surprised to see this...
> Since I am the one who posted the question, could anyone pls give me
> some clue of what is going on? Or something like a summary. Many many
> thanks!!
>
> Lei
* Re: Compression filter for Loopback device
2004-08-27 10:34 ` BAIN
@ 2004-08-27 10:38 ` BAIN
[not found] ` <412F3210.3030506@nec-labs.com>
0 siblings, 1 reply; 9+ messages in thread
From: BAIN @ 2004-08-27 10:38 UTC (permalink / raw)
To: Lei Yang; +Cc: pmarques, Phillip Lougher, linux-kernel
Hi, one thing I missed mentioning:
I do know about the linuxcc project,
http://linuxcc.sf.net
but swap on a compressed block device in RAM is several degrees less
intrusive on the kernel.
BAIN
On Fri, 27 Aug 2004 16:04:33 +0530, BAIN <bainonline@gmail.com> wrote:
> Hi people,
>
> I missed this mail for almost a month, but anyway I was also looking
> for something similar and failed to find anything (I don't know why
> zloop didn't show up in the SourceForge search).
* Re: Compression filter for Loopback device
[not found] ` <412F3210.3030506@nec-labs.com>
@ 2004-08-28 7:57 ` BAIN
0 siblings, 0 replies; 9+ messages in thread
From: BAIN @ 2004-08-28 7:57 UTC (permalink / raw)
To: linux-kernel
On Fri, 27 Aug 2004 09:07:28 -0400, Lei Yang <leiyang@nec-labs.com> wrote:
> First of all, link is invalid:)
> > i do know about linuxcc project
> > http://linuxcc.sf.net
Oops, this should have been:
http://linuxcompressed.sf.net
sorry,
BAIN