* compression
@ 2007-09-11 23:43 Jordan Mendler
2007-09-12 1:39 ` compression Eric Sandeen
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Jordan Mendler @ 2007-09-11 23:43 UTC (permalink / raw)
To: xfs
Hi all,
I searched the mailing list archive and could not find an answer. We are
currently using XFS on Linux for a 17TB volume used for backups. We are
running out of space, so rather than order another array, I would like to
try to implement filesystem-level compression. Does XFS support any type of
compression? If not, are there any other ways to optimize for more storage
space? We are doing extensive rsyncs as our method of backups, so gzipping
on top of the filesystem is not really an option.
Thanks so much,
Jordan
[[HTML alternate version deleted]]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: compression
2007-09-11 23:43 compression Jordan Mendler
@ 2007-09-12 1:39 ` Eric Sandeen
2007-09-12 3:05 ` compression Jordan Mendler
2007-09-12 17:42 ` compression Josef Sipek
2007-09-15 10:05 ` compression Ralf Gross
2 siblings, 1 reply; 8+ messages in thread
From: Eric Sandeen @ 2007-09-12 1:39 UTC (permalink / raw)
To: Jordan Mendler; +Cc: xfs
Jordan Mendler wrote:
> Hi all,
>
> I searched the mailing list archive and could not find an answer. We are
> currently using XFS on Linux for a 17TB volume used for backups. We are
> running out of space, so rather than order another array, I would like to
> try to implement filesystem-level compression. Does XFS support any type of
> compression? If not, are there any other ways to optimize for more storage
> space? We are doing extensive rsyncs as our method of backups, so gzipping
> on top of the filesystem is not really an option.
>
> Thanks so much,
> Jordan
>
No native compression in xfs... and it's not got a lot of space
overhead, to start with.
If you're keeping multiple copies of things via complete nightly rsync
backups, there are mechanisms that just symlink files which haven't
changed... Or, have you looked into incremental backups via xfsdump?
Dunno if any of that helps, or if you've already thought of such things. :)
-Eric
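[The "just link files which haven't changed" approach Eric mentions can be
sketched with rsync's --link-dest option, which hardlinks unchanged files
against the previous snapshot. The paths, snapshot naming, and "latest"
symlink rotation below are hypothetical, not a prescribed layout:]

```shell
# snapshot SRC DEST NAME -- make a snapshot of SRC under DEST/NAME.
# Files unchanged since the previous snapshot become hardlinks to it,
# so each additional snapshot costs only the changed data.
snapshot() {
    src=$1; dest=$2; name=$3
    if [ -e "$dest/latest" ]; then
        rsync -a --delete --link-dest="$dest/latest" "$src/" "$dest/$name/"
    else
        rsync -a "$src/" "$dest/$name/"
    fi
    # Rotate the "latest" pointer to the snapshot we just made.
    ln -sfn "$dest/$name" "$dest/latest"
}
```

[Restores stay trivial (every snapshot looks like a full tree), and deleting
an old snapshot only frees the blocks no newer snapshot still links to.]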
* Re: compression
2007-09-12 1:39 ` compression Eric Sandeen
@ 2007-09-12 3:05 ` Jordan Mendler
2007-09-12 4:07 ` compression Eric Sandeen
0 siblings, 1 reply; 8+ messages in thread
From: Jordan Mendler @ 2007-09-12 3:05 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
Do you know if there are any plans to implement compression any time in
the somewhat near future?
Thanks, Jordan
On 9/11/07, Eric Sandeen <sandeen@sandeen.net> wrote:
>
> Jordan Mendler wrote:
> > Hi all,
> >
> > I searched the mailing list archive and could not find an answer. We are
> > currently using XFS on Linux for a 17TB volume used for backups. We are
> > running out of space, so rather than order another array, I would like to
> > try to implement filesystem-level compression. Does XFS support any type of
> > compression? If not, are there any other ways to optimize for more storage
> > space? We are doing extensive rsyncs as our method of backups, so gzipping
> > on top of the filesystem is not really an option.
> >
> > Thanks so much,
> > Jordan
> >
>
> No native compression in xfs... and it's not got a lot of space
> overhead, to start with.
>
> If you're keeping multiple copies of things via complete nightly rsync
> backups, there are mechanisms that just symlink files which haven't
> changed... Or, have you looked into incremental backups via xfsdump?
> Dunno if any of that helps, or if you've already thought of such
> things. :)
>
> -Eric
>
* Re: compression
2007-09-12 3:05 ` compression Jordan Mendler
@ 2007-09-12 4:07 ` Eric Sandeen
0 siblings, 0 replies; 8+ messages in thread
From: Eric Sandeen @ 2007-09-12 4:07 UTC (permalink / raw)
To: Jordan Mendler; +Cc: xfs
Jordan Mendler wrote:
> Do you know if there are any plans to implement compression any time
> in the somewhat near future?
I don't think so. Certainly not before your 17T fills up ;-)
-Eric
> Thanks, Jordan
* Re: compression
2007-09-11 23:43 compression Jordan Mendler
2007-09-12 1:39 ` compression Eric Sandeen
@ 2007-09-12 17:42 ` Josef Sipek
2007-09-13 19:39 ` compression Bhagi rathi
2007-09-15 10:05 ` compression Ralf Gross
2 siblings, 1 reply; 8+ messages in thread
From: Josef Sipek @ 2007-09-12 17:42 UTC (permalink / raw)
To: Jordan Mendler; +Cc: xfs
On Tue, Sep 11, 2007 at 04:43:20PM -0700, Jordan Mendler wrote:
> Hi all,
>
> I searched the mailing list archive and could not find an answer. We are
> currently using XFS on Linux for a 17TB volume used for backups. We are
> running out of space, so rather than order another array, I would like to
> try to implement filesystem-level compression. Does XFS support any type of
> compression? If not, are there any other ways to optimize for more storage
> space? We are doing extensive rsyncs as our method of backups, so gzipping
> on top of the filesystem is not really an option.
Implementation-wise, one major thing to keep in mind is that offsets into
the uncompressed copies of files in memory need to be mapped to the
compressed ones. This is rather painful if you want to do things right
(supporting writing as well as reading from files).
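[The offset-mapping problem described above can be illustrated with a toy
block-compressed format, purely hypothetical and not anything XFS implements:
each fixed-size logical block is deflated on its own, and an index maps
logical block numbers to (offset, length) extents in the compressed stream.]

```python
import zlib

BLOCK = 4096  # logical block size (arbitrary choice for the sketch)

def compress_file(data: bytes):
    """Compress data block-by-block; return (payload, index) where
    index[i] = (offset, length) of logical block i inside payload."""
    payload = bytearray()
    index = []
    for off in range(0, len(data), BLOCK):
        comp = zlib.compress(data[off:off + BLOCK])
        index.append((len(payload), len(comp)))
        payload += comp
    return bytes(payload), index

def read_at(payload, index, pos, n):
    """Random read: translate an uncompressed offset into compressed
    extents via the index, decompressing only the blocks touched."""
    out = bytearray()
    while n > 0 and pos < len(index) * BLOCK:
        blk, within = divmod(pos, BLOCK)
        off, length = index[blk]
        block = zlib.decompress(payload[off:off + length])
        take = block[within:within + n]
        out += take
        pos += len(take)
        n -= len(take)
    return bytes(out)
```

[Reads only need the index, but a write that changes a block's compressed
size forces every later extent to move (or the format to keep slack space),
which is the painful part Josef alludes to.]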
As Eric mentioned, you may want to try to eliminate copies of identical
files with symlinks or even hardlinks (just make sure your backup sw is
smart enough to break links when necessary).
Josef 'Jeff' Sipek.
--
The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man.
- George Bernard Shaw
* Re: compression
2007-09-12 17:42 ` compression Josef Sipek
@ 2007-09-13 19:39 ` Bhagi rathi
0 siblings, 0 replies; 8+ messages in thread
From: Bhagi rathi @ 2007-09-13 19:39 UTC (permalink / raw)
To: Josef Sipek; +Cc: Jordan Mendler, xfs
rsync can be smart enough to identify files which haven't been modified
and hardlink them instead of copying the file; I am not sure that the
rsync setup used here does that. Even if hardlinking is already done, I
believe there is still a lot of duplication of data within the same
file: rsync does not eliminate data that already existed in an older
backup, it copies it again. Compressing the data of the same file across
the various backup snapshots can be very powerful, and my guess is that
it can definitely free more than 30% of your space. Note that this is
not filesystem-wide compression, it is compression of the same file as
it exists in the various backups. Restore gets affected, but it should
be easy to tweak given that it can free a lot of space.
-Cheers,
Saradhi.
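[The cross-snapshot compression idea above can be sketched with zlib's
preset-dictionary support: compress this backup's copy of a file against
the previous snapshot's copy, so only the changes cost space. This is an
illustration of the concept, not something rsync or XFS provides:]

```python
import zlib

def compress_against(prev: bytes, cur: bytes) -> bytes:
    """Compress cur using (the tail of) the previous snapshot's copy as
    a preset dictionary; near-identical versions shrink dramatically."""
    zdict = prev[-32768:]  # deflate dictionaries are capped at 32 KiB
    c = zlib.compressobj(zdict=zdict)
    return c.compress(cur) + c.flush()

def decompress_against(prev: bytes, blob: bytes) -> bytes:
    """Restore needs the previous snapshot's copy to rebuild this one."""
    d = zlib.decompressobj(zdict=prev[-32768:])
    return d.decompress(blob) + d.flush()
```

[This is exactly the restore trade-off mentioned above: snapshot N can only
be rebuilt with snapshot N-1's copy of the file at hand.]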
On 9/12/07, Josef Sipek <jsipek@fsl.cs.sunysb.edu> wrote:
>
> On Tue, Sep 11, 2007 at 04:43:20PM -0700, Jordan Mendler wrote:
> > Hi all,
> >
> > I searched the mailing list archive and could not find an answer. We are
> > currently using XFS on Linux for a 17TB volume used for backups. We are
> > running out of space, so rather than order another array, I would like to
> > try to implement filesystem-level compression. Does XFS support any type of
> > compression? If not, are there any other ways to optimize for more storage
> > space? We are doing extensive rsyncs as our method of backups, so gzipping
> > on top of the filesystem is not really an option.
>
> Implementation-wise, one major thing to keep in mind is that offsets into
> the uncompressed copies of files in memory need to be mapped to the
> compressed ones. This is rather painful if you want to do things right
> (supporting writing as well as reading from files).
>
> As Eric mentioned, you may want to try to eliminate copies of identical
> files with symlinks or even hardlinks (just make sure your backup sw is
> smart enough to break links when necessary).
>
> Josef 'Jeff' Sipek.
>
> --
> The reasonable man adapts himself to the world; the unreasonable one
> persists in trying to adapt the world to himself. Therefore all progress
> depends on the unreasonable man.
> - George Bernard Shaw
>
>
>
* Re: compression
2007-09-11 23:43 compression Jordan Mendler
2007-09-12 1:39 ` compression Eric Sandeen
2007-09-12 17:42 ` compression Josef Sipek
@ 2007-09-15 10:05 ` Ralf Gross
2007-09-15 18:58 ` compression Jordan Mendler
2 siblings, 1 reply; 8+ messages in thread
From: Ralf Gross @ 2007-09-15 10:05 UTC (permalink / raw)
To: Jordan Mendler; +Cc: xfs
Jordan Mendler schrieb:
>
> I searched the mailing list archive and could not find an answer. We are
> currently using XFS on Linux for a 17TB volume used for backups. We are
> running out of space, so rather than order another array, I would like to
> try to implement filesystem-level compression. Does XFS support any type of
> compression? If not, are there any other ways to optimize for more storage
> space? We are doing extensive rsyncs as our method of backups, so gzipping
> on top of the filesystem is not really an option.
A very nice tool for backups is backuppc, which stores all backed-up
files in a pool and uses hardlinks to map these files into the real
backup trees. There is only one copy of each file in the pool regardless
of how many clients the file exists on (de-duplication). You can use
backuppc with tar/rsync/smb, and with compression.
Ralf
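[The pool-plus-hardlinks scheme described above can be sketched as a toy
content-addressed store; this shows the mechanism only, not backuppc's
actual pool layout:]

```python
import hashlib
import os

def store(pool: str, backup_dir: str, name: str, data: bytes) -> str:
    """Write a file's content into the pool once (keyed by its hash),
    then hardlink it into the backup tree; duplicates across clients
    or backup runs cost no extra space."""
    digest = hashlib.sha256(data).hexdigest()
    pooled = os.path.join(pool, digest)
    if not os.path.exists(pooled):      # first time this content is seen
        with open(pooled, "wb") as f:
            f.write(data)
    target = os.path.join(backup_dir, name)
    os.link(pooled, target)             # hardlink: same inode as pool copy
    return target
```

[Deleting a backup just unlinks its tree; pool entries whose link count
drops to one can then be garbage-collected.]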
* Re: compression
2007-09-15 10:05 ` compression Ralf Gross
@ 2007-09-15 18:58 ` Jordan Mendler
0 siblings, 0 replies; 8+ messages in thread
From: Jordan Mendler @ 2007-09-15 18:58 UTC (permalink / raw)
To: Ralf Gross; +Cc: xfs
Cool, I will give that a shot. Right now I am just using rsync and
rdiff-backup scripts, but it would be nice to have a frontend in
addition to the command line.
Thanks so much,
Jordan
On 9/15/07, Ralf Gross <Ralf-Lists@ralfgross.de> wrote:
>
> Jordan Mendler schrieb:
> >
> > I searched the mailing list archive and could not find an answer. We are
> > currently using XFS on Linux for a 17TB volume used for backups. We are
> > running out of space, so rather than order another array, I would like to
> > try to implement filesystem-level compression. Does XFS support any type of
> > compression? If not, are there any other ways to optimize for more storage
> > space? We are doing extensive rsyncs as our method of backups, so gzipping
> > on top of the filesystem is not really an option.
>
> A very nice tool for backups is backuppc, which stores all backed-up
> files in a pool and uses hardlinks to map these files into the real
> backup trees. There is only one copy of each file in the pool regardless
> of how many clients the file exists on (de-duplication). You can use
> backuppc with tar/rsync/smb, and with compression.
>
> Ralf
>
end of thread, other threads:[~2007-09-15 18:58 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-09-11 23:43 compression Jordan Mendler
2007-09-12 1:39 ` compression Eric Sandeen
2007-09-12 3:05 ` compression Jordan Mendler
2007-09-12 4:07 ` compression Eric Sandeen
2007-09-12 17:42 ` compression Josef Sipek
2007-09-13 19:39 ` compression Bhagi rathi
2007-09-15 10:05 ` compression Ralf Gross
2007-09-15 18:58 ` compression Jordan Mendler