linux-btrfs.vger.kernel.org archive mirror
* Workaround for hardlink count problem?
@ 2012-09-08 16:56 Marc MERLIN
  2012-09-10  9:12 ` Martin Steigerwald
  0 siblings, 1 reply; 7+ messages in thread
From: Marc MERLIN @ 2012-09-08 16:56 UTC (permalink / raw)
  To: linux-btrfs; +Cc: mfasheh

I read the discussions on hardlinks, and saw that there was a proposed patch
(although I'm not sure whether it's due in 3.6, or whether I can apply it
to my 3.5.3 tree).

I was migrating a backup disk to a new btrfs disk, and the backup used a lot of
hardlinks to collapse identical files and cut down on inode count and disk space.

Then, I started seeing:

cp: cannot create hard link `../dshelf3/backup/saroumane/20080319/var/lib/dpkg/info/libaspell15.postrm' to `../dshelf3/backup/moremagic/oldinstall/var/lib/dpkg/info/libncurses5.postrm': Too many links
cp: cannot create hard link `../dshelf3/backup/saroumane/20080319/var/lib/dpkg/info/libxp6.postrm' to `../dshelf3/backup/moremagic/oldinstall/var/lib/dpkg/info/libncurses5.postrm': Too many links
cp: cannot create symbolic link `../dshelf3/backup/saroumane/20020317_oldload/usr/share/doc/menu/examples/system.fvwmrc': File name too long
cp: cannot create hard link `../dshelf3/backup/saroumane/20061218/var/lib/dpkg/info/libxxf86vm1.postrm' to `../dshelf3/backup/moremagic/oldinstall/var/lib/dpkg/info/libncurses5.postrm': Too many links
cp: cannot create hard link `../dshelf3/backup/saroumane/20061218/var/lib/dpkg/info/libxxf86dga1.postrm' to `../dshelf3/backup/moremagic/oldinstall/var/lib/dpkg/info/libncurses5.postrm': Too many links
cp: cannot create hard link `../dshelf3/backup/saroumane/20061218/var/lib/dpkg/info/libavc1394-0.postrm' to `../dshelf3/backup/moremagic/oldinstall/var/lib/dpkg/info/libncurses5.postrm': Too many links

The 'File name too long' one is interesting in its own right, but more generally
I'm trying to find a userspace workaround: unlink files that go beyond the
hardlink count that btrfs can support for now.

Has someone come up with a clean way to work around the 'Too many links' error,
turning the hardlink into a file copy only when that error happens?
(That is, when copying an entire tree with millions of files.)

I realize I could parse the errors and pipe them into some crafty shell script
to do this, but if there is a smarter ready-made solution, I'm all ears :)
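For what it's worth, here is one possible sketch of the "fall back to a copy"
idea, not something from this thread: the `Too many links` message from cp
corresponds to errno EMLINK, so a small helper (the name `copy_or_link` is my
own) can try a hardlink first and only copy when the link-count limit is hit.

```python
import errno
import os
import shutil

def copy_or_link(src, dst):
    """Try to hardlink src to dst; fall back to a real copy when the
    filesystem's per-inode link limit is hit (EMLINK, 'Too many links')."""
    try:
        os.link(src, dst)
        return "linked"
    except OSError as e:
        if e.errno != errno.EMLINK:
            raise  # some other failure; don't paper over it
        shutil.copy2(src, dst)  # copies data and mtime/permission metadata
        return "copied"
```

Walking the source tree with os.walk and calling this per file would mimic
`cp -al`, degrading gracefully to a plain copy only for the inodes that have
already reached btrfs's current link limit.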

Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  



Thread overview: 7+ messages
2012-09-08 16:56 Workaround for hardlink count problem? Marc MERLIN
2012-09-10  9:12 ` Martin Steigerwald
2012-09-10  9:21   ` Fajar A. Nugraha
2012-09-10 23:09     ` Martin Steigerwald
2012-09-10 23:38       ` Jan Engelhardt
2012-09-11  9:16         ` Martin Steigerwald
2012-09-11 14:20         ` Arne Jansen
