* 2GB max file size
From: Luca Ferrari @ 2005-03-29 14:50 UTC
  To: linux-admin

Hi,
I think there must be a solution to that 2 GB "ceiling" on the size of
a single file. It would be great to find one, since my backups are
often bigger than 2 GB, so I cannot zip/tar them into a single file.
Any suggestions?

Thanks,
Luca
-- 
Luca Ferrari,
fluca1978@infinito.it


* Re: 2GB max file size
From: Thornton Prime @ 2005-03-29 15:00 UTC
  To: fluca1978; +Cc: linux-admin

On Tue, 29 Mar 2005 16:50:11 +0200, Luca Ferrari <fluca1978@infinito.it> wrote:
> I think there must be a solution to that 2 GB "ceiling" on the size of
> a single file. It would be great to find one, since my backups are
> often bigger than 2 GB, so I cannot zip/tar them into a single file.
> Any suggestions?

That limit was lifted quite some time ago; I can't even recall how
long ago. I regularly create 10 GB to 200 GB files on Linux with
standard tools.

What kernel/glibc/filesystem are you running?
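
If you want a quick check, a small test program along these lines (a
rough sketch, assuming glibc; the file name and error handling are
mine) will tell you whether your setup can get past 2 GB:

    /* Large-file smoke test -- a sketch, not a definitive tool.
     * Compile with large-file support enabled:
     *     gcc -D_FILE_OFFSET_BITS=64 -o bigtest bigtest.c
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <gnu/libc-version.h>   /* glibc-specific version query */

    int main(void)
    {
        printf("glibc version: %s\n", gnu_get_libc_version());

        int fd = open("bigtest.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Seek just past 3 GB and write one byte; without large-file
         * support in the kernel/filesystem/libc this fails (EFBIG or
         * EOVERFLOW) instead of creating a sparse 3 GB file. */
        off_t target = (off_t)3 * 1024 * 1024 * 1024;
        if (lseek(fd, target, SEEK_SET) == (off_t)-1
            || write(fd, "x", 1) != 1) {
            perror("large-file test");
            return 1;
        }

        printf("OK: created a sparse file just over 3 GB\n");
        close(fd);
        unlink("bigtest.tmp");
        return 0;
    }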

thornton


* Re: 2GB max file size
From: Luca Ferrari @ 2005-03-29 15:33 UTC
  To: linux-admin

On Tuesday 29 March 2005 17:06, Carl's cat walking on the keyboard wrote:

> Luca,
>
> tar will handle input files greater than 2 GB.
>
> I use files greater than 2 GB on a Red Hat 8 system with
> a generic 2.4.20 kernel and GNU tar 1.13.25.
>
> I can also use dump/restore for backing up to tape.
>


Actually I'm trying with zip, which fails if the archive grows bigger
than 2 GB. Since I remembered that, due to the i-node structure, Unix
could not handle a file greater than 2 GB, I was wondering whether the
problem was with the filesystem rather than with the program itself.
However, before giving up on zip: is there something I can do on the
filesystem side to handle bigger files?

Luca

-- 
Luca Ferrari,
fluca1978@infinito.it


* Re: 2GB max file size
From: Thornton Prime @ 2005-03-29 16:02 UTC
  To: fluca1978; +Cc: linux-admin

On Tue, 29 Mar 2005 17:33:35 +0200, Luca Ferrari <fluca1978@infinito.it> wrote:
> Actually I'm trying with zip, which fails if the archive grows bigger
> than 2 GB. Since I remembered that, due to the i-node structure, Unix
> could not handle a file greater than 2 GB, I was wondering whether the
> problem was with the filesystem rather than with the program itself.
> However, before giving up on zip: is there something I can do on the
> filesystem side to handle bigger files?

The limitation was lifted quite a while ago.

It could be an old on-disk format of the filesystem that still carries
the limitation, or it could be a limitation in the version of zip that
you use. I use tar, so I'm not sure when (or if) the 2 GB limit was
fixed in zip, though I have a hard time believing it wasn't.

Make sure your reiserfs uses a sufficiently new on-disk format. I
can't remember when reiserfs began supporting files larger than 2 GB,
but anything v3 should certainly be capable, and probably anything v2.
ext3, IIRC, has always supported files larger than 2 GB, so that
support is at least as old as ext3.

thornton


* Re: 2GB max file size
From: Glynn Clements @ 2005-04-02 12:35 UTC
  To: fluca1978; +Cc: linux-admin


Luca Ferrari wrote:

> > tar will handle input files greater than 2 GB.
> >
> > I use files greater than 2 GB on a Red Hat 8 system with
> > a generic 2.4.20 kernel and GNU tar 1.13.25.
> >
> > I can also use dump/restore for backing up to tape.
> 
> Actually I'm trying with zip, which fails if the archive grows bigger
> than 2 GB. Since I remembered that, due to the i-node structure, Unix
> could not handle a file greater than 2 GB, I was wondering whether the
> problem was with the filesystem rather than with the program itself.
> However, before giving up on zip: is there something I can do on the
> filesystem side to handle bigger files?

The issue almost certainly lies with the program, not the filesystem.
The 2 GB limitation was lifted for ext2 so long ago that it's a
distant memory. Certainly, 2.4.20 has no problems with files larger
than 2 GB.

The main problem is that the historical Unix API used a "long" to
represent offsets within files (including the offset of the end of the
file, i.e. its size). On a 32-bit system, a "long" is only 32 bits, so
it can only hold values up to roughly ±2 GB.

A later revision of the API introduced the off_t type to hold file
offsets (and sizes). Because of the amount of historical code which
uses the "long" type, off_t is equivalent to "long" by default. It can
be switched to a 64-bit type at compile time, but you need to ensure
that all code which references the file (including any libraries)
supports large files, which is why it isn't the default.
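
As a minimal sketch of that mechanism (assuming glibc on a 32-bit
system; the program is illustrative, not from any real source), the
define simply has to come before the includes:

    /* Sketch: switch this translation unit to a 64-bit off_t.
     * On 32-bit glibc, without the define, both sizes print as 4. */
    #define _FILE_OFFSET_BITS 64

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        printf("sizeof(long)  = %zu\n", sizeof(long));  /* 4 on 32-bit */
        printf("sizeof(off_t) = %zu\n", sizeof(off_t)); /* 8 with the define */

        /* stat() is transparently mapped to its 64-bit variant, so
         * st_size can report files larger than 2 GB. */
        struct stat st;
        if (argc > 1 && stat(argv[1], &st) == 0)
            printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);

        return 0;
    }

The same define can be passed on the compiler command line
(gcc -D_FILE_OFFSET_BITS=64 ...); every object linked into the program
has to agree on it, which is exactly the point about libraries above.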

The most likely explanation is that your zip program was compiled to
use a 32-bit off_t. If the other programs from your distribution
support large files, that may indicate that the zip program itself
needs code modifications (and not just different compilation options)
before it will support large files. Early versions of the zip file
format cannot store large files in any case, because they only use 32
bits for the size fields.
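
To make that concrete, here is a sketch of the fixed part of a zip
local file header as the classic format lays it out (field names are
mine, not from any real header file):

    #include <stdint.h>

    /* Classic zip local file header, fixed-width fields only;
     * the filename and extra field follow it in the archive. */
    struct zip_local_file_header {
        uint32_t signature;          /* 0x04034b50, "PK\3\4" */
        uint16_t version_needed;
        uint16_t flags;
        uint16_t compression_method;
        uint16_t mod_time;
        uint16_t mod_date;
        uint32_t crc32;
        uint32_t compressed_size;    /* 32 bits -- the hard ceiling */
        uint32_t uncompressed_size;  /* 32 bits */
        uint16_t filename_length;
        uint16_t extra_field_length;
    };

The later Zip64 extension moves the real sizes into a 64-bit extra
field, but a zip build that predates it is stuck at 32 bits no matter
what compilation options were used.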

In any case, if you're backing up to tape, you should be using tar,
cpio or dump rather than zip. The zip format was designed for random
access devices (i.e. disks), not sequential access devices (i.e. 
tapes), whereas tar, cpio and dump were designed for tapes.

To create a zip archive on tape, you first have to create the file on
disk then copy it to tape; similarly, to read a zip archive, you have
to copy it to disk first. With tar, cpio or dump, you can read/write
directly from/to tape.

-- 
Glynn Clements <glynn@gclements.plus.com>

