From: Nikola Ciprich <nikola.ciprich@linuxbox.cz>
To: Eric Sandeen <esandeen@redhat.com>
Cc: Eric Sandeen <sandeen@sandeen.net>,
linux-xfs@vger.kernel.org,
Nikola Ciprich <nikola.ciprich@linuxbox.cz>
Subject: Re: XFS / xfs_repair - problem reading very large sparse files on very large filesystem
Date: Fri, 5 Nov 2021 17:19:47 +0100 [thread overview]
Message-ID: <20211105161947.GK32555@pcnci.linuxbox.cz> (raw)
In-Reply-To: <48920430-e48b-0531-2627-0efee9845a1c@redhat.com>
>
> ok, thanks for the clarification.
no problem... in the meantime, xfs_bmap finished as well; the
resulting output is 1.5 GB, showing a total of 25354643 groups :-O
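(As an aside, the extent count can be pulled out of a saved xfs_bmap dump with a one-liner. A sketch only: the file name and block ranges below are made up, and it assumes the usual `N: [startoff..endoff]: startblock..endblock` line format with holes reported as `hole`:)

```shell
# Sample of xfs_bmap-style output, saved to a file (contents are illustrative)
cat > bmap.txt <<'EOF'
/mnt/data/stream.bin:
 0: [0..7]: 96..103
 1: [8..15]: hole
 2: [16..31]: 200..215
EOF
# Count allocated extents: numbered lines whose last field is not "hole"
awk '$1 ~ /^[0-9]+:$/ && $NF != "hole" { n++ } END { print n }' bmap.txt
```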
>
> Though I've never heard of streaming video writes that weren't sequential ...
> have you actually observed that via strace or whatnot?
those are streams from many cameras, somehow multiplexed by the processing
software. Unfortunately, the guy I communicate with, who is responsible for it,
does not know many details.
>
> What might be happening is that if you are streaming multiple files into a single
> directory at the same time, it competes for the allocator, and they will interleave.
>
> XFS has an allocator mode called "filestreams" which was designed just for this
> (video ingest).
thanks for the tip, I'll check that!
Anyway, for now I'd rather fully preallocate the files; it takes a lot of time,
but it should be the safest approach until we know what exactly is wrong. I'll
also avoid creating such huge filesystems, as they lead to more trouble (like
needing huge amounts of RAM for fs repair).
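(For what it's worth, full preallocation ahead of the streaming writes would look roughly like this. The file name and the 16M size are purely illustrative; on XFS, `xfs_io -c 'falloc 0 16m' file` does the same thing:)

```shell
# Preallocate the file's full final size before any streaming writes land
# in it, so the allocator can hand out large contiguous chunks up front.
fallocate -l 16M camera-stream.bin

# The file now has its final size, with blocks allocated but unwritten
stat -c %s camera-stream.bin
```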
>
> If you set the "S" attribute on the target directory, IIRC it should enable this
> mode. You can do that with the xfs_io "chattr" command.
>
> Might be worth a test, or wait for dchinner to chime in on whether this is a
> reasonable suggestion...
OK
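(For reference, the suggestion above would look something like the following; the ingest directory path is made up, and it needs an XFS mount with xfsprogs installed:)

```shell
# Enable the filestreams allocator hint on the directory the cameras
# stream into; new files created under it inherit the behaviour.
xfs_io -c 'chattr +S' /srv/video/ingest

# Verify: the flag list printed by lsattr should include "S"
xfs_io -c 'lsattr' /srv/video/ingest
```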
BR
nik
>
> -Eric
>
> >btw blocked read from file I sent backtrace seems to have started finally (after
> >maybe an hour) and runs 8-20MB/s
>
--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava
tel.: +420 591 166 214
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis@linuxbox.cz
-------------------------------------
Thread overview: 10+ messages
2021-11-04 9:09 XFS / xfs_repair - problem reading very large sparse files on very large filesystem Nikola Ciprich
2021-11-04 16:20 ` Eric Sandeen
2021-11-05 14:13 ` Nikola Ciprich
2021-11-05 14:17 ` Nikola Ciprich
2021-11-05 14:56 ` Eric Sandeen
2021-11-05 15:59 ` Nikola Ciprich
2021-11-05 16:11 ` Eric Sandeen
2021-11-05 16:19 ` Nikola Ciprich [this message]
2021-11-07 22:25 ` Dave Chinner
2021-11-04 23:04 ` Dave Chinner