* Shrink UBIFS
@ 2017-09-28 11:43 UTC
From: Ryan Meulenkamp
To: linux-mtd@lists.infradead.org

Hi,

I'm planning to write an ioctl for shrinking a UBIFS so that I can resize the
volume it is on and create another volume; this is essential for our
upgrade/migration flow. Do you have any advice for me? The hard part is that
LEBs that would fall outside the new size have to be moved inside it. From
what I read, the garbage-collection code could be used to accomplish this,
but it does not really let you choose which LEBs end up where.

Thanks in advance!

Ryan
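(For illustration only: a hypothetical shrink-request interface might look
like the sketch below. The UBIFS_IOC_SHRINK name, the structure layout and
the ioctl magic number are invented for this example and are not part of the
real UBIFS API.)

/*
 * Hypothetical interface sketch -- not part of the real UBIFS ioctl set.
 * The caller asks for a new filesystem size in LEBs; the filesystem would
 * first have to migrate all live data out of the LEBs beyond that boundary
 * before the underlying UBI volume could actually be resized.
 */
#include <linux/ioctl.h>
#include <linux/types.h>

struct ubifs_shrink_req {
	__u32 new_leb_count;	/* desired filesystem size, in LEBs */
	__u32 flags;		/* reserved for future use, must be 0 */
};

/* 'o' and 0x42 are arbitrary values chosen for this sketch only. */
#define UBIFS_IOC_SHRINK	_IOW('o', 0x42, struct ubifs_shrink_req)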
* Re: Shrink UBIFS
@ 2017-09-30 18:09 UTC
From: Richard Weinberger
To: Ryan Meulenkamp
Cc: linux-mtd@lists.infradead.org

Ryan,

On Thu, Sep 28, 2017 at 1:43 PM, Ryan Meulenkamp
<Ryan.Meulenkamp@nedap.com> wrote:
> Hi,
>
> I'm planning to write an ioctl for shrinking a UBIFS so that I can resize
> the volume it is on and create another volume; this is essential for our
> upgrade/migration flow. Do you have any advice for me? The hard part is
> that LEBs that would fall outside the new size have to be moved inside it.
> From what I read, the garbage-collection code could be used to accomplish
> this, but it does not really let you choose which LEBs end up where.
>
> Thanks in advance!

So, you want to implement _online_ shrinking?
That is much more complicated; think of power-cuts.
I strongly suggest thinking about offline shrinking first.

You are right, the garbage collector might be useful: you could modify it
to move blocks away from the to-be-removed LEBs.
But again, do you *really* need online shrinking?

-- 
Thanks,
//richard
* Re: Shrink UBIFS
@ 2017-10-02 8:27 UTC
From: Artem Bityutskiy
To: Richard Weinberger, Ryan Meulenkamp
Cc: linux-mtd@lists.infradead.org

On Sat, 2017-09-30 at 20:09 +0200, Richard Weinberger wrote:
> So, you want to implement _online_ shrinking?
> That is much more complicated; think of power-cuts.
> I strongly suggest thinking about offline shrinking first.
>
> You are right, the garbage collector might be useful: you could modify it
> to move blocks away from the to-be-removed LEBs.
> But again, do you *really* need online shrinking?

Yeah, offline would be way easier.

But still, just thinking aloud about online shrinking...

Suppose we have a UBIFS volume which looks like this:

FDDDEFDFDFFFFFDDDFDFFEEEDFDDDFDFDFDFDFDDDDDDFFFFFFDDDDDDDDDDDFFDFF

F - full eraseblock
D - eraseblock with dirty space
E - empty eraseblock

To shrink, we need to turn it into something like this:

FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFDDDDEEEEEEEEEEEEEEEEEEEE

Currently GC selects the dirtiest eraseblock as the victim for cleaning,
because this is the most efficient strategy. But you could introduce a
special 'shrinking mode', where you tell GC to try hard to pick victims
from the end, and probably also to try hard not to re-use the empty
eraseblocks at the end. Then, while in this mode, GC is run from a
background thread until there are enough empty eraseblocks at the end.

It is possible, of course, that GC stops making progress before enough
eraseblocks at the end are empty, which would mean that further shrinking
is impossible.

Just thoughts.
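(To make the 'shrinking mode' idea concrete, here is a rough, standalone
sketch of such a victim-selection policy. The types and helper names are
invented for this illustration and do not match the actual UBIFS GC
implementation, which also has to deal with space accounting, the journal
and the index.)

/*
 * Standalone illustration of a "shrinking mode" GC victim policy.
 * None of these types or names exist in the real UBIFS code base.
 */
#include <stdbool.h>
#include <stddef.h>

struct leb_info {
	int lnum;	/* logical eraseblock number */
	int dirty;	/* bytes of obsolete (dirty) data in this LEB */
	bool empty;	/* completely erased, nothing to move */
};

/*
 * Normal mode: pick the dirtiest non-empty LEB anywhere (most efficient).
 * Shrinking mode: only consider LEBs at or beyond the new end of the
 * volume, so that live data migrates toward the front. Full LEBs in the
 * tail (dirty == 0) must be evacuated too, hence best_dirty starts at -1.
 */
static int pick_gc_victim(const struct leb_info *lebs, size_t count,
			  bool shrinking, int new_leb_count)
{
	int best = -1;
	int best_dirty = -1;

	for (size_t i = 0; i < count; i++) {
		const struct leb_info *l = &lebs[i];

		if (l->empty)
			continue;
		if (shrinking && l->lnum < new_leb_count)
			continue;	/* keep evacuating the tail only */
		if (l->dirty > best_dirty) {
			best_dirty = l->dirty;
			best = l->lnum;
		}
	}

	return best;	/* -1 means the tail region is already clean */
}

As Artem notes, the allocator side would need a matching rule: while in
shrinking mode, new writes and GC destinations should avoid the tail
region, otherwise the freshly emptied eraseblocks get reused right away.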
[parent not found: <1507119835666.14660@nedap.com>]
* Re: Shrink UBIFS
@ 2017-10-04 12:28 UTC
From: Richard Weinberger
To: Ryan Meulenkamp, linux-mtd@lists.infradead.org
Cc: dedekind1@gmail.com

On Wednesday, 4 October 2017, 14:23:56 CEST, Ryan Meulenkamp wrote:
> So, I got stuck on the following: after performing the resize action,
> I get a nice stream of errors telling me that the node type is bad
> (255 but expected <1 to 9>). I suspect this is because the block was
> erased but UBIFS did not expect it to be. Is this a problem with garbage
> collection of the index nodes? I don't understand how index nodes are
> garbage collected, or where they end up.

How can we know? :-)
But yes, it seems like a UBIFS structure points to an erased region.

Thanks,
//richard
-- 
sigma star gmbh - Eduard-Bodem-Gasse 6 - 6020 Innsbruck - Austria
ATU66964118 - FN 374287y
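(Background for the 255 value: erased flash reads back as all 0xFF bytes, so
any header field parsed from an erased region, including the node type,
comes out as 255. A minimal illustration of that kind of range check
follows; it is not the actual UBIFS validation code.)

/*
 * Illustration only -- not the real UBIFS node check. A "node" read from
 * an erased LEB consists of 0xFF bytes, so its type field is 255, which
 * falls outside the valid range and triggers a bad-node-type error.
 */
#include <stdbool.h>
#include <stdint.h>

struct common_header {		/* simplified stand-in for a node header */
	uint8_t node_type;
};

#define MIN_VALID_NODE_TYPE 1	/* bounds quoted in the error message above */
#define MAX_VALID_NODE_TYPE 9

static bool node_type_ok(const struct common_header *ch)
{
	return ch->node_type >= MIN_VALID_NODE_TYPE &&
	       ch->node_type <= MAX_VALID_NODE_TYPE;
}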
[parent not found: <1508397182265.19514@nedap.com>]
* Re: Shrink UBIFS
@ 2017-11-06 21:27 UTC
From: Richard Weinberger
To: Ryan Meulenkamp
Cc: dedekind1@gmail.com, linux-mtd@lists.infradead.org

Ryan,

On Thursday, 19 October 2017, 09:13:03 CET, Ryan Meulenkamp wrote:
> Hi,
>
> I've been working on something else for a while, but this is the status
> of the resizing work so far:

A diff is not really a good status report. ;)

Can you please outline what works so far and what does not?
Is there some point where you need technical input from us?

Thanks,
//richard
-- 
sigma star gmbh - Eduard-Bodem-Gasse 6 - 6020 Innsbruck - Austria
ATU66964118 - FN 374287y
* Re: Shrink UBIFS
@ 2017-11-07 7:06 UTC
From: Artem Bityutskiy
To: Richard Weinberger, Ryan Meulenkamp
Cc: linux-mtd@lists.infradead.org

On Mon, 2017-11-06 at 22:27 +0100, Richard Weinberger wrote:
> A diff is not really a good status report. ;)
>
> Can you please outline what works so far and what does not?
> Is there some point where you need technical input from us?

Yeah, it would be helpful if you could invest some time in writing a good
overview: what the goal is, what the strategy for achieving it is, what
has been tried, what worked and what did not, what the main obstacles are,
what has been achieved, what the trade-offs are, and so on.