public inbox for linux-kernel@vger.kernel.org
* FatELF patches...
@ 2009-10-30  2:19 Ryan C. Gordon
  2009-10-30  5:42 ` Rayson Ho
  2009-11-01 19:20 ` David Hagood
  0 siblings, 2 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-10-30  2:19 UTC (permalink / raw)
  To: linux-kernel


Having heard a bunch of commentary, and made a bunch of changes based on 
some really good feedback, here are my hopefully-final FatELF patches. I'm 
pretty happy with the final results. The only changes over the last 
posting is that I cleaned up all the checkpatch.pl complaints (whitespace 
etc).

What's the best way to get this moving towards the mainline? It's not 
clear to me who the binfmt_elf maintainer would be. Is this something that 
should go to Andrew Morton for the -mm tree?

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-10-30  2:19 Ryan C. Gordon
@ 2009-10-30  5:42 ` Rayson Ho
  2009-10-30 14:54   ` Ryan C. Gordon
  2009-11-01 19:20 ` David Hagood
  1 sibling, 1 reply; 47+ messages in thread
From: Rayson Ho @ 2009-10-30  5:42 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

On Thu, Oct 29, 2009 at 9:19 PM, Ryan C. Gordon <icculus@icculus.org> wrote:
> What's the best way to get this moving towards the mainline? It's not
> clear to me who the binfmt_elf maintainer would be. Is this something that
> should go to Andrew Morton for the -mm tree?

Can we first find out whether it is safe from a legal point of view??
After the SCO v. IBM lawsuit, we should be way more careful.

Like it or not, Apple invented universal binaries in 1993, and so far
we have not been able to find any prior art...  If we integrate something
that infringes Apple's patent, then Apple could block all Linux
distributions and devices from shipping.

Rayson



>
> --ryan.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-10-30  5:42 ` Rayson Ho
@ 2009-10-30 14:54   ` Ryan C. Gordon
  0 siblings, 0 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-10-30 14:54 UTC (permalink / raw)
  To: Rayson Ho; +Cc: linux-kernel


> Can we first find out whether it is safe from a legal point of view??
> After the SCO v. IBM lawsuit, we should be way more careful.

Does anyone have a spare patent lawyer? I'm not against changing my patch 
to work around a patent, but not knowing _how_ to change it, or whether it 
needs changing at all, is maddening.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-10-30  2:19 Ryan C. Gordon
  2009-10-30  5:42 ` Rayson Ho
@ 2009-11-01 19:20 ` David Hagood
  2009-11-01 20:28   ` Måns Rullgård
                     ` (2 more replies)
  1 sibling, 3 replies; 47+ messages in thread
From: David Hagood @ 2009-11-01 19:20 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
> Having heard a bunch of commentary, and made a bunch of changes based on 
> some really good feedback, here are my hopefully-final FatELF patches.

I hope it's not too late for a request for consideration: if we start
having fat binaries, could one of the "binaries" be one of the "not
quite compiled code" formats like Architecture Neutral Distribution
Format (ANDF), such that, given a fat binary which does NOT support a
given CPU, you could at least in theory process the ANDF section to
create the needed target binary? Bonus points for being able to then
append the newly created section to the file.

That way you could have a binary that supported some "common" subset of
CPUs (e.g. x86,x86-64,PPC,ARM) but still run on the "not common"
processors (Alpha, MIPS, Sparc) - it would just take a bit more time to
start.

As an embedded systems guy who is looking to have to support multiple
CPU types, this is really very interesting to me.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 19:20 ` David Hagood
@ 2009-11-01 20:28   ` Måns Rullgård
  2009-11-01 20:59     ` Ryan C. Gordon
  2009-11-01 20:40   ` Ryan C. Gordon
  2009-11-10 10:04   ` Enrico Weigelt
  2 siblings, 1 reply; 47+ messages in thread
From: Måns Rullgård @ 2009-11-01 20:28 UTC (permalink / raw)
  To: linux-kernel

David Hagood <david.hagood@gmail.com> writes:

> On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
>> Having heard a bunch of commentary, and made a bunch of changes based on 
>> some really good feedback, here are my hopefully-final FatELF patches.
>
> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

Am I the only one who sees this as nothing but bloat for its own sake?
Did I miss a massive drop in intelligence of Linux users, causing them
to no longer be capable of picking the correct file themselves?

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

As an embedded systems guy, I'm more concerned about precious flash
space going to waste than about some hypothetical convenience.

-- 
Måns Rullgård
mans@mansr.com


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 19:20 ` David Hagood
  2009-11-01 20:28   ` Måns Rullgård
@ 2009-11-01 20:40   ` Ryan C. Gordon
  2009-11-10 10:04   ` Enrico Weigelt
  2 siblings, 0 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-01 20:40 UTC (permalink / raw)
  To: David Hagood; +Cc: linux-kernel


> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

It's not a goal of mine, but I suppose you could have an ELF OSABI for it.

I don't think it changes the FatELF kernel patch at all. I don't know much 
about ANDF, but you'd probably just want to set the ELF "interpreter" to 
something other than ld.so and do this all in userspace, and maybe add a 
change to elf_check_arch() to approve ANDF binaries...or something.

To me, ANDF is interesting in an academic sense, but not enough to spend 
effort on it. YMMV.  :)

--ryan.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 20:28   ` Måns Rullgård
@ 2009-11-01 20:59     ` Ryan C. Gordon
  2009-11-01 21:15       ` Måns Rullgård
                         ` (2 more replies)
  0 siblings, 3 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-01 20:59 UTC (permalink / raw)
  To: Måns Rullgård; +Cc: linux-kernel


> Am I the only one who sees this as nothing but bloat for its own sake?

I posted a fairly large list of benefits here:  http://icculus.org/fatelf/

Some are more far-fetched than others, I will grant. Also, I suspect most 
people will find one benefit and ten things they don't care about, but 
that benefit is different for different people. I'm confident that the 
benefits far outweigh the size of the kernel patch.

> Did I miss a massive drop in intelligence of Linux users, causing them
> to no longer be capable of picking the correct file themselves?

Also known as "market saturation."   :)

(But really, there are benefits beyond helping dumb people, even if 
helping dumb people wasn't a worthwhile goal in itself.)

> As an embedded systems guy, I'm more concerned about precious flash
> space going to waste than about some hypothetical convenience.

I wouldn't imagine this is the target audience for FatELF. For embedded 
devices, just use the same ELF files you've always used.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 20:59     ` Ryan C. Gordon
@ 2009-11-01 21:15       ` Måns Rullgård
  2009-11-01 21:35         ` Ryan C. Gordon
  2009-11-01 22:08         ` Rayson Ho
  2009-11-02  0:01       ` Alan Cox
  2009-11-02 16:11       ` Chris Adams
  2 siblings, 2 replies; 47+ messages in thread
From: Måns Rullgård @ 2009-11-01 21:15 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> writes:

>> Am I the only one who sees this as nothing but bloat for its own sake?
>
> I posted a fairly large list of benefits here:  http://icculus.org/fatelf/

I've read the list, and I can't find anything I agree with.  Honestly.

> Some are more far-fetched than others, I will grant. Also, I suspect most 
> people will find one benefit and ten things they don't care about, but 
> that benefit is different for different people. I'm confident that the 
> benefits far outweigh the size of the kernel patch.

It's not the size of the kernel patch I'm worried about.  What worries
me is the disk space needed when *all* my executables and libraries
are suddenly 3, 4, or 5 times the size they need to be.

There is also the issue of speed to launch these things.  It *has* to
be slower than executing a native file directly.

>> Did I miss a massive drop in intelligence of Linux users, causing them
>> to no longer be capable of picking the correct file themselves?
>
> Also known as "market saturation."   :)
>
> (But really, there are benefits beyond helping dumb people, even if 
> helping dumb people wasn't a worthwhile goal in itself.)

It's far too easy to use computers already.  That's the reason for the
spam problem.

Besides, clueless users would be installing a distro, which could
easily download the correct packages automatically.  In fact, that is
what they already do.  The bootable installation media would still
need to be distributed separately, since the boot formats differ
vastly between architectures.  It is not possible to create a CD/DVD
that is bootable on multiple system types (with a few exceptions).

>> As an embedded systems guy, I'm more concerned about precious flash
>> space going to waste than about some hypothetical convenience.
>
> I wouldn't imagine this is the target audience for FatELF. For embedded 
> devices, just use the same ELF files you've always used.

Of course I will.  The question is, will everybody else?  I'm seeing
enough bloat in the embedded world as it is without handing out new
ways to make it even easier.

-- 
Måns Rullgård
mans@mansr.com

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 21:15       ` Måns Rullgård
@ 2009-11-01 21:35         ` Ryan C. Gordon
  2009-11-02  4:58           ` Valdis.Kletnieks
  2009-11-01 22:08         ` Rayson Ho
  1 sibling, 1 reply; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-01 21:35 UTC (permalink / raw)
  To: Måns Rullgård; +Cc: linux-kernel


> It's not the size of the kernel patch I'm worried about.  What worries
> me is the disk space needed when *all* my executables and libraries
> are suddenly 3, 4, or 5 times the size they need to be.

Then don't make FatELF files with 5 binaries in it. Or don't make FatELF 
files at all.

I glued two full Ubuntu installs together as a proof of concept, but I 
think if Ubuntu did this as a distribution-wide policy, then people would 
probably choose a different distribution.

Then again, I hope Ubuntu uses FatELF on a handful of binaries, and 
removes the /lib64 and /lib32 directories.

> There is also the issue of speed to launch these things.  It *has* to
> be slower than executing a native file directly.

In that there will be one extra read of 128 bytes, yes, but I'm not sure 
that's a measurable performance hit. For regular ELF files, the overhead 
is approximately one extra branch instruction. Considering that most files 
won't be FatELF, that seems like an acceptable cost.

> It's far too easy to use computers already.  That's the reason for the
> spam problem.

Clearly that's going to remain as a philosophical difference between us, 
so I won't waste your time trying to dissuade you.

--ryan.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 21:15       ` Måns Rullgård
  2009-11-01 21:35         ` Ryan C. Gordon
@ 2009-11-01 22:08         ` Rayson Ho
  2009-11-02  1:17           ` Ryan C. Gordon
  1 sibling, 1 reply; 47+ messages in thread
From: Rayson Ho @ 2009-11-01 22:08 UTC (permalink / raw)
  To: Måns Rullgård, Ryan C. Gordon, linux-kernel

2009/11/1 Måns Rullgård <mans@mansr.com>:
> I've read the list, and I can't find anything I agree with.  Honestly.

+1.

Adding code that might bring lawsuits to Linux developers,
distributors, and users is a BIG disadvantage.

And besides the legal issues, the first point is already wrong:

"Given enough disc space, there's no reason you couldn't have one DVD
.iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

The boot loader is different on different systems, and in fact
differs with different firmware. A single DVD that can boot on
different hardware platforms might not be an easy thing to do.

Also, why not build the logic for picking which binary to install
into the installer?? That way, users don't need to waste half their
disk space on this FatELF thing.

IMO, the biggest problem users have is not which hardware binary to
download, but the incompatibility of different Linux kernels and glibc
(the API/ABI).

Rayson

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 20:59     ` Ryan C. Gordon
  2009-11-01 21:15       ` Måns Rullgård
@ 2009-11-02  0:01       ` Alan Cox
  2009-11-02  2:21         ` Ryan C. Gordon
  2009-11-02 17:52         ` Ryan C. Gordon
  2009-11-02 16:11       ` Chris Adams
  2 siblings, 2 replies; 47+ messages in thread
From: Alan Cox @ 2009-11-02  0:01 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

Let's go down the list of "benefits":

- Separate downloads
	- Doesn't work. The network usage would increase dramatically
	  pulling all sorts of unneeded crap.
	- Already solved by having a packaging system (in fact FatELF is
	  basically obsoleted by packaging tools)

- Separate lib, lib32, lib64
	- So you have one file with 3 files in it rather than three files
	  with one file in them. Directories were invented for a reason
	- Makes updates bigger
	- Stops users only having 32bit libs for some packages

- Third party packagers no longer have to publish multiple rpm/deb etc
	- By vastly increasing download size
	- By making updates vastly bigger
	- Assumes data files are not dependent on binary (often not true)
	- And is irrelevant really because 90% or more of the cost is
	  testing

- You no longer need to use shell scripts and flaky logic to pick the
  right binary ...
	- Since the 1990s we've used package managers to do that instead.
	  I just type "yum install bzflag", the rest is done for me.

- The ELF OSABI for your system changes someday?
	- We already handle that

- Ship a single shared library that provides bindings for a scripting
  language and not have to worry about whether the scripting language
  itself is built for the same architecture as your bindings. 
	- Except if they don't overlap it won't run

- Ship web browser plugins that work out of the box with multiple
  platforms.
	- yum install just works, and there is a search path in firefox
	  etc

- Ship kernel drivers for multiple processors in one file.
	- Not useful see separate downloads

- Transition to a new architecture in incremental steps. 
	- IFF the CPU supports both old and new
	- and we can already do that

- Support 64-bit and 32-bit compatibility binaries in one file. 
	- Not useful as we've already seen

- No more ia32 compatibility libraries! Even if your distro
  doesn't make a complete set of FatELF binaries available, they can
  still provide it for the handful of packages you need for 99% of 32-bit
  apps you want to run on a 64-bit system. 

	- Argument against FatELF - why waste the disk space if it's rare?

- Have a CPU that can handle different byte orders? Ship one binary that
  satisfies all configurations!

	- Variant of the distribution "advantage" - same problem - it's
	  better to have two files, it's all about testing anyway

- Ship one file that works across Linux and FreeBSD (without a platform
  compatibility layer on either of them). 

	- Ditto

- One hard drive partition can be booted on different machines with
  different CPU architectures, for development and experimentation. Same
  root file system, different kernel and CPU architecture. 

	- Now we are getting desperate.

- Prepare your app on a USB stick for sneakernet, know it'll work on
  whatever Linux box you are likely to plug it into.

	- No I don't, because of the dependencies, architecture ordering
	  of data files, lack of testing on each platform and the fact that
	  architecture isn't sufficient to define a platform

- Prepare your app on a network share, know it will work with all
  the workstations on your LAN. 

	- Variant of the distribution idea, again better to have multiple
	  files for updating and management, need to deal with
	  dependencies etc. Waste of storage space.
	- We have search paths, multiple mount points etc.

So why exactly do we want FatELF? It was obsoleted in the early 1990s
when architecture handling was introduced into package managers.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 22:08         ` Rayson Ho
@ 2009-11-02  1:17           ` Ryan C. Gordon
  2009-11-02  3:27             ` Rayson Ho
  0 siblings, 1 reply; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02  1:17 UTC (permalink / raw)
  To: Rayson Ho; +Cc: Måns Rullgård, linux-kernel


> Adding code that might bring lawsuits to Linux developers,
> distributors, users is a BIG disadvantage.

I'm tracking down a lawyer to discuss the issue. I'm surprised there 
aren't a few hanging around here, honestly. I sent a request in to the 
SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to 
pay a lawyer for a few hours of her time.

If it's a big deal, we'll figure out what to do from there. But let's not 
talk about the sky falling until we get to that point, please.

> "Given enough disc space, there's no reason you couldn't have one DVD
> .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

I've had about a million people point out the boot loader thing. There's 
an x86/amd64 forest if you can see past the MIPS trees.

Still, I said there were different points that were more compelling for 
different individuals. I don't think this is the most compelling argument 
on that page, and I think there's a value in talking about theoretical 
benefits in addition to practical ones. Theoretical ones become practical 
the moment someone decides to roll out a company-internal distribution 
that works on all the workstations inside IBM or Google or whatever...even 
if Fedora would turn their nose up at the idea for a general-purpose 
release.

> IMO, the biggest problem users get is not with which hardware binary
> to download, but the incompatibly of different Linux kernels and glibc
> (the API/ABI).

These are concerns, too, but the kernel has been, in my experience, very 
good at binary compatibility with user space back as far as I can 
remember. glibc has had some painful progress, but since NPTL stabilized a 
long time ago, even this hasn't been bad at all.

Certainly one has to be careful--I would even use the word diligent--to 
maintain binary compatibility, but this was much more painful for 
application developers a decade ago.

At least, that's been my experience.

--ryan.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  0:01       ` Alan Cox
@ 2009-11-02  2:21         ` Ryan C. Gordon
  2009-11-02  6:17           ` Julien BLACHE
                             ` (3 more replies)
  2009-11-02 17:52         ` Ryan C. Gordon
  1 sibling, 4 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02  2:21 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel


> So why exactly do we want FatELF? It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I'm not minimizing your other points by trimming down to one quote. Some 
of it I already covered, but mostly I suspect I'm talking way too much, so 
I'll spare everyone a little. I'm happy to address your other points if 
you like, though, even the one where you said I was being desperate.  :)

Most of your points are "package managers solve this problem" but they 
simply do not solve all of them.

Package managers are a _fantastic_ invention. They are a killer feature 
over other operating systems, including ones people pay way too much money 
to use. That being said, there are lots of places where using a package 
manager doesn't make sense: experimental software that might have an 
audience but isn't ready for wide adoption, software that isn't 
appropriate for an apt/yum repository, software that distros refuse to 
package but is still perfectly useful, closed-source software, and 
software that wants to work between distros that don't have 
otherwise-compatible rpm/debs (or perhaps no package manager at all).

I'm certain I'm about to get a flood of replies that say "you can make a 
cross-distro-compatible RPM if you just follow these steps" but that 
completely misses the point. Not all software comes from yum, or even from 
an .rpm, even if most of it _should_. This isn't about replacing or 
competing with apt-get or yum.

I'm certain if we made a Venn diagram, there would be an overlap. But 
FatELF solves different problems than package managers, and in the case of 
ia32 compatibility packages, it helps the package manager solve its 
problems better.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  1:17           ` Ryan C. Gordon
@ 2009-11-02  3:27             ` Rayson Ho
  0 siblings, 0 replies; 47+ messages in thread
From: Rayson Ho @ 2009-11-02  3:27 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

On Sun, Nov 1, 2009 at 8:17 PM, Ryan C. Gordon <icculus@icculus.org> wrote:
> I'm tracking down a lawyer to discuss the issue. I'm surprised there
> aren't a few hanging around here, honestly. I sent a request in to the
> SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to
> pay a lawyer for a few hours of her time.

Good!! And thanks :)

And is the lawyer specialized in patent law??


> I've had about a million people point out the boot loader thing. There's
> an x86/amd64 forest if you can see past the MIPS trees.

If it's x86 vs. AMD64, then the installer can already do most of the
work, and it can ask the user to insert the right 2nd/3rd/etc CD/DVD.


> Theoretical ones become practical
> the moment someone decides to roll out a company-internal distribution
> that works on all the workstations inside IBM or Google or whatever...even
> if Fedora would turn their nose up at the idea for a general-purpose
> release.

Don't you think that taking a CD/DVD to each workstation and starting
the installation or upgrade is so old school??

Software updates inside those companies are done over the network,
and it does not matter whether the DVD can handle all the
architectures or not.

And the idea of a general-purpose release might not work. As 90% of
the users are on a single architecture (I count AMD64 as x86 with
"some" extensions...), we won't get enough benefit to justify having
the extra code in the kernel and in userspace. Most shipped commercial
binaries will be x86 anyway -- and as Alan stated, the packaging
system is already doing most of the work for us (I don't
recall providing anything except the package name when I do apt-get).

For embedded systems, taking away all the fat matters more than
shipping a single app.


> These are concerns, too, but the kernel has been, in my experience, very
> good at binary compatibility with user space back as far as I can
> remember. glibc has had some painful progress, but since NPTL stabilized a
> long time ago, even this hasn't been bad at all.
>
> Certainly one has to be careful--I would even use the word diligent--to
> maintain binary compatibility, but this was much more painful for
> application developers a decade ago.

The kernel part refers to kernel modules.

But yes, binary compatibility was a real pain when I "really" (played
with it in 1995, didn't really like it at that time) started using
Linux in 1997. However, I think the installer/package manager took out
most of the burden.

Rayson



>
> At least, that's been my experience.
>
> --ryan.
>
>
>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 21:35         ` Ryan C. Gordon
@ 2009-11-02  4:58           ` Valdis.Kletnieks
  2009-11-02 15:14             ` Ryan C. Gordon
  0 siblings, 1 reply; 47+ messages in thread
From: Valdis.Kletnieks @ 2009-11-02  4:58 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel


On Sun, 01 Nov 2009 16:35:05 EST, "Ryan C. Gordon" said:

> I glued two full Ubuntu installs together as a proof of concept, but I 
> think if Ubuntu did this as a distribution-wide policy, then people would 
> probably choose a different distribution.

Hmm.. so let's see - people compiling stuff for themselves won't use this
feature.  And if a distro uses it, users would probably go to a different
distro.

That's a bad sign right there...

> Then again, I hope Ubuntu uses FatELF on a handful of binaries, and 
> removes the /lib64 and /lib32 directories.

Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
are using FatELF - as long as there are any binaries doing things The Old Way,
you need to keep the supporting binaries around.

> > There is also the issue of speed to launch these things.  It *has* to
> > be slower than executing a native file directly.

> In that there will be one extra read of 128 bytes, yes, but I'm not sure 
> that's a measurable performance hit. For regular ELF files, the overhead 
> is approximately one extra branch instruction. Considering that most files 
> won't be FatELF, that seems like an acceptable cost.

Don't forget you take that hit once for each shared library involved.  Plus
I'm not sure if there are hidden gotchas lurking in there (is there code that
assumes that if executable code is mmap'ed, it's only done so in one arch?
Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
in both 32- and 64-bit modes?)


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  2:21         ` Ryan C. Gordon
@ 2009-11-02  6:17           ` Julien BLACHE
  2009-11-02 18:18             ` Ryan C. Gordon
  2009-11-02  6:27           ` David Miller
                             ` (2 subsequent siblings)
  3 siblings, 1 reply; 47+ messages in thread
From: Julien BLACHE @ 2009-11-02  6:17 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> wrote:

Hi,

With my Debian Developer hat on...

> Package managers are a _fantastic_ invention. They are a killer
> feature over other operating systems, including ones people pay way
> too much money to use. That being said, there are lots of places where
> using a package manager doesn't make sense:

> experimental software that might have an audience but isn't ready for
> wide adoption

That usually ships as sources or prebuilt binaries in a tarball - target
/opt and voila! For a bigger audience you'll see a lot of experimental
stuff that gets packaged (even in quick'n'dirty mode).

> software that isn't appropriate for an apt/yum repository

Just create a repository for the damn thing if you want to distribute it
that way. There's no "appropriate / not appropriate" that applies here.

> software that distros refuse to package but is still perfectly useful

Look at what happens today. A lot of that gets packaged by third
parties, and more often than not they involve distribution
maintainers. (See debian-multimedia, PLF for Mandriva, ...)

> closed-source software

Why do we even care? Besides, commercial companies can just stop sitting
on their hands and start distributing real packages. It's no different
from rolling out a Windows Installer or Innosetup. It's packaging.

> and software that wants to work between distros that don't have 
> otherwise-compatible rpm/debs (or perhaps no package manager at all).

Tarball, /opt, static build.


And, about the /lib, /lib32, /lib64 situation on Debian and Debian-derived
systems, the solution to that is multiarch, and it's being worked
on. It's a lot better and cleaner than the fat binary kludge.

JB.

-- 
Julien BLACHE                                   <http://www.jblache.org> 
<jb@jblache.org>                                  GPG KeyID 0xF5D65169

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  2:21         ` Ryan C. Gordon
  2009-11-02  6:17           ` Julien BLACHE
@ 2009-11-02  6:27           ` David Miller
  2009-11-02 15:32             ` Ryan C. Gordon
  2009-11-02  9:16           ` Alan Cox
  2009-11-02 15:40           ` Diego Calleja
  3 siblings, 1 reply; 47+ messages in thread
From: David Miller @ 2009-11-02  6:27 UTC (permalink / raw)
  To: icculus; +Cc: alan, mans, linux-kernel

From: "Ryan C. Gordon" <icculus@icculus.org>
Date: Sun, 1 Nov 2009 21:21:47 -0500 (EST)

> That being said, there are lots of places where using a package 
> manager doesn't make sense:

Yeah like maybe, just maybe, in an embedded system, where increasing
space costs the way FatELF does makes even less sense.

I think Alan's arguments against FatELF were the most comprehensive
and detailed, and I haven't seen them refuted very well, if at all.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  2:21         ` Ryan C. Gordon
  2009-11-02  6:17           ` Julien BLACHE
  2009-11-02  6:27           ` David Miller
@ 2009-11-02  9:16           ` Alan Cox
  2009-11-02 17:39             ` david
  2009-11-02 15:40           ` Diego Calleja
  3 siblings, 1 reply; 47+ messages in thread
From: Alan Cox @ 2009-11-02  9:16 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel

> I'm certain if we made a Venn diagram, there would be an overlap. But 
> FatELF solves different problems than package managers, and in the case of 
> ia32 compatibility packages, it helps the package manager solve its 
> problems better.

Not really - as I said it drives disk usage up, it drives network
bandwidth up (which is a big issue for a distro vendor) and the package
manager and file system exist to avoid this kind of mess being needed.

You can ask the same question as FatELF the other way around and it
becomes even more obvious that it's a bad idea.

Imagine you did it by name not by architecture. So you had a single
"FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
to worry about people having different sets of binaries, it means they
are always compatible. And like FatELF it's not a very good idea.

Welcome to the invention of the directory.

Alan

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  4:58           ` Valdis.Kletnieks
@ 2009-11-02 15:14             ` Ryan C. Gordon
  2009-11-03 14:54               ` Valdis.Kletnieks
  0 siblings, 1 reply; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 15:14 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: Måns Rullgård, linux-kernel


> > think if Ubuntu did this as a distribution-wide policy, then people would 
> > probably choose a different distribution.
> 
> Hmm.. so let's see - people compiling stuff for themselves won't use this
> feature.  And if a distro uses it, users would probably go to a different
> distro.

I probably wasn't clear when I said "distribution-wide policy" followed by 
a "then again." I meant there would be backlash if the distribution glued 
the whole system together, instead of just binaries that made sense to do 
it to.

And, again, there's a third use-case besides compiling your programs and 
getting them from the package manager, and FatELF is meant to address 
that.

> Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> are using FatELF - as long as there's any binaries doing things The Old Way,
> you need to keep the supporting binaries around.

Binaries don't refer directly to /libXX, they count on ld.so to tapdance 
on their behalf. My virtual machine example left the dirs there as 
symlinks to /lib, but they could probably just go away directly.

> Don't forget you take that hit once for each shared library involved.  Plus

That happens in user space in ld.so, so it's not a kernel problem in any 
case, but still...we're talking about, what? Twenty more branch 
instructions per-process?

> I'm not sure if there's hidden gotchas lurking in there (is there code that
> assumes that if executable code is mmap'ed, it's only done so in one arch?

The current code sets up file mappings based on the offset of the desired 
ELF binary.
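A minimal sketch of that offset-based dispatch, under assumed structures (the record layout, field names, and helper below are illustrative only, not the actual FatELF on-disk format):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: the record layout and field names here are illustrative
 * assumptions, not the real FatELF format. The idea: a container header
 * is followed by one record per embedded ELF image, and the loader sets
 * up its file mappings using the chosen record's offset, as if that ELF
 * started at byte 0 of its own file. */
struct fat_record {
    uint16_t machine;    /* ELF e_machine (e.g. 62 == EM_X86_64) */
    uint8_t  osabi;      /* OSABI of the embedded binary */
    uint8_t  word_size;  /* 32 or 64 */
    uint64_t offset;     /* file offset where the embedded ELF begins */
    uint64_t size;       /* length of the embedded ELF image */
};

/* Scan the records for one matching the host architecture. */
static const struct fat_record *
find_record(const struct fat_record *recs, uint16_t nrecs,
            uint16_t host_machine)
{
    for (uint16_t i = 0; i < nrecs; i++)
        if (recs[i].machine == host_machine)
            return &recs[i];
    return NULL;    /* no compatible embedded binary: -ENOEXEC */
}
```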

> Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> in both 32 and 64 bit modes?

Whose refcounts would this screw up? If there's a possible bug, I'd like 
to make sure it gets resolved, of course.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  6:27           ` David Miller
@ 2009-11-02 15:32             ` Ryan C. Gordon
  0 siblings, 0 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 15:32 UTC (permalink / raw)
  To: David Miller; +Cc: alan, mans, linux-kernel


> > That being said, there are lots of places where using a package 
> > manager doesn't make sense:
> 
> Yeah like maybe, just maybe, in an embedded system where increasing
> space costs like FatELF does makes even less sense.

I listed several examples. Embedded systems wasn't one of them.

> I think Alan's arguments against FatELF were the most comprehensive
> and detailed, and I haven't seem them refuted very well, if at all.

I said I was trying to avoid talking everyone to death.  :)

I'll respond to them, then.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  2:21         ` Ryan C. Gordon
                             ` (2 preceding siblings ...)
  2009-11-02  9:16           ` Alan Cox
@ 2009-11-02 15:40           ` Diego Calleja
  3 siblings, 0 replies; 47+ messages in thread
From: Diego Calleja @ 2009-11-02 15:40 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Alan Cox, Måns Rullgård, linux-kernel

On Monday 02 November 2009 03:21:47 Ryan C. Gordon wrote:
> FatELF solves different problems than package managers, and in the case of 
> ia32 compatibility packages, it helps the package manager solve its 
> problems better.

Package managers can be modified to allow embedding a package inside of
another package. That could allow shipping support for multiple architectures
in a single package, and it could even do things that fatelf can't, like
in the case of experimental packages that need other experimental
dependencies: all of them could be packed in a single package, even with
support for multiple architectures. Heck, it could even be a new kind of
container that would allow packing .rpms and .debs for multiple distros
together. And it wouldn't touch a single line of kernel code.

So I don't think that fatelf is solving the problems of package managers,
it's quite the opposite.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-01 20:59     ` Ryan C. Gordon
  2009-11-01 21:15       ` Måns Rullgård
  2009-11-02  0:01       ` Alan Cox
@ 2009-11-02 16:11       ` Chris Adams
  2 siblings, 0 replies; 47+ messages in thread
From: Chris Adams @ 2009-11-02 16:11 UTC (permalink / raw)
  To: linux-kernel

Once upon a time, Ryan C. Gordon <icculus@icculus.org> said:
>I wouldn't imagine this is the target audience for FatELF. For embedded 
>devices, just use the same ELF files you've always used.

What _is_ the target audience?

As I see it, there are three main groups of Linux consumers:

- embedded: No interest in this; adds significant bloat, generally
  embedded systems don't allow random binaries anyway

- enterprise distributions (e.g. Red Hat, SuSE): They have specific
  supported architectures, with partner programs to support those archs.
  If something is supported, they can support all archs with
  arch-specific binaries.

- community distributions (e.g. Ubuntu, Fedora, Debian): This would
  greatly increase build infrastructure complexity, mirror disk space,
  and download bandwidth, and (from a user perspective) slow down update
  downloads significantly.

If you don't have buy-in from at least a large majority of one of these
segments, this is a big waste.  If none of the above support it, it will
not be used by any binary-only software distributors.

Is any major distribution (enterprise or community) going to use this?
If not, kill it now.

-- 
Chris Adams <cmadams@hiwaay.net>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  9:16           ` Alan Cox
@ 2009-11-02 17:39             ` david
  2009-11-02 17:44               ` Alan Cox
  2009-11-02 19:56               ` Krzysztof Halasa
  0 siblings, 2 replies; 47+ messages in thread
From: david @ 2009-11-02 17:39 UTC (permalink / raw)
  To: Alan Cox; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

On Mon, 2 Nov 2009, Alan Cox wrote:

>> I'm certain if we made a Venn diagram, there would be an overlap. But
>> FatELF solves different problems than package managers, and in the case of
>> ia32 compatibility packages, it helps the package manager solve its
>> problems better.
>
> Not really - as I said it drives disk usage up, it drives network
> bandwidth up (which is a big issue for a distro vendor) and the package
> manager and file system exist to avoid this kind of mess being needed.

I think this depends on the particular package.

how much of the package is binary executables (which get multiplied) vs 
how much is data or scripts (which do not)

for any individual user it will always be a larger download, but if you 
have to support more than one architecture (even 32-bit vs 64-bit x86) 
it may be smaller to have one fat package than to have two 'normal' 
packages.

yes, the package manager could handle this by splitting the package up 
into more pieces, with some of the pieces being arch-independent, but that 
also adds complexity.

David Lang

> You can ask the same question as FatELF the other way around and it
> becomes even more obvious that it's a bad idea.
>
> Imagine you did it by name not by architecture. So you had a single
> "FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
> to worry about people having different sets of binaries, it means they
> are always compatible. And like FatELF it's not a very good idea.
>
> Welcome to the invention of the directory.
>
> Alan

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:39             ` david
@ 2009-11-02 17:44               ` Alan Cox
  2009-11-02 19:56               ` Krzysztof Halasa
  1 sibling, 0 replies; 47+ messages in thread
From: Alan Cox @ 2009-11-02 17:44 UTC (permalink / raw)
  To: david; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

> how much of the package is binary executables (which get multiplied) vs 
> how much is data or scripts (which do not)

IFF the data is not in platform-dependent formats.

> for any individual user it will always be a larger download, but if you 
> have to support more than one architecture (even 32 bit vs 64 bit x86) 
> it may be smaller to have one fat package than to have two 'normal' 
> packages.

Nope. The data files for non-arch-specific material get packaged
accordingly. Have done for years.

> 
> yes, the package manager could handle this by splitting the package up 
> into more pieces, with some of the pieces being arch-independent, but that 
> also adds complexity.

Which was implemented years ago and turns out to be vital because only
some data is not arch specific.

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  0:01       ` Alan Cox
  2009-11-02  2:21         ` Ryan C. Gordon
@ 2009-11-02 17:52         ` Ryan C. Gordon
  2009-11-02 18:53           ` Alan Cox
  2009-11-10 11:27           ` Enrico Weigelt
  1 sibling, 2 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 17:52 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


(As requested by davem.)

On Mon, 2 Nov 2009, Alan Cox wrote:
> Lets go down the list of "benefits"
> 
> - Separate downloads
> 	- Doesn't work. The network usage would increase dramatically
> 	  pulling all sorts of unneeded crap.

Sure, this doesn't work for everyone, but this list isn't meant to be a 
massive pile of silver bullets. Some of the items are "that's a cool 
trick" and some are "that would help solve an annoyance." I can see a 
use-case for the one-iso-multiple-arch example, but it's not going to be 
Ubuntu.

> 	- Already solved by having a packaging system (in fact FatELF is
> 	  basically obsoleted by packaging tools)

I think I've probably talked this to death, and will again when I reply to 
Julien, but: packaging tools are a different thing entirely. They solve 
some of the same issues, they cause other issues. The fact that Debian is 
now talking about "multiarch" shows that they've experienced some of these 
problems, too, despite having a world-class package manager.

> - Separate lib, lib32, lib64
> 	- So you have one file with 3 files in it rather than three files
> 	  with one file in them. Directories were invented for a reason

We covered this when talking about shell scripts.

> 	- Makes updates bigger

I'm sure, but I'm not sure the increase is a staggering amount. We're not 
talking about making all packages into FatELF binaries.

> 	- Stops users only having 32bit libs for some packages

Is that a serious concern?

> - Third party packagers no longer have to publish multiple rpm/deb etc
> 	- By vastly increasing download size
> 	- By making updates vastly bigger

It's true that /bin/ls would double in size (although I'm sure at least 
the download saves some of this in compression). But how much of, say, 
Gnome or OpenOffice or Doom 3 is executable code? These things would be 
nowhere near "vastly" bigger.

> 	- Assumes data files are not dependant on binary (often not true)

Turns out that /usr/sbin/hald's cache file was. That would need to be 
fixed, which is trivial, but in my virtual machine test I had it delete 
and regenerate the file on each boot as a fast workaround.

The rest of the Ubuntu install boots and runs. This is millions of lines 
of code that does not depend on the byte order, alignment, and word size 
for its data files.

I don't claim to be an expert on the inner workings of every package you 
would find on a Linux system, but like you, I expected there would be a 
lot of things to fix. It turns out that "often not true" just turned out 
to actually _not_ be true at all.

> 	- And is irrelevant really because 90% or more of the cost is
> 	  testing

Testing doesn't really change with what I'm describing. If you want to 
ship a program for PowerPC and x86, you still need to test it on PowerPC 
and x86, no matter how you distribute or launch it.

> - You no longer need to use shell scripts and flakey logic to pick the
>   right binary ...
> 	- Since the 1990s we've used package managers to do that instead.
> 	  I just type "yum install bzflag", the rest is done for me.

Yes, that is true for software shipped via yum, which does not encompass 
all the software you may want to run on your system. I'm not arguing 
against package management.

> - The ELF OSABI for your system changes someday?
> 	- We already handle that

Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere, 
outside of my FatELF patches, where we check an ELF file's OSABI or OSABI 
version at all.

The kernel blindly loads ELF binaries without checking the ABI, and glibc 
checks the ABI for shared libraries--and flatly rejects files that don't 
match what it expects.

Where do we handle an ABI change gracefully? Am I misunderstanding the 
code?
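For reference, the kind of check being discussed would look roughly like this; a minimal userspace sketch using the standard <elf.h> constants (the accept-list policy is my assumption for illustration, not what the kernel or glibc actually implements):

```c
#include <elf.h>
#include <string.h>

/* Sketch: inspect e_ident[EI_OSABI] before accepting an ELF image.
 * The accept-list below (System V and GNU/Linux) is an illustrative
 * policy choice -- per the discussion above, binfmt_elf itself loads
 * binaries without making this check. */
static int osabi_acceptable(const unsigned char *e_ident)
{
    if (memcmp(e_ident, ELFMAG, SELFMAG) != 0)
        return 0;                   /* not an ELF file at all */

    switch (e_ident[EI_OSABI]) {
    case ELFOSABI_NONE:             /* classic System V, value 0 */
    case ELFOSABI_LINUX:            /* GNU/Linux, value 3 */
        return 1;
    default:
        return 0;                   /* foreign or unknown ABI */
    }
}
```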

> - Ship a single shared library that provides bindings for a scripting
>   language and not have to worry about whether the scripting language
>   itself is built for the same architecture as your bindings. 
> 	- Except if they don't overlap it won't run

True. If I try to run a PowerPC binary on a Sparc, it fails in any 
circumstance. I recognize the goal of this post was to shoot down every 
single point, but you can't see a scenario where this adds a benefit? Even 
in a world that's still running 32-bit web browsers on _every major 
operating system_ because some crucial plugins aren't 64-bit yet?

> - Ship web browser plugins that work out of the box with multiple
>   platforms.
> 	- yum install just works, and there is a search path in firefox
> 	  etc

So it's better to have a thousand little unique solutions to the same 
problem? Everything has a search path (except things that don't), and all 
of those search paths are set up in the same way (except things that 
aren't). Do we really need to have every single program screwing around 
with their own personal spiritual successor to the CLASSPATH environment 
variable?

> - Ship kernel drivers for multiple processors in one file.
> 	- Not useful see separate downloads

Pain in the butt see "which installer is right for me?"   :)

I don't want to get into a holy war about out-of-tree kernel drivers, 
because I'm totally on board with getting drivers into the mainline. But 
it doesn't change the fact that I downloaded the wrong nvidia drivers the 
other day because I accidentally grabbed the ia32 package instead of the 
amd64 one. So much for saving bandwidth.

I wasn't paying attention. But lots of people wouldn't know which to pick 
even if they were. Nvidia, etc, could certainly put everything in one 
shell script and choose for you, but now we're back at square one again.

This discussion applies to applications, not just kernel modules. 
The applications are more important here, in my opinion.

> - Transition to a new architecture in incremental steps. 
> 	- IFF the CPU supports both old and new

A lateral move would be painful (although Apple just did this very thing 
with a FatELF-style solution, albeit with the help of an emulator), but if 
we're talking about the most common case at the moment, x86 to amd64, it's 
not a serious concern.

> 	- and we can already do that

Not really. compat_binfmt_elf will run legacy binaries on new systems, but 
not vice versa. The goal is having something that will let it work on both 
without having to go through a package manager infrastructure.
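That asymmetry can be stated in a few lines; a toy sketch using the standard e_machine constants (the helper name and the hard-coded x86 pairing are mine, for illustration only):

```c
#include <elf.h>       /* EM_386, EM_X86_64 */
#include <stdint.h>

/* Toy model of the point above: compat_binfmt_elf lets a 64-bit x86
 * kernel run EM_386 binaries, but there is no reverse path -- a
 * 32-bit kernel has no loader for EM_X86_64 images. */
static int can_exec(uint16_t kernel_machine, uint16_t binary_machine)
{
    if (binary_machine == kernel_machine)
        return 1;
    if (kernel_machine == EM_X86_64 && binary_machine == EM_386)
        return 1;      /* the compat loader's one-way door */
    return 0;
}
```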

> - Support 64-bit and 32-bit compatibility binaries in one file. 
> 	- Not useful as we've already seen

Where did we see that? There are certainly tradeoffs, pros and cons, but 
this is very dismissive despite several counter-examples.

> - No more ia32 compatibility libraries! Even if your distro
>   doesn't make a complete set of FatELF binaries available, they can
>   still provide it for the handful of packages you need for 99% of 32-bit
>   apps you want to run on a 64-bit system. 
> 
> 	- Argument against FatELF - why waste the disk space if its rare ?

This is _not_ an argument against FatELF.

Why install Gimp by default if I'm not an artist? Because disk space is 
cheap in the configurations I'm talking about and it's better to have it 
just in case, for the 1% of users that will want it. A desktop, laptop or 
server can swallow a few megabytes to clean up some awkward design 
decisions, like the /lib64 thing.

A few more megabytes installed may cut down on the support load for 
distributions when some old 32 bit program refuses to start at all.

In a world where terabyte hard drives are cheap consumer-level 
commodities, the tradeoff seems like a complete no-brainer to me.

> - Have a CPU that can handle different byte orders? Ship one binary that
>   satisfies all configurations!
> 
> 	- Variant of the distribution "advantage" - same problem - its
> 	  better to have two files, its all about testing anyway
> 
> - Ship one file that works across Linux and FreeBSD (without a platform
>   compatibility layer on either of them). 
> 
> 	- Ditto

And ditto from me, too: testing is still testing, no matter how you 
package and ship it. It's just simply not related to FatELF. This problem 
exists in shipping binaries via apt and yum, too.

> - One hard drive partition can be booted on different machines with
>   different CPU architectures, for development and experimentation. Same
>   root file system, different kernel and CPU architecture. 
> 
> 	- Now we are getting desperate.

It's not like this is unheard of. Apple is selling this very thing for 129 
bucks a copy.

> - Prepare your app on a USB stick for sneakernet, know it'll work on
>   whatever Linux box you are likely to plug it into.
> 
> 	- No I don't because of the dependancies, architecture ordering
> 	  of data files, lack of testing on each platform and the fact
> 	  architecture isn't sufficient to define a platform

Yes, it's not a silver bullet. Fedora will not be promising binaries that 
run on every Unix box on the planet.

But the guy with the USB stick? He probably knows the details of every 
machine he wants to plug it into...
 
> - Prepare your app on a network share, know it will work with all
>   the workstations on your LAN. 

...and so does the LAN's administrator.

It's possible to ship binaries that don't depend on a specific 
distribution, or preinstalled dependencies, beyond the existence of a 
glibc that was built in the last five years or so. I do it every day. It's 
not unreasonable, if you aren't part of the package management network, to 
make something that will run generically on "Linux."

> 	- We have search paths, multiple mount points etc.

I'm proposing a unified, clean, elegant way to solve the problem.

> So why exactly do we want FatELF. It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I can't speak for anyone but myself, but I can see lots of places where it 
would personally help me as a developer that isn't always inside the 
packaging system.

There are programs I support that I just simply won't bother moving to 
amd64 because it just complicates things for the end user, for example.

Goofy one-off example: a game that I ported named Lugaru ( 
http://www.wolfire.com/lugaru ) is being updated for Intel Mac OS X. The 
build on my hard drive will run natively as a PowerPC, x86, and amd64 
process, and Mac OS X just does the right thing on whatever hardware tries 
to launch it. On Linux...well, I'm not updating it. You can enjoy the x86 
version. It's easier on me, I have other projects to work on, and too bad 
for you. Granted, the x86_64 version _works_ on Linux, but shipping it is 
a serious pain, so it just won't ship.

That is anecdotal, and I apologize for that. But I'm not the only 
developer that's not in an apt repository, and all of these rebuttals are 
anecdotal: "I just use yum [...because I don't personally care about 
Debian users]."

The "third-party" is important. If your answer is "you should have 
petitioned Fedora, Ubuntu, Gentoo, CentOS, Slackware and every other 
distro to package it, or packaged it for all of those yourself, or open 
sourced someone else's software on their behalf and let the community 
figure it out" then I just don't think we're talking about the same 
reality at all, and I can't resolve that issue for you.

And, since I'm about to get a flood of "closed source is evil" emails: 
this applies to Free Software too. Take something bleeding edge but open 
source, like, say, Songbird, and you are going to find yourself working 
outside of apt-get to get a modern build, or perhaps a build at all.

In short: I'm glad yum works great for your users, but they aren't all the 
users, and it sure doesn't work well for all developers.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02  6:17           ` Julien BLACHE
@ 2009-11-02 18:18             ` Ryan C. Gordon
  2009-11-02 18:59               ` Julien BLACHE
  2009-11-02 19:08               ` Jesús Guerrero
  0 siblings, 2 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 18:18 UTC (permalink / raw)
  To: Julien BLACHE; +Cc: linux-kernel


> With my Debian Developer hat on...

I'm repeating myself now, so I'm sorry if this is getting tedious for 
anyone. FatELF isn't meant to replace the package managers.

tl;dr: If all you have is an apt-get hammer, everything looks like a .deb nail.

> That usually ships as sources or prebuilt binaries in a tarball - target
> /opt and voila! For a bigger audience you'll see a lot of experimental
> stuff that gets packaged (even in quick'n'dirty mode).

"A lot" is hard to quantify. We can certainly see thousands of forum posts 
for help with software that hadn't been packaged yet.

> > software that isn't appropriate for an apt/yum repository
> 
> Just create a repository for the damn thing if you want to distribute it
> that way. There's no "appropriate / not appropriate" that applies here.

I can't imagine most people are interested in building repositories and 
telling their users how to add it to their package manager, period, but 
even less so when you have to build different repositories for different 
sets of users, and not know what to build for whatever is the next popular 
distribution. For things like Gentoo, which for years didn't have a way to 
extend portage, what was the solution?

(har har, don't run Gentoo is the solution, let's get the joke out of our 
systems here.)

> > software that distros refuse to package but is still perfectly useful
> 
> Look at what happens today. A lot of that gets packaged by third
> parties, and more often than not they involve distribution
> maintainers. (See debian-multimedia, PLF for Mandriva, ...)

I'm hearing a lot of "a lot" ... what actually happens today is that you 
depend on the kindness of strangers to package your software or you make a 
bunch of incompatible packages for different distributions.

> > closed-source software
> 
> Why do we even care?

Maybe you don't care, but that doesn't mean no one cares.

I am on Team Stallman. I'll take a crappy free software solution over a 
high quality closed-source one, and strive to improve the free software 
one until it is indisputably better. Most of my free time goes towards 
this very endeavor.

But still, let's not be jerks about it.

> Tarball,

Ugh.

> /opt,

Ugh.

> static build.

Ugh!

I think we can do better than that when we're outside of the package 
managers, but it's a rant for another time.

> And, about the /lib, /lib32, /lib64 situation Debian and Debian-derived
> systems, the solution to that is multiarch and it's being worked
> on. It's a lot better and cleaner than the fat binary kludge.

Having read the multiarch wiki briefly, I'm pleased to see other people 
find the current system "unwieldy," but it seems like the FatELF "kludge" 
solves several of the points in the "unresolved issues" section.

YMMV, I guess.

--ryan.


^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:52         ` Ryan C. Gordon
@ 2009-11-02 18:53           ` Alan Cox
  2009-11-02 20:13             ` Ryan C. Gordon
  2009-11-10 11:27           ` Enrico Weigelt
  1 sibling, 1 reply; 47+ messages in thread
From: Alan Cox @ 2009-11-02 18:53 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel, davem

> Sure, this doesn't work for everyone, but this list isn't meant to be a 

You've not shown a single meaningful use case yet.

> some of the same issues, they cause other issues. The fact that Debian is 
> now talking about "multiarch" shows that they've experienced some of these 
> problems, too, despite having a world-class package manager.

No it means that Debian is finally catching up with rpm on this issue,
where it has been solved for years.

> 
> > - Separate lib, lib32, lib64
> > 	- So you have one file with 3 files in it rather than three files
> > 	  with one file in them. Directories were invented for a reason
> 
> We covered this when talking about shell scripts.

Without providing a justification

> I'm sure, but I'm not sure the increase is a staggering amount. We're not 
> talking about making all packages into FatELF binaries.

How will you handle cross-package dependencies?

> > 	- Stops users only having 32bit libs for some packages
> 
> Is that a serious concern?

Yes from a space perspective and a minimising updates perspective.

> > - Third party packagers no longer have to publish multiple rpm/deb etc
> > 	- By vastly increasing download size
> > 	- By making updates vastly bigger
> 
> It's true that /bin/ls would double in size (although I'm sure at least 
> the download saves some of this in compression). But how much of, say, 
> Gnome or OpenOffice or Doom 3 is executable code? These things would be 
> nowhere near "vastly" bigger.

Guess what: all the data files for Doom and OpenOffice are already
packaged separately, as are many of the gnome ones, or automagically
shared by the two rpm packages.

> 
> > 	- Assumes data files are not dependant on binary (often not true)
> 
> Turns out that /usr/sbin/hald's cache file was. That would need to be 
> fixed, which is trivial, but in my virtual machine test I had it delete 
> and regenerate the file on each boot as a fast workaround.
> 
> The rest of the Ubuntu install boots and runs. This is millions of lines 
> of code that does not depend on the byte order, alignment, and word size 
> for its data files.

That you've noticed. But you've not done any formal testing with tens of
thousands of users so you've not done more than the "hey mummy it boots"
test (which is about one point over the Linus 'it might compile' stage)
 
> I don't claim to be an expert on the inner workings of every package you 
> would find on a Linux system, but like you, I expected there would be a 
> lot of things to fix. It turns out that "often not true" just turned out 
> to actually _not_ be true at all.

You need an expert on the inner workings of each package to review and
test them. Fortunately that work is already done, by the rpm packagers
for all the distros

> > - The ELF OSABI for your system changes someday?
> > 	- We already handle that
> 
> Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere, 
> outside of my FatELF patches, where we check an ELF file's OSABI or OSABI 
> version at all.

ARM has migrated ABI at least once.

> Where do we handle an ABI change gracefully? Am I misunderstanding the 
> code?

You add code for the migration as needed, in the distro

> single point, but you can't see a scenario where this adds a benefit? Even 
> in a world that's still running 32-bit web browsers on _every major 
> operating system_ because some crucial plugins aren't 64-bit yet?

Your distro must be out of date or a bit backward. Good ones thunk those
or run them in a different process (which is a very good idea for quality
reasons as well as security)

> 
> > - Ship web browser plugins that work out of the box with multiple
> >   platforms.
> > 	- yum install just works, and there is a search path in firefox
> > 	  etc
> 
> So it's better to have a thousand little unique solutions to the same 
> problem? 

We have one solution - package management. You want to add the extra one.

> it doesn't change the fact that I downloaded the wrong nvidia drivers the 
> other day because I accidentally grabbed the ia32 package instead of the 
> amd64 one. So much for saving bandwidth.

You mean your package manager didn't do it for you ? Anyway kernel
drivers are dependent on about 1500 variables and 1500! is a very very
large FatELF binary so it won't work.

> Not really. compat_binfmt_elf will run legacy binaries on new systems, but 
> not vice versa. The goal is having something that will let it work on both 
> without having to go through a package manager infrastructure.

See binfmt_misc. In fact you can probably do your ELF hacks in userspace
that way if you really must.

> In a world where terabyte hard drives are cheap consumer-level 
> commodities, the tradeoff seems like a complete no-brainer to me.

Except that
- we are moving away from rotating storage for primary media
- flash still costs rather more
- virtual machines mean that disk space is now a real cost again as is RAM

> version. It's easier on me, I have other projects to work on, and too bad 
> for you. Granted, the x86_64 version _works_ on Linux, but shipping it is 
> a serious pain, so it just won't ship.

Distro problem, in the open source world someone will package it.

> That is anecdotal, and I apologize for that. But I'm not the only 
> developer that's not in an apt repository, and all of these rebuttals are 
> anecdotal: "I just use yum [...because I don't personally care about 
> Debian users]."

No. See yum/rpm demonstrates that it can be done right. Debian has fallen
a bit behind on that issue. We know it can be done right, and that tells
us that the Debian tools will eventually catch up and also do it right.

You have a solution (quite a nicely programmed one) in search of a
problem, and with patent concerns. That's a complete non-flier for the
kernel. It's not a dumping ground for neat toys and it would be several
gigabytes of code if it was.

You are also ignoring the other inconvenient detail. The architecture
selection used even by package managers is far more complex than i386 v
x86_64. Some distros build i686, some i686 optimisation but without cmov,
some i386, some install i386 or i686, others optimise for newer
processors only and so on.

Alan

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02 18:18             ` Ryan C. Gordon
@ 2009-11-02 18:59               ` Julien BLACHE
  2009-11-02 19:08               ` Jesús Guerrero
  1 sibling, 0 replies; 47+ messages in thread
From: Julien BLACHE @ 2009-11-02 18:59 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

"Ryan C. Gordon" <icculus@icculus.org> wrote:

Hi,

> "A lot" is hard to quantify. We can certainly see thousands of forum
> posts for help with software that hadn't been packaged yet.

"A lot" certainly doesn't mean "all of it", sure, but that's already a
clear improvement over the situation 10 years ago.

> I can't imagine most people are interested in building repositories and 
> telling their users how to add it to their package manager, period, but 
> even less so when you have to build different repositories for different 
> sets of users, and not know what to build for whatever is the next popular 
> distribution. For things like Gentoo, which for years didn't have a way to 
> extend portage, what was the solution?

You need to decide if and how you want to distribute your software,
define your target audience and work from there. Yes, it takes some
effort. Yes, it's not something that's very valued by today's
standards. So what?

You can as well decide that your software is so good that packagers from
everywhere will package it for you. Except sometimes your software
actually isn't that good and nobody gives a damn.

As it stands, it really looks like your main problem is that it's too
hard to distribute software for Linux, but you're making it a lot
more difficult than it actually is.

Basically, these days, if you can ship a generic RPM and a clean .deb,
you've got most of your users covered. Oh, that's per-architecture, so
with i386 and amd64, that makes 4 packages. And the accompanying source
packages, because that can't hurt.

Anyone that can't use those packages either knows how to build stuff on
her distro of choice or needs to upgrade.

> I'm hearing a lot of "a lot" ... what actually happens today is that you 
> depend on the kindness of strangers to package your software or you make a 
> bunch of incompatible packages for different distributions.

Err. Excuse me, but if you "depend on the kindness of strangers" it's
because you made that choice in the first place. There is nothing that
prevents you from producing packages yourself. You might even learn a
thing or ten in the process!

When software doesn't get packaged properly after some time, it's
usually because nobody knows about it or because it's not that good and
nobody bothered. As the author, you can fix both issues.

>> > closed-source software
>> 
>> Why do we even care?
>
> Maybe you don't care, but that doesn't mean no one cares.

The ones who care have the resources to produce proper packages. They
just don't do it.

> I am on Team Stallman. I'll take a crappy free software solution over a 
> high quality closed-source one, and strive to improve the free software 

I don't think FatELF improves anything at all in the Free Software
world.

[static builds distributed as tarballs]
> I think we can do better than that when we're outside of the package 
> managers, but it's a rant for another time.

Actually, no, you can't, because too many people out there writing
software don't have a clue about shared libraries. If you want things to
work everywhere, static is the way to go.

> Having read the multiarch wiki briefly, I'm pleased to see other people 
> find the current system "unwieldy," but it seems like FatELF "kludge" 
> solves several of the points in the "unresolved issues" section.

Err, the unresolved issues are all packaging issues, to which the
solutions have not been decided yet. I don't see what FatELF can fix
here.

Now, to put it in a nutshell, you are coming forward with a technical
solution to a problem that *isn't*:
 - "my software, Zorglub++ isn't packaged anywhere!"
   Did you package it? No? Why not? Besides, maybe nobody knows about
   it, maybe nobody needs it, maybe it's just crap. Whatever. Find out
   and act from there.

 - "proprietary Blahblah7 is not packaged!"
   Yeah, well, WeDoProprietaryStuff, Inc. decided not to package it
   for whatever reason. What about contacting them, finding out the
   reason and then working from there?

JB.

-- 
Julien BLACHE                                   <http://www.jblache.org> 
<jb@jblache.org>                                  GPG KeyID 0xF5D65169


* Re: FatELF patches...
  2009-11-02 18:18             ` Ryan C. Gordon
  2009-11-02 18:59               ` Julien BLACHE
@ 2009-11-02 19:08               ` Jesús Guerrero
  1 sibling, 0 replies; 47+ messages in thread
From: Jesús Guerrero @ 2009-11-02 19:08 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: linux-kernel

On Mon, 2 Nov 2009 13:18:41 -0500 (EST), "Ryan C. Gordon"
<icculus@icculus.org> wrote:
>> > software that isn't appropriate for an apt/yum repository
>> 
>> Just create a repository for the damn thing if you want to distribute
>> it that way. There's no "appropriate / not appropriate" that applies here.
> 
> I can't imagine most people are interested in building repositories and 
> telling their users how to add it to their package manager, period, but 
> even less so when you have to build different repositories for different 
> sets of users, and not know what to build for whatever is the next 
> popular distribution. For things like Gentoo, which for years didn't 
> have a way to extend portage, what was the solution?

I am not going into the FatELF thing. I am just following the debate
because it's interesting :)

However, for the sake of correctness about Gentoo, 

1)
Gentoo has had support for "overlays" *for ages*. I am sure they were
there when I joined in 2004. So I am not sure why you say that portage
can't be extended. I can't be sure when overlays came onto the scene, and
I have no idea if they were there from the beginning, but even at that
stage, if nothing else, you could still use the "ebuild" tool directly on
an ebuild stored at any arbitrary place, not necessarily in the portage
tree. Nowadays there's a great number of well-known overlays, where
several Gentoo devs are involved. Some of these are the testbed for trees
that are later incorporated into the official portage tree. A well-known
example is sunrise, because it's big and of great quality, but there are
many more.

2)
Gentoo is probably the last distro that would benefit from FatELF, since
it's a distro where each user slims the system down to his/her needs.
Gentoo is not about making things generic. That's what compiling for your
architecture, USE flags, etc. are all about. If there's a distro out there
where FatELF doesn't make any sense at all, that's Gentoo for sure (as a
representative of source distros, I guess the same could apply to LFS,
sourcemage, etc.).

3)
Besides that, the average Gentoo user has no problem rolling his own
ebuilds if needed and putting them into a local overlay. And even if they
lack the skill, there's always the forum and bugzilla for that. This is a
last resort; as said, there are *lots* of well-known and maintained
overlays out there.

Again, these are not arguments for or against FatELF; as said, I am
staying out of the discussion, just some clarifications for things that I
thought were not correct. :)
-- 
Jesús Guerrero


* Re: FatELF patches...
  2009-11-02 17:39             ` david
  2009-11-02 17:44               ` Alan Cox
@ 2009-11-02 19:56               ` Krzysztof Halasa
  2009-11-02 20:11                 ` david
  1 sibling, 1 reply; 47+ messages in thread
From: Krzysztof Halasa @ 2009-11-02 19:56 UTC (permalink / raw)
  To: david; +Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

david@lang.hm writes:

> for any individual user it will always be a larger download, but if you
> have to support more than one architecture (even 32 bit vs 64 bit x86)
> it may be smaller to have one fat package than to have two 'normal'
> packages.

In terms of disk space on distro TFTP servers only. You'll need to
transfer more, both from the user's and the distro's POV (obviously). This
one simple fact alone is more than enough to forget FatELF.

Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
it doesn't seem to be going to change.

FatELF means you have to compile for many archs. Do you even have the
necessary compilers? Extra time and disk space used for what, to solve
a non-problem?

> yes, the package manager could handle this by splitting the package up
> into more pieces, with some of the pieces being arch independent, but
> that also adds complexity.

Even without splitting, separate per-arch packages are a clear win.

I'm surprised this idea made it here. It certainly has merit for an
installation medium, but there it's called a directory tree and/or .tar
or .zip.
-- 
Krzysztof Halasa


* Re: FatELF patches...
  2009-11-02 19:56               ` Krzysztof Halasa
@ 2009-11-02 20:11                 ` david
  2009-11-02 20:33                   ` Krzysztof Halasa
  2009-11-03  1:35                   ` Mikael Pettersson
  0 siblings, 2 replies; 47+ messages in thread
From: david @ 2009-11-02 20:11 UTC (permalink / raw)
  To: Krzysztof Halasa
  Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

On Mon, 2 Nov 2009, Krzysztof Halasa wrote:

> david@lang.hm writes:
>
>> for any individual user it will always be a larger download, but if you
>> have to support more than one architecture (even 32 bit vs 64 bit x86)
>> it may be smaller to have one fat package than to have two 'normal'
>> packages.
>
> In terms of disk space on distro TFTP servers only. You'll need to
> transfer more, both from the user's and the distro's POV (obviously). This
> one simple fact alone is more than enough to forget FatELF.

it depends on whether only one arch is being downloaded or not.

it could be considerably cheaper for mirroring bandwidth. Even if Alan is 
correct and distros have re-packaged everything so that the 
arch-independent stuff really is in separate packages, most 
mirroring/repository systems keep each distro release/arch in a separate 
directory tree, so each of these arch-independent things gets copied 
multiple times.

> Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
> and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
> it doesn't seem to be going to change.
>
> FatELF means you have to compile for many archs. Do you even have the
> necessary compilers? Extra time and disk space used for what, to solve
> a non-problem?

you don't have to compile multiple arches any more than you have to provide 
any other support for that arch. FatELF is a way to bundle the binaries 
that you were already creating, not something to force you to support an 
arch you otherwise wouldn't (although if it did make it easy enough for 
you to do so that you started to support additional arches, that would be 
a good thing)

>> yes, the package manager could handle this by splitting the package up
>> into more pieces, with some of the pieces being arch independent, but
>> that also adds complexity.
>
> Even without splitting, separate per-arch packages are a clear win.
>
> I'm surprised this idea made it here. It certainly has merit for
> installation medium, but it's called directory tree and/or .tar or .zip
> there.

if you have a 1M binary with 500M data, repeated for 5 arches it is not a 
win vs a single 505M FatELF package in all cases.
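Taking the hypothetical 1M/500M figures above literally, the trade-off is easy to sketch (a toy calculation, not a measurement of any real package):

```python
# Toy calculation of the package-size trade-off, using the hypothetical
# figures from the paragraph above: 1M of code, 500M of data, 5 arches.
BINARY_MB = 1
DATA_MB = 500
ARCHES = 5

# Five conventional packages, each bundling its own copy of the data:
per_arch_total = ARCHES * (BINARY_MB + DATA_MB)   # 2505 MB on the mirror

# One FatELF-style package: five binaries, a single copy of the data:
fat_total = ARCHES * BINARY_MB + DATA_MB          # 505 MB on the mirror

# The flip side: each individual user now downloads 505 MB instead of
# the 501 MB of a single-arch package.
print(per_arch_total, fat_total)
```

Of course, splitting the arch-independent data into its own package achieves the same mirror-side saving without FatELF, which is exactly the counterargument raised elsewhere in the thread.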

David Lang


* Re: FatELF patches...
  2009-11-02 18:53           ` Alan Cox
@ 2009-11-02 20:13             ` Ryan C. Gordon
  2009-11-04  1:09               ` Ryan C. Gordon
  0 siblings, 1 reply; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-02 20:13 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


> You've not shown a single meaningful use case yet.

I feel like we're at the point where we're each making points of various 
quality and the other person is going "nuh-uh."

You mentioned the patent thing and I don't have an answer at all yet from 
a lawyer. Let's table this for awhile until I have more information about 
that. If there's going to be a patent problem, it's not worth wasting 
everyone's time any further.

If it turns out to be no big deal, we can decide to revisit this.

--ryan.



* Re: FatELF patches...
  2009-11-02 20:11                 ` david
@ 2009-11-02 20:33                   ` Krzysztof Halasa
  2009-11-03  1:35                   ` Mikael Pettersson
  1 sibling, 0 replies; 47+ messages in thread
From: Krzysztof Halasa @ 2009-11-02 20:33 UTC (permalink / raw)
  To: david; +Cc: Alan Cox, Ryan C. Gordon, Måns Rullgård, linux-kernel

david@lang.hm writes:

>> In terms of disk space on distro TFTP servers only. You'll need to
>> transfer more, both from the user's and the distro's POV (obviously). This
>> one simple fact alone is more than enough to forget FatELF.
>
> it depends on whether only one arch is being downloaded or not.

Well, from the user's POV it may get close if the user downloads maybe 5
different archs out of all those supported by the distro. Not very
typical, I guess.

> it could be considerably cheaper for mirroring bandwidth.

Maybe (though it can be solved with existing techniques).
Which matters more now - bandwidth consumed by users or by mirrors?

> Even if Alan
> is correct and distros have re-packaged everything so that the
> arch-independent stuff really is in separate packages, most
> mirroring/repository systems keep each distro release/arch in a
> separate directory tree, so each of these arch-independent things gets
> copied multiple times.

If it was a (serious) problem (I think it's not), it could be easily
solved. Think rsync, sha1|256-based mirroring stuff etc.

> you don't have to compile multiple arches any more than you have to
> provide any other support for that arch. FatELF is a way to bundle the
> binaries that you were already creating, not something to force you to
> support an arch you otherwise wouldn't (although if it did make it
> easy enough for you to do so that you started to support additional
> arches, that would be a good thing)

Not sure - longer compile times, longer downloads, no testing.

> if you have a 1M binary with 500M data, repeated for 5 arches it is
> not a win vs a single 505M FatELF package in all cases.

A real example of such a binary, maybe?
-- 
Krzysztof Halasa


* Re: FatELF patches...
  2009-11-02 20:11                 ` david
  2009-11-02 20:33                   ` Krzysztof Halasa
@ 2009-11-03  1:35                   ` Mikael Pettersson
  1 sibling, 0 replies; 47+ messages in thread
From: Mikael Pettersson @ 2009-11-03  1:35 UTC (permalink / raw)
  To: david
  Cc: Krzysztof Halasa, Alan Cox, Ryan C. Gordon,
	Måns Rullgård, linux-kernel

david@lang.hm writes:
 > > FatELF means you have to compile for many archs. Do you even have the
 > > necessary compilers? Extra time and disk space used for what, to solve
 > > a non-problem?
 > 
 > you don't have to compile multiple arches any more than you have to provide 
 > any other support for that arch. FatELF is a way to bundle the binaries 
 > that you were already creating, not something to force you to support an 
 > arch you otherwise wouldn't (although if it did make it easy enough for 
 > you to do so that you started to support additional arches, that would be 
 > a good thing)

'bundle' by gluing .o files together rather than using what we already have:
directories, search paths, $VARIABLES in search paths, and ELF interpreters
and .so loaders that know to look in $ARCH subdirectories first (I used that
feature to perform an incremental upgrade from OABI to EABI on my ARM/Linux
systems last winter).

Someone, somewhere, has to inspect $ARCH and make a decision. Moving that
decision from user-space to kernel-space for ELF file loading is neither
necessary nor sufficient. Consider .a and .h files for instance.
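A minimal sketch of that user-space decision (the directory layout and names here are illustrative, not any distro's actual scheme):

```python
import os

def select_arch_binary(root, name, machine=None):
    """Pick a per-arch binary from an $ARCH-style subdirectory layout.

    Layout assumed (illustrative only):  <root>/<machine>/<name>,
    with an optional arch-independent fallback at <root>/<name>.
    """
    machine = machine or os.uname().machine      # e.g. 'x86_64', 'armv5tel'
    candidate = os.path.join(root, machine, name)
    if os.path.exists(candidate):
        return candidate
    return os.path.join(root, name)              # fallback

# e.g. select_arch_binary('/opt/app/bin', 'frobnicate', machine='x86_64')
# returns '/opt/app/bin/x86_64/frobnicate' if present, else the fallback.
```

The point being: nothing in this decision needs kernel support, which is why the same trick also covers .a and .h files that FatELF cannot.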

 > > I'm surprised this idea made it here. It certainly has merit for
 > > installation medium, but it's called directory tree and/or .tar or .zip
 > > there.
 > 
 > if you have a 1M binary with 500M data, repeated for 5 arches it is not a 
 > win vs a single 505M FatELF package in all cases.

If I have a 1M binary with 500M non-arch data I'll split the package because
I'm not a complete moron.

IMNSHO FatELF is a technology pretending to be a solution to "problems"
that don't exist or have user-space solutions. Either way, it doesn't
belong in the Linux kernel.


* Re: FatELF patches...
@ 2009-11-03  6:43 Eric Windisch
  2009-11-03 11:21 ` Bernd Petrovitsch
  2009-11-10 10:21 ` Enrico Weigelt
  0 siblings, 2 replies; 47+ messages in thread
From: Eric Windisch @ 2009-11-03  6:43 UTC (permalink / raw)
  To: linux-kernel

First, I apologize if this message gets top-posted or otherwise
improperly threaded, as I'm not currently a subscriber to the list (I
can no longer handle the daily traffic).  I politely ask that I be CC'ed
on any replies.

In response to Alan's request for a FatELF use-case, I'll submit two of
my own.

I have customers which operate low-memory x86 virtual machine instances.
Until recently, these ran with as little as 64MB of RAM.  Many customers
have chosen 32-bit distributions for these systems, but would like the
flexibility of scaling beyond 4GB of memory.  These customers would like
the choice of migrating to 64-bit without having to reinstall their
distribution.

Furthermore, I'm involved in several "cloud computing" initiatives,
including interoperability efforts.  There has been discussion of
assuring portability of virtual machine images across varying
infrastructure services.  I could see how FatELF could be part of a
solution to this problem, enabling a single image to function against
host services running a variety of architectures.

As for negatives: I'm running ZFS which now supports deduplication, so
this might potentially eliminate my own concerns in regard to storage.
Eventually, Btrfs will provide this capability under Linux directly. The
networking isn't much of an issue either, as I have my own mirrors for
the popular distributions.  While this isn't the typical end-user
environment, it might be a typical environment for companies facing the
unique problems FatELF solves.

I concede that there are a number of ways that solutions to these
problems might be implemented, and FatELF binaries might not be the
optimal solution.  Regardless, I do feel that use cases do exist, even
if there are questions and concerns about the implementation.

-- 
Regards,
Eric Windisch



* Re: FatELF patches...
  2009-11-03  6:43 FatELF patches Eric Windisch
@ 2009-11-03 11:21 ` Bernd Petrovitsch
  2009-11-10 10:10   ` Enrico Weigelt
  2009-11-10 10:21 ` Enrico Weigelt
  1 sibling, 1 reply; 47+ messages in thread
From: Bernd Petrovitsch @ 2009-11-03 11:21 UTC (permalink / raw)
  To: Eric Windisch; +Cc: linux-kernel

On Tue, 2009-11-03 at 01:43 -0500, Eric Windisch wrote:
> First, I apologize if this message gets top-posted or otherwise
> improperly threaded, as I'm not currently a subscriber to the list (I
Given proper References: headers, the mail should have threaded
properly.
> can no longer handle the daily traffic).  I politely ask that I be CC'ed
> on any replies.
Which raises the question why you didn't cc: anyone in the first place.

> In response to Alan's request for a FatELF use-case, I'll submit two of
> my own.
> 
> I have customers which operate low-memory x86 virtual machine instances.
Low-resource environments (be they embedded or not) are probably the last
that want (or could even handle) such "bloat by design".
The question in that world is not "how can I make it run on more
architectures" but "how can I get rid of run-time code as soon as
possible".

> Until recently, these ran with as little as 64MB of RAM.  Many customers
> have chosen 32-bit distributions for these systems, but would like the
> flexibility of scaling beyond 4GB of memory.  These customers would like
> the choice of migrating to 64-bit without having to reinstall their
> distribution.
Just install a 64bit kernel (and leave the user-space intact). A 64bit
kernel can run 32bit binaries.
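The mixed setup Bernd describes is easy to verify from user space, since the word size is recorded in the ELF identification bytes; a rough sketch (offsets per the standard ELF layout):

```python
def elf_class(header: bytes) -> str:
    """Classify an ELF image as 32- or 64-bit from its ident bytes.

    Bytes 0-3 are the magic 0x7f 'E' 'L' 'F'; byte 4 (EI_CLASS) is
    1 for ELFCLASS32 and 2 for ELFCLASS64.  A 64-bit kernel with a
    32-bit /bin/ls is a perfectly normal combination, which is all
    the migration path described above needs.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    return {1: "ELF32", 2: "ELF64"}.get(header[4], "unknown")

# e.g.:  with open('/bin/ls', 'rb') as f: print(elf_class(f.read(5)))
```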

> Furthermore, I'm involved in several "cloud computing" initiatives,
> including interoperability efforts.  There has been discussion of
The better solution is probably to agree on pseudo-machine-code (like
e.g. JVM, parrot, or whatever) with good interpreters/JIT-compilers
which focus more on security and how to validate potentially hostile
programs than anything else.

> assuring portability of virtual machine images across varying
> infrastructure services.  I could see how FatELF could be part of a
> solution to this problem, enabling a single image to function against
> host services running a variety of architectures.
Let's hope that the n versions in a given FatELF image actually are
instances of the same source.

[....]
> I concede that there are a number of ways that solutions to these
> problems might be implemented, and FatELF binaries might not be the
> optimal solution.  Regardless, I do feel that use cases do exist, even
> if there are questions and concerns about the implementation.
The obvious drawbacks are:
- Even if disk space is cheap, the vast amount is a problem for
  mirroring that stuff.
- Fat-Binaries (ab)use more Internet bandwidth. Hell, Fedora/RedHat got
  delta-RPMS working (just?) for this reason.
- Fat-Binaries (ab)use much more memory and I/O bandwidth - loading code
  for n architectures and throwing n-1 of it away doesn't sound very sound.
- Compiling+linking for n architectures needs n-1 cross-compilers
  installed and working.
- Compiling+linking for n architectures needs much more *time* than for
  1 (n times or so).
  Guess what people/developers did first on the old NeXT machines: They
  disabled the default "build for all architectures" as it sped things
  up.
  Even if the expected development setup is "build for local only", at
  least packagers and regression testers won't have the luxury of that.

The only remotely useful benefit in the long run I can imagine is: The
permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
the alternatives are applicable without reading the generated
configure.sh (and config.log) to guess how to tell the script some
details.
But that isn't really worth it - as we are living without it for long.

	Bernd
-- 
Firmix Software GmbH                   http://www.firmix.at/
mobil: +43 664 4416156                 fax: +43 1 7890849-55
          Embedded Linux Development and Services




* Re: FatELF patches...
  2009-11-02 15:14             ` Ryan C. Gordon
@ 2009-11-03 14:54               ` Valdis.Kletnieks
  2009-11-03 18:30                 ` Matt Thrailkill
  0 siblings, 1 reply; 47+ messages in thread
From: Valdis.Kletnieks @ 2009-11-03 14:54 UTC (permalink / raw)
  To: Ryan C. Gordon; +Cc: Måns Rullgård, linux-kernel


On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:

> I probably wasn't clear when I said "distribution-wide policy" followed by 
> a "then again." I meant there would be backlash if the distribution glued 
> the whole system together, instead of just binaries that made sense to do 
> it to.

OK.. I'll bite - which binaries does it make sense to do so?  Remember in
your answer to address the very valid point that any binaries you *don't*
do this for will still need equivalent hand-holding by the package manager.
So if you're not doing all of them, you need to address the additional
maintenance overhead of "which way is this package supposed to be built?"
and all the derivative headaches.

It might be instructive to not do a merge of *everything* in Ubuntu as you
did, but only select a random 20% or so of the packages and convert them
to FatELF, and see what breaks. (If our experience with 'make randconfig'
in the kernel is any indication, you'll hit a *lot* of corner cases and
pre-reqs you didn't know about...)

> > Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> > are using FatELF - as long as there's any binaries doing things The Old Way,
> > you need to keep the supporting binaries around.
> 
> Binaries don't refer directly to /libXX, they count on ld.so to tapdance 
> on their behalf. My virtual machine example left the dirs there as 
> symlinks to /lib, but they could probably just go away directly.

Only if all your shared libs (which are binaries too) have migrated to FatELF.

On my box, I have:

% ls -l /usr/lib{,64}/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1274156 2009-10-06 13:49 /usr/lib/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1308600 2009-10-06 13:49 /usr/lib64/libX11.so.6.3.0

You can't dump them both into /usr/lib without making it a FatElf or doing
some name mangling. You probably didn't notice because you merged *all* of
an ubuntu distro into FatELF.

> > Don't forget you take that hit once for each shared library involved.  Plus
> 
> That happens in user space in ld.so, so it's not a kernel problem in any 
> case, but still...we're talking about, what? Twenty more branch 
> instructions per-process?

No, a lot more than that - you already identified an extra 128-byte read
as needing to happen.  Plus syscall overhead.

> > Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> > in both 32 and 64 bit modes?
> 
> Whose refcounts would this screw up? If there's a possible bug, I'd like 
> to make sure it gets resolved, of course.

That's the point - nobody's done an audit for such things.  Does the kernel
DTRT when counting mapped pages (probably close-to-right, if you got it to boot)?
Where are the corresponding patches, if any, for tools like perf and oprofile?
Does lsof DTRT? /proc/<pid>/pagemap?  Any other tools that may break because
the make an assumption that executable files are mapped as 32-bit or 64-bit,
but not both (most likely choking if they see a 64-bit address someplace
after they've decided the binary is a 32-bit)?



* Re: FatELF patches...
  2009-11-03 14:54               ` Valdis.Kletnieks
@ 2009-11-03 18:30                 ` Matt Thrailkill
  0 siblings, 0 replies; 47+ messages in thread
From: Matt Thrailkill @ 2009-11-03 18:30 UTC (permalink / raw)
  To: Valdis.Kletnieks; +Cc: Ryan C. Gordon, Måns Rullgård, linux-kernel

On Tue, Nov 3, 2009 at 6:54 AM,  <Valdis.Kletnieks@vt.edu> wrote:
> On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:
>
>> I probably wasn't clear when I said "distribution-wide policy" followed by
>> a "then again." I meant there would be backlash if the distribution glued
>> the whole system together, instead of just binaries that made sense to do
>> it to.
>
> OK.. I'll bite - which binaries does it make sense to do so?  Remember in
> your answer to address the very valid point that any binaries you *don't*
> do this for will still need equivalent hand-holding by the package manager.
> So if you're not doing all of them, you need to address the additional
> maintenance overhead of "which way is this package supposed to be built?"
> and all the derivative headaches.
>
> It might be instructive to not do a merge of *everything* in Ubuntu as you
> did, but only select a random 20% or so of the packages and convert them
> to FatELF, and see what breaks. (If our experience with 'make randconfig'
> in the kernel is any indication, you'll hit a *lot* of corner cases and
> pre-reqs you didn't know about...)

I think he is thinking of only having FatELF binaries for the binaries
and libraries that overlap between 32- and 64-bit in a distro install.
Perhaps everything that is sitting in /lib32, for example, could instead
be a FatELF binary in /lib, alongside the 64-bit binary.

A thought I had, that I don't think has come up in this thread:
could it be practical or worthwhile for distros to use FatELF to ship multiple
executables with different compiler optimizations?  i586, i686, etc.
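That kind of variant selection is already done in user space today (glibc's hwcap-based library search does something similar); a toy sketch of the decision, with made-up variant names and flag sets:

```python
def pick_variant(cpu_flags, variants):
    """Pick the most specific build variant this CPU supports.

    `variants` is an ordered list of (name, required_flags) pairs,
    most-optimised first.  Names and flag sets are illustrative,
    not glibc's real hwcap logic.
    """
    for name, required in variants:
        if required <= cpu_flags:     # set-subset: CPU has all needed flags
            return name
    return "generic"

VARIANTS = [
    ("i686-cmov", {"cmov"}),
    ("i586", set()),                  # baseline requires nothing special
]

# Flags would normally be parsed from the "flags" line of /proc/cpuinfo.
```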


* Re: FatELF patches...
  2009-11-02 20:13             ` Ryan C. Gordon
@ 2009-11-04  1:09               ` Ryan C. Gordon
  0 siblings, 0 replies; 47+ messages in thread
From: Ryan C. Gordon @ 2009-11-04  1:09 UTC (permalink / raw)
  To: Alan Cox; +Cc: Måns Rullgård, linux-kernel, davem


> You mentioned the patent thing and I don't have an answer at all yet from 
> a lawyer. Let's table this for awhile until I have more information about 
> that. If there's going to be a patent problem, it's not worth wasting 
> everyone's time any further.
> 
> If it turns out to be no big deal, we can decide to revisit this.

The Software Freedom Law Center replied with this...

"I refer you to our Legal Guide section on dealing with patents available 
from our website.  I also refer you to our amici brief in Bilski, where we 
argue that patents on pure software are invalid.  If a patent is invalid, 
there's no reason to consider whether it is infringed."

...which may be promising some day, but doesn't resolve current concerns. 
Also: "I read a FAQ" doesn't hold up in court.  :)

Based on feedback from this list, the patent concern that I'm not 
qualified to resolve myself, and belief that I'll be on the losing end of 
the same argument with the glibc maintainers after this, I'm withdrawing 
my FatELF patch. If anyone wants it, I'll leave the project page and 
patches in place at http://icculus.org/fatelf/ ...

Thank you everyone for your time and feedback.

--ryan.



* Re: FatELF patches...
  2009-11-01 19:20 ` David Hagood
  2009-11-01 20:28   ` Måns Rullgård
  2009-11-01 20:40   ` Ryan C. Gordon
@ 2009-11-10 10:04   ` Enrico Weigelt
  2 siblings, 0 replies; 47+ messages in thread
From: Enrico Weigelt @ 2009-11-10 10:04 UTC (permalink / raw)
  To: linux-kernel

* David Hagood <david.hagood@gmail.com> wrote:

Hi,

> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

If you really wanna have arch-independent binaries, you need some sort 
of virtual processor. Java, LLVM, etc. The idea is far from new; 
IMHO it originally came from the old Burroughs mainframes, which ran some
Algol-tailored bytecode, driven by an interpreter in microcode.
(I'm currently designing a new VP with similar concepts, just in case
anybody's interested.)

BTW: this does not need additional kernel support - binfmt_misc 
is your friend ;-P
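For reference, the binfmt_misc registration alluded to here is a single formatted line written (as root) to /proc/sys/fs/binfmt_misc/register. A sketch that builds such a line for a hypothetical bytecode format — the magic bytes and interpreter path are invented for illustration:

```python
def binfmt_register_line(name, magic: bytes, interpreter, offset=0):
    """Build a binfmt_misc registration string of the form
    :name:type:offset:magic:mask:interpreter:flags
    (type M = match on magic bytes; mask and flags left empty).
    """
    hexmagic = "".join("\\x%02x" % b for b in magic)
    return ":%s:M:%d:%s::%s:" % (name, offset, hexmagic, interpreter)

# Hypothetical example: route files starting with b'MYVM' to /usr/bin/myvm.
line = binfmt_register_line("myvm", b"MYVM", "/usr/bin/myvm")
# Registering it (needs root) would then be:
#   echo ':myvm:M:0:\x4d\x59\x56\x4d::/usr/bin/myvm:' \
#       > /proc/sys/fs/binfmt_misc/register
```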

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

Just for the record: you want to have FatELF on embedded systems?


cu
-- 
---------------------------------------------------------------------
 Enrico Weigelt    ==   metux IT service - http://www.metux.de/
---------------------------------------------------------------------
 Please visit the OpenSource QM Taskforce:
 	http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for a lot dozens of packages in dozens of versions:
	http://patches.metux.de/
---------------------------------------------------------------------


* Re: FatELF patches...
  2009-11-03 11:21 ` Bernd Petrovitsch
@ 2009-11-10 10:10   ` Enrico Weigelt
  2009-11-10 12:15     ` Bernd Petrovitsch
  0 siblings, 1 reply; 47+ messages in thread
From: Enrico Weigelt @ 2009-11-10 10:10 UTC (permalink / raw)
  To: linux-kernel

* Bernd Petrovitsch <bernd@firmix.at> wrote:

> The only remotely useful benefit in the long run I can imagine is: The
> permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
> the alternatives are applicable without reading the generated
> configure.sh (and config.log) to guess how to tell the script some
> details.

hmm, that could be the real killer argument - evolutionarily 
sort out the guys who're too dumb to write proper buildscripts ;-)


cu

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-03  6:43 FatELF patches Eric Windisch
  2009-11-03 11:21 ` Bernd Petrovitsch
@ 2009-11-10 10:21 ` Enrico Weigelt
  1 sibling, 0 replies; 47+ messages in thread
From: Enrico Weigelt @ 2009-11-10 10:21 UTC (permalink / raw)
  To: linux-kernel; +Cc: eric

* Eric Windisch <eric@grokthis.net> wrote:

Hi,

> I have customers which operate low-memory x86 virtual machine instances.
> Until recently, these ran with as little as 64MB of RAM.  Many customers
> have chosen 32-bit distributions for these systems, but would like the
> flexibility of scaling beyond 4GB of memory.  These customers would like
> the choice of migrating to 64-bit without having to reinstall their
> distribution.

Assuming those are reasonably critical production systems, you won't
get around either a specially tailored distro/package manager (where
somebody has already done the vast amount of testing of the upgrade
process) or doing it all manually. Either way you'll (at least
temporarily) need a multilib system or jails.

I don't see where FatELF will give you special help here.

> Furthermore, I'm involved in several "cloud computing" initiatives,
> including interoperability efforts.  There has been discussion of
> assuring portability of virtual machine images across varying
> infrastructure services.  I could see how FatELF could be part of a
> solution to this problem, enabling a single image to function against
> host services running a variety of architectures.

Drop that idea. Better create images for each target platform.
Let an automated build system handle that. (If you need one,
feel free to contact me off-list.)

Want to migrate a running VM to a different arch? Forget it. You
won't get around processor emulation; better use some VP like Java,
LLVM, etc.


cu

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-02 17:52         ` Ryan C. Gordon
  2009-11-02 18:53           ` Alan Cox
@ 2009-11-10 11:27           ` Enrico Weigelt
  2009-11-10 12:40             ` Bernd Petrovitsch
  1 sibling, 1 reply; 47+ messages in thread
From: Enrico Weigelt @ 2009-11-10 11:27 UTC (permalink / raw)
  To: linux-kernel

* Ryan C. Gordon <icculus@icculus.org> wrote:

> It's true that /bin/ls would double in size (although I'm sure at least 
> the download saves some of this in compression). But how much of, say, 
> Gnome or OpenOffice or Doom 3 is executable code? These things would be 
> nowhere near "vastly" bigger.

OO takes about 140 MB for binaries at my site. Now just multiply it by 
the number of targets you'd like to support.

Gnome stuff also tends to be quite fat.
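The multiplication being suggested here is easy to make concrete - a trivial sketch (the 140 MB figure is the one quoted above; the four-target count is an illustrative assumption, not from the thread):

```shell
# Rough FatELF footprint estimate: per-arch binary payload times targets.
# 140 MB is the OpenOffice figure quoted above; 4 targets (e.g. x86,
# amd64, ppc, arm) is an assumed example.
PER_ARCH_MB=140
TARGETS=4
echo "fat binary payload: $((PER_ARCH_MB * TARGETS)) MB"
```

So the on-disk cost scales linearly with every architecture added to the fat binary, which is the core of the size objection.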

> > 	- Assumes data files are not dependant on binary (often not true)
> 
> Turns out that /usr/sbin/hald's cache file was. That would need to be 
> fixed, which is trivial, but in my virtual machine test I had it delete 
> and regenerate the file on each boot as a fast workaround.

Well, hald (and the dbus stuff, for that matter) is a misdesign, so we
shouldn't count it here ;-P
 
> Testing doesn't really change with what I'm describing. If you want to 
> ship a program for PowerPC and x86, you still need to test it on PowerPC 
> and x86, no matter how you distribute or launch it.

BUT: you have to test the whole combination on dozens of targets.
And it in no way relieves you from testing dozens of different distros.

If you want one binary package for many different targets, go for 
autopackage, LSM, etc.

> Yes, that is true for software shipped via yum, which does not encompass 
> all the software you may want to run on your system. I'm not arguing 
> against package management.

Why not fix the package?

> True. If I try to run a PowerPC binary on a Sparc, it fails in any 
> circumstance. I recognize the goal of this post was to shoot down every 
> single point, but you can't see a scenario where this adds a benefit? Even 
> in a world that's still running 32-bit web browsers on _every major 
> operating system_ because some crucial plugins aren't 64-bit yet?

The root of the evil is plugins - even worse: binary-only plugins.

Let's just take browsers: is there any damn good reason for not putting
those things into their own process (9P provides a fine IPC for that),
besides stupidity and laziness of certain devs (yes, this explicitly
includes the Mozilla guys)?
 
> > - Ship web browser plugins that work out of the box with multiple
> >   platforms.
> > 	- yum install just works, and there is a search path in firefox
> > 	  etc
> 
> So it's better to have a thousand little unique solutions to the same 
> problem? Everything has a search path (except things that don't), and all 
> of those search paths are set up in the same way (except things that 
> aren't). Do we really need to have every single program screwing around 
> with their own personal spiritual successor to the CLASSPATH environment 
> variable?

You don't like $PATH? Use a unionfs and let an installer / package
manager handle proper setups.

Yes, on Linux (contrary to Plan 9) this (AFAIK) still requires root
privileges, but there are ways around it.

> > - Ship kernel drivers for multiple processors in one file.
> > 	- Not useful see separate downloads
> 
> Pain in the butt see "which installer is right for me?"   :)

It even gets worse: you need different modules for different kernel
versions *and* kernel configs. Kernel image and modules strictly 
belong together - it's in fact *one* kernel that just happens to be 
split off into several files so parts of it can be loaded on-demand.
 
> I don't want to get into a holy war about out-of-tree kernel drivers, 
> because I'm totally on board with getting drivers into the mainline. But 
> it doesn't change the fact that I downloaded the wrong nvidia drivers the 
> other day because I accidentally grabbed the ia32 package instead of the 
> amd64 one. So much for saving bandwidth.

NVidia is a bad reference here. These folks simply don't get their
stuff stable, instead playing around w/ ugly code obfuscation.
No mercy for those jerks.

I'm strongly in favour of prohibiting proprietary kernel drivers.
 
> I wasn't paying attention. But lots of people wouldn't know which to pick 
> even if they were. Nvidia, etc, could certainly put everything in one 
> shell script and choose for you, but now we're back at square one again.

If NV wants to stick in their binary crap, they'll have to bite the
bullet of maintaining proper packaging. The fault is on their side,
not on Linux's.

> > - Transition to a new architecture in incremental steps. 
> > 	- IFF the CPU supports both old and new
> 
> A lateral move would be painful (although Apple just did this very thing 
> with a FatELF-style solution, albeit with the help of an emulator), but if 
> we're talking about the most common case at the moment, x86 to amd64, it's 
> not a serious concern.

This is a specific case, which could be handled easily in userland, IMHO.

> Why install Gimp by default if I'm not an artist? Because disk space is 
> cheap in the configurations I'm talking about and it's better to have it 
> just in case, for the 1% of users that will want it. A desktop, laptop or 
> server can swallow a few megabytes to clean up some awkward design 
> decisions, like the /lib64 thing.

What's so especially bad about the multilib approach?

> A few more megabytes installed may cut down on the support load for 
> distributions when some old 32 bit program refuses to start at all.

The distro could simply provide a few compat packages.
It could even use a hooked-up ld.so which does the appropriate checks
and notifies the package manager if some 32-bit libs are missing.
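A minimal sketch of the check such a hook could perform: filter the dynamic linker's resolution report for unresolved libraries. (In a real hook this would pipe `ldd "$binary"`; here a canned sample - the library names are fabricated for illustration - is filtered so the pipeline itself can be shown.)

```shell
# Report shared libraries the dynamic linker cannot resolve.
# A real hook would feed this from: ldd "$binary"
missing_libs() {
    # keep only the names of libraries marked "not found"
    awk '/not found/ { print $1 }'
}

# Canned sample of ldd-style output (fabricated for illustration):
sample_ldd_output='	linux-gate.so.1 (0xb7f3c000)
	libgtk-x11-2.0.so.0 => not found
	libc.so.6 => /lib32/libc.so.6 (0xb7d6e000)'

printf '%s\n' "$sample_ldd_output" | missing_libs
```

The package manager could then map each reported soname back to the 32-bit compat package providing it.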

> > - One hard drive partition can be booted on different machines with
> >   different CPU architectures, for development and experimentation. Same
> >   root file system, different kernel and CPU architecture. 
> > 
> > 	- Now we are getting desperate.
> 
> It's not like this is unheard of. Apple is selling this very thing for 129 
> bucks a copy.

Distro issue.
You need to have all packages installed for each supported arch *and*
all applications must be capable of handling different byte orders or
type sizes in their data.

> > - Prepare your app on a USB stick for sneakernet, know it'll work on
> >   whatever Linux box you are likely to plug it into.
> > 
> > 	- No I don't because of the dependancies, architecture ordering
> > 	  of data files, lack of testing on each platform and the fact
> > 	  architecture isn't sufficient to define a platform
> 
> Yes, it's not a silver bullet. Fedora will not be promising binaries that 
> run on every Unix box on the planet.
> 
> But the guy with the USB stick? He probably knows the details of every 
> machine he wants to plug it into...

Then he's most likely capable of maintaining a multiarch distro.
Leaving out binary application data (see above), it's not such a big
deal - just work-intensive. Using FatELF most likely increases that work.

> It's possible to ship binaries that don't depend on a specific 
> distribution, or preinstalled dependencies, beyond the existance of a 
> glibc that was built in the last five years or so. I do it every day. It's 
> not unreasonable, if you aren't part of the package management network, to 
> make something that will run, generically on "Linux."

Good - so why do you need FatELF then?

> There are programs I support that I just simply won't bother moving to 
> amd64 because it just complicates things for the end user, for example.

Why don't you just solve that in userland?

> That is anecdotal, and I apologize for that. But I'm not the only 
> developer that's not in an apt repository, and all of these rebuttals are 
> anecdotal: "I just use yum [...because I don't personally care about 
> Debian users]."

Can't you just make up your own repo? Is it so hard?
I can only speak for Gentoo - overlays are quite convenient there.
 

cu

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-10 10:10   ` Enrico Weigelt
@ 2009-11-10 12:15     ` Bernd Petrovitsch
  0 siblings, 0 replies; 47+ messages in thread
From: Bernd Petrovitsch @ 2009-11-10 12:15 UTC (permalink / raw)
  To: weigelt; +Cc: linux-kernel

On Tue, 2009-11-10 at 11:10 +0100, Enrico Weigelt wrote:
> * Bernd Petrovitsch <bernd@firmix.at> wrote:
> 
> > The only remotely useful benefit in the long run I can imagine is: The
> > permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
> > the alternatives are applicable without reading the generated
> > configure.sh (and config.log) to guess how to tell the script some
> > details.
> 
> hmm, that could be the real killer argument - evolutionarily 
> sort out the guys who're too dumb to write proper buildscripts ;-)
Obviously your irony detector triggered;-)

	Bernd
-- 
Firmix Software GmbH                   http://www.firmix.at/
mobil: +43 664 4416156                 fax: +43 1 7890849-55
          Embedded Linux Development and Services



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-10 11:27           ` Enrico Weigelt
@ 2009-11-10 12:40             ` Bernd Petrovitsch
  2009-11-10 13:00               ` Enrico Weigelt
  0 siblings, 1 reply; 47+ messages in thread
From: Bernd Petrovitsch @ 2009-11-10 12:40 UTC (permalink / raw)
  To: weigelt; +Cc: linux-kernel

On Tue, 2009-11-10 at 12:27 +0100, Enrico Weigelt wrote:
> * Ryan C. Gordon <icculus@icculus.org> wrote:
[...] 
> > True. If I try to run a PowerPC binary on a Sparc, it fails in any 
> > circumstance. I recognize the goal of this post was to shoot down every 
If tools like qemu support PowerPC or Sparc (as they already do some
dialects of ARM), you can run it through that (on any hardware where
qemu itself runs[0]).
And if you have binfmt_misc, you can start it like any other "native"
program.
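The binfmt_misc point can be made concrete. Below is a sketch of the registration line that tells the kernel to hand 32-bit big-endian PowerPC ELF binaries to user-mode qemu; the magic/mask values follow those commonly shipped with qemu's binfmt configuration scripts (double-check against your qemu version), and the interpreter path is an assumption for your system:

```shell
# binfmt_misc registration, format:
#   :name:type:offset:magic:mask:interpreter:flags
# Magic matches the ELF header of a 32-bit big-endian PowerPC binary
# (\x7fELF, ELFCLASS32, ELFDATA2MSB, e_machine = EM_PPC = 0x14);
# the mask blanks out the OSABI and e_type low bits.
reg=':qemu-ppc:M::\x7f\x45\x4c\x46\x01\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x14:\xff\xff\xff\xff\xff\xff\xff\xfc\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/bin/qemu-ppc:'

# Actually registering requires root and CONFIG_BINFMT_MISC,
# so it is left commented out here:
# printf '%s\n' "$reg" > /proc/sys/fs/binfmt_misc/register

printf '%s\n' "$reg"
```

Once registered, `./some-ppc-binary` on an x86 box starts transparently under emulation - no FatELF container needed.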

> > single point, but you can't see a scenario where this adds a benefit? Even 
> > in a world that's still running 32-bit web browsers on _every major 
> > operating system_ because some crucial plugins aren't 64-bit yet?
>
> The root of evil are plugins - even worse: binary-only plugins.
> 
> Let's just take browsers: is there any damn good reason for not putting
> those things into their own process (9P provides a fine IPC for that),
> besides stupidity and lazyness of certain devs (yes, this explicitly
> includes mozilla guys) ?
Or implement running 32bit plugins from a 64bit browser.

[...]  
> > > - Prepare your app on a USB stick for sneakernet, know it'll work on
> > >   whatever Linux box you are likely to plug it into.
A Trojan-horse deployer's paradise, BTW.

[....]
> > It's possible to ship binaries that don't depend on a specific 
> > distribution, or preinstalled dependencies, beyond the existance of a 
> > glibc that was built in the last five years or so. I do it every day. It's 
ACK, just link it statically and be done (but then you have other
problems, e.g. "$LIB has an exploit and I have to rebuild and redeploy
$BINARY").

[...]
> > That is anecdotal, and I apologize for that. But I'm not the only 
> > developer that's not in an apt repository, and all of these rebuttals are 
> > anecdotal: "I just use yum [...because I don't personally care about 
> > Debian users]."
It's not that the other way around makes much of a difference :-(
And if there is some really interested Debian user, he can package it
for Debian.
IMHO better no package for $DISTRIBUTION than only bad (and old) ones,
because some packager (who is not necessarily a core programmer) has
only very little personal interest in the .deb version.

> Can't just just make up your own repo ? Is it so hard ?
> Just can speak for Gentoo - overlays are quite convenient here.
And it's not that hard to write .spec files for RPM (for average
packages - e.g. the kernel and gcc are somewhat different). Just take a
small one (e.g. the one from "trace") and start from there.
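For reference, a minimal .spec skeleton of the kind being suggested, for an average autotools-style package (all names, paths, and the packager identity below are placeholders, not from any real package):

```spec
Name:           hello-tool
Version:        1.0
Release:        1%{?dist}
Summary:        Example packaging skeleton
License:        GPLv2
Source0:        %{name}-%{version}.tar.gz

%description
Placeholder description for an average autotools-style package.

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/hello-tool

%changelog
* Tue Nov 10 2009 Packager <packager@example.com> - 1.0-1
- Initial package
```

Feed it to `rpmbuild -ba hello-tool.spec` with the tarball in the SOURCES directory and you have a distributable package.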
SCNR,
	Bernd

[0] I never tried to cascade qemu though.



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-10 12:40             ` Bernd Petrovitsch
@ 2009-11-10 13:00               ` Enrico Weigelt
  2009-11-10 13:19                 ` Alan Cox
  0 siblings, 1 reply; 47+ messages in thread
From: Enrico Weigelt @ 2009-11-10 13:00 UTC (permalink / raw)
  To: linux-kernel

* Bernd Petrovitsch <bernd@firmix.at> wrote:

> > The root of evil are plugins - even worse: binary-only plugins.
> > 
> > Let's just take browsers: is there any damn good reason for not putting
> > those things into their own process (9P provides a fine IPC for that),
> > besides stupidity and lazyness of certain devs (yes, this explicitly
> > includes mozilla guys) ?
> Or implement running 32bit plugins from a 64bit browser.

And land in a nightmare: you have to create a kind of in-process jail
so that all referenced 32-bit libs get properly resolved.

Better to drop the whole idea of plugins altogether.


cu

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: FatELF patches...
  2009-11-10 13:00               ` Enrico Weigelt
@ 2009-11-10 13:19                 ` Alan Cox
  0 siblings, 0 replies; 47+ messages in thread
From: Alan Cox @ 2009-11-10 13:19 UTC (permalink / raw)
  To: weigelt; +Cc: linux-kernel

> > Or implement running 32bit plugins from a 64bit browser.
> 
> And land in an nightmare: you have to create an kind of in-process jail, 
> so all referenced 32bit lib references get properly emulated.

You instead want them out of process. Something that most distributions
seem to have managed.

http://gwenole.beauchesne.info//en/projects/nspluginwrapper

Alan

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2009-11-10 13:17 UTC | newest]

Thread overview: 47+ messages
-- links below jump to the message on this page --
2009-11-03  6:43 FatELF patches Eric Windisch
2009-11-03 11:21 ` Bernd Petrovitsch
2009-11-10 10:10   ` Enrico Weigelt
2009-11-10 12:15     ` Bernd Petrovitsch
2009-11-10 10:21 ` Enrico Weigelt
  -- strict thread matches above, loose matches on Subject: below --
2009-10-30  2:19 Ryan C. Gordon
2009-10-30  5:42 ` Rayson Ho
2009-10-30 14:54   ` Ryan C. Gordon
2009-11-01 19:20 ` David Hagood
2009-11-01 20:28   ` Måns Rullgård
2009-11-01 20:59     ` Ryan C. Gordon
2009-11-01 21:15       ` Måns Rullgård
2009-11-01 21:35         ` Ryan C. Gordon
2009-11-02  4:58           ` Valdis.Kletnieks
2009-11-02 15:14             ` Ryan C. Gordon
2009-11-03 14:54               ` Valdis.Kletnieks
2009-11-03 18:30                 ` Matt Thrailkill
2009-11-01 22:08         ` Rayson Ho
2009-11-02  1:17           ` Ryan C. Gordon
2009-11-02  3:27             ` Rayson Ho
2009-11-02  0:01       ` Alan Cox
2009-11-02  2:21         ` Ryan C. Gordon
2009-11-02  6:17           ` Julien BLACHE
2009-11-02 18:18             ` Ryan C. Gordon
2009-11-02 18:59               ` Julien BLACHE
2009-11-02 19:08               ` Jesús Guerrero
2009-11-02  6:27           ` David Miller
2009-11-02 15:32             ` Ryan C. Gordon
2009-11-02  9:16           ` Alan Cox
2009-11-02 17:39             ` david
2009-11-02 17:44               ` Alan Cox
2009-11-02 19:56               ` Krzysztof Halasa
2009-11-02 20:11                 ` david
2009-11-02 20:33                   ` Krzysztof Halasa
2009-11-03  1:35                   ` Mikael Pettersson
2009-11-02 15:40           ` Diego Calleja
2009-11-02 17:52         ` Ryan C. Gordon
2009-11-02 18:53           ` Alan Cox
2009-11-02 20:13             ` Ryan C. Gordon
2009-11-04  1:09               ` Ryan C. Gordon
2009-11-10 11:27           ` Enrico Weigelt
2009-11-10 12:40             ` Bernd Petrovitsch
2009-11-10 13:00               ` Enrico Weigelt
2009-11-10 13:19                 ` Alan Cox
2009-11-02 16:11       ` Chris Adams
2009-11-01 20:40   ` Ryan C. Gordon
2009-11-10 10:04   ` Enrico Weigelt
