* Memory allocator semantics
@ 2014-01-02 20:33 Paul E. McKenney
2014-01-03 3:39 ` Josh Triplett
2014-02-08 10:27 ` Pekka Enberg
0 siblings, 2 replies; 16+ messages in thread
From: Paul E. McKenney @ 2014-01-02 20:33 UTC (permalink / raw)
To: linux-mm, linux-kernel; +Cc: cl, penberg, mpm
Hello!
From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
allocators would deal with the following sort of race:
A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
However, my guess is that this should be considered an accident of the
current implementation rather than a feature. The reason for this is
that I cannot see how you would usefully do (A) above without also allowing
(B) and (C) below, both of which look to me to be quite destructive:
B. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
CPU 1: r2 = ACCESS_ONCE(shared_x); if (r2) kfree(r2);
CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
This results in the memory being on two different freelists.
C. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
CPU 1: r2 = ACCESS_ONCE(shared_x); r2->a = 1; r2->b = 2;
CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
CPU 3: r4 = kmalloc(...); r4->s = 3; r4->t = 4;
This results in the memory being used by two different CPUs,
each of which believe that they have sole access.
But I thought I should ask the experts.
So, am I correct that kernel hackers are required to avoid "drive-by"
kfree()s of kmalloc()ed memory?
Thanx, Paul
PS. To the question "Why would anyone care about (A)?", the answer
is "Inquiring programming-language memory-model designers want
to know."
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
* Re: Memory allocator semantics
  2014-01-02 20:33 Memory allocator semantics Paul E. McKenney
@ 2014-01-03 3:39 ` Josh Triplett
  2014-01-03 5:14   ` Paul E. McKenney
  2014-02-08 10:27 ` Pekka Enberg
  1 sibling, 1 reply; 16+ messages in thread
From: Josh Triplett @ 2014-01-03 3:39 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: linux-mm, linux-kernel, cl, penberg, mpm

On Thu, Jan 02, 2014 at 12:33:20PM -0800, Paul E. McKenney wrote:
> Hello!
>
> From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> allocators would deal with the following sort of race:
>
> A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
>
> However, my guess is that this should be considered an accident of the
> current implementation rather than a feature. The reason for this is
> that I cannot see how you would usefully do (A) above without also allowing
> (B) and (C) below, both of which look to me to be quite destructive:

(A) only seems OK if "gp" is guaranteed to be NULL beforehand, *and* if
no other CPUs can possibly do what CPU 1 is doing in parallel. Even
then, it seems questionable how this could ever be used successfully in
practice.

This seems similar to the TCP simultaneous-SYN case: theoretically
possible, absurd in practice.

> B. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(shared_x); if (r2) kfree(r2);
>
>    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
>
>    This results in the memory being on two different freelists.

That's a straightforward double-free bug. You need some kind of
synchronization there to ensure that only one call to kfree occurs.

> C. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(shared_x); r2->a = 1; r2->b = 2;
>
>    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
>
>    CPU 3: r4 = kmalloc(...); r4->s = 3; r4->t = 4;
>
>    This results in the memory being used by two different CPUs,
>    each of which believe that they have sole access.

This is not OK either: CPU 2 has called kfree on a pointer that CPU 1
still considers alive, and again, the CPUs haven't used any form of
synchronization to prevent that.

> But I thought I should ask the experts.
>
> So, am I correct that kernel hackers are required to avoid "drive-by"
> kfree()s of kmalloc()ed memory?

Don't kfree things that are in use, and synchronize to make sure all
CPUs agree about "in use", yes.

> PS. To the question "Why would anyone care about (A)?", the answer
>     is "Inquiring programming-language memory-model designers want
>     to know."

I find myself wondering about the original form of the question, since
I'd hope that programming-language memory-model designers would
understand the need for synchronization around reclaiming memory.

- Josh Triplett
* Re: Memory allocator semantics
  2014-01-03 3:39 ` Josh Triplett
@ 2014-01-03 5:14   ` Paul E. McKenney
  2014-01-03 5:47     ` Josh Triplett
  0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2014-01-03 5:14 UTC (permalink / raw)
To: Josh Triplett; +Cc: linux-mm, linux-kernel, cl, penberg, mpm

On Thu, Jan 02, 2014 at 07:39:07PM -0800, Josh Triplett wrote:
> On Thu, Jan 02, 2014 at 12:33:20PM -0800, Paul E. McKenney wrote:
> > Hello!
> >
> > From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> > allocators would deal with the following sort of race:
> >
> > A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
> >
> > However, my guess is that this should be considered an accident of the
> > current implementation rather than a feature. The reason for this is
> > that I cannot see how you would usefully do (A) above without also allowing
> > (B) and (C) below, both of which look to me to be quite destructive:
>
> (A) only seems OK if "gp" is guaranteed to be NULL beforehand, *and* if
> no other CPUs can possibly do what CPU 1 is doing in parallel. Even
> then, it seems questionable how this could ever be used successfully in
> practice.
>
> This seems similar to the TCP simultaneous-SYN case: theoretically
> possible, absurd in practice.

Heh!

Agreed on the absurdity, but my quick look at slab/slob/slub leads
me to believe that the current Linux kernel would actually do something
sensible in this case. But only because they don't touch the actual
memory. DYNIX/ptx would have choked on it, IIRC.

And the fact that slab/slob/slub seem to handle (A) seemed bizarre
enough to be worth asking the question.

> > B. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(shared_x); if (r2) kfree(r2);
> >
> >    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
> >
> >    This results in the memory being on two different freelists.
>
> That's a straightforward double-free bug. You need some kind of
> synchronization there to ensure that only one call to kfree occurs.

Yep!

> > C. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(shared_x); r2->a = 1; r2->b = 2;
> >
> >    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
> >
> >    CPU 3: r4 = kmalloc(...); r4->s = 3; r4->t = 4;
> >
> >    This results in the memory being used by two different CPUs,
> >    each of which believe that they have sole access.
>
> This is not OK either: CPU 2 has called kfree on a pointer that CPU 1
> still considers alive, and again, the CPUs haven't used any form of
> synchronization to prevent that.

Agreed.

> > But I thought I should ask the experts.
> >
> > So, am I correct that kernel hackers are required to avoid "drive-by"
> > kfree()s of kmalloc()ed memory?
>
> Don't kfree things that are in use, and synchronize to make sure all
> CPUs agree about "in use", yes.

For example, ensure that each kmalloc() happens unambiguously before the
corresponding kfree(). ;-)

> > PS. To the question "Why would anyone care about (A)?", the answer
> >     is "Inquiring programming-language memory-model designers want
> >     to know."
>
> I find myself wondering about the original form of the question, since
> I'd hope that programming-language memory-model designers would
> understand the need for synchronization around reclaiming memory.

I think that they do now. The original form of the question was as
follows:

    But my intuition at the moment is that allowing racing
    accesses and providing pointer atomicity leads to a much more
    complicated and harder to explain model. You have to deal
    with initialization issues and OOTA problems without atomics.
    And the implementation has to deal with cross-thread visibility
    of malloc meta-information, which I suspect will be expensive.
    You now essentially have to be able to malloc() in one thread,
    transfer the pointer via a race to another thread, and free()
    in the second thread. That's hard unless malloc() and free()
    always lock (as I presume they do in the Linux kernel).

But the first I heard of it was something like litmus test (A) above.

(And yes, I already disabused them of their notion that Linux kernel
kmalloc() and kfree() always lock.)

							Thanx, Paul
* Re: Memory allocator semantics
  2014-01-03 5:14 ` Paul E. McKenney
@ 2014-01-03 5:47   ` Josh Triplett
  2014-01-03 7:57     ` Paul E. McKenney
  0 siblings, 1 reply; 16+ messages in thread
From: Josh Triplett @ 2014-01-03 5:47 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: linux-mm, linux-kernel, cl, penberg, mpm

On Thu, Jan 02, 2014 at 09:14:17PM -0800, Paul E. McKenney wrote:
> On Thu, Jan 02, 2014 at 07:39:07PM -0800, Josh Triplett wrote:
> > On Thu, Jan 02, 2014 at 12:33:20PM -0800, Paul E. McKenney wrote:
> > > Hello!
> > >
> > > From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> > > allocators would deal with the following sort of race:
> > >
> > > A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
> > >
> > >    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
> > >
> > > However, my guess is that this should be considered an accident of the
> > > current implementation rather than a feature. The reason for this is
> > > that I cannot see how you would usefully do (A) above without also allowing
> > > (B) and (C) below, both of which look to me to be quite destructive:
> >
> > (A) only seems OK if "gp" is guaranteed to be NULL beforehand, *and* if
> > no other CPUs can possibly do what CPU 1 is doing in parallel. Even
> > then, it seems questionable how this could ever be used successfully in
> > practice.
> >
> > This seems similar to the TCP simultaneous-SYN case: theoretically
> > possible, absurd in practice.
>
> Heh!
>
> Agreed on the absurdity, but my quick look at slab/slob/slub leads
> me to believe that the current Linux kernel would actually do something
> sensible in this case. But only because they don't touch the actual
> memory. DYNIX/ptx would have choked on it, IIRC.

Based on this and the discussion at the bottom of your mail, I think I'm
starting to understand what you're getting at; this seems like less of a
question of "could this usefully happen?" and more "does the allocator
know how to protect *itself*?".

> > > But I thought I should ask the experts.
> > >
> > > So, am I correct that kernel hackers are required to avoid "drive-by"
> > > kfree()s of kmalloc()ed memory?
> >
> > Don't kfree things that are in use, and synchronize to make sure all
> > CPUs agree about "in use", yes.
>
> For example, ensure that each kmalloc() happens unambiguously before the
> corresponding kfree(). ;-)

That too, yes. :)

> > > PS. To the question "Why would anyone care about (A)?", the answer
> > >     is "Inquiring programming-language memory-model designers want
> > >     to know."
> >
> > I find myself wondering about the original form of the question, since
> > I'd hope that programming-language memory-model designers would
> > understand the need for synchronization around reclaiming memory.
>
> I think that they do now. The original form of the question was as
> follows:
>
>     But my intuition at the moment is that allowing racing
>     accesses and providing pointer atomicity leads to a much more
>     complicated and harder to explain model. You have to deal
>     with initialization issues and OOTA problems without atomics.
>     And the implementation has to deal with cross-thread visibility
>     of malloc meta-information, which I suspect will be expensive.
>     You now essentially have to be able to malloc() in one thread,
>     transfer the pointer via a race to another thread, and free()
>     in the second thread. That's hard unless malloc() and free()
>     always lock (as I presume they do in the Linux kernel).

As mentioned above, this makes much more sense now. This seems like a
question of how the allocator protects its *own* internal data
structures, rather than whether the allocator can usefully be used for
the cases you mentioned above. And that's a reasonable question to ask
if you're building a language memory model for a language with malloc
and free as part of its standard library.

To roughly sketch out some general rules that might work as a set of
scalable design constraints for malloc/free:

- malloc may always return any unallocated memory; it has no obligation
  to avoid returning memory that was just recently freed. In fact, an
  implementation may even be particularly *likely* to return memory that
  was just recently freed, for performance reasons. Any program which
  assumes a delay or a memory barrier before memory reuse is broken.

- Multiple calls to free on the same memory will produce undefined
  behavior, and in particular may result in a well-known form of
  security hole. free has no obligation to protect itself against
  multiple calls to free on the same memory, unless otherwise specified
  as part of some debugging mode. This holds whether the calls to free
  occur in series or in parallel (e.g. two or more calls racing with
  each other). It is the job of the calling program to avoid calling
  free multiple times on the same memory, such as via reference
  counting, RCU, or some other mechanism.

- It is the job of the calling program to avoid calling free on memory
  that is currently in use, such as via reference counting, RCU, or some
  other mechanism. Accessing memory after reclaiming it will produce
  undefined behavior. This includes calling free on memory concurrently
  with accesses to that memory (e.g. via a race).

- malloc and free must work correctly when concurrently called from
  multiple threads without synchronization. Any synchronization or
  memory barriers required internally by the implementations must be
  provided by the implementation. However, an implementation is not
  required to use any particular form of synchronization, such as
  locking or memory barriers, and the caller of malloc or free may not
  make any assumptions about the ordering of its own operations
  surrounding those calls. For example, an implementation may use
  per-CPU memory pools, and only use synchronization when it cannot
  satisfy an allocation request from the current CPU's pool.

- An implementation of free must support being called on any memory
  allocated by the same implementation of malloc, at any time, from any
  CPU. In particular, a call to free on memory freshly malloc'd on
  another CPU, with no intervening synchronization between the two
  calls, must succeed and reclaim the memory. However, the actual calls
  to malloc and free must not race with each other; in particular, the
  pointer value returned by malloc is not valid (for access or for calls
  to free) until malloc itself has returned. (Such a race would require
  the caller of free to divine the value returned by malloc before
  malloc returns.) Thus, the implementations of malloc and free may
  safely assume a data dependency (via the returned pointer value
  itself) between the call to malloc and the call to free; such a
  dependency may allow further assumptions about memory ordering based
  on the platform's memory model.

> But the first I heard of it was something like litmus test (A) above.
>
> (And yes, I already disabused them of their notion that Linux kernel
> kmalloc() and kfree() always lock.)

That much does seem like an easy assumption to make if you've never
thought about how to write a scalable allocator. The concept of per-CPU
memory pools is the very first thing that should come to mind when
thinking the words "scalable" and "allocator" in the same sentence, but
first you have to get programming-language memory-model designers
thinking the word "scalable". ;)

- Josh Triplett
* Re: Memory allocator semantics
  2014-01-03 5:47 ` Josh Triplett
@ 2014-01-03 7:57   ` Paul E. McKenney
  2014-01-03 8:42     ` Josh Triplett
  0 siblings, 1 reply; 16+ messages in thread
From: Paul E. McKenney @ 2014-01-03 7:57 UTC (permalink / raw)
To: Josh Triplett; +Cc: linux-mm, linux-kernel, cl, penberg, mpm

On Thu, Jan 02, 2014 at 09:47:00PM -0800, Josh Triplett wrote:
> On Thu, Jan 02, 2014 at 09:14:17PM -0800, Paul E. McKenney wrote:
> > On Thu, Jan 02, 2014 at 07:39:07PM -0800, Josh Triplett wrote:
> > > On Thu, Jan 02, 2014 at 12:33:20PM -0800, Paul E. McKenney wrote:
> > > > Hello!
> > > >
> > > > From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> > > > allocators would deal with the following sort of race:
> > > >
> > > > A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
> > > >
> > > >    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
> > > >
> > > > However, my guess is that this should be considered an accident of the
> > > > current implementation rather than a feature. The reason for this is
> > > > that I cannot see how you would usefully do (A) above without also allowing
> > > > (B) and (C) below, both of which look to me to be quite destructive:
> > >
> > > (A) only seems OK if "gp" is guaranteed to be NULL beforehand, *and* if
> > > no other CPUs can possibly do what CPU 1 is doing in parallel. Even
> > > then, it seems questionable how this could ever be used successfully in
> > > practice.
> > >
> > > This seems similar to the TCP simultaneous-SYN case: theoretically
> > > possible, absurd in practice.
> >
> > Heh!
> >
> > Agreed on the absurdity, but my quick look at slab/slob/slub leads
> > me to believe that the current Linux kernel would actually do something
> > sensible in this case. But only because they don't touch the actual
> > memory. DYNIX/ptx would have choked on it, IIRC.
>
> Based on this and the discussion at the bottom of your mail, I think I'm
> starting to understand what you're getting at; this seems like less of a
> question of "could this usefully happen?" and more "does the allocator
> know how to protect *itself*?".

Or perhaps "What are the rules when a concurrent program interacts with
a memory allocator?" Like the set you provided below. ;-)

> > > > But I thought I should ask the experts.
> > > >
> > > > So, am I correct that kernel hackers are required to avoid "drive-by"
> > > > kfree()s of kmalloc()ed memory?
> > >
> > > Don't kfree things that are in use, and synchronize to make sure all
> > > CPUs agree about "in use", yes.
> >
> > For example, ensure that each kmalloc() happens unambiguously before the
> > corresponding kfree(). ;-)
>
> That too, yes. :)
>
> > > > PS. To the question "Why would anyone care about (A)?", the answer
> > > >     is "Inquiring programming-language memory-model designers want
> > > >     to know."
> > >
> > > I find myself wondering about the original form of the question, since
> > > I'd hope that programming-language memory-model designers would
> > > understand the need for synchronization around reclaiming memory.
> >
> > I think that they do now. The original form of the question was as
> > follows:
> >
> >     But my intuition at the moment is that allowing racing
> >     accesses and providing pointer atomicity leads to a much more
> >     complicated and harder to explain model. You have to deal
> >     with initialization issues and OOTA problems without atomics.
> >     And the implementation has to deal with cross-thread visibility
> >     of malloc meta-information, which I suspect will be expensive.
> >     You now essentially have to be able to malloc() in one thread,
> >     transfer the pointer via a race to another thread, and free()
> >     in the second thread. That's hard unless malloc() and free()
> >     always lock (as I presume they do in the Linux kernel).
>
> As mentioned above, this makes much more sense now. This seems like a
> question of how the allocator protects its *own* internal data
> structures, rather than whether the allocator can usefully be used for
> the cases you mentioned above. And that's a reasonable question to ask
> if you're building a language memory model for a language with malloc
> and free as part of its standard library.
>
> To roughly sketch out some general rules that might work as a set of
> scalable design constraints for malloc/free:
>
> - malloc may always return any unallocated memory; it has no obligation
>   to avoid returning memory that was just recently freed. In fact, an
>   implementation may even be particularly *likely* to return memory that
>   was just recently freed, for performance reasons. Any program which
>   assumes a delay or a memory barrier before memory reuse is broken.

Agreed.

> - Multiple calls to free on the same memory will produce undefined
>   behavior, and in particular may result in a well-known form of
>   security hole. free has no obligation to protect itself against
>   multiple calls to free on the same memory, unless otherwise specified
>   as part of some debugging mode. This holds whether the calls to free
>   occur in series or in parallel (e.g. two or more calls racing with
>   each other). It is the job of the calling program to avoid calling
>   free multiple times on the same memory, such as via reference
>   counting, RCU, or some other mechanism.

Yep!

> - It is the job of the calling program to avoid calling free on memory
>   that is currently in use, such as via reference counting, RCU, or some
>   other mechanism. Accessing memory after reclaiming it will produce
>   undefined behavior. This includes calling free on memory concurrently
>   with accesses to that memory (e.g. via a race).

Yep!

> - malloc and free must work correctly when concurrently called from
>   multiple threads without synchronization. Any synchronization or
>   memory barriers required internally by the implementations must be
>   provided by the implementation. However, an implementation is not
>   required to use any particular form of synchronization, such as
>   locking or memory barriers, and the caller of malloc or free may not
>   make any assumptions about the ordering of its own operations
>   surrounding those calls. For example, an implementation may use
>   per-CPU memory pools, and only use synchronization when it cannot
>   satisfy an allocation request from the current CPU's pool.

Yep, though in C/C++11 this comes out something very roughly like:
"A free() involving a given byte of memory synchronizes-with a later
alloc() returning a block containing that block of memory."

> - An implementation of free must support being called on any memory
>   allocated by the same implementation of malloc, at any time, from any
>   CPU. In particular, a call to free on memory freshly malloc'd on
>   another CPU, with no intervening synchronization between the two
>   calls, must succeed and reclaim the memory. However, the actual calls
>   to malloc and free must not race with each other; in particular, the
>   pointer value returned by malloc is not valid (for access or for calls
>   to free) until malloc itself has returned. (Such a race would require
>   the caller of free to divine the value returned by malloc before
>   malloc returns.) Thus, the implementations of malloc and free may
>   safely assume a data dependency (via the returned pointer value
>   itself) between the call to malloc and the call to free; such a
>   dependency may allow further assumptions about memory ordering based
>   on the platform's memory model.

I would be OK requiring the user to have a happens-before relationship
between an allocation and a subsequent matching free.

> > But the first I heard of it was something like litmus test (A) above.
> >
> > (And yes, I already disabused them of their notion that Linux kernel
> > kmalloc() and kfree() always lock.)
>
> That much does seem like an easy assumption to make if you've never
> thought about how to write a scalable allocator. The concept of per-CPU
> memory pools is the very first thing that should come to mind when
> thinking the words "scalable" and "allocator" in the same sentence, but
> first you have to get programming-language memory-model designers
> thinking the word "scalable". ;)

Well, given that it was not obvious to me the first year or so that I
was doing parallel programming, I cannot give them too much trouble.
Of course, that was some time ago. ;-)

							Thanx, Paul
* Re: Memory allocator semantics 2014-01-03 7:57 ` Paul E. McKenney @ 2014-01-03 8:42 ` Josh Triplett 0 siblings, 0 replies; 16+ messages in thread From: Josh Triplett @ 2014-01-03 8:42 UTC (permalink / raw) To: Paul E. McKenney; +Cc: linux-mm, linux-kernel, cl, penberg, mpm On Thu, Jan 02, 2014 at 11:57:27PM -0800, Paul E. McKenney wrote: > On Thu, Jan 02, 2014 at 09:47:00PM -0800, Josh Triplett wrote: > > On Thu, Jan 02, 2014 at 09:14:17PM -0800, Paul E. McKenney wrote: > > > On Thu, Jan 02, 2014 at 07:39:07PM -0800, Josh Triplett wrote: > > > > On Thu, Jan 02, 2014 at 12:33:20PM -0800, Paul E. McKenney wrote: > > > > > Hello! > > > > > > > > > > From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory > > > > > allocators would deal with the following sort of race: > > > > > > > > > > A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1; > > > > > > > > > > CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2); > > > > > > > > > > However, my guess is that this should be considered an accident of the > > > > > current implementation rather than a feature. The reason for this is > > > > > that I cannot see how you would usefully do (A) above without also allowing > > > > > (B) and (C) below, both of which look to me to be quite destructive: > > > > > > > > (A) only seems OK if "gp" is guaranteed to be NULL beforehand, *and* if > > > > no other CPUs can possibly do what CPU 1 is doing in parallel. Even > > > > then, it seems questionable how this could ever be used successfully in > > > > practice. > > > > > > > > This seems similar to the TCP simultaneous-SYN case: theoretically > > > > possible, absurd in practice. > > > > > > Heh! > > > > > > Agreed on the absurdity, but my quick look and slab/slob/slub leads > > > me to believe that current Linux kernel would actually do something > > > sensible in this case. But only because they don't touch the actual > > > memory. DYNIX/ptx would have choked on it, IIRC. 
> > > > Based on this and the discussion at the bottom of your mail, I think I'm > > starting to understand what you're getting at; this seems like less of a > > question of "could this usefully happen?" and more "does the allocator > > know how to protect *itself*?". > > Or perhaps "What are the rules when a concurrent program interacts with > a memory allocator?" Like the set you provided below. ;-) :) > > > > > But I thought I should ask the experts. > > > > > > > > > > So, am I correct that kernel hackers are required to avoid "drive-by" > > > > > kfree()s of kmalloc()ed memory? > > > > > > > > Don't kfree things that are in use, and synchronize to make sure all > > > > CPUs agree about "in use", yes. > > > > > > For example, ensure that each kmalloc() happens unambiguously before the > > > corresponding kfree(). ;-) > > > > That too, yes. :) > > > > > > > PS. To the question "Why would anyone care about (A)?", then answer > > > > > is "Inquiring programming-language memory-model designers want > > > > > to know." > > > > > > > > I find myself wondering about the original form of the question, since > > > > I'd hope that programming-languge memory-model designers would > > > > understand the need for synchronization around reclaiming memory. > > > > > > I think that they do now. The original form of the question was as > > > follows: > > > > > > But my intuition at the moment is that allowing racing > > > accesses and providing pointer atomicity leads to a much more > > > complicated and harder to explain model. You have to deal > > > with initialization issues and OOTA problems without atomics. > > > And the implementation has to deal with cross-thread visibility > > > of malloc meta-information, which I suspect will be expensive. > > > You now essentially have to be able to malloc() in one thread, > > > transfer the pointer via a race to another thread, and free() > > > in the second thread. 
That’s hard unless malloc() and free() > > > always lock (as I presume they do in the Linux kernel). > > > > As mentioned above, this makes much more sense now. This seems like a > > question of how the allocator protects its *own* internal data > > structures, rather than whether the allocator can usefully be used for > > the cases you mentioned above. And that's a reasonable question to ask > > if you're building a language memory model for a language with malloc > > and free as part of its standard library. > > > > To roughly sketch out some general rules that might work as a set of > > scalable design constraints for malloc/free: > > > > - malloc may always return any unallocated memory; it has no obligation > > to avoid returning memory that was just recently freed. In fact, an > > implementation may even be particularly *likely* to return memory that > > was just recently freed, for performance reasons. Any program which > > assumes a delay or a memory barrier before memory reuse is broken. > > Agreed. > > > - Multiple calls to free on the same memory will produce undefined > > behavior, and in particular may result in a well-known form of > > security hole. free has no obligation to protect itself against > > multiple calls to free on the same memory, unless otherwise specified > > as part of some debugging mode. This holds whether the calls to free > > occur in series or in parallel (e.g. two or more calls racing with > > each other). It is the job of the calling program to avoid calling > > free multiple times on the same memory, such as via reference > > counting, RCU, or some other mechanism. > > Yep! > > > - It is the job of the calling program to avoid calling free on memory > > that is currently in use, such as via reference counting, RCU, or some > > other mechanism. Accessing memory after reclaiming it will produce > > undefined behavior. This includes calling free on memory concurrently > > with accesses to that memory (e.g. via a race). > > Yep! 
> > > - malloc and free must work correctly when concurrently called from > > multiple threads without synchronization. Any synchronization or > > memory barriers required internally by the implementations must be > > provided by the implementation. However, an implementation is not > > required to use any particular form of synchronization, such as > > locking or memory barriers, and the caller of malloc or free may not > > make any assumptions about the ordering of its own operations > > surrounding those calls. For example, an implementation may use > > per-CPU memory pools, and only use synchronization when it cannot > > satisfy an allocation request from the current CPU's pool. > > Yep, though in C/C++11 this comes out something very roughly like: > "A free() involving a given byte of memory synchronizes-with a later > alloc() returning a block containing that block of memory." Gah. That doesn't seem like a memory-ordering guarantee the allocator should have to provide for its caller, and I can easily think of allocator structures that wouldn't guarantee it without the inclusion of an explicit memory barrier. (Also, I assume the last use of "block" in that sentence should have been "byte"?) > > - An implementation of free must support being called on any memory > > allocated by the same implementation of malloc, at any time, from any > > CPU. In particular, a call to free on memory freshly malloc'd on > > another CPU, with no intervening synchronization between the two > > calls, must succeed and reclaim the memory. However, the actual calls > > to malloc and free must not race with each other; in particular, the > > pointer value returned by malloc is not valid (for access or for calls > > to free) until malloc itself has returned. (Such a race would require > > the caller of free to divine the value returned by malloc before > > malloc returns.) 
> >   Thus, the implementations of malloc and free may safely assume a
> >   data dependency (via the returned pointer value itself) between the
> >   call to malloc and the call to free; such a dependency may allow
> >   further assumptions about memory ordering based on the platform's
> >   memory model.
>
> I would be OK requiring the user to have a happens-before relationship
> between an allocation and a subsequent matching free.

I *think* that's the right formal relationship I'm suggesting, yes.
Mostly I'm suggesting that since the only sensible way for the pointer
value you're passing to free to have come into existence is to have
received it from malloc at some point in the past (as opposed to
magically divining its value), it's not so much "requiring the user to
have a happens-before relationship" as "not allowing the user to
randomly make up pointers and free them, even if they happen to match
the value being returned from a concurrent malloc".  Because that's
the only way I can think of for malloc and free to race on the same
pointer.

In any case, let me know if the rules sketched above end up proving
useful as part of the requirements for malloc/free.

- Josh Triplett
* Re: Memory allocator semantics
  2014-01-02 20:33 Memory allocator semantics Paul E. McKenney
  2014-01-03  3:39 ` Josh Triplett
@ 2014-02-08 10:27 ` Pekka Enberg
  2014-02-09  2:00   ` Paul E. McKenney
  2014-02-10 19:07   ` Christoph Lameter
  1 sibling, 2 replies; 16+ messages in thread

From: Pekka Enberg @ 2014-02-08 10:27 UTC (permalink / raw)
To: paulmck, linux-mm, linux-kernel; +Cc: cl, penberg, mpm

Hi Paul,

On 01/02/2014 10:33 PM, Paul E. McKenney wrote:
> From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> allocators would deal with the following sort of race:
>
> A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
>
> However, my guess is that this should be considered an accident of the
> current implementation rather than a feature.  The reason for this is
> that I cannot see how you would usefully do (A) above without also
> allowing (B) and (C) below, both of which look to me to be quite
> destructive:
>
> B. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(shared_x); if (r2) kfree(r2);
>
>    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
>
>    This results in the memory being on two different freelists.
>
> C. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
>
>    CPU 1: r2 = ACCESS_ONCE(shared_x); r2->a = 1; r2->b = 2;
>
>    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
>
>    CPU 3: r4 = kmalloc(...); r4->s = 3; r4->t = 4;
>
>    This results in the memory being used by two different CPUs,
>    each of which believe that they have sole access.
>
> But I thought I should ask the experts.
>
> So, am I correct that kernel hackers are required to avoid "drive-by"
> kfree()s of kmalloc()ed memory?

So to be completely honest, I don't understand what is the race in
(A) that concerns the *memory allocator*.  I also don't know what the
memory allocator can do in (B) and (C), which look like double-free
and use-after-free, respectively, to me.
:-)

			Pekka
* Re: Memory allocator semantics
  2014-02-08 10:27 ` Pekka Enberg
@ 2014-02-09  2:00   ` Paul E. McKenney
  2014-02-11  8:50     ` Pekka Enberg
  2014-02-10 19:07   ` Christoph Lameter
  1 sibling, 1 reply; 16+ messages in thread

From: Paul E. McKenney @ 2014-02-09 2:00 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm, linux-kernel, cl, penberg, mpm

On Sat, Feb 08, 2014 at 12:27:37PM +0200, Pekka Enberg wrote:
> Hi Paul,
>
> On 01/02/2014 10:33 PM, Paul E. McKenney wrote:
> > From what I can see, the Linux-kernel's SLAB, SLOB, and SLUB memory
> > allocators would deal with the following sort of race:
> >
> > A. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(gp) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(gp); if (r2) kfree(r2);
> >
> > However, my guess is that this should be considered an accident of the
> > current implementation rather than a feature.  The reason for this is
> > that I cannot see how you would usefully do (A) above without also
> > allowing (B) and (C) below, both of which look to me to be quite
> > destructive:
> >
> > B. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(shared_x); if (r2) kfree(r2);
> >
> >    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
> >
> >    This results in the memory being on two different freelists.
> >
> > C. CPU 0: r1 = kmalloc(...); ACCESS_ONCE(shared_x) = r1;
> >
> >    CPU 1: r2 = ACCESS_ONCE(shared_x); r2->a = 1; r2->b = 2;
> >
> >    CPU 2: r3 = ACCESS_ONCE(shared_x); if (r3) kfree(r3);
> >
> >    CPU 3: r4 = kmalloc(...); r4->s = 3; r4->t = 4;
> >
> >    This results in the memory being used by two different CPUs,
> >    each of which believe that they have sole access.
> >
> > But I thought I should ask the experts.
> >
> > So, am I correct that kernel hackers are required to avoid "drive-by"
> > kfree()s of kmalloc()ed memory?
>
> So to be completely honest, I don't understand what is the race in
> (A) that concerns the *memory allocator*.
> I also don't know what the memory allocator can do in (B) and (C),
> which look like double-free and use-after-free, respectively, to
> me. :-)

From what I can see, (A) works by accident, but is kind of useless
because you allocate and free the memory without touching it.  (B) and
(C) are the lightest touches I could imagine, and as you say, both are
bad.  So I believe that it is reasonable to prohibit (A).

Or is there some use for (A) that I am missing?

							Thanx, Paul
* Re: Memory allocator semantics
  2014-02-09  2:00 ` Paul E. McKenney
@ 2014-02-11  8:50   ` Pekka Enberg
  2014-02-11 12:09     ` Paul E. McKenney
  2014-02-11 18:43     ` Christoph Lameter
  0 siblings, 2 replies; 16+ messages in thread

From: Pekka Enberg @ 2014-02-11 8:50 UTC (permalink / raw)
To: Paul McKenney; +Cc: linux-mm@kvack.org, LKML, Christoph Lameter, Matt Mackall

Hi Paul,

On Sun, Feb 9, 2014 at 4:00 AM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> From what I can see, (A) works by accident, but is kind of useless
> because you allocate and free the memory without touching it.  (B) and
> (C) are the lightest touches I could imagine, and as you say, both are
> bad.  So I believe that it is reasonable to prohibit (A).
>
> Or is there some use for (A) that I am missing?

So again, there's nothing in (A) that the memory allocator is
concerned about.  kmalloc() makes no guarantees whatsoever about the
visibility of "r1" across CPUs.  If you're saying that there's an
implicit barrier between kmalloc() and kfree(), that's an unintended
side-effect, not a design decision AFAICT.

			Pekka
* Re: Memory allocator semantics
  2014-02-11  8:50 ` Pekka Enberg
@ 2014-02-11 12:09   ` Paul E. McKenney
  2014-02-11 18:43   ` Christoph Lameter
  1 sibling, 0 replies; 16+ messages in thread

From: Paul E. McKenney @ 2014-02-11 12:09 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-mm@kvack.org, LKML, Christoph Lameter, Matt Mackall

On Tue, Feb 11, 2014 at 10:50:24AM +0200, Pekka Enberg wrote:
> Hi Paul,
>
> On Sun, Feb 9, 2014 at 4:00 AM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > From what I can see, (A) works by accident, but is kind of useless
> > because you allocate and free the memory without touching it.  (B) and
> > (C) are the lightest touches I could imagine, and as you say, both are
> > bad.  So I believe that it is reasonable to prohibit (A).
> >
> > Or is there some use for (A) that I am missing?
>
> So again, there's nothing in (A) that the memory allocator is
> concerned about.  kmalloc() makes no guarantees whatsoever about the
> visibility of "r1" across CPUs.  If you're saying that there's an
> implicit barrier between kmalloc() and kfree(), that's an unintended
> side-effect, not a design decision AFAICT.

Thank you.  That was what I suspected, and I believe that it is a
completely reasonable response to (A).

							Thanx, Paul
* Re: Memory allocator semantics
  2014-02-11  8:50 ` Pekka Enberg
  2014-02-11 12:09   ` Paul E. McKenney
@ 2014-02-11 18:43   ` Christoph Lameter
  2014-02-14 17:30     ` Paul E. McKenney
  1 sibling, 1 reply; 16+ messages in thread

From: Christoph Lameter @ 2014-02-11 18:43 UTC (permalink / raw)
To: Pekka Enberg; +Cc: Paul McKenney, linux-mm@kvack.org, LKML, Matt Mackall

On Tue, 11 Feb 2014, Pekka Enberg wrote:

> So again, there's nothing in (A) that the memory allocator is
> concerned about.  kmalloc() makes no guarantees whatsoever about the
> visibility of "r1" across CPUs.  If you're saying that there's an
> implicit barrier between kmalloc() and kfree(), that's an unintended
> side-effect, not a design decision AFAICT.

I am not sure that this side effect necessarily happens.  The SLUB
fastpath does not disable interrupts and only uses a cmpxchg without
lock semantics.
* Re: Memory allocator semantics
  2014-02-11 18:43 ` Christoph Lameter
@ 2014-02-14 17:30   ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread

From: Paul E. McKenney @ 2014-02-14 17:30 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm@kvack.org, LKML, Matt Mackall

On Tue, Feb 11, 2014 at 12:43:35PM -0600, Christoph Lameter wrote:
> On Tue, 11 Feb 2014, Pekka Enberg wrote:
>
> > So again, there's nothing in (A) that the memory allocator is
> > concerned about.  kmalloc() makes no guarantees whatsoever about the
> > visibility of "r1" across CPUs.  If you're saying that there's an
> > implicit barrier between kmalloc() and kfree(), that's an unintended
> > side-effect, not a design decision AFAICT.
>
> I am not sure that this side effect necessarily happens.  The SLUB
> fastpath does not disable interrupts and only uses a cmpxchg without
> lock semantics.

That tells me what I need to know.  Users should definitely not try a
"drive-by kfree()" of something that was concurrently allocated.  ;-)

							Thanx, Paul
* Re: Memory allocator semantics
  2014-02-08 10:27 ` Pekka Enberg
  2014-02-09  2:00   ` Paul E. McKenney
@ 2014-02-10 19:07   ` Christoph Lameter
  2014-02-11 12:14     ` Paul E. McKenney
  1 sibling, 1 reply; 16+ messages in thread

From: Christoph Lameter @ 2014-02-10 19:07 UTC (permalink / raw)
To: Pekka Enberg; +Cc: paulmck, linux-mm, linux-kernel, penberg, mpm

On Sat, 8 Feb 2014, Pekka Enberg wrote:

> So to be completely honest, I don't understand what is the race in (A)
> that concerns the *memory allocator*.  I also don't know what the
> memory allocator can do in (B) and (C), which look like double-free
> and use-after-free, respectively, to me. :-)

Well it seems to be some academic mind game to me.

Does an invocation of the allocator have barrier semantics or not?
* Re: Memory allocator semantics
  2014-02-10 19:07 ` Christoph Lameter
@ 2014-02-11 12:14   ` Paul E. McKenney
  2014-02-11 13:20     ` Pekka Enberg
  0 siblings, 1 reply; 16+ messages in thread

From: Paul E. McKenney @ 2014-02-11 12:14 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-mm, linux-kernel, penberg, mpm

On Mon, Feb 10, 2014 at 01:07:58PM -0600, Christoph Lameter wrote:
> On Sat, 8 Feb 2014, Pekka Enberg wrote:
>
> > So to be completely honest, I don't understand what is the race in (A)
> > that concerns the *memory allocator*.  I also don't know what the
> > memory allocator can do in (B) and (C), which look like double-free
> > and use-after-free, respectively, to me. :-)
>
> Well it seems to be some academic mind game to me.
>
> Does an invocation of the allocator have barrier semantics or not?

In case (A), I don't see why the allocator should have barrier semantics
from kmalloc() to a matching kfree().  I would argue that any needed
barrier semantics must be provided by the caller.

In contrast, from kfree() to a kmalloc() returning some of the kfree()ed
memory, I believe the kfree()/kmalloc() implementation must do any needed
synchronization and ordering.  But that is a different set of examples,
for example, this one:

	CPU 0				CPU 1
	p->a = 42;			q = kmalloc(...); /* returning p */
	kfree(p);			q->a = 5;
					BUG_ON(q->a != 5);

Unlike the situation with (A), (B), and (C), in this case I believe
that it is kfree()'s and kmalloc()'s responsibility to ensure that
the BUG_ON() never triggers.

Make sense?

							Thanx, Paul
* Re: Memory allocator semantics
  2014-02-11 12:14 ` Paul E. McKenney
@ 2014-02-11 13:20   ` Pekka Enberg
  2014-02-11 15:01     ` Paul E. McKenney
  0 siblings, 1 reply; 16+ messages in thread

From: Pekka Enberg @ 2014-02-11 13:20 UTC (permalink / raw)
To: Paul McKenney; +Cc: Christoph Lameter, linux-mm@kvack.org, LKML, Matt Mackall

On Tue, Feb 11, 2014 at 2:14 PM, Paul E. McKenney
<paulmck@linux.vnet.ibm.com> wrote:
> In contrast, from kfree() to a kmalloc() returning some of the kfree()ed
> memory, I believe the kfree()/kmalloc() implementation must do any needed
> synchronization and ordering.  But that is a different set of examples,
> for example, this one:
>
>	CPU 0				CPU 1
>	p->a = 42;			q = kmalloc(...); /* returning p */
>	kfree(p);			q->a = 5;
>					BUG_ON(q->a != 5);
>
> Unlike the situation with (A), (B), and (C), in this case I believe
> that it is kfree()'s and kmalloc()'s responsibility to ensure that
> the BUG_ON() never triggers.
>
> Make sense?

I'm not sure...

It's the caller's responsibility not to touch "p" after it's handed
over to kfree() - otherwise that's a "use-after-free" error.  If
there's some reordering going on here, I'm tempted to blame the caller
for lack of locking.

			Pekka
* Re: Memory allocator semantics
  2014-02-11 13:20 ` Pekka Enberg
@ 2014-02-11 15:01   ` Paul E. McKenney
  0 siblings, 0 replies; 16+ messages in thread

From: Paul E. McKenney @ 2014-02-11 15:01 UTC (permalink / raw)
To: Pekka Enberg; +Cc: Christoph Lameter, linux-mm@kvack.org, LKML, Matt Mackall

On Tue, Feb 11, 2014 at 03:20:01PM +0200, Pekka Enberg wrote:
> On Tue, Feb 11, 2014 at 2:14 PM, Paul E. McKenney
> <paulmck@linux.vnet.ibm.com> wrote:
> > In contrast, from kfree() to a kmalloc() returning some of the kfree()ed
> > memory, I believe the kfree()/kmalloc() implementation must do any needed
> > synchronization and ordering.  But that is a different set of examples,
> > for example, this one:
> >
> >	CPU 0				CPU 1
> >	p->a = 42;			q = kmalloc(...); /* returning p */
> >	kfree(p);			q->a = 5;
> >					BUG_ON(q->a != 5);
> >
> > Unlike the situation with (A), (B), and (C), in this case I believe
> > that it is kfree()'s and kmalloc()'s responsibility to ensure that
> > the BUG_ON() never triggers.
> >
> > Make sense?
>
> I'm not sure...
>
> It's the caller's responsibility not to touch "p" after it's handed
> over to kfree() - otherwise that's a "use-after-free" error.  If
> there's some reordering going on here, I'm tempted to blame the caller
> for lack of locking.

But if the two callers are unrelated, what locking can they possibly use?

From what I can see, the current implementation prevents the above
BUG_ON() from firing.  If the two CPUs are the same, the CPU will see
its own accesses in order, while if they are different, the
implementation will have had to push the memory through non-CPU-local
data structures, which must have had some heavyweight protection.

							Thanx, Paul
end of thread, other threads:[~2014-02-14 17:30 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
2014-01-02 20:33 Memory allocator semantics Paul E. McKenney
2014-01-03  3:39 ` Josh Triplett
2014-01-03  5:14   ` Paul E. McKenney
2014-01-03  5:47     ` Josh Triplett
2014-01-03  7:57       ` Paul E. McKenney
2014-01-03  8:42         ` Josh Triplett
2014-02-08 10:27 ` Pekka Enberg
2014-02-09  2:00   ` Paul E. McKenney
2014-02-11  8:50     ` Pekka Enberg
2014-02-11 12:09       ` Paul E. McKenney
2014-02-11 18:43       ` Christoph Lameter
2014-02-14 17:30         ` Paul E. McKenney
2014-02-10 19:07   ` Christoph Lameter
2014-02-11 12:14     ` Paul E. McKenney
2014-02-11 13:20       ` Pekka Enberg
2014-02-11 15:01         ` Paul E. McKenney
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).