* Realtime capable userspace?
@ 2013-04-14 9:01 Stanislav Meduna
From: Stanislav Meduna @ 2013-04-14 9:01 UTC (permalink / raw)
To: linux-rt-users@vger.kernel.org
Hi,
I know this is not a kernel-related question, but I think the
concentration of people doing realtime here is probably the highest.
As I learned the hard way, glibc is not (yet) realtime capable, and
the first changes are only now starting to trickle in. Even if I try
to fix the known places, who knows what is locked, where, and for how
long.
What userspace (libc / threads / malloc) are you using for realtime
applications? From my quick research:
- glibc:
  - priority-inheritance mutexes are implemented, but never used
    inside glibc itself
  - not easy to replace the internally used malloc (so that e.g.
    strdup() uses another one), at least when using shared
    libraries
- uClibc:
  - threads taken from glibc; malloc uses non-PI-aware locks
- newlib:
  - no PI support, no futexes; mutexes done via kill(threadid, signal)
- tcmalloc (gperftools):
  - no PI locks used, though that could probably be added easily.
    Not sure about the state of the software in general - it has
    probably seen much less real-world testing than the glibc
    implementation
I ended up with my own low-level implementation of the synchronization
primitives I need, built on futexes (fortunately no full condvars or
semaphores are needed), and with serializing all malloc/free/... calls
via a PI-aware mutex (I don't care about performance when contended
here, but I do care about priority inversion).
What do you use for non-trivial applications?
Thanks
--
Stano
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: Realtime capable userspace?
From: Gilles Chanteperdrix @ 2013-04-14 10:08 UTC (permalink / raw)
To: Stanislav Meduna; +Cc: linux-rt-users@vger.kernel.org
On 04/14/2013 11:01 AM, Stanislav Meduna wrote:
> Hi,
>
> I know this is not a kernel-related question, but I think the
> concentration of people doing realtime here is probably the highest.
>
> As I learned the hard way, glibc is not (yet) realtime capable, and
> the first changes are only now starting to trickle in. Even if I try
> to fix the known places, who knows what is locked, where, and for how
> long.
>
> What userspace (libc / threads / malloc) are you using for realtime
> applications? From my quick research:
PI mutexes are not sufficient to make malloc deterministic. I guess if
you are looking for a deterministic malloc, you should have a look at TLSF.
You can probably get the replacement malloc used by glibc via
LD_PRELOAD techniques (but admittedly that would be just a hack).
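The invocation would look something like this (librtmalloc.so is a
stand-in name for a replacement allocator - e.g. a TLSF-based one -
built as a shared object exporting malloc/free/realloc):

```shell
# Preloading puts the replacement first in symbol lookup order, so
# glibc-internal callers such as strdup() resolve malloc to it too.
LD_PRELOAD=/usr/local/lib/librtmalloc.so ./rt_app
```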
--
Gilles.
* Re: Realtime capable userspace?
From: Stanislav Meduna @ 2013-04-14 11:46 UTC (permalink / raw)
To: Gilles Chanteperdrix; +Cc: linux-rt-users@vger.kernel.org
On 14.04.2013 12:08, Gilles Chanteperdrix wrote:
> PI mutexes are not sufficient to make malloc deterministic. I guess if
> you are looking for a deterministic malloc, you should have a look at TLSF.
Yup, I know that malloc from glibc is a complicated beast and should
probably not be used at all during hard realtime operation.
Right now my biggest problem is the possibility of priority inversion.
My time constraints are in the 20 - 50 ms range, and I hope that
malloc itself - while not deterministic - is not unbounded and won't
approach those times. I am of course using mlockall and everything.
OTOH a priority inversion is a sure recipe for missing the deadlines.
Thank you for the pointer to TLSF, this is a very interesting project
and I will surely investigate it.
> You can probably get the replacement malloc used by glibc via
> LD_PRELOAD techniques (but admittedly that would be just a hack).
Yes, this is possible, but it is indeed quite a hack.
Thanks
--
Stano
* Re: Realtime capable userspace?
From: Gilles Chanteperdrix @ 2013-04-14 11:55 UTC (permalink / raw)
To: Stanislav Meduna; +Cc: linux-rt-users@vger.kernel.org
On 04/14/2013 01:46 PM, Stanislav Meduna wrote:
> On 14.04.2013 12:08, Gilles Chanteperdrix wrote:
>
>> PI mutexes are not sufficient to make malloc deterministic. I guess if
>> you are looking for a deterministic malloc, you should have a look at TLSF.
>
> Yup, I know that malloc from glibc is a complicated beast and should
> probably not be used at all during hard realtime operation.
>
> Right now my biggest problem is the possibility of priority inversion.
> My time constraints are in the 20 - 50 ms range, and I hope that
> malloc itself - while not deterministic - is not unbounded and won't
> approach those times. I am of course using mlockall and everything.
> OTOH a priority inversion is a sure recipe for missing the deadlines.
mlockall does not really help to reduce malloc's worst-case latency.
When malloc needs to grow the memory pool it manages, it requests
memory from the kernel using brk or mmap. Since in the Linux kernel
that memory is shared with the filesystem cache, the worst-case
latency for satisfying such a request is the time of a disk flush,
so, given the latencies of good old hard disks, it is well into the
milliseconds range. mlockall simply forces the kernel to satisfy the
whole request immediately instead of allocating it little by little
on page faults, and so may in fact increase the latency of the malloc
call.
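(A common countermeasure, sketched here assuming glibc's malloc and a
hypothetical 8 MB worst-case heap: tell the allocator to never return
memory to the kernel, then touch a worst-case-sized block once before
the deadlines matter, so no brk/mmap can happen later.)

```c
#include <malloc.h>
#include <stdlib.h>
#include <string.h>

void prefault_heap(size_t worst_case)  /* e.g. 8 * 1024 * 1024 */
{
    mallopt(M_TRIM_THRESHOLD, -1); /* never shrink the heap via brk */
    mallopt(M_MMAP_MAX, 0);        /* never satisfy requests via mmap */

    char *buf = malloc(worst_case);
    if (buf != NULL) {
        memset(buf, 0, worst_case); /* fault every page in up front */
        free(buf);                  /* stays cached in the arena */
    }
}
```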
--
Gilles.
* Re: Realtime capable userspace?
From: Stanislav Meduna @ 2013-04-14 12:20 UTC (permalink / raw)
To: Gilles Chanteperdrix; +Cc: linux-rt-users@vger.kernel.org
On 14.04.2013 13:55, Gilles Chanteperdrix wrote:
> When malloc needs to grow the memory pool it manages, it requests
> memory from the kernel using brk or mmap. Since in the Linux kernel
> that memory is shared with the filesystem cache, the worst-case
> latency for satisfying such a request is the time of a disk flush,
> so, given the latencies of good old hard disks, it is well into the
> milliseconds range. mlockall simply forces the kernel to satisfy the
> whole request immediately instead of allocating it little by little
> on page faults, and so may in fact increase the latency of the malloc
> call.
Yes, the theoretical possibility is definitely there, but I don't
think it can happen in real life in our environment. Given the
processes running there, there is almost no way the kernel would not
have enough non-dirty pages to satisfy a brk or mmap request. It is
also highly improbable that brk/mmap are needed at all - during the
phase where the deadlines matter, the only way they can happen is
heap fragmentation causing something to temporarily overrun the brk
limit.
You are right that as soon as the kernel has to flush some dirty
pages, the malloc latency goes through the roof - we have a CF card
instead of a good old hard disk, but tens of ms are definitely
possible.
I am using mlockall() mainly as a guard against non-dirty code and
data pages being thrown out and then taking a major page fault and
waiting for the kernel to read them back. That is a much more likely
scenario.
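That usage amounts to something like the following sketch (the stack
prefault size is a hypothetical worst case; mlockall() needs
CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK):

```c
#include <string.h>
#include <sys/mman.h>

#define STACK_PREFAULT (64 * 1024)  /* hypothetical worst-case depth */

/* Pin all current and future mappings so code and data pages cannot
 * be evicted and later major-fault back in from slow storage, and
 * touch the stack once so its pages exist before the RT phase. */
int lock_and_prefault(void)
{
    unsigned char dummy[STACK_PREFAULT];

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return -1;
    memset(dummy, 0, sizeof(dummy)); /* fault in the stack pages */
    return 0;
}
```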
Regards
--
Stano