From: Terry Barnaby <terry1@beam.ltd.uk>
To: linux-kernel@vger.kernel.org
Subject: Problem with mlockall() and Threads: memory usage
Date: Tue, 18 May 2004 11:10:13 +0100
Message-ID: <40A9E105.7080907@beam.ltd.uk>
Hi,
We have a problem with a soft real-time program that uses mlockall
to improve its latency.
The basic problem, which can be seen with a simple test example, is
that if a program uses a large amount of memory, multiple threads,
and mlockall(), its physical memory usage goes through the roof.
This problem/feature is present on RedHat 7.3 (2.4.x kernel, libc
user-level threads), RedHat 9 (2.4.20, kernel threads) and Fedora Core 2 (2.6.5).
Our simple test program first does a mlockall(MCL_CURRENT | MCL_FUTURE),
mallocs 10 MBytes and then creates 8 threads, all of which pause.
The memory usage with the mlockall() call is:
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
2251 pts/1 SL 0:00 0 2 95921 95924 37.3 ./t2 8
The memory usage without the mlockall() call is:
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
2275 pts/1 S 0:00 0 2 95929 11152 4.3 ./t2 8
It appears that the kernel is allocating physical memory for each
thread's shared data areas rather than allocating just the one
shared area.
Are we doing something wrong?
Is this the correct behaviour?
Is this a kernel or glibc bug?
Example code follows:
Terry
/*******************************************************************************
* T2.c Test Threads
* T.Barnaby, BEAM Ltd, 18/5/04
*******************************************************************************
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/statfs.h>
const int memSize = (10 * 1024 * 1024);

void* threadFunc(void* arg){
	while(1){
		printf("Thread::function: loop: Pid(%d)\n", getpid());
		pause();
	}
}

void test1(int n){
	pthread_t* threads;
	void* mem;
	int i;

	threads = (pthread_t*)malloc(n * sizeof(pthread_t));
	mem = malloc(memSize);
	memset(mem, 0, memSize);
	printf("Mem: %p\n", mem);

	for(i = 0; i < n; i++){
		pthread_create(&threads[i], 0, threadFunc, 0);
	}
	pause();
}

int main(int argc, char** argv){
	if(argc != 2){
		fprintf(stderr, "Usage: t2 <numberOfThreads>\n");
		return 1;
	}

#ifndef ZAP
	// Lock in all of the pages of this application
	if(mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
		fprintf(stderr, "Warning: unable to lock in memory pages\n");
#endif
	test1(atoi(argv[1]));
	return 0;
}
--
Dr Terry Barnaby BEAM Ltd
Phone: +44 1454 324512 Northavon Business Center, Dean Rd
Fax: +44 1454 313172 Yate, Bristol, BS37 5NH, UK
Email: terry@beam.ltd.uk Web: www.beam.ltd.uk
BEAM for: Visually Impaired X-Terminals, Parallel Processing, Software
"Tandems are twice the fun !"
Thread overview: 7+ messages
2004-05-18 10:10 Terry Barnaby [this message]
[not found] ` <041501c43cc9$28aaed00$c8de11cc@black>
2004-05-18 12:51 ` Problem with mlockall() and Threads: memory usage Terry Barnaby
2004-05-18 20:38 ` David Schwartz
2004-05-19 8:45 ` Terry Barnaby
2004-05-20 0:23 ` Elladan
2004-05-21 14:28 ` Terry Barnaby
2004-05-21 14:28 ` Terry Barnaby