* [Qemu-devel] memory usage and ioports
@ 2007-11-19 15:20 Samuel Thibault
  2007-11-19 15:34 ` [Qemu-devel] " Samuel Thibault
  2008-02-06 14:24 ` [Qemu-devel] [PATCH] " Samuel Thibault
  0 siblings, 2 replies; 5+ messages in thread
From: Samuel Thibault @ 2007-11-19 15:20 UTC (permalink / raw)
  To: qemu-devel

Hi,

Qemu currently uses six 65536-entry tables of pointers for handling
ioports, which amounts to 3MB on 64-bit machines. There's a comment
that says "XXX: use a two level table to limit memory usage". But
wouldn't it be simpler and more effective to just allocate them
through mmap() and, when a NULL pointer is read, call the default
handlers?
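
(For scale: 6 tables x 65536 entries x 8 bytes per pointer = 3MB.)
A minimal sketch of the idea (names reused from vl.c, otherwise
hypothetical):

    /* Leave the dispatch tables zero-initialized (e.g. in mmap'ed
     * anonymous memory, so pages that are never written stay
     * unbacked) and substitute the default handler at lookup time. */
    static uint32_t ioport_readb(uint32_t address)
    {
        IOPortReadFunc *func = ioport_read_table[0][address];
        if (!func)                        /* port never registered */
            func = default_ioport_readb;
        return func(ioport_opaque[address], address);
    }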

Samuel


* [Qemu-devel] Re: memory usage and ioports
  2007-11-19 15:20 [Qemu-devel] memory usage and ioports Samuel Thibault
@ 2007-11-19 15:34 ` Samuel Thibault
  2007-11-19 16:17   ` Paul Brook
  2008-02-06 14:24 ` [Qemu-devel] [PATCH] " Samuel Thibault
  1 sibling, 1 reply; 5+ messages in thread
From: Samuel Thibault @ 2007-11-19 15:34 UTC (permalink / raw)
  To: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 502 bytes --]

Samuel Thibault, on Mon 19 Nov 2007 15:20:16 +0000, wrote:
> Qemu currently uses six 65536-entry tables of pointers for handling
> ioports, which amounts to 3MB on 64-bit machines. There's a comment
> that says "XXX: use a two level table to limit memory usage". But
> wouldn't it be simpler and more effective to just allocate them
> through mmap() and, when a NULL pointer is read, call the default
> handlers?

For the ioport_opaque array (512KB on 64-bit: 65536 pointers of 8
bytes each), it's much simpler, as the attached patch suggests.

Samuel

[-- Attachment #2: patch --]
[-- Type: text/plain, Size: 743 bytes --]

diff -r 6a6eace79e93 qemu/vl.c
--- qemu/vl.c	Mon Nov 19 15:04:05 2007 +0000
+++ qemu/vl.c	Mon Nov 19 15:31:35 2007 +0000
@@ -139,7 +139,7 @@
 
 const char *bios_dir = CONFIG_QEMU_SHAREDIR;
 char phys_ram_file[1024];
-void *ioport_opaque[MAX_IOPORTS];
+void **ioport_opaque;
 IOPortReadFunc *ioport_read_table[3][MAX_IOPORTS];
 IOPortWriteFunc *ioport_write_table[3][MAX_IOPORTS];
 /* Note: bs_table[MAX_DISKS] is a dummy block driver if none available
@@ -265,6 +265,7 @@ void init_ioports(void)
 {
     int i;
 
+    ioport_opaque = malloc(MAX_IOPORTS * sizeof(*ioport_opaque));
     for(i = 0; i < MAX_IOPORTS; i++) {
         ioport_read_table[0][i] = default_ioport_readb;
         ioport_write_table[0][i] = default_ioport_writeb;


* Re: [Qemu-devel] Re: memory usage and ioports
  2007-11-19 15:34 ` [Qemu-devel] " Samuel Thibault
@ 2007-11-19 16:17   ` Paul Brook
  2007-11-19 16:23     ` Samuel Thibault
  0 siblings, 1 reply; 5+ messages in thread
From: Paul Brook @ 2007-11-19 16:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: Samuel Thibault

On Monday 19 November 2007, Samuel Thibault wrote:
> Samuel Thibault, on Mon 19 Nov 2007 15:20:16 +0000, wrote:
> > Qemu currently uses six 65536-entry tables of pointers for handling
> > ioports, which amounts to 3MB on 64-bit machines. There's a comment
> > that says "XXX: use a two level table to limit memory usage". But
> > wouldn't it be simpler and more effective to just allocate them
> > through mmap() and, when a NULL pointer is read, call the default
> > handlers?
>
> For the ioport_opaque array (512KB on 64-bit: 65536 pointers of 8
> bytes each), it's much simpler, as the attached patch suggests.

AFAICS this makes absolutely no difference to memory usage.

Paul


* Re: [Qemu-devel] Re: memory usage and ioports
  2007-11-19 16:17   ` Paul Brook
@ 2007-11-19 16:23     ` Samuel Thibault
  0 siblings, 0 replies; 5+ messages in thread
From: Samuel Thibault @ 2007-11-19 16:23 UTC (permalink / raw)
  To: Paul Brook; +Cc: qemu-devel

Paul Brook, on Mon 19 Nov 2007 16:17:26 +0000, wrote:
> On Monday 19 November 2007, Samuel Thibault wrote:
> > Samuel Thibault, on Mon 19 Nov 2007 15:20:16 +0000, wrote:
> > > Qemu currently uses six 65536-entry tables of pointers for handling
> > > ioports, which amounts to 3MB on 64-bit machines. There's a comment
> > > that says "XXX: use a two level table to limit memory usage". But
> > > wouldn't it be simpler and more effective to just allocate them
> > > through mmap() and, when a NULL pointer is read, call the default
> > > handlers?
> >
> > For the ioport_opaque array (512KB on 64-bit: 65536 pointers of 8
> > bytes each), it's much simpler, as the attached patch suggests.
> 
> AFAICS this makes absolutely no difference to memory usage.

Ah, sorry, in a Unix environment it indeed doesn't. In an embedded
environment or the like, which has to provide a fully allocated bss
because copy-on-write support isn't available early enough, it does
make a difference.
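
As a generic C illustration (hypothetical variable names, not qemu
code):

    /* A zero-initialized global lands in .bss: it costs nothing in
     * the binary, but a loader without demand-zero / copy-on-write
     * pages has to back the whole array with real memory at startup. */
    void *opaque_static[65536];   /* 512KB committed up front */

    /* A pointer costs only 8 bytes of bss; the 512KB are requested at
     * runtime, where malloc() can obtain lazily-backed pages. */
    void **opaque_dynamic;        /* allocated in an init function */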

Samuel


* [Qemu-devel] [PATCH] memory usage and ioports
  2007-11-19 15:20 [Qemu-devel] memory usage and ioports Samuel Thibault
  2007-11-19 15:34 ` [Qemu-devel] " Samuel Thibault
@ 2008-02-06 14:24 ` Samuel Thibault
  1 sibling, 0 replies; 5+ messages in thread
From: Samuel Thibault @ 2008-02-06 14:24 UTC (permalink / raw)
  To: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 476 bytes --]

Samuel Thibault, on Mon 19 Nov 2007 15:20:16 +0000, wrote:
> Qemu currently uses six 65536-entry tables of pointers for handling
> ioports, which amounts to 3MB on 64-bit machines. There's a comment
> that says "XXX: use a two level table to limit memory usage". But
> wouldn't it be simpler and more effective to just allocate them
> through mmap() and, when a NULL pointer is read, call the default
> handlers?

Here is a patch that does this and indeed saves 3MB on 64-bit
machines: the tables are no longer filled with the default handlers at
startup, so their zero-filled bss pages are never dirtied, and the
lookup paths fall back to the default handlers when they find a NULL
pointer.

Samuel

[-- Attachment #2: patch-qemu-ioport --]
[-- Type: text/plain, Size: 5184 bytes --]

Index: vl.c
===================================================================
RCS file: /sources/qemu/qemu/vl.c,v
retrieving revision 1.403
diff -u -p -r1.403 vl.c
--- vl.c	3 Feb 2008 03:45:47 -0000	1.403
+++ vl.c	6 Feb 2008 14:22:18 -0000
@@ -267,17 +267,29 @@ static void default_ioport_writeb(void *
 static uint32_t default_ioport_readw(void *opaque, uint32_t address)
 {
     uint32_t data;
-    data = ioport_read_table[0][address](ioport_opaque[address], address);
+    IOPortReadFunc *func = ioport_read_table[0][address];
+    if (!func)
+	    func = default_ioport_readb;
+    data = func(ioport_opaque[address], address);
     address = (address + 1) & (MAX_IOPORTS - 1);
-    data |= ioport_read_table[0][address](ioport_opaque[address], address) << 8;
+    func = ioport_read_table[0][address];
+    if (!func)
+	    func = default_ioport_readb;
+    data |= func(ioport_opaque[address], address) << 8;
     return data;
 }
 
 static void default_ioport_writew(void *opaque, uint32_t address, uint32_t data)
 {
-    ioport_write_table[0][address](ioport_opaque[address], address, data & 0xff);
+    IOPortWriteFunc *func = ioport_write_table[0][address];
+    if (!func)
+	    func = default_ioport_writeb;
+    func(ioport_opaque[address], address, data & 0xff);
     address = (address + 1) & (MAX_IOPORTS - 1);
-    ioport_write_table[0][address](ioport_opaque[address], address, (data >> 8) & 0xff);
+    func = ioport_write_table[0][address];
+    if (!func)
+	    func = default_ioport_writeb;
+    func(ioport_opaque[address], address, (data >> 8) & 0xff);
 }
 
 static uint32_t default_ioport_readl(void *opaque, uint32_t address)
@@ -297,16 +309,6 @@ static void default_ioport_writel(void *
 
 static void init_ioports(void)
 {
-    int i;
-
-    for(i = 0; i < MAX_IOPORTS; i++) {
-        ioport_read_table[0][i] = default_ioport_readb;
-        ioport_write_table[0][i] = default_ioport_writeb;
-        ioport_read_table[1][i] = default_ioport_readw;
-        ioport_write_table[1][i] = default_ioport_writew;
-        ioport_read_table[2][i] = default_ioport_readl;
-        ioport_write_table[2][i] = default_ioport_writel;
-    }
 }
 
 /* size is the word size in byte */
@@ -378,11 +380,14 @@ void isa_unassign_ioport(int start, int 
 
 void cpu_outb(CPUState *env, int addr, int val)
 {
+    IOPortWriteFunc *func = ioport_write_table[0][addr];
+    if (!func)
+	    func = default_ioport_writeb;
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "outb: %04x %02x\n", addr, val);
 #endif
-    ioport_write_table[0][addr](ioport_opaque[addr], addr, val);
+    func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
     if (env)
         env->last_io_time = cpu_get_time_fast();
@@ -391,11 +396,14 @@ void cpu_outb(CPUState *env, int addr, i
 
 void cpu_outw(CPUState *env, int addr, int val)
 {
+    IOPortWriteFunc *func = ioport_write_table[1][addr];
+    if (!func)
+	    func = default_ioport_writew;
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "outw: %04x %04x\n", addr, val);
 #endif
-    ioport_write_table[1][addr](ioport_opaque[addr], addr, val);
+    func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
     if (env)
         env->last_io_time = cpu_get_time_fast();
@@ -404,11 +412,14 @@ void cpu_outw(CPUState *env, int addr, i
 
 void cpu_outl(CPUState *env, int addr, int val)
 {
+    IOPortWriteFunc *func = ioport_write_table[2][addr];
+    if (!func)
+	    func = default_ioport_writel;
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "outl: %04x %08x\n", addr, val);
 #endif
-    ioport_write_table[2][addr](ioport_opaque[addr], addr, val);
+    func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
     if (env)
         env->last_io_time = cpu_get_time_fast();
@@ -418,7 +429,10 @@ void cpu_outl(CPUState *env, int addr, i
 int cpu_inb(CPUState *env, int addr)
 {
     int val;
-    val = ioport_read_table[0][addr](ioport_opaque[addr], addr);
+    IOPortReadFunc *func = ioport_read_table[0][addr];
+    if (!func)
+	    func = default_ioport_readb;
+    val = func(ioport_opaque[addr], addr);
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "inb : %04x %02x\n", addr, val);
@@ -433,7 +447,10 @@ int cpu_inb(CPUState *env, int addr)
 int cpu_inw(CPUState *env, int addr)
 {
     int val;
-    val = ioport_read_table[1][addr](ioport_opaque[addr], addr);
+    IOPortReadFunc *func = ioport_read_table[1][addr];
+    if (!func)
+	    func = default_ioport_readw;
+    val = func(ioport_opaque[addr], addr);
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "inw : %04x %04x\n", addr, val);
@@ -448,7 +465,10 @@ int cpu_inw(CPUState *env, int addr)
 int cpu_inl(CPUState *env, int addr)
 {
     int val;
-    val = ioport_read_table[2][addr](ioport_opaque[addr], addr);
+    IOPortReadFunc *func = ioport_read_table[2][addr];
+    if (!func)
+	    func = default_ioport_readl;
+    val = func(ioport_opaque[addr], addr);
 #ifdef DEBUG_IOPORT
     if (loglevel & CPU_LOG_IOPORT)
         fprintf(logfile, "inl : %04x %08x\n", addr, val);

