Date: Fri, 19 Apr 2013 13:59:06 +0200
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v4 01/15] util: introduce gsource event abstration
Message-ID: <20130419115906.GA23751@stefanha-thinkpad.redhat.com>
References: <1366187964-14265-1-git-send-email-qemulist@gmail.com>
 <1366187964-14265-2-git-send-email-qemulist@gmail.com>
 <20130418140111.GA22842@stefanha-thinkpad.redhat.com>
To: liu ping fan
Cc: Stefan Hajnoczi, qemu-devel@nongnu.org, mdroth, Anthony Liguori, Jan Kiszka, Paolo Bonzini

On Fri, Apr 19, 2013 at 02:52:08PM +0800, liu ping fan wrote:
> On Thu, Apr 18, 2013 at 10:01 PM, Stefan Hajnoczi wrote:
> > On Wed, Apr 17, 2013 at 04:39:10PM +0800, Liu Ping Fan wrote:
> >> +static gboolean prepare(GSource *src, gint *time)
> >> +{
> >> +    EventGSource *nsrc = (EventGSource *)src;
> >> +    int events = 0;
> >> +
> >> +    if (!nsrc->readable && !nsrc->writable) {
> >> +        return false;
> >> +    }
> >> +    if (nsrc->readable && nsrc->readable(nsrc->opaque)) {
> >> +        events |= G_IO_IN;
> >> +    }
> >> +    if ((nsrc->writable) && nsrc->writable(nsrc->opaque)) {
> >> +        events |= G_IO_OUT;
> >> +    }
> >
> > G_IO_ERR, G_IO_HUP, G_IO_PRI?
> >
> > Here is the select(2) to GIOCondition mapping:
> >
> >   rfds -> G_IO_IN | G_IO_HUP | G_IO_ERR
> >   wfds -> G_IO_OUT | G_IO_ERR
> >   xfds -> G_IO_PRI
> >
> Does G_IO_PRI only happen in the read direction?

Yes.

> > In other words, we're missing events by just using G_IO_IN and G_IO_OUT.
> > Whether that matters depends on EventGSource users.  For sockets it can
> > matter.
> >
> I think you mean just prepare all of them, and let dispatch decide
> how to handle them, right?

The user must decide which events to monitor.  Otherwise the event loop
may run at 100% CPU due to events that are monitored but not handled by
the user.

> >> +void event_source_release(EventGSource *src)
> >> +{
> >> +    g_source_destroy(&src->source);
> >
> > Leaks src.
> >
> All of the memory used by EventGSource is allocated by g_source_new, so
> g_source_destroy can reclaim all of it.

Okay, then the bug is events_source_release(), which calls g_free(src)
after g_source_destroy(&src->source).
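For reference, an untested sketch of what I mean, assuming the EventGSource
type from this patch with its embedded GSource "source" field:

  /* Sketch only; assumes <glib.h> and the patch's EventGSource type. */
  void event_source_release(EventGSource *src)
  {
      /* Remove the source from its GMainContext so it stops dispatching. */
      g_source_destroy(&src->source);

      /*
       * Drop the reference taken by g_source_new().  GLib frees the whole
       * EventGSource allocation once the last reference is gone, so no
       * explicit g_free(src) is needed (or allowed).
       */
      g_source_unref(&src->source);
  }
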
> >> +EventsGSource *events_source_new(GSourceFuncs *funcs, GSourceFunc dispatch_cb, void *opaque)
> >> +{
> >> +    EventsGSource *src = (EventsGSource *)g_source_new(funcs, sizeof(EventsGSource));
> >> +
> >> +    /* 8bits size at initial */
> >> +    src->bmp_sz = 8;
> >> +    src->alloc_bmp = g_malloc0(src->bmp_sz >> 3);
> >
> > This is unportable.  alloc_bmp is unsigned long, you are allocating just
> > one byte!
> >
> I had thought of resorting to bmp_sz to guarantee the bit-ops on
> alloc_bmp.  And if EventsGSource->pollfds is allocated with 64 instances
> at initialization, it costs too much.  I can fix it with finer-grained
> code when alloc_bmp's size grows.
>
> > Please drop the bitmap approach and use a doubly-linked list or another
> > glib container type of your choice.  It needs 3 operations: add, remove,
> > and iterate.
> >
> But in the slirp case, owing to network connections and disconnections,
> slirp's sockets can change quickly and dynamically.  The bitmap approach
> is something like a slab allocator, while the glib container types lack
> such support (maybe using two GArrays, inuse[] and free[]).

Doubly-linked list insertion and removal are O(1).  The list nodes can be
allocated with g_slice_alloc(), which is efficient.  Iterating a linked
list isn't cache-friendly, but that is premature optimization.  I bet the
userspace TCP stack - pulling packets apart - is more of a CPU bottleneck
than a doubly-linked list of fds.

Please use existing data structures instead of writing them from
scratch unless there is a real need (e.g. profiling shows it matters).
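
To make the suggestion concrete, here is a rough, untested sketch of the
three operations using GList and g_slice.  The EventsGSourceFd struct, the
"fds" list member, and the helper names are made up for illustration, not
taken from this patch:

  /* Sketch only; assumes <glib.h> and an EventsGSource with a GList *fds member. */
  typedef struct EventsGSourceFd {
      GPollFD pollfd;
      /* per-fd callbacks/state would live here */
  } EventsGSourceFd;

  /* add: O(1) */
  static EventsGSourceFd *events_source_add_fd(EventsGSource *src, int fd)
  {
      EventsGSourceFd *efd = g_slice_new0(EventsGSourceFd);

      efd->pollfd.fd = fd;
      src->fds = g_list_prepend(src->fds, efd);
      return efd;
  }

  /* remove: O(1) if the caller keeps its GList node, O(n) via g_list_remove() */
  static void events_source_remove_fd(EventsGSource *src, EventsGSourceFd *efd)
  {
      src->fds = g_list_remove(src->fds, efd);
      g_slice_free(EventsGSourceFd, efd);
  }

  /* iterate, e.g. from prepare()/check()/dispatch() */
  static void events_source_foreach(EventsGSource *src)
  {
      GList *l;

      for (l = src->fds; l; l = l->next) {
          EventsGSourceFd *efd = l->data;
          /* look at efd->pollfd.revents here */
      }
  }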