From: Rusty Russell
Subject: Re: [PATCH 13/28] virtio: console: Create a buffer pool for sending data to host
Date: Wed, 2 Dec 2009 19:00:35 +1030
Message-ID: <200912021900.35219.rusty@rustcorp.com.au>
In-Reply-To: <1259391051-7752-14-git-send-email-amit.shah@redhat.com>
References: <1259391051-7752-1-git-send-email-amit.shah@redhat.com>
 <1259391051-7752-13-git-send-email-amit.shah@redhat.com>
 <1259391051-7752-14-git-send-email-amit.shah@redhat.com>
To: Amit Shah
Cc: virtualization@lists.linux-foundation.org

On Sat, 28 Nov 2009 05:20:36 pm Amit Shah wrote:
> The old way of sending data to the host was to populate one buffer and
> then wait till the host consumed it before sending the next chunk of
> data.
>
> Also, there was no support to send large chunks of data.
>
> We now maintain a per-device list of buffers that are ready to be
> passed on to the host.
>
> This patch adds support to send big chunks in multiple buffers of
> PAGE_SIZE each to the host.
>
> When the host consumes the data and signals to us via an interrupt, we
> add the consumed buffer back to our unconsumed list.
>
> Signed-off-by: Amit Shah
> ---
>  drivers/char/virtio_console.c |  159 +++++++++++++++++++++++++++++++++-------
>  1 files changed, 131 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
> index e8dabae..3111e4c 100644
> --- a/drivers/char/virtio_console.c
> +++ b/drivers/char/virtio_console.c
> @@ -67,9 +67,13 @@ struct ports_device {
>  	struct work_struct rx_work;
>
>  	struct list_head unused_read_head;
> +	struct list_head unused_write_head;
>
>  	/* To protect the list of unused read buffers and the in_vq */
>  	spinlock_t read_list_lock;
> +
> +	/* To protect the list of unused write buffers and the out_vq */
> +	spinlock_t write_list_lock;
>  };

Let's simplify this a little with a single "buffer_lock" or such in the
previous patch.

> +	if (!in_count)
>  		return 0;

Not necessary: if it happens all we'll do is gratuitously kick the host.

> +	in_offset = 0; /* offset in the user buffer */
> +	spin_lock_irqsave(&port->portdev->write_list_lock, irqf);
> +	while (in_count - in_offset) {

while (in_offset < in_count) seems clearer to me here.

> +		copy_size = min(in_count - in_offset, PAGE_SIZE);

Shouldn't you be using buf->size here?

> +		spin_unlock_irqrestore(&port->portdev->write_list_lock, irqf);
> +
> +		/*
> +		 * Since we're not sure when the host will actually
> +		 * consume the data and tell us about it, we have
> +		 * to copy the data here in case the caller
> +		 * frees the in_buf
> +		 */
> +		memcpy(buf->buf, in_buf + in_offset, copy_size);
> +
> +		buf->len = copy_size;
> +		sg_init_one(sg, buf->buf, buf->len);
> +
> +		spin_lock_irqsave(&port->portdev->write_list_lock, irqf);

Dropping the lock here seems gratuitous.

> +/*
> + * This function is only called from the init routine so the spinlock
> + * for the unused_write_head list isn't taken
> + */
> +static void alloc_write_bufs(struct ports_device *portdev)
> +{
> +	struct port_buffer *buf;
> +	int i;
> +
> +	for (i = 0; i < 30; i++) {

30?  And why aren't we allocating these somehow as they're consumed?
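Something like this would let the pool grow on demand instead (untested
sketch: get_write_buf() is a made-up name, and it assumes an alloc_buf()
helper plus a list_head "list" member in struct port_buffer, per the
rest of this series):

static struct port_buffer *get_write_buf(struct ports_device *portdev)
{
	struct port_buffer *buf;
	unsigned long flags;

	spin_lock_irqsave(&portdev->write_list_lock, flags);
	if (!list_empty(&portdev->unused_write_head)) {
		/* Reuse a buffer the host has already consumed. */
		buf = list_first_entry(&portdev->unused_write_head,
				       struct port_buffer, list);
		list_del(&buf->list);
		spin_unlock_irqrestore(&portdev->write_list_lock, flags);
		return buf;
	}
	spin_unlock_irqrestore(&portdev->write_list_lock, flags);

	/*
	 * Pool exhausted: allocate a fresh buffer on demand (the GFP
	 * flags would need care if any caller can be in atomic context).
	 */
	return alloc_buf(PAGE_SIZE);
}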
>  	fill_receive_queue(portdev);
> +	alloc_write_bufs(portdev);

What happens if this fails?

Rusty.
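P.S. On the failure question: perhaps alloc_write_bufs() should return
an error so the init path can unwind, rather than silently continuing
with an empty pool.  Untested sketch again (alloc_buf() assumed as
above, and the caller's error label is made up):

static int alloc_write_bufs(struct ports_device *portdev)
{
	struct port_buffer *buf;
	int i;

	for (i = 0; i < 30; i++) {
		buf = alloc_buf(PAGE_SIZE);
		/*
		 * A partially-filled pool is still usable; only fail
		 * outright if we couldn't allocate anything at all.
		 */
		if (!buf)
			return i ? 0 : -ENOMEM;
		list_add_tail(&buf->list, &portdev->unused_write_head);
	}
	return 0;
}

with the probe path doing something like:

	err = alloc_write_bufs(portdev);
	if (err)
		goto free_read_bufs;	/* hypothetical unwind label */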