From mboxrd@z Thu Jan 1 00:00:00 1970
From: Juan Quintela
Subject: KVM Call agenda 2013-01-29
Date: Tue, 29 Jan 2013 15:40:34 +0100
Message-ID: <87boc8gihp.fsf@elfo.elfo>
Reply-To: quintela@redhat.com
To: KVM devel mailing list, qemu-devel

As today there were lots of topics, here is a condensed agenda; basically three main topics remain now that Buildbot is dropped:

* Buildbot: discussed on the list (Andreas retired it)

* Replacing select(2) so that we will not hit the 1024-entry fd_set limit in the future. (Stefan)

* Outstanding virtio work for 1.4:
  - Multiqueue virtio-net (Amos/Michael)
  - Refactorings (Fred/Peter)
  - virtio-ccw (Cornelia/Alex)
  We need to work out the ordering here and what's reasonable to merge over the next week.

* What's the plan for -device and IRQ assignment? (Alex)
  We need to start coming up with a solution to connect IRQ lines between command-line-created devices and interrupt controllers. Currently I'm aware of two potential users:
  - virtio-mmio
  - device assignment on platform devices
  I see two options:
  - create a new platform bus that enumerates IRQs linearly, similar to how the ISA bus works
  - allow arbitrary IRQ pin connections using a global pin namespace

  Surely what you want is to specify IRQ connections via something like "my-uart.irq => interrupt-controller.in[14]" (adjust punctuation to taste)? They're just device-to-device connections. You also need some way of specifying where in a memory map MMIO devices should live. This is a little tricky because you don't want to assume a global flat address space (in the future, if we get the memory APIs right, devices could be in the address space of just one of four CPUs, for instance).
"my-uart.regs => cpu1.memory[0x4000..0x4100]" ?