public inbox for kvm@vger.kernel.org
From: Dor Laor <dor.laor-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: Hollis Blanchard <hollisb-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
Cc: kvm-ppc-devel
	<kvm-ppc-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org>,
	kvm-devel
	<kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org>
Subject: Re: [RFC PATCH] current PowerPC patch
Date: Sat, 20 Oct 2007 00:15:09 +0200	[thread overview]
Message-ID: <47192C6D.8020602@qumranet.com> (raw)
In-Reply-To: <1192817733.10451.38.camel@basalt>

Hollis Blanchard wrote:
> This patch can now execute guest userspace (I'm not saying it's complete
> or stable or anything though). I need to put together a more
> full-featured ramdisk to test userspace more completely.
>
> That reminds me, what is the status of KVM VirtIO work? If I can put a
> virt-io driver in the guest and not need to deal with storage device
> emulation that would make me very happy.
>
>   
Hi Hollis,
Good luck with the new patch and with the ongoing KVM arch support.
Sadly, KVM's virtio support hasn't changed much since the forum. Rusty
did improve the virtio shared memory and added a PCI-like configuration
space, as discussed at the forum. I was too busy with other things;
the good news is that in a week I'll be back on it.

Nevertheless, even with a PV block driver you'll need to boot from an
emulated device and then switch over to the PV one. It will stay that
way until we add boot-from-PV support; that will happen, but further
down the road.
Regards,
Dor.
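
[Editor's note] As a concrete, purely hypothetical illustration of the
two-stage setup Dor describes (boot from an emulated device, then hand
over to the paravirtual one), a guest could be launched with both an
emulated boot disk and a second disk on the PV path. The command line
below is a sketch; the image names and exact option spelling are
assumptions, not taken from this thread:

```shell
# Sketch only: the guest boots from a fully emulated IDE disk, while a
# second image is attached via the paravirtual (virtio) interface for
# the guest's PV block driver to pick up once it loads.
# Image paths and option syntax are assumptions.
qemu-system-ppc \
    -m 32 \
    -drive file=boot.img,if=ide \
    -drive file=data.img,if=virtio
```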
> Once the flurry of refactoring patches settles down in kvm.git, I need
> to look at actually integrating all of this.
>
> Anyways, for PPC people:
>
> I've also divorced the guest/host MMU emulation a bit, which eventually
> should allow us to emulate e.g. a Book E guest on a Server host. Take a
> look at the new stuff in tlb.c.
>
> Since we don't do external interrupt injection yet, I also had to hack
> the guest device tree to disable UART interrupts, which forces the guest
> kernel to do polling. Obviously this can be removed once we get some IRQ
> support. We probably just want to go ahead and implement PIC emulation
> in-kernel, since x86 moved to the kernel after initially using qemu.
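
[Editor's note] The device-tree hack mentioned above amounts to dropping
the interrupt properties from the serial nodes, so the guest's 8250
driver has no IRQ to request and services the UART by polling (the boot
log below shows "irq = 0" for ttyS0). A hypothetical fragment of such an
edited node follows; the address and clock-frequency come from the boot
log in this mail, while the reg size and compatible string are
assumptions:

```dts
/* Hypothetical sketch of an edited serial node: with no "interrupts"
 * or "interrupt-parent" property, the kernel cannot wire up a UART
 * interrupt and falls back to polled operation. */
serial@ef600300 {
	device_type = "serial";
	compatible = "ns16550";
	reg = <0xef600300 0x8>;
	clock-frequency = <0xa8c000>;
	/* "interrupts" property deliberately omitted to force polling */
};
```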
>
> I also need to rebase onto the latest upstream, and hopefully this will
> compress some of the 19 (!!!) patches I've accumulated. (Most of these
> are Bamboo and Sequoia board support.)
>
>
>
> CPU clock-frequency <- 0x27bc86ae (667MHz)
> CPU timebase-frequency <- 0x27bc86ae (667MHz)
> /plb: clock-frequency <- 9ef21ab (167MHz)
> /plb/opb: clock-frequency <- 4f790d5 (83MHz)
> /plb/opb/ebc: clock-frequency <- 34fb5e3 (56MHz)
> /plb/opb/serial@ef600300: clock-frequency <- a8c000 (11MHz)
> /plb/opb/serial@ef600400: clock-frequency <- a8c000 (11MHz)
> /plb/opb/serial@ef600500: clock-frequency <- a8c000 (11MHz)
> /plb/opb/serial@ef600600: clock-frequency <- a8c000 (11MHz)
> Memory <- <0x0 0x0 0x2000000> (32MB)
> ENET0: local-mac-address <- 00:00:00:00:00:00
> ENET1: local-mac-address <- 00:00:00:00:00:00
>
> zImage starting: loaded at 0x00400000 (sp: 0x00fffe98)
> Allocating 0x295c5c bytes for kernel ...
> gunzipping (0x00000000 <- 0x0040b000:0x00693acc)...done 0x275a9c bytes
>
> Linux/PowerPC load: 
> Finalizing device tree... flat tree at 0x6a03a0
> id mach(): done
> MMU:enter
> MMU:hw init
> MMU:mapin
> MMU:setio
> MMU:exit
> Using Bamboo machine description
> Linux version 2.6.23-rc1 (hollisb@basalt) (gcc version 3.4.2) #100 Fri Oct 19 13:07:14 CDT 2007
> console [udbg0] enabled
> setup_arch: bootmem
> arch: exit
> Zone PFN ranges:
>   DMA             0 ->     8192
>   Normal       8192 ->     8192
> Movable zone start PFN for each node
> early_node_map[1] active PFN ranges
>     0:        0 ->     8192
> Built 1 zonelists in Zone order.  Total pages: 8128
> Kernel command line: console=ttyS0,115200 debug
> UIC0 (32 IRQ sources) at DCR 0xc0
> UIC1 (32 IRQ sources) at DCR 0xd0
> PID hash table entries: 128 (order: 7, 512 bytes)
> time_init: decrementer frequency = 666.666670 MHz
> time_init: processor frequency   = 666.666670 MHz
> Dentry cache hash table entries: 4096 (order: 2, 16384 bytes)
> Inode-cache hash table entries: 2048 (order: 1, 8192 bytes)
> Memory: 29820k/32768k available (2396k kernel code, 2948k reserved, 100k data, 127k bss, 328k init)
> Calibrating delay loop... 1163.26 BogoMIPS (lpj=2326528)
> Mount-cache hash table entries: 512
> NET: Registered protocol family 16
>              
> PCI: Probing PCI hardware
> NET: Registered protocol family 2
> IP route cache hash table entries: 1024 (order: 0, 4096 bytes)
> TCP established hash table entries: 1024 (order: 1, 8192 bytes)
> TCP bind hash table entries: 1024 (order: 0, 4096 bytes)
> TCP: Hash tables configured (established 1024 bind 1024)
> TCP reno registered
> io scheduler noop registered
> io scheduler anticipatory registered (default)
> io scheduler deadline registered
> io scheduler cfq registered
> Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing enabled
> ef600300.serial: ttyS0 at MMIO 0xef600300 (irq = 0) is a 16450
> console handover: boot [udbg0] -> real [ttyS0]
> RAMDISK driver initialized: 16 RAM disks of 35000K size 1024 blocksize
> PPC 4xx OCP EMAC driver, version 3.54
> MAL v1 /plb/mcmal, 4 TX channels, 4 RX channels
> ZMII /plb/opb/emac-zmii@ef600d00 initialized
> /plb/opb/emac-zmii@ef600d00: bridge in RMII mode
> /plb/opb/ethernet@ef600e00: can't find PHY!
> /plb/opb/ethernet@ef600f00: can't find PHY!
> TCP cubic registered
> NET: Registered protocol family 1
> NET: Registered protocol family 17
> Freeing unused kernel memory: 328k init
> Hello world
>
>
>   
> ------------------------------------------------------------------------
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems?  Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >> http://get.splunk.com/
> ------------------------------------------------------------------------
>
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel




Thread overview: 4+ messages
2007-10-19 18:15 [RFC PATCH] current PowerPC patch Hollis Blanchard
2007-10-19 22:15 ` Dor Laor [this message]
     [not found]   ` <47192C6D.8020602-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-19 22:23     ` [kvm-ppc-devel] " Hollis Blanchard
2007-10-22 11:01 ` Carsten Otte
