From: greno@verizon.net
Subject: Re: Re: Xen pv_ops dom0 2.6.32.13 issues
Date: Wed, 09 Jun 2010 18:52:40 -0500 (CDT)
To: jeremy@goop.org
Cc: xen-devel@lists.xensource.com

Thanks, I'll check into using tap:aio.  I had tried it once before and could not get it to work.

Here is my entry from pv-grub:
# pv-grub: tap:aio: will not work for disk, use file:
disk = [ "file:/var/lib/xen/images/CLOUD-CC-1.img,xvda,w" ]

I had difficulty getting tap:aio to work with disk.  I can't remember if that problem was just with pv-grub or with dom0 in general.  This was about 6 months ago.  I guess that is no longer a problem now?
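If tap:aio does work with pv-grub now, I assume the equivalent disk line would be something like the following (untested on my end; same image path and target device as my file: entry above, just the prefix swapped):

```
# Hypothetical tap:aio variant of the disk entry above -- this is the
# form I could not get working with pv-grub about 6 months ago:
disk = [ "tap:aio:/var/lib/xen/images/CLOUD-CC-1.img,xvda,w" ]
```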


Jun 9, 2010 07:37:46 PM, jeremy@goop.org wrote:
On 06/09/2010 04:27 PM, greno@verizon.net wrote:
> blkbackd
Using phy: in your config file? That really isn't recommended because
it has poor integrity; the writes are buffered in dom0 so writes can be
reordered or lost on crash, and the guest filesystem can't maintain any
of its own integrity guarantees.

tap:aio: is more resilient, since the writes go directly to the device
without buffering.

That doesn't directly relate to your lockup issues, but it should
prevent filesystem corruption when they happen.

J



>
>>
> Jun 9, 2010 07:13:23 PM, jeremy@goop.org wrote:
>
> On 06/09/2010 04:05 PM, greno@verizon.net wrote:
> > Jeremy,
> > The soft lockups seemed to be occurring in different systems. And I
> > could never make sense out of what was triggering them. I have not
> > mounted any file systems with "nobarriers" in guests. The guests are
> > all a single /dev/xvda. The underlying physical hardware is LVM over
> > RAID-1 arrays. I'm attaching dmesg, kern.log, and messages in case
> > these might be useful.
>
> Using what storage backend? blkback? blktap2?
>
> J
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel