xen-devel.lists.xenproject.org archive mirror
From: Neil Sikka <neilsikka@gmail.com>
To: Michael Schinzel <schinzel@ip-projects.de>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Payed Xen Admin
Date: Mon, 28 Nov 2016 08:30:15 -0500	[thread overview]
Message-ID: <CAHPMNWe_G-ZQ+jVrcDX8UT4KCb-xo2pGNg9ZhvbzdHB6eGfmEA@mail.gmail.com> (raw)
In-Reply-To: <d7bbdb7fef8a43258420d49a013b5a85@ip-projects.de>


[-- Attachment #1.1.1: Type: text/plain, Size: 6794 bytes --]

Usually when I've seen (null) domains, the domain itself is no longer
running but its QEMU device model (DM) still is. You could probably
remove the (null) entries from the list by using "kill -9" on the QEMU
pids.

On Nov 27, 2016 11:55 PM, "Michael Schinzel" <schinzel@ip-projects.de>
wrote:

> Good morning,
>
> we have some issues with our Xen hosts. It seems to be a Xen bug, but we
> have not been able to find the solution.
>
> Name                                      ID   Mem VCPUs      State   Time(s)
> Domain-0                                   0 16192     4     r-----  147102.5
> (null)                                     2     1     1     --p--d    1273.2
> vmanager2268                               4  1024     1     -b----   34798.8
> vmanager2340                               5  1024     1     -b----    5983.8
> vmanager2619                              12   512     1     -b----    1067.0
> vmanager2618                              13  1024     4     -b----    1448.7
> vmanager2557                              14  1024     1     -b----    2783.5
> vmanager1871                              16   512     1     -b----    3772.1
> vmanager2592                              17   512     1     -b----   19744.5
> vmanager2566                              18  2048     1     -b----    3068.4
> vmanager2228                              19   512     1     -b----     837.6
> vmanager2241                              20   512     1     -b----     997.0
> vmanager2244                              21  2048     1     -b----    1457.9
> vmanager2272                              22  2048     1     -b----    1924.5
> vmanager2226                              23  1024     1     -b----    1454.0
> vmanager2245                              24   512     1     -b----     692.5
> vmanager2249                              25   512     1     -b----   22857.7
> vmanager2265                              26  2048     1     -b----    1388.1
> vmanager2270                              27   512     1     -b----    1250.6
> vmanager2271                              28  2048     3     -b----    2060.8
> vmanager2273                              29  1024     1     -b----   34089.4
> vmanager2274                              30  2048     1     -b----    8585.1
> vmanager2281                              31  2048     2     -b----    1848.9
> vmanager2282                              32   512     1     -b----     755.1
> vmanager2288                              33  1024     1     -b----     543.6
> vmanager2292                              34   512     1     -b----    3004.9
> vmanager2041                              35   512     1     -b----    4246.2
> vmanager2216                              36  1536     1     -b----   47508.3
> vmanager2295                              37   512     1     -b----    1414.9
> vmanager2599                              38  1024     4     -b----    7523.0
> vmanager2296                              39  1536     1     -b----    7142.0
> vmanager2297                              40   512     1     -b----     536.7
> vmanager2136                              42  1024     1     -b----    6162.9
> vmanager2298                              43   512     1     -b----     441.7
> vmanager2299                              44   512     1     -b----     368.7
> (null)                                    45     4     1     --p--d    1296.3
> vmanager2303                              46   512     1     -b----    1437.0
> vmanager2308                              47   512     1     -b----     619.3
> vmanager2318                              48   512     1     -b----     976.8
> vmanager2325                              49   512     1     -b----     480.2
> vmanager2620                              53   512     1     -b----     346.2
> (null)                                    56     0     1     --p--d       8.8
> vmanager2334                              57   512     1     -b----     255.5
> vmanager2235                              58   512     1     -b----    1724.2
> vmanager987                               59   512     1     -b----     647.1
> vmanager2302                              60   512     1     -b----     171.4
> vmanager2335                              61   512     1     -b----      31.3
> vmanager2336                              62   512     1     -b----      45.1
> vmanager2338                              63   512     1     -b----      22.6
> vmanager2346                              64   512     1     -b----      20.9
> vmanager2349                              65  2048     1     -b----      14.4
> vmanager2350                              66   512     1     -b----     324.8
> vmanager2353                              67   512     1     -b----       7.6
>
> HVM VMs sometimes change into the (null) state.
>
> We have already upgraded Xen from 4.1.1 to 4.8 and upgraded the system
> kernel:
>
>
> root@v8:~# uname -a
>
> Linux v8.ip-projects.de 4.8.10-xen #2 SMP Mon Nov 21 18:56:56 CET 2016
> x86_64 GNU/Linux
>
> But none of these steps has helped us solve the issue.
>
> We are now looking for a Xen administrator who can help us analyse and
> solve this issue. We would also pay for this service.
>
> Hardware specs of the host:
>
> 2x Intel Xeon E5-2620v4
> 256 GB DDR4 ECC Reg RAM
> 6x 3 TB WD RE
> 2x 512 GB Kingston KC
> 2x 256 GB Kingston KC
> 2x 600 GB SAS
> LSI MegaRAID 9361-8i
> MegaRAID Kit LSICVM02
>
> The reasoning behind this setup:
>
> 6x 3 TB WD RE – RAID 10 – W/R I/O cache + LSI CacheCade – data storage
>
> 2x 512 GB Kingston KC400 SSDs – RAID 1 – SSD cache for the RAID 10 array
>
> 2x 256 GB Kingston KC400 SSDs – RAID 1 – swap array for the
> paravirtualized (PV) VMs
>
> 2x 600 GB SAS – RAID 1 – backup array for faster backups of the VMs to
> external storage
>
> Kind regards
>
> Michael Schinzel
>
> - Managing Director -
>
> IP-Projects GmbH & Co. KG
> Am Vogelherd 14
> D - 97295 Waldbrunn
>
> Phone: 09306 - 76499-0
> Fax: 09306 - 76499-15
> E-mail: info@ip-projects.de
>
> Managing Director: Michael Schinzel
> Registry court Würzburg: HRA 6798
> General partner: IP-Projects Verwaltungs GmbH
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
>
>
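
A note on reading the "State" column in the listing above: each
position is a flag (r = running, b = blocked, p = paused, s = shutdown,
c = crashed, d = dying), so the "--p--d" shown for the (null) domains
means paused and dying, i.e. the guest is gone but the hypervisor has
not finished tearing it down. A quick way to pick out such zombies
(a sketch, assuming the default "xl list" layout where State is the
fifth column):

    xl list | awk '$5 ~ /d/'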

[-- Attachment #1.1.2: Type: text/html, Size: 15502 bytes --]

[-- Attachment #1.2: image001.png --]
[-- Type: image/png, Size: 2217 bytes --]

[-- Attachment #2: Type: text/plain, Size: 127 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

  reply	other threads:[~2016-11-28 13:30 UTC|newest]

Thread overview: 9+ messages
2016-11-27  8:52 Payed Xen Admin Michael Schinzel
2016-11-28 13:30 ` Neil Sikka [this message]
2016-11-28 17:19   ` Michael Schinzel
2016-11-28 18:27     ` Thomas Toka
2016-11-28 21:08       ` Neil Sikka
2016-11-29 20:01         ` Thomas Toka
2016-11-29 12:08 ` Dario Faggioli
2016-11-29 13:34   ` IP-Projects - Support
2016-11-29 18:16     ` PV and HVM domains left as zombies with grants [was: Re: AW: Payed Xen Admin] Dario Faggioli

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAHPMNWe_G-ZQ+jVrcDX8UT4KCb-xo2pGNg9ZhvbzdHB6eGfmEA@mail.gmail.com \
    --to=neilsikka@gmail.com \
    --cc=schinzel@ip-projects.de \
    --cc=xen-devel@lists.xenproject.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).