From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Zhang
Subject: Re: Enable more than 255 VCPU support without irq remapping function in the guest
Date: Tue, 3 May 2016 09:34:31 +0800
Message-ID: <57280027.4020005@gmail.com>
References: <571F93CA.40200@intel.com> <571F9487.5090009@siemens.com> <20160426164939.GA18900@potion> <57203B9D.6020402@gmail.com> <57204D28.4070706@siemens.com> <572088D0.7040805@gmail.com> <57208A54.40502@siemens.com> <57216341.80006@gmail.com> <5721B394.9050008@siemens.com> <20160428153251.GA17368@potion> <5722C247.6040004@gmail.com> <6818B0B0-6F29-494D-8EA9-D69603AF6ED6@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: QUOTED-PRINTABLE
Cc: =?UTF-8?B?UmFkaW0gS3LEjW3DocWZ?= , Jan Kiszka , "Lan, Tianyu" , pbonzini@redhat.com, kvm@vger.kernel.org, tglx@linutronix.de, gleb@redhat.com, mst@redhat.com, x86@kernel.org, Peter Xu , Igor Mammedov
To: Nadav Amit
Return-path:
Received: from mail-oi0-f53.google.com ([209.85.218.53]:34304 "EHLO mail-oi0-f53.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932436AbcECBen (ORCPT ); Mon, 2 May 2016 21:34:43 -0400
Received: by mail-oi0-f53.google.com with SMTP id k142so7569075oib.1 for ; Mon, 02 May 2016 18:34:43 -0700 (PDT)
In-Reply-To: <6818B0B0-6F29-494D-8EA9-D69603AF6ED6@gmail.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 2016/4/29 11:01, Nadav Amit wrote:
> Yang Zhang wrote:
>
>> On 2016/4/28 23:32, Radim Krčmář wrote:
>>> I think we are talking about extending KVM's IR-less x2APIC, when
>>> standard x2APIC is the future.
>>
>> Yes. Since IR is only useful for external devices, 255 CPUs are enough
>> to handle the interrupts from external devices. Besides, I think
>> virtual VT-d will bring an extra performance impact for devices, so if
>> IR-less x2APIC also works well with more than 255 VCPUs, maybe
>> extending KVM with IR-less x2APIC is not a bad idea.
>
> So will you use x2APIC physical mode in this system?

Probably not; cluster mode is the better choice.

> Try not to send a multicast IPI to 400 cores in the VM...

Yes, a multicast IPI to that many cores is a disaster in a VM, like
flush_tlb_others().

-- 
best regards
yang