From mboxrd@z Thu Jan 1 00:00:00 1970
From: Victor Kaplansky
Subject: Re: [PATCH v2 2/3] vhost: protect dirty logging against logging base change
Date: Mon, 27 Nov 2017 03:42:34 -0500 (EST)
Message-ID: <1602820578.45759784.1511772154833.JavaMail.zimbra@redhat.com>
References: <20171124180826.18439-1-maxime.coquelin@redhat.com>
 <20171124180826.18439-3-maxime.coquelin@redhat.com>
 <1760091245.45753170.1511770580954.JavaMail.zimbra@redhat.com>
 <0a577e7b-2d90-04ba-3fac-f70192970a09@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: dev@dpdk.org, yliu@fridaylinux.org, tiwei bie, jianfeng tan, stable@dpdk.org, jfreiman@redhat.com
To: Maxime Coquelin
In-Reply-To: <0a577e7b-2d90-04ba-3fac-f70192970a09@redhat.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

----- Original Message -----
> From: "Maxime Coquelin"
> To: "Victor Kaplansky"
> Cc: dev@dpdk.org, yliu@fridaylinux.org, "tiwei bie", "jianfeng tan", stable@dpdk.org, jfreiman@redhat.com
> Sent: Monday, November 27, 2017 10:27:22 AM
> Subject: Re: [PATCH v2 2/3] vhost: protect dirty logging against logging base change
>
> Hi Victor,
>
> On 11/27/2017 09:16 AM, Victor Kaplansky wrote:
> > Hi,
> >
> > While I agree that taking a full-fledged lock with rte_rwlock_read_lock()
> > solves the race condition, I'm afraid it would be too expensive when
> > logging is off, since it introduces lock acquisition and release into
> > the main flow of ring updates.
>
> Actually, my v2 fixes the performance penalty when logging is off. The
> lock is now taken after the logging feature check.
>
> But still, I agree the logging-on case will suffer a performance
> penalty.

Yes, checking the logging feature is better than nothing, but VHOST_F_LOG_ALL only marks whether logging is supported by the device, not whether logging is actually in progress.
Thus, any guest will hit the performance degradation even outside of migration.

> > It is OK for now, as it fixes the bug, but we need to perform more
> > careful performance measurements and see whether the performance
> > degradation is not prohibitive.
> >
> > As an alternative, we may consider using lighter-weight busy looping.
>
> I think it will end up being almost the same, as both threads will need
> to busy loop. The PMD thread to be sure the protocol thread isn't
> unmapping the region before doing the logging, and the protocol thread
> to be sure the PMD thread is not doing logging before handling the set
> log base.

I'm not fully aware of how rte_rwlock_read_lock() is implemented, but theoretically busy looping should be much cheaper when taking the lock on one side is very rare.

> Maybe you have something else in mind?
>
> > Also, let's fix in this series the __sync_fetch_and_or_8 ->
> > __sync_fetch_and_or, as it may improve the performance slightly.
>
> Sure, this can be done, but it would need to be benchmarked first.

Agree.

> Regards,
> Maxime
>
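
To make the busy-looping idea concrete, here is a minimal sketch of what I have in mind. All names here (virtio_net_sketch, log_in_use, vhost_log_page_sketch, set_log_base_sketch) are illustrative, not the real DPDK vhost symbols, and it assumes a single PMD thread logging and a single protocol thread changing the log base:

```c
/*
 * Sketch only: the PMD thread raises a flag around each log write; the
 * protocol thread publishes the new base, then spins until no logging
 * is in flight before it may safely unmap the old region.
 */
#include <sched.h>
#include <stdatomic.h>
#include <stdint.h>

struct virtio_net_sketch {
	atomic_int log_in_use;     /* raised while the PMD thread logs */
	uint8_t *_Atomic log_base; /* guest dirty bitmap, may be swapped */
};

/* Hot path (PMD thread): one flag store each way, no lock taken. */
static void vhost_log_page_sketch(struct virtio_net_sketch *dev, uint64_t page)
{
	atomic_store_explicit(&dev->log_in_use, 1, memory_order_seq_cst);
	uint8_t *base = atomic_load_explicit(&dev->log_base,
					     memory_order_seq_cst);
	if (base)
		__sync_fetch_and_or(&base[page / 8],
				    (uint8_t)(1u << (page % 8)));
	atomic_store_explicit(&dev->log_in_use, 0, memory_order_release);
}

/* Rare path (protocol thread): publish the new base, then busy-wait
 * until the in-flight log write (if any) has drained. */
static void set_log_base_sketch(struct virtio_net_sketch *dev,
				uint8_t *new_base)
{
	atomic_store_explicit(&dev->log_base, new_base, memory_order_seq_cst);
	while (atomic_load_explicit(&dev->log_in_use, memory_order_seq_cst))
		sched_yield();
	/* munmap() of the old log region would go here. */
}
```

Note the seq_cst ordering on the flag store vs. the base load (and on the base store vs. the flag load): this is a Dekker-style store-load pattern, so relaxed or acquire/release alone would not be enough. The cost on the hot path is two plain atomic stores and one load, instead of an rte_rwlock acquire/release.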