From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jarod Wilson
Subject: Re: Bond recovery from BOND_LINK_FAIL state not working
Date: Fri, 3 Nov 2017 17:46:05 -0400
Message-ID:
References: <28118.1509572045@famine> <10968.1509582913@famine> <17092.1509598291@famine> <995.1509671466@famine> <970a0fed-50b8-edd6-3a0f-d11b4f191058@hpe.com> <17639.1509733561@famine>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: netdev@vger.kernel.org, Mahesh Bandewar
To: Alex Sidorenko , Jay Vosburgh
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:41454 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752284AbdKCVqE (ORCPT ); Fri, 3 Nov 2017 17:46:04 -0400
In-Reply-To:
Content-Language: en-US
Sender: netdev-owner@vger.kernel.org
List-ID:

On 2017-11-03 3:30 PM, Alex Sidorenko wrote:
> Indeed, we do not print the slave's ->link_new_state on each entry, so it
> is quite possible that we are at stage 6.
>
> It is even possible that this has something to do with how NM initially
> created the bonds.
> The customer says the problem occurs only once, after a host reboot; after
> that, failover works fine no matter how many times he changes the state of
> the VirtualConnect modules.
>
> Jarod,
>
> could you please add printing of slave->link_new_state for both slaves at
> each entry to bond_miimon_inspect?
>
> (and instead of nudging slave->new_link as I suggested, use Jay's patch).

Will do, test build is just about ready here.

-- 
Jarod Wilson
jarod@redhat.com