From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1750951AbcARFIO (ORCPT );
	Mon, 18 Jan 2016 00:08:14 -0500
Received: from mga11.intel.com ([192.55.52.93]:36535 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750735AbcARFIL (ORCPT );
	Mon, 18 Jan 2016 00:08:11 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.22,311,1449561600"; d="scan'208";a="895485213"
Subject: Re: kvm: access to invalid memory in mmu_zap_unsync_children
To: Paolo Bonzini , Sasha Levin , Gleb Natapov
References: <56992438.6090804@oracle.com> <56992719.9060004@redhat.com>
Cc: LKML , Dmitry Vyukov , syzkaller , Marcelo Tosatti ,
	Takuya Yoshikawa
From: Xiao Guangrong
Message-ID: <569C718A.9090802@linux.intel.com>
Date: Mon, 18 Jan 2016 13:00:58 +0800
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
	Thunderbird/38.4.0
MIME-Version: 1.0
In-Reply-To: <56992719.9060004@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 01/16/2016 01:06 AM, Paolo Bonzini wrote:
>
>
> On 15/01/2016 17:54, Sasha Levin wrote:
>> Hi all,
>>
>> While fuzzing with syzkaller on the latest -next kernel running on a KVM tools
>> guest, I've hit the following invalid memory access:
>>
>> [  547.956284] UBSAN: Undefined behaviour in arch/x86/kvm/mmu.c:2011:17
>> [  547.956940] index 3 is out of range for type 'kvm_mmu_page *[3]'
>> [  547.957567] CPU: 0 PID: 21577 Comm: syz-executor Tainted: G    D 4.4.0-next-20160114-sasha-00021-gf1273d1-dirty #2798
>> [  547.958739]  1ffff1001819be5c 000000002fa0e55b ffff8800c0cdf360 ffffffff83433c4e
>> [  547.972448]  0000000041b58ab3 ffffffff8f960c38 ffffffff83433b86 ffff8800c0cdf328
>> [  547.973277]  0000000000000001 000000002fa0e55b ffffffff8feb8440 ffff8800c0cdf3f0
>> [  547.974102] Call Trace:
>> [  547.974424] dump_stack (lib/dump_stack.c:52)
>> [  547.975774] ubsan_epilogue (lib/ubsan.c:165)
>> [  547.976408] __ubsan_handle_out_of_bounds (lib/ubsan.c:382)
>> [  547.980877] mmu_zap_unsync_children (arch/x86/kvm/mmu.c:2011 arch/x86/kvm/mmu.c:2272)
>
> Marcelo/Takuya/Xiao,
>
> do you know what's the point in the assignment in kvm_mmu_pages_init?
>
> It seems to me that it should be
>
> 	parents->parent[0] = NULL;
>
> since the only user of the ->parent[] array, mmu_pages_clear_parents,
> walks the array up from parents->parent[0].
>

Yes, it is buggy, and it surprised me that it was never triggered in a
nested environment.

> Any other opinions?

The reason we size the array as [PT64_ROOT_LEVEL-1] is that we never
take the last level (level = 1) into account.

I think the diff below can fix it, but it has not been tested yet.

+#define INVALID_INDEX   (-1)
+
 static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 			   struct kvm_mmu_pages *pvec)
 {
 	if (!sp->unsync_children)
 		return 0;
 
-	mmu_pages_add(pvec, sp, 0);
+	/*
+	 * Do not count the index in the parent of the sp we
+	 * start walking from.
+	 */
+	mmu_pages_add(pvec, sp, INVALID_INDEX);
 	return __mmu_unsync_walk(sp, pvec);
 }
 
@@ -1980,8 +1986,11 @@ static int mmu_pages_next(struct kvm_mmu_pages *pvec,
 			return n;
 		}
 
-		parents->parent[sp->role.level-2] = sp;
-		parents->idx[sp->role.level-1] = pvec->page[n].idx;
+		parents->parent[sp->role.level - 2] = sp;
+
+		/* Skip setting the idx of the sp we start from. */
+		if (pvec->page[n].idx != INVALID_INDEX)
+			parents->idx[sp->role.level - 1] = pvec->page[n].idx;
 	}
 
 	return n;
@@ -1999,6 +2008,7 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 		if (!sp)
 			return;
 
+		WARN_ON(idx == INVALID_INDEX);
 		clear_unsync_child_bit(sp, idx);
 		level++;
 	} while (level < PT64_ROOT_LEVEL-1 && !sp->unsync_children);
@@ -2008,7 +2018,7 @@ static void kvm_mmu_pages_init(struct kvm_mmu_page *parent,
 			       struct mmu_page_path *parents,
 			       struct kvm_mmu_pages *pvec)
 {
-	parents->parent[parent->role.level-1] = NULL;
+	parents->parent[parent->role.level - 2] = NULL;
 	pvec->nr = 0;
 }

>
> Paolo
>