From: Stephen Boyd
Subject: Re: [PATCH v5 01/18] kunit: test: add KUnit test runner core
Date: Thu, 27 Jun 2019 11:16:35 -0700
To: Brendan Higgins
Cc: Petr Mladek, "open list:DOCUMENTATION", Peter Zijlstra, Amir Goldstein,
 dri-devel, Sasha Levin, Masahiro Yamada, Michael Ellerman,
 "open list:KERNEL SELFTEST FRAMEWORK", shuah, Rob Herring, linux-nvdimm,
 Frank Rowand, Knut Omang, Kieran Bingham, wfg@linux.intel.com,
 Joel Stanley, David Rientjes, Jeff Dike, Dan Carpenter, devicetree,
 linux-kbuild, "Bird, Timothy", linux-um@lists.infradead.org,
 Steven Rostedt, Julia Lawall, Josh Poimboeuf, kunit-dev@googlegroups.com,
 Theodore Ts'o, Richard Weinberger, Greg KH, Randy Dunlap,
 Linux Kernel Mailing List, Luis Chamberlain, Daniel Vetter, Kees Cook,
 linux-fsdevel@vger.kernel.org, Logan Gunthorpe, Kevin Hilman

Quoting Brendan Higgins (2019-06-26 16:00:40)
> On Tue, Jun 25, 2019 at 8:41 PM Stephen Boyd wrote:
>
> > scenario like below, but where it is a problem. There could be three
> > CPUs, or even one CPU and three threads if you want to describe the
> > extra thread scenario.
> >
> > Here's my scenario where it isn't needed:
> >
> >  CPU0                           CPU1
> >  ----                           ----
> >  kunit_run_test(&test)
> >    test_case_func()
> >      ....
> >    [mock hardirq]
> >      kunit_set_success(&test)
> >    [hardirq ends]
> >      ...
> >      complete(&test_done)
> >  wait_for_completion(&test_done)
> >  kunit_get_success(&test)
> >
> > We don't need to care about having locking here because success or
> > failure only happens in one place and it's synchronized with the
> > completion.
>
> Here is the scenario I am concerned about:
>
>  CPU0                       CPU1                      CPU2
>  ----                       ----                      ----
>  kunit_run_test(&test)
>    test_case_func()
>      ....
>      schedule_work(foo_func)
>    [mock hardirq]           foo_func()
>      ...                      ...
>      kunit_set_success(false) kunit_set_success(false)
>    [hardirq ends]             ...
>      ...
>      complete(&test_done)
>  wait_for_completion(...)
>  kunit_get_success(&test)
>
> In my scenario, both CPU1 and CPU2 update the success status of the
> test simultaneously, even though they are setting it to the same
> value. If my understanding is correct, this could result in a
> write-tear on some architectures in some circumstances. I suppose we
> could just make it an atomic boolean, but I figured locking is also
> fine, and generally preferred.

This is what we have WRITE_ONCE() and READ_ONCE() for. Maybe you could
just use those in the getter and setter and remove the lock if it isn't
used for anything else.

It may also be a good idea to have a kunit_fail_test() API that fails
the test passed in with a WRITE_ONCE(false). Otherwise, the test is
assumed successful, and it isn't even possible for a test to change the
state from failure to success due to a logical error, because the API
isn't available. Then we don't really need a generic kunit_set_success()
function at all. We could have a kunit_test_failed() function too that
replaces the kunit_get_success() function. That would read better in an
if condition.

>
> Also, to be clear, I am onboard with dropping the IRQ stuff for now.
> I am fine moving to a mutex for the time being.
>

Ok.

_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um