From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 16 Mar 2021 10:18:47 +0900
From: Hajime Tazaki
Subject: Re: [RFC v8 09/20] um: lkl: kernel thread support
References: <3aecd66b9314f2b435fb6df029dc5829bf8c50ff.1611103406.git.thehajime@gmail.com>
To: johannes@sipsolutions.net
Cc: linux-um@lists.infradead.org, jdike@addtoit.com, richard@nod.at,
    anton.ivanov@cambridgegreys.com, tavi.purdila@gmail.com,
    linux-kernel-library@freelists.org, linux-arch@vger.kernel.org,
    retrage01@gmail.com

On Mon, 15 Mar 2021 02:01:20 +0900, Johannes Berg wrote:
>
> On Wed, 2021-01-20 at 11:27 +0900, Hajime Tazaki wrote:
>
> > +void __weak subarch_cpu_idle(void)
> > +{
> > +}
> > +
> >  void arch_cpu_idle(void)
> >  {
> >  	cpu_tasks[current_thread_info()->cpu].pid = os_getpid();
> >  	um_idle_sleep();
> > +	subarch_cpu_idle();
>
> Not sure that belongs into this patch in the first place, but wouldn't
> it make some sense to move the um_idle_sleep() into the
> subarch_cpu_idle() so LKL (or apps using it) can get full control?

Agreed. I'll move the um_idle_sleep() call into subarch_cpu_idle().

> > +/*
> > + * This structure is used to get access to the "LKL CPU" that allows us to run
> > + * Linux code.
> > + * Because we have to deal with various synchronization requirements
> > + * between idle thread, system calls, interrupts, "reentrancy", CPU shutdown,
> > + * imbalance wake up (i.e. acquire the CPU from one thread and release it from
> > + * another), we can't use a simple synchronization mechanism such as (recursive)
> > + * mutex or semaphore. Instead, we use a mutex and a bunch of status data plus a
> > + * semaphore.
> > + */
>
> Honestly, some of that documentation, and perhaps even the whole API for
> LKL feels like it should come earlier in the series.
>
> E.g. now here I see all those lkl_mutex_lock() (where btw documentation
> doesn't always match the function name), so you *don't* have the
> function ops pointer struct anymore?

I will fix the inconsistencies between the comments and the function
names.
# and, yes, we don't have the function ops struct anymore.

> It'd be nice to have some high-level view of what applications *must*
> do, and what they *can* do, at the beginning of this series.

Agreed.

> > + *
> > + * This algorithm assumes that we never have more than MAX_THREADS
> > + * requesting CPU access.
> > + */
> > +#define MAX_THREADS 1000000

> What implications does that value have? It seems several orders of
> magnitude too large?

This is a fairly arbitrary number; I will justify it (or make it
smaller) after some investigation.

> > +static int __cpu_try_get_lock(int n)
> > +{
> > +	lkl_thread_t self;
> > +
> > +	if (__sync_fetch_and_add(&cpu.shutdown_gate, n) >= MAX_THREADS)
> > +		return -2;
>
> Feels like that could be some kind of enum here instead of -2 and -1 and
> all that magic.

Agreed; I will replace these magic values with an enum.

> > +	/* when somebody holds a lock, sleep until released,
> > +	 * with obtaining a semaphore (cpu.sem)
> > +	 */
>
> nit: /*
>      * use this comment style
>      */

Thanks, it should be; I will fix this.

> > +void switch_threads(jmp_buf *me, jmp_buf *you)
> > +{
> > +	/* NOP */
> > +}
>
> Why, actually?
Our threads don't use the UML jmp_buf (we use pthreads instead), so
this function has nothing to do. I will add a comment explaining this
here.

> Again, goes back to the high-level design thing I alluded to above, but
> it's not clear to me why you need idle (which implies you're running the
> scheduler) but not this (which implies you're *not* running the
> scheduler)?

Okay, I will write a separate document giving a high-level description
of how this works.

-- Hajime

_______________________________________________
linux-um mailing list
linux-um@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-um