From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
To: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Cc: paulmck@linux.vnet.ibm.com, peterz@infradead.org, mingo@redhat.com,
	mpe@ellerman.id.au, paulus@samba.org, benh@kernel.crashing.org,
	Waiman.Long@hpe.com, boqun.feng@gmail.com, will.deacon@arm.com,
	dave@stgolabs.net, Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Subject: [PATCH 1/3] kernel/sched: introduce vcpu preempted check interface
Date: Mon, 27 Jun 2016 13:41:28 -0400
Message-Id: <1467049290-32359-2-git-send-email-xinhui.pan@linux.vnet.ibm.com>
In-Reply-To: <1467049290-32359-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>
References: <1467049290-32359-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.4.11
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

This patch adds an interface to help fix the lock holder preemption
issue that shows up when the kernel runs as a guest.

Kernel users can call bool vcpu_is_preempted(int cpu) to detect whether
a given vCPU is currently preempted.

The default implementation is a macro defined as false, so the compiler
can optimize the check away if the architecture does not support such a
vCPU preemption check. Architectures can provide their own
implementation by defining arch_vcpu_is_preempted().

Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
---
 include/linux/sched.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6e42ada..dc0a9c3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -3293,6 +3293,15 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+#ifdef arch_vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	return arch_vcpu_is_preempted(cpu);
+}
+#else
+#define vcpu_is_preempted(cpu)	false
+#endif
+
 extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
 extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
 
-- 
2.4.11
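
A rough, self-contained user-space sketch of the fallback pattern above
(not part of the patch): only the #ifdef/#else block mirrors the code
added to sched.h, while fake_arch_vcpu_is_preempted() and its
"pretend vCPU 1 is preempted" behaviour are invented stand-ins for a
real architecture hook.

/* Mock of the patch's compile-time fallback; builds with any C99 compiler. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for an architecture that implements the check.  Comment this
 * block out to exercise the generic "always false" fallback instead. */
static bool fake_arch_vcpu_is_preempted(int cpu)
{
	return cpu == 1;	/* pretend vCPU 1 is currently scheduled out */
}
#define arch_vcpu_is_preempted(cpu)	fake_arch_vcpu_is_preempted(cpu)

/* Same selection logic as the patch: use the arch hook when it exists,
 * otherwise compile down to a constant false. */
#ifdef arch_vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return arch_vcpu_is_preempted(cpu);
}
#else
#define vcpu_is_preempted(cpu)	false
#endif

int main(void)
{
	for (int cpu = 0; cpu < 4; cpu++)
		printf("vcpu %d preempted: %d\n", cpu, vcpu_is_preempted(cpu));
	return 0;
}

The value of the split is that slow-path callers (for example, a lock
spin loop that wants to stop busy-waiting once the holder's vCPU is
scheduled out) can write the vcpu_is_preempted() check unconditionally;
on architectures without the hook the branch is a compile-time false
and the compiler drops it entirely.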