From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 7 Mar 2024 12:31:17 -0800
From: Kees Cook
To: "GONG, Ruiqi"
Cc: Vlastimil Babka, Andrew Morton, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, Xiu Jianfeng, Suren Baghdasaryan,
	Kent Overstreet, Jann Horn, Matteo Rizzo,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 0/9] slab: Introduce dedicated bucket allocator
Message-ID: <202403071227.D29DE5F8C4@keescook>
References: <20240305100933.it.923-kees@kernel.org>
X-Mailing-List: linux-hardening@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Wed, Mar 06, 2024 at 09:47:36AM +0800, GONG, Ruiqi wrote:
> 
> 
> On 2024/03/05 18:10, Kees Cook wrote:
> > Hi,
> > 
> > Repeating the commit logs for patch 4 here:
> > 
> > Dedicated caches are available for fixed size allocations via
> > kmem_cache_alloc(), but for dynamically sized allocations there is only
> > the global kmalloc API's set of buckets available. This means it isn't
> > possible to separate specific sets of dynamically sized allocations into
> > a separate collection of caches.
> > 
> > This leads to a use-after-free exploitation weakness in the Linux
> > kernel since many heap memory spraying/grooming attacks depend on using
> > userspace-controllable dynamically sized allocations to collide with
> > fixed size allocations that end up in the same cache.
> > 
> > While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
> > against these kinds of "type confusion" attacks, including for fixed
> > same-size heap objects, we can create a complementary deterministic
> > defense for dynamically sized allocations.
> > 
> > In order to isolate user-controllable sized allocations from system
> > allocations, introduce kmem_buckets_create(), which behaves like
> > kmem_cache_create(). (The next patch will introduce kmem_buckets_alloc(),
> > which behaves like kmem_cache_alloc().)
> So can I say the vision here would be to make all the kernel interfaces
> that handle user space input use separate caches? That would create a
> kind of "grey zone" between kernel space (trusted) and user space
> (untrusted) memory. I've also thought that hardening on this "border"
> could be more efficient and targeted than a mitigation that applies
> globally, e.g. CONFIG_RANDOM_KMALLOC_CACHES.

I think it ends up having a similar effect, yes. The more copies that
move to memdup_user(), the more coverage is created. The main point is
to just not share caches between different kinds of allocations. The
most abused version of this is userspace size-controllable allocations,
which this series targets. The existing caches (which could still be
used for type confusion attacks when the sizes are sufficiently similar)
have a good chance of being mitigated by CONFIG_RANDOM_KMALLOC_CACHES
already, so this proposed change is just complementary, IMO.

-Kees

-- 
Kees Cook