* [patch] x86, cacheinfo: dereferences before check @ 2010-06-01 7:11 Dan Carpenter 2010-06-01 8:23 ` Borislav Petkov 0 siblings, 1 reply; 6+ messages in thread From: Dan Carpenter @ 2010-06-01 7:11 UTC (permalink / raw) To: Thomas Gleixner Cc: Ingo Molnar, H. Peter Anvin, x86, Borislav Petkov, Dave Jones, linux-kernel, kernel-janitors This moves a couple dereferences after the checks in show_cache_disable() and store_cache_disable(). These were introduced fairly recently in 9350f982e4: "x86, cacheinfo: Reorganize AMD L3 cache structure" Signed-off-by: Dan Carpenter <error27@gmail.com> diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index 33eae20..1725a0e 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -399,12 +399,13 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; + struct pci_dev *dev; unsigned int reg = 0; if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; + dev = this_leaf->l3->dev; if (!dev) return -EINVAL; @@ -456,7 +457,7 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, const char *buf, size_t count, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; + struct pci_dev *dev; int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); unsigned long val = 0; @@ -469,6 +470,7 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, if (!capable(CAP_SYS_ADMIN)) return -EPERM; + dev = this_leaf->l3->dev; if (!dev) return -EINVAL; ^ permalink raw reply related [flat|nested] 6+ messages in thread
* Re: [patch] x86, cacheinfo: dereferences before check 2010-06-01 7:11 [patch] x86, cacheinfo: dereferences before check Dan Carpenter @ 2010-06-01 8:23 ` Borislav Petkov 2010-06-02 16:09 ` [PATCH] x86, cacheinfo: Carve out L3 cache slot accessors Borislav Petkov 2010-06-02 16:18 ` [PATCH -v2] " Borislav Petkov 0 siblings, 2 replies; 6+ messages in thread From: Borislav Petkov @ 2010-06-01 8:23 UTC (permalink / raw) To: Dan Carpenter Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, x86, Dave Jones, linux-kernel, kernel-janitors From: Dan Carpenter <error27@gmail.com> Date: Tue, Jun 01, 2010 at 03:11:27AM -0400 > This moves a couple dereferences after the checks in > show_cache_disable() and store_cache_disable(). These were introduced > fairly recently in 9350f982e4: "x86, cacheinfo: Reorganize AMD L3 cache > structure" Yes, this is correct, thanks. However I have a couple of patches which rewrite and cleanup this section even more and was waiting for the merge window to close before sending out. Stay tuned... -- Regards/Gruss, Boris. Operating Systems Research Center Advanced Micro Devices, Inc. ^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH] x86, cacheinfo: Carve out L3 cache slot accessors 2010-06-01 8:23 ` Borislav Petkov @ 2010-06-02 16:09 ` Borislav Petkov 2010-06-02 16:18 ` [PATCH -v2] " Borislav Petkov 1 sibling, 0 replies; 6+ messages in thread From: Borislav Petkov @ 2010-06-02 16:09 UTC (permalink / raw) To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin Cc: x86@kernel.org, Dan Carpenter, Dave Jones, linux-kernel@vger.kernel.org, kernel-janitors@vger.kernel.org This is in preparation for disabling L3 cache indices after having received correctable ECCs in the L3 cache. Now we allow for initial setting of a disabled index slot (write once) and deny writing new indices to it after it has been disabled. Also, we deny using both slots to disable one and the same index. Userspace can restore the previously disabled indices by rewriting those sysfs entries when booting. Cleanup and reorganize code while at it. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> --- arch/x86/kernel/cpu/intel_cacheinfo.c | 108 +++++++++++++++++++++++++-------- 1 files changed, 82 insertions(+), 26 deletions(-) diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index 33eae20..d6587ee 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -347,8 +347,8 @@ static struct amd_l3_cache * __cpuinit amd_init_l3_cache(int node) return l3; } -static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +static void __cpuinit amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, + int index) { int node; @@ -396,20 +396,39 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) this_leaf->l3 = l3_caches[node]; } +/* + * check whether a slot used for disabling an L3 index is occupied. + * @l3: L3 cache descriptor + * @slot: slot number (0..1) + * + * @returns: the disabled index if used or negative value if slot free. 
+ */ +int amd_get_l3_disable_slot(struct amd_l3_cache *l3, unsigned slot) +{ + unsigned int reg = 0; + + pci_read_config_dword(l3->dev, 0x1BC + slot * 4, &reg); + + /* check whether this slot is activated already */ + if (reg & (3UL << 30)) + return reg & 0xfff; + + return -1; +} + static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; - unsigned int reg = 0; + int index; if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (!dev) - return -EINVAL; + index = amd_get_l3_disable_slot(this_leaf->l3, slot); + if (index >= 0) + return sprintf(buf, "%d\n", index); - pci_read_config_dword(dev, 0x1BC + slot * 4, &reg); - return sprintf(buf, "0x%08x\n", reg); + return sprintf(buf, "FREE\n"); } #define SHOW_CACHE_DISABLE(slot) \ @@ -451,37 +470,74 @@ static void amd_l3_disable_index(struct amd_l3_cache *l3, int cpu, } } - -static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, - const char *buf, size_t count, - unsigned int slot) +/* + * disable a L3 cache index by using a disable-slot + * + * @l3: L3 cache descriptor + * @cpu: A CPU on the node containing the L3 cache + * @slot: slot number (0..1) + * @index: index to disable + * + * @return: 0 on success, error status on failure + */ +int amd_set_l3_disable_slot(struct amd_l3_cache *l3, int cpu, unsigned slot, + unsigned long index) { - struct pci_dev *dev = this_leaf->l3->dev; - int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - unsigned long val = 0; + int ret = 0; #define SUBCACHE_MASK (3UL << 20) #define SUBCACHE_INDEX 0xfff - if (!this_leaf->l3 || !this_leaf->l3->can_disable) + /* + * check whether this slot is already used or + * the index is already disabled + */ + ret = amd_get_l3_disable_slot(l3, slot); + if (ret > 0) return -EINVAL; + /* + * check whether the other slot has disabled the + * same index already + */ + if (index == amd_get_l3_disable_slot(l3, !slot)) + return -EINVAL; + + /* 
do not allow writes outside of allowed bits */ + if ((index & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || + ((index & SUBCACHE_INDEX) > l3->indices)) + return -EINVAL; + + amd_l3_disable_index(l3, cpu, slot, index); + + return 0; +} + +static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, + const char *buf, size_t count, + unsigned int slot) +{ + unsigned long val = 0; + int cpu, err = 0; + if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (!dev) + if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (strict_strtoul(buf, 10, &val) < 0) - return -EINVAL; + cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - /* do not allow writes outside of allowed bits */ - if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || - ((val & SUBCACHE_INDEX) > this_leaf->l3->indices)) + if (strict_strtoul(buf, 10, &val) < 0) return -EINVAL; - amd_l3_disable_index(this_leaf->l3, cpu, slot, val); - + err = amd_set_l3_disable_slot(this_leaf->l3, cpu, slot, val); + if (err) { + if (err == -EEXIST) + printk(KERN_WARNING "L3 disable slot %d in use!\n", + slot); + return err; + } return count; } @@ -502,7 +558,7 @@ static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644, #else /* CONFIG_CPU_SUP_AMD */ static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, int index) { }; #endif /* CONFIG_CPU_SUP_AMD */ @@ -518,7 +574,7 @@ __cpuinit cpuid4_cache_lookup_regs(int index, if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { amd_cpuid4(index, &eax, &ebx, &ecx); - amd_check_l3_disable(index, this_leaf); + amd_check_l3_disable(this_leaf, index); } else { cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx); } -- 1.6.4.4 -- Regards/Gruss, Boris. Operating Systems Research Center Advanced Micro Devices, Inc. ^ permalink raw reply related [flat|nested] 6+ messages in thread
* [PATCH -v2] x86, cacheinfo: Carve out L3 cache slot accessors 2010-06-01 8:23 ` Borislav Petkov 2010-06-02 16:09 ` [PATCH] x86, cacheinfo: Carve out L3 cache slot accessors Borislav Petkov @ 2010-06-02 16:18 ` Borislav Petkov 2010-06-10 0:10 ` [tip:x86/cpu] " tip-bot for Borislav Petkov 1 sibling, 1 reply; 6+ messages in thread From: Borislav Petkov @ 2010-06-02 16:18 UTC (permalink / raw) To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin Cc: Dan Carpenter, x86@kernel.org, Dave Jones, linux-kernel@vger.kernel.org, kernel-janitors@vger.kernel.org This is in preparation for disabling L3 cache indices after having received correctable ECCs in the L3 cache. Now we allow for initial setting of a disabled index slot (write once) and deny writing new indices to it after it has been disabled. Also, we deny using both slots to disable one and the same index. Userspace can restore the previously disabled indices by rewriting those sysfs entries when booting. Cleanup and reorganize code while at it. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> --- Please ignore the previous one which had a wrong check in amd_set_l3_disable_slot(). Thanks. arch/x86/kernel/cpu/intel_cacheinfo.c | 108 +++++++++++++++++++++++++-------- 1 files changed, 82 insertions(+), 26 deletions(-) diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index 33eae20..898c2f4 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -347,8 +347,8 @@ static struct amd_l3_cache * __cpuinit amd_init_l3_cache(int node) return l3; } -static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +static void __cpuinit amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, + int index) { int node; @@ -396,20 +396,39 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) this_leaf->l3 = l3_caches[node]; } +/* + * check whether a slot used for disabling an L3 index is occupied. 
+ * @l3: L3 cache descriptor + * @slot: slot number (0..1) + * + * @returns: the disabled index if used or negative value if slot free. + */ +int amd_get_l3_disable_slot(struct amd_l3_cache *l3, unsigned slot) +{ + unsigned int reg = 0; + + pci_read_config_dword(l3->dev, 0x1BC + slot * 4, &reg); + + /* check whether this slot is activated already */ + if (reg & (3UL << 30)) + return reg & 0xfff; + + return -1; +} + static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; - unsigned int reg = 0; + int index; if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (!dev) - return -EINVAL; + index = amd_get_l3_disable_slot(this_leaf->l3, slot); + if (index >= 0) + return sprintf(buf, "%d\n", index); - pci_read_config_dword(dev, 0x1BC + slot * 4, &reg); - return sprintf(buf, "0x%08x\n", reg); + return sprintf(buf, "FREE\n"); } #define SHOW_CACHE_DISABLE(slot) \ @@ -451,37 +470,74 @@ static void amd_l3_disable_index(struct amd_l3_cache *l3, int cpu, } } - -static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, - const char *buf, size_t count, - unsigned int slot) +/* + * disable a L3 cache index by using a disable-slot + * + * @l3: L3 cache descriptor + * @cpu: A CPU on the node containing the L3 cache + * @slot: slot number (0..1) + * @index: index to disable + * + * @return: 0 on success, error status on failure + */ +int amd_set_l3_disable_slot(struct amd_l3_cache *l3, int cpu, unsigned slot, + unsigned long index) { - struct pci_dev *dev = this_leaf->l3->dev; - int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - unsigned long val = 0; + int ret = 0; #define SUBCACHE_MASK (3UL << 20) #define SUBCACHE_INDEX 0xfff - if (!this_leaf->l3 || !this_leaf->l3->can_disable) + /* + * check whether this slot is already used or + * the index is already disabled + */ + ret = amd_get_l3_disable_slot(l3, slot); + if (ret >= 0) return -EINVAL; + /* + * check 
whether the other slot has disabled the + * same index already + */ + if (index == amd_get_l3_disable_slot(l3, !slot)) + return -EINVAL; + + /* do not allow writes outside of allowed bits */ + if ((index & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || + ((index & SUBCACHE_INDEX) > l3->indices)) + return -EINVAL; + + amd_l3_disable_index(l3, cpu, slot, index); + + return 0; +} + +static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, + const char *buf, size_t count, + unsigned int slot) +{ + unsigned long val = 0; + int cpu, err = 0; + if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (!dev) + if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (strict_strtoul(buf, 10, &val) < 0) - return -EINVAL; + cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - /* do not allow writes outside of allowed bits */ - if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || - ((val & SUBCACHE_INDEX) > this_leaf->l3->indices)) + if (strict_strtoul(buf, 10, &val) < 0) return -EINVAL; - amd_l3_disable_index(this_leaf->l3, cpu, slot, val); - + err = amd_set_l3_disable_slot(this_leaf->l3, cpu, slot, val); + if (err) { + if (err == -EEXIST) + printk(KERN_WARNING "L3 disable slot %d in use!\n", + slot); + return err; + } return count; } @@ -502,7 +558,7 @@ static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644, #else /* CONFIG_CPU_SUP_AMD */ static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, int index) { }; #endif /* CONFIG_CPU_SUP_AMD */ @@ -518,7 +574,7 @@ __cpuinit cpuid4_cache_lookup_regs(int index, if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { amd_cpuid4(index, &eax, &ebx, &ecx); - amd_check_l3_disable(index, this_leaf); + amd_check_l3_disable(this_leaf, index); } else { cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx); } -- 1.6.4.4 -- Regards/Gruss, Boris. Operating Systems Research Center Advanced Micro Devices, Inc. 
^ permalink raw reply related [flat|nested] 6+ messages in thread
* [tip:x86/cpu] x86, cacheinfo: Carve out L3 cache slot accessors 2010-06-02 16:18 ` [PATCH -v2] " Borislav Petkov @ 2010-06-10 0:10 ` tip-bot for Borislav Petkov 0 siblings, 0 replies; 6+ messages in thread From: tip-bot for Borislav Petkov @ 2010-06-10 0:10 UTC (permalink / raw) To: linux-tip-commits Cc: linux-kernel, hpa, mingo, tglx, hpa, bp, borislav.petkov Commit-ID: 8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0 Gitweb: http://git.kernel.org/tip/8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0 Author: Borislav Petkov <bp@amd64.org> AuthorDate: Wed, 2 Jun 2010 18:18:40 +0200 Committer: H. Peter Anvin <hpa@linux.intel.com> CommitDate: Wed, 9 Jun 2010 15:57:41 -0700 x86, cacheinfo: Carve out L3 cache slot accessors This is in preparation for disabling L3 cache indices after having received correctable ECCs in the L3 cache. Now we allow for initial setting of a disabled index slot (write once) and deny writing new indices to it after it has been disabled. Also, we deny using both slots to disable one and the same index. Userspace can restore the previously disabled indices by rewriting those sysfs entries when booting. Cleanup and reorganize code while at it. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> LKML-Reference: <20100602161840.GI18327@aftab> Signed-off-by: H. 
Peter Anvin <hpa@linux.intel.com> --- arch/x86/kernel/cpu/intel_cacheinfo.c | 108 +++++++++++++++++++++++++-------- 1 files changed, 82 insertions(+), 26 deletions(-) diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index 33eae20..898c2f4 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -347,8 +347,8 @@ static struct amd_l3_cache * __cpuinit amd_init_l3_cache(int node) return l3; } -static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +static void __cpuinit amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, + int index) { int node; @@ -396,20 +396,39 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) this_leaf->l3 = l3_caches[node]; } +/* + * check whether a slot used for disabling an L3 index is occupied. + * @l3: L3 cache descriptor + * @slot: slot number (0..1) + * + * @returns: the disabled index if used or negative value if slot free. 
+ */ +int amd_get_l3_disable_slot(struct amd_l3_cache *l3, unsigned slot) +{ + unsigned int reg = 0; + + pci_read_config_dword(l3->dev, 0x1BC + slot * 4, &reg); + + /* check whether this slot is activated already */ + if (reg & (3UL << 30)) + return reg & 0xfff; + + return -1; +} + static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; - unsigned int reg = 0; + int index; if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (!dev) - return -EINVAL; + index = amd_get_l3_disable_slot(this_leaf->l3, slot); + if (index >= 0) + return sprintf(buf, "%d\n", index); - pci_read_config_dword(dev, 0x1BC + slot * 4, &reg); - return sprintf(buf, "0x%08x\n", reg); + return sprintf(buf, "FREE\n"); } #define SHOW_CACHE_DISABLE(slot) \ @@ -451,37 +470,74 @@ static void amd_l3_disable_index(struct amd_l3_cache *l3, int cpu, } } - -static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, - const char *buf, size_t count, - unsigned int slot) +/* + * disable a L3 cache index by using a disable-slot + * + * @l3: L3 cache descriptor + * @cpu: A CPU on the node containing the L3 cache + * @slot: slot number (0..1) + * @index: index to disable + * + * @return: 0 on success, error status on failure + */ +int amd_set_l3_disable_slot(struct amd_l3_cache *l3, int cpu, unsigned slot, + unsigned long index) { - struct pci_dev *dev = this_leaf->l3->dev; - int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - unsigned long val = 0; + int ret = 0; #define SUBCACHE_MASK (3UL << 20) #define SUBCACHE_INDEX 0xfff - if (!this_leaf->l3 || !this_leaf->l3->can_disable) + /* + * check whether this slot is already used or + * the index is already disabled + */ + ret = amd_get_l3_disable_slot(l3, slot); + if (ret >= 0) return -EINVAL; + /* + * check whether the other slot has disabled the + * same index already + */ + if (index == amd_get_l3_disable_slot(l3, !slot)) + return -EINVAL; + + 
/* do not allow writes outside of allowed bits */ + if ((index & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || + ((index & SUBCACHE_INDEX) > l3->indices)) + return -EINVAL; + + amd_l3_disable_index(l3, cpu, slot, index); + + return 0; +} + +static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, + const char *buf, size_t count, + unsigned int slot) +{ + unsigned long val = 0; + int cpu, err = 0; + if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (!dev) + if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (strict_strtoul(buf, 10, &val) < 0) - return -EINVAL; + cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); - /* do not allow writes outside of allowed bits */ - if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) || - ((val & SUBCACHE_INDEX) > this_leaf->l3->indices)) + if (strict_strtoul(buf, 10, &val) < 0) return -EINVAL; - amd_l3_disable_index(this_leaf->l3, cpu, slot, val); - + err = amd_set_l3_disable_slot(this_leaf->l3, cpu, slot, val); + if (err) { + if (err == -EEXIST) + printk(KERN_WARNING "L3 disable slot %d in use!\n", + slot); + return err; + } return count; } @@ -502,7 +558,7 @@ static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644, #else /* CONFIG_CPU_SUP_AMD */ static void __cpuinit -amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) +amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, int index) { }; #endif /* CONFIG_CPU_SUP_AMD */ @@ -518,7 +574,7 @@ __cpuinit cpuid4_cache_lookup_regs(int index, if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { amd_cpuid4(index, &eax, &ebx, &ecx); - amd_check_l3_disable(index, this_leaf); + amd_check_l3_disable(this_leaf, index); } else { cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx); } ^ permalink raw reply related [flat|nested] 6+ messages in thread
* intel_cacheinfo: potential NULL dereference?
@ 2010-06-22 11:18 Jiri Slaby
2010-06-22 11:20 ` Jiri Slaby
0 siblings, 1 reply; 6+ messages in thread
From: Jiri Slaby @ 2010-06-22 11:18 UTC (permalink / raw)
To: borislav.petkov; +Cc: H. Peter Anvin, x86, Linux kernel mailing list
Hi,
commit 9350f982 changed the code so it looks like:
static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
const char *buf, size_t count,
unsigned int slot)
{
struct pci_dev *dev = this_leaf->l3->dev; <<1>>
int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
unsigned long val = 0;
#define SUBCACHE_MASK (3UL << 20)
#define SUBCACHE_INDEX 0xfff
if (!this_leaf->l3 || !this_leaf->l3->can_disable) <<2>>
return -EINVAL;
Stanse found that this_leaf->l3 is dereferenced at <<1>> but only checked
for being NULL at <<2>>. Is the check superfluous, or should the dev
assignment go after the check?
thanks,
--
js
^ permalink raw reply [flat|nested] 6+ messages in thread* Re: intel_cacheinfo: potential NULL dereference? 2010-06-22 11:18 intel_cacheinfo: potential NULL dereference? Jiri Slaby @ 2010-06-22 11:20 ` Jiri Slaby 2010-06-22 13:08 ` Borislav Petkov 0 siblings, 1 reply; 6+ messages in thread From: Jiri Slaby @ 2010-06-22 11:20 UTC (permalink / raw) To: borislav.petkov; +Cc: H. Peter Anvin, x86, Linux kernel mailing list On 06/22/2010 01:18 PM, Jiri Slaby wrote: > Hi, > > commit 9350f982 changed the code so it looks like: > static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, > const char *buf, size_t count, > unsigned int slot) > { > struct pci_dev *dev = this_leaf->l3->dev; <<1>> > int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); > unsigned long val = 0; > > #define SUBCACHE_MASK (3UL << 20) > #define SUBCACHE_INDEX 0xfff > > if (!this_leaf->l3 || !this_leaf->l3->can_disable) <<2>> > return -EINVAL; > > Stanse found, that this_leaf->l3 is dereferenced at <<1>>, but checked > for being NULL at <<2>>. Is the check superfluous or the dev assignment > should go after the check? Oh, and I have another report with same symptoms for show_cache_disable. -- js ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: intel_cacheinfo: potential NULL dereference? 2010-06-22 11:20 ` Jiri Slaby @ 2010-06-22 13:08 ` Borislav Petkov 2010-06-22 14:11 ` Jiri Slaby 0 siblings, 1 reply; 6+ messages in thread From: Borislav Petkov @ 2010-06-22 13:08 UTC (permalink / raw) To: Jiri Slaby; +Cc: H. Peter Anvin, x86@kernel.org, Linux kernel mailing list From: Jiri Slaby <jirislaby@gmail.com> Date: Tue, Jun 22, 2010 at 07:20:14AM -0400 > On 06/22/2010 01:18 PM, Jiri Slaby wrote: > > Hi, > > > > commit 9350f982 changed the code so it looks like: > > static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, > > const char *buf, size_t count, > > unsigned int slot) > > { > > struct pci_dev *dev = this_leaf->l3->dev; <<1>> > > int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); > > unsigned long val = 0; > > > > #define SUBCACHE_MASK (3UL << 20) > > #define SUBCACHE_INDEX 0xfff > > > > if (!this_leaf->l3 || !this_leaf->l3->can_disable) <<2>> > > return -EINVAL; > > > > Stanse found, that this_leaf->l3 is dereferenced at <<1>>, but checked > > for being NULL at <<2>>. Is the check superfluous or the dev assignment > > should go after the check? > > Oh, and I have another report with same symptoms for show_cache_disable. Right, so I have a patch in tip/x86/cpu (8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0) which reorganizes and cleans up that code. With it, all possible checks land in amd_check_l3_disable() and if they have all been passed, the PCI dev is guaranteed to be properly set. So no need for sprinkling additional NULL checks in the code. How's that? -- Regards/Gruss, Boris. Advanced Micro Devices GmbH Einsteinring 24, 85609 Dornach General Managers: Alberto Bozzo, Andrew Bowd Registration: Dornach, Gemeinde Aschheim, Landkreis Muenchen Registergericht Muenchen, HRB Nr. 43632 ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: intel_cacheinfo: potential NULL dereference? 2010-06-22 13:08 ` Borislav Petkov @ 2010-06-22 14:11 ` Jiri Slaby 2010-06-22 17:09 ` H. Peter Anvin 0 siblings, 1 reply; 6+ messages in thread From: Jiri Slaby @ 2010-06-22 14:11 UTC (permalink / raw) To: Borislav Petkov; +Cc: H. Peter Anvin, x86@kernel.org, Linux kernel mailing list On 06/22/2010 03:08 PM, Borislav Petkov wrote: > From: Jiri Slaby <jirislaby@gmail.com> > Date: Tue, Jun 22, 2010 at 07:20:14AM -0400 > >> On 06/22/2010 01:18 PM, Jiri Slaby wrote: >>> Stanse found, that this_leaf->l3 is dereferenced at <<1>>, but checked >>> for being NULL at <<2>>. Is the check superfluous or the dev assignment >>> should go after the check? >> >> Oh, and I have another report with same symptoms for show_cache_disable. > > Right, so I have a patch in tip/x86/cpu > (8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0) which reorganizes > and cleans up that code. With it, all possible checks land in > amd_check_l3_disable() and if they have all been passed, the PCI dev is > guaranteed to be properly set. So no need for sprinkling additional NULL > checks in the code. > > How's that? Looks good. thanks, -- js ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: intel_cacheinfo: potential NULL dereference? 2010-06-22 14:11 ` Jiri Slaby @ 2010-06-22 17:09 ` H. Peter Anvin 2010-06-22 19:19 ` Borislav Petkov 0 siblings, 1 reply; 6+ messages in thread From: H. Peter Anvin @ 2010-06-22 17:09 UTC (permalink / raw) To: Jiri Slaby; +Cc: Borislav Petkov, x86@kernel.org, Linux kernel mailing list On 06/22/2010 07:11 AM, Jiri Slaby wrote: > On 06/22/2010 03:08 PM, Borislav Petkov wrote: >> From: Jiri Slaby <jirislaby@gmail.com> >> Date: Tue, Jun 22, 2010 at 07:20:14AM -0400 >> >>> On 06/22/2010 01:18 PM, Jiri Slaby wrote: >>>> Stanse found, that this_leaf->l3 is dereferenced at <<1>>, but checked >>>> for being NULL at <<2>>. Is the check superfluous or the dev assignment >>>> should go after the check? >>> >>> Oh, and I have another report with same symptoms for show_cache_disable. >> >> Right, so I have a patch in tip/x86/cpu >> (8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0) which reorganizes >> and cleans up that code. With it, all possible checks land in >> amd_check_l3_disable() and if they have all been passed, the PCI dev is >> guaranteed to be properly set. So no need for sprinkling additional NULL >> checks in the code. >> >> How's that? > > Looks good. > Do we need a patch for the existing code to go into -linus and -stable, though? -hpa ^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: intel_cacheinfo: potential NULL dereference? 2010-06-22 17:09 ` H. Peter Anvin @ 2010-06-22 19:19 ` Borislav Petkov 2010-06-22 19:45 ` [PATCH -v2] x86, cacheinfo: Carve out L3 cache slot accessors Borislav Petkov 0 siblings, 1 reply; 6+ messages in thread From: Borislav Petkov @ 2010-06-22 19:19 UTC (permalink / raw) To: H. Peter Anvin; +Cc: Jiri Slaby, x86@kernel.org, Linux kernel mailing list From: "H. Peter Anvin" <hpa@zytor.com> Date: Tue, Jun 22, 2010 at 01:09:40PM -0400 > On 06/22/2010 07:11 AM, Jiri Slaby wrote: > > On 06/22/2010 03:08 PM, Borislav Petkov wrote: > >> From: Jiri Slaby <jirislaby@gmail.com> > >> Date: Tue, Jun 22, 2010 at 07:20:14AM -0400 > >> > >>> On 06/22/2010 01:18 PM, Jiri Slaby wrote: > >>>> Stanse found, that this_leaf->l3 is dereferenced at <<1>>, but checked > >>>> for being NULL at <<2>>. Is the check superfluous or the dev assignment > >>>> should go after the check? > >>> > >>> Oh, and I have another report with same symptoms for show_cache_disable. > >> > >> Right, so I have a patch in tip/x86/cpu > >> (8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0) which reorganizes > >> and cleans up that code. With it, all possible checks land in > >> amd_check_l3_disable() and if they have all been passed, the PCI dev is > >> guaranteed to be properly set. So no need for sprinkling additional NULL > >> checks in the code. > >> > >> How's that? > > > > Looks good. > > > > Do we need a patch for the existing code to go into -linus and -stable, > though? Right, so this is not such a bad idea, actually. Here's a small fix, just in case. This should go to -linus so that the issue is patched up for .35 time. No need for -stable since this code came into the last merge window. The 8cc1176e5de534d55cb26ff0cef3fd0d6ad8c3c0 in tip/x86/cpu was meant for the .36 merge window so I'll readjust it relative to the small fix below. 
--- From: Borislav Petkov <borislav.petkov@amd.com> Date: Tue, 22 Jun 2010 21:06:51 +0200 Subject: [PATCH] x86, cacheinfo: Move dereferencing after NULL check The sysfs handlers {show,store}_cache_disable() should dereference the pci dev pointer after having checked the previous l3 pointer in the chain first. Spotted-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> --- arch/x86/kernel/cpu/intel_cacheinfo.c | 8 +++----- 1 files changed, 3 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c index 33eae20..a7e358f 100644 --- a/arch/x86/kernel/cpu/intel_cacheinfo.c +++ b/arch/x86/kernel/cpu/intel_cacheinfo.c @@ -399,16 +399,15 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf) static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; unsigned int reg = 0; if (!this_leaf->l3 || !this_leaf->l3->can_disable) return -EINVAL; - if (!dev) + if (!this_leaf->l3->dev) return -EINVAL; - pci_read_config_dword(dev, 0x1BC + slot * 4, &reg); + pci_read_config_dword(this_leaf->l3->dev, 0x1BC + slot * 4, &reg); return sprintf(buf, "0x%08x\n", reg); } @@ -456,7 +455,6 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, const char *buf, size_t count, unsigned int slot) { - struct pci_dev *dev = this_leaf->l3->dev; int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map)); unsigned long val = 0; @@ -469,7 +467,7 @@ static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf, if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (!dev) + if (!this_leaf->l3->dev) return -EINVAL; if (strict_strtoul(buf, 10, &val) < 0) -- 1.6.4.4 -- Regards/Gruss, Boris. Advanced Micro Devices GmbH Einsteinring 24, 85609 Dornach General Managers: Alberto Bozzo, Andrew Bowd Registration: Dornach, Gemeinde Aschheim, Landkreis Muenchen Registergericht Muenchen, HRB Nr. 
43632 ^ permalink raw reply related [flat|nested] 6+ messages in thread
* [PATCH -v2] x86, cacheinfo: Carve out L3 cache slot accessors
  2010-06-22 19:19 ` Borislav Petkov
@ 2010-06-22 19:45 ` Borislav Petkov
  0 siblings, 0 replies; 6+ messages in thread
From: Borislav Petkov @ 2010-06-22 19:45 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: x86@kernel.org, Jiri Slaby, Linux kernel mailing list

This is in preparation for disabling L3 cache indices after having
received correctable ECCs in the L3 cache. Now we allow for initial
setting of a disabled index slot (write once) and deny writing new
indices to it after it has been disabled. Also, we deny using both slots
to disable one and the same index.

Userspace can restore the previously disabled indices by rewriting those
sysfs entries when booting.

Cleanup and reorganize code while at it.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100602161840.GI18327@aftab>
---
Aimed-at: 2.6.36

 arch/x86/kernel/cpu/intel_cacheinfo.c |  106 +++++++++++++++++++++++++--------
 1 files changed, 82 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_cacheinfo.c b/arch/x86/kernel/cpu/intel_cacheinfo.c
index a7e358f..898c2f4 100644
--- a/arch/x86/kernel/cpu/intel_cacheinfo.c
+++ b/arch/x86/kernel/cpu/intel_cacheinfo.c
@@ -347,8 +347,8 @@ static struct amd_l3_cache * __cpuinit amd_init_l3_cache(int node)
 	return l3;
 }
 
-static void __cpuinit
-amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf)
+static void __cpuinit amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf,
+					   int index)
 {
 	int node;
 
@@ -396,19 +396,39 @@ amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf)
 	this_leaf->l3 = l3_caches[node];
 }
 
+/*
+ * check whether a slot used for disabling an L3 index is occupied.
+ * @l3: L3 cache descriptor
+ * @slot: slot number (0..1)
+ *
+ * @returns: the disabled index if used or negative value if slot free.
+ */
+int amd_get_l3_disable_slot(struct amd_l3_cache *l3, unsigned slot)
+{
+	unsigned int reg = 0;
+
+	pci_read_config_dword(l3->dev, 0x1BC + slot * 4, &reg);
+
+	/* check whether this slot is activated already */
+	if (reg & (3UL << 30))
+		return reg & 0xfff;
+
+	return -1;
+}
+
 static ssize_t show_cache_disable(struct _cpuid4_info *this_leaf, char *buf,
 				  unsigned int slot)
 {
-	unsigned int reg = 0;
+	int index;
 
 	if (!this_leaf->l3 || !this_leaf->l3->can_disable)
 		return -EINVAL;
 
-	if (!this_leaf->l3->dev)
-		return -EINVAL;
+	index = amd_get_l3_disable_slot(this_leaf->l3, slot);
+	if (index >= 0)
+		return sprintf(buf, "%d\n", index);
 
-	pci_read_config_dword(this_leaf->l3->dev, 0x1BC + slot * 4, &reg);
-	return sprintf(buf, "0x%08x\n", reg);
+	return sprintf(buf, "FREE\n");
 }
 
 #define SHOW_CACHE_DISABLE(slot)					\
@@ -450,36 +470,74 @@ static void amd_l3_disable_index(struct amd_l3_cache *l3, int cpu,
 	}
 }
 
-
-static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
-				   const char *buf, size_t count,
-				   unsigned int slot)
+/*
+ * disable a L3 cache index by using a disable-slot
+ *
+ * @l3: L3 cache descriptor
+ * @cpu: A CPU on the node containing the L3 cache
+ * @slot: slot number (0..1)
+ * @index: index to disable
+ *
+ * @return: 0 on success, error status on failure
+ */
+int amd_set_l3_disable_slot(struct amd_l3_cache *l3, int cpu, unsigned slot,
+			    unsigned long index)
 {
-	int cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
-	unsigned long val = 0;
+	int ret = 0;
 
 #define SUBCACHE_MASK	(3UL << 20)
 #define SUBCACHE_INDEX	0xfff
 
-	if (!this_leaf->l3 || !this_leaf->l3->can_disable)
+	/*
+	 * check whether this slot is already used or
+	 * the index is already disabled
+	 */
+	ret = amd_get_l3_disable_slot(l3, slot);
+	if (ret >= 0)
 		return -EINVAL;
 
+	/*
+	 * check whether the other slot has disabled the
+	 * same index already
+	 */
+	if (index == amd_get_l3_disable_slot(l3, !slot))
+		return -EINVAL;
+
+	/* do not allow writes outside of allowed bits */
+	if ((index & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) ||
+	    ((index & SUBCACHE_INDEX) > l3->indices))
+		return -EINVAL;
+
+	amd_l3_disable_index(l3, cpu, slot, index);
+
+	return 0;
+}
+
+static ssize_t store_cache_disable(struct _cpuid4_info *this_leaf,
+				   const char *buf, size_t count,
+				   unsigned int slot)
+{
+	unsigned long val = 0;
+	int cpu, err = 0;
+
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	if (!this_leaf->l3->dev)
+	if (!this_leaf->l3 || !this_leaf->l3->can_disable)
 		return -EINVAL;
 
-	if (strict_strtoul(buf, 10, &val) < 0)
-		return -EINVAL;
+	cpu = cpumask_first(to_cpumask(this_leaf->shared_cpu_map));
 
-	/* do not allow writes outside of allowed bits */
-	if ((val & ~(SUBCACHE_MASK | SUBCACHE_INDEX)) ||
-	    ((val & SUBCACHE_INDEX) > this_leaf->l3->indices))
+	if (strict_strtoul(buf, 10, &val) < 0)
 		return -EINVAL;
 
-	amd_l3_disable_index(this_leaf->l3, cpu, slot, val);
-
+	err = amd_set_l3_disable_slot(this_leaf->l3, cpu, slot, val);
+	if (err) {
+		if (err == -EEXIST)
+			printk(KERN_WARNING "L3 disable slot %d in use!\n",
+			       slot);
+		return err;
+	}
 	return count;
 }
 
@@ -500,7 +558,7 @@ static struct _cache_attr cache_disable_1 = __ATTR(cache_disable_1, 0644,
 
 #else	/* CONFIG_CPU_SUP_AMD */
 static void __cpuinit
-amd_check_l3_disable(int index, struct _cpuid4_info_regs *this_leaf)
+amd_check_l3_disable(struct _cpuid4_info_regs *this_leaf, int index)
 {
 };
 #endif /* CONFIG_CPU_SUP_AMD */
@@ -516,7 +574,7 @@ __cpuinit cpuid4_cache_lookup_regs(int index,
 
 	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
 		amd_cpuid4(index, &eax, &ebx, &ecx);
-		amd_check_l3_disable(index, this_leaf);
+		amd_check_l3_disable(this_leaf, index);
 	} else {
 		cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &edx);
 	}
--
1.6.4.4

--
Regards/Gruss,
Boris.

Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach
General Managers: Alberto Bozzo, Andrew Bowd
Registration: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632
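The slot-occupancy test that the new amd_get_l3_disable_slot() performs can be isolated as a pure function over the register value, which makes the encoding the patch relies on easy to see: bits 31:30 mark the slot as in use, and the low 12 bits hold the disabled cache index. In the real code the value comes from pci_read_config_dword(); in this standalone sketch it is simply passed in.

```c
#include <assert.h>

/* Decode one L3 index-disable slot register, mirroring the check in
 * amd_get_l3_disable_slot() from the patch above.  Returns the disabled
 * index if the slot is in use, or -1 if the slot is free. */
static int l3_disable_slot_index(unsigned int reg)
{
	/* bits 31:30 non-zero means the slot is activated already */
	if (reg & (3UL << 30))
		return reg & 0xfff;	/* low 12 bits: the disabled index */

	return -1;			/* slot is free */
}
```

This is why the sysfs accessors can distinguish a free slot ("FREE") from an occupied one (printed as a decimal index) without keeping any extra state: the hardware register itself encodes both.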
end of thread, other threads:[~2010-06-22 19:44 UTC | newest]

Thread overview: 6+ messages (jump to the message on this page):
2010-06-01  7:11 [patch] x86, cacheinfo: dereferences before check Dan Carpenter
2010-06-01  8:23 ` Borislav Petkov
2010-06-02 16:09   ` [PATCH] x86, cacheinfo: Carve out L3 cache slot accessors Borislav Petkov
2010-06-02 16:18   ` [PATCH -v2] " Borislav Petkov
2010-06-10  0:10     ` [tip:x86/cpu] " tip-bot for Borislav Petkov

-- strict thread matches above, loose matches on Subject: below --
2010-06-22 11:18 intel_cacheinfo: potential NULL dereference? Jiri Slaby
2010-06-22 11:20 ` Jiri Slaby
2010-06-22 13:08   ` Borislav Petkov
2010-06-22 14:11     ` Jiri Slaby
2010-06-22 17:09       ` H. Peter Anvin
2010-06-22 19:19         ` Borislav Petkov
2010-06-22 19:45           ` [PATCH -v2] x86, cacheinfo: Carve out L3 cache slot accessors Borislav Petkov