public inbox for linux-arm-kernel@lists.infradead.org
From: Suzuki K Poulose <suzuki.poulose@arm.com>
To: Will Deacon <will@kernel.org>,
	cki-project@redhat.com, mike.leach@linaro.org,
	james.clark@arm.com
Cc: catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org,
	bgoncalv@redhat.com, Jonathan.Cameron@huawei.com
Subject: Re: ❌ FAIL (MISSED 2 of 87): Test report for for-kernelci (6.9.0-rc4, arm-next, 6a71d290)
Date: Tue, 23 Apr 2024 12:06:53 +0100	[thread overview]
Message-ID: <72df68ca-80d8-4f5e-9dbc-ccaaa8468147@arm.com> (raw)
In-Reply-To: <20240422170821.GB6223@willie-the-truck>

Hi Will

On 22/04/2024 18:08, Will Deacon wrote:
> [+Suzuki, Mike and James]

Thanks for looping us in.

> 
> On Fri, Apr 19, 2024 at 08:30:09PM -0000, cki-project@redhat.com wrote:
>> Hi, we tested your kernel and here are the results:
>>
>>      Overall result: FAILED
>>               Merge: OK
>>             Compile: OK
>>                Test: FAILED
>>
>>
>> Kernel information:
>>      Commit message: Merge branch 'for-next/core' into for-kernelci
>>
>> You can find all the details about the test run at
>>      https://datawarehouse.cki-project.org/kcidb/checkouts/redhat:1260423326
>>
>> One or more kernel tests failed:
>>      Unrecognized or new issues:
>>          Boot test
>>               aarch64
>>                     Logs: https://datawarehouse.cki-project.org/kcidb/tests/redhat:1260423326-aarch64-kernel_upt_4
>>                     Non-passing ran subtests:
>>                         ❌ FAIL distribution/kpkginstall/journalctl-check
> 
> I'm not sure if it's the root cause, but the logs here have a tonne of
> coresight ETM splats (I included one at the end of the mail).
> 
> https://s3.amazonaws.com/arr-cki-prod-trusted-artifacts/trusted-artifacts/1260423326/test_aarch64/6670265232/artifacts/run.done.01/job.01/recipes/15985953/tasks/5/results/1713555252/logs/journalctl.log
> 
> Jonathan has recently done a bunch of work fixing up the ->parent
> pointers for PMU devices, but I don't see anything going near the
> coresight drivers so this is probably unrelated.
> 
> Will
> 
> --->8
> 
> Apr 19 15:33:38 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: CSCFG registered etm103
> Apr 19 15:33:38 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: coresight etm103: CPU103: etm v4.1 initialized
> Apr 19 15:33:38 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: sysfs: cannot create duplicate filename '/devices/system/container/ACPI0004:00/ARMHC9FE:00/funnel0/connections/in:0'

That looks like a buggy ACPI table to me. We don't reach anywhere near 
the PMU part yet. This happens during the initial probe, when the driver
discovers the connections and exposes them in sysfs. I have seen a
similar splat in the past (from a similar Cavium/Marvell platform).

It looks like one of the funnels has duplicate "input port 0" 
connections. Most likely two different ETMs have described
their output ports as connected to the same input port (0) of a
funnel.
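The failure mode can be sketched in plain Python (illustrative only, not kernel code; the device names are hypothetical): each funnel input port can have at most one upstream source, so a second ETM claiming the same port collides, just as sysfs_create_link() fails with -EEXIST (-17) in the log above.

```python
def check_connections(connections):
    """Flag duplicate claims on a funnel input port.

    connections: list of (source, funnel, input_port) tuples, e.g. as
    described by the ACPI tables. Returns a list of
    (duplicate_source, original_source, funnel, port) clashes.
    """
    seen = {}
    duplicates = []
    for source, funnel, port in connections:
        key = (funnel, port)
        if key in seen:
            # Second device claiming the same input port -> the sysfs
            # link 'in:<port>' would already exist (-EEXIST).
            duplicates.append((source, seen[key], funnel, port))
        else:
            seen[key] = source
    return duplicates

# Hypothetical topology: two ETMs both wired to funnel0 input port 0.
conns = [
    ("etm102", "funnel0", 0),
    ("etm103", "funnel0", 0),  # duplicate -> 'in:0' clash in sysfs
]
print(check_connections(conns))  # [('etm103', 'etm102', 'funnel0', 0)]
```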

Has this platform ever run CoreSight with ACPI tables in the past?

Suzuki



> Apr 19 15:33:38 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: mlx5_core 0000:0b:00.0: Adding to iommu group 1
> Apr 19 15:33:38 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: CPU: 42 PID: 2528 Comm: (udev-worker) Not tainted 6.9.0-rc4 #1
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: mlx5_core 0000:0b:00.0: firmware version: 14.21.1000
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: Hardware name: HPE Apollo 70             /C01_APACHE_MB         , BIOS L50_5.13_1.16 07/29/2020
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: Call trace:
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  dump_backtrace+0xdc/0x140
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: mlx5_core 0000:0b:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  show_stack+0x20/0x40
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  dump_stack_lvl+0x60/0x80
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  dump_stack+0x18/0x28
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  sysfs_warn_dup+0x6c/0x90
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  sysfs_do_create_link_sd+0xf8/0x108
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  sysfs_create_link_sd+0x1c/0x30
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  sysfs_add_link_to_group+0x44/0x80
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  coresight_add_sysfs_link+0xa0/0x118 [coresight]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  coresight_make_links+0xa0/0x108 [coresight]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  coresight_orphan_match+0xf4/0x138 [coresight]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  bus_for_each_dev+0x84/0x100
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  coresight_register+0x178/0x270 [coresight]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: ipmi_ssif i2c-IPI0001:06: IPMI message handler: Found new BMC (man_id: 0x00b3d1, prod_id: 0x0202, dev_id: 0x20)
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  etm4_add_coresight_dev.isra.0+0x14c/0x270 [coresight_etm4x]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  etm4_probe+0x108/0x188 [coresight_etm4x]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  etm4_probe_platform_dev+0xd8/0x188 [coresight_etm4x]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  platform_probe+0x70/0xe8
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  really_probe+0xc8/0x3a0
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  __driver_probe_device+0x84/0x160
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  driver_probe_device+0x44/0x130
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  __driver_attach+0xcc/0x208
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  bus_for_each_dev+0x84/0x100
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  driver_attach+0x2c/0x40
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  bus_add_driver+0x11c/0x238
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  driver_register+0x70/0x138
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  __platform_driver_register+0x30/0x48
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  etm4x_init+0xec/0xff8 [coresight_etm4x]
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  do_one_initcall+0x60/0x318
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  do_init_module+0x68/0x260
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  load_module+0x62c/0x760
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  init_module_from_file+0x90/0xe0
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  idempotent_init_module+0x18c/0x2b8
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  __arm64_sys_finit_module+0x6c/0xe0
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  invoke_syscall+0x74/0x100
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  el0_svc_common.constprop.0+0xc8/0xf0
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  do_el0_svc+0x24/0x38
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  el0_svc+0x3c/0x158
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  el0t_64_sync_handler+0x120/0x138
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel:  el0t_64_sync+0x194/0x198
> Apr 19 15:33:39 hpe-apollo-cn99xx-03.khw.eng.rdu2.dc.redhat.com kernel: coresight-etm4x ARMHC500:20: probe with driver coresight-etm4x failed with error -17


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


Thread overview: 8+ messages
2024-04-19 20:30 ❌ FAIL (MISSED 2 of 87): Test report for for-kernelci (6.9.0-rc4, arm-next, 6a71d290) cki-project
2024-04-22 17:08 ` Will Deacon
2024-04-23 11:06   ` Suzuki K Poulose [this message]
2024-04-23 11:14   ` James Clark
2024-04-23 11:17     ` James Clark
2024-04-23 11:49       ` Suzuki K Poulose
     [not found]         ` <2d319f18-c279-48de-88cd-add456fe731en@redhat.com>
2024-04-23 15:23           ` Jeremy Linton
2024-04-23 15:36             ` Jeremy Linton
