* [PATCH v1 0/1] Host side BMC device driver
From: Ninad Palsule @ 2023-08-21 18:35 UTC
To: linux-aspeed
Hello,
This patch adds support for a host side BMC device driver.
Ninad Palsule (1):
soc/aspeed: Add host side BMC device driver
drivers/soc/aspeed/Kconfig | 9 +
drivers/soc/aspeed/Makefile | 1 +
drivers/soc/aspeed/aspeed-host-bmc-dev.c | 251 +++++++++++++++++++++++
3 files changed, 261 insertions(+)
create mode 100644 drivers/soc/aspeed/aspeed-host-bmc-dev.c
--
2.39.2
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: Ninad Palsule @ 2023-08-21 18:35 UTC
To: linux-aspeed
This driver is taken from ASPEED's 5.15 SDK kernel.

The AST2600 supports two VUARTs over the LPC bus and two over the PCIe
bus. This patch adds a host side driver for the PCIe based VUARTs.
Testing:
- This was tested on an IBM Rainier system with a BMC. It requires the
  BMC side device driver, which is available in ASPEED's 5.15 SDK
  kernel.
[ 1.313775][ T985] ASPEED BMC DEVICE 0002:02:01.0: enabling device (0140 -> 0142)
[ 1.314381][ T985] 0002:02:01.0: ttyS0 at MMIO 0x600c100100fe0 (irq = 91, base_baud = 115200) is a 16550A
[ 1.314607][ T985] 0002:02:01.0: ttyS1 at MMIO 0x600c100100be0 (irq = 91, base_baud = 115200) is a 16550A
- The host kernel is loaded through the IBM OpenPOWER petitboot boot
  loader.
- Characters echoed from the BMC tty device are seen on the host side
  tty device, and vice versa.
- BMC side
root@p10bmc:~# echo "123" > /dev/ttyPCIVUART0
root@p10bmc:~# echo "Hello" > /dev/ttyPCIVUART0
- Host side
# cat /dev/ttyS0
123
Hello
Co-developed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Ninad Palsule <ninad@linux.ibm.com>
Tested-by: Ninad Palsule <ninad@linux.ibm.com>
---
drivers/soc/aspeed/Kconfig | 9 +
drivers/soc/aspeed/Makefile | 1 +
drivers/soc/aspeed/aspeed-host-bmc-dev.c | 251 +++++++++++++++++++++++
3 files changed, 261 insertions(+)
create mode 100644 drivers/soc/aspeed/aspeed-host-bmc-dev.c
diff --git a/drivers/soc/aspeed/Kconfig b/drivers/soc/aspeed/Kconfig
index f579ee0b5afa..c2b11fa8f875 100644
--- a/drivers/soc/aspeed/Kconfig
+++ b/drivers/soc/aspeed/Kconfig
@@ -52,6 +52,15 @@ config ASPEED_SOCINFO
help
Say yes to support decoding of ASPEED BMC information.
+config ASPEED_HOST_BMC_DEV
+ bool "ASPEED SoC Host BMC device driver"
+ default ARCH_ASPEED
+ select SOC_BUS
+ default ARCH_ASPEED
+ help
+ Provides a driver to control the PCIe based VUARTs. This is a host
+ side BMC device driver.
+
endmenu
endif
diff --git a/drivers/soc/aspeed/Makefile b/drivers/soc/aspeed/Makefile
index b35d74592964..db6acff9fa52 100644
--- a/drivers/soc/aspeed/Makefile
+++ b/drivers/soc/aspeed/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_ASPEED_LPC_SNOOP) += aspeed-lpc-snoop.o
obj-$(CONFIG_ASPEED_UART_ROUTING) += aspeed-uart-routing.o
obj-$(CONFIG_ASPEED_P2A_CTRL) += aspeed-p2a-ctrl.o
obj-$(CONFIG_ASPEED_SOCINFO) += aspeed-socinfo.o
+obj-$(CONFIG_ASPEED_HOST_BMC_DEV) += aspeed-host-bmc-dev.o
diff --git a/drivers/soc/aspeed/aspeed-host-bmc-dev.c b/drivers/soc/aspeed/aspeed-host-bmc-dev.c
new file mode 100644
index 000000000000..9f23276a9787
--- /dev/null
+++ b/drivers/soc/aspeed/aspeed-host-bmc-dev.c
@@ -0,0 +1,251 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+// Copyright (C) ASPEED Technology Inc.
+
+#include <linux/init.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/serial_core.h>
+#include <linux/serial_8250.h>
+
+#define BMC_MULTI_MSI 32
+#define BMC_MSI_IDX_BASE 4
+
+#define DRIVER_NAME "ASPEED BMC DEVICE"
+
+#define VUART_MAX_PARMS 2
+
+#define BAR_MEM 0
+#define BAR_MSG 1
+#define BAR_MAX 2
+
+struct bar {
+ unsigned long bar_base;
+ unsigned long bar_size;
+ void __iomem *bar_ioremap;
+};
+
+struct aspeed_pci_bmc_dev {
+ struct device *dev;
+
+ struct bar bars[BAR_MAX];
+ int lines[VUART_MAX_PARMS];
+
+ int legacy_irq;
+};
+
+static uint16_t vuart_ioport[VUART_MAX_PARMS];
+static uint16_t vuart_sirq[VUART_MAX_PARMS];
+
+static int aspeed_pci_host_bmc_device_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct uart_8250_port uart[VUART_MAX_PARMS];
+ struct device *dev = &pdev->dev;
+ struct aspeed_pci_bmc_dev *pci_bmc_dev;
+ int rc = 0;
+ int i = 0;
+ int nr_entries;
+ u16 config_cmd_val;
+
+ pci_bmc_dev = kzalloc(sizeof(*pci_bmc_dev), GFP_KERNEL);
+ if (!pci_bmc_dev) {
+ rc = -ENOMEM;
+ dev_err(dev, "kmalloc() returned NULL memory.\n");
+ goto out_err;
+ }
+
+ rc = pcim_enable_device(pdev);
+ if (rc != 0) {
+ dev_err(dev, "pcim_enable_device() returned error %d\n", rc);
+ goto out_free0;
+ }
+
+ /* set PCI host mastering */
+ pci_set_master(pdev);
+
+ /*
+ * Try to allocate max MSI. If multiple MSI is not possible then use
+ * the legacy interrupt. Note: PowerPC doesn't support multiple MSI.
+ */
+ nr_entries = pci_alloc_irq_vectors(pdev, BMC_MULTI_MSI, BMC_MULTI_MSI,
+ PCI_IRQ_MSIX | PCI_IRQ_MSI);
+
+ if (nr_entries < 0) {
+ pci_bmc_dev->legacy_irq = 1;
+ pci_read_config_word(pdev, PCI_COMMAND, &config_cmd_val);
+ config_cmd_val &= ~PCI_COMMAND_INTX_DISABLE;
+ pci_write_config_word((struct pci_dev *)pdev, PCI_COMMAND, config_cmd_val);
+
+ } else {
+ pci_bmc_dev->legacy_irq = 0;
+ pci_read_config_word(pdev, PCI_COMMAND, &config_cmd_val);
+ config_cmd_val |= PCI_COMMAND_INTX_DISABLE;
+ pci_write_config_word((struct pci_dev *)pdev, PCI_COMMAND, config_cmd_val);
+ rc = pci_irq_vector(pdev, BMC_MSI_IDX_BASE);
+ if (rc < 0) {
+ dev_err(dev, "pci_irq_vector() returned error %d msi=%u msix=%u\n",
+ -rc, pdev->msi_enabled, pdev->msix_enabled);
+ goto out_free1;
+ }
+ pdev->irq = rc;
+ }
+
+ /* Get access to the BARs */
+ for (i = 0; i < BAR_MAX; i++) {
+ rc = pci_request_region(pdev, i, DRIVER_NAME);
+ if (rc < 0) {
+ dev_err(dev, "pci_request_region(%d) returned error %d\n", i, rc);
+ goto out_unreg;
+ }
+
+ pci_bmc_dev->bars[i].bar_base = pci_resource_start(pdev, i);
+ pci_bmc_dev->bars[i].bar_size = pci_resource_len(pdev, i);
+ pci_bmc_dev->bars[i].bar_ioremap = pci_ioremap_bar(pdev, i);
+ if (pci_bmc_dev->bars[i].bar_ioremap == NULL) {
+ dev_err(dev, "pci_ioremap_bar(%d) failed\n", i);
+ rc = -ENOMEM;
+ goto out_unreg;
+ }
+ }
+
+ /* ERRTA40: dummy read */
+ (void)__raw_readl((void __iomem *)pci_bmc_dev->bars[BAR_MSG].bar_ioremap);
+
+ pci_set_drvdata(pdev, pci_bmc_dev);
+
+ /* setup VUART */
+ memset(uart, 0, sizeof(uart));
+
+ for (i = 0; i < VUART_MAX_PARMS; i++) {
+ vuart_ioport[i] = 0x3F8 - (i * 0x100);
+ vuart_sirq[i] = 0x10 + 4 - i - BMC_MSI_IDX_BASE;
+ uart[i].port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ;
+ uart[i].port.uartclk = 115200 * 16;
+ pci_bmc_dev->lines[i] = -1;
+
+ if (pci_bmc_dev->legacy_irq) {
+ uart[i].port.irq = pdev->irq;
+ } else {
+ rc = pci_irq_vector(pdev, vuart_sirq[i]);
+ if (rc < 0) {
+ dev_err(dev,
+ "pci_irq_vector() returned error %d msi=%u msix=%u\n",
+ -rc, pdev->msi_enabled, pdev->msix_enabled);
+ goto out_unreg;
+ }
+ uart[i].port.irq = rc;
+ }
+ uart[i].port.dev = dev;
+ uart[i].port.iotype = UPIO_MEM32;
+ uart[i].port.iobase = 0;
+ uart[i].port.mapbase =
+ pci_bmc_dev->bars[BAR_MSG].bar_base + (vuart_ioport[i] << 2);
+ uart[i].port.membase =
+ pci_bmc_dev->bars[BAR_MSG].bar_ioremap + (vuart_ioport[i] << 2);
+ uart[i].port.type = PORT_16550A;
+ uart[i].port.flags |= (UPF_IOREMAP | UPF_FIXED_PORT | UPF_FIXED_TYPE);
+ uart[i].port.regshift = 2;
+
+ rc = serial8250_register_8250_port(&uart[i]);
+ if (rc < 0) {
+ dev_err(dev,
+ "cannot setup VUART@%xh over PCIe, rc=%d\n",
+ vuart_ioport[i], -rc);
+ goto out_unreg;
+ }
+ pci_bmc_dev->lines[i] = rc;
+ }
+
+ return 0;
+
+out_unreg:
+ for (i = 0; i < VUART_MAX_PARMS; i++) {
+ if (pci_bmc_dev->lines[i] >= 0)
+ serial8250_unregister_port(pci_bmc_dev->lines[i]);
+ }
+
+ pci_release_regions(pdev);
+out_free1:
+ if (pci_bmc_dev->legacy_irq)
+ free_irq(pdev->irq, pdev);
+ else
+ pci_free_irq_vectors(pdev);
+
+ pci_clear_master(pdev);
+out_free0:
+ kfree(pci_bmc_dev);
+out_err:
+
+ return rc;
+}
+
+static void aspeed_pci_host_bmc_device_remove(struct pci_dev *pdev)
+{
+ struct aspeed_pci_bmc_dev *pci_bmc_dev = pci_get_drvdata(pdev);
+ int i;
+
+ /* Unregister ports */
+ for (i = 0; i < VUART_MAX_PARMS; i++) {
+ if (pci_bmc_dev->lines[i] >= 0)
+ serial8250_unregister_port(pci_bmc_dev->lines[i]);
+ }
+
+ if (pci_bmc_dev->legacy_irq)
+ free_irq(pdev->irq, pdev);
+ else
+ pci_free_irq_vectors(pdev);
+
+ pci_release_regions(pdev);
+ pci_clear_master(pdev);
+ kfree(pci_bmc_dev);
+}
+
+/*
+ * This table holds the list of (VendorID, DeviceID) pairs supported by
+ * this driver.
+ */
+static struct pci_device_id aspeed_host_bmc_dev_pci_ids[] = {
+ { PCI_DEVICE(0x1A03, 0x2402), },
+ { 0, }
+};
+
+MODULE_DEVICE_TABLE(pci, aspeed_host_bmc_dev_pci_ids);
+
+static struct pci_driver aspeed_host_bmc_dev_driver = {
+ .name = DRIVER_NAME,
+ .id_table = aspeed_host_bmc_dev_pci_ids,
+ .probe = aspeed_pci_host_bmc_device_probe,
+ .remove = aspeed_pci_host_bmc_device_remove,
+};
+
+static int __init aspeed_host_bmc_device_init(void)
+{
+ int ret;
+
+ /* register pci driver */
+ ret = pci_register_driver(&aspeed_host_bmc_dev_driver);
+ if (ret < 0) {
+ pr_err("pci-driver: can't register pci driver\n");
+ return ret;
+ }
+
+ return 0;
+
+}
+
+static void aspeed_host_bmc_device_exit(void)
+{
+ /* unregister pci driver */
+ pci_unregister_driver(&aspeed_host_bmc_dev_driver);
+}
+
+late_initcall(aspeed_host_bmc_device_init);
+module_exit(aspeed_host_bmc_device_exit);
+
+MODULE_AUTHOR("Ryan Chen <ryan_chen@aspeedtech.com>");
+MODULE_DESCRIPTION("ASPEED Host BMC DEVICE Driver");
+MODULE_LICENSE("GPL");
--
2.39.2
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: Andrew Lunn @ 2023-08-21 19:29 UTC
To: linux-aspeed
> Testing:
> - This was tested on an IBM Rainier system with a BMC. It requires the
>   BMC side device driver, which is available in ASPEED's 5.15 SDK
>   kernel.
How relevant is that? To the host side, it just appears to be a
16550A. Is the SDK emulating a 16550A? If you were to use a
different kernel, is it still guaranteed to be a 16550A? I also
notice there is a mainline
drivers/tty/serial/8250/8250_aspeed_vuart.c. Could that be used on the
BMC? That would be a better testing target than the vendor kernel.
> +config ASPEED_HOST_BMC_DEV
> + bool "ASPEED SoC Host BMC device driver"
> + default ARCH_ASPEED
> + select SOC_BUS
> + default ARCH_ASPEED
same default twice?
> +static int __init aspeed_host_bmc_device_init(void)
> +{
> + int ret;
> +
> + /* register pci driver */
> + ret = pci_register_driver(&aspeed_host_bmc_dev_driver);
> + if (ret < 0) {
> + pr_err("pci-driver: can't register pci driver\n");
> + return ret;
> + }
> +
> + return 0;
> +
> +}
> +
> +static void aspeed_host_bmc_device_exit(void)
> +{
> + /* unregister pci driver */
> + pci_unregister_driver(&aspeed_host_bmc_dev_driver);
> +}
> +
> +late_initcall(aspeed_host_bmc_device_init);
> +module_exit(aspeed_host_bmc_device_exit);
It looks like you can use module_pci_driver() ?
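Untested, but if the late_initcall() isn't actually needed, I'd expect
all of that boilerplate to collapse to something like this --
module_pci_driver() expands to module_init()/module_exit() wrappers
around pci_register_driver()/pci_unregister_driver():

static struct pci_driver aspeed_host_bmc_dev_driver = {
	.name		= DRIVER_NAME,
	.id_table	= aspeed_host_bmc_dev_pci_ids,
	.probe		= aspeed_pci_host_bmc_device_probe,
	.remove		= aspeed_pci_host_bmc_device_remove,
};

/* Generates the init/exit functions and module_init()/module_exit(). */
module_pci_driver(aspeed_host_bmc_dev_driver);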
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: Ninad Palsule @ 2023-08-22 16:14 UTC
To: linux-aspeed
Hello Andrew,
Thanks for the review.
On 8/21/23 2:29 PM, Andrew Lunn wrote:
>> Testing:
>> - This was tested on an IBM Rainier system with a BMC. It requires the
>>   BMC side device driver, which is available in ASPEED's 5.15 SDK
>>   kernel.
> How relevant is that? To the host side, it just appears to be a
> 16550A. Is the SDK emulating a 16550A? If you were to use a
> different kernel, is it still guaranteed to be a 16550A? I also
> notice there is a mainline
> drivers/tty/serial/8250/8250_aspeed_vuart.c. Could that be used on the
> BMC? That would be a better testing target than the vendor kernel.
This is just to indicate how I tested my code.

Yes, the ASPEED chip (in this case the AST2600) is compatible with the
16550 UART. I expect it to work with a different kernel too, since the
standard 16550 interface is used.

8250_aspeed_vuart.c is a BMC side driver for accessing the VUART over
the LPC bus, whereas this is a host side driver for accessing the VUART
over the PCIe bus.
>> +config ASPEED_HOST_BMC_DEV
>> + bool "ASPEED SoC Host BMC device driver"
>> + default ARCH_ASPEED
>> + select SOC_BUS
>> + default ARCH_ASPEED
> same default twice?
Removed.
>
>> +static int __init aspeed_host_bmc_device_init(void)
>> +{
>> + int ret;
>> +
>> + /* register pci driver */
>> + ret = pci_register_driver(&aspeed_host_bmc_dev_driver);
>> + if (ret < 0) {
>> + pr_err("pci-driver: can't register pci driver\n");
>> + return ret;
>> + }
>> +
>> + return 0;
>> +
>> +}
>> +
>> +static void aspeed_host_bmc_device_exit(void)
>> +{
>> + /* unregister pci driver */
>> + pci_unregister_driver(&aspeed_host_bmc_dev_driver);
>> +}
>> +
>> +late_initcall(aspeed_host_bmc_device_init);
>> +module_exit(aspeed_host_bmc_device_exit);
> It looks like you can use module_pci_driver() ?
Yes, it should work unless the late initcall is important. I will test
it and see.
Thanks & Regards,
Ninad Palsule
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: kernel test robot @ 2023-08-23 7:57 UTC
To: linux-aspeed
Hi Ninad,
kernel test robot noticed the following build warnings:
[auto build test WARNING on soc/for-next]
[also build test WARNING on linus/master v6.5-rc7 next-20230822]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Ninad-Palsule/soc-aspeed-Add-host-side-BMC-device-driver/20230822-023858
base: https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git for-next
patch link: https://lore.kernel.org/r/20230821183525.3427144-2-ninad%40linux.ibm.com
patch subject: [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
config: arm-defconfig (https://download.01.org/0day-ci/archive/20230823/202308231554.SV5ASPV0-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 13.2.0
reproduce: (https://download.01.org/0day-ci/archive/20230823/202308231554.SV5ASPV0-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308231554.SV5ASPV0-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/soc/aspeed/aspeed-host-bmc-dev.c: In function 'aspeed_pci_host_bmc_device_probe':
>> drivers/soc/aspeed/aspeed-host-bmc-dev.c:184:1: warning: the frame size of 1072 bytes is larger than 1024 bytes [-Wframe-larger-than=]
184 | }
| ^
vim +184 drivers/soc/aspeed/aspeed-host-bmc-dev.c
42
43 static int aspeed_pci_host_bmc_device_probe(struct pci_dev *pdev,
44 const struct pci_device_id *ent)
45 {
46 struct uart_8250_port uart[VUART_MAX_PARMS];
47 struct device *dev = &pdev->dev;
48 struct aspeed_pci_bmc_dev *pci_bmc_dev;
49 int rc = 0;
50 int i = 0;
51 int nr_entries;
52 u16 config_cmd_val;
53
54 pci_bmc_dev = kzalloc(sizeof(*pci_bmc_dev), GFP_KERNEL);
55 if (!pci_bmc_dev) {
56 rc = -ENOMEM;
57 dev_err(dev, "kmalloc() returned NULL memory.\n");
58 goto out_err;
59 }
60
61 rc = pcim_enable_device(pdev);
62 if (rc != 0) {
63 dev_err(dev, "pcim_enable_device() returned error %d\n", rc);
64 goto out_free0;
65 }
66
67 /* set PCI host mastering */
68 pci_set_master(pdev);
69
70 /*
71 * Try to allocate max MSI. If multiple MSI is not possible then use
72 * the legacy interrupt. Note: PowerPC doesn't support multiple MSI.
73 */
74 nr_entries = pci_alloc_irq_vectors(pdev, BMC_MULTI_MSI, BMC_MULTI_MSI,
75 PCI_IRQ_MSIX | PCI_IRQ_MSI);
76
77 if (nr_entries < 0) {
78 pci_bmc_dev->legacy_irq = 1;
79 pci_read_config_word(pdev, PCI_COMMAND, &config_cmd_val);
80 config_cmd_val &= ~PCI_COMMAND_INTX_DISABLE;
81 pci_write_config_word((struct pci_dev *)pdev, PCI_COMMAND, config_cmd_val);
82
83 } else {
84 pci_bmc_dev->legacy_irq = 0;
85 pci_read_config_word(pdev, PCI_COMMAND, &config_cmd_val);
86 config_cmd_val |= PCI_COMMAND_INTX_DISABLE;
87 pci_write_config_word((struct pci_dev *)pdev, PCI_COMMAND, config_cmd_val);
88 rc = pci_irq_vector(pdev, BMC_MSI_IDX_BASE);
89 if (rc < 0) {
90 dev_err(dev, "pci_irq_vector() returned error %d msi=%u msix=%u\n",
91 -rc, pdev->msi_enabled, pdev->msix_enabled);
92 goto out_free1;
93 }
94 pdev->irq = rc;
95 }
96
97 /* Get access to the BARs */
98 for (i = 0; i < BAR_MAX; i++) {
99 rc = pci_request_region(pdev, i, DRIVER_NAME);
100 if (rc < 0) {
101 dev_err(dev, "pci_request_region(%d) returned error %d\n", i, rc);
102 goto out_unreg;
103 }
104
105 pci_bmc_dev->bars[i].bar_base = pci_resource_start(pdev, i);
106 pci_bmc_dev->bars[i].bar_size = pci_resource_len(pdev, i);
107 pci_bmc_dev->bars[i].bar_ioremap = pci_ioremap_bar(pdev, i);
108 if (pci_bmc_dev->bars[i].bar_ioremap == NULL) {
109 dev_err(dev, "pci_ioremap_bar(%d) failed\n", i);
110 rc = -ENOMEM;
111 goto out_unreg;
112 }
113 }
114
115 /* ERRTA40: dummy read */
116 (void)__raw_readl((void __iomem *)pci_bmc_dev->bars[BAR_MSG].bar_ioremap);
117
118 pci_set_drvdata(pdev, pci_bmc_dev);
119
120 /* setup VUART */
121 memset(uart, 0, sizeof(uart));
122
123 for (i = 0; i < VUART_MAX_PARMS; i++) {
124 vuart_ioport[i] = 0x3F8 - (i * 0x100);
125 vuart_sirq[i] = 0x10 + 4 - i - BMC_MSI_IDX_BASE;
126 uart[i].port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF | UPF_SHARE_IRQ;
127 uart[i].port.uartclk = 115200 * 16;
128 pci_bmc_dev->lines[i] = -1;
129
130 if (pci_bmc_dev->legacy_irq) {
131 uart[i].port.irq = pdev->irq;
132 } else {
133 rc = pci_irq_vector(pdev, vuart_sirq[i]);
134 if (rc < 0) {
135 dev_err(dev,
136 "pci_irq_vector() returned error %d msi=%u msix=%u\n",
137 -rc, pdev->msi_enabled, pdev->msix_enabled);
138 goto out_unreg;
139 }
140 uart[i].port.irq = rc;
141 }
142 uart[i].port.dev = dev;
143 uart[i].port.iotype = UPIO_MEM32;
144 uart[i].port.iobase = 0;
145 uart[i].port.mapbase =
146 pci_bmc_dev->bars[BAR_MSG].bar_base + (vuart_ioport[i] << 2);
147 uart[i].port.membase =
148 pci_bmc_dev->bars[BAR_MSG].bar_ioremap + (vuart_ioport[i] << 2);
149 uart[i].port.type = PORT_16550A;
150 uart[i].port.flags |= (UPF_IOREMAP | UPF_FIXED_PORT | UPF_FIXED_TYPE);
151 uart[i].port.regshift = 2;
152
153 rc = serial8250_register_8250_port(&uart[i]);
154 if (rc < 0) {
155 dev_err(dev,
156 "cannot setup VUART@%xh over PCIe, rc=%d\n",
157 vuart_ioport[i], -rc);
158 goto out_unreg;
159 }
160 pci_bmc_dev->lines[i] = rc;
161 }
162
163 return 0;
164
165 out_unreg:
166 for (i = 0; i < VUART_MAX_PARMS; i++) {
167 if (pci_bmc_dev->lines[i] >= 0)
168 serial8250_unregister_port(pci_bmc_dev->lines[i]);
169 }
170
171 pci_release_regions(pdev);
172 out_free1:
173 if (pci_bmc_dev->legacy_irq)
174 free_irq(pdev->irq, pdev);
175 else
176 pci_free_irq_vectors(pdev);
177
178 pci_clear_master(pdev);
179 out_free0:
180 kfree(pci_bmc_dev);
181 out_err:
182
183 return rc;
> 184 }
185
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: Ninad Palsule @ 2023-08-23 17:32 UTC
To: linux-aspeed
Hello Andrew,
On 8/22/23 11:14 AM, Ninad Palsule wrote:
> Hello Andrew,
>
> Thanks for the review.
>
> On 8/21/23 2:29 PM, Andrew Lunn wrote:
>>> Testing:
>>> - This was tested on an IBM Rainier system with a BMC. It requires the
>>>   BMC side device driver, which is available in ASPEED's 5.15 SDK
>>>   kernel.
>> How relevant is that? To the host side, it just appears to be a
>> 16550A. Is the SDK emulating a 16550A? If you were to use a
>> different kernel, is it still guaranteed to be a 16550A? I also
>> notice there is a mainline
>> drivers/tty/serial/8250/8250_aspeed_vuart.c. Could that be used on the
>> BMC? That would be a better testing target than the vendor kernel.
>
> This is just to indicate how I tested my code.
>
> Yes, the ASPEED chip (in this case the AST2600) is compatible with the
> 16550 UART. I expect it to work with a different kernel too, since the
> standard 16550 interface is used.
>
> 8250_aspeed_vuart.c is a BMC side driver for accessing the VUART over
> the LPC bus, whereas this is a host side driver for accessing the VUART
> over the PCIe bus.
>
>>> +config ASPEED_HOST_BMC_DEV
>>> +	bool "ASPEED SoC Host BMC device driver"
>>> +	default ARCH_ASPEED
>>> +	select SOC_BUS
>>> +	default ARCH_ASPEED
>> same default twice?
> Removed.
>>
>>> +static int __init aspeed_host_bmc_device_init(void)
>>> +{
>>> +	int ret;
>>> +
>>> +	/* register pci driver */
>>> +	ret = pci_register_driver(&aspeed_host_bmc_dev_driver);
>>> +	if (ret < 0) {
>>> +		pr_err("pci-driver: can't register pci driver\n");
>>> +		return ret;
>>> +	}
>>> +
>>> +	return 0;
>>> +
>>> +}
>>> +
>>> +static void aspeed_host_bmc_device_exit(void)
>>> +{
>>> +	/* unregister pci driver */
>>> +	pci_unregister_driver(&aspeed_host_bmc_dev_driver);
>>> +}
>>> +
>>> +late_initcall(aspeed_host_bmc_device_init);
>>> +module_exit(aspeed_host_bmc_device_exit);
>> It looks like you can use module_pci_driver() ?
> Yes, it should work unless the late initcall is important. I will test
> it and see.
I will not be able to use module_pci_driver(), because it does not
support a late initcall, which this driver needs; without it the 8250
port registration fails. So I am not making this change.
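For reference, this is my understanding of the initcall ordering when
the driver is built in (a sketch of the macro expansions, not code from
this patch):

/* module_pci_driver() expands to roughly this; when built in,
 * module_init() maps to device_initcall() (initcall level 6).
 */
static int __init aspeed_host_bmc_dev_driver_init(void)
{
	return pci_register_driver(&aspeed_host_bmc_dev_driver);
}
module_init(aspeed_host_bmc_dev_driver_init);

/* This driver instead uses late_initcall() (initcall level 7), which
 * runs after all device_initcall()s, i.e. after the 8250 core has
 * been set up.
 */
late_initcall(aspeed_host_bmc_device_init);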
>
> Thanks & Regards,
>
> Ninad Palsule
>
* [PATCH v1 1/1] soc/aspeed: Add host side BMC device driver
From: Ninad Palsule @ 2024-01-11 16:23 UTC
To: linux-aspeed
Hello Andrew,
On 8/23/23 12:32, Ninad Palsule wrote:
> Hello Andrew,
>
> On 8/22/23 11:14 AM, Ninad Palsule wrote:
>> Hello Andrew,
>>
>> Thanks for the review.
>>
>> On 8/21/23 2:29 PM, Andrew Lunn wrote:
>>>> Testing:
>>>> - This was tested on an IBM Rainier system with a BMC. It requires the
>>>>   BMC side device driver, which is available in ASPEED's 5.15 SDK
>>>>   kernel.
>>> How relevant is that? To the host side, it just appears to be a
>>> 16550A. Is the SDK emulating a 16550A? If you were to use a
>>> different kernel, is it still guaranteed to be a 16550A? I also
>>> notice there is a mainline
>>> drivers/tty/serial/8250/8250_aspeed_vuart.c. Could that be used on the
>>> BMC? That would be a better testing target than the vendor kernel.
>>
>> This is just to indicate how I tested my code.
>>
>> Yes, the ASPEED chip (in this case the AST2600) is compatible with the
>> 16550 UART. I expect it to work with a different kernel too, since the
>> standard 16550 interface is used.
>>
>> 8250_aspeed_vuart.c is a BMC side driver for accessing the VUART over
>> the LPC bus, whereas this is a host side driver for accessing the VUART
>> over the PCIe bus.
>>
>>>> +config ASPEED_HOST_BMC_DEV
>>>> +	bool "ASPEED SoC Host BMC device driver"
>>>> +	default ARCH_ASPEED
>>>> +	select SOC_BUS
>>>> +	default ARCH_ASPEED
>>> same default twice?
>> Removed.
>>
>>>> +late_initcall(aspeed_host_bmc_device_init);
>>>> +module_exit(aspeed_host_bmc_device_exit);
>>> It looks like you can use module_pci_driver() ?
>> Yes, it should work unless the late initcall is important. I will
>> test it and see.
>
> I will not be able to use module_pci_driver(), because it does not
> support a late initcall, which this driver needs; without it the 8250
> port registration fails. So I am not making this change.
Please let me know if you are fine with this.
Thanks for the review.
Regards,
Ninad