* [PATCH v2 00/15] tidspbridge driver MMU-related cleanups
@ 2012-09-19 12:06 Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 01/15] tidspbridge: hw_mmu: Reorder functions to avoid forward declarations Laurent Pinchart
` (15 more replies)
0 siblings, 16 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Hello,
Here's the second version of my tidspbridge MMU-related cleanup patches. The
first version was sent privately only, so don't try to search the mailing
list archives for it :-)
Replacing hw/hw_mmu.c and part of core/tiomap3430.c with generic IOMMU calls
should now be less difficult. Would anyone like to give it a try?
Laurent Pinchart (14):
tidspbridge: hw_mmu: Reorder functions to avoid forward declarations
tidspbridge: hw_mmu: Removed unused functions
tidspbridge: tiomap3430: Reorder functions to avoid forward
declarations
tidspbridge: tiomap3430: Remove unneeded dev_context local variables
tidspbridge: tiomap3430: Factor out common page release code
tidspbridge: tiomap3430: Remove ul_ prefix
tidspbridge: tiomap3430: Remove unneeded local variables
tidspbridge: Fix VM_PFNMAP mapping
tidspbridge: Remove unused hw_mmu_map_attrs_t::donotlockmpupage field
arm: omap: iommu: Include required headers in iommu.h and iopgtable.h
tidspbridge: Use constants defined in IOMMU platform headers
tidspbridge: Simplify pte_update and mem_map_vmalloc functions
tidspbridge: Use correct types to describe physical, MPU, DSP
addresses
tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot
bitfield
Omar Ramirez Luna (1):
ARM: OMAP: iommu: fix including iommu.h without IOMMU_API selected
arch/arm/plat-omap/include/plat/iommu.h | 6 +
arch/arm/plat-omap/include/plat/iopgtable.h | 2 +
drivers/staging/tidspbridge/core/io_sm.c | 7 +-
drivers/staging/tidspbridge/core/tiomap3430.c | 1484 +++++++++-----------
drivers/staging/tidspbridge/core/tiomap_io.c | 2 +-
drivers/staging/tidspbridge/core/ue_deh.c | 21 +-
drivers/staging/tidspbridge/hw/hw_defs.h | 22 -
drivers/staging/tidspbridge/hw/hw_mmu.c | 332 ++----
drivers/staging/tidspbridge/hw/hw_mmu.h | 67 +-
.../staging/tidspbridge/include/dspbridge/drv.h | 1 +
.../tidspbridge/include/dspbridge/dspdefs.h | 27 +-
.../tidspbridge/include/dspbridge/dspioctl.h | 25 +
drivers/staging/tidspbridge/rmgr/proc.c | 119 +-
13 files changed, 899 insertions(+), 1216 deletions(-)
--
Regards,
Laurent Pinchart
^ permalink raw reply [flat|nested] 23+ messages in thread
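As background for the series, the core mapping code being reworked (pte_update() in tiomap3430.c, visible in patch 03) greedily picks the largest OMAP MMU page size for which both the physical and DSP-virtual addresses are aligned and that still fits in the remaining byte count. A minimal user-space sketch of just that selection logic, with the page sizes written out numerically; this is an illustration, not the driver code itself:

```c
#include <assert.h>
#include <stdint.h>

/* OMAP MMU page sizes, largest first, mirroring the page_size[]
 * array in pte_update(). */
static const uint32_t page_size[] = {
	16 * 1024 * 1024,	/* HW_PAGE_SIZE16MB */
	1 * 1024 * 1024,	/* HW_PAGE_SIZE1MB */
	64 * 1024,		/* HW_PAGE_SIZE64KB */
	4 * 1024,		/* HW_PAGE_SIZE4KB */
};

/* Return the largest page size with which both pa and va are aligned
 * and which still fits in num_bytes; 0 if nothing fits. */
static uint32_t pick_page_size(uint32_t pa, uint32_t va, uint32_t num_bytes)
{
	/* A page size divides both addresses iff it divides their OR. */
	uint32_t all_bits = pa | va;
	unsigned int i;

	for (i = 0; i < 4; i++) {
		if (num_bytes >= page_size[i] &&
		    (all_bits & (page_size[i] - 1)) == 0)
			return page_size[i];
	}
	return 0;
}
```

pte_update() applies this choice repeatedly, advancing pa, va, and the remaining size by the chosen page size each iteration, so physically contiguous regions consume as few TLB entries as possible.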
* [PATCH v2 01/15] tidspbridge: hw_mmu: Reorder functions to avoid forward declarations
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
@ 2012-09-19 12:06 ` Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 02/15] tidspbridge: hw_mmu: Removed unused functions Laurent Pinchart
` (14 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/hw/hw_mmu.c | 95 +++++++++++++------------------
1 files changed, 39 insertions(+), 56 deletions(-)
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
index 8a93d55..2194a3f 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.c
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -70,7 +70,16 @@ enum hw_mmu_page_size_t {
* METHOD: : Check the Input parameter and Flush a
* single entry in the TLB.
*/
-static hw_status mmu_flush_entry(const void __iomem *base_address);
+static hw_status mmu_flush_entry(const void __iomem *base_address)
+{
+ hw_status status = 0;
+ u32 flush_entry_data = 0x1;
+
+ /* write values to register */
+ MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(base_address, flush_entry_data);
+
+ return status;
+}
/*
* FUNCTION : mmu_set_cam_entry
@@ -116,7 +125,20 @@ static hw_status mmu_set_cam_entry(const void __iomem *base_address,
const u32 page_sz,
const u32 preserved_bit,
const u32 valid_bit,
- const u32 virtual_addr_tag);
+ const u32 virtual_addr_tag)
+{
+ hw_status status = 0;
+ u32 mmu_cam_reg;
+
+ mmu_cam_reg = (virtual_addr_tag << 12);
+ mmu_cam_reg = (mmu_cam_reg) | (page_sz) | (valid_bit << 2) |
+ (preserved_bit << 3);
+
+ /* write values to register */
+ MMUMMU_CAM_WRITE_REGISTER32(base_address, mmu_cam_reg);
+
+ return status;
+}
/*
* FUNCTION : mmu_set_ram_entry
@@ -161,7 +183,21 @@ static hw_status mmu_set_ram_entry(const void __iomem *base_address,
const u32 physical_addr,
enum hw_endianism_t endianism,
enum hw_element_size_t element_size,
- enum hw_mmu_mixed_size_t mixed_size);
+ enum hw_mmu_mixed_size_t mixed_size)
+{
+ hw_status status = 0;
+ u32 mmu_ram_reg;
+
+ mmu_ram_reg = (physical_addr & MMU_ADDR_MASK);
+ mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
+ (mixed_size << 6));
+
+ /* write values to register */
+ MMUMMU_RAM_WRITE_REGISTER32(base_address, mmu_ram_reg);
+
+ return status;
+
+}
/* HW FUNCTIONS */
@@ -503,59 +539,6 @@ hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtual_addr, u32 page_size)
return status;
}
-/* mmu_flush_entry */
-static hw_status mmu_flush_entry(const void __iomem *base_address)
-{
- hw_status status = 0;
- u32 flush_entry_data = 0x1;
-
- /* write values to register */
- MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(base_address, flush_entry_data);
-
- return status;
-}
-
-/* mmu_set_cam_entry */
-static hw_status mmu_set_cam_entry(const void __iomem *base_address,
- const u32 page_sz,
- const u32 preserved_bit,
- const u32 valid_bit,
- const u32 virtual_addr_tag)
-{
- hw_status status = 0;
- u32 mmu_cam_reg;
-
- mmu_cam_reg = (virtual_addr_tag << 12);
- mmu_cam_reg = (mmu_cam_reg) | (page_sz) | (valid_bit << 2) |
- (preserved_bit << 3);
-
- /* write values to register */
- MMUMMU_CAM_WRITE_REGISTER32(base_address, mmu_cam_reg);
-
- return status;
-}
-
-/* mmu_set_ram_entry */
-static hw_status mmu_set_ram_entry(const void __iomem *base_address,
- const u32 physical_addr,
- enum hw_endianism_t endianism,
- enum hw_element_size_t element_size,
- enum hw_mmu_mixed_size_t mixed_size)
-{
- hw_status status = 0;
- u32 mmu_ram_reg;
-
- mmu_ram_reg = (physical_addr & MMU_ADDR_MASK);
- mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
- (mixed_size << 6));
-
- /* write values to register */
- MMUMMU_RAM_WRITE_REGISTER32(base_address, mmu_ram_reg);
-
- return status;
-
-}
-
void hw_mmu_tlb_flush_all(const void __iomem *base)
{
__raw_writel(1, base + MMU_GFLUSH);
--
1.7.8.6
^ permalink raw reply related [flat|nested] 23+ messages in thread
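The two functions this patch moves, mmu_set_cam_entry() and mmu_set_ram_entry(), are plain bit packing followed by a single register write. The sketch below reproduces only the packing in user space, with no MMIO; the MMU_ADDR_MASK value (top 20 address bits) is an assumption for illustration, taken to match the `& MMU_ADDR_MASK` use in the patch:

```c
#include <assert.h>
#include <stdint.h>

/* CAM register layout, per mmu_set_cam_entry() above:
 * VA tag in bits 31:12, preserved bit 3, valid bit 2,
 * page-size field in the low bits. */
static uint32_t mmu_cam_value(uint32_t virtual_addr_tag, uint32_t page_sz,
			      uint32_t preserved_bit, uint32_t valid_bit)
{
	return (virtual_addr_tag << 12) | page_sz |
	       (valid_bit << 2) | (preserved_bit << 3);
}

#define MMU_ADDR_MASK 0xFFFFF000u	/* assumed: top 20 address bits */

/* RAM register layout, per mmu_set_ram_entry() above:
 * physical page address plus endianness (bit 9),
 * element size (bits 8:7) and mixed-size (bit 6) attributes. */
static uint32_t mmu_ram_value(uint32_t physical_addr, uint32_t endianism,
			      uint32_t element_size, uint32_t mixed_size)
{
	return (physical_addr & MMU_ADDR_MASK) |
	       (endianism << 9) | (element_size << 7) | (mixed_size << 6);
}
```

Since neither function can fail once the bit packing is isolated, the `hw_status` return value in the driver is always 0, which is part of what makes these helpers good candidates for later simplification.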
* [PATCH v2 02/15] tidspbridge: hw_mmu: Removed unused functions
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 01/15] tidspbridge: hw_mmu: Reorder functions to avoid forward declarations Laurent Pinchart
@ 2012-09-19 12:06 ` Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 03/15] tidspbridge: tiomap3430: Reorder functions to avoid forward declarations Laurent Pinchart
` (13 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
The hw_mmu_tlb_flush() function is unused, and the mmu_flush_entry()
function is used by hw_mmu_tlb_flush() only. Remove them both.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/hw/hw_mmu.c | 72 -------------------------------
drivers/staging/tidspbridge/hw/hw_mmu.h | 3 -
2 files changed, 0 insertions(+), 75 deletions(-)
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
index 2194a3f..981794d 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.c
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -48,40 +48,6 @@ enum hw_mmu_page_size_t {
};
/*
- * FUNCTION : mmu_flush_entry
- *
- * INPUTS:
- *
- * Identifier : base_address
- * Type : const u32
- * Description : Base Address of instance of MMU module
- *
- * RETURNS:
- *
- * Type : hw_status
- * Description : 0 -- No errors occurred
- * RET_BAD_NULL_PARAM -- A Pointer
- * Paramater was set to NULL
- *
- * PURPOSE: : Flush the TLB entry pointed by the
- * lock counter register
- * even if this entry is set protected
- *
- * METHOD: : Check the Input parameter and Flush a
- * single entry in the TLB.
- */
-static hw_status mmu_flush_entry(const void __iomem *base_address)
-{
- hw_status status = 0;
- u32 flush_entry_data = 0x1;
-
- /* write values to register */
- MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(base_address, flush_entry_data);
-
- return status;
-}
-
-/*
* FUNCTION : mmu_set_cam_entry
*
* INPUTS:
@@ -321,44 +287,6 @@ hw_status hw_mmu_twl_disable(const void __iomem *base_address)
return status;
}
-hw_status hw_mmu_tlb_flush(const void __iomem *base_address, u32 virtual_addr,
- u32 page_sz)
-{
- hw_status status = 0;
- u32 virtual_addr_tag;
- enum hw_mmu_page_size_t pg_size_bits;
-
- switch (page_sz) {
- case HW_PAGE_SIZE4KB:
- pg_size_bits = HW_MMU_SMALL_PAGE;
- break;
-
- case HW_PAGE_SIZE64KB:
- pg_size_bits = HW_MMU_LARGE_PAGE;
- break;
-
- case HW_PAGE_SIZE1MB:
- pg_size_bits = HW_MMU_SECTION;
- break;
-
- case HW_PAGE_SIZE16MB:
- pg_size_bits = HW_MMU_SUPERSECTION;
- break;
-
- default:
- return -EINVAL;
- }
-
- /* Generate the 20-bit tag from virtual address */
- virtual_addr_tag = ((virtual_addr & MMU_ADDR_MASK) >> 12);
-
- mmu_set_cam_entry(base_address, pg_size_bits, 0, 0, virtual_addr_tag);
-
- mmu_flush_entry(base_address);
-
- return status;
-}
-
hw_status hw_mmu_tlb_add(const void __iomem *base_address,
u32 physical_addr,
u32 virtual_addr,
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
index 1458a2c..7f960cd 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.h
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -76,9 +76,6 @@ extern hw_status hw_mmu_twl_enable(const void __iomem *base_address);
extern hw_status hw_mmu_twl_disable(const void __iomem *base_address);
-extern hw_status hw_mmu_tlb_flush(const void __iomem *base_address,
- u32 virtual_addr, u32 page_sz);
-
extern hw_status hw_mmu_tlb_add(const void __iomem *base_address,
u32 physical_addr,
u32 virtual_addr,
--
1.7.8.6
^ permalink raw reply related [flat|nested] 23+ messages in thread
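For reference, the removed hw_mmu_tlb_flush() did only two computations before poking the CAM and flush-entry registers: it mapped the byte page size to the hardware page-size field, and derived the 20-bit VA tag. A standalone sketch of those two steps; the enum values and MMU_ADDR_MASK here are illustrative assumptions, not the driver's actual constants:

```c
#include <assert.h>
#include <stdint.h>

/* Names mirror enum hw_mmu_page_size_t; numeric values are assumed
 * for illustration only. */
enum { SMALL_PAGE = 0, LARGE_PAGE = 1, SECTION = 2, SUPERSECTION = 3 };

/* Map a page size in bytes to the hardware page-size field,
 * as the removed switch statement did; -1 stands in for -EINVAL. */
static int page_size_bits(uint32_t page_sz)
{
	switch (page_sz) {
	case 4 * 1024:		return SMALL_PAGE;
	case 64 * 1024:		return LARGE_PAGE;
	case 1024 * 1024:	return SECTION;
	case 16 * 1024 * 1024:	return SUPERSECTION;
	default:		return -1;
	}
}

#define MMU_ADDR_MASK 0xFFFFF000u	/* assumed: top 20 address bits */

/* Generate the 20-bit tag from a virtual address, as in the
 * removed function. */
static uint32_t va_tag(uint32_t virtual_addr)
{
	return (virtual_addr & MMU_ADDR_MASK) >> 12;
}
```

With no remaining callers of either computation, deleting both functions loses nothing, which is why the patch can be a pure removal.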
* [PATCH v2 03/15] tidspbridge: tiomap3430: Reorder functions to avoid forward declarations
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 01/15] tidspbridge: hw_mmu: Reorder functions to avoid forward declarations Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 02/15] tidspbridge: hw_mmu: Removed unused functions Laurent Pinchart
@ 2012-09-19 12:06 ` Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 04/15] tidspbridge: tiomap3430: Remove unneeded dev_context local variables Laurent Pinchart
` (12 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 1123 ++++++++++++-------------
1 files changed, 537 insertions(+), 586 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index f9609ce..fa5b7b9 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -79,55 +79,6 @@
#define OMAP343X_CONTROL_IVA2_BOOTADDR (OMAP2_CONTROL_GENERAL + 0x0190)
#define OMAP343X_CONTROL_IVA2_BOOTMOD (OMAP2_CONTROL_GENERAL + 0x0194)
-/* Forward Declarations: */
-static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt);
-static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
- u8 *host_buff,
- u32 dsp_addr, u32 ul_num_bytes,
- u32 mem_type);
-static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
- u32 dsp_addr);
-static int bridge_brd_status(struct bridge_dev_context *dev_ctxt,
- int *board_state);
-static int bridge_brd_stop(struct bridge_dev_context *dev_ctxt);
-static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
- u8 *host_buff,
- u32 dsp_addr, u32 ul_num_bytes,
- u32 mem_type);
-static int bridge_brd_set_state(struct bridge_dev_context *dev_ctxt,
- u32 brd_state);
-static int bridge_brd_mem_copy(struct bridge_dev_context *dev_ctxt,
- u32 dsp_dest_addr, u32 dsp_src_addr,
- u32 ul_num_bytes, u32 mem_type);
-static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
- u8 *host_buff, u32 dsp_addr,
- u32 ul_num_bytes, u32 mem_type);
-static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes, u32 ul_map_attr,
- struct page **mapped_pages);
-static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 ul_num_bytes);
-static int bridge_dev_create(struct bridge_dev_context
- **dev_cntxt,
- struct dev_object *hdev_obj,
- struct cfg_hostres *config_param);
-static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
- u32 dw_cmd, void *pargs);
-static int bridge_dev_destroy(struct bridge_dev_context *dev_ctxt);
-static u32 user_va2_pa(struct mm_struct *mm, u32 address);
-static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
- u32 va, u32 size,
- struct hw_mmu_map_attrs_t *map_attrs);
-static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *attrs);
-static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes,
- struct hw_mmu_map_attrs_t *hw_attrs);
-
-bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr);
-
/* ----------------------------------- Globals */
/* Attributes of L2 page tables for DSP MMU */
@@ -166,96 +117,10 @@ struct pg_table_attrs {
struct page_info *pg_info;
};
-/*
- * This Bridge driver's function interface table.
- */
-static struct bridge_drv_interface drv_interface_fxns = {
- /* Bridge API ver. for which this bridge driver is built. */
- BRD_API_MAJOR_VERSION,
- BRD_API_MINOR_VERSION,
- bridge_dev_create,
- bridge_dev_destroy,
- bridge_dev_ctrl,
- bridge_brd_monitor,
- bridge_brd_start,
- bridge_brd_stop,
- bridge_brd_status,
- bridge_brd_read,
- bridge_brd_write,
- bridge_brd_set_state,
- bridge_brd_mem_copy,
- bridge_brd_mem_write,
- bridge_brd_mem_map,
- bridge_brd_mem_un_map,
- /* The following CHNL functions are provided by chnl_io.lib: */
- bridge_chnl_create,
- bridge_chnl_destroy,
- bridge_chnl_open,
- bridge_chnl_close,
- bridge_chnl_add_io_req,
- bridge_chnl_get_ioc,
- bridge_chnl_cancel_io,
- bridge_chnl_flush_io,
- bridge_chnl_get_info,
- bridge_chnl_get_mgr_info,
- bridge_chnl_idle,
- bridge_chnl_register_notify,
- /* The following IO functions are provided by chnl_io.lib: */
- bridge_io_create,
- bridge_io_destroy,
- bridge_io_on_loaded,
- bridge_io_get_proc_load,
- /* The following msg_ctrl functions are provided by chnl_io.lib: */
- bridge_msg_create,
- bridge_msg_create_queue,
- bridge_msg_delete,
- bridge_msg_delete_queue,
- bridge_msg_get,
- bridge_msg_put,
- bridge_msg_register_notify,
- bridge_msg_set_queue_id,
-};
-
static struct notifier_block dsp_mbox_notifier = {
.notifier_call = io_mbox_msg,
};
-static inline void flush_all(struct bridge_dev_context *dev_context)
-{
- if (dev_context->brd_state == BRD_DSP_HIBERNATION ||
- dev_context->brd_state == BRD_HIBERNATION)
- wake_dsp(dev_context, NULL);
-
- hw_mmu_tlb_flush_all(dev_context->dsp_mmu_base);
-}
-
-static void bad_page_dump(u32 pa, struct page *pg)
-{
- pr_emerg("DSPBRIDGE: MAP function: COUNT 0 FOR PA 0x%x\n", pa);
- pr_emerg("Bad page state in process '%s'\n"
- "page:%p flags:0x%0*lx mapping:%p mapcount:%d count:%d\n"
- "Backtrace:\n",
- current->comm, pg, (int)(2 * sizeof(unsigned long)),
- (unsigned long)pg->flags, pg->mapping,
- page_mapcount(pg), page_count(pg));
- dump_stack();
-}
-
-/*
- * ======== bridge_drv_entry ========
- * purpose:
- * Bridge Driver entry point.
- */
-void bridge_drv_entry(struct bridge_drv_interface **drv_intf,
- const char *driver_file_name)
-{
- if (strcmp(driver_file_name, "UMA") == 0)
- *drv_intf = &drv_interface_fxns;
- else
- dev_dbg(bridge, "%s Unknown Bridge file name", __func__);
-
-}
-
/*
* ======== bridge_brd_monitor ========
* purpose:
@@ -334,6 +199,33 @@ static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
}
/*
+ * ======== bridge_brd_write ========
+ * Copies the buffers to DSP internal or external memory.
+ */
+static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
+ u8 *host_buff, u32 dsp_addr,
+ u32 ul_num_bytes, u32 mem_type)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = dev_ctxt;
+
+ if (dsp_addr < dev_context->dsp_start_add) {
+ status = -EPERM;
+ return status;
+ }
+ if ((dsp_addr - dev_context->dsp_start_add) <
+ dev_context->internal_size) {
+ status = write_dsp_data(dev_ctxt, host_buff, dsp_addr,
+ ul_num_bytes, mem_type);
+ } else {
+ status = write_ext_dsp_data(dev_context, host_buff, dsp_addr,
+ ul_num_bytes, mem_type, false);
+ }
+
+ return status;
+}
+
+/*
* ======== bridge_brd_set_state ========
* purpose:
* This routine updates the Board status.
@@ -349,6 +241,26 @@ static int bridge_brd_set_state(struct bridge_dev_context *dev_ctxt,
}
/*
+ * ======== wait_for_start ========
+ * Wait for the signal from DSP that it has started, or time out.
+ */
+bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr)
+{
+ u16 timeout = TIHELEN_ACKTIMEOUT;
+
+ /* Wait for response from board */
+ while (__raw_readw(dw_sync_addr) && --timeout)
+ udelay(10);
+
+ /* If timed out: return false */
+ if (!timeout) {
+ pr_err("%s: Timed out waiting DSP to Start\n", __func__);
+ return false;
+ }
+ return true;
+}
+
+/*
* ======== bridge_brd_start ========
* purpose:
* Initializes DSP MMU and Starts DSP.
@@ -710,33 +622,6 @@ static int bridge_brd_status(struct bridge_dev_context *dev_ctxt,
}
/*
- * ======== bridge_brd_write ========
- * Copies the buffers to DSP internal or external memory.
- */
-static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
- u8 *host_buff, u32 dsp_addr,
- u32 ul_num_bytes, u32 mem_type)
-{
- int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
-
- if (dsp_addr < dev_context->dsp_start_add) {
- status = -EPERM;
- return status;
- }
- if ((dsp_addr - dev_context->dsp_start_add) <
- dev_context->internal_size) {
- status = write_dsp_data(dev_ctxt, host_buff, dsp_addr,
- ul_num_bytes, mem_type);
- } else {
- status = write_ext_dsp_data(dev_context, host_buff, dsp_addr,
- ul_num_bytes, mem_type, false);
- }
-
- return status;
-}
-
-/*
* ======== bridge_dev_create ========
* Creates a driver object. Puts DSP in self loop.
*/
@@ -1119,215 +1004,256 @@ static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
}
/*
- * ======== bridge_brd_mem_map ========
- * This function maps MPU buffer to the DSP address space. It performs
- * linear to physical address translation if required. It translates each
- * page since linear addresses can be physically non-contiguous
- * All address & size arguments are assumed to be page aligned (in proc.c)
- *
- * TODO: Disable MMU while updating the page tables (but that'll stall DSP)
+ * ======== pte_set ========
+ * This function calculates PTE address (MPU virtual) to be updated
+ * It also manages the L2 page tables
*/
-static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes, u32 ul_map_attr,
- struct page **mapped_pages)
+static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
+ u32 size, struct hw_mmu_map_attrs_t *attrs)
{
- u32 attrs;
+ u32 i;
+ u32 pte_val;
+ u32 pte_addr_l1;
+ u32 pte_size;
+ /* Base address of the PT that will be updated */
+ u32 pg_tbl_va;
+ u32 l1_base_va;
+ /* Compiler warns that the next three variables might be used
+ * uninitialized in this function. Doesn't seem so. Working around,
+ * anyways. */
+ u32 l2_base_va = 0;
+ u32 l2_base_pa = 0;
+ u32 l2_page_num = 0;
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
- struct hw_mmu_map_attrs_t hw_attrs;
- struct vm_area_struct *vma;
- struct mm_struct *mm = current->mm;
- u32 write = 0;
- u32 num_usr_pgs = 0;
- struct page *mapped_page, *pg;
- s32 pg_num;
- u32 va = virt_addr;
- struct task_struct *curr_task = current;
- u32 pg_i = 0;
- u32 mpu_addr, pa;
-
- dev_dbg(bridge,
- "%s hDevCtxt %p, pa %x, va %x, size %x, ul_map_attr %x\n",
- __func__, dev_ctxt, ul_mpu_addr, virt_addr, ul_num_bytes,
- ul_map_attr);
- if (ul_num_bytes == 0)
- return -EINVAL;
-
- if (ul_map_attr & DSP_MAP_DIR_MASK) {
- attrs = ul_map_attr;
- } else {
- /* Assign default attributes */
- attrs = ul_map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
- }
- /* Take mapping properties */
- if (attrs & DSP_MAPBIGENDIAN)
- hw_attrs.endianism = HW_BIG_ENDIAN;
- else
- hw_attrs.endianism = HW_LITTLE_ENDIAN;
- hw_attrs.mixed_size = (enum hw_mmu_mixed_size_t)
- ((attrs & DSP_MAPMIXEDELEMSIZE) >> 2);
- /* Ignore element_size if mixed_size is enabled */
- if (hw_attrs.mixed_size == 0) {
- if (attrs & DSP_MAPELEMSIZE8) {
- /* Size is 8 bit */
- hw_attrs.element_size = HW_ELEM_SIZE8BIT;
- } else if (attrs & DSP_MAPELEMSIZE16) {
- /* Size is 16 bit */
- hw_attrs.element_size = HW_ELEM_SIZE16BIT;
- } else if (attrs & DSP_MAPELEMSIZE32) {
- /* Size is 32 bit */
- hw_attrs.element_size = HW_ELEM_SIZE32BIT;
- } else if (attrs & DSP_MAPELEMSIZE64) {
- /* Size is 64 bit */
- hw_attrs.element_size = HW_ELEM_SIZE64BIT;
+ l1_base_va = pt->l1_base_va;
+ pg_tbl_va = l1_base_va;
+ if ((size == HW_PAGE_SIZE64KB) || (size == HW_PAGE_SIZE4KB)) {
+ /* Find whether the L1 PTE points to a valid L2 PT */
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
+ if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
+ pte_val = *(u32 *) pte_addr_l1;
+ pte_size = hw_mmu_pte_size_l1(pte_val);
} else {
- /*
- * Mixedsize isn't enabled, so size can't be
- * zero here
- */
- return -EINVAL;
+ return -EPERM;
}
+ spin_lock(&pt->pg_lock);
+ if (pte_size == HW_MMU_COARSE_PAGE_SIZE) {
+ /* Get the L2 PA from the L1 PTE, and find
+ * corresponding L2 VA */
+ l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
+ l2_base_va =
+ l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
+ l2_page_num =
+ (l2_base_pa -
+ pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
+ } else if (pte_size == 0) {
+ /* L1 PTE is invalid. Allocate a L2 PT and
+ * point the L1 PTE to it */
+ /* Find a free L2 PT. */
+ for (i = 0; (i < pt->l2_num_pages) &&
+ (pt->pg_info[i].num_entries != 0); i++)
+ ;
+ if (i < pt->l2_num_pages) {
+ l2_page_num = i;
+ l2_base_pa = pt->l2_base_pa + (l2_page_num *
+ HW_MMU_COARSE_PAGE_SIZE);
+ l2_base_va = pt->l2_base_va + (l2_page_num *
+ HW_MMU_COARSE_PAGE_SIZE);
+ /* Endianness attributes are ignored for
+ * HW_MMU_COARSE_PAGE_SIZE */
+ status =
+ hw_mmu_pte_set(l1_base_va, l2_base_pa, va,
+ HW_MMU_COARSE_PAGE_SIZE,
+ attrs);
+ } else {
+ status = -ENOMEM;
+ }
+ } else {
+ /* Found valid L1 PTE of another size.
+ * Should not overwrite it. */
+ status = -EPERM;
+ }
+ if (!status) {
+ pg_tbl_va = l2_base_va;
+ if (size == HW_PAGE_SIZE64KB)
+ pt->pg_info[l2_page_num].num_entries += 16;
+ else
+ pt->pg_info[l2_page_num].num_entries++;
+ dev_dbg(bridge, "PTE: L2 BaseVa %x, BasePa %x, PageNum "
+ "%x, num_entries %x\n", l2_base_va,
+ l2_base_pa, l2_page_num,
+ pt->pg_info[l2_page_num].num_entries);
+ }
+ spin_unlock(&pt->pg_lock);
}
- if (attrs & DSP_MAPDONOTLOCK)
- hw_attrs.donotlockmpupage = 1;
- else
- hw_attrs.donotlockmpupage = 0;
-
- if (attrs & DSP_MAPVMALLOCADDR) {
- return mem_map_vmalloc(dev_ctxt, ul_mpu_addr, virt_addr,
- ul_num_bytes, &hw_attrs);
- }
- /*
- * Do OS-specific user-va to pa translation.
- * Combine physically contiguous regions to reduce TLBs.
- * Pass the translated pa to pte_update.
- */
- if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_context, ul_mpu_addr, virt_addr,
- ul_num_bytes, &hw_attrs);
- goto func_cont;
+ if (!status) {
+ dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, va %x, size %x\n",
+ pg_tbl_va, pa, va, size);
+ dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
+ "mixed_size %x\n", attrs->endianism,
+ attrs->element_size, attrs->mixed_size);
+ status = hw_mmu_pte_set(pg_tbl_va, pa, va, size, attrs);
}
- /*
- * Important Note: ul_mpu_addr is mapped from user application process
- * to current process - it must lie completely within the current
- * virtual memory address space in order to be of use to us here!
- */
- down_read(&mm->mmap_sem);
- vma = find_vma(mm, ul_mpu_addr);
- if (vma)
- dev_dbg(bridge,
- "VMAfor UserBuf: ul_mpu_addr=%x, ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
+ return status;
+}
- /*
- * It is observed that under some circumstances, the user buffer is
- * spread across several VMAs. So loop through and check if the entire
- * user buffer is covered
- */
- while ((vma) && (ul_mpu_addr + ul_num_bytes > vma->vm_end)) {
- /* jump to the next VMA region */
- vma = find_vma(mm, vma->vm_end + 1);
- dev_dbg(bridge,
- "VMA for UserBuf ul_mpu_addr=%x ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
- }
- if (!vma) {
- pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
- __func__, ul_mpu_addr, ul_num_bytes);
- status = -EINVAL;
- up_read(&mm->mmap_sem);
- goto func_cont;
- }
+/*
+ * ======== pte_update ========
+ * This function calculates the optimum page-aligned addresses and sizes
+ * Caller must pass page-aligned values
+ */
+static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
+ u32 va, u32 size,
+ struct hw_mmu_map_attrs_t *map_attrs)
+{
+ u32 i;
+ u32 all_bits;
+ u32 pa_curr = pa;
+ u32 va_curr = va;
+ u32 num_bytes = size;
+ struct bridge_dev_context *dev_context = dev_ctxt;
+ int status = 0;
+ u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
+ HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
+ };
- if (vma->vm_flags & VM_IO) {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
- mpu_addr = ul_mpu_addr;
+ while (num_bytes && !status) {
+ /* To find the max. page size with which both PA & VA are
+ * aligned */
+ all_bits = pa_curr | va_curr;
- /* Get the physical addresses for user buffer */
- for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pa = user_va2_pa(mm, mpu_addr);
- if (!pa) {
- status = -EPERM;
- pr_err("DSPBRIDGE: VM_IO mapping physical"
- "address is invalid\n");
+ for (i = 0; i < 4; i++) {
+ if ((num_bytes >= page_size[i]) && ((all_bits &
+ (page_size[i] -
+ 1)) == 0)) {
+ status =
+ pte_set(dev_context->pt_attrs, pa_curr,
+ va_curr, page_size[i], map_attrs);
+ pa_curr += page_size[i];
+ va_curr += page_size[i];
+ num_bytes -= page_size[i];
+ /* Don't try smaller sizes. Hopefully we have
+ * reached an address aligned to a bigger page
+ * size */
break;
}
- if (pfn_valid(__phys_to_pfn(pa))) {
- pg = PHYS_TO_PAGE(pa);
- get_page(pg);
- if (page_count(pg) < 1) {
- pr_err("Bad page in VM_IO buffer\n");
- bad_page_dump(pa, pg);
- }
- }
- status = pte_set(dev_context->pt_attrs, pa,
- va, HW_PAGE_SIZE4KB, &hw_attrs);
- if (status)
- break;
-
- va += HW_PAGE_SIZE4KB;
- mpu_addr += HW_PAGE_SIZE4KB;
- pa += HW_PAGE_SIZE4KB;
}
- } else {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
- if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
- write = 1;
+ }
- for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pg_num = get_user_pages(curr_task, mm, ul_mpu_addr, 1,
- write, 1, &mapped_page, NULL);
- if (pg_num > 0) {
- if (page_count(mapped_page) < 1) {
- pr_err("Bad page count after doing"
- "get_user_pages on"
- "user buffer\n");
- bad_page_dump(page_to_phys(mapped_page),
- mapped_page);
- }
- status = pte_set(dev_context->pt_attrs,
- page_to_phys(mapped_page), va,
- HW_PAGE_SIZE4KB, &hw_attrs);
- if (status)
- break;
+ return status;
+}
- if (mapped_pages)
- mapped_pages[pg_i] = mapped_page;
+/*
+ * ======== user_va2_pa ========
+ * Purpose:
+ * This function walks through the page tables to convert a userland
+ * virtual address to physical address
+ */
+static u32 user_va2_pa(struct mm_struct *mm, u32 address)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *ptep, pte;
- va += HW_PAGE_SIZE4KB;
- ul_mpu_addr += HW_PAGE_SIZE4KB;
- } else {
- pr_err("DSPBRIDGE: get_user_pages FAILED,"
- "MPU addr = 0x%x,"
- "vma->vm_flags = 0x%lx,"
- "get_user_pages Err"
- "Value = %d, Buffer"
- "size=0x%x\n", ul_mpu_addr,
- vma->vm_flags, pg_num, ul_num_bytes);
- status = -EPERM;
- break;
- }
- }
+ pgd = pgd_offset(mm, address);
+ if (pgd_none(*pgd) || pgd_bad(*pgd))
+ return 0;
+
+ pud = pud_offset(pgd, address);
+ if (pud_none(*pud) || pud_bad(*pud))
+ return 0;
+
+ pmd = pmd_offset(pud, address);
+ if (pmd_none(*pmd) || pmd_bad(*pmd))
+ return 0;
+
+ ptep = pte_offset_map(pmd, address);
+ if (ptep) {
+ pte = *ptep;
+ if (pte_present(pte))
+ return pte & PAGE_MASK;
}
- up_read(&mm->mmap_sem);
-func_cont:
- if (status) {
+
+ return 0;
+}
+
+static inline void flush_all(struct bridge_dev_context *dev_context)
+{
+ if (dev_context->brd_state == BRD_DSP_HIBERNATION ||
+ dev_context->brd_state == BRD_HIBERNATION)
+ wake_dsp(dev_context, NULL);
+
+ hw_mmu_tlb_flush_all(dev_context->dsp_mmu_base);
+}
+
+/* Memory map kernel VA -- memory allocated with vmalloc */
+static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
+ u32 ul_mpu_addr, u32 virt_addr,
+ u32 ul_num_bytes,
+ struct hw_mmu_map_attrs_t *hw_attrs)
+{
+ int status = 0;
+ struct page *page[1];
+ u32 i;
+ u32 pa_curr;
+ u32 pa_next;
+ u32 va_curr;
+ u32 size_curr;
+ u32 num_pages;
+ u32 pa;
+ u32 num_of4k_pages;
+ u32 temp = 0;
+
+ /*
+ * Do Kernel va to pa translation.
+ * Combine physically contiguous regions to reduce TLBs.
+ * Pass the translated pa to pte_update.
+ */
+ num_pages = ul_num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
+ i = 0;
+ va_curr = ul_mpu_addr;
+ page[0] = vmalloc_to_page((void *)va_curr);
+ pa_next = page_to_phys(page[0]);
+ while (!status && (i < num_pages)) {
/*
- * Roll out the mapped pages incase it failed in middle of
- * mapping
+ * Reuse pa_next from the previous iteration to avoid
+ * an extra va2pa call
*/
- if (pg_i) {
- bridge_brd_mem_un_map(dev_context, virt_addr,
- (pg_i * PG_SIZE4K));
+ pa_curr = pa_next;
+ size_curr = PAGE_SIZE;
+ /*
+ * If the next page is physically contiguous,
+ * map it with the current one by increasing
+ * the size of the region to be mapped
+ */
+ while (++i < num_pages) {
+ page[0] =
+ vmalloc_to_page((void *)(va_curr + size_curr));
+ pa_next = page_to_phys(page[0]);
+
+ if (pa_next == (pa_curr + size_curr))
+ size_curr += PAGE_SIZE;
+ else
+ break;
+
+ }
+ if (pa_next == 0) {
+ status = -ENOMEM;
+ break;
}
- status = -EPERM;
+ pa = pa_curr;
+ num_of4k_pages = size_curr / HW_PAGE_SIZE4KB;
+ while (temp++ < num_of4k_pages) {
+ get_page(PHYS_TO_PAGE(pa));
+ pa += HW_PAGE_SIZE4KB;
+ }
+ status = pte_update(dev_context, pa_curr, virt_addr +
+ (va_curr - ul_mpu_addr), size_curr,
+ hw_attrs);
+ va_curr += size_curr;
}
/*
* In any case, flush the TLB
@@ -1340,6 +1266,18 @@ func_cont:
return status;
}
+static void bad_page_dump(u32 pa, struct page *pg)
+{
+ pr_emerg("DSPBRIDGE: MAP function: COUNT 0 FOR PA 0x%x\n", pa);
+ pr_emerg("Bad page state in process '%s'\n"
+ "page:%p flags:0x%0*lx mapping:%p mapcount:%d count:%d\n"
+ "Backtrace:\n",
+ current->comm, pg, (int)(2 * sizeof(unsigned long)),
+ (unsigned long)pg->flags, pg->mapping,
+ page_mapcount(pg), page_count(pg));
+ dump_stack();
+}
+
/*
* ======== bridge_brd_mem_un_map ========
* Invalidate the PTEs for the DSP VA block to be unmapped.
@@ -1539,247 +1477,215 @@ EXIT_LOOP:
}
/*
- * ======== user_va2_pa ========
- * Purpose:
- * This function walks through the page tables to convert a userland
- * virtual address to physical address
- */
-static u32 user_va2_pa(struct mm_struct *mm, u32 address)
-{
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *ptep, pte;
-
- pgd = pgd_offset(mm, address);
- if (pgd_none(*pgd) || pgd_bad(*pgd))
- return 0;
-
- pud = pud_offset(pgd, address);
- if (pud_none(*pud) || pud_bad(*pud))
- return 0;
-
- pmd = pmd_offset(pud, address);
- if (pmd_none(*pmd) || pmd_bad(*pmd))
- return 0;
-
- ptep = pte_offset_map(pmd, address);
- if (ptep) {
- pte = *ptep;
- if (pte_present(pte))
- return pte & PAGE_MASK;
- }
-
- return 0;
-}
-
-/*
- * ======== pte_update ========
- * This function calculates the optimum page-aligned addresses and sizes
- * Caller must pass page-aligned values
+ * ======== bridge_brd_mem_map ========
+ * This function maps MPU buffer to the DSP address space. It performs
+ * linear to physical address translation if required. It translates each
+ * page since linear addresses can be physically non-contiguous
+ * All address & size arguments are assumed to be page aligned (in proc.c)
+ *
+ * TODO: Disable MMU while updating the page tables (but that'll stall DSP)
*/
-static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
- u32 va, u32 size,
- struct hw_mmu_map_attrs_t *map_attrs)
+static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
+ u32 ul_mpu_addr, u32 virt_addr,
+ u32 ul_num_bytes, u32 ul_map_attr,
+ struct page **mapped_pages)
{
- u32 i;
- u32 all_bits;
- u32 pa_curr = pa;
- u32 va_curr = va;
- u32 num_bytes = size;
- struct bridge_dev_context *dev_context = dev_ctxt;
+ u32 attrs;
int status = 0;
- u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
- HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
- };
-
- while (num_bytes && !status) {
- /* To find the max. page size with which both PA & VA are
- * aligned */
- all_bits = pa_curr | va_curr;
-
- for (i = 0; i < 4; i++) {
- if ((num_bytes >= page_size[i]) && ((all_bits &
- (page_size[i] -
- 1)) == 0)) {
- status =
- pte_set(dev_context->pt_attrs, pa_curr,
- va_curr, page_size[i], map_attrs);
- pa_curr += page_size[i];
- va_curr += page_size[i];
- num_bytes -= page_size[i];
- /* Don't try smaller sizes. Hopefully we have
- * reached an address aligned to a bigger page
- * size */
- break;
- }
- }
- }
-
- return status;
-}
+ struct bridge_dev_context *dev_context = dev_ctxt;
+ struct hw_mmu_map_attrs_t hw_attrs;
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ u32 write = 0;
+ u32 num_usr_pgs = 0;
+ struct page *mapped_page, *pg;
+ s32 pg_num;
+ u32 va = virt_addr;
+ struct task_struct *curr_task = current;
+ u32 pg_i = 0;
+ u32 mpu_addr, pa;
-/*
- * ======== pte_set ========
- * This function calculates PTE address (MPU virtual) to be updated
- * It also manages the L2 page tables
- */
-static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *attrs)
-{
- u32 i;
- u32 pte_val;
- u32 pte_addr_l1;
- u32 pte_size;
- /* Base address of the PT that will be updated */
- u32 pg_tbl_va;
- u32 l1_base_va;
- /* Compiler warns that the next three variables might be used
- * uninitialized in this function. Doesn't seem so. Working around,
- * anyways. */
- u32 l2_base_va = 0;
- u32 l2_base_pa = 0;
- u32 l2_page_num = 0;
- int status = 0;
+ dev_dbg(bridge,
+ "%s hDevCtxt %p, pa %x, va %x, size %x, ul_map_attr %x\n",
+ __func__, dev_ctxt, ul_mpu_addr, virt_addr, ul_num_bytes,
+ ul_map_attr);
+ if (ul_num_bytes == 0)
+ return -EINVAL;
- l1_base_va = pt->l1_base_va;
- pg_tbl_va = l1_base_va;
- if ((size == HW_PAGE_SIZE64KB) || (size == HW_PAGE_SIZE4KB)) {
- /* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
- if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
- pte_val = *(u32 *) pte_addr_l1;
- pte_size = hw_mmu_pte_size_l1(pte_val);
- } else {
- return -EPERM;
- }
- spin_lock(&pt->pg_lock);
- if (pte_size == HW_MMU_COARSE_PAGE_SIZE) {
- /* Get the L2 PA from the L1 PTE, and find
- * corresponding L2 VA */
- l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
- l2_base_va =
- l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
- l2_page_num =
- (l2_base_pa -
- pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
- } else if (pte_size == 0) {
- /* L1 PTE is invalid. Allocate a L2 PT and
- * point the L1 PTE to it */
- /* Find a free L2 PT. */
- for (i = 0; (i < pt->l2_num_pages) &&
- (pt->pg_info[i].num_entries != 0); i++)
- ;
- if (i < pt->l2_num_pages) {
- l2_page_num = i;
- l2_base_pa = pt->l2_base_pa + (l2_page_num *
- HW_MMU_COARSE_PAGE_SIZE);
- l2_base_va = pt->l2_base_va + (l2_page_num *
- HW_MMU_COARSE_PAGE_SIZE);
- /* Endianness attributes are ignored for
- * HW_MMU_COARSE_PAGE_SIZE */
- status =
- hw_mmu_pte_set(l1_base_va, l2_base_pa, va,
- HW_MMU_COARSE_PAGE_SIZE,
- attrs);
- } else {
- status = -ENOMEM;
- }
- } else {
- /* Found valid L1 PTE of another size.
- * Should not overwrite it. */
- status = -EPERM;
- }
- if (!status) {
- pg_tbl_va = l2_base_va;
- if (size == HW_PAGE_SIZE64KB)
- pt->pg_info[l2_page_num].num_entries += 16;
- else
- pt->pg_info[l2_page_num].num_entries++;
- dev_dbg(bridge, "PTE: L2 BaseVa %x, BasePa %x, PageNum "
- "%x, num_entries %x\n", l2_base_va,
- l2_base_pa, l2_page_num,
- pt->pg_info[l2_page_num].num_entries);
- }
- spin_unlock(&pt->pg_lock);
- }
- if (!status) {
- dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, va %x, size %x\n",
- pg_tbl_va, pa, va, size);
- dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
- "mixed_size %x\n", attrs->endianism,
- attrs->element_size, attrs->mixed_size);
- status = hw_mmu_pte_set(pg_tbl_va, pa, va, size, attrs);
+ if (ul_map_attr & DSP_MAP_DIR_MASK) {
+ attrs = ul_map_attr;
+ } else {
+ /* Assign default attributes */
+ attrs = ul_map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
}
+ /* Take mapping properties */
+ if (attrs & DSP_MAPBIGENDIAN)
+ hw_attrs.endianism = HW_BIG_ENDIAN;
+ else
+ hw_attrs.endianism = HW_LITTLE_ENDIAN;
- return status;
-}
-
-/* Memory map kernel VA -- memory allocated with vmalloc */
-static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes,
- struct hw_mmu_map_attrs_t *hw_attrs)
-{
- int status = 0;
- struct page *page[1];
- u32 i;
- u32 pa_curr;
- u32 pa_next;
- u32 va_curr;
- u32 size_curr;
- u32 num_pages;
- u32 pa;
- u32 num_of4k_pages;
- u32 temp = 0;
+ hw_attrs.mixed_size = (enum hw_mmu_mixed_size_t)
+ ((attrs & DSP_MAPMIXEDELEMSIZE) >> 2);
+ /* Ignore element_size if mixed_size is enabled */
+ if (hw_attrs.mixed_size == 0) {
+ if (attrs & DSP_MAPELEMSIZE8) {
+ /* Size is 8 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE8BIT;
+ } else if (attrs & DSP_MAPELEMSIZE16) {
+ /* Size is 16 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE16BIT;
+ } else if (attrs & DSP_MAPELEMSIZE32) {
+ /* Size is 32 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE32BIT;
+ } else if (attrs & DSP_MAPELEMSIZE64) {
+ /* Size is 64 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE64BIT;
+ } else {
+ /*
+ * Mixed size isn't enabled, so the element size
+ * can't be zero here
+ */
+ return -EINVAL;
+ }
+ }
+ if (attrs & DSP_MAPDONOTLOCK)
+ hw_attrs.donotlockmpupage = 1;
+ else
+ hw_attrs.donotlockmpupage = 0;
+ if (attrs & DSP_MAPVMALLOCADDR) {
+ return mem_map_vmalloc(dev_ctxt, ul_mpu_addr, virt_addr,
+ ul_num_bytes, &hw_attrs);
+ }
/*
- * Do Kernel va to pa translation.
+ * Do OS-specific user-va to pa translation.
* Combine physically contiguous regions to reduce TLBs.
* Pass the translated pa to pte_update.
*/
- num_pages = ul_num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
- i = 0;
- va_curr = ul_mpu_addr;
- page[0] = vmalloc_to_page((void *)va_curr);
- pa_next = page_to_phys(page[0]);
- while (!status && (i < num_pages)) {
- /*
- * Reuse pa_next from the previous iteraion to avoid
- * an extra va2pa call
- */
- pa_curr = pa_next;
- size_curr = PAGE_SIZE;
- /*
- * If the next page is physically contiguous,
- * map it with the current one by increasing
- * the size of the region to be mapped
- */
- while (++i < num_pages) {
- page[0] =
- vmalloc_to_page((void *)(va_curr + size_curr));
- pa_next = page_to_phys(page[0]);
+ if ((attrs & DSP_MAPPHYSICALADDR)) {
+ status = pte_update(dev_context, ul_mpu_addr, virt_addr,
+ ul_num_bytes, &hw_attrs);
+ goto func_cont;
+ }
- if (pa_next == (pa_curr + size_curr))
- size_curr += PAGE_SIZE;
- else
+ /*
+ * Important Note: ul_mpu_addr is mapped from user application process
+ * to current process - it must lie completely within the current
+ * virtual memory address space in order to be of use to us here!
+ */
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, ul_mpu_addr);
+ if (vma)
+ dev_dbg(bridge,
+ "VMA for UserBuf: ul_mpu_addr=%x, ul_num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
+ ul_num_bytes, vma->vm_start, vma->vm_end,
+ vma->vm_flags);
+
+ /*
+ * It is observed that under some circumstances, the user buffer is
+ * spread across several VMAs. So loop through and check if the entire
+ * user buffer is covered
+ */
+ while ((vma) && (ul_mpu_addr + ul_num_bytes > vma->vm_end)) {
+ /* jump to the next VMA region */
+ vma = find_vma(mm, vma->vm_end + 1);
+ dev_dbg(bridge,
+ "VMA for UserBuf ul_mpu_addr=%x ul_num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
+ ul_num_bytes, vma->vm_start, vma->vm_end,
+ vma->vm_flags);
+ }
+ if (!vma) {
+ pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
+ __func__, ul_mpu_addr, ul_num_bytes);
+ status = -EINVAL;
+ up_read(&mm->mmap_sem);
+ goto func_cont;
+ }
+
+ if (vma->vm_flags & VM_IO) {
+ num_usr_pgs = ul_num_bytes / PG_SIZE4K;
+ mpu_addr = ul_mpu_addr;
+
+ /* Get the physical addresses for user buffer */
+ for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
+ pa = user_va2_pa(mm, mpu_addr);
+ if (!pa) {
+ status = -EPERM;
pr_err("DSPBRIDGE: VM_IO mapping physical "
+ "address is invalid\n");
+ break;
+ }
+ if (pfn_valid(__phys_to_pfn(pa))) {
+ pg = PHYS_TO_PAGE(pa);
+ get_page(pg);
+ if (page_count(pg) < 1) {
+ pr_err("Bad page in VM_IO buffer\n");
+ bad_page_dump(pa, pg);
+ }
+ }
+ status = pte_set(dev_context->pt_attrs, pa,
+ va, HW_PAGE_SIZE4KB, &hw_attrs);
+ if (status)
break;
+ va += HW_PAGE_SIZE4KB;
+ mpu_addr += HW_PAGE_SIZE4KB;
+ pa += HW_PAGE_SIZE4KB;
}
- if (pa_next == 0) {
- status = -ENOMEM;
- break;
+ } else {
+ num_usr_pgs = ul_num_bytes / PG_SIZE4K;
+ if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
+ write = 1;
+
+ for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
+ pg_num = get_user_pages(curr_task, mm, ul_mpu_addr, 1,
+ write, 1, &mapped_page, NULL);
+ if (pg_num > 0) {
+ if (page_count(mapped_page) < 1) {
pr_err("Bad page count after doing "
+ "get_user_pages on "
+ "user buffer\n");
+ bad_page_dump(page_to_phys(mapped_page),
+ mapped_page);
+ }
+ status = pte_set(dev_context->pt_attrs,
+ page_to_phys(mapped_page), va,
+ HW_PAGE_SIZE4KB, &hw_attrs);
+ if (status)
+ break;
+
+ if (mapped_pages)
+ mapped_pages[pg_i] = mapped_page;
+
+ va += HW_PAGE_SIZE4KB;
+ ul_mpu_addr += HW_PAGE_SIZE4KB;
+ } else {
pr_err("DSPBRIDGE: get_user_pages FAILED, "
+ "MPU addr = 0x%x, "
+ "vma->vm_flags = 0x%lx, "
+ "get_user_pages Err "
+ "Value = %d, Buffer "
+ "size=0x%x\n", ul_mpu_addr,
+ vma->vm_flags, pg_num, ul_num_bytes);
+ status = -EPERM;
+ break;
+ }
}
- pa = pa_curr;
- num_of4k_pages = size_curr / HW_PAGE_SIZE4KB;
- while (temp++ < num_of4k_pages) {
- get_page(PHYS_TO_PAGE(pa));
- pa += HW_PAGE_SIZE4KB;
+ }
+ up_read(&mm->mmap_sem);
+func_cont:
+ if (status) {
+ /*
+ * Roll out the mapped pages in case it failed in the
+ * middle of mapping
+ */
+ if (pg_i) {
+ bridge_brd_mem_un_map(dev_context, virt_addr,
+ (pg_i * PG_SIZE4K));
}
- status = pte_update(dev_context, pa_curr, virt_addr +
- (va_curr - ul_mpu_addr), size_curr,
- hw_attrs);
- va_curr += size_curr;
+ status = -EPERM;
}
/*
* In any case, flush the TLB
@@ -1793,21 +1699,66 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
}
/*
- * ======== wait_for_start ========
- * Wait for the singal from DSP that it has started, or time out.
+ * This Bridge driver's function interface table.
*/
-bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr)
-{
- u16 timeout = TIHELEN_ACKTIMEOUT;
+static struct bridge_drv_interface drv_interface_fxns = {
+ /* Bridge API ver. for which this bridge driver is built. */
+ BRD_API_MAJOR_VERSION,
+ BRD_API_MINOR_VERSION,
+ bridge_dev_create,
+ bridge_dev_destroy,
+ bridge_dev_ctrl,
+ bridge_brd_monitor,
+ bridge_brd_start,
+ bridge_brd_stop,
+ bridge_brd_status,
+ bridge_brd_read,
+ bridge_brd_write,
+ bridge_brd_set_state,
+ bridge_brd_mem_copy,
+ bridge_brd_mem_write,
+ bridge_brd_mem_map,
+ bridge_brd_mem_un_map,
+ /* The following CHNL functions are provided by chnl_io.lib: */
+ bridge_chnl_create,
+ bridge_chnl_destroy,
+ bridge_chnl_open,
+ bridge_chnl_close,
+ bridge_chnl_add_io_req,
+ bridge_chnl_get_ioc,
+ bridge_chnl_cancel_io,
+ bridge_chnl_flush_io,
+ bridge_chnl_get_info,
+ bridge_chnl_get_mgr_info,
+ bridge_chnl_idle,
+ bridge_chnl_register_notify,
+ /* The following IO functions are provided by chnl_io.lib: */
+ bridge_io_create,
+ bridge_io_destroy,
+ bridge_io_on_loaded,
+ bridge_io_get_proc_load,
+ /* The following msg_ctrl functions are provided by chnl_io.lib: */
+ bridge_msg_create,
+ bridge_msg_create_queue,
+ bridge_msg_delete,
+ bridge_msg_delete_queue,
+ bridge_msg_get,
+ bridge_msg_put,
+ bridge_msg_register_notify,
+ bridge_msg_set_queue_id,
+};
- /* Wait for response from board */
- while (__raw_readw(dw_sync_addr) && --timeout)
- udelay(10);
+/*
+ * ======== bridge_drv_entry ========
+ * Purpose:
+ * Bridge Driver entry point.
+ */
+void bridge_drv_entry(struct bridge_drv_interface **drv_intf,
+ const char *driver_file_name)
+{
+ if (strcmp(driver_file_name, "UMA") == 0)
+ *drv_intf = &drv_interface_fxns;
+ else
+ dev_dbg(bridge, "%s: Unknown Bridge file name\n", __func__);
- /* If timed out: return false */
- if (!timeout) {
- pr_err("%s: Timed out waiting DSP to Start\n", __func__);
- return false;
- }
- return true;
}
--
1.7.8.6
* [PATCH v2 04/15] tidspbridge: tiomap3430: Remove unneeded dev_context local variables
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
` (2 preceding siblings ...)
2012-09-19 12:06 ` [PATCH v2 03/15] tidspbridge: tiomap3430: Reorder functions to avoid forward declarations Laurent Pinchart
@ 2012-09-19 12:06 ` Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 05/15] tidspbridge: tiomap3430: Factor out common page release code Laurent Pinchart
` (11 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Most functions that take a device context as an argument immediately
assign it to a local variable of the same type and use that local
variable. Remove the variable and use the function parameter directly.
Rename all remaining occurrences of dev_context to dev_ctxt to be
consistent with the rest of the code.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 216 ++++++++++++-------------
1 files changed, 101 insertions(+), 115 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index fa5b7b9..5113da8 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -132,7 +132,6 @@ static struct notifier_block dsp_mbox_notifier = {
*/
static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt)
{
- struct bridge_dev_context *dev_context = dev_ctxt;
u32 temp;
struct omap_dsp_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;
@@ -161,7 +160,7 @@ static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt)
dsp_clk_enable(DSP_CLK_IVA2);
/* set the device state to IDLE */
- dev_context->brd_state = BRD_IDLE;
+ dev_ctxt->brd_state = BRD_IDLE;
return 0;
}
@@ -176,20 +175,19 @@ static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
u32 ul_num_bytes, u32 mem_type)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
u32 offset;
u32 dsp_base_addr = dev_ctxt->dsp_base_addr;
- if (dsp_addr < dev_context->dsp_start_add) {
+ if (dsp_addr < dev_ctxt->dsp_start_add) {
status = -EPERM;
return status;
}
/* change here to account for the 3 bands of the DSP internal memory */
- if ((dsp_addr - dev_context->dsp_start_add) <
- dev_context->internal_size) {
- offset = dsp_addr - dev_context->dsp_start_add;
+ if ((dsp_addr - dev_ctxt->dsp_start_add) <
+ dev_ctxt->internal_size) {
+ offset = dsp_addr - dev_ctxt->dsp_start_add;
} else {
- status = read_ext_dsp_data(dev_context, host_buff, dsp_addr,
+ status = read_ext_dsp_data(dev_ctxt, host_buff, dsp_addr,
ul_num_bytes, mem_type);
return status;
}
@@ -207,18 +205,17 @@ static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
u32 ul_num_bytes, u32 mem_type)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
- if (dsp_addr < dev_context->dsp_start_add) {
+ if (dsp_addr < dev_ctxt->dsp_start_add) {
status = -EPERM;
return status;
}
- if ((dsp_addr - dev_context->dsp_start_add) <
- dev_context->internal_size) {
+ if ((dsp_addr - dev_ctxt->dsp_start_add) <
+ dev_ctxt->internal_size) {
status = write_dsp_data(dev_ctxt, host_buff, dsp_addr,
ul_num_bytes, mem_type);
} else {
- status = write_ext_dsp_data(dev_context, host_buff, dsp_addr,
+ status = write_ext_dsp_data(dev_ctxt, host_buff, dsp_addr,
ul_num_bytes, mem_type, false);
}
@@ -234,9 +231,8 @@ static int bridge_brd_set_state(struct bridge_dev_context *dev_ctxt,
u32 brd_state)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
- dev_context->brd_state = brd_state;
+ dev_ctxt->brd_state = brd_state;
return status;
}
@@ -244,7 +240,7 @@ static int bridge_brd_set_state(struct bridge_dev_context *dev_ctxt,
* ======== wait_for_start ========
* Wait for the singal from DSP that it has started, or time out.
*/
-bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr)
+bool wait_for_start(struct bridge_dev_context *dev_ctxt, u32 dw_sync_addr)
{
u16 timeout = TIHELEN_ACKTIMEOUT;
@@ -274,7 +270,6 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
u32 dsp_addr)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
u32 dw_sync_addr = 0;
u32 ul_shm_base; /* Gpp Phys SM base addr(byte) */
u32 ul_shm_base_virt; /* Dsp Virt SM base addr */
@@ -299,15 +294,15 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
* last dsp base image was loaded. The first entry is always
* SHMMEM base. */
/* Get SHM_BEG - convert to byte address */
- (void)dev_get_symbol(dev_context->dev_obj, SHMBASENAME,
+ (void)dev_get_symbol(dev_ctxt->dev_obj, SHMBASENAME,
&ul_shm_base_virt);
ul_shm_base_virt *= DSPWORDSIZE;
/* DSP Virtual address */
- ul_tlb_base_virt = dev_context->atlb_entry[0].dsp_va;
+ ul_tlb_base_virt = dev_ctxt->atlb_entry[0].dsp_va;
ul_shm_offset_virt =
ul_shm_base_virt - (ul_tlb_base_virt * DSPWORDSIZE);
/* Kernel logical address */
- ul_shm_base = dev_context->atlb_entry[0].gpp_va + ul_shm_offset_virt;
+ ul_shm_base = dev_ctxt->atlb_entry[0].gpp_va + ul_shm_offset_virt;
/* 2nd wd is used as sync field */
dw_sync_addr = ul_shm_base + SHMSYNCOFFSET;
@@ -320,7 +315,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
__raw_writel(0xffffffff, dw_sync_addr);
if (!status) {
- resources = dev_context->resources;
+ resources = dev_ctxt->resources;
if (!resources)
status = -EPERM;
@@ -367,7 +362,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
/* Only make TLB entry if both addresses are non-zero */
for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB;
entry_ndx++) {
- struct bridge_ioctl_extproc *e = &dev_context->atlb_entry[entry_ndx];
+ struct bridge_ioctl_extproc *e = &dev_ctxt->atlb_entry[entry_ndx];
struct hw_mmu_map_attrs_t map_attrs = {
.endianism = e->endianism,
.element_size = e->elem_size,
@@ -384,7 +379,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
e->dsp_va,
e->size);
- hw_mmu_tlb_add(dev_context->dsp_mmu_base,
+ hw_mmu_tlb_add(dev_ctxt->dsp_mmu_base,
e->gpp_pa,
e->dsp_va,
e->size,
@@ -401,7 +396,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
hw_mmu_num_locked_set(resources->dmmu_base, itmp_entry_ndx);
hw_mmu_victim_num_set(resources->dmmu_base, itmp_entry_ndx);
hw_mmu_ttb_set(resources->dmmu_base,
- dev_context->pt_attrs->l1_base_pa);
+ dev_ctxt->pt_attrs->l1_base_pa);
hw_mmu_twl_enable(resources->dmmu_base);
/* Enable the SmartIdle and AutoIdle bit for MMU_SYSCONFIG */
@@ -413,9 +408,9 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
hw_mmu_enable(resources->dmmu_base);
/* Enable the BIOS clock */
- (void)dev_get_symbol(dev_context->dev_obj,
+ (void)dev_get_symbol(dev_ctxt->dev_obj,
BRIDGEINIT_BIOSGPTIMER, &ul_bios_gp_timer);
- (void)dev_get_symbol(dev_context->dev_obj,
+ (void)dev_get_symbol(dev_ctxt->dev_obj,
BRIDGEINIT_LOADMON_GPTIMER,
&ul_load_monitor_timer);
}
@@ -424,7 +419,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
if (ul_load_monitor_timer != 0xFFFF) {
clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
ul_load_monitor_timer;
- dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ dsp_peripheral_clk_ctrl(dev_ctxt, &clk_cmd);
} else {
dev_dbg(bridge, "Not able to get the symbol for Load "
"Monitor Timer\n");
@@ -435,7 +430,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
if (ul_bios_gp_timer != 0xFFFF) {
clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
ul_bios_gp_timer;
- dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ dsp_peripheral_clk_ctrl(dev_ctxt, &clk_cmd);
} else {
dev_dbg(bridge,
"Not able to get the symbol for BIOS Timer\n");
@@ -444,7 +439,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
if (!status) {
/* Set the DSP clock rate */
- (void)dev_get_symbol(dev_context->dev_obj,
+ (void)dev_get_symbol(dev_ctxt->dev_obj,
"_BRIDGEINIT_DSP_FREQ", &ul_dsp_clk_addr);
/*Set Autoidle Mode for IVA2 PLL */
(*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
@@ -455,7 +450,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
ul_dsp_clk_rate = dsp_clk_get_iva2_rate();
dev_dbg(bridge, "%s: DSP clock rate (KHZ): 0x%x \n",
__func__, ul_dsp_clk_rate);
- (void)bridge_brd_write(dev_context,
+ (void)bridge_brd_write(dev_ctxt,
(u8 *) &ul_dsp_clk_rate,
ul_dsp_clk_addr, sizeof(u32), 0);
}
@@ -463,9 +458,9 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
* Enable Mailbox events and also drain any pending
* stale messages.
*/
- dev_context->mbox = omap_mbox_get("dsp", &dsp_mbox_notifier);
- if (IS_ERR(dev_context->mbox)) {
- dev_context->mbox = NULL;
+ dev_ctxt->mbox = omap_mbox_get("dsp", &dsp_mbox_notifier);
+ if (IS_ERR(dev_ctxt->mbox)) {
+ dev_ctxt->mbox = NULL;
pr_err("%s: Failed to get dsp mailbox handle\n",
__func__);
status = -EPERM;
@@ -508,17 +503,17 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
/* Wait for DSP to clear word in shared memory */
/* Read the Location */
- if (!wait_for_start(dev_context, dw_sync_addr))
+ if (!wait_for_start(dev_ctxt, dw_sync_addr))
status = -ETIMEDOUT;
- dev_get_symbol(dev_context->dev_obj, "_WDT_enable", &wdt_en);
+ dev_get_symbol(dev_ctxt->dev_obj, "_WDT_enable", &wdt_en);
if (wdt_en) {
/* Start wdt */
dsp_wdt_sm_set((void *)ul_shm_base);
dsp_wdt_enable(true);
}
- status = dev_get_io_mgr(dev_context->dev_obj, &hio_mgr);
+ status = dev_get_io_mgr(dev_ctxt->dev_obj, &hio_mgr);
if (hio_mgr) {
io_sh_msetting(hio_mgr, SHM_OPPINFO, NULL);
/* Write the synchronization bit to indicate the
@@ -527,10 +522,10 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
__raw_writel(0XCAFECAFE, dw_sync_addr);
/* update board state */
- dev_context->brd_state = BRD_RUNNING;
- /* (void)chnlsm_enable_interrupt(dev_context); */
+ dev_ctxt->brd_state = BRD_RUNNING;
+ /* (void)chnlsm_enable_interrupt(dev_ctxt); */
} else {
- dev_context->brd_state = BRD_UNKNOWN;
+ dev_ctxt->brd_state = BRD_UNKNOWN;
}
}
return status;
@@ -547,13 +542,12 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
static int bridge_brd_stop(struct bridge_dev_context *dev_ctxt)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
struct pg_table_attrs *pt_attrs;
u32 dsp_pwr_state;
struct omap_dsp_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;
- if (dev_context->brd_state == BRD_STOPPED)
+ if (dev_ctxt->brd_state == BRD_STOPPED)
return status;
/* as per TRM, it is advised to first drive the IVA2 to 'Standby' mode,
@@ -564,7 +558,7 @@ static int bridge_brd_stop(struct bridge_dev_context *dev_ctxt)
if (dsp_pwr_state != PWRDM_POWER_OFF) {
(*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2_MASK, 0,
OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
- sm_interrupt_dsp(dev_context, MBX_PM_DSPIDLE);
+ sm_interrupt_dsp(dev_ctxt, MBX_PM_DSPIDLE);
mdelay(10);
/* IVA2 is not in OFF state */
@@ -578,32 +572,32 @@ static int bridge_brd_stop(struct bridge_dev_context *dev_ctxt)
udelay(10);
/* Release the Ext Base virtual Address as the next DSP Program
* may have a different load address */
- if (dev_context->dsp_ext_base_addr)
- dev_context->dsp_ext_base_addr = 0;
+ if (dev_ctxt->dsp_ext_base_addr)
+ dev_ctxt->dsp_ext_base_addr = 0;
- dev_context->brd_state = BRD_STOPPED; /* update board state */
+ dev_ctxt->brd_state = BRD_STOPPED; /* update board state */
dsp_wdt_enable(false);
/* This is a good place to clear the MMU page tables as well */
- if (dev_context->pt_attrs) {
- pt_attrs = dev_context->pt_attrs;
+ if (dev_ctxt->pt_attrs) {
+ pt_attrs = dev_ctxt->pt_attrs;
memset((u8 *) pt_attrs->l1_base_va, 0x00, pt_attrs->l1_size);
memset((u8 *) pt_attrs->l2_base_va, 0x00, pt_attrs->l2_size);
memset((u8 *) pt_attrs->pg_info, 0x00,
(pt_attrs->l2_num_pages * sizeof(struct page_info)));
}
/* Disable the mailbox interrupts */
- if (dev_context->mbox) {
- omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
- omap_mbox_put(dev_context->mbox, &dsp_mbox_notifier);
- dev_context->mbox = NULL;
+ if (dev_ctxt->mbox) {
+ omap_mbox_disable_irq(dev_ctxt->mbox, IRQ_RX);
+ omap_mbox_put(dev_ctxt->mbox, &dsp_mbox_notifier);
+ dev_ctxt->mbox = NULL;
}
/* Reset IVA2 clocks*/
(*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2_MASK | OMAP3430_RST2_IVA2_MASK |
OMAP3430_RST3_IVA2_MASK, OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
- dsp_clock_disable_all(dev_context->dsp_per_clks);
+ dsp_clock_disable_all(dev_ctxt->dsp_per_clks);
dsp_clk_disable(DSP_CLK_IVA2);
return status;
@@ -616,8 +610,7 @@ static int bridge_brd_stop(struct bridge_dev_context *dev_ctxt)
static int bridge_brd_status(struct bridge_dev_context *dev_ctxt,
int *board_state)
{
- struct bridge_dev_context *dev_context = dev_ctxt;
- *board_state = dev_context->brd_state;
+ *board_state = dev_ctxt->brd_state;
return 0;
}
@@ -631,7 +624,7 @@ static int bridge_dev_create(struct bridge_dev_context
struct cfg_hostres *config_param)
{
int status = 0;
- struct bridge_dev_context *dev_context = NULL;
+ struct bridge_dev_context *dev_ctxt = NULL;
s32 entry_ndx;
struct cfg_hostres *resources = config_param;
struct pg_table_attrs *pt_attrs;
@@ -642,30 +635,30 @@ static int bridge_dev_create(struct bridge_dev_context
/* Allocate and initialize a data structure to contain the bridge driver
* state, which becomes the context for later calls into this driver */
- dev_context = kzalloc(sizeof(struct bridge_dev_context), GFP_KERNEL);
- if (!dev_context) {
+ dev_ctxt = kzalloc(sizeof(struct bridge_dev_context), GFP_KERNEL);
+ if (!dev_ctxt) {
status = -ENOMEM;
goto func_end;
}
- dev_context->dsp_start_add = (u32) OMAP_GEM_BASE;
- dev_context->self_loop = (u32) NULL;
- dev_context->dsp_per_clks = 0;
- dev_context->internal_size = OMAP_DSP_SIZE;
+ dev_ctxt->dsp_start_add = (u32) OMAP_GEM_BASE;
+ dev_ctxt->self_loop = (u32) NULL;
+ dev_ctxt->dsp_per_clks = 0;
+ dev_ctxt->internal_size = OMAP_DSP_SIZE;
/* Clear dev context MMU table entries.
* These get set on bridge_io_on_loaded() call after program loaded. */
for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB; entry_ndx++) {
- dev_context->atlb_entry[entry_ndx].gpp_pa =
- dev_context->atlb_entry[entry_ndx].dsp_va = 0;
+ dev_ctxt->atlb_entry[entry_ndx].gpp_pa =
+ dev_ctxt->atlb_entry[entry_ndx].dsp_va = 0;
}
- dev_context->dsp_base_addr = (u32) MEM_LINEAR_ADDRESS((void *)
+ dev_ctxt->dsp_base_addr = (u32) MEM_LINEAR_ADDRESS((void *)
(config_param->
mem_base
[3]),
config_param->
mem_length
[3]);
- if (!dev_context->dsp_base_addr)
+ if (!dev_ctxt->dsp_base_addr)
status = -EPERM;
pt_attrs = kzalloc(sizeof(struct pg_table_attrs), GFP_KERNEL);
@@ -741,29 +734,29 @@ static int bridge_dev_create(struct bridge_dev_context
}
if ((pt_attrs != NULL) && (pt_attrs->l1_base_va != 0) &&
(pt_attrs->l2_base_va != 0) && (pt_attrs->pg_info != NULL))
- dev_context->pt_attrs = pt_attrs;
+ dev_ctxt->pt_attrs = pt_attrs;
else
status = -ENOMEM;
if (!status) {
spin_lock_init(&pt_attrs->pg_lock);
- dev_context->tc_word_swap_on = drv_datap->tc_wordswapon;
+ dev_ctxt->tc_word_swap_on = drv_datap->tc_wordswapon;
/* Set the Clock Divisor for the DSP module */
udelay(5);
/* MMU address is obtained from the host
* resources struct */
- dev_context->dsp_mmu_base = resources->dmmu_base;
+ dev_ctxt->dsp_mmu_base = resources->dmmu_base;
}
if (!status) {
- dev_context->dev_obj = hdev_obj;
+ dev_ctxt->dev_obj = hdev_obj;
/* Store current board state. */
- dev_context->brd_state = BRD_UNKNOWN;
- dev_context->resources = resources;
+ dev_ctxt->brd_state = BRD_UNKNOWN;
+ dev_ctxt->resources = resources;
dsp_clk_enable(DSP_CLK_IVA2);
- bridge_brd_stop(dev_context);
+ bridge_brd_stop(dev_ctxt);
/* Return ptr to our device state to the DSP API for storage */
- *dev_cntxt = dev_context;
+ *dev_cntxt = dev_ctxt;
} else {
if (pt_attrs != NULL) {
kfree(pt_attrs->pg_info);
@@ -782,7 +775,7 @@ static int bridge_dev_create(struct bridge_dev_context
}
}
kfree(pt_attrs);
- kfree(dev_context);
+ kfree(dev_ctxt);
}
func_end:
return status;
@@ -792,7 +785,7 @@ func_end:
* ======== bridge_dev_ctrl ========
* Receives device specific commands.
*/
-static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
+static int bridge_dev_ctrl(struct bridge_dev_context *dev_ctxt,
u32 dw_cmd, void *pargs)
{
int status = 0;
@@ -808,33 +801,33 @@ static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
case BRDIOCTL_SETMMUCONFIG:
/* store away dsp-mmu setup values for later use */
for (ndx = 0; ndx < BRDIOCTL_NUMOFMMUTLB; ndx++, pa_ext_proc++)
- dev_context->atlb_entry[ndx] = *pa_ext_proc;
+ dev_ctxt->atlb_entry[ndx] = *pa_ext_proc;
break;
case BRDIOCTL_DEEPSLEEP:
case BRDIOCTL_EMERGENCYSLEEP:
/* Currently only DSP Idle is supported Need to update for
* later releases */
- status = sleep_dsp(dev_context, PWR_DEEPSLEEP, pargs);
+ status = sleep_dsp(dev_ctxt, PWR_DEEPSLEEP, pargs);
break;
case BRDIOCTL_WAKEUP:
- status = wake_dsp(dev_context, pargs);
+ status = wake_dsp(dev_ctxt, pargs);
break;
case BRDIOCTL_CLK_CTRL:
status = 0;
/* Looking For Baseport Fix for Clocks */
- status = dsp_peripheral_clk_ctrl(dev_context, pargs);
+ status = dsp_peripheral_clk_ctrl(dev_ctxt, pargs);
break;
case BRDIOCTL_PWR_HIBERNATE:
- status = handle_hibernation_from_dsp(dev_context);
+ status = handle_hibernation_from_dsp(dev_ctxt);
break;
case BRDIOCTL_PRESCALE_NOTIFY:
- status = pre_scale_dsp(dev_context, pargs);
+ status = pre_scale_dsp(dev_ctxt, pargs);
break;
case BRDIOCTL_POSTSCALE_NOTIFY:
- status = post_scale_dsp(dev_context, pargs);
+ status = post_scale_dsp(dev_ctxt, pargs);
break;
case BRDIOCTL_CONSTRAINT_REQUEST:
- status = handle_constraints_set(dev_context, pargs);
+ status = handle_constraints_set(dev_ctxt, pargs);
break;
default:
status = -EPERM;
@@ -851,8 +844,6 @@ static int bridge_dev_destroy(struct bridge_dev_context *dev_ctxt)
{
struct pg_table_attrs *pt_attrs;
int status = 0;
- struct bridge_dev_context *dev_context = (struct bridge_dev_context *)
- dev_ctxt;
struct cfg_hostres *host_res;
u32 shm_size;
struct drv_data *drv_datap = dev_get_drvdata(bridge);
@@ -862,9 +853,9 @@ static int bridge_dev_destroy(struct bridge_dev_context *dev_ctxt)
return -EFAULT;
/* first put the device to stop state */
- bridge_brd_stop(dev_context);
- if (dev_context->pt_attrs) {
- pt_attrs = dev_context->pt_attrs;
+ bridge_brd_stop(dev_ctxt);
+ if (dev_ctxt->pt_attrs) {
+ pt_attrs = dev_ctxt->pt_attrs;
kfree(pt_attrs->pg_info);
if (pt_attrs->l2_tbl_alloc_va) {
@@ -881,8 +872,8 @@ static int bridge_dev_destroy(struct bridge_dev_context *dev_ctxt)
}
- if (dev_context->resources) {
- host_res = dev_context->resources;
+ if (dev_ctxt->resources) {
+ host_res = dev_ctxt->resources;
shm_size = drv_datap->shm_size;
if (shm_size >= 0x10000) {
if ((host_res->mem_base[1]) &&
@@ -944,7 +935,6 @@ static int bridge_brd_mem_copy(struct bridge_dev_context *dev_ctxt,
u32 copy_bytes = 0;
u32 total_bytes = ul_num_bytes;
u8 host_buf[BUFFERSIZE];
- struct bridge_dev_context *dev_context = dev_ctxt;
while (total_bytes > 0 && !status) {
copy_bytes =
total_bytes > BUFFERSIZE ? BUFFERSIZE : total_bytes;
@@ -952,8 +942,8 @@ static int bridge_brd_mem_copy(struct bridge_dev_context *dev_ctxt,
status = read_ext_dsp_data(dev_ctxt, host_buf, src_addr,
copy_bytes, mem_type);
if (!status) {
- if (dest_addr < (dev_context->dsp_start_add +
- dev_context->internal_size)) {
+ if (dest_addr < (dev_ctxt->dsp_start_add +
+ dev_ctxt->internal_size)) {
/* Write to Internal memory */
status = write_dsp_data(dev_ctxt, host_buf,
dest_addr, copy_bytes,
@@ -979,15 +969,14 @@ static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
u32 ul_num_bytes, u32 mem_type)
{
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
u32 ul_remain_bytes = 0;
u32 ul_bytes = 0;
ul_remain_bytes = ul_num_bytes;
while (ul_remain_bytes > 0 && !status) {
ul_bytes =
ul_remain_bytes > BUFFERSIZE ? BUFFERSIZE : ul_remain_bytes;
- if (dsp_addr < (dev_context->dsp_start_add +
- dev_context->internal_size)) {
+ if (dsp_addr < (dev_ctxt->dsp_start_add +
+ dev_ctxt->internal_size)) {
status =
write_dsp_data(dev_ctxt, host_buff, dsp_addr,
ul_bytes, mem_type);
@@ -1113,7 +1102,6 @@ static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
u32 pa_curr = pa;
u32 va_curr = va;
u32 num_bytes = size;
- struct bridge_dev_context *dev_context = dev_ctxt;
int status = 0;
u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
@@ -1129,7 +1117,7 @@ static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
(page_size[i] -
1)) == 0)) {
status =
- pte_set(dev_context->pt_attrs, pa_curr,
+ pte_set(dev_ctxt->pt_attrs, pa_curr,
va_curr, page_size[i], map_attrs);
pa_curr += page_size[i];
va_curr += page_size[i];
@@ -1180,17 +1168,17 @@ static u32 user_va2_pa(struct mm_struct *mm, u32 address)
return 0;
}
-static inline void flush_all(struct bridge_dev_context *dev_context)
+static inline void flush_all(struct bridge_dev_context *dev_ctxt)
{
- if (dev_context->brd_state == BRD_DSP_HIBERNATION ||
- dev_context->brd_state == BRD_HIBERNATION)
- wake_dsp(dev_context, NULL);
+ if (dev_ctxt->brd_state == BRD_DSP_HIBERNATION ||
+ dev_ctxt->brd_state == BRD_HIBERNATION)
+ wake_dsp(dev_ctxt, NULL);
- hw_mmu_tlb_flush_all(dev_context->dsp_mmu_base);
+ hw_mmu_tlb_flush_all(dev_ctxt->dsp_mmu_base);
}
/* Memory map kernel VA -- memory allocated with vmalloc */
-static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
+static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
u32 ul_mpu_addr, u32 virt_addr,
u32 ul_num_bytes,
struct hw_mmu_map_attrs_t *hw_attrs)
@@ -1250,7 +1238,7 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
get_page(PHYS_TO_PAGE(pa));
pa += HW_PAGE_SIZE4KB;
}
- status = pte_update(dev_context, pa_curr, virt_addr +
+ status = pte_update(dev_ctxt, pa_curr, virt_addr +
(va_curr - ul_mpu_addr), size_curr,
hw_attrs);
va_curr += size_curr;
@@ -1261,7 +1249,7 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
* repetition while mapping non-contiguous physical regions of a virtual
* region
*/
- flush_all(dev_context);
+ flush_all(dev_ctxt);
dev_dbg(bridge, "%s status %x\n", __func__, status);
return status;
}
@@ -1303,8 +1291,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
u32 va_curr;
struct page *pg = NULL;
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
- struct pg_table_attrs *pt = dev_context->pt_attrs;
+ struct pg_table_attrs *pt = dev_ctxt->pt_attrs;
u32 temp;
u32 paddr;
u32 numof4k_pages = 0;
@@ -1468,7 +1455,7 @@ skip_coarse_page:
* get flushed
*/
EXIT_LOOP:
- flush_all(dev_context);
+ flush_all(dev_ctxt);
dev_dbg(bridge,
"%s: va_curr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
" rem_bytes_l2 %x status %x\n", __func__, va_curr, pte_addr_l1,
@@ -1492,7 +1479,6 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
{
u32 attrs;
int status = 0;
- struct bridge_dev_context *dev_context = dev_ctxt;
struct hw_mmu_map_attrs_t hw_attrs;
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
@@ -1563,7 +1549,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
* Pass the translated pa to pte_update.
*/
if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_context, ul_mpu_addr, virt_addr,
+ status = pte_update(dev_ctxt, ul_mpu_addr, virt_addr,
ul_num_bytes, &hw_attrs);
goto func_cont;
}
@@ -1625,7 +1611,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
bad_page_dump(pa, pg);
}
}
- status = pte_set(dev_context->pt_attrs, pa,
+ status = pte_set(dev_ctxt->pt_attrs, pa,
va, HW_PAGE_SIZE4KB, &hw_attrs);
if (status)
break;
@@ -1650,7 +1636,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
bad_page_dump(page_to_phys(mapped_page),
mapped_page);
}
- status = pte_set(dev_context->pt_attrs,
+ status = pte_set(dev_ctxt->pt_attrs,
page_to_phys(mapped_page), va,
HW_PAGE_SIZE4KB, &hw_attrs);
if (status)
@@ -1682,7 +1668,7 @@ func_cont:
* mapping
*/
if (pg_i) {
- bridge_brd_mem_un_map(dev_context, virt_addr,
+ bridge_brd_mem_un_map(dev_ctxt, virt_addr,
(pg_i * PG_SIZE4K));
}
status = -EPERM;
@@ -1693,7 +1679,7 @@ func_cont:
* repetition while mapping non-contiguous physical regions of a virtual
* region
*/
- flush_all(dev_context);
+ flush_all(dev_ctxt);
dev_dbg(bridge, "%s status %x\n", __func__, status);
return status;
}
--
1.7.8.6
* [PATCH v2 05/15] tidspbridge: tiomap3430: Factor out common page release code
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
The same block of code is used in two places to release pages. Factor it
out to a function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 81 ++++++++++---------------
1 files changed, 31 insertions(+), 50 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 5113da8..a01e9c5e 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1266,6 +1266,31 @@ static void bad_page_dump(u32 pa, struct page *pg)
dump_stack();
}
+/* Release all pages associated with a physical address range. */
+static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
+{
+ struct page *pg;
+ u32 num_pages;
+
+ num_pages = pte_size / PAGE_SIZE;
+
+ for (; num_pages > 0; --num_pages, paddr += HW_PAGE_SIZE4KB) {
+ if (!pfn_valid(__phys_to_pfn(paddr)))
+ continue;
+
+ pg = PHYS_TO_PAGE(paddr);
+ if (page_count(pg) < 1) {
+ pr_info("DSPBRIDGE: UNMAP function: "
+ "COUNT 0 FOR PA 0x%x, size = "
+ "0x%x\n", paddr, num_bytes);
+ bad_page_dump(paddr, pg);
+ } else {
+ set_page_dirty(pg);
+ page_cache_release(pg);
+ }
+ }
+}
+
/*
* ======== bridge_brd_mem_un_map ========
* Invalidate the PTEs for the DSP VA block to be unmapped.
@@ -1289,12 +1314,8 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
u32 rem_bytes;
u32 rem_bytes_l2;
u32 va_curr;
- struct page *pg = NULL;
int status = 0;
struct pg_table_attrs *pt = dev_ctxt->pt_attrs;
- u32 temp;
- u32 paddr;
- u32 numof4k_pages = 0;
va_curr = virt_addr;
rem_bytes = ul_num_bytes;
@@ -1354,30 +1375,9 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
break;
}
- /* Collect Physical addresses from VA */
- paddr = (pte_val & ~(pte_size - 1));
- if (pte_size == HW_PAGE_SIZE64KB)
- numof4k_pages = 16;
- else
- numof4k_pages = 1;
- temp = 0;
- while (temp++ < numof4k_pages) {
- if (!pfn_valid(__phys_to_pfn(paddr))) {
- paddr += HW_PAGE_SIZE4KB;
- continue;
- }
- pg = PHYS_TO_PAGE(paddr);
- if (page_count(pg) < 1) {
- pr_info("DSPBRIDGE: UNMAP function: "
- "COUNT 0 FOR PA 0x%x, size = "
- "0x%x\n", paddr, ul_num_bytes);
- bad_page_dump(paddr, pg);
- } else {
- set_page_dirty(pg);
- page_cache_release(pg);
- }
- paddr += HW_PAGE_SIZE4KB;
- }
+ bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
+ ul_num_bytes);
+
if (hw_mmu_pte_clear(pte_addr_l2, va_curr, pte_size)) {
status = -EPERM;
goto EXIT_LOOP;
@@ -1419,28 +1419,9 @@ skip_coarse_page:
break;
}
- if (pte_size == HW_PAGE_SIZE1MB)
- numof4k_pages = 256;
- else
- numof4k_pages = 4096;
- temp = 0;
- /* Collect Physical addresses from VA */
- paddr = (pte_val & ~(pte_size - 1));
- while (temp++ < numof4k_pages) {
- if (pfn_valid(__phys_to_pfn(paddr))) {
- pg = PHYS_TO_PAGE(paddr);
- if (page_count(pg) < 1) {
- pr_info("DSPBRIDGE: UNMAP function: "
- "COUNT 0 FOR PA 0x%x, size = "
- "0x%x\n", paddr, ul_num_bytes);
- bad_page_dump(paddr, pg);
- } else {
- set_page_dirty(pg);
- page_cache_release(pg);
- }
- }
- paddr += HW_PAGE_SIZE4KB;
- }
+ bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
+ ul_num_bytes);
+
if (!hw_mmu_pte_clear(l1_base_va, va_curr, pte_size)) {
status = 0;
rem_bytes -= pte_size;
--
1.7.8.6
* [PATCH v2 06/15] tidspbridge: tiomap3430: Remove ul_ prefix
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
There's no need to prefix all u32 variables with ul_.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 183 ++++++++++++-------------
1 files changed, 87 insertions(+), 96 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index a01e9c5e..3dfb663 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -172,7 +172,7 @@ static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt)
*/
static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
u8 *host_buff, u32 dsp_addr,
- u32 ul_num_bytes, u32 mem_type)
+ u32 num_bytes, u32 mem_type)
{
int status = 0;
u32 offset;
@@ -188,11 +188,11 @@ static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
offset = dsp_addr - dev_ctxt->dsp_start_add;
} else {
status = read_ext_dsp_data(dev_ctxt, host_buff, dsp_addr,
- ul_num_bytes, mem_type);
+ num_bytes, mem_type);
return status;
}
/* copy the data from DSP memory, */
- memcpy(host_buff, (void *)(dsp_base_addr + offset), ul_num_bytes);
+ memcpy(host_buff, (void *)(dsp_base_addr + offset), num_bytes);
return status;
}
@@ -202,7 +202,7 @@ static int bridge_brd_read(struct bridge_dev_context *dev_ctxt,
*/
static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
u8 *host_buff, u32 dsp_addr,
- u32 ul_num_bytes, u32 mem_type)
+ u32 num_bytes, u32 mem_type)
{
int status = 0;
@@ -213,10 +213,10 @@ static int bridge_brd_write(struct bridge_dev_context *dev_ctxt,
if ((dsp_addr - dev_ctxt->dsp_start_add) <
dev_ctxt->internal_size) {
status = write_dsp_data(dev_ctxt, host_buff, dsp_addr,
- ul_num_bytes, mem_type);
+ num_bytes, mem_type);
} else {
status = write_ext_dsp_data(dev_ctxt, host_buff, dsp_addr,
- ul_num_bytes, mem_type, false);
+ num_bytes, mem_type, false);
}
return status;
@@ -271,21 +271,21 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
{
int status = 0;
u32 dw_sync_addr = 0;
- u32 ul_shm_base; /* Gpp Phys SM base addr(byte) */
- u32 ul_shm_base_virt; /* Dsp Virt SM base addr */
- u32 ul_tlb_base_virt; /* Base of MMU TLB entry */
+ u32 shm_base; /* Gpp Phys SM base addr(byte) */
+ u32 shm_base_virt; /* Dsp Virt SM base addr */
+ u32 tlb_base_virt; /* Base of MMU TLB entry */
/* Offset of shm_base_virt from tlb_base_virt */
- u32 ul_shm_offset_virt;
+ u32 shm_offset_virt;
s32 entry_ndx;
s32 itmp_entry_ndx = 0; /* DSP-MMU TLB entry base address */
struct cfg_hostres *resources = NULL;
u32 temp;
- u32 ul_dsp_clk_rate;
- u32 ul_dsp_clk_addr;
- u32 ul_bios_gp_timer;
+ u32 dsp_clk_rate;
+ u32 dsp_clk_addr;
+ u32 bios_gp_timer;
u32 clk_cmd;
struct io_mgr *hio_mgr;
- u32 ul_load_monitor_timer;
+ u32 load_monitor_timer;
u32 wdt_en = 0;
struct omap_dsp_platform_data *pdata =
omap_dspbridge_dev->dev.platform_data;
@@ -294,21 +294,19 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
* last dsp base image was loaded. The first entry is always
* SHMMEM base. */
/* Get SHM_BEG - convert to byte address */
- (void)dev_get_symbol(dev_ctxt->dev_obj, SHMBASENAME,
- &ul_shm_base_virt);
- ul_shm_base_virt *= DSPWORDSIZE;
+ (void)dev_get_symbol(dev_ctxt->dev_obj, SHMBASENAME, &shm_base_virt);
+ shm_base_virt *= DSPWORDSIZE;
/* DSP Virtual address */
- ul_tlb_base_virt = dev_ctxt->atlb_entry[0].dsp_va;
- ul_shm_offset_virt =
- ul_shm_base_virt - (ul_tlb_base_virt * DSPWORDSIZE);
+ tlb_base_virt = dev_ctxt->atlb_entry[0].dsp_va;
+ shm_offset_virt = shm_base_virt - (tlb_base_virt * DSPWORDSIZE);
/* Kernel logical address */
- ul_shm_base = dev_ctxt->atlb_entry[0].gpp_va + ul_shm_offset_virt;
+ shm_base = dev_ctxt->atlb_entry[0].gpp_va + shm_offset_virt;
/* 2nd wd is used as sync field */
- dw_sync_addr = ul_shm_base + SHMSYNCOFFSET;
+ dw_sync_addr = shm_base + SHMSYNCOFFSET;
/* Write a signature into the shm base + offset; this will
* get cleared when the DSP program starts. */
- if ((ul_shm_base_virt == 0) || (ul_shm_base == 0)) {
+ if ((shm_base_virt == 0) || (shm_base == 0)) {
pr_err("%s: Illegal SM base\n", __func__);
status = -EPERM;
} else
@@ -409,16 +407,16 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
/* Enable the BIOS clock */
(void)dev_get_symbol(dev_ctxt->dev_obj,
- BRIDGEINIT_BIOSGPTIMER, &ul_bios_gp_timer);
+ BRIDGEINIT_BIOSGPTIMER, &bios_gp_timer);
(void)dev_get_symbol(dev_ctxt->dev_obj,
BRIDGEINIT_LOADMON_GPTIMER,
- &ul_load_monitor_timer);
+ &load_monitor_timer);
}
if (!status) {
- if (ul_load_monitor_timer != 0xFFFF) {
+ if (load_monitor_timer != 0xFFFF) {
clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
- ul_load_monitor_timer;
+ load_monitor_timer;
dsp_peripheral_clk_ctrl(dev_ctxt, &clk_cmd);
} else {
dev_dbg(bridge, "Not able to get the symbol for Load "
@@ -427,9 +425,9 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
}
if (!status) {
- if (ul_bios_gp_timer != 0xFFFF) {
+ if (bios_gp_timer != 0xFFFF) {
clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
- ul_bios_gp_timer;
+ bios_gp_timer;
dsp_peripheral_clk_ctrl(dev_ctxt, &clk_cmd);
} else {
dev_dbg(bridge,
@@ -440,19 +438,18 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
if (!status) {
/* Set the DSP clock rate */
(void)dev_get_symbol(dev_ctxt->dev_obj,
- "_BRIDGEINIT_DSP_FREQ", &ul_dsp_clk_addr);
+ "_BRIDGEINIT_DSP_FREQ", &dsp_clk_addr);
/*Set Autoidle Mode for IVA2 PLL */
(*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
OMAP3430_IVA2_MOD, OMAP3430_CM_AUTOIDLE_PLL);
- if ((unsigned int *)ul_dsp_clk_addr != NULL) {
+ if ((unsigned int *)dsp_clk_addr != NULL) {
/* Get the clock rate */
- ul_dsp_clk_rate = dsp_clk_get_iva2_rate();
+ dsp_clk_rate = dsp_clk_get_iva2_rate();
dev_dbg(bridge, "%s: DSP clock rate (KHZ): 0x%x \n",
- __func__, ul_dsp_clk_rate);
- (void)bridge_brd_write(dev_ctxt,
- (u8 *) &ul_dsp_clk_rate,
- ul_dsp_clk_addr, sizeof(u32), 0);
+ __func__, dsp_clk_rate);
+ (void)bridge_brd_write(dev_ctxt, (u8 *) &dsp_clk_rate,
+ dsp_clk_addr, sizeof(u32), 0);
}
/*
* Enable Mailbox events and also drain any pending
@@ -509,7 +506,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
dev_get_symbol(dev_ctxt->dev_obj, "_WDT_enable", &wdt_en);
if (wdt_en) {
/* Start wdt */
- dsp_wdt_sm_set((void *)ul_shm_base);
+ dsp_wdt_sm_set((void *)shm_base);
dsp_wdt_enable(true);
}
@@ -927,13 +924,13 @@ static int bridge_dev_destroy(struct bridge_dev_context *dev_ctxt)
static int bridge_brd_mem_copy(struct bridge_dev_context *dev_ctxt,
u32 dsp_dest_addr, u32 dsp_src_addr,
- u32 ul_num_bytes, u32 mem_type)
+ u32 num_bytes, u32 mem_type)
{
int status = 0;
u32 src_addr = dsp_src_addr;
u32 dest_addr = dsp_dest_addr;
u32 copy_bytes = 0;
- u32 total_bytes = ul_num_bytes;
+ u32 total_bytes = num_bytes;
u8 host_buf[BUFFERSIZE];
while (total_bytes > 0 && !status) {
copy_bytes =
@@ -966,28 +963,27 @@ static int bridge_brd_mem_copy(struct bridge_dev_context *dev_ctxt,
/* Mem Write does not halt the DSP to write unlike bridge_brd_write */
static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
u8 *host_buff, u32 dsp_addr,
- u32 ul_num_bytes, u32 mem_type)
+ u32 num_bytes, u32 mem_type)
{
int status = 0;
- u32 ul_remain_bytes = 0;
- u32 ul_bytes = 0;
- ul_remain_bytes = ul_num_bytes;
- while (ul_remain_bytes > 0 && !status) {
- ul_bytes =
- ul_remain_bytes > BUFFERSIZE ? BUFFERSIZE : ul_remain_bytes;
+ u32 remain_bytes = 0;
+ u32 bytes = 0;
+ remain_bytes = num_bytes;
+ while (remain_bytes > 0 && !status) {
+ bytes = remain_bytes > BUFFERSIZE ? BUFFERSIZE : remain_bytes;
if (dsp_addr < (dev_ctxt->dsp_start_add +
dev_ctxt->internal_size)) {
status =
write_dsp_data(dev_ctxt, host_buff, dsp_addr,
- ul_bytes, mem_type);
+ bytes, mem_type);
} else {
status = write_ext_dsp_data(dev_ctxt, host_buff,
- dsp_addr, ul_bytes,
+ dsp_addr, bytes,
mem_type, true);
}
- ul_remain_bytes -= ul_bytes;
- dsp_addr += ul_bytes;
- host_buff = host_buff + ul_bytes;
+ remain_bytes -= bytes;
+ dsp_addr += bytes;
+ host_buff = host_buff + bytes;
}
return status;
}
@@ -1179,9 +1175,8 @@ static inline void flush_all(struct bridge_dev_context *dev_ctxt)
/* Memory map kernel VA -- memory allocated with vmalloc */
static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes,
- struct hw_mmu_map_attrs_t *hw_attrs)
+ u32 mpu_addr, u32 virt_addr, u32 num_bytes,
+ struct hw_mmu_map_attrs_t *hw_attrs)
{
int status = 0;
struct page *page[1];
@@ -1200,9 +1195,9 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
* Combine physically contiguous regions to reduce TLBs.
* Pass the translated pa to pte_update.
*/
- num_pages = ul_num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
+ num_pages = num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
i = 0;
- va_curr = ul_mpu_addr;
+ va_curr = mpu_addr;
page[0] = vmalloc_to_page((void *)va_curr);
pa_next = page_to_phys(page[0]);
while (!status && (i < num_pages)) {
@@ -1239,7 +1234,7 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
pa += HW_PAGE_SIZE4KB;
}
status = pte_update(dev_ctxt, pa_curr, virt_addr +
- (va_curr - ul_mpu_addr), size_curr,
+ (va_curr - mpu_addr), size_curr,
hw_attrs);
va_curr += size_curr;
}
@@ -1300,7 +1295,7 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
* we clear consecutive PTEs until we unmap all the bytes
*/
static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 ul_num_bytes)
+ u32 virt_addr, u32 num_bytes)
{
u32 l1_base_va;
u32 l2_base_va;
@@ -1318,13 +1313,13 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
struct pg_table_attrs *pt = dev_ctxt->pt_attrs;
va_curr = virt_addr;
- rem_bytes = ul_num_bytes;
+ rem_bytes = num_bytes;
rem_bytes_l2 = 0;
l1_base_va = pt->l1_base_va;
pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
dev_dbg(bridge, "%s dev_ctxt %p, va %x, NumBytes %x l1_base_va %x, "
"pte_addr_l1 %x\n", __func__, dev_ctxt, virt_addr,
- ul_num_bytes, l1_base_va, pte_addr_l1);
+ num_bytes, l1_base_va, pte_addr_l1);
while (rem_bytes && !status) {
u32 va_curr_orig = va_curr;
@@ -1376,7 +1371,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- ul_num_bytes);
+ num_bytes);
if (hw_mmu_pte_clear(pte_addr_l2, va_curr, pte_size)) {
status = -EPERM;
@@ -1420,7 +1415,7 @@ skip_coarse_page:
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- ul_num_bytes);
+ num_bytes);
if (!hw_mmu_pte_clear(l1_base_va, va_curr, pte_size)) {
status = 0;
@@ -1454,9 +1449,8 @@ EXIT_LOOP:
* TODO: Disable MMU while updating the page tables (but that'll stall DSP)
*/
static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
- u32 ul_mpu_addr, u32 virt_addr,
- u32 ul_num_bytes, u32 ul_map_attr,
- struct page **mapped_pages)
+ u32 mpu_addr, u32 virt_addr, u32 num_bytes,
+ u32 map_attr, struct page **mapped_pages)
{
u32 attrs;
int status = 0;
@@ -1470,20 +1464,20 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
u32 va = virt_addr;
struct task_struct *curr_task = current;
u32 pg_i = 0;
- u32 mpu_addr, pa;
+ u32 pa;
dev_dbg(bridge,
- "%s hDevCtxt %p, pa %x, va %x, size %x, ul_map_attr %x\n",
- __func__, dev_ctxt, ul_mpu_addr, virt_addr, ul_num_bytes,
- ul_map_attr);
- if (ul_num_bytes == 0)
+ "%s hDevCtxt %p, pa %x, va %x, size %x, map_attr %x\n",
+ __func__, dev_ctxt, mpu_addr, virt_addr, num_bytes,
+ map_attr);
+ if (num_bytes == 0)
return -EINVAL;
- if (ul_map_attr & DSP_MAP_DIR_MASK) {
- attrs = ul_map_attr;
+ if (map_attr & DSP_MAP_DIR_MASK) {
+ attrs = map_attr;
} else {
/* Assign default attributes */
- attrs = ul_map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
+ attrs = map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
}
/* Take mapping properties */
if (attrs & DSP_MAPBIGENDIAN)
@@ -1521,8 +1515,8 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
hw_attrs.donotlockmpupage = 0;
if (attrs & DSP_MAPVMALLOCADDR) {
- return mem_map_vmalloc(dev_ctxt, ul_mpu_addr, virt_addr,
- ul_num_bytes, &hw_attrs);
+ return mem_map_vmalloc(dev_ctxt, mpu_addr, virt_addr,
+ num_bytes, &hw_attrs);
}
/*
* Do OS-specific user-va to pa translation.
@@ -1530,50 +1524,47 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
* Pass the translated pa to pte_update.
*/
if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_ctxt, ul_mpu_addr, virt_addr,
- ul_num_bytes, &hw_attrs);
+ status = pte_update(dev_ctxt, mpu_addr, virt_addr,
+ num_bytes, &hw_attrs);
goto func_cont;
}
/*
- * Important Note: ul_mpu_addr is mapped from user application process
+ * Important Note: mpu_addr is mapped from user application process
* to current process - it must lie completely within the current
* virtual memory address space in order to be of use to us here!
*/
down_read(&mm->mmap_sem);
- vma = find_vma(mm, ul_mpu_addr);
+ vma = find_vma(mm, mpu_addr);
if (vma)
dev_dbg(bridge,
- "VMAfor UserBuf: ul_mpu_addr=%x, ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
+ "VMAfor UserBuf: mpu_addr=%x, num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", mpu_addr,
+ num_bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
/*
* It is observed that under some circumstances, the user buffer is
* spread across several VMAs. So loop through and check if the entire
* user buffer is covered
*/
- while ((vma) && (ul_mpu_addr + ul_num_bytes > vma->vm_end)) {
+ while ((vma) && (mpu_addr + num_bytes > vma->vm_end)) {
/* jump to the next VMA region */
vma = find_vma(mm, vma->vm_end + 1);
dev_dbg(bridge,
- "VMA for UserBuf ul_mpu_addr=%x ul_num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
- ul_num_bytes, vma->vm_start, vma->vm_end,
- vma->vm_flags);
+ "VMA for UserBuf mpu_addr=%x num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", mpu_addr,
+ num_bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
}
if (!vma) {
pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
- __func__, ul_mpu_addr, ul_num_bytes);
+ __func__, mpu_addr, num_bytes);
status = -EINVAL;
up_read(&mm->mmap_sem);
goto func_cont;
}
if (vma->vm_flags & VM_IO) {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
- mpu_addr = ul_mpu_addr;
+ num_usr_pgs = num_bytes / PG_SIZE4K;
/* Get the physical addresses for user buffer */
for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
@@ -1602,12 +1593,12 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
pa += HW_PAGE_SIZE4KB;
}
} else {
- num_usr_pgs = ul_num_bytes / PG_SIZE4K;
+ num_usr_pgs = num_bytes / PG_SIZE4K;
if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
write = 1;
for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pg_num = get_user_pages(curr_task, mm, ul_mpu_addr, 1,
+ pg_num = get_user_pages(curr_task, mm, mpu_addr, 1,
write, 1, &mapped_page, NULL);
if (pg_num > 0) {
if (page_count(mapped_page) < 1) {
@@ -1627,15 +1618,15 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
mapped_pages[pg_i] = mapped_page;
va += HW_PAGE_SIZE4KB;
- ul_mpu_addr += HW_PAGE_SIZE4KB;
+ mpu_addr += HW_PAGE_SIZE4KB;
} else {
pr_err("DSPBRIDGE: get_user_pages FAILED,"
"MPU addr = 0x%x,"
"vma->vm_flags = 0x%lx,"
"get_user_pages Err"
"Value = %d, Buffer"
- "size=0x%x\n", ul_mpu_addr,
- vma->vm_flags, pg_num, ul_num_bytes);
+ "size=0x%x\n", mpu_addr,
+ vma->vm_flags, pg_num, num_bytes);
status = -EPERM;
break;
}
--
1.7.8.6
* [PATCH v2 07/15] tidspbridge: tiomap3430: Remove unneeded local variables
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Several local variables just hold copies of function arguments.
Remove them and use the function arguments directly.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 60 +++++++++++-------------
1 files changed, 28 insertions(+), 32 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 3dfb663..2c5be89 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1308,23 +1308,21 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
u32 pte_addr_l2 = 0;
u32 rem_bytes;
u32 rem_bytes_l2;
- u32 va_curr;
int status = 0;
struct pg_table_attrs *pt = dev_ctxt->pt_attrs;
- va_curr = virt_addr;
rem_bytes = num_bytes;
rem_bytes_l2 = 0;
l1_base_va = pt->l1_base_va;
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, virt_addr);
dev_dbg(bridge, "%s dev_ctxt %p, va %x, NumBytes %x l1_base_va %x, "
"pte_addr_l1 %x\n", __func__, dev_ctxt, virt_addr,
num_bytes, l1_base_va, pte_addr_l1);
while (rem_bytes && !status) {
- u32 va_curr_orig = va_curr;
+ u32 virt_addr_orig = virt_addr;
/* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, virt_addr);
pte_val = *(u32 *) pte_addr_l1;
pte_size = hw_mmu_pte_size_l1(pte_val);
@@ -1345,7 +1343,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
* page, and the size of VA space that needs to be
* cleared on this L2 page
*/
- pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, va_curr);
+ pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, virt_addr);
pte_count = pte_addr_l2 & (HW_MMU_COARSE_PAGE_SIZE - 1);
pte_count = (HW_MMU_COARSE_PAGE_SIZE - pte_count) / sizeof(u32);
if (rem_bytes < (pte_count * PG_SIZE4K))
@@ -1363,9 +1361,9 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
while (rem_bytes_l2 && !status) {
pte_val = *(u32 *) pte_addr_l2;
pte_size = hw_mmu_pte_size_l2(pte_val);
- /* va_curr aligned to pte_size? */
+ /* virt_addr aligned to pte_size? */
if (pte_size == 0 || rem_bytes_l2 < pte_size ||
- va_curr & (pte_size - 1)) {
+ virt_addr & (pte_size - 1)) {
status = -EPERM;
break;
}
@@ -1373,14 +1371,14 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
num_bytes);
- if (hw_mmu_pte_clear(pte_addr_l2, va_curr, pte_size)) {
+ if (hw_mmu_pte_clear(pte_addr_l2, virt_addr, pte_size)) {
status = -EPERM;
goto EXIT_LOOP;
}
status = 0;
rem_bytes_l2 -= pte_size;
- va_curr += pte_size;
+ virt_addr += pte_size;
pte_addr_l2 += (pte_size >> 12) * sizeof(u32);
}
spin_lock(&pt->pg_lock);
@@ -1390,7 +1388,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
/*
* Clear the L1 PTE pointing to the L2 PT
*/
- if (!hw_mmu_pte_clear(l1_base_va, va_curr_orig,
+ if (!hw_mmu_pte_clear(l1_base_va, virt_addr_orig,
HW_MMU_COARSE_PAGE_SIZE))
status = 0;
else {
@@ -1406,10 +1404,10 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
spin_unlock(&pt->pg_lock);
continue;
skip_coarse_page:
- /* va_curr aligned to pte_size? */
+ /* virt_addr aligned to pte_size? */
/* pte_size = 1 MB or 16 MB */
if (pte_size == 0 || rem_bytes < pte_size ||
- va_curr & (pte_size - 1)) {
+ virt_addr & (pte_size - 1)) {
status = -EPERM;
break;
}
@@ -1417,10 +1415,10 @@ skip_coarse_page:
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
num_bytes);
- if (!hw_mmu_pte_clear(l1_base_va, va_curr, pte_size)) {
+ if (!hw_mmu_pte_clear(l1_base_va, virt_addr, pte_size)) {
status = 0;
rem_bytes -= pte_size;
- va_curr += pte_size;
+ virt_addr += pte_size;
} else {
status = -EPERM;
goto EXIT_LOOP;
@@ -1433,8 +1431,8 @@ skip_coarse_page:
EXIT_LOOP:
flush_all(dev_ctxt);
dev_dbg(bridge,
- "%s: va_curr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
- " rem_bytes_l2 %x status %x\n", __func__, va_curr, pte_addr_l1,
+ "%s: virt_addr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
+ " rem_bytes_l2 %x status %x\n", __func__, virt_addr, pte_addr_l1,
pte_addr_l2, rem_bytes, rem_bytes_l2, status);
return status;
}
@@ -1458,11 +1456,9 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
u32 write = 0;
- u32 num_usr_pgs = 0;
- struct page *mapped_page, *pg;
+ u32 num_usr_pgs;
+ struct page *pg;
s32 pg_num;
- u32 va = virt_addr;
- struct task_struct *curr_task = current;
u32 pg_i = 0;
u32 pa;
@@ -1584,11 +1580,11 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
}
status = pte_set(dev_ctxt->pt_attrs, pa,
- va, HW_PAGE_SIZE4KB, &hw_attrs);
+ virt_addr, HW_PAGE_SIZE4KB, &hw_attrs);
if (status)
break;
- va += HW_PAGE_SIZE4KB;
+ virt_addr += HW_PAGE_SIZE4KB;
mpu_addr += HW_PAGE_SIZE4KB;
pa += HW_PAGE_SIZE4KB;
}
@@ -1598,26 +1594,26 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
write = 1;
for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pg_num = get_user_pages(curr_task, mm, mpu_addr, 1,
- write, 1, &mapped_page, NULL);
+ pg_num = get_user_pages(current, mm, mpu_addr, 1,
+ write, 1, &pg, NULL);
if (pg_num > 0) {
- if (page_count(mapped_page) < 1) {
+ if (page_count(pg) < 1) {
pr_err("Bad page count after doing"
"get_user_pages on"
"user buffer\n");
- bad_page_dump(page_to_phys(mapped_page),
- mapped_page);
+ bad_page_dump(page_to_phys(pg), pg);
}
status = pte_set(dev_ctxt->pt_attrs,
- page_to_phys(mapped_page), va,
- HW_PAGE_SIZE4KB, &hw_attrs);
+ page_to_phys(pg),
+ virt_addr, HW_PAGE_SIZE4KB,
+ &hw_attrs);
if (status)
break;
if (mapped_pages)
- mapped_pages[pg_i] = mapped_page;
+ mapped_pages[pg_i] = pg;
- va += HW_PAGE_SIZE4KB;
+ virt_addr += HW_PAGE_SIZE4KB;
mpu_addr += HW_PAGE_SIZE4KB;
} else {
pr_err("DSPBRIDGE: get_user_pages FAILED,"
--
1.7.8.6
* [PATCH v2 08/15] tidspbridge: Fix VM_PFNMAP mapping
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
VMAs marked with the VM_PFNMAP flag have no struct page associated with
the memory PFNs. Don't call get_page()/put_page() on the pages
supposedly associated with the PFNs.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 30 +++--
.../staging/tidspbridge/include/dspbridge/drv.h | 1 +
.../tidspbridge/include/dspbridge/dspdefs.h | 9 +-
drivers/staging/tidspbridge/rmgr/proc.c | 119 ++++++++++----------
4 files changed, 84 insertions(+), 75 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 2c5be89..cc538ea 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1262,7 +1262,8 @@ static void bad_page_dump(u32 pa, struct page *pg)
}
/* Release all pages associated with a physical addresses range. */
-static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
+static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
+ struct dmm_map_object *map_obj)
{
struct page *pg;
u32 num_pages;
@@ -1270,7 +1271,8 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
num_pages = pte_size / PAGE_SIZE;
for (; num_pages > 0; --num_pages, paddr += HW_PAGE_SIZE4KB) {
- if (!pfn_valid(__phys_to_pfn(paddr)))
+ if (!pfn_valid(__phys_to_pfn(paddr)) ||
+ (map_obj && map_obj->vm_flags & VM_PFNMAP))
continue;
pg = PHYS_TO_PAGE(paddr);
@@ -1295,7 +1297,8 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
* we clear consecutive PTEs until we unmap all the bytes
*/
static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 num_bytes)
+ u32 virt_addr, u32 num_bytes,
+ struct dmm_map_object *map_obj)
{
u32 l1_base_va;
u32 l2_base_va;
@@ -1369,7 +1372,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes);
+ num_bytes, map_obj);
if (hw_mmu_pte_clear(pte_addr_l2, virt_addr, pte_size)) {
status = -EPERM;
@@ -1413,7 +1416,7 @@ skip_coarse_page:
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes);
+ num_bytes, map_obj);
if (!hw_mmu_pte_clear(l1_base_va, virt_addr, pte_size)) {
status = 0;
@@ -1448,7 +1451,7 @@ EXIT_LOOP:
*/
static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
u32 mpu_addr, u32 virt_addr, u32 num_bytes,
- u32 map_attr, struct page **mapped_pages)
+ u32 map_attr, struct dmm_map_object *map_obj)
{
u32 attrs;
int status = 0;
@@ -1559,6 +1562,9 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
goto func_cont;
}
+ if (map_obj)
+ map_obj->vm_flags = vma->vm_flags;
+
if (vma->vm_flags & VM_IO) {
num_usr_pgs = num_bytes / PG_SIZE4K;
@@ -1571,7 +1577,8 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
"address is invalid\n");
break;
}
- if (pfn_valid(__phys_to_pfn(pa))) {
+ if (!(vma->vm_flags & VM_PFNMAP) &&
+ pfn_valid(__phys_to_pfn(pa))) {
pg = PHYS_TO_PAGE(pa);
get_page(pg);
if (page_count(pg) < 1) {
@@ -1610,8 +1617,8 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
if (status)
break;
- if (mapped_pages)
- mapped_pages[pg_i] = pg;
+ if (map_obj)
+ map_obj->pages[pg_i] = pg;
virt_addr += HW_PAGE_SIZE4KB;
mpu_addr += HW_PAGE_SIZE4KB;
@@ -1635,10 +1642,9 @@ func_cont:
* Roll out the mapped pages incase it failed in middle of
* mapping
*/
- if (pg_i) {
+ if (pg_i)
bridge_brd_mem_un_map(dev_ctxt, virt_addr,
- (pg_i * PG_SIZE4K));
- }
+ pg_i * PG_SIZE4K, map_obj);
status = -EPERM;
}
/*
diff --git a/drivers/staging/tidspbridge/include/dspbridge/drv.h b/drivers/staging/tidspbridge/include/dspbridge/drv.h
index b0c7708..492d216 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/drv.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/drv.h
@@ -88,6 +88,7 @@ struct dmm_map_object {
u32 mpu_addr;
u32 size;
u32 num_usr_pgs;
+ vm_flags_t vm_flags;
struct page **pages;
struct bridge_dma_map_info dma_info;
};
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
index ed32bf3..0d28436 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
@@ -39,6 +39,7 @@
/* Handle to Bridge driver's private device context. */
struct bridge_dev_context;
+struct dmm_map_object;
/*--------------------------------------------------------------------------- */
/* BRIDGE DRIVER FUNCTION TYPES */
@@ -176,7 +177,7 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* dev_ctxt, u32 ul_mpu_addr,
u32 virt_addr, u32 ul_num_bytes,
u32 map_attr,
- struct page **mapped_pages);
+ struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_mem_un_map ========
@@ -193,9 +194,9 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* dev_ctxt != NULL;
* Ensures:
*/
-typedef int(*fxn_brd_memunmap) (struct bridge_dev_context
- * dev_ctxt,
- u32 virt_addr, u32 ul_num_bytes);
+typedef int(*fxn_brd_memunmap) (struct bridge_dev_context *dev_ctxt,
+ u32 virt_addr, u32 ul_num_bytes,
+ struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_stop ========
diff --git a/drivers/staging/tidspbridge/rmgr/proc.c b/drivers/staging/tidspbridge/rmgr/proc.c
index 7e4f12f..4253980d 100644
--- a/drivers/staging/tidspbridge/rmgr/proc.c
+++ b/drivers/staging/tidspbridge/rmgr/proc.c
@@ -145,47 +145,64 @@ static struct dmm_map_object *add_mapping_info(struct process_context *pr_ctxt,
return map_obj;
}
-static int match_exact_map_obj(struct dmm_map_object *map_obj,
- u32 dsp_addr, u32 size)
+static void remove_mapping_information(struct process_context *pr_ctxt,
+ struct dmm_map_object *map_obj)
{
- if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
- pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
- __func__, dsp_addr, map_obj->size, size);
+ pr_debug("%s: match, deleting map info\n", __func__);
- return map_obj->dsp_addr == dsp_addr &&
- map_obj->size == size;
+ spin_lock(&pr_ctxt->dmm_map_lock);
+ list_del(&map_obj->link);
+ spin_unlock(&pr_ctxt->dmm_map_lock);
+
+ kfree(map_obj->dma_info.sg);
+ kfree(map_obj->pages);
+ kfree(map_obj);
}
-static void remove_mapping_information(struct process_context *pr_ctxt,
- u32 dsp_addr, u32 size)
+static struct dmm_map_object *
+find_mapping(struct process_context *pr_ctxt, u32 addr, u32 size,
+ int (*match)(struct dmm_map_object *, u32, u32))
{
struct dmm_map_object *map_obj;
- pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
- dsp_addr, size);
-
spin_lock(&pr_ctxt->dmm_map_lock);
list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
- pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
- __func__,
- map_obj->mpu_addr,
- map_obj->dsp_addr,
- map_obj->size);
-
- if (match_exact_map_obj(map_obj, dsp_addr, size)) {
- pr_debug("%s: match, deleting map info\n", __func__);
- list_del(&map_obj->link);
- kfree(map_obj->dma_info.sg);
- kfree(map_obj->pages);
- kfree(map_obj);
+ pr_debug("%s: candidate: mpu_addr 0x%x dsp_addr 0x%x size 0x%x\n",
+ __func__, map_obj->mpu_addr, map_obj->dsp_addr,
+ map_obj->size);
+
+ if (match(map_obj, addr, size)) {
+ pr_debug("%s: match!\n", __func__);
goto out;
}
- pr_debug("%s: candidate didn't match\n", __func__);
+
+ pr_debug("%s: no match!\n", __func__);
}
- pr_err("%s: failed to find given map info\n", __func__);
+ map_obj = NULL;
out:
spin_unlock(&pr_ctxt->dmm_map_lock);
+ return map_obj;
+}
+
+static int match_exact_map_obj(struct dmm_map_object *map_obj,
+ u32 dsp_addr, u32 size)
+{
+ if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
+ pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
+ __func__, dsp_addr, map_obj->size, size);
+
+ return map_obj->dsp_addr == dsp_addr &&
+ map_obj->size == size;
+}
+
+static struct dmm_map_object *
+find_dsp_mapping(struct process_context *pr_ctxt, u32 dsp_addr, u32 size)
+{
+ pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
+ dsp_addr, size);
+
+ return find_mapping(pr_ctxt, dsp_addr, size, match_exact_map_obj);
}
static int match_containing_map_obj(struct dmm_map_object *map_obj,
@@ -197,33 +214,13 @@ static int match_containing_map_obj(struct dmm_map_object *map_obj,
mpu_addr + size <= map_obj_end;
}
-static struct dmm_map_object *find_containing_mapping(
- struct process_context *pr_ctxt,
- u32 mpu_addr, u32 size)
+static struct dmm_map_object *
+find_mpu_mapping(struct process_context *pr_ctxt, u32 mpu_addr, u32 size)
{
- struct dmm_map_object *map_obj;
pr_debug("%s: looking for mpu_addr 0x%x size 0x%x\n", __func__,
- mpu_addr, size);
-
- spin_lock(&pr_ctxt->dmm_map_lock);
- list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
- pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
- __func__,
- map_obj->mpu_addr,
- map_obj->dsp_addr,
- map_obj->size);
- if (match_containing_map_obj(map_obj, mpu_addr, size)) {
- pr_debug("%s: match!\n", __func__);
- goto out;
- }
-
- pr_debug("%s: no match!\n", __func__);
- }
+ mpu_addr, size);
- map_obj = NULL;
-out:
- spin_unlock(&pr_ctxt->dmm_map_lock);
- return map_obj;
+ return find_mapping(pr_ctxt, mpu_addr, size, match_containing_map_obj);
}
static int find_first_page_in_cache(struct dmm_map_object *map_obj,
@@ -755,9 +752,9 @@ int proc_begin_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
mutex_lock(&proc_lock);
/* find requested memory are in cached mapping information */
- map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
if (!map_obj) {
- pr_err("%s: find_containing_mapping failed\n", __func__);
+ pr_err("%s: find_mpu_mapping failed\n", __func__);
status = -EFAULT;
goto no_map;
}
@@ -795,9 +792,9 @@ int proc_end_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
mutex_lock(&proc_lock);
/* find requested memory are in cached mapping information */
- map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
if (!map_obj) {
- pr_err("%s: find_containing_mapping failed\n", __func__);
+ pr_err("%s: find_mpu_mapping failed\n", __func__);
status = -EFAULT;
goto no_map;
}
@@ -1273,7 +1270,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
u32 size_align;
int status = 0;
struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
- struct dmm_map_object *map_obj;
+ struct dmm_map_object *map_obj = NULL;
u32 tmp_addr = 0;
#ifdef CONFIG_TIDSPBRIDGE_CACHE_LINE_CHECK
@@ -1318,13 +1315,14 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
else
status = (*p_proc_object->intf_fxns->brd_mem_map)
(p_proc_object->bridge_context, pa_align, va_align,
- size_align, ul_map_attr, map_obj->pages);
+ size_align, ul_map_attr, map_obj);
}
if (!status) {
/* Mapped address = MSB of VA | LSB of PA */
*pp_map_addr = (void *) tmp_addr;
} else {
- remove_mapping_information(pr_ctxt, tmp_addr, size_align);
+ if (map_obj)
+ remove_mapping_information(pr_ctxt, map_obj);
dmm_un_map_memory(dmm_mgr, va_align, &size_align);
}
mutex_unlock(&proc_lock);
@@ -1600,6 +1598,7 @@ int proc_un_map(void *hprocessor, void *map_addr,
{
int status = 0;
struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_map_object *map_obj;
struct dmm_object *dmm_mgr;
u32 va_align;
u32 size_align;
@@ -1625,8 +1624,10 @@ int proc_un_map(void *hprocessor, void *map_addr,
status = dmm_un_map_memory(dmm_mgr, (u32) va_align, &size_align);
/* Remove mapping from the page tables. */
if (!status) {
+ map_obj = find_dsp_mapping(pr_ctxt, (u32) map_addr, size_align);
status = (*p_proc_object->intf_fxns->brd_mem_un_map)
- (p_proc_object->bridge_context, va_align, size_align);
+ (p_proc_object->bridge_context, va_align, size_align,
+ map_obj);
}
if (status)
@@ -1637,7 +1638,7 @@ int proc_un_map(void *hprocessor, void *map_addr,
* from dmm_map_list, so that mapped memory resource tracking
* remains uptodate
*/
- remove_mapping_information(pr_ctxt, (u32) map_addr, size_align);
+ remove_mapping_information(pr_ctxt, map_obj);
unmap_failed:
mutex_unlock(&proc_lock);
--
1.7.8.6
* [PATCH v2 09/15] tidspbridge: Remove unused hw_mmu_map_attrs_t::donotlockmpupage field
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 4 ----
drivers/staging/tidspbridge/hw/hw_mmu.h | 1 -
2 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index cc538ea..5cd85dc 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1508,10 +1508,6 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
return -EINVAL;
}
}
- if (attrs & DSP_MAPDONOTLOCK)
- hw_attrs.donotlockmpupage = 1;
- else
- hw_attrs.donotlockmpupage = 0;
if (attrs & DSP_MAPVMALLOCADDR) {
return mem_map_vmalloc(dev_ctxt, mpu_addr, virt_addr,
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
index 7f960cd..37fc4d4 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.h
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -39,7 +39,6 @@ struct hw_mmu_map_attrs_t {
enum hw_endianism_t endianism;
enum hw_element_size_t element_size;
enum hw_mmu_mixed_size_t mixed_size;
- bool donotlockmpupage;
};
extern hw_status hw_mmu_enable(const void __iomem *base_address);
--
1.7.8.6
* [PATCH v2 10/15] ARM: OMAP: iommu: fix including iommu.h without IOMMU_API selected
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
From: Omar Ramirez Luna <omar.luna@linaro.org>
If this header is included without CONFIG_IOMMU_API selected,
compilation breaks:
arch/arm/plat-omap/include/plat/iommu.h:
In function 'dev_to_omap_iommu':
arch/arm/plat-omap/include/plat/iommu.h:148:
error: 'struct dev_archdata' has no member named 'iommu'
This occurs when hwmod includes iommu.h to get the attributes
structure.
Signed-off-by: Omar Ramirez Luna <omar.luna@linaro.org>
---
arch/arm/plat-omap/include/plat/iommu.h | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/arm/plat-omap/include/plat/iommu.h b/arch/arm/plat-omap/include/plat/iommu.h
index 88be3e6..e58d571 100644
--- a/arch/arm/plat-omap/include/plat/iommu.h
+++ b/arch/arm/plat-omap/include/plat/iommu.h
@@ -126,6 +126,7 @@ struct omap_iommu_arch_data {
struct omap_iommu *iommu_dev;
};
+#ifdef CONFIG_IOMMU_API
/**
* dev_to_omap_iommu() - retrieves an omap iommu object from a user device
* @dev: iommu client device
@@ -136,6 +137,7 @@ static inline struct omap_iommu *dev_to_omap_iommu(struct device *dev)
return arch_data->iommu_dev;
}
+#endif
/* IOMMU errors */
#define OMAP_IOMMU_ERR_TLB_MISS (1 << 0)
--
1.7.8.6
* [PATCH v2 11/15] arm: omap: iommu: Include required headers in iommu.h and iopgtable.h
From: Laurent Pinchart @ 2012-09-19 12:06 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Both headers make use of externally defined structures, types or
functions. Include the appropriate headers.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
arch/arm/plat-omap/include/plat/iommu.h | 4 ++++
arch/arm/plat-omap/include/plat/iopgtable.h | 2 ++
2 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/arch/arm/plat-omap/include/plat/iommu.h b/arch/arm/plat-omap/include/plat/iommu.h
index e58d571..6661eec 100644
--- a/arch/arm/plat-omap/include/plat/iommu.h
+++ b/arch/arm/plat-omap/include/plat/iommu.h
@@ -13,6 +13,10 @@
#ifndef __MACH_IOMMU_H
#define __MACH_IOMMU_H
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
struct iotlb_entry {
u32 da;
u32 pa;
diff --git a/arch/arm/plat-omap/include/plat/iopgtable.h b/arch/arm/plat-omap/include/plat/iopgtable.h
index 66a8139..ebb6e21 100644
--- a/arch/arm/plat-omap/include/plat/iopgtable.h
+++ b/arch/arm/plat-omap/include/plat/iopgtable.h
@@ -13,6 +13,8 @@
#ifndef __PLAT_OMAP_IOMMU_H
#define __PLAT_OMAP_IOMMU_H
+#include <linux/string.h>
+
/*
* "L2 table" address mask and size definitions.
*/
--
1.7.8.6
* [PATCH v2 12/15] tidspbridge: Use constants defined in IOMMU platform headers
From: Laurent Pinchart @ 2012-09-19 12:07 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/io_sm.c | 7 +-
drivers/staging/tidspbridge/core/tiomap3430.c | 38 +++-----
drivers/staging/tidspbridge/core/tiomap_io.c | 2 +-
drivers/staging/tidspbridge/core/ue_deh.c | 13 +--
drivers/staging/tidspbridge/hw/hw_defs.h | 6 --
drivers/staging/tidspbridge/hw/hw_mmu.c | 121 ++++++++++---------------
drivers/staging/tidspbridge/hw/hw_mmu.h | 19 +++--
7 files changed, 83 insertions(+), 123 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/io_sm.c b/drivers/staging/tidspbridge/core/io_sm.c
index 480a384..78e6fe3 100644
--- a/drivers/staging/tidspbridge/core/io_sm.c
+++ b/drivers/staging/tidspbridge/core/io_sm.c
@@ -361,10 +361,7 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
u32 pa_curr, va_curr, da_curr;
u32 bytes;
u32 all_bits = 0;
- u32 page_size[] = {
- HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
- HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
- };
+ u32 page_size[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K };
u32 map_attrs = DSP_MAPLITTLEENDIAN | DSP_MAPPHYSICALADDR |
DSP_MAPELEMSIZE32 | DSP_MAPDONOTLOCK;
@@ -616,7 +613,7 @@ int bridge_io_on_loaded(struct io_mgr *hio_mgr)
status = hio_mgr->intf_fxns->brd_mem_map(dc,
l4_peripheral_table[i].phys_addr,
l4_peripheral_table[i].dsp_virt_addr,
- HW_PAGE_SIZE4KB, map_attrs, NULL);
+ SZ_4K, map_attrs, NULL);
if (status)
goto free_eproc;
i++;
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 5cd85dc..7f1372e 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -62,10 +62,6 @@
#define TIHELEN_ACKTIMEOUT 10000
-#define MMU_SECTION_ADDR_MASK 0xFFF00000
-#define MMU_SSECTION_ADDR_MASK 0xFF000000
-#define MMU_LARGE_PAGE_MASK 0xFFFF0000
-#define MMU_SMALL_PAGE_MASK 0xFFFFF000
#define OMAP3_IVA2_BOOTADDR_MASK 0xFFFFFC00
#define PAGES_II_LVL_TABLE 512
#define PHYS_TO_PAGE(phys) pfn_to_page((phys) >> PAGE_SHIFT)
@@ -486,8 +482,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
/* Let DSP go */
dev_dbg(bridge, "%s Unreset\n", __func__);
/* Enable DSP MMU Interrupts */
- hw_mmu_event_enable(resources->dmmu_base,
- HW_MMU_ALL_INTERRUPTS);
+ hw_mmu_event_enable(resources->dmmu_base, OMAP_IOMMU_ERR_ALL);
/* release the RST1, DSP starts executing now .. */
(*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2_MASK, 0,
OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
@@ -1013,7 +1008,7 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
l1_base_va = pt->l1_base_va;
pg_tbl_va = l1_base_va;
- if ((size == HW_PAGE_SIZE64KB) || (size == HW_PAGE_SIZE4KB)) {
+ if (size == SZ_64K || size == SZ_4K) {
/* Find whether the L1 PTE points to a valid L2 PT */
pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
@@ -1061,7 +1056,7 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
}
if (!status) {
pg_tbl_va = l2_base_va;
- if (size == HW_PAGE_SIZE64KB)
+ if (size == SZ_64K)
pt->pg_info[l2_page_num].num_entries += 16;
else
pt->pg_info[l2_page_num].num_entries++;
@@ -1099,9 +1094,7 @@ static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
u32 va_curr = va;
u32 num_bytes = size;
int status = 0;
- u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
- HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
- };
+ u32 page_size[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K };
while (num_bytes && !status) {
/* To find the max. page size with which both PA & VA are
@@ -1228,10 +1221,10 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
break;
}
pa = pa_curr;
- num_of4k_pages = size_curr / HW_PAGE_SIZE4KB;
+ num_of4k_pages = size_curr / SZ_4K;
while (temp++ < num_of4k_pages) {
get_page(PHYS_TO_PAGE(pa));
- pa += HW_PAGE_SIZE4KB;
+ pa += SZ_4K;
}
status = pte_update(dev_ctxt, pa_curr, virt_addr +
(va_curr - mpu_addr), size_curr,
@@ -1270,7 +1263,7 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
num_pages = pte_size / PAGE_SIZE;
- for (; num_pages > 0; --num_pages, paddr += HW_PAGE_SIZE4KB) {
+ for (; num_pages > 0; --num_pages, paddr += SZ_4K) {
if (!pfn_valid(__phys_to_pfn(paddr)) ||
(map_obj && map_obj->vm_flags & VM_PFNMAP))
continue;
@@ -1582,14 +1575,14 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
bad_page_dump(pa, pg);
}
}
- status = pte_set(dev_ctxt->pt_attrs, pa,
- virt_addr, HW_PAGE_SIZE4KB, &hw_attrs);
+ status = pte_set(dev_ctxt->pt_attrs, pa, virt_addr,
+ SZ_4K, &hw_attrs);
if (status)
break;
- virt_addr += HW_PAGE_SIZE4KB;
- mpu_addr += HW_PAGE_SIZE4KB;
- pa += HW_PAGE_SIZE4KB;
+ virt_addr += SZ_4K;
+ mpu_addr += SZ_4K;
+ pa += SZ_4K;
}
} else {
num_usr_pgs = num_bytes / PG_SIZE4K;
@@ -1608,16 +1601,15 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
status = pte_set(dev_ctxt->pt_attrs,
page_to_phys(pg),
- virt_addr, HW_PAGE_SIZE4KB,
- &hw_attrs);
+ virt_addr, SZ_4K, &hw_attrs);
if (status)
break;
if (map_obj)
map_obj->pages[pg_i] = pg;
- virt_addr += HW_PAGE_SIZE4KB;
- mpu_addr += HW_PAGE_SIZE4KB;
+ virt_addr += SZ_4K;
+ mpu_addr += SZ_4K;
} else {
pr_err("DSPBRIDGE: get_user_pages FAILED,"
"MPU addr = 0x%x,"
diff --git a/drivers/staging/tidspbridge/core/tiomap_io.c b/drivers/staging/tidspbridge/core/tiomap_io.c
index 7fda10c..a76e37c 100644
--- a/drivers/staging/tidspbridge/core/tiomap_io.c
+++ b/drivers/staging/tidspbridge/core/tiomap_io.c
@@ -132,7 +132,7 @@ int read_ext_dsp_data(struct bridge_dev_context *dev_ctxt,
ul_shm_base_virt - ul_tlb_base_virt;
ul_shm_offset_virt +=
PG_ALIGN_HIGH(ul_ext_end - ul_dyn_ext_base +
- 1, HW_PAGE_SIZE64KB);
+ 1, SZ_64K);
dw_ext_prog_virt_mem -= ul_shm_offset_virt;
dw_ext_prog_virt_mem +=
(ul_ext_base - ul_dyn_ext_base);
diff --git a/drivers/staging/tidspbridge/core/ue_deh.c b/drivers/staging/tidspbridge/core/ue_deh.c
index 3d28b23..c6196c4 100644
--- a/drivers/staging/tidspbridge/core/ue_deh.c
+++ b/drivers/staging/tidspbridge/core/ue_deh.c
@@ -60,7 +60,7 @@ static irqreturn_t mmu_fault_isr(int irq, void *data)
}
hw_mmu_event_status(resources->dmmu_base, &event);
- if (event == HW_MMU_TRANSLATION_FAULT) {
+ if (event == OMAP_IOMMU_ERR_TRANS_FAULT) {
hw_mmu_fault_addr_read(resources->dmmu_base, &fault_addr);
dev_dbg(bridge, "%s: event=0x%x, fault_addr=0x%x\n", __func__,
event, fault_addr);
@@ -74,10 +74,9 @@ static irqreturn_t mmu_fault_isr(int irq, void *data)
/* Disable the MMU events, else once we clear it will
* start to raise INTs again */
hw_mmu_event_disable(resources->dmmu_base,
- HW_MMU_TRANSLATION_FAULT);
+ OMAP_IOMMU_ERR_TRANS_FAULT);
} else {
- hw_mmu_event_disable(resources->dmmu_base,
- HW_MMU_ALL_INTERRUPTS);
+ hw_mmu_event_disable(resources->dmmu_base, OMAP_IOMMU_ERR_ALL);
}
return IRQ_HANDLED;
}
@@ -189,8 +188,7 @@ static void mmu_fault_print_stack(struct bridge_dev_context *dev_context)
hw_mmu_tlb_flush_all(resources->dmmu_base);
hw_mmu_tlb_add(resources->dmmu_base,
- virt_to_phys(dummy_va_addr), fault_addr,
- HW_PAGE_SIZE4KB, 1,
+ virt_to_phys(dummy_va_addr), fault_addr, SZ_4K, 1,
&map_attrs, HW_SET, HW_SET);
dsp_clk_enable(DSP_CLK_GPT8);
@@ -198,8 +196,7 @@ static void mmu_fault_print_stack(struct bridge_dev_context *dev_context)
dsp_gpt_wait_overflow(DSP_CLK_GPT8, 0xfffffffe);
/* Clear MMU interrupt */
- hw_mmu_event_ack(resources->dmmu_base,
- HW_MMU_TRANSLATION_FAULT);
+ hw_mmu_event_ack(resources->dmmu_base, OMAP_IOMMU_ERR_TRANS_FAULT);
dump_dsp_stack(dev_context);
dsp_clk_disable(DSP_CLK_GPT8);
diff --git a/drivers/staging/tidspbridge/hw/hw_defs.h b/drivers/staging/tidspbridge/hw/hw_defs.h
index d5266d4..89e6c4f 100644
--- a/drivers/staging/tidspbridge/hw/hw_defs.h
+++ b/drivers/staging/tidspbridge/hw/hw_defs.h
@@ -19,12 +19,6 @@
#ifndef _HW_DEFS_H
#define _HW_DEFS_H
-/* Page size */
-#define HW_PAGE_SIZE4KB 0x1000
-#define HW_PAGE_SIZE64KB 0x10000
-#define HW_PAGE_SIZE1MB 0x100000
-#define HW_PAGE_SIZE16MB 0x1000000
-
/* hw_status: return type for HW API */
typedef long hw_status;
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
index 981794d..a5766f6 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.c
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -16,36 +16,21 @@
* WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*/
+#include <linux/err.h>
#include <linux/io.h>
-#include "MMURegAcM.h"
+#include <linux/types.h>
+#include <plat/iommu.h>
+#include <plat/iommu2.h>
+#include <plat/iopgtable.h>
+
#include <hw_defs.h>
#include <hw_mmu.h>
-#include <linux/types.h>
-#include <linux/err.h>
-#define MMU_BASE_VAL_MASK 0xFC00
-#define MMU_PAGE_MAX 3
-#define MMU_ELEMENTSIZE_MAX 3
-#define MMU_ADDR_MASK 0xFFFFF000
-#define MMU_TTB_MASK 0xFFFFC000
-#define MMU_SECTION_ADDR_MASK 0xFFF00000
-#define MMU_SSECTION_ADDR_MASK 0xFF000000
-#define MMU_PAGE_TABLE_MASK 0xFFFFFC00
-#define MMU_LARGE_PAGE_MASK 0xFFFF0000
-#define MMU_SMALL_PAGE_MASK 0xFFFFF000
+#include "MMURegAcM.h"
+
+#define IOPGD_TABLE_MASK (~((1UL << 10) - 1))
#define MMU_LOAD_TLB 0x00000001
-#define MMU_GFLUSH 0x60
-
-/*
- * hw_mmu_page_size_t: Enumerated Type used to specify the MMU Page Size(SLSS)
- */
-enum hw_mmu_page_size_t {
- HW_MMU_SECTION,
- HW_MMU_LARGE_PAGE,
- HW_MMU_SMALL_PAGE,
- HW_MMU_SUPERSECTION
-};
/*
* FUNCTION : mmu_set_cam_entry
@@ -151,18 +136,16 @@ static hw_status mmu_set_ram_entry(const void __iomem *base_address,
enum hw_element_size_t element_size,
enum hw_mmu_mixed_size_t mixed_size)
{
- hw_status status = 0;
u32 mmu_ram_reg;
- mmu_ram_reg = (physical_addr & MMU_ADDR_MASK);
+ mmu_ram_reg = (physical_addr & IOPAGE_MASK);
mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
(mixed_size << 6));
/* write values to register */
MMUMMU_RAM_WRITE_REGISTER32(base_address, mmu_ram_reg);
- return status;
-
+ return 0;
}
/* HW FUNCTIONS */
@@ -298,24 +281,24 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address,
hw_status status = 0;
u32 lock_reg;
u32 virtual_addr_tag;
- enum hw_mmu_page_size_t mmu_pg_size;
+ u32 mmu_pg_size;
/*Check the input Parameters */
switch (page_sz) {
- case HW_PAGE_SIZE4KB:
- mmu_pg_size = HW_MMU_SMALL_PAGE;
+ case SZ_4K:
+ mmu_pg_size = MMU_CAM_PGSZ_4K;
break;
- case HW_PAGE_SIZE64KB:
- mmu_pg_size = HW_MMU_LARGE_PAGE;
+ case SZ_64K:
+ mmu_pg_size = MMU_CAM_PGSZ_64K;
break;
- case HW_PAGE_SIZE1MB:
- mmu_pg_size = HW_MMU_SECTION;
+ case SZ_1M:
+ mmu_pg_size = MMU_CAM_PGSZ_1M;
break;
- case HW_PAGE_SIZE16MB:
- mmu_pg_size = HW_MMU_SUPERSECTION;
+ case SZ_16M:
+ mmu_pg_size = MMU_CAM_PGSZ_16M;
break;
default:
@@ -325,7 +308,7 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address,
lock_reg = MMUMMU_LOCK_READ_REGISTER32(base_address);
/* Generate the 20-bit tag from virtual address */
- virtual_addr_tag = ((virtual_addr & MMU_ADDR_MASK) >> 12);
+ virtual_addr_tag = ((virtual_addr & IOPAGE_MASK) >> 12);
/* Write the fields in the CAM Entry Register */
mmu_set_cam_entry(base_address, mmu_pg_size, preserved_bit, valid_bit,
@@ -359,58 +342,54 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
s32 num_entries = 1;
switch (page_sz) {
- case HW_PAGE_SIZE4KB:
+ case SZ_4K:
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr &
- MMU_SMALL_PAGE_MASK);
+ virtual_addr & IOPAGE_MASK);
pte_val =
- ((physical_addr & MMU_SMALL_PAGE_MASK) |
+ ((physical_addr & IOPAGE_MASK) |
(map_attrs->endianism << 9) | (map_attrs->
element_size << 4) |
- (map_attrs->mixed_size << 11) | 2);
+ (map_attrs->mixed_size << 11) | IOPTE_SMALL);
break;
- case HW_PAGE_SIZE64KB:
+ case SZ_64K:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr &
- MMU_LARGE_PAGE_MASK);
+ virtual_addr & IOLARGE_MASK);
pte_val =
- ((physical_addr & MMU_LARGE_PAGE_MASK) |
+ ((physical_addr & IOLARGE_MASK) |
(map_attrs->endianism << 9) | (map_attrs->
element_size << 4) |
- (map_attrs->mixed_size << 11) | 1);
+ (map_attrs->mixed_size << 11) | IOPTE_LARGE);
break;
- case HW_PAGE_SIZE1MB:
+ case SZ_1M:
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr &
- MMU_SECTION_ADDR_MASK);
+ virtual_addr & IOSECTION_MASK);
pte_val =
- ((((physical_addr & MMU_SECTION_ADDR_MASK) |
+ ((((physical_addr & IOSECTION_MASK) |
(map_attrs->endianism << 15) | (map_attrs->
element_size << 10) |
- (map_attrs->mixed_size << 17)) & ~0x40000) | 0x2);
+ (map_attrs->mixed_size << 17)) & ~0x40000) |
+ IOPGD_SECTION);
break;
- case HW_PAGE_SIZE16MB:
+ case SZ_16M:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr &
- MMU_SSECTION_ADDR_MASK);
+ virtual_addr & IOSUPER_MASK);
pte_val =
- (((physical_addr & MMU_SSECTION_ADDR_MASK) |
+ (((physical_addr & IOSUPER_MASK) |
(map_attrs->endianism << 15) | (map_attrs->
element_size << 10) |
(map_attrs->mixed_size << 17)
- ) | 0x40000 | 0x2);
+ ) | 0x40000 | IOPGD_SUPER);
break;
case HW_MMU_COARSE_PAGE_SIZE:
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr &
- MMU_SECTION_ADDR_MASK);
- pte_val = (physical_addr & MMU_PAGE_TABLE_MASK) | 1;
+ virtual_addr & IOPGD_TABLE_MASK);
+ pte_val = (physical_addr & IOPGD_TABLE_MASK) | IOPGD_TABLE;
break;
default:
@@ -430,31 +409,27 @@ hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtual_addr, u32 page_size)
s32 num_entries = 1;
switch (page_size) {
- case HW_PAGE_SIZE4KB:
+ case SZ_4K:
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr &
- MMU_SMALL_PAGE_MASK);
+ virtual_addr & IOPAGE_MASK);
break;
- case HW_PAGE_SIZE64KB:
+ case SZ_64K:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr &
- MMU_LARGE_PAGE_MASK);
+ virtual_addr & IOLARGE_MASK);
break;
- case HW_PAGE_SIZE1MB:
+ case SZ_1M:
case HW_MMU_COARSE_PAGE_SIZE:
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr &
- MMU_SECTION_ADDR_MASK);
+ virtual_addr & IOSECTION_MASK);
break;
- case HW_PAGE_SIZE16MB:
+ case SZ_16M:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr &
- MMU_SSECTION_ADDR_MASK);
+ virtual_addr & IOSUPER_MASK);
break;
default:
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
index 37fc4d4..b034f28 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.h
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -20,10 +20,15 @@
#define _HW_MMU_H
#include <linux/types.h>
+#include <plat/iommu.h>
+#include <plat/iommu2.h>
-/* Bitmasks for interrupt sources */
-#define HW_MMU_TRANSLATION_FAULT 0x2
-#define HW_MMU_ALL_INTERRUPTS 0x1F
+#define OMAP_IOMMU_ERR_ALL \
+ (OMAP_IOMMU_ERR_TLB_MISS | \
+ OMAP_IOMMU_ERR_TRANS_FAULT | \
+ OMAP_IOMMU_ERR_EMU_MISS | \
+ OMAP_IOMMU_ERR_TBLWALK_FAULT | \
+ OMAP_IOMMU_ERR_MULTIHIT_FAULT)
#define HW_MMU_COARSE_PAGE_SIZE 0x400
@@ -136,9 +141,9 @@ static inline u32 hw_mmu_pte_size_l1(u32 pte_val)
if ((pte_val & 0x3) == 0x2) {
if (pte_val & (1 << 18))
- pte_size = HW_PAGE_SIZE16MB;
+ pte_size = SZ_16M;
else
- pte_size = HW_PAGE_SIZE1MB;
+ pte_size = SZ_1M;
}
return pte_size;
@@ -149,9 +154,9 @@ static inline u32 hw_mmu_pte_size_l2(u32 pte_val)
u32 pte_size = 0;
if (pte_val & 0x2)
- pte_size = HW_PAGE_SIZE4KB;
+ pte_size = SZ_4K;
else if (pte_val & 0x1)
- pte_size = HW_PAGE_SIZE64KB;
+ pte_size = SZ_64K;
return pte_size;
}
--
1.7.8.6
* [PATCH v2 13/15] tidspbridge: Simplify pte_update and mem_map_vmalloc functions
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
` (11 preceding siblings ...)
2012-09-19 12:07 ` [PATCH v2 12/15] tidspbridge: Use constants defined in IOMMU platform headers Laurent Pinchart
@ 2012-09-19 12:07 ` Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 14/15] tidspbridge: Use correct types to describe physical, MPU, DSP addresses Laurent Pinchart
` (2 subsequent siblings)
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:07 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 148 +++++++++++-------------
1 files changed, 68 insertions(+), 80 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 7f1372e..7d074fc 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1079,47 +1079,46 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
return status;
}
+static unsigned max_alignment(u32 addr, u32 size)
+{
+ unsigned pagesize[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K, };
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(pagesize); i++) {
+ if ((addr & (pagesize[i] - 1)) == 0 && size >= pagesize[i])
+ return pagesize[i];
+ }
+
+ return 0;
+}
+
/*
* ======== pte_update ========
* This function calculates the optimum page-aligned addresses and sizes
* Caller must pass page-aligned values
*/
-static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa,
- u32 va, u32 size,
- struct hw_mmu_map_attrs_t *map_attrs)
+static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa, u32 va,
+ u32 size, struct hw_mmu_map_attrs_t *map_attrs)
{
- u32 i;
- u32 all_bits;
- u32 pa_curr = pa;
- u32 va_curr = va;
- u32 num_bytes = size;
- int status = 0;
- u32 page_size[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K };
-
- while (num_bytes && !status) {
+ while (size) {
/* To find the max. page size with which both PA & VA are
* aligned */
- all_bits = pa_curr | va_curr;
+ unsigned int ent_sz = max_alignment(va | pa, size);
+ int ret;
- for (i = 0; i < 4; i++) {
- if ((num_bytes >= page_size[i]) && ((all_bits &
- (page_size[i] -
- 1)) == 0)) {
- status =
- pte_set(dev_ctxt->pt_attrs, pa_curr,
- va_curr, page_size[i], map_attrs);
- pa_curr += page_size[i];
- va_curr += page_size[i];
- num_bytes -= page_size[i];
- /* Don't try smaller sizes. Hopefully we have
- * reached an address aligned to a bigger page
- * size */
- break;
- }
- }
+ if (WARN_ON(ent_sz == 0))
+ return -EINVAL;
+
+ ret = pte_set(dev_ctxt->pt_attrs, pa, va, ent_sz, map_attrs);
+ if (ret < 0)
+ return ret;
+
+ pa += ent_sz;
+ va += ent_sz;
+ size -= ent_sz;
}
- return status;
+ return 0;
}
/*
@@ -1167,70 +1166,58 @@ static inline void flush_all(struct bridge_dev_context *dev_ctxt)
}
/* Memory map kernel VA -- memory allocated with vmalloc */
-static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
- u32 mpu_addr, u32 virt_addr, u32 num_bytes,
+static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt, u32 mpu_addr,
+ u32 virt_addr, size_t num_bytes,
struct hw_mmu_map_attrs_t *hw_attrs)
{
- int status = 0;
- struct page *page[1];
- u32 i;
- u32 pa_curr;
- u32 pa_next;
- u32 va_curr;
- u32 size_curr;
- u32 num_pages;
- u32 pa;
- u32 num_of4k_pages;
- u32 temp = 0;
+ struct page *page_next;
+ int ret;
/*
* Do Kernel va to pa translation.
* Combine physically contiguous regions to reduce TLBs.
* Pass the translated pa to pte_update.
*/
- num_pages = num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
- i = 0;
- va_curr = mpu_addr;
- page[0] = vmalloc_to_page((void *)va_curr);
- pa_next = page_to_phys(page[0]);
- while (!status && (i < num_pages)) {
- /*
- * Reuse pa_next from the previous iteraion to avoid
- * an extra va2pa call
- */
- pa_curr = pa_next;
- size_curr = PAGE_SIZE;
+ page_next = vmalloc_to_page((void *)mpu_addr);
+
+ while (num_bytes > 0) {
+ struct page *page = page_next;
+ size_t chunk_size = PAGE_SIZE;
+ u32 num_pages = 1;
+
+ get_page(page);
+
/*
- * If the next page is physically contiguous,
- * map it with the current one by increasing
- * the size of the region to be mapped
+ * If the next page is physically contiguous, map it with the
+ * current one by increasing the size of the region to be mapped.
*/
- while (++i < num_pages) {
- page[0] =
- vmalloc_to_page((void *)(va_curr + size_curr));
- pa_next = page_to_phys(page[0]);
-
- if (pa_next == (pa_curr + size_curr))
- size_curr += PAGE_SIZE;
- else
+ while (chunk_size < num_bytes) {
+ page_next =
+ vmalloc_to_page((void *)mpu_addr + chunk_size);
+ if (page_next != page + num_pages)
break;
+ chunk_size += PAGE_SIZE;
+ num_pages++;
+
+ get_page(page_next);
}
- if (pa_next == 0) {
- status = -ENOMEM;
+
+ if (page_next == NULL) {
+ ret = -ENOMEM;
break;
}
- pa = pa_curr;
- num_of4k_pages = size_curr / SZ_4K;
- while (temp++ < num_of4k_pages) {
- get_page(PHYS_TO_PAGE(pa));
- pa += SZ_4K;
- }
- status = pte_update(dev_ctxt, pa_curr, virt_addr +
- (va_curr - mpu_addr), size_curr,
- hw_attrs);
- va_curr += size_curr;
+
+ ret = pte_update(dev_ctxt, page_to_phys(page), virt_addr,
+ chunk_size, hw_attrs);
+ if (ret)
+ break;
+
+ mpu_addr += chunk_size;
+ virt_addr += chunk_size;
+ num_bytes -= chunk_size;
}
+
/*
* In any case, flush the TLB
* This is called from here instead from pte_update to avoid unnecessary
@@ -1238,8 +1225,9 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
* region
*/
flush_all(dev_ctxt);
- dev_dbg(bridge, "%s status %x\n", __func__, status);
- return status;
+ dev_dbg(bridge, "%s status %d\n", __func__, ret);
+
+ return ret;
}
static void bad_page_dump(u32 pa, struct page *pg)
--
1.7.8.6
* [PATCH v2 14/15] tidspbridge: Use correct types to describe physical, MPU, DSP addresses
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
` (12 preceding siblings ...)
2012-09-19 12:07 ` [PATCH v2 13/15] tidspbridge: Simplify pte_update and mem_map_vmalloc functions Laurent Pinchart
@ 2012-09-19 12:07 ` Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 15/15] tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot bitfield Laurent Pinchart
2012-09-21 16:18 ` [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Ramirez Luna, Omar
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:07 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Physical addresses: name them 'pa' and use the phys_addr_t type
MPU virtual addresses: name them 'va' and use the unsigned long type
DSP virtual addresses: name them 'da' and use the u32 type
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 180 ++++++++++----------
drivers/staging/tidspbridge/hw/hw_mmu.c | 74 ++++-----
drivers/staging/tidspbridge/hw/hw_mmu.h | 24 +--
.../tidspbridge/include/dspbridge/dspdefs.h | 24 ++--
4 files changed, 140 insertions(+), 162 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 7d074fc..c9d240c 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -988,8 +988,8 @@ static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
* This function calculates PTE address (MPU virtual) to be updated
* It also manages the L2 page tables
*/
-static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *attrs)
+static int pte_set(struct pg_table_attrs *pt, phys_addr_t pa, u32 da,
+ size_t size, struct hw_mmu_map_attrs_t *attrs)
{
u32 i;
u32 pte_val;
@@ -1010,7 +1010,7 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
pg_tbl_va = l1_base_va;
if (size == SZ_64K || size == SZ_4K) {
/* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, da);
if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
pte_val = *(u32 *) pte_addr_l1;
pte_size = hw_mmu_pte_size_l1(pte_val);
@@ -1043,7 +1043,7 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
/* Endianness attributes are ignored for
* HW_MMU_COARSE_PAGE_SIZE */
status =
- hw_mmu_pte_set(l1_base_va, l2_base_pa, va,
+ hw_mmu_pte_set(l1_base_va, l2_base_pa, da,
HW_MMU_COARSE_PAGE_SIZE,
attrs);
} else {
@@ -1068,18 +1068,18 @@ static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
spin_unlock(&pt->pg_lock);
}
if (!status) {
- dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, va %x, size %x\n",
- pg_tbl_va, pa, va, size);
+ dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, da %x, size %x\n",
+ pg_tbl_va, pa, da, size);
dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
"mixed_size %x\n", attrs->endianism,
attrs->element_size, attrs->mixed_size);
- status = hw_mmu_pte_set(pg_tbl_va, pa, va, size, attrs);
+ status = hw_mmu_pte_set(pg_tbl_va, pa, da, size, attrs);
}
return status;
}
-static unsigned max_alignment(u32 addr, u32 size)
+static unsigned max_alignment(u32 addr, size_t size)
{
unsigned pagesize[] = { SZ_16M, SZ_1M, SZ_64K, SZ_4K, };
unsigned int i;
@@ -1097,24 +1097,24 @@ static unsigned max_alignment(u32 addr, u32 size)
* This function calculates the optimum page-aligned addresses and sizes
* Caller must pass page-aligned values
*/
-static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa, u32 va,
- u32 size, struct hw_mmu_map_attrs_t *map_attrs)
+static int pte_update(struct bridge_dev_context *dev_ctxt, phys_addr_t pa, u32 da,
+ size_t size, struct hw_mmu_map_attrs_t *map_attrs)
{
while (size) {
/* To find the max. page size with which both PA & VA are
* aligned */
- unsigned int ent_sz = max_alignment(va | pa, size);
+ unsigned int ent_sz = max_alignment(da | pa, size);
int ret;
if (WARN_ON(ent_sz == 0))
return -EINVAL;
- ret = pte_set(dev_ctxt->pt_attrs, pa, va, ent_sz, map_attrs);
+ ret = pte_set(dev_ctxt->pt_attrs, pa, da, ent_sz, map_attrs);
if (ret < 0)
return ret;
pa += ent_sz;
- va += ent_sz;
+ da += ent_sz;
size -= ent_sz;
}
@@ -1127,26 +1127,26 @@ static int pte_update(struct bridge_dev_context *dev_ctxt, u32 pa, u32 va,
* This function walks through the page tables to convert a userland
* virtual address to physical address
*/
-static u32 user_va2_pa(struct mm_struct *mm, u32 address)
+static u32 user_va2_pa(struct mm_struct *mm, unsigned long va)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
pte_t *ptep, pte;
- pgd = pgd_offset(mm, address);
+ pgd = pgd_offset(mm, va);
if (pgd_none(*pgd) || pgd_bad(*pgd))
return 0;
- pud = pud_offset(pgd, address);
+ pud = pud_offset(pgd, va);
if (pud_none(*pud) || pud_bad(*pud))
return 0;
- pmd = pmd_offset(pud, address);
+ pmd = pmd_offset(pud, va);
if (pmd_none(*pmd) || pmd_bad(*pmd))
return 0;
- ptep = pte_offset_map(pmd, address);
+ ptep = pte_offset_map(pmd, va);
if (ptep) {
pte = *ptep;
if (pte_present(pte))
@@ -1166,8 +1166,8 @@ static inline void flush_all(struct bridge_dev_context *dev_ctxt)
}
/* Memory map kernel VA -- memory allocated with vmalloc */
-static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt, u32 mpu_addr,
- u32 virt_addr, size_t num_bytes,
+static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
+ unsigned long va, u32 da, size_t bytes,
struct hw_mmu_map_attrs_t *hw_attrs)
{
struct page *page_next;
@@ -1178,9 +1178,9 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt, u32 mpu_addr,
* Combine physically contiguous regions to reduce TLBs.
* Pass the translated pa to pte_update.
*/
- page_next = vmalloc_to_page((void *)mpu_addr);
+ page_next = vmalloc_to_page((void *)va);
- while (num_bytes > 0) {
+ while (bytes > 0) {
struct page *page = page_next;
size_t chunk_size = PAGE_SIZE;
u32 num_pages = 1;
@@ -1191,9 +1191,8 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt, u32 mpu_addr,
* If the next page is physically contiguous, map it with the
* current one by increasing the size of the region to be mapped.
*/
- while (chunk_size < num_bytes) {
- page_next =
- vmalloc_to_page((void *)mpu_addr + chunk_size);
+ while (chunk_size < bytes) {
+ page_next = vmalloc_to_page((void *)va + chunk_size);
if (page_next != page + num_pages)
break;
@@ -1208,14 +1207,14 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt, u32 mpu_addr,
break;
}
- ret = pte_update(dev_ctxt, page_to_phys(page), virt_addr,
+ ret = pte_update(dev_ctxt, page_to_phys(page), da,
chunk_size, hw_attrs);
if (ret)
break;
- mpu_addr += chunk_size;
- virt_addr += chunk_size;
- num_bytes -= chunk_size;
+ va += chunk_size;
+ da += chunk_size;
+ bytes -= chunk_size;
}
/*
@@ -1243,7 +1242,7 @@ static void bad_page_dump(u32 pa, struct page *pg)
}
/* Release all pages associated with a physical addresses range. */
-static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
+static void bridge_release_pages(phys_addr_t pa, u32 pte_size, size_t bytes,
struct dmm_map_object *map_obj)
{
struct page *pg;
@@ -1251,17 +1250,17 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
num_pages = pte_size / PAGE_SIZE;
- for (; num_pages > 0; --num_pages, paddr += SZ_4K) {
- if (!pfn_valid(__phys_to_pfn(paddr)) ||
+ for (; num_pages > 0; --num_pages, pa += SZ_4K) {
+ if (!pfn_valid(__phys_to_pfn(pa)) ||
(map_obj && map_obj->vm_flags & VM_PFNMAP))
continue;
- pg = PHYS_TO_PAGE(paddr);
+ pg = PHYS_TO_PAGE(pa);
if (page_count(pg) < 1) {
pr_info("DSPBRIDGE: UNMAP function: "
"COUNT 0 FOR PA 0x%x, size = "
- "0x%x\n", paddr, num_bytes);
- bad_page_dump(paddr, pg);
+ "0x%x\n", pa, bytes);
+ bad_page_dump(pa, pg);
} else {
set_page_dirty(pg);
page_cache_release(pg);
@@ -1278,7 +1277,7 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
* we clear consecutive PTEs until we unmap all the bytes
*/
static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 num_bytes,
+ u32 da, size_t bytes,
struct dmm_map_object *map_obj)
{
u32 l1_base_va;
@@ -1295,18 +1294,18 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
int status = 0;
struct pg_table_attrs *pt = dev_ctxt->pt_attrs;
- rem_bytes = num_bytes;
+ rem_bytes = bytes;
rem_bytes_l2 = 0;
l1_base_va = pt->l1_base_va;
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, virt_addr);
- dev_dbg(bridge, "%s dev_ctxt %p, va %x, NumBytes %x l1_base_va %x, "
- "pte_addr_l1 %x\n", __func__, dev_ctxt, virt_addr,
- num_bytes, l1_base_va, pte_addr_l1);
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, da);
+ dev_dbg(bridge, "%s dev_ctxt %p, da %x, NumBytes %x l1_base_va %x, "
+ "pte_addr_l1 %x\n", __func__, dev_ctxt, da,
+ bytes, l1_base_va, pte_addr_l1);
while (rem_bytes && !status) {
- u32 virt_addr_orig = virt_addr;
+ u32 da_orig = da;
/* Find whether the L1 PTE points to a valid L2 PT */
- pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, virt_addr);
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, da);
pte_val = *(u32 *) pte_addr_l1;
pte_size = hw_mmu_pte_size_l1(pte_val);
@@ -1327,7 +1326,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
* page, and the size of VA space that needs to be
* cleared on this L2 page
*/
- pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, virt_addr);
+ pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, da);
pte_count = pte_addr_l2 & (HW_MMU_COARSE_PAGE_SIZE - 1);
pte_count = (HW_MMU_COARSE_PAGE_SIZE - pte_count) / sizeof(u32);
if (rem_bytes < (pte_count * PG_SIZE4K))
@@ -1345,24 +1344,24 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
while (rem_bytes_l2 && !status) {
pte_val = *(u32 *) pte_addr_l2;
pte_size = hw_mmu_pte_size_l2(pte_val);
- /* virt_addr aligned to pte_size? */
+ /* da aligned to pte_size? */
if (pte_size == 0 || rem_bytes_l2 < pte_size ||
- virt_addr & (pte_size - 1)) {
+ da & (pte_size - 1)) {
status = -EPERM;
break;
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes, map_obj);
+ bytes, map_obj);
- if (hw_mmu_pte_clear(pte_addr_l2, virt_addr, pte_size)) {
+ if (hw_mmu_pte_clear(pte_addr_l2, da, pte_size)) {
status = -EPERM;
goto EXIT_LOOP;
}
status = 0;
rem_bytes_l2 -= pte_size;
- virt_addr += pte_size;
+ da += pte_size;
pte_addr_l2 += (pte_size >> 12) * sizeof(u32);
}
spin_lock(&pt->pg_lock);
@@ -1372,7 +1371,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
/*
* Clear the L1 PTE pointing to the L2 PT
*/
- if (!hw_mmu_pte_clear(l1_base_va, virt_addr_orig,
+ if (!hw_mmu_pte_clear(l1_base_va, da_orig,
HW_MMU_COARSE_PAGE_SIZE))
status = 0;
else {
@@ -1388,21 +1387,21 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
spin_unlock(&pt->pg_lock);
continue;
skip_coarse_page:
- /* virt_addr aligned to pte_size? */
+ /* da aligned to pte_size? */
/* pte_size = 1 MB or 16 MB */
if (pte_size == 0 || rem_bytes < pte_size ||
- virt_addr & (pte_size - 1)) {
+ da & (pte_size - 1)) {
status = -EPERM;
break;
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes, map_obj);
+ bytes, map_obj);
- if (!hw_mmu_pte_clear(l1_base_va, virt_addr, pte_size)) {
+ if (!hw_mmu_pte_clear(l1_base_va, da, pte_size)) {
status = 0;
rem_bytes -= pte_size;
- virt_addr += pte_size;
+ da += pte_size;
} else {
status = -EPERM;
goto EXIT_LOOP;
@@ -1415,8 +1414,8 @@ skip_coarse_page:
EXIT_LOOP:
flush_all(dev_ctxt);
dev_dbg(bridge,
- "%s: virt_addr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
- " rem_bytes_l2 %x status %x\n", __func__, virt_addr, pte_addr_l1,
+ "%s: da %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
+ " rem_bytes_l2 %x status %x\n", __func__, da, pte_addr_l1,
pte_addr_l2, rem_bytes, rem_bytes_l2, status);
return status;
}
@@ -1431,7 +1430,7 @@ EXIT_LOOP:
* TODO: Disable MMU while updating the page tables (but that'll stall DSP)
*/
static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
- u32 mpu_addr, u32 virt_addr, u32 num_bytes,
+ unsigned long va, u32 da, size_t bytes,
u32 map_attr, struct dmm_map_object *map_obj)
{
u32 attrs;
@@ -1447,10 +1446,10 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
u32 pa;
dev_dbg(bridge,
- "%s hDevCtxt %p, pa %x, va %x, size %x, map_attr %x\n",
- __func__, dev_ctxt, mpu_addr, virt_addr, num_bytes,
+ "%s hDevCtxt %p, va %lx, da %x, size %x, map_attr %x\n",
+ __func__, dev_ctxt, va, da, bytes,
map_attr);
- if (num_bytes == 0)
+ if (bytes == 0)
return -EINVAL;
if (map_attr & DSP_MAP_DIR_MASK) {
@@ -1491,8 +1490,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
if (attrs & DSP_MAPVMALLOCADDR) {
- return mem_map_vmalloc(dev_ctxt, mpu_addr, virt_addr,
- num_bytes, &hw_attrs);
+ return mem_map_vmalloc(dev_ctxt, va, da, bytes, &hw_attrs);
}
/*
* Do OS-specific user-va to pa translation.
@@ -1500,40 +1498,40 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
* Pass the translated pa to pte_update.
*/
if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_ctxt, mpu_addr, virt_addr,
- num_bytes, &hw_attrs);
+ status = pte_update(dev_ctxt, (phys_addr_t)va, da,
+ bytes, &hw_attrs);
goto func_cont;
}
/*
- * Important Note: mpu_addr is mapped from user application process
+ * Important Note: va is mapped from user application process
* to current process - it must lie completely within the current
* virtual memory address space in order to be of use to us here!
*/
down_read(&mm->mmap_sem);
- vma = find_vma(mm, mpu_addr);
+ vma = find_vma(mm, va);
if (vma)
dev_dbg(bridge,
- "VMAfor UserBuf: mpu_addr=%x, num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", mpu_addr,
- num_bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
+ "VMAfor UserBuf: va=%lx, bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", va,
+ bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
/*
* It is observed that under some circumstances, the user buffer is
* spread across several VMAs. So loop through and check if the entire
* user buffer is covered
*/
- while ((vma) && (mpu_addr + num_bytes > vma->vm_end)) {
+ while ((vma) && (va + bytes > vma->vm_end)) {
/* jump to the next VMA region */
vma = find_vma(mm, vma->vm_end + 1);
dev_dbg(bridge,
- "VMA for UserBuf mpu_addr=%x num_bytes=%x, "
- "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", mpu_addr,
- num_bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
+ "VMA for UserBuf va=%lx bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", va,
+ bytes, vma->vm_start, vma->vm_end, vma->vm_flags);
}
if (!vma) {
- pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
- __func__, mpu_addr, num_bytes);
+ pr_err("%s: Failed to get VMA region for 0x%lx (%d)\n",
+ __func__, va, bytes);
status = -EINVAL;
up_read(&mm->mmap_sem);
goto func_cont;
@@ -1543,11 +1541,11 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
map_obj->vm_flags = vma->vm_flags;
if (vma->vm_flags & VM_IO) {
- num_usr_pgs = num_bytes / PG_SIZE4K;
+ num_usr_pgs = bytes / PG_SIZE4K;
/* Get the physical addresses for user buffer */
for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pa = user_va2_pa(mm, mpu_addr);
+ pa = user_va2_pa(mm, va);
if (!pa) {
status = -EPERM;
pr_err("DSPBRIDGE: VM_IO mapping physical"
@@ -1563,22 +1561,22 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
bad_page_dump(pa, pg);
}
}
- status = pte_set(dev_ctxt->pt_attrs, pa, virt_addr,
+ status = pte_set(dev_ctxt->pt_attrs, pa, da,
SZ_4K, &hw_attrs);
if (status)
break;
- virt_addr += SZ_4K;
- mpu_addr += SZ_4K;
+ da += SZ_4K;
+ va += SZ_4K;
pa += SZ_4K;
}
} else {
- num_usr_pgs = num_bytes / PG_SIZE4K;
+ num_usr_pgs = bytes / PG_SIZE4K;
if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
write = 1;
for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
- pg_num = get_user_pages(current, mm, mpu_addr, 1,
+ pg_num = get_user_pages(current, mm, va, 1,
write, 1, &pg, NULL);
if (pg_num > 0) {
if (page_count(pg) < 1) {
@@ -1588,24 +1586,24 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
bad_page_dump(page_to_phys(pg), pg);
}
status = pte_set(dev_ctxt->pt_attrs,
- page_to_phys(pg),
- virt_addr, SZ_4K, &hw_attrs);
+ page_to_phys(pg), da,
+ SZ_4K, &hw_attrs);
if (status)
break;
if (map_obj)
map_obj->pages[pg_i] = pg;
- virt_addr += SZ_4K;
- mpu_addr += SZ_4K;
+ da += SZ_4K;
+ va += SZ_4K;
} else {
pr_err("DSPBRIDGE: get_user_pages FAILED,"
- "MPU addr = 0x%x,"
+ "va = 0x%lx,"
"vma->vm_flags = 0x%lx,"
"get_user_pages Err"
"Value = %d, Buffer"
- "size=0x%x\n", mpu_addr,
- vma->vm_flags, pg_num, num_bytes);
+ "size=0x%x\n", va,
+ vma->vm_flags, pg_num, bytes);
status = -EPERM;
break;
}
@@ -1619,7 +1617,7 @@ func_cont:
* mapping
*/
if (pg_i)
- bridge_brd_mem_un_map(dev_ctxt, virt_addr,
+ bridge_brd_mem_un_map(dev_ctxt, da,
pg_i * PG_SIZE4K, map_obj);
status = -EPERM;
}
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
index a5766f6..34c2054 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.c
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -55,9 +55,9 @@
* Description : It indicates the TLB entry is valid entry or not
*
*
- * Identifier : virtual_addr_tag
+ * Identifier : da_tag
* Type : const u32
- * Description : virtual Address
+ * Description : device virtual Address
*
* RETURNS:
*
@@ -76,12 +76,12 @@ static hw_status mmu_set_cam_entry(const void __iomem *base_address,
const u32 page_sz,
const u32 preserved_bit,
const u32 valid_bit,
- const u32 virtual_addr_tag)
+ const u32 da_tag)
{
hw_status status = 0;
u32 mmu_cam_reg;
- mmu_cam_reg = (virtual_addr_tag << 12);
+ mmu_cam_reg = (da_tag << 12);
mmu_cam_reg = (mmu_cam_reg) | (page_sz) | (valid_bit << 2) |
(preserved_bit << 3);
@@ -100,8 +100,8 @@ static hw_status mmu_set_cam_entry(const void __iomem *base_address,
* Type : const u32
* Description : Base Address of instance of MMU module
*
- * Identifier : physical_addr
- * Type : const u32
+ * Identifier : pa
+ * Type : phys_addr_t
* Description : Physical Address to which the corresponding
* virtual Address shouldpoint
*
@@ -131,14 +131,14 @@ static hw_status mmu_set_cam_entry(const void __iomem *base_address,
* METHOD: : Check the Input parameters and set the RAM entry.
*/
static hw_status mmu_set_ram_entry(const void __iomem *base_address,
- const u32 physical_addr,
+ phys_addr_t pa,
enum hw_endianism_t endianism,
enum hw_element_size_t element_size,
enum hw_mmu_mixed_size_t mixed_size)
{
u32 mmu_ram_reg;
- mmu_ram_reg = (physical_addr & IOPAGE_MASK);
+ mmu_ram_reg = (pa & IOPAGE_MASK);
mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
(mixed_size << 6));
@@ -270,17 +270,14 @@ hw_status hw_mmu_twl_disable(const void __iomem *base_address)
return status;
}
-hw_status hw_mmu_tlb_add(const void __iomem *base_address,
- u32 physical_addr,
- u32 virtual_addr,
- u32 page_sz,
- u32 entry_num,
+hw_status hw_mmu_tlb_add(const void __iomem *base_address, phys_addr_t pa,
+ u32 da, u32 page_sz, u32 entry_num,
struct hw_mmu_map_attrs_t *map_attrs,
s8 preserved_bit, s8 valid_bit)
{
hw_status status = 0;
u32 lock_reg;
- u32 virtual_addr_tag;
+ u32 da_tag;
u32 mmu_pg_size;
/*Check the input Parameters */
@@ -308,15 +305,15 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address,
lock_reg = MMUMMU_LOCK_READ_REGISTER32(base_address);
/* Generate the 20-bit tag from virtual address */
- virtual_addr_tag = ((virtual_addr & IOPAGE_MASK) >> 12);
+ da_tag = ((da & IOPAGE_MASK) >> 12);
/* Write the fields in the CAM Entry Register */
mmu_set_cam_entry(base_address, mmu_pg_size, preserved_bit, valid_bit,
- virtual_addr_tag);
+ da_tag);
/* Write the different fields of the RAM Entry Register */
/* endianism of the page,Element Size of the page (8, 16, 32, 64 bit) */
- mmu_set_ram_entry(base_address, physical_addr, map_attrs->endianism,
+ mmu_set_ram_entry(base_address, pa, map_attrs->endianism,
map_attrs->element_size, map_attrs->mixed_size);
/* Update the MMU Lock Register */
@@ -332,9 +329,7 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address,
return status;
}
-hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
- u32 physical_addr,
- u32 virtual_addr,
+hw_status hw_mmu_pte_set(const u32 pg_tbl_va, phys_addr_t pa, u32 da,
u32 page_sz, struct hw_mmu_map_attrs_t *map_attrs)
{
hw_status status = 0;
@@ -343,10 +338,9 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
switch (page_sz) {
case SZ_4K:
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr & IOPAGE_MASK);
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOPAGE_MASK);
pte_val =
- ((physical_addr & IOPAGE_MASK) |
+ ((pa & IOPAGE_MASK) |
(map_attrs->endianism << 9) | (map_attrs->
element_size << 4) |
(map_attrs->mixed_size << 11) | IOPTE_SMALL);
@@ -354,20 +348,18 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
case SZ_64K:
num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr & IOLARGE_MASK);
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOLARGE_MASK);
pte_val =
- ((physical_addr & IOLARGE_MASK) |
+ ((pa & IOLARGE_MASK) |
(map_attrs->endianism << 9) | (map_attrs->
element_size << 4) |
(map_attrs->mixed_size << 11) | IOPTE_LARGE);
break;
case SZ_1M:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr & IOSECTION_MASK);
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSECTION_MASK);
pte_val =
- ((((physical_addr & IOSECTION_MASK) |
+ ((((pa & IOSECTION_MASK) |
(map_attrs->endianism << 15) | (map_attrs->
element_size << 10) |
(map_attrs->mixed_size << 17)) & ~0x40000) |
@@ -376,10 +368,9 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
case SZ_16M:
num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr & IOSUPER_MASK);
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSUPER_MASK);
pte_val =
- (((physical_addr & IOSUPER_MASK) |
+ (((pa & IOSUPER_MASK) |
(map_attrs->endianism << 15) | (map_attrs->
element_size << 10) |
(map_attrs->mixed_size << 17)
@@ -387,9 +378,8 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
break;
case HW_MMU_COARSE_PAGE_SIZE:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr & IOPGD_TABLE_MASK);
- pte_val = (physical_addr & IOPGD_TABLE_MASK) | IOPGD_TABLE;
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOPGD_TABLE_MASK);
+ pte_val = (pa & IOPGD_TABLE_MASK) | IOPGD_TABLE;
break;
default:
@@ -402,7 +392,7 @@ hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
return status;
}
-hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtual_addr, u32 page_size)
+hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 da, u32 page_size)
{
hw_status status = 0;
u32 pte_addr;
@@ -410,26 +400,22 @@ hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtual_addr, u32 page_size)
switch (page_size) {
case SZ_4K:
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr & IOPAGE_MASK);
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOPAGE_MASK);
break;
case SZ_64K:
num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
- virtual_addr & IOLARGE_MASK);
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOLARGE_MASK);
break;
case SZ_1M:
case HW_MMU_COARSE_PAGE_SIZE:
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr & IOSECTION_MASK);
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSECTION_MASK);
break;
case SZ_16M:
num_entries = 16;
- pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
- virtual_addr & IOSUPER_MASK);
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSUPER_MASK);
break;
default:
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
index b034f28..b4f476f 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.h
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -81,42 +81,38 @@ extern hw_status hw_mmu_twl_enable(const void __iomem *base_address);
extern hw_status hw_mmu_twl_disable(const void __iomem *base_address);
extern hw_status hw_mmu_tlb_add(const void __iomem *base_address,
- u32 physical_addr,
- u32 virtual_addr,
- u32 page_sz,
+ phys_addr_t pa, u32 da, u32 page_sz,
u32 entry_num,
struct hw_mmu_map_attrs_t *map_attrs,
s8 preserved_bit, s8 valid_bit);
/* For PTEs */
-extern hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
- u32 physical_addr,
- u32 virtual_addr,
+extern hw_status hw_mmu_pte_set(const u32 pg_tbl_va, phys_addr_t pa, u32 da,
u32 page_sz,
struct hw_mmu_map_attrs_t *map_attrs);
extern hw_status hw_mmu_pte_clear(const u32 pg_tbl_va,
- u32 virtual_addr, u32 page_size);
+ u32 da, u32 page_size);
void hw_mmu_tlb_flush_all(const void __iomem *base);
-static inline u32 hw_mmu_pte_addr_l1(u32 l1_base, u32 va)
+static inline u32 hw_mmu_pte_addr_l1(u32 l1_base, u32 da)
{
u32 pte_addr;
- u32 va31_to20;
+ u32 da31_to20;
- va31_to20 = va >> (20 - 2); /* Left-shift by 2 here itself */
- va31_to20 &= 0xFFFFFFFCUL;
- pte_addr = l1_base + va31_to20;
+ da31_to20 = da >> (20 - 2); /* Left-shift by 2 here itself */
+ da31_to20 &= 0xFFFFFFFCUL;
+ pte_addr = l1_base + da31_to20;
return pte_addr;
}
-static inline u32 hw_mmu_pte_addr_l2(u32 l2_base, u32 va)
+static inline u32 hw_mmu_pte_addr_l2(u32 l2_base, unsigned long da)
{
u32 pte_addr;
- pte_addr = (l2_base & 0xFFFFFC00) | ((va >> 10) & 0x3FC);
+ pte_addr = (l2_base & 0xFFFFFC00) | ((da >> 10) & 0x3FC);
return pte_addr;
}
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
index 0d28436..64291cc 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
@@ -162,10 +162,10 @@ typedef int(*fxn_brd_memwrite) (struct bridge_dev_context
* Map a MPU memory region to a DSP/IVA memory space
* Parameters:
* dev_ctxt: Handle to Bridge driver defined device info.
- * ul_mpu_addr: MPU memory region start address.
- * virt_addr: DSP/IVA memory region u8 address.
- * ul_num_bytes: Number of bytes to map.
- * map_attrs: Mapping attributes (e.g. endianness).
+ * va: MPU memory region start address.
+ * da: DSP/IVA memory region u8 address.
+ * bytes: Number of bytes to map.
+ * map_attrs: Mapping attributes (e.g. endianness).
* Returns:
* 0: Success.
* -EPERM: Other, unspecified error.
@@ -173,11 +173,9 @@ typedef int(*fxn_brd_memwrite) (struct bridge_dev_context
* dev_ctxt != NULL;
* Ensures:
*/
-typedef int(*fxn_brd_memmap) (struct bridge_dev_context
- * dev_ctxt, u32 ul_mpu_addr,
- u32 virt_addr, u32 ul_num_bytes,
- u32 map_attr,
- struct dmm_map_object *map_obj);
+typedef int(*fxn_brd_memmap) (struct bridge_dev_context *dev_ctxt,
+ unsigned long va, u32 da, size_t bytes,
+ u32 map_attr, struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_mem_un_map ========
@@ -185,8 +183,8 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* UnMap an MPU memory region from DSP/IVA memory space
* Parameters:
* dev_ctxt: Handle to Bridge driver defined device info.
- * virt_addr: DSP/IVA memory region u8 address.
- * ul_num_bytes: Number of bytes to unmap.
+ * da: DSP/IVA memory region u8 address.
+ * bytes: Number of bytes to unmap.
* Returns:
* 0: Success.
* -EPERM: Other, unspecified error.
@@ -195,8 +193,8 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* Ensures:
*/
typedef int(*fxn_brd_memunmap) (struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 ul_num_bytes,
- struct dmm_map_object *map_obj);
+ u32 da, size_t bytes,
+ struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_stop ========
--
1.7.8.6
* [PATCH v2 15/15] tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot bitfield
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
` (13 preceding siblings ...)
2012-09-19 12:07 ` [PATCH v2 14/15] tidspbridge: Use correct types to describe physical, MPU, DSP addresses Laurent Pinchart
@ 2012-09-19 12:07 ` Laurent Pinchart
2012-09-21 16:18 ` [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Ramirez Luna, Omar
15 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-19 12:07 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Omar Ramirez Luna <omar.ramirez@ti.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 83 ++++++++-----------
drivers/staging/tidspbridge/core/ue_deh.c | 8 +--
drivers/staging/tidspbridge/hw/hw_defs.h | 16 ----
drivers/staging/tidspbridge/hw/hw_mmu.c | 80 ++++---------------
drivers/staging/tidspbridge/hw/hw_mmu.h | 20 +----
.../tidspbridge/include/dspbridge/dspioctl.h | 25 ++++++
6 files changed, 82 insertions(+), 150 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index c9d240c..669b126 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -357,11 +357,9 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB;
entry_ndx++) {
struct bridge_ioctl_extproc *e = &dev_ctxt->atlb_entry[entry_ndx];
- struct hw_mmu_map_attrs_t map_attrs = {
- .endianism = e->endianism,
- .element_size = e->elem_size,
- .mixed_size = e->mixed_mode,
- };
+ int prot = (e->endianism << MMU_RAM_ENDIAN_SHIFT)
+ | (e->elem_size << MMU_RAM_ELSZ_SHIFT)
+ | (e->mixed_mode << MMU_RAM_MIXED_SHIFT);
if (!e->gpp_pa || !e->dsp_va)
continue;
@@ -378,7 +376,7 @@ static int bridge_brd_start(struct bridge_dev_context *dev_ctxt,
e->dsp_va,
e->size,
itmp_entry_ndx,
- &map_attrs, 1, 1);
+ prot, 1, 1);
itmp_entry_ndx++;
}
@@ -989,7 +987,7 @@ static int bridge_brd_mem_write(struct bridge_dev_context *dev_ctxt,
* It also manages the L2 page tables
*/
static int pte_set(struct pg_table_attrs *pt, phys_addr_t pa, u32 da,
- size_t size, struct hw_mmu_map_attrs_t *attrs)
+ size_t size, int prot)
{
u32 i;
u32 pte_val;
@@ -1045,7 +1043,7 @@ static int pte_set(struct pg_table_attrs *pt, phys_addr_t pa, u32 da,
status =
hw_mmu_pte_set(l1_base_va, l2_base_pa, da,
HW_MMU_COARSE_PAGE_SIZE,
- attrs);
+ prot);
} else {
status = -ENOMEM;
}
@@ -1070,10 +1068,8 @@ static int pte_set(struct pg_table_attrs *pt, phys_addr_t pa, u32 da,
if (!status) {
dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, da %x, size %x\n",
pg_tbl_va, pa, da, size);
- dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
- "mixed_size %x\n", attrs->endianism,
- attrs->element_size, attrs->mixed_size);
- status = hw_mmu_pte_set(pg_tbl_va, pa, da, size, attrs);
+ dev_dbg(bridge, "PTE: prot %x", prot);
+ status = hw_mmu_pte_set(pg_tbl_va, pa, da, size, prot);
}
return status;
@@ -1098,7 +1094,7 @@ static unsigned max_alignment(u32 addr, size_t size)
* Caller must pass page-aligned values
*/
static int pte_update(struct bridge_dev_context *dev_ctxt, phys_addr_t pa, u32 da,
- size_t size, struct hw_mmu_map_attrs_t *map_attrs)
+ size_t size, int prot)
{
while (size) {
/* To find the max. page size with which both PA & VA are
@@ -1109,7 +1105,7 @@ static int pte_update(struct bridge_dev_context *dev_ctxt, phys_addr_t pa, u32 d
if (WARN_ON(ent_sz == 0))
return -EINVAL;
- ret = pte_set(dev_ctxt->pt_attrs, pa, da, ent_sz, map_attrs);
+ ret = pte_set(dev_ctxt->pt_attrs, pa, da, ent_sz, prot);
if (ret < 0)
return ret;
@@ -1167,8 +1163,7 @@ static inline void flush_all(struct bridge_dev_context *dev_ctxt)
/* Memory map kernel VA -- memory allocated with vmalloc */
static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
- unsigned long va, u32 da, size_t bytes,
- struct hw_mmu_map_attrs_t *hw_attrs)
+ unsigned long va, u32 da, size_t bytes, int prot)
{
struct page *page_next;
int ret;
@@ -1208,7 +1203,7 @@ static int mem_map_vmalloc(struct bridge_dev_context *dev_ctxt,
}
ret = pte_update(dev_ctxt, page_to_phys(page), da,
- chunk_size, hw_attrs);
+ chunk_size, prot);
if (ret)
break;
@@ -1435,12 +1430,12 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
{
u32 attrs;
int status = 0;
- struct hw_mmu_map_attrs_t hw_attrs;
struct vm_area_struct *vma;
struct mm_struct *mm = current->mm;
u32 write = 0;
u32 num_usr_pgs;
struct page *pg;
+ int prot;
s32 pg_num;
u32 pg_i = 0;
u32 pa;
@@ -1460,46 +1455,38 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
/* Take mapping properties */
if (attrs & DSP_MAPBIGENDIAN)
- hw_attrs.endianism = HW_BIG_ENDIAN;
+ prot = MMU_RAM_ENDIAN_BIG;
else
- hw_attrs.endianism = HW_LITTLE_ENDIAN;
+ prot = MMU_RAM_ENDIAN_LITTLE;
+
+ if (attrs & DSP_MAPMIXEDELEMSIZE)
+ prot |= MMU_RAM_MIXED;
- hw_attrs.mixed_size = (enum hw_mmu_mixed_size_t)
- ((attrs & DSP_MAPMIXEDELEMSIZE) >> 2);
/* Ignore element_size if mixed_size is enabled */
- if (hw_attrs.mixed_size == 0) {
- if (attrs & DSP_MAPELEMSIZE8) {
- /* Size is 8 bit */
- hw_attrs.element_size = HW_ELEM_SIZE8BIT;
- } else if (attrs & DSP_MAPELEMSIZE16) {
- /* Size is 16 bit */
- hw_attrs.element_size = HW_ELEM_SIZE16BIT;
- } else if (attrs & DSP_MAPELEMSIZE32) {
- /* Size is 32 bit */
- hw_attrs.element_size = HW_ELEM_SIZE32BIT;
- } else if (attrs & DSP_MAPELEMSIZE64) {
- /* Size is 64 bit */
- hw_attrs.element_size = HW_ELEM_SIZE64BIT;
- } else {
- /*
- * Mixedsize isn't enabled, so size can't be
- * zero here
- */
+ if (!(attrs & DSP_MAPMIXEDELEMSIZE)) {
+ if (attrs & DSP_MAPELEMSIZE8)
+ prot |= MMU_RAM_ELSZ_8;
+ else if (attrs & DSP_MAPELEMSIZE16)
+ prot |= MMU_RAM_ELSZ_16;
+ else if (attrs & DSP_MAPELEMSIZE32)
+ prot |= MMU_RAM_ELSZ_32;
+ else if (attrs & DSP_MAPELEMSIZE64)
+ prot |= MMU_RAM_ELSZ_NONE;
+ else
+ /* Mixedsize isn't enabled, size can't be zero here */
return -EINVAL;
- }
}
- if (attrs & DSP_MAPVMALLOCADDR) {
- return mem_map_vmalloc(dev_ctxt, va, da, bytes, &hw_attrs);
- }
+ if (attrs & DSP_MAPVMALLOCADDR)
+ return mem_map_vmalloc(dev_ctxt, va, da, bytes, prot);
+
/*
* Do OS-specific user-va to pa translation.
* Combine physically contiguous regions to reduce TLBs.
* Pass the translated pa to pte_update.
*/
if ((attrs & DSP_MAPPHYSICALADDR)) {
- status = pte_update(dev_ctxt, (phys_addr_t)va, da,
- bytes, &hw_attrs);
+ status = pte_update(dev_ctxt, (phys_addr_t)va, da, bytes, prot);
goto func_cont;
}
@@ -1562,7 +1549,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
}
status = pte_set(dev_ctxt->pt_attrs, pa, da,
- SZ_4K, &hw_attrs);
+ SZ_4K, prot);
if (status)
break;
@@ -1587,7 +1574,7 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
}
status = pte_set(dev_ctxt->pt_attrs,
page_to_phys(pg), da,
- SZ_4K, &hw_attrs);
+ SZ_4K, prot);
if (status)
break;
diff --git a/drivers/staging/tidspbridge/core/ue_deh.c b/drivers/staging/tidspbridge/core/ue_deh.c
index c6196c4..15ef933 100644
--- a/drivers/staging/tidspbridge/core/ue_deh.c
+++ b/drivers/staging/tidspbridge/core/ue_deh.c
@@ -169,11 +169,7 @@ int bridge_deh_register_notify(struct deh_mgr *deh, u32 event_mask,
static void mmu_fault_print_stack(struct bridge_dev_context *dev_context)
{
struct cfg_hostres *resources;
- struct hw_mmu_map_attrs_t map_attrs = {
- .endianism = HW_LITTLE_ENDIAN,
- .element_size = HW_ELEM_SIZE16BIT,
- .mixed_size = HW_MMU_CPUES,
- };
+ int prot = MMU_RAM_ENDIAN_LITTLE | MMU_RAM_ELSZ_16 | MMU_RAM_MIXED;
void *dummy_va_addr;
resources = dev_context->resources;
@@ -189,7 +185,7 @@ static void mmu_fault_print_stack(struct bridge_dev_context *dev_context)
hw_mmu_tlb_add(resources->dmmu_base,
virt_to_phys(dummy_va_addr), fault_addr, SZ_4K, 1,
- &map_attrs, HW_SET, HW_SET);
+ prot, HW_SET, HW_SET);
dsp_clk_enable(DSP_CLK_GPT8);
diff --git a/drivers/staging/tidspbridge/hw/hw_defs.h b/drivers/staging/tidspbridge/hw/hw_defs.h
index 89e6c4f..9a87e1c 100644
--- a/drivers/staging/tidspbridge/hw/hw_defs.h
+++ b/drivers/staging/tidspbridge/hw/hw_defs.h
@@ -26,22 +26,6 @@ typedef long hw_status;
#define HW_CLEAR 0
#define HW_SET 1
-/* hw_endianism_t: Enumerated Type used to specify the endianism
- * Do NOT change these values. They are used as bit fields. */
-enum hw_endianism_t {
- HW_LITTLE_ENDIAN,
- HW_BIG_ENDIAN
-};
-
-/* hw_element_size_t: Enumerated Type used to specify the element size
- * Do NOT change these values. They are used as bit fields. */
-enum hw_element_size_t {
- HW_ELEM_SIZE8BIT,
- HW_ELEM_SIZE16BIT,
- HW_ELEM_SIZE32BIT,
- HW_ELEM_SIZE64BIT
-};
-
/* hw_idle_mode_t: Enumerated Type used to specify Idle modes */
enum hw_idle_mode_t {
HW_FORCE_IDLE,
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
index 34c2054..d80a2e2 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.c
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -105,47 +105,18 @@ static hw_status mmu_set_cam_entry(const void __iomem *base_address,
* Description : Physical Address to which the corresponding
* virtual Address shouldpoint
*
- * Identifier : endianism
- * Type : hw_endianism_t
- * Description : endianism for the given page
- *
- * Identifier : element_size
- * Type : hw_element_size_t
- * Description : The element size ( 8,16, 32 or 64 bit)
- *
- * Identifier : mixed_size
- * Type : hw_mmu_mixed_size_t
- * Description : Element Size to follow CPU or TLB
- *
- * RETURNS:
- *
- * Type : hw_status
- * Description : 0 -- No errors occurred
- * RET_BAD_NULL_PARAM -- A Pointer Paramater
- * was set to NULL
- * RET_PARAM_OUT_OF_RANGE -- Input Parameter
- * out of Range
+ * Identifier : prot
+ * Type : int
+ * Description : MMU_RAM_* flags
*
* PURPOSE: : Set MMU_CAM reg
*
* METHOD: : Check the Input parameters and set the RAM entry.
*/
-static hw_status mmu_set_ram_entry(const void __iomem *base_address,
- phys_addr_t pa,
- enum hw_endianism_t endianism,
- enum hw_element_size_t element_size,
- enum hw_mmu_mixed_size_t mixed_size)
+static void mmu_set_ram_entry(const void __iomem *base_address,
+ phys_addr_t pa, int prot)
{
- u32 mmu_ram_reg;
-
- mmu_ram_reg = (pa & IOPAGE_MASK);
- mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
- (mixed_size << 6));
-
- /* write values to register */
- MMUMMU_RAM_WRITE_REGISTER32(base_address, mmu_ram_reg);
-
- return 0;
+ MMUMMU_RAM_WRITE_REGISTER32(base_address, (pa & IOPAGE_MASK) | prot);
}
/* HW FUNCTIONS */
@@ -271,8 +242,7 @@ hw_status hw_mmu_twl_disable(const void __iomem *base_address)
}
hw_status hw_mmu_tlb_add(const void __iomem *base_address, phys_addr_t pa,
- u32 da, u32 page_sz, u32 entry_num,
- struct hw_mmu_map_attrs_t *map_attrs,
+ u32 da, u32 page_sz, u32 entry_num, int prot,
s8 preserved_bit, s8 valid_bit)
{
hw_status status = 0;
@@ -313,8 +283,7 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address, phys_addr_t pa,
/* Write the different fields of the RAM Entry Register */
/* endianism of the page,Element Size of the page (8, 16, 32, 64 bit) */
- mmu_set_ram_entry(base_address, pa, map_attrs->endianism,
- map_attrs->element_size, map_attrs->mixed_size);
+ mmu_set_ram_entry(base_address, pa, prot);
/* Update the MMU Lock Register */
/* currentVictim between lockedBaseValue and (MMU_Entries_Number - 1) */
@@ -330,51 +299,38 @@ hw_status hw_mmu_tlb_add(const void __iomem *base_address, phys_addr_t pa,
}
hw_status hw_mmu_pte_set(const u32 pg_tbl_va, phys_addr_t pa, u32 da,
- u32 page_sz, struct hw_mmu_map_attrs_t *map_attrs)
+ u32 page_sz, int prot)
{
hw_status status = 0;
u32 pte_addr, pte_val;
s32 num_entries = 1;
+ pte_val = ((prot & MMU_RAM_MIXED_MASK) << 5)
+ | (prot & MMU_RAM_ENDIAN_MASK)
+ | ((prot & MMU_RAM_ELSZ_MASK) >> 3);
+
switch (page_sz) {
case SZ_4K:
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOPAGE_MASK);
- pte_val =
- ((pa & IOPAGE_MASK) |
- (map_attrs->endianism << 9) | (map_attrs->
- element_size << 4) |
- (map_attrs->mixed_size << 11) | IOPTE_SMALL);
+ pte_val = (pa & IOPAGE_MASK) | pte_val | IOPTE_SMALL;
break;
case SZ_64K:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va, da & IOLARGE_MASK);
- pte_val =
- ((pa & IOLARGE_MASK) |
- (map_attrs->endianism << 9) | (map_attrs->
- element_size << 4) |
- (map_attrs->mixed_size << 11) | IOPTE_LARGE);
+ pte_val = (pa & IOLARGE_MASK) | pte_val | IOPTE_LARGE;
break;
case SZ_1M:
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSECTION_MASK);
- pte_val =
- ((((pa & IOSECTION_MASK) |
- (map_attrs->endianism << 15) | (map_attrs->
- element_size << 10) |
- (map_attrs->mixed_size << 17)) & ~0x40000) |
- IOPGD_SECTION);
+ pte_val = (pa & IOSECTION_MASK) | (pte_val << 6)
+ | IOPGD_SECTION;
break;
case SZ_16M:
num_entries = 16;
pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va, da & IOSUPER_MASK);
- pte_val =
- (((pa & IOSUPER_MASK) |
- (map_attrs->endianism << 15) | (map_attrs->
- element_size << 10) |
- (map_attrs->mixed_size << 17)
- ) | 0x40000 | IOPGD_SUPER);
+ pte_val = (pa & IOSUPER_MASK) | (pte_val << 6) | IOPGD_SUPER;
break;
case HW_MMU_COARSE_PAGE_SIZE:
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
index b4f476f..b7cc9e3 100644
--- a/drivers/staging/tidspbridge/hw/hw_mmu.h
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -32,20 +32,6 @@
#define HW_MMU_COARSE_PAGE_SIZE 0x400
-/* hw_mmu_mixed_size_t: Enumerated Type used to specify whether to follow
- CPU/TLB Element size */
-enum hw_mmu_mixed_size_t {
- HW_MMU_TLBES,
- HW_MMU_CPUES
-};
-
-/* hw_mmu_map_attrs_t: Struct containing MMU mapping attributes */
-struct hw_mmu_map_attrs_t {
- enum hw_endianism_t endianism;
- enum hw_element_size_t element_size;
- enum hw_mmu_mixed_size_t mixed_size;
-};
-
extern hw_status hw_mmu_enable(const void __iomem *base_address);
extern hw_status hw_mmu_disable(const void __iomem *base_address);
@@ -82,14 +68,12 @@ extern hw_status hw_mmu_twl_disable(const void __iomem *base_address);
extern hw_status hw_mmu_tlb_add(const void __iomem *base_address,
phys_addr_t pa, u32 da, u32 page_sz,
- u32 entry_num,
- struct hw_mmu_map_attrs_t *map_attrs,
+ u32 entry_num, int prot,
s8 preserved_bit, s8 valid_bit);
/* For PTEs */
extern hw_status hw_mmu_pte_set(const u32 pg_tbl_va, phys_addr_t pa, u32 da,
- u32 page_sz,
- struct hw_mmu_map_attrs_t *map_attrs);
+ u32 page_sz, int prot);
extern hw_status hw_mmu_pte_clear(const u32 pg_tbl_va,
u32 da, u32 page_size);
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h b/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
index 0c7ec04..28eebbb 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
@@ -54,6 +54,31 @@
/* Number of actual DSP-MMU TLB entrries */
#define BRDIOCTL_NUMOFMMUTLB 32
+/* hw_endianism_t: Enumerated Type used to specify the endianism
+ * Do NOT change these values. They are used as bit fields.
+ */
+enum hw_endianism_t {
+ HW_LITTLE_ENDIAN,
+ HW_BIG_ENDIAN
+};
+
+/* hw_element_size_t: Enumerated Type used to specify the element size
+ * Do NOT change these values. They are used as bit fields.
+ */
+enum hw_element_size_t {
+ HW_ELEM_SIZE8BIT,
+ HW_ELEM_SIZE16BIT,
+ HW_ELEM_SIZE32BIT,
+ HW_ELEM_SIZE64BIT
+};
+
+/* hw_mmu_mixed_size_t: Enumerated Type used to specify whether to follow
+ CPU/TLB Element size */
+enum hw_mmu_mixed_size_t {
+ HW_MMU_TLBES,
+ HW_MMU_CPUES
+};
+
struct bridge_ioctl_extproc {
u32 dsp_va; /* DSP virtual address */
u32 gpp_pa; /* GPP physical address */
--
1.7.8.6
* Re: [PATCH v2 00/15] tidspbridge driver MMU-related cleanups
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
` (14 preceding siblings ...)
2012-09-19 12:07 ` [PATCH v2 15/15] tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot bitfield Laurent Pinchart
@ 2012-09-21 16:18 ` Ramirez Luna, Omar
2012-09-24 23:15 ` Laurent Pinchart
15 siblings, 1 reply; 23+ messages in thread
From: Ramirez Luna, Omar @ 2012-09-21 16:18 UTC (permalink / raw)
To: Laurent Pinchart; +Cc: linux-omap
Hi Laurent,
On Wed, Sep 19, 2012 at 7:06 AM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hello,
>
> Here's the second version of my tidspbridge MMU-related cleanup patches. The
> first version has been sent privately only, don't try to search the mailing
> list archive for it :-)
>
> Replacing hw/hw_mmu.c and part of core/tiomap3430.c with generic IOMMU calls
> should be less difficult now. Anyone would like to give it a try?
>
> Laurent Pinchart (14):
> tidspbridge: hw_mmu: Reorder functions to avoid forward declarations
> tidspbridge: hw_mmu: Removed unused functions
> tidspbridge: tiomap3430: Reorder functions to avoid forward
> declarations
> tidspbridge: tiomap3430: Remove unneeded dev_context local variables
> tidspbridge: tiomap3430: Factor out common page release code
> tidspbridge: tiomap3430: Remove ul_ prefix
> tidspbridge: tiomap3430: Remove unneeded local variables
> tidspbridge: Fix VM_PFNMAP mapping
> tidspbridge: Remove unused hw_mmu_map_attrs_t::donotlockmpupage field
> arm: omap: iommu: Include required headers in iommu.h and iopgtable.h
> tidspbridge: Use constants defined in IOMMU platform headers
> tidspbridge: Simplify pte_update and mem_map_vmalloc functions
> tidspbridge: Use correct types to describe physical, MPU, DSP
> addresses
> tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot
> bitfield
Thanks, tested on beagle-xM, they look good!
Can you submit them to Greg KH and devel@driverdev.osuosl.org,
preferably with a 'staging:' prefix added to the current subject?
The only thing of concern is that:
ARM: OMAP: iommu: fix including iommu.h without IOMMU_API selected
might be taking a different path to mainline than these [1].
Cheers,
Omar
---
[1] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg76319.html
* Re: [PATCH v2 08/15] tidspbridge: Fix VM_PFNMAP mapping
2012-09-19 12:06 ` [PATCH v2 08/15] tidspbridge: Fix VM_PFNMAP mapping Laurent Pinchart
@ 2012-09-21 18:37 ` Felipe Contreras
2012-09-24 23:11 ` Laurent Pinchart
0 siblings, 1 reply; 23+ messages in thread
From: Felipe Contreras @ 2012-09-21 18:37 UTC (permalink / raw)
To: Laurent Pinchart; +Cc: linux-omap, Omar Ramirez Luna
On Wed, Sep 19, 2012 at 2:06 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> VMAs marked with the VM_PFNMAP flag have no struct page associated with
> the memory PFNs. Don't call get_page()/put_page() on the pages
> supposedly associated with the PFNs.
I don't see anything wrong with the patch, but there are a lot
of changes, and a bit more explanation of how it achieves
this might help. Also, it seems to be reordering
match_exact_map_obj(), which messes up the diff a bit.
Cheers.
--
Felipe Contreras
* Re: [PATCH v2 08/15] tidspbridge: Fix VM_PFNMAP mapping
2012-09-21 18:37 ` Felipe Contreras
@ 2012-09-24 23:11 ` Laurent Pinchart
2012-09-24 23:13 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
0 siblings, 1 reply; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-24 23:11 UTC (permalink / raw)
To: Felipe Contreras; +Cc: linux-omap, Omar Ramirez Luna
Hi Felipe,
On Friday 21 September 2012 20:37:40 Felipe Contreras wrote:
> On Wed, Sep 19, 2012 at 2:06 PM, Laurent Pinchart wrote:
> > VMAs marked with the VM_PFNMAP flag have no struct page associated with
> > the memory PFNs. Don't call get_page()/put_page() on the pages
> > supposedly associated with the PFNs.
>
> I don't see anything wrong with the patch, but I think there's a lot
> of changes, and perhaps a bit more of explanation of how it's doing
> this might help.
I'll make the commit message more explicit.
> Also, it seems to be reordering match_exact_map_obj(), which messes up the
> diff a bit.
That's because match_exact_map_obj() logically belonged to
remove_mapping_information(), but now belongs to find_dsp_mapping(). I've
split the commit in two; I'll send the result in reply to this e-mail. Please
let me know if you like that better.
--
Regards,
Laurent Pinchart
* [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations
2012-09-24 23:11 ` Laurent Pinchart
@ 2012-09-24 23:13 ` Laurent Pinchart
2012-09-24 23:13 ` [PATCH 2/2] tidspbridge: Fix VM_PFNMAP mapping Laurent Pinchart
2012-10-12 21:32 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
0 siblings, 2 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-24 23:13 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna, Felipe Contreras
Split the remove_mapping_information() function into find_dsp_mapping()
to locate the mapping and remove_mapping_information() to remove it.
Rename find_containing_mapping() to find_mpu_mapping() and share the
search code between find_dsp_mapping() and find_mpu_mapping().
This prepares the driver for VM_PFNMAP support.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
drivers/staging/tidspbridge/rmgr/proc.c | 116 ++++++++++++++++---------------
1 files changed, 59 insertions(+), 57 deletions(-)
diff --git a/drivers/staging/tidspbridge/rmgr/proc.c b/drivers/staging/tidspbridge/rmgr/proc.c
index 7e4f12f..64b1bba 100644
--- a/drivers/staging/tidspbridge/rmgr/proc.c
+++ b/drivers/staging/tidspbridge/rmgr/proc.c
@@ -145,47 +145,67 @@ static struct dmm_map_object *add_mapping_info(struct process_context *pr_ctxt,
return map_obj;
}
-static int match_exact_map_obj(struct dmm_map_object *map_obj,
- u32 dsp_addr, u32 size)
+static void remove_mapping_information(struct process_context *pr_ctxt,
+ struct dmm_map_object *map_obj)
{
- if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
- pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
- __func__, dsp_addr, map_obj->size, size);
+ if (map_obj == NULL)
+ return;
- return map_obj->dsp_addr == dsp_addr &&
- map_obj->size == size;
+ pr_debug("%s: match, deleting map info\n", __func__);
+
+ spin_lock(&pr_ctxt->dmm_map_lock);
+ list_del(&map_obj->link);
+ spin_unlock(&pr_ctxt->dmm_map_lock);
+
+ kfree(map_obj->dma_info.sg);
+ kfree(map_obj->pages);
+ kfree(map_obj);
}
-static void remove_mapping_information(struct process_context *pr_ctxt,
- u32 dsp_addr, u32 size)
+static struct dmm_map_object *
+find_mapping(struct process_context *pr_ctxt, u32 addr, u32 size,
+ int (*match)(struct dmm_map_object *, u32, u32))
{
struct dmm_map_object *map_obj;
- pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
- dsp_addr, size);
-
spin_lock(&pr_ctxt->dmm_map_lock);
list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
- pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
- __func__,
- map_obj->mpu_addr,
- map_obj->dsp_addr,
- map_obj->size);
-
- if (match_exact_map_obj(map_obj, dsp_addr, size)) {
- pr_debug("%s: match, deleting map info\n", __func__);
- list_del(&map_obj->link);
- kfree(map_obj->dma_info.sg);
- kfree(map_obj->pages);
- kfree(map_obj);
+ pr_debug("%s: candidate: mpu_addr 0x%x dsp_addr 0x%x size 0x%x\n",
+ __func__, map_obj->mpu_addr, map_obj->dsp_addr,
+ map_obj->size);
+
+ if (match(map_obj, addr, size)) {
+ pr_debug("%s: match!\n", __func__);
goto out;
}
- pr_debug("%s: candidate didn't match\n", __func__);
+
+ pr_debug("%s: no match!\n", __func__);
}
- pr_err("%s: failed to find given map info\n", __func__);
+ map_obj = NULL;
out:
spin_unlock(&pr_ctxt->dmm_map_lock);
+ return map_obj;
+}
+
+static int match_exact_map_obj(struct dmm_map_object *map_obj,
+ u32 dsp_addr, u32 size)
+{
+ if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
+ pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
+ __func__, dsp_addr, map_obj->size, size);
+
+ return map_obj->dsp_addr == dsp_addr &&
+ map_obj->size == size;
+}
+
+static struct dmm_map_object *
+find_dsp_mapping(struct process_context *pr_ctxt, u32 dsp_addr, u32 size)
+{
+ pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
+ dsp_addr, size);
+
+ return find_mapping(pr_ctxt, dsp_addr, size, match_exact_map_obj);
}
static int match_containing_map_obj(struct dmm_map_object *map_obj,
@@ -197,33 +217,13 @@ static int match_containing_map_obj(struct dmm_map_object *map_obj,
mpu_addr + size <= map_obj_end;
}
-static struct dmm_map_object *find_containing_mapping(
- struct process_context *pr_ctxt,
- u32 mpu_addr, u32 size)
+static struct dmm_map_object *
+find_mpu_mapping(struct process_context *pr_ctxt, u32 mpu_addr, u32 size)
{
- struct dmm_map_object *map_obj;
pr_debug("%s: looking for mpu_addr 0x%x size 0x%x\n", __func__,
- mpu_addr, size);
+ mpu_addr, size);
- spin_lock(&pr_ctxt->dmm_map_lock);
- list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
- pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
- __func__,
- map_obj->mpu_addr,
- map_obj->dsp_addr,
- map_obj->size);
- if (match_containing_map_obj(map_obj, mpu_addr, size)) {
- pr_debug("%s: match!\n", __func__);
- goto out;
- }
-
- pr_debug("%s: no match!\n", __func__);
- }
-
- map_obj = NULL;
-out:
- spin_unlock(&pr_ctxt->dmm_map_lock);
- return map_obj;
+ return find_mapping(pr_ctxt, mpu_addr, size, match_containing_map_obj);
}
static int find_first_page_in_cache(struct dmm_map_object *map_obj,
@@ -755,9 +755,9 @@ int proc_begin_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
mutex_lock(&proc_lock);
/* find requested memory are in cached mapping information */
- map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
if (!map_obj) {
- pr_err("%s: find_containing_mapping failed\n", __func__);
+ pr_err("%s: find_mpu_mapping failed\n", __func__);
status = -EFAULT;
goto no_map;
}
@@ -795,9 +795,9 @@ int proc_end_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
mutex_lock(&proc_lock);
/* find requested memory are in cached mapping information */
- map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
if (!map_obj) {
- pr_err("%s: find_containing_mapping failed\n", __func__);
+ pr_err("%s: find_mpu_mapping failed\n", __func__);
status = -EFAULT;
goto no_map;
}
@@ -1273,7 +1273,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
u32 size_align;
int status = 0;
struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
- struct dmm_map_object *map_obj;
+ struct dmm_map_object *map_obj = NULL;
u32 tmp_addr = 0;
#ifdef CONFIG_TIDSPBRIDGE_CACHE_LINE_CHECK
@@ -1324,7 +1324,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
/* Mapped address = MSB of VA | LSB of PA */
*pp_map_addr = (void *) tmp_addr;
} else {
- remove_mapping_information(pr_ctxt, tmp_addr, size_align);
+ remove_mapping_information(pr_ctxt, map_obj);
dmm_un_map_memory(dmm_mgr, va_align, &size_align);
}
mutex_unlock(&proc_lock);
@@ -1600,6 +1600,7 @@ int proc_un_map(void *hprocessor, void *map_addr,
{
int status = 0;
struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_map_object *map_obj;
struct dmm_object *dmm_mgr;
u32 va_align;
u32 size_align;
@@ -1637,7 +1638,8 @@ int proc_un_map(void *hprocessor, void *map_addr,
* from dmm_map_list, so that mapped memory resource tracking
* remains uptodate
*/
- remove_mapping_information(pr_ctxt, (u32) map_addr, size_align);
+ map_obj = find_dsp_mapping(pr_ctxt, (u32) map_addr, size_align);
+ remove_mapping_information(pr_ctxt, map_obj);
unmap_failed:
mutex_unlock(&proc_lock);
--
1.7.8.6
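[Editor's sketch, not part of the patch: the refactor above replaces two open-coded list walks with one generic `find_mapping()` parameterized by a match callback. A minimal userspace model of that pattern, with simplified stand-in types and names rather than the driver's actual `dmm_map_object` list and locking:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct dmm_map_object (no list linkage/locking). */
struct map_obj {
	uint32_t mpu_addr;
	uint32_t dsp_addr;
	uint32_t size;
};

typedef int (*match_fn)(const struct map_obj *, uint32_t addr, uint32_t size);

/* Exact DSP-address match, as used by find_dsp_mapping(). */
static int match_exact(const struct map_obj *o, uint32_t dsp_addr, uint32_t size)
{
	return o->dsp_addr == dsp_addr && o->size == size;
}

/* Containing MPU-range match, as used by find_mpu_mapping(). */
static int match_containing(const struct map_obj *o, uint32_t mpu_addr, uint32_t size)
{
	uint32_t map_obj_end = o->mpu_addr + o->size;

	return mpu_addr >= o->mpu_addr && mpu_addr + size <= map_obj_end;
}

/* Stand-in for find_mapping(): walk the array, return first match or NULL. */
static const struct map_obj *find_mapping(const struct map_obj *objs, size_t n,
					  uint32_t addr, uint32_t size,
					  match_fn match)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (match(&objs[i], addr, size))
			return &objs[i];

	return NULL;
}
```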
* [PATCH 2/2] tidspbridge: Fix VM_PFNMAP mapping
2012-09-24 23:13 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
@ 2012-09-24 23:13 ` Laurent Pinchart
2012-10-12 21:32 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
1 sibling, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-24 23:13 UTC (permalink / raw)
To: linux-omap; +Cc: Omar Ramirez Luna, Felipe Contreras
VMAs marked with the VM_PFNMAP flag have no struct page associated with
the memory PFNs. Don't call get_page()/put_page() on the pages
supposedly associated with the PFNs.
To check the VM flags at unmap time, store them in the dmm_map_object
structure at map time, and pass the structure down to the tiomap3430.c
layer.
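[Editor's sketch, not part of the patch: the fix gates struct-page refcounting on both conditions, `pfn_valid()` and the VMA not being PFN-mapped. A userspace model of that predicate, with hypothetical flag bit values chosen purely for illustration, not the kernel's:]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit values for illustration; the kernel defines its own. */
#define VM_IO     0x1u
#define VM_PFNMAP 0x2u

/*
 * Returns 1 when the mapping/unmapping code may call get_page()/put_page():
 * the PFN must have a struct page, and the VMA must not be PFN-mapped,
 * since VM_PFNMAP mappings carry no struct page for their PFNs.
 */
static int needs_page_ref(uint32_t vm_flags, int pfn_is_valid)
{
	return !(vm_flags & VM_PFNMAP) && pfn_is_valid;
}
```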
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
drivers/staging/tidspbridge/core/tiomap3430.c | 30 ++++++++++++--------
.../staging/tidspbridge/include/dspbridge/drv.h | 1 +
.../tidspbridge/include/dspbridge/dspdefs.h | 9 +++--
drivers/staging/tidspbridge/rmgr/proc.c | 14 ++++----
4 files changed, 31 insertions(+), 23 deletions(-)
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
index 2c5be89..cc538ea 100644
--- a/drivers/staging/tidspbridge/core/tiomap3430.c
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -1262,7 +1262,8 @@ static void bad_page_dump(u32 pa, struct page *pg)
}
/* Release all pages associated with a physical addresses range. */
-static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
+static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes,
+ struct dmm_map_object *map_obj)
{
struct page *pg;
u32 num_pages;
@@ -1270,7 +1271,8 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
num_pages = pte_size / PAGE_SIZE;
for (; num_pages > 0; --num_pages, paddr += HW_PAGE_SIZE4KB) {
- if (!pfn_valid(__phys_to_pfn(paddr)))
+ if (!pfn_valid(__phys_to_pfn(paddr)) ||
+ (map_obj && map_obj->vm_flags & VM_PFNMAP))
continue;
pg = PHYS_TO_PAGE(paddr);
@@ -1295,7 +1297,8 @@ static void bridge_release_pages(u32 paddr, u32 pte_size, u32 num_bytes)
* we clear consecutive PTEs until we unmap all the bytes
*/
static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
- u32 virt_addr, u32 num_bytes)
+ u32 virt_addr, u32 num_bytes,
+ struct dmm_map_object *map_obj)
{
u32 l1_base_va;
u32 l2_base_va;
@@ -1369,7 +1372,7 @@ static int bridge_brd_mem_un_map(struct bridge_dev_context *dev_ctxt,
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes);
+ num_bytes, map_obj);
if (hw_mmu_pte_clear(pte_addr_l2, virt_addr, pte_size)) {
status = -EPERM;
@@ -1413,7 +1416,7 @@ skip_coarse_page:
}
bridge_release_pages(pte_val & ~(pte_size - 1), pte_size,
- num_bytes);
+ num_bytes, map_obj);
if (!hw_mmu_pte_clear(l1_base_va, virt_addr, pte_size)) {
status = 0;
@@ -1448,7 +1451,7 @@ EXIT_LOOP:
*/
static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
u32 mpu_addr, u32 virt_addr, u32 num_bytes,
- u32 map_attr, struct page **mapped_pages)
+ u32 map_attr, struct dmm_map_object *map_obj)
{
u32 attrs;
int status = 0;
@@ -1559,6 +1562,9 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
goto func_cont;
}
+ if (map_obj)
+ map_obj->vm_flags = vma->vm_flags;
+
if (vma->vm_flags & VM_IO) {
num_usr_pgs = num_bytes / PG_SIZE4K;
@@ -1571,7 +1577,8 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
"address is invalid\n");
break;
}
- if (pfn_valid(__phys_to_pfn(pa))) {
+ if (!(vma->vm_flags & VM_PFNMAP) &&
+ pfn_valid(__phys_to_pfn(pa))) {
pg = PHYS_TO_PAGE(pa);
get_page(pg);
if (page_count(pg) < 1) {
@@ -1610,8 +1617,8 @@ static int bridge_brd_mem_map(struct bridge_dev_context *dev_ctxt,
if (status)
break;
- if (mapped_pages)
- mapped_pages[pg_i] = pg;
+ if (map_obj)
+ map_obj->pages[pg_i] = pg;
virt_addr += HW_PAGE_SIZE4KB;
mpu_addr += HW_PAGE_SIZE4KB;
@@ -1635,10 +1642,9 @@ func_cont:
* Roll out the mapped pages incase it failed in middle of
* mapping
*/
- if (pg_i) {
+ if (pg_i)
bridge_brd_mem_un_map(dev_ctxt, virt_addr,
- (pg_i * PG_SIZE4K));
- }
+ pg_i * PG_SIZE4K, map_obj);
status = -EPERM;
}
/*
diff --git a/drivers/staging/tidspbridge/include/dspbridge/drv.h b/drivers/staging/tidspbridge/include/dspbridge/drv.h
index b0c7708..492d216 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/drv.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/drv.h
@@ -88,6 +88,7 @@ struct dmm_map_object {
u32 mpu_addr;
u32 size;
u32 num_usr_pgs;
+ vm_flags_t vm_flags;
struct page **pages;
struct bridge_dma_map_info dma_info;
};
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
index ed32bf3..0d28436 100644
--- a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
@@ -39,6 +39,7 @@
/* Handle to Bridge driver's private device context. */
struct bridge_dev_context;
+struct dmm_map_object;
/*--------------------------------------------------------------------------- */
/* BRIDGE DRIVER FUNCTION TYPES */
@@ -176,7 +177,7 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* dev_ctxt, u32 ul_mpu_addr,
u32 virt_addr, u32 ul_num_bytes,
u32 map_attr,
- struct page **mapped_pages);
+ struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_mem_un_map ========
@@ -193,9 +194,9 @@ typedef int(*fxn_brd_memmap) (struct bridge_dev_context
* dev_ctxt != NULL;
* Ensures:
*/
-typedef int(*fxn_brd_memunmap) (struct bridge_dev_context
- * dev_ctxt,
- u32 virt_addr, u32 ul_num_bytes);
+typedef int(*fxn_brd_memunmap) (struct bridge_dev_context *dev_ctxt,
+ u32 virt_addr, u32 ul_num_bytes,
+ struct dmm_map_object *map_obj);
/*
* ======== bridge_brd_stop ========
diff --git a/drivers/staging/tidspbridge/rmgr/proc.c b/drivers/staging/tidspbridge/rmgr/proc.c
index 64b1bba..88c5107 100644
--- a/drivers/staging/tidspbridge/rmgr/proc.c
+++ b/drivers/staging/tidspbridge/rmgr/proc.c
@@ -1318,7 +1318,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
else
status = (*p_proc_object->intf_fxns->brd_mem_map)
(p_proc_object->bridge_context, pa_align, va_align,
- size_align, ul_map_attr, map_obj->pages);
+ size_align, ul_map_attr, map_obj);
}
if (!status) {
/* Mapped address = MSB of VA | LSB of PA */
@@ -1624,12 +1624,13 @@ int proc_un_map(void *hprocessor, void *map_addr,
* This function returns error if the VA is not mapped
*/
status = dmm_un_map_memory(dmm_mgr, (u32) va_align, &size_align);
- /* Remove mapping from the page tables. */
- if (!status) {
- status = (*p_proc_object->intf_fxns->brd_mem_un_map)
- (p_proc_object->bridge_context, va_align, size_align);
- }
+ if (status)
+ goto unmap_failed;
+ /* Remove mapping from the page tables. */
+ map_obj = find_dsp_mapping(pr_ctxt, (u32) map_addr, size_align);
+ status = (*p_proc_object->intf_fxns->brd_mem_un_map)
+ (p_proc_object->bridge_context, va_align, size_align, map_obj);
if (status)
goto unmap_failed;
@@ -1638,7 +1639,6 @@ int proc_un_map(void *hprocessor, void *map_addr,
* from dmm_map_list, so that mapped memory resource tracking
* remains uptodate
*/
- map_obj = find_dsp_mapping(pr_ctxt, (u32) map_addr, size_align);
remove_mapping_information(pr_ctxt, map_obj);
unmap_failed:
--
1.7.8.6
* Re: [PATCH v2 00/15] tidspbridge driver MMU-related cleanups
2012-09-21 16:18 ` [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Ramirez Luna, Omar
@ 2012-09-24 23:15 ` Laurent Pinchart
0 siblings, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-09-24 23:15 UTC (permalink / raw)
To: Ramirez Luna, Omar; +Cc: linux-omap
Hi Omar,
On Friday 21 September 2012 11:18:56 Ramirez Luna, Omar wrote:
> On Wed, Sep 19, 2012 at 7:06 AM, Laurent Pinchart wrote:
> > Hello,
> >
> > Here's the second version of my tidspbridge MMU-related cleanup patches.
> > The first version has been sent privately only, don't try to search the
> > mailing list archive for it :-)
> >
> > Replacing hw/hw_mmu.c and part of core/tiomap3430.c with generic IOMMU
> > calls should be less difficult now. Anyone would like to give it a try?
> >
> > Laurent Pinchart (14):
> > tidspbridge: hw_mmu: Reorder functions to avoid forward declarations
> > tidspbridge: hw_mmu: Removed unused functions
> > tidspbridge: tiomap3430: Reorder functions to avoid forward
> > declarations
> > tidspbridge: tiomap3430: Remove unneeded dev_context local variables
> > tidspbridge: tiomap3430: Factor out common page release code
> > tidspbridge: tiomap3430: Remove ul_ prefix
> > tidspbridge: tiomap3430: Remove unneeded local variables
> > tidspbridge: Fix VM_PFNMAP mapping
> > tidspbridge: Remove unused hw_mmu_map_attrs_t::donotlockmpupage field
> > arm: omap: iommu: Include required headers in iommu.h and iopgtable.h
> > tidspbridge: Use constants defined in IOMMU platform headers
> > tidspbridge: Simplify pte_update and mem_map_vmalloc functions
> > tidspbridge: Use correct types to describe physical, MPU, DSP
> > addresses
> > tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot
> > bitfield
>
> Thanks, tested on beagle-xM, they look good!
>
> Can you submit them to Greg KH and devel@driverdev.osuosl.org,
> preferably with a 'staging:' prefix along with the current subject.
I'll do that after getting Felipe's feedback (as well as yours if possible
:-)) on the VM_PFNMAP patch split that I've posted today.
> The only thing of concern is that:
> ARM: OMAP: iommu: fix including iommu.h without IOMMU_API selected
>
> Might be taking a different path than these to mainline[1].
>
> Cheers,
>
> Omar
>
> ---
> [1] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg76319.html
--
Regards,
Laurent Pinchart
* Re: [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations
2012-09-24 23:13 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
2012-09-24 23:13 ` [PATCH 2/2] tidspbridge: Fix VM_PFNMAP mapping Laurent Pinchart
@ 2012-10-12 21:32 ` Laurent Pinchart
1 sibling, 0 replies; 23+ messages in thread
From: Laurent Pinchart @ 2012-10-12 21:32 UTC (permalink / raw)
To: Omar Ramirez Luna, Felipe Contreras; +Cc: linux-omap
Hi Felipe, Omar,
Could I please get your feedback on this patch and on 2/2? They try to split
08/15 into easier-to-review chunks.
On Tuesday 25 September 2012 01:13:05 Laurent Pinchart wrote:
> Split the remove_mapping_information() function into find_dsp_mapping()
> to locate the mapping and remove_mapping_information() to remove it.
> Rename find_containing_mapping() to find_mpu_mapping() and share the
> search code between find_dsp_mapping() and find_mpu_mapping().
>
> This prepares the driver for VM_PFNMAP support.
>
> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> ---
> drivers/staging/tidspbridge/rmgr/proc.c | 116 ++++++++++++++--------------
> 1 files changed, 59 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/staging/tidspbridge/rmgr/proc.c
> b/drivers/staging/tidspbridge/rmgr/proc.c index 7e4f12f..64b1bba 100644
> --- a/drivers/staging/tidspbridge/rmgr/proc.c
> +++ b/drivers/staging/tidspbridge/rmgr/proc.c
> @@ -145,47 +145,67 @@ static struct dmm_map_object *add_mapping_info(struct
> process_context *pr_ctxt, return map_obj;
> }
>
> -static int match_exact_map_obj(struct dmm_map_object *map_obj,
> - u32 dsp_addr, u32 size)
> +static void remove_mapping_information(struct process_context *pr_ctxt,
> + struct dmm_map_object *map_obj)
> {
> - if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
> - pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
> - __func__, dsp_addr, map_obj->size, size);
> + if (map_obj == NULL)
> + return;
>
> - return map_obj->dsp_addr == dsp_addr &&
> - map_obj->size == size;
> + pr_debug("%s: match, deleting map info\n", __func__);
> +
> + spin_lock(&pr_ctxt->dmm_map_lock);
> + list_del(&map_obj->link);
> + spin_unlock(&pr_ctxt->dmm_map_lock);
> +
> + kfree(map_obj->dma_info.sg);
> + kfree(map_obj->pages);
> + kfree(map_obj);
> }
>
> -static void remove_mapping_information(struct process_context *pr_ctxt,
> - u32 dsp_addr, u32 size)
> +static struct dmm_map_object *
> +find_mapping(struct process_context *pr_ctxt, u32 addr, u32 size,
> + int (*match)(struct dmm_map_object *, u32, u32))
> {
> struct dmm_map_object *map_obj;
>
> - pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
> - dsp_addr, size);
> -
> spin_lock(&pr_ctxt->dmm_map_lock);
> list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
> - pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
> - __func__,
> - map_obj->mpu_addr,
> - map_obj->dsp_addr,
> - map_obj->size);
> -
> - if (match_exact_map_obj(map_obj, dsp_addr, size)) {
> - pr_debug("%s: match, deleting map info\n", __func__);
> - list_del(&map_obj->link);
> - kfree(map_obj->dma_info.sg);
> - kfree(map_obj->pages);
> - kfree(map_obj);
> + pr_debug("%s: candidate: mpu_addr 0x%x dsp_addr 0x%x size 0x%x\n",
> + __func__, map_obj->mpu_addr, map_obj->dsp_addr,
> + map_obj->size);
> +
> + if (match(map_obj, addr, size)) {
> + pr_debug("%s: match!\n", __func__);
> goto out;
> }
> - pr_debug("%s: candidate didn't match\n", __func__);
> +
> + pr_debug("%s: no match!\n", __func__);
> }
>
> - pr_err("%s: failed to find given map info\n", __func__);
> + map_obj = NULL;
> out:
> spin_unlock(&pr_ctxt->dmm_map_lock);
> + return map_obj;
> +}
> +
> +static int match_exact_map_obj(struct dmm_map_object *map_obj,
> + u32 dsp_addr, u32 size)
> +{
> + if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
> + pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
> + __func__, dsp_addr, map_obj->size, size);
> +
> + return map_obj->dsp_addr == dsp_addr &&
> + map_obj->size == size;
> +}
> +
> +static struct dmm_map_object *
> +find_dsp_mapping(struct process_context *pr_ctxt, u32 dsp_addr, u32 size)
> +{
> + pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
> + dsp_addr, size);
> +
> + return find_mapping(pr_ctxt, dsp_addr, size, match_exact_map_obj);
> }
>
> static int match_containing_map_obj(struct dmm_map_object *map_obj,
> @@ -197,33 +217,13 @@ static int match_containing_map_obj(struct
> dmm_map_object *map_obj, mpu_addr + size <= map_obj_end;
> }
>
> -static struct dmm_map_object *find_containing_mapping(
> - struct process_context *pr_ctxt,
> - u32 mpu_addr, u32 size)
> +static struct dmm_map_object *
> +find_mpu_mapping(struct process_context *pr_ctxt, u32 mpu_addr, u32 size)
> {
> - struct dmm_map_object *map_obj;
> pr_debug("%s: looking for mpu_addr 0x%x size 0x%x\n", __func__,
> - mpu_addr, size);
> + mpu_addr, size);
>
> - spin_lock(&pr_ctxt->dmm_map_lock);
> - list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
> - pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
> - __func__,
> - map_obj->mpu_addr,
> - map_obj->dsp_addr,
> - map_obj->size);
> - if (match_containing_map_obj(map_obj, mpu_addr, size)) {
> - pr_debug("%s: match!\n", __func__);
> - goto out;
> - }
> -
> - pr_debug("%s: no match!\n", __func__);
> - }
> -
> - map_obj = NULL;
> -out:
> - spin_unlock(&pr_ctxt->dmm_map_lock);
> - return map_obj;
> + return find_mapping(pr_ctxt, mpu_addr, size, match_containing_map_obj);
> }
>
> static int find_first_page_in_cache(struct dmm_map_object *map_obj,
> @@ -755,9 +755,9 @@ int proc_begin_dma(void *hprocessor, void *pmpu_addr,
> u32 ul_size, mutex_lock(&proc_lock);
>
> /* find requested memory are in cached mapping information */
> - map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
> + map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
> if (!map_obj) {
> - pr_err("%s: find_containing_mapping failed\n", __func__);
> + pr_err("%s: find_mpu_mapping failed\n", __func__);
> status = -EFAULT;
> goto no_map;
> }
> @@ -795,9 +795,9 @@ int proc_end_dma(void *hprocessor, void *pmpu_addr, u32
> ul_size, mutex_lock(&proc_lock);
>
> /* find requested memory are in cached mapping information */
> - map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
> + map_obj = find_mpu_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
> if (!map_obj) {
> - pr_err("%s: find_containing_mapping failed\n", __func__);
> + pr_err("%s: find_mpu_mapping failed\n", __func__);
> status = -EFAULT;
> goto no_map;
> }
> @@ -1273,7 +1273,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32
> ul_size, u32 size_align;
> int status = 0;
> struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
> - struct dmm_map_object *map_obj;
> + struct dmm_map_object *map_obj = NULL;
> u32 tmp_addr = 0;
>
> #ifdef CONFIG_TIDSPBRIDGE_CACHE_LINE_CHECK
> @@ -1324,7 +1324,7 @@ int proc_map(void *hprocessor, void *pmpu_addr, u32
> ul_size, /* Mapped address = MSB of VA | LSB of PA */
> *pp_map_addr = (void *) tmp_addr;
> } else {
> - remove_mapping_information(pr_ctxt, tmp_addr, size_align);
> + remove_mapping_information(pr_ctxt, map_obj);
> dmm_un_map_memory(dmm_mgr, va_align, &size_align);
> }
> mutex_unlock(&proc_lock);
> @@ -1600,6 +1600,7 @@ int proc_un_map(void *hprocessor, void *map_addr,
> {
> int status = 0;
> struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
> + struct dmm_map_object *map_obj;
> struct dmm_object *dmm_mgr;
> u32 va_align;
> u32 size_align;
> @@ -1637,7 +1638,8 @@ int proc_un_map(void *hprocessor, void *map_addr,
> * from dmm_map_list, so that mapped memory resource tracking
> * remains uptodate
> */
> - remove_mapping_information(pr_ctxt, (u32) map_addr, size_align);
> + map_obj = find_dsp_mapping(pr_ctxt, (u32) map_addr, size_align);
> + remove_mapping_information(pr_ctxt, map_obj);
>
> unmap_failed:
> mutex_unlock(&proc_lock);
--
Regards,
Laurent Pinchart
end of thread, other threads:[~2012-10-12 21:31 UTC | newest]
Thread overview: 23+ messages
2012-09-19 12:06 [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 01/15] tidspbridge: hw_mmu: Reorder functions to avoid forward declarations Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 02/15] tidspbridge: hw_mmu: Removed unused functions Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 03/15] tidspbridge: tiomap3430: Reorder functions to avoid forward declarations Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 04/15] tidspbridge: tiomap3430: Remove unneeded dev_context local variables Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 05/15] tidspbridge: tiomap3430: Factor out common page release code Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 06/15] tidspbridge: tiomap3430: Remove ul_ prefix Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 07/15] tidspbridge: tiomap3430: Remove unneeded local variables Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 08/15] tidspbridge: Fix VM_PFNMAP mapping Laurent Pinchart
2012-09-21 18:37 ` Felipe Contreras
2012-09-24 23:11 ` Laurent Pinchart
2012-09-24 23:13 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
2012-09-24 23:13 ` [PATCH 2/2] tidspbridge: Fix VM_PFNMAP mapping Laurent Pinchart
2012-10-12 21:32 ` [PATCH 1/2] tidspbridge: Refactor mapping find/remove operations Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 09/15] tidspbridge: Remove unused hw_mmu_map_attrs_t::donotlockmpupage field Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 10/15] ARM: OMAP: iommu: fix including iommu.h without IOMMU_API selected Laurent Pinchart
2012-09-19 12:06 ` [PATCH v2 11/15] arm: omap: iommu: Include required headers in iommu.h and iopgtable.h Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 12/15] tidspbridge: Use constants defined in IOMMU platform headers Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 13/15] tidspbridge: Simplify pte_update and mem_map_vmalloc functions Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 14/15] tidspbridge: Use correct types to describe physical, MPU, DSP addresses Laurent Pinchart
2012-09-19 12:07 ` [PATCH v2 15/15] tidspbridge: Replace hw_mmu_map_attrs_t structure with a prot bitfield Laurent Pinchart
2012-09-21 16:18 ` [PATCH v2 00/15] tidspbridge driver MMU-related cleanups Ramirez Luna, Omar
2012-09-24 23:15 ` Laurent Pinchart