From: Lars-Peter Clausen
Subject: Re: [PATCH 2/2] spi: cadence: Configure SPI clock in the prepare_message() callback
Date: Thu, 10 Jul 2014 12:50:05 +0200
Message-ID: <53BE6FDD.6050206@metafoo.de>
References: <1404984389-12802-1-git-send-email-lars@metafoo.de> <1404984389-12802-2-git-send-email-lars@metafoo.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
To: Harini Katakam
Cc: Mark Brown, linux-spi

On 07/10/2014 12:43 PM, Harini Katakam wrote:
> Hi,
>
> On Thu, Jul 10, 2014 at 2:56 PM, Lars-Peter Clausen wrote:
>> Currently the cadence SPI driver does the SPI clock configuration
>> (setting CPOL and CPHA) in the prepare_transfer_hardware() callback.
>> The prepare_transfer_hardware() callback is only called, though, when
>> the controller transitions from an idle state to a non-idle state.
>> Such a transition happens when the message queue goes from empty to
>> non-empty. If multiple messages from different SPI slaves with
>> different clock settings are in the message queue, the clock settings
>> will not be properly updated when switching from one slave device to
>> another. Instead, do the clock configuration in the prepare_message()
>> callback, which is called for each individual message.
>>
>
> Yes, the requirement from the controller is that CPOL/CPHA setting
> changes will not take effect while SPI is enabled. The CPOL/CPHA
> setting is done in prepare_hardware() before SPI is enabled, so this
> works.
> From your patches I understand that you might change CPOL/CPHA for
> each message. Is this possible? Is this a requirement?

The messages in the queue can be from different SPI devices, so yes,
this is a requirement. I'm seeing the issue in one of my setups.

- Lars
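For reference, a minimal sketch of the kind of change being discussed:
the CPOL/CPHA configuration moves out of prepare_transfer_hardware()
into a prepare_message() callback so it runs once per message instead
of only on the empty-to-non-empty queue transition. The xspi_* names,
register offset, and bit masks below are illustrative assumptions, not
the actual spi-cadence code; only the prepare_message() hook itself is
the real SPI core interface.

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/spi/spi.h>

/* Hypothetical register layout, for illustration only */
#define XSPI_CR_OFFSET	0x00		/* config register (assumed) */
#define XSPI_CR_CPHA	BIT(2)		/* clock phase bit (assumed) */
#define XSPI_CR_CPOL	BIT(1)		/* clock polarity bit (assumed) */

struct xspi {
	void __iomem *regs;
};

/*
 * Called by the SPI core for every message, so consecutive messages
 * for slaves with different SPI modes each get the correct CPOL/CPHA
 * setting. If this same code lived in prepare_transfer_hardware(), it
 * would only run when the queue goes from empty to non-empty.
 */
static int xspi_prepare_message(struct spi_master *master,
				struct spi_message *msg)
{
	struct xspi *xspi = spi_master_get_devdata(master);
	u32 cr;

	cr = readl(xspi->regs + XSPI_CR_OFFSET);
	cr &= ~(XSPI_CR_CPHA | XSPI_CR_CPOL);
	if (msg->spi->mode & SPI_CPHA)
		cr |= XSPI_CR_CPHA;
	if (msg->spi->mode & SPI_CPOL)
		cr |= XSPI_CR_CPOL;
	writel(cr, xspi->regs + XSPI_CR_OFFSET);

	return 0;
}

/* Wired up at probe time, e.g.: master->prepare_message = xspi_prepare_message; */

Given Harini's point that CPOL/CPHA changes do not take effect while
the controller is enabled, a real implementation would likely also
need to disable and re-enable the controller around the register write
whenever the mode bits actually change.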