From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753504AbbJSO4Q (ORCPT ); Mon, 19 Oct 2015 10:56:16 -0400
Received: from szxga01-in.huawei.com ([58.251.152.64]:63447 "EHLO szxga01-in.huawei.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751103AbbJSO4O
	(ORCPT ); Mon, 19 Oct 2015 10:56:14 -0400
Subject: Re: [PATCH 13/25] scsi: hisi_sas: add path from phyup irq to SAS framework
To: Arnd Bergmann
References: <1444663237-238302-1-git-send-email-john.garry@huawei.com>
	<6858991.h0xpYFqkjM@wuerfel> <5624FA23.1050502@huawei.com>
	<4269975.xfCo9ut8KK@wuerfel>
From: John Garry
Message-ID: <56250473.9090700@huawei.com>
Date: Mon, 19 Oct 2015 15:55:47 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:38.0) Gecko/20100101 Thunderbird/38.2.0
MIME-Version: 1.0
In-Reply-To: <4269975.xfCo9ut8KK@wuerfel>
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

>>> I'd have to review more closely, but I think that's fine, as this
>>> is how most work queues are used: you can queue the same function
>>> multiple times, and it's guaranteed to run at least once after
>>> the last queue, so if you queue it while it's already running,
>>> it will be called again, otherwise it won't.
>>>
>> In the scenario I described, the issue is not that the second call to
>> queue the work function is lost. The problem is that when we set up the
>> second call, we may overwrite elements of the phy's hisi_sas_wq struct
>> which may still be referenced by the work function for the first call.
>
> Do you mean these members?
>
>> +	wq->event = PHYUP;
>> +	wq->hisi_hba = hisi_hba;
>> +	wq->phy_no = phy_no;
>
Yes, these are the members I was referring to.
However, there is also an element "data" in hisi_sas_wq. This is used in phy control as follows:

int hisi_sas_control_phy(struct asd_sas_phy *sas_phy, enum phy_func func,
		void *funcdata)
{
	...
	wq->event = CONTROL_PHY;
	wq->data = func;
	...
	INIT_WORK(&wq->work_struct, hisi_sas_wq_process);
	queue_work(hisi_hba->wq, &wq->work_struct);
}

void hisi_sas_wq_process(struct work_struct *work)
{
	struct hisi_sas_wq *wq =
		container_of(work, struct hisi_sas_wq, work_struct);

	switch (wq->event) {
	case CONTROL_PHY:
		hisi_sas_control_phy_work(hisi_hba, wq->data, phy_no);
	}

static void hisi_sas_control_phy_work(struct hisi_hba *hisi_hba,
		int func, int phy_no)
{
	switch (func) {
	case PHY_FUNC_HARD_RESET:
		hard_phy_reset_v1_hw(hisi_hba, phy_no);

> Sorry for being unclear, I implied getting rid of them, by having a
> work queue per phy. You can create a structure for each phy like
>
> struct hisi_sas_phy {
>	struct hisi_sas *dev;	/* pointer to the device */
>	unsigned int phy_no;
>	struct work_struct phyup_work;
> };
>
> There are probably other things you can put into the same structure.
> When the phy is brought up, you then just start the phyup work for
> that phy, which can retrieve its hisi_sas_phy structure from the
> work_struct pointer, and from there get to the device.
>
> Arnd
>
We could create a work_struct for each event, which would be fine. However, we would sometimes still need some way of passing data to the event handler, as in the phy control example.