* Re: [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig
[not found] ` <4BC44D08.4060907@vlnb.net>
@ 2010-04-13 13:04 ` Vladislav Bolkhovitin
2010-04-13 13:04 ` [PATCH][RFC 2/12/1/5] SCST core's external headers Vladislav Bolkhovitin
` (8 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Bart Van Assche,
James Smart, Joe Eykholt, Andy Yan, linux-driver, Vu Pham,
Linus Torvalds
This patch contains SCST core's Makefile and Kconfig.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
Kconfig | 246 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Makefile | 11 ++
2 files changed, 257 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/Kconfig linux-2.6.33/drivers/scst/Kconfig
--- orig/linux-2.6.33/drivers/scst/Kconfig
+++ linux-2.6.33/drivers/scst/Kconfig
@@ -0,0 +1,246 @@
+menu "SCSI target (SCST) support"
+
+config SCST
+ tristate "SCSI target (SCST) support"
+ depends on SCSI
+ help
+ SCST is designed to provide a unified, consistent interface
+ between SCSI target drivers and the Linux kernel and to simplify
+ target driver development as much as possible. Visit
+ http://scst.sourceforge.net for more information.
+
+config SCST_DISK
+ tristate "SCSI target disk support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for disk devices.
+
+config SCST_TAPE
+ tristate "SCSI target tape support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for tape devices.
+
+config SCST_CDROM
+ tristate "SCSI target CDROM support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for CDROM devices.
+
+config SCST_MODISK
+ tristate "SCSI target MO disk support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for MO disk devices.
+
+config SCST_CHANGER
+ tristate "SCSI target changer support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for changer devices.
+
+config SCST_PROCESSOR
+ tristate "SCSI target processor support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for processor devices.
+
+config SCST_RAID
+ tristate "SCSI target storage array controller (RAID) support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST pass-through device handler for storage array controller (RAID) devices.
+
+config SCST_VDISK
+ tristate "SCSI target virtual disk and/or CDROM support"
+ default SCST
+ depends on SCSI && SCST
+ help
+ SCST device handler for virtual disk and/or CDROM devices.
+
+config SCST_STRICT_SERIALIZING
+ bool "Strict serialization"
+ depends on SCST
+ help
+ Enable strict SCSI command serialization. When enabled, SCST sends
+ all SCSI commands to the underlying SCSI device synchronously, one
+ after the other. This makes task management more reliable, at the cost of
+ a performance penalty. This is most useful for stateful SCSI devices
+ like tapes, where the result of the execution of a command
+ depends on the device settings configured by previous commands. Disk
+ and RAID devices are stateless in most cases. The current SCSI core
+ in Linux doesn't allow aborting all commands reliably if they have
+ been sent asynchronously to a stateful device.
+ Enable this option if you use stateful device(s) and need as much
+ error recovery reliability as possible.
+
+ If unsure, say "N".
+
+config SCST_STRICT_SECURITY
+ bool "Strict security"
+ depends on SCST
+ help
+ Makes SCST clear (zero-fill) allocated data buffers. Note: this has a
+ significant performance penalty.
+
+ If unsure, say "N".
+
+config SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+ bool "Allow pass-through commands to be sent from soft-IRQ context"
+ depends on SCST
+ help
+ Allows SCST to submit SCSI pass-through commands to real SCSI devices
+ via the SCSI middle layer using the scsi_execute_async() function from
+ soft-IRQ context (tasklets). This used to be the default, but the
+ SCSI middle layer now seems to expect only thread context on the I/O
+ submit path, so this option is now disabled by default.
+ Enabling it will decrease the number of context switches and improve
+ performance. It is more or less safe. In the worst case, if in your
+ configuration the SCSI middle layer really doesn't expect soft-IRQ
+ context in the scsi_execute_async() function, you will get a warning
+ message in the kernel log.
+
+ If unsure, say "N".
+
+config SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING
+ bool "Send back UNKNOWN TASK when an already finished task is aborted"
+ depends on SCST
+ help
+ Controls which response is sent by SCST to the initiator in case
+ the initiator attempts to abort (ABORT TASK) an already finished
+ request. If this option is enabled, the response UNKNOWN TASK is
+ sent back to the initiator. However, some initiators, particularly
+ the VMware iSCSI initiator, interpret the UNKNOWN TASK response as
+ if the target has malfunctioned and try to RESET it. Sometimes the
+ initiator then starts misbehaving itself.
+
+ If unsure, say "N".
+
+config SCST_USE_EXPECTED_VALUES
+ bool "Prefer initiator-supplied SCSI command attributes"
+ depends on SCST
+ help
+ When SCST receives a SCSI command from an initiator, such a SCSI
+ command has both data transfer length and direction attributes.
+ There are two possible sources for these attributes: either the
+ values computed by SCST from its internal command translation table
+ or the values supplied by the initiator. The former are used by
+ default for security reasons. Invalid initiator-supplied
+ attributes can crash the target, especially in pass-through mode.
+ Only consider enabling this option when SCST logs the following
+ message: "Unknown opcode XX for YY. Should you update
+ scst_scsi_op_table?" and when the initiator complains. Please
+ report any unrecognized commands to scst-devel@lists.sourceforge.net.
+
+ If unsure, say "N".
+
+config SCST_EXTRACHECKS
+ bool "Extra consistency checks"
+ depends on SCST
+ help
+ Enable additional consistency checks in the SCSI middle level target
+ code. This may be helpful for SCST developers. Enable it if you have
+ any problems.
+
+ If unsure, say "N".
+
+config SCST_TRACING
+ bool "Tracing support"
+ depends on SCST
+ default y
+ help
+ Enable SCSI middle level tracing support. Tracing can be controlled
+ dynamically via the sysfs interface. The traced information
+ is sent to the kernel log and may be very helpful when analyzing
+ the cause of a communication problem between initiator and target.
+
+ If unsure, say "Y".
+
+config SCST_DEBUG
+ bool "Debugging support"
+ depends on SCST
+ select DEBUG_BUGVERBOSE
+ help
+ Enables support for debugging SCST. This may be helpful for SCST
+ developers.
+
+ If unsure, say "N".
+
+config SCST_DEBUG_OOM
+ bool "Out-of-memory debugging support"
+ depends on SCST
+ help
+ Let SCST's internal memory allocation function
+ (scst_alloc_sg_entries()) fail about once in every 10000 calls, at
+ least if the flag __GFP_NOFAIL has not been set. This allows SCST
+ developers to test the behavior of SCST in out-of-memory conditions.
+ This may be helpful for SCST developers.
+
+ If unsure, say "N".
+
+config SCST_DEBUG_RETRY
+ bool "SCSI command retry debugging support"
+ depends on SCST
+ help
+ Let SCST's internal SCSI command transfer function
+ (scst_rdy_to_xfer()) fail about once in every 100 calls. This allows
+ SCST developers to test the behavior of SCST when SCSI queues fill
+ up. This may be helpful for SCST developers.
+
+ If unsure, say "N".
+
+config SCST_DEBUG_SN
+ bool "SCSI sequence number debugging support"
+ depends on SCST
+ help
+ Allows testing of SCSI command ordering via sequence numbers by
+ randomly changing the queue type of SCSI commands to
+ SCST_CMD_QUEUE_ORDERED, SCST_CMD_QUEUE_HEAD_OF_QUEUE or
+ SCST_CMD_QUEUE_SIMPLE for about one in 300 SCSI commands.
+ This may be helpful for SCST developers.
+
+ If unsure, say "N".
+
+config SCST_DEBUG_TM
+ bool "Task management debugging support"
+ depends on SCST_DEBUG
+ help
+ Enables support for debugging of SCST's task management functions.
+ When enabled, some of the commands on LUN 0 in the default access
+ control group will be delayed for about 60 seconds. This will
+ cause the remote initiator to send SCSI task management functions,
+ e.g. ABORT TASK and TARGET RESET.
+
+ If unsure, say "N".
+
+config SCST_TM_DBG_GO_OFFLINE
+ bool "Let devices become completely unresponsive"
+ depends on SCST_DEBUG_TM
+ help
+ Enable this option if you want the device to eventually become
+ completely unresponsive. When disabled, the device will receive
+ ABORT and RESET commands.
+
+config SCST_MEASURE_LATENCY
+ bool "Commands processing latency measurement facility"
+ depends on SCST
+ help
+ This option enables the command processing latency measurement
+ facility in SCST. It provides average command processing latency
+ statistics via the sysfs interface. You can clear the accumulated
+ results by writing 0 to the corresponding sysfs file.
+ Note that you need a non-preemptible kernel to get correct results.
+
+ If unsure, say "N".
+
+source "drivers/scst/iscsi-scst/Kconfig"
+source "drivers/scst/srpt/Kconfig"
+
+endmenu
diff -uprN orig/linux-2.6.33/drivers/scst/Makefile linux-2.6.33/drivers/scst/Makefile
--- orig/linux-2.6.33/drivers/scst/Makefile
+++ linux-2.6.33/drivers/scst/Makefile
@@ -0,0 +1,11 @@
+ccflags-y += -Iinclude/scst -Wno-unused-parameter
+
+scst-y += scst_main.o
+scst-y += scst_targ.o
+scst-y += scst_lib.o
+scst-y += scst_sysfs.o
+scst-y += scst_mem.o
+scst-y += scst_debug.o
+
+obj-$(CONFIG_SCST) += scst.o dev_handlers/ iscsi-scst/ srpt/
+
* Re: [PATCH][RFC 2/12/1/5] SCST core's external headers
[not found] ` <4BC44D08.4060907@vlnb.net>
2010-04-13 13:04 ` [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig Vladislav Bolkhovitin
@ 2010-04-13 13:04 ` Vladislav Bolkhovitin
2010-04-13 13:04 ` [PATCH][RFC 3/12/1/5] SCST core's scst_main.c Vladislav Bolkhovitin
` (7 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains declarations of all externally visible SCST constants,
types and function prototypes.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst.h | 3169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
scst_const.h | 330 ++++++
2 files changed, 3499 insertions(+)
diff -uprN orig/linux-2.6.33/include/scst/scst_const.h linux-2.6.33/include/scst/scst_const.h
--- orig/linux-2.6.33/include/scst/scst_const.h
+++ linux-2.6.33/include/scst/scst_const.h
@@ -0,0 +1,330 @@
+/*
+ * include/scst/scst_const.h
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Contains common SCST constants.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SCST_CONST_H
+#define __SCST_CONST_H
+
+#include <scsi/scsi.h>
+
+#define SCST_CONST_VERSION "$Revision: 1585 $"
+
+/*** Shared constants between user and kernel spaces ***/
+
+/* Max size of CDB */
+#define SCST_MAX_CDB_SIZE 16
+
+/* Max size of various names */
+#define SCST_MAX_NAME 50
+
+/* Max size of external names, like initiator name */
+#define SCST_MAX_EXTERNAL_NAME 256
+
+/*
+ * Size of sense sufficient to carry standard sense data.
+ * Warning! It's allocated on stack!
+ */
+#define SCST_STANDARD_SENSE_LEN 18
+
+/* Max size of sense */
+#define SCST_SENSE_BUFFERSIZE 96
+
+/*************************************************************
+ ** Allowed delivery statuses for cmd's delivery_status
+ *************************************************************/
+
+#define SCST_CMD_DELIVERY_SUCCESS 0
+#define SCST_CMD_DELIVERY_FAILED -1
+#define SCST_CMD_DELIVERY_ABORTED -2
+
+/*************************************************************
+ ** Values for task management functions
+ *************************************************************/
+#define SCST_ABORT_TASK 0
+#define SCST_ABORT_TASK_SET 1
+#define SCST_CLEAR_ACA 2
+#define SCST_CLEAR_TASK_SET 3
+#define SCST_LUN_RESET 4
+#define SCST_TARGET_RESET 5
+
+/** SCST extensions **/
+
+/*
+ * Notifies about I_T nexus loss event in the corresponding session.
+ * Aborts all tasks there, resets the reservation, if any, and sets
+ * up the I_T Nexus loss UA.
+ */
+#define SCST_NEXUS_LOSS_SESS 6
+
+/* Aborts all tasks in the corresponding session */
+#define SCST_ABORT_ALL_TASKS_SESS 7
+
+/*
+ * Notifies about I_T nexus loss event. Aborts all tasks in all sessions
+ * of the tgt, resets the reservations, if any, and sets up the I_T Nexus
+ * loss UA.
+ */
+#define SCST_NEXUS_LOSS 8
+
+/* Aborts all tasks in all sessions of the tgt */
+#define SCST_ABORT_ALL_TASKS 9
+
+/*
+ * Internal TM command issued by SCST in scst_unregister_session(). It is the
+ * same as SCST_NEXUS_LOSS_SESS, except:
+ * - it doesn't call task_mgmt_affected_cmds_done()
+ * - it doesn't call task_mgmt_fn_done()
+ * - it doesn't queue NEXUS LOSS UA.
+ *
+ * Target driver shall NEVER use it!!
+ */
+#define SCST_UNREG_SESS_TM 10
+
+/*************************************************************
+ ** Values for mgmt cmd's status field. Codes taken from iSCSI
+ *************************************************************/
+#define SCST_MGMT_STATUS_SUCCESS 0
+#define SCST_MGMT_STATUS_TASK_NOT_EXIST -1
+#define SCST_MGMT_STATUS_LUN_NOT_EXIST -2
+#define SCST_MGMT_STATUS_FN_NOT_SUPPORTED -5
+#define SCST_MGMT_STATUS_REJECTED -255
+#define SCST_MGMT_STATUS_FAILED -129
+
+/*************************************************************
+ ** SCSI task attribute queue types
+ *************************************************************/
+enum scst_cmd_queue_type {
+ SCST_CMD_QUEUE_UNTAGGED = 0,
+ SCST_CMD_QUEUE_SIMPLE,
+ SCST_CMD_QUEUE_ORDERED,
+ SCST_CMD_QUEUE_HEAD_OF_QUEUE,
+ SCST_CMD_QUEUE_ACA
+};
+
+/*************************************************************
+ ** CDB flags
+ *************************************************************/
+enum scst_cdb_flags {
+ SCST_TRANSFER_LEN_TYPE_FIXED = 0x001,
+ SCST_SMALL_TIMEOUT = 0x002,
+ SCST_LONG_TIMEOUT = 0x004,
+ SCST_UNKNOWN_LENGTH = 0x008,
+ SCST_INFO_VALID = 0x010, /* must be single bit */
+ SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED = 0x020,
+ SCST_IMPLICIT_HQ = 0x040,
+ SCST_SKIP_UA = 0x080,
+ SCST_WRITE_MEDIUM = 0x100,
+ SCST_LOCAL_CMD = 0x200,
+ SCST_FULLY_LOCAL_CMD = 0x400,
+ SCST_REG_RESERVE_ALLOWED = 0x800,
+};
+
+/*************************************************************
+ ** Data direction aliases. When changing them, don't forget to change
+ ** scst_to_tgt_dma_dir as well!
+ *************************************************************/
+#define SCST_DATA_UNKNOWN 0
+#define SCST_DATA_WRITE 1
+#define SCST_DATA_READ 2
+#define SCST_DATA_BIDI (SCST_DATA_WRITE | SCST_DATA_READ)
+#define SCST_DATA_NONE 4
+
+/*************************************************************
+ ** Default suffix for targets with NULL names
+ *************************************************************/
+#define SCST_DEFAULT_TGT_NAME_SUFFIX "_target_"
+
+/*************************************************************
+ ** Sense manipulation and examination
+ *************************************************************/
+#define SCST_LOAD_SENSE(key_asc_ascq) key_asc_ascq
+
+#define SCST_SENSE_VALID(sense) ((sense != NULL) && \
+ ((((const uint8_t *)(sense))[0] & 0x70) == 0x70))
+
+#define SCST_NO_SENSE(sense) ((sense != NULL) && \
+ (((const uint8_t *)(sense))[2] == 0))
+
+/*************************************************************
+ ** Sense data for the appropriate errors. Can be used with
+ ** scst_set_cmd_error()
+ *************************************************************/
+#define scst_sense_no_sense NO_SENSE, 0x00, 0
+#define scst_sense_hardw_error HARDWARE_ERROR, 0x44, 0
+#define scst_sense_aborted_command ABORTED_COMMAND, 0x00, 0
+#define scst_sense_invalid_opcode ILLEGAL_REQUEST, 0x20, 0
+#define scst_sense_invalid_field_in_cdb ILLEGAL_REQUEST, 0x24, 0
+#define scst_sense_invalid_field_in_parm_list ILLEGAL_REQUEST, 0x26, 0
+#define scst_sense_parameter_value_invalid ILLEGAL_REQUEST, 0x26, 2
+#define scst_sense_reset_UA UNIT_ATTENTION, 0x29, 0
+#define scst_sense_nexus_loss_UA UNIT_ATTENTION, 0x29, 0x7
+#define scst_sense_saving_params_unsup ILLEGAL_REQUEST, 0x39, 0
+#define scst_sense_lun_not_supported ILLEGAL_REQUEST, 0x25, 0
+#define scst_sense_data_protect DATA_PROTECT, 0x00, 0
+#define scst_sense_miscompare_error MISCOMPARE, 0x1D, 0
+#define scst_sense_block_out_range_error ILLEGAL_REQUEST, 0x21, 0
+#define scst_sense_medium_changed_UA UNIT_ATTENTION, 0x28, 0
+#define scst_sense_read_error MEDIUM_ERROR, 0x11, 0
+#define scst_sense_write_error MEDIUM_ERROR, 0x03, 0
+#define scst_sense_not_ready NOT_READY, 0x04, 0x10
+#define scst_sense_invalid_message ILLEGAL_REQUEST, 0x49, 0
+#define scst_sense_cleared_by_another_ini_UA UNIT_ATTENTION, 0x2F, 0
+#define scst_sense_capacity_data_changed UNIT_ATTENTION, 0x2A, 0x9
+#define scst_sense_reported_luns_data_changed UNIT_ATTENTION, 0x3F, 0xE
+#define scst_sense_inquery_data_changed UNIT_ATTENTION, 0x3F, 0x3
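+
+/*
+ * Usage sketch: each scst_sense_* macro above expands to a
+ * "key, asc, ascq" triplet, so it can be passed through SCST_LOAD_SENSE()
+ * to scst_set_cmd_error() (whose exact prototype is assumed here), e.g.:
+ *
+ *	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ *
+ * which supplies ILLEGAL_REQUEST/0x20/0 as the sense key/ASC/ASCQ.
+ */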
+
+/*************************************************************
+ * SCSI opcodes not listed anywhere else
+ *************************************************************/
+#define REPORT_DEVICE_IDENTIFIER 0xA3
+#define INIT_ELEMENT_STATUS 0x07
+#define INIT_ELEMENT_STATUS_RANGE 0x37
+#define PREVENT_ALLOW_MEDIUM 0x1E
+#define READ_ATTRIBUTE 0x8C
+#define REQUEST_VOLUME_ADDRESS 0xB5
+#define WRITE_ATTRIBUTE 0x8D
+#define WRITE_VERIFY_16 0x8E
+#define VERIFY_6 0x13
+#define VERIFY_12 0xAF
+
+/*************************************************************
+ ** SCSI Architecture Model (SAM) Status codes. Taken from SAM-3 draft
+ ** T10/1561-D Revision 4 Draft dated 7th November 2002.
+ *************************************************************/
+#define SAM_STAT_GOOD 0x00
+#define SAM_STAT_CHECK_CONDITION 0x02
+#define SAM_STAT_CONDITION_MET 0x04
+#define SAM_STAT_BUSY 0x08
+#define SAM_STAT_INTERMEDIATE 0x10
+#define SAM_STAT_INTERMEDIATE_CONDITION_MET 0x14
+#define SAM_STAT_RESERVATION_CONFLICT 0x18
+#define SAM_STAT_COMMAND_TERMINATED 0x22 /* obsolete in SAM-3 */
+#define SAM_STAT_TASK_SET_FULL 0x28
+#define SAM_STAT_ACA_ACTIVE 0x30
+#define SAM_STAT_TASK_ABORTED 0x40
+
+/*************************************************************
+ ** Control byte field in CDB
+ *************************************************************/
+#define CONTROL_BYTE_LINK_BIT 0x01
+#define CONTROL_BYTE_NACA_BIT 0x04
+
+/*************************************************************
+ ** Byte 1 in INQUIRY CDB
+ *************************************************************/
+#define SCST_INQ_EVPD 0x01
+
+/*************************************************************
+ ** Byte 3 in Standard INQUIRY data
+ *************************************************************/
+#define SCST_INQ_BYTE3 3
+
+#define SCST_INQ_NORMACA_BIT 0x20
+
+/*************************************************************
+ ** Byte 2 in RESERVE_10 CDB
+ *************************************************************/
+#define SCST_RES_3RDPTY 0x10
+#define SCST_RES_LONGID 0x02
+
+/*************************************************************
+ ** Values for the control mode page TST field
+ *************************************************************/
+#define SCST_CONTR_MODE_ONE_TASK_SET 0
+#define SCST_CONTR_MODE_SEP_TASK_SETS 1
+
+/*******************************************************************
+ ** Values for the control mode page QUEUE ALGORITHM MODIFIER field
+ *******************************************************************/
+#define SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER 0
+#define SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER 1
+
+/*************************************************************
+ ** Values for the control mode page D_SENSE field
+ *************************************************************/
+#define SCST_CONTR_MODE_FIXED_SENSE 0
+#define SCST_CONTR_MODE_DESCR_SENSE 1
+
+/*************************************************************
+ ** Misc SCSI constants
+ *************************************************************/
+#define SCST_SENSE_ASC_UA_RESET 0x29
+#define READ_CAP_LEN 8
+#define READ_CAP16_LEN 32
+#define BYTCHK 0x02
+#define POSITION_LEN_SHORT 20
+#define POSITION_LEN_LONG 32
+
+/*************************************************************
+ ** Various timeouts
+ *************************************************************/
+#define SCST_DEFAULT_TIMEOUT (60 * HZ)
+
+#define SCST_GENERIC_CHANGER_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_CHANGER_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_GENERIC_PROCESSOR_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_PROCESSOR_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_GENERIC_TAPE_SMALL_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_TAPE_REG_TIMEOUT (900 * HZ)
+#define SCST_GENERIC_TAPE_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_GENERIC_MODISK_SMALL_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_MODISK_REG_TIMEOUT (900 * HZ)
+#define SCST_GENERIC_MODISK_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_GENERIC_DISK_SMALL_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_DISK_REG_TIMEOUT (60 * HZ)
+#define SCST_GENERIC_DISK_LONG_TIMEOUT (3600 * HZ)
+
+#define SCST_GENERIC_RAID_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_RAID_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_GENERIC_CDROM_SMALL_TIMEOUT (3 * HZ)
+#define SCST_GENERIC_CDROM_REG_TIMEOUT (900 * HZ)
+#define SCST_GENERIC_CDROM_LONG_TIMEOUT (14000 * HZ)
+
+#define SCST_MAX_OTHER_TIMEOUT (14000 * HZ)
+
+/*************************************************************
+ ** I/O grouping attribute string values. Must match constants
+ ** w/o '_STR' suffix!
+ *************************************************************/
+#define SCST_IO_GROUPING_AUTO_STR "auto"
+#define SCST_IO_GROUPING_THIS_GROUP_ONLY_STR "this_group_only"
+#define SCST_IO_GROUPING_NEVER_STR "never"
+
+/*************************************************************
+ ** Threads pool type attribute string values.
+ ** Must match scst_dev_type_threads_pool_type!
+ *************************************************************/
+#define SCST_THREADS_POOL_PER_INITIATOR_STR "per_initiator"
+#define SCST_THREADS_POOL_SHARED_STR "shared"
+
+/*************************************************************
+ ** Misc constants
+ *************************************************************/
+#define SCST_SYSFS_BLOCK_SIZE PAGE_SIZE
+
+#define SCST_SYSFS_KEY_MARK "[key]"
+
+#define SCST_MIN_REL_TGT_ID 1
+#define SCST_MAX_REL_TGT_ID 65535
+
+#endif /* __SCST_CONST_H */
diff -uprN orig/linux-2.6.33/include/scst/scst.h linux-2.6.33/include/scst/scst.h
--- orig/linux-2.6.33/include/scst/scst.h
+++ linux-2.6.33/include/scst/scst.h
@@ -0,0 +1,3169 @@
+/*
+ * include/scst/scst.h
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Main SCSI target mid-level include file.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SCST_H
+#define __SCST_H
+
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/blkdev.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi.h>
+
+#include <scst_const.h>
+
+#include "scst_sgv.h"
+
+/*
+ * Version numbers, the same as for the kernel.
+ *
+ * When changing it, don't forget to change SCST_FIO_REV in scst_vdisk.c
+ * and FIO_REV in usr/fileio/common.h as well.
+ */
+#define SCST_VERSION(a, b, c, d) (((a) << 24) + ((b) << 16) + ((c) << 8) + d)
+#define SCST_VERSION_CODE SCST_VERSION(2, 0, 0, 0)
+#define SCST_VERSION_STRING_SUFFIX
+#define SCST_VERSION_STRING "2.0.0-pre1" SCST_VERSION_STRING_SUFFIX
+#define SCST_INTERFACE_VERSION \
+ SCST_VERSION_STRING "$Revision: 1603 $" SCST_CONST_VERSION
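+
+/*
+ * For example, SCST_VERSION(2, 0, 0, 0) evaluates to 0x02000000, so
+ * version dependencies can be expressed numerically (the macro name
+ * below is illustrative only):
+ *
+ *	#if SCST_VERSION_CODE >= SCST_VERSION(2, 0, 0, 0)
+ *	#define MY_DRIVER_HAS_SCST_2_0 1
+ *	#endif
+ */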
+
+#define SCST_LOCAL_NAME "scst_lcl_drvr"
+
+/*************************************************************
+ ** States of the command processing state machine. The "active"
+ ** states come first, followed by the "passive" ones, which yields
+ ** more efficient generated code for the corresponding
+ ** "switch" statements.
+ *************************************************************/
+
+/* Internal parsing */
+#define SCST_CMD_STATE_PRE_PARSE 0
+
+/* Dev handler's parse() is going to be called */
+#define SCST_CMD_STATE_DEV_PARSE 1
+
+/* Allocation of the cmd's data buffer */
+#define SCST_CMD_STATE_PREPARE_SPACE 2
+
+/* Calling preprocessing_done() */
+#define SCST_CMD_STATE_PREPROCESSING_DONE 3
+
+/* Target driver's rdy_to_xfer() is going to be called */
+#define SCST_CMD_STATE_RDY_TO_XFER 4
+
+/* Target driver's pre_exec() is going to be called */
+#define SCST_CMD_STATE_TGT_PRE_EXEC 5
+
+/* Cmd is going to be sent for execution */
+#define SCST_CMD_STATE_SEND_FOR_EXEC 6
+
+/* Cmd is being checked if it should be executed locally */
+#define SCST_CMD_STATE_LOCAL_EXEC 7
+
+/* Cmd is ready for execution */
+#define SCST_CMD_STATE_REAL_EXEC 8
+
+/* Internal post-exec checks */
+#define SCST_CMD_STATE_PRE_DEV_DONE 9
+
+/* Internal MODE SELECT pages related checks */
+#define SCST_CMD_STATE_MODE_SELECT_CHECKS 10
+
+/* Dev handler's dev_done() is going to be called */
+#define SCST_CMD_STATE_DEV_DONE 11
+
+/* Target driver's xmit_response() is going to be called */
+#define SCST_CMD_STATE_PRE_XMIT_RESP 12
+
+/* Target driver's xmit_response() is going to be called */
+#define SCST_CMD_STATE_XMIT_RESP 13
+
+/* Cmd finished */
+#define SCST_CMD_STATE_FINISHED 14
+
+/* Internal cmd finished */
+#define SCST_CMD_STATE_FINISHED_INTERNAL 15
+
+#define SCST_CMD_STATE_LAST_ACTIVE (SCST_CMD_STATE_FINISHED_INTERNAL+100)
+
+/* A cmd is created, but scst_cmd_init_done() not called */
+#define SCST_CMD_STATE_INIT_WAIT (SCST_CMD_STATE_LAST_ACTIVE+1)
+
+/* LUN translation (cmd->tgt_dev assignment) */
+#define SCST_CMD_STATE_INIT (SCST_CMD_STATE_LAST_ACTIVE+2)
+
+/* Waiting for scst_restart_cmd() */
+#define SCST_CMD_STATE_PREPROCESSING_DONE_CALLED (SCST_CMD_STATE_LAST_ACTIVE+3)
+
+/* Waiting for data from the initiator (until scst_rx_data() called) */
+#define SCST_CMD_STATE_DATA_WAIT (SCST_CMD_STATE_LAST_ACTIVE+4)
+
+/* Waiting for CDB's execution finish */
+#define SCST_CMD_STATE_REAL_EXECUTING (SCST_CMD_STATE_LAST_ACTIVE+5)
+
+/* Waiting for response's transmission finish */
+#define SCST_CMD_STATE_XMIT_WAIT (SCST_CMD_STATE_LAST_ACTIVE+6)
+
+/*************************************************************
+ * Can be returned instead of cmd's state by dev handlers'
+ * functions, if the command's state should be set by default
+ *************************************************************/
+#define SCST_CMD_STATE_DEFAULT 500
+
+/*************************************************************
+ * Can be returned instead of cmd's state by dev handlers'
+ * functions, if it is impossible to complete the requested
+ * task in atomic context. The cmd will be restarted in thread
+ * context.
+ *************************************************************/
+#define SCST_CMD_STATE_NEED_THREAD_CTX 1000
+
+/*************************************************************
+ * Can be returned instead of cmd's state by dev handlers'
+ * parse function, if the cmd processing should be stopped
+ * for now. The cmd will be restarted by the dev handler itself.
+ *************************************************************/
+#define SCST_CMD_STATE_STOP 1001
+
+/*************************************************************
+ ** States of mgmt command processing state machine
+ *************************************************************/
+
+/* LUN translation (mcmd->tgt_dev assignment) */
+#define SCST_MCMD_STATE_INIT 0
+
+/* Mgmt cmd is ready for processing */
+#define SCST_MCMD_STATE_READY 1
+
+/* Mgmt cmd is being executed */
+#define SCST_MCMD_STATE_EXECUTING 2
+
+/* Post check when affected commands done */
+#define SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE 3
+
+/* Target driver's task_mgmt_fn_done() is going to be called */
+#define SCST_MCMD_STATE_DONE 4
+
+/* The mcmd finished */
+#define SCST_MCMD_STATE_FINISHED 5
+
+/*************************************************************
+ ** Constants for "atomic" parameter of SCST's functions
+ *************************************************************/
+#define SCST_NON_ATOMIC 0
+#define SCST_ATOMIC 1
+
+/*************************************************************
+ ** Values for pref_context parameter of scst_cmd_init_done(),
+ ** scst_rx_data(), scst_restart_cmd(), scst_tgt_cmd_done()
+ ** and scst_cmd_done()
+ *************************************************************/
+
+enum scst_exec_context {
+ /*
+ * Direct cmd's processing (i.e. regular function calls in the current
+ * context), sleeping is not allowed
+ */
+ SCST_CONTEXT_DIRECT_ATOMIC,
+
+ /*
+ * Direct cmd's processing (i.e. regular function calls in the current
+ * context), sleeping is allowed, no restrictions
+ */
+ SCST_CONTEXT_DIRECT,
+
+ /* Tasklet or thread context required for cmd's processing */
+ SCST_CONTEXT_TASKLET,
+
+ /* Thread context required for cmd's processing */
+ SCST_CONTEXT_THREAD,
+
+ /*
+ * Context is the same as it was in the previous call of the
+ * corresponding callback. For example, if a dev handler's exec() does
+ * synchronous data reading, this value should be used for
+ * scst_cmd_done(). The same is true if scst_tgt_cmd_done() is called
+ * directly from the target driver's xmit_response(). Not allowed in
+ * scst_cmd_init_done() and scst_cmd_init_stage1_done().
+ */
+ SCST_CONTEXT_SAME
+};
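+
+/*
+ * Illustrative sketch (the exact prototype of scst_cmd_init_done() is
+ * assumed here): a target driver that builds a cmd in its interrupt
+ * handler would typically hand it to SCST with
+ *
+ *	scst_cmd_init_done(cmd, SCST_CONTEXT_TASKLET);
+ *
+ * while the same call made from a kernel thread could use
+ * SCST_CONTEXT_DIRECT, since sleeping is allowed there.
+ */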
+
+/*************************************************************
+ ** Values for status parameter of scst_rx_data()
+ *************************************************************/
+
+/* Success */
+#define SCST_RX_STATUS_SUCCESS 0
+
+/*
+ * Data receiving finished with error, so set the sense and
+ * finish the command, including xmit_response() call
+ */
+#define SCST_RX_STATUS_ERROR 1
+
+/*
+ * Data receiving finished with error and the sense is set,
+ * so finish the command, including xmit_response() call
+ */
+#define SCST_RX_STATUS_ERROR_SENSE_SET 2
+
+/*
+ * Data receiving finished with fatal error, so finish the command,
+ * but don't call xmit_response()
+ */
+#define SCST_RX_STATUS_ERROR_FATAL 3
+
+/*************************************************************
+ ** Values for status parameter of scst_restart_cmd()
+ *************************************************************/
+
+/* Success */
+#define SCST_PREPROCESS_STATUS_SUCCESS 0
+
+/*
+ * Command's processing finished with error, so set the sense and
+ * finish the command, including xmit_response() call
+ */
+#define SCST_PREPROCESS_STATUS_ERROR 1
+
+/*
+ * Command's processing finished with error and the sense is set,
+ * so finish the command, including xmit_response() call
+ */
+#define SCST_PREPROCESS_STATUS_ERROR_SENSE_SET 2
+
+/*
+ * Command's processing finished with fatal error, so finish the command,
+ * but don't call xmit_response()
+ */
+#define SCST_PREPROCESS_STATUS_ERROR_FATAL 3
+
+/* Thread context requested */
+#define SCST_PREPROCESS_STATUS_NEED_THREAD 4
+
+/*************************************************************
+ ** Values for AEN functions
+ *************************************************************/
+
+/*
+ * SCSI Asynchronous Event. Parameter contains SCSI sense
+ * (Unit Attention). AENs are generated only for the following 2 UAs:
+ * CAPACITY DATA HAS CHANGED and REPORTED LUNS DATA HAS CHANGED.
+ * Other UAs are reported normally as CHECK CONDITION status,
+ * because it doesn't look safe to report them using AENs, since
+ * reporting using AENs opens delivery race windows even in case of
+ * untagged commands.
+ */
+#define SCST_AEN_SCSI 0
+
+/*************************************************************
+ ** Allowed return/status codes for report_aen() callback and
+ ** scst_set_aen_delivery_status() function
+ *************************************************************/
+
+/* Success */
+#define SCST_AEN_RES_SUCCESS 0
+
+/* Not supported */
+#define SCST_AEN_RES_NOT_SUPPORTED -1
+
+/* Failure */
+#define SCST_AEN_RES_FAILED -2
+
+/*************************************************************
+ ** Allowed return codes for xmit_response(), rdy_to_xfer()
+ *************************************************************/
+
+/* Success */
+#define SCST_TGT_RES_SUCCESS 0
+
+/* Internal device queue is full, retry again later */
+#define SCST_TGT_RES_QUEUE_FULL -1
+
+/*
+ * It is impossible to complete requested task in atomic context.
+ * The cmd will be restarted in thread context.
+ */
+#define SCST_TGT_RES_NEED_THREAD_CTX -2
+
+/*
+ * Fatal error: if returned by xmit_response(), the cmd will
+ * be destroyed; if returned by any other function, xmit_response()
+ * will be called with HARDWARE ERROR sense data
+ */
+#define SCST_TGT_RES_FATAL_ERROR -3
+
+/*************************************************************
+ ** Allowed return codes for dev handler's exec()
+ *************************************************************/
+
+/* The cmd is done, go to other ones */
+#define SCST_EXEC_COMPLETED 0
+
+/* The cmd should be sent to SCSI mid-level */
+#define SCST_EXEC_NOT_COMPLETED 1
+
+/*
+ * Thread context is required to execute the command.
+ * Exec() will be called again in the thread context.
+ */
+#define SCST_EXEC_NEED_THREAD 2
+
+/*
+ * Set if cmd is finished and there is status/sense to be sent.
+ * The status should not be sent (i.e. the flag not set) if the
+ * possibility to perform a command in "chunks" (i.e. with multiple
+ * xmit_response()/rdy_to_xfer()) is used (not implemented yet).
+ * Obsolete, use scst_cmd_get_is_send_status() instead.
+ */
+#define SCST_TSC_FLAG_STATUS 0x2
+
+/*************************************************************
+ ** Additional return code for dev handler's task_mgmt_fn()
+ *************************************************************/
+
+/* Regular standard actions for the command should be done */
+#define SCST_DEV_TM_NOT_COMPLETED 1
+
+/*************************************************************
+ ** Session initialization phases
+ *************************************************************/
+
+/* Set if session is being initialized */
+#define SCST_SESS_IPH_INITING 0
+
+/* Set if the session is successfully initialized */
+#define SCST_SESS_IPH_SUCCESS 1
+
+/* Set if the session initialization failed */
+#define SCST_SESS_IPH_FAILED 2
+
+/* Set if session is initialized and ready */
+#define SCST_SESS_IPH_READY 3
+
+/*************************************************************
+ ** Session shutdown phases
+ *************************************************************/
+
+/* Set if session is initialized and ready */
+#define SCST_SESS_SPH_READY 0
+
+/* Set if session is shutting down */
+#define SCST_SESS_SPH_SHUTDOWN 1
+
+/*************************************************************
+ ** Session's async (atomic) flags
+ *************************************************************/
+
+/* Set if the sess's hw pending work is scheduled */
+#define SCST_SESS_HW_PENDING_WORK_SCHEDULED 0
+
+/*************************************************************
+ ** Cmd's async (atomic) flags
+ *************************************************************/
+
+/* Set if the cmd is aborted and ABORTED sense will be sent as the result */
+#define SCST_CMD_ABORTED 0
+
+/* Set if the cmd is aborted by other initiator */
+#define SCST_CMD_ABORTED_OTHER 1
+
+/* Set if no response should be sent to the target about this cmd */
+#define SCST_CMD_NO_RESP 2
+
+/* Set if the cmd is dead and can be destroyed at any time */
+#define SCST_CMD_CAN_BE_DESTROYED 3
+
+/*
+ * Set if the cmd's device has TAS flag set. Used only when aborted by
+ * other initiator.
+ */
+#define SCST_CMD_DEVICE_TAS 4
+
+/*************************************************************
+ ** Tgt_dev's async. flags (tgt_dev_flags)
+ *************************************************************/
+
+/* Set if tgt_dev has Unit Attention sense */
+#define SCST_TGT_DEV_UA_PENDING 0
+
+/* Set if tgt_dev is RESERVED by another session */
+#define SCST_TGT_DEV_RESERVED 1
+
+/* Set if the corresponding context is atomic */
+#define SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC 5
+#define SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC 6
+#define SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC 7
+#define SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC 8
+#define SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC 9
+#define SCST_TGT_DEV_AFTER_EXEC_ATOMIC 10
+
+#define SCST_TGT_DEV_CLUST_POOL 11
+
+/*************************************************************
+ ** I/O grouping types. When changing them, don't forget to change
+ ** the corresponding *_STR values in scst_const.h!
+ *************************************************************/
+
+/*
+ * All initiators with the same name connected to this group will have
+ * a shared IO context (one context per name). All initiators with
+ * different names will have their own IO contexts.
+ */
+#define SCST_IO_GROUPING_AUTO 0
+
+/* All initiators connected to this group will have shared IO context */
+#define SCST_IO_GROUPING_THIS_GROUP_ONLY -1
+
+/* Each initiator connected to this group will have own IO context */
+#define SCST_IO_GROUPING_NEVER -2
+
+/*************************************************************
+ ** Activities suspending timeout
+ *************************************************************/
+#define SCST_SUSPENDING_TIMEOUT (90 * HZ)
+
+/*************************************************************
+ ** Kernel cache creation helper
+ *************************************************************/
+#ifndef KMEM_CACHE
+#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
+ sizeof(struct __struct), __alignof__(struct __struct),\
+ (__flags), NULL, NULL)
+#endif
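+
+/*
+ * Usage sketch (the cache variable name is illustrative): creates a slab
+ * cache sized and aligned for the given structure type, e.g.
+ *
+ *	cmd_cachep = KMEM_CACHE(scst_cmd, 0);
+ */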
+
+/*************************************************************
+ ** Valid_mask constants for scst_analyze_sense()
+ *************************************************************/
+
+#define SCST_SENSE_KEY_VALID 1
+#define SCST_SENSE_ASC_VALID 2
+#define SCST_SENSE_ASCQ_VALID 4
+
+#define SCST_SENSE_ASCx_VALID (SCST_SENSE_ASC_VALID | \
+ SCST_SENSE_ASCQ_VALID)
+
+#define SCST_SENSE_ALL_VALID (SCST_SENSE_KEY_VALID | \
+ SCST_SENSE_ASC_VALID | \
+ SCST_SENSE_ASCQ_VALID)
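+
+/*
+ * Sketch of intended use (the exact prototype of scst_analyze_sense() is
+ * assumed here): the valid_mask argument selects which of key/ASC/ASCQ
+ * must match, e.g. matching any sense with ASC 0x29 and ASCQ 0,
+ * regardless of the sense key:
+ *
+ *	scst_analyze_sense(sense, sense_len, SCST_SENSE_ASCx_VALID,
+ *		SCST_LOAD_SENSE(scst_sense_reset_UA));
+ */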
+
+/*************************************************************
+ * TYPES
+ *************************************************************/
+
+struct scst_tgt;
+struct scst_session;
+struct scst_cmd;
+struct scst_mgmt_cmd;
+struct scst_device;
+struct scst_tgt_dev;
+struct scst_dev_type;
+struct scst_acg;
+struct scst_acg_dev;
+struct scst_acn;
+struct scst_aen;
+
+/*
+ * SCST uses 64-bit numbers to represent LUNs internally. The value
+ * NO_SUCH_LUN is guaranteed to be different from every valid LUN.
+ */
+#define NO_SUCH_LUN ((uint64_t)-1)
+
+typedef enum dma_data_direction scst_data_direction;
+
+/*
+ * SCST target template: defines target driver's parameters and callback
+ * functions.
+ *
+ * MUST HAVEs define functions that are expected to be defined in order to
+ * work. OPTIONAL says that there is a choice.
+ */
+struct scst_tgt_template {
+ /* public: */
+
+ /*
+ * SG tablesize allows checking whether scatter/gather can be used
+ * or not.
+ */
+ int sg_tablesize;
+
+ /*
+ * True, if this target adapter uses unchecked DMA onto an ISA bus.
+ */
+ unsigned unchecked_isa_dma:1;
+
+ /*
+ * True, if this target adapter can benefit from using SG-vector
+ * clustering (i.e. smaller number of segments).
+ */
+ unsigned use_clustering:1;
+
+ /*
+ * True, if this target adapter doesn't support SG-vector clustering
+ */
+ unsigned no_clustering:1;
+
+ /*
+ * True, if corresponding function supports execution in
+ * the atomic (non-sleeping) context
+ */
+ unsigned xmit_response_atomic:1;
+ unsigned rdy_to_xfer_atomic:1;
+
+ /*
+ * The maximum time in seconds cmd can stay inside the target
+ * hardware, i.e. after rdy_to_xfer() and xmit_response(), before
+ * on_hw_pending_cmd_timeout() will be called, if defined.
+ *
+ * In the current implementation a cmd will be aborted in time t
+ * max_hw_pending_time <= t < 2*max_hw_pending_time.
+ */
+ int max_hw_pending_time;
+
+ /*
+ * This function is equivalent to the SCSI
+ * queuecommand. The target should transmit the response
+ * buffer and the status in the scst_cmd struct.
+ * The expectation is that executing this command is NON-BLOCKING.
+ * If it is blocking, consider setting threads_num to some non-zero number.
+ *
+ * After the response is actually transmitted, the target
+ * should call the scst_tgt_cmd_done() function of the
+ * mid-level, which will allow it to free up the command.
+ * Returns one of the SCST_TGT_RES_* constants.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * MUST HAVE
+ */
+ int (*xmit_response) (struct scst_cmd *cmd);
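+
+	/*
+	 * Minimal sketch of an xmit_response() implementation. The helper
+	 * my_hw_send_response() is hypothetical; the SCST_TGT_RES_* codes are
+	 * defined above and the exact prototype of scst_tgt_cmd_done() is
+	 * assumed (calling it directly from xmit_response() with
+	 * SCST_CONTEXT_SAME is explicitly allowed, see above):
+	 *
+	 *	static int my_xmit_response(struct scst_cmd *cmd)
+	 *	{
+	 *		if (my_hw_send_response(cmd) != 0)
+	 *			return SCST_TGT_RES_QUEUE_FULL;
+	 *		scst_tgt_cmd_done(cmd, SCST_CONTEXT_SAME);
+	 *		return SCST_TGT_RES_SUCCESS;
+	 *	}
+	 */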
+
+ /*
+ * This function informs the driver that the data
+ * buffer corresponding to the said command has now been
+ * allocated and it is OK to receive data for this command.
+ * This function is necessary because a SCSI target does not
+ * have any control over the commands it receives. Most lower
+ * level protocols have a corresponding function which informs
+ * the initiator that buffers have been allocated e.g., XFER_
+ * RDY in Fibre Channel. After the data is actually received
+ * the low-level driver needs to call scst_rx_data() in order to
+ * continue processing this command.
+ * Returns one of the SCST_TGT_RES_* constants.
+ *
+ * This command is expected to be NON-BLOCKING.
+ * If it is blocking, consider setting threads_num to some non-zero number.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * OPTIONAL
+ */
+ int (*rdy_to_xfer) (struct scst_cmd *cmd);
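+
+	/*
+	 * Minimal sketch of an rdy_to_xfer() implementation. The helper
+	 * my_hw_request_data() is hypothetical and the argument order of
+	 * scst_rx_data() is assumed:
+	 *
+	 *	static int my_rdy_to_xfer(struct scst_cmd *cmd)
+	 *	{
+	 *		if (my_hw_request_data(cmd) != 0)
+	 *			return SCST_TGT_RES_QUEUE_FULL;
+	 *		return SCST_TGT_RES_SUCCESS;
+	 *	}
+	 *
+	 * and later, when the data has actually arrived (e.g. from the
+	 * driver's completion handler):
+	 *
+	 *	scst_rx_data(cmd, SCST_RX_STATUS_SUCCESS, SCST_CONTEXT_TASKLET);
+	 */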
+
+ /*
+ * Called if the cmd stays inside the target hardware, i.e. after
+ * rdy_to_xfer() and xmit_response(), for more than max_hw_pending_time.
+ * The target driver is supposed to clean up this command and
+ * resume the cmd's processing.
+ *
+ * OPTIONAL
+ */
+ void (*on_hw_pending_cmd_timeout) (struct scst_cmd *cmd);
+
+ /*
+ * Called to notify the driver that the command is about to be freed.
+ * Necessary, because for aborted commands xmit_response() might not
+ * be called. Could be called in IRQ context.
+ *
+ * OPTIONAL
+ */
+ void (*on_free_cmd) (struct scst_cmd *cmd);
+
+ /*
+ * This function allows target driver to handle data buffer
+ * allocations on its own.
+ *
+ * The target driver doesn't always have to allocate the buffer in this
+ * function, but if it decides to do so, it must check that
+ * scst_cmd_get_data_buff_alloced() returns 0, otherwise, to avoid
+ * double buffer allocation and memory leaks, alloc_data_buf() shall
+ * fail.
+ *
+ * Shall return 0 in case of success or < 0 (preferably -ENOMEM)
+ * in case of error, or > 0 if the regular SCST allocation should be
+ * done. In case of returning successfully,
+ * scst_cmd->tgt_data_buf_alloced will be set by SCST.
+ *
+ * It is possible that both target driver and dev handler request own
+ * memory allocation. In this case, data will be memcpy() between
+ * buffers, where necessary.
+ *
+ * If allocation in atomic context - cf. scst_cmd_atomic() - is not
+ * desired or fails and consequently < 0 is returned, this function
+ * will be re-called in thread context.
+ *
+ * Please note that the driver will have to handle itself all relevant
+ * details such as scatterlist setup, highmem, freeing the allocated
+ * memory, etc.
+ *
+ * OPTIONAL.
+ */
+ int (*alloc_data_buf) (struct scst_cmd *cmd);
+
+ /*
+ * This function informs the driver that the data
+ * buffer corresponding to the said command has now been
+ * allocated and other preprocessing tasks have been done.
+ * A target driver could need to do some actions at this stage.
+ * After the target driver has done the needed actions, it shall call
+ * scst_restart_cmd() in order to continue processing this command.
+ * In case of preliminary command completion, this function will
+ * also be called before xmit_response().
+ *
+ * Called only if the cmd is queued using scst_cmd_init_stage1_done()
+ * instead of scst_cmd_init_done().
+ *
+ * Returns void, the result is expected to be returned using
+ * scst_restart_cmd().
+ *
+ * This command is expected to be NON-BLOCKING.
+ * If it is blocking, consider setting threads_num to some non-zero number.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * OPTIONAL.
+ */
+ void (*preprocessing_done) (struct scst_cmd *cmd);
+
+ /*
+ * This function informs the driver that the said command is about
+ * to be executed.
+ *
+ * Returns one of the SCST_PREPROCESS_* constants.
+ *
+ * This command is expected to be NON-BLOCKING.
+ * If it is blocking, consider setting threads_num to some non-zero number.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * OPTIONAL
+ */
+ int (*pre_exec) (struct scst_cmd *cmd);
+
+ /*
+ * This function informs the driver that all commands affected by the
+ * corresponding task management function have been completed.
+ * No return value expected.
+ *
+ * This function is expected to be NON-BLOCKING.
+ *
+ * Called without any locks held from a thread context.
+ *
+ * OPTIONAL
+ */
+ void (*task_mgmt_affected_cmds_done) (struct scst_mgmt_cmd *mgmt_cmd);
+
+ /*
+ * This function informs the driver that the corresponding task
+ * management function has been completed, i.e. all the corresponding
+ * commands completed and freed. No return value expected.
+ *
+ * This function is expected to be NON-BLOCKING.
+ *
+ * Called without any locks held from a thread context.
+ *
+ * MUST HAVE if the target supports task management.
+ */
+ void (*task_mgmt_fn_done) (struct scst_mgmt_cmd *mgmt_cmd);
+
+ /*
+ * This function should detect the target adapters that
+ * are present in the system. The function should return a value
+ * >= 0 to signify the number of detected target adapters.
+ * A negative value should be returned whenever there is
+ * an error.
+ *
+ * MUST HAVE
+ */
+ int (*detect) (struct scst_tgt_template *tgt_template);
+
+ /*
+ * This function should free up the resources allocated to the device.
+ * The function should return 0 to indicate successful release
+ * or a negative value if there are some issues with the release.
+ * In the current version the return value is ignored.
+ *
+ * MUST HAVE
+ */
+ int (*release) (struct scst_tgt *tgt);
+
+ /*
+ * This function is used for Asynchronous Event Notifications.
+ *
+ * Returns one of the SCST_AEN_RES_* constants.
+ * After AEN is sent, target driver must call scst_aen_done() and,
+ * optionally, scst_set_aen_delivery_status().
+ *
+ * This command is expected to be NON-BLOCKING, but can sleep.
+ *
+ * MUST HAVE, if low-level protocol supports AENs.
+ */
+ int (*report_aen) (struct scst_aen *aen);
+
+ /*
+ * This function allows enabling or disabling a particular target.
+ * A disabled target doesn't receive or process any SCSI commands.
+ *
+ * SHOULD HAVE to avoid a race when initiators are already connected
+ * while the target has not yet completed its initial configuration.
+ * In this case such too early connected initiators would not see the
+ * devices they intended to see.
+ */
+ int (*enable_target) (struct scst_tgt *tgt, bool enable);
+
+ /*
+ * This function shows whether a particular target is enabled or not.
+ *
+ * SHOULD HAVE, see above why.
+ */
+ bool (*is_target_enabled) (struct scst_tgt *tgt);
+
+ /*
+ * This function adds a virtual target.
+ *
+ * If both the add_target and del_target callbacks are defined, then
+ * this target driver is supposed to support virtual targets. In this
+ * case an "mgmt" entry will be created in the sysfs root for this driver.
+ * The "mgmt" entry will support 2 commands: "add_target" and
+ * "del_target", for which the corresponding callbacks will be called.
+ * The target driver can also define its own commands for the "mgmt"
+ * entry, see mgmt_cmd and mgmt_cmd_help below.
+ *
+ * This approach allows uniform target management, which simplifies
+ * external management tools like scstadmin. See README for more details.
+ *
+ * Either both add_target and del_target must be defined, or none.
+ *
+ * MUST HAVE if virtual targets are supported.
+ */
+ ssize_t (*add_target) (const char *target_name, char *params);
+
+ /*
+ * This function deletes a virtual target. See comment for add_target
+ * above.
+ *
+ * MUST HAVE if virtual targets are supported.
+ */
+ ssize_t (*del_target) (const char *target_name);
+
+ /*
+ * This function is called if a command other than "add_target" or
+ * "del_target" is sent to the mgmt entry (see comment for add_target
+ * above). In this case the command is passed to this function as-is,
+ * in string form.
+ *
+ * OPTIONAL.
+ */
+ ssize_t (*mgmt_cmd) (char *cmd);
+
+ /*
+ * Name of the template. Must be unique to identify
+ * the template. MUST HAVE
+ */
+ const char name[SCST_MAX_NAME];
+
+ /*
+ * Number of additional threads to the pool of dedicated threads.
+ * Used if xmit_response() or rdy_to_xfer() is blocking.
+ * It is the target driver's duty to ensure that no more than that
+ * number of threads are blocked in those functions at any time.
+ */
+ int threads_num;
+
+ /* Optional default log flags */
+ const unsigned long default_trace_flags;
+
+ /* Optional pointer to trace flags */
+ unsigned long *trace_flags;
+
+ /* Optional local trace table */
+ struct scst_trace_log *trace_tbl;
+
+ /* Optional local trace table help string */
+ const char *trace_tbl_help;
+
+ /* Optional sysfs attributes */
+ const struct attribute **tgtt_attrs;
+
+ /* Optional sysfs target attributes */
+ const struct attribute **tgt_attrs;
+
+ /* Optional sysfs session attributes */
+ const struct attribute **sess_attrs;
+
+ /* Optional help string for mgmt_cmd commands */
+ const char *mgmt_cmd_help;
+
+ /* Optional help string for add_target parameters */
+ const char *add_target_parameters_help;
+
+ /** Private, must be inited to 0 by memset() **/
+
+ /* List of targets per template, protected by scst_mutex */
+ struct list_head tgt_list;
+
+ /* List entry of global templates list */
+ struct list_head scst_template_list_entry;
+
+ /* Set if tgtt_kobj was initialized */
+ unsigned int tgtt_kobj_initialized:1;
+
+ struct kobject tgtt_kobj; /* kobject for this struct */
+
+ struct completion tgtt_kobj_release_cmpl;
+
+};
+
+/*
+ * Threads pool types. When changing them, don't forget to change
+ * the corresponding *_STR values in scst_const.h!
+ */
+enum scst_dev_type_threads_pool_type {
+ /* Each initiator will have dedicated threads pool. */
+ SCST_THREADS_POOL_PER_INITIATOR = 0,
+
+ /* All connected initiators will use shared threads pool */
+ SCST_THREADS_POOL_SHARED,
+
+ /* Invalid value for scst_parse_threads_pool_type() */
+ SCST_THREADS_POOL_TYPE_INVALID,
+};
+
+/*
+ * SCST dev handler template: defines dev handler's parameters and callback
+ * functions.
+ *
+ * MUST HAVEs define functions that are expected to be defined in order to
+ * work. OPTIONAL says that there is a choice.
+ */
+struct scst_dev_type {
+ /* SCSI type of the supported device. MUST HAVE */
+ int type;
+
+ /* True, if this dev handler is a pass-through dev handler */
+ unsigned pass_through:1;
+
+ /*
+ * True, if corresponding function supports execution in
+ * the atomic (non-sleeping) context
+ */
+ unsigned parse_atomic:1;
+ unsigned exec_atomic:1;
+ unsigned dev_done_atomic:1;
+
+ /*
+ * Should be true, if exec() is synchronous. This is a hint to SCST core
+ * to optimize commands order management.
+ */
+ unsigned exec_sync:1;
+
+ /*
+ * Called to parse CDB from the cmd and initialize
+ * cmd->bufflen and cmd->data_direction (both - REQUIRED).
+ * Returns the command's next state or SCST_CMD_STATE_DEFAULT,
+ * if the next default state should be used, or
+ * SCST_CMD_STATE_NEED_THREAD_CTX if the function is called in atomic
+ * context, but requires sleeping, or SCST_CMD_STATE_STOP if the
+ * command should not be further processed for now. In the
+ * SCST_CMD_STATE_NEED_THREAD_CTX case the function
+ * will be recalled in the thread context, where sleeping is allowed.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * MUST HAVE
+ */
+ int (*parse) (struct scst_cmd *cmd);
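+
+	/*
+	 * Minimal parse() sketch. Direct access to cmd->cdb is assumed
+	 * (accessor functions may exist instead) and the CDB decoding is
+	 * intentionally oversimplified (512-byte blocks):
+	 *
+	 *	static int my_parse(struct scst_cmd *cmd)
+	 *	{
+	 *		if (cmd->cdb[0] == READ_10) {
+	 *			cmd->data_direction = SCST_DATA_READ;
+	 *			cmd->bufflen = get_unaligned_be16(&cmd->cdb[7]) << 9;
+	 *		}
+	 *		return SCST_CMD_STATE_DEFAULT;
+	 *	}
+	 */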
+
+ /*
+ * Called to execute CDB. Useful, for instance, to implement
+ * data caching. The result of CDB execution is reported via
+ * cmd->scst_cmd_done() callback.
+ * Returns:
+ * - SCST_EXEC_COMPLETED - the cmd is done, go to other ones
+ * - SCST_EXEC_NEED_THREAD - thread context is required to execute
+ * the command. Exec() will be called again in the thread context.
+ * - SCST_EXEC_NOT_COMPLETED - the cmd should be sent to SCSI
+ * mid-level.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ *
+ * If this function provides sync execution, you should set the
+ * exec_sync flag and consider setting up dedicated threads by
+ * setting threads_num > 0.
+ *
+ * !! If this function is implemented, scst_check_local_events() !!
+ * !! shall be called inside it just before the actual command's !!
+ * !! execution. !!
+ *
+ * OPTIONAL, if not set, the commands will be sent directly to SCSI
+ * device.
+ */
+ int (*exec) (struct scst_cmd *cmd);
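+
+	/*
+	 * Skeleton sketch of an exec() implementation. The return-value
+	 * convention of scst_check_local_events() is assumed here (non-zero
+	 * meaning the command was already finished by a local event), and
+	 * my_serve_from_cache() is a hypothetical helper that completes the
+	 * cmd via cmd->scst_cmd_done():
+	 *
+	 *	static int my_exec(struct scst_cmd *cmd)
+	 *	{
+	 *		if (scst_check_local_events(cmd) != 0)
+	 *			return SCST_EXEC_COMPLETED;
+	 *		if (my_serve_from_cache(cmd))
+	 *			return SCST_EXEC_COMPLETED;
+	 *		return SCST_EXEC_NOT_COMPLETED;
+	 *	}
+	 */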
+
+ /*
+ * Called to notify dev handler about the result of cmd execution
+ * and perform some post processing. Cmd's fields is_send_status and
+ * resp_data_len should be set by this function, but SCST offers good
+ * defaults.
+ * Returns the command's next state or SCST_CMD_STATE_DEFAULT,
+ * if the next default state should be used, or
+ * SCST_CMD_STATE_NEED_THREAD_CTX if the function is called in atomic
+ * context, but requires sleeping. In the latter case, the function
+ * will be recalled in the thread context, where sleeping is allowed.
+ *
+ * Pay attention to the "atomic" attribute of the cmd, which can be
+ * obtained via scst_cmd_atomic(): it is true if the function is called
+ * in atomic (non-sleeping) context.
+ */
+ int (*dev_done) (struct scst_cmd *cmd);
+
+ /*
+ * Called to notify the dev handler that the command is about to be freed.
+ * Could be called in IRQ context.
+ */
+ void (*on_free_cmd) (struct scst_cmd *cmd);
+
+ /*
+ * Called to execute a task management command.
+ * Returns:
+ * - SCST_MGMT_STATUS_SUCCESS - the command is done with success,
+ * no further actions required
+ * - The SCST_MGMT_STATUS_* error code if the command is failed and
+ * no further actions required
+ * - SCST_DEV_TM_NOT_COMPLETED - regular standard actions for the
+ * command should be done
+ *
+ * Called without any locks held from a thread context.
+ */
+ int (*task_mgmt_fn) (struct scst_mgmt_cmd *mgmt_cmd,
+ struct scst_tgt_dev *tgt_dev);
+
+ /*
+ * Called when a new device is attaching to the dev handler.
+ * Returns 0 on success, error code otherwise.
+ */
+ int (*attach) (struct scst_device *dev);
+
+ /* Called when new device is detaching from the dev handler */
+ void (*detach) (struct scst_device *dev);
+
+ /*
+ * Called when new tgt_dev (session) is attaching to the dev handler.
+ * Returns 0 on success, error code otherwise.
+ */
+ int (*attach_tgt) (struct scst_tgt_dev *tgt_dev);
+
+ /* Called when a tgt_dev (session) is detaching from the dev handler */
+ void (*detach_tgt) (struct scst_tgt_dev *tgt_dev);
+
+ /*
+ * This function adds a virtual device.
+ *
+ * If both the add_device and del_device callbacks are defined, then
+ * this dev handler is supposed to support adding/deleting virtual
+ * devices. In this case an "mgmt" entry will be created in the sysfs
+ * root for this handler. The "mgmt" entry will support 2 commands:
+ * "add_device" and "del_device", for which the corresponding callbacks
+ * will be called. The dev handler can also define its own commands for
+ * the "mgmt" entry, see mgmt_cmd and mgmt_cmd_help below.
+ *
+ * This approach allows uniform device management, which simplifies
+ * external management tools like scstadmin. See README for more details.
+ *
+ * Either both add_device and del_device must be defined, or none.
+ *
+ * MUST HAVE if virtual devices are supported.
+ */
+ ssize_t (*add_device) (const char *device_name, char *params);
+
+ /*
+ * This function deletes a virtual device. See comment for add_device
+ * above.
+ *
+ * MUST HAVE if virtual devices are supported.
+ */
+ ssize_t (*del_device) (const char *device_name);
+
+ /*
+ * This function is called if a command other than "add_device" or
+ * "del_device" is sent to the mgmt entry (see the comment for add_device
+ * above). In this case the command is passed to this function as-is,
+ * in string form.
+ *
+ * OPTIONAL.
+ */
+ ssize_t (*mgmt_cmd) (char *cmd);
+
+ /*
+ * Name of the dev handler. Must be unique. MUST HAVE.
+ *
+ * It's SCST_MAX_NAME plus a few more bytes to match scst_user expectations.
+ */
+ char name[SCST_MAX_NAME + 10];
+
+ /*
+ * Number of threads in this handler's devices' thread pools.
+ * If 0, no threads will be created; if <0, creation of the thread
+ * pools is prohibited. Also pay attention to threads_pool_type below.
+ */
+ int threads_num;
+
+ /* Threads pool type. Valid only if threads_num > 0. */
+ enum scst_dev_type_threads_pool_type threads_pool_type;
+
+ /* Optional default trace flags */
+ const unsigned long default_trace_flags;
+
+ /* Optional pointer to trace flags */
+ unsigned long *trace_flags;
+
+ /* Optional local trace table */
+ struct scst_trace_log *trace_tbl;
+
+ /* Optional local trace table help string */
+ const char *trace_tbl_help;
+
+ /* Optional help string for mgmt_cmd commands */
+ const char *mgmt_cmd_help;
+
+ /* Optional help string for add_device parameters */
+ const char *add_device_parameters_help;
+
+ /* Optional sysfs attributes */
+ const struct attribute **devt_attrs;
+
+ /* Optional sysfs device attributes */
+ const struct attribute **dev_attrs;
+
+ /* Pointer to dev handler's private data */
+ void *devt_priv;
+
+ /* Pointer to parent dev type in the sysfs hierarchy */
+ struct scst_dev_type *parent;
+
+ struct module *module;
+
+ /** Private, must be initialized to 0 by memset() **/
+
+ /* list entry in scst_dev_type_list */
+ struct list_head dev_type_list_entry;
+
+ unsigned int devt_kobj_initialized:1;
+
+ struct kobject devt_kobj; /* main handlers/driver */
+
+ /* To wait until devt_kobj is released */
+ struct completion devt_kobj_release_compl;
+};
+
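+/*
+ * Illustrative sketch only: how a dev handler might fill the structure
+ * above. All "my_" names are hypothetical and refer to the sketches in
+ * the member comments; only name and parse() are mandatory, the rest is
+ * optional. The structure is then passed to scst_register_dev_driver()
+ * or scst_register_virtual_dev_driver(), declared below.
+ *
+ *	static struct scst_dev_type my_devtype = {
+ *		.name		= "my_handler",
+ *		.threads_num	= 1,
+ *		.parse		= my_parse,
+ *		.exec		= my_exec,
+ *		.dev_done	= my_dev_done,
+ *	};
+ */
+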
+/*
+ * An SCST target, analog of SCSI target port.
+ */
+struct scst_tgt {
+ /* List of remote sessions per target, protected by scst_mutex */
+ struct list_head sess_list;
+
+ /* List entry of targets per template (tgts_list) */
+ struct list_head tgt_list_entry;
+
+ struct scst_tgt_template *tgtt; /* corresponding target template */
+
+ struct scst_acg *default_acg; /* The default acg for this target. */
+
+ /*
+ * Device ACG groups
+ */
+ struct list_head tgt_acg_list;
+
+ /*
+ * Maximum SG table size. Needed here, since different cards on the
+ * same target template can have different SG table limitations.
+ */
+ int sg_tablesize;
+
+ /* Used for storage of target driver private stuff */
+ void *tgt_priv;
+
+ /*
+ * The following fields are used to store and retry cmds if the target's
+ * internal queue is full, so the target is unable to accept
+ * the cmd and returns QUEUE FULL.
+ * They are protected by tgt_lock, where necessary.
+ */
+ bool retry_timer_active;
+ struct timer_list retry_timer;
+ atomic_t finished_cmds;
+ int retry_cmds;
+ spinlock_t tgt_lock;
+ struct list_head retry_cmd_list;
+
+ /* Used to wait until sessions are finished before unregistering */
+ wait_queue_head_t unreg_waitQ;
+
+ /* Name of the target */
+ char *tgt_name;
+
+ uint16_t rel_tgt_id;
+
+ /* Set if tgt_kobj was initialized */
+ unsigned int tgt_kobj_initialized:1;
+
+ /* Set if scst_tgt_sysfs_prepare_put() was called for tgt_kobj */
+ unsigned int tgt_kobj_put_prepared:1;
+
+ /*
+ * Used to protect sysfs attributes from being called after this
+ * object has been unregistered.
+ */
+ struct rw_semaphore tgt_attr_rwsem;
+
+ struct kobject tgt_kobj; /* main targets/target kobject */
+ struct kobject *tgt_sess_kobj; /* target/sessions/ */
+ struct kobject *tgt_luns_kobj; /* target/luns/ */
+ struct kobject *tgt_ini_grp_kobj; /* target/ini_groups/ */
+};
+
+/* Hash size and hash fn for hash based lun translation */
+#define TGT_DEV_HASH_SHIFT 5
+#define TGT_DEV_HASH_SIZE (1 << TGT_DEV_HASH_SHIFT)
+#define HASH_VAL(_val) (_val & (TGT_DEV_HASH_SIZE - 1))
+
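+/*
+ * Illustrative sketch only: how the hash above is used to look up the
+ * tgt_dev for a LUN within a session (see sess_tgt_dev_list_hash in
+ * struct scst_session below). A real lookup also has to handle the
+ * not-found case:
+ *
+ *	struct list_head *head =
+ *		&sess->sess_tgt_dev_list_hash[HASH_VAL(lun)];
+ *	struct scst_tgt_dev *tgt_dev;
+ *
+ *	list_for_each_entry(tgt_dev, head, sess_tgt_dev_list_entry) {
+ *		if (tgt_dev->lun == lun)
+ *			break;
+ *	}
+ */
+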
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+/* Defines extended latency statistics */
+struct scst_ext_latency_stat {
+ uint64_t scst_time_rd, tgt_time_rd, dev_time_rd;
+ unsigned int processed_cmds_rd;
+ uint64_t min_scst_time_rd, min_tgt_time_rd, min_dev_time_rd;
+ uint64_t max_scst_time_rd, max_tgt_time_rd, max_dev_time_rd;
+
+ uint64_t scst_time_wr, tgt_time_wr, dev_time_wr;
+ unsigned int processed_cmds_wr;
+ uint64_t min_scst_time_wr, min_tgt_time_wr, min_dev_time_wr;
+ uint64_t max_scst_time_wr, max_tgt_time_wr, max_dev_time_wr;
+};
+
+#define SCST_IO_SIZE_THRESHOLD_SMALL (8*1024)
+#define SCST_IO_SIZE_THRESHOLD_MEDIUM (32*1024)
+#define SCST_IO_SIZE_THRESHOLD_LARGE (128*1024)
+#define SCST_IO_SIZE_THRESHOLD_VERY_LARGE (512*1024)
+
+#define SCST_LATENCY_STAT_INDEX_SMALL 0
+#define SCST_LATENCY_STAT_INDEX_MEDIUM 1
+#define SCST_LATENCY_STAT_INDEX_LARGE 2
+#define SCST_LATENCY_STAT_INDEX_VERY_LARGE 3
+#define SCST_LATENCY_STAT_INDEX_OTHER 4
+#define SCST_LATENCY_STATS_NUM (SCST_LATENCY_STAT_INDEX_OTHER + 1)
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+/*
+ * SCST session, analog of SCSI I_T nexus
+ */
+struct scst_session {
+ /*
+ * Initialization phase, one of SCST_SESS_IPH_* constants, protected by
+ * sess_list_lock
+ */
+ int init_phase;
+
+ struct scst_tgt *tgt; /* corresponding target */
+
+ /* Used for storage of target driver private stuff */
+ void *tgt_priv;
+
+ unsigned long sess_aflags; /* session's async flags */
+
+ /*
+ * Hash list of tgt_dev's for this session, protected by scst_mutex
+ * and suspended activity
+ */
+ struct list_head sess_tgt_dev_list_hash[TGT_DEV_HASH_SIZE];
+
+ /*
+ * List of cmds in this session. Protected by sess_list_lock.
+ *
+ * We must always keep commands in the sess list from the
+ * very beginning, because otherwise they can be missed during
+ * TM processing.
+ */
+ struct list_head sess_cmd_list;
+
+ spinlock_t sess_list_lock; /* protects sess_cmd_list, etc */
+
+ atomic_t refcnt; /* get/put counter */
+
+ /*
+ * Alive commands for this session. ToDo: make it part of the common
+ * IO flow control.
+ */
+ atomic_t sess_cmd_count;
+
+ /* Access control for this session and list entry there */
+ struct scst_acg *acg;
+
+ /* List entry for the sessions list inside ACG */
+ struct list_head acg_sess_list_entry;
+
+ struct delayed_work hw_pending_work;
+
+ /* Name of attached initiator */
+ const char *initiator_name;
+
+ /* List entry of sessions per target */
+ struct list_head sess_list_entry;
+
+ /* List entry for the list that keeps sessions waiting for init */
+ struct list_head sess_init_list_entry;
+
+ /*
+ * List entry for the list that keeps sessions waiting for shutdown
+ */
+ struct list_head sess_shut_list_entry;
+
+ /*
+ * Lists of commands deferred during session initialization.
+ * Protected by sess_list_lock.
+ */
+ struct list_head init_deferred_cmd_list;
+ struct list_head init_deferred_mcmd_list;
+
+ /*
+ * Shutdown phase, one of SCST_SESS_SPH_* constants, unprotected.
+ * Async with respect to init_phase; must be a separate variable, because
+ * the session could be unregistered before the async registration is finished.
+ */
+ unsigned long shut_phase;
+
+ /* Used if scst_unregister_session() called in wait mode */
+ struct completion *shutdown_compl;
+
+ /* Set if sess_kobj was initialized */
+ unsigned int sess_kobj_initialized:1;
+
+ /*
+ * Used to protect sysfs attributes to be called after this
+ * object was unregistered.
+ */
+ struct rw_semaphore sess_attr_rwsem;
+
+ struct kobject sess_kobj; /* kobject for this struct */
+
+ /*
+ * Functions and data for user callbacks from scst_register_session()
+ * and scst_unregister_session()
+ */
+ void *reg_sess_data;
+ void (*init_result_fn) (struct scst_session *sess, void *data,
+ int result);
+ void (*unreg_done_fn) (struct scst_session *sess);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+ /*
+ * Must be the last field, to allow working with drivers that don't
+ * know about this config-time option.
+ */
+ spinlock_t lat_lock;
+ uint64_t scst_time, tgt_time, dev_time;
+ unsigned int processed_cmds;
+ uint64_t min_scst_time, min_tgt_time, min_dev_time;
+ uint64_t max_scst_time, max_tgt_time, max_dev_time;
+ struct scst_ext_latency_stat sess_latency_stat[SCST_LATENCY_STATS_NUM];
+#endif
+};
+
+/*
+ * Structure to control command queuing and the thread pool processing the queue
+ */
+struct scst_cmd_threads {
+ spinlock_t cmd_list_lock;
+ struct list_head active_cmd_list; /* commands queue */
+ wait_queue_head_t cmd_list_waitQ;
+
+ struct io_context *io_context; /* IO context of the threads pool */
+
+ int nr_threads; /* number of processing threads */
+ struct list_head threads_list; /* processing threads */
+
+ struct list_head lists_list_entry;
+};
+
+/*
+ * SCST command, analog of I_T_L_Q nexus or task
+ */
+struct scst_cmd {
+ /* List entry for below *_cmd_threads */
+ struct list_head cmd_list_entry;
+
+ /* Pointer to lists of commands with the lock */
+ struct scst_cmd_threads *cmd_threads;
+
+ atomic_t cmd_ref;
+
+ struct scst_session *sess; /* corresponding session */
+
+ /* Cmd state, one of SCST_CMD_STATE_* constants */
+ int state;
+
+ /*************************************************************
+ ** Cmd's flags
+ *************************************************************/
+
+ /*
+ * Set if expected_sn should be incremented, i.e. cmd was sent
+ * for execution
+ */
+ unsigned int sent_for_exec:1;
+
+ /* Set if the cmd's action is completed */
+ unsigned int completed:1;
+
+ /* Set if we should ignore Unit Attention in scst_check_sense() */
+ unsigned int ua_ignore:1;
+
+ /* Set if cmd is being processed in atomic context */
+ unsigned int atomic:1;
+
+ /* Set if this command was sent in double UA possible state */
+ unsigned int double_ua_possible:1;
+
+ /* Set if this command contains status */
+ unsigned int is_send_status:1;
+
+ /* Set if cmd is being retried */
+ unsigned int retry:1;
+
+ /* Set if cmd is internally generated */
+ unsigned int internal:1;
+
+ /* Set if the device was blocked by scst_inc_on_dev_cmd() (for debug) */
+ unsigned int inc_blocking:1;
+
+ /* Set if the device should be unblocked after cmd's finish */
+ unsigned int needs_unblocking:1;
+
+ /* Set if scst_dec_on_dev_cmd() call is needed on the cmd's finish */
+ unsigned int dec_on_dev_needed:1;
+
+ /* Set if cmd is queued as hw pending */
+ unsigned int cmd_hw_pending:1;
+
+ /*
+ * Set if the target driver wants to alloc data buffers on its own.
+ * In this case alloc_data_buf() must be provided in the target driver
+ * template.
+ */
+ unsigned int tgt_need_alloc_data_buf:1;
+
+ /*
+ * Set by SCST if the custom data buffer allocation by the target driver
+ * succeeded.
+ */
+ unsigned int tgt_data_buf_alloced:1;
+
+ /* Set if custom data buffer allocated by dev handler */
+ unsigned int dh_data_buf_alloced:1;
+
+ /* Set if the target driver called scst_set_expected() */
+ unsigned int expected_values_set:1;
+
+ /*
+ * Set if the SG buffer was modified by scst_set_resp_data_len()
+ */
+ unsigned int sg_buff_modified:1;
+
+ /*
+ * Set if scst_cmd_init_stage1_done() was called and the target
+ * wants preprocessing_done() to be called
+ */
+ unsigned int preprocessing_only:1;
+
+ /* Set if cmd's SN was set */
+ unsigned int sn_set:1;
+
+ /* Set if hq_cmd_count was incremented */
+ unsigned int hq_cmd_inced:1;
+
+ /*
+ * Set if scst_cmd_init_stage1_done() was called and the target wants
+ * the SN for the cmd not to be assigned until scst_restart_cmd()
+ */
+ unsigned int set_sn_on_restart_cmd:1;
+
+ /* Set if the cmd must not use the sgv cache for its data buffer */
+ unsigned int no_sgv:1;
+
+ /*
+ * Set if target driver may need to call dma_sync_sg() or similar
+ * function before transferring the cmd's data to the target device
+ * via DMA.
+ */
+ unsigned int may_need_dma_sync:1;
+
+ /* Set if the cmd was done or aborted out of its SN */
+ unsigned int out_of_sn:1;
+
+ /* Set if expected_sn should be incremented in cmd->scst_cmd_done() */
+ unsigned int inc_expected_sn_on_done:1;
+
+ /* Set if tgt_sn field is valid */
+ unsigned int tgt_sn_set:1;
+
+ /* Set if cmd is done */
+ unsigned int done:1;
+
+ /* Set if cmd is finished */
+ unsigned int finished:1;
+
+ /*
+ * Set if the cmd was delayed by task management debugging code.
+ * Used only if CONFIG_SCST_DEBUG_TM is on.
+ */
+ unsigned int tm_dbg_delayed:1;
+
+ /*
+ * Set if the cmd must be ignored by task management debugging code.
+ * Used only if CONFIG_SCST_DEBUG_TM is on.
+ */
+ unsigned int tm_dbg_immut:1;
+
+ /**************************************************************/
+
+ unsigned long cmd_flags; /* cmd's async flags */
+
+ /* Keeps status of cmd's status/data delivery to remote initiator */
+ int delivery_status;
+
+ struct scst_tgt_template *tgtt; /* to save extra dereferences */
+ struct scst_tgt *tgt; /* to save extra dereferences */
+ struct scst_device *dev; /* to save extra dereferences */
+
+ struct scst_tgt_dev *tgt_dev; /* corresponding device for this cmd */
+
+ uint64_t lun; /* LUN for this cmd */
+
+ unsigned long start_time;
+
+ /* List entry for tgt_dev's SN related lists */
+ struct list_head sn_cmd_list_entry;
+
+ /* Cmd's serial number, used to execute cmd's in order of arrival */
+ unsigned int sn;
+
+ /* The corresponding sn_slot in tgt_dev->sn_slots */
+ atomic_t *sn_slot;
+
+ /* List entry for sess's sess_cmd_list */
+ struct list_head sess_cmd_list_entry;
+
+ /*
+ * Used to find the cmd by scst_find_cmd_by_tag(). Set by the
+ * target driver at the cmd's initialization time
+ */
+ uint64_t tag;
+
+ uint32_t tgt_sn; /* SN set by target driver (for TM purposes) */
+
+ /* CDB and its len */
+ uint8_t cdb[SCST_MAX_CDB_SIZE];
+ short cdb_len; /* it might be -1 */
+ unsigned short ext_cdb_len;
+ uint8_t *ext_cdb;
+
+ enum scst_cdb_flags op_flags;
+ const char *op_name;
+
+ enum scst_cmd_queue_type queue_type;
+
+ int timeout; /* CDB execution timeout in seconds */
+ int retries; /* Number of retries that will be done by the SCSI mid-level */
+
+ /* SCSI data direction, one of SCST_DATA_* constants */
+ scst_data_direction data_direction;
+
+ /* Remote initiator supplied values, if any */
+ scst_data_direction expected_data_direction;
+ int expected_transfer_len;
+ int expected_in_transfer_len; /* for bidi writes */
+
+ /*
+ * Cmd data length. Could be different from bufflen for commands like
+ * VERIFY, which transfer a different amount of data (if any) than
+ * they process.
+ */
+ int data_len;
+
+ /* Completion routine */
+ void (*scst_cmd_done) (struct scst_cmd *cmd, int next_state,
+ enum scst_exec_context pref_context);
+
+ struct sgv_pool_obj *sgv; /* sgv object */
+ int bufflen; /* cmd buffer length */
+ struct scatterlist *sg; /* cmd data buffer SG vector */
+ int sg_cnt; /* SG segments count */
+
+ /*
+ * Response data length in data buffer. This field must not be set
+ * directly, use scst_set_resp_data_len() for that
+ */
+ int resp_data_len;
+
+ /* scst_get_sg_buf_[first,next]() support */
+ int get_sg_buf_entry_num;
+
+ /* Bidirectional transfers support */
+ int in_bufflen; /* WRITE buffer length */
+ struct sgv_pool_obj *in_sgv; /* WRITE sgv object */
+ struct scatterlist *in_sg; /* WRITE data buffer SG vector */
+ int in_sg_cnt; /* WRITE SG segments count */
+
+ /*
+ * Used if both target driver and dev handler request own memory
+ * allocation. In other cases, both are equal to sg and sg_cnt,
+ * respectively.
+ *
+ * If target driver requests own memory allocations, it MUST use
+ * functions scst_cmd_get_tgt_sg*() to get sg and sg_cnt! Otherwise,
+ * it may use functions scst_cmd_get_sg*().
+ */
+ struct scatterlist *tgt_sg;
+ int tgt_sg_cnt;
+ struct scatterlist *tgt_in_sg; /* bidirectional */
+ int tgt_in_sg_cnt; /* bidirectional */
+
+ /*
+ * The status fields in case of errors must be set using
+ * scst_set_cmd_error_status()!
+ */
+ uint8_t status; /* status byte from target device */
+ uint8_t msg_status; /* return status from host adapter itself */
+ uint8_t host_status; /* set by low-level driver to indicate status */
+ uint8_t driver_status; /* set by mid-level */
+
+ uint8_t *sense; /* pointer to sense buffer */
+ unsigned short sense_valid_len; /* length of valid sense data */
+ unsigned short sense_buflen; /* length of the sense buffer, if any */
+
+ /* Start time when cmd was sent to rdy_to_xfer() or xmit_response() */
+ unsigned long hw_pending_start;
+
+ /* Used for storage of target driver private stuff */
+ void *tgt_priv;
+
+ /* Used for storage of dev handler private stuff */
+ void *dh_priv;
+
+ /*
+ * Used to restore the SG vector if it was modified by
+ * scst_set_resp_data_len()
+ */
+ int orig_sg_cnt, orig_sg_entry, orig_entry_len;
+
+ /* Used to retry commands in case of double UA */
+ int dbl_ua_orig_resp_data_len, dbl_ua_orig_data_direction;
+
+ /* List of corresponding mgmt cmds, if any, protected by sess_list_lock */
+ struct list_head mgmt_cmd_list;
+
+ /* List entry for dev's blocked_cmd_list */
+ struct list_head blocked_cmd_list_entry;
+
+ struct scst_cmd *orig_cmd; /* Used to issue REQUEST SENSE */
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+ /*
+ * Must be the last to allow to work with drivers who don't know
+ * about this config time option.
+ */
+ uint64_t start, curr_start, parse_time, alloc_buf_time;
+ uint64_t restart_waiting_time, rdy_to_xfer_time;
+ uint64_t pre_exec_time, exec_time, dev_done_time;
+ uint64_t xmit_time, tgt_on_free_time, dev_on_free_time;
+#endif
+};
+
+/*
+ * Parameters for SCST management commands
+ */
+struct scst_rx_mgmt_params {
+ int fn;
+ uint64_t tag;
+ const uint8_t *lun;
+ int lun_len;
+ uint32_t cmd_sn;
+ int atomic;
+ void *tgt_priv;
+ unsigned char tag_set;
+ unsigned char lun_set;
+ unsigned char cmd_sn_set;
+};
+
+/*
+ * A stub structure to link a management command and affected regular commands
+ */
+struct scst_mgmt_cmd_stub {
+ struct scst_mgmt_cmd *mcmd;
+
+ /* List entry in cmd->mgmt_cmd_list */
+ struct list_head cmd_mgmt_cmd_list_entry;
+
+ /* set if the cmd was counted in mcmd->cmd_done_wait_count */
+ unsigned int done_counted:1;
+};
+
+/*
+ * SCST task management structure
+ */
+struct scst_mgmt_cmd {
+ /* List entry for *_mgmt_cmd_list */
+ struct list_head mgmt_cmd_list_entry;
+
+ struct scst_session *sess;
+
+ /* Mgmt cmd state, one of SCST_MCMD_STATE_* constants */
+ int state;
+
+ int fn; /* task management function */
+
+ unsigned int completed:1; /* set, if the mcmd is completed */
+ /* Set if device(s) should be unblocked after mcmd's finish */
+ unsigned int needs_unblocking:1;
+ unsigned int lun_set:1; /* set, if lun field is valid */
+ unsigned int cmd_sn_set:1; /* set, if cmd_sn field is valid */
+ /* set, if scst_mgmt_affected_cmds_done was called */
+ unsigned int affected_cmds_done_called:1;
+
+ /*
+ * Number of commands to finish before sending response,
+ * protected by scst_mcmd_lock
+ */
+ int cmd_finish_wait_count;
+
+ /*
+ * Number of commands to complete (done) before resetting reservation,
+ * protected by scst_mcmd_lock
+ */
+ int cmd_done_wait_count;
+
+ /* Number of completed commands, protected by scst_mcmd_lock */
+ int completed_cmd_count;
+
+ uint64_t lun; /* LUN for this mgmt cmd */
+ /* or (and for iSCSI) */
+ uint64_t tag; /* tag of the corresponding cmd */
+
+ uint32_t cmd_sn; /* affected command's highest SN */
+
+ /* corresponding cmd (to be aborted, found by tag) */
+ struct scst_cmd *cmd_to_abort;
+
+ /* corresponding device for this mgmt cmd (found by lun) */
+ struct scst_tgt_dev *mcmd_tgt_dev;
+
+ /* completion status, one of the SCST_MGMT_STATUS_* constants */
+ int status;
+
+ /* Used for storage of target driver private stuff */
+ void *tgt_priv;
+};
+
+/*
+ * SCST device
+ */
+struct scst_device {
+ struct scst_dev_type *handler; /* corresponding dev handler */
+
+ struct scst_mem_lim dev_mem_lim;
+
+ unsigned short type; /* SCSI type of the device */
+
+ /*************************************************************
+ ** Dev's flags. Updates serialized by dev_lock or suspended
+ ** activity
+ *************************************************************/
+
+ /* Set if dev is RESERVED */
+ unsigned short dev_reserved:1;
+
+ /* Set if double reset UA is possible */
+ unsigned short dev_double_ua_possible:1;
+
+ /* If set, dev is read only */
+ unsigned short rd_only:1;
+
+ /**************************************************************/
+
+ /*************************************************************
+ ** Dev's control mode page related values. Updates serialized
+ ** by scst_block_dev(). They are long so as not to interfere with
+ ** the above flags.
+ *************************************************************/
+
+ unsigned long queue_alg:4;
+ unsigned long tst:3;
+ unsigned long tas:1;
+ unsigned long swp:1;
+ unsigned long d_sense:1;
+
+ /*
+ * Set if the device implements its own ordered command management. If
+ * not set and queue_alg is SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER,
+ * expected_sn will be incremented only after commands have finished.
+ */
+ unsigned long has_own_order_mgmt:1;
+
+ /**************************************************************/
+
+ /* Used for storage of dev handler private stuff */
+ void *dh_priv;
+
+ /* Corresponding real SCSI device, could be NULL for virtual devices */
+ struct scsi_device *scsi_dev;
+
+ /* Lists of commands with lock, if dedicated threads are used */
+ struct scst_cmd_threads dev_cmd_threads;
+
+ /* How many cmds alive on this dev */
+ atomic_t dev_cmd_count;
+
+ /* How many write cmds alive on this dev. Temporary, ToDo */
+ atomic_t write_cmd_count;
+
+ spinlock_t dev_lock; /* device lock */
+
+ /*
+ * How many times device was blocked for new cmds execution.
+ * Protected by dev_lock
+ */
+ int block_count;
+
+ /*
+ * How many "on_dev" commands there are, i.e. ones that are being
+ * executed by the underlying SCSI/virtual device.
+ */
+ atomic_t on_dev_count;
+
+ struct list_head blocked_cmd_list; /* protected by dev_lock */
+
+ /* Used to wait for the requested number of "on_dev" commands */
+ wait_queue_head_t on_dev_waitQ;
+
+ /* A list entry used during TM, protected by scst_mutex */
+ struct list_head tm_dev_list_entry;
+
+ /* Virtual device internal ID */
+ int virt_id;
+
+ /* Pointer to virtual device name, for convenience only */
+ char *virt_name;
+
+ /* List entry in global devices list */
+ struct list_head dev_list_entry;
+
+ /*
+ * List of tgt_dev's, one per session, protected by scst_mutex or
+ * dev_lock for reads and both for writes
+ */
+ struct list_head dev_tgt_dev_list;
+
+ /* List of acg_dev's, one per acg, protected by scst_mutex */
+ struct list_head dev_acg_dev_list;
+
+ /* Number of threads in the device's threads pools */
+ int threads_num;
+
+ /* Threads pool type of the device. Valid only if threads_num > 0. */
+ enum scst_dev_type_threads_pool_type threads_pool_type;
+
+ /* Set if dev_kobj was initialized */
+ unsigned int dev_kobj_initialized:1;
+
+ /*
+ * Used to protect sysfs attributes from being called after this
+ * object has been unregistered.
+ */
+ struct rw_semaphore dev_attr_rwsem;
+
+ struct kobject dev_kobj; /* kobject for this struct */
+ struct kobject *dev_exp_kobj; /* exported groups */
+
+ /* Export number in the dev's sysfs list. Protected by scst_mutex */
+ int dev_exported_lun_num;
+};
+
+/*
+ * Used to store thread-local tgt_dev-specific data
+ */
+struct scst_thr_data_hdr {
+ /* List entry in tgt_dev->thr_data_list */
+ struct list_head thr_data_list_entry;
+ struct task_struct *owner_thr; /* the owner thread */
+ atomic_t ref;
+ /* Function that will be called on the tgt_dev destruction */
+ void (*free_fn) (struct scst_thr_data_hdr *data);
+};
+
+/*
+ * Used to cleanly dispose of the async io_context
+ */
+struct scst_async_io_context_keeper {
+ struct kref aic_keeper_kref;
+ struct io_context *aic;
+ struct task_struct *aic_keeper_thr;
+ wait_queue_head_t aic_keeper_waitQ;
+};
+
+/*
+ * Used to store per-session specific device information, analog of
+ * SCSI I_T_L nexus.
+ */
+struct scst_tgt_dev {
+ /* List entry in sess->sess_tgt_dev_list_hash */
+ struct list_head sess_tgt_dev_list_entry;
+
+ struct scst_device *dev; /* to save extra dereferences */
+ uint64_t lun; /* to save extra dereferences */
+
+ gfp_t gfp_mask;
+ struct sgv_pool *pool;
+ int max_sg_cnt;
+
+ unsigned long tgt_dev_flags; /* tgt_dev's async flags */
+
+ /* Used for storage of dev handler private stuff */
+ void *dh_priv;
+
+ /* How many cmds alive on this dev in this session */
+ atomic_t tgt_dev_cmd_count;
+
+ /*
+ * Used to execute cmd's in order of arrival, honoring SCSI task
+ * attributes.
+ *
+ * Protected by sn_lock, except expected_sn, which is protected by
+ * itself. curr_sn must have the same size as expected_sn so that they
+ * overflow simultaneously.
+ */
+ int def_cmd_count;
+ spinlock_t sn_lock;
+ unsigned int expected_sn;
+ unsigned int curr_sn;
+ int hq_cmd_count;
+ struct list_head deferred_cmd_list;
+ struct list_head skipped_sn_list;
+
+ /*
+ * Set if the prev cmd was ORDERED. Size must allow unprotected
+ * modifications independent of the neighbouring fields.
+ */
+ unsigned long prev_cmd_ordered;
+
+ int num_free_sn_slots; /* if it's <0, then all slots are busy */
+ atomic_t *cur_sn_slot;
+ atomic_t sn_slots[15];
+
+ /* List of scst_thr_data_hdr and lock */
+ spinlock_t thr_data_lock;
+ struct list_head thr_data_list;
+
+ /* Pointer to lists of commands with the lock */
+ struct scst_cmd_threads *active_cmd_threads;
+
+ /* Union to save some CPU cache footprint */
+ union {
+ struct {
+ /* Copy to save fast path dereference */
+ struct io_context *async_io_context;
+
+ struct scst_async_io_context_keeper *aic_keeper;
+ };
+
+ /* Lists of commands with lock, if dedicated threads are used */
+ struct scst_cmd_threads tgt_dev_cmd_threads;
+ };
+
+ spinlock_t tgt_dev_lock; /* per-session device lock */
+
+ /* List of UA's for this device, protected by tgt_dev_lock */
+ struct list_head UA_list;
+
+ struct scst_session *sess; /* corresponding session */
+ struct scst_acg_dev *acg_dev; /* corresponding acg_dev */
+
+ /* List entry in dev->dev_tgt_dev_list */
+ struct list_head dev_tgt_dev_list_entry;
+
+ /* Internal tmp list entry */
+ struct list_head extra_tgt_dev_list_entry;
+
+ /* Set if INQUIRY DATA HAS CHANGED UA is needed */
+ unsigned int inq_changed_ua_needed:1;
+
+ /*
+ * Stored Unit Attention sense and its length for possible
+ * subsequent REQUEST SENSE. Both protected by tgt_dev_lock.
+ */
+ unsigned short tgt_dev_valid_sense_len;
+ uint8_t tgt_dev_sense[SCST_SENSE_BUFFERSIZE];
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+ /*
+ * Must be the last to allow to work with drivers who don't know
+ * about this config time option.
+ *
+ * Protected by sess->lat_lock.
+ */
+ uint64_t scst_time, tgt_time, dev_time;
+ unsigned int processed_cmds;
+ struct scst_ext_latency_stat dev_latency_stat[SCST_LATENCY_STATS_NUM];
+#endif
+};
+
+/*
+ * Used to store ACG-specific device information, like LUN
+ */
+struct scst_acg_dev {
+ struct scst_device *dev; /* corresponding device */
+
+ uint64_t lun; /* device's LUN in this acg */
+
+ /* If set, the corresponding LU is read only */
+ unsigned int rd_only:1;
+
+ /* Set if acg_dev_kobj was initialized */
+ unsigned int acg_dev_kobj_initialized:1;
+
+ struct scst_acg *acg; /* parent acg */
+
+ /* List entry in dev->dev_acg_dev_list */
+ struct list_head dev_acg_dev_list_entry;
+
+ /* List entry in acg->acg_dev_list */
+ struct list_head acg_dev_list_entry;
+
+ /* kobject for this structure */
+ struct kobject acg_dev_kobj;
+};
+
+/*
+ * ACG - access control group. Used to store group related
+ * control information.
+ */
+struct scst_acg {
+ /* List of acg_dev's in this acg, protected by scst_mutex */
+ struct list_head acg_dev_list;
+
+ /* List of attached sessions, protected by scst_mutex */
+ struct list_head acg_sess_list;
+
+ /* List of attached acn's, protected by scst_mutex */
+ struct list_head acn_list;
+
+ /* List entry in acg_lists */
+ struct list_head acg_list_entry;
+
+ /* Name of this acg */
+ const char *acg_name;
+
+ /* Type of I/O initiators grouping */
+ int acg_io_grouping_type;
+
+ unsigned int acg_kobj_initialized:1;
+ unsigned int in_tgt_acg_list:1;
+
+ /* kobject for this structure */
+ struct kobject acg_kobj;
+
+ struct kobject *luns_kobj;
+ struct kobject *initiators_kobj;
+
+ unsigned int addr_method;
+};
+
+/*
+ * ACN - access control name. Used to store the names by which
+ * incoming sessions will be assigned to the appropriate ACG.
+ */
+struct scst_acn {
+ /* Initiator's name */
+ const char *name;
+ /* List entry in acg->acn_list */
+ struct list_head acn_list_entry;
+
+ /* sysfs file attributes */
+ struct kobj_attribute *acn_attr;
+};
+
+/*
+ * Used to store per-session UNIT ATTENTIONs
+ */
+struct scst_tgt_dev_UA {
+ /* List entry in tgt_dev->UA_list */
+ struct list_head UA_list_entry;
+
+ /* Set if UA is global for session */
+ unsigned short global_UA:1;
+
+ /* Unit Attention valid sense len */
+ unsigned short UA_valid_sense_len;
+ /* Unit Attention sense buf */
+ uint8_t UA_sense_buffer[SCST_SENSE_BUFFERSIZE];
+};
+
+/* Used to deliver AENs */
+struct scst_aen {
+ int event_fn; /* AEN fn */
+
+ struct scst_session *sess; /* corresponding session */
+ uint64_t lun; /* corresponding LUN in SCSI form */
+
+ union {
+ /* SCSI AEN data */
+ struct {
+ int aen_sense_len;
+ uint8_t aen_sense[SCST_STANDARD_SENSE_LEN];
+ };
+ };
+
+ /* Keeps status of AEN's delivery to remote initiator */
+ int delivery_status;
+};
+
+#ifndef smp_mb__after_set_bit
+/* There is no smp_mb__after_set_bit() in the kernel */
+#define smp_mb__after_set_bit() smp_mb()
+#endif
+
+/*
+ * Registers target template.
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template(struct scst_tgt_template *vtt,
+ const char *version);
+static inline int scst_register_target_template(struct scst_tgt_template *vtt)
+{
+ return __scst_register_target_template(vtt, SCST_INTERFACE_VERSION);
+}
+
+/*
+ * Registers target template, non-GPL version.
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template_non_gpl(struct scst_tgt_template *vtt,
+ const char *version);
+static inline int scst_register_target_template_non_gpl(
+ struct scst_tgt_template *vtt)
+{
+ return __scst_register_target_template_non_gpl(vtt,
+ SCST_INTERFACE_VERSION);
+}
+
+void scst_unregister_target_template(struct scst_tgt_template *vtt);
+
+struct scst_tgt *scst_register_target(struct scst_tgt_template *vtt,
+ const char *target_name);
+void scst_unregister_target(struct scst_tgt *tgt);
+
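+/*
+ * Illustrative sketch only: the typical bring-up sequence of a target
+ * driver using the registration functions above. The "my_" names are
+ * hypothetical; per the note above, the template must be static:
+ *
+ *	static struct scst_tgt_template my_tgt_template = { ... };
+ *	static struct scst_tgt *my_tgt;
+ *
+ *	static int __init my_init(void)
+ *	{
+ *		int res = scst_register_target_template(&my_tgt_template);
+ *		if (res != 0)
+ *			return res;
+ *		my_tgt = scst_register_target(&my_tgt_template, "my_target");
+ *		if (my_tgt == NULL) {
+ *			scst_unregister_target_template(&my_tgt_template);
+ *			return -ENOMEM;
+ *		}
+ *		return 0;
+ *	}
+ */
+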
+struct scst_session *scst_register_session(struct scst_tgt *tgt, int atomic,
+ const char *initiator_name, void *data,
+ void (*result_fn) (struct scst_session *sess, void *data, int result));
+struct scst_session *scst_register_session_simple(struct scst_tgt *tgt,
+ const char *initiator_name);
+void scst_unregister_session(struct scst_session *sess, int wait,
+ void (*unreg_done_fn) (struct scst_session *sess));
+void scst_unregister_session_simple(struct scst_session *sess);
+
+int __scst_register_dev_driver(struct scst_dev_type *dev_type,
+ const char *version);
+static inline int scst_register_dev_driver(struct scst_dev_type *dev_type)
+{
+ return __scst_register_dev_driver(dev_type, SCST_INTERFACE_VERSION);
+}
+void scst_unregister_dev_driver(struct scst_dev_type *dev_type);
+
+int __scst_register_virtual_dev_driver(struct scst_dev_type *dev_type,
+ const char *version);
+/*
+ * Registers dev handler driver for virtual devices (eg VDISK).
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *dev_type must be static!
+ */
+static inline int scst_register_virtual_dev_driver(
+ struct scst_dev_type *dev_type)
+{
+ return __scst_register_virtual_dev_driver(dev_type,
+ SCST_INTERFACE_VERSION);
+}
+
+void scst_unregister_virtual_dev_driver(struct scst_dev_type *dev_type);
+
+struct scst_cmd *scst_rx_cmd(struct scst_session *sess,
+ const uint8_t *lun, int lun_len, const uint8_t *cdb,
+ int cdb_len, int atomic);
+void scst_cmd_init_done(struct scst_cmd *cmd,
+ enum scst_exec_context pref_context);
+
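+/*
+ * Illustrative sketch only: how a target driver typically turns a
+ * received SCSI request into an SCST command using the two functions
+ * above; "my_req", "req_tag" and the buffers are hypothetical driver
+ * state:
+ *
+ *	cmd = scst_rx_cmd(sess, lun_buf, lun_buf_len, cdb, cdb_len, atomic);
+ *	if (cmd == NULL)
+ *		return -ENOMEM;
+ *	scst_cmd_set_tag(cmd, req_tag);
+ *	scst_cmd_set_tgt_priv(cmd, my_req);
+ *	scst_cmd_init_done(cmd, scst_estimate_context());
+ */
+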
+/*
+ * Notifies SCST that the driver finished the first stage of the command
+ * initialization and the command is ready for execution, but after
+ * SCST has done the command's preprocessing, the preprocessing_done()
+ * function should be called. The second argument sets the preferred
+ * command execution context. See SCST_CONTEXT_* constants for details.
+ *
+ * See comment for scst_cmd_init_done() for the serialization requirements.
+ */
+static inline void scst_cmd_init_stage1_done(struct scst_cmd *cmd,
+ enum scst_exec_context pref_context, int set_sn)
+{
+ cmd->preprocessing_only = 1;
+ cmd->set_sn_on_restart_cmd = !set_sn;
+ scst_cmd_init_done(cmd, pref_context);
+}
+
+void scst_restart_cmd(struct scst_cmd *cmd, int status,
+ enum scst_exec_context pref_context);
+
+void scst_rx_data(struct scst_cmd *cmd, int status,
+ enum scst_exec_context pref_context);
+
+void scst_tgt_cmd_done(struct scst_cmd *cmd,
+ enum scst_exec_context pref_context);
+
+int scst_rx_mgmt_fn(struct scst_session *sess,
+ const struct scst_rx_mgmt_params *params);
+
+/*
+ * Creates a new management command using a tag and sends it for execution.
+ * Can be used for SCST_ABORT_TASK only.
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same sess. Returns 0 for success, error code otherwise.
+ *
+ * Obsolete in favor of scst_rx_mgmt_fn()
+ */
+static inline int scst_rx_mgmt_fn_tag(struct scst_session *sess, int fn,
+ uint64_t tag, int atomic, void *tgt_priv)
+{
+ struct scst_rx_mgmt_params params;
+
+ BUG_ON(fn != SCST_ABORT_TASK);
+
+ memset(&params, 0, sizeof(params));
+ params.fn = fn;
+ params.tag = tag;
+ params.tag_set = 1;
+ params.atomic = atomic;
+ params.tgt_priv = tgt_priv;
+ return scst_rx_mgmt_fn(sess, &params);
+}
+
+/*
+ * Creates a new management command using a LUN and sends it for execution.
+ * Currently can be used for any fn, except SCST_ABORT_TASK.
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same sess. Returns 0 for success, error code otherwise.
+ *
+ * Obsolete in favor of scst_rx_mgmt_fn()
+ */
+static inline int scst_rx_mgmt_fn_lun(struct scst_session *sess, int fn,
+ const uint8_t *lun, int lun_len, int atomic, void *tgt_priv)
+{
+ struct scst_rx_mgmt_params params;
+
+ BUG_ON(fn == SCST_ABORT_TASK);
+
+ memset(&params, 0, sizeof(params));
+ params.fn = fn;
+ params.lun = lun;
+ params.lun_len = lun_len;
+ params.lun_set = 1;
+ params.atomic = atomic;
+ params.tgt_priv = tgt_priv;
+ return scst_rx_mgmt_fn(sess, &params);
+}
+
+int scst_get_cdb_info(struct scst_cmd *cmd);
+
+int scst_set_cmd_error_status(struct scst_cmd *cmd, int status);
+int scst_set_cmd_error(struct scst_cmd *cmd, int key, int asc, int ascq);
+void scst_set_busy(struct scst_cmd *cmd);
+
+void scst_check_convert_sense(struct scst_cmd *cmd);
+
+void scst_set_initial_UA(struct scst_session *sess, int key, int asc, int ascq);
+
+void scst_capacity_data_changed(struct scst_device *dev);
+
+struct scst_cmd *scst_find_cmd_by_tag(struct scst_session *sess, uint64_t tag);
+struct scst_cmd *scst_find_cmd(struct scst_session *sess, void *data,
+ int (*cmp_fn) (struct scst_cmd *cmd,
+ void *data));
+
+enum dma_data_direction scst_to_dma_dir(int scst_dir);
+enum dma_data_direction scst_to_tgt_dma_dir(int scst_dir);
+
+/*
+ * Returns true if the cmd's CDB is fully locally handled by SCST and false
+ * otherwise. The dev handler's parse() and dev_done() are not called for
+ * such commands.
+ */
+static inline bool scst_is_cmd_fully_local(struct scst_cmd *cmd)
+{
+ return (cmd->op_flags & SCST_FULLY_LOCAL_CMD) != 0;
+}
+
+/*
+ * Returns true, if cmd's CDB is locally handled by SCST and
+ * false otherwise.
+ */
+static inline bool scst_is_cmd_local(struct scst_cmd *cmd)
+{
+ return (cmd->op_flags & SCST_LOCAL_CMD) != 0;
+}
+
+/* Returns true, if cmd can deliver UA */
+static inline bool scst_is_ua_command(struct scst_cmd *cmd)
+{
+ return (cmd->op_flags & SCST_SKIP_UA) == 0;
+}
+
+int scst_register_virtual_device(struct scst_dev_type *dev_handler,
+ const char *dev_name);
+void scst_unregister_virtual_device(int id);
+
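+/*
+ * Illustrative sketch only: how a dev handler's add_device() callback
+ * (see struct scst_dev_type above) might use scst_register_virtual_device().
+ * The "my_" names are hypothetical, and it is assumed here that a negative
+ * return value of scst_register_virtual_device() means an error:
+ *
+ *	static ssize_t my_add_device(const char *device_name, char *params)
+ *	{
+ *		int id = scst_register_virtual_device(&my_devtype, device_name);
+ *		return (id < 0) ? id : 0;
+ *	}
+ */
+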
+/*
+ * Get/Set functions for tgt's sg_tablesize
+ */
+static inline int scst_tgt_get_sg_tablesize(struct scst_tgt *tgt)
+{
+ return tgt->sg_tablesize;
+}
+
+static inline void scst_tgt_set_sg_tablesize(struct scst_tgt *tgt, int val)
+{
+ tgt->sg_tablesize = val;
+}
+
+/*
+ * Get/Set functions for tgt's target private data
+ */
+static inline void *scst_tgt_get_tgt_priv(struct scst_tgt *tgt)
+{
+ return tgt->tgt_priv;
+}
+
+static inline void scst_tgt_set_tgt_priv(struct scst_tgt *tgt, void *val)
+{
+ tgt->tgt_priv = val;
+}
+
+/*
+ * Get/Set functions for session's target private data
+ */
+static inline void *scst_sess_get_tgt_priv(struct scst_session *sess)
+{
+ return sess->tgt_priv;
+}
+
+static inline void scst_sess_set_tgt_priv(struct scst_session *sess,
+ void *val)
+{
+ sess->tgt_priv = val;
+}
+
+/**
+ * Returns TRUE if cmd is being executed in atomic context.
+ *
+ * Note: checkpatch will complain about the use of in_atomic() below. You can
+ * safely ignore this warning since in_atomic() is used here only for debugging
+ * purposes.
+ */
+static inline bool scst_cmd_atomic(struct scst_cmd *cmd)
+{
+ int res = cmd->atomic;
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (unlikely((in_atomic() || in_interrupt() || irqs_disabled()) &&
+ !res)) {
+ printk(KERN_ERR "ERROR: atomic context and non-atomic cmd\n");
+ dump_stack();
+ cmd->atomic = 1;
+ res = 1;
+ }
+#endif
+ return res;
+}
+
+/*
+ * Returns TRUE if cmd has been preliminary completed, i.e. completed or
+ * aborted.
+ */
+static inline bool scst_cmd_prelim_completed(struct scst_cmd *cmd)
+{
+ return cmd->completed || test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags);
+}
+
+static inline enum scst_exec_context __scst_estimate_context(bool direct)
+{
+ if (in_irq())
+ return SCST_CONTEXT_TASKLET;
+ else if (irqs_disabled())
+ return SCST_CONTEXT_THREAD;
+ else
+ return direct ? SCST_CONTEXT_DIRECT :
+ SCST_CONTEXT_DIRECT_ATOMIC;
+}
+
+static inline enum scst_exec_context scst_estimate_context(void)
+{
+ return __scst_estimate_context(0);
+}
+
+static inline enum scst_exec_context scst_estimate_context_direct(void)
+{
+ return __scst_estimate_context(1);
+}
+
+/* Returns cmd's CDB */
+static inline const uint8_t *scst_cmd_get_cdb(struct scst_cmd *cmd)
+{
+ return cmd->cdb;
+}
+
+/* Returns cmd's CDB length */
+static inline int scst_cmd_get_cdb_len(struct scst_cmd *cmd)
+{
+ return cmd->cdb_len;
+}
+
+/* Returns cmd's extended CDB */
+static inline const uint8_t *scst_cmd_get_ext_cdb(struct scst_cmd *cmd)
+{
+ return cmd->ext_cdb;
+}
+
+/* Returns cmd's extended CDB length */
+static inline int scst_cmd_get_ext_cdb_len(struct scst_cmd *cmd)
+{
+ return cmd->ext_cdb_len;
+}
+
+/* Sets cmd's extended CDB and its length */
+static inline void scst_cmd_set_ext_cdb(struct scst_cmd *cmd,
+ uint8_t *ext_cdb, unsigned int ext_cdb_len)
+{
+ cmd->ext_cdb = ext_cdb;
+ cmd->ext_cdb_len = ext_cdb_len;
+}
+
+/* Returns cmd's session */
+static inline struct scst_session *scst_cmd_get_session(struct scst_cmd *cmd)
+{
+ return cmd->sess;
+}
+
+/* Returns cmd's response data length */
+static inline int scst_cmd_get_resp_data_len(struct scst_cmd *cmd)
+{
+ return cmd->resp_data_len;
+}
+
+/* Returns whether status should be sent for cmd */
+static inline int scst_cmd_get_is_send_status(struct scst_cmd *cmd)
+{
+ return cmd->is_send_status;
+}
+
+/*
+ * Returns pointer to cmd's SG data buffer.
+ *
+ * Usage of this function is not recommended, use scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline struct scatterlist *scst_cmd_get_sg(struct scst_cmd *cmd)
+{
+ return cmd->sg;
+}
+
+/*
+ * Returns cmd's sg_cnt.
+ *
+ * Usage of this function is not recommended, use scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline int scst_cmd_get_sg_cnt(struct scst_cmd *cmd)
+{
+ return cmd->sg_cnt;
+}
+
+/*
+ * Returns cmd's data buffer length.
+ *
+ * If you need to iterate over the data in the buffer, usage of
+ * this function is not recommended; use the scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline unsigned int scst_cmd_get_bufflen(struct scst_cmd *cmd)
+{
+ return cmd->bufflen;
+}
+
+/*
+ * Returns pointer to cmd's bidirectional in (WRITE) SG data buffer.
+ *
+ * Usage of this function is not recommended, use scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline struct scatterlist *scst_cmd_get_in_sg(struct scst_cmd *cmd)
+{
+ return cmd->in_sg;
+}
+
+/*
+ * Returns cmd's bidirectional in (WRITE) sg_cnt.
+ *
+ * Usage of this function is not recommended, use scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline int scst_cmd_get_in_sg_cnt(struct scst_cmd *cmd)
+{
+ return cmd->in_sg_cnt;
+}
+
+/*
+ * Returns cmd's bidirectional in (WRITE) data buffer length.
+ *
+ * If you need to iterate over the data in the buffer, usage of
+ * this function is not recommended; use the scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline unsigned int scst_cmd_get_in_bufflen(struct scst_cmd *cmd)
+{
+ return cmd->in_bufflen;
+}
+
+/* Returns pointer to cmd's target's SG data buffer */
+static inline struct scatterlist *scst_cmd_get_tgt_sg(struct scst_cmd *cmd)
+{
+ return cmd->tgt_sg;
+}
+
+/* Returns cmd's target's sg_cnt */
+static inline int scst_cmd_get_tgt_sg_cnt(struct scst_cmd *cmd)
+{
+ return cmd->tgt_sg_cnt;
+}
+
+/* Sets cmd's target's SG data buffer */
+static inline void scst_cmd_set_tgt_sg(struct scst_cmd *cmd,
+ struct scatterlist *sg, int sg_cnt)
+{
+ cmd->tgt_sg = sg;
+ cmd->tgt_sg_cnt = sg_cnt;
+ cmd->tgt_data_buf_alloced = 1;
+}
+
+/* Returns pointer to cmd's target's IN SG data buffer */
+static inline struct scatterlist *scst_cmd_get_in_tgt_sg(struct scst_cmd *cmd)
+{
+ return cmd->tgt_in_sg;
+}
+
+/* Returns cmd's target's IN sg_cnt */
+static inline int scst_cmd_get_tgt_in_sg_cnt(struct scst_cmd *cmd)
+{
+ return cmd->tgt_in_sg_cnt;
+}
+
+/* Sets cmd's target's IN SG data buffer */
+static inline void scst_cmd_set_tgt_in_sg(struct scst_cmd *cmd,
+ struct scatterlist *sg, int sg_cnt)
+{
+ WARN_ON(!cmd->tgt_data_buf_alloced);
+
+ cmd->tgt_in_sg = sg;
+ cmd->tgt_in_sg_cnt = sg_cnt;
+}
+
+/* Returns cmd's data direction */
+static inline scst_data_direction scst_cmd_get_data_direction(
+ struct scst_cmd *cmd)
+{
+ return cmd->data_direction;
+}
+
+/* Returns cmd's status byte from host device */
+static inline uint8_t scst_cmd_get_status(struct scst_cmd *cmd)
+{
+ return cmd->status;
+}
+
+/* Returns cmd's status from host adapter itself */
+static inline uint8_t scst_cmd_get_msg_status(struct scst_cmd *cmd)
+{
+ return cmd->msg_status;
+}
+
+/* Returns cmd's status set by low-level driver to indicate its status */
+static inline uint8_t scst_cmd_get_host_status(struct scst_cmd *cmd)
+{
+ return cmd->host_status;
+}
+
+/* Returns cmd's status set by SCSI mid-level */
+static inline uint8_t scst_cmd_get_driver_status(struct scst_cmd *cmd)
+{
+ return cmd->driver_status;
+}
+
+/* Returns pointer to cmd's sense buffer */
+static inline uint8_t *scst_cmd_get_sense_buffer(struct scst_cmd *cmd)
+{
+ return cmd->sense;
+}
+
+/* Returns cmd's valid sense length */
+static inline int scst_cmd_get_sense_buffer_len(struct scst_cmd *cmd)
+{
+ return cmd->sense_valid_len;
+}
+
+/*
+ * Get/Set functions for cmd's queue_type
+ */
+static inline enum scst_cmd_queue_type scst_cmd_get_queue_type(
+ struct scst_cmd *cmd)
+{
+ return cmd->queue_type;
+}
+
+static inline void scst_cmd_set_queue_type(struct scst_cmd *cmd,
+ enum scst_cmd_queue_type queue_type)
+{
+ cmd->queue_type = queue_type;
+}
+
+/*
+ * Get/Set functions for cmd's tag
+ */
+static inline uint64_t scst_cmd_get_tag(struct scst_cmd *cmd)
+{
+ return cmd->tag;
+}
+
+static inline void scst_cmd_set_tag(struct scst_cmd *cmd, uint64_t tag)
+{
+ cmd->tag = tag;
+}
+
+/*
+ * Get/Set functions for cmd's target private data.
+ * Variant with *_lock must be used if target driver uses
+ * scst_find_cmd() to avoid race with it, except inside scst_find_cmd()'s
+ * callback, where lock is already taken.
+ */
+static inline void *scst_cmd_get_tgt_priv(struct scst_cmd *cmd)
+{
+ return cmd->tgt_priv;
+}
+
+static inline void scst_cmd_set_tgt_priv(struct scst_cmd *cmd, void *val)
+{
+ cmd->tgt_priv = val;
+}
+
+/*
+ * Get/Set functions for tgt_need_alloc_data_buf flag
+ */
+static inline int scst_cmd_get_tgt_need_alloc_data_buf(struct scst_cmd *cmd)
+{
+ return cmd->tgt_need_alloc_data_buf;
+}
+
+static inline void scst_cmd_set_tgt_need_alloc_data_buf(struct scst_cmd *cmd)
+{
+ cmd->tgt_need_alloc_data_buf = 1;
+}
+
+/*
+ * Get/Set functions for tgt_data_buf_alloced flag
+ */
+static inline int scst_cmd_get_tgt_data_buff_alloced(struct scst_cmd *cmd)
+{
+ return cmd->tgt_data_buf_alloced;
+}
+
+static inline void scst_cmd_set_tgt_data_buff_alloced(struct scst_cmd *cmd)
+{
+ cmd->tgt_data_buf_alloced = 1;
+}
+
+/*
+ * Get/Set functions for dh_data_buf_alloced flag
+ */
+static inline int scst_cmd_get_dh_data_buff_alloced(struct scst_cmd *cmd)
+{
+ return cmd->dh_data_buf_alloced;
+}
+
+static inline void scst_cmd_set_dh_data_buff_alloced(struct scst_cmd *cmd)
+{
+ cmd->dh_data_buf_alloced = 1;
+}
+
+/*
+ * Get/Set functions for no_sgv flag
+ */
+static inline int scst_cmd_get_no_sgv(struct scst_cmd *cmd)
+{
+ return cmd->no_sgv;
+}
+
+static inline void scst_cmd_set_no_sgv(struct scst_cmd *cmd)
+{
+ cmd->no_sgv = 1;
+}
+
+/*
+ * Get/Set functions for tgt_sn
+ */
+static inline int scst_cmd_get_tgt_sn(struct scst_cmd *cmd)
+{
+ BUG_ON(!cmd->tgt_sn_set);
+ return cmd->tgt_sn;
+}
+
+static inline void scst_cmd_set_tgt_sn(struct scst_cmd *cmd, uint32_t tgt_sn)
+{
+ cmd->tgt_sn_set = 1;
+ cmd->tgt_sn = tgt_sn;
+}
+
+/*
+ * Returns 1 if the cmd was aborted, so its status is invalid and no
+ * reply shall be sent to the remote initiator. A target driver should
+ * only clear internal resources associated with the cmd.
+ */
+static inline int scst_cmd_aborted(struct scst_cmd *cmd)
+{
+ return test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags) &&
+ !test_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+}
+
+/* Returns sense data format for cmd's dev */
+static inline bool scst_get_cmd_dev_d_sense(struct scst_cmd *cmd)
+{
+ return (cmd->dev != NULL) ? cmd->dev->d_sense : 0;
+}
+
+/*
+ * Get/Set functions for expected data direction, transfer length
+ * and its validity flag
+ */
+static inline int scst_cmd_is_expected_set(struct scst_cmd *cmd)
+{
+ return cmd->expected_values_set;
+}
+
+static inline scst_data_direction scst_cmd_get_expected_data_direction(
+ struct scst_cmd *cmd)
+{
+ return cmd->expected_data_direction;
+}
+
+static inline int scst_cmd_get_expected_transfer_len(
+ struct scst_cmd *cmd)
+{
+ return cmd->expected_transfer_len;
+}
+
+static inline int scst_cmd_get_expected_in_transfer_len(
+ struct scst_cmd *cmd)
+{
+ return cmd->expected_in_transfer_len;
+}
+
+static inline void scst_cmd_set_expected(struct scst_cmd *cmd,
+ scst_data_direction expected_data_direction,
+ int expected_transfer_len)
+{
+ cmd->expected_data_direction = expected_data_direction;
+ cmd->expected_transfer_len = expected_transfer_len;
+ cmd->expected_values_set = 1;
+}
+
+static inline void scst_cmd_set_expected_in_transfer_len(struct scst_cmd *cmd,
+ int expected_in_transfer_len)
+{
+ WARN_ON(!cmd->expected_values_set);
+ cmd->expected_in_transfer_len = expected_in_transfer_len;
+}
+
+/*
+ * Get/clear functions for cmd's may_need_dma_sync
+ */
+static inline int scst_get_may_need_dma_sync(struct scst_cmd *cmd)
+{
+ return cmd->may_need_dma_sync;
+}
+
+static inline void scst_clear_may_need_dma_sync(struct scst_cmd *cmd)
+{
+ cmd->may_need_dma_sync = 0;
+}
+
+/*
+ * Get/set functions for cmd's delivery_status. It is one of
+ * SCST_CMD_DELIVERY_* constants. It specifies the status of the
+ * command's delivery to initiator.
+ */
+static inline int scst_get_delivery_status(struct scst_cmd *cmd)
+{
+ return cmd->delivery_status;
+}
+
+static inline void scst_set_delivery_status(struct scst_cmd *cmd,
+ int delivery_status)
+{
+ cmd->delivery_status = delivery_status;
+}
+
+/*
+ * Get/Set function for mgmt cmd's target private data
+ */
+static inline void *scst_mgmt_cmd_get_tgt_priv(struct scst_mgmt_cmd *mcmd)
+{
+ return mcmd->tgt_priv;
+}
+
+static inline void scst_mgmt_cmd_set_tgt_priv(struct scst_mgmt_cmd *mcmd,
+ void *val)
+{
+ mcmd->tgt_priv = val;
+}
+
+/* Returns mgmt cmd's completion status (SCST_MGMT_STATUS_* constants) */
+static inline int scst_mgmt_cmd_get_status(struct scst_mgmt_cmd *mcmd)
+{
+ return mcmd->status;
+}
+
+/* Returns mgmt cmd's TM fn */
+static inline int scst_mgmt_cmd_get_fn(struct scst_mgmt_cmd *mcmd)
+{
+ return mcmd->fn;
+}
+
+/*
+ * Called by dev handler's task_mgmt_fn() to notify SCST core that mcmd
+ * is going to complete asynchronously.
+ */
+void scst_prepare_async_mcmd(struct scst_mgmt_cmd *mcmd);
+
+/*
+ * Called by dev handler to notify SCST core that async. mcmd is completed
+ * with status "status".
+ */
+void scst_async_mcmd_completed(struct scst_mgmt_cmd *mcmd, int status);
+
+/* Returns AEN's fn */
+static inline int scst_aen_get_event_fn(struct scst_aen *aen)
+{
+ return aen->event_fn;
+}
+
+/* Returns AEN's session */
+static inline struct scst_session *scst_aen_get_sess(struct scst_aen *aen)
+{
+ return aen->sess;
+}
+
+/* Returns AEN's LUN */
+static inline uint64_t scst_aen_get_lun(struct scst_aen *aen)
+{
+ return aen->lun;
+}
+
+/* Returns SCSI AEN's sense */
+static inline const uint8_t *scst_aen_get_sense(struct scst_aen *aen)
+{
+ return aen->aen_sense;
+}
+
+/* Returns SCSI AEN's sense length */
+static inline int scst_aen_get_sense_len(struct scst_aen *aen)
+{
+ return aen->aen_sense_len;
+}
+
+/*
+ * Get/set functions for AEN's delivery_status. It is one of
+ * SCST_AEN_RES_* constants. It specifies the status of the
+ * AEN's delivery to the initiator.
+ */
+static inline int scst_get_aen_delivery_status(struct scst_aen *aen)
+{
+ return aen->delivery_status;
+}
+
+static inline void scst_set_aen_delivery_status(struct scst_aen *aen,
+ int status)
+{
+ aen->delivery_status = status;
+}
+
+void scst_aen_done(struct scst_aen *aen);
+
+static inline void sg_clear(struct scatterlist *sg)
+{
+ memset(sg, 0, sizeof(*sg));
+#ifdef CONFIG_DEBUG_SG
+ sg->sg_magic = SG_MAGIC;
+#endif
+}
+
+enum scst_sg_copy_dir {
+ SCST_SG_COPY_FROM_TARGET,
+ SCST_SG_COPY_TO_TARGET
+};
+
+void scst_copy_sg(struct scst_cmd *cmd, enum scst_sg_copy_dir copy_dir);
+
+/*
+ * Functions for access to the command's data (SG) buffer,
+ * including in a HIGHMEM environment. Should be used instead of direct
+ * access. Returns the mapped buffer length for success, 0 for EOD,
+ * negative error code otherwise.
+ *
+ * "Buf" argument returns the mapped buffer
+ *
+ * The "put" function unmaps the buffer.
+ */
+static inline int __scst_get_buf(struct scst_cmd *cmd, struct scatterlist *sg,
+ int sg_cnt, uint8_t **buf)
+{
+ int res = 0;
+ int i = cmd->get_sg_buf_entry_num;
+
+ *buf = NULL;
+
+ if ((i >= sg_cnt) || unlikely(sg == NULL))
+ goto out;
+
+ *buf = page_address(sg_page(&sg[i]));
+ *buf += sg[i].offset;
+
+ res = sg[i].length;
+ cmd->get_sg_buf_entry_num++;
+
+out:
+ return res;
+}
+
+static inline int scst_get_buf_first(struct scst_cmd *cmd, uint8_t **buf)
+{
+ cmd->get_sg_buf_entry_num = 0;
+ cmd->may_need_dma_sync = 1;
+ return __scst_get_buf(cmd, cmd->sg, cmd->sg_cnt, buf);
+}
+
+static inline int scst_get_buf_next(struct scst_cmd *cmd, uint8_t **buf)
+{
+ return __scst_get_buf(cmd, cmd->sg, cmd->sg_cnt, buf);
+}
+
+static inline void scst_put_buf(struct scst_cmd *cmd, void *buf)
+{
+ /* Nothing to do */
+}
+
+static inline int scst_get_in_buf_first(struct scst_cmd *cmd, uint8_t **buf)
+{
+ cmd->get_sg_buf_entry_num = 0;
+ cmd->may_need_dma_sync = 1;
+ return __scst_get_buf(cmd, cmd->in_sg, cmd->in_sg_cnt, buf);
+}
+
+static inline int scst_get_in_buf_next(struct scst_cmd *cmd, uint8_t **buf)
+{
+ return __scst_get_buf(cmd, cmd->in_sg, cmd->in_sg_cnt, buf);
+}
+
+static inline void scst_put_in_buf(struct scst_cmd *cmd, void *buf)
+{
+ /* Nothing to do */
+}
+
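+/*
+ * Illustrative sketch only: the intended usage pattern of the buffer
+ * access functions above for walking a command's data buffer piece by
+ * piece ("my_" names are hypothetical):
+ *
+ *	uint8_t *buf;
+ *	int len = scst_get_buf_first(cmd, &buf);
+ *
+ *	while (len > 0) {
+ *		my_process_piece(buf, len);
+ *		scst_put_buf(cmd, buf);
+ *		len = scst_get_buf_next(cmd, &buf);
+ *	}
+ *	if (len < 0)
+ *		my_handle_error(cmd, len);
+ */
+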
+/*
+ * Returns the approximate, rounded-up count of buffers that
+ * scst_get_buf_[first|next]() will return.
+ */
+static inline int scst_get_buf_count(struct scst_cmd *cmd)
+{
+ return (cmd->sg_cnt == 0) ? 1 : cmd->sg_cnt;
+}
+
+/*
+ * Returns the approximate, rounded-up count of buffers that
+ * scst_get_in_buf_[first|next]() will return.
+ */
+static inline int scst_get_in_buf_count(struct scst_cmd *cmd)
+{
+ return (cmd->in_sg_cnt == 0) ? 1 : cmd->in_sg_cnt;
+}
+
+int scst_suspend_activity(bool interruptible);
+void scst_resume_activity(void);
+
+void scst_process_active_cmd(struct scst_cmd *cmd, bool atomic);
+void scst_post_parse_process_active_cmd(struct scst_cmd *cmd, bool atomic);
+
+int scst_check_local_events(struct scst_cmd *cmd);
+
+int scst_get_cmd_abnormal_done_state(const struct scst_cmd *cmd);
+void scst_set_cmd_abnormal_done_state(struct scst_cmd *cmd);
+
+struct scst_trace_log {
+ unsigned int val;
+ const char *token;
+};
+
+extern struct mutex scst_mutex;
+
+extern struct sysfs_ops scst_sysfs_ops;
+
+/*
+ * Returns target driver's root sysfs kobject.
+ * The driver can create own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_tgtt_kobj(
+ struct scst_tgt_template *tgtt)
+{
+ return &tgtt->tgtt_kobj;
+}
+
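+/*
+ * Illustrative sketch only: a target driver creating its own sysfs file
+ * under its template's kobject; "my_tgt_template" and "my_attr" are
+ * hypothetical, sysfs_create_file() is the standard kernel primitive:
+ *
+ *	res = sysfs_create_file(scst_sysfs_get_tgtt_kobj(&my_tgt_template),
+ *		&my_attr.attr);
+ */
+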
+/*
+ * Returns target's root sysfs kobject.
+ * The driver can create own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_tgt_kobj(
+ struct scst_tgt *tgt)
+{
+ return &tgt->tgt_kobj;
+}
+
+/*
+ * Returns device handler's root sysfs kobject.
+ * The driver can create own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_devt_kobj(
+ struct scst_dev_type *devt)
+{
+ return &devt->devt_kobj;
+}
+
+/*
+ * Returns device's root sysfs kobject.
+ * The driver can create own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_dev_kobj(
+ struct scst_device *dev)
+{
+ return &dev->dev_kobj;
+}
+
+/*
+ * Returns session's root sysfs kobject.
+ * The driver can create own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_sess_kobj(
+ struct scst_session *sess)
+{
+ return &sess->sess_kobj;
+}
+
+/* Returns target name */
+static inline const char *scst_get_tgt_name(const struct scst_tgt *tgt)
+{
+ return tgt->tgt_name;
+}
+
+int scst_alloc_sense(struct scst_cmd *cmd, int atomic);
+int scst_alloc_set_sense(struct scst_cmd *cmd, int atomic,
+ const uint8_t *sense, unsigned int len);
+
+int scst_set_sense(uint8_t *buffer, int len, bool d_sense,
+ int key, int asc, int ascq);
+
+bool scst_is_ua_sense(const uint8_t *sense, int len);
+
+bool scst_analyze_sense(const uint8_t *sense, int len,
+ unsigned int valid_mask, int key, int asc, int ascq);
+
+unsigned long scst_random(void);
+
+void scst_set_resp_data_len(struct scst_cmd *cmd, int resp_data_len);
+
+void scst_get(void);
+void scst_put(void);
+
+void scst_cmd_get(struct scst_cmd *cmd);
+void scst_cmd_put(struct scst_cmd *cmd);
+
+struct scatterlist *scst_alloc(int size, gfp_t gfp_mask, int *count);
+void scst_free(struct scatterlist *sg, int count);
+
+void scst_add_thr_data(struct scst_tgt_dev *tgt_dev,
+ struct scst_thr_data_hdr *data,
+ void (*free_fn) (struct scst_thr_data_hdr *data));
+void scst_del_all_thr_data(struct scst_tgt_dev *tgt_dev);
+void scst_dev_del_all_thr_data(struct scst_device *dev);
+struct scst_thr_data_hdr *__scst_find_thr_data(struct scst_tgt_dev *tgt_dev,
+ struct task_struct *tsk);
+
+/* Finds the data local to the current thread. Returns NULL if not found. */
+static inline struct scst_thr_data_hdr *scst_find_thr_data(
+ struct scst_tgt_dev *tgt_dev)
+{
+ return __scst_find_thr_data(tgt_dev, current);
+}
+
+/* Increase ref counter for the thread data */
+static inline void scst_thr_data_get(struct scst_thr_data_hdr *data)
+{
+ atomic_inc(&data->ref);
+}
+
+/* Decrease ref counter for the thread data */
+static inline void scst_thr_data_put(struct scst_thr_data_hdr *data)
+{
+ if (atomic_dec_and_test(&data->ref))
+ data->free_fn(data);
+}
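+
+/*
+ * Illustrative sketch of the lookup-or-allocate pattern for per-thread data;
+ * struct my_thr_data and its users are hypothetical, and reference counting
+ * with scst_thr_data_get()/scst_thr_data_put() is omitted for brevity.
+ */
+#if 0
+struct my_thr_data {
+ struct scst_thr_data_hdr hdr; /* embedded SCST header */
+ void *private_state;
+};
+
+static void my_thr_data_free(struct scst_thr_data_hdr *data)
+{
+ kfree(container_of(data, struct my_thr_data, hdr));
+}
+
+static struct my_thr_data *my_get_thr_data(struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_thr_data_hdr *d = scst_find_thr_data(tgt_dev);
+ struct my_thr_data *td;
+
+ if (d != NULL)
+ return container_of(d, struct my_thr_data, hdr);
+
+ td = kzalloc(sizeof(*td), GFP_KERNEL);
+ if (td == NULL)
+ return NULL;
+ scst_add_thr_data(tgt_dev, &td->hdr, my_thr_data_free);
+ return td;
+}
+#endif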
+
+int scst_calc_block_shift(int sector_size);
+int scst_sbc_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_cdrom_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_modisk_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_tape_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_size)(struct scst_cmd *cmd));
+int scst_changer_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd));
+int scst_processor_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd));
+int scst_raid_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd));
+
+int scst_block_generic_dev_done(struct scst_cmd *cmd,
+ void (*set_block_shift)(struct scst_cmd *cmd, int block_shift));
+int scst_tape_generic_dev_done(struct scst_cmd *cmd,
+ void (*set_block_size)(struct scst_cmd *cmd, int block_size));
+
+int scst_obtain_device_parameters(struct scst_device *dev);
+
+int scst_get_max_lun_commands(struct scst_session *sess, uint64_t lun);
+
+/*
+ * Open coded here, because Linux doesn't have an equivalent that allows
+ * exclusive wake-ups of threads in LIFO order. We need it to let (yet)
+ * unneeded threads sleep and not pollute the CPU cache with their stacks.
+ */
+static inline void add_wait_queue_exclusive_head(wait_queue_head_t *q,
+ wait_queue_t *wait)
+{
+ unsigned long flags;
+
+ wait->flags |= WQ_FLAG_EXCLUSIVE;
+ spin_lock_irqsave(&q->lock, flags);
+ __add_wait_queue(q, wait);
+ spin_unlock_irqrestore(&q->lock, flags);
+}
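+
+/*
+ * Typical wait loop built on add_wait_queue_exclusive_head(); an illustrative
+ * sketch only, where my_waitq and my_work_available() are hypothetical:
+ */
+#if 0
+static int my_thread_fn(void *arg)
+{
+ wait_queue_t wait;
+
+ init_waitqueue_entry(&wait, current);
+ while (!kthread_should_stop()) {
+ if (!my_work_available()) {
+ add_wait_queue_exclusive_head(&my_waitq, &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (my_work_available() || kthread_should_stop())
+ break;
+ schedule();
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&my_waitq, &wait);
+ }
+ /* ... process the available work ... */
+ }
+ return 0;
+}
+#endif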
+
+/*
+ * Structure to match events sent to user space with the replies to them
+ */
+struct scst_sysfs_user_info {
+ /* Unique cookie to identify request */
+ uint32_t info_cookie;
+
+ /* Entry in the global list */
+ struct list_head info_list_entry;
+
+ /* Set if reply from the user space is being executed */
+ unsigned int info_being_executed:1;
+
+ /* Set if this info is in the info_list */
+ unsigned int info_in_list:1;
+
+ /* Completion to wait on for the request completion */
+ struct completion info_completion;
+
+ /* Request completion status and optional data */
+ int info_status;
+ void *data;
+};
+
+int scst_sysfs_user_add_info(struct scst_sysfs_user_info **out_info);
+void scst_sysfs_user_del_info(struct scst_sysfs_user_info *info);
+struct scst_sysfs_user_info *scst_sysfs_user_get_info(uint32_t cookie);
+int scst_wait_info_completion(struct scst_sysfs_user_info *info,
+ unsigned long timeout);
+
+unsigned int scst_get_setup_id(void);
+
+char *scst_get_next_lexem(char **token_str);
+void scst_restore_token_str(char *prev_lexem, char *token_str);
+char *scst_get_next_token_str(char **input_str);
+
+void scst_init_threads(struct scst_cmd_threads *cmd_threads);
+void scst_deinit_threads(struct scst_cmd_threads *cmd_threads);
+
+#endif /* __SCST_H */
* Re: [PATCH][RFC 3/12/1/5] SCST core's scst_main.c
[not found] ` <4BC44D08.4060907@vlnb.net>
2010-04-13 13:04 ` [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig Vladislav Bolkhovitin
2010-04-13 13:04 ` [PATCH][RFC 2/12/1/5] SCST core's external headers Vladislav Bolkhovitin
@ 2010-04-13 13:04 ` Vladislav Bolkhovitin
2010-04-13 13:05 ` [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c Vladislav Bolkhovitin
` (6 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains file scst_main.c.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_main.c | 2047 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 2047 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_main.c linux-2.6.33/drivers/scst/scst_main.c
--- orig/linux-2.6.33/drivers/scst/scst_main.c
+++ linux-2.6.33/drivers/scst/scst_main.c
@@ -0,0 +1,2047 @@
+/*
+ * scst_main.c
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <linux/kthread.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+#if defined(CONFIG_HIGHMEM4G) || defined(CONFIG_HIGHMEM64G)
+#warning "HIGHMEM kernel configurations are fully supported, but not\
+ recommended for performance reasons. Consider changing VMSPLIT\
+ option or use a 64-bit configuration instead. See README file for\
+ details."
+#endif
+
+/**
+ ** SCST global variables. They are all left uninitialized so that their
+ ** layout in memory stays exactly as declared. Otherwise the compiler puts
+ ** zero-initialized variables separately from nonzero-initialized ones.
+ **/
+
+/*
+ * Main SCST mutex. All targets, devices and dev_types management is done
+ * under this mutex.
+ *
+ * It must NOT be used in any works (schedule_work(), etc.), because
+ * otherwise a deadlock (double lock, actually) is possible, e.g., with
+ * scst_user detach_tgt(), which is called under scst_mutex and calls
+ * flush_scheduled_work().
+ */
+struct mutex scst_mutex;
+EXPORT_SYMBOL_GPL(scst_mutex);
+
+ /* All 3 protected by scst_mutex */
+struct list_head scst_template_list;
+struct list_head scst_dev_list;
+struct list_head scst_dev_type_list;
+
+spinlock_t scst_main_lock;
+
+static struct kmem_cache *scst_mgmt_cachep;
+mempool_t *scst_mgmt_mempool;
+static struct kmem_cache *scst_mgmt_stub_cachep;
+mempool_t *scst_mgmt_stub_mempool;
+static struct kmem_cache *scst_ua_cachep;
+mempool_t *scst_ua_mempool;
+static struct kmem_cache *scst_sense_cachep;
+mempool_t *scst_sense_mempool;
+static struct kmem_cache *scst_aen_cachep;
+mempool_t *scst_aen_mempool;
+struct kmem_cache *scst_tgtd_cachep;
+struct kmem_cache *scst_sess_cachep;
+struct kmem_cache *scst_acgd_cachep;
+
+unsigned int scst_setup_id;
+
+spinlock_t scst_init_lock;
+wait_queue_head_t scst_init_cmd_list_waitQ;
+struct list_head scst_init_cmd_list;
+unsigned int scst_init_poll_cnt;
+
+struct kmem_cache *scst_cmd_cachep;
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+unsigned long scst_trace_flag;
+#endif
+
+unsigned long scst_flags;
+atomic_t scst_cmd_count;
+
+struct scst_cmd_threads scst_main_cmd_threads;
+
+struct scst_tasklet scst_tasklets[NR_CPUS];
+
+spinlock_t scst_mcmd_lock;
+struct list_head scst_active_mgmt_cmd_list;
+struct list_head scst_delayed_mgmt_cmd_list;
+wait_queue_head_t scst_mgmt_cmd_list_waitQ;
+
+wait_queue_head_t scst_mgmt_waitQ;
+spinlock_t scst_mgmt_lock;
+struct list_head scst_sess_init_list;
+struct list_head scst_sess_shut_list;
+
+wait_queue_head_t scst_dev_cmd_waitQ;
+
+static struct mutex scst_suspend_mutex;
+/* protected by scst_suspend_mutex */
+static struct list_head scst_cmd_threads_list;
+
+int scst_threads;
+static struct task_struct *scst_init_cmd_thread;
+static struct task_struct *scst_mgmt_thread;
+static struct task_struct *scst_mgmt_cmd_thread;
+
+static int suspend_count;
+
+static int scst_virt_dev_last_id; /* protected by scst_mutex */
+
+static unsigned int scst_max_cmd_mem;
+unsigned int scst_max_dev_cmd_mem;
+
+module_param_named(scst_threads, scst_threads, int, 0);
+MODULE_PARM_DESC(scst_threads, "SCSI target threads count");
+
+module_param_named(scst_max_cmd_mem, scst_max_cmd_mem, int, S_IRUGO);
+MODULE_PARM_DESC(scst_max_cmd_mem, "Maximum memory allowed to be consumed by "
+ "all SCSI commands of all devices at any given time in MB");
+
+module_param_named(scst_max_dev_cmd_mem, scst_max_dev_cmd_mem, int, S_IRUGO);
+MODULE_PARM_DESC(scst_max_dev_cmd_mem, "Maximum memory allowed to be consumed "
+ "by all SCSI commands of a device at any given time in MB");
+
+struct scst_dev_type scst_null_devtype = {
+ .name = "none",
+};
+
+static void __scst_resume_activity(void);
+
+/**
+ * __scst_register_target_template() - register target template.
+ * @vtt: target template
+ * @version: SCST_INTERFACE_VERSION version string to ensure that
+ * SCST core and the target driver use the same version of
+ * the SCST interface
+ *
+ * Description:
+ * Registers a target template and returns 0 on success or appropriate
+ * error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template(struct scst_tgt_template *vtt,
+ const char *version)
+{
+ int res = 0;
+ struct scst_tgt_template *t;
+ static DEFINE_MUTEX(m);
+
+ INIT_LIST_HEAD(&vtt->tgt_list);
+
+ if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+ PRINT_ERROR("Incorrect version of target %s", vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ if (!vtt->detect) {
+ PRINT_ERROR("Target driver %s must have "
+ "detect() method.", vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ if (!vtt->release) {
+ PRINT_ERROR("Target driver %s must have "
+ "release() method.", vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ if (!vtt->xmit_response) {
+ PRINT_ERROR("Target driver %s must have "
+ "xmit_response() method.", vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ if (vtt->threads_num < 0) {
+ PRINT_ERROR("Wrong threads_num value %d for "
+ "target \"%s\"", vtt->threads_num,
+ vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ if (!vtt->enable_target || !vtt->is_target_enabled) {
+ PRINT_WARNING("Target driver %s doesn't have enable_target() "
+ "and/or is_target_enabled() method(s). This is unsafe "
+ "and can lead that initiators connected on the "
+ "initialization time can see an unexpected set of "
+ "devices or no devices at all!", vtt->name);
+ }
+
+ if (((vtt->add_target != NULL) && (vtt->del_target == NULL)) ||
+ ((vtt->add_target == NULL) && (vtt->del_target != NULL))) {
+ PRINT_ERROR("Target driver %s must either define both "
+ "add_target() and del_target(), or none.", vtt->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ res = scst_create_tgtt_sysfs(vtt);
+ if (res)
+ goto out_sysfs_err;
+
+ if (vtt->rdy_to_xfer == NULL)
+ vtt->rdy_to_xfer_atomic = 1;
+
+ res = mutex_lock_interruptible(&m);
+ if (res != 0)
+ goto out_sysfs_err;
+
+ res = mutex_lock_interruptible(&scst_mutex);
+ if (res != 0)
+ goto out_m_err;
+ list_for_each_entry(t, &scst_template_list, scst_template_list_entry) {
+ if (strcmp(t->name, vtt->name) == 0) {
+ PRINT_ERROR("Target driver %s already registered",
+ vtt->name);
+ mutex_unlock(&scst_mutex);
+ res = -EEXIST;
+ goto out_m_err;
+ }
+ }
+ mutex_unlock(&scst_mutex);
+
+ TRACE_DBG("%s", "Calling target driver's detect()");
+ res = vtt->detect(vtt);
+ TRACE_DBG("Target driver's detect() returned %d", res);
+ if (res < 0) {
+ PRINT_ERROR("%s", "The detect() routine failed");
+ res = -EINVAL;
+ goto out_m_err;
+ }
+
+ mutex_lock(&scst_mutex);
+ list_add_tail(&vtt->scst_template_list_entry, &scst_template_list);
+ mutex_unlock(&scst_mutex);
+
+ res = 0;
+
+ PRINT_INFO("Target template %s registered successfully", vtt->name);
+
+ mutex_unlock(&m);
+
+out:
+ return res;
+
+out_m_err:
+ mutex_unlock(&m);
+
+out_sysfs_err:
+ scst_tgtt_sysfs_put(vtt);
+
+out_err:
+ PRINT_ERROR("Failed to register target template %s", vtt->name);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_target_template);
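+
+/*
+ * Minimal illustrative sketch of a target driver registration; the my_*
+ * callbacks are hypothetical. detect(), release() and xmit_response() are
+ * the callbacks checked as mandatory above; enable_target() and
+ * is_target_enabled() are strongly recommended (see the warning above).
+ */
+#if 0
+static struct scst_tgt_template my_tgt_template = {
+ .name = "my_tgt",
+ .detect = my_detect,
+ .release = my_release,
+ .xmit_response = my_xmit_response,
+ .enable_target = my_enable_target,
+ .is_target_enabled = my_is_target_enabled,
+};
+
+static int __init my_tgt_init(void)
+{
+ return __scst_register_target_template(&my_tgt_template,
+ SCST_INTERFACE_VERSION);
+}
+#endif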
+
+static int scst_check_non_gpl_target_template(struct scst_tgt_template *vtt)
+{
+ int res;
+
+ if (vtt->task_mgmt_affected_cmds_done || vtt->threads_num) {
+ PRINT_ERROR("Not allowed functionality in non-GPL version for "
+ "target template %s", vtt->name);
+ res = -EPERM;
+ goto out;
+ }
+
+ res = 0;
+
+out:
+ return res;
+}
+
+/**
+ * __scst_register_target_template_non_gpl() - register target template,
+ * non-GPL version
+ * @vtt: target template
+ * @version: SCST_INTERFACE_VERSION version string to ensure that
+ * SCST core and the target driver use the same version of
+ * the SCST interface
+ *
+ * Description:
+ * Registers a target template and returns 0 on success or appropriate
+ * error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template_non_gpl(struct scst_tgt_template *vtt,
+ const char *version)
+{
+ int res;
+
+ res = scst_check_non_gpl_target_template(vtt);
+ if (res != 0)
+ goto out;
+
+ res = __scst_register_target_template(vtt, version);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(__scst_register_target_template_non_gpl);
+
+/**
+ * scst_unregister_target_template() - unregister target template
+ */
+void scst_unregister_target_template(struct scst_tgt_template *vtt)
+{
+ struct scst_tgt *tgt;
+ struct scst_tgt_template *t;
+ int found = 0;
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(t, &scst_template_list, scst_template_list_entry) {
+ if (strcmp(t->name, vtt->name) == 0) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found) {
+ PRINT_ERROR("Target driver %s isn't registered", vtt->name);
+ goto out_err_up;
+ }
+
+restart:
+ list_for_each_entry(tgt, &vtt->tgt_list, tgt_list_entry) {
+ mutex_unlock(&scst_mutex);
+ scst_unregister_target(tgt);
+ mutex_lock(&scst_mutex);
+ goto restart;
+ }
+ list_del(&vtt->scst_template_list_entry);
+
+ mutex_unlock(&scst_mutex);
+
+ scst_tgtt_sysfs_put(vtt);
+
+ PRINT_INFO("Target template %s unregistered successfully", vtt->name);
+
+out:
+ return;
+
+out_err_up:
+ mutex_unlock(&scst_mutex);
+ goto out;
+}
+EXPORT_SYMBOL(scst_unregister_target_template);
+
+/**
+ * scst_register_target() - register target
+ *
+ * Registers a target for template vtt and returns new target structure on
+ * success or NULL otherwise.
+ */
+struct scst_tgt *scst_register_target(struct scst_tgt_template *vtt,
+ const char *target_name)
+{
+ struct scst_tgt *tgt = NULL;
+ int rc = 0;
+
+ rc = scst_alloc_tgt(vtt, &tgt);
+ if (rc != 0)
+ goto out_err;
+
+ rc = scst_suspend_activity(true);
+ if (rc != 0)
+ goto out_free_tgt_err;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ rc = -EINTR;
+ goto out_resume_free;
+ }
+
+ if (target_name != NULL) {
+
+ tgt->tgt_name = kmalloc(strlen(target_name) + 1, GFP_KERNEL);
+ if (tgt->tgt_name == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of tgt name %s failed",
+ target_name);
+ rc = -ENOMEM;
+ goto out_unlock_resume;
+ }
+ strcpy(tgt->tgt_name, target_name);
+ } else {
+ static int tgt_num; /* protected by scst_mutex */
+ int len = strlen(vtt->name) +
+ strlen(SCST_DEFAULT_TGT_NAME_SUFFIX) + 11 + 1;
+
+ tgt->tgt_name = kmalloc(len, GFP_KERNEL);
+ if (tgt->tgt_name == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of tgt name failed "
+ "(template name %s)", vtt->name);
+ rc = -ENOMEM;
+ goto out_unlock_resume;
+ }
+ sprintf(tgt->tgt_name, "%s%s%d", vtt->name,
+ SCST_DEFAULT_TGT_NAME_SUFFIX, tgt_num++);
+ }
+
+ tgt->default_acg = scst_alloc_add_acg(NULL, tgt->tgt_name);
+ if (tgt->default_acg == NULL)
+ goto out_free_tgt_name;
+
+ INIT_LIST_HEAD(&tgt->tgt_acg_list);
+
+ rc = scst_create_tgt_sysfs(tgt);
+ if (rc < 0)
+ goto out_clear_acg;
+
+ list_add_tail(&tgt->tgt_list_entry, &vtt->tgt_list);
+
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ PRINT_INFO("Target %s for template %s registered successfully",
+ tgt->tgt_name, vtt->name);
+
+ TRACE_DBG("tgt %p", tgt);
+
+out:
+ return tgt;
+
+out_clear_acg:
+ scst_clear_acg(tgt->default_acg);
+
+out_free_tgt_name:
+ kfree(tgt->tgt_name);
+
+out_unlock_resume:
+ mutex_unlock(&scst_mutex);
+
+out_resume_free:
+ scst_resume_activity();
+
+out_free_tgt_err:
+ scst_tgt_sysfs_put(tgt); /* must not be called under scst_mutex */
+ tgt = NULL;
+
+out_err:
+ PRINT_ERROR("Failed to register target %s for template %s (error %d)",
+ ((tgt != NULL) && (tgt->tgt_name != NULL)) ? tgt->tgt_name : target_name,
+ vtt->name, rc);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_target);
+
+/**
+ * scst_register_target_non_gpl() - register target, non-GPL version
+ *
+ * Registers a target for template vtt and returns new target structure on
+ * success or NULL otherwise.
+ */
+struct scst_tgt *scst_register_target_non_gpl(struct scst_tgt_template *vtt,
+ const char *target_name)
+{
+ struct scst_tgt *res;
+
+ if (scst_check_non_gpl_target_template(vtt)) {
+ res = NULL;
+ goto out;
+ }
+
+ res = scst_register_target(vtt, target_name);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_register_target_non_gpl);
+
+static inline int test_sess_list(struct scst_tgt *tgt)
+{
+ int res;
+ mutex_lock(&scst_mutex);
+ res = list_empty(&tgt->sess_list);
+ mutex_unlock(&scst_mutex);
+ return res;
+}
+
+/**
+ * scst_unregister_target() - unregister target
+ */
+void scst_unregister_target(struct scst_tgt *tgt)
+{
+ struct scst_session *sess;
+ struct scst_tgt_template *vtt = tgt->tgtt;
+ struct scst_acg *acg, *acg_tmp;
+
+ scst_tgt_sysfs_prepare_put(tgt);
+
+ TRACE_DBG("%s", "Calling target driver's release()");
+ tgt->tgtt->release(tgt);
+ TRACE_DBG("%s", "Target driver's release() returned");
+
+ mutex_lock(&scst_mutex);
+again:
+ list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+ if (sess->shut_phase == SCST_SESS_SPH_READY) {
+ /*
+ * Sometimes it's hard for target driver to track all
+ * its sessions (see scst_local, for example), so let's
+ * help it.
+ */
+ mutex_unlock(&scst_mutex);
+ scst_unregister_session(sess, 0, NULL);
+ mutex_lock(&scst_mutex);
+ goto again;
+ }
+ }
+ mutex_unlock(&scst_mutex);
+
+ TRACE_DBG("%s", "Waiting for sessions shutdown");
+ wait_event(tgt->unreg_waitQ, test_sess_list(tgt));
+ TRACE_DBG("%s", "wait_event() returned");
+
+ scst_suspend_activity(false);
+ mutex_lock(&scst_mutex);
+
+ list_del(&tgt->tgt_list_entry);
+
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ scst_clear_acg(tgt->default_acg);
+
+ list_for_each_entry_safe(acg, acg_tmp, &tgt->tgt_acg_list,
+ acg_list_entry) {
+ scst_acg_sysfs_put(acg);
+ }
+
+ del_timer_sync(&tgt->retry_timer);
+
+ PRINT_INFO("Target %s for template %s unregistered successfully",
+ tgt->tgt_name, vtt->name);
+
+ scst_tgt_sysfs_put(tgt); /* must not be called under scst_mutex */
+
+ TRACE_DBG("Unregistering tgt %p finished", tgt);
+ return;
+}
+EXPORT_SYMBOL(scst_unregister_target);
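+
+/*
+ * Illustrative teardown order for a target driver's module exit (my_tgt and
+ * my_tgt_template are hypothetical). Note that
+ * scst_unregister_target_template() also unregisters any targets still
+ * registered for the template.
+ */
+#if 0
+static void __exit my_tgt_exit(void)
+{
+ scst_unregister_target(my_tgt);
+ scst_unregister_target_template(&my_tgt_template);
+}
+#endif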
+
+static int scst_susp_wait(bool interruptible)
+{
+ int res = 0;
+
+ if (interruptible) {
+ res = wait_event_interruptible_timeout(scst_dev_cmd_waitQ,
+ (atomic_read(&scst_cmd_count) == 0),
+ SCST_SUSPENDING_TIMEOUT);
+ if (res <= 0) {
+ __scst_resume_activity();
+ if (res == 0)
+ res = -EBUSY;
+ } else
+ res = 0;
+ } else
+ wait_event(scst_dev_cmd_waitQ,
+ atomic_read(&scst_cmd_count) == 0);
+
+ TRACE_MGMT_DBG("wait_event() returned %d", res);
+ return res;
+}
+
+/**
+ * scst_suspend_activity() - globally suspend any activity
+ *
+ * Description:
+ * Globally suspends any activity and doesn't return until there are no more
+ * active commands (i.e. commands past state SCST_CMD_STATE_INIT). If
+ * "interruptible" is true, it returns with the corresponding error status < 0
+ * after SCST_SUSPENDING_TIMEOUT expires or if it is interrupted by a signal.
+ * If "interruptible" is false, it will wait virtually forever. On success
+ * returns 0.
+ *
+ * New arriving commands stay in the suspended state until
+ * scst_resume_activity() is called.
+ */
+int scst_suspend_activity(bool interruptible)
+{
+ int res = 0;
+ bool rep = false;
+
+ if (interruptible) {
+ if (mutex_lock_interruptible(&scst_suspend_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+ } else
+ mutex_lock(&scst_suspend_mutex);
+
+ TRACE_MGMT_DBG("suspend_count %d", suspend_count);
+ suspend_count++;
+ if (suspend_count > 1)
+ goto out_up;
+
+ set_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+ set_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+ /*
+ * Assignment of SCST_FLAG_SUSPENDING and SCST_FLAG_SUSPENDED must be
+ * ordered with scst_cmd_count. Otherwise lockless logic in
+ * scst_translate_lun() and scst_mgmt_translate_lun() won't work.
+ */
+ smp_mb__after_set_bit();
+
+ /*
+ * See comment in scst_user.c::dev_user_task_mgmt_fn() for more
+ * information about scst_user behavior.
+ *
+ * ToDo: make the global suspending unneeded (switch to per-device
+ * reference counting? That would mean giving up the lockless
+ * implementation of scst_translate_lun()...)
+ */
+
+ if (atomic_read(&scst_cmd_count) != 0) {
+ PRINT_INFO("Waiting for %d active commands to complete... This "
+ "might take few minutes for disks or few hours for "
+ "tapes, if you use long executed commands, like "
+ "REWIND or FORMAT. In case, if you have a hung user "
+ "space device (i.e. made using scst_user module) not "
+ "responding to any commands, if might take virtually "
+ "forever until the corresponding user space "
+ "program recovers and starts responding or gets "
+ "killed.", atomic_read(&scst_cmd_count));
+ rep = true;
+ }
+
+ res = scst_susp_wait(interruptible);
+ if (res != 0)
+ goto out_clear;
+
+ clear_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+ /* See comment about smp_mb() above */
+ smp_mb__after_clear_bit();
+
+ TRACE_MGMT_DBG("Waiting for %d active commands finally to complete",
+ atomic_read(&scst_cmd_count));
+
+ res = scst_susp_wait(interruptible);
+ if (res != 0)
+ goto out_clear;
+
+ if (rep)
+ PRINT_INFO("%s", "All active commands completed");
+
+out_up:
+ mutex_unlock(&scst_suspend_mutex);
+
+out:
+ return res;
+
+out_clear:
+ clear_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+ /* See comment about smp_mb() above */
+ smp_mb__after_clear_bit();
+ goto out_up;
+}
+EXPORT_SYMBOL_GPL(scst_suspend_activity);
+
+static void __scst_resume_activity(void)
+{
+ struct scst_cmd_threads *l;
+
+ suspend_count--;
+ TRACE_MGMT_DBG("suspend_count %d left", suspend_count);
+ if (suspend_count > 0)
+ goto out;
+
+ clear_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+ /*
+ * The barrier is needed to make sure all woken up threads see the
+ * cleared flag. Not sure if it's really needed, but let's be safe.
+ */
+ smp_mb__after_clear_bit();
+
+ list_for_each_entry(l, &scst_cmd_threads_list, lists_list_entry) {
+ wake_up_all(&l->cmd_list_waitQ);
+ }
+ wake_up_all(&scst_init_cmd_list_waitQ);
+
+ spin_lock_irq(&scst_mcmd_lock);
+ if (!list_empty(&scst_delayed_mgmt_cmd_list)) {
+ struct scst_mgmt_cmd *m;
+ m = list_entry(scst_delayed_mgmt_cmd_list.next, typeof(*m),
+ mgmt_cmd_list_entry);
+ TRACE_MGMT_DBG("Moving delayed mgmt cmd %p to head of active "
+ "mgmt cmd list", m);
+ list_move(&m->mgmt_cmd_list_entry, &scst_active_mgmt_cmd_list);
+ }
+ spin_unlock_irq(&scst_mcmd_lock);
+ wake_up_all(&scst_mgmt_cmd_list_waitQ);
+
+out:
+ return;
+}
+
+/**
+ * scst_resume_activity() - globally resume all activities
+ *
+ * Resumes the activities suspended by scst_suspend_activity().
+ */
+void scst_resume_activity(void)
+{
+
+ mutex_lock(&scst_suspend_mutex);
+ __scst_resume_activity();
+ mutex_unlock(&scst_suspend_mutex);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_resume_activity);
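+
+/*
+ * Illustrative sketch of the intended suspend/resume pairing around a global
+ * configuration change; my_reconfigure() is hypothetical:
+ */
+#if 0
+static int my_change_global_settings(void)
+{
+ int res;
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ return res; /* interrupted by a signal or timed out */
+
+ my_reconfigure(); /* whatever needs a quiesced SCST core */
+
+ scst_resume_activity();
+ return 0;
+}
+#endif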
+
+static int scst_register_device(struct scsi_device *scsidp)
+{
+ int res = 0;
+ struct scst_device *dev, *d;
+ struct scst_dev_type *dt;
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out_err;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_resume;
+ }
+
+ res = scst_alloc_device(GFP_KERNEL, &dev);
+ if (res != 0)
+ goto out_up;
+
+ dev->type = scsidp->type;
+
+ dev->virt_name = kmalloc(50, GFP_KERNEL);
+ if (dev->virt_name == NULL) {
+ PRINT_ERROR("%s", "Unable to alloc device name");
+ res = -ENOMEM;
+ goto out_free_dev;
+ }
+ snprintf(dev->virt_name, 50, "%d:%d:%d:%d", scsidp->host->host_no,
+ scsidp->channel, scsidp->id, scsidp->lun);
+
+ list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+ if (strcmp(d->virt_name, dev->virt_name) == 0) {
+ PRINT_ERROR("Device %s already exists", dev->virt_name);
+ res = -EEXIST;
+ goto out_free_dev;
+ }
+ }
+
+ dev->scsi_dev = scsidp;
+
+ list_add_tail(&dev->dev_list_entry, &scst_dev_list);
+
+ res = scst_create_device_sysfs(dev);
+ if (res != 0)
+ goto out_free;
+
+ list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+ if (dt->type == scsidp->type) {
+ res = scst_assign_dev_handler(dev, dt);
+ if (res != 0)
+ goto out_free;
+ break;
+ }
+ }
+
+out_up:
+ mutex_unlock(&scst_mutex);
+
+out_resume:
+ scst_resume_activity();
+
+out_err:
+ if (res == 0) {
+ PRINT_INFO("Attached to scsi%d, channel %d, id %d, lun %d, "
+ "type %d", scsidp->host->host_no, scsidp->channel,
+ scsidp->id, scsidp->lun, scsidp->type);
+ } else {
+ PRINT_ERROR("Failed to attach to scsi%d, channel %d, id %d, "
+ "lun %d, type %d", scsidp->host->host_no,
+ scsidp->channel, scsidp->id, scsidp->lun, scsidp->type);
+ }
+ return res;
+
+out_free:
+ list_del(&dev->dev_list_entry);
+
+out_free_dev:
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+ scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+ goto out_err;
+}
+
+static void scst_unregister_device(struct scsi_device *scsidp)
+{
+ struct scst_device *d, *dev = NULL;
+ struct scst_acg_dev *acg_dev, *aa;
+
+ scst_suspend_activity(false);
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+ if (d->scsi_dev == scsidp) {
+ dev = d;
+ TRACE_DBG("Target device %p found", dev);
+ break;
+ }
+ }
+ if (dev == NULL) {
+ PRINT_ERROR("%s", "Target device not found");
+ goto out_resume;
+ }
+
+ list_del(&dev->dev_list_entry);
+
+ list_for_each_entry_safe(acg_dev, aa, &dev->dev_acg_dev_list,
+ dev_acg_dev_list_entry) {
+ scst_acg_remove_dev(acg_dev->acg, dev, true);
+ }
+
+ scst_assign_dev_handler(dev, &scst_null_devtype);
+
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+
+ PRINT_INFO("Detached from scsi%d, channel %d, id %d, lun %d, type %d",
+ scsidp->host->host_no, scsidp->channel, scsidp->id,
+ scsidp->lun, scsidp->type);
+
+out:
+ return;
+
+out_resume:
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+ goto out;
+}
+
+static int scst_dev_handler_check(struct scst_dev_type *dev_handler)
+{
+ int res = 0;
+
+ if (dev_handler->parse == NULL) {
+ PRINT_ERROR("scst dev handler %s must have "
+ "parse() method.", dev_handler->name);
+ res = -EINVAL;
+ goto out;
+ }
+
+ if (((dev_handler->add_device != NULL) &&
+ (dev_handler->del_device == NULL)) ||
+ ((dev_handler->add_device == NULL) &&
+ (dev_handler->del_device != NULL))) {
+ PRINT_ERROR("Dev handler %s must either define both "
+ "add_device() and del_device(), or none.",
+ dev_handler->name);
+ res = -EINVAL;
+ goto out;
+ }
+
+ if (dev_handler->exec == NULL) {
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+ dev_handler->exec_atomic = 1;
+#else
+ dev_handler->exec_atomic = 0;
+#endif
+ }
+
+ if (dev_handler->dev_done == NULL)
+ dev_handler->dev_done_atomic = 1;
+
+out:
+ return res;
+}
+
+/**
+ * scst_register_virtual_device() - register a virtual device.
+ * @dev_handler: the device's device handler
+ * @dev_name: the new device name, NULL-terminated string. Must be unique
+ * among all virtual devices in the system.
+ *
+ * Registers a virtual device and returns the ID assigned to the device on
+ * success, or a negative value otherwise
+ */
+int scst_register_virtual_device(struct scst_dev_type *dev_handler,
+ const char *dev_name)
+{
+ int res, rc;
+ struct scst_device *dev;
+
+ if (dev_handler == NULL) {
+ PRINT_ERROR("%s: valid device handler must be supplied",
+ __func__);
+ res = -EINVAL;
+ goto out;
+ }
+
+ if (dev_name == NULL) {
+ PRINT_ERROR("%s: device name must be non-NULL", __func__);
+ res = -EINVAL;
+ goto out;
+ }
+
+ res = scst_dev_handler_check(dev_handler);
+ if (res != 0)
+ goto out;
+
+ list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+ if (strcmp(dev->virt_name, dev_name) == 0) {
+ PRINT_ERROR("Device %s already exists", dev_name);
+ res = -EEXIST;
+ goto out;
+ }
+ }
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_resume;
+ }
+
+ res = scst_alloc_device(GFP_KERNEL, &dev);
+ if (res != 0)
+ goto out_up;
+
+ dev->type = dev_handler->type;
+ dev->scsi_dev = NULL;
+ dev->virt_name = kstrdup(dev_name, GFP_KERNEL);
+ if (dev->virt_name == NULL) {
+ PRINT_ERROR("Unable to allocate virt_name for dev %s",
+ dev_name);
+ res = -ENOMEM;
+ goto out_release;
+ }
+ dev->virt_id = scst_virt_dev_last_id++;
+
+ list_add_tail(&dev->dev_list_entry, &scst_dev_list);
+
+ res = dev->virt_id;
+
+ rc = scst_create_device_sysfs(dev);
+ if (rc != 0) {
+ res = rc;
+ goto out_free_del;
+ }
+
+ rc = scst_assign_dev_handler(dev, dev_handler);
+ if (rc != 0) {
+ res = rc;
+ goto out_free_del;
+ }
+
+out_up:
+ mutex_unlock(&scst_mutex);
+
+out_resume:
+ scst_resume_activity();
+
+out:
+ if (res > 0)
+ PRINT_INFO("Attached to virtual device %s (id %d)",
+ dev_name, dev->virt_id);
+ else
+ PRINT_INFO("Failed to attach to virtual device %s", dev_name);
+ return res;
+
+out_free_del:
+ list_del(&dev->dev_list_entry);
+
+out_release:
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+ scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_virtual_device);
+
+/**
+ * scst_unregister_virtual_device() - unregister a virtual device.
+ * @id: the device's ID, returned by the registration function
+ */
+void scst_unregister_virtual_device(int id)
+{
+ struct scst_device *d, *dev = NULL;
+ struct scst_acg_dev *acg_dev, *aa;
+
+ scst_suspend_activity(false);
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+ if (d->virt_id == id) {
+ dev = d;
+ TRACE_DBG("Target device %p (id %d) found", dev, id);
+ break;
+ }
+ }
+ if (dev == NULL) {
+ PRINT_ERROR("Target virtual device (id %d) not found", id);
+ goto out_unblock;
+ }
+
+ list_del(&dev->dev_list_entry);
+
+ list_for_each_entry_safe(acg_dev, aa, &dev->dev_acg_dev_list,
+ dev_acg_dev_list_entry) {
+ scst_acg_remove_dev(acg_dev->acg, dev, true);
+ }
+
+ scst_assign_dev_handler(dev, &scst_null_devtype);
+
+ PRINT_INFO("Detached from virtual device %s (id %d)",
+ dev->virt_name, dev->virt_id);
+
+out_unblock:
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_virtual_device);
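+
+/*
+ * Illustrative sketch of how a virtual dev handler might use the two
+ * functions above; my_devtype, my_virt_id and the name argument are
+ * hypothetical:
+ */
+#if 0
+static int my_virt_id;
+
+static int my_add_device(const char *name)
+{
+ int id;
+
+ id = scst_register_virtual_device(&my_devtype, name);
+ if (id < 0)
+ return id; /* negative error code */
+ my_virt_id = id; /* remember the ID for later unregistration */
+ return 0;
+}
+
+static void my_del_device(void)
+{
+ scst_unregister_virtual_device(my_virt_id);
+}
+#endif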
+
+/**
+ * __scst_register_dev_driver() - register pass-through dev handler driver
+ * @dev_type: dev handler template
+ * @version: SCST_INTERFACE_VERSION version string to ensure that
+ * SCST core and the dev handler use the same version of
+ * the SCST interface
+ *
+ * Description:
+ * Registers a pass-through dev handler driver. Returns 0 on success
+ * or appropriate error code otherwise.
+ *
+ * Note: *dev_type must be static!
+ */
+int __scst_register_dev_driver(struct scst_dev_type *dev_type,
+ const char *version)
+{
+ struct scst_dev_type *dt;
+ struct scst_device *dev;
+ int res;
+ int exist;
+
+ if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+ PRINT_ERROR("Incorrect version of dev handler %s",
+ dev_type->name);
+ res = -EINVAL;
+ goto out_error;
+ }
+
+ res = scst_dev_handler_check(dev_type);
+ if (res != 0)
+ goto out_error;
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out_error;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_err_res;
+ }
+
+ exist = 0;
+ list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+ if (strcmp(dt->name, dev_type->name) == 0) {
+ PRINT_ERROR("Device type handler \"%s\" already "
+ "exist", dt->name);
+ exist = 1;
+ break;
+ }
+ }
+ if (exist) {
+ res = -EEXIST;
+ goto out_up;
+ }
+
+ res = scst_create_devt_sysfs(dev_type);
+ if (res < 0)
+ goto out_free;
+
+ list_add_tail(&dev_type->dev_type_list_entry, &scst_dev_type_list);
+
+ list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+ if (dev->scsi_dev == NULL || dev->handler != &scst_null_devtype)
+ continue;
+ if (dev->scsi_dev->type == dev_type->type)
+ scst_assign_dev_handler(dev, dev_type);
+ }
+
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ if (res == 0) {
+ PRINT_INFO("Device handler \"%s\" for type %d registered "
+ "successfully", dev_type->name, dev_type->type);
+ }
+
+out:
+ return res;
+
+out_free:
+ scst_devt_sysfs_put(dev_type);
+
+out_up:
+ mutex_unlock(&scst_mutex);
+
+out_err_res:
+ scst_resume_activity();
+
+out_error:
+ PRINT_ERROR("Failed to register device handler \"%s\" for type %d",
+ dev_type->name, dev_type->type);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_dev_driver);
+
+/**
+ * scst_unregister_dev_driver() - unregister pass-through dev handler driver
+ */
+void scst_unregister_dev_driver(struct scst_dev_type *dev_type)
+{
+ struct scst_device *dev;
+ struct scst_dev_type *dt;
+ int found = 0;
+
+ scst_suspend_activity(false);
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+ if (strcmp(dt->name, dev_type->name) == 0) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found) {
+ PRINT_ERROR("Dev handler \"%s\" isn't registered",
+ dev_type->name);
+ goto out_up;
+ }
+
+ list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+ if (dev->handler == dev_type) {
+ scst_assign_dev_handler(dev, &scst_null_devtype);
+ TRACE_DBG("Dev handler removed from device %p", dev);
+ }
+ }
+
+ list_del(&dev_type->dev_type_list_entry);
+
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+
+ scst_devt_sysfs_put(dev_type);
+
+ PRINT_INFO("Device handler \"%s\" for type %d unloaded",
+ dev_type->name, dev_type->type);
+
+out:
+ return;
+
+out_up:
+ mutex_unlock(&scst_mutex);
+ scst_resume_activity();
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_dev_driver);
+
+/**
+ * __scst_register_virtual_dev_driver() - register virtual dev handler driver
+ * @dev_type: dev handler template
+ * @version: SCST_INTERFACE_VERSION version string to ensure that
+ * SCST core and the dev handler use the same version of
+ * the SCST interface
+ *
+ * Description:
+ * Registers a virtual dev handler driver. Returns 0 on success or
+ * appropriate error code otherwise.
+ *
+ * Note: *dev_type must be static!
+ */
+int __scst_register_virtual_dev_driver(struct scst_dev_type *dev_type,
+ const char *version)
+{
+ int res;
+
+ if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+ PRINT_ERROR("Incorrect version of virtual dev handler %s",
+ dev_type->name);
+ res = -EINVAL;
+ goto out_err;
+ }
+
+ res = scst_dev_handler_check(dev_type);
+ if (res != 0)
+ goto out_err;
+
+ res = scst_create_devt_sysfs(dev_type);
+ if (res < 0)
+ goto out_free;
+
+ if (dev_type->type != -1) {
+ PRINT_INFO("Virtual device handler %s for type %d "
+ "registered successfully", dev_type->name,
+ dev_type->type);
+ } else {
+ PRINT_INFO("Virtual device handler \"%s\" registered "
+ "successfully", dev_type->name);
+ }
+
+out:
+ return res;
+
+out_free:
+
+ scst_devt_sysfs_put(dev_type);
+
+out_err:
+ PRINT_ERROR("Failed to register virtual device handler \"%s\"",
+ dev_type->name);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_virtual_dev_driver);
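+
+/*
+ * Minimal illustrative sketch of a virtual dev handler registration; the my_*
+ * callbacks are hypothetical. parse() is the only callback that
+ * scst_dev_handler_check() above requires.
+ */
+#if 0
+static struct scst_dev_type my_virt_devtype = {
+ .name = "my_vdisk",
+ .type = TYPE_DISK,
+ .parse = my_parse,
+ .exec = my_exec,
+ .dev_done = my_dev_done,
+};
+
+static int __init my_dev_handler_init(void)
+{
+ return __scst_register_virtual_dev_driver(&my_virt_devtype,
+ SCST_INTERFACE_VERSION);
+}
+#endif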
+
+/**
+ * scst_unregister_virtual_dev_driver() - unregister virtual dev driver
+ */
+void scst_unregister_virtual_dev_driver(struct scst_dev_type *dev_type)
+{
+
+ scst_devt_sysfs_put(dev_type);
+
+ PRINT_INFO("Device handler \"%s\" unloaded", dev_type->name);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_virtual_dev_driver);
+
+/* scst_mutex is supposed to be held */
+int scst_add_threads(struct scst_cmd_threads *cmd_threads,
+ struct scst_device *dev, struct scst_tgt_dev *tgt_dev, int num)
+{
+ int res, i;
+ struct scst_cmd_thread_t *thr;
+ int n = 0, tgt_dev_num = 0;
+
+ list_for_each_entry(thr, &cmd_threads->threads_list, thread_list_entry) {
+ n++;
+ }
+
+ if (tgt_dev != NULL) {
+ struct scst_tgt_dev *t;
+ list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if (t == tgt_dev)
+ break;
+ tgt_dev_num++;
+ }
+ }
+
+ for (i = 0; i < num; i++) {
+ struct scst_cmd_thread_t *thr;
+
+ thr = kmalloc(sizeof(*thr), GFP_KERNEL);
+ if (!thr) {
+ res = -ENOMEM;
+ PRINT_ERROR("fail to allocate thr %d", res);
+ goto out_error;
+ }
+
+ if (dev != NULL) {
+ char nm[14]; /* to limit the name's len */
+ strlcpy(nm, dev->virt_name, ARRAY_SIZE(nm));
+ thr->cmd_thread = kthread_create(scst_cmd_thread,
+ cmd_threads, "%s%d", nm, n++);
+ } else if (tgt_dev != NULL) {
+ char nm[11]; /* to limit the name's len */
+ strlcpy(nm, tgt_dev->dev->virt_name, ARRAY_SIZE(nm));
+ thr->cmd_thread = kthread_create(scst_cmd_thread,
+ cmd_threads, "%s%d_%d", nm, tgt_dev_num, n++);
+ } else
+ thr->cmd_thread = kthread_create(scst_cmd_thread,
+ cmd_threads, "scsi_tgt%d", n++);
+
+ if (IS_ERR(thr->cmd_thread)) {
+ res = PTR_ERR(thr->cmd_thread);
+ PRINT_ERROR("kthread_create() failed: %d", res);
+ kfree(thr);
+ goto out_error;
+ }
+
+ list_add(&thr->thread_list_entry, &cmd_threads->threads_list);
+ cmd_threads->nr_threads++;
+
+ wake_up_process(thr->cmd_thread);
+ }
+
+ res = 0;
+
+out:
+ return res;
+
+out_error:
+ scst_del_threads(cmd_threads, i);
+ goto out;
+}
+
+/* scst_mutex is supposed to be held */
+void scst_del_threads(struct scst_cmd_threads *cmd_threads, int num)
+{
+ struct scst_cmd_thread_t *ct, *tmp;
+
+ if (num == 0)
+ goto out;
+
+ list_for_each_entry_safe_reverse(ct, tmp, &cmd_threads->threads_list,
+ thread_list_entry) {
+ int rc;
+ struct scst_device *dev;
+
+ rc = kthread_stop(ct->cmd_thread);
+ if (rc < 0)
+ TRACE_MGMT_DBG("kthread_stop() failed: %d", rc);
+
+ list_del(&ct->thread_list_entry);
+
+ list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ scst_del_thr_data(tgt_dev, ct->cmd_thread);
+ }
+ }
+
+ kfree(ct);
+
+ cmd_threads->nr_threads--;
+
+ --num;
+ if (num == 0)
+ break;
+ }
+
+ EXTRACHECKS_BUG_ON((cmd_threads->nr_threads == 0) &&
+ (cmd_threads->io_context != NULL));
+
+out:
+ return;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+void scst_stop_dev_threads(struct scst_device *dev)
+{
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ scst_tgt_dev_stop_threads(tgt_dev);
+ }
+
+ if ((dev->threads_num > 0) &&
+ (dev->threads_pool_type == SCST_THREADS_POOL_SHARED))
+ scst_del_threads(&dev->dev_cmd_threads, -1);
+ return;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+int scst_create_dev_threads(struct scst_device *dev)
+{
+ int res = 0;
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ res = scst_tgt_dev_setup_threads(tgt_dev);
+ if (res != 0)
+ goto out_err;
+ }
+
+ if ((dev->threads_num > 0) &&
+ (dev->threads_pool_type == SCST_THREADS_POOL_SHARED)) {
+ res = scst_add_threads(&dev->dev_cmd_threads, dev, NULL,
+ dev->threads_num);
+ if (res != 0)
+ goto out_err;
+ }
+
+out:
+ return res;
+
+out_err:
+ scst_stop_dev_threads(dev);
+ goto out;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+int scst_assign_dev_handler(struct scst_device *dev,
+ struct scst_dev_type *handler)
+{
+ int res = 0;
+ struct scst_tgt_dev *tgt_dev;
+ LIST_HEAD(attached_tgt_devs);
+
+ BUG_ON(handler == NULL);
+
+ if (dev->handler == handler)
+ goto out;
+
+ if (dev->handler == NULL)
+ goto assign;
+
+ if (dev->handler->detach_tgt) {
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ TRACE_DBG("Calling dev handler's detach_tgt(%p)",
+ tgt_dev);
+ dev->handler->detach_tgt(tgt_dev);
+ TRACE_DBG("%s", "Dev handler's detach_tgt() returned");
+ }
+ }
+
+ if (dev->handler->detach) {
+ TRACE_DBG("%s", "Calling dev handler's detach()");
+ dev->handler->detach(dev);
+ TRACE_DBG("%s", "Old handler's detach() returned");
+ }
+
+ scst_stop_dev_threads(dev);
+
+ scst_devt_dev_sysfs_put(dev);
+
+assign:
+ dev->handler = handler;
+
+ if (handler == NULL)
+ goto out;
+
+ dev->threads_num = handler->threads_num;
+ dev->threads_pool_type = handler->threads_pool_type;
+
+ res = scst_create_devt_dev_sysfs(dev);
+ if (res != 0)
+ goto out_null;
+
+ if (handler->attach) {
+ TRACE_DBG("Calling new dev handler's attach(%p)", dev);
+ res = handler->attach(dev);
+ TRACE_DBG("New dev handler's attach() returned %d", res);
+ if (res != 0) {
+ PRINT_ERROR("New device handler's %s attach() "
+ "failed: %d", handler->name, res);
+ goto out_remove_sysfs;
+ }
+ }
+
+ if (handler->attach_tgt) {
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ TRACE_DBG("Calling dev handler's attach_tgt(%p)",
+ tgt_dev);
+ res = handler->attach_tgt(tgt_dev);
+ TRACE_DBG("%s", "Dev handler's attach_tgt() returned");
+ if (res != 0) {
+ PRINT_ERROR("Device handler's %s attach_tgt() "
+ "failed: %d", handler->name, res);
+ goto out_err_detach_tgt;
+ }
+ list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+ &attached_tgt_devs);
+ }
+ }
+
+ res = scst_create_dev_threads(dev);
+ if (res != 0)
+ goto out_err_detach_tgt;
+
+out:
+ return res;
+
+out_err_detach_tgt:
+ if (handler && handler->detach_tgt) {
+ list_for_each_entry(tgt_dev, &attached_tgt_devs,
+ extra_tgt_dev_list_entry) {
+ TRACE_DBG("Calling handler's detach_tgt(%p)",
+ tgt_dev);
+ handler->detach_tgt(tgt_dev);
+ TRACE_DBG("%s", "Handler's detach_tgt() returned");
+ }
+ }
+ if (handler && handler->detach) {
+ TRACE_DBG("%s", "Calling handler's detach()");
+ handler->detach(dev);
+ TRACE_DBG("%s", "Handler's detach() returned");
+ }
+
+out_remove_sysfs:
+ scst_devt_dev_sysfs_put(dev);
+
+out_null:
+ dev->handler = &scst_null_devtype;
+ goto out;
+}
+
+/**
+ * scst_init_threads() - initialize SCST processing threads pool
+ *
+ * Initializes scst_cmd_threads structure
+ */
+void scst_init_threads(struct scst_cmd_threads *cmd_threads)
+{
+
+ spin_lock_init(&cmd_threads->cmd_list_lock);
+ INIT_LIST_HEAD(&cmd_threads->active_cmd_list);
+ init_waitqueue_head(&cmd_threads->cmd_list_waitQ);
+ INIT_LIST_HEAD(&cmd_threads->threads_list);
+
+ mutex_lock(&scst_suspend_mutex);
+ list_add_tail(&cmd_threads->lists_list_entry,
+ &scst_cmd_threads_list);
+ mutex_unlock(&scst_suspend_mutex);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_init_threads);
+
+/**
+ * scst_deinit_threads() - deinitialize SCST processing threads pool
+ *
+ * Deinitializes scst_cmd_threads structure
+ */
+void scst_deinit_threads(struct scst_cmd_threads *cmd_threads)
+{
+
+ mutex_lock(&scst_suspend_mutex);
+ list_del(&cmd_threads->lists_list_entry);
+ mutex_unlock(&scst_suspend_mutex);
+
+ BUG_ON(cmd_threads->io_context);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_deinit_threads);
+
+static void scst_stop_all_threads(void)
+{
+
+ mutex_lock(&scst_mutex);
+
+ scst_del_threads(&scst_main_cmd_threads, -1);
+
+ if (scst_mgmt_cmd_thread)
+ kthread_stop(scst_mgmt_cmd_thread);
+ if (scst_mgmt_thread)
+ kthread_stop(scst_mgmt_thread);
+ if (scst_init_cmd_thread)
+ kthread_stop(scst_init_cmd_thread);
+
+ mutex_unlock(&scst_mutex);
+ return;
+}
+
+static int scst_start_all_threads(int num)
+{
+ int res;
+
+ mutex_lock(&scst_mutex);
+
+ res = scst_add_threads(&scst_main_cmd_threads, NULL, NULL, num);
+ if (res < 0)
+ goto out_unlock;
+
+ scst_init_cmd_thread = kthread_run(scst_init_thread,
+ NULL, "scsi_tgt_init");
+ if (IS_ERR(scst_init_cmd_thread)) {
+ res = PTR_ERR(scst_init_cmd_thread);
+ PRINT_ERROR("kthread_create() for init cmd failed: %d", res);
+ scst_init_cmd_thread = NULL;
+ goto out_unlock;
+ }
+
+ scst_mgmt_cmd_thread = kthread_run(scst_tm_thread,
+ NULL, "scsi_tm");
+ if (IS_ERR(scst_mgmt_cmd_thread)) {
+ res = PTR_ERR(scst_mgmt_cmd_thread);
+ PRINT_ERROR("kthread_create() for TM failed: %d", res);
+ scst_mgmt_cmd_thread = NULL;
+ goto out_unlock;
+ }
+
+ scst_mgmt_thread = kthread_run(scst_global_mgmt_thread,
+ NULL, "scsi_tgt_mgmt");
+ if (IS_ERR(scst_mgmt_thread)) {
+ res = PTR_ERR(scst_mgmt_thread);
+ PRINT_ERROR("kthread_create() for mgmt failed: %d", res);
+ scst_mgmt_thread = NULL;
+ goto out_unlock;
+ }
+
+out_unlock:
+ mutex_unlock(&scst_mutex);
+ return res;
+}
+
+/**
+ * scst_get() - increase global SCST ref counter
+ *
+ * Increases the global SCST ref counter, which prevents entering the
+ * suspended activities stage and thus protects against any global management
+ * operations.
+ */
+void scst_get(void)
+{
+ __scst_get(0);
+}
+EXPORT_SYMBOL_GPL(scst_get);
+
+/**
+ * scst_put() - decrease global SCST ref counter
+ *
+ * Decreases the global SCST ref counter, which prevents entering the
+ * suspended activities stage and thus protects against any global management
+ * operations. When the counter reaches zero and a suspend of activities is
+ * waiting for it, the activities will be suspended.
+ */
+void scst_put(void)
+{
+ __scst_put();
+}
+EXPORT_SYMBOL_GPL(scst_put);
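+
+/*
+ * Illustrative sketch of the intended usage of scst_get()/scst_put(): hold
+ * the reference only around short code paths that must not race with a
+ * global suspend; my_lookup_and_submit() is hypothetical:
+ */
+#if 0
+static void my_handle_incoming_request(void *request)
+{
+ scst_get();
+ my_lookup_and_submit(request);
+ scst_put();
+}
+#endif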
+
+/**
+ * scst_get_setup_id() - return SCST setup ID
+ *
+ * Returns SCST setup ID. This ID can be used for multiple
+ * setups with the same configuration.
+ */
+unsigned int scst_get_setup_id(void)
+{
+ return scst_setup_id;
+}
+EXPORT_SYMBOL_GPL(scst_get_setup_id);
+
+static int scst_add(struct device *cdev, struct class_interface *intf)
+{
+ struct scsi_device *scsidp;
+ int res = 0;
+
+ scsidp = to_scsi_device(cdev->parent);
+
+ if (strcmp(scsidp->host->hostt->name, SCST_LOCAL_NAME) != 0)
+ res = scst_register_device(scsidp);
+ return res;
+}
+
+static void scst_remove(struct device *cdev, struct class_interface *intf)
+{
+ struct scsi_device *scsidp;
+
+ scsidp = to_scsi_device(cdev->parent);
+
+ if (strcmp(scsidp->host->hostt->name, SCST_LOCAL_NAME) != 0)
+ scst_unregister_device(scsidp);
+ return;
+}
+
+static struct class_interface scst_interface = {
+ .add_dev = scst_add,
+ .remove_dev = scst_remove,
+};
+
+static void __init scst_print_config(void)
+{
+ char buf[128];
+ int i, j;
+
+ i = snprintf(buf, sizeof(buf), "Enabled features: ");
+ j = i;
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+ i += snprintf(&buf[i], sizeof(buf) - i, "STRICT_SERIALIZING");
+#endif
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sEXTRACHECKS",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sTRACING",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_TM
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_TM",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_RETRY",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_OOM
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_OOM",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_SN
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_SN",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sUSE_EXPECTED_VALUES",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+ i += snprintf(&buf[i], sizeof(buf) - i,
+ "%sALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ",
+ (j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_STRICT_SECURITY
+ i += snprintf(&buf[i], sizeof(buf) - i, "%sSCST_STRICT_SECURITY",
+ (j == i) ? "" : ", ");
+#endif
+
+ if (j != i)
+ PRINT_INFO("%s", buf);
+}
+
+static int __init init_scst(void)
+{
+ int res, i;
+ int scst_num_cpus;
+
+ {
+ struct scsi_sense_hdr *shdr;
+ BUILD_BUG_ON(SCST_SENSE_BUFFERSIZE < sizeof(*shdr));
+ }
+ {
+ struct scst_tgt_dev *t;
+ struct scst_cmd *c;
+ BUILD_BUG_ON(sizeof(t->curr_sn) != sizeof(t->expected_sn));
+ BUILD_BUG_ON(sizeof(c->sn) != sizeof(t->expected_sn));
+ }
+
+ mutex_init(&scst_mutex);
+ INIT_LIST_HEAD(&scst_template_list);
+ INIT_LIST_HEAD(&scst_dev_list);
+ INIT_LIST_HEAD(&scst_dev_type_list);
+ spin_lock_init(&scst_main_lock);
+ spin_lock_init(&scst_init_lock);
+ init_waitqueue_head(&scst_init_cmd_list_waitQ);
+ INIT_LIST_HEAD(&scst_init_cmd_list);
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ scst_trace_flag = SCST_DEFAULT_LOG_FLAGS;
+#endif
+ atomic_set(&scst_cmd_count, 0);
+ spin_lock_init(&scst_mcmd_lock);
+ INIT_LIST_HEAD(&scst_active_mgmt_cmd_list);
+ INIT_LIST_HEAD(&scst_delayed_mgmt_cmd_list);
+ init_waitqueue_head(&scst_mgmt_cmd_list_waitQ);
+ init_waitqueue_head(&scst_mgmt_waitQ);
+ spin_lock_init(&scst_mgmt_lock);
+ INIT_LIST_HEAD(&scst_sess_init_list);
+ INIT_LIST_HEAD(&scst_sess_shut_list);
+ init_waitqueue_head(&scst_dev_cmd_waitQ);
+ mutex_init(&scst_suspend_mutex);
+ INIT_LIST_HEAD(&scst_cmd_threads_list);
+ scst_virt_dev_last_id = 1;
+
+ scst_init_threads(&scst_main_cmd_threads);
+
+ res = scst_lib_init();
+ if (res != 0)
+ goto out;
+
+ scst_num_cpus = num_online_cpus();
+
+ /* ToDo: register_cpu_notifier() */
+
+ if (scst_threads == 0)
+ scst_threads = scst_num_cpus;
+
+ if (scst_threads < 1) {
+ PRINT_ERROR("%s", "scst_threads can not be less than 1");
+ scst_threads = scst_num_cpus;
+ }
+
+#define INIT_CACHEP(p, s, o) do { \
+ p = KMEM_CACHE(s, SCST_SLAB_FLAGS); \
+ TRACE_MEM("Slab create: %s at %p size %zd", #s, p, \
+ sizeof(struct s)); \
+ if (p == NULL) { \
+ res = -ENOMEM; \
+ goto o; \
+ } \
+ } while (0)
+
+ INIT_CACHEP(scst_mgmt_cachep, scst_mgmt_cmd, out_lib_exit);
+ INIT_CACHEP(scst_mgmt_stub_cachep, scst_mgmt_cmd_stub,
+ out_destroy_mgmt_cache);
+ INIT_CACHEP(scst_ua_cachep, scst_tgt_dev_UA,
+ out_destroy_mgmt_stub_cache);
+ {
+ struct scst_sense { uint8_t s[SCST_SENSE_BUFFERSIZE]; };
+ INIT_CACHEP(scst_sense_cachep, scst_sense,
+ out_destroy_ua_cache);
+ }
+ INIT_CACHEP(scst_aen_cachep, scst_aen, out_destroy_sense_cache);
+ INIT_CACHEP(scst_cmd_cachep, scst_cmd, out_destroy_aen_cache);
+ INIT_CACHEP(scst_sess_cachep, scst_session, out_destroy_cmd_cache);
+ INIT_CACHEP(scst_tgtd_cachep, scst_tgt_dev, out_destroy_sess_cache);
+ INIT_CACHEP(scst_acgd_cachep, scst_acg_dev, out_destroy_tgt_cache);
+
+ scst_mgmt_mempool = mempool_create(64, mempool_alloc_slab,
+ mempool_free_slab, scst_mgmt_cachep);
+ if (scst_mgmt_mempool == NULL) {
+ res = -ENOMEM;
+ goto out_destroy_acg_cache;
+ }
+
+ /*
+ * All mgmt stubs, UAs and sense buffers are bursty and losing them
+ * may have fatal consequences, so let's have big pools for them.
+ */
+
+ scst_mgmt_stub_mempool = mempool_create(1024, mempool_alloc_slab,
+ mempool_free_slab, scst_mgmt_stub_cachep);
+ if (scst_mgmt_stub_mempool == NULL) {
+ res = -ENOMEM;
+ goto out_destroy_mgmt_mempool;
+ }
+
+ scst_ua_mempool = mempool_create(512, mempool_alloc_slab,
+ mempool_free_slab, scst_ua_cachep);
+ if (scst_ua_mempool == NULL) {
+ res = -ENOMEM;
+ goto out_destroy_mgmt_stub_mempool;
+ }
+
+ scst_sense_mempool = mempool_create(1024, mempool_alloc_slab,
+ mempool_free_slab, scst_sense_cachep);
+ if (scst_sense_mempool == NULL) {
+ res = -ENOMEM;
+ goto out_destroy_ua_mempool;
+ }
+
+ scst_aen_mempool = mempool_create(100, mempool_alloc_slab,
+ mempool_free_slab, scst_aen_cachep);
+ if (scst_aen_mempool == NULL) {
+ res = -ENOMEM;
+ goto out_destroy_sense_mempool;
+ }
+
+ res = scst_sysfs_init();
+ if (res != 0)
+ goto out_destroy_aen_mempool;
+
+ if (scst_max_cmd_mem == 0) {
+ struct sysinfo si;
+ si_meminfo(&si);
+#if BITS_PER_LONG == 32
+ scst_max_cmd_mem = min(
+ (((uint64_t)(si.totalram - si.totalhigh) << PAGE_SHIFT)
+ >> 20) >> 2, (uint64_t)1 << 30);
+#else
+ scst_max_cmd_mem = (((si.totalram - si.totalhigh) << PAGE_SHIFT)
+ >> 20) >> 2;
+#endif
+ }
+
+ if (scst_max_dev_cmd_mem != 0) {
+ if (scst_max_dev_cmd_mem > scst_max_cmd_mem) {
+ PRINT_ERROR("scst_max_dev_cmd_mem (%d) > "
+ "scst_max_cmd_mem (%d)",
+ scst_max_dev_cmd_mem,
+ scst_max_cmd_mem);
+ scst_max_dev_cmd_mem = scst_max_cmd_mem;
+ }
+ } else
+ scst_max_dev_cmd_mem = scst_max_cmd_mem * 2 / 5;
+
+ res = scst_sgv_pools_init(
+ ((uint64_t)scst_max_cmd_mem << 10) >> (PAGE_SHIFT - 10), 0);
+ if (res != 0)
+ goto out_sysfs_cleanup;
+
+ res = scsi_register_interface(&scst_interface);
+ if (res != 0)
+ goto out_destroy_sgv_pool;
+
+ for (i = 0; i < (int)ARRAY_SIZE(scst_tasklets); i++) {
+ spin_lock_init(&scst_tasklets[i].tasklet_lock);
+ INIT_LIST_HEAD(&scst_tasklets[i].tasklet_cmd_list);
+ tasklet_init(&scst_tasklets[i].tasklet,
+ (void *)scst_cmd_tasklet,
+ (unsigned long)&scst_tasklets[i]);
+ }
+
+ TRACE_DBG("%d CPUs found, starting %d threads", scst_num_cpus,
+ scst_threads);
+
+ res = scst_start_all_threads(scst_threads);
+ if (res < 0)
+ goto out_thread_free;
+
+ PRINT_INFO("SCST version %s loaded successfully (max mem for "
+ "commands %dMB, per device %dMB)", SCST_VERSION_STRING,
+ scst_max_cmd_mem, scst_max_dev_cmd_mem);
+
+ scst_print_config();
+
+out:
+ return res;
+
+out_thread_free:
+ scst_stop_all_threads();
+
+ scsi_unregister_interface(&scst_interface);
+
+out_destroy_sgv_pool:
+ scst_sgv_pools_deinit();
+
+out_sysfs_cleanup:
+ scst_sysfs_cleanup();
+
+out_destroy_aen_mempool:
+ mempool_destroy(scst_aen_mempool);
+
+out_destroy_sense_mempool:
+ mempool_destroy(scst_sense_mempool);
+
+out_destroy_ua_mempool:
+ mempool_destroy(scst_ua_mempool);
+
+out_destroy_mgmt_stub_mempool:
+ mempool_destroy(scst_mgmt_stub_mempool);
+
+out_destroy_mgmt_mempool:
+ mempool_destroy(scst_mgmt_mempool);
+
+out_destroy_acg_cache:
+ kmem_cache_destroy(scst_acgd_cachep);
+
+out_destroy_tgt_cache:
+ kmem_cache_destroy(scst_tgtd_cachep);
+
+out_destroy_sess_cache:
+ kmem_cache_destroy(scst_sess_cachep);
+
+out_destroy_cmd_cache:
+ kmem_cache_destroy(scst_cmd_cachep);
+
+out_destroy_aen_cache:
+ kmem_cache_destroy(scst_aen_cachep);
+
+out_destroy_sense_cache:
+ kmem_cache_destroy(scst_sense_cachep);
+
+out_destroy_ua_cache:
+ kmem_cache_destroy(scst_ua_cachep);
+
+out_destroy_mgmt_stub_cache:
+ kmem_cache_destroy(scst_mgmt_stub_cachep);
+
+out_destroy_mgmt_cache:
+ kmem_cache_destroy(scst_mgmt_cachep);
+
+out_lib_exit:
+ scst_lib_exit();
+ goto out;
+}
+
+static void __exit exit_scst(void)
+{
+
+ /* ToDo: unregister_cpu_notifier() */
+
+ scst_sysfs_cleanup();
+
+ scst_stop_all_threads();
+
+ scst_deinit_threads(&scst_main_cmd_threads);
+
+ scsi_unregister_interface(&scst_interface);
+
+ scst_sgv_pools_deinit();
+
+#define DEINIT_CACHEP(p) do { \
+ kmem_cache_destroy(p); \
+ p = NULL; \
+ } while (0)
+
+ mempool_destroy(scst_mgmt_mempool);
+ mempool_destroy(scst_mgmt_stub_mempool);
+ mempool_destroy(scst_ua_mempool);
+ mempool_destroy(scst_sense_mempool);
+ mempool_destroy(scst_aen_mempool);
+
+ DEINIT_CACHEP(scst_mgmt_cachep);
+ DEINIT_CACHEP(scst_mgmt_stub_cachep);
+ DEINIT_CACHEP(scst_ua_cachep);
+ DEINIT_CACHEP(scst_sense_cachep);
+ DEINIT_CACHEP(scst_aen_cachep);
+ DEINIT_CACHEP(scst_cmd_cachep);
+ DEINIT_CACHEP(scst_sess_cachep);
+ DEINIT_CACHEP(scst_tgtd_cachep);
+ DEINIT_CACHEP(scst_acgd_cachep);
+
+ scst_lib_exit();
+
+ PRINT_INFO("%s", "SCST unloaded");
+ return;
+}
+
+module_init(init_scst);
+module_exit(exit_scst);
+
+MODULE_AUTHOR("Vladislav Bolkhovitin");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SCSI target core");
+MODULE_VERSION(SCST_VERSION_STRING);
* Re: [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c
[not found] ` <4BC44D08.4060907@vlnb.net>
` (2 preceding siblings ...)
2010-04-13 13:04 ` [PATCH][RFC 3/12/1/5] SCST core's scst_main.c Vladislav Bolkhovitin
@ 2010-04-13 13:05 ` Vladislav Bolkhovitin
2010-04-13 13:05 ` [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c Vladislav Bolkhovitin
` (5 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:05 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains file scst_targ.c.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_targ.c | 5712 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 5712 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_targ.c linux-2.6.33/drivers/scst/scst_targ.c
--- orig/linux-2.6.33/drivers/scst/scst_targ.c
+++ linux-2.6.33/drivers/scst/scst_targ.c
@@ -0,0 +1,5712 @@
+/*
+ * scst_targ.c
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <linux/kthread.h>
+#include <linux/delay.h>
+#include <linux/ktime.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+
+#if 0 /* Temporary left for future performance investigations */
+/* When deleting it, don't forget to delete write_cmd_count */
+#define CONFIG_SCST_ORDERED_READS
+#endif
+
+#if 0 /* Let's disable it for now to see if users will complain about it */
+/* When deleting it, don't forget to delete write_cmd_count */
+#define CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+#endif
+
+static void scst_cmd_set_sn(struct scst_cmd *cmd);
+static int __scst_init_cmd(struct scst_cmd *cmd);
+static void scst_finish_cmd_mgmt(struct scst_cmd *cmd);
+static struct scst_cmd *__scst_find_cmd_by_tag(struct scst_session *sess,
+ uint64_t tag, bool to_abort);
+static void scst_process_redirect_cmd(struct scst_cmd *cmd,
+ enum scst_exec_context context, int check_retries);
+
+static inline void scst_schedule_tasklet(struct scst_cmd *cmd)
+{
+ struct scst_tasklet *t = &scst_tasklets[smp_processor_id()];
+ unsigned long flags;
+
+ spin_lock_irqsave(&t->tasklet_lock, flags);
+ TRACE_DBG("Adding cmd %p to tasklet %d cmd list", cmd,
+ smp_processor_id());
+ list_add_tail(&cmd->cmd_list_entry, &t->tasklet_cmd_list);
+ spin_unlock_irqrestore(&t->tasklet_lock, flags);
+
+ tasklet_schedule(&t->tasklet);
+}
+
+/**
+ * scst_rx_cmd() - create new command
+ * @sess: SCST session
+ * @lun: LUN for the command
+ * @lun_len: length of the LUN in bytes
+ * @cdb: CDB of the command
+ * @cdb_len: length of the CDB in bytes
+ * @atomic: true, if current context is atomic
+ *
+ * Description:
+ * Creates new SCST command. Returns new command on success or
+ * NULL otherwise.
+ *
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same session.
+ */
+struct scst_cmd *scst_rx_cmd(struct scst_session *sess,
+ const uint8_t *lun, int lun_len,
+ const uint8_t *cdb, int cdb_len, int atomic)
+{
+ struct scst_cmd *cmd;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (unlikely(sess->shut_phase != SCST_SESS_SPH_READY)) {
+ PRINT_CRIT_ERROR("%s",
+ "New cmd while shutting down the session");
+ BUG();
+ }
+#endif
+
+ cmd = scst_alloc_cmd(atomic ? GFP_ATOMIC : GFP_KERNEL);
+ if (cmd == NULL)
+ goto out;
+
+ cmd->sess = sess;
+ cmd->tgt = sess->tgt;
+ cmd->tgtt = sess->tgt->tgtt;
+
+ cmd->lun = scst_unpack_lun(lun, lun_len);
+ if (unlikely(cmd->lun == NO_SUCH_LUN)) {
+ PRINT_ERROR("Wrong LUN %d, finishing cmd", -1);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_lun_not_supported));
+ }
+
+ /*
+	 * For cdb_len 0, defer the error reporting until scst_cmd_init_done();
+	 * scst_set_cmd_error() supports nested calls.
+ */
+ if (unlikely(cdb_len > SCST_MAX_CDB_SIZE)) {
+ PRINT_ERROR("Too big CDB len %d, finishing cmd", cdb_len);
+ cdb_len = SCST_MAX_CDB_SIZE;
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_message));
+ }
+
+ memcpy(cmd->cdb, cdb, cdb_len);
+ cmd->cdb_len = cdb_len;
+
+ TRACE_DBG("cmd %p, sess %p", cmd, sess);
+ scst_sess_get(sess);
+
+out:
+ return cmd;
+}
+EXPORT_SYMBOL(scst_rx_cmd);
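
To make the intended calling sequence concrete, here is a minimal, hypothetical sketch of a target driver's receive path (struct my_conn/my_pdu and the my_drv_* names are invented; scst_rx_cmd(), scst_cmd_init_done() and SCST_CONTEXT_THREAD are from this patch; setting the tag and expected transfer values via the scst.h accessors is omitted):

	static void my_drv_rx_pdu(struct my_conn *conn, struct my_pdu *pdu)
	{
		struct scst_cmd *cmd;

		/* Process context here, so a non-atomic allocation is fine */
		cmd = scst_rx_cmd(conn->scst_sess, pdu->lun, sizeof(pdu->lun),
				  pdu->cdb, pdu->cdb_len, 0);
		if (cmd == NULL) {
			my_drv_reject_pdu(conn, pdu);	/* hypothetical error path */
			return;
		}

		/* Driver-side initialization finished, hand the cmd over to SCST */
		scst_cmd_init_done(cmd, SCST_CONTEXT_THREAD);
	}
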
+
+/*
+ * No locks, but might be on IRQ. Returns 0 on success, <0 if processing of
+ * this command should be stopped.
+ */
+static int scst_init_cmd(struct scst_cmd *cmd, enum scst_exec_context *context)
+{
+ int rc, res = 0;
+
+ /* See the comment in scst_do_job_init() */
+ if (unlikely(!list_empty(&scst_init_cmd_list))) {
+ TRACE_MGMT_DBG("%s", "init cmd list busy");
+ goto out_redirect;
+ }
+ /*
+	 * Memory barrier isn't necessary here, because the CPU appears to
+	 * be self-consistent and we don't care about the race described
+	 * in the comment in scst_do_job_init().
+ */
+
+ rc = __scst_init_cmd(cmd);
+ if (unlikely(rc > 0))
+ goto out_redirect;
+ else if (unlikely(rc != 0)) {
+ res = 1;
+ goto out;
+ }
+
+ EXTRACHECKS_BUG_ON(*context == SCST_CONTEXT_SAME);
+
+ /* Small context optimization */
+ if (((*context == SCST_CONTEXT_TASKLET) ||
+ (*context == SCST_CONTEXT_DIRECT_ATOMIC)) &&
+ scst_cmd_is_expected_set(cmd)) {
+ if (cmd->expected_data_direction & SCST_DATA_WRITE) {
+ if (!test_bit(SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ *context = SCST_CONTEXT_THREAD;
+ } else {
+ if (!test_bit(SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ *context = SCST_CONTEXT_THREAD;
+ }
+ }
+
+out:
+ return res;
+
+out_redirect:
+ if (cmd->preprocessing_only) {
+ /*
+		 * Poor man's solution for single-threaded targets, where
+		 * blocking the receiver at least sometimes means blocking
+		 * everything. For instance, an iSCSI target won't be able
+		 * to receive Data-Out PDUs.
+ */
+ BUG_ON(*context != SCST_CONTEXT_DIRECT);
+ scst_set_busy(cmd);
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = 1;
+ /* Keep initiator away from too many BUSY commands */
+ msleep(50);
+ } else {
+ unsigned long flags;
+ spin_lock_irqsave(&scst_init_lock, flags);
+ TRACE_MGMT_DBG("Adding cmd %p to init cmd list (scst_cmd_count "
+ "%d)", cmd, atomic_read(&scst_cmd_count));
+ list_add_tail(&cmd->cmd_list_entry, &scst_init_cmd_list);
+ if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+ scst_init_poll_cnt++;
+ spin_unlock_irqrestore(&scst_init_lock, flags);
+ wake_up(&scst_init_cmd_list_waitQ);
+ res = -1;
+ }
+ goto out;
+}
+
+/**
+ * scst_cmd_init_done() - the command's initialization done
+ * @cmd: SCST command
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ * Notifies SCST that the driver finished its part of the command
+ * initialization, and the command is ready for execution.
+ * The second argument sets the preferred command execution context.
+ * See SCST_CONTEXT_* constants for details.
+ *
+ * !!IMPORTANT!!
+ *
+ * If cmd->set_sn_on_restart_cmd is not set, this function, as well as
+ * scst_cmd_init_stage1_done() and scst_restart_cmd(), must not be
+ * called simultaneously for the same session (more precisely,
+ * for the same session/LUN, i.e. tgt_dev), i.e. they must be
+ * somehow externally serialized. This is needed to keep the fast
+ * path in scst_cmd_set_sn() lock free. For the majority of targets those
+ * functions are naturally serialized by the single source of commands.
+ * Only iSCSI immediate commands with multiple connections per session
+ * seem to be an exception. For them, some mutex/lock should be used for
+ * the serialization.
+ */
+void scst_cmd_init_done(struct scst_cmd *cmd,
+ enum scst_exec_context pref_context)
+{
+ unsigned long flags;
+ struct scst_session *sess = cmd->sess;
+ int rc;
+
+ scst_set_start_time(cmd);
+
+ TRACE_DBG("Preferred context: %d (cmd %p)", pref_context, cmd);
+ TRACE(TRACE_SCSI, "tag=%llu, lun=%lld, CDB len=%d, queue_type=%x "
+ "(cmd %p)", (long long unsigned int)cmd->tag,
+ (long long unsigned int)cmd->lun, cmd->cdb_len,
+ cmd->queue_type, cmd);
+	PRINT_BUFF_FLAG(TRACE_SCSI|TRACE_RCV_BOT, "Receiving CDB",
+ cmd->cdb, cmd->cdb_len);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (unlikely((in_irq() || irqs_disabled())) &&
+ ((pref_context == SCST_CONTEXT_DIRECT) ||
+ (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+ PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+ "SCST_CONTEXT_THREAD instead", pref_context,
+ cmd->tgtt->name);
+ pref_context = SCST_CONTEXT_THREAD;
+ }
+#endif
+
+ atomic_inc(&sess->sess_cmd_count);
+
+ spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+ if (unlikely(sess->init_phase != SCST_SESS_IPH_READY)) {
+ /*
+ * We must always keep commands in the sess list from the
+ * very beginning, because otherwise they can be missed during
+		 * TM processing. This check is needed because there might be
+		 * both old, i.e. deferred, commands and new, just arriving, ones.
+ */
+ if (cmd->sess_cmd_list_entry.next == NULL)
+ list_add_tail(&cmd->sess_cmd_list_entry,
+ &sess->sess_cmd_list);
+ switch (sess->init_phase) {
+ case SCST_SESS_IPH_SUCCESS:
+ break;
+ case SCST_SESS_IPH_INITING:
+ TRACE_DBG("Adding cmd %p to init deferred cmd list",
+ cmd);
+ list_add_tail(&cmd->cmd_list_entry,
+ &sess->init_deferred_cmd_list);
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+ goto out;
+ case SCST_SESS_IPH_FAILED:
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+ scst_set_busy(cmd);
+ scst_set_cmd_abnormal_done_state(cmd);
+ goto active;
+ default:
+ BUG();
+ }
+ } else
+ list_add_tail(&cmd->sess_cmd_list_entry,
+ &sess->sess_cmd_list);
+
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+
+ if (unlikely(cmd->cdb_len == 0)) {
+ PRINT_ERROR("%s", "Wrong CDB len 0, finishing cmd");
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ scst_set_cmd_abnormal_done_state(cmd);
+ goto active;
+ }
+
+ if (unlikely(cmd->queue_type >= SCST_CMD_QUEUE_ACA)) {
+ PRINT_ERROR("Unsupported queue type %d", cmd->queue_type);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_message));
+ goto active;
+ }
+
+ /*
+	 * Cmd must be inited here to preserve the order. Even if the cmd was
+	 * already preliminarily completed by the target driver, we need to
+	 * init it anyway to find out in which format we should return sense.
+ */
+ cmd->state = SCST_CMD_STATE_INIT;
+ rc = scst_init_cmd(cmd, &pref_context);
+ if (unlikely(rc < 0))
+ goto out;
+
+active:
+ /* Here cmd must not be in any cmd list, no locks */
+ switch (pref_context) {
+ case SCST_CONTEXT_TASKLET:
+ scst_schedule_tasklet(cmd);
+ break;
+
+ case SCST_CONTEXT_DIRECT:
+ scst_process_active_cmd(cmd, false);
+ break;
+
+ case SCST_CONTEXT_DIRECT_ATOMIC:
+ scst_process_active_cmd(cmd, true);
+ break;
+
+ default:
+ PRINT_ERROR("Context %x is undefined, using the thread one",
+ pref_context);
+		/* fall through */
+ case SCST_CONTEXT_THREAD:
+ spin_lock_irqsave(&cmd->cmd_threads->cmd_list_lock, flags);
+ TRACE_DBG("Adding cmd %p to active cmd list", cmd);
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ list_add(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ else
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irqrestore(&cmd->cmd_threads->cmd_list_lock, flags);
+ break;
+ }
+
+out:
+ return;
+}
+EXPORT_SYMBOL(scst_cmd_init_done);
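
One way to satisfy the serialization requirement spelled out above, for an iSCSI-like driver that feeds one session from several connections, is a per-session lock around the call; sess_init_done_mutex is a hypothetical driver-private field, the rest comes from this patch:

	/* Hypothetical multi-connection receive path */
	mutex_lock(&my_sess_priv->sess_init_done_mutex);
	scst_cmd_init_done(cmd, SCST_CONTEXT_THREAD);
	mutex_unlock(&my_sess_priv->sess_init_done_mutex);

Drivers with a single command source per session get this serialization for free and need no extra locking.
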
+
+static int scst_pre_parse(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME;
+ struct scst_device *dev = cmd->dev;
+ int rc;
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+ cmd->inc_expected_sn_on_done = 1;
+#else
+ cmd->inc_expected_sn_on_done = dev->handler->exec_sync ||
+ (!dev->has_own_order_mgmt &&
+ (dev->queue_alg == SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER ||
+ cmd->queue_type == SCST_CMD_QUEUE_ORDERED));
+#endif
+
+ /*
+ * Expected transfer data supplied by the SCSI transport via the
+	 * target driver are untrusted, so we prefer to fetch them from the CDB.
+ * Additionally, not all transports support supplying the expected
+ * transfer data.
+ */
+
+ rc = scst_get_cdb_info(cmd);
+ if (unlikely(rc != 0)) {
+ if (rc > 0) {
+ PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+ goto out_err;
+ }
+
+ EXTRACHECKS_BUG_ON(cmd->op_flags & SCST_INFO_VALID);
+
+ cmd->cdb_len = scst_get_cdb_len(cmd->cdb);
+
+ TRACE(TRACE_MINOR, "Unknown opcode 0x%02x for %s. "
+ "Should you update scst_scsi_op_table?",
+ cmd->cdb[0], dev->handler->name);
+ PRINT_BUFF_FLAG(TRACE_MINOR, "Failed CDB", cmd->cdb,
+ cmd->cdb_len);
+ } else {
+ EXTRACHECKS_BUG_ON(!(cmd->op_flags & SCST_INFO_VALID));
+ }
+
+ cmd->state = SCST_CMD_STATE_DEV_PARSE;
+
+ TRACE_DBG("op_name <%s> (cmd %p), direction=%d "
+ "(expected %d, set %s), transfer_len=%d (expected "
+ "len %d), flags=%d", cmd->op_name, cmd,
+ cmd->data_direction, cmd->expected_data_direction,
+ scst_cmd_is_expected_set(cmd) ? "yes" : "no",
+ cmd->bufflen, cmd->expected_transfer_len,
+ cmd->op_flags);
+
+out:
+ return res;
+
+out_err:
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+}
+
+#ifndef CONFIG_SCST_USE_EXPECTED_VALUES
+static bool scst_is_allowed_to_mismatch_cmd(struct scst_cmd *cmd)
+{
+ bool res = false;
+
+ /* VERIFY commands with BYTCHK unset shouldn't fail here */
+ if ((cmd->op_flags & SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED) &&
+ (cmd->cdb[1] & BYTCHK) == 0) {
+ res = true;
+ goto out;
+ }
+
+ switch (cmd->cdb[0]) {
+ case TEST_UNIT_READY:
+ /* Crazy VMware people sometimes do TUR with READ direction */
+ res = true;
+ break;
+ }
+
+out:
+ return res;
+}
+#endif
+
+static int scst_parse_cmd(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME;
+ int state;
+ struct scst_device *dev = cmd->dev;
+ int orig_bufflen = cmd->bufflen;
+
+ if (likely(!scst_is_cmd_fully_local(cmd))) {
+ if (unlikely(!dev->handler->parse_atomic &&
+ scst_cmd_atomic(cmd))) {
+ /*
+			 * It shouldn't happen because of the SCST_TGT_DEV_AFTER_*
+ * optimization.
+ */
+ TRACE_DBG("Dev handler %s parse() needs thread "
+ "context, rescheduling", dev->handler->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ TRACE_DBG("Calling dev handler %s parse(%p)",
+ dev->handler->name, cmd);
+ TRACE_BUFF_FLAG(TRACE_SND_BOT, "Parsing: ",
+ cmd->cdb, cmd->cdb_len);
+ scst_set_cur_start(cmd);
+ state = dev->handler->parse(cmd);
+ /* Caution: cmd can be already dead here */
+ TRACE_DBG("Dev handler %s parse() returned %d",
+ dev->handler->name, state);
+
+ switch (state) {
+ case SCST_CMD_STATE_NEED_THREAD_CTX:
+ scst_set_parse_time(cmd);
+ TRACE_DBG("Dev handler %s parse() requested thread "
+ "context, rescheduling", dev->handler->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+
+ case SCST_CMD_STATE_STOP:
+ TRACE_DBG("Dev handler %s parse() requested stop "
+ "processing", dev->handler->name);
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ goto out;
+ }
+
+ scst_set_parse_time(cmd);
+
+ if (state == SCST_CMD_STATE_DEFAULT)
+ state = SCST_CMD_STATE_PREPARE_SPACE;
+ } else
+ state = SCST_CMD_STATE_PREPARE_SPACE;
+
+ if (unlikely(state == SCST_CMD_STATE_PRE_XMIT_RESP))
+ goto set_res;
+
+ if (unlikely(!(cmd->op_flags & SCST_INFO_VALID))) {
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+ if (scst_cmd_is_expected_set(cmd)) {
+ TRACE(TRACE_MINOR, "Using initiator supplied values: "
+ "direction %d, transfer_len %d",
+ cmd->expected_data_direction,
+ cmd->expected_transfer_len);
+ cmd->data_direction = cmd->expected_data_direction;
+ cmd->bufflen = cmd->expected_transfer_len;
+ } else {
+			PRINT_ERROR("Unknown opcode 0x%02x for %s, and "
+				"target %s did not supply expected values",
+ cmd->cdb[0], dev->handler->name, cmd->tgtt->name);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ goto out_done;
+ }
+#else
+ PRINT_ERROR("Unknown opcode %x", cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ goto out_done;
+#endif
+ }
+
+ if (unlikely(cmd->cdb_len == -1)) {
+ PRINT_ERROR("Unable to get CDB length for "
+ "opcode 0x%02x. Returning INVALID "
+ "OPCODE", cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ goto out_done;
+ }
+
+ EXTRACHECKS_BUG_ON(cmd->cdb_len == 0);
+
+ TRACE(TRACE_SCSI, "op_name <%s> (cmd %p), direction=%d "
+ "(expected %d, set %s), transfer_len=%d (expected "
+ "len %d), flags=%d", cmd->op_name, cmd,
+ cmd->data_direction, cmd->expected_data_direction,
+ scst_cmd_is_expected_set(cmd) ? "yes" : "no",
+ cmd->bufflen, cmd->expected_transfer_len,
+ cmd->op_flags);
+
+ if (unlikely((cmd->op_flags & SCST_UNKNOWN_LENGTH) != 0)) {
+ if (scst_cmd_is_expected_set(cmd)) {
+ /*
+			 * Command data length can't be easily
+			 * determined from the CDB. ToDo: processing of
+			 * all such commands should be fixed. Until
+			 * that's done, get the length from the supplied
+			 * expected value, but limit it to some
+			 * reasonable value (15MB).
+ */
+ cmd->bufflen = min(cmd->expected_transfer_len,
+ 15*1024*1024);
+ cmd->op_flags &= ~SCST_UNKNOWN_LENGTH;
+ } else {
+ PRINT_ERROR("Unknown data transfer length for opcode "
+ "0x%x (handler %s, target %s)", cmd->cdb[0],
+ dev->handler->name, cmd->tgtt->name);
+ PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_message));
+ goto out_done;
+ }
+ }
+
+ if (unlikely(cmd->cdb[cmd->cdb_len - 1] & CONTROL_BYTE_NACA_BIT)) {
+ PRINT_ERROR("NACA bit in control byte CDB is not supported "
+ "(opcode 0x%02x)", cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ goto out_done;
+ }
+
+ if (unlikely(cmd->cdb[cmd->cdb_len - 1] & CONTROL_BYTE_LINK_BIT)) {
+ PRINT_ERROR("Linked commands are not supported "
+ "(opcode 0x%02x)", cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ goto out_done;
+ }
+
+ if (cmd->dh_data_buf_alloced &&
+ unlikely((orig_bufflen > cmd->bufflen))) {
+		PRINT_ERROR("Dev handler supplied data buffer (size %d) "
+			"is smaller than required (size %d)", cmd->bufflen,
+ orig_bufflen);
+ PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+ goto out_hw_error;
+ }
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if ((cmd->bufflen != 0) &&
+ ((cmd->data_direction == SCST_DATA_NONE) ||
+ ((cmd->sg == NULL) && (state > SCST_CMD_STATE_PREPARE_SPACE)))) {
+ PRINT_ERROR("Dev handler %s parse() returned "
+ "invalid cmd data_direction %d, bufflen %d, state %d "
+ "or sg %p (opcode 0x%x)", dev->handler->name,
+ cmd->data_direction, cmd->bufflen, state, cmd->sg,
+ cmd->cdb[0]);
+ PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+ goto out_hw_error;
+ }
+#endif
+
+ if (scst_cmd_is_expected_set(cmd)) {
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+# ifdef CONFIG_SCST_EXTRACHECKS
+ if (unlikely((cmd->data_direction != cmd->expected_data_direction) ||
+ (cmd->bufflen != cmd->expected_transfer_len))) {
+ TRACE(TRACE_MINOR, "Expected values don't match "
+ "decoded ones: data_direction %d, "
+ "expected_data_direction %d, "
+ "bufflen %d, expected_transfer_len %d",
+ cmd->data_direction,
+ cmd->expected_data_direction,
+ cmd->bufflen, cmd->expected_transfer_len);
+ PRINT_BUFF_FLAG(TRACE_MINOR, "Suspicious CDB",
+ cmd->cdb, cmd->cdb_len);
+ }
+# endif
+ cmd->data_direction = cmd->expected_data_direction;
+ cmd->bufflen = cmd->expected_transfer_len;
+#else
+ if (unlikely(cmd->data_direction !=
+ cmd->expected_data_direction)) {
+ if (((cmd->expected_data_direction != SCST_DATA_NONE) ||
+ (cmd->bufflen != 0)) &&
+ !scst_is_allowed_to_mismatch_cmd(cmd)) {
+ PRINT_ERROR("Expected data direction %d for "
+ "opcode 0x%02x (handler %s, target %s) "
+ "doesn't match decoded value %d",
+ cmd->expected_data_direction,
+ cmd->cdb[0], dev->handler->name,
+ cmd->tgtt->name, cmd->data_direction);
+ PRINT_BUFFER("Failed CDB", cmd->cdb,
+ cmd->cdb_len);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_message));
+ goto out_done;
+ }
+ }
+ if (unlikely(cmd->bufflen != cmd->expected_transfer_len)) {
+ TRACE(TRACE_MINOR, "Warning: expected "
+ "transfer length %d for opcode 0x%02x "
+ "(handler %s, target %s) doesn't match "
+				"decoded value %d. Faulty initiator "
+				"(e.g. VMware is known to be such), or "
+				"should scst_scsi_op_table be updated?",
+ cmd->expected_transfer_len, cmd->cdb[0],
+ dev->handler->name, cmd->tgtt->name,
+ cmd->bufflen);
+ PRINT_BUFF_FLAG(TRACE_MINOR, "Suspicious CDB",
+ cmd->cdb, cmd->cdb_len);
+ /* Needed, e.g., to get immediate iSCSI data */
+ cmd->bufflen = max(cmd->bufflen,
+ cmd->expected_transfer_len);
+ }
+#endif
+ }
+
+ if (unlikely(cmd->data_direction == SCST_DATA_UNKNOWN)) {
+ PRINT_ERROR("Unknown data direction. Opcode 0x%x, handler %s, "
+ "target %s", cmd->cdb[0], dev->handler->name,
+ cmd->tgtt->name);
+ PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+ goto out_hw_error;
+ }
+
+set_res:
+ if (cmd->data_len == -1)
+ cmd->data_len = cmd->bufflen;
+
+ if (cmd->bufflen == 0) {
+ /*
+ * According to SPC bufflen 0 for data transfer commands isn't
+ * an error, so we need to fix the transfer direction.
+ */
+ cmd->data_direction = SCST_DATA_NONE;
+ }
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ switch (state) {
+ case SCST_CMD_STATE_PREPARE_SPACE:
+ case SCST_CMD_STATE_PRE_PARSE:
+ case SCST_CMD_STATE_DEV_PARSE:
+ case SCST_CMD_STATE_RDY_TO_XFER:
+ case SCST_CMD_STATE_TGT_PRE_EXEC:
+ case SCST_CMD_STATE_SEND_FOR_EXEC:
+ case SCST_CMD_STATE_LOCAL_EXEC:
+ case SCST_CMD_STATE_REAL_EXEC:
+ case SCST_CMD_STATE_PRE_DEV_DONE:
+ case SCST_CMD_STATE_DEV_DONE:
+ case SCST_CMD_STATE_PRE_XMIT_RESP:
+ case SCST_CMD_STATE_XMIT_RESP:
+ case SCST_CMD_STATE_FINISHED:
+ case SCST_CMD_STATE_FINISHED_INTERNAL:
+#endif
+ cmd->state = state;
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+#ifdef CONFIG_SCST_EXTRACHECKS
+ break;
+
+ default:
+ if (state >= 0) {
+ PRINT_ERROR("Dev handler %s parse() returned "
+ "invalid cmd state %d (opcode %d)",
+ dev->handler->name, state, cmd->cdb[0]);
+ } else {
+ PRINT_ERROR("Dev handler %s parse() returned "
+ "error %d (opcode %d)", dev->handler->name,
+ state, cmd->cdb[0]);
+ }
+ goto out_hw_error;
+ }
+#endif
+
+ if (cmd->resp_data_len == -1) {
+ if (cmd->data_direction & SCST_DATA_READ)
+ cmd->resp_data_len = cmd->bufflen;
+ else
+ cmd->resp_data_len = 0;
+ }
+
+ /* We already completed (with an error) */
+ if (unlikely(cmd->completed))
+ goto out_done;
+
+out:
+ return res;
+
+out_hw_error:
+ /* dev_done() will be called as part of the regular cmd's finish */
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+
+out_done:
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+}
+
+static int scst_prepare_space(struct scst_cmd *cmd)
+{
+ int r = 0, res = SCST_CMD_STATE_RES_CONT_SAME;
+
+ if (cmd->data_direction == SCST_DATA_NONE)
+ goto done;
+
+ if (cmd->tgt_need_alloc_data_buf) {
+ int orig_bufflen = cmd->bufflen;
+
+ TRACE_MEM("Custom tgt data buf allocation requested (cmd %p)",
+ cmd);
+
+ scst_set_cur_start(cmd);
+ r = cmd->tgtt->alloc_data_buf(cmd);
+ scst_set_alloc_buf_time(cmd);
+
+ if (r > 0)
+ goto alloc;
+ else if (r == 0) {
+ if (unlikely(cmd->bufflen == 0)) {
+ /* See comment in scst_alloc_space() */
+ if (cmd->sg == NULL)
+ goto alloc;
+ }
+
+ cmd->tgt_data_buf_alloced = 1;
+
+ if (unlikely(orig_bufflen < cmd->bufflen)) {
+ PRINT_ERROR("Target driver allocated data "
+					"buffer (size %d) is smaller than "
+ "required (size %d)", orig_bufflen,
+ cmd->bufflen);
+ goto out_error;
+ }
+ TRACE_MEM("tgt_data_buf_alloced (cmd %p)", cmd);
+ } else
+ goto check;
+ }
+
+alloc:
+ if (!cmd->tgt_data_buf_alloced && !cmd->dh_data_buf_alloced) {
+ r = scst_alloc_space(cmd);
+ } else if (cmd->dh_data_buf_alloced && !cmd->tgt_data_buf_alloced) {
+ TRACE_MEM("dh_data_buf_alloced set (cmd %p)", cmd);
+ r = 0;
+ } else if (cmd->tgt_data_buf_alloced && !cmd->dh_data_buf_alloced) {
+ TRACE_MEM("tgt_data_buf_alloced set (cmd %p)", cmd);
+ cmd->sg = cmd->tgt_sg;
+ cmd->sg_cnt = cmd->tgt_sg_cnt;
+ cmd->in_sg = cmd->tgt_in_sg;
+ cmd->in_sg_cnt = cmd->tgt_in_sg_cnt;
+ r = 0;
+ } else {
+ TRACE_MEM("Both *_data_buf_alloced set (cmd %p, sg %p, "
+ "sg_cnt %d, tgt_sg %p, tgt_sg_cnt %d)", cmd, cmd->sg,
+ cmd->sg_cnt, cmd->tgt_sg, cmd->tgt_sg_cnt);
+ r = 0;
+ }
+
+check:
+ if (r != 0) {
+ if (scst_cmd_atomic(cmd)) {
+ TRACE_MEM("%s", "Atomic memory allocation failed, "
+ "rescheduling to the thread");
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ } else
+ goto out_no_space;
+ }
+
+done:
+ if (cmd->preprocessing_only)
+ cmd->state = SCST_CMD_STATE_PREPROCESSING_DONE;
+ else if (cmd->data_direction & SCST_DATA_WRITE)
+ cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+ else
+ cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+
+out:
+ return res;
+
+out_no_space:
+ TRACE(TRACE_OUT_OF_MEM, "Unable to allocate or build requested buffer "
+ "(size %d), sending BUSY or QUEUE FULL status", cmd->bufflen);
+ scst_set_busy(cmd);
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+
+out_error:
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+}
+
+static int scst_preprocessing_done(struct scst_cmd *cmd)
+{
+ int res;
+
+ EXTRACHECKS_BUG_ON(!cmd->preprocessing_only);
+
+ cmd->preprocessing_only = 0;
+
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ cmd->state = SCST_CMD_STATE_PREPROCESSING_DONE_CALLED;
+
+ TRACE_DBG("Calling preprocessing_done(cmd %p)", cmd);
+ scst_set_cur_start(cmd);
+ cmd->tgtt->preprocessing_done(cmd);
+ TRACE_DBG("%s", "preprocessing_done() returned");
+ return res;
+}
+
+/**
+ * scst_restart_cmd() - restart execution of the command
+ * @cmd:	SCST command
+ * @status:	completion status
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ * Notifies SCST that the driver finished its part of the command's
+ * preprocessing and it is ready for further processing.
+ *
+ * The second argument sets completion status
+ * (see SCST_PREPROCESS_STATUS_* constants for details)
+ *
+ * See also comment for scst_cmd_init_done() for the serialization
+ * requirements.
+ */
+void scst_restart_cmd(struct scst_cmd *cmd, int status,
+ enum scst_exec_context pref_context)
+{
+
+ scst_set_restart_waiting_time(cmd);
+
+ TRACE_DBG("Preferred context: %d", pref_context);
+ TRACE_DBG("tag=%llu, status=%#x",
+ (long long unsigned int)scst_cmd_get_tag(cmd),
+ status);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if ((in_irq() || irqs_disabled()) &&
+ ((pref_context == SCST_CONTEXT_DIRECT) ||
+ (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+ PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+ "SCST_CONTEXT_THREAD instead", pref_context,
+ cmd->tgtt->name);
+ pref_context = SCST_CONTEXT_THREAD;
+ }
+#endif
+
+ switch (status) {
+ case SCST_PREPROCESS_STATUS_SUCCESS:
+ if (cmd->data_direction & SCST_DATA_WRITE)
+ cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+ else
+ cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+ if (cmd->set_sn_on_restart_cmd)
+ scst_cmd_set_sn(cmd);
+ /* Small context optimization */
+ if ((pref_context == SCST_CONTEXT_TASKLET) ||
+ (pref_context == SCST_CONTEXT_DIRECT_ATOMIC) ||
+ ((pref_context == SCST_CONTEXT_SAME) &&
+ scst_cmd_atomic(cmd))) {
+ if (cmd->data_direction & SCST_DATA_WRITE) {
+ if (!test_bit(SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ pref_context = SCST_CONTEXT_THREAD;
+ } else {
+ if (!test_bit(SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ pref_context = SCST_CONTEXT_THREAD;
+ }
+ }
+ break;
+
+ case SCST_PREPROCESS_STATUS_ERROR_SENSE_SET:
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+
+ case SCST_PREPROCESS_STATUS_ERROR_FATAL:
+ set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+		/* fall through */
+ case SCST_PREPROCESS_STATUS_ERROR:
+ if (cmd->sense != NULL)
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+
+ default:
+ PRINT_ERROR("%s() received unknown status %x", __func__,
+ status);
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+ }
+
+ scst_process_redirect_cmd(cmd, pref_context, 1);
+ return;
+}
+EXPORT_SYMBOL(scst_restart_cmd);
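
For completeness, a hypothetical completion side of a target driver's asynchronous preprocessing; my_prep_finished() is an invented name, the SCST call and constants come from this file:

	static void my_prep_finished(struct scst_cmd *cmd, bool ok)
	{
		/* The driver's asynchronous preparation is over; resume the cmd */
		scst_restart_cmd(cmd,
				 ok ? SCST_PREPROCESS_STATUS_SUCCESS
				    : SCST_PREPROCESS_STATUS_ERROR,
				 SCST_CONTEXT_THREAD);
	}
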
+
+static int scst_rdy_to_xfer(struct scst_cmd *cmd)
+{
+ int res, rc;
+ struct scst_tgt_template *tgtt = cmd->tgtt;
+
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+ TRACE_MGMT_DBG("ABORTED set, aborting cmd %p", cmd);
+ goto out_dev_done;
+ }
+
+ if ((tgtt->rdy_to_xfer == NULL) || unlikely(cmd->internal)) {
+ cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+ }
+
+ if (unlikely(!tgtt->rdy_to_xfer_atomic && scst_cmd_atomic(cmd))) {
+ /*
+		 * It shouldn't happen because of the SCST_TGT_DEV_AFTER_*
+ * optimization.
+ */
+ TRACE_DBG("Target driver %s rdy_to_xfer() needs thread "
+ "context, rescheduling", tgtt->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ while (1) {
+ int finished_cmds = atomic_read(&cmd->tgt->finished_cmds);
+
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ cmd->state = SCST_CMD_STATE_DATA_WAIT;
+
+ if (tgtt->on_hw_pending_cmd_timeout != NULL) {
+ struct scst_session *sess = cmd->sess;
+ cmd->hw_pending_start = jiffies;
+ cmd->cmd_hw_pending = 1;
+ if (!test_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags)) {
+ TRACE_DBG("Sched HW pending work for sess %p "
+ "(max time %d)", sess,
+ tgtt->max_hw_pending_time);
+ set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED,
+ &sess->sess_aflags);
+ schedule_delayed_work(&sess->hw_pending_work,
+ tgtt->max_hw_pending_time * HZ);
+ }
+ }
+
+ scst_set_cur_start(cmd);
+
+ TRACE_DBG("Calling rdy_to_xfer(%p)", cmd);
+#ifdef CONFIG_SCST_DEBUG_RETRY
+ if (((scst_random() % 100) == 75))
+ rc = SCST_TGT_RES_QUEUE_FULL;
+ else
+#endif
+ rc = tgtt->rdy_to_xfer(cmd);
+ TRACE_DBG("rdy_to_xfer() returned %d", rc);
+
+ if (likely(rc == SCST_TGT_RES_SUCCESS))
+ goto out;
+
+ scst_set_rdy_to_xfer_time(cmd);
+
+ cmd->cmd_hw_pending = 0;
+
+ /* Restore the previous state */
+ cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+
+ switch (rc) {
+ case SCST_TGT_RES_QUEUE_FULL:
+ if (scst_queue_retry_cmd(cmd, finished_cmds) == 0)
+ break;
+ else
+ continue;
+
+ case SCST_TGT_RES_NEED_THREAD_CTX:
+ TRACE_DBG("Target driver %s "
+ "rdy_to_xfer() requested thread "
+ "context, rescheduling", tgtt->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ break;
+
+ default:
+ goto out_error_rc;
+ }
+ break;
+ }
+
+out:
+ return res;
+
+out_error_rc:
+ if (rc == SCST_TGT_RES_FATAL_ERROR) {
+ PRINT_ERROR("Target driver %s rdy_to_xfer() returned "
+ "fatal error", tgtt->name);
+ } else {
+ PRINT_ERROR("Target driver %s rdy_to_xfer() returned invalid "
+ "value %d", tgtt->name, rc);
+ }
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+
+out_dev_done:
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+}
+
+/* No locks, but might be in IRQ */
+static void scst_process_redirect_cmd(struct scst_cmd *cmd,
+ enum scst_exec_context context, int check_retries)
+{
+ struct scst_tgt *tgt = cmd->tgt;
+ unsigned long flags;
+
+ TRACE_DBG("Context: %x", context);
+
+ if (context == SCST_CONTEXT_SAME)
+ context = scst_cmd_atomic(cmd) ? SCST_CONTEXT_DIRECT_ATOMIC :
+ SCST_CONTEXT_DIRECT;
+
+ switch (context) {
+ case SCST_CONTEXT_DIRECT_ATOMIC:
+ scst_process_active_cmd(cmd, true);
+ break;
+
+ case SCST_CONTEXT_DIRECT:
+ if (check_retries)
+ scst_check_retries(tgt);
+ scst_process_active_cmd(cmd, false);
+ break;
+
+ default:
+ PRINT_ERROR("Context %x is unknown, using the thread one",
+ context);
+		/* fall through */
+ case SCST_CONTEXT_THREAD:
+ if (check_retries)
+ scst_check_retries(tgt);
+ spin_lock_irqsave(&cmd->cmd_threads->cmd_list_lock, flags);
+ TRACE_DBG("Adding cmd %p to active cmd list", cmd);
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ list_add(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ else
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irqrestore(&cmd->cmd_threads->cmd_list_lock, flags);
+ break;
+
+ case SCST_CONTEXT_TASKLET:
+ if (check_retries)
+ scst_check_retries(tgt);
+ scst_schedule_tasklet(cmd);
+ break;
+ }
+ return;
+}
+
+/**
+ * scst_rx_data() - the command's data received
+ * @cmd:	SCST command
+ * @status:	data receiving completion status
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ * Notifies SCST that the driver received all the necessary data
+ * and the command is ready for further processing.
+ *
+ * The second argument sets data receiving completion status
+ * (see SCST_RX_STATUS_* constants for details)
+ */
+void scst_rx_data(struct scst_cmd *cmd, int status,
+ enum scst_exec_context pref_context)
+{
+
+ scst_set_rdy_to_xfer_time(cmd);
+
+ TRACE_DBG("Preferred context: %d", pref_context);
+ TRACE(TRACE_SCSI, "cmd %p, status %#x", cmd, status);
+
+ cmd->cmd_hw_pending = 0;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if ((in_irq() || irqs_disabled()) &&
+ ((pref_context == SCST_CONTEXT_DIRECT) ||
+ (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+ PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+ "SCST_CONTEXT_THREAD instead", pref_context,
+ cmd->tgtt->name);
+ pref_context = SCST_CONTEXT_THREAD;
+ }
+#endif
+
+ switch (status) {
+ case SCST_RX_STATUS_SUCCESS:
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ if (trace_flag & TRACE_RCV_BOT) {
+ int i;
+ struct scatterlist *sg;
+ if (cmd->in_sg != NULL)
+ sg = cmd->in_sg;
+ else if (cmd->tgt_in_sg != NULL)
+ sg = cmd->tgt_in_sg;
+ else if (cmd->tgt_sg != NULL)
+ sg = cmd->tgt_sg;
+ else
+ sg = cmd->sg;
+ if (sg != NULL) {
+ TRACE_RECV_BOT("RX data for cmd %p "
+ "(sg_cnt %d, sg %p, sg[0].page %p)",
+ cmd, cmd->tgt_sg_cnt, sg,
+ (void *)sg_page(&sg[0]));
+ for (i = 0; i < cmd->tgt_sg_cnt; ++i) {
+ PRINT_BUFF_FLAG(TRACE_RCV_BOT, "RX sg",
+ sg_virt(&sg[i]), sg[i].length);
+ }
+ }
+ }
+#endif
+ cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+
+ /* Small context optimization */
+ if ((pref_context == SCST_CONTEXT_TASKLET) ||
+ (pref_context == SCST_CONTEXT_DIRECT_ATOMIC) ||
+ ((pref_context == SCST_CONTEXT_SAME) &&
+ scst_cmd_atomic(cmd))) {
+ if (!test_bit(SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ pref_context = SCST_CONTEXT_THREAD;
+ }
+ break;
+
+ case SCST_RX_STATUS_ERROR_SENSE_SET:
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+
+ case SCST_RX_STATUS_ERROR_FATAL:
+ set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+		/* fall through */
+ case SCST_RX_STATUS_ERROR:
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+
+ default:
+ PRINT_ERROR("scst_rx_data() received unknown status %x",
+ status);
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+ }
+
+ scst_process_redirect_cmd(cmd, pref_context, 1);
+ return;
+}
+EXPORT_SYMBOL(scst_rx_data);
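
A matching sketch of the data-in side of a hypothetical target driver: rdy_to_xfer() returned SCST_TGT_RES_SUCCESS earlier, and this is called once all WRITE data has landed in the command's SG vector (my_drv_write_data_received() is invented; the SCST call and constants are from this patch):

	static void my_drv_write_data_received(struct scst_cmd *cmd, bool ok)
	{
		/* All Data-Out for this cmd is in place (or failed); let SCST go on */
		scst_rx_data(cmd,
			     ok ? SCST_RX_STATUS_SUCCESS : SCST_RX_STATUS_ERROR,
			     SCST_CONTEXT_THREAD);
	}
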
+
+static int scst_tgt_pre_exec(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME, rc;
+
+ cmd->state = SCST_CMD_STATE_SEND_FOR_EXEC;
+
+ if ((cmd->tgtt->pre_exec == NULL) || unlikely(cmd->internal))
+ goto out;
+
+ TRACE_DBG("Calling pre_exec(%p)", cmd);
+ scst_set_cur_start(cmd);
+ rc = cmd->tgtt->pre_exec(cmd);
+ scst_set_pre_exec_time(cmd);
+ TRACE_DBG("pre_exec() returned %d", rc);
+
+ if (unlikely(rc != SCST_PREPROCESS_STATUS_SUCCESS)) {
+ switch (rc) {
+ case SCST_PREPROCESS_STATUS_ERROR_SENSE_SET:
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+ case SCST_PREPROCESS_STATUS_ERROR_FATAL:
+ set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+			/* fall through */
+ case SCST_PREPROCESS_STATUS_ERROR:
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+ case SCST_PREPROCESS_STATUS_NEED_THREAD:
+ TRACE_DBG("Target driver's %s pre_exec() requested "
+ "thread context, rescheduling",
+ cmd->tgtt->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+ break;
+ default:
+ BUG();
+ break;
+ }
+ }
+
+out:
+ return res;
+}
+
+static void scst_do_cmd_done(struct scst_cmd *cmd, int result,
+ const uint8_t *rq_sense, int rq_sense_len, int resid)
+{
+
+ scst_set_exec_time(cmd);
+
+ cmd->status = result & 0xff;
+ cmd->msg_status = msg_byte(result);
+ cmd->host_status = host_byte(result);
+ cmd->driver_status = driver_byte(result);
+ if (unlikely(resid != 0)) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if ((resid < 0) || (resid > cmd->resp_data_len)) {
+ PRINT_ERROR("Wrong resid %d (cmd->resp_data_len=%d, "
+ "op %x)", resid, cmd->resp_data_len,
+ cmd->cdb[0]);
+ } else
+#endif
+ scst_set_resp_data_len(cmd, cmd->resp_data_len - resid);
+ }
+
+ if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION)) {
+ /* We might have double reset UA here */
+ cmd->dbl_ua_orig_resp_data_len = cmd->resp_data_len;
+ cmd->dbl_ua_orig_data_direction = cmd->data_direction;
+
+ scst_alloc_set_sense(cmd, 1, rq_sense, rq_sense_len);
+ }
+
+ TRACE(TRACE_SCSI, "cmd %p, result=%x, cmd->status=%x, resid=%d, "
+ "cmd->msg_status=%x, cmd->host_status=%x, "
+ "cmd->driver_status=%x (cmd %p)", cmd, result, cmd->status, resid,
+ cmd->msg_status, cmd->host_status, cmd->driver_status, cmd);
+
+ cmd->completed = 1;
+ return;
+}
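
To make the decoding above concrete: with the standard scsi.h layout (SAM status in bits 0-7, message byte in 8-15, host byte in 16-23, driver byte in 24-31), a mid-level result of 0x08000002 yields cmd->status = 0x02 (CHECK CONDITION), cmd->msg_status = 0, cmd->host_status = 0 (DID_OK) and cmd->driver_status = 0x08 (DRIVER_SENSE), so the CHECK CONDITION branch copies rq_sense into the command via scst_alloc_set_sense().
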
+
+/* For small context optimization */
+static inline enum scst_exec_context scst_optimize_post_exec_context(
+ struct scst_cmd *cmd, enum scst_exec_context context)
+{
+ if (((context == SCST_CONTEXT_SAME) && scst_cmd_atomic(cmd)) ||
+ (context == SCST_CONTEXT_TASKLET) ||
+ (context == SCST_CONTEXT_DIRECT_ATOMIC)) {
+ if (!test_bit(SCST_TGT_DEV_AFTER_EXEC_ATOMIC,
+ &cmd->tgt_dev->tgt_dev_flags))
+ context = SCST_CONTEXT_THREAD;
+ }
+ return context;
+}
+
+static void scst_cmd_done(void *data, char *sense, int result, int resid)
+{
+ struct scst_cmd *cmd;
+
+ cmd = (struct scst_cmd *)data;
+ if (cmd == NULL)
+ goto out;
+
+ scst_do_cmd_done(cmd, result, sense, SCSI_SENSE_BUFFERSIZE, resid);
+
+ cmd->state = SCST_CMD_STATE_PRE_DEV_DONE;
+
+ scst_process_redirect_cmd(cmd,
+ scst_optimize_post_exec_context(cmd, scst_estimate_context()), 0);
+
+out:
+ return;
+}
+
+static void scst_cmd_done_local(struct scst_cmd *cmd, int next_state,
+ enum scst_exec_context pref_context)
+{
+
+ scst_set_exec_time(cmd);
+
+ if (next_state == SCST_CMD_STATE_DEFAULT)
+ next_state = SCST_CMD_STATE_PRE_DEV_DONE;
+
+#if defined(CONFIG_SCST_DEBUG)
+ if (next_state == SCST_CMD_STATE_PRE_DEV_DONE) {
+ if ((trace_flag & TRACE_RCV_TOP) && (cmd->sg != NULL)) {
+ int i;
+ struct scatterlist *sg = cmd->sg;
+ TRACE_RECV_TOP("Exec'd %d S/G(s) at %p sg[0].page at "
+ "%p", cmd->sg_cnt, sg, (void *)sg_page(&sg[0]));
+ for (i = 0; i < cmd->sg_cnt; ++i) {
+ TRACE_BUFF_FLAG(TRACE_RCV_TOP,
+ "Exec'd sg", sg_virt(&sg[i]),
+ sg[i].length);
+ }
+ }
+ }
+#endif
+
+ cmd->state = next_state;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if ((next_state != SCST_CMD_STATE_PRE_DEV_DONE) &&
+ (next_state != SCST_CMD_STATE_PRE_XMIT_RESP) &&
+ (next_state != SCST_CMD_STATE_FINISHED) &&
+ (next_state != SCST_CMD_STATE_FINISHED_INTERNAL)) {
+ PRINT_ERROR("%s() received invalid cmd state %d (opcode %d)",
+ __func__, next_state, cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ }
+#endif
+ pref_context = scst_optimize_post_exec_context(cmd, pref_context);
+ scst_process_redirect_cmd(cmd, pref_context, 0);
+ return;
+}
+
+static int scst_report_luns_local(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_COMPLETED, rc;
+ int dev_cnt = 0;
+ int buffer_size;
+ int i;
+ struct scst_tgt_dev *tgt_dev = NULL;
+ uint8_t *buffer;
+ int offs, overflow = 0;
+
+ if (scst_cmd_atomic(cmd)) {
+ res = SCST_EXEC_NEED_THREAD;
+ goto out;
+ }
+
+ rc = scst_check_local_events(cmd);
+ if (unlikely(rc != 0))
+ goto out_done;
+
+ cmd->status = 0;
+ cmd->msg_status = 0;
+ cmd->host_status = DID_OK;
+ cmd->driver_status = 0;
+
+ if ((cmd->cdb[2] != 0) && (cmd->cdb[2] != 2)) {
+ PRINT_ERROR("Unsupported SELECT REPORT value %x in REPORT "
+ "LUNS command", cmd->cdb[2]);
+ goto out_err;
+ }
+
+ buffer_size = scst_get_buf_first(cmd, &buffer);
+ if (unlikely(buffer_size == 0))
+ goto out_compl;
+ else if (unlikely(buffer_size < 0))
+ goto out_hw_err;
+
+ if (buffer_size < 16)
+ goto out_put_err;
+
+ memset(buffer, 0, buffer_size);
+ offs = 8;
+
+ /*
+	 * cmd won't allow activities to be suspended, so we can access
+ * sess->sess_tgt_dev_list_hash without any additional protection.
+ */
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &cmd->sess->sess_tgt_dev_list_hash[i];
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ if (!overflow) {
+ if (offs >= buffer_size) {
+ scst_put_buf(cmd, buffer);
+ buffer_size = scst_get_buf_next(cmd,
+ &buffer);
+ if (buffer_size > 0) {
+ memset(buffer, 0, buffer_size);
+ offs = 0;
+ } else {
+ overflow = 1;
+ goto inc_dev_cnt;
+ }
+ }
+ if ((buffer_size - offs) < 8) {
+ PRINT_ERROR("Buffer allocated for "
+ "REPORT LUNS command doesn't "
+						"allow an 8 byte entry to fit "
+ "(buffer_size=%d)",
+ buffer_size);
+ goto out_put_hw_err;
+ }
+ if ((cmd->sess->acg->addr_method == SCST_LUN_ADDR_METHOD_FLAT) &&
+ (tgt_dev->lun != 0)) {
+ buffer[offs] = (tgt_dev->lun >> 8) & 0x3f;
+ buffer[offs] = buffer[offs] | 0x40;
+ buffer[offs+1] = tgt_dev->lun & 0xff;
+ } else {
+ buffer[offs] = (tgt_dev->lun >> 8) & 0xff;
+ buffer[offs+1] = tgt_dev->lun & 0xff;
+ }
+ offs += 8;
+ }
+inc_dev_cnt:
+ dev_cnt++;
+ }
+ }
+ if (!overflow)
+ scst_put_buf(cmd, buffer);
+
+ /* Set the response header */
+ buffer_size = scst_get_buf_first(cmd, &buffer);
+ if (unlikely(buffer_size == 0))
+ goto out_compl;
+ else if (unlikely(buffer_size < 0))
+ goto out_hw_err;
+
+ dev_cnt *= 8;
+ buffer[0] = (dev_cnt >> 24) & 0xff;
+ buffer[1] = (dev_cnt >> 16) & 0xff;
+ buffer[2] = (dev_cnt >> 8) & 0xff;
+ buffer[3] = dev_cnt & 0xff;
+
+ scst_put_buf(cmd, buffer);
+
+ dev_cnt += 8;
+ if (dev_cnt < cmd->resp_data_len)
+ scst_set_resp_data_len(cmd, dev_cnt);
+
+out_compl:
+ cmd->completed = 1;
+
+	/* Clear leftover sense_reported_luns_data_changed UA, if any. */
+
+ /*
+	 * cmd won't allow activities to be suspended, so we can access
+ * sess->sess_tgt_dev_list_hash without any additional protection.
+ */
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &cmd->sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ struct scst_tgt_dev_UA *ua;
+
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+ list_for_each_entry(ua, &tgt_dev->UA_list,
+ UA_list_entry) {
+ if (scst_analyze_sense(ua->UA_sense_buffer,
+ ua->UA_valid_sense_len,
+ SCST_SENSE_ALL_VALID,
+ SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed))) {
+					TRACE_MGMT_DBG("Freeing no longer needed "
+ "REPORTED LUNS DATA CHANGED UA "
+ "%p", ua);
+ list_del(&ua->UA_list_entry);
+ mempool_free(ua, scst_ua_mempool);
+ break;
+ }
+ }
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+out_done:
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+
+out:
+ return res;
+
+out_put_err:
+ scst_put_buf(cmd, buffer);
+
+out_err:
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ goto out_compl;
+
+out_put_hw_err:
+ scst_put_buf(cmd, buffer);
+
+out_hw_err:
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+ goto out_compl;
+}
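
As a worked example of the encoding above, suppose a session sees LUNs 0, 1 and 5 and the ACG uses SCST_LUN_ADDR_METHOD_FLAT. The function then builds an 8-byte header followed by one 8-byte entry per LUN (entry order follows the hash; shown sorted here for readability):

	00 00 00 18  00 00 00 00    header: LUN list length = 24 bytes
	00 00 00 00  00 00 00 00    LUN 0 (lun == 0, so peripheral-style)
	40 01 00 00  00 00 00 00    LUN 1, flat addressing
	40 05 00 00  00 00 00 00    LUN 5, flat addressing

and, if the initiator's allocation length was larger, trims the response to 8 + 24 = 32 bytes via scst_set_resp_data_len().
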
+
+static int scst_request_sense_local(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_COMPLETED, rc;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ uint8_t *buffer;
+ int buffer_size = 0, sl = 0;
+
+ rc = scst_check_local_events(cmd);
+ if (unlikely(rc != 0))
+ goto out_done;
+
+ cmd->status = 0;
+ cmd->msg_status = 0;
+ cmd->host_status = DID_OK;
+ cmd->driver_status = 0;
+
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+ if (tgt_dev->tgt_dev_valid_sense_len == 0)
+ goto out_not_completed;
+
+ TRACE(TRACE_SCSI, "%s: Returning stored sense", cmd->op_name);
+
+ buffer_size = scst_get_buf_first(cmd, &buffer);
+ if (unlikely(buffer_size == 0))
+ goto out_compl;
+ else if (unlikely(buffer_size < 0))
+ goto out_hw_err;
+
+ memset(buffer, 0, buffer_size);
+
+ if (((tgt_dev->tgt_dev_sense[0] == 0x70) ||
+ (tgt_dev->tgt_dev_sense[0] == 0x71)) && (cmd->cdb[1] & 1)) {
+ PRINT_WARNING("%s: Fixed format of the saved sense, but "
+			"descriptor format requested. Conversion will "
+			"truncate data", cmd->op_name);
+ PRINT_BUFFER("Original sense", tgt_dev->tgt_dev_sense,
+ tgt_dev->tgt_dev_valid_sense_len);
+
+ buffer_size = min(SCST_STANDARD_SENSE_LEN, buffer_size);
+ sl = scst_set_sense(buffer, buffer_size, true,
+ tgt_dev->tgt_dev_sense[2], tgt_dev->tgt_dev_sense[12],
+ tgt_dev->tgt_dev_sense[13]);
+ } else if (((tgt_dev->tgt_dev_sense[0] == 0x72) ||
+ (tgt_dev->tgt_dev_sense[0] == 0x73)) && !(cmd->cdb[1] & 1)) {
+ PRINT_WARNING("%s: Descriptor format of the "
+			"saved sense, but fixed format requested. Conversion "
+			"will truncate data", cmd->op_name);
+ PRINT_BUFFER("Original sense", tgt_dev->tgt_dev_sense,
+ tgt_dev->tgt_dev_valid_sense_len);
+
+ buffer_size = min(SCST_STANDARD_SENSE_LEN, buffer_size);
+ sl = scst_set_sense(buffer, buffer_size, false,
+ tgt_dev->tgt_dev_sense[1], tgt_dev->tgt_dev_sense[2],
+ tgt_dev->tgt_dev_sense[3]);
+ } else {
+ if (buffer_size >= tgt_dev->tgt_dev_valid_sense_len)
+ sl = tgt_dev->tgt_dev_valid_sense_len;
+ else {
+ sl = buffer_size;
+			PRINT_WARNING("%s: Returned sense truncated to "
+ "size %d (needed %d)", cmd->op_name,
+ buffer_size, tgt_dev->tgt_dev_valid_sense_len);
+ }
+ memcpy(buffer, tgt_dev->tgt_dev_sense, sl);
+ }
+
+ scst_put_buf(cmd, buffer);
+
+ tgt_dev->tgt_dev_valid_sense_len = 0;
+
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+ scst_set_resp_data_len(cmd, sl);
+
+out_compl:
+ cmd->completed = 1;
+
+out_done:
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+
+out:
+ return res;
+
+out_hw_err:
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+ goto out_compl;
+
+out_not_completed:
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ res = SCST_EXEC_NOT_COMPLETED;
+ goto out;
+}
+
+static int scst_pre_select(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_NOT_COMPLETED;
+
+ if (scst_cmd_atomic(cmd)) {
+ res = SCST_EXEC_NEED_THREAD;
+ goto out;
+ }
+
+ scst_block_dev_cmd(cmd, 1);
+
+	/* The check for local events will be done when the cmd is executed */
+
+out:
+ return res;
+}
+
+static int scst_reserve_local(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_NOT_COMPLETED, rc;
+ struct scst_device *dev;
+ struct scst_tgt_dev *tgt_dev_tmp;
+
+ if (scst_cmd_atomic(cmd)) {
+ res = SCST_EXEC_NEED_THREAD;
+ goto out;
+ }
+
+ if ((cmd->cdb[0] == RESERVE_10) && (cmd->cdb[2] & SCST_RES_3RDPTY)) {
+ PRINT_ERROR("RESERVE_10: 3rdPty RESERVE not implemented "
+ "(lun=%lld)", (long long unsigned int)cmd->lun);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ goto out_done;
+ }
+
+ dev = cmd->dev;
+
+ if (dev->tst == SCST_CONTR_MODE_ONE_TASK_SET)
+ scst_block_dev_cmd(cmd, 1);
+
+ rc = scst_check_local_events(cmd);
+ if (unlikely(rc != 0))
+ goto out_done;
+
+ spin_lock_bh(&dev->dev_lock);
+
+ if (test_bit(SCST_TGT_DEV_RESERVED, &cmd->tgt_dev->tgt_dev_flags)) {
+ spin_unlock_bh(&dev->dev_lock);
+ scst_set_cmd_error_status(cmd, SAM_STAT_RESERVATION_CONFLICT);
+ goto out_done;
+ }
+
+ list_for_each_entry(tgt_dev_tmp, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if (cmd->tgt_dev != tgt_dev_tmp)
+ set_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev_tmp->tgt_dev_flags);
+ }
+ dev->dev_reserved = 1;
+
+ spin_unlock_bh(&dev->dev_lock);
+
+out:
+ return res;
+
+out_done:
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+ res = SCST_EXEC_COMPLETED;
+ goto out;
+}
+
+static int scst_release_local(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_NOT_COMPLETED, rc;
+ struct scst_tgt_dev *tgt_dev_tmp;
+ struct scst_device *dev;
+
+ if (scst_cmd_atomic(cmd)) {
+ res = SCST_EXEC_NEED_THREAD;
+ goto out;
+ }
+
+ dev = cmd->dev;
+
+ if (dev->tst == SCST_CONTR_MODE_ONE_TASK_SET)
+ scst_block_dev_cmd(cmd, 1);
+
+ rc = scst_check_local_events(cmd);
+ if (unlikely(rc != 0))
+ goto out_done;
+
+ spin_lock_bh(&dev->dev_lock);
+
+ /*
+	 * The device could be RELEASED behind us, if the RESERVING session
+	 * is closed (see scst_free_tgt_dev()), but this actually doesn't
+	 * matter, so take the lock and don't retest the DEV_RESERVED bits
+ */
+ if (test_bit(SCST_TGT_DEV_RESERVED, &cmd->tgt_dev->tgt_dev_flags)) {
+ res = SCST_EXEC_COMPLETED;
+ cmd->status = 0;
+ cmd->msg_status = 0;
+ cmd->host_status = DID_OK;
+ cmd->driver_status = 0;
+ cmd->completed = 1;
+ } else {
+ list_for_each_entry(tgt_dev_tmp,
+ &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ clear_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev_tmp->tgt_dev_flags);
+ }
+ dev->dev_reserved = 0;
+ }
+
+ spin_unlock_bh(&dev->dev_lock);
+
+ if (res == SCST_EXEC_COMPLETED)
+ goto out_done;
+
+out:
+ return res;
+
+out_done:
+ res = SCST_EXEC_COMPLETED;
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+ goto out;
+}
+
+/**
+ * scst_check_local_events() - check if there are any local SCSI events
+ *
+ * Description:
+ * Checks if the command can be executed or there are local events,
+ *    like reservations, pending UAs, etc. Returns < 0 if command must be
+ * aborted, > 0 if there is an event and command should be immediately
+ * completed, or 0 otherwise.
+ *
+ * !! Dev handlers implementing the exec() callback must call this function
+ * !! there, just before the actual command's execution!
+ *
+ * On call no locks, no IRQ or IRQ-disabled context allowed.
+ */
+int scst_check_local_events(struct scst_cmd *cmd)
+{
+ int res, rc;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ struct scst_device *dev = cmd->dev;
+
+ /*
+ * There's no race here, because we need to trace commands sent
+ * *after* dev_double_ua_possible flag was set.
+ */
+ if (unlikely(dev->dev_double_ua_possible))
+ cmd->double_ua_possible = 1;
+
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+ TRACE_MGMT_DBG("ABORTED set, aborting cmd %p", cmd);
+ goto out_uncomplete;
+ }
+
+ /* Reserve check before Unit Attention */
+ if (unlikely(test_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev->tgt_dev_flags))) {
+ if ((cmd->op_flags & SCST_REG_RESERVE_ALLOWED) == 0) {
+ scst_set_cmd_error_status(cmd,
+ SAM_STAT_RESERVATION_CONFLICT);
+ goto out_complete;
+ }
+ }
+
+ /* If we had internal bus reset, set the command error unit attention */
+ if ((dev->scsi_dev != NULL) &&
+ unlikely(dev->scsi_dev->was_reset)) {
+ if (scst_is_ua_command(cmd)) {
+ int done = 0;
+ /*
+			 * Prevent more than 1 cmd from being triggered by
+ * was_reset.
+ */
+ spin_lock_bh(&dev->dev_lock);
+ if (dev->scsi_dev->was_reset) {
+ TRACE(TRACE_MGMT, "was_reset is %d", 1);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_reset_UA));
+ /*
+ * It looks like it is safe to clear was_reset
+ * here.
+ */
+ dev->scsi_dev->was_reset = 0;
+ done = 1;
+ }
+ spin_unlock_bh(&dev->dev_lock);
+
+ if (done)
+ goto out_complete;
+ }
+ }
+
+ if (unlikely(test_bit(SCST_TGT_DEV_UA_PENDING,
+ &cmd->tgt_dev->tgt_dev_flags))) {
+ if (scst_is_ua_command(cmd)) {
+ rc = scst_set_pending_UA(cmd);
+ if (rc == 0)
+ goto out_complete;
+ }
+ }
+
+ res = 0;
+
+out:
+ return res;
+
+out_complete:
+ res = 1;
+ BUG_ON(!cmd->completed);
+ goto out;
+
+out_uncomplete:
+ res = -1;
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_check_local_events);
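
In line with the "!!" note above, a dev handler exec() skeleton that mirrors the local handlers in this file (my_dh_exec() and the elided execution step are hypothetical; the calls, states and constants are from this patch):

	static int my_dh_exec(struct scst_cmd *cmd)
	{
		int rc;

		/* Mandatory: check aborts, reservations and pending UAs first */
		rc = scst_check_local_events(cmd);
		if (unlikely(rc != 0))
			goto out_done;

		/* ... execute the command and set status/sense here ... */
		cmd->completed = 1;

	out_done:
		cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
		return SCST_EXEC_COMPLETED;
	}
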
+
+/* No locks */
+void scst_inc_expected_sn(struct scst_tgt_dev *tgt_dev, atomic_t *slot)
+{
+ if (slot == NULL)
+ goto inc;
+
+ /* Optimized for lockless fast path */
+
+ TRACE_SN("Slot %zd, *cur_sn_slot %d", slot - tgt_dev->sn_slots,
+ atomic_read(slot));
+
+ if (!atomic_dec_and_test(slot))
+ goto out;
+
+ TRACE_SN("Slot is 0 (num_free_sn_slots=%d)",
+ tgt_dev->num_free_sn_slots);
+ if (tgt_dev->num_free_sn_slots < (int)ARRAY_SIZE(tgt_dev->sn_slots)-1) {
+ spin_lock_irq(&tgt_dev->sn_lock);
+ if (likely(tgt_dev->num_free_sn_slots < (int)ARRAY_SIZE(tgt_dev->sn_slots)-1)) {
+ if (tgt_dev->num_free_sn_slots < 0)
+ tgt_dev->cur_sn_slot = slot;
+ /*
+ * To be in-sync with SIMPLE case in scst_cmd_set_sn()
+ */
+ smp_mb();
+ tgt_dev->num_free_sn_slots++;
+ TRACE_SN("Incremented num_free_sn_slots (%d)",
+ tgt_dev->num_free_sn_slots);
+
+ }
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ }
+
+inc:
+ /*
+ * No protection of expected_sn is needed, because only one thread
+	 * at a time can be here (serialized by the SN). Also it is assumed
+	 * that there cannot be half-incremented values.
+ */
+ tgt_dev->expected_sn++;
+ /*
+	 * The write must be before the def_cmd_count read to be in sync with
+ * scst_post_exec_sn(). See comment in scst_send_for_exec().
+ */
+ smp_mb();
+ TRACE_SN("Next expected_sn: %d", tgt_dev->expected_sn);
+
+out:
+ return;
+}
+
+/* No locks */
+static struct scst_cmd *scst_post_exec_sn(struct scst_cmd *cmd,
+ bool make_active)
+{
+ /* For HQ commands SN is not set */
+ bool inc_expected_sn = !cmd->inc_expected_sn_on_done &&
+ cmd->sn_set && !cmd->retry;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ struct scst_cmd *res;
+
+ if (inc_expected_sn)
+ scst_inc_expected_sn(tgt_dev, cmd->sn_slot);
+
+ if (make_active) {
+ scst_make_deferred_commands_active(tgt_dev);
+ res = NULL;
+ } else
+ res = scst_check_deferred_commands(tgt_dev);
+ return res;
+}
+
+/* cmd must be additionally referenced to not die inside */
+static int scst_do_real_exec(struct scst_cmd *cmd)
+{
+ int res = SCST_EXEC_NOT_COMPLETED;
+ int rc;
+ bool atomic = scst_cmd_atomic(cmd);
+ struct scst_device *dev = cmd->dev;
+ struct scst_dev_type *handler = dev->handler;
+ struct io_context *old_ctx = NULL;
+ bool ctx_changed = false;
+
+ if (!atomic)
+ ctx_changed = scst_set_io_context(cmd, &old_ctx);
+
+ cmd->state = SCST_CMD_STATE_REAL_EXECUTING;
+
+ if (handler->exec) {
+ if (unlikely(!dev->handler->exec_atomic && atomic)) {
+ /*
+			 * It shouldn't happen because of the SCST_TGT_DEV_AFTER_*
+ * optimization.
+ */
+ TRACE_DBG("Dev handler %s exec() needs thread "
+ "context, rescheduling", dev->handler->name);
+ res = SCST_EXEC_NEED_THREAD;
+ goto out_restore;
+ }
+
+ TRACE_DBG("Calling dev handler %s exec(%p)",
+ handler->name, cmd);
+ TRACE_BUFF_FLAG(TRACE_SND_TOP, "Execing: ", cmd->cdb,
+ cmd->cdb_len);
+ scst_set_cur_start(cmd);
+ res = handler->exec(cmd);
+ TRACE_DBG("Dev handler %s exec() returned %d",
+ handler->name, res);
+
+ if (res == SCST_EXEC_COMPLETED)
+ goto out_complete;
+ else if (res == SCST_EXEC_NEED_THREAD)
+ goto out_restore;
+
+ scst_set_exec_time(cmd);
+
+ BUG_ON(res != SCST_EXEC_NOT_COMPLETED);
+ }
+
+ TRACE_DBG("Sending cmd %p to SCSI mid-level", cmd);
+
+ if (unlikely(dev->scsi_dev == NULL)) {
+ PRINT_ERROR("Command for virtual device must be "
+ "processed by device handler (LUN %lld)!",
+ (long long unsigned int)cmd->lun);
+ goto out_error;
+ }
+
+ res = scst_check_local_events(cmd);
+ if (unlikely(res != 0))
+ goto out_done;
+
+#ifndef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+ if (unlikely(atomic)) {
+ TRACE_DBG("Pass-through exec() can not be called in atomic "
+ "context, rescheduling to the thread (handler %s)",
+ handler->name);
+ res = SCST_EXEC_NEED_THREAD;
+ goto out_restore;
+ }
+#endif
+
+ scst_set_cur_start(cmd);
+
+ rc = scst_scsi_exec_async(cmd, scst_cmd_done);
+ if (unlikely(rc != 0)) {
+ if (atomic) {
+ res = SCST_EXEC_NEED_THREAD;
+ goto out_restore;
+ } else {
+ PRINT_ERROR("scst pass-through exec failed: %x", rc);
+ goto out_error;
+ }
+ }
+
+out_complete:
+ res = SCST_EXEC_COMPLETED;
+
+out_reset_ctx:
+ if (ctx_changed)
+ scst_reset_io_context(cmd->tgt_dev, old_ctx);
+ return res;
+
+out_restore:
+ scst_set_exec_time(cmd);
+ /* Restore the state */
+ cmd->state = SCST_CMD_STATE_REAL_EXEC;
+ goto out_reset_ctx;
+
+out_error:
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+ goto out_done;
+
+out_done:
+ res = SCST_EXEC_COMPLETED;
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+ goto out_complete;
+}
+
+static inline int scst_real_exec(struct scst_cmd *cmd)
+{
+ int res;
+
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_SAME != SCST_EXEC_NOT_COMPLETED);
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_NEXT != SCST_EXEC_COMPLETED);
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_NEED_THREAD != SCST_EXEC_NEED_THREAD);
+
+ __scst_cmd_get(cmd);
+
+ res = scst_do_real_exec(cmd);
+
+ if (likely(res == SCST_EXEC_COMPLETED)) {
+ scst_post_exec_sn(cmd, true);
+ if (cmd->dev->scsi_dev != NULL)
+ generic_unplug_device(
+ cmd->dev->scsi_dev->request_queue);
+ } else
+ BUG_ON(res != SCST_EXEC_NEED_THREAD);
+
+ __scst_cmd_put(cmd);
+
+ /* SCST_EXEC_* match SCST_CMD_STATE_RES_* */
+ return res;
+}
+
+static int scst_do_local_exec(struct scst_cmd *cmd)
+{
+ int res;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+ /* Check READ_ONLY device status */
+ if ((cmd->op_flags & SCST_WRITE_MEDIUM) &&
+ (tgt_dev->acg_dev->rd_only || cmd->dev->swp ||
+ cmd->dev->rd_only)) {
+		PRINT_WARNING("Attempt to write to a read-only device: "
+ "initiator %s, LUN %lld, op %x",
+ cmd->sess->initiator_name, cmd->lun, cmd->cdb[0]);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_data_protect));
+ goto out_done;
+ }
+
+ if (!scst_is_cmd_local(cmd)) {
+ res = SCST_EXEC_NOT_COMPLETED;
+ goto out;
+ }
+
+ switch (cmd->cdb[0]) {
+ case MODE_SELECT:
+ case MODE_SELECT_10:
+ case LOG_SELECT:
+ res = scst_pre_select(cmd);
+ break;
+ case RESERVE:
+ case RESERVE_10:
+ res = scst_reserve_local(cmd);
+ break;
+ case RELEASE:
+ case RELEASE_10:
+ res = scst_release_local(cmd);
+ break;
+ case REPORT_LUNS:
+ res = scst_report_luns_local(cmd);
+ break;
+ case REQUEST_SENSE:
+ res = scst_request_sense_local(cmd);
+ break;
+ default:
+ res = SCST_EXEC_NOT_COMPLETED;
+ break;
+ }
+
+out:
+ return res;
+
+out_done:
+ /* Report the result */
+ cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+ res = SCST_EXEC_COMPLETED;
+ goto out;
+}
+
+static int scst_local_exec(struct scst_cmd *cmd)
+{
+ int res;
+
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_SAME != SCST_EXEC_NOT_COMPLETED);
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_NEXT != SCST_EXEC_COMPLETED);
+ BUILD_BUG_ON(SCST_CMD_STATE_RES_NEED_THREAD != SCST_EXEC_NEED_THREAD);
+
+ __scst_cmd_get(cmd);
+
+ res = scst_do_local_exec(cmd);
+ if (likely(res == SCST_EXEC_NOT_COMPLETED))
+ cmd->state = SCST_CMD_STATE_REAL_EXEC;
+ else if (res == SCST_EXEC_COMPLETED)
+ scst_post_exec_sn(cmd, true);
+ else
+ BUG_ON(res != SCST_EXEC_NEED_THREAD);
+
+ __scst_cmd_put(cmd);
+
+ /* SCST_EXEC_* match SCST_CMD_STATE_RES_* */
+ return res;
+}
+
+static int scst_exec(struct scst_cmd **active_cmd)
+{
+ struct scst_cmd *cmd = *active_cmd;
+ struct scst_cmd *ref_cmd;
+ struct scst_device *dev = cmd->dev;
+ int res = SCST_CMD_STATE_RES_CONT_NEXT, count;
+
+ if (unlikely(scst_inc_on_dev_cmd(cmd) != 0))
+ goto out;
+
+ /* To protect tgt_dev */
+ ref_cmd = cmd;
+ __scst_cmd_get(ref_cmd);
+
+ count = 0;
+ while (1) {
+ int rc;
+
+ cmd->sent_for_exec = 1;
+ /*
+ * To sync with scst_abort_cmd(). The above assignment must
+ * be before SCST_CMD_ABORTED test, done later in
+ * scst_check_local_events(). It's far from here, so the order
+ * is virtually guaranteed, but let's have it just in case.
+ */
+ smp_mb();
+
+ cmd->scst_cmd_done = scst_cmd_done_local;
+ cmd->state = SCST_CMD_STATE_LOCAL_EXEC;
+
+ rc = scst_do_local_exec(cmd);
+ if (likely(rc == SCST_EXEC_NOT_COMPLETED))
+ /* Nothing to do */;
+ else if (rc == SCST_EXEC_NEED_THREAD) {
+ TRACE_DBG("%s", "scst_do_local_exec() requested "
+ "thread context, rescheduling");
+ scst_dec_on_dev_cmd(cmd);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ break;
+ } else {
+ BUG_ON(rc != SCST_EXEC_COMPLETED);
+ goto done;
+ }
+
+ cmd->state = SCST_CMD_STATE_REAL_EXEC;
+
+ rc = scst_do_real_exec(cmd);
+ if (likely(rc == SCST_EXEC_COMPLETED))
+ /* Nothing to do */;
+ else if (rc == SCST_EXEC_NEED_THREAD) {
+ TRACE_DBG("scst_real_exec() requested thread "
+ "context, rescheduling (cmd %p)", cmd);
+ scst_dec_on_dev_cmd(cmd);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ break;
+ } else
+ BUG();
+
+done:
+ count++;
+
+ cmd = scst_post_exec_sn(cmd, false);
+ if (cmd == NULL)
+ break;
+
+ if (unlikely(scst_inc_on_dev_cmd(cmd) != 0))
+ break;
+
+ __scst_cmd_put(ref_cmd);
+ ref_cmd = cmd;
+ __scst_cmd_get(ref_cmd);
+ }
+
+ *active_cmd = cmd;
+
+ if (count == 0)
+ goto out_put;
+
+ if (dev->scsi_dev != NULL)
+ generic_unplug_device(dev->scsi_dev->request_queue);
+
+out_put:
+ __scst_cmd_put(ref_cmd);
+ /* !! At this point sess, dev and tgt_dev can be already freed !! */
+
+out:
+ return res;
+}
+
+static int scst_send_for_exec(struct scst_cmd **active_cmd)
+{
+ int res;
+ struct scst_cmd *cmd = *active_cmd;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ typeof(tgt_dev->expected_sn) expected_sn;
+
+ if (unlikely(cmd->internal))
+ goto exec;
+
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ goto exec;
+
+ BUG_ON(!cmd->sn_set);
+
+ expected_sn = tgt_dev->expected_sn;
+ /* Optimized for lockless fast path */
+ if ((cmd->sn != expected_sn) || (tgt_dev->hq_cmd_count > 0)) {
+ spin_lock_irq(&tgt_dev->sn_lock);
+
+ tgt_dev->def_cmd_count++;
+ /*
+ * Memory barrier is needed here to implement lockless fast
+ * path. We need the exact ordering of the read and write of
+ * def_cmd_count and expected_sn. Otherwise, we can miss the
+ * case when expected_sn was changed to be equal to cmd->sn
+ * while we are queuing cmd to the deferred list after the
+ * expected_sn recheck below, leaving the command stuck
+ * forever. But with the barrier, in such a case
+ * __scst_check_deferred_commands()
+ * will be called and it will take sn_lock, so we will be
+ * synchronized.
+ */
+ smp_mb();
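+ /*
+ * Illustrative sketch of the race the barrier closes (assuming the
+ * expected_sn writer pairs it with a barrier before checking
+ * def_cmd_count, as the comment above implies). CPU A runs this
+ * path, CPU B completes the previous command:
+ *
+ *   CPU A                           CPU B
+ *   def_cmd_count++                 expected_sn++
+ *   barrier                         barrier
+ *   re-read expected_sn             read def_cmd_count
+ *
+ * At least one side then sees the other's write: either A sees the
+ * new expected_sn and doesn't defer the command, or B sees
+ * def_cmd_count > 0 and calls __scst_check_deferred_commands().
+ */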
+
+ expected_sn = tgt_dev->expected_sn;
+ if ((cmd->sn != expected_sn) || (tgt_dev->hq_cmd_count > 0)) {
+ if (unlikely(test_bit(SCST_CMD_ABORTED,
+ &cmd->cmd_flags))) {
+ /* Necessary to allow aborting out of sn cmds */
+ TRACE_MGMT_DBG("Aborting out of sn cmd %p "
+ "(tag %llu, sn %u)", cmd,
+ (long long unsigned)cmd->tag, cmd->sn);
+ tgt_dev->def_cmd_count--;
+ scst_set_cmd_abnormal_done_state(cmd);
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ } else {
+ TRACE_SN("Deferring cmd %p (sn=%d, set %d, "
+ "expected_sn=%d)", cmd, cmd->sn,
+ cmd->sn_set, expected_sn);
+ list_add_tail(&cmd->sn_cmd_list_entry,
+ &tgt_dev->deferred_cmd_list);
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ }
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ goto out;
+ } else {
+ TRACE_SN("Somebody incremented expected_sn %d, "
+ "continuing", expected_sn);
+ tgt_dev->def_cmd_count--;
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ }
+ }
+
+exec:
+ res = scst_exec(active_cmd);
+
+out:
+ return res;
+}
+
+/* No locks supposed to be held */
+static int scst_check_sense(struct scst_cmd *cmd)
+{
+ int res = 0;
+ struct scst_device *dev = cmd->dev;
+
+ if (unlikely(cmd->ua_ignore))
+ goto out;
+
+ /* If we had internal bus reset behind us, set the command error UA */
+ if ((dev->scsi_dev != NULL) &&
+ unlikely(cmd->host_status == DID_RESET) &&
+ scst_is_ua_command(cmd)) {
+ TRACE(TRACE_MGMT, "DID_RESET: was_reset=%d host_status=%x",
+ dev->scsi_dev->was_reset, cmd->host_status);
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_reset_UA));
+ /* It looks like it is safe to clear was_reset here */
+ dev->scsi_dev->was_reset = 0;
+ }
+
+ if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION) &&
+ SCST_SENSE_VALID(cmd->sense)) {
+ PRINT_BUFF_FLAG(TRACE_SCSI, "Sense", cmd->sense,
+ cmd->sense_valid_len);
+
+ /* Check Unit Attention Sense Key */
+ if (scst_is_ua_sense(cmd->sense, cmd->sense_valid_len)) {
+ if (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASC_VALID,
+ 0, SCST_SENSE_ASC_UA_RESET, 0)) {
+ if (cmd->double_ua_possible) {
+ TRACE_MGMT_DBG("Double UA "
+ "detected for device %p", dev);
+ TRACE_MGMT_DBG("Retrying cmd"
+ " %p (tag %llu)", cmd,
+ (long long unsigned)cmd->tag);
+
+ cmd->status = 0;
+ cmd->msg_status = 0;
+ cmd->host_status = DID_OK;
+ cmd->driver_status = 0;
+ cmd->completed = 0;
+
+ mempool_free(cmd->sense,
+ scst_sense_mempool);
+ cmd->sense = NULL;
+
+ scst_check_restore_sg_buff(cmd);
+
+ BUG_ON(cmd->dbl_ua_orig_resp_data_len < 0);
+ cmd->data_direction =
+ cmd->dbl_ua_orig_data_direction;
+ cmd->resp_data_len =
+ cmd->dbl_ua_orig_resp_data_len;
+
+ cmd->state = SCST_CMD_STATE_REAL_EXEC;
+ cmd->retry = 1;
+ res = 1;
+ goto out;
+ }
+ }
+ scst_dev_check_set_UA(dev, cmd, cmd->sense,
+ cmd->sense_valid_len);
+ }
+ }
+
+ if (unlikely(cmd->double_ua_possible)) {
+ if (scst_is_ua_command(cmd)) {
+ TRACE_MGMT_DBG("Clearing dbl_ua_possible flag (dev %p, "
+ "cmd %p)", dev, cmd);
+ /*
+ * Lock used to protect other flags in the bitfield
+ * (just in case, actually). Those flags can't be
+ * changed in parallel, because the device is
+ * serialized.
+ */
+ spin_lock_bh(&dev->dev_lock);
+ dev->dev_double_ua_possible = 0;
+ spin_unlock_bh(&dev->dev_lock);
+ }
+ }
+
+out:
+ return res;
+}
+
+static int scst_check_auto_sense(struct scst_cmd *cmd)
+{
+ int res = 0;
+
+ if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION) &&
+ (!SCST_SENSE_VALID(cmd->sense) ||
+ SCST_NO_SENSE(cmd->sense))) {
+ TRACE(TRACE_SCSI|TRACE_MINOR_AND_MGMT_DBG, "CHECK_CONDITION, "
+ "but no sense: cmd->status=%x, cmd->msg_status=%x, "
+ "cmd->host_status=%x, cmd->driver_status=%x (cmd %p)",
+ cmd->status, cmd->msg_status, cmd->host_status,
+ cmd->driver_status, cmd);
+ res = 1;
+ } else if (unlikely(cmd->host_status)) {
+ if ((cmd->host_status == DID_REQUEUE) ||
+ (cmd->host_status == DID_IMM_RETRY) ||
+ (cmd->host_status == DID_SOFT_ERROR) ||
+ (cmd->host_status == DID_ABORT)) {
+ scst_set_busy(cmd);
+ } else {
+ TRACE(TRACE_SCSI|TRACE_MINOR_AND_MGMT_DBG, "Host "
+ "status %x received, returning HARDWARE ERROR "
+ "instead (cmd %p)", cmd->host_status, cmd);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ }
+ }
+ return res;
+}
+
+static int scst_pre_dev_done(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME, rc;
+
+ if (unlikely(scst_check_auto_sense(cmd))) {
+ PRINT_INFO("Command finished with CHECK CONDITION, but "
+ "without sense data (opcode 0x%x), issuing "
+ "REQUEST SENSE", cmd->cdb[0]);
+ rc = scst_prepare_request_sense(cmd);
+ if (rc == 0)
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ else {
+ PRINT_ERROR("%s", "Unable to issue REQUEST SENSE, "
+ "returning HARDWARE ERROR");
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ }
+ goto out;
+ } else if (unlikely(scst_check_sense(cmd)))
+ goto out;
+
+ if (likely(scsi_status_is_good(cmd->status))) {
+ unsigned char type = cmd->dev->type;
+ if (unlikely((cmd->cdb[0] == MODE_SENSE ||
+ cmd->cdb[0] == MODE_SENSE_10)) &&
+ (cmd->tgt_dev->acg_dev->rd_only || cmd->dev->swp ||
+ cmd->dev->rd_only) &&
+ (type == TYPE_DISK ||
+ type == TYPE_WORM ||
+ type == TYPE_MOD ||
+ type == TYPE_TAPE)) {
+ int32_t length;
+ uint8_t *address;
+ bool err = false;
+
+ length = scst_get_buf_first(cmd, &address);
+ if (length < 0) {
+ PRINT_ERROR("%s", "Unable to get "
+ "MODE_SENSE buffer");
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(
+ scst_sense_hardw_error));
+ err = true;
+ } else if (length > 2 && cmd->cdb[0] == MODE_SENSE)
+ address[2] |= 0x80; /* Write Protect */
+ else if (length > 3 && cmd->cdb[0] == MODE_SENSE_10)
+ address[3] |= 0x80; /* Write Protect */
+ scst_put_buf(cmd, address);
+
+ if (err)
+ goto out;
+ }
+
+ /*
+ * Check and clear NormACA option for the device, if necessary,
+ * since we don't support ACA
+ */
+ if (unlikely((cmd->cdb[0] == INQUIRY)) &&
+ /* Std INQUIRY data (no EVPD) */
+ !(cmd->cdb[1] & SCST_INQ_EVPD) &&
+ (cmd->resp_data_len > SCST_INQ_BYTE3)) {
+ uint8_t *buffer;
+ int buflen;
+ bool err = false;
+
+ /* ToDo: all pages ?? */
+ buflen = scst_get_buf_first(cmd, &buffer);
+ if (buflen > SCST_INQ_BYTE3) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (buffer[SCST_INQ_BYTE3] & SCST_INQ_NORMACA_BIT) {
+ PRINT_INFO("NormACA set for device: "
+ "lun=%lld, type 0x%02x. Clear it, "
+ "since it's unsupported.",
+ (long long unsigned int)cmd->lun,
+ buffer[0]);
+ }
+#endif
+ buffer[SCST_INQ_BYTE3] &= ~SCST_INQ_NORMACA_BIT;
+ } else if (buflen != 0) {
+ PRINT_ERROR("%s", "Unable to get INQUIRY "
+ "buffer");
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ err = true;
+ }
+ if (buflen > 0)
+ scst_put_buf(cmd, buffer);
+
+ if (err)
+ goto out;
+ }
+
+ if (unlikely((cmd->cdb[0] == MODE_SELECT) ||
+ (cmd->cdb[0] == MODE_SELECT_10) ||
+ (cmd->cdb[0] == LOG_SELECT))) {
+ TRACE(TRACE_SCSI,
+ "MODE/LOG SELECT succeeded (LUN %lld)",
+ (long long unsigned int)cmd->lun);
+ cmd->state = SCST_CMD_STATE_MODE_SELECT_CHECKS;
+ goto out;
+ }
+ } else {
+ TRACE(TRACE_SCSI, "cmd %p not succeeded with status %x",
+ cmd, cmd->status);
+
+ if ((cmd->cdb[0] == RESERVE) || (cmd->cdb[0] == RESERVE_10)) {
+ if (!test_bit(SCST_TGT_DEV_RESERVED,
+ &cmd->tgt_dev->tgt_dev_flags)) {
+ struct scst_tgt_dev *tgt_dev_tmp;
+ struct scst_device *dev = cmd->dev;
+
+ TRACE(TRACE_SCSI, "RESERVE failed lun=%lld, "
+ "status=%x",
+ (long long unsigned int)cmd->lun,
+ cmd->status);
+ PRINT_BUFF_FLAG(TRACE_SCSI, "Sense", cmd->sense,
+ cmd->sense_valid_len);
+
+ /* Clearing the reservation */
+ spin_lock_bh(&dev->dev_lock);
+ list_for_each_entry(tgt_dev_tmp,
+ &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ clear_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev_tmp->tgt_dev_flags);
+ }
+ dev->dev_reserved = 0;
+ spin_unlock_bh(&dev->dev_lock);
+ }
+ }
+
+ /* Check for MODE PARAMETERS CHANGED UA */
+ if ((cmd->dev->scsi_dev != NULL) &&
+ (cmd->status == SAM_STAT_CHECK_CONDITION) &&
+ scst_is_ua_sense(cmd->sense, cmd->sense_valid_len) &&
+ scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASCx_VALID,
+ 0, 0x2a, 0x01)) {
+ TRACE(TRACE_SCSI, "MODE PARAMETERS CHANGED UA (lun "
+ "%lld)", (long long unsigned int)cmd->lun);
+ cmd->state = SCST_CMD_STATE_MODE_SELECT_CHECKS;
+ goto out;
+ }
+ }
+
+ cmd->state = SCST_CMD_STATE_DEV_DONE;
+
+out:
+ return res;
+}
+
+static int scst_mode_select_checks(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME;
+ int atomic = scst_cmd_atomic(cmd);
+
+ if (likely(scsi_status_is_good(cmd->status))) {
+ if (unlikely((cmd->cdb[0] == MODE_SELECT) ||
+ (cmd->cdb[0] == MODE_SELECT_10) ||
+ (cmd->cdb[0] == LOG_SELECT))) {
+ struct scst_device *dev = cmd->dev;
+ int sl;
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+
+ if (atomic && (dev->scsi_dev != NULL)) {
+ TRACE_DBG("%s", "MODE/LOG SELECT: thread "
+ "context required");
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ TRACE(TRACE_SCSI, "MODE/LOG SELECT succeeded, "
+ "setting the SELECT UA (lun=%lld)",
+ (long long unsigned int)cmd->lun);
+
+ spin_lock_bh(&dev->dev_lock);
+ if (cmd->cdb[0] == LOG_SELECT) {
+ sl = scst_set_sense(sense_buffer,
+ sizeof(sense_buffer),
+ dev->d_sense,
+ UNIT_ATTENTION, 0x2a, 0x02);
+ } else {
+ sl = scst_set_sense(sense_buffer,
+ sizeof(sense_buffer),
+ dev->d_sense,
+ UNIT_ATTENTION, 0x2a, 0x01);
+ }
+ scst_dev_check_set_local_UA(dev, cmd, sense_buffer, sl);
+ spin_unlock_bh(&dev->dev_lock);
+
+ if (dev->scsi_dev != NULL)
+ scst_obtain_device_parameters(dev);
+ }
+ } else if ((cmd->status == SAM_STAT_CHECK_CONDITION) &&
+ scst_is_ua_sense(cmd->sense, cmd->sense_valid_len) &&
+ /* mode parameters changed */
+ (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASCx_VALID,
+ 0, 0x2a, 0x01) ||
+ scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASC_VALID,
+ 0, 0x29, 0) /* reset */ ||
+ scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASC_VALID,
+ 0, 0x28, 0) /* medium changed */ ||
+ /* cleared by another ini (just in case) */
+ scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ASC_VALID,
+ 0, 0x2F, 0))) {
+ if (atomic) {
+ TRACE_DBG("Possible parameters changed UA %x: "
+ "thread context required", cmd->sense[12]);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ TRACE(TRACE_SCSI, "Possible parameters changed UA %x "
+ "(LUN %lld): getting new parameters", cmd->sense[12],
+ (long long unsigned int)cmd->lun);
+
+ scst_obtain_device_parameters(cmd->dev);
+ } else
+ BUG();
+
+ cmd->state = SCST_CMD_STATE_DEV_DONE;
+
+out:
+ return res;
+}
+
+static void scst_inc_check_expected_sn(struct scst_cmd *cmd)
+{
+ if (likely(cmd->sn_set))
+ scst_inc_expected_sn(cmd->tgt_dev, cmd->sn_slot);
+
+ scst_make_deferred_commands_active(cmd->tgt_dev);
+}
+
+static int scst_dev_done(struct scst_cmd *cmd)
+{
+ int res = SCST_CMD_STATE_RES_CONT_SAME;
+ int state;
+ struct scst_device *dev = cmd->dev;
+
+ state = SCST_CMD_STATE_PRE_XMIT_RESP;
+
+ if (likely(!scst_is_cmd_fully_local(cmd)) &&
+ likely(dev->handler->dev_done != NULL)) {
+ int rc;
+
+ if (unlikely(!dev->handler->dev_done_atomic &&
+ scst_cmd_atomic(cmd))) {
+ /*
+ * This shouldn't happen, thanks to the SCST_TGT_DEV_AFTER_*
+ * optimization.
+ */
+ TRACE_DBG("Dev handler %s dev_done() needs thread "
+ "context, rescheduling", dev->handler->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ TRACE_DBG("Calling dev handler %s dev_done(%p)",
+ dev->handler->name, cmd);
+ scst_set_cur_start(cmd);
+ rc = dev->handler->dev_done(cmd);
+ scst_set_dev_done_time(cmd);
+ TRACE_DBG("Dev handler %s dev_done() returned %d",
+ dev->handler->name, rc);
+ if (rc != SCST_CMD_STATE_DEFAULT)
+ state = rc;
+ }
+
+ switch (state) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+ case SCST_CMD_STATE_PRE_XMIT_RESP:
+ case SCST_CMD_STATE_DEV_PARSE:
+ case SCST_CMD_STATE_PRE_PARSE:
+ case SCST_CMD_STATE_PREPARE_SPACE:
+ case SCST_CMD_STATE_RDY_TO_XFER:
+ case SCST_CMD_STATE_TGT_PRE_EXEC:
+ case SCST_CMD_STATE_SEND_FOR_EXEC:
+ case SCST_CMD_STATE_LOCAL_EXEC:
+ case SCST_CMD_STATE_REAL_EXEC:
+ case SCST_CMD_STATE_PRE_DEV_DONE:
+ case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+ case SCST_CMD_STATE_DEV_DONE:
+ case SCST_CMD_STATE_XMIT_RESP:
+ case SCST_CMD_STATE_FINISHED:
+ case SCST_CMD_STATE_FINISHED_INTERNAL:
+#else
+ default:
+#endif
+ cmd->state = state;
+ break;
+ case SCST_CMD_STATE_NEED_THREAD_CTX:
+ TRACE_DBG("Dev handler %s dev_done() requested "
+ "thread context, rescheduling",
+ dev->handler->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ break;
+#ifdef CONFIG_SCST_EXTRACHECKS
+ default:
+ if (state >= 0) {
+ PRINT_ERROR("Dev handler %s dev_done() returned "
+ "invalid cmd state %d",
+ dev->handler->name, state);
+ } else {
+ PRINT_ERROR("Dev handler %s dev_done() returned "
+ "error %d", dev->handler->name,
+ state);
+ }
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ scst_set_cmd_abnormal_done_state(cmd);
+ break;
+#endif
+ }
+
+ if (cmd->needs_unblocking)
+ scst_unblock_dev_cmd(cmd);
+
+ if (likely(cmd->dec_on_dev_needed))
+ scst_dec_on_dev_cmd(cmd);
+
+ if (cmd->inc_expected_sn_on_done && cmd->sent_for_exec)
+ scst_inc_check_expected_sn(cmd);
+
+ if (unlikely(cmd->internal))
+ cmd->state = SCST_CMD_STATE_FINISHED_INTERNAL;
+
+out:
+ return res;
+}
+
+static int scst_pre_xmit_response(struct scst_cmd *cmd)
+{
+ int res;
+
+ EXTRACHECKS_BUG_ON(cmd->internal);
+
+#ifdef CONFIG_SCST_DEBUG_TM
+ if (cmd->tm_dbg_delayed &&
+ !test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+ if (scst_cmd_atomic(cmd)) {
+ TRACE_MGMT_DBG("%s",
+ "DEBUG_TM delayed cmd needs a thread");
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ return res;
+ }
+ TRACE_MGMT_DBG("Delaying cmd %p (tag %llu) for 1 second",
+ cmd, cmd->tag);
+ schedule_timeout_uninterruptible(HZ);
+ }
+#endif
+
+ if (likely(cmd->tgt_dev != NULL)) {
+ /*
+ * These counters protect against overly long processing latency,
+ * so we should decrement them only after the cmd has completed.
+ */
+ atomic_dec(&cmd->tgt_dev->tgt_dev_cmd_count);
+#ifdef CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+ atomic_dec(&cmd->dev->dev_cmd_count);
+#endif
+#ifdef CONFIG_SCST_ORDERED_READS
+ /* If the expected values are not set, the expected direction is UNKNOWN */
+ if (cmd->expected_data_direction & SCST_DATA_WRITE)
+ atomic_dec(&cmd->dev->write_cmd_count);
+#endif
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ scst_on_hq_cmd_response(cmd);
+
+ if (unlikely(!cmd->sent_for_exec)) {
+ TRACE_SN("cmd %p was not sent to mid-lev"
+ " (sn %d, set %d)",
+ cmd, cmd->sn, cmd->sn_set);
+ scst_unblock_deferred(cmd->tgt_dev, cmd);
+ cmd->sent_for_exec = 1;
+ }
+ }
+
+ cmd->done = 1;
+ smp_mb(); /* to sync with scst_abort_cmd() */
+
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)))
+ scst_xmit_process_aborted_cmd(cmd);
+ else if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION))
+ scst_store_sense(cmd);
+
+ if (unlikely(test_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags))) {
+ TRACE_MGMT_DBG("Flag NO_RESP set for cmd %p (tag %llu),"
+ " skipping",
+ cmd, (long long unsigned int)cmd->tag);
+ cmd->state = SCST_CMD_STATE_FINISHED;
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+ }
+
+ cmd->state = SCST_CMD_STATE_XMIT_RESP;
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+
+out:
+ return res;
+}
+
+static int scst_xmit_response(struct scst_cmd *cmd)
+{
+ struct scst_tgt_template *tgtt = cmd->tgtt;
+ int res, rc;
+
+ EXTRACHECKS_BUG_ON(cmd->internal);
+
+ if (unlikely(!tgtt->xmit_response_atomic &&
+ scst_cmd_atomic(cmd))) {
+ /*
+ * This shouldn't happen, thanks to the SCST_TGT_DEV_AFTER_*
+ * optimization.
+ */
+ TRACE_DBG("Target driver %s xmit_response() needs thread "
+ "context, rescheduling", tgtt->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+
+ while (1) {
+ int finished_cmds = atomic_read(&cmd->tgt->finished_cmds);
+
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ cmd->state = SCST_CMD_STATE_XMIT_WAIT;
+
+ TRACE_DBG("Calling xmit_response(%p)", cmd);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ if (trace_flag & TRACE_SND_BOT) {
+ int i;
+ struct scatterlist *sg;
+ if (cmd->tgt_sg != NULL)
+ sg = cmd->tgt_sg;
+ else
+ sg = cmd->sg;
+ if (sg != NULL) {
+ TRACE(TRACE_SND_BOT, "Xmitting data for cmd %p "
+ "(sg_cnt %d, sg %p, sg[0].page %p)",
+ cmd, cmd->tgt_sg_cnt, sg,
+ (void *)sg_page(&sg[0]));
+ for (i = 0; i < cmd->tgt_sg_cnt; ++i) {
+ PRINT_BUFF_FLAG(TRACE_SND_BOT,
+ "Xmitting sg", sg_virt(&sg[i]),
+ sg[i].length);
+ }
+ }
+ }
+#endif
+
+ if (tgtt->on_hw_pending_cmd_timeout != NULL) {
+ struct scst_session *sess = cmd->sess;
+ cmd->hw_pending_start = jiffies;
+ cmd->cmd_hw_pending = 1;
+ if (!test_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags)) {
+ TRACE_DBG("Sched HW pending work for sess %p "
+ "(max time %d)", sess,
+ tgtt->max_hw_pending_time);
+ set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED,
+ &sess->sess_aflags);
+ schedule_delayed_work(&sess->hw_pending_work,
+ tgtt->max_hw_pending_time * HZ);
+ }
+ }
+
+ scst_set_cur_start(cmd);
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+ if (((scst_random() % 100) == 77))
+ rc = SCST_TGT_RES_QUEUE_FULL;
+ else
+#endif
+ rc = tgtt->xmit_response(cmd);
+ TRACE_DBG("xmit_response() returned %d", rc);
+
+ if (likely(rc == SCST_TGT_RES_SUCCESS))
+ goto out;
+
+ scst_set_xmit_time(cmd);
+
+ cmd->cmd_hw_pending = 0;
+
+ /* Restore the previous state */
+ cmd->state = SCST_CMD_STATE_XMIT_RESP;
+
+ switch (rc) {
+ case SCST_TGT_RES_QUEUE_FULL:
+ if (scst_queue_retry_cmd(cmd, finished_cmds) == 0)
+ break;
+ else
+ continue;
+
+ case SCST_TGT_RES_NEED_THREAD_CTX:
+ TRACE_DBG("Target driver %s xmit_response() "
+ "requested thread context, rescheduling",
+ tgtt->name);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ break;
+
+ default:
+ goto out_error;
+ }
+ break;
+ }
+
+out:
+ /* Caution: cmd can be already dead here */
+ return res;
+
+out_error:
+ if (rc == SCST_TGT_RES_FATAL_ERROR) {
+ PRINT_ERROR("Target driver %s xmit_response() returned "
+ "fatal error", tgtt->name);
+ } else {
+ PRINT_ERROR("Target driver %s xmit_response() returned "
+ "invalid value %d", tgtt->name, rc);
+ }
+ scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+ cmd->state = SCST_CMD_STATE_FINISHED;
+ res = SCST_CMD_STATE_RES_CONT_SAME;
+ goto out;
+}
+
+/**
+ * scst_tgt_cmd_done() - the command's processing done
+ * @cmd: SCST command
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ * Notifies SCST that the driver sent the response and the command
+ * can be freed now. Don't forget to set the delivery status, if it
+ * isn't success, using scst_set_delivery_status() before calling
+ * this function. The pref_context argument sets the preferred command
+ * execution context (see SCST_CONTEXT_* constants for details).
+ */
+void scst_tgt_cmd_done(struct scst_cmd *cmd,
+ enum scst_exec_context pref_context)
+{
+
+ BUG_ON(cmd->state != SCST_CMD_STATE_XMIT_WAIT);
+
+ scst_set_xmit_time(cmd);
+
+ cmd->cmd_hw_pending = 0;
+
+ cmd->state = SCST_CMD_STATE_FINISHED;
+ scst_process_redirect_cmd(cmd, pref_context, 1);
+ return;
+}
+EXPORT_SYMBOL(scst_tgt_cmd_done);
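+
+/*
+ * Usage sketch for the API documented above (illustrative only;
+ * my_tgt_response_sent() is a hypothetical target driver helper and
+ * SCST_CMD_DELIVERY_FAILED is assumed to be one of the delivery status
+ * values settable via scst_set_delivery_status()):
+ *
+ *	static void my_tgt_response_sent(struct scst_cmd *cmd, bool ok)
+ *	{
+ *		if (!ok)
+ *			scst_set_delivery_status(cmd,
+ *				SCST_CMD_DELIVERY_FAILED);
+ *		scst_tgt_cmd_done(cmd, SCST_CONTEXT_SAME);
+ *	}
+ */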
+
+static int scst_finish_cmd(struct scst_cmd *cmd)
+{
+ int res;
+ struct scst_session *sess = cmd->sess;
+
+ scst_update_lat_stats(cmd);
+
+ if (unlikely(cmd->delivery_status != SCST_CMD_DELIVERY_SUCCESS)) {
+ if ((cmd->tgt_dev != NULL) &&
+ scst_is_ua_sense(cmd->sense, cmd->sense_valid_len)) {
+ /* This UA delivery failed, so we need to requeue it */
+ if (scst_cmd_atomic(cmd) &&
+ scst_is_ua_global(cmd->sense, cmd->sense_valid_len)) {
+ TRACE_MGMT_DBG("Requeuing of global UA for "
+ "failed cmd %p needs a thread", cmd);
+ res = SCST_CMD_STATE_RES_NEED_THREAD;
+ goto out;
+ }
+ scst_requeue_ua(cmd);
+ }
+ }
+
+ atomic_dec(&sess->sess_cmd_count);
+
+ spin_lock_irq(&sess->sess_list_lock);
+ list_del(&cmd->sess_cmd_list_entry);
+ spin_unlock_irq(&sess->sess_list_lock);
+
+ cmd->finished = 1;
+ smp_mb(); /* to sync with scst_abort_cmd() */
+
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+ TRACE_MGMT_DBG("Aborted cmd %p finished (cmd_ref %d, "
+ "scst_cmd_count %d)", cmd, atomic_read(&cmd->cmd_ref),
+ atomic_read(&scst_cmd_count));
+
+ scst_finish_cmd_mgmt(cmd);
+ }
+
+ __scst_cmd_put(cmd);
+
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+
+out:
+ return res;
+}
+
+/*
+ * No locks, but it must be externally serialized (see comment for
+ * scst_cmd_init_done() in scst.h)
+ */
+static void scst_cmd_set_sn(struct scst_cmd *cmd)
+{
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ unsigned long flags;
+
+ if (scst_is_implicit_hq(cmd)) {
+ TRACE_SN("Implicit HQ cmd %p", cmd);
+ cmd->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+ }
+
+ EXTRACHECKS_BUG_ON(cmd->sn_set || cmd->hq_cmd_inced);
+
+ /* Optimized for lockless fast path */
+
+ scst_check_debug_sn(cmd);
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+ cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+#endif
+
+ if (cmd->dev->queue_alg == SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER) {
+ /*
+ * Not the best way, but good enough until it becomes
+ * possible to specify the queue type during pass-through
+ * command submission.
+ */
+ cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+ }
+
+ switch (cmd->queue_type) {
+ case SCST_CMD_QUEUE_SIMPLE:
+ case SCST_CMD_QUEUE_UNTAGGED:
+#ifdef CONFIG_SCST_ORDERED_READS
+ if (scst_cmd_is_expected_set(cmd)) {
+ if ((cmd->expected_data_direction == SCST_DATA_READ) &&
+ (atomic_read(&cmd->dev->write_cmd_count) == 0))
+ goto ordered;
+ } else
+ goto ordered;
+#endif
+ if (likely(tgt_dev->num_free_sn_slots >= 0)) {
+ /*
+ * atomic_inc_return() implies memory barrier to sync
+ * with scst_inc_expected_sn()
+ */
+ if (atomic_inc_return(tgt_dev->cur_sn_slot) == 1) {
+ tgt_dev->curr_sn++;
+ TRACE_SN("Incremented curr_sn %d",
+ tgt_dev->curr_sn);
+ }
+ cmd->sn_slot = tgt_dev->cur_sn_slot;
+ cmd->sn = tgt_dev->curr_sn;
+
+ tgt_dev->prev_cmd_ordered = 0;
+ } else {
+ TRACE(TRACE_MINOR, "***WARNING*** Not enough SN slots "
+ "%zd", ARRAY_SIZE(tgt_dev->sn_slots));
+ goto ordered;
+ }
+ break;
+
+ case SCST_CMD_QUEUE_ORDERED:
+ TRACE_SN("ORDERED cmd %p (op %x)", cmd, cmd->cdb[0]);
+ordered:
+ if (!tgt_dev->prev_cmd_ordered) {
+ spin_lock_irqsave(&tgt_dev->sn_lock, flags);
+ if (tgt_dev->num_free_sn_slots >= 0) {
+ tgt_dev->num_free_sn_slots--;
+ if (tgt_dev->num_free_sn_slots >= 0) {
+ int i = 0;
+ /* Commands can finish in any order, so
+ * we don't know which slot is empty.
+ */
+ while (1) {
+ tgt_dev->cur_sn_slot++;
+ if (tgt_dev->cur_sn_slot ==
+ tgt_dev->sn_slots + ARRAY_SIZE(tgt_dev->sn_slots))
+ tgt_dev->cur_sn_slot = tgt_dev->sn_slots;
+
+ if (atomic_read(tgt_dev->cur_sn_slot) == 0)
+ break;
+
+ i++;
+ BUG_ON(i == ARRAY_SIZE(tgt_dev->sn_slots));
+ }
+ TRACE_SN("New cur SN slot %zd",
+ tgt_dev->cur_sn_slot -
+ tgt_dev->sn_slots);
+ }
+ }
+ spin_unlock_irqrestore(&tgt_dev->sn_lock, flags);
+ }
+ tgt_dev->prev_cmd_ordered = 1;
+ tgt_dev->curr_sn++;
+ cmd->sn = tgt_dev->curr_sn;
+ break;
+
+ case SCST_CMD_QUEUE_HEAD_OF_QUEUE:
+ TRACE_SN("HQ cmd %p (op %x)", cmd, cmd->cdb[0]);
+ spin_lock_irqsave(&tgt_dev->sn_lock, flags);
+ tgt_dev->hq_cmd_count++;
+ spin_unlock_irqrestore(&tgt_dev->sn_lock, flags);
+ cmd->hq_cmd_inced = 1;
+ goto out;
+
+ default:
+ BUG();
+ }
+
+ TRACE_SN("cmd(%p)->sn: %d (tgt_dev %p, *cur_sn_slot %d, "
+ "num_free_sn_slots %d, prev_cmd_ordered %ld, "
+ "cur_sn_slot %zd)", cmd, cmd->sn, tgt_dev,
+ atomic_read(tgt_dev->cur_sn_slot),
+ tgt_dev->num_free_sn_slots, tgt_dev->prev_cmd_ordered,
+ tgt_dev->cur_sn_slot-tgt_dev->sn_slots);
+
+ cmd->sn_set = 1;
+
+out:
+ return;
+}
+
+/*
+ * Returns 0 on success, > 0 when we need to wait for unblock,
+ * < 0 if there is no device (lun) or device type handler.
+ *
+ * No locks, but might be on IRQ, protection is done by the
+ * suspended activity.
+ */
+static int scst_translate_lun(struct scst_cmd *cmd)
+{
+ struct scst_tgt_dev *tgt_dev = NULL;
+ int res;
+
+ /* See comment about smp_mb() in scst_suspend_activity() */
+ __scst_get(1);
+
+ if (likely(!test_bit(SCST_FLAG_SUSPENDED, &scst_flags))) {
+ struct list_head *sess_tgt_dev_list_head =
+ &cmd->sess->sess_tgt_dev_list_hash[HASH_VAL(cmd->lun)];
+ TRACE_DBG("Finding tgt_dev for cmd %p (lun %lld)", cmd,
+ (long long unsigned int)cmd->lun);
+ res = -1;
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->lun == cmd->lun) {
+ TRACE_DBG("tgt_dev %p found", tgt_dev);
+
+ if (unlikely(tgt_dev->dev->handler ==
+ &scst_null_devtype)) {
+ PRINT_INFO("Dev handler for device "
+ "%lld is NULL, the device will not "
+ "be visible remotely",
+ (long long unsigned int)cmd->lun);
+ break;
+ }
+
+ cmd->cmd_threads = tgt_dev->active_cmd_threads;
+ cmd->tgt_dev = tgt_dev;
+ cmd->dev = tgt_dev->dev;
+
+ res = 0;
+ break;
+ }
+ }
+ if (res != 0) {
+ TRACE(TRACE_MINOR,
+ "tgt_dev for LUN %lld not found, command to "
+ "unexisting LU?",
+ (long long unsigned int)cmd->lun);
+ __scst_put();
+ }
+ } else {
+ TRACE_MGMT_DBG("%s", "FLAG SUSPENDED set, skipping");
+ __scst_put();
+ res = 1;
+ }
+ return res;
+}
+
+/*
+ * No locks, but might be on IRQ.
+ *
+ * Returns 0 on success, > 0 when we need to wait for unblock,
+ * < 0 if there is no device (lun) or device type handler.
+ */
+static int __scst_init_cmd(struct scst_cmd *cmd)
+{
+ int res = 0;
+
+ res = scst_translate_lun(cmd);
+ if (likely(res == 0)) {
+ int cnt;
+ bool failure = false;
+
+ cmd->state = SCST_CMD_STATE_PRE_PARSE;
+
+ cnt = atomic_inc_return(&cmd->tgt_dev->tgt_dev_cmd_count);
+ if (unlikely(cnt > SCST_MAX_TGT_DEV_COMMANDS)) {
+ TRACE(TRACE_FLOW_CONTROL,
+ "Too many pending commands (%d) in "
+ "session, returning BUSY to initiator \"%s\"",
+ cnt, (cmd->sess->initiator_name[0] == '\0') ?
+ "Anonymous" : cmd->sess->initiator_name);
+ failure = true;
+ }
+
+#ifdef CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+ cnt = atomic_inc_return(&cmd->dev->dev_cmd_count);
+ if (unlikely(cnt > SCST_MAX_DEV_COMMANDS)) {
+ if (!failure) {
+ TRACE(TRACE_FLOW_CONTROL,
+ "Too many pending device "
+ "commands (%d), returning BUSY to "
+ "initiator \"%s\"", cnt,
+ (cmd->sess->initiator_name[0] == '\0') ?
+ "Anonymous" :
+ cmd->sess->initiator_name);
+ failure = true;
+ }
+ }
+#endif
+
+#ifdef CONFIG_SCST_ORDERED_READS
+ /* If the expected values are not set, the expected direction is UNKNOWN */
+ if (cmd->expected_data_direction & SCST_DATA_WRITE)
+ atomic_inc(&cmd->dev->write_cmd_count);
+#endif
+
+ if (unlikely(failure))
+ goto out_busy;
+
+ if (!cmd->set_sn_on_restart_cmd)
+ scst_cmd_set_sn(cmd);
+ } else if (res < 0) {
+ TRACE_DBG("Finishing cmd %p", cmd);
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_lun_not_supported));
+ scst_set_cmd_abnormal_done_state(cmd);
+ } else
+ goto out;
+
+out:
+ return res;
+
+out_busy:
+ scst_set_busy(cmd);
+ scst_set_cmd_abnormal_done_state(cmd);
+ goto out;
+}
+
+/* Called under scst_init_lock and IRQs disabled */
+static void scst_do_job_init(void)
+ __releases(&scst_init_lock)
+ __acquires(&scst_init_lock)
+{
+ struct scst_cmd *cmd;
+ int susp;
+
+restart:
+ /*
+ * There is no need for read barrier here, because we don't care where
+ * this check will be done.
+ */
+ susp = test_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+ if (scst_init_poll_cnt > 0)
+ scst_init_poll_cnt--;
+
+ list_for_each_entry(cmd, &scst_init_cmd_list, cmd_list_entry) {
+ int rc;
+ if (susp && !test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+ continue;
+ if (!test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+ spin_unlock_irq(&scst_init_lock);
+ rc = __scst_init_cmd(cmd);
+ spin_lock_irq(&scst_init_lock);
+ if (rc > 0) {
+ TRACE_MGMT_DBG("%s",
+ "FLAG SUSPENDED set, restarting");
+ goto restart;
+ }
+ } else {
+ TRACE_MGMT_DBG("Aborting not inited cmd %p (tag %llu)",
+ cmd, (long long unsigned int)cmd->tag);
+ scst_set_cmd_abnormal_done_state(cmd);
+ }
+
+ /*
+ * Deleting cmd from the init cmd list after __scst_init_cmd()
+ * is necessary to keep the check in scst_init_cmd() correct
+ * and to preserve the commands' order.
+ *
+ * We don't care about the race where the init cmd list is
+ * empty, one command has just detected that it was not empty
+ * and is therefore being inserted into it, while another
+ * command at the same time sees the init cmd list empty and
+ * goes through directly. It can affect only commands from the
+ * same initiator to the same tgt_dev, and scst_cmd_init_done*()
+ * doesn't guarantee the order in case of such simultaneous
+ * calls anyway.
+ */
+ TRACE_MGMT_DBG("Deleting cmd %p from init cmd list", cmd);
+ smp_wmb(); /* enforce the required order */
+ list_del(&cmd->cmd_list_entry);
+ spin_unlock(&scst_init_lock);
+
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ TRACE_MGMT_DBG("Adding cmd %p to active cmd list", cmd);
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ list_add(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ else
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+
+ spin_lock(&scst_init_lock);
+ goto restart;
+ }
+
+ /* It isn't really needed, but let's keep it */
+ if (susp != test_bit(SCST_FLAG_SUSPENDED, &scst_flags))
+ goto restart;
+ return;
+}
+
+static inline int test_init_cmd_list(void)
+{
+ int res = (!list_empty(&scst_init_cmd_list) &&
+ !test_bit(SCST_FLAG_SUSPENDED, &scst_flags)) ||
+ unlikely(kthread_should_stop()) ||
+ (scst_init_poll_cnt > 0);
+ return res;
+}
+
+int scst_init_thread(void *arg)
+{
+
+ PRINT_INFO("Init thread started, PID %d", current->pid);
+
+ current->flags |= PF_NOFREEZE;
+
+ set_user_nice(current, -10);
+
+ spin_lock_irq(&scst_init_lock);
+ while (!kthread_should_stop()) {
+ wait_queue_t wait;
+ init_waitqueue_entry(&wait, current);
+
+ if (!test_init_cmd_list()) {
+ add_wait_queue_exclusive(&scst_init_cmd_list_waitQ,
+ &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (test_init_cmd_list())
+ break;
+ spin_unlock_irq(&scst_init_lock);
+ schedule();
+ spin_lock_irq(&scst_init_lock);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&scst_init_cmd_list_waitQ, &wait);
+ }
+ scst_do_job_init();
+ }
+ spin_unlock_irq(&scst_init_lock);
+
+ /*
+ * If kthread_should_stop() is true, we are guaranteed to be
+ * on the module unload, so scst_init_cmd_list must be empty.
+ */
+ BUG_ON(!list_empty(&scst_init_cmd_list));
+
+ PRINT_INFO("Init thread PID %d finished", current->pid);
+ return 0;
+}
+
+/**
+ * scst_process_active_cmd() - process active command
+ *
+ * Description:
+ * Main SCST command processing routine. Must be used only by dev handlers.
+ *
+ * The atomic argument is true if the function is called in atomic context.
+ *
+ * Must be called with no locks held.
+ */
+void scst_process_active_cmd(struct scst_cmd *cmd, bool atomic)
+{
+ int res;
+
+ /*
+ * Checkpatch will complain about the use of in_atomic() below. You
+ * can safely ignore this warning since in_atomic() is used here only
+ * for debugging purposes.
+ */
+ EXTRACHECKS_BUG_ON(in_irq() || irqs_disabled());
+ EXTRACHECKS_WARN_ON((in_atomic() || in_interrupt() || irqs_disabled()) &&
+ !atomic);
+
+ cmd->atomic = atomic;
+
+ TRACE_DBG("cmd %p, atomic %d", cmd, atomic);
+
+ do {
+ switch (cmd->state) {
+ case SCST_CMD_STATE_PRE_PARSE:
+ res = scst_pre_parse(cmd);
+ EXTRACHECKS_BUG_ON(res ==
+ SCST_CMD_STATE_RES_NEED_THREAD);
+ break;
+
+ case SCST_CMD_STATE_DEV_PARSE:
+ res = scst_parse_cmd(cmd);
+ break;
+
+ case SCST_CMD_STATE_PREPARE_SPACE:
+ res = scst_prepare_space(cmd);
+ break;
+
+ case SCST_CMD_STATE_PREPROCESSING_DONE:
+ res = scst_preprocessing_done(cmd);
+ break;
+
+ case SCST_CMD_STATE_RDY_TO_XFER:
+ res = scst_rdy_to_xfer(cmd);
+ break;
+
+ case SCST_CMD_STATE_TGT_PRE_EXEC:
+ res = scst_tgt_pre_exec(cmd);
+ break;
+
+ case SCST_CMD_STATE_SEND_FOR_EXEC:
+ if (tm_dbg_check_cmd(cmd) != 0) {
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ TRACE_MGMT_DBG("Skipping cmd %p (tag %llu), "
+ "because of TM DBG delay", cmd,
+ (long long unsigned int)cmd->tag);
+ break;
+ }
+ res = scst_send_for_exec(&cmd);
+ /*
+ * !! At this point cmd, sess & tgt_dev can already be
+ * freed !!
+ */
+ break;
+
+ case SCST_CMD_STATE_LOCAL_EXEC:
+ res = scst_local_exec(cmd);
+ /*
+ * !! At this point cmd, sess & tgt_dev can already be
+ * freed !!
+ */
+ break;
+
+ case SCST_CMD_STATE_REAL_EXEC:
+ res = scst_real_exec(cmd);
+ /*
+ * !! At this point cmd, sess & tgt_dev can already be
+ * freed !!
+ */
+ break;
+
+ case SCST_CMD_STATE_PRE_DEV_DONE:
+ res = scst_pre_dev_done(cmd);
+ EXTRACHECKS_BUG_ON(res ==
+ SCST_CMD_STATE_RES_NEED_THREAD);
+ break;
+
+ case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+ res = scst_mode_select_checks(cmd);
+ break;
+
+ case SCST_CMD_STATE_DEV_DONE:
+ res = scst_dev_done(cmd);
+ break;
+
+ case SCST_CMD_STATE_PRE_XMIT_RESP:
+ res = scst_pre_xmit_response(cmd);
+ EXTRACHECKS_BUG_ON(res ==
+ SCST_CMD_STATE_RES_NEED_THREAD);
+ break;
+
+ case SCST_CMD_STATE_XMIT_RESP:
+ res = scst_xmit_response(cmd);
+ break;
+
+ case SCST_CMD_STATE_FINISHED:
+ res = scst_finish_cmd(cmd);
+ break;
+
+ case SCST_CMD_STATE_FINISHED_INTERNAL:
+ res = scst_finish_internal_cmd(cmd);
+ EXTRACHECKS_BUG_ON(res ==
+ SCST_CMD_STATE_RES_NEED_THREAD);
+ break;
+
+ default:
+ PRINT_CRIT_ERROR("cmd (%p) in state %d, but shouldn't "
+ "be", cmd, cmd->state);
+ BUG();
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ break;
+ }
+ } while (res == SCST_CMD_STATE_RES_CONT_SAME);
+
+ if (res == SCST_CMD_STATE_RES_CONT_NEXT) {
+ /* None */
+ } else if (res == SCST_CMD_STATE_RES_NEED_THREAD) {
+ spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+#ifdef CONFIG_SCST_EXTRACHECKS
+ switch (cmd->state) {
+ case SCST_CMD_STATE_DEV_PARSE:
+ case SCST_CMD_STATE_PREPARE_SPACE:
+ case SCST_CMD_STATE_RDY_TO_XFER:
+ case SCST_CMD_STATE_TGT_PRE_EXEC:
+ case SCST_CMD_STATE_SEND_FOR_EXEC:
+ case SCST_CMD_STATE_LOCAL_EXEC:
+ case SCST_CMD_STATE_REAL_EXEC:
+ case SCST_CMD_STATE_DEV_DONE:
+ case SCST_CMD_STATE_XMIT_RESP:
+#endif
+ TRACE_DBG("Adding cmd %p to head of active cmd list",
+ cmd);
+ list_add(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+#ifdef CONFIG_SCST_EXTRACHECKS
+ break;
+ default:
+ PRINT_CRIT_ERROR("cmd %p is in invalid state %d)", cmd,
+ cmd->state);
+ spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+ BUG();
+ spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+ break;
+ }
+#endif
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+ } else
+ BUG();
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_process_active_cmd);
+
+/**
+ * scst_post_parse_process_active_cmd() - process command after parse
+ *
+ * SCST commands processing routine, which should be called by dev handler
+ * after its parse() callback returned SCST_CMD_STATE_STOP. Arguments are
+ * the same as for scst_process_active_cmd().
+ */
+void scst_post_parse_process_active_cmd(struct scst_cmd *cmd, bool atomic)
+{
+ scst_set_parse_time(cmd);
+ scst_process_active_cmd(cmd, atomic);
+}
+EXPORT_SYMBOL_GPL(scst_post_parse_process_active_cmd);
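+
+/*
+ * Usage sketch for the call above (illustrative only; my_parse_work_fn()
+ * and my_cmd_from_work() are hypothetical dev handler code): a handler
+ * whose parse() callback returned SCST_CMD_STATE_STOP finishes parsing
+ * asynchronously and then restarts the command's processing:
+ *
+ *	static void my_parse_work_fn(struct work_struct *work)
+ *	{
+ *		struct scst_cmd *cmd = my_cmd_from_work(work);
+ *
+ *		// finish filling the parsed fields of cmd here, then:
+ *		scst_post_parse_process_active_cmd(cmd, false);
+ *	}
+ */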
+
+/* Called under cmd_list_lock and IRQs disabled */
+static void scst_do_job_active(struct list_head *cmd_list,
+ spinlock_t *cmd_list_lock, bool atomic)
+ __releases(cmd_list_lock)
+ __acquires(cmd_list_lock)
+{
+
+ while (!list_empty(cmd_list)) {
+ struct scst_cmd *cmd = list_entry(cmd_list->next, typeof(*cmd),
+ cmd_list_entry);
+ TRACE_DBG("Deleting cmd %p from active cmd list", cmd);
+ list_del(&cmd->cmd_list_entry);
+ spin_unlock_irq(cmd_list_lock);
+ scst_process_active_cmd(cmd, atomic);
+ spin_lock_irq(cmd_list_lock);
+ }
+ return;
+}
+
+static inline int test_cmd_threads(struct scst_cmd_threads *p_cmd_threads)
+{
+ int res = !list_empty(&p_cmd_threads->active_cmd_list) ||
+ unlikely(kthread_should_stop()) ||
+ tm_dbg_is_release();
+ return res;
+}
+
+int scst_cmd_thread(void *arg)
+{
+ struct scst_cmd_threads *p_cmd_threads = (struct scst_cmd_threads *)arg;
+ static DEFINE_MUTEX(io_context_mutex);
+
+ PRINT_INFO("Processing thread %s (PID %d) started", current->comm,
+ current->pid);
+
+#if 0
+ set_user_nice(current, 10);
+#endif
+ current->flags |= PF_NOFREEZE;
+
+ mutex_lock(&io_context_mutex);
+
+ WARN_ON(current->io_context);
+
+ if (p_cmd_threads != &scst_main_cmd_threads) {
+ if (p_cmd_threads->io_context == NULL) {
+ p_cmd_threads->io_context = get_io_context(GFP_KERNEL, -1);
+ TRACE_MGMT_DBG("Alloced new IO context %p "
+ "(p_cmd_threads %p)",
+ p_cmd_threads->io_context,
+ p_cmd_threads);
+ /* It's ref counted via threads */
+ put_io_context(p_cmd_threads->io_context);
+ } else {
+ put_io_context(current->io_context);
+ current->io_context = ioc_task_link(p_cmd_threads->io_context);
+ TRACE_MGMT_DBG("Linked IO context %p "
+ "(p_cmd_threads %p)", p_cmd_threads->io_context,
+ p_cmd_threads);
+ }
+ }
+
+ mutex_unlock(&io_context_mutex);
+
+ spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+ while (!kthread_should_stop()) {
+ wait_queue_t wait;
+ init_waitqueue_entry(&wait, current);
+
+ if (!test_cmd_threads(p_cmd_threads)) {
+ add_wait_queue_exclusive_head(
+ &p_cmd_threads->cmd_list_waitQ,
+ &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (test_cmd_threads(p_cmd_threads))
+ break;
+ spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+ schedule();
+ spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&p_cmd_threads->cmd_list_waitQ, &wait);
+ }
+
+ if (tm_dbg_is_release()) {
+ spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+ tm_dbg_check_released_cmds();
+ spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+ }
+
+ scst_do_job_active(&p_cmd_threads->active_cmd_list,
+ &p_cmd_threads->cmd_list_lock, false);
+ }
+ spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+
+ EXTRACHECKS_BUG_ON((p_cmd_threads->nr_threads == 1) &&
+ !list_empty(&p_cmd_threads->active_cmd_list));
+
+ if ((p_cmd_threads->nr_threads == 1) &&
+ (p_cmd_threads != &scst_main_cmd_threads))
+ p_cmd_threads->io_context = NULL;
+
+ PRINT_INFO("Processing thread %s (PID %d) finished", current->comm,
+ current->pid);
+ return 0;
+}
+
+void scst_cmd_tasklet(long p)
+{
+ struct scst_tasklet *t = (struct scst_tasklet *)p;
+
+ spin_lock_irq(&t->tasklet_lock);
+ scst_do_job_active(&t->tasklet_cmd_list, &t->tasklet_lock, true);
+ spin_unlock_irq(&t->tasklet_lock);
+ return;
+}
+
+/*
+ * Returns 0 on success, < 0 if there is no device handler, or > 0 if
+ * SCST_FLAG_SUSPENDED is set and SCST_FLAG_SUSPENDING is not.
+ * No locks, protection is done by the suspended activity.
+ */
+static int scst_mgmt_translate_lun(struct scst_mgmt_cmd *mcmd)
+{
+ struct scst_tgt_dev *tgt_dev = NULL;
+ struct list_head *sess_tgt_dev_list_head;
+ int res = -1;
+
+ TRACE_DBG("Finding tgt_dev for mgmt cmd %p (lun %lld)", mcmd,
+ (long long unsigned int)mcmd->lun);
+
+ /* See comment about smp_mb() in scst_suspend_activity() */
+ __scst_get(1);
+
+ if (unlikely(test_bit(SCST_FLAG_SUSPENDED, &scst_flags) &&
+ !test_bit(SCST_FLAG_SUSPENDING, &scst_flags))) {
+ TRACE_MGMT_DBG("%s", "FLAG SUSPENDED set, skipping");
+ __scst_put();
+ res = 1;
+ goto out;
+ }
+
+ sess_tgt_dev_list_head =
+ &mcmd->sess->sess_tgt_dev_list_hash[HASH_VAL(mcmd->lun)];
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->lun == mcmd->lun) {
+ TRACE_DBG("tgt_dev %p found", tgt_dev);
+ mcmd->mcmd_tgt_dev = tgt_dev;
+ res = 0;
+ break;
+ }
+ }
+ if (mcmd->mcmd_tgt_dev == NULL)
+ __scst_put();
+
+out:
+ return res;
+}
+
+/* No locks */
+void scst_done_cmd_mgmt(struct scst_cmd *cmd)
+{
+ struct scst_mgmt_cmd_stub *mstb;
+ bool wake = 0;
+ unsigned long flags;
+
+ TRACE_MGMT_DBG("cmd %p done (tag %llu)",
+ cmd, (long long unsigned int)cmd->tag);
+
+ spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+ list_for_each_entry(mstb, &cmd->mgmt_cmd_list,
+ cmd_mgmt_cmd_list_entry) {
+ struct scst_mgmt_cmd *mcmd;
+
+ if (!mstb->done_counted)
+ continue;
+
+ mcmd = mstb->mcmd;
+ TRACE_MGMT_DBG("mcmd %p, mcmd->cmd_done_wait_count %d",
+ mcmd, mcmd->cmd_done_wait_count);
+
+ mcmd->cmd_done_wait_count--;
+ if (mcmd->cmd_done_wait_count > 0) {
+ TRACE_MGMT_DBG("cmd_done_wait_count(%d) not 0, "
+ "skipping", mcmd->cmd_done_wait_count);
+ continue;
+ }
+
+ if (mcmd->completed) {
+ BUG_ON(mcmd->affected_cmds_done_called);
+ mcmd->completed = 0;
+ mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+ TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd "
+ "list", mcmd);
+ list_add_tail(&mcmd->mgmt_cmd_list_entry,
+ &scst_active_mgmt_cmd_list);
+ wake = 1;
+ }
+ }
+
+ spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+ if (wake)
+ wake_up(&scst_mgmt_cmd_list_waitQ);
+ return;
+}
+
+/* Called under scst_mcmd_lock and IRQs disabled */
+static int __scst_dec_finish_wait_count(struct scst_mgmt_cmd *mcmd, bool *wake)
+{
+
+ mcmd->cmd_finish_wait_count--;
+ if (mcmd->cmd_finish_wait_count > 0) {
+ TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, "
+ "skipping", mcmd->cmd_finish_wait_count);
+ goto out;
+ }
+
+ if (mcmd->completed) {
+ mcmd->state = SCST_MCMD_STATE_DONE;
+ TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd "
+ "list", mcmd);
+ list_add_tail(&mcmd->mgmt_cmd_list_entry,
+ &scst_active_mgmt_cmd_list);
+ *wake = true;
+ }
+
+out:
+ return mcmd->cmd_finish_wait_count;
+}
+
+/**
+ * scst_prepare_async_mcmd() - prepare async management command
+ *
+ * Notifies SCST that the management command is going to be async, i.e.
+ * it will be completed in another context.
+ *
+ * No SCST locks supposed to be held on entrance.
+ */
+void scst_prepare_async_mcmd(struct scst_mgmt_cmd *mcmd)
+{
+ unsigned long flags;
+
+ TRACE_MGMT_DBG("Preparing mcmd %p for async execution "
+ "(cmd_finish_wait_count %d)", mcmd,
+ mcmd->cmd_finish_wait_count);
+
+ spin_lock_irqsave(&scst_mcmd_lock, flags);
+ mcmd->cmd_finish_wait_count++;
+ spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_prepare_async_mcmd);
+
+/**
+ * scst_async_mcmd_completed() - async management command completed
+ *
+ * Notifies SCST that an async management command, prepared by
+ * scst_prepare_async_mcmd(), has completed.
+ *
+ * No SCST locks supposed to be held on entrance.
+ */
+void scst_async_mcmd_completed(struct scst_mgmt_cmd *mcmd, int status)
+{
+ unsigned long flags;
+ bool wake = false;
+
+ TRACE_MGMT_DBG("Async mcmd %p completed (status %d)", mcmd, status);
+
+ spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+ if (status != SCST_MGMT_STATUS_SUCCESS)
+ mcmd->status = status;
+
+ __scst_dec_finish_wait_count(mcmd, &wake);
+
+ spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+ if (wake)
+ wake_up(&scst_mgmt_cmd_list_waitQ);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_async_mcmd_completed);
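+
+/*
+ * Usage sketch for the two calls above (illustrative only;
+ * my_start_async_abort() and its completion path are hypothetical
+ * driver code). In the context that decides to complete the TM
+ * function asynchronously:
+ *
+ *	scst_prepare_async_mcmd(mcmd);
+ *	my_start_async_abort(mcmd);
+ *
+ * and later, from the completion context:
+ *
+ *	scst_async_mcmd_completed(mcmd, SCST_MGMT_STATUS_SUCCESS);
+ */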
+
+/* No locks */
+static void scst_finish_cmd_mgmt(struct scst_cmd *cmd)
+{
+ struct scst_mgmt_cmd_stub *mstb, *t;
+ bool wake = false;
+ unsigned long flags;
+
+ TRACE_MGMT_DBG("cmd %p finished (tag %llu)",
+ cmd, (long long unsigned int)cmd->tag);
+
+ spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+ list_for_each_entry_safe(mstb, t, &cmd->mgmt_cmd_list,
+ cmd_mgmt_cmd_list_entry) {
+ struct scst_mgmt_cmd *mcmd = mstb->mcmd;
+
+ TRACE_MGMT_DBG("mcmd %p, mcmd->cmd_finish_wait_count %d",
+ mcmd, mcmd->cmd_finish_wait_count);
+
+ list_del(&mstb->cmd_mgmt_cmd_list_entry);
+ mempool_free(mstb, scst_mgmt_stub_mempool);
+
+ if (cmd->completed)
+ mcmd->completed_cmd_count++;
+
+ if (__scst_dec_finish_wait_count(mcmd, &wake) > 0) {
+ TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, "
+ "skipping", mcmd->cmd_finish_wait_count);
+ continue;
+ }
+ }
+
+ spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+ if (wake)
+ wake_up(&scst_mgmt_cmd_list_waitQ);
+ return;
+}
+
+static int scst_call_dev_task_mgmt_fn(struct scst_mgmt_cmd *mcmd,
+ struct scst_tgt_dev *tgt_dev, int set_status)
+{
+ int res = SCST_DEV_TM_NOT_COMPLETED;
+ struct scst_dev_type *h = tgt_dev->dev->handler;
+
+ if (h->task_mgmt_fn) {
+ TRACE_MGMT_DBG("Calling dev handler %s task_mgmt_fn(fn=%d)",
+ h->name, mcmd->fn);
+ EXTRACHECKS_BUG_ON(in_irq() || irqs_disabled());
+ res = h->task_mgmt_fn(mcmd, tgt_dev);
+ TRACE_MGMT_DBG("Dev handler %s task_mgmt_fn() returned %d",
+ h->name, res);
+ if (set_status && (res != SCST_DEV_TM_NOT_COMPLETED))
+ mcmd->status = res;
+ }
+ return res;
+}
+
+static inline int scst_is_strict_mgmt_fn(int mgmt_fn)
+{
+ switch (mgmt_fn) {
+#ifdef CONFIG_SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING
+ case SCST_ABORT_TASK:
+#endif
+#if 0
+ case SCST_ABORT_TASK_SET:
+ case SCST_CLEAR_TASK_SET:
+#endif
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/* Might be called under sess_list_lock and IRQ off + BHs also off */
+void scst_abort_cmd(struct scst_cmd *cmd, struct scst_mgmt_cmd *mcmd,
+ int other_ini, int call_dev_task_mgmt_fn)
+{
+ unsigned long flags;
+ static DEFINE_SPINLOCK(other_ini_lock);
+
+ TRACE(TRACE_MGMT, "Aborting cmd %p (tag %llu, op %x)",
+ cmd, (long long unsigned int)cmd->tag, cmd->cdb[0]);
+
+ /* To protect from concurrent aborts */
+ spin_lock_irqsave(&other_ini_lock, flags);
+
+ if (other_ini) {
+ struct scst_device *dev = NULL;
+
+ /* Might be necessary if command aborted several times */
+ if (!test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+ set_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+
+ /* Necessary for scst_xmit_process_aborted_cmd */
+ if (cmd->dev != NULL)
+ dev = cmd->dev;
+ else if ((mcmd != NULL) && (mcmd->mcmd_tgt_dev != NULL))
+ dev = mcmd->mcmd_tgt_dev->dev;
+
+ if (dev != NULL) {
+ if (dev->tas)
+ set_bit(SCST_CMD_DEVICE_TAS, &cmd->cmd_flags);
+ } else
+ PRINT_WARNING("Abort cmd %p from other initiator, but "
+ "neither cmd, nor mcmd %p have tgt_dev set, so "
+ "TAS information can be lost", cmd, mcmd);
+ } else {
+ /* Might be necessary if command aborted several times */
+ clear_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+ }
+
+ set_bit(SCST_CMD_ABORTED, &cmd->cmd_flags);
+
+ spin_unlock_irqrestore(&other_ini_lock, flags);
+
+ /*
+ * To sync with cmd->finished/done set in
+ * scst_finish_cmd()/scst_pre_xmit_response()
+ */
+ smp_mb__after_set_bit();
+
+ if (cmd->tgt_dev == NULL) {
+ spin_lock_irqsave(&scst_init_lock, flags);
+ scst_init_poll_cnt++;
+ spin_unlock_irqrestore(&scst_init_lock, flags);
+ wake_up(&scst_init_cmd_list_waitQ);
+ }
+
+ if (call_dev_task_mgmt_fn && (cmd->tgt_dev != NULL)) {
+ EXTRACHECKS_BUG_ON(irqs_disabled());
+ scst_call_dev_task_mgmt_fn(mcmd, cmd->tgt_dev, 1);
+ }
+
+ spin_lock_irqsave(&scst_mcmd_lock, flags);
+ if ((mcmd != NULL) && !cmd->finished) {
+ struct scst_mgmt_cmd_stub *mstb;
+
+ mstb = mempool_alloc(scst_mgmt_stub_mempool, GFP_ATOMIC);
+ if (mstb == NULL) {
+ PRINT_CRIT_ERROR("Allocation of management command "
+ "stub failed (mcmd %p, cmd %p)", mcmd, cmd);
+ goto unlock;
+ }
+ memset(mstb, 0, sizeof(*mstb));
+
+ mstb->mcmd = mcmd;
+
+ /*
+ * cmd can't die here or sess_list_lock already taken and
+ * cmd is in the sess list
+ */
+ list_add_tail(&mstb->cmd_mgmt_cmd_list_entry,
+ &cmd->mgmt_cmd_list);
+
+ /*
+ * Delay the response until the command finishes in order to
+ * guarantee that "no further responses from the task are sent
+ * to the SCSI initiator port" after the response from the TM
+ * function is sent (SAM). Plus, we must wait here to be sure
+ * that we won't receive double commands with the same tag.
+ * Moreover, if we don't wait here, data corruption becomes
+ * possible: a command aborted and reported as completed could
+ * actually get executed *after* new commands sent after this
+ * TM command completed.
+ */
+ TRACE_MGMT_DBG("cmd %p (tag %llu, sn %u) being "
+ "executed/xmitted (state %d, op %x, proc time %ld "
+ "sec., timeout %d sec.), deferring ABORT...", cmd,
+ (long long unsigned int)cmd->tag, cmd->sn, cmd->state,
+ cmd->cdb[0], (long)(jiffies - cmd->start_time) / HZ,
+ cmd->timeout / HZ);
+
+ mcmd->cmd_finish_wait_count++;
+
+ if (cmd->sent_for_exec && !cmd->done) {
+ TRACE_MGMT_DBG("cmd %p (tag %llu) is being executed "
+ "and not done yet", cmd,
+ (long long unsigned int)cmd->tag);
+ mstb->done_counted = 1;
+ mcmd->cmd_done_wait_count++;
+ }
+ }
+unlock:
+ spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+ tm_dbg_release_cmd(cmd);
+ return;
+}
+
+/* No locks */
+static int scst_set_mcmd_next_state(struct scst_mgmt_cmd *mcmd)
+{
+ int res;
+
+ spin_lock_irq(&scst_mcmd_lock);
+
+ if (mcmd->cmd_finish_wait_count == 0) {
+ if (!mcmd->affected_cmds_done_called)
+ mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+ else
+ mcmd->state = SCST_MCMD_STATE_DONE;
+ res = 0;
+ } else if ((mcmd->cmd_done_wait_count == 0) &&
+ (!mcmd->affected_cmds_done_called)) {
+ mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+ res = 0;
+ goto out_unlock;
+ } else {
+ TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, preparing to "
+ "wait", mcmd->cmd_finish_wait_count);
+ mcmd->state = SCST_MCMD_STATE_EXECUTING;
+ res = -1;
+ }
+
+ mcmd->completed = 1;
+
+out_unlock:
+ spin_unlock_irq(&scst_mcmd_lock);
+ return res;
+}
+
+static bool __scst_check_unblock_aborted_cmd(struct scst_cmd *cmd,
+ struct list_head *list_entry)
+{
+ bool res;
+ if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+ list_del(list_entry);
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ res = 1;
+ } else
+ res = 0;
+ return res;
+}
+
+static void scst_unblock_aborted_cmds(int scst_mutex_held)
+{
+ struct scst_device *dev;
+
+ if (!scst_mutex_held)
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+ struct scst_cmd *cmd, *tcmd;
+ struct scst_tgt_dev *tgt_dev;
+ spin_lock_bh(&dev->dev_lock);
+ local_irq_disable();
+ list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+ blocked_cmd_list_entry) {
+ if (__scst_check_unblock_aborted_cmd(cmd,
+ &cmd->blocked_cmd_list_entry)) {
+ TRACE_MGMT_DBG("Unblock aborted blocked cmd %p",
+ cmd);
+ }
+ }
+ local_irq_enable();
+ spin_unlock_bh(&dev->dev_lock);
+
+ local_irq_disable();
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ spin_lock(&tgt_dev->sn_lock);
+ list_for_each_entry_safe(cmd, tcmd,
+ &tgt_dev->deferred_cmd_list,
+ sn_cmd_list_entry) {
+ if (__scst_check_unblock_aborted_cmd(cmd,
+ &cmd->sn_cmd_list_entry)) {
+ TRACE_MGMT_DBG("Unblocked aborted SN "
+ "cmd %p (sn %u)",
+ cmd, cmd->sn);
+ tgt_dev->def_cmd_count--;
+ }
+ }
+ spin_unlock(&tgt_dev->sn_lock);
+ }
+ local_irq_enable();
+ }
+
+ if (!scst_mutex_held)
+ mutex_unlock(&scst_mutex);
+ return;
+}
+
+static void __scst_abort_task_set(struct scst_mgmt_cmd *mcmd,
+ struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_cmd *cmd;
+ struct scst_session *sess = tgt_dev->sess;
+
+ spin_lock_irq(&sess->sess_list_lock);
+
+ TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+ list_for_each_entry(cmd, &sess->sess_cmd_list,
+ sess_cmd_list_entry) {
+ if ((cmd->tgt_dev == tgt_dev) ||
+ ((cmd->tgt_dev == NULL) &&
+ (cmd->lun == tgt_dev->lun))) {
+ if (mcmd->cmd_sn_set) {
+ BUG_ON(!cmd->tgt_sn_set);
+ if (scst_sn_before(mcmd->cmd_sn, cmd->tgt_sn) ||
+ (mcmd->cmd_sn == cmd->tgt_sn))
+ continue;
+ }
+ scst_abort_cmd(cmd, mcmd, 0, 0);
+ }
+ }
+ spin_unlock_irq(&sess->sess_list_lock);
+ return;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_abort_task_set(struct scst_mgmt_cmd *mcmd)
+{
+ int res;
+ struct scst_tgt_dev *tgt_dev = mcmd->mcmd_tgt_dev;
+
+ TRACE(TRACE_MGMT, "Aborting task set (lun=%lld, mcmd=%p)",
+ (long long unsigned int)tgt_dev->lun, mcmd);
+
+ __scst_abort_task_set(mcmd, tgt_dev);
+
+ tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "ABORT TASK SET", 0);
+
+ scst_unblock_aborted_cmds(0);
+
+ scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 0);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+static int scst_is_cmd_belongs_to_dev(struct scst_cmd *cmd,
+ struct scst_device *dev)
+{
+ struct scst_tgt_dev *tgt_dev = NULL;
+ struct list_head *sess_tgt_dev_list_head;
+ int res = 0;
+
+ TRACE_DBG("Finding match for dev %p and cmd %p (lun %lld)", dev, cmd,
+ (long long unsigned int)cmd->lun);
+
+ sess_tgt_dev_list_head =
+ &cmd->sess->sess_tgt_dev_list_hash[HASH_VAL(cmd->lun)];
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->lun == cmd->lun) {
+ TRACE_DBG("dev %p found", tgt_dev->dev);
+ res = (tgt_dev->dev == dev);
+ goto out;
+ }
+ }
+
+out:
+ return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_clear_task_set(struct scst_mgmt_cmd *mcmd)
+{
+ int res;
+ struct scst_device *dev = mcmd->mcmd_tgt_dev->dev;
+ struct scst_tgt_dev *tgt_dev;
+ LIST_HEAD(UA_tgt_devs);
+
+ TRACE(TRACE_MGMT, "Clearing task set (lun=%lld, mcmd=%p)",
+ (long long unsigned int)mcmd->lun, mcmd);
+
+#if 0 /* we are SAM-3 */
+ /*
+ * When a logical unit is aborting one or more tasks from a SCSI
+ * initiator port with the TASK ABORTED status it should complete all
+ * of those tasks before entering additional tasks from that SCSI
+ * initiator port into the task set - SAM2
+ */
+ mcmd->needs_unblocking = 1;
+ spin_lock_bh(&dev->dev_lock);
+ __scst_block_dev(dev);
+ spin_unlock_bh(&dev->dev_lock);
+#endif
+
+ __scst_abort_task_set(mcmd, mcmd->mcmd_tgt_dev);
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ struct scst_session *sess = tgt_dev->sess;
+ struct scst_cmd *cmd;
+ int aborted = 0;
+
+ if (tgt_dev == mcmd->mcmd_tgt_dev)
+ continue;
+
+ spin_lock_irq(&sess->sess_list_lock);
+
+ TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+ list_for_each_entry(cmd, &sess->sess_cmd_list,
+ sess_cmd_list_entry) {
+ if ((cmd->dev == dev) ||
+ ((cmd->dev == NULL) &&
+ scst_is_cmd_belongs_to_dev(cmd, dev))) {
+ scst_abort_cmd(cmd, mcmd, 1, 0);
+ aborted = 1;
+ }
+ }
+ spin_unlock_irq(&sess->sess_list_lock);
+
+ if (aborted)
+ list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+ &UA_tgt_devs);
+ }
+
+ tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "CLEAR TASK SET", 0);
+
+ scst_unblock_aborted_cmds(1);
+
+ mutex_unlock(&scst_mutex);
+
+ if (!dev->tas) {
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+ int sl;
+
+ sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ dev->d_sense,
+ SCST_LOAD_SENSE(scst_sense_cleared_by_another_ini_UA));
+
+ list_for_each_entry(tgt_dev, &UA_tgt_devs,
+ extra_tgt_dev_list_entry) {
+ scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+ }
+ }
+
+ scst_call_dev_task_mgmt_fn(mcmd, mcmd->mcmd_tgt_dev, 0);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+/* Returns 0 if the command processing should be continued,
+ * >0 if it should be requeued, <0 otherwise */
+static int scst_mgmt_cmd_init(struct scst_mgmt_cmd *mcmd)
+{
+ int res = 0, rc;
+
+ switch (mcmd->fn) {
+ case SCST_ABORT_TASK:
+ {
+ struct scst_session *sess = mcmd->sess;
+ struct scst_cmd *cmd;
+
+ spin_lock_irq(&sess->sess_list_lock);
+ cmd = __scst_find_cmd_by_tag(sess, mcmd->tag, true);
+ if (cmd == NULL) {
+ TRACE_MGMT_DBG("ABORT TASK: command "
+ "for tag %llu not found",
+ (long long unsigned int)mcmd->tag);
+ mcmd->status = SCST_MGMT_STATUS_TASK_NOT_EXIST;
+ mcmd->state = SCST_MCMD_STATE_DONE;
+ spin_unlock_irq(&sess->sess_list_lock);
+ goto out;
+ }
+ __scst_cmd_get(cmd);
+ spin_unlock_irq(&sess->sess_list_lock);
+ TRACE_MGMT_DBG("Cmd %p for tag %llu (sn %d, set %d, "
+ "queue_type %x) found, aborting it",
+ cmd, (long long unsigned int)mcmd->tag,
+ cmd->sn, cmd->sn_set, cmd->queue_type);
+ mcmd->cmd_to_abort = cmd;
+ if (mcmd->lun_set && (mcmd->lun != cmd->lun)) {
+ PRINT_ERROR("ABORT TASK: LUN mismatch: mcmd LUN %llx, "
+ "cmd LUN %llx, cmd tag %llu",
+ (long long unsigned int)mcmd->lun,
+ (long long unsigned int)cmd->lun,
+ (long long unsigned int)mcmd->tag);
+ mcmd->status = SCST_MGMT_STATUS_REJECTED;
+ } else if (mcmd->cmd_sn_set &&
+ (scst_sn_before(mcmd->cmd_sn, cmd->tgt_sn) ||
+ (mcmd->cmd_sn == cmd->tgt_sn))) {
+ PRINT_ERROR("ABORT TASK: SN mismatch: mcmd SN %x, "
+ "cmd SN %x, cmd tag %llu", mcmd->cmd_sn,
+ cmd->tgt_sn, (long long unsigned int)mcmd->tag);
+ mcmd->status = SCST_MGMT_STATUS_REJECTED;
+ } else {
+ scst_abort_cmd(cmd, mcmd, 0, 1);
+ scst_unblock_aborted_cmds(0);
+ }
+ res = scst_set_mcmd_next_state(mcmd);
+ mcmd->cmd_to_abort = NULL; /* just in case */
+ __scst_cmd_put(cmd);
+ break;
+ }
+
+ case SCST_TARGET_RESET:
+ case SCST_NEXUS_LOSS_SESS:
+ case SCST_ABORT_ALL_TASKS_SESS:
+ case SCST_NEXUS_LOSS:
+ case SCST_ABORT_ALL_TASKS:
+ case SCST_UNREG_SESS_TM:
+ mcmd->state = SCST_MCMD_STATE_READY;
+ break;
+
+ case SCST_ABORT_TASK_SET:
+ case SCST_CLEAR_ACA:
+ case SCST_CLEAR_TASK_SET:
+ case SCST_LUN_RESET:
+ rc = scst_mgmt_translate_lun(mcmd);
+ if (rc == 0)
+ mcmd->state = SCST_MCMD_STATE_READY;
+ else if (rc < 0) {
+ PRINT_ERROR("Corresponding device for LUN %lld not "
+ "found", (long long unsigned int)mcmd->lun);
+ mcmd->status = SCST_MGMT_STATUS_LUN_NOT_EXIST;
+ mcmd->state = SCST_MCMD_STATE_DONE;
+ } else
+ res = rc;
+ break;
+
+ default:
+ BUG();
+ }
+
+out:
+ return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_target_reset(struct scst_mgmt_cmd *mcmd)
+{
+ int res, rc;
+ struct scst_device *dev;
+ struct scst_acg *acg = mcmd->sess->acg;
+ struct scst_acg_dev *acg_dev;
+ int cont, c;
+ LIST_HEAD(host_devs);
+
+ TRACE(TRACE_MGMT, "Target reset (mcmd %p, cmd count %d)",
+ mcmd, atomic_read(&mcmd->sess->sess_cmd_count));
+
+ mcmd->needs_unblocking = 1;
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+ struct scst_device *d;
+ struct scst_tgt_dev *tgt_dev;
+ int found = 0;
+
+ dev = acg_dev->dev;
+
+ spin_lock_bh(&dev->dev_lock);
+ __scst_block_dev(dev);
+ scst_process_reset(dev, mcmd->sess, NULL, mcmd, true);
+ spin_unlock_bh(&dev->dev_lock);
+
+ cont = 0;
+ c = 0;
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ cont = 1;
+ if (mcmd->sess == tgt_dev->sess) {
+ rc = scst_call_dev_task_mgmt_fn(mcmd,
+ tgt_dev, 0);
+ if (rc == SCST_DEV_TM_NOT_COMPLETED)
+ c = 1;
+ else if ((rc < 0) &&
+ (mcmd->status == SCST_MGMT_STATUS_SUCCESS))
+ mcmd->status = rc;
+ break;
+ }
+ }
+ if (cont && !c)
+ continue;
+
+ if (dev->scsi_dev == NULL)
+ continue;
+
+ list_for_each_entry(d, &host_devs, tm_dev_list_entry) {
+ if (dev->scsi_dev->host->host_no ==
+ d->scsi_dev->host->host_no) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found)
+ list_add_tail(&dev->tm_dev_list_entry, &host_devs);
+
+ tm_dbg_task_mgmt(dev, "TARGET RESET", 0);
+ }
+
+ scst_unblock_aborted_cmds(1);
+
+ /*
+ * We assume here that the completion callbacks will be called for all
+ * commands that are already on the devices when/after scsi_reset_provider()
+ * completes.
+ */
+
+ list_for_each_entry(dev, &host_devs, tm_dev_list_entry) {
+ /* dev->scsi_dev must be non-NULL here */
+ TRACE(TRACE_MGMT, "Resetting host %d bus ",
+ dev->scsi_dev->host->host_no);
+ rc = scsi_reset_provider(dev->scsi_dev, SCSI_TRY_RESET_TARGET);
+ TRACE(TRACE_MGMT, "Result of host %d target reset: %s",
+ dev->scsi_dev->host->host_no,
+ (rc == SUCCESS) ? "SUCCESS" : "FAILED");
+#if 0
+ if ((rc != SUCCESS) &&
+ (mcmd->status == SCST_MGMT_STATUS_SUCCESS)) {
+ /*
+ * SCSI_TRY_RESET_BUS is also done by
+ * scsi_reset_provider()
+ */
+ mcmd->status = SCST_MGMT_STATUS_FAILED;
+ }
+#else
+ /*
+ * scsi_reset_provider() returns very weird status, so let's
+ * always succeed
+ */
+#endif
+ }
+
+ list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+ dev = acg_dev->dev;
+ if (dev->scsi_dev != NULL)
+ dev->scsi_dev->was_reset = 0;
+ }
+
+ mutex_unlock(&scst_mutex);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_lun_reset(struct scst_mgmt_cmd *mcmd)
+{
+ int res, rc;
+ struct scst_tgt_dev *tgt_dev = mcmd->mcmd_tgt_dev;
+ struct scst_device *dev = tgt_dev->dev;
+
+ TRACE(TRACE_MGMT, "Resetting LUN %lld (mcmd %p)",
+ (long long unsigned int)tgt_dev->lun, mcmd);
+
+ mcmd->needs_unblocking = 1;
+
+ spin_lock_bh(&dev->dev_lock);
+ __scst_block_dev(dev);
+ scst_process_reset(dev, mcmd->sess, NULL, mcmd, true);
+ spin_unlock_bh(&dev->dev_lock);
+
+ rc = scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 1);
+ if (rc != SCST_DEV_TM_NOT_COMPLETED)
+ goto out_tm_dbg;
+
+ if (dev->scsi_dev != NULL) {
+ TRACE(TRACE_MGMT, "Resetting host %d bus ",
+ dev->scsi_dev->host->host_no);
+ rc = scsi_reset_provider(dev->scsi_dev, SCSI_TRY_RESET_DEVICE);
+#if 0
+ if (rc != SUCCESS && mcmd->status == SCST_MGMT_STATUS_SUCCESS)
+ mcmd->status = SCST_MGMT_STATUS_FAILED;
+#else
+ /*
+ * scsi_reset_provider() returns very weird status, so let's
+ * always succeed
+ */
+#endif
+ dev->scsi_dev->was_reset = 0;
+ }
+
+ scst_unblock_aborted_cmds(0);
+
+out_tm_dbg:
+ tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "LUN RESET", 0);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_do_nexus_loss_sess(struct scst_mgmt_cmd *mcmd)
+{
+ int i;
+ struct scst_session *sess = mcmd->sess;
+ struct scst_tgt_dev *tgt_dev;
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ scst_nexus_loss(tgt_dev,
+ (mcmd->fn != SCST_UNREG_SESS_TM));
+ }
+ }
+ return;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_abort_all_nexus_loss_sess(struct scst_mgmt_cmd *mcmd,
+ int nexus_loss)
+{
+ int res;
+ int i;
+ struct scst_session *sess = mcmd->sess;
+ struct scst_tgt_dev *tgt_dev;
+
+ if (nexus_loss) {
+ TRACE_MGMT_DBG("Nexus loss for sess %p (mcmd %p)",
+ sess, mcmd);
+ } else {
+ TRACE_MGMT_DBG("Aborting all from sess %p (mcmd %p)",
+ sess, mcmd);
+ }
+
+ mutex_lock(&scst_mutex);
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ int rc;
+
+ __scst_abort_task_set(mcmd, tgt_dev);
+
+ rc = scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 0);
+ if (rc < 0 && mcmd->status == SCST_MGMT_STATUS_SUCCESS)
+ mcmd->status = rc;
+
+ tm_dbg_task_mgmt(tgt_dev->dev, "NEXUS LOSS SESS or "
+ "ABORT ALL SESS or UNREG SESS",
+ (mcmd->fn == SCST_UNREG_SESS_TM));
+ }
+ }
+
+ scst_unblock_aborted_cmds(1);
+
+ mutex_unlock(&scst_mutex);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_do_nexus_loss_tgt(struct scst_mgmt_cmd *mcmd)
+{
+ int i;
+ struct scst_tgt *tgt = mcmd->sess->tgt;
+ struct scst_session *sess;
+
+ list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ scst_nexus_loss(tgt_dev, true);
+ }
+ }
+ }
+ return;
+}
+
+static int scst_abort_all_nexus_loss_tgt(struct scst_mgmt_cmd *mcmd,
+ int nexus_loss)
+{
+ int res;
+ int i;
+ struct scst_tgt *tgt = mcmd->sess->tgt;
+ struct scst_session *sess;
+
+ if (nexus_loss) {
+ TRACE_MGMT_DBG("I_T Nexus loss (tgt %p, mcmd %p)",
+ tgt, mcmd);
+ } else {
+ TRACE_MGMT_DBG("Aborting all from tgt %p (mcmd %p)",
+ tgt, mcmd);
+ }
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ int rc;
+
+ __scst_abort_task_set(mcmd, tgt_dev);
+
+ if (nexus_loss)
+ scst_nexus_loss(tgt_dev, true);
+
+ if (mcmd->sess == tgt_dev->sess) {
+ rc = scst_call_dev_task_mgmt_fn(
+ mcmd, tgt_dev, 0);
+ if ((rc < 0) &&
+ (mcmd->status == SCST_MGMT_STATUS_SUCCESS))
+ mcmd->status = rc;
+ }
+
+ tm_dbg_task_mgmt(tgt_dev->dev, "NEXUS LOSS or "
+ "ABORT ALL", 0);
+ }
+ }
+ }
+
+ scst_unblock_aborted_cmds(1);
+
+ mutex_unlock(&scst_mutex);
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_mgmt_cmd_exec(struct scst_mgmt_cmd *mcmd)
+{
+ int res = 0;
+
+ mcmd->status = SCST_MGMT_STATUS_SUCCESS;
+
+ switch (mcmd->fn) {
+ case SCST_ABORT_TASK_SET:
+ res = scst_abort_task_set(mcmd);
+ break;
+
+ case SCST_CLEAR_TASK_SET:
+ if (mcmd->mcmd_tgt_dev->dev->tst ==
+ SCST_CONTR_MODE_SEP_TASK_SETS)
+ res = scst_abort_task_set(mcmd);
+ else
+ res = scst_clear_task_set(mcmd);
+ break;
+
+ case SCST_LUN_RESET:
+ res = scst_lun_reset(mcmd);
+ break;
+
+ case SCST_TARGET_RESET:
+ res = scst_target_reset(mcmd);
+ break;
+
+ case SCST_ABORT_ALL_TASKS_SESS:
+ res = scst_abort_all_nexus_loss_sess(mcmd, 0);
+ break;
+
+ case SCST_NEXUS_LOSS_SESS:
+ case SCST_UNREG_SESS_TM:
+ res = scst_abort_all_nexus_loss_sess(mcmd, 1);
+ break;
+
+ case SCST_ABORT_ALL_TASKS:
+ res = scst_abort_all_nexus_loss_tgt(mcmd, 0);
+ break;
+
+ case SCST_NEXUS_LOSS:
+ res = scst_abort_all_nexus_loss_tgt(mcmd, 1);
+ break;
+
+ case SCST_CLEAR_ACA:
+ if (scst_call_dev_task_mgmt_fn(mcmd, mcmd->mcmd_tgt_dev, 1) ==
+ SCST_DEV_TM_NOT_COMPLETED) {
+ mcmd->status = SCST_MGMT_STATUS_FN_NOT_SUPPORTED;
+ /* Nothing to do (yet) */
+ }
+ goto out_done;
+
+ default:
+ PRINT_ERROR("Unknown task management function %d", mcmd->fn);
+ mcmd->status = SCST_MGMT_STATUS_REJECTED;
+ goto out_done;
+ }
+
+out:
+ return res;
+
+out_done:
+ mcmd->state = SCST_MCMD_STATE_DONE;
+ goto out;
+}
+
+static void scst_call_task_mgmt_affected_cmds_done(struct scst_mgmt_cmd *mcmd)
+{
+ struct scst_session *sess = mcmd->sess;
+
+ if ((sess->tgt->tgtt->task_mgmt_affected_cmds_done != NULL) &&
+ (mcmd->fn != SCST_UNREG_SESS_TM)) {
+ TRACE_DBG("Calling target %s task_mgmt_affected_cmds_done(%p)",
+ sess->tgt->tgtt->name, sess);
+ sess->tgt->tgtt->task_mgmt_affected_cmds_done(mcmd);
+ TRACE_MGMT_DBG("Target's %s task_mgmt_affected_cmds_done() "
+ "returned", sess->tgt->tgtt->name);
+ }
+ return;
+}
+
+static int scst_mgmt_affected_cmds_done(struct scst_mgmt_cmd *mcmd)
+{
+ int res;
+
+ mutex_lock(&scst_mutex);
+
+ switch (mcmd->fn) {
+ case SCST_NEXUS_LOSS_SESS:
+ case SCST_UNREG_SESS_TM:
+ scst_do_nexus_loss_sess(mcmd);
+ break;
+
+ case SCST_NEXUS_LOSS:
+ scst_do_nexus_loss_tgt(mcmd);
+ break;
+ }
+
+ mutex_unlock(&scst_mutex);
+
+ scst_call_task_mgmt_affected_cmds_done(mcmd);
+
+ mcmd->affected_cmds_done_called = 1;
+
+ res = scst_set_mcmd_next_state(mcmd);
+ return res;
+}
+
+static void scst_mgmt_cmd_send_done(struct scst_mgmt_cmd *mcmd)
+{
+ struct scst_device *dev;
+ struct scst_session *sess = mcmd->sess;
+
+ mcmd->state = SCST_MCMD_STATE_FINISHED;
+ if (scst_is_strict_mgmt_fn(mcmd->fn) && (mcmd->completed_cmd_count > 0))
+ mcmd->status = SCST_MGMT_STATUS_TASK_NOT_EXIST;
+
+ TRACE(TRACE_MINOR_AND_MGMT_DBG, "TM command fn %d finished, "
+ "status %x", mcmd->fn, mcmd->status);
+
+ if (!mcmd->affected_cmds_done_called) {
+ /* It might happen in case of errors */
+ scst_call_task_mgmt_affected_cmds_done(mcmd);
+ }
+
+ if ((sess->tgt->tgtt->task_mgmt_fn_done != NULL) &&
+ (mcmd->fn != SCST_UNREG_SESS_TM)) {
+ TRACE_DBG("Calling target %s task_mgmt_fn_done(%p)",
+ sess->tgt->tgtt->name, sess);
+ sess->tgt->tgtt->task_mgmt_fn_done(mcmd);
+ TRACE_MGMT_DBG("Target's %s task_mgmt_fn_done() "
+ "returned", sess->tgt->tgtt->name);
+ }
+
+ if (mcmd->needs_unblocking) {
+ switch (mcmd->fn) {
+ case SCST_LUN_RESET:
+ case SCST_CLEAR_TASK_SET:
+ scst_unblock_dev(mcmd->mcmd_tgt_dev->dev);
+ break;
+
+ case SCST_TARGET_RESET:
+ {
+ struct scst_acg *acg = mcmd->sess->acg;
+ struct scst_acg_dev *acg_dev;
+
+ mutex_lock(&scst_mutex);
+ list_for_each_entry(acg_dev, &acg->acg_dev_list,
+ acg_dev_list_entry) {
+ dev = acg_dev->dev;
+ scst_unblock_dev(dev);
+ }
+ mutex_unlock(&scst_mutex);
+ break;
+ }
+
+ default:
+ BUG();
+ break;
+ }
+ }
+
+ mcmd->tgt_priv = NULL;
+ return;
+}
+
+/* Returns >0 if the cmd should be requeued */
+static int scst_process_mgmt_cmd(struct scst_mgmt_cmd *mcmd)
+{
+ int res = 0;
+
+ TRACE_DBG("mcmd %p, state %d", mcmd, mcmd->state);
+
+ while (1) {
+ switch (mcmd->state) {
+ case SCST_MCMD_STATE_INIT:
+ res = scst_mgmt_cmd_init(mcmd);
+ if (res)
+ goto out;
+ break;
+
+ case SCST_MCMD_STATE_READY:
+ if (scst_mgmt_cmd_exec(mcmd))
+ goto out;
+ break;
+
+ case SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE:
+ if (scst_mgmt_affected_cmds_done(mcmd))
+ goto out;
+ break;
+
+ case SCST_MCMD_STATE_DONE:
+ scst_mgmt_cmd_send_done(mcmd);
+ break;
+
+ default:
+ PRINT_ERROR("Unknown state %d of management command",
+ mcmd->state);
+ res = -1;
+ /* fall through */
+ case SCST_MCMD_STATE_FINISHED:
+ scst_free_mgmt_cmd(mcmd);
+ goto out;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ case SCST_MCMD_STATE_EXECUTING:
+ BUG();
+#endif
+ }
+ }
+
+out:
+ return res;
+}
+
+static inline int test_mgmt_cmd_list(void)
+{
+ int res = !list_empty(&scst_active_mgmt_cmd_list) ||
+ unlikely(kthread_should_stop());
+ return res;
+}
+
+int scst_tm_thread(void *arg)
+{
+
+ PRINT_INFO("Task management thread started, PID %d", current->pid);
+
+ current->flags |= PF_NOFREEZE;
+
+ set_user_nice(current, -10);
+
+ spin_lock_irq(&scst_mcmd_lock);
+ while (!kthread_should_stop()) {
+ wait_queue_t wait;
+ init_waitqueue_entry(&wait, current);
+
+ if (!test_mgmt_cmd_list()) {
+ add_wait_queue_exclusive(&scst_mgmt_cmd_list_waitQ,
+ &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (test_mgmt_cmd_list())
+ break;
+ spin_unlock_irq(&scst_mcmd_lock);
+ schedule();
+ spin_lock_irq(&scst_mcmd_lock);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&scst_mgmt_cmd_list_waitQ, &wait);
+ }
+
+ while (!list_empty(&scst_active_mgmt_cmd_list)) {
+ int rc;
+ struct scst_mgmt_cmd *mcmd;
+ mcmd = list_entry(scst_active_mgmt_cmd_list.next,
+ typeof(*mcmd), mgmt_cmd_list_entry);
+ TRACE_MGMT_DBG("Deleting mgmt cmd %p from active cmd "
+ "list", mcmd);
+ list_del(&mcmd->mgmt_cmd_list_entry);
+ spin_unlock_irq(&scst_mcmd_lock);
+ rc = scst_process_mgmt_cmd(mcmd);
+ spin_lock_irq(&scst_mcmd_lock);
+ if (rc > 0) {
+ if (test_bit(SCST_FLAG_SUSPENDED, &scst_flags) &&
+ !test_bit(SCST_FLAG_SUSPENDING,
+ &scst_flags)) {
+ TRACE_MGMT_DBG("Adding mgmt cmd %p to "
+ "head of delayed mgmt cmd list",
+ mcmd);
+ list_add(&mcmd->mgmt_cmd_list_entry,
+ &scst_delayed_mgmt_cmd_list);
+ } else {
+ TRACE_MGMT_DBG("Adding mgmt cmd %p to "
+ "head of active mgmt cmd list",
+ mcmd);
+ list_add(&mcmd->mgmt_cmd_list_entry,
+ &scst_active_mgmt_cmd_list);
+ }
+ }
+ }
+ }
+ spin_unlock_irq(&scst_mcmd_lock);
+
+ /*
+ * If kthread_should_stop() is true, we are guaranteed to be in the
+ * module unload path, so scst_active_mgmt_cmd_list must be empty.
+ */
+ BUG_ON(!list_empty(&scst_active_mgmt_cmd_list));
+
+ PRINT_INFO("Task management thread PID %d finished", current->pid);
+ return 0;
+}
+
+static struct scst_mgmt_cmd *scst_pre_rx_mgmt_cmd(struct scst_session
+ *sess, int fn, int atomic, void *tgt_priv)
+{
+ struct scst_mgmt_cmd *mcmd = NULL;
+
+ if (unlikely(sess->tgt->tgtt->task_mgmt_fn_done == NULL)) {
+ PRINT_ERROR("New mgmt cmd, but task_mgmt_fn_done() is NULL "
+ "(target %s)", sess->tgt->tgtt->name);
+ goto out;
+ }
+
+ mcmd = scst_alloc_mgmt_cmd(atomic ? GFP_ATOMIC : GFP_KERNEL);
+ if (mcmd == NULL) {
+ PRINT_CRIT_ERROR("Lost TM fn %d, initiator %s", fn,
+ sess->initiator_name);
+ goto out;
+ }
+
+ mcmd->sess = sess;
+ mcmd->fn = fn;
+ mcmd->state = SCST_MCMD_STATE_INIT;
+ mcmd->tgt_priv = tgt_priv;
+
+out:
+ return mcmd;
+}
+
+static int scst_post_rx_mgmt_cmd(struct scst_session *sess,
+ struct scst_mgmt_cmd *mcmd)
+{
+ unsigned long flags;
+ int res = 0;
+
+ scst_sess_get(sess);
+
+ if (unlikely(sess->shut_phase != SCST_SESS_SPH_READY)) {
+ PRINT_CRIT_ERROR("New mgmt cmd while shutting down the "
+ "session %p shut_phase %ld", sess, sess->shut_phase);
+ BUG();
+ }
+
+ local_irq_save(flags);
+
+ spin_lock(&sess->sess_list_lock);
+ atomic_inc(&sess->sess_cmd_count);
+
+ if (unlikely(sess->init_phase != SCST_SESS_IPH_READY)) {
+ switch (sess->init_phase) {
+ case SCST_SESS_IPH_INITING:
+ TRACE_DBG("Adding mcmd %p to init deferred mcmd list",
+ mcmd);
+ list_add_tail(&mcmd->mgmt_cmd_list_entry,
+ &sess->init_deferred_mcmd_list);
+ goto out_unlock;
+ case SCST_SESS_IPH_SUCCESS:
+ break;
+ case SCST_SESS_IPH_FAILED:
+ res = -1;
+ goto out_unlock;
+ default:
+ BUG();
+ }
+ }
+
+ spin_unlock(&sess->sess_list_lock);
+
+ TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd list", mcmd);
+ spin_lock(&scst_mcmd_lock);
+ list_add_tail(&mcmd->mgmt_cmd_list_entry, &scst_active_mgmt_cmd_list);
+ spin_unlock(&scst_mcmd_lock);
+
+ local_irq_restore(flags);
+
+ wake_up(&scst_mgmt_cmd_list_waitQ);
+
+out:
+ return res;
+
+out_unlock:
+ spin_unlock(&sess->sess_list_lock);
+ local_irq_restore(flags);
+ goto out;
+}
+
+/**
+ * scst_rx_mgmt_fn() - create new management command and send it for execution
+ *
+ * Description:
+ * Creates new management command and sends it for execution.
+ *
+ * Returns 0 for success, error code otherwise.
+ *
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same sess.
+ */
+int scst_rx_mgmt_fn(struct scst_session *sess,
+ const struct scst_rx_mgmt_params *params)
+{
+ int res = -EFAULT;
+ struct scst_mgmt_cmd *mcmd = NULL;
+
+ switch (params->fn) {
+ case SCST_ABORT_TASK:
+ BUG_ON(!params->tag_set);
+ break;
+ case SCST_TARGET_RESET:
+ case SCST_ABORT_ALL_TASKS:
+ case SCST_NEXUS_LOSS:
+ break;
+ default:
+ BUG_ON(!params->lun_set);
+ }
+
+ mcmd = scst_pre_rx_mgmt_cmd(sess, params->fn, params->atomic,
+ params->tgt_priv);
+ if (mcmd == NULL)
+ goto out;
+
+ if (params->lun_set) {
+ mcmd->lun = scst_unpack_lun(params->lun, params->lun_len);
+ if (mcmd->lun == NO_SUCH_LUN)
+ goto out_free;
+ mcmd->lun_set = 1;
+ }
+
+ if (params->tag_set)
+ mcmd->tag = params->tag;
+
+ mcmd->cmd_sn_set = params->cmd_sn_set;
+ mcmd->cmd_sn = params->cmd_sn;
+
+ TRACE(TRACE_MGMT, "TM fn %d", params->fn);
+
+ TRACE_MGMT_DBG("sess=%p, tag_set %d, tag %lld, lun_set %d, "
+ "lun=%lld, cmd_sn_set %d, cmd_sn %d, priv %p", sess,
+ params->tag_set,
+ (long long unsigned int)params->tag,
+ params->lun_set,
+ (long long unsigned int)mcmd->lun,
+ params->cmd_sn_set,
+ params->cmd_sn,
+ params->tgt_priv);
+
+ if (scst_post_rx_mgmt_cmd(sess, mcmd) != 0)
+ goto out_free;
+
+ res = 0;
+
+out:
+ return res;
+
+out_free:
+ scst_free_mgmt_cmd(mcmd);
+ mcmd = NULL;
+ goto out;
+}
+EXPORT_SYMBOL(scst_rx_mgmt_fn);
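+
+/*
+ * Editor's illustrative sketch, not part of this patch: a hypothetical
+ * target driver requesting ABORT TASK for a given tag from atomic
+ * (e.g. softirq) context. Only scst_rx_mgmt_fn() and the
+ * scst_rx_mgmt_params fields referenced above are relied upon; the
+ * function name is made up.
+ */
+static int my_tgt_abort_task(struct scst_session *sess, uint64_t tag)
+{
+	struct scst_rx_mgmt_params params;
+
+	memset(&params, 0, sizeof(params));
+	params.fn = SCST_ABORT_TASK;
+	params.tag = tag;
+	params.tag_set = 1;
+	params.atomic = 1;	/* we may be called from IRQ/softirq context */
+	params.tgt_priv = NULL;
+
+	return scst_rx_mgmt_fn(sess, &params);
+}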
+
+/*
+ * Returns true if string "string" matches pattern "wild", false otherwise.
+ * Pattern is a regular DOS-type pattern, containing '*' and '?' symbols.
+ * '*' matches any sequence of symbols (including none), '?' matches any
+ * single symbol.
+ *
+ * For instance:
+ * if (wildcmp("bl?h.*", "blah.jpg")) {
+ * // match
+ * } else {
+ * // no match
+ * }
+ *
+ * Written by Jack Handy - jakkhandy@hotmail.com
+ * Taken by Gennadiy Nerubayev <parakie@gmail.com> from
+ * http://www.codeproject.com/KB/string/wildcmp.aspx. No license attached
+ * to it, and it's posted on a free site; assumed to be free for use.
+ */
+static bool wildcmp(const char *wild, const char *string)
+{
+ const char *cp = NULL, *mp = NULL;
+
+ while ((*string) && (*wild != '*')) {
+ if ((*wild != *string) && (*wild != '?'))
+ return false;
+
+ wild++;
+ string++;
+ }
+
+ while (*string) {
+ if (*wild == '*') {
+ if (!*++wild)
+ return true;
+
+ mp = wild;
+ cp = string+1;
+ } else if ((*wild == *string) || (*wild == '?')) {
+ wild++;
+ string++;
+ } else {
+ wild = mp;
+ string = cp++;
+ }
+ }
+
+ while (*wild == '*')
+ wild++;
+
+ return !*wild;
+}
+
+/* scst_mutex supposed to be held */
+static struct scst_acg *scst_find_tgt_acg_by_name_wild(struct scst_tgt *tgt,
+ const char *initiator_name)
+{
+ struct scst_acg *acg, *res = NULL;
+ struct scst_acn *n;
+
+ if (initiator_name == NULL)
+ goto out;
+
+ list_for_each_entry(acg, &tgt->tgt_acg_list, acg_list_entry) {
+ list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+ if (wildcmp(n->name, initiator_name)) {
+ TRACE_DBG("Access control group %s found",
+ acg->acg_name);
+ res = acg;
+ goto out;
+ }
+ }
+ }
+
+out:
+ return res;
+}
+
+/* Must be called under scst_mutex */
+struct scst_acg *scst_find_acg(const struct scst_session *sess)
+{
+ struct scst_acg *acg = NULL;
+
+ acg = scst_find_tgt_acg_by_name_wild(sess->tgt, sess->initiator_name);
+ if (acg == NULL)
+ acg = sess->tgt->default_acg;
+ return acg;
+}
+
+static int scst_init_session(struct scst_session *sess)
+{
+ int res = 0, rc;
+ struct scst_cmd *cmd;
+ struct scst_mgmt_cmd *mcmd, *tm;
+ int mwake = 0;
+
+ mutex_lock(&scst_mutex);
+
+ sess->acg = scst_find_acg(sess);
+
+ PRINT_INFO("Using security group \"%s\" for initiator \"%s\"",
+ sess->acg->acg_name, sess->initiator_name);
+
+ list_add_tail(&sess->acg_sess_list_entry, &sess->acg->acg_sess_list);
+
+ TRACE_DBG("Adding sess %p to tgt->sess_list", sess);
+ list_add_tail(&sess->sess_list_entry, &sess->tgt->sess_list);
+
+ res = scst_sess_alloc_tgt_devs(sess);
+
+ /* Let's always create session's sysfs to simplify error recovery */
+ rc = scst_create_sess_sysfs(sess);
+ if (res == 0)
+ res = rc;
+
+ mutex_unlock(&scst_mutex);
+
+ if (sess->init_result_fn) {
+ TRACE_DBG("Calling init_result_fn(%p)", sess);
+ sess->init_result_fn(sess, sess->reg_sess_data, res);
+ TRACE_DBG("%s", "init_result_fn() returned");
+ }
+
+ spin_lock_irq(&sess->sess_list_lock);
+
+ if (res == 0)
+ sess->init_phase = SCST_SESS_IPH_SUCCESS;
+ else
+ sess->init_phase = SCST_SESS_IPH_FAILED;
+
+restart:
+ list_for_each_entry(cmd, &sess->init_deferred_cmd_list,
+ cmd_list_entry) {
+ TRACE_DBG("Deleting cmd %p from init deferred cmd list", cmd);
+ list_del(&cmd->cmd_list_entry);
+ atomic_dec(&sess->sess_cmd_count);
+ spin_unlock_irq(&sess->sess_list_lock);
+ scst_cmd_init_done(cmd, SCST_CONTEXT_THREAD);
+ spin_lock_irq(&sess->sess_list_lock);
+ goto restart;
+ }
+
+ spin_lock(&scst_mcmd_lock);
+ list_for_each_entry_safe(mcmd, tm, &sess->init_deferred_mcmd_list,
+ mgmt_cmd_list_entry) {
+ TRACE_DBG("Moving mgmt command %p from init deferred mcmd list",
+ mcmd);
+ list_move_tail(&mcmd->mgmt_cmd_list_entry,
+ &scst_active_mgmt_cmd_list);
+ mwake = 1;
+ }
+
+ spin_unlock(&scst_mcmd_lock);
+ /*
+ * In case of an error at this point, the calling target driver is
+ * supposed to have already initiated this session's unregistration.
+ */
+ sess->init_phase = SCST_SESS_IPH_READY;
+ spin_unlock_irq(&sess->sess_list_lock);
+
+ if (mwake)
+ wake_up(&scst_mgmt_cmd_list_waitQ);
+
+ scst_sess_put(sess);
+ return res;
+}
+
+/**
+ * scst_register_session() - register session
+ * @tgt: target
+ * @atomic: true if the function is called in atomic context. If false,
+ * this function will block until the session registration is
+ * completed.
+ * @initiator_name: remote initiator's name, any NULL-terminated string,
+ * e.g. iSCSI name, which is used as the key to find the appropriate
+ * access control group. Can be NULL; then the target's default
+ * LUNs are used.
+ * @data: any target driver supplied data
+ * @result_fn: pointer to the function that will be asynchronously called
+ * when session initialization finishes.
+ * Can be NULL. Parameters:
+ * - sess - session
+ * - data - data supplied by the target driver to
+ * scst_register_session()
+ * - result - session initialization result, 0 on success or
+ * appropriate error code otherwise
+ *
+ * Description:
+ * Registers new session. Returns new session on success or NULL otherwise.
+ *
+ * Note: Session creation and initialization is a complex task
+ * that requires being able to sleep, so it can't be fully done
+ * in interrupt context. Therefore its "bottom half", if
+ * scst_register_session() is called from atomic context, will be
+ * done in SCST thread context. In this case scst_register_session()
+ * will return a not yet fully initialized session, but the target
+ * driver can already supply commands to this session via scst_rx_cmd().
+ * Processing of those commands will be delayed inside SCST until
+ * the session initialization is finished, then their processing
+ * will be restarted. The target driver will be notified that the
+ * session initialization has finished by the result_fn() function.
+ * On success the target driver need not do anything, but if the
+ * initialization fails, the target driver must ensure that
+ * no new commands are or will be sent to SCST after
+ * result_fn() returns. All commands already sent to SCST for the
+ * failed session will be returned in xmit_response() with BUSY status.
+ * In case of failure the driver shall call scst_unregister_session()
+ * inside result_fn(); it will NOT be called automatically.
+ */
+struct scst_session *scst_register_session(struct scst_tgt *tgt, int atomic,
+ const char *initiator_name, void *data,
+ void (*result_fn) (struct scst_session *sess, void *data, int result))
+{
+ struct scst_session *sess;
+ int res;
+ unsigned long flags;
+
+ sess = scst_alloc_session(tgt, atomic ? GFP_ATOMIC : GFP_KERNEL,
+ initiator_name);
+ if (sess == NULL)
+ goto out;
+
+ scst_sess_get(sess); /* one for registered session */
+ scst_sess_get(sess); /* one held until sess is inited */
+
+ if (atomic) {
+ sess->reg_sess_data = data;
+ sess->init_result_fn = result_fn;
+ spin_lock_irqsave(&scst_mgmt_lock, flags);
+ TRACE_DBG("Adding sess %p to scst_sess_init_list", sess);
+ list_add_tail(&sess->sess_init_list_entry,
+ &scst_sess_init_list);
+ spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+ wake_up(&scst_mgmt_waitQ);
+ } else {
+ res = scst_init_session(sess);
+ if (res != 0)
+ goto out_free;
+ }
+
+out:
+ return sess;
+
+out_free:
+ scst_free_session(sess);
+ sess = NULL;
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_session);
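+
+/*
+ * Editor's illustrative sketch, not part of this patch: registering a
+ * session from atomic context and reacting to the result in the callback,
+ * as described in the note above. struct my_conn, my_conn_stop_queueing()
+ * and the other my_* names are hypothetical target driver details.
+ */
+static void my_sess_init_done(struct scst_session *sess, void *data,
+	int result)
+{
+	struct my_conn *conn = data;
+
+	if (result != 0) {
+		/*
+		 * Per the note above: stop sending new commands to SCST
+		 * and unregister the failed session ourselves.
+		 */
+		my_conn_stop_queueing(conn);
+		scst_unregister_session(sess, 0, NULL);
+	}
+}
+
+static int my_conn_login(struct my_conn *conn)
+{
+	conn->scst_sess = scst_register_session(conn->tgt, 1 /* atomic */,
+		conn->initiator_name, conn, my_sess_init_done);
+	if (conn->scst_sess == NULL)
+		return -ENOMEM;
+
+	/*
+	 * Commands may already be passed in via scst_rx_cmd(); SCST defers
+	 * their processing until the session initialization completes.
+	 */
+	return 0;
+}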
+
+/**
+ * scst_register_session_simple() - register session (simple version)
+ * @tgt: target
+ * @initiator_name: remote initiator's name, any NULL-terminated string,
+ * e.g. iSCSI name, which is used as the key to find the appropriate
+ * access control group. Can be NULL; then the target's default
+ * LUNs are used.
+ *
+ * Description:
+ * Registers new session. Returns new session on success or NULL otherwise.
+ */
+struct scst_session *scst_register_session_simple(struct scst_tgt *tgt,
+ const char *initiator_name)
+{
+ return scst_register_session(tgt, 0, initiator_name, NULL, NULL);
+}
+EXPORT_SYMBOL(scst_register_session_simple);
+
+/**
+ * scst_unregister_session() - unregister session
+ * @sess: session to be unregistered
+ * @wait: if true, instructs to wait until all commands that are
+ * currently being executed and belong to the session have
+ * finished. Otherwise, the target driver should be prepared to
+ * receive xmit_response() for the session's commands after
+ * scst_unregister_session() returns.
+ * @unreg_done_fn: pointer to the function that will be asynchronously called
+ * when the last session's command finishes and
+ * the session is about to be completely freed. Can be NULL.
+ * Parameter:
+ * - sess - session
+ *
+ * Unregisters session.
+ *
+ * Notes:
+ * - All outstanding commands will be finished regularly. After
+ * scst_unregister_session() has returned, no new commands must be sent to
+ * SCST via scst_rx_cmd().
+ *
+ * - The caller must ensure that no scst_rx_cmd() or scst_rx_mgmt_fn_*() is
+ * called in parallel with scst_unregister_session().
+ *
+ * - Can be called before result_fn() of scst_register_session() has been
+ * called, i.e. during the session registration/initialization.
+ *
+ * - It is highly recommended to call scst_unregister_session() as soon as it
+ * becomes clear that the session will be unregistered, rather than waiting
+ * until all related commands have finished. This function provides the wait
+ * functionality, but it also starts recovering stuck commands, if there are
+ * any. Otherwise, your target driver could wait for those commands forever.
+ */
+void scst_unregister_session(struct scst_session *sess, int wait,
+ void (*unreg_done_fn) (struct scst_session *sess))
+{
+ unsigned long flags;
+ DECLARE_COMPLETION_ONSTACK(c);
+ int rc, lun;
+
+ TRACE_MGMT_DBG("Unregistering session %p (wait %d)", sess, wait);
+
+ sess->unreg_done_fn = unreg_done_fn;
+
+ /* Abort all outstanding commands and clear reservation, if necessary */
+ lun = 0;
+ rc = scst_rx_mgmt_fn_lun(sess, SCST_UNREG_SESS_TM,
+ (uint8_t *)&lun, sizeof(lun), SCST_ATOMIC, NULL);
+ if (rc != 0) {
+ PRINT_ERROR("SCST_UNREG_SESS_TM failed %d (sess %p)",
+ rc, sess);
+ }
+
+ sess->shut_phase = SCST_SESS_SPH_SHUTDOWN;
+
+ spin_lock_irqsave(&scst_mgmt_lock, flags);
+
+ if (wait)
+ sess->shutdown_compl = &c;
+
+ spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+
+ scst_sess_put(sess);
+
+ if (wait) {
+ TRACE_DBG("Waiting for session %p to complete", sess);
+ wait_for_completion(&c);
+ }
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_session);
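+
+/*
+ * Editor's illustrative sketch, not part of this patch: unregistering
+ * without blocking and freeing the driver's per-connection state only
+ * from unreg_done_fn(), once SCST has finished with the session.
+ * struct my_conn, my_conn_from_sess() and my_conn_free() are
+ * hypothetical target driver details.
+ */
+static void my_unreg_done(struct scst_session *sess)
+{
+	struct my_conn *conn = my_conn_from_sess(sess);
+
+	my_conn_free(conn);
+}
+
+static void my_conn_logout(struct my_conn *conn)
+{
+	/*
+	 * Don't wait here: the driver must still be able to handle
+	 * xmit_response() for the session's remaining commands.
+	 */
+	scst_unregister_session(conn->scst_sess, 0, my_unreg_done);
+}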
+
+/**
+ * scst_unregister_session_simple() - unregister session, simple version
+ * @sess: session to be unregistered
+ *
+ * Unregisters session.
+ *
+ * See notes for scst_unregister_session() above.
+ */
+void scst_unregister_session_simple(struct scst_session *sess)
+{
+
+ scst_unregister_session(sess, 1, NULL);
+ return;
+}
+EXPORT_SYMBOL(scst_unregister_session_simple);
+
+static inline int test_mgmt_list(void)
+{
+ int res = !list_empty(&scst_sess_init_list) ||
+ !list_empty(&scst_sess_shut_list) ||
+ unlikely(kthread_should_stop());
+ return res;
+}
+
+int scst_global_mgmt_thread(void *arg)
+{
+ struct scst_session *sess;
+
+ PRINT_INFO("Management thread started, PID %d", current->pid);
+
+ current->flags |= PF_NOFREEZE;
+
+ set_user_nice(current, -10);
+
+ spin_lock_irq(&scst_mgmt_lock);
+ while (!kthread_should_stop()) {
+ wait_queue_t wait;
+ init_waitqueue_entry(&wait, current);
+
+ if (!test_mgmt_list()) {
+ add_wait_queue_exclusive(&scst_mgmt_waitQ, &wait);
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (test_mgmt_list())
+ break;
+ spin_unlock_irq(&scst_mgmt_lock);
+ schedule();
+ spin_lock_irq(&scst_mgmt_lock);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(&scst_mgmt_waitQ, &wait);
+ }
+
+ while (!list_empty(&scst_sess_init_list)) {
+ sess = list_entry(scst_sess_init_list.next,
+ typeof(*sess), sess_init_list_entry);
+ TRACE_DBG("Removing sess %p from scst_sess_init_list",
+ sess);
+ list_del(&sess->sess_init_list_entry);
+ spin_unlock_irq(&scst_mgmt_lock);
+
+ if (sess->init_phase == SCST_SESS_IPH_INITING)
+ scst_init_session(sess);
+ else {
+ PRINT_CRIT_ERROR("session %p is in "
+ "scst_sess_init_list, but in unknown "
+ "init phase %x", sess,
+ sess->init_phase);
+ BUG();
+ }
+
+ spin_lock_irq(&scst_mgmt_lock);
+ }
+
+ while (!list_empty(&scst_sess_shut_list)) {
+ sess = list_entry(scst_sess_shut_list.next,
+ typeof(*sess), sess_shut_list_entry);
+ TRACE_DBG("Removing sess %p from scst_sess_shut_list",
+ sess);
+ list_del(&sess->sess_shut_list_entry);
+ spin_unlock_irq(&scst_mgmt_lock);
+
+ switch (sess->shut_phase) {
+ case SCST_SESS_SPH_SHUTDOWN:
+ BUG_ON(atomic_read(&sess->refcnt) != 0);
+ scst_free_session_callback(sess);
+ break;
+ default:
+ PRINT_CRIT_ERROR("session %p is in "
+ "scst_sess_shut_list, but in unknown "
+ "shut phase %lx", sess,
+ sess->shut_phase);
+ BUG();
+ break;
+ }
+
+ spin_lock_irq(&scst_mgmt_lock);
+ }
+ }
+ spin_unlock_irq(&scst_mgmt_lock);
+
+ /*
+ * If kthread_should_stop() is true, we are guaranteed to be in the
+ * module unload path, so both lists must be empty.
+ */
+ BUG_ON(!list_empty(&scst_sess_init_list));
+ BUG_ON(!list_empty(&scst_sess_shut_list));
+
+ PRINT_INFO("Management thread PID %d finished", current->pid);
+ return 0;
+}
+
+/* Called under sess->sess_list_lock */
+static struct scst_cmd *__scst_find_cmd_by_tag(struct scst_session *sess,
+ uint64_t tag, bool to_abort)
+{
+ struct scst_cmd *cmd, *res = NULL;
+
+ /* ToDo: hash list */
+
+ TRACE_DBG("%s (sess=%p, tag=%llu)", "Searching in sess cmd list",
+ sess, (long long unsigned int)tag);
+
+ list_for_each_entry(cmd, &sess->sess_cmd_list,
+ sess_cmd_list_entry) {
+ if (cmd->tag == tag) {
+ /*
+ * We must not count done commands, because
+ * they were already submitted for transmission.
+ * Otherwise we can have a race when, for some
+ * reason, a cmd's release is delayed after
+ * transmission and the initiator sends a cmd
+ * with the same tag; then it is possible that
+ * a wrong cmd will be returned.
+ */
+ if (cmd->done) {
+ if (to_abort) {
+ /*
+ * We should return the latest not
+ * aborted cmd with this tag.
+ */
+ if (res == NULL)
+ res = cmd;
+ else {
+ if (test_bit(SCST_CMD_ABORTED,
+ &res->cmd_flags)) {
+ res = cmd;
+ } else if (!test_bit(SCST_CMD_ABORTED,
+ &cmd->cmd_flags))
+ res = cmd;
+ }
+ }
+ continue;
+ } else {
+ res = cmd;
+ break;
+ }
+ }
+ }
+ return res;
+}
+
+/**
+ * scst_find_cmd() - find command by custom comparison function
+ *
+ * Finds a command based on user-supplied data and a comparison
+ * callback function that should return true if the command is found.
+ * Returns the command on success or NULL otherwise.
+ */
+struct scst_cmd *scst_find_cmd(struct scst_session *sess, void *data,
+ int (*cmp_fn) (struct scst_cmd *cmd,
+ void *data))
+{
+ struct scst_cmd *cmd = NULL;
+ unsigned long flags = 0;
+
+ if (cmp_fn == NULL)
+ goto out;
+
+ spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+ TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+ list_for_each_entry(cmd, &sess->sess_cmd_list, sess_cmd_list_entry) {
+ /*
+ * We must not count done commands, because they were already
+ * submitted for transmission. Otherwise we can have a race when,
+ * for some reason, a cmd's release is delayed after transmission
+ * and the initiator sends a cmd with the same tag; then it is
+ * possible that a wrong cmd will be returned.
+ */
+ if (cmd->done)
+ continue;
+ if (cmp_fn(cmd, data))
+ goto out_unlock;
+ }
+
+ cmd = NULL;
+
+out_unlock:
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+
+out:
+ return cmd;
+}
+EXPORT_SYMBOL(scst_find_cmd);
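+
+/*
+ * Editor's illustrative sketch, not part of this patch: using
+ * scst_find_cmd() to look up a command by a driver-private handle.
+ * Storing that handle in the command's target-private data and the
+ * scst_cmd_get_tgt_priv() accessor are assumptions about the target
+ * driver, not something this patch defines.
+ */
+static int my_cmp_by_handle(struct scst_cmd *cmd, void *data)
+{
+	return scst_cmd_get_tgt_priv(cmd) == data;
+}
+
+static struct scst_cmd *my_find_cmd_by_handle(struct scst_session *sess,
+	void *handle)
+{
+	return scst_find_cmd(sess, handle, my_cmp_by_handle);
+}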
+
+/**
+ * scst_find_cmd_by_tag() - find command by tag
+ *
+ * Finds a command based on the supplied tag, comparing it with the one
+ * previously set by scst_cmd_set_tag(). Returns the found command on
+ * success or NULL otherwise.
+ */
+struct scst_cmd *scst_find_cmd_by_tag(struct scst_session *sess,
+ uint64_t tag)
+{
+ unsigned long flags;
+ struct scst_cmd *cmd;
+ spin_lock_irqsave(&sess->sess_list_lock, flags);
+ cmd = __scst_find_cmd_by_tag(sess, tag, false);
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+ return cmd;
+}
+EXPORT_SYMBOL(scst_find_cmd_by_tag);
* Re: [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c
[not found] ` <4BC44D08.4060907@vlnb.net>
` (3 preceding siblings ...)
2010-04-13 13:05 ` [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c Vladislav Bolkhovitin
@ 2010-04-13 13:05 ` Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 6/12/1/5] SCST core's private header Vladislav Bolkhovitin
` (4 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:05 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains file scst_lib.c.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_lib.c | 6337 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 6337 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_lib.c linux-2.6.33/drivers/scst/scst_lib.c
--- orig/linux-2.6.33/drivers/scst/scst_lib.c
+++ linux-2.6.33/drivers/scst/scst_lib.c
@@ -0,0 +1,6337 @@
+/*
+ * scst_lib.c
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/cdrom.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <asm/kmap_types.h>
+#include <linux/ctype.h>
+#include <linux/delay.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+struct scsi_io_context {
+ unsigned int full_cdb_used:1;
+ void *data;
+ void (*done)(void *data, char *sense, int result, int resid);
+ char sense[SCST_SENSE_BUFFERSIZE];
+ unsigned char full_cdb[0];
+};
+static struct kmem_cache *scsi_io_context_cache;
+
+/* get_trans_len_x extracts x bytes from the CDB as the length, starting at off */
+static int get_trans_len_1(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_1_256(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_2(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_3(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_4(struct scst_cmd *cmd, uint8_t off);
+
+/* for special commands */
+static int get_trans_len_block_limit(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_read_capacity(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_serv_act_in(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_single(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_none(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_read_pos(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_cdb_len_10(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_prevent_allow_medium_removal(struct scst_cmd *cmd,
+ uint8_t off);
+static int get_trans_len_3_read_elem_stat(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_start_stop(struct scst_cmd *cmd, uint8_t off);
+
+/*
++=====================================-============-======-
+| Command name | Operation | Type |
+| | code | |
+|-------------------------------------+------------+------+
+
++=========================================================+
+|Key: M = command implementation is mandatory. |
+| O = command implementation is optional. |
+| V = Vendor-specific |
+| R = Reserved |
+| ' '= DON'T use for this device |
++=========================================================+
+*/
+
+#define SCST_CDB_MANDATORY 'M' /* mandatory */
+#define SCST_CDB_OPTIONAL 'O' /* optional */
+#define SCST_CDB_VENDOR 'V' /* vendor */
+#define SCST_CDB_RESERVED 'R' /* reserved */
+#define SCST_CDB_NOTSUPP ' ' /* don't use */
+
+struct scst_sdbops {
+ uint8_t ops; /* SCSI-2 op codes */
+ uint8_t devkey[16]; /* Key for every device type M,O,V,R
+ * type_disk devkey[0]
+ * type_tape devkey[1]
+ * type_printer devkey[2]
+ * type_processor devkey[3]
+ * type_worm devkey[4]
+ * type_cdrom devkey[5]
+ * type_scanner devkey[6]
+ * type_mod devkey[7]
+ * type_changer devkey[8]
+ * type_commdev devkey[9]
+ * type_reserv devkey[A]
+ * type_reserv devkey[B]
+ * type_raid devkey[C]
+ * type_enclosure devkey[D]
+ * type_reserv devkey[E]
+ * type_reserv devkey[F]
+ */
+ const char *op_name; /* SCSI-2 op codes full name */
+ uint8_t direction; /* init --> target: SCST_DATA_WRITE
+ * target --> init: SCST_DATA_READ
+ */
+ uint16_t flags; /* opcode -- various flags */
+ uint8_t off; /* length offset in cdb */
+ int (*get_trans_len)(struct scst_cmd *cmd, uint8_t off)
+ __attribute__ ((aligned));
+} __attribute__((packed));
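+
+/*
+ * Editor's note, illustrative only: how to read one row of the opcode
+ * table below. The disk-type READ(6) entry for opcode 0x08 has
+ * devkey[0] == 'O' (optional for disks, unsupported for the other
+ * device types), direction SCST_DATA_READ (target -> initiator),
+ * off == 4 and get_trans_len_1_256, i.e. the transfer length is the
+ * single byte at CDB offset 4, with 0 presumably meaning 256 blocks.
+ */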
+
+static int scst_scsi_op_list[256];
+
+#define FLAG_NONE 0
+
+static const struct scst_sdbops scst_scsi_op_table[] = {
+ /*
+ * +-------------------> TYPE_IS_DISK (0)
+ * |
+ * |+------------------> TYPE_IS_TAPE (1)
+ * ||
+ * || +----------------> TYPE_IS_PROCESSOR (3)
+ * || |
+ * || | +--------------> TYPE_IS_CDROM (5)
+ * || | |
+ * || | | +------------> TYPE_IS_MOD (7)
+ * || | | |
+ * || | | |+-----------> TYPE_IS_CHANGER (8)
+ * || | | ||
+ * || | | || +-------> TYPE_IS_RAID (C)
+ * || | | || |
+ * || | | || |
+ * 0123456789ABCDEF ---> TYPE_IS_???? */
+
+ /* 6-bytes length CDB */
+ {0x00, "MMMMMMMMMMMMMMMM", "TEST UNIT READY",
+ /* let's be HQ so we don't look dead under high load */
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|
+ SCST_REG_RESERVE_ALLOWED,
+ 0, get_trans_len_none},
+ {0x01, " M ", "REWIND",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x01, "O V OO OO ", "REZERO UNIT",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x02, "VVVVVV V ", "REQUEST BLOCK ADDR",
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT, 0, get_trans_len_none},
+ {0x03, "MMMMMMMMMMMMMMMM", "REQUEST SENSE",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_SKIP_UA|SCST_LOCAL_CMD|
+ SCST_REG_RESERVE_ALLOWED,
+ 4, get_trans_len_1},
+ {0x04, "M O O ", "FORMAT UNIT",
+ SCST_DATA_WRITE, SCST_LONG_TIMEOUT|SCST_UNKNOWN_LENGTH|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_none},
+ {0x04, " O ", "FORMAT",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x05, "VMVVVV V ", "READ BLOCK LIMITS",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_REG_RESERVE_ALLOWED,
+ 0, get_trans_len_block_limit},
+ {0x07, " O ", "INITIALIZE ELEMENT STATUS",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x07, "OVV O OV ", "REASSIGN BLOCKS",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x08, "O ", "READ(6)",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 4, get_trans_len_1_256},
+ {0x08, " MV OO OV ", "READ(6)",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+ {0x08, " M ", "GET MESSAGE(6)",
+ SCST_DATA_READ, FLAG_NONE, 2, get_trans_len_3},
+ {0x08, " O ", "RECEIVE",
+ SCST_DATA_READ, FLAG_NONE, 2, get_trans_len_3},
+ {0x0A, "O ", "WRITE(6)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 4, get_trans_len_1_256},
+ {0x0A, " M O OV ", "WRITE(6)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 2, get_trans_len_3},
+ {0x0A, " M ", "PRINT",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x0A, " M ", "SEND MESSAGE(6)",
+ SCST_DATA_WRITE, FLAG_NONE, 2, get_trans_len_3},
+ {0x0A, " M ", "SEND(6)",
+ SCST_DATA_WRITE, FLAG_NONE, 2, get_trans_len_3},
+ {0x0B, "O OO OV ", "SEEK(6)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x0B, " ", "TRACK SELECT",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x0B, " O ", "SLEW AND PRINT",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x0C, "VVVVVV V ", "SEEK BLOCK",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x0D, "VVVVVV V ", "PARTITION",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_none},
+ {0x0F, "VOVVVV V ", "READ REVERSE",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+ {0x10, "VM V V ", "WRITE FILEMARKS",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x10, " O O ", "SYNCHRONIZE BUFFER",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x11, "VMVVVV ", "SPACE",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x12, "MMMMMMMMMMMMMMMM", "INQUIRY",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|SCST_SKIP_UA|
+ SCST_REG_RESERVE_ALLOWED,
+ 4, get_trans_len_1},
+ {0x13, "VOVVVV ", "VERIFY(6)",
+ SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+ SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+ 2, get_trans_len_3},
+ {0x14, "VOOVVV ", "RECOVER BUFFERED DATA",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+ {0x15, "OMOOOOOOOOOOOOOO", "MODE SELECT(6)",
+ SCST_DATA_WRITE, SCST_LOCAL_CMD, 4, get_trans_len_1},
+ {0x16, "MMMMMMMMMMMMMMMM", "RESERVE",
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD,
+ 0, get_trans_len_none},
+ {0x17, "MMMMMMMMMMMMMMMM", "RELEASE",
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD|SCST_REG_RESERVE_ALLOWED,
+ 0, get_trans_len_none},
+ {0x18, "OOOOOOOO ", "COPY",
+ SCST_DATA_WRITE, SCST_LONG_TIMEOUT, 2, get_trans_len_3},
+ {0x19, "VMVVVV ", "ERASE",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_none},
+ {0x1A, "OMOOOOOOOOOOOOOO", "MODE SENSE(6)",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT, 4, get_trans_len_1},
+ {0x1B, " O ", "SCAN",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x1B, " O ", "LOAD UNLOAD",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x1B, " O ", "STOP PRINT",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x1B, "O OO O O ", "START STOP UNIT",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_start_stop},
+ {0x1C, "OOOOOOOOOOOOOOOO", "RECEIVE DIAGNOSTIC RESULTS",
+ SCST_DATA_READ, FLAG_NONE, 3, get_trans_len_2},
+ {0x1D, "MMMMMMMMMMMMMMMM", "SEND DIAGNOSTIC",
+ SCST_DATA_WRITE, FLAG_NONE, 4, get_trans_len_1},
+ {0x1E, "OOOOOOOOOOOOOOOO", "PREVENT ALLOW MEDIUM REMOVAL",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0,
+ get_trans_len_prevent_allow_medium_removal},
+ {0x1F, " O ", "PORT STATUS",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+
+ /* 10-bytes length CDB */
+ {0x23, "V VV V ", "READ FORMAT CAPACITY",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x24, "V VVM ", "SET WINDOW",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_3},
+ {0x25, "M MM M ", "READ CAPACITY",
+ SCST_DATA_READ, SCST_IMPLICIT_HQ|SCST_REG_RESERVE_ALLOWED,
+ 0, get_trans_len_read_capacity},
+ {0x25, " O ", "GET WINDOW",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_3},
+ {0x28, "M MMMM ", "READ(10)",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 7, get_trans_len_2},
+ {0x28, " O ", "GET MESSAGE(10)",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x29, "V VV O ", "READ GENERATION",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+ {0x2A, "O MO M ", "WRITE(10)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 7, get_trans_len_2},
+ {0x2A, " O ", "SEND MESSAGE(10)",
+ SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+ {0x2A, " O ", "SEND(10)",
+ SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+ {0x2B, " O ", "LOCATE",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x2B, " O ", "POSITION TO ELEMENT",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x2B, "O OO O ", "SEEK(10)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x2C, "V O O ", "ERASE(10)",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_none},
+ {0x2D, "V O O ", "READ UPDATED BLOCK",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 0, get_trans_len_single},
+ {0x2E, "O OO O ", "WRITE AND VERIFY(10)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 7, get_trans_len_2},
+ {0x2F, "O OO O ", "VERIFY(10)",
+ SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+ SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+ 7, get_trans_len_2},
+ {0x33, "O OO O ", "SET LIMITS(10)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x34, " O ", "READ POSITION",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT, 7, get_trans_len_read_pos},
+ {0x34, " O ", "GET DATA BUFFER STATUS",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x34, "O OO O ", "PRE-FETCH",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x35, "O OO O ", "SYNCHRONIZE CACHE",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x36, "O OO O ", "LOCK UNLOCK CACHE",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x37, "O O ", "READ DEFECT DATA(10)",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+ {0x37, " O ", "INIT ELEMENT STATUS WITH RANGE",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x38, " O O ", "MEDIUM SCAN",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+ {0x39, "OOOOOOOO ", "COMPARE",
+ SCST_DATA_WRITE, FLAG_NONE, 3, get_trans_len_3},
+ {0x3A, "OOOOOOOO ", "COPY AND VERIFY",
+ SCST_DATA_WRITE, FLAG_NONE, 3, get_trans_len_3},
+ {0x3B, "OOOOOOOOOOOOOOOO", "WRITE BUFFER",
+ SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 6, get_trans_len_3},
+ {0x3C, "OOOOOOOOOOOOOOOO", "READ BUFFER",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT, 6, get_trans_len_3},
+ {0x3D, " O O ", "UPDATE BLOCK",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED,
+ 0, get_trans_len_single},
+ {0x3E, "O OO O ", "READ LONG",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x3F, "O O O ", "WRITE LONG",
+ SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 7, get_trans_len_2},
+ {0x40, "OOOOOOOOOO ", "CHANGE DEFINITION",
+ SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 8, get_trans_len_1},
+ {0x41, "O O ", "WRITE SAME",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_single},
+ {0x42, " O ", "READ SUB-CHANNEL",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x43, " O ", "READ TOC/PMA/ATIP",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x44, " M ", "REPORT DENSITY SUPPORT",
+ SCST_DATA_READ, SCST_REG_RESERVE_ALLOWED, 7, get_trans_len_2},
+ {0x44, " O ", "READ HEADER",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x45, " O ", "PLAY AUDIO(10)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x46, " O ", "GET CONFIGURATION",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x47, " O ", "PLAY AUDIO MSF",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x48, " O ", "PLAY AUDIO TRACK INDEX",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x49, " O ", "PLAY TRACK RELATIVE(10)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x4A, " O ", "GET EVENT STATUS NOTIFICATION",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x4B, " O ", "PAUSE/RESUME",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x4C, "OOOOOOOOOOOOOOOO", "LOG SELECT",
+ SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 7, get_trans_len_2},
+ {0x4D, "OOOOOOOOOOOOOOOO", "LOG SENSE",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_REG_RESERVE_ALLOWED,
+ 7, get_trans_len_2},
+ {0x4E, " O ", "STOP PLAY/SCAN",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x50, " ", "XDWRITE",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x51, " O ", "READ DISC INFORMATION",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x51, " ", "XPWRITE",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x52, " O ", "READ TRACK INFORMATION",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x53, " O ", "RESERVE TRACK",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x54, " O ", "SEND OPC INFORMATION",
+ SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+ {0x55, "OOOOOOOOOOOOOOOO", "MODE SELECT(10)",
+ SCST_DATA_WRITE, SCST_LOCAL_CMD, 7, get_trans_len_2},
+ {0x56, "OOOOOOOOOOOOOOOO", "RESERVE(10)",
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD,
+ 0, get_trans_len_none},
+ {0x57, "OOOOOOOOOOOOOOOO", "RELEASE(10)",
+ SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD|SCST_REG_RESERVE_ALLOWED,
+ 0, get_trans_len_none},
+ {0x58, " O ", "REPAIR TRACK",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x5A, "OOOOOOOOOOOOOOOO", "MODE SENSE(10)",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT, 7, get_trans_len_2},
+ {0x5B, " O ", "CLOSE TRACK/SESSION",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x5C, " O ", "READ BUFFER CAPACITY",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+ {0x5D, " O ", "SEND CUE SHEET",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_3},
+ {0x5E, "OOOOO OOOO ", "PERSISTENT RESERV IN",
+ SCST_DATA_READ, FLAG_NONE, 5, get_trans_len_4},
+ {0x5F, "OOOOO OOOO ", "PERSISTENT RESERV OUT",
+ SCST_DATA_WRITE, FLAG_NONE, 5, get_trans_len_4},
+
+ /* 16-bytes length CDB */
+ {0x80, "O OO O ", "XDWRITE EXTENDED",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x80, " M ", "WRITE FILEMARKS",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0x81, "O OO O ", "REBUILD",
+ SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+ {0x82, "O OO O ", "REGENERATE",
+ SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+ {0x83, "OOOOOOOOOOOOOOOO", "EXTENDED COPY",
+ SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+ {0x84, "OOOOOOOOOOOOOOOO", "RECEIVE COPY RESULT",
+ SCST_DATA_WRITE, FLAG_NONE, 10, get_trans_len_4},
+ {0x86, "OOOOOOOOOO ", "ACCESS CONTROL IN",
+ SCST_DATA_NONE, SCST_REG_RESERVE_ALLOWED, 0, get_trans_len_none},
+ {0x87, "OOOOOOOOOO ", "ACCESS CONTROL OUT",
+ SCST_DATA_NONE, SCST_REG_RESERVE_ALLOWED, 0, get_trans_len_none},
+ {0x88, "M MMMM ", "READ(16)",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 10, get_trans_len_4},
+ {0x8A, "O OO O ", "WRITE(16)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 10, get_trans_len_4},
+ {0x8C, "OOOOOOOOOO ", "READ ATTRIBUTE",
+ SCST_DATA_READ, FLAG_NONE, 10, get_trans_len_4},
+ {0x8D, "OOOOOOOOOO ", "WRITE ATTRIBUTE",
+ SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+ {0x8E, "O OO O ", "WRITE AND VERIFY(16)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 10, get_trans_len_4},
+ {0x8F, "O OO O ", "VERIFY(16)",
+ SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+ SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+ 10, get_trans_len_4},
+ {0x90, "O OO O ", "PRE-FETCH(16)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x91, "O OO O ", "SYNCHRONIZE CACHE(16)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x91, " M ", "SPACE(16)",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x92, "O OO O ", "LOCK UNLOCK CACHE(16)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0x92, " O ", "LOCATE(16)",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0x93, "O O ", "WRITE SAME(16)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 10, get_trans_len_4},
+ {0x93, " M ", "ERASE(16)",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+ 0, get_trans_len_none},
+ {0x9E, "O ", "SERVICE ACTION IN",
+ SCST_DATA_READ, FLAG_NONE, 0, get_trans_len_serv_act_in},
+
+ /* 12-bytes length CDB */
+ {0xA0, "VVVVVVVVVV M ", "REPORT LUNS",
+ SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|SCST_SKIP_UA|
+ SCST_FULLY_LOCAL_CMD|SCST_LOCAL_CMD|
+ SCST_REG_RESERVE_ALLOWED,
+ 6, get_trans_len_4},
+ {0xA1, " O ", "BLANK",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0xA3, " O ", "SEND KEY",
+ SCST_DATA_WRITE, FLAG_NONE, 8, get_trans_len_2},
+ {0xA3, "OOOOO OOOO ", "REPORT DEVICE IDENTIDIER",
+ SCST_DATA_READ, SCST_REG_RESERVE_ALLOWED, 6, get_trans_len_4},
+ {0xA3, " M ", "MAINTENANCE(IN)",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+ {0xA4, " O ", "REPORT KEY",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+ {0xA4, " O ", "MAINTENANCE(OUT)",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+ {0xA5, " M ", "MOVE MEDIUM",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0xA5, " O ", "PLAY AUDIO(12)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0xA6, " O O ", "EXCHANGE/LOAD/UNLOAD MEDIUM",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0xA7, " O ", "SET READ AHEAD",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0xA8, " O ", "GET MESSAGE(12)",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+ {0xA8, "O OO O ", "READ(12)",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 6, get_trans_len_4},
+ {0xA9, " O ", "PLAY TRACK RELATIVE(12)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0xAA, "O OO O ", "WRITE(12)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 6, get_trans_len_4},
+ {0xAA, " O ", "SEND MESSAGE(12)",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+ {0xAC, " O ", "ERASE(12)",
+ SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+ {0xAC, " M ", "GET PERFORMANCE",
+ SCST_DATA_READ, SCST_UNKNOWN_LENGTH, 0, get_trans_len_none},
+ {0xAD, " O ", "READ DVD STRUCTURE",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+ {0xAE, "O OO O ", "WRITE AND VERIFY(12)",
+ SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+ 6, get_trans_len_4},
+ {0xAF, "O OO O ", "VERIFY(12)",
+ SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+ SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+ 6, get_trans_len_4},
+#if 0 /* No need to support at all */
+ {0xB0, " OO O ", "SEARCH DATA HIGH(12)",
+ SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+ {0xB1, " OO O ", "SEARCH DATA EQUAL(12)",
+ SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+ {0xB2, " OO O ", "SEARCH DATA LOW(12)",
+ SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+#endif
+ {0xB3, " OO O ", "SET LIMITS(12)",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0xB5, " O ", "REQUEST VOLUME ELEMENT ADDRESS",
+ SCST_DATA_READ, FLAG_NONE, 9, get_trans_len_1},
+ {0xB6, " O ", "SEND VOLUME TAG",
+ SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+ {0xB6, " M ", "SET STREAMING",
+ SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_2},
+ {0xB7, " O ", "READ DEFECT DATA(12)",
+ SCST_DATA_READ, FLAG_NONE, 9, get_trans_len_1},
+ {0xB8, " O ", "READ ELEMENT STATUS",
+ SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_3_read_elem_stat},
+ {0xB9, " O ", "READ CD MSF",
+ SCST_DATA_READ, SCST_UNKNOWN_LENGTH, 0, get_trans_len_none},
+ {0xBA, " O ", "SCAN",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+ {0xBA, " O ", "REDUNDANCY GROUP(IN)",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+ {0xBB, " O ", "SET SPEED",
+ SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+ {0xBB, " O ", "REDUNDANCY GROUP(OUT)",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+ {0xBC, " O ", "SPARE(IN)",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+ {0xBD, " O ", "MECHANISM STATUS",
+ SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+ {0xBD, " O ", "SPARE(OUT)",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+ {0xBE, " O ", "READ CD",
+ SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 6, get_trans_len_3},
+ {0xBE, " O ", "VOLUME SET(IN)",
+ SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+ {0xBF, " O ", "SEND DVD STRUCTUE",
+ SCST_DATA_WRITE, FLAG_NONE, 8, get_trans_len_2},
+ {0xBF, " O ", "VOLUME SET(OUT)",
+ SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+ {0xE7, " V ", "INIT ELEMENT STATUS WRANGE",
+ SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_cdb_len_10}
+};
+
+#define SCST_CDB_TBL_SIZE ((int)ARRAY_SIZE(scst_scsi_op_table))
+
+static void scst_free_tgt_dev(struct scst_tgt_dev *tgt_dev);
+static void scst_check_internal_sense(struct scst_device *dev, int result,
+ uint8_t *sense, int sense_len);
+static void scst_queue_report_luns_changed_UA(struct scst_session *sess,
+ int flags);
+static void __scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags);
+static void scst_alloc_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags);
+static void scst_free_all_UA(struct scst_tgt_dev *tgt_dev);
+static void scst_release_space(struct scst_cmd *cmd);
+static void scst_unblock_cmds(struct scst_device *dev);
+static void scst_clear_reservation(struct scst_tgt_dev *tgt_dev);
+static struct scst_tgt_dev *scst_alloc_add_tgt_dev(struct scst_session *sess,
+ struct scst_acg_dev *acg_dev);
+static void scst_tgt_retry_timer_fn(unsigned long arg);
+
+#ifdef CONFIG_SCST_DEBUG_TM
+static void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev);
+static void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev);
+#else
+static inline void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev) {}
+static inline void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev) {}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+/**
+ * scst_alloc_sense() - allocate sense buffer for command
+ *
+ * Allocates, if necessary, a sense buffer for the command. Returns 0 on
+ * success and an error code otherwise. Parameter "atomic" should be non-0
+ * if the function is called in atomic context.
+ */
+int scst_alloc_sense(struct scst_cmd *cmd, int atomic)
+{
+ int res = 0;
+ gfp_t gfp_mask = atomic ? GFP_ATOMIC : (GFP_KERNEL|__GFP_NOFAIL);
+
+ if (cmd->sense != NULL)
+ goto memzero;
+
+ cmd->sense = mempool_alloc(scst_sense_mempool, gfp_mask);
+ if (cmd->sense == NULL) {
+ PRINT_CRIT_ERROR("Sense memory allocation failed (op %x). "
+ "The sense data will be lost!!", cmd->cdb[0]);
+ res = -ENOMEM;
+ goto out;
+ }
+
+ cmd->sense_buflen = SCST_SENSE_BUFFERSIZE;
+
+memzero:
+ cmd->sense_valid_len = 0;
+ memset(cmd->sense, 0, cmd->sense_buflen);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_alloc_sense);
+
+/**
+ * scst_alloc_set_sense() - allocate and fill sense buffer for command
+ *
+ * Allocates, if necessary, a sense buffer for the command and copies into
+ * it the data from the supplied sense buffer. Returns 0 on success
+ * and an error code otherwise.
+ */
+int scst_alloc_set_sense(struct scst_cmd *cmd, int atomic,
+ const uint8_t *sense, unsigned int len)
+{
+ int res;
+
+ /*
+ * We don't check here if the existing sense is valid or not, because
+ * we suppose the caller did it based on cmd->status.
+ */
+
+ res = scst_alloc_sense(cmd, atomic);
+ if (res != 0) {
+ PRINT_BUFFER("Lost sense", sense, len);
+ goto out;
+ }
+
+ cmd->sense_valid_len = len;
+ if (cmd->sense_buflen < len) {
+ PRINT_WARNING("Sense truncated (needed %d), shall you increase "
+ "SCST_SENSE_BUFFERSIZE? Op: %x", len, cmd->cdb[0]);
+ cmd->sense_valid_len = cmd->sense_buflen;
+ }
+
+ memcpy(cmd->sense, sense, cmd->sense_valid_len);
+ TRACE_BUFFER("Sense set", cmd->sense, cmd->sense_valid_len);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_alloc_set_sense);
+
+/**
+ * scst_set_cmd_error_status() - set error SCSI status
+ * @cmd: SCST command
+ * @status: SCSI status to set
+ *
+ * Description:
+ * Sets the error SCSI status in the command and prepares the command to be
+ * returned. Returns 0 on success, an error code otherwise.
+ */
+int scst_set_cmd_error_status(struct scst_cmd *cmd, int status)
+{
+ int res = 0;
+
+ if (cmd->status != 0) {
+ TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+ cmd->status);
+ res = -EEXIST;
+ goto out;
+ }
+
+ cmd->status = status;
+ cmd->host_status = DID_OK;
+
+ cmd->dbl_ua_orig_resp_data_len = cmd->resp_data_len;
+ cmd->dbl_ua_orig_data_direction = cmd->data_direction;
+
+ cmd->data_direction = SCST_DATA_NONE;
+ cmd->resp_data_len = 0;
+ cmd->is_send_status = 1;
+
+ cmd->completed = 1;
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_set_cmd_error_status);
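+
+/*
+ * Illustrative usage (not part of this patch): a caller that needs to fail
+ * a command with a bare SCSI status, without sense data, can do so directly,
+ * e.g. on a reservation conflict. The reservation check below is only a
+ * hypothetical placeholder; SAM_STAT_RESERVATION_CONFLICT is the standard
+ * kernel SCSI status constant.
+ *
+ *	if (lun_is_reserved_by_other_nexus(cmd)) {
+ *		scst_set_cmd_error_status(cmd, SAM_STAT_RESERVATION_CONFLICT);
+ *		scst_set_cmd_abnormal_done_state(cmd);
+ *	}
+ */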
+
+static int scst_set_lun_not_supported_request_sense(struct scst_cmd *cmd,
+ int key, int asc, int ascq)
+{
+ int res;
+ int sense_len;
+
+ if (cmd->status != 0) {
+ TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+ cmd->status);
+ res = -EEXIST;
+ goto out;
+ }
+
+ if ((cmd->sg != NULL) && SCST_SENSE_VALID(sg_virt(cmd->sg))) {
+ TRACE_MGMT_DBG("cmd %p already has sense set", cmd);
+ res = -EEXIST;
+ goto out;
+ }
+
+ if (cmd->sg == NULL) {
+ if (cmd->bufflen == 0)
+ cmd->bufflen = cmd->cdb[4];
+
+ cmd->sg = scst_alloc(cmd->bufflen, GFP_ATOMIC, &cmd->sg_cnt);
+ if (cmd->sg == NULL) {
+ PRINT_ERROR("Unable to alloc sg for REQUEST SENSE"
+ "(sense %x/%x/%x)", key, asc, ascq);
+ res = 1;
+ goto out;
+ }
+ }
+
+ TRACE_MEM("sg %p alloced for sense for cmd %p (cnt %d, "
+ "len %d)", cmd->sg, cmd, cmd->sg_cnt, cmd->bufflen);
+
+ sense_len = scst_set_sense(sg_virt(cmd->sg),
+ cmd->bufflen, cmd->cdb[1] & 1, key, asc, ascq);
+ scst_set_resp_data_len(cmd, sense_len);
+
+ TRACE_BUFFER("Sense set", sg_virt(cmd->sg), sense_len);
+
+ res = 0;
+ cmd->completed = 1;
+
+out:
+ return res;
+}
+
+static int scst_set_lun_not_supported_inquiry(struct scst_cmd *cmd)
+{
+ int res;
+ uint8_t *buf;
+ int len;
+
+ if (cmd->status != 0) {
+ TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+ cmd->status);
+ res = -EEXIST;
+ goto out;
+ }
+
+ if (cmd->sg == NULL) {
+ if (cmd->bufflen == 0)
+ cmd->bufflen = min_t(int, 36, (cmd->cdb[3] << 8) | cmd->cdb[4]);
+
+ cmd->sg = scst_alloc(cmd->bufflen, GFP_ATOMIC, &cmd->sg_cnt);
+ if (cmd->sg == NULL) {
+ PRINT_ERROR("%s", "Unable to alloc sg for INQUIRY "
+ "for not supported LUN");
+ res = 1;
+ goto out;
+ }
+ }
+
+ TRACE_MEM("sg %p alloced INQUIRY for cmd %p (cnt %d, len %d)",
+ cmd->sg, cmd, cmd->sg_cnt, cmd->bufflen);
+
+ buf = sg_virt(cmd->sg);
+ len = min_t(int, 36, cmd->bufflen);
+
+ memset(buf, 0, len);
+ buf[0] = 0x7F; /* Peripheral qualifier 011b, Peripheral device type 1Fh */
+
+ TRACE_BUFFER("INQUIRY for not supported LUN set", buf, len);
+
+ res = 0;
+ cmd->completed = 1;
+
+out:
+ return res;
+}
+
+/**
+ * scst_set_cmd_error() - set error in the command and fill the sense buffer.
+ *
+ * Sets the error in the command and fills the sense buffer. Returns 0 on
+ * success, an error code otherwise.
+ */
+int scst_set_cmd_error(struct scst_cmd *cmd, int key, int asc, int ascq)
+{
+ int res;
+
+ /*
+ * LOGICAL UNIT NOT SUPPORTED needs special handling for
+ * REQUEST SENSE and INQUIRY.
+ */
+ if ((key == ILLEGAL_REQUEST) && (asc == 0x25) && (ascq == 0)) {
+ if (cmd->cdb[0] == REQUEST_SENSE)
+ res = scst_set_lun_not_supported_request_sense(cmd,
+ key, asc, ascq);
+ else if (cmd->cdb[0] == INQUIRY)
+ res = scst_set_lun_not_supported_inquiry(cmd);
+ else
+ goto do_sense;
+
+ if (res > 0)
+ goto do_sense;
+ else
+ goto out;
+ }
+
+do_sense:
+ res = scst_set_cmd_error_status(cmd, SAM_STAT_CHECK_CONDITION);
+ if (res != 0)
+ goto out;
+
+ res = scst_alloc_sense(cmd, 1);
+ if (res != 0) {
+ PRINT_ERROR("Lost sense data (key %x, asc %x, ascq %x)",
+ key, asc, ascq);
+ goto out;
+ }
+
+ cmd->sense_valid_len = scst_set_sense(cmd->sense, cmd->sense_buflen,
+ scst_get_cmd_dev_d_sense(cmd), key, asc, ascq);
+ TRACE_BUFFER("Sense set", cmd->sense, cmd->sense_valid_len);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_set_cmd_error);
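+
+/*
+ * Illustrative usage (not part of this patch): a dev handler rejecting a
+ * command with ILLEGAL REQUEST / INVALID FIELD IN CDB (asc 0x24, ascq 0x00).
+ * The validation check is a hypothetical placeholder and the
+ * scst_sense_invalid_field_in_cdb definition is assumed to come from the
+ * external headers of this patch series.
+ *
+ *	if (cdb_field_not_supported(cmd)) {
+ *		scst_set_cmd_error(cmd,
+ *			SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ *		scst_set_cmd_abnormal_done_state(cmd);
+ *	}
+ *
+ * SCST_LOAD_SENSE() expands a predefined sense triplet into the key, asc
+ * and ascq arguments, so the call above is equivalent to
+ * scst_set_cmd_error(cmd, ILLEGAL_REQUEST, 0x24, 0x00).
+ */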
+
+/**
+ * scst_set_sense() - set sense from KEY/ASC/ASCQ numbers
+ *
+ * Sets the corresponding fields in the sense buffer, taking the sense type
+ * into account. Returns the resulting sense length.
+ */
+int scst_set_sense(uint8_t *buffer, int len, bool d_sense,
+ int key, int asc, int ascq)
+{
+ int res;
+
+ BUG_ON(len == 0);
+
+ memset(buffer, 0, len);
+
+ if (d_sense) {
+ /* Descriptor format */
+ if (len < 8) {
+ PRINT_ERROR("Length %d of sense buffer too small to "
+ "fit sense %x:%x:%x", len, key, asc, ascq);
+ }
+
+ buffer[0] = 0x72; /* Response Code */
+ if (len > 1)
+ buffer[1] = key; /* Sense Key */
+ if (len > 2)
+ buffer[2] = asc; /* ASC */
+ if (len > 3)
+ buffer[3] = ascq; /* ASCQ */
+ res = 8;
+ } else {
+ /* Fixed format */
+ if (len < 18) {
+ PRINT_ERROR("Length %d of sense buffer too small to "
+ "fit sense %x:%x:%x", len, key, asc, ascq);
+ }
+
+ buffer[0] = 0x70; /* Response Code */
+ if (len > 2)
+ buffer[2] = key; /* Sense Key */
+ if (len > 7)
+ buffer[7] = 0x0a; /* Additional Sense Length */
+ if (len > 12)
+ buffer[12] = asc; /* ASC */
+ if (len > 13)
+ buffer[13] = ascq; /* ASCQ */
+ res = 18;
+ }
+
+ TRACE_BUFFER("Sense set", buffer, res);
+ return res;
+}
+EXPORT_SYMBOL(scst_set_sense);
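+
+/*
+ * Worked example (illustrative only): with d_sense false and a POWER ON
+ * Unit Attention (key UNIT_ATTENTION = 0x6, asc 0x29, ascq 0x00), the call
+ *
+ *	uint8_t sense[SCST_STANDARD_SENSE_LEN];
+ *	int sl = scst_set_sense(sense, sizeof(sense), false,
+ *			UNIT_ATTENTION, 0x29, 0x00);
+ *
+ * returns sl == 18 and fills a fixed-format buffer: sense[0] = 0x70,
+ * sense[2] = 0x06, sense[7] = 0x0a, sense[12] = 0x29, sense[13] = 0x00,
+ * all other bytes zero. With d_sense true the same triplet goes into
+ * bytes 1..3 of a 0x72 descriptor-format header and the returned length is 8.
+ */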
+
+/**
+ * scst_analyze_sense() - analyze sense
+ *
+ * Returns true if the sense matches (key, asc, ascq) and false otherwise.
+ * Valid_mask is one or more SCST_SENSE_*_VALID constants selecting which of
+ * the (key, asc, ascq) values must match.
+ */
+bool scst_analyze_sense(const uint8_t *sense, int len, unsigned int valid_mask,
+ int key, int asc, int ascq)
+{
+ bool res = false;
+
+ /* Response Code */
+ if ((sense[0] == 0x70) || (sense[0] == 0x71)) {
+ /* Fixed format */
+
+ /* Sense Key */
+ if (valid_mask & SCST_SENSE_KEY_VALID) {
+ if (len < 3)
+ goto out;
+ if (sense[2] != key)
+ goto out;
+ }
+
+ /* ASC */
+ if (valid_mask & SCST_SENSE_ASC_VALID) {
+ if (len < 13)
+ goto out;
+ if (sense[12] != asc)
+ goto out;
+ }
+
+ /* ASCQ */
+ if (valid_mask & SCST_SENSE_ASCQ_VALID) {
+ if (len < 14)
+ goto out;
+ if (sense[13] != ascq)
+ goto out;
+ }
+ } else if ((sense[0] == 0x72) || (sense[0] == 0x73)) {
+ /* Descriptor format */
+
+ /* Sense Key */
+ if (valid_mask & SCST_SENSE_KEY_VALID) {
+ if (len < 2)
+ goto out;
+ if (sense[1] != key)
+ goto out;
+ }
+
+ /* ASC */
+ if (valid_mask & SCST_SENSE_ASC_VALID) {
+ if (len < 3)
+ goto out;
+ if (sense[2] != asc)
+ goto out;
+ }
+
+ /* ASCQ */
+ if (valid_mask & SCST_SENSE_ASCQ_VALID) {
+ if (len < 4)
+ goto out;
+ if (sense[3] != ascq)
+ goto out;
+ }
+ } else
+ goto out;
+
+ res = true;
+
+out:
+ return res;
+}
+EXPORT_SYMBOL(scst_analyze_sense);
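+
+/*
+ * Illustrative usage (not part of this patch): valid_mask selects which of
+ * the three values must match. The handle_*() callees below are hypothetical.
+ *
+ * Any MEDIUM ERROR, regardless of asc/ascq:
+ *
+ *	if (scst_analyze_sense(sense, sense_len, SCST_SENSE_KEY_VALID,
+ *			       MEDIUM_ERROR, 0, 0))
+ *		handle_medium_error();
+ *
+ * Exactly the POWER ON Unit Attention:
+ *
+ *	if (scst_analyze_sense(sense, sense_len, SCST_SENSE_ALL_VALID,
+ *			       UNIT_ATTENTION, 0x29, 0x00))
+ *		handle_power_on_ua();
+ */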
+
+/**
+ * scst_is_ua_sense() - determine if the sense is UA sense
+ *
+ * Returns true if the sense is valid and carries a Unit
+ * Attention, false otherwise.
+ */
+bool scst_is_ua_sense(const uint8_t *sense, int len)
+{
+ if (SCST_SENSE_VALID(sense))
+ return scst_analyze_sense(sense, len,
+ SCST_SENSE_KEY_VALID, UNIT_ATTENTION, 0, 0);
+ else
+ return false;
+}
+EXPORT_SYMBOL(scst_is_ua_sense);
+
+bool scst_is_ua_global(const uint8_t *sense, int len)
+{
+ bool res;
+
+ /* When changing this, don't forget to change scst_requeue_ua() as well! */
+
+ if (scst_analyze_sense(sense, len, SCST_SENSE_ALL_VALID,
+ SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed)))
+ res = true;
+ else
+ res = false;
+
+ return res;
+}
+
+/**
+ * scst_check_convert_sense() - check sense type and convert it if needed
+ *
+ * Checks whether the sense in the sense buffer, if any, is in the correct
+ * format and, if not, converts it to the correct format.
+ */
+void scst_check_convert_sense(struct scst_cmd *cmd)
+{
+ bool d_sense;
+
+ if ((cmd->sense == NULL) || (cmd->status != SAM_STAT_CHECK_CONDITION))
+ goto out;
+
+ d_sense = scst_get_cmd_dev_d_sense(cmd);
+ if (d_sense && ((cmd->sense[0] == 0x70) || (cmd->sense[0] == 0x71))) {
+ TRACE_MGMT_DBG("Converting fixed sense to descriptor (cmd %p)",
+ cmd);
+ if (cmd->sense_valid_len < 18) {
+ PRINT_ERROR("Sense too small to convert (valid %d, "
+ "type: fixed)", cmd->sense_valid_len);
+ goto out;
+ }
+ cmd->sense_valid_len = scst_set_sense(cmd->sense, cmd->sense_buflen,
+ d_sense, cmd->sense[2], cmd->sense[12], cmd->sense[13]);
+ } else if (!d_sense && ((cmd->sense[0] == 0x72) ||
+ (cmd->sense[0] == 0x73))) {
+ TRACE_MGMT_DBG("Converting descriptor sense to fixed (cmd %p)",
+ cmd);
+ if ((cmd->sense_buflen < 18) || (cmd->sense_valid_len < 8)) {
+ PRINT_ERROR("Sense too small to convert (%d, "
+ "type: descryptor, valid %d)",
+ cmd->sense_buflen, cmd->sense_valid_len);
+ goto out;
+ }
+ cmd->sense_valid_len = scst_set_sense(cmd->sense,
+ cmd->sense_buflen, d_sense,
+ cmd->sense[1], cmd->sense[2], cmd->sense[3]);
+ }
+
+out:
+ return;
+}
+EXPORT_SYMBOL(scst_check_convert_sense);
+
+static int scst_set_cmd_error_sense(struct scst_cmd *cmd, uint8_t *sense,
+ unsigned int len)
+{
+ int res;
+
+ res = scst_set_cmd_error_status(cmd, SAM_STAT_CHECK_CONDITION);
+ if (res != 0)
+ goto out;
+
+ res = scst_alloc_set_sense(cmd, 1, sense, len);
+
+out:
+ return res;
+}
+
+/**
+ * scst_set_busy() - set BUSY or TASK QUEUE FULL status
+ *
+ * Sets BUSY or TASK SET FULL status depending on whether this session has
+ * other outstanding commands.
+ */
+void scst_set_busy(struct scst_cmd *cmd)
+{
+ int c = atomic_read(&cmd->sess->sess_cmd_count);
+
+ if ((c <= 1) || (cmd->sess->init_phase != SCST_SESS_IPH_READY)) {
+ scst_set_cmd_error_status(cmd, SAM_STAT_BUSY);
+ TRACE(TRACE_FLOW_CONTROL, "Sending BUSY status to initiator %s "
+ "(cmds count %d, queue_type %x, sess->init_phase %d)",
+ cmd->sess->initiator_name, c,
+ cmd->queue_type, cmd->sess->init_phase);
+ } else {
+ scst_set_cmd_error_status(cmd, SAM_STAT_TASK_SET_FULL);
+ TRACE(TRACE_FLOW_CONTROL, "Sending QUEUE_FULL status to "
+ "initiator %s (cmds count %d, queue_type %x, "
+ "sess->init_phase %d)", cmd->sess->initiator_name, c,
+ cmd->queue_type, cmd->sess->init_phase);
+ }
+ return;
+}
+EXPORT_SYMBOL(scst_set_busy);
+
+/**
+ * scst_set_initial_UA() - set initial Unit Attention
+ *
+ * Sets the initial Unit Attention on all devices of the session,
+ * replacing the default scst_sense_reset_UA.
+ */
+void scst_set_initial_UA(struct scst_session *sess, int key, int asc, int ascq)
+{
+ int i;
+
+ TRACE_MGMT_DBG("Setting for sess %p initial UA %x/%x/%x", sess, key,
+ asc, ascq);
+
+ /* Protect sess_tgt_dev_list_hash */
+ mutex_lock(&scst_mutex);
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+ if (!list_empty(&tgt_dev->UA_list)) {
+ struct scst_tgt_dev_UA *ua;
+
+ ua = list_entry(tgt_dev->UA_list.next,
+ typeof(*ua), UA_list_entry);
+ if (scst_analyze_sense(ua->UA_sense_buffer,
+ ua->UA_valid_sense_len,
+ SCST_SENSE_ALL_VALID,
+ SCST_LOAD_SENSE(scst_sense_reset_UA))) {
+ ua->UA_valid_sense_len = scst_set_sense(
+ ua->UA_sense_buffer,
+ sizeof(ua->UA_sense_buffer),
+ tgt_dev->dev->d_sense,
+ key, asc, ascq);
+ } else
+ PRINT_ERROR("%s",
+ "The first UA isn't RESET UA");
+ } else
+ PRINT_ERROR("%s", "There's no RESET UA to "
+ "replace");
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+ mutex_unlock(&scst_mutex);
+ return;
+}
+EXPORT_SYMBOL(scst_set_initial_UA);
+
+static struct scst_aen *scst_alloc_aen(struct scst_session *sess,
+ uint64_t unpacked_lun)
+{
+ struct scst_aen *aen;
+
+ aen = mempool_alloc(scst_aen_mempool, GFP_KERNEL);
+ if (aen == NULL) {
+ PRINT_ERROR("AEN memory allocation failed. Corresponding "
+ "event notification will not be performed (initiator "
+ "%s)", sess->initiator_name);
+ goto out;
+ }
+ memset(aen, 0, sizeof(*aen));
+
+ aen->sess = sess;
+ scst_sess_get(sess);
+
+ aen->lun = scst_pack_lun(unpacked_lun, sess->acg->addr_method);
+
+out:
+ return aen;
+}
+
+static void scst_free_aen(struct scst_aen *aen)
+{
+
+ scst_sess_put(aen->sess);
+ mempool_free(aen, scst_aen_mempool);
+ return;
+}
+
+/* Must be called under scst_mutex */
+void scst_gen_aen_or_ua(struct scst_tgt_dev *tgt_dev,
+ int key, int asc, int ascq)
+{
+ struct scst_tgt_template *tgtt = tgt_dev->sess->tgt->tgtt;
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+ int sl;
+
+ if (tgtt->report_aen != NULL) {
+ struct scst_aen *aen;
+ int rc;
+
+ aen = scst_alloc_aen(tgt_dev->sess, tgt_dev->lun);
+ if (aen == NULL)
+ goto queue_ua;
+
+ aen->event_fn = SCST_AEN_SCSI;
+ aen->aen_sense_len = scst_set_sense(aen->aen_sense,
+ sizeof(aen->aen_sense), tgt_dev->dev->d_sense,
+ key, asc, ascq);
+
+ TRACE_DBG("Calling target's %s report_aen(%p)",
+ tgtt->name, aen);
+ rc = tgtt->report_aen(aen);
+ TRACE_DBG("Target's %s report_aen(%p) returned %d",
+ tgtt->name, aen, rc);
+ if (rc == SCST_AEN_RES_SUCCESS)
+ goto out;
+
+ scst_free_aen(aen);
+ }
+
+queue_ua:
+ TRACE_MGMT_DBG("AEN not supported, queuing plain UA (tgt_dev %p)",
+ tgt_dev);
+ sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ tgt_dev->dev->d_sense, key, asc, ascq);
+ scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+
+out:
+ return;
+}
+
+/**
+ * scst_capacity_data_changed() - notify SCST about device capacity change
+ *
+ * Notifies SCST core that dev has changed its capacity. Called under no locks.
+ */
+void scst_capacity_data_changed(struct scst_device *dev)
+{
+ struct scst_tgt_dev *tgt_dev;
+
+ if (dev->type != TYPE_DISK) {
+ TRACE_MGMT_DBG("Device type %d isn't for CAPACITY DATA "
+ "CHANGED UA", dev->type);
+ goto out;
+ }
+
+ TRACE_MGMT_DBG("CAPACITY DATA CHANGED (dev %p)", dev);
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ scst_gen_aen_or_ua(tgt_dev,
+ SCST_LOAD_SENSE(scst_sense_capacity_data_changed));
+ }
+
+ mutex_unlock(&scst_mutex);
+
+out:
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_capacity_data_changed);
+
+static inline bool scst_is_report_luns_changed_type(int type)
+{
+ switch (type) {
+ case TYPE_DISK:
+ case TYPE_TAPE:
+ case TYPE_PRINTER:
+ case TYPE_PROCESSOR:
+ case TYPE_WORM:
+ case TYPE_ROM:
+ case TYPE_SCANNER:
+ case TYPE_MOD:
+ case TYPE_MEDIUM_CHANGER:
+ case TYPE_RAID:
+ case TYPE_ENCLOSURE:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* scst_mutex supposed to be held */
+static void scst_queue_report_luns_changed_UA(struct scst_session *sess,
+ int flags)
+{
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+ struct list_head *shead;
+ struct scst_tgt_dev *tgt_dev;
+ int i;
+
+ TRACE_MGMT_DBG("Queuing REPORTED LUNS DATA CHANGED UA "
+ "(sess %p)", sess);
+
+ local_bh_disable();
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ /* Lockdep triggers a false positive here. */
+ spin_lock(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ int sl;
+
+ if (!scst_is_report_luns_changed_type(
+ tgt_dev->dev->type))
+ continue;
+
+ sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ tgt_dev->dev->d_sense,
+ SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed));
+
+ __scst_check_set_UA(tgt_dev, sense_buffer,
+ sl, flags | SCST_SET_UA_FLAG_GLOBAL);
+ }
+ }
+
+ for (i = TGT_DEV_HASH_SIZE-1; i >= 0; i--) {
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry_reverse(tgt_dev,
+ shead, sess_tgt_dev_list_entry) {
+ spin_unlock(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+ local_bh_enable();
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_report_luns_changed_sess(struct scst_session *sess)
+{
+ int i;
+ struct scst_tgt_template *tgtt = sess->tgt->tgtt;
+ int d_sense = 0;
+ uint64_t lun = 0;
+
+ TRACE_DBG("REPORTED LUNS DATA CHANGED (sess %p)", sess);
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *shead;
+ struct scst_tgt_dev *tgt_dev;
+
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ if (scst_is_report_luns_changed_type(
+ tgt_dev->dev->type)) {
+ lun = tgt_dev->lun;
+ d_sense = tgt_dev->dev->d_sense;
+ goto found;
+ }
+ }
+ }
+
+found:
+ if (tgtt->report_aen != NULL) {
+ struct scst_aen *aen;
+ int rc;
+
+ aen = scst_alloc_aen(sess, lun);
+ if (aen == NULL)
+ goto queue_ua;
+
+ aen->event_fn = SCST_AEN_SCSI;
+ aen->aen_sense_len = scst_set_sense(aen->aen_sense,
+ sizeof(aen->aen_sense), d_sense,
+ SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed));
+
+ TRACE_DBG("Calling target's %s report_aen(%p)",
+ tgtt->name, aen);
+ rc = tgtt->report_aen(aen);
+ TRACE_DBG("Target's %s report_aen(%p) returned %d",
+ tgtt->name, aen, rc);
+ if (rc == SCST_AEN_RES_SUCCESS)
+ goto out;
+
+ scst_free_aen(aen);
+ }
+
+queue_ua:
+ scst_queue_report_luns_changed_UA(sess, 0);
+
+out:
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_report_luns_changed(struct scst_acg *acg)
+{
+ struct scst_session *sess;
+
+ TRACE_MGMT_DBG("REPORTED LUNS DATA CHANGED (acg %s)", acg->acg_name);
+
+ list_for_each_entry(sess, &acg->acg_sess_list, acg_sess_list_entry) {
+ scst_report_luns_changed_sess(sess);
+ }
+ return;
+}
+
+/**
+ * scst_aen_done() - AEN processing done
+ *
+ * Notifies SCST that the driver has sent the AEN and it
+ * can be freed now. If the delivery status isn't success, set it
+ * using scst_set_aen_delivery_status() before calling
+ * this function.
+ */
+void scst_aen_done(struct scst_aen *aen)
+{
+
+ TRACE_MGMT_DBG("AEN %p (fn %d) done (initiator %s)", aen,
+ aen->event_fn, aen->sess->initiator_name);
+
+ if (aen->delivery_status == SCST_AEN_RES_SUCCESS)
+ goto out_free;
+
+ if (aen->event_fn != SCST_AEN_SCSI)
+ goto out_free;
+
+ TRACE_MGMT_DBG("Delivery of SCSI AEN failed (initiator %s)",
+ aen->sess->initiator_name);
+
+ if (scst_analyze_sense(aen->aen_sense, aen->aen_sense_len,
+ SCST_SENSE_ALL_VALID, SCST_LOAD_SENSE(
+ scst_sense_reported_luns_data_changed))) {
+ mutex_lock(&scst_mutex);
+ scst_queue_report_luns_changed_UA(aen->sess,
+ SCST_SET_UA_FLAG_AT_HEAD);
+ mutex_unlock(&scst_mutex);
+ } else {
+ struct list_head *shead;
+ struct scst_tgt_dev *tgt_dev;
+ uint64_t lun;
+
+ lun = scst_unpack_lun((uint8_t *)&aen->lun, sizeof(aen->lun));
+
+ mutex_lock(&scst_mutex);
+
+ /* tgt_dev might have been freed, so we need to look it up again */
+ shead = &aen->sess->sess_tgt_dev_list_hash[HASH_VAL(lun)];
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->lun == lun) {
+ TRACE_MGMT_DBG("Requeuing failed AEN UA for "
+ "tgt_dev %p", tgt_dev);
+ scst_check_set_UA(tgt_dev, aen->aen_sense,
+ aen->aen_sense_len,
+ SCST_SET_UA_FLAG_AT_HEAD);
+ break;
+ }
+ }
+
+ mutex_unlock(&scst_mutex);
+ }
+
+out_free:
+ scst_free_aen(aen);
+ return;
+}
+EXPORT_SYMBOL(scst_aen_done);
+
+void scst_requeue_ua(struct scst_cmd *cmd)
+{
+
+ if (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+ SCST_SENSE_ALL_VALID,
+ SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed))) {
+ TRACE_MGMT_DBG("Requeuing REPORTED LUNS DATA CHANGED UA "
+ "for delivery failed cmd %p", cmd);
+ mutex_lock(&scst_mutex);
+ scst_queue_report_luns_changed_UA(cmd->sess,
+ SCST_SET_UA_FLAG_AT_HEAD);
+ mutex_unlock(&scst_mutex);
+ } else {
+ TRACE_MGMT_DBG("Requeuing UA for delivery failed cmd %p", cmd);
+ scst_check_set_UA(cmd->tgt_dev, cmd->sense,
+ cmd->sense_valid_len, SCST_SET_UA_FLAG_AT_HEAD);
+ }
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_check_reassign_sess(struct scst_session *sess)
+{
+ struct scst_acg *acg, *old_acg;
+ struct scst_acg_dev *acg_dev;
+ int i;
+ struct list_head *shead;
+ struct scst_tgt_dev *tgt_dev;
+ bool luns_changed = false;
+ bool add_failed, something_freed, not_needed_freed = false;
+
+ TRACE_MGMT_DBG("Checking reassignment for sess %p (initiator %s)",
+ sess, sess->initiator_name);
+
+ acg = scst_find_acg(sess);
+ if (acg == sess->acg) {
+ TRACE_MGMT_DBG("No reassignment for sess %p", sess);
+ goto out;
+ }
+
+ TRACE_MGMT_DBG("sess %p will be reassigned from acg %s to acg %s",
+ sess, sess->acg->acg_name, acg->acg_name);
+
+ old_acg = sess->acg;
+ sess->acg = NULL; /* to catch implicit dependencies earlier */
+
+retry_add:
+ add_failed = false;
+ list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+ unsigned int inq_changed_ua_needed = 0;
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ if ((tgt_dev->dev == acg_dev->dev) &&
+ (tgt_dev->lun == acg_dev->lun) &&
+ (tgt_dev->acg_dev->rd_only == acg_dev->rd_only)) {
+ TRACE_MGMT_DBG("sess %p: tgt_dev %p for "
+ "LUN %lld stays the same",
+ sess, tgt_dev,
+ (unsigned long long)tgt_dev->lun);
+ tgt_dev->acg_dev = acg_dev;
+ goto next;
+ } else if (tgt_dev->lun == acg_dev->lun)
+ inq_changed_ua_needed = 1;
+ }
+ }
+
+ luns_changed = true;
+
+ TRACE_MGMT_DBG("sess %p: Allocing new tgt_dev for LUN %lld",
+ sess, (unsigned long long)acg_dev->lun);
+
+ tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+ if (tgt_dev == NULL) {
+ add_failed = true;
+ break;
+ }
+
+ tgt_dev->inq_changed_ua_needed = inq_changed_ua_needed ||
+ not_needed_freed;
+next:
+ continue;
+ }
+
+ something_freed = false;
+ not_needed_freed = true;
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct scst_tgt_dev *t;
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry_safe(tgt_dev, t, shead,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->acg_dev->acg != acg) {
+ TRACE_MGMT_DBG("sess %p: Deleting not used "
+ "tgt_dev %p for LUN %lld",
+ sess, tgt_dev,
+ (unsigned long long)tgt_dev->lun);
+ luns_changed = true;
+ something_freed = true;
+ scst_free_tgt_dev(tgt_dev);
+ }
+ }
+ }
+
+ if (add_failed && something_freed) {
+ TRACE_MGMT_DBG("sess %p: Retrying adding new tgt_devs", sess);
+ goto retry_add;
+ }
+
+ sess->acg = acg;
+
+ TRACE_DBG("Moving sess %p from acg %s to acg %s", sess,
+ old_acg->acg_name, acg->acg_name);
+ list_move_tail(&sess->acg_sess_list_entry, &acg->acg_sess_list);
+
+ if (luns_changed) {
+ scst_report_luns_changed_sess(sess);
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ shead = &sess->sess_tgt_dev_list_hash[i];
+
+ list_for_each_entry(tgt_dev, shead,
+ sess_tgt_dev_list_entry) {
+ if (tgt_dev->inq_changed_ua_needed) {
+ TRACE_MGMT_DBG("sess %p: Setting "
+ "INQUIRY DATA HAS CHANGED UA "
+ "(tgt_dev %p)", sess, tgt_dev);
+
+ tgt_dev->inq_changed_ua_needed = 0;
+
+ scst_gen_aen_or_ua(tgt_dev,
+ SCST_LOAD_SENSE(scst_sense_inquery_data_changed));
+ }
+ }
+ }
+ }
+
+out:
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_check_reassign_sessions(void)
+{
+ struct scst_tgt_template *tgtt;
+
+ list_for_each_entry(tgtt, &scst_template_list, scst_template_list_entry) {
+ struct scst_tgt *tgt;
+ list_for_each_entry(tgt, &tgtt->tgt_list, tgt_list_entry) {
+ struct scst_session *sess;
+ list_for_each_entry(sess, &tgt->sess_list,
+ sess_list_entry) {
+ scst_check_reassign_sess(sess);
+ }
+ }
+ }
+ return;
+}
+
+/**
+ * scst_get_cmd_abnormal_done_state() - get command's next abnormal done state
+ *
+ * Returns the next state of the SCSI target state machine for a command that
+ * has completed abnormally.
+ */
+int scst_get_cmd_abnormal_done_state(const struct scst_cmd *cmd)
+{
+ int res;
+
+ switch (cmd->state) {
+ case SCST_CMD_STATE_INIT_WAIT:
+ case SCST_CMD_STATE_INIT:
+ case SCST_CMD_STATE_PRE_PARSE:
+ case SCST_CMD_STATE_DEV_PARSE:
+ if (cmd->preprocessing_only) {
+ res = SCST_CMD_STATE_PREPROCESSING_DONE;
+ break;
+ } /* else fall through */
+ case SCST_CMD_STATE_DEV_DONE:
+ if (cmd->internal)
+ res = SCST_CMD_STATE_FINISHED_INTERNAL;
+ else
+ res = SCST_CMD_STATE_PRE_XMIT_RESP;
+ break;
+
+ case SCST_CMD_STATE_PRE_DEV_DONE:
+ case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+ res = SCST_CMD_STATE_DEV_DONE;
+ break;
+
+ case SCST_CMD_STATE_PRE_XMIT_RESP:
+ res = SCST_CMD_STATE_XMIT_RESP;
+ break;
+
+ case SCST_CMD_STATE_PREPROCESSING_DONE:
+ case SCST_CMD_STATE_PREPROCESSING_DONE_CALLED:
+ if (cmd->tgt_dev == NULL)
+ res = SCST_CMD_STATE_PRE_XMIT_RESP;
+ else
+ res = SCST_CMD_STATE_PRE_DEV_DONE;
+ break;
+
+ case SCST_CMD_STATE_PREPARE_SPACE:
+ if (cmd->preprocessing_only) {
+ res = SCST_CMD_STATE_PREPROCESSING_DONE;
+ break;
+ } /* else fall through */
+ case SCST_CMD_STATE_RDY_TO_XFER:
+ case SCST_CMD_STATE_DATA_WAIT:
+ case SCST_CMD_STATE_TGT_PRE_EXEC:
+ case SCST_CMD_STATE_SEND_FOR_EXEC:
+ case SCST_CMD_STATE_LOCAL_EXEC:
+ case SCST_CMD_STATE_REAL_EXEC:
+ case SCST_CMD_STATE_REAL_EXECUTING:
+ res = SCST_CMD_STATE_PRE_DEV_DONE;
+ break;
+
+ default:
+ PRINT_CRIT_ERROR("Wrong cmd state %d (cmd %p, op %x)",
+ cmd->state, cmd, cmd->cdb[0]);
+ BUG();
+ /* Invalid state to suppress compiler warning */
+ res = SCST_CMD_STATE_LAST_ACTIVE;
+ }
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_get_cmd_abnormal_done_state);
+
+/**
+ * scst_set_cmd_abnormal_done_state() - set command's next abnormal done state
+ *
+ * Sets the state of the SCSI target state machine for a command that has
+ * completed abnormally.
+ */
+void scst_set_cmd_abnormal_done_state(struct scst_cmd *cmd)
+{
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ switch (cmd->state) {
+ case SCST_CMD_STATE_XMIT_RESP:
+ case SCST_CMD_STATE_FINISHED:
+ case SCST_CMD_STATE_FINISHED_INTERNAL:
+ case SCST_CMD_STATE_XMIT_WAIT:
+ PRINT_CRIT_ERROR("Wrong cmd state %d (cmd %p, op %x)",
+ cmd->state, cmd, cmd->cdb[0]);
+ BUG();
+ }
+#endif
+
+ cmd->state = scst_get_cmd_abnormal_done_state(cmd);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (((cmd->state != SCST_CMD_STATE_PRE_XMIT_RESP) &&
+ (cmd->state != SCST_CMD_STATE_PREPROCESSING_DONE)) &&
+ (cmd->tgt_dev == NULL) && !cmd->internal) {
+ PRINT_CRIT_ERROR("Wrong not inited cmd state %d (cmd %p, "
+ "op %x)", cmd->state, cmd, cmd->cdb[0]);
+ BUG();
+ }
+#endif
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_set_cmd_abnormal_done_state);
+
+/**
+ * scst_set_resp_data_len() - set response data length
+ *
+ * Sets response data length for cmd and truncates its SG vector accordingly.
+ *
+ * cmd->resp_data_len must not be set directly; it must be set only
+ * using this function. The value of resp_data_len must be <= cmd->bufflen.
+ */
+void scst_set_resp_data_len(struct scst_cmd *cmd, int resp_data_len)
+{
+ int i, l;
+
+ scst_check_restore_sg_buff(cmd);
+ cmd->resp_data_len = resp_data_len;
+
+ if (resp_data_len == cmd->bufflen)
+ goto out;
+
+ l = 0;
+ for (i = 0; i < cmd->sg_cnt; i++) {
+ l += cmd->sg[i].length;
+ if (l >= resp_data_len) {
+ int left = resp_data_len - (l - cmd->sg[i].length);
+#ifdef CONFIG_SCST_DEBUG
+ TRACE(TRACE_SG_OP|TRACE_MEMORY, "cmd %p (tag %llu), "
+ "resp_data_len %d, i %d, cmd->sg[i].length %d, "
+ "left %d",
+ cmd, (long long unsigned int)cmd->tag,
+ resp_data_len, i,
+ cmd->sg[i].length, left);
+#endif
+ cmd->orig_sg_cnt = cmd->sg_cnt;
+ cmd->orig_sg_entry = i;
+ cmd->orig_entry_len = cmd->sg[i].length;
+ cmd->sg_cnt = (left > 0) ? i+1 : i;
+ cmd->sg[i].length = left;
+ cmd->sg_buff_modified = 1;
+ break;
+ }
+ }
+
+out:
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_set_resp_data_len);
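+
+/*
+ * Worked example (illustrative only): for a cmd whose 8192-byte buffer is
+ * spread over two 4096-byte SG entries, scst_set_resp_data_len(cmd, 6000)
+ * keeps sg_cnt at 2 and shrinks sg[1].length to 6000 - 4096 = 1904, saving
+ * the original values in orig_sg_cnt/orig_sg_entry/orig_entry_len so that
+ * scst_check_restore_sg_buff() can restore the vector later. A length that
+ * ends exactly on an entry boundary (e.g. 4096) instead drops the trailing
+ * entry, i.e. sg_cnt becomes 1.
+ */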
+
+/* No locks */
+int scst_queue_retry_cmd(struct scst_cmd *cmd, int finished_cmds)
+{
+ struct scst_tgt *tgt = cmd->tgt;
+ int res = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tgt->tgt_lock, flags);
+ tgt->retry_cmds++;
+ /*
+ * A memory barrier is needed here to order the write to retry_cmds
+ * against the read of finished_cmds, so we don't miss the case when
+ * a command finishes while we are queuing this one for retry after
+ * the finished_cmds check.
+ */
+ smp_mb();
+ TRACE_RETRY("TGT QUEUE FULL: incrementing retry_cmds %d",
+ tgt->retry_cmds);
+ if (finished_cmds != atomic_read(&tgt->finished_cmds)) {
+ /* At least one cmd finished, so try again */
+ tgt->retry_cmds--;
+ TRACE_RETRY("Some command(s) finished, direct retry "
+ "(finished_cmds=%d, tgt->finished_cmds=%d, "
+ "retry_cmds=%d)", finished_cmds,
+ atomic_read(&tgt->finished_cmds), tgt->retry_cmds);
+ res = -1;
+ goto out_unlock_tgt;
+ }
+
+ TRACE_RETRY("Adding cmd %p to retry cmd list", cmd);
+ list_add_tail(&cmd->cmd_list_entry, &tgt->retry_cmd_list);
+
+ if (!tgt->retry_timer_active) {
+ tgt->retry_timer.expires = jiffies + SCST_TGT_RETRY_TIMEOUT;
+ add_timer(&tgt->retry_timer);
+ tgt->retry_timer_active = 1;
+ }
+
+out_unlock_tgt:
+ spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+ return res;
+}
+
+/* Returns 0 to continue, >0 to restart, <0 to break */
+static int scst_check_hw_pending_cmd(struct scst_cmd *cmd,
+ unsigned long cur_time, unsigned long max_time,
+ struct scst_session *sess, unsigned long *flags,
+ struct scst_tgt_template *tgtt)
+{
+ int res = -1; /* break */
+
+ TRACE_DBG("cmd %p, hw_pending %d, proc time %ld, "
+ "pending time %ld", cmd, cmd->cmd_hw_pending,
+ (long)(cur_time - cmd->start_time) / HZ,
+ (long)(cur_time - cmd->hw_pending_start) / HZ);
+
+ if (time_before_eq(cur_time, cmd->start_time + max_time)) {
+ /* Cmds are ordered, so no need to check more */
+ goto out;
+ }
+
+ if (!cmd->cmd_hw_pending) {
+ res = 0; /* continue */
+ goto out;
+ }
+
+ if (time_before(cur_time, cmd->hw_pending_start + max_time)) {
+ /* Cmds are ordered, so no need to check more */
+ goto out;
+ }
+
+ TRACE_MGMT_DBG("Cmd %p HW pending for too long %ld (state %x)",
+ cmd, (cur_time - cmd->hw_pending_start) / HZ,
+ cmd->state);
+
+ cmd->cmd_hw_pending = 0;
+
+ spin_unlock_irqrestore(&sess->sess_list_lock, *flags);
+ tgtt->on_hw_pending_cmd_timeout(cmd);
+ spin_lock_irqsave(&sess->sess_list_lock, *flags);
+
+ res = 1; /* restart */
+
+out:
+ return res;
+}
+
+static void scst_hw_pending_work_fn(struct delayed_work *work)
+{
+ struct scst_session *sess = container_of(work, struct scst_session,
+ hw_pending_work);
+ struct scst_tgt_template *tgtt = sess->tgt->tgtt;
+ struct scst_cmd *cmd;
+ unsigned long cur_time = jiffies;
+ unsigned long flags;
+ unsigned long max_time = tgtt->max_hw_pending_time * HZ;
+
+ TRACE_DBG("HW pending work (sess %p, max time %ld)", sess, max_time/HZ);
+
+ clear_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags);
+
+ spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+restart:
+ list_for_each_entry(cmd, &sess->sess_cmd_list, sess_cmd_list_entry) {
+ int rc;
+
+ rc = scst_check_hw_pending_cmd(cmd, cur_time, max_time, sess,
+ &flags, tgtt);
+ if (rc < 0)
+ break;
+ else if (rc == 0)
+ continue;
+ else
+ goto restart;
+ }
+
+ if (!list_empty(&sess->sess_cmd_list)) {
+ /*
+ * If there is no activity, stuck cmds might need one more run
+ * to be released, so reschedule once again.
+ */
+ TRACE_DBG("Sched HW pending work for sess %p (max time %d)",
+ sess, tgtt->max_hw_pending_time);
+ set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags);
+ schedule_delayed_work(&sess->hw_pending_work,
+ tgtt->max_hw_pending_time * HZ);
+ }
+
+ spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+ return;
+}
+
+bool scst_is_relative_target_port_id_unique(uint16_t id, struct scst_tgt *t)
+{
+ bool res = true;
+ struct scst_tgt_template *tgtt;
+
+ mutex_lock(&scst_mutex);
+ list_for_each_entry(tgtt, &scst_template_list,
+ scst_template_list_entry) {
+ struct scst_tgt *tgt;
+ list_for_each_entry(tgt, &tgtt->tgt_list, tgt_list_entry) {
+ if (tgt == t)
+ continue;
+ if (id == tgt->rel_tgt_id) {
+ res = false;
+ break;
+ }
+ }
+ }
+ mutex_unlock(&scst_mutex);
+ return res;
+}
+
+int gen_relative_target_port_id(uint16_t *id)
+{
+ int res = -EOVERFLOW;
+ static unsigned long rti = SCST_MIN_REL_TGT_ID, rti_prev;
+
+ rti_prev = rti;
+ do {
+ if (scst_is_relative_target_port_id_unique(rti, NULL)) {
+ *id = (uint16_t)rti++;
+ res = 0;
+ goto out;
+ }
+ rti++;
+ if (rti > SCST_MAX_REL_TGT_ID)
+ rti = SCST_MIN_REL_TGT_ID;
+ } while (rti != rti_prev);
+
+ PRINT_ERROR("%s", "Unable to create unique relative target port id");
+
+out:
+ return res;
+}
+
+int scst_alloc_tgt(struct scst_tgt_template *tgtt, struct scst_tgt **tgt)
+{
+ struct scst_tgt *t;
+ int res = 0;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (t == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of tgt failed");
+ res = -ENOMEM;
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&t->sess_list);
+ init_waitqueue_head(&t->unreg_waitQ);
+ t->tgtt = tgtt;
+ t->sg_tablesize = tgtt->sg_tablesize;
+ spin_lock_init(&t->tgt_lock);
+ INIT_LIST_HEAD(&t->retry_cmd_list);
+ atomic_set(&t->finished_cmds, 0);
+ init_timer(&t->retry_timer);
+ t->retry_timer.data = (unsigned long)t;
+ t->retry_timer.function = scst_tgt_retry_timer_fn;
+
+ *tgt = t;
+
+out:
+ return res;
+}
+
+void scst_free_tgt(struct scst_tgt *tgt)
+{
+
+ if (tgt->default_acg != NULL)
+ scst_free_acg(tgt->default_acg);
+
+ kfree(tgt->tgt_name);
+
+ kfree(tgt);
+ return;
+}
+
+/* Called under scst_mutex and suspended activity */
+int scst_alloc_device(gfp_t gfp_mask, struct scst_device **out_dev)
+{
+ struct scst_device *dev;
+ int res = 0;
+
+ dev = kzalloc(sizeof(*dev), gfp_mask);
+ if (dev == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s",
+ "Allocation of scst_device failed");
+ res = -ENOMEM;
+ goto out;
+ }
+
+ dev->handler = &scst_null_devtype;
+ atomic_set(&dev->dev_cmd_count, 0);
+ atomic_set(&dev->write_cmd_count, 0);
+ scst_init_mem_lim(&dev->dev_mem_lim);
+ spin_lock_init(&dev->dev_lock);
+ atomic_set(&dev->on_dev_count, 0);
+ INIT_LIST_HEAD(&dev->blocked_cmd_list);
+ INIT_LIST_HEAD(&dev->dev_tgt_dev_list);
+ INIT_LIST_HEAD(&dev->dev_acg_dev_list);
+ init_waitqueue_head(&dev->on_dev_waitQ);
+ dev->dev_double_ua_possible = 1;
+ dev->queue_alg = SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER;
+
+ scst_init_threads(&dev->dev_cmd_threads);
+
+ *out_dev = dev;
+
+out:
+ return res;
+}
+
+void scst_free_device(struct scst_device *dev)
+{
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (!list_empty(&dev->dev_tgt_dev_list) ||
+ !list_empty(&dev->dev_acg_dev_list)) {
+ PRINT_CRIT_ERROR("%s: dev_tgt_dev_list or dev_acg_dev_list "
+ "is not empty!", __func__);
+ BUG();
+ }
+#endif
+
+ scst_deinit_threads(&dev->dev_cmd_threads);
+
+ kfree(dev->virt_name);
+ kfree(dev);
+ return;
+}
+
+/**
+ * scst_init_mem_lim() - initialize memory limits structure
+ *
+ * Initializes the memory limits structure mem_lim according to
+ * the current system configuration. This structure should later be used
+ * to track and limit the memory allocated by one or more SGV pools.
+ */
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim)
+{
+ atomic_set(&mem_lim->alloced_pages, 0);
+ mem_lim->max_allowed_pages =
+ ((uint64_t)scst_max_dev_cmd_mem << 10) >> (PAGE_SHIFT - 10);
+}
+EXPORT_SYMBOL_GPL(scst_init_mem_lim);
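+
+/*
+ * Worked example (illustrative only), assuming scst_max_dev_cmd_mem is
+ * expressed in MB as the shifts imply: "<< 10" converts MB to KB and
+ * ">> (PAGE_SHIFT - 10)" converts KB to pages. With a 256 MB limit and
+ * 4 KB pages (PAGE_SHIFT = 12), max_allowed_pages = (256 << 10) >> 2 =
+ * 65536 pages, i.e. exactly 256 MB worth of pages.
+ */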
+
+static struct scst_acg_dev *scst_alloc_acg_dev(struct scst_acg *acg,
+ struct scst_device *dev, uint64_t lun)
+{
+ struct scst_acg_dev *res;
+
+ res = kmem_cache_zalloc(scst_acgd_cachep, GFP_KERNEL);
+ if (res == NULL) {
+ TRACE(TRACE_OUT_OF_MEM,
+ "%s", "Allocation of scst_acg_dev failed");
+ goto out;
+ }
+
+ res->dev = dev;
+ res->acg = acg;
+ res->lun = lun;
+
+out:
+ return res;
+}
+
+void scst_acg_dev_destroy(struct scst_acg_dev *acg_dev)
+{
+
+ if ((acg_dev->dev != NULL) && acg_dev->dev->dev_kobj_initialized)
+ kobject_put(&acg_dev->dev->dev_kobj);
+
+ kmem_cache_free(scst_acgd_cachep, acg_dev);
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_free_acg_dev(struct scst_acg_dev *acg_dev)
+{
+
+ TRACE_DBG("Removing acg_dev %p from acg_dev_list and dev_acg_dev_list",
+ acg_dev);
+ list_del(&acg_dev->acg_dev_list_entry);
+ list_del(&acg_dev->dev_acg_dev_list_entry);
+
+ if (acg_dev->acg_dev_kobj_initialized) {
+ kobject_del(&acg_dev->acg_dev_kobj);
+ kobject_put(&acg_dev->acg_dev_kobj);
+ } else
+ scst_acg_dev_destroy(acg_dev);
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acg *scst_alloc_add_acg(struct scst_tgt *tgt,
+ const char *acg_name)
+{
+ struct scst_acg *acg;
+
+ acg = kzalloc(sizeof(*acg), GFP_KERNEL);
+ if (acg == NULL) {
+ PRINT_ERROR("%s", "Allocation of acg failed");
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&acg->acg_dev_list);
+ INIT_LIST_HEAD(&acg->acg_sess_list);
+ INIT_LIST_HEAD(&acg->acn_list);
+ acg->acg_name = kstrdup(acg_name, GFP_KERNEL);
+ if (acg->acg_name == NULL) {
+ PRINT_ERROR("%s", "Allocation of acg_name failed");
+ goto out_free;
+ }
+
+ acg->addr_method = SCST_LUN_ADDR_METHOD_PERIPHERAL;
+
+ if (tgt != NULL) {
+ TRACE_DBG("Adding acg '%s' to device '%s' acg_list", acg_name,
+ tgt->tgt_name);
+ list_add_tail(&acg->acg_list_entry, &tgt->tgt_acg_list);
+ acg->in_tgt_acg_list = 1;
+ }
+
+out:
+ return acg;
+
+out_free:
+ kfree(acg);
+ acg = NULL;
+ goto out;
+}
+
+void scst_free_acg(struct scst_acg *acg)
+{
+
+ BUG_ON(!list_empty(&acg->acg_sess_list));
+ BUG_ON(!list_empty(&acg->acg_dev_list));
+ BUG_ON(!list_empty(&acg->acn_list));
+
+ kfree(acg->acg_name);
+ kfree(acg);
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_clear_acg(struct scst_acg *acg)
+{
+ struct scst_acn *n, *nn;
+ struct scst_acg_dev *acg_dev, *acg_dev_tmp;
+
+ TRACE_DBG("Clearing acg %s from list", acg->acg_name);
+
+ BUG_ON(!list_empty(&acg->acg_sess_list));
+ if (acg->in_tgt_acg_list) {
+ TRACE_DBG("Removing acg %s from list", acg->acg_name);
+ list_del(&acg->acg_list_entry);
+ acg->in_tgt_acg_list = 0;
+ }
+ /* Freeing acg_devs */
+ list_for_each_entry_safe(acg_dev, acg_dev_tmp, &acg->acg_dev_list,
+ acg_dev_list_entry) {
+ struct scst_tgt_dev *tgt_dev, *tt;
+ list_for_each_entry_safe(tgt_dev, tt,
+ &acg_dev->dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if (tgt_dev->acg_dev == acg_dev)
+ scst_free_tgt_dev(tgt_dev);
+ }
+ scst_free_acg_dev(acg_dev);
+ }
+
+ /* Freeing names */
+ list_for_each_entry_safe(n, nn, &acg->acn_list,
+ acn_list_entry) {
+ scst_acn_sysfs_del(acg, n,
+ list_is_last(&n->acn_list_entry, &acg->acn_list));
+ }
+ INIT_LIST_HEAD(&acg->acn_list);
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_destroy_acg(struct scst_acg *acg)
+{
+
+ scst_clear_acg(acg);
+ scst_free_acg(acg);
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acg *scst_tgt_find_acg(struct scst_tgt *tgt, const char *name)
+{
+ struct scst_acg *acg, *acg_ret = NULL;
+
+ list_for_each_entry(acg, &tgt->tgt_acg_list, acg_list_entry) {
+ if (strcmp(acg->acg_name, name) == 0) {
+ acg_ret = acg;
+ break;
+ }
+ }
+ return acg_ret;
+}
+
+/* scst_mutex supposed to be held */
+static struct scst_tgt_dev *scst_find_shared_io_tgt_dev(
+ struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_tgt_dev *res = NULL;
+ struct scst_acg *acg = tgt_dev->acg_dev->acg;
+ struct scst_tgt_dev *t;
+
+ TRACE_DBG("tgt_dev %s (acg %p, io_grouping_type %d)",
+ tgt_dev->sess->initiator_name, acg, acg->acg_io_grouping_type);
+
+ switch (acg->acg_io_grouping_type) {
+ case SCST_IO_GROUPING_AUTO:
+ if (tgt_dev->sess->initiator_name == NULL)
+ goto out;
+
+ list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if ((t == tgt_dev) ||
+ (t->sess->initiator_name == NULL) ||
+ (t->active_cmd_threads == NULL))
+ continue;
+
+ TRACE_DBG("t %s", t->sess->initiator_name);
+
+ /* We check other ACGs as well */
+
+ if (strcmp(t->sess->initiator_name,
+ tgt_dev->sess->initiator_name) == 0)
+ goto found;
+ }
+ break;
+
+ case SCST_IO_GROUPING_THIS_GROUP_ONLY:
+ list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if ((t == tgt_dev) || (t->active_cmd_threads == NULL))
+ continue;
+
+ TRACE_DBG("t %s (acg %p)", t->sess->initiator_name,
+ t->acg_dev->acg);
+
+ if (t->acg_dev->acg == acg)
+ goto found;
+ }
+ break;
+
+ case SCST_IO_GROUPING_NEVER:
+ goto out;
+
+ default:
+ list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if ((t == tgt_dev) || (t->active_cmd_threads == NULL))
+ continue;
+
+ TRACE_DBG("t %s (acg %p, io_grouping_type %d)",
+ t->sess->initiator_name, t->acg_dev->acg,
+ t->acg_dev->acg->acg_io_grouping_type);
+
+ if (t->acg_dev->acg->acg_io_grouping_type ==
+ acg->acg_io_grouping_type)
+ goto found;
+ }
+ break;
+ }
+
+out:
+ return res;
+
+found:
+ if (t->active_cmd_threads == &scst_main_cmd_threads) {
+ res = t;
+ TRACE_MGMT_DBG("Going to share async IO context %p (res %p, "
+ "ini %s, dev %s, grouping type %d)",
+ t->aic_keeper->aic, res, t->sess->initiator_name,
+ t->dev->virt_name,
+ t->acg_dev->acg->acg_io_grouping_type);
+ } else {
+ res = t;
+ if (res->active_cmd_threads->io_context == NULL) {
+ TRACE_MGMT_DBG("IO context for t %p not yet "
+ "initialized, waiting...", t);
+ msleep(100);
+ barrier();
+ goto found;
+ }
+ TRACE_MGMT_DBG("Going to share IO context %p (res %p, ini %s, "
+ "dev %s, cmd_threads %p, grouping type %d)",
+ res->active_cmd_threads->io_context, res,
+ t->sess->initiator_name, t->dev->virt_name,
+ t->active_cmd_threads,
+ t->acg_dev->acg->acg_io_grouping_type);
+ }
+ goto out;
+}
+
+enum scst_dev_type_threads_pool_type scst_parse_threads_pool_type(const char *p,
+ int len)
+{
+ enum scst_dev_type_threads_pool_type res;
+
+ if (strncasecmp(p, SCST_THREADS_POOL_PER_INITIATOR_STR,
+ min_t(int, strlen(SCST_THREADS_POOL_PER_INITIATOR_STR),
+ len)) == 0)
+ res = SCST_THREADS_POOL_PER_INITIATOR;
+ else if (strncasecmp(p, SCST_THREADS_POOL_SHARED_STR,
+ min_t(int, strlen(SCST_THREADS_POOL_SHARED_STR),
+ len)) == 0)
+ res = SCST_THREADS_POOL_SHARED;
+ else {
+ PRINT_ERROR("Unknown threads pool type %s", p);
+ res = SCST_THREADS_POOL_TYPE_INVALID;
+ }
+
+ return res;
+}
+
+static int scst_ioc_keeper_thread(void *arg)
+{
+ struct scst_async_io_context_keeper *aic_keeper =
+ (struct scst_async_io_context_keeper *)arg;
+
+ TRACE_MGMT_DBG("AIC %p keeper thread %s (PID %d) started", aic_keeper,
+ current->comm, current->pid);
+
+ current->flags |= PF_NOFREEZE;
+
+ BUG_ON(aic_keeper->aic != NULL);
+
+ aic_keeper->aic = get_io_context(GFP_KERNEL, -1);
+ TRACE_MGMT_DBG("Alloced new async IO context %p (aic %p)",
+ aic_keeper->aic, aic_keeper);
+
+ /* We have our own ref counting */
+ put_io_context(aic_keeper->aic);
+
+ /* We are ready */
+ wake_up_all(&aic_keeper->aic_keeper_waitQ);
+
+ wait_event_interruptible(aic_keeper->aic_keeper_waitQ,
+ kthread_should_stop());
+
+ TRACE_MGMT_DBG("AIC %p keeper thread %s (PID %d) finished", aic_keeper,
+ current->comm, current->pid);
+ return 0;
+}
+
+/* scst_mutex supposed to be held */
+int scst_tgt_dev_setup_threads(struct scst_tgt_dev *tgt_dev)
+{
+ int res = 0;
+ struct scst_device *dev = tgt_dev->dev;
+ struct scst_async_io_context_keeper *aic_keeper;
+
+ if (dev->threads_num <= 0) {
+ tgt_dev->active_cmd_threads = &scst_main_cmd_threads;
+
+ if (dev->threads_num == 0) {
+ struct scst_tgt_dev *shared_io_tgt_dev;
+
+ shared_io_tgt_dev = scst_find_shared_io_tgt_dev(tgt_dev);
+ if (shared_io_tgt_dev != NULL) {
+ aic_keeper = shared_io_tgt_dev->aic_keeper;
+ kref_get(&aic_keeper->aic_keeper_kref);
+
+ TRACE_MGMT_DBG("Linking async io context %p "
+ "for shared tgt_dev %p (dev %s)",
+ aic_keeper->aic, tgt_dev,
+ tgt_dev->dev->virt_name);
+ } else {
+ /* Create new context */
+ aic_keeper = kzalloc(sizeof(*aic_keeper),
+ GFP_KERNEL);
+ if (aic_keeper == NULL) {
+ PRINT_ERROR("Unable to alloc aic_keeper "
+ "(size %zd)", sizeof(*aic_keeper));
+ res = -ENOMEM;
+ goto out;
+ }
+
+ kref_init(&aic_keeper->aic_keeper_kref);
+ init_waitqueue_head(&aic_keeper->aic_keeper_waitQ);
+
+ aic_keeper->aic_keeper_thr =
+ kthread_run(scst_ioc_keeper_thread,
+ aic_keeper, "aic_keeper");
+ if (IS_ERR(aic_keeper->aic_keeper_thr)) {
+ PRINT_ERROR("Error running ioc_keeper "
+ "thread (tgt_dev %p)", tgt_dev);
+ res = PTR_ERR(aic_keeper->aic_keeper_thr);
+ goto out_free_keeper;
+ }
+
+ wait_event(aic_keeper->aic_keeper_waitQ,
+ aic_keeper->aic != NULL);
+
+ TRACE_MGMT_DBG("Created async io context %p "
+ "for not shared tgt_dev %p (dev %s)",
+ aic_keeper->aic, tgt_dev,
+ tgt_dev->dev->virt_name);
+ }
+
+ tgt_dev->async_io_context = aic_keeper->aic;
+ tgt_dev->aic_keeper = aic_keeper;
+ }
+
+ res = scst_add_threads(tgt_dev->active_cmd_threads, NULL, NULL,
+ tgt_dev->sess->tgt->tgtt->threads_num);
+ goto out;
+ }
+
+ switch (dev->threads_pool_type) {
+ case SCST_THREADS_POOL_PER_INITIATOR:
+ {
+ struct scst_tgt_dev *shared_io_tgt_dev;
+
+ scst_init_threads(&tgt_dev->tgt_dev_cmd_threads);
+
+ tgt_dev->active_cmd_threads = &tgt_dev->tgt_dev_cmd_threads;
+
+ shared_io_tgt_dev = scst_find_shared_io_tgt_dev(tgt_dev);
+ if (shared_io_tgt_dev != NULL) {
+ TRACE_MGMT_DBG("Linking io context %p for "
+ "shared tgt_dev %p (cmd_threads %p)",
+ shared_io_tgt_dev->active_cmd_threads->io_context,
+ tgt_dev, tgt_dev->active_cmd_threads);
+ /* It's ref counted via threads */
+ tgt_dev->active_cmd_threads->io_context =
+ shared_io_tgt_dev->active_cmd_threads->io_context;
+ }
+
+ res = scst_add_threads(tgt_dev->active_cmd_threads, NULL,
+ tgt_dev,
+ dev->threads_num + tgt_dev->sess->tgt->tgtt->threads_num);
+ if (res != 0) {
+ /* Let's clear here, because no threads could be run */
+ tgt_dev->active_cmd_threads->io_context = NULL;
+ }
+ break;
+ }
+ case SCST_THREADS_POOL_SHARED:
+ tgt_dev->active_cmd_threads = &dev->dev_cmd_threads;
+
+ res = scst_add_threads(tgt_dev->active_cmd_threads, dev, NULL,
+ tgt_dev->sess->tgt->tgtt->threads_num);
+ break;
+ default:
+ PRINT_CRIT_ERROR("Unknown threads pool type %d (dev %s)",
+ dev->threads_pool_type, dev->virt_name);
+ BUG();
+ break;
+ }
+
+out:
+ if (res == 0)
+ tm_dbg_init_tgt_dev(tgt_dev);
+ return res;
+
+out_free_keeper:
+ kfree(aic_keeper);
+ goto out;
+}
+
+static void scst_aic_keeper_release(struct kref *kref)
+{
+ struct scst_async_io_context_keeper *aic_keeper;
+
+ aic_keeper = container_of(kref, struct scst_async_io_context_keeper,
+ aic_keeper_kref);
+
+ kthread_stop(aic_keeper->aic_keeper_thr);
+
+ kfree(aic_keeper);
+ return;
+}
+
+/* scst_mutex supposed to be held */
+void scst_tgt_dev_stop_threads(struct scst_tgt_dev *tgt_dev)
+{
+
+ if (tgt_dev->active_cmd_threads == &scst_main_cmd_threads) {
+ /* Global async threads */
+ kref_put(&tgt_dev->aic_keeper->aic_keeper_kref,
+ scst_aic_keeper_release);
+ tgt_dev->async_io_context = NULL;
+ tgt_dev->aic_keeper = NULL;
+ } else if (tgt_dev->active_cmd_threads == &tgt_dev->dev->dev_cmd_threads) {
+ /* Per device shared threads */
+ scst_del_threads(tgt_dev->active_cmd_threads,
+ tgt_dev->sess->tgt->tgtt->threads_num);
+ } else if (tgt_dev->active_cmd_threads == &tgt_dev->tgt_dev_cmd_threads) {
+ /* Per tgt_dev threads */
+ scst_del_threads(tgt_dev->active_cmd_threads, -1);
+ scst_deinit_threads(&tgt_dev->tgt_dev_cmd_threads);
+ } /* else no threads (not yet initialized, e.g.) */
+
+ tm_dbg_deinit_tgt_dev(tgt_dev);
+ tgt_dev->active_cmd_threads = NULL;
+ return;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+static struct scst_tgt_dev *scst_alloc_add_tgt_dev(struct scst_session *sess,
+ struct scst_acg_dev *acg_dev)
+{
+ int ini_sg, ini_unchecked_isa_dma, ini_use_clustering;
+ struct scst_tgt_dev *tgt_dev;
+ struct scst_device *dev = acg_dev->dev;
+ struct list_head *sess_tgt_dev_list_head;
+ int rc, i, sl;
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+
+ tgt_dev = kmem_cache_zalloc(scst_tgtd_cachep, GFP_KERNEL);
+ if (tgt_dev == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s",
+ "Allocation of scst_tgt_dev failed");
+ goto out;
+ }
+
+ tgt_dev->dev = dev;
+ tgt_dev->lun = acg_dev->lun;
+ tgt_dev->acg_dev = acg_dev;
+ tgt_dev->sess = sess;
+ atomic_set(&tgt_dev->tgt_dev_cmd_count, 0);
+
+ scst_sgv_pool_use_norm(tgt_dev);
+
+ if (dev->scsi_dev != NULL) {
+ ini_sg = dev->scsi_dev->host->sg_tablesize;
+ ini_unchecked_isa_dma = dev->scsi_dev->host->unchecked_isa_dma;
+ ini_use_clustering = (dev->scsi_dev->host->use_clustering ==
+ ENABLE_CLUSTERING);
+ } else {
+ ini_sg = (1 << 15) /* infinite */;
+ ini_unchecked_isa_dma = 0;
+ ini_use_clustering = 0;
+ }
+ tgt_dev->max_sg_cnt = min(ini_sg, sess->tgt->sg_tablesize);
+
+ if ((sess->tgt->tgtt->use_clustering || ini_use_clustering) &&
+ !sess->tgt->tgtt->no_clustering)
+ scst_sgv_pool_use_norm_clust(tgt_dev);
+
+ if (sess->tgt->tgtt->unchecked_isa_dma || ini_unchecked_isa_dma)
+ scst_sgv_pool_use_dma(tgt_dev);
+
+ TRACE_MGMT_DBG("Device %s on SCST lun=%lld",
+ dev->virt_name, (long long unsigned int)tgt_dev->lun);
+
+ spin_lock_init(&tgt_dev->tgt_dev_lock);
+ INIT_LIST_HEAD(&tgt_dev->UA_list);
+ spin_lock_init(&tgt_dev->thr_data_lock);
+ INIT_LIST_HEAD(&tgt_dev->thr_data_list);
+ spin_lock_init(&tgt_dev->sn_lock);
+ INIT_LIST_HEAD(&tgt_dev->deferred_cmd_list);
+ INIT_LIST_HEAD(&tgt_dev->skipped_sn_list);
+ tgt_dev->curr_sn = (typeof(tgt_dev->curr_sn))(-300);
+ tgt_dev->expected_sn = tgt_dev->curr_sn + 1;
+ tgt_dev->num_free_sn_slots = ARRAY_SIZE(tgt_dev->sn_slots)-1;
+ tgt_dev->cur_sn_slot = &tgt_dev->sn_slots[0];
+ for (i = 0; i < (int)ARRAY_SIZE(tgt_dev->sn_slots); i++)
+ atomic_set(&tgt_dev->sn_slots[i], 0);
+
+ if (dev->handler->parse_atomic &&
+ (sess->tgt->tgtt->preprocessing_done == NULL)) {
+ if (sess->tgt->tgtt->rdy_to_xfer_atomic)
+ __set_bit(SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ if (dev->handler->exec_atomic)
+ __set_bit(SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ }
+ if (dev->handler->exec_atomic) {
+ if (sess->tgt->tgtt->rdy_to_xfer_atomic)
+ __set_bit(SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ __set_bit(SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ __set_bit(SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ }
+ if (dev->handler->dev_done_atomic &&
+ sess->tgt->tgtt->xmit_response_atomic) {
+ __set_bit(SCST_TGT_DEV_AFTER_EXEC_ATOMIC,
+ &tgt_dev->tgt_dev_flags);
+ }
+
+ sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ dev->d_sense, SCST_LOAD_SENSE(scst_sense_reset_UA));
+ scst_alloc_set_UA(tgt_dev, sense_buffer, sl, 0);
+
+ rc = scst_tgt_dev_setup_threads(tgt_dev);
+ if (rc != 0)
+ goto out_free;
+
+ if (dev->handler && dev->handler->attach_tgt) {
+ TRACE_DBG("Calling dev handler's attach_tgt(%p)", tgt_dev);
+ rc = dev->handler->attach_tgt(tgt_dev);
+ TRACE_DBG("%s", "Dev handler's attach_tgt() returned");
+ if (rc != 0) {
+ PRINT_ERROR("Device handler's %s attach_tgt() "
+ "failed: %d", dev->handler->name, rc);
+ goto out_stop_threads;
+ }
+ }
+
+ spin_lock_bh(&dev->dev_lock);
+ list_add_tail(&tgt_dev->dev_tgt_dev_list_entry, &dev->dev_tgt_dev_list);
+ if (dev->dev_reserved)
+ __set_bit(SCST_TGT_DEV_RESERVED, &tgt_dev->tgt_dev_flags);
+ spin_unlock_bh(&dev->dev_lock);
+
+ sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[HASH_VAL(tgt_dev->lun)];
+ list_add_tail(&tgt_dev->sess_tgt_dev_list_entry,
+ sess_tgt_dev_list_head);
+
+out:
+ return tgt_dev;
+
+out_stop_threads:
+ scst_tgt_dev_stop_threads(tgt_dev);
+
+out_free:
+ scst_free_all_UA(tgt_dev);
+
+ kmem_cache_free(scst_tgtd_cachep, tgt_dev);
+ tgt_dev = NULL;
+ goto out;
+}
+
+/* No locks supposed to be held except scst_mutex, which must be held */
+void scst_nexus_loss(struct scst_tgt_dev *tgt_dev, bool queue_UA)
+{
+
+ scst_clear_reservation(tgt_dev);
+
+ /* With activity suspended the lock isn't needed, but let's be safe */
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+ scst_free_all_UA(tgt_dev);
+ memset(tgt_dev->tgt_dev_sense, 0, sizeof(tgt_dev->tgt_dev_sense));
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+ if (queue_UA) {
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+ int sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ tgt_dev->dev->d_sense,
+ SCST_LOAD_SENSE(scst_sense_nexus_loss_UA));
+ scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+ }
+ return;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+static void scst_free_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_device *dev = tgt_dev->dev;
+
+ spin_lock_bh(&dev->dev_lock);
+ list_del(&tgt_dev->dev_tgt_dev_list_entry);
+ spin_unlock_bh(&dev->dev_lock);
+
+ list_del(&tgt_dev->sess_tgt_dev_list_entry);
+
+ scst_clear_reservation(tgt_dev);
+ scst_free_all_UA(tgt_dev);
+
+ if (dev->handler && dev->handler->detach_tgt) {
+ TRACE_DBG("Calling dev handler's detach_tgt(%p)",
+ tgt_dev);
+ dev->handler->detach_tgt(tgt_dev);
+ TRACE_DBG("%s", "Dev handler's detach_tgt() returned");
+ }
+
+ scst_tgt_dev_stop_threads(tgt_dev);
+
+ BUG_ON(!list_empty(&tgt_dev->thr_data_list));
+
+ kmem_cache_free(scst_tgtd_cachep, tgt_dev);
+ return;
+}
+
+/* scst_mutex supposed to be held */
+int scst_sess_alloc_tgt_devs(struct scst_session *sess)
+{
+ int res = 0;
+ struct scst_acg_dev *acg_dev;
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(acg_dev, &sess->acg->acg_dev_list,
+ acg_dev_list_entry) {
+ tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+ if (tgt_dev == NULL) {
+ res = -ENOMEM;
+ goto out_free;
+ }
+ }
+
+out:
+ return res;
+
+out_free:
+ scst_sess_free_tgt_devs(sess);
+ goto out;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+void scst_sess_free_tgt_devs(struct scst_session *sess)
+{
+ int i;
+ struct scst_tgt_dev *tgt_dev, *t;
+
+ /* The session is going down, no users, so no locks */
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ list_for_each_entry_safe(tgt_dev, t, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ scst_free_tgt_dev(tgt_dev);
+ }
+ INIT_LIST_HEAD(sess_tgt_dev_list_head);
+ }
+ return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_add_dev(struct scst_acg *acg, struct scst_device *dev,
+ uint64_t lun, int read_only, bool gen_scst_report_luns_changed)
+{
+ int res = 0;
+ struct scst_acg_dev *acg_dev;
+ struct scst_tgt_dev *tgt_dev;
+ struct scst_session *sess;
+ LIST_HEAD(tmp_tgt_dev_list);
+
+ INIT_LIST_HEAD(&tmp_tgt_dev_list);
+
+ acg_dev = scst_alloc_acg_dev(acg, dev, lun);
+ if (acg_dev == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+ acg_dev->rd_only = read_only;
+
+ TRACE_DBG("Adding acg_dev %p to acg_dev_list and dev_acg_dev_list",
+ acg_dev);
+ list_add_tail(&acg_dev->acg_dev_list_entry, &acg->acg_dev_list);
+ list_add_tail(&acg_dev->dev_acg_dev_list_entry, &dev->dev_acg_dev_list);
+
+ list_for_each_entry(sess, &acg->acg_sess_list, acg_sess_list_entry) {
+ tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+ if (tgt_dev == NULL) {
+ res = -ENOMEM;
+ goto out_free;
+ }
+ list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+ &tmp_tgt_dev_list);
+ }
+
+ if (gen_scst_report_luns_changed)
+ scst_report_luns_changed(acg);
+
+ PRINT_INFO("Added device %s to group %s (LUN %lld, "
+ "rd_only %d)", dev->virt_name, acg->acg_name,
+ (long long unsigned int)lun, read_only);
+
+out:
+ return res;
+
+out_free:
+ list_for_each_entry(tgt_dev, &tmp_tgt_dev_list,
+ extra_tgt_dev_list_entry) {
+ scst_free_tgt_dev(tgt_dev);
+ }
+ scst_free_acg_dev(acg_dev);
+ goto out;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_remove_dev(struct scst_acg *acg, struct scst_device *dev,
+ bool gen_scst_report_luns_changed)
+{
+ int res = 0;
+ struct scst_acg_dev *acg_dev = NULL, *a;
+ struct scst_tgt_dev *tgt_dev, *tt;
+
+ list_for_each_entry(a, &acg->acg_dev_list, acg_dev_list_entry) {
+ if (a->dev == dev) {
+ acg_dev = a;
+ break;
+ }
+ }
+
+ if (acg_dev == NULL) {
+ PRINT_ERROR("Device is not found in group %s", acg->acg_name);
+ res = -EINVAL;
+ goto out;
+ }
+
+ list_for_each_entry_safe(tgt_dev, tt, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if (tgt_dev->acg_dev == acg_dev)
+ scst_free_tgt_dev(tgt_dev);
+ }
+ scst_free_acg_dev(acg_dev);
+
+ if (gen_scst_report_luns_changed)
+ scst_report_luns_changed(acg);
+
+ PRINT_INFO("Removed device %s from group %s", dev->virt_name,
+ acg->acg_name);
+
+out:
+ return res;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_add_name(struct scst_acg *acg, const char *name)
+{
+ int res = 0;
+ struct scst_acn *n;
+ int len;
+ char *nm;
+
+ list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+ if (strcmp(n->name, name) == 0) {
+ PRINT_ERROR("Name %s already exists in group %s",
+ name, acg->acg_name);
+ res = -EEXIST;
+ goto out;
+ }
+ }
+
+ n = kmalloc(sizeof(*n), GFP_KERNEL);
+ if (n == NULL) {
+ PRINT_ERROR("%s", "Unable to allocate scst_acn");
+ res = -ENOMEM;
+ goto out;
+ }
+
+ len = strlen(name);
+ nm = kmalloc(len + 1, GFP_KERNEL);
+ if (nm == NULL) {
+ PRINT_ERROR("%s", "Unable to allocate scst_acn->name");
+ res = -ENOMEM;
+ goto out_free;
+ }
+
+ strcpy(nm, name);
+ n->name = nm;
+
+ res = scst_create_acn_sysfs(acg, n);
+ if (res != 0)
+ goto out_free_nm;
+
+ list_add_tail(&n->acn_list_entry, &acg->acn_list);
+
+out:
+ if (res == 0) {
+ PRINT_INFO("Added name '%s' to group '%s'", name, acg->acg_name);
+ scst_check_reassign_sessions();
+ }
+ return res;
+
+out_free_nm:
+ kfree(nm);
+
+out_free:
+ kfree(n);
+ goto out;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acn *scst_acg_find_name(struct scst_acg *acg, const char *name)
+{
+ struct scst_acn *n;
+
+ TRACE_DBG("Trying to find name '%s'", name);
+
+ list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+ if (strcmp(n->name, name) == 0) {
+ TRACE_DBG("%s", "Found");
+ goto out;
+ }
+ }
+ n = NULL;
+out:
+ return n;
+}
+
+/* scst_mutex supposed to be held */
+void scst_acg_remove_acn(struct scst_acn *acn)
+{
+
+ list_del(&acn->acn_list_entry);
+ kfree(acn->name);
+ kfree(acn);
+ return;
+}
+
+static struct scst_cmd *scst_create_prepare_internal_cmd(
+ struct scst_cmd *orig_cmd, int bufsize)
+{
+ struct scst_cmd *res;
+ gfp_t gfp_mask = scst_cmd_atomic(orig_cmd) ? GFP_ATOMIC : GFP_KERNEL;
+
+ res = scst_alloc_cmd(gfp_mask);
+ if (res == NULL)
+ goto out;
+
+ res->cmd_threads = orig_cmd->cmd_threads;
+ res->sess = orig_cmd->sess;
+ res->atomic = scst_cmd_atomic(orig_cmd);
+ res->internal = 1;
+ res->tgtt = orig_cmd->tgtt;
+ res->tgt = orig_cmd->tgt;
+ res->dev = orig_cmd->dev;
+ res->tgt_dev = orig_cmd->tgt_dev;
+ res->lun = orig_cmd->lun;
+ res->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+ res->data_direction = SCST_DATA_UNKNOWN;
+ res->orig_cmd = orig_cmd;
+ res->bufflen = bufsize;
+
+ scst_sess_get(res->sess);
+ if (res->tgt_dev != NULL)
+ __scst_get(0);
+
+ res->state = SCST_CMD_STATE_PRE_PARSE;
+
+out:
+ return res;
+}
+
+int scst_prepare_request_sense(struct scst_cmd *orig_cmd)
+{
+ int res = 0;
+ static const uint8_t request_sense[6] = {
+ REQUEST_SENSE, 0, 0, 0, SCST_SENSE_BUFFERSIZE, 0
+ };
+ struct scst_cmd *rs_cmd;
+
+ if (orig_cmd->sense != NULL) {
+ TRACE_MEM("Releasing sense %p (orig_cmd %p)",
+ orig_cmd->sense, orig_cmd);
+ mempool_free(orig_cmd->sense, scst_sense_mempool);
+ orig_cmd->sense = NULL;
+ }
+
+ rs_cmd = scst_create_prepare_internal_cmd(orig_cmd,
+ SCST_SENSE_BUFFERSIZE);
+ if (rs_cmd == NULL)
+ goto out_error;
+
+ memcpy(rs_cmd->cdb, request_sense, sizeof(request_sense));
+ rs_cmd->cdb[1] |= scst_get_cmd_dev_d_sense(orig_cmd);
+ rs_cmd->cdb_len = sizeof(request_sense);
+ rs_cmd->data_direction = SCST_DATA_READ;
+ rs_cmd->expected_data_direction = rs_cmd->data_direction;
+ rs_cmd->expected_transfer_len = SCST_SENSE_BUFFERSIZE;
+ rs_cmd->expected_values_set = 1;
+
+ TRACE_MGMT_DBG("Adding REQUEST SENSE cmd %p to head of active "
+ "cmd list", rs_cmd);
+ spin_lock_irq(&rs_cmd->cmd_threads->cmd_list_lock);
+ list_add(&rs_cmd->cmd_list_entry, &rs_cmd->cmd_threads->active_cmd_list);
+ wake_up(&rs_cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irq(&rs_cmd->cmd_threads->cmd_list_lock);
+
+out:
+ return res;
+
+out_error:
+ res = -1;
+ goto out;
+}
+
+static void scst_complete_request_sense(struct scst_cmd *req_cmd)
+{
+ struct scst_cmd *orig_cmd = req_cmd->orig_cmd;
+ uint8_t *buf;
+ int len;
+
+ BUG_ON(orig_cmd == NULL);
+
+ len = scst_get_buf_first(req_cmd, &buf);
+
+ if (scsi_status_is_good(req_cmd->status) && (len > 0) &&
+ SCST_SENSE_VALID(buf) && (!SCST_NO_SENSE(buf))) {
+ PRINT_BUFF_FLAG(TRACE_SCSI, "REQUEST SENSE returned",
+ buf, len);
+ scst_alloc_set_sense(orig_cmd, scst_cmd_atomic(req_cmd), buf,
+ len);
+ } else {
+ PRINT_ERROR("%s", "Unable to get the sense via "
+ "REQUEST SENSE, returning HARDWARE ERROR");
+ scst_set_cmd_error(orig_cmd,
+ SCST_LOAD_SENSE(scst_sense_hardw_error));
+ }
+
+ if (len > 0)
+ scst_put_buf(req_cmd, buf);
+
+ TRACE_MGMT_DBG("Adding orig cmd %p to head of active "
+ "cmd list", orig_cmd);
+ spin_lock_irq(&orig_cmd->cmd_threads->cmd_list_lock);
+ list_add(&orig_cmd->cmd_list_entry, &orig_cmd->cmd_threads->active_cmd_list);
+ wake_up(&orig_cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irq(&orig_cmd->cmd_threads->cmd_list_lock);
+ return;
+}
+
+int scst_finish_internal_cmd(struct scst_cmd *cmd)
+{
+ int res;
+
+ BUG_ON(!cmd->internal);
+
+ if (cmd->cdb[0] == REQUEST_SENSE)
+ scst_complete_request_sense(cmd);
+
+ __scst_cmd_put(cmd);
+
+ res = SCST_CMD_STATE_RES_CONT_NEXT;
+ return res;
+}
+
+static void scst_send_release(struct scst_device *dev)
+{
+ struct scsi_device *scsi_dev;
+ unsigned char cdb[6];
+ uint8_t sense[SCSI_SENSE_BUFFERSIZE];
+ int rc, i;
+
+ if (dev->scsi_dev == NULL)
+ goto out;
+
+ scsi_dev = dev->scsi_dev;
+
+ for (i = 0; i < 5; i++) {
+ memset(cdb, 0, sizeof(cdb));
+ cdb[0] = RELEASE;
+ cdb[1] = (scsi_dev->scsi_level <= SCSI_2) ?
+ ((scsi_dev->lun << 5) & 0xe0) : 0;
+
+ memset(sense, 0, sizeof(sense));
+
+ TRACE(TRACE_DEBUG | TRACE_SCSI, "%s", "Sending RELEASE req to "
+ "SCSI mid-level");
+ rc = scsi_execute(scsi_dev, cdb, SCST_DATA_NONE, NULL, 0,
+ sense, 15, 0, 0
+ , NULL
+ );
+ TRACE_DBG("MODE_SENSE done: %x", rc);
+
+ if (scsi_status_is_good(rc)) {
+ break;
+ } else {
+ PRINT_ERROR("RELEASE failed: %d", rc);
+ PRINT_BUFFER("RELEASE sense", sense, sizeof(sense));
+ scst_check_internal_sense(dev, rc, sense,
+ sizeof(sense));
+ }
+ }
+
+out:
+ return;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_clear_reservation(struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_device *dev = tgt_dev->dev;
+ int release = 0;
+
+ spin_lock_bh(&dev->dev_lock);
+ if (dev->dev_reserved &&
+ !test_bit(SCST_TGT_DEV_RESERVED, &tgt_dev->tgt_dev_flags)) {
+ /* This is the one who holds the reservation */
+ struct scst_tgt_dev *tgt_dev_tmp;
+ list_for_each_entry(tgt_dev_tmp, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ clear_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev_tmp->tgt_dev_flags);
+ }
+ dev->dev_reserved = 0;
+ release = 1;
+ }
+ spin_unlock_bh(&dev->dev_lock);
+
+ if (release)
+ scst_send_release(dev);
+ return;
+}
+
+struct scst_session *scst_alloc_session(struct scst_tgt *tgt, gfp_t gfp_mask,
+ const char *initiator_name)
+{
+ struct scst_session *sess;
+ int i;
+ int len;
+ char *nm;
+
+ sess = kmem_cache_zalloc(scst_sess_cachep, gfp_mask);
+ if (sess == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s",
+ "Allocation of scst_session failed");
+ goto out;
+ }
+
+ sess->init_phase = SCST_SESS_IPH_INITING;
+ sess->shut_phase = SCST_SESS_SPH_READY;
+ atomic_set(&sess->refcnt, 0);
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ INIT_LIST_HEAD(sess_tgt_dev_list_head);
+ }
+ spin_lock_init(&sess->sess_list_lock);
+ INIT_LIST_HEAD(&sess->sess_cmd_list);
+ sess->tgt = tgt;
+ INIT_LIST_HEAD(&sess->init_deferred_cmd_list);
+ INIT_LIST_HEAD(&sess->init_deferred_mcmd_list);
+ INIT_DELAYED_WORK(&sess->hw_pending_work,
+ (void (*)(struct work_struct *))scst_hw_pending_work_fn);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+ spin_lock_init(&sess->lat_lock);
+#endif
+
+ len = strlen(initiator_name);
+ nm = kmalloc(len + 1, gfp_mask);
+ if (nm == NULL) {
+ PRINT_ERROR("%s", "Unable to allocate sess->initiator_name");
+ goto out_free;
+ }
+
+ strcpy(nm, initiator_name);
+ sess->initiator_name = nm;
+
+out:
+ return sess;
+
+out_free:
+ kmem_cache_free(scst_sess_cachep, sess);
+ sess = NULL;
+ goto out;
+}
+
+void scst_free_session(struct scst_session *sess)
+{
+
+ mutex_lock(&scst_mutex);
+
+ TRACE_DBG("Removing sess %p from the list", sess);
+ list_del(&sess->sess_list_entry);
+ TRACE_DBG("Removing session %p from acg %s", sess, sess->acg->acg_name);
+ list_del(&sess->acg_sess_list_entry);
+
+ scst_sess_free_tgt_devs(sess);
+
+ /* Called under lock to protect against too early tgt release */
+ wake_up_all(&sess->tgt->unreg_waitQ);
+
+ mutex_unlock(&scst_mutex);
+
+ scst_sess_sysfs_put(sess); /* must not be called under scst_mutex */
+ return;
+}
+
+void scst_release_session(struct scst_session *sess)
+{
+
+ kfree(sess->initiator_name);
+ kmem_cache_free(scst_sess_cachep, sess);
+ return;
+}
+
+void scst_free_session_callback(struct scst_session *sess)
+{
+ struct completion *c;
+
+ TRACE_DBG("Freeing session %p", sess);
+
+ cancel_delayed_work_sync(&sess->hw_pending_work);
+
+ c = sess->shutdown_compl;
+
+ if (sess->unreg_done_fn) {
+ TRACE_DBG("Calling unreg_done_fn(%p)", sess);
+ sess->unreg_done_fn(sess);
+ TRACE_DBG("%s", "unreg_done_fn() returned");
+ }
+ scst_free_session(sess);
+
+ if (c)
+ complete_all(c);
+ return;
+}
+
+void scst_sched_session_free(struct scst_session *sess)
+{
+ unsigned long flags;
+
+ if (sess->shut_phase != SCST_SESS_SPH_SHUTDOWN) {
+ PRINT_CRIT_ERROR("session %p is going to shutdown with unknown "
+ "shut phase %lx", sess, sess->shut_phase);
+ BUG();
+ }
+
+ spin_lock_irqsave(&scst_mgmt_lock, flags);
+ TRACE_DBG("Adding sess %p to scst_sess_shut_list", sess);
+ list_add_tail(&sess->sess_shut_list_entry, &scst_sess_shut_list);
+ spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+
+ wake_up(&scst_mgmt_waitQ);
+ return;
+}
+
+/**
+ * scst_cmd_get() - increase command's reference counter
+ */
+void scst_cmd_get(struct scst_cmd *cmd)
+{
+ __scst_cmd_get(cmd);
+}
+EXPORT_SYMBOL_GPL(scst_cmd_get);
+
+/**
+ * scst_cmd_put() - decrease command's reference counter
+ */
+void scst_cmd_put(struct scst_cmd *cmd)
+{
+ __scst_cmd_put(cmd);
+}
+EXPORT_SYMBOL_GPL(scst_cmd_put);
+
+struct scst_cmd *scst_alloc_cmd(gfp_t gfp_mask)
+{
+ struct scst_cmd *cmd;
+
+ cmd = kmem_cache_zalloc(scst_cmd_cachep, gfp_mask);
+ if (cmd == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of scst_cmd failed");
+ goto out;
+ }
+
+ cmd->state = SCST_CMD_STATE_INIT_WAIT;
+ cmd->start_time = jiffies;
+ atomic_set(&cmd->cmd_ref, 1);
+ cmd->cmd_threads = &scst_main_cmd_threads;
+ INIT_LIST_HEAD(&cmd->mgmt_cmd_list);
+ cmd->queue_type = SCST_CMD_QUEUE_SIMPLE;
+ cmd->timeout = SCST_DEFAULT_TIMEOUT;
+ cmd->retries = 0;
+ cmd->data_len = -1;
+ cmd->is_send_status = 1;
+ cmd->resp_data_len = -1;
+
+ cmd->dbl_ua_orig_data_direction = SCST_DATA_UNKNOWN;
+ cmd->dbl_ua_orig_resp_data_len = -1;
+
+out:
+ return cmd;
+}
+
+static void scst_destroy_put_cmd(struct scst_cmd *cmd)
+{
+ scst_sess_put(cmd->sess);
+
+ /*
+ * At this point tgt_dev can be dead, but the pointer remains non-NULL
+ */
+ if (likely(cmd->tgt_dev != NULL))
+ __scst_put();
+
+ scst_destroy_cmd(cmd);
+ return;
+}
+
+/* No locks supposed to be held */
+void scst_free_cmd(struct scst_cmd *cmd)
+{
+ int destroy = 1;
+
+ TRACE_DBG("Freeing cmd %p (tag %llu)",
+ cmd, (long long unsigned int)cmd->tag);
+
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+ TRACE_MGMT_DBG("Freeing aborted cmd %p (scst_cmd_count %d)",
+ cmd, atomic_read(&scst_cmd_count));
+ }
+
+ BUG_ON(cmd->inc_blocking || cmd->needs_unblocking ||
+ cmd->dec_on_dev_needed);
+
+ /*
+ * The target driver may already have freed the sg buffer before calling
+ * scst_tgt_cmd_done(). E.g., scst_local has to do that.
+ */
+ if (!cmd->tgt_data_buf_alloced)
+ scst_check_restore_sg_buff(cmd);
+
+ if ((cmd->tgtt->on_free_cmd != NULL) && likely(!cmd->internal)) {
+ TRACE_DBG("Calling target's on_free_cmd(%p)", cmd);
+ scst_set_cur_start(cmd);
+ cmd->tgtt->on_free_cmd(cmd);
+ scst_set_tgt_on_free_time(cmd);
+ TRACE_DBG("%s", "Target's on_free_cmd() returned");
+ }
+
+ if (likely(cmd->dev != NULL)) {
+ struct scst_dev_type *handler = cmd->dev->handler;
+ if (handler->on_free_cmd != NULL) {
+ TRACE_DBG("Calling dev handler %s on_free_cmd(%p)",
+ handler->name, cmd);
+ scst_set_cur_start(cmd);
+ handler->on_free_cmd(cmd);
+ scst_set_dev_on_free_time(cmd);
+ TRACE_DBG("Dev handler %s on_free_cmd() returned",
+ handler->name);
+ }
+ }
+
+ scst_release_space(cmd);
+
+ if (unlikely(cmd->sense != NULL)) {
+ TRACE_MEM("Releasing sense %p (cmd %p)", cmd->sense, cmd);
+ mempool_free(cmd->sense, scst_sense_mempool);
+ cmd->sense = NULL;
+ }
+
+ if (likely(cmd->tgt_dev != NULL)) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+ if (unlikely(!cmd->sent_for_exec) && !cmd->internal) {
+ PRINT_ERROR("Finishing not executed cmd %p (opcode "
+ "%d, target %s, LUN %lld, sn %d, expected_sn %d)",
+ cmd, cmd->cdb[0], cmd->tgtt->name,
+ (long long unsigned int)cmd->lun,
+ cmd->sn, cmd->tgt_dev->expected_sn);
+ scst_unblock_deferred(cmd->tgt_dev, cmd);
+ }
+#endif
+
+ if (unlikely(cmd->out_of_sn)) {
+ TRACE_SN("Out of SN cmd %p (tag %llu, sn %d), "
+ "destroy=%d", cmd,
+ (long long unsigned int)cmd->tag,
+ cmd->sn, destroy);
+ destroy = test_and_set_bit(SCST_CMD_CAN_BE_DESTROYED,
+ &cmd->cmd_flags);
+ }
+ }
+
+ if (likely(destroy))
+ scst_destroy_put_cmd(cmd);
+ return;
+}
+
+/* No locks supposed to be held. */
+void scst_check_retries(struct scst_tgt *tgt)
+{
+ int need_wake_up = 0;
+
+ /*
+ * We don't worry about overflow of finished_cmds, because we check
+ * only for its change.
+ */
+ atomic_inc(&tgt->finished_cmds);
+ /* See comment in scst_queue_retry_cmd() */
+ smp_mb__after_atomic_inc();
+ if (unlikely(tgt->retry_cmds > 0)) {
+ struct scst_cmd *c, *tc;
+ unsigned long flags;
+
+ TRACE_RETRY("Checking retry cmd list (retry_cmds %d)",
+ tgt->retry_cmds);
+
+ spin_lock_irqsave(&tgt->tgt_lock, flags);
+ list_for_each_entry_safe(c, tc, &tgt->retry_cmd_list,
+ cmd_list_entry) {
+ tgt->retry_cmds--;
+
+ TRACE_RETRY("Moving retry cmd %p to head of active "
+ "cmd list (retry_cmds left %d)",
+ c, tgt->retry_cmds);
+ spin_lock(&c->cmd_threads->cmd_list_lock);
+ list_move(&c->cmd_list_entry,
+ &c->cmd_threads->active_cmd_list);
+ wake_up(&c->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&c->cmd_threads->cmd_list_lock);
+
+ need_wake_up++;
+ if (need_wake_up >= 2) /* "slow start" */
+ break;
+ }
+ spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+ }
+ return;
+}
+
+static void scst_tgt_retry_timer_fn(unsigned long arg)
+{
+ struct scst_tgt *tgt = (struct scst_tgt *)arg;
+ unsigned long flags;
+
+ TRACE_RETRY("Retry timer expired (retry_cmds %d)", tgt->retry_cmds);
+
+ spin_lock_irqsave(&tgt->tgt_lock, flags);
+ tgt->retry_timer_active = 0;
+ spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+
+ scst_check_retries(tgt);
+ return;
+}
+
+struct scst_mgmt_cmd *scst_alloc_mgmt_cmd(gfp_t gfp_mask)
+{
+ struct scst_mgmt_cmd *mcmd;
+
+ mcmd = mempool_alloc(scst_mgmt_mempool, gfp_mask);
+ if (mcmd == NULL) {
+ PRINT_CRIT_ERROR("%s", "Allocation of management command "
+ "failed, some commands and their data could leak");
+ goto out;
+ }
+ memset(mcmd, 0, sizeof(*mcmd));
+
+out:
+ return mcmd;
+}
+
+void scst_free_mgmt_cmd(struct scst_mgmt_cmd *mcmd)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mcmd->sess->sess_list_lock, flags);
+ atomic_dec(&mcmd->sess->sess_cmd_count);
+ spin_unlock_irqrestore(&mcmd->sess->sess_list_lock, flags);
+
+ scst_sess_put(mcmd->sess);
+
+ if (mcmd->mcmd_tgt_dev != NULL)
+ __scst_put();
+
+ mempool_free(mcmd, scst_mgmt_mempool);
+ return;
+}
+
+static bool is_report_sg_limitation(void)
+{
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ return (trace_flag & TRACE_OUT_OF_MEM) != 0;
+#else
+ return false;
+#endif
+}
+
+int scst_alloc_space(struct scst_cmd *cmd)
+{
+ gfp_t gfp_mask;
+ int res = -ENOMEM;
+ int atomic = scst_cmd_atomic(cmd);
+ int flags;
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+ static int ll;
+
+ gfp_mask = tgt_dev->gfp_mask | (atomic ? GFP_ATOMIC : GFP_KERNEL);
+
+ flags = atomic ? SGV_POOL_NO_ALLOC_ON_CACHE_MISS : 0;
+ if (cmd->no_sgv)
+ flags |= SGV_POOL_ALLOC_NO_CACHED;
+
+ cmd->sg = sgv_pool_alloc(tgt_dev->pool, cmd->bufflen, gfp_mask, flags,
+ &cmd->sg_cnt, &cmd->sgv, &cmd->dev->dev_mem_lim, NULL);
+ if (cmd->sg == NULL)
+ goto out;
+
+ if (unlikely(cmd->sg_cnt > tgt_dev->max_sg_cnt)) {
+ if ((ll < 10) || is_report_sg_limitation()) {
+ PRINT_INFO("Unable to complete command due to "
+ "SG IO count limitation (requested %d, "
+ "available %d, tgt lim %d)", cmd->sg_cnt,
+ tgt_dev->max_sg_cnt, cmd->tgt->sg_tablesize);
+ ll++;
+ }
+ goto out_sg_free;
+ }
+
+ if (cmd->data_direction != SCST_DATA_BIDI)
+ goto success;
+
+ cmd->in_sg = sgv_pool_alloc(tgt_dev->pool, cmd->in_bufflen, gfp_mask,
+ flags, &cmd->in_sg_cnt, &cmd->in_sgv,
+ &cmd->dev->dev_mem_lim, NULL);
+ if (cmd->in_sg == NULL)
+ goto out_sg_free;
+
+ if (unlikely(cmd->in_sg_cnt > tgt_dev->max_sg_cnt)) {
+ if ((ll < 10) || is_report_sg_limitation()) {
+ PRINT_INFO("Unable to complete command due to "
+ "SG IO count limitation (IN buffer, requested "
+ "%d, available %d, tgt lim %d)", cmd->in_sg_cnt,
+ tgt_dev->max_sg_cnt, cmd->tgt->sg_tablesize);
+ ll++;
+ }
+ goto out_in_sg_free;
+ }
+
+success:
+ res = 0;
+
+out:
+ return res;
+
+out_in_sg_free:
+ sgv_pool_free(cmd->in_sgv, &cmd->dev->dev_mem_lim);
+ cmd->in_sgv = NULL;
+ cmd->in_sg = NULL;
+ cmd->in_sg_cnt = 0;
+
+out_sg_free:
+ sgv_pool_free(cmd->sgv, &cmd->dev->dev_mem_lim);
+ cmd->sgv = NULL;
+ cmd->sg = NULL;
+ cmd->sg_cnt = 0;
+ goto out;
+}
+
+static void scst_release_space(struct scst_cmd *cmd)
+{
+
+ if (cmd->sgv == NULL) {
+ if ((cmd->sg != NULL) &&
+ !(cmd->tgt_data_buf_alloced || cmd->dh_data_buf_alloced)) {
+ TRACE_MEM("Freeing sg %p for cmd %p (cnt %d)", cmd->sg,
+ cmd, cmd->sg_cnt);
+ scst_free(cmd->sg, cmd->sg_cnt);
+ goto out_zero;
+ } else
+ goto out;
+ }
+
+ if (cmd->tgt_data_buf_alloced || cmd->dh_data_buf_alloced) {
+ TRACE_MEM("%s", "*data_buf_alloced set, returning");
+ goto out;
+ }
+
+ if (cmd->in_sgv != NULL) {
+ sgv_pool_free(cmd->in_sgv, &cmd->dev->dev_mem_lim);
+ cmd->in_sgv = NULL;
+ cmd->in_sg_cnt = 0;
+ cmd->in_sg = NULL;
+ cmd->in_bufflen = 0;
+ }
+
+ sgv_pool_free(cmd->sgv, &cmd->dev->dev_mem_lim);
+
+out_zero:
+ cmd->sgv = NULL;
+ cmd->sg_cnt = 0;
+ cmd->sg = NULL;
+ cmd->bufflen = 0;
+ cmd->data_len = 0;
+
+out:
+ return;
+}
+
+static void scsi_end_async(struct request *req, int error)
+{
+ struct scsi_io_context *sioc = req->end_io_data;
+
+ TRACE_DBG("sioc %p, cmd %p", sioc, sioc->data);
+
+ if (sioc->done)
+ sioc->done(sioc->data, sioc->sense, req->errors, req->resid_len);
+
+ if (!sioc->full_cdb_used)
+ kmem_cache_free(scsi_io_context_cache, sioc);
+ else
+ kfree(sioc);
+
+ __blk_put_request(req->q, req);
+ return;
+}
+
+/**
+ * scst_scsi_exec_async - executes a SCSI command in pass-through mode
+ * @cmd: scst command
+ * @done: callback function when done
+ */
+int scst_scsi_exec_async(struct scst_cmd *cmd,
+ void (*done)(void *, char *, int, int))
+{
+ int res = 0;
+ struct request_queue *q = cmd->dev->scsi_dev->request_queue;
+ struct request *rq;
+ struct scsi_io_context *sioc;
+ int write = (cmd->data_direction & SCST_DATA_WRITE) ? WRITE : READ;
+ gfp_t gfp = scst_cmd_atomic(cmd) ? GFP_ATOMIC : GFP_KERNEL;
+ int cmd_len = cmd->cdb_len;
+
+ if (cmd->ext_cdb_len == 0) {
+ TRACE_DBG("Simple CDB (cmd_len %d)", cmd_len);
+ sioc = kmem_cache_zalloc(scsi_io_context_cache, gfp);
+ if (sioc == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+ } else {
+ cmd_len += cmd->ext_cdb_len;
+
+ TRACE_DBG("Extended CDB (cmd_len %d)", cmd_len);
+
+ sioc = kzalloc(sizeof(*sioc) + cmd_len, gfp);
+ if (sioc == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ sioc->full_cdb_used = 1;
+
+ memcpy(sioc->full_cdb, cmd->cdb, cmd->cdb_len);
+ memcpy(&sioc->full_cdb[cmd->cdb_len], cmd->ext_cdb,
+ cmd->ext_cdb_len);
+ }
+
+ rq = blk_get_request(q, write, gfp);
+ if (rq == NULL) {
+ res = -ENOMEM;
+ goto out_free_sioc;
+ }
+
+ rq->cmd_type = REQ_TYPE_BLOCK_PC;
+ rq->cmd_flags |= REQ_QUIET;
+
+ if (cmd->sg != NULL) {
+ res = blk_rq_map_kern_sg(rq, cmd->sg, cmd->sg_cnt, gfp);
+ if (res) {
+ TRACE_DBG("blk_rq_map_kern_sg() failed: %d", res);
+ goto out_free_rq;
+ }
+ }
+
+ if (cmd->data_direction == SCST_DATA_BIDI) {
+ struct request *next_rq;
+
+ if (!test_bit(QUEUE_FLAG_BIDI, &q->queue_flags)) {
+ res = -EOPNOTSUPP;
+ goto out_free_unmap;
+ }
+
+ next_rq = blk_get_request(q, READ, gfp);
+ if (next_rq == NULL) {
+ res = -ENOMEM;
+ goto out_free_unmap;
+ }
+ rq->next_rq = next_rq;
+ next_rq->cmd_type = rq->cmd_type;
+
+ res = blk_rq_map_kern_sg(next_rq, cmd->in_sg,
+ cmd->in_sg_cnt, gfp);
+ if (res != 0)
+ goto out_free_unmap;
+ }
+
+ TRACE_DBG("sioc %p, cmd %p", sioc, cmd);
+
+ sioc->data = cmd;
+ sioc->done = done;
+
+ rq->cmd_len = cmd_len;
+ if (cmd->ext_cdb_len == 0) {
+ memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
+ memcpy(rq->cmd, cmd->cdb, cmd->cdb_len);
+ } else
+ rq->cmd = sioc->full_cdb;
+
+ rq->sense = sioc->sense;
+ rq->sense_len = sizeof(sioc->sense);
+ rq->timeout = cmd->timeout;
+ rq->retries = cmd->retries;
+ rq->end_io_data = sioc;
+
+ blk_execute_rq_nowait(rq->q, NULL, rq,
+ (cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE), scsi_end_async);
+out:
+ return res;
+
+out_free_unmap:
+ if (rq->next_rq != NULL) {
+ blk_put_request(rq->next_rq);
+ rq->next_rq = NULL;
+ }
+ blk_rq_unmap_kern_sg(rq, res);
+
+out_free_rq:
+ blk_put_request(rq);
+
+out_free_sioc:
+ if (!sioc->full_cdb_used)
+ kmem_cache_free(scsi_io_context_cache, sioc);
+ else
+ kfree(sioc);
+ goto out;
+}
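+
+/*
+ * Illustrative only: a minimal completion routine a pass-through caller
+ * could supply (scst_example_exec_done is a hypothetical name, not part
+ * of SCST):
+ *
+ *	static void scst_example_exec_done(void *data, char *sense,
+ *		int result, int resid)
+ *	{
+ *		struct scst_cmd *cmd = data;
+ *
+ *		TRACE_DBG("cmd %p finished, result %x", cmd, result);
+ *	}
+ *
+ *	rc = scst_scsi_exec_async(cmd, scst_example_exec_done);
+ */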
+
+/**
+ * scst_copy_sg() - copy data between the command's SGs
+ *
+ * Copies data between cmd->tgt_sg and cmd->sg in direction defined by
+ * copy_dir parameter.
+ */
+void scst_copy_sg(struct scst_cmd *cmd, enum scst_sg_copy_dir copy_dir)
+{
+ struct scatterlist *src_sg, *dst_sg;
+ unsigned int to_copy;
+ int atomic = scst_cmd_atomic(cmd);
+
+ if (copy_dir == SCST_SG_COPY_FROM_TARGET) {
+ if (cmd->data_direction != SCST_DATA_BIDI) {
+ src_sg = cmd->tgt_sg;
+ dst_sg = cmd->sg;
+ to_copy = cmd->bufflen;
+ } else {
+ TRACE_MEM("BIDI cmd %p", cmd);
+ src_sg = cmd->tgt_in_sg;
+ dst_sg = cmd->in_sg;
+ to_copy = cmd->in_bufflen;
+ }
+ } else {
+ src_sg = cmd->sg;
+ dst_sg = cmd->tgt_sg;
+ to_copy = cmd->resp_data_len;
+ }
+
+ TRACE_MEM("cmd %p, copy_dir %d, src_sg %p, dst_sg %p, to_copy %lld",
+ cmd, copy_dir, src_sg, dst_sg, (long long)to_copy);
+
+ if (unlikely(src_sg == NULL) || unlikely(dst_sg == NULL)) {
+ /*
+ * It can happen, e.g., with scst_user for a cmd with delayed
+ * alloc which failed with a Check Condition.
+ */
+ goto out;
+ }
+
+ sg_copy(dst_sg, src_sg, 0, to_copy,
+ atomic ? KM_SOFTIRQ0 : KM_USER0,
+ atomic ? KM_SOFTIRQ1 : KM_USER1);
+
+out:
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_copy_sg);
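+
+/*
+ * Usage sketch (not taken from an in-tree target driver): a driver that
+ * allocated its own cmd->tgt_sg (tgt_data_buf_alloced set) and received
+ * WRITE data into it can move that data to the backend buffer with
+ *
+ *	scst_copy_sg(cmd, SCST_SG_COPY_FROM_TARGET);
+ *
+ * which copies cmd->tgt_sg into cmd->sg (tgt_in_sg into in_sg for
+ * bidirectional commands).
+ */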
+
+static const int SCST_CDB_LENGTH[8] = { 6, 10, 10, -1, 16, 12, -1, -1 };
+
+#define SCST_CDB_GROUP(opcode) ((opcode >> 5) & 0x7)
+#define SCST_GET_CDB_LEN(opcode) SCST_CDB_LENGTH[SCST_CDB_GROUP(opcode)]
+
+int scst_get_cdb_len(const uint8_t *cdb)
+{
+ return SCST_GET_CDB_LEN(cdb[0]);
+}
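+
+/*
+ * Example: READ(10) has opcode 0x28, so SCST_CDB_GROUP(0x28) =
+ * (0x28 >> 5) & 0x7 = 1 and scst_get_cdb_len() returns
+ * SCST_CDB_LENGTH[1] = 10. The reserved/vendor groups 3, 6 and 7
+ * yield -1 (unknown length).
+ */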
+
+/*
+ * The get_trans_len_x() helpers extract x bytes from the CDB as the
+ * transfer length, starting at offset off.
+ */
+
+static int get_trans_cdb_len_10(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->cdb_len = 10;
+ cmd->bufflen = 0;
+ return 0;
+}
+
+static int get_trans_len_block_limit(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = 6;
+ return 0;
+}
+
+static int get_trans_len_read_capacity(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = READ_CAP_LEN;
+ return 0;
+}
+
+static int get_trans_len_serv_act_in(struct scst_cmd *cmd, uint8_t off)
+{
+ int res = 0;
+
+ if ((cmd->cdb[1] & 0x1f) == SAI_READ_CAPACITY_16) {
+ cmd->op_name = "READ CAPACITY(16)";
+ cmd->bufflen = READ_CAP16_LEN;
+ cmd->op_flags |= SCST_IMPLICIT_HQ|SCST_REG_RESERVE_ALLOWED;
+ } else
+ cmd->op_flags |= SCST_UNKNOWN_LENGTH;
+ return res;
+}
+
+static int get_trans_len_single(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = 1;
+ return 0;
+}
+
+static int get_trans_len_read_pos(struct scst_cmd *cmd, uint8_t off)
+{
+ uint8_t *p = (uint8_t *)cmd->cdb + off;
+ int res = 0;
+
+ cmd->bufflen = 0;
+ cmd->bufflen |= ((u32)p[0]) << 8;
+ cmd->bufflen |= ((u32)p[1]);
+
+ switch (cmd->cdb[1] & 0x1f) {
+ case 0:
+ case 1:
+ case 6:
+ if (cmd->bufflen != 0) {
+ PRINT_ERROR("READ POSITION: Invalid non-zero (%d) "
+ "allocation length for service action %x",
+ cmd->bufflen, cmd->cdb[1] & 0x1f);
+ goto out_inval;
+ }
+ break;
+ }
+
+ switch (cmd->cdb[1] & 0x1f) {
+ case 0:
+ case 1:
+ cmd->bufflen = 20;
+ break;
+ case 6:
+ cmd->bufflen = 32;
+ break;
+ case 8:
+ cmd->bufflen = max(28, cmd->bufflen);
+ break;
+ default:
+ PRINT_ERROR("READ POSITION: Invalid service action %x",
+ cmd->cdb[1] & 0x1f);
+ goto out_inval;
+ }
+
+out:
+ return res;
+
+out_inval:
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+ res = 1;
+ goto out;
+}
+
+static int get_trans_len_prevent_allow_medium_removal(struct scst_cmd *cmd,
+ uint8_t off)
+{
+ if ((cmd->cdb[4] & 3) == 0)
+ cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+ return 0;
+}
+
+static int get_trans_len_start_stop(struct scst_cmd *cmd, uint8_t off)
+{
+ if ((cmd->cdb[4] & 0xF1) == 0x1)
+ cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+ return 0;
+}
+
+static int get_trans_len_3_read_elem_stat(struct scst_cmd *cmd, uint8_t off)
+{
+ const uint8_t *p = cmd->cdb + off;
+
+ cmd->bufflen = 0;
+ cmd->bufflen |= ((u32)p[0]) << 16;
+ cmd->bufflen |= ((u32)p[1]) << 8;
+ cmd->bufflen |= ((u32)p[2]);
+
+ if ((cmd->cdb[6] & 0x2) == 0x2)
+ cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+
+ return 0;
+}
+
+static int get_trans_len_1(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = (u32)cmd->cdb[off];
+ return 0;
+}
+
+static int get_trans_len_1_256(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = (u32)cmd->cdb[off];
+ if (cmd->bufflen == 0)
+ cmd->bufflen = 256;
+ return 0;
+}
+
+static int get_trans_len_2(struct scst_cmd *cmd, uint8_t off)
+{
+ const uint8_t *p = cmd->cdb + off;
+
+ cmd->bufflen = 0;
+ cmd->bufflen |= ((u32)p[0]) << 8;
+ cmd->bufflen |= ((u32)p[1]);
+
+ return 0;
+}
+
+static int get_trans_len_3(struct scst_cmd *cmd, uint8_t off)
+{
+ const uint8_t *p = cmd->cdb + off;
+
+ cmd->bufflen = 0;
+ cmd->bufflen |= ((u32)p[0]) << 16;
+ cmd->bufflen |= ((u32)p[1]) << 8;
+ cmd->bufflen |= ((u32)p[2]);
+
+ return 0;
+}
+
+static int get_trans_len_4(struct scst_cmd *cmd, uint8_t off)
+{
+ const uint8_t *p = cmd->cdb + off;
+
+ cmd->bufflen = 0;
+ cmd->bufflen |= ((u32)p[0]) << 24;
+ cmd->bufflen |= ((u32)p[1]) << 16;
+ cmd->bufflen |= ((u32)p[2]) << 8;
+ cmd->bufflen |= ((u32)p[3]);
+
+ return 0;
+}
+
+static int get_trans_len_none(struct scst_cmd *cmd, uint8_t off)
+{
+ cmd->bufflen = 0;
+ return 0;
+}
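+
+/*
+ * Worked example, assuming the op table maps READ(10) to get_trans_len_2
+ * with off = 7: for the CDB 28 00 00 00 00 08 00 00 10 00 the two bytes
+ * at offset 7 give bufflen = (0x00 << 8) | 0x10 = 16, i.e. 16 blocks
+ * before the generic parse routines scale it by the block size.
+ */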
+
+/**
+ * scst_get_cdb_info() - fill various info about the command's CDB
+ *
+ * Description:
+ * Fills various info about the command's CDB in the corresponding fields
+ * in the command.
+ *
+ * Returns: 0 on success, <0 if command is unknown, >0 if command
+ * is invalid.
+ */
+int scst_get_cdb_info(struct scst_cmd *cmd)
+{
+ int dev_type = cmd->dev->type;
+ int i, res = 0;
+ uint8_t op;
+ const struct scst_sdbops *ptr = NULL;
+
+ op = cmd->cdb[0]; /* bare opcode byte */
+
+ TRACE_DBG("opcode=%02x, cdblen=%d bytes, tblsize=%d, "
+ "dev_type=%d", op, SCST_GET_CDB_LEN(op), SCST_CDB_TBL_SIZE,
+ dev_type);
+
+ i = scst_scsi_op_list[op];
+ while (i < SCST_CDB_TBL_SIZE && scst_scsi_op_table[i].ops == op) {
+ if (scst_scsi_op_table[i].devkey[dev_type] != SCST_CDB_NOTSUPP) {
+ ptr = &scst_scsi_op_table[i];
+ TRACE_DBG("op = 0x%02x+'%c%c%c%c%c%c%c%c%c%c'+<%s>",
+ ptr->ops, ptr->devkey[0], /* disk */
+ ptr->devkey[1], /* tape */
+ ptr->devkey[2], /* printer */
+ ptr->devkey[3], /* cpu */
+ ptr->devkey[4], /* cdr */
+ ptr->devkey[5], /* cdrom */
+ ptr->devkey[6], /* scanner */
+ ptr->devkey[7], /* worm */
+ ptr->devkey[8], /* changer */
+ ptr->devkey[9], /* commdev */
+ ptr->op_name);
+ TRACE_DBG("direction=%d flags=%d off=%d",
+ ptr->direction,
+ ptr->flags,
+ ptr->off);
+ break;
+ }
+ i++;
+ }
+
+ if (unlikely(ptr == NULL)) {
+ /* opcode not found, or not supported for this device type */
+ TRACE(TRACE_MINOR, "Unknown opcode 0x%x for type %d", op,
+ dev_type);
+ res = -1;
+ goto out;
+ }
+
+ cmd->cdb_len = SCST_GET_CDB_LEN(op);
+ cmd->op_name = ptr->op_name;
+ cmd->data_direction = ptr->direction;
+ cmd->op_flags = ptr->flags | SCST_INFO_VALID;
+ res = (*ptr->get_trans_len)(cmd, ptr->off);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_get_cdb_info);
+
+/* Packs SCST LUN back to SCSI form */
+uint64_t scst_pack_lun(const uint64_t lun, unsigned int addr_method)
+{
+
+ uint64_t res = 0;
+ uint16_t *p = (uint16_t *)&res;
+
+ res = lun;
+
+ if ((addr_method == SCST_LUN_ADDR_METHOD_FLAT) && (lun != 0)) {
+ /*
+ * Flat space: luns other than 0 should use flat space
+ * addressing method.
+ */
+ *p = 0x7fff & *p;
+ *p = 0x4000 | *p;
+ }
+ /* Default is to use peripheral device addressing mode */
+
+ *p = cpu_to_be16(*p);
+ return res;
+}
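+
+/*
+ * Example: packing LUN 5 with SCST_LUN_ADDR_METHOD_FLAT sets the two
+ * address method bits of the first byte to 01b, producing the big-endian
+ * byte pair 0x40 0x05; with the default peripheral method the pair is
+ * simply 0x00 0x05.
+ */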
+
+/*
+ * Routine to extract a lun number from an 8-byte LUN structure
+ * in network byte order (BE).
+ * (see SAM-2, Section 4.12.3 page 40)
+ * Supports the peripheral, flat space and logical unit addressing methods.
+ */
+uint64_t scst_unpack_lun(const uint8_t *lun, int len)
+{
+ uint64_t res = NO_SUCH_LUN;
+ int address_method;
+
+ TRACE_BUFF_FLAG(TRACE_DEBUG, "Raw LUN", lun, len);
+
+ if (unlikely(len < 2)) {
+ PRINT_ERROR("Illegal lun length %d, expected 2 bytes or "
+ "more", len);
+ goto out;
+ }
+
+ if (len > 2) {
+ switch (len) {
+ case 8:
+ if ((*((uint64_t *)lun) &
+ __constant_cpu_to_be64(0x0000FFFFFFFFFFFFLL)) != 0)
+ goto out_err;
+ break;
+ case 4:
+ if (*((uint16_t *)&lun[2]) != 0)
+ goto out_err;
+ break;
+ case 6:
+ if (*((uint32_t *)&lun[2]) != 0)
+ goto out_err;
+ break;
+ default:
+ goto out_err;
+ }
+ }
+
+ address_method = (*lun) >> 6; /* high 2 bits of byte 0 */
+ switch (address_method) {
+ case 0: /* peripheral device addressing method */
+#if 0
+ if (*lun) {
+ PRINT_ERROR("Illegal BUS INDENTIFIER in LUN "
+ "peripheral device addressing method 0x%02x, "
+ "expected 0", *lun);
+ break;
+ }
+ res = *(lun + 1);
+ break;
+#else
+ /*
+ * Looks like it's legal to use it as flat space addressing
+ * method as well
+ */
+
+ /* fall through */
+#endif
+
+ case 1: /* flat space addressing method */
+ res = *(lun + 1) | (((*lun) & 0x3f) << 8);
+ break;
+
+ case 2: /* logical unit addressing method */
+ if (*lun & 0x3f) {
+ PRINT_ERROR("Illegal BUS NUMBER in LUN logical unit "
+ "addressing method 0x%02x, expected 0",
+ *lun & 0x3f);
+ break;
+ }
+ if (*(lun + 1) & 0xe0) {
+ PRINT_ERROR("Illegal TARGET in LUN logical unit "
+ "addressing method 0x%02x, expected 0",
+ (*(lun + 1) & 0xf8) >> 5);
+ break;
+ }
+ res = *(lun + 1) & 0x1f;
+ break;
+
+ case 3: /* extended logical unit addressing method */
+ default:
+ PRINT_ERROR("Unimplemented LUN addressing method %u",
+ address_method);
+ break;
+ }
+
+out:
+ return res;
+
+out_err:
+ PRINT_ERROR("%s", "Multi-level LUN unimplemented");
+ goto out;
+}
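+
+/*
+ * Examples of the supported single-level encodings (first two bytes of
+ * the LUN field):
+ *	00 05 - peripheral device addressing, LUN 5
+ *	40 05 - flat space addressing, LUN 5
+ *	45 02 - flat space addressing, LUN 0x502
+ *	80 07 - logical unit addressing, bus 0, target 0, LUN 7
+ */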
+
+/**
+ ** Generic parse() support routines.
+ ** Done via pointer on functions to avoid unneeded dereferences on
+ ** the fast path.
+ **/
+
+/**
+ * scst_calc_block_shift() - calculate block shift
+ *
+ * Calculates and returns block shift for the given sector size
+ */
+int scst_calc_block_shift(int sector_size)
+{
+ int block_shift = 0;
+ int t;
+
+ if (sector_size == 0)
+ sector_size = 512;
+
+ t = sector_size;
+ while (1) {
+ if ((t & 1) != 0)
+ break;
+ t >>= 1;
+ block_shift++;
+ }
+ if (block_shift < 9) {
+ PRINT_ERROR("Wrong sector size %d", sector_size);
+ block_shift = -1;
+ }
+ return block_shift;
+}
+EXPORT_SYMBOL_GPL(scst_calc_block_shift);
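+
+/*
+ * Example: a sector size of 512 gives a block shift of 9 and 4096 gives
+ * 12, while an unaligned value such as 520 stops at shift 3 and is
+ * rejected (-1) because the result is below 9.
+ */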
+
+/**
+ * scst_sbc_generic_parse() - generic SBC parsing
+ *
+ * Generic parse() for SBC (disk) devices
+ */
+int scst_sbc_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd))
+{
+ int res = 0;
+
+ /*
+ * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+ * therefore change them only if necessary
+ */
+
+ TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+ cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+ switch (cmd->cdb[0]) {
+ case VERIFY_6:
+ case VERIFY:
+ case VERIFY_12:
+ case VERIFY_16:
+ if ((cmd->cdb[1] & BYTCHK) == 0) {
+ cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+ cmd->bufflen = 0;
+ goto set_timeout;
+ } else
+ cmd->data_len = 0;
+ break;
+ default:
+ /* It's all good */
+ break;
+ }
+
+ if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED) {
+ /*
+ * No need for locks here, since *_detach() cannot be
+ * called while there are outstanding commands.
+ */
+ cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+ }
+
+set_timeout:
+ if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+ cmd->timeout = SCST_GENERIC_DISK_REG_TIMEOUT;
+ else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_DISK_SMALL_TIMEOUT;
+ else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_DISK_LONG_TIMEOUT;
+
+ TRACE_DBG("res %d, bufflen %d, data_len %d, direct %d",
+ res, cmd->bufflen, cmd->data_len, cmd->data_direction);
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_sbc_generic_parse);
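+
+/*
+ * Example, assuming READ(10) is marked SCST_TRANSFER_LEN_TYPE_FIXED in
+ * the op table: for a READ(10) of 8 blocks on a handler whose
+ * get_block_shift() returns 9 (512-byte sectors), bufflen becomes
+ * 8 << 9 = 4096 bytes and, since the opcode carries neither
+ * SCST_SMALL_TIMEOUT nor SCST_LONG_TIMEOUT, the timeout is set to
+ * SCST_GENERIC_DISK_REG_TIMEOUT.
+ */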
+
+/**
+ * scst_cdrom_generic_parse() - generic MMC parse
+ *
+ * Generic parse() for MMC (cdrom) devices
+ */
+int scst_cdrom_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd))
+{
+ int res = 0;
+
+ /*
+ * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+ * therefore change them only if necessary
+ */
+
+ TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+ cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+ cmd->cdb[1] &= 0x1f;
+
+ switch (cmd->cdb[0]) {
+ case VERIFY_6:
+ case VERIFY:
+ case VERIFY_12:
+ case VERIFY_16:
+ if ((cmd->cdb[1] & BYTCHK) == 0) {
+ cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+ cmd->bufflen = 0;
+ goto set_timeout;
+ }
+ break;
+ default:
+ /* It's all good */
+ break;
+ }
+
+ if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED)
+ cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+
+set_timeout:
+ if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+ cmd->timeout = SCST_GENERIC_CDROM_REG_TIMEOUT;
+ else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_CDROM_SMALL_TIMEOUT;
+ else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_CDROM_LONG_TIMEOUT;
+
+ TRACE_DBG("res=%d, bufflen=%d, direct=%d", res, cmd->bufflen,
+ cmd->data_direction);
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_cdrom_generic_parse);
+
+/**
+ * scst_modisk_generic_parse() - generic MO parse
+ *
+ * Generic parse() for MO disk devices
+ */
+int scst_modisk_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_shift)(struct scst_cmd *cmd))
+{
+ int res = 0;
+
+ /*
+ * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+ * therefore change them only if necessary
+ */
+
+ TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+ cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+ cmd->cdb[1] &= 0x1f;
+
+ switch (cmd->cdb[0]) {
+ case VERIFY_6:
+ case VERIFY:
+ case VERIFY_12:
+ case VERIFY_16:
+ if ((cmd->cdb[1] & BYTCHK) == 0) {
+ cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+ cmd->bufflen = 0;
+ goto set_timeout;
+ }
+ break;
+ default:
+ /* It's all good */
+ break;
+ }
+
+ if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED)
+ cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+
+set_timeout:
+ if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+ cmd->timeout = SCST_GENERIC_MODISK_REG_TIMEOUT;
+ else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_MODISK_SMALL_TIMEOUT;
+ else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_MODISK_LONG_TIMEOUT;
+
+ TRACE_DBG("res=%d, bufflen=%d, direct=%d", res, cmd->bufflen,
+ cmd->data_direction);
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_modisk_generic_parse);
+
+/**
+ * scst_tape_generic_parse() - generic tape parse
+ *
+ * Generic parse() for tape devices
+ */
+int scst_tape_generic_parse(struct scst_cmd *cmd,
+ int (*get_block_size)(struct scst_cmd *cmd))
+{
+ int res = 0;
+
+ /*
+ * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+ * therefore change them only if necessary
+ */
+
+ TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+ cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+ if (cmd->cdb[0] == READ_POSITION) {
+ int tclp = cmd->cdb[1] & 4;
+ int long_bit = cmd->cdb[1] & 2;
+ int bt = cmd->cdb[1] & 1;
+
+ if ((tclp == long_bit) && (!bt || !long_bit)) {
+ cmd->bufflen =
+ tclp ? POSITION_LEN_LONG : POSITION_LEN_SHORT;
+ cmd->data_direction = SCST_DATA_READ;
+ } else {
+ cmd->bufflen = 0;
+ cmd->data_direction = SCST_DATA_NONE;
+ }
+ }
+
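+ /*
+ * This single AND appears to rely on SCST_TRANSFER_LEN_TYPE_FIXED
+ * occupying bit 0, the same position as the Fixed bit in CDB byte 1
+ * of sequential-access commands, so both conditions are checked at
+ * once.
+ */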
+ if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED & cmd->cdb[1])
+ cmd->bufflen = cmd->bufflen * get_block_size(cmd);
+
+ if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+ cmd->timeout = SCST_GENERIC_TAPE_REG_TIMEOUT;
+ else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_TAPE_SMALL_TIMEOUT;
+ else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_TAPE_LONG_TIMEOUT;
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_tape_generic_parse);
+
+static int scst_null_parse(struct scst_cmd *cmd)
+{
+ int res = 0;
+
+ /*
+ * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+ * therefore change them only if necessary
+ */
+
+ TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+ cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+#if 0
+ switch (cmd->cdb[0]) {
+ default:
+ /* It's all good */
+ break;
+ }
+#endif
+ TRACE_DBG("res %d bufflen %d direct %d",
+ res, cmd->bufflen, cmd->data_direction);
+ return res;
+}
+
+/**
+ * scst_changer_generic_parse() - generic changer parse
+ *
+ * Generic parse() for changer devices
+ */
+int scst_changer_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd))
+{
+ int res = scst_null_parse(cmd);
+
+ if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_CHANGER_LONG_TIMEOUT;
+ else
+ cmd->timeout = SCST_GENERIC_CHANGER_TIMEOUT;
+
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_changer_generic_parse);
+
+/**
+ * scst_processor_generic_parse - generic SCSI processor parse
+ *
+ * Generic parse() for SCSI processor devices
+ */
+int scst_processor_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd))
+{
+ int res = scst_null_parse(cmd);
+
+ if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_PROCESSOR_LONG_TIMEOUT;
+ else
+ cmd->timeout = SCST_GENERIC_PROCESSOR_TIMEOUT;
+
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_processor_generic_parse);
+
+/**
+ * scst_raid_generic_parse() - generic RAID parse
+ *
+ * Generic parse() for RAID devices
+ */
+int scst_raid_generic_parse(struct scst_cmd *cmd,
+ int (*nothing)(struct scst_cmd *cmd))
+{
+ int res = scst_null_parse(cmd);
+
+ if (cmd->op_flags & SCST_LONG_TIMEOUT)
+ cmd->timeout = SCST_GENERIC_RAID_LONG_TIMEOUT;
+ else
+ cmd->timeout = SCST_GENERIC_RAID_TIMEOUT;
+
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_raid_generic_parse);
+
+/**
+ ** Generic dev_done() support routines.
+ ** Done via pointer on functions to avoid unneeded dereferences on
+ ** the fast path.
+ **/
+
+/**
+ * scst_block_generic_dev_done() - generic SBC dev_done
+ *
+ * Generic dev_done() for block (SBC) devices
+ */
+int scst_block_generic_dev_done(struct scst_cmd *cmd,
+ void (*set_block_shift)(struct scst_cmd *cmd, int block_shift))
+{
+ int opcode = cmd->cdb[0];
+ int status = cmd->status;
+ int res = SCST_CMD_STATE_DEFAULT;
+
+ /*
+ * SCST sets good defaults for cmd->is_send_status and
+ * cmd->resp_data_len based on cmd->status and cmd->data_direction,
+ * therefore change them only if necessary
+ */
+
+ if ((status == SAM_STAT_GOOD) || (status == SAM_STAT_CONDITION_MET)) {
+ switch (opcode) {
+ case READ_CAPACITY:
+ {
+ /* Always keep track of disk capacity */
+ int buffer_size, sector_size, sh;
+ uint8_t *buffer;
+
+ buffer_size = scst_get_buf_first(cmd, &buffer);
+ if (unlikely(buffer_size <= 0)) {
+ if (buffer_size < 0) {
+ PRINT_ERROR("%s: Unable to get the"
+ " buffer (%d)", __func__, buffer_size);
+ }
+ goto out;
+ }
+
+ sector_size =
+ ((buffer[4] << 24) | (buffer[5] << 16) |
+ (buffer[6] << 8) | (buffer[7] << 0));
+ scst_put_buf(cmd, buffer);
+ if (sector_size != 0)
+ sh = scst_calc_block_shift(sector_size);
+ else
+ sh = 0;
+ set_block_shift(cmd, sh);
+ TRACE_DBG("block_shift %d", sh);
+ break;
+ }
+ default:
+ /* It's all good */
+ break;
+ }
+ }
+
+ TRACE_DBG("cmd->is_send_status=%x, cmd->resp_data_len=%d, "
+ "res=%d", cmd->is_send_status, cmd->resp_data_len, res);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_block_generic_dev_done);
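+
+/*
+ * Example: a READ CAPACITY(10) response whose bytes 4-7 are 00 00 02 00
+ * reports a 512-byte block length, so scst_calc_block_shift(512) = 9 is
+ * handed to set_block_shift().
+ */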
+
+/**
+ * scst_tape_generic_dev_done() - generic tape dev done
+ *
+ * Generic dev_done() for tape devices
+ */
+int scst_tape_generic_dev_done(struct scst_cmd *cmd,
+ void (*set_block_size)(struct scst_cmd *cmd, int block_shift))
+{
+ int opcode = cmd->cdb[0];
+ int res = SCST_CMD_STATE_DEFAULT;
+ int buffer_size, bs;
+ uint8_t *buffer = NULL;
+
+ /*
+ * SCST sets good defaults for cmd->is_send_status and
+ * cmd->resp_data_len based on cmd->status and cmd->data_direction,
+ * therefore change them only if necessary
+ */
+
+ switch (opcode) {
+ case MODE_SENSE:
+ case MODE_SELECT:
+ buffer_size = scst_get_buf_first(cmd, &buffer);
+ if (unlikely(buffer_size <= 0)) {
+ if (buffer_size < 0) {
+ PRINT_ERROR("%s: Unable to get the buffer (%d)",
+ __func__, buffer_size);
+ }
+ goto out;
+ }
+ break;
+ }
+
+ switch (opcode) {
+ case MODE_SENSE:
+ TRACE_DBG("%s", "MODE_SENSE");
+ if ((cmd->cdb[2] & 0xC0) == 0) {
+ if (buffer[3] == 8) {
+ bs = (buffer[9] << 16) |
+ (buffer[10] << 8) | buffer[11];
+ set_block_size(cmd, bs);
+ }
+ }
+ break;
+ case MODE_SELECT:
+ TRACE_DBG("%s", "MODE_SELECT");
+ if (buffer[3] == 8) {
+ bs = (buffer[9] << 16) | (buffer[10] << 8) |
+ (buffer[11]);
+ set_block_size(cmd, bs);
+ }
+ break;
+ default:
+ /* It's all good */
+ break;
+ }
+
+ switch (opcode) {
+ case MODE_SENSE:
+ case MODE_SELECT:
+ scst_put_buf(cmd, buffer);
+ break;
+ }
+
+out:
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_tape_generic_dev_done);
+
+static void scst_check_internal_sense(struct scst_device *dev, int result,
+ uint8_t *sense, int sense_len)
+{
+
+ if (host_byte(result) == DID_RESET) {
+ int sl;
+ TRACE(TRACE_MGMT, "DID_RESET received for device %s, "
+ "triggering reset UA", dev->virt_name);
+ sl = scst_set_sense(sense, sense_len, dev->d_sense,
+ SCST_LOAD_SENSE(scst_sense_reset_UA));
+ scst_dev_check_set_UA(dev, NULL, sense, sl);
+ } else if ((status_byte(result) == CHECK_CONDITION) &&
+ scst_is_ua_sense(sense, sense_len))
+ scst_dev_check_set_UA(dev, NULL, sense, sense_len);
+ return;
+}
+
+/**
+ * scst_to_dma_dir() - translate SCST's data direction to DMA direction
+ *
+ * Translates SCST's data direction to DMA one from backend storage
+ * perspective.
+ */
+enum dma_data_direction scst_to_dma_dir(int scst_dir)
+{
+ static const enum dma_data_direction tr_tbl[] = { DMA_NONE,
+ DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL, DMA_NONE };
+
+ return tr_tbl[scst_dir];
+}
+EXPORT_SYMBOL(scst_to_dma_dir);
+
+/*
+ * scst_to_tgt_dma_dir() - translate SCST data direction to DMA direction
+ *
+ * Translates SCST data direction to DMA data direction from the perspective
+ * of the target device.
+ */
+enum dma_data_direction scst_to_tgt_dma_dir(int scst_dir)
+{
+ static const enum dma_data_direction tr_tbl[] = { DMA_NONE,
+ DMA_FROM_DEVICE, DMA_TO_DEVICE, DMA_BIDIRECTIONAL, DMA_NONE };
+
+ return tr_tbl[scst_dir];
+}
+EXPORT_SYMBOL(scst_to_tgt_dma_dir);
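+
+/*
+ * Example, assuming the usual SCST_DATA_* ordering (UNKNOWN, WRITE, READ,
+ * BIDI, NONE) used to index the tables above: a WRITE-direction command
+ * maps to DMA_TO_DEVICE towards the backend storage, but to
+ * DMA_FROM_DEVICE from the target adapter's point of view, since the
+ * adapter receives that data from the initiator.
+ */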
+
+/**
+ * scst_obtain_device_parameters() - obtain device control parameters
+ *
+ * Issues a MODE SENSE for control mode page data and sets the corresponding
+ * dev's parameter from it. Returns 0 on success and not 0 otherwise.
+ */
+int scst_obtain_device_parameters(struct scst_device *dev)
+{
+ int rc, i;
+ uint8_t cmd[16];
+ uint8_t buffer[4+0x0A];
+ uint8_t sense_buffer[SCSI_SENSE_BUFFERSIZE];
+
+ EXTRACHECKS_BUG_ON(dev->scsi_dev == NULL);
+
+ for (i = 0; i < 5; i++) {
+ /* Get control mode page */
+ memset(cmd, 0, sizeof(cmd));
+#if 0
+ cmd[0] = MODE_SENSE_10;
+ cmd[1] = 0;
+ cmd[2] = 0x0A;
+ cmd[8] = sizeof(buffer); /* it's < 256 */
+#else
+ cmd[0] = MODE_SENSE;
+ cmd[1] = 8; /* DBD */
+ cmd[2] = 0x0A;
+ cmd[4] = sizeof(buffer);
+#endif
+
+ memset(buffer, 0, sizeof(buffer));
+ memset(sense_buffer, 0, sizeof(sense_buffer));
+
+ TRACE(TRACE_SCSI, "%s", "Doing internal MODE_SENSE");
+ rc = scsi_execute(dev->scsi_dev, cmd, SCST_DATA_READ, buffer,
+ sizeof(buffer), sense_buffer, 15, 0, 0
+ , NULL
+ );
+
+ TRACE_DBG("MODE_SENSE done: %x", rc);
+
+ if (scsi_status_is_good(rc)) {
+ int q;
+
+ PRINT_BUFF_FLAG(TRACE_SCSI,
+ "Returned control mode page data",
+ buffer, sizeof(buffer));
+
+ dev->tst = buffer[4+2] >> 5;
+ q = buffer[4+3] >> 4;
+ if (q > SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER) {
+ PRINT_ERROR("Too big QUEUE ALG %x, dev %s",
+ q, dev->virt_name);
+ }
+ dev->queue_alg = q;
+ dev->swp = (buffer[4+4] & 0x8) >> 3;
+ dev->tas = (buffer[4+5] & 0x40) >> 6;
+ dev->d_sense = (buffer[4+2] & 0x4) >> 2;
+
+ /*
+ * Unfortunately, the SCSI ML doesn't provide a way to
+ * specify a command's task attribute, so we can only
+ * rely on the device's restricted reordering. The Linux
+ * I/O subsystem doesn't reorder pass-through (PC) requests.
+ */
+ dev->has_own_order_mgmt = !dev->queue_alg;
+
+ PRINT_INFO("Device %s: TST %x, QUEUE ALG %x, SWP %x, "
+ "TAS %x, D_SENSE %d, has_own_order_mgmt %d",
+ dev->virt_name, dev->tst, dev->queue_alg,
+ dev->swp, dev->tas, dev->d_sense,
+ dev->has_own_order_mgmt);
+
+ goto out;
+ } else {
+ scst_check_internal_sense(dev, rc, sense_buffer,
+ sizeof(sense_buffer));
+#if 0
+ if ((status_byte(rc) == CHECK_CONDITION) &&
+ SCST_SENSE_VALID(sense_buffer)) {
+#else
+ /*
+ * 3ware controller is buggy and returns CONDITION_GOOD
+ * instead of CHECK_CONDITION
+ */
+ if (SCST_SENSE_VALID(sense_buffer)) {
+#endif
+ PRINT_BUFF_FLAG(TRACE_SCSI,
+ "Returned sense data",
+ sense_buffer, sizeof(sense_buffer));
+ if (scst_analyze_sense(sense_buffer,
+ sizeof(sense_buffer),
+ SCST_SENSE_KEY_VALID,
+ ILLEGAL_REQUEST, 0, 0)) {
+ PRINT_INFO("Device %s doesn't support "
+ "MODE SENSE", dev->virt_name);
+ break;
+ } else if (scst_analyze_sense(sense_buffer,
+ sizeof(sense_buffer),
+ SCST_SENSE_KEY_VALID,
+ NOT_READY, 0, 0)) {
+ PRINT_ERROR("Device %s not ready",
+ dev->virt_name);
+ break;
+ }
+ } else {
+ PRINT_INFO("Internal MODE SENSE to "
+ "device %s failed: %x",
+ dev->virt_name, rc);
+ PRINT_BUFF_FLAG(TRACE_SCSI, "MODE SENSE sense",
+ sense_buffer, sizeof(sense_buffer));
+ switch (host_byte(rc)) {
+ case DID_RESET:
+ case DID_ABORT:
+ case DID_SOFT_ERROR:
+ break;
+ default:
+ goto brk;
+ }
+ switch (driver_byte(rc)) {
+ case DRIVER_BUSY:
+ case DRIVER_SOFT:
+ break;
+ default:
+ goto brk;
+ }
+ }
+ }
+ }
+brk:
+ PRINT_WARNING("Unable to get device's %s control mode page, using "
+ "existing values/defaults: TST %x, QUEUE ALG %x, SWP %x, "
+ "TAS %x, D_SENSE %d, has_own_order_mgmt %d", dev->virt_name,
+ dev->tst, dev->queue_alg, dev->swp, dev->tas, dev->d_sense,
+ dev->has_own_order_mgmt);
+
+out:
+ return 0;
+}
+EXPORT_SYMBOL_GPL(scst_obtain_device_parameters);
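+
+/*
+ * Example of the decoding above: if the control mode page payload after
+ * the 4-byte header starts with 0A 0A 00 10, then TST = 0x00 >> 5 = 0,
+ * QUEUE ALG = 0x10 >> 4 = 1 (unrestricted reordering), D_SENSE = 0 and
+ * has_own_order_mgmt is cleared because queue_alg is non-zero.
+ */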
+
+/* Called under dev_lock and BH off */
+void scst_process_reset(struct scst_device *dev,
+ struct scst_session *originator, struct scst_cmd *exclude_cmd,
+ struct scst_mgmt_cmd *mcmd, bool setUA)
+{
+ struct scst_tgt_dev *tgt_dev;
+ struct scst_cmd *cmd, *tcmd;
+
+ /* Clear RESERVE'ation, if necessary */
+ if (dev->dev_reserved) {
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ TRACE_MGMT_DBG("Clearing RESERVE'ation for "
+ "tgt_dev LUN %lld",
+ (long long unsigned int)tgt_dev->lun);
+ clear_bit(SCST_TGT_DEV_RESERVED,
+ &tgt_dev->tgt_dev_flags);
+ }
+ dev->dev_reserved = 0;
+ /*
+ * There is no need to send RELEASE, since the device is going
+ * to be reset. Actually, since we can be running in a RESET TM
+ * function, sending it might be dangerous.
+ */
+ }
+
+ dev->dev_double_ua_possible = 1;
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ struct scst_session *sess = tgt_dev->sess;
+
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+ scst_free_all_UA(tgt_dev);
+
+ memset(tgt_dev->tgt_dev_sense, 0,
+ sizeof(tgt_dev->tgt_dev_sense));
+
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+ spin_lock_irq(&sess->sess_list_lock);
+
+ TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+ list_for_each_entry(cmd, &sess->sess_cmd_list,
+ sess_cmd_list_entry) {
+ if (cmd == exclude_cmd)
+ continue;
+ if ((cmd->tgt_dev == tgt_dev) ||
+ ((cmd->tgt_dev == NULL) &&
+ (cmd->lun == tgt_dev->lun))) {
+ scst_abort_cmd(cmd, mcmd,
+ (tgt_dev->sess != originator), 0);
+ }
+ }
+ spin_unlock_irq(&sess->sess_list_lock);
+ }
+
+ list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+ blocked_cmd_list_entry) {
+ if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+ list_del(&cmd->blocked_cmd_list_entry);
+ TRACE_MGMT_DBG("Adding aborted blocked cmd %p "
+ "to active cmd list", cmd);
+ spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+ }
+ }
+
+ if (setUA) {
+ uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+ int sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+ dev->d_sense, SCST_LOAD_SENSE(scst_sense_reset_UA));
+ scst_dev_check_set_local_UA(dev, exclude_cmd, sense_buffer, sl);
+ }
+ return;
+}
+
+/* No locks, no IRQ or IRQ-disabled context allowed */
+int scst_set_pending_UA(struct scst_cmd *cmd)
+{
+ int res = 0, i;
+ struct scst_tgt_dev_UA *UA_entry;
+ bool first = true, global_unlock = false;
+ struct scst_session *sess = cmd->sess;
+
+ TRACE_MGMT_DBG("Setting pending UA cmd %p", cmd);
+
+ spin_lock_bh(&cmd->tgt_dev->tgt_dev_lock);
+
+again:
+ /* UA list could be cleared behind us, so retest */
+ if (list_empty(&cmd->tgt_dev->UA_list)) {
+ TRACE_DBG("%s",
+ "SCST_TGT_DEV_UA_PENDING set, but UA_list empty");
+ res = -1;
+ goto out_unlock_tgt_dev_lock;
+ }
+
+ UA_entry = list_entry(cmd->tgt_dev->UA_list.next, typeof(*UA_entry),
+ UA_list_entry);
+
+ TRACE_DBG("next %p UA_entry %p",
+ cmd->tgt_dev->UA_list.next, UA_entry);
+
+ if (UA_entry->global_UA && first) {
+ TRACE_MGMT_DBG("Global UA %p detected", UA_entry);
+
+ spin_unlock_bh(&cmd->tgt_dev->tgt_dev_lock);
+
+ /*
+ * The outstanding cmd prevents activities from being suspended, so
+ * we can access sess->sess_tgt_dev_list_hash without any additional
+ * protection.
+ */
+
+ local_bh_disable();
+
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ /* Lockdep triggers a false positive here. */
+ spin_lock(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+ first = false;
+ global_unlock = true;
+ goto again;
+ }
+
+ if (scst_set_cmd_error_sense(cmd, UA_entry->UA_sense_buffer,
+ UA_entry->UA_valid_sense_len) != 0)
+ goto out_unlock;
+
+ cmd->ua_ignore = 1;
+
+ list_del(&UA_entry->UA_list_entry);
+
+ if (UA_entry->global_UA) {
+ for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ struct scst_tgt_dev_UA *ua;
+ list_for_each_entry(ua, &tgt_dev->UA_list,
+ UA_list_entry) {
+ if (ua->global_UA &&
+ memcmp(ua->UA_sense_buffer,
+ UA_entry->UA_sense_buffer,
+ sizeof(ua->UA_sense_buffer)) == 0) {
+ TRACE_MGMT_DBG("Freeing not "
+ "needed global UA %p",
+ ua);
+ list_del(&ua->UA_list_entry);
+ mempool_free(ua, scst_ua_mempool);
+ break;
+ }
+ }
+ }
+ }
+ }
+
+ mempool_free(UA_entry, scst_ua_mempool);
+
+ if (list_empty(&cmd->tgt_dev->UA_list)) {
+ clear_bit(SCST_TGT_DEV_UA_PENDING,
+ &cmd->tgt_dev->tgt_dev_flags);
+ }
+
+out_unlock:
+ if (global_unlock) {
+ for (i = TGT_DEV_HASH_SIZE-1; i >= 0; i--) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[i];
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry_reverse(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ spin_unlock(&tgt_dev->tgt_dev_lock);
+ }
+ }
+
+ local_bh_enable();
+ spin_lock_bh(&cmd->tgt_dev->tgt_dev_lock);
+ }
+
+out_unlock_tgt_dev_lock:
+ spin_unlock_bh(&cmd->tgt_dev->tgt_dev_lock);
+ return res;
+}
+
+/* Called under tgt_dev_lock and BH off */
+static void scst_alloc_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags)
+{
+ struct scst_tgt_dev_UA *UA_entry = NULL;
+
+ UA_entry = mempool_alloc(scst_ua_mempool, GFP_ATOMIC);
+ if (UA_entry == NULL) {
+ PRINT_CRIT_ERROR("%s", "UNIT ATTENTION memory "
+ "allocation failed. The UNIT ATTENTION "
+ "on some sessions will be missed");
+ PRINT_BUFFER("Lost UA", sense, sense_len);
+ goto out;
+ }
+ memset(UA_entry, 0, sizeof(*UA_entry));
+
+ UA_entry->global_UA = (flags & SCST_SET_UA_FLAG_GLOBAL) != 0;
+ if (UA_entry->global_UA)
+ TRACE_MGMT_DBG("Queuing global UA %p", UA_entry);
+
+ if (sense_len > (int)sizeof(UA_entry->UA_sense_buffer)) {
+ PRINT_WARNING("Sense truncated (needed %d), shall you increase "
+ "SCST_SENSE_BUFFERSIZE?", sense_len);
+ sense_len = sizeof(UA_entry->UA_sense_buffer);
+ }
+ memcpy(UA_entry->UA_sense_buffer, sense, sense_len);
+ UA_entry->UA_valid_sense_len = sense_len;
+
+ set_bit(SCST_TGT_DEV_UA_PENDING, &tgt_dev->tgt_dev_flags);
+
+ TRACE_MGMT_DBG("Adding new UA to tgt_dev %p", tgt_dev);
+
+ if (flags & SCST_SET_UA_FLAG_AT_HEAD)
+ list_add(&UA_entry->UA_list_entry, &tgt_dev->UA_list);
+ else
+ list_add_tail(&UA_entry->UA_list_entry, &tgt_dev->UA_list);
+
+out:
+ return;
+}
+
+/* tgt_dev_lock supposed to be held and BH off */
+static void __scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags)
+{
+ int skip_UA = 0;
+ struct scst_tgt_dev_UA *UA_entry_tmp;
+ int len = min((int)sizeof(UA_entry_tmp->UA_sense_buffer), sense_len);
+
+ list_for_each_entry(UA_entry_tmp, &tgt_dev->UA_list,
+ UA_list_entry) {
+ if (memcmp(sense, UA_entry_tmp->UA_sense_buffer, len) == 0) {
+ TRACE_MGMT_DBG("%s", "UA already exists");
+ skip_UA = 1;
+ break;
+ }
+ }
+
+ if (skip_UA == 0)
+ scst_alloc_set_UA(tgt_dev, sense, len, flags);
+ return;
+}
+
+void scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags)
+{
+
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+ __scst_check_set_UA(tgt_dev, sense, sense_len, flags);
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ return;
+}
+
+/* Called under dev_lock and BH off */
+void scst_dev_check_set_local_UA(struct scst_device *dev,
+ struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+ struct scst_tgt_dev *tgt_dev, *exclude_tgt_dev = NULL;
+
+ if (exclude != NULL)
+ exclude_tgt_dev = exclude->tgt_dev;
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if (tgt_dev != exclude_tgt_dev)
+ scst_check_set_UA(tgt_dev, sense, sense_len, 0);
+ }
+ return;
+}
+
+/* Called under dev_lock and BH off */
+void __scst_dev_check_set_UA(struct scst_device *dev,
+ struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+
+ TRACE_MGMT_DBG("Processing UA dev %p", dev);
+
+ /* Check for reset UA */
+ if (scst_analyze_sense(sense, sense_len, SCST_SENSE_ASC_VALID,
+ 0, SCST_SENSE_ASC_UA_RESET, 0))
+ scst_process_reset(dev,
+ (exclude != NULL) ? exclude->sess : NULL,
+ exclude, NULL, false);
+
+ scst_dev_check_set_local_UA(dev, exclude, sense, sense_len);
+ return;
+}
+
+/* Called under tgt_dev_lock or when tgt_dev is unused */
+static void scst_free_all_UA(struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_tgt_dev_UA *UA_entry, *t;
+
+ list_for_each_entry_safe(UA_entry, t,
+ &tgt_dev->UA_list, UA_list_entry) {
+ TRACE_MGMT_DBG("Clearing UA for tgt_dev LUN %lld",
+ (long long unsigned int)tgt_dev->lun);
+ list_del(&UA_entry->UA_list_entry);
+ mempool_free(UA_entry, scst_ua_mempool);
+ }
+ INIT_LIST_HEAD(&tgt_dev->UA_list);
+ clear_bit(SCST_TGT_DEV_UA_PENDING, &tgt_dev->tgt_dev_flags);
+ return;
+}
+
+/* No locks */
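+/*
+ * Scans the deferred and skipped-SN lists of tgt_dev for commands whose SN
+ * matches the currently expected SN. The first matching deferred command is
+ * returned to the caller and further matches are moved to the active command
+ * list; matching entries on skipped_sn_list only advance the expected SN
+ * (and are freed here if their owner has already finished with them).
+ * Returns NULL if nothing became runnable.
+ */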
+struct scst_cmd *__scst_check_deferred_commands(struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_cmd *res = NULL, *cmd, *t;
+ typeof(tgt_dev->expected_sn) expected_sn = tgt_dev->expected_sn;
+
+ spin_lock_irq(&tgt_dev->sn_lock);
+
+ if (unlikely(tgt_dev->hq_cmd_count != 0))
+ goto out_unlock;
+
+restart:
+ list_for_each_entry_safe(cmd, t, &tgt_dev->deferred_cmd_list,
+ sn_cmd_list_entry) {
+ EXTRACHECKS_BUG_ON(cmd->queue_type ==
+ SCST_CMD_QUEUE_HEAD_OF_QUEUE);
+ if (cmd->sn == expected_sn) {
+ TRACE_SN("Deferred command %p (sn %d, set %d) found",
+ cmd, cmd->sn, cmd->sn_set);
+ tgt_dev->def_cmd_count--;
+ list_del(&cmd->sn_cmd_list_entry);
+ if (res == NULL)
+ res = cmd;
+ else {
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ TRACE_SN("Adding cmd %p to active cmd list",
+ cmd);
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ }
+ }
+ }
+ if (res != NULL)
+ goto out_unlock;
+
+ list_for_each_entry(cmd, &tgt_dev->skipped_sn_list,
+ sn_cmd_list_entry) {
+ EXTRACHECKS_BUG_ON(cmd->queue_type ==
+ SCST_CMD_QUEUE_HEAD_OF_QUEUE);
+ if (cmd->sn == expected_sn) {
+ atomic_t *slot = cmd->sn_slot;
+ /*
+ * !! At this point any pointer in cmd, except !!
+ * !! sn_slot and sn_cmd_list_entry, could be !!
+ * !! already destroyed !!
+ */
+ TRACE_SN("cmd %p (tag %llu) with skipped sn %d found",
+ cmd,
+ (long long unsigned int)cmd->tag,
+ cmd->sn);
+ tgt_dev->def_cmd_count--;
+ list_del(&cmd->sn_cmd_list_entry);
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ if (test_and_set_bit(SCST_CMD_CAN_BE_DESTROYED,
+ &cmd->cmd_flags))
+ scst_destroy_put_cmd(cmd);
+ scst_inc_expected_sn(tgt_dev, slot);
+ expected_sn = tgt_dev->expected_sn;
+ spin_lock_irq(&tgt_dev->sn_lock);
+ goto restart;
+ }
+ }
+
+out_unlock:
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ return res;
+}
+
+/*****************************************************************
+ ** The following thr_data functions are necessary because the
+ ** kernel doesn't provide a better way to have thread-local
+ ** storage
+ *****************************************************************/
+
+/**
+ * scst_add_thr_data() - add the current thread's local data
+ *
+ * Adds data local to the current thread to tgt_dev
+ * (the data will be local to both the tgt_dev and the current thread).
+ */
+void scst_add_thr_data(struct scst_tgt_dev *tgt_dev,
+ struct scst_thr_data_hdr *data,
+ void (*free_fn) (struct scst_thr_data_hdr *data))
+{
+ data->owner_thr = current;
+ atomic_set(&data->ref, 1);
+ EXTRACHECKS_BUG_ON(free_fn == NULL);
+ data->free_fn = free_fn;
+ spin_lock(&tgt_dev->thr_data_lock);
+ list_add_tail(&data->thr_data_list_entry, &tgt_dev->thr_data_list);
+ spin_unlock(&tgt_dev->thr_data_lock);
+}
+EXPORT_SYMBOL_GPL(scst_add_thr_data);
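+
+/*
+ * A minimal usage sketch (the struct and function names below are purely
+ * illustrative, not part of SCST): a dev handler or target driver embeds the
+ * header in its own per-thread structure and registers it once per worker
+ * thread:
+ *
+ *	struct my_thr_data {
+ *		struct scst_thr_data_hdr hdr;
+ *		void *private_state;
+ *	};
+ *
+ *	static void my_free_thr_data(struct scst_thr_data_hdr *h)
+ *	{
+ *		kfree(container_of(h, struct my_thr_data, hdr));
+ *	}
+ *
+ *	struct my_thr_data *d = kzalloc(sizeof(*d), GFP_KERNEL);
+ *
+ *	if (d != NULL)
+ *		scst_add_thr_data(tgt_dev, &d->hdr, my_free_thr_data);
+ *
+ * Later, from the same thread, the data can be looked up with
+ * __scst_find_thr_data(tgt_dev, current), which takes an extra reference
+ * that must be dropped with scst_thr_data_put().
+ */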
+
+/**
+ * scst_del_all_thr_data() - delete all threads' local data
+ *
+ * Deletes all thread-local data from tgt_dev
+ */
+void scst_del_all_thr_data(struct scst_tgt_dev *tgt_dev)
+{
+ spin_lock(&tgt_dev->thr_data_lock);
+ while (!list_empty(&tgt_dev->thr_data_list)) {
+ struct scst_thr_data_hdr *d = list_entry(
+ tgt_dev->thr_data_list.next, typeof(*d),
+ thr_data_list_entry);
+ list_del(&d->thr_data_list_entry);
+ spin_unlock(&tgt_dev->thr_data_lock);
+ scst_thr_data_put(d);
+ spin_lock(&tgt_dev->thr_data_lock);
+ }
+ spin_unlock(&tgt_dev->thr_data_lock);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_del_all_thr_data);
+
+/**
+ * scst_dev_del_all_thr_data() - delete all threads' local data from a device
+ *
+ * Deletes all thread-local data from all tgt_devs of the device
+ */
+void scst_dev_del_all_thr_data(struct scst_device *dev)
+{
+ struct scst_tgt_dev *tgt_dev;
+
+ mutex_lock(&scst_mutex);
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ scst_del_all_thr_data(tgt_dev);
+ }
+
+ mutex_unlock(&scst_mutex);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_dev_del_all_thr_data);
+
+/* thr_data_lock supposed to be held */
+static struct scst_thr_data_hdr *__scst_find_thr_data_locked(
+ struct scst_tgt_dev *tgt_dev, struct task_struct *tsk)
+{
+ struct scst_thr_data_hdr *res = NULL, *d;
+
+ list_for_each_entry(d, &tgt_dev->thr_data_list, thr_data_list_entry) {
+ if (d->owner_thr == tsk) {
+ res = d;
+ scst_thr_data_get(res);
+ break;
+ }
+ }
+ return res;
+}
+
+/**
+ * __scst_find_thr_data() - find the given thread's local data
+ *
+ * Finds data local to the given thread. Returns NULL if not found.
+ */
+struct scst_thr_data_hdr *__scst_find_thr_data(struct scst_tgt_dev *tgt_dev,
+ struct task_struct *tsk)
+{
+ struct scst_thr_data_hdr *res;
+
+ spin_lock(&tgt_dev->thr_data_lock);
+ res = __scst_find_thr_data_locked(tgt_dev, tsk);
+ spin_unlock(&tgt_dev->thr_data_lock);
+
+ return res;
+}
+EXPORT_SYMBOL_GPL(__scst_find_thr_data);
+
+bool scst_del_thr_data(struct scst_tgt_dev *tgt_dev, struct task_struct *tsk)
+{
+ bool res;
+ struct scst_thr_data_hdr *td;
+
+ spin_lock(&tgt_dev->thr_data_lock);
+
+ td = __scst_find_thr_data_locked(tgt_dev, tsk);
+ if (td != NULL) {
+ list_del(&td->thr_data_list_entry);
+ res = true;
+ } else
+ res = false;
+
+ spin_unlock(&tgt_dev->thr_data_lock);
+
+ if (td != NULL) {
+ /*
+ * Two puts: one for the reference taken by the find above and
+ * one for the tgt_dev list's own reference.
+ */
+ scst_thr_data_put(td);
+ scst_thr_data_put(td);
+ }
+
+ return res;
+}
+
+/* dev_lock supposed to be held and BH disabled */
+void __scst_block_dev(struct scst_device *dev)
+{
+ dev->block_count++;
+ TRACE_MGMT_DBG("Device BLOCK(new %d), dev %p", dev->block_count, dev);
+}
+
+/* No locks */
+static void scst_block_dev(struct scst_device *dev, int outstanding)
+{
+ spin_lock_bh(&dev->dev_lock);
+ __scst_block_dev(dev);
+ spin_unlock_bh(&dev->dev_lock);
+
+ /*
+ * Memory barrier is necessary here, because we need to read
+ * on_dev_count in wait_event() below after we have increased block_count.
+ * Otherwise, we could miss a wake-up in scst_dec_on_dev_cmd().
+ * We use the explicit barrier, because spin_unlock_bh() doesn't
+ * provide the necessary memory barrier functionality.
+ */
+ smp_mb();
+
+ TRACE_MGMT_DBG("Waiting during blocking outstanding %d (on_dev_count "
+ "%d)", outstanding, atomic_read(&dev->on_dev_count));
+ wait_event(dev->on_dev_waitQ,
+ atomic_read(&dev->on_dev_count) <= outstanding);
+ TRACE_MGMT_DBG("%s", "wait_event() returned");
+}
+
+/* No locks */
+void scst_block_dev_cmd(struct scst_cmd *cmd, int outstanding)
+{
+ BUG_ON(cmd->needs_unblocking);
+
+ cmd->needs_unblocking = 1;
+ TRACE_MGMT_DBG("Needs unblocking cmd %p (tag %llu)",
+ cmd, (long long unsigned int)cmd->tag);
+
+ scst_block_dev(cmd->dev, outstanding);
+}
+
+/* No locks */
+void scst_unblock_dev(struct scst_device *dev)
+{
+ spin_lock_bh(&dev->dev_lock);
+ TRACE_MGMT_DBG("Device UNBLOCK(new %d), dev %p",
+ dev->block_count-1, dev);
+ if (--dev->block_count == 0)
+ scst_unblock_cmds(dev);
+ spin_unlock_bh(&dev->dev_lock);
+ BUG_ON(dev->block_count < 0);
+}
+
+/* No locks */
+void scst_unblock_dev_cmd(struct scst_cmd *cmd)
+{
+ scst_unblock_dev(cmd->dev);
+ cmd->needs_unblocking = 0;
+}
+
+/* No locks */
+int scst_inc_on_dev_cmd(struct scst_cmd *cmd)
+{
+ int res = 0;
+ struct scst_device *dev = cmd->dev;
+
+ BUG_ON(cmd->inc_blocking || cmd->dec_on_dev_needed);
+
+ atomic_inc(&dev->on_dev_count);
+ cmd->dec_on_dev_needed = 1;
+ TRACE_DBG("New on_dev_count %d", atomic_read(&dev->on_dev_count));
+
+ if (unlikely(cmd->internal) && (cmd->cdb[0] == REQUEST_SENSE)) {
+ /*
+ * The original command may already have blocked the device, so
+ * the REQUEST SENSE command must always be let through.
+ */
+ goto out;
+ }
+
+repeat:
+ if (unlikely(dev->block_count > 0)) {
+ spin_lock_bh(&dev->dev_lock);
+ if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)))
+ goto out_unlock;
+ if (dev->block_count > 0) {
+ scst_dec_on_dev_cmd(cmd);
+ TRACE_MGMT_DBG("Delaying cmd %p due to blocking "
+ "(tag %llu, dev %p)", cmd,
+ (long long unsigned int)cmd->tag, dev);
+ list_add_tail(&cmd->blocked_cmd_list_entry,
+ &dev->blocked_cmd_list);
+ res = 1;
+ spin_unlock_bh(&dev->dev_lock);
+ goto out;
+ } else {
+ TRACE_MGMT_DBG("%s", "Somebody unblocked the device, "
+ "continuing");
+ }
+ spin_unlock_bh(&dev->dev_lock);
+ }
+ if (unlikely(dev->dev_double_ua_possible)) {
+ spin_lock_bh(&dev->dev_lock);
+ if (dev->block_count == 0) {
+ TRACE_MGMT_DBG("cmd %p (tag %llu), blocking further "
+ "cmds due to possible double reset UA (dev %p)",
+ cmd, (long long unsigned int)cmd->tag, dev);
+ __scst_block_dev(dev);
+ cmd->inc_blocking = 1;
+ } else {
+ spin_unlock_bh(&dev->dev_lock);
+ TRACE_MGMT_DBG("Somebody blocked the device, "
+ "repeating (count %d)", dev->block_count);
+ goto repeat;
+ }
+ spin_unlock_bh(&dev->dev_lock);
+ }
+
+out:
+ return res;
+
+out_unlock:
+ spin_unlock_bh(&dev->dev_lock);
+ goto out;
+}
+
+/* Called under dev_lock */
+static void scst_unblock_cmds(struct scst_device *dev)
+{
+ struct scst_cmd *cmd, *tcmd;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+ blocked_cmd_list_entry) {
+ list_del(&cmd->blocked_cmd_list_entry);
+ TRACE_MGMT_DBG("Adding blocked cmd %p to active cmd list", cmd);
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+ list_add(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ else
+ list_add_tail(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ }
+ local_irq_restore(flags);
+ return;
+}
+
+static void __scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+ struct scst_cmd *out_of_sn_cmd)
+{
+ EXTRACHECKS_BUG_ON(!out_of_sn_cmd->sn_set);
+
+ if (out_of_sn_cmd->sn == tgt_dev->expected_sn) {
+ scst_inc_expected_sn(tgt_dev, out_of_sn_cmd->sn_slot);
+ scst_make_deferred_commands_active(tgt_dev);
+ } else {
+ out_of_sn_cmd->out_of_sn = 1;
+ spin_lock_irq(&tgt_dev->sn_lock);
+ tgt_dev->def_cmd_count++;
+ list_add_tail(&out_of_sn_cmd->sn_cmd_list_entry,
+ &tgt_dev->skipped_sn_list);
+ TRACE_SN("out_of_sn_cmd %p with sn %d added to skipped_sn_list"
+ " (expected_sn %d)", out_of_sn_cmd, out_of_sn_cmd->sn,
+ tgt_dev->expected_sn);
+ spin_unlock_irq(&tgt_dev->sn_lock);
+ }
+
+ return;
+}
+
+void scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+ struct scst_cmd *out_of_sn_cmd)
+{
+
+ if (!out_of_sn_cmd->sn_set) {
+ TRACE_SN("cmd %p without sn", out_of_sn_cmd);
+ goto out;
+ }
+
+ __scst_unblock_deferred(tgt_dev, out_of_sn_cmd);
+
+out:
+ return;
+}
+
+void scst_on_hq_cmd_response(struct scst_cmd *cmd)
+{
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+ if (!cmd->hq_cmd_inced)
+ goto out;
+
+ spin_lock_irq(&tgt_dev->sn_lock);
+ tgt_dev->hq_cmd_count--;
+ spin_unlock_irq(&tgt_dev->sn_lock);
+
+ EXTRACHECKS_BUG_ON(tgt_dev->hq_cmd_count < 0);
+
+ /*
+ * It is safe to check hq_cmd_count without holding the lock.
+ * In the worst case we will only do an unneeded run of the
+ * deferred commands.
+ */
+ if (tgt_dev->hq_cmd_count == 0)
+ scst_make_deferred_commands_active(tgt_dev);
+
+out:
+ return;
+}
+
+void scst_store_sense(struct scst_cmd *cmd)
+{
+
+ if (SCST_SENSE_VALID(cmd->sense) &&
+ !test_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags) &&
+ (cmd->tgt_dev != NULL)) {
+ struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+ TRACE_DBG("Storing sense (cmd %p)", cmd);
+
+ spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+ if (cmd->sense_valid_len <= sizeof(tgt_dev->tgt_dev_sense))
+ tgt_dev->tgt_dev_valid_sense_len = cmd->sense_valid_len;
+ else {
+ tgt_dev->tgt_dev_valid_sense_len = sizeof(tgt_dev->tgt_dev_sense);
+ PRINT_ERROR("Stored sense truncated to size %d "
+ "(needed %d)", tgt_dev->tgt_dev_valid_sense_len,
+ cmd->sense_valid_len);
+ }
+ memcpy(tgt_dev->tgt_dev_sense, cmd->sense,
+ tgt_dev->tgt_dev_valid_sense_len);
+
+ spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+ }
+ return;
+}
+
+void scst_xmit_process_aborted_cmd(struct scst_cmd *cmd)
+{
+
+ TRACE_MGMT_DBG("Aborted cmd %p done (cmd_ref %d, "
+ "scst_cmd_count %d)", cmd, atomic_read(&cmd->cmd_ref),
+ atomic_read(&scst_cmd_count));
+
+ scst_done_cmd_mgmt(cmd);
+
+ if (test_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags)) {
+ if (cmd->completed) {
+ /* It's completed and it's OK to return its result */
+ goto out;
+ }
+
+ /* For not yet initialized commands cmd->dev can be NULL here */
+ if (test_bit(SCST_CMD_DEVICE_TAS, &cmd->cmd_flags)) {
+ TRACE_MGMT_DBG("Flag ABORTED OTHER set for cmd %p "
+ "(tag %llu), returning TASK ABORTED ", cmd,
+ (long long unsigned int)cmd->tag);
+ scst_set_cmd_error_status(cmd, SAM_STAT_TASK_ABORTED);
+ } else {
+ TRACE_MGMT_DBG("Flag ABORTED OTHER set for cmd %p "
+ "(tag %llu), aborting without delivery or "
+ "notification",
+ cmd, (long long unsigned int)cmd->tag);
+ /*
+ * There is no need to check/requeue possible UA,
+ * because, if it exists, it will be delivered
+ * by the "completed" branch above.
+ */
+ clear_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+ }
+ }
+
+out:
+ return;
+}
+
+/**
+ * scst_get_max_lun_commands() - return the maximum supported command count
+ *
+ * Returns the maximum number of commands which can be queued to this LUN in
+ * this session.
+ *
+ * If lun is NO_SUCH_LUN, returns the minimum over all LUNs in this session
+ * of the maximum number of commands which can be queued to a LUN.
+ *
+ * If sess is NULL, returns the minimum over all SCST devices of the maximum
+ * number of commands which can be queued to a device.
+ */
+int scst_get_max_lun_commands(struct scst_session *sess, uint64_t lun)
+{
+ return SCST_MAX_TGT_DEV_COMMANDS;
+}
+EXPORT_SYMBOL_GPL(scst_get_max_lun_commands);
+
+/**
+ * scst_get_next_lexem() - parse and return next lexem in the string
+ *
+ * Returns a pointer to the next lexem from token_str, skipping leading
+ * spaces and '=' characters and using them as delimiters. The content
+ * of token_str is modified by setting '\0' at the delimiter's position.
+ */
+char *scst_get_next_lexem(char **token_str)
+{
+ char *p = *token_str;
+ char *q;
+ static const char blank = '\0';
+
+ if ((token_str == NULL) || (*token_str == NULL))
+ return (char *)&blank;
+
+ for (p = *token_str; (*p != '\0') && (isspace(*p) || (*p == '=')); p++)
+ ;
+
+ for (q = p; (*q != '\0') && !isspace(*q) && (*q != '='); q++)
+ ;
+
+ if (*q != '\0')
+ *q++ = '\0';
+
+ *token_str = q;
+ return p;
+}
+EXPORT_SYMBOL_GPL(scst_get_next_lexem);
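+
+/*
+ * For example (illustrative only), given a writable buffer
+ * "blocksize=4096 read_only" with pos pointing to its start, the first
+ * scst_get_next_lexem(&pos) call returns "blocksize" and the second returns
+ * "4096", each time writing '\0' over the delimiter and advancing pos past
+ * it; scst_restore_token_str() below can then write a space back over the
+ * inserted '\0'. When the input is exhausted an empty string is returned.
+ */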
+
+/**
+ * scst_restore_token_str() - restore string modified by scst_get_next_lexem()
+ *
+ * Restores token_str, modified by scst_get_next_lexem(), to its value
+ * before scst_get_next_lexem() was called. Prev_lexem is a pointer to
+ * the lexem returned by scst_get_next_lexem().
+ */
+void scst_restore_token_str(char *prev_lexem, char *token_str)
+{
+ if (&prev_lexem[strlen(prev_lexem)] != token_str)
+ prev_lexem[strlen(prev_lexem)] = ' ';
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_restore_token_str);
+
+/**
+ * scst_get_next_token_str() - parse and return next token
+ *
+ * This function returns a pointer to the next token string from input_str,
+ * using '\n', ';' and '\0' as delimiters. The content of input_str is
+ * modified by setting '\0' at the delimiter's position.
+ */
+char *scst_get_next_token_str(char **input_str)
+{
+ char *p = *input_str;
+ int i = 0;
+
+ while ((p[i] != '\n') && (p[i] != ';') && (p[i] != '\0'))
+ i++;
+
+ if (i == 0)
+ return NULL;
+
+ if (p[i] == '\0')
+ *input_str = &p[i];
+ else
+ *input_str = &p[i+1];
+
+ p[i] = '\0';
+
+ return p;
+}
+EXPORT_SYMBOL_GPL(scst_get_next_token_str);
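+
+/*
+ * A typical (illustrative) parsing loop combining the two helpers above to
+ * process multi-line "key=value ..." input, e.g. from a management command:
+ *
+ *	char *line, *pos, *param;
+ *
+ *	while ((line = scst_get_next_token_str(&input)) != NULL) {
+ *		pos = line;
+ *		while (*(param = scst_get_next_lexem(&pos)) != '\0') {
+ *			... handle param ...
+ *		}
+ *	}
+ */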
+
+static void __init scst_scsi_op_list_init(void)
+{
+ int i;
+ uint8_t op = 0xff;
+
+ for (i = 0; i < 256; i++)
+ scst_scsi_op_list[i] = SCST_CDB_TBL_SIZE;
+
+ for (i = 0; i < SCST_CDB_TBL_SIZE; i++) {
+ if (scst_scsi_op_table[i].ops != op) {
+ op = scst_scsi_op_table[i].ops;
+ scst_scsi_op_list[op] = i;
+ }
+ }
+ return;
+}
+
+int __init scst_lib_init(void)
+{
+ int res = 0;
+
+ scst_scsi_op_list_init();
+
+ scsi_io_context_cache = kmem_cache_create("scst_scsi_io_context",
+ sizeof(struct scsi_io_context),
+ 0, 0, NULL);
+ if (!scsi_io_context_cache) {
+ PRINT_ERROR("%s", "Can't init scsi io context cache");
+ res = -ENOMEM;
+ goto out;
+ }
+
+out:
+ return res;
+}
+
+void scst_lib_exit(void)
+{
+ BUILD_BUG_ON(SCST_MAX_CDB_SIZE != BLK_MAX_CDB);
+ BUILD_BUG_ON(SCST_SENSE_BUFFERSIZE < SCSI_SENSE_BUFFERSIZE);
+
+ kmem_cache_destroy(scsi_io_context_cache);
+}
+
+#ifdef CONFIG_SCST_DEBUG
+
+/**
+ * scst_random() - return a pseudo-random number for debugging purposes.
+ *
+ * Returns a pseudo-random number for debugging purposes. Available only in
+ * the DEBUG build.
+ *
+ * Original taken from the XFS code
+ */
+unsigned long scst_random(void)
+{
+ static int Inited;
+ static unsigned long RandomValue;
+ static DEFINE_SPINLOCK(lock);
+ /* cycles pseudo-randomly through all values between 1 and 2^31 - 2 */
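+ /*
+ * This is the classic "minimal standard" Lehmer generator,
+ * x[n+1] = 16807 * x[n] mod (2^31 - 1), computed with Schrage's
+ * method (2147483647 = 16807 * 127773 + 2836) to avoid overflow.
+ */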
+ register long rv;
+ register long lo;
+ register long hi;
+ unsigned long flags;
+
+ spin_lock_irqsave(&lock, flags);
+ if (!Inited) {
+ RandomValue = jiffies;
+ Inited = 1;
+ }
+ rv = RandomValue;
+ hi = rv / 127773;
+ lo = rv % 127773;
+ rv = 16807 * lo - 2836 * hi;
+ if (rv <= 0)
+ rv += 2147483647;
+ RandomValue = rv;
+ spin_unlock_irqrestore(&lock, flags);
+ return rv;
+}
+EXPORT_SYMBOL_GPL(scst_random);
+#endif /* CONFIG_SCST_DEBUG */
+
+#ifdef CONFIG_SCST_DEBUG_TM
+
+#define TM_DBG_STATE_ABORT 0
+#define TM_DBG_STATE_RESET 1
+#define TM_DBG_STATE_OFFLINE 2
+
+#define INIT_TM_DBG_STATE TM_DBG_STATE_ABORT
+
+static void tm_dbg_timer_fn(unsigned long arg);
+
+static DEFINE_SPINLOCK(scst_tm_dbg_lock);
+/* All serialized by scst_tm_dbg_lock */
+static struct {
+ unsigned int tm_dbg_release:1;
+ unsigned int tm_dbg_blocked:1;
+} tm_dbg_flags;
+static LIST_HEAD(tm_dbg_delayed_cmd_list);
+static int tm_dbg_delayed_cmds_count;
+static int tm_dbg_passed_cmds_count;
+static int tm_dbg_state;
+static int tm_dbg_on_state_passes;
+static DEFINE_TIMER(tm_dbg_timer, tm_dbg_timer_fn, 0, 0);
+static struct scst_tgt_dev *tm_dbg_tgt_dev;
+
+static const int tm_dbg_on_state_num_passes[] = { 5, 1, 0x7ffffff };
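+
+/*
+ * Rough overview: this debug machinery artificially delays commands sent to
+ * the tgt_dev with LUN 6 in order to exercise initiators' ABORT/RESET
+ * recovery paths. In the ABORT state individual commands (every 50th) are
+ * held back until the timer fires or a TM function arrives; in the RESET and
+ * OFFLINE states all commands to that tgt_dev are delayed. The state
+ * advances in tm_dbg_change_state() once tm_dbg_on_state_passes drops to
+ * zero.
+ */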
+
+static void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+ if (tgt_dev->lun == 6) {
+ unsigned long flags;
+
+ if (tm_dbg_tgt_dev != NULL)
+ tm_dbg_deinit_tgt_dev(tm_dbg_tgt_dev);
+
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ tm_dbg_state = INIT_TM_DBG_STATE;
+ tm_dbg_on_state_passes =
+ tm_dbg_on_state_num_passes[tm_dbg_state];
+ tm_dbg_tgt_dev = tgt_dev;
+ PRINT_INFO("LUN %lld connected from initiator %s is under "
+ "TM debugging (tgt_dev %p)",
+ (unsigned long long)tgt_dev->lun,
+ tgt_dev->sess->initiator_name, tgt_dev);
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+ }
+ return;
+}
+
+static void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+ if (tm_dbg_tgt_dev == tgt_dev) {
+ unsigned long flags;
+ TRACE_MGMT_DBG("Deinit TM debugging tgt_dev %p", tgt_dev);
+ del_timer_sync(&tm_dbg_timer);
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ tm_dbg_tgt_dev = NULL;
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+ }
+ return;
+}
+
+static void tm_dbg_timer_fn(unsigned long arg)
+{
+ TRACE_MGMT_DBG("%s", "delayed cmd timer expired");
+ tm_dbg_flags.tm_dbg_release = 1;
+ /* Used to make sure that all woken up threads see the new value */
+ smp_wmb();
+ wake_up_all(&tm_dbg_tgt_dev->active_cmd_threads->cmd_list_waitQ);
+ return;
+}
+
+/* Called under scst_tm_dbg_lock and IRQs off */
+static void tm_dbg_delay_cmd(struct scst_cmd *cmd)
+{
+ switch (tm_dbg_state) {
+ case TM_DBG_STATE_ABORT:
+ if (tm_dbg_delayed_cmds_count == 0) {
+ unsigned long d = 58*HZ + (scst_random() % (4*HZ));
+ TRACE_MGMT_DBG("STATE ABORT: delaying cmd %p (tag %llu)"
+ " for %ld.%ld seconds (%ld HZ), "
+ "tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+ d/HZ, (d%HZ)*100/HZ, d, tm_dbg_on_state_passes);
+ mod_timer(&tm_dbg_timer, jiffies + d);
+#if 0
+ tm_dbg_flags.tm_dbg_blocked = 1;
+#endif
+ } else {
+ TRACE_MGMT_DBG("Delaying another timed cmd %p "
+ "(tag %llu), delayed_cmds_count=%d, "
+ "tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+ tm_dbg_delayed_cmds_count,
+ tm_dbg_on_state_passes);
+ if (tm_dbg_delayed_cmds_count == 2)
+ tm_dbg_flags.tm_dbg_blocked = 0;
+ }
+ break;
+
+ case TM_DBG_STATE_RESET:
+ case TM_DBG_STATE_OFFLINE:
+ TRACE_MGMT_DBG("STATE RESET/OFFLINE: delaying cmd %p "
+ "(tag %llu), delayed_cmds_count=%d, "
+ "tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+ tm_dbg_delayed_cmds_count, tm_dbg_on_state_passes);
+ tm_dbg_flags.tm_dbg_blocked = 1;
+ break;
+
+ default:
+ BUG();
+ }
+ /* IRQs already off */
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ list_add_tail(&cmd->cmd_list_entry, &tm_dbg_delayed_cmd_list);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ cmd->tm_dbg_delayed = 1;
+ tm_dbg_delayed_cmds_count++;
+ return;
+}
+
+/* No locks */
+void tm_dbg_check_released_cmds(void)
+{
+ if (tm_dbg_flags.tm_dbg_release) {
+ struct scst_cmd *cmd, *tc;
+ spin_lock_irq(&scst_tm_dbg_lock);
+ list_for_each_entry_safe_reverse(cmd, tc,
+ &tm_dbg_delayed_cmd_list, cmd_list_entry) {
+ TRACE_MGMT_DBG("Releasing timed cmd %p (tag %llu), "
+ "delayed_cmds_count=%d", cmd, cmd->tag,
+ tm_dbg_delayed_cmds_count);
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ list_move(&cmd->cmd_list_entry,
+ &cmd->cmd_threads->active_cmd_list);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ }
+ tm_dbg_flags.tm_dbg_release = 0;
+ spin_unlock_irq(&scst_tm_dbg_lock);
+ }
+}
+
+/* Called under scst_tm_dbg_lock */
+static void tm_dbg_change_state(void)
+{
+ tm_dbg_flags.tm_dbg_blocked = 0;
+ if (--tm_dbg_on_state_passes == 0) {
+ switch (tm_dbg_state) {
+ case TM_DBG_STATE_ABORT:
+ TRACE_MGMT_DBG("%s", "Changing "
+ "tm_dbg_state to RESET");
+ tm_dbg_state = TM_DBG_STATE_RESET;
+ tm_dbg_flags.tm_dbg_blocked = 0;
+ break;
+ case TM_DBG_STATE_RESET:
+ case TM_DBG_STATE_OFFLINE:
+#ifdef CONFIG_SCST_TM_DBG_GO_OFFLINE
+ TRACE_MGMT_DBG("%s", "Changing "
+ "tm_dbg_state to OFFLINE");
+ tm_dbg_state = TM_DBG_STATE_OFFLINE;
+#else
+ TRACE_MGMT_DBG("%s", "Changing "
+ "tm_dbg_state to ABORT");
+ tm_dbg_state = TM_DBG_STATE_ABORT;
+#endif
+ break;
+ default:
+ BUG();
+ }
+ tm_dbg_on_state_passes =
+ tm_dbg_on_state_num_passes[tm_dbg_state];
+ }
+
+ TRACE_MGMT_DBG("%s", "Deleting timer");
+ del_timer_sync(&tm_dbg_timer);
+ return;
+}
+
+/* No locks */
+int tm_dbg_check_cmd(struct scst_cmd *cmd)
+{
+ int res = 0;
+ unsigned long flags;
+
+ if (cmd->tm_dbg_immut)
+ goto out;
+
+ if (cmd->tm_dbg_delayed) {
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ TRACE_MGMT_DBG("Processing delayed cmd %p (tag %llu), "
+ "delayed_cmds_count=%d", cmd, cmd->tag,
+ tm_dbg_delayed_cmds_count);
+
+ cmd->tm_dbg_immut = 1;
+ tm_dbg_delayed_cmds_count--;
+ if ((tm_dbg_delayed_cmds_count == 0) &&
+ (tm_dbg_state == TM_DBG_STATE_ABORT))
+ tm_dbg_change_state();
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+ } else if (cmd->tgt_dev && (tm_dbg_tgt_dev == cmd->tgt_dev)) {
+ /* Delay every 50th command */
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ if (tm_dbg_flags.tm_dbg_blocked ||
+ (++tm_dbg_passed_cmds_count % 50) == 0) {
+ tm_dbg_delay_cmd(cmd);
+ res = 1;
+ } else
+ cmd->tm_dbg_immut = 1;
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+ }
+
+out:
+ return res;
+}
+
+/* No locks */
+void tm_dbg_release_cmd(struct scst_cmd *cmd)
+{
+ struct scst_cmd *c;
+ unsigned long flags;
+
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ list_for_each_entry(c, &tm_dbg_delayed_cmd_list,
+ cmd_list_entry) {
+ if (c == cmd) {
+ TRACE_MGMT_DBG("Abort request for "
+ "delayed cmd %p (tag=%llu), moving it to "
+ "active cmd list (delayed_cmds_count=%d)",
+ c, c->tag, tm_dbg_delayed_cmds_count);
+
+ if (!test_bit(SCST_CMD_ABORTED_OTHER,
+ &cmd->cmd_flags)) {
+ /* Test how completed commands are handled */
+ if (((scst_random() % 10) == 5)) {
+ scst_set_cmd_error(cmd,
+ SCST_LOAD_SENSE(
+ scst_sense_hardw_error));
+ /* It's completed now */
+ }
+ }
+
+ spin_lock(&cmd->cmd_threads->cmd_list_lock);
+ list_move(&c->cmd_list_entry,
+ &c->cmd_threads->active_cmd_list);
+ wake_up(&c->cmd_threads->cmd_list_waitQ);
+ spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+ return;
+}
+
+/* Might be called under scst_mutex */
+void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn, int force)
+{
+ unsigned long flags;
+
+ if (dev != NULL) {
+ if (tm_dbg_tgt_dev == NULL)
+ goto out;
+
+ if (tm_dbg_tgt_dev->dev != dev)
+ goto out;
+ }
+
+ spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+ if ((tm_dbg_state != TM_DBG_STATE_OFFLINE) || force) {
+ TRACE_MGMT_DBG("%s: freeing %d delayed cmds", fn,
+ tm_dbg_delayed_cmds_count);
+ tm_dbg_change_state();
+ tm_dbg_flags.tm_dbg_release = 1;
+ /*
+ * Used to make sure that all woken up threads see the new
+ * value.
+ */
+ smp_wmb();
+ if (tm_dbg_tgt_dev != NULL)
+ wake_up_all(&tm_dbg_tgt_dev->active_cmd_threads->cmd_list_waitQ);
+ } else {
+ TRACE_MGMT_DBG("%s: while OFFLINE state, doing nothing", fn);
+ }
+ spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+
+out:
+ return;
+}
+
+int tm_dbg_is_release(void)
+{
+ return tm_dbg_flags.tm_dbg_release;
+}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+#ifdef CONFIG_SCST_DEBUG_SN
+void scst_check_debug_sn(struct scst_cmd *cmd)
+{
+ static DEFINE_SPINLOCK(lock);
+ static int type;
+ static int cnt;
+ unsigned long flags;
+ int old = cmd->queue_type;
+
+ spin_lock_irqsave(&lock, flags);
+
+ if (cnt == 0) {
+ if ((scst_random() % 1000) == 500) {
+ if ((scst_random() % 3) == 1)
+ type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+ else
+ type = SCST_CMD_QUEUE_ORDERED;
+ do {
+ cnt = scst_random() % 10;
+ } while (cnt == 0);
+ } else
+ goto out_unlock;
+ }
+
+ cmd->queue_type = type;
+ cnt--;
+
+ if (((scst_random() % 1000) == 750))
+ cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+ else if (((scst_random() % 1000) == 751))
+ cmd->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+ else if (((scst_random() % 1000) == 752))
+ cmd->queue_type = SCST_CMD_QUEUE_SIMPLE;
+
+ TRACE_SN("DbgSN changed cmd %p: %d/%d (cnt %d)", cmd, old,
+ cmd->queue_type, cnt);
+
+out_unlock:
+ spin_unlock_irqrestore(&lock, flags);
+ return;
+}
+#endif /* CONFIG_SCST_DEBUG_SN */
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+static uint64_t scst_get_nsec(void)
+{
+ struct timespec ts;
+ ktime_get_ts(&ts);
+ return (uint64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
+}
+
+void scst_set_start_time(struct scst_cmd *cmd)
+{
+ cmd->start = scst_get_nsec();
+ TRACE_DBG("cmd %p: start %lld", cmd, cmd->start);
+}
+
+void scst_set_cur_start(struct scst_cmd *cmd)
+{
+ cmd->curr_start = scst_get_nsec();
+ TRACE_DBG("cmd %p: cur_start %lld", cmd, cmd->curr_start);
+}
+
+void scst_set_parse_time(struct scst_cmd *cmd)
+{
+ cmd->parse_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: parse_time %lld", cmd, cmd->parse_time);
+}
+
+void scst_set_alloc_buf_time(struct scst_cmd *cmd)
+{
+ cmd->alloc_buf_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: alloc_buf_time %lld", cmd, cmd->alloc_buf_time);
+}
+
+void scst_set_restart_waiting_time(struct scst_cmd *cmd)
+{
+ cmd->restart_waiting_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: restart_waiting_time %lld", cmd,
+ cmd->restart_waiting_time);
+}
+
+void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd)
+{
+ cmd->rdy_to_xfer_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: rdy_to_xfer_time %lld", cmd, cmd->rdy_to_xfer_time);
+}
+
+void scst_set_pre_exec_time(struct scst_cmd *cmd)
+{
+ cmd->pre_exec_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: pre_exec_time %lld", cmd, cmd->pre_exec_time);
+}
+
+void scst_set_exec_time(struct scst_cmd *cmd)
+{
+ cmd->exec_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: exec_time %lld", cmd, cmd->exec_time);
+}
+
+void scst_set_dev_done_time(struct scst_cmd *cmd)
+{
+ cmd->dev_done_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: dev_done_time %lld", cmd, cmd->dev_done_time);
+}
+
+void scst_set_xmit_time(struct scst_cmd *cmd)
+{
+ cmd->xmit_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: xmit_time %lld", cmd, cmd->xmit_time);
+}
+
+void scst_set_tgt_on_free_time(struct scst_cmd *cmd)
+{
+ cmd->tgt_on_free_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: tgt_on_free_time %lld", cmd, cmd->tgt_on_free_time);
+}
+
+void scst_set_dev_on_free_time(struct scst_cmd *cmd)
+{
+ cmd->dev_on_free_time += scst_get_nsec() - cmd->curr_start;
+ TRACE_DBG("cmd %p: dev_on_free_time %lld", cmd, cmd->dev_on_free_time);
+}
+
+void scst_update_lat_stats(struct scst_cmd *cmd)
+{
+ uint64_t finish, scst_time, tgt_time, dev_time;
+ struct scst_session *sess = cmd->sess;
+ int data_len;
+ int i;
+ struct scst_ext_latency_stat *latency_stat, *dev_latency_stat;
+
+ finish = scst_get_nsec();
+
+ /* Determine the IO size for extended latency statistics */
+ data_len = cmd->bufflen;
+ i = SCST_LATENCY_STAT_INDEX_OTHER;
+ if (data_len <= SCST_IO_SIZE_THRESHOLD_SMALL)
+ i = SCST_LATENCY_STAT_INDEX_SMALL;
+ else if (data_len <= SCST_IO_SIZE_THRESHOLD_MEDIUM)
+ i = SCST_LATENCY_STAT_INDEX_MEDIUM;
+ else if (data_len <= SCST_IO_SIZE_THRESHOLD_LARGE)
+ i = SCST_LATENCY_STAT_INDEX_LARGE;
+ else if (data_len <= SCST_IO_SIZE_THRESHOLD_VERY_LARGE)
+ i = SCST_LATENCY_STAT_INDEX_VERY_LARGE;
+ latency_stat = &sess->sess_latency_stat[i];
+ dev_latency_stat = &cmd->tgt_dev->dev_latency_stat[i];
+
+ spin_lock_bh(&sess->lat_lock);
+
+ /* Calculate the latencies */
+ scst_time = finish - cmd->start - (cmd->parse_time +
+ cmd->alloc_buf_time + cmd->restart_waiting_time +
+ cmd->rdy_to_xfer_time + cmd->pre_exec_time +
+ cmd->exec_time + cmd->dev_done_time + cmd->xmit_time +
+ cmd->tgt_on_free_time + cmd->dev_on_free_time);
+ tgt_time = cmd->alloc_buf_time + cmd->restart_waiting_time +
+ cmd->rdy_to_xfer_time + cmd->pre_exec_time +
+ cmd->xmit_time + cmd->tgt_on_free_time;
+ dev_time = cmd->parse_time + cmd->exec_time + cmd->dev_done_time +
+ cmd->dev_on_free_time;
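+
+ /*
+ * Note that the three buckets partition the total processing time:
+ * scst_time + tgt_time + dev_time == finish - cmd->start, since scst_time
+ * is the elapsed wall time minus all phases attributed to either the
+ * target driver or the backend device.
+ */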
+
+ /* Save the basic latency information */
+ sess->scst_time += scst_time;
+ sess->tgt_time += tgt_time;
+ sess->dev_time += dev_time;
+ sess->processed_cmds++;
+
+ if ((sess->min_scst_time == 0) ||
+ (sess->min_scst_time > scst_time))
+ sess->min_scst_time = scst_time;
+ if ((sess->min_tgt_time == 0) ||
+ (sess->min_tgt_time > tgt_time))
+ sess->min_tgt_time = tgt_time;
+ if ((sess->min_dev_time == 0) ||
+ (sess->min_dev_time > dev_time))
+ sess->min_dev_time = dev_time;
+
+ if (sess->max_scst_time < scst_time)
+ sess->max_scst_time = scst_time;
+ if (sess->max_tgt_time < tgt_time)
+ sess->max_tgt_time = tgt_time;
+ if (sess->max_dev_time < dev_time)
+ sess->max_dev_time = dev_time;
+
+ /* Save the extended latency information */
+ if (cmd->data_direction & SCST_DATA_READ) {
+ latency_stat->scst_time_rd += scst_time;
+ latency_stat->tgt_time_rd += tgt_time;
+ latency_stat->dev_time_rd += dev_time;
+ latency_stat->processed_cmds_rd++;
+
+ if ((latency_stat->min_scst_time_rd == 0) ||
+ (latency_stat->min_scst_time_rd > scst_time))
+ latency_stat->min_scst_time_rd = scst_time;
+ if ((latency_stat->min_tgt_time_rd == 0) ||
+ (latency_stat->min_tgt_time_rd > tgt_time))
+ latency_stat->min_tgt_time_rd = tgt_time;
+ if ((latency_stat->min_dev_time_rd == 0) ||
+ (latency_stat->min_dev_time_rd > dev_time))
+ latency_stat->min_dev_time_rd = dev_time;
+
+ if (latency_stat->max_scst_time_rd < scst_time)
+ latency_stat->max_scst_time_rd = scst_time;
+ if (latency_stat->max_tgt_time_rd < tgt_time)
+ latency_stat->max_tgt_time_rd = tgt_time;
+ if (latency_stat->max_dev_time_rd < dev_time)
+ latency_stat->max_dev_time_rd = dev_time;
+
+ dev_latency_stat->scst_time_rd += scst_time;
+ dev_latency_stat->tgt_time_rd += tgt_time;
+ dev_latency_stat->dev_time_rd += dev_time;
+ dev_latency_stat->processed_cmds_rd++;
+
+ if ((dev_latency_stat->min_scst_time_rd == 0) ||
+ (dev_latency_stat->min_scst_time_rd > scst_time))
+ dev_latency_stat->min_scst_time_rd = scst_time;
+ if ((dev_latency_stat->min_tgt_time_rd == 0) ||
+ (dev_latency_stat->min_tgt_time_rd > tgt_time))
+ dev_latency_stat->min_tgt_time_rd = tgt_time;
+ if ((dev_latency_stat->min_dev_time_rd == 0) ||
+ (dev_latency_stat->min_dev_time_rd > dev_time))
+ dev_latency_stat->min_dev_time_rd = dev_time;
+
+ if (dev_latency_stat->max_scst_time_rd < scst_time)
+ dev_latency_stat->max_scst_time_rd = scst_time;
+ if (dev_latency_stat->max_tgt_time_rd < tgt_time)
+ dev_latency_stat->max_tgt_time_rd = tgt_time;
+ if (dev_latency_stat->max_dev_time_rd < dev_time)
+ dev_latency_stat->max_dev_time_rd = dev_time;
+ } else if (cmd->data_direction & SCST_DATA_WRITE) {
+ latency_stat->scst_time_wr += scst_time;
+ latency_stat->tgt_time_wr += tgt_time;
+ latency_stat->dev_time_wr += dev_time;
+ latency_stat->processed_cmds_wr++;
+
+ if ((latency_stat->min_scst_time_wr == 0) ||
+ (latency_stat->min_scst_time_wr > scst_time))
+ latency_stat->min_scst_time_wr = scst_time;
+ if ((latency_stat->min_tgt_time_wr == 0) ||
+ (latency_stat->min_tgt_time_wr > tgt_time))
+ latency_stat->min_tgt_time_wr = tgt_time;
+ if ((latency_stat->min_dev_time_wr == 0) ||
+ (latency_stat->min_dev_time_wr > dev_time))
+ latency_stat->min_dev_time_wr = dev_time;
+
+ if (latency_stat->max_scst_time_wr < scst_time)
+ latency_stat->max_scst_time_wr = scst_time;
+ if (latency_stat->max_tgt_time_wr < tgt_time)
+ latency_stat->max_tgt_time_wr = tgt_time;
+ if (latency_stat->max_dev_time_wr < dev_time)
+ latency_stat->max_dev_time_wr = dev_time;
+
+ dev_latency_stat->scst_time_wr += scst_time;
+ dev_latency_stat->tgt_time_wr += tgt_time;
+ dev_latency_stat->dev_time_wr += dev_time;
+ dev_latency_stat->processed_cmds_wr++;
+
+ if ((dev_latency_stat->min_scst_time_wr == 0) ||
+ (dev_latency_stat->min_scst_time_wr > scst_time))
+ dev_latency_stat->min_scst_time_wr = scst_time;
+ if ((dev_latency_stat->min_tgt_time_wr == 0) ||
+ (dev_latency_stat->min_tgt_time_wr > tgt_time))
+ dev_latency_stat->min_tgt_time_wr = tgt_time;
+ if ((dev_latency_stat->min_dev_time_wr == 0) ||
+ (dev_latency_stat->min_dev_time_wr > dev_time))
+ dev_latency_stat->min_dev_time_wr = dev_time;
+
+ if (dev_latency_stat->max_scst_time_wr < scst_time)
+ dev_latency_stat->max_scst_time_wr = scst_time;
+ if (dev_latency_stat->max_tgt_time_wr < tgt_time)
+ dev_latency_stat->max_tgt_time_wr = tgt_time;
+ if (dev_latency_stat->max_dev_time_wr < dev_time)
+ dev_latency_stat->max_dev_time_wr = dev_time;
+ }
+
+ spin_unlock_bh(&sess->lat_lock);
+
+ TRACE_DBG("cmd %p: finish %lld, scst_time %lld, "
+ "tgt_time %lld, dev_time %lld", cmd, finish, scst_time,
+ tgt_time, dev_time);
+ return;
+}
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
* Re: [PATCH][RFC 6/12/1/5] SCST core's private header
[not found] ` <4BC44D08.4060907@vlnb.net>
` (4 preceding siblings ...)
2010-04-13 13:05 ` [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c Vladislav Bolkhovitin
@ 2010-04-13 13:06 ` Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 7/12/1/5] SCST SGV cache Vladislav Bolkhovitin
` (3 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch adds scst_priv.h, which contains internal SCST types, constants
and declarations.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_priv.h | 609 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 609 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_priv.h linux-2.6.33/drivers/scst/scst_priv.h
--- orig/linux-2.6.33/drivers/scst/scst_priv.h
+++ linux-2.6.33/drivers/scst/scst_priv.h
@@ -0,0 +1,609 @@
+/*
+ * scst_priv.h
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SCST_PRIV_H
+#define __SCST_PRIV_H
+
+#include <linux/types.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_driver.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+
+#define LOG_PREFIX "scst"
+
+#include "scst_debug.h"
+
+#define TRACE_RTRY 0x80000000
+#define TRACE_SCSI_SERIALIZING 0x40000000
+/** top being the edge away from the interrupt */
+#define TRACE_SND_TOP 0x20000000
+#define TRACE_RCV_TOP 0x01000000
+/** bottom being the edge toward the interrupt */
+#define TRACE_SND_BOT 0x08000000
+#define TRACE_RCV_BOT 0x04000000
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+#define trace_flag scst_trace_flag
+extern unsigned long scst_trace_flag;
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define SCST_DEFAULT_LOG_FLAGS (TRACE_OUT_OF_MEM | TRACE_MINOR | TRACE_PID | \
+ TRACE_LINE | TRACE_FUNCTION | TRACE_SPECIAL | TRACE_MGMT | \
+ TRACE_MGMT_DEBUG | TRACE_RTRY)
+
+#define TRACE_RETRY(args...) TRACE_DBG_FLAG(TRACE_RTRY, args)
+#define TRACE_SN(args...) TRACE_DBG_FLAG(TRACE_SCSI_SERIALIZING, args)
+#define TRACE_SEND_TOP(args...) TRACE_DBG_FLAG(TRACE_SND_TOP, args)
+#define TRACE_RECV_TOP(args...) TRACE_DBG_FLAG(TRACE_RCV_TOP, args)
+#define TRACE_SEND_BOT(args...) TRACE_DBG_FLAG(TRACE_SND_BOT, args)
+#define TRACE_RECV_BOT(args...) TRACE_DBG_FLAG(TRACE_RCV_BOT, args)
+
+#else /* CONFIG_SCST_DEBUG */
+
+# ifdef CONFIG_SCST_TRACING
+#define SCST_DEFAULT_LOG_FLAGS (TRACE_OUT_OF_MEM | TRACE_MGMT | \
+ TRACE_SPECIAL)
+# else
+#define SCST_DEFAULT_LOG_FLAGS 0
+# endif
+
+#define TRACE_RETRY(args...)
+#define TRACE_SN(args...)
+#define TRACE_SEND_TOP(args...)
+#define TRACE_RECV_TOP(args...)
+#define TRACE_SEND_BOT(args...)
+#define TRACE_RECV_BOT(args...)
+
+#endif
+
+/**
+ ** Bits for scst_flags
+ **/
+
+/*
+ * Set if initialization of new commands is being suspended for a while.
+ * Used to let TM commands execute while preparing the suspend, since
+ * RESET or ABORT could be necessary to free SCSI commands.
+ */
+#define SCST_FLAG_SUSPENDING 0
+
+/* Set if initialization of new commands is suspended for a while */
+#define SCST_FLAG_SUSPENDED 1
+
+/**
+ ** Return codes for cmd state processing functions. The codes are the same
+ ** as the SCST_EXEC_* ones to avoid translating between them and, hence, to
+ ** produce better code.
+ **/
+#define SCST_CMD_STATE_RES_CONT_NEXT SCST_EXEC_COMPLETED
+#define SCST_CMD_STATE_RES_CONT_SAME SCST_EXEC_NOT_COMPLETED
+#define SCST_CMD_STATE_RES_NEED_THREAD SCST_EXEC_NEED_THREAD
+
+/**
+ ** Maximum number of uncompleted commands that an initiator can
+ ** queue on any device before it starts getting TASK QUEUE FULL status.
+ **/
+#define SCST_MAX_TGT_DEV_COMMANDS 48
+
+/**
+ ** Maximum number of uncompleted commands that can be queued on any device
+ ** before initiators sending commands to this device start getting
+ ** TASK QUEUE FULL status.
+ **/
+#define SCST_MAX_DEV_COMMANDS 256
+
+#define SCST_TGT_RETRY_TIMEOUT (3*HZ/2)	/* 1.5 sec */
+
+/* Definitions of symbolic constants for LUN addressing method */
+#define SCST_LUN_ADDR_METHOD_PERIPHERAL 0
+#define SCST_LUN_ADDR_METHOD_FLAT 1
+
+extern int scst_threads;
+
+extern unsigned int scst_max_dev_cmd_mem;
+
+extern mempool_t *scst_mgmt_mempool;
+extern mempool_t *scst_mgmt_stub_mempool;
+extern mempool_t *scst_ua_mempool;
+extern mempool_t *scst_sense_mempool;
+extern mempool_t *scst_aen_mempool;
+
+extern struct kmem_cache *scst_cmd_cachep;
+extern struct kmem_cache *scst_sess_cachep;
+extern struct kmem_cache *scst_tgtd_cachep;
+extern struct kmem_cache *scst_acgd_cachep;
+
+extern spinlock_t scst_main_lock;
+
+extern struct scst_sgv_pools scst_sgv;
+
+extern unsigned long scst_flags;
+extern atomic_t scst_cmd_count;
+extern struct list_head scst_template_list;
+extern struct list_head scst_dev_list;
+extern struct list_head scst_dev_type_list;
+extern wait_queue_head_t scst_dev_cmd_waitQ;
+
+extern unsigned int scst_setup_id;
+
+extern spinlock_t scst_init_lock;
+extern struct list_head scst_init_cmd_list;
+extern wait_queue_head_t scst_init_cmd_list_waitQ;
+extern unsigned int scst_init_poll_cnt;
+
+extern struct scst_cmd_threads scst_main_cmd_threads;
+
+extern spinlock_t scst_mcmd_lock;
+/* The following lists protected by scst_mcmd_lock */
+extern struct list_head scst_active_mgmt_cmd_list;
+extern struct list_head scst_delayed_mgmt_cmd_list;
+extern wait_queue_head_t scst_mgmt_cmd_list_waitQ;
+
+struct scst_tasklet {
+ spinlock_t tasklet_lock;
+ struct list_head tasklet_cmd_list;
+ struct tasklet_struct tasklet;
+};
+extern struct scst_tasklet scst_tasklets[NR_CPUS];
+
+extern wait_queue_head_t scst_mgmt_waitQ;
+extern spinlock_t scst_mgmt_lock;
+extern struct list_head scst_sess_init_list;
+extern struct list_head scst_sess_shut_list;
+
+struct scst_cmd_thread_t {
+ struct task_struct *cmd_thread;
+ struct list_head thread_list_entry;
+};
+
+static inline bool scst_set_io_context(struct scst_cmd *cmd,
+ struct io_context **old)
+{
+ bool res;
+
+ if (cmd->cmd_threads == &scst_main_cmd_threads) {
+ EXTRACHECKS_BUG_ON(in_interrupt());
+ /*
+ * No need for any reference counting action, because the io_context
+ * is supposed to be cleared at the end of the caller function.
+ */
+ current->io_context = cmd->tgt_dev->async_io_context;
+ res = true;
+ TRACE_DBG("io_context %p (tgt_dev %p)", current->io_context,
+ cmd->tgt_dev);
+ EXTRACHECKS_BUG_ON(current->io_context == NULL);
+ } else
+ res = false;
+
+ return res;
+}
+
+static inline void scst_reset_io_context(struct scst_tgt_dev *tgt_dev,
+ struct io_context *old)
+{
+ current->io_context = old;
+ TRACE_DBG("io_context %p reset", current->io_context);
+ return;
+}
+
+/*
+ * Converts the string representation of a threads pool type to the enum.
+ * Returns SCST_THREADS_POOL_TYPE_INVALID if the string is invalid.
+ */
+extern enum scst_dev_type_threads_pool_type scst_parse_threads_pool_type(
+ const char *p, int len);
+
+extern int scst_add_threads(struct scst_cmd_threads *cmd_threads,
+ struct scst_device *dev, struct scst_tgt_dev *tgt_dev, int num);
+extern void scst_del_threads(struct scst_cmd_threads *cmd_threads, int num);
+
+extern int scst_create_dev_threads(struct scst_device *dev);
+extern void scst_stop_dev_threads(struct scst_device *dev);
+
+extern int scst_tgt_dev_setup_threads(struct scst_tgt_dev *tgt_dev);
+extern void scst_tgt_dev_stop_threads(struct scst_tgt_dev *tgt_dev);
+
+extern bool scst_del_thr_data(struct scst_tgt_dev *tgt_dev,
+ struct task_struct *tsk);
+
+extern struct scst_dev_type scst_null_devtype;
+
+extern struct scst_cmd *__scst_check_deferred_commands(
+ struct scst_tgt_dev *tgt_dev);
+
+/* Used to save the function call on the fast path */
+static inline struct scst_cmd *scst_check_deferred_commands(
+ struct scst_tgt_dev *tgt_dev)
+{
+ if (tgt_dev->def_cmd_count == 0)
+ return NULL;
+ else
+ return __scst_check_deferred_commands(tgt_dev);
+}
+
+static inline void scst_make_deferred_commands_active(
+ struct scst_tgt_dev *tgt_dev)
+{
+ struct scst_cmd *c;
+
+ c = __scst_check_deferred_commands(tgt_dev);
+ if (c != NULL) {
+ TRACE_SN("Adding cmd %p to active cmd list", c);
+ spin_lock_irq(&c->cmd_threads->cmd_list_lock);
+ list_add_tail(&c->cmd_list_entry,
+ &c->cmd_threads->active_cmd_list);
+ wake_up(&c->cmd_threads->cmd_list_waitQ);
+ spin_unlock_irq(&c->cmd_threads->cmd_list_lock);
+ }
+
+ return;
+}
+
+void scst_inc_expected_sn(struct scst_tgt_dev *tgt_dev, atomic_t *slot);
+int scst_check_hq_cmd(struct scst_cmd *cmd);
+
+void scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+ struct scst_cmd *cmd_sn);
+
+void scst_on_hq_cmd_response(struct scst_cmd *cmd);
+void scst_xmit_process_aborted_cmd(struct scst_cmd *cmd);
+
+int scst_cmd_thread(void *arg);
+void scst_cmd_tasklet(long p);
+int scst_init_thread(void *arg);
+int scst_tm_thread(void *arg);
+int scst_global_mgmt_thread(void *arg);
+
+int scst_queue_retry_cmd(struct scst_cmd *cmd, int finished_cmds);
+
+static inline void scst_tgtt_cleanup(struct scst_tgt_template *tgtt) { }
+static inline void scst_devt_cleanup(struct scst_dev_type *devt) { }
+
+int scst_alloc_tgt(struct scst_tgt_template *tgtt, struct scst_tgt **tgt);
+void scst_free_tgt(struct scst_tgt *tgt);
+
+int scst_alloc_device(gfp_t gfp_mask, struct scst_device **out_dev);
+void scst_free_device(struct scst_device *dev);
+
+struct scst_acg *scst_alloc_add_acg(struct scst_tgt *tgt,
+ const char *acg_name);
+void scst_clear_acg(struct scst_acg *acg);
+void scst_destroy_acg(struct scst_acg *acg);
+void scst_free_acg(struct scst_acg *acg);
+
+struct scst_acg *scst_tgt_find_acg(struct scst_tgt *tgt, const char *name);
+
+struct scst_acg *scst_find_acg(const struct scst_session *sess);
+
+void scst_check_reassign_sessions(void);
+
+int scst_sess_alloc_tgt_devs(struct scst_session *sess);
+void scst_sess_free_tgt_devs(struct scst_session *sess);
+void scst_nexus_loss(struct scst_tgt_dev *tgt_dev, bool queue_UA);
+
+int scst_acg_add_dev(struct scst_acg *acg, struct scst_device *dev,
+ uint64_t lun, int read_only, bool gen_scst_report_luns_changed);
+int scst_acg_remove_dev(struct scst_acg *acg, struct scst_device *dev,
+ bool gen_scst_report_luns_changed);
+
+void scst_acg_dev_destroy(struct scst_acg_dev *acg_dev);
+
+int scst_acg_add_name(struct scst_acg *acg, const char *name);
+void scst_acg_remove_acn(struct scst_acn *acn);
+struct scst_acn *scst_acg_find_name(struct scst_acg *acg, const char *name);
+
+/* The activity supposed to be suspended and scst_mutex held */
+static inline bool scst_acg_sess_is_empty(struct scst_acg *acg)
+{
+ return list_empty(&acg->acg_sess_list);
+}
+
+int scst_prepare_request_sense(struct scst_cmd *orig_cmd);
+int scst_finish_internal_cmd(struct scst_cmd *cmd);
+
+void scst_store_sense(struct scst_cmd *cmd);
+
+int scst_assign_dev_handler(struct scst_device *dev,
+ struct scst_dev_type *handler);
+
+struct scst_session *scst_alloc_session(struct scst_tgt *tgt, gfp_t gfp_mask,
+ const char *initiator_name);
+void scst_free_session(struct scst_session *sess);
+void scst_release_session(struct scst_session *sess);
+void scst_free_session_callback(struct scst_session *sess);
+
+struct scst_cmd *scst_alloc_cmd(gfp_t gfp_mask);
+void scst_free_cmd(struct scst_cmd *cmd);
+static inline void scst_destroy_cmd(struct scst_cmd *cmd)
+{
+ kmem_cache_free(scst_cmd_cachep, cmd);
+ return;
+}
+
+void scst_check_retries(struct scst_tgt *tgt);
+
+int scst_scsi_exec_async(struct scst_cmd *cmd,
+ void (*done)(void *, char *, int, int));
+
+int scst_alloc_space(struct scst_cmd *cmd);
+
+int scst_lib_init(void);
+void scst_lib_exit(void);
+
+uint64_t scst_pack_lun(const uint64_t lun, unsigned int addr_method);
+uint64_t scst_unpack_lun(const uint8_t *lun, int len);
+
+struct scst_mgmt_cmd *scst_alloc_mgmt_cmd(gfp_t gfp_mask);
+void scst_free_mgmt_cmd(struct scst_mgmt_cmd *mcmd);
+void scst_done_cmd_mgmt(struct scst_cmd *cmd);
+
+int scst_sysfs_init(void);
+void scst_sysfs_cleanup(void);
+int scst_create_tgtt_sysfs(struct scst_tgt_template *tgtt);
+void scst_tgtt_sysfs_put(struct scst_tgt_template *tgtt);
+int scst_create_tgt_sysfs(struct scst_tgt *tgt);
+void scst_tgt_sysfs_prepare_put(struct scst_tgt *tgt);
+void scst_tgt_sysfs_put(struct scst_tgt *tgt);
+int scst_create_sess_sysfs(struct scst_session *sess);
+void scst_sess_sysfs_put(struct scst_session *sess);
+int scst_create_sgv_sysfs(struct sgv_pool *pool);
+void scst_sgv_sysfs_put(struct sgv_pool *pool);
+int scst_create_devt_sysfs(struct scst_dev_type *devt);
+void scst_devt_sysfs_put(struct scst_dev_type *devt);
+int scst_create_device_sysfs(struct scst_device *dev);
+void scst_device_sysfs_put(struct scst_device *dev);
+int scst_create_devt_dev_sysfs(struct scst_device *dev);
+void scst_devt_dev_sysfs_put(struct scst_device *dev);
+int scst_create_acn_sysfs(struct scst_acg *acg, struct scst_acn *acn);
+void scst_acn_sysfs_del(struct scst_acg *acg, struct scst_acn *acn,
+ bool reassign);
+
+void scst_acg_sysfs_put(struct scst_acg *acg);
+
+int scst_get_cdb_len(const uint8_t *cdb);
+
+void __scst_dev_check_set_UA(struct scst_device *dev, struct scst_cmd *exclude,
+ const uint8_t *sense, int sense_len);
+static inline void scst_dev_check_set_UA(struct scst_device *dev,
+ struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+ spin_lock_bh(&dev->dev_lock);
+ __scst_dev_check_set_UA(dev, exclude, sense, sense_len);
+ spin_unlock_bh(&dev->dev_lock);
+ return;
+}
+void scst_dev_check_set_local_UA(struct scst_device *dev,
+ struct scst_cmd *exclude, const uint8_t *sense, int sense_len);
+
+#define SCST_SET_UA_FLAG_AT_HEAD 1
+#define SCST_SET_UA_FLAG_GLOBAL 2
+
+void scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+ const uint8_t *sense, int sense_len, int flags);
+int scst_set_pending_UA(struct scst_cmd *cmd);
+
+void scst_report_luns_changed(struct scst_acg *acg);
+
+void scst_abort_cmd(struct scst_cmd *cmd, struct scst_mgmt_cmd *mcmd,
+ int other_ini, int call_dev_task_mgmt_fn);
+void scst_process_reset(struct scst_device *dev,
+ struct scst_session *originator, struct scst_cmd *exclude_cmd,
+ struct scst_mgmt_cmd *mcmd, bool setUA);
+
+bool scst_is_ua_global(const uint8_t *sense, int len);
+void scst_requeue_ua(struct scst_cmd *cmd);
+
+void scst_gen_aen_or_ua(struct scst_tgt_dev *tgt_dev,
+ int key, int asc, int ascq);
+
+static inline bool scst_is_implicit_hq(struct scst_cmd *cmd)
+{
+ return (cmd->op_flags & SCST_IMPLICIT_HQ) != 0;
+}
+
+/*
+ * Some notes on device "blocking". Blocking means that no
+ * commands will go from SCST to the underlying SCSI device until it
+ * is unblocked. Commands that are already on the device are not
+ * affected by it.
+ */
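+
+/*
+ * Blocks nest via dev->block_count: __scst_block_dev() increments it and
+ * scst_unblock_dev() decrements it; only when the count returns to zero are
+ * the commands parked on dev->blocked_cmd_list reactivated by
+ * scst_unblock_cmds(). scst_block_dev_cmd()/scst_unblock_dev_cmd() are the
+ * per-command wrappers which record the need to unblock in
+ * cmd->needs_unblocking.
+ */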
+
+extern int scst_inc_on_dev_cmd(struct scst_cmd *cmd);
+
+extern void __scst_block_dev(struct scst_device *dev);
+extern void scst_block_dev_cmd(struct scst_cmd *cmd, int outstanding);
+extern void scst_unblock_dev(struct scst_device *dev);
+extern void scst_unblock_dev_cmd(struct scst_cmd *cmd);
+
+/* No locks */
+static inline void scst_dec_on_dev_cmd(struct scst_cmd *cmd)
+{
+ struct scst_device *dev = cmd->dev;
+ bool unblock_dev = cmd->inc_blocking;
+
+ if (cmd->inc_blocking) {
+ TRACE_MGMT_DBG("cmd %p (tag %llu): unblocking dev %p", cmd,
+ (long long unsigned int)cmd->tag, cmd->dev);
+ cmd->inc_blocking = 0;
+ }
+ cmd->dec_on_dev_needed = 0;
+
+ if (unblock_dev)
+ scst_unblock_dev(dev);
+
+ atomic_dec(&dev->on_dev_count);
+ /* See comment in scst_block_dev() */
+ smp_mb__after_atomic_dec();
+
+ TRACE_DBG("New on_dev_count %d", atomic_read(&dev->on_dev_count));
+
+ BUG_ON(atomic_read(&dev->on_dev_count) < 0);
+
+ if (unlikely(dev->block_count != 0))
+ wake_up_all(&dev->on_dev_waitQ);
+
+ return;
+}
+
+static inline void __scst_get(int barrier)
+{
+ atomic_inc(&scst_cmd_count);
+ TRACE_DBG("Incrementing scst_cmd_count(new value %d)",
+ atomic_read(&scst_cmd_count));
+
+ /* See comment about smp_mb() in scst_suspend_activity() */
+ if (barrier)
+ smp_mb__after_atomic_inc();
+}
+
+static inline void __scst_put(void)
+{
+ int f;
+ f = atomic_dec_and_test(&scst_cmd_count);
+ /* See comment about smp_mb() in scst_suspend_activity() */
+ if (f && unlikely(test_bit(SCST_FLAG_SUSPENDED, &scst_flags))) {
+ TRACE_MGMT_DBG("%s", "Waking up scst_dev_cmd_waitQ");
+ wake_up_all(&scst_dev_cmd_waitQ);
+ }
+ TRACE_DBG("Decrementing scst_cmd_count(new value %d)",
+ atomic_read(&scst_cmd_count));
+}
+
+void scst_sched_session_free(struct scst_session *sess);
+
+static inline void scst_sess_get(struct scst_session *sess)
+{
+ atomic_inc(&sess->refcnt);
+ TRACE_DBG("Incrementing sess %p refcnt (new value %d)",
+ sess, atomic_read(&sess->refcnt));
+}
+
+static inline void scst_sess_put(struct scst_session *sess)
+{
+ TRACE_DBG("Decrementing sess %p refcnt (new value %d)",
+ sess, atomic_read(&sess->refcnt)-1);
+ if (atomic_dec_and_test(&sess->refcnt))
+ scst_sched_session_free(sess);
+}
+
+static inline void __scst_cmd_get(struct scst_cmd *cmd)
+{
+ atomic_inc(&cmd->cmd_ref);
+ TRACE_DBG("Incrementing cmd %p ref (new value %d)",
+ cmd, atomic_read(&cmd->cmd_ref));
+}
+
+static inline void __scst_cmd_put(struct scst_cmd *cmd)
+{
+ TRACE_DBG("Decrementing cmd %p ref (new value %d)",
+ cmd, atomic_read(&cmd->cmd_ref)-1);
+ if (atomic_dec_and_test(&cmd->cmd_ref))
+ scst_free_cmd(cmd);
+}
+
+extern void scst_throttle_cmd(struct scst_cmd *cmd);
+extern void scst_unthrottle_cmd(struct scst_cmd *cmd);
+
+static inline void scst_check_restore_sg_buff(struct scst_cmd *cmd)
+{
+ if (cmd->sg_buff_modified) {
+ TRACE_MEM("cmd %p, sg %p, orig_sg_entry %d, "
+ "orig_entry_len %d, orig_sg_cnt %d", cmd, cmd->sg,
+ cmd->orig_sg_entry, cmd->orig_entry_len,
+ cmd->orig_sg_cnt);
+ cmd->sg[cmd->orig_sg_entry].length = cmd->orig_entry_len;
+ cmd->sg_cnt = cmd->orig_sg_cnt;
+ cmd->sg_buff_modified = 0;
+ }
+}
+
+#ifdef CONFIG_SCST_DEBUG_TM
+extern void tm_dbg_check_released_cmds(void);
+extern int tm_dbg_check_cmd(struct scst_cmd *cmd);
+extern void tm_dbg_release_cmd(struct scst_cmd *cmd);
+extern void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn,
+ int force);
+extern int tm_dbg_is_release(void);
+#else
+static inline void tm_dbg_check_released_cmds(void) {}
+static inline int tm_dbg_check_cmd(struct scst_cmd *cmd)
+{
+ return 0;
+}
+static inline void tm_dbg_release_cmd(struct scst_cmd *cmd) {}
+static inline void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn,
+ int force) {}
+static inline int tm_dbg_is_release(void)
+{
+ return 0;
+}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+#ifdef CONFIG_SCST_DEBUG_SN
+void scst_check_debug_sn(struct scst_cmd *cmd);
+#else
+static inline void scst_check_debug_sn(struct scst_cmd *cmd) {}
+#endif
+
+static inline int scst_sn_before(uint32_t seq1, uint32_t seq2)
+{
+ return (int32_t)(seq1-seq2) < 0;
+}
+
+bool scst_is_relative_target_port_id_unique(uint16_t id, struct scst_tgt *t);
+int gen_relative_target_port_id(uint16_t *id);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+void scst_set_start_time(struct scst_cmd *cmd);
+void scst_set_cur_start(struct scst_cmd *cmd);
+void scst_set_parse_time(struct scst_cmd *cmd);
+void scst_set_alloc_buf_time(struct scst_cmd *cmd);
+void scst_set_restart_waiting_time(struct scst_cmd *cmd);
+void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd);
+void scst_set_pre_exec_time(struct scst_cmd *cmd);
+void scst_set_exec_time(struct scst_cmd *cmd);
+void scst_set_dev_done_time(struct scst_cmd *cmd);
+void scst_set_xmit_time(struct scst_cmd *cmd);
+void scst_set_tgt_on_free_time(struct scst_cmd *cmd);
+void scst_set_dev_on_free_time(struct scst_cmd *cmd);
+void scst_update_lat_stats(struct scst_cmd *cmd);
+
+#else
+
+static inline void scst_set_start_time(struct scst_cmd *cmd) {}
+static inline void scst_set_cur_start(struct scst_cmd *cmd) {}
+static inline void scst_set_parse_time(struct scst_cmd *cmd) {}
+static inline void scst_set_alloc_buf_time(struct scst_cmd *cmd) {}
+static inline void scst_set_restart_waiting_time(struct scst_cmd *cmd) {}
+static inline void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd) {}
+static inline void scst_set_pre_exec_time(struct scst_cmd *cmd) {}
+static inline void scst_set_exec_time(struct scst_cmd *cmd) {}
+static inline void scst_set_dev_done_time(struct scst_cmd *cmd) {}
+static inline void scst_set_xmit_time(struct scst_cmd *cmd) {}
+static inline void scst_set_tgt_on_free_time(struct scst_cmd *cmd) {}
+static inline void scst_set_dev_on_free_time(struct scst_cmd *cmd) {}
+static inline void scst_update_lat_stats(struct scst_cmd *cmd) {}
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+#endif /* __SCST_PRIV_H */
* Re: [PATCH][RFC 7/12/1/5] SCST SGV cache
[not found] ` <4BC44D08.4060907@vlnb.net>
` (5 preceding siblings ...)
2010-04-13 13:06 ` [PATCH][RFC 6/12/1/5] SCST core's private header Vladislav Bolkhovitin
@ 2010-04-13 13:06 ` Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 8/12/1/5] SCST sysfs interface Vladislav Bolkhovitin
` (2 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains the SCST SGV cache. The SGV cache is a memory management
subsystem in SCST. More information about it can be found in the documentation
included in this patch.
P.S. Solaris COMSTAR also has a similar facility.
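For illustration, here is a minimal sketch of how a target driver could use
this API (the example_* names, the pool name, the buffer size and the GFP
flags are made up for this example; the calls and types are the ones declared
in include/scst/scst_sgv.h added by this patch, and the include line assumes
that header location):

#include <scst/scst_sgv.h>

static struct sgv_pool *example_pool;
static struct scst_mem_lim example_mem_lim;

static int example_init(void)
{
        /* Power-of-2 buffer sizes, not shared, default purge interval */
        example_pool = sgv_pool_create("example", sgv_no_clustering, 0,
                                       false, 0);
        if (example_pool == NULL)
                return -ENOMEM;

        scst_init_mem_lim(&example_mem_lim);
        return 0;
}

static int example_io(void)
{
        struct sgv_pool_obj *sgv_obj = NULL; /* must be NULL for a new alloc */
        struct scatterlist *sg;
        int count;

        sg = sgv_pool_alloc(example_pool, 64 * 1024, GFP_KERNEL, 0, &count,
                            &sgv_obj, &example_mem_lim, NULL);
        if (sg == NULL)
                return -ENOMEM;

        /* ... fill/transfer data using the count entries of sg ... */

        sgv_pool_free(sgv_obj, &example_mem_lim);
        return 0;
}

static void example_exit(void)
{
        sgv_pool_del(example_pool);
}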
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
Documentation/scst/sgv_cache.txt | 224 ++++
drivers/scst/scst_mem.c | 1788 +++++++++++++++++++++++++++++++++++++++
drivers/scst/scst_mem.h | 150 +++
include/scst/scst_sgv.h | 97 ++
4 files changed, 2259 insertions(+)
diff -uprN orig/linux-2.6.33/include/scst/scst_sgv.h linux-2.6.33/include/scst/scst_sgv.h
--- orig/linux-2.6.33/include/scst/scst_sgv.h
+++ linux-2.6.33/include/scst/scst_sgv.h
@@ -0,0 +1,97 @@
+/*
+ * include/scst/scst_sgv.h
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Include file for SCST SGV cache.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef __SCST_SGV_H
+#define __SCST_SGV_H
+
+/** SGV pool routines and flag bits **/
+
+/* Set if the allocated object must not be taken from the cache */
+#define SGV_POOL_ALLOC_NO_CACHED 1
+
+/* Set if there should not be any memory allocations on a cache miss */
+#define SGV_POOL_NO_ALLOC_ON_CACHE_MISS 2
+
+/* Set if an object should be returned even if its SG vector wasn't built */
+#define SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL 4
+
+/*
+ * Set if the allocated object must be a new one, i.e. from the cache,
+ * but not cached
+ */
+#define SGV_POOL_ALLOC_GET_NEW 8
+
+struct sgv_pool_obj;
+struct sgv_pool;
+
+/*
+ * Structure to keep a memory limit for an SCST object
+ */
+struct scst_mem_lim {
+ /* How much memory is allocated under this object */
+ atomic_t alloced_pages;
+
+ /*
+ * How much memory is allowed to be allocated under this object. Put here
+ * mostly to save a possible cache miss when accessing scst_max_dev_cmd_mem.
+ */
+ int max_allowed_pages;
+};
+
+/* Types of clustering */
+enum sgv_clustering_types {
+ /* No clustering performed */
+ sgv_no_clustering = 0,
+
+ /*
+ * A page will only be merged with the latest previously allocated
+ * page, so the order of pages in the SG will be preserved.
+ */
+ sgv_tail_clustering,
+
+ /*
+ * Free merging of pages at any place in the SG is allowed. This mode
+ * usually provides the best merging rate.
+ */
+ sgv_full_clustering,
+};
+
+struct sgv_pool *sgv_pool_create(const char *name,
+ enum sgv_clustering_types clustered, int single_alloc_pages,
+ bool shared, int purge_interval);
+void sgv_pool_del(struct sgv_pool *pool);
+
+void sgv_pool_get(struct sgv_pool *pool);
+void sgv_pool_put(struct sgv_pool *pool);
+
+void sgv_pool_flush(struct sgv_pool *pool);
+
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+ struct page *(*alloc_pages_fn)(struct scatterlist *, gfp_t, void *),
+ void (*free_pages_fn)(struct scatterlist *, int, void *));
+
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+ gfp_t gfp_mask, int flags, int *count,
+ struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv);
+void sgv_pool_free(struct sgv_pool_obj *sgv, struct scst_mem_lim *mem_lim);
+
+void *sgv_get_priv(struct sgv_pool_obj *sgv);
+
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim);
+
+#endif /* __SCST_SGV_H */
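As a side note, a custom page allocator can be plugged in with
sgv_pool_set_allocator() declared above. Below is a hedged sketch of what such
callbacks might look like (my_alloc_pages()/my_free_pages() are invented names;
only the callback signatures come from this header). The alloc callback is
expected to fill in the passed SG entry and return the allocated page, or NULL
on failure; the simplified free callback assumes single-page SG entries, i.e. a
pool created with sgv_no_clustering:

static struct page *my_alloc_pages(struct scatterlist *sg, gfp_t gfp_mask,
                                   void *priv)
{
        struct page *page = alloc_pages(gfp_mask, 0);

        if (page != NULL)
                sg_set_page(sg, page, PAGE_SIZE, 0);
        return page;
}

static void my_free_pages(struct scatterlist *sg, int sg_count, void *priv)
{
        int i;

        for (i = 0; i < sg_count; i++)
                __free_pages(sg_page(&sg[i]), 0);
}

        /* ... */
        sgv_pool_set_allocator(pool, my_alloc_pages, my_free_pages);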
diff -uprN orig/linux-2.6.33/drivers/scst/scst_mem.h linux-2.6.33/drivers/scst/scst_mem.h
--- orig/linux-2.6.33/drivers/scst/scst_mem.h
+++ linux-2.6.33/drivers/scst/scst_mem.h
@@ -0,0 +1,150 @@
+/*
+ * scst_mem.h
+ *
+ * Copyright (C) 2006 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/scatterlist.h>
+#include <linux/workqueue.h>
+
+#define SGV_POOL_ELEMENTS 11
+
+/*
+ * sg_num is indexed by the page number, pg_count is indexed by the sg number.
+ * Made in one entry to simplify the code (e.g. all the sizeof(*) parts) and
+ * to save some CPU cache in the non-clustered case.
+ */
+struct trans_tbl_ent {
+ unsigned short sg_num;
+ unsigned short pg_count;
+};
+
+/*
+ * SGV pool object
+ */
+struct sgv_pool_obj {
+ int cache_num;
+ int pages;
+
+ /* jiffies, protected by sgv_pool_lock */
+ unsigned long time_stamp;
+
+ struct list_head recycling_list_entry;
+ struct list_head sorted_recycling_list_entry;
+
+ struct sgv_pool *owner_pool;
+ int orig_sg;
+ int orig_length;
+ int sg_count;
+ void *allocator_priv;
+ struct trans_tbl_ent *trans_tbl;
+ struct scatterlist *sg_entries;
+ struct scatterlist sg_entries_data[0];
+};
+
+/*
+ * SGV pool statistics accounting structure
+ */
+struct sgv_pool_cache_acc {
+ atomic_t total_alloc, hit_alloc;
+ atomic_t merged;
+};
+
+/*
+ * SGV pool allocation functions
+ */
+struct sgv_pool_alloc_fns {
+ struct page *(*alloc_pages_fn)(struct scatterlist *sg, gfp_t gfp_mask,
+ void *priv);
+ void (*free_pages_fn)(struct scatterlist *sg, int sg_count,
+ void *priv);
+};
+
+/*
+ * SGV pool
+ */
+struct sgv_pool {
+ enum sgv_clustering_types clustering_type;
+ int single_alloc_pages;
+ int max_cached_pages;
+
+ struct sgv_pool_alloc_fns alloc_fns;
+
+ /* <=4K, <=8, <=16, <=32, <=64, <=128, <=256, <=512, <=1024, <=2048 */
+ struct kmem_cache *caches[SGV_POOL_ELEMENTS];
+
+ spinlock_t sgv_pool_lock; /* outer lock for sgv_pools_lock! */
+
+ int purge_interval;
+
+ /* Protected by sgv_pool_lock, if necessary */
+ unsigned int purge_work_scheduled:1;
+ unsigned int sgv_kobj_initialized:1;
+
+ /* Protected by sgv_pool_lock */
+ struct list_head sorted_recycling_list;
+
+ int inactive_cached_pages; /* protected by sgv_pool_lock */
+
+ /* Protected by sgv_pool_lock */
+ struct list_head recycling_lists[SGV_POOL_ELEMENTS];
+
+ int cached_pages, cached_entries; /* protected by sgv_pool_lock */
+
+ struct sgv_pool_cache_acc cache_acc[SGV_POOL_ELEMENTS];
+
+ struct delayed_work sgv_purge_work;
+
+ struct list_head sgv_active_pools_list_entry;
+
+ atomic_t big_alloc, big_pages, big_merged;
+ atomic_t other_alloc, other_pages, other_merged;
+
+ atomic_t sgv_pool_ref;
+
+ int max_caches;
+
+ /* SCST_MAX_NAME + a few more bytes to match scst_user expectations */
+ char cache_names[SGV_POOL_ELEMENTS][SCST_MAX_NAME + 10];
+ char name[SCST_MAX_NAME + 10];
+
+ struct mm_struct *owner_mm;
+
+ struct list_head sgv_pools_list_entry;
+
+ struct kobject sgv_kobj;
+};
+
+static inline struct scatterlist *sgv_pool_sg(struct sgv_pool_obj *obj)
+{
+ return obj->sg_entries;
+}
+
+int scst_sgv_pools_init(unsigned long mem_hwmark, unsigned long mem_lwmark);
+void scst_sgv_pools_deinit(void);
+
+void sgv_pool_destroy(struct sgv_pool *pool);
+
+ssize_t sgv_sysfs_stat_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf);
+ssize_t sgv_sysfs_stat_reset(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count);
+ssize_t sgv_sysfs_global_stat_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf);
+ssize_t sgv_sysfs_global_stat_reset(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count);
+
+void scst_sgv_pool_use_norm(struct scst_tgt_dev *tgt_dev);
+void scst_sgv_pool_use_norm_clust(struct scst_tgt_dev *tgt_dev);
+void scst_sgv_pool_use_dma(struct scst_tgt_dev *tgt_dev);
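To make the size classes above concrete: with single_alloc_pages == 0 the pool
keeps SGV_POOL_ELEMENTS kmem caches and sgv_pool_alloc() rounds every request
up to the next power-of-2 number of pages (cache_num = get_order(size)), while
the returned SG vector covers only the pages actually needed. A stand-alone
illustration of that mapping follows; it is user-space code, not part of the
patch, and assumes 4 KB pages:

#include <stdio.h>

#define PAGE_SHIFT      12              /* assumption: 4 KB pages */
#define PAGE_SIZE       (1 << PAGE_SHIFT)

/* gives the same result as the kernel's get_order() for size >= 1 */
static int get_order(unsigned int size)
{
        int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        unsigned int sizes[] = { 6000, 70000, 4 * 1024 * 1024 };
        unsigned int i;

        for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
                int pages = (sizes[i] + PAGE_SIZE - 1) >> PAGE_SHIFT;
                int order = get_order(sizes[i]);

                /* e.g. 70000 bytes -> 18 pages used, cache 5, 32 pages kept */
                printf("size %u -> %d pages used, cache %d (%d pages)\n",
                       sizes[i], pages, order, 1 << order);
        }
        return 0;
}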
diff -uprN orig/linux-2.6.33/drivers/scst/scst_mem.c linux-2.6.33/drivers/scst/scst_mem.c
--- orig/linux-2.6.33/drivers/scst/scst_mem.c
+++ linux-2.6.33/drivers/scst/scst_mem.c
@@ -0,0 +1,1788 @@
+/*
+ * scst_mem.c
+ *
+ * Copyright (C) 2006 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+#define SGV_DEFAULT_PURGE_INTERVAL (60 * HZ)
+#define SGV_MIN_SHRINK_INTERVAL (1 * HZ)
+
+/* Max pages freed from a pool per shrinking iteration */
+#define MAX_PAGES_PER_POOL 50
+
+static struct sgv_pool *sgv_norm_clust_pool, *sgv_norm_pool, *sgv_dma_pool;
+
+static atomic_t sgv_pages_total = ATOMIC_INIT(0);
+
+/* Both read-only */
+static int sgv_hi_wmk;
+static int sgv_lo_wmk;
+
+static int sgv_max_local_pages, sgv_max_trans_pages;
+
+static DEFINE_SPINLOCK(sgv_pools_lock); /* inner lock for sgv_pool_lock! */
+static DEFINE_MUTEX(sgv_pools_mutex);
+
+/* Both protected by sgv_pools_lock */
+static struct sgv_pool *sgv_cur_purge_pool;
+static LIST_HEAD(sgv_active_pools_list);
+
+static atomic_t sgv_releases_on_hiwmk = ATOMIC_INIT(0);
+static atomic_t sgv_releases_on_hiwmk_failed = ATOMIC_INIT(0);
+
+static atomic_t sgv_other_total_alloc = ATOMIC_INIT(0);
+
+static struct shrinker sgv_shrinker;
+
+/*
+ * Protected by sgv_pools_mutex AND sgv_pools_lock for writes,
+ * either one for reads.
+ */
+static LIST_HEAD(sgv_pools_list);
+
+static inline bool sgv_pool_clustered(const struct sgv_pool *pool)
+{
+ return pool->clustering_type != sgv_no_clustering;
+}
+
+void scst_sgv_pool_use_norm(struct scst_tgt_dev *tgt_dev)
+{
+ tgt_dev->gfp_mask = __GFP_NOWARN;
+ tgt_dev->pool = sgv_norm_pool;
+ clear_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+void scst_sgv_pool_use_norm_clust(struct scst_tgt_dev *tgt_dev)
+{
+ TRACE_MEM("%s", "Use clustering");
+ tgt_dev->gfp_mask = __GFP_NOWARN;
+ tgt_dev->pool = sgv_norm_clust_pool;
+ set_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+void scst_sgv_pool_use_dma(struct scst_tgt_dev *tgt_dev)
+{
+ TRACE_MEM("%s", "Use ISA DMA memory");
+ tgt_dev->gfp_mask = __GFP_NOWARN | GFP_DMA;
+ tgt_dev->pool = sgv_dma_pool;
+ clear_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+/* Must be called with no locks held */
+static void sgv_dtor_and_free(struct sgv_pool_obj *obj)
+{
+ struct sgv_pool *pool = obj->owner_pool;
+
+ TRACE_MEM("Destroying sgv obj %p", obj);
+
+ if (obj->sg_count != 0) {
+ pool->alloc_fns.free_pages_fn(obj->sg_entries,
+ obj->sg_count, obj->allocator_priv);
+ }
+ if (obj->sg_entries != obj->sg_entries_data) {
+ if (obj->trans_tbl !=
+ (struct trans_tbl_ent *)obj->sg_entries_data) {
+ /* kfree() handles NULL parameter */
+ kfree(obj->trans_tbl);
+ obj->trans_tbl = NULL;
+ }
+ kfree(obj->sg_entries);
+ }
+
+ kmem_cache_free(pool->caches[obj->cache_num], obj);
+ return;
+}
+
+/* Might be called under sgv_pool_lock */
+static inline void sgv_del_from_active(struct sgv_pool *pool)
+{
+ struct list_head *next;
+
+ TRACE_MEM("Deleting sgv pool %p from the active list", pool);
+
+ spin_lock_bh(&sgv_pools_lock);
+
+ next = pool->sgv_active_pools_list_entry.next;
+ list_del(&pool->sgv_active_pools_list_entry);
+
+ if (sgv_cur_purge_pool == pool) {
+ TRACE_MEM("Sgv pool %p is sgv cur purge pool", pool);
+
+ if (next == &sgv_active_pools_list)
+ next = next->next;
+
+ if (next == &sgv_active_pools_list) {
+ sgv_cur_purge_pool = NULL;
+ TRACE_MEM("%s", "Sgv active list now empty");
+ } else {
+ sgv_cur_purge_pool = list_entry(next, typeof(*pool),
+ sgv_active_pools_list_entry);
+ TRACE_MEM("New sgv cur purge pool %p",
+ sgv_cur_purge_pool);
+ }
+ }
+
+ spin_unlock_bh(&sgv_pools_lock);
+ return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static void sgv_dec_cached_entries(struct sgv_pool *pool, int pages)
+{
+ pool->cached_entries--;
+ pool->cached_pages -= pages;
+
+ if (pool->cached_entries == 0)
+ sgv_del_from_active(pool);
+
+ return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static void __sgv_purge_from_cache(struct sgv_pool_obj *obj)
+{
+ int pages = obj->pages;
+ struct sgv_pool *pool = obj->owner_pool;
+
+ TRACE_MEM("Purging sgv obj %p from pool %p (new cached_entries %d)",
+ obj, pool, pool->cached_entries-1);
+
+ list_del(&obj->sorted_recycling_list_entry);
+ list_del(&obj->recycling_list_entry);
+
+ pool->inactive_cached_pages -= pages;
+ sgv_dec_cached_entries(pool, pages);
+
+ atomic_sub(pages, &sgv_pages_total);
+
+ return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static bool sgv_purge_from_cache(struct sgv_pool_obj *obj, int min_interval,
+ unsigned long cur_time)
+{
+ EXTRACHECKS_BUG_ON(min_interval < 0);
+
+ TRACE_MEM("Checking if sgv obj %p should be purged (cur time %ld, "
+ "obj time %ld, time to purge %ld)", obj, cur_time,
+ obj->time_stamp, obj->time_stamp + min_interval);
+
+ if (time_after_eq(cur_time, (obj->time_stamp + min_interval))) {
+ __sgv_purge_from_cache(obj);
+ return true;
+ }
+ return false;
+}
+
+/* No locks */
+static int sgv_shrink_pool(struct sgv_pool *pool, int nr, int min_interval,
+ unsigned long cur_time)
+{
+ int freed = 0;
+
+ TRACE_MEM("Trying to shrink pool %p (nr %d, min_interval %d)",
+ pool, nr, min_interval);
+
+ if (pool->purge_interval < 0) {
+ TRACE_MEM("Not shrinkable pool %p, skipping", pool);
+ goto out;
+ }
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+
+ while (!list_empty(&pool->sorted_recycling_list) &&
+ (atomic_read(&sgv_pages_total) > sgv_lo_wmk)) {
+ struct sgv_pool_obj *obj = list_entry(
+ pool->sorted_recycling_list.next,
+ struct sgv_pool_obj, sorted_recycling_list_entry);
+
+ if (sgv_purge_from_cache(obj, min_interval, cur_time)) {
+ int pages = obj->pages;
+
+ freed += pages;
+ nr -= pages;
+
+ TRACE_MEM("%d pages purged from pool %p (nr left %d, "
+ "total freed %d)", pages, pool, nr, freed);
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ sgv_dtor_and_free(obj);
+ spin_lock_bh(&pool->sgv_pool_lock);
+ } else
+ break;
+
+ if ((nr <= 0) || (freed >= MAX_PAGES_PER_POOL)) {
+ if (freed >= MAX_PAGES_PER_POOL)
+ TRACE_MEM("%d pages purged from pool %p, "
+ "leaving", freed, pool);
+ break;
+ }
+ }
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+
+out:
+ return nr;
+}
+
+/* No locks */
+static int __sgv_shrink(int nr, int min_interval)
+{
+ struct sgv_pool *pool;
+ unsigned long cur_time = jiffies;
+ int prev_nr = nr;
+ bool circle = false;
+
+ TRACE_MEM("Trying to shrink %d pages from all sgv pools "
+ "(min_interval %d)", nr, min_interval);
+
+ while (nr > 0) {
+ struct list_head *next;
+
+ spin_lock_bh(&sgv_pools_lock);
+
+ pool = sgv_cur_purge_pool;
+ if (pool == NULL) {
+ if (list_empty(&sgv_active_pools_list)) {
+ TRACE_MEM("%s", "Active pools list is empty");
+ goto out_unlock;
+ }
+
+ pool = list_entry(sgv_active_pools_list.next,
+ typeof(*pool),
+ sgv_active_pools_list_entry);
+ }
+ sgv_pool_get(pool);
+
+ next = pool->sgv_active_pools_list_entry.next;
+ if (next == &sgv_active_pools_list) {
+ if (circle && (prev_nr == nr)) {
+ TRACE_MEM("Full circle done, but no progress, "
+ "leaving (nr %d)", nr);
+ goto out_unlock_put;
+ }
+ circle = true;
+ prev_nr = nr;
+
+ next = next->next;
+ }
+
+ sgv_cur_purge_pool = list_entry(next, typeof(*pool),
+ sgv_active_pools_list_entry);
+ TRACE_MEM("New cur purge pool %p", sgv_cur_purge_pool);
+
+ spin_unlock_bh(&sgv_pools_lock);
+
+ nr = sgv_shrink_pool(pool, nr, min_interval, cur_time);
+
+ sgv_pool_put(pool);
+ }
+
+out:
+ return nr;
+
+out_unlock:
+ spin_unlock_bh(&sgv_pools_lock);
+ goto out;
+
+out_unlock_put:
+ spin_unlock_bh(&sgv_pools_lock);
+ sgv_pool_put(pool);
+ goto out;
+}
+
+static int sgv_shrink(int nr, gfp_t gfpm)
+{
+
+ if (nr > 0) {
+ nr = __sgv_shrink(nr, SGV_MIN_SHRINK_INTERVAL);
+ TRACE_MEM("Left %d", nr);
+ } else {
+ struct sgv_pool *pool;
+ int inactive_pages = 0;
+
+ spin_lock_bh(&sgv_pools_lock);
+ list_for_each_entry(pool, &sgv_active_pools_list,
+ sgv_active_pools_list_entry) {
+ if (pool->purge_interval > 0)
+ inactive_pages += pool->inactive_cached_pages;
+ }
+ spin_unlock_bh(&sgv_pools_lock);
+
+ nr = max((int)0, inactive_pages - sgv_lo_wmk);
+ TRACE_MEM("Can free %d (total %d)", nr,
+ atomic_read(&sgv_pages_total));
+ }
+ return nr;
+}
+
+static void sgv_purge_work_fn(struct delayed_work *work)
+{
+ unsigned long cur_time = jiffies;
+ struct sgv_pool *pool = container_of(work, struct sgv_pool,
+ sgv_purge_work);
+
+ TRACE_MEM("Purge work for pool %p", pool);
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+
+ pool->purge_work_scheduled = false;
+
+ while (!list_empty(&pool->sorted_recycling_list)) {
+ struct sgv_pool_obj *obj = list_entry(
+ pool->sorted_recycling_list.next,
+ struct sgv_pool_obj, sorted_recycling_list_entry);
+
+ if (sgv_purge_from_cache(obj, pool->purge_interval, cur_time)) {
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ sgv_dtor_and_free(obj);
+ spin_lock_bh(&pool->sgv_pool_lock);
+ } else {
+ /*
+ * Let's reschedule it for the full period so we don't get here
+ * too often. In the worst case, the shrinker will reclaim
+ * buffers more quickly.
+ */
+ TRACE_MEM("Rescheduling purge work for pool %p (delay "
+ "%d HZ/%d sec)", pool, pool->purge_interval,
+ pool->purge_interval/HZ);
+ schedule_delayed_work(&pool->sgv_purge_work,
+ pool->purge_interval);
+ pool->purge_work_scheduled = true;
+ break;
+ }
+ }
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+
+ TRACE_MEM("Leaving purge work for pool %p", pool);
+ return;
+}
+
+static int sgv_check_full_clustering(struct scatterlist *sg, int cur, int hint)
+{
+ int res = -1;
+ int i = hint;
+ unsigned long pfn_cur = page_to_pfn(sg_page(&sg[cur]));
+ int len_cur = sg[cur].length;
+ unsigned long pfn_cur_next = pfn_cur + (len_cur >> PAGE_SHIFT);
+ int full_page_cur = (len_cur & (PAGE_SIZE - 1)) == 0;
+ unsigned long pfn, pfn_next;
+ bool full_page;
+
+#if 0
+ TRACE_MEM("pfn_cur %ld, pfn_cur_next %ld, len_cur %d, full_page_cur %d",
+ pfn_cur, pfn_cur_next, len_cur, full_page_cur);
+#endif
+
+ /* check the hint first */
+ if (i >= 0) {
+ pfn = page_to_pfn(sg_page(&sg[i]));
+ pfn_next = pfn + (sg[i].length >> PAGE_SHIFT);
+ full_page = (sg[i].length & (PAGE_SIZE - 1)) == 0;
+
+ if ((pfn == pfn_cur_next) && full_page_cur)
+ goto out_head;
+
+ if ((pfn_next == pfn_cur) && full_page)
+ goto out_tail;
+ }
+
+ /* ToDo: implement more intelligent search */
+ for (i = cur - 1; i >= 0; i--) {
+ pfn = page_to_pfn(sg_page(&sg[i]));
+ pfn_next = pfn + (sg[i].length >> PAGE_SHIFT);
+ full_page = (sg[i].length & (PAGE_SIZE - 1)) == 0;
+
+ if ((pfn == pfn_cur_next) && full_page_cur)
+ goto out_head;
+
+ if ((pfn_next == pfn_cur) && full_page)
+ goto out_tail;
+ }
+
+out:
+ return res;
+
+out_tail:
+ TRACE_MEM("SG segment %d will be tail merged with segment %d", cur, i);
+ sg[i].length += len_cur;
+ sg_clear(&sg[cur]);
+ res = i;
+ goto out;
+
+out_head:
+ TRACE_MEM("SG segment %d will be head merged with segment %d", cur, i);
+ sg_assign_page(&sg[i], sg_page(&sg[cur]));
+ sg[i].length += len_cur;
+ sg_clear(&sg[cur]);
+ res = i;
+ goto out;
+}
+
+static int sgv_check_tail_clustering(struct scatterlist *sg, int cur, int hint)
+{
+ int res = -1;
+ unsigned long pfn_cur = page_to_pfn(sg_page(&sg[cur]));
+ int len_cur = sg[cur].length;
+ int prev;
+ unsigned long pfn_prev;
+ bool full_page;
+
+#ifdef SCST_HIGHMEM
+ if (page >= highmem_start_page) {
+ TRACE_MEM("%s", "HIGHMEM page allocated, no clustering")
+ goto out;
+ }
+#endif
+
+#if 0
+ TRACE_MEM("pfn_cur %ld, pfn_cur_next %ld, len_cur %d, full_page_cur %d",
+ pfn_cur, pfn_cur_next, len_cur, full_page_cur);
+#endif
+
+ if (cur == 0)
+ goto out;
+
+ prev = cur - 1;
+ pfn_prev = page_to_pfn(sg_page(&sg[prev])) +
+ (sg[prev].length >> PAGE_SHIFT);
+ full_page = (sg[prev].length & (PAGE_SIZE - 1)) == 0;
+
+ if ((pfn_prev == pfn_cur) && full_page) {
+ TRACE_MEM("SG segment %d will be tail merged with segment %d",
+ cur, prev);
+ sg[prev].length += len_cur;
+ sg_clear(&sg[cur]);
+ res = prev;
+ }
+
+out:
+ return res;
+}
+
+static void sgv_free_sys_sg_entries(struct scatterlist *sg, int sg_count,
+ void *priv)
+{
+ int i;
+
+ TRACE_MEM("sg=%p, sg_count=%d", sg, sg_count);
+
+ for (i = 0; i < sg_count; i++) {
+ struct page *p = sg_page(&sg[i]);
+ int len = sg[i].length;
+ int pages =
+ (len >> PAGE_SHIFT) + ((len & ~PAGE_MASK) != 0);
+
+ TRACE_MEM("page %lx, len %d, pages %d",
+ (unsigned long)p, len, pages);
+
+ while (pages > 0) {
+ int order = 0;
+
+/*
+ * __free_pages() doesn't like freeing pages with an order different from
+ * the one they were allocated with, so this small optimization is disabled.
+ */
+#if 0
+ if (len > 0) {
+ while (((1 << order) << PAGE_SHIFT) < len)
+ order++;
+ len = 0;
+ }
+#endif
+ TRACE_MEM("free_pages(): order %d, page %lx",
+ order, (unsigned long)p);
+
+ __free_pages(p, order);
+
+ pages -= 1 << order;
+ p += 1 << order;
+ }
+ }
+}
+
+static struct page *sgv_alloc_sys_pages(struct scatterlist *sg,
+ gfp_t gfp_mask, void *priv)
+{
+ struct page *page = alloc_pages(gfp_mask, 0);
+
+ sg_set_page(sg, page, PAGE_SIZE, 0);
+ TRACE_MEM("page=%p, sg=%p, priv=%p", page, sg, priv);
+ if (page == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of "
+ "sg page failed");
+ }
+ return page;
+}
+
+static int sgv_alloc_sg_entries(struct scatterlist *sg, int pages,
+ gfp_t gfp_mask, enum sgv_clustering_types clustering_type,
+ struct trans_tbl_ent *trans_tbl,
+ const struct sgv_pool_alloc_fns *alloc_fns, void *priv)
+{
+ int sg_count = 0;
+ int pg, i, j;
+ int merged = -1;
+
+ TRACE_MEM("pages=%d, clustering_type=%d", pages, clustering_type);
+
+#if 0
+ gfp_mask |= __GFP_COLD;
+#endif
+#ifdef CONFIG_SCST_STRICT_SECURITY
+ gfp_mask |= __GFP_ZERO;
+#endif
+
+ for (pg = 0; pg < pages; pg++) {
+ void *rc;
+#ifdef CONFIG_SCST_DEBUG_OOM
+ if (((gfp_mask & __GFP_NOFAIL) != __GFP_NOFAIL) &&
+ ((scst_random() % 10000) == 55))
+ rc = NULL;
+ else
+#endif
+ rc = alloc_fns->alloc_pages_fn(&sg[sg_count], gfp_mask,
+ priv);
+ if (rc == NULL)
+ goto out_no_mem;
+
+ /*
+ * This code allows the compiler to see the full body of the clustering
+ * functions and gives it a chance to generate better code.
+ * At least, the resulting code is smaller, compared to
+ * calling them through a function pointer.
+ */
+ if (clustering_type == sgv_full_clustering)
+ merged = sgv_check_full_clustering(sg, sg_count, merged);
+ else if (clustering_type == sgv_tail_clustering)
+ merged = sgv_check_tail_clustering(sg, sg_count, merged);
+ else
+ merged = -1;
+
+ if (merged == -1)
+ sg_count++;
+
+ TRACE_MEM("pg=%d, merged=%d, sg_count=%d", pg, merged,
+ sg_count);
+ }
+
+ if ((clustering_type != sgv_no_clustering) && (trans_tbl != NULL)) {
+ pg = 0;
+ for (i = 0; i < pages; i++) {
+ int n = (sg[i].length >> PAGE_SHIFT) +
+ ((sg[i].length & ~PAGE_MASK) != 0);
+ trans_tbl[i].pg_count = pg;
+ for (j = 0; j < n; j++)
+ trans_tbl[pg++].sg_num = i+1;
+ TRACE_MEM("i=%d, n=%d, pg_count=%d", i, n,
+ trans_tbl[i].pg_count);
+ }
+ }
+
+out:
+ TRACE_MEM("sg_count=%d", sg_count);
+ return sg_count;
+
+out_no_mem:
+ alloc_fns->free_pages_fn(sg, sg_count, priv);
+ sg_count = 0;
+ goto out;
+}
+
+static int sgv_alloc_arrays(struct sgv_pool_obj *obj,
+ int pages_to_alloc, gfp_t gfp_mask)
+{
+ int sz, tsz = 0;
+ int res = 0;
+
+ sz = pages_to_alloc * sizeof(obj->sg_entries[0]);
+
+ obj->sg_entries = kmalloc(sz, gfp_mask);
+ if (unlikely(obj->sg_entries == NULL)) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of sgv_pool_obj "
+ "SG vector failed (size %d)", sz);
+ res = -ENOMEM;
+ goto out;
+ }
+
+ sg_init_table(obj->sg_entries, pages_to_alloc);
+
+ if (sgv_pool_clustered(obj->owner_pool)) {
+ if (pages_to_alloc <= sgv_max_trans_pages) {
+ obj->trans_tbl =
+ (struct trans_tbl_ent *)obj->sg_entries_data;
+ /*
+ * No need to clear trans_tbl; if needed, it will be
+ * fully rewritten in sgv_alloc_sg_entries()
+ */
+ } else {
+ tsz = pages_to_alloc * sizeof(obj->trans_tbl[0]);
+ obj->trans_tbl = kzalloc(tsz, gfp_mask);
+ if (unlikely(obj->trans_tbl == NULL)) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+ "trans_tbl failed (size %d)", tsz);
+ res = -ENOMEM;
+ goto out_free;
+ }
+ }
+ }
+
+ TRACE_MEM("pages_to_alloc %d, sz %d, tsz %d, obj %p, sg_entries %p, "
+ "trans_tbl %p", pages_to_alloc, sz, tsz, obj, obj->sg_entries,
+ obj->trans_tbl);
+
+out:
+ return res;
+
+out_free:
+ kfree(obj->sg_entries);
+ obj->sg_entries = NULL;
+ goto out;
+}
+
+static struct sgv_pool_obj *sgv_get_obj(struct sgv_pool *pool, int cache_num,
+ int pages, gfp_t gfp_mask, bool get_new)
+{
+ struct sgv_pool_obj *obj;
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+
+ if (unlikely(get_new)) {
+ /* Used only for buffer preallocation */
+ goto get_new;
+ }
+
+ if (likely(!list_empty(&pool->recycling_lists[cache_num]))) {
+ obj = list_entry(pool->recycling_lists[cache_num].next,
+ struct sgv_pool_obj, recycling_list_entry);
+
+ list_del(&obj->sorted_recycling_list_entry);
+ list_del(&obj->recycling_list_entry);
+
+ pool->inactive_cached_pages -= pages;
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ goto out;
+ }
+
+get_new:
+ if (pool->cached_entries == 0) {
+ TRACE_MEM("Adding pool %p to the active list", pool);
+ spin_lock_bh(&sgv_pools_lock);
+ list_add_tail(&pool->sgv_active_pools_list_entry,
+ &sgv_active_pools_list);
+ spin_unlock_bh(&sgv_pools_lock);
+ }
+
+ pool->cached_entries++;
+ pool->cached_pages += pages;
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+
+ TRACE_MEM("New cached entries %d (pool %p)", pool->cached_entries,
+ pool);
+
+ obj = kmem_cache_alloc(pool->caches[cache_num],
+ gfp_mask & ~(__GFP_HIGHMEM|GFP_DMA));
+ if (likely(obj)) {
+ memset(obj, 0, sizeof(*obj));
+ obj->cache_num = cache_num;
+ obj->pages = pages;
+ obj->owner_pool = pool;
+ } else {
+ spin_lock_bh(&pool->sgv_pool_lock);
+ sgv_dec_cached_entries(pool, pages);
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ }
+
+out:
+ return obj;
+}
+
+static void sgv_put_obj(struct sgv_pool_obj *obj)
+{
+ struct sgv_pool *pool = obj->owner_pool;
+ struct list_head *entry;
+ struct list_head *list = &pool->recycling_lists[obj->cache_num];
+ int pages = obj->pages;
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+
+ TRACE_MEM("sgv %p, cache num %d, pages %d, sg_count %d", obj,
+ obj->cache_num, pages, obj->sg_count);
+
+ if (sgv_pool_clustered(pool)) {
+ /* Make objects with fewer entries more preferred */
+ __list_for_each(entry, list) {
+ struct sgv_pool_obj *tmp = list_entry(entry,
+ struct sgv_pool_obj, recycling_list_entry);
+
+ TRACE_MEM("tmp %p, cache num %d, pages %d, sg_count %d",
+ tmp, tmp->cache_num, tmp->pages, tmp->sg_count);
+
+ if (obj->sg_count <= tmp->sg_count)
+ break;
+ }
+ entry = entry->prev;
+ } else
+ entry = list;
+
+ TRACE_MEM("Adding in %p (list %p)", entry, list);
+ list_add(&obj->recycling_list_entry, entry);
+
+ list_add_tail(&obj->sorted_recycling_list_entry,
+ &pool->sorted_recycling_list);
+
+ obj->time_stamp = jiffies;
+
+ pool->inactive_cached_pages += pages;
+
+ if (!pool->purge_work_scheduled) {
+ TRACE_MEM("Scheduling purge work for pool %p", pool);
+ pool->purge_work_scheduled = true;
+ schedule_delayed_work(&pool->sgv_purge_work,
+ pool->purge_interval);
+ }
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ return;
+}
+
+/* No locks */
+static int sgv_hiwmk_check(int pages_to_alloc)
+{
+ int res = 0;
+ int pages = pages_to_alloc;
+
+ pages += atomic_read(&sgv_pages_total);
+
+ if (unlikely(pages > sgv_hi_wmk)) {
+ pages -= sgv_hi_wmk;
+ atomic_inc(&sgv_releases_on_hiwmk);
+
+ pages = __sgv_shrink(pages, 0);
+ if (pages > 0) {
+ TRACE(TRACE_OUT_OF_MEM, "Requested amount of "
+ "memory (%d pages) for being executed "
+ "commands together with the already "
+ "allocated memory exceeds the allowed "
+ "maximum %d. Should you increase "
+ "scst_max_cmd_mem?", pages_to_alloc,
+ sgv_hi_wmk);
+ atomic_inc(&sgv_releases_on_hiwmk_failed);
+ res = -ENOMEM;
+ goto out_unlock;
+ }
+ }
+
+ atomic_add(pages_to_alloc, &sgv_pages_total);
+
+out_unlock:
+ TRACE_MEM("pages_to_alloc %d, new total %d", pages_to_alloc,
+ atomic_read(&sgv_pages_total));
+
+ return res;
+}
+
+/* No locks */
+static void sgv_hiwmk_uncheck(int pages)
+{
+ atomic_sub(pages, &sgv_pages_total);
+ TRACE_MEM("pages %d, new total %d", pages,
+ atomic_read(&sgv_pages_total));
+ return;
+}
+
+/* No locks */
+static bool sgv_check_allowed_mem(struct scst_mem_lim *mem_lim, int pages)
+{
+ int alloced;
+ bool res = true;
+
+ alloced = atomic_add_return(pages, &mem_lim->alloced_pages);
+ if (unlikely(alloced > mem_lim->max_allowed_pages)) {
+ TRACE(TRACE_OUT_OF_MEM, "Requested amount of memory "
+ "(%d pages) for being executed commands on a device "
+ "together with the already allocated memory exceeds "
+ "the allowed maximum %d. Should you increase "
+ "scst_max_dev_cmd_mem?", pages,
+ mem_lim->max_allowed_pages);
+ atomic_sub(pages, &mem_lim->alloced_pages);
+ res = false;
+ }
+
+ TRACE_MEM("mem_lim %p, pages %d, res %d, new alloced %d", mem_lim,
+ pages, res, atomic_read(&mem_lim->alloced_pages));
+
+ return res;
+}
+
+/* No locks */
+static void sgv_uncheck_allowed_mem(struct scst_mem_lim *mem_lim, int pages)
+{
+ atomic_sub(pages, &mem_lim->alloced_pages);
+
+ TRACE_MEM("mem_lim %p, pages %d, new alloced %d", mem_lim,
+ pages, atomic_read(&mem_lim->alloced_pages));
+ return;
+}
+
+/**
+ * sgv_pool_alloc - allocate an SG vector from the SGV pool
+ * @pool: the cache to alloc from
+ * @size: size of the resulting SG vector in bytes
+ * @gfp_mask: the allocation mask
+ * @flags: the allocation flags
+ * @count: the resulting number of SG entries in the returned SG vector
+ * @sgv: the resulting SGV object
+ * @mem_lim: memory limits
+ * @priv: pointer to private data for this allocation
+ *
+ * Description:
+ * Allocates an SG vector from the SGV pool and returns a pointer to it,
+ * or NULL in case of any error. See the SGV pool documentation for more details.
+ */
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+ gfp_t gfp_mask, int flags, int *count,
+ struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv)
+{
+ struct sgv_pool_obj *obj;
+ int cache_num, pages, cnt;
+ struct scatterlist *res = NULL;
+ int pages_to_alloc;
+ int no_cached = flags & SGV_POOL_ALLOC_NO_CACHED;
+ bool allowed_mem_checked = false, hiwmk_checked = false;
+
+ if (unlikely(size == 0))
+ goto out;
+
+ EXTRACHECKS_BUG_ON((gfp_mask & __GFP_NOFAIL) == __GFP_NOFAIL);
+
+ pages = ((size + PAGE_SIZE - 1) >> PAGE_SHIFT);
+ if (pool->single_alloc_pages == 0) {
+ int pages_order = get_order(size);
+ cache_num = pages_order;
+ pages_to_alloc = (1 << pages_order);
+ } else {
+ cache_num = 0;
+ pages_to_alloc = max(pool->single_alloc_pages, pages);
+ }
+
+ TRACE_MEM("size=%d, pages=%d, pages_to_alloc=%d, cache num=%d, "
+ "flags=%x, no_cached=%d, *sgv=%p", size, pages,
+ pages_to_alloc, cache_num, flags, no_cached, *sgv);
+
+ if (*sgv != NULL) {
+ obj = *sgv;
+
+ TRACE_MEM("Supplied obj %p, cache num %d", obj, obj->cache_num);
+
+ EXTRACHECKS_BUG_ON(obj->sg_count != 0);
+
+ if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+ goto out_fail_free_sg_entries;
+ allowed_mem_checked = true;
+
+ if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+ goto out_fail_free_sg_entries;
+ hiwmk_checked = true;
+ } else if ((pages_to_alloc <= pool->max_cached_pages) && !no_cached) {
+ if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+ goto out_fail;
+ allowed_mem_checked = true;
+
+ obj = sgv_get_obj(pool, cache_num, pages_to_alloc, gfp_mask,
+ flags & SGV_POOL_ALLOC_GET_NEW);
+ if (unlikely(obj == NULL)) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+ "sgv_pool_obj failed (size %d)", size);
+ goto out_fail;
+ }
+
+ if (obj->sg_count != 0) {
+ TRACE_MEM("Cached obj %p", obj);
+ atomic_inc(&pool->cache_acc[cache_num].hit_alloc);
+ goto success;
+ }
+
+ if (flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS) {
+ if (!(flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL))
+ goto out_fail_free;
+ }
+
+ TRACE_MEM("Brand new obj %p", obj);
+
+ if (pages_to_alloc <= sgv_max_local_pages) {
+ obj->sg_entries = obj->sg_entries_data;
+ sg_init_table(obj->sg_entries, pages_to_alloc);
+ TRACE_MEM("sg_entries %p", obj->sg_entries);
+ if (sgv_pool_clustered(pool)) {
+ obj->trans_tbl = (struct trans_tbl_ent *)
+ (obj->sg_entries + pages_to_alloc);
+ TRACE_MEM("trans_tbl %p", obj->trans_tbl);
+ /*
+ * No need to clear trans_tbl; if needed, it
+ * will be fully rewritten in
+ * sgv_alloc_sg_entries().
+ */
+ }
+ } else {
+ if (unlikely(sgv_alloc_arrays(obj, pages_to_alloc,
+ gfp_mask) != 0))
+ goto out_fail_free;
+ }
+
+ if ((flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS) &&
+ (flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL))
+ goto out_return;
+
+ obj->allocator_priv = priv;
+
+ if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+ goto out_fail_free_sg_entries;
+ hiwmk_checked = true;
+ } else {
+ int sz;
+
+ pages_to_alloc = pages;
+
+ if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+ goto out_fail;
+ allowed_mem_checked = true;
+
+ if (flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS)
+ goto out_return2;
+
+ sz = sizeof(*obj) + pages * sizeof(obj->sg_entries[0]);
+
+ obj = kmalloc(sz, gfp_mask);
+ if (unlikely(obj == NULL)) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+ "sgv_pool_obj failed (size %d)", size);
+ goto out_fail;
+ }
+ memset(obj, 0, sizeof(*obj));
+
+ obj->owner_pool = pool;
+ cache_num = -1;
+ obj->cache_num = cache_num;
+ obj->pages = pages_to_alloc;
+ obj->allocator_priv = priv;
+
+ obj->sg_entries = obj->sg_entries_data;
+ sg_init_table(obj->sg_entries, pages);
+
+ if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+ goto out_fail_free_sg_entries;
+ hiwmk_checked = true;
+
+ TRACE_MEM("Big or no_cached obj %p (size %d)", obj, sz);
+ }
+
+ obj->sg_count = sgv_alloc_sg_entries(obj->sg_entries,
+ pages_to_alloc, gfp_mask, pool->clustering_type,
+ obj->trans_tbl, &pool->alloc_fns, priv);
+ if (unlikely(obj->sg_count <= 0)) {
+ obj->sg_count = 0;
+ if ((flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL) &&
+ (cache_num >= 0))
+ goto out_return1;
+ else
+ goto out_fail_free_sg_entries;
+ }
+
+ if (cache_num >= 0) {
+ atomic_add(pages_to_alloc - obj->sg_count,
+ &pool->cache_acc[cache_num].merged);
+ } else {
+ if (no_cached) {
+ atomic_add(pages_to_alloc,
+ &pool->other_pages);
+ atomic_add(pages_to_alloc - obj->sg_count,
+ &pool->other_merged);
+ } else {
+ atomic_add(pages_to_alloc,
+ &pool->big_pages);
+ atomic_add(pages_to_alloc - obj->sg_count,
+ &pool->big_merged);
+ }
+ }
+
+success:
+ if (cache_num >= 0) {
+ int sg;
+ atomic_inc(&pool->cache_acc[cache_num].total_alloc);
+ if (sgv_pool_clustered(pool))
+ cnt = obj->trans_tbl[pages-1].sg_num;
+ else
+ cnt = pages;
+ sg = cnt-1;
+ obj->orig_sg = sg;
+ obj->orig_length = obj->sg_entries[sg].length;
+ if (sgv_pool_clustered(pool)) {
+ obj->sg_entries[sg].length =
+ (pages - obj->trans_tbl[sg].pg_count) << PAGE_SHIFT;
+ }
+ } else {
+ cnt = obj->sg_count;
+ if (no_cached)
+ atomic_inc(&pool->other_alloc);
+ else
+ atomic_inc(&pool->big_alloc);
+ }
+
+ *count = cnt;
+ res = obj->sg_entries;
+ *sgv = obj;
+
+ if (size & ~PAGE_MASK)
+ obj->sg_entries[cnt-1].length -=
+ PAGE_SIZE - (size & ~PAGE_MASK);
+
+ TRACE_MEM("obj=%p, sg_entries %p (size=%d, pages=%d, sg_count=%d, "
+ "count=%d, last_len=%d)", obj, obj->sg_entries, size, pages,
+ obj->sg_count, *count, obj->sg_entries[obj->orig_sg].length);
+
+out:
+ return res;
+
+out_return:
+ obj->allocator_priv = priv;
+ obj->owner_pool = pool;
+
+out_return1:
+ *sgv = obj;
+ TRACE_MEM("Returning failed obj %p (count %d)", obj, *count);
+
+out_return2:
+ *count = pages_to_alloc;
+ res = NULL;
+ goto out_uncheck;
+
+out_fail_free_sg_entries:
+ if (obj->sg_entries != obj->sg_entries_data) {
+ if (obj->trans_tbl !=
+ (struct trans_tbl_ent *)obj->sg_entries_data) {
+ /* kfree() handles NULL parameter */
+ kfree(obj->trans_tbl);
+ obj->trans_tbl = NULL;
+ }
+ kfree(obj->sg_entries);
+ obj->sg_entries = NULL;
+ }
+
+out_fail_free:
+ if (cache_num >= 0) {
+ spin_lock_bh(&pool->sgv_pool_lock);
+ sgv_dec_cached_entries(pool, pages_to_alloc);
+ spin_unlock_bh(&pool->sgv_pool_lock);
+
+ kmem_cache_free(pool->caches[obj->cache_num], obj);
+ } else
+ kfree(obj);
+
+out_fail:
+ res = NULL;
+ *count = 0;
+ *sgv = NULL;
+ TRACE_MEM("%s", "Allocation failed");
+
+out_uncheck:
+ if (hiwmk_checked)
+ sgv_hiwmk_uncheck(pages_to_alloc);
+ if (allowed_mem_checked)
+ sgv_uncheck_allowed_mem(mem_lim, pages_to_alloc);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_alloc);
+
+/**
+ * sgv_get_priv - return the private allocation data
+ *
+ * Returns the allocation private data for this SGV cache object. The
+ * private data is supposed to be set by sgv_pool_alloc().
+ */
+void *sgv_get_priv(struct sgv_pool_obj *obj)
+{
+ return obj->allocator_priv;
+}
+EXPORT_SYMBOL_GPL(sgv_get_priv);
+
+/**
+ * sgv_pool_free - free previously allocated SG vector
+ * @sgv: the SGV object to free
+ * @mem_lim: memory limits
+ *
+ * Description:
+ * Frees a previously allocated SG vector and updates the memory limits
+ */
+void sgv_pool_free(struct sgv_pool_obj *obj, struct scst_mem_lim *mem_lim)
+{
+ int pages = (obj->sg_count != 0) ? obj->pages : 0;
+
+ TRACE_MEM("Freeing obj %p, cache num %d, pages %d, sg_entries %p, "
+ "sg_count %d, allocator_priv %p", obj, obj->cache_num, pages,
+ obj->sg_entries, obj->sg_count, obj->allocator_priv);
+ if (obj->cache_num >= 0) {
+ obj->sg_entries[obj->orig_sg].length = obj->orig_length;
+ sgv_put_obj(obj);
+ } else {
+ obj->owner_pool->alloc_fns.free_pages_fn(obj->sg_entries,
+ obj->sg_count, obj->allocator_priv);
+ kfree(obj);
+ sgv_hiwmk_uncheck(pages);
+ }
+
+ sgv_uncheck_allowed_mem(mem_lim, pages);
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_free);
+
+/**
+ * scst_alloc() - allocates an SG vector
+ *
+ * Allocates and returns a pointer to an SG vector with data size "size".
+ * The number of entries in the vector is returned in *count.
+ * Returns NULL on failure.
+ */
+struct scatterlist *scst_alloc(int size, gfp_t gfp_mask, int *count)
+{
+ struct scatterlist *res;
+ int pages = (size >> PAGE_SHIFT) + ((size & ~PAGE_MASK) != 0);
+ struct sgv_pool_alloc_fns sys_alloc_fns = {
+ sgv_alloc_sys_pages, sgv_free_sys_sg_entries };
+ int no_fail = ((gfp_mask & __GFP_NOFAIL) == __GFP_NOFAIL);
+
+ atomic_inc(&sgv_other_total_alloc);
+
+ if (unlikely(sgv_hiwmk_check(pages) != 0)) {
+ if (!no_fail) {
+ res = NULL;
+ goto out;
+ } else {
+ /*
+ * Update sgv_pages_total anyway, since this allocation can't fail.
+ * If it wasn't updated, the counter would go below 0 when the
+ * buffer is freed.
+ */
+ sgv_hiwmk_uncheck(-pages);
+ }
+ }
+
+ res = kmalloc(pages*sizeof(*res), gfp_mask);
+ if (res == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "Unable to allocate sg for %d pages",
+ pages);
+ goto out_uncheck;
+ }
+
+ sg_init_table(res, pages);
+
+ /*
+ * If we allowed clustering here, scst_free() would have trouble
+ * figuring out how many pages are in the SG vector.
+ * So, never use clustering here.
+ */
+ *count = sgv_alloc_sg_entries(res, pages, gfp_mask, sgv_no_clustering,
+ NULL, &sys_alloc_fns, NULL);
+ if (*count <= 0)
+ goto out_free;
+
+out:
+ TRACE_MEM("Alloced sg %p (count %d) \"no fail\" %d", res, *count, no_fail);
+ return res;
+
+out_free:
+ kfree(res);
+ res = NULL;
+
+out_uncheck:
+ if (!no_fail)
+ sgv_hiwmk_uncheck(pages);
+ goto out;
+}
+EXPORT_SYMBOL_GPL(scst_alloc);
+
+/**
+ * scst_free() - frees SG vector
+ *
+ * Frees SG vector returned by scst_alloc().
+ */
+void scst_free(struct scatterlist *sg, int count)
+{
+ TRACE_MEM("Freeing sg=%p", sg);
+
+ sgv_hiwmk_uncheck(count);
+
+ sgv_free_sys_sg_entries(sg, count, NULL);
+ kfree(sg);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_free);
+
+/* Must be called under sgv_pools_mutex */
+static void sgv_pool_init_cache(struct sgv_pool *pool, int cache_num)
+{
+ int size;
+ int pages;
+ struct sgv_pool_obj *obj;
+
+ atomic_set(&pool->cache_acc[cache_num].total_alloc, 0);
+ atomic_set(&pool->cache_acc[cache_num].hit_alloc, 0);
+ atomic_set(&pool->cache_acc[cache_num].merged, 0);
+
+ if (pool->single_alloc_pages == 0)
+ pages = 1 << cache_num;
+ else
+ pages = pool->single_alloc_pages;
+
+ if (pages <= sgv_max_local_pages) {
+ size = sizeof(*obj) + pages *
+ (sizeof(obj->sg_entries[0]) +
+ ((pool->clustering_type != sgv_no_clustering) ?
+ sizeof(obj->trans_tbl[0]) : 0));
+ } else if (pages <= sgv_max_trans_pages) {
+ /*
+ * sg_entries is allocated outside the object,
+ * but trans_tbl is still embedded.
+ */
+ size = sizeof(*obj) + pages *
+ (((pool->clustering_type != sgv_no_clustering) ?
+ sizeof(obj->trans_tbl[0]) : 0));
+ } else {
+ size = sizeof(*obj);
+ /* both sgv and trans_tbl are kmalloc()'ed */
+ }
+
+ TRACE_MEM("pages=%d, size=%d", pages, size);
+
+ scnprintf(pool->cache_names[cache_num],
+ sizeof(pool->cache_names[cache_num]),
+ "%s-%uK", pool->name, (pages << PAGE_SHIFT) >> 10);
+ pool->caches[cache_num] = kmem_cache_create(
+ pool->cache_names[cache_num], size, 0, SCST_SLAB_FLAGS, NULL
+ );
+ return;
+}
+
+/* Must be called under sgv_pools_mutex */
+static int sgv_pool_init(struct sgv_pool *pool, const char *name,
+ enum sgv_clustering_types clustering_type, int single_alloc_pages,
+ int purge_interval)
+{
+ int res = -ENOMEM;
+ int i;
+
+ if (single_alloc_pages < 0) {
+ PRINT_ERROR("Wrong single_alloc_pages value %d",
+ single_alloc_pages);
+ res = -EINVAL;
+ goto out;
+ }
+
+ memset(pool, 0, sizeof(*pool));
+
+ atomic_set(&pool->big_alloc, 0);
+ atomic_set(&pool->big_pages, 0);
+ atomic_set(&pool->big_merged, 0);
+ atomic_set(&pool->other_alloc, 0);
+ atomic_set(&pool->other_pages, 0);
+ atomic_set(&pool->other_merged, 0);
+
+ pool->clustering_type = clustering_type;
+ pool->single_alloc_pages = single_alloc_pages;
+ if (purge_interval != 0) {
+ pool->purge_interval = purge_interval;
+ if (purge_interval < 0) {
+ /* Let's pretend that it's always scheduled */
+ pool->purge_work_scheduled = 1;
+ }
+ } else
+ pool->purge_interval = SGV_DEFAULT_PURGE_INTERVAL;
+ if (single_alloc_pages == 0) {
+ pool->max_caches = SGV_POOL_ELEMENTS;
+ pool->max_cached_pages = 1 << (SGV_POOL_ELEMENTS - 1);
+ } else {
+ pool->max_caches = 1;
+ pool->max_cached_pages = single_alloc_pages;
+ }
+ pool->alloc_fns.alloc_pages_fn = sgv_alloc_sys_pages;
+ pool->alloc_fns.free_pages_fn = sgv_free_sys_sg_entries;
+
+ TRACE_MEM("name %s, sizeof(*obj)=%zd, clustering_type=%d, "
+ "single_alloc_pages=%d, max_caches=%d, max_cached_pages=%d",
+ name, sizeof(struct sgv_pool_obj), clustering_type,
+ single_alloc_pages, pool->max_caches, pool->max_cached_pages);
+
+ strlcpy(pool->name, name, sizeof(pool->name)-1);
+
+ pool->owner_mm = current->mm;
+
+ for (i = 0; i < pool->max_caches; i++) {
+ sgv_pool_init_cache(pool, i);
+ if (pool->caches[i] == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "Allocation of sgv_pool "
+ "cache %s(%d) failed", name, i);
+ goto out_free;
+ }
+ }
+
+ atomic_set(&pool->sgv_pool_ref, 1);
+ spin_lock_init(&pool->sgv_pool_lock);
+ INIT_LIST_HEAD(&pool->sorted_recycling_list);
+ for (i = 0; i < pool->max_caches; i++)
+ INIT_LIST_HEAD(&pool->recycling_lists[i]);
+
+ INIT_DELAYED_WORK(&pool->sgv_purge_work,
+ (void (*)(struct work_struct *))sgv_purge_work_fn);
+
+ spin_lock_bh(&sgv_pools_lock);
+ list_add_tail(&pool->sgv_pools_list_entry, &sgv_pools_list);
+ spin_unlock_bh(&sgv_pools_lock);
+
+ res = 0;
+
+out:
+ return res;
+
+out_free:
+ for (i = 0; i < pool->max_caches; i++) {
+ if (pool->caches[i]) {
+ kmem_cache_destroy(pool->caches[i]);
+ pool->caches[i] = NULL;
+ } else
+ break;
+ }
+ goto out;
+}
+
+static void sgv_evaluate_local_max_pages(void)
+{
+ int space4sgv_ttbl = PAGE_SIZE - sizeof(struct sgv_pool_obj);
+
+ sgv_max_local_pages = space4sgv_ttbl /
+ (sizeof(struct trans_tbl_ent) + sizeof(struct scatterlist));
+
+ sgv_max_trans_pages = space4sgv_ttbl / sizeof(struct trans_tbl_ent);
+
+ TRACE_MEM("sgv_max_local_pages %d, sgv_max_trans_pages %d",
+ sgv_max_local_pages, sgv_max_trans_pages);
+ return;
+}
+
+/**
+ * sgv_pool_flush - flush the SGV pool
+ *
+ * Flushes, i.e. frees, all the cached entries in the SGV pool.
+ */
+void sgv_pool_flush(struct sgv_pool *pool)
+{
+ int i;
+
+ for (i = 0; i < pool->max_caches; i++) {
+ struct sgv_pool_obj *obj;
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+
+ while (!list_empty(&pool->recycling_lists[i])) {
+ obj = list_entry(pool->recycling_lists[i].next,
+ struct sgv_pool_obj, recycling_list_entry);
+
+ __sgv_purge_from_cache(obj);
+
+ spin_unlock_bh(&pool->sgv_pool_lock);
+
+ EXTRACHECKS_BUG_ON(obj->owner_pool != pool);
+ sgv_dtor_and_free(obj);
+
+ spin_lock_bh(&pool->sgv_pool_lock);
+ }
+ spin_unlock_bh(&pool->sgv_pool_lock);
+ }
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_flush);
+
+static void sgv_pool_deinit_put(struct sgv_pool *pool)
+{
+
+ cancel_delayed_work_sync(&pool->sgv_purge_work);
+
+ sgv_pool_flush(pool);
+
+ mutex_lock(&sgv_pools_mutex);
+ spin_lock_bh(&sgv_pools_lock);
+ list_del(&pool->sgv_pools_list_entry);
+ spin_unlock_bh(&sgv_pools_lock);
+ mutex_unlock(&sgv_pools_mutex);
+
+ scst_sgv_sysfs_put(pool);
+
+ /* pool can be dead here */
+ return;
+}
+
+/**
+ * sgv_pool_set_allocator - set custom pages allocator
+ * @pool: the cache
+ * @alloc_pages_fn: pages allocation function
+ * @free_pages_fn: pages freeing function
+ *
+ * Description:
+ * Allows setting a custom page allocator for the SGV pool.
+ * See the SGV pool documentation for more details.
+ */
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+ struct page *(*alloc_pages_fn)(struct scatterlist *, gfp_t, void *),
+ void (*free_pages_fn)(struct scatterlist *, int, void *))
+{
+ pool->alloc_fns.alloc_pages_fn = alloc_pages_fn;
+ pool->alloc_fns.free_pages_fn = free_pages_fn;
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_set_allocator);
+
+/**
+ * sgv_pool_create - creates and initializes an SGV pool
+ * @name: the name of the SGV pool
+ * @clustering_type: sets the type of pages clustering.
+ * @single_alloc_pages: if 0, then the SGV pool will work in the set of
+ * power-of-2 size buffers mode. If >0, then the SGV pool will
+ * work in the fixed size buffers mode. In this case
+ * single_alloc_pages sets the size of each buffer in pages.
+ * @shared: sets whether the SGV pool can be shared between devices or not.
+ * Cache sharing is allowed only between devices created inside
+ * the same address space. If an SGV pool is shared, each
+ * subsequent call of sgv_pool_create() with the same cache name
+ * will not create a new cache, but instead return a reference
+ * to it.
+ * @purge_interval: sets the cache purging interval, i.e. an SG buffer
+ * will be freed if it's unused for a time t, where
+ * purge_interval <= t < 2*purge_interval. If purge_interval
+ * is 0, then the default interval will be used (60 seconds).
+ * If purge_interval < 0, then automatic purging will be
+ * disabled.
+ *
+ * Description:
+ * Returns the resulting SGV pool or NULL in case of any error.
+ */
+struct sgv_pool *sgv_pool_create(const char *name,
+ enum sgv_clustering_types clustering_type,
+ int single_alloc_pages, bool shared, int purge_interval)
+{
+ struct sgv_pool *pool;
+ int rc;
+
+ mutex_lock(&sgv_pools_mutex);
+ list_for_each_entry(pool, &sgv_pools_list, sgv_pools_list_entry) {
+ if (strcmp(pool->name, name) == 0) {
+ if (shared) {
+ if (pool->owner_mm != current->mm) {
+ PRINT_ERROR("Attempt of a shared use "
+ "of SGV pool %s with "
+ "different MM", name);
+ goto out_err_unlock;
+ }
+ sgv_pool_get(pool);
+ goto out_unlock;
+ } else {
+ PRINT_ERROR("SGV pool %s already exists", name);
+ goto out_err_unlock;
+ }
+ }
+ }
+
+ pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+ if (pool == NULL) {
+ TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of sgv_pool failed");
+ goto out_unlock;
+ }
+
+ rc = sgv_pool_init(pool, name, clustering_type, single_alloc_pages,
+ purge_interval);
+ if (rc != 0)
+ goto out_free_unlock;
+
+ rc = scst_create_sgv_sysfs(pool);
+ if (rc != 0)
+ goto out_err_unlock_put;
+
+out_unlock:
+ mutex_unlock(&sgv_pools_mutex);
+ return pool;
+
+out_free_unlock:
+ kfree(pool);
+
+out_err_unlock:
+ pool = NULL;
+ goto out_unlock;
+
+out_err_unlock_put:
+ mutex_unlock(&sgv_pools_mutex);
+ sgv_pool_deinit_put(pool);
+ goto out_err_unlock;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_create);
+
+void sgv_pool_destroy(struct sgv_pool *pool)
+{
+ int i;
+
+ for (i = 0; i < pool->max_caches; i++) {
+ if (pool->caches[i])
+ kmem_cache_destroy(pool->caches[i]);
+ pool->caches[i] = NULL;
+ }
+
+ kfree(pool);
+ return;
+}
+
+/**
+ * sgv_pool_get - increase ref counter for the corresponding SGV pool
+ *
+ * Increases ref counter for the corresponding SGV pool
+ */
+void sgv_pool_get(struct sgv_pool *pool)
+{
+ atomic_inc(&pool->sgv_pool_ref);
+ TRACE_MEM("Incrementing sgv pool %p ref (new value %d)",
+ pool, atomic_read(&pool->sgv_pool_ref));
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_get);
+
+/**
+ * sgv_pool_put - decrease ref counter for the corresponding SGV pool
+ *
+ * Decreases ref counter for the corresponding SGV pool. If the ref
+ * counter reaches 0, the cache will be destroyed.
+ */
+void sgv_pool_put(struct sgv_pool *pool)
+{
+ TRACE_MEM("Decrementing sgv pool %p ref (new value %d)",
+ pool, atomic_read(&pool->sgv_pool_ref)-1);
+ if (atomic_dec_and_test(&pool->sgv_pool_ref))
+ sgv_pool_deinit_put(pool);
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_put);
+
+/**
+ * sgv_pool_del - deletes the corresponding SGV pool
+ * @pool: the cache to delete.
+ *
+ * Description:
+ * If the cache is shared, it will decrease its reference counter.
+ * If the reference counter reaches 0, the cache will be destroyed.
+ */
+void sgv_pool_del(struct sgv_pool *pool)
+{
+
+ sgv_pool_put(pool);
+ return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_del);
+
+/* Both parameters in pages */
+int scst_sgv_pools_init(unsigned long mem_hwmark, unsigned long mem_lwmark)
+{
+ int res = 0;
+
+ sgv_hi_wmk = mem_hwmark;
+ sgv_lo_wmk = mem_lwmark;
+
+ sgv_evaluate_local_max_pages();
+
+ sgv_norm_pool = sgv_pool_create("sgv", sgv_no_clustering, 0, false, 0);
+ if (sgv_norm_pool == NULL)
+ goto out_err;
+
+ sgv_norm_clust_pool = sgv_pool_create("sgv-clust",
+ sgv_full_clustering, 0, false, 0);
+ if (sgv_norm_clust_pool == NULL)
+ goto out_free_norm;
+
+ sgv_dma_pool = sgv_pool_create("sgv-dma", sgv_no_clustering, 0,
+ false, 0);
+ if (sgv_dma_pool == NULL)
+ goto out_free_clust;
+
+ sgv_shrinker.shrink = sgv_shrink;
+ sgv_shrinker.seeks = DEFAULT_SEEKS;
+ register_shrinker(&sgv_shrinker);
+
+out:
+ return res;
+
+out_free_clust:
+ sgv_pool_deinit_put(sgv_norm_clust_pool);
+
+out_free_norm:
+ sgv_pool_deinit_put(sgv_norm_pool);
+
+out_err:
+ res = -ENOMEM;
+ goto out;
+}
+
+void scst_sgv_pools_deinit(void)
+{
+
+ unregister_shrinker(&sgv_shrinker);
+
+ sgv_pool_deinit_put(sgv_dma_pool);
+ sgv_pool_deinit_put(sgv_norm_pool);
+ sgv_pool_deinit_put(sgv_norm_clust_pool);
+
+ flush_scheduled_work();
+ return;
+}
+
+ssize_t sgv_sysfs_stat_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct sgv_pool *pool;
+ int i, total = 0, hit = 0, merged = 0, allocated = 0;
+ int oa, om, res;
+
+ pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+ for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+ int t;
+
+ hit += atomic_read(&pool->cache_acc[i].hit_alloc);
+ total += atomic_read(&pool->cache_acc[i].total_alloc);
+
+ t = atomic_read(&pool->cache_acc[i].total_alloc) -
+ atomic_read(&pool->cache_acc[i].hit_alloc);
+ allocated += t * (1 << i);
+ merged += atomic_read(&pool->cache_acc[i].merged);
+ }
+
+ res = sprintf(buf, "%-30s %-11s %-11s %-11s %-11s", "Name", "Hit", "Total",
+ "% merged", "Cached (P/I/O)");
+
+ res += sprintf(&buf[res], "\n%-30s %-11d %-11d %-11d %d/%d/%d\n",
+ pool->name, hit, total,
+ (allocated != 0) ? merged*100/allocated : 0,
+ pool->cached_pages, pool->inactive_cached_pages,
+ pool->cached_entries);
+
+ for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+ int t = atomic_read(&pool->cache_acc[i].total_alloc) -
+ atomic_read(&pool->cache_acc[i].hit_alloc);
+ allocated = t * (1 << i);
+ merged = atomic_read(&pool->cache_acc[i].merged);
+
+ res += sprintf(&buf[res], " %-28s %-11d %-11d %d\n",
+ pool->cache_names[i],
+ atomic_read(&pool->cache_acc[i].hit_alloc),
+ atomic_read(&pool->cache_acc[i].total_alloc),
+ (allocated != 0) ? merged*100/allocated : 0);
+ }
+
+ allocated = atomic_read(&pool->big_pages);
+ merged = atomic_read(&pool->big_merged);
+ oa = atomic_read(&pool->other_pages);
+ om = atomic_read(&pool->other_merged);
+
+ res += sprintf(&buf[res], " %-40s %d/%-9d %d/%d\n", "big/other",
+ atomic_read(&pool->big_alloc), atomic_read(&pool->other_alloc),
+ (allocated != 0) ? merged*100/allocated : 0,
+ (oa != 0) ? om/oa : 0);
+
+ return res;
+}
+
+ssize_t sgv_sysfs_stat_reset(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ struct sgv_pool *pool;
+ int i;
+
+ pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+ for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+ atomic_set(&pool->cache_acc[i].hit_alloc, 0);
+ atomic_set(&pool->cache_acc[i].total_alloc, 0);
+ atomic_set(&pool->cache_acc[i].merged, 0);
+ }
+
+ atomic_set(&pool->big_pages, 0);
+ atomic_set(&pool->big_merged, 0);
+ atomic_set(&pool->big_alloc, 0);
+ atomic_set(&pool->other_pages, 0);
+ atomic_set(&pool->other_merged, 0);
+ atomic_set(&pool->other_alloc, 0);
+
+ PRINT_INFO("Statistics for SGV pool %s resetted", pool->name);
+ return count;
+}
+
+ssize_t sgv_sysfs_global_stat_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct sgv_pool *pool;
+ int inactive_pages = 0, res;
+
+ spin_lock_bh(&sgv_pools_lock);
+ list_for_each_entry(pool, &sgv_active_pools_list,
+ sgv_active_pools_list_entry) {
+ inactive_pages += pool->inactive_cached_pages;
+ }
+ spin_unlock_bh(&sgv_pools_lock);
+
+ res = sprintf(buf, "%-42s %d/%d\n%-42s %d/%d\n%-42s %d/%d\n"
+ "%-42s %-11d\n",
+ "Inactive/active pages", inactive_pages,
+ atomic_read(&sgv_pages_total) - inactive_pages,
+ "Hi/lo watermarks [pages]", sgv_hi_wmk, sgv_lo_wmk,
+ "Hi watermark releases/failures",
+ atomic_read(&sgv_releases_on_hiwmk),
+ atomic_read(&sgv_releases_on_hiwmk_failed),
+ "Other allocs", atomic_read(&sgv_other_total_alloc));
+ return res;
+}
+
+ssize_t sgv_sysfs_global_stat_reset(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+
+ atomic_set(&sgv_releases_on_hiwmk, 0);
+ atomic_set(&sgv_releases_on_hiwmk_failed, 0);
+ atomic_set(&sgv_other_total_alloc, 0);
+
+ PRINT_INFO("%s", "Global SGV pool statistics resetted");
+ return count;
+}
+
diff -uprN orig/linux-2.6.33/Documentation/scst/sgv_cache.txt linux-2.6.33/Documentation/scst/sgv_cache.txt
--- orig/linux-2.6.33/Documentation/scst/sgv_cache.txt
+++ linux-2.6.33/Documentation/scst/sgv_cache.txt
@@ -0,0 +1,224 @@
+ SCST SGV CACHE.
+
+ PROGRAMMING INTERFACE DESCRIPTION.
+
+ For SCST version 1.0.2
+
+The SCST SGV cache is a memory management subsystem in SCST. One could
+call it a "memory pool", but the Linux kernel already has a mempool
+interface, which serves different purposes. The SGV cache provides the
+SCST core, target drivers and backend dev handlers with facilities to
+allocate, build and cache SG vectors for data buffers. Its main
+advantage is the caching facility: instead of freeing each no longer
+used vector back to the system, it keeps the vector for a while
+(possibly indefinitely) so it can be reused by the next command. This
+allows the SGV cache to:
+
+ - Reduce commands processing latencies and, hence, improve performance;
+
+ - Make commands processing latencies predictable, which is essential
+ for RT applications.
+
+The freed SG vectors are kept by the SGV cache either for some (possibly
+indefinite) time, or, optionally, until the system needs more memory and
+asks to free some via the register_shrinker() interface. In addition,
+the SGV cache allows the user to:
+
+ - Cluster pages together. "Clustering" means merging adjacent pages into
+a single SG entry. This results in fewer SG entries in the resulting SG
+vector, which improves the performance of handling it and allows working
+with bigger buffers on hardware with limited SG capabilities.
+
+ - Set custom page allocator functions. For instance, the scst_user device
+handler uses this facility to eliminate unneeded mapping/unmapping of
+user space pages and to avoid unneeded IOCTL calls for buffer allocations.
+For the fileio_tgt application, which uses a regular malloc() function to
+allocate data buffers, this facility gives ~30% less CPU load and a
+considerable performance increase.
+
+ - Prevent each initiator, or all initiators together, from allocating too
+much memory and DoSing the target. Consider 10 initiators, each with
+access to 10 devices. Each of them can queue up to 64 commands, and each
+command can transfer up to 1MB of data. So, at peak, they can allocate
+up to 10*10*64 = 6400 1MB buffers, i.e. ~6.5GB of memory for data
+buffers. This amount must be limited somehow, and the SGV cache performs
+this function.
+
+From the implementation POV the SGV cache is a simple extension of the
+kmem cache. It can work in 2 modes:
+
+1. With fixed size buffers.
+
+2. With a set of power-of-2 sized buffers. In this mode each SGV cache
+(struct sgv_pool) has SGV_POOL_ELEMENTS (currently 11) kmem caches.
+Each of those kmem caches keeps SGV cache objects (struct sgv_pool_obj)
+corresponding to SG vectors with a size of order X pages. For instance, a
+request to allocate 4 pages will be served from kmem cache[2], since the
+order of the number of requested pages is 2. If a later request to
+allocate 11KB comes, the same SG vector with 4 pages will be reused (see
+below). On average, this mode has less memory overhead than the fixed
+size buffers mode.
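+
+As a rough, hypothetical illustration (this helper is not part of the
+patch, and assumes 4KB pages), the kmem cache serving a request in this
+mode can be thought of as selected by the allocation order of the
+requested size:
+
+static int sgv_cache_index(unsigned int size)
+{
+	/* get_order() rounds up to the next power-of-2 number of pages,
+	 * so both a 4-page (16KB) and an 11KB request map to index 2 */
+	return get_order(size);
+}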
+
+Consider how the SGV cache works in the set of buffers mode. When a
+request to allocate a new SG vector comes, sgv_pool_alloc() via
+sgv_get_obj() checks if there is already a cached vector with that
+order. If yes, that vector will be reused and its length, if
+necessary, will be modified to match the requested size. In the above
+example of a request for an 11KB buffer, the 4-page vector will be reused
+and modified using trans_tbl to contain 3 pages, with the last entry
+modified to hold the remainder of the requested length (the requested
+length minus 2*PAGE_SIZE). If there is no cached object, a new
+sgv_pool_obj will be allocated from the corresponding kmem cache, chosen
+by the order of the number of requested pages. Then that vector will be
+filled with pages and returned.
+
+In the fixed size buffers mode the SGV cache works similarly, except
+that it always allocates a buffer of the predefined fixed size, i.e.
+even for a 4K request the whole buffer of the predefined size, say 1MB,
+will be used.
+
+In both modes, if the size of a request exceeds the maximum buffer size
+allowed for caching, the requested buffer will be allocated, but not
+cached.
+
+Freed cached sgv_pool_obj objects are actually returned to the system
+either by the purge work, which is scheduled once every 60 seconds, or
+by sgv_shrink(), called by the system when it asks for memory.
+
+ Interface.
+
+struct sgv_pool *sgv_pool_create(const char *name,
+ enum sgv_clustering_types clustered, int single_alloc_pages,
+ bool shared, int purge_interval)
+
+This function creates and initializes an SGV cache. It has the following
+arguments:
+
+ - name - the name of the SGV cache
+
+ - clustered - sets the type of pages clustering. The type can be:
+
+ * sgv_no_clustering - no clustering performed.
+
+ * sgv_tail_clustering - a page will only be merged with the latest
+ previously allocated page, so the order of pages in the SG will be
+ preserved
+
+ * sgv_full_clustering - free merging of pages at any place in
+ the SG is allowed. This mode usually provides the best merging
+ rate.
+
+ - single_alloc_pages - if 0, the SGV cache will work in the set of
+ power-of-2 sized buffers mode. If >0, the SGV cache will work in the
+ fixed size buffers mode. In this case single_alloc_pages sets the
+ size of each buffer in pages.
+
+ - shared - sets whether the SGV cache can be shared between devices.
+ Cache sharing is allowed only between devices created inside the same
+ address space. If an SGV cache is shared, each subsequent call of
+ sgv_pool_create() with the same cache name will not create a new cache,
+ but instead return a reference to the existing one.
+
+ - purge_interval - sets the cache purging interval, i.e. an SG buffer
+ will be freed if it has been unused for a time t such that
+ purge_interval <= t < 2*purge_interval. If purge_interval is 0, the
+ default interval (60 seconds) will be used. If purge_interval is <0,
+ automatic purging will be disabled. Shrinking on the system's demand
+ will also be disabled.
+
+Returns the resulting SGV cache or NULL in case of any error.
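+
+For illustration, a minimal, hedged sketch of how a target driver might
+create a private pool (the pool name and the error handling style are
+made up for this example):
+
+	struct sgv_pool *pool;
+
+	/* power-of-2 buffers, full clustering, not shared, default purging */
+	pool = sgv_pool_create("my_tgt_sgv", sgv_full_clustering, 0, false, 0);
+	if (pool == NULL)
+		return -ENOMEM;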
+
+void sgv_pool_del(struct sgv_pool *pool)
+
+This function deletes the corresponding SGV cache. If the cache is
+shared, it will decrease its reference counter. If the reference counter
+reaches 0, the cache will be destroyed.
+
+void sgv_pool_flush(struct sgv_pool *pool)
+
+This function flushes, i.e. frees, all the cached entries in the SGV
+cache.
+
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+ struct page *(*alloc_pages_fn)(struct scatterlist *sg, gfp_t gfp, void *priv),
+ void (*free_pages_fn)(struct scatterlist *sg, int sg_count, void *priv));
+
+This function allows setting a custom page allocator for the SGV cache. For
+instance, scst_user uses this facility to supply the cache with pages
+mapped from user space.
+
+alloc_pages_fn() has the following parameters:
+
+ - sg - SG entry, to which the allocated page should be added.
+
+ - gfp - the allocation GFP flags
+
+ - priv - pointer to the private data supplied to sgv_pool_alloc()
+
+This function should return the allocated page or NULL if no page was
+allocated.
+
+free_pages_fn() has the following parameters:
+
+ - sg - SG vector to free
+
+ - sg_count - number of SG entries in the sg
+
+ - priv - pointer to the private data supplied to the corresponding sgv_pool_alloc()
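+
+A minimal, hedged sketch of such callbacks (the names are hypothetical;
+a real allocator, e.g. the one in scst_user, does considerably more):
+
+static struct page *my_alloc_pages_fn(struct scatterlist *sg, gfp_t gfp,
+	void *priv)
+{
+	/* Assumption: the cache adds the returned page to the SG entry 'sg' */
+	return alloc_page(gfp);
+}
+
+static void my_free_pages_fn(struct scatterlist *sg, int sg_count, void *priv)
+{
+	int i;
+
+	for (i = 0; i < sg_count; i++)
+		__free_page(sg_page(&sg[i]));
+}
+
+	...
+	sgv_pool_set_allocator(pool, my_alloc_pages_fn, my_free_pages_fn);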
+
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+ gfp_t gfp_mask, int flags, int *count,
+ struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv)
+
+This function allocates an SG vector from the SGV cache. It has the
+following parameters:
+
+ - pool - the cache to alloc from
+
+ - size - size of the resulting SG vector in bytes
+
+ - gfp_mask - the allocation mask
+
+ - flags - the allocation flags. The following flags are possible and
+ can be set using OR operation:
+
+ * SGV_POOL_ALLOC_NO_CACHED - the SG vector must not be cached.
+
+ * SGV_POOL_NO_ALLOC_ON_CACHE_MISS - don't do an allocation on a
+ cache miss.
+
+ * SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL - return an empty SGV object,
+ i.e. without the SG vector, if the allocation can't be completed,
+ for instance because the SGV_POOL_NO_ALLOC_ON_CACHE_MISS flag is set.
+
+ - count - the resulting number of SG entries in the SG vector.
+
+ - sgv - the resulting SGV object. It should be used to free the
+ resulting SG vector.
+
+ - mem_lim - memory limits, see below.
+
+ - priv - pointer to private data for this allocation. This pointer will
+ be supplied to alloc_pages_fn() and free_pages_fn() and can be
+ retrieved by sgv_get_priv().
+
+This function returns a pointer to the resulting SG vector, or NULL in
+case of any error.
+
+void sgv_pool_free(struct sgv_pool_obj *sgv, struct scst_mem_lim *mem_lim)
+
+This function frees a previously allocated SG vector, referenced by the
+SGV cache object sgv.
+
+void *sgv_get_priv(struct sgv_pool_obj *sgv)
+
+This function returns the allocation private data for the SGV cache
+object sgv. The private data is set by sgv_pool_alloc().
+
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim)
+
+This function initializes the memory limits structure mem_lim according to
+the current system configuration. This structure should later be used
+to track and limit the memory allocated by one or more SGV caches.
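+
+Putting the pieces above together, a hedged sketch of a typical
+allocate/use/free cycle might look as follows (sizes and names are
+illustrative only):
+
+	struct scst_mem_lim mem_lim;
+	struct sgv_pool_obj *sgv;
+	struct scatterlist *sg;
+	int sg_cnt;
+
+	scst_init_mem_lim(&mem_lim);
+
+	sg = sgv_pool_alloc(pool, 64 * 1024, GFP_KERNEL, 0, &sg_cnt,
+			    &sgv, &mem_lim, NULL);
+	if (sg == NULL)
+		return -ENOMEM;
+
+	/* ... do I/O using the sg_cnt entries of sg ... */
+
+	sgv_pool_free(sgv, &mem_lim);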
+
+ Runtime information and statistics.
+
+Runtime information and statistics are available in /sys/kernel/scst_tgt/sgv.
+
^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 8/12/1/5] SCST sysfs interface
[not found] ` <4BC44D08.4060907@vlnb.net>
` (6 preceding siblings ...)
2010-04-13 13:06 ` [PATCH][RFC 7/12/1/5] SCST SGV cache Vladislav Bolkhovitin
@ 2010-04-13 13:06 ` Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 9/12/1/5] SCST debugging support Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 10/12/1/5] SCST external modules support Vladislav Bolkhovitin
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver, Daniel Henrique Debonzi
This patch contains file scst_sysfs.c.
Signed-off-by: Daniel Henrique Debonzi <debonzi@linux.vnet.ibm.com>
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_sysfs.c | 3884 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 3884 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_sysfs.c linux-2.6.33/drivers/scst/scst_sysfs.c
--- orig/linux-2.6.33/drivers/scst/scst_sysfs.c
+++ linux-2.6.33/drivers/scst/scst_sysfs.c
@@ -0,0 +1,3884 @@
+/*
+ * scst_sysfs.c
+ *
+ * Copyright (C) 2009 Daniel Henrique Debonzi <debonzi@linux.vnet.ibm.com>
+ * Copyright (C) 2009 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2009 - 2010 ID7 Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kobject.h>
+#include <linux/string.h>
+#include <linux/sysfs.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/ctype.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+static DECLARE_COMPLETION(scst_sysfs_root_release_completion);
+
+static struct kobject scst_sysfs_root_kobj;
+static struct kobject *scst_targets_kobj;
+static struct kobject *scst_devices_kobj;
+static struct kobject *scst_sgv_kobj;
+static struct kobject *scst_handlers_kobj;
+
+/* Regular SCST sysfs operations */
+struct sysfs_ops scst_sysfs_ops;
+EXPORT_SYMBOL_GPL(scst_sysfs_ops);
+
+static const char *scst_dev_handler_types[] = {
+ "Direct-access device (e.g., magnetic disk)",
+ "Sequential-access device (e.g., magnetic tape)",
+ "Printer device",
+ "Processor device",
+ "Write-once device (e.g., some optical disks)",
+ "CD-ROM device",
+ "Scanner device (obsolete)",
+ "Optical memory device (e.g., some optical disks)",
+ "Medium changer device (e.g., jukeboxes)",
+ "Communications device (obsolete)",
+ "Defined by ASC IT8 (Graphic arts pre-press devices)",
+ "Defined by ASC IT8 (Graphic arts pre-press devices)",
+ "Storage array controller device (e.g., RAID)",
+ "Enclosure services device",
+ "Simplified direct-access device (e.g., magnetic disk)",
+ "Optical card reader/writer device"
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static DEFINE_MUTEX(scst_log_mutex);
+
+static struct scst_trace_log scst_trace_tbl[] = {
+ { TRACE_OUT_OF_MEM, "out_of_mem" },
+ { TRACE_MINOR, "minor" },
+ { TRACE_SG_OP, "sg" },
+ { TRACE_MEMORY, "mem" },
+ { TRACE_BUFF, "buff" },
+ { TRACE_PID, "pid" },
+ { TRACE_LINE, "line" },
+ { TRACE_FUNCTION, "function" },
+ { TRACE_DEBUG, "debug" },
+ { TRACE_SPECIAL, "special" },
+ { TRACE_SCSI, "scsi" },
+ { TRACE_MGMT, "mgmt" },
+ { TRACE_MGMT_DEBUG, "mgmt_dbg" },
+ { TRACE_FLOW_CONTROL, "flow_control" },
+ { 0, NULL }
+};
+
+static struct scst_trace_log scst_local_trace_tbl[] = {
+ { TRACE_RTRY, "retry" },
+ { TRACE_SCSI_SERIALIZING, "scsi_serializing" },
+ { TRACE_RCV_BOT, "recv_bot" },
+ { TRACE_SND_BOT, "send_bot" },
+ { TRACE_RCV_TOP, "recv_top" },
+ { TRACE_SND_TOP, "send_top" },
+ { 0, NULL }
+};
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+ unsigned long log_level, char *buf, const char *help);
+static int scst_write_trace(const char *buf, size_t length,
+ unsigned long *log_level, unsigned long default_level,
+ const char *name, const struct scst_trace_log *tbl);
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf);
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count);
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf);
+
+static void scst_sysfs_release(struct kobject *kobj)
+{
+ kfree(kobj);
+}
+
+/*
+ * Target Template
+ */
+
+static void scst_tgtt_release(struct kobject *kobj)
+{
+ struct scst_tgt_template *tgtt;
+
+ tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+ complete_all(&tgtt->tgtt_kobj_release_cmpl);
+
+ scst_tgtt_cleanup(tgtt);
+ return;
+}
+
+static struct kobj_type tgtt_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_tgtt_release,
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_tgtt_trace_level_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_tgt_template *tgtt;
+
+ tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+ return scst_trace_level_show(tgtt->trace_tbl,
+ tgtt->trace_flags ? *tgtt->trace_flags : 0, buf,
+ tgtt->trace_tbl_help);
+}
+
+static ssize_t scst_tgtt_trace_level_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_tgt_template *tgtt;
+
+ tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+ if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+
+ res = scst_write_trace(buf, count, tgtt->trace_flags,
+ tgtt->default_trace_flags, tgtt->name, tgtt->trace_tbl);
+
+ mutex_unlock(&scst_log_mutex);
+
+out:
+ return res;
+}
+
+static struct kobj_attribute tgtt_trace_attr =
+ __ATTR(trace_level, S_IRUGO | S_IWUSR,
+ scst_tgtt_trace_level_show, scst_tgtt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_tgtt_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ char *help = "Usage: echo \"add_target target_name [parameters]\" "
+ ">mgmt\n"
+ " echo \"del_target target_name\" >mgmt\n"
+ "%s"
+ "\n"
+ "where parameters are one or more "
+ "param_name=value pairs separated by ';'\n"
+ "%s%s";
+ struct scst_tgt_template *tgtt;
+
+ tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+ if (tgtt->add_target_parameters_help != NULL)
+ return sprintf(buf, help,
+ (tgtt->mgmt_cmd_help) ? tgtt->mgmt_cmd_help : "",
+ "\nThe following parameters available: ",
+ tgtt->add_target_parameters_help);
+ else
+ return sprintf(buf, help,
+ (tgtt->mgmt_cmd_help) ? tgtt->mgmt_cmd_help : "",
+ "", "");
+}
+
+static ssize_t scst_tgtt_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ char *buffer, *p, *pp, *target_name;
+ struct scst_tgt_template *tgtt;
+
+ tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+
+ pp = buffer;
+ if (pp[strlen(pp) - 1] == '\n')
+ pp[strlen(pp) - 1] = '\0';
+
+ p = scst_get_next_lexem(&pp);
+
+ if (strcasecmp("add_target", p) == 0) {
+ target_name = scst_get_next_lexem(&pp);
+ if (*target_name == '\0') {
+ PRINT_ERROR("%s", "Target name required");
+ res = -EINVAL;
+ goto out_free;
+ }
+ res = tgtt->add_target(target_name, pp);
+ } else if (strcasecmp("del_target", p) == 0) {
+ target_name = scst_get_next_lexem(&pp);
+ if (*target_name == '\0') {
+ PRINT_ERROR("%s", "Target name required");
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ p = scst_get_next_lexem(&pp);
+ if (*p != '\0')
+ goto out_syntax_err;
+
+ res = tgtt->del_target(target_name);
+ } else if (tgtt->mgmt_cmd != NULL) {
+ scst_restore_token_str(p, pp);
+ res = tgtt->mgmt_cmd(buffer);
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ if (res == 0)
+ res = count;
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+out_syntax_err:
+ PRINT_ERROR("Syntax error on \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+}
+
+static struct kobj_attribute scst_tgtt_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_tgtt_mgmt_show,
+ scst_tgtt_mgmt_store);
+
+int scst_create_tgtt_sysfs(struct scst_tgt_template *tgtt)
+{
+ int retval = 0;
+ const struct attribute **pattr;
+
+ init_completion(&tgtt->tgtt_kobj_release_cmpl);
+
+ tgtt->tgtt_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&tgtt->tgtt_kobj, &tgtt_ktype,
+ scst_targets_kobj, tgtt->name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgtt %s to sysfs", tgtt->name);
+ goto out;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ if (tgtt->add_target != NULL) {
+ retval = sysfs_create_file(&tgtt->tgtt_kobj,
+ &scst_tgtt_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add mgmt attr for target driver %s",
+ tgtt->name);
+ goto out;
+ }
+ }
+
+ pattr = tgtt->tgtt_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ TRACE_DBG("Creating attr %s for target driver %s",
+ (*pattr)->name, tgtt->name);
+ retval = sysfs_create_file(&tgtt->tgtt_kobj, *pattr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add attr %s for target "
+ "driver %s", (*pattr)->name,
+ tgtt->name);
+ goto out;
+ }
+ pattr++;
+ }
+ }
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ if (tgtt->trace_flags != NULL) {
+ retval = sysfs_create_file(&tgtt->tgtt_kobj,
+ &tgtt_trace_attr.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add trace_flag for target "
+ "driver %s", tgtt->name);
+ goto out;
+ }
+ }
+#endif
+
+out:
+ return retval;
+}
+
+void scst_tgtt_sysfs_put(struct scst_tgt_template *tgtt)
+{
+
+ if (tgtt->tgtt_kobj_initialized) {
+ int rc;
+
+ kobject_del(&tgtt->tgtt_kobj);
+ kobject_put(&tgtt->tgtt_kobj);
+
+ rc = wait_for_completion_timeout(&tgtt->tgtt_kobj_release_cmpl, HZ);
+ if (rc == 0) {
+ PRINT_INFO("Waiting for releasing sysfs entry "
+ "for target template %s...", tgtt->name);
+ wait_for_completion(&tgtt->tgtt_kobj_release_cmpl);
+ PRINT_INFO("Done waiting for releasing sysfs "
+ "entry for target template %s", tgtt->name);
+ }
+ } else
+ scst_tgtt_cleanup(tgtt);
+ return;
+}
+
+/*
+ * Target directory implementation
+ */
+
+static void scst_tgt_release(struct kobject *kobj)
+{
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ /* Let's make lockdep happy */
+ up_write(&tgt->tgt_attr_rwsem);
+
+ scst_free_tgt(tgt);
+ return;
+}
+
+static ssize_t scst_tgt_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ if (down_read_trylock(&tgt->tgt_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->show(kobj, kobj_attr, buf);
+
+ up_read(&tgt->tgt_attr_rwsem);
+
+out:
+ return res;
+}
+
+static ssize_t scst_tgt_attr_store(struct kobject *kobj,
+ struct attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ if (down_read_trylock(&tgt->tgt_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+ up_read(&tgt->tgt_attr_rwsem);
+
+out:
+ return res;
+}
+
+static struct sysfs_ops scst_tgt_sysfs_ops = {
+ .show = scst_tgt_attr_show,
+ .store = scst_tgt_attr_store,
+};
+
+static struct kobj_type tgt_ktype = {
+ .sysfs_ops = &scst_tgt_sysfs_ops,
+ .release = scst_tgt_release,
+};
+
+static void scst_acg_release(struct kobject *kobj)
+{
+ struct scst_acg *acg;
+
+ acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+ scst_destroy_acg(acg);
+ return;
+}
+
+static struct kobj_type acg_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_acg_release,
+};
+
+static struct kobj_attribute scst_luns_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+ scst_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_luns_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+ scst_acg_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_ini_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_acg_ini_mgmt_show,
+ scst_acg_ini_mgmt_store);
+
+static struct kobj_attribute scst_ini_group_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_ini_group_mgmt_show,
+ scst_ini_group_mgmt_store);
+
+static struct kobj_attribute scst_tgt_addr_method =
+ __ATTR(addr_method, S_IRUGO | S_IWUSR, scst_tgt_addr_method_show,
+ scst_tgt_addr_method_store);
+
+static struct kobj_attribute scst_tgt_io_grouping_type =
+ __ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+ scst_tgt_io_grouping_type_show,
+ scst_tgt_io_grouping_type_store);
+
+static struct kobj_attribute scst_rel_tgt_id =
+ __ATTR(rel_tgt_id, S_IRUGO | S_IWUSR, scst_rel_tgt_id_show,
+ scst_rel_tgt_id_store);
+
+static struct kobj_attribute scst_acg_addr_method =
+ __ATTR(addr_method, S_IRUGO | S_IWUSR, scst_acg_addr_method_show,
+ scst_acg_addr_method_store);
+
+static struct kobj_attribute scst_acg_io_grouping_type =
+ __ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+ scst_acg_io_grouping_type_show,
+ scst_acg_io_grouping_type_store);
+
+static ssize_t scst_tgt_enable_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_tgt *tgt;
+ int res;
+ bool enabled;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ enabled = tgt->tgtt->is_target_enabled(tgt);
+
+ res = sprintf(buf, "%d\n", enabled ? 1 : 0);
+ return res;
+}
+
+static ssize_t scst_tgt_enable_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_tgt *tgt;
+ bool enable;
+
+ if (buf == NULL) {
+ PRINT_ERROR("%s: NULL buffer?", __func__);
+ res = -EINVAL;
+ goto out;
+ }
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ switch (buf[0]) {
+ case '0':
+ enable = false;
+ break;
+ case '1':
+ if (tgt->rel_tgt_id == 0) {
+ res = gen_relative_target_port_id(&tgt->rel_tgt_id);
+ if (res)
+ goto out;
+ PRINT_INFO("Using autogenerated rel ID %d for target "
+ "%s", tgt->rel_tgt_id, tgt->tgt_name);
+ } else {
+ if (!scst_is_relative_target_port_id_unique(
+ tgt->rel_tgt_id, tgt)) {
+ PRINT_ERROR("Relative port id %d is not unique",
+ tgt->rel_tgt_id);
+ res = -EBADSLT;
+ goto out;
+ }
+ }
+ enable = true;
+ break;
+ default:
+ PRINT_ERROR("%s: Requested action not understood: %s",
+ __func__, buf);
+ res = -EINVAL;
+ goto out;
+ }
+
+ res = tgt->tgtt->enable_target(tgt, enable);
+ if (res == 0)
+ res = count;
+
+out:
+ return res;
+}
+
+static struct kobj_attribute tgt_enable_attr =
+ __ATTR(enabled, S_IRUGO | S_IWUSR,
+ scst_tgt_enable_show, scst_tgt_enable_store);
+
+int scst_create_tgt_sysfs(struct scst_tgt *tgt)
+{
+ int retval;
+ const struct attribute **pattr;
+
+ init_rwsem(&tgt->tgt_attr_rwsem);
+
+ tgt->tgt_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&tgt->tgt_kobj, &tgt_ktype,
+ &tgt->tgtt->tgtt_kobj, tgt->tgt_name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt %s to sysfs", tgt->tgt_name);
+ goto out;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ if ((tgt->tgtt->enable_target != NULL) &&
+ (tgt->tgtt->is_target_enabled != NULL)) {
+ retval = sysfs_create_file(&tgt->tgt_kobj,
+ &tgt_enable_attr.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add attr %s to sysfs",
+ tgt_enable_attr.attr.name);
+ goto out;
+ }
+ }
+
+ tgt->tgt_sess_kobj = kobject_create_and_add("sessions", &tgt->tgt_kobj);
+ if (tgt->tgt_sess_kobj == NULL) {
+ PRINT_ERROR("Can't create sess kobj for tgt %s", tgt->tgt_name);
+ goto out_nomem;
+ }
+
+ tgt->tgt_luns_kobj = kobject_create_and_add("luns", &tgt->tgt_kobj);
+ if (tgt->tgt_luns_kobj == NULL) {
+ PRINT_ERROR("Can't create luns kobj for tgt %s", tgt->tgt_name);
+ goto out_nomem;
+ }
+
+ retval = sysfs_create_file(tgt->tgt_luns_kobj, &scst_luns_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_luns_mgmt.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ tgt->tgt_ini_grp_kobj = kobject_create_and_add("ini_groups",
+ &tgt->tgt_kobj);
+ if (tgt->tgt_ini_grp_kobj == NULL) {
+ PRINT_ERROR("Can't create ini_grp kobj for tgt %s",
+ tgt->tgt_name);
+ goto out_nomem;
+ }
+
+ retval = sysfs_create_file(tgt->tgt_ini_grp_kobj,
+ &scst_ini_group_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_ini_group_mgmt.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&tgt->tgt_kobj,
+ &scst_rel_tgt_id.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add attribute %s for tgt %s",
+ scst_rel_tgt_id.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&tgt->tgt_kobj,
+ &scst_tgt_addr_method.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add attribute %s for tgt %s",
+ scst_tgt_addr_method.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&tgt->tgt_kobj,
+ &scst_tgt_io_grouping_type.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add attribute %s for tgt %s",
+ scst_tgt_io_grouping_type.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ pattr = tgt->tgtt->tgt_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ TRACE_DBG("Creating attr %s for tgt %s", (*pattr)->name,
+ tgt->tgt_name);
+ retval = sysfs_create_file(&tgt->tgt_kobj, *pattr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ (*pattr)->name, tgt->tgt_name);
+ goto out;
+ }
+ pattr++;
+ }
+ }
+
+out:
+ return retval;
+
+out_nomem:
+ retval = -ENOMEM;
+ goto out;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * tgt_attr_rwsem
+ */
+void scst_tgt_sysfs_prepare_put(struct scst_tgt *tgt)
+{
+ if (tgt->tgt_kobj_initialized) {
+ down_write(&tgt->tgt_attr_rwsem);
+ tgt->tgt_kobj_put_prepared = 1;
+ }
+
+ return;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * tgt_attr_rwsem
+ */
+void scst_tgt_sysfs_put(struct scst_tgt *tgt)
+{
+ if (tgt->tgt_kobj_initialized) {
+ kobject_del(tgt->tgt_sess_kobj);
+ kobject_put(tgt->tgt_sess_kobj);
+
+ kobject_del(tgt->tgt_luns_kobj);
+ kobject_put(tgt->tgt_luns_kobj);
+
+ kobject_del(tgt->tgt_ini_grp_kobj);
+ kobject_put(tgt->tgt_ini_grp_kobj);
+
+ kobject_del(&tgt->tgt_kobj);
+
+ if (!tgt->tgt_kobj_put_prepared)
+ down_write(&tgt->tgt_attr_rwsem);
+ kobject_put(&tgt->tgt_kobj);
+ } else
+ scst_free_tgt(tgt);
+ return;
+}
+
+/*
+ * Devices directory implementation
+ */
+
+ssize_t scst_device_sysfs_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int pos = 0;
+
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ pos = sprintf(buf, "%d - %s\n", dev->type,
+ (unsigned)dev->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+ "unknown" : scst_dev_handler_types[dev->type]);
+
+ return pos;
+}
+
+static struct kobj_attribute device_type_attr =
+ __ATTR(type, S_IRUGO, scst_device_sysfs_type_show, NULL);
+
+static ssize_t scst_device_sysfs_threads_num_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int pos = 0;
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ pos = sprintf(buf, "%d\n%s", dev->threads_num,
+ (dev->threads_num != dev->handler->threads_num) ?
+ SCST_SYSFS_KEY_MARK "\n" : "");
+ return pos;
+}
+
+static ssize_t scst_device_sysfs_threads_data_store(struct scst_device *dev,
+ int threads_num, enum scst_dev_type_threads_pool_type threads_pool_type)
+{
+ int res = 0;
+
+ if (dev->threads_num < 0) {
+ PRINT_ERROR("Threads pool disabled for device %s",
+ dev->virt_name);
+ res = -EPERM;
+ goto out;
+ }
+
+ if ((threads_num == dev->threads_num) &&
+ (threads_pool_type == dev->threads_pool_type))
+ goto out;
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_resume;
+ }
+
+ scst_stop_dev_threads(dev);
+
+ dev->threads_num = threads_num;
+ dev->threads_pool_type = threads_pool_type;
+
+ res = scst_create_dev_threads(dev);
+ if (res != 0)
+ goto out_up;
+
+out_up:
+ mutex_unlock(&scst_mutex);
+
+out_resume:
+ scst_resume_activity();
+
+out:
+ return res;
+}
+
+static ssize_t scst_device_sysfs_threads_num_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_device *dev;
+ long newtn;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ res = strict_strtoul(buf, 0, &newtn);
+ if (res != 0) {
+ PRINT_ERROR("strict_strtoul() for %s failed: %d ", buf, res);
+ goto out;
+ }
+
+ if (newtn < 0) {
+ PRINT_ERROR("Illegal threads num value %ld", newtn);
+ res = -EINVAL;
+ goto out;
+ }
+
+ res = scst_device_sysfs_threads_data_store(dev, newtn,
+ dev->threads_pool_type);
+ if (res != 0)
+ goto out;
+
+ PRINT_INFO("Changed cmd threads num to %ld", newtn);
+
+ res = count;
+
+out:
+ return res;
+}
+
+static struct kobj_attribute device_threads_num_attr =
+ __ATTR(threads_num, S_IRUGO | S_IWUSR,
+ scst_device_sysfs_threads_num_show,
+ scst_device_sysfs_threads_num_store);
+
+static ssize_t scst_device_sysfs_threads_pool_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int pos = 0;
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ if (dev->threads_num == 0) {
+ pos = sprintf(buf, "Async\n");
+ goto out;
+ } else if (dev->threads_num < 0) {
+ pos = sprintf(buf, "Not valid\n");
+ goto out;
+ }
+
+ switch (dev->threads_pool_type) {
+ case SCST_THREADS_POOL_PER_INITIATOR:
+ pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_PER_INITIATOR_STR,
+ (dev->threads_pool_type != dev->handler->threads_pool_type) ?
+ SCST_SYSFS_KEY_MARK "\n" : "");
+ break;
+ case SCST_THREADS_POOL_SHARED:
+ pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_SHARED_STR,
+ (dev->threads_pool_type != dev->handler->threads_pool_type) ?
+ SCST_SYSFS_KEY_MARK "\n" : "");
+ break;
+ default:
+ pos = sprintf(buf, "Unknown\n");
+ break;
+ }
+
+out:
+ return pos;
+}
+
+static ssize_t scst_device_sysfs_threads_pool_type_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_device *dev;
+ enum scst_dev_type_threads_pool_type newtpt;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ newtpt = scst_parse_threads_pool_type(buf, count);
+ if (newtpt == SCST_THREADS_POOL_TYPE_INVALID) {
+ PRINT_ERROR("Illegal threads pool type %s", buf);
+ res = -EINVAL;
+ goto out;
+ }
+
+ TRACE_DBG("buf %s, count %zd, newtpt %d", buf, count, newtpt);
+
+ res = scst_device_sysfs_threads_data_store(dev, dev->threads_num,
+ newtpt);
+ if (res != 0)
+ goto out;
+
+ PRINT_INFO("Changed cmd threads pool type to %d", newtpt);
+
+ res = count;
+
+out:
+ return res;
+}
+
+static struct kobj_attribute device_threads_pool_type_attr =
+ __ATTR(threads_pool_type, S_IRUGO | S_IWUSR,
+ scst_device_sysfs_threads_pool_type_show,
+ scst_device_sysfs_threads_pool_type_store);
+
+static struct attribute *scst_device_attrs[] = {
+ &device_type_attr.attr,
+ &device_threads_num_attr.attr,
+ &device_threads_pool_type_attr.attr,
+ NULL,
+};
+
+static void scst_sysfs_device_release(struct kobject *kobj)
+{
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ /* Let's make lockdep happy */
+ up_write(&dev->dev_attr_rwsem);
+
+ scst_free_device(dev);
+ return;
+}
+
+int scst_create_devt_dev_sysfs(struct scst_device *dev)
+{
+ int retval = 0;
+ const struct attribute **pattr;
+
+ if (dev->handler == &scst_null_devtype)
+ goto out;
+
+ BUG_ON(!dev->handler->devt_kobj_initialized);
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ retval = sysfs_create_link(&dev->dev_kobj,
+ &dev->handler->devt_kobj, "handler");
+ if (retval != 0) {
+ PRINT_ERROR("Can't create handler link for dev %s",
+ dev->virt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_link(&dev->handler->devt_kobj,
+ &dev->dev_kobj, dev->virt_name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't create handler link for dev %s",
+ dev->virt_name);
+ goto out;
+ }
+
+ pattr = dev->handler->dev_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ retval = sysfs_create_file(&dev->dev_kobj, *pattr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add dev attr %s for dev %s",
+ (*pattr)->name, dev->virt_name);
+ goto out;
+ }
+ pattr++;
+ }
+ }
+
+out:
+ return retval;
+}
+
+void scst_devt_dev_sysfs_put(struct scst_device *dev)
+{
+ const struct attribute **pattr;
+
+ if (dev->handler == &scst_null_devtype)
+ goto out;
+
+ BUG_ON(!dev->handler->devt_kobj_initialized);
+
+ pattr = dev->handler->dev_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ sysfs_remove_file(&dev->dev_kobj, *pattr);
+ pattr++;
+ }
+ }
+
+ sysfs_remove_link(&dev->dev_kobj, "handler");
+ sysfs_remove_link(&dev->handler->devt_kobj, dev->virt_name);
+
+out:
+ return;
+}
+
+static ssize_t scst_dev_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ if (down_read_trylock(&dev->dev_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->show(kobj, kobj_attr, buf);
+
+ up_read(&dev->dev_attr_rwsem);
+
+out:
+ return res;
+}
+
+static ssize_t scst_dev_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_device *dev;
+
+ dev = container_of(kobj, struct scst_device, dev_kobj);
+
+ if (down_read_trylock(&dev->dev_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+ up_read(&dev->dev_attr_rwsem);
+
+out:
+ return res;
+}
+
+static struct sysfs_ops scst_dev_sysfs_ops = {
+ .show = scst_dev_attr_show,
+ .store = scst_dev_attr_store,
+};
+
+static struct kobj_type scst_device_ktype = {
+ .sysfs_ops = &scst_dev_sysfs_ops,
+ .release = scst_sysfs_device_release,
+ .default_attrs = scst_device_attrs,
+};
+
+int scst_create_device_sysfs(struct scst_device *dev)
+{
+ int retval = 0;
+
+ init_rwsem(&dev->dev_attr_rwsem);
+
+ dev->dev_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&dev->dev_kobj, &scst_device_ktype,
+ scst_devices_kobj, dev->virt_name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add device %s to sysfs", dev->virt_name);
+ goto out;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ dev->dev_exp_kobj = kobject_create_and_add("exported",
+ &dev->dev_kobj);
+ if (dev->dev_exp_kobj == NULL) {
+ PRINT_ERROR("Can't create exported link for device %s",
+ dev->virt_name);
+ retval = -ENOMEM;
+ goto out;
+ }
+
+ if (dev->scsi_dev != NULL) {
+ retval = sysfs_create_link(&dev->dev_kobj,
+ &dev->scsi_dev->sdev_dev.kobj, "scsi_device");
+ if (retval != 0) {
+ PRINT_ERROR("Can't create scsi_device link for dev %s",
+ dev->virt_name);
+ goto out;
+ }
+ }
+
+out:
+ return retval;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * dev_attr_rwsem
+ */
+void scst_device_sysfs_put(struct scst_device *dev)
+{
+
+ if (dev->dev_kobj_initialized) {
+ kobject_del(dev->dev_exp_kobj);
+ kobject_put(dev->dev_exp_kobj);
+
+ kobject_del(&dev->dev_kobj);
+
+ down_write(&dev->dev_attr_rwsem);
+ kobject_put(&dev->dev_kobj);
+ } else
+ scst_free_device(dev);
+ return;
+}
+
+/*
+ * Target sessions directory implementation
+ */
+
+static ssize_t scst_sess_sysfs_commands_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_session *sess;
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ return sprintf(buf, "%i\n", atomic_read(&sess->sess_cmd_count));
+}
+
+static struct kobj_attribute session_commands_attr =
+ __ATTR(commands, S_IRUGO, scst_sess_sysfs_commands_show, NULL);
+
+static ssize_t scst_sess_sysfs_active_commands_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int res;
+ struct scst_session *sess;
+ int active_cmds = 0, t;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ for (t = TGT_DEV_HASH_SIZE-1; t >= 0; t--) {
+ struct list_head *sess_tgt_dev_list_head =
+ &sess->sess_tgt_dev_list_hash[t];
+ struct scst_tgt_dev *tgt_dev;
+ list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+ sess_tgt_dev_list_entry) {
+ active_cmds += atomic_read(&tgt_dev->tgt_dev_cmd_count);
+ }
+ }
+
+ mutex_unlock(&scst_mutex);
+
+ res = sprintf(buf, "%i\n", active_cmds);
+
+out:
+ return res;
+}
+
+static struct kobj_attribute session_active_commands_attr =
+ __ATTR(active_commands, S_IRUGO, scst_sess_sysfs_active_commands_show,
+ NULL);
+
+static ssize_t scst_sess_sysfs_initiator_name_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_session *sess;
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+ sess->initiator_name);
+}
+
+static struct kobj_attribute session_initiator_name_attr =
+ __ATTR(initiator_name, S_IRUGO, scst_sess_sysfs_initiator_name_show, NULL);
+
+static struct attribute *scst_session_attrs[] = {
+ &session_commands_attr.attr,
+ &session_active_commands_attr.attr,
+ &session_initiator_name_attr.attr,
+ NULL,
+};
+
+static void scst_sysfs_session_release(struct kobject *kobj)
+{
+ struct scst_session *sess;
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ /* Let's make lockdep happy */
+ up_write(&sess->sess_attr_rwsem);
+
+ scst_release_session(sess);
+ return;
+}
+
+static ssize_t scst_sess_attr_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_session *sess;
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ if (down_read_trylock(&sess->sess_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->show(kobj, kobj_attr, buf);
+
+ up_read(&sess->sess_attr_rwsem);
+
+out:
+ return res;
+}
+
+static ssize_t scst_sess_attr_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ struct kobj_attribute *kobj_attr;
+ struct scst_session *sess;
+
+ sess = container_of(kobj, struct scst_session, sess_kobj);
+
+ if (down_read_trylock(&sess->sess_attr_rwsem) == 0) {
+ res = -ENOENT;
+ goto out;
+ }
+
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+ up_read(&sess->sess_attr_rwsem);
+
+out:
+ return res;
+}
+
+static struct sysfs_ops scst_sess_sysfs_ops = {
+ .show = scst_sess_attr_show,
+ .store = scst_sess_attr_store,
+};
+
+static struct kobj_type scst_session_ktype = {
+ .sysfs_ops = &scst_sess_sysfs_ops,
+ .release = scst_sysfs_session_release,
+ .default_attrs = scst_session_attrs,
+};
+
+/* scst_mutex supposed to be locked */
+int scst_create_sess_sysfs(struct scst_session *sess)
+{
+ int retval = 0;
+ struct scst_session *s;
+ const struct attribute **pattr;
+ char *name = (char *)sess->initiator_name;
+ int len = strlen(name) + 1, n = 1;
+
+restart:
+ list_for_each_entry(s, &sess->tgt->sess_list, sess_list_entry) {
+ if (!s->sess_kobj_initialized)
+ continue;
+
+ if (strcmp(name, kobject_name(&s->sess_kobj)) == 0) {
+ if (s == sess)
+ continue;
+
+ TRACE_DBG("Dublicated session from the same initiator "
+ "%s found", name);
+
+ if (name == sess->initiator_name) {
+ len = strlen(sess->initiator_name);
+ len += 20;
+ name = kmalloc(len, GFP_KERNEL);
+ if (name == NULL) {
+ PRINT_ERROR("Unable to allocate a "
+ "replacement name (size %d)",
+ len);
+ retval = -ENOMEM;
+ goto out_free;
+ }
+ }
+
+ snprintf(name, len, "%s_%d", sess->initiator_name, n);
+ n++;
+ goto restart;
+ }
+ }
+
+ init_rwsem(&sess->sess_attr_rwsem);
+
+ sess->sess_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&sess->sess_kobj, &scst_session_ktype,
+ sess->tgt->tgt_sess_kobj, name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add session %s to sysfs", name);
+ goto out_free;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ pattr = sess->tgt->tgtt->sess_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ retval = sysfs_create_file(&sess->sess_kobj, *pattr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add sess attr %s for sess "
+ "for initiator %s", (*pattr)->name,
+ name);
+ goto out_free;
+ }
+ pattr++;
+ }
+ }
+
+ if (sess->acg == sess->tgt->default_acg)
+ retval = sysfs_create_link(&sess->sess_kobj,
+ sess->tgt->tgt_luns_kobj, "luns");
+ else
+ retval = sysfs_create_link(&sess->sess_kobj,
+ sess->acg->luns_kobj, "luns");
+
+out_free:
+ if (name != sess->initiator_name)
+ kfree(name);
+ return retval;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * sess_attr_rwsem
+ */
+void scst_sess_sysfs_put(struct scst_session *sess)
+{
+
+ if (sess->sess_kobj_initialized) {
+ kobject_del(&sess->sess_kobj);
+
+ down_write(&sess->sess_attr_rwsem);
+ kobject_put(&sess->sess_kobj);
+ } else
+ scst_release_session(sess);
+ return;
+}
+
+/*
+ * Target luns directory implementation
+ */
+
+static void scst_acg_dev_release(struct kobject *kobj)
+{
+ struct scst_acg_dev *acg_dev;
+
+ acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+
+ scst_acg_dev_destroy(acg_dev);
+ return;
+}
+
+static ssize_t scst_lun_rd_only_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ struct scst_acg_dev *acg_dev;
+
+ acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+
+ if (acg_dev->rd_only || acg_dev->dev->rd_only)
+ return sprintf(buf, "%d\n%s\n", 1, SCST_SYSFS_KEY_MARK);
+ else
+ return sprintf(buf, "%d\n", 0);
+}
+
+static struct kobj_attribute lun_options_attr =
+ __ATTR(read_only, S_IRUGO, scst_lun_rd_only_show, NULL);
+
+static struct attribute *lun_attrs[] = {
+ &lun_options_attr.attr,
+ NULL,
+};
+
+static struct kobj_type acg_dev_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_acg_dev_release,
+ .default_attrs = lun_attrs,
+};
+
+int scst_create_acg_dev_sysfs(struct scst_acg *acg, unsigned int virt_lun,
+ struct kobject *parent)
+{
+ int retval;
+ struct scst_acg_dev *acg_dev = NULL, *acg_dev_tmp;
+ char str[20];
+
+ list_for_each_entry(acg_dev_tmp, &acg->acg_dev_list,
+ acg_dev_list_entry) {
+ if (acg_dev_tmp->lun == virt_lun) {
+ acg_dev = acg_dev_tmp;
+ break;
+ }
+ }
+ if (acg_dev == NULL) {
+ PRINT_ERROR("%s", "acg_dev lookup for kobject creation failed");
+ retval = -EINVAL;
+ goto out;
+ }
+
+ snprintf(str, sizeof(str), "export%u",
+ acg_dev->dev->dev_exported_lun_num++);
+
+ kobject_get(&acg_dev->dev->dev_kobj);
+
+ acg_dev->acg_dev_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&acg_dev->acg_dev_kobj, &acg_dev_ktype,
+ parent, "%u", virt_lun);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add acg %s to sysfs", acg->acg_name);
+ goto out;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the _put function() called by the caller.
+ */
+
+ retval = sysfs_create_link(acg_dev->dev->dev_exp_kobj,
+ &acg_dev->acg_dev_kobj, str);
+ if (retval != 0) {
+ PRINT_ERROR("Can't create acg %s LUN link", acg->acg_name);
+ goto out;
+ }
+
+ retval = sysfs_create_link(&acg_dev->acg_dev_kobj,
+ &acg_dev->dev->dev_kobj, "device");
+ if (retval != 0) {
+ PRINT_ERROR("Can't create acg %s device link", acg->acg_name);
+ goto out;
+ }
+
+out:
+ return retval;
+}
+
+static ssize_t __scst_luns_mgmt_store(struct scst_acg *acg,
+ struct kobject *kobj, const char *buf, size_t count)
+{
+ int res, virt = 0, read_only = 0, action;
+ char *buffer, *p, *e = NULL;
+ unsigned int host, channel = 0, id = 0, lun = 0, virt_lun;
+ struct scst_acg_dev *acg_dev = NULL, *acg_dev_tmp;
+ struct scst_device *d, *dev = NULL;
+
+#define SCST_LUN_ACTION_ADD 1
+#define SCST_LUN_ACTION_DEL 2
+#define SCST_LUN_ACTION_REPLACE 3
+#define SCST_LUN_ACTION_CLEAR 4
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+ p = buffer;
+
+ if (p[strlen(p) - 1] == '\n')
+ p[strlen(p) - 1] = '\0';
+ if (strncasecmp("add", p, 3) == 0) {
+ p += 3;
+ action = SCST_LUN_ACTION_ADD;
+ } else if (strncasecmp("del", p, 3) == 0) {
+ p += 3;
+ action = SCST_LUN_ACTION_DEL;
+ } else if (!strncasecmp("replace", p, 7)) {
+ p += 7;
+ action = SCST_LUN_ACTION_REPLACE;
+ } else if (!strncasecmp("clear", p, 5)) {
+ p += 5;
+ action = SCST_LUN_ACTION_CLEAR;
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out_free;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_free_resume;
+ }
+
+ if (action != SCST_LUN_ACTION_CLEAR) {
+ if (!isspace(*p)) {
+ PRINT_ERROR("%s", "Syntax error");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ while (isspace(*p) && *p != '\0')
+ p++;
+ e = p; /* save p */
+ host = simple_strtoul(p, &p, 0);
+ if (*p == ':') {
+ channel = simple_strtoul(p + 1, &p, 0);
+ id = simple_strtoul(p + 1, &p, 0);
+ lun = simple_strtoul(p + 1, &p, 0);
+ e = p;
+ } else {
+ virt++;
+ p = e; /* restore p */
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = '\0';
+ }
+
+ list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+ if (virt) {
+ if (d->virt_id && !strcmp(d->virt_name, p)) {
+ dev = d;
+ TRACE_DBG("Virt device %p (%s) found",
+ dev, p);
+ break;
+ }
+ } else {
+ if (d->scsi_dev &&
+ d->scsi_dev->host->host_no == host &&
+ d->scsi_dev->channel == channel &&
+ d->scsi_dev->id == id &&
+ d->scsi_dev->lun == lun) {
+ dev = d;
+ TRACE_DBG("Dev %p (%d:%d:%d:%d) found",
+ dev, host, channel, id, lun);
+ break;
+ }
+ }
+ }
+ if (dev == NULL) {
+ if (virt) {
+ PRINT_ERROR("Virt device '%s' not found", p);
+ } else {
+ PRINT_ERROR("Device %d:%d:%d:%d not found",
+ host, channel, id, lun);
+ }
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ }
+
+ switch (action) {
+ case SCST_LUN_ACTION_ADD:
+ case SCST_LUN_ACTION_REPLACE:
+ {
+ bool dev_replaced = false;
+
+ e++;
+ while (isspace(*e) && *e != '\0')
+ e++;
+ virt_lun = simple_strtoul(e, &e, 0);
+
+ while (isspace(*e) && *e != '\0')
+ e++;
+
+ while (1) {
+ char *pp;
+ unsigned long val;
+ char *param = scst_get_next_token_str(&e);
+ if (param == NULL)
+ break;
+
+ p = scst_get_next_lexem(¶m);
+ if (*p == '\0') {
+ PRINT_ERROR("Syntax error at %s (device %s)",
+ param, dev->virt_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ pp = scst_get_next_lexem(¶m);
+ if (*pp == '\0') {
+ PRINT_ERROR("Parameter %s value missed for device %s",
+ p, dev->virt_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ if (scst_get_next_lexem(¶m)[0] != '\0') {
+ PRINT_ERROR("Too many parameter's %s values (device %s)",
+ p, dev->virt_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ res = strict_strtoul(pp, 0, &val);
+ if (res != 0) {
+ PRINT_ERROR("strict_strtoul() for %s failed: %d "
+ "(device %s)", pp, res, dev->virt_name);
+ goto out_free_up;
+ }
+
+ if (!strcasecmp("read_only", p)) {
+ read_only = val;
+ TRACE_DBG("READ ONLY %d", read_only);
+ } else {
+ PRINT_ERROR("Unknown parameter %s (device %s)",
+ p, dev->virt_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ }
+
+ acg_dev = NULL;
+ list_for_each_entry(acg_dev_tmp, &acg->acg_dev_list,
+ acg_dev_list_entry) {
+ if (acg_dev_tmp->lun == virt_lun) {
+ acg_dev = acg_dev_tmp;
+ break;
+ }
+ }
+
+ if (acg_dev != NULL) {
+ if (action == SCST_LUN_ACTION_ADD) {
+ PRINT_ERROR("virt lun %d already exists in "
+ "group %s", virt_lun, acg->acg_name);
+ res = -EEXIST;
+ goto out_free_up;
+ } else {
+ /* Replace */
+ res = scst_acg_remove_dev(acg, acg_dev->dev,
+ false);
+ if (res != 0)
+ goto out_free_up;
+
+ dev_replaced = true;
+ }
+ }
+
+ res = scst_acg_add_dev(acg, dev, virt_lun, read_only,
+ !dev_replaced);
+ if (res != 0)
+ goto out_free_up;
+
+ res = scst_create_acg_dev_sysfs(acg, virt_lun, kobj);
+ if (res != 0) {
+ PRINT_ERROR("%s", "Creation of acg_dev kobject failed");
+ goto out_remove_acg_dev;
+ }
+
+ if (dev_replaced) {
+ struct scst_tgt_dev *tgt_dev;
+
+ list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+ dev_tgt_dev_list_entry) {
+ if ((tgt_dev->acg_dev->acg == acg) &&
+ (tgt_dev->lun == virt_lun)) {
+ TRACE_MGMT_DBG("INQUIRY DATA HAS CHANGED"
+ " on tgt_dev %p", tgt_dev);
+ scst_gen_aen_or_ua(tgt_dev,
+ SCST_LOAD_SENSE(scst_sense_inquery_data_changed));
+ }
+ }
+ }
+
+ break;
+ }
+ case SCST_LUN_ACTION_DEL:
+ res = scst_acg_remove_dev(acg, dev, true);
+ if (res != 0)
+ goto out_free_up;
+ break;
+ case SCST_LUN_ACTION_CLEAR:
+ PRINT_INFO("Removed all devices from group %s",
+ acg->acg_name);
+ list_for_each_entry_safe(acg_dev, acg_dev_tmp,
+ &acg->acg_dev_list,
+ acg_dev_list_entry) {
+ res = scst_acg_remove_dev(acg, acg_dev->dev,
+ list_is_last(&acg_dev->acg_dev_list_entry,
+ &acg->acg_dev_list));
+ if (res)
+ goto out_free_up;
+ }
+ break;
+ }
+
+ res = count;
+
+out_free_up:
+ mutex_unlock(&scst_mutex);
+
+out_free_resume:
+ scst_resume_activity();
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+out_remove_acg_dev:
+ scst_acg_remove_dev(acg, dev, true);
+ goto out_free_up;
+
+#undef SCST_LUN_ACTION_ADD
+#undef SCST_LUN_ACTION_DEL
+#undef SCST_LUN_ACTION_REPLACE
+#undef SCST_LUN_ACTION_CLEAR
+}
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ static char *help = "Usage: echo \"add|del H:C:I:L lun [parameters]\" "
+ ">mgmt\n"
+ " echo \"add|del VNAME lun [parameters]\" "
+ ">mgmt\n"
+ " echo \"replace H:C:I:L lun [parameters]\" "
+ ">mgmt\n"
+ " echo \"replace VNAME lun [parameters]\" "
+ ">mgmt\n"
+ " echo \"clear\" >mgmt\n"
+ "\n"
+ "where parameters are one or more "
+ "param_name=value pairs separated by ';'\n"
+ "\nThe following parameters available: read_only.";
+
+ return sprintf(buf, help);
+}
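+
+/*
+ * Example (illustrative sketch only; the exact sysfs path depends on the
+ * target driver and target names, shown here as placeholders):
+ *
+ * # export virtual device "disk01" as LUN 0, read-only
+ * echo "add disk01 0 read_only=1" \
+ *	>/sys/kernel/scst_tgt/targets/<driver>/<target>/luns/mgmt
+ */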
+
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+ acg = tgt->default_acg;
+
+ res = __scst_luns_mgmt_store(acg, kobj, buf, count);
+ return res;
+}
+
+static ssize_t __scst_acg_addr_method_show(struct scst_acg *acg, char *buf)
+{
+ int res;
+
+ switch (acg->addr_method) {
+ case SCST_LUN_ADDR_METHOD_FLAT:
+ res = sprintf(buf, "FLAT\n%s\n", SCST_SYSFS_KEY_MARK);
+ break;
+ case SCST_LUN_ADDR_METHOD_PERIPHERAL:
+ res = sprintf(buf, "PERIPHERAL\n");
+ break;
+ default:
+ res = sprintf(buf, "UNKNOWN\n");
+ break;
+ }
+
+ return res;
+}
+
+static ssize_t __scst_acg_addr_method_store(struct scst_acg *acg,
+ const char *buf, size_t count)
+{
+ int res = count;
+
+ if (strncasecmp(buf, "FLAT", min_t(int, 4, count)) == 0)
+ acg->addr_method = SCST_LUN_ADDR_METHOD_FLAT;
+ else if (strncasecmp(buf, "PERIPHERAL", min_t(int, 10, count)) == 0)
+ acg->addr_method = SCST_LUN_ADDR_METHOD_PERIPHERAL;
+ else {
+ PRINT_ERROR("Unknown address method %s", buf);
+ res = -EINVAL;
+ }
+ return res;
+}
+
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_acg *acg;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+ acg = tgt->default_acg;
+
+ return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+ acg = tgt->default_acg;
+
+ res = __scst_acg_addr_method_store(acg, buf, count);
+ return res;
+}
+
+static ssize_t __scst_acg_io_grouping_type_show(struct scst_acg *acg, char *buf)
+{
+ int res;
+
+ switch (acg->acg_io_grouping_type) {
+ case SCST_IO_GROUPING_AUTO:
+ res = sprintf(buf, "%s\n", SCST_IO_GROUPING_AUTO_STR);
+ break;
+ case SCST_IO_GROUPING_THIS_GROUP_ONLY:
+ res = sprintf(buf, "%s\n%s\n",
+ SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+ SCST_SYSFS_KEY_MARK);
+ break;
+ case SCST_IO_GROUPING_NEVER:
+ res = sprintf(buf, "%s\n%s\n", SCST_IO_GROUPING_NEVER_STR,
+ SCST_SYSFS_KEY_MARK);
+ break;
+ default:
+ res = sprintf(buf, "%d\n%s\n", acg->acg_io_grouping_type,
+ SCST_SYSFS_KEY_MARK);
+ break;
+ }
+
+ return res;
+}
+
+static ssize_t __scst_acg_io_grouping_type_store(struct scst_acg *acg,
+ const char *buf, size_t count)
+{
+ int res = 0;
+ int prev = acg->acg_io_grouping_type;
+ struct scst_acg_dev *acg_dev;
+
+ if (strncasecmp(buf, SCST_IO_GROUPING_AUTO_STR,
+ min_t(int, strlen(SCST_IO_GROUPING_AUTO_STR), count)) == 0)
+ acg->acg_io_grouping_type = SCST_IO_GROUPING_AUTO;
+ else if (strncasecmp(buf, SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+ min_t(int, strlen(SCST_IO_GROUPING_THIS_GROUP_ONLY_STR), count)) == 0)
+ acg->acg_io_grouping_type = SCST_IO_GROUPING_THIS_GROUP_ONLY;
+ else if (strncasecmp(buf, SCST_IO_GROUPING_NEVER_STR,
+ min_t(int, strlen(SCST_IO_GROUPING_NEVER_STR), count)) == 0)
+ acg->acg_io_grouping_type = SCST_IO_GROUPING_NEVER;
+ else {
+ long io_grouping_type;
+ res = strict_strtol(buf, 0, &io_grouping_type);
+ if ((res != 0) || (io_grouping_type <= 0)) {
+ PRINT_ERROR("Unknown or not allowed I/O grouping type "
+ "%s", buf);
+ res = -EINVAL;
+ goto out;
+ }
+ acg->acg_io_grouping_type = io_grouping_type;
+ }
+
+ if (prev == acg->acg_io_grouping_type)
+ goto out;
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_resume;
+ }
+
+ list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+ int rc;
+
+ scst_stop_dev_threads(acg_dev->dev);
+
+ rc = scst_create_dev_threads(acg_dev->dev);
+ if (rc != 0)
+ res = rc;
+ }
+
+ mutex_unlock(&scst_mutex);
+
+out_resume:
+ scst_resume_activity();
+
+out:
+ return res;
+}
+
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_acg *acg;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+ acg = tgt->default_acg;
+
+ return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+ struct scst_tgt *tgt;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+ acg = tgt->default_acg;
+
+ res = __scst_acg_io_grouping_type_store(acg, buf, count);
+ if (res != 0)
+ goto out;
+
+ res = count;
+
+out:
+ return res;
+}
+
+static int scst_create_acg_sysfs(struct scst_tgt *tgt,
+ struct scst_acg *acg)
+{
+ int retval = 0;
+
+ acg->acg_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&acg->acg_kobj, &acg_ktype,
+ tgt->tgt_ini_grp_kobj, acg->acg_name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add acg '%s' to sysfs", acg->acg_name);
+ goto out;
+ }
+
+ acg->luns_kobj = kobject_create_and_add("luns", &acg->acg_kobj);
+ if (acg->luns_kobj == NULL) {
+ PRINT_ERROR("Can't create luns kobj for tgt %s",
+ tgt->tgt_name);
+ retval = -ENOMEM;
+ goto out;
+ }
+
+ retval = sysfs_create_file(acg->luns_kobj, &scst_acg_luns_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_acg_luns_mgmt.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ acg->initiators_kobj = kobject_create_and_add("initiators",
+ &acg->acg_kobj);
+ if (acg->initiators_kobj == NULL) {
+ PRINT_ERROR("Can't create initiators kobj for tgt %s",
+ tgt->tgt_name);
+ retval = -ENOMEM;
+ goto out;
+ }
+
+ retval = sysfs_create_file(acg->initiators_kobj,
+ &scst_acg_ini_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_acg_ini_mgmt.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&acg->acg_kobj, &scst_acg_addr_method.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_acg_addr_method.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&acg->acg_kobj, &scst_acg_io_grouping_type.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+ scst_acg_io_grouping_type.attr.name, tgt->tgt_name);
+ goto out;
+ }
+
+out:
+ return retval;
+}
+
+void scst_acg_sysfs_put(struct scst_acg *acg)
+{
+
+ if (acg->acg_kobj_initialized) {
+ scst_clear_acg(acg);
+
+ kobject_del(acg->luns_kobj);
+ kobject_put(acg->luns_kobj);
+
+ kobject_del(acg->initiators_kobj);
+ kobject_put(acg->initiators_kobj);
+
+ kobject_del(&acg->acg_kobj);
+ kobject_put(&acg->acg_kobj);
+ } else
+ scst_destroy_acg(acg);
+ return;
+}
+
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_acg *acg;
+
+ acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+ return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+
+ acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+ res = __scst_acg_addr_method_store(acg, buf, count);
+ return res;
+}
+
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_acg *acg;
+
+ acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+ return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+
+ acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+ res = __scst_acg_io_grouping_type_store(acg, buf, count);
+ if (res != 0)
+ goto out;
+
+ res = count;
+
+out:
+ return res;
+}
+
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ static char *help = "Usage: echo \"create GROUP_NAME\" >mgmt\n"
+ " echo \"del GROUP_NAME\" >mgmt\n";
+
+ return sprintf(buf, "%s", help);
+}
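+
+/*
+ * Example (illustrative sketch only; the ini_groups directory name and the
+ * target path are placeholders for kobjects created elsewhere in this patch):
+ *
+ * echo "create HOST01_GROUP" \
+ *	>/sys/kernel/scst_tgt/targets/<driver>/<target>/<ini_groups>/mgmt
+ */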
+
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res, action;
+ int len;
+ char *name;
+ char *buffer, *p, *e = NULL;
+ struct scst_acg *a, *acg = NULL;
+ struct scst_tgt *tgt;
+
+#define SCST_INI_GROUP_ACTION_CREATE 1
+#define SCST_INI_GROUP_ACTION_DEL 2
+
+ tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+
+ p = buffer;
+ if (p[strlen(p) - 1] == '\n')
+ p[strlen(p) - 1] = '\0';
+ if (strncasecmp("create ", p, 7) == 0) {
+ p += 7;
+ action = SCST_INI_GROUP_ACTION_CREATE;
+ } else if (strncasecmp("del ", p, 4) == 0) {
+ p += 4;
+ action = SCST_INI_GROUP_ACTION_DEL;
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out_free;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_free_resume;
+ }
+
+ while (isspace(*p) && *p != '\0')
+ p++;
+ e = p;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = '\0';
+
+ if (p[0] == '\0') {
+ PRINT_ERROR("%s", "Group name required");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ list_for_each_entry(a, &tgt->tgt_acg_list, acg_list_entry) {
+ if (strcmp(a->acg_name, p) == 0) {
+ TRACE_DBG("group (acg) %p %s found",
+ a, a->acg_name);
+ acg = a;
+ break;
+ }
+ }
+
+ switch (action) {
+ case SCST_INI_GROUP_ACTION_CREATE:
+ TRACE_DBG("Creating group '%s'", p);
+ if (acg != NULL) {
+ PRINT_ERROR("acg name %s exist", p);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ len = strlen(p) + 1;
+ name = kmalloc(len, GFP_KERNEL);
+ if (name == NULL) {
+ PRINT_ERROR("%s", "Allocation of name failed");
+ res = -ENOMEM;
+ goto out_free_up;
+ }
+ strlcpy(name, p, len);
+
+ acg = scst_alloc_add_acg(tgt, name);
+ kfree(name);
+ if (acg == NULL) {
+ res = -ENOMEM;
+ goto out_free_up;
+ }
+
+ res = scst_create_acg_sysfs(tgt, acg);
+ if (res != 0)
+ goto out_free_acg;
+ break;
+ case SCST_INI_GROUP_ACTION_DEL:
+ TRACE_DBG("Deleting group '%s'", p);
+ if (acg == NULL) {
+ PRINT_ERROR("Group %s not found", p);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ if (!scst_acg_sess_is_empty(acg)) {
+ PRINT_ERROR("Group %s is not empty", acg->acg_name);
+ res = -EBUSY;
+ goto out_free_up;
+ }
+ scst_acg_sysfs_put(acg);
+ break;
+ }
+
+ res = count;
+
+out_free_up:
+ mutex_unlock(&scst_mutex);
+
+out_free_resume:
+ scst_resume_activity();
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+out_free_acg:
+ scst_acg_sysfs_put(acg);
+ goto out_free_up;
+
+#undef SCST_INI_GROUP_ACTION_CREATE
+#undef SCST_INI_GROUP_ACTION_DEL
+}
+
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_tgt *tgt;
+ int res;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ res = sprintf(buf, "%d\n%s", tgt->rel_tgt_id,
+ (tgt->rel_tgt_id != 0) ? SCST_SYSFS_KEY_MARK "\n" : "");
+ return res;
+}
+
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res = 0;
+ struct scst_tgt *tgt;
+ unsigned long rel_tgt_id;
+
+ if (buf == NULL)
+ goto out_err;
+
+ tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+ res = strict_strtoul(buf, 0, &rel_tgt_id);
+ if (res != 0)
+ goto out_err;
+
+ TRACE_DBG("Try to set relative target port id %d",
+ (uint16_t)rel_tgt_id);
+
+ if (rel_tgt_id < SCST_MIN_REL_TGT_ID ||
+ rel_tgt_id > SCST_MAX_REL_TGT_ID) {
+ if ((rel_tgt_id == 0) && !tgt->tgtt->is_target_enabled(tgt))
+ goto set;
+
+ PRINT_ERROR("Invalid relative port id %d",
+ (uint16_t)rel_tgt_id);
+ res = -EINVAL;
+ goto out;
+ }
+
+ if (tgt->tgtt->is_target_enabled(tgt) &&
+ rel_tgt_id != tgt->rel_tgt_id) {
+ if (!scst_is_relative_target_port_id_unique(rel_tgt_id, tgt)) {
+ PRINT_ERROR("Relative port id %d is not unique",
+ (uint16_t)rel_tgt_id);
+ res = -EBADSLT;
+ goto out;
+ }
+ }
+
+set:
+ tgt->rel_tgt_id = (uint16_t)rel_tgt_id;
+
+ res = count;
+
+out:
+ return res;
+
+out_err:
+ PRINT_ERROR("%s: Requested action not understood: %s", __func__, buf);
+ res = -EINVAL;
+ goto out;
+}
+
+int scst_create_acn_sysfs(struct scst_acg *acg, struct scst_acn *acn)
+{
+ int retval = 0;
+ int len;
+ struct kobj_attribute *attr = NULL;
+
+ acn->acn_attr = NULL;
+
+ attr = kzalloc(sizeof(struct kobj_attribute), GFP_KERNEL);
+ if (attr == NULL) {
+ PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+ acn->name);
+ retval = -ENOMEM;
+ goto out;
+ }
+
+ len = strlen(acn->name) + 1;
+ attr->attr.name = kzalloc(len, GFP_KERNEL);
+ if (attr->attr.name == NULL) {
+ PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+ acn->name);
+ retval = -ENOMEM;
+ goto out_free;
+ }
+ strlcpy((char *)attr->attr.name, acn->name, len);
+
+ attr->attr.owner = THIS_MODULE;
+ attr->attr.mode = S_IRUGO;
+ attr->show = scst_acn_file_show;
+ attr->store = NULL;
+
+ retval = sysfs_create_file(acg->initiators_kobj, &attr->attr);
+ if (retval != 0) {
+ PRINT_ERROR("Unable to create acn '%s' for group '%s'",
+ acn->name, acg->acg_name);
+ kfree(attr->attr.name);
+ goto out_free;
+ }
+
+ acn->acn_attr = attr;
+
+out:
+ return retval;
+
+out_free:
+ kfree(attr);
+ goto out;
+}
+
+void scst_acn_sysfs_del(struct scst_acg *acg, struct scst_acn *acn,
+ bool reassign)
+{
+
+ if (acn->acn_attr != NULL) {
+ sysfs_remove_file(acg->initiators_kobj,
+ &acn->acn_attr->attr);
+ kfree(acn->acn_attr->attr.name);
+ kfree(acn->acn_attr);
+ }
+ scst_acg_remove_acn(acn);
+ if (reassign)
+ scst_check_reassign_sessions();
+ return;
+}
+
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+ attr->attr.name);
+}
+
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ struct scst_acg *acg;
+
+ acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+ res = __scst_luns_mgmt_store(acg, kobj, buf, count);
+ return res;
+}
+
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ static char *help = "Usage: echo \"add INITIATOR_NAME\" "
+ ">mgmt\n"
+ " echo \"del INITIATOR_NAME\" "
+ ">mgmt\n"
+ " echo \"move INITIATOR_NAME DEST_GROUP_NAME\" "
+ ">mgmt\n"
+ " echo \"clear\" "
+ ">mgmt\n";
+
+ return sprintf(buf, "%s", help);
+}
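+
+/*
+ * Example (illustrative sketch only; initiator and group names are made up):
+ *
+ * echo "add iqn.2010-04.com.example:host01" >.../initiators/mgmt
+ * echo "move iqn.2010-04.com.example:host01 OTHER_GROUP" >.../initiators/mgmt
+ */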
+
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res, action;
+ char *buffer, *p, *e = NULL;
+ char *name = NULL, *group = NULL;
+ struct scst_acg *acg = NULL, *acg_dest = NULL;
+ struct scst_tgt *tgt = NULL;
+ struct scst_acn *acn = NULL, *acn_tmp;
+
+#define SCST_ACG_ACTION_INI_ADD 1
+#define SCST_ACG_ACTION_INI_DEL 2
+#define SCST_ACG_ACTION_INI_CLEAR 3
+#define SCST_ACG_ACTION_INI_MOVE 4
+
+ acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+
+ p = buffer;
+ if (p[strlen(p) - 1] == '\n')
+ p[strlen(p) - 1] = '\0';
+
+ if (strncasecmp("add", p, 3) == 0) {
+ p += 3;
+ action = SCST_ACG_ACTION_INI_ADD;
+ } else if (strncasecmp("del", p, 3) == 0) {
+ p += 3;
+ action = SCST_ACG_ACTION_INI_DEL;
+ } else if (strncasecmp("clear", p, 5) == 0) {
+ p += 5;
+ action = SCST_ACG_ACTION_INI_CLEAR;
+ } else if (strncasecmp("move", p, 4) == 0) {
+ p += 4;
+ action = SCST_ACG_ACTION_INI_MOVE;
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ if (action != SCST_ACG_ACTION_INI_CLEAR)
+ if (!isspace(*p)) {
+ PRINT_ERROR("%s", "Syntax error");
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ res = scst_suspend_activity(true);
+ if (res != 0)
+ goto out_free;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_free_resume;
+ }
+
+ if (action != SCST_ACG_ACTION_INI_CLEAR)
+ while (isspace(*p) && *p != '\0')
+ p++;
+
+ switch (action) {
+ case SCST_ACG_ACTION_INI_ADD:
+ e = p;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = '\0';
+ name = p;
+
+ if (name[0] == '\0') {
+ PRINT_ERROR("%s", "Invalid initiator name");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ res = scst_acg_add_name(acg, name);
+ if (res != 0)
+ goto out_free_up;
+ break;
+ case SCST_ACG_ACTION_INI_DEL:
+ e = p;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = '\0';
+ name = p;
+
+ if (name[0] == '\0') {
+ PRINT_ERROR("%s", "Invalid initiator name");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ acn = scst_acg_find_name(acg, name);
+ if (acn == NULL) {
+ PRINT_ERROR("Unable to find "
+ "initiator '%s' in group '%s'",
+ name, acg->acg_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ scst_acn_sysfs_del(acg, acn, true);
+ break;
+ case SCST_ACG_ACTION_INI_CLEAR:
+ list_for_each_entry_safe(acn, acn_tmp, &acg->acn_list,
+ acn_list_entry) {
+ scst_acn_sysfs_del(acg, acn, false);
+ }
+ scst_check_reassign_sessions();
+ break;
+ case SCST_ACG_ACTION_INI_MOVE:
+ e = p;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ if (*e == '\0') {
+ PRINT_ERROR("%s", "Too few parameters");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ *e = '\0';
+ name = p;
+
+ if (name[0] == '\0') {
+ PRINT_ERROR("%s", "Invalid initiator name");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ e++;
+ p = e;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = '\0';
+ group = p;
+
+ if (group[0] == '\0') {
+ PRINT_ERROR("%s", "Invalid group name");
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ TRACE_DBG("Move initiator '%s' to group '%s'",
+ name, group);
+
+ /*
+ * It is better to get tgt from the kobject hierarchy tgt_kobj ->
+ * tgt_ini_grp_kobj -> acg_kobj -> initiators_kobj than to keep a
+ * direct pointer to tgt in struct acg and then have to worry about
+ * dereferencing it incorrectly at destruction time.
+ */
+ {
+ struct kobject *k;
+
+ /* acg_kobj */
+ k = kobj->parent;
+ if (k == NULL) {
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ /* tgt_ini_grp_kobj */
+ k = k->parent;
+ if (k == NULL) {
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ /* tgt_kobj */
+ k = k->parent;
+ if (k == NULL) {
+ res = -EINVAL;
+ goto out_free_up;
+ }
+
+ tgt = container_of(k, struct scst_tgt, tgt_kobj);
+ }
+
+ acn = scst_acg_find_name(acg, name);
+ if (acn == NULL) {
+ PRINT_ERROR("Unable to find "
+ "initiator '%s' in group '%s'",
+ name, acg->acg_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ acg_dest = scst_tgt_find_acg(tgt, group);
+ if (acg_dest == NULL) {
+ PRINT_ERROR("Unable to find group '%s' in target '%s'",
+ group, tgt->tgt_name);
+ res = -EINVAL;
+ goto out_free_up;
+ }
+ if (scst_acg_find_name(acg_dest, name) != NULL) {
+ PRINT_ERROR("Initiator '%s' already exists in group '%s'",
+ name, acg_dest->acg_name);
+ res = -EEXIST;
+ goto out_free_up;
+ }
+ scst_acn_sysfs_del(acg, acn, false);
+
+ res = scst_acg_add_name(acg_dest, name);
+ if (res != 0)
+ goto out_free_up;
+ break;
+ }
+
+ res = count;
+
+out_free_up:
+ mutex_unlock(&scst_mutex);
+
+out_free_resume:
+ scst_resume_activity();
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+#undef SCST_ACG_ACTION_INI_ADD
+#undef SCST_ACG_ACTION_INI_DEL
+#undef SCST_ACG_ACTION_INI_CLEAR
+#undef SCST_ACG_ACTION_INI_MOVE
+}
+
+/*
+ * SGV directory implementation
+ */
+
+static struct kobj_attribute sgv_stat_attr =
+ __ATTR(stats, S_IRUGO | S_IWUSR, sgv_sysfs_stat_show,
+ sgv_sysfs_stat_reset);
+
+static struct attribute *sgv_attrs[] = {
+ &sgv_stat_attr.attr,
+ NULL,
+};
+
+static void sgv_kobj_release(struct kobject *kobj)
+{
+ struct sgv_pool *pool;
+
+ pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+ sgv_pool_destroy(pool);
+ return;
+}
+
+static struct kobj_type sgv_pool_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = sgv_kobj_release,
+ .default_attrs = sgv_attrs,
+};
+
+int scst_create_sgv_sysfs(struct sgv_pool *pool)
+{
+ int retval;
+
+ pool->sgv_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&pool->sgv_kobj, &sgv_pool_ktype,
+ scst_sgv_kobj, pool->name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add sgv pool %s to sysfs", pool->name);
+ goto out;
+ }
+
+out:
+ return retval;
+}
+
+/* pool can be dead upon exit from this function! */
+void scst_sgv_sysfs_put(struct sgv_pool *pool)
+{
+ if (pool->sgv_kobj_initialized) {
+ kobject_del(&pool->sgv_kobj);
+ kobject_put(&pool->sgv_kobj);
+ } else
+ sgv_pool_destroy(pool);
+ return;
+}
+
+static struct kobj_attribute sgv_global_stat_attr =
+ __ATTR(global_stats, S_IRUGO | S_IWUSR, sgv_sysfs_global_stat_show,
+ sgv_sysfs_global_stat_reset);
+
+static struct attribute *sgv_default_attrs[] = {
+ &sgv_global_stat_attr.attr,
+ NULL,
+};
+
+static struct kobj_type sgv_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_sysfs_release,
+ .default_attrs = sgv_default_attrs,
+};
+
+/*
+ * SCST sysfs root directory implementation
+ */
+
+static ssize_t scst_threads_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int count;
+
+ count = sprintf(buf, "%d\n%s", scst_main_cmd_threads.nr_threads,
+ (scst_main_cmd_threads.nr_threads != scst_threads) ?
+ SCST_SYSFS_KEY_MARK "\n" : "");
+ return count;
+}
+
+static ssize_t scst_threads_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ long oldtn, newtn, delta;
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+
+ oldtn = scst_main_cmd_threads.nr_threads;
+
+ res = strict_strtol(buf, 0, &newtn);
+ if (res != 0) {
+ PRINT_ERROR("strict_strtol() for %s failed: %d", buf, res);
+ goto out_up;
+ }
+
+ if (newtn <= 0) {
+ PRINT_ERROR("Illegal threads num value %ld", newtn);
+ res = -EINVAL;
+ goto out_up;
+ }
+
+ delta = newtn - oldtn;
+ if (delta < 0)
+ scst_del_threads(&scst_main_cmd_threads, -delta);
+ else {
+ res = scst_add_threads(&scst_main_cmd_threads, NULL, NULL, delta);
+ if (res != 0)
+ goto out_up;
+ }
+
+ PRINT_INFO("Changed cmd threads num: old %ld, new %ld", oldtn, newtn);
+
+ res = count;
+
+out_up:
+ mutex_unlock(&scst_mutex);
+
+out:
+ return res;
+}
+
+static ssize_t scst_setup_id_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int count;
+
+ count = sprintf(buf, "0x%x%s\n", scst_setup_id,
+ (scst_setup_id == 0) ? "" : SCST_SYSFS_KEY_MARK "\n");
+ return count;
+}
+
+static ssize_t scst_setup_id_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ unsigned long val;
+
+ res = strict_strtoul(buf, 0, &val);
+ if (res != 0) {
+ PRINT_ERROR("strict_strtoul() for %s failed: %d ", buf, res);
+ goto out;
+ }
+
+ scst_setup_id = val;
+ PRINT_INFO("Changed scst_setup_id to %x", scst_setup_id);
+
+ res = count;
+
+out:
+ return res;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static void scst_read_trace_tlb(const struct scst_trace_log *tbl, char *buf,
+ unsigned long log_level, int *pos)
+{
+ const struct scst_trace_log *t = tbl;
+
+ if (t == NULL)
+ goto out;
+
+ while (t->token) {
+ if (log_level & t->val) {
+ *pos += sprintf(&buf[*pos], "%s%s",
+ (*pos == 0) ? "" : " | ",
+ t->token);
+ }
+ t++;
+ }
+out:
+ return;
+}
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+ unsigned long log_level, char *buf, const char *help)
+{
+ int pos = 0;
+
+ scst_read_trace_tlb(scst_trace_tbl, buf, log_level, &pos);
+ scst_read_trace_tlb(local_tbl, buf, log_level, &pos);
+
+ pos += sprintf(&buf[pos], "\n\n\nUsage:\n"
+ " echo \"all|none|default\" >trace_level\n"
+ " echo \"value DEC|0xHEX|0OCT\" >trace_level\n"
+ " echo \"add|del TOKEN\" >trace_level\n"
+ "\nwhere TOKEN is one of [debug, function, line, pid,\n"
+ " buff, mem, sg, out_of_mem,\n"
+ " special, scsi, mgmt, minor,\n"
+ " mgmt_dbg, scsi_serializing,\n"
+ " retry, recv_bot, send_bot, recv_top,\n"
+ " send_top%s]", help != NULL ? help : "");
+
+ return pos;
+}
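+
+/*
+ * Example (illustrative only): the root trace_level attribute is created in
+ * scst_sysfs_init() under /sys/kernel/scst_tgt, so e.g.:
+ *
+ * echo "add scsi" >/sys/kernel/scst_tgt/trace_level
+ * echo "value 0x11" >/sys/kernel/scst_tgt/trace_level
+ */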
+
+static ssize_t scst_main_trace_level_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return scst_trace_level_show(scst_local_trace_tbl, trace_flag,
+ buf, NULL);
+}
+
+static int scst_write_trace(const char *buf, size_t length,
+ unsigned long *log_level, unsigned long default_level,
+ const char *name, const struct scst_trace_log *tbl)
+{
+ int res = length;
+ int action;
+ unsigned long level = 0, oldlevel;
+ char *buffer, *p, *e;
+ const struct scst_trace_log *t;
+
+#define SCST_TRACE_ACTION_ALL 1
+#define SCST_TRACE_ACTION_NONE 2
+#define SCST_TRACE_ACTION_DEFAULT 3
+#define SCST_TRACE_ACTION_ADD 4
+#define SCST_TRACE_ACTION_DEL 5
+#define SCST_TRACE_ACTION_VALUE 6
+
+ if ((buf == NULL) || (length == 0)) {
+ res = -EINVAL;
+ goto out;
+ }
+
+ buffer = kmalloc(length+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ PRINT_ERROR("Unable to alloc intermediate buffer (size %zd)",
+ length+1);
+ res = -ENOMEM;
+ goto out;
+ }
+ memcpy(buffer, buf, length);
+ buffer[length] = '\0';
+
+ p = buffer;
+ if (!strncasecmp("all", p, 3)) {
+ action = SCST_TRACE_ACTION_ALL;
+ } else if (!strncasecmp("none", p, 4) || !strncasecmp("null", p, 4)) {
+ action = SCST_TRACE_ACTION_NONE;
+ } else if (!strncasecmp("default", p, 7)) {
+ action = SCST_TRACE_ACTION_DEFAULT;
+ } else if (!strncasecmp("add", p, 3)) {
+ p += 3;
+ action = SCST_TRACE_ACTION_ADD;
+ } else if (!strncasecmp("del", p, 3)) {
+ p += 3;
+ action = SCST_TRACE_ACTION_DEL;
+ } else if (!strncasecmp("value", p, 5)) {
+ p += 5;
+ action = SCST_TRACE_ACTION_VALUE;
+ } else {
+ if (p[strlen(p) - 1] == '\n')
+ p[strlen(p) - 1] = '\0';
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ switch (action) {
+ case SCST_TRACE_ACTION_ADD:
+ case SCST_TRACE_ACTION_DEL:
+ case SCST_TRACE_ACTION_VALUE:
+ if (!isspace(*p)) {
+ PRINT_ERROR("%s", "Syntax error");
+ res = -EINVAL;
+ goto out_free;
+ }
+ }
+
+ switch (action) {
+ case SCST_TRACE_ACTION_ALL:
+ level = TRACE_ALL;
+ break;
+ case SCST_TRACE_ACTION_DEFAULT:
+ level = default_level;
+ break;
+ case SCST_TRACE_ACTION_NONE:
+ level = TRACE_NULL;
+ break;
+ case SCST_TRACE_ACTION_ADD:
+ case SCST_TRACE_ACTION_DEL:
+ while (isspace(*p) && *p != '\0')
+ p++;
+ e = p;
+ while (!isspace(*e) && *e != '\0')
+ e++;
+ *e = 0;
+ if (tbl) {
+ t = tbl;
+ while (t->token) {
+ if (!strcasecmp(p, t->token)) {
+ level = t->val;
+ break;
+ }
+ t++;
+ }
+ }
+ if (level == 0) {
+ t = scst_trace_tbl;
+ while (t->token) {
+ if (!strcasecmp(p, t->token)) {
+ level = t->val;
+ break;
+ }
+ t++;
+ }
+ }
+ if (level == 0) {
+ PRINT_ERROR("Unknown token \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+ break;
+ case SCST_TRACE_ACTION_VALUE:
+ while (isspace(*p) && *p != '\0')
+ p++;
+ res = strict_strtoul(p, 0, &level);
+ if (res != 0) {
+ PRINT_ERROR("Invalid trace value \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+ break;
+ }
+
+ oldlevel = *log_level;
+
+ switch (action) {
+ case SCST_TRACE_ACTION_ADD:
+ *log_level |= level;
+ break;
+ case SCST_TRACE_ACTION_DEL:
+ *log_level &= ~level;
+ break;
+ default:
+ *log_level = level;
+ break;
+ }
+
+ PRINT_INFO("Changed trace level for \"%s\": old 0x%08lx, new 0x%08lx",
+ name, oldlevel, *log_level);
+
+out_free:
+ kfree(buffer);
+out:
+ return res;
+
+#undef SCST_TRACE_ACTION_ALL
+#undef SCST_TRACE_ACTION_NONE
+#undef SCST_TRACE_ACTION_DEFAULT
+#undef SCST_TRACE_ACTION_ADD
+#undef SCST_TRACE_ACTION_DEL
+#undef SCST_TRACE_ACTION_VALUE
+}
+
+static ssize_t scst_main_trace_level_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+
+ if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+
+ res = scst_write_trace(buf, count, &trace_flag,
+ SCST_DEFAULT_LOG_FLAGS, "scst", scst_local_trace_tbl);
+
+ mutex_unlock(&scst_log_mutex);
+
+out:
+ return res;
+}
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_version_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+
+ sprintf(buf, "%s\n", SCST_VERSION_STRING);
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+ strcat(buf, "STRICT_SERIALIZING\n");
+#endif
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+ strcat(buf, "EXTRACHECKS\n");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+ strcat(buf, "TRACING\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+ strcat(buf, "DEBUG\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_TM
+ strcat(buf, "DEBUG_TM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+ strcat(buf, "DEBUG_RETRY\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_OOM
+ strcat(buf, "DEBUG_OOM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_SN
+ strcat(buf, "DEBUG_SN\n");
+#endif
+
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+ strcat(buf, "USE_EXPECTED_VALUES\n");
+#endif
+
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+ strcat(buf, "ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ\n");
+#endif
+
+#ifdef CONFIG_SCST_STRICT_SECURITY
+ strcat(buf, "SCST_STRICT_SECURITY\n");
+#endif
+ return strlen(buf);
+}
+
+static struct kobj_attribute scst_threads_attr =
+ __ATTR(threads, S_IRUGO | S_IWUSR, scst_threads_show,
+ scst_threads_store);
+
+static struct kobj_attribute scst_setup_id_attr =
+ __ATTR(setup_id, S_IRUGO | S_IWUSR, scst_setup_id_show,
+ scst_setup_id_store);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+static struct kobj_attribute scst_trace_level_attr =
+ __ATTR(trace_level, S_IRUGO | S_IWUSR, scst_main_trace_level_show,
+ scst_main_trace_level_store);
+#endif
+
+static struct kobj_attribute scst_version_attr =
+ __ATTR(version, S_IRUGO, scst_version_show, NULL);
+
+static struct attribute *scst_sysfs_root_default_attrs[] = {
+ &scst_threads_attr.attr,
+ &scst_setup_id_attr.attr,
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ &scst_trace_level_attr.attr,
+#endif
+ &scst_version_attr.attr,
+ NULL,
+};
+
+static void scst_sysfs_root_release(struct kobject *kobj)
+{
+ complete_all(&scst_sysfs_root_release_completion);
+}
+
+static ssize_t scst_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ struct kobj_attribute *kobj_attr;
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ return kobj_attr->show(kobj, kobj_attr, buf);
+}
+
+static ssize_t scst_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count)
+{
+ struct kobj_attribute *kobj_attr;
+ kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+ return kobj_attr->store(kobj, kobj_attr, buf, count);
+}
+
+struct sysfs_ops scst_sysfs_ops = {
+ .show = scst_show,
+ .store = scst_store,
+};
+
+static struct kobj_type scst_sysfs_root_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_sysfs_root_release,
+ .default_attrs = scst_sysfs_root_default_attrs,
+};
+
+static void scst_devt_free(struct kobject *kobj)
+{
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ complete_all(&devt->devt_kobj_release_compl);
+
+ scst_devt_cleanup(devt);
+ return;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_devt_trace_level_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ return scst_trace_level_show(devt->trace_tbl,
+ devt->trace_flags ? *devt->trace_flags : 0, buf,
+ devt->trace_tbl_help);
+}
+
+static ssize_t scst_devt_trace_level_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+ res = -EINTR;
+ goto out;
+ }
+
+ res = scst_write_trace(buf, count, devt->trace_flags,
+ devt->default_trace_flags, devt->name, devt->trace_tbl);
+
+ mutex_unlock(&scst_log_mutex);
+
+out:
+ return res;
+}
+
+static struct kobj_attribute devt_trace_attr =
+ __ATTR(trace_level, S_IRUGO | S_IWUSR,
+ scst_devt_trace_level_show, scst_devt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_devt_type_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ int pos;
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ pos = sprintf(buf, "%d - %s\n", devt->type,
+ (unsigned)devt->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+ "unknown" : scst_dev_handler_types[devt->type]);
+
+ return pos;
+}
+
+static struct kobj_attribute scst_devt_type_attr =
+ __ATTR(type, S_IRUGO, scst_devt_type_show, NULL);
+
+static struct attribute *scst_devt_default_attrs[] = {
+ &scst_devt_type_attr.attr,
+ NULL,
+};
+
+static struct kobj_type scst_devt_ktype = {
+ .sysfs_ops = &scst_sysfs_ops,
+ .release = scst_devt_free,
+ .default_attrs = scst_devt_default_attrs,
+};
+
+static ssize_t scst_devt_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ char *help = "Usage: echo \"add_device device_name [parameters]\" "
+ ">mgmt\n"
+ " echo \"del_device device_name\" >mgmt\n"
+ "%s"
+ "\n"
+ "where parameters are one or more "
+ "param_name=value pairs separated by ';'\n"
+ "%s%s";
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ if (devt->add_device_parameters_help != NULL)
+ return sprintf(buf, help,
+ (devt->mgmt_cmd_help) ? devt->mgmt_cmd_help : "",
+ "\nThe following parameters available: ",
+ devt->add_device_parameters_help);
+ else
+ return sprintf(buf, help,
+ (devt->mgmt_cmd_help) ? devt->mgmt_cmd_help : "",
+ "", "");
+}
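+
+/*
+ * Example (illustrative sketch only; the handler name and the parameters are
+ * hypothetical and depend on what the dev handler's add_device() accepts):
+ *
+ * echo "add_device disk01 filename=/vdisks/disk01; blocksize=4096" \
+ *	>/sys/kernel/scst_tgt/handlers/<handler>/mgmt
+ */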
+
+static ssize_t scst_devt_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int res;
+ char *buffer, *p, *pp, *device_name;
+ struct scst_dev_type *devt;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+
+ pp = buffer;
+ if (pp[strlen(pp) - 1] == '\n')
+ pp[strlen(pp) - 1] = '\0';
+
+ p = scst_get_next_lexem(&pp);
+
+ if (strcasecmp("add_device", p) == 0) {
+ device_name = scst_get_next_lexem(&pp);
+ if (*device_name == '\0') {
+ PRINT_ERROR("%s", "Device name required");
+ res = -EINVAL;
+ goto out_free;
+ }
+ res = devt->add_device(device_name, pp);
+ } else if (strcasecmp("del_device", p) == 0) {
+ device_name = scst_get_next_lexem(&pp);
+ if (*device_name == '\0') {
+ PRINT_ERROR("%s", "Device name required");
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ p = scst_get_next_lexem(&pp);
+ if (*p != '\0')
+ goto out_syntax_err;
+
+ res = devt->del_device(device_name);
+ } else if (devt->mgmt_cmd != NULL) {
+ scst_restore_token_str(p, pp);
+ res = devt->mgmt_cmd(buffer);
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ if (res == 0)
+ res = count;
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+out_syntax_err:
+ PRINT_ERROR("Syntax error on \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+}
+
+static struct kobj_attribute scst_devt_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_mgmt_show,
+ scst_devt_mgmt_store);
+
+static ssize_t scst_devt_pass_through_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "1");
+}
+
+static struct kobj_attribute scst_devt_pass_through =
+ __ATTR(pass_through, S_IRUGO, scst_devt_pass_through_show, NULL);
+
+static ssize_t scst_devt_pass_through_mgmt_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ char *help = "Usage: echo \"assign H:C:I:L\" >mgmt\n"
+ " echo \"unassign H:C:I:L\" >mgmt\n";
+ return sprintf(buf, "%s", help);
+}
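+
+/*
+ * Example (illustrative only): assign the pass-through SCSI device at
+ * 2:0:0:0 to this dev handler:
+ *
+ * echo "assign 2:0:0:0" >/sys/kernel/scst_tgt/handlers/<handler>/mgmt
+ */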
+
+static ssize_t scst_devt_pass_through_mgmt_store(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int res;
+ char *buffer, *p, *pp, *action;
+ struct scst_dev_type *devt;
+ unsigned long host, channel, id, lun;
+ struct scst_device *d, *dev = NULL;
+
+ devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+ buffer = kzalloc(count+1, GFP_KERNEL);
+ if (buffer == NULL) {
+ res = -ENOMEM;
+ goto out;
+ }
+
+ memcpy(buffer, buf, count);
+ buffer[count] = '\0';
+
+ pp = buffer;
+ if (pp[strlen(pp) - 1] == '\n')
+ pp[strlen(pp) - 1] = '\0';
+
+ action = scst_get_next_lexem(&pp);
+ p = scst_get_next_lexem(&pp);
+ if (*p == '\0') {
+ PRINT_ERROR("%s", "Device required");
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ if (*scst_get_next_lexem(&pp) != '\0') {
+ PRINT_ERROR("%s", "Too many parameters");
+ res = -EINVAL;
+ goto out_free;
+ }
+
+ host = simple_strtoul(p, &p, 0);
+ if ((host == ULONG_MAX) || (*p != ':'))
+ goto out_syntax_err;
+ p++;
+ channel = simple_strtoul(p, &p, 0);
+ if ((channel == ULONG_MAX) || (*p != ':'))
+ goto out_syntax_err;
+ p++;
+ id = simple_strtoul(p, &p, 0);
+ if ((id == ULONG_MAX) || (*p != ':'))
+ goto out_syntax_err;
+ p++;
+ lun = simple_strtoul(p, &p, 0);
+ if (lun == ULONG_MAX)
+ goto out_syntax_err;
+
+ TRACE_DBG("Dev %ld:%ld:%ld:%ld", host, channel, id, lun);
+
+ if (mutex_lock_interruptible(&scst_mutex) != 0) {
+ res = -EINTR;
+ goto out_free;
+ }
+
+ list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+ if ((d->virt_id == 0) &&
+ d->scsi_dev->host->host_no == host &&
+ d->scsi_dev->channel == channel &&
+ d->scsi_dev->id == id &&
+ d->scsi_dev->lun == lun) {
+ dev = d;
+ TRACE_DBG("Dev %p (%ld:%ld:%ld:%ld) found",
+ dev, host, channel, id, lun);
+ break;
+ }
+ }
+ if (dev == NULL) {
+ PRINT_ERROR("Device %ld:%ld:%ld:%ld not found",
+ host, channel, id, lun);
+ res = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (dev->scsi_dev->type != devt->type) {
+ PRINT_ERROR("Type %d of device %s differs from type "
+ "%d of dev handler %s", dev->type,
+ dev->virt_name, devt->type, devt->name);
+ res = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (strcasecmp("assign", action) == 0)
+ res = scst_assign_dev_handler(dev, devt);
+ else if (strcasecmp("deassign", action) == 0) {
+ if (dev->handler != devt) {
+ PRINT_ERROR("Device %s is not assigned to handler %s",
+ dev->virt_name, devt->name);
+ res = -EINVAL;
+ goto out_unlock;
+ }
+ res = scst_assign_dev_handler(dev, &scst_null_devtype);
+ } else {
+ PRINT_ERROR("Unknown action \"%s\"", action);
+ res = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (res == 0)
+ res = count;
+
+out_unlock:
+ mutex_unlock(&scst_mutex);
+
+out_free:
+ kfree(buffer);
+
+out:
+ return res;
+
+out_syntax_err:
+ PRINT_ERROR("Syntax error on \"%s\"", p);
+ res = -EINVAL;
+ goto out_free;
+}
+
+static struct kobj_attribute scst_devt_pass_through_mgmt =
+ __ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_pass_through_mgmt_show,
+ scst_devt_pass_through_mgmt_store);
+
+int scst_create_devt_sysfs(struct scst_dev_type *devt)
+{
+ int retval;
+ struct kobject *parent;
+ const struct attribute **pattr;
+
+ init_completion(&devt->devt_kobj_release_compl);
+
+ if (devt->parent != NULL)
+ parent = &devt->parent->devt_kobj;
+ else
+ parent = scst_handlers_kobj;
+
+ devt->devt_kobj_initialized = 1;
+
+ retval = kobject_init_and_add(&devt->devt_kobj, &scst_devt_ktype,
+ parent, devt->name);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add devt %s to sysfs", devt->name);
+ goto out;
+ }
+
+ /*
+ * In case of errors there's no need for additional cleanup, because
+ * it will be done by the corresponding _put() function called by
+ * the caller.
+ */
+
+ if (devt->add_device != NULL) {
+ retval = sysfs_create_file(&devt->devt_kobj,
+ &scst_devt_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add mgmt attr for dev handler %s",
+ devt->name);
+ goto out;
+ }
+ } else if (devt->pass_through) {
+ retval = sysfs_create_file(&devt->devt_kobj,
+ &scst_devt_pass_through_mgmt.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add mgmt attr for dev handler %s",
+ devt->name);
+ goto out;
+ }
+
+ retval = sysfs_create_file(&devt->devt_kobj,
+ &scst_devt_pass_through.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add pass_through attr for dev "
+ "handler %s", devt->name);
+ goto out;
+ }
+ }
+
+ pattr = devt->devt_attrs;
+ if (pattr != NULL) {
+ while (*pattr != NULL) {
+ retval = sysfs_create_file(&devt->devt_kobj, *pattr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add devt attr %s for dev "
+ "handler %s", (*pattr)->name,
+ devt->name);
+ goto out;
+ }
+ pattr++;
+ }
+ }
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+ if (devt->trace_flags != NULL) {
+ retval = sysfs_create_file(&devt->devt_kobj,
+ &devt_trace_attr.attr);
+ if (retval != 0) {
+ PRINT_ERROR("Can't add devt trace_flag for dev "
+ "handler %s", devt->name);
+ goto out;
+ }
+ }
+#endif
+
+out:
+ return retval;
+}
+
+void scst_devt_sysfs_put(struct scst_dev_type *devt)
+{
+
+ if (devt->devt_kobj_initialized) {
+ int rc;
+
+ kobject_del(&devt->devt_kobj);
+ kobject_put(&devt->devt_kobj);
+
+ rc = wait_for_completion_timeout(&devt->devt_kobj_release_compl, HZ);
+ if (rc == 0) {
+ PRINT_INFO("Waiting for releasing sysfs entry "
+ "for dev handler template %s...", devt->name);
+ wait_for_completion(&devt->devt_kobj_release_compl);
+ PRINT_INFO("Done waiting for releasing sysfs entry "
+ "for dev handler template %s", devt->name);
+ }
+ } else
+ scst_devt_cleanup(devt);
+ return;
+}
+
+static DEFINE_MUTEX(scst_sysfs_user_info_mutex);
+
+/* All protected by scst_sysfs_user_info_mutex */
+static LIST_HEAD(scst_sysfs_user_info_list);
+static uint32_t scst_sysfs_info_cur_cookie;
+
+/* scst_sysfs_user_info_mutex supposed to be held */
+static struct scst_sysfs_user_info *scst_sysfs_user_find_info(uint32_t cookie)
+{
+ struct scst_sysfs_user_info *info, *res = NULL;
+
+ list_for_each_entry(info, &scst_sysfs_user_info_list,
+ info_list_entry) {
+ if (info->info_cookie == cookie) {
+ res = info;
+ break;
+ }
+ }
+ return res;
+}
+
+/**
+ * scst_sysfs_user_get_info() - get user_info
+ *
+ * Finds the user_info based on the given cookie and marks it as having
+ * received a reply by setting its info_being_executed flag.
+ *
+ * Returns found entry or NULL.
+ */
+struct scst_sysfs_user_info *scst_sysfs_user_get_info(uint32_t cookie)
+{
+ struct scst_sysfs_user_info *res = NULL;
+
+ mutex_lock(&scst_sysfs_user_info_mutex);
+
+ res = scst_sysfs_user_find_info(cookie);
+ if (res != NULL) {
+ if (!res->info_being_executed)
+ res->info_being_executed = 1;
+ }
+
+ mutex_unlock(&scst_sysfs_user_info_mutex);
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_get_info);
+
+/**
+ ** Helper functionality to help target drivers and dev handlers support
+ ** sending events to user space and waiting for their completion in a
+ ** safe manner. See iscsi-scst or scst_user for examples of how to use it.
+ **/
+
+/**
+ * scst_sysfs_user_add_info() - create and add user_info to the global list
+ *
+ * Creates an info structure and adds it to the info_list.
+ * Returns 0 and sets *out_info on success, an error code otherwise.
+ */
+int scst_sysfs_user_add_info(struct scst_sysfs_user_info **out_info)
+{
+ int res = 0;
+ struct scst_sysfs_user_info *info;
+
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (info == NULL) {
+ PRINT_ERROR("Unable to allocate sysfs user info (size %zd)",
+ sizeof(*info));
+ res = -ENOMEM;
+ goto out;
+ }
+
+ mutex_lock(&scst_sysfs_user_info_mutex);
+
+ while ((info->info_cookie == 0) ||
+ (scst_sysfs_user_find_info(info->info_cookie) != NULL))
+ info->info_cookie = scst_sysfs_info_cur_cookie++;
+
+ init_completion(&info->info_completion);
+
+ list_add_tail(&info->info_list_entry, &scst_sysfs_user_info_list);
+ info->info_in_list = 1;
+
+ *out_info = info;
+
+ mutex_unlock(&scst_sysfs_user_info_mutex);
+
+out:
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_add_info);
+
+/**
+ * scst_sysfs_user_del_info() - delete and free user_info
+ */
+void scst_sysfs_user_del_info(struct scst_sysfs_user_info *info)
+{
+
+ mutex_lock(&scst_sysfs_user_info_mutex);
+
+ if (info->info_in_list)
+ list_del(&info->info_list_entry);
+
+ mutex_unlock(&scst_sysfs_user_info_mutex);
+
+ kfree(info);
+ return;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_del_info);
+
+/*
+ * Returns true if a reply has been received and is being processed by another
+ * part of the kernel, false otherwise. Also removes the user_info from the
+ * list, so a late reply from user space is treated as having missed the
+ * timeout.
+ */
+static bool scst_sysfs_user_info_executing(struct scst_sysfs_user_info *info)
+{
+ bool res;
+
+ mutex_lock(&scst_sysfs_user_info_mutex);
+
+ res = info->info_being_executed;
+
+ if (info->info_in_list) {
+ list_del(&info->info_list_entry);
+ info->info_in_list = 0;
+ }
+
+ mutex_unlock(&scst_sysfs_user_info_mutex);
+ return res;
+}
+
+/**
+ * scst_wait_info_completion() - wait for a user space event's completion
+ *
+ * Waits at most timeout jiffies for the info request to be completed by
+ * user space. If the reply is received before the timeout and is being
+ * processed by another part of the kernel, i.e. scst_sysfs_user_info_executing()
+ * returned true, waits for it to complete indefinitely.
+ *
+ * Returns status of the request completion.
+ */
+int scst_wait_info_completion(struct scst_sysfs_user_info *info,
+ unsigned long timeout)
+{
+ int res, rc;
+
+ TRACE_DBG("Waiting for info %p completion", info);
+
+ while (1) {
+ rc = wait_for_completion_interruptible_timeout(
+ &info->info_completion, timeout);
+ if (rc > 0) {
+ TRACE_DBG("Waiting for info %p finished with %d",
+ info, rc);
+ break;
+ } else if (rc == 0) {
+ if (!scst_sysfs_user_info_executing(info)) {
+ PRINT_ERROR("Timeout waiting for user "
+ "space event %p", info);
+ res = -EBUSY;
+ goto out;
+ } else {
+ /* Req is being executed in the kernel */
+ TRACE_DBG("Keep waiting for info %p completion",
+ info);
+ wait_for_completion(&info->info_completion);
+ break;
+ }
+ } else if (rc != -ERESTARTSYS) {
+ res = rc;
+ PRINT_ERROR("wait_for_completion() failed: %d",
+ res);
+ goto out;
+ } else {
+ TRACE_DBG("Waiting for info %p finished with %d, "
+ "retrying", info, rc);
+ }
+ }
+
+ TRACE_DBG("info %p, status %d", info, info->info_status);
+ res = info->info_status;
+
+out:
+ return res;
+}
+EXPORT_SYMBOL_GPL(scst_wait_info_completion);
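+
+/*
+ * Typical usage of the user_info helpers by a target driver or dev handler
+ * (a simplified sketch; how the event and cookie reach user space is driver
+ * specific, e.g. via a char device or netlink interface):
+ *
+ *	struct scst_sysfs_user_info *info;
+ *	int res;
+ *
+ *	res = scst_sysfs_user_add_info(&info);
+ *	if (res != 0)
+ *		return res;
+ *	... deliver the event together with info->info_cookie to user space ...
+ *	res = scst_wait_info_completion(info, 30 * HZ);
+ *	scst_sysfs_user_del_info(info);
+ *	return res;
+ *
+ * The reply path then calls scst_sysfs_user_get_info(cookie), fills in
+ * info->info_status and completes info->info_completion.
+ */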
+
+int __init scst_sysfs_init(void)
+{
+ int retval = 0;
+
+ retval = kobject_init_and_add(&scst_sysfs_root_kobj,
+ &scst_sysfs_root_ktype, kernel_kobj, "%s", "scst_tgt");
+ if (retval != 0)
+ goto sysfs_root_add_error;
+
+ scst_targets_kobj = kobject_create_and_add("targets",
+ &scst_sysfs_root_kobj);
+ if (scst_targets_kobj == NULL)
+ goto targets_kobj_error;
+
+ scst_devices_kobj = kobject_create_and_add("devices",
+ &scst_sysfs_root_kobj);
+ if (scst_devices_kobj == NULL)
+ goto devices_kobj_error;
+
+ scst_sgv_kobj = kzalloc(sizeof(*scst_sgv_kobj), GFP_KERNEL);
+ if (scst_sgv_kobj == NULL)
+ goto sgv_kobj_error;
+
+ retval = kobject_init_and_add(scst_sgv_kobj, &sgv_ktype,
+ &scst_sysfs_root_kobj, "%s", "sgv");
+ if (retval != 0)
+ goto sgv_kobj_add_error;
+
+ scst_handlers_kobj = kobject_create_and_add("handlers",
+ &scst_sysfs_root_kobj);
+ if (scst_handlers_kobj == NULL)
+ goto handlers_kobj_error;
+
+out:
+ return retval;
+
+handlers_kobj_error:
+ kobject_del(scst_sgv_kobj);
+
+sgv_kobj_add_error:
+ kobject_put(scst_sgv_kobj);
+
+sgv_kobj_error:
+ kobject_del(scst_devices_kobj);
+ kobject_put(scst_devices_kobj);
+
+devices_kobj_error:
+ kobject_del(scst_targets_kobj);
+ kobject_put(scst_targets_kobj);
+
+targets_kobj_error:
+ kobject_del(&scst_sysfs_root_kobj);
+
+sysfs_root_add_error:
+ kobject_put(&scst_sysfs_root_kobj);
+
+ if (retval == 0)
+ retval = -EINVAL;
+ goto out;
+}
+
+void scst_sysfs_cleanup(void)
+{
+
+ PRINT_INFO("%s", "Exiting SCST sysfs hierarchy...");
+
+ kobject_del(scst_sgv_kobj);
+ kobject_put(scst_sgv_kobj);
+
+ kobject_del(scst_devices_kobj);
+ kobject_put(scst_devices_kobj);
+
+ kobject_del(scst_targets_kobj);
+ kobject_put(scst_targets_kobj);
+
+ kobject_del(scst_handlers_kobj);
+ kobject_put(scst_handlers_kobj);
+
+ kobject_del(&scst_sysfs_root_kobj);
+ kobject_put(&scst_sysfs_root_kobj);
+
+ wait_for_completion(&scst_sysfs_root_release_completion);
+ /*
+ * There is a race: if a reschedule happens in release() just after
+ * complete() is called and the SCST module is then unloaded
+ * immediately, an oops will occur. So let's give release() a chance
+ * to finish gracefully. Unfortunately, the current kobject
+ * implementation doesn't allow a better way to handle this.
+ */
+ msleep(3000);
+
+ PRINT_INFO("%s", "Exiting SCST sysfs hierarchy done");
+ return;
+}
^ permalink raw reply [flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 9/12/1/5] SCST debugging support
[not found] ` <4BC44D08.4060907@vlnb.net>
` (7 preceding siblings ...)
2010-04-13 13:06 ` [PATCH][RFC 8/12/1/5] SCST sysfs interface Vladislav Bolkhovitin
@ 2010-04-13 13:06 ` Vladislav Bolkhovitin
2010-04-13 13:06 ` [PATCH][RFC 10/12/1/5] SCST external modules support Vladislav Bolkhovitin
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains SCST debugging support routines.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
drivers/scst/scst_debug.c | 136 ++++++++++++++++++++++
include/scst/scst_debug.h | 276 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 412 insertions(+)
diff -uprN orig/linux-2.6.33/include/scst/scst_debug.h linux-2.6.33/include/scst/scst_debug.h
--- orig/linux-2.6.33/include/scst/scst_debug.h
+++ linux-2.6.33/include/scst/scst_debug.h
@@ -0,0 +1,276 @@
+/*
+ * include/scst/scst_debug.h
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Contains macros for execution tracing and error reporting
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __SCST_DEBUG_H
+#define __SCST_DEBUG_H
+
+#include <generated/autoconf.h> /* for CONFIG_* */
+
+#include <linux/bug.h> /* for WARN_ON_ONCE */
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+#define EXTRACHECKS_BUG_ON(a) BUG_ON(a)
+#define EXTRACHECKS_WARN_ON(a) WARN_ON(a)
+#define EXTRACHECKS_WARN_ON_ONCE(a) WARN_ON_ONCE(a)
+#else
+#define EXTRACHECKS_BUG_ON(a) do { } while (0)
+#define EXTRACHECKS_WARN_ON(a) do { } while (0)
+#define EXTRACHECKS_WARN_ON_ONCE(a) do { } while (0)
+#endif
+
+#define TRACE_NULL 0x00000000
+#define TRACE_DEBUG 0x00000001
+#define TRACE_FUNCTION 0x00000002
+#define TRACE_LINE 0x00000004
+#define TRACE_PID 0x00000008
+#define TRACE_BUFF 0x00000020
+#define TRACE_MEMORY 0x00000040
+#define TRACE_SG_OP 0x00000080
+#define TRACE_OUT_OF_MEM 0x00000100
+#define TRACE_MINOR 0x00000200 /* less important events */
+#define TRACE_MGMT 0x00000400
+#define TRACE_MGMT_DEBUG 0x00000800
+#define TRACE_SCSI 0x00001000
+#define TRACE_SPECIAL 0x00002000 /* filtering debug, etc */
+#define TRACE_FLOW_CONTROL 0x00004000 /* flow control in action */
+#define TRACE_ALL 0xffffffff
+/* Flags 0xXXXX0000 are local for users */
+
+#define TRACE_MINOR_AND_MGMT_DBG (TRACE_MINOR|TRACE_MGMT_DEBUG)
+
+#ifndef KERN_CONT
+#define KERN_CONT ""
+#endif
+
+/*
+ * Note: in the next two printk() statements the KERN_CONT macro is only
+ * present to suppress a checkpatch warning (KERN_CONT is defined as "").
+ */
+#define PRINT(log_flag, format, args...) \
+ printk(log_flag format "\n", ## args)
+#define PRINTN(log_flag, format, args...) \
+ printk(log_flag format, ## args)
+
+#ifdef LOG_PREFIX
+#define __LOG_PREFIX LOG_PREFIX
+#else
+#define __LOG_PREFIX NULL
+#endif
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+#ifndef CONFIG_SCST_DEBUG
+#define ___unlikely(a) (a)
+#else
+#define ___unlikely(a) unlikely(a)
+#endif
+
+/*
+ * We don't print a prefix for debug traces to avoid putting additional
+ * pressure on the logging system when there is a lot of logging.
+ */
+
+extern int debug_print_prefix(unsigned long trace_flag,
+ const char *prefix, const char *func, int line);
+extern void debug_print_buffer(const void *data, int len);
+
+#define TRACE(trace, format, args...) \
+do { \
+ if (___unlikely(trace_flag & (trace))) { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, \
+ __func__, __LINE__); \
+ PRINT(KERN_CONT, format, args); \
+ } \
+} while (0)
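+
+/*
+ * Example (illustrative only; cmd and status are hypothetical variables):
+ *	TRACE(TRACE_SCSI, "cmd %p, status %d", cmd, status);
+ */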
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define PRINT_BUFFER(message, buff, len) \
+do { \
+ PRINT(KERN_INFO, "%s:%s:", __func__, message); \
+ debug_print_buffer(buff, len); \
+} while (0)
+
+#else
+
+#define PRINT_BUFFER(message, buff, len) \
+do { \
+ PRINT(KERN_INFO, "%s:", message); \
+ debug_print_buffer(buff, len); \
+} while (0)
+
+#endif
+
+#define PRINT_BUFF_FLAG(flag, message, buff, len) \
+do { \
+ if (___unlikely(trace_flag & (flag))) { \
+ debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+ PRINT(KERN_CONT, "%s:", message); \
+ debug_print_buffer(buff, len); \
+ } \
+} while (0)
+
+#else /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
+
+#define TRACE(trace, args...) do {} while (0)
+#define PRINT_BUFFER(message, buff, len) do {} while (0)
+#define PRINT_BUFF_FLAG(flag, message, buff, len) do {} while (0)
+
+#endif /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define TRACE_DBG_FLAG(trace, format, args...) \
+do { \
+ if (trace_flag & (trace)) { \
+ debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+ PRINT(KERN_CONT, format, args); \
+ } \
+} while (0)
+
+#define TRACE_MEM(args...) TRACE_DBG_FLAG(TRACE_MEMORY, args)
+#define TRACE_SG(args...) TRACE_DBG_FLAG(TRACE_SG_OP, args)
+#define TRACE_DBG(args...) TRACE_DBG_FLAG(TRACE_DEBUG, args)
+#define TRACE_DBG_SPECIAL(args...) TRACE_DBG_FLAG(TRACE_DEBUG|TRACE_SPECIAL, args)
+#define TRACE_MGMT_DBG(args...) TRACE_DBG_FLAG(TRACE_MGMT_DEBUG, args)
+#define TRACE_MGMT_DBG_SPECIAL(args...) \
+ TRACE_DBG_FLAG(TRACE_MGMT_DEBUG|TRACE_SPECIAL, args)
+
+#define TRACE_BUFFER(message, buff, len) \
+do { \
+ if (trace_flag & TRACE_BUFF) { \
+ debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+ PRINT(KERN_CONT, "%s:", message); \
+ debug_print_buffer(buff, len); \
+ } \
+} while (0)
+
+#define TRACE_BUFF_FLAG(flag, message, buff, len) \
+do { \
+ if (trace_flag & (flag)) { \
+ debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+ PRINT(KERN_CONT, "%s:", message); \
+ debug_print_buffer(buff, len); \
+ } \
+} while (0)
+
+#define PRINT_LOG_FLAG(log_flag, format, args...) \
+do { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+ PRINT(KERN_CONT, format, args); \
+} while (0)
+
+#define PRINT_WARNING(format, args...) \
+do { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+ PRINT(KERN_CONT, "***WARNING***: " format, args); \
+} while (0)
+
+#define PRINT_ERROR(format, args...) \
+do { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+ PRINT(KERN_CONT, "***ERROR***: " format, args); \
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...) \
+do { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+ PRINT(KERN_CONT, "***CRITICAL ERROR***: " format, args); \
+} while (0)
+
+#define PRINT_INFO(format, args...) \
+do { \
+ debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+ PRINT(KERN_CONT, format, args); \
+} while (0)
+
+#else /* CONFIG_SCST_DEBUG */
+
+#define TRACE_MEM(format, args...) do {} while (0)
+#define TRACE_SG(format, args...) do {} while (0)
+#define TRACE_DBG(format, args...) do {} while (0)
+#define TRACE_DBG_FLAG(trace, format, args...) do {} while (0)
+#define TRACE_DBG_SPECIAL(format, args...) do {} while (0)
+#define TRACE_MGMT_DBG(format, args...) do {} while (0)
+#define TRACE_MGMT_DBG_SPECIAL(format, args...) do {} while (0)
+#define TRACE_BUFFER(message, buff, len) do {} while (0)
+#define TRACE_BUFF_FLAG(flag, message, buff, len) do {} while (0)
+
+#ifdef LOG_PREFIX
+
+#define PRINT_INFO(format, args...) \
+do { \
+ PRINT(KERN_INFO, "%s: " format, LOG_PREFIX, args); \
+} while (0)
+
+#define PRINT_WARNING(format, args...) \
+do { \
+ PRINT(KERN_INFO, "%s: ***WARNING***: " \
+ format, LOG_PREFIX, args); \
+} while (0)
+
+#define PRINT_ERROR(format, args...) \
+do { \
+ PRINT(KERN_INFO, "%s: ***ERROR***: " \
+ format, LOG_PREFIX, args); \
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...) \
+do { \
+ PRINT(KERN_INFO, "%s: ***CRITICAL ERROR***: " \
+ format, LOG_PREFIX, args); \
+} while (0)
+
+#else
+
+#define PRINT_INFO(format, args...) \
+do { \
+ PRINT(KERN_INFO, format, args); \
+} while (0)
+
+#define PRINT_WARNING(format, args...) \
+do { \
+ PRINT(KERN_INFO, "***WARNING***: " \
+ format, args); \
+} while (0)
+
+#define PRINT_ERROR(format, args...) \
+do { \
+ PRINT(KERN_ERR, "***ERROR***: " \
+ format, args); \
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...) \
+do { \
+ PRINT(KERN_CRIT, "***CRITICAL ERROR***: " \
+ format, args); \
+} while (0)
+
+#endif /* LOG_PREFIX */
+
+#endif /* CONFIG_SCST_DEBUG */
+
+#if defined(CONFIG_SCST_DEBUG) && defined(CONFIG_DEBUG_SLAB)
+#define SCST_SLAB_FLAGS (SLAB_RED_ZONE | SLAB_POISON)
+#else
+#define SCST_SLAB_FLAGS 0L
+#endif
+
+#endif /* __SCST_DEBUG_H */
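
For reviewers, here is a minimal sketch of how a module is expected to use
these macros. It is illustrative only and not part of the patch: the
"my_tgt" names and the chosen flag mask are hypothetical. The TRACE*() and
*_FLAG() macros test an unqualified trace_flag, so the including module is
assumed to map that name onto its own flags variable, and LOG_PREFIX, if
desired, has to be defined before including scst_debug.h:

    /* my_tgt_dbg_example.c - illustrative sketch only */
    #include <linux/types.h>

    #define LOG_PREFIX "my_tgt"             /* prepended by PRINT_*() */
    #include "scst_debug.h"

    /* The macros test this flags word via the name trace_flag */
    static unsigned long my_tgt_trace_flag = TRACE_MINOR | TRACE_DEBUG;
    #define trace_flag my_tgt_trace_flag

    static void my_tgt_show_cdb(const u8 *cdb, int cdb_len)
    {
            /* Compiled out unless CONFIG_SCST_DEBUG is set */
            TRACE_DBG("incoming CDB, opcode 0x%02x, len %d", cdb[0], cdb_len);

            /* Dumped only if CONFIG_SCST_DEBUG and the TRACE_DEBUG bit are set */
            TRACE_BUFF_FLAG(TRACE_DEBUG, "CDB", cdb, cdb_len);

            if (cdb_len <= 0)
                    PRINT_ERROR("invalid CDB length %d", cdb_len);
    }
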
diff -uprN orig/linux-2.6.33/drivers/scst/scst_debug.c linux-2.6.33/drivers/scst/scst_debug.c
--- orig/linux-2.6.33/drivers/scst/scst_debug.c
+++ linux-2.6.33/drivers/scst/scst_debug.c
@@ -0,0 +1,136 @@
+/*
+ * scst_debug.c
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Contains helper functions for execution tracing and error reporting.
+ * Intended to be included in main .c file.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "scst.h"
+#include "scst_debug.h"
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+#define TRACE_BUF_SIZE 512
+
+static char trace_buf[TRACE_BUF_SIZE];
+static DEFINE_SPINLOCK(trace_buf_lock);
+
+static inline int get_current_tid(void)
+{
+ /* Code should be the same as in sys_gettid() */
+ if (in_interrupt()) {
+ /*
+ * Unfortunately, task_pid_vnr() isn't IRQ-safe, so calling it
+ * here could oops. ToDo.
+ */
+ return 0;
+ }
+ return task_pid_vnr(current);
+}
+
+/**
+ * debug_print_prefix() - print the debug prefix for a log line
+ *
+ * Prints the prefix of a log line (PID, log prefix, function name and
+ * line number), depending on which bits are set in trace_flag.
+ */
+int debug_print_prefix(unsigned long trace_flag,
+ const char *prefix, const char *func, int line)
+{
+ int i = 0;
+ unsigned long flags;
+ int pid = get_current_tid();
+
+ spin_lock_irqsave(&trace_buf_lock, flags);
+
+ trace_buf[0] = '\0';
+
+ if (trace_flag & TRACE_PID)
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE, "[%d]: ", pid);
+ if (prefix != NULL)
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%s: ",
+ prefix);
+ if (trace_flag & TRACE_FUNCTION)
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%s:", func);
+ if (trace_flag & TRACE_LINE)
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%i:", line);
+
+ PRINTN(KERN_INFO, "%s", trace_buf);
+
+ spin_unlock_irqrestore(&trace_buf_lock, flags);
+
+ return i;
+}
+EXPORT_SYMBOL(debug_print_prefix);
+
+/**
+ * debug_print_buffer() - print a buffer
+ *
+ * Prints the contents of the buffer to the log as a hex dump with an
+ * ASCII column.
+ */
+void debug_print_buffer(const void *data, int len)
+{
+ int z, z1, i;
+ const unsigned char *buf = (const unsigned char *) data;
+ unsigned long flags;
+
+ if (buf == NULL)
+ return;
+
+ spin_lock_irqsave(&trace_buf_lock, flags);
+
+ PRINT(KERN_INFO, " (h)___0__1__2__3__4__5__6__7__8__9__A__B__C__D__E__F");
+ for (z = 0, z1 = 0, i = 0; z < len; z++) {
+ if (z % 16 == 0) {
+ if (z != 0) {
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i,
+ " ");
+ for (; (z1 < z) && (i < TRACE_BUF_SIZE - 1);
+ z1++) {
+ if ((buf[z1] >= 0x20) &&
+ (buf[z1] < 0x80))
+ trace_buf[i++] = buf[z1];
+ else
+ trace_buf[i++] = '.';
+ }
+ trace_buf[i] = '\0';
+ PRINT(KERN_INFO, "%s", trace_buf);
+ i = 0;
+ }
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i,
+ "%4x: ", z);
+ }
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%02x ",
+ buf[z]);
+ }
+
+ i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, " ");
+ for (; (z1 < z) && (i < TRACE_BUF_SIZE - 1); z1++) {
+ if ((buf[z1] >= 0x20) && (buf[z1] < 0x80))
+ trace_buf[i++] = buf[z1];
+ else
+ trace_buf[i++] = '.';
+ }
+ trace_buf[i] = '\0';
+
+ PRINT(KERN_INFO, "%s", trace_buf);
+
+ spin_unlock_irqrestore(&trace_buf_lock, flags);
+ return;
+}
+EXPORT_SYMBOL(debug_print_buffer);
+
+#endif /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
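
As a usage illustration (not part of the patch), the fragment below shows a
direct call to debug_print_buffer() -- normally it is only reached through
the TRACE_BUFFER()/PRINT_BUFFER() family of macros -- together with the
rough shape of the resulting KERN_INFO output for a 20-byte buffer holding
the values 0x00..0x13; the exact spacing follows the format strings above:

    /* Illustrative fragment only */
    u8 buf[20];
    int i;

    for (i = 0; i < (int)sizeof(buf); i++)
            buf[i] = i;
    debug_print_buffer(buf, sizeof(buf));

    /* Roughly expected log lines:
     *  (h)___0__1__2__3__4__5__6__7__8__9__A__B__C__D__E__F
     *    0: 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f   ................
     *   10: 10 11 12 13   ....
     */
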
* Re: [PATCH][RFC 10/12/1/5] SCST external modules support
[not found] ` <4BC44D08.4060907@vlnb.net>
` (8 preceding siblings ...)
2010-04-13 13:06 ` [PATCH][RFC 9/12/1/5] SCST debugging support Vladislav Bolkhovitin
@ 2010-04-13 13:06 ` Vladislav Bolkhovitin
9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
To: linux-scsi
Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
linux-driver
This patch contains SCST external modules support routines.
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
scst_module.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff -uprN orig/linux-2.6.33/drivers/scst/scst_module.c linux-2.6.33/drivers/scst/scst_module.c
--- orig/linux-2.6.33/drivers/scst/scst_module.c
+++ linux-2.6.33/drivers/scst/scst_module.c
@@ -0,0 +1,63 @@
+/*
+ * scst_module.c
+ *
+ * Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ * Copyright (C) 2004 - 2005 Leonid Stoljar
+ * Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ * Support for loading target modules. The usage is similar to scsi_module.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2
+ * of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+
+#include <scst.h>
+
+static int __init init_this_scst_driver(void)
+{
+ int res;
+
+ res = scst_register_target_template(&driver_target_template);
+ TRACE_DBG("scst_register_target_template() returned %d", res);
+ if (res < 0)
+ goto out;
+
+#ifdef SCST_REGISTER_INITIATOR_DRIVER
+ driver_template.module = THIS_MODULE;
+ scsi_register_module(MODULE_SCSI_HA, &driver_template);
+ TRACE_DBG("driver_template.present=%d",
+ driver_template.present);
+ if (driver_template.present == 0) {
+ res = -ENODEV;
+ MOD_DEC_USE_COUNT;
+ goto out;
+ }
+#endif
+
+out:
+ return res;
+}
+
+static void __exit exit_this_scst_driver(void)
+{
+
+#ifdef SCST_REGISTER_INITIATOR_DRIVER
+ scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+#endif
+
+ scst_unregister_target_template(&driver_target_template);
+ return;
+}
+
+module_init(init_this_scst_driver);
+module_exit(exit_this_scst_driver);
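
For reference, the intended usage mirrors the old scsi_module.c idiom: a
target driver defines driver_target_template and then includes this file at
the end of its source. The sketch below is illustrative only and not part
of the patch; the "my_tgt" callbacks are hypothetical, and the
scst_tgt_template fields are assumed to be those declared in scst.h:

    /* my_tgt_main.c - illustrative sketch only */
    #include <linux/module.h>
    #include <scst.h>

    /* Hypothetical callbacks; a real driver does the actual work here */
    static int my_tgt_detect(struct scst_tgt_template *tt) { return 0; }
    static int my_tgt_release(struct scst_tgt *tgt) { return 0; }
    static int my_tgt_xmit_response(struct scst_cmd *cmd)
    {
            return SCST_TGT_RES_SUCCESS;
    }

    static struct scst_tgt_template driver_target_template = {
            .name           = "my_tgt",
            .detect         = my_tgt_detect,
            .release        = my_tgt_release,
            .xmit_response  = my_tgt_xmit_response,
    };

    /* Supplies module_init()/module_exit() that (un)register the template */
    #include "scst_module.c"

    MODULE_LICENSE("GPL");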