linux-scsi.vger.kernel.org archive mirror
* Re: [PATCH][RFC 0/12/1/5] SCST core
       [not found] <4BC44A49.7070307@vlnb.net>
@ 2010-04-13 13:04 ` Vladislav Bolkhovitin
       [not found] ` <4BC44D08.4060907@vlnb.net>
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patchset contains the SCST core together with its documentation. Where
possible, it was split based on functionality grouping; in the remaining
cases it was split on a per-file basis.

Vlad

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig
       [not found] ` <4BC44D08.4060907@vlnb.net>
@ 2010-04-13 13:04   ` Vladislav Bolkhovitin
  2010-04-13 13:04   ` [PATCH][RFC 2/12/1/5] SCST core's external headers Vladislav Bolkhovitin
                     ` (8 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Bart Van Assche,
	James Smart, Joe Eykholt, Andy Yan, linux-driver, Vu Pham,
	Linus Torvalds

This patch contains SCST core's Makefile and Kconfig.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 Kconfig  |  246 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 Makefile |   11 ++
 2 files changed, 257 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/Kconfig linux-2.6.33/drivers/scst/Kconfig
--- orig/linux-2.6.33/drivers/scst/Kconfig
+++ linux-2.6.33/drivers/scst/Kconfig
@@ -0,0 +1,246 @@
+menu "SCSI target (SCST) support"
+
+config SCST
+	tristate "SCSI target (SCST) support"
+	depends on SCSI
+	help
+	  SCSI target (SCST) is designed to provide a unified, consistent
+	  interface between SCSI target drivers and the Linux kernel and to
+	  simplify target driver development as much as possible. Visit
+	  http://scst.sourceforge.net for more information about it.
+
+config SCST_DISK
+	tristate "SCSI target disk support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for disk devices.
+
+config SCST_TAPE
+	tristate "SCSI target tape support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for tape devices.
+
+config SCST_CDROM
+	tristate "SCSI target CDROM support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for CDROM devices.
+
+config SCST_MODISK
+	tristate "SCSI target MO disk support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for MO disk devices.
+
+config SCST_CHANGER
+	tristate "SCSI target changer support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for changer devices.
+
+config SCST_PROCESSOR
+	tristate "SCSI target processor support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for processor devices.
+
+config SCST_RAID
+	tristate "SCSI target storage array controller (RAID) support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST pass-through device handler for storage array controller (RAID) devices.
+
+config SCST_VDISK
+	tristate "SCSI target virtual disk and/or CDROM support"
+	default SCST
+	depends on SCSI && SCST
+	help
+	  SCST device handler for virtual disk and/or CDROM devices.
+
+config SCST_STRICT_SERIALIZING
+	bool "Strict serialization"
+	depends on SCST
+	help
+	  Enable strict SCSI command serialization. When enabled, SCST sends
+	  all SCSI commands to the underlying SCSI device synchronously, one
+	  after another. This makes task management more reliable, at the cost
+	  of a performance penalty. This is most useful for stateful SCSI
+	  devices like tapes, where the result of a command's execution
+	  depends on the device settings configured by previous commands. Disk
+	  and RAID devices are stateless in most cases. The current SCSI core
+	  in Linux doesn't allow aborting all commands reliably if they have
+	  been sent asynchronously to a stateful device.
+	  Enable this option if you use stateful device(s) and need as much
+	  error recovery reliability as possible.
+
+	  If unsure, say "N".
+
+config SCST_STRICT_SECURITY
+	bool "Strict security"
+	depends on SCST
+	help
+	  Makes SCST clear (zero-fill) allocated data buffers. Note: this has a
+	  significant performance penalty.
+
+	  If unsure, say "N".
+
+config SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+	bool "Allow pass-through commands to be sent from soft-IRQ context"
+	depends on SCST
+	help
+	  Allows SCST to submit SCSI pass-through commands to real SCSI devices
+	  via the SCSI middle layer, using the scsi_execute_async() function,
+	  from soft-IRQ context (tasklets). This used to be the default, but
+	  the SCSI middle layer now seems to expect only thread context on the
+	  I/O submission path, so this option is disabled by default.
+	  Enabling it will decrease the number of context switches and improve
+	  performance. It is more or less safe: in the worst case, if in your
+	  configuration the SCSI middle layer really doesn't expect soft-IRQ
+	  context in the scsi_execute_async() function, you will get a warning
+	  message in the kernel log.
+
+	  If unsure, say "N".
+
+config SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING
+	bool "Send back UNKNOWN TASK when an already finished task is aborted"
+	depends on SCST
+	help
+	  Controls which response is sent by SCST to the initiator in case
+	  the initiator attempts to abort (ABORT TASK) an already finished
+	  request. If this option is enabled, the response UNKNOWN TASK is
+	  sent back to the initiator. However, some initiators, particularly
+	  the VMware iSCSI initiator, interpret the UNKNOWN TASK response as
+	  a sign that the target has malfunctioned and try to RESET it, after
+	  which the initiator sometimes malfunctions itself.
+
+	  If unsure, say "N".
+
+config SCST_USE_EXPECTED_VALUES
+	bool "Prefer initiator-supplied SCSI command attributes"
+	depends on SCST
+	help
+	  When SCST receives a SCSI command from an initiator, such a SCSI
+	  command has both data transfer length and direction attributes.
+	  There are two possible sources for these attributes: either the
+	  values computed by SCST from its internal command translation table
+	  or the values supplied by the initiator. The former are used by
+	  default for security reasons. Invalid initiator-supplied
+	  attributes can crash the target, especially in pass-through mode.
+	  Only consider enabling this option when SCST logs the following
+	  message: "Unknown opcode XX for YY. Should you update
+	  scst_scsi_op_table?" and when the initiator complains. Please
+	  report any unrecognized commands to scst-devel@lists.sourceforge.net.
+
+	  If unsure, say "N".
+
+config SCST_EXTRACHECKS
+	bool "Extra consistency checks"
+	depends on SCST
+	help
+	  Enable additional consistency checks in the SCSI middle level target
+	  code. This may be helpful for SCST developers. Enable it if you have
+	  any problems.
+
+	  If unsure, say "N".
+
+config SCST_TRACING
+	bool "Tracing support"
+	depends on SCST
+	default y
+	help
+	  Enable SCSI middle level tracing support. Tracing can be controlled
+	  dynamically via the sysfs interface. The traced information
+	  is sent to the kernel log and may be very helpful when analyzing
+	  the cause of a communication problem between initiator and target.
+
+	  If unsure, say "Y".
+
+config SCST_DEBUG
+	bool "Debugging support"
+	depends on SCST
+	select DEBUG_BUGVERBOSE
+	help
+	  Enables support for debugging SCST. This may be helpful for SCST
+	  developers.
+
+	  If unsure, say "N".
+
+config SCST_DEBUG_OOM
+	bool "Out-of-memory debugging support"
+	depends on SCST
+	help
+	  Let SCST's internal memory allocation function,
+	  scst_alloc_sg_entries(), fail about once in every 10000 calls, at
+	  least if the flag __GFP_NOFAIL has not been set. This allows SCST
+	  developers to test the behavior of SCST in out-of-memory conditions.
+
+	  If unsure, say "N".
+
+config SCST_DEBUG_RETRY
+	bool "SCSI command retry debugging support"
+	depends on SCST
+	help
+	  Let SCST's internal SCSI command transfer function,
+	  scst_rdy_to_xfer(), fail about once in every 100 calls. This allows
+	  SCST developers to test the behavior of SCST when SCSI queues fill
+	  up.
+
+	  If unsure, say "N".
+
+config SCST_DEBUG_SN
+	bool "SCSI sequence number debugging support"
+	depends on SCST
+	help
+	  Allows testing SCSI command ordering via sequence numbers by
+	  randomly changing the queue type of about one in 300 SCSI commands
+	  to SCST_CMD_QUEUE_ORDERED, SCST_CMD_QUEUE_HEAD_OF_QUEUE or
+	  SCST_CMD_QUEUE_SIMPLE.
+	  This may be helpful for SCST developers.
+
+	  If unsure, say "N".
+
+config SCST_DEBUG_TM
+	bool "Task management debugging support"
+	depends on SCST_DEBUG
+	help
+	  Enables support for debugging of SCST's task management functions.
+	  When enabled, some of the commands on LUN 0 in the default access
+	  control group will be delayed for about 60 seconds. This will
+	  cause the remote initiator to send SCSI task management functions,
+	  e.g. ABORT TASK and TARGET RESET.
+
+	  If unsure, say "N".
+
+config SCST_TM_DBG_GO_OFFLINE
+	bool "Let devices become completely unresponsive"
+	depends on SCST_DEBUG_TM
+	help
+	  Enable this option if you want the device to eventually become
+	  completely unresponsive. When this option is disabled, the device
+	  will receive ABORT and RESET commands.
+
+config SCST_MEASURE_LATENCY
+	bool "Commands processing latency measurement facility"
+	depends on SCST
+	help
+	  This option enables the command processing latency measurement
+	  facility in SCST. It provides average command processing latency
+	  statistics via the sysfs interface. You can clear the accumulated
+	  results by writing 0 into the corresponding sysfs file.
+	  Note that you need a non-preemptible kernel to get correct results.
+
+	  If unsure, say "N".
+
+source "drivers/scst/iscsi-scst/Kconfig"
+source "drivers/scst/srpt/Kconfig"
+
+endmenu
diff -uprN orig/linux-2.6.33/drivers/scst/Makefile linux-2.6.33/drivers/scst/Makefile
--- orig/linux-2.6.33/drivers/scst/Makefile
+++ linux-2.6.33/drivers/scst/Makefile
@@ -0,0 +1,11 @@
+ccflags-y += -Iinclude/scst -Wno-unused-parameter
+
+scst-y        += scst_main.o
+scst-y        += scst_targ.o
+scst-y        += scst_lib.o
+scst-y        += scst_sysfs.o
+scst-y        += scst_mem.o
+scst-y        += scst_debug.o
+
+obj-$(CONFIG_SCST)   += scst.o dev_handlers/ iscsi-scst/ srpt/
+


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 2/12/1/5] SCST core's external headers
       [not found] ` <4BC44D08.4060907@vlnb.net>
  2010-04-13 13:04   ` [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig Vladislav Bolkhovitin
@ 2010-04-13 13:04   ` Vladislav Bolkhovitin
  2010-04-13 13:04   ` [PATCH][RFC 3/12/1/5] SCST core's scst_main.c Vladislav Bolkhovitin
                     ` (7 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains declarations of all externally visible SCST constants,
types and function prototypes.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst.h       | 3169 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 scst_const.h |  330 ++++++
 2 files changed, 3499 insertions(+)

diff -uprN orig/linux-2.6.33/include/scst/scst_const.h linux-2.6.33/include/scst/scst_const.h
--- orig/linux-2.6.33/include/scst/scst_const.h
+++ linux-2.6.33/include/scst/scst_const.h
@@ -0,0 +1,330 @@
+/*
+ *  include/scst_const.h
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Contains common SCST constants.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __SCST_CONST_H
+#define __SCST_CONST_H
+
+#include <scsi/scsi.h>
+
+#define SCST_CONST_VERSION "$Revision: 1585 $"
+
+/*** Shared constants between user and kernel spaces ***/
+
+/* Max size of CDB */
+#define SCST_MAX_CDB_SIZE            16
+
+/* Max size of various names */
+#define SCST_MAX_NAME		     50
+
+/* Max size of external names, like initiator name */
+#define SCST_MAX_EXTERNAL_NAME	     256
+
+/*
+ * Size of sense sufficient to carry standard sense data.
+ * Warning! It's allocated on the stack!
+ */
+#define SCST_STANDARD_SENSE_LEN      18
+
+/* Max size of sense */
+#define SCST_SENSE_BUFFERSIZE        96
+
+/*************************************************************
+ ** Allowed delivery statuses for cmd's delivery_status
+ *************************************************************/
+
+#define SCST_CMD_DELIVERY_SUCCESS	0
+#define SCST_CMD_DELIVERY_FAILED	-1
+#define SCST_CMD_DELIVERY_ABORTED	-2
+
+/*************************************************************
+ ** Values for task management functions
+ *************************************************************/
+#define SCST_ABORT_TASK              0
+#define SCST_ABORT_TASK_SET          1
+#define SCST_CLEAR_ACA               2
+#define SCST_CLEAR_TASK_SET          3
+#define SCST_LUN_RESET               4
+#define SCST_TARGET_RESET            5
+
+/** SCST extensions **/
+
+/*
+ * Notifies about an I_T nexus loss event in the corresponding session.
+ * Aborts all tasks there, resets the reservation, if any, and sets
+ * up the I_T Nexus loss UA.
+ */
+#define SCST_NEXUS_LOSS_SESS         6
+
+/* Aborts all tasks in the corresponding session */
+#define SCST_ABORT_ALL_TASKS_SESS    7
+
+/*
+ * Notifies about an I_T nexus loss event. Aborts all tasks in all sessions
+ * of the tgt, resets the reservations, if any, and sets up the I_T Nexus
+ * loss UA.
+ */
+#define SCST_NEXUS_LOSS              8
+
+/* Aborts all tasks in all sessions of the tgt */
+#define SCST_ABORT_ALL_TASKS         9
+
+/*
+ * Internal TM command issued by SCST in scst_unregister_session(). It is the
+ * same as SCST_NEXUS_LOSS_SESS, except:
+ *  - it doesn't call task_mgmt_affected_cmds_done()
+ *  - it doesn't call task_mgmt_fn_done()
+ *  - it doesn't queue NEXUS LOSS UA.
+ *
+ * Target drivers shall NEVER use it!!
+ */
+#define SCST_UNREG_SESS_TM           10
+
+/*************************************************************
+ ** Values for mgmt cmd's status field. Codes taken from iSCSI
+ *************************************************************/
+#define SCST_MGMT_STATUS_SUCCESS		0
+#define SCST_MGMT_STATUS_TASK_NOT_EXIST		-1
+#define SCST_MGMT_STATUS_LUN_NOT_EXIST		-2
+#define SCST_MGMT_STATUS_FN_NOT_SUPPORTED	-5
+#define SCST_MGMT_STATUS_REJECTED		-255
+#define SCST_MGMT_STATUS_FAILED			-129
+
+/*************************************************************
+ ** SCSI task attribute queue types
+ *************************************************************/
+enum scst_cmd_queue_type {
+	SCST_CMD_QUEUE_UNTAGGED = 0,
+	SCST_CMD_QUEUE_SIMPLE,
+	SCST_CMD_QUEUE_ORDERED,
+	SCST_CMD_QUEUE_HEAD_OF_QUEUE,
+	SCST_CMD_QUEUE_ACA
+};
+
+/*************************************************************
+ ** CDB flags
+ *************************************************************/
+enum scst_cdb_flags {
+	SCST_TRANSFER_LEN_TYPE_FIXED =		0x001,
+	SCST_SMALL_TIMEOUT =			0x002,
+	SCST_LONG_TIMEOUT =			0x004,
+	SCST_UNKNOWN_LENGTH =			0x008,
+	SCST_INFO_VALID =			0x010, /* must be single bit */
+	SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED =	0x020,
+	SCST_IMPLICIT_HQ =			0x040,
+	SCST_SKIP_UA =				0x080,
+	SCST_WRITE_MEDIUM =			0x100,
+	SCST_LOCAL_CMD =			0x200,
+	SCST_FULLY_LOCAL_CMD =			0x400,
+	SCST_REG_RESERVE_ALLOWED =		0x800,
+};
+
+/*************************************************************
+ ** Data direction aliases. When changing them, don't forget to change
+ ** scst_to_tgt_dma_dir as well!!
+ *************************************************************/
+#define SCST_DATA_UNKNOWN		0
+#define SCST_DATA_WRITE			1
+#define SCST_DATA_READ			2
+#define SCST_DATA_BIDI			(SCST_DATA_WRITE | SCST_DATA_READ)
+#define SCST_DATA_NONE			4
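+
+/*
+ * Usage sketch (illustrative): since SCST_DATA_BIDI is the bitwise OR of
+ * SCST_DATA_WRITE and SCST_DATA_READ, direction checks can be done by
+ * masking. For example, a target driver could test whether a cmd carries
+ * write data (receive_data_from_initiator() is a hypothetical helper):
+ *
+ *   if (cmd->data_direction & SCST_DATA_WRITE)
+ *           receive_data_from_initiator(cmd);
+ */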
+
+/*************************************************************
+ ** Default suffix for targets with NULL names
+ *************************************************************/
+#define SCST_DEFAULT_TGT_NAME_SUFFIX		"_target_"
+
+/*************************************************************
+ ** Sense manipulation and examination
+ *************************************************************/
+#define SCST_LOAD_SENSE(key_asc_ascq) key_asc_ascq
+
+#define SCST_SENSE_VALID(sense)  ((sense != NULL) && \
+				  ((((const uint8_t *)(sense))[0] & 0x70) == 0x70))
+
+#define SCST_NO_SENSE(sense)     ((sense != NULL) && \
+				  (((const uint8_t *)(sense))[2] == 0))
+
+/*************************************************************
+ ** Sense data for the appropriate errors. Can be used with
+ ** scst_set_cmd_error()
+ *************************************************************/
+#define scst_sense_no_sense			NO_SENSE,        0x00, 0
+#define scst_sense_hardw_error			HARDWARE_ERROR,  0x44, 0
+#define scst_sense_aborted_command		ABORTED_COMMAND, 0x00, 0
+#define scst_sense_invalid_opcode		ILLEGAL_REQUEST, 0x20, 0
+#define scst_sense_invalid_field_in_cdb		ILLEGAL_REQUEST, 0x24, 0
+#define scst_sense_invalid_field_in_parm_list	ILLEGAL_REQUEST, 0x26, 0
+#define scst_sense_parameter_value_invalid	ILLEGAL_REQUEST, 0x26, 2
+#define scst_sense_reset_UA			UNIT_ATTENTION,  0x29, 0
+#define scst_sense_nexus_loss_UA		UNIT_ATTENTION,  0x29, 0x7
+#define scst_sense_saving_params_unsup		ILLEGAL_REQUEST, 0x39, 0
+#define scst_sense_lun_not_supported		ILLEGAL_REQUEST, 0x25, 0
+#define scst_sense_data_protect			DATA_PROTECT,    0x00, 0
+#define scst_sense_miscompare_error		MISCOMPARE,      0x1D, 0
+#define scst_sense_block_out_range_error	ILLEGAL_REQUEST, 0x21, 0
+#define scst_sense_medium_changed_UA		UNIT_ATTENTION,  0x28, 0
+#define scst_sense_read_error			MEDIUM_ERROR,    0x11, 0
+#define scst_sense_write_error			MEDIUM_ERROR,    0x03, 0
+#define scst_sense_not_ready			NOT_READY,       0x04, 0x10
+#define scst_sense_invalid_message		ILLEGAL_REQUEST, 0x49, 0
+#define scst_sense_cleared_by_another_ini_UA	UNIT_ATTENTION,  0x2F, 0
+#define scst_sense_capacity_data_changed	UNIT_ATTENTION,  0x2A, 0x9
+#define scst_sense_reported_luns_data_changed	UNIT_ATTENTION,  0x3F, 0xE
+#define scst_sense_inquery_data_changed		UNIT_ATTENTION,  0x3F, 0x3
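+
+/*
+ * Usage sketch (illustrative): SCST_LOAD_SENSE() expands one of the
+ * scst_sense_* triplets above into the key/asc/ascq arguments of
+ * scst_set_cmd_error(), e.g.:
+ *
+ *   scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+ *
+ * which expands to scst_set_cmd_error(cmd, ILLEGAL_REQUEST, 0x20, 0).
+ */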
+
+/*************************************************************
+ * SCSI opcodes not listed anywhere else
+ *************************************************************/
+#define REPORT_DEVICE_IDENTIFIER    0xA3
+#define INIT_ELEMENT_STATUS         0x07
+#define INIT_ELEMENT_STATUS_RANGE   0x37
+#define PREVENT_ALLOW_MEDIUM        0x1E
+#define READ_ATTRIBUTE              0x8C
+#define REQUEST_VOLUME_ADDRESS      0xB5
+#define WRITE_ATTRIBUTE             0x8D
+#define WRITE_VERIFY_16             0x8E
+#define VERIFY_6                    0x13
+#define VERIFY_12                   0xAF
+
+/*************************************************************
+ **  SCSI Architecture Model (SAM) Status codes. Taken from SAM-3 draft
+ **  T10/1561-D Revision 4 Draft dated 7th November 2002.
+ *************************************************************/
+#define SAM_STAT_GOOD            0x00
+#define SAM_STAT_CHECK_CONDITION 0x02
+#define SAM_STAT_CONDITION_MET   0x04
+#define SAM_STAT_BUSY            0x08
+#define SAM_STAT_INTERMEDIATE    0x10
+#define SAM_STAT_INTERMEDIATE_CONDITION_MET 0x14
+#define SAM_STAT_RESERVATION_CONFLICT 0x18
+#define SAM_STAT_COMMAND_TERMINATED 0x22	/* obsolete in SAM-3 */
+#define SAM_STAT_TASK_SET_FULL   0x28
+#define SAM_STAT_ACA_ACTIVE      0x30
+#define SAM_STAT_TASK_ABORTED    0x40
+
+/*************************************************************
+ ** Control byte field in CDB
+ *************************************************************/
+#define CONTROL_BYTE_LINK_BIT       0x01
+#define CONTROL_BYTE_NACA_BIT       0x04
+
+/*************************************************************
+ ** Byte 1 in INQUIRY CDB
+ *************************************************************/
+#define SCST_INQ_EVPD                0x01
+
+/*************************************************************
+ ** Byte 3 in Standard INQUIRY data
+ *************************************************************/
+#define SCST_INQ_BYTE3               3
+
+#define SCST_INQ_NORMACA_BIT         0x20
+
+/*************************************************************
+ ** Byte 2 in RESERVE_10 CDB
+ *************************************************************/
+#define SCST_RES_3RDPTY              0x10
+#define SCST_RES_LONGID              0x02
+
+/*************************************************************
+ ** Values for the control mode page TST field
+ *************************************************************/
+#define SCST_CONTR_MODE_ONE_TASK_SET  0
+#define SCST_CONTR_MODE_SEP_TASK_SETS 1
+
+/*******************************************************************
+ ** Values for the control mode page QUEUE ALGORITHM MODIFIER field
+ *******************************************************************/
+#define SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER   0
+#define SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER 1
+
+/*************************************************************
+ ** Values for the control mode page D_SENSE field
+ *************************************************************/
+#define SCST_CONTR_MODE_FIXED_SENSE  0
+#define SCST_CONTR_MODE_DESCR_SENSE 1
+
+/*************************************************************
+ ** Misc SCSI constants
+ *************************************************************/
+#define SCST_SENSE_ASC_UA_RESET      0x29
+#define READ_CAP_LEN		     8
+#define READ_CAP16_LEN		     32
+#define BYTCHK			     0x02
+#define POSITION_LEN_SHORT           20
+#define POSITION_LEN_LONG            32
+
+/*************************************************************
+ ** Various timeouts
+ *************************************************************/
+#define SCST_DEFAULT_TIMEOUT			(60 * HZ)
+
+#define SCST_GENERIC_CHANGER_TIMEOUT		(3 * HZ)
+#define SCST_GENERIC_CHANGER_LONG_TIMEOUT	(14000 * HZ)
+
+#define SCST_GENERIC_PROCESSOR_TIMEOUT		(3 * HZ)
+#define SCST_GENERIC_PROCESSOR_LONG_TIMEOUT	(14000 * HZ)
+
+#define SCST_GENERIC_TAPE_SMALL_TIMEOUT		(3 * HZ)
+#define SCST_GENERIC_TAPE_REG_TIMEOUT		(900 * HZ)
+#define SCST_GENERIC_TAPE_LONG_TIMEOUT		(14000 * HZ)
+
+#define SCST_GENERIC_MODISK_SMALL_TIMEOUT	(3 * HZ)
+#define SCST_GENERIC_MODISK_REG_TIMEOUT		(900 * HZ)
+#define SCST_GENERIC_MODISK_LONG_TIMEOUT	(14000 * HZ)
+
+#define SCST_GENERIC_DISK_SMALL_TIMEOUT		(3 * HZ)
+#define SCST_GENERIC_DISK_REG_TIMEOUT		(60 * HZ)
+#define SCST_GENERIC_DISK_LONG_TIMEOUT		(3600 * HZ)
+
+#define SCST_GENERIC_RAID_TIMEOUT		(3 * HZ)
+#define SCST_GENERIC_RAID_LONG_TIMEOUT		(14000 * HZ)
+
+#define SCST_GENERIC_CDROM_SMALL_TIMEOUT	(3 * HZ)
+#define SCST_GENERIC_CDROM_REG_TIMEOUT		(900 * HZ)
+#define SCST_GENERIC_CDROM_LONG_TIMEOUT		(14000 * HZ)
+
+#define SCST_MAX_OTHER_TIMEOUT			(14000 * HZ)
+
+/*************************************************************
+ ** I/O grouping attribute string values. Must match constants
+ ** w/o '_STR' suffix!
+ *************************************************************/
+#define SCST_IO_GROUPING_AUTO_STR		"auto"
+#define SCST_IO_GROUPING_THIS_GROUP_ONLY_STR	"this_group_only"
+#define SCST_IO_GROUPING_NEVER_STR		"never"
+
+/*************************************************************
+ ** Threads pool type attribute string values.
+ ** Must match scst_dev_type_threads_pool_type!
+ *************************************************************/
+#define SCST_THREADS_POOL_PER_INITIATOR_STR	"per_initiator"
+#define SCST_THREADS_POOL_SHARED_STR		"shared"
+
+/*************************************************************
+ ** Misc constants
+ *************************************************************/
+#define SCST_SYSFS_BLOCK_SIZE			PAGE_SIZE
+
+#define SCST_SYSFS_KEY_MARK			"[key]"
+
+#define SCST_MIN_REL_TGT_ID			1
+#define SCST_MAX_REL_TGT_ID			65535
+
+#endif /* __SCST_CONST_H */
diff -uprN orig/linux-2.6.33/include/scst/scst.h linux-2.6.33/include/scst/scst.h
--- orig/linux-2.6.33/include/scst/scst.h
+++ linux-2.6.33/include/scst/scst.h
@@ -0,0 +1,3169 @@
+/*
+ *  include/scst.h
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Main SCSI target mid-level include file.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __SCST_H
+#define __SCST_H
+
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/blkdev.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi.h>
+
+#include <scst_const.h>
+
+#include "scst_sgv.h"
+
+/*
+ * Version numbers, the same as for the kernel.
+ *
+ * When changing them, don't forget to change SCST_FIO_REV in scst_vdisk.c
+ * and FIO_REV in usr/fileio/common.h as well.
+ */
+#define SCST_VERSION(a, b, c, d)    (((a) << 24) + ((b) << 16) + ((c) << 8) + d)
+#define SCST_VERSION_CODE	    SCST_VERSION(2, 0, 0, 0)
+#define SCST_VERSION_STRING_SUFFIX
+#define SCST_VERSION_STRING	    "2.0.0-pre1" SCST_VERSION_STRING_SUFFIX
+#define SCST_INTERFACE_VERSION	    \
+		SCST_VERSION_STRING "$Revision: 1603 $" SCST_CONST_VERSION
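+
+/*
+ * Usage sketch (illustrative): SCST_VERSION() packs the four components
+ * into one comparable integer, so drivers can do compile-time checks like:
+ *
+ *   #if SCST_VERSION_CODE < SCST_VERSION(2, 0, 0, 0)
+ *   #error "SCST 2.0.0 or later is required"
+ *   #endif
+ */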
+
+#define SCST_LOCAL_NAME			"scst_lcl_drvr"
+
+/*************************************************************
+ ** States of the command processing state machine. "Active"
+ ** states come first, then "passive" ones. This ordering
+ ** produces more efficient generated code for the
+ ** corresponding "switch" statements.
+ *************************************************************/
+
+/* Internal parsing */
+#define SCST_CMD_STATE_PRE_PARSE     0
+
+/* Dev handler's parse() is going to be called */
+#define SCST_CMD_STATE_DEV_PARSE     1
+
+/* Allocation of the cmd's data buffer */
+#define SCST_CMD_STATE_PREPARE_SPACE 2
+
+/* Calling preprocessing_done() */
+#define SCST_CMD_STATE_PREPROCESSING_DONE 3
+
+/* Target driver's rdy_to_xfer() is going to be called */
+#define SCST_CMD_STATE_RDY_TO_XFER   4
+
+/* Target driver's pre_exec() is going to be called */
+#define SCST_CMD_STATE_TGT_PRE_EXEC  5
+
+/* Cmd is going to be sent for execution */
+#define SCST_CMD_STATE_SEND_FOR_EXEC 6
+
+/* Cmd is being checked if it should be executed locally */
+#define SCST_CMD_STATE_LOCAL_EXEC    7
+
+/* Cmd is ready for execution */
+#define SCST_CMD_STATE_REAL_EXEC     8
+
+/* Internal post-exec checks */
+#define SCST_CMD_STATE_PRE_DEV_DONE  9
+
+/* Internal MODE SELECT pages related checks */
+#define SCST_CMD_STATE_MODE_SELECT_CHECKS 10
+
+/* Dev handler's dev_done() is going to be called */
+#define SCST_CMD_STATE_DEV_DONE      11
+
+/* Target driver's xmit_response() is going to be called */
+#define SCST_CMD_STATE_PRE_XMIT_RESP 12
+
+/* Target driver's xmit_response() is going to be called */
+#define SCST_CMD_STATE_XMIT_RESP     13
+
+/* Cmd finished */
+#define SCST_CMD_STATE_FINISHED      14
+
+/* Internal cmd finished */
+#define SCST_CMD_STATE_FINISHED_INTERNAL 15
+
+#define SCST_CMD_STATE_LAST_ACTIVE   (SCST_CMD_STATE_FINISHED_INTERNAL+100)
+
+/* A cmd is created, but scst_cmd_init_done() not called */
+#define SCST_CMD_STATE_INIT_WAIT     (SCST_CMD_STATE_LAST_ACTIVE+1)
+
+/* LUN translation (cmd->tgt_dev assignment) */
+#define SCST_CMD_STATE_INIT          (SCST_CMD_STATE_LAST_ACTIVE+2)
+
+/* Waiting for scst_restart_cmd() */
+#define SCST_CMD_STATE_PREPROCESSING_DONE_CALLED (SCST_CMD_STATE_LAST_ACTIVE+3)
+
+/* Waiting for data from the initiator (until scst_rx_data() called) */
+#define SCST_CMD_STATE_DATA_WAIT     (SCST_CMD_STATE_LAST_ACTIVE+4)
+
+/* Waiting for the CDB execution to finish */
+#define SCST_CMD_STATE_REAL_EXECUTING (SCST_CMD_STATE_LAST_ACTIVE+5)
+
+/* Waiting for the response transmission to finish */
+#define SCST_CMD_STATE_XMIT_WAIT     (SCST_CMD_STATE_LAST_ACTIVE+6)
+
+/*************************************************************
+ * Can be returned instead of the cmd's state by dev handlers'
+ * functions, if the command's state should be set by default
+ *************************************************************/
+#define SCST_CMD_STATE_DEFAULT        500
+
+/*************************************************************
+ * Can be returned instead of the cmd's state by dev handlers'
+ * functions, if it is impossible to complete the requested
+ * task in atomic context. The cmd will be restarted in thread
+ * context.
+ *************************************************************/
+#define SCST_CMD_STATE_NEED_THREAD_CTX 1000
+
+/*************************************************************
+ * Can be returned instead of the cmd's state by a dev handler's
+ * parse function, if the cmd processing should be stopped
+ * for now. The cmd will be restarted by the dev handler itself.
+ *************************************************************/
+#define SCST_CMD_STATE_STOP           1001
+
+/*************************************************************
+ ** States of mgmt command processing state machine
+ *************************************************************/
+
+/* LUN translation (mcmd->tgt_dev assignment) */
+#define SCST_MCMD_STATE_INIT     0
+
+/* Mgmt cmd is ready for processing */
+#define SCST_MCMD_STATE_READY    1
+
+/* Mgmt cmd is being executed */
+#define SCST_MCMD_STATE_EXECUTING 2
+
+/* Post check when affected commands done */
+#define SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE 3
+
+/* Target driver's task_mgmt_fn_done() is going to be called */
+#define SCST_MCMD_STATE_DONE     4
+
+/* The mcmd finished */
+#define SCST_MCMD_STATE_FINISHED 5
+
+/*************************************************************
+ ** Constants for "atomic" parameter of SCST's functions
+ *************************************************************/
+#define SCST_NON_ATOMIC              0
+#define SCST_ATOMIC                  1
+
+/*************************************************************
+ ** Values for pref_context parameter of scst_cmd_init_done(),
+ ** scst_rx_data(), scst_restart_cmd(), scst_tgt_cmd_done()
+ ** and scst_cmd_done()
+ *************************************************************/
+
+enum scst_exec_context {
+	/*
+	 * Direct cmd's processing (i.e. regular function calls in the current
+	 * context), sleeping is not allowed
+	 */
+	SCST_CONTEXT_DIRECT_ATOMIC,
+
+	/*
+	 * Direct cmd's processing (i.e. regular function calls in the current
+	 * context), sleeping is allowed, no restrictions
+	 */
+	SCST_CONTEXT_DIRECT,
+
+	/* Tasklet or thread context required for cmd's processing */
+	SCST_CONTEXT_TASKLET,
+
+	/* Thread context required for cmd's processing */
+	SCST_CONTEXT_THREAD,
+
+	/*
+	 * The context is the same as it was in the previous call of the
+	 * corresponding callback. For example, if a dev handler's exec() does
+	 * synchronous data reading, this value should be used for
+	 * scst_cmd_done(). The same is true if scst_tgt_cmd_done() is called
+	 * directly from the target driver's xmit_response(). Not allowed in
+	 * scst_cmd_init_done() and scst_cmd_init_stage1_done().
+	 */
+	SCST_CONTEXT_SAME
+};
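+
+/*
+ * Usage sketch (illustrative): a target driver that has just built a new
+ * cmd picks the preferred context based on where it is running, e.g.
+ * (the two-argument scst_cmd_init_done() form is an assumption based on
+ * the pref_context parameter documented above):
+ *
+ *   scst_cmd_init_done(cmd, in_interrupt() ? SCST_CONTEXT_TASKLET :
+ *                                            SCST_CONTEXT_DIRECT);
+ */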
+
+/*************************************************************
+ ** Values for status parameter of scst_rx_data()
+ *************************************************************/
+
+/* Success */
+#define SCST_RX_STATUS_SUCCESS       0
+
+/*
+ * Data receiving finished with error, so set the sense and
+ * finish the command, including xmit_response() call
+ */
+#define SCST_RX_STATUS_ERROR         1
+
+/*
+ * Data receiving finished with error and the sense is set,
+ * so finish the command, including xmit_response() call
+ */
+#define SCST_RX_STATUS_ERROR_SENSE_SET 2
+
+/*
+ * Data receiving finished with fatal error, so finish the command,
+ * but don't call xmit_response()
+ */
+#define SCST_RX_STATUS_ERROR_FATAL   3
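+
+/*
+ * Usage sketch (illustrative): once the write data for a cmd has been
+ * received, a target driver running in thread context would typically
+ * call (the argument order is an assumption based on the status and
+ * pref_context parameters documented in this header):
+ *
+ *   scst_rx_data(cmd, SCST_RX_STATUS_SUCCESS, SCST_CONTEXT_DIRECT);
+ */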
+
+/*************************************************************
+ ** Values for status parameter of scst_restart_cmd()
+ *************************************************************/
+
+/* Success */
+#define SCST_PREPROCESS_STATUS_SUCCESS       0
+
+/*
+ * Command's processing finished with error, so set the sense and
+ * finish the command, including xmit_response() call
+ */
+#define SCST_PREPROCESS_STATUS_ERROR         1
+
+/*
+ * Command's processing finished with error and the sense is set,
+ * so finish the command, including xmit_response() call
+ */
+#define SCST_PREPROCESS_STATUS_ERROR_SENSE_SET 2
+
+/*
+ * Command's processing finished with fatal error, so finish the command,
+ * but don't call xmit_response()
+ */
+#define SCST_PREPROCESS_STATUS_ERROR_FATAL   3
+
+/* Thread context requested */
+#define SCST_PREPROCESS_STATUS_NEED_THREAD   4
+
+/*************************************************************
+ ** Values for AEN functions
+ *************************************************************/
+
+/*
+ * SCSI Asynchronous Event. Parameter contains SCSI sense
+ * (Unit Attention). AENs are generated only for the following two UAs:
+ * CAPACITY DATA HAS CHANGED and REPORTED LUNS DATA HAS CHANGED.
+ * Other UAs are reported regularly as CHECK CONDITION status,
+ * because it doesn't look safe to report them using AENs, since
+ * reporting via AENs opens delivery race windows even in the case of
+ * untagged commands.
+ */
+#define SCST_AEN_SCSI                0
+
+/*************************************************************
+ ** Allowed return/status codes for report_aen() callback and
+ ** scst_set_aen_delivery_status() function
+ *************************************************************/
+
+/* Success */
+#define SCST_AEN_RES_SUCCESS         0
+
+/* Not supported */
+#define SCST_AEN_RES_NOT_SUPPORTED  -1
+
+/* Failure */
+#define SCST_AEN_RES_FAILED         -2
+
+/*************************************************************
+ ** Allowed return codes for xmit_response(), rdy_to_xfer()
+ *************************************************************/
+
+/* Success */
+#define SCST_TGT_RES_SUCCESS         0
+
+/* Internal device queue is full, retry again later */
+#define SCST_TGT_RES_QUEUE_FULL      -1
+
+/*
+ * It is impossible to complete the requested task in atomic context.
+ * The cmd will be restarted in thread context.
+ */
+#define SCST_TGT_RES_NEED_THREAD_CTX -2
+
+/*
+ * Fatal error: if returned by xmit_response(), the cmd will
+ * be destroyed; if returned by any other function, xmit_response()
+ * will be called with HARDWARE ERROR sense data
+ */
+#define SCST_TGT_RES_FATAL_ERROR     -3
+
+/*************************************************************
+ ** Allowed return codes for dev handler's exec()
+ *************************************************************/
+
+/* The cmd is done, go to other ones */
+#define SCST_EXEC_COMPLETED          0
+
+/* The cmd should be sent to SCSI mid-level */
+#define SCST_EXEC_NOT_COMPLETED      1
+
+/*
+ * Thread context is required to execute the command.
+ * Exec() will be called again in the thread context.
+ */
+#define SCST_EXEC_NEED_THREAD        2
+
+/*
+ * Set if cmd is finished and there is status/sense to be sent.
+ * The status should not be sent (i.e. the flag not set) if the
+ * ability to perform a command in "chunks" (i.e. with multiple
+ * xmit_response()/rdy_to_xfer() calls) is used (not implemented yet).
+ * Obsolete, use scst_cmd_get_is_send_status() instead.
+ */
+#define SCST_TSC_FLAG_STATUS         0x2
+
+/*************************************************************
+ ** Additional return code for dev handler's task_mgmt_fn()
+ *************************************************************/
+
+/* Regular standard actions for the command should be done */
+#define SCST_DEV_TM_NOT_COMPLETED     1
+
+/*************************************************************
+ ** Session initialization phases
+ *************************************************************/
+
+/* Set if session is being initialized */
+#define SCST_SESS_IPH_INITING        0
+
+/* Set if the session is successfully initialized */
+#define SCST_SESS_IPH_SUCCESS        1
+
+/* Set if the session initialization failed */
+#define SCST_SESS_IPH_FAILED         2
+
+/* Set if session is initialized and ready */
+#define SCST_SESS_IPH_READY          3
+
+/*************************************************************
+ ** Session shutdown phases
+ *************************************************************/
+
+/* Set if session is initialized and ready */
+#define SCST_SESS_SPH_READY          0
+
+/* Set if session is shutting down */
+#define SCST_SESS_SPH_SHUTDOWN       1
+
+/*************************************************************
+ ** Session's async (atomic) flags
+ *************************************************************/
+
+/* Set if the sess's hw pending work is scheduled */
+#define SCST_SESS_HW_PENDING_WORK_SCHEDULED	0
+
+/*************************************************************
+ ** Cmd's async (atomic) flags
+ *************************************************************/
+
+/* Set if the cmd is aborted and ABORTED sense will be sent as the result */
+#define SCST_CMD_ABORTED		0
+
+/* Set if the cmd is aborted by other initiator */
+#define SCST_CMD_ABORTED_OTHER		1
+
+/* Set if no response should be sent to the target about this cmd */
+#define SCST_CMD_NO_RESP		2
+
+/* Set if the cmd is dead and can be destroyed at any time */
+#define SCST_CMD_CAN_BE_DESTROYED	3
+
+/*
+ * Set if the cmd's device has TAS flag set. Used only when aborted by
+ * other initiator.
+ */
+#define SCST_CMD_DEVICE_TAS		4
+
+/*************************************************************
+ ** Tgt_dev's async. flags (tgt_dev_flags)
+ *************************************************************/
+
+/* Set if tgt_dev has Unit Attention sense */
+#define SCST_TGT_DEV_UA_PENDING		0
+
+/* Set if tgt_dev is RESERVED by another session */
+#define SCST_TGT_DEV_RESERVED		1
+
+/* Set if the corresponding context is atomic */
+#define SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC	5
+#define SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC	6
+#define SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC	7
+#define SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC	8
+#define SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC	9
+#define SCST_TGT_DEV_AFTER_EXEC_ATOMIC		10
+
+#define SCST_TGT_DEV_CLUST_POOL			11
+
+/*************************************************************
+ ** I/O grouping types. When changing them, don't forget to change
+ ** the corresponding *_STR values in scst_const.h!
+ *************************************************************/
+
+/*
+ * All initiators with the same name connected to this group will share
+ * one IO context (one context per name). Initiators with different
+ * names will each have their own IO context.
+ */
+#define SCST_IO_GROUPING_AUTO			0
+
+/* All initiators connected to this group will share one IO context */
+#define SCST_IO_GROUPING_THIS_GROUP_ONLY	-1
+
+/* Each initiator connected to this group will have its own IO context */
+#define SCST_IO_GROUPING_NEVER			-2
+
+/*************************************************************
+ ** Activities suspending timeout
+ *************************************************************/
+#define SCST_SUSPENDING_TIMEOUT			(90 * HZ)
+
+/*************************************************************
+ ** Kernel cache creation helper
+ *************************************************************/
+#ifndef KMEM_CACHE
+#define KMEM_CACHE(__struct, __flags) kmem_cache_create(#__struct,\
+	sizeof(struct __struct), __alignof__(struct __struct),\
+	(__flags), NULL, NULL)
+#endif
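+
+/*
+ * Usage sketch (illustrative): the helper above expands the struct name
+ * into both the cache name and its size/alignment, e.g.:
+ *
+ *   struct kmem_cache *cache = KMEM_CACHE(scst_cmd, SLAB_HWCACHE_ALIGN);
+ *
+ * creates a cache named "scst_cmd" for struct scst_cmd objects.
+ */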
+
+/*************************************************************
+ ** Valid_mask constants for scst_analyze_sense()
+ *************************************************************/
+
+#define SCST_SENSE_KEY_VALID		1
+#define SCST_SENSE_ASC_VALID		2
+#define SCST_SENSE_ASCQ_VALID		4
+
+#define SCST_SENSE_ASCx_VALID		(SCST_SENSE_ASC_VALID | \
+					 SCST_SENSE_ASCQ_VALID)
+
+#define SCST_SENSE_ALL_VALID		(SCST_SENSE_KEY_VALID | \
+					 SCST_SENSE_ASC_VALID | \
+					 SCST_SENSE_ASCQ_VALID)
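+
+/*
+ * Usage sketch (assumed calling convention, for illustration only):
+ * these masks select which of the key/asc/ascq values
+ * scst_analyze_sense() should compare, e.g.:
+ *
+ *   if (scst_analyze_sense(sense, sense_len, SCST_SENSE_ALL_VALID,
+ *                          SCST_LOAD_SENSE(scst_sense_reset_UA)))
+ *           ... the buffer carries a POWER ON/RESET Unit Attention ...
+ */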
+
+/*************************************************************
+ *                     TYPES
+ *************************************************************/
+
+struct scst_tgt;
+struct scst_session;
+struct scst_cmd;
+struct scst_mgmt_cmd;
+struct scst_device;
+struct scst_tgt_dev;
+struct scst_dev_type;
+struct scst_acg;
+struct scst_acg_dev;
+struct scst_acn;
+struct scst_aen;
+
+/*
+ * SCST uses 64-bit numbers to represent LUNs internally. The value
+ * NO_SUCH_LUN is guaranteed to be different from every valid LUN.
+ */
+#define NO_SUCH_LUN ((uint64_t)-1)
+
+typedef enum dma_data_direction scst_data_direction;
+
+/*
+ * SCST target template: defines target driver's parameters and callback
+ * functions.
+ *
+ * MUST HAVE marks functions that must be defined for the driver to
+ * work. OPTIONAL says that there is a choice.
+ */
+struct scst_tgt_template {
+	/* public: */
+
+	/*
+	 * SG tablesize allows checking whether scatter/gather can be used
+	 * or not.
+	 */
+	int sg_tablesize;
+
+	/*
+	 * True, if this target adapter uses unchecked DMA onto an ISA bus.
+	 */
+	unsigned unchecked_isa_dma:1;
+
+	/*
+	 * True, if this target adapter can benefit from using SG-vector
+	 * clustering (i.e. smaller number of segments).
+	 */
+	unsigned use_clustering:1;
+
+	/*
+	 * True, if this target adapter doesn't support SG-vector clustering
+	 */
+	unsigned no_clustering:1;
+
+	/*
+	 * True, if corresponding function supports execution in
+	 * the atomic (non-sleeping) context
+	 */
+	unsigned xmit_response_atomic:1;
+	unsigned rdy_to_xfer_atomic:1;
+
+	/*
+	 * The maximum time in seconds a cmd can stay inside the target
+	 * hardware, i.e. after rdy_to_xfer() and xmit_response(), before
+	 * on_hw_pending_cmd_timeout() is called, if defined.
+	 *
+	 * In the current implementation a cmd will be aborted at a time t
+	 * such that max_hw_pending_time <= t < 2*max_hw_pending_time.
+	 */
+	int max_hw_pending_time;
+
+	/*
+	 * This function is equivalent to the SCSI
+	 * queuecommand. The target should transmit the response
+	 * buffer and the status in the scst_cmd struct.
+	 * This function is expected to be NON-BLOCKING.
+	 * If it is blocking, consider setting threads_num to some nonzero
+	 * number.
+	 *
+	 * After the response is actually transmitted, the target
+	 * should call the scst_tgt_cmd_done() function of the
+	 * mid-level, which will allow it to free up the command.
+	 * Returns one of the SCST_TGT_RES_* constants.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * obtained via scst_cmd_atomic(): it is true if the function is
+	 * called in atomic (non-sleeping) context.
+	 *
+	 * MUST HAVE
+	 */
+	int (*xmit_response) (struct scst_cmd *cmd);
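+
+	/*
+	 * Minimal xmit_response() sketch (illustrative): the my_* helpers
+	 * are hypothetical, and the two-argument scst_tgt_cmd_done() form
+	 * is an assumption based on the pref_context documentation above.
+	 *
+	 *   static int my_xmit_response(struct scst_cmd *cmd)
+	 *   {
+	 *           if (my_hw_queue_full())
+	 *                   return SCST_TGT_RES_QUEUE_FULL;
+	 *           my_send_response_to_initiator(cmd);
+	 *           scst_tgt_cmd_done(cmd, SCST_CONTEXT_SAME);
+	 *           return SCST_TGT_RES_SUCCESS;
+	 *   }
+	 */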
+
+	/*
+	 * This function informs the driver that the data
+	 * buffer corresponding to the said command has now been
+	 * allocated and it is OK to receive data for this command.
+	 * This function is necessary because a SCSI target does not
+	 * have any control over the commands it receives. Most lower
+	 * level protocols have a corresponding function which informs
+	 * the initiator that buffers have been allocated, e.g.
+	 * XFER_RDY in Fibre Channel. After the data is actually received,
+	 * the low-level driver needs to call scst_rx_data() in order to
+	 * continue processing this command.
+	 * Returns one of the SCST_TGT_RES_* constants.
+	 *
+	 * This function is expected to be NON-BLOCKING.
+	 * If it is blocking, consider setting threads_num to some nonzero
+	 * number.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * obtained via scst_cmd_atomic(): it is true if the function is
+	 * called in atomic (non-sleeping) context.
+	 *
+	 * OPTIONAL
+	 */
+	int (*rdy_to_xfer) (struct scst_cmd *cmd);
+
+	/*
+	 * Called if a cmd stays inside the target hardware, i.e. after
+	 * rdy_to_xfer() and xmit_response(), for more than
+	 * max_hw_pending_time. The target driver is supposed to clean up
+	 * this command and resume the cmd's processing.
+	 *
+	 * OPTIONAL
+	 */
+	void (*on_hw_pending_cmd_timeout) (struct scst_cmd *cmd);
+
+	/*
+	 * Called to notify the driver that the command is about to be freed.
+	 * Necessary because xmit_response() may not be called for aborted
+	 * commands. Could be called in IRQ context.
+	 *
+	 * OPTIONAL
+	 */
+	void (*on_free_cmd) (struct scst_cmd *cmd);
+
+	/*
+	 * This function allows target driver to handle data buffer
+	 * allocations on its own.
+	 *
+	 * The target driver doesn't always have to allocate the buffer in
+	 * this function, but if it decides to do so, it must check that
+	 * scst_cmd_get_data_buff_alloced() returns 0; otherwise, to avoid
+	 * double buffer allocation and memory leaks, alloc_data_buf() shall
+	 * fail.
+	 *
+	 * Shall return 0 in case of success or < 0 (preferably -ENOMEM)
+	 * in case of error, or > 0 if the regular SCST allocation should be
+	 * done. In case of returning successfully,
+	 * scst_cmd->tgt_data_buf_alloced will be set by SCST.
+	 *
+	 * It is possible that both the target driver and the dev handler
+	 * request their own memory allocation. In this case, data will be
+	 * memcpy()'d between the buffers, where necessary.
+	 *
+	 * If allocation in atomic context - cf. scst_cmd_atomic() - is not
+	 * desired or fails and consequently < 0 is returned, this function
+	 * will be re-called in thread context.
+	 *
+	 * Please note that the driver will have to handle all relevant
+	 * details itself, such as scatterlist setup, highmem, freeing the
+	 * allocated memory, etc.
+	 *
+	 * OPTIONAL.
+	 */
+	int (*alloc_data_buf) (struct scst_cmd *cmd);
+
+	/*
+	 * This function informs the driver that the data
+	 * buffer corresponding to the said command has now been
+	 * allocated and other preprocessing tasks have been done.
+	 * A target driver could need to do some actions at this stage.
+	 * After the target driver has done the needed actions, it shall call
+	 * scst_restart_cmd() in order to continue processing this command.
+	 * In case of preliminary command completion, this function will
+	 * also be called before xmit_response().
+	 *
+	 * Called only if the cmd is queued using scst_cmd_init_stage1_done()
+	 * instead of scst_cmd_init_done().
+	 *
+	 * Returns void, the result is expected to be returned using
+	 * scst_restart_cmd().
+	 *
+	 * This function is expected to be NON-BLOCKING.
+	 * If it is blocking, consider setting threads_num to some nonzero
+	 * number.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * obtained via scst_cmd_atomic(): it is true if the function is
+	 * called in atomic (non-sleeping) context.
+	 *
+	 * OPTIONAL.
+	 */
+	void (*preprocessing_done) (struct scst_cmd *cmd);
+
+	/*
+	 * This function informs the driver that the said command is about
+	 * to be executed.
+	 *
+	 * Returns one of the SCST_PREPROCESS_* constants.
+	 *
+	 * This function is expected to be NON-BLOCKING.
+	 * If it is blocking, consider setting threads_num to some nonzero
+	 * number.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * obtained via scst_cmd_atomic(): it is true if the function is
+	 * called in atomic (non-sleeping) context.
+	 *
+	 * OPTIONAL
+	 */
+	int (*pre_exec) (struct scst_cmd *cmd);
+
+	/*
+	 * This function informs the driver that all commands affected by the
+	 * corresponding task management function have been completed.
+	 * No return value expected.
+	 *
+	 * This function is expected to be NON-BLOCKING.
+	 *
+	 * Called without any locks held from a thread context.
+	 *
+	 * OPTIONAL
+	 */
+	void (*task_mgmt_affected_cmds_done) (struct scst_mgmt_cmd *mgmt_cmd);
+
+	/*
+	 * This function informs the driver that the corresponding task
+	 * management function has been completed, i.e. all the corresponding
+	 * commands completed and freed. No return value expected.
+	 *
+	 * This function is expected to be NON-BLOCKING.
+	 *
+	 * Called without any locks held from a thread context.
+	 *
+	 * MUST HAVE if the target supports task management.
+	 */
+	void (*task_mgmt_fn_done) (struct scst_mgmt_cmd *mgmt_cmd);
+
+	/*
+	 * This function should detect the target adapters that
+	 * are present in the system. The function should return a value
+	 * >= 0 to signify the number of detected target adapters.
+	 * A negative value should be returned whenever there is
+	 * an error.
+	 *
+	 * MUST HAVE
+	 */
+	int (*detect) (struct scst_tgt_template *tgt_template);
+
+	/*
+	 * This function should free up the resources allocated to the device.
+	 * The function should return 0 to indicate successful release
+	 * or a negative value if there are some issues with the release.
+	 * In the current version the return value is ignored.
+	 *
+	 * MUST HAVE
+	 */
+	int (*release) (struct scst_tgt *tgt);
+
+	/*
+	 * This function is used for Asynchronous Event Notifications.
+	 *
+	 * Returns one of the SCST_AEN_RES_* constants.
+	 * After the AEN is sent, the target driver must call scst_aen_done()
+	 * and, optionally, scst_set_aen_delivery_status().
+	 *
+	 * This function is expected to be NON-BLOCKING, but it can sleep.
+	 *
+	 * MUST HAVE, if the low-level protocol supports AENs.
+	 */
+	int (*report_aen) (struct scst_aen *aen);
+
+	/*
+	 * This function allows enabling or disabling a particular target.
+	 * A disabled target doesn't receive and process any SCSI commands.
+	 *
+	 * SHOULD HAVE, to avoid a race when initiators connect while the
+	 * target has not yet completed its initial configuration. In that
+	 * case, initiators that connected too early would not see the
+	 * devices they intended to see.
+	 */
+	int (*enable_target) (struct scst_tgt *tgt, bool enable);
+
+	/*
+	 * This function shows whether a particular target is enabled or not.
+	 *
+	 * SHOULD HAVE, see above why.
+	 */
+	bool (*is_target_enabled) (struct scst_tgt *tgt);
+
+	/*
+	 * This function adds a virtual target.
+	 *
+	 * If both the add_target and del_target callbacks are defined, then
+	 * this target driver is supposed to support virtual targets. In this
+	 * case an "mgmt" entry will be created in the sysfs root for this
+	 * driver. The "mgmt" entry will support 2 commands: "add_target" and
+	 * "del_target", for which the corresponding callbacks will be called.
+	 * The target driver can also define its own commands for the "mgmt"
+	 * entry, see mgmt_cmd and mgmt_cmd_help below.
+	 *
+	 * This approach provides uniform target management, which simplifies
+	 * external management tools like scstadmin. See README for more details.
+	 *
+	 * Either both add_target and del_target must be defined, or none.
+	 *
+	 * MUST HAVE if virtual targets are supported.
+	 */
+	ssize_t (*add_target) (const char *target_name, char *params);
+
+	/*
+	 * This function deletes a virtual target. See comment for add_target
+	 * above.
+	 *
+	 * MUST HAVE if virtual targets are supported.
+	 */
+	ssize_t (*del_target) (const char *target_name);
+
+	/*
+	 * This function is called if a command other than "add_target" or
+	 * "del_target" is sent to the mgmt entry (see the comment for
+	 * add_target above). In this case the command is passed to this
+	 * function as-is, in string form.
+	 *
+	 * OPTIONAL.
+	 */
+	ssize_t (*mgmt_cmd) (char *cmd);
+
+	/*
+	 * Name of the template. Must be unique to identify
+	 * the template. MUST HAVE
+	 */
+	const char name[SCST_MAX_NAME];
+
+	/*
+	 * Number of additional threads to the pool of dedicated threads.
+	 * Used if xmit_response() or rdy_to_xfer() is blocking.
+	 * It is the target driver's duty to ensure that no more than that
+	 * number of threads are blocked in those functions at any time.
+	 */
+	int threads_num;
+
+	/* Optional default log flags */
+	const unsigned long default_trace_flags;
+
+	/* Optional pointer to trace flags */
+	unsigned long *trace_flags;
+
+	/* Optional local trace table */
+	struct scst_trace_log *trace_tbl;
+
+	/* Optional local trace table help string */
+	const char *trace_tbl_help;
+
+	/* Optional sysfs attributes */
+	const struct attribute **tgtt_attrs;
+
+	/* Optional sysfs target attributes */
+	const struct attribute **tgt_attrs;
+
+	/* Optional sysfs session attributes */
+	const struct attribute **sess_attrs;
+
+	/* Optional help string for mgmt_cmd commands */
+	const char *mgmt_cmd_help;
+
+	/* Optional help string for add_target parameters */
+	const char *add_target_parameters_help;
+
+	/** Private, must be inited to 0 by memset() **/
+
+	/* List of targets per template, protected by scst_mutex */
+	struct list_head tgt_list;
+
+	/* List entry of global templates list */
+	struct list_head scst_template_list_entry;
+
+	/* Set if tgtt_kobj was initialized */
+	unsigned int tgtt_kobj_initialized:1;
+
+	struct kobject tgtt_kobj; /* kobject for this struct */
+
+	struct completion tgtt_kobj_release_cmpl;
+
+};
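+
+/*
+ * Illustrative sketch: a minimal target template using only the fields
+ * declared above (the my_* callbacks are hypothetical):
+ *
+ *   static struct scst_tgt_template my_tgtt = {
+ *           .name          = "my_tgt_drv",
+ *           .sg_tablesize  = 128,
+ *           .detect        = my_detect,
+ *           .release       = my_release,
+ *           .xmit_response = my_xmit_response,
+ *           .rdy_to_xfer   = my_rdy_to_xfer,
+ *   };
+ */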
+
+/*
+ * Threads pool types. When changing them, don't forget to change
+ * the corresponding *_STR values in scst_const.h!
+ */
+enum scst_dev_type_threads_pool_type {
+	/* Each initiator will have a dedicated threads pool. */
+	SCST_THREADS_POOL_PER_INITIATOR = 0,
+
+	/* All connected initiators will use a shared threads pool */
+	SCST_THREADS_POOL_SHARED,
+
+	/* Invalid value for scst_parse_threads_pool_type() */
+	SCST_THREADS_POOL_TYPE_INVALID,
+};
+
+/*
+ * SCST dev handler template: defines dev handler's parameters and callback
+ * functions.
+ *
+ * MUST HAVE marks functions that must be defined for the handler to
+ * work. OPTIONAL says that there is a choice.
+ */
+struct scst_dev_type {
+	/* SCSI type of the supported device. MUST HAVE */
+	int type;
+
+	/* True, if this dev handler is a pass-through dev handler */
+	unsigned pass_through:1;
+
+	/*
+	 * True, if corresponding function supports execution in
+	 * the atomic (non-sleeping) context
+	 */
+	unsigned parse_atomic:1;
+	unsigned exec_atomic:1;
+	unsigned dev_done_atomic:1;
+
+	/*
+	 * Should be true if exec() is synchronous. This is a hint to the
+	 * SCST core to optimize command order management.
+	 */
+	unsigned exec_sync:1;
+
+	/*
+	 * Called to parse the CDB from the cmd and initialize
+	 * cmd->bufflen and cmd->data_direction (both REQUIRED).
+	 * Returns the command's next state, or SCST_CMD_STATE_DEFAULT
+	 * if the default next state should be used, or
+	 * SCST_CMD_STATE_NEED_THREAD_CTX if the function was called in atomic
+	 * context but requires sleeping, or SCST_CMD_STATE_STOP if the
+	 * command should not be processed further for now. In the
+	 * SCST_CMD_STATE_NEED_THREAD_CTX case the function
+	 * will be recalled in thread context, where sleeping is allowed.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * obtained via scst_cmd_atomic(): it is true if the function is
+	 * called in atomic (non-sleeping) context.
+	 *
+	 * MUST HAVE
+	 */
+	int (*parse) (struct scst_cmd *cmd);
+
+	/*
+	 * Called to execute CDB. Useful, for instance, to implement
+	 * data caching. The result of CDB execution is reported via
+	 * cmd->scst_cmd_done() callback.
+	 * Returns:
+	 *  - SCST_EXEC_COMPLETED - the cmd is done, go to other ones
+	 *  - SCST_EXEC_NEED_THREAD - thread context is required to execute
+	 *	the command. Exec() will be called again in the thread context.
+	 *  - SCST_EXEC_NOT_COMPLETED - the cmd should be sent to SCSI
+	 *	mid-level.
+	 *
+	 * Pay attention to "atomic" attribute of the cmd, which can be get
+	 * by scst_cmd_atomic(): it is true if the function called in the
+	 * atomic (non-sleeping) context.
+	 *
+	 * If this function provides sync execution, you should set
+	 * exec_sync flag and consider to setup dedicated threads by
+	 * setting threads_num > 0.
+	 *
+	 * !! If this function is implemented, scst_check_local_events() !!
+	 * !! shall be called inside it just before the actual command's !!
+	 * !! execution.                                                 !!
+	 *
+	 * OPTIONAL, if not set, the commands will be sent directly to SCSI
+	 * device.
+	 */
+	int (*exec) (struct scst_cmd *cmd);
+
+	/*
+	 * Called to notify dev handler about the result of cmd execution
+	 * and perform some post processing. Cmd's fields is_send_status and
+	 * resp_data_len should be set by this function, but SCST offers good
+	 * defaults.
+	 * Returns the command's next state, or SCST_CMD_STATE_DEFAULT
+	 * if the next default state should be used, or
+	 * SCST_CMD_STATE_NEED_THREAD_CTX if the function was called in
+	 * atomic context but requires sleeping. In the latter case, the
+	 * function will be recalled in thread context, where sleeping is
+	 * allowed.
+	 *
+	 * Pay attention to the "atomic" attribute of the cmd, which can be
+	 * read via scst_cmd_atomic(): it is true if the function was called
+	 * in atomic (non-sleeping) context.
+	 */
+	int (*dev_done) (struct scst_cmd *cmd);
+
+	/*
+	 * Called to notify the dev handler that the command is about to be
+	 * freed. Could be called in IRQ context.
+	 */
+	void (*on_free_cmd) (struct scst_cmd *cmd);
+
+	/*
+	 * Called to execute a task management command.
+	 * Returns:
+	 *  - SCST_MGMT_STATUS_SUCCESS - the command completed successfully,
+	 *	no further actions required
+	 *  - An SCST_MGMT_STATUS_* error code if the command failed and
+	 *	no further actions are required
+	 *  - SCST_DEV_TM_NOT_COMPLETED - the regular standard actions for
+	 *	the command should be done
+	 *
+	 * Called without any locks held from a thread context.
+	 */
+	int (*task_mgmt_fn) (struct scst_mgmt_cmd *mgmt_cmd,
+		struct scst_tgt_dev *tgt_dev);
+
+	/*
+	 * Called when a new device is being attached to the dev handler.
+	 * Returns 0 on success, error code otherwise.
+	 */
+	int (*attach) (struct scst_device *dev);
+
+	/* Called when a device is being detached from the dev handler */
+	void (*detach) (struct scst_device *dev);
+
+	/*
+	 * Called when a new tgt_dev (session) is being attached to the dev handler.
+	 * Returns 0 on success, error code otherwise.
+	 */
+	int (*attach_tgt) (struct scst_tgt_dev *tgt_dev);
+
+	/* Called when a tgt_dev (session) is being detached from the dev handler */
+	void (*detach_tgt) (struct scst_tgt_dev *tgt_dev);
+
+	/*
+	 * This function adds a virtual device.
+	 *
+	 * If both the add_device and del_device callbacks are defined, this
+	 * dev handler is supposed to support adding/deleting virtual devices.
+	 * In this case an "mgmt" entry will be created in the sysfs root for
+	 * this handler. The "mgmt" entry will support 2 commands: "add_device"
+	 * and "del_device", for which the corresponding callbacks will be called.
+	 * The dev handler can also define its own commands for the "mgmt"
+	 * entry, see mgmt_cmd and mgmt_cmd_help below.
+	 *
+	 * This approach provides uniform device management, which simplifies
+	 * external management tools like scstadmin. See README for more details.
+	 *
+	 * Either both add_device and del_device must be defined, or none.
+	 *
+	 * MUST HAVE if virtual devices are supported.
+	 */
+	ssize_t (*add_device) (const char *device_name, char *params);
+
+	/*
+	 * This function deletes a virtual device. See comment for add_device
+	 * above.
+	 *
+	 * MUST HAVE if virtual devices are supported.
+	 */
+	ssize_t (*del_device) (const char *device_name);
+
+	/*
+	 * This function is called if a command other than "add_device" or
+	 * "del_device" is sent to the mgmt entry (see the comment for
+	 * add_device above). In this case the command is passed to this
+	 * function as-is, in string form.
+	 *
+	 * OPTIONAL.
+	 */
+	ssize_t (*mgmt_cmd) (char *cmd);
+
+	/*
+	 * Name of the dev handler. Must be unique. MUST HAVE.
+	 *
+	 * It's SCST_MAX_NAME plus a few more bytes to match scst_user's
+	 * expectations.
+	 */
+	char name[SCST_MAX_NAME + 10];
+
+	/*
+	 * Number of threads in this handler's devices' threads pools.
+	 * If 0, no threads will be created; if <0, creation of the threads
+	 * pools is prohibited. Also pay attention to threads_pool_type below.
+	 */
+	int threads_num;
+
+	/* Threads pool type. Valid only if threads_num > 0. */
+	enum scst_dev_type_threads_pool_type threads_pool_type;
+
+	/* Optional default log flags */
+	const unsigned long default_trace_flags;
+
+	/* Optional pointer to trace flags */
+	unsigned long *trace_flags;
+
+	/* Optional local trace table */
+	struct scst_trace_log *trace_tbl;
+
+	/* Optional local trace table help string */
+	const char *trace_tbl_help;
+
+	/* Optional help string for mgmt_cmd commands */
+	const char *mgmt_cmd_help;
+
+	/* Optional help string for add_device parameters */
+	const char *add_device_parameters_help;
+
+	/* Optional sysfs attributes */
+	const struct attribute **devt_attrs;
+
+	/* Optional sysfs device attributes */
+	const struct attribute **dev_attrs;
+
+	/* Pointer to dev handler's private data */
+	void *devt_priv;
+
+	/* Pointer to parent dev type in the sysfs hierarchy */
+	struct scst_dev_type *parent;
+
+	struct module *module;
+
+	/** Private, must be inited to 0 by memset() **/
+
+	/* list entry in scst_dev_type_list */
+	struct list_head dev_type_list_entry;
+
+	unsigned int devt_kobj_initialized:1;
+
+	struct kobject devt_kobj; /* main handlers/driver */
+
+	/* To wait until devt_kobj released */
+	struct completion devt_kobj_release_compl;
+};
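+
+/*
+ * For illustration only (not part of the interface): a minimal virtual
+ * dev handler could fill in this template roughly as below. The names
+ * example_parse(), example_exec() and example_devtype are hypothetical
+ * and error handling is omitted.
+ *
+ *	static int example_parse(struct scst_cmd *cmd)
+ *	{
+ *		... set cmd->bufflen and cmd->data_direction (REQUIRED) ...
+ *		return SCST_CMD_STATE_DEFAULT;
+ *	}
+ *
+ *	static int example_exec(struct scst_cmd *cmd)
+ *	{
+ *		if (unlikely(scst_check_local_events(cmd) != 0))
+ *			return SCST_EXEC_COMPLETED;
+ *		... execute the CDB ...
+ *		cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT,
+ *			scst_estimate_context());
+ *		return SCST_EXEC_COMPLETED;
+ *	}
+ *
+ *	static struct scst_dev_type example_devtype = {
+ *		.name =		"example_handler",
+ *		.type =		TYPE_DISK,
+ *		.exec_sync =	1,
+ *		.threads_num =	1,
+ *		.parse =	example_parse,
+ *		.exec =		example_exec,
+ *	};
+ *
+ * Such a template would be registered via
+ * scst_register_virtual_dev_driver(&example_devtype), declared below,
+ * which requires example_devtype to be static.
+ */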
+
+/*
+ * An SCST target, analog of a SCSI target port.
+ */
+struct scst_tgt {
+	/* List of remote sessions per target, protected by scst_mutex */
+	struct list_head sess_list;
+
+	/* List entry of targets per template (tgts_list) */
+	struct list_head tgt_list_entry;
+
+	struct scst_tgt_template *tgtt;	/* corresponding target template */
+
+	struct scst_acg *default_acg; /* The default acg for this target. */
+
+	/*
+	 * Device ACG groups
+	 */
+	struct list_head tgt_acg_list;
+
+	/*
+	 * Maximum SG table size. Needed here, since different cards on the
+	 * same target template can have different SG table limitations.
+	 */
+	int sg_tablesize;
+
+	/* Used for storage of target driver private stuff */
+	void *tgt_priv;
+
+	/*
+	 * The following fields are used to store and retry cmds if the
+	 * target's internal queue is full, so the target is unable to
+	 * accept the cmd and returns QUEUE FULL.
+	 * They are protected by tgt_lock, where necessary.
+	 */
+	bool retry_timer_active;
+	struct timer_list retry_timer;
+	atomic_t finished_cmds;
+	int retry_cmds;
+	spinlock_t tgt_lock;
+	struct list_head retry_cmd_list;
+
+	/* Used during unregistration to wait until the sessions have finished */
+	wait_queue_head_t unreg_waitQ;
+
+	/* Name of the target */
+	char *tgt_name;
+
+	uint16_t rel_tgt_id;
+
+	/* Set if tgt_kobj was initialized */
+	unsigned int tgt_kobj_initialized:1;
+
+	/* Set if scst_tgt_sysfs_prepare_put() was called for tgt_kobj */
+	unsigned int tgt_kobj_put_prepared:1;
+
+	/*
+	 * Used to protect sysfs attributes to be called after this
+	 * object was unregistered.
+	 */
+	struct rw_semaphore tgt_attr_rwsem;
+
+	struct kobject tgt_kobj; /* main targets/target kobject */
+	struct kobject *tgt_sess_kobj; /* target/sessions/ */
+	struct kobject *tgt_luns_kobj; /* target/luns/ */
+	struct kobject *tgt_ini_grp_kobj; /* target/ini_groups/ */
+};
+
+/* Hash size and hash fn for hash based lun translation */
+#define	TGT_DEV_HASH_SHIFT	5
+#define	TGT_DEV_HASH_SIZE	(1 << TGT_DEV_HASH_SHIFT)
+#define	HASH_VAL(_val)		(_val & (TGT_DEV_HASH_SIZE - 1))
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+/* Defines extended latency statistics */
+struct scst_ext_latency_stat {
+	uint64_t scst_time_rd, tgt_time_rd, dev_time_rd;
+	unsigned int processed_cmds_rd;
+	uint64_t min_scst_time_rd, min_tgt_time_rd, min_dev_time_rd;
+	uint64_t max_scst_time_rd, max_tgt_time_rd, max_dev_time_rd;
+
+	uint64_t scst_time_wr, tgt_time_wr, dev_time_wr;
+	unsigned int processed_cmds_wr;
+	uint64_t min_scst_time_wr, min_tgt_time_wr, min_dev_time_wr;
+	uint64_t max_scst_time_wr, max_tgt_time_wr, max_dev_time_wr;
+};
+
+#define SCST_IO_SIZE_THRESHOLD_SMALL		(8*1024)
+#define SCST_IO_SIZE_THRESHOLD_MEDIUM		(32*1024)
+#define SCST_IO_SIZE_THRESHOLD_LARGE		(128*1024)
+#define SCST_IO_SIZE_THRESHOLD_VERY_LARGE	(512*1024)
+
+#define SCST_LATENCY_STAT_INDEX_SMALL		0
+#define SCST_LATENCY_STAT_INDEX_MEDIUM		1
+#define SCST_LATENCY_STAT_INDEX_LARGE		2
+#define SCST_LATENCY_STAT_INDEX_VERY_LARGE	3
+#define SCST_LATENCY_STAT_INDEX_OTHER		4
+#define SCST_LATENCY_STATS_NUM		(SCST_LATENCY_STAT_INDEX_OTHER + 1)
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+/*
+ * SCST session, analog of a SCSI I_T nexus
+ */
+struct scst_session {
+	/*
+	 * Initialization phase, one of SCST_SESS_IPH_* constants, protected by
+	 * sess_list_lock
+	 */
+	int init_phase;
+
+	struct scst_tgt *tgt;	/* corresponding target */
+
+	/* Used for storage of target driver private stuff */
+	void *tgt_priv;
+
+	unsigned long sess_aflags; /* session's async flags */
+
+	/*
+	 * Hash list of tgt_dev's for this session, protected by scst_mutex
+	 * and suspended activity
+	 */
+	struct list_head sess_tgt_dev_list_hash[TGT_DEV_HASH_SIZE];
+
+	/*
+	 * List of cmds in this session. Protected by sess_list_lock.
+	 *
+	 * We must always keep commands in the sess list from the
+	 * very beginning, because otherwise they can be missed during
+	 * TM processing.
+	 */
+	struct list_head sess_cmd_list;
+
+	spinlock_t sess_list_lock; /* protects sess_cmd_list, etc */
+
+	atomic_t refcnt;		/* get/put counter */
+
+	/*
+	 * Alive commands for this session. ToDo: make it part of the common
+	 * IO flow control.
+	 */
+	atomic_t sess_cmd_count;
+
+	/* Access control for this session and list entry there */
+	struct scst_acg *acg;
+
+	/* List entry for the sessions list inside ACG */
+	struct list_head acg_sess_list_entry;
+
+	struct delayed_work hw_pending_work;
+
+	/* Name of attached initiator */
+	const char *initiator_name;
+
+	/* List entry of sessions per target */
+	struct list_head sess_list_entry;
+
+	/* List entry for the list of sessions waiting for init */
+	struct list_head sess_init_list_entry;
+
+	/*
+	 * List entry for the list of sessions waiting for shutdown
+	 */
+	struct list_head sess_shut_list_entry;
+
+	/*
+	 * Lists of commands deferred during session initialization.
+	 * Protected by sess_list_lock.
+	 */
+	struct list_head init_deferred_cmd_list;
+	struct list_head init_deferred_mcmd_list;
+
+	/*
+	 * Shutdown phase, one of SCST_SESS_SPH_* constants, unprotected.
+	 * Asynchronous relative to init_phase; it must be a separate variable
+	 * because the session could be unregistered before the asynchronous
+	 * registration has finished.
+	 */
+	unsigned long shut_phase;
+
+	/* Used if scst_unregister_session() called in wait mode */
+	struct completion *shutdown_compl;
+
+	/* Set if sess_kobj was initialized */
+	unsigned int sess_kobj_initialized:1;
+
+	/*
+	 * Used to protect sysfs attributes to be called after this
+	 * object was unregistered.
+	 */
+	struct rw_semaphore sess_attr_rwsem;
+
+	struct kobject sess_kobj; /* kobject for this struct */
+
+	/*
+	 * Functions and data for user callbacks from scst_register_session()
+	 * and scst_unregister_session()
+	 */
+	void *reg_sess_data;
+	void (*init_result_fn) (struct scst_session *sess, void *data,
+				int result);
+	void (*unreg_done_fn) (struct scst_session *sess);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	/*
+	 * Must be the last field, to keep working with drivers that don't
+	 * know about this config-time option.
+	 */
+	spinlock_t lat_lock;
+	uint64_t scst_time, tgt_time, dev_time;
+	unsigned int processed_cmds;
+	uint64_t min_scst_time, min_tgt_time, min_dev_time;
+	uint64_t max_scst_time, max_tgt_time, max_dev_time;
+	struct scst_ext_latency_stat sess_latency_stat[SCST_LATENCY_STATS_NUM];
+#endif
+};
+
+/*
+ * Structure to control commands' queuing and threads pool processing the queue
+ */
+struct scst_cmd_threads {
+	spinlock_t cmd_list_lock;
+	struct list_head active_cmd_list; /* commands queue */
+	wait_queue_head_t cmd_list_waitQ;
+
+	struct io_context *io_context; /* IO context of the threads pool */
+
+	int nr_threads; /* number of processing threads */
+	struct list_head threads_list; /* processing threads */
+
+	struct list_head lists_list_entry;
+};
+
+/*
+ * SCST command, analog of a SCSI I_T_L_Q nexus or task
+ */
+struct scst_cmd {
+	/* List entry for below *_cmd_threads */
+	struct list_head cmd_list_entry;
+
+	/* Pointer to lists of commands with the lock */
+	struct scst_cmd_threads *cmd_threads;
+
+	atomic_t cmd_ref;
+
+	struct scst_session *sess;	/* corresponding session */
+
+	/* Cmd state, one of SCST_CMD_STATE_* constants */
+	int state;
+
+	/*************************************************************
+	 ** Cmd's flags
+	 *************************************************************/
+
+	/*
+	 * Set if expected_sn should be incremented, i.e. cmd was sent
+	 * for execution
+	 */
+	unsigned int sent_for_exec:1;
+
+	/* Set if the cmd's action is completed */
+	unsigned int completed:1;
+
+	/* Set if we should ignore Unit Attention in scst_check_sense() */
+	unsigned int ua_ignore:1;
+
+	/* Set if cmd is being processed in atomic context */
+	unsigned int atomic:1;
+
+	/* Set if this command was sent in double UA possible state */
+	unsigned int double_ua_possible:1;
+
+	/* Set if this command contains status */
+	unsigned int is_send_status:1;
+
+	/* Set if cmd is being retried */
+	unsigned int retry:1;
+
+	/* Set if cmd is internally generated */
+	unsigned int internal:1;
+
+	/* Set if the device was blocked by scst_inc_on_dev_cmd() (for debug) */
+	unsigned int inc_blocking:1;
+
+	/* Set if the device should be unblocked after cmd's finish */
+	unsigned int needs_unblocking:1;
+
+	/* Set if scst_dec_on_dev_cmd() call is needed on the cmd's finish */
+	unsigned int dec_on_dev_needed:1;
+
+	/* Set if cmd is queued as hw pending */
+	unsigned int cmd_hw_pending:1;
+
+	/*
+	 * Set if the target driver wants to alloc data buffers on its own.
+	 * In this case alloc_data_buf() must be provided in the target driver
+	 * template.
+	 */
+	unsigned int tgt_need_alloc_data_buf:1;
+
+	/*
+	 * Set by SCST if the custom data buffer allocation by the target driver
+	 * succeeded.
+	 */
+	unsigned int tgt_data_buf_alloced:1;
+
+	/* Set if custom data buffer allocated by dev handler */
+	unsigned int dh_data_buf_alloced:1;
+
+	/* Set if the target driver called scst_set_expected() */
+	unsigned int expected_values_set:1;
+
+	/*
+	 * Set if the SG buffer was modified by scst_set_resp_data_len()
+	 */
+	unsigned int sg_buff_modified:1;
+
+	/*
+	 * Set if scst_cmd_init_stage1_done() was called and the target
+	 * wants preprocessing_done() to be called
+	 */
+	unsigned int preprocessing_only:1;
+
+	/* Set if cmd's SN was set */
+	unsigned int sn_set:1;
+
+	/* Set if hq_cmd_count was incremented */
+	unsigned int hq_cmd_inced:1;
+
+	/*
+	 * Set if scst_cmd_init_stage1_done() was called and the target wants
+	 * the SN for the cmd not to be assigned until scst_restart_cmd()
+	 */
+	unsigned int set_sn_on_restart_cmd:1;
+
+	/* Set if the cmd must not use the sgv cache for its data buffer */
+	unsigned int no_sgv:1;
+
+	/*
+	 * Set if target driver may need to call dma_sync_sg() or similar
+	 * function before transferring cmd's data to the target device
+	 * via DMA.
+	 */
+	unsigned int may_need_dma_sync:1;
+
+	/* Set if the cmd was done or aborted out of its SN */
+	unsigned int out_of_sn:1;
+
+	/* Set if increment expected_sn in cmd->scst_cmd_done() */
+	unsigned int inc_expected_sn_on_done:1;
+
+	/* Set if tgt_sn field is valid */
+	unsigned int tgt_sn_set:1;
+
+	/* Set if cmd is done */
+	unsigned int done:1;
+
+	/* Set if cmd is finished */
+	unsigned int finished:1;
+
+	/*
+	 * Set if the cmd was delayed by task management debugging code.
+	 * Used only if CONFIG_SCST_DEBUG_TM is on.
+	 */
+	unsigned int tm_dbg_delayed:1;
+
+	/*
+	 * Set if the cmd must be ignored by task management debugging code.
+	 * Used only if CONFIG_SCST_DEBUG_TM is on.
+	 */
+	unsigned int tm_dbg_immut:1;
+
+	/**************************************************************/
+
+	unsigned long cmd_flags; /* cmd's async flags */
+
+	/* Keeps status of cmd's status/data delivery to remote initiator */
+	int delivery_status;
+
+	struct scst_tgt_template *tgtt;	/* to save extra dereferences */
+	struct scst_tgt *tgt;		/* to save extra dereferences */
+	struct scst_device *dev;	/* to save extra dereferences */
+
+	struct scst_tgt_dev *tgt_dev;	/* corresponding device for this cmd */
+
+	uint64_t lun;			/* LUN for this cmd */
+
+	unsigned long start_time;
+
+	/* List entry for tgt_dev's SN related lists */
+	struct list_head sn_cmd_list_entry;
+
+	/* Cmd's serial number, used to execute cmd's in order of arrival */
+	unsigned int sn;
+
+	/* The corresponding sn_slot in tgt_dev->sn_slots */
+	atomic_t *sn_slot;
+
+	/* List entry for sess's sess_cmd_list */
+	struct list_head sess_cmd_list_entry;
+
+	/*
+	 * Used to find the cmd via scst_find_cmd_by_tag(). Set by the
+	 * target driver at the cmd's initialization time
+	 */
+	uint64_t tag;
+
+	uint32_t tgt_sn; /* SN set by target driver (for TM purposes) */
+
+	/* CDB and its len */
+	uint8_t cdb[SCST_MAX_CDB_SIZE];
+	short cdb_len; /* it might be -1 */
+	unsigned short ext_cdb_len;
+	uint8_t *ext_cdb;
+
+	enum scst_cdb_flags op_flags;
+	const char *op_name;
+
+	enum scst_cmd_queue_type queue_type;
+
+	int timeout; /* CDB execution timeout in seconds */
+	int retries; /* Amount of retries that will be done by SCSI mid-level */
+
+	/* SCSI data direction, one of SCST_DATA_* constants */
+	scst_data_direction data_direction;
+
+	/* Remote initiator supplied values, if any */
+	scst_data_direction expected_data_direction;
+	int expected_transfer_len;
+	int expected_in_transfer_len; /* for bidi writes */
+
+	/*
+	 * Cmd data length. Could be different from bufflen for commands like
+	 * VERIFY, which transfer a different amount of data (if any) than
+	 * they process.
+	 */
+	int data_len;
+
+	/* Completion routine */
+	void (*scst_cmd_done) (struct scst_cmd *cmd, int next_state,
+		enum scst_exec_context pref_context);
+
+	struct sgv_pool_obj *sgv;	/* sgv object */
+	int bufflen;			/* cmd buffer length */
+	struct scatterlist *sg;		/* cmd data buffer SG vector */
+	int sg_cnt;			/* SG segments count */
+
+	/*
+	 * Response data length in data buffer. This field must not be set
+	 * directly, use scst_set_resp_data_len() for that
+	 */
+	int resp_data_len;
+
+	/* scst_get_sg_buf_[first,next]() support */
+	int get_sg_buf_entry_num;
+
+	/* Bidirectional transfers support */
+	int in_bufflen;			/* WRITE buffer length */
+	struct sgv_pool_obj *in_sgv;	/* WRITE sgv object */
+	struct scatterlist *in_sg;	/* WRITE data buffer SG vector */
+	int in_sg_cnt;			/* WRITE SG segments count */
+
+	/*
+	 * Used if both target driver and dev handler request own memory
+	 * allocation. In other cases, both are equal to sg and sg_cnt
+	 * respectively.
+	 *
+	 * If target driver requests own memory allocations, it MUST use
+	 * functions scst_cmd_get_tgt_sg*() to get sg and sg_cnt! Otherwise,
+	 * it may use functions scst_cmd_get_sg*().
+	 */
+	struct scatterlist *tgt_sg;
+	int tgt_sg_cnt;
+	struct scatterlist *tgt_in_sg;	/* bidirectional */
+	int tgt_in_sg_cnt;		/* bidirectional */
+
+	/*
+	 * The status fields in case of errors must be set using
+	 * scst_set_cmd_error_status()!
+	 */
+	uint8_t status;		/* status byte from target device */
+	uint8_t msg_status;	/* return status from host adapter itself */
+	uint8_t host_status;	/* set by low-level driver to indicate status */
+	uint8_t driver_status;	/* set by mid-level */
+
+	uint8_t *sense;		/* pointer to sense buffer */
+	unsigned short sense_valid_len; /* length of valid sense data */
+	unsigned short sense_buflen; /* length of the sense buffer, if any */
+
+	/* Start time when cmd was sent to rdy_to_xfer() or xmit_response() */
+	unsigned long hw_pending_start;
+
+	/* Used for storage of target driver private stuff */
+	void *tgt_priv;
+
+	/* Used for storage of dev handler private stuff */
+	void *dh_priv;
+
+	/*
+	 * Used to restore the SG vector if it was modified by
+	 * scst_set_resp_data_len()
+	 */
+	int orig_sg_cnt, orig_sg_entry, orig_entry_len;
+
+	/* Used to retry commands in case of double UA */
+	int dbl_ua_orig_resp_data_len, dbl_ua_orig_data_direction;
+
+	/* List of corresponding mgmt cmds, if any; protected by sess_list_lock */
+	struct list_head mgmt_cmd_list;
+
+	/* List entry for dev's blocked_cmd_list */
+	struct list_head blocked_cmd_list_entry;
+
+	struct scst_cmd *orig_cmd; /* Used to issue REQUEST SENSE */
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	/*
+	 * Must be the last field, to keep working with drivers that don't
+	 * know about this config-time option.
+	 */
+	uint64_t start, curr_start, parse_time, alloc_buf_time;
+	uint64_t restart_waiting_time, rdy_to_xfer_time;
+	uint64_t pre_exec_time, exec_time, dev_done_time;
+	uint64_t xmit_time, tgt_on_free_time, dev_on_free_time;
+#endif
+};
+
+/*
+ * Parameters for SCST management commands
+ */
+struct scst_rx_mgmt_params {
+	int fn;
+	uint64_t tag;
+	const uint8_t *lun;
+	int lun_len;
+	uint32_t cmd_sn;
+	int atomic;
+	void *tgt_priv;
+	unsigned char tag_set;
+	unsigned char lun_set;
+	unsigned char cmd_sn_set;
+};
+
+/*
+ * A stub structure to link a management command and the affected regular commands
+ */
+struct scst_mgmt_cmd_stub {
+	struct scst_mgmt_cmd *mcmd;
+
+	/* List entry in cmd->mgmt_cmd_list */
+	struct list_head cmd_mgmt_cmd_list_entry;
+
+	/* Set if the cmd was counted in mcmd->cmd_done_wait_count */
+	unsigned int done_counted:1;
+};
+
+/*
+ * SCST task management structure
+ */
+struct scst_mgmt_cmd {
+	/* List entry for *_mgmt_cmd_list */
+	struct list_head mgmt_cmd_list_entry;
+
+	struct scst_session *sess;
+
+	/* Mgmt cmd state, one of SCST_MCMD_STATE_* constants */
+	int state;
+
+	int fn; /* task management function */
+
+	unsigned int completed:1;	/* set, if the mcmd is completed */
+	/* Set if device(s) should be unblocked after mcmd's finish */
+	unsigned int needs_unblocking:1;
+	unsigned int lun_set:1;		/* set, if lun field is valid */
+	unsigned int cmd_sn_set:1;	/* set, if cmd_sn field is valid */
+	/* set, if scst_mgmt_affected_cmds_done was called */
+	unsigned int affected_cmds_done_called:1;
+
+	/*
+	 * Number of commands to finish before sending response,
+	 * protected by scst_mcmd_lock
+	 */
+	int cmd_finish_wait_count;
+
+	/*
+	 * Number of commands to complete (done) before resetting reservation,
+	 * protected by scst_mcmd_lock
+	 */
+	int cmd_done_wait_count;
+
+	/* Number of completed commands, protected by scst_mcmd_lock */
+	int completed_cmd_count;
+
+	uint64_t lun;	/* LUN for this mgmt cmd */
+	/* or (and for iSCSI) */
+	uint64_t tag;	/* tag of the corresponding cmd */
+
+	uint32_t cmd_sn; /* affected command's highest SN */
+
+	/* corresponding cmd (to be aborted, found by tag) */
+	struct scst_cmd *cmd_to_abort;
+
+	/* corresponding device for this mgmt cmd (found by lun) */
+	struct scst_tgt_dev *mcmd_tgt_dev;
+
+	/* completion status, one of the SCST_MGMT_STATUS_* constants */
+	int status;
+
+	/* Used for storage of target driver private stuff */
+	void *tgt_priv;
+};
+
+/*
+ * SCST device
+ */
+struct scst_device {
+	struct scst_dev_type *handler;	/* corresponding dev handler */
+
+	struct scst_mem_lim dev_mem_lim;
+
+	unsigned short type;	/* SCSI type of the device */
+
+	/*************************************************************
+	 ** Dev's flags. Updates serialized by dev_lock or suspended
+	 ** activity
+	 *************************************************************/
+
+	/* Set if dev is RESERVED */
+	unsigned short dev_reserved:1;
+
+	/* Set if double reset UA is possible */
+	unsigned short dev_double_ua_possible:1;
+
+	/* If set, dev is read only */
+	unsigned short rd_only:1;
+
+	/**************************************************************/
+
+	/*************************************************************
+	 ** Dev's control mode page related values. Updates serialized
+	 ** by scst_block_dev(). These fields are unsigned long so that
+	 ** updates don't interfere with the flags above.
+	 *************************************************************/
+
+	unsigned long queue_alg:4;
+	unsigned long tst:3;
+	unsigned long tas:1;
+	unsigned long swp:1;
+	unsigned long d_sense:1;
+
+	/*
+	 * Set if the device implements its own ordered commands management.
+	 * If not set and queue_alg is
+	 * SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER, expected_sn will
+	 * be incremented only after commands have finished.
+	 */
+	unsigned long has_own_order_mgmt:1;
+
+	/**************************************************************/
+
+	/* Used for storage of dev handler private stuff */
+	void *dh_priv;
+
+	/* Corresponding real SCSI device, could be NULL for virtual devices */
+	struct scsi_device *scsi_dev;
+
+	/* Lists of commands with lock, if dedicated threads are used */
+	struct scst_cmd_threads dev_cmd_threads;
+
+	/* How many cmds alive on this dev */
+	atomic_t dev_cmd_count;
+
+	/* How many write cmds alive on this dev. Temporary, ToDo */
+	atomic_t write_cmd_count;
+
+	spinlock_t dev_lock;		/* device lock */
+
+	/*
+	 * How many times device was blocked for new cmds execution.
+	 * Protected by dev_lock
+	 */
+	int block_count;
+
+	/*
+	 * How many "on_dev" commands there are, i.e. ones being
+	 * executed by the underlying SCSI/virtual device.
+	 */
+	atomic_t on_dev_count;
+
+	struct list_head blocked_cmd_list; /* protected by dev_lock */
+
+	/* Used to wait for the requested number of "on_dev" commands */
+	wait_queue_head_t on_dev_waitQ;
+
+	/* A list entry used during TM, protected by scst_mutex */
+	struct list_head tm_dev_list_entry;
+
+	/* Virtual device internal ID */
+	int virt_id;
+
+	/* Pointer to virtual device name, for convenience only */
+	char *virt_name;
+
+	/* List entry in global devices list */
+	struct list_head dev_list_entry;
+
+	/*
+	 * List of tgt_dev's, one per session, protected by scst_mutex or
+	 * dev_lock for reads and both for writes
+	 */
+	struct list_head dev_tgt_dev_list;
+
+	/* List of acg_dev's, one per acg, protected by scst_mutex */
+	struct list_head dev_acg_dev_list;
+
+	/* Number of threads in the device's threads pools */
+	int threads_num;
+
+	/* Threads pool type of the device. Valid only if threads_num > 0. */
+	enum scst_dev_type_threads_pool_type threads_pool_type;
+
+	/* Set if dev_kobj was initialized */
+	unsigned int dev_kobj_initialized:1;
+
+	/*
+	 * Used to protect sysfs attributes to be called after this
+	 * object was unregistered.
+	 */
+	struct rw_semaphore dev_attr_rwsem;
+
+	struct kobject dev_kobj; /* kobject for this struct */
+	struct kobject *dev_exp_kobj; /* exported groups */
+
+	/* Export number in the dev's sysfs list. Protected by scst_mutex */
+	int dev_exported_lun_num;
+};
+
+/*
+ * Used to store thread-local, tgt_dev-specific data
+ */
+struct scst_thr_data_hdr {
+	/* List entry in tgt_dev->thr_data_list */
+	struct list_head thr_data_list_entry;
+	struct task_struct *owner_thr; /* the owner thread */
+	atomic_t ref;
+	/* Function that will be called on the tgt_dev destruction */
+	void (*free_fn) (struct scst_thr_data_hdr *data);
+};
+
+/*
+ * Used to cleanly dispose of the async io_context
+ */
+struct scst_async_io_context_keeper {
+	struct kref aic_keeper_kref;
+	struct io_context *aic;
+	struct task_struct *aic_keeper_thr;
+	wait_queue_head_t aic_keeper_waitQ;
+};
+
+/*
+ * Used to store per-session specific device information, analog of
+ * a SCSI I_T_L nexus.
+ */
+struct scst_tgt_dev {
+	/* List entry in sess->sess_tgt_dev_list_hash */
+	struct list_head sess_tgt_dev_list_entry;
+
+	struct scst_device *dev; /* to save extra dereferences */
+	uint64_t lun;		 /* to save extra dereferences */
+
+	gfp_t gfp_mask;
+	struct sgv_pool *pool;
+	int max_sg_cnt;
+
+	unsigned long tgt_dev_flags;	/* tgt_dev's async flags */
+
+	/* Used for storage of dev handler private stuff */
+	void *dh_priv;
+
+	/* How many cmds alive on this dev in this session */
+	atomic_t tgt_dev_cmd_count;
+
+	/*
+	 * Used to execute cmds in order of arrival, honoring SCSI task
+	 * attributes.
+	 *
+	 * Protected by sn_lock, except expected_sn, which is protected by
+	 * itself. curr_sn must have the same size as expected_sn so that
+	 * they overflow simultaneously.
+	 */
+	int def_cmd_count;
+	spinlock_t sn_lock;
+	unsigned int expected_sn;
+	unsigned int curr_sn;
+	int hq_cmd_count;
+	struct list_head deferred_cmd_list;
+	struct list_head skipped_sn_list;
+
+	/*
+	 * Set if the prev cmd was ORDERED. Size must allow unprotected
+	 * modifications independent of the neighbouring fields.
+	 */
+	unsigned long prev_cmd_ordered;
+
+	int num_free_sn_slots; /* if it's <0, then all slots are busy */
+	atomic_t *cur_sn_slot;
+	atomic_t sn_slots[15];
+
+	/* List of scst_thr_data_hdr and lock */
+	spinlock_t thr_data_lock;
+	struct list_head thr_data_list;
+
+	/* Pointer to lists of commands with the lock */
+	struct scst_cmd_threads *active_cmd_threads;
+
+	/* Union to save some CPU cache footprint */
+	union {
+		struct {
+			/* Copy to save fast path dereference */
+			struct io_context *async_io_context;
+
+			struct scst_async_io_context_keeper *aic_keeper;
+		};
+
+		/* Lists of commands with lock, if dedicated threads are used */
+		struct scst_cmd_threads tgt_dev_cmd_threads;
+	};
+
+	spinlock_t tgt_dev_lock;	/* per-session device lock */
+
+	/* List of UA's for this device, protected by tgt_dev_lock */
+	struct list_head UA_list;
+
+	struct scst_session *sess;	/* corresponding session */
+	struct scst_acg_dev *acg_dev;	/* corresponding acg_dev */
+
+	/* List entry in dev->dev_tgt_dev_list */
+	struct list_head dev_tgt_dev_list_entry;
+
+	/* Internal tmp list entry */
+	struct list_head extra_tgt_dev_list_entry;
+
+	/* Set if INQUIRY DATA HAS CHANGED UA is needed */
+	unsigned int inq_changed_ua_needed:1;
+
+	/*
+	 * Stored Unit Attention sense and its length for possible
+	 * subsequent REQUEST SENSE. Both protected by tgt_dev_lock.
+	 */
+	unsigned short tgt_dev_valid_sense_len;
+	uint8_t tgt_dev_sense[SCST_SENSE_BUFFERSIZE];
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	/*
+	 * Must be the last field, to keep working with drivers that don't
+	 * know about this config-time option.
+	 *
+	 * Protected by sess->lat_lock.
+	 */
+	uint64_t scst_time, tgt_time, dev_time;
+	unsigned int processed_cmds;
+	struct scst_ext_latency_stat dev_latency_stat[SCST_LATENCY_STATS_NUM];
+#endif
+};
+
+/*
+ * Used to store ACG-specific device information, like LUN
+ */
+struct scst_acg_dev {
+	struct scst_device *dev; /* corresponding device */
+
+	uint64_t lun; /* device's LUN in this acg */
+
+	/* If set, the corresponding LU is read only */
+	unsigned int rd_only:1;
+
+	/* Set if acg_dev_kobj was initialized */
+	unsigned int acg_dev_kobj_initialized:1;
+
+	struct scst_acg *acg; /* parent acg */
+
+	/* List entry in dev->dev_acg_dev_list */
+	struct list_head dev_acg_dev_list_entry;
+
+	/* List entry in acg->acg_dev_list */
+	struct list_head acg_dev_list_entry;
+
+	/* kobject for this structure */
+	struct kobject acg_dev_kobj;
+};
+
+/*
+ * ACG - access control group. Used to store group-related
+ * control information.
+ */
+struct scst_acg {
+	/* List of acg_dev's in this acg, protected by scst_mutex */
+	struct list_head acg_dev_list;
+
+	/* List of attached sessions, protected by scst_mutex */
+	struct list_head acg_sess_list;
+
+	/* List of attached acn's, protected by scst_mutex */
+	struct list_head acn_list;
+
+	/* List entry in acg_lists */
+	struct list_head acg_list_entry;
+
+	/* Name of this acg */
+	const char *acg_name;
+
+	/* Type of I/O initiator grouping */
+	int acg_io_grouping_type;
+
+	unsigned int acg_kobj_initialized:1;
+	unsigned int in_tgt_acg_list:1;
+
+	/* kobject for this structure */
+	struct kobject acg_kobj;
+
+	struct kobject *luns_kobj;
+	struct kobject *initiators_kobj;
+
+	unsigned int addr_method;
+};
+
+/*
+ * ACN - access control name. Used to store the names by which
+ * incoming sessions will be assigned to the appropriate ACG.
+ */
+struct scst_acn {
+	/* Initiator's name */
+	const char *name;
+	/* List entry in acg->acn_list */
+	struct list_head acn_list_entry;
+
+	/* sysfs file attributes */
+	struct kobj_attribute *acn_attr;
+};
+
+/*
+ * Used to store per-session UNIT ATTENTIONs
+ */
+struct scst_tgt_dev_UA {
+	/* List entry in tgt_dev->UA_list */
+	struct list_head UA_list_entry;
+
+	/* Set if UA is global for session */
+	unsigned short global_UA:1;
+
+	/* Unit Attention valid sense len */
+	unsigned short UA_valid_sense_len;
+	/* Unit Attention sense buf */
+	uint8_t UA_sense_buffer[SCST_SENSE_BUFFERSIZE];
+};
+
+/* Used to deliver AENs */
+struct scst_aen {
+	int event_fn; /* AEN fn */
+
+	struct scst_session *sess;	/* corresponding session */
+	uint64_t lun;			/* corresponding LUN in SCSI form */
+
+	union {
+		/* SCSI AEN data */
+		struct {
+			int aen_sense_len;
+			uint8_t aen_sense[SCST_STANDARD_SENSE_LEN];
+		};
+	};
+
+	/* Keeps status of AEN's delivery to remote initiator */
+	int delivery_status;
+};
+
+#ifndef smp_mb__after_set_bit
+/* There is no smp_mb__after_set_bit() in the kernel */
+#define smp_mb__after_set_bit()                 smp_mb()
+#endif
+
+/*
+ * Registers a target template.
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template(struct scst_tgt_template *vtt,
+	const char *version);
+static inline int scst_register_target_template(struct scst_tgt_template *vtt)
+{
+	return __scst_register_target_template(vtt, SCST_INTERFACE_VERSION);
+}
+
+/*
+ * Registers a target template, non-GPL version.
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *vtt must be static!
+ */
+int __scst_register_target_template_non_gpl(struct scst_tgt_template *vtt,
+	const char *version);
+static inline int scst_register_target_template_non_gpl(
+	struct scst_tgt_template *vtt)
+{
+	return __scst_register_target_template_non_gpl(vtt,
+		SCST_INTERFACE_VERSION);
+}
+
+void scst_unregister_target_template(struct scst_tgt_template *vtt);
+
+struct scst_tgt *scst_register_target(struct scst_tgt_template *vtt,
+	const char *target_name);
+void scst_unregister_target(struct scst_tgt *tgt);
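+
+/*
+ * A sketch of the typical bring-up order in a target driver; the static
+ * template example_tgtt and the target name are hypothetical:
+ *
+ *	static struct scst_tgt_template example_tgtt = { ... };
+ *
+ *	struct scst_tgt *tgt;
+ *	int rc;
+ *
+ *	rc = scst_register_target_template(&example_tgtt);
+ *	if (rc != 0)
+ *		return rc;
+ *	tgt = scst_register_target(&example_tgtt, "example_tgt0");
+ *	if (tgt == NULL) {
+ *		scst_unregister_target_template(&example_tgtt);
+ *		return -ENOMEM;
+ *	}
+ *
+ * Teardown goes in the reverse order: scst_unregister_target(tgt)
+ * first, then scst_unregister_target_template(&example_tgtt).
+ */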
+
+struct scst_session *scst_register_session(struct scst_tgt *tgt, int atomic,
+	const char *initiator_name, void *data,
+	void (*result_fn) (struct scst_session *sess, void *data, int result));
+struct scst_session *scst_register_session_simple(struct scst_tgt *tgt,
+	const char *initiator_name);
+void scst_unregister_session(struct scst_session *sess, int wait,
+	void (*unreg_done_fn) (struct scst_session *sess));
+void scst_unregister_session_simple(struct scst_session *sess);
+
+int __scst_register_dev_driver(struct scst_dev_type *dev_type,
+	const char *version);
+static inline int scst_register_dev_driver(struct scst_dev_type *dev_type)
+{
+	return __scst_register_dev_driver(dev_type, SCST_INTERFACE_VERSION);
+}
+void scst_unregister_dev_driver(struct scst_dev_type *dev_type);
+
+int __scst_register_virtual_dev_driver(struct scst_dev_type *dev_type,
+	const char *version);
+/*
+ * Registers a dev handler driver for virtual devices (e.g. VDISK).
+ * Returns 0 on success or appropriate error code otherwise.
+ *
+ * Note: *dev_type must be static!
+ */
+static inline int scst_register_virtual_dev_driver(
+	struct scst_dev_type *dev_type)
+{
+	return __scst_register_virtual_dev_driver(dev_type,
+		SCST_INTERFACE_VERSION);
+}
+
+void scst_unregister_virtual_dev_driver(struct scst_dev_type *dev_type);
+
+struct scst_cmd *scst_rx_cmd(struct scst_session *sess,
+	const uint8_t *lun, int lun_len, const uint8_t *cdb,
+	int cdb_len, int atomic);
+void scst_cmd_init_done(struct scst_cmd *cmd,
+	enum scst_exec_context pref_context);
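+
+/*
+ * Illustrative reception path for a new SCSI command in a target driver
+ * (a sketch: sess, lun, lun_len, cdb, cdb_len, tag and atomic come from
+ * the transport and are hypothetical here):
+ *
+ *	struct scst_cmd *cmd;
+ *
+ *	cmd = scst_rx_cmd(sess, lun, lun_len, cdb, cdb_len, atomic);
+ *	if (cmd == NULL)
+ *		... fail the command at the transport level ...
+ *	scst_cmd_set_tag(cmd, tag);
+ *	scst_cmd_init_done(cmd, scst_estimate_context());
+ *
+ * SCST then drives the command through the template's callbacks, e.g.
+ * rdy_to_xfer() and xmit_response(), as it progresses.
+ */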
+
+/*
+ * Notifies SCST that the driver finished the first stage of the command
+ * initialization and the command is ready for execution, but that after
+ * SCST has done the command's preprocessing, the preprocessing_done()
+ * function should be called. The second argument sets the preferred
+ * command execution context. See SCST_CONTEXT_* constants for details.
+ *
+ * See comment for scst_cmd_init_done() for the serialization requirements.
+ */
+static inline void scst_cmd_init_stage1_done(struct scst_cmd *cmd,
+	enum scst_exec_context pref_context, int set_sn)
+{
+	cmd->preprocessing_only = 1;
+	cmd->set_sn_on_restart_cmd = !set_sn;
+	scst_cmd_init_done(cmd, pref_context);
+}
+
+void scst_restart_cmd(struct scst_cmd *cmd, int status,
+	enum scst_exec_context pref_context);
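+
+/*
+ * Sketch of the two-stage initialization flow: the driver requests the
+ * preprocessing_done() callback, then resumes the command once SCST has
+ * finished preprocessing. The status value passed to scst_restart_cmd()
+ * is assumed to be one of the preprocessing status constants from
+ * scst_const.h and is shown here only schematically:
+ *
+ *	scst_cmd_init_stage1_done(cmd, SCST_CONTEXT_DIRECT, 1);
+ *	... SCST preprocesses the cmd, then calls the template's
+ *	    preprocessing_done() ...
+ *	scst_restart_cmd(cmd, status, SCST_CONTEXT_DIRECT);
+ */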
+
+void scst_rx_data(struct scst_cmd *cmd, int status,
+	enum scst_exec_context pref_context);
+
+void scst_tgt_cmd_done(struct scst_cmd *cmd,
+	enum scst_exec_context pref_context);
+
+int scst_rx_mgmt_fn(struct scst_session *sess,
+	const struct scst_rx_mgmt_params *params);
+
+/*
+ * Creates a new management command using the tag and sends it for execution.
+ * Can be used for SCST_ABORT_TASK only.
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same sess. Returns 0 for success, error code otherwise.
+ *
+ * Obsolete in favor of scst_rx_mgmt_fn()
+ */
+static inline int scst_rx_mgmt_fn_tag(struct scst_session *sess, int fn,
+	uint64_t tag, int atomic, void *tgt_priv)
+{
+	struct scst_rx_mgmt_params params;
+
+	BUG_ON(fn != SCST_ABORT_TASK);
+
+	memset(&params, 0, sizeof(params));
+	params.fn = fn;
+	params.tag = tag;
+	params.tag_set = 1;
+	params.atomic = atomic;
+	params.tgt_priv = tgt_priv;
+	return scst_rx_mgmt_fn(sess, &params);
+}
+
+/*
+ * Creates a new management command using the LUN and sends it for execution.
+ * Currently can be used for any fn, except SCST_ABORT_TASK.
+ * Must not be called in parallel with scst_unregister_session() for the
+ * same sess. Returns 0 for success, error code otherwise.
+ *
+ * Obsolete in favor of scst_rx_mgmt_fn()
+ */
+static inline int scst_rx_mgmt_fn_lun(struct scst_session *sess, int fn,
+	const uint8_t *lun, int lun_len, int atomic, void *tgt_priv)
+{
+	struct scst_rx_mgmt_params params;
+
+	BUG_ON(fn == SCST_ABORT_TASK);
+
+	memset(&params, 0, sizeof(params));
+	params.fn = fn;
+	params.lun = lun;
+	params.lun_len = lun_len;
+	params.lun_set = 1;
+	params.atomic = atomic;
+	params.tgt_priv = tgt_priv;
+	return scst_rx_mgmt_fn(sess, &params);
+}
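+
+/*
+ * New code should fill struct scst_rx_mgmt_params directly instead of
+ * using the obsolete wrappers above. A sketch for a LUN-based function
+ * (fn_constant stands for one of the task management function constants
+ * from scst_const.h):
+ *
+ *	struct scst_rx_mgmt_params params;
+ *
+ *	memset(&params, 0, sizeof(params));
+ *	params.fn = fn_constant;
+ *	params.lun = lun;
+ *	params.lun_len = lun_len;
+ *	params.lun_set = 1;
+ *	params.atomic = 0;
+ *	params.tgt_priv = tgt_priv;
+ *	rc = scst_rx_mgmt_fn(sess, &params);
+ */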
+
+int scst_get_cdb_info(struct scst_cmd *cmd);
+
+int scst_set_cmd_error_status(struct scst_cmd *cmd, int status);
+int scst_set_cmd_error(struct scst_cmd *cmd, int key, int asc, int ascq);
+void scst_set_busy(struct scst_cmd *cmd);
+
+void scst_check_convert_sense(struct scst_cmd *cmd);
+
+void scst_set_initial_UA(struct scst_session *sess, int key, int asc, int ascq);
+
+void scst_capacity_data_changed(struct scst_device *dev);
+
+struct scst_cmd *scst_find_cmd_by_tag(struct scst_session *sess, uint64_t tag);
+struct scst_cmd *scst_find_cmd(struct scst_session *sess, void *data,
+			       int (*cmp_fn) (struct scst_cmd *cmd,
+					      void *data));
+
+enum dma_data_direction scst_to_dma_dir(int scst_dir);
+enum dma_data_direction scst_to_tgt_dma_dir(int scst_dir);
+
+/*
+ * Returns true if cmd's CDB is fully locally handled by SCST, false
+ * otherwise. The dev handler's parse() and dev_done() are not called
+ * for such commands.
+ */
+static inline bool scst_is_cmd_fully_local(struct scst_cmd *cmd)
+{
+	return (cmd->op_flags & SCST_FULLY_LOCAL_CMD) != 0;
+}
+
+/*
+ * Returns true if cmd's CDB is locally handled by SCST,
+ * false otherwise.
+ */
+static inline bool scst_is_cmd_local(struct scst_cmd *cmd)
+{
+	return (cmd->op_flags & SCST_LOCAL_CMD) != 0;
+}
+
+/* Returns true if cmd can deliver a UA */
+static inline bool scst_is_ua_command(struct scst_cmd *cmd)
+{
+	return (cmd->op_flags & SCST_SKIP_UA) == 0;
+}
+
+int scst_register_virtual_device(struct scst_dev_type *dev_handler,
+	const char *dev_name);
+void scst_unregister_virtual_device(int id);
+
+/*
+ * Get/Set functions for tgt's sg_tablesize
+ */
+static inline int scst_tgt_get_sg_tablesize(struct scst_tgt *tgt)
+{
+	return tgt->sg_tablesize;
+}
+
+static inline void scst_tgt_set_sg_tablesize(struct scst_tgt *tgt, int val)
+{
+	tgt->sg_tablesize = val;
+}
+
+/*
+ * Get/Set functions for tgt's target private data
+ */
+static inline void *scst_tgt_get_tgt_priv(struct scst_tgt *tgt)
+{
+	return tgt->tgt_priv;
+}
+
+static inline void scst_tgt_set_tgt_priv(struct scst_tgt *tgt, void *val)
+{
+	tgt->tgt_priv = val;
+}
+
+/*
+ * Get/Set functions for session's target private data
+ */
+static inline void *scst_sess_get_tgt_priv(struct scst_session *sess)
+{
+	return sess->tgt_priv;
+}
+
+static inline void scst_sess_set_tgt_priv(struct scst_session *sess,
+					      void *val)
+{
+	sess->tgt_priv = val;
+}
+
+/**
+ * Returns TRUE if cmd is being executed in atomic context.
+ *
+ * Note: checkpatch will complain about the use of in_atomic() below. You can
+ * safely ignore this warning since in_atomic() is used here only for debugging
+ * purposes.
+ */
+static inline bool scst_cmd_atomic(struct scst_cmd *cmd)
+{
+	int res = cmd->atomic;
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (unlikely((in_atomic() || in_interrupt() || irqs_disabled()) &&
+		     !res)) {
+		printk(KERN_ERR "ERROR: atomic context and non-atomic cmd\n");
+		dump_stack();
+		cmd->atomic = 1;
+		res = 1;
+	}
+#endif
+	return res;
+}
+
+/*
+ * Returns TRUE if cmd has been preliminarily completed, i.e. completed or
+ * aborted.
+ */
+static inline bool scst_cmd_prelim_completed(struct scst_cmd *cmd)
+{
+	return cmd->completed || test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags);
+}
+
+static inline enum scst_exec_context __scst_estimate_context(bool direct)
+{
+	if (in_irq())
+		return SCST_CONTEXT_TASKLET;
+	else if (irqs_disabled())
+		return SCST_CONTEXT_THREAD;
+	else
+		return direct ? SCST_CONTEXT_DIRECT :
+				SCST_CONTEXT_DIRECT_ATOMIC;
+}
+
+static inline enum scst_exec_context scst_estimate_context(void)
+{
+	return __scst_estimate_context(0);
+}
+
+static inline enum scst_exec_context scst_estimate_context_direct(void)
+{
+	return __scst_estimate_context(1);
+}
+
+/* Returns cmd's CDB */
+static inline const uint8_t *scst_cmd_get_cdb(struct scst_cmd *cmd)
+{
+	return cmd->cdb;
+}
+
+/* Returns cmd's CDB length */
+static inline int scst_cmd_get_cdb_len(struct scst_cmd *cmd)
+{
+	return cmd->cdb_len;
+}
+
+/* Returns cmd's extended CDB */
+static inline const uint8_t *scst_cmd_get_ext_cdb(struct scst_cmd *cmd)
+{
+	return cmd->ext_cdb;
+}
+
+/* Returns cmd's extended CDB length */
+static inline int scst_cmd_get_ext_cdb_len(struct scst_cmd *cmd)
+{
+	return cmd->ext_cdb_len;
+}
+
+/* Sets cmd's extended CDB and its length */
+static inline void scst_cmd_set_ext_cdb(struct scst_cmd *cmd,
+	uint8_t *ext_cdb, unsigned int ext_cdb_len)
+{
+	cmd->ext_cdb = ext_cdb;
+	cmd->ext_cdb_len = ext_cdb_len;
+}
+
+/* Returns cmd's session */
+static inline struct scst_session *scst_cmd_get_session(struct scst_cmd *cmd)
+{
+	return cmd->sess;
+}
+
+/* Returns cmd's response data length */
+static inline int scst_cmd_get_resp_data_len(struct scst_cmd *cmd)
+{
+	return cmd->resp_data_len;
+}
+
+/* Returns if status should be sent for cmd */
+static inline int scst_cmd_get_is_send_status(struct scst_cmd *cmd)
+{
+	return cmd->is_send_status;
+}
+
+/*
+ * Returns pointer to cmd's SG data buffer.
+ *
+ * Usage of this function is not recommended, use scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline struct scatterlist *scst_cmd_get_sg(struct scst_cmd *cmd)
+{
+	return cmd->sg;
+}
+
+/*
+ * Returns cmd's sg_cnt.
+ *
+ * Usage of this function is not recommended, use scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline int scst_cmd_get_sg_cnt(struct scst_cmd *cmd)
+{
+	return cmd->sg_cnt;
+}
+
+/*
+ * Returns cmd's data buffer length.
+ *
+ * If you need to iterate over the data in the buffer, usage of
+ * this function is not recommended; use the scst_get_buf_*()
+ * family of functions instead.
+ */
+static inline unsigned int scst_cmd_get_bufflen(struct scst_cmd *cmd)
+{
+	return cmd->bufflen;
+}
+
+/*
+ * Returns pointer to cmd's bidirectional in (WRITE) SG data buffer.
+ *
+ * Usage of this function is not recommended, use scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline struct scatterlist *scst_cmd_get_in_sg(struct scst_cmd *cmd)
+{
+	return cmd->in_sg;
+}
+
+/*
+ * Returns cmd's bidirectional in (WRITE) sg_cnt.
+ *
+ * Usage of this function is not recommended, use scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline int scst_cmd_get_in_sg_cnt(struct scst_cmd *cmd)
+{
+	return cmd->in_sg_cnt;
+}
+
+/*
+ * Returns cmd's bidirectional in (WRITE) data buffer length.
+ *
+ * If you need to iterate over the data in the buffer, usage of
+ * this function is not recommended; use the scst_get_in_buf_*()
+ * family of functions instead.
+ */
+static inline unsigned int scst_cmd_get_in_bufflen(struct scst_cmd *cmd)
+{
+	return cmd->in_bufflen;
+}
+
+/* Returns pointer to cmd's target's SG data buffer */
+static inline struct scatterlist *scst_cmd_get_tgt_sg(struct scst_cmd *cmd)
+{
+	return cmd->tgt_sg;
+}
+
+/* Returns cmd's target's sg_cnt */
+static inline int scst_cmd_get_tgt_sg_cnt(struct scst_cmd *cmd)
+{
+	return cmd->tgt_sg_cnt;
+}
+
+/* Sets cmd's target's SG data buffer */
+static inline void scst_cmd_set_tgt_sg(struct scst_cmd *cmd,
+	struct scatterlist *sg, int sg_cnt)
+{
+	cmd->tgt_sg = sg;
+	cmd->tgt_sg_cnt = sg_cnt;
+	cmd->tgt_data_buf_alloced = 1;
+}
+
+/* Returns pointer to cmd's target's IN SG data buffer */
+static inline struct scatterlist *scst_cmd_get_in_tgt_sg(struct scst_cmd *cmd)
+{
+	return cmd->tgt_in_sg;
+}
+
+/* Returns cmd's target's IN sg_cnt */
+static inline int scst_cmd_get_tgt_in_sg_cnt(struct scst_cmd *cmd)
+{
+	return cmd->tgt_in_sg_cnt;
+}
+
+/* Sets cmd's target's IN SG data buffer */
+static inline void scst_cmd_set_tgt_in_sg(struct scst_cmd *cmd,
+	struct scatterlist *sg, int sg_cnt)
+{
+	WARN_ON(!cmd->tgt_data_buf_alloced);
+
+	cmd->tgt_in_sg = sg;
+	cmd->tgt_in_sg_cnt = sg_cnt;
+}
+
+/* Returns cmd's data direction */
+static inline scst_data_direction scst_cmd_get_data_direction(
+	struct scst_cmd *cmd)
+{
+	return cmd->data_direction;
+}
+
+/* Returns cmd's status byte from the target device */
+static inline uint8_t scst_cmd_get_status(struct scst_cmd *cmd)
+{
+	return cmd->status;
+}
+
+/* Returns cmd's status from host adapter itself */
+static inline uint8_t scst_cmd_get_msg_status(struct scst_cmd *cmd)
+{
+	return cmd->msg_status;
+}
+
+/* Returns cmd's status set by low-level driver to indicate its status */
+static inline uint8_t scst_cmd_get_host_status(struct scst_cmd *cmd)
+{
+	return cmd->host_status;
+}
+
+/* Returns cmd's status set by SCSI mid-level */
+static inline uint8_t scst_cmd_get_driver_status(struct scst_cmd *cmd)
+{
+	return cmd->driver_status;
+}
+
+/* Returns pointer to cmd's sense buffer */
+static inline uint8_t *scst_cmd_get_sense_buffer(struct scst_cmd *cmd)
+{
+	return cmd->sense;
+}
+
+/* Returns cmd's valid sense length */
+static inline int scst_cmd_get_sense_buffer_len(struct scst_cmd *cmd)
+{
+	return cmd->sense_valid_len;
+}
+
+/*
+ * Get/Set functions for cmd's queue_type
+ */
+static inline enum scst_cmd_queue_type scst_cmd_get_queue_type(
+	struct scst_cmd *cmd)
+{
+	return cmd->queue_type;
+}
+
+static inline void scst_cmd_set_queue_type(struct scst_cmd *cmd,
+	enum scst_cmd_queue_type queue_type)
+{
+	cmd->queue_type = queue_type;
+}
+
+/*
+ * Get/Set functions for cmd's tag
+ */
+static inline uint64_t scst_cmd_get_tag(struct scst_cmd *cmd)
+{
+	return cmd->tag;
+}
+
+static inline void scst_cmd_set_tag(struct scst_cmd *cmd, uint64_t tag)
+{
+	cmd->tag = tag;
+}
+
+/*
+ * Get/Set functions for cmd's target private data.
+ * The variant with *_lock must be used if the target driver uses
+ * scst_find_cmd() to avoid race with it, except inside scst_find_cmd()'s
+ * callback, where lock is already taken.
+ */
+static inline void *scst_cmd_get_tgt_priv(struct scst_cmd *cmd)
+{
+	return cmd->tgt_priv;
+}
+
+static inline void scst_cmd_set_tgt_priv(struct scst_cmd *cmd, void *val)
+{
+	cmd->tgt_priv = val;
+}
+
+/*
+ * Get/Set functions for tgt_need_alloc_data_buf flag
+ */
+static inline int scst_cmd_get_tgt_need_alloc_data_buf(struct scst_cmd *cmd)
+{
+	return cmd->tgt_need_alloc_data_buf;
+}
+
+static inline void scst_cmd_set_tgt_need_alloc_data_buf(struct scst_cmd *cmd)
+{
+	cmd->tgt_need_alloc_data_buf = 1;
+}
+
+/*
+ * Get/Set functions for tgt_data_buf_alloced flag
+ */
+static inline int scst_cmd_get_tgt_data_buff_alloced(struct scst_cmd *cmd)
+{
+	return cmd->tgt_data_buf_alloced;
+}
+
+static inline void scst_cmd_set_tgt_data_buff_alloced(struct scst_cmd *cmd)
+{
+	cmd->tgt_data_buf_alloced = 1;
+}
+
+/*
+ * Get/Set functions for dh_data_buf_alloced flag
+ */
+static inline int scst_cmd_get_dh_data_buff_alloced(struct scst_cmd *cmd)
+{
+	return cmd->dh_data_buf_alloced;
+}
+
+static inline void scst_cmd_set_dh_data_buff_alloced(struct scst_cmd *cmd)
+{
+	cmd->dh_data_buf_alloced = 1;
+}
+
+/*
+ * Get/Set functions for no_sgv flag
+ */
+static inline int scst_cmd_get_no_sgv(struct scst_cmd *cmd)
+{
+	return cmd->no_sgv;
+}
+
+static inline void scst_cmd_set_no_sgv(struct scst_cmd *cmd)
+{
+	cmd->no_sgv = 1;
+}
+
+/*
+ * Get/Set functions for tgt_sn
+ */
+static inline int scst_cmd_get_tgt_sn(struct scst_cmd *cmd)
+{
+	BUG_ON(!cmd->tgt_sn_set);
+	return cmd->tgt_sn;
+}
+
+static inline void scst_cmd_set_tgt_sn(struct scst_cmd *cmd, uint32_t tgt_sn)
+{
+	cmd->tgt_sn_set = 1;
+	cmd->tgt_sn = tgt_sn;
+}
+
+/*
+ * Returns 1 if the cmd was aborted, so its status is invalid and no
+ * reply shall be sent to the remote initiator. A target driver should
+ * only clear the internal resources associated with the cmd.
+ */
+static inline int scst_cmd_aborted(struct scst_cmd *cmd)
+{
+	return test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags) &&
+		!test_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+}
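+
+/*
+ * For example, a target driver's xmit_response() would typically begin
+ * with a check along these lines before sending anything (a sketch; the
+ * cleanup step and the return value are driver-specific):
+ *
+ *	if (unlikely(scst_cmd_aborted(cmd))) {
+ *		... release transport resources, send no reply ...
+ *		scst_tgt_cmd_done(cmd, scst_estimate_context());
+ *		return ...;
+ *	}
+ */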
+
+/* Returns sense data format for cmd's dev */
+static inline bool scst_get_cmd_dev_d_sense(struct scst_cmd *cmd)
+{
+	return (cmd->dev != NULL) ? cmd->dev->d_sense : 0;
+}
+
+/*
+ * Get/Set functions for expected data direction, transfer length
+ * and its validity flag
+ */
+static inline int scst_cmd_is_expected_set(struct scst_cmd *cmd)
+{
+	return cmd->expected_values_set;
+}
+
+static inline scst_data_direction scst_cmd_get_expected_data_direction(
+	struct scst_cmd *cmd)
+{
+	return cmd->expected_data_direction;
+}
+
+static inline int scst_cmd_get_expected_transfer_len(
+	struct scst_cmd *cmd)
+{
+	return cmd->expected_transfer_len;
+}
+
+static inline int scst_cmd_get_expected_in_transfer_len(
+	struct scst_cmd *cmd)
+{
+	return cmd->expected_in_transfer_len;
+}
+
+static inline void scst_cmd_set_expected(struct scst_cmd *cmd,
+	scst_data_direction expected_data_direction,
+	int expected_transfer_len)
+{
+	cmd->expected_data_direction = expected_data_direction;
+	cmd->expected_transfer_len = expected_transfer_len;
+	cmd->expected_values_set = 1;
+}
+
+static inline void scst_cmd_set_expected_in_transfer_len(struct scst_cmd *cmd,
+	int expected_in_transfer_len)
+{
+	WARN_ON(!cmd->expected_values_set);
+	cmd->expected_in_transfer_len = expected_in_transfer_len;
+}
+
+/*
+ * Get/clear functions for cmd's may_need_dma_sync
+ */
+static inline int scst_get_may_need_dma_sync(struct scst_cmd *cmd)
+{
+	return cmd->may_need_dma_sync;
+}
+
+static inline void scst_clear_may_need_dma_sync(struct scst_cmd *cmd)
+{
+	cmd->may_need_dma_sync = 0;
+}
+
+/*
+ * Get/set functions for cmd's delivery_status. It is one of
+ * SCST_CMD_DELIVERY_* constants. It specifies the status of the
+ * command's delivery to the initiator.
+ */
+static inline int scst_get_delivery_status(struct scst_cmd *cmd)
+{
+	return cmd->delivery_status;
+}
+
+static inline void scst_set_delivery_status(struct scst_cmd *cmd,
+	int delivery_status)
+{
+	cmd->delivery_status = delivery_status;
+}
+
+/*
+ * Get/Set function for mgmt cmd's target private data
+ */
+static inline void *scst_mgmt_cmd_get_tgt_priv(struct scst_mgmt_cmd *mcmd)
+{
+	return mcmd->tgt_priv;
+}
+
+static inline void scst_mgmt_cmd_set_tgt_priv(struct scst_mgmt_cmd *mcmd,
+	void *val)
+{
+	mcmd->tgt_priv = val;
+}
+
+/* Returns mgmt cmd's completion status (SCST_MGMT_STATUS_* constants) */
+static inline int scst_mgmt_cmd_get_status(struct scst_mgmt_cmd *mcmd)
+{
+	return mcmd->status;
+}
+
+/* Returns mgmt cmd's TM fn */
+static inline int scst_mgmt_cmd_get_fn(struct scst_mgmt_cmd *mcmd)
+{
+	return mcmd->fn;
+}
+
+/*
+ * Called by dev handler's task_mgmt_fn() to notify SCST core that mcmd
+ * is going to complete asynchronously.
+ */
+void scst_prepare_async_mcmd(struct scst_mgmt_cmd *mcmd);
+
+/*
+ * Called by the dev handler to notify the SCST core that an asynchronous
+ * mcmd has completed with status "status".
+ */
+void scst_async_mcmd_completed(struct scst_mgmt_cmd *mcmd, int status);
+
+/* Returns AEN's fn */
+static inline int scst_aen_get_event_fn(struct scst_aen *aen)
+{
+	return aen->event_fn;
+}
+
+/* Returns AEN's session */
+static inline struct scst_session *scst_aen_get_sess(struct scst_aen *aen)
+{
+	return aen->sess;
+}
+
+/* Returns AEN's LUN */
+static inline uint64_t scst_aen_get_lun(struct scst_aen *aen)
+{
+	return aen->lun;
+}
+
+/* Returns SCSI AEN's sense */
+static inline const uint8_t *scst_aen_get_sense(struct scst_aen *aen)
+{
+	return aen->aen_sense;
+}
+
+/* Returns SCSI AEN's sense length */
+static inline int scst_aen_get_sense_len(struct scst_aen *aen)
+{
+	return aen->aen_sense_len;
+}
+
+/*
+ * Get/set functions for AEN's delivery_status. It is one of
+ * SCST_AEN_RES_* constants. It specifies the status of the
+ * AEN's delivery to the initiator.
+ */
+static inline int scst_get_aen_delivery_status(struct scst_aen *aen)
+{
+	return aen->delivery_status;
+}
+
+static inline void scst_set_aen_delivery_status(struct scst_aen *aen,
+	int status)
+{
+	aen->delivery_status = status;
+}
+
+void scst_aen_done(struct scst_aen *aen);
+
+static inline void sg_clear(struct scatterlist *sg)
+{
+	memset(sg, 0, sizeof(*sg));
+#ifdef CONFIG_DEBUG_SG
+	sg->sg_magic = SG_MAGIC;
+#endif
+}
+
+enum scst_sg_copy_dir {
+	SCST_SG_COPY_FROM_TARGET,
+	SCST_SG_COPY_TO_TARGET
+};
+
+void scst_copy_sg(struct scst_cmd *cmd, enum scst_sg_copy_dir copy_dir);
+
+/*
+ * Functions for accessing the command's data (SG) buffer, including in
+ * HIGHMEM environments. They should be used instead of direct access.
+ * Each "get" returns the mapped buffer length on success, 0 for EOD,
+ * or a negative error code otherwise.
+ *
+ * The "buf" argument returns the mapped buffer.
+ *
+ * The "put" functions unmap the buffer.
+ */
+static inline int __scst_get_buf(struct scst_cmd *cmd, struct scatterlist *sg,
+	int sg_cnt, uint8_t **buf)
+{
+	int res = 0;
+	int i = cmd->get_sg_buf_entry_num;
+
+	*buf = NULL;
+
+	if ((i >= sg_cnt) || unlikely(sg == NULL))
+		goto out;
+
+	*buf = page_address(sg_page(&sg[i]));
+	*buf += sg[i].offset;
+
+	res = sg[i].length;
+	cmd->get_sg_buf_entry_num++;
+
+out:
+	return res;
+}
+
+static inline int scst_get_buf_first(struct scst_cmd *cmd, uint8_t **buf)
+{
+	cmd->get_sg_buf_entry_num = 0;
+	cmd->may_need_dma_sync = 1;
+	return __scst_get_buf(cmd, cmd->sg, cmd->sg_cnt, buf);
+}
+
+static inline int scst_get_buf_next(struct scst_cmd *cmd, uint8_t **buf)
+{
+	return __scst_get_buf(cmd, cmd->sg, cmd->sg_cnt, buf);
+}
+
+static inline void scst_put_buf(struct scst_cmd *cmd, void *buf)
+{
+	/* Nothing to do */
+}
+
+static inline int scst_get_in_buf_first(struct scst_cmd *cmd, uint8_t **buf)
+{
+	cmd->get_sg_buf_entry_num = 0;
+	cmd->may_need_dma_sync = 1;
+	return __scst_get_buf(cmd, cmd->in_sg, cmd->in_sg_cnt, buf);
+}
+
+static inline int scst_get_in_buf_next(struct scst_cmd *cmd, uint8_t **buf)
+{
+	return __scst_get_buf(cmd, cmd->in_sg, cmd->in_sg_cnt, buf);
+}
+
+static inline void scst_put_in_buf(struct scst_cmd *cmd, void *buf)
+{
+	/* Nothing to do */
+}
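+
+/*
+ * Typical iteration over a command's data buffer using the functions
+ * above (a sketch; the per-chunk processing is hypothetical):
+ *
+ *	uint8_t *buf;
+ *	int len;
+ *
+ *	len = scst_get_buf_first(cmd, &buf);
+ *	while (len > 0) {
+ *		... process len bytes at buf ...
+ *		scst_put_buf(cmd, buf);
+ *		len = scst_get_buf_next(cmd, &buf);
+ *	}
+ *	if (len < 0)
+ *		... handle the error ...
+ */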
+
+/*
+ * Returns the approximate, rounded-up count of buffers that
+ * scst_get_buf_[first|next]() will return.
+ */
+static inline int scst_get_buf_count(struct scst_cmd *cmd)
+{
+	return (cmd->sg_cnt == 0) ? 1 : cmd->sg_cnt;
+}
+
+/*
+ * Returns the approximate, rounded-up count of buffers that
+ * scst_get_in_buf_[first|next]() will return.
+ */
+static inline int scst_get_in_buf_count(struct scst_cmd *cmd)
+{
+	return (cmd->in_sg_cnt == 0) ? 1 : cmd->in_sg_cnt;
+}
+
+int scst_suspend_activity(bool interruptible);
+void scst_resume_activity(void);
+
+void scst_process_active_cmd(struct scst_cmd *cmd, bool atomic);
+void scst_post_parse_process_active_cmd(struct scst_cmd *cmd, bool atomic);
+
+int scst_check_local_events(struct scst_cmd *cmd);
+
+int scst_get_cmd_abnormal_done_state(const struct scst_cmd *cmd);
+void scst_set_cmd_abnormal_done_state(struct scst_cmd *cmd);
+
+struct scst_trace_log {
+	unsigned int val;
+	const char *token;
+};
+
+extern struct mutex scst_mutex;
+
+extern struct sysfs_ops scst_sysfs_ops;
+
+/*
+ * Returns the target driver's root sysfs kobject.
+ * The driver can create its own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_tgtt_kobj(
+	struct scst_tgt_template *tgtt)
+{
+	return &tgtt->tgtt_kobj;
+}
+
+/*
+ * Returns the target's root sysfs kobject.
+ * The driver can create its own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_tgt_kobj(
+	struct scst_tgt *tgt)
+{
+	return &tgt->tgt_kobj;
+}
+
+/*
+ * Returns the device handler's root sysfs kobject.
+ * The driver can create its own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_devt_kobj(
+	struct scst_dev_type *devt)
+{
+	return &devt->devt_kobj;
+}
+
+/*
+ * Returns the device's root sysfs kobject.
+ * The driver can create its own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_dev_kobj(
+	struct scst_device *dev)
+{
+	return &dev->dev_kobj;
+}
+
+/*
+ * Returns the session's root sysfs kobject.
+ * The driver can create its own files/directories/links here.
+ */
+static inline struct kobject *scst_sysfs_get_sess_kobj(
+	struct scst_session *sess)
+{
+	return &sess->sess_kobj;
+}
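+
+/*
+ * Sketch (illustrative, using the standard kobject/sysfs helpers; my_attr
+ * and my_show() are placeholders): a driver can attach its own attribute
+ * to one of the kobjects returned above, e.g.:
+ *
+ *	static struct kobj_attribute my_attr =
+ *		__ATTR(my_param, S_IRUGO, my_show, NULL);
+ *
+ *	res = sysfs_create_file(scst_sysfs_get_tgt_kobj(tgt),
+ *				&my_attr.attr);
+ */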
+
+/* Returns target name */
+static inline const char *scst_get_tgt_name(const struct scst_tgt *tgt)
+{
+	return tgt->tgt_name;
+}
+
+int scst_alloc_sense(struct scst_cmd *cmd, int atomic);
+int scst_alloc_set_sense(struct scst_cmd *cmd, int atomic,
+	const uint8_t *sense, unsigned int len);
+
+int scst_set_sense(uint8_t *buffer, int len, bool d_sense,
+	int key, int asc, int ascq);
+
+bool scst_is_ua_sense(const uint8_t *sense, int len);
+
+bool scst_analyze_sense(const uint8_t *sense, int len,
+	unsigned int valid_mask, int key, int asc, int ascq);
+
+unsigned long scst_random(void);
+
+void scst_set_resp_data_len(struct scst_cmd *cmd, int resp_data_len);
+
+void scst_get(void);
+void scst_put(void);
+
+void scst_cmd_get(struct scst_cmd *cmd);
+void scst_cmd_put(struct scst_cmd *cmd);
+
+struct scatterlist *scst_alloc(int size, gfp_t gfp_mask, int *count);
+void scst_free(struct scatterlist *sg, int count);
+
+void scst_add_thr_data(struct scst_tgt_dev *tgt_dev,
+	struct scst_thr_data_hdr *data,
+	void (*free_fn) (struct scst_thr_data_hdr *data));
+void scst_del_all_thr_data(struct scst_tgt_dev *tgt_dev);
+void scst_dev_del_all_thr_data(struct scst_device *dev);
+struct scst_thr_data_hdr *__scst_find_thr_data(struct scst_tgt_dev *tgt_dev,
+	struct task_struct *tsk);
+
+/* Finds data local to the current thread. Returns NULL if not found. */
+static inline struct scst_thr_data_hdr *scst_find_thr_data(
+	struct scst_tgt_dev *tgt_dev)
+{
+	return __scst_find_thr_data(tgt_dev, current);
+}
+
+/* Increase ref counter for the thread data */
+static inline void scst_thr_data_get(struct scst_thr_data_hdr *data)
+{
+	atomic_inc(&data->ref);
+}
+
+/* Decrease ref counter for the thread data */
+static inline void scst_thr_data_put(struct scst_thr_data_hdr *data)
+{
+	if (atomic_dec_and_test(&data->ref))
+		data->free_fn(data);
+}
+
+int scst_calc_block_shift(int sector_size);
+int scst_sbc_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_cdrom_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_modisk_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd));
+int scst_tape_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_size)(struct scst_cmd *cmd));
+int scst_changer_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd));
+int scst_processor_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd));
+int scst_raid_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd));
+
+int scst_block_generic_dev_done(struct scst_cmd *cmd,
+	void (*set_block_shift)(struct scst_cmd *cmd, int block_shift));
+int scst_tape_generic_dev_done(struct scst_cmd *cmd,
+	void (*set_block_size)(struct scst_cmd *cmd, int block_size));
+
+int scst_obtain_device_parameters(struct scst_device *dev);
+
+int scst_get_max_lun_commands(struct scst_session *sess, uint64_t lun);
+
+/*
+ * Open coded here because Linux doesn't have an equivalent that allows
+ * exclusive wake-ups of threads in LIFO order. We need it to let (yet)
+ * unneeded threads sleep and not pollute the CPU cache with their stacks.
+ */
+static inline void add_wait_queue_exclusive_head(wait_queue_head_t *q,
+	wait_queue_t *wait)
+{
+	unsigned long flags;
+
+	wait->flags |= WQ_FLAG_EXCLUSIVE;
+	spin_lock_irqsave(&q->lock, flags);
+	__add_wait_queue(q, wait);
+	spin_unlock_irqrestore(&q->lock, flags);
+}
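+
+/*
+ * Sketch of the intended wait pattern (illustrative; "wq" and "cond" are
+ * placeholders for a driver's wait queue head and wake-up condition):
+ *
+ *	wait_queue_t wait;
+ *
+ *	init_waitqueue_entry(&wait, current);
+ *	add_wait_queue_exclusive_head(&wq, &wait);
+ *	for (;;) {
+ *		set_current_state(TASK_INTERRUPTIBLE);
+ *		if (cond)
+ *			break;
+ *		schedule();
+ *	}
+ *	set_current_state(TASK_RUNNING);
+ *	remove_wait_queue(&wq, &wait);
+ */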
+
+/*
+ * Structure to match events sent to user space with the replies to them
+ */
+struct scst_sysfs_user_info {
+	/* Unique cookie to identify request */
+	uint32_t info_cookie;
+
+	/* Entry in the global list */
+	struct list_head info_list_entry;
+
+	/* Set if reply from the user space is being executed */
+	unsigned int info_being_executed:1;
+
+	/* Set if this info is in the info_list */
+	unsigned int info_in_list:1;
+
+	/* Completion to wait on for the request completion */
+	struct completion info_completion;
+
+	/* Request completion status and optional data */
+	int info_status;
+	void *data;
+};
+
+int scst_sysfs_user_add_info(struct scst_sysfs_user_info **out_info);
+void scst_sysfs_user_del_info(struct scst_sysfs_user_info *info);
+struct scst_sysfs_user_info *scst_sysfs_user_get_info(uint32_t cookie);
+int scst_wait_info_completion(struct scst_sysfs_user_info *info,
+	unsigned long timeout);
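+
+/*
+ * Sketch of the intended flow for the functions above (illustrative;
+ * "timeout" is a placeholder):
+ *
+ *	struct scst_sysfs_user_info *info;
+ *
+ *	res = scst_sysfs_user_add_info(&info);
+ *	if (res == 0) {
+ *		... send an event carrying info->info_cookie to user space ...
+ *		res = scst_wait_info_completion(info, timeout);
+ *		scst_sysfs_user_del_info(info);
+ *	}
+ */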
+
+unsigned int scst_get_setup_id(void);
+
+char *scst_get_next_lexem(char **token_str);
+void scst_restore_token_str(char *prev_lexem, char *token_str);
+char *scst_get_next_token_str(char **input_str);
+
+void scst_init_threads(struct scst_cmd_threads *cmd_threads);
+void scst_deinit_threads(struct scst_cmd_threads *cmd_threads);
+
+#endif /* __SCST_H */


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 3/12/1/5] SCST core's scst_main.c
       [not found] ` <4BC44D08.4060907@vlnb.net>
  2010-04-13 13:04   ` [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig Vladislav Bolkhovitin
  2010-04-13 13:04   ` [PATCH][RFC 2/12/1/5] SCST core's external headers Vladislav Bolkhovitin
@ 2010-04-13 13:04   ` Vladislav Bolkhovitin
  2010-04-13 13:05   ` [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c Vladislav Bolkhovitin
                     ` (6 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:04 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains file scst_main.c.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_main.c | 2047 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 2047 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_main.c linux-2.6.33/drivers/scst/scst_main.c
--- orig/linux-2.6.33/drivers/scst/scst_main.c
+++ linux-2.6.33/drivers/scst/scst_main.c
@@ -0,0 +1,2047 @@
+/*
+ *  scst_main.c
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <linux/kthread.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+#if defined(CONFIG_HIGHMEM4G) || defined(CONFIG_HIGHMEM64G)
+#warning "HIGHMEM kernel configurations are fully supported, but not\
+ recommended for performance reasons. Consider changing VMSPLIT\
+ option or use a 64-bit configuration instead. See README file for\
+ details."
+#endif
+
+/**
+ ** SCST global variables. They are all left uninitialized so that their
+ ** layout in memory is exactly as specified. Otherwise, the compiler would
+ ** place zero-initialized variables separately from nonzero-initialized
+ ** ones.
+ **/
+
+/*
+ * Main SCST mutex. All management of targets, devices and dev_types is
+ * done under this mutex.
+ *
+ * It must NOT be taken in any work items (schedule_work(), etc.), because
+ * otherwise a deadlock (a double lock, actually) is possible, e.g., with
+ * scst_user detach_tgt(), which is called under scst_mutex and calls
+ * flush_scheduled_work().
+ */
+struct mutex scst_mutex;
+EXPORT_SYMBOL_GPL(scst_mutex);
+
+/* All 3 lists are protected by scst_mutex */
+struct list_head scst_template_list;
+struct list_head scst_dev_list;
+struct list_head scst_dev_type_list;
+
+spinlock_t scst_main_lock;
+
+static struct kmem_cache *scst_mgmt_cachep;
+mempool_t *scst_mgmt_mempool;
+static struct kmem_cache *scst_mgmt_stub_cachep;
+mempool_t *scst_mgmt_stub_mempool;
+static struct kmem_cache *scst_ua_cachep;
+mempool_t *scst_ua_mempool;
+static struct kmem_cache *scst_sense_cachep;
+mempool_t *scst_sense_mempool;
+static struct kmem_cache *scst_aen_cachep;
+mempool_t *scst_aen_mempool;
+struct kmem_cache *scst_tgtd_cachep;
+struct kmem_cache *scst_sess_cachep;
+struct kmem_cache *scst_acgd_cachep;
+
+unsigned int scst_setup_id;
+
+spinlock_t scst_init_lock;
+wait_queue_head_t scst_init_cmd_list_waitQ;
+struct list_head scst_init_cmd_list;
+unsigned int scst_init_poll_cnt;
+
+struct kmem_cache *scst_cmd_cachep;
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+unsigned long scst_trace_flag;
+#endif
+
+unsigned long scst_flags;
+atomic_t scst_cmd_count;
+
+struct scst_cmd_threads scst_main_cmd_threads;
+
+struct scst_tasklet scst_tasklets[NR_CPUS];
+
+spinlock_t scst_mcmd_lock;
+struct list_head scst_active_mgmt_cmd_list;
+struct list_head scst_delayed_mgmt_cmd_list;
+wait_queue_head_t scst_mgmt_cmd_list_waitQ;
+
+wait_queue_head_t scst_mgmt_waitQ;
+spinlock_t scst_mgmt_lock;
+struct list_head scst_sess_init_list;
+struct list_head scst_sess_shut_list;
+
+wait_queue_head_t scst_dev_cmd_waitQ;
+
+static struct mutex scst_suspend_mutex;
+/* protected by scst_suspend_mutex */
+static struct list_head scst_cmd_threads_list;
+
+int scst_threads;
+static struct task_struct *scst_init_cmd_thread;
+static struct task_struct *scst_mgmt_thread;
+static struct task_struct *scst_mgmt_cmd_thread;
+
+static int suspend_count;
+
+static int scst_virt_dev_last_id; /* protected by scst_mutex */
+
+static unsigned int scst_max_cmd_mem;
+unsigned int scst_max_dev_cmd_mem;
+
+module_param_named(scst_threads, scst_threads, int, 0);
+MODULE_PARM_DESC(scst_threads, "SCSI target threads count");
+
+module_param_named(scst_max_cmd_mem, scst_max_cmd_mem, int, S_IRUGO);
+MODULE_PARM_DESC(scst_max_cmd_mem, "Maximum memory allowed to be consumed by "
+	"all SCSI commands of all devices at any given time in MB");
+
+module_param_named(scst_max_dev_cmd_mem, scst_max_dev_cmd_mem, int, S_IRUGO);
+MODULE_PARM_DESC(scst_max_dev_cmd_mem, "Maximum memory allowed to be consumed "
+	"by all SCSI commands of a device at any given time in MB");
+
+struct scst_dev_type scst_null_devtype = {
+	.name = "none",
+};
+
+static void __scst_resume_activity(void);
+
+/**
+ * __scst_register_target_template() - register target template.
+ * @vtt:	target template
+ * @version:	SCST_INTERFACE_VERSION version string to ensure that
+ *		SCST core and the target driver use the same version of
+ *		the SCST interface
+ *
+ * Description:
+ *    Registers a target template and returns 0 on success or appropriate
+ *    error code otherwise.
+ *
+ *    Note: *vtt must be static!
+ */
+int __scst_register_target_template(struct scst_tgt_template *vtt,
+	const char *version)
+{
+	int res = 0;
+	struct scst_tgt_template *t;
+	static DEFINE_MUTEX(m);
+
+	INIT_LIST_HEAD(&vtt->tgt_list);
+
+	if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+		PRINT_ERROR("Incorrect version of target %s", vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	if (!vtt->detect) {
+		PRINT_ERROR("Target driver %s must have a "
+			"detect() method.", vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	if (!vtt->release) {
+		PRINT_ERROR("Target driver %s must have a "
+			"release() method.", vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	if (!vtt->xmit_response) {
+		PRINT_ERROR("Target driver %s must have an "
+			"xmit_response() method.", vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	if (vtt->threads_num < 0) {
+		PRINT_ERROR("Wrong threads_num value %d for "
+			"target \"%s\"", vtt->threads_num,
+			vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	if (!vtt->enable_target || !vtt->is_target_enabled) {
+		PRINT_WARNING("Target driver %s doesn't have enable_target() "
+			"and/or is_target_enabled() method(s). This is unsafe "
+			"and can lead to initiators connected at initialization "
+			"time seeing an unexpected set of devices or no "
+			"devices at all!", vtt->name);
+	}
+
+	if (((vtt->add_target != NULL) && (vtt->del_target == NULL)) ||
+	    ((vtt->add_target == NULL) && (vtt->del_target != NULL))) {
+		PRINT_ERROR("Target driver %s must either define both "
+			"add_target() and del_target(), or none.", vtt->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	res = scst_create_tgtt_sysfs(vtt);
+	if (res)
+		goto out_sysfs_err;
+
+	if (vtt->rdy_to_xfer == NULL)
+		vtt->rdy_to_xfer_atomic = 1;
+
+	if (mutex_lock_interruptible(&m) != 0) {
+		res = -EINTR;
+		goto out_sysfs_err;
+	}
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_m_err;
+	}
+	list_for_each_entry(t, &scst_template_list, scst_template_list_entry) {
+		if (strcmp(t->name, vtt->name) == 0) {
+			PRINT_ERROR("Target driver %s already registered",
+				vtt->name);
+			res = -EEXIST;
+			mutex_unlock(&scst_mutex);
+			goto out_m_err;
+		}
+	}
+	mutex_unlock(&scst_mutex);
+
+	TRACE_DBG("%s", "Calling target driver's detect()");
+	res = vtt->detect(vtt);
+	TRACE_DBG("Target driver's detect() returned %d", res);
+	if (res < 0) {
+		PRINT_ERROR("%s", "The detect() routine failed");
+		res = -EINVAL;
+		goto out_m_err;
+	}
+
+	mutex_lock(&scst_mutex);
+	list_add_tail(&vtt->scst_template_list_entry, &scst_template_list);
+	mutex_unlock(&scst_mutex);
+
+	res = 0;
+
+	PRINT_INFO("Target template %s registered successfully", vtt->name);
+
+	mutex_unlock(&m);
+
+out:
+	return res;
+
+out_m_err:
+	mutex_unlock(&m);
+
+out_sysfs_err:
+	scst_tgtt_sysfs_put(vtt);
+
+out_err:
+	PRINT_ERROR("Failed to register target template %s", vtt->name);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_target_template);
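+
+/*
+ * Registration sketch (illustrative; the my_* names are placeholders for
+ * a driver's own template and callbacks):
+ *
+ *	static struct scst_tgt_template my_tgt_template = {
+ *		.name = "my_tgt",
+ *		.detect = my_detect,
+ *		.release = my_release,
+ *		.xmit_response = my_xmit_response,
+ *	};
+ *
+ *	res = __scst_register_target_template(&my_tgt_template,
+ *			SCST_INTERFACE_VERSION);
+ */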
+
+static int scst_check_non_gpl_target_template(struct scst_tgt_template *vtt)
+{
+	int res;
+
+	if (vtt->task_mgmt_affected_cmds_done || vtt->threads_num) {
+		PRINT_ERROR("Not allowed functionality in non-GPL version for "
+			"target template %s", vtt->name);
+		res = -EPERM;
+		goto out;
+	}
+
+	res = 0;
+
+out:
+	return res;
+}
+
+/**
+ * __scst_register_target_template_non_gpl() - register target template,
+ *					      non-GPL version
+ * @vtt:	target template
+ * @version:	SCST_INTERFACE_VERSION version string to ensure that
+ *		SCST core and the target driver use the same version of
+ *		the SCST interface
+ *
+ * Description:
+ *    Registers a target template and returns 0 on success or appropriate
+ *    error code otherwise.
+ *
+ *    Note: *vtt must be static!
+ */
+int __scst_register_target_template_non_gpl(struct scst_tgt_template *vtt,
+	const char *version)
+{
+	int res;
+
+	res = scst_check_non_gpl_target_template(vtt);
+	if (res != 0)
+		goto out;
+
+	res = __scst_register_target_template(vtt, version);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(__scst_register_target_template_non_gpl);
+
+/**
+ * scst_unregister_target_template() - unregister target template
+ */
+void scst_unregister_target_template(struct scst_tgt_template *vtt)
+{
+	struct scst_tgt *tgt;
+	struct scst_tgt_template *t;
+	int found = 0;
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(t, &scst_template_list, scst_template_list_entry) {
+		if (strcmp(t->name, vtt->name) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		PRINT_ERROR("Target driver %s isn't registered", vtt->name);
+		goto out_err_up;
+	}
+
+restart:
+	list_for_each_entry(tgt, &vtt->tgt_list, tgt_list_entry) {
+		mutex_unlock(&scst_mutex);
+		scst_unregister_target(tgt);
+		mutex_lock(&scst_mutex);
+		goto restart;
+	}
+	list_del(&vtt->scst_template_list_entry);
+
+	mutex_unlock(&scst_mutex);
+
+	scst_tgtt_sysfs_put(vtt);
+
+	PRINT_INFO("Target template %s unregistered successfully", vtt->name);
+
+out:
+	return;
+
+out_err_up:
+	mutex_unlock(&scst_mutex);
+	goto out;
+}
+EXPORT_SYMBOL(scst_unregister_target_template);
+
+/**
+ * scst_register_target() - register target
+ *
+ * Registers a target for template vtt and returns the new target structure
+ * on success or NULL otherwise.
+ */
+struct scst_tgt *scst_register_target(struct scst_tgt_template *vtt,
+	const char *target_name)
+{
+	struct scst_tgt *tgt;
+	int rc = 0;
+
+	rc = scst_alloc_tgt(vtt, &tgt);
+	if (rc != 0)
+		goto out_err;
+
+	rc = scst_suspend_activity(true);
+	if (rc != 0)
+		goto out_free_tgt_err;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		rc = -EINTR;
+		goto out_resume_free;
+	}
+
+	if (target_name != NULL) {
+		tgt->tgt_name = kstrdup(target_name, GFP_KERNEL);
+		if (tgt->tgt_name == NULL) {
+			TRACE(TRACE_OUT_OF_MEM, "Allocation of tgt name %s failed",
+				target_name);
+			rc = -ENOMEM;
+			goto out_unlock_resume;
+		}
+	} else {
+		static int tgt_num; /* protected by scst_mutex */
+		int len = strlen(vtt->name) +
+			strlen(SCST_DEFAULT_TGT_NAME_SUFFIX) + 11 + 1;
+
+		tgt->tgt_name = kmalloc(len, GFP_KERNEL);
+		if (tgt->tgt_name == NULL) {
+			TRACE(TRACE_OUT_OF_MEM, "Allocation of tgt name failed "
+				"(template name %s)", vtt->name);
+			rc = -ENOMEM;
+			goto out_unlock_resume;
+		}
+		sprintf(tgt->tgt_name, "%s%s%d", vtt->name,
+			SCST_DEFAULT_TGT_NAME_SUFFIX, tgt_num++);
+	}
+
+	tgt->default_acg = scst_alloc_add_acg(NULL, tgt->tgt_name);
+	if (tgt->default_acg == NULL) {
+		rc = -ENOMEM;
+		goto out_free_tgt_name;
+	}
+
+	INIT_LIST_HEAD(&tgt->tgt_acg_list);
+
+	rc = scst_create_tgt_sysfs(tgt);
+	if (rc < 0)
+		goto out_clear_acg;
+
+	list_add_tail(&tgt->tgt_list_entry, &vtt->tgt_list);
+
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	PRINT_INFO("Target %s for template %s registered successfully",
+		tgt->tgt_name, vtt->name);
+
+	TRACE_DBG("tgt %p", tgt);
+
+out:
+	return tgt;
+
+out_clear_acg:
+	scst_clear_acg(tgt->default_acg);
+
+out_free_tgt_name:
+	kfree(tgt->tgt_name);
+
+out_unlock_resume:
+	mutex_unlock(&scst_mutex);
+
+out_resume_free:
+	scst_resume_activity();
+
+out_free_tgt_err:
+	scst_tgt_sysfs_put(tgt); /* must not be called under scst_mutex */
+	tgt = NULL;
+
+out_err:
+	PRINT_ERROR("Failed to register target %s for template %s (error %d)",
+		(target_name != NULL) ? target_name : "(autogenerated)",
+		vtt->name, rc);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_target);
+
+/**
+ * scst_register_target_non_gpl() - register target, non-GPL version
+ *
+ * Registers a target for template vtt and returns the new target structure
+ * on success or NULL otherwise.
+ */
+struct scst_tgt *scst_register_target_non_gpl(struct scst_tgt_template *vtt,
+	const char *target_name)
+{
+	struct scst_tgt *res;
+
+	if (scst_check_non_gpl_target_template(vtt)) {
+		res = NULL;
+		goto out;
+	}
+
+	res = scst_register_target(vtt, target_name);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_register_target_non_gpl);
+
+static inline int test_sess_list(struct scst_tgt *tgt)
+{
+	int res;
+	mutex_lock(&scst_mutex);
+	res = list_empty(&tgt->sess_list);
+	mutex_unlock(&scst_mutex);
+	return res;
+}
+
+/**
+ * scst_unregister_target() - unregister target
+ */
+void scst_unregister_target(struct scst_tgt *tgt)
+{
+	struct scst_session *sess;
+	struct scst_tgt_template *vtt = tgt->tgtt;
+	struct scst_acg *acg, *acg_tmp;
+
+	scst_tgt_sysfs_prepare_put(tgt);
+
+	TRACE_DBG("%s", "Calling target driver's release()");
+	tgt->tgtt->release(tgt);
+	TRACE_DBG("%s", "Target driver's release() returned");
+
+	mutex_lock(&scst_mutex);
+again:
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+		if (sess->shut_phase == SCST_SESS_SPH_READY) {
+			/*
+			 * Sometimes it's hard for a target driver to track
+			 * all its sessions (see scst_local, for example), so
+			 * let's help it.
+			 */
+			mutex_unlock(&scst_mutex);
+			scst_unregister_session(sess, 0, NULL);
+			mutex_lock(&scst_mutex);
+			goto again;
+		}
+	}
+	mutex_unlock(&scst_mutex);
+
+	TRACE_DBG("%s", "Waiting for sessions shutdown");
+	wait_event(tgt->unreg_waitQ, test_sess_list(tgt));
+	TRACE_DBG("%s", "wait_event() returned");
+
+	scst_suspend_activity(false);
+	mutex_lock(&scst_mutex);
+
+	list_del(&tgt->tgt_list_entry);
+
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	scst_clear_acg(tgt->default_acg);
+
+	list_for_each_entry_safe(acg, acg_tmp, &tgt->tgt_acg_list,
+					acg_list_entry) {
+		scst_acg_sysfs_put(acg);
+	}
+
+	del_timer_sync(&tgt->retry_timer);
+
+	PRINT_INFO("Target %s for template %s unregistered successfully",
+		tgt->tgt_name, vtt->name);
+
+	scst_tgt_sysfs_put(tgt); /* must not be called under scst_mutex */
+
+	TRACE_DBG("Unregistering tgt %p finished", tgt);
+	return;
+}
+EXPORT_SYMBOL(scst_unregister_target);
+
+static int scst_susp_wait(bool interruptible)
+{
+	int res = 0;
+
+	if (interruptible) {
+		res = wait_event_interruptible_timeout(scst_dev_cmd_waitQ,
+			(atomic_read(&scst_cmd_count) == 0),
+			SCST_SUSPENDING_TIMEOUT);
+		if (res <= 0) {
+			__scst_resume_activity();
+			if (res == 0)
+				res = -EBUSY;
+		} else
+			res = 0;
+	} else
+		wait_event(scst_dev_cmd_waitQ,
+			   atomic_read(&scst_cmd_count) == 0);
+
+	TRACE_MGMT_DBG("wait_event() returned %d", res);
+	return res;
+}
+
+/**
+ * scst_suspend_activity() - globally suspend any activity
+ *
+ * Description:
+ *    Globally suspends any activity and does not return until there are
+ *    no more active commands (i.e., commands in a state after
+ *    SCST_CMD_STATE_INIT). If "interruptible" is true, it returns after
+ *    SCST_SUSPENDING_TIMEOUT expires or after it was interrupted by a
+ *    signal, with the corresponding error status < 0. If "interruptible"
+ *    is false, it will wait virtually forever. On success returns 0.
+ *
+ *    Newly arriving commands stay in the suspended state until
+ *    scst_resume_activity() is called.
+ */
+int scst_suspend_activity(bool interruptible)
+{
+	int res = 0;
+	bool rep = false;
+
+	if (interruptible) {
+		if (mutex_lock_interruptible(&scst_suspend_mutex) != 0) {
+			res = -EINTR;
+			goto out;
+		}
+	} else
+		mutex_lock(&scst_suspend_mutex);
+
+	TRACE_MGMT_DBG("suspend_count %d", suspend_count);
+	suspend_count++;
+	if (suspend_count > 1)
+		goto out_up;
+
+	set_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+	set_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+	/*
+	 * Assignment of SCST_FLAG_SUSPENDING and SCST_FLAG_SUSPENDED must be
+	 * ordered with scst_cmd_count. Otherwise lockless logic in
+	 * scst_translate_lun() and scst_mgmt_translate_lun() won't work.
+	 */
+	smp_mb__after_set_bit();
+
+	/*
+	 * See comment in scst_user.c::dev_user_task_mgmt_fn() for more
+	 * information about scst_user behavior.
+	 *
+	 * ToDo: make the global suspending unneeded (switch to per-device
+	 * reference counting? That would mean to switch off from lockless
+	 * implementation of scst_translate_lun().. )
+	 */
+
+	if (atomic_read(&scst_cmd_count) != 0) {
+		PRINT_INFO("Waiting for %d active commands to complete... This "
+			"might take a few minutes for disks or a few hours for "
+			"tapes, if you use long-running commands, like "
+			"REWIND or FORMAT. If you have a hung user space "
+			"device (i.e. one made using the scst_user module) "
+			"not responding to any commands, it might take "
+			"virtually forever until the corresponding user space "
+			"program recovers and starts responding or gets "
+			"killed.", atomic_read(&scst_cmd_count));
+		rep = true;
+	}
+
+	res = scst_susp_wait(interruptible);
+	if (res != 0)
+		goto out_clear;
+
+	clear_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+	/* See comment about smp_mb() above */
+	smp_mb__after_clear_bit();
+
+	TRACE_MGMT_DBG("Waiting for %d active commands finally to complete",
+		atomic_read(&scst_cmd_count));
+
+	res = scst_susp_wait(interruptible);
+	if (res != 0)
+		goto out_clear;
+
+	if (rep)
+		PRINT_INFO("%s", "All active commands completed");
+
+out_up:
+	mutex_unlock(&scst_suspend_mutex);
+
+out:
+	return res;
+
+out_clear:
+	clear_bit(SCST_FLAG_SUSPENDING, &scst_flags);
+	/* See comment about smp_mb() above */
+	smp_mb__after_clear_bit();
+	goto out_up;
+}
+EXPORT_SYMBOL_GPL(scst_suspend_activity);
+
+static void __scst_resume_activity(void)
+{
+	struct scst_cmd_threads *l;
+
+	suspend_count--;
+	TRACE_MGMT_DBG("suspend_count %d left", suspend_count);
+	if (suspend_count > 0)
+		goto out;
+
+	clear_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+	/*
+	 * The barrier is needed to make sure all woken up threads see the
+	 * cleared flag. Not sure if it's really needed, but let's be safe.
+	 */
+	smp_mb__after_clear_bit();
+
+	list_for_each_entry(l, &scst_cmd_threads_list, lists_list_entry) {
+		wake_up_all(&l->cmd_list_waitQ);
+	}
+	wake_up_all(&scst_init_cmd_list_waitQ);
+
+	spin_lock_irq(&scst_mcmd_lock);
+	if (!list_empty(&scst_delayed_mgmt_cmd_list)) {
+		struct scst_mgmt_cmd *m;
+		m = list_entry(scst_delayed_mgmt_cmd_list.next, typeof(*m),
+				mgmt_cmd_list_entry);
+		TRACE_MGMT_DBG("Moving delayed mgmt cmd %p to head of active "
+			"mgmt cmd list", m);
+		list_move(&m->mgmt_cmd_list_entry, &scst_active_mgmt_cmd_list);
+	}
+	spin_unlock_irq(&scst_mcmd_lock);
+	wake_up_all(&scst_mgmt_cmd_list_waitQ);
+
+out:
+	return;
+}
+
+/**
+ * scst_resume_activity() - globally resume all activities
+ *
+ * Resumes activities suspended by scst_suspend_activity().
+ */
+void scst_resume_activity(void)
+{
+
+	mutex_lock(&scst_suspend_mutex);
+	__scst_resume_activity();
+	mutex_unlock(&scst_suspend_mutex);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_resume_activity);
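+
+/*
+ * Sketch of the intended suspend/resume pairing (illustrative; it follows
+ * the same pattern the registration functions below use internally):
+ *
+ *	res = scst_suspend_activity(true);
+ *	if (res == 0) {
+ *		... perform a global management operation ...
+ *		scst_resume_activity();
+ *	}
+ */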
+
+static int scst_register_device(struct scsi_device *scsidp)
+{
+	int res = 0;
+	struct scst_device *dev, *d;
+	struct scst_dev_type *dt;
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out_err;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	res = scst_alloc_device(GFP_KERNEL, &dev);
+	if (res != 0)
+		goto out_up;
+
+	dev->type = scsidp->type;
+
+	dev->virt_name = kmalloc(50, GFP_KERNEL);
+	if (dev->virt_name == NULL) {
+		PRINT_ERROR("%s", "Unable to alloc device name");
+		res = -ENOMEM;
+		goto out_free_dev;
+	}
+	snprintf(dev->virt_name, 50, "%d:%d:%d:%d", scsidp->host->host_no,
+		scsidp->channel, scsidp->id, scsidp->lun);
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if (strcmp(d->virt_name, dev->virt_name) == 0) {
+			PRINT_ERROR("Device %s already exists", dev->virt_name);
+			res = -EEXIST;
+			goto out_free_dev;
+		}
+	}
+
+	dev->scsi_dev = scsidp;
+
+	list_add_tail(&dev->dev_list_entry, &scst_dev_list);
+
+	res = scst_create_device_sysfs(dev);
+	if (res != 0)
+		goto out_free;
+
+	list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+		if (dt->type == scsidp->type) {
+			res = scst_assign_dev_handler(dev, dt);
+			if (res != 0)
+				goto out_free;
+			break;
+		}
+	}
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out_err:
+	if (res == 0) {
+		PRINT_INFO("Attached to scsi%d, channel %d, id %d, lun %d, "
+			"type %d", scsidp->host->host_no, scsidp->channel,
+			scsidp->id, scsidp->lun, scsidp->type);
+	} else {
+		PRINT_ERROR("Failed to attach to scsi%d, channel %d, id %d, "
+			"lun %d, type %d", scsidp->host->host_no,
+			scsidp->channel, scsidp->id, scsidp->lun, scsidp->type);
+	}
+	return res;
+
+out_free:
+	list_del(&dev->dev_list_entry);
+
+out_free_dev:
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+	scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+	goto out_err;
+}
+
+static void scst_unregister_device(struct scsi_device *scsidp)
+{
+	struct scst_device *d, *dev = NULL;
+	struct scst_acg_dev *acg_dev, *aa;
+
+	scst_suspend_activity(false);
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if (d->scsi_dev == scsidp) {
+			dev = d;
+			TRACE_DBG("Target device %p found", dev);
+			break;
+		}
+	}
+	if (dev == NULL) {
+		PRINT_ERROR("%s", "Target device not found");
+		goto out_resume;
+	}
+
+	list_del(&dev->dev_list_entry);
+
+	list_for_each_entry_safe(acg_dev, aa, &dev->dev_acg_dev_list,
+				 dev_acg_dev_list_entry) {
+		scst_acg_remove_dev(acg_dev->acg, dev, true);
+	}
+
+	scst_assign_dev_handler(dev, &scst_null_devtype);
+
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+
+	PRINT_INFO("Detached from scsi%d, channel %d, id %d, lun %d, type %d",
+		scsidp->host->host_no, scsidp->channel, scsidp->id,
+		scsidp->lun, scsidp->type);
+
+out:
+	return;
+
+out_resume:
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+	goto out;
+}
+
+static int scst_dev_handler_check(struct scst_dev_type *dev_handler)
+{
+	int res = 0;
+
+	if (dev_handler->parse == NULL) {
+		PRINT_ERROR("scst dev handler %s must have a "
+			"parse() method.", dev_handler->name);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (((dev_handler->add_device != NULL) &&
+	     (dev_handler->del_device == NULL)) ||
+	    ((dev_handler->add_device == NULL) &&
+	     (dev_handler->del_device != NULL))) {
+		PRINT_ERROR("Dev handler %s must either define both "
+			"add_device() and del_device(), or none.",
+			dev_handler->name);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (dev_handler->exec == NULL) {
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+		dev_handler->exec_atomic = 1;
+#else
+		dev_handler->exec_atomic = 0;
+#endif
+	}
+
+	if (dev_handler->dev_done == NULL)
+		dev_handler->dev_done_atomic = 1;
+
+out:
+	return res;
+}
+
+/**
+ * scst_register_virtual_device() - register a virtual device.
+ * @dev_handler: the device's device handler
+ * @dev_name:	the new device name, NULL-terminated string. Must be unique
+ *              among all virtual devices in the system.
+ *
+ * Registers a virtual device and returns the ID assigned to the device on
+ * success, or a negative value otherwise
+ */
+int scst_register_virtual_device(struct scst_dev_type *dev_handler,
+	const char *dev_name)
+{
+	int res, rc;
+	struct scst_device *dev;
+
+	if (dev_handler == NULL) {
+		PRINT_ERROR("%s: valid device handler must be supplied",
+			    __func__);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (dev_name == NULL) {
+		PRINT_ERROR("%s: device name must be non-NULL", __func__);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_dev_handler_check(dev_handler);
+	if (res != 0)
+		goto out;
+
+	list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+		if (strcmp(dev->virt_name, dev_name) == 0) {
+			PRINT_ERROR("Device %s already exists", dev_name);
+			res = -EEXIST;
+			goto out;
+		}
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	res = scst_alloc_device(GFP_KERNEL, &dev);
+	if (res != 0)
+		goto out_up;
+
+	dev->type = dev_handler->type;
+	dev->scsi_dev = NULL;
+	dev->virt_name = kstrdup(dev_name, GFP_KERNEL);
+	if (dev->virt_name == NULL) {
+		PRINT_ERROR("Unable to allocate virt_name for dev %s",
+			dev_name);
+		res = -ENOMEM;
+		goto out_release;
+	}
+	dev->virt_id = scst_virt_dev_last_id++;
+
+	list_add_tail(&dev->dev_list_entry, &scst_dev_list);
+
+	res = dev->virt_id;
+
+	rc = scst_create_device_sysfs(dev);
+	if (rc != 0) {
+		res = rc;
+		goto out_free_del;
+	}
+
+	rc = scst_assign_dev_handler(dev, dev_handler);
+	if (rc != 0) {
+		res = rc;
+		goto out_free_del;
+	}
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	if (res > 0)
+		PRINT_INFO("Attached to virtual device %s (id %d)",
+			dev_name, dev->virt_id);
+	else
+		PRINT_INFO("Failed to attach to virtual device %s", dev_name);
+	return res;
+
+out_free_del:
+	list_del(&dev->dev_list_entry);
+
+out_release:
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+	scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_virtual_device);
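+
+/*
+ * Sketch (illustrative; my_vdisk_devtype and "disk01" are placeholders
+ * for a driver's dev handler and device name):
+ *
+ *	int id = scst_register_virtual_device(&my_vdisk_devtype, "disk01");
+ *	if (id < 0)
+ *		... handle the error ...
+ *	...
+ *	scst_unregister_virtual_device(id);
+ */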
+
+/**
+ * scst_unregister_virtual_device() - unregister a virtual device.
+ * @id:		the device's ID, returned by the registration function
+ */
+void scst_unregister_virtual_device(int id)
+{
+	struct scst_device *d, *dev = NULL;
+	struct scst_acg_dev *acg_dev, *aa;
+
+	scst_suspend_activity(false);
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if (d->virt_id == id) {
+			dev = d;
+			TRACE_DBG("Target device %p (id %d) found", dev, id);
+			break;
+		}
+	}
+	if (dev == NULL) {
+		PRINT_ERROR("Target virtual device (id %d) not found", id);
+		goto out_unblock;
+	}
+
+	list_del(&dev->dev_list_entry);
+
+	list_for_each_entry_safe(acg_dev, aa, &dev->dev_acg_dev_list,
+				 dev_acg_dev_list_entry) {
+		scst_acg_remove_dev(acg_dev->acg, dev, true);
+	}
+
+	scst_assign_dev_handler(dev, &scst_null_devtype);
+
+	PRINT_INFO("Detached from virtual device %s (id %d)",
+		dev->virt_name, dev->virt_id);
+
+out_unblock:
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	if (dev != NULL)
+		scst_device_sysfs_put(dev); /* must not be called under scst_mutex */
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_virtual_device);
+
+/**
+ * __scst_register_dev_driver() - register pass-through dev handler driver
+ * @dev_type:	dev handler template
+ * @version:	SCST_INTERFACE_VERSION version string to ensure that
+ *		SCST core and the dev handler use the same version of
+ *		the SCST interface
+ *
+ * Description:
+ *    Registers a pass-through dev handler driver. Returns 0 on success
+ *    or appropriate error code otherwise.
+ *
+ *    Note: *dev_type must be static!
+ */
+int __scst_register_dev_driver(struct scst_dev_type *dev_type,
+	const char *version)
+{
+	struct scst_dev_type *dt;
+	struct scst_device *dev;
+	int res;
+	int exist;
+
+	if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+		PRINT_ERROR("Incorrect version of dev handler %s",
+			dev_type->name);
+		res = -EINVAL;
+		goto out_error;
+	}
+
+	res = scst_dev_handler_check(dev_type);
+	if (res != 0)
+		goto out_error;
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out_error;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_err_res;
+	}
+
+	exist = 0;
+	list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+		if (strcmp(dt->name, dev_type->name) == 0) {
+			PRINT_ERROR("Device type handler \"%s\" already "
+				"exists", dt->name);
+			exist = 1;
+			break;
+		}
+	}
+	if (exist) {
+		res = -EEXIST;
+		goto out_up;
+	}
+
+	res = scst_create_devt_sysfs(dev_type);
+	if (res < 0)
+		goto out_free;
+
+	list_add_tail(&dev_type->dev_type_list_entry, &scst_dev_type_list);
+
+	list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+		if (dev->scsi_dev == NULL || dev->handler != &scst_null_devtype)
+			continue;
+		if (dev->scsi_dev->type == dev_type->type)
+			scst_assign_dev_handler(dev, dev_type);
+	}
+
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	if (res == 0) {
+		PRINT_INFO("Device handler \"%s\" for type %d registered "
+			"successfully", dev_type->name, dev_type->type);
+	}
+
+out:
+	return res;
+
+out_free:
+	scst_devt_sysfs_put(dev_type);
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out_err_res:
+	scst_resume_activity();
+
+out_error:
+	PRINT_ERROR("Failed to register device handler \"%s\" for type %d",
+		dev_type->name, dev_type->type);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_dev_driver);
+
+/**
+ * scst_unregister_dev_driver() - unregister pass-through dev handler driver
+ */
+void scst_unregister_dev_driver(struct scst_dev_type *dev_type)
+{
+	struct scst_device *dev;
+	struct scst_dev_type *dt;
+	int found = 0;
+
+	scst_suspend_activity(false);
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(dt, &scst_dev_type_list, dev_type_list_entry) {
+		if (strcmp(dt->name, dev_type->name) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	if (!found) {
+		PRINT_ERROR("Dev handler \"%s\" isn't registered",
+			dev_type->name);
+		goto out_up;
+	}
+
+	list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+		if (dev->handler == dev_type) {
+			scst_assign_dev_handler(dev, &scst_null_devtype);
+			TRACE_DBG("Dev handler removed from device %p", dev);
+		}
+	}
+
+	list_del(&dev_type->dev_type_list_entry);
+
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+
+	scst_devt_sysfs_put(dev_type);
+
+	PRINT_INFO("Device handler \"%s\" for type %d unloaded",
+		   dev_type->name, dev_type->type);
+
+out:
+	return;
+
+out_up:
+	mutex_unlock(&scst_mutex);
+	scst_resume_activity();
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_dev_driver);
+
+/**
+ * __scst_register_virtual_dev_driver() - register virtual dev handler driver
+ * @dev_type:	dev handler template
+ * @version:	SCST_INTERFACE_VERSION version string to ensure that
+ *		SCST core and the dev handler use the same version of
+ *		the SCST interface
+ *
+ * Description:
+ *    Registers a virtual dev handler driver. Returns 0 on success or
+ *    appropriate error code otherwise.
+ *
+ *    Note: *dev_type must be static!
+ */
+int __scst_register_virtual_dev_driver(struct scst_dev_type *dev_type,
+	const char *version)
+{
+	int res;
+
+	if (strcmp(version, SCST_INTERFACE_VERSION) != 0) {
+		PRINT_ERROR("Incorrect version of virtual dev handler %s",
+			dev_type->name);
+		res = -EINVAL;
+		goto out_err;
+	}
+
+	res = scst_dev_handler_check(dev_type);
+	if (res != 0)
+		goto out_err;
+
+	res = scst_create_devt_sysfs(dev_type);
+	if (res < 0)
+		goto out_free;
+
+	if (dev_type->type != -1) {
+		PRINT_INFO("Virtual device handler %s for type %d "
+			"registered successfully", dev_type->name,
+			dev_type->type);
+	} else {
+		PRINT_INFO("Virtual device handler \"%s\" registered "
+			"successfully", dev_type->name);
+	}
+
+out:
+	return res;
+
+out_free:
+	scst_devt_sysfs_put(dev_type);
+
+out_err:
+	PRINT_ERROR("Failed to register virtual device handler \"%s\"",
+		dev_type->name);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(__scst_register_virtual_dev_driver);
+
+/**
+ * scst_unregister_virtual_dev_driver() - unregister virtual dev driver
+ */
+void scst_unregister_virtual_dev_driver(struct scst_dev_type *dev_type)
+{
+
+	scst_devt_sysfs_put(dev_type);
+
+	PRINT_INFO("Device handler \"%s\" unloaded", dev_type->name);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_virtual_dev_driver);
+
+/* scst_mutex is supposed to be held */
+int scst_add_threads(struct scst_cmd_threads *cmd_threads,
+	struct scst_device *dev, struct scst_tgt_dev *tgt_dev, int num)
+{
+	int res, i;
+	struct scst_cmd_thread_t *thr;
+	int n = 0, tgt_dev_num = 0;
+
+	list_for_each_entry(thr, &cmd_threads->threads_list, thread_list_entry) {
+		n++;
+	}
+
+	if (tgt_dev != NULL) {
+		struct scst_tgt_dev *t;
+		list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			if (t == tgt_dev)
+				break;
+			tgt_dev_num++;
+		}
+	}
+
+	for (i = 0; i < num; i++) {
+		thr = kmalloc(sizeof(*thr), GFP_KERNEL);
+		if (!thr) {
+			res = -ENOMEM;
+			PRINT_ERROR("fail to allocate thr %d", res);
+			goto out_error;
+		}
+
+		if (dev != NULL) {
+			char nm[14]; /* to limit the name's len */
+			strlcpy(nm, dev->virt_name, ARRAY_SIZE(nm));
+			thr->cmd_thread = kthread_create(scst_cmd_thread,
+				cmd_threads, "%s%d", nm, n++);
+		} else if (tgt_dev != NULL) {
+			char nm[11]; /* to limit the name's len */
+			strlcpy(nm, tgt_dev->dev->virt_name, ARRAY_SIZE(nm));
+			thr->cmd_thread = kthread_create(scst_cmd_thread,
+				cmd_threads, "%s%d_%d", nm, tgt_dev_num, n++);
+		} else
+			thr->cmd_thread = kthread_create(scst_cmd_thread,
+				cmd_threads, "scsi_tgt%d", n++);
+
+		if (IS_ERR(thr->cmd_thread)) {
+			res = PTR_ERR(thr->cmd_thread);
+			PRINT_ERROR("kthread_create() failed: %d", res);
+			kfree(thr);
+			goto out_error;
+		}
+
+		list_add(&thr->thread_list_entry, &cmd_threads->threads_list);
+		cmd_threads->nr_threads++;
+
+		wake_up_process(thr->cmd_thread);
+	}
+
+	res = 0;
+
+out:
+	return res;
+
+out_error:
+	scst_del_threads(cmd_threads, i);
+	goto out;
+}
+
+/* scst_mutex is supposed to be held */
+void scst_del_threads(struct scst_cmd_threads *cmd_threads, int num)
+{
+	struct scst_cmd_thread_t *ct, *tmp;
+
+	if (num == 0)
+		goto out;
+
+	list_for_each_entry_safe_reverse(ct, tmp, &cmd_threads->threads_list,
+				thread_list_entry) {
+		int rc;
+		struct scst_device *dev;
+
+		rc = kthread_stop(ct->cmd_thread);
+		if (rc < 0)
+			TRACE_MGMT_DBG("kthread_stop() failed: %d", rc);
+
+		list_del(&ct->thread_list_entry);
+
+		list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+			struct scst_tgt_dev *tgt_dev;
+			list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+					dev_tgt_dev_list_entry) {
+				scst_del_thr_data(tgt_dev, ct->cmd_thread);
+			}
+		}
+
+		kfree(ct);
+
+		cmd_threads->nr_threads--;
+
+		--num;
+		if (num == 0)
+			break;
+	}
+
+	EXTRACHECKS_BUG_ON((cmd_threads->nr_threads == 0) &&
+		(cmd_threads->io_context != NULL));
+
+out:
+	return;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+void scst_stop_dev_threads(struct scst_device *dev)
+{
+	struct scst_tgt_dev *tgt_dev;
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+		scst_tgt_dev_stop_threads(tgt_dev);
+	}
+
+	if ((dev->threads_num > 0) &&
+	    (dev->threads_pool_type == SCST_THREADS_POOL_SHARED))
+		scst_del_threads(&dev->dev_cmd_threads, -1);
+	return;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+int scst_create_dev_threads(struct scst_device *dev)
+{
+	int res = 0;
+	struct scst_tgt_dev *tgt_dev;
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+			dev_tgt_dev_list_entry) {
+		res = scst_tgt_dev_setup_threads(tgt_dev);
+		if (res != 0)
+			goto out_err;
+	}
+
+	if ((dev->threads_num > 0) &&
+	    (dev->threads_pool_type == SCST_THREADS_POOL_SHARED)) {
+		res = scst_add_threads(&dev->dev_cmd_threads, dev, NULL,
+			dev->threads_num);
+		if (res != 0)
+			goto out_err;
+	}
+
+out:
+	return res;
+
+out_err:
+	scst_stop_dev_threads(dev);
+	goto out;
+}
+
+/* The activity is supposed to be suspended and scst_mutex held */
+int scst_assign_dev_handler(struct scst_device *dev,
+	struct scst_dev_type *handler)
+{
+	int res = 0;
+	struct scst_tgt_dev *tgt_dev;
+	LIST_HEAD(attached_tgt_devs);
+
+	BUG_ON(handler == NULL);
+
+	if (dev->handler == handler)
+		goto out;
+
+	if (dev->handler == NULL)
+		goto assign;
+
+	if (dev->handler->detach_tgt) {
+		list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			TRACE_DBG("Calling dev handler's detach_tgt(%p)",
+				tgt_dev);
+			dev->handler->detach_tgt(tgt_dev);
+			TRACE_DBG("%s", "Dev handler's detach_tgt() returned");
+		}
+	}
+
+	if (dev->handler->detach) {
+		TRACE_DBG("%s", "Calling dev handler's detach()");
+		dev->handler->detach(dev);
+		TRACE_DBG("%s", "Old handler's detach() returned");
+	}
+
+	scst_stop_dev_threads(dev);
+
+	scst_devt_dev_sysfs_put(dev);
+
+assign:
+	dev->handler = handler;
+
+	if (handler == NULL)
+		goto out;
+
+	dev->threads_num = handler->threads_num;
+	dev->threads_pool_type = handler->threads_pool_type;
+
+	res = scst_create_devt_dev_sysfs(dev);
+	if (res != 0)
+		goto out_null;
+
+	if (handler->attach) {
+		TRACE_DBG("Calling new dev handler's attach(%p)", dev);
+		res = handler->attach(dev);
+		TRACE_DBG("New dev handler's attach() returned %d", res);
+		if (res != 0) {
+			PRINT_ERROR("New device handler's %s attach() "
+				"failed: %d", handler->name, res);
+			goto out_remove_sysfs;
+		}
+	}
+
+	if (handler->attach_tgt) {
+		list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			TRACE_DBG("Calling dev handler's attach_tgt(%p)",
+				tgt_dev);
+			res = handler->attach_tgt(tgt_dev);
+			TRACE_DBG("%s", "Dev handler's attach_tgt() returned");
+			if (res != 0) {
+				PRINT_ERROR("Device handler's %s attach_tgt() "
+				    "failed: %d", handler->name, res);
+				goto out_err_detach_tgt;
+			}
+			list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+				&attached_tgt_devs);
+		}
+	}
+
+	res = scst_create_dev_threads(dev);
+	if (res != 0)
+		goto out_err_detach_tgt;
+
+out:
+	return res;
+
+out_err_detach_tgt:
+	if (handler && handler->detach_tgt) {
+		list_for_each_entry(tgt_dev, &attached_tgt_devs,
+				 extra_tgt_dev_list_entry) {
+			TRACE_DBG("Calling handler's detach_tgt(%p)",
+				tgt_dev);
+			handler->detach_tgt(tgt_dev);
+			TRACE_DBG("%s", "Handler's detach_tgt() returned");
+		}
+	}
+	if (handler && handler->detach) {
+		TRACE_DBG("%s", "Calling handler's detach()");
+		handler->detach(dev);
+		TRACE_DBG("%s", "Handler's detach() returned");
+	}
+
+out_remove_sysfs:
+	scst_devt_dev_sysfs_put(dev);
+
+out_null:
+	dev->handler = &scst_null_devtype;
+	goto out;
+}
+
+/**
+ * scst_init_threads() - initialize SCST processing threads pool
+ *
+ * Initializes scst_cmd_threads structure
+ */
+void scst_init_threads(struct scst_cmd_threads *cmd_threads)
+{
+
+	spin_lock_init(&cmd_threads->cmd_list_lock);
+	INIT_LIST_HEAD(&cmd_threads->active_cmd_list);
+	init_waitqueue_head(&cmd_threads->cmd_list_waitQ);
+	INIT_LIST_HEAD(&cmd_threads->threads_list);
+
+	mutex_lock(&scst_suspend_mutex);
+	list_add_tail(&cmd_threads->lists_list_entry,
+		&scst_cmd_threads_list);
+	mutex_unlock(&scst_suspend_mutex);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_init_threads);
+
+/**
+ * scst_deinit_threads() - deinitialize SCST processing threads pool
+ *
+ * Deinitializes scst_cmd_threads structure
+ */
+void scst_deinit_threads(struct scst_cmd_threads *cmd_threads)
+{
+
+	mutex_lock(&scst_suspend_mutex);
+	list_del(&cmd_threads->lists_list_entry);
+	mutex_unlock(&scst_suspend_mutex);
+
+	BUG_ON(cmd_threads->io_context);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_deinit_threads);
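+
+/*
+ * Sketch (illustrative; my_cmd_threads is a placeholder): a driver-private
+ * thread pool is set up and torn down with the pair above:
+ *
+ *	static struct scst_cmd_threads my_cmd_threads;
+ *
+ *	scst_init_threads(&my_cmd_threads);
+ *	...
+ *	scst_deinit_threads(&my_cmd_threads);
+ */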
+
+static void scst_stop_all_threads(void)
+{
+
+	mutex_lock(&scst_mutex);
+
+	scst_del_threads(&scst_main_cmd_threads, -1);
+
+	if (scst_mgmt_cmd_thread)
+		kthread_stop(scst_mgmt_cmd_thread);
+	if (scst_mgmt_thread)
+		kthread_stop(scst_mgmt_thread);
+	if (scst_init_cmd_thread)
+		kthread_stop(scst_init_cmd_thread);
+
+	mutex_unlock(&scst_mutex);
+	return;
+}
+
+static int scst_start_all_threads(int num)
+{
+	int res;
+
+	mutex_lock(&scst_mutex);
+
+	res = scst_add_threads(&scst_main_cmd_threads, NULL, NULL, num);
+	if (res < 0)
+		goto out_unlock;
+
+	scst_init_cmd_thread = kthread_run(scst_init_thread,
+		NULL, "scsi_tgt_init");
+	if (IS_ERR(scst_init_cmd_thread)) {
+		res = PTR_ERR(scst_init_cmd_thread);
+		PRINT_ERROR("kthread_create() for init cmd failed: %d", res);
+		scst_init_cmd_thread = NULL;
+		goto out_unlock;
+	}
+
+	scst_mgmt_cmd_thread = kthread_run(scst_tm_thread,
+		NULL, "scsi_tm");
+	if (IS_ERR(scst_mgmt_cmd_thread)) {
+		res = PTR_ERR(scst_mgmt_cmd_thread);
+		PRINT_ERROR("kthread_create() for TM failed: %d", res);
+		scst_mgmt_cmd_thread = NULL;
+		goto out_unlock;
+	}
+
+	scst_mgmt_thread = kthread_run(scst_global_mgmt_thread,
+		NULL, "scsi_tgt_mgmt");
+	if (IS_ERR(scst_mgmt_thread)) {
+		res = PTR_ERR(scst_mgmt_thread);
+		PRINT_ERROR("kthread_create() for mgmt failed: %d", res);
+		scst_mgmt_thread = NULL;
+		goto out_unlock;
+	}
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+	return res;
+}
+
+/**
+ * scst_get() - increase global SCST ref counter
+ *
+ * Increases the global SCST ref counter that prevents entering the
+ * suspended activities stage, hence protects against any global
+ * management operations.
+ */
+void scst_get(void)
+{
+	__scst_get(0);
+}
+EXPORT_SYMBOL_GPL(scst_get);
+
+/**
+ * scst_put() - decrease global SCST ref counter
+ *
+ * Decreases the global SCST ref counter that prevents entering the
+ * suspended activities stage, hence protects against any global
+ * management operations. When it reaches zero, if a suspend of
+ * activities is pending, the activities will be suspended.
+ */
+void scst_put(void)
+{
+	__scst_put();
+}
+EXPORT_SYMBOL_GPL(scst_put);
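+
+/*
+ * Sketch of the intended get/put bracketing (illustrative):
+ *
+ *	scst_get();
+ *	... do work that must not race with a global suspend ...
+ *	scst_put();
+ */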
+
+/**
+ * scst_get_setup_id() - return SCST setup ID
+ *
+ * Returns the SCST setup ID. This ID can be used to distinguish multiple
+ * setups that share the same configuration.
+ */
+unsigned int scst_get_setup_id(void)
+{
+	return scst_setup_id;
+}
+EXPORT_SYMBOL_GPL(scst_get_setup_id);
+
+static int scst_add(struct device *cdev, struct class_interface *intf)
+{
+	struct scsi_device *scsidp;
+	int res = 0;
+
+	scsidp = to_scsi_device(cdev->parent);
+
+	if (strcmp(scsidp->host->hostt->name, SCST_LOCAL_NAME) != 0)
+		res = scst_register_device(scsidp);
+	return res;
+}
+
+static void scst_remove(struct device *cdev, struct class_interface *intf)
+{
+	struct scsi_device *scsidp;
+
+	scsidp = to_scsi_device(cdev->parent);
+
+	if (strcmp(scsidp->host->hostt->name, SCST_LOCAL_NAME) != 0)
+		scst_unregister_device(scsidp);
+	return;
+}
+
+static struct class_interface scst_interface = {
+	.add_dev = scst_add,
+	.remove_dev = scst_remove,
+};
+
+static void __init scst_print_config(void)
+{
+	char buf[128];
+	int i, j;
+
+	i = snprintf(buf, sizeof(buf), "Enabled features: ");
+	j = i;
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+	i += snprintf(&buf[i], sizeof(buf) - i, "STRICT_SERIALIZING");
+#endif
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sEXTRACHECKS",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sTRACING",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_TM
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_TM",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_RETRY",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_OOM
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_OOM",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_SN
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sDEBUG_SN",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sUSE_EXPECTED_VALUES",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+	i += snprintf(&buf[i], sizeof(buf) - i,
+		"%sALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ",
+		(j == i) ? "" : ", ");
+#endif
+
+#ifdef CONFIG_SCST_STRICT_SECURITY
+	i += snprintf(&buf[i], sizeof(buf) - i, "%sSCST_STRICT_SECURITY",
+		(j == i) ? "" : ", ");
+#endif
+
+	if (j != i)
+		PRINT_INFO("%s", buf);
+}
+
+static int __init init_scst(void)
+{
+	int res, i;
+	int scst_num_cpus;
+
+	{
+		struct scsi_sense_hdr *shdr;
+		BUILD_BUG_ON(SCST_SENSE_BUFFERSIZE < sizeof(*shdr));
+	}
+	{
+		struct scst_tgt_dev *t;
+		struct scst_cmd *c;
+		BUILD_BUG_ON(sizeof(t->curr_sn) != sizeof(t->expected_sn));
+		BUILD_BUG_ON(sizeof(c->sn) != sizeof(t->expected_sn));
+	}
+
+	mutex_init(&scst_mutex);
+	INIT_LIST_HEAD(&scst_template_list);
+	INIT_LIST_HEAD(&scst_dev_list);
+	INIT_LIST_HEAD(&scst_dev_type_list);
+	spin_lock_init(&scst_main_lock);
+	spin_lock_init(&scst_init_lock);
+	init_waitqueue_head(&scst_init_cmd_list_waitQ);
+	INIT_LIST_HEAD(&scst_init_cmd_list);
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	scst_trace_flag = SCST_DEFAULT_LOG_FLAGS;
+#endif
+	atomic_set(&scst_cmd_count, 0);
+	spin_lock_init(&scst_mcmd_lock);
+	INIT_LIST_HEAD(&scst_active_mgmt_cmd_list);
+	INIT_LIST_HEAD(&scst_delayed_mgmt_cmd_list);
+	init_waitqueue_head(&scst_mgmt_cmd_list_waitQ);
+	init_waitqueue_head(&scst_mgmt_waitQ);
+	spin_lock_init(&scst_mgmt_lock);
+	INIT_LIST_HEAD(&scst_sess_init_list);
+	INIT_LIST_HEAD(&scst_sess_shut_list);
+	init_waitqueue_head(&scst_dev_cmd_waitQ);
+	mutex_init(&scst_suspend_mutex);
+	INIT_LIST_HEAD(&scst_cmd_threads_list);
+	scst_virt_dev_last_id = 1;
+
+	scst_init_threads(&scst_main_cmd_threads);
+
+	res = scst_lib_init();
+	if (res != 0)
+		goto out;
+
+	scst_num_cpus = num_online_cpus();
+
+	/* ToDo: register_cpu_notifier() */
+
+	if (scst_threads == 0)
+		scst_threads = scst_num_cpus;
+
+	if (scst_threads < 1) {
+		PRINT_ERROR("%s", "scst_threads can not be less than 1");
+		scst_threads = scst_num_cpus;
+	}
+
+#define INIT_CACHEP(p, s, o) do {					\
+		p = KMEM_CACHE(s, SCST_SLAB_FLAGS);			\
+		TRACE_MEM("Slab create: %s at %p size %zd", #s, p,	\
+			  sizeof(struct s));				\
+		if (p == NULL) {					\
+			res = -ENOMEM;					\
+			goto o;						\
+		}							\
+	} while (0)
+
+	INIT_CACHEP(scst_mgmt_cachep, scst_mgmt_cmd, out_lib_exit);
+	INIT_CACHEP(scst_mgmt_stub_cachep, scst_mgmt_cmd_stub,
+			out_destroy_mgmt_cache);
+	INIT_CACHEP(scst_ua_cachep, scst_tgt_dev_UA,
+			out_destroy_mgmt_stub_cache);
+	{
+		struct scst_sense { uint8_t s[SCST_SENSE_BUFFERSIZE]; };
+		INIT_CACHEP(scst_sense_cachep, scst_sense,
+			    out_destroy_ua_cache);
+	}
+	INIT_CACHEP(scst_aen_cachep, scst_aen, out_destroy_sense_cache);
+	INIT_CACHEP(scst_cmd_cachep, scst_cmd, out_destroy_aen_cache);
+	INIT_CACHEP(scst_sess_cachep, scst_session, out_destroy_cmd_cache);
+	INIT_CACHEP(scst_tgtd_cachep, scst_tgt_dev, out_destroy_sess_cache);
+	INIT_CACHEP(scst_acgd_cachep, scst_acg_dev, out_destroy_tgt_cache);
+
+	scst_mgmt_mempool = mempool_create(64, mempool_alloc_slab,
+		mempool_free_slab, scst_mgmt_cachep);
+	if (scst_mgmt_mempool == NULL) {
+		res = -ENOMEM;
+		goto out_destroy_acg_cache;
+	}
+
+	/*
+	 * All mgmt stubs, UAs and sense buffers are bursty and losing them
+	 * may have fatal consequences, so let's have big pools for them.
+	 */
+
+	scst_mgmt_stub_mempool = mempool_create(1024, mempool_alloc_slab,
+		mempool_free_slab, scst_mgmt_stub_cachep);
+	if (scst_mgmt_stub_mempool == NULL) {
+		res = -ENOMEM;
+		goto out_destroy_mgmt_mempool;
+	}
+
+	scst_ua_mempool = mempool_create(512, mempool_alloc_slab,
+		mempool_free_slab, scst_ua_cachep);
+	if (scst_ua_mempool == NULL) {
+		res = -ENOMEM;
+		goto out_destroy_mgmt_stub_mempool;
+	}
+
+	scst_sense_mempool = mempool_create(1024, mempool_alloc_slab,
+		mempool_free_slab, scst_sense_cachep);
+	if (scst_sense_mempool == NULL) {
+		res = -ENOMEM;
+		goto out_destroy_ua_mempool;
+	}
+
+	scst_aen_mempool = mempool_create(100, mempool_alloc_slab,
+		mempool_free_slab, scst_aen_cachep);
+	if (scst_aen_mempool == NULL) {
+		res = -ENOMEM;
+		goto out_destroy_sense_mempool;
+	}
+
+	res = scst_sysfs_init();
+	if (res != 0)
+		goto out_destroy_aen_mempool;
+
+	if (scst_max_cmd_mem == 0) {
+		struct sysinfo si;
+		si_meminfo(&si);
+#if BITS_PER_LONG == 32
+		scst_max_cmd_mem = min(
+			(((uint64_t)(si.totalram - si.totalhigh) << PAGE_SHIFT)
+				>> 20) >> 2, (uint64_t)1 << 30);
+#else
+		scst_max_cmd_mem = (((si.totalram - si.totalhigh) << PAGE_SHIFT)
+					>> 20) >> 2;
+#endif
+	}
+
+	if (scst_max_dev_cmd_mem != 0) {
+		if (scst_max_dev_cmd_mem > scst_max_cmd_mem) {
+			PRINT_ERROR("scst_max_dev_cmd_mem (%d) > "
+				"scst_max_cmd_mem (%d)",
+				scst_max_dev_cmd_mem,
+				scst_max_cmd_mem);
+			scst_max_dev_cmd_mem = scst_max_cmd_mem;
+		}
+	} else
+		scst_max_dev_cmd_mem = scst_max_cmd_mem * 2 / 5;
+
+	res = scst_sgv_pools_init(
+		((uint64_t)scst_max_cmd_mem << 10) >> (PAGE_SHIFT - 10), 0);
+	if (res != 0)
+		goto out_sysfs_cleanup;
+
+	res = scsi_register_interface(&scst_interface);
+	if (res != 0)
+		goto out_destroy_sgv_pool;
+
+	for (i = 0; i < (int)ARRAY_SIZE(scst_tasklets); i++) {
+		spin_lock_init(&scst_tasklets[i].tasklet_lock);
+		INIT_LIST_HEAD(&scst_tasklets[i].tasklet_cmd_list);
+		tasklet_init(&scst_tasklets[i].tasklet,
+			     (void *)scst_cmd_tasklet,
+			     (unsigned long)&scst_tasklets[i]);
+	}
+
+	TRACE_DBG("%d CPUs found, starting %d threads", scst_num_cpus,
+		scst_threads);
+
+	res = scst_start_all_threads(scst_threads);
+	if (res < 0)
+		goto out_thread_free;
+
+	PRINT_INFO("SCST version %s loaded successfully (max mem for "
+		"commands %dMB, per device %dMB)", SCST_VERSION_STRING,
+		scst_max_cmd_mem, scst_max_dev_cmd_mem);
+
+	scst_print_config();
+
+out:
+	return res;
+
+out_thread_free:
+	scst_stop_all_threads();
+
+	scsi_unregister_interface(&scst_interface);
+
+out_destroy_sgv_pool:
+	scst_sgv_pools_deinit();
+
+out_sysfs_cleanup:
+	scst_sysfs_cleanup();
+
+out_destroy_aen_mempool:
+	mempool_destroy(scst_aen_mempool);
+
+out_destroy_sense_mempool:
+	mempool_destroy(scst_sense_mempool);
+
+out_destroy_ua_mempool:
+	mempool_destroy(scst_ua_mempool);
+
+out_destroy_mgmt_stub_mempool:
+	mempool_destroy(scst_mgmt_stub_mempool);
+
+out_destroy_mgmt_mempool:
+	mempool_destroy(scst_mgmt_mempool);
+
+out_destroy_acg_cache:
+	kmem_cache_destroy(scst_acgd_cachep);
+
+out_destroy_tgt_cache:
+	kmem_cache_destroy(scst_tgtd_cachep);
+
+out_destroy_sess_cache:
+	kmem_cache_destroy(scst_sess_cachep);
+
+out_destroy_cmd_cache:
+	kmem_cache_destroy(scst_cmd_cachep);
+
+out_destroy_aen_cache:
+	kmem_cache_destroy(scst_aen_cachep);
+
+out_destroy_sense_cache:
+	kmem_cache_destroy(scst_sense_cachep);
+
+out_destroy_ua_cache:
+	kmem_cache_destroy(scst_ua_cachep);
+
+out_destroy_mgmt_stub_cache:
+	kmem_cache_destroy(scst_mgmt_stub_cachep);
+
+out_destroy_mgmt_cache:
+	kmem_cache_destroy(scst_mgmt_cachep);
+
+out_lib_exit:
+	scst_lib_exit();
+	goto out;
+}
+
+static void __exit exit_scst(void)
+{
+
+	/* ToDo: unregister_cpu_notifier() */
+
+	scst_sysfs_cleanup();
+
+	scst_stop_all_threads();
+
+	scst_deinit_threads(&scst_main_cmd_threads);
+
+	scsi_unregister_interface(&scst_interface);
+
+	scst_sgv_pools_deinit();
+
+#define DEINIT_CACHEP(p) do {		\
+		kmem_cache_destroy(p);	\
+		p = NULL;		\
+	} while (0)
+
+	mempool_destroy(scst_mgmt_mempool);
+	mempool_destroy(scst_mgmt_stub_mempool);
+	mempool_destroy(scst_ua_mempool);
+	mempool_destroy(scst_sense_mempool);
+	mempool_destroy(scst_aen_mempool);
+
+	DEINIT_CACHEP(scst_mgmt_cachep);
+	DEINIT_CACHEP(scst_mgmt_stub_cachep);
+	DEINIT_CACHEP(scst_ua_cachep);
+	DEINIT_CACHEP(scst_sense_cachep);
+	DEINIT_CACHEP(scst_aen_cachep);
+	DEINIT_CACHEP(scst_cmd_cachep);
+	DEINIT_CACHEP(scst_sess_cachep);
+	DEINIT_CACHEP(scst_tgtd_cachep);
+	DEINIT_CACHEP(scst_acgd_cachep);
+
+	scst_lib_exit();
+
+	PRINT_INFO("%s", "SCST unloaded");
+	return;
+}
+
+module_init(init_scst);
+module_exit(exit_scst);
+
+MODULE_AUTHOR("Vladislav Bolkhovitin");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SCSI target core");
+MODULE_VERSION(SCST_VERSION_STRING);


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (2 preceding siblings ...)
  2010-04-13 13:04   ` [PATCH][RFC 3/12/1/5] SCST core's scst_main.c Vladislav Bolkhovitin
@ 2010-04-13 13:05   ` Vladislav Bolkhovitin
  2010-04-13 13:05   ` [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c Vladislav Bolkhovitin
                     ` (5 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:05 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains file scst_targ.c.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_targ.c | 5712 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 5712 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_targ.c linux-2.6.33/drivers/scst/scst_targ.c
--- orig/linux-2.6.33/drivers/scst/scst_targ.c
+++ linux-2.6.33/drivers/scst/scst_targ.c
@@ -0,0 +1,5712 @@
+/*
+ *  scst_targ.c
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <linux/kthread.h>
+#include <linux/delay.h>
+#include <linux/ktime.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+
+#if 0 /* Temporarily left for future performance investigations */
+/* When deleting it, don't forget to delete write_cmd_count */
+#define CONFIG_SCST_ORDERED_READS
+#endif
+
+#if 0 /* Let's disable it for now to see if users will complain about it */
+/* When deleting it, don't forget to delete write_cmd_count */
+#define CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+#endif
+
+static void scst_cmd_set_sn(struct scst_cmd *cmd);
+static int __scst_init_cmd(struct scst_cmd *cmd);
+static void scst_finish_cmd_mgmt(struct scst_cmd *cmd);
+static struct scst_cmd *__scst_find_cmd_by_tag(struct scst_session *sess,
+	uint64_t tag, bool to_abort);
+static void scst_process_redirect_cmd(struct scst_cmd *cmd,
+	enum scst_exec_context context, int check_retries);
+
+static inline void scst_schedule_tasklet(struct scst_cmd *cmd)
+{
+	struct scst_tasklet *t = &scst_tasklets[smp_processor_id()];
+	unsigned long flags;
+
+	spin_lock_irqsave(&t->tasklet_lock, flags);
+	TRACE_DBG("Adding cmd %p to tasklet %d cmd list", cmd,
+		smp_processor_id());
+	list_add_tail(&cmd->cmd_list_entry, &t->tasklet_cmd_list);
+	spin_unlock_irqrestore(&t->tasklet_lock, flags);
+
+	tasklet_schedule(&t->tasklet);
+}
+
+/**
+ * scst_rx_cmd() - create new command
+ * @sess:	SCST session
+ * @lun:	LUN for the command
+ * @lun_len:	length of the LUN in bytes
+ * @cdb:	CDB of the command
+ * @cdb_len:	length of the CDB in bytes
+ * @atomic:	true, if current context is atomic
+ *
+ * Description:
+ *    Creates new SCST command. Returns new command on success or
+ *    NULL otherwise.
+ *
+ *    Must not be called in parallel with scst_unregister_session() for the
+ *    same session.
+ */
+struct scst_cmd *scst_rx_cmd(struct scst_session *sess,
+			     const uint8_t *lun, int lun_len,
+			     const uint8_t *cdb, int cdb_len, int atomic)
+{
+	struct scst_cmd *cmd;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (unlikely(sess->shut_phase != SCST_SESS_SPH_READY)) {
+		PRINT_CRIT_ERROR("%s",
+			"New cmd while shutting down the session");
+		BUG();
+	}
+#endif
+
+	cmd = scst_alloc_cmd(atomic ? GFP_ATOMIC : GFP_KERNEL);
+	if (cmd == NULL)
+		goto out;
+
+	cmd->sess = sess;
+	cmd->tgt = sess->tgt;
+	cmd->tgtt = sess->tgt->tgtt;
+
+	cmd->lun = scst_unpack_lun(lun, lun_len);
+	if (unlikely(cmd->lun == NO_SUCH_LUN)) {
+		PRINT_ERROR("Wrong LUN %d, finishing cmd", -1);
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_lun_not_supported));
+	}
+
+	/*
+	 * For cdb_len 0 the error reporting is deferred until
+	 * scst_cmd_init_done(); scst_set_cmd_error() supports nested calls.
+	 */
+	if (unlikely(cdb_len > SCST_MAX_CDB_SIZE)) {
+		PRINT_ERROR("Too big CDB len %d, finishing cmd", cdb_len);
+		cdb_len = SCST_MAX_CDB_SIZE;
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_message));
+	}
+
+	memcpy(cmd->cdb, cdb, cdb_len);
+	cmd->cdb_len = cdb_len;
+
+	TRACE_DBG("cmd %p, sess %p", cmd, sess);
+	scst_sess_get(sess);
+
+out:
+	return cmd;
+}
+EXPORT_SYMBOL(scst_rx_cmd);
+
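+/*
+ * Illustrative sketch (not part of this patch): how a target driver's
+ * receive path might create and submit a command using scst_rx_cmd() and
+ * scst_cmd_init_done(). The my_* names are hypothetical and
+ * scst_cmd_set_tag() is assumed from SCST's external headers;
+ * SCST_CONTEXT_THREAD is used as the always-safe context choice.
+ *
+ *	static void my_drv_rx_cdb(struct my_conn *conn, struct my_pdu *pdu)
+ *	{
+ *		struct scst_cmd *cmd;
+ *
+ *		cmd = scst_rx_cmd(conn->scst_sess, pdu->lun, pdu->lun_len,
+ *				  pdu->cdb, pdu->cdb_len, 0);
+ *		if (cmd == NULL)
+ *			return;	// out of memory, e.g. drop the PDU
+ *		scst_cmd_set_tag(cmd, pdu->tag);
+ *		scst_cmd_init_done(cmd, SCST_CONTEXT_THREAD);
+ *	}
+ */
+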
+/*
+ * No locks, but might be in IRQ. Returns 0 on success, > 0 if the cmd was
+ * failed and will be finished via the abnormal done path, or < 0 if the cmd
+ * was deferred and its processing should be stopped.
+ */
+static int scst_init_cmd(struct scst_cmd *cmd, enum scst_exec_context *context)
+{
+	int rc, res = 0;
+
+	/* See the comment in scst_do_job_init() */
+	if (unlikely(!list_empty(&scst_init_cmd_list))) {
+		TRACE_MGMT_DBG("%s", "init cmd list busy");
+		goto out_redirect;
+	}
+	/*
+	 * A memory barrier isn't necessary here, because the CPU appears to
+	 * be self-consistent and we don't care about the race described in
+	 * the comment in scst_do_job_init().
+	 */
+
+	rc = __scst_init_cmd(cmd);
+	if (unlikely(rc > 0))
+		goto out_redirect;
+	else if (unlikely(rc != 0)) {
+		res = 1;
+		goto out;
+	}
+
+	EXTRACHECKS_BUG_ON(*context == SCST_CONTEXT_SAME);
+
+	/* Small context optimization */
+	if (((*context == SCST_CONTEXT_TASKLET) ||
+	     (*context == SCST_CONTEXT_DIRECT_ATOMIC)) &&
+	      scst_cmd_is_expected_set(cmd)) {
+		if (cmd->expected_data_direction & SCST_DATA_WRITE) {
+			if (!test_bit(SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC,
+					&cmd->tgt_dev->tgt_dev_flags))
+				*context = SCST_CONTEXT_THREAD;
+		} else {
+			if (!test_bit(SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC,
+					&cmd->tgt_dev->tgt_dev_flags))
+				*context = SCST_CONTEXT_THREAD;
+		}
+	}
+
+out:
+	return res;
+
+out_redirect:
+	if (cmd->preprocessing_only) {
+		/*
+		 * Poor man's solution for single-threaded targets, where
+		 * blocking the receiver at least sometimes means blocking
+		 * everything. For instance, an iSCSI target won't be able
+		 * to receive Data-Out PDUs.
+		 */
+		BUG_ON(*context != SCST_CONTEXT_DIRECT);
+		scst_set_busy(cmd);
+		scst_set_cmd_abnormal_done_state(cmd);
+		res = 1;
+		/* Keep the initiator away from too many BUSY commands */
+		msleep(50);
+	} else {
+		unsigned long flags;
+		spin_lock_irqsave(&scst_init_lock, flags);
+		TRACE_MGMT_DBG("Adding cmd %p to init cmd list (scst_cmd_count "
+			"%d)", cmd, atomic_read(&scst_cmd_count));
+		list_add_tail(&cmd->cmd_list_entry, &scst_init_cmd_list);
+		if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+			scst_init_poll_cnt++;
+		spin_unlock_irqrestore(&scst_init_lock, flags);
+		wake_up(&scst_init_cmd_list_waitQ);
+		res = -1;
+	}
+	goto out;
+}
+
+/**
+ * scst_cmd_init_done() - the command's initialization done
+ * @cmd:	SCST command
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ *    Notifies SCST that the driver finished its part of the command
+ *    initialization, and the command is ready for execution.
+ *    The second argument sets the preferred command execution context.
+ *    See SCST_CONTEXT_* constants for details.
+ *
+ *    !!IMPORTANT!!
+ *
+ *    If cmd->set_sn_on_restart_cmd is not set, this function, as well as
+ *    scst_cmd_init_stage1_done() and scst_restart_cmd(), must not be
+ *    called simultaneously for the same session (more precisely,
+ *    for the same session/LUN, i.e. tgt_dev), i.e. they must be
+ *    somehow externally serialized. This is needed to have a lock-free fast
+ *    path in scst_cmd_set_sn(). For the majority of targets those functions
+ *    are naturally serialized by the single source of commands. Only iSCSI
+ *    immediate commands with multiple connections per session seem to be an
+ *    exception; for them, some mutex/lock shall be used for the
+ *    serialization.
+ */
+void scst_cmd_init_done(struct scst_cmd *cmd,
+	enum scst_exec_context pref_context)
+{
+	unsigned long flags;
+	struct scst_session *sess = cmd->sess;
+	int rc;
+
+	scst_set_start_time(cmd);
+
+	TRACE_DBG("Preferred context: %d (cmd %p)", pref_context, cmd);
+	TRACE(TRACE_SCSI, "tag=%llu, lun=%lld, CDB len=%d, queue_type=%x "
+		"(cmd %p)", (long long unsigned int)cmd->tag,
+		(long long unsigned int)cmd->lun, cmd->cdb_len,
+		cmd->queue_type, cmd);
+	PRINT_BUFF_FLAG(TRACE_SCSI|TRACE_RCV_BOT, "Recieving CDB",
+		cmd->cdb, cmd->cdb_len);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (unlikely((in_irq() || irqs_disabled())) &&
+	    ((pref_context == SCST_CONTEXT_DIRECT) ||
+	     (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+		PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+			"SCST_CONTEXT_THREAD instead", pref_context,
+			cmd->tgtt->name);
+		pref_context = SCST_CONTEXT_THREAD;
+	}
+#endif
+
+	atomic_inc(&sess->sess_cmd_count);
+
+	spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+	if (unlikely(sess->init_phase != SCST_SESS_IPH_READY)) {
+		/*
+		 * We must always keep commands in the sess list from the
+		 * very beginning, because otherwise they can be missed during
+		 * TM processing. This check is needed because there might be
+		 * old, i.e. deferred, commands and new, just arriving, ones.
+		 */
+		if (cmd->sess_cmd_list_entry.next == NULL)
+			list_add_tail(&cmd->sess_cmd_list_entry,
+				&sess->sess_cmd_list);
+		switch (sess->init_phase) {
+		case SCST_SESS_IPH_SUCCESS:
+			break;
+		case SCST_SESS_IPH_INITING:
+			TRACE_DBG("Adding cmd %p to init deferred cmd list",
+				  cmd);
+			list_add_tail(&cmd->cmd_list_entry,
+				&sess->init_deferred_cmd_list);
+			spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+			goto out;
+		case SCST_SESS_IPH_FAILED:
+			spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+			scst_set_busy(cmd);
+			scst_set_cmd_abnormal_done_state(cmd);
+			goto active;
+		default:
+			BUG();
+		}
+	} else
+		list_add_tail(&cmd->sess_cmd_list_entry,
+			      &sess->sess_cmd_list);
+
+	spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+
+	if (unlikely(cmd->cdb_len == 0)) {
+		PRINT_ERROR("%s", "Wrong CDB len 0, finishing cmd");
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+		scst_set_cmd_abnormal_done_state(cmd);
+		goto active;
+	}
+
+	if (unlikely(cmd->queue_type >= SCST_CMD_QUEUE_ACA)) {
+		PRINT_ERROR("Unsupported queue type %d", cmd->queue_type);
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_message));
+		goto active;
+	}
+
+	/*
+	 * The cmd must be inited here to preserve the order. Even if the cmd
+	 * was already preliminarily completed by the target driver, we need to
+	 * init it anyway to find out in which format we should return sense.
+	 */
+	cmd->state = SCST_CMD_STATE_INIT;
+	rc = scst_init_cmd(cmd, &pref_context);
+	if (unlikely(rc < 0))
+		goto out;
+
+active:
+	/* Here cmd must not be in any cmd list, no locks */
+	switch (pref_context) {
+	case SCST_CONTEXT_TASKLET:
+		scst_schedule_tasklet(cmd);
+		break;
+
+	case SCST_CONTEXT_DIRECT:
+		scst_process_active_cmd(cmd, false);
+		break;
+
+	case SCST_CONTEXT_DIRECT_ATOMIC:
+		scst_process_active_cmd(cmd, true);
+		break;
+
+	default:
+		PRINT_ERROR("Context %x is undefined, using the thread one",
+			pref_context);
+		/* fall through */
+	case SCST_CONTEXT_THREAD:
+		spin_lock_irqsave(&cmd->cmd_threads->cmd_list_lock, flags);
+		TRACE_DBG("Adding cmd %p to active cmd list", cmd);
+		if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+			list_add(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		else
+			list_add_tail(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock_irqrestore(&cmd->cmd_threads->cmd_list_lock, flags);
+		break;
+	}
+
+out:
+	return;
+}
+EXPORT_SYMBOL(scst_cmd_init_done);
+
+static int scst_pre_parse(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME;
+	struct scst_device *dev = cmd->dev;
+	int rc;
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+	cmd->inc_expected_sn_on_done = 1;
+#else
+	cmd->inc_expected_sn_on_done = dev->handler->exec_sync ||
+	     (!dev->has_own_order_mgmt &&
+	      (dev->queue_alg == SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER ||
+	       cmd->queue_type == SCST_CMD_QUEUE_ORDERED));
+#endif
+
+	/*
+	 * Expected transfer data supplied by the SCSI transport via the
+	 * target driver are untrusted, so we prefer to fetch them from CDB.
+	 * Additionally, not all transports support supplying the expected
+	 * transfer data.
+	 */
+
+	rc = scst_get_cdb_info(cmd);
+	if (unlikely(rc != 0)) {
+		if (rc > 0) {
+			PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+			goto out_err;
+		}
+
+		EXTRACHECKS_BUG_ON(cmd->op_flags & SCST_INFO_VALID);
+
+		cmd->cdb_len = scst_get_cdb_len(cmd->cdb);
+
+		TRACE(TRACE_MINOR, "Unknown opcode 0x%02x for %s. "
+			"Should you update scst_scsi_op_table?",
+			cmd->cdb[0], dev->handler->name);
+		PRINT_BUFF_FLAG(TRACE_MINOR, "Failed CDB", cmd->cdb,
+			cmd->cdb_len);
+	} else {
+		EXTRACHECKS_BUG_ON(!(cmd->op_flags & SCST_INFO_VALID));
+	}
+
+	cmd->state = SCST_CMD_STATE_DEV_PARSE;
+
+	TRACE_DBG("op_name <%s> (cmd %p), direction=%d "
+		"(expected %d, set %s), transfer_len=%d (expected "
+		"len %d), flags=%d", cmd->op_name, cmd,
+		cmd->data_direction, cmd->expected_data_direction,
+		scst_cmd_is_expected_set(cmd) ? "yes" : "no",
+		cmd->bufflen, cmd->expected_transfer_len,
+		cmd->op_flags);
+
+out:
+	return res;
+
+out_err:
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+	scst_set_cmd_abnormal_done_state(cmd);
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+}
+
+#ifndef CONFIG_SCST_USE_EXPECTED_VALUES
+static bool scst_is_allowed_to_mismatch_cmd(struct scst_cmd *cmd)
+{
+	bool res = false;
+
+	/* VERIFY commands with BYTCHK unset shouldn't fail here */
+	if ((cmd->op_flags & SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED) &&
+	    (cmd->cdb[1] & BYTCHK) == 0) {
+		res = true;
+		goto out;
+	}
+
+	switch (cmd->cdb[0]) {
+	case TEST_UNIT_READY:
+		/* Crazy VMware people sometimes do TUR with READ direction */
+		res = true;
+		break;
+	}
+
+out:
+	return res;
+}
+#endif
+
+static int scst_parse_cmd(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME;
+	int state;
+	struct scst_device *dev = cmd->dev;
+	int orig_bufflen = cmd->bufflen;
+
+	if (likely(!scst_is_cmd_fully_local(cmd))) {
+		if (unlikely(!dev->handler->parse_atomic &&
+			     scst_cmd_atomic(cmd))) {
+			/*
+			 * This shouldn't happen, due to the SCST_TGT_DEV_AFTER_*
+			 * optimization.
+			 */
+			TRACE_DBG("Dev handler %s parse() needs thread "
+				"context, rescheduling", dev->handler->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			goto out;
+		}
+
+		TRACE_DBG("Calling dev handler %s parse(%p)",
+		      dev->handler->name, cmd);
+		TRACE_BUFF_FLAG(TRACE_SND_BOT, "Parsing: ",
+				cmd->cdb, cmd->cdb_len);
+		scst_set_cur_start(cmd);
+		state = dev->handler->parse(cmd);
+		/* Caution: cmd can be already dead here */
+		TRACE_DBG("Dev handler %s parse() returned %d",
+			dev->handler->name, state);
+
+		switch (state) {
+		case SCST_CMD_STATE_NEED_THREAD_CTX:
+			scst_set_parse_time(cmd);
+			TRACE_DBG("Dev handler %s parse() requested thread "
+			      "context, rescheduling", dev->handler->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			goto out;
+
+		case SCST_CMD_STATE_STOP:
+			TRACE_DBG("Dev handler %s parse() requested stop "
+				"processing", dev->handler->name);
+			res = SCST_CMD_STATE_RES_CONT_NEXT;
+			goto out;
+		}
+
+		scst_set_parse_time(cmd);
+
+		if (state == SCST_CMD_STATE_DEFAULT)
+			state = SCST_CMD_STATE_PREPARE_SPACE;
+	} else
+		state = SCST_CMD_STATE_PREPARE_SPACE;
+
+	if (unlikely(state == SCST_CMD_STATE_PRE_XMIT_RESP))
+		goto set_res;
+
+	if (unlikely(!(cmd->op_flags & SCST_INFO_VALID))) {
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+		if (scst_cmd_is_expected_set(cmd)) {
+			TRACE(TRACE_MINOR, "Using initiator supplied values: "
+				"direction %d, transfer_len %d",
+				cmd->expected_data_direction,
+				cmd->expected_transfer_len);
+			cmd->data_direction = cmd->expected_data_direction;
+			cmd->bufflen = cmd->expected_transfer_len;
+		} else {
+			PRINT_ERROR("Unknown opcode 0x%02x for %s and "
+			     "target %s not supplied expected values",
+			     cmd->cdb[0], dev->handler->name, cmd->tgtt->name);
+			scst_set_cmd_error(cmd,
+				   SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+			goto out_done;
+		}
+#else
+		PRINT_ERROR("Unknown opcode %x", cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+		goto out_done;
+#endif
+	}
+
+	if (unlikely(cmd->cdb_len == -1)) {
+		PRINT_ERROR("Unable to get CDB length for "
+			"opcode 0x%02x. Returning INVALID "
+			"OPCODE", cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_opcode));
+		goto out_done;
+	}
+
+	EXTRACHECKS_BUG_ON(cmd->cdb_len == 0);
+
+	TRACE(TRACE_SCSI, "op_name <%s> (cmd %p), direction=%d "
+		"(expected %d, set %s), transfer_len=%d (expected "
+		"len %d), flags=%d", cmd->op_name, cmd,
+		cmd->data_direction, cmd->expected_data_direction,
+		scst_cmd_is_expected_set(cmd) ? "yes" : "no",
+		cmd->bufflen, cmd->expected_transfer_len,
+		cmd->op_flags);
+
+	if (unlikely((cmd->op_flags & SCST_UNKNOWN_LENGTH) != 0)) {
+		if (scst_cmd_is_expected_set(cmd)) {
+			/*
+			 * Command data length can't be easily
+			 * determined from the CDB. ToDo: processing of
+			 * all such commands should be fixed. Until
+			 * then, get the length from the supplied
+			 * expected value, but limit it to some
+			 * reasonable value (15MB).
+			 */
+			cmd->bufflen = min(cmd->expected_transfer_len,
+						15*1024*1024);
+			cmd->op_flags &= ~SCST_UNKNOWN_LENGTH;
+		} else {
+			PRINT_ERROR("Unknown data transfer length for opcode "
+				"0x%x (handler %s, target %s)", cmd->cdb[0],
+				dev->handler->name, cmd->tgtt->name);
+			PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+			scst_set_cmd_error(cmd,
+				SCST_LOAD_SENSE(scst_sense_invalid_message));
+			goto out_done;
+		}
+	}
+
+	if (unlikely(cmd->cdb[cmd->cdb_len - 1] & CONTROL_BYTE_NACA_BIT)) {
+		PRINT_ERROR("NACA bit in control byte CDB is not supported "
+			    "(opcode 0x%02x)", cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+		goto out_done;
+	}
+
+	if (unlikely(cmd->cdb[cmd->cdb_len - 1] & CONTROL_BYTE_LINK_BIT)) {
+		PRINT_ERROR("Linked commands are not supported "
+			    "(opcode 0x%02x)", cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+		goto out_done;
+	}
+
+	if (cmd->dh_data_buf_alloced &&
+	    unlikely((orig_bufflen > cmd->bufflen))) {
+		PRINT_ERROR("Dev handler supplied data buffer (size %d), "
+			"is less, than required (size %d)", cmd->bufflen,
+			orig_bufflen);
+		PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+		goto out_hw_error;
+	}
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if ((cmd->bufflen != 0) &&
+	    ((cmd->data_direction == SCST_DATA_NONE) ||
+	     ((cmd->sg == NULL) && (state > SCST_CMD_STATE_PREPARE_SPACE)))) {
+		PRINT_ERROR("Dev handler %s parse() returned "
+			"invalid cmd data_direction %d, bufflen %d, state %d "
+			"or sg %p (opcode 0x%x)", dev->handler->name,
+			cmd->data_direction, cmd->bufflen, state, cmd->sg,
+			cmd->cdb[0]);
+		PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+		goto out_hw_error;
+	}
+#endif
+
+	if (scst_cmd_is_expected_set(cmd)) {
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+#	ifdef CONFIG_SCST_EXTRACHECKS
+		if (unlikely((cmd->data_direction != cmd->expected_data_direction) ||
+			     (cmd->bufflen != cmd->expected_transfer_len))) {
+			TRACE(TRACE_MINOR, "Expected values don't match "
+				"decoded ones: data_direction %d, "
+				"expected_data_direction %d, "
+				"bufflen %d, expected_transfer_len %d",
+				cmd->data_direction,
+				cmd->expected_data_direction,
+				cmd->bufflen, cmd->expected_transfer_len);
+			PRINT_BUFF_FLAG(TRACE_MINOR, "Suspicious CDB",
+				cmd->cdb, cmd->cdb_len);
+		}
+#	endif
+		cmd->data_direction = cmd->expected_data_direction;
+		cmd->bufflen = cmd->expected_transfer_len;
+#else
+		if (unlikely(cmd->data_direction !=
+				cmd->expected_data_direction)) {
+			if (((cmd->expected_data_direction != SCST_DATA_NONE) ||
+			     (cmd->bufflen != 0)) &&
+			    !scst_is_allowed_to_mismatch_cmd(cmd)) {
+				PRINT_ERROR("Expected data direction %d for "
+					"opcode 0x%02x (handler %s, target %s) "
+					"doesn't match decoded value %d",
+					cmd->expected_data_direction,
+					cmd->cdb[0], dev->handler->name,
+					cmd->tgtt->name, cmd->data_direction);
+				PRINT_BUFFER("Failed CDB", cmd->cdb,
+					cmd->cdb_len);
+				scst_set_cmd_error(cmd,
+				   SCST_LOAD_SENSE(scst_sense_invalid_message));
+				goto out_done;
+			}
+		}
+		if (unlikely(cmd->bufflen != cmd->expected_transfer_len)) {
+			TRACE(TRACE_MINOR, "Warning: expected "
+				"transfer length %d for opcode 0x%02x "
+				"(handler %s, target %s) doesn't match "
+				"decoded value %d. Faulty initiator "
+				"(e.g. VMware is known to be such) or "
+				"scst_scsi_op_table should be updated?",
+				cmd->expected_transfer_len, cmd->cdb[0],
+				dev->handler->name, cmd->tgtt->name,
+				cmd->bufflen);
+			PRINT_BUFF_FLAG(TRACE_MINOR, "Suspicious CDB",
+				cmd->cdb, cmd->cdb_len);
+			/* Needed, e.g., to get immediate iSCSI data */
+			cmd->bufflen = max(cmd->bufflen,
+					   cmd->expected_transfer_len);
+		}
+#endif
+	}
+
+	if (unlikely(cmd->data_direction == SCST_DATA_UNKNOWN)) {
+		PRINT_ERROR("Unknown data direction. Opcode 0x%x, handler %s, "
+			"target %s", cmd->cdb[0], dev->handler->name,
+			cmd->tgtt->name);
+		PRINT_BUFFER("Failed CDB", cmd->cdb, cmd->cdb_len);
+		goto out_hw_error;
+	}
+
+set_res:
+	if (cmd->data_len == -1)
+		cmd->data_len = cmd->bufflen;
+
+	if (cmd->bufflen == 0) {
+		/*
+		 * According to SPC, bufflen 0 for data transfer commands isn't
+		 * an error, so we need to fix the transfer direction.
+		 */
+		cmd->data_direction = SCST_DATA_NONE;
+	}
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	switch (state) {
+	case SCST_CMD_STATE_PREPARE_SPACE:
+	case SCST_CMD_STATE_PRE_PARSE:
+	case SCST_CMD_STATE_DEV_PARSE:
+	case SCST_CMD_STATE_RDY_TO_XFER:
+	case SCST_CMD_STATE_TGT_PRE_EXEC:
+	case SCST_CMD_STATE_SEND_FOR_EXEC:
+	case SCST_CMD_STATE_LOCAL_EXEC:
+	case SCST_CMD_STATE_REAL_EXEC:
+	case SCST_CMD_STATE_PRE_DEV_DONE:
+	case SCST_CMD_STATE_DEV_DONE:
+	case SCST_CMD_STATE_PRE_XMIT_RESP:
+	case SCST_CMD_STATE_XMIT_RESP:
+	case SCST_CMD_STATE_FINISHED:
+	case SCST_CMD_STATE_FINISHED_INTERNAL:
+#endif
+		cmd->state = state;
+		res = SCST_CMD_STATE_RES_CONT_SAME;
+#ifdef CONFIG_SCST_EXTRACHECKS
+		break;
+
+	default:
+		if (state >= 0) {
+			PRINT_ERROR("Dev handler %s parse() returned "
+			     "invalid cmd state %d (opcode %d)",
+			     dev->handler->name, state, cmd->cdb[0]);
+		} else {
+			PRINT_ERROR("Dev handler %s parse() returned "
+				"error %d (opcode %d)", dev->handler->name,
+				state, cmd->cdb[0]);
+		}
+		goto out_hw_error;
+	}
+#endif
+
+	if (cmd->resp_data_len == -1) {
+		if (cmd->data_direction & SCST_DATA_READ)
+			cmd->resp_data_len = cmd->bufflen;
+		else
+			cmd->resp_data_len = 0;
+	}
+
+	/* We already completed (with an error) */
+	if (unlikely(cmd->completed))
+		goto out_done;
+
+out:
+	return res;
+
+out_hw_error:
+	/* dev_done() will be called as part of the regular cmd's finish */
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+
+out_done:
+	scst_set_cmd_abnormal_done_state(cmd);
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+}
+
+static int scst_prepare_space(struct scst_cmd *cmd)
+{
+	int r = 0, res = SCST_CMD_STATE_RES_CONT_SAME;
+
+	if (cmd->data_direction == SCST_DATA_NONE)
+		goto done;
+
+	if (cmd->tgt_need_alloc_data_buf) {
+		int orig_bufflen = cmd->bufflen;
+
+		TRACE_MEM("Custom tgt data buf allocation requested (cmd %p)",
+			cmd);
+
+		scst_set_cur_start(cmd);
+		r = cmd->tgtt->alloc_data_buf(cmd);
+		scst_set_alloc_buf_time(cmd);
+
+		if (r > 0)
+			goto alloc;
+		else if (r == 0) {
+			if (unlikely(cmd->bufflen == 0)) {
+				/* See comment in scst_alloc_space() */
+				if (cmd->sg == NULL)
+					goto alloc;
+			}
+
+			cmd->tgt_data_buf_alloced = 1;
+
+			if (unlikely(orig_bufflen < cmd->bufflen)) {
+				PRINT_ERROR("Target driver allocated data "
+					"buffer (size %d), is less, than "
+					"required (size %d)", orig_bufflen,
+					cmd->bufflen);
+				goto out_error;
+			}
+			TRACE_MEM("tgt_data_buf_alloced (cmd %p)", cmd);
+		} else
+			goto check;
+	}
+
+alloc:
+	if (!cmd->tgt_data_buf_alloced && !cmd->dh_data_buf_alloced) {
+		r = scst_alloc_space(cmd);
+	} else if (cmd->dh_data_buf_alloced && !cmd->tgt_data_buf_alloced) {
+		TRACE_MEM("dh_data_buf_alloced set (cmd %p)", cmd);
+		r = 0;
+	} else if (cmd->tgt_data_buf_alloced && !cmd->dh_data_buf_alloced) {
+		TRACE_MEM("tgt_data_buf_alloced set (cmd %p)", cmd);
+		cmd->sg = cmd->tgt_sg;
+		cmd->sg_cnt = cmd->tgt_sg_cnt;
+		cmd->in_sg = cmd->tgt_in_sg;
+		cmd->in_sg_cnt = cmd->tgt_in_sg_cnt;
+		r = 0;
+	} else {
+		TRACE_MEM("Both *_data_buf_alloced set (cmd %p, sg %p, "
+			"sg_cnt %d, tgt_sg %p, tgt_sg_cnt %d)", cmd, cmd->sg,
+			cmd->sg_cnt, cmd->tgt_sg, cmd->tgt_sg_cnt);
+		r = 0;
+	}
+
+check:
+	if (r != 0) {
+		if (scst_cmd_atomic(cmd)) {
+			TRACE_MEM("%s", "Atomic memory allocation failed, "
+			      "rescheduling to the thread");
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			goto out;
+		} else
+			goto out_no_space;
+	}
+
+done:
+	if (cmd->preprocessing_only)
+		cmd->state = SCST_CMD_STATE_PREPROCESSING_DONE;
+	else if (cmd->data_direction & SCST_DATA_WRITE)
+		cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+	else
+		cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+
+out:
+	return res;
+
+out_no_space:
+	TRACE(TRACE_OUT_OF_MEM, "Unable to allocate or build requested buffer "
+		"(size %d), sending BUSY or QUEUE FULL status", cmd->bufflen);
+	scst_set_busy(cmd);
+	scst_set_cmd_abnormal_done_state(cmd);
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+
+out_error:
+	scst_set_cmd_error(cmd,	SCST_LOAD_SENSE(scst_sense_hardw_error));
+	scst_set_cmd_abnormal_done_state(cmd);
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+}
+
+static int scst_preprocessing_done(struct scst_cmd *cmd)
+{
+	int res;
+
+	EXTRACHECKS_BUG_ON(!cmd->preprocessing_only);
+
+	cmd->preprocessing_only = 0;
+
+	res = SCST_CMD_STATE_RES_CONT_NEXT;
+	cmd->state = SCST_CMD_STATE_PREPROCESSING_DONE_CALLED;
+
+	TRACE_DBG("Calling preprocessing_done(cmd %p)", cmd);
+	scst_set_cur_start(cmd);
+	cmd->tgtt->preprocessing_done(cmd);
+	TRACE_DBG("%s", "preprocessing_done() returned");
+	return res;
+}
+
+/**
+ * scst_restart_cmd() - restart execution of the command
+ * @cmd:	SCST command
+ * @status:	completion status
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ *    Notifies SCST that the driver finished its part of the command's
+ *    preprocessing and it is ready for further processing.
+ *
+ *    The second argument sets completion status
+ *    (see SCST_PREPROCESS_STATUS_* constants for details)
+ *
+ *    See also comment for scst_cmd_init_done() for the serialization
+ *    requirements.
+ */
+void scst_restart_cmd(struct scst_cmd *cmd, int status,
+	enum scst_exec_context pref_context)
+{
+
+	scst_set_restart_waiting_time(cmd);
+
+	TRACE_DBG("Preferred context: %d", pref_context);
+	TRACE_DBG("tag=%llu, status=%#x",
+		  (long long unsigned int)scst_cmd_get_tag(cmd),
+		  status);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if ((in_irq() || irqs_disabled()) &&
+	    ((pref_context == SCST_CONTEXT_DIRECT) ||
+	     (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+		PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+			"SCST_CONTEXT_THREAD instead", pref_context,
+			cmd->tgtt->name);
+		pref_context = SCST_CONTEXT_THREAD;
+	}
+#endif
+
+	switch (status) {
+	case SCST_PREPROCESS_STATUS_SUCCESS:
+		if (cmd->data_direction & SCST_DATA_WRITE)
+			cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+		else
+			cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+		if (cmd->set_sn_on_restart_cmd)
+			scst_cmd_set_sn(cmd);
+		/* Small context optimization */
+		if ((pref_context == SCST_CONTEXT_TASKLET) ||
+		    (pref_context == SCST_CONTEXT_DIRECT_ATOMIC) ||
+		    ((pref_context == SCST_CONTEXT_SAME) &&
+		     scst_cmd_atomic(cmd))) {
+			if (cmd->data_direction & SCST_DATA_WRITE) {
+				if (!test_bit(SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC,
+						&cmd->tgt_dev->tgt_dev_flags))
+					pref_context = SCST_CONTEXT_THREAD;
+			} else {
+				if (!test_bit(SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC,
+						&cmd->tgt_dev->tgt_dev_flags))
+					pref_context = SCST_CONTEXT_THREAD;
+			}
+		}
+		break;
+
+	case SCST_PREPROCESS_STATUS_ERROR_SENSE_SET:
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+
+	case SCST_PREPROCESS_STATUS_ERROR_FATAL:
+		set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+		/* fall through */
+	case SCST_PREPROCESS_STATUS_ERROR:
+		if (cmd->sense != NULL)
+			scst_set_cmd_error(cmd,
+				SCST_LOAD_SENSE(scst_sense_hardw_error));
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+
+	default:
+		PRINT_ERROR("%s() received unknown status %x", __func__,
+			status);
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+	}
+
+	scst_process_redirect_cmd(cmd, pref_context, 1);
+	return;
+}
+EXPORT_SYMBOL(scst_restart_cmd);
+
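+/*
+ * Illustrative sketch (not part of this patch): a target driver doing
+ * asynchronous preprocessing returns the command to SCST with
+ * scst_restart_cmd(), e.g. from its completion handler. The my_* name is
+ * hypothetical.
+ *
+ *	static void my_preprocessing_finished(struct scst_cmd *cmd, int rc)
+ *	{
+ *		scst_restart_cmd(cmd, (rc == 0) ?
+ *			SCST_PREPROCESS_STATUS_SUCCESS :
+ *			SCST_PREPROCESS_STATUS_ERROR,
+ *			SCST_CONTEXT_THREAD);
+ *	}
+ */
+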
+static int scst_rdy_to_xfer(struct scst_cmd *cmd)
+{
+	int res, rc;
+	struct scst_tgt_template *tgtt = cmd->tgtt;
+
+	if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+		TRACE_MGMT_DBG("ABORTED set, aborting cmd %p", cmd);
+		goto out_dev_done;
+	}
+
+	if ((tgtt->rdy_to_xfer == NULL) || unlikely(cmd->internal)) {
+		cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+		res = SCST_CMD_STATE_RES_CONT_SAME;
+		goto out;
+	}
+
+	if (unlikely(!tgtt->rdy_to_xfer_atomic && scst_cmd_atomic(cmd))) {
+		/*
+		 * This shouldn't happen, due to the SCST_TGT_DEV_AFTER_*
+		 * optimization.
+		 */
+		TRACE_DBG("Target driver %s rdy_to_xfer() needs thread "
+			      "context, rescheduling", tgtt->name);
+		res = SCST_CMD_STATE_RES_NEED_THREAD;
+		goto out;
+	}
+
+	while (1) {
+		int finished_cmds = atomic_read(&cmd->tgt->finished_cmds);
+
+		res = SCST_CMD_STATE_RES_CONT_NEXT;
+		cmd->state = SCST_CMD_STATE_DATA_WAIT;
+
+		if (tgtt->on_hw_pending_cmd_timeout != NULL) {
+			struct scst_session *sess = cmd->sess;
+			cmd->hw_pending_start = jiffies;
+			cmd->cmd_hw_pending = 1;
+			if (!test_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags)) {
+				TRACE_DBG("Sched HW pending work for sess %p "
+					"(max time %d)", sess,
+					tgtt->max_hw_pending_time);
+				set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED,
+					&sess->sess_aflags);
+				schedule_delayed_work(&sess->hw_pending_work,
+					tgtt->max_hw_pending_time * HZ);
+			}
+		}
+
+		scst_set_cur_start(cmd);
+
+		TRACE_DBG("Calling rdy_to_xfer(%p)", cmd);
+#ifdef CONFIG_SCST_DEBUG_RETRY
+		if (((scst_random() % 100) == 75))
+			rc = SCST_TGT_RES_QUEUE_FULL;
+		else
+#endif
+			rc = tgtt->rdy_to_xfer(cmd);
+		TRACE_DBG("rdy_to_xfer() returned %d", rc);
+
+		if (likely(rc == SCST_TGT_RES_SUCCESS))
+			goto out;
+
+		scst_set_rdy_to_xfer_time(cmd);
+
+		cmd->cmd_hw_pending = 0;
+
+		/* Restore the previous state */
+		cmd->state = SCST_CMD_STATE_RDY_TO_XFER;
+
+		switch (rc) {
+		case SCST_TGT_RES_QUEUE_FULL:
+			if (scst_queue_retry_cmd(cmd, finished_cmds) == 0)
+				break;
+			else
+				continue;
+
+		case SCST_TGT_RES_NEED_THREAD_CTX:
+			TRACE_DBG("Target driver %s "
+			      "rdy_to_xfer() requested thread "
+			      "context, rescheduling", tgtt->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			break;
+
+		default:
+			goto out_error_rc;
+		}
+		break;
+	}
+
+out:
+	return res;
+
+out_error_rc:
+	if (rc == SCST_TGT_RES_FATAL_ERROR) {
+		PRINT_ERROR("Target driver %s rdy_to_xfer() returned "
+		     "fatal error", tgtt->name);
+	} else {
+		PRINT_ERROR("Target driver %s rdy_to_xfer() returned invalid "
+			    "value %d", tgtt->name, rc);
+	}
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+
+out_dev_done:
+	scst_set_cmd_abnormal_done_state(cmd);
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+}
+
+/* No locks, but might be in IRQ */
+static void scst_process_redirect_cmd(struct scst_cmd *cmd,
+	enum scst_exec_context context, int check_retries)
+{
+	struct scst_tgt *tgt = cmd->tgt;
+	unsigned long flags;
+
+	TRACE_DBG("Context: %x", context);
+
+	if (context == SCST_CONTEXT_SAME)
+		context = scst_cmd_atomic(cmd) ? SCST_CONTEXT_DIRECT_ATOMIC :
+						 SCST_CONTEXT_DIRECT;
+
+	switch (context) {
+	case SCST_CONTEXT_DIRECT_ATOMIC:
+		scst_process_active_cmd(cmd, true);
+		break;
+
+	case SCST_CONTEXT_DIRECT:
+		if (check_retries)
+			scst_check_retries(tgt);
+		scst_process_active_cmd(cmd, false);
+		break;
+
+	default:
+		PRINT_ERROR("Context %x is unknown, using the thread one",
+			    context);
+		/* fall through */
+	case SCST_CONTEXT_THREAD:
+		if (check_retries)
+			scst_check_retries(tgt);
+		spin_lock_irqsave(&cmd->cmd_threads->cmd_list_lock, flags);
+		TRACE_DBG("Adding cmd %p to active cmd list", cmd);
+		if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+			list_add(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		else
+			list_add_tail(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock_irqrestore(&cmd->cmd_threads->cmd_list_lock, flags);
+		break;
+
+	case SCST_CONTEXT_TASKLET:
+		if (check_retries)
+			scst_check_retries(tgt);
+		scst_schedule_tasklet(cmd);
+		break;
+	}
+	return;
+}
+
+/**
+ * scst_rx_data() - the command's data received
+ * @cmd:	SCST command
+ * @status:	data receiving completion status
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ *    Notifies SCST that the driver received all the necessary data
+ *    and the command is ready for further processing.
+ *
+ *    The second argument sets data receiving completion status
+ *    (see SCST_RX_STATUS_* constants for details)
+ */
+void scst_rx_data(struct scst_cmd *cmd, int status,
+	enum scst_exec_context pref_context)
+{
+
+	scst_set_rdy_to_xfer_time(cmd);
+
+	TRACE_DBG("Preferred context: %d", pref_context);
+	TRACE(TRACE_SCSI, "cmd %p, status %#x", cmd, status);
+
+	cmd->cmd_hw_pending = 0;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if ((in_irq() || irqs_disabled()) &&
+	    ((pref_context == SCST_CONTEXT_DIRECT) ||
+	     (pref_context == SCST_CONTEXT_DIRECT_ATOMIC))) {
+		PRINT_ERROR("Wrong context %d in IRQ from target %s, use "
+			"SCST_CONTEXT_THREAD instead", pref_context,
+			cmd->tgtt->name);
+		pref_context = SCST_CONTEXT_THREAD;
+	}
+#endif
+
+	switch (status) {
+	case SCST_RX_STATUS_SUCCESS:
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+		if (trace_flag & TRACE_RCV_BOT) {
+			int i;
+			struct scatterlist *sg;
+			if (cmd->in_sg != NULL)
+				sg = cmd->in_sg;
+			else if (cmd->tgt_in_sg != NULL)
+				sg = cmd->tgt_in_sg;
+			else if (cmd->tgt_sg != NULL)
+				sg = cmd->tgt_sg;
+			else
+				sg = cmd->sg;
+			if (sg != NULL) {
+				TRACE_RECV_BOT("RX data for cmd %p "
+					"(sg_cnt %d, sg %p, sg[0].page %p)",
+					cmd, cmd->tgt_sg_cnt, sg,
+					(void *)sg_page(&sg[0]));
+				for (i = 0; i < cmd->tgt_sg_cnt; ++i) {
+					PRINT_BUFF_FLAG(TRACE_RCV_BOT, "RX sg",
+						sg_virt(&sg[i]), sg[i].length);
+				}
+			}
+		}
+#endif
+		cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+
+		/* Small context optimization */
+		if ((pref_context == SCST_CONTEXT_TASKLET) ||
+		    (pref_context == SCST_CONTEXT_DIRECT_ATOMIC) ||
+		    ((pref_context == SCST_CONTEXT_SAME) &&
+		     scst_cmd_atomic(cmd))) {
+			if (!test_bit(SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC,
+					&cmd->tgt_dev->tgt_dev_flags))
+				pref_context = SCST_CONTEXT_THREAD;
+		}
+		break;
+
+	case SCST_RX_STATUS_ERROR_SENSE_SET:
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+
+	case SCST_RX_STATUS_ERROR_FATAL:
+		set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+		/* fall through */
+	case SCST_RX_STATUS_ERROR:
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_hardw_error));
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+
+	default:
+		PRINT_ERROR("scst_rx_data() received unknown status %x",
+			status);
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+	}
+
+	scst_process_redirect_cmd(cmd, pref_context, 1);
+	return;
+}
+EXPORT_SYMBOL(scst_rx_data);
+
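+/*
+ * Illustrative sketch (not part of this patch): a target driver's
+ * rdy_to_xfer() typically starts receiving the WRITE data asynchronously
+ * and reports completion with scst_rx_data() from, e.g., an interrupt or
+ * completion handler. The my_* names are hypothetical.
+ *
+ *	static int my_rdy_to_xfer(struct scst_cmd *cmd)
+ *	{
+ *		my_start_data_receive(cmd);	// async
+ *		return SCST_TGT_RES_SUCCESS;
+ *	}
+ *
+ *	static void my_data_received(struct scst_cmd *cmd, bool ok)
+ *	{
+ *		scst_rx_data(cmd, ok ? SCST_RX_STATUS_SUCCESS :
+ *			SCST_RX_STATUS_ERROR, SCST_CONTEXT_THREAD);
+ *	}
+ */
+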
+static int scst_tgt_pre_exec(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME, rc;
+
+	cmd->state = SCST_CMD_STATE_SEND_FOR_EXEC;
+
+	if ((cmd->tgtt->pre_exec == NULL) || unlikely(cmd->internal))
+		goto out;
+
+	TRACE_DBG("Calling pre_exec(%p)", cmd);
+	scst_set_cur_start(cmd);
+	rc = cmd->tgtt->pre_exec(cmd);
+	scst_set_pre_exec_time(cmd);
+	TRACE_DBG("pre_exec() returned %d", rc);
+
+	if (unlikely(rc != SCST_PREPROCESS_STATUS_SUCCESS)) {
+		switch (rc) {
+		case SCST_PREPROCESS_STATUS_ERROR_SENSE_SET:
+			scst_set_cmd_abnormal_done_state(cmd);
+			break;
+		case SCST_PREPROCESS_STATUS_ERROR_FATAL:
+			set_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags);
+			/* fall through */
+		case SCST_PREPROCESS_STATUS_ERROR:
+			scst_set_cmd_error(cmd,
+				   SCST_LOAD_SENSE(scst_sense_hardw_error));
+			scst_set_cmd_abnormal_done_state(cmd);
+			break;
+		case SCST_PREPROCESS_STATUS_NEED_THREAD:
+			TRACE_DBG("Target driver's %s pre_exec() requested "
+				"thread context, rescheduling",
+				cmd->tgtt->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			cmd->state = SCST_CMD_STATE_TGT_PRE_EXEC;
+			break;
+		default:
+			BUG();
+			break;
+		}
+	}
+
+out:
+	return res;
+}
+
+static void scst_do_cmd_done(struct scst_cmd *cmd, int result,
+	const uint8_t *rq_sense, int rq_sense_len, int resid)
+{
+
+	scst_set_exec_time(cmd);
+
+	cmd->status = result & 0xff;
+	cmd->msg_status = msg_byte(result);
+	cmd->host_status = host_byte(result);
+	cmd->driver_status = driver_byte(result);
+	if (unlikely(resid != 0)) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+		if ((resid < 0) || (resid > cmd->resp_data_len)) {
+			PRINT_ERROR("Wrong resid %d (cmd->resp_data_len=%d, "
+				"op %x)", resid, cmd->resp_data_len,
+				cmd->cdb[0]);
+		} else
+#endif
+			scst_set_resp_data_len(cmd, cmd->resp_data_len - resid);
+	}
+
+	if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION)) {
+		/* We might have double reset UA here */
+		cmd->dbl_ua_orig_resp_data_len = cmd->resp_data_len;
+		cmd->dbl_ua_orig_data_direction = cmd->data_direction;
+
+		scst_alloc_set_sense(cmd, 1, rq_sense, rq_sense_len);
+	}
+
+	TRACE(TRACE_SCSI, "cmd %p, result=%x, cmd->status=%x, resid=%d, "
+	      "cmd->msg_status=%x, cmd->host_status=%x, "
+	      "cmd->driver_status=%x (cmd %p)", cmd, result, cmd->status, resid,
+	      cmd->msg_status, cmd->host_status, cmd->driver_status, cmd);
+
+	cmd->completed = 1;
+	return;
+}
+
+/* For small context optimization */
+static inline enum scst_exec_context scst_optimize_post_exec_context(
+	struct scst_cmd *cmd, enum scst_exec_context context)
+{
+	if (((context == SCST_CONTEXT_SAME) && scst_cmd_atomic(cmd)) ||
+	    (context == SCST_CONTEXT_TASKLET) ||
+	    (context == SCST_CONTEXT_DIRECT_ATOMIC)) {
+		if (!test_bit(SCST_TGT_DEV_AFTER_EXEC_ATOMIC,
+				&cmd->tgt_dev->tgt_dev_flags))
+			context = SCST_CONTEXT_THREAD;
+	}
+	return context;
+}
+
+static void scst_cmd_done(void *data, char *sense, int result, int resid)
+{
+	struct scst_cmd *cmd;
+
+	cmd = (struct scst_cmd *)data;
+	if (cmd == NULL)
+		goto out;
+
+	scst_do_cmd_done(cmd, result, sense, SCSI_SENSE_BUFFERSIZE, resid);
+
+	cmd->state = SCST_CMD_STATE_PRE_DEV_DONE;
+
+	scst_process_redirect_cmd(cmd,
+	    scst_optimize_post_exec_context(cmd, scst_estimate_context()), 0);
+
+out:
+	return;
+}
+
+static void scst_cmd_done_local(struct scst_cmd *cmd, int next_state,
+	enum scst_exec_context pref_context)
+{
+
+	scst_set_exec_time(cmd);
+
+	if (next_state == SCST_CMD_STATE_DEFAULT)
+		next_state = SCST_CMD_STATE_PRE_DEV_DONE;
+
+#if defined(CONFIG_SCST_DEBUG)
+	if (next_state == SCST_CMD_STATE_PRE_DEV_DONE) {
+		if ((trace_flag & TRACE_RCV_TOP) && (cmd->sg != NULL)) {
+			int i;
+			struct scatterlist *sg = cmd->sg;
+			TRACE_RECV_TOP("Exec'd %d S/G(s) at %p sg[0].page at "
+				"%p", cmd->sg_cnt, sg, (void *)sg_page(&sg[0]));
+			for (i = 0; i < cmd->sg_cnt; ++i) {
+				TRACE_BUFF_FLAG(TRACE_RCV_TOP,
+					"Exec'd sg", sg_virt(&sg[i]),
+					sg[i].length);
+			}
+		}
+	}
+#endif
+
+	cmd->state = next_state;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if ((next_state != SCST_CMD_STATE_PRE_DEV_DONE) &&
+	    (next_state != SCST_CMD_STATE_PRE_XMIT_RESP) &&
+	    (next_state != SCST_CMD_STATE_FINISHED) &&
+	    (next_state != SCST_CMD_STATE_FINISHED_INTERNAL)) {
+		PRINT_ERROR("%s() received invalid cmd state %d (opcode %d)",
+			__func__, next_state, cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+				   SCST_LOAD_SENSE(scst_sense_hardw_error));
+		scst_set_cmd_abnormal_done_state(cmd);
+	}
+#endif
+	pref_context = scst_optimize_post_exec_context(cmd, pref_context);
+	scst_process_redirect_cmd(cmd, pref_context, 0);
+	return;
+}
+
+static int scst_report_luns_local(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_COMPLETED, rc;
+	int dev_cnt = 0;
+	int buffer_size;
+	int i;
+	struct scst_tgt_dev *tgt_dev = NULL;
+	uint8_t *buffer;
+	int offs, overflow = 0;
+
+	if (scst_cmd_atomic(cmd)) {
+		res = SCST_EXEC_NEED_THREAD;
+		goto out;
+	}
+
+	rc = scst_check_local_events(cmd);
+	if (unlikely(rc != 0))
+		goto out_done;
+
+	cmd->status = 0;
+	cmd->msg_status = 0;
+	cmd->host_status = DID_OK;
+	cmd->driver_status = 0;
+
+	if ((cmd->cdb[2] != 0) && (cmd->cdb[2] != 2)) {
+		PRINT_ERROR("Unsupported SELECT REPORT value %x in REPORT "
+			"LUNS command", cmd->cdb[2]);
+		goto out_err;
+	}
+
+	buffer_size = scst_get_buf_first(cmd, &buffer);
+	if (unlikely(buffer_size == 0))
+		goto out_compl;
+	else if (unlikely(buffer_size < 0))
+		goto out_hw_err;
+
+	if (buffer_size < 16)
+		goto out_put_err;
+
+	memset(buffer, 0, buffer_size);
+	offs = 8;
+
+	/*
+	 * The cmd doesn't allow suspending activities, so we can access
+	 * sess->sess_tgt_dev_list_hash without any additional protection.
+	 */
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&cmd->sess->sess_tgt_dev_list_hash[i];
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			if (!overflow) {
+				if (offs >= buffer_size) {
+					scst_put_buf(cmd, buffer);
+					buffer_size = scst_get_buf_next(cmd,
+								       &buffer);
+					if (buffer_size > 0) {
+						memset(buffer, 0, buffer_size);
+						offs = 0;
+					} else {
+						overflow = 1;
+						goto inc_dev_cnt;
+					}
+				}
+				if ((buffer_size - offs) < 8) {
+					PRINT_ERROR("Buffer allocated for "
+						"REPORT LUNS command doesn't "
+						"allow to fit 8 byte entry "
+						"(buffer_size=%d)",
+						buffer_size);
+					goto out_put_hw_err;
+				}
+				if ((cmd->sess->acg->addr_method == SCST_LUN_ADDR_METHOD_FLAT) &&
+				    (tgt_dev->lun != 0)) {
+					buffer[offs] = (tgt_dev->lun >> 8) & 0x3f;
+					buffer[offs] = buffer[offs] | 0x40;
+					buffer[offs+1] = tgt_dev->lun & 0xff;
+				} else {
+					buffer[offs] = (tgt_dev->lun >> 8) & 0xff;
+					buffer[offs+1] = tgt_dev->lun & 0xff;
+				}
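+				/*
+				 * E.g., with flat addressing LUN 5 is encoded
+				 * as 40 05 00 00 00 00 00 00, with the
+				 * peripheral method as 00 05 00 00 00 00 00 00.
+				 */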
+				offs += 8;
+			}
+inc_dev_cnt:
+			dev_cnt++;
+		}
+	}
+	if (!overflow)
+		scst_put_buf(cmd, buffer);
+
+	/* Set the response header */
+	buffer_size = scst_get_buf_first(cmd, &buffer);
+	if (unlikely(buffer_size == 0))
+		goto out_compl;
+	else if (unlikely(buffer_size < 0))
+		goto out_hw_err;
+
+	dev_cnt *= 8;
+	buffer[0] = (dev_cnt >> 24) & 0xff;
+	buffer[1] = (dev_cnt >> 16) & 0xff;
+	buffer[2] = (dev_cnt >> 8) & 0xff;
+	buffer[3] = dev_cnt & 0xff;
+
+	scst_put_buf(cmd, buffer);
+
+	dev_cnt += 8;
+	if (dev_cnt < cmd->resp_data_len)
+		scst_set_resp_data_len(cmd, dev_cnt);
+
+out_compl:
+	cmd->completed = 1;
+
+	/* Clear any leftover sense_reported_luns_data_changed UA */
+
+	/*
+	 * The cmd doesn't allow suspending activities, so we can access
+	 * sess->sess_tgt_dev_list_hash without any additional protection.
+	 */
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&cmd->sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			struct scst_tgt_dev_UA *ua;
+
+			spin_lock_bh(&tgt_dev->tgt_dev_lock);
+			list_for_each_entry(ua, &tgt_dev->UA_list,
+						UA_list_entry) {
+				if (scst_analyze_sense(ua->UA_sense_buffer,
+						ua->UA_valid_sense_len,
+						SCST_SENSE_ALL_VALID,
+						SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed))) {
+					TRACE_MGMT_DBG("Freeing not needed "
+						"REPORTED LUNS DATA CHANGED UA "
+						"%p", ua);
+					list_del(&ua->UA_list_entry);
+					mempool_free(ua, scst_ua_mempool);
+					break;
+				}
+			}
+			spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+		}
+	}
+
+out_done:
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+
+out:
+	return res;
+
+out_put_err:
+	scst_put_buf(cmd, buffer);
+
+out_err:
+	scst_set_cmd_error(cmd,
+		   SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+	goto out_compl;
+
+out_put_hw_err:
+	scst_put_buf(cmd, buffer);
+
+out_hw_err:
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+	goto out_compl;
+}
+
+static int scst_request_sense_local(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_COMPLETED, rc;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	uint8_t *buffer;
+	int buffer_size = 0, sl = 0;
+
+	rc = scst_check_local_events(cmd);
+	if (unlikely(rc != 0))
+		goto out_done;
+
+	cmd->status = 0;
+	cmd->msg_status = 0;
+	cmd->host_status = DID_OK;
+	cmd->driver_status = 0;
+
+	spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+	if (tgt_dev->tgt_dev_valid_sense_len == 0)
+		goto out_not_completed;
+
+	TRACE(TRACE_SCSI, "%s: Returning stored sense", cmd->op_name);
+
+	buffer_size = scst_get_buf_first(cmd, &buffer);
+	if (unlikely(buffer_size == 0))
+		goto out_compl;
+	else if (unlikely(buffer_size < 0))
+		goto out_hw_err;
+
+	memset(buffer, 0, buffer_size);
+
+	if (((tgt_dev->tgt_dev_sense[0] == 0x70) ||
+	     (tgt_dev->tgt_dev_sense[0] == 0x71)) && (cmd->cdb[1] & 1)) {
+		PRINT_WARNING("%s: Fixed format of the saved sense, but "
+			"descriptor format requested. Convertion will "
+			"truncated data", cmd->op_name);
+		PRINT_BUFFER("Original sense", tgt_dev->tgt_dev_sense,
+			tgt_dev->tgt_dev_valid_sense_len);
+
+		buffer_size = min(SCST_STANDARD_SENSE_LEN, buffer_size);
+		sl = scst_set_sense(buffer, buffer_size, true,
+			tgt_dev->tgt_dev_sense[2], tgt_dev->tgt_dev_sense[12],
+			tgt_dev->tgt_dev_sense[13]);
+	} else if (((tgt_dev->tgt_dev_sense[0] == 0x72) ||
+		    (tgt_dev->tgt_dev_sense[0] == 0x73)) && !(cmd->cdb[1] & 1)) {
+		PRINT_WARNING("%s: Descriptor format of the "
+			"saved sense, but fixed format requested. Convertion "
+			"will truncated data", cmd->op_name);
+		PRINT_BUFFER("Original sense", tgt_dev->tgt_dev_sense,
+			tgt_dev->tgt_dev_valid_sense_len);
+
+		buffer_size = min(SCST_STANDARD_SENSE_LEN, buffer_size);
+		sl = scst_set_sense(buffer, buffer_size, false,
+			tgt_dev->tgt_dev_sense[1], tgt_dev->tgt_dev_sense[2],
+			tgt_dev->tgt_dev_sense[3]);
+	} else {
+		if (buffer_size >= tgt_dev->tgt_dev_valid_sense_len)
+			sl = tgt_dev->tgt_dev_valid_sense_len;
+		else {
+			sl = buffer_size;
+			PRINT_WARNING("%s: Being returned sense truncated to "
+				"size %d (needed %d)", cmd->op_name,
+				buffer_size, tgt_dev->tgt_dev_valid_sense_len);
+		}
+		memcpy(buffer, tgt_dev->tgt_dev_sense, sl);
+	}
+
+	scst_put_buf(cmd, buffer);
+
+	tgt_dev->tgt_dev_valid_sense_len = 0;
+
+	spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+	scst_set_resp_data_len(cmd, sl);
+
+out_compl:
+	cmd->completed = 1;
+
+out_done:
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+
+out:
+	return res;
+
+out_hw_err:
+	spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+	goto out_compl;
+
+out_not_completed:
+	spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+	res = SCST_EXEC_NOT_COMPLETED;
+	goto out;
+}
+
+static int scst_pre_select(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_NOT_COMPLETED;
+
+	if (scst_cmd_atomic(cmd)) {
+		res = SCST_EXEC_NEED_THREAD;
+		goto out;
+	}
+
+	scst_block_dev_cmd(cmd, 1);
+
+	/* Check for local events will be done when the cmd is executed */
+
+out:
+	return res;
+}
+
+static int scst_reserve_local(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_NOT_COMPLETED, rc;
+	struct scst_device *dev;
+	struct scst_tgt_dev *tgt_dev_tmp;
+
+	if (scst_cmd_atomic(cmd)) {
+		res = SCST_EXEC_NEED_THREAD;
+		goto out;
+	}
+
+	if ((cmd->cdb[0] == RESERVE_10) && (cmd->cdb[2] & SCST_RES_3RDPTY)) {
+		PRINT_ERROR("RESERVE_10: 3rdPty RESERVE not implemented "
+		     "(lun=%lld)", (long long unsigned int)cmd->lun);
+		scst_set_cmd_error(cmd,
+			SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+		goto out_done;
+	}
+
+	dev = cmd->dev;
+
+	if (dev->tst == SCST_CONTR_MODE_ONE_TASK_SET)
+		scst_block_dev_cmd(cmd, 1);
+
+	rc = scst_check_local_events(cmd);
+	if (unlikely(rc != 0))
+		goto out_done;
+
+	spin_lock_bh(&dev->dev_lock);
+
+	if (test_bit(SCST_TGT_DEV_RESERVED, &cmd->tgt_dev->tgt_dev_flags)) {
+		spin_unlock_bh(&dev->dev_lock);
+		scst_set_cmd_error_status(cmd, SAM_STAT_RESERVATION_CONFLICT);
+		goto out_done;
+	}
+
+	list_for_each_entry(tgt_dev_tmp, &dev->dev_tgt_dev_list,
+			    dev_tgt_dev_list_entry) {
+		if (cmd->tgt_dev != tgt_dev_tmp)
+			set_bit(SCST_TGT_DEV_RESERVED,
+				&tgt_dev_tmp->tgt_dev_flags);
+	}
+	dev->dev_reserved = 1;
+
+	spin_unlock_bh(&dev->dev_lock);
+
+out:
+	return res;
+
+out_done:
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+	res = SCST_EXEC_COMPLETED;
+	goto out;
+}
+
+static int scst_release_local(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_NOT_COMPLETED, rc;
+	struct scst_tgt_dev *tgt_dev_tmp;
+	struct scst_device *dev;
+
+	if (scst_cmd_atomic(cmd)) {
+		res = SCST_EXEC_NEED_THREAD;
+		goto out;
+	}
+
+	dev = cmd->dev;
+
+	if (dev->tst == SCST_CONTR_MODE_ONE_TASK_SET)
+		scst_block_dev_cmd(cmd, 1);
+
+	rc = scst_check_local_events(cmd);
+	if (unlikely(rc != 0))
+		goto out_done;
+
+	spin_lock_bh(&dev->dev_lock);
+
+	/*
+	 * The device could be RELEASED behind us, if the RESERVING session
+	 * is closed (see scst_free_tgt_dev()), but this actually doesn't
+	 * matter, so take the lock and don't retest the DEV_RESERVED bits.
+	 */
+	if (test_bit(SCST_TGT_DEV_RESERVED, &cmd->tgt_dev->tgt_dev_flags)) {
+		res = SCST_EXEC_COMPLETED;
+		cmd->status = 0;
+		cmd->msg_status = 0;
+		cmd->host_status = DID_OK;
+		cmd->driver_status = 0;
+		cmd->completed = 1;
+	} else {
+		list_for_each_entry(tgt_dev_tmp,
+				    &dev->dev_tgt_dev_list,
+				    dev_tgt_dev_list_entry) {
+			clear_bit(SCST_TGT_DEV_RESERVED,
+				&tgt_dev_tmp->tgt_dev_flags);
+		}
+		dev->dev_reserved = 0;
+	}
+
+	spin_unlock_bh(&dev->dev_lock);
+
+	if (res == SCST_EXEC_COMPLETED)
+		goto out_done;
+
+out:
+	return res;
+
+out_done:
+	res = SCST_EXEC_COMPLETED;
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+	goto out;
+}
+
+/**
+ * scst_check_local_events() - check if there are any local SCSI events
+ *
+ * Description:
+ *    Checks if the command can be executed or there are local events,
+ *    like reservations, pending UAs, etc. Returns < 0 if the command must be
+ *    aborted, > 0 if there is an event and the command should be immediately
+ *    completed, or 0 otherwise.
+ *
+ * !! Dev handlers implementing the exec() callback must call this function
+ * !! just before the actual command's execution!
+ *
+ *    On call no locks, no IRQ or IRQ-disabled context allowed.
+ */
+int scst_check_local_events(struct scst_cmd *cmd)
+{
+	int res, rc;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	struct scst_device *dev = cmd->dev;
+
+	/*
+	 * There's no race here, because we need to trace commands sent
+	 * *after* dev_double_ua_possible flag was set.
+	 */
+	if (unlikely(dev->dev_double_ua_possible))
+		cmd->double_ua_possible = 1;
+
+	if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+		TRACE_MGMT_DBG("ABORTED set, aborting cmd %p", cmd);
+		goto out_uncomplete;
+	}
+
+	/* Reserve check before Unit Attention */
+	if (unlikely(test_bit(SCST_TGT_DEV_RESERVED,
+			      &tgt_dev->tgt_dev_flags))) {
+		if ((cmd->op_flags & SCST_REG_RESERVE_ALLOWED) == 0) {
+			scst_set_cmd_error_status(cmd,
+				SAM_STAT_RESERVATION_CONFLICT);
+			goto out_complete;
+		}
+	}
+
+	/* If we had internal bus reset, set the command error unit attention */
+	if ((dev->scsi_dev != NULL) &&
+	    unlikely(dev->scsi_dev->was_reset)) {
+		if (scst_is_ua_command(cmd)) {
+			int done = 0;
+			/*
+			 * Prevent more than one cmd from being triggered by
+			 * was_reset.
+			 */
+			spin_lock_bh(&dev->dev_lock);
+			if (dev->scsi_dev->was_reset) {
+				TRACE(TRACE_MGMT, "was_reset is %d", 1);
+				scst_set_cmd_error(cmd,
+					  SCST_LOAD_SENSE(scst_sense_reset_UA));
+				/*
+				 * It looks like it is safe to clear was_reset
+				 * here.
+				 */
+				dev->scsi_dev->was_reset = 0;
+				done = 1;
+			}
+			spin_unlock_bh(&dev->dev_lock);
+
+			if (done)
+				goto out_complete;
+		}
+	}
+
+	if (unlikely(test_bit(SCST_TGT_DEV_UA_PENDING,
+			&cmd->tgt_dev->tgt_dev_flags))) {
+		if (scst_is_ua_command(cmd)) {
+			rc = scst_set_pending_UA(cmd);
+			if (rc == 0)
+				goto out_complete;
+		}
+	}
+
+	res = 0;
+
+out:
+	return res;
+
+out_complete:
+	res = 1;
+	BUG_ON(!cmd->completed);
+	goto out;
+
+out_uncomplete:
+	res = -1;
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_check_local_events);
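+
+/*
+ * Usage sketch (illustration only, not part of this patch): a dev handler
+ * exec() calling scst_check_local_events() right before the actual
+ * execution, mirroring scst_do_real_exec() below. The handler name is
+ * hypothetical.
+ *
+ *	static int my_exec(struct scst_cmd *cmd)
+ *	{
+ *		if (unlikely(scst_check_local_events(cmd) != 0)) {
+ *			cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT,
+ *				SCST_CONTEXT_SAME);
+ *			return SCST_EXEC_COMPLETED;
+ *		}
+ *		... actually execute the command ...
+ *		return SCST_EXEC_COMPLETED;
+ *	}
+ */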
+
+/* No locks */
+void scst_inc_expected_sn(struct scst_tgt_dev *tgt_dev, atomic_t *slot)
+{
+	if (slot == NULL)
+		goto inc;
+
+	/* Optimized for lockless fast path */
+
+	TRACE_SN("Slot %zd, *cur_sn_slot %d", slot - tgt_dev->sn_slots,
+		atomic_read(slot));
+
+	if (!atomic_dec_and_test(slot))
+		goto out;
+
+	TRACE_SN("Slot is 0 (num_free_sn_slots=%d)",
+		tgt_dev->num_free_sn_slots);
+	if (tgt_dev->num_free_sn_slots < (int)ARRAY_SIZE(tgt_dev->sn_slots)-1) {
+		spin_lock_irq(&tgt_dev->sn_lock);
+		if (likely(tgt_dev->num_free_sn_slots < (int)ARRAY_SIZE(tgt_dev->sn_slots)-1)) {
+			if (tgt_dev->num_free_sn_slots < 0)
+				tgt_dev->cur_sn_slot = slot;
+			/*
+			 * To be in-sync with SIMPLE case in scst_cmd_set_sn()
+			 */
+			smp_mb();
+			tgt_dev->num_free_sn_slots++;
+			TRACE_SN("Incremented num_free_sn_slots (%d)",
+				tgt_dev->num_free_sn_slots);
+
+		}
+		spin_unlock_irq(&tgt_dev->sn_lock);
+	}
+
+inc:
+	/*
+	 * No protection of expected_sn is needed, because only one thread
+	 * at a time can be here (serialized by sn). It is also assumed that
+	 * the increment cannot be observed half-done.
+	 */
+	tgt_dev->expected_sn++;
+	/*
+	 * The write must happen before the def_cmd_count read to be in sync
+	 * with scst_post_exec_sn(). See comment in scst_send_for_exec().
+	 */
+	smp_mb();
+	TRACE_SN("Next expected_sn: %d", tgt_dev->expected_sn);
+
+out:
+	return;
+}
+
+/* No locks */
+static struct scst_cmd *scst_post_exec_sn(struct scst_cmd *cmd,
+	bool make_active)
+{
+	/* For HQ commands SN is not set */
+	bool inc_expected_sn = !cmd->inc_expected_sn_on_done &&
+			       cmd->sn_set && !cmd->retry;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	struct scst_cmd *res;
+
+	if (inc_expected_sn)
+		scst_inc_expected_sn(tgt_dev, cmd->sn_slot);
+
+	if (make_active) {
+		scst_make_deferred_commands_active(tgt_dev);
+		res = NULL;
+	} else
+		res = scst_check_deferred_commands(tgt_dev);
+	return res;
+}
+
+/* cmd must be additionally referenced to not die inside */
+static int scst_do_real_exec(struct scst_cmd *cmd)
+{
+	int res = SCST_EXEC_NOT_COMPLETED;
+	int rc;
+	bool atomic = scst_cmd_atomic(cmd);
+	struct scst_device *dev = cmd->dev;
+	struct scst_dev_type *handler = dev->handler;
+	struct io_context *old_ctx = NULL;
+	bool ctx_changed = false;
+
+	if (!atomic)
+		ctx_changed = scst_set_io_context(cmd, &old_ctx);
+
+	cmd->state = SCST_CMD_STATE_REAL_EXECUTING;
+
+	if (handler->exec) {
+		if (unlikely(!dev->handler->exec_atomic && atomic)) {
+			/*
+			 * This shouldn't happen, because of the
+			 * SCST_TGT_DEV_AFTER_* optimization.
+			 */
+			TRACE_DBG("Dev handler %s exec() needs thread "
+				"context, rescheduling", dev->handler->name);
+			res = SCST_EXEC_NEED_THREAD;
+			goto out_restore;
+		}
+
+		TRACE_DBG("Calling dev handler %s exec(%p)",
+		      handler->name, cmd);
+		TRACE_BUFF_FLAG(TRACE_SND_TOP, "Execing: ", cmd->cdb,
+			cmd->cdb_len);
+		scst_set_cur_start(cmd);
+		res = handler->exec(cmd);
+		TRACE_DBG("Dev handler %s exec() returned %d",
+		      handler->name, res);
+
+		if (res == SCST_EXEC_COMPLETED)
+			goto out_complete;
+		else if (res == SCST_EXEC_NEED_THREAD)
+			goto out_restore;
+
+		scst_set_exec_time(cmd);
+
+		BUG_ON(res != SCST_EXEC_NOT_COMPLETED);
+	}
+
+	TRACE_DBG("Sending cmd %p to SCSI mid-level", cmd);
+
+	if (unlikely(dev->scsi_dev == NULL)) {
+		PRINT_ERROR("Command for virtual device must be "
+			"processed by device handler (LUN %lld)!",
+			(long long unsigned int)cmd->lun);
+		goto out_error;
+	}
+
+	res = scst_check_local_events(cmd);
+	if (unlikely(res != 0))
+		goto out_done;
+
+#ifndef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+	if (unlikely(atomic)) {
+		TRACE_DBG("Pass-through exec() cannot be called in atomic "
+			"context, rescheduling to the thread (handler %s)",
+			handler->name);
+		res = SCST_EXEC_NEED_THREAD;
+		goto out_restore;
+	}
+#endif
+
+	scst_set_cur_start(cmd);
+
+	rc = scst_scsi_exec_async(cmd, scst_cmd_done);
+	if (unlikely(rc != 0)) {
+		if (atomic) {
+			res = SCST_EXEC_NEED_THREAD;
+			goto out_restore;
+		} else {
+			PRINT_ERROR("scst pass-through exec failed: %x", rc);
+			goto out_error;
+		}
+	}
+
+out_complete:
+	res = SCST_EXEC_COMPLETED;
+
+out_reset_ctx:
+	if (ctx_changed)
+		scst_reset_io_context(cmd->tgt_dev, old_ctx);
+	return res;
+
+out_restore:
+	scst_set_exec_time(cmd);
+	/* Restore the state */
+	cmd->state = SCST_CMD_STATE_REAL_EXEC;
+	goto out_reset_ctx;
+
+out_error:
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+	goto out_done;
+
+out_done:
+	res = SCST_EXEC_COMPLETED;
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+	goto out_complete;
+}
+
+static inline int scst_real_exec(struct scst_cmd *cmd)
+{
+	int res;
+
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_SAME != SCST_EXEC_NOT_COMPLETED);
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_NEXT != SCST_EXEC_COMPLETED);
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_NEED_THREAD != SCST_EXEC_NEED_THREAD);
+
+	__scst_cmd_get(cmd);
+
+	res = scst_do_real_exec(cmd);
+
+	if (likely(res == SCST_EXEC_COMPLETED)) {
+		scst_post_exec_sn(cmd, true);
+		if (cmd->dev->scsi_dev != NULL)
+			generic_unplug_device(
+				cmd->dev->scsi_dev->request_queue);
+	} else
+		BUG_ON(res != SCST_EXEC_NEED_THREAD);
+
+	__scst_cmd_put(cmd);
+
+	/* SCST_EXEC_* match SCST_CMD_STATE_RES_* */
+	return res;
+}
+
+static int scst_do_local_exec(struct scst_cmd *cmd)
+{
+	int res;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+	/* Check READ_ONLY device status */
+	if ((cmd->op_flags & SCST_WRITE_MEDIUM) &&
+	    (tgt_dev->acg_dev->rd_only || cmd->dev->swp ||
+	     cmd->dev->rd_only)) {
+		PRINT_WARNING("Attempted write access to a read-only device: "
+			"initiator %s, LUN %lld, op %x",
+			cmd->sess->initiator_name, cmd->lun, cmd->cdb[0]);
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_data_protect));
+		goto out_done;
+	}
+
+	if (!scst_is_cmd_local(cmd)) {
+		res = SCST_EXEC_NOT_COMPLETED;
+		goto out;
+	}
+
+	switch (cmd->cdb[0]) {
+	case MODE_SELECT:
+	case MODE_SELECT_10:
+	case LOG_SELECT:
+		res = scst_pre_select(cmd);
+		break;
+	case RESERVE:
+	case RESERVE_10:
+		res = scst_reserve_local(cmd);
+		break;
+	case RELEASE:
+	case RELEASE_10:
+		res = scst_release_local(cmd);
+		break;
+	case REPORT_LUNS:
+		res = scst_report_luns_local(cmd);
+		break;
+	case REQUEST_SENSE:
+		res = scst_request_sense_local(cmd);
+		break;
+	default:
+		res = SCST_EXEC_NOT_COMPLETED;
+		break;
+	}
+
+out:
+	return res;
+
+out_done:
+	/* Report the result */
+	cmd->scst_cmd_done(cmd, SCST_CMD_STATE_DEFAULT, SCST_CONTEXT_SAME);
+	res = SCST_EXEC_COMPLETED;
+	goto out;
+}
+
+static int scst_local_exec(struct scst_cmd *cmd)
+{
+	int res;
+
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_SAME != SCST_EXEC_NOT_COMPLETED);
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_CONT_NEXT != SCST_EXEC_COMPLETED);
+	BUILD_BUG_ON(SCST_CMD_STATE_RES_NEED_THREAD != SCST_EXEC_NEED_THREAD);
+
+	__scst_cmd_get(cmd);
+
+	res = scst_do_local_exec(cmd);
+	if (likely(res == SCST_EXEC_NOT_COMPLETED))
+		cmd->state = SCST_CMD_STATE_REAL_EXEC;
+	else if (res == SCST_EXEC_COMPLETED)
+		scst_post_exec_sn(cmd, true);
+	else
+		BUG_ON(res != SCST_EXEC_NEED_THREAD);
+
+	__scst_cmd_put(cmd);
+
+	/* SCST_EXEC_* match SCST_CMD_STATE_RES_* */
+	return res;
+}
+
+static int scst_exec(struct scst_cmd **active_cmd)
+{
+	struct scst_cmd *cmd = *active_cmd;
+	struct scst_cmd *ref_cmd;
+	struct scst_device *dev = cmd->dev;
+	int res = SCST_CMD_STATE_RES_CONT_NEXT, count;
+
+	if (unlikely(scst_inc_on_dev_cmd(cmd) != 0))
+		goto out;
+
+	/* To protect tgt_dev */
+	ref_cmd = cmd;
+	__scst_cmd_get(ref_cmd);
+
+	count = 0;
+	while (1) {
+		int rc;
+
+		cmd->sent_for_exec = 1;
+		/*
+		 * To sync with scst_abort_cmd(). The above assignment must
+		 * be before SCST_CMD_ABORTED test, done later in
+		 * scst_check_local_events(). It's far from here, so the order
+		 * is virtually guaranteed, but let's have it just in case.
+		 */
+		smp_mb();
+
+		cmd->scst_cmd_done = scst_cmd_done_local;
+		cmd->state = SCST_CMD_STATE_LOCAL_EXEC;
+
+		rc = scst_do_local_exec(cmd);
+		if (likely(rc == SCST_EXEC_NOT_COMPLETED))
+			/* Nothing to do */;
+		else if (rc == SCST_EXEC_NEED_THREAD) {
+			TRACE_DBG("%s", "scst_do_local_exec() requested "
+				"thread context, rescheduling");
+			scst_dec_on_dev_cmd(cmd);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			break;
+		} else {
+			BUG_ON(rc != SCST_EXEC_COMPLETED);
+			goto done;
+		}
+
+		cmd->state = SCST_CMD_STATE_REAL_EXEC;
+
+		rc = scst_do_real_exec(cmd);
+		if (likely(rc == SCST_EXEC_COMPLETED))
+			/* Nothing to do */;
+		else if (rc == SCST_EXEC_NEED_THREAD) {
+			TRACE_DBG("scst_real_exec() requested thread "
+				"context, rescheduling (cmd %p)", cmd);
+			scst_dec_on_dev_cmd(cmd);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			break;
+		} else
+			BUG();
+
+done:
+		count++;
+
+		cmd = scst_post_exec_sn(cmd, false);
+		if (cmd == NULL)
+			break;
+
+		if (unlikely(scst_inc_on_dev_cmd(cmd) != 0))
+			break;
+
+		__scst_cmd_put(ref_cmd);
+		ref_cmd = cmd;
+		__scst_cmd_get(ref_cmd);
+	}
+
+	*active_cmd = cmd;
+
+	if (count == 0)
+		goto out_put;
+
+	if (dev->scsi_dev != NULL)
+		generic_unplug_device(dev->scsi_dev->request_queue);
+
+out_put:
+	__scst_cmd_put(ref_cmd);
+	/* !! At this point sess, dev and tgt_dev can be already freed !! */
+
+out:
+	return res;
+}
+
+static int scst_send_for_exec(struct scst_cmd **active_cmd)
+{
+	int res;
+	struct scst_cmd *cmd = *active_cmd;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	typeof(tgt_dev->expected_sn) expected_sn;
+
+	if (unlikely(cmd->internal))
+		goto exec;
+
+	if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+		goto exec;
+
+	BUG_ON(!cmd->sn_set);
+
+	expected_sn = tgt_dev->expected_sn;
+	/* Optimized for lockless fast path */
+	if ((cmd->sn != expected_sn) || (tgt_dev->hq_cmd_count > 0)) {
+		spin_lock_irq(&tgt_dev->sn_lock);
+
+		tgt_dev->def_cmd_count++;
+		/*
+		 * A memory barrier is needed here to implement the lockless
+		 * fast path. We need the exact order of the read and write
+		 * between def_cmd_count and expected_sn. Otherwise, we could
+		 * miss the case when expected_sn was changed to be equal to
+		 * cmd->sn while we are queuing the cmd on the deferred list
+		 * below, which would leave the command stuck forever. With
+		 * the barrier, __scst_check_deferred_commands() will be
+		 * called in that case, and it will take sn_lock, so we will
+		 * be synchronized. See also the illustration below.
+		 */
+		smp_mb();
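+		/*
+		 * Illustration of the ordering (a sketch; CPU A runs this
+		 * path, CPU B runs scst_inc_expected_sn() and then reads
+		 * def_cmd_count on the scst_post_exec_sn() path):
+		 *
+		 *	A: def_cmd_count++	B: expected_sn++
+		 *	A: smp_mb()		B: smp_mb()
+		 *	A: read expected_sn	B: read def_cmd_count
+		 *
+		 * Without the barriers both reads could see the old values:
+		 * A would defer the cmd while B skips the deferred list, so
+		 * nobody would ever wake the cmd up. With the barriers at
+		 * least one side observes the other's write.
+		 */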
+
+		expected_sn = tgt_dev->expected_sn;
+		if ((cmd->sn != expected_sn) || (tgt_dev->hq_cmd_count > 0)) {
+			if (unlikely(test_bit(SCST_CMD_ABORTED,
+					      &cmd->cmd_flags))) {
+				/* Necessary to allow aborting out of sn cmds */
+				TRACE_MGMT_DBG("Aborting out of sn cmd %p "
+					"(tag %llu, sn %u)", cmd,
+					(long long unsigned)cmd->tag, cmd->sn);
+				tgt_dev->def_cmd_count--;
+				scst_set_cmd_abnormal_done_state(cmd);
+				res = SCST_CMD_STATE_RES_CONT_SAME;
+			} else {
+				TRACE_SN("Deferring cmd %p (sn=%d, set %d, "
+					"expected_sn=%d)", cmd, cmd->sn,
+					cmd->sn_set, expected_sn);
+				list_add_tail(&cmd->sn_cmd_list_entry,
+					      &tgt_dev->deferred_cmd_list);
+				res = SCST_CMD_STATE_RES_CONT_NEXT;
+			}
+			spin_unlock_irq(&tgt_dev->sn_lock);
+			goto out;
+		} else {
+			TRACE_SN("Somebody incremented expected_sn %d, "
+				"continuing", expected_sn);
+			tgt_dev->def_cmd_count--;
+			spin_unlock_irq(&tgt_dev->sn_lock);
+		}
+	}
+
+exec:
+	res = scst_exec(active_cmd);
+
+out:
+	return res;
+}
+
+/* No locks supposed to be held */
+static int scst_check_sense(struct scst_cmd *cmd)
+{
+	int res = 0;
+	struct scst_device *dev = cmd->dev;
+
+	if (unlikely(cmd->ua_ignore))
+		goto out;
+
+	/* If we had internal bus reset behind us, set the command error UA */
+	if ((dev->scsi_dev != NULL) &&
+	    unlikely(cmd->host_status == DID_RESET) &&
+	    scst_is_ua_command(cmd)) {
+		TRACE(TRACE_MGMT, "DID_RESET: was_reset=%d host_status=%x",
+		      dev->scsi_dev->was_reset, cmd->host_status);
+		scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_reset_UA));
+		/* It looks like it is safe to clear was_reset here */
+		dev->scsi_dev->was_reset = 0;
+	}
+
+	if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION) &&
+	    SCST_SENSE_VALID(cmd->sense)) {
+		PRINT_BUFF_FLAG(TRACE_SCSI, "Sense", cmd->sense,
+			cmd->sense_valid_len);
+
+		/* Check Unit Attention Sense Key */
+		if (scst_is_ua_sense(cmd->sense, cmd->sense_valid_len)) {
+			if (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASC_VALID,
+					0, SCST_SENSE_ASC_UA_RESET, 0)) {
+				if (cmd->double_ua_possible) {
+					TRACE_MGMT_DBG("Double UA "
+						"detected for device %p", dev);
+					TRACE_MGMT_DBG("Retrying cmd"
+						" %p (tag %llu)", cmd,
+						(long long unsigned)cmd->tag);
+
+					cmd->status = 0;
+					cmd->msg_status = 0;
+					cmd->host_status = DID_OK;
+					cmd->driver_status = 0;
+					cmd->completed = 0;
+
+					mempool_free(cmd->sense,
+						     scst_sense_mempool);
+					cmd->sense = NULL;
+
+					scst_check_restore_sg_buff(cmd);
+
+					BUG_ON(cmd->dbl_ua_orig_resp_data_len < 0);
+					cmd->data_direction =
+						cmd->dbl_ua_orig_data_direction;
+					cmd->resp_data_len =
+						cmd->dbl_ua_orig_resp_data_len;
+
+					cmd->state = SCST_CMD_STATE_REAL_EXEC;
+					cmd->retry = 1;
+					res = 1;
+					goto out;
+				}
+			}
+			scst_dev_check_set_UA(dev, cmd,	cmd->sense,
+				cmd->sense_valid_len);
+		}
+	}
+
+	if (unlikely(cmd->double_ua_possible)) {
+		if (scst_is_ua_command(cmd)) {
+			TRACE_MGMT_DBG("Clearing dbl_ua_possible flag (dev %p, "
+				"cmd %p)", dev, cmd);
+			/*
+			 * Lock used to protect other flags in the bitfield
+			 * (just in case, actually). Those flags can't be
+			 * changed in parallel, because the device is
+			 * serialized.
+			 */
+			spin_lock_bh(&dev->dev_lock);
+			dev->dev_double_ua_possible = 0;
+			spin_unlock_bh(&dev->dev_lock);
+		}
+	}
+
+out:
+	return res;
+}
+
+static int scst_check_auto_sense(struct scst_cmd *cmd)
+{
+	int res = 0;
+
+	if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION) &&
+	    (!SCST_SENSE_VALID(cmd->sense) ||
+	     SCST_NO_SENSE(cmd->sense))) {
+		TRACE(TRACE_SCSI|TRACE_MINOR_AND_MGMT_DBG, "CHECK_CONDITION, "
+			"but no sense: cmd->status=%x, cmd->msg_status=%x, "
+		      "cmd->host_status=%x, cmd->driver_status=%x (cmd %p)",
+		      cmd->status, cmd->msg_status, cmd->host_status,
+		      cmd->driver_status, cmd);
+		res = 1;
+	} else if (unlikely(cmd->host_status)) {
+		if ((cmd->host_status == DID_REQUEUE) ||
+		    (cmd->host_status == DID_IMM_RETRY) ||
+		    (cmd->host_status == DID_SOFT_ERROR) ||
+		    (cmd->host_status == DID_ABORT)) {
+			scst_set_busy(cmd);
+		} else {
+			TRACE(TRACE_SCSI|TRACE_MINOR_AND_MGMT_DBG, "Host "
+				"status %x received, returning HARDWARE ERROR "
+				"instead (cmd %p)", cmd->host_status, cmd);
+			scst_set_cmd_error(cmd,
+				SCST_LOAD_SENSE(scst_sense_hardw_error));
+		}
+	}
+	return res;
+}
+
+static int scst_pre_dev_done(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME, rc;
+
+	if (unlikely(scst_check_auto_sense(cmd))) {
+		PRINT_INFO("Command finished with CHECK CONDITION, but "
+			    "without sense data (opcode 0x%x), issuing "
+			    "REQUEST SENSE", cmd->cdb[0]);
+		rc = scst_prepare_request_sense(cmd);
+		if (rc == 0)
+			res = SCST_CMD_STATE_RES_CONT_NEXT;
+		else {
+			PRINT_ERROR("%s", "Unable to issue REQUEST SENSE, "
+				    "returning HARDWARE ERROR");
+			scst_set_cmd_error(cmd,
+				SCST_LOAD_SENSE(scst_sense_hardw_error));
+		}
+		goto out;
+	} else if (unlikely(scst_check_sense(cmd)))
+		goto out;
+
+	if (likely(scsi_status_is_good(cmd->status))) {
+		unsigned char type = cmd->dev->type;
+		if (unlikely((cmd->cdb[0] == MODE_SENSE ||
+			      cmd->cdb[0] == MODE_SENSE_10)) &&
+		    (cmd->tgt_dev->acg_dev->rd_only || cmd->dev->swp ||
+		     cmd->dev->rd_only) &&
+		    (type == TYPE_DISK ||
+		     type == TYPE_WORM ||
+		     type == TYPE_MOD ||
+		     type == TYPE_TAPE)) {
+			int32_t length;
+			uint8_t *address;
+			bool err = false;
+
+			length = scst_get_buf_first(cmd, &address);
+			if (length < 0) {
+				PRINT_ERROR("%s", "Unable to get "
+					"MODE_SENSE buffer");
+				scst_set_cmd_error(cmd,
+					SCST_LOAD_SENSE(
+						scst_sense_hardw_error));
+				err = true;
+			} else if (length > 2 && cmd->cdb[0] == MODE_SENSE)
+				address[2] |= 0x80;   /* Write Protect */
+			else if (length > 3 && cmd->cdb[0] == MODE_SENSE_10)
+				address[3] |= 0x80;   /* Write Protect */
+			scst_put_buf(cmd, address);
+
+			if (err)
+				goto out;
+		}
+
+		/*
+		 * Check and clear NormACA option for the device, if necessary,
+		 * since we don't support ACA
+		 */
+		if (unlikely((cmd->cdb[0] == INQUIRY)) &&
+		    /* Std INQUIRY data (no EVPD) */
+		    !(cmd->cdb[1] & SCST_INQ_EVPD) &&
+		    (cmd->resp_data_len > SCST_INQ_BYTE3)) {
+			uint8_t *buffer;
+			int buflen;
+			bool err = false;
+
+			/* ToDo: all pages ?? */
+			buflen = scst_get_buf_first(cmd, &buffer);
+			if (buflen > SCST_INQ_BYTE3) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+				if (buffer[SCST_INQ_BYTE3] & SCST_INQ_NORMACA_BIT) {
+					PRINT_INFO("NormACA set for device: "
+					    "lun=%lld, type 0x%02x. Clear it, "
+					    "since it's unsupported.",
+					    (long long unsigned int)cmd->lun,
+					    buffer[0]);
+				}
+#endif
+				buffer[SCST_INQ_BYTE3] &= ~SCST_INQ_NORMACA_BIT;
+			} else if (buflen != 0) {
+				PRINT_ERROR("%s", "Unable to get INQUIRY "
+				    "buffer");
+				scst_set_cmd_error(cmd,
+				       SCST_LOAD_SENSE(scst_sense_hardw_error));
+				err = true;
+			}
+			if (buflen > 0)
+				scst_put_buf(cmd, buffer);
+
+			if (err)
+				goto out;
+		}
+
+		if (unlikely((cmd->cdb[0] == MODE_SELECT) ||
+		    (cmd->cdb[0] == MODE_SELECT_10) ||
+		    (cmd->cdb[0] == LOG_SELECT))) {
+			TRACE(TRACE_SCSI,
+				"MODE/LOG SELECT succeeded (LUN %lld)",
+				(long long unsigned int)cmd->lun);
+			cmd->state = SCST_CMD_STATE_MODE_SELECT_CHECKS;
+			goto out;
+		}
+	} else {
+		TRACE(TRACE_SCSI, "cmd %p not succeeded with status %x",
+			cmd, cmd->status);
+
+		if ((cmd->cdb[0] == RESERVE) || (cmd->cdb[0] == RESERVE_10)) {
+			if (!test_bit(SCST_TGT_DEV_RESERVED,
+					&cmd->tgt_dev->tgt_dev_flags)) {
+				struct scst_tgt_dev *tgt_dev_tmp;
+				struct scst_device *dev = cmd->dev;
+
+				TRACE(TRACE_SCSI, "RESERVE failed lun=%lld, "
+					"status=%x",
+					(long long unsigned int)cmd->lun,
+					cmd->status);
+				PRINT_BUFF_FLAG(TRACE_SCSI, "Sense", cmd->sense,
+					cmd->sense_valid_len);
+
+				/* Clearing the reservation */
+				spin_lock_bh(&dev->dev_lock);
+				list_for_each_entry(tgt_dev_tmp,
+						    &dev->dev_tgt_dev_list,
+						    dev_tgt_dev_list_entry) {
+					clear_bit(SCST_TGT_DEV_RESERVED,
+						&tgt_dev_tmp->tgt_dev_flags);
+				}
+				dev->dev_reserved = 0;
+				spin_unlock_bh(&dev->dev_lock);
+			}
+		}
+
+		/* Check for MODE PARAMETERS CHANGED UA */
+		if ((cmd->dev->scsi_dev != NULL) &&
+		    (cmd->status == SAM_STAT_CHECK_CONDITION) &&
+		    scst_is_ua_sense(cmd->sense, cmd->sense_valid_len) &&
+		    scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASCx_VALID,
+					0, 0x2a, 0x01)) {
+			TRACE(TRACE_SCSI, "MODE PARAMETERS CHANGED UA (lun "
+				"%lld)", (long long unsigned int)cmd->lun);
+			cmd->state = SCST_CMD_STATE_MODE_SELECT_CHECKS;
+			goto out;
+		}
+	}
+
+	cmd->state = SCST_CMD_STATE_DEV_DONE;
+
+out:
+	return res;
+}
+
+static int scst_mode_select_checks(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME;
+	int atomic = scst_cmd_atomic(cmd);
+
+	if (likely(scsi_status_is_good(cmd->status))) {
+		if (unlikely((cmd->cdb[0] == MODE_SELECT) ||
+		    (cmd->cdb[0] == MODE_SELECT_10) ||
+		    (cmd->cdb[0] == LOG_SELECT))) {
+			struct scst_device *dev = cmd->dev;
+			int sl;
+			uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+
+			if (atomic && (dev->scsi_dev != NULL)) {
+				TRACE_DBG("%s", "MODE/LOG SELECT: thread "
+					"context required");
+				res = SCST_CMD_STATE_RES_NEED_THREAD;
+				goto out;
+			}
+
+			TRACE(TRACE_SCSI, "MODE/LOG SELECT succeeded, "
+				"setting the SELECT UA (lun=%lld)",
+				(long long unsigned int)cmd->lun);
+
+			spin_lock_bh(&dev->dev_lock);
+			if (cmd->cdb[0] == LOG_SELECT) {
+				sl = scst_set_sense(sense_buffer,
+					sizeof(sense_buffer),
+					dev->d_sense,
+					UNIT_ATTENTION, 0x2a, 0x02);
+			} else {
+				sl = scst_set_sense(sense_buffer,
+					sizeof(sense_buffer),
+					dev->d_sense,
+					UNIT_ATTENTION, 0x2a, 0x01);
+			}
+			scst_dev_check_set_local_UA(dev, cmd, sense_buffer, sl);
+			spin_unlock_bh(&dev->dev_lock);
+
+			if (dev->scsi_dev != NULL)
+				scst_obtain_device_parameters(dev);
+		}
+	} else if ((cmd->status == SAM_STAT_CHECK_CONDITION) &&
+		    scst_is_ua_sense(cmd->sense, cmd->sense_valid_len) &&
+		     /* mode parameters changed */
+		    (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASCx_VALID,
+					0, 0x2a, 0x01) ||
+		     scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASC_VALID,
+					0, 0x29, 0) /* reset */ ||
+		     scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASC_VALID,
+					0, 0x28, 0) /* medium changed */ ||
+		     /* cleared by another ini (just in case) */
+		     scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+					SCST_SENSE_ASC_VALID,
+					0, 0x2F, 0))) {
+		if (atomic) {
+			TRACE_DBG("Possible parameters changed UA %x: "
+				"thread context required", cmd->sense[12]);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			goto out;
+		}
+
+		TRACE(TRACE_SCSI, "Possible parameters changed UA %x "
+			"(LUN %lld): getting new parameters", cmd->sense[12],
+			(long long unsigned int)cmd->lun);
+
+		scst_obtain_device_parameters(cmd->dev);
+	} else
+		BUG();
+
+	cmd->state = SCST_CMD_STATE_DEV_DONE;
+
+out:
+	return res;
+}
+
+static void scst_inc_check_expected_sn(struct scst_cmd *cmd)
+{
+	if (likely(cmd->sn_set))
+		scst_inc_expected_sn(cmd->tgt_dev, cmd->sn_slot);
+
+	scst_make_deferred_commands_active(cmd->tgt_dev);
+}
+
+static int scst_dev_done(struct scst_cmd *cmd)
+{
+	int res = SCST_CMD_STATE_RES_CONT_SAME;
+	int state;
+	struct scst_device *dev = cmd->dev;
+
+	state = SCST_CMD_STATE_PRE_XMIT_RESP;
+
+	if (likely(!scst_is_cmd_fully_local(cmd)) &&
+	    likely(dev->handler->dev_done != NULL)) {
+		int rc;
+
+		if (unlikely(!dev->handler->dev_done_atomic &&
+			     scst_cmd_atomic(cmd))) {
+			/*
+			 * This shouldn't happen, because of the
+			 * SCST_TGT_DEV_AFTER_* optimization.
+			 */
+			TRACE_DBG("Dev handler %s dev_done() needs thread "
+			      "context, rescheduling", dev->handler->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			goto out;
+		}
+
+		TRACE_DBG("Calling dev handler %s dev_done(%p)",
+			dev->handler->name, cmd);
+		scst_set_cur_start(cmd);
+		rc = dev->handler->dev_done(cmd);
+		scst_set_dev_done_time(cmd);
+		TRACE_DBG("Dev handler %s dev_done() returned %d",
+		      dev->handler->name, rc);
+		if (rc != SCST_CMD_STATE_DEFAULT)
+			state = rc;
+	}
+
+	switch (state) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+	case SCST_CMD_STATE_PRE_XMIT_RESP:
+	case SCST_CMD_STATE_DEV_PARSE:
+	case SCST_CMD_STATE_PRE_PARSE:
+	case SCST_CMD_STATE_PREPARE_SPACE:
+	case SCST_CMD_STATE_RDY_TO_XFER:
+	case SCST_CMD_STATE_TGT_PRE_EXEC:
+	case SCST_CMD_STATE_SEND_FOR_EXEC:
+	case SCST_CMD_STATE_LOCAL_EXEC:
+	case SCST_CMD_STATE_REAL_EXEC:
+	case SCST_CMD_STATE_PRE_DEV_DONE:
+	case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+	case SCST_CMD_STATE_DEV_DONE:
+	case SCST_CMD_STATE_XMIT_RESP:
+	case SCST_CMD_STATE_FINISHED:
+	case SCST_CMD_STATE_FINISHED_INTERNAL:
+#else
+	default:
+#endif
+		cmd->state = state;
+		break;
+	case SCST_CMD_STATE_NEED_THREAD_CTX:
+		TRACE_DBG("Dev handler %s dev_done() requested "
+		      "thread context, rescheduling",
+		      dev->handler->name);
+		res = SCST_CMD_STATE_RES_NEED_THREAD;
+		break;
+#ifdef CONFIG_SCST_EXTRACHECKS
+	default:
+		if (state >= 0) {
+			PRINT_ERROR("Dev handler %s dev_done() returned "
+				"invalid cmd state %d",
+				dev->handler->name, state);
+		} else {
+			PRINT_ERROR("Dev handler %s dev_done() returned "
+				"error %d", dev->handler->name,
+				state);
+		}
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_hardw_error));
+		scst_set_cmd_abnormal_done_state(cmd);
+		break;
+#endif
+	}
+
+	if (cmd->needs_unblocking)
+		scst_unblock_dev_cmd(cmd);
+
+	if (likely(cmd->dec_on_dev_needed))
+		scst_dec_on_dev_cmd(cmd);
+
+	if (cmd->inc_expected_sn_on_done && cmd->sent_for_exec)
+		scst_inc_check_expected_sn(cmd);
+
+	if (unlikely(cmd->internal))
+		cmd->state = SCST_CMD_STATE_FINISHED_INTERNAL;
+
+out:
+	return res;
+}
+
+static int scst_pre_xmit_response(struct scst_cmd *cmd)
+{
+	int res;
+
+	EXTRACHECKS_BUG_ON(cmd->internal);
+
+#ifdef CONFIG_SCST_DEBUG_TM
+	if (cmd->tm_dbg_delayed &&
+			!test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+		if (scst_cmd_atomic(cmd)) {
+			TRACE_MGMT_DBG("%s",
+				"DEBUG_TM delayed cmd needs a thread");
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			return res;
+		}
+		TRACE_MGMT_DBG("Delaying cmd %p (tag %llu) for 1 second",
+			cmd, cmd->tag);
+		schedule_timeout_uninterruptible(HZ);
+	}
+#endif
+
+	if (likely(cmd->tgt_dev != NULL)) {
+		/*
+		 * These counters exist to bound command processing latency,
+		 * so we should decrement them only after the cmd has
+		 * completed.
+		 */
+		atomic_dec(&cmd->tgt_dev->tgt_dev_cmd_count);
+#ifdef CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+		atomic_dec(&cmd->dev->dev_cmd_count);
+#endif
+#ifdef CONFIG_SCST_ORDERED_READS
+		/* If expected values not set, expected direction is UNKNOWN */
+		if (cmd->expected_data_direction & SCST_DATA_WRITE)
+			atomic_dec(&cmd->dev->write_cmd_count);
+#endif
+		if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+			scst_on_hq_cmd_response(cmd);
+
+		if (unlikely(!cmd->sent_for_exec)) {
+			TRACE_SN("cmd %p was not sent to mid-level"
+				" (sn %d, set %d)",
+				cmd, cmd->sn, cmd->sn_set);
+			scst_unblock_deferred(cmd->tgt_dev, cmd);
+			cmd->sent_for_exec = 1;
+		}
+	}
+
+	cmd->done = 1;
+	smp_mb(); /* to sync with scst_abort_cmd() */
+
+	if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)))
+		scst_xmit_process_aborted_cmd(cmd);
+	else if (unlikely(cmd->status == SAM_STAT_CHECK_CONDITION))
+		scst_store_sense(cmd);
+
+	if (unlikely(test_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags))) {
+		TRACE_MGMT_DBG("Flag NO_RESP set for cmd %p (tag %llu),"
+				" skipping",
+				cmd, (long long unsigned int)cmd->tag);
+		cmd->state = SCST_CMD_STATE_FINISHED;
+		res = SCST_CMD_STATE_RES_CONT_SAME;
+		goto out;
+	}
+
+	cmd->state = SCST_CMD_STATE_XMIT_RESP;
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+
+out:
+	return res;
+}
+
+static int scst_xmit_response(struct scst_cmd *cmd)
+{
+	struct scst_tgt_template *tgtt = cmd->tgtt;
+	int res, rc;
+
+	EXTRACHECKS_BUG_ON(cmd->internal);
+
+	if (unlikely(!tgtt->xmit_response_atomic &&
+		     scst_cmd_atomic(cmd))) {
+		/*
+		 * This shouldn't happen, because of the
+		 * SCST_TGT_DEV_AFTER_* optimization.
+		 */
+		TRACE_DBG("Target driver %s xmit_response() needs thread "
+			      "context, rescheduling", tgtt->name);
+		res = SCST_CMD_STATE_RES_NEED_THREAD;
+		goto out;
+	}
+
+	while (1) {
+		int finished_cmds = atomic_read(&cmd->tgt->finished_cmds);
+
+		res = SCST_CMD_STATE_RES_CONT_NEXT;
+		cmd->state = SCST_CMD_STATE_XMIT_WAIT;
+
+		TRACE_DBG("Calling xmit_response(%p)", cmd);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+		if (trace_flag & TRACE_SND_BOT) {
+			int i;
+			struct scatterlist *sg;
+			if (cmd->tgt_sg != NULL)
+				sg = cmd->tgt_sg;
+			else
+				sg = cmd->sg;
+			if (sg != NULL) {
+				TRACE(TRACE_SND_BOT, "Xmitting data for cmd %p "
+					"(sg_cnt %d, sg %p, sg[0].page %p)",
+					cmd, cmd->tgt_sg_cnt, sg,
+					(void *)sg_page(&sg[0]));
+				for (i = 0; i < cmd->tgt_sg_cnt; ++i) {
+					PRINT_BUFF_FLAG(TRACE_SND_BOT,
+						"Xmitting sg", sg_virt(&sg[i]),
+						sg[i].length);
+				}
+			}
+		}
+#endif
+
+		if (tgtt->on_hw_pending_cmd_timeout != NULL) {
+			struct scst_session *sess = cmd->sess;
+			cmd->hw_pending_start = jiffies;
+			cmd->cmd_hw_pending = 1;
+			if (!test_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags)) {
+				TRACE_DBG("Sched HW pending work for sess %p "
+					"(max time %d)", sess,
+					tgtt->max_hw_pending_time);
+				set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED,
+					&sess->sess_aflags);
+				schedule_delayed_work(&sess->hw_pending_work,
+					tgtt->max_hw_pending_time * HZ);
+			}
+		}
+
+		scst_set_cur_start(cmd);
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+		if (((scst_random() % 100) == 77))
+			rc = SCST_TGT_RES_QUEUE_FULL;
+		else
+#endif
+			rc = tgtt->xmit_response(cmd);
+		TRACE_DBG("xmit_response() returned %d", rc);
+
+		if (likely(rc == SCST_TGT_RES_SUCCESS))
+			goto out;
+
+		scst_set_xmit_time(cmd);
+
+		cmd->cmd_hw_pending = 0;
+
+		/* Restore the previous state */
+		cmd->state = SCST_CMD_STATE_XMIT_RESP;
+
+		switch (rc) {
+		case SCST_TGT_RES_QUEUE_FULL:
+			if (scst_queue_retry_cmd(cmd, finished_cmds) == 0)
+				break;
+			else
+				continue;
+
+		case SCST_TGT_RES_NEED_THREAD_CTX:
+			TRACE_DBG("Target driver %s xmit_response() "
+			      "requested thread context, rescheduling",
+			      tgtt->name);
+			res = SCST_CMD_STATE_RES_NEED_THREAD;
+			break;
+
+		default:
+			goto out_error;
+		}
+		break;
+	}
+
+out:
+	/* Caution: cmd can be already dead here */
+	return res;
+
+out_error:
+	if (rc == SCST_TGT_RES_FATAL_ERROR) {
+		PRINT_ERROR("Target driver %s xmit_response() returned "
+			"fatal error", tgtt->name);
+	} else {
+		PRINT_ERROR("Target driver %s xmit_response() returned "
+			"invalid value %d", tgtt->name, rc);
+	}
+	scst_set_cmd_error(cmd, SCST_LOAD_SENSE(scst_sense_hardw_error));
+	cmd->state = SCST_CMD_STATE_FINISHED;
+	res = SCST_CMD_STATE_RES_CONT_SAME;
+	goto out;
+}
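+
+/*
+ * Summary of the xmit_response() return contract as handled above (a
+ * sketch for reference, not normative):
+ *
+ *	SCST_TGT_RES_SUCCESS	     - the response is being sent; the driver
+ *				       will later call scst_tgt_cmd_done()
+ *	SCST_TGT_RES_QUEUE_FULL	     - the cmd is queued for retry once some
+ *				       already queued cmds have finished
+ *	SCST_TGT_RES_NEED_THREAD_CTX - the call is redone in thread context
+ *	SCST_TGT_RES_FATAL_ERROR     - the cmd is failed with HARDWARE ERROR
+ *				       sense
+ */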
+
+/**
+ * scst_tgt_cmd_done() - the command's processing done
+ * @cmd:	SCST command
+ * @pref_context: preferred command execution context
+ *
+ * Description:
+ *    Notifies SCST that the driver has sent the response and the command
+ *    can now be freed. Don't forget to set the delivery status, if it
+ *    isn't success, using scst_set_delivery_status() before calling
+ *    this function. The second argument sets the preferred command
+ *    execution context (see the SCST_CONTEXT_* constants for details,
+ *    and the usage sketch after this function).
+ */
+void scst_tgt_cmd_done(struct scst_cmd *cmd,
+	enum scst_exec_context pref_context)
+{
+
+	BUG_ON(cmd->state != SCST_CMD_STATE_XMIT_WAIT);
+
+	scst_set_xmit_time(cmd);
+
+	cmd->cmd_hw_pending = 0;
+
+	cmd->state = SCST_CMD_STATE_FINISHED;
+	scst_process_redirect_cmd(cmd, pref_context, 1);
+	return;
+}
+EXPORT_SYMBOL(scst_tgt_cmd_done);
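+
+/*
+ * Usage sketch (illustration only): a target driver completing a command
+ * from its "response sent" callback. Everything except the two SCST calls
+ * is hypothetical.
+ *
+ *	static void my_response_sent(struct my_hw_cmd *hw_cmd, int error)
+ *	{
+ *		struct scst_cmd *cmd = hw_cmd->scst_cmd;
+ *
+ *		if (error)
+ *			scst_set_delivery_status(cmd,
+ *				SCST_CMD_DELIVERY_FAILED);
+ *		scst_tgt_cmd_done(cmd, SCST_CONTEXT_SAME);
+ *	}
+ */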
+
+static int scst_finish_cmd(struct scst_cmd *cmd)
+{
+	int res;
+	struct scst_session *sess = cmd->sess;
+
+	scst_update_lat_stats(cmd);
+
+	if (unlikely(cmd->delivery_status != SCST_CMD_DELIVERY_SUCCESS)) {
+		if ((cmd->tgt_dev != NULL) &&
+		    scst_is_ua_sense(cmd->sense, cmd->sense_valid_len)) {
+			/* This UA delivery failed, so we need to requeue it */
+			if (scst_cmd_atomic(cmd) &&
+			    scst_is_ua_global(cmd->sense, cmd->sense_valid_len)) {
+				TRACE_MGMT_DBG("Requeuing of global UA for "
+					"failed cmd %p needs a thread", cmd);
+				res = SCST_CMD_STATE_RES_NEED_THREAD;
+				goto out;
+			}
+			scst_requeue_ua(cmd);
+		}
+	}
+
+	atomic_dec(&sess->sess_cmd_count);
+
+	spin_lock_irq(&sess->sess_list_lock);
+	list_del(&cmd->sess_cmd_list_entry);
+	spin_unlock_irq(&sess->sess_list_lock);
+
+	cmd->finished = 1;
+	smp_mb(); /* to sync with scst_abort_cmd() */
+
+	if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+		TRACE_MGMT_DBG("Aborted cmd %p finished (cmd_ref %d, "
+			"scst_cmd_count %d)", cmd, atomic_read(&cmd->cmd_ref),
+			atomic_read(&scst_cmd_count));
+
+		scst_finish_cmd_mgmt(cmd);
+	}
+
+	__scst_cmd_put(cmd);
+
+	res = SCST_CMD_STATE_RES_CONT_NEXT;
+
+out:
+	return res;
+}
+
+/*
+ * No locks, but it must be externally serialized (see comment for
+ * scst_cmd_init_done() in scst.h)
+ */
+static void scst_cmd_set_sn(struct scst_cmd *cmd)
+{
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	unsigned long flags;
+
+	if (scst_is_implicit_hq(cmd)) {
+		TRACE_SN("Implicit HQ cmd %p", cmd);
+		cmd->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+	}
+
+	EXTRACHECKS_BUG_ON(cmd->sn_set || cmd->hq_cmd_inced);
+
+	/* Optimized for lockless fast path */
+
+	scst_check_debug_sn(cmd);
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+	cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+#endif
+
+	if (cmd->dev->queue_alg == SCST_CONTR_MODE_QUEUE_ALG_RESTRICTED_REORDER) {
+		/*
+		 * Not the best way, but good enough until it becomes
+		 * possible to specify the queue type during pass-through
+		 * command submission.
+		 */
+		cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+	}
+
+	switch (cmd->queue_type) {
+	case SCST_CMD_QUEUE_SIMPLE:
+	case SCST_CMD_QUEUE_UNTAGGED:
+#ifdef CONFIG_SCST_ORDERED_READS
+		if (scst_cmd_is_expected_set(cmd)) {
+			if ((cmd->expected_data_direction == SCST_DATA_READ) &&
+			    (atomic_read(&cmd->dev->write_cmd_count) == 0))
+				goto ordered;
+		} else
+			goto ordered;
+#endif
+		if (likely(tgt_dev->num_free_sn_slots >= 0)) {
+			/*
+			 * atomic_inc_return() implies memory barrier to sync
+			 * with scst_inc_expected_sn()
+			 */
+			if (atomic_inc_return(tgt_dev->cur_sn_slot) == 1) {
+				tgt_dev->curr_sn++;
+				TRACE_SN("Incremented curr_sn %d",
+					tgt_dev->curr_sn);
+			}
+			cmd->sn_slot = tgt_dev->cur_sn_slot;
+			cmd->sn = tgt_dev->curr_sn;
+
+			tgt_dev->prev_cmd_ordered = 0;
+		} else {
+			TRACE(TRACE_MINOR, "***WARNING*** Not enough SN slots "
+				"%zd", ARRAY_SIZE(tgt_dev->sn_slots));
+			goto ordered;
+		}
+		break;
+
+	case SCST_CMD_QUEUE_ORDERED:
+		TRACE_SN("ORDERED cmd %p (op %x)", cmd, cmd->cdb[0]);
+ordered:
+		if (!tgt_dev->prev_cmd_ordered) {
+			spin_lock_irqsave(&tgt_dev->sn_lock, flags);
+			if (tgt_dev->num_free_sn_slots >= 0) {
+				tgt_dev->num_free_sn_slots--;
+				if (tgt_dev->num_free_sn_slots >= 0) {
+					int i = 0;
+					/* Commands can finish in any order, so
+					 * we don't know which slot is empty.
+					 */
+					while (1) {
+						tgt_dev->cur_sn_slot++;
+						if (tgt_dev->cur_sn_slot ==
+						      tgt_dev->sn_slots + ARRAY_SIZE(tgt_dev->sn_slots))
+							tgt_dev->cur_sn_slot = tgt_dev->sn_slots;
+
+						if (atomic_read(tgt_dev->cur_sn_slot) == 0)
+							break;
+
+						i++;
+						BUG_ON(i == ARRAY_SIZE(tgt_dev->sn_slots));
+					}
+					TRACE_SN("New cur SN slot %zd",
+						tgt_dev->cur_sn_slot -
+						tgt_dev->sn_slots);
+				}
+			}
+			spin_unlock_irqrestore(&tgt_dev->sn_lock, flags);
+		}
+		tgt_dev->prev_cmd_ordered = 1;
+		tgt_dev->curr_sn++;
+		cmd->sn = tgt_dev->curr_sn;
+		break;
+
+	case SCST_CMD_QUEUE_HEAD_OF_QUEUE:
+		TRACE_SN("HQ cmd %p (op %x)", cmd, cmd->cdb[0]);
+		spin_lock_irqsave(&tgt_dev->sn_lock, flags);
+		tgt_dev->hq_cmd_count++;
+		spin_unlock_irqrestore(&tgt_dev->sn_lock, flags);
+		cmd->hq_cmd_inced = 1;
+		goto out;
+
+	default:
+		BUG();
+	}
+
+	TRACE_SN("cmd(%p)->sn: %d (tgt_dev %p, *cur_sn_slot %d, "
+		"num_free_sn_slots %d, prev_cmd_ordered %ld, "
+		"cur_sn_slot %zd)", cmd, cmd->sn, tgt_dev,
+		atomic_read(tgt_dev->cur_sn_slot),
+		tgt_dev->num_free_sn_slots, tgt_dev->prev_cmd_ordered,
+		tgt_dev->cur_sn_slot-tgt_dev->sn_slots);
+
+	cmd->sn_set = 1;
+
+out:
+	return;
+}
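+
+/*
+ * Worked illustration of the SN scheme implemented above (a sketch, as
+ * read from the code):
+ *
+ *	SIMPLE cmds A, B and C arrive back-to-back. The first increment of
+ *	the current slot bumps curr_sn (say, to 1), so all three share
+ *	sn = 1 and the same sn_slot and may all run once expected_sn == 1.
+ *
+ *	An ORDERED cmd D then gets its own sn = 2 and no slot. It becomes
+ *	runnable only after the A/B/C slot drains to zero, which is when
+ *	scst_inc_expected_sn() advances expected_sn to 2.
+ *
+ *	A following SIMPLE cmd E starts a new batch: a fresh slot, sn = 3,
+ *	runnable only after D completes (D has a NULL slot, so its
+ *	completion advances expected_sn directly).
+ */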
+
+/*
+ * Returns 0 on success, > 0 when we need to wait for unblock,
+ * < 0 if there is no device (lun) or device type handler.
+ *
+ * No locks, but might be on IRQ, protection is done by the
+ * suspended activity.
+ */
+static int scst_translate_lun(struct scst_cmd *cmd)
+{
+	struct scst_tgt_dev *tgt_dev = NULL;
+	int res;
+
+	/* See comment about smp_mb() in scst_suspend_activity() */
+	__scst_get(1);
+
+	if (likely(!test_bit(SCST_FLAG_SUSPENDED, &scst_flags))) {
+		struct list_head *sess_tgt_dev_list_head =
+			&cmd->sess->sess_tgt_dev_list_hash[HASH_VAL(cmd->lun)];
+		TRACE_DBG("Finding tgt_dev for cmd %p (lun %lld)", cmd,
+			(long long unsigned int)cmd->lun);
+		res = -1;
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			if (tgt_dev->lun == cmd->lun) {
+				TRACE_DBG("tgt_dev %p found", tgt_dev);
+
+				if (unlikely(tgt_dev->dev->handler ==
+						&scst_null_devtype)) {
+					PRINT_INFO("Dev handler for device "
+					  "%lld is NULL, the device will not "
+					  "be visible remotely",
+					   (long long unsigned int)cmd->lun);
+					break;
+				}
+
+				cmd->cmd_threads = tgt_dev->active_cmd_threads;
+				cmd->tgt_dev = tgt_dev;
+				cmd->dev = tgt_dev->dev;
+
+				res = 0;
+				break;
+			}
+		}
+		if (res != 0) {
+			TRACE(TRACE_MINOR,
+				"tgt_dev for LUN %lld not found, command to "
+				"nonexistent LU?",
+				(long long unsigned int)cmd->lun);
+			__scst_put();
+		}
+	} else {
+		TRACE_MGMT_DBG("%s", "FLAG SUSPENDED set, skipping");
+		__scst_put();
+		res = 1;
+	}
+	return res;
+}
+
+/*
+ * No locks, but might be on IRQ.
+ *
+ * Returns 0 on success, > 0 when we need to wait for unblock,
+ * < 0 if there is no device (lun) or device type handler.
+ */
+static int __scst_init_cmd(struct scst_cmd *cmd)
+{
+	int res = 0;
+
+	res = scst_translate_lun(cmd);
+	if (likely(res == 0)) {
+		int cnt;
+		bool failure = false;
+
+		cmd->state = SCST_CMD_STATE_PRE_PARSE;
+
+		cnt = atomic_inc_return(&cmd->tgt_dev->tgt_dev_cmd_count);
+		if (unlikely(cnt > SCST_MAX_TGT_DEV_COMMANDS)) {
+			TRACE(TRACE_FLOW_CONTROL,
+				"Too many pending commands (%d) in "
+				"session, returning BUSY to initiator \"%s\"",
+				cnt, (cmd->sess->initiator_name[0] == '\0') ?
+				  "Anonymous" : cmd->sess->initiator_name);
+			failure = true;
+		}
+
+#ifdef CONFIG_SCST_PER_DEVICE_CMD_COUNT_LIMIT
+		cnt = atomic_inc_return(&cmd->dev->dev_cmd_count);
+		if (unlikely(cnt > SCST_MAX_DEV_COMMANDS)) {
+			if (!failure) {
+				TRACE(TRACE_FLOW_CONTROL,
+					"Too many pending device "
+					"commands (%d), returning BUSY to "
+					"initiator \"%s\"", cnt,
+					(cmd->sess->initiator_name[0] == '\0') ?
+						"Anonymous" :
+						cmd->sess->initiator_name);
+				failure = true;
+			}
+		}
+#endif
+
+#ifdef CONFIG_SCST_ORDERED_READS
+		/* If expected values not set, expected direction is UNKNOWN */
+		if (cmd->expected_data_direction & SCST_DATA_WRITE)
+			atomic_inc(&cmd->dev->write_cmd_count);
+#endif
+
+		if (unlikely(failure))
+			goto out_busy;
+
+		if (!cmd->set_sn_on_restart_cmd)
+			scst_cmd_set_sn(cmd);
+	} else if (res < 0) {
+		TRACE_DBG("Finishing cmd %p", cmd);
+		scst_set_cmd_error(cmd,
+			   SCST_LOAD_SENSE(scst_sense_lun_not_supported));
+		scst_set_cmd_abnormal_done_state(cmd);
+	} else
+		goto out;
+
+out:
+	return res;
+
+out_busy:
+	scst_set_busy(cmd);
+	scst_set_cmd_abnormal_done_state(cmd);
+	goto out;
+}
+
+/* Called under scst_init_lock and IRQs disabled */
+static void scst_do_job_init(void)
+	__releases(&scst_init_lock)
+	__acquires(&scst_init_lock)
+{
+	struct scst_cmd *cmd;
+	int susp;
+
+restart:
+	/*
+	 * There is no need for a read barrier here, because we don't care
+	 * exactly where this check is performed.
+	 */
+	susp = test_bit(SCST_FLAG_SUSPENDED, &scst_flags);
+	if (scst_init_poll_cnt > 0)
+		scst_init_poll_cnt--;
+
+	list_for_each_entry(cmd, &scst_init_cmd_list, cmd_list_entry) {
+		int rc;
+		if (susp && !test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+			continue;
+		if (!test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+			spin_unlock_irq(&scst_init_lock);
+			rc = __scst_init_cmd(cmd);
+			spin_lock_irq(&scst_init_lock);
+			if (rc > 0) {
+				TRACE_MGMT_DBG("%s",
+					"FLAG SUSPENDED set, restarting");
+				goto restart;
+			}
+		} else {
+			TRACE_MGMT_DBG("Aborting not inited cmd %p (tag %llu)",
+				       cmd, (long long unsigned int)cmd->tag);
+			scst_set_cmd_abnormal_done_state(cmd);
+		}
+
+		/*
+		 * Deleting the cmd from the init cmd list after
+		 * __scst_init_cmd() is necessary to keep the check in
+		 * scst_init_cmd() correct, so the commands' order is
+		 * preserved.
+		 *
+		 * We don't care about the race where the init cmd list is
+		 * empty, one command has just seen it as non-empty and is
+		 * inserting itself into it, while another command at the
+		 * same time sees it as empty and proceeds directly, because
+		 * it could affect only commands from the same initiator to
+		 * the same tgt_dev, and scst_cmd_init_done*() doesn't
+		 * guarantee the order in case of such simultaneous calls
+		 * anyway.
+		 */
+		TRACE_MGMT_DBG("Deleting cmd %p from init cmd list", cmd);
+		smp_wmb(); /* enforce the required order */
+		list_del(&cmd->cmd_list_entry);
+		spin_unlock(&scst_init_lock);
+
+		spin_lock(&cmd->cmd_threads->cmd_list_lock);
+		TRACE_MGMT_DBG("Adding cmd %p to active cmd list", cmd);
+		if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+			list_add(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		else
+			list_add_tail(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+
+		spin_lock(&scst_init_lock);
+		goto restart;
+	}
+
+	/* It isn't really needed, but let's keep it */
+	if (susp != test_bit(SCST_FLAG_SUSPENDED, &scst_flags))
+		goto restart;
+	return;
+}
+
+static inline int test_init_cmd_list(void)
+{
+	int res = (!list_empty(&scst_init_cmd_list) &&
+		   !test_bit(SCST_FLAG_SUSPENDED, &scst_flags)) ||
+		  unlikely(kthread_should_stop()) ||
+		  (scst_init_poll_cnt > 0);
+	return res;
+}
+
+int scst_init_thread(void *arg)
+{
+
+	PRINT_INFO("Init thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	set_user_nice(current, -10);
+
+	spin_lock_irq(&scst_init_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_init_cmd_list()) {
+			add_wait_queue_exclusive(&scst_init_cmd_list_waitQ,
+						 &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_init_cmd_list())
+					break;
+				spin_unlock_irq(&scst_init_lock);
+				schedule();
+				spin_lock_irq(&scst_init_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&scst_init_cmd_list_waitQ, &wait);
+		}
+		scst_do_job_init();
+	}
+	spin_unlock_irq(&scst_init_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be
+	 * on the module unload, so scst_init_cmd_list must be empty.
+	 */
+	BUG_ON(!list_empty(&scst_init_cmd_list));
+
+	PRINT_INFO("Init thread PID %d finished", current->pid);
+	return 0;
+}
+
+/**
+ * scst_process_active_cmd() - process active command
+ *
+ * Description:
+ *    Main SCST command processing routine. Must be used only by dev
+ *    handlers.
+ *
+ *    The atomic argument is true if the function is called in atomic
+ *    context.
+ *
+ *    Must be called with no locks held.
+ */
+void scst_process_active_cmd(struct scst_cmd *cmd, bool atomic)
+{
+	int res;
+
+	/*
+	 * Checkpatch will complain on the use of in_atomic() below. You
+	 * can safely ignore this warning since in_atomic() is used here only
+	 * for debugging purposes.
+	 */
+	EXTRACHECKS_BUG_ON(in_irq() || irqs_disabled());
+	EXTRACHECKS_WARN_ON((in_atomic() || in_interrupt() || irqs_disabled()) &&
+			     !atomic);
+
+	cmd->atomic = atomic;
+
+	TRACE_DBG("cmd %p, atomic %d", cmd, atomic);
+
+	do {
+		switch (cmd->state) {
+		case SCST_CMD_STATE_PRE_PARSE:
+			res = scst_pre_parse(cmd);
+			EXTRACHECKS_BUG_ON(res ==
+				SCST_CMD_STATE_RES_NEED_THREAD);
+			break;
+
+		case SCST_CMD_STATE_DEV_PARSE:
+			res = scst_parse_cmd(cmd);
+			break;
+
+		case SCST_CMD_STATE_PREPARE_SPACE:
+			res = scst_prepare_space(cmd);
+			break;
+
+		case SCST_CMD_STATE_PREPROCESSING_DONE:
+			res = scst_preprocessing_done(cmd);
+			break;
+
+		case SCST_CMD_STATE_RDY_TO_XFER:
+			res = scst_rdy_to_xfer(cmd);
+			break;
+
+		case SCST_CMD_STATE_TGT_PRE_EXEC:
+			res = scst_tgt_pre_exec(cmd);
+			break;
+
+		case SCST_CMD_STATE_SEND_FOR_EXEC:
+			if (tm_dbg_check_cmd(cmd) != 0) {
+				res = SCST_CMD_STATE_RES_CONT_NEXT;
+				TRACE_MGMT_DBG("Skipping cmd %p (tag %llu), "
+					"because of TM DBG delay", cmd,
+					(long long unsigned int)cmd->tag);
+				break;
+			}
+			res = scst_send_for_exec(&cmd);
+			/*
+			 * !! At this point cmd, sess & tgt_dev can already be
+			 * freed !!
+			 */
+			break;
+
+		case SCST_CMD_STATE_LOCAL_EXEC:
+			res = scst_local_exec(cmd);
+			/*
+			 * !! At this point cmd, sess & tgt_dev can already be
+			 * freed !!
+			 */
+			break;
+
+		case SCST_CMD_STATE_REAL_EXEC:
+			res = scst_real_exec(cmd);
+			/*
+			 * !! At this point cmd, sess & tgt_dev can already be
+			 * freed !!
+			 */
+			break;
+
+		case SCST_CMD_STATE_PRE_DEV_DONE:
+			res = scst_pre_dev_done(cmd);
+			EXTRACHECKS_BUG_ON(res ==
+				SCST_CMD_STATE_RES_NEED_THREAD);
+			break;
+
+		case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+			res = scst_mode_select_checks(cmd);
+			break;
+
+		case SCST_CMD_STATE_DEV_DONE:
+			res = scst_dev_done(cmd);
+			break;
+
+		case SCST_CMD_STATE_PRE_XMIT_RESP:
+			res = scst_pre_xmit_response(cmd);
+			EXTRACHECKS_BUG_ON(res ==
+				SCST_CMD_STATE_RES_NEED_THREAD);
+			break;
+
+		case SCST_CMD_STATE_XMIT_RESP:
+			res = scst_xmit_response(cmd);
+			break;
+
+		case SCST_CMD_STATE_FINISHED:
+			res = scst_finish_cmd(cmd);
+			break;
+
+		case SCST_CMD_STATE_FINISHED_INTERNAL:
+			res = scst_finish_internal_cmd(cmd);
+			EXTRACHECKS_BUG_ON(res ==
+				SCST_CMD_STATE_RES_NEED_THREAD);
+			break;
+
+		default:
+			PRINT_CRIT_ERROR("cmd (%p) in state %d, but shouldn't "
+				"be", cmd, cmd->state);
+			BUG();
+			res = SCST_CMD_STATE_RES_CONT_NEXT;
+			break;
+		}
+	} while (res == SCST_CMD_STATE_RES_CONT_SAME);
+
+	if (res == SCST_CMD_STATE_RES_CONT_NEXT) {
+		/* None */
+	} else if (res == SCST_CMD_STATE_RES_NEED_THREAD) {
+		spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+#ifdef CONFIG_SCST_EXTRACHECKS
+		switch (cmd->state) {
+		case SCST_CMD_STATE_DEV_PARSE:
+		case SCST_CMD_STATE_PREPARE_SPACE:
+		case SCST_CMD_STATE_RDY_TO_XFER:
+		case SCST_CMD_STATE_TGT_PRE_EXEC:
+		case SCST_CMD_STATE_SEND_FOR_EXEC:
+		case SCST_CMD_STATE_LOCAL_EXEC:
+		case SCST_CMD_STATE_REAL_EXEC:
+		case SCST_CMD_STATE_DEV_DONE:
+		case SCST_CMD_STATE_XMIT_RESP:
+#endif
+			TRACE_DBG("Adding cmd %p to head of active cmd list",
+				  cmd);
+			list_add(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+#ifdef CONFIG_SCST_EXTRACHECKS
+			break;
+		default:
+			PRINT_CRIT_ERROR("cmd %p is in invalid state %d", cmd,
+				cmd->state);
+			spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+			BUG();
+			spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+			break;
+		}
+#endif
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+	} else
+		BUG();
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_process_active_cmd);
+
+/**
+ * scst_post_parse_process_active_cmd() - process command after parse
+ *
+ * SCST command processing routine, which should be called by a dev handler
+ * after its parse() callback has returned SCST_CMD_STATE_STOP (see the
+ * usage sketch after this function). Arguments are the same as for
+ * scst_process_active_cmd().
+ */
+void scst_post_parse_process_active_cmd(struct scst_cmd *cmd, bool atomic)
+{
+	scst_set_parse_time(cmd);
+	scst_process_active_cmd(cmd, atomic);
+}
+EXPORT_SYMBOL_GPL(scst_post_parse_process_active_cmd);
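+
+/*
+ * Usage sketch (illustration only): a dev handler deferring parse() to its
+ * own thread context and resuming the command afterwards. All names except
+ * the SCST calls are hypothetical.
+ *
+ *	static int my_parse(struct scst_cmd *cmd)
+ *	{
+ *		my_queue_parse_work(cmd);
+ *		return SCST_CMD_STATE_STOP;
+ *	}
+ *
+ *	static void my_parse_work_fn(struct scst_cmd *cmd)
+ *	{
+ *		... do the actual parsing ...
+ *		scst_post_parse_process_active_cmd(cmd, false);
+ *	}
+ */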
+
+/* Called under cmd_list_lock and IRQs disabled */
+static void scst_do_job_active(struct list_head *cmd_list,
+	spinlock_t *cmd_list_lock, bool atomic)
+	__releases(cmd_list_lock)
+	__acquires(cmd_list_lock)
+{
+
+	while (!list_empty(cmd_list)) {
+		struct scst_cmd *cmd = list_entry(cmd_list->next, typeof(*cmd),
+					cmd_list_entry);
+		TRACE_DBG("Deleting cmd %p from active cmd list", cmd);
+		list_del(&cmd->cmd_list_entry);
+		spin_unlock_irq(cmd_list_lock);
+		scst_process_active_cmd(cmd, atomic);
+		spin_lock_irq(cmd_list_lock);
+	}
+	return;
+}
+
+static inline int test_cmd_threads(struct scst_cmd_threads *p_cmd_threads)
+{
+	int res = !list_empty(&p_cmd_threads->active_cmd_list) ||
+	    unlikely(kthread_should_stop()) ||
+	    tm_dbg_is_release();
+	return res;
+}
+
+int scst_cmd_thread(void *arg)
+{
+	struct scst_cmd_threads *p_cmd_threads = (struct scst_cmd_threads *)arg;
+	static DEFINE_MUTEX(io_context_mutex);
+
+	PRINT_INFO("Processing thread %s (PID %d) started", current->comm,
+		current->pid);
+
+#if 0
+	set_user_nice(current, 10);
+#endif
+	current->flags |= PF_NOFREEZE;
+
+	mutex_lock(&io_context_mutex);
+
+	WARN_ON(current->io_context);
+
+	if (p_cmd_threads != &scst_main_cmd_threads) {
+		if (p_cmd_threads->io_context == NULL) {
+			p_cmd_threads->io_context = get_io_context(GFP_KERNEL, -1);
+			TRACE_MGMT_DBG("Allocated new IO context %p "
+				"(p_cmd_threads %p)",
+				p_cmd_threads->io_context,
+				p_cmd_threads);
+			/* It's ref counted via threads */
+			put_io_context(p_cmd_threads->io_context);
+		} else {
+			put_io_context(current->io_context);
+			current->io_context = ioc_task_link(p_cmd_threads->io_context);
+			TRACE_MGMT_DBG("Linked IO context %p "
+				"(p_cmd_threads %p)", p_cmd_threads->io_context,
+				p_cmd_threads);
+		}
+	}
+
+	mutex_unlock(&io_context_mutex);
+
+	spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_cmd_threads(p_cmd_threads)) {
+			add_wait_queue_exclusive_head(
+				&p_cmd_threads->cmd_list_waitQ,
+				&wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_cmd_threads(p_cmd_threads))
+					break;
+				spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+				schedule();
+				spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&p_cmd_threads->cmd_list_waitQ, &wait);
+		}
+
+		if (tm_dbg_is_release()) {
+			spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+			tm_dbg_check_released_cmds();
+			spin_lock_irq(&p_cmd_threads->cmd_list_lock);
+		}
+
+		scst_do_job_active(&p_cmd_threads->active_cmd_list,
+			&p_cmd_threads->cmd_list_lock, false);
+	}
+	spin_unlock_irq(&p_cmd_threads->cmd_list_lock);
+
+	EXTRACHECKS_BUG_ON((p_cmd_threads->nr_threads == 1) &&
+		 !list_empty(&p_cmd_threads->active_cmd_list));
+
+	if ((p_cmd_threads->nr_threads == 1) &&
+	    (p_cmd_threads != &scst_main_cmd_threads))
+		p_cmd_threads->io_context = NULL;
+
+	PRINT_INFO("Processing thread %s (PID %d) finished", current->comm,
+		current->pid);
+	return 0;
+}
+
+void scst_cmd_tasklet(long p)
+{
+	struct scst_tasklet *t = (struct scst_tasklet *)p;
+
+	spin_lock_irq(&t->tasklet_lock);
+	scst_do_job_active(&t->tasklet_cmd_list, &t->tasklet_lock, true);
+	spin_unlock_irq(&t->tasklet_lock);
+	return;
+}
+
+/*
+ * Returns 0 on success, < 0 if there is no device handler, or
+ * > 0 if SCST_FLAG_SUSPENDED is set while SCST_FLAG_SUSPENDING is not.
+ * No locks; protection is done by the suspended activity.
+ */
+static int scst_mgmt_translate_lun(struct scst_mgmt_cmd *mcmd)
+{
+	struct scst_tgt_dev *tgt_dev = NULL;
+	struct list_head *sess_tgt_dev_list_head;
+	int res = -1;
+
+	TRACE_DBG("Finding tgt_dev for mgmt cmd %p (lun %lld)", mcmd,
+	      (long long unsigned int)mcmd->lun);
+
+	/* See comment about smp_mb() in scst_suspend_activity() */
+	__scst_get(1);
+
+	if (unlikely(test_bit(SCST_FLAG_SUSPENDED, &scst_flags) &&
+		     !test_bit(SCST_FLAG_SUSPENDING, &scst_flags))) {
+		TRACE_MGMT_DBG("%s", "FLAG SUSPENDED set, skipping");
+		__scst_put();
+		res = 1;
+		goto out;
+	}
+
+	sess_tgt_dev_list_head =
+		&mcmd->sess->sess_tgt_dev_list_hash[HASH_VAL(mcmd->lun)];
+	list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+			sess_tgt_dev_list_entry) {
+		if (tgt_dev->lun == mcmd->lun) {
+			TRACE_DBG("tgt_dev %p found", tgt_dev);
+			mcmd->mcmd_tgt_dev = tgt_dev;
+			res = 0;
+			break;
+		}
+	}
+	if (mcmd->mcmd_tgt_dev == NULL)
+		__scst_put();
+
+out:
+	return res;
+}
+
+/* No locks */
+void scst_done_cmd_mgmt(struct scst_cmd *cmd)
+{
+	struct scst_mgmt_cmd_stub *mstb;
+	bool wake = 0;
+	unsigned long flags;
+
+	TRACE_MGMT_DBG("cmd %p done (tag %llu)",
+		       cmd, (long long unsigned int)cmd->tag);
+
+	spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+	list_for_each_entry(mstb, &cmd->mgmt_cmd_list,
+			cmd_mgmt_cmd_list_entry) {
+		struct scst_mgmt_cmd *mcmd;
+
+		if (!mstb->done_counted)
+			continue;
+
+		mcmd = mstb->mcmd;
+		TRACE_MGMT_DBG("mcmd %p, mcmd->cmd_done_wait_count %d",
+			mcmd, mcmd->cmd_done_wait_count);
+
+		mcmd->cmd_done_wait_count--;
+		if (mcmd->cmd_done_wait_count > 0) {
+			TRACE_MGMT_DBG("cmd_done_wait_count(%d) not 0, "
+				"skipping", mcmd->cmd_done_wait_count);
+			continue;
+		}
+
+		if (mcmd->completed) {
+			BUG_ON(mcmd->affected_cmds_done_called);
+			mcmd->completed = 0;
+			mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+			TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd "
+				"list", mcmd);
+			list_add_tail(&mcmd->mgmt_cmd_list_entry,
+				&scst_active_mgmt_cmd_list);
+			wake = 1;
+		}
+	}
+
+	spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+	if (wake)
+		wake_up(&scst_mgmt_cmd_list_waitQ);
+	return;
+}
+
+/* Called under scst_mcmd_lock and IRQs disabled */
+static int __scst_dec_finish_wait_count(struct scst_mgmt_cmd *mcmd, bool *wake)
+{
+
+	mcmd->cmd_finish_wait_count--;
+	if (mcmd->cmd_finish_wait_count > 0) {
+		TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, "
+			"skipping", mcmd->cmd_finish_wait_count);
+		goto out;
+	}
+
+	if (mcmd->completed) {
+		mcmd->state = SCST_MCMD_STATE_DONE;
+		TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd "
+			"list",	mcmd);
+		list_add_tail(&mcmd->mgmt_cmd_list_entry,
+			&scst_active_mgmt_cmd_list);
+		*wake = true;
+	}
+
+out:
+	return mcmd->cmd_finish_wait_count;
+}
+
+/**
+ * scst_prepare_async_mcmd() - prepare async management command
+ *
+ * Notifies SCST that the management command is going to be async, i.e.
+ * will be completed in another context.
+ *
+ * No SCST locks are supposed to be held on entrance.
+ */
+void scst_prepare_async_mcmd(struct scst_mgmt_cmd *mcmd)
+{
+	unsigned long flags;
+
+	TRACE_MGMT_DBG("Preparing mcmd %p for async execution "
+		"(cmd_finish_wait_count %d)", mcmd,
+		mcmd->cmd_finish_wait_count);
+
+	spin_lock_irqsave(&scst_mcmd_lock, flags);
+	mcmd->cmd_finish_wait_count++;
+	spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_prepare_async_mcmd);
+
+/**
+ * scst_async_mcmd_completed() - async management command completed
+ *
+ * Notifies SCST that an async management command, prepared by
+ * scst_prepare_async_mcmd(), has completed (see the usage sketch after
+ * this function).
+ *
+ * No SCST locks are supposed to be held on entrance.
+ */
+void scst_async_mcmd_completed(struct scst_mgmt_cmd *mcmd, int status)
+{
+	unsigned long flags;
+	bool wake = false;
+
+	TRACE_MGMT_DBG("Async mcmd %p completed (status %d)", mcmd, status);
+
+	spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+	if (status != SCST_MGMT_STATUS_SUCCESS)
+		mcmd->status = status;
+
+	__scst_dec_finish_wait_count(mcmd, &wake);
+
+	spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+	if (wake)
+		wake_up(&scst_mgmt_cmd_list_waitQ);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_async_mcmd_completed);
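+
+/*
+ * Usage sketch (illustration only) of the async pair above: a dev handler
+ * task_mgmt_fn() completing in another context. The helpers are
+ * hypothetical.
+ *
+ *	static int my_task_mgmt_fn(struct scst_mgmt_cmd *mcmd,
+ *		struct scst_tgt_dev *tgt_dev)
+ *	{
+ *		scst_prepare_async_mcmd(mcmd);
+ *		my_start_tm_work(mcmd, tgt_dev);
+ *		return SCST_DEV_TM_NOT_COMPLETED;
+ *	}
+ *
+ *	static void my_tm_work_done(struct scst_mgmt_cmd *mcmd)
+ *	{
+ *		scst_async_mcmd_completed(mcmd, SCST_MGMT_STATUS_SUCCESS);
+ *	}
+ */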
+
+/* No locks */
+static void scst_finish_cmd_mgmt(struct scst_cmd *cmd)
+{
+	struct scst_mgmt_cmd_stub *mstb, *t;
+	bool wake = false;
+	unsigned long flags;
+
+	TRACE_MGMT_DBG("cmd %p finished (tag %llu)",
+		       cmd, (long long unsigned int)cmd->tag);
+
+	spin_lock_irqsave(&scst_mcmd_lock, flags);
+
+	list_for_each_entry_safe(mstb, t, &cmd->mgmt_cmd_list,
+			cmd_mgmt_cmd_list_entry) {
+		struct scst_mgmt_cmd *mcmd = mstb->mcmd;
+
+		TRACE_MGMT_DBG("mcmd %p, mcmd->cmd_finish_wait_count %d",
+			mcmd, mcmd->cmd_finish_wait_count);
+
+		list_del(&mstb->cmd_mgmt_cmd_list_entry);
+		mempool_free(mstb, scst_mgmt_stub_mempool);
+
+		if (cmd->completed)
+			mcmd->completed_cmd_count++;
+
+		if (__scst_dec_finish_wait_count(mcmd, &wake) > 0) {
+			TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, "
+				"skipping", mcmd->cmd_finish_wait_count);
+			continue;
+		}
+	}
+
+	spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+	if (wake)
+		wake_up(&scst_mgmt_cmd_list_waitQ);
+	return;
+}
+
+static int scst_call_dev_task_mgmt_fn(struct scst_mgmt_cmd *mcmd,
+	struct scst_tgt_dev *tgt_dev, int set_status)
+{
+	int res = SCST_DEV_TM_NOT_COMPLETED;
+	struct scst_dev_type *h = tgt_dev->dev->handler;
+
+	if (h->task_mgmt_fn) {
+		TRACE_MGMT_DBG("Calling dev handler %s task_mgmt_fn(fn=%d)",
+			h->name, mcmd->fn);
+		EXTRACHECKS_BUG_ON(in_irq() || irqs_disabled());
+		res = h->task_mgmt_fn(mcmd, tgt_dev);
+		TRACE_MGMT_DBG("Dev handler %s task_mgmt_fn() returned %d",
+		      h->name, res);
+		if (set_status && (res != SCST_DEV_TM_NOT_COMPLETED))
+			mcmd->status = res;
+	}
+	return res;
+}
+
+static inline int scst_is_strict_mgmt_fn(int mgmt_fn)
+{
+	switch (mgmt_fn) {
+#ifdef CONFIG_SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING
+	case SCST_ABORT_TASK:
+#endif
+#if 0
+	case SCST_ABORT_TASK_SET:
+	case SCST_CLEAR_TASK_SET:
+#endif
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+/* Might be called under sess_list_lock and IRQ off + BHs also off */
+void scst_abort_cmd(struct scst_cmd *cmd, struct scst_mgmt_cmd *mcmd,
+	int other_ini, int call_dev_task_mgmt_fn)
+{
+	unsigned long flags;
+	static DEFINE_SPINLOCK(other_ini_lock);
+
+	TRACE(TRACE_MGMT, "Aborting cmd %p (tag %llu, op %x)",
+		cmd, (long long unsigned int)cmd->tag, cmd->cdb[0]);
+
+	/* To protect from concurrent aborts */
+	spin_lock_irqsave(&other_ini_lock, flags);
+
+	if (other_ini) {
+		struct scst_device *dev = NULL;
+
+		/* Might be necessary if the command is aborted several times */
+		if (!test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))
+			set_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+
+		/* Necessary for scst_xmit_process_aborted_cmd */
+		if (cmd->dev != NULL)
+			dev = cmd->dev;
+		else if ((mcmd != NULL) && (mcmd->mcmd_tgt_dev != NULL))
+			dev = mcmd->mcmd_tgt_dev->dev;
+
+		if (dev != NULL) {
+			if (dev->tas)
+				set_bit(SCST_CMD_DEVICE_TAS, &cmd->cmd_flags);
+		} else
+			PRINT_WARNING("Abort cmd %p from other initiator, but "
+				"neither cmd, nor mcmd %p have tgt_dev set, so "
+				"TAS information can be lost", cmd, mcmd);
+	} else {
+		/* Might be necessary if the command is aborted several times */
+		clear_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+	}
+
+	set_bit(SCST_CMD_ABORTED, &cmd->cmd_flags);
+
+	spin_unlock_irqrestore(&other_ini_lock, flags);
+
+	/*
+	 * To sync with cmd->finished/done set in
+	 * scst_finish_cmd()/scst_pre_xmit_response()
+	 */
+	smp_mb__after_set_bit();
+
+	if (cmd->tgt_dev == NULL) {
+		spin_lock_irqsave(&scst_init_lock, flags);
+		scst_init_poll_cnt++;
+		spin_unlock_irqrestore(&scst_init_lock, flags);
+		wake_up(&scst_init_cmd_list_waitQ);
+	}
+
+	if (call_dev_task_mgmt_fn && (cmd->tgt_dev != NULL)) {
+		EXTRACHECKS_BUG_ON(irqs_disabled());
+		scst_call_dev_task_mgmt_fn(mcmd, cmd->tgt_dev, 1);
+	}
+
+	spin_lock_irqsave(&scst_mcmd_lock, flags);
+	if ((mcmd != NULL) && !cmd->finished) {
+		struct scst_mgmt_cmd_stub *mstb;
+
+		mstb = mempool_alloc(scst_mgmt_stub_mempool, GFP_ATOMIC);
+		if (mstb == NULL) {
+			PRINT_CRIT_ERROR("Allocation of management command "
+				"stub failed (mcmd %p, cmd %p)", mcmd, cmd);
+			goto unlock;
+		}
+		memset(mstb, 0, sizeof(*mstb));
+
+		mstb->mcmd = mcmd;
+
+		/*
+		 * cmd can't die here, because sess_list_lock is already
+		 * taken and cmd is still on the sess list
+		 */
+		list_add_tail(&mstb->cmd_mgmt_cmd_list_entry,
+			&cmd->mgmt_cmd_list);
+
+		/*
+		 * Delay the response until the command finishes in order to
+		 * guarantee that "no further responses from the task are sent
+		 * to the SCSI initiator port" after the response to the TM
+		 * function is sent (SAM). Plus, we must wait here to be sure
+		 * that we won't receive double commands with the same tag.
+		 * Moreover, if we don't wait here, data corruption becomes
+		 * possible: a command that was aborted and reported as
+		 * completed could actually get executed *after* new commands
+		 * sent once this TM command completed.
+		 */
+		TRACE_MGMT_DBG("cmd %p (tag %llu, sn %u) being "
+			"executed/xmitted (state %d, op %x, proc time %ld "
+			"sec., timeout %d sec.), deferring ABORT...", cmd,
+			(long long unsigned int)cmd->tag, cmd->sn, cmd->state,
+			cmd->cdb[0], (long)(jiffies - cmd->start_time) / HZ,
+			cmd->timeout / HZ);
+
+		mcmd->cmd_finish_wait_count++;
+
+		if (cmd->sent_for_exec && !cmd->done) {
+			TRACE_MGMT_DBG("cmd %p (tag %llu) is being executed "
+				"and not done yet", cmd,
+				(long long unsigned int)cmd->tag);
+			mstb->done_counted = 1;
+			mcmd->cmd_done_wait_count++;
+		}
+	}
+unlock:
+	spin_unlock_irqrestore(&scst_mcmd_lock, flags);
+
+	tm_dbg_release_cmd(cmd);
+	return;
+}
+
+/* No locks */
+static int scst_set_mcmd_next_state(struct scst_mgmt_cmd *mcmd)
+{
+	int res;
+
+	spin_lock_irq(&scst_mcmd_lock);
+
+	if (mcmd->cmd_finish_wait_count == 0) {
+		if (!mcmd->affected_cmds_done_called)
+			mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+		else
+			mcmd->state = SCST_MCMD_STATE_DONE;
+		res = 0;
+	} else if ((mcmd->cmd_done_wait_count == 0) &&
+		   (!mcmd->affected_cmds_done_called)) {
+		mcmd->state = SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE;
+		res = 0;
+		goto out_unlock;
+	} else {
+		TRACE_MGMT_DBG("cmd_finish_wait_count(%d) not 0, preparing to "
+			"wait", mcmd->cmd_finish_wait_count);
+		mcmd->state = SCST_MCMD_STATE_EXECUTING;
+		res = -1;
+	}
+
+	mcmd->completed = 1;
+
+out_unlock:
+	spin_unlock_irq(&scst_mcmd_lock);
+	return res;
+}
+
+static bool __scst_check_unblock_aborted_cmd(struct scst_cmd *cmd,
+	struct list_head *list_entry)
+{
+	bool res;
+
+	if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+		list_del(list_entry);
+		spin_lock(&cmd->cmd_threads->cmd_list_lock);
+		list_add_tail(&cmd->cmd_list_entry,
+			&cmd->cmd_threads->active_cmd_list);
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+		res = true;
+	} else
+		res = false;
+	return res;
+}
+
+static void scst_unblock_aborted_cmds(int scst_mutex_held)
+{
+	struct scst_device *dev;
+
+	if (!scst_mutex_held)
+		mutex_lock(&scst_mutex);
+
+	list_for_each_entry(dev, &scst_dev_list, dev_list_entry) {
+		struct scst_cmd *cmd, *tcmd;
+		struct scst_tgt_dev *tgt_dev;
+
+		spin_lock_bh(&dev->dev_lock);
+		local_irq_disable();
+		list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+					blocked_cmd_list_entry) {
+			if (__scst_check_unblock_aborted_cmd(cmd,
+					&cmd->blocked_cmd_list_entry)) {
+				TRACE_MGMT_DBG("Unblock aborted blocked cmd %p",
+					cmd);
+			}
+		}
+		local_irq_enable();
+		spin_unlock_bh(&dev->dev_lock);
+
+		local_irq_disable();
+		list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+					 dev_tgt_dev_list_entry) {
+			spin_lock(&tgt_dev->sn_lock);
+			list_for_each_entry_safe(cmd, tcmd,
+					&tgt_dev->deferred_cmd_list,
+					sn_cmd_list_entry) {
+				if (__scst_check_unblock_aborted_cmd(cmd,
+						&cmd->sn_cmd_list_entry)) {
+					TRACE_MGMT_DBG("Unblocked aborted SN "
+						"cmd %p (sn %u)",
+						cmd, cmd->sn);
+					tgt_dev->def_cmd_count--;
+				}
+			}
+			spin_unlock(&tgt_dev->sn_lock);
+		}
+		local_irq_enable();
+	}
+
+	if (!scst_mutex_held)
+		mutex_unlock(&scst_mutex);
+	return;
+}
+
+static void __scst_abort_task_set(struct scst_mgmt_cmd *mcmd,
+	struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_cmd *cmd;
+	struct scst_session *sess = tgt_dev->sess;
+
+	spin_lock_irq(&sess->sess_list_lock);
+
+	TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+	list_for_each_entry(cmd, &sess->sess_cmd_list,
+			    sess_cmd_list_entry) {
+		if ((cmd->tgt_dev == tgt_dev) ||
+		    ((cmd->tgt_dev == NULL) &&
+		     (cmd->lun == tgt_dev->lun))) {
+			if (mcmd->cmd_sn_set) {
+				BUG_ON(!cmd->tgt_sn_set);
+				if (scst_sn_before(mcmd->cmd_sn, cmd->tgt_sn) ||
+				    (mcmd->cmd_sn == cmd->tgt_sn))
+					continue;
+			}
+			scst_abort_cmd(cmd, mcmd, 0, 0);
+		}
+	}
+	spin_unlock_irq(&sess->sess_list_lock);
+	return;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_abort_task_set(struct scst_mgmt_cmd *mcmd)
+{
+	int res;
+	struct scst_tgt_dev *tgt_dev = mcmd->mcmd_tgt_dev;
+
+	TRACE(TRACE_MGMT, "Aborting task set (lun=%lld, mcmd=%p)",
+	      (long long unsigned int)tgt_dev->lun, mcmd);
+
+	__scst_abort_task_set(mcmd, tgt_dev);
+
+	tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "ABORT TASK SET", 0);
+
+	scst_unblock_aborted_cmds(0);
+
+	scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 0);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+static int scst_is_cmd_belongs_to_dev(struct scst_cmd *cmd,
+	struct scst_device *dev)
+{
+	struct scst_tgt_dev *tgt_dev = NULL;
+	struct list_head *sess_tgt_dev_list_head;
+	int res = 0;
+
+	TRACE_DBG("Finding match for dev %p and cmd %p (lun %lld)", dev, cmd,
+	      (long long unsigned int)cmd->lun);
+
+	sess_tgt_dev_list_head =
+		&cmd->sess->sess_tgt_dev_list_hash[HASH_VAL(cmd->lun)];
+	list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+			sess_tgt_dev_list_entry) {
+		if (tgt_dev->lun == cmd->lun) {
+			TRACE_DBG("dev %p found", tgt_dev->dev);
+			res = (tgt_dev->dev == dev);
+			goto out;
+		}
+	}
+
+out:
+	return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_clear_task_set(struct scst_mgmt_cmd *mcmd)
+{
+	int res;
+	struct scst_device *dev = mcmd->mcmd_tgt_dev->dev;
+	struct scst_tgt_dev *tgt_dev;
+	LIST_HEAD(UA_tgt_devs);
+
+	TRACE(TRACE_MGMT, "Clearing task set (lun=%lld, mcmd=%p)",
+		(long long unsigned int)mcmd->lun, mcmd);
+
+#if 0 /* we are SAM-3 */
+	/*
+	 * When a logical unit is aborting one or more tasks from a SCSI
+	 * initiator port with the TASK ABORTED status it should complete all
+	 * of those tasks before entering additional tasks from that SCSI
+	 * initiator port into the task set - SAM2
+	 */
+	mcmd->needs_unblocking = 1;
+	spin_lock_bh(&dev->dev_lock);
+	__scst_block_dev(dev);
+	spin_unlock_bh(&dev->dev_lock);
+#endif
+
+	__scst_abort_task_set(mcmd, mcmd->mcmd_tgt_dev);
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+			dev_tgt_dev_list_entry) {
+		struct scst_session *sess = tgt_dev->sess;
+		struct scst_cmd *cmd;
+		int aborted = 0;
+
+		if (tgt_dev == mcmd->mcmd_tgt_dev)
+			continue;
+
+		spin_lock_irq(&sess->sess_list_lock);
+
+		TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+		list_for_each_entry(cmd, &sess->sess_cmd_list,
+				    sess_cmd_list_entry) {
+			if ((cmd->dev == dev) ||
+			    ((cmd->dev == NULL) &&
+			     scst_is_cmd_belongs_to_dev(cmd, dev))) {
+				scst_abort_cmd(cmd, mcmd, 1, 0);
+				aborted = 1;
+			}
+		}
+		spin_unlock_irq(&sess->sess_list_lock);
+
+		if (aborted)
+			list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+					&UA_tgt_devs);
+	}
+
+	tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "CLEAR TASK SET", 0);
+
+	scst_unblock_aborted_cmds(1);
+
+	mutex_unlock(&scst_mutex);
+
+	if (!dev->tas) {
+		uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+		int sl;
+
+		sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+			dev->d_sense,
+			SCST_LOAD_SENSE(scst_sense_cleared_by_another_ini_UA));
+
+		list_for_each_entry(tgt_dev, &UA_tgt_devs,
+				extra_tgt_dev_list_entry) {
+			scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+		}
+	}
+
+	scst_call_dev_task_mgmt_fn(mcmd, mcmd->mcmd_tgt_dev, 0);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+/* Returns 0 if the command processing should be continued,
+ * >0 if it should be requeued, <0 otherwise */
+static int scst_mgmt_cmd_init(struct scst_mgmt_cmd *mcmd)
+{
+	int res = 0, rc;
+
+	switch (mcmd->fn) {
+	case SCST_ABORT_TASK:
+	{
+		struct scst_session *sess = mcmd->sess;
+		struct scst_cmd *cmd;
+
+		spin_lock_irq(&sess->sess_list_lock);
+		cmd = __scst_find_cmd_by_tag(sess, mcmd->tag, true);
+		if (cmd == NULL) {
+			TRACE_MGMT_DBG("ABORT TASK: command "
+			      "for tag %llu not found",
+			      (long long unsigned int)mcmd->tag);
+			mcmd->status = SCST_MGMT_STATUS_TASK_NOT_EXIST;
+			mcmd->state = SCST_MCMD_STATE_DONE;
+			spin_unlock_irq(&sess->sess_list_lock);
+			goto out;
+		}
+		__scst_cmd_get(cmd);
+		spin_unlock_irq(&sess->sess_list_lock);
+		TRACE_MGMT_DBG("Cmd %p for tag %llu (sn %d, set %d, "
+			"queue_type %x) found, aborting it",
+			cmd, (long long unsigned int)mcmd->tag,
+			cmd->sn, cmd->sn_set, cmd->queue_type);
+		mcmd->cmd_to_abort = cmd;
+		if (mcmd->lun_set && (mcmd->lun != cmd->lun)) {
+			PRINT_ERROR("ABORT TASK: LUN mismatch: mcmd LUN %llx, "
+				"cmd LUN %llx, cmd tag %llu",
+				(long long unsigned int)mcmd->lun,
+				(long long unsigned int)cmd->lun,
+				(long long unsigned int)mcmd->tag);
+			mcmd->status = SCST_MGMT_STATUS_REJECTED;
+		} else if (mcmd->cmd_sn_set &&
+			   (scst_sn_before(mcmd->cmd_sn, cmd->tgt_sn) ||
+			    (mcmd->cmd_sn == cmd->tgt_sn))) {
+			PRINT_ERROR("ABORT TASK: SN mismatch: mcmd SN %x, "
+				"cmd SN %x, cmd tag %llu", mcmd->cmd_sn,
+				cmd->tgt_sn, (long long unsigned int)mcmd->tag);
+			mcmd->status = SCST_MGMT_STATUS_REJECTED;
+		} else {
+			scst_abort_cmd(cmd, mcmd, 0, 1);
+			scst_unblock_aborted_cmds(0);
+		}
+		res = scst_set_mcmd_next_state(mcmd);
+		mcmd->cmd_to_abort = NULL; /* just in case */
+		__scst_cmd_put(cmd);
+		break;
+	}
+
+	case SCST_TARGET_RESET:
+	case SCST_NEXUS_LOSS_SESS:
+	case SCST_ABORT_ALL_TASKS_SESS:
+	case SCST_NEXUS_LOSS:
+	case SCST_ABORT_ALL_TASKS:
+	case SCST_UNREG_SESS_TM:
+		mcmd->state = SCST_MCMD_STATE_READY;
+		break;
+
+	case SCST_ABORT_TASK_SET:
+	case SCST_CLEAR_ACA:
+	case SCST_CLEAR_TASK_SET:
+	case SCST_LUN_RESET:
+		rc = scst_mgmt_translate_lun(mcmd);
+		if (rc == 0)
+			mcmd->state = SCST_MCMD_STATE_READY;
+		else if (rc < 0) {
+			PRINT_ERROR("Corresponding device for LUN %lld not "
+				"found", (long long unsigned int)mcmd->lun);
+			mcmd->status = SCST_MGMT_STATUS_LUN_NOT_EXIST;
+			mcmd->state = SCST_MCMD_STATE_DONE;
+		} else
+			res = rc;
+		break;
+
+	default:
+		BUG();
+	}
+
+out:
+	return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_target_reset(struct scst_mgmt_cmd *mcmd)
+{
+	int res, rc;
+	struct scst_device *dev;
+	struct scst_acg *acg = mcmd->sess->acg;
+	struct scst_acg_dev *acg_dev;
+	int cont, c;
+	LIST_HEAD(host_devs);
+
+	TRACE(TRACE_MGMT, "Target reset (mcmd %p, cmd count %d)",
+		mcmd, atomic_read(&mcmd->sess->sess_cmd_count));
+
+	mcmd->needs_unblocking = 1;
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+		struct scst_device *d;
+		struct scst_tgt_dev *tgt_dev;
+		int found = 0;
+
+		dev = acg_dev->dev;
+
+		spin_lock_bh(&dev->dev_lock);
+		__scst_block_dev(dev);
+		scst_process_reset(dev, mcmd->sess, NULL, mcmd, true);
+		spin_unlock_bh(&dev->dev_lock);
+
+		cont = 0;
+		c = 0;
+		list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			cont = 1;
+			if (mcmd->sess == tgt_dev->sess) {
+				rc = scst_call_dev_task_mgmt_fn(mcmd,
+						tgt_dev, 0);
+				if (rc == SCST_DEV_TM_NOT_COMPLETED)
+					c = 1;
+				else if ((rc < 0) &&
+					 (mcmd->status == SCST_MGMT_STATUS_SUCCESS))
+					mcmd->status = rc;
+				break;
+			}
+		}
+		if (cont && !c)
+			continue;
+
+		if (dev->scsi_dev == NULL)
+			continue;
+
+		list_for_each_entry(d, &host_devs, tm_dev_list_entry) {
+			if (dev->scsi_dev->host->host_no ==
+				    d->scsi_dev->host->host_no) {
+				found = 1;
+				break;
+			}
+		}
+		if (!found)
+			list_add_tail(&dev->tm_dev_list_entry, &host_devs);
+
+		tm_dbg_task_mgmt(dev, "TARGET RESET", 0);
+	}
+
+	scst_unblock_aborted_cmds(1);
+
+	/*
+	 * We assume here that the completion callbacks will be called for
+	 * all commands that are already on the devices when/after
+	 * scsi_reset_provider() completes.
+	 */
+
+	list_for_each_entry(dev, &host_devs, tm_dev_list_entry) {
+		/* dev->scsi_dev must be non-NULL here */
+		TRACE(TRACE_MGMT, "Resetting host %d bus ",
+			dev->scsi_dev->host->host_no);
+		rc = scsi_reset_provider(dev->scsi_dev, SCSI_TRY_RESET_TARGET);
+		TRACE(TRACE_MGMT, "Result of host %d target reset: %s",
+		      dev->scsi_dev->host->host_no,
+		      (rc == SUCCESS) ? "SUCCESS" : "FAILED");
+#if 0
+		if ((rc != SUCCESS) &&
+		    (mcmd->status == SCST_MGMT_STATUS_SUCCESS)) {
+			/*
+			 * SCSI_TRY_RESET_BUS is also done by
+			 * scsi_reset_provider()
+			 */
+			mcmd->status = SCST_MGMT_STATUS_FAILED;
+		}
+#else
+		/*
+		 * scsi_reset_provider() returns very weird status, so let's
+		 * always succeed
+		 */
+#endif
+	}
+
+	list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+		dev = acg_dev->dev;
+		if (dev->scsi_dev != NULL)
+			dev->scsi_dev->was_reset = 0;
+	}
+
+	mutex_unlock(&scst_mutex);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_lun_reset(struct scst_mgmt_cmd *mcmd)
+{
+	int res, rc;
+	struct scst_tgt_dev *tgt_dev = mcmd->mcmd_tgt_dev;
+	struct scst_device *dev = tgt_dev->dev;
+
+	TRACE(TRACE_MGMT, "Resetting LUN %lld (mcmd %p)",
+	      (long long unsigned int)tgt_dev->lun, mcmd);
+
+	mcmd->needs_unblocking = 1;
+
+	spin_lock_bh(&dev->dev_lock);
+	__scst_block_dev(dev);
+	scst_process_reset(dev, mcmd->sess, NULL, mcmd, true);
+	spin_unlock_bh(&dev->dev_lock);
+
+	rc = scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 1);
+	if (rc != SCST_DEV_TM_NOT_COMPLETED)
+		goto out_tm_dbg;
+
+	if (dev->scsi_dev != NULL) {
+		TRACE(TRACE_MGMT, "Resetting host %d bus ",
+		      dev->scsi_dev->host->host_no);
+		rc = scsi_reset_provider(dev->scsi_dev, SCSI_TRY_RESET_DEVICE);
+#if 0
+		if (rc != SUCCESS && mcmd->status == SCST_MGMT_STATUS_SUCCESS)
+			mcmd->status = SCST_MGMT_STATUS_FAILED;
+#else
+		/*
+		 * scsi_reset_provider() returns very weird status, so let's
+		 * always succeed
+		 */
+#endif
+		dev->scsi_dev->was_reset = 0;
+	}
+
+	scst_unblock_aborted_cmds(0);
+
+out_tm_dbg:
+	tm_dbg_task_mgmt(mcmd->mcmd_tgt_dev->dev, "LUN RESET", 0);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_do_nexus_loss_sess(struct scst_mgmt_cmd *mcmd)
+{
+	int i;
+	struct scst_session *sess = mcmd->sess;
+	struct scst_tgt_dev *tgt_dev;
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&sess->sess_tgt_dev_list_hash[i];
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			scst_nexus_loss(tgt_dev,
+				(mcmd->fn != SCST_UNREG_SESS_TM));
+		}
+	}
+	return;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_abort_all_nexus_loss_sess(struct scst_mgmt_cmd *mcmd,
+	int nexus_loss)
+{
+	int res;
+	int i;
+	struct scst_session *sess = mcmd->sess;
+	struct scst_tgt_dev *tgt_dev;
+
+	if (nexus_loss) {
+		TRACE_MGMT_DBG("Nexus loss for sess %p (mcmd %p)",
+			sess, mcmd);
+	} else {
+		TRACE_MGMT_DBG("Aborting all from sess %p (mcmd %p)",
+			sess, mcmd);
+	}
+
+	mutex_lock(&scst_mutex);
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&sess->sess_tgt_dev_list_hash[i];
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			int rc;
+
+			__scst_abort_task_set(mcmd, tgt_dev);
+
+			rc = scst_call_dev_task_mgmt_fn(mcmd, tgt_dev, 0);
+			if (rc < 0 && mcmd->status == SCST_MGMT_STATUS_SUCCESS)
+				mcmd->status = rc;
+
+			tm_dbg_task_mgmt(tgt_dev->dev, "NEXUS LOSS SESS or "
+				"ABORT ALL SESS or UNREG SESS",
+				(mcmd->fn == SCST_UNREG_SESS_TM));
+		}
+	}
+
+	scst_unblock_aborted_cmds(1);
+
+	mutex_unlock(&scst_mutex);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_do_nexus_loss_tgt(struct scst_mgmt_cmd *mcmd)
+{
+	int i;
+	struct scst_tgt *tgt = mcmd->sess->tgt;
+	struct scst_session *sess;
+
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			struct list_head *sess_tgt_dev_list_head =
+				&sess->sess_tgt_dev_list_hash[i];
+			struct scst_tgt_dev *tgt_dev;
+			list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+					sess_tgt_dev_list_entry) {
+				scst_nexus_loss(tgt_dev, true);
+			}
+		}
+	}
+	return;
+}
+
+static int scst_abort_all_nexus_loss_tgt(struct scst_mgmt_cmd *mcmd,
+	int nexus_loss)
+{
+	int res;
+	int i;
+	struct scst_tgt *tgt = mcmd->sess->tgt;
+	struct scst_session *sess;
+
+	if (nexus_loss) {
+		TRACE_MGMT_DBG("I_T Nexus loss (tgt %p, mcmd %p)",
+			tgt, mcmd);
+	} else {
+		TRACE_MGMT_DBG("Aborting all from tgt %p (mcmd %p)",
+			tgt, mcmd);
+	}
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(sess, &tgt->sess_list, sess_list_entry) {
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			struct list_head *sess_tgt_dev_list_head =
+				&sess->sess_tgt_dev_list_hash[i];
+			struct scst_tgt_dev *tgt_dev;
+			list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+					sess_tgt_dev_list_entry) {
+				int rc;
+
+				__scst_abort_task_set(mcmd, tgt_dev);
+
+				if (nexus_loss)
+					scst_nexus_loss(tgt_dev, true);
+
+				if (mcmd->sess == tgt_dev->sess) {
+					rc = scst_call_dev_task_mgmt_fn(
+						mcmd, tgt_dev, 0);
+					if ((rc < 0) &&
+					    (mcmd->status == SCST_MGMT_STATUS_SUCCESS))
+						mcmd->status = rc;
+				}
+
+				tm_dbg_task_mgmt(tgt_dev->dev, "NEXUS LOSS or "
+					"ABORT ALL", 0);
+			}
+		}
+	}
+
+	scst_unblock_aborted_cmds(1);
+
+	mutex_unlock(&scst_mutex);
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+/* Returns 0 if the command processing should be continued, <0 otherwise */
+static int scst_mgmt_cmd_exec(struct scst_mgmt_cmd *mcmd)
+{
+	int res = 0;
+
+	mcmd->status = SCST_MGMT_STATUS_SUCCESS;
+
+	switch (mcmd->fn) {
+	case SCST_ABORT_TASK_SET:
+		res = scst_abort_task_set(mcmd);
+		break;
+
+	case SCST_CLEAR_TASK_SET:
+		if (mcmd->mcmd_tgt_dev->dev->tst ==
+				SCST_CONTR_MODE_SEP_TASK_SETS)
+			res = scst_abort_task_set(mcmd);
+		else
+			res = scst_clear_task_set(mcmd);
+		break;
+
+	case SCST_LUN_RESET:
+		res = scst_lun_reset(mcmd);
+		break;
+
+	case SCST_TARGET_RESET:
+		res = scst_target_reset(mcmd);
+		break;
+
+	case SCST_ABORT_ALL_TASKS_SESS:
+		res = scst_abort_all_nexus_loss_sess(mcmd, 0);
+		break;
+
+	case SCST_NEXUS_LOSS_SESS:
+	case SCST_UNREG_SESS_TM:
+		res = scst_abort_all_nexus_loss_sess(mcmd, 1);
+		break;
+
+	case SCST_ABORT_ALL_TASKS:
+		res = scst_abort_all_nexus_loss_tgt(mcmd, 0);
+		break;
+
+	case SCST_NEXUS_LOSS:
+		res = scst_abort_all_nexus_loss_tgt(mcmd, 1);
+		break;
+
+	case SCST_CLEAR_ACA:
+		if (scst_call_dev_task_mgmt_fn(mcmd, mcmd->mcmd_tgt_dev, 1) ==
+				SCST_DEV_TM_NOT_COMPLETED) {
+			mcmd->status = SCST_MGMT_STATUS_FN_NOT_SUPPORTED;
+			/* Nothing to do (yet) */
+		}
+		goto out_done;
+
+	default:
+		PRINT_ERROR("Unknown task management function %d", mcmd->fn);
+		mcmd->status = SCST_MGMT_STATUS_REJECTED;
+		goto out_done;
+	}
+
+out:
+	return res;
+
+out_done:
+	mcmd->state = SCST_MCMD_STATE_DONE;
+	goto out;
+}
+
+static void scst_call_task_mgmt_affected_cmds_done(struct scst_mgmt_cmd *mcmd)
+{
+	struct scst_session *sess = mcmd->sess;
+
+	if ((sess->tgt->tgtt->task_mgmt_affected_cmds_done != NULL) &&
+	    (mcmd->fn != SCST_UNREG_SESS_TM)) {
+		TRACE_DBG("Calling target %s task_mgmt_affected_cmds_done(%p)",
+			sess->tgt->tgtt->name, sess);
+		sess->tgt->tgtt->task_mgmt_affected_cmds_done(mcmd);
+		TRACE_MGMT_DBG("Target's %s task_mgmt_affected_cmds_done() "
+			"returned", sess->tgt->tgtt->name);
+	}
+	return;
+}
+
+static int scst_mgmt_affected_cmds_done(struct scst_mgmt_cmd *mcmd)
+{
+	int res;
+
+	mutex_lock(&scst_mutex);
+
+	switch (mcmd->fn) {
+	case SCST_NEXUS_LOSS_SESS:
+	case SCST_UNREG_SESS_TM:
+		scst_do_nexus_loss_sess(mcmd);
+		break;
+
+	case SCST_NEXUS_LOSS:
+		scst_do_nexus_loss_tgt(mcmd);
+		break;
+	}
+
+	mutex_unlock(&scst_mutex);
+
+	scst_call_task_mgmt_affected_cmds_done(mcmd);
+
+	mcmd->affected_cmds_done_called = 1;
+
+	res = scst_set_mcmd_next_state(mcmd);
+	return res;
+}
+
+static void scst_mgmt_cmd_send_done(struct scst_mgmt_cmd *mcmd)
+{
+	struct scst_device *dev;
+	struct scst_session *sess = mcmd->sess;
+
+	mcmd->state = SCST_MCMD_STATE_FINISHED;
+	if (scst_is_strict_mgmt_fn(mcmd->fn) && (mcmd->completed_cmd_count > 0))
+		mcmd->status = SCST_MGMT_STATUS_TASK_NOT_EXIST;
+
+	TRACE(TRACE_MINOR_AND_MGMT_DBG, "TM command fn %d finished, "
+		"status %x", mcmd->fn, mcmd->status);
+
+	if (!mcmd->affected_cmds_done_called) {
+		/* It might happen in case of errors */
+		scst_call_task_mgmt_affected_cmds_done(mcmd);
+	}
+
+	if ((sess->tgt->tgtt->task_mgmt_fn_done != NULL) &&
+	    (mcmd->fn != SCST_UNREG_SESS_TM)) {
+		TRACE_DBG("Calling target %s task_mgmt_fn_done(%p)",
+			sess->tgt->tgtt->name, sess);
+		sess->tgt->tgtt->task_mgmt_fn_done(mcmd);
+		TRACE_MGMT_DBG("Target's %s task_mgmt_fn_done() "
+			"returned", sess->tgt->tgtt->name);
+	}
+
+	if (mcmd->needs_unblocking) {
+		switch (mcmd->fn) {
+		case SCST_LUN_RESET:
+		case SCST_CLEAR_TASK_SET:
+			scst_unblock_dev(mcmd->mcmd_tgt_dev->dev);
+			break;
+
+		case SCST_TARGET_RESET:
+		{
+			struct scst_acg *acg = mcmd->sess->acg;
+			struct scst_acg_dev *acg_dev;
+
+			mutex_lock(&scst_mutex);
+			list_for_each_entry(acg_dev, &acg->acg_dev_list,
+					acg_dev_list_entry) {
+				dev = acg_dev->dev;
+				scst_unblock_dev(dev);
+			}
+			mutex_unlock(&scst_mutex);
+			break;
+		}
+
+		default:
+			BUG();
+			break;
+		}
+	}
+
+	mcmd->tgt_priv = NULL;
+	return;
+}
+
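+/*
+ * Rough mcmd state flow, as implemented by the handlers above:
+ * INIT -> READY -> (EXECUTING, while aborted/affected commands finish) ->
+ * POST_AFFECTED_CMDS_DONE -> DONE -> FINISHED; error paths go directly
+ * to DONE.
+ */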
+/* Returns >0 if the cmd should be requeued */
+static int scst_process_mgmt_cmd(struct scst_mgmt_cmd *mcmd)
+{
+	int res = 0;
+
+	TRACE_DBG("mcmd %p, state %d", mcmd, mcmd->state);
+
+	while (1) {
+		switch (mcmd->state) {
+		case SCST_MCMD_STATE_INIT:
+			res = scst_mgmt_cmd_init(mcmd);
+			if (res)
+				goto out;
+			break;
+
+		case SCST_MCMD_STATE_READY:
+			if (scst_mgmt_cmd_exec(mcmd))
+				goto out;
+			break;
+
+		case SCST_MCMD_STATE_POST_AFFECTED_CMDS_DONE:
+			if (scst_mgmt_affected_cmds_done(mcmd))
+				goto out;
+			break;
+
+		case SCST_MCMD_STATE_DONE:
+			scst_mgmt_cmd_send_done(mcmd);
+			break;
+
+		default:
+			PRINT_ERROR("Unknown state %d of management command",
+				    mcmd->state);
+			res = -1;
+			/* fall through */
+		case SCST_MCMD_STATE_FINISHED:
+			scst_free_mgmt_cmd(mcmd);
+			goto out;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+		case SCST_MCMD_STATE_EXECUTING:
+			BUG();
+#endif
+		}
+	}
+
+out:
+	return res;
+}
+
+static inline int test_mgmt_cmd_list(void)
+{
+	int res = !list_empty(&scst_active_mgmt_cmd_list) ||
+		  unlikely(kthread_should_stop());
+	return res;
+}
+
+int scst_tm_thread(void *arg)
+{
+	PRINT_INFO("Task management thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	set_user_nice(current, -10);
+
+	spin_lock_irq(&scst_mcmd_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_mgmt_cmd_list()) {
+			add_wait_queue_exclusive(&scst_mgmt_cmd_list_waitQ,
+						 &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_mgmt_cmd_list())
+					break;
+				spin_unlock_irq(&scst_mcmd_lock);
+				schedule();
+				spin_lock_irq(&scst_mcmd_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&scst_mgmt_cmd_list_waitQ, &wait);
+		}
+
+		while (!list_empty(&scst_active_mgmt_cmd_list)) {
+			int rc;
+			struct scst_mgmt_cmd *mcmd;
+			mcmd = list_entry(scst_active_mgmt_cmd_list.next,
+					  typeof(*mcmd), mgmt_cmd_list_entry);
+			TRACE_MGMT_DBG("Deleting mgmt cmd %p from active cmd "
+				"list", mcmd);
+			list_del(&mcmd->mgmt_cmd_list_entry);
+			spin_unlock_irq(&scst_mcmd_lock);
+			rc = scst_process_mgmt_cmd(mcmd);
+			spin_lock_irq(&scst_mcmd_lock);
+			if (rc > 0) {
+				if (test_bit(SCST_FLAG_SUSPENDED, &scst_flags) &&
+				    !test_bit(SCST_FLAG_SUSPENDING,
+						&scst_flags)) {
+					TRACE_MGMT_DBG("Adding mgmt cmd %p to "
+						"head of delayed mgmt cmd list",
+						mcmd);
+					list_add(&mcmd->mgmt_cmd_list_entry,
+						&scst_delayed_mgmt_cmd_list);
+				} else {
+					TRACE_MGMT_DBG("Adding mgmt cmd %p to "
+						"head of active mgmt cmd list",
+						mcmd);
+					list_add(&mcmd->mgmt_cmd_list_entry,
+					       &scst_active_mgmt_cmd_list);
+				}
+			}
+		}
+	}
+	spin_unlock_irq(&scst_mcmd_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be in
+	 * the module unload path, so scst_active_mgmt_cmd_list must be empty.
+	 */
+	BUG_ON(!list_empty(&scst_active_mgmt_cmd_list));
+
+	PRINT_INFO("Task management thread PID %d finished", current->pid);
+	return 0;
+}
+
+static struct scst_mgmt_cmd *scst_pre_rx_mgmt_cmd(struct scst_session *sess,
+	int fn, int atomic, void *tgt_priv)
+{
+	struct scst_mgmt_cmd *mcmd = NULL;
+
+	if (unlikely(sess->tgt->tgtt->task_mgmt_fn_done == NULL)) {
+		PRINT_ERROR("New mgmt cmd, but task_mgmt_fn_done() is NULL "
+			    "(target %s)", sess->tgt->tgtt->name);
+		goto out;
+	}
+
+	mcmd = scst_alloc_mgmt_cmd(atomic ? GFP_ATOMIC : GFP_KERNEL);
+	if (mcmd == NULL) {
+		PRINT_CRIT_ERROR("Lost TM fn %d, initiator %s", fn,
+			sess->initiator_name);
+		goto out;
+	}
+
+	mcmd->sess = sess;
+	mcmd->fn = fn;
+	mcmd->state = SCST_MCMD_STATE_INIT;
+	mcmd->tgt_priv = tgt_priv;
+
+out:
+	return mcmd;
+}
+
+static int scst_post_rx_mgmt_cmd(struct scst_session *sess,
+	struct scst_mgmt_cmd *mcmd)
+{
+	unsigned long flags;
+	int res = 0;
+
+	scst_sess_get(sess);
+
+	if (unlikely(sess->shut_phase != SCST_SESS_SPH_READY)) {
+		PRINT_CRIT_ERROR("New mgmt cmd while shutting down the "
+			"session %p shut_phase %ld", sess, sess->shut_phase);
+		BUG();
+	}
+
+	local_irq_save(flags);
+
+	spin_lock(&sess->sess_list_lock);
+	atomic_inc(&sess->sess_cmd_count);
+
+	if (unlikely(sess->init_phase != SCST_SESS_IPH_READY)) {
+		switch (sess->init_phase) {
+		case SCST_SESS_IPH_INITING:
+			TRACE_DBG("Adding mcmd %p to init deferred mcmd list",
+				mcmd);
+			list_add_tail(&mcmd->mgmt_cmd_list_entry,
+				&sess->init_deferred_mcmd_list);
+			goto out_unlock;
+		case SCST_SESS_IPH_SUCCESS:
+			break;
+		case SCST_SESS_IPH_FAILED:
+			res = -1;
+			goto out_unlock;
+		default:
+			BUG();
+		}
+	}
+
+	spin_unlock(&sess->sess_list_lock);
+
+	TRACE_MGMT_DBG("Adding mgmt cmd %p to active mgmt cmd list", mcmd);
+	spin_lock(&scst_mcmd_lock);
+	list_add_tail(&mcmd->mgmt_cmd_list_entry, &scst_active_mgmt_cmd_list);
+	spin_unlock(&scst_mcmd_lock);
+
+	local_irq_restore(flags);
+
+	wake_up(&scst_mgmt_cmd_list_waitQ);
+
+out:
+	return res;
+
+out_unlock:
+	spin_unlock(&sess->sess_list_lock);
+	local_irq_restore(flags);
+	goto out;
+}
+
+/**
+ * scst_rx_mgmt_fn() - create new management command and send it for execution
+ *
+ * Description:
+ *    Creates a new management command and sends it for execution.
+ *
+ *    Returns 0 on success, an error code otherwise.
+ *
+ *    Must not be called in parallel with scst_unregister_session() for the
+ *    same sess.
+ */
+int scst_rx_mgmt_fn(struct scst_session *sess,
+	const struct scst_rx_mgmt_params *params)
+{
+	int res = -EFAULT;
+	struct scst_mgmt_cmd *mcmd = NULL;
+
+	switch (params->fn) {
+	case SCST_ABORT_TASK:
+		BUG_ON(!params->tag_set);
+		break;
+	case SCST_TARGET_RESET:
+	case SCST_ABORT_ALL_TASKS:
+	case SCST_NEXUS_LOSS:
+		break;
+	default:
+		BUG_ON(!params->lun_set);
+	}
+
+	mcmd = scst_pre_rx_mgmt_cmd(sess, params->fn, params->atomic,
+		params->tgt_priv);
+	if (mcmd == NULL)
+		goto out;
+
+	if (params->lun_set) {
+		mcmd->lun = scst_unpack_lun(params->lun, params->lun_len);
+		if (mcmd->lun == NO_SUCH_LUN)
+			goto out_free;
+		mcmd->lun_set = 1;
+	}
+
+	if (params->tag_set)
+		mcmd->tag = params->tag;
+
+	mcmd->cmd_sn_set = params->cmd_sn_set;
+	mcmd->cmd_sn = params->cmd_sn;
+
+	TRACE(TRACE_MGMT, "TM fn %d", params->fn);
+
+	TRACE_MGMT_DBG("sess=%p, tag_set %d, tag %lld, lun_set %d, "
+		"lun=%lld, cmd_sn_set %d, cmd_sn %d, priv %p", sess,
+		params->tag_set,
+		(long long unsigned int)params->tag,
+		params->lun_set,
+		(long long unsigned int)mcmd->lun,
+		params->cmd_sn_set,
+		params->cmd_sn,
+		params->tgt_priv);
+
+	if (scst_post_rx_mgmt_cmd(sess, mcmd) != 0)
+		goto out_free;
+
+	res = 0;
+
+out:
+	return res;
+
+out_free:
+	scst_free_mgmt_cmd(mcmd);
+	mcmd = NULL;
+	goto out;
+}
+EXPORT_SYMBOL(scst_rx_mgmt_fn);
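+
+/*
+ * Example (illustrative sketch, not compiled): a target driver receiving
+ * an ABORT TASK from the wire could call scst_rx_mgmt_fn() like this.
+ * The variables sess, tag_from_wire and my_tm_ctx are hypothetical; the
+ * params fields are the ones handled above.
+ *
+ *	struct scst_rx_mgmt_params params;
+ *
+ *	memset(&params, 0, sizeof(params));
+ *	params.fn = SCST_ABORT_TASK;
+ *	params.tag = tag_from_wire;
+ *	params.tag_set = 1;
+ *	params.atomic = SCST_ATOMIC;	// called from IRQ context here
+ *	params.tgt_priv = my_tm_ctx;
+ *	if (scst_rx_mgmt_fn(sess, &params) != 0)
+ *		... report TM function rejection to the initiator ...
+ */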
+
+/*
+ * Returns true if string "string" matches pattern "wild", false otherwise.
+ * Pattern is a regular DOS-type pattern containing '*' and '?' symbols:
+ * '*' matches any sequence of symbols, '?' matches any single symbol.
+ *
+ * For instance:
+ * if (wildcmp("bl?h.*", "blah.jpg")) {
+ *   // match
+ *  } else {
+ *   // no match
+ *  }
+ *
+ * Written by Jack Handy - jakkhandy@hotmail.com
+ * Taken by Gennadiy Nerubayev <parakie@gmail.com> from
+ * http://www.codeproject.com/KB/string/wildcmp.aspx. No license attached
+ * to it, and it's posted on a free site; assumed to be free for use.
+ */
+static bool wildcmp(const char *wild, const char *string)
+{
+	const char *cp = NULL, *mp = NULL;
+
+	while ((*string) && (*wild != '*')) {
+		if ((*wild != *string) && (*wild != '?'))
+			return false;
+
+		wild++;
+		string++;
+	}
+
+	while (*string) {
+		if (*wild == '*') {
+			if (!*++wild)
+				return true;
+
+			mp = wild;
+			cp = string+1;
+		} else if ((*wild == *string) || (*wild == '?')) {
+			wild++;
+			string++;
+		} else {
+			wild = mp;
+			string = cp++;
+		}
+	}
+
+	while (*wild == '*')
+		wild++;
+
+	return !*wild;
+}
+
+/* scst_mutex supposed to be held */
+static struct scst_acg *scst_find_tgt_acg_by_name_wild(struct scst_tgt *tgt,
+	const char *initiator_name)
+{
+	struct scst_acg *acg, *res = NULL;
+	struct scst_acn *n;
+
+	if (initiator_name == NULL)
+		goto out;
+
+	list_for_each_entry(acg, &tgt->tgt_acg_list, acg_list_entry) {
+		list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+			if (wildcmp(n->name, initiator_name)) {
+				TRACE_DBG("Access control group %s found",
+					acg->acg_name);
+				res = acg;
+				goto out;
+			}
+		}
+	}
+
+out:
+	return res;
+}
+
+/* Must be called under scst_mutex */
+struct scst_acg *scst_find_acg(const struct scst_session *sess)
+{
+	struct scst_acg *acg = NULL;
+
+	acg = scst_find_tgt_acg_by_name_wild(sess->tgt, sess->initiator_name);
+	if (acg == NULL)
+		acg = sess->tgt->default_acg;
+	return acg;
+}
+
+static int scst_init_session(struct scst_session *sess)
+{
+	int res = 0, rc;
+	struct scst_cmd *cmd;
+	struct scst_mgmt_cmd *mcmd, *tm;
+	int mwake = 0;
+
+	mutex_lock(&scst_mutex);
+
+	sess->acg = scst_find_acg(sess);
+
+	PRINT_INFO("Using security group \"%s\" for initiator \"%s\"",
+		sess->acg->acg_name, sess->initiator_name);
+
+	list_add_tail(&sess->acg_sess_list_entry, &sess->acg->acg_sess_list);
+
+	TRACE_DBG("Adding sess %p to tgt->sess_list", sess);
+	list_add_tail(&sess->sess_list_entry, &sess->tgt->sess_list);
+
+	res = scst_sess_alloc_tgt_devs(sess);
+
+	/* Let's always create session's sysfs to simplify error recovery */
+	rc = scst_create_sess_sysfs(sess);
+	if (res == 0)
+		res = rc;
+
+	mutex_unlock(&scst_mutex);
+
+	if (sess->init_result_fn) {
+		TRACE_DBG("Calling init_result_fn(%p)", sess);
+		sess->init_result_fn(sess, sess->reg_sess_data, res);
+		TRACE_DBG("%s", "init_result_fn() returned");
+	}
+
+	spin_lock_irq(&sess->sess_list_lock);
+
+	if (res == 0)
+		sess->init_phase = SCST_SESS_IPH_SUCCESS;
+	else
+		sess->init_phase = SCST_SESS_IPH_FAILED;
+
+restart:
+	list_for_each_entry(cmd, &sess->init_deferred_cmd_list,
+				cmd_list_entry) {
+		TRACE_DBG("Deleting cmd %p from init deferred cmd list", cmd);
+		list_del(&cmd->cmd_list_entry);
+		atomic_dec(&sess->sess_cmd_count);
+		spin_unlock_irq(&sess->sess_list_lock);
+		scst_cmd_init_done(cmd, SCST_CONTEXT_THREAD);
+		spin_lock_irq(&sess->sess_list_lock);
+		goto restart;
+	}
+
+	spin_lock(&scst_mcmd_lock);
+	list_for_each_entry_safe(mcmd, tm, &sess->init_deferred_mcmd_list,
+				mgmt_cmd_list_entry) {
+		TRACE_DBG("Moving mgmt command %p from init deferred mcmd list",
+			mcmd);
+		list_move_tail(&mcmd->mgmt_cmd_list_entry,
+			&scst_active_mgmt_cmd_list);
+		mwake = 1;
+	}
+
+	spin_unlock(&scst_mcmd_lock);
+	/*
+	 * In case of an error at this point, the calling target driver is
+	 * supposed to have already initiated this sess's unregistration.
+	 */
+	sess->init_phase = SCST_SESS_IPH_READY;
+	spin_unlock_irq(&sess->sess_list_lock);
+
+	if (mwake)
+		wake_up(&scst_mgmt_cmd_list_waitQ);
+
+	scst_sess_put(sess);
+	return res;
+}
+
+/**
+ * scst_register_session() - register session
+ * @tgt:	target
+ * @atomic:	true if the function is called in atomic context. If false,
+ *		 this function will block until the session registration is
+ *		 completed.
+ * @initiator_name: remote initiator's name, any NULL-terminated string,
+ *		    e.g. an iSCSI name, which is used as the key to find the
+ *		    appropriate access control group. Can be NULL, in which
+ *		    case the target's default LUNs are used.
+ * @data:	any target driver supplied data
+ * @result_fn:	pointer to the function that will be asynchronously called
+ *		 when session initialization finishes.
+ *		 Can be NULL. Parameters:
+ *		    - sess - session
+ *		    - data - data supplied by the target driver to
+ *			     scst_register_session()
+ *		    - result - session initialization result, 0 on success or
+ *			      an appropriate error code otherwise
+ *
+ * Description:
+ *    Registers a new session. Returns the new session on success or NULL
+ *    otherwise.
+ *
+ *    Note: session creation and initialization is a complex task that
+ *    requires the ability to sleep, so it can't be fully done in
+ *    interrupt context. Therefore, if scst_register_session() is called
+ *    from atomic context, the "bottom half" of the initialization will be
+ *    done in SCST thread context. In this case scst_register_session()
+ *    will return a not yet completely initialized session, but the target
+ *    driver may already supply commands to this session via scst_rx_cmd().
+ *    Processing of those commands will be delayed inside SCST until the
+ *    session initialization is finished, then restarted. The target driver
+ *    will be notified about the completion of the session initialization
+ *    via result_fn(). On success the target driver need not do anything,
+ *    but if the initialization fails, the target driver must ensure that
+ *    no new commands are or will be sent to SCST after result_fn()
+ *    returns. All commands already sent to SCST for the failed session
+ *    will be returned via xmit_response() with BUSY status. In case of
+ *    failure the driver shall call scst_unregister_session() inside
+ *    result_fn(); it will NOT be called automatically.
+ */
+struct scst_session *scst_register_session(struct scst_tgt *tgt, int atomic,
+	const char *initiator_name, void *data,
+	void (*result_fn) (struct scst_session *sess, void *data, int result))
+{
+	struct scst_session *sess;
+	int res;
+	unsigned long flags;
+
+	sess = scst_alloc_session(tgt, atomic ? GFP_ATOMIC : GFP_KERNEL,
+		initiator_name);
+	if (sess == NULL)
+		goto out;
+
+	scst_sess_get(sess); /* one for registered session */
+	scst_sess_get(sess); /* one held until sess is inited */
+
+	if (atomic) {
+		sess->reg_sess_data = data;
+		sess->init_result_fn = result_fn;
+		spin_lock_irqsave(&scst_mgmt_lock, flags);
+		TRACE_DBG("Adding sess %p to scst_sess_init_list", sess);
+		list_add_tail(&sess->sess_init_list_entry,
+			      &scst_sess_init_list);
+		spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+		wake_up(&scst_mgmt_waitQ);
+	} else {
+		res = scst_init_session(sess);
+		if (res != 0)
+			goto out_free;
+	}
+
+out:
+	return sess;
+
+out_free:
+	scst_free_session(sess);
+	sess = NULL;
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_register_session);
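+
+/*
+ * Example (illustrative sketch, not compiled): atomic registration from a
+ * connection handler. conn, my_conn_data and my_sess_init_done() are
+ * hypothetical driver entities; the failure handling follows the note
+ * above about calling scst_unregister_session() inside result_fn().
+ *
+ *	static void my_sess_init_done(struct scst_session *sess,
+ *		void *data, int result)
+ *	{
+ *		if (result != 0)
+ *			scst_unregister_session(sess, 0, NULL);
+ *	}
+ *
+ *	sess = scst_register_session(tgt, 1, conn->initiator_name,
+ *			my_conn_data, my_sess_init_done);
+ *	if (sess == NULL)
+ *		... reject the connection ...
+ */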
+
+/**
+ * scst_register_session_simple() - register session (simple version)
+ * @tgt:	target
+ * @initiator_name: remote initiator's name, any NULL-terminated string,
+ *		    e.g. an iSCSI name, which is used as the key to find the
+ *		    appropriate access control group. Can be NULL, in which
+ *		    case the target's default LUNs are used.
+ *
+ * Description:
+ *    Registers a new session. Returns the new session on success or NULL
+ *    otherwise.
+ */
+struct scst_session *scst_register_session_simple(struct scst_tgt *tgt,
+	const char *initiator_name)
+{
+	return scst_register_session(tgt, 0, initiator_name, NULL, NULL);
+}
+EXPORT_SYMBOL(scst_register_session_simple);
+
+/**
+ * scst_unregister_session() - unregister session
+ * @sess:	session to be unregistered
+ * @wait:	if true, instructs to wait until all commands that belong
+ *		to the session and are currently being executed have
+ *		finished. Otherwise, the target driver should be prepared
+ *		to receive xmit_response() for the session's commands after
+ *		scst_unregister_session() returns.
+ * @unreg_done_fn: pointer to the function that will be asynchronously called
+ *		   when the last session's command finishes and
+ *		   the session is about to be completely freed. Can be NULL.
+ *		   Parameter:
+ *			- sess - session
+ *
+ * Unregisters session.
+ *
+ * Notes:
+ * - All outstanding commands will be finished regularly. After
+ *   scst_unregister_session() has returned, no new commands must be sent
+ *   to SCST via scst_rx_cmd().
+ *
+ * - The caller must ensure that no scst_rx_cmd() or scst_rx_mgmt_fn_*() is
+ *   called in parallel with scst_unregister_session().
+ *
+ * - Can be called before result_fn() of scst_register_session() has been
+ *   called, i.e. during the session registration/initialization.
+ *
+ * - It is highly recommended to call scst_unregister_session() as soon as
+ *   it becomes clear that the session will be unregistered, rather than
+ *   waiting until all related commands have finished. This function
+ *   provides the wait functionality, but it also starts recovering stuck
+ *   commands, if there are any. Otherwise, your target driver could wait
+ *   for those commands forever.
+ */
+void scst_unregister_session(struct scst_session *sess, int wait,
+	void (*unreg_done_fn) (struct scst_session *sess))
+{
+	unsigned long flags;
+	DECLARE_COMPLETION_ONSTACK(c);
+	int rc, lun;
+
+	TRACE_MGMT_DBG("Unregistering session %p (wait %d)", sess, wait);
+
+	sess->unreg_done_fn = unreg_done_fn;
+
+	/* Abort all outstanding commands and clear reservation, if necessary */
+	lun = 0;
+	rc = scst_rx_mgmt_fn_lun(sess, SCST_UNREG_SESS_TM,
+		(uint8_t *)&lun, sizeof(lun), SCST_ATOMIC, NULL);
+	if (rc != 0) {
+		PRINT_ERROR("SCST_UNREG_SESS_TM failed %d (sess %p)",
+			rc, sess);
+	}
+
+	sess->shut_phase = SCST_SESS_SPH_SHUTDOWN;
+
+	spin_lock_irqsave(&scst_mgmt_lock, flags);
+
+	if (wait)
+		sess->shutdown_compl = &c;
+
+	spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+
+	scst_sess_put(sess);
+
+	if (wait) {
+		TRACE_DBG("Waiting for session %p to complete", sess);
+		wait_for_completion(&c);
+	}
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_unregister_session);
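+
+/*
+ * Example (illustrative sketch, not compiled): a typical non-blocking
+ * teardown path. my_conn_release() is a hypothetical callback that frees
+ * the driver's per-session resources once SCST is done with the session;
+ * scst_sess_get_tgt_priv() is assumed from SCST's session API.
+ *
+ *	static void my_unreg_done(struct scst_session *sess)
+ *	{
+ *		my_conn_release(scst_sess_get_tgt_priv(sess));
+ *	}
+ *
+ *	scst_unregister_session(sess, 0, my_unreg_done);
+ */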
+
+/**
+ * scst_unregister_session_simple() - unregister session, simple version
+ * @sess:	session to be unregistered
+ *
+ * Unregisters session.
+ *
+ * See notes for scst_unregister_session() above.
+ */
+void scst_unregister_session_simple(struct scst_session *sess)
+{
+	scst_unregister_session(sess, 1, NULL);
+	return;
+}
+EXPORT_SYMBOL(scst_unregister_session_simple);
+
+static inline int test_mgmt_list(void)
+{
+	int res = !list_empty(&scst_sess_init_list) ||
+		  !list_empty(&scst_sess_shut_list) ||
+		  unlikely(kthread_should_stop());
+	return res;
+}
+
+int scst_global_mgmt_thread(void *arg)
+{
+	struct scst_session *sess;
+
+	PRINT_INFO("Management thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	set_user_nice(current, -10);
+
+	spin_lock_irq(&scst_mgmt_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_mgmt_list()) {
+			add_wait_queue_exclusive(&scst_mgmt_waitQ, &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_mgmt_list())
+					break;
+				spin_unlock_irq(&scst_mgmt_lock);
+				schedule();
+				spin_lock_irq(&scst_mgmt_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&scst_mgmt_waitQ, &wait);
+		}
+
+		while (!list_empty(&scst_sess_init_list)) {
+			sess = list_entry(scst_sess_init_list.next,
+				typeof(*sess), sess_init_list_entry);
+			TRACE_DBG("Removing sess %p from scst_sess_init_list",
+				sess);
+			list_del(&sess->sess_init_list_entry);
+			spin_unlock_irq(&scst_mgmt_lock);
+
+			if (sess->init_phase == SCST_SESS_IPH_INITING)
+				scst_init_session(sess);
+			else {
+				PRINT_CRIT_ERROR("session %p is in "
+					"scst_sess_init_list, but in unknown "
+					"init phase %x", sess,
+					sess->init_phase);
+				BUG();
+			}
+
+			spin_lock_irq(&scst_mgmt_lock);
+		}
+
+		while (!list_empty(&scst_sess_shut_list)) {
+			sess = list_entry(scst_sess_shut_list.next,
+				typeof(*sess), sess_shut_list_entry);
+			TRACE_DBG("Removing sess %p from scst_sess_shut_list",
+				sess);
+			list_del(&sess->sess_shut_list_entry);
+			spin_unlock_irq(&scst_mgmt_lock);
+
+			switch (sess->shut_phase) {
+			case SCST_SESS_SPH_SHUTDOWN:
+				BUG_ON(atomic_read(&sess->refcnt) != 0);
+				scst_free_session_callback(sess);
+				break;
+			default:
+				PRINT_CRIT_ERROR("session %p is in "
+					"scst_sess_shut_list, but in unknown "
+					"shut phase %lx", sess,
+					sess->shut_phase);
+				BUG();
+				break;
+			}
+
+			spin_lock_irq(&scst_mgmt_lock);
+		}
+	}
+	spin_unlock_irq(&scst_mgmt_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be
+	 * on the module unload, so both lists must be empty.
+	 */
+	BUG_ON(!list_empty(&scst_sess_init_list));
+	BUG_ON(!list_empty(&scst_sess_shut_list));
+
+	PRINT_INFO("Management thread PID %d finished", current->pid);
+	return 0;
+}
+
+/* Called under sess->sess_list_lock */
+static struct scst_cmd *__scst_find_cmd_by_tag(struct scst_session *sess,
+	uint64_t tag, bool to_abort)
+{
+	struct scst_cmd *cmd, *res = NULL;
+
+	/* ToDo: hash list */
+
+	TRACE_DBG("%s (sess=%p, tag=%llu)", "Searching in sess cmd list",
+		  sess, (long long unsigned int)tag);
+
+	list_for_each_entry(cmd, &sess->sess_cmd_list,
+			sess_cmd_list_entry) {
+		if (cmd->tag == tag) {
+			/*
+			 * We must not consider done commands, because they
+			 * have already been submitted for transmission.
+			 * Otherwise we can have a race: if for some reason
+			 * a cmd's release is delayed after transmission and
+			 * the initiator sends a cmd with the same tag, the
+			 * wrong cmd could be returned.
+			 */
+			if (cmd->done) {
+				if (to_abort) {
+					/*
+					 * We should return the latest not
+					 * aborted cmd with this tag.
+					 */
+					if (res == NULL)
+						res = cmd;
+					else {
+						if (test_bit(SCST_CMD_ABORTED,
+								&res->cmd_flags)) {
+							res = cmd;
+						} else if (!test_bit(SCST_CMD_ABORTED,
+								&cmd->cmd_flags))
+							res = cmd;
+					}
+				}
+				continue;
+			} else {
+				res = cmd;
+				break;
+			}
+		}
+	}
+	return res;
+}
+
+/**
+ * scst_find_cmd() - find command by custom comparison function
+ *
+ * Finds a command based on user supplied data and a comparison
+ * callback function that should return true if the command is found.
+ * Returns the command on success or NULL otherwise.
+ */
+struct scst_cmd *scst_find_cmd(struct scst_session *sess, void *data,
+			       int (*cmp_fn) (struct scst_cmd *cmd,
+					      void *data))
+{
+	struct scst_cmd *cmd = NULL;
+	unsigned long flags = 0;
+
+	if (cmp_fn == NULL)
+		goto out;
+
+	spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+	TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+	list_for_each_entry(cmd, &sess->sess_cmd_list, sess_cmd_list_entry) {
+		/*
+		 * We must not consider done commands, because they have
+		 * already been submitted for transmission. Otherwise we can
+		 * have a race: if for some reason a cmd's release is delayed
+		 * after transmission and the initiator sends a cmd with the
+		 * same tag, the wrong cmd could be returned.
+		 */
+		if (cmd->done)
+			continue;
+		if (cmp_fn(cmd, data))
+			goto out_unlock;
+	}
+
+	cmd = NULL;
+
+out_unlock:
+	spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+
+out:
+	return cmd;
+}
+EXPORT_SYMBOL(scst_find_cmd);
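+
+/*
+ * Example (illustrative sketch, not compiled): a comparison callback that
+ * finds the command whose target-private data matches a given pointer.
+ * my_request is hypothetical; scst_cmd_get_tgt_priv() is assumed from
+ * SCST's per-cmd private-data API.
+ *
+ *	static int my_cmp_fn(struct scst_cmd *cmd, void *data)
+ *	{
+ *		return scst_cmd_get_tgt_priv(cmd) == data;
+ *	}
+ *
+ *	cmd = scst_find_cmd(sess, my_request, my_cmp_fn);
+ */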
+
+/**
+ * scst_find_cmd_by_tag() - find command by tag
+ *
+ * Finds a command based on the supplied tag, comparing it with the one
+ * previously set by scst_cmd_set_tag(). Returns the found command on
+ * success or NULL otherwise.
+ */
+struct scst_cmd *scst_find_cmd_by_tag(struct scst_session *sess,
+	uint64_t tag)
+{
+	unsigned long flags;
+	struct scst_cmd *cmd;
+
+	spin_lock_irqsave(&sess->sess_list_lock, flags);
+	cmd = __scst_find_cmd_by_tag(sess, tag, false);
+	spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+	return cmd;
+}
+EXPORT_SYMBOL(scst_find_cmd_by_tag);


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (3 preceding siblings ...)
  2010-04-13 13:05   ` [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c Vladislav Bolkhovitin
@ 2010-04-13 13:05   ` Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 6/12/1/5] SCST core's private header Vladislav Bolkhovitin
                     ` (4 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:05 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains file scst_lib.c.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_lib.c | 6337 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 6337 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_lib.c linux-2.6.33/drivers/scst/scst_lib.c
--- orig/linux-2.6.33/drivers/scst/scst_lib.c
+++ linux-2.6.33/drivers/scst/scst_lib.c
@@ -0,0 +1,6337 @@
+/*
+ *  scst_lib.c
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/kthread.h>
+#include <linux/cdrom.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+#include <asm/kmap_types.h>
+#include <linux/ctype.h>
+#include <linux/delay.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+struct scsi_io_context {
+	unsigned int full_cdb_used:1;
+	void *data;
+	void (*done)(void *data, char *sense, int result, int resid);
+	char sense[SCST_SENSE_BUFFERSIZE];
+	unsigned char full_cdb[0];
+};
+static struct kmem_cache *scsi_io_context_cache;
+
+/*
+ * get_trans_len_x extracts x bytes from the CDB as the transfer length,
+ * starting at offset off.
+ */
+static int get_trans_len_1(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_1_256(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_2(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_3(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_4(struct scst_cmd *cmd, uint8_t off);
+
+/* for special commands */
+static int get_trans_len_block_limit(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_read_capacity(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_serv_act_in(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_single(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_none(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_read_pos(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_cdb_len_10(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_prevent_allow_medium_removal(struct scst_cmd *cmd,
+	uint8_t off);
+static int get_trans_len_3_read_elem_stat(struct scst_cmd *cmd, uint8_t off);
+static int get_trans_len_start_stop(struct scst_cmd *cmd, uint8_t off);
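+
+/*
+ * For orientation (conceptual sketch; the real definitions follow later
+ * in this file): the simple decoders just read the length field out of
+ * the CDB at the given offset, e.g.
+ *
+ *	static int get_trans_len_1(struct scst_cmd *cmd, uint8_t off)
+ *	{
+ *		cmd->bufflen = (u32)cmd->cdb[off];
+ *		return 0;
+ *	}
+ */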
+
+/*
++=====================================-============-======-
+|  Command name                       | Operation  | Type |
+|                                     |   code     |      |
+|-------------------------------------+------------+------+
+
++=========================================================+
+|Key:  M = command implementation is mandatory.           |
+|      O = command implementation is optional.            |
+|      V = Vendor-specific                                |
+|      R = Reserved                                       |
+|     ' '= DON'T use for this device                      |
++=========================================================+
+*/
+
+#define SCST_CDB_MANDATORY  'M'	/* mandatory */
+#define SCST_CDB_OPTIONAL   'O'	/* optional  */
+#define SCST_CDB_VENDOR     'V'	/* vendor    */
+#define SCST_CDB_RESERVED   'R'	/* reserved  */
+#define SCST_CDB_NOTSUPP    ' '	/* don't use */
+
+struct scst_sdbops {
+	uint8_t ops;		/* SCSI-2 op codes */
+	uint8_t devkey[16];	/* Key for every device type M,O,V,R
+				 * type_disk      devkey[0]
+				 * type_tape      devkey[1]
+				 * type_printer   devkey[2]
+				 * type_processor devkey[3]
+				 * type_worm      devkey[4]
+				 * type_cdrom     devkey[5]
+				 * type_scanner   devkey[6]
+				 * type_mod       devkey[7]
+				 * type_changer   devkey[8]
+				 * type_commdev   devkey[9]
+				 * type_reserv    devkey[A]
+				 * type_reserv    devkey[B]
+				 * type_raid      devkey[C]
+				 * type_enclosure devkey[D]
+				 * type_reserv    devkey[E]
+				 * type_reserv    devkey[F]
+				 */
+	const char *op_name;	/* SCSI-2 op codes full name */
+	uint8_t direction;	/* init   --> target: SCST_DATA_WRITE
+				 * target --> init:   SCST_DATA_READ
+				 */
+	uint16_t flags;		/* opcode --  various flags */
+	uint8_t off;		/* length offset in cdb */
+	int (*get_trans_len)(struct scst_cmd *cmd, uint8_t off)
+		__attribute__ ((aligned));
+}  __attribute__((packed));
+
+static int scst_scsi_op_list[256];
+
+#define FLAG_NONE 0
+
+static const struct scst_sdbops scst_scsi_op_table[] = {
+	/*
+	 *      +-------------------> TYPE_IS_DISK      (0)
+	 *      |
+	 *      |+------------------> TYPE_IS_TAPE      (1)
+	 *      ||
+	 *      || +----------------> TYPE_IS_PROCESSOR (3)
+	 *      || |
+	 *      || | +--------------> TYPE_IS_CDROM     (5)
+	 *      || | |
+	 *      || | | +------------> TYPE_IS_MOD       (7)
+	 *      || | | |
+	 *      || | | |+-----------> TYPE_IS_CHANGER   (8)
+	 *      || | | ||
+	 *      || | | ||   +-------> TYPE_IS_RAID      (C)
+	 *      || | | ||   |
+	 *      || | | ||   |
+	 *      0123456789ABCDEF ---> TYPE_IS_????     */
+
+	/* 6-bytes length CDB */
+	{0x00, "MMMMMMMMMMMMMMMM", "TEST UNIT READY",
+	 /* let's be HQ so we don't look dead under high load */
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|
+			 SCST_REG_RESERVE_ALLOWED,
+	 0, get_trans_len_none},
+	{0x01, " M              ", "REWIND",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x01, "O V OO OO       ", "REZERO UNIT",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x02, "VVVVVV  V       ", "REQUEST BLOCK ADDR",
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT, 0, get_trans_len_none},
+	{0x03, "MMMMMMMMMMMMMMMM", "REQUEST SENSE",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_SKIP_UA|SCST_LOCAL_CMD|
+			 SCST_REG_RESERVE_ALLOWED,
+	 4, get_trans_len_1},
+	{0x04, "M    O O        ", "FORMAT UNIT",
+	 SCST_DATA_WRITE, SCST_LONG_TIMEOUT|SCST_UNKNOWN_LENGTH|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_none},
+	{0x04, "  O             ", "FORMAT",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x05, "VMVVVV  V       ", "READ BLOCK LIMITS",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_REG_RESERVE_ALLOWED,
+	 0, get_trans_len_block_limit},
+	{0x07, "        O       ", "INITIALIZE ELEMENT STATUS",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x07, "OVV O  OV       ", "REASSIGN BLOCKS",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x08, "O               ", "READ(6)",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 4, get_trans_len_1_256},
+	{0x08, " MV OO OV       ", "READ(6)",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+	{0x08, "         M      ", "GET MESSAGE(6)",
+	 SCST_DATA_READ, FLAG_NONE, 2, get_trans_len_3},
+	{0x08, "    O           ", "RECEIVE",
+	 SCST_DATA_READ, FLAG_NONE, 2, get_trans_len_3},
+	{0x0A, "O               ", "WRITE(6)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 4, get_trans_len_1_256},
+	{0x0A, " M  O  OV       ", "WRITE(6)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 2, get_trans_len_3},
+	{0x0A, "  M             ", "PRINT",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x0A, "         M      ", "SEND MESSAGE(6)",
+	 SCST_DATA_WRITE, FLAG_NONE, 2, get_trans_len_3},
+	{0x0A, "    M           ", "SEND(6)",
+	 SCST_DATA_WRITE, FLAG_NONE, 2, get_trans_len_3},
+	{0x0B, "O   OO OV       ", "SEEK(6)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x0B, "                ", "TRACK SELECT",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x0B, "  O             ", "SLEW AND PRINT",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x0C, "VVVVVV  V       ", "SEEK BLOCK",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x0D, "VVVVVV  V       ", "PARTITION",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_none},
+	{0x0F, "VOVVVV  V       ", "READ REVERSE",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+	{0x10, "VM V V          ", "WRITE FILEMARKS",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x10, "  O O           ", "SYNCHRONIZE BUFFER",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x11, "VMVVVV          ", "SPACE",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x12, "MMMMMMMMMMMMMMMM", "INQUIRY",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|SCST_SKIP_UA|
+			 SCST_REG_RESERVE_ALLOWED,
+	 4, get_trans_len_1},
+	{0x13, "VOVVVV          ", "VERIFY(6)",
+	 SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+			 SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+	 2, get_trans_len_3},
+	{0x14, "VOOVVV          ", "RECOVER BUFFERED DATA",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 2, get_trans_len_3},
+	{0x15, "OMOOOOOOOOOOOOOO", "MODE SELECT(6)",
+	 SCST_DATA_WRITE, SCST_LOCAL_CMD, 4, get_trans_len_1},
+	{0x16, "MMMMMMMMMMMMMMMM", "RESERVE",
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD,
+	 0, get_trans_len_none},
+	{0x17, "MMMMMMMMMMMMMMMM", "RELEASE",
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD|SCST_REG_RESERVE_ALLOWED,
+	 0, get_trans_len_none},
+	{0x18, "OOOOOOOO        ", "COPY",
+	 SCST_DATA_WRITE, SCST_LONG_TIMEOUT, 2, get_trans_len_3},
+	{0x19, "VMVVVV          ", "ERASE",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_none},
+	{0x1A, "OMOOOOOOOOOOOOOO", "MODE SENSE(6)",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT, 4, get_trans_len_1},
+	{0x1B, "      O         ", "SCAN",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x1B, " O              ", "LOAD UNLOAD",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x1B, "  O             ", "STOP PRINT",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x1B, "O   OO O    O   ", "START STOP UNIT",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_start_stop},
+	{0x1C, "OOOOOOOOOOOOOOOO", "RECEIVE DIAGNOSTIC RESULTS",
+	 SCST_DATA_READ, FLAG_NONE, 3, get_trans_len_2},
+	{0x1D, "MMMMMMMMMMMMMMMM", "SEND DIAGNOSTIC",
+	 SCST_DATA_WRITE, FLAG_NONE, 4, get_trans_len_1},
+	{0x1E, "OOOOOOOOOOOOOOOO", "PREVENT ALLOW MEDIUM REMOVAL",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0,
+	 get_trans_len_prevent_allow_medium_removal},
+	{0x1F, "            O   ", "PORT STATUS",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+
+	 /* 10-bytes length CDB */
+	{0x23, "V   VV V        ", "READ FORMAT CAPACITY",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x24, "V   VVM         ", "SET WINDOW",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_3},
+	{0x25, "M   MM M        ", "READ CAPACITY",
+	 SCST_DATA_READ, SCST_IMPLICIT_HQ|SCST_REG_RESERVE_ALLOWED,
+	 0, get_trans_len_read_capacity},
+	{0x25, "      O         ", "GET WINDOW",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_3},
+	{0x28, "M   MMMM        ", "READ(10)",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 7, get_trans_len_2},
+	{0x28, "         O      ", "GET MESSAGE(10)",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x29, "V   VV O        ", "READ GENERATION",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+	{0x2A, "O   MO M        ", "WRITE(10)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 7, get_trans_len_2},
+	{0x2A, "         O      ", "SEND MESSAGE(10)",
+	 SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+	{0x2A, "      O         ", "SEND(10)",
+	 SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+	{0x2B, " O              ", "LOCATE",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x2B, "        O       ", "POSITION TO ELEMENT",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x2B, "O   OO O        ", "SEEK(10)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x2C, "V    O O        ", "ERASE(10)",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_none},
+	{0x2D, "V   O  O        ", "READ UPDATED BLOCK",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 0, get_trans_len_single},
+	{0x2E, "O   OO O        ", "WRITE AND VERIFY(10)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 7, get_trans_len_2},
+	{0x2F, "O   OO O        ", "VERIFY(10)",
+	 SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+			 SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+	 7, get_trans_len_2},
+	{0x33, "O   OO O        ", "SET LIMITS(10)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x34, " O              ", "READ POSITION",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT, 7, get_trans_len_read_pos},
+	{0x34, "      O         ", "GET DATA BUFFER STATUS",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x34, "O   OO O        ", "PRE-FETCH",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x35, "O   OO O        ", "SYNCHRONIZE CACHE",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x36, "O   OO O        ", "LOCK UNLOCK CACHE",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x37, "O      O        ", "READ DEFECT DATA(10)",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+	{0x37, "        O       ", "INIT ELEMENT STATUS WRANGE",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x38, "    O  O        ", "MEDIUM SCAN",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_1},
+	{0x39, "OOOOOOOO        ", "COMPARE",
+	 SCST_DATA_WRITE, FLAG_NONE, 3, get_trans_len_3},
+	{0x3A, "OOOOOOOO        ", "COPY AND VERIFY",
+	 SCST_DATA_WRITE, FLAG_NONE, 3, get_trans_len_3},
+	{0x3B, "OOOOOOOOOOOOOOOO", "WRITE BUFFER",
+	 SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 6, get_trans_len_3},
+	{0x3C, "OOOOOOOOOOOOOOOO", "READ BUFFER",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT, 6, get_trans_len_3},
+	{0x3D, "    O  O        ", "UPDATE BLOCK",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED,
+	 0, get_trans_len_single},
+	{0x3E, "O   OO O        ", "READ LONG",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x3F, "O   O  O        ", "WRITE LONG",
+	 SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 7, get_trans_len_2},
+	{0x40, "OOOOOOOOOO      ", "CHANGE DEFINITION",
+	 SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 8, get_trans_len_1},
+	{0x41, "O    O          ", "WRITE SAME",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_single},
+	{0x42, "     O          ", "READ SUB-CHANNEL",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x43, "     O          ", "READ TOC/PMA/ATIP",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x44, " M              ", "REPORT DENSITY SUPPORT",
+	 SCST_DATA_READ, SCST_REG_RESERVE_ALLOWED, 7, get_trans_len_2},
+	{0x44, "     O          ", "READ HEADER",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x45, "     O          ", "PLAY AUDIO(10)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x46, "     O          ", "GET CONFIGURATION",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x47, "     O          ", "PLAY AUDIO MSF",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x48, "     O          ", "PLAY AUDIO TRACK INDEX",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x49, "     O          ", "PLAY TRACK RELATIVE(10)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x4A, "     O          ", "GET EVENT STATUS NOTIFICATION",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x4B, "     O          ", "PAUSE/RESUME",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x4C, "OOOOOOOOOOOOOOOO", "LOG SELECT",
+	 SCST_DATA_WRITE, SCST_SMALL_TIMEOUT, 7, get_trans_len_2},
+	{0x4D, "OOOOOOOOOOOOOOOO", "LOG SENSE",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_REG_RESERVE_ALLOWED,
+	 7, get_trans_len_2},
+	{0x4E, "     O          ", "STOP PLAY/SCAN",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x50, "                ", "XDWRITE",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x51, "     O          ", "READ DISC INFORMATION",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x51, "                ", "XPWRITE",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x52, "     O          ", "READ TRACK INFORMATION",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x53, "     O          ", "RESERVE TRACK",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x54, "     O          ", "SEND OPC INFORMATION",
+	 SCST_DATA_WRITE, FLAG_NONE, 7, get_trans_len_2},
+	{0x55, "OOOOOOOOOOOOOOOO", "MODE SELECT(10)",
+	 SCST_DATA_WRITE, SCST_LOCAL_CMD, 7, get_trans_len_2},
+	{0x56, "OOOOOOOOOOOOOOOO", "RESERVE(10)",
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD,
+	 0, get_trans_len_none},
+	{0x57, "OOOOOOOOOOOOOOOO", "RELEASE(10)",
+	 SCST_DATA_NONE, SCST_SMALL_TIMEOUT|SCST_LOCAL_CMD|SCST_REG_RESERVE_ALLOWED,
+	 0, get_trans_len_none},
+	{0x58, "     O          ", "REPAIR TRACK",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x5A, "OOOOOOOOOOOOOOOO", "MODE SENSE(10)",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT, 7, get_trans_len_2},
+	{0x5B, "     O          ", "CLOSE TRACK/SESSION",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x5C, "     O          ", "READ BUFFER CAPACITY",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_2},
+	{0x5D, "     O          ", "SEND CUE SHEET",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_3},
+	{0x5E, "OOOOO OOOO      ", "PERSISTENT RESERV IN",
+	 SCST_DATA_READ, FLAG_NONE, 5, get_trans_len_4},
+	{0x5F, "OOOOO OOOO      ", "PERSISTENT RESERV OUT",
+	 SCST_DATA_WRITE, FLAG_NONE, 5, get_trans_len_4},
+
+	/* 16-bytes length CDB */
+	{0x80, "O   OO O        ", "XDWRITE EXTENDED",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x80, " M              ", "WRITE FILEMARKS",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0x81, "O   OO O        ", "REBUILD",
+	 SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+	{0x82, "O   OO O        ", "REGENERATE",
+	 SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+	{0x83, "OOOOOOOOOOOOOOOO", "EXTENDED COPY",
+	 SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+	{0x84, "OOOOOOOOOOOOOOOO", "RECEIVE COPY RESULT",
+	 SCST_DATA_WRITE, FLAG_NONE, 10, get_trans_len_4},
+	{0x86, "OOOOOOOOOO      ", "ACCESS CONTROL IN",
+	 SCST_DATA_NONE, SCST_REG_RESERVE_ALLOWED, 0, get_trans_len_none},
+	{0x87, "OOOOOOOOOO      ", "ACCESS CONTROL OUT",
+	 SCST_DATA_NONE, SCST_REG_RESERVE_ALLOWED, 0, get_trans_len_none},
+	{0x88, "M   MMMM        ", "READ(16)",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 10, get_trans_len_4},
+	{0x8A, "O   OO O        ", "WRITE(16)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 10, get_trans_len_4},
+	{0x8C, "OOOOOOOOOO      ", "READ ATTRIBUTE",
+	 SCST_DATA_READ, FLAG_NONE, 10, get_trans_len_4},
+	{0x8D, "OOOOOOOOOO      ", "WRITE ATTRIBUTE",
+	 SCST_DATA_WRITE, SCST_WRITE_MEDIUM, 10, get_trans_len_4},
+	{0x8E, "O   OO O        ", "WRITE AND VERIFY(16)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 10, get_trans_len_4},
+	{0x8F, "O   OO O        ", "VERIFY(16)",
+	 SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+			 SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+	 10, get_trans_len_4},
+	{0x90, "O   OO O        ", "PRE-FETCH(16)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x91, "O   OO O        ", "SYNCHRONIZE CACHE(16)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x91, " M              ", "SPACE(16)",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x92, "O   OO O        ", "LOCK UNLOCK CACHE(16)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0x92, " O              ", "LOCATE(16)",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0x93, "O    O          ", "WRITE SAME(16)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 10, get_trans_len_4},
+	{0x93, " M              ", "ERASE(16)",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT|SCST_WRITE_MEDIUM,
+	 0, get_trans_len_none},
+	{0x9E, "O               ", "SERVICE ACTION IN",
+	 SCST_DATA_READ, FLAG_NONE, 0, get_trans_len_serv_act_in},
+
+	/* 12-bytes length CDB */
+	{0xA0, "VVVVVVVVVV  M   ", "REPORT LUNS",
+	 SCST_DATA_READ, SCST_SMALL_TIMEOUT|SCST_IMPLICIT_HQ|SCST_SKIP_UA|
+			 SCST_FULLY_LOCAL_CMD|SCST_LOCAL_CMD|
+			 SCST_REG_RESERVE_ALLOWED,
+	 6, get_trans_len_4},
+	{0xA1, "     O          ", "BLANK",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0xA3, "     O          ", "SEND KEY",
+	 SCST_DATA_WRITE, FLAG_NONE, 8, get_trans_len_2},
+	{0xA3, "OOOOO OOOO      ", "REPORT DEVICE IDENTIDIER",
+	 SCST_DATA_READ, SCST_REG_RESERVE_ALLOWED, 6, get_trans_len_4},
+	{0xA3, "            M   ", "MAINTENANCE(IN)",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+	{0xA4, "     O          ", "REPORT KEY",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+	{0xA4, "            O   ", "MAINTENANCE(OUT)",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+	{0xA5, "        M       ", "MOVE MEDIUM",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0xA5, "     O          ", "PLAY AUDIO(12)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0xA6, "     O  O       ", "EXCHANGE/LOAD/UNLOAD MEDIUM",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0xA7, "     O          ", "SET READ AHEAD",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0xA8, "         O      ", "GET MESSAGE(12)",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+	{0xA8, "O   OO O        ", "READ(12)",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 6, get_trans_len_4},
+	{0xA9, "     O          ", "PLAY TRACK RELATIVE(12)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0xAA, "O   OO O        ", "WRITE(12)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 6, get_trans_len_4},
+	{0xAA, "         O      ", "SEND MESSAGE(12)",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+	{0xAC, "       O        ", "ERASE(12)",
+	 SCST_DATA_NONE, SCST_WRITE_MEDIUM, 0, get_trans_len_none},
+	{0xAC, "     M          ", "GET PERFORMANCE",
+	 SCST_DATA_READ, SCST_UNKNOWN_LENGTH, 0, get_trans_len_none},
+	{0xAD, "     O          ", "READ DVD STRUCTURE",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+	{0xAE, "O   OO O        ", "WRITE AND VERIFY(12)",
+	 SCST_DATA_WRITE, SCST_TRANSFER_LEN_TYPE_FIXED|SCST_WRITE_MEDIUM,
+	 6, get_trans_len_4},
+	{0xAF, "O   OO O        ", "VERIFY(12)",
+	 SCST_DATA_NONE, SCST_TRANSFER_LEN_TYPE_FIXED|
+			 SCST_VERIFY_BYTCHK_MISMATCH_ALLOWED,
+	 6, get_trans_len_4},
+#if 0 /* No need to support at all */
+	{0xB0, "    OO O        ", "SEARCH DATA HIGH(12)",
+	 SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+	{0xB1, "    OO O        ", "SEARCH DATA EQUAL(12)",
+	 SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+	{0xB2, "    OO O        ", "SEARCH DATA LOW(12)",
+	 SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+#endif
+	{0xB3, "    OO O        ", "SET LIMITS(12)",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0xB5, "        O       ", "REQUEST VOLUME ELEMENT ADDRESS",
+	 SCST_DATA_READ, FLAG_NONE, 9, get_trans_len_1},
+	{0xB6, "        O       ", "SEND VOLUME TAG",
+	 SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_1},
+	{0xB6, "     M         ", "SET STREAMING",
+	 SCST_DATA_WRITE, FLAG_NONE, 9, get_trans_len_2},
+	{0xB7, "       O        ", "READ DEFECT DATA(12)",
+	 SCST_DATA_READ, FLAG_NONE, 9, get_trans_len_1},
+	{0xB8, "        O       ", "READ ELEMENT STATUS",
+	 SCST_DATA_READ, FLAG_NONE, 7, get_trans_len_3_read_elem_stat},
+	{0xB9, "     O          ", "READ CD MSF",
+	 SCST_DATA_READ, SCST_UNKNOWN_LENGTH, 0, get_trans_len_none},
+	{0xBA, "     O          ", "SCAN",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_len_none},
+	{0xBA, "            O   ", "REDUNDANCY GROUP(IN)",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+	{0xBB, "     O          ", "SET SPEED",
+	 SCST_DATA_NONE, FLAG_NONE, 0, get_trans_len_none},
+	{0xBB, "            O   ", "REDUNDANCY GROUP(OUT)",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+	{0xBC, "            O   ", "SPARE(IN)",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+	{0xBD, "     O          ", "MECHANISM STATUS",
+	 SCST_DATA_READ, FLAG_NONE, 8, get_trans_len_2},
+	{0xBD, "            O   ", "SPARE(OUT)",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+	{0xBE, "     O          ", "READ CD",
+	 SCST_DATA_READ, SCST_TRANSFER_LEN_TYPE_FIXED, 6, get_trans_len_3},
+	{0xBE, "            O   ", "VOLUME SET(IN)",
+	 SCST_DATA_READ, FLAG_NONE, 6, get_trans_len_4},
+	{0xBF, "     O          ", "SEND DVD STRUCTUE",
+	 SCST_DATA_WRITE, FLAG_NONE, 8, get_trans_len_2},
+	{0xBF, "            O   ", "VOLUME SET(OUT)",
+	 SCST_DATA_WRITE, FLAG_NONE, 6, get_trans_len_4},
+	{0xE7, "        V       ", "INIT ELEMENT STATUS WRANGE",
+	 SCST_DATA_NONE, SCST_LONG_TIMEOUT, 0, get_trans_cdb_len_10}
+};
+
+#define SCST_CDB_TBL_SIZE	((int)ARRAY_SIZE(scst_scsi_op_table))
+
+static void scst_free_tgt_dev(struct scst_tgt_dev *tgt_dev);
+static void scst_check_internal_sense(struct scst_device *dev, int result,
+	uint8_t *sense, int sense_len);
+static void scst_queue_report_luns_changed_UA(struct scst_session *sess,
+	int flags);
+static void __scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags);
+static void scst_alloc_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags);
+static void scst_free_all_UA(struct scst_tgt_dev *tgt_dev);
+static void scst_release_space(struct scst_cmd *cmd);
+static void scst_unblock_cmds(struct scst_device *dev);
+static void scst_clear_reservation(struct scst_tgt_dev *tgt_dev);
+static struct scst_tgt_dev *scst_alloc_add_tgt_dev(struct scst_session *sess,
+	struct scst_acg_dev *acg_dev);
+static void scst_tgt_retry_timer_fn(unsigned long arg);
+
+#ifdef CONFIG_SCST_DEBUG_TM
+static void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev);
+static void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev);
+#else
+static inline void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev) {}
+static inline void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev) {}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+/**
+ * scst_alloc_sense() - allocate sense buffer for command
+ *
+ * Allocates, if necessary, a sense buffer for the command. Returns 0 on
+ * success and an error code otherwise. Parameter "atomic" should be non-0
+ * if the function is called in atomic context.
+ */
+int scst_alloc_sense(struct scst_cmd *cmd, int atomic)
+{
+	int res = 0;
+	gfp_t gfp_mask = atomic ? GFP_ATOMIC : (GFP_KERNEL|__GFP_NOFAIL);
+
+	if (cmd->sense != NULL)
+		goto memzero;
+
+	cmd->sense = mempool_alloc(scst_sense_mempool, gfp_mask);
+	if (cmd->sense == NULL) {
+		PRINT_CRIT_ERROR("Sense memory allocation failed (op %x). "
+			"The sense data will be lost!!", cmd->cdb[0]);
+		res = -ENOMEM;
+		goto out;
+	}
+
+	cmd->sense_buflen = SCST_SENSE_BUFFERSIZE;
+
+memzero:
+	cmd->sense_valid_len = 0;
+	memset(cmd->sense, 0, cmd->sense_buflen);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_alloc_sense);
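+
+/*
+ * A minimal usage sketch (hypothetical caller, not part of this patch):
+ *
+ *	if (scst_alloc_sense(cmd, 1) == 0)
+ *		cmd->sense_valid_len = scst_set_sense(cmd->sense,
+ *			cmd->sense_buflen, false, HARDWARE_ERROR, 0, 0);
+ */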
+
+/**
+ * scst_alloc_set_sense() - allocate and fill sense buffer for command
+ *
+ * Allocates, if necessary, a sense buffer for the command and copies into
+ * it the data from the supplied sense buffer. Returns 0 on success
+ * and an error code otherwise.
+ */
+int scst_alloc_set_sense(struct scst_cmd *cmd, int atomic,
+	const uint8_t *sense, unsigned int len)
+{
+	int res;
+
+	/*
+	 * We don't check here if the existing sense is valid or not, because
+	 * we assume the caller has done it based on cmd->status.
+	 */
+
+	res = scst_alloc_sense(cmd, atomic);
+	if (res != 0) {
+		PRINT_BUFFER("Lost sense", sense, len);
+		goto out;
+	}
+
+	cmd->sense_valid_len = len;
+	if (cmd->sense_buflen < len) {
+		PRINT_WARNING("Sense truncated (needed %d), shall you increase "
+			"SCST_SENSE_BUFFERSIZE? Op: %x", len, cmd->cdb[0]);
+		cmd->sense_valid_len = cmd->sense_buflen;
+	}
+
+	memcpy(cmd->sense, sense, cmd->sense_valid_len);
+	TRACE_BUFFER("Sense set", cmd->sense, cmd->sense_valid_len);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_alloc_set_sense);
+
+/**
+ * scst_set_cmd_error_status() - set error SCSI status
+ * @cmd:	SCST command
+ * @status:	SCSI status to set
+ *
+ * Description:
+ *    Sets the error SCSI status in the command and prepares it to be returned.
+ *    Returns 0 on success, error code otherwise.
+ */
+int scst_set_cmd_error_status(struct scst_cmd *cmd, int status)
+{
+	int res = 0;
+
+	if (cmd->status != 0) {
+		TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+			cmd->status);
+		res = -EEXIST;
+		goto out;
+	}
+
+	cmd->status = status;
+	cmd->host_status = DID_OK;
+
+	cmd->dbl_ua_orig_resp_data_len = cmd->resp_data_len;
+	cmd->dbl_ua_orig_data_direction = cmd->data_direction;
+
+	cmd->data_direction = SCST_DATA_NONE;
+	cmd->resp_data_len = 0;
+	cmd->is_send_status = 1;
+
+	cmd->completed = 1;
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_set_cmd_error_status);
+
+static int scst_set_lun_not_supported_request_sense(struct scst_cmd *cmd,
+	int key, int asc, int ascq)
+{
+	int res;
+	int sense_len;
+
+	if (cmd->status != 0) {
+		TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+			cmd->status);
+		res = -EEXIST;
+		goto out;
+	}
+
+	if ((cmd->sg != NULL) && SCST_SENSE_VALID(sg_virt(cmd->sg))) {
+		TRACE_MGMT_DBG("cmd %p already has sense set", cmd);
+		res = -EEXIST;
+		goto out;
+	}
+
+	if (cmd->sg == NULL) {
+		if (cmd->bufflen == 0)
+			cmd->bufflen = cmd->cdb[4];
+
+		cmd->sg = scst_alloc(cmd->bufflen, GFP_ATOMIC, &cmd->sg_cnt);
+		if (cmd->sg == NULL) {
+			PRINT_ERROR("Unable to alloc sg for REQUEST SENSE"
+				"(sense %x/%x/%x)", key, asc, ascq);
+			res = 1;
+			goto out;
+		}
+	}
+
+	TRACE_MEM("sg %p alloced for sense for cmd %p (cnt %d, "
+		"len %d)", cmd->sg, cmd, cmd->sg_cnt, cmd->bufflen);
+
+	sense_len = scst_set_sense(sg_virt(cmd->sg),
+		cmd->bufflen, cmd->cdb[1] & 1, key, asc, ascq);
+	scst_set_resp_data_len(cmd, sense_len);
+
+	TRACE_BUFFER("Sense set", sg_virt(cmd->sg), sense_len);
+
+	res = 0;
+	cmd->completed = 1;
+
+out:
+	return res;
+}
+
+static int scst_set_lun_not_supported_inquiry(struct scst_cmd *cmd)
+{
+	int res;
+	uint8_t *buf;
+	int len;
+
+	if (cmd->status != 0) {
+		TRACE_MGMT_DBG("cmd %p already has status %x set", cmd,
+			cmd->status);
+		res = -EEXIST;
+		goto out;
+	}
+
+	if (cmd->sg == NULL) {
+		if (cmd->bufflen == 0)
+			cmd->bufflen = min_t(int, 36, (cmd->cdb[3] << 8) | cmd->cdb[4]);
+
+		cmd->sg = scst_alloc(cmd->bufflen, GFP_ATOMIC, &cmd->sg_cnt);
+		if (cmd->sg == NULL) {
+			PRINT_ERROR("%s", "Unable to alloc sg for INQUIRY "
+				"for not supported LUN");
+			res = 1;
+			goto out;
+		}
+	}
+
+	TRACE_MEM("sg %p alloced INQUIRY for cmd %p (cnt %d, len %d)",
+		cmd->sg, cmd, cmd->sg_cnt, cmd->bufflen);
+
+	buf = sg_virt(cmd->sg);
+	len = min_t(int, 36, cmd->bufflen);
+
+	memset(buf, 0, len);
+	buf[0] = 0x7F; /* Peripheral qualifier 011b, Peripheral device type 1Fh */
+
+	TRACE_BUFFER("INQUIRY for not supported LUN set", buf, len);
+
+	res = 0;
+	cmd->completed = 1;
+
+out:
+	return res;
+}
+
+/**
+ * scst_set_cmd_error() - set error in the command and fill the sense buffer.
+ *
+ * Sets error in the command and fills the sense buffer. Returns 0 on success,
+ * error code otherwise.
+ */
+int scst_set_cmd_error(struct scst_cmd *cmd, int key, int asc, int ascq)
+{
+	int res;
+
+	/*
+	 * We need special handling of LOGICAL UNIT NOT SUPPORTED for
+	 * REQUEST SENSE and INQUIRY.
+	 */
+	if ((key == ILLEGAL_REQUEST) && (asc == 0x25) && (ascq == 0)) {
+		if (cmd->cdb[0] == REQUEST_SENSE)
+			res = scst_set_lun_not_supported_request_sense(cmd,
+				key, asc, ascq);
+		else if (cmd->cdb[0] == INQUIRY)
+			res = scst_set_lun_not_supported_inquiry(cmd);
+		else
+			goto do_sense;
+
+		if (res > 0)
+			goto do_sense;
+		else
+			goto out;
+	}
+
+do_sense:
+	res = scst_set_cmd_error_status(cmd, SAM_STAT_CHECK_CONDITION);
+	if (res != 0)
+		goto out;
+
+	res = scst_alloc_sense(cmd, 1);
+	if (res != 0) {
+		PRINT_ERROR("Lost sense data (key %x, asc %x, ascq %x)",
+			key, asc, ascq);
+		goto out;
+	}
+
+	cmd->sense_valid_len = scst_set_sense(cmd->sense, cmd->sense_buflen,
+		scst_get_cmd_dev_d_sense(cmd), key, asc, ascq);
+	TRACE_BUFFER("Sense set", cmd->sense, cmd->sense_valid_len);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_set_cmd_error);
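+
+/*
+ * For example (a sketch, not taken from this patch), a device handler
+ * failing a command over an invalid CDB field could call:
+ *
+ *	scst_set_cmd_error(cmd, ILLEGAL_REQUEST, 0x24, 0);
+ *
+ * where 0x24/0 is the INVALID FIELD IN CDB additional sense code.
+ */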
+
+/**
+ * scst_set_sense() - set sense from KEY/ASC/ASCQ numbers
+ *
+ * Sets the corresponding fields in the sense buffer taking sense type
+ * into account. Returns resulting sense length.
+ */
+int scst_set_sense(uint8_t *buffer, int len, bool d_sense,
+	int key, int asc, int ascq)
+{
+	int res;
+
+	BUG_ON(len == 0);
+
+	memset(buffer, 0, len);
+
+	if (d_sense) {
+		/* Descriptor format */
+		if (len < 8) {
+			PRINT_ERROR("Length %d of sense buffer too small to "
+				"fit sense %x:%x:%x", len, key, asc, ascq);
+		}
+
+		buffer[0] = 0x72;		/* Response Code	*/
+		if (len > 1)
+			buffer[1] = key;	/* Sense Key		*/
+		if (len > 2)
+			buffer[2] = asc;	/* ASC			*/
+		if (len > 3)
+			buffer[3] = ascq;	/* ASCQ			*/
+		res = 8;
+	} else {
+		/* Fixed format */
+		if (len < 18) {
+			PRINT_ERROR("Length %d of sense buffer too small to "
+				"fit sense %x:%x:%x", len, key, asc, ascq);
+		}
+
+		buffer[0] = 0x70;		/* Response Code	*/
+		if (len > 2)
+			buffer[2] = key;	/* Sense Key		*/
+		if (len > 7)
+			buffer[7] = 0x0a;	/* Additional Sense Length */
+		if (len > 12)
+			buffer[12] = asc;	/* ASC			*/
+		if (len > 13)
+			buffer[13] = ascq;	/* ASCQ			*/
+		res = 18;
+	}
+
+	TRACE_BUFFER("Sense set", buffer, res);
+	return res;
+}
+EXPORT_SYMBOL(scst_set_sense);
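+
+/*
+ * Worked example: scst_set_sense(buf, 18, false, MEDIUM_ERROR, 0x11, 0x0)
+ * fills fixed-format sense: buf[0] = 0x70 (response code), buf[2] = 0x03
+ * (MEDIUM ERROR), buf[7] = 0x0a (additional length), buf[12] = 0x11 and
+ * buf[13] = 0x00 (UNRECOVERED READ ERROR), and returns 18.
+ */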
+
+/**
+ * scst_analyze_sense() - analyze sense
+ *
+ * Returns true if the sense matches (key, asc, ascq) and false otherwise.
+ * Valid_mask is a combination of one or more SCST_SENSE_*_VALID constants,
+ * selecting which of the (key, asc, ascq) values are checked.
+ */
+bool scst_analyze_sense(const uint8_t *sense, int len, unsigned int valid_mask,
+	int key, int asc, int ascq)
+{
+	bool res = false;
+
+	/* Response Code */
+	if ((sense[0] == 0x70) || (sense[0] == 0x71)) {
+		/* Fixed format */
+
+		/* Sense Key */
+		if (valid_mask & SCST_SENSE_KEY_VALID) {
+			if (len < 3)
+				goto out;
+			if (sense[2] != key)
+				goto out;
+		}
+
+		/* ASC */
+		if (valid_mask & SCST_SENSE_ASC_VALID) {
+			if (len < 13)
+				goto out;
+			if (sense[12] != asc)
+				goto out;
+		}
+
+		/* ASCQ */
+		if (valid_mask & SCST_SENSE_ASCQ_VALID) {
+			if (len < 14)
+				goto out;
+			if (sense[13] != ascq)
+				goto out;
+		}
+	} else if ((sense[0] == 0x72) || (sense[0] == 0x73)) {
+		/* Descriptor format */
+
+		/* Sense Key */
+		if (valid_mask & SCST_SENSE_KEY_VALID) {
+			if (len < 2)
+				goto out;
+			if (sense[1] != key)
+				goto out;
+		}
+
+		/* ASC */
+		if (valid_mask & SCST_SENSE_ASC_VALID) {
+			if (len < 3)
+				goto out;
+			if (sense[2] != asc)
+				goto out;
+		}
+
+		/* ASCQ */
+		if (valid_mask & SCST_SENSE_ASCQ_VALID) {
+			if (len < 4)
+				goto out;
+			if (sense[3] != ascq)
+				goto out;
+		}
+	} else
+		goto out;
+
+	res = true;
+
+out:
+	return res;
+}
+EXPORT_SYMBOL(scst_analyze_sense);
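+
+/*
+ * For example, a POWER ON/RESET Unit Attention could be matched in either
+ * sense format with (a sketch; 0x29/0 is the POWER ON, RESET ASC/ASCQ):
+ *
+ *	scst_analyze_sense(sense, len, SCST_SENSE_ALL_VALID,
+ *		UNIT_ATTENTION, 0x29, 0);
+ */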
+
+/**
+ * scst_is_ua_sense() - determine if the sense is UA sense
+ *
+ * Returns true if the sense is valid and carries a Unit
+ * Attention, false otherwise.
+ */
+bool scst_is_ua_sense(const uint8_t *sense, int len)
+{
+	if (SCST_SENSE_VALID(sense))
+		return scst_analyze_sense(sense, len,
+			SCST_SENSE_KEY_VALID, UNIT_ATTENTION, 0, 0);
+	else
+		return false;
+}
+EXPORT_SYMBOL(scst_is_ua_sense);
+
+bool scst_is_ua_global(const uint8_t *sense, int len)
+{
+	bool res;
+
+	/* When changing this, don't forget to change scst_requeue_ua() as well! */
+
+	if (scst_analyze_sense(sense, len, SCST_SENSE_ALL_VALID,
+			SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed)))
+		res = true;
+	else
+		res = false;
+
+	return res;
+}
+
+/**
+ * scst_check_convert_sense() - check sense type and convert it if needed
+ *
+ * Checks if the sense in the sense buffer, if any, is in the correct format.
+ * If not, converts it to the correct format.
+ */
+void scst_check_convert_sense(struct scst_cmd *cmd)
+{
+	bool d_sense;
+
+	if ((cmd->sense == NULL) || (cmd->status != SAM_STAT_CHECK_CONDITION))
+		goto out;
+
+	d_sense = scst_get_cmd_dev_d_sense(cmd);
+	if (d_sense && ((cmd->sense[0] == 0x70) || (cmd->sense[0] == 0x71))) {
+		TRACE_MGMT_DBG("Converting fixed sense to descriptor (cmd %p)",
+			cmd);
+		if (cmd->sense_valid_len < 18) {
+			PRINT_ERROR("Sense too small to convert (%d, "
+				"type: fixed)", cmd->sense_valid_len);
+			goto out;
+		}
+		cmd->sense_valid_len = scst_set_sense(cmd->sense, cmd->sense_buflen,
+			d_sense, cmd->sense[2], cmd->sense[12], cmd->sense[13]);
+	} else if (!d_sense && ((cmd->sense[0] == 0x72) ||
+				(cmd->sense[0] == 0x73))) {
+		TRACE_MGMT_DBG("Converting descriptor sense to fixed (cmd %p)",
+			cmd);
+		if ((cmd->sense_buflen < 18) || (cmd->sense_valid_len < 8)) {
+			PRINT_ERROR("Sense too small to convert (%d, "
+				"type: descryptor, valid %d)",
+				cmd->sense_buflen, cmd->sense_valid_len);
+			goto out;
+		}
+		cmd->sense_valid_len = scst_set_sense(cmd->sense,
+			cmd->sense_buflen, d_sense,
+			cmd->sense[1], cmd->sense[2], cmd->sense[3]);
+	}
+
+out:
+	return;
+}
+EXPORT_SYMBOL(scst_check_convert_sense);
+
+static int scst_set_cmd_error_sense(struct scst_cmd *cmd, uint8_t *sense,
+	unsigned int len)
+{
+	int res;
+
+	res = scst_set_cmd_error_status(cmd, SAM_STAT_CHECK_CONDITION);
+	if (res != 0)
+		goto out;
+
+	res = scst_alloc_set_sense(cmd, 1, sense, len);
+
+out:
+	return res;
+}
+
+/**
+ * scst_set_busy() - set BUSY or TASK QUEUE FULL status
+ *
+ * Sets BUSY or TASK QUEUE FULL status depending on whether this session has
+ * other outstanding commands.
+ */
+void scst_set_busy(struct scst_cmd *cmd)
+{
+	int c = atomic_read(&cmd->sess->sess_cmd_count);
+
+	if ((c <= 1) || (cmd->sess->init_phase != SCST_SESS_IPH_READY))	{
+		scst_set_cmd_error_status(cmd, SAM_STAT_BUSY);
+		TRACE(TRACE_FLOW_CONTROL, "Sending BUSY status to initiator %s "
+			"(cmds count %d, queue_type %x, sess->init_phase %d)",
+			cmd->sess->initiator_name, c,
+			cmd->queue_type, cmd->sess->init_phase);
+	} else {
+		scst_set_cmd_error_status(cmd, SAM_STAT_TASK_SET_FULL);
+		TRACE(TRACE_FLOW_CONTROL, "Sending QUEUE_FULL status to "
+			"initiator %s (cmds count %d, queue_type %x, "
+			"sess->init_phase %d)", cmd->sess->initiator_name, c,
+			cmd->queue_type, cmd->sess->init_phase);
+	}
+	return;
+}
+EXPORT_SYMBOL(scst_set_busy);
+
+/**
+ * scst_set_initial_UA() - set initial Unit Attention
+ *
+ * Sets initial Unit Attention on all devices of the session,
+ * replacing default scst_sense_reset_UA
+ */
+void scst_set_initial_UA(struct scst_session *sess, int key, int asc, int ascq)
+{
+	int i;
+
+	TRACE_MGMT_DBG("Setting for sess %p initial UA %x/%x/%x", sess, key,
+		asc, ascq);
+
+	/* Protect sess_tgt_dev_list_hash */
+	mutex_lock(&scst_mutex);
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&sess->sess_tgt_dev_list_hash[i];
+		struct scst_tgt_dev *tgt_dev;
+
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			spin_lock_bh(&tgt_dev->tgt_dev_lock);
+			if (!list_empty(&tgt_dev->UA_list)) {
+				struct scst_tgt_dev_UA *ua;
+
+				ua = list_entry(tgt_dev->UA_list.next,
+					typeof(*ua), UA_list_entry);
+				if (scst_analyze_sense(ua->UA_sense_buffer,
+						ua->UA_valid_sense_len,
+						SCST_SENSE_ALL_VALID,
+						SCST_LOAD_SENSE(scst_sense_reset_UA))) {
+					ua->UA_valid_sense_len = scst_set_sense(
+						ua->UA_sense_buffer,
+						sizeof(ua->UA_sense_buffer),
+						tgt_dev->dev->d_sense,
+						key, asc, ascq);
+				} else
+					PRINT_ERROR("%s",
+						"The first UA isn't RESET UA");
+			} else
+				PRINT_ERROR("%s", "There's no RESET UA to "
+					"replace");
+			spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+		}
+	}
+
+	mutex_unlock(&scst_mutex);
+	return;
+}
+EXPORT_SYMBOL(scst_set_initial_UA);
+
+static struct scst_aen *scst_alloc_aen(struct scst_session *sess,
+	uint64_t unpacked_lun)
+{
+	struct scst_aen *aen;
+
+	aen = mempool_alloc(scst_aen_mempool, GFP_KERNEL);
+	if (aen == NULL) {
+		PRINT_ERROR("AEN memory allocation failed. Corresponding "
+			"event notification will not be performed (initiator "
+			"%s)", sess->initiator_name);
+		goto out;
+	}
+	memset(aen, 0, sizeof(*aen));
+
+	aen->sess = sess;
+	scst_sess_get(sess);
+
+	aen->lun = scst_pack_lun(unpacked_lun, sess->acg->addr_method);
+
+out:
+	return aen;
+}
+
+static void scst_free_aen(struct scst_aen *aen)
+{
+
+	scst_sess_put(aen->sess);
+	mempool_free(aen, scst_aen_mempool);
+	return;
+}
+
+/* Must be called under scst_mutex */
+void scst_gen_aen_or_ua(struct scst_tgt_dev *tgt_dev,
+	int key, int asc, int ascq)
+{
+	struct scst_tgt_template *tgtt = tgt_dev->sess->tgt->tgtt;
+	uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+	int sl;
+
+	if (tgtt->report_aen != NULL) {
+		struct scst_aen *aen;
+		int rc;
+
+		aen = scst_alloc_aen(tgt_dev->sess, tgt_dev->lun);
+		if (aen == NULL)
+			goto queue_ua;
+
+		aen->event_fn = SCST_AEN_SCSI;
+		aen->aen_sense_len = scst_set_sense(aen->aen_sense,
+			sizeof(aen->aen_sense), tgt_dev->dev->d_sense,
+			key, asc, ascq);
+
+		TRACE_DBG("Calling target's %s report_aen(%p)",
+			tgtt->name, aen);
+		rc = tgtt->report_aen(aen);
+		TRACE_DBG("Target's %s report_aen(%p) returned %d",
+			tgtt->name, aen, rc);
+		if (rc == SCST_AEN_RES_SUCCESS)
+			goto out;
+
+		scst_free_aen(aen);
+	}
+
+queue_ua:
+	TRACE_MGMT_DBG("AEN not supported, queuing plain UA (tgt_dev %p)",
+		tgt_dev);
+	sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+		tgt_dev->dev->d_sense, key, asc, ascq);
+	scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+
+out:
+	return;
+}
+
+/**
+ * scst_capacity_data_changed() - notify SCST about device capacity change
+ *
+ * Notifies SCST core that dev has changed its capacity. Called under no locks.
+ */
+void scst_capacity_data_changed(struct scst_device *dev)
+{
+	struct scst_tgt_dev *tgt_dev;
+
+	if (dev->type != TYPE_DISK) {
+		TRACE_MGMT_DBG("Device type %d isn't for CAPACITY DATA "
+			"CHANGED UA", dev->type);
+		goto out;
+	}
+
+	TRACE_MGMT_DBG("CAPACITY DATA CHANGED (dev %p)", dev);
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+			    dev_tgt_dev_list_entry) {
+		scst_gen_aen_or_ua(tgt_dev,
+			SCST_LOAD_SENSE(scst_sense_capacity_data_changed));
+	}
+
+	mutex_unlock(&scst_mutex);
+
+out:
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_capacity_data_changed);
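+
+/*
+ * Usage note (an assumption, not shown in this patch): a virtual disk
+ * handler that resizes its backing storage would call
+ * scst_capacity_data_changed(dev) afterwards, so each connected
+ * initiator gets the CAPACITY DATA CHANGED AEN or UA generated above.
+ */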
+
+static inline bool scst_is_report_luns_changed_type(int type)
+{
+	switch (type) {
+	case TYPE_DISK:
+	case TYPE_TAPE:
+	case TYPE_PRINTER:
+	case TYPE_PROCESSOR:
+	case TYPE_WORM:
+	case TYPE_ROM:
+	case TYPE_SCANNER:
+	case TYPE_MOD:
+	case TYPE_MEDIUM_CHANGER:
+	case TYPE_RAID:
+	case TYPE_ENCLOSURE:
+		return true;
+	default:
+		return false;
+	}
+}
+
+/* scst_mutex supposed to be held */
+static void scst_queue_report_luns_changed_UA(struct scst_session *sess,
+					      int flags)
+{
+	uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+	struct list_head *shead;
+	struct scst_tgt_dev *tgt_dev;
+	int i;
+
+	TRACE_MGMT_DBG("Queuing REPORTED LUNS DATA CHANGED UA "
+		"(sess %p)", sess);
+
+	local_bh_disable();
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		shead = &sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry(tgt_dev, shead,
+				sess_tgt_dev_list_entry) {
+			/* Lockdep triggers a false positive here. */
+			spin_lock(&tgt_dev->tgt_dev_lock);
+		}
+	}
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		shead = &sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry(tgt_dev, shead,
+				sess_tgt_dev_list_entry) {
+			int sl;
+
+			if (!scst_is_report_luns_changed_type(
+					tgt_dev->dev->type))
+				continue;
+
+			sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+				tgt_dev->dev->d_sense,
+				SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed));
+
+			__scst_check_set_UA(tgt_dev, sense_buffer,
+				sl, flags | SCST_SET_UA_FLAG_GLOBAL);
+		}
+	}
+
+	for (i = TGT_DEV_HASH_SIZE-1; i >= 0; i--) {
+		shead = &sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry_reverse(tgt_dev,
+				shead, sess_tgt_dev_list_entry) {
+			spin_unlock(&tgt_dev->tgt_dev_lock);
+		}
+	}
+
+	local_bh_enable();
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_report_luns_changed_sess(struct scst_session *sess)
+{
+	int i;
+	struct scst_tgt_template *tgtt = sess->tgt->tgtt;
+	int d_sense = 0;
+	uint64_t lun = 0;
+
+	TRACE_DBG("REPORTED LUNS DATA CHANGED (sess %p)", sess);
+
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *shead;
+		struct scst_tgt_dev *tgt_dev;
+
+		shead = &sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry(tgt_dev, shead,
+				sess_tgt_dev_list_entry) {
+			if (scst_is_report_luns_changed_type(
+					tgt_dev->dev->type)) {
+				lun = tgt_dev->lun;
+				d_sense = tgt_dev->dev->d_sense;
+				goto found;
+			}
+		}
+	}
+
+found:
+	if (tgtt->report_aen != NULL) {
+		struct scst_aen *aen;
+		int rc;
+
+		aen = scst_alloc_aen(sess, lun);
+		if (aen == NULL)
+			goto queue_ua;
+
+		aen->event_fn = SCST_AEN_SCSI;
+		aen->aen_sense_len = scst_set_sense(aen->aen_sense,
+			sizeof(aen->aen_sense), d_sense,
+			SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed));
+
+		TRACE_DBG("Calling target's %s report_aen(%p)",
+			tgtt->name, aen);
+		rc = tgtt->report_aen(aen);
+		TRACE_DBG("Target's %s report_aen(%p) returned %d",
+			tgtt->name, aen, rc);
+		if (rc == SCST_AEN_RES_SUCCESS)
+			goto out;
+
+		scst_free_aen(aen);
+	}
+
+queue_ua:
+	scst_queue_report_luns_changed_UA(sess, 0);
+
+out:
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_report_luns_changed(struct scst_acg *acg)
+{
+	struct scst_session *sess;
+
+	TRACE_MGMT_DBG("REPORTED LUNS DATA CHANGED (acg %s)", acg->acg_name);
+
+	list_for_each_entry(sess, &acg->acg_sess_list, acg_sess_list_entry) {
+		scst_report_luns_changed_sess(sess);
+	}
+	return;
+}
+
+/**
+ * scst_aen_done() - AEN processing done
+ *
+ * Notifies SCST that the driver has sent the AEN and it
+ * can be freed now. Don't forget to set the delivery status, if it
+ * isn't success, using scst_set_aen_delivery_status() before calling
+ * this function.
+ */
+void scst_aen_done(struct scst_aen *aen)
+{
+
+	TRACE_MGMT_DBG("AEN %p (fn %d) done (initiator %s)", aen,
+		aen->event_fn, aen->sess->initiator_name);
+
+	if (aen->delivery_status == SCST_AEN_RES_SUCCESS)
+		goto out_free;
+
+	if (aen->event_fn != SCST_AEN_SCSI)
+		goto out_free;
+
+	TRACE_MGMT_DBG("Delivery of SCSI AEN failed (initiator %s)",
+		aen->sess->initiator_name);
+
+	if (scst_analyze_sense(aen->aen_sense, aen->aen_sense_len,
+			SCST_SENSE_ALL_VALID, SCST_LOAD_SENSE(
+				scst_sense_reported_luns_data_changed))) {
+		mutex_lock(&scst_mutex);
+		scst_queue_report_luns_changed_UA(aen->sess,
+			SCST_SET_UA_FLAG_AT_HEAD);
+		mutex_unlock(&scst_mutex);
+	} else {
+		struct list_head *shead;
+		struct scst_tgt_dev *tgt_dev;
+		uint64_t lun;
+
+		lun = scst_unpack_lun((uint8_t *)&aen->lun, sizeof(aen->lun));
+
+		mutex_lock(&scst_mutex);
+
+		/* tgt_dev might have been freed, so we need to look it up again */
+		shead = &aen->sess->sess_tgt_dev_list_hash[HASH_VAL(lun)];
+		list_for_each_entry(tgt_dev, shead,
+				sess_tgt_dev_list_entry) {
+			if (tgt_dev->lun == lun) {
+				TRACE_MGMT_DBG("Requeuing failed AEN UA for "
+					"tgt_dev %p", tgt_dev);
+				scst_check_set_UA(tgt_dev, aen->aen_sense,
+					aen->aen_sense_len,
+					SCST_SET_UA_FLAG_AT_HEAD);
+				break;
+			}
+		}
+
+		mutex_unlock(&scst_mutex);
+	}
+
+out_free:
+	scst_free_aen(aen);
+	return;
+}
+EXPORT_SYMBOL(scst_aen_done);
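+
+/*
+ * A target driver's AEN completion path might look like (a sketch,
+ * assuming the driver tracked the delivery result in "delivered" and
+ * that SCST_AEN_RES_FAILED is among the AEN result codes):
+ *
+ *	if (!delivered)
+ *		scst_set_aen_delivery_status(aen, SCST_AEN_RES_FAILED);
+ *	scst_aen_done(aen);
+ */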
+
+void scst_requeue_ua(struct scst_cmd *cmd)
+{
+
+	if (scst_analyze_sense(cmd->sense, cmd->sense_valid_len,
+			SCST_SENSE_ALL_VALID,
+			SCST_LOAD_SENSE(scst_sense_reported_luns_data_changed))) {
+		TRACE_MGMT_DBG("Requeuing REPORTED LUNS DATA CHANGED UA "
+			"for delivery failed cmd %p", cmd);
+		mutex_lock(&scst_mutex);
+		scst_queue_report_luns_changed_UA(cmd->sess,
+			SCST_SET_UA_FLAG_AT_HEAD);
+		mutex_unlock(&scst_mutex);
+	} else {
+		TRACE_MGMT_DBG("Requeuing UA for delivery failed cmd %p", cmd);
+		scst_check_set_UA(cmd->tgt_dev, cmd->sense,
+			cmd->sense_valid_len, SCST_SET_UA_FLAG_AT_HEAD);
+	}
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_check_reassign_sess(struct scst_session *sess)
+{
+	struct scst_acg *acg, *old_acg;
+	struct scst_acg_dev *acg_dev;
+	int i;
+	struct list_head *shead;
+	struct scst_tgt_dev *tgt_dev;
+	bool luns_changed = false;
+	bool add_failed, something_freed, not_needed_freed = false;
+
+	TRACE_MGMT_DBG("Checking reassignment for sess %p (initiator %s)",
+		sess, sess->initiator_name);
+
+	acg = scst_find_acg(sess);
+	if (acg == sess->acg) {
+		TRACE_MGMT_DBG("No reassignment for sess %p", sess);
+		goto out;
+	}
+
+	TRACE_MGMT_DBG("sess %p will be reassigned from acg %s to acg %s",
+		sess, sess->acg->acg_name, acg->acg_name);
+
+	old_acg = sess->acg;
+	sess->acg = NULL; /* to catch implicit dependencies earlier */
+
+retry_add:
+	add_failed = false;
+	list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+		unsigned int inq_changed_ua_needed = 0;
+
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			shead = &sess->sess_tgt_dev_list_hash[i];
+
+			list_for_each_entry(tgt_dev, shead,
+					sess_tgt_dev_list_entry) {
+				if ((tgt_dev->dev == acg_dev->dev) &&
+				    (tgt_dev->lun == acg_dev->lun) &&
+				    (tgt_dev->acg_dev->rd_only == acg_dev->rd_only)) {
+					TRACE_MGMT_DBG("sess %p: tgt_dev %p for "
+						"LUN %lld stays the same",
+						sess, tgt_dev,
+						(unsigned long long)tgt_dev->lun);
+					tgt_dev->acg_dev = acg_dev;
+					goto next;
+				} else if (tgt_dev->lun == acg_dev->lun)
+					inq_changed_ua_needed = 1;
+			}
+		}
+
+		luns_changed = true;
+
+		TRACE_MGMT_DBG("sess %p: Allocing new tgt_dev for LUN %lld",
+			sess, (unsigned long long)acg_dev->lun);
+
+		tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+		if (tgt_dev == NULL) {
+			add_failed = true;
+			break;
+		}
+
+		tgt_dev->inq_changed_ua_needed = inq_changed_ua_needed ||
+						 not_needed_freed;
+next:
+		continue;
+	}
+
+	something_freed = false;
+	not_needed_freed = true;
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct scst_tgt_dev *t;
+		shead = &sess->sess_tgt_dev_list_hash[i];
+
+		list_for_each_entry_safe(tgt_dev, t, shead,
+					sess_tgt_dev_list_entry) {
+			if (tgt_dev->acg_dev->acg != acg) {
+				TRACE_MGMT_DBG("sess %p: Deleting not used "
+					"tgt_dev %p for LUN %lld",
+					sess, tgt_dev,
+					(unsigned long long)tgt_dev->lun);
+				luns_changed = true;
+				something_freed = true;
+				scst_free_tgt_dev(tgt_dev);
+			}
+		}
+	}
+
+	if (add_failed && something_freed) {
+		TRACE_MGMT_DBG("sess %p: Retrying adding new tgt_devs", sess);
+		goto retry_add;
+	}
+
+	sess->acg = acg;
+
+	TRACE_DBG("Moving sess %p from acg %s to acg %s", sess,
+		old_acg->acg_name, acg->acg_name);
+	list_move_tail(&sess->acg_sess_list_entry, &acg->acg_sess_list);
+
+	if (luns_changed) {
+		scst_report_luns_changed_sess(sess);
+
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			shead = &sess->sess_tgt_dev_list_hash[i];
+
+			list_for_each_entry(tgt_dev, shead,
+					sess_tgt_dev_list_entry) {
+				if (tgt_dev->inq_changed_ua_needed) {
+					TRACE_MGMT_DBG("sess %p: Setting "
+						"INQUIRY DATA HAS CHANGED UA "
+						"(tgt_dev %p)", sess, tgt_dev);
+
+					tgt_dev->inq_changed_ua_needed = 0;
+
+					scst_gen_aen_or_ua(tgt_dev,
+						SCST_LOAD_SENSE(scst_sense_inquery_data_changed));
+				}
+			}
+		}
+	}
+
+out:
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_check_reassign_sessions(void)
+{
+	struct scst_tgt_template *tgtt;
+
+	list_for_each_entry(tgtt, &scst_template_list, scst_template_list_entry) {
+		struct scst_tgt *tgt;
+		list_for_each_entry(tgt, &tgtt->tgt_list, tgt_list_entry) {
+			struct scst_session *sess;
+			list_for_each_entry(sess, &tgt->sess_list,
+						sess_list_entry) {
+				scst_check_reassign_sess(sess);
+			}
+		}
+	}
+	return;
+}
+
+/**
+ * scst_get_cmd_abnormal_done_state() - get command's next abnormal done state
+ *
+ * Returns the next state of the SCSI target state machine in case the command
+ * completed abnormally.
+ */
+int scst_get_cmd_abnormal_done_state(const struct scst_cmd *cmd)
+{
+	int res;
+
+	switch (cmd->state) {
+	case SCST_CMD_STATE_INIT_WAIT:
+	case SCST_CMD_STATE_INIT:
+	case SCST_CMD_STATE_PRE_PARSE:
+	case SCST_CMD_STATE_DEV_PARSE:
+		if (cmd->preprocessing_only) {
+			res = SCST_CMD_STATE_PREPROCESSING_DONE;
+			break;
+		} /* else fall through */
+	case SCST_CMD_STATE_DEV_DONE:
+		if (cmd->internal)
+			res = SCST_CMD_STATE_FINISHED_INTERNAL;
+		else
+			res = SCST_CMD_STATE_PRE_XMIT_RESP;
+		break;
+
+	case SCST_CMD_STATE_PRE_DEV_DONE:
+	case SCST_CMD_STATE_MODE_SELECT_CHECKS:
+		res = SCST_CMD_STATE_DEV_DONE;
+		break;
+
+	case SCST_CMD_STATE_PRE_XMIT_RESP:
+		res = SCST_CMD_STATE_XMIT_RESP;
+		break;
+
+	case SCST_CMD_STATE_PREPROCESSING_DONE:
+	case SCST_CMD_STATE_PREPROCESSING_DONE_CALLED:
+		if (cmd->tgt_dev == NULL)
+			res = SCST_CMD_STATE_PRE_XMIT_RESP;
+		else
+			res = SCST_CMD_STATE_PRE_DEV_DONE;
+		break;
+
+	case SCST_CMD_STATE_PREPARE_SPACE:
+		if (cmd->preprocessing_only) {
+			res = SCST_CMD_STATE_PREPROCESSING_DONE;
+			break;
+		} /* else fall through */
+	case SCST_CMD_STATE_RDY_TO_XFER:
+	case SCST_CMD_STATE_DATA_WAIT:
+	case SCST_CMD_STATE_TGT_PRE_EXEC:
+	case SCST_CMD_STATE_SEND_FOR_EXEC:
+	case SCST_CMD_STATE_LOCAL_EXEC:
+	case SCST_CMD_STATE_REAL_EXEC:
+	case SCST_CMD_STATE_REAL_EXECUTING:
+		res = SCST_CMD_STATE_PRE_DEV_DONE;
+		break;
+
+	default:
+		PRINT_CRIT_ERROR("Wrong cmd state %d (cmd %p, op %x)",
+			cmd->state, cmd, cmd->cdb[0]);
+		BUG();
+		/* Invalid state to suppress compiler's warning */
+		res = SCST_CMD_STATE_LAST_ACTIVE;
+	}
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_get_cmd_abnormal_done_state);
+
+/**
+ * scst_set_cmd_abnormal_done_state() - set command's next abnormal done state
+ *
+ * Sets the state of the SCSI target state machine in case the command
+ * completed abnormally.
+ */
+void scst_set_cmd_abnormal_done_state(struct scst_cmd *cmd)
+{
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	switch (cmd->state) {
+	case SCST_CMD_STATE_XMIT_RESP:
+	case SCST_CMD_STATE_FINISHED:
+	case SCST_CMD_STATE_FINISHED_INTERNAL:
+	case SCST_CMD_STATE_XMIT_WAIT:
+		PRINT_CRIT_ERROR("Wrong cmd state %d (cmd %p, op %x)",
+			cmd->state, cmd, cmd->cdb[0]);
+		BUG();
+	}
+#endif
+
+	cmd->state = scst_get_cmd_abnormal_done_state(cmd);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (((cmd->state != SCST_CMD_STATE_PRE_XMIT_RESP) &&
+	     (cmd->state != SCST_CMD_STATE_PREPROCESSING_DONE)) &&
+		   (cmd->tgt_dev == NULL) && !cmd->internal) {
+		PRINT_CRIT_ERROR("Wrong not inited cmd state %d (cmd %p, "
+			"op %x)", cmd->state, cmd, cmd->cdb[0]);
+		BUG();
+	}
+#endif
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_set_cmd_abnormal_done_state);
+
+/**
+ * scst_set_resp_data_len() - set response data length
+ *
+ * Sets response data length for cmd and truncates its SG vector accordingly.
+ *
+ * The cmd->resp_data_len must not be set directly, it must be set only
+ * using this function. Value of resp_data_len must be <= cmd->bufflen.
+ */
+void scst_set_resp_data_len(struct scst_cmd *cmd, int resp_data_len)
+{
+	int i, l;
+
+	scst_check_restore_sg_buff(cmd);
+	cmd->resp_data_len = resp_data_len;
+
+	if (resp_data_len == cmd->bufflen)
+		goto out;
+
+	l = 0;
+	for (i = 0; i < cmd->sg_cnt; i++) {
+		l += cmd->sg[i].length;
+		if (l >= resp_data_len) {
+			int left = resp_data_len - (l - cmd->sg[i].length);
+#ifdef CONFIG_SCST_DEBUG
+			TRACE(TRACE_SG_OP|TRACE_MEMORY, "cmd %p (tag %llu), "
+				"resp_data_len %d, i %d, cmd->sg[i].length %d, "
+				"left %d",
+				cmd, (long long unsigned int)cmd->tag,
+				resp_data_len, i,
+				cmd->sg[i].length, left);
+#endif
+			cmd->orig_sg_cnt = cmd->sg_cnt;
+			cmd->orig_sg_entry = i;
+			cmd->orig_entry_len = cmd->sg[i].length;
+			cmd->sg_cnt = (left > 0) ? i+1 : i;
+			cmd->sg[i].length = left;
+			cmd->sg_buff_modified = 1;
+			break;
+		}
+	}
+
+out:
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_set_resp_data_len);
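+
+/*
+ * For example (a sketch), a handler that filled only the first 8 bytes of
+ * a READ CAPACITY response in a larger buffer would truncate the SG
+ * vector accordingly with:
+ *
+ *	scst_set_resp_data_len(cmd, 8);
+ */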
+
+/* No locks */
+int scst_queue_retry_cmd(struct scst_cmd *cmd, int finished_cmds)
+{
+	struct scst_tgt *tgt = cmd->tgt;
+	int res = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&tgt->tgt_lock, flags);
+	tgt->retry_cmds++;
+	/*
+	 * Memory barrier is needed here, because we need exact ordering
+	 * between the write to retry_cmds and the read of finished_cmds, so
+	 * that we don't miss the case when a command finishes while we are
+	 * queuing it for retry after the finished_cmds check.
+	 */
+	smp_mb();
+	TRACE_RETRY("TGT QUEUE FULL: incrementing retry_cmds %d",
+	      tgt->retry_cmds);
+	if (finished_cmds != atomic_read(&tgt->finished_cmds)) {
+		/* At least one cmd finished, so try again */
+		tgt->retry_cmds--;
+		TRACE_RETRY("Some command(s) finished, direct retry "
+		      "(finished_cmds=%d, tgt->finished_cmds=%d, "
+		      "retry_cmds=%d)", finished_cmds,
+		      atomic_read(&tgt->finished_cmds), tgt->retry_cmds);
+		res = -1;
+		goto out_unlock_tgt;
+	}
+
+	TRACE_RETRY("Adding cmd %p to retry cmd list", cmd);
+	list_add_tail(&cmd->cmd_list_entry, &tgt->retry_cmd_list);
+
+	if (!tgt->retry_timer_active) {
+		tgt->retry_timer.expires = jiffies + SCST_TGT_RETRY_TIMEOUT;
+		add_timer(&tgt->retry_timer);
+		tgt->retry_timer_active = 1;
+	}
+
+out_unlock_tgt:
+	spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+	return res;
+}
+
+/* Returns 0 to continue, >0 to restart, <0 to break */
+static int scst_check_hw_pending_cmd(struct scst_cmd *cmd,
+	unsigned long cur_time, unsigned long max_time,
+	struct scst_session *sess, unsigned long *flags,
+	struct scst_tgt_template *tgtt)
+{
+	int res = -1; /* break */
+
+	TRACE_DBG("cmd %p, hw_pending %d, proc time %ld, "
+		"pending time %ld", cmd, cmd->cmd_hw_pending,
+		(long)(cur_time - cmd->start_time) / HZ,
+		(long)(cur_time - cmd->hw_pending_start) / HZ);
+
+	if (time_before_eq(cur_time, cmd->start_time + max_time)) {
+		/* Cmds are ordered, so no need to check more */
+		goto out;
+	}
+
+	if (!cmd->cmd_hw_pending) {
+		res = 0; /* continue */
+		goto out;
+	}
+
+	if (time_before(cur_time, cmd->hw_pending_start + max_time)) {
+		/* Cmds are ordered, so no need to check more */
+		goto out;
+	}
+
+	TRACE_MGMT_DBG("Cmd %p HW pending for too long %ld (state %x)",
+		cmd, (cur_time - cmd->hw_pending_start) / HZ,
+		cmd->state);
+
+	cmd->cmd_hw_pending = 0;
+
+	spin_unlock_irqrestore(&sess->sess_list_lock, *flags);
+	tgtt->on_hw_pending_cmd_timeout(cmd);
+	spin_lock_irqsave(&sess->sess_list_lock, *flags);
+
+	res = 1; /* restart */
+
+out:
+	return res;
+}
+
+static void scst_hw_pending_work_fn(struct delayed_work *work)
+{
+	struct scst_session *sess = container_of(work, struct scst_session,
+					hw_pending_work);
+	struct scst_tgt_template *tgtt = sess->tgt->tgtt;
+	struct scst_cmd *cmd;
+	unsigned long cur_time = jiffies;
+	unsigned long flags;
+	unsigned long max_time = tgtt->max_hw_pending_time * HZ;
+
+	TRACE_DBG("HW pending work (sess %p, max time %ld)", sess, max_time/HZ);
+
+	clear_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags);
+
+	spin_lock_irqsave(&sess->sess_list_lock, flags);
+
+restart:
+	list_for_each_entry(cmd, &sess->sess_cmd_list, sess_cmd_list_entry) {
+		int rc;
+
+		rc = scst_check_hw_pending_cmd(cmd, cur_time, max_time, sess,
+					&flags, tgtt);
+		if (rc < 0)
+			break;
+		else if (rc == 0)
+			continue;
+		else
+			goto restart;
+	}
+
+	if (!list_empty(&sess->sess_cmd_list)) {
+		/*
+		 * If there is no activity, stuck cmds might need one more
+		 * run to be released, so reschedule once again.
+		 */
+		TRACE_DBG("Sched HW pending work for sess %p (max time %d)",
+			sess, tgtt->max_hw_pending_time);
+		set_bit(SCST_SESS_HW_PENDING_WORK_SCHEDULED, &sess->sess_aflags);
+		schedule_delayed_work(&sess->hw_pending_work,
+				tgtt->max_hw_pending_time * HZ);
+	}
+
+	spin_unlock_irqrestore(&sess->sess_list_lock, flags);
+	return;
+}
+
+bool scst_is_relative_target_port_id_unique(uint16_t id, struct scst_tgt *t)
+{
+	bool res = true;
+	struct scst_tgt_template *tgtt;
+
+	mutex_lock(&scst_mutex);
+	list_for_each_entry(tgtt, &scst_template_list,
+				scst_template_list_entry) {
+		struct scst_tgt *tgt;
+		list_for_each_entry(tgt, &tgtt->tgt_list, tgt_list_entry) {
+			if (tgt == t)
+				continue;
+			if (id == tgt->rel_tgt_id) {
+				res = false;
+				break;
+			}
+		}
+	}
+	mutex_unlock(&scst_mutex);
+	return res;
+}
+
+int gen_relative_target_port_id(uint16_t *id)
+{
+	int res = -EOVERFLOW;
+	static unsigned long rti = SCST_MIN_REL_TGT_ID, rti_prev;
+
+	rti_prev = rti;
+	do {
+		if (scst_is_relative_target_port_id_unique(rti, NULL)) {
+			*id = (uint16_t)rti++;
+			res = 0;
+			goto out;
+		}
+		rti++;
+		if (rti > SCST_MAX_REL_TGT_ID)
+			rti = SCST_MIN_REL_TGT_ID;
+	} while (rti != rti_prev);
+
+	PRINT_ERROR("%s", "Unable to create unique relative target port id");
+
+out:
+	return res;
+}
+
+int scst_alloc_tgt(struct scst_tgt_template *tgtt, struct scst_tgt **tgt)
+{
+	struct scst_tgt *t;
+	int res = 0;
+
+	t = kzalloc(sizeof(*t), GFP_KERNEL);
+	if (t == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of tgt failed");
+		res = -ENOMEM;
+		goto out;
+	}
+
+	INIT_LIST_HEAD(&t->sess_list);
+	init_waitqueue_head(&t->unreg_waitQ);
+	t->tgtt = tgtt;
+	t->sg_tablesize = tgtt->sg_tablesize;
+	spin_lock_init(&t->tgt_lock);
+	INIT_LIST_HEAD(&t->retry_cmd_list);
+	atomic_set(&t->finished_cmds, 0);
+	init_timer(&t->retry_timer);
+	t->retry_timer.data = (unsigned long)t;
+	t->retry_timer.function = scst_tgt_retry_timer_fn;
+
+	*tgt = t;
+
+out:
+	return res;
+}
+
+void scst_free_tgt(struct scst_tgt *tgt)
+{
+
+	if (tgt->default_acg != NULL)
+		scst_free_acg(tgt->default_acg);
+
+	kfree(tgt->tgt_name);
+
+	kfree(tgt);
+	return;
+}
+
+/* Called under scst_mutex and suspended activity */
+int scst_alloc_device(gfp_t gfp_mask, struct scst_device **out_dev)
+{
+	struct scst_device *dev;
+	int res = 0;
+
+	dev = kzalloc(sizeof(*dev), gfp_mask);
+	if (dev == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s",
+			"Allocation of scst_device failed");
+		res = -ENOMEM;
+		goto out;
+	}
+
+	dev->handler = &scst_null_devtype;
+	atomic_set(&dev->dev_cmd_count, 0);
+	atomic_set(&dev->write_cmd_count, 0);
+	scst_init_mem_lim(&dev->dev_mem_lim);
+	spin_lock_init(&dev->dev_lock);
+	atomic_set(&dev->on_dev_count, 0);
+	INIT_LIST_HEAD(&dev->blocked_cmd_list);
+	INIT_LIST_HEAD(&dev->dev_tgt_dev_list);
+	INIT_LIST_HEAD(&dev->dev_acg_dev_list);
+	init_waitqueue_head(&dev->on_dev_waitQ);
+	dev->dev_double_ua_possible = 1;
+	dev->queue_alg = SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER;
+
+	scst_init_threads(&dev->dev_cmd_threads);
+
+	*out_dev = dev;
+
+out:
+	return res;
+}
+
+void scst_free_device(struct scst_device *dev)
+{
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (!list_empty(&dev->dev_tgt_dev_list) ||
+	    !list_empty(&dev->dev_acg_dev_list)) {
+		PRINT_CRIT_ERROR("%s: dev_tgt_dev_list or dev_acg_dev_list "
+			"is not empty!", __func__);
+		BUG();
+	}
+#endif
+
+	scst_deinit_threads(&dev->dev_cmd_threads);
+
+	kfree(dev->virt_name);
+	kfree(dev);
+	return;
+}
+
+/**
+ * scst_init_mem_lim - initialize memory limits structure
+ *
+ * Initializes memory limits structure mem_lim according to
+ * the current system configuration. This structure should later be used
+ * to track and limit memory allocated by one or more SGV pools.
+ */
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim)
+{
+	atomic_set(&mem_lim->alloced_pages, 0);
+	mem_lim->max_allowed_pages =
+		((uint64_t)scst_max_dev_cmd_mem << 10) >> (PAGE_SHIFT - 10);
+}
+EXPORT_SYMBOL_GPL(scst_init_mem_lim);
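+
+/*
+ * Worked example of the limit computation above, assuming
+ * scst_max_dev_cmd_mem is in megabytes: with 4 KB pages (PAGE_SHIFT = 12)
+ * and scst_max_dev_cmd_mem = 256, max_allowed_pages =
+ * (256 << 10) >> (12 - 10) = 262144 >> 2 = 65536 pages, i.e. 256 MB.
+ */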
+
+static struct scst_acg_dev *scst_alloc_acg_dev(struct scst_acg *acg,
+					struct scst_device *dev, uint64_t lun)
+{
+	struct scst_acg_dev *res;
+
+	res = kmem_cache_zalloc(scst_acgd_cachep, GFP_KERNEL);
+	if (res == NULL) {
+		TRACE(TRACE_OUT_OF_MEM,
+		      "%s", "Allocation of scst_acg_dev failed");
+		goto out;
+	}
+
+	res->dev = dev;
+	res->acg = acg;
+	res->lun = lun;
+
+out:
+	return res;
+}
+
+void scst_acg_dev_destroy(struct scst_acg_dev *acg_dev)
+{
+
+	if ((acg_dev->dev != NULL) && acg_dev->dev->dev_kobj_initialized)
+		kobject_put(&acg_dev->dev->dev_kobj);
+
+	kmem_cache_free(scst_acgd_cachep, acg_dev);
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+static void scst_free_acg_dev(struct scst_acg_dev *acg_dev)
+{
+
+	TRACE_DBG("Removing acg_dev %p from acg_dev_list and dev_acg_dev_list",
+		acg_dev);
+	list_del(&acg_dev->acg_dev_list_entry);
+	list_del(&acg_dev->dev_acg_dev_list_entry);
+
+	if (acg_dev->acg_dev_kobj_initialized) {
+		kobject_del(&acg_dev->acg_dev_kobj);
+		kobject_put(&acg_dev->acg_dev_kobj);
+	} else
+		scst_acg_dev_destroy(acg_dev);
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acg *scst_alloc_add_acg(struct scst_tgt *tgt,
+	const char *acg_name)
+{
+	struct scst_acg *acg;
+
+	acg = kzalloc(sizeof(*acg), GFP_KERNEL);
+	if (acg == NULL) {
+		PRINT_ERROR("%s", "Allocation of acg failed");
+		goto out;
+	}
+
+	INIT_LIST_HEAD(&acg->acg_dev_list);
+	INIT_LIST_HEAD(&acg->acg_sess_list);
+	INIT_LIST_HEAD(&acg->acn_list);
+	acg->acg_name = kstrdup(acg_name, GFP_KERNEL);
+	if (acg->acg_name == NULL) {
+		PRINT_ERROR("%s", "Allocation of acg_name failed");
+		goto out_free;
+	}
+
+	acg->addr_method = SCST_LUN_ADDR_METHOD_PERIPHERAL;
+
+	if (tgt != NULL) {
+		TRACE_DBG("Adding acg '%s' to target '%s' acg_list", acg_name,
+			tgt->tgt_name);
+		list_add_tail(&acg->acg_list_entry, &tgt->tgt_acg_list);
+		acg->in_tgt_acg_list = 1;
+	}
+
+out:
+	return acg;
+
+out_free:
+	kfree(acg);
+	acg = NULL;
+	goto out;
+}
+
+void scst_free_acg(struct scst_acg *acg)
+{
+
+	BUG_ON(!list_empty(&acg->acg_sess_list));
+	BUG_ON(!list_empty(&acg->acg_dev_list));
+	BUG_ON(!list_empty(&acg->acn_list));
+
+	kfree(acg->acg_name);
+	kfree(acg);
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_clear_acg(struct scst_acg *acg)
+{
+	struct scst_acn *n, *nn;
+	struct scst_acg_dev *acg_dev, *acg_dev_tmp;
+
+	TRACE_DBG("Clearing acg %s from list", acg->acg_name);
+
+	BUG_ON(!list_empty(&acg->acg_sess_list));
+	if (acg->in_tgt_acg_list) {
+		TRACE_DBG("Removing acg %s from list", acg->acg_name);
+		list_del(&acg->acg_list_entry);
+		acg->in_tgt_acg_list = 0;
+	}
+	/* Freeing acg_devs */
+	list_for_each_entry_safe(acg_dev, acg_dev_tmp, &acg->acg_dev_list,
+			acg_dev_list_entry) {
+		struct scst_tgt_dev *tgt_dev, *tt;
+		list_for_each_entry_safe(tgt_dev, tt,
+				 &acg_dev->dev->dev_tgt_dev_list,
+				 dev_tgt_dev_list_entry) {
+			if (tgt_dev->acg_dev == acg_dev)
+				scst_free_tgt_dev(tgt_dev);
+		}
+		scst_free_acg_dev(acg_dev);
+	}
+
+	/* Freeing names */
+	list_for_each_entry_safe(n, nn, &acg->acn_list,
+			acn_list_entry) {
+		scst_acn_sysfs_del(acg, n,
+			list_is_last(&n->acn_list_entry, &acg->acn_list));
+	}
+	INIT_LIST_HEAD(&acg->acn_list);
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+void scst_destroy_acg(struct scst_acg *acg)
+{
+
+	scst_clear_acg(acg);
+	scst_free_acg(acg);
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acg *scst_tgt_find_acg(struct scst_tgt *tgt, const char *name)
+{
+	struct scst_acg *acg, *acg_ret = NULL;
+
+	list_for_each_entry(acg, &tgt->tgt_acg_list, acg_list_entry) {
+		if (strcmp(acg->acg_name, name) == 0) {
+			acg_ret = acg;
+			break;
+		}
+	}
+	return acg_ret;
+}
+
+/* scst_mutex supposed to be held */
+static struct scst_tgt_dev *scst_find_shared_io_tgt_dev(
+	struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_tgt_dev *res = NULL;
+	struct scst_acg *acg = tgt_dev->acg_dev->acg;
+	struct scst_tgt_dev *t;
+
+	TRACE_DBG("tgt_dev %s (acg %p, io_grouping_type %d)",
+		tgt_dev->sess->initiator_name, acg, acg->acg_io_grouping_type);
+
+	switch (acg->acg_io_grouping_type) {
+	case SCST_IO_GROUPING_AUTO:
+		if (tgt_dev->sess->initiator_name == NULL)
+			goto out;
+
+		list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			if ((t == tgt_dev) ||
+			    (t->sess->initiator_name == NULL) ||
+			    (t->active_cmd_threads == NULL))
+				continue;
+
+			TRACE_DBG("t %s", t->sess->initiator_name);
+
+			/* We check other ACGs as well */
+
+			if (strcmp(t->sess->initiator_name,
+					tgt_dev->sess->initiator_name) == 0)
+				goto found;
+		}
+		break;
+
+	case SCST_IO_GROUPING_THIS_GROUP_ONLY:
+		list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			if ((t == tgt_dev) || (t->active_cmd_threads == NULL))
+				continue;
+
+			TRACE_DBG("t %s (acg %p)", t->sess->initiator_name,
+				t->acg_dev->acg);
+
+			if (t->acg_dev->acg == acg)
+				goto found;
+		}
+		break;
+
+	case SCST_IO_GROUPING_NEVER:
+		goto out;
+
+	default:
+		list_for_each_entry(t, &tgt_dev->dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+			if ((t == tgt_dev) || (t->active_cmd_threads == NULL))
+				continue;
+
+			TRACE_DBG("t %s (acg %p, io_grouping_type %d)",
+				t->sess->initiator_name, t->acg_dev->acg,
+				t->acg_dev->acg->acg_io_grouping_type);
+
+			if (t->acg_dev->acg->acg_io_grouping_type ==
+					acg->acg_io_grouping_type)
+				goto found;
+		}
+		break;
+	}
+
+out:
+	return res;
+
+found:
+	if (t->active_cmd_threads == &scst_main_cmd_threads) {
+		res = t;
+		TRACE_MGMT_DBG("Going to share async IO context %p (res %p, "
+			"ini %s, dev %s, grouping type %d)",
+			t->aic_keeper->aic, res, t->sess->initiator_name,
+			t->dev->virt_name,
+			t->acg_dev->acg->acg_io_grouping_type);
+	} else {
+		res = t;
+		if (res->active_cmd_threads->io_context == NULL) {
+			TRACE_MGMT_DBG("IO context for t %p not yet "
+				"initialized, waiting...", t);
+			msleep(100);
+			barrier();
+			goto found;
+		}
+		TRACE_MGMT_DBG("Going to share IO context %p (res %p, ini %s, "
+			"dev %s, cmd_threads %p, grouping type %d)",
+			res->active_cmd_threads->io_context, res,
+			t->sess->initiator_name, t->dev->virt_name,
+			t->active_cmd_threads,
+			t->acg_dev->acg->acg_io_grouping_type);
+	}
+	goto out;
+}
+
+enum scst_dev_type_threads_pool_type scst_parse_threads_pool_type(const char *p,
+	int len)
+{
+	enum scst_dev_type_threads_pool_type res;
+
+	if (strncasecmp(p, SCST_THREADS_POOL_PER_INITIATOR_STR,
+			min_t(int, strlen(SCST_THREADS_POOL_PER_INITIATOR_STR),
+				len)) == 0)
+		res = SCST_THREADS_POOL_PER_INITIATOR;
+	else if (strncasecmp(p, SCST_THREADS_POOL_SHARED_STR,
+			min_t(int, strlen(SCST_THREADS_POOL_SHARED_STR),
+				len)) == 0)
+		res = SCST_THREADS_POOL_SHARED;
+	else {
+		PRINT_ERROR("Unknown threads pool type %s", p);
+		res = SCST_THREADS_POOL_TYPE_INVALID;
+	}
+
+	return res;
+}
+
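+/*
+ * Usage sketch for the parser above (illustrative only, not compiled;
+ * "per_initiator" is the assumed value of
+ * SCST_THREADS_POOL_PER_INITIATOR_STR):
+ */
+#if 0
+	enum scst_dev_type_threads_pool_type t;
+
+	t = scst_parse_threads_pool_type("per_initiator", 13);
+	/*
+	 * t == SCST_THREADS_POOL_PER_INITIATOR. Note that only
+	 * min(strlen(pattern), len) bytes are compared, so a trailing
+	 * newline from a sysfs write would not break the match.
+	 */
+#endif
+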
+static int scst_ioc_keeper_thread(void *arg)
+{
+	struct scst_async_io_context_keeper *aic_keeper =
+		(struct scst_async_io_context_keeper *)arg;
+
+	TRACE_MGMT_DBG("AIC %p keeper thread %s (PID %d) started", aic_keeper,
+		current->comm, current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	BUG_ON(aic_keeper->aic != NULL);
+
+	aic_keeper->aic = get_io_context(GFP_KERNEL, -1);
+	TRACE_MGMT_DBG("Alloced new async IO context %p (aic %p)",
+		aic_keeper->aic, aic_keeper);
+
+	/* We have our own ref counting */
+	put_io_context(aic_keeper->aic);
+
+	/* We are ready */
+	wake_up_all(&aic_keeper->aic_keeper_waitQ);
+
+	wait_event_interruptible(aic_keeper->aic_keeper_waitQ,
+		kthread_should_stop());
+
+	TRACE_MGMT_DBG("AIC %p keeper thread %s (PID %d) finished", aic_keeper,
+		current->comm, current->pid);
+	return 0;
+}
+
+/* scst_mutex supposed to be held */
+int scst_tgt_dev_setup_threads(struct scst_tgt_dev *tgt_dev)
+{
+	int res = 0;
+	struct scst_device *dev = tgt_dev->dev;
+	struct scst_async_io_context_keeper *aic_keeper;
+
+	if (dev->threads_num <= 0) {
+		tgt_dev->active_cmd_threads = &scst_main_cmd_threads;
+
+		if (dev->threads_num == 0) {
+			struct scst_tgt_dev *shared_io_tgt_dev;
+
+			shared_io_tgt_dev = scst_find_shared_io_tgt_dev(tgt_dev);
+			if (shared_io_tgt_dev != NULL) {
+				aic_keeper = shared_io_tgt_dev->aic_keeper;
+				kref_get(&aic_keeper->aic_keeper_kref);
+
+				TRACE_MGMT_DBG("Linking async io context %p "
+					"for shared tgt_dev %p (dev %s)",
+					aic_keeper->aic, tgt_dev,
+					tgt_dev->dev->virt_name);
+			} else {
+				/* Create new context */
+				aic_keeper = kzalloc(sizeof(*aic_keeper),
+						GFP_KERNEL);
+				if (aic_keeper == NULL) {
+					PRINT_ERROR("Unable to alloc aic_keeper "
+						"(size %zd)", sizeof(*aic_keeper));
+					res = -ENOMEM;
+					goto out;
+				}
+
+				kref_init(&aic_keeper->aic_keeper_kref);
+				init_waitqueue_head(&aic_keeper->aic_keeper_waitQ);
+
+				aic_keeper->aic_keeper_thr =
+					kthread_run(scst_ioc_keeper_thread,
+						aic_keeper, "aic_keeper");
+				if (IS_ERR(aic_keeper->aic_keeper_thr)) {
+					PRINT_ERROR("Error running ioc_keeper "
+						"thread (tgt_dev %p)", tgt_dev);
+					res = PTR_ERR(aic_keeper->aic_keeper_thr);
+					goto out_free_keeper;
+				}
+
+				wait_event(aic_keeper->aic_keeper_waitQ,
+					aic_keeper->aic != NULL);
+
+				TRACE_MGMT_DBG("Created async io context %p "
+				"for non-shared tgt_dev %p (dev %s)",
+					aic_keeper->aic, tgt_dev,
+					tgt_dev->dev->virt_name);
+			}
+
+			tgt_dev->async_io_context = aic_keeper->aic;
+			tgt_dev->aic_keeper = aic_keeper;
+		}
+
+		res = scst_add_threads(tgt_dev->active_cmd_threads, NULL, NULL,
+				tgt_dev->sess->tgt->tgtt->threads_num);
+		goto out;
+	}
+
+	switch (dev->threads_pool_type) {
+	case SCST_THREADS_POOL_PER_INITIATOR:
+	{
+		struct scst_tgt_dev *shared_io_tgt_dev;
+
+		scst_init_threads(&tgt_dev->tgt_dev_cmd_threads);
+
+		tgt_dev->active_cmd_threads = &tgt_dev->tgt_dev_cmd_threads;
+
+		shared_io_tgt_dev = scst_find_shared_io_tgt_dev(tgt_dev);
+		if (shared_io_tgt_dev != NULL) {
+			TRACE_MGMT_DBG("Linking io context %p for "
+				"shared tgt_dev %p (cmd_threads %p)",
+				shared_io_tgt_dev->active_cmd_threads->io_context,
+				tgt_dev, tgt_dev->active_cmd_threads);
+			/* It's ref counted via threads */
+			tgt_dev->active_cmd_threads->io_context =
+				shared_io_tgt_dev->active_cmd_threads->io_context;
+		}
+
+		res = scst_add_threads(tgt_dev->active_cmd_threads, NULL,
+			tgt_dev,
+			dev->threads_num + tgt_dev->sess->tgt->tgtt->threads_num);
+		if (res != 0) {
+			/* Let's clear here, because no threads could be run */
+			tgt_dev->active_cmd_threads->io_context = NULL;
+		}
+		break;
+	}
+	case SCST_THREADS_POOL_SHARED:
+		tgt_dev->active_cmd_threads = &dev->dev_cmd_threads;
+
+		res = scst_add_threads(tgt_dev->active_cmd_threads, dev, NULL,
+			tgt_dev->sess->tgt->tgtt->threads_num);
+		break;
+	default:
+		PRINT_CRIT_ERROR("Unknown threads pool type %d (dev %s)",
+			dev->threads_pool_type, dev->virt_name);
+		BUG();
+		break;
+	}
+
+out:
+	if (res == 0)
+		tm_dbg_init_tgt_dev(tgt_dev);
+	return res;
+
+out_free_keeper:
+	kfree(aic_keeper);
+	goto out;
+}
+
+static void scst_aic_keeper_release(struct kref *kref)
+{
+	struct scst_async_io_context_keeper *aic_keeper;
+
+	aic_keeper = container_of(kref, struct scst_async_io_context_keeper,
+			aic_keeper_kref);
+
+	kthread_stop(aic_keeper->aic_keeper_thr);
+
+	kfree(aic_keeper);
+	return;
+}
+
+/* scst_mutex supposed to be held */
+void scst_tgt_dev_stop_threads(struct scst_tgt_dev *tgt_dev)
+{
+
+	if (tgt_dev->active_cmd_threads == &scst_main_cmd_threads) {
+		/* Global async threads */
+		kref_put(&tgt_dev->aic_keeper->aic_keeper_kref,
+			scst_aic_keeper_release);
+		tgt_dev->async_io_context = NULL;
+		tgt_dev->aic_keeper = NULL;
+	} else if (tgt_dev->active_cmd_threads == &tgt_dev->dev->dev_cmd_threads) {
+		/* Per device shared threads */
+		scst_del_threads(tgt_dev->active_cmd_threads,
+			tgt_dev->sess->tgt->tgtt->threads_num);
+	} else if (tgt_dev->active_cmd_threads == &tgt_dev->tgt_dev_cmd_threads) {
+		/* Per tgt_dev threads */
+		scst_del_threads(tgt_dev->active_cmd_threads, -1);
+		scst_deinit_threads(&tgt_dev->tgt_dev_cmd_threads);
+	} /* else no threads (not yet initialized, e.g.) */
+
+	tm_dbg_deinit_tgt_dev(tgt_dev);
+	tgt_dev->active_cmd_threads = NULL;
+	return;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+static struct scst_tgt_dev *scst_alloc_add_tgt_dev(struct scst_session *sess,
+	struct scst_acg_dev *acg_dev)
+{
+	int ini_sg, ini_unchecked_isa_dma, ini_use_clustering;
+	struct scst_tgt_dev *tgt_dev;
+	struct scst_device *dev = acg_dev->dev;
+	struct list_head *sess_tgt_dev_list_head;
+	int rc, i, sl;
+	uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+
+	tgt_dev = kmem_cache_zalloc(scst_tgtd_cachep, GFP_KERNEL);
+	if (tgt_dev == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s",
+		      "Allocation of scst_tgt_dev failed");
+		goto out;
+	}
+
+	tgt_dev->dev = dev;
+	tgt_dev->lun = acg_dev->lun;
+	tgt_dev->acg_dev = acg_dev;
+	tgt_dev->sess = sess;
+	atomic_set(&tgt_dev->tgt_dev_cmd_count, 0);
+
+	scst_sgv_pool_use_norm(tgt_dev);
+
+	if (dev->scsi_dev != NULL) {
+		ini_sg = dev->scsi_dev->host->sg_tablesize;
+		ini_unchecked_isa_dma = dev->scsi_dev->host->unchecked_isa_dma;
+		ini_use_clustering = (dev->scsi_dev->host->use_clustering ==
+				ENABLE_CLUSTERING);
+	} else {
+		ini_sg = (1 << 15) /* infinite */;
+		ini_unchecked_isa_dma = 0;
+		ini_use_clustering = 0;
+	}
+	tgt_dev->max_sg_cnt = min(ini_sg, sess->tgt->sg_tablesize);
+
+	if ((sess->tgt->tgtt->use_clustering || ini_use_clustering) &&
+	    !sess->tgt->tgtt->no_clustering)
+		scst_sgv_pool_use_norm_clust(tgt_dev);
+
+	if (sess->tgt->tgtt->unchecked_isa_dma || ini_unchecked_isa_dma)
+		scst_sgv_pool_use_dma(tgt_dev);
+
+	TRACE_MGMT_DBG("Device %s on SCST lun=%lld",
+	       dev->virt_name, (long long unsigned int)tgt_dev->lun);
+
+	spin_lock_init(&tgt_dev->tgt_dev_lock);
+	INIT_LIST_HEAD(&tgt_dev->UA_list);
+	spin_lock_init(&tgt_dev->thr_data_lock);
+	INIT_LIST_HEAD(&tgt_dev->thr_data_list);
+	spin_lock_init(&tgt_dev->sn_lock);
+	INIT_LIST_HEAD(&tgt_dev->deferred_cmd_list);
+	INIT_LIST_HEAD(&tgt_dev->skipped_sn_list);
+	tgt_dev->curr_sn = (typeof(tgt_dev->curr_sn))(-300);
+	tgt_dev->expected_sn = tgt_dev->curr_sn + 1;
+	tgt_dev->num_free_sn_slots = ARRAY_SIZE(tgt_dev->sn_slots)-1;
+	tgt_dev->cur_sn_slot = &tgt_dev->sn_slots[0];
+	for (i = 0; i < (int)ARRAY_SIZE(tgt_dev->sn_slots); i++)
+		atomic_set(&tgt_dev->sn_slots[i], 0);
+
+	if (dev->handler->parse_atomic &&
+	    (sess->tgt->tgtt->preprocessing_done == NULL)) {
+		if (sess->tgt->tgtt->rdy_to_xfer_atomic)
+			__set_bit(SCST_TGT_DEV_AFTER_INIT_WR_ATOMIC,
+				&tgt_dev->tgt_dev_flags);
+		if (dev->handler->exec_atomic)
+			__set_bit(SCST_TGT_DEV_AFTER_INIT_OTH_ATOMIC,
+				&tgt_dev->tgt_dev_flags);
+	}
+	if (dev->handler->exec_atomic) {
+		if (sess->tgt->tgtt->rdy_to_xfer_atomic)
+			__set_bit(SCST_TGT_DEV_AFTER_RESTART_WR_ATOMIC,
+				&tgt_dev->tgt_dev_flags);
+		__set_bit(SCST_TGT_DEV_AFTER_RESTART_OTH_ATOMIC,
+				&tgt_dev->tgt_dev_flags);
+		__set_bit(SCST_TGT_DEV_AFTER_RX_DATA_ATOMIC,
+			&tgt_dev->tgt_dev_flags);
+	}
+	if (dev->handler->dev_done_atomic &&
+	    sess->tgt->tgtt->xmit_response_atomic) {
+		__set_bit(SCST_TGT_DEV_AFTER_EXEC_ATOMIC,
+			&tgt_dev->tgt_dev_flags);
+	}
+
+	sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+		dev->d_sense, SCST_LOAD_SENSE(scst_sense_reset_UA));
+	scst_alloc_set_UA(tgt_dev, sense_buffer, sl, 0);
+
+	rc = scst_tgt_dev_setup_threads(tgt_dev);
+	if (rc != 0)
+		goto out_free;
+
+	if (dev->handler && dev->handler->attach_tgt) {
+		TRACE_DBG("Calling dev handler's attach_tgt(%p)", tgt_dev);
+		rc = dev->handler->attach_tgt(tgt_dev);
+		TRACE_DBG("%s", "Dev handler's attach_tgt() returned");
+		if (rc != 0) {
+			PRINT_ERROR("Device handler's %s attach_tgt() "
+			    "failed: %d", dev->handler->name, rc);
+			goto out_stop_threads;
+		}
+	}
+
+	spin_lock_bh(&dev->dev_lock);
+	list_add_tail(&tgt_dev->dev_tgt_dev_list_entry, &dev->dev_tgt_dev_list);
+	if (dev->dev_reserved)
+		__set_bit(SCST_TGT_DEV_RESERVED, &tgt_dev->tgt_dev_flags);
+	spin_unlock_bh(&dev->dev_lock);
+
+	sess_tgt_dev_list_head =
+		&sess->sess_tgt_dev_list_hash[HASH_VAL(tgt_dev->lun)];
+	list_add_tail(&tgt_dev->sess_tgt_dev_list_entry,
+		      sess_tgt_dev_list_head);
+
+out:
+	return tgt_dev;
+
+out_stop_threads:
+	scst_tgt_dev_stop_threads(tgt_dev);
+
+out_free:
+	scst_free_all_UA(tgt_dev);
+
+	kmem_cache_free(scst_tgtd_cachep, tgt_dev);
+	tgt_dev = NULL;
+	goto out;
+}
+
+/* No locks other than scst_mutex supposed to be held */
+void scst_nexus_loss(struct scst_tgt_dev *tgt_dev, bool queue_UA)
+{
+
+	scst_clear_reservation(tgt_dev);
+
+	/* With activity suspended the lock isn't needed, but let's be safe */
+	spin_lock_bh(&tgt_dev->tgt_dev_lock);
+	scst_free_all_UA(tgt_dev);
+	memset(tgt_dev->tgt_dev_sense, 0, sizeof(tgt_dev->tgt_dev_sense));
+	spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+	if (queue_UA) {
+		uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+		int sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+				tgt_dev->dev->d_sense,
+				SCST_LOAD_SENSE(scst_sense_nexus_loss_UA));
+		scst_check_set_UA(tgt_dev, sense_buffer, sl, 0);
+	}
+	return;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+static void scst_free_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_device *dev = tgt_dev->dev;
+
+	spin_lock_bh(&dev->dev_lock);
+	list_del(&tgt_dev->dev_tgt_dev_list_entry);
+	spin_unlock_bh(&dev->dev_lock);
+
+	list_del(&tgt_dev->sess_tgt_dev_list_entry);
+
+	scst_clear_reservation(tgt_dev);
+	scst_free_all_UA(tgt_dev);
+
+	if (dev->handler && dev->handler->detach_tgt) {
+		TRACE_DBG("Calling dev handler's detach_tgt(%p)",
+		      tgt_dev);
+		dev->handler->detach_tgt(tgt_dev);
+		TRACE_DBG("%s", "Dev handler's detach_tgt() returned");
+	}
+
+	scst_tgt_dev_stop_threads(tgt_dev);
+
+	BUG_ON(!list_empty(&tgt_dev->thr_data_list));
+
+	kmem_cache_free(scst_tgtd_cachep, tgt_dev);
+	return;
+}
+
+/* scst_mutex supposed to be held */
+int scst_sess_alloc_tgt_devs(struct scst_session *sess)
+{
+	int res = 0;
+	struct scst_acg_dev *acg_dev;
+	struct scst_tgt_dev *tgt_dev;
+
+	list_for_each_entry(acg_dev, &sess->acg->acg_dev_list,
+			acg_dev_list_entry) {
+		tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+		if (tgt_dev == NULL) {
+			res = -ENOMEM;
+			goto out_free;
+		}
+	}
+
+out:
+	return res;
+
+out_free:
+	scst_sess_free_tgt_devs(sess);
+	goto out;
+}
+
+/*
+ * scst_mutex supposed to be held, there must not be parallel activity in this
+ * session.
+ */
+void scst_sess_free_tgt_devs(struct scst_session *sess)
+{
+	int i;
+	struct scst_tgt_dev *tgt_dev, *t;
+
+	/* The session is going down, no users, so no locks */
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			&sess->sess_tgt_dev_list_hash[i];
+		list_for_each_entry_safe(tgt_dev, t, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			scst_free_tgt_dev(tgt_dev);
+		}
+		INIT_LIST_HEAD(sess_tgt_dev_list_head);
+	}
+	return;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_add_dev(struct scst_acg *acg, struct scst_device *dev,
+	uint64_t lun, int read_only, bool gen_scst_report_luns_changed)
+{
+	int res = 0;
+	struct scst_acg_dev *acg_dev;
+	struct scst_tgt_dev *tgt_dev;
+	struct scst_session *sess;
+	LIST_HEAD(tmp_tgt_dev_list);
+
+	INIT_LIST_HEAD(&tmp_tgt_dev_list);
+
+	acg_dev = scst_alloc_acg_dev(acg, dev, lun);
+	if (acg_dev == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+	acg_dev->rd_only = read_only;
+
+	TRACE_DBG("Adding acg_dev %p to acg_dev_list and dev_acg_dev_list",
+		acg_dev);
+	list_add_tail(&acg_dev->acg_dev_list_entry, &acg->acg_dev_list);
+	list_add_tail(&acg_dev->dev_acg_dev_list_entry, &dev->dev_acg_dev_list);
+
+	list_for_each_entry(sess, &acg->acg_sess_list, acg_sess_list_entry) {
+		tgt_dev = scst_alloc_add_tgt_dev(sess, acg_dev);
+		if (tgt_dev == NULL) {
+			res = -ENOMEM;
+			goto out_free;
+		}
+		list_add_tail(&tgt_dev->extra_tgt_dev_list_entry,
+			      &tmp_tgt_dev_list);
+	}
+
+	if (gen_scst_report_luns_changed)
+		scst_report_luns_changed(acg);
+
+	PRINT_INFO("Added device %s to group %s (LUN %lld, "
+		"rd_only %d)", dev->virt_name, acg->acg_name,
+		(long long unsigned int)lun, read_only);
+
+out:
+	return res;
+
+out_free:
+	list_for_each_entry(tgt_dev, &tmp_tgt_dev_list,
+			 extra_tgt_dev_list_entry) {
+		scst_free_tgt_dev(tgt_dev);
+	}
+	scst_free_acg_dev(acg_dev);
+	goto out;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_remove_dev(struct scst_acg *acg, struct scst_device *dev,
+	bool gen_scst_report_luns_changed)
+{
+	int res = 0;
+	struct scst_acg_dev *acg_dev = NULL, *a;
+	struct scst_tgt_dev *tgt_dev, *tt;
+
+	list_for_each_entry(a, &acg->acg_dev_list, acg_dev_list_entry) {
+		if (a->dev == dev) {
+			acg_dev = a;
+			break;
+		}
+	}
+
+	if (acg_dev == NULL) {
+		PRINT_ERROR("Device not found in group %s", acg->acg_name);
+		res = -EINVAL;
+		goto out;
+	}
+
+	list_for_each_entry_safe(tgt_dev, tt, &dev->dev_tgt_dev_list,
+			 dev_tgt_dev_list_entry) {
+		if (tgt_dev->acg_dev == acg_dev)
+			scst_free_tgt_dev(tgt_dev);
+	}
+	scst_free_acg_dev(acg_dev);
+
+	if (gen_scst_report_luns_changed)
+		scst_report_luns_changed(acg);
+
+	PRINT_INFO("Removed device %s from group %s", dev->virt_name,
+		acg->acg_name);
+
+out:
+	return res;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+int scst_acg_add_name(struct scst_acg *acg, const char *name)
+{
+	int res = 0;
+	struct scst_acn *n;
+	int len;
+	char *nm;
+
+	list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+		if (strcmp(n->name, name) == 0) {
+			PRINT_ERROR("Name %s already exists in group %s",
+				name, acg->acg_name);
+			res = -EEXIST;
+			goto out;
+		}
+	}
+
+	n = kmalloc(sizeof(*n), GFP_KERNEL);
+	if (n == NULL) {
+		PRINT_ERROR("%s", "Unable to allocate scst_acn");
+		res = -ENOMEM;
+		goto out;
+	}
+
+	len = strlen(name);
+	nm = kmalloc(len + 1, GFP_KERNEL);
+	if (nm == NULL) {
+		PRINT_ERROR("%s", "Unable to allocate scst_acn->name");
+		res = -ENOMEM;
+		goto out_free;
+	}
+
+	strcpy(nm, name);
+	n->name = nm;
+
+	res = scst_create_acn_sysfs(acg, n);
+	if (res != 0)
+		goto out_free_nm;
+
+	list_add_tail(&n->acn_list_entry, &acg->acn_list);
+
+out:
+	if (res == 0) {
+		PRINT_INFO("Added name '%s' to group '%s'", name, acg->acg_name);
+		scst_check_reassign_sessions();
+	}
+	return res;
+
+out_free_nm:
+	kfree(nm);
+
+out_free:
+	kfree(n);
+	goto out;
+}
+
+/* The activity supposed to be suspended and scst_mutex held */
+struct scst_acn *scst_acg_find_name(struct scst_acg *acg, const char *name)
+{
+	struct scst_acn *n;
+
+	TRACE_DBG("Trying to find name '%s'", name);
+
+	list_for_each_entry(n, &acg->acn_list, acn_list_entry) {
+		if (strcmp(n->name, name) == 0) {
+			TRACE_DBG("%s", "Found");
+			goto out;
+		}
+	}
+	n = NULL;
+out:
+	return n;
+}
+
+/* scst_mutex supposed to be held */
+void scst_acg_remove_acn(struct scst_acn *acn)
+{
+
+	list_del(&acn->acn_list_entry);
+	kfree(acn->name);
+	kfree(acn);
+	return;
+}
+
+static struct scst_cmd *scst_create_prepare_internal_cmd(
+	struct scst_cmd *orig_cmd, int bufsize)
+{
+	struct scst_cmd *res;
+	gfp_t gfp_mask = scst_cmd_atomic(orig_cmd) ? GFP_ATOMIC : GFP_KERNEL;
+
+	res = scst_alloc_cmd(gfp_mask);
+	if (res == NULL)
+		goto out;
+
+	res->cmd_threads = orig_cmd->cmd_threads;
+	res->sess = orig_cmd->sess;
+	res->atomic = scst_cmd_atomic(orig_cmd);
+	res->internal = 1;
+	res->tgtt = orig_cmd->tgtt;
+	res->tgt = orig_cmd->tgt;
+	res->dev = orig_cmd->dev;
+	res->tgt_dev = orig_cmd->tgt_dev;
+	res->lun = orig_cmd->lun;
+	res->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+	res->data_direction = SCST_DATA_UNKNOWN;
+	res->orig_cmd = orig_cmd;
+	res->bufflen = bufsize;
+
+	scst_sess_get(res->sess);
+	if (res->tgt_dev != NULL)
+		__scst_get(0);
+
+	res->state = SCST_CMD_STATE_PRE_PARSE;
+
+out:
+	return res;
+}
+
+int scst_prepare_request_sense(struct scst_cmd *orig_cmd)
+{
+	int res = 0;
+	static const uint8_t request_sense[6] = {
+		REQUEST_SENSE, 0, 0, 0, SCST_SENSE_BUFFERSIZE, 0
+	};
+	struct scst_cmd *rs_cmd;
+
+	if (orig_cmd->sense != NULL) {
+		TRACE_MEM("Releasing sense %p (orig_cmd %p)",
+			orig_cmd->sense, orig_cmd);
+		mempool_free(orig_cmd->sense, scst_sense_mempool);
+		orig_cmd->sense = NULL;
+	}
+
+	rs_cmd = scst_create_prepare_internal_cmd(orig_cmd,
+			SCST_SENSE_BUFFERSIZE);
+	if (rs_cmd == NULL)
+		goto out_error;
+
+	memcpy(rs_cmd->cdb, request_sense, sizeof(request_sense));
+	rs_cmd->cdb[1] |= scst_get_cmd_dev_d_sense(orig_cmd);
+	rs_cmd->cdb_len = sizeof(request_sense);
+	rs_cmd->data_direction = SCST_DATA_READ;
+	rs_cmd->expected_data_direction = rs_cmd->data_direction;
+	rs_cmd->expected_transfer_len = SCST_SENSE_BUFFERSIZE;
+	rs_cmd->expected_values_set = 1;
+
+	TRACE_MGMT_DBG("Adding REQUEST SENSE cmd %p to head of active "
+		"cmd list", rs_cmd);
+	spin_lock_irq(&rs_cmd->cmd_threads->cmd_list_lock);
+	list_add(&rs_cmd->cmd_list_entry, &rs_cmd->cmd_threads->active_cmd_list);
+	wake_up(&rs_cmd->cmd_threads->cmd_list_waitQ);
+	spin_unlock_irq(&rs_cmd->cmd_threads->cmd_list_lock);
+
+out:
+	return res;
+
+out_error:
+	res = -1;
+	goto out;
+}
+
+static void scst_complete_request_sense(struct scst_cmd *req_cmd)
+{
+	struct scst_cmd *orig_cmd = req_cmd->orig_cmd;
+	uint8_t *buf;
+	int len;
+
+	BUG_ON(orig_cmd == NULL);
+
+	len = scst_get_buf_first(req_cmd, &buf);
+
+	if (scsi_status_is_good(req_cmd->status) && (len > 0) &&
+	    SCST_SENSE_VALID(buf) && (!SCST_NO_SENSE(buf))) {
+		PRINT_BUFF_FLAG(TRACE_SCSI, "REQUEST SENSE returned",
+			buf, len);
+		scst_alloc_set_sense(orig_cmd, scst_cmd_atomic(req_cmd), buf,
+			len);
+	} else {
+		PRINT_ERROR("%s", "Unable to get the sense via "
+			"REQUEST SENSE, returning HARDWARE ERROR");
+		scst_set_cmd_error(orig_cmd,
+			SCST_LOAD_SENSE(scst_sense_hardw_error));
+	}
+
+	if (len > 0)
+		scst_put_buf(req_cmd, buf);
+
+	TRACE_MGMT_DBG("Adding orig cmd %p to head of active "
+		"cmd list", orig_cmd);
+	spin_lock_irq(&orig_cmd->cmd_threads->cmd_list_lock);
+	list_add(&orig_cmd->cmd_list_entry, &orig_cmd->cmd_threads->active_cmd_list);
+	wake_up(&orig_cmd->cmd_threads->cmd_list_waitQ);
+	spin_unlock_irq(&orig_cmd->cmd_threads->cmd_list_lock);
+	return;
+}
+
+int scst_finish_internal_cmd(struct scst_cmd *cmd)
+{
+	int res;
+
+	BUG_ON(!cmd->internal);
+
+	if (cmd->cdb[0] == REQUEST_SENSE)
+		scst_complete_request_sense(cmd);
+
+	__scst_cmd_put(cmd);
+
+	res = SCST_CMD_STATE_RES_CONT_NEXT;
+	return res;
+}
+
+static void scst_send_release(struct scst_device *dev)
+{
+	struct scsi_device *scsi_dev;
+	unsigned char cdb[6];
+	uint8_t sense[SCSI_SENSE_BUFFERSIZE];
+	int rc, i;
+
+	if (dev->scsi_dev == NULL)
+		goto out;
+
+	scsi_dev = dev->scsi_dev;
+
+	for (i = 0; i < 5; i++) {
+		memset(cdb, 0, sizeof(cdb));
+		cdb[0] = RELEASE;
+		cdb[1] = (scsi_dev->scsi_level <= SCSI_2) ?
+		    ((scsi_dev->lun << 5) & 0xe0) : 0;
+
+		memset(sense, 0, sizeof(sense));
+
+		TRACE(TRACE_DEBUG | TRACE_SCSI, "%s", "Sending RELEASE req to "
+			"SCSI mid-level");
+		rc = scsi_execute(scsi_dev, cdb, SCST_DATA_NONE, NULL, 0,
+				sense, 15, 0, 0, NULL);
+		TRACE_DBG("RELEASE done: %x", rc);
+
+		if (scsi_status_is_good(rc)) {
+			break;
+		} else {
+			PRINT_ERROR("RELEASE failed: %d", rc);
+			PRINT_BUFFER("RELEASE sense", sense, sizeof(sense));
+			scst_check_internal_sense(dev, rc, sense,
+				sizeof(sense));
+		}
+	}
+
+out:
+	return;
+}
+
+/* scst_mutex supposed to be held */
+static void scst_clear_reservation(struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_device *dev = tgt_dev->dev;
+	int release = 0;
+
+	spin_lock_bh(&dev->dev_lock);
+	if (dev->dev_reserved &&
+	    !test_bit(SCST_TGT_DEV_RESERVED, &tgt_dev->tgt_dev_flags)) {
+		/* This is the one that holds the reservation */
+		struct scst_tgt_dev *tgt_dev_tmp;
+		list_for_each_entry(tgt_dev_tmp, &dev->dev_tgt_dev_list,
+				    dev_tgt_dev_list_entry) {
+			clear_bit(SCST_TGT_DEV_RESERVED,
+				    &tgt_dev_tmp->tgt_dev_flags);
+		}
+		dev->dev_reserved = 0;
+		release = 1;
+	}
+	spin_unlock_bh(&dev->dev_lock);
+
+	if (release)
+		scst_send_release(dev);
+	return;
+}
+
+struct scst_session *scst_alloc_session(struct scst_tgt *tgt, gfp_t gfp_mask,
+	const char *initiator_name)
+{
+	struct scst_session *sess;
+	int i;
+	int len;
+	char *nm;
+
+	sess = kmem_cache_zalloc(scst_sess_cachep, gfp_mask);
+	if (sess == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s",
+		      "Allocation of scst_session failed");
+		goto out;
+	}
+
+	sess->init_phase = SCST_SESS_IPH_INITING;
+	sess->shut_phase = SCST_SESS_SPH_READY;
+	atomic_set(&sess->refcnt, 0);
+	for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+		struct list_head *sess_tgt_dev_list_head =
+			 &sess->sess_tgt_dev_list_hash[i];
+		INIT_LIST_HEAD(sess_tgt_dev_list_head);
+	}
+	spin_lock_init(&sess->sess_list_lock);
+	INIT_LIST_HEAD(&sess->sess_cmd_list);
+	sess->tgt = tgt;
+	INIT_LIST_HEAD(&sess->init_deferred_cmd_list);
+	INIT_LIST_HEAD(&sess->init_deferred_mcmd_list);
+	INIT_DELAYED_WORK(&sess->hw_pending_work,
+		(void (*)(struct work_struct *))scst_hw_pending_work_fn);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+	spin_lock_init(&sess->lat_lock);
+#endif
+
+	len = strlen(initiator_name);
+	nm = kmalloc(len + 1, gfp_mask);
+	if (nm == NULL) {
+		PRINT_ERROR("%s", "Unable to allocate sess->initiator_name");
+		goto out_free;
+	}
+
+	strcpy(nm, initiator_name);
+	sess->initiator_name = nm;
+
+out:
+	return sess;
+
+out_free:
+	kmem_cache_free(scst_sess_cachep, sess);
+	sess = NULL;
+	goto out;
+}
+
+void scst_free_session(struct scst_session *sess)
+{
+
+	mutex_lock(&scst_mutex);
+
+	TRACE_DBG("Removing sess %p from the list", sess);
+	list_del(&sess->sess_list_entry);
+	TRACE_DBG("Removing session %p from acg %s", sess, sess->acg->acg_name);
+	list_del(&sess->acg_sess_list_entry);
+
+	scst_sess_free_tgt_devs(sess);
+
+	/* Called under lock to protect from too early tgt release */
+	wake_up_all(&sess->tgt->unreg_waitQ);
+
+	mutex_unlock(&scst_mutex);
+
+	scst_sess_sysfs_put(sess); /* must not be called under scst_mutex */
+	return;
+}
+
+void scst_release_session(struct scst_session *sess)
+{
+
+	kfree(sess->initiator_name);
+	kmem_cache_free(scst_sess_cachep, sess);
+	return;
+}
+
+void scst_free_session_callback(struct scst_session *sess)
+{
+	struct completion *c;
+
+	TRACE_DBG("Freeing session %p", sess);
+
+	cancel_delayed_work_sync(&sess->hw_pending_work);
+
+	c = sess->shutdown_compl;
+
+	if (sess->unreg_done_fn) {
+		TRACE_DBG("Calling unreg_done_fn(%p)", sess);
+		sess->unreg_done_fn(sess);
+		TRACE_DBG("%s", "unreg_done_fn() returned");
+	}
+	scst_free_session(sess);
+
+	if (c)
+		complete_all(c);
+	return;
+}
+
+void scst_sched_session_free(struct scst_session *sess)
+{
+	unsigned long flags;
+
+	if (sess->shut_phase != SCST_SESS_SPH_SHUTDOWN) {
+		PRINT_CRIT_ERROR("session %p is going to shut down with unknown "
+			"shut phase %lx", sess, sess->shut_phase);
+		BUG();
+	}
+
+	spin_lock_irqsave(&scst_mgmt_lock, flags);
+	TRACE_DBG("Adding sess %p to scst_sess_shut_list", sess);
+	list_add_tail(&sess->sess_shut_list_entry, &scst_sess_shut_list);
+	spin_unlock_irqrestore(&scst_mgmt_lock, flags);
+
+	wake_up(&scst_mgmt_waitQ);
+	return;
+}
+
+/**
+ * scst_cmd_get() - increase command's reference counter
+ */
+void scst_cmd_get(struct scst_cmd *cmd)
+{
+	__scst_cmd_get(cmd);
+}
+EXPORT_SYMBOL_GPL(scst_cmd_get);
+
+/**
+ * scst_cmd_put() - decrease command's reference counter
+ */
+void scst_cmd_put(struct scst_cmd *cmd)
+{
+	__scst_cmd_put(cmd);
+}
+EXPORT_SYMBOL_GPL(scst_cmd_put);
+
+struct scst_cmd *scst_alloc_cmd(gfp_t gfp_mask)
+{
+	struct scst_cmd *cmd;
+
+	cmd = kmem_cache_zalloc(scst_cmd_cachep, gfp_mask);
+	if (cmd == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of scst_cmd failed");
+		goto out;
+	}
+
+	cmd->state = SCST_CMD_STATE_INIT_WAIT;
+	cmd->start_time = jiffies;
+	atomic_set(&cmd->cmd_ref, 1);
+	cmd->cmd_threads = &scst_main_cmd_threads;
+	INIT_LIST_HEAD(&cmd->mgmt_cmd_list);
+	cmd->queue_type = SCST_CMD_QUEUE_SIMPLE;
+	cmd->timeout = SCST_DEFAULT_TIMEOUT;
+	cmd->retries = 0;
+	cmd->data_len = -1;
+	cmd->is_send_status = 1;
+	cmd->resp_data_len = -1;
+
+	cmd->dbl_ua_orig_data_direction = SCST_DATA_UNKNOWN;
+	cmd->dbl_ua_orig_resp_data_len = -1;
+
+out:
+	return cmd;
+}
+
+static void scst_destroy_put_cmd(struct scst_cmd *cmd)
+{
+	scst_sess_put(cmd->sess);
+
+	/*
+	 * At this point tgt_dev can be dead, but the pointer remains non-NULL
+	 */
+	if (likely(cmd->tgt_dev != NULL))
+		__scst_put();
+
+	scst_destroy_cmd(cmd);
+	return;
+}
+
+/* No locks supposed to be held */
+void scst_free_cmd(struct scst_cmd *cmd)
+{
+	int destroy = 1;
+
+	TRACE_DBG("Freeing cmd %p (tag %llu)",
+		  cmd, (long long unsigned int)cmd->tag);
+
+	if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags))) {
+		TRACE_MGMT_DBG("Freeing aborted cmd %p (scst_cmd_count %d)",
+			cmd, atomic_read(&scst_cmd_count));
+	}
+
+	BUG_ON(cmd->inc_blocking || cmd->needs_unblocking ||
+		cmd->dec_on_dev_needed);
+
+	/*
+	 * The target driver can have already freed the sg buffer before
+	 * calling scst_tgt_cmd_done(). E.g., scst_local has to do that.
+	 */
+	if (!cmd->tgt_data_buf_alloced)
+		scst_check_restore_sg_buff(cmd);
+
+	if ((cmd->tgtt->on_free_cmd != NULL) && likely(!cmd->internal)) {
+		TRACE_DBG("Calling target's on_free_cmd(%p)", cmd);
+		scst_set_cur_start(cmd);
+		cmd->tgtt->on_free_cmd(cmd);
+		scst_set_tgt_on_free_time(cmd);
+		TRACE_DBG("%s", "Target's on_free_cmd() returned");
+	}
+
+	if (likely(cmd->dev != NULL)) {
+		struct scst_dev_type *handler = cmd->dev->handler;
+		if (handler->on_free_cmd != NULL) {
+			TRACE_DBG("Calling dev handler %s on_free_cmd(%p)",
+				handler->name, cmd);
+			scst_set_cur_start(cmd);
+			handler->on_free_cmd(cmd);
+			scst_set_dev_on_free_time(cmd);
+			TRACE_DBG("Dev handler %s on_free_cmd() returned",
+				handler->name);
+		}
+	}
+
+	scst_release_space(cmd);
+
+	if (unlikely(cmd->sense != NULL)) {
+		TRACE_MEM("Releasing sense %p (cmd %p)", cmd->sense, cmd);
+		mempool_free(cmd->sense, scst_sense_mempool);
+		cmd->sense = NULL;
+	}
+
+	if (likely(cmd->tgt_dev != NULL)) {
+#ifdef CONFIG_SCST_EXTRACHECKS
+		if (unlikely(!cmd->sent_for_exec) && !cmd->internal) {
+			PRINT_ERROR("Finishing not executed cmd %p (opcode "
+			    "%d, target %s, LUN %lld, sn %d, expected_sn %d)",
+			    cmd, cmd->cdb[0], cmd->tgtt->name,
+			    (long long unsigned int)cmd->lun,
+			    cmd->sn, cmd->tgt_dev->expected_sn);
+			scst_unblock_deferred(cmd->tgt_dev, cmd);
+		}
+#endif
+
+		if (unlikely(cmd->out_of_sn)) {
+			TRACE_SN("Out of SN cmd %p (tag %llu, sn %d), "
+				"destroy=%d", cmd,
+				(long long unsigned int)cmd->tag,
+				cmd->sn, destroy);
+			destroy = test_and_set_bit(SCST_CMD_CAN_BE_DESTROYED,
+					&cmd->cmd_flags);
+		}
+	}
+
+	if (likely(destroy))
+		scst_destroy_put_cmd(cmd);
+	return;
+}
+
+/* No locks supposed to be held. */
+void scst_check_retries(struct scst_tgt *tgt)
+{
+	int need_wake_up = 0;
+
+	/*
+	 * We don't worry about overflow of finished_cmds, because we check
+	 * only for its change.
+	 */
+	atomic_inc(&tgt->finished_cmds);
+	/* See comment in scst_queue_retry_cmd() */
+	smp_mb__after_atomic_inc();
+	if (unlikely(tgt->retry_cmds > 0)) {
+		struct scst_cmd *c, *tc;
+		unsigned long flags;
+
+		TRACE_RETRY("Checking retry cmd list (retry_cmds %d)",
+		      tgt->retry_cmds);
+
+		spin_lock_irqsave(&tgt->tgt_lock, flags);
+		list_for_each_entry_safe(c, tc, &tgt->retry_cmd_list,
+				cmd_list_entry) {
+			tgt->retry_cmds--;
+
+			TRACE_RETRY("Moving retry cmd %p to head of active "
+				"cmd list (retry_cmds left %d)",
+				c, tgt->retry_cmds);
+			spin_lock(&c->cmd_threads->cmd_list_lock);
+			list_move(&c->cmd_list_entry,
+				  &c->cmd_threads->active_cmd_list);
+			wake_up(&c->cmd_threads->cmd_list_waitQ);
+			spin_unlock(&c->cmd_threads->cmd_list_lock);
+
+			need_wake_up++;
+			if (need_wake_up >= 2) /* "slow start" */
+				break;
+		}
+		spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+	}
+	return;
+}
+
+static void scst_tgt_retry_timer_fn(unsigned long arg)
+{
+	struct scst_tgt *tgt = (struct scst_tgt *)arg;
+	unsigned long flags;
+
+	TRACE_RETRY("Retry timer expired (retry_cmds %d)", tgt->retry_cmds);
+
+	spin_lock_irqsave(&tgt->tgt_lock, flags);
+	tgt->retry_timer_active = 0;
+	spin_unlock_irqrestore(&tgt->tgt_lock, flags);
+
+	scst_check_retries(tgt);
+	return;
+}
+
+struct scst_mgmt_cmd *scst_alloc_mgmt_cmd(gfp_t gfp_mask)
+{
+	struct scst_mgmt_cmd *mcmd;
+
+	mcmd = mempool_alloc(scst_mgmt_mempool, gfp_mask);
+	if (mcmd == NULL) {
+		PRINT_CRIT_ERROR("%s", "Allocation of management command "
+			"failed, some commands and their data could leak");
+		goto out;
+	}
+	memset(mcmd, 0, sizeof(*mcmd));
+
+out:
+	return mcmd;
+}
+
+void scst_free_mgmt_cmd(struct scst_mgmt_cmd *mcmd)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&mcmd->sess->sess_list_lock, flags);
+	atomic_dec(&mcmd->sess->sess_cmd_count);
+	spin_unlock_irqrestore(&mcmd->sess->sess_list_lock, flags);
+
+	scst_sess_put(mcmd->sess);
+
+	if (mcmd->mcmd_tgt_dev != NULL)
+		__scst_put();
+
+	mempool_free(mcmd, scst_mgmt_mempool);
+	return;
+}
+
+static bool is_report_sg_limitation(void)
+{
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	return (trace_flag & TRACE_OUT_OF_MEM) != 0;
+#else
+	return false;
+#endif
+}
+
+int scst_alloc_space(struct scst_cmd *cmd)
+{
+	gfp_t gfp_mask;
+	int res = -ENOMEM;
+	int atomic = scst_cmd_atomic(cmd);
+	int flags;
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+	static int ll;
+
+	gfp_mask = tgt_dev->gfp_mask | (atomic ? GFP_ATOMIC : GFP_KERNEL);
+
+	flags = atomic ? SGV_POOL_NO_ALLOC_ON_CACHE_MISS : 0;
+	if (cmd->no_sgv)
+		flags |= SGV_POOL_ALLOC_NO_CACHED;
+
+	cmd->sg = sgv_pool_alloc(tgt_dev->pool, cmd->bufflen, gfp_mask, flags,
+			&cmd->sg_cnt, &cmd->sgv, &cmd->dev->dev_mem_lim, NULL);
+	if (cmd->sg == NULL)
+		goto out;
+
+	if (unlikely(cmd->sg_cnt > tgt_dev->max_sg_cnt)) {
+		if ((ll < 10) || is_report_sg_limitation()) {
+			PRINT_INFO("Unable to complete command due to "
+				"SG IO count limitation (requested %d, "
+				"available %d, tgt lim %d)", cmd->sg_cnt,
+				tgt_dev->max_sg_cnt, cmd->tgt->sg_tablesize);
+			ll++;
+		}
+		goto out_sg_free;
+	}
+
+	if (cmd->data_direction != SCST_DATA_BIDI)
+		goto success;
+
+	cmd->in_sg = sgv_pool_alloc(tgt_dev->pool, cmd->in_bufflen, gfp_mask,
+			 flags, &cmd->in_sg_cnt, &cmd->in_sgv,
+			 &cmd->dev->dev_mem_lim, NULL);
+	if (cmd->in_sg == NULL)
+		goto out_sg_free;
+
+	if (unlikely(cmd->in_sg_cnt > tgt_dev->max_sg_cnt)) {
+		if ((ll < 10)  || is_report_sg_limitation()) {
+			PRINT_INFO("Unable to complete command due to "
+				"SG IO count limitation (IN buffer, requested "
+				"%d, available %d, tgt lim %d)", cmd->in_sg_cnt,
+				tgt_dev->max_sg_cnt, cmd->tgt->sg_tablesize);
+			ll++;
+		}
+		goto out_in_sg_free;
+	}
+
+success:
+	res = 0;
+
+out:
+	return res;
+
+out_in_sg_free:
+	sgv_pool_free(cmd->in_sgv, &cmd->dev->dev_mem_lim);
+	cmd->in_sgv = NULL;
+	cmd->in_sg = NULL;
+	cmd->in_sg_cnt = 0;
+
+out_sg_free:
+	sgv_pool_free(cmd->sgv, &cmd->dev->dev_mem_lim);
+	cmd->sgv = NULL;
+	cmd->sg = NULL;
+	cmd->sg_cnt = 0;
+	goto out;
+}
+
+static void scst_release_space(struct scst_cmd *cmd)
+{
+
+	if (cmd->sgv == NULL) {
+		if ((cmd->sg != NULL) &&
+		    !(cmd->tgt_data_buf_alloced || cmd->dh_data_buf_alloced)) {
+			TRACE_MEM("Freeing sg %p for cmd %p (cnt %d)", cmd->sg,
+				cmd, cmd->sg_cnt);
+			scst_free(cmd->sg, cmd->sg_cnt);
+			goto out_zero;
+		} else
+			goto out;
+	}
+
+	if (cmd->tgt_data_buf_alloced || cmd->dh_data_buf_alloced) {
+		TRACE_MEM("%s", "*data_buf_alloced set, returning");
+		goto out;
+	}
+
+	if (cmd->in_sgv != NULL) {
+		sgv_pool_free(cmd->in_sgv, &cmd->dev->dev_mem_lim);
+		cmd->in_sgv = NULL;
+		cmd->in_sg_cnt = 0;
+		cmd->in_sg = NULL;
+		cmd->in_bufflen = 0;
+	}
+
+	sgv_pool_free(cmd->sgv, &cmd->dev->dev_mem_lim);
+
+out_zero:
+	cmd->sgv = NULL;
+	cmd->sg_cnt = 0;
+	cmd->sg = NULL;
+	cmd->bufflen = 0;
+	cmd->data_len = 0;
+
+out:
+	return;
+}
+
+static void scsi_end_async(struct request *req, int error)
+{
+	struct scsi_io_context *sioc = req->end_io_data;
+
+	TRACE_DBG("sioc %p, cmd %p", sioc, sioc->data);
+
+	if (sioc->done)
+		sioc->done(sioc->data, sioc->sense, req->errors, req->resid_len);
+
+	if (!sioc->full_cdb_used)
+		kmem_cache_free(scsi_io_context_cache, sioc);
+	else
+		kfree(sioc);
+
+	__blk_put_request(req->q, req);
+	return;
+}
+
+/**
+ * scst_scsi_exec_async - executes a SCSI command in pass-through mode
+ * @cmd:	scst command
+ * @done:	callback function when done
+ */
+int scst_scsi_exec_async(struct scst_cmd *cmd,
+		       void (*done)(void *, char *, int, int))
+{
+	int res = 0;
+	struct request_queue *q = cmd->dev->scsi_dev->request_queue;
+	struct request *rq;
+	struct scsi_io_context *sioc;
+	int write = (cmd->data_direction & SCST_DATA_WRITE) ? WRITE : READ;
+	gfp_t gfp = scst_cmd_atomic(cmd) ? GFP_ATOMIC : GFP_KERNEL;
+	int cmd_len = cmd->cdb_len;
+
+	if (cmd->ext_cdb_len == 0) {
+		TRACE_DBG("Simple CDB (cmd_len %d)", cmd_len);
+		sioc = kmem_cache_zalloc(scsi_io_context_cache, gfp);
+		if (sioc == NULL) {
+			res = -ENOMEM;
+			goto out;
+		}
+	} else {
+		cmd_len += cmd->ext_cdb_len;
+
+		TRACE_DBG("Extended CDB (cmd_len %d)", cmd_len);
+
+		sioc = kzalloc(sizeof(*sioc) + cmd_len, gfp);
+		if (sioc == NULL) {
+			res = -ENOMEM;
+			goto out;
+		}
+
+		sioc->full_cdb_used = 1;
+
+		memcpy(sioc->full_cdb, cmd->cdb, cmd->cdb_len);
+		memcpy(&sioc->full_cdb[cmd->cdb_len], cmd->ext_cdb,
+			cmd->ext_cdb_len);
+	}
+
+	rq = blk_get_request(q, write, gfp);
+	if (rq == NULL) {
+		res = -ENOMEM;
+		goto out_free_sioc;
+	}
+
+	rq->cmd_type = REQ_TYPE_BLOCK_PC;
+	rq->cmd_flags |= REQ_QUIET;
+
+	if (cmd->sg != NULL) {
+		res = blk_rq_map_kern_sg(rq, cmd->sg, cmd->sg_cnt, gfp);
+		if (res) {
+			TRACE_DBG("blk_rq_map_kern_sg() failed: %d", res);
+			goto out_free_rq;
+		}
+	}
+
+	if (cmd->data_direction == SCST_DATA_BIDI) {
+		struct request *next_rq;
+
+		if (!test_bit(QUEUE_FLAG_BIDI, &q->queue_flags)) {
+			res = -EOPNOTSUPP;
+			goto out_free_unmap;
+		}
+
+		next_rq = blk_get_request(q, READ, gfp);
+		if (next_rq == NULL) {
+			res = -ENOMEM;
+			goto out_free_unmap;
+		}
+		rq->next_rq = next_rq;
+		next_rq->cmd_type = rq->cmd_type;
+
+		res = blk_rq_map_kern_sg(next_rq, cmd->in_sg,
+			cmd->in_sg_cnt, gfp);
+		if (res != 0)
+			goto out_free_unmap;
+	}
+
+	TRACE_DBG("sioc %p, cmd %p", sioc, cmd);
+
+	sioc->data = cmd;
+	sioc->done = done;
+
+	rq->cmd_len = cmd_len;
+	if (cmd->ext_cdb_len == 0) {
+		memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
+		memcpy(rq->cmd, cmd->cdb, cmd->cdb_len);
+	} else
+		rq->cmd = sioc->full_cdb;
+
+	rq->sense = sioc->sense;
+	rq->sense_len = sizeof(sioc->sense);
+	rq->timeout = cmd->timeout;
+	rq->retries = cmd->retries;
+	rq->end_io_data = sioc;
+
+	blk_execute_rq_nowait(rq->q, NULL, rq,
+		(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE), scsi_end_async);
+out:
+	return res;
+
+out_free_unmap:
+	if (rq->next_rq != NULL) {
+		blk_put_request(rq->next_rq);
+		rq->next_rq = NULL;
+	}
+	blk_rq_unmap_kern_sg(rq, res);
+
+out_free_rq:
+	blk_put_request(rq);
+
+out_free_sioc:
+	if (!sioc->full_cdb_used)
+		kmem_cache_free(scsi_io_context_cache, sioc);
+	else
+		kfree(sioc);
+	goto out;
+}
+
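+/*
+ * Usage sketch for scst_scsi_exec_async() (illustrative only, not
+ * compiled; my_done() and my_submit() are hypothetical names):
+ */
+#if 0
+static void my_done(void *data, char *sense, int result, int resid)
+{
+	struct scst_cmd *cmd = data;
+
+	/* Inspect result and sense here, then resume processing of cmd */
+}
+
+static int my_submit(struct scst_cmd *cmd)
+{
+	return scst_scsi_exec_async(cmd, my_done);
+}
+#endif
+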
+/**
+ * scst_copy_sg() - copy data between the command's SGs
+ *
+ * Copies data between cmd->tgt_sg and cmd->sg in direction defined by
+ * copy_dir parameter.
+ */
+void scst_copy_sg(struct scst_cmd *cmd, enum scst_sg_copy_dir copy_dir)
+{
+	struct scatterlist *src_sg, *dst_sg;
+	unsigned int to_copy;
+	int atomic = scst_cmd_atomic(cmd);
+
+	if (copy_dir == SCST_SG_COPY_FROM_TARGET) {
+		if (cmd->data_direction != SCST_DATA_BIDI) {
+			src_sg = cmd->tgt_sg;
+			dst_sg = cmd->sg;
+			to_copy = cmd->bufflen;
+		} else {
+			TRACE_MEM("BIDI cmd %p", cmd);
+			src_sg = cmd->tgt_in_sg;
+			dst_sg = cmd->in_sg;
+			to_copy = cmd->in_bufflen;
+		}
+	} else {
+		src_sg = cmd->sg;
+		dst_sg = cmd->tgt_sg;
+		to_copy = cmd->resp_data_len;
+	}
+
+	TRACE_MEM("cmd %p, copy_dir %d, src_sg %p, dst_sg %p, to_copy %lld",
+		cmd, copy_dir, src_sg, dst_sg, (long long)to_copy);
+
+	if (unlikely(src_sg == NULL) || unlikely(dst_sg == NULL)) {
+		/*
+		 * It can happen, e.g., with scst_user for a cmd with delayed
+		 * alloc that failed with a Check Condition.
+		 */
+		goto out;
+	}
+
+	sg_copy(dst_sg, src_sg, 0, to_copy,
+		atomic ? KM_SOFTIRQ0 : KM_USER0,
+		atomic ? KM_SOFTIRQ1 : KM_USER1);
+
+out:
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_copy_sg);
+
+static const int SCST_CDB_LENGTH[8] = { 6, 10, 10, -1, 16, 12, -1, -1 };
+
+#define SCST_CDB_GROUP(opcode)   ((opcode >> 5) & 0x7)
+#define SCST_GET_CDB_LEN(opcode) SCST_CDB_LENGTH[SCST_CDB_GROUP(opcode)]
+
+int scst_get_cdb_len(const uint8_t *cdb)
+{
+	return SCST_GET_CDB_LEN(cdb[0]);
+}
+
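+/*
+ * Worked example for the lookup above: for READ(10) (opcode 0x28)
+ * SCST_CDB_GROUP(0x28) == (0x28 >> 5) & 0x7 == 1, so
+ * SCST_GET_CDB_LEN(0x28) == SCST_CDB_LENGTH[1] == 10 bytes. Groups 3,
+ * 6 and 7 (reserved/vendor specific) yield -1.
+ */
+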
+/* get_trans_len_x extracts x bytes from the CDB as a length, starting at off */
+
+static int get_trans_cdb_len_10(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->cdb_len = 10;
+	cmd->bufflen = 0;
+	return 0;
+}
+
+static int get_trans_len_block_limit(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = 6;
+	return 0;
+}
+
+static int get_trans_len_read_capacity(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = READ_CAP_LEN;
+	return 0;
+}
+
+static int get_trans_len_serv_act_in(struct scst_cmd *cmd, uint8_t off)
+{
+	int res = 0;
+
+	if ((cmd->cdb[1] & 0x1f) == SAI_READ_CAPACITY_16) {
+		cmd->op_name = "READ CAPACITY(16)";
+		cmd->bufflen = READ_CAP16_LEN;
+		cmd->op_flags |= SCST_IMPLICIT_HQ|SCST_REG_RESERVE_ALLOWED;
+	} else
+		cmd->op_flags |= SCST_UNKNOWN_LENGTH;
+	return res;
+}
+
+static int get_trans_len_single(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = 1;
+	return 0;
+}
+
+static int get_trans_len_read_pos(struct scst_cmd *cmd, uint8_t off)
+{
+	uint8_t *p = (uint8_t *)cmd->cdb + off;
+	int res = 0;
+
+	cmd->bufflen = 0;
+	cmd->bufflen |= ((u32)p[0]) << 8;
+	cmd->bufflen |= ((u32)p[1]);
+
+	switch (cmd->cdb[1] & 0x1f) {
+	case 0:
+	case 1:
+	case 6:
+		if (cmd->bufflen != 0) {
+			PRINT_ERROR("READ POSITION: Invalid non-zero (%d) "
+				"allocation length for service action %x",
+				cmd->bufflen, cmd->cdb[1] & 0x1f);
+			goto out_inval;
+		}
+		break;
+	}
+
+	switch (cmd->cdb[1] & 0x1f) {
+	case 0:
+	case 1:
+		cmd->bufflen = 20;
+		break;
+	case 6:
+		cmd->bufflen = 32;
+		break;
+	case 8:
+		cmd->bufflen = max(28, cmd->bufflen);
+		break;
+	default:
+		PRINT_ERROR("READ POSITION: Invalid service action %x",
+			cmd->cdb[1] & 0x1f);
+		goto out_inval;
+	}
+
+out:
+	return res;
+
+out_inval:
+	scst_set_cmd_error(cmd,
+		SCST_LOAD_SENSE(scst_sense_invalid_field_in_cdb));
+	res = 1;
+	goto out;
+}
+
+static int get_trans_len_prevent_allow_medium_removal(struct scst_cmd *cmd,
+	uint8_t off)
+{
+	if ((cmd->cdb[4] & 3) == 0)
+		cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+	return 0;
+}
+
+static int get_trans_len_start_stop(struct scst_cmd *cmd, uint8_t off)
+{
+	if ((cmd->cdb[4] & 0xF1) == 0x1)
+		cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+	return 0;
+}
+
+static int get_trans_len_3_read_elem_stat(struct scst_cmd *cmd, uint8_t off)
+{
+	const uint8_t *p = cmd->cdb + off;
+
+	cmd->bufflen = 0;
+	cmd->bufflen |= ((u32)p[0]) << 16;
+	cmd->bufflen |= ((u32)p[1]) << 8;
+	cmd->bufflen |= ((u32)p[2]);
+
+	if ((cmd->cdb[6] & 0x2) == 0x2)
+		cmd->op_flags |= SCST_REG_RESERVE_ALLOWED;
+
+	return 0;
+}
+
+static int get_trans_len_1(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = (u32)cmd->cdb[off];
+	return 0;
+}
+
+static int get_trans_len_1_256(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = (u32)cmd->cdb[off];
+	if (cmd->bufflen == 0)
+		cmd->bufflen = 256;
+	return 0;
+}
+
+static int get_trans_len_2(struct scst_cmd *cmd, uint8_t off)
+{
+	const uint8_t *p = cmd->cdb + off;
+
+	cmd->bufflen = 0;
+	cmd->bufflen |= ((u32)p[0]) << 8;
+	cmd->bufflen |= ((u32)p[1]);
+
+	return 0;
+}
+
+static int get_trans_len_3(struct scst_cmd *cmd, uint8_t off)
+{
+	const uint8_t *p = cmd->cdb + off;
+
+	cmd->bufflen = 0;
+	cmd->bufflen |= ((u32)p[0]) << 16;
+	cmd->bufflen |= ((u32)p[1]) << 8;
+	cmd->bufflen |= ((u32)p[2]);
+
+	return 0;
+}
+
+static int get_trans_len_4(struct scst_cmd *cmd, uint8_t off)
+{
+	const uint8_t *p = cmd->cdb + off;
+
+	cmd->bufflen = 0;
+	cmd->bufflen |= ((u32)p[0]) << 24;
+	cmd->bufflen |= ((u32)p[1]) << 16;
+	cmd->bufflen |= ((u32)p[2]) << 8;
+	cmd->bufflen |= ((u32)p[3]);
+
+	return 0;
+}
+
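+/*
+ * Worked example for the extraction helpers above: with CDB bytes
+ * 0x00 0x00 0x10 0x00 at offset off, get_trans_len_4() yields
+ * bufflen == 0x00001000 == 4096 (CDB lengths are big-endian, most
+ * significant byte first).
+ */
+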
+static int get_trans_len_none(struct scst_cmd *cmd, uint8_t off)
+{
+	cmd->bufflen = 0;
+	return 0;
+}
+
+/**
+ * scst_get_cdb_info() - fill various info about the command's CDB
+ *
+ * Description:
+ *    Fills various info about the command's CDB in the corresponding fields
+ *    in the command.
+ *
+ *    Returns: 0 on success, <0 if command is unknown, >0 if command
+ *    is invalid.
+ */
+int scst_get_cdb_info(struct scst_cmd *cmd)
+{
+	int dev_type = cmd->dev->type;
+	int i, res = 0;
+	uint8_t op;
+	const struct scst_sdbops *ptr = NULL;
+
+	op = cmd->cdb[0];	/* get the opcode */
+
+	TRACE_DBG("opcode=%02x, cdblen=%d bytes, tblsize=%d, "
+		"dev_type=%d", op, SCST_GET_CDB_LEN(op), SCST_CDB_TBL_SIZE,
+		dev_type);
+
+	i = scst_scsi_op_list[op];
+	while (i < SCST_CDB_TBL_SIZE && scst_scsi_op_table[i].ops == op) {
+		if (scst_scsi_op_table[i].devkey[dev_type] != SCST_CDB_NOTSUPP) {
+			ptr = &scst_scsi_op_table[i];
+			TRACE_DBG("op = 0x%02x+'%c%c%c%c%c%c%c%c%c%c'+<%s>",
+			      ptr->ops, ptr->devkey[0],	/* disk     */
+			      ptr->devkey[1],	/* tape     */
+			      ptr->devkey[2],	/* printer */
+			      ptr->devkey[3],	/* cpu      */
+			      ptr->devkey[4],	/* cdr      */
+			      ptr->devkey[5],	/* cdrom    */
+			      ptr->devkey[6],	/* scanner */
+			      ptr->devkey[7],	/* worm     */
+			      ptr->devkey[8],	/* changer */
+			      ptr->devkey[9],	/* commdev */
+			      ptr->op_name);
+			TRACE_DBG("direction=%d flags=%d off=%d",
+			      ptr->direction,
+			      ptr->flags,
+			      ptr->off);
+			break;
+		}
+		i++;
+	}
+
+	if (unlikely(ptr == NULL)) {
+		/* opcode not found or currently not used */
+		TRACE(TRACE_MINOR, "Unknown opcode 0x%x for type %d", op,
+		      dev_type);
+		res = -1;
+		goto out;
+	}
+
+	cmd->cdb_len = SCST_GET_CDB_LEN(op);
+	cmd->op_name = ptr->op_name;
+	cmd->data_direction = ptr->direction;
+	cmd->op_flags = ptr->flags | SCST_INFO_VALID;
+	res = (*ptr->get_trans_len)(cmd, ptr->off);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_get_cdb_info);
+
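+/*
+ * Sketch of handling the three return classes of scst_get_cdb_info()
+ * (illustrative only, not compiled):
+ */
+#if 0
+	rc = scst_get_cdb_info(cmd);
+	if (rc < 0) {
+		/* Unknown opcode: leave it to the dev handler's parse() */
+	} else if (rc > 0) {
+		/* Invalid CDB: sense already set by the get_trans_len_*()
+		 * handler */
+	}
+#endif
+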
+/* Packs SCST LUN back to SCSI form */
+uint64_t scst_pack_lun(const uint64_t lun, unsigned int addr_method)
+{
+
+	uint64_t res = 0;
+	uint16_t *p = (uint16_t *)&res;
+
+	res = lun;
+
+	if ((addr_method == SCST_LUN_ADDR_METHOD_FLAT) && (lun != 0)) {
+		/*
+		 * Flat space: LUNs other than 0 should use the flat space
+		 * addressing method.
+		 */
+		*p = 0x7fff & *p;
+		*p = 0x4000 | *p;
+	}
+	/* Default is to use peripheral device addressing mode */
+
+	*p = cpu_to_be16(*p);
+	return res;
+}
+
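+/*
+ * Worked example for scst_pack_lun() (byte layout shown for a
+ * little-endian host): with the flat space method and lun == 5 the low
+ * 16 bits become 0x4000 | 5 == 0x4005, stored big-endian, so the first
+ * two bytes on the wire are 0x40 0x05. With the peripheral method (or
+ * lun == 0) they are simply 0x00 0x05.
+ */
+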
+/*
+ * Routine to extract a lun number from an 8-byte LUN structure
+ * in network byte order (BE).
+ * (see SAM-2, Section 4.12.3 page 40)
+ * Supports the peripheral, flat space and logical unit addressing methods.
+ */
+uint64_t scst_unpack_lun(const uint8_t *lun, int len)
+{
+	uint64_t res = NO_SUCH_LUN;
+	int address_method;
+
+	TRACE_BUFF_FLAG(TRACE_DEBUG, "Raw LUN", lun, len);
+
+	if (unlikely(len < 2)) {
+		PRINT_ERROR("Illegal lun length %d, expected 2 bytes or "
+			"more", len);
+		goto out;
+	}
+
+	if (len > 2) {
+		switch (len) {
+		case 8:
+			if ((*((uint64_t *)lun) &
+			  __constant_cpu_to_be64(0x0000FFFFFFFFFFFFLL)) != 0)
+				goto out_err;
+			break;
+		case 4:
+			if (*((uint16_t *)&lun[2]) != 0)
+				goto out_err;
+			break;
+		case 6:
+			if (*((uint32_t *)&lun[2]) != 0)
+				goto out_err;
+			break;
+		default:
+			goto out_err;
+		}
+	}
+
+	address_method = (*lun) >> 6;	/* high 2 bits of byte 0 */
+	switch (address_method) {
+	case 0:	/* peripheral device addressing method */
+#if 0
+		if (*lun) {
+			PRINT_ERROR("Illegal BUS IDENTIFIER in LUN "
+			     "peripheral device addressing method 0x%02x, "
+			     "expected 0", *lun);
+			break;
+		}
+		res = *(lun + 1);
+		break;
+#else
+		/*
+		 * Looks like it's legal to use it as flat space addressing
+		 * method as well
+		 */
+
+		/* fall through */
+#endif
+
+	case 1:	/* flat space addressing method */
+		res = *(lun + 1) | (((*lun) & 0x3f) << 8);
+		break;
+
+	case 2:	/* logical unit addressing method */
+		if (*lun & 0x3f) {
+			PRINT_ERROR("Illegal BUS NUMBER in LUN logical unit "
+				    "addressing method 0x%02x, expected 0",
+				    *lun & 0x3f);
+			break;
+		}
+		if (*(lun + 1) & 0xe0) {
+			PRINT_ERROR("Illegal TARGET in LUN logical unit "
+				    "addressing method 0x%02x, expected 0",
+				    (*(lun + 1) & 0xf8) >> 5);
+			break;
+		}
+		res = *(lun + 1) & 0x1f;
+		break;
+
+	case 3:	/* extended logical unit addressing method */
+	default:
+		PRINT_ERROR("Unimplemented LUN addressing method %u",
+			    address_method);
+		break;
+	}
+
+out:
+	return res;
+
+out_err:
+	PRINT_ERROR("%s", "Multi-level LUN unimplemented");
+	goto out;
+}
+
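+/*
+ * Worked example for scst_unpack_lun(): the 8-byte LUN
+ * 0x00 0x05 0x00... uses the peripheral method and unpacks to 5;
+ * 0x40 0x05 0x00... uses the flat space method (the high 2 bits of
+ * byte 0 select method 1, the remaining 14 bits carry the number) and
+ * also unpacks to 5.
+ */
+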
+/**
+ ** Generic parse() support routines.
+ ** Done via pointer on functions to avoid unneeded dereferences on
+ ** the fast path.
+ **/
+
+/**
+ * scst_calc_block_shift() - calculate block shift
+ *
+ * Calculates and returns block shift for the given sector size
+ */
+int scst_calc_block_shift(int sector_size)
+{
+	int block_shift = 0;
+	int t;
+
+	if (sector_size == 0)
+		sector_size = 512;
+
+	t = sector_size;
+	while (1) {
+		if ((t & 1) != 0)
+			break;
+		t >>= 1;
+		block_shift++;
+	}
+	if (block_shift < 9) {
+		PRINT_ERROR("Wrong sector size %d", sector_size);
+		block_shift = -1;
+	}
+	return block_shift;
+}
+EXPORT_SYMBOL_GPL(scst_calc_block_shift);
+
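+/*
+ * Worked examples for scst_calc_block_shift(): 512 -> 9, 4096 -> 12.
+ * A sector size such as 1000 has only three trailing zero bits, so the
+ * computed shift (3) is below 9 and -1 is returned as an error.
+ */
+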
+/**
+ * scst_sbc_generic_parse() - generic SBC parsing
+ *
+ * Generic parse() for SBC (disk) devices
+ */
+int scst_sbc_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd))
+{
+	int res = 0;
+
+	/*
+	 * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+	 * therefore change them only if necessary
+	 */
+
+	TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+	      cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+	switch (cmd->cdb[0]) {
+	case VERIFY_6:
+	case VERIFY:
+	case VERIFY_12:
+	case VERIFY_16:
+		if ((cmd->cdb[1] & BYTCHK) == 0) {
+			cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+			cmd->bufflen = 0;
+			goto set_timeout;
+		} else
+			cmd->data_len = 0;
+		break;
+	default:
+		/* It's all good */
+		break;
+	}
+
+	if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED) {
+		/*
+		 * No need for locks here, since *_detach() can not be
+		 * called, when there are existing commands.
+		 */
+		cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+	}
+
+set_timeout:
+	if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+		cmd->timeout = SCST_GENERIC_DISK_REG_TIMEOUT;
+	else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_DISK_SMALL_TIMEOUT;
+	else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_DISK_LONG_TIMEOUT;
+
+	TRACE_DBG("res %d, bufflen %d, data_len %d, direct %d",
+	      res, cmd->bufflen, cmd->data_len, cmd->data_direction);
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_sbc_generic_parse);
+
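+/*
+ * Worked example for the fixed-length scaling above (assuming, as for
+ * other block-addressed reads, that READ(10) carries
+ * SCST_TRANSFER_LEN_TYPE_FIXED in the op table): TRANSFER LENGTH == 8
+ * on a device with 512-byte blocks (block shift 9) gives
+ * bufflen == 8 << 9 == 4096 bytes.
+ */
+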
+/**
+ * scst_cdrom_generic_parse() - generic MMC parse
+ *
+ * Generic parse() for MMC (cdrom) devices
+ */
+int scst_cdrom_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd))
+{
+	int res = 0;
+
+	/*
+	 * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+	 * therefore change them only if necessary
+	 */
+
+	TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+	      cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+	cmd->cdb[1] &= 0x1f;
+
+	switch (cmd->cdb[0]) {
+	case VERIFY_6:
+	case VERIFY:
+	case VERIFY_12:
+	case VERIFY_16:
+		if ((cmd->cdb[1] & BYTCHK) == 0) {
+			cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+			cmd->bufflen = 0;
+			goto set_timeout;
+		}
+		break;
+	default:
+		/* It's all good */
+		break;
+	}
+
+	if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED)
+		cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+
+set_timeout:
+	if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+		cmd->timeout = SCST_GENERIC_CDROM_REG_TIMEOUT;
+	else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_CDROM_SMALL_TIMEOUT;
+	else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_CDROM_LONG_TIMEOUT;
+
+	TRACE_DBG("res=%d, bufflen=%d, direct=%d", res, cmd->bufflen,
+		cmd->data_direction);
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_cdrom_generic_parse);
+
+/**
+ * scst_modisk_generic_parse() - generic MO parse
+ *
+ * Generic parse() for MO disk devices
+ */
+int scst_modisk_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_shift)(struct scst_cmd *cmd))
+{
+	int res = 0;
+
+	/*
+	 * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+	 * therefore change them only if necessary
+	 */
+
+	TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+	      cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+	cmd->cdb[1] &= 0x1f;
+
+	switch (cmd->cdb[0]) {
+	case VERIFY_6:
+	case VERIFY:
+	case VERIFY_12:
+	case VERIFY_16:
+		if ((cmd->cdb[1] & BYTCHK) == 0) {
+			cmd->data_len = cmd->bufflen << get_block_shift(cmd);
+			cmd->bufflen = 0;
+			goto set_timeout;
+		}
+		break;
+	default:
+		/* It's all good */
+		break;
+	}
+
+	if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED)
+		cmd->bufflen = cmd->bufflen << get_block_shift(cmd);
+
+set_timeout:
+	if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+		cmd->timeout = SCST_GENERIC_MODISK_REG_TIMEOUT;
+	else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_MODISK_SMALL_TIMEOUT;
+	else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_MODISK_LONG_TIMEOUT;
+
+	TRACE_DBG("res=%d, bufflen=%d, direct=%d", res, cmd->bufflen,
+		cmd->data_direction);
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_modisk_generic_parse);
+
+/**
+ * scst_tape_generic_parse() - generic tape parse
+ *
+ * Generic parse() for tape devices
+ */
+int scst_tape_generic_parse(struct scst_cmd *cmd,
+	int (*get_block_size)(struct scst_cmd *cmd))
+{
+	int res = 0;
+
+	/*
+	 * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+	 * therefore change them only if necessary
+	 */
+
+	TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+	      cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+
+	if (cmd->cdb[0] == READ_POSITION) {
+		int tclp = cmd->cdb[1] & 4;
+		int long_bit = cmd->cdb[1] & 2;
+		int bt = cmd->cdb[1] & 1;
+
+		if ((tclp == long_bit) && (!bt || !long_bit)) {
+			cmd->bufflen =
+			    tclp ? POSITION_LEN_LONG : POSITION_LEN_SHORT;
+			cmd->data_direction = SCST_DATA_READ;
+		} else {
+			cmd->bufflen = 0;
+			cmd->data_direction = SCST_DATA_NONE;
+		}
+	}
+
+	if (cmd->op_flags & SCST_TRANSFER_LEN_TYPE_FIXED & cmd->cdb[1])
+		cmd->bufflen = cmd->bufflen * get_block_size(cmd);
+
+	if ((cmd->op_flags & (SCST_SMALL_TIMEOUT | SCST_LONG_TIMEOUT)) == 0)
+		cmd->timeout = SCST_GENERIC_TAPE_REG_TIMEOUT;
+	else if (cmd->op_flags & SCST_SMALL_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_TAPE_SMALL_TIMEOUT;
+	else if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_TAPE_LONG_TIMEOUT;
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_tape_generic_parse);
+
+static int scst_null_parse(struct scst_cmd *cmd)
+{
+	int res = 0;
+
+	/*
+	 * SCST sets good defaults for cmd->data_direction and cmd->bufflen,
+	 * therefore change them only if necessary
+	 */
+
+	TRACE_DBG("op_name <%s> direct %d flags %d transfer_len %d",
+	      cmd->op_name, cmd->data_direction, cmd->op_flags, cmd->bufflen);
+#if 0
+	switch (cmd->cdb[0]) {
+	default:
+		/* It's all good */
+		break;
+	}
+#endif
+	TRACE_DBG("res %d bufflen %d direct %d",
+	      res, cmd->bufflen, cmd->data_direction);
+	return res;
+}
+
+/**
+ * scst_changer_generic_parse() - generic changer parse
+ *
+ * Generic parse() for changer devices
+ */
+int scst_changer_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd))
+{
+	int res = scst_null_parse(cmd);
+
+	if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_CHANGER_LONG_TIMEOUT;
+	else
+		cmd->timeout = SCST_GENERIC_CHANGER_TIMEOUT;
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_changer_generic_parse);
+
+/**
+ * scst_processor_generic_parse() - generic SCSI processor parse
+ *
+ * Generic parse() for SCSI processor devices
+ */
+int scst_processor_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd))
+{
+	int res = scst_null_parse(cmd);
+
+	if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_PROCESSOR_LONG_TIMEOUT;
+	else
+		cmd->timeout = SCST_GENERIC_PROCESSOR_TIMEOUT;
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_processor_generic_parse);
+
+/**
+ * scst_raid_generic_parse() - generic RAID parse
+ *
+ * Generic parse() for RAID devices
+ */
+int scst_raid_generic_parse(struct scst_cmd *cmd,
+	int (*nothing)(struct scst_cmd *cmd))
+{
+	int res = scst_null_parse(cmd);
+
+	if (cmd->op_flags & SCST_LONG_TIMEOUT)
+		cmd->timeout = SCST_GENERIC_RAID_LONG_TIMEOUT;
+	else
+		cmd->timeout = SCST_GENERIC_RAID_TIMEOUT;
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_raid_generic_parse);
+
+/**
+ ** Generic dev_done() support routines.
+ ** Done via function pointers to avoid unneeded dereferences on
+ ** the fast path.
+ **/
+
+/**
+ * scst_block_generic_dev_done() - generic SBC dev_done
+ *
+ * Generic dev_done() for block (SBC) devices
+ */
+int scst_block_generic_dev_done(struct scst_cmd *cmd,
+	void (*set_block_shift)(struct scst_cmd *cmd, int block_shift))
+{
+	int opcode = cmd->cdb[0];
+	int status = cmd->status;
+	int res = SCST_CMD_STATE_DEFAULT;
+
+	/*
+	 * SCST sets good defaults for cmd->is_send_status and
+	 * cmd->resp_data_len based on cmd->status and cmd->data_direction,
+	 * therefore change them only if necessary
+	 */
+
+	if ((status == SAM_STAT_GOOD) || (status == SAM_STAT_CONDITION_MET)) {
+		switch (opcode) {
+		case READ_CAPACITY:
+		{
+			/* Always keep track of disk capacity */
+			int buffer_size, sector_size, sh;
+			uint8_t *buffer;
+
+			buffer_size = scst_get_buf_first(cmd, &buffer);
+			if (unlikely(buffer_size <= 0)) {
+				if (buffer_size < 0) {
+					PRINT_ERROR("%s: Unable to get the "
+					"buffer (%d)", __func__, buffer_size);
+				}
+				goto out;
+			}
+
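+			/*
+			 * READ CAPACITY(10) returns the block length in
+			 * bytes 4..7, big-endian (i.e. the value below is
+			 * equivalent to get_unaligned_be32(&buffer[4])).
+			 */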
+			sector_size =
+			    ((buffer[4] << 24) | (buffer[5] << 16) |
+			     (buffer[6] << 8) | (buffer[7] << 0));
+			scst_put_buf(cmd, buffer);
+			if (sector_size != 0)
+				sh = scst_calc_block_shift(sector_size);
+			else
+				sh = 0;
+			set_block_shift(cmd, sh);
+			TRACE_DBG("block_shift %d", sh);
+			break;
+		}
+		default:
+			/* It's all good */
+			break;
+		}
+	}
+
+	TRACE_DBG("cmd->is_send_status=%x, cmd->resp_data_len=%d, "
+	      "res=%d", cmd->is_send_status, cmd->resp_data_len, res);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_block_generic_dev_done);
+
+/**
+ * scst_tape_generic_dev_done() - generic tape dev done
+ *
+ * Generic dev_done() for tape devices
+ */
+int scst_tape_generic_dev_done(struct scst_cmd *cmd,
+	void (*set_block_size)(struct scst_cmd *cmd, int block_shift))
+{
+	int opcode = cmd->cdb[0];
+	int res = SCST_CMD_STATE_DEFAULT;
+	int buffer_size, bs;
+	uint8_t *buffer = NULL;
+
+	/*
+	 * SCST sets good defaults for cmd->is_send_status and
+	 * cmd->resp_data_len based on cmd->status and cmd->data_direction,
+	 * therefore change them only if necessary
+	 */
+
+	switch (opcode) {
+	case MODE_SENSE:
+	case MODE_SELECT:
+		buffer_size = scst_get_buf_first(cmd, &buffer);
+		if (unlikely(buffer_size <= 0)) {
+			if (buffer_size < 0) {
+				PRINT_ERROR("%s: Unable to get the buffer (%d)",
+					__func__, buffer_size);
+			}
+			goto out;
+		}
+		break;
+	}
+
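+	/*
+	 * Expected mode parameter layout (sketch): a 4-byte MODE SENSE(6)/
+	 * MODE SELECT(6) header, where buffer[3] is the block descriptor
+	 * length (8 when one descriptor is present), followed by an 8-byte
+	 * block descriptor whose bytes 5..7 (buffer[9..11]) hold the
+	 * block length.
+	 */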
+	switch (opcode) {
+	case MODE_SENSE:
+		TRACE_DBG("%s", "MODE_SENSE");
+		if ((cmd->cdb[2] & 0xC0) == 0) {
+			if (buffer[3] == 8) {
+				bs = (buffer[9] << 16) |
+				    (buffer[10] << 8) | buffer[11];
+				set_block_size(cmd, bs);
+			}
+		}
+		break;
+	case MODE_SELECT:
+		TRACE_DBG("%s", "MODE_SELECT");
+		if (buffer[3] == 8) {
+			bs = (buffer[9] << 16) | (buffer[10] << 8) |
+			    (buffer[11]);
+			set_block_size(cmd, bs);
+		}
+		break;
+	default:
+		/* It's all good */
+		break;
+	}
+
+	switch (opcode) {
+	case MODE_SENSE:
+	case MODE_SELECT:
+		scst_put_buf(cmd, buffer);
+		break;
+	}
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_tape_generic_dev_done);
+
+static void scst_check_internal_sense(struct scst_device *dev, int result,
+	uint8_t *sense, int sense_len)
+{
+
+	if (host_byte(result) == DID_RESET) {
+		int sl;
+		TRACE(TRACE_MGMT, "DID_RESET received for device %s, "
+			"triggering reset UA", dev->virt_name);
+		sl = scst_set_sense(sense, sense_len, dev->d_sense,
+			SCST_LOAD_SENSE(scst_sense_reset_UA));
+		scst_dev_check_set_UA(dev, NULL, sense, sl);
+	} else if ((status_byte(result) == CHECK_CONDITION) &&
+		   scst_is_ua_sense(sense, sense_len))
+		scst_dev_check_set_UA(dev, NULL, sense, sense_len);
+	return;
+}
+
+/**
+ * scst_to_dma_dir() - translate SCST's data direction to DMA direction
+ *
+ * Translates SCST's data direction to the DMA direction from the
+ * backend storage's perspective.
+ */
+enum dma_data_direction scst_to_dma_dir(int scst_dir)
+{
+	static const enum dma_data_direction tr_tbl[] = { DMA_NONE,
+		DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL, DMA_NONE };
+
+	return tr_tbl[scst_dir];
+}
+EXPORT_SYMBOL(scst_to_dma_dir);
+
+/**
+ * scst_to_tgt_dma_dir() - translate SCST data direction to DMA direction
+ *
+ * Translates SCST data direction to DMA data direction from the perspective
+ * of the target device.
+ */
+enum dma_data_direction scst_to_tgt_dma_dir(int scst_dir)
+{
+	static const enum dma_data_direction tr_tbl[] = { DMA_NONE,
+		DMA_FROM_DEVICE, DMA_TO_DEVICE, DMA_BIDIRECTIONAL, DMA_NONE };
+
+	return tr_tbl[scst_dir];
+}
+EXPORT_SYMBOL(scst_to_tgt_dma_dir);
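+
+/*
+ * Illustrative example (follows from the tables above): for a WRITE
+ * command (SCST_DATA_WRITE) the backend storage consumes the data, so
+ * scst_to_dma_dir() yields DMA_TO_DEVICE, while the target adapter
+ * DMAs the data it receives from the initiator into memory, so
+ * scst_to_tgt_dma_dir() yields DMA_FROM_DEVICE.
+ */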
+
+/**
+ * scst_obtain_device_parameters() - obtain device control parameters
+ *
+ * Issues a MODE SENSE for the control mode page data and sets the
+ * corresponding dev's parameters from it. Returns 0 on success, nonzero
+ * otherwise.
+ */
+int scst_obtain_device_parameters(struct scst_device *dev)
+{
+	int rc, i;
+	uint8_t cmd[16];
+	uint8_t buffer[4+0x0A];
+	uint8_t sense_buffer[SCSI_SENSE_BUFFERSIZE];
+
+	EXTRACHECKS_BUG_ON(dev->scsi_dev == NULL);
+
+	for (i = 0; i < 5; i++) {
+		/* Get control mode page */
+		memset(cmd, 0, sizeof(cmd));
+#if 0
+		cmd[0] = MODE_SENSE_10;
+		cmd[1] = 0;
+		cmd[2] = 0x0A;
+		cmd[8] = sizeof(buffer); /* it's < 256 */
+#else
+		cmd[0] = MODE_SENSE;
+		cmd[1] = 8; /* DBD */
+		cmd[2] = 0x0A;
+		cmd[4] = sizeof(buffer);
+#endif
+
+		memset(buffer, 0, sizeof(buffer));
+		memset(sense_buffer, 0, sizeof(sense_buffer));
+
+		TRACE(TRACE_SCSI, "%s", "Doing internal MODE_SENSE");
+		rc = scsi_execute(dev->scsi_dev, cmd, SCST_DATA_READ, buffer,
+				sizeof(buffer), sense_buffer, 15, 0, 0, NULL);
+
+		TRACE_DBG("MODE_SENSE done: %x", rc);
+
+		if (scsi_status_is_good(rc)) {
+			int q;
+
+			PRINT_BUFF_FLAG(TRACE_SCSI,
+				"Returned control mode page data",
+				buffer,	sizeof(buffer));
+
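+			/*
+			 * Expected layout (sketch; DBD is set, so there are
+			 * no block descriptors): buffer[0..3] is the MODE
+			 * SENSE(6) parameter header, buffer[4..] the control
+			 * mode page (0x0A), whose byte 2 carries TST (bits
+			 * 7:5) and D_SENSE (bit 2), byte 3 the queue
+			 * algorithm modifier (bits 7:4), byte 4 SWP (bit 3)
+			 * and byte 5 TAS (bit 6).
+			 */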
+			dev->tst = buffer[4+2] >> 5;
+			q = buffer[4+3] >> 4;
+			if (q > SCST_CONTR_MODE_QUEUE_ALG_UNRESTRICTED_REORDER) {
+				PRINT_ERROR("Too big QUEUE ALG %x, dev %s",
+					q, dev->virt_name);
+			}
+			dev->queue_alg = q;
+			dev->swp = (buffer[4+4] & 0x8) >> 3;
+			dev->tas = (buffer[4+5] & 0x40) >> 6;
+			dev->d_sense = (buffer[4+2] & 0x4) >> 2;
+
+			/*
+			 * Unfortunately, the SCSI midlayer doesn't provide a
+			 * way to specify commands' task attributes, so we can
+			 * rely only on the device's restricted reordering.
+			 * The Linux I/O subsystem doesn't reorder pass-through
+			 * (PC) requests.
+			 */
+			dev->has_own_order_mgmt = !dev->queue_alg;
+
+			PRINT_INFO("Device %s: TST %x, QUEUE ALG %x, SWP %x, "
+				"TAS %x, D_SENSE %d, has_own_order_mgmt %d",
+				dev->virt_name, dev->tst, dev->queue_alg,
+				dev->swp, dev->tas, dev->d_sense,
+				dev->has_own_order_mgmt);
+
+			goto out;
+		} else {
+			scst_check_internal_sense(dev, rc, sense_buffer,
+				sizeof(sense_buffer));
+#if 0
+			if ((status_byte(rc) == CHECK_CONDITION) &&
+			    SCST_SENSE_VALID(sense_buffer)) {
+#else
+			/*
+			 * 3ware controller is buggy and returns CONDITION_GOOD
+			 * instead of CHECK_CONDITION
+			 */
+			if (SCST_SENSE_VALID(sense_buffer)) {
+#endif
+				PRINT_BUFF_FLAG(TRACE_SCSI,
+					"Returned sense data",
+					sense_buffer, sizeof(sense_buffer));
+				if (scst_analyze_sense(sense_buffer,
+						sizeof(sense_buffer),
+						SCST_SENSE_KEY_VALID,
+						ILLEGAL_REQUEST, 0, 0)) {
+					PRINT_INFO("Device %s doesn't support "
+						"MODE SENSE", dev->virt_name);
+					break;
+				} else if (scst_analyze_sense(sense_buffer,
+						sizeof(sense_buffer),
+						SCST_SENSE_KEY_VALID,
+						NOT_READY, 0, 0)) {
+					PRINT_ERROR("Device %s not ready",
+						dev->virt_name);
+					break;
+				}
+			} else {
+				PRINT_INFO("Internal MODE SENSE to "
+					"device %s failed: %x",
+					dev->virt_name, rc);
+				PRINT_BUFF_FLAG(TRACE_SCSI, "MODE SENSE sense",
+					sense_buffer, sizeof(sense_buffer));
+				switch (host_byte(rc)) {
+				case DID_RESET:
+				case DID_ABORT:
+				case DID_SOFT_ERROR:
+					break;
+				default:
+					goto brk;
+				}
+				switch (driver_byte(rc)) {
+				case DRIVER_BUSY:
+				case DRIVER_SOFT:
+					break;
+				default:
+					goto brk;
+				}
+			}
+		}
+	}
+brk:
+	PRINT_WARNING("Unable to get device's %s control mode page, using "
+		"existing values/defaults: TST %x, QUEUE ALG %x, SWP %x, "
+		"TAS %x, D_SENSE %d, has_own_order_mgmt %d", dev->virt_name,
+		dev->tst, dev->queue_alg, dev->swp, dev->tas, dev->d_sense,
+		dev->has_own_order_mgmt);
+
+out:
+	return 0;
+}
+EXPORT_SYMBOL_GPL(scst_obtain_device_parameters);
+
+/* Called under dev_lock and BH off */
+void scst_process_reset(struct scst_device *dev,
+	struct scst_session *originator, struct scst_cmd *exclude_cmd,
+	struct scst_mgmt_cmd *mcmd, bool setUA)
+{
+	struct scst_tgt_dev *tgt_dev;
+	struct scst_cmd *cmd, *tcmd;
+
+	/* Clear RESERVE'ation, if necessary */
+	if (dev->dev_reserved) {
+		list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				    dev_tgt_dev_list_entry) {
+			TRACE_MGMT_DBG("Clearing RESERVE'ation for "
+				"tgt_dev LUN %lld",
+				(long long unsigned int)tgt_dev->lun);
+			clear_bit(SCST_TGT_DEV_RESERVED,
+				  &tgt_dev->tgt_dev_flags);
+		}
+		dev->dev_reserved = 0;
+		/*
+		 * There is no need to send RELEASE, since the device is going
+		 * to be reset. Actually, since we can be inside a RESET TM
+		 * function, sending it might even be dangerous.
+		 */
+	}
+
+	dev->dev_double_ua_possible = 1;
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+		dev_tgt_dev_list_entry) {
+		struct scst_session *sess = tgt_dev->sess;
+
+		spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+		scst_free_all_UA(tgt_dev);
+
+		memset(tgt_dev->tgt_dev_sense, 0,
+			sizeof(tgt_dev->tgt_dev_sense));
+
+		spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+
+		spin_lock_irq(&sess->sess_list_lock);
+
+		TRACE_DBG("Searching in sess cmd list (sess=%p)", sess);
+		list_for_each_entry(cmd, &sess->sess_cmd_list,
+					sess_cmd_list_entry) {
+			if (cmd == exclude_cmd)
+				continue;
+			if ((cmd->tgt_dev == tgt_dev) ||
+			    ((cmd->tgt_dev == NULL) &&
+			     (cmd->lun == tgt_dev->lun))) {
+				scst_abort_cmd(cmd, mcmd,
+					(tgt_dev->sess != originator), 0);
+			}
+		}
+		spin_unlock_irq(&sess->sess_list_lock);
+	}
+
+	list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+				blocked_cmd_list_entry) {
+		if (test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)) {
+			list_del(&cmd->blocked_cmd_list_entry);
+			TRACE_MGMT_DBG("Adding aborted blocked cmd %p "
+				"to active cmd list", cmd);
+			spin_lock_irq(&cmd->cmd_threads->cmd_list_lock);
+			list_add_tail(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+			wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+			spin_unlock_irq(&cmd->cmd_threads->cmd_list_lock);
+		}
+	}
+
+	if (setUA) {
+		uint8_t sense_buffer[SCST_STANDARD_SENSE_LEN];
+		int sl = scst_set_sense(sense_buffer, sizeof(sense_buffer),
+			dev->d_sense, SCST_LOAD_SENSE(scst_sense_reset_UA));
+		scst_dev_check_set_local_UA(dev, exclude_cmd, sense_buffer, sl);
+	}
+	return;
+}
+
+/* No locks, no IRQ or IRQ-disabled context allowed */
+int scst_set_pending_UA(struct scst_cmd *cmd)
+{
+	int res = 0, i;
+	struct scst_tgt_dev_UA *UA_entry;
+	bool first = true, global_unlock = false;
+	struct scst_session *sess = cmd->sess;
+
+	TRACE_MGMT_DBG("Setting pending UA cmd %p", cmd);
+
+	spin_lock_bh(&cmd->tgt_dev->tgt_dev_lock);
+
+again:
+	/* UA list could be cleared behind us, so retest */
+	if (list_empty(&cmd->tgt_dev->UA_list)) {
+		TRACE_DBG("%s",
+		      "SCST_TGT_DEV_UA_PENDING set, but UA_list empty");
+		res = -1;
+		goto out_unlock_tgt_dev_lock;
+	}
+
+	UA_entry = list_entry(cmd->tgt_dev->UA_list.next, typeof(*UA_entry),
+			      UA_list_entry);
+
+	TRACE_DBG("next %p UA_entry %p",
+	      cmd->tgt_dev->UA_list.next, UA_entry);
+
+	if (UA_entry->global_UA && first) {
+		TRACE_MGMT_DBG("Global UA %p detected", UA_entry);
+
+		spin_unlock_bh(&cmd->tgt_dev->tgt_dev_lock);
+
+		/*
+		 * The cmd won't allow activities to be suspended, so we
+		 * can access sess->sess_tgt_dev_list_hash without any
+		 * additional protection.
+		 */
+
+		local_bh_disable();
+
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			struct list_head *sess_tgt_dev_list_head =
+				&sess->sess_tgt_dev_list_hash[i];
+			struct scst_tgt_dev *tgt_dev;
+			list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+					sess_tgt_dev_list_entry) {
+				/* Lockdep triggers a false positive here... */
+				spin_lock(&tgt_dev->tgt_dev_lock);
+			}
+		}
+
+		first = false;
+		global_unlock = true;
+		goto again;
+	}
+
+	if (scst_set_cmd_error_sense(cmd, UA_entry->UA_sense_buffer,
+			UA_entry->UA_valid_sense_len) != 0)
+		goto out_unlock;
+
+	cmd->ua_ignore = 1;
+
+	list_del(&UA_entry->UA_list_entry);
+
+	if (UA_entry->global_UA) {
+		for (i = 0; i < TGT_DEV_HASH_SIZE; i++) {
+			struct list_head *sess_tgt_dev_list_head =
+				&sess->sess_tgt_dev_list_hash[i];
+			struct scst_tgt_dev *tgt_dev;
+
+			list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+					sess_tgt_dev_list_entry) {
+				struct scst_tgt_dev_UA *ua;
+				list_for_each_entry(ua, &tgt_dev->UA_list,
+							UA_list_entry) {
+					if (ua->global_UA &&
+					    memcmp(ua->UA_sense_buffer,
+					      UA_entry->UA_sense_buffer,
+					      sizeof(ua->UA_sense_buffer)) == 0) {
+						TRACE_MGMT_DBG("Freeing not "
+							"needed global UA %p",
+							ua);
+						list_del(&ua->UA_list_entry);
+						mempool_free(ua, scst_ua_mempool);
+						break;
+					}
+				}
+			}
+		}
+	}
+
+	mempool_free(UA_entry, scst_ua_mempool);
+
+	if (list_empty(&cmd->tgt_dev->UA_list)) {
+		clear_bit(SCST_TGT_DEV_UA_PENDING,
+			  &cmd->tgt_dev->tgt_dev_flags);
+	}
+
+out_unlock:
+	if (global_unlock) {
+		for (i = TGT_DEV_HASH_SIZE-1; i >= 0; i--) {
+			struct list_head *sess_tgt_dev_list_head =
+				&sess->sess_tgt_dev_list_hash[i];
+			struct scst_tgt_dev *tgt_dev;
+			list_for_each_entry_reverse(tgt_dev, sess_tgt_dev_list_head,
+					sess_tgt_dev_list_entry) {
+				spin_unlock(&tgt_dev->tgt_dev_lock);
+			}
+		}
+
+		local_bh_enable();
+		spin_lock_bh(&cmd->tgt_dev->tgt_dev_lock);
+	}
+
+out_unlock_tgt_dev_lock:
+	spin_unlock_bh(&cmd->tgt_dev->tgt_dev_lock);
+	return res;
+}
+
+/* Called under tgt_dev_lock and BH off */
+static void scst_alloc_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags)
+{
+	struct scst_tgt_dev_UA *UA_entry = NULL;
+
+	UA_entry = mempool_alloc(scst_ua_mempool, GFP_ATOMIC);
+	if (UA_entry == NULL) {
+		PRINT_CRIT_ERROR("%s", "UNIT ATTENTION memory "
+		     "allocation failed. The UNIT ATTENTION "
+		     "on some sessions will be missed");
+		PRINT_BUFFER("Lost UA", sense, sense_len);
+		goto out;
+	}
+	memset(UA_entry, 0, sizeof(*UA_entry));
+
+	UA_entry->global_UA = (flags & SCST_SET_UA_FLAG_GLOBAL) != 0;
+	if (UA_entry->global_UA)
+		TRACE_MGMT_DBG("Queuing global UA %p", UA_entry);
+
+	if (sense_len > (int)sizeof(UA_entry->UA_sense_buffer)) {
+		PRINT_WARNING("Sense truncated (needed %d), consider "
+			"increasing SCST_SENSE_BUFFERSIZE", sense_len);
+		sense_len = sizeof(UA_entry->UA_sense_buffer);
+	}
+	memcpy(UA_entry->UA_sense_buffer, sense, sense_len);
+	UA_entry->UA_valid_sense_len = sense_len;
+
+	set_bit(SCST_TGT_DEV_UA_PENDING, &tgt_dev->tgt_dev_flags);
+
+	TRACE_MGMT_DBG("Adding new UA to tgt_dev %p", tgt_dev);
+
+	if (flags & SCST_SET_UA_FLAG_AT_HEAD)
+		list_add(&UA_entry->UA_list_entry, &tgt_dev->UA_list);
+	else
+		list_add_tail(&UA_entry->UA_list_entry, &tgt_dev->UA_list);
+
+out:
+	return;
+}
+
+/* tgt_dev_lock supposed to be held and BH off */
+static void __scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags)
+{
+	int skip_UA = 0;
+	struct scst_tgt_dev_UA *UA_entry_tmp;
+	int len = min((int)sizeof(UA_entry_tmp->UA_sense_buffer), sense_len);
+
+	list_for_each_entry(UA_entry_tmp, &tgt_dev->UA_list,
+			    UA_list_entry) {
+		if (memcmp(sense, UA_entry_tmp->UA_sense_buffer, len) == 0) {
+			TRACE_MGMT_DBG("%s", "UA already exists");
+			skip_UA = 1;
+			break;
+		}
+	}
+
+	if (skip_UA == 0)
+		scst_alloc_set_UA(tgt_dev, sense, len, flags);
+	return;
+}
+
+void scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags)
+{
+
+	spin_lock_bh(&tgt_dev->tgt_dev_lock);
+	__scst_check_set_UA(tgt_dev, sense, sense_len, flags);
+	spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+	return;
+}
+
+/* Called under dev_lock and BH off */
+void scst_dev_check_set_local_UA(struct scst_device *dev,
+	struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+	struct scst_tgt_dev *tgt_dev, *exclude_tgt_dev = NULL;
+
+	if (exclude != NULL)
+		exclude_tgt_dev = exclude->tgt_dev;
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+			dev_tgt_dev_list_entry) {
+		if (tgt_dev != exclude_tgt_dev)
+			scst_check_set_UA(tgt_dev, sense, sense_len, 0);
+	}
+	return;
+}
+
+/* Called under dev_lock and BH off */
+void __scst_dev_check_set_UA(struct scst_device *dev,
+	struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+
+	TRACE_MGMT_DBG("Processing UA dev %p", dev);
+
+	/* Check for reset UA */
+	if (scst_analyze_sense(sense, sense_len, SCST_SENSE_ASC_VALID,
+				0, SCST_SENSE_ASC_UA_RESET, 0))
+		scst_process_reset(dev,
+				   (exclude != NULL) ? exclude->sess : NULL,
+				   exclude, NULL, false);
+
+	scst_dev_check_set_local_UA(dev, exclude, sense, sense_len);
+	return;
+}
+
+/* Called under tgt_dev_lock or when tgt_dev is unused */
+static void scst_free_all_UA(struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_tgt_dev_UA *UA_entry, *t;
+
+	list_for_each_entry_safe(UA_entry, t,
+				 &tgt_dev->UA_list, UA_list_entry) {
+		TRACE_MGMT_DBG("Clearing UA for tgt_dev LUN %lld",
+			       (long long unsigned int)tgt_dev->lun);
+		list_del(&UA_entry->UA_list_entry);
+		mempool_free(UA_entry, scst_ua_mempool);
+	}
+	INIT_LIST_HEAD(&tgt_dev->UA_list);
+	clear_bit(SCST_TGT_DEV_UA_PENDING, &tgt_dev->tgt_dev_flags);
+	return;
+}
+
+/* No locks */
+struct scst_cmd *__scst_check_deferred_commands(struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_cmd *res = NULL, *cmd, *t;
+	typeof(tgt_dev->expected_sn) expected_sn = tgt_dev->expected_sn;
+
+	spin_lock_irq(&tgt_dev->sn_lock);
+
+	if (unlikely(tgt_dev->hq_cmd_count != 0))
+		goto out_unlock;
+
+restart:
+	list_for_each_entry_safe(cmd, t, &tgt_dev->deferred_cmd_list,
+				sn_cmd_list_entry) {
+		EXTRACHECKS_BUG_ON(cmd->queue_type ==
+			SCST_CMD_QUEUE_HEAD_OF_QUEUE);
+		if (cmd->sn == expected_sn) {
+			TRACE_SN("Deferred command %p (sn %d, set %d) found",
+				cmd, cmd->sn, cmd->sn_set);
+			tgt_dev->def_cmd_count--;
+			list_del(&cmd->sn_cmd_list_entry);
+			if (res == NULL)
+				res = cmd;
+			else {
+				spin_lock(&cmd->cmd_threads->cmd_list_lock);
+				TRACE_SN("Adding cmd %p to active cmd list",
+					cmd);
+				list_add_tail(&cmd->cmd_list_entry,
+					&cmd->cmd_threads->active_cmd_list);
+				wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+				spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+			}
+		}
+	}
+	if (res != NULL)
+		goto out_unlock;
+
+	list_for_each_entry(cmd, &tgt_dev->skipped_sn_list,
+				sn_cmd_list_entry) {
+		EXTRACHECKS_BUG_ON(cmd->queue_type ==
+			SCST_CMD_QUEUE_HEAD_OF_QUEUE);
+		if (cmd->sn == expected_sn) {
+			atomic_t *slot = cmd->sn_slot;
+			/*
+			 * !! At this point any pointer in cmd, except !!
+			 * !! sn_slot and sn_cmd_list_entry, could be	!!
+			 * !! already destroyed				!!
+			 */
+			TRACE_SN("cmd %p (tag %llu) with skipped sn %d found",
+				 cmd,
+				 (long long unsigned int)cmd->tag,
+				 cmd->sn);
+			tgt_dev->def_cmd_count--;
+			list_del(&cmd->sn_cmd_list_entry);
+			spin_unlock_irq(&tgt_dev->sn_lock);
+			if (test_and_set_bit(SCST_CMD_CAN_BE_DESTROYED,
+					     &cmd->cmd_flags))
+				scst_destroy_put_cmd(cmd);
+			scst_inc_expected_sn(tgt_dev, slot);
+			expected_sn = tgt_dev->expected_sn;
+			spin_lock_irq(&tgt_dev->sn_lock);
+			goto restart;
+		}
+	}
+
+out_unlock:
+	spin_unlock_irq(&tgt_dev->sn_lock);
+	return res;
+}
+
+/*****************************************************************
+ ** The following thr_data functions are necessary because the
+ ** kernel doesn't provide a better way to have thread-local
+ ** storage
+ *****************************************************************/
+
+/**
+ * scst_add_thr_data() - add the current thread's local data
+ *
+ * Adds the current thread's local data to tgt_dev
+ * (the data will be local to the tgt_dev and the current thread).
+ */
+void scst_add_thr_data(struct scst_tgt_dev *tgt_dev,
+	struct scst_thr_data_hdr *data,
+	void (*free_fn) (struct scst_thr_data_hdr *data))
+{
+	data->owner_thr = current;
+	atomic_set(&data->ref, 1);
+	EXTRACHECKS_BUG_ON(free_fn == NULL);
+	data->free_fn = free_fn;
+	spin_lock(&tgt_dev->thr_data_lock);
+	list_add_tail(&data->thr_data_list_entry, &tgt_dev->thr_data_list);
+	spin_unlock(&tgt_dev->thr_data_lock);
+}
+EXPORT_SYMBOL_GPL(scst_add_thr_data);
+
+/**
+ * scst_del_all_thr_data() - delete all threads' local data
+ *
+ * Deletes all thread-local data from tgt_dev
+ */
+void scst_del_all_thr_data(struct scst_tgt_dev *tgt_dev)
+{
+	spin_lock(&tgt_dev->thr_data_lock);
+	while (!list_empty(&tgt_dev->thr_data_list)) {
+		struct scst_thr_data_hdr *d = list_entry(
+				tgt_dev->thr_data_list.next, typeof(*d),
+				thr_data_list_entry);
+		list_del(&d->thr_data_list_entry);
+		spin_unlock(&tgt_dev->thr_data_lock);
+		scst_thr_data_put(d);
+		spin_lock(&tgt_dev->thr_data_lock);
+	}
+	spin_unlock(&tgt_dev->thr_data_lock);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_del_all_thr_data);
+
+/**
+ * scst_dev_del_all_thr_data() - delete all threads' local data from device
+ *
+ * Deletes all thread-local data from all tgt_devs of the device
+ */
+void scst_dev_del_all_thr_data(struct scst_device *dev)
+{
+	struct scst_tgt_dev *tgt_dev;
+
+	mutex_lock(&scst_mutex);
+
+	list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+		scst_del_all_thr_data(tgt_dev);
+	}
+
+	mutex_unlock(&scst_mutex);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_dev_del_all_thr_data);
+
+/* thr_data_lock supposed to be held */
+static struct scst_thr_data_hdr *__scst_find_thr_data_locked(
+	struct scst_tgt_dev *tgt_dev, struct task_struct *tsk)
+{
+	struct scst_thr_data_hdr *res = NULL, *d;
+
+	list_for_each_entry(d, &tgt_dev->thr_data_list, thr_data_list_entry) {
+		if (d->owner_thr == tsk) {
+			res = d;
+			scst_thr_data_get(res);
+			break;
+		}
+	}
+	return res;
+}
+
+/**
+ * __scst_find_thr_data() - find the given thread's local data
+ *
+ * Finds the given thread's local data. Returns NULL if not found.
+ */
+struct scst_thr_data_hdr *__scst_find_thr_data(struct scst_tgt_dev *tgt_dev,
+	struct task_struct *tsk)
+{
+	struct scst_thr_data_hdr *res;
+
+	spin_lock(&tgt_dev->thr_data_lock);
+	res = __scst_find_thr_data_locked(tgt_dev, tsk);
+	spin_unlock(&tgt_dev->thr_data_lock);
+
+	return res;
+}
+EXPORT_SYMBOL_GPL(__scst_find_thr_data);
+
+bool scst_del_thr_data(struct scst_tgt_dev *tgt_dev, struct task_struct *tsk)
+{
+	bool res;
+	struct scst_thr_data_hdr *td;
+
+	spin_lock(&tgt_dev->thr_data_lock);
+
+	td = __scst_find_thr_data_locked(tgt_dev, tsk);
+	if (td != NULL) {
+		list_del(&td->thr_data_list_entry);
+		res = true;
+	} else
+		res = false;
+
+	spin_unlock(&tgt_dev->thr_data_lock);
+
+	if (td != NULL) {
+		/* the find() fn also gets it */
+		scst_thr_data_put(td);
+		scst_thr_data_put(td);
+	}
+
+	return res;
+}
+
+/* dev_lock supposed to be held and BH disabled */
+void __scst_block_dev(struct scst_device *dev)
+{
+	dev->block_count++;
+	TRACE_MGMT_DBG("Device BLOCK(new %d), dev %p", dev->block_count, dev);
+}
+
+/* No locks */
+static void scst_block_dev(struct scst_device *dev, int outstanding)
+{
+	spin_lock_bh(&dev->dev_lock);
+	__scst_block_dev(dev);
+	spin_unlock_bh(&dev->dev_lock);
+
+	/*
+	 * A memory barrier is necessary here, because we need to read
+	 * on_dev_count in wait_event() below after we increased block_count.
+	 * Otherwise, we can miss a wake-up in scst_dec_on_dev_cmd().
+	 * We use an explicit barrier, because spin_unlock_bh() doesn't
+	 * provide the necessary memory barrier functionality.
+	 */
+	smp_mb();
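+	/*
+	 * Race sketch (inferred from the comment above): without the
+	 * barrier, the read of on_dev_count in wait_event() below could
+	 * be satisfied before our block_count increment is visible, so
+	 * scst_dec_on_dev_cmd() on another CPU could still see the old
+	 * block_count and skip the wake-up we are waiting for.
+	 */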
+
+	TRACE_MGMT_DBG("Waiting during blocking outstanding %d (on_dev_count "
+		"%d)", outstanding, atomic_read(&dev->on_dev_count));
+	wait_event(dev->on_dev_waitQ,
+		atomic_read(&dev->on_dev_count) <= outstanding);
+	TRACE_MGMT_DBG("%s", "wait_event() returned");
+}
+
+/* No locks */
+void scst_block_dev_cmd(struct scst_cmd *cmd, int outstanding)
+{
+	BUG_ON(cmd->needs_unblocking);
+
+	cmd->needs_unblocking = 1;
+	TRACE_MGMT_DBG("Needs unblocking cmd %p (tag %llu)",
+		       cmd, (long long unsigned int)cmd->tag);
+
+	scst_block_dev(cmd->dev, outstanding);
+}
+
+/* No locks */
+void scst_unblock_dev(struct scst_device *dev)
+{
+	spin_lock_bh(&dev->dev_lock);
+	TRACE_MGMT_DBG("Device UNBLOCK(new %d), dev %p",
+		dev->block_count-1, dev);
+	if (--dev->block_count == 0)
+		scst_unblock_cmds(dev);
+	spin_unlock_bh(&dev->dev_lock);
+	BUG_ON(dev->block_count < 0);
+}
+
+/* No locks */
+void scst_unblock_dev_cmd(struct scst_cmd *cmd)
+{
+	scst_unblock_dev(cmd->dev);
+	cmd->needs_unblocking = 0;
+}
+
+/* No locks */
+int scst_inc_on_dev_cmd(struct scst_cmd *cmd)
+{
+	int res = 0;
+	struct scst_device *dev = cmd->dev;
+
+	BUG_ON(cmd->inc_blocking || cmd->dec_on_dev_needed);
+
+	atomic_inc(&dev->on_dev_count);
+	cmd->dec_on_dev_needed = 1;
+	TRACE_DBG("New on_dev_count %d", atomic_read(&dev->on_dev_count));
+
+	if (unlikely(cmd->internal) && (cmd->cdb[0] == REQUEST_SENSE)) {
+		/*
+		 * The original command can already block the device, so
+		 * the REQUEST SENSE command should always pass.
+		 */
+		goto out;
+	}
+
+repeat:
+	if (unlikely(dev->block_count > 0)) {
+		spin_lock_bh(&dev->dev_lock);
+		if (unlikely(test_bit(SCST_CMD_ABORTED, &cmd->cmd_flags)))
+			goto out_unlock;
+		if (dev->block_count > 0) {
+			scst_dec_on_dev_cmd(cmd);
+			TRACE_MGMT_DBG("Delaying cmd %p due to blocking "
+				"(tag %llu, dev %p)", cmd,
+				(long long unsigned int)cmd->tag, dev);
+			list_add_tail(&cmd->blocked_cmd_list_entry,
+				      &dev->blocked_cmd_list);
+			res = 1;
+			spin_unlock_bh(&dev->dev_lock);
+			goto out;
+		} else {
+			TRACE_MGMT_DBG("%s", "Somebody unblocked the device, "
+				"continuing");
+		}
+		spin_unlock_bh(&dev->dev_lock);
+	}
+	if (unlikely(dev->dev_double_ua_possible)) {
+		spin_lock_bh(&dev->dev_lock);
+		if (dev->block_count == 0) {
+			TRACE_MGMT_DBG("cmd %p (tag %llu), blocking further "
+				"cmds due to possible double reset UA (dev %p)",
+				cmd, (long long unsigned int)cmd->tag, dev);
+			__scst_block_dev(dev);
+			cmd->inc_blocking = 1;
+		} else {
+			spin_unlock_bh(&dev->dev_lock);
+			TRACE_MGMT_DBG("Somebody blocked the device, "
+				"repeating (count %d)", dev->block_count);
+			goto repeat;
+		}
+		spin_unlock_bh(&dev->dev_lock);
+	}
+
+out:
+	return res;
+
+out_unlock:
+	spin_unlock_bh(&dev->dev_lock);
+	goto out;
+}
+
+/* Called under dev_lock */
+static void scst_unblock_cmds(struct scst_device *dev)
+{
+	struct scst_cmd *cmd, *tcmd;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	list_for_each_entry_safe(cmd, tcmd, &dev->blocked_cmd_list,
+				 blocked_cmd_list_entry) {
+		list_del(&cmd->blocked_cmd_list_entry);
+		TRACE_MGMT_DBG("Adding blocked cmd %p to active cmd list", cmd);
+		spin_lock(&cmd->cmd_threads->cmd_list_lock);
+		if (unlikely(cmd->queue_type == SCST_CMD_QUEUE_HEAD_OF_QUEUE))
+			list_add(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		else
+			list_add_tail(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+		wake_up(&cmd->cmd_threads->cmd_list_waitQ);
+		spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+	}
+	local_irq_restore(flags);
+	return;
+}
+
+static void __scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+	struct scst_cmd *out_of_sn_cmd)
+{
+	EXTRACHECKS_BUG_ON(!out_of_sn_cmd->sn_set);
+
+	if (out_of_sn_cmd->sn == tgt_dev->expected_sn) {
+		scst_inc_expected_sn(tgt_dev, out_of_sn_cmd->sn_slot);
+		scst_make_deferred_commands_active(tgt_dev);
+	} else {
+		out_of_sn_cmd->out_of_sn = 1;
+		spin_lock_irq(&tgt_dev->sn_lock);
+		tgt_dev->def_cmd_count++;
+		list_add_tail(&out_of_sn_cmd->sn_cmd_list_entry,
+			      &tgt_dev->skipped_sn_list);
+		TRACE_SN("out_of_sn_cmd %p with sn %d added to skipped_sn_list"
+			" (expected_sn %d)", out_of_sn_cmd, out_of_sn_cmd->sn,
+			tgt_dev->expected_sn);
+		spin_unlock_irq(&tgt_dev->sn_lock);
+	}
+
+	return;
+}
+
+void scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+	struct scst_cmd *out_of_sn_cmd)
+{
+
+	if (!out_of_sn_cmd->sn_set) {
+		TRACE_SN("cmd %p without sn", out_of_sn_cmd);
+		goto out;
+	}
+
+	__scst_unblock_deferred(tgt_dev, out_of_sn_cmd);
+
+out:
+	return;
+}
+
+void scst_on_hq_cmd_response(struct scst_cmd *cmd)
+{
+	struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+	if (!cmd->hq_cmd_inced)
+		goto out;
+
+	spin_lock_irq(&tgt_dev->sn_lock);
+	tgt_dev->hq_cmd_count--;
+	spin_unlock_irq(&tgt_dev->sn_lock);
+
+	EXTRACHECKS_BUG_ON(tgt_dev->hq_cmd_count < 0);
+
+	/*
+	 * There is no problem with checking hq_cmd_count in the
+	 * non-locked state. In the worst case we will only have an
+	 * unneeded run of the deferred commands.
+	 */
+	if (tgt_dev->hq_cmd_count == 0)
+		scst_make_deferred_commands_active(tgt_dev);
+
+out:
+	return;
+}
+
+void scst_store_sense(struct scst_cmd *cmd)
+{
+
+	if (SCST_SENSE_VALID(cmd->sense) &&
+	    !test_bit(SCST_CMD_NO_RESP, &cmd->cmd_flags) &&
+	    (cmd->tgt_dev != NULL)) {
+		struct scst_tgt_dev *tgt_dev = cmd->tgt_dev;
+
+		TRACE_DBG("Storing sense (cmd %p)", cmd);
+
+		spin_lock_bh(&tgt_dev->tgt_dev_lock);
+
+		if (cmd->sense_valid_len <= sizeof(tgt_dev->tgt_dev_sense))
+			tgt_dev->tgt_dev_valid_sense_len = cmd->sense_valid_len;
+		else {
+			tgt_dev->tgt_dev_valid_sense_len = sizeof(tgt_dev->tgt_dev_sense);
+			PRINT_ERROR("Stored sense truncated to size %d "
+				"(needed %d)", tgt_dev->tgt_dev_valid_sense_len,
+				cmd->sense_valid_len);
+		}
+		memcpy(tgt_dev->tgt_dev_sense, cmd->sense,
+			tgt_dev->tgt_dev_valid_sense_len);
+
+		spin_unlock_bh(&tgt_dev->tgt_dev_lock);
+	}
+	return;
+}
+
+void scst_xmit_process_aborted_cmd(struct scst_cmd *cmd)
+{
+
+	TRACE_MGMT_DBG("Aborted cmd %p done (cmd_ref %d, "
+		"scst_cmd_count %d)", cmd, atomic_read(&cmd->cmd_ref),
+		atomic_read(&scst_cmd_count));
+
+	scst_done_cmd_mgmt(cmd);
+
+	if (test_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags)) {
+		if (cmd->completed) {
+			/* It's completed and it's OK to return its result */
+			goto out;
+		}
+
+		/* For not yet inited commands cmd->dev can be NULL here */
+		if (test_bit(SCST_CMD_DEVICE_TAS, &cmd->cmd_flags)) {
+			TRACE_MGMT_DBG("Flag ABORTED OTHER set for cmd %p "
+				"(tag %llu), returning TASK ABORTED ", cmd,
+				(long long unsigned int)cmd->tag);
+			scst_set_cmd_error_status(cmd, SAM_STAT_TASK_ABORTED);
+		} else {
+			TRACE_MGMT_DBG("Flag ABORTED OTHER set for cmd %p "
+				"(tag %llu), aborting without delivery or "
+				"notification",
+				cmd, (long long unsigned int)cmd->tag);
+			/*
+			 * There is no need to check/requeue possible UA,
+			 * because, if it exists, it will be delivered
+			 * by the "completed" branch above.
+			 */
+			clear_bit(SCST_CMD_ABORTED_OTHER, &cmd->cmd_flags);
+		}
+	}
+
+out:
+	return;
+}
+
+/**
+ * scst_get_max_lun_commands() - return maximum supported commands count
+ *
+ * Returns the maximum number of commands which can be queued to this LUN
+ * in this session.
+ *
+ * If lun is NO_SUCH_LUN, returns the minimum over all LUNs in this
+ * session of that maximum.
+ *
+ * If sess is NULL, returns the minimum over all SCST devices of that
+ * maximum.
+ */
+int scst_get_max_lun_commands(struct scst_session *sess, uint64_t lun)
+{
+	return SCST_MAX_TGT_DEV_COMMANDS;
+}
+EXPORT_SYMBOL_GPL(scst_get_max_lun_commands);
+
+/**
+ * scst_get_next_lexem() - parse and return next lexem in the string
+ *
+ * Returns a pointer to the next lexem from token_str, skipping
+ * spaces and '=' characters and using them as delimiters. The content
+ * of token_str is modified by setting '\0' at the delimiter's position.
+ */
+char *scst_get_next_lexem(char **token_str)
+{
+	char *p = *token_str;
+	char *q;
+	static const char blank = '\0';
+
+	if ((token_str == NULL) || (*token_str == NULL))
+		return (char *)&blank;
+
+	for (p = *token_str; (*p != '\0') && (isspace(*p) || (*p == '=')); p++)
+		;
+
+	for (q = p; (*q != '\0') && !isspace(*q) && (*q != '='); q++)
+		;
+
+	if (*q != '\0')
+		*q++ = '\0';
+
+	*token_str = q;
+	return p;
+}
+EXPORT_SYMBOL_GPL(scst_get_next_lexem);
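+
+/*
+ * Usage sketch (illustrative):
+ *
+ *	char buf[] = "name = value";
+ *	char *p = buf;
+ *	char *lexem = scst_get_next_lexem(&p);
+ *
+ * leaves lexem == "name" with the trailing space replaced by '\0';
+ * the next scst_get_next_lexem(&p) call returns "value".
+ */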
+
+/**
+ * scst_restore_token_str() - restore string modified by scst_get_next_lexem()
+ *
+ * Restores token_str, modified by scst_get_next_lexem(), to its value
+ * from before scst_get_next_lexem() was called. Prev_lexem is a pointer
+ * to the lexem returned by scst_get_next_lexem().
+ */
+void scst_restore_token_str(char *prev_lexem, char *token_str)
+{
+	if (&prev_lexem[strlen(prev_lexem)] != token_str)
+		prev_lexem[strlen(prev_lexem)] = ' ';
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_restore_token_str);
+
+/**
+ * scst_get_next_token_str() - parse and return next token
+ *
+ * This function returns a pointer to the next token string from input_str,
+ * using '\n', ';' and '\0' as delimiters. The content of input_str is
+ * modified by setting '\0' at the delimiter's position.
+ */
+char *scst_get_next_token_str(char **input_str)
+{
+	char *p = *input_str;
+	int i = 0;
+
+	while ((p[i] != '\n') && (p[i] != ';') && (p[i] != '\0'))
+		i++;
+
+	if (i == 0)
+		return NULL;
+
+	if (p[i] == '\0')
+		*input_str = &p[i];
+	else
+		*input_str = &p[i+1];
+
+	p[i] = '\0';
+
+	return p;
+}
+EXPORT_SYMBOL_GPL(scst_get_next_token_str);
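+
+/*
+ * Usage sketch (illustrative): for *input_str pointing at "a=1;b=2\n",
+ * successive calls return "a=1" and "b=2" with the delimiters replaced
+ * by '\0', and NULL once the string is exhausted.
+ */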
+
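+/*
+ * Builds a 256-entry index over scst_scsi_op_table[]: for each opcode,
+ * scst_scsi_op_list[op] becomes the index of the first table entry for
+ * that opcode (entries for one opcode are assumed contiguous), or
+ * SCST_CDB_TBL_SIZE if the opcode doesn't appear, so lookups can start
+ * scanning the table at the right place instead of at entry 0.
+ */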
+static void __init scst_scsi_op_list_init(void)
+{
+	int i;
+	uint8_t op = 0xff;
+
+	for (i = 0; i < 256; i++)
+		scst_scsi_op_list[i] = SCST_CDB_TBL_SIZE;
+
+	for (i = 0; i < SCST_CDB_TBL_SIZE; i++) {
+		if (scst_scsi_op_table[i].ops != op) {
+			op = scst_scsi_op_table[i].ops;
+			scst_scsi_op_list[op] = i;
+		}
+	}
+	return;
+}
+
+int __init scst_lib_init(void)
+{
+	int res = 0;
+
+	scst_scsi_op_list_init();
+
+	scsi_io_context_cache = kmem_cache_create("scst_scsi_io_context",
+					sizeof(struct scsi_io_context),
+					0, 0, NULL);
+	if (!scsi_io_context_cache) {
+		PRINT_ERROR("%s", "Can't init scsi io context cache");
+		res = -ENOMEM;
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+void scst_lib_exit(void)
+{
+	BUILD_BUG_ON(SCST_MAX_CDB_SIZE != BLK_MAX_CDB);
+	BUILD_BUG_ON(SCST_SENSE_BUFFERSIZE < SCSI_SENSE_BUFFERSIZE);
+
+	kmem_cache_destroy(scsi_io_context_cache);
+}
+
+#ifdef CONFIG_SCST_DEBUG
+
+/**
+ * scst_random() - return a pseudo-random number for debugging purposes.
+ *
+ * Returns a pseudo-random number for debugging purposes. Available only in
+ * the DEBUG build.
+ *
+ * Originally taken from the XFS code
+ */
+unsigned long scst_random(void)
+{
+	static int Inited;
+	static unsigned long RandomValue;
+	static DEFINE_SPINLOCK(lock);
+	/* cycles pseudo-randomly through all values between 1 and 2^31 - 2 */
+	register long rv;
+	register long lo;
+	register long hi;
+	unsigned long flags;
+
+	spin_lock_irqsave(&lock, flags);
+	if (!Inited) {
+		RandomValue = jiffies;
+		Inited = 1;
+	}
+	rv = RandomValue;
+	hi = rv / 127773;
+	lo = rv % 127773;
+	rv = 16807 * lo - 2836 * hi;
+	if (rv <= 0)
+		rv += 2147483647;
+	RandomValue = rv;
+	spin_unlock_irqrestore(&lock, flags);
+	return rv;
+}
+EXPORT_SYMBOL_GPL(scst_random);
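+
+/*
+ * Editorial note: this is the Park-Miller "minimal standard" Lehmer
+ * generator x' = 16807 * x mod (2^31 - 1), computed with Schrage's
+ * method: for m = 2^31 - 1, q = m / 16807 = 127773 and
+ * r = m % 16807 = 2836, 16807 * x mod m equals
+ * 16807 * (x % q) - r * (x / q), plus m if that is negative.
+ */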
+#endif /* CONFIG_SCST_DEBUG */
+
+#ifdef CONFIG_SCST_DEBUG_TM
+
+#define TM_DBG_STATE_ABORT		0
+#define TM_DBG_STATE_RESET		1
+#define TM_DBG_STATE_OFFLINE		2
+
+#define INIT_TM_DBG_STATE		TM_DBG_STATE_ABORT
+
+static void tm_dbg_timer_fn(unsigned long arg);
+
+static DEFINE_SPINLOCK(scst_tm_dbg_lock);
+/* All serialized by scst_tm_dbg_lock */
+static struct {
+	unsigned int tm_dbg_release:1;
+	unsigned int tm_dbg_blocked:1;
+} tm_dbg_flags;
+static LIST_HEAD(tm_dbg_delayed_cmd_list);
+static int tm_dbg_delayed_cmds_count;
+static int tm_dbg_passed_cmds_count;
+static int tm_dbg_state;
+static int tm_dbg_on_state_passes;
+static DEFINE_TIMER(tm_dbg_timer, tm_dbg_timer_fn, 0, 0);
+static struct scst_tgt_dev *tm_dbg_tgt_dev;
+
+static const int tm_dbg_on_state_num_passes[] = { 5, 1, 0x7ffffff };
+
+static void tm_dbg_init_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+	if (tgt_dev->lun == 6) {
+		unsigned long flags;
+
+		if (tm_dbg_tgt_dev != NULL)
+			tm_dbg_deinit_tgt_dev(tm_dbg_tgt_dev);
+
+		spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+		tm_dbg_state = INIT_TM_DBG_STATE;
+		tm_dbg_on_state_passes =
+			tm_dbg_on_state_num_passes[tm_dbg_state];
+		tm_dbg_tgt_dev = tgt_dev;
+		PRINT_INFO("LUN %lld connected from initiator %s is under "
+			"TM debugging (tgt_dev %p)",
+			(unsigned long long)tgt_dev->lun,
+			tgt_dev->sess->initiator_name, tgt_dev);
+		spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+	}
+	return;
+}
+
+static void tm_dbg_deinit_tgt_dev(struct scst_tgt_dev *tgt_dev)
+{
+	if (tm_dbg_tgt_dev == tgt_dev) {
+		unsigned long flags;
+		TRACE_MGMT_DBG("Deinit TM debugging tgt_dev %p", tgt_dev);
+		del_timer_sync(&tm_dbg_timer);
+		spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+		tm_dbg_tgt_dev = NULL;
+		spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+	}
+	return;
+}
+
+static void tm_dbg_timer_fn(unsigned long arg)
+{
+	TRACE_MGMT_DBG("%s", "delayed cmd timer expired");
+	tm_dbg_flags.tm_dbg_release = 1;
+	/* Used to make sure that all woken up threads see the new value */
+	smp_wmb();
+	wake_up_all(&tm_dbg_tgt_dev->active_cmd_threads->cmd_list_waitQ);
+	return;
+}
+
+/* Called under scst_tm_dbg_lock and IRQs off */
+static void tm_dbg_delay_cmd(struct scst_cmd *cmd)
+{
+	switch (tm_dbg_state) {
+	case TM_DBG_STATE_ABORT:
+		if (tm_dbg_delayed_cmds_count == 0) {
+			unsigned long d = 58*HZ + (scst_random() % (4*HZ));
+			TRACE_MGMT_DBG("STATE ABORT: delaying cmd %p (tag %llu)"
+				" for %ld.%ld seconds (%ld HZ), "
+				"tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+				d/HZ, (d%HZ)*100/HZ, d,	tm_dbg_on_state_passes);
+			mod_timer(&tm_dbg_timer, jiffies + d);
+#if 0
+			tm_dbg_flags.tm_dbg_blocked = 1;
+#endif
+		} else {
+			TRACE_MGMT_DBG("Delaying another timed cmd %p "
+				"(tag %llu), delayed_cmds_count=%d, "
+				"tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+				tm_dbg_delayed_cmds_count,
+				tm_dbg_on_state_passes);
+			if (tm_dbg_delayed_cmds_count == 2)
+				tm_dbg_flags.tm_dbg_blocked = 0;
+		}
+		break;
+
+	case TM_DBG_STATE_RESET:
+	case TM_DBG_STATE_OFFLINE:
+		TRACE_MGMT_DBG("STATE RESET/OFFLINE: delaying cmd %p "
+			"(tag %llu), delayed_cmds_count=%d, "
+			"tm_dbg_on_state_passes=%d", cmd, cmd->tag,
+			tm_dbg_delayed_cmds_count, tm_dbg_on_state_passes);
+		tm_dbg_flags.tm_dbg_blocked = 1;
+		break;
+
+	default:
+		BUG();
+	}
+	/* IRQs already off */
+	spin_lock(&cmd->cmd_threads->cmd_list_lock);
+	list_add_tail(&cmd->cmd_list_entry, &tm_dbg_delayed_cmd_list);
+	spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+	cmd->tm_dbg_delayed = 1;
+	tm_dbg_delayed_cmds_count++;
+	return;
+}
+
+/* No locks */
+void tm_dbg_check_released_cmds(void)
+{
+	if (tm_dbg_flags.tm_dbg_release) {
+		struct scst_cmd *cmd, *tc;
+		spin_lock_irq(&scst_tm_dbg_lock);
+		list_for_each_entry_safe_reverse(cmd, tc,
+				&tm_dbg_delayed_cmd_list, cmd_list_entry) {
+			TRACE_MGMT_DBG("Releasing timed cmd %p (tag %llu), "
+				"delayed_cmds_count=%d", cmd, cmd->tag,
+				tm_dbg_delayed_cmds_count);
+			spin_lock(&cmd->cmd_threads->cmd_list_lock);
+			list_move(&cmd->cmd_list_entry,
+				&cmd->cmd_threads->active_cmd_list);
+			spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+		}
+		tm_dbg_flags.tm_dbg_release = 0;
+		spin_unlock_irq(&scst_tm_dbg_lock);
+	}
+}
+
+/* Called under scst_tm_dbg_lock */
+static void tm_dbg_change_state(void)
+{
+	tm_dbg_flags.tm_dbg_blocked = 0;
+	if (--tm_dbg_on_state_passes == 0) {
+		switch (tm_dbg_state) {
+		case TM_DBG_STATE_ABORT:
+			TRACE_MGMT_DBG("%s", "Changing "
+			    "tm_dbg_state to RESET");
+			tm_dbg_state = TM_DBG_STATE_RESET;
+			tm_dbg_flags.tm_dbg_blocked = 0;
+			break;
+		case TM_DBG_STATE_RESET:
+		case TM_DBG_STATE_OFFLINE:
+#ifdef CONFIG_SCST_TM_DBG_GO_OFFLINE
+			    TRACE_MGMT_DBG("%s", "Changing "
+				    "tm_dbg_state to OFFLINE");
+			    tm_dbg_state = TM_DBG_STATE_OFFLINE;
+#else
+			    TRACE_MGMT_DBG("%s", "Changing "
+				    "tm_dbg_state to ABORT");
+			    tm_dbg_state = TM_DBG_STATE_ABORT;
+#endif
+			break;
+		default:
+			BUG();
+		}
+		tm_dbg_on_state_passes =
+		    tm_dbg_on_state_num_passes[tm_dbg_state];
+	}
+
+	TRACE_MGMT_DBG("%s", "Deleting timer");
+	del_timer_sync(&tm_dbg_timer);
+	return;
+}
+
+/* No locks */
+int tm_dbg_check_cmd(struct scst_cmd *cmd)
+{
+	int res = 0;
+	unsigned long flags;
+
+	if (cmd->tm_dbg_immut)
+		goto out;
+
+	if (cmd->tm_dbg_delayed) {
+		spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+		TRACE_MGMT_DBG("Processing delayed cmd %p (tag %llu), "
+			"delayed_cmds_count=%d", cmd, cmd->tag,
+			tm_dbg_delayed_cmds_count);
+
+		cmd->tm_dbg_immut = 1;
+		tm_dbg_delayed_cmds_count--;
+		if ((tm_dbg_delayed_cmds_count == 0) &&
+		    (tm_dbg_state == TM_DBG_STATE_ABORT))
+			tm_dbg_change_state();
+		spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+	} else if (cmd->tgt_dev && (tm_dbg_tgt_dev == cmd->tgt_dev)) {
+		/* Delay every 50th command */
+		spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+		if (tm_dbg_flags.tm_dbg_blocked ||
+		    (++tm_dbg_passed_cmds_count % 50) == 0) {
+			tm_dbg_delay_cmd(cmd);
+			res = 1;
+		} else
+			cmd->tm_dbg_immut = 1;
+		spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+	}
+
+out:
+	return res;
+}
+
+/* No locks */
+void tm_dbg_release_cmd(struct scst_cmd *cmd)
+{
+	struct scst_cmd *c;
+	unsigned long flags;
+
+	spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+	list_for_each_entry(c, &tm_dbg_delayed_cmd_list,
+				cmd_list_entry) {
+		if (c == cmd) {
+			TRACE_MGMT_DBG("Abort request for "
+				"delayed cmd %p (tag=%llu), moving it to "
+				"active cmd list (delayed_cmds_count=%d)",
+				c, c->tag, tm_dbg_delayed_cmds_count);
+
+			if (!test_bit(SCST_CMD_ABORTED_OTHER,
+					    &cmd->cmd_flags)) {
+				/* Test how completed commands are handled */
+				if (((scst_random() % 10) == 5)) {
+					scst_set_cmd_error(cmd,
+						SCST_LOAD_SENSE(
+						scst_sense_hardw_error));
+					/* It's completed now */
+				}
+			}
+
+			spin_lock(&cmd->cmd_threads->cmd_list_lock);
+			list_move(&c->cmd_list_entry,
+				&c->cmd_threads->active_cmd_list);
+			wake_up(&c->cmd_threads->cmd_list_waitQ);
+			spin_unlock(&cmd->cmd_threads->cmd_list_lock);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+	return;
+}
+
+/* Might be called under scst_mutex */
+void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn, int force)
+{
+	unsigned long flags;
+
+	if (dev != NULL) {
+		if (tm_dbg_tgt_dev == NULL)
+			goto out;
+
+		if (tm_dbg_tgt_dev->dev != dev)
+			goto out;
+	}
+
+	spin_lock_irqsave(&scst_tm_dbg_lock, flags);
+	if ((tm_dbg_state != TM_DBG_STATE_OFFLINE) || force) {
+		TRACE_MGMT_DBG("%s: freeing %d delayed cmds", fn,
+			tm_dbg_delayed_cmds_count);
+		tm_dbg_change_state();
+		tm_dbg_flags.tm_dbg_release = 1;
+		/*
+		 * Used to make sure that all woken up threads see the new
+		 * value.
+		 */
+		smp_wmb();
+		if (tm_dbg_tgt_dev != NULL)
+			wake_up_all(&tm_dbg_tgt_dev->active_cmd_threads->cmd_list_waitQ);
+	} else {
+		TRACE_MGMT_DBG("%s: while OFFLINE state, doing nothing", fn);
+	}
+	spin_unlock_irqrestore(&scst_tm_dbg_lock, flags);
+
+out:
+	return;
+}
+
+int tm_dbg_is_release(void)
+{
+	return tm_dbg_flags.tm_dbg_release;
+}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+#ifdef CONFIG_SCST_DEBUG_SN
+void scst_check_debug_sn(struct scst_cmd *cmd)
+{
+	static DEFINE_SPINLOCK(lock);
+	static int type;
+	static int cnt;
+	unsigned long flags;
+	int old = cmd->queue_type;
+
+	spin_lock_irqsave(&lock, flags);
+
+	if (cnt == 0) {
+		if ((scst_random() % 1000) == 500) {
+			if ((scst_random() % 3) == 1)
+				type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+			else
+				type = SCST_CMD_QUEUE_ORDERED;
+			do {
+				cnt = scst_random() % 10;
+			} while (cnt == 0);
+		} else
+			goto out_unlock;
+	}
+
+	cmd->queue_type = type;
+	cnt--;
+
+	if (((scst_random() % 1000) == 750))
+		cmd->queue_type = SCST_CMD_QUEUE_ORDERED;
+	else if (((scst_random() % 1000) == 751))
+		cmd->queue_type = SCST_CMD_QUEUE_HEAD_OF_QUEUE;
+	else if (((scst_random() % 1000) == 752))
+		cmd->queue_type = SCST_CMD_QUEUE_SIMPLE;
+
+	TRACE_SN("DbgSN changed cmd %p: %d/%d (cnt %d)", cmd, old,
+		cmd->queue_type, cnt);
+
+out_unlock:
+	spin_unlock_irqrestore(&lock, flags);
+	return;
+}
+#endif /* CONFIG_SCST_DEBUG_SN */
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+static uint64_t scst_get_nsec(void)
+{
+	struct timespec ts;
+	ktime_get_ts(&ts);
+	return (uint64_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
+}
+
+void scst_set_start_time(struct scst_cmd *cmd)
+{
+	cmd->start = scst_get_nsec();
+	TRACE_DBG("cmd %p: start %lld", cmd, cmd->start);
+}
+
+void scst_set_cur_start(struct scst_cmd *cmd)
+{
+	cmd->curr_start = scst_get_nsec();
+	TRACE_DBG("cmd %p: cur_start %lld", cmd, cmd->curr_start);
+}
+
+void scst_set_parse_time(struct scst_cmd *cmd)
+{
+	cmd->parse_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: parse_time %lld", cmd, cmd->parse_time);
+}
+
+void scst_set_alloc_buf_time(struct scst_cmd *cmd)
+{
+	cmd->alloc_buf_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: alloc_buf_time %lld", cmd, cmd->alloc_buf_time);
+}
+
+void scst_set_restart_waiting_time(struct scst_cmd *cmd)
+{
+	cmd->restart_waiting_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: restart_waiting_time %lld", cmd,
+		cmd->restart_waiting_time);
+}
+
+void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd)
+{
+	cmd->rdy_to_xfer_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: rdy_to_xfer_time %lld", cmd, cmd->rdy_to_xfer_time);
+}
+
+void scst_set_pre_exec_time(struct scst_cmd *cmd)
+{
+	cmd->pre_exec_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: pre_exec_time %lld", cmd, cmd->pre_exec_time);
+}
+
+void scst_set_exec_time(struct scst_cmd *cmd)
+{
+	cmd->exec_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: exec_time %lld", cmd, cmd->exec_time);
+}
+
+void scst_set_dev_done_time(struct scst_cmd *cmd)
+{
+	cmd->dev_done_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: dev_done_time %lld", cmd, cmd->dev_done_time);
+}
+
+void scst_set_xmit_time(struct scst_cmd *cmd)
+{
+	cmd->xmit_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: xmit_time %lld", cmd, cmd->xmit_time);
+}
+
+void scst_set_tgt_on_free_time(struct scst_cmd *cmd)
+{
+	cmd->tgt_on_free_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: tgt_on_free_time %lld", cmd, cmd->tgt_on_free_time);
+}
+
+void scst_set_dev_on_free_time(struct scst_cmd *cmd)
+{
+	cmd->dev_on_free_time += scst_get_nsec() - cmd->curr_start;
+	TRACE_DBG("cmd %p: dev_on_free_time %lld", cmd, cmd->dev_on_free_time);
+}
+
+void scst_update_lat_stats(struct scst_cmd *cmd)
+{
+	uint64_t finish, scst_time, tgt_time, dev_time;
+	struct scst_session *sess = cmd->sess;
+	int data_len;
+	int i;
+	struct scst_ext_latency_stat *latency_stat, *dev_latency_stat;
+
+	finish = scst_get_nsec();
+
+	/* Determine the IO size for extended latency statistics */
+	data_len = cmd->bufflen;
+	i = SCST_LATENCY_STAT_INDEX_OTHER;
+	if (data_len <= SCST_IO_SIZE_THRESHOLD_SMALL)
+		i = SCST_LATENCY_STAT_INDEX_SMALL;
+	else if (data_len <= SCST_IO_SIZE_THRESHOLD_MEDIUM)
+		i = SCST_LATENCY_STAT_INDEX_MEDIUM;
+	else if (data_len <= SCST_IO_SIZE_THRESHOLD_LARGE)
+		i = SCST_LATENCY_STAT_INDEX_LARGE;
+	else if (data_len <= SCST_IO_SIZE_THRESHOLD_VERY_LARGE)
+		i = SCST_LATENCY_STAT_INDEX_VERY_LARGE;
+	latency_stat = &sess->sess_latency_stat[i];
+	dev_latency_stat = &cmd->tgt_dev->dev_latency_stat[i];
+
+	spin_lock_bh(&sess->lat_lock);
+
+	/* Calculate the latencies */
+	scst_time = finish - cmd->start - (cmd->parse_time +
+		cmd->alloc_buf_time + cmd->restart_waiting_time +
+		cmd->rdy_to_xfer_time + cmd->pre_exec_time +
+		cmd->exec_time + cmd->dev_done_time + cmd->xmit_time +
+		cmd->tgt_on_free_time + cmd->dev_on_free_time);
+	tgt_time = cmd->alloc_buf_time + cmd->restart_waiting_time +
+		cmd->rdy_to_xfer_time + cmd->pre_exec_time +
+		cmd->xmit_time + cmd->tgt_on_free_time;
+	dev_time = cmd->parse_time + cmd->exec_time + cmd->dev_done_time +
+		cmd->dev_on_free_time;
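+	/*
+	 * By construction the three components partition the total:
+	 * scst_time + tgt_time + dev_time == finish - cmd->start.
+	 */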
+
+	/* Save the basic latency information */
+	sess->scst_time += scst_time;
+	sess->tgt_time += tgt_time;
+	sess->dev_time += dev_time;
+	sess->processed_cmds++;
+
+	if ((sess->min_scst_time == 0) ||
+	    (sess->min_scst_time > scst_time))
+		sess->min_scst_time = scst_time;
+	if ((sess->min_tgt_time == 0) ||
+	    (sess->min_tgt_time > tgt_time))
+		sess->min_tgt_time = tgt_time;
+	if ((sess->min_dev_time == 0) ||
+	    (sess->min_dev_time > dev_time))
+		sess->min_dev_time = dev_time;
+
+	if (sess->max_scst_time < scst_time)
+		sess->max_scst_time = scst_time;
+	if (sess->max_tgt_time < tgt_time)
+		sess->max_tgt_time = tgt_time;
+	if (sess->max_dev_time < dev_time)
+		sess->max_dev_time = dev_time;
+
+	/* Save the extended latency information */
+	if (cmd->data_direction & SCST_DATA_READ) {
+		latency_stat->scst_time_rd += scst_time;
+		latency_stat->tgt_time_rd += tgt_time;
+		latency_stat->dev_time_rd += dev_time;
+		latency_stat->processed_cmds_rd++;
+
+		if ((latency_stat->min_scst_time_rd == 0) ||
+		    (latency_stat->min_scst_time_rd > scst_time))
+			latency_stat->min_scst_time_rd = scst_time;
+		if ((latency_stat->min_tgt_time_rd == 0) ||
+		    (latency_stat->min_tgt_time_rd > tgt_time))
+			latency_stat->min_tgt_time_rd = tgt_time;
+		if ((latency_stat->min_dev_time_rd == 0) ||
+		    (latency_stat->min_dev_time_rd > dev_time))
+			latency_stat->min_dev_time_rd = dev_time;
+
+		if (latency_stat->max_scst_time_rd < scst_time)
+			latency_stat->max_scst_time_rd = scst_time;
+		if (latency_stat->max_tgt_time_rd < tgt_time)
+			latency_stat->max_tgt_time_rd = tgt_time;
+		if (latency_stat->max_dev_time_rd < dev_time)
+			latency_stat->max_dev_time_rd = dev_time;
+
+		dev_latency_stat->scst_time_rd += scst_time;
+		dev_latency_stat->tgt_time_rd += tgt_time;
+		dev_latency_stat->dev_time_rd += dev_time;
+		dev_latency_stat->processed_cmds_rd++;
+
+		if ((dev_latency_stat->min_scst_time_rd == 0) ||
+		    (dev_latency_stat->min_scst_time_rd > scst_time))
+			dev_latency_stat->min_scst_time_rd = scst_time;
+		if ((dev_latency_stat->min_tgt_time_rd == 0) ||
+		    (dev_latency_stat->min_tgt_time_rd > tgt_time))
+			dev_latency_stat->min_tgt_time_rd = tgt_time;
+		if ((dev_latency_stat->min_dev_time_rd == 0) ||
+		    (dev_latency_stat->min_dev_time_rd > dev_time))
+			dev_latency_stat->min_dev_time_rd = dev_time;
+
+		if (dev_latency_stat->max_scst_time_rd < scst_time)
+			dev_latency_stat->max_scst_time_rd = scst_time;
+		if (dev_latency_stat->max_tgt_time_rd < tgt_time)
+			dev_latency_stat->max_tgt_time_rd = tgt_time;
+		if (dev_latency_stat->max_dev_time_rd < dev_time)
+			dev_latency_stat->max_dev_time_rd = dev_time;
+	} else if (cmd->data_direction & SCST_DATA_WRITE) {
+		latency_stat->scst_time_wr += scst_time;
+		latency_stat->tgt_time_wr += tgt_time;
+		latency_stat->dev_time_wr += dev_time;
+		latency_stat->processed_cmds_wr++;
+
+		if ((latency_stat->min_scst_time_wr == 0) ||
+		    (latency_stat->min_scst_time_wr > scst_time))
+			latency_stat->min_scst_time_wr = scst_time;
+		if ((latency_stat->min_tgt_time_wr == 0) ||
+		    (latency_stat->min_tgt_time_wr > tgt_time))
+			latency_stat->min_tgt_time_wr = tgt_time;
+		if ((latency_stat->min_dev_time_wr == 0) ||
+		    (latency_stat->min_dev_time_wr > dev_time))
+			latency_stat->min_dev_time_wr = dev_time;
+
+		if (latency_stat->max_scst_time_wr < scst_time)
+			latency_stat->max_scst_time_wr = scst_time;
+		if (latency_stat->max_tgt_time_wr < tgt_time)
+			latency_stat->max_tgt_time_wr = tgt_time;
+		if (latency_stat->max_dev_time_wr < dev_time)
+			latency_stat->max_dev_time_wr = dev_time;
+
+		dev_latency_stat->scst_time_wr += scst_time;
+		dev_latency_stat->tgt_time_wr += tgt_time;
+		dev_latency_stat->dev_time_wr += dev_time;
+		dev_latency_stat->processed_cmds_wr++;
+
+		if ((dev_latency_stat->min_scst_time_wr == 0) ||
+		    (dev_latency_stat->min_scst_time_wr > scst_time))
+			dev_latency_stat->min_scst_time_wr = scst_time;
+		if ((dev_latency_stat->min_tgt_time_wr == 0) ||
+		    (dev_latency_stat->min_tgt_time_wr > tgt_time))
+			dev_latency_stat->min_tgt_time_wr = tgt_time;
+		if ((dev_latency_stat->min_dev_time_wr == 0) ||
+		    (dev_latency_stat->min_dev_time_wr > dev_time))
+			dev_latency_stat->min_dev_time_wr = dev_time;
+
+		if (dev_latency_stat->max_scst_time_wr < scst_time)
+			dev_latency_stat->max_scst_time_wr = scst_time;
+		if (dev_latency_stat->max_tgt_time_wr < tgt_time)
+			dev_latency_stat->max_tgt_time_wr = tgt_time;
+		if (dev_latency_stat->max_dev_time_wr < dev_time)
+			dev_latency_stat->max_dev_time_wr = dev_time;
+	}
+
+	spin_unlock_bh(&sess->lat_lock);
+
+	TRACE_DBG("cmd %p: finish %lld, scst_time %lld, "
+		"tgt_time %lld, dev_time %lld", cmd, finish, scst_time,
+		tgt_time, dev_time);
+	return;
+}
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 6/12/1/5] SCST core's private header
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (4 preceding siblings ...)
  2010-04-13 13:05   ` [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c Vladislav Bolkhovitin
@ 2010-04-13 13:06   ` Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 7/12/1/5] SCST SGV cache Vladislav Bolkhovitin
                     ` (3 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains scst_priv.h, which holds SCST's internal types, constants
and declarations.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_priv.h |  609 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 609 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_priv.h linux-2.6.33/drivers/scst/scst_priv.h
--- orig/linux-2.6.33/drivers/scst/scst_priv.h
+++ linux-2.6.33/drivers/scst/scst_priv.h
@@ -0,0 +1,609 @@
+/*
+ *  scst_priv.h
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __SCST_PRIV_H
+#define __SCST_PRIV_H
+
+#include <linux/types.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_driver.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+
+#define LOG_PREFIX "scst"
+
+#include "scst_debug.h"
+
+#define TRACE_RTRY              0x80000000
+#define TRACE_SCSI_SERIALIZING  0x40000000
+/** top being the edge away from the interrupt */
+#define TRACE_SND_TOP		0x20000000
+#define TRACE_RCV_TOP		0x01000000
+/** bottom being the edge toward the interrupt */
+#define TRACE_SND_BOT		0x08000000
+#define TRACE_RCV_BOT		0x04000000
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+#define trace_flag scst_trace_flag
+extern unsigned long scst_trace_flag;
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define SCST_DEFAULT_LOG_FLAGS (TRACE_OUT_OF_MEM | TRACE_MINOR | TRACE_PID | \
+	TRACE_LINE | TRACE_FUNCTION | TRACE_SPECIAL | TRACE_MGMT | \
+	TRACE_MGMT_DEBUG | TRACE_RTRY)
+
+#define TRACE_RETRY(args...)	TRACE_DBG_FLAG(TRACE_RTRY, args)
+#define TRACE_SN(args...)	TRACE_DBG_FLAG(TRACE_SCSI_SERIALIZING, args)
+#define TRACE_SEND_TOP(args...)	TRACE_DBG_FLAG(TRACE_SND_TOP, args)
+#define TRACE_RECV_TOP(args...)	TRACE_DBG_FLAG(TRACE_RCV_TOP, args)
+#define TRACE_SEND_BOT(args...)	TRACE_DBG_FLAG(TRACE_SND_BOT, args)
+#define TRACE_RECV_BOT(args...)	TRACE_DBG_FLAG(TRACE_RCV_BOT, args)
+
+#else /* CONFIG_SCST_DEBUG */
+
+# ifdef CONFIG_SCST_TRACING
+#define SCST_DEFAULT_LOG_FLAGS (TRACE_OUT_OF_MEM | TRACE_MGMT | \
+	TRACE_SPECIAL)
+# else
+#define SCST_DEFAULT_LOG_FLAGS 0
+# endif
+
+#define TRACE_RETRY(args...)
+#define TRACE_SN(args...)
+#define TRACE_SEND_TOP(args...)
+#define TRACE_RECV_TOP(args...)
+#define TRACE_SEND_BOT(args...)
+#define TRACE_RECV_BOT(args...)
+
+#endif
+
+/**
+ ** Bits for scst_flags
+ **/
+
+/*
+ * Set if initialization of new commands is being suspended for a while.
+ * Used to let TM commands execute while the suspend is being prepared,
+ * since a RESET or ABORT could be necessary to free SCSI commands.
+ */
+#define SCST_FLAG_SUSPENDING		     0
+
+/* Set if initialization of new commands is suspended for a while */
+#define SCST_FLAG_SUSPENDED		     1
+
+/**
+ ** Return codes for cmd state processing functions. The codes are the same
+ ** as SCST_EXEC_* to avoid translating between them and, hence, to generate
+ ** better code.
+ **/
+#define SCST_CMD_STATE_RES_CONT_NEXT         SCST_EXEC_COMPLETED
+#define SCST_CMD_STATE_RES_CONT_SAME         SCST_EXEC_NOT_COMPLETED
+#define SCST_CMD_STATE_RES_NEED_THREAD       SCST_EXEC_NEED_THREAD
+
+/**
+ ** Maximum count of uncompleted commands that an initiator may queue on
+ ** any single device before it starts getting TASK QUEUE FULL status.
+ **/
+#define SCST_MAX_TGT_DEV_COMMANDS            48
+
+/**
+ ** Maximum count of uncompleted commands that may be queued on any device,
+ ** after which initiators sending commands to this device will start
+ ** getting TASK QUEUE FULL status.
+ **/
+#define SCST_MAX_DEV_COMMANDS                256
+
+#define SCST_TGT_RETRY_TIMEOUT               (3*HZ/2) /* 1.5 sec; 3/2*HZ would truncate to HZ */
+
+/* Definitions of symbolic constants for LUN addressing method */
+#define SCST_LUN_ADDR_METHOD_PERIPHERAL	0
+#define SCST_LUN_ADDR_METHOD_FLAT	1
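+
+/*
+ * In SAM terms: with the peripheral method the two most significant bits
+ * of byte 0 of the 8-byte LUN are 00b, with the flat space method they
+ * are 01b.
+ */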
+
+extern int scst_threads;
+
+extern unsigned int scst_max_dev_cmd_mem;
+
+extern mempool_t *scst_mgmt_mempool;
+extern mempool_t *scst_mgmt_stub_mempool;
+extern mempool_t *scst_ua_mempool;
+extern mempool_t *scst_sense_mempool;
+extern mempool_t *scst_aen_mempool;
+
+extern struct kmem_cache *scst_cmd_cachep;
+extern struct kmem_cache *scst_sess_cachep;
+extern struct kmem_cache *scst_tgtd_cachep;
+extern struct kmem_cache *scst_acgd_cachep;
+
+extern spinlock_t scst_main_lock;
+
+extern struct scst_sgv_pools scst_sgv;
+
+extern unsigned long scst_flags;
+extern atomic_t scst_cmd_count;
+extern struct list_head scst_template_list;
+extern struct list_head scst_dev_list;
+extern struct list_head scst_dev_type_list;
+extern wait_queue_head_t scst_dev_cmd_waitQ;
+
+extern unsigned int scst_setup_id;
+
+extern spinlock_t scst_init_lock;
+extern struct list_head scst_init_cmd_list;
+extern wait_queue_head_t scst_init_cmd_list_waitQ;
+extern unsigned int scst_init_poll_cnt;
+
+extern struct scst_cmd_threads scst_main_cmd_threads;
+
+extern spinlock_t scst_mcmd_lock;
+/* The following lists protected by scst_mcmd_lock */
+extern struct list_head scst_active_mgmt_cmd_list;
+extern struct list_head scst_delayed_mgmt_cmd_list;
+extern wait_queue_head_t scst_mgmt_cmd_list_waitQ;
+
+struct scst_tasklet {
+	spinlock_t tasklet_lock;
+	struct list_head tasklet_cmd_list;
+	struct tasklet_struct tasklet;
+};
+extern struct scst_tasklet scst_tasklets[NR_CPUS];
+
+extern wait_queue_head_t scst_mgmt_waitQ;
+extern spinlock_t scst_mgmt_lock;
+extern struct list_head scst_sess_init_list;
+extern struct list_head scst_sess_shut_list;
+
+struct scst_cmd_thread_t {
+	struct task_struct *cmd_thread;
+	struct list_head thread_list_entry;
+};
+
+static inline bool scst_set_io_context(struct scst_cmd *cmd,
+	struct io_context **old)
+{
+	bool res;
+
+	if (cmd->cmd_threads == &scst_main_cmd_threads) {
+		EXTRACHECKS_BUG_ON(in_interrupt());
+		/*
+		 * No need for any ref counting action, because the io_context
+		 * is supposed to be cleared at the end of the caller function.
+		 */
+		current->io_context = cmd->tgt_dev->async_io_context;
+		res = true;
+		TRACE_DBG("io_context %p (tgt_dev %p)", current->io_context,
+			cmd->tgt_dev);
+		EXTRACHECKS_BUG_ON(current->io_context == NULL);
+	} else
+		res = false;
+
+	return res;
+}
+
+static inline void scst_reset_io_context(struct scst_tgt_dev *tgt_dev,
+	struct io_context *old)
+{
+	current->io_context = old;
+	TRACE_DBG("io_context %p reset", current->io_context);
+	return;
+}
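+
+/*
+ * A typical pairing (sketch; the real call sites live in the SCST core,
+ * not in this header):
+ *
+ *	struct io_context *old = NULL;
+ *	bool set = scst_set_io_context(cmd, &old);
+ *	... process cmd ...
+ *	if (set)
+ *		scst_reset_io_context(cmd->tgt_dev, old);
+ */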
+
+/*
+ * Converts the string representation of a threads pool type to an enum.
+ * Returns SCST_THREADS_POOL_TYPE_INVALID if the string is invalid.
+ */
+extern enum scst_dev_type_threads_pool_type scst_parse_threads_pool_type(
+	const char *p, int len);
+
+extern int scst_add_threads(struct scst_cmd_threads *cmd_threads,
+	struct scst_device *dev, struct scst_tgt_dev *tgt_dev, int num);
+extern void scst_del_threads(struct scst_cmd_threads *cmd_threads, int num);
+
+extern int scst_create_dev_threads(struct scst_device *dev);
+extern void scst_stop_dev_threads(struct scst_device *dev);
+
+extern int scst_tgt_dev_setup_threads(struct scst_tgt_dev *tgt_dev);
+extern void scst_tgt_dev_stop_threads(struct scst_tgt_dev *tgt_dev);
+
+extern bool scst_del_thr_data(struct scst_tgt_dev *tgt_dev,
+	struct task_struct *tsk);
+
+extern struct scst_dev_type scst_null_devtype;
+
+extern struct scst_cmd *__scst_check_deferred_commands(
+	struct scst_tgt_dev *tgt_dev);
+
+/* Used to avoid a function call on the fast path */
+static inline struct scst_cmd *scst_check_deferred_commands(
+	struct scst_tgt_dev *tgt_dev)
+{
+	if (tgt_dev->def_cmd_count == 0)
+		return NULL;
+	else
+		return __scst_check_deferred_commands(tgt_dev);
+}
+
+static inline void scst_make_deferred_commands_active(
+	struct scst_tgt_dev *tgt_dev)
+{
+	struct scst_cmd *c;
+
+	c = __scst_check_deferred_commands(tgt_dev);
+	if (c != NULL) {
+		TRACE_SN("Adding cmd %p to active cmd list", c);
+		spin_lock_irq(&c->cmd_threads->cmd_list_lock);
+		list_add_tail(&c->cmd_list_entry,
+			&c->cmd_threads->active_cmd_list);
+		wake_up(&c->cmd_threads->cmd_list_waitQ);
+		spin_unlock_irq(&c->cmd_threads->cmd_list_lock);
+	}
+
+	return;
+}
+
+void scst_inc_expected_sn(struct scst_tgt_dev *tgt_dev, atomic_t *slot);
+int scst_check_hq_cmd(struct scst_cmd *cmd);
+
+void scst_unblock_deferred(struct scst_tgt_dev *tgt_dev,
+	struct scst_cmd *cmd_sn);
+
+void scst_on_hq_cmd_response(struct scst_cmd *cmd);
+void scst_xmit_process_aborted_cmd(struct scst_cmd *cmd);
+
+int scst_cmd_thread(void *arg);
+void scst_cmd_tasklet(long p);
+int scst_init_thread(void *arg);
+int scst_tm_thread(void *arg);
+int scst_global_mgmt_thread(void *arg);
+
+int scst_queue_retry_cmd(struct scst_cmd *cmd, int finished_cmds);
+
+static inline void scst_tgtt_cleanup(struct scst_tgt_template *tgtt) { }
+static inline void scst_devt_cleanup(struct scst_dev_type *devt) { }
+
+int scst_alloc_tgt(struct scst_tgt_template *tgtt, struct scst_tgt **tgt);
+void scst_free_tgt(struct scst_tgt *tgt);
+
+int scst_alloc_device(gfp_t gfp_mask, struct scst_device **out_dev);
+void scst_free_device(struct scst_device *dev);
+
+struct scst_acg *scst_alloc_add_acg(struct scst_tgt *tgt,
+	const char *acg_name);
+void scst_clear_acg(struct scst_acg *acg);
+void scst_destroy_acg(struct scst_acg *acg);
+void scst_free_acg(struct scst_acg *acg);
+
+struct scst_acg *scst_tgt_find_acg(struct scst_tgt *tgt, const char *name);
+
+struct scst_acg *scst_find_acg(const struct scst_session *sess);
+
+void scst_check_reassign_sessions(void);
+
+int scst_sess_alloc_tgt_devs(struct scst_session *sess);
+void scst_sess_free_tgt_devs(struct scst_session *sess);
+void scst_nexus_loss(struct scst_tgt_dev *tgt_dev, bool queue_UA);
+
+int scst_acg_add_dev(struct scst_acg *acg, struct scst_device *dev,
+	uint64_t lun, int read_only, bool gen_scst_report_luns_changed);
+int scst_acg_remove_dev(struct scst_acg *acg, struct scst_device *dev,
+	bool gen_scst_report_luns_changed);
+
+void scst_acg_dev_destroy(struct scst_acg_dev *acg_dev);
+
+int scst_acg_add_name(struct scst_acg *acg, const char *name);
+void scst_acg_remove_acn(struct scst_acn *acn);
+struct scst_acn *scst_acg_find_name(struct scst_acg *acg, const char *name);
+
+/* Activity is supposed to be suspended and scst_mutex held */
+static inline bool scst_acg_sess_is_empty(struct scst_acg *acg)
+{
+	return list_empty(&acg->acg_sess_list);
+}
+
+int scst_prepare_request_sense(struct scst_cmd *orig_cmd);
+int scst_finish_internal_cmd(struct scst_cmd *cmd);
+
+void scst_store_sense(struct scst_cmd *cmd);
+
+int scst_assign_dev_handler(struct scst_device *dev,
+	struct scst_dev_type *handler);
+
+struct scst_session *scst_alloc_session(struct scst_tgt *tgt, gfp_t gfp_mask,
+	const char *initiator_name);
+void scst_free_session(struct scst_session *sess);
+void scst_release_session(struct scst_session *sess);
+void scst_free_session_callback(struct scst_session *sess);
+
+struct scst_cmd *scst_alloc_cmd(gfp_t gfp_mask);
+void scst_free_cmd(struct scst_cmd *cmd);
+static inline void scst_destroy_cmd(struct scst_cmd *cmd)
+{
+	kmem_cache_free(scst_cmd_cachep, cmd);
+	return;
+}
+
+void scst_check_retries(struct scst_tgt *tgt);
+
+int scst_scsi_exec_async(struct scst_cmd *cmd,
+	void (*done)(void *, char *, int, int));
+
+int scst_alloc_space(struct scst_cmd *cmd);
+
+int scst_lib_init(void);
+void scst_lib_exit(void);
+
+uint64_t scst_pack_lun(const uint64_t lun, unsigned int addr_method);
+uint64_t scst_unpack_lun(const uint8_t *lun, int len);
+
+struct scst_mgmt_cmd *scst_alloc_mgmt_cmd(gfp_t gfp_mask);
+void scst_free_mgmt_cmd(struct scst_mgmt_cmd *mcmd);
+void scst_done_cmd_mgmt(struct scst_cmd *cmd);
+
+int scst_sysfs_init(void);
+void scst_sysfs_cleanup(void);
+int scst_create_tgtt_sysfs(struct scst_tgt_template *tgtt);
+void scst_tgtt_sysfs_put(struct scst_tgt_template *tgtt);
+int scst_create_tgt_sysfs(struct scst_tgt *tgt);
+void scst_tgt_sysfs_prepare_put(struct scst_tgt *tgt);
+void scst_tgt_sysfs_put(struct scst_tgt *tgt);
+int scst_create_sess_sysfs(struct scst_session *sess);
+void scst_sess_sysfs_put(struct scst_session *sess);
+int scst_create_sgv_sysfs(struct sgv_pool *pool);
+void scst_sgv_sysfs_put(struct sgv_pool *pool);
+int scst_create_devt_sysfs(struct scst_dev_type *devt);
+void scst_devt_sysfs_put(struct scst_dev_type *devt);
+int scst_create_device_sysfs(struct scst_device *dev);
+void scst_device_sysfs_put(struct scst_device *dev);
+int scst_create_devt_dev_sysfs(struct scst_device *dev);
+void scst_devt_dev_sysfs_put(struct scst_device *dev);
+int scst_create_acn_sysfs(struct scst_acg *acg, struct scst_acn *acn);
+void scst_acn_sysfs_del(struct scst_acg *acg, struct scst_acn *acn,
+	bool reassign);
+
+void scst_acg_sysfs_put(struct scst_acg *acg);
+
+int scst_get_cdb_len(const uint8_t *cdb);
+
+void __scst_dev_check_set_UA(struct scst_device *dev, struct scst_cmd *exclude,
+	const uint8_t *sense, int sense_len);
+static inline void scst_dev_check_set_UA(struct scst_device *dev,
+	struct scst_cmd *exclude, const uint8_t *sense, int sense_len)
+{
+	spin_lock_bh(&dev->dev_lock);
+	__scst_dev_check_set_UA(dev, exclude, sense, sense_len);
+	spin_unlock_bh(&dev->dev_lock);
+	return;
+}
+void scst_dev_check_set_local_UA(struct scst_device *dev,
+	struct scst_cmd *exclude, const uint8_t *sense, int sense_len);
+
+#define SCST_SET_UA_FLAG_AT_HEAD	1
+#define SCST_SET_UA_FLAG_GLOBAL		2
+
+void scst_check_set_UA(struct scst_tgt_dev *tgt_dev,
+	const uint8_t *sense, int sense_len, int flags);
+int scst_set_pending_UA(struct scst_cmd *cmd);
+
+void scst_report_luns_changed(struct scst_acg *acg);
+
+void scst_abort_cmd(struct scst_cmd *cmd, struct scst_mgmt_cmd *mcmd,
+	int other_ini, int call_dev_task_mgmt_fn);
+void scst_process_reset(struct scst_device *dev,
+	struct scst_session *originator, struct scst_cmd *exclude_cmd,
+	struct scst_mgmt_cmd *mcmd, bool setUA);
+
+bool scst_is_ua_global(const uint8_t *sense, int len);
+void scst_requeue_ua(struct scst_cmd *cmd);
+
+void scst_gen_aen_or_ua(struct scst_tgt_dev *tgt_dev,
+	int key, int asc, int ascq);
+
+static inline bool scst_is_implicit_hq(struct scst_cmd *cmd)
+{
+	return (cmd->op_flags & SCST_IMPLICIT_HQ) != 0;
+}
+
+/*
+ * Some notes on device "blocking". Blocking means that no commands will
+ * go from SCST to the underlying SCSI device until it is unblocked. But
+ * we don't care about commands that are already on the device.
+ */
+
+extern int scst_inc_on_dev_cmd(struct scst_cmd *cmd);
+
+extern void __scst_block_dev(struct scst_device *dev);
+extern void scst_block_dev_cmd(struct scst_cmd *cmd, int outstanding);
+extern void scst_unblock_dev(struct scst_device *dev);
+extern void scst_unblock_dev_cmd(struct scst_cmd *cmd);
+
+/* No locks */
+static inline void scst_dec_on_dev_cmd(struct scst_cmd *cmd)
+{
+	struct scst_device *dev = cmd->dev;
+	bool unblock_dev = cmd->inc_blocking;
+
+	if (cmd->inc_blocking) {
+		TRACE_MGMT_DBG("cmd %p (tag %llu): unblocking dev %p", cmd,
+			       (long long unsigned int)cmd->tag, cmd->dev);
+		cmd->inc_blocking = 0;
+	}
+	cmd->dec_on_dev_needed = 0;
+
+	if (unblock_dev)
+		scst_unblock_dev(dev);
+
+	atomic_dec(&dev->on_dev_count);
+	/* See comment in scst_block_dev() */
+	smp_mb__after_atomic_dec();
+
+	TRACE_DBG("New on_dev_count %d", atomic_read(&dev->on_dev_count));
+
+	BUG_ON(atomic_read(&dev->on_dev_count) < 0);
+
+	if (unlikely(dev->block_count != 0))
+		wake_up_all(&dev->on_dev_waitQ);
+
+	return;
+}
+
+static inline void __scst_get(int barrier)
+{
+	atomic_inc(&scst_cmd_count);
+	TRACE_DBG("Incrementing scst_cmd_count (new value %d)",
+		atomic_read(&scst_cmd_count));
+
+	/* See comment about smp_mb() in scst_suspend_activity() */
+	if (barrier)
+		smp_mb__after_atomic_inc();
+}
+
+static inline void __scst_put(void)
+{
+	int f;
+	f = atomic_dec_and_test(&scst_cmd_count);
+	/* See comment about smp_mb() in scst_suspend_activity() */
+	if (f && unlikely(test_bit(SCST_FLAG_SUSPENDED, &scst_flags))) {
+		TRACE_MGMT_DBG("%s", "Waking up scst_dev_cmd_waitQ");
+		wake_up_all(&scst_dev_cmd_waitQ);
+	}
+	TRACE_DBG("Decrementing scst_cmd_count (new value %d)",
+		atomic_read(&scst_cmd_count));
+}
+
+void scst_sched_session_free(struct scst_session *sess);
+
+static inline void scst_sess_get(struct scst_session *sess)
+{
+	atomic_inc(&sess->refcnt);
+	TRACE_DBG("Incrementing sess %p refcnt (new value %d)",
+		sess, atomic_read(&sess->refcnt));
+}
+
+static inline void scst_sess_put(struct scst_session *sess)
+{
+	TRACE_DBG("Decrementing sess %p refcnt (new value %d)",
+		sess, atomic_read(&sess->refcnt)-1);
+	if (atomic_dec_and_test(&sess->refcnt))
+		scst_sched_session_free(sess);
+}
+
+static inline void __scst_cmd_get(struct scst_cmd *cmd)
+{
+	atomic_inc(&cmd->cmd_ref);
+	TRACE_DBG("Incrementing cmd %p ref (new value %d)",
+		cmd, atomic_read(&cmd->cmd_ref));
+}
+
+static inline void __scst_cmd_put(struct scst_cmd *cmd)
+{
+	TRACE_DBG("Decrementing cmd %p ref (new value %d)",
+		cmd, atomic_read(&cmd->cmd_ref)-1);
+	if (atomic_dec_and_test(&cmd->cmd_ref))
+		scst_free_cmd(cmd);
+}
+
+extern void scst_throttle_cmd(struct scst_cmd *cmd);
+extern void scst_unthrottle_cmd(struct scst_cmd *cmd);
+
+static inline void scst_check_restore_sg_buff(struct scst_cmd *cmd)
+{
+	if (cmd->sg_buff_modified) {
+		TRACE_MEM("cmd %p, sg %p, orig_sg_entry %d, "
+			"orig_entry_len %d, orig_sg_cnt %d", cmd, cmd->sg,
+			cmd->orig_sg_entry, cmd->orig_entry_len,
+			cmd->orig_sg_cnt);
+		cmd->sg[cmd->orig_sg_entry].length = cmd->orig_entry_len;
+		cmd->sg_cnt = cmd->orig_sg_cnt;
+		cmd->sg_buff_modified = 0;
+	}
+}
+
+#ifdef CONFIG_SCST_DEBUG_TM
+extern void tm_dbg_check_released_cmds(void);
+extern int tm_dbg_check_cmd(struct scst_cmd *cmd);
+extern void tm_dbg_release_cmd(struct scst_cmd *cmd);
+extern void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn,
+	int force);
+extern int tm_dbg_is_release(void);
+#else
+static inline void tm_dbg_check_released_cmds(void) {}
+static inline int tm_dbg_check_cmd(struct scst_cmd *cmd)
+{
+	return 0;
+}
+static inline void tm_dbg_release_cmd(struct scst_cmd *cmd) {}
+static inline void tm_dbg_task_mgmt(struct scst_device *dev, const char *fn,
+	int force) {}
+static inline int tm_dbg_is_release(void)
+{
+	return 0;
+}
+#endif /* CONFIG_SCST_DEBUG_TM */
+
+#ifdef CONFIG_SCST_DEBUG_SN
+void scst_check_debug_sn(struct scst_cmd *cmd);
+#else
+static inline void scst_check_debug_sn(struct scst_cmd *cmd) {}
+#endif
+
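+/*
+ * Wraparound-safe SN comparison (the same trick as the kernel's
+ * time_before()): e.g. scst_sn_before(0xFFFFFFF0, 0x10) is true, because
+ * the difference 0xFFFFFFE0 is negative when viewed as int32_t, so the
+ * ordering is preserved across 32-bit counter wraparound.
+ */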
+static inline int scst_sn_before(uint32_t seq1, uint32_t seq2)
+{
+	return (int32_t)(seq1-seq2) < 0;
+}
+
+bool scst_is_relative_target_port_id_unique(uint16_t id, struct scst_tgt *t);
+int gen_relative_target_port_id(uint16_t *id);
+
+#ifdef CONFIG_SCST_MEASURE_LATENCY
+
+void scst_set_start_time(struct scst_cmd *cmd);
+void scst_set_cur_start(struct scst_cmd *cmd);
+void scst_set_parse_time(struct scst_cmd *cmd);
+void scst_set_alloc_buf_time(struct scst_cmd *cmd);
+void scst_set_restart_waiting_time(struct scst_cmd *cmd);
+void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd);
+void scst_set_pre_exec_time(struct scst_cmd *cmd);
+void scst_set_exec_time(struct scst_cmd *cmd);
+void scst_set_dev_done_time(struct scst_cmd *cmd);
+void scst_set_xmit_time(struct scst_cmd *cmd);
+void scst_set_tgt_on_free_time(struct scst_cmd *cmd);
+void scst_set_dev_on_free_time(struct scst_cmd *cmd);
+void scst_update_lat_stats(struct scst_cmd *cmd);
+
+#else
+
+static inline void scst_set_start_time(struct scst_cmd *cmd) {}
+static inline void scst_set_cur_start(struct scst_cmd *cmd) {}
+static inline void scst_set_parse_time(struct scst_cmd *cmd) {}
+static inline void scst_set_alloc_buf_time(struct scst_cmd *cmd) {}
+static inline void scst_set_restart_waiting_time(struct scst_cmd *cmd) {}
+static inline void scst_set_rdy_to_xfer_time(struct scst_cmd *cmd) {}
+static inline void scst_set_pre_exec_time(struct scst_cmd *cmd) {}
+static inline void scst_set_exec_time(struct scst_cmd *cmd) {}
+static inline void scst_set_dev_done_time(struct scst_cmd *cmd) {}
+static inline void scst_set_xmit_time(struct scst_cmd *cmd) {}
+static inline void scst_set_tgt_on_free_time(struct scst_cmd *cmd) {}
+static inline void scst_set_dev_on_free_time(struct scst_cmd *cmd) {}
+static inline void scst_update_lat_stats(struct scst_cmd *cmd) {}
+
+#endif /* CONFIG_SCST_MEASURE_LATENCY */
+
+#endif /* __SCST_PRIV_H */


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 7/12/1/5] SCST SGV cache
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (5 preceding siblings ...)
  2010-04-13 13:06   ` [PATCH][RFC 6/12/1/5] SCST core's private header Vladislav Bolkhovitin
@ 2010-04-13 13:06   ` Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 8/12/1/5] SCST sysfs interface Vladislav Bolkhovitin
                     ` (2 subsequent siblings)
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains the SCST SGV cache, a memory management subsystem in
SCST. More information about it can be found in the documentation included
in this patch.

P.S. Solaris COMSTAR also has a similar facility.
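
For illustration, here is a minimal, hypothetical usage sketch of the API
this patch exports (error handling omitted; the pool name is made up, and
purge_interval 0 is assumed to select the default purge interval):

	struct scst_mem_lim mem_lim;
	struct sgv_pool_obj *sgv = NULL;
	struct scatterlist *sg;
	struct sgv_pool *pool;
	int count;

	scst_init_mem_lim(&mem_lim);
	pool = sgv_pool_create("example", sgv_no_clustering, 0, false, 0);
	sg = sgv_pool_alloc(pool, 64 * 1024, GFP_KERNEL, 0, &count,
			    &sgv, &mem_lim, NULL);
	/* ... use the 'count' entries of sg ... */
	sgv_pool_free(sgv, &mem_lim);
	sgv_pool_del(pool);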

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 Documentation/scst/sgv_cache.txt |  224 ++++
 drivers/scst/scst_mem.c          | 1788 +++++++++++++++++++++++++++++++++++++++
 drivers/scst/scst_mem.h          |  150 +++
 include/scst/scst_sgv.h          |   97 ++
 4 files changed, 2259 insertions(+)

diff -uprN orig/linux-2.6.33/include/scst/scst_sgv.h linux-2.6.33/include/scst/scst_sgv.h
--- orig/linux-2.6.33/include/scst/scst_sgv.h
+++ linux-2.6.33/include/scst/scst_sgv.h
@@ -0,0 +1,97 @@
+/*
+ *  include/scst_sgv.h
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Include file for SCST SGV cache.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+#ifndef __SCST_SGV_H
+#define __SCST_SGV_H
+
+/** SGV pool routines and flag bits **/
+
+/* Set if the allocated object must not come from the cache */
+#define SGV_POOL_ALLOC_NO_CACHED		1
+
+/* Set if there should not be any memory allocations on a cache miss */
+#define SGV_POOL_NO_ALLOC_ON_CACHE_MISS		2
+
+/* Set if an object should be returned even without its SG vector built */
+#define SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL	4
+
+/*
+ * Set if the allocated object must be a new one, i.e. freshly allocated
+ * from the cache rather than a reused (recycled) one
+ */
+#define SGV_POOL_ALLOC_GET_NEW			8
+
+struct sgv_pool_obj;
+struct sgv_pool;
+
+/*
+ * Structure to keep a memory limit for an SCST object
+ */
+struct scst_mem_lim {
+	/* How much memory is allocated under this object */
+	atomic_t alloced_pages;
+
+	/*
+	 * How much memory is allowed to be allocated under this object. Kept
+	 * here mostly to save a possible cache miss when accessing
+	 * scst_max_dev_cmd_mem.
+	 */
+	int max_allowed_pages;
+};
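+
+/*
+ * The intended accounting pattern (see sgv_check_allowed_mem() in
+ * scst_mem.c): atomically add the requested number of pages to
+ * alloced_pages, and roll the addition back if the result exceeds
+ * max_allowed_pages.
+ */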
+
+/* Types of clustering */
+enum sgv_clustering_types {
+	/* No clustering performed */
+	sgv_no_clustering = 0,
+
+	/*
+	 * A page will only be merged with the latest previously allocated
+	 * page, so the order of pages in the SG will be preserved.
+	 */
+	sgv_tail_clustering,
+
+	/*
+	 * Free merging of pages at any place in the SG is allowed. This mode
+	 * usually provides the best merging rate.
+	 */
+	sgv_full_clustering,
+};
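+
+/*
+ * Example (with 4K pages): if page frames 100, 101 and 7 are allocated in
+ * that order, tail clustering merges frames 100 and 101 (physically
+ * contiguous) into one 8K SG entry, while frame 7 starts a new entry, so
+ * the SG vector ends up with 2 entries instead of 3. Full clustering
+ * could additionally merge a later allocation of frame 99 or 102 into
+ * that first entry.
+ */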
+
+struct sgv_pool *sgv_pool_create(const char *name,
+	enum sgv_clustering_types clustered, int single_alloc_pages,
+	bool shared, int purge_interval);
+void sgv_pool_del(struct sgv_pool *pool);
+
+void sgv_pool_get(struct sgv_pool *pool);
+void sgv_pool_put(struct sgv_pool *pool);
+
+void sgv_pool_flush(struct sgv_pool *pool);
+
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+	struct page *(*alloc_pages_fn)(struct scatterlist *, gfp_t, void *),
+	void (*free_pages_fn)(struct scatterlist *, int, void *));
+
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+	gfp_t gfp_mask, int flags, int *count,
+	struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv);
+void sgv_pool_free(struct sgv_pool_obj *sgv, struct scst_mem_lim *mem_lim);
+
+void *sgv_get_priv(struct sgv_pool_obj *sgv);
+
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim);
+
+#endif /* __SCST_SGV_H */
diff -uprN orig/linux-2.6.33/drivers/scst/scst_mem.h linux-2.6.33/drivers/scst/scst_mem.h
--- orig/linux-2.6.33/drivers/scst/scst_mem.h
+++ linux-2.6.33/drivers/scst/scst_mem.h
@@ -0,0 +1,150 @@
+/*
+ *  scst_mem.h
+ *
+ *  Copyright (C) 2006 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/scatterlist.h>
+#include <linux/workqueue.h>
+
+#define SGV_POOL_ELEMENTS	11
+
+/*
+ * sg_num is indexed by the page number, pg_count is indexed by the SG number.
+ * Kept in one entry to simplify the code (e.g., all the sizeof(*) parts) and
+ * to save some CPU cache in the non-clustered case.
+ */
+struct trans_tbl_ent {
+	unsigned short sg_num;
+	unsigned short pg_count;
+};
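+
+/*
+ * Example: 4 pages clustered into 2 SG entries of 2 pages each yield
+ * trans_tbl[0..3].sg_num = 1, 1, 2, 2 (1-based number of the SG entry
+ * covering each page) and trans_tbl[0].pg_count = 0,
+ * trans_tbl[1].pg_count = 2 (index of the first page of each SG entry).
+ */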
+
+/*
+ * SGV pool object
+ */
+struct sgv_pool_obj {
+	int cache_num;
+	int pages;
+
+	/* jiffies, protected by sgv_pool_lock */
+	unsigned long time_stamp;
+
+	struct list_head recycling_list_entry;
+	struct list_head sorted_recycling_list_entry;
+
+	struct sgv_pool *owner_pool;
+	int orig_sg;
+	int orig_length;
+	int sg_count;
+	void *allocator_priv;
+	struct trans_tbl_ent *trans_tbl;
+	struct scatterlist *sg_entries;
+	struct scatterlist sg_entries_data[0];
+};
+
+/*
+ * SGV pool statistics accounting structure
+ */
+struct sgv_pool_cache_acc {
+	atomic_t total_alloc, hit_alloc;
+	atomic_t merged;
+};
+
+/*
+ * SGV pool allocation functions
+ */
+struct sgv_pool_alloc_fns {
+	struct page *(*alloc_pages_fn)(struct scatterlist *sg, gfp_t gfp_mask,
+		void *priv);
+	void (*free_pages_fn)(struct scatterlist *sg, int sg_count,
+		void *priv);
+};
+
+/*
+ * SGV pool
+ */
+struct sgv_pool {
+	enum sgv_clustering_types clustering_type;
+	int single_alloc_pages;
+	int max_cached_pages;
+
+	struct sgv_pool_alloc_fns alloc_fns;
+
+	/* <=4K, <=8K, <=16K, <=32K, <=64K, <=128K, <=256K, <=512K, <=1M, <=2M, <=4M */
+	struct kmem_cache *caches[SGV_POOL_ELEMENTS];
+
+	spinlock_t sgv_pool_lock; /* outer lock for sgv_pools_lock! */
+
+	int purge_interval;
+
+	/* Protected by sgv_pool_lock, if necessary */
+	unsigned int purge_work_scheduled:1;
+	unsigned int sgv_kobj_initialized:1;
+
+	/* Protected by sgv_pool_lock */
+	struct list_head sorted_recycling_list;
+
+	int inactive_cached_pages; /* protected by sgv_pool_lock */
+
+	/* Protected by sgv_pool_lock */
+	struct list_head recycling_lists[SGV_POOL_ELEMENTS];
+
+	int cached_pages, cached_entries; /* protected by sgv_pool_lock */
+
+	struct sgv_pool_cache_acc cache_acc[SGV_POOL_ELEMENTS];
+
+	struct delayed_work sgv_purge_work;
+
+	struct list_head sgv_active_pools_list_entry;
+
+	atomic_t big_alloc, big_pages, big_merged;
+	atomic_t other_alloc, other_pages, other_merged;
+
+	atomic_t sgv_pool_ref;
+
+	int max_caches;
+
+	/* SCST_MAX_NAME + a few more bytes to match scst_user expectations */
+	char cache_names[SGV_POOL_ELEMENTS][SCST_MAX_NAME + 10];
+	char name[SCST_MAX_NAME + 10];
+
+	struct mm_struct *owner_mm;
+
+	struct list_head sgv_pools_list_entry;
+
+	struct kobject sgv_kobj;
+};
+
+static inline struct scatterlist *sgv_pool_sg(struct sgv_pool_obj *obj)
+{
+	return obj->sg_entries;
+}
+
+int scst_sgv_pools_init(unsigned long mem_hwmark, unsigned long mem_lwmark);
+void scst_sgv_pools_deinit(void);
+
+void sgv_pool_destroy(struct sgv_pool *pool);
+
+ssize_t sgv_sysfs_stat_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf);
+ssize_t sgv_sysfs_stat_reset(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count);
+ssize_t sgv_sysfs_global_stat_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf);
+ssize_t sgv_sysfs_global_stat_reset(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count);
+
+void scst_sgv_pool_use_norm(struct scst_tgt_dev *tgt_dev);
+void scst_sgv_pool_use_norm_clust(struct scst_tgt_dev *tgt_dev);
+void scst_sgv_pool_use_dma(struct scst_tgt_dev *tgt_dev);
diff -uprN orig/linux-2.6.33/drivers/scst/scst_mem.c linux-2.6.33/drivers/scst/scst_mem.c
--- orig/linux-2.6.33/drivers/scst/scst_mem.c
+++ linux-2.6.33/drivers/scst/scst_mem.c
@@ -0,0 +1,1788 @@
+/*
+ *  scst_mem.c
+ *
+ *  Copyright (C) 2006 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/unistd.h>
+#include <linux/string.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+#define SGV_DEFAULT_PURGE_INTERVAL	(60 * HZ)
+#define SGV_MIN_SHRINK_INTERVAL		(1 * HZ)
+
+/* Max pages freed from a pool per shrinking iteration */
+#define MAX_PAGES_PER_POOL	50
+
+static struct sgv_pool *sgv_norm_clust_pool, *sgv_norm_pool, *sgv_dma_pool;
+
+static atomic_t sgv_pages_total = ATOMIC_INIT(0);
+
+/* Both read-only */
+static int sgv_hi_wmk;
+static int sgv_lo_wmk;
+
+static int sgv_max_local_pages, sgv_max_trans_pages;
+
+static DEFINE_SPINLOCK(sgv_pools_lock); /* inner lock for sgv_pool_lock! */
+static DEFINE_MUTEX(sgv_pools_mutex);
+
+/* Both protected by sgv_pools_lock */
+static struct sgv_pool *sgv_cur_purge_pool;
+static LIST_HEAD(sgv_active_pools_list);
+
+static atomic_t sgv_releases_on_hiwmk = ATOMIC_INIT(0);
+static atomic_t sgv_releases_on_hiwmk_failed = ATOMIC_INIT(0);
+
+static atomic_t sgv_other_total_alloc = ATOMIC_INIT(0);
+
+static struct shrinker sgv_shrinker;
+
+/*
+ * Protected by sgv_pools_mutex AND sgv_pools_lock for writes,
+ * either one for reads.
+ */
+static LIST_HEAD(sgv_pools_list);
+
+static inline bool sgv_pool_clustered(const struct sgv_pool *pool)
+{
+	return pool->clustering_type != sgv_no_clustering;
+}
+
+void scst_sgv_pool_use_norm(struct scst_tgt_dev *tgt_dev)
+{
+	tgt_dev->gfp_mask = __GFP_NOWARN;
+	tgt_dev->pool = sgv_norm_pool;
+	clear_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+void scst_sgv_pool_use_norm_clust(struct scst_tgt_dev *tgt_dev)
+{
+	TRACE_MEM("%s", "Use clustering");
+	tgt_dev->gfp_mask = __GFP_NOWARN;
+	tgt_dev->pool = sgv_norm_clust_pool;
+	set_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+void scst_sgv_pool_use_dma(struct scst_tgt_dev *tgt_dev)
+{
+	TRACE_MEM("%s", "Use ISA DMA memory");
+	tgt_dev->gfp_mask = __GFP_NOWARN | GFP_DMA;
+	tgt_dev->pool = sgv_dma_pool;
+	clear_bit(SCST_TGT_DEV_CLUST_POOL, &tgt_dev->tgt_dev_flags);
+}
+
+/* Must be called with no locks held */
+static void sgv_dtor_and_free(struct sgv_pool_obj *obj)
+{
+	struct sgv_pool *pool = obj->owner_pool;
+
+	TRACE_MEM("Destroying sgv obj %p", obj);
+
+	if (obj->sg_count != 0) {
+		pool->alloc_fns.free_pages_fn(obj->sg_entries,
+			obj->sg_count, obj->allocator_priv);
+	}
+	if (obj->sg_entries != obj->sg_entries_data) {
+		if (obj->trans_tbl !=
+		    (struct trans_tbl_ent *)obj->sg_entries_data) {
+			/* kfree() handles NULL parameter */
+			kfree(obj->trans_tbl);
+			obj->trans_tbl = NULL;
+		}
+		kfree(obj->sg_entries);
+	}
+
+	kmem_cache_free(pool->caches[obj->cache_num], obj);
+	return;
+}
+
+/* Might be called under sgv_pool_lock */
+static inline void sgv_del_from_active(struct sgv_pool *pool)
+{
+	struct list_head *next;
+
+	TRACE_MEM("Deleting sgv pool %p from the active list", pool);
+
+	spin_lock_bh(&sgv_pools_lock);
+
+	next = pool->sgv_active_pools_list_entry.next;
+	list_del(&pool->sgv_active_pools_list_entry);
+
+	if (sgv_cur_purge_pool == pool) {
+		TRACE_MEM("Sgv pool %p is sgv cur purge pool", pool);
+
+		if (next == &sgv_active_pools_list)
+			next = next->next;
+
+		if (next == &sgv_active_pools_list) {
+			sgv_cur_purge_pool = NULL;
+			TRACE_MEM("%s", "Sgv active list now empty");
+		} else {
+			sgv_cur_purge_pool = list_entry(next, typeof(*pool),
+				sgv_active_pools_list_entry);
+			TRACE_MEM("New sgv cur purge pool %p",
+				sgv_cur_purge_pool);
+		}
+	}
+
+	spin_unlock_bh(&sgv_pools_lock);
+	return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static void sgv_dec_cached_entries(struct sgv_pool *pool, int pages)
+{
+	pool->cached_entries--;
+	pool->cached_pages -= pages;
+
+	if (pool->cached_entries == 0)
+		sgv_del_from_active(pool);
+
+	return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static void __sgv_purge_from_cache(struct sgv_pool_obj *obj)
+{
+	int pages = obj->pages;
+	struct sgv_pool *pool = obj->owner_pool;
+
+	TRACE_MEM("Purging sgv obj %p from pool %p (new cached_entries %d)",
+		obj, pool, pool->cached_entries-1);
+
+	list_del(&obj->sorted_recycling_list_entry);
+	list_del(&obj->recycling_list_entry);
+
+	pool->inactive_cached_pages -= pages;
+	sgv_dec_cached_entries(pool, pages);
+
+	atomic_sub(pages, &sgv_pages_total);
+
+	return;
+}
+
+/* Must be called with sgv_pool_lock held */
+static bool sgv_purge_from_cache(struct sgv_pool_obj *obj, int min_interval,
+	unsigned long cur_time)
+{
+	EXTRACHECKS_BUG_ON(min_interval < 0);
+
+	TRACE_MEM("Checking if sgv obj %p should be purged (cur time %ld, "
+		"obj time %ld, time to purge %ld)", obj, cur_time,
+		obj->time_stamp, obj->time_stamp + min_interval);
+
+	if (time_after_eq(cur_time, (obj->time_stamp + min_interval))) {
+		__sgv_purge_from_cache(obj);
+		return true;
+	}
+	return false;
+}
+
+/* No locks */
+static int sgv_shrink_pool(struct sgv_pool *pool, int nr, int min_interval,
+	unsigned long cur_time)
+{
+	int freed = 0;
+
+	TRACE_MEM("Trying to shrink pool %p (nr %d, min_interval %d)",
+		pool, nr, min_interval);
+
+	if (pool->purge_interval < 0) {
+		TRACE_MEM("Not shrinkable pool %p, skipping", pool);
+		goto out;
+	}
+
+	spin_lock_bh(&pool->sgv_pool_lock);
+
+	while (!list_empty(&pool->sorted_recycling_list) &&
+			(atomic_read(&sgv_pages_total) > sgv_lo_wmk)) {
+		struct sgv_pool_obj *obj = list_entry(
+			pool->sorted_recycling_list.next,
+			struct sgv_pool_obj, sorted_recycling_list_entry);
+
+		if (sgv_purge_from_cache(obj, min_interval, cur_time)) {
+			int pages = obj->pages;
+
+			freed += pages;
+			nr -= pages;
+
+			TRACE_MEM("%d pages purged from pool %p (nr left %d, "
+				"total freed %d)", pages, pool, nr, freed);
+
+			spin_unlock_bh(&pool->sgv_pool_lock);
+			sgv_dtor_and_free(obj);
+			spin_lock_bh(&pool->sgv_pool_lock);
+		} else
+			break;
+
+		if ((nr <= 0) || (freed >= MAX_PAGES_PER_POOL)) {
+			if (freed >= MAX_PAGES_PER_POOL)
+				TRACE_MEM("%d pages purged from pool %p, "
+					"leaving", freed, pool);
+			break;
+		}
+	}
+
+	spin_unlock_bh(&pool->sgv_pool_lock);
+
+out:
+	return nr;
+}
+
+/* No locks */
+static int __sgv_shrink(int nr, int min_interval)
+{
+	struct sgv_pool *pool;
+	unsigned long cur_time = jiffies;
+	int prev_nr = nr;
+	bool circle = false;
+
+	TRACE_MEM("Trying to shrink %d pages from all sgv pools "
+		"(min_interval %d)", nr, min_interval);
+
+	while (nr > 0) {
+		struct list_head *next;
+
+		spin_lock_bh(&sgv_pools_lock);
+
+		pool = sgv_cur_purge_pool;
+		if (pool == NULL) {
+			if (list_empty(&sgv_active_pools_list)) {
+				TRACE_MEM("%s", "Active pools list is empty");
+				goto out_unlock;
+			}
+
+			pool = list_entry(sgv_active_pools_list.next,
+					typeof(*pool),
+					sgv_active_pools_list_entry);
+		}
+		sgv_pool_get(pool);
+
+		next = pool->sgv_active_pools_list_entry.next;
+		if (next == &sgv_active_pools_list) {
+			if (circle && (prev_nr == nr)) {
+				TRACE_MEM("Full circle done, but no progress, "
+					"leaving (nr %d)", nr);
+				goto out_unlock_put;
+			}
+			circle = true;
+			prev_nr = nr;
+
+			next = next->next;
+		}
+
+		sgv_cur_purge_pool = list_entry(next, typeof(*pool),
+			sgv_active_pools_list_entry);
+		TRACE_MEM("New cur purge pool %p", sgv_cur_purge_pool);
+
+		spin_unlock_bh(&sgv_pools_lock);
+
+		nr = sgv_shrink_pool(pool, nr, min_interval, cur_time);
+
+		sgv_pool_put(pool);
+	}
+
+out:
+	return nr;
+
+out_unlock:
+	spin_unlock_bh(&sgv_pools_lock);
+	goto out;
+
+out_unlock_put:
+	spin_unlock_bh(&sgv_pools_lock);
+	sgv_pool_put(pool);
+	goto out;
+}
+
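+/*
+ * Shrinker callback (old-style API): for nr > 0 it tries to free nr pages
+ * and returns how many of them it could not free; for nr <= 0 it only
+ * reports how many inactive cached pages in shrinkable pools could be
+ * freed above the low watermark.
+ */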
+static int sgv_shrink(int nr, gfp_t gfpm)
+{
+
+	if (nr > 0) {
+		nr = __sgv_shrink(nr, SGV_MIN_SHRINK_INTERVAL);
+		TRACE_MEM("Left %d", nr);
+	} else {
+		struct sgv_pool *pool;
+		int inactive_pages = 0;
+
+		spin_lock_bh(&sgv_pools_lock);
+		list_for_each_entry(pool, &sgv_active_pools_list,
+				sgv_active_pools_list_entry) {
+			if (pool->purge_interval > 0)
+				inactive_pages += pool->inactive_cached_pages;
+		}
+		spin_unlock_bh(&sgv_pools_lock);
+
+		nr = max((int)0, inactive_pages - sgv_lo_wmk);
+		TRACE_MEM("Can free %d (total %d)", nr,
+			atomic_read(&sgv_pages_total));
+	}
+	return nr;
+}
+
+static void sgv_purge_work_fn(struct delayed_work *work)
+{
+	unsigned long cur_time = jiffies;
+	struct sgv_pool *pool = container_of(work, struct sgv_pool,
+					sgv_purge_work);
+
+	TRACE_MEM("Purge work for pool %p", pool);
+
+	spin_lock_bh(&pool->sgv_pool_lock);
+
+	pool->purge_work_scheduled = false;
+
+	while (!list_empty(&pool->sorted_recycling_list)) {
+		struct sgv_pool_obj *obj = list_entry(
+			pool->sorted_recycling_list.next,
+			struct sgv_pool_obj, sorted_recycling_list_entry);
+
+		if (sgv_purge_from_cache(obj, pool->purge_interval, cur_time)) {
+			spin_unlock_bh(&pool->sgv_pool_lock);
+			sgv_dtor_and_free(obj);
+			spin_lock_bh(&pool->sgv_pool_lock);
+		} else {
+			/*
+			 * Let's reschedule it for the full period so we
+			 * don't get here too often. In the worst case we
+			 * have the shrinker to reclaim buffers quicker.
+			 */
+			TRACE_MEM("Rescheduling purge work for pool %p (delay "
+				"%d HZ/%d sec)", pool, pool->purge_interval,
+				pool->purge_interval/HZ);
+			schedule_delayed_work(&pool->sgv_purge_work,
+				pool->purge_interval);
+			pool->purge_work_scheduled = true;
+			break;
+		}
+	}
+
+	spin_unlock_bh(&pool->sgv_pool_lock);
+
+	TRACE_MEM("Leaving purge work for pool %p", pool);
+	return;
+}
+
+static int sgv_check_full_clustering(struct scatterlist *sg, int cur, int hint)
+{
+	int res = -1;
+	int i = hint;
+	unsigned long pfn_cur = page_to_pfn(sg_page(&sg[cur]));
+	int len_cur = sg[cur].length;
+	unsigned long pfn_cur_next = pfn_cur + (len_cur >> PAGE_SHIFT);
+	int full_page_cur = (len_cur & (PAGE_SIZE - 1)) == 0;
+	unsigned long pfn, pfn_next;
+	bool full_page;
+
+#if 0
+	TRACE_MEM("pfn_cur %ld, pfn_cur_next %ld, len_cur %d, full_page_cur %d",
+		pfn_cur, pfn_cur_next, len_cur, full_page_cur);
+#endif
+
+	/* check the hint first */
+	if (i >= 0) {
+		pfn = page_to_pfn(sg_page(&sg[i]));
+		pfn_next = pfn + (sg[i].length >> PAGE_SHIFT);
+		full_page = (sg[i].length & (PAGE_SIZE - 1)) == 0;
+
+		if ((pfn == pfn_cur_next) && full_page_cur)
+			goto out_head;
+
+		if ((pfn_next == pfn_cur) && full_page)
+			goto out_tail;
+	}
+
+	/* ToDo: implement more intelligent search */
+	for (i = cur - 1; i >= 0; i--) {
+		pfn = page_to_pfn(sg_page(&sg[i]));
+		pfn_next = pfn + (sg[i].length >> PAGE_SHIFT);
+		full_page = (sg[i].length & (PAGE_SIZE - 1)) == 0;
+
+		if ((pfn == pfn_cur_next) && full_page_cur)
+			goto out_head;
+
+		if ((pfn_next == pfn_cur) && full_page)
+			goto out_tail;
+	}
+
+out:
+	return res;
+
+out_tail:
+	TRACE_MEM("SG segment %d will be tail merged with segment %d", cur, i);
+	sg[i].length += len_cur;
+	sg_clear(&sg[cur]);
+	res = i;
+	goto out;
+
+out_head:
+	TRACE_MEM("SG segment %d will be head merged with segment %d", cur, i);
+	sg_assign_page(&sg[i], sg_page(&sg[cur]));
+	sg[i].length += len_cur;
+	sg_clear(&sg[cur]);
+	res = i;
+	goto out;
+}
+
+static int sgv_check_tail_clustering(struct scatterlist *sg, int cur, int hint)
+{
+	int res = -1;
+	unsigned long pfn_cur = page_to_pfn(sg_page(&sg[cur]));
+	int len_cur = sg[cur].length;
+	int prev;
+	unsigned long pfn_prev;
+	bool full_page;
+
+#ifdef SCST_HIGHMEM
+	if (sg_page(&sg[cur]) >= highmem_start_page) {
+		TRACE_MEM("%s", "HIGHMEM page allocated, no clustering");
+		goto out;
+	}
+#endif
+
+#if 0
+	TRACE_MEM("pfn_cur %ld, pfn_cur_next %ld, len_cur %d, full_page_cur %d",
+		pfn_cur, pfn_cur_next, len_cur, full_page_cur);
+#endif
+
+	if (cur == 0)
+		goto out;
+
+	prev = cur - 1;
+	pfn_prev = page_to_pfn(sg_page(&sg[prev])) +
+			(sg[prev].length >> PAGE_SHIFT);
+	full_page = (sg[prev].length & (PAGE_SIZE - 1)) == 0;
+
+	if ((pfn_prev == pfn_cur) && full_page) {
+		TRACE_MEM("SG segment %d will be tail merged with segment %d",
+			cur, prev);
+		sg[prev].length += len_cur;
+		sg_clear(&sg[cur]);
+		res = prev;
+	}
+
+out:
+	return res;
+}
+
+static void sgv_free_sys_sg_entries(struct scatterlist *sg, int sg_count,
+	void *priv)
+{
+	int i;
+
+	TRACE_MEM("sg=%p, sg_count=%d", sg, sg_count);
+
+	for (i = 0; i < sg_count; i++) {
+		struct page *p = sg_page(&sg[i]);
+		int len = sg[i].length;
+		int pages =
+			(len >> PAGE_SHIFT) + ((len & ~PAGE_MASK) != 0);
+
+		TRACE_MEM("page %lx, len %d, pages %d",
+			(unsigned long)p, len, pages);
+
+		while (pages > 0) {
+			int order = 0;
+
+/*
+ * __free_pages() doesn't like freeing pages with an order different from
+ * the one they were allocated with, so this small optimization is disabled.
+ */
+#if 0
+			if (len > 0) {
+				while (((1 << order) << PAGE_SHIFT) < len)
+					order++;
+				len = 0;
+			}
+#endif
+			TRACE_MEM("free_pages(): order %d, page %lx",
+				order, (unsigned long)p);
+
+			__free_pages(p, order);
+
+			pages -= 1 << order;
+			p += 1 << order;
+		}
+	}
+}
+
+static struct page *sgv_alloc_sys_pages(struct scatterlist *sg,
+	gfp_t gfp_mask, void *priv)
+{
+	struct page *page = alloc_pages(gfp_mask, 0);
+
+	sg_set_page(sg, page, PAGE_SIZE, 0);
+	TRACE_MEM("page=%p, sg=%p, priv=%p", page, sg, priv);
+	if (page == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of "
+			"sg page failed");
+	}
+	return page;
+}
+
+static int sgv_alloc_sg_entries(struct scatterlist *sg, int pages,
+	gfp_t gfp_mask, enum sgv_clustering_types clustering_type,
+	struct trans_tbl_ent *trans_tbl,
+	const struct sgv_pool_alloc_fns *alloc_fns, void *priv)
+{
+	int sg_count = 0;
+	int pg, i, j;
+	int merged = -1;
+
+	TRACE_MEM("pages=%d, clustering_type=%d", pages, clustering_type);
+
+#if 0
+	gfp_mask |= __GFP_COLD;
+#endif
+#ifdef CONFIG_SCST_STRICT_SECURITY
+	gfp_mask |= __GFP_ZERO;
+#endif
+
+	for (pg = 0; pg < pages; pg++) {
+		void *rc;
+#ifdef CONFIG_SCST_DEBUG_OOM
+		if (((gfp_mask & __GFP_NOFAIL) != __GFP_NOFAIL) &&
+		    ((scst_random() % 10000) == 55))
+			rc = NULL;
+		else
+#endif
+			rc = alloc_fns->alloc_pages_fn(&sg[sg_count], gfp_mask,
+				priv);
+		if (rc == NULL)
+			goto out_no_mem;
+
+		/*
+		 * This code allows compiler to see full body of the clustering
+		 * functions and gives it a chance to generate better code.
+		 * At least, the resulting code is smaller, comparing to
+		 * calling them using a function pointer.
+		 */
+		if (clustering_type == sgv_full_clustering)
+			merged = sgv_check_full_clustering(sg, sg_count, merged);
+		else if (clustering_type == sgv_tail_clustering)
+			merged = sgv_check_tail_clustering(sg, sg_count, merged);
+		else
+			merged = -1;
+
+		if (merged == -1)
+			sg_count++;
+
+		TRACE_MEM("pg=%d, merged=%d, sg_count=%d", pg, merged,
+			sg_count);
+	}
+
+	if ((clustering_type != sgv_no_clustering) && (trans_tbl != NULL)) {
+		pg = 0;
+		for (i = 0; i < pages; i++) {
+			int n = (sg[i].length >> PAGE_SHIFT) +
+				((sg[i].length & ~PAGE_MASK) != 0);
+			trans_tbl[i].pg_count = pg;
+			for (j = 0; j < n; j++)
+				trans_tbl[pg++].sg_num = i+1;
+			TRACE_MEM("i=%d, n=%d, pg_count=%d", i, n,
+				trans_tbl[i].pg_count);
+		}
+	}
+
+out:
+	TRACE_MEM("sg_count=%d", sg_count);
+	return sg_count;
+
+out_no_mem:
+	alloc_fns->free_pages_fn(sg, sg_count, priv);
+	sg_count = 0;
+	goto out;
+}
+
+static int sgv_alloc_arrays(struct sgv_pool_obj *obj,
+	int pages_to_alloc, gfp_t gfp_mask)
+{
+	int sz, tsz = 0;
+	int res = 0;
+
+	sz = pages_to_alloc * sizeof(obj->sg_entries[0]);
+
+	obj->sg_entries = kmalloc(sz, gfp_mask);
+	if (unlikely(obj->sg_entries == NULL)) {
+		TRACE(TRACE_OUT_OF_MEM, "Allocation of sgv_pool_obj "
+			"SG vector failed (size %d)", sz);
+		res = -ENOMEM;
+		goto out;
+	}
+
+	sg_init_table(obj->sg_entries, pages_to_alloc);
+
+	if (sgv_pool_clustered(obj->owner_pool)) {
+		if (pages_to_alloc <= sgv_max_trans_pages) {
+			obj->trans_tbl =
+				(struct trans_tbl_ent *)obj->sg_entries_data;
+			/*
+			 * No need to clear trans_tbl, if needed, it will be
+			 * fully rewritten in sgv_alloc_sg_entries()
+			 */
+		} else {
+			tsz = pages_to_alloc * sizeof(obj->trans_tbl[0]);
+			obj->trans_tbl = kzalloc(tsz, gfp_mask);
+			if (unlikely(obj->trans_tbl == NULL)) {
+				TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+					"trans_tbl failed (size %d)", tsz);
+				res = -ENOMEM;
+				goto out_free;
+			}
+		}
+	}
+
+	TRACE_MEM("pages_to_alloc %d, sz %d, tsz %d, obj %p, sg_entries %p, "
+		"trans_tbl %p", pages_to_alloc, sz, tsz, obj, obj->sg_entries,
+		obj->trans_tbl);
+
+out:
+	return res;
+
+out_free:
+	kfree(obj->sg_entries);
+	obj->sg_entries = NULL;
+	goto out;
+}
+
+static struct sgv_pool_obj *sgv_get_obj(struct sgv_pool *pool, int cache_num,
+	int pages, gfp_t gfp_mask, bool get_new)
+{
+	struct sgv_pool_obj *obj;
+
+	spin_lock_bh(&pool->sgv_pool_lock);
+
+	if (unlikely(get_new)) {
+		/* Used only for buffers preallocation */
+		goto get_new;
+	}
+
+	if (likely(!list_empty(&pool->recycling_lists[cache_num]))) {
+		obj = list_entry(pool->recycling_lists[cache_num].next,
+			 struct sgv_pool_obj, recycling_list_entry);
+
+		list_del(&obj->sorted_recycling_list_entry);
+		list_del(&obj->recycling_list_entry);
+
+		pool->inactive_cached_pages -= pages;
+
+		spin_unlock_bh(&pool->sgv_pool_lock);
+		goto out;
+	}
+
+get_new:
+	if (pool->cached_entries == 0) {
+		TRACE_MEM("Adding pool %p to the active list", pool);
+		spin_lock_bh(&sgv_pools_lock);
+		list_add_tail(&pool->sgv_active_pools_list_entry,
+			&sgv_active_pools_list);
+		spin_unlock_bh(&sgv_pools_lock);
+	}
+
+	pool->cached_entries++;
+	pool->cached_pages += pages;
+
+	spin_unlock_bh(&pool->sgv_pool_lock);
+
+	TRACE_MEM("New cached entries %d (pool %p)", pool->cached_entries,
+		pool);
+
+	obj = kmem_cache_alloc(pool->caches[cache_num],
+		gfp_mask & ~(__GFP_HIGHMEM|GFP_DMA));
+	if (likely(obj)) {
+		memset(obj, 0, sizeof(*obj));
+		obj->cache_num = cache_num;
+		obj->pages = pages;
+		obj->owner_pool = pool;
+	} else {
+		spin_lock_bh(&pool->sgv_pool_lock);
+		sgv_dec_cached_entries(pool, pages);
+		spin_unlock_bh(&pool->sgv_pool_lock);
+	}
+
+out:
+	return obj;
+}
+
+static void sgv_put_obj(struct sgv_pool_obj *obj)
+{
+	struct sgv_pool *pool = obj->owner_pool;
+	struct list_head *entry;
+	struct list_head *list = &pool->recycling_lists[obj->cache_num];
+	int pages = obj->pages;
+
+	spin_lock_bh(&pool->sgv_pool_lock);
+
+	TRACE_MEM("sgv %p, cache num %d, pages %d, sg_count %d", obj,
+		obj->cache_num, pages, obj->sg_count);
+
+	if (sgv_pool_clustered(pool)) {
+		/* Make objects with less entries more preferred */
+		__list_for_each(entry, list) {
+			struct sgv_pool_obj *tmp = list_entry(entry,
+				struct sgv_pool_obj, recycling_list_entry);
+
+			TRACE_MEM("tmp %p, cache num %d, pages %d, sg_count %d",
+				tmp, tmp->cache_num, tmp->pages, tmp->sg_count);
+
+			if (obj->sg_count <= tmp->sg_count)
+				break;
+		}
+		entry = entry->prev;
+	} else
+		entry = list;
+
+	TRACE_MEM("Adding in %p (list %p)", entry, list);
+	list_add(&obj->recycling_list_entry, entry);
+
+	list_add_tail(&obj->sorted_recycling_list_entry,
+		&pool->sorted_recycling_list);
+
+	obj->time_stamp = jiffies;
+
+	pool->inactive_cached_pages += pages;
+
+	if (!pool->purge_work_scheduled) {
+		TRACE_MEM("Scheduling purge work for pool %p", pool);
+		pool->purge_work_scheduled = true;
+		schedule_delayed_work(&pool->sgv_purge_work,
+			pool->purge_interval);
+	}
+
+	spin_unlock_bh(&pool->sgv_pool_lock);
+	return;
+}
+
+/* No locks */
+static int sgv_hiwmk_check(int pages_to_alloc)
+{
+	int res = 0;
+	int pages = pages_to_alloc;
+
+	pages += atomic_read(&sgv_pages_total);
+
+	if (unlikely(pages > sgv_hi_wmk)) {
+		pages -= sgv_hi_wmk;
+		atomic_inc(&sgv_releases_on_hiwmk);
+
+		pages = __sgv_shrink(pages, 0);
+		if (pages > 0) {
+			TRACE(TRACE_OUT_OF_MEM, "Requested amount of "
+			    "memory (%d pages) for commands being "
+			    "executed, together with the already "
+			    "allocated memory, exceeds the allowed "
+			    "maximum %d. Perhaps you should increase "
+			    "scst_max_cmd_mem?", pages_to_alloc,
+			    sgv_hi_wmk);
+			atomic_inc(&sgv_releases_on_hiwmk_failed);
+			res = -ENOMEM;
+			goto out;
+		}
+	}
+
+	atomic_add(pages_to_alloc, &sgv_pages_total);
+
+out:
+	TRACE_MEM("pages_to_alloc %d, new total %d", pages_to_alloc,
+		atomic_read(&sgv_pages_total));
+
+	return res;
+}
+
+/* No locks */
+static void sgv_hiwmk_uncheck(int pages)
+{
+	atomic_sub(pages, &sgv_pages_total);
+	TRACE_MEM("pages %d, new total %d", pages,
+		atomic_read(&sgv_pages_total));
+	return;
+}
+
+/* No locks */
+static bool sgv_check_allowed_mem(struct scst_mem_lim *mem_lim, int pages)
+{
+	int alloced;
+	bool res = true;
+
+	alloced = atomic_add_return(pages, &mem_lim->alloced_pages);
+	if (unlikely(alloced > mem_lim->max_allowed_pages)) {
+		TRACE(TRACE_OUT_OF_MEM, "Requested amount of memory "
+			"(%d pages) for commands being executed on a device, "
+			"together with the already allocated memory, exceeds "
+			"the allowed maximum %d. Perhaps you should increase "
+			"scst_max_dev_cmd_mem?", pages,
+			mem_lim->max_allowed_pages);
+		atomic_sub(pages, &mem_lim->alloced_pages);
+		res = false;
+	}
+
+	TRACE_MEM("mem_lim %p, pages %d, res %d, new alloced %d", mem_lim,
+		pages, res, atomic_read(&mem_lim->alloced_pages));
+
+	return res;
+}
+
+/* No locks */
+static void sgv_uncheck_allowed_mem(struct scst_mem_lim *mem_lim, int pages)
+{
+	atomic_sub(pages, &mem_lim->alloced_pages);
+
+	TRACE_MEM("mem_lim %p, pages %d, new alloced %d", mem_lim,
+		pages, atomic_read(&mem_lim->alloced_pages));
+	return;
+}
+
+/**
+ * sgv_pool_alloc - allocate an SG vector from the SGV pool
+ * @pool:	the cache to alloc from
+ * @size:	size of the resulting SG vector in bytes
+ * @gfp_mask:	the allocation mask
+ * @flags:	the allocation flags
+ * @count:	the resulting count of SG entries in the resulting SG vector
+ * @sgv:	the resulting SGV object
+ * @mem_lim:	memory limits
+ * @priv:	pointer to private data for this allocation
+ *
+ * Description:
+ *    Allocates an SG vector from the SGV pool and returns a pointer to it,
+ *    or NULL in case of an error. See the SGV pool documentation for more
+ *    details.
+ */
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+	gfp_t gfp_mask, int flags, int *count,
+	struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv)
+{
+	struct sgv_pool_obj *obj;
+	int cache_num, pages, cnt;
+	struct scatterlist *res = NULL;
+	int pages_to_alloc;
+	int no_cached = flags & SGV_POOL_ALLOC_NO_CACHED;
+	bool allowed_mem_checked = false, hiwmk_checked = false;
+
+	if (unlikely(size == 0))
+		goto out;
+
+	EXTRACHECKS_BUG_ON((gfp_mask & __GFP_NOFAIL) == __GFP_NOFAIL);
+
+	pages = ((size + PAGE_SIZE - 1) >> PAGE_SHIFT);
+	if (pool->single_alloc_pages == 0) {
+		int pages_order = get_order(size);
+		cache_num = pages_order;
+		pages_to_alloc = (1 << pages_order);
+	} else {
+		cache_num = 0;
+		pages_to_alloc = max(pool->single_alloc_pages, pages);
+	}
+
+	TRACE_MEM("size=%d, pages=%d, pages_to_alloc=%d, cache num=%d, "
+		"flags=%x, no_cached=%d, *sgv=%p", size, pages,
+		pages_to_alloc, cache_num, flags, no_cached, *sgv);
+
+	if (*sgv != NULL) {
+		obj = *sgv;
+
+		TRACE_MEM("Supplied obj %p, cache num %d", obj, obj->cache_num);
+
+		EXTRACHECKS_BUG_ON(obj->sg_count != 0);
+
+		if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+			goto out_fail_free_sg_entries;
+		allowed_mem_checked = true;
+
+		if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+			goto out_fail_free_sg_entries;
+		hiwmk_checked = true;
+	} else if ((pages_to_alloc <= pool->max_cached_pages) && !no_cached) {
+		if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+			goto out_fail;
+		allowed_mem_checked = true;
+
+		obj = sgv_get_obj(pool, cache_num, pages_to_alloc, gfp_mask,
+			flags & SGV_POOL_ALLOC_GET_NEW);
+		if (unlikely(obj == NULL)) {
+			TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+				"sgv_pool_obj failed (size %d)", size);
+			goto out_fail;
+		}
+
+		if (obj->sg_count != 0) {
+			TRACE_MEM("Cached obj %p", obj);
+			atomic_inc(&pool->cache_acc[cache_num].hit_alloc);
+			goto success;
+		}
+
+		if (flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS) {
+			if (!(flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL))
+				goto out_fail_free;
+		}
+
+		TRACE_MEM("Brand new obj %p", obj);
+
+		if (pages_to_alloc <= sgv_max_local_pages) {
+			obj->sg_entries = obj->sg_entries_data;
+			sg_init_table(obj->sg_entries, pages_to_alloc);
+			TRACE_MEM("sg_entries %p", obj->sg_entries);
+			if (sgv_pool_clustered(pool)) {
+				obj->trans_tbl = (struct trans_tbl_ent *)
+					(obj->sg_entries + pages_to_alloc);
+				TRACE_MEM("trans_tbl %p", obj->trans_tbl);
+				/*
+				 * No need to clear trans_tbl, if needed, it
+				 * will be fully rewritten in
+				 * sgv_alloc_sg_entries().
+				 */
+			}
+		} else {
+			if (unlikely(sgv_alloc_arrays(obj, pages_to_alloc,
+					gfp_mask) != 0))
+				goto out_fail_free;
+		}
+
+		if ((flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS) &&
+		    (flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL))
+			goto out_return;
+
+		obj->allocator_priv = priv;
+
+		if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+			goto out_fail_free_sg_entries;
+		hiwmk_checked = true;
+	} else {
+		int sz;
+
+		pages_to_alloc = pages;
+
+		if (unlikely(!sgv_check_allowed_mem(mem_lim, pages_to_alloc)))
+			goto out_fail;
+		allowed_mem_checked = true;
+
+		if (flags & SGV_POOL_NO_ALLOC_ON_CACHE_MISS)
+			goto out_return2;
+
+		sz = sizeof(*obj) + pages * sizeof(obj->sg_entries[0]);
+
+		obj = kmalloc(sz, gfp_mask);
+		if (unlikely(obj == NULL)) {
+			TRACE(TRACE_OUT_OF_MEM, "Allocation of "
+				"sgv_pool_obj failed (size %d)", size);
+			goto out_fail;
+		}
+		memset(obj, 0, sizeof(*obj));
+
+		obj->owner_pool = pool;
+		cache_num = -1;
+		obj->cache_num = cache_num;
+		obj->pages = pages_to_alloc;
+		obj->allocator_priv = priv;
+
+		obj->sg_entries = obj->sg_entries_data;
+		sg_init_table(obj->sg_entries, pages);
+
+		if (unlikely(sgv_hiwmk_check(pages_to_alloc) != 0))
+			goto out_fail_free_sg_entries;
+		hiwmk_checked = true;
+
+		TRACE_MEM("Big or no_cached obj %p (size %d)", obj, sz);
+	}
+
+	obj->sg_count = sgv_alloc_sg_entries(obj->sg_entries,
+		pages_to_alloc, gfp_mask, pool->clustering_type,
+		obj->trans_tbl, &pool->alloc_fns, priv);
+	if (unlikely(obj->sg_count <= 0)) {
+		obj->sg_count = 0;
+		if ((flags & SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL) &&
+		    (cache_num >= 0))
+			goto out_return1;
+		else
+			goto out_fail_free_sg_entries;
+	}
+
+	if (cache_num >= 0) {
+		atomic_add(pages_to_alloc - obj->sg_count,
+			&pool->cache_acc[cache_num].merged);
+	} else {
+		if (no_cached) {
+			atomic_add(pages_to_alloc,
+				&pool->other_pages);
+			atomic_add(pages_to_alloc - obj->sg_count,
+				&pool->other_merged);
+		} else {
+			atomic_add(pages_to_alloc,
+				&pool->big_pages);
+			atomic_add(pages_to_alloc - obj->sg_count,
+				&pool->big_merged);
+		}
+	}
+
+success:
+	if (cache_num >= 0) {
+		int sg;
+		atomic_inc(&pool->cache_acc[cache_num].total_alloc);
+		if (sgv_pool_clustered(pool))
+			cnt = obj->trans_tbl[pages-1].sg_num;
+		else
+			cnt = pages;
+		sg = cnt-1;
+		obj->orig_sg = sg;
+		obj->orig_length = obj->sg_entries[sg].length;
+		if (sgv_pool_clustered(pool)) {
+			obj->sg_entries[sg].length =
+				(pages - obj->trans_tbl[sg].pg_count) << PAGE_SHIFT;
+		}
+	} else {
+		cnt = obj->sg_count;
+		if (no_cached)
+			atomic_inc(&pool->other_alloc);
+		else
+			atomic_inc(&pool->big_alloc);
+	}
+
+	*count = cnt;
+	res = obj->sg_entries;
+	*sgv = obj;
+
+	if (size & ~PAGE_MASK)
+		obj->sg_entries[cnt-1].length -=
+			PAGE_SIZE - (size & ~PAGE_MASK);
+
+	TRACE_MEM("obj=%p, sg_entries %p (size=%d, pages=%d, sg_count=%d, "
+		"count=%d, last_len=%d)", obj, obj->sg_entries, size, pages,
+		obj->sg_count, *count, obj->sg_entries[obj->orig_sg].length);
+
+out:
+	return res;
+
+out_return:
+	obj->allocator_priv = priv;
+	obj->owner_pool = pool;
+
+out_return1:
+	*sgv = obj;
+	TRACE_MEM("Returning failed obj %p (count %d)", obj, *count);
+
+out_return2:
+	*count = pages_to_alloc;
+	res = NULL;
+	goto out_uncheck;
+
+out_fail_free_sg_entries:
+	if (obj->sg_entries != obj->sg_entries_data) {
+		if (obj->trans_tbl !=
+			(struct trans_tbl_ent *)obj->sg_entries_data) {
+			/* kfree() handles NULL parameter */
+			kfree(obj->trans_tbl);
+			obj->trans_tbl = NULL;
+		}
+		kfree(obj->sg_entries);
+		obj->sg_entries = NULL;
+	}
+
+out_fail_free:
+	if (cache_num >= 0) {
+		spin_lock_bh(&pool->sgv_pool_lock);
+		sgv_dec_cached_entries(pool, pages_to_alloc);
+		spin_unlock_bh(&pool->sgv_pool_lock);
+
+		kmem_cache_free(pool->caches[obj->cache_num], obj);
+	} else
+		kfree(obj);
+
+out_fail:
+	res = NULL;
+	*count = 0;
+	*sgv = NULL;
+	TRACE_MEM("%s", "Allocation failed");
+
+out_uncheck:
+	if (hiwmk_checked)
+		sgv_hiwmk_uncheck(pages_to_alloc);
+	if (allowed_mem_checked)
+		sgv_uncheck_allowed_mem(mem_lim, pages_to_alloc);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_alloc);
+
+/**
+ * sgv_get_priv - return the private allocation data
+ *
+ * Returns the allocation private data for this SGV
+ * cache object. The private data is supposed to be set by sgv_pool_alloc().
+ */
+void *sgv_get_priv(struct sgv_pool_obj *obj)
+{
+	return obj->allocator_priv;
+}
+EXPORT_SYMBOL_GPL(sgv_get_priv);
+
+/**
+ * sgv_pool_free - free previously allocated SG vector
+ * @sgv:	the SGV object to free
+ * @mem_lim:	memory limits
+ *
+ * Description:
+ *    Frees a previously allocated SG vector and updates the memory limits
+ */
+void sgv_pool_free(struct sgv_pool_obj *obj, struct scst_mem_lim *mem_lim)
+{
+	int pages = (obj->sg_count != 0) ? obj->pages : 0;
+
+	TRACE_MEM("Freeing obj %p, cache num %d, pages %d, sg_entries %p, "
+		"sg_count %d, allocator_priv %p", obj, obj->cache_num, pages,
+		obj->sg_entries, obj->sg_count, obj->allocator_priv);
+	if (obj->cache_num >= 0) {
+		obj->sg_entries[obj->orig_sg].length = obj->orig_length;
+		sgv_put_obj(obj);
+	} else {
+		obj->owner_pool->alloc_fns.free_pages_fn(obj->sg_entries,
+			obj->sg_count, obj->allocator_priv);
+		kfree(obj);
+		sgv_hiwmk_uncheck(pages);
+	}
+
+	sgv_uncheck_allowed_mem(mem_lim, pages);
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_free);
+
+/**
+ * scst_alloc() - allocates an SG vector
+ *
+ * Allocates and returns a pointer to an SG vector with data size "size".
+ * The count of entries in the vector is returned in *count.
+ * Returns NULL on failure.
+ */
+struct scatterlist *scst_alloc(int size, gfp_t gfp_mask, int *count)
+{
+	struct scatterlist *res;
+	int pages = (size >> PAGE_SHIFT) + ((size & ~PAGE_MASK) != 0);
+	struct sgv_pool_alloc_fns sys_alloc_fns = {
+		sgv_alloc_sys_pages, sgv_free_sys_sg_entries };
+	int no_fail = ((gfp_mask & __GFP_NOFAIL) == __GFP_NOFAIL);
+
+	atomic_inc(&sgv_other_total_alloc);
+
+	if (unlikely(sgv_hiwmk_check(pages) != 0)) {
+		if (!no_fail) {
+			res = NULL;
+			goto out;
+		} else {
+			/*
+			 * Update active_pages_total since alloc can't fail.
+			 * If it wasn't updated then the counter would cross 0
+			 * on free again.
+			 */
+			sgv_hiwmk_uncheck(-pages);
+		}
+	}
+
+	res = kmalloc(pages*sizeof(*res), gfp_mask);
+	if (res == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "Unable to allocate sg for %d pages",
+			pages);
+		goto out_uncheck;
+	}
+
+	sg_init_table(res, pages);
+
+	/*
+	 * If we allowed clustering here, scst_free() would have trouble
+	 * figuring out how many pages are in the SG vector. So, never
+	 * use clustering here.
+	 */
+	*count = sgv_alloc_sg_entries(res, pages, gfp_mask, sgv_no_clustering,
+			NULL, &sys_alloc_fns, NULL);
+	if (*count <= 0)
+		goto out_free;
+
+out:
+	TRACE_MEM("Alloced sg %p (count %d) \"no fail\" %d", res, *count, no_fail);
+	return res;
+
+out_free:
+	kfree(res);
+	res = NULL;
+
+out_uncheck:
+	if (!no_fail)
+		sgv_hiwmk_uncheck(pages);
+	goto out;
+}
+EXPORT_SYMBOL_GPL(scst_alloc);
+
+/**
+ * scst_free() - frees SG vector
+ *
+ * Frees SG vector returned by scst_alloc().
+ */
+void scst_free(struct scatterlist *sg, int count)
+{
+	TRACE_MEM("Freeing sg=%p", sg);
+
+	sgv_hiwmk_uncheck(count);
+
+	sgv_free_sys_sg_entries(sg, count, NULL);
+	kfree(sg);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_free);
+
+/* Must be called under sgv_pools_mutex */
+static void sgv_pool_init_cache(struct sgv_pool *pool, int cache_num)
+{
+	int size;
+	int pages;
+	struct sgv_pool_obj *obj;
+
+	atomic_set(&pool->cache_acc[cache_num].total_alloc, 0);
+	atomic_set(&pool->cache_acc[cache_num].hit_alloc, 0);
+	atomic_set(&pool->cache_acc[cache_num].merged, 0);
+
+	if (pool->single_alloc_pages == 0)
+		pages = 1 << cache_num;
+	else
+		pages = pool->single_alloc_pages;
+
+	if (pages <= sgv_max_local_pages) {
+		size = sizeof(*obj) + pages *
+			(sizeof(obj->sg_entries[0]) +
+			 ((pool->clustering_type != sgv_no_clustering) ?
+				sizeof(obj->trans_tbl[0]) : 0));
+	} else if (pages <= sgv_max_trans_pages) {
+		/*
+		 * sg_entries is allocated outside object,
+		 * but trans_tbl is still embedded.
+		 */
+		size = sizeof(*obj) + pages *
+			(((pool->clustering_type != sgv_no_clustering) ?
+				sizeof(obj->trans_tbl[0]) : 0));
+	} else {
+		size = sizeof(*obj);
+		/* both sg_entries and trans_tbl are kmalloc()'ed */
+	}
+
+	TRACE_MEM("pages=%d, size=%d", pages, size);
+
+	scnprintf(pool->cache_names[cache_num],
+		sizeof(pool->cache_names[cache_num]),
+		"%s-%uK", pool->name, (pages << PAGE_SHIFT) >> 10);
+	pool->caches[cache_num] = kmem_cache_create(
+		pool->cache_names[cache_num], size, 0, SCST_SLAB_FLAGS, NULL
+		);
+	return;
+}
+
+/* Must be called under sgv_pools_mutex */
+static int sgv_pool_init(struct sgv_pool *pool, const char *name,
+	enum sgv_clustering_types clustering_type, int single_alloc_pages,
+	int purge_interval)
+{
+	int res = -ENOMEM;
+	int i;
+
+	if (single_alloc_pages < 0) {
+		PRINT_ERROR("Wrong single_alloc_pages value %d",
+			single_alloc_pages);
+		res = -EINVAL;
+		goto out;
+	}
+
+	memset(pool, 0, sizeof(*pool));
+
+	atomic_set(&pool->big_alloc, 0);
+	atomic_set(&pool->big_pages, 0);
+	atomic_set(&pool->big_merged, 0);
+	atomic_set(&pool->other_alloc, 0);
+	atomic_set(&pool->other_pages, 0);
+	atomic_set(&pool->other_merged, 0);
+
+	pool->clustering_type = clustering_type;
+	pool->single_alloc_pages = single_alloc_pages;
+	if (purge_interval != 0) {
+		pool->purge_interval = purge_interval;
+		if (purge_interval < 0) {
+			/* Let's pretend that it's always scheduled */
+			pool->purge_work_scheduled = 1;
+		}
+	} else
+		pool->purge_interval = SGV_DEFAULT_PURGE_INTERVAL;
+	if (single_alloc_pages == 0) {
+		pool->max_caches = SGV_POOL_ELEMENTS;
+		pool->max_cached_pages = 1 << (SGV_POOL_ELEMENTS - 1);
+	} else {
+		pool->max_caches = 1;
+		pool->max_cached_pages = single_alloc_pages;
+	}
+	pool->alloc_fns.alloc_pages_fn = sgv_alloc_sys_pages;
+	pool->alloc_fns.free_pages_fn = sgv_free_sys_sg_entries;
+
+	TRACE_MEM("name %s, sizeof(*obj)=%zd, clustering_type=%d, "
+		"single_alloc_pages=%d, max_caches=%d, max_cached_pages=%d",
+		name, sizeof(struct sgv_pool_obj), clustering_type,
+		single_alloc_pages, pool->max_caches, pool->max_cached_pages);
+
+	strlcpy(pool->name, name, sizeof(pool->name));
+
+	pool->owner_mm = current->mm;
+
+	for (i = 0; i < pool->max_caches; i++) {
+		sgv_pool_init_cache(pool, i);
+		if (pool->caches[i] == NULL) {
+			TRACE(TRACE_OUT_OF_MEM, "Allocation of sgv_pool "
+				"cache %s(%d) failed", name, i);
+			goto out_free;
+		}
+	}
+
+	atomic_set(&pool->sgv_pool_ref, 1);
+	spin_lock_init(&pool->sgv_pool_lock);
+	INIT_LIST_HEAD(&pool->sorted_recycling_list);
+	for (i = 0; i < pool->max_caches; i++)
+		INIT_LIST_HEAD(&pool->recycling_lists[i]);
+
+	INIT_DELAYED_WORK(&pool->sgv_purge_work,
+		(void (*)(struct work_struct *))sgv_purge_work_fn);
+
+	spin_lock_bh(&sgv_pools_lock);
+	list_add_tail(&pool->sgv_pools_list_entry, &sgv_pools_list);
+	spin_unlock_bh(&sgv_pools_lock);
+
+	res = 0;
+
+out:
+	return res;
+
+out_free:
+	for (i = 0; i < pool->max_caches; i++) {
+		if (pool->caches[i]) {
+			kmem_cache_destroy(pool->caches[i]);
+			pool->caches[i] = NULL;
+		} else
+			break;
+	}
+	goto out;
+}
+
+static void sgv_evaluate_local_max_pages(void)
+{
+	int space4sgv_ttbl = PAGE_SIZE - sizeof(struct sgv_pool_obj);
+
+	sgv_max_local_pages = space4sgv_ttbl /
+		  (sizeof(struct trans_tbl_ent) + sizeof(struct scatterlist));
+
+	sgv_max_trans_pages =  space4sgv_ttbl / sizeof(struct trans_tbl_ent);
+
+	TRACE_MEM("sgv_max_local_pages %d, sgv_max_trans_pages %d",
+		sgv_max_local_pages, sgv_max_trans_pages);
+	return;
+}
+
+/**
+ * sgv_pool_flush - flush the SGV pool
+ *
+ * Flushes, i.e. frees, all the cached entries in the SGV pool.
+ */
+void sgv_pool_flush(struct sgv_pool *pool)
+{
+	int i;
+
+	for (i = 0; i < pool->max_caches; i++) {
+		struct sgv_pool_obj *obj;
+
+		spin_lock_bh(&pool->sgv_pool_lock);
+
+		while (!list_empty(&pool->recycling_lists[i])) {
+			obj = list_entry(pool->recycling_lists[i].next,
+				struct sgv_pool_obj, recycling_list_entry);
+
+			__sgv_purge_from_cache(obj);
+
+			spin_unlock_bh(&pool->sgv_pool_lock);
+
+			EXTRACHECKS_BUG_ON(obj->owner_pool != pool);
+			sgv_dtor_and_free(obj);
+
+			spin_lock_bh(&pool->sgv_pool_lock);
+		}
+		spin_unlock_bh(&pool->sgv_pool_lock);
+	}
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_flush);
+
+static void sgv_pool_deinit_put(struct sgv_pool *pool)
+{
+	cancel_delayed_work_sync(&pool->sgv_purge_work);
+
+	sgv_pool_flush(pool);
+
+	mutex_lock(&sgv_pools_mutex);
+	spin_lock_bh(&sgv_pools_lock);
+	list_del(&pool->sgv_pools_list_entry);
+	spin_unlock_bh(&sgv_pools_lock);
+	mutex_unlock(&sgv_pools_mutex);
+
+	scst_sgv_sysfs_put(pool);
+
+	/* pool can be dead here */
+	return;
+}
+
+/**
+ * sgv_pool_set_allocator - set custom pages allocator
+ * @pool:	the cache
+ * @alloc_pages_fn: pages allocation function
+ * @free_pages_fn: pages freeing function
+ *
+ * Description:
+ *    Sets a custom page allocator for the SGV pool.
+ *    See the SGV pool documentation for more details.
+ */
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+	struct page *(*alloc_pages_fn)(struct scatterlist *, gfp_t, void *),
+	void (*free_pages_fn)(struct scatterlist *, int, void *))
+{
+	pool->alloc_fns.alloc_pages_fn = alloc_pages_fn;
+	pool->alloc_fns.free_pages_fn = free_pages_fn;
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_set_allocator);
+
+/**
+ * sgv_pool_create - creates and initializes an SGV pool
+ * @name:	the name of the SGV pool
+ * @clustering_type:	sets the type of pages clustering.
+ * @single_alloc_pages:	if 0, then the SGV pool will work in the set of
+ *		power-of-2 size buffers mode. If >0, then the SGV pool will
+ *		work in the fixed size buffers mode. In this case
+ *		single_alloc_pages sets the size of each buffer in pages.
+ * @shared:	sets whether the SGV pool can be shared between devices.
+ *		Cache sharing is allowed only between devices created inside
+ *		the same address space. If an SGV pool is shared, each
+ *		subsequent call of sgv_pool_create() with the same cache name
+ *		will not create a new cache, but instead return a reference
+ *		to it.
+ * @purge_interval: sets the cache purging interval, i.e. an SG buffer
+ *		will be freed if it's unused for a time t such that
+ *		purge_interval <= t < 2*purge_interval. If purge_interval
+ *		is 0, then the default interval will be used (60 seconds).
+ *		If purge_interval < 0, then automatic purging will be
+ *		disabled.
+ *
+ * Description:
+ *    Returns the resulting SGV pool or NULL in case of any error.
+ */
+struct sgv_pool *sgv_pool_create(const char *name,
+	enum sgv_clustering_types clustering_type,
+	int single_alloc_pages, bool shared, int purge_interval)
+{
+	struct sgv_pool *pool;
+	int rc;
+
+	mutex_lock(&sgv_pools_mutex);
+	list_for_each_entry(pool, &sgv_pools_list, sgv_pools_list_entry) {
+		if (strcmp(pool->name, name) == 0) {
+			if (shared) {
+				if (pool->owner_mm != current->mm) {
+					PRINT_ERROR("Attempt to share SGV "
+						"pool %s between different "
+						"MMs", name);
+					goto out_err_unlock;
+				}
+				sgv_pool_get(pool);
+				goto out_unlock;
+			} else {
+				PRINT_ERROR("SGV pool %s already exists", name);
+				goto out_err_unlock;
+			}
+		}
+	}
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (pool == NULL) {
+		TRACE(TRACE_OUT_OF_MEM, "%s", "Allocation of sgv_pool failed");
+		goto out_unlock;
+	}
+
+	rc = sgv_pool_init(pool, name, clustering_type, single_alloc_pages,
+				purge_interval);
+	if (rc != 0)
+		goto out_free_unlock;
+
+	rc = scst_create_sgv_sysfs(pool);
+	if (rc != 0)
+		goto out_err_unlock_put;
+
+out_unlock:
+	mutex_unlock(&sgv_pools_mutex);
+	return pool;
+
+out_free_unlock:
+	kfree(pool);
+
+out_err_unlock:
+	pool = NULL;
+	goto out_unlock;
+
+out_err_unlock_put:
+	mutex_unlock(&sgv_pools_mutex);
+	sgv_pool_deinit_put(pool);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_create);
+
+void sgv_pool_destroy(struct sgv_pool *pool)
+{
+	int i;
+
+	for (i = 0; i < pool->max_caches; i++) {
+		if (pool->caches[i])
+			kmem_cache_destroy(pool->caches[i]);
+		pool->caches[i] = NULL;
+	}
+
+	kfree(pool);
+	return;
+}
+
+/**
+ * sgv_pool_get - increase ref counter for the corresponding SGV pool
+ *
+ * Increases ref counter for the corresponding SGV pool
+ */
+void sgv_pool_get(struct sgv_pool *pool)
+{
+	atomic_inc(&pool->sgv_pool_ref);
+	TRACE_MEM("Incrementing sgv pool %p ref (new value %d)",
+		pool, atomic_read(&pool->sgv_pool_ref));
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_get);
+
+/**
+ * sgv_pool_put - decrease ref counter for the corresponding SGV pool
+ *
+ * Decreases ref counter for the corresponding SGV pool. If the ref
+ * counter reaches 0, the cache will be destroyed.
+ */
+void sgv_pool_put(struct sgv_pool *pool)
+{
+	TRACE_MEM("Decrementing sgv pool %p ref (new value %d)",
+		pool, atomic_read(&pool->sgv_pool_ref)-1);
+	if (atomic_dec_and_test(&pool->sgv_pool_ref))
+		sgv_pool_deinit_put(pool);
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_put);
+
+/**
+ * sgv_pool_del - deletes the corresponding SGV pool
+ * @pool:	the cache to delete.
+ *
+ * Description:
+ *    If the cache is shared, it will decrease its reference counter.
+ *    If the reference counter reaches 0, the cache will be destroyed.
+ */
+void sgv_pool_del(struct sgv_pool *pool)
+{
+	sgv_pool_put(pool);
+	return;
+}
+EXPORT_SYMBOL_GPL(sgv_pool_del);
+
+/* Both parameters in pages */
+int scst_sgv_pools_init(unsigned long mem_hwmark, unsigned long mem_lwmark)
+{
+	int res = 0;
+
+	sgv_hi_wmk = mem_hwmark;
+	sgv_lo_wmk = mem_lwmark;
+
+	sgv_evaluate_local_max_pages();
+
+	sgv_norm_pool = sgv_pool_create("sgv", sgv_no_clustering, 0, false, 0);
+	if (sgv_norm_pool == NULL)
+		goto out_err;
+
+	sgv_norm_clust_pool = sgv_pool_create("sgv-clust",
+		sgv_full_clustering, 0, false, 0);
+	if (sgv_norm_clust_pool == NULL)
+		goto out_free_norm;
+
+	sgv_dma_pool = sgv_pool_create("sgv-dma", sgv_no_clustering, 0,
+				false, 0);
+	if (sgv_dma_pool == NULL)
+		goto out_free_clust;
+
+	sgv_shrinker.shrink = sgv_shrink;
+	sgv_shrinker.seeks = DEFAULT_SEEKS;
+	register_shrinker(&sgv_shrinker);
+
+out:
+	return res;
+
+out_free_clust:
+	sgv_pool_deinit_put(sgv_norm_clust_pool);
+
+out_free_norm:
+	sgv_pool_deinit_put(sgv_norm_pool);
+
+out_err:
+	res = -ENOMEM;
+	goto out;
+}
+
+void scst_sgv_pools_deinit(void)
+{
+	unregister_shrinker(&sgv_shrinker);
+
+	sgv_pool_deinit_put(sgv_dma_pool);
+	sgv_pool_deinit_put(sgv_norm_pool);
+	sgv_pool_deinit_put(sgv_norm_clust_pool);
+
+	flush_scheduled_work();
+	return;
+}
+
+ssize_t sgv_sysfs_stat_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct sgv_pool *pool;
+	int i, total = 0, hit = 0, merged = 0, allocated = 0;
+	int oa, om, res;
+
+	pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+	for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+		int t;
+
+		hit += atomic_read(&pool->cache_acc[i].hit_alloc);
+		total += atomic_read(&pool->cache_acc[i].total_alloc);
+
+		t = atomic_read(&pool->cache_acc[i].total_alloc) -
+			atomic_read(&pool->cache_acc[i].hit_alloc);
+		allocated += t * (1 << i);
+		merged += atomic_read(&pool->cache_acc[i].merged);
+	}
+
+	res = sprintf(buf, "%-30s %-11s %-11s %-11s %-11s", "Name", "Hit", "Total",
+		"% merged", "Cached (P/I/O)");
+
+	res += sprintf(&buf[res], "\n%-30s %-11d %-11d %-11d %d/%d/%d\n",
+		pool->name, hit, total,
+		(allocated != 0) ? merged*100/allocated : 0,
+		pool->cached_pages, pool->inactive_cached_pages,
+		pool->cached_entries);
+
+	for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+		int t = atomic_read(&pool->cache_acc[i].total_alloc) -
+			atomic_read(&pool->cache_acc[i].hit_alloc);
+		allocated = t * (1 << i);
+		merged = atomic_read(&pool->cache_acc[i].merged);
+
+		res += sprintf(&buf[res], "  %-28s %-11d %-11d %d\n",
+			pool->cache_names[i],
+			atomic_read(&pool->cache_acc[i].hit_alloc),
+			atomic_read(&pool->cache_acc[i].total_alloc),
+			(allocated != 0) ? merged*100/allocated : 0);
+	}
+
+	allocated = atomic_read(&pool->big_pages);
+	merged = atomic_read(&pool->big_merged);
+	oa = atomic_read(&pool->other_pages);
+	om = atomic_read(&pool->other_merged);
+
+	res += sprintf(&buf[res], "  %-40s %d/%-9d %d/%d\n", "big/other",
+		atomic_read(&pool->big_alloc), atomic_read(&pool->other_alloc),
+		(allocated != 0) ? merged*100/allocated : 0,
+		(oa != 0) ? om*100/oa : 0);
+
+	return res;
+}
+
+ssize_t sgv_sysfs_stat_reset(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	struct sgv_pool *pool;
+	int i;
+
+	pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+	for (i = 0; i < SGV_POOL_ELEMENTS; i++) {
+		atomic_set(&pool->cache_acc[i].hit_alloc, 0);
+		atomic_set(&pool->cache_acc[i].total_alloc, 0);
+		atomic_set(&pool->cache_acc[i].merged, 0);
+	}
+
+	atomic_set(&pool->big_pages, 0);
+	atomic_set(&pool->big_merged, 0);
+	atomic_set(&pool->big_alloc, 0);
+	atomic_set(&pool->other_pages, 0);
+	atomic_set(&pool->other_merged, 0);
+	atomic_set(&pool->other_alloc, 0);
+
+	PRINT_INFO("Statistics for SGV pool %s reset", pool->name);
+	return count;
+}
+
+ssize_t sgv_sysfs_global_stat_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct sgv_pool *pool;
+	int inactive_pages = 0, res;
+
+	spin_lock_bh(&sgv_pools_lock);
+	list_for_each_entry(pool, &sgv_active_pools_list,
+			sgv_active_pools_list_entry) {
+		inactive_pages += pool->inactive_cached_pages;
+	}
+	spin_unlock_bh(&sgv_pools_lock);
+
+	res = sprintf(buf, "%-42s %d/%d\n%-42s %d/%d\n%-42s %d/%d\n"
+		"%-42s %-11d\n",
+		"Inactive/active pages", inactive_pages,
+		atomic_read(&sgv_pages_total) - inactive_pages,
+		"Hi/lo watermarks [pages]", sgv_hi_wmk, sgv_lo_wmk,
+		"Hi watermark releases/failures",
+		atomic_read(&sgv_releases_on_hiwmk),
+		atomic_read(&sgv_releases_on_hiwmk_failed),
+		"Other allocs", atomic_read(&sgv_other_total_alloc));
+	return res;
+}
+
+ssize_t sgv_sysfs_global_stat_reset(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	atomic_set(&sgv_releases_on_hiwmk, 0);
+	atomic_set(&sgv_releases_on_hiwmk_failed, 0);
+	atomic_set(&sgv_other_total_alloc, 0);
+
+	PRINT_INFO("%s", "Global SGV pool statistics reset");
+	return count;
+}
+
diff -uprN orig/linux-2.6.33/Documentation/scst/sgv_cache.txt linux-2.6.33/Documentation/scst/sgv_cache.txt
--- orig/linux-2.6.33/Documentation/scst/sgv_cache.txt
+++ linux-2.6.33/Documentation/scst/sgv_cache.txt
@@ -0,0 +1,224 @@
+			SCST SGV CACHE.
+
+		PROGRAMMING INTERFACE DESCRIPTION.
+
+		     For SCST version 1.0.2
+
+The SCST SGV cache is a memory management subsystem in SCST. One could
+call it a "memory pool", but the Linux kernel already has a mempool
+interface, which serves different purposes. The SGV cache provides the
+SCST core, target drivers and backend dev handlers with facilities to
+allocate, build and cache SG vectors for data buffers. Its main
+advantage is the caching facility: instead of freeing each vector that
+is no longer used back to the system, it keeps the vector for a while
+(possibly indefinitely) so it can be reused by the next command. This
+makes it possible to:
+
+ - Reduce command processing latencies and, hence, improve performance;
+
+ - Make command processing latencies predictable, which is essential
+   for RT applications.
+
+The freed SG vectors are kept by the SGV cache either for some (possibly
+indefinite) time, or, optionally, until the system needs more memory and
+asks to free some via the register_shrinker() interface. The SGV cache
+also makes it possible to:
+
+  - Cluster pages together. "Cluster" means merging adjacent pages into a
+single SG entry. This results in fewer SG entries in the resulting SG
+vector, which improves the performance of handling it, and also makes it
+possible to work with bigger buffers on hardware with limited SG
+capabilities.
+
+  - Set custom page allocator functions. For instance, the scst_user
+device handler uses this facility to eliminate unneeded mapping/unmapping
+of user space pages and to avoid unneeded IOCTL calls for buffer
+allocations. In the fileio_tgt application, which uses a regular malloc()
+function to allocate data buffers, this facility gives ~30% less CPU load
+and a considerable performance increase.
+
+ - Prevent any single initiator, or all initiators together, from
+allocating too much memory and DoS-ing the target. Consider 10
+initiators, each with access to 10 devices. Any of them can queue up to
+64 commands, each transferring up to 1MB of data. So, at peak, all of
+them together can allocate up to 10*10*64 commands * 1MB = ~6.4GB of
+memory for data buffers. This amount must be limited somehow, and the
+SGV cache performs this function.
+
+From the implementation POV, the SGV cache is a simple extension of the
+kmem cache. It can work in 2 modes:
+
+1. With fixed size buffers.
+
+2. With a set of power-of-2 size buffers. In this mode each SGV cache
+(struct sgv_pool) has SGV_POOL_ELEMENTS (11 currently) kmem caches. Each
+of those kmem caches keeps SGV cache objects (struct sgv_pool_obj)
+corresponding to SG vectors with a size of order X pages. For instance,
+a request to allocate 4 pages will be served from kmem cache[2], since
+the allocation order of the requested number of pages is 2. If a request
+to allocate 11KB comes later, the same SG vector with 4 pages will be
+reused (see below). On average, this mode has less memory overhead than
+the fixed size buffers mode.
+
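+For example, the cache index is simply the allocation order of the
+requested size (a sketch mirroring the logic in sgv_pool_alloc()):
+
+    int pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; /* 11KB -> 3 pages */
+    int cache_num = get_order(size);                  /* 11KB -> order 2  */
+    /* served from caches[2], i.e. a cached 4-page (16KB) SG vector */
+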
+Consider how the SGV cache works in the set of buffers mode. When a
+request to allocate a new SG vector comes, sgv_pool_alloc(), via
+sgv_get_obj(), checks if there is already a cached vector of that
+order. If yes, that vector will be reused and its length, if necessary,
+will be modified to match the requested size. In the above example of a
+request for an 11KB buffer, the 4-page vector will be reused and
+modified using trans_tbl to contain 3 pages, and the last entry will be
+modified to contain the requested length minus 2*PAGE_SIZE. If there is
+no cached object, a new sgv_pool_obj will be allocated from the
+corresponding kmem cache, chosen by the order of the number of requested
+pages. Then that vector will be filled with pages and returned.
+
+In the fixed size buffers mode the SGV cache works similarly, except
+that it always allocates buffers of the predefined fixed size. I.e.,
+even for a 4K request, the whole buffer of the predefined size, say,
+1MB, will be used.
+
+In both modes, if the size of a request exceeds the maximum buffer size
+allowed for caching, the requested buffer will be allocated, but not
+cached.
+
+Freed cached sgv_pool_obj objects are actually returned to the system
+either by the purge work, which is scheduled once every 60 seconds, or
+by sgv_shrink(), which is called by the system when it asks for memory.
+
+			Interface.
+
+struct sgv_pool *sgv_pool_create(const char *name,
+	enum sgv_clustering_types clustering_type, int single_alloc_pages,
+	bool shared, int purge_interval)
+
+This function creates and initializes an SGV cache. It has the following
+arguments:
+
+ - name - the name of the SGV cache
+
+ - clustering_type - sets the type of pages clustering. The type can be:
+
+     * sgv_no_clustering - no clustering performed.
+
+     * sgv_tail_clustering - a page will only be merged with the latest
+       previously allocated page, so the order of pages in the SG will be
+       preserved
+
+     * sgv_full_clustering - free merging of pages at any place in
+       the SG is allowed. This mode usually provides the best merging
+       rate.
+ 
+ - single_alloc_pages - if 0, then the SGV cache will work in the set of
+   power-of-2 size buffers mode. If >0, then the SGV cache will work in
+   the fixed size buffers mode. In this case single_alloc_pages sets the
+   size of each buffer in pages.
+
+ - shared - sets whether the SGV cache can be shared between devices.
+   Cache sharing is allowed only between devices created inside the same
+   address space. If an SGV cache is shared, each subsequent call of
+   sgv_pool_create() with the same cache name will not create a new cache,
+   but instead return a reference to it.
+
+ - purge_interval - sets the cache purging interval, i.e. an SG buffer
+   will be freed if it's unused for a time t such that purge_interval <=
+   t < 2*purge_interval. If purge_interval is 0, then the default
+   interval will be used (60 seconds). If purge_interval < 0, then
+   automatic purging will be disabled. Shrinking on the system's demand
+   will also be disabled.
+
+Returns the resulting SGV cache or NULL in case of any error.
+
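+For illustration, a module might create and later delete its own cache
+as follows (a sketch with hypothetical names; the header location is an
+assumption):
+
+    #include <scst/scst_sgv.h>  /* assumed header for the SGV cache API */
+
+    static struct sgv_pool *my_pool;
+
+    static int __init my_tgt_init(void)
+    {
+        /* power-of-2 buffers mode, full clustering, not shared,
+         * default 60 seconds purge interval */
+        my_pool = sgv_pool_create("my-tgt", sgv_full_clustering,
+                                  0, false, 0);
+        return (my_pool != NULL) ? 0 : -ENOMEM;
+    }
+
+    static void __exit my_tgt_exit(void)
+    {
+        sgv_pool_del(my_pool);
+    }
+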
+void sgv_pool_del(struct sgv_pool *pool)
+
+This function deletes the corresponding SGV cache. If the cache is
+shared, it will decrease its reference counter. If the reference counter
+reaches 0, the cache will be destroyed.
+
+void sgv_pool_flush(struct sgv_pool *pool)
+
+This function flushes, i.e. frees, all the cached entries in the SGV
+cache.
+
+void sgv_pool_set_allocator(struct sgv_pool *pool,
+	struct page *(*alloc_pages_fn)(struct scatterlist *sg, gfp_t gfp, void *priv),
+	void (*free_pages_fn)(struct scatterlist *sg, int sg_count, void *priv));
+
+This function sets a custom page allocator for the SGV cache. For
+instance, scst_user uses this facility to supply the cache with pages
+mapped from user space. A sketch of a trivial allocator pair follows
+the parameter descriptions below.
+
+alloc_pages_fn() has the following parameters:
+
+ - sg - SG entry, to which the allocated page should be added.
+ 
+ - gfp - the allocation GFP flags
+ 
+ - priv - pointer to the private data supplied to sgv_pool_alloc()
+
+This function should return the allocated page, or NULL if no page was
+allocated.
+
+free_pages_fn() has the following parameters:
+
+ - sg - SG vector to free
+ 
+ - sg_count - number of SG entries in the sg
+ 
+ - priv - pointer to the private data supplied to the corresponding sgv_pool_alloc()
+
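+As a sketch, a trivial allocator pair might look as follows (assuming a
+pool created with sgv_no_clustering, so that each SG entry holds exactly
+one page; all names are hypothetical):
+
+    static struct page *my_alloc_pages_fn(struct scatterlist *sg,
+                                          gfp_t gfp, void *priv)
+    {
+        struct page *page = alloc_page(gfp);
+
+        if (page != NULL)
+            sg_set_page(sg, page, PAGE_SIZE, 0); /* attach to SG entry */
+        return page;
+    }
+
+    static void my_free_pages_fn(struct scatterlist *sg, int sg_count,
+                                 void *priv)
+    {
+        int i;
+
+        for (i = 0; i < sg_count; i++)
+            __free_page(sg_page(&sg[i]));
+    }
+
+    sgv_pool_set_allocator(my_pool, my_alloc_pages_fn, my_free_pages_fn);
+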
+struct scatterlist *sgv_pool_alloc(struct sgv_pool *pool, unsigned int size,
+	gfp_t gfp_mask, int flags, int *count,
+	struct sgv_pool_obj **sgv, struct scst_mem_lim *mem_lim, void *priv)
+
+This function allocates an SG vector from the SGV cache. It has the
+following parameters:
+
+ - pool - the cache to alloc from
+
+ - size - size of the resulting SG vector in bytes
+
+ - gfp_mask - the allocation mask
+ 
+ - flags - the allocation flags. The following flags are possible and
+   can be combined using the OR operation:
+ 
+     * SGV_POOL_ALLOC_NO_CACHED - the SG vector must not be cached.
+ 
+     * SGV_POOL_NO_ALLOC_ON_CACHE_MISS - don't do an allocation on a
+       cache miss.
+ 
+     * SGV_POOL_RETURN_OBJ_ON_ALLOC_FAIL - return an empty SGV object,
+       i.e. without the SG vector, if the allocation can't be completed.
+       For instance, because the SGV_POOL_NO_ALLOC_ON_CACHE_MISS flag is set.
+ 
+ - count - the count of SG entries in the resulting SG vector.
+
+ - sgv - the resulting SGV object. It should be used to free the
+   resulting SG vector.
+ 
+ - mem_lim - memory limits, see below.
+ 
+ - priv - pointer to private data for this allocation. This pointer will
+   be supplied to alloc_pages_fn() and free_pages_fn() and can be
+   retrieved by sgv_get_priv().
+
+This function returns a pointer to the resulting SG vector, or NULL in
+case of an error.
+
+void sgv_pool_free(struct sgv_pool_obj *sgv, struct scst_mem_lim *mem_lim)
+
+This function frees a previously allocated SG vector, referenced by the
+SGV cache object sgv.
+
+void *sgv_get_priv(struct sgv_pool_obj *sgv)
+
+This function returns the allocation private data for the SGV cache
+object sgv. The private data is set via sgv_pool_alloc().
+
+void scst_init_mem_lim(struct scst_mem_lim *mem_lim)
+
+This function initializes the memory limits structure mem_lim according
+to the current system configuration. This structure should later be used
+to track and limit the memory allocated by one or more SGV caches.
+
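+Putting the pieces together, a hedged end-to-end sketch (it assumes
+my_pool from the creation sketch above; names are hypothetical and
+error handling is trimmed):
+
+    static struct scst_mem_lim my_mem_lim; /* scst_init_mem_lim() is
+                                            * called on it once at
+                                            * initialization time */
+
+    static int my_fill_buf(void)
+    {
+        struct sgv_pool_obj *sgv = NULL; /* NULL asks for a new alloc */
+        struct scatterlist *sg;
+        int sg_cnt;
+
+        /* allocate an 11KB data buffer from the cache */
+        sg = sgv_pool_alloc(my_pool, 11 * 1024, GFP_KERNEL, 0, &sg_cnt,
+                            &sgv, &my_mem_lim, NULL);
+        if (sg == NULL)
+            return -ENOMEM;
+
+        /* ... transfer data using the sg_cnt entries of sg ... */
+
+        sgv_pool_free(sgv, &my_mem_lim); /* may go back into the cache */
+        return 0;
+    }
+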
+		Runtime information and statistics.
+
+Runtime information and statistics are available in /sys/kernel/scst_tgt/sgv.
+

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 8/12/1/5] SCST sysfs interface
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (6 preceding siblings ...)
  2010-04-13 13:06   ` [PATCH][RFC 7/12/1/5] SCST SGV cache Vladislav Bolkhovitin
@ 2010-04-13 13:06   ` Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 9/12/1/5] SCST debugging support Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 10/12/1/5] SCST external modules support Vladislav Bolkhovitin
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver, Daniel Henrique Debonzi

This patch contains file scst_sysfs.c.

Signed-off-by: Daniel Henrique Debonzi <debonzi@linux.vnet.ibm.com>
Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_sysfs.c | 3884 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 3884 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_sysfs.c linux-2.6.33/drivers/scst/scst_sysfs.c
--- orig/linux-2.6.33/drivers/scst/scst_sysfs.c
+++ linux-2.6.33/drivers/scst/scst_sysfs.c
@@ -0,0 +1,3884 @@
+/*
+ *  scst_sysfs.c
+ *
+ *  Copyright (C) 2009 Daniel Henrique Debonzi <debonzi@linux.vnet.ibm.com>
+ *  Copyright (C) 2009 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2009 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/kobject.h>
+#include <linux/string.h>
+#include <linux/sysfs.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/ctype.h>
+
+#include "scst.h"
+#include "scst_priv.h"
+#include "scst_mem.h"
+
+static DECLARE_COMPLETION(scst_sysfs_root_release_completion);
+
+static struct kobject scst_sysfs_root_kobj;
+static struct kobject *scst_targets_kobj;
+static struct kobject *scst_devices_kobj;
+static struct kobject *scst_sgv_kobj;
+static struct kobject *scst_handlers_kobj;
+
+/* Regular SCST sysfs operations */
+struct sysfs_ops scst_sysfs_ops;
+EXPORT_SYMBOL_GPL(scst_sysfs_ops);
+
+static const char *scst_dev_handler_types[] = {
+	"Direct-access device (e.g., magnetic disk)",
+	"Sequential-access device (e.g., magnetic tape)",
+	"Printer device",
+	"Processor device",
+	"Write-once device (e.g., some optical disks)",
+	"CD-ROM device",
+	"Scanner device (obsolete)",
+	"Optical memory device (e.g., some optical disks)",
+	"Medium changer device (e.g., jukeboxes)",
+	"Communications device (obsolete)",
+	"Defined by ASC IT8 (Graphic arts pre-press devices)",
+	"Defined by ASC IT8 (Graphic arts pre-press devices)",
+	"Storage array controller device (e.g., RAID)",
+	"Enclosure services device",
+	"Simplified direct-access device (e.g., magnetic disk)",
+	"Optical card reader/writer device"
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static DEFINE_MUTEX(scst_log_mutex);
+
+static struct scst_trace_log scst_trace_tbl[] = {
+	{ TRACE_OUT_OF_MEM,	"out_of_mem" },
+	{ TRACE_MINOR,		"minor" },
+	{ TRACE_SG_OP,		"sg" },
+	{ TRACE_MEMORY,		"mem" },
+	{ TRACE_BUFF,		"buff" },
+	{ TRACE_PID,		"pid" },
+	{ TRACE_LINE,		"line" },
+	{ TRACE_FUNCTION,	"function" },
+	{ TRACE_DEBUG,		"debug" },
+	{ TRACE_SPECIAL,	"special" },
+	{ TRACE_SCSI,		"scsi" },
+	{ TRACE_MGMT,		"mgmt" },
+	{ TRACE_MGMT_DEBUG,	"mgmt_dbg" },
+	{ TRACE_FLOW_CONTROL,	"flow_control" },
+	{ 0,			NULL }
+};
+
+static struct scst_trace_log scst_local_trace_tbl[] = {
+	{ TRACE_RTRY,			"retry" },
+	{ TRACE_SCSI_SERIALIZING,	"scsi_serializing" },
+	{ TRACE_RCV_BOT,		"recv_bot" },
+	{ TRACE_SND_BOT,		"send_bot" },
+	{ TRACE_RCV_TOP,		"recv_top" },
+	{ TRACE_SND_TOP,		"send_top" },
+	{ 0,				NULL }
+};
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+	unsigned long log_level, char *buf, const char *help);
+static int scst_write_trace(const char *buf, size_t length,
+	unsigned long *log_level, unsigned long default_level,
+	const char *name, const struct scst_trace_log *tbl);
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf);
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count);
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf);
+
+static void scst_sysfs_release(struct kobject *kobj)
+{
+	kfree(kobj);
+}
+
+/*
+ * Target Template
+ */
+
+static void scst_tgtt_release(struct kobject *kobj)
+{
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	complete_all(&tgtt->tgtt_kobj_release_cmpl);
+
+	scst_tgtt_cleanup(tgtt);
+	return;
+}
+
+static struct kobj_type tgtt_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_tgtt_release,
+};
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_tgtt_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	return scst_trace_level_show(tgtt->trace_tbl,
+		tgtt->trace_flags ? *tgtt->trace_flags : 0, buf,
+		tgtt->trace_tbl_help);
+}
+
+static ssize_t scst_tgtt_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, tgtt->trace_flags,
+		tgtt->default_trace_flags, tgtt->name, tgtt->trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute tgtt_trace_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR,
+	       scst_tgtt_trace_level_show, scst_tgtt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_tgtt_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"add_target target_name [parameters]\" "
+				">mgmt\n"
+		     "       echo \"del_target target_name\" >mgmt\n"
+		     "%s"
+		     "\n"
+		     "where parameters are one or more "
+		     "param_name=value pairs separated by ';'\n"
+		     "%s%s";
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	if (tgtt->add_target_parameters_help != NULL)
+		return sprintf(buf, help,
+			(tgtt->mgmt_cmd_help) ? tgtt->mgmt_cmd_help : "",
+			"\nThe following parameters are available: ",
+			tgtt->add_target_parameters_help);
+	else
+		return sprintf(buf, help,
+			(tgtt->mgmt_cmd_help) ? tgtt->mgmt_cmd_help : "",
+			"", "");
+}
+
+static ssize_t scst_tgtt_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	char *buffer, *p, *pp, *target_name;
+	struct scst_tgt_template *tgtt;
+
+	tgtt = container_of(kobj, struct scst_tgt_template, tgtt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	p = scst_get_next_lexem(&pp);
+
+	if (strcasecmp("add_target", p) == 0) {
+		target_name = scst_get_next_lexem(&pp);
+		if (*target_name == '\0') {
+			PRINT_ERROR("%s", "Target name required");
+			res = -EINVAL;
+			goto out_free;
+		}
+		res = tgtt->add_target(target_name, pp);
+	} else if (strcasecmp("del_target", p) == 0) {
+		target_name = scst_get_next_lexem(&pp);
+		if (*target_name == '\0') {
+			PRINT_ERROR("%s", "Target name required");
+			res = -EINVAL;
+			goto out_free;
+		}
+
+		p = scst_get_next_lexem(&pp);
+		if (*p != '\0')
+			goto out_syntax_err;
+
+		res = tgtt->del_target(target_name);
+	} else if (tgtt->mgmt_cmd != NULL) {
+		scst_restore_token_str(p, pp);
+		res = tgtt->mgmt_cmd(buffer);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	if (res == 0)
+		res = count;
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out_free;
+}
+
+static struct kobj_attribute scst_tgtt_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_tgtt_mgmt_show,
+	       scst_tgtt_mgmt_store);
+
+int scst_create_tgtt_sysfs(struct scst_tgt_template *tgtt)
+{
+	int retval = 0;
+	const struct attribute **pattr;
+
+	init_completion(&tgtt->tgtt_kobj_release_cmpl);
+
+	tgtt->tgtt_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&tgtt->tgtt_kobj, &tgtt_ktype,
+			scst_targets_kobj, tgtt->name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgtt %s to sysfs", tgtt->name);
+		goto out;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	if (tgtt->add_target != NULL) {
+		retval = sysfs_create_file(&tgtt->tgtt_kobj,
+				&scst_tgtt_mgmt.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add mgmt attr for target driver %s",
+				tgtt->name);
+			goto out;
+		}
+	}
+
+	pattr = tgtt->tgtt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			TRACE_DBG("Creating attr %s for target driver %s",
+				(*pattr)->name, tgtt->name);
+			retval = sysfs_create_file(&tgtt->tgtt_kobj, *pattr);
+			if (retval != 0) {
+				PRINT_ERROR("Can't add attr %s for target "
+					"driver %s", (*pattr)->name,
+					tgtt->name);
+				goto out;
+			}
+			pattr++;
+		}
+	}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	if (tgtt->trace_flags != NULL) {
+		retval = sysfs_create_file(&tgtt->tgtt_kobj,
+				&tgtt_trace_attr.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add trace_flag for target "
+				"driver %s", tgtt->name);
+			goto out;
+		}
+	}
+#endif
+
+out:
+	return retval;
+}
+
+void scst_tgtt_sysfs_put(struct scst_tgt_template *tgtt)
+{
+	if (tgtt->tgtt_kobj_initialized) {
+		int rc;
+
+		kobject_del(&tgtt->tgtt_kobj);
+		kobject_put(&tgtt->tgtt_kobj);
+
+		rc = wait_for_completion_timeout(&tgtt->tgtt_kobj_release_cmpl, HZ);
+		if (rc == 0) {
+			PRINT_INFO("Waiting for release of sysfs entry "
+				"for target template %s...", tgtt->name);
+			wait_for_completion(&tgtt->tgtt_kobj_release_cmpl);
+			PRINT_INFO("Done waiting for release of sysfs "
+				"entry for target template %s", tgtt->name);
+		}
+	} else
+		scst_tgtt_cleanup(tgtt);
+	return;
+}
+
+/*
+ * Target directory implementation
+ */
+
+static void scst_tgt_release(struct kobject *kobj)
+{
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	/* Let's make lockdep happy */
+	up_write(&tgt->tgt_attr_rwsem);
+
+	scst_free_tgt(tgt);
+	return;
+}
+
+static ssize_t scst_tgt_attr_show(struct kobject *kobj, struct attribute *attr,
+	char *buf)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	if (down_read_trylock(&tgt->tgt_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->show(kobj, kobj_attr, buf);
+
+	up_read(&tgt->tgt_attr_rwsem);
+
+out:
+	return res;
+}
+
+static ssize_t scst_tgt_attr_store(struct kobject *kobj,
+	struct attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	if (down_read_trylock(&tgt->tgt_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+	up_read(&tgt->tgt_attr_rwsem);
+
+out:
+	return res;
+}
+
+static struct sysfs_ops scst_tgt_sysfs_ops = {
+	.show = scst_tgt_attr_show,
+	.store = scst_tgt_attr_store,
+};
+
+static struct kobj_type tgt_ktype = {
+	.sysfs_ops = &scst_tgt_sysfs_ops,
+	.release = scst_tgt_release,
+};
+
+static void scst_acg_release(struct kobject *kobj)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	scst_destroy_acg(acg);
+	return;
+}
+
+static struct kobj_type acg_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_acg_release,
+};
+
+static struct kobj_attribute scst_luns_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+	       scst_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_luns_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_luns_mgmt_show,
+	       scst_acg_luns_mgmt_store);
+
+static struct kobj_attribute scst_acg_ini_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_acg_ini_mgmt_show,
+	       scst_acg_ini_mgmt_store);
+
+static struct kobj_attribute scst_ini_group_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_ini_group_mgmt_show,
+	       scst_ini_group_mgmt_store);
+
+static struct kobj_attribute scst_tgt_addr_method =
+	__ATTR(addr_method, S_IRUGO | S_IWUSR, scst_tgt_addr_method_show,
+	       scst_tgt_addr_method_store);
+
+static struct kobj_attribute scst_tgt_io_grouping_type =
+	__ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+	       scst_tgt_io_grouping_type_show,
+	       scst_tgt_io_grouping_type_store);
+
+static struct kobj_attribute scst_rel_tgt_id =
+	__ATTR(rel_tgt_id, S_IRUGO | S_IWUSR, scst_rel_tgt_id_show,
+	       scst_rel_tgt_id_store);
+
+static struct kobj_attribute scst_acg_addr_method =
+	__ATTR(addr_method, S_IRUGO | S_IWUSR, scst_acg_addr_method_show,
+		scst_acg_addr_method_store);
+
+static struct kobj_attribute scst_acg_io_grouping_type =
+	__ATTR(io_grouping_type, S_IRUGO | S_IWUSR,
+	       scst_acg_io_grouping_type_show,
+	       scst_acg_io_grouping_type_store);
+
+static ssize_t scst_tgt_enable_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt *tgt;
+	int res;
+	bool enabled;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	enabled = tgt->tgtt->is_target_enabled(tgt);
+
+	res = sprintf(buf, "%d\n", enabled ? 1 : 0);
+	return res;
+}
+
+static ssize_t scst_tgt_enable_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_tgt *tgt;
+	bool enable;
+
+	if (buf == NULL) {
+		PRINT_ERROR("%s: NULL buffer?", __func__);
+		res = -EINVAL;
+		goto out;
+	}
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	switch (buf[0]) {
+	case '0':
+		enable = false;
+		break;
+	case '1':
+		if (tgt->rel_tgt_id == 0) {
+			res = gen_relative_target_port_id(&tgt->rel_tgt_id);
+			if (res)
+				goto out;
+			PRINT_INFO("Using autogenerated rel ID %d for target "
+				"%s", tgt->rel_tgt_id, tgt->tgt_name);
+		} else {
+			if (!scst_is_relative_target_port_id_unique(
+			    tgt->rel_tgt_id, tgt)) {
+				PRINT_ERROR("Relative port id %d is not unique",
+					tgt->rel_tgt_id);
+				res = -EBADSLT;
+				goto out;
+			}
+		}
+		enable = true;
+		break;
+	default:
+		PRINT_ERROR("%s: Requested action not understood: %s",
+		       __func__, buf);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = tgt->tgtt->enable_target(tgt, enable);
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute tgt_enable_attr =
+	__ATTR(enabled, S_IRUGO | S_IWUSR,
+	       scst_tgt_enable_show, scst_tgt_enable_store);
+
+int scst_create_tgt_sysfs(struct scst_tgt *tgt)
+{
+	int retval;
+	const struct attribute **pattr;
+
+	init_rwsem(&tgt->tgt_attr_rwsem);
+
+	tgt->tgt_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&tgt->tgt_kobj, &tgt_ktype,
+			&tgt->tgtt->tgtt_kobj, tgt->tgt_name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt %s to sysfs", tgt->tgt_name);
+		goto out;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	if ((tgt->tgtt->enable_target != NULL) &&
+	    (tgt->tgtt->is_target_enabled != NULL)) {
+		retval = sysfs_create_file(&tgt->tgt_kobj,
+				&tgt_enable_attr.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add attr %s to sysfs",
+				tgt_enable_attr.attr.name);
+			goto out;
+		}
+	}
+
+	tgt->tgt_sess_kobj = kobject_create_and_add("sessions", &tgt->tgt_kobj);
+	if (tgt->tgt_sess_kobj == NULL) {
+		PRINT_ERROR("Can't create sess kobj for tgt %s", tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	tgt->tgt_luns_kobj = kobject_create_and_add("luns", &tgt->tgt_kobj);
+	if (tgt->tgt_luns_kobj == NULL) {
+		PRINT_ERROR("Can't create luns kobj for tgt %s", tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	retval = sysfs_create_file(tgt->tgt_luns_kobj, &scst_luns_mgmt.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_luns_mgmt.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	tgt->tgt_ini_grp_kobj = kobject_create_and_add("ini_groups",
+					&tgt->tgt_kobj);
+	if (tgt->tgt_ini_grp_kobj == NULL) {
+		PRINT_ERROR("Can't create ini_grp kobj for tgt %s",
+			tgt->tgt_name);
+		goto out_nomem;
+	}
+
+	retval = sysfs_create_file(tgt->tgt_ini_grp_kobj,
+			&scst_ini_group_mgmt.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_ini_group_mgmt.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_rel_tgt_id.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_rel_tgt_id.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_tgt_addr_method.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_tgt_addr_method.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_file(&tgt->tgt_kobj,
+			&scst_tgt_io_grouping_type.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add attribute %s for tgt %s",
+			scst_tgt_io_grouping_type.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	pattr = tgt->tgtt->tgt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			TRACE_DBG("Creating attr %s for tgt %s", (*pattr)->name,
+				tgt->tgt_name);
+			retval = sysfs_create_file(&tgt->tgt_kobj, *pattr);
+			if (retval != 0) {
+				PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+					(*pattr)->name, tgt->tgt_name);
+				goto out;
+			}
+			pattr++;
+		}
+	}
+
+out:
+	return retval;
+
+out_nomem:
+	retval = -ENOMEM;
+	goto out;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * tgt_attr_rwsem
+ */
+void scst_tgt_sysfs_prepare_put(struct scst_tgt *tgt)
+{
+	if (tgt->tgt_kobj_initialized) {
+		down_write(&tgt->tgt_attr_rwsem);
+		tgt->tgt_kobj_put_prepared = 1;
+	}
+
+	return;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * tgt_attr_rwsem
+ */
+void scst_tgt_sysfs_put(struct scst_tgt *tgt)
+{
+	if (tgt->tgt_kobj_initialized) {
+		kobject_del(tgt->tgt_sess_kobj);
+		kobject_put(tgt->tgt_sess_kobj);
+
+		kobject_del(tgt->tgt_luns_kobj);
+		kobject_put(tgt->tgt_luns_kobj);
+
+		kobject_del(tgt->tgt_ini_grp_kobj);
+		kobject_put(tgt->tgt_ini_grp_kobj);
+
+		kobject_del(&tgt->tgt_kobj);
+
+		if (!tgt->tgt_kobj_put_prepared)
+			down_write(&tgt->tgt_attr_rwsem);
+		kobject_put(&tgt->tgt_kobj);
+	} else
+		scst_free_tgt(tgt);
+	return;
+}
+
+/*
+ * Devices directory implementation
+ */
+
+ssize_t scst_device_sysfs_type_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	pos = sprintf(buf, "%d - %s\n", dev->type,
+		(unsigned)dev->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+		      "unknown" : scst_dev_handler_types[dev->type]);
+
+	return pos;
+}
+
+static struct kobj_attribute device_type_attr =
+	__ATTR(type, S_IRUGO, scst_device_sysfs_type_show, NULL);
+
+static ssize_t scst_device_sysfs_threads_num_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	pos = sprintf(buf, "%d\n%s", dev->threads_num,
+		(dev->threads_num != dev->handler->threads_num) ?
+			SCST_SYSFS_KEY_MARK "\n" : "");
+	return pos;
+}
+
+static ssize_t scst_device_sysfs_threads_data_store(struct scst_device *dev,
+	int threads_num, enum scst_dev_type_threads_pool_type threads_pool_type)
+{
+	int res = 0;
+
+	if (dev->threads_num < 0) {
+		PRINT_ERROR("Threads pool disabled for device %s",
+			dev->virt_name);
+		res = -EPERM;
+		goto out;
+	}
+
+	if ((threads_num == dev->threads_num) &&
+	    (threads_pool_type == dev->threads_pool_type))
+		goto out;
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	scst_stop_dev_threads(dev);
+
+	dev->threads_num = threads_num;
+	dev->threads_pool_type = threads_pool_type;
+
+	res = scst_create_dev_threads(dev);
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+}
+
+static ssize_t scst_device_sysfs_threads_num_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_device *dev;
+	long newtn;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	res = strict_strtol(buf, 0, &newtn);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtol() for %s failed: %d", buf, res);
+		goto out;
+	}
+
+	if (newtn < 0) {
+		PRINT_ERROR("Illegal threads num value %ld", newtn);
+		res = -EINVAL;
+		goto out;
+	}
+
+	res = scst_device_sysfs_threads_data_store(dev, newtn,
+		dev->threads_pool_type);
+	if (res != 0)
+		goto out;
+
+	PRINT_INFO("Changed cmd threads num to %ld", newtn);
+
+	res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute device_threads_num_attr =
+	__ATTR(threads_num, S_IRUGO | S_IWUSR,
+		scst_device_sysfs_threads_num_show,
+		scst_device_sysfs_threads_num_store);
+
+static ssize_t scst_device_sysfs_threads_pool_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos = 0;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	if (dev->threads_num == 0) {
+		pos = sprintf(buf, "Async\n");
+		goto out;
+	} else if (dev->threads_num < 0) {
+		pos = sprintf(buf, "Not valid\n");
+		goto out;
+	}
+
+	switch (dev->threads_pool_type) {
+	case SCST_THREADS_POOL_PER_INITIATOR:
+		pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_PER_INITIATOR_STR,
+			(dev->threads_pool_type != dev->handler->threads_pool_type) ?
+				SCST_SYSFS_KEY_MARK "\n" : "");
+		break;
+	case SCST_THREADS_POOL_SHARED:
+		pos = sprintf(buf, "%s\n%s", SCST_THREADS_POOL_SHARED_STR,
+			(dev->threads_pool_type != dev->handler->threads_pool_type) ?
+				SCST_SYSFS_KEY_MARK "\n" : "");
+		break;
+	default:
+		pos = sprintf(buf, "Unknown\n");
+		break;
+	}
+
+out:
+	return pos;
+}
+
+static ssize_t scst_device_sysfs_threads_pool_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_device *dev;
+	enum scst_dev_type_threads_pool_type newtpt;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	newtpt = scst_parse_threads_pool_type(buf, count);
+	if (newtpt == SCST_THREADS_POOL_TYPE_INVALID) {
+		PRINT_ERROR("Illegal threads pool type %s", buf);
+		res = -EINVAL;
+		goto out;
+	}
+
+	TRACE_DBG("buf %s, count %zd, newtpt %d", buf, count, newtpt);
+
+	res = scst_device_sysfs_threads_data_store(dev, dev->threads_num,
+		newtpt);
+	if (res != 0)
+		goto out;
+
+	PRINT_INFO("Changed cmd threads pool type to %d", newtpt);
+
+	res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute device_threads_pool_type_attr =
+	__ATTR(threads_pool_type, S_IRUGO | S_IWUSR,
+		scst_device_sysfs_threads_pool_type_show,
+		scst_device_sysfs_threads_pool_type_store);
+
+static struct attribute *scst_device_attrs[] = {
+	&device_type_attr.attr,
+	&device_threads_num_attr.attr,
+	&device_threads_pool_type_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_device_release(struct kobject *kobj)
+{
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	/* Let's make lockdep happy */
+	up_write(&dev->dev_attr_rwsem);
+
+	scst_free_device(dev);
+	return;
+}
+
+int scst_create_devt_dev_sysfs(struct scst_device *dev)
+{
+	int retval = 0;
+	const struct attribute **pattr;
+
+	if (dev->handler == &scst_null_devtype)
+		goto out;
+
+	BUG_ON(!dev->handler->devt_kobj_initialized);
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	retval = sysfs_create_link(&dev->dev_kobj,
+			&dev->handler->devt_kobj, "handler");
+	if (retval != 0) {
+		PRINT_ERROR("Can't create handler link for dev %s",
+			dev->virt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_link(&dev->handler->devt_kobj,
+			&dev->dev_kobj, dev->virt_name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't create handler link for dev %s",
+			dev->virt_name);
+		goto out;
+	}
+
+	pattr = dev->handler->dev_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			retval = sysfs_create_file(&dev->dev_kobj, *pattr);
+			if (retval != 0) {
+				PRINT_ERROR("Can't add dev attr %s for dev %s",
+					(*pattr)->name, dev->virt_name);
+				goto out;
+			}
+			pattr++;
+		}
+	}
+
+out:
+	return retval;
+}
+
+void scst_devt_dev_sysfs_put(struct scst_device *dev)
+{
+	const struct attribute **pattr;
+
+	if (dev->handler == &scst_null_devtype)
+		goto out;
+
+	BUG_ON(!dev->handler->devt_kobj_initialized);
+
+	pattr = dev->handler->dev_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			sysfs_remove_file(&dev->dev_kobj, *pattr);
+			pattr++;
+		}
+	}
+
+	sysfs_remove_link(&dev->dev_kobj, "handler");
+	sysfs_remove_link(&dev->handler->devt_kobj, dev->virt_name);
+
+out:
+	return;
+}
+
+static ssize_t scst_dev_attr_show(struct kobject *kobj, struct attribute *attr,
+			 char *buf)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	if (down_read_trylock(&dev->dev_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->show(kobj, kobj_attr, buf);
+
+	up_read(&dev->dev_attr_rwsem);
+
+out:
+	return res;
+}
+
+static ssize_t scst_dev_attr_store(struct kobject *kobj, struct attribute *attr,
+			  const char *buf, size_t count)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_device *dev;
+
+	dev = container_of(kobj, struct scst_device, dev_kobj);
+
+	if (down_read_trylock(&dev->dev_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+	up_read(&dev->dev_attr_rwsem);
+
+out:
+	return res;
+}
+
+static struct sysfs_ops scst_dev_sysfs_ops = {
+	.show = scst_dev_attr_show,
+	.store = scst_dev_attr_store,
+};
+
+static struct kobj_type scst_device_ktype = {
+	.sysfs_ops = &scst_dev_sysfs_ops,
+	.release = scst_sysfs_device_release,
+	.default_attrs = scst_device_attrs,
+};
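+
+/*
+ * Note on the trylock pattern above: attribute readers/writers take
+ * dev_attr_rwsem with down_read_trylock(), while teardown takes it for
+ * writing before the final kobject_put(), so late sysfs accesses fail
+ * with -ENOENT instead of touching a half-freed device:
+ *
+ *	show()/store()                scst_device_sysfs_put()
+ *	--------------------------    ------------------------------
+ *	down_read_trylock() -> ok     kobject_del(&dev->dev_kobj)
+ *	... access dev ...            down_write(&dev->dev_attr_rwsem)
+ *	up_read()                     kobject_put() -> release():
+ *	                                up_write(), scst_free_device()
+ */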
+
+int scst_create_device_sysfs(struct scst_device *dev)
+{
+	int retval = 0;
+
+	init_rwsem(&dev->dev_attr_rwsem);
+
+	dev->dev_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&dev->dev_kobj, &scst_device_ktype,
+				      scst_devices_kobj, dev->virt_name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add device %s to sysfs", dev->virt_name);
+		goto out;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	dev->dev_exp_kobj = kobject_create_and_add("exported",
+						   &dev->dev_kobj);
+	if (dev->dev_exp_kobj == NULL) {
+		PRINT_ERROR("Can't create exported link for device %s",
+			dev->virt_name);
+		retval = -ENOMEM;
+		goto out;
+	}
+
+	if (dev->scsi_dev != NULL) {
+		retval = sysfs_create_link(&dev->dev_kobj,
+			&dev->scsi_dev->sdev_dev.kobj, "scsi_device");
+		if (retval != 0) {
+			PRINT_ERROR("Can't create scsi_device link for dev %s",
+				dev->virt_name);
+			goto out;
+		}
+	}
+
+out:
+	return retval;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * dev_attr_rwsem
+ */
+void scst_device_sysfs_put(struct scst_device *dev)
+{
+	if (dev->dev_kobj_initialized) {
+		kobject_del(dev->dev_exp_kobj);
+		kobject_put(dev->dev_exp_kobj);
+
+		kobject_del(&dev->dev_kobj);
+
+		down_write(&dev->dev_attr_rwsem);
+		kobject_put(&dev->dev_kobj);
+	} else
+		scst_free_device(dev);
+	return;
+}
+
+/*
+ * Target sessions directory implementation
+ */
+
+static ssize_t scst_sess_sysfs_commands_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	return sprintf(buf, "%i\n", atomic_read(&sess->sess_cmd_count));
+}
+
+static struct kobj_attribute session_commands_attr =
+	__ATTR(commands, S_IRUGO, scst_sess_sysfs_commands_show, NULL);
+
+static ssize_t scst_sess_sysfs_active_commands_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	int res;
+	struct scst_session *sess;
+	int active_cmds = 0, t;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	for (t = TGT_DEV_HASH_SIZE-1; t >= 0; t--) {
+		struct list_head *sess_tgt_dev_list_head =
+			&sess->sess_tgt_dev_list_hash[t];
+		struct scst_tgt_dev *tgt_dev;
+		list_for_each_entry(tgt_dev, sess_tgt_dev_list_head,
+				sess_tgt_dev_list_entry) {
+			active_cmds += atomic_read(&tgt_dev->tgt_dev_cmd_count);
+		}
+	}
+
+	mutex_unlock(&scst_mutex);
+
+	res = sprintf(buf, "%i\n", active_cmds);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute session_active_commands_attr =
+	__ATTR(active_commands, S_IRUGO, scst_sess_sysfs_active_commands_show,
+		NULL);
+
+static ssize_t scst_sess_sysfs_initiator_name_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+		sess->initiator_name);
+}
+
+static struct kobj_attribute session_initiator_name_attr =
+	__ATTR(initiator_name, S_IRUGO, scst_sess_sysfs_initiator_name_show, NULL);
+
+static struct attribute *scst_session_attrs[] = {
+	&session_commands_attr.attr,
+	&session_active_commands_attr.attr,
+	&session_initiator_name_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_session_release(struct kobject *kobj)
+{
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	/* Let's make lockdep happy */
+	up_write(&sess->sess_attr_rwsem);
+
+	scst_release_session(sess);
+	return;
+}
+
+static ssize_t scst_sess_attr_show(struct kobject *kobj, struct attribute *attr,
+			 char *buf)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	if (down_read_trylock(&sess->sess_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->show(kobj, kobj_attr, buf);
+
+	up_read(&sess->sess_attr_rwsem);
+
+out:
+	return res;
+}
+
+static ssize_t scst_sess_attr_store(struct kobject *kobj, struct attribute *attr,
+			  const char *buf, size_t count)
+{
+	int res;
+	struct kobj_attribute *kobj_attr;
+	struct scst_session *sess;
+
+	sess = container_of(kobj, struct scst_session, sess_kobj);
+
+	if (down_read_trylock(&sess->sess_attr_rwsem) == 0) {
+		res = -ENOENT;
+		goto out;
+	}
+
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	res = kobj_attr->store(kobj, kobj_attr, buf, count);
+
+	up_read(&sess->sess_attr_rwsem);
+
+out:
+	return res;
+}
+
+static struct sysfs_ops scst_sess_sysfs_ops = {
+	.show = scst_sess_attr_show,
+	.store = scst_sess_attr_store,
+};
+
+static struct kobj_type scst_session_ktype = {
+	.sysfs_ops = &scst_sess_sysfs_ops,
+	.release = scst_sysfs_session_release,
+	.default_attrs = scst_session_attrs,
+};
+
+/* scst_mutex supposed to be locked */
+int scst_create_sess_sysfs(struct scst_session *sess)
+{
+	int retval = 0;
+	struct scst_session *s;
+	const struct attribute **pattr;
+	char *name = (char *)sess->initiator_name;
+	int len = strlen(name) + 1, n = 1;
+
+restart:
+	list_for_each_entry(s, &sess->tgt->sess_list, sess_list_entry) {
+		if (!s->sess_kobj_initialized)
+			continue;
+
+		if (strcmp(name, kobject_name(&s->sess_kobj)) == 0) {
+			if (s == sess)
+				continue;
+
+			TRACE_DBG("Duplicate session from the same initiator "
+				"%s found", name);
+
+			if (name == sess->initiator_name) {
+				len = strlen(sess->initiator_name);
+				len += 20;
+				name = kmalloc(len, GFP_KERNEL);
+				if (name == NULL) {
+					PRINT_ERROR("Unable to allocate a "
+						"replacement name (size %d)",
+						len);
+					retval = -ENOMEM;
+					goto out_free;
+				}
+			}
+
+			snprintf(name, len, "%s_%d", sess->initiator_name, n);
+			n++;
+			goto restart;
+		}
+	}
+
+	init_rwsem(&sess->sess_attr_rwsem);
+
+	sess->sess_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&sess->sess_kobj, &scst_session_ktype,
+			      sess->tgt->tgt_sess_kobj, name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add session %s to sysfs", name);
+		goto out_free;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	pattr = sess->tgt->tgtt->sess_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			retval = sysfs_create_file(&sess->sess_kobj, *pattr);
+			if (retval != 0) {
+				PRINT_ERROR("Can't add sess attr %s for sess "
+					"for initiator %s", (*pattr)->name,
+					name);
+				goto out_free;
+			}
+			pattr++;
+		}
+	}
+
+	if (sess->acg == sess->tgt->default_acg)
+		retval = sysfs_create_link(&sess->sess_kobj,
+				sess->tgt->tgt_luns_kobj, "luns");
+	else
+		retval = sysfs_create_link(&sess->sess_kobj,
+				sess->acg->luns_kobj, "luns");
+
+out_free:
+	if (name != sess->initiator_name)
+		kfree(name);
+	return retval;
+}
+
+/*
+ * Must not be called under scst_mutex or there can be a deadlock with
+ * sess_attr_rwsem
+ */
+void scst_sess_sysfs_put(struct scst_session *sess)
+{
+	if (sess->sess_kobj_initialized) {
+		kobject_del(&sess->sess_kobj);
+
+		down_write(&sess->sess_attr_rwsem);
+		kobject_put(&sess->sess_kobj);
+	} else
+		scst_release_session(sess);
+	return;
+}
+
+/*
+ * Target luns directory implementation
+ */
+
+static void scst_acg_dev_release(struct kobject *kobj)
+{
+	struct scst_acg_dev *acg_dev;
+
+	acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+
+	scst_acg_dev_destroy(acg_dev);
+	return;
+}
+
+static ssize_t scst_lun_rd_only_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf)
+{
+	struct scst_acg_dev *acg_dev;
+
+	acg_dev = container_of(kobj, struct scst_acg_dev, acg_dev_kobj);
+
+	if (acg_dev->rd_only || acg_dev->dev->rd_only)
+		return sprintf(buf, "%d\n%s\n", 1, SCST_SYSFS_KEY_MARK);
+	else
+		return sprintf(buf, "%d\n", 0);
+}
+
+static struct kobj_attribute lun_options_attr =
+	__ATTR(read_only, S_IRUGO, scst_lun_rd_only_show, NULL);
+
+static struct attribute *lun_attrs[] = {
+	&lun_options_attr.attr,
+	NULL,
+};
+
+static struct kobj_type acg_dev_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_acg_dev_release,
+	.default_attrs = lun_attrs,
+};
+
+int scst_create_acg_dev_sysfs(struct scst_acg *acg, unsigned int virt_lun,
+	struct kobject *parent)
+{
+	int retval;
+	struct scst_acg_dev *acg_dev = NULL, *acg_dev_tmp;
+	char str[20];
+
+	list_for_each_entry(acg_dev_tmp, &acg->acg_dev_list,
+			    acg_dev_list_entry) {
+		if (acg_dev_tmp->lun == virt_lun) {
+			acg_dev = acg_dev_tmp;
+			break;
+		}
+	}
+	if (acg_dev == NULL) {
+		PRINT_ERROR("%s", "acg_dev lookup for kobject creation failed");
+		retval = -EINVAL;
+		goto out;
+	}
+
+	snprintf(str, sizeof(str), "export%u",
+		acg_dev->dev->dev_exported_lun_num++);
+
+	kobject_get(&acg_dev->dev->dev_kobj);
+
+	acg_dev->acg_dev_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&acg_dev->acg_dev_kobj, &acg_dev_ktype,
+				      parent, "%u", virt_lun);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add LUN %u for acg %s to sysfs",
+			virt_lun, acg->acg_name);
+		goto out;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	retval = sysfs_create_link(acg_dev->dev->dev_exp_kobj,
+				   &acg_dev->acg_dev_kobj, str);
+	if (retval != 0) {
+		PRINT_ERROR("Can't create acg %s LUN link", acg->acg_name);
+		goto out;
+	}
+
+	retval = sysfs_create_link(&acg_dev->acg_dev_kobj,
+			&acg_dev->dev->dev_kobj, "device");
+	if (retval != 0) {
+		PRINT_ERROR("Can't create acg %s device link", acg->acg_name);
+		goto out;
+	}
+
+out:
+	return retval;
+}
+
+static ssize_t __scst_luns_mgmt_store(struct scst_acg *acg,
+	struct kobject *kobj, const char *buf, size_t count)
+{
+	int res, virt = 0, read_only = 0, action;
+	char *buffer, *p, *e = NULL;
+	unsigned int host, channel = 0, id = 0, lun = 0, virt_lun;
+	struct scst_acg_dev *acg_dev = NULL, *acg_dev_tmp;
+	struct scst_device *d, *dev = NULL;
+
+#define SCST_LUN_ACTION_ADD	1
+#define SCST_LUN_ACTION_DEL	2
+#define SCST_LUN_ACTION_REPLACE	3
+#define SCST_LUN_ACTION_CLEAR	4
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+	p = buffer;
+
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+	if (strncasecmp("add", p, 3) == 0) {
+		p += 3;
+		action = SCST_LUN_ACTION_ADD;
+	} else if (strncasecmp("del", p, 3) == 0) {
+		p += 3;
+		action = SCST_LUN_ACTION_DEL;
+	} else if (!strncasecmp("replace", p, 7)) {
+		p += 7;
+		action = SCST_LUN_ACTION_REPLACE;
+	} else if (!strncasecmp("clear", p, 5)) {
+		p += 5;
+		action = SCST_LUN_ACTION_CLEAR;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out_free;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_free_resume;
+	}
+
+	if (action != SCST_LUN_ACTION_CLEAR) {
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		while (isspace(*p) && *p != '\0')
+			p++;
+		e = p; /* save p */
+		host = simple_strtoul(p, &p, 0);
+		if (*p == ':') {
+			channel = simple_strtoul(p + 1, &p, 0);
+			id = simple_strtoul(p + 1, &p, 0);
+			lun = simple_strtoul(p + 1, &p, 0);
+			e = p;
+		} else {
+			virt++;
+			p = e; /* restore p */
+			while (!isspace(*e) && *e != '\0')
+				e++;
+			*e = '\0';
+		}
+
+		list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+			if (virt) {
+				if (d->virt_id && !strcmp(d->virt_name, p)) {
+					dev = d;
+					TRACE_DBG("Virt device %p (%s) found",
+						  dev, p);
+					break;
+				}
+			} else {
+				if (d->scsi_dev &&
+				    d->scsi_dev->host->host_no == host &&
+				    d->scsi_dev->channel == channel &&
+				    d->scsi_dev->id == id &&
+				    d->scsi_dev->lun == lun) {
+					dev = d;
+					TRACE_DBG("Dev %p (%d:%d:%d:%d) found",
+						  dev, host, channel, id, lun);
+					break;
+				}
+			}
+		}
+		if (dev == NULL) {
+			if (virt) {
+				PRINT_ERROR("Virt device '%s' not found", p);
+			} else {
+				PRINT_ERROR("Device %d:%d:%d:%d not found",
+					    host, channel, id, lun);
+			}
+			res = -EINVAL;
+			goto out_free_up;
+		}
+	}
+
+	switch (action) {
+	case SCST_LUN_ACTION_ADD:
+	case SCST_LUN_ACTION_REPLACE:
+	{
+		bool dev_replaced = false;
+
+		e++;
+		while (isspace(*e) && *e != '\0')
+			e++;
+		virt_lun = simple_strtoul(e, &e, 0);
+
+		while (isspace(*e) && *e != '\0')
+			e++;
+
+		while (1) {
+			char *pp;
+			unsigned long val;
+			char *param = scst_get_next_token_str(&e);
+			if (param == NULL)
+				break;
+
+			p = scst_get_next_lexem(&param);
+			if (*p == '\0') {
+				PRINT_ERROR("Syntax error at %s (device %s)",
+					param, dev->virt_name);
+				res = -EINVAL;
+				goto out_free_up;
+			}
+
+			pp = scst_get_next_lexem(&param);
+			if (*pp == '\0') {
+				PRINT_ERROR("Value for parameter %s missing (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_free_up;
+			}
+
+			if (scst_get_next_lexem(&param)[0] != '\0') {
+				PRINT_ERROR("Too many values for parameter %s (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_free_up;
+			}
+
+			res = strict_strtoul(pp, 0, &val);
+			if (res != 0) {
+				PRINT_ERROR("strict_strtoul() for %s failed: %d "
+					"(device %s)", pp, res, dev->virt_name);
+				goto out_free_up;
+			}
+
+			if (!strcasecmp("read_only", p)) {
+				read_only = val;
+				TRACE_DBG("READ ONLY %d", read_only);
+			} else {
+				PRINT_ERROR("Unknown parameter %s (device %s)",
+					p, dev->virt_name);
+				res = -EINVAL;
+				goto out_free_up;
+			}
+		}
+
+		acg_dev = NULL;
+		list_for_each_entry(acg_dev_tmp, &acg->acg_dev_list,
+				    acg_dev_list_entry) {
+			if (acg_dev_tmp->lun == virt_lun) {
+				acg_dev = acg_dev_tmp;
+				break;
+			}
+		}
+
+		if (acg_dev != NULL) {
+			if (action == SCST_LUN_ACTION_ADD) {
+				PRINT_ERROR("virt lun %u already exists in "
+					"group %s", virt_lun, acg->acg_name);
+				res = -EEXIST;
+				goto out_free_up;
+			} else {
+				/* Replace */
+				res = scst_acg_remove_dev(acg, acg_dev->dev,
+						false);
+				if (res != 0)
+					goto out_free_up;
+
+				dev_replaced = true;
+			}
+		}
+
+		res = scst_acg_add_dev(acg, dev, virt_lun, read_only,
+					!dev_replaced);
+		if (res != 0)
+			goto out_free_up;
+
+		res = scst_create_acg_dev_sysfs(acg, virt_lun, kobj);
+		if (res != 0) {
+			PRINT_ERROR("%s", "Creation of acg_dev kobject failed");
+			goto out_remove_acg_dev;
+		}
+
+		if (dev_replaced) {
+			struct scst_tgt_dev *tgt_dev;
+
+			list_for_each_entry(tgt_dev, &dev->dev_tgt_dev_list,
+				dev_tgt_dev_list_entry) {
+				if ((tgt_dev->acg_dev->acg == acg) &&
+				    (tgt_dev->lun == virt_lun)) {
+					TRACE_MGMT_DBG("INQUIRY DATA HAS CHANGED"
+						" on tgt_dev %p", tgt_dev);
+					scst_gen_aen_or_ua(tgt_dev,
+						SCST_LOAD_SENSE(scst_sense_inquery_data_changed));
+				}
+			}
+		}
+
+		break;
+	}
+	case SCST_LUN_ACTION_DEL:
+		res = scst_acg_remove_dev(acg, dev, true);
+		if (res != 0)
+			goto out_free_up;
+		break;
+	case SCST_LUN_ACTION_CLEAR:
+		PRINT_INFO("Removing all devices from group %s",
+			acg->acg_name);
+		list_for_each_entry_safe(acg_dev, acg_dev_tmp,
+					 &acg->acg_dev_list,
+					 acg_dev_list_entry) {
+			res = scst_acg_remove_dev(acg, acg_dev->dev,
+				list_is_last(&acg_dev->acg_dev_list_entry,
+					     &acg->acg_dev_list));
+			if (res)
+				goto out_free_up;
+		}
+		break;
+	}
+
+	res = count;
+
+out_free_up:
+	mutex_unlock(&scst_mutex);
+
+out_free_resume:
+	scst_resume_activity();
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+out_remove_acg_dev:
+	scst_acg_remove_dev(acg, dev, true);
+	goto out_free_up;
+
+#undef SCST_LUN_ACTION_ADD
+#undef SCST_LUN_ACTION_DEL
+#undef SCST_LUN_ACTION_REPLACE
+#undef SCST_LUN_ACTION_CLEAR
+}
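+
+/*
+ * Example invocations for the parser above (hypothetical device name;
+ * paths abbreviated), following the usage text in the show handler below:
+ *
+ *	echo "add disk1 0 read_only=1" >.../luns/mgmt
+ *	echo "replace 1:0:0:0 0" >.../luns/mgmt
+ *	echo "del disk1 0" >.../luns/mgmt
+ *	echo "clear" >.../luns/mgmt
+ */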
+
+static ssize_t scst_luns_mgmt_show(struct kobject *kobj,
+				   struct kobj_attribute *attr,
+				   char *buf)
+{
+	static char *help = "Usage: echo \"add|del H:C:I:L lun [parameters]\" "
+					">mgmt\n"
+			    "       echo \"add|del VNAME lun [parameters]\" "
+					">mgmt\n"
+			    "       echo \"replace H:C:I:L lun [parameters]\" "
+					">mgmt\n"
+			    "       echo \"replace VNAME lun [parameters]\" "
+					">mgmt\n"
+			    "       echo \"clear\" >mgmt\n"
+			    "\n"
+			    "where parameters are one or more "
+			    "param_name=value pairs separated by ';'\n"
+			    "\nThe following parameters are available: read_only.";
+
+	return sprintf(buf, "%s", help);
+}
+
+static ssize_t scst_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_luns_mgmt_store(acg, kobj, buf, count);
+	return res;
+}
+
+static ssize_t __scst_acg_addr_method_show(struct scst_acg *acg, char *buf)
+{
+	int res;
+
+	switch (acg->addr_method) {
+	case SCST_LUN_ADDR_METHOD_FLAT:
+		res = sprintf(buf, "FLAT\n%s\n", SCST_SYSFS_KEY_MARK);
+		break;
+	case SCST_LUN_ADDR_METHOD_PERIPHERAL:
+		res = sprintf(buf, "PERIPHERAL\n");
+		break;
+	default:
+		res = sprintf(buf, "UNKNOWN\n");
+		break;
+	}
+
+	return res;
+}
+
+static ssize_t __scst_acg_addr_method_store(struct scst_acg *acg,
+	const char *buf, size_t count)
+{
+	int res = count;
+
+	if (strncasecmp(buf, "FLAT", min_t(int, 4, count)) == 0)
+		acg->addr_method = SCST_LUN_ADDR_METHOD_FLAT;
+	else if (strncasecmp(buf, "PERIPHERAL", min_t(int, 10, count)) == 0)
+		acg->addr_method = SCST_LUN_ADDR_METHOD_PERIPHERAL;
+	else {
+		PRINT_ERROR("Unknown address method %s", buf);
+		res = -EINVAL;
+	}
+	return res;
+}
+
+static ssize_t scst_tgt_addr_method_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_tgt_addr_method_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_acg_addr_method_store(acg, buf, count);
+	return res;
+}
+
+static ssize_t __scst_acg_io_grouping_type_show(struct scst_acg *acg, char *buf)
+{
+	int res;
+
+	switch (acg->acg_io_grouping_type) {
+	case SCST_IO_GROUPING_AUTO:
+		res = sprintf(buf, "%s\n", SCST_IO_GROUPING_AUTO_STR);
+		break;
+	case SCST_IO_GROUPING_THIS_GROUP_ONLY:
+		res = sprintf(buf, "%s\n%s\n",
+			SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	case SCST_IO_GROUPING_NEVER:
+		res = sprintf(buf, "%s\n%s\n", SCST_IO_GROUPING_NEVER_STR,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	default:
+		res = sprintf(buf, "%d\n%s\n", acg->acg_io_grouping_type,
+			SCST_SYSFS_KEY_MARK);
+		break;
+	}
+
+	return res;
+}
+
+static ssize_t __scst_acg_io_grouping_type_store(struct scst_acg *acg,
+	const char *buf, size_t count)
+{
+	int res = 0;
+	int prev = acg->acg_io_grouping_type;
+	struct scst_acg_dev *acg_dev;
+
+	if (strncasecmp(buf, SCST_IO_GROUPING_AUTO_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_AUTO_STR), count)) == 0)
+		acg->acg_io_grouping_type = SCST_IO_GROUPING_AUTO;
+	else if (strncasecmp(buf, SCST_IO_GROUPING_THIS_GROUP_ONLY_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_THIS_GROUP_ONLY_STR), count)) == 0)
+		acg->acg_io_grouping_type = SCST_IO_GROUPING_THIS_GROUP_ONLY;
+	else if (strncasecmp(buf, SCST_IO_GROUPING_NEVER_STR,
+			min_t(int, strlen(SCST_IO_GROUPING_NEVER_STR), count)) == 0)
+		acg->acg_io_grouping_type = SCST_IO_GROUPING_NEVER;
+	else {
+		unsigned long io_grouping_type;
+		res = strict_strtoul(buf, 0, &io_grouping_type);
+		if ((res != 0) || (io_grouping_type == 0)) {
+			PRINT_ERROR("Unknown or not allowed I/O grouping type "
+				"%s", buf);
+			res = -EINVAL;
+			goto out;
+		}
+		acg->acg_io_grouping_type = io_grouping_type;
+	}
+
+	if (prev == acg->acg_io_grouping_type)
+		goto out;
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_resume;
+	}
+
+	list_for_each_entry(acg_dev, &acg->acg_dev_list, acg_dev_list_entry) {
+		int rc;
+
+		scst_stop_dev_threads(acg_dev->dev);
+
+		rc = scst_create_dev_threads(acg_dev->dev);
+		if (rc != 0)
+			res = rc;
+	}
+
+	mutex_unlock(&scst_mutex);
+
+out_resume:
+	scst_resume_activity();
+
+out:
+	return res;
+}
+
+static ssize_t scst_tgt_io_grouping_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_tgt_io_grouping_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+	struct scst_tgt *tgt;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	acg = tgt->default_acg;
+
+	res = __scst_acg_io_grouping_type_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+static int scst_create_acg_sysfs(struct scst_tgt *tgt,
+	struct scst_acg *acg)
+{
+	int retval = 0;
+
+	acg->acg_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&acg->acg_kobj, &acg_ktype,
+		tgt->tgt_ini_grp_kobj, acg->acg_name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add acg '%s' to sysfs", acg->acg_name);
+		goto out;
+	}
+
+	acg->luns_kobj = kobject_create_and_add("luns", &acg->acg_kobj);
+	if (acg->luns_kobj == NULL) {
+		PRINT_ERROR("Can't create luns kobj for tgt %s",
+			tgt->tgt_name);
+		retval = -ENOMEM;
+		goto out;
+	}
+
+	retval = sysfs_create_file(acg->luns_kobj, &scst_acg_luns_mgmt.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_luns_mgmt.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	acg->initiators_kobj = kobject_create_and_add("initiators",
+		&acg->acg_kobj);
+	if (acg->initiators_kobj == NULL) {
+		PRINT_ERROR("Can't create initiators kobj for tgt %s",
+			tgt->tgt_name);
+		retval = -ENOMEM;
+		goto out;
+	}
+
+	retval = sysfs_create_file(acg->initiators_kobj,
+		&scst_acg_ini_mgmt.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_ini_mgmt.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_file(&acg->acg_kobj, &scst_acg_addr_method.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_addr_method.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+	retval = sysfs_create_file(&acg->acg_kobj, &scst_acg_io_grouping_type.attr);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add tgt attr %s for tgt %s",
+			scst_acg_io_grouping_type.attr.name, tgt->tgt_name);
+		goto out;
+	}
+
+out:
+	return retval;
+}
+
+void scst_acg_sysfs_put(struct scst_acg *acg)
+{
+	if (acg->acg_kobj_initialized) {
+		scst_clear_acg(acg);
+
+		kobject_del(acg->luns_kobj);
+		kobject_put(acg->luns_kobj);
+
+		kobject_del(acg->initiators_kobj);
+		kobject_put(acg->initiators_kobj);
+
+		kobject_del(&acg->acg_kobj);
+		kobject_put(&acg->acg_kobj);
+	} else
+		scst_destroy_acg(acg);
+	return;
+}
+
+static ssize_t scst_acg_addr_method_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	return __scst_acg_addr_method_show(acg, buf);
+}
+
+static ssize_t scst_acg_addr_method_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	res = __scst_acg_addr_method_store(acg, buf, count);
+	return res;
+}
+
+static ssize_t scst_acg_io_grouping_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	return __scst_acg_io_grouping_type_show(acg, buf);
+}
+
+static ssize_t scst_acg_io_grouping_type_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj, struct scst_acg, acg_kobj);
+
+	res = __scst_acg_io_grouping_type_store(acg, buf, count);
+	if (res != 0)
+		goto out;
+
+	res = count;
+
+out:
+	return res;
+}
+
+static ssize_t scst_ini_group_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	static char *help = "Usage: echo \"create GROUP_NAME\" >mgmt\n"
+			    "       echo \"del GROUP_NAME\" >mgmt\n";
+
+	return sprintf(buf, "%s", help);
+}
+
+static ssize_t scst_ini_group_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res, action;
+	int len;
+	char *name;
+	char *buffer, *p, *e = NULL;
+	struct scst_acg *a, *acg = NULL;
+	struct scst_tgt *tgt;
+
+#define SCST_INI_GROUP_ACTION_CREATE	1
+#define SCST_INI_GROUP_ACTION_DEL	2
+
+	tgt = container_of(kobj->parent, struct scst_tgt, tgt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+	p = buffer;
+
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+	if (strncasecmp("create ", p, 7) == 0) {
+		p += 7;
+		action = SCST_INI_GROUP_ACTION_CREATE;
+	} else if (strncasecmp("del ", p, 4) == 0) {
+		p += 4;
+		action = SCST_INI_GROUP_ACTION_DEL;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out_free;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_free_resume;
+	}
+
+	while (isspace(*p) && *p != '\0')
+		p++;
+	e = p;
+	while (!isspace(*e) && *e != '\0')
+		e++;
+	*e = '\0';
+
+	if (p[0] == '\0') {
+		PRINT_ERROR("%s", "Group name required");
+		res = -EINVAL;
+		goto out_free_up;
+	}
+
+	list_for_each_entry(a, &tgt->tgt_acg_list, acg_list_entry) {
+		if (strcmp(a->acg_name, p) == 0) {
+			TRACE_DBG("group (acg) %p %s found",
+				  a, a->acg_name);
+			acg = a;
+			break;
+		}
+	}
+
+	switch (action) {
+	case SCST_INI_GROUP_ACTION_CREATE:
+		TRACE_DBG("Creating group '%s'", p);
+		if (acg != NULL) {
+			PRINT_ERROR("Group %s already exists", p);
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		len = strlen(p) + 1;
+		name = kmalloc(len, GFP_KERNEL);
+		if (name == NULL) {
+			PRINT_ERROR("%s", "Allocation of name failed");
+			res = -ENOMEM;
+			goto out_free_up;
+		}
+		strlcpy(name, p, len);
+
+		acg = scst_alloc_add_acg(tgt, name);
+		kfree(name);
+		if (acg == NULL) {
+			res = -ENOMEM;
+			goto out_free_up;
+		}
+
+		res = scst_create_acg_sysfs(tgt, acg);
+		if (res != 0)
+			goto out_free_acg;
+		break;
+	case SCST_INI_GROUP_ACTION_DEL:
+		TRACE_DBG("Deleting group '%s'", p);
+		if (acg == NULL) {
+			PRINT_ERROR("Group %s not found", p);
+			res = -EINVAL;
+			goto out_free_up;
+		}
+		if (!scst_acg_sess_is_empty(acg)) {
+			PRINT_ERROR("Group %s is not empty", acg->acg_name);
+			res = -EBUSY;
+			goto out_free_up;
+		}
+		scst_acg_sysfs_put(acg);
+		break;
+	}
+
+	res = count;
+
+out_free_up:
+	mutex_unlock(&scst_mutex);
+
+out_free_resume:
+	scst_resume_activity();
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+out_free_acg:
+	scst_acg_sysfs_put(acg);
+	goto out_free_up;
+
+#undef SCST_INI_GROUP_ACTION_CREATE
+#undef SCST_INI_GROUP_ACTION_DEL
+}
+
+static ssize_t scst_rel_tgt_id_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_tgt *tgt;
+	int res;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	res = sprintf(buf, "%d\n%s", tgt->rel_tgt_id,
+		(tgt->rel_tgt_id != 0) ? SCST_SYSFS_KEY_MARK "\n" : "");
+	return res;
+}
+
+static ssize_t scst_rel_tgt_id_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res = 0;
+	struct scst_tgt *tgt;
+	unsigned long rel_tgt_id;
+
+	if (buf == NULL)
+		goto out_err;
+
+	tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+
+	res = strict_strtoul(buf, 0, &rel_tgt_id);
+	if (res != 0)
+		goto out_err;
+
+	TRACE_DBG("Try to set relative target port id %d",
+		(uint16_t)rel_tgt_id);
+
+	if (rel_tgt_id < SCST_MIN_REL_TGT_ID ||
+	    rel_tgt_id > SCST_MAX_REL_TGT_ID) {
+		if ((rel_tgt_id == 0) && !tgt->tgtt->is_target_enabled(tgt))
+			goto set;
+
+		PRINT_ERROR("Invalid relative port id %d",
+			(uint16_t)rel_tgt_id);
+		res = -EINVAL;
+		goto out;
+	}
+
+	if (tgt->tgtt->is_target_enabled(tgt) &&
+	    rel_tgt_id != tgt->rel_tgt_id) {
+		if (!scst_is_relative_target_port_id_unique(rel_tgt_id, tgt)) {
+			PRINT_ERROR("Relative port id %d is not unique",
+				(uint16_t)rel_tgt_id);
+			res = -EBADSLT;
+			goto out;
+		}
+	}
+
+set:
+	tgt->rel_tgt_id = (uint16_t)rel_tgt_id;
+
+	res = count;
+
+out:
+	return res;
+
+out_err:
+	PRINT_ERROR("%s: Requested action not understood: %s", __func__, buf);
+	res = -EINVAL;
+	goto out;
+}
+
+int scst_create_acn_sysfs(struct scst_acg *acg, struct scst_acn *acn)
+{
+	int retval = 0;
+	int len;
+	struct kobj_attribute *attr = NULL;
+
+	acn->acn_attr = NULL;
+
+	attr = kzalloc(sizeof(struct kobj_attribute), GFP_KERNEL);
+	if (attr == NULL) {
+		PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+			acn->name);
+		retval = -ENOMEM;
+		goto out;
+	}
+
+	len = strlen(acn->name) + 1;
+	attr->attr.name = kzalloc(len, GFP_KERNEL);
+	if (attr->attr.name == NULL) {
+		PRINT_ERROR("Unable to allocate attributes for initiator '%s'",
+			acn->name);
+		retval = -ENOMEM;
+		goto out_free;
+	}
+	strlcpy((char *)attr->attr.name, acn->name, len);
+
+	attr->attr.owner = THIS_MODULE;
+	attr->attr.mode = S_IRUGO;
+	attr->show = scst_acn_file_show;
+	attr->store = NULL;
+
+	retval = sysfs_create_file(acg->initiators_kobj, &attr->attr);
+	if (retval != 0) {
+		PRINT_ERROR("Unable to create acn '%s' for group '%s'",
+			acn->name, acg->acg_name);
+		kfree(attr->attr.name);
+		goto out_free;
+	}
+
+	acn->acn_attr = attr;
+
+out:
+	return retval;
+
+out_free:
+	kfree(attr);
+	goto out;
+}
+
+void scst_acn_sysfs_del(struct scst_acg *acg, struct scst_acn *acn,
+	bool reassign)
+{
+	if (acn->acn_attr != NULL) {
+		sysfs_remove_file(acg->initiators_kobj,
+			&acn->acn_attr->attr);
+		kfree(acn->acn_attr->attr.name);
+		kfree(acn->acn_attr);
+	}
+	scst_acg_remove_acn(acn);
+	if (reassign)
+		scst_check_reassign_sessions();
+	return;
+}
+
+static ssize_t scst_acn_file_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	return scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n",
+		attr->attr.name);
+}
+
+static ssize_t scst_acg_luns_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	struct scst_acg *acg;
+
+	acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+	res = __scst_luns_mgmt_store(acg, kobj, buf, count);
+	return res;
+}
+
+static ssize_t scst_acg_ini_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	static char *help = "Usage: echo \"add INITIATOR_NAME\" "
+					">mgmt\n"
+			    "       echo \"del INITIATOR_NAME\" "
+					">mgmt\n"
+			    "       echo \"move INITIATOR_NAME DEST_GROUP_NAME\" "
+					">mgmt\n"
+			    "       echo \"clear\" "
+					">mgmt\n";
+
+	return sprintf(buf, "%s", help);
+}
+
+static ssize_t scst_acg_ini_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res, action;
+	char *buffer, *p, *e = NULL;
+	char *name = NULL, *group = NULL;
+	struct scst_acg *acg = NULL, *acg_dest = NULL;
+	struct scst_tgt *tgt = NULL;
+	struct scst_acn *acn = NULL, *acn_tmp;
+
+#define SCST_ACG_ACTION_INI_ADD		1
+#define SCST_ACG_ACTION_INI_DEL		2
+#define SCST_ACG_ACTION_INI_CLEAR	3
+#define SCST_ACG_ACTION_INI_MOVE	4
+
+	acg = container_of(kobj->parent, struct scst_acg, acg_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+	p = buffer;
+
+	if (p[strlen(p) - 1] == '\n')
+		p[strlen(p) - 1] = '\0';
+
+	if (strncasecmp("add", p, 3) == 0) {
+		p += 3;
+		action = SCST_ACG_ACTION_INI_ADD;
+	} else if (strncasecmp("del", p, 3) == 0) {
+		p += 3;
+		action = SCST_ACG_ACTION_INI_DEL;
+	} else if (strncasecmp("clear", p, 5) == 0) {
+		p += 5;
+		action = SCST_ACG_ACTION_INI_CLEAR;
+	} else if (strncasecmp("move", p, 4) == 0) {
+		p += 4;
+		action = SCST_ACG_ACTION_INI_MOVE;
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	if (action != SCST_ACG_ACTION_INI_CLEAR) {
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out_free;
+		}
+	}
+
+	res = scst_suspend_activity(true);
+	if (res != 0)
+		goto out_free;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_free_resume;
+	}
+
+	if (action != SCST_ACG_ACTION_INI_CLEAR)
+		while (isspace(*p) && *p != '\0')
+			p++;
+
+	switch (action) {
+	case SCST_ACG_ACTION_INI_ADD:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		res = scst_acg_add_name(acg, name);
+		if (res != 0)
+			goto out_free_up;
+		break;
+	case SCST_ACG_ACTION_INI_DEL:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		acn = scst_acg_find_name(acg, name);
+		if (acn == NULL) {
+			PRINT_ERROR("Unable to find "
+				"initiator '%s' in group '%s'",
+				name, acg->acg_name);
+			res = -EINVAL;
+			goto out_free_up;
+		}
+		scst_acn_sysfs_del(acg, acn, true);
+		break;
+	case SCST_ACG_ACTION_INI_CLEAR:
+		list_for_each_entry_safe(acn, acn_tmp, &acg->acn_list,
+				acn_list_entry) {
+			scst_acn_sysfs_del(acg, acn, false);
+		}
+		scst_check_reassign_sessions();
+		break;
+	case SCST_ACG_ACTION_INI_MOVE:
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		if (*e == '\0') {
+			PRINT_ERROR("%s", "Too few parameters");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+		*e = '\0';
+		name = p;
+
+		if (name[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid initiator name");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		e++;
+		p = e;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		group = p;
+
+		if (group[0] == '\0') {
+			PRINT_ERROR("%s", "Invalid group name");
+			res = -EINVAL;
+			goto out_free_up;
+		}
+
+		TRACE_DBG("Move initiator '%s' to group '%s'",
+			name, group);
+
+		/*
+		 * Better to get tgt via the kobject hierarchy (tgt_kobj ->
+		 * tgt_ini_grp_kobj -> acg_kobj -> initiators_kobj) than to
+		 * keep a direct pointer to tgt in struct acg and then worry
+		 * about dereferencing it after the target has been destroyed.
+		 */
+		{
+			struct kobject *k;
+
+			/* acg_kobj */
+			k = kobj->parent;
+			if (k == NULL) {
+				res = -EINVAL;
+				goto out_free_up;
+			}
+			/* tgt_ini_grp_kobj */
+			k = k->parent;
+			if (k == NULL) {
+				res = -EINVAL;
+				goto out_free_up;
+			}
+			/* tgt_kobj */
+			k = k->parent;
+			if (k == NULL) {
+				res = -EINVAL;
+				goto out_free_up;
+			}
+
+			tgt = container_of(k, struct scst_tgt, tgt_kobj);
+		}
+
+		acn = scst_acg_find_name(acg, name);
+		if (acn == NULL) {
+			PRINT_ERROR("Unable to find "
+				"initiator '%s' in group '%s'",
+				name, acg->acg_name);
+			res = -EINVAL;
+			goto out_free_up;
+		}
+		acg_dest = scst_tgt_find_acg(tgt, group);
+		if (acg_dest == NULL) {
+			PRINT_ERROR("Unable to find group '%s' in target '%s'",
+				group, tgt->tgt_name);
+			res = -EINVAL;
+			goto out_free_up;
+		}
+		if (scst_acg_find_name(acg_dest, name) != NULL) {
+			PRINT_ERROR("Initiator '%s' already exists in group '%s'",
+				name, acg_dest->acg_name);
+			res = -EEXIST;
+			goto out_free_up;
+		}
+		scst_acn_sysfs_del(acg, acn, false);
+
+		res = scst_acg_add_name(acg_dest, name);
+		if (res != 0)
+			goto out_free_up;
+		break;
+	}
+
+	res = count;
+
+out_free_up:
+	mutex_unlock(&scst_mutex);
+
+out_free_resume:
+	scst_resume_activity();
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+#undef SCST_ACG_ACTION_INI_ADD
+#undef SCST_ACG_ACTION_INI_DEL
+#undef SCST_ACG_ACTION_INI_CLEAR
+#undef SCST_ACG_ACTION_INI_MOVE
+}
+
+/*
+ * SGV directory implementation
+ */
+
+static struct kobj_attribute sgv_stat_attr =
+	__ATTR(stats, S_IRUGO | S_IWUSR, sgv_sysfs_stat_show,
+		sgv_sysfs_stat_reset);
+
+static struct attribute *sgv_attrs[] = {
+	&sgv_stat_attr.attr,
+	NULL,
+};
+
+static void sgv_kobj_release(struct kobject *kobj)
+{
+	struct sgv_pool *pool;
+
+	pool = container_of(kobj, struct sgv_pool, sgv_kobj);
+
+	sgv_pool_destroy(pool);
+	return;
+}
+
+static struct kobj_type sgv_pool_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = sgv_kobj_release,
+	.default_attrs = sgv_attrs,
+};
+
+int scst_create_sgv_sysfs(struct sgv_pool *pool)
+{
+	int retval;
+
+	pool->sgv_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&pool->sgv_kobj, &sgv_pool_ktype,
+			scst_sgv_kobj, pool->name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add sgv pool %s to sysfs", pool->name);
+		goto out;
+	}
+
+out:
+	return retval;
+}
+
+/* pool can be dead upon exit from this function! */
+void scst_sgv_sysfs_put(struct sgv_pool *pool)
+{
+	if (pool->sgv_kobj_initialized) {
+		kobject_del(&pool->sgv_kobj);
+		kobject_put(&pool->sgv_kobj);
+	} else
+		sgv_pool_destroy(pool);
+	return;
+}
+
+static struct kobj_attribute sgv_global_stat_attr =
+	__ATTR(global_stats, S_IRUGO | S_IWUSR, sgv_sysfs_global_stat_show,
+		sgv_sysfs_global_stat_reset);
+
+static struct attribute *sgv_default_attrs[] = {
+	&sgv_global_stat_attr.attr,
+	NULL,
+};
+
+static struct kobj_type sgv_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_release,
+	.default_attrs = sgv_default_attrs,
+};
+
+/*
+ * SCST sysfs root directory implementation
+ */
+
+static ssize_t scst_threads_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int count;
+
+	count = sprintf(buf, "%d\n%s", scst_main_cmd_threads.nr_threads,
+		(scst_main_cmd_threads.nr_threads != scst_threads) ?
+			SCST_SYSFS_KEY_MARK "\n" : "");
+	return count;
+}
+
+static ssize_t scst_threads_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	long oldtn, delta;
+	unsigned long newtn;
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	oldtn = scst_main_cmd_threads.nr_threads;
+
+	res = strict_strtoul(buf, 0, &newtn);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtoul() for %s failed: %d", buf, res);
+		goto out_up;
+	}
+
+	if (newtn == 0 || newtn > INT_MAX) {
+		PRINT_ERROR("Illegal threads num value %lu", newtn);
+		res = -EINVAL;
+		goto out_up;
+	}
+
+	delta = (long)newtn - oldtn;
+	if (delta < 0) {
+		scst_del_threads(&scst_main_cmd_threads, -delta);
+	} else {
+		res = scst_add_threads(&scst_main_cmd_threads, NULL, NULL,
+			delta);
+		if (res != 0)
+			goto out_up;
+	}
+
+	PRINT_INFO("Changed cmd threads num: old %ld, new %lu", oldtn, newtn);
+
+	res = count;
+
+out_up:
+	mutex_unlock(&scst_mutex);
+
+out:
+	return res;
+}
+
+static ssize_t scst_setup_id_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int count;
+
+	count = sprintf(buf, "0x%x%s\n", scst_setup_id,
+		(scst_setup_id == 0) ? "" : SCST_SYSFS_KEY_MARK "\n");
+	return count;
+}
+
+static ssize_t scst_setup_id_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	unsigned long val;
+
+	res = strict_strtoul(buf, 0, &val);
+	if (res != 0) {
+		PRINT_ERROR("strict_strtoul() for %s failed: %d", buf, res);
+		goto out;
+	}
+
+	scst_setup_id = val;
+	PRINT_INFO("Changed scst_setup_id to %x", scst_setup_id);
+
+	res = count;
+
+out:
+	return res;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static void scst_read_trace_tbl(const struct scst_trace_log *tbl, char *buf,
+	unsigned long log_level, int *pos)
+{
+	const struct scst_trace_log *t = tbl;
+
+	if (t == NULL)
+		goto out;
+
+	while (t->token) {
+		if (log_level & t->val) {
+			*pos += sprintf(&buf[*pos], "%s%s",
+					(*pos == 0) ? "" : " | ",
+					t->token);
+		}
+		t++;
+	}
+out:
+	return;
+}
+
+static ssize_t scst_trace_level_show(const struct scst_trace_log *local_tbl,
+	unsigned long log_level, char *buf, const char *help)
+{
+	int pos = 0;
+
+	scst_read_trace_tbl(scst_trace_tbl, buf, log_level, &pos);
+	scst_read_trace_tbl(local_tbl, buf, log_level, &pos);
+
+	pos += sprintf(&buf[pos], "\n\n\nUsage:\n"
+		"	echo \"all|none|default\" >trace_level\n"
+		"	echo \"value DEC|0xHEX|0OCT\" >trace_level\n"
+		"	echo \"add|del TOKEN\" >trace_level\n"
+		"\nwhere TOKEN is one of [debug, function, line, pid,\n"
+		"		       buff, mem, sg, out_of_mem,\n"
+		"		       special, scsi, mgmt, minor,\n"
+		"		       mgmt_dbg, scsi_serializing,\n"
+		"		       retry, recv_bot, send_bot, recv_top,\n"
+		"		       send_top%s]", help != NULL ? help : "");
+
+	return pos;
+}
+
+static ssize_t scst_main_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	return scst_trace_level_show(scst_local_trace_tbl, trace_flag,
+			buf, NULL);
+}
+
+static int scst_write_trace(const char *buf, size_t length,
+	unsigned long *log_level, unsigned long default_level,
+	const char *name, const struct scst_trace_log *tbl)
+{
+	int res = length;
+	int action;
+	unsigned long level = 0, oldlevel;
+	char *buffer, *p, *e;
+	const struct scst_trace_log *t;
+
+#define SCST_TRACE_ACTION_ALL		1
+#define SCST_TRACE_ACTION_NONE		2
+#define SCST_TRACE_ACTION_DEFAULT	3
+#define SCST_TRACE_ACTION_ADD		4
+#define SCST_TRACE_ACTION_DEL		5
+#define SCST_TRACE_ACTION_VALUE		6
+
+	if ((buf == NULL) || (length == 0)) {
+		res = -EINVAL;
+		goto out;
+	}
+
+	buffer = kmalloc(length+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		PRINT_ERROR("Unable to alloc intermediate buffer (size %zu)",
+			length+1);
+		res = -ENOMEM;
+		goto out;
+	}
+	memcpy(buffer, buf, length);
+	buffer[length] = '\0';
+
+	p = buffer;
+	if (!strncasecmp("all", p, 3)) {
+		action = SCST_TRACE_ACTION_ALL;
+	} else if (!strncasecmp("none", p, 4) || !strncasecmp("null", p, 4)) {
+		action = SCST_TRACE_ACTION_NONE;
+	} else if (!strncasecmp("default", p, 7)) {
+		action = SCST_TRACE_ACTION_DEFAULT;
+	} else if (!strncasecmp("add", p, 3)) {
+		p += 3;
+		action = SCST_TRACE_ACTION_ADD;
+	} else if (!strncasecmp("del", p, 3)) {
+		p += 3;
+		action = SCST_TRACE_ACTION_DEL;
+	} else if (!strncasecmp("value", p, 5)) {
+		p += 5;
+		action = SCST_TRACE_ACTION_VALUE;
+	} else {
+		if (p[strlen(p) - 1] == '\n')
+			p[strlen(p) - 1] = '\0';
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ADD:
+	case SCST_TRACE_ACTION_DEL:
+	case SCST_TRACE_ACTION_VALUE:
+		if (!isspace(*p)) {
+			PRINT_ERROR("%s", "Syntax error");
+			res = -EINVAL;
+			goto out_free;
+		}
+	}
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ALL:
+		level = TRACE_ALL;
+		break;
+	case SCST_TRACE_ACTION_DEFAULT:
+		level = default_level;
+		break;
+	case SCST_TRACE_ACTION_NONE:
+		level = TRACE_NULL;
+		break;
+	case SCST_TRACE_ACTION_ADD:
+	case SCST_TRACE_ACTION_DEL:
+		while (isspace(*p) && *p != '\0')
+			p++;
+		e = p;
+		while (!isspace(*e) && *e != '\0')
+			e++;
+		*e = '\0';
+		if (tbl) {
+			t = tbl;
+			while (t->token) {
+				if (!strcasecmp(p, t->token)) {
+					level = t->val;
+					break;
+				}
+				t++;
+			}
+		}
+		if (level == 0) {
+			t = scst_trace_tbl;
+			while (t->token) {
+				if (!strcasecmp(p, t->token)) {
+					level = t->val;
+					break;
+				}
+				t++;
+			}
+		}
+		if (level == 0) {
+			PRINT_ERROR("Unknown token \"%s\"", p);
+			res = -EINVAL;
+			goto out_free;
+		}
+		break;
+	case SCST_TRACE_ACTION_VALUE:
+		while (isspace(*p) && *p != '\0')
+			p++;
+		res = strict_strtoul(p, 0, &level);
+		if (res != 0) {
+			PRINT_ERROR("Invalid trace value \"%s\"", p);
+			res = -EINVAL;
+			goto out_free;
+		}
+		break;
+	}
+
+	oldlevel = *log_level;
+
+	switch (action) {
+	case SCST_TRACE_ACTION_ADD:
+		*log_level |= level;
+		break;
+	case SCST_TRACE_ACTION_DEL:
+		*log_level &= ~level;
+		break;
+	default:
+		*log_level = level;
+		break;
+	}
+
+	PRINT_INFO("Changed trace level for \"%s\": old 0x%08lx, new 0x%08lx",
+		name, oldlevel, *log_level);
+
+out_free:
+	kfree(buffer);
+out:
+	return res;
+
+#undef SCST_TRACE_ACTION_ALL
+#undef SCST_TRACE_ACTION_NONE
+#undef SCST_TRACE_ACTION_DEFAULT
+#undef SCST_TRACE_ACTION_ADD
+#undef SCST_TRACE_ACTION_DEL
+#undef SCST_TRACE_ACTION_VALUE
+}
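+
+/*
+ * Example invocations (against the main trace_level attribute below),
+ * following the usage text printed by the show handler:
+ *
+ *	echo "add scsi" >trace_level
+ *	echo "del mgmt" >trace_level
+ *	echo "value 0x58" >trace_level
+ *	echo "default" >trace_level
+ */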
+
+static ssize_t scst_main_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, &trace_flag,
+		SCST_DEFAULT_LOG_FLAGS, "scst", scst_local_trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+#endif /* defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_version_show(struct kobject *kobj,
+				 struct kobj_attribute *attr,
+				 char *buf)
+{
+	sprintf(buf, "%s\n", SCST_VERSION_STRING);
+
+#ifdef CONFIG_SCST_STRICT_SERIALIZING
+	strcat(buf, "STRICT_SERIALIZING\n");
+#endif
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	strcat(buf, "EXTRACHECKS\n");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+	strcat(buf, "TRACING\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+	strcat(buf, "DEBUG\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_TM
+	strcat(buf, "DEBUG_TM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_RETRY
+	strcat(buf, "DEBUG_RETRY\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_OOM
+	strcat(buf, "DEBUG_OOM\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG_SN
+	strcat(buf, "DEBUG_SN\n");
+#endif
+
+#ifdef CONFIG_SCST_USE_EXPECTED_VALUES
+	strcat(buf, "USE_EXPECTED_VALUES\n");
+#endif
+
+#ifdef CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ
+	strcat(buf, "ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ\n");
+#endif
+
+#ifdef CONFIG_SCST_STRICT_SECURITY
+	strcat(buf, "SCST_STRICT_SECURITY\n");
+#endif
+	return strlen(buf);
+}
+
+static struct kobj_attribute scst_threads_attr =
+	__ATTR(threads, S_IRUGO | S_IWUSR, scst_threads_show,
+	       scst_threads_store);
+
+static struct kobj_attribute scst_setup_id_attr =
+	__ATTR(setup_id, S_IRUGO | S_IWUSR, scst_setup_id_show,
+	       scst_setup_id_store);
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+static struct kobj_attribute scst_trace_level_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR, scst_main_trace_level_show,
+	       scst_main_trace_level_store);
+#endif
+
+static struct kobj_attribute scst_version_attr =
+	__ATTR(version, S_IRUGO, scst_version_show, NULL);
+
+static struct attribute *scst_sysfs_root_default_attrs[] = {
+	&scst_threads_attr.attr,
+	&scst_setup_id_attr.attr,
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	&scst_trace_level_attr.attr,
+#endif
+	&scst_version_attr.attr,
+	NULL,
+};
+
+static void scst_sysfs_root_release(struct kobject *kobj)
+{
+	complete_all(&scst_sysfs_root_release_completion);
+}
+
+static ssize_t scst_show(struct kobject *kobj, struct attribute *attr,
+			 char *buf)
+{
+	struct kobj_attribute *kobj_attr;
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	return kobj_attr->show(kobj, kobj_attr, buf);
+}
+
+static ssize_t scst_store(struct kobject *kobj, struct attribute *attr,
+			  const char *buf, size_t count)
+{
+	struct kobj_attribute *kobj_attr;
+	kobj_attr = container_of(attr, struct kobj_attribute, attr);
+
+	return kobj_attr->store(kobj, kobj_attr, buf, count);
+}
+
+struct sysfs_ops scst_sysfs_ops = {
+	.show = scst_show,
+	.store = scst_store,
+};
+
+static struct kobj_type scst_sysfs_root_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_sysfs_root_release,
+	.default_attrs = scst_sysfs_root_default_attrs,
+};
+
+static void scst_devt_free(struct kobject *kobj)
+{
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	complete_all(&devt->devt_kobj_release_compl);
+
+	scst_devt_cleanup(devt);
+	return;
+}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+static ssize_t scst_devt_trace_level_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	return scst_trace_level_show(devt->trace_tbl,
+		devt->trace_flags ? *devt->trace_flags : 0, buf,
+		devt->trace_tbl_help);
+}
+
+static ssize_t scst_devt_trace_level_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	if (mutex_lock_interruptible(&scst_log_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	res = scst_write_trace(buf, count, devt->trace_flags,
+		devt->default_trace_flags, devt->name, devt->trace_tbl);
+
+	mutex_unlock(&scst_log_mutex);
+
+out:
+	return res;
+}
+
+static struct kobj_attribute devt_trace_attr =
+	__ATTR(trace_level, S_IRUGO | S_IWUSR,
+	       scst_devt_trace_level_show, scst_devt_trace_level_store);
+
+#endif /* #if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING) */
+
+static ssize_t scst_devt_type_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	pos = sprintf(buf, "%d - %s\n", devt->type,
+		(unsigned)devt->type >= ARRAY_SIZE(scst_dev_handler_types) ?
+			"unknown" : scst_dev_handler_types[devt->type]);
+
+	return pos;
+}
+
+static struct kobj_attribute scst_devt_type_attr =
+	__ATTR(type, S_IRUGO, scst_devt_type_show, NULL);
+
+static struct attribute *scst_devt_default_attrs[] = {
+	&scst_devt_type_attr.attr,
+	NULL,
+};
+
+static struct kobj_type scst_devt_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = scst_devt_free,
+	.default_attrs = scst_devt_default_attrs,
+};
+
+static ssize_t scst_devt_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"add_device device_name [parameters]\" "
+				">mgmt\n"
+		     "       echo \"del_device device_name\" >mgmt\n"
+		     "%s"
+		     "\n"
+		     "where parameters are one or more "
+		     "param_name=value pairs separated by ';'\n"
+		     "%s%s";
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	if (devt->add_device_parameters_help != NULL)
+		return sprintf(buf, help,
+			(devt->mgmt_cmd_help) ? devt->mgmt_cmd_help : "",
+			"\nThe following parameters available: ",
+			devt->add_device_parameters_help);
+	else
+		return sprintf(buf, help,
+			(devt->mgmt_cmd_help) ? devt->mgmt_cmd_help : "",
+			"", "");
+}
+
+static ssize_t scst_devt_mgmt_store(struct kobject *kobj,
+				    struct kobj_attribute *attr,
+				    const char *buf, size_t count)
+{
+	int res;
+	char *buffer, *p, *pp, *device_name;
+	struct scst_dev_type *devt;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	p = scst_get_next_lexem(&pp);
+
+	if (strcasecmp("add_device", p) == 0) {
+		device_name = scst_get_next_lexem(&pp);
+		if (*device_name == '\0') {
+			PRINT_ERROR("%s", "Device name required");
+			res = -EINVAL;
+			goto out_free;
+		}
+		res = devt->add_device(device_name, pp);
+	} else if (strcasecmp("del_device", p) == 0) {
+		device_name = scst_get_next_lexem(&pp);
+		if (*device_name == '\0') {
+			PRINT_ERROR("%s", "Device name required");
+			res = -EINVAL;
+			goto out_free;
+		}
+
+		p = scst_get_next_lexem(&pp);
+		if (*p != '\0')
+			goto out_syntax_err;
+
+		res = devt->del_device(device_name);
+	} else if (devt->mgmt_cmd != NULL) {
+		scst_restore_token_str(p, pp);
+		res = devt->mgmt_cmd(buffer);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", p);
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	if (res == 0)
+		res = count;
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out_free;
+}
+
+static struct kobj_attribute scst_devt_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_mgmt_show,
+	       scst_devt_mgmt_store);
+
+static ssize_t scst_devt_pass_through_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "1");
+}
+
+static struct kobj_attribute scst_devt_pass_through =
+	__ATTR(pass_through, S_IRUGO, scst_devt_pass_through_show, NULL);
+
+static ssize_t scst_devt_pass_through_mgmt_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	char *help = "Usage: echo \"assign H:C:I:L\" >mgmt\n"
+		     "       echo \"unassign H:C:I:L\" >mgmt\n";
+	return sprintf(buf, "%s", help);
+}
+
+static ssize_t scst_devt_pass_through_mgmt_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	char *buffer, *p, *pp, *action;
+	struct scst_dev_type *devt;
+	unsigned long host, channel, id, lun;
+	struct scst_device *d, *dev = NULL;
+
+	devt = container_of(kobj, struct scst_dev_type, devt_kobj);
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	pp = buffer;
+	if (pp[strlen(pp) - 1] == '\n')
+		pp[strlen(pp) - 1] = '\0';
+
+	action = scst_get_next_lexem(&pp);
+	p = scst_get_next_lexem(&pp);
+	if (*p == '\0') {
+		PRINT_ERROR("%s", "Device required");
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	if (*scst_get_next_lexem(&pp) != '\0') {
+		PRINT_ERROR("%s", "Too many parameters");
+		res = -EINVAL;
+		goto out_free;
+	}
+
+	host = simple_strtoul(p, &p, 0);
+	if ((host == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	channel = simple_strtoul(p, &p, 0);
+	if ((channel == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	id = simple_strtoul(p, &p, 0);
+	if ((id == ULONG_MAX) || (*p != ':'))
+		goto out_syntax_err;
+	p++;
+	lun = simple_strtoul(p, &p, 0);
+	if (lun == ULONG_MAX)
+		goto out_syntax_err;
+
+	TRACE_DBG("Dev %ld:%ld:%ld:%ld", host, channel, id, lun);
+
+	if (mutex_lock_interruptible(&scst_mutex) != 0) {
+		res = -EINTR;
+		goto out_free;
+	}
+
+	list_for_each_entry(d, &scst_dev_list, dev_list_entry) {
+		if ((d->virt_id == 0) &&
+		    d->scsi_dev->host->host_no == host &&
+		    d->scsi_dev->channel == channel &&
+		    d->scsi_dev->id == id &&
+		    d->scsi_dev->lun == lun) {
+			dev = d;
+			TRACE_DBG("Dev %p (%ld:%ld:%ld:%ld) found",
+				  dev, host, channel, id, lun);
+			break;
+		}
+	}
+	if (dev == NULL) {
+		PRINT_ERROR("Device %ld:%ld:%ld:%ld not found",
+			       host, channel, id, lun);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (dev->scsi_dev->type != devt->type) {
+		PRINT_ERROR("Type %d of device %s differs from type "
+			"%d of dev handler %s", dev->scsi_dev->type,
+			dev->virt_name, devt->type, devt->name);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (strcasecmp("assign", action) == 0)
+		res = scst_assign_dev_handler(dev, devt);
+	else if (strcasecmp("deassign", action) == 0) {
+		if (dev->handler != devt) {
+			PRINT_ERROR("Device %s is not assigned to handler %s",
+				dev->virt_name, devt->name);
+			res = -EINVAL;
+			goto out_unlock;
+		}
+		res = scst_assign_dev_handler(dev, &scst_null_devtype);
+	} else {
+		PRINT_ERROR("Unknown action \"%s\"", action);
+		res = -EINVAL;
+		goto out_unlock;
+	}
+
+	if (res == 0)
+		res = count;
+
+out_unlock:
+	mutex_unlock(&scst_mutex);
+
+out_free:
+	kfree(buffer);
+
+out:
+	return res;
+
+out_syntax_err:
+	PRINT_ERROR("Syntax error on \"%s\"", p);
+	res = -EINVAL;
+	goto out_free;
+}
+
+static struct kobj_attribute scst_devt_pass_through_mgmt =
+	__ATTR(mgmt, S_IRUGO | S_IWUSR, scst_devt_pass_through_mgmt_show,
+	       scst_devt_pass_through_mgmt_store);
+
+int scst_create_devt_sysfs(struct scst_dev_type *devt)
+{
+	int retval;
+	struct kobject *parent;
+	const struct attribute **pattr;
+
+	init_completion(&devt->devt_kobj_release_compl);
+
+	if (devt->parent != NULL)
+		parent = &devt->parent->devt_kobj;
+	else
+		parent = scst_handlers_kobj;
+
+	devt->devt_kobj_initialized = 1;
+
+	retval = kobject_init_and_add(&devt->devt_kobj, &scst_devt_ktype,
+			parent, devt->name);
+	if (retval != 0) {
+		PRINT_ERROR("Can't add devt %s to sysfs", devt->name);
+		goto out;
+	}
+
+	/*
+	 * In case of errors there's no need for additional cleanup, because
+	 * it will be done by the _put() function called by the caller.
+	 */
+
+	if (devt->add_device != NULL) {
+		retval = sysfs_create_file(&devt->devt_kobj,
+				&scst_devt_mgmt.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add mgmt attr for dev handler %s",
+				devt->name);
+			goto out;
+		}
+	} else if (devt->pass_through) {
+		retval = sysfs_create_file(&devt->devt_kobj,
+				&scst_devt_pass_through_mgmt.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add mgmt attr for dev handler %s",
+				devt->name);
+			goto out;
+		}
+
+		retval = sysfs_create_file(&devt->devt_kobj,
+				&scst_devt_pass_through.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add pass_through attr for dev "
+				"handler %s", devt->name);
+			goto out;
+		}
+	}
+
+	pattr = devt->devt_attrs;
+	if (pattr != NULL) {
+		while (*pattr != NULL) {
+			retval = sysfs_create_file(&devt->devt_kobj, *pattr);
+			if (retval != 0) {
+				PRINT_ERROR("Can't add devt attr %s for dev "
+					"handler %s", (*pattr)->name,
+					devt->name);
+				goto out;
+			}
+			pattr++;
+		}
+	}
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	if (devt->trace_flags != NULL) {
+		retval = sysfs_create_file(&devt->devt_kobj,
+				&devt_trace_attr.attr);
+		if (retval != 0) {
+			PRINT_ERROR("Can't add devt trace_flag for dev "
+				"handler %s", devt->name);
+			goto out;
+		}
+	}
+#endif
+
+out:
+	return retval;
+}
+
+void scst_devt_sysfs_put(struct scst_dev_type *devt)
+{
+
+	if (devt->devt_kobj_initialized) {
+		int rc;
+
+		kobject_del(&devt->devt_kobj);
+		kobject_put(&devt->devt_kobj);
+
+		rc = wait_for_completion_timeout(&devt->devt_kobj_release_compl, HZ);
+		if (rc == 0) {
+			PRINT_INFO("Waiting for releasing sysfs entry "
+				"for dev handler template %s...", devt->name);
+			wait_for_completion(&devt->devt_kobj_release_compl);
+			PRINT_INFO("Done waiting for releasing sysfs entry "
+				"for dev handler template %s", devt->name);
+		}
+	} else
+		scst_devt_cleanup(devt);
+	return;
+}
+
+static DEFINE_MUTEX(scst_sysfs_user_info_mutex);
+
+/* All protected by scst_sysfs_user_info_mutex */
+static LIST_HEAD(scst_sysfs_user_info_list);
+static uint32_t scst_sysfs_info_cur_cookie;
+
+/* scst_sysfs_user_info_mutex supposed to be held */
+static struct scst_sysfs_user_info *scst_sysfs_user_find_info(uint32_t cookie)
+{
+	struct scst_sysfs_user_info *info, *res = NULL;
+
+	list_for_each_entry(info, &scst_sysfs_user_info_list,
+			info_list_entry) {
+		if (info->info_cookie == cookie) {
+			res = info;
+			break;
+		}
+	}
+	return res;
+}
+
+/**
+ * scst_sysfs_user_get_info() - get user_info
+ *
+ * Finds the user_info based on the cookie and marks it as having received
+ * the reply by setting its flag info_being_executed.
+ *
+ * Returns found entry or NULL.
+ */
+struct scst_sysfs_user_info *scst_sysfs_user_get_info(uint32_t cookie)
+{
+	struct scst_sysfs_user_info *res = NULL;
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	res = scst_sysfs_user_find_info(cookie);
+	if (res != NULL) {
+		if (!res->info_being_executed)
+			res->info_being_executed = 1;
+	}
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_get_info);
+
+/**
+ ** Helper functionality for target drivers and dev handlers to send
+ ** events to user space and wait for their completion in a safe
+ ** manner. See iscsi-scst or scst_user for examples of how to use it.
+ **/
+
+/**
+ * scst_sysfs_user_add_info() - create and add user_info in the global list
+ *
+ * Creates an info structure and adds it to the global info list.
+ * Returns 0 and sets *out_info on success, an error code otherwise.
+ */
+int scst_sysfs_user_add_info(struct scst_sysfs_user_info **out_info)
+{
+	int res = 0;
+	struct scst_sysfs_user_info *info;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL) {
+		PRINT_ERROR("Unable to allocate sysfs user info (size %zd)",
+			sizeof(*info));
+		res = -ENOMEM;
+		goto out;
+	}
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	while ((info->info_cookie == 0) ||
+	       (scst_sysfs_user_find_info(info->info_cookie) != NULL))
+		info->info_cookie = scst_sysfs_info_cur_cookie++;
+
+	init_completion(&info->info_completion);
+
+	list_add_tail(&info->info_list_entry, &scst_sysfs_user_info_list);
+	info->info_in_list = 1;
+
+	*out_info = info;
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_add_info);
+
+/**
+ * scst_sysfs_user_del_info() - deletes and frees user_info
+ */
+void scst_sysfs_user_del_info(struct scst_sysfs_user_info *info)
+{
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	if (info->info_in_list)
+		list_del(&info->info_list_entry);
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+
+	kfree(info);
+	return;
+}
+EXPORT_SYMBOL_GPL(scst_sysfs_user_del_info);
+
+/*
+ * Returns true if the reply was received and is being processed by another
+ * part of the kernel, false otherwise. Also removes the user_info from the
+ * list, so a late reply will find that user space missed the timeout.
+ */
+static bool scst_sysfs_user_info_executing(struct scst_sysfs_user_info *info)
+{
+	bool res;
+
+	mutex_lock(&scst_sysfs_user_info_mutex);
+
+	res = info->info_being_executed;
+
+	if (info->info_in_list) {
+		list_del(&info->info_list_entry);
+		info->info_in_list = 0;
+	}
+
+	mutex_unlock(&scst_sysfs_user_info_mutex);
+	return res;
+}
+
+/**
+ * scst_wait_info_completion() - wait for a user space event's completion
+ *
+ * Waits at most timeout jiffies for the info request to be completed by
+ * user space. If the reply is received before the timeout and is being
+ * processed by another part of the kernel, i.e.
+ * scst_sysfs_user_info_executing() returned true, waits for it to
+ * complete indefinitely.
+ *
+ * Returns status of the request completion.
+ */
+int scst_wait_info_completion(struct scst_sysfs_user_info *info,
+	unsigned long timeout)
+{
+	int res, rc;
+
+	TRACE_DBG("Waiting for info %p completion", info);
+
+	while (1) {
+		rc = wait_for_completion_interruptible_timeout(
+			&info->info_completion, timeout);
+		if (rc > 0) {
+			TRACE_DBG("Waiting for info %p finished with %d",
+				info, rc);
+			break;
+		} else if (rc == 0) {
+			if (!scst_sysfs_user_info_executing(info)) {
+				PRINT_ERROR("Timeout waiting for user "
+					"space event %p", info);
+				res = -EBUSY;
+				goto out;
+			} else {
+				/* Req is being executed in the kernel */
+				TRACE_DBG("Keep waiting for info %p completion",
+					info);
+				wait_for_completion(&info->info_completion);
+				break;
+			}
+		} else if (rc != -ERESTARTSYS) {
+			res = rc;
+			PRINT_ERROR("wait_for_completion() failed: %d", res);
+			goto out;
+		} else {
+			TRACE_DBG("Waiting for info %p finished with %d, "
+				"retrying", info, rc);
+		}
+	}
+
+	TRACE_DBG("info %p, status %d", info, info->info_status);
+	res = info->info_status;
+
+out:
+	return res;
+}
+EXPORT_SYMBOL_GPL(scst_wait_info_completion);
+
+int __init scst_sysfs_init(void)
+{
+	int retval = 0;
+
+	retval = kobject_init_and_add(&scst_sysfs_root_kobj,
+			&scst_sysfs_root_ktype, kernel_kobj, "%s", "scst_tgt");
+	if (retval != 0)
+		goto sysfs_root_add_error;
+
+	scst_targets_kobj = kobject_create_and_add("targets",
+				&scst_sysfs_root_kobj);
+	if (scst_targets_kobj == NULL)
+		goto targets_kobj_error;
+
+	scst_devices_kobj = kobject_create_and_add("devices",
+				&scst_sysfs_root_kobj);
+	if (scst_devices_kobj == NULL)
+		goto devices_kobj_error;
+
+	scst_sgv_kobj = kzalloc(sizeof(*scst_sgv_kobj), GFP_KERNEL);
+	if (scst_sgv_kobj == NULL)
+		goto sgv_kobj_error;
+
+	retval = kobject_init_and_add(scst_sgv_kobj, &sgv_ktype,
+			&scst_sysfs_root_kobj, "%s", "sgv");
+	if (retval != 0)
+		goto sgv_kobj_add_error;
+
+	scst_handlers_kobj = kobject_create_and_add("handlers",
+					&scst_sysfs_root_kobj);
+	if (scst_handlers_kobj == NULL)
+		goto handlers_kobj_error;
+
+out:
+	return retval;
+
+handlers_kobj_error:
+	kobject_del(scst_sgv_kobj);
+
+sgv_kobj_add_error:
+	kobject_put(scst_sgv_kobj);
+
+sgv_kobj_error:
+	kobject_del(scst_devices_kobj);
+	kobject_put(scst_devices_kobj);
+
+devices_kobj_error:
+	kobject_del(scst_targets_kobj);
+	kobject_put(scst_targets_kobj);
+
+targets_kobj_error:
+	kobject_del(&scst_sysfs_root_kobj);
+
+sysfs_root_add_error:
+	kobject_put(&scst_sysfs_root_kobj);
+
+	if (retval == 0)
+		retval = -EINVAL;
+	goto out;
+}
+
+void scst_sysfs_cleanup(void)
+{
+
+	PRINT_INFO("%s", "Exiting SCST sysfs hierarchy...");
+
+	kobject_del(scst_sgv_kobj);
+	kobject_put(scst_sgv_kobj);
+
+	kobject_del(scst_devices_kobj);
+	kobject_put(scst_devices_kobj);
+
+	kobject_del(scst_targets_kobj);
+	kobject_put(scst_targets_kobj);
+
+	kobject_del(scst_handlers_kobj);
+	kobject_put(scst_handlers_kobj);
+
+	kobject_del(&scst_sysfs_root_kobj);
+	kobject_put(&scst_sysfs_root_kobj);
+
+	wait_for_completion(&scst_sysfs_root_release_completion);
+	/*
+	 * There is a race when a reschedule in release() happens just after
+	 * it calls complete(), so if we return and the scst module is
+	 * unloaded immediately, an oops will happen there. So let's give it
+	 * a chance to finish gracefully. Unfortunately, the current kobject
+	 * implementation doesn't allow a better way to handle this.
+	 */
+	msleep(3000);
+
+	PRINT_INFO("%s", "Exiting SCST sysfs hierarchy done");
+	return;
+}
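
For illustration, here is a minimal sketch (not part of this patch) of how
a target driver or dev handler would typically use the user space event
helpers above; send_event_to_user() is a hypothetical transport-specific
function, and the 5*HZ timeout is arbitrary:

/* Assumes <scst/scst.h> or equivalent is included */
static int example_sync_user_event(void)
{
	struct scst_sysfs_user_info *info;
	int res;

	res = scst_sysfs_user_add_info(&info);
	if (res != 0)
		return res;

	/*
	 * Pass info->info_cookie to user space together with the event.
	 * send_event_to_user() is assumed here, it is not part of SCST.
	 */
	res = send_event_to_user(info->info_cookie);
	if (res != 0)
		goto out_free;

	/*
	 * User space replies via a path that calls
	 * scst_sysfs_user_get_info(cookie), fills info->info_status and
	 * completes info->info_completion.
	 */
	res = scst_wait_info_completion(info, 5 * HZ);

out_free:
	scst_sysfs_user_del_info(info);
	return res;
}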


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 9/12/1/5] SCST debugging support
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (7 preceding siblings ...)
  2010-04-13 13:06   ` [PATCH][RFC 8/12/1/5] SCST sysfs interface Vladislav Bolkhovitin
@ 2010-04-13 13:06   ` Vladislav Bolkhovitin
  2010-04-13 13:06   ` [PATCH][RFC 10/12/1/5] SCST external modules support Vladislav Bolkhovitin
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains SCST debugging support routines.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 drivers/scst/scst_debug.c |  136 ++++++++++++++++++++++
 include/scst/scst_debug.h |  276 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 412 insertions(+)

diff -uprN orig/linux-2.6.33/include/scst/scst_debug.h linux-2.6.33/include/scst/scst_debug.h
--- orig/linux-2.6.33/include/scst/scst_debug.h
+++ linux-2.6.33/include/scst/scst_debug.h
@@ -0,0 +1,276 @@
+/*
+ *  include/scst/scst_debug.h
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Contains macros for execution tracing and error reporting
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __SCST_DEBUG_H
+#define __SCST_DEBUG_H
+
+#include <generated/autoconf.h>	/* for CONFIG_* */
+
+#include <linux/bug.h>		/* for WARN_ON_ONCE */
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+#define EXTRACHECKS_BUG_ON(a)		BUG_ON(a)
+#define EXTRACHECKS_WARN_ON(a)		WARN_ON(a)
+#define EXTRACHECKS_WARN_ON_ONCE(a)	WARN_ON_ONCE(a)
+#else
+#define EXTRACHECKS_BUG_ON(a)		do { } while (0)
+#define EXTRACHECKS_WARN_ON(a)		do { } while (0)
+#define EXTRACHECKS_WARN_ON_ONCE(a)	do { } while (0)
+#endif
+
+#define TRACE_NULL           0x00000000
+#define TRACE_DEBUG          0x00000001
+#define TRACE_FUNCTION       0x00000002
+#define TRACE_LINE           0x00000004
+#define TRACE_PID            0x00000008
+#define TRACE_BUFF           0x00000020
+#define TRACE_MEMORY         0x00000040
+#define TRACE_SG_OP          0x00000080
+#define TRACE_OUT_OF_MEM     0x00000100
+#define TRACE_MINOR          0x00000200 /* less important events */
+#define TRACE_MGMT           0x00000400
+#define TRACE_MGMT_DEBUG     0x00000800
+#define TRACE_SCSI           0x00001000
+#define TRACE_SPECIAL        0x00002000 /* filtering debug, etc */
+#define TRACE_FLOW_CONTROL   0x00004000 /* flow control in action */
+#define TRACE_ALL            0xffffffff
+/* Flags 0xXXXX0000 are local for users */
+
+#define TRACE_MINOR_AND_MGMT_DBG	(TRACE_MINOR|TRACE_MGMT_DEBUG)
+
+#ifndef KERN_CONT
+#define KERN_CONT       ""
+#endif
+
+/*
+ * Note: in the next two printk() statements the KERN_CONT macro is only
+ * present to suppress a checkpatch warning (KERN_CONT is defined as "").
+ */
+#define PRINT(log_flag, format, args...)  \
+		printk(log_flag format "\n", ## args)
+#define PRINTN(log_flag, format, args...) \
+		printk(log_flag format, ## args)
+
+#ifdef LOG_PREFIX
+#define __LOG_PREFIX	LOG_PREFIX
+#else
+#define __LOG_PREFIX	NULL
+#endif
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+#ifndef CONFIG_SCST_DEBUG
+#define ___unlikely(a)		(a)
+#else
+#define ___unlikely(a)		unlikely(a)
+#endif
+
+/*
+ * We don't print a prefix for debug traces in order not to put additional
+ * pressure on the logging system in case of a lot of logging.
+ */
+
+extern int debug_print_prefix(unsigned long trace_flag,
+	const char *prefix, const char *func, int line);
+extern void debug_print_buffer(const void *data, int len);
+
+#define TRACE(trace, format, args...)					\
+do {									\
+	if (___unlikely(trace_flag & (trace))) {			\
+		debug_print_prefix(trace_flag, __LOG_PREFIX,		\
+				       __func__, __LINE__);		\
+		PRINT(KERN_CONT, format, args);				\
+	}								\
+} while (0)
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define PRINT_BUFFER(message, buff, len)                            \
+do {                                                                \
+	PRINT(KERN_INFO, "%s:%s:", __func__, message);		    \
+	debug_print_buffer(buff, len);				    \
+} while (0)
+
+#else
+
+#define PRINT_BUFFER(message, buff, len)                            \
+do {                                                                \
+	PRINT(KERN_INFO, "%s:", message);			    \
+	debug_print_buffer(buff, len);				    \
+} while (0)
+
+#endif
+
+#define PRINT_BUFF_FLAG(flag, message, buff, len)			\
+do {									\
+	if (___unlikely(trace_flag & (flag))) {				\
+		debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+		PRINT(KERN_CONT, "%s:", message);			\
+		debug_print_buffer(buff, len);				\
+	}								\
+} while (0)
+
+#else  /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
+
+#define TRACE(trace, args...) do {} while (0)
+#define PRINT_BUFFER(message, buff, len) do {} while (0)
+#define PRINT_BUFF_FLAG(flag, message, buff, len) do {} while (0)
+
+#endif /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
+
+#ifdef CONFIG_SCST_DEBUG
+
+#define TRACE_DBG_FLAG(trace, format, args...)				\
+do {									\
+	if (trace_flag & (trace)) {					\
+		debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+		PRINT(KERN_CONT, format, args);				\
+	}								\
+} while (0)
+
+#define TRACE_MEM(args...)		TRACE_DBG_FLAG(TRACE_MEMORY, args)
+#define TRACE_SG(args...)		TRACE_DBG_FLAG(TRACE_SG_OP, args)
+#define TRACE_DBG(args...)		TRACE_DBG_FLAG(TRACE_DEBUG, args)
+#define TRACE_DBG_SPECIAL(args...)	TRACE_DBG_FLAG(TRACE_DEBUG|TRACE_SPECIAL, args)
+#define TRACE_MGMT_DBG(args...)		TRACE_DBG_FLAG(TRACE_MGMT_DEBUG, args)
+#define TRACE_MGMT_DBG_SPECIAL(args...)	\
+		TRACE_DBG_FLAG(TRACE_MGMT_DEBUG|TRACE_SPECIAL, args)
+
+#define TRACE_BUFFER(message, buff, len)				\
+do {									\
+	if (trace_flag & TRACE_BUFF) {					\
+		debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+		PRINT(KERN_CONT, "%s:", message);			\
+		debug_print_buffer(buff, len);				\
+	}								\
+} while (0)
+
+#define TRACE_BUFF_FLAG(flag, message, buff, len)			\
+do {									\
+	if (trace_flag & (flag)) {					\
+		debug_print_prefix(trace_flag, NULL, __func__, __LINE__);\
+		PRINT(KERN_CONT, "%s:", message);			\
+		debug_print_buffer(buff, len);				\
+	}								\
+} while (0)
+
+#define PRINT_LOG_FLAG(log_flag, format, args...)			\
+do {									\
+	debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+	PRINT(KERN_CONT, format, args);					\
+} while (0)
+
+#define PRINT_WARNING(format, args...)					\
+do {									\
+	debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+	PRINT(KERN_CONT, "***WARNING***: " format, args);		\
+} while (0)
+
+#define PRINT_ERROR(format, args...)					\
+do {									\
+	debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+	PRINT(KERN_CONT, "***ERROR***: " format, args);			\
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...)				\
+do {									\
+	debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+	PRINT(KERN_CONT, "***CRITICAL ERROR***: " format, args);	\
+} while (0)
+
+#define PRINT_INFO(format, args...)					\
+do {									\
+	debug_print_prefix(trace_flag, __LOG_PREFIX, __func__, __LINE__);\
+	PRINT(KERN_CONT, format, args);					\
+} while (0)
+
+#else  /* CONFIG_SCST_DEBUG */
+
+#define TRACE_MEM(format, args...) do {} while (0)
+#define TRACE_SG(format, args...) do {} while (0)
+#define TRACE_DBG(format, args...) do {} while (0)
+#define TRACE_DBG_FLAG(format, args...) do {} while (0)
+#define TRACE_DBG_SPECIAL(format, args...) do {} while (0)
+#define TRACE_MGMT_DBG(format, args...) do {} while (0)
+#define TRACE_MGMT_DBG_SPECIAL(format, args...) do {} while (0)
+#define TRACE_BUFFER(message, buff, len) do {} while (0)
+#define TRACE_BUFF_FLAG(flag, message, buff, len) do {} while (0)
+
+#ifdef LOG_PREFIX
+
+#define PRINT_INFO(format, args...)				\
+do {								\
+	PRINT(KERN_INFO, "%s: " format, LOG_PREFIX, args);	\
+} while (0)
+
+#define PRINT_WARNING(format, args...)          \
+do {                                            \
+	PRINT(KERN_INFO, "%s: ***WARNING***: "	\
+	      format, LOG_PREFIX, args);	\
+} while (0)
+
+#define PRINT_ERROR(format, args...)            \
+do {                                            \
+	PRINT(KERN_ERR, "%s: ***ERROR***: "	\
+	      format, LOG_PREFIX, args);	\
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...)       \
+do {                                            \
+	PRINT(KERN_CRIT, "%s: ***CRITICAL ERROR***: "	\
+		format, LOG_PREFIX, args);		\
+} while (0)
+
+#else
+
+#define PRINT_INFO(format, args...)           	\
+do {                                            \
+	PRINT(KERN_INFO, format, args);		\
+} while (0)
+
+#define PRINT_WARNING(format, args...)          \
+do {                                            \
+	PRINT(KERN_INFO, "***WARNING***: "	\
+		format, args);			\
+} while (0)
+
+#define PRINT_ERROR(format, args...)          	\
+do {                                            \
+	PRINT(KERN_ERR, "***ERROR***: "	\
+		format, args);			\
+} while (0)
+
+#define PRINT_CRIT_ERROR(format, args...)		\
+do {							\
+	PRINT(KERN_CRIT, "***CRITICAL ERROR***: "	\
+		format, args);				\
+} while (0)
+
+#endif /* LOG_PREFIX */
+
+#endif /* CONFIG_SCST_DEBUG */
+
+#if defined(CONFIG_SCST_DEBUG) && defined(CONFIG_DEBUG_SLAB)
+#define SCST_SLAB_FLAGS (SLAB_RED_ZONE | SLAB_POISON)
+#else
+#define SCST_SLAB_FLAGS 0L
+#endif
+
+#endif /* __SCST_DEBUG_H */
diff -uprN orig/linux-2.6.33/drivers/scst/scst_debug.c linux-2.6.33/drivers/scst/scst_debug.c
--- orig/linux-2.6.33/drivers/scst/scst_debug.c
+++ linux-2.6.33/drivers/scst/scst_debug.c
@@ -0,0 +1,136 @@
+/*
+ *  scst_debug.c
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Contains helper functions for execution tracing and error reporting.
+ *  Intended to be included in main .c file.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include "scst.h"
+#include "scst_debug.h"
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+
+#define TRACE_BUF_SIZE    512
+
+static char trace_buf[TRACE_BUF_SIZE];
+static DEFINE_SPINLOCK(trace_buf_lock);
+
+static inline int get_current_tid(void)
+{
+	/* Code should be the same as in sys_gettid() */
+	if (in_interrupt()) {
+		/*
+		 * Unfortunately, task_pid_vnr() isn't IRQ-safe, so otherwise
+		 * it can oops. ToDo.
+		 */
+		return 0;
+	}
+	return task_pid_vnr(current);
+}
+
+/**
+ * debug_print_prefix() - print debug prefix for a log line
+ *
+ * Prints, if requested by trace_flag, debug prefix for each log line
+ */
+int debug_print_prefix(unsigned long trace_flag,
+	const char *prefix, const char *func, int line)
+{
+	int i = 0;
+	unsigned long flags;
+	int pid = get_current_tid();
+
+	spin_lock_irqsave(&trace_buf_lock, flags);
+
+	trace_buf[0] = '\0';
+
+	if (trace_flag & TRACE_PID)
+		i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "[%d]: ", pid);
+	if (prefix != NULL)
+		i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%s: ",
+			      prefix);
+	if (trace_flag & TRACE_FUNCTION)
+		i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%s:", func);
+	if (trace_flag & TRACE_LINE)
+		i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%i:", line);
+
+	PRINTN(KERN_INFO, "%s", trace_buf);
+
+	spin_unlock_irqrestore(&trace_buf_lock, flags);
+
+	return i;
+}
+EXPORT_SYMBOL(debug_print_prefix);
+
+/**
+ * debug_print_buffer() - print a buffer
+ *
+ * Prints in the log data from the buffer
+ */
+void debug_print_buffer(const void *data, int len)
+{
+	int z, z1, i;
+	const unsigned char *buf = (const unsigned char *) data;
+	unsigned long flags;
+
+	if (buf == NULL)
+		return;
+
+	spin_lock_irqsave(&trace_buf_lock, flags);
+
+	PRINT(KERN_INFO, " (h)___0__1__2__3__4__5__6__7__8__9__A__B__C__D__E__F");
+	for (z = 0, z1 = 0, i = 0; z < len; z++) {
+		if (z % 16 == 0) {
+			if (z != 0) {
+				i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i,
+					      " ");
+				for (; (z1 < z) && (i < TRACE_BUF_SIZE - 1);
+				     z1++) {
+					if ((buf[z1] >= 0x20) &&
+					    (buf[z1] < 0x80))
+						trace_buf[i++] = buf[z1];
+					else
+						trace_buf[i++] = '.';
+				}
+				trace_buf[i] = '\0';
+				PRINT(KERN_INFO, "%s", trace_buf);
+				i = 0;
+			}
+			i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i,
+				      "%4x: ", z);
+		}
+		i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "%02x ",
+			      buf[z]);
+	}
+
+	i += snprintf(&trace_buf[i], TRACE_BUF_SIZE - i, "  ");
+	for (; (z1 < z) && (i < TRACE_BUF_SIZE - 1); z1++) {
+		if ((buf[z1] >= 0x20) && (buf[z1] < 0x80))
+			trace_buf[i++] = buf[z1];
+		else
+			trace_buf[i++] = '.';
+	}
+	trace_buf[i] = '\0';
+
+	PRINT(KERN_INFO, "%s", trace_buf);
+
+	spin_unlock_irqrestore(&trace_buf_lock, flags);
+	return;
+}
+EXPORT_SYMBOL(debug_print_buffer);
+
+#endif /* CONFIG_SCST_DEBUG || CONFIG_SCST_TRACING */
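
For illustration, a minimal sketch (not part of this patch) of how a
driver wires itself to the macros above: LOG_PREFIX is defined before
including scst_debug.h, and trace_flag is mapped to the module's own
variable, as SCST modules typically do. The names and flag values chosen
here are purely illustrative:

#include <linux/kernel.h>

#define LOG_PREFIX "example_drv"

#include <scst/scst_debug.h>

static unsigned long example_trace_flag = TRACE_OUT_OF_MEM | TRACE_MINOR;
#define trace_flag example_trace_flag

static void example_parse_cdb(const void *cdb, int len)
{
	/* Printed only if TRACE_DEBUG is set in example_trace_flag */
	TRACE_DBG("Parsing CDB %p, length %d", cdb, len);

	/* Hex dump of the buffer, printed only if TRACE_BUFF is set */
	TRACE_BUFFER("CDB", cdb, len);

	if (len == 0)
		PRINT_ERROR("Empty CDB (length %d)", len);
}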

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 10/12/1/5] SCST external modules support
       [not found] ` <4BC44D08.4060907@vlnb.net>
                     ` (8 preceding siblings ...)
  2010-04-13 13:06   ` [PATCH][RFC 9/12/1/5] SCST debugging support Vladislav Bolkhovitin
@ 2010-04-13 13:06   ` Vladislav Bolkhovitin
  9 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:06 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains SCST external modules support routines.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 scst_module.c |   63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/scst_module.c linux-2.6.33/drivers/scst/scst_module.c
--- orig/linux-2.6.33/drivers/scst/scst_module.c
+++ linux-2.6.33/drivers/scst/scst_module.c
@@ -0,0 +1,63 @@
+/*
+ *  scst_module.c
+ *
+ *  Copyright (C) 2004 - 2010 Vladislav Bolkhovitin <vst@vlnb.net>
+ *  Copyright (C) 2004 - 2005 Leonid Stoljar
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  Support for loading target modules. The usage is similar to scsi_module.c
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+
+#include <scst.h>
+
+static int __init init_this_scst_driver(void)
+{
+	int res;
+
+	res = scst_register_target_template(&driver_target_template);
+	TRACE_DBG("scst_register_target_template() returned %d", res);
+	if (res < 0)
+		goto out;
+
+#ifdef SCST_REGISTER_INITIATOR_DRIVER
+	driver_template.module = THIS_MODULE;
+	scsi_register_module(MODULE_SCSI_HA, &driver_template);
+	TRACE_DBG("driver_template.present=%d",
+	      driver_template.present);
+	if (driver_template.present == 0) {
+		res = -ENODEV;
+		MOD_DEC_USE_COUNT;
+		goto out;
+	}
+#endif
+
+out:
+	return res;
+}
+
+static void __exit exit_this_scst_driver(void)
+{
+
+#ifdef SCST_REGISTER_INITIATOR_DRIVER
+	scsi_unregister_module(MODULE_SCSI_HA, &driver_template);
+#endif
+
+	scst_unregister_target_template(&driver_target_template);
+	return;
+}
+
+module_init(init_this_scst_driver);
+module_exit(exit_this_scst_driver);

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 11/12/1/5] SCST core's README
       [not found] ` <4BC45841.2090707@vlnb.net>
@ 2010-04-13 13:07   ` Vladislav Bolkhovitin
  2010-04-13 13:07   ` [PATCH][RFC 12/12/1/5] SCST sysfs rules Vladislav Bolkhovitin
  1 sibling, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:07 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains SCST core's README.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 README.scst | 1379 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1379 insertions(+)

diff -uprN orig/linux-2.6.33/Documentation/scst/README.scst linux-2.6.33/Documentation/scst/README.scst
--- orig/linux-2.6.33/Documentation/scst/README.scst
+++ linux-2.6.33/Documentation/scst/README.scst
@@ -0,0 +1,1379 @@
+Generic SCSI target mid-level for Linux (SCST)
+==============================================
+
+SCST is designed to provide a unified, consistent interface between SCSI
+target drivers and the Linux kernel and to simplify target driver
+development as much as possible. A detailed description of SCST's
+features and internals can be found in the "Generic SCSI Target Middle
+Level for Linux" document on SCST's Internet page
+http://scst.sourceforge.net.
+
+SCST supports the following I/O modes:
+
+ * Pass-through mode with one to many relationship, i.e. when multiple
+   initiators can connect to the exported pass-through devices, for
+   the following SCSI devices types: disks (type 0), tapes (type 1),
+   processors (type 3), CDROMs (type 5), MO disks (type 7), medium
+   changers (type 8) and RAID controllers (type 0xC)
+
+ * FILEIO mode, which allows using files on file systems or block
+   devices as virtual, remotely available SCSI disks or CDROMs with
+   the benefits of the Linux page cache
+
+ * BLOCKIO mode, which performs direct block I/O with a block device,
+   bypassing the page cache for all operations. This mode works ideally
+   with high-end storage HBAs and for applications that either do not
+   need caching between the application and the disk or need large
+   block throughput
+
+ * User space mode using the scst_user device handler, which allows
+   implementing virtual SCSI devices in user space in the SCST
+   environment
+
+ * "Performance" device handlers, which, in pseudo pass-through mode,
+   provide a way for direct performance measurements without the
+   overhead of actually transferring data from/to the underlying SCSI
+   device
+
+In addition, SCST supports advanced per-initiator access and device
+visibility management, so different initiators can see different sets
+of devices with different access permissions. See below for details.
+
+Installation
+------------
+
+To see your devices remotely, you need to add them to at least the
+"Default" security group (see below how). By default, no local devices
+are seen remotely. There must be a LUN 0 in each security group, i.e.
+LUN numbering must not start from, e.g., 1. Otherwise you will see no
+devices on remote initiators and the SCST core will write into the
+kernel log the message: "tgt_dev for LUN 0 not found, command to
+unexisting LU?"
+
+It is highly recommended to use the scstadmin utility for configuring
+devices and security groups.
+
+If you experience problems during module load or running, check your
+kernel logs (or run the dmesg command for the few most recent messages).
+
+IMPORTANT: Without loading an appropriate device handler, the
+=========  corresponding devices will be invisible to remote
+           initiators, which could lead to holes in the LUN addressing,
+           so automatic device scanning by the remote SCSI mid-level
+           could fail to notice the devices. Therefore you will have to
+           add them manually via
+           'echo "- - -" >/sys/class/scsi_host/hostX/scan',
+           where X is the host number.
+
+IMPORTANT: Running the target and an initiator on the same host is
+=========  supported, except for the following 2 cases: swap over a
+           target exported device and using a writable mmap over a file
+           from a target exported device. The latter means you can't
+           mount a file system over a target exported device. In other
+           words, you can freely use any sg, sd, st, etc. devices
+           imported from the target on the same host, but you can't
+           mount file systems or put swap on them. This is a limitation
+           of the Linux memory/cache manager, because in this case an
+           OOM deadlock occurs like: the system needs some memory -> it
+           decides to flush some cache -> the cache needs to write to
+           the target exported device -> the initiator sends a request
+           to the target -> the target needs memory -> the system needs
+           even more memory -> deadlock.
+
+IMPORTANT: In the current version simultaneous access to local SCSI
+=========  devices via standard high-level SCSI drivers (sd, st, sg,
+           etc.) and SCST's target drivers is unsupported. This is
+           especially important for commands executed via sg and st
+           that change the state of devices and their parameters,
+           because that could lead to data corruption. If any such
+           command is issued, at least the related device handler(s)
+           must be restarted. For block devices, READ/WRITE commands
+           using the direct disk handler look to be safe.
+
+IMPORTANT: Some versions of Windows have a bug, which makes them
+=========  consider a response to READ CAPACITY(16) longer than 12
+           bytes as a faulty one. As a result, such versions of Windows
+           refuse to see SCST exported devices >2TB in size. This has
+           been fixed by MS in later Windows versions, probably by some
+           hotfix. But if you're using such a buggy Windows and
+           experience this problem, change "#if 1" to "#if 0" in
+           scst_vdisk.c::vdisk_exec_read_capacity16().
+
+Usage in failover mode
+----------------------
+
+It is recommended to use the TEST UNIT READY ("tur") command to check
+whether an SCST target is alive in MPIO configurations.
+
+Device handlers
+---------------
+
+Device specific drivers (device handlers) are plugins for SCST, which
+help SCST to analyze incoming requests and determine parameters
+specific to various types of devices. If an appropriate device handler
+for a SCSI device type isn't loaded, SCST doesn't know how to handle
+devices of this type, so they will be invisible to remote initiators
+(more precisely, the "LUN not supported" sense code will be returned).
+
+In addition to device handlers for real devices, there are VDISK, user
+space and "performance" device handlers.
+
+The VDISK device handler works over files on file systems and makes
+from them virtual, remotely available SCSI disks or CDROMs. In addition,
+it allows working directly over a block device, e.g. a local IDE or SCSI
+disk or even a disk partition, where there is no file system overhead.
+Compared to sending SCSI commands directly to the SCSI mid-level via
+scsi_do_req()/scsi_execute_async(), using block devices has the
+advantage that data are transferred via the system cache, so it is
+possible to fully benefit from the caching and read ahead performed by
+Linux's VM subsystem. The only disadvantage here is that in the FILEIO
+mode there is superfluous data copying between the cache and SCST's
+buffers. This issue is going to be addressed in the next release.
+Virtual CDROMs are useful for remote installation. See below for
+details on how to set up and use the VDISK device handler.
+
+The SCST user space device handler provides an interface between SCST
+and user space, which allows creating pure user space devices. The
+simplest example where one would want it is writing a VTL: with
+scst_user it can be written purely in user space. It is also useful
+when some processing of the passed data is too sophisticated for kernel
+space, like encrypting it or making snapshots.
+
+"Performance" device handlers for disks, MO disks and tapes in their
+exec() method skip (pretend to execute) all READ and WRITE operations
+and thus provide a way for direct link performance measurements without
+overhead of actual data transferring from/to underlying SCSI device.
+
+NOTE: Since "perf" device handlers on READ operations don't touch the
+====  commands' data buffer, it is returned to remote initiators as it
+      was allocated, without even being zeroed. Thus, "perf" device
+      handlers impose some security risk, so use them with caution.
+
+Compilation options
+-------------------
+
+There are the following compilation options, which can be changed using
+your favorite kernel configuration Makefile target, e.g. "make xconfig":
+
+ - CONFIG_SCST_DEBUG - if defined, turns on some debugging code,
+   including some logging. Makes the driver considerably bigger and
+   slower, producing a large amount of log data.
+
+ - CONFIG_SCST_TRACING - if defined, turns on the ability to log events.
+   Makes the driver considerably bigger and leads to some performance
+   loss.
+
+ - CONFIG_SCST_EXTRACHECKS - if defined, adds extra validity checks in
+   various places.
+
+ - CONFIG_SCST_USE_EXPECTED_VALUES - if not defined (default), the
+   initiator supplied expected data transfer length and direction will
+   be used only for verification purposes, to return an error or warn
+   if one of them is invalid, while the values locally decoded from the
+   SCSI command will be used instead. This is necessary for security
+   reasons, because otherwise a faulty initiator could crash the target
+   by supplying an invalid value in one of those parameters. This is
+   especially important in case of pass-through mode. If
+   CONFIG_SCST_USE_EXPECTED_VALUES is defined, the initiator supplied
+   expected data transfer length and direction will override the locally
+   decoded values. This might be necessary if the internal SCST command
+   translation table doesn't contain a SCSI command, which is used in
+   your environment. You can detect that if you have messages like
+   "Unknown opcode XX for YY. Should you update scst_scsi_op_table?" in
+   your kernel log and your initiator returns an error. Also report
+   those messages to the SCST mailing list
+   scst-devel@lists.sourceforge.net. Note that not all SCSI transports
+   support supplying expected values.
+
+ - CONFIG_SCST_DEBUG_TM - if defined, turns on debugging of the task
+   management functions: on LUN 6 some of the commands will be delayed
+   for about 60 sec., making the remote initiator send TM functions,
+   e.g. ABORT TASK and TARGET RESET. Also define the
+   CONFIG_SCST_TM_DBG_GO_OFFLINE symbol in the Makefile if you want the
+   device to eventually become completely unresponsive, or otherwise it
+   will circle around the ABORT and RESET code. Needs CONFIG_SCST_DEBUG
+   turned on.
+
+ - CONFIG_SCST_STRICT_SERIALIZING - if defined, makes SCST send all
+   commands to the underlying SCSI device synchronously, one after
+   another. This makes task management more reliable, at the cost of
+   some performance penalty. This is mostly relevant for stateful SCSI
+   devices like tapes, where the result of a command's execution depends
+   on the device's settings defined by previous commands. Disk and RAID
+   devices are stateless in most cases. The current SCSI core in Linux
+   doesn't allow aborting all commands reliably if they were sent
+   asynchronously to a stateful device. Turned off by default; turn it
+   on if you use stateful device(s) and need as much error recovery
+   reliability as possible. As a side effect of
+   CONFIG_SCST_STRICT_SERIALIZING, on kernels below 2.6.30 no kernel
+   patching is necessary for pass-through device handlers (scst_disk,
+   etc.).
+
+ - CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ - if defined,
+   submitting pass-through commands to real SCSI devices via the SCSI
+   middle layer using the scsi_execute_async() function will be allowed
+   from soft IRQ context (tasklets). This used to be the default, but
+   currently the SCSI middle layer seems to expect only thread context
+   on the I/O submit path, so it is now disabled by default. Enabling it
+   will decrease the number of context switches and improve performance.
+   It is more or less safe; in the worst case, if in your configuration
+   the SCSI middle layer really doesn't expect SIRQ context in the
+   scsi_execute_async() function, you will get a warning message in the
+   kernel log.
+
+ - CONFIG_SCST_STRICT_SECURITY - if defined, makes SCST zero allocated
+   data buffers. Undefining it (default) considerably improves
+   performance and reduces CPU load, but could create a security hole
+   (information leakage), so enable it if you have strict security
+   requirements.
+
+ - CONFIG_SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING - if
+   defined, when the TASK MANAGEMENT function ABORT TASK is trying to
+   abort a command which has already finished, the remote initiator
+   which sent the ABORT TASK request will receive a TASK NOT EXIST (or
+   ABORT FAILED) response for the ABORT TASK request. This is the more
+   logical response: since the command finished, the attempt to abort it
+   failed. But some initiators, particularly the VMware iSCSI initiator,
+   consider the TASK NOT EXIST response as if the target went crazy and
+   try to RESET it, then sometimes go crazy themselves. So, this option
+   is disabled by default.
+
+ - CONFIG_SCST_MEASURE_LATENCY - if defined, provides statistics about
+   the average command processing latency in /sys/kernel/scst_tgt and
+   below. You can clear already measured results by writing 0 into the
+   corresponding file. Note, you need a non-preemptible kernel to get
+   correct results.
+
+HIGHMEM kernel configurations are fully supported, but not recommended
+for performance reasons, except for scst_user, where they are not
+supported, because this module deals with user supplied memory in a
+zero-copy manner. If you need to use it, consider changing the VMSPLIT
+option or use a 64-bit system configuration instead.
+
+To change the VMSPLIT option (CONFIG_VMSPLIT to be precise) you should
+set the following options in "make menuconfig":
+
+ - General setup->Configure standard kernel features (for small systems): ON
+
+ - General setup->Prompt for development and/or incomplete code/drivers: ON
+
+ - Processor type and features->High Memory Support: OFF
+
+ - Processor type and features->Memory split: according to the amount of
+   memory you have. If it is less than 800MB, you need not touch this
+   option at all.
+
+Module parameters
+-----------------
+
+The scst module supports the following parameters:
+
+ - scst_threads - allows setting the number of SCST threads. By default
+   it is the CPU count.
+
+ - scst_max_cmd_mem - sets the maximum amount of memory in MB allowed to
+   be consumed by SCST commands for data buffers at any given time. By
+   default it is approximately TotalMem/4.
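+
+For example, to load SCST with 8 threads and a 512 MB limit for command
+data buffers (the values here are purely illustrative):
+
+   modprobe scst scst_threads=8 scst_max_cmd_mem=512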
+
+SCST sysfs interface
+--------------------
+
+Root of SCST sysfs interface is /sys/kernel/scst_tgt. It has the
+following entries:
+
+ - devices - this is a root subdirectory for all SCST devices
+
+ - handlers - this is a root subdirectory for all SCST dev handlers
+
+ - sgv - this is a root subdirectory for all SCST SGV caches
+
+ - targets - this is a root subdirectory for all SCST targets
+
+ - setup_id - allows reading and writing the SCST setup ID. This ID can
+   be used in cases when the same SCST configuration should be installed
+   on several targets, but the devices exported from those targets
+   should have different IDs and SNs. For instance, the VDISK dev
+   handler uses this ID to generate the T10 vendor specific identifier
+   and SN of the devices.
+
+ - threads - allows reading and setting the number of global SCST I/O
+   threads. Those threads are used with async. dev handlers, for
+   instance, vdisk BLOCKIO or NULLIO.
+
+ - trace_level - allows enabling and disabling various tracing
+   facilities. See the content of this file for help on how to use it.
+
+ - version - read-only attribute, which shows the version of SCST and
+   the enabled optional features.
+
+Each SCST sysfs file (attribute) can contain the mark "[key]" in its
+last line. This mark is added automatically and lets scstadmin know
+which attributes it should save in the config file. You can ignore it.
+
+"Devices" subdirectory contains subdirectories for each SCST devices.
+
+The content of each device's subdirectory is dev handler specific. See
+the documentation for your dev handlers for more info about it, as well
+as the SysfsRules file for more info about the rules common to all dev
+handlers. Standard SCST dev handlers have at least the following common
+entries:
+
+ - exported - subdirectory containing links to all LUNs where this
+   device was exported.
+
+ - handler - if a dev handler has been determined for this device, this
+   link points to it. The handler may be unset for pass-through devices.
+
+ - threads_num - shows and allows setting the number of threads in this
+   device's threads pool. If 0, no threads will be created and the
+   global SCST threads pool will be used. If <0, creation of the threads
+   pool is prohibited.
+
+ - threads_pool_type - shows and allows setting the threads pool type.
+   Possible values: "per_initiator" and "shared". When the value is
+   "per_initiator" (default), each session from each initiator will use
+   a separate dedicated pool of threads. When the value is "shared", all
+   sessions from all initiators will share the same per-device pool of
+   threads. Valid only if the threads_num attribute is >0.
+
+ - type - SCSI type of this device
+
+See below for more information about other entries of this subdirectory
+of the standard SCST dev handlers.
+
+"Handlers" subdirectory contains subdirectories for each SCST dev
+handler.
+
+Content of each handler's subdirectory is dev handler specific. See
+documentation for your dev handlers for more info about it as well as
+SysfsRules file for more info about common to all dev handlers rules.
+Standard SCST dev handlers have at least the following common entries:
+
+ - mgmt - this entry allows creating virtual devices and setting their
+   attributes (for virtual device dev handlers) or assigning/unassigning
+   real SCSI devices to/from this dev handler (for pass-through dev
+   handlers).
+
+ - pass_through - if it exists, it contains 1 and this dev handler is a
+   pass-through dev handler.
+
+ - trace_level - allows enabling and disabling various tracing
+   facilities. See the content of this file for help on how to use it.
+
+ - type - SCSI type of devices served by this dev handler.
+
+See below for more information about other entries of this subdirectory
+of the standard SCST dev handlers.
+
+"Sgv" subdirectory contains statistic information of SCST SGV caches. It
+has the following entries:
+
+ - None, one or more subdirectories for each existing SGV cache.
+
+ - global_stats - file containing global SGV caches statistics.
+
+Each SGV cache's subdirectory has the following item:
+
+ - stats - file containing statistics for this SGV caches.
+
+"Targets" subdirectory contains subdirectories for each SCST target.
+
+Content of each target's subdirectory is target specific. See
+documentation for your target for more info about it as well as
+SysfsRules file for more info about common to all targets rules.
+Every target should have at least the following entries:
+
+ - ini_groups - subdirectory, which contains initiator-oriented access
+   control information and allows defining it, see below.
+
+ - luns - subdirectory, which contains the list of LUNs available in the
+   target-oriented access control and allows defining it, see below.
+
+ - sessions - subdirectory containing the sessions connected to this
+   target.
+
+ - enabled - using this attribute you can enable or disable this target.
+   It allows finishing its configuration before it starts accepting new
+   connections. 0 by default.
+
+ - addr_method - the LUN addressing method used. Possible values:
+   "Peripheral" and "Flat". Most initiators work well with Peripheral
+   addressing method (default), but some (HP-UX, for instance) may
+   require Flat method. This attribute is also available in the
+   initiators security groups, so you can assign the addressing method
+   on per-initiator basis.
+
+ - io_grouping_type - defines how I/O from sessions to this target is
+   grouped together. This I/O grouping is very important for
+   performance. By setting this attribute to the right value, you can
+   considerably increase performance of your setup. This grouping is
+   performed only if you use CFQ I/O scheduler on the target and for
+   devices with threads_num >= 0 and, if threads_num > 0, with
+   threads_pool_type "per_initiator". Possible values:
+   "this_group_only", "never", "auto", or I/O group number >0. When the
+   value is "this_group_only" all I/O from all sessions in this target
+   will be grouped together. When the value is "never", I/O from
+   different sessions will not be grouped together, i.e. all sessions in
+   this target will have separate dedicated I/O groups. When the value
+   is "auto" (default), all I/O from initiators with the same name
+   (iSCSI initiator name, for instance) in all targets will be grouped
+   together with a separate dedicated I/O group for each initiator name.
+   For iSCSI this mode works well, but other transports usually use
+   different initiator names for different sessions, so using such
+   transports in MPIO configurations you should either use value
+   "this_group_only", or an explicit I/O group number. This attribute is
+   also available in the initiators security groups, so you can assign
+   the I/O grouping on a per-initiator basis. See below for more info on
+   how to use this attribute.
+
+ - rel_tgt_id - allows reading and writing the SCSI Relative Target Port
+   Identifier attribute. This identifier is used to identify SCSI Target
+   Ports by some SCSI commands, mainly by Persistent Reservations
+   commands. This identifier must be unique among all SCST targets, but
+   for convenience SCST allows disabled targets to have a non-unique
+   rel_tgt_id. In this case SCST will not allow enabling this target
+   until its rel_tgt_id becomes unique. By default this attribute is
+   initialized by SCST to a unique value.
+
+A target driver may also have the following entries:
+
+ - "hw_target" - if the target driver supports both hardware and virtual
+    targets (for instance, an FC adapter supporting NPIV, which has
+    hardware targets for its physical ports as well as virtual NPIV
+    targets), this read only attribute for all hardware targets will
+    exist and contain value 1.
+
+Subdirectory "sessions" contains one subdirectory for each connected
+session with name equal to name of the connected initiator.
+
+Each session subdirectory contains the following entries:
+
+ - initiator_name - contains the initiator name
+
+ - force_close - optional write-only attribute, which allows forcibly
+   closing this session.
+
+ - active_commands - contains the number of active SCSI commands in this
+   session, i.e. those not yet executed or currently being executed.
+
+ - commands - contains the overall number of SCSI commands in this
+   session.
+
+ - other target driver specific attributes and subdirectories.
+
+See below description of the VDISK's sysfs interface for samples.
+
+Access and devices visibility management (LUN masking) - sysfs interface
+------------------------------------------------------------------------
+
+Access and device visibility management allows an initiator or a group
+of initiators to see different devices with different LUNs and with the
+necessary access permissions.
+
+SCST supports two modes of access control:
+
+1. Target-oriented. In this mode you define for each target a default
+set of LUNs, which are accessible to all initiators connected to that
+target. This is the regular access control mode, which people usually
+mean when thinking about access control in general. For instance, in
+IET this is the only supported mode.
+
+2. Initiator-oriented. In this mode you define which LUNs are accessible
+to each initiator. In this mode, for each set of one or more initiators
+that should access the same set of devices with the same LUNs, you
+should create a separate security group, then add the devices and the
+names of the allowed initiator(s) to it.
+
+Both modes can be used simultaneously. In this case the
+initiator-oriented mode has higher priority than the target-oriented
+one, i.e. initiators are first looked up in all security groups defined
+for this target and, if none matches, the target's default set of LUNs
+is used. This set of LUNs might be empty; then the initiator will not
+see any LUNs from the target.
+
+You can at any time find out which set of LUNs each session is assigned
+to by looking where link
+/sys/kernel/scst_tgt/targets/target_driver/target_name/sessions/initiator_name/luns
+points to.
+
+To configure the target-oriented access control SCST provides the
+following interface. Each target's sysfs subdirectory
+(/sys/kernel/scst_tgt/targets/target_driver/target_name) has a "luns"
+subdirectory. This subdirectory contains the list of already defined
+target-oriented access control LUNs for this target, as well as the
+file "mgmt". This file accepts the following commands, which you can
+send to it, for instance, using the "echo" shell command. You can
+always get brief help about the supported commands by looking inside
+this file. "Parameters" are one or more param_name=value pairs
+separated by ';'.
+
+ - "add H:C:I:L lun [parameters]" - adds a pass-through device with
+   host:channel:id:lun with LUN "lun". Optionally, the device could be
+   marked as read only by using parameter "read_only". The recommended
+   way to find out H:C:I:L numbers is use of lsscsi utility.
+
+ - "replace H:C:I:L lun [parameters]" - replaces by pass-through device
+   with host:channel:id:lun existing with LUN "lun" device with
+   generation of INQUIRY DATA HAS CHANGED Unit Attention. If the old
+   device doesn't exist, this command acts as the "add" command.
+   Optionally, the device could be marked as read only by using
+   parameter "read_only". The recommended way to find out H:C:I:L
+   numbers is use of lsscsi utility.
+
+ - "del H:C:I:L" - deletes a pass-through device with host:channel:id:lun
+   The recommended way to find out H:C:I:L numbers is use of lsscsi
+   utility.
+
+ - "add VNAME lun [parameters]" - adds a virtual device with name VNAME
+   with LUN "lun". Optionally, the device could be marked as read only
+   by using parameter "read_only".
+
+ - "replace VNAME lun [parameters]" - replaces by virtual device
+   with name VNAME existing with LUN "lun" device with generation of
+   INQUIRY DATA HAS CHANGED Unit Attention. If the old device doesn't
+   exist, this command acts as the "add" command. Optionally, the device
+   could be marked as read only by using parameter "read_only".
+
+ - "del VNAME" - deletes a virtual device with name VNAME.
+
+ - "clear" - clears the list of devices
+
+To configure initiator-oriented access control SCST provides the
+following interface. Each target's sysfs subdirectory
+(/sys/kernel/scst_tgt/targets/target_driver/target_name) has an
+"ini_groups" subdirectory. It contains the list of security groups
+already defined for this target as well as a "mgmt" file. This file
+accepts the following commands, which you can send to it using, for
+instance, the "echo" shell command. You can always get brief help
+about the supported commands by reading this file.
+
+ - "create GROUP_NAME" - creates a new security group.
+
+ - "del GROUP_NAME" - deletes a new security group.
+
+Each security group's subdirectory contains 2 subdirectories:
+"initiators" and "luns".
+
+Each "initiators" subdirectory contains list of added to this groups
+initiator as well as as well as file "mgmt". This file has the following
+commands, which you can send to it, for instance, using "echo" shell
+command. You can always get a small help about supported commands by
+looking inside this file.
+
+ - "add INITIATOR_NAME" - adds initiator with name INITIATOR_NAME to the
+   group.
+
+ - "del INITIATOR_NAME" - deletes initiator with name INITIATOR_NAME
+   from the group.
+
+ - "move INITIATOR_NAME DEST_GROUP_NAME" moves initiator with name
+   INITIATOR_NAME from the current group to group with name
+   DEST_GROUP_NAME.
+
+ - "clear" - deletes all initiators from this group.
+
+For "add" and "del" commands INITIATOR_NAME can be a simple DOS-type
+patterns, containing '*' and '?' symbols. '*' means match all any
+symbols, '?' means match only any single symbol. For instance,
+"blah.xxx" will match "bl?h.*".
+
+Each "luns" subdirectory contains the list of already defined LUNs for
+this group as well as file "mgmt". Content of this file as well as list
+of available in it commands is fully identical to the "luns"
+subdirectory of the target-oriented access control.
+
+Examples:
+
+ - echo "create INI" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/mgmt -
+   creates security group INI for target iqn.2006-10.net.vlnb:tgt1.
+
+ - echo "add 2:0:1:0 11" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/INI/luns/mgmt -
+   adds a pass-through device sitting on host 2, channel 0, ID 1, LUN 0
+   to group with name INI as LUN 11.
+
+ - echo "add disk1 0" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/INI/luns/mgmt -
+   adds a virtual disk with name disk1 to group with name INI as LUN 0.
+
+ - echo "add 21:*:e0:?b:83:*" >/sys/kernel/scst_tgt/targets/21:00:00:a0:8c:54:52:12/ini_groups/INI/initiators/mgmt -
+   adds a pattern to group with name INI to Fibre Channel target with
+   WWN 21:00:00:a0:8c:54:52:12, which matches WWNs of Fibre Channel
+   initiator ports.
+
+Suppose you need an iSCSI target named
+"iqn.2007-05.com.example:storage.disk1.sys1.xyz", which should export
+virtual device "dev1" with LUN 0 and virtual device "dev2" with LUN 1,
+but the initiator named
+"iqn.2007-05.com.example:storage.disk1.spec_ini.xyz" should see only
+virtual device "dev2", read-only, with LUN 0. To achieve that, run the
+following commands:
+
+# echo "iqn.2007-05.com.example:storage.disk1.sys1.xyz" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+# echo "add dev1 0" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2007-05.com.example:storage.disk1.sys1.xyz/luns/mgmt
+# echo "add dev2 1" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2007-05.com.example:storage.disk1.sys1.xyz/luns/mgmt
+# echo "create SPEC_INI" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2007-05.com.example:storage.disk1.sys1.xyz/ini_groups/mgmt
+# echo "add dev2 0 read_only=1" \
+	>/sys/kernel/scst_tgt/targets/iscsi/iqn.2007-05.com.example:storage.disk1.sys1.xyz/ini_groups/SPEC_INI/luns/mgmt
+# echo "iqn.2007-05.com.example:storage.disk1.spec_ini.xyz" \
+	>/sys/kernel/scst_tgt/targets/iscsi/iqn.2007-05.com.example:storage.disk1.sys1.xyz/ini_groups/SPEC_INI/initiators/mgmt
+
+For Fibre Channel or SAS, in the above example you would use the
+target's and initiator ports' WWNs instead of iSCSI names.
+
+It is highly recommended to use the scstadmin utility instead of the
+low-level interface described in this section.
+
+IMPORTANT
+=========
+
+There must be a LUN 0 in each set of LUNs, i.e. LU numbering must not
+start from, e.g., 1. Otherwise you will see no devices on remote
+initiators and the SCST core will write to the kernel log the message:
+"tgt_dev for LUN 0 not found, command to unexisting LU?"
+
+IMPORTANT
+=========
+
+All the access control must be fully configured BEFORE the
+corresponding target is enabled! When you enable a target, it will
+immediately start accepting new connections, hence creating new
+sessions, and those new sessions will be assigned to security groups
+according to the *currently* configured access control settings. For
+instance, a new session may be assigned to the target's default set of
+LUNs instead of the "HOST004" group as you may need, because "HOST004"
+doesn't exist yet. So, one must configure all the security groups
+before new connections from the initiators are created, i.e. before
+the target is enabled.
+
+VDISK device handler
+--------------------
+
+VDISK has 4 built-in dev handlers: vdisk_fileio, vdisk_blockio,
+vdisk_nullio and vcdrom. The roots of their sysfs interfaces are
+/sys/kernel/scst_tgt/handlers/handler_name, e.g. for vdisk_fileio:
+/sys/kernel/scst_tgt/handlers/vdisk_fileio. Each root has the
+following entries:
+
+ - Zero or more links to devices, with names equal to the names of the
+   corresponding devices.
+
+ - trace_level - allows enabling and disabling various tracing
+   facilities. See the content of this file for help on how to use it.
+
+ - mgmt - the main management entry, which allows adding/deleting
+   VDISK devices of the corresponding type.
+
+The "mgmt" file has the following commands, which you can send to it,
+for instance, using "echo" shell command. You can always get a small
+help about supported commands by looking inside this file. "Parameters"
+are one or more param_name=value pairs separated by ';'.
+
+ - echo "add_device device_name [parameters]" - adds a virtual device
+   with name device_name and specified parameters (see below)
+
+ - echo "del_device device_name" - deletes a virtual device with name
+   device_name.
+
+Handler vdisk_fileio provides the FILEIO mode for creating virtual
+devices. This mode uses files as the backend and accesses them using
+regular read()/write() calls, which allows using the full power of the
+Linux page cache. The following parameters are possible for
+vdisk_fileio:
+
+ - filename - specifies path and file name of the backend file. The path
+   must be absolute.
+
+ - blocksize - specifies the block size used by this virtual device.
+   The block size must be a power of 2 and >= 512 bytes. Default is 512.
+
+ - write_through - disables write-back caching. Note that this option
+   makes sense only if you also *manually* disable the write-back
+   cache in *all* your backstorage devices and make sure it's actually
+   disabled, since many devices are known to lie about this mode to
+   get better benchmark results. Default is 0.
+
+ - read_only - read only. Default is 0.
+
+ - o_direct - disables both read and write caching. This mode isn't
+   currently fully implemented; you should use the user space
+   fileio_tgt program in O_DIRECT mode instead (see below).
+
+ - nv_cache - enables "non-volatile cache" mode. In this mode it is
+   assumed that the target has a GOOD UPS with the ability to cleanly
+   shut down the target in case of a power failure, and that it is
+   free of software/hardware bugs, i.e. all data from the target's
+   cache are guaranteed to reach the media sooner or later. Hence, all
+   data synchronization operations, like SYNCHRONIZE_CACHE, are
+   ignored in order to gain more performance. Also, in this mode the
+   target reports to initiators that the corresponding device has a
+   write-through cache, to disable all write-back cache workarounds
+   used by initiators. Use with extreme caution: in this mode, after a
+   crash of the target, journaled file systems don't guarantee
+   consistency after journal recovery, so a manual fsck MUST be run.
+   Note that since the journal barrier protection (see the "IMPORTANT"
+   note below) is usually turned off, enabling NV_CACHE may change
+   nothing from the data protection point of view, since no data
+   synchronization operations will come from the initiator. This
+   option overrides the "write_through" option. Disabled by default.
+
+ - removable - with this flag set the device is reported to remote
+   initiators as removable.
+
+Handler vdisk_blockio provides the BLOCKIO mode for creating virtual
+devices. This mode performs direct block I/O with a block device,
+bypassing the page cache for all operations. It works ideally with
+high-end storage HBAs and for applications that either do not need
+caching between application and disk or need large-block throughput.
+See below for more info.
+
+The following parameters are possible for vdisk_blockio: filename,
+blocksize, read_only, removable. See vdisk_fileio above for
+descriptions of these parameters.
+
+Handler vdisk_nullio provides the NULLIO mode for creating virtual
+devices. In this mode no real I/O is done, but success is returned to
+initiators. It is intended to be used for performance measurements in
+the same way as the "*_perf" handlers. The following parameters are
+possible for vdisk_nullio: blocksize, read_only, removable. See
+vdisk_fileio above for descriptions of these parameters.
+
+Handler vcdrom allows emulating a virtual CDROM device using an ISO
+file as the backend. It doesn't have any parameters.
+
+For example:
+
+echo "add_device disk1 filename=/disk1; blocksize=4096; nv_cache=1" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/mgmt
+
+will create a FILEIO virtual device disk1 with backend file /disk1
+with block size 4K and NV_CACHE enabled.
+
+Each vdisk_fileio's device has the following attributes in
+/sys/kernel/scst_tgt/devices/device_name:
+
+ - filename - contains path and file name of the backend file.
+
+ - blocksize - contains block size used by this virtual device.
+
+ - write_through - contains status of write back caching of this virtual
+   device.
+
+ - read_only - contains read only status of this virtual device.
+
+ - o_direct - contains O_DIRECT status of this virtual device.
+
+ - nv_cache - contains NV_CACHE status of this virtual device.
+
+ - removable - contains removable status of this virtual device.
+
+ - size_mb - contains size of this virtual device in MB.
+
+ - t10_dev_id - contains, and allows setting, the T10 vendor-specific
+   identifier for the Device Identification VPD page (0x83) of INQUIRY
+   data. By default, the VDISK handler generates a t10_dev_id for
+   every newly created device at creation time, based on the device
+   name and the scst_vdisk_ID parameter of the scst_vdisk.ko module
+   (see below).
+
+ - usn - contains the virtual device's serial number, as reported in
+   INQUIRY data. It is created at device creation time, based on the
+   device name and the scst_vdisk_ID parameter of the scst_vdisk.ko
+   module (see below).
+
+ - type - contains SCSI type of this virtual device.
+
+ - resync_size - a write-only attribute, which makes vdisk_fileio
+   rescan the size of the backend file. It is useful if you have
+   changed the file, for instance, resized it. See the example after
+   the directory listing below.
+
+For example:
+
+/sys/kernel/scst_tgt/devices/disk1
+|-- blocksize
+|-- exported
+|   |-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/0
+|   |-- export1 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt/ini_groups/INI/luns/0
+|   |-- export2 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/luns/0
+|   |-- export3 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/INI1/luns/0
+|   |-- export4 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/INI2/luns/0
+|-- filename
+|-- handler -> ../../handlers/vdisk_fileio
+|-- nv_cache
+|-- o_direct
+|-- read_only
+|-- removable
+|-- resync_size
+|-- size_mb
+|-- t10_dev_id
+|-- threads_num
+|-- threads_pool_type
+|-- type
+|-- usn
+`-- write_through
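+
+For instance, after resizing the backend file of the disk1 device
+above, you can make vdisk_fileio pick up the new size with (a sketch;
+it is assumed that writing any value, e.g. 1, triggers the rescan):
+
+# echo 1 >/sys/kernel/scst_tgt/devices/disk1/resync_size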
+
+Each vdisk_blockio device has the following attributes in
+/sys/kernel/scst_tgt/devices/device_name: blocksize, filename,
+read_only, removable, resync_size, size_mb, t10_dev_id, threads_num,
+threads_pool_type, type, usn. See above for descriptions of these
+attributes.
+
+Each vdisk_nullio device has the following attributes in
+/sys/kernel/scst_tgt/devices/device_name: blocksize, read_only,
+removable, size_mb, t10_dev_id, threads_num, threads_pool_type, type,
+usn. See above for descriptions of these attributes.
+
+Each vcdrom device has the following attributes in
+/sys/kernel/scst_tgt/devices/device_name: filename, size_mb,
+t10_dev_id, threads_num, threads_pool_type, type, usn. See above for
+descriptions of these attributes. The exception is the filename
+attribute: for vcdrom it is writable, and writing to it allows
+virtually inserting or changing the virtual CD media in the virtual
+CDROM device. For example:
+
+ - echo "/image.iso" >/sys/kernel/scst_tgt/devices/cdrom/filename - will
+   insert file /image.iso as virtual media to the virtual CDROM cdrom.
+
+ - echo "" >/sys/kernel/scst_tgt/devices/cdrom/filename - will remove
+   "media" from the virtual CDROM cdrom.
+
+In addition to the sysfs interface, the VDISK handler has the module
+parameter "num_threads", which specifies the number of I/O threads for
+each VDISK device. If you have a workload that tends to produce rather
+random accesses (e.g. DB-like), you should increase this count to a
+bigger value, like 32. If you have a rather sequential workload, you
+should decrease it to a lower value, like the number of CPUs on the
+target, or even 1. Due to some limitations of the Linux I/O subsystem,
+increasing the number of I/O threads too much leads to a sequential
+performance drop, especially with the deadline scheduler, so
+decreasing it can improve sequential performance. The default provides
+a good compromise between random and sequential accesses.
+
+You shouldn't be afraid to have too many VDISK I/O threads if you have
+many VDISK devices. Kernel threads consume a very small amount of
+resources (several KB each), and only the necessary threads will be
+used by SCST, so the threads will not thrash your system.
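+
+For example, to load scst_vdisk with more I/O threads per device for a
+random-access workload (a sketch using the module parameter described
+above):
+
+# modprobe scst_vdisk num_threads=32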
+
+CAUTION: If you partitioned/formatted your device with block size X, *NEVER*
+======== ever try to export and then mount it (even accidentally) with another
+         block size. Otherwise you can *instantly* damage it pretty
+         badly, as well as all your data on it. Messages on the
+         initiator like "attempt to access beyond end of device" are a
+         sign of such damage.
+
+         Moreover, if you want to compare how well different block
+         sizes work for you, you **MUST** EVERY TIME AFTER CHANGING
+         BLOCK SIZE **COMPLETELY** **WIPE OFF** ALL THE DATA FROM THE
+         DEVICE. In other words, THE **WHOLE** DEVICE **MUST** HAVE
+         ONLY **ZEROS** AS THE DATA AFTER YOU SWITCH TO THE NEW BLOCK
+         SIZE. Switching block sizes isn't like switching between
+         FILEIO and BLOCKIO: after changing the block size, all data
+         previously written with another block size MUST BE ERASED.
+         Otherwise you will get a full set of very weird behaviors,
+         because block addressing will have changed, but in most cases
+         initiators will have no way to detect that old addresses
+         written on the device (e.g., in the partition table) no
+         longer refer to what they were intended to refer to.
+
+IMPORTANT: Some disk and partition table management utilities don't support
+=========  block sizes >512 bytes, therefore make sure that your favorite one
+           supports them. Currently only cfdisk is known to work only
+           with 512-byte blocks; other utilities, like fdisk on Linux
+           or the standard disk manager on Windows, are proven to work
+           well with non-512-byte blocks. Note that if you export a
+           disk file or device with a block size different from the
+           one with which it was already partitioned, you may get
+           various weird things, like utilities hanging or other
+           unexpected behavior. Hence, to be sure, zero the exported
+           file or device before the first access to it from the
+           remote initiator with another block size. On a Windows
+           initiator, make sure you "Set Signature" in the disk
+           manager on the drive imported from the target before doing
+           any other partitioning on it. After you have successfully
+           mounted a file system over a non-512-byte block size
+           device, the block size stops mattering; any program will
+           work with files on such a file system.
+
+Caching
+-------
+
+By default, for performance reasons, VDISK FILEIO devices use a
+write-back caching policy. This is generally safe for modern
+applications, which are prepared to work in write-back caching
+environments, so they know when to flush the cache to keep their data
+consistent and to minimize the damage caused by data lost from the
+cache in case of power/hardware/software failures.
+
+For instance, journaled file systems flush the cache on each metadata
+update, so they survive power/hardware/software failures pretty well.
+Note that the Linux I/O subsystem guarantees this works reliably only
+when data protection barriers are used, which, for instance, for Ext3
+are turned off by default (see http://lwn.net/Articles/283161). Some
+info about barriers from the XFS point of view can be found at
+http://oss.sgi.com/projects/xfs/faq.html#wcache. On Linux initiators,
+for the Ext3 and ReiserFS file systems the barrier protection can be
+turned on using the "barrier=1" and "barrier=flush" mount options
+respectively. You can check whether the barriers are on or off by
+looking in /proc/mounts. Windows and, AFAIK, other UNIXes don't need
+any special explicit options and perform the necessary barrier actions
+on write-back caching devices by default.
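+
+For example, on a Linux initiator you could enable barriers for an
+Ext3 file system like this (a sketch; the device and mount point are
+hypothetical):
+
+# mount -o barrier=1 /dev/sdb1 /mnt/data
+# grep barrier /proc/mounts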
+
+But even with journaled file systems your unsaved cached data will
+still be lost in case of power/hardware/software failures, so you may
+need to supply your target server with a good UPS capable of
+gracefully shutting down your target on power shortage, or disable
+write-back caching using the WRITE_THROUGH flag. Note that on some
+real-life workloads write-through caching might perform better than
+write-back caching with the barrier protection turned on. Also note
+that without barriers enabled (i.e. by default) Linux doesn't
+guarantee that after sync()/fsync() all written data have really hit
+permanent storage. They can be stored in the cache of your backstorage
+devices and, hence, lost on a power failure. Thus, even with the
+write-through cache mode, you still either need to enable barriers on
+the backend file system on the target (for device nodes this is,
+indeed, impossible), or need a good UPS to protect yourself from loss
+of uncommitted data.
+
+To limit this data loss you can use the files in /proc/sys/vm to limit
+the amount of unflushed data in the system cache.
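+
+For example, a sketch that reduces the amount of dirty (unflushed)
+data the kernel may accumulate (the values are illustrative, not
+recommendations):
+
+# echo 5 >/proc/sys/vm/dirty_background_ratio
+# echo 10 >/proc/sys/vm/dirty_ratio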
+
+BLOCKIO VDISK mode
+------------------
+
+This module works best for the following types of scenarios:
+
+1) Data that are not aligned to 4K sector boundaries and <4K block
+sizes are used, which is normally found in virtualization environments
+where operating systems start partitions on odd sectors (Windows and
+its sector 63).
+
+2) Large block data transfers normally found in database loads/dumps
+and streaming media.
+
+3) Advanced relational database systems that perform their own
+caching, which prefer or demand direct I/O access and, because of the
+nature of their data access, can actually see worse performance with
+indiscriminate caching.
+
+4) Multiple layers of targets, where the secondary and above layers
+need to have a consistent view of the primary targets in order to
+preserve data integrity, which a page cache backed I/O type might not
+provide reliably.
+
+Also, it has an advantage over FILEIO in that it doesn't copy data
+between the system cache and the commands' data buffers, which saves a
+considerable amount of CPU power and memory bandwidth.
+
+IMPORTANT: Since data in BLOCKIO and FILEIO modes are not consistent between
+=========  them, if you try to use a device in both those modes simultaneously,
+           you will almost instantly corrupt your data on that device.
+
+Pass-through mode
+-----------------
+
+In the pass-through mode (i.e. using the pass-through device handlers
+scst_disk, scst_tape, etc.) SCSI commands coming from remote
+initiators are passed to the local SCSI devices on the target as is,
+without any modifications.
+
+In the SYSFS interface all real SCSI devices are listed in
+/sys/kernel/scst_tgt/devices in the form of host:channel:id:lun
+numbers, for instance 1:0:0:0. The recommended way to match those
+numbers to your devices is the lsscsi utility.
+
+When a pass-through dev handler is loaded it assigns itself to all
+existing SCSI devices of its SCSI type. If you later want to unassign some
+SCSI device from it or assign it to another dev handler you can use the
+following interface.
+
+Each pass-through dev handler has a "mgmt" file in its root
+subdirectory /sys/kernel/scst_tgt/handlers/handler_name, e.g.
+/sys/kernel/scst_tgt/handlers/dev_disk. It accepts the following
+commands, which can be sent to it using, e.g., the echo command.
+
+ - "assign" - this command assigns SCSI device with
+host:channel:id:lun numbers to this dev handler.
+
+echo "assign 1:0:0:0" >mgmt
+
+will assign SCSI device 1:0:0:0 to this dev handler.
+
+ - "unassign" - this command unassigns SCSI device with
+host:channel:id:lun numbers from this dev handler.
+
+As usually, on read the "mgmt" file returns small help about available
+commands.
+
+Like any other hardware, the local SCSI hardware cannot handle
+commands whose amount of data and/or scatter-gather segment count
+exceed certain values. Therefore, when using the pass-through mode you
+should make sure that the maximum number of segments and the maximum
+amount of transferred data per SCSI command on the initiators' devices
+are not bigger than the corresponding values of the corresponding SCSI
+devices on the target. Otherwise you will see symptoms like small
+transfers working well while large ones stall, and messages like
+"Unable to complete command due to SG IO count limitation" printed in
+the kernel logs.
+
+You can't control the limit of scatter-gather segments from user
+space, but for block devices it is usually sufficient to set
+/sys/block/DEVICE_NAME/queue/max_sectors_kb on the initiators to the
+same or a lower value than /sys/block/DEVICE_NAME/queue/max_hw_sectors_kb
+of the corresponding devices on the target.
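+
+For example (a sketch; sdb on the target is assumed to be the
+pass-through device that is imported as sdc on the initiator):
+
+target# cat /sys/block/sdb/queue/max_hw_sectors_kb
+512
+initiator# echo 512 >/sys/block/sdc/queue/max_sectors_kb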
+
+For non-block devices SCSI commands are usually generated directly by
+applications, so if you experience large-transfer stalls, you should
+check your application's documentation for how to limit the transfer
+sizes.
+
+Another way to solve this issue is to build SG entries with more than 1
+page each. See the following patch as an example:
+http://scst.sourceforge.net/sgv_big_order_alloc.diff
+
+User space mode using scst_user dev handler
+-------------------------------------------
+
+The user space program fileio_tgt uses the interface of the scst_user
+dev handler and allows seeing how it works in various modes.
+Fileio_tgt provides mostly the same functionality as the scst_vdisk
+handler, with the most noticeable difference being that it supports
+O_DIRECT mode. O_DIRECT mode is basically the same as BLOCKIO, but it
+also supports files, so for some loads it can be significantly faster
+than regular FILEIO access. All the words about BLOCKIO above apply to
+O_DIRECT as well. See fileio_tgt's README file for more details.
+
+Performance
+-----------
+
+From the very beginning SCST has been designed and implemented to
+provide the best possible performance. Since there is no "one size
+fits all" best-performance configuration for different setups and
+loads, SCST provides an extensive set of settings to allow tuning it
+for the best performance in each particular case. You don't
+necessarily have to use those settings. If you don't, SCST will do a
+very good job of autotuning for you, so the resulting performance
+will, on average, be better (sometimes much better) than with other
+SCSI targets. But in some cases you can improve it even more by manual
+tuning.
+
+Before doing any performance measurements, note that performance
+results depend very much on your type of load, so it is crucial that
+you choose the access mode (FILEIO, BLOCKIO, O_DIRECT, pass-through)
+which suits your needs best.
+
+In order to get the maximum performance you should:
+
+1. For SCST:
+
+ - Disable in Makefile CONFIG_SCST_STRICT_SERIALIZING, CONFIG_SCST_EXTRACHECKS,
+   CONFIG_SCST_TRACING, CONFIG_SCST_DEBUG*, CONFIG_SCST_STRICT_SECURITY
+
+ - For pass-through devices enable
+   CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ.
+
+2. For target drivers:
+
+ - Disable in Makefiles CONFIG_SCST_EXTRACHECKS, CONFIG_SCST_TRACING,
+   CONFIG_SCST_DEBUG*
+
+3. For device handlers, including VDISK:
+
+ - Disable in Makefile CONFIG_SCST_TRACING and CONFIG_SCST_DEBUG.
+
+IMPORTANT: Some of the above compilation options are enabled by default
+=========  in the SCST SVN, i.e. the development version of SCST is
+           optimized for development and bug hunting, not for
+           performance. For performance, you can set the above
+           options, except
+           CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ, to the
+           needed values with the command "make debug2perf" run in
+           trunk/.
+
+4. Make sure you have io_grouping_type option set correctly, especially
+in the following cases:
+
+ - Several initiators share your target's backstorage. This can be a
+   shared LU using some cluster FS, like VMFS, as well as different
+   LUs located on the same backstorage (RAID array). For instance, you
+   may have 3 initiators, each using its own dedicated FILEIO device
+   file from the same RAID-6 array on the target.
+
+   In this case, for the best performance you should have the
+   io_grouping_type option set to the value "never" in all the LUNs'
+   targets and security groups.
+
+ - Your initiator is connected to your target in MPIO mode. In this
+   case, for the best performance you should:
+
+    * Either connect all the sessions from the initiator to a single
+      target or security group and have the io_grouping_type option
+      set to the value "this_group_only" in the target or security
+      group,
+
+    * Or, if it isn't possible to connect all the sessions from the
+      initiator to a single target or security group, assign the same
+      numeric io_grouping_type value to each target/security group
+      this initiator is connected to. The exact value itself doesn't
+      matter; what matters is only that all the targets/security
+      groups use the same value.
+
+Don't forget, io_grouping_type makes sense only if you use CFQ I/O
+scheduler on the target and for devices with threads_num >= 0 and, if
+threads_num > 0, with threads_pool_type "per_initiator".
+
+You can check whether io_grouping_type is set correctly in your setup,
+as well as whether the "auto" io_grouping_type value works for you, by
+tests like the following:
+
+ - For the non-MPIO case you can run single-thread sequential reading,
+   e.g. using buffered dd, from one initiator, then run the same
+   single-thread sequential reading from the second initiator in
+   parallel. If io_grouping_type is set correctly, the aggregate
+   throughput measured on the target should decrease only slightly,
+   and all initiators should have a nearly equal share of it. If
+   io_grouping_type is not set correctly, the aggregate throughput
+   and/or the throughput on any initiator will decrease significantly,
+   by a factor of 2 or even more. For instance, suppose you have
+   80MB/s single-thread sequential reading from the target on either
+   initiator. When both initiators then read in parallel, you should
+   see on the target an aggregate throughput of about 70-75MB/s with a
+   correct io_grouping_type, and something like 35-40MB/s or 8-10MB/s
+   on any initiator with an incorrect one.
+
+ - For the MPIO case it's even easier. With an incorrect
+   io_grouping_type you simply won't see a performance increase from
+   adding the second session (assuming your hardware is capable of
+   transferring data through both sessions in parallel), or you may
+   even see a performance decrease.
+
+5. If you are going to use your target in a VM environment, for
+instance as shared storage with VMware, make sure all your VMs are
+connected to the target via *separate* sessions. For instance, for
+iSCSI it means that each VM has its own connection to the target,
+rather than all VMs being connected using a single connection. You can
+check this using the SCST sysfs interface. For other transports you
+should use the available facilities, like NPIV for Fibre Channel, to
+create separate sessions for each VM. If you miss this, parallel
+access to your target from different VMs can lose a great deal of
+performance. This doesn't apply if your VMs use the same shared
+storage, like with VMFS, for instance. In this case all your VM hosts
+will be connected to the target via separate sessions, which is
+enough.
+
+6. For other target and initiator software parts:
+
+ - Make sure you have applied all available SCST patches to your
+   kernel. If such a patch doesn't exist for your kernel version, it
+   is strongly recommended to upgrade your kernel to a version for
+   which it exists.
+
+ - Don't enable debug/hacking features in the kernel, i.e. use them as
+   they are by default.
+
+ - The default kernel read-ahead and queuing settings are optimized
+   for locally attached disks, therefore they are not optimal for
+   remotely attached disks (the SCSI target case), which sometimes
+   leads to unexpectedly low throughput. You should increase the
+   read-ahead size to at least 512KB or even more on all initiators
+   and the target.
+
+   You should also limit on all initiators the maximum number of
+   sectors per SCSI command. This tuning is also recommended on
+   targets with large read-ahead values. To do it on Linux, run:
+
+   echo "64" >/sys/block/sdX/queue/max_sectors_kb
+
+   where instead of X specify the letter of your device imported from
+   the target, like 'b', i.e. sdb.
+
+   To increase read-ahead size on Linux, run:
+
+   blockdev --setra N /dev/sdX
+
+   where N is a read-ahead number in 512-byte sectors and X is a device
+   letter like above.
+
+   Note: you need to set the read-ahead setting for device sdX again
+   after you change the maximum number of sectors per SCSI command for
+   that device.
+
+   Note2: you need to restart SCST after you change the read-ahead
+   settings on the target.
+
+ - You may need to increase the number of requests that the OS on the
+   initiator sends to the target device. To do it on Linux initiators,
+   run:
+
+   echo "64" >/sys/block/sdX/queue/nr_requests
+
+   where X is a device letter like above.
+
+   You may also experiment with other parameters in the /sys/block/sdX
+   directory; they also affect performance. If you find the best
+   values, please share them with us.
+
+ - On the target use the CFQ I/O scheduler. In most cases it has a
+   performance advantage over other I/O schedulers, sometimes a huge
+   one (2+ times aggregate throughput increase).
+
+ - It is recommended to turn the kernel preemption off, i.e. set
+   the kernel preemption model to "No Forced Preemption (Server)".
+
+ - XFS appears to be the best filesystem on the target for storing
+   device files, because it allows considerably better linear write
+   throughput than ext3.
+
+7. For hardware on the target:
+
+ - Make sure that your target hardware (e.g. target FC or network
+   card) and the underlying I/O hardware (e.g. the I/O card, like the
+   SATA, SCSI or RAID controller to which your disks are connected)
+   don't share the same PCI bus. You can check this using the lspci
+   utility. They have to work in parallel, so it is better if they
+   don't compete for the bus. The problem is not only in the
+   bandwidth, which they have to share, but also in the interaction
+   between the cards during that competition. This is very important,
+   because in some cases, if the target and backend storage
+   controllers share the same PCI bus, it can lead to 5-10 times lower
+   performance than expected. Moreover, some motherboards
+   (particularly by Supermicro) have serious stability issues if
+   several high-speed devices work in parallel on the same bus. If you
+   have no choice but to share a PCI bus, set the PCI latency in the
+   BIOS as low as possible.
+
+8. If you use the VDISK I/O module in FILEIO mode, the NV_CACHE option
+will provide the best performance. But when using it, make sure you
+have a good UPS with the ability to shut down the target on a power
+failure.
+
+You can find baseline performance numbers in these measurements:
+http://lkml.org/lkml/2009/3/30/283.
+
+IMPORTANT: If you use some versions of Windows (at least W2K) on the
+=========  initiator, you can't get good write performance for VDISK
+           FILEIO devices with the default 512-byte block size; you may
+           get about 10% of the expected performance. This is because
+           of the partition alignment, which is (simplifying)
+           incompatible with how the Linux page cache works, so for
+           each write the corresponding block must be read first. Use a
+           4096-byte block size for VDISK devices and you will get the
+           expected write performance. Actually, any OS on the
+           initiators, not only Windows, will benefit from a block size
+           of max(PAGE_SIZE, BLOCK_SIZE_ON_UNDERLYING_FS), where
+           PAGE_SIZE is the page size and BLOCK_SIZE_ON_UNDERLYING_FS
+           is the block size of the underlying FS on which the device
+           file is located, or 0 if a device node is used. Both values
+           are taken on the target. See also the important notes above
+           about setting block sizes >512 bytes for VDISK FILEIO
+           devices.
+
+9. In some cases, for instance when working with SSD devices whose
+internal data-transfer threads consume 100% of a single CPU, to
+maximize IOPS it can be necessary to assign dedicated CPUs to those
+threads using the Linux CPU affinity facilities. No IRQ processing
+should be done on those CPUs; check that using /proc/interrupts. See
+the taskset command and Documentation/IRQ-affinity.txt in your
+kernel's source tree for how to assign CPU affinity to tasks and IRQs.
+
+The reason for that is that processing of incoming commands in SIRQ
+context might be done on the same CPUs as the SSD devices' threads
+doing data transfers. As a result, those threads won't receive the
+full processing power of those CPUs and will perform worse.
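+
+For example, a sketch that pins a data-transfer thread to CPU 3 and
+keeps an IRQ on CPUs 0-2 (the PID 12345 and IRQ 17 are hypothetical;
+smp_affinity takes a hex CPU mask):
+
+# taskset -pc 3 12345
+# echo 7 >/proc/irq/17/smp_affinity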
+
+Work if target's backstorage or link is too slow
+------------------------------------------------
+
+Under high I/O load, when your target's backstorage gets overloaded,
+or when working over a slow link between initiator and target, where
+the link can't serve all the queued commands on time, you can
+experience I/O stalls or see abort or reset messages in the kernel
+log.
+
+First, consider the case of a too slow target backstorage. On some
+seek-intensive workloads even fast disks or RAIDs, which are able to
+serve a continuous data stream at 500+ MB/s, can be as slow as
+0.3 MB/s. Another possible cause can be MD/LVM/RAID on your target, as
+in http://lkml.org/lkml/2008/2/27/96 (check the whole thread as well).
+
+Thus, in such situations simply processing one or more commands takes
+too long, hence the initiator decides that they are stuck on the
+target and tries to recover. In particular, it is known that the
+default number of simultaneously queued commands (48) is sometimes too
+high if you do intensive writes from VMware on a target disk that uses
+LVM in snapshot mode. In this case a value like 16, or even 8-10
+depending on your backstorage speed, may be more appropriate.
+
+Unfortunately, SCST currently lacks dynamic I/O flow control, where
+the queue depth on the target would be dynamically decreased/increased
+based on how slow/fast the backstorage is compared to the target link.
+So there are 6 possible actions you can take to work around or fix
+this issue:
+
+1. Ignore incoming task management (TM) commands. It's fine if there
+are not too many of them, so that average performance isn't hurt and
+the corresponding device isn't put offline, i.e. if the backstorage
+isn't too slow.
+
+2. Decrease /sys/block/sdX/device/queue_depth on the initiator, if it
+runs Linux (see below how), and/or the SCST_MAX_TGT_DEV_COMMANDS
+constant in the scst_priv.h file, until you stop seeing incoming TM
+commands. The iSCSI-SCST driver also has its own iSCSI-specific
+parameter for that; see its README file.
+
+To decrease the device queue depth on Linux initiators you can run the
+command:
+
+# echo Y >/sys/block/sdX/device/queue_depth
+
+where Y is the new number of simultaneously queued commands and X is
+your imported device letter, like 'a' for the sda device. There are no
+special limitations on the Y value; it can be anything from 1 to the
+possible maximum (usually 32), so start by dividing the current value
+by 2, i.e. set 16 if /sys/block/sdX/device/queue_depth contains 32.
+
+3. Increase the corresponding timeout on the initiator. For Linux it
+is located in
+/sys/devices/platform/host*/session*/target*:0:0/*:0:0:1/timeout. It
+can be done automatically by a udev rule. For instance, the following
+rule will increase it to 300 seconds:
+
+SUBSYSTEM=="scsi", KERNEL=="[0-9]*:[0-9]*", ACTION=="add", ATTR{type}=="0|7|14", ATTR{timeout}="300"
+
+By default, this timeout is 30 or 60 seconds, depending on your distribution.
+
+4. Try to avoid such seek intensive workloads.
+
+5. Increase speed of the target's backstorage.
+
+6. Implement dynamic I/O flow control in SCST. This would be the
+ultimate solution. See the "Dynamic I/O flow control" section on the
+http://scst.sourceforge.net/contributing.html page for a possible
+implementation idea.
+
+Next, consider the case of a too slow link between initiator and
+target, when the initiator tries to simultaneously push N commands to
+the target over it. In this case the time to serve those commands,
+i.e. to send or receive their data over the link, can be longer than
+the timeout of any single command, hence one or more commands at the
+tail of the queue cannot be served within the timeout, so the
+initiator will decide that they are stuck on the target and will try
+to recover.
+
+To work around/fix this issue you can use ways 1, 2, 3 and 6 above, or
+(7) increase the speed of the link between target and initiator. But
+with some initiator implementations, for WRITE commands there might be
+cases when the target has no way to detect the issue, so dynamic I/O
+flow control will not be able to help. In those cases you may also
+need, on the initiator(s), to either decrease the queue depth (way 2)
+or increase the corresponding timeout (way 3).
+
+Note that logged messages about QUEUE_FULL status are of a quite
+different nature. This is normal operation, just SCSI flow control in
+action. Simply don't enable the "mgmt_minor" logging level, or,
+alternatively, if you are confident in the worst-case performance of
+your back-end storage or initiator-target link, you can increase
+SCST_MAX_TGT_DEV_COMMANDS in scst_priv.h to 64. Usually initiators
+don't try to push more commands to the target.
+
+Credits
+-------
+
+Thanks to:
+
+ * Mark Buechler <mark.buechler@gmail.com> for a lot of useful
+   suggestions, bug reports and help in debugging.
+
+ * Ming Zhang <mingz@ele.uri.edu> for fixes and comments.
+
+ * Nathaniel Clark <nate@misrule.us> for fixes and comments.
+
+ * Calvin Morrow <calvin.morrow@comcast.net> for testing and useful
+   suggestions.
+
+ * Hu Gang <hugang@soulinfo.com> for the original version of the
+   LSI target driver.
+
+ * Erik Habbinga <erikhabbinga@inphase-tech.com> for fixes and support
+   of the LSI target driver.
+
+ * Ross S. W. Walker <rswwalker@hotmail.com> for the original block IO
+   code and Vu Pham <huongvp@yahoo.com> who updated it for the VDISK dev
+   handler.
+
+ * Michael G. Byrnes <michael.byrnes@hp.com> for fixes.
+
+ * Alessandro Premoli <a.premoli@andxor.it> for fixes.
+
+ * Nathan Bullock <nbullock@yottayotta.com> for fixes.
+
+ * Terry Greeniaus <tgreeniaus@yottayotta.com> for fixes.
+
+ * Krzysztof Blaszkowski <kb@sysmikro.com.pl> for many fixes and bug reports.
+
+ * Jianxi Chen <pacers@users.sourceforge.net> for fixing problem with
+   devices >2TB in size
+
+ * Bart Van Assche <bart.vanassche@gmail.com> for a lot of help
+
+ * Daniel Debonzi <debonzi@linux.vnet.ibm.com> for a big part of SCST sysfs tree
+   implementation
+
+Vladislav Bolkhovitin <vst@vlnb.net>, http://scst.sourceforge.net

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 12/12/1/5] SCST sysfs rules
       [not found] ` <4BC45841.2090707@vlnb.net>
  2010-04-13 13:07   ` [PATCH][RFC 11/12/1/5] SCST core's README Vladislav Bolkhovitin
@ 2010-04-13 13:07   ` Vladislav Bolkhovitin
  1 sibling, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:07 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Vu Pham, Bart Van Assche, James Smart, Joe Eykholt, Andy Yan,
	linux-driver

This patch contains SCST sysfs rules.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 SysfsRules |  795 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 795 insertions(+)

diff -uprN orig/linux-2.6.33/Documentation/scst/SysfsRules linux-2.6.33/Documentation/scst/SysfsRules
--- orig/linux-2.6.33/Documentation/scst/SysfsRules
+++ linux-2.6.33/Documentation/scst/SysfsRules
@@ -0,0 +1,795 @@
+		SCST SYSFS interface rules
+		==========================
+
+This file describes the SYSFS interface rules, which all SCST target
+drivers, dev handlers and management utilities MUST follow. This
+allows having a simple, self-documented management interface that is
+independent of target drivers and dev handlers.
+
+Words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+document are to be interpreted as described in RFC 2119.
+
+In this document "key attribute" means a configuration attribute with
+a non-default value, which must be configured during the target
+driver's initialization. A key attribute MUST have the keyword "[key]"
+in its last line. If the default value is set to a key attribute, it
+becomes a regular non-key attribute. For instance, the iSCSI target
+has the attribute DataDigest, whose default value is "None". If the
+value "CRC32C" is set for this attribute, it becomes a key attribute.
+If the value "None" is set again, the attribute becomes a non-key
+attribute again.
+
+Each user-configurable attribute with a non-default value MUST be
+marked as a key attribute.
+
+I. Rules for target drivers
+===========================
+
+SCST core for each target driver (struct scst_tgt_template) creates a
+root subdirectory in /sys/kernel/scst_tgt/targets with name
+scst_tgt_template.name (called "target_driver_name" further in this
+document). 
+
+For each target (struct scst_tgt) SCST core creates a root subdirectory
+in /sys/kernel/scst_tgt/targets/target_driver_name with name
+scst_tgt.tgt_name (called "target_name" further in this document).
+
+There are 2 types of targets possible: hardware and virtual targets.
+Hardware targets are targets corresponding to real hardware, for
+instance, a Fibre Channel adapter's port. Virtual targets are hardware
+independent targets, which can be dynamically added or removed, for
+instance, an iSCSI target, or an NPIV Fibre Channel target.
+
+A target driver supporting virtual targets must support the "mgmt"
+attribute and the "add_target"/"del_target" commands.
+
+If a target driver supports both hardware and virtual targets (for
+instance, an FC adapter supporting NPIV, which has hardware targets
+for its physical ports as well as virtual NPIV targets), it MUST
+create each hardware target with the hw_target mark to make the SCST
+core create the "hw_target" attribute (see below).
+
+Attributes for target drivers
+-----------------------------
+
+A target driver MAY support the following optional attributes in its
+root subdirectory. Target drivers MAY also support other read-only or
+read-writable attributes there.
+
+1. "enabled" - this attribute MUST allow to enable and disable target
+driver as a whole, i.e. if disabled, the target driver MUST NOT accept
+new connections. The goal of this attribute is to allow the target
+driver's initial configuration. For instance, iSCSI target may need to
+have discovery user names and passwords set before it starts serving
+discovery connections.
+
+This attribute MUST have read and write permissions for superuser and be
+read-only for other users.
+
+On read it MUST return 0, if the target driver is disabled, and 1, if it
+is enabled.
+
+On write it MUST accept '0' character as request to disable and '1' as
+request to enable, but MAY also accept other driver specific commands.
+
+During disabling the target driver MAY close already connected sessions
+in all targets, but this is OPTIONAL.
+
+MUST be 0 by default.
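+
+For example (a sketch; the path assumes the iSCSI target driver):
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/enabled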
+
+2. "trace_level" - this attribute SHOULD allow to change log level of this
+driver. 
+
+This attribute SHOULD have read and write permissions for superuser and be
+read-only for other users.
+
+On read it SHOULD return a help text about the available commands and
+log levels.
+
+On write it SHOULD accept commands to change log levels according to the
+help text.
+
+For example:
+
+out_of_mem | minor | pid | line | function | special | mgmt | mgmt_dbg | flow_control | conn
+
+Usage:
+        echo "all|none|default" >trace_level
+        echo "value DEC|0xHEX|0OCT" >trace_level
+        echo "add|del TOKEN" >trace_level
+
+where TOKEN is one of [debug, function, line, pid,
+		       entryexit, buff, mem, sg, out_of_mem,
+		       special, scsi, mgmt, minor,
+		       mgmt_dbg, scsi_serializing,
+		       retry, recv_bot, send_bot, recv_top,
+		       send_top, d_read, d_write, conn, conn_dbg, iov, pdu, net_page]
+
+3. "version" - this read-only for all attribute SHOULD return version of
+the target driver and some info about its enabled compile time facilities.
+
+For example:
+
+2.0.0
+EXTRACHECKS
+DEBUG
+ 
+4. "mgmt" - this attribute MUST allow to add and delete targets, if
+virtual targets are supported by this driver, as well as it MAY allow to
+add and delete the target driver's or its targets' attributes.
+
+This attribute MUST have read and write permissions for superuser and be
+read-only for other users.
+
+On read it MUST return a help string describing available commands and
+parameters.
+
+For example:
+
+Usage: echo "add_target target_name [parameters]" >mgmt
+       echo "del_target target_name" >mgmt
+       echo "add_attribute IncomingUser name password" >mgmt
+       echo "del_attribute IncomingUser name" >mgmt
+       echo "add_attribute OutgoingUser name password" >mgmt
+       echo "del_attribute OutgoingUser name" >mgmt
+       echo "add_target_attribute target_name IncomingUser name password" >mgmt
+       echo "del_target_attribute target_name IncomingUser name" >mgmt
+       echo "add_target_attribute target_name OutgoingUser name password" >mgmt
+       echo "del_target_attribute target_name OutgoingUser name" >mgmt
+
+where parameters are one or more param_name=value pairs separated by ';'                                                                
+
+4.1. "add_target" - if supported, this command MUST add new target with
+name "target_name" and specified optional or required parameters. Each
+parameter MUST be in form "parameter=value". All parameters MUST be
+separated by ';' symbol.
+
+All target drivers supporting creation of virtual targets MUST support
+this command.
+
+All target drivers supporting "add_target" command MUST support all
+read-only targets' key attributes as parameters to "add_target" command
+with the attributes' names as parameters' names and the attributes'
+values as parameters' values. 
+
+For example:
+
+echo "add_target TARGET1 parameter1=1; parameter2=2" >mgmt
+
+will add a target named "TARGET1" with parameters "parameter1" and
+"parameter2" set to the values 1 and 2 respectively.
+
+4.2. "del_target" - if supported, this command MUST delete target with
+name "target_name". If "add_target" command is supported "del_target"
+MUST also be supported.
+
+4.3. "add_attribute" - if supported, this command MUST add a target
+driver's attribute with the specified name and one or more values.
+
+All target drivers supporting run time creation of the target driver's
+key attributes MUST support this command.
+
+For example, for iSCSI target:
+
+echo "add_attribute IncomingUser name password" >mgmt
+
+will add for discovery sessions an incoming user (attribute
+/sys/kernel/scst_tgt/targets/iscsi/IncomingUser) with name "name" and
+password "password".
+
+4.4. "del_attribute" - if supported, this command MUST delete  target
+driver's attribute with the specified name and values. The values MUST
+be specified, because in some cases attributes MAY internally be
+distinguished by values. For instance, iSCSI target might have several
+incoming users. If not needed, target driver might ignore the values.
+
+If "add_attribute" command is supported "del_attribute" MUST
+also be supported.
+
+4.5. "add_target_attribute" - if supported, this command MUST add new
+attribute for the specified target with the specified name and one or
+more values.
+
+All target drivers supporting run time creation of targets' key
+attributes MUST support this command.
+
+For example:
+
+echo "add_target_attribute iqn.2006-10.net.vlnb:tgt IncomingUser name password" >mgmt
+
+will add for target with name "iqn.2006-10.net.vlnb:tgt" an incoming
+user (attribute
+/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/IncomingUser)
+with name "name" and password "password".
+
+4.6. "del_target_attribute" - if supported, this command MUST delete 
+target's attribute with the specified name and values. The values MUST
+be specified, because in some cases attributes MAY internally be
+distinguished by values. For instance, iSCSI target might have several
+incoming users. If not needed, target driver might ignore the values.
+
+If "add_target_attribute" command is supported "del_target_attribute"
+MUST also be supported.
+
+Attributes for targets
+----------------------
+
+Each target MAY support the following optional attributes in its root
+subdirectory. Target drivers MAY also support other read-only or
+read-writable attributes there.
+
+1. "enabled" - this attribute MUST allow to enable and disable the
+corresponding target, i.e. if disabled, the target MUST NOT accept new
+connections. The goal of this attribute is to allow the target's initial
+configuration. For instance, each target needs to have its LUNs setup
+before it starts serving initiators. Another example is iSCSI target,
+which may need to have initialized a number of iSCSI parameters before
+it starts accepting new iSCSI connections.
+
+This attribute MUST have read and write permissions for superuser and be
+read-only for other users.
+
+On read it MUST return 0, if the target is disabled, and 1, if it is
+enabled.
+
+On write it MUST accept '0' character as request to disable and '1' as
+request to enable. Other requests MUST be rejected.
+
+SCST core provides some facilities, which MUST be used to implement this
+attribute.
+
+During disabling the target driver MAY close already connected sessions
+to the target, but this is OPTIONAL.
+
+MUST be 0 by default.
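+
+For example (a sketch; the target name is hypothetical):
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/enabled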
+
+SCST core will automatically create for all targets the following
+attributes:
+
+1. "rel_tgt_id" - allows to read or write SCSI Relative Target Port
+Identifier attribute.
+
+2. "hw_target" - allows to distinguish hardware and virtual targets, if
+the target driver supports both.
+
+To provide the OPTIONAL force close session functionality, target
+drivers MUST implement it using the write-only session attribute
+"force_close", a write to which MUST close the corresponding session.
+
+See SCST core's README for more info about those attributes.
+
+II. Rules for dev handlers
+==========================
+
+There are 2 types of dev handlers: parent dev handlers and children
+dev handlers. The children dev handlers depend on the parent dev
+handlers.
+
+For each parent dev handler (struct scst_dev_type with the parent
+member set to NULL) the SCST core creates a root subdirectory in
+/sys/kernel/scst_tgt/handlers with the name scst_dev_type.name (called
+"dev_handler_name" further in this document).
+
+Parent dev handlers can have one or more subdirectories for children
+dev handlers, with the scst_dev_type.name of those children as names.
+
+Only one level of the dev handlers' parent/children hierarchy is
+allowed. Parent dev handlers, which support children dev handlers, MUST
+NOT handle devices and MUST be only placeholders for the children dev
+handlers.
+
+Further in this document children dev handlers or parent dev handlers,
+which don't support children, will be called "end level dev handlers".
+
+Each end level dev handler MUST be either for pass-through, or for
+virtual devices.
+
+End level dev handlers can be recognized by existence of the "mgmt"
+attribute.
+
+For each device (struct scst_device) the SCST core creates a root
+subdirectory in /sys/kernel/scst_tgt/devices with the name
+scst_device.virt_name (called "device_name" further in this document).
+
+Attributes for dev handlers
+---------------------------
+
+Each pass-through dev handler MUST have "pass-through" and "mgmt"
+attributes in its root subdirectory, and the "mgmt" attribute MUST
+support the "assign" and "unassign" commands as described below.
+
+Each virtual device dev handler MUST have a "mgmt" attribute in its
+root subdirectory, which MUST support the "add_device" and
+"del_device" commands as described below.
+
+Parent dev handlers and end level dev handlers without parents MAY
+support the following optional attributes in their root
+subdirectories. They MAY also support other read-only or read-writable
+attributes there.
+
+1. "trace_level" - this attribute SHOULD allow changing the log level
+of this driver.
+
+This attribute SHOULD have read and write permissions for superuser and be
+read-only for other users.
+
+On read it SHOULD return a help text about the available commands and
+log levels.
+
+On write it SHOULD accept commands to change log levels according to the
+help text.
+
+For example:
+
+out_of_mem | minor | pid | line | function | special | mgmt | mgmt_dbg
+
+Usage:
+	echo "all|none|default" >trace_level
+	echo "value DEC|0xHEX|0OCT" >trace_level
+	echo "add|del TOKEN" >trace_level
+
+where TOKEN is one of [debug, function, line, pid,
+		       entryexit, buff, mem, sg, out_of_mem,
+		       special, scsi, mgmt, minor,
+		       mgmt_dbg, scsi_serializing,
+		       retry, recv_bot, send_bot, recv_top,
+		       send_top]
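+
+For example, following the usage above, the command:
+
+echo "add scsi" >trace_level
+
+adds the "scsi" token to the enabled log levels.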
+
+2. "version" - this read-only for all attribute SHOULD return version of
+the dev handler and some info about its enabled compile time facilities.
+
+For example:
+
+2.0.0
+EXTRACHECKS
+DEBUG
+
+End level dev handlers in their root subdirectories MUST support the
+"mgmt" attribute and MAY support other read-only or read-writable
+attributes. The "mgmt" attribute MUST have read and write permissions
+for the superuser and be read-only for other users.
+
+Attribute "mgmt" for virtual devices dev handlers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For virtual devices dev handlers "mgmt" attribute MUST allow to add and
+delete devices as well as it MAY allow to add and delete the dev
+handler's or its devices' attributes.
+
+On read it MUST return a help string describing available commands and
+parameters.
+
+For example:
+
+Usage: echo "add_device device_name [parameters]" >mgmt
+       echo "del_device device_name" >mgmt
+       echo "add_attribute AttributeX ValueX" >mgmt
+       echo "del_attribute AttributeX ValueX" >mgmt
+       echo "add_attribute AttributeY ValueY1 ValueY2" >mgmt
+       echo "del_attribute AttributeY" >mgmt
+       echo "add_device_attribute device_name AttributeDX ValueDX" >mgmt
+       echo "del_device_attribute device_name AttributeDX ValueDX" >mgmt
+       echo "add_device_attribute device_name AttributeDY ValueY1 ValueY2" >mgmt
+       echo "del_device_attribute device_name AttributeDY" >mgmt
+
+where parameters are one or more param_name=value pairs separated by ';'
+
+The following parameters are available: filename, blocksize, write_through, nv_cache, o_direct, read_only, removable
+
+1. "add_device" - this command MUST add new device with name
+"device_name" and specified optional or required parameters. Each
+parameter MUST be in form "parameter=value". All parameters MUST be
+separated by ';' symbol.
+
+All dev handlers supporting "add_device" command MUST support all
+read-only devices' key attributes as parameters to "add_device" command
+with the attributes' names as parameters' names and the attributes'
+values as parameters' values. 
+
+For example:
+
+echo "add_device device1 parameter1=1; parameter2=2" >mgmt
+
+will add device with name "device1" and parameters with names
+"parameter1" and "parameter2" with values 1 and 2 correspondingly.
+
+2. "del_device" - this command MUST delete device with name
+"device_name".
+
+3. "add_attribute" - if supported, this command MUST add a device
+driver's attribute with the specified name and one or more values.
+
+All dev handlers supporting run time creation of the dev handler's
+key attributes MUST support this command.
+
+For example:
+
+echo "add_attribute AttributeX ValueX" >mgmt
+
+will add attribute
+/sys/kernel/scst_tgt/handlers/dev_handler_name/AttributeX with value ValueX.
+
+4. "del_attribute" - if supported, this command MUST delete device
+driver's attribute with the specified name and values. The values MUST
+be specified, because in some cases attributes MAY internally be
+distinguished by values. If not needed, dev handler might ignore the
+values.
+
+If "add_attribute" command is supported "del_attribute" MUST also be
+supported.
+
+5. "add_device_attribute" - if supported, this command MUST add new
+attribute for the specified device with the specified name and one or
+more values.
+
+All dev handlers supporting run time creation of devices' key attributes
+MUST support this command.
+
+For example:
+
+echo "add_device_attribute device1 AttributeDX ValueDX" >mgmt
+
+will add for the device named "device1" the attribute
+/sys/kernel/scst_tgt/devices/device_name/AttributeDX with value
+ValueDX.
+
+6. "del_device_attribute" - if supported, this command MUST delete 
+device's attribute with the specified name and values. The values MUST
+be specified, because in some cases attributes MAY internally be
+distinguished by values. If not needed, dev handler might ignore the
+values.
+
+If "add_device_attribute" command is supported "del_device_attribute"
+MUST also be supported.
+
+Attribute "mgmt" for pass-through devices dev handlers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For pass-through devices dev handlers "mgmt" attribute MUST allow to
+assign and unassign this dev handler to existing SCSI devices.
+
+On read it MUST return a help string describing available commands and
+parameters.
+
+For example:
+
+Usage: echo "assign H:C:I:L" >mgmt
+       echo "unassign H:C:I:L" >mgmt
+
+1. "assign" - this command MUST assign SCSI device with
+host:channel:id:lun numbers to this dev handler.
+
+All pass-through dev handlers MUST support this command.
+
+For example:
+
+echo "assign 1:0:0:0" >mgmt
+
+will assign SCSI device 1:0:0:0 to this dev handler.
+
+2. "unassign" - this command MUST unassign SCSI device with
+host:channel:id:lun numbers from this dev handler.
+
+SCST core will automatically create for all dev handlers the following
+attributes:
+
+1. "pass_through" - if the dev handler is a handler for pass-through
+devices, SCST core create this attribute. This attribute will have value
+1.
+
+2. "type" - SCSI type of device this dev handler can handle.
+
+See SCST core's README for more info about those attributes.
+
+Attributes for devices
+----------------------
+
+Each device MAY support in its root subdirectory any read-only or
+read-writable attributes.
+
+SCST core will automatically create for all devices the following
+attributes:
+
+1. "type" - SCSI type of this device
+
+See SCST core's README for more info about those attributes.
+
+III. Rules for management utilities
+===================================
+
+Rules summary
+-------------
+
+A management utility (scstadmin) SHOULD NOT keep any knowledge specific
+to any device, dev handler, target or target driver. It SHOULD only know
+the common SCST SYSFS rules, which all dev handlers and target drivers
+MUST follow. Namely:
+
+Common rules:
+~~~~~~~~~~~~~
+
+1. All key attributes MUST be marked by the mark "[key]" in the last
+line of the attribute, as shown in the example after this list.
+
+2. All non-key attributes don't matter and SHOULD be ignored.
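+
+For example, reading a hypothetical key attribute "filename" of a
+virtual disk device could return:
+
+/vdisks/disk1.img
+[key]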
+
+For target drivers and targets:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. If target driver supports adding new targets, it MUST have "mgmt"
+attribute, which MUST support "add_target" and "del_target" commands as
+specified above.
+
+2. If target driver supports run time adding new key attributes, it MUST
+have "mgmt" attribute, which MUST support "add_attribute" and
+"del_attribute" commands as specified above.
+
+3. If target driver supports both hardware and virtual targets, all its
+hardware targets MUST have "hw_target" attribute with value 1.
+
+4. If target has read-only key attributes, the add_target command MUST
+support them as parameters.
+
+5. If target supports run time adding new key attributes, the target
+driver MUST have "mgmt" attribute, which MUST support
+"add_target_attribute" and "del_target_attribute" commands as specified
+above.
+
+6. Both target drivers and targets MAY support the "enabled" attribute.
+If supported, after the corresponding target driver or target has been
+configured, "1" MUST be written to this attribute in the following
+order: first for all targets of the target driver, then for the target
+driver, as in the example below.
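+
+For example, for a hypothetical iSCSI target driver with a single
+target the enabling order would be:
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/enabled
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/enabled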
+
+For devices and dev handlers:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Each dev handler in its root subdirectory MUST have "mgmt" attribute.
+
+2. All pass-through dev handlers MUST have "pass_through" attribute.
+
+3. Each virtual devices dev handler MUST support "add_device" and
+"del_device" commands to the "mgmt" attribute as specified above.
+
+4. Each pass-through dev handler MUST support "assign" and "unassign"
+commands to the "mgmt" attribute as specified above.
+
+5. If dev handler driver supports run time adding new key attributes, it
+MUST support "add_attribute" and "del_attribute" commands to the "mgmt"
+attribute as specified above.
+
+6. All device handlers have links in the root subdirectory pointing to
+their devices.
+
+7. If device has read-only key attributes, the "add_device" command MUST
+support them as parameters.
+
+8. If device supports run time adding new key attributes, its dev
+handler MUST support "add_device_attribute" and "del_device_attribute"
+commands to the "mgmt" attribute as specified above.
+
+9. Each device has "handler" link to its dev handler's root
+subdirectory.
+
+Algorithm to convert current SCST configuration to config file
+--------------------------------------------------------------
+
+A management utility SHOULD use the following algorithm when converting
+the current SCST configuration to a config file (a sketch of step 1 is
+given after the algorithm):
+
+1. Scan all attributes in /sys/kernel/scst_tgt (non-recursively) and
+store all found key attributes.
+
+2. Scan all subdirectories of /sys/kernel/scst_tgt/handlers. Each
+subdirectory containing a "mgmt" attribute is the root subdirectory of
+a dev handler whose name is the name of the subdirectory. For each
+found dev handler do the following:
+
+2.1. Store the dev handler's name. Also store the path to its root
+subdirectory, if it isn't the default
+(/sys/kernel/scst_tgt/handlers/handler_name).
+
+2.2. Store all dev handler's key attributes.
+
+2.3. Go through all links in the root subdirectory pointing to
+/sys/kernel/scst_tgt/devices and for each device:
+
+2.3.1. For virtual devices dev handlers:
+
+2.3.1.1. Store the name of the device.
+
+2.3.1.2. Store all key attributes. Mark all read-only key attributes
+while storing; they will be parameters for the device's creation.
+
+2.3.2. For pass-through devices dev handlers:
+
+2.3.2.1. Store the H:C:I:L name of the device. Optionally, instead of
+the name, the unique T10 vendor device ID found using the command:
+
+sg_inq -p 0x83 /dev/sdX
+
+can be stored. It allows this device to be reliably found even if it
+has different host:channel:id:lun numbers after the next reboot. The
+sdX device can be found as the letters after ':' in
+/sys/kernel/scst_tgt/devices/H:C:I:L/scsi_device/device/block:sdX.
+
+3. Go through all subdirectories in /sys/kernel/scst_tgt/targets. For
+each target driver:
+
+3.1. Store the name of the target driver.
+
+3.2. Store all its key attributes.
+
+3.3. Go through all targets' subdirectories. For each target:
+
+3.3.1. Store the name of the target.
+
+3.3.2. Mark whether the target is a hardware or a virtual target. The
+target is a hardware target if it has the "hw_target" attribute or its
+target driver doesn't have the "mgmt" attribute.
+
+3.3.3. Store all key attributes. Mark all read-only key attributes
+while storing; they will be parameters for the target's creation.
+
+3.3.4. Scan all "luns" subdirectory and store:
+
+ - LUN.
+
+ - LU's device name.
+
+ - Key attributes.
+
+3.3.5. Scan all "ini_groups" subdirectories. For each group store the following:
+
+ - The group's name.
+
+ - The group's LUNs (the same info as for 3.3.4).
+
+ - The group's initiators.
+
+3.3.6. Store value of "enabled" attribute, if it exists.
+
+3.4. Store value of "enabled" attribute, if it exists.
+
+Algorithm to initialize SCST from config file
+---------------------------------------------
+
+A management utility SHOULD use the following algorithm when doing the
+initial SCST configuration from a config file. All necessary kernel
+modules and user space programs are supposed to be already loaded,
+hence all dev handlers' entries in /sys/kernel/scst_tgt/handlers, as
+well as all entries for hardware targets, are supposed to be already
+created. (A sketch of step 2.3.1 is given after the algorithm.)
+
+1. Set stored values for all stored global (/sys/kernel/scst_tgt)
+attributes.
+
+2. For each dev handler:
+
+2.1. Set stored values for all already existing stored attributes.
+
+2.2. Create not existing stored attributes using "add_attribute" command.
+
+2.3. For virtual devices dev handlers, for each stored device:
+
+2.3.1. Create the device using the "add_device" command, with the
+marked read-only attributes as parameters.
+
+2.3.2. Set stored values for all already existing stored attributes.
+
+2.3.3. Create the not yet existing stored attributes using the
+"add_device_attribute" command.
+
+2.4. For pass-through dev handlers, for each stored device:
+
+2.4.1. Assign the corresponding pass-through device to this dev handler
+using the "assign" command, if it isn't already auto-assigned to it.
+
+3. For each target driver:
+
+3.1. Set stored values for all already existing stored attributes.
+
+3.2. Create not existing stored attributes using "add_attribute" command.
+
+3.3. For each target:
+
+3.3.1. For virtual targets:
+
+3.3.1.1. Create the target using the "add_target" command, with the
+marked read-only attributes as parameters.
+
+3.3.1.2. Set stored values for all already existing stored attributes.
+
+3.3.1.3. Create the not yet existing stored attributes using the
+"add_target_attribute" command.
+
+3.3.2. For hardware targets:
+
+3.3.2.1. Set stored values for all already existing stored attributes.
+
+3.3.2.2. Create the not yet existing stored attributes using the
+"add_target_attribute" command.
+
+3.3.3. Set up LUNs.
+
+3.3.4. Set up ini_groups, their LUNs and initiators' names.
+
+3.3.5. If this target supports enabling, enable it.
+
+3.4. If this target driver supports enabling, enable it.
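+
+As an illustration of step 2.3.1, recreating one stored virtual disk
+device (hypothetical device and parameter values; "dev_handler_name" as
+defined above) could look like:
+
+echo "add_device disk1 filename=/vdisks/disk1.img; blocksize=512" \
+	>/sys/kernel/scst_tgt/handlers/dev_handler_name/mgmt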
+
+Algorithm to apply changes in config file to currently running SCST
+-------------------------------------------------------------------
+
+A management utility SHOULD use the following algorithm when applying
+changes in a config file to a currently running SCST.
+
+Not all changes can be applied to enabled targets or enabled target
+drivers. On the other hand, for some target drivers enabling/disabling
+is a very long and disruptive operation, which should be performed as
+rarely as possible. Thus, the management utility SHOULD support an
+additional option which, if set, makes it disable all affected targets
+before doing any changes to them.
+
+1. Scan all attributes in /sys/kernel/scst_tgt (non-recursively) and
+compare stored and actual key attributes. Apply all changes.
+
+2. Scan all subdirectories of /sys/kernel/scst_tgt/handlers. Each
+subdirectory containing a "mgmt" attribute is the root subdirectory of
+a dev handler whose name is the name of the subdirectory. For each
+found dev handler do the following:
+
+2.1. Compare stored and actual key attributes. Apply all changes.
+Create new attributes using "add_attribute" commands and delete the no
+longer needed attributes using the "del_attribute" command.
+
+2.2. Compare the existing devices (links in the root subdirectory
+pointing to /sys/kernel/scst_tgt/devices) and the devices stored in the
+config file. Delete all no longer needed devices and create the new
+ones.
+
+2.3. For all existing devices:
+
+2.3.1. Compare stored and actual key attributes. Apply all changes.
+Create new attributes using "add_device_attribute" commands and delete
+the no longer needed attributes using the "del_device_attribute"
+command.
+
+2.3.2. If any read-only key attribute of a virtual device should be
+changed, delete the device and recreate it.
+
+2.3.3. For pass-through devices dev handlers, reassign handlers if
+necessary.
+
+3. Go through all subdirectories in /sys/kernel/scst_tgt/targets. For
+each target driver:
+
+3.1. If this target driver should be disabled, disable it.
+
+3.2. Compare stored and actual key attributes. Apply all changes.
+Create new attributes using "add_attribute" commands and delete the no
+longer needed attributes using the "del_attribute" command.
+
+3.3. Go through all targets' subdirectories. Compare the existing and
+stored targets. Delete all no longer needed targets and create the new
+ones.
+
+3.4. For all existing targets:
+
+3.4.1. If this target should be disabled, disable it.
+
+3.4.2. Compare stored and actual key attributes. Apply all changes.
+Create new attributes using "add_target_attribute" commands and delete
+the no longer needed attributes using the "del_target_attribute"
+command.
+
+3.4.3. If any read-only key attribute of a virtual target should be
+changed, delete the target and recreate it.
+
+3.4.4. Scan all "luns" subdirectory and apply necessary changes, using
+"replace" commands to replace one LUN by another, if needed.
+
+3.4.5. Scan all "ini_groups" subdirectories and apply necessary changes,
+using "replace" commands to replace one LUN by another and "move"
+command to move initiator from one group to another, if needed. It MUST
+be done in the following order:
+
+ - Necessary initiators deleted, if they aren't going to be moved
+
+ - LUNs updated
+ 
+ - Necessary initiators added or moved
+
+3.4.6. If this target should be enabled, enable it.
+
+3.5. If this target driver should be enabled, enable it.
+

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 0/4/4/5] iSCSI-SCST
       [not found] <4BC44A49.7070307@vlnb.net>
                   ` (2 preceding siblings ...)
       [not found] ` <4BC45841.2090707@vlnb.net>
@ 2010-04-13 13:09 ` Vladislav Bolkhovitin
       [not found] ` <4BC45AFA.7060403@vlnb.net>
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:09 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Bart Van Assche

This patchset contains iSCSI-SCST iSCSI target together with documentation.

Vlad



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 1/4/4/5] iSCSI-SCST's Makefile and Kconfig
       [not found] ` <4BC45AFA.7060403@vlnb.net>
@ 2010-04-13 13:10   ` Vladislav Bolkhovitin
  0 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:10 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Bart Van Assche

This patch contains iSCSI-SCST's Makefile and Kconfig.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 Kconfig  |   25 +++++++++++++++++++++++++
 Makefile |    6 ++++++
 2 files changed, 31 insertions(+)
 
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/Makefile linux-2.6.33/drivers/scst/iscsi-scst/Makefile
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/Makefile
+++ linux-2.6.33/drivers/scst/iscsi-scst/Makefile
@@ -0,0 +1,6 @@
+EXTRA_CFLAGS += -Iinclude/scst
+
+iscsi-scst-y := iscsi.o nthread.o config.o digest.o \
+	conn.o session.o target.o event.o param.o
+
+obj-$(CONFIG_SCST_ISCSI) += iscsi-scst.o
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/Kconfig linux-2.6.33/drivers/scst/iscsi-scst/Kconfig
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/Kconfig
+++ linux-2.6.33/drivers/scst/iscsi-scst/Kconfig
@@ -0,0 +1,25 @@
+config SCST_ISCSI
+	tristate "ISCSI Target"
+	depends on SCST && INET
+	default SCST
+	help
+	  ISCSI target driver for the SCST framework. The iSCSI protocol has
+	  been defined in RFC 3720. To use this driver you should download its
+	  user space part from http://scst.sourceforge.net.
+
+config SCST_ISCSI_DEBUG_DIGEST_FAILURES
+	bool "Simulate iSCSI digest failures"
+	depends on SCST_ISCSI
+	help
+	  Simulates iSCSI digest failures in random places. Even when iSCSI
+	  traffic is sent over a TCP connection, the 16-bit TCP checksum is too
+	  weak for the requirements of a storage protocol. Furthermore, there
+	  are also instances where the TCP checksum does not protect iSCSI
+	  data, as when data is corrupted while being transferred on a PCI bus
+	  or while in memory. The iSCSI protocol therefore defines a 32-bit CRC
+	  digest on iSCSI packets in order to detect data corruption on an
+	  end-to-end basis. CRCs can be used on iSCSI PDU headers and/or data.
+	  Enabling this option allows testing digest failure recovery in the
+	  iSCSI initiator that is talking to SCST.
+
+	  If unsure, say "N".

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 2/4/4/5] iSCSI-SCST's header files
       [not found] ` <4BC45E87.6000901@vlnb.net>
@ 2010-04-13 13:10   ` Vladislav Bolkhovitin
  0 siblings, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:10 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Bart Van Assche

This patch contains iSCSI-SCST's header files.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 drivers/scst/iscsi-scst/digest.h    |   31 +
 drivers/scst/iscsi-scst/iscsi.h     |  709 ++++++++++++++++++++++++++++++++++++
 drivers/scst/iscsi-scst/iscsi_dbg.h |   60 +++
 drivers/scst/iscsi-scst/iscsi_hdr.h |  519 ++++++++++++++++++++++++++
 include/scst/iscsi_scst.h           |  207 ++++++++++
 include/scst/iscsi_scst_itf_ver.h   |    2 
 include/scst/iscsi_scst_ver.h       |   18 
 
diff -uprN orig/linux-2.6.33/include/scst/iscsi_scst.h linux-2.6.33/include/scst/iscsi_scst.h
--- orig/linux-2.6.33/include/scst/iscsi_scst.h
+++ linux-2.6.33/include/scst/iscsi_scst.h
@@ -0,0 +1,207 @@
+/*
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef _ISCSI_SCST_U_H
+#define _ISCSI_SCST_U_H
+
+#ifndef __KERNEL__
+#include <sys/uio.h>
+#endif
+
+#include "iscsi_scst_ver.h"
+#include "iscsi_scst_itf_ver.h"
+
+/* The RFC defines the maximum length as 223 bytes. */
+#define ISCSI_NAME_LEN		256
+
+#define ISCSI_LISTEN_PORT	3260
+
+#define SCSI_ID_LEN		24
+
+#ifndef aligned_u64
+#define aligned_u64 uint64_t __attribute__((aligned(8)))
+#endif
+
+#define ISCSI_MAX_ATTR_NAME_LEN		50
+#define ISCSI_MAX_ATTR_VALUE_LEN	512
+
+enum {
+	key_initial_r2t,
+	key_immediate_data,
+	key_max_connections,
+	key_max_recv_data_length,
+	key_max_xmit_data_length,
+	key_max_burst_length,
+	key_first_burst_length,
+	key_default_wait_time,
+	key_default_retain_time,
+	key_max_outstanding_r2t,
+	key_data_pdu_inorder,
+	key_data_sequence_inorder,
+	key_error_recovery_level,
+	key_header_digest,
+	key_data_digest,
+	key_ofmarker,
+	key_ifmarker,
+	key_ofmarkint,
+	key_ifmarkint,
+	session_key_last,
+};
+
+enum {
+	key_queued_cmnds,
+	key_rsp_timeout,
+	key_nop_in_interval,
+	key_max_sessions,
+	target_key_last,
+};
+
+enum {
+	key_session,
+	key_target,
+};
+
+struct iscsi_kern_target_info {
+	u32 tid;
+	u32 cookie;
+	char name[ISCSI_NAME_LEN];
+	u32 attrs_num;
+	aligned_u64 attrs_ptr;
+};
+
+struct iscsi_kern_session_info {
+	u32 tid;
+	aligned_u64 sid;
+	char initiator_name[ISCSI_NAME_LEN];
+	u32 exp_cmd_sn;
+	s32 session_params[session_key_last];
+	s32 target_params[target_key_last];
+};
+
+#define DIGEST_ALL		(DIGEST_NONE | DIGEST_CRC32C)
+#define DIGEST_NONE		(1 << 0)
+#define DIGEST_CRC32C           (1 << 1)
+
+struct iscsi_kern_conn_info {
+	u32 tid;
+	aligned_u64 sid;
+
+	u32 cid;
+	u32 stat_sn;
+	u32 exp_stat_sn;
+	int fd;
+};
+
+struct iscsi_kern_attr {
+	u32 mode;
+	char name[ISCSI_MAX_ATTR_NAME_LEN];
+};
+
+struct iscsi_kern_mgmt_cmd_res_info {
+	u32 tid;
+	u32 cookie;
+	u32 req_cmd;
+	u32 result;
+	char value[ISCSI_MAX_ATTR_VALUE_LEN];
+};
+
+struct iscsi_kern_params_info {
+	u32 tid;
+	aligned_u64 sid;
+
+	u32 params_type;
+	u32 partial;
+
+	s32 session_params[session_key_last];
+	s32 target_params[target_key_last];
+};
+
+enum iscsi_kern_event_code {
+	E_ADD_TARGET,
+	E_DEL_TARGET,
+	E_MGMT_CMD,
+	E_ENABLE_TARGET,
+	E_DISABLE_TARGET,
+	E_GET_ATTR_VALUE,
+	E_SET_ATTR_VALUE,
+	E_CONN_CLOSE,
+};
+
+struct iscsi_kern_event {
+	u32 tid;
+	aligned_u64 sid;
+	u32 cid;
+	u32 code;
+	u32 cookie;
+	char target_name[ISCSI_NAME_LEN];
+	u32 param1_size;
+	u32 param2_size;
+};
+
+struct iscsi_kern_register_info {
+	union {
+		aligned_u64 version;
+		struct {
+			int max_data_seg_len;
+			int max_queued_cmds;
+		};
+	};
+};
+
+struct iscsi_kern_attr_info {
+	u32 tid;
+	u32 cookie;
+	struct iscsi_kern_attr attr;
+};
+
+#define	DEFAULT_NR_QUEUED_CMNDS	32
+#define	MIN_NR_QUEUED_CMNDS	1
+#define	MAX_NR_QUEUED_CMNDS	256
+
+#define DEFAULT_RSP_TIMEOUT	30
+#define MIN_RSP_TIMEOUT		10
+#define MAX_RSP_TIMEOUT		65535
+
+#define DEFAULT_NOP_IN_INTERVAL 30
+#define MIN_NOP_IN_INTERVAL	0
+#define MAX_NOP_IN_INTERVAL	65535
+
+#define NETLINK_ISCSI_SCST	25
+
+#define REGISTER_USERD		_IOWR('s', 0, struct iscsi_kern_register_info)
+#define ADD_TARGET		_IOW('s', 1, struct iscsi_kern_target_info)
+#define DEL_TARGET		_IOW('s', 2, struct iscsi_kern_target_info)
+#define ADD_SESSION		_IOW('s', 3, struct iscsi_kern_session_info)
+#define DEL_SESSION		_IOW('s', 4, struct iscsi_kern_session_info)
+#define ADD_CONN		_IOW('s', 5, struct iscsi_kern_conn_info)
+#define DEL_CONN		_IOW('s', 6, struct iscsi_kern_conn_info)
+#define ISCSI_PARAM_SET		_IOW('s', 7, struct iscsi_kern_params_info)
+#define ISCSI_PARAM_GET		_IOWR('s', 8, struct iscsi_kern_params_info)
+
+#define ISCSI_ATTR_ADD		_IOW('s', 9, struct iscsi_kern_attr_info)
+#define ISCSI_ATTR_DEL		_IOW('s', 10, struct iscsi_kern_attr_info)
+#define MGMT_CMD_CALLBACK	_IOW('s', 11, struct iscsi_kern_mgmt_cmd_res_info)
+
+static inline int iscsi_is_key_internal(int key)
+{
+	switch (key) {
+	case key_max_xmit_data_length:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+#endif
diff -uprN orig/linux-2.6.33/include/scst/iscsi_scst_ver.h linux-2.6.33/include/scst/iscsi_scst_ver.h
--- orig/linux-2.6.33/include/scst/iscsi_scst_ver.h
+++ linux-2.6.33/include/scst/iscsi_scst_ver.h
@@ -0,0 +1,18 @@
+/*
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#define ISCSI_VERSION_STRING_SUFFIX
+
+#define ISCSI_VERSION_STRING	"2.0.0/0.4.17r214" ISCSI_VERSION_STRING_SUFFIX
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/digest.h linux-2.6.33/drivers/scst/iscsi-scst/digest.h
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/digest.h
+++ linux-2.6.33/drivers/scst/iscsi-scst/digest.h
@@ -0,0 +1,31 @@
+/*
+ *  iSCSI digest handling.
+ *
+ *  Copyright (C) 2004 Xiranet Communications GmbH <arne.redlich@xiranet.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __ISCSI_DIGEST_H__
+#define __ISCSI_DIGEST_H__
+
+extern void digest_alg_available(int *val);
+
+extern int digest_init(struct iscsi_conn *conn);
+
+extern int digest_rx_header(struct iscsi_cmnd *cmnd);
+extern int digest_rx_data(struct iscsi_cmnd *cmnd);
+
+extern void digest_tx_header(struct iscsi_cmnd *cmnd);
+extern void digest_tx_data(struct iscsi_cmnd *cmnd);
+
+#endif /* __ISCSI_DIGEST_H__ */
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi_dbg.h linux-2.6.33/drivers/scst/iscsi-scst/iscsi_dbg.h
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi_dbg.h
+++ linux-2.6.33/drivers/scst/iscsi-scst/iscsi_dbg.h
@@ -0,0 +1,60 @@
+/*
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef ISCSI_DBG_H
+#define ISCSI_DBG_H
+
+#define LOG_PREFIX "iscsi-scst"
+
+#include <scst_debug.h>
+
+#define TRACE_D_WRITE		0x80000000
+#define TRACE_CONN_OC		0x40000000
+#define TRACE_D_IOV		0x20000000
+#define TRACE_D_DUMP_PDU	0x10000000
+#define TRACE_NET_PG		0x08000000
+#define TRACE_CONN_OC_DBG	0x04000000
+
+#ifdef CONFIG_SCST_DEBUG
+#define ISCSI_DEFAULT_LOG_FLAGS (TRACE_FUNCTION | TRACE_LINE | TRACE_PID | \
+	TRACE_OUT_OF_MEM | TRACE_MGMT | TRACE_MGMT_DEBUG | \
+	TRACE_MINOR | TRACE_SPECIAL | TRACE_CONN_OC)
+#else
+#define ISCSI_DEFAULT_LOG_FLAGS (TRACE_OUT_OF_MEM | TRACE_MGMT | \
+	TRACE_SPECIAL)
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+struct iscsi_pdu;
+struct iscsi_cmnd;
+extern void iscsi_dump_pdu(struct iscsi_pdu *pdu);
+extern unsigned long iscsi_get_flow_ctrl_or_mgmt_dbg_log_flag(
+	struct iscsi_cmnd *cmnd);
+#else
+#define iscsi_dump_pdu(x) do {} while (0)
+#define iscsi_get_flow_ctrl_or_mgmt_dbg_log_flag(x) do {} while (0)
+#endif
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+extern unsigned long iscsi_trace_flag;
+#define trace_flag iscsi_trace_flag
+#endif
+
+#define TRACE_CONN_CLOSE(args...)	TRACE_DBG_FLAG(TRACE_DEBUG|TRACE_CONN_OC, args)
+#define TRACE_CONN_CLOSE_DBG(args...)	TRACE(TRACE_CONN_OC_DBG, args)
+#define TRACE_NET_PAGE(args...)		TRACE_DBG_FLAG(TRACE_NET_PG, args)
+#define TRACE_WRITE(args...)		TRACE_DBG_FLAG(TRACE_DEBUG|TRACE_D_WRITE, args)
+
+#endif
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi.h linux-2.6.33/drivers/scst/iscsi-scst/iscsi.h
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi.h
+++ linux-2.6.33/drivers/scst/iscsi-scst/iscsi.h
@@ -0,0 +1,709 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __ISCSI_H__
+#define __ISCSI_H__
+
+#include <linux/pagemap.h>
+#include <linux/mm.h>
+#include <linux/net.h>
+#include <net/sock.h>
+
+#include <scst.h>
+
+#include "iscsi_hdr.h"
+#include "iscsi_scst.h"
+
+#include "iscsi_dbg.h"
+
+#define iscsi_sense_crc_error			ABORTED_COMMAND, 0x47, 0x05
+#define iscsi_sense_unexpected_unsolicited_data	ABORTED_COMMAND, 0x0C, 0x0C
+#define iscsi_sense_incorrect_amount_of_data	ABORTED_COMMAND, 0x0C, 0x0D
+
+struct iscsi_sess_params {
+	int initial_r2t;
+	int immediate_data;
+	int max_connections;
+	unsigned int max_recv_data_length;
+	unsigned int max_xmit_data_length;
+	unsigned int max_burst_length;
+	unsigned int first_burst_length;
+	int default_wait_time;
+	int default_retain_time;
+	unsigned int max_outstanding_r2t;
+	int data_pdu_inorder;
+	int data_sequence_inorder;
+	int error_recovery_level;
+	int header_digest;
+	int data_digest;
+	int ofmarker;
+	int ifmarker;
+	int ofmarkint;
+	int ifmarkint;
+};
+
+struct iscsi_tgt_params {
+	int queued_cmnds;
+	unsigned int rsp_timeout;
+	unsigned int nop_in_interval;
+};
+
+struct network_thread_info {
+	struct task_struct *task;
+	unsigned int ready;
+};
+
+struct iscsi_target;
+struct iscsi_cmnd;
+
+struct iscsi_attr {
+	struct list_head attrs_list_entry;
+	struct kobj_attribute attr;
+	struct iscsi_target *target;
+	const char *name;
+};
+
+struct iscsi_target {
+	struct scst_tgt *scst_tgt;
+
+	struct mutex target_mutex;
+
+	struct list_head session_list; /* protected by target_mutex */
+
+	struct list_head target_list_entry;
+	u32 tid;
+
+	/* Protected by iscsi_sysfs_mutex */
+	unsigned int tgt_enabled:1;
+
+	/* Protected by target_mutex */
+	struct list_head attrs_list;
+
+	char name[ISCSI_NAME_LEN];
+};
+
+#define ISCSI_HASH_ORDER	8
+#define	cmnd_hashfn(itt)	hash_long((itt), ISCSI_HASH_ORDER)
+
+struct iscsi_session {
+	struct iscsi_target *target;
+	struct scst_session *scst_sess;
+
+	struct list_head pending_list; /* protected by sn_lock */
+
+	/* Unprotected, since accessed only from a single read thread */
+	u32 next_ttt;
+
+	/* Read only, if there are connection(s) */
+	struct iscsi_tgt_params tgt_params;
+	atomic_t active_cmds;
+
+	spinlock_t sn_lock;
+	u32 exp_cmd_sn; /* protected by sn_lock */
+
+	/* All 3 protected by sn_lock */
+	int tm_active;
+	u32 tm_sn;
+	struct iscsi_cmnd *tm_rsp;
+
+	/* Read only, if there are connection(s) */
+	struct iscsi_sess_params sess_params;
+
+	/*
+	 * In some corner cases commands can be deleted from the hash by a
+	 * thread other than the corresponding read thread. So, to simplify
+	 * error recovery, let's have this lock.
+	 */
+	spinlock_t cmnd_data_wait_hash_lock;
+	struct list_head cmnd_data_wait_hash[1 << ISCSI_HASH_ORDER];
+
+	struct list_head conn_list; /* protected by target_mutex */
+
+	struct list_head session_list_entry;
+
+	/* All protected by target_mutex, where necessary */
+	struct iscsi_session *sess_reinst_successor;
+	unsigned int sess_reinstating:1;
+	unsigned int sess_shutting_down:1;
+
+	/* All don't need any protection */
+	char *initiator_name;
+	u64 sid;
+};
+
+#define ISCSI_CONN_IOV_MAX			(PAGE_SIZE/sizeof(struct iovec))
+
+#define ISCSI_CONN_RD_STATE_IDLE		0
+#define ISCSI_CONN_RD_STATE_IN_LIST		1
+#define ISCSI_CONN_RD_STATE_PROCESSING		2
+
+#define ISCSI_CONN_WR_STATE_IDLE		0
+#define ISCSI_CONN_WR_STATE_IN_LIST		1
+#define ISCSI_CONN_WR_STATE_SPACE_WAIT		2
+#define ISCSI_CONN_WR_STATE_PROCESSING		3
+
+struct iscsi_conn {
+	struct iscsi_session *session; /* owning session */
+
+	/* Both protected by session->sn_lock */
+	u32 stat_sn;
+	u32 exp_stat_sn;
+
+#define ISCSI_CONN_REINSTATING	1
+#define ISCSI_CONN_SHUTTINGDOWN	2
+	unsigned long conn_aflags;
+
+	spinlock_t cmd_list_lock; /* BH lock */
+
+	/* Protected by cmd_list_lock */
+	struct list_head cmd_list; /* incoming/outgoing PDUs */
+
+	atomic_t conn_ref_cnt;
+
+	spinlock_t write_list_lock;
+	/* List of data pdus to be sent, protected by write_list_lock */
+	struct list_head write_list;
+	/* List of data pdus being sent, protected by write_list_lock */
+	struct list_head write_timeout_list;
+
+	/* Protected by write_list_lock */
+	struct timer_list rsp_timer;
+	unsigned int rsp_timeout; /* in jiffies */
+
+	/* Both protected by iscsi_wr_lock */
+	unsigned short wr_state;
+	unsigned short wr_space_ready:1;
+
+	struct list_head wr_list_entry;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	struct task_struct *wr_task;
+#endif
+
+	/*
+	 * All are unprotected, since accessed only from a single write
+	 * thread.
+	 */
+	struct iscsi_cmnd *write_cmnd;
+	struct iovec *write_iop;
+	int write_iop_used;
+	struct iovec write_iov[2];
+	u32 write_size;
+	u32 write_offset;
+	int write_state;
+
+	/* Both don't need any protection */
+	struct file *file;
+	struct socket *sock;
+
+	void (*old_state_change)(struct sock *);
+	void (*old_data_ready)(struct sock *, int);
+	void (*old_write_space)(struct sock *);
+
+	/* Both read only. Stay here for better CPU cache locality. */
+	int hdigest_type;
+	int ddigest_type;
+
+	/* All 6 protected by iscsi_rd_lock */
+	unsigned short rd_state;
+	unsigned short rd_data_ready:1;
+	/* Let's save some cache footprint by putting them here */
+	unsigned short closing:1;
+	unsigned short active_close:1;
+	unsigned short deleting:1;
+	unsigned short conn_tm_active:1;
+
+	struct list_head rd_list_entry;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	struct task_struct *rd_task;
+#endif
+
+	unsigned long last_rcv_time;
+
+	/*
+	 * All are unprotected, since accessed only from a single read
+	 * thread.
+	 */
+	struct iscsi_cmnd *read_cmnd;
+	struct msghdr read_msg;
+	u32 read_size;
+	int read_state;
+	struct iovec *read_iov;
+	struct task_struct *rx_task;
+	uint32_t rpadding;
+
+	struct iscsi_target *target;
+
+	struct list_head conn_list_entry; /* list entry in session conn_list */
+
+	/* All protected by target_mutex, where necessary */
+	struct iscsi_conn *conn_reinst_successor;
+	struct list_head reinst_pending_cmd_list;
+
+	wait_queue_head_t read_state_waitQ;
+	struct completion ready_to_free;
+
+	/* Doesn't need any protection */
+	u16 cid;
+
+	struct delayed_work nop_in_delayed_work;
+	unsigned int nop_in_interval; /* in jiffies */
+	struct list_head nop_req_list;
+	spinlock_t nop_req_list_lock;
+	u32 nop_in_ttt;
+
+	/* Doesn't need any protection */
+	struct kobject iscsi_conn_kobj;
+};
+
+struct iscsi_pdu {
+	struct iscsi_hdr bhs;
+	void *ahs;
+	unsigned int ahssize;
+	unsigned int datasize;
+};
+
+typedef void (iscsi_show_info_t)(struct seq_file *seq,
+				 struct iscsi_target *target);
+
+/** Commands' states **/
+
+/* New command and SCST processes it */
+#define ISCSI_CMD_STATE_NEW		0
+
+/* SCST processes cmd after scst_rx_cmd() */
+#define ISCSI_CMD_STATE_RX_CMD		1
+
+/* The command returned from preprocessing_done() */
+#define ISCSI_CMD_STATE_AFTER_PREPROC	2
+
+/* The command is waiting for session or connection reinstatement finished */
+#define ISCSI_CMD_STATE_REINST_PENDING	3
+
+/* scst_restart_cmd() called and SCST processing it */
+#define ISCSI_CMD_STATE_RESTARTED	4
+
+/* SCST done processing */
+#define ISCSI_CMD_STATE_PROCESSED	5
+
+/* AEN processing */
+#define ISCSI_CMD_STATE_AEN		6
+
+/* Out of SCST core preliminary completed */
+#define ISCSI_CMD_STATE_OUT_OF_SCST_PRELIM_COMPL 7
+
+/*
+ * Most of the fields don't need any protection, since accessed from only a
+ * single thread, except where noted.
+ *
+ * ToDo: Eventually divide request and response structures in 2 separate
+ * structures and stop this IET-derived garbage.
+ */
+struct iscsi_cmnd {
+	struct iscsi_conn *conn;
+
+	/*
+	 * Some flags used under conn->write_list_lock, but all modified only
+	 * from single read thread or when there are no references to cmd.
+	 */
+	unsigned int hashed:1;
+	unsigned int should_close_conn:1;
+	unsigned int should_close_all_conn:1;
+	unsigned int pending:1;
+	unsigned int own_sg:1;
+	unsigned int on_write_list:1;
+	unsigned int write_processing_started:1;
+	unsigned int force_cleanup_done:1;
+	unsigned int dec_active_cmnds:1;
+	unsigned int ddigest_checked:1;
+#ifdef CONFIG_SCST_EXTRACHECKS
+	unsigned int on_rx_digest_list:1;
+	unsigned int release_called:1;
+#endif
+
+	/*
+	 * We assume that preliminary command completion is tested by
+	 * comparing prelim_compl_flags with 0. Otherwise, because of the
+	 * gap between setting the different flags, a race is possible,
+	 * e.g. sending a command to the SCST core as PRELIM_COMPLETED
+	 * while it hasn't been aborted there yet, resulting in a wrong
+	 * success status sent to the initiator.
+	 */
+#define ISCSI_CMD_ABORTED		0
+#define ISCSI_CMD_PRELIM_COMPLETED	1
+	unsigned long prelim_compl_flags;
+
+	struct list_head hash_list_entry;
+
+	/*
+	 * Unions are for readability and grepability and to save some
+	 * cache footprint.
+	 */
+
+	union {
+		/*
+		 * Used only to abort not yet sent responses. The usage in
+		 * cmnd_done() is only a side effect of having lockless
+		 * access to this list from always only a single thread
+		 * at any time. So, all responses live in the parent
+		 * until its last reference is put.
+		 */
+		struct list_head rsp_cmd_list;
+		struct list_head rsp_cmd_list_entry;
+	};
+
+	union {
+		struct list_head pending_list_entry;
+		struct list_head reinst_pending_cmd_list_entry;
+	};
+
+	union {
+		struct list_head write_list_entry;
+		struct list_head write_timeout_list_entry;
+	};
+
+	/* Both protected by conn->write_list_lock */
+	unsigned int on_write_timeout_list:1;
+	unsigned long write_start;
+
+	/*
+	 * All unprotected, since they can be accessed from only a single
+	 * thread at a time
+	 */
+	struct iscsi_cmnd *parent_req;
+	struct iscsi_cmnd *cmd_req;
+
+	/*
+	 * All unprotected, since they can be accessed from only a single
+	 * thread at a time
+	 */
+	union {
+		/* Request only fields */
+		struct {
+			struct list_head rx_ddigest_cmd_list;
+			struct list_head rx_ddigest_cmd_list_entry;
+
+			int scst_state;
+			union {
+				struct scst_cmd *scst_cmd;
+				struct scst_aen *scst_aen;
+			};
+			unsigned int read_size;
+
+			struct iscsi_cmnd *main_rsp;
+		};
+
+		/* Response only fields */
+		struct {
+			struct scatterlist rsp_sg[2];
+			struct iscsi_sense_data sense_hdr;
+		};
+	};
+
+	atomic_t ref_cnt;
+
+	struct iscsi_pdu pdu;
+
+	struct scatterlist *sg;
+	int sg_cnt;
+	unsigned int bufflen;
+	u32 r2t_sn;
+	unsigned int r2t_len_to_receive;
+	unsigned int r2t_len_to_send;
+	unsigned int outstanding_r2t;
+	u32 target_task_tag;
+	u32 hdigest;
+	u32 ddigest;
+
+	struct list_head cmd_list_entry;
+	struct list_head nop_req_list_entry;
+};
+
+/* Max time to wait for our response to be satisfied for aborted commands */
+#define ISCSI_TM_DATA_WAIT_TIMEOUT	(10 * HZ)
+
+/*
+ * Addition needed for all timeouts to complete a burst of commands at once.
+ * Otherwise, a part of the burst could time out only after double the timeout
+ * time.
+ */
+#define ISCSI_ADD_SCHED_TIME		HZ
+
+#define ISCSI_CTR_OPEN_STATE_CLOSED	0
+#define ISCSI_CTR_OPEN_STATE_OPEN	1
+#define ISCSI_CTR_OPEN_STATE_CLOSING	2
+
+extern struct mutex target_mgmt_mutex;
+
+extern int ctr_open_state;
+extern const struct file_operations ctr_fops;
+
+extern spinlock_t iscsi_rd_lock;
+extern struct list_head iscsi_rd_list;
+extern wait_queue_head_t iscsi_rd_waitQ;
+
+extern spinlock_t iscsi_wr_lock;
+extern struct list_head iscsi_wr_list;
+extern wait_queue_head_t iscsi_wr_waitQ;
+
+/* iscsi.c */
+extern struct iscsi_cmnd *cmnd_alloc(struct iscsi_conn *,
+	struct iscsi_cmnd *parent);
+extern int cmnd_rx_start(struct iscsi_cmnd *);
+extern int cmnd_rx_continue(struct iscsi_cmnd *req);
+extern void cmnd_rx_end(struct iscsi_cmnd *);
+extern void cmnd_tx_start(struct iscsi_cmnd *);
+extern void cmnd_tx_end(struct iscsi_cmnd *);
+extern void req_cmnd_release_force(struct iscsi_cmnd *req);
+extern void rsp_cmnd_release(struct iscsi_cmnd *);
+extern void cmnd_done(struct iscsi_cmnd *cmnd);
+extern void conn_abort(struct iscsi_conn *conn);
+extern void iscsi_restart_cmnd(struct iscsi_cmnd *cmnd);
+extern void iscsi_fail_data_waiting_cmnd(struct iscsi_cmnd *cmnd);
+extern void iscsi_send_nop_in(struct iscsi_conn *conn);
+
+/* conn.c */
+extern struct iscsi_conn *conn_lookup(struct iscsi_session *, u16);
+extern void conn_reinst_finished(struct iscsi_conn *);
+extern int __add_conn(struct iscsi_session *, struct iscsi_kern_conn_info *);
+extern int __del_conn(struct iscsi_session *, struct iscsi_kern_conn_info *);
+extern void iscsi_make_conn_rd_active(struct iscsi_conn *conn);
+#define ISCSI_CONN_ACTIVE_CLOSE		1
+#define ISCSI_CONN_DELETING		2
+extern void __mark_conn_closed(struct iscsi_conn *, int);
+extern void mark_conn_closed(struct iscsi_conn *);
+extern void iscsi_make_conn_wr_active(struct iscsi_conn *);
+extern void iscsi_check_tm_data_wait_timeouts(struct iscsi_conn *conn,
+	bool force);
+
+/* nthread.c */
+extern int iscsi_send(struct iscsi_conn *conn);
+extern int istrd(void *arg);
+extern int istwr(void *arg);
+extern void iscsi_task_mgmt_affected_cmds_done(struct scst_mgmt_cmd *scst_mcmd);
+extern void req_add_to_write_timeout_list(struct iscsi_cmnd *req);
+
+/* target.c */
+extern const struct attribute *iscsi_tgt_attrs[];
+extern int iscsi_enable_target(struct scst_tgt *scst_tgt, bool enable);
+extern bool iscsi_is_target_enabled(struct scst_tgt *scst_tgt);
+extern ssize_t iscsi_sysfs_send_event(uint32_t tid,
+	enum iscsi_kern_event_code code,
+	const char *param1, const char *param2, void **data);
+extern struct iscsi_target *target_lookup_by_id(u32);
+extern int __add_target(struct iscsi_kern_target_info *);
+extern int __del_target(u32 id);
+extern ssize_t iscsi_sysfs_add_target(const char *target_name, char *params);
+extern ssize_t iscsi_sysfs_del_target(const char *target_name);
+extern ssize_t iscsi_sysfs_mgmt_cmd(char *cmd);
+extern void target_del_session(struct iscsi_target *target,
+	struct iscsi_session *session, int flags);
+extern void target_del_all_sess(struct iscsi_target *target, int flags);
+extern void target_del_all(void);
+
+/* config.c */
+extern const struct attribute *iscsi_attrs[];
+extern int iscsi_add_attr(struct iscsi_target *target,
+	const struct iscsi_kern_attr *user_info);
+extern void __iscsi_del_attr(struct iscsi_target *target,
+	struct iscsi_attr *tgt_attr);
+
+/* session.c */
+extern const struct attribute *iscsi_sess_attrs[];
+extern const struct file_operations session_seq_fops;
+extern struct iscsi_session *session_lookup(struct iscsi_target *, u64);
+extern void sess_reinst_finished(struct iscsi_session *);
+extern int __add_session(struct iscsi_target *,
+	struct iscsi_kern_session_info *);
+extern int __del_session(struct iscsi_target *, u64);
+extern int session_free(struct iscsi_session *session, bool del);
+
+/* params.c */
+extern const char *iscsi_get_digest_name(int val, char *res);
+extern const char *iscsi_get_bool_value(int val);
+extern int iscsi_params_set(struct iscsi_target *,
+	struct iscsi_kern_params_info *, int);
+
+/* event.c */
+extern int event_send(u32, u64, u32, u32, enum iscsi_kern_event_code,
+	const char *param1, const char *param2);
+extern int event_init(void);
+extern void event_exit(void);
+
+#define get_pgcnt(size, offset)	\
+	((((size) + ((offset) & ~PAGE_MASK)) + PAGE_SIZE - 1) >> PAGE_SHIFT)
+
+static inline void iscsi_cmnd_get_length(struct iscsi_pdu *pdu)
+{
+#if defined(__BIG_ENDIAN)
+	pdu->ahssize = pdu->bhs.length.ahslength * 4;
+	pdu->datasize = pdu->bhs.length.datalength;
+#elif defined(__LITTLE_ENDIAN)
+	pdu->ahssize = (pdu->bhs.length & 0xff) * 4;
+	pdu->datasize = be32_to_cpu(pdu->bhs.length & ~0xff);
+#else
+#error
+#endif
+}
+
+static inline void iscsi_cmnd_set_length(struct iscsi_pdu *pdu)
+{
+#if defined(__BIG_ENDIAN)
+	pdu->bhs.length.ahslength = pdu->ahssize / 4;
+	pdu->bhs.length.datalength = pdu->datasize;
+#elif defined(__LITTLE_ENDIAN)
+	pdu->bhs.length = cpu_to_be32(pdu->datasize) | (pdu->ahssize / 4);
+#else
+#error
+#endif
+}
+
+extern struct scst_tgt_template iscsi_template;
+
+/*
+ * Skip this command if result is not 0. Must be called under
+ * corresponding lock.
+ */
+static inline bool cmnd_get_check(struct iscsi_cmnd *cmnd)
+{
+	int r = atomic_inc_return(&cmnd->ref_cnt);
+	int res;
+	if (unlikely(r == 1)) {
+		TRACE_DBG("cmnd %p is being destroyed", cmnd);
+		atomic_dec(&cmnd->ref_cnt);
+		res = 1;
+		/* Necessary code is serialized by locks in cmnd_done() */
+	} else {
+		TRACE_DBG("cmnd %p, new ref_cnt %d", cmnd,
+			atomic_read(&cmnd->ref_cnt));
+		res = 0;
+	}
+	return res;
+}
+
+static inline void cmnd_get(struct iscsi_cmnd *cmnd)
+{
+	atomic_inc(&cmnd->ref_cnt);
+	TRACE_DBG("cmnd %p, new cmnd->ref_cnt %d", cmnd,
+		atomic_read(&cmnd->ref_cnt));
+}
+
+static inline void cmnd_get_ordered(struct iscsi_cmnd *cmnd)
+{
+	cmnd_get(cmnd);
+	/* See comments for each cmnd_get_ordered() use */
+	smp_mb__after_atomic_inc();
+}
+
+static inline void cmnd_put(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("cmnd %p, new ref_cnt %d", cmnd,
+		atomic_read(&cmnd->ref_cnt)-1);
+
+	EXTRACHECKS_BUG_ON(atomic_read(&cmnd->ref_cnt) == 0);
+
+	if (atomic_dec_and_test(&cmnd->ref_cnt))
+		cmnd_done(cmnd);
+}
+
+/* conn->write_list_lock supposed to be locked and BHs off */
+static inline void cmd_add_on_write_list(struct iscsi_conn *conn,
+	struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("cmnd %p", cmnd);
+	/* See comment in iscsi_restart_cmnd() */
+	EXTRACHECKS_BUG_ON(cmnd->parent_req->hashed &&
+		(cmnd_opcode(cmnd) != ISCSI_OP_R2T));
+	list_add_tail(&cmnd->write_list_entry, &conn->write_list);
+	cmnd->on_write_list = 1;
+}
+
+/* conn->write_list_lock supposed to be locked and BHs off */
+static inline void cmd_del_from_write_list(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("%p", cmnd);
+	list_del(&cmnd->write_list_entry);
+	cmnd->on_write_list = 0;
+}
+
+static inline void cmd_add_on_rx_ddigest_list(struct iscsi_cmnd *req,
+	struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("Adding RX ddigest cmd %p to digest list "
+			"of req %p", cmnd, req);
+	list_add_tail(&cmnd->rx_ddigest_cmd_list_entry,
+			&req->rx_ddigest_cmd_list);
+#ifdef CONFIG_SCST_EXTRACHECKS
+	cmnd->on_rx_digest_list = 1;
+#endif
+}
+
+static inline void cmd_del_from_rx_ddigest_list(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("Deleting RX digest cmd %p from digest list", cmnd);
+	list_del(&cmnd->rx_ddigest_cmd_list_entry);
+#ifdef CONFIG_SCST_EXTRACHECKS
+	cmnd->on_rx_digest_list = 0;
+#endif
+}
+
+static inline int test_write_ready(struct iscsi_conn *conn)
+{
+	/*
+	 * No need for write_list protection, in the worst case we will be
+	 * restarted again.
+	 */
+	return !list_empty(&conn->write_list) || conn->write_cmnd;
+}
+
+static inline void conn_get(struct iscsi_conn *conn)
+{
+	atomic_inc(&conn->conn_ref_cnt);
+	TRACE_DBG("conn %p, new conn_ref_cnt %d", conn,
+		atomic_read(&conn->conn_ref_cnt));
+}
+
+static inline void conn_get_ordered(struct iscsi_conn *conn)
+{
+	conn_get(conn);
+	/* See comments for each conn_get_ordered() use */
+	smp_mb__after_atomic_inc();
+}
+
+static inline void conn_put(struct iscsi_conn *conn)
+{
+	TRACE_DBG("conn %p, new conn_ref_cnt %d", conn,
+		atomic_read(&conn->conn_ref_cnt)-1);
+	BUG_ON(atomic_read(&conn->conn_ref_cnt) == 0);
+
+	/*
+	 * Make it always ordered to protect from undesired side effects, like
+	 * accessing a conn just destroyed by close_conn(), caused by reordering
+	 * of this atomic_dec().
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_dec(&conn->conn_ref_cnt);
+}
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+extern void iscsi_extracheck_is_rd_thread(struct iscsi_conn *conn);
+extern void iscsi_extracheck_is_wr_thread(struct iscsi_conn *conn);
+#else
+static inline void iscsi_extracheck_is_rd_thread(struct iscsi_conn *conn) {}
+static inline void iscsi_extracheck_is_wr_thread(struct iscsi_conn *conn) {}
+#endif
+
+#endif	/* __ISCSI_H__ */
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi_hdr.h linux-2.6.33/drivers/scst/iscsi-scst/iscsi_hdr.h
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi_hdr.h
+++ linux-2.6.33/drivers/scst/iscsi-scst/iscsi_hdr.h
@@ -0,0 +1,519 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#ifndef __ISCSI_HDR_H__
+#define __ISCSI_HDR_H__
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+
+#define ISCSI_VERSION			0
+
+#ifndef __packed
+#define __packed __attribute__ ((packed))
+#endif
+
+struct iscsi_hdr {
+	u8  opcode;			/* 0 */
+	u8  flags;
+	u8  spec1[2];
+#if defined(__BIG_ENDIAN_BITFIELD)
+	struct {			/* 4 */
+		unsigned ahslength:8;
+		unsigned datalength:24;
+	} length;
+#elif defined(__LITTLE_ENDIAN_BITFIELD)
+	u32 length;			/* 4 */
+#endif
+	u64 lun;			/* 8 */
+	u32 itt;			/* 16 */
+	u32 ttt;			/* 20 */
+	u32 sn;				/* 24 */
+	u32 exp_sn;			/* 28 */
+	u32 max_sn;			/* 32 */
+	u32 spec3[3];			/* 36 */
+} __packed;				/* 48 */
+
+/* Opcode encoding bits */
+#define ISCSI_OP_RETRY			0x80
+#define ISCSI_OP_IMMEDIATE		0x40
+#define ISCSI_OPCODE_MASK		0x3F
+
+/* Client to Server Message Opcode values */
+#define ISCSI_OP_NOP_OUT		0x00
+#define ISCSI_OP_SCSI_CMD		0x01
+#define ISCSI_OP_SCSI_TASK_MGT_MSG	0x02
+#define ISCSI_OP_LOGIN_CMD		0x03
+#define ISCSI_OP_TEXT_CMD		0x04
+#define ISCSI_OP_SCSI_DATA_OUT		0x05
+#define ISCSI_OP_LOGOUT_CMD		0x06
+#define ISCSI_OP_SNACK_CMD		0x10
+
+/* Server to Client Message Opcode values */
+#define ISCSI_OP_NOP_IN			0x20
+#define ISCSI_OP_SCSI_RSP		0x21
+#define ISCSI_OP_SCSI_TASK_MGT_RSP	0x22
+#define ISCSI_OP_LOGIN_RSP		0x23
+#define ISCSI_OP_TEXT_RSP		0x24
+#define ISCSI_OP_SCSI_DATA_IN		0x25
+#define ISCSI_OP_LOGOUT_RSP		0x26
+#define ISCSI_OP_R2T			0x31
+#define ISCSI_OP_ASYNC_MSG		0x32
+#define ISCSI_OP_REJECT			0x3f
+
+struct iscsi_ahs_hdr {
+	u16 ahslength;
+	u8 ahstype;
+} __packed;
+
+#define ISCSI_AHSTYPE_CDB		1
+#define ISCSI_AHSTYPE_RLENGTH		2
+
+union iscsi_sid {
+	struct {
+		u8 isid[6];		/* Initiator Session ID */
+		u16 tsih;		/* Target Session ID */
+	} id;
+	u64 id64;
+} __packed;
+
+struct iscsi_scsi_cmd_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 data_length;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u8  scb[16];
+} __packed;
+
+#define ISCSI_CMD_FINAL		0x80
+#define ISCSI_CMD_READ		0x40
+#define ISCSI_CMD_WRITE		0x20
+#define ISCSI_CMD_ATTR_MASK	0x07
+#define ISCSI_CMD_UNTAGGED	0x00
+#define ISCSI_CMD_SIMPLE	0x01
+#define ISCSI_CMD_ORDERED	0x02
+#define ISCSI_CMD_HEAD_OF_QUEUE	0x03
+#define ISCSI_CMD_ACA		0x04
+
+struct iscsi_cdb_ahdr {
+	u16 ahslength;
+	u8  ahstype;
+	u8  reserved;
+	u8  cdb[0];
+} __packed;
+
+struct iscsi_rlength_ahdr {
+	u16 ahslength;
+	u8  ahstype;
+	u8  reserved;
+	u32 read_length;
+} __packed;
+
+struct iscsi_scsi_rsp_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  response;
+	u8  cmd_status;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd1[2];
+	u32 itt;
+	u32 snack;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 exp_data_sn;
+	u32 bi_residual_count;
+	u32 residual_count;
+} __packed;
+
+#define ISCSI_FLG_RESIDUAL_UNDERFLOW		0x02
+#define ISCSI_FLG_RESIDUAL_OVERFLOW		0x04
+#define ISCSI_FLG_BIRESIDUAL_UNDERFLOW		0x08
+#define ISCSI_FLG_BIRESIDUAL_OVERFLOW		0x10
+
+#define ISCSI_RESPONSE_COMMAND_COMPLETED	0x00
+#define ISCSI_RESPONSE_TARGET_FAILURE		0x01
+
+struct iscsi_sense_data {
+	u16 length;
+	u8  data[0];
+} __packed;
+
+struct iscsi_task_mgt_hdr {
+	u8  opcode;
+	u8  function;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 rtt;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u32 ref_cmd_sn;
+	u32 exp_data_sn;
+	u32 rsvd2[2];
+} __packed;
+
+#define ISCSI_FUNCTION_MASK			0x7f
+
+#define ISCSI_FUNCTION_ABORT_TASK		1
+#define ISCSI_FUNCTION_ABORT_TASK_SET		2
+#define ISCSI_FUNCTION_CLEAR_ACA		3
+#define ISCSI_FUNCTION_CLEAR_TASK_SET		4
+#define ISCSI_FUNCTION_LOGICAL_UNIT_RESET	5
+#define ISCSI_FUNCTION_TARGET_WARM_RESET	6
+#define ISCSI_FUNCTION_TARGET_COLD_RESET	7
+#define ISCSI_FUNCTION_TASK_REASSIGN		8
+
+struct iscsi_task_rsp_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  response;
+	u8  rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 rsvd3;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 rsvd4[3];
+} __packed;
+
+#define ISCSI_RESPONSE_FUNCTION_COMPLETE	0
+#define ISCSI_RESPONSE_UNKNOWN_TASK		1
+#define ISCSI_RESPONSE_UNKNOWN_LUN		2
+#define ISCSI_RESPONSE_TASK_ALLEGIANT		3
+#define ISCSI_RESPONSE_ALLEGIANCE_REASSIGNMENT_UNSUPPORTED	4
+#define ISCSI_RESPONSE_FUNCTION_UNSUPPORTED	5
+#define ISCSI_RESPONSE_NO_AUTHORIZATION		6
+#define ISCSI_RESPONSE_FUNCTION_REJECTED	255
+
+struct iscsi_data_out_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 ttt;
+	u32 rsvd2;
+	u32 exp_stat_sn;
+	u32 rsvd3;
+	u32 data_sn;
+	u32 buffer_offset;
+	u32 rsvd4;
+} __packed;
+
+struct iscsi_data_in_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  rsvd1;
+	u8  cmd_status;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 ttt;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 data_sn;
+	u32 buffer_offset;
+	u32 residual_count;
+} __packed;
+
+#define ISCSI_FLG_STATUS		0x01
+
+struct iscsi_r2t_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 ttt;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 r2t_sn;
+	u32 buffer_offset;
+	u32 data_length;
+} __packed;
+
+struct iscsi_async_msg_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 ffffffff;
+	u32 rsvd2;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u8  async_event;
+	u8  async_vcode;
+	u16 param1;
+	u16 param2;
+	u16 param3;
+	u32 rsvd3;
+} __packed;
+
+#define ISCSI_ASYNC_SCSI		0
+#define ISCSI_ASYNC_LOGOUT		1
+#define ISCSI_ASYNC_DROP_CONNECTION	2
+#define ISCSI_ASYNC_DROP_SESSION	3
+#define ISCSI_ASYNC_PARAM_REQUEST	4
+#define ISCSI_ASYNC_VENDOR		255
+
+struct iscsi_text_req_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 ttt;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u32 rsvd3[4];
+} __packed;
+
+struct iscsi_text_rsp_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 ttt;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 rsvd3[3];
+} __packed;
+
+struct iscsi_login_req_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  max_version;		/* Max. version supported */
+	u8  min_version;		/* Min. version supported */
+	u8  ahslength;
+	u8  datalength[3];
+	union iscsi_sid sid;
+	u32 itt;			/* Initiator Task Tag */
+	u16 cid;			/* Connection ID */
+	u16 rsvd1;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u32 rsvd2[4];
+} __packed;
+
+struct iscsi_login_rsp_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  max_version;		/* Max. version supported */
+	u8  active_version;		/* Active version */
+	u8  ahslength;
+	u8  datalength[3];
+	union iscsi_sid sid;
+	u32 itt;			/* Initiator Task Tag */
+	u32 rsvd1;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u8  status_class;		/* see Login RSP Status classes below */
+	u8  status_detail;		/* see Login RSP Status details below */
+	u8  rsvd2[10];
+} __packed;
+
+#define ISCSI_FLG_FINAL			0x80
+#define ISCSI_FLG_TRANSIT		0x80
+#define ISCSI_FLG_CSG_SECURITY		0x00
+#define ISCSI_FLG_CSG_LOGIN		0x04
+#define ISCSI_FLG_CSG_FULL_FEATURE	0x0c
+#define ISCSI_FLG_CSG_MASK		0x0c
+#define ISCSI_FLG_NSG_SECURITY		0x00
+#define ISCSI_FLG_NSG_LOGIN		0x01
+#define ISCSI_FLG_NSG_FULL_FEATURE	0x03
+#define ISCSI_FLG_NSG_MASK		0x03
+
+/* Login Status response classes */
+#define ISCSI_STATUS_SUCCESS		0x00
+#define ISCSI_STATUS_REDIRECT		0x01
+#define ISCSI_STATUS_INITIATOR_ERR	0x02
+#define ISCSI_STATUS_TARGET_ERR		0x03
+
+/* Login Status response detail codes */
+/* Class-0 (Success) */
+#define ISCSI_STATUS_ACCEPT		0x00
+
+/* Class-1 (Redirection) */
+#define ISCSI_STATUS_TGT_MOVED_TEMP	0x01
+#define ISCSI_STATUS_TGT_MOVED_PERM	0x02
+
+/* Class-2 (Initiator Error) */
+#define ISCSI_STATUS_INIT_ERR		0x00
+#define ISCSI_STATUS_AUTH_FAILED	0x01
+#define ISCSI_STATUS_TGT_FORBIDDEN	0x02
+#define ISCSI_STATUS_TGT_NOT_FOUND	0x03
+#define ISCSI_STATUS_TGT_REMOVED	0x04
+#define ISCSI_STATUS_NO_VERSION		0x05
+#define ISCSI_STATUS_TOO_MANY_CONN	0x06
+#define ISCSI_STATUS_MISSING_FIELDS	0x07
+#define ISCSI_STATUS_CONN_ADD_FAILED	0x08
+#define ISCSI_STATUS_INV_SESSION_TYPE	0x09
+#define ISCSI_STATUS_SESSION_NOT_FOUND	0x0a
+#define ISCSI_STATUS_INV_REQ_TYPE	0x0b
+
+/* Class-3 (Target Error) */
+#define ISCSI_STATUS_TARGET_ERROR	0x00
+#define ISCSI_STATUS_SVC_UNAVAILABLE	0x01
+#define ISCSI_STATUS_NO_RESOURCES	0x02
+
+struct iscsi_logout_req_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u16 cid;
+	u16 rsvd3;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u32 rsvd4[4];
+} __packed;
+
+struct iscsi_logout_rsp_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  response;
+	u8  rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 rsvd3;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 rsvd4;
+	u16 time2wait;
+	u16 time2retain;
+	u32 rsvd5;
+} __packed;
+
+struct iscsi_snack_req_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 itt;
+	u32 ttt;
+	u32 rsvd3;
+	u32 exp_stat_sn;
+	u32 rsvd4[2];
+	u32 beg_run;
+	u32 run_length;
+} __packed;
+
+struct iscsi_reject_hdr {
+	u8  opcode;
+	u8  flags;
+	u8  reason;
+	u8  rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u32 rsvd2[2];
+	u32 ffffffff;
+	u32 rsvd3;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 data_sn;
+	u32 rsvd4[2];
+} __packed;
+
+#define ISCSI_REASON_RESERVED			0x01
+#define ISCSI_REASON_DATA_DIGEST_ERROR		0x02
+#define ISCSI_REASON_DATA_SNACK_REJECT		0x03
+#define ISCSI_REASON_PROTOCOL_ERROR		0x04
+#define ISCSI_REASON_UNSUPPORTED_COMMAND	0x05
+#define ISCSI_REASON_IMMEDIATE_COMMAND_REJECT	0x06
+#define ISCSI_REASON_TASK_IN_PROGRESS		0x07
+#define ISCSI_REASON_INVALID_DATA_ACK		0x08
+#define ISCSI_REASON_INVALID_PDU_FIELD		0x09
+#define ISCSI_REASON_OUT_OF_RESOURCES		0x0a
+#define ISCSI_REASON_NEGOTIATION_RESET		0x0b
+#define ISCSI_REASON_WAITING_LOGOUT		0x0c
+
+struct iscsi_nop_out_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 ttt;
+	u32 cmd_sn;
+	u32 exp_stat_sn;
+	u32 rsvd2[4];
+} __packed;
+
+struct iscsi_nop_in_hdr {
+	u8  opcode;
+	u8  flags;
+	u16 rsvd1;
+	u8  ahslength;
+	u8  datalength[3];
+	u64 lun;
+	u32 itt;
+	u32 ttt;
+	u32 stat_sn;
+	u32 exp_cmd_sn;
+	u32 max_cmd_sn;
+	u32 rsvd2[3];
+} __packed;
+
+#define ISCSI_RESERVED_TAG	(0xffffffffU)
+
+#define cmnd_hdr(cmnd) ((struct iscsi_scsi_cmd_hdr *) (&((cmnd)->pdu.bhs)))
+#define cmnd_itt(cmnd) cpu_to_be32((cmnd)->pdu.bhs.itt)
+#define cmnd_ttt(cmnd) cpu_to_be32((cmnd)->pdu.bhs.ttt)
+#define cmnd_opcode(cmnd) ((cmnd)->pdu.bhs.opcode & ISCSI_OPCODE_MASK)
+#define cmnd_scsicode(cmnd) (cmnd_hdr((cmnd))->scb[0])
+
+#endif	/* __ISCSI_HDR_H__ */
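
To make the layout above concrete, here is a small stand-alone sketch of how
a raw 48-byte Basic Header Segment (BHS) of a SCSI Command PDU could be
decoded with these definitions; the ISCSI_OPCODE_MASK value is an assumption,
since its definition lies outside the quoted hunk:

#include <stdint.h>
#include <stdio.h>

#define ISCSI_OPCODE_MASK	0x3f	/* assumed; defined earlier in this header */

static void decode_scsi_cmd_bhs(const uint8_t *bhs)
{
	/* Byte 0 carries the opcode in its low bits, byte 1 the flags. */
	uint8_t opcode = bhs[0] & ISCSI_OPCODE_MASK;
	uint8_t flags = bhs[1];

	printf("opcode 0x%02x final=%d read=%d write=%d attr=%d\n",
	       (unsigned)opcode,
	       !!(flags & 0x80),	/* ISCSI_CMD_FINAL */
	       !!(flags & 0x40),	/* ISCSI_CMD_READ */
	       !!(flags & 0x20),	/* ISCSI_CMD_WRITE */
	       flags & 0x07);	/* ISCSI_CMD_ATTR_MASK */
}
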
diff -uprN orig/linux-2.6.33/include/scst/iscsi_scst_itf_ver.h linux-2.6.33/include/scst/iscsi_scst_itf_ver.h
--- orig/linux-2.6.33/include/scst/iscsi_scst_itf_ver.h
+++ linux-2.6.33/include/scst/iscsi_scst_itf_ver.h
@@ -0,0 +1,2 @@
+#define ISCSI_SCST_INTERFACE_VERSION "0b9736e7abc6440bb842a800ec0552ae5388b0ef"
+


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 3/4/4/5] iSCSI-SCST's implementation files
       [not found] ` <4BC45EF7.7010304@vlnb.net>
@ 2010-04-13 13:10   ` Vladislav Bolkhovitin
  2010-04-13 13:10   ` [PATCH][RFC 4/4/4/5] iSCSI-SCST's README file Vladislav Bolkhovitin
  1 sibling, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:10 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Bart Van Assche

This patch contains iSCSI-SCST's implementation files.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 config.c  |  933 ++++++++++++++++
 conn.c    |  785 +++++++++++++
 digest.c  |  226 +++
 event.c   |  163 ++
 iscsi.c   | 3583 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 nthread.c | 1524 ++++++++++++++++++++++++++
 param.c   |  306 +++++
 session.c |  482 ++++++++
 target.c  |  500 ++++++++
 9 files changed, 8502 insertions(+)

diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/config.c linux-2.6.33/drivers/scst/iscsi-scst/config.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/config.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/config.c
@@ -0,0 +1,933 @@
+/*
+ *  Copyright (C) 2004 - 2005 FUJITA Tomonori <tomof@acm.org>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include "iscsi.h"
+
+/* Protected by target_mgmt_mutex */
+int ctr_open_state;
+
+/* Protected by target_mgmt_mutex */
+static LIST_HEAD(iscsi_attrs_list);
+
+static ssize_t iscsi_version_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	sprintf(buf, "%s\n", ISCSI_VERSION_STRING);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	strcat(buf, "EXTRACHECKS\n");
+#endif
+
+#ifdef CONFIG_SCST_TRACING
+	strcat(buf, "TRACING\n");
+#endif
+
+#ifdef CONFIG_SCST_DEBUG
+	strcat(buf, "DEBUG\n");
+#endif
+
+#ifdef CONFIG_SCST_ISCSI_DEBUG_DIGEST_FAILURES
+	strcat(buf, "DEBUG_DIGEST_FAILURES\n");
+#endif
+	return strlen(buf);
+}
+
+static struct kobj_attribute iscsi_version_attr =
+	__ATTR(version, S_IRUGO, iscsi_version_show, NULL);
+
+static ssize_t iscsi_open_state_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	switch (ctr_open_state) {
+	case ISCSI_CTR_OPEN_STATE_CLOSED:
+		sprintf(buf, "%s\n", "closed");
+		break;
+	case ISCSI_CTR_OPEN_STATE_OPEN:
+		sprintf(buf, "%s\n", "open");
+		break;
+	case ISCSI_CTR_OPEN_STATE_CLOSING:
+		sprintf(buf, "%s\n", "closing");
+		break;
+	default:
+		sprintf(buf, "%s\n", "unknown");
+		break;
+	}
+
+	return strlen(buf);
+}
+
+static struct kobj_attribute iscsi_open_state_attr =
+	__ATTR(open_state, S_IRUGO, iscsi_open_state_show, NULL);
+
+const struct attribute *iscsi_attrs[] = {
+	&iscsi_version_attr.attr,
+	&iscsi_open_state_attr.attr,
+	NULL,
+};
+
+/* target_mgmt_mutex supposed to be locked */
+static int add_conn(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_session *session;
+	struct iscsi_kern_conn_info info;
+	struct iscsi_target *target;
+
+	rc = copy_from_user(&info, ptr, sizeof(info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	target = target_lookup_by_id(info.tid);
+	if (target == NULL) {
+		PRINT_ERROR("Target %d not found", info.tid);
+		err = -ENOENT;
+		goto out;
+	}
+
+	mutex_lock(&target->target_mutex);
+
+	session = session_lookup(target, info.sid);
+	if (!session) {
+		PRINT_ERROR("Session %lld not found",
+			(long long unsigned int)info.tid);
+		err = -ENOENT;
+		goto out_unlock;
+	}
+
+	err = __add_conn(session, &info);
+
+out_unlock:
+	mutex_unlock(&target->target_mutex);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int del_conn(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_session *session;
+	struct iscsi_kern_conn_info info;
+	struct iscsi_target *target;
+
+	rc = copy_from_user(&info, ptr, sizeof(info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	target = target_lookup_by_id(info.tid);
+	if (target == NULL) {
+		PRINT_ERROR("Target %d not found", info.tid);
+		err = -ENOENT;
+		goto out;
+	}
+
+	mutex_lock(&target->target_mutex);
+
+	session = session_lookup(target, info.sid);
+	if (!session) {
+		PRINT_ERROR("Session %llx not found",
+			(long long unsigned int)info.sid);
+		err = -ENOENT;
+		goto out_unlock;
+	}
+
+	err = __del_conn(session, &info);
+
+out_unlock:
+	mutex_unlock(&target->target_mutex);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int add_session(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_kern_session_info *info;
+	struct iscsi_target *target;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL) {
+		PRINT_ERROR("Can't alloc info (size %zd)", sizeof(*info));
+		err = -ENOMEM;
+		goto out;
+	}
+
+	rc = copy_from_user(info, ptr, sizeof(*info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out_free;
+	}
+
+	info->initiator_name[sizeof(info->initiator_name)-1] = '\0';
+
+	target = target_lookup_by_id(info->tid);
+	if (target == NULL) {
+		PRINT_ERROR("Target %d not found", info->tid);
+		err = -ENOENT;
+		goto out_free;
+	}
+
+	err = __add_session(target, info);
+
+out_free:
+	kfree(info);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int del_session(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_kern_session_info *info;
+	struct iscsi_target *target;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL) {
+		PRINT_ERROR("Can't alloc info (size %zd)", sizeof(*info));
+		err = -ENOMEM;
+		goto out;
+	}
+
+	rc = copy_from_user(info, ptr, sizeof(*info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out_free;
+	}
+
+	info->initiator_name[sizeof(info->initiator_name)-1] = '\0';
+
+	target = target_lookup_by_id(info->tid);
+	if (target == NULL) {
+		PRINT_ERROR("Target %d not found", info->tid);
+		err = -ENOENT;
+		goto out_free;
+	}
+
+	mutex_lock(&target->target_mutex);
+	err = __del_session(target, info->sid);
+	mutex_unlock(&target->target_mutex);
+
+out_free:
+	kfree(info);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int iscsi_params_config(void __user *ptr, int set)
+{
+	int err, rc;
+	struct iscsi_kern_params_info info;
+	struct iscsi_target *target;
+
+	rc = copy_from_user(&info, ptr, sizeof(info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	target = target_lookup_by_id(info.tid);
+	if (target == NULL) {
+		PRINT_ERROR("Target %d not found", info.tid);
+		err = -ENOENT;
+		goto out;
+	}
+
+	mutex_lock(&target->target_mutex);
+	err = iscsi_params_set(target, &info, set);
+	mutex_unlock(&target->target_mutex);
+
+	if (err < 0)
+		goto out;
+
+	if (!set) {
+		rc = copy_to_user(ptr, &info, sizeof(info));
+		if (rc != 0) {
+			PRINT_ERROR("Failed to copy to user %d bytes", rc);
+			err = -EFAULT;
+			goto out;
+		}
+	}
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int mgmt_cmd_callback(void __user *ptr)
+{
+	int err = 0, rc;
+	struct iscsi_kern_mgmt_cmd_res_info cinfo;
+	struct scst_sysfs_user_info *info;
+
+	rc = copy_from_user(&cinfo, ptr, sizeof(cinfo));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	cinfo.value[sizeof(cinfo.value)-1] = '\0';
+
+	info = scst_sysfs_user_get_info(cinfo.cookie);
+	TRACE_DBG("cookie %u, info %p, result %d", cinfo.cookie, info,
+		cinfo.result);
+	if (info == NULL) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	info->info_status = 0;
+
+	if (cinfo.result != 0) {
+		info->info_status = cinfo.result;
+		goto out_complete;
+	}
+
+	switch (cinfo.req_cmd) {
+	case E_ENABLE_TARGET:
+	case E_DISABLE_TARGET:
+	{
+		struct iscsi_target *target;
+
+		target = target_lookup_by_id(cinfo.tid);
+		if (target == NULL) {
+			PRINT_ERROR("Target %d not found", cinfo.tid);
+			err = -ENOENT;
+			goto out_status;
+		}
+
+		target->tgt_enabled = (cinfo.req_cmd == E_ENABLE_TARGET) ? 1 : 0;
+		break;
+	}
+
+	case E_GET_ATTR_VALUE:
+		info->data = kstrdup(cinfo.value, GFP_KERNEL);
+		if (info->data == NULL) {
+			PRINT_ERROR("Can't dublicate value %s", cinfo.value);
+			info->info_status = -ENOMEM;
+			goto out_complete;
+		}
+		break;
+	}
+
+out_complete:
+	complete(&info->info_completion);
+
+out:
+	return err;
+
+out_status:
+	info->info_status = err;
+	goto out_complete;
+}
+
+static ssize_t iscsi_attr_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct iscsi_attr *tgt_attr;
+	void *value;
+
+	tgt_attr = container_of(attr, struct iscsi_attr, attr);
+
+	pos = iscsi_sysfs_send_event(
+		(tgt_attr->target != NULL) ? tgt_attr->target->tid : 0,
+		E_GET_ATTR_VALUE, tgt_attr->name, NULL, &value);
+
+	if (pos != 0)
+		goto out;
+
+	pos = scnprintf(buf, SCST_SYSFS_BLOCK_SIZE, "%s\n", (char *)value);
+
+	kfree(value);
+
+out:
+	return pos;
+}
+
+static ssize_t iscsi_attr_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	char *buffer;
+	struct iscsi_attr *tgt_attr;
+
+	buffer = kzalloc(count+1, GFP_KERNEL);
+	if (buffer == NULL) {
+		res = -ENOMEM;
+		goto out;
+	}
+
+	memcpy(buffer, buf, count);
+	buffer[count] = '\0';
+
+	tgt_attr = container_of(attr, struct iscsi_attr, attr);
+
+	res = iscsi_sysfs_send_event(
+		(tgt_attr->target != NULL) ? tgt_attr->target->tid : 0,
+		E_SET_ATTR_VALUE, tgt_attr->name, buffer, NULL);
+
+	kfree(buffer);
+
+	if (res == 0)
+		res = count;
+
+out:
+	return res;
+}
+
+/*
+ * target_mgmt_mutex supposed to be locked. If target != 0, target_mutex
+ * supposed to be locked as well.
+ */
+int iscsi_add_attr(struct iscsi_target *target,
+	const struct iscsi_kern_attr *attr_info)
+{
+	int res = 0;
+	struct iscsi_attr *tgt_attr;
+	struct list_head *attrs_list;
+	const char *name;
+
+	if (target != NULL) {
+		attrs_list = &target->attrs_list;
+		name = target->name;
+	} else {
+		attrs_list = &iscsi_attrs_list;
+		name = "global";
+	}
+
+	list_for_each_entry(tgt_attr, attrs_list, attrs_list_entry) {
+		if (strncmp(tgt_attr->name, attr_info->name,
+				sizeof(tgt_attr->name)) == 0) {
+			PRINT_ERROR("Attribute %s for %s already exists",
+				attr_info->name, name);
+			res = -EEXIST;
+			goto out;
+		}
+	}
+
+	TRACE_DBG("Adding %s's attr %s with mode %x", name,
+		attr_info->name, attr_info->mode);
+
+	tgt_attr = kzalloc(sizeof(*tgt_attr), GFP_KERNEL);
+	if (tgt_attr == NULL) {
+		PRINT_ERROR("Unable to allocate user (size %zd)",
+			sizeof(*tgt_attr));
+		res = -ENOMEM;
+		goto out;
+	}
+
+	tgt_attr->target = target;
+
+	tgt_attr->name = kstrdup(attr_info->name, GFP_KERNEL);
+	if (tgt_attr->name == NULL) {
+		PRINT_ERROR("Unable to allocate attr %s name/value (target %s)",
+			attr_info->name, name);
+		res = -ENOMEM;
+		goto out_free;
+	}
+
+	list_add(&tgt_attr->attrs_list_entry, attrs_list);
+
+	tgt_attr->attr.attr.name = tgt_attr->name;
+	tgt_attr->attr.attr.owner = THIS_MODULE;
+	tgt_attr->attr.attr.mode = attr_info->mode & (S_IRUGO | S_IWUGO);
+	tgt_attr->attr.show = iscsi_attr_show;
+	tgt_attr->attr.store = iscsi_attr_store;
+
+	res = sysfs_create_file(
+		(target != NULL) ? scst_sysfs_get_tgt_kobj(target->scst_tgt) :
+				scst_sysfs_get_tgtt_kobj(&iscsi_template),
+		&tgt_attr->attr.attr);
+	if (res != 0) {
+		PRINT_ERROR("Unable to create file '%s' for target '%s'",
+			tgt_attr->attr.attr.name, name);
+		goto out_del;
+	}
+
+out:
+	return res;
+
+out_del:
+	list_del(&tgt_attr->attrs_list_entry);
+
+out_free:
+	kfree(tgt_attr->name);
+	kfree(tgt_attr);
+	goto out;
+}
+
+void __iscsi_del_attr(struct iscsi_target *target,
+	struct iscsi_attr *tgt_attr)
+{
+	TRACE_DBG("Deleting %s's attr %s",
+		(target != NULL) ? target->name : "global", tgt_attr->name);
+
+	list_del(&tgt_attr->attrs_list_entry);
+
+	sysfs_remove_file((target != NULL) ?
+			scst_sysfs_get_tgt_kobj(target->scst_tgt) :
+			scst_sysfs_get_tgtt_kobj(&iscsi_template),
+		&tgt_attr->attr.attr);
+
+	kfree(tgt_attr->name);
+	kfree(tgt_attr);
+	return;
+}
+
+/*
+ * target_mgmt_mutex supposed to be locked. If target != 0, target_mutex
+ * supposed to be locked as well.
+ */
+static int iscsi_del_attr(struct iscsi_target *target,
+	const char *attr_name)
+{
+	int res = 0;
+	struct iscsi_attr *tgt_attr, *a;
+	struct list_head *attrs_list;
+
+	if (target != NULL)
+		attrs_list = &target->attrs_list;
+	else
+		attrs_list = &iscsi_attrs_list;
+
+	tgt_attr = NULL;
+	list_for_each_entry(a, attrs_list, attrs_list_entry) {
+		if (strncmp(a->name, attr_name, sizeof(a->name)) == 0) {
+			tgt_attr = a;
+			break;
+		}
+	}
+
+	if (tgt_attr == NULL) {
+		PRINT_ERROR("attr %s not found (target %s)", attr_name,
+			(target != NULL) ? target->name : "global");
+		res = -ENOENT;
+		goto out;
+	}
+
+	__iscsi_del_attr(target, tgt_attr);
+
+out:
+	return res;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int iscsi_attr_cmd(void __user *ptr, unsigned int cmd)
+{
+	int rc, err = 0;
+	struct iscsi_kern_attr_info info;
+	struct iscsi_target *target;
+	struct scst_sysfs_user_info *i = NULL;
+
+	rc = copy_from_user(&info, ptr, sizeof(info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	info.attr.name[sizeof(info.attr.name)-1] = '\0';
+
+	if (info.cookie != 0) {
+		i = scst_sysfs_user_get_info(info.cookie);
+		TRACE_DBG("cookie %u, uinfo %p", info.cookie, i);
+		if (i == NULL) {
+			err = -EINVAL;
+			goto out;
+		}
+	}
+
+	target = target_lookup_by_id(info.tid);
+
+	if (target != NULL)
+		mutex_lock(&target->target_mutex);
+
+	switch (cmd) {
+	case ISCSI_ATTR_ADD:
+		err = iscsi_add_attr(target, &info.attr);
+		break;
+	case ISCSI_ATTR_DEL:
+		err = iscsi_del_attr(target, info.attr.name);
+		break;
+	default:
+		BUG();
+	}
+
+	if (target != NULL)
+		mutex_unlock(&target->target_mutex);
+
+	if (i != NULL) {
+		i->info_status = err;
+		complete(&i->info_completion);
+	}
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int add_target(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_kern_target_info *info;
+	struct scst_sysfs_user_info *uinfo;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (info == NULL) {
+		PRINT_ERROR("Can't alloc info (size %zd)", sizeof(*info));
+		err = -ENOMEM;
+		goto out;
+	}
+
+	rc = copy_from_user(info, ptr, sizeof(*info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out_free;
+	}
+
+	if (target_lookup_by_id(info->tid) != NULL) {
+		PRINT_ERROR("Target %u already exist!", info->tid);
+		err = -EEXIST;
+		goto out_free;
+	}
+
+	info->name[sizeof(info->name)-1] = '\0';
+
+	if (info->cookie != 0) {
+		uinfo = scst_sysfs_user_get_info(info->cookie);
+		TRACE_DBG("cookie %u, uinfo %p", info->cookie, uinfo);
+		if (uinfo == NULL) {
+			err = -EINVAL;
+			goto out_free;
+		}
+	} else
+		uinfo = NULL;
+
+	err = __add_target(info);
+
+	if (uinfo != NULL) {
+		uinfo->info_status = err;
+		complete(&uinfo->info_completion);
+	}
+
+out_free:
+	kfree(info);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int del_target(void __user *ptr)
+{
+	int err, rc;
+	struct iscsi_kern_target_info info;
+	struct scst_sysfs_user_info *uinfo;
+
+	rc = copy_from_user(&info, ptr, sizeof(info));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy %d user's bytes", rc);
+		err = -EFAULT;
+		goto out;
+	}
+
+	info.name[sizeof(info.name)-1] = '\0';
+
+	if (info.cookie != 0) {
+		uinfo = scst_sysfs_user_get_info(info.cookie);
+		TRACE_DBG("cookie %u, uinfo %p", info.cookie, uinfo);
+		if (uinfo == NULL) {
+			err = -EINVAL;
+			goto out;
+		}
+	} else
+		uinfo = NULL;
+
+	err = __del_target(info.tid);
+
+	if (uinfo != NULL) {
+		uinfo->info_status = err;
+		complete(&uinfo->info_completion);
+	}
+
+out:
+	return err;
+}
+
+static int iscsi_register(void __user *arg)
+{
+	struct iscsi_kern_register_info reg;
+	char ver[sizeof(ISCSI_SCST_INTERFACE_VERSION)+1];
+	int res, rc;
+
+	rc = copy_from_user(&reg, arg, sizeof(reg));
+	if (rc != 0) {
+		PRINT_ERROR("%s", "Unable to get register info");
+		res = -EFAULT;
+		goto out;
+	}
+
+	rc = copy_from_user(ver, (void __user *)(unsigned long)reg.version,
+				sizeof(ver));
+	if (rc != 0) {
+		PRINT_ERROR("%s", "Unable to get version string");
+		res = -EFAULT;
+		goto out;
+	}
+	ver[sizeof(ver)-1] = '\0';
+
+	if (strcmp(ver, ISCSI_SCST_INTERFACE_VERSION) != 0) {
+		PRINT_ERROR("Incorrect version of user space %s (expected %s)",
+			ver, ISCSI_SCST_INTERFACE_VERSION);
+		res = -EINVAL;
+		goto out;
+	}
+
+	memset(&reg, 0, sizeof(reg));
+	reg.max_data_seg_len = ISCSI_CONN_IOV_MAX << PAGE_SHIFT;
+	reg.max_queued_cmds = scst_get_max_lun_commands(NULL, NO_SUCH_LUN);
+
+	res = 0;
+
+	rc = copy_to_user(arg, &reg, sizeof(reg));
+	if (rc != 0) {
+		PRINT_ERROR("Failed to copy to user %d bytes", rc);
+		res = -EFAULT;
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+static long ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	long err;
+
+	if (cmd == REGISTER_USERD) {
+		err = iscsi_register((void __user *)arg);
+		goto out;
+	}
+
+	err = mutex_lock_interruptible(&target_mgmt_mutex);
+	if (err < 0)
+		goto out;
+
+	switch (cmd) {
+	case ADD_TARGET:
+		err = add_target((void __user *)arg);
+		break;
+
+	case DEL_TARGET:
+		err = del_target((void __user *)arg);
+		break;
+
+	case ISCSI_ATTR_ADD:
+	case ISCSI_ATTR_DEL:
+		err = iscsi_attr_cmd((void __user *)arg, cmd);
+		break;
+
+	case MGMT_CMD_CALLBACK:
+		err = mgmt_cmd_callback((void __user *)arg);
+		break;
+
+	case ADD_SESSION:
+		err = add_session((void __user *)arg);
+		break;
+
+	case DEL_SESSION:
+		err = del_session((void __user *)arg);
+		break;
+
+	case ISCSI_PARAM_SET:
+		err = iscsi_params_config((void __user *)arg, 1);
+		break;
+
+	case ISCSI_PARAM_GET:
+		err = iscsi_params_config((void __user *)arg, 0);
+		break;
+
+	case ADD_CONN:
+		err = add_conn((void __user *)arg);
+		break;
+
+	case DEL_CONN:
+		err = del_conn((void __user *)arg);
+		break;
+
+	default:
+		PRINT_ERROR("Invalid ioctl cmd %x", cmd);
+		err = -EINVAL;
+		goto out_unlock;
+	}
+
+out_unlock:
+	mutex_unlock(&target_mgmt_mutex);
+
+out:
+	return err;
+}
+
+static int open(struct inode *inode, struct file *file)
+{
+	bool already;
+
+	mutex_lock(&target_mgmt_mutex);
+	already = (ctr_open_state != ISCSI_CTR_OPEN_STATE_CLOSED);
+	if (!already)
+		ctr_open_state = ISCSI_CTR_OPEN_STATE_OPEN;
+	mutex_unlock(&target_mgmt_mutex);
+
+	if (already) {
+		PRINT_WARNING("%s", "Attempt to open the control "
+			"device a second time!");
+		return -EBUSY;
+	} else
+		return 0;
+}
+
+static int release(struct inode *inode, struct file *filp)
+{
+	struct iscsi_attr *attr, *t;
+
+	TRACE(TRACE_MGMT, "%s", "Releasing allocated resources");
+
+	mutex_lock(&target_mgmt_mutex);
+	ctr_open_state = ISCSI_CTR_OPEN_STATE_CLOSING;
+	mutex_unlock(&target_mgmt_mutex);
+
+	target_del_all();
+
+	mutex_lock(&target_mgmt_mutex);
+
+	list_for_each_entry_safe(attr, t, &iscsi_attrs_list,
+					attrs_list_entry) {
+		__iscsi_del_attr(NULL, attr);
+	}
+
+	ctr_open_state = ISCSI_CTR_OPEN_STATE_CLOSED;
+
+	mutex_unlock(&target_mgmt_mutex);
+
+	return 0;
+}
+
+const struct file_operations ctr_fops = {
+	.owner		= THIS_MODULE,
+	.unlocked_ioctl	= ioctl,
+	.compat_ioctl	= ioctl,
+	.open		= open,
+	.release	= release,
+};
+
+#ifdef CONFIG_SCST_DEBUG
+static void iscsi_dump_char(int ch, unsigned char *text, int *pos)
+{
+	int i = *pos;
+
+	if (ch < 0) {
+		while ((i % 16) != 0) {
+			printk(KERN_CONT "   ");
+			text[i] = ' ';
+			i++;
+			if ((i % 16) == 0)
+				printk(KERN_CONT " | %.16s |\n", text);
+			else if ((i % 4) == 0)
+				printk(KERN_CONT " |");
+		}
+		i = 0;
+		goto out;
+	}
+
+	text[i] = (ch < 0x20 || (ch >= 0x80 && ch <= 0xa0)) ? ' ' : ch;
+	printk(KERN_CONT " %02x", ch);
+	i++;
+	if ((i % 16) == 0) {
+		printk(KERN_CONT " | %.16s |\n", text);
+		i = 0;
+	} else if ((i % 4) == 0)
+		printk(KERN_CONT " |");
+
+out:
+	*pos = i;
+	return;
+}
+
+void iscsi_dump_pdu(struct iscsi_pdu *pdu)
+{
+	unsigned char text[16];
+	int pos = 0;
+
+	if (trace_flag & TRACE_D_DUMP_PDU) {
+		unsigned char *buf;
+		int i;
+
+		buf = (void *)&pdu->bhs;
+		printk(KERN_DEBUG "BHS: (%p,%zd)\n", buf, sizeof(pdu->bhs));
+		for (i = 0; i < (int)sizeof(pdu->bhs); i++)
+			iscsi_dump_char(*buf++, text, &pos);
+		iscsi_dump_char(-1, text, &pos);
+
+		buf = (void *)pdu->ahs;
+		printk(KERN_DEBUG "AHS: (%p,%d)\n", buf, pdu->ahssize);
+		for (i = 0; i < pdu->ahssize; i++)
+			iscsi_dump_char(*buf++, text, &pos);
+		iscsi_dump_char(-1, text, &pos);
+
+		printk(KERN_DEBUG "Data: (%d)\n", pdu->datasize);
+	}
+}
+
+unsigned long iscsi_get_flow_ctrl_or_mgmt_dbg_log_flag(struct iscsi_cmnd *cmnd)
+{
+	unsigned long flag;
+
+	if (cmnd->cmd_req != NULL)
+		cmnd = cmnd->cmd_req;
+
+	if (cmnd->scst_cmd == NULL)
+		flag = TRACE_MGMT_DEBUG;
+	else {
+		int status = scst_cmd_get_status(cmnd->scst_cmd);
+		if ((status == SAM_STAT_TASK_SET_FULL) ||
+		    (status == SAM_STAT_BUSY))
+			flag = TRACE_FLOW_CONTROL;
+		else
+			flag = TRACE_MGMT_DEBUG;
+	}
+	return flag;
+}
+
+#endif /* CONFIG_SCST_DEBUG */
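
All management commands above funnel through the control device's ioctl()
dispatcher. A rough, hypothetical sketch of the user-space side of the
REGISTER_USERD handshake could look as follows; the /dev node name and the
exact field types are assumptions, with the constants and struct taken from
include/scst/iscsi_scst.h:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include "iscsi_scst.h"	/* REGISTER_USERD, ISCSI_SCST_INTERFACE_VERSION,
			 * struct iscsi_kern_register_info */

static int register_userd(void)
{
	struct iscsi_kern_register_info reg;
	/* Device node name assumed; ctr_name in the kernel is "iscsi-scst-ctl". */
	int fd = open("/dev/iscsi-scst-ctl", O_RDWR);

	if (fd < 0)
		return -1;

	memset(&reg, 0, sizeof(reg));
	/* iscsi_register() reads the version string through this user
	 * pointer and compares it with its own ISCSI_SCST_INTERFACE_VERSION. */
	reg.version = (unsigned long)ISCSI_SCST_INTERFACE_VERSION;

	if (ioctl(fd, REGISTER_USERD, &reg) < 0)
		return -1;

	/* On success the kernel reports its limits back. */
	printf("max_data_seg_len %u, max_queued_cmds %u\n",
	       reg.max_data_seg_len, reg.max_queued_cmds);
	return fd;
}

The remaining ioctls (ADD_TARGET, ADD_SESSION, ADD_CONN, ...) follow the same
copy_from_user()-based pattern shown in the dispatcher above.
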
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/conn.c linux-2.6.33/drivers/scst/iscsi-scst/conn.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/conn.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/conn.c
@@ -0,0 +1,785 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/file.h>
+#include <linux/ip.h>
+#include <net/tcp.h>
+
+#include "iscsi.h"
+#include "digest.h"
+
+static int print_conn_state(char *p, size_t size, struct iscsi_conn *conn)
+{
+	int pos = 0;
+
+	if (conn->closing) {
+		pos += scnprintf(p, size, "%s", "closing");
+		goto out;
+	}
+
+	switch (conn->rd_state) {
+	case ISCSI_CONN_RD_STATE_PROCESSING:
+		pos += scnprintf(&p[pos], size - pos, "%s", "read_processing ");
+		break;
+	case ISCSI_CONN_RD_STATE_IN_LIST:
+		pos += scnprintf(&p[pos], size - pos, "%s", "in_read_list ");
+		break;
+	}
+
+	switch (conn->wr_state) {
+	case ISCSI_CONN_WR_STATE_PROCESSING:
+		pos += scnprintf(&p[pos], size - pos, "%s", "write_processing ");
+		break;
+	case ISCSI_CONN_WR_STATE_IN_LIST:
+		pos += scnprintf(&p[pos], size - pos, "%s", "in_write_list ");
+		break;
+	case ISCSI_CONN_WR_STATE_SPACE_WAIT:
+		pos += scnprintf(&p[pos], size - pos, "%s", "space_waiting ");
+		break;
+	}
+
+	if (test_bit(ISCSI_CONN_REINSTATING, &conn->conn_aflags))
+		pos += scnprintf(&p[pos], size - pos, "%s", "reinstating ");
+	else if (pos == 0)
+		pos += scnprintf(&p[pos], size - pos, "%s", "established idle ");
+
+out:
+	return pos;
+}
+
+static int conn_free(struct iscsi_conn *conn);
+
+static void iscsi_conn_release(struct kobject *kobj)
+{
+	struct iscsi_conn *conn;
+	struct iscsi_target *target;
+
+	conn = container_of(kobj, struct iscsi_conn, iscsi_conn_kobj);
+	target = conn->target;
+
+	mutex_lock(&target->target_mutex);
+	conn_free(conn);
+	mutex_unlock(&target->target_mutex);
+	return;
+}
+
+static struct kobj_type iscsi_conn_ktype = {
+	.sysfs_ops = &scst_sysfs_ops,
+	.release = iscsi_conn_release,
+};
+
+static ssize_t iscsi_get_initiator_ip(struct iscsi_conn *conn,
+	char *buf, int size)
+{
+	int pos;
+	struct sock *sk;
+
+	sk = conn->sock->sk;
+	switch (sk->sk_family) {
+	case AF_INET:
+		pos = scnprintf(buf, size,
+			"%u.%u.%u.%u", NIPQUAD(inet_sk(sk)->inet_daddr));
+		break;
+	case AF_INET6:
+		pos = scnprintf(buf, size, "[%p6]",
+			&inet6_sk(sk)->daddr);
+		break;
+	default:
+		pos = scnprintf(buf, size, "Unknown family %d",
+			sk->sk_family);
+		break;
+	}
+	return pos;
+}
+
+static ssize_t iscsi_conn_ip_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct iscsi_conn *conn;
+
+	conn = container_of(kobj, struct iscsi_conn, iscsi_conn_kobj);
+
+	pos = iscsi_get_initiator_ip(conn, buf, SCST_SYSFS_BLOCK_SIZE);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_conn_ip_attr =
+	__ATTR(ip, S_IRUGO, iscsi_conn_ip_show, NULL);
+
+static ssize_t iscsi_conn_cid_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct iscsi_conn *conn;
+
+	conn = container_of(kobj, struct iscsi_conn, iscsi_conn_kobj);
+
+	pos = sprintf(buf, "%u", conn->cid);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_conn_cid_attr =
+	__ATTR(cid, S_IRUGO, iscsi_conn_cid_show, NULL);
+
+static ssize_t iscsi_conn_state_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct iscsi_conn *conn;
+
+	conn = container_of(kobj, struct iscsi_conn, iscsi_conn_kobj);
+
+	pos = print_conn_state(buf, SCST_SYSFS_BLOCK_SIZE, conn);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_conn_state_attr =
+	__ATTR(state, S_IRUGO, iscsi_conn_state_show, NULL);
+
+/* target_mutex supposed to be locked */
+struct iscsi_conn *conn_lookup(struct iscsi_session *session, u16 cid)
+{
+	struct iscsi_conn *conn;
+
+	/*
+	 * We need to find the latest conn to correctly handle
+	 * multi-reinstatements
+	 */
+	list_for_each_entry_reverse(conn, &session->conn_list,
+					conn_list_entry) {
+		if (conn->cid == cid)
+			return conn;
+	}
+	return NULL;
+}
+
+void iscsi_make_conn_rd_active(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&iscsi_rd_lock);
+
+	TRACE_DBG("conn %p, rd_state %x, rd_data_ready %d", conn,
+		conn->rd_state, conn->rd_data_ready);
+
+	conn->rd_data_ready = 1;
+
+	if (conn->rd_state == ISCSI_CONN_RD_STATE_IDLE) {
+		list_add_tail(&conn->rd_list_entry, &iscsi_rd_list);
+		conn->rd_state = ISCSI_CONN_RD_STATE_IN_LIST;
+		wake_up(&iscsi_rd_waitQ);
+	}
+
+	spin_unlock_bh(&iscsi_rd_lock);
+	return;
+}
+
+void iscsi_make_conn_wr_active(struct iscsi_conn *conn)
+{
+	spin_lock_bh(&iscsi_wr_lock);
+
+	TRACE_DBG("conn %p, wr_state %x, wr_space_ready %d", conn,
+		conn->wr_state, conn->wr_space_ready);
+
+	if (conn->wr_state == ISCSI_CONN_WR_STATE_IDLE) {
+		list_add_tail(&conn->wr_list_entry, &iscsi_wr_list);
+		conn->wr_state = ISCSI_CONN_WR_STATE_IN_LIST;
+		wake_up(&iscsi_wr_waitQ);
+	}
+
+	spin_unlock_bh(&iscsi_wr_lock);
+	return;
+}
+
+void __mark_conn_closed(struct iscsi_conn *conn, int flags)
+{
+	spin_lock_bh(&iscsi_rd_lock);
+	conn->closing = 1;
+	if (flags & ISCSI_CONN_ACTIVE_CLOSE)
+		conn->active_close = 1;
+	if (flags & ISCSI_CONN_DELETING)
+		conn->deleting = 1;
+	spin_unlock_bh(&iscsi_rd_lock);
+
+	iscsi_make_conn_rd_active(conn);
+}
+
+void mark_conn_closed(struct iscsi_conn *conn)
+{
+	__mark_conn_closed(conn, ISCSI_CONN_ACTIVE_CLOSE);
+}
+
+static void __iscsi_state_change(struct sock *sk)
+{
+	struct iscsi_conn *conn = sk->sk_user_data;
+
+	if (unlikely(sk->sk_state != TCP_ESTABLISHED)) {
+		if (!conn->closing) {
+			PRINT_ERROR("Connection with initiator %s "
+				"unexpectedly closed!",
+				conn->session->initiator_name);
+			TRACE_MGMT_DBG("conn %p, sk state %d", conn,
+				sk->sk_state);
+			__mark_conn_closed(conn, 0);
+		}
+	} else
+		iscsi_make_conn_rd_active(conn);
+	return;
+}
+
+static void iscsi_state_change(struct sock *sk)
+{
+	struct iscsi_conn *conn = sk->sk_user_data;
+
+	__iscsi_state_change(sk);
+	conn->old_state_change(sk);
+
+	return;
+}
+
+static void iscsi_data_ready(struct sock *sk, int len)
+{
+	struct iscsi_conn *conn = sk->sk_user_data;
+
+	iscsi_make_conn_rd_active(conn);
+
+	conn->old_data_ready(sk, len);
+	return;
+}
+
+static void iscsi_write_space_ready(struct sock *sk)
+{
+	struct iscsi_conn *conn = sk->sk_user_data;
+
+	TRACE_DBG("Write space ready for conn %p", conn);
+
+	spin_lock_bh(&iscsi_wr_lock);
+	conn->wr_space_ready = 1;
+	if ((conn->wr_state == ISCSI_CONN_WR_STATE_SPACE_WAIT)) {
+		list_add_tail(&conn->wr_list_entry, &iscsi_wr_list);
+		conn->wr_state = ISCSI_CONN_WR_STATE_IN_LIST;
+		wake_up(&iscsi_wr_waitQ);
+	}
+	spin_unlock_bh(&iscsi_wr_lock);
+
+	conn->old_write_space(sk);
+	return;
+}
+
+static void conn_rsp_timer_fn(unsigned long arg)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *)arg;
+	struct iscsi_cmnd *cmnd;
+	unsigned long j = jiffies;
+
+	TRACE_DBG("Timer (conn %p)", conn);
+
+	spin_lock_bh(&conn->write_list_lock);
+
+	if (!list_empty(&conn->write_timeout_list)) {
+		unsigned long timeout_time;
+		cmnd = list_entry(conn->write_timeout_list.next,
+				struct iscsi_cmnd, write_timeout_list_entry);
+
+		timeout_time = j + conn->rsp_timeout + ISCSI_ADD_SCHED_TIME;
+
+		if (unlikely(time_after_eq(j, cmnd->write_start +
+						conn->rsp_timeout))) {
+			if (!conn->closing) {
+				PRINT_ERROR("Timeout sending data/waiting "
+					"for reply to/from initiator "
+					"%s (SID %llx), closing connection",
+					conn->session->initiator_name,
+					(long long unsigned int)
+						conn->session->sid);
+				/*
+				 * We must call mark_conn_closed() outside of
+				 * write_list_lock or we will have a circular
+				 * locking dependency with iscsi_rd_lock.
+				 */
+				spin_unlock_bh(&conn->write_list_lock);
+				mark_conn_closed(conn);
+				goto out;
+			}
+		} else if (!timer_pending(&conn->rsp_timer) ||
+			   time_after(conn->rsp_timer.expires, timeout_time)) {
+			TRACE_DBG("Restarting timer on %ld (conn %p)",
+				timeout_time, conn);
+			/*
+			 * Timer might have been restarted while we were
+			 * entering here.
+			 */
+			mod_timer(&conn->rsp_timer, timeout_time);
+		}
+	}
+
+	spin_unlock_bh(&conn->write_list_lock);
+
+	if (unlikely(conn->conn_tm_active)) {
+		TRACE_MGMT_DBG("TM active: making conn %p RD active", conn);
+		iscsi_make_conn_rd_active(conn);
+	}
+
+out:
+	return;
+}
+
+static void conn_nop_in_delayed_work_fn(struct delayed_work *work)
+{
+	struct iscsi_conn *conn = container_of(work, struct iscsi_conn,
+		nop_in_delayed_work);
+
+	if (time_after_eq(jiffies, conn->last_rcv_time +
+				conn->nop_in_interval)) {
+		iscsi_send_nop_in(conn);
+	}
+
+	if (conn->nop_in_interval > 0) {
+		TRACE_DBG("Reschedule Nop-In work for conn %p", conn);
+		schedule_delayed_work(&conn->nop_in_delayed_work,
+			conn->nop_in_interval + ISCSI_ADD_SCHED_TIME);
+	}
+	return;
+}
+
+/* Must be called from rd thread only */
+void iscsi_check_tm_data_wait_timeouts(struct iscsi_conn *conn, bool force)
+{
+	struct iscsi_cmnd *cmnd;
+	unsigned long j = jiffies;
+	bool aborted_cmds_pending;
+	unsigned long timeout_time = j + ISCSI_TM_DATA_WAIT_TIMEOUT +
+					ISCSI_ADD_SCHED_TIME;
+
+	TRACE_DBG_FLAG(force ? TRACE_CONN_OC_DBG : TRACE_MGMT_DEBUG,
+		"j %ld (TIMEOUT %d, force %d)", j,
+		ISCSI_TM_DATA_WAIT_TIMEOUT + ISCSI_ADD_SCHED_TIME, force);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+again:
+	spin_lock_bh(&iscsi_rd_lock);
+	spin_lock(&conn->write_list_lock);
+
+	aborted_cmds_pending = false;
+	list_for_each_entry(cmnd, &conn->write_timeout_list,
+				write_timeout_list_entry) {
+		if (test_bit(ISCSI_CMD_ABORTED, &cmnd->prelim_compl_flags)) {
+			TRACE_DBG_FLAG(force ? TRACE_CONN_OC_DBG : TRACE_MGMT_DEBUG,
+				"Checking aborted cmnd %p (scst_state %d, "
+				"on_write_timeout_list %d, write_start %ld, "
+				"r2t_len_to_receive %d)", cmnd,
+				cmnd->scst_state, cmnd->on_write_timeout_list,
+				cmnd->write_start, cmnd->r2t_len_to_receive);
+			if ((cmnd->r2t_len_to_receive != 0) &&
+			    (time_after_eq(j, cmnd->write_start + ISCSI_TM_DATA_WAIT_TIMEOUT) ||
+			     force)) {
+				spin_unlock(&conn->write_list_lock);
+				spin_unlock_bh(&iscsi_rd_lock);
+				iscsi_fail_data_waiting_cmnd(cmnd);
+				goto again;
+			}
+			aborted_cmds_pending = true;
+		}
+	}
+
+	if (aborted_cmds_pending) {
+		if (!force &&
+		    (!timer_pending(&conn->rsp_timer) ||
+		     time_after(conn->rsp_timer.expires, timeout_time))) {
+			TRACE_MGMT_DBG("Mod timer on %ld (conn %p)",
+				timeout_time, conn);
+			mod_timer(&conn->rsp_timer, timeout_time);
+		}
+	} else {
+		TRACE_MGMT_DBG("Clearing conn_tm_active for conn %p", conn);
+		conn->conn_tm_active = 0;
+	}
+
+	spin_unlock(&conn->write_list_lock);
+	spin_unlock_bh(&iscsi_rd_lock);
+	return;
+}
+
+/* target_mutex supposed to be locked */
+void conn_reinst_finished(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd, *t;
+
+	clear_bit(ISCSI_CONN_REINSTATING, &conn->conn_aflags);
+
+	list_for_each_entry_safe(cmnd, t, &conn->reinst_pending_cmd_list,
+					reinst_pending_cmd_list_entry) {
+		TRACE_MGMT_DBG("Restarting reinst pending cmnd %p",
+			cmnd);
+		list_del(&cmnd->reinst_pending_cmd_list_entry);
+		iscsi_restart_cmnd(cmnd);
+	}
+	return;
+}
+
+static void conn_activate(struct iscsi_conn *conn)
+{
+	TRACE_MGMT_DBG("Enabling conn %p", conn);
+
+	/* Catch double bind */
+	BUG_ON(conn->sock->sk->sk_state_change == iscsi_state_change);
+
+	write_lock_bh(&conn->sock->sk->sk_callback_lock);
+
+	conn->old_state_change = conn->sock->sk->sk_state_change;
+	conn->sock->sk->sk_state_change = iscsi_state_change;
+
+	conn->old_data_ready = conn->sock->sk->sk_data_ready;
+	conn->sock->sk->sk_data_ready = iscsi_data_ready;
+
+	conn->old_write_space = conn->sock->sk->sk_write_space;
+	conn->sock->sk->sk_write_space = iscsi_write_space_ready;
+
+	write_unlock_bh(&conn->sock->sk->sk_callback_lock);
+
+	/*
+	 * Check if conn was closed while we were initializing it.
+	 * This function will make conn rd_active, if necessary.
+	 */
+	__iscsi_state_change(conn->sock->sk);
+
+	return;
+}
+
+/*
+ * Note: the code below passes a kernel space pointer (&opt) to setsockopt()
+ * while the declaration of setsockopt specifies that it expects a user space
+ * pointer. This seems to work fine, and this approach is also used in some
+ * other parts of the Linux kernel (see e.g. fs/ocfs2/cluster/tcp.c).
+ */
+static int conn_setup_sock(struct iscsi_conn *conn)
+{
+	int res = 0;
+	int opt = 1;
+	mm_segment_t oldfs;
+	struct iscsi_session *session = conn->session;
+
+	TRACE_DBG("%llu", (long long unsigned int)session->sid);
+
+	conn->sock = SOCKET_I(conn->file->f_dentry->d_inode);
+
+	if (conn->sock->ops->sendpage == NULL) {
+		PRINT_ERROR("Socket for sid %llu doesn't support sendpage()",
+			    (long long unsigned int)session->sid);
+		res = -EINVAL;
+		goto out;
+	}
+
+#if 0
+	conn->sock->sk->sk_allocation = GFP_NOIO;
+#endif
+	conn->sock->sk->sk_user_data = conn;
+
+	oldfs = get_fs();
+	set_fs(get_ds());
+	conn->sock->ops->setsockopt(conn->sock, SOL_TCP, TCP_NODELAY,
+		(void __force __user *)&opt, sizeof(opt));
+	set_fs(oldfs);
+
+out:
+	return res;
+}
+
+/* target_mutex supposed to be locked */
+static int conn_free(struct iscsi_conn *conn)
+{
+	struct iscsi_session *session = conn->session;
+
+	TRACE_MGMT_DBG("Freeing conn %p (sess=%p, %#Lx %u)", conn,
+		session, (long long unsigned int)session->sid, conn->cid);
+
+	del_timer_sync(&conn->rsp_timer);
+
+	BUG_ON(atomic_read(&conn->conn_ref_cnt) != 0);
+	BUG_ON(!list_empty(&conn->cmd_list));
+	BUG_ON(!list_empty(&conn->write_list));
+	BUG_ON(!list_empty(&conn->write_timeout_list));
+	BUG_ON(conn->conn_reinst_successor != NULL);
+	BUG_ON(!test_bit(ISCSI_CONN_SHUTTINGDOWN, &conn->conn_aflags));
+
+	/* In case the new conn gets freed before the old one */
+	if (test_bit(ISCSI_CONN_REINSTATING, &conn->conn_aflags)) {
+		struct iscsi_conn *c;
+		TRACE_MGMT_DBG("Freeing being reinstated conn %p", conn);
+		list_for_each_entry(c, &session->conn_list,
+					conn_list_entry) {
+			if (c->conn_reinst_successor == conn) {
+				c->conn_reinst_successor = NULL;
+				break;
+			}
+		}
+	}
+
+	list_del(&conn->conn_list_entry);
+
+	fput(conn->file);
+	conn->file = NULL;
+	conn->sock = NULL;
+
+	free_page((unsigned long)conn->read_iov);
+
+	kfree(conn);
+
+	if (list_empty(&session->conn_list)) {
+		BUG_ON(session->sess_reinst_successor != NULL);
+		session_free(session, true);
+	}
+
+	return 0;
+}
+
+/* target_mutex supposed to be locked */
+static int iscsi_conn_alloc(struct iscsi_session *session,
+	struct iscsi_kern_conn_info *info, struct iscsi_conn **new_conn)
+{
+	struct iscsi_conn *conn;
+	int res = 0;
+	struct iscsi_conn *c;
+	int n = 1;
+	char addr[64];
+
+	conn = kzalloc(sizeof(*conn), GFP_KERNEL);
+	if (!conn) {
+		res = -ENOMEM;
+		goto out_err;
+	}
+
+	TRACE_MGMT_DBG("Creating connection %p for sid %#Lx, cid %u", conn,
+		       (long long unsigned int)session->sid, info->cid);
+
+	/* If changing this, change ISCSI_CONN_IOV_MAX as well! */
+	conn->read_iov = (struct iovec *)get_zeroed_page(GFP_KERNEL);
+	if (conn->read_iov == NULL) {
+		res = -ENOMEM;
+		goto out_err_free_conn;
+	}
+
+	atomic_set(&conn->conn_ref_cnt, 0);
+	conn->session = session;
+	if (session->sess_reinstating)
+		__set_bit(ISCSI_CONN_REINSTATING, &conn->conn_aflags);
+	conn->cid = info->cid;
+	conn->stat_sn = info->stat_sn;
+	conn->exp_stat_sn = info->exp_stat_sn;
+	conn->rd_state = ISCSI_CONN_RD_STATE_IDLE;
+	conn->wr_state = ISCSI_CONN_WR_STATE_IDLE;
+
+	conn->hdigest_type = session->sess_params.header_digest;
+	conn->ddigest_type = session->sess_params.data_digest;
+	res = digest_init(conn);
+	if (res != 0)
+		goto out_err_free1;
+
+	conn->target = session->target;
+	spin_lock_init(&conn->cmd_list_lock);
+	INIT_LIST_HEAD(&conn->cmd_list);
+	spin_lock_init(&conn->write_list_lock);
+	INIT_LIST_HEAD(&conn->write_list);
+	INIT_LIST_HEAD(&conn->write_timeout_list);
+	setup_timer(&conn->rsp_timer, conn_rsp_timer_fn, (unsigned long)conn);
+	init_waitqueue_head(&conn->read_state_waitQ);
+	init_completion(&conn->ready_to_free);
+	INIT_LIST_HEAD(&conn->reinst_pending_cmd_list);
+	INIT_LIST_HEAD(&conn->nop_req_list);
+	spin_lock_init(&conn->nop_req_list_lock);
+
+	conn->nop_in_ttt = 0;
+	INIT_DELAYED_WORK(&conn->nop_in_delayed_work,
+		(void (*)(struct work_struct *))conn_nop_in_delayed_work_fn);
+	conn->last_rcv_time = jiffies;
+	conn->rsp_timeout = session->tgt_params.rsp_timeout * HZ;
+	conn->nop_in_interval = session->tgt_params.nop_in_interval * HZ;
+	if (conn->nop_in_interval > 0) {
+		TRACE_DBG("Schedule Nop-In work for conn %p", conn);
+		schedule_delayed_work(&conn->nop_in_delayed_work,
+			conn->nop_in_interval + ISCSI_ADD_SCHED_TIME);
+	}
+
+	conn->file = fget(info->fd);
+
+	res = conn_setup_sock(conn);
+	if (res != 0)
+		goto out_err_free2;
+
+	iscsi_get_initiator_ip(conn, addr, sizeof(addr));
+
+restart:
+	list_for_each_entry(c, &session->conn_list, conn_list_entry) {
+		if (strcmp(addr, kobject_name(&c->iscsi_conn_kobj)) == 0) {
+			char c_addr[64];
+
+			iscsi_get_initiator_ip(conn, c_addr, sizeof(c_addr));
+
+			TRACE_DBG("Duplicated conn from the same initiator "
+				"%s found", c_addr);
+
+			snprintf(addr, sizeof(addr), "%s_%d", c_addr, n);
+			n++;
+			goto restart;
+		}
+	}
+
+	res = kobject_init_and_add(&conn->iscsi_conn_kobj, &iscsi_conn_ktype,
+		scst_sysfs_get_sess_kobj(session->scst_sess), addr);
+	if (res != 0) {
+		PRINT_ERROR("Unable create sysfs entries for conn %s",
+			addr);
+		goto out_err_free2;
+	}
+
+	TRACE_DBG("conn %p, iscsi_conn_kobj %p", conn, &conn->iscsi_conn_kobj);
+
+	res = sysfs_create_file(&conn->iscsi_conn_kobj,
+			&iscsi_conn_state_attr.attr);
+	if (res != 0) {
+		PRINT_ERROR("Unable create sysfs attribute %s for conn %s",
+			iscsi_conn_state_attr.attr.name, addr);
+		goto out_err_free3;
+	}
+
+	res = sysfs_create_file(&conn->iscsi_conn_kobj,
+			&iscsi_conn_cid_attr.attr);
+	if (res != 0) {
+		PRINT_ERROR("Unable create sysfs attribute %s for conn %s",
+			iscsi_conn_cid_attr.attr.name, addr);
+		goto out_err_free3;
+	}
+
+	res = sysfs_create_file(&conn->iscsi_conn_kobj,
+			&iscsi_conn_ip_attr.attr);
+	if (res != 0) {
+		PRINT_ERROR("Unable create sysfs attribute %s for conn %s",
+			iscsi_conn_ip_attr.attr.name, addr);
+		goto out_err_free3;
+	}
+
+	list_add_tail(&conn->conn_list_entry, &session->conn_list);
+
+	*new_conn = conn;
+
+out:
+	return res;
+
+out_err_free3:
+	kobject_put(&conn->iscsi_conn_kobj);
+	goto out;
+
+out_err_free2:
+	fput(conn->file);
+
+out_err_free1:
+	free_page((unsigned long)conn->read_iov);
+
+out_err_free_conn:
+	kfree(conn);
+
+out_err:
+	goto out;
+}
+
+/* target_mutex supposed to be locked */
+int __add_conn(struct iscsi_session *session, struct iscsi_kern_conn_info *info)
+{
+	struct iscsi_conn *conn, *new_conn = NULL;
+	int err;
+	bool reinstatement = false;
+
+	conn = conn_lookup(session, info->cid);
+	if ((conn != NULL) &&
+	    !test_bit(ISCSI_CONN_SHUTTINGDOWN, &conn->conn_aflags)) {
+		/* conn reinstatement */
+		reinstatement = true;
+	} else if (!list_empty(&session->conn_list)) {
+		err = -EEXIST;
+		goto out;
+	}
+
+	err = iscsi_conn_alloc(session, info, &new_conn);
+	if (err != 0)
+		goto out;
+
+	if (reinstatement) {
+		TRACE_MGMT_DBG("Reinstating conn (old %p, new %p)", conn,
+			new_conn);
+		conn->conn_reinst_successor = new_conn;
+		__set_bit(ISCSI_CONN_REINSTATING, &new_conn->conn_aflags);
+		__mark_conn_closed(conn, 0);
+	}
+
+	conn_activate(new_conn);
+
+out:
+	return err;
+}
+
+/* target_mutex supposed to be locked */
+int __del_conn(struct iscsi_session *session, struct iscsi_kern_conn_info *info)
+{
+	struct iscsi_conn *conn;
+	int err = -EEXIST;
+
+	conn = conn_lookup(session, info->cid);
+	if (!conn) {
+		PRINT_ERROR("Connection %d not found", info->cid);
+		return err;
+	}
+
+	PRINT_INFO("Deleting connection with initiator %s (%p)",
+		conn->session->initiator_name, conn);
+
+	__mark_conn_closed(conn, ISCSI_CONN_ACTIVE_CLOSE|ISCSI_CONN_DELETING);
+
+	return 0;
+}
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+
+void iscsi_extracheck_is_rd_thread(struct iscsi_conn *conn)
+{
+	if (unlikely(current != conn->rd_task)) {
+		printk(KERN_EMERG "conn %p rd_task != current %p (pid %d)\n",
+			conn, current, current->pid);
+		while (in_softirq())
+			local_bh_enable();
+		printk(KERN_EMERG "rd_state %x\n", conn->rd_state);
+		printk(KERN_EMERG "rd_task %p\n", conn->rd_task);
+		printk(KERN_EMERG "rd_task->pid %d\n", conn->rd_task->pid);
+		BUG();
+	}
+}
+
+void iscsi_extracheck_is_wr_thread(struct iscsi_conn *conn)
+{
+	if (unlikely(current != conn->wr_task)) {
+		printk(KERN_EMERG "conn %p wr_task != current %p (pid %d)\n",
+			conn, current, current->pid);
+		while (in_softirq())
+			local_bh_enable();
+		printk(KERN_EMERG "wr_state %x\n", conn->wr_state);
+		printk(KERN_EMERG "wr_task %p\n", conn->wr_task);
+		printk(KERN_EMERG "wr_task->pid %d\n", conn->wr_task->pid);
+		BUG();
+	}
+}
+
+#endif /* CONFIG_SCST_EXTRACHECKS */
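
conn_activate() above relies on a common kernel idiom: save the socket's
original sk_* callbacks under sk_callback_lock, install hooks, and always
chain back to the saved originals. A distilled sketch of that idiom follows;
struct my_conn and wake_up_reader() are illustrative placeholders, and the
two-argument sk_data_ready signature is the 2.6.33-era one:

#include <net/sock.h>

struct my_conn {
	struct socket *sock;
	void (*old_data_ready)(struct sock *sk, int bytes);
};

static void wake_up_reader(struct my_conn *conn)
{
	/* Placeholder: make the connection's read thread runnable. */
}

static void my_data_ready(struct sock *sk, int bytes)
{
	struct my_conn *conn = sk->sk_user_data;

	wake_up_reader(conn);			/* our processing first ... */
	conn->old_data_ready(sk, bytes);	/* ... then chain to the original */
}

static void hook_socket(struct my_conn *conn)
{
	struct sock *sk = conn->sock->sk;

	write_lock_bh(&sk->sk_callback_lock);
	sk->sk_user_data = conn;
	conn->old_data_ready = sk->sk_data_ready;
	sk->sk_data_ready = my_data_ready;
	write_unlock_bh(&sk->sk_callback_lock);
}

Chaining to the saved callback is what keeps the socket layer's own wakeups
working; the BUG_ON() in conn_activate() guards against installing the hook
twice.
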
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/digest.c linux-2.6.33/drivers/scst/iscsi-scst/digest.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/digest.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/digest.c
@@ -0,0 +1,226 @@
+/*
+ *  iSCSI digest handling.
+ *
+ *  Copyright (C) 2004 - 2006 Xiranet Communications GmbH
+ *                            <arne.redlich@xiranet.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/types.h>
+#include <linux/scatterlist.h>
+
+#include "iscsi.h"
+#include "digest.h"
+#include <linux/crc32c.h>
+
+void digest_alg_available(int *val)
+{
+#if defined(CONFIG_LIBCRC32C_MODULE) || defined(CONFIG_LIBCRC32C)
+	int crc32c = 1;
+#else
+	int crc32c = 0;
+#endif
+
+	if ((*val & DIGEST_CRC32C) && !crc32c) {
+		PRINT_ERROR("%s", "CRC32C digest algorithm not available "
+			"in kernel");
+		*val &= ~DIGEST_CRC32C;
+	}
+}
+
+/**
+ * digest_init - initialize support for digest calculation
+ * @conn: ptr to connection to make use of digests
+ *
+ * Returns 0 on success, < 0 on error.
+ */
+int digest_init(struct iscsi_conn *conn)
+{
+	if (!(conn->hdigest_type & DIGEST_ALL))
+		conn->hdigest_type = DIGEST_NONE;
+
+	if (!(conn->ddigest_type & DIGEST_ALL))
+		conn->ddigest_type = DIGEST_NONE;
+
+	return 0;
+}
+
+static u32 evaluate_crc32_from_sg(struct scatterlist *sg, int nbytes,
+	uint32_t padding)
+{
+	u32 crc = ~0;
+	int pad_bytes = ((nbytes + 3) & -4) - nbytes;
+
+#ifdef CONFIG_SCST_ISCSI_DEBUG_DIGEST_FAILURES
+	if ((scst_random() % 100000) == 752) {
+		PRINT_INFO("%s", "Simulating digest failure");
+		return 0;
+	}
+#endif
+
+#if defined(CONFIG_LIBCRC32C_MODULE) || defined(CONFIG_LIBCRC32C)
+	while (nbytes > 0) {
+		int d = min(nbytes, (int)(sg->length));
+		crc = crc32c(crc, sg_virt(sg), d);
+		nbytes -= d;
+		sg++;
+	}
+
+	if (pad_bytes)
+		crc = crc32c(crc, (u8 *)&padding, pad_bytes);
+#endif
+
+	return ~cpu_to_le32(crc);
+}
+
+static u32 digest_header(struct iscsi_pdu *pdu)
+{
+	struct scatterlist sg[2];
+	unsigned int nbytes = sizeof(struct iscsi_hdr);
+	int asize = (pdu->ahssize + 3) & -4;
+
+	sg_init_table(sg, 2);
+
+	sg_set_buf(&sg[0], &pdu->bhs, nbytes);
+	if (pdu->ahssize) {
+		sg_set_buf(&sg[1], pdu->ahs, asize);
+		nbytes += asize;
+	}
+	EXTRACHECKS_BUG_ON((nbytes & 3) != 0);
+	return evaluate_crc32_from_sg(sg, nbytes, 0);
+}
+
+static u32 digest_data(struct iscsi_cmnd *cmd, u32 size, u32 offset,
+	uint32_t padding)
+{
+	struct scatterlist *sg = cmd->sg;
+	int idx, count;
+	struct scatterlist saved_sg;
+	u32 crc;
+
+	offset += sg[0].offset;
+	idx = offset >> PAGE_SHIFT;
+	offset &= ~PAGE_MASK;
+
+	count = get_pgcnt(size, offset);
+
+	TRACE_DBG("req %p, idx %d, count %d, sg_cnt %d, size %d, "
+		"offset %d", cmd, idx, count, cmd->sg_cnt, size, offset);
+	BUG_ON(idx + count > cmd->sg_cnt);
+
+	saved_sg = sg[idx];
+	sg[idx].offset = offset;
+	sg[idx].length -= offset - saved_sg.offset;
+
+	crc = evaluate_crc32_from_sg(sg + idx, size, padding);
+
+	sg[idx] = saved_sg;
+	return crc;
+}
+
+int digest_rx_header(struct iscsi_cmnd *cmnd)
+{
+	u32 crc;
+
+	crc = digest_header(&cmnd->pdu);
+	if (unlikely(crc != cmnd->hdigest)) {
+		PRINT_ERROR("%s", "RX header digest failed");
+		return -EIO;
+	} else
+		TRACE_DBG("RX header digest OK for cmd %p", cmnd);
+
+	return 0;
+}
+
+void digest_tx_header(struct iscsi_cmnd *cmnd)
+{
+	cmnd->hdigest = digest_header(&cmnd->pdu);
+	TRACE_DBG("TX header digest for cmd %p: %x", cmnd, cmnd->hdigest);
+}
+
+int digest_rx_data(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_cmnd *req;
+	struct iscsi_data_out_hdr *req_hdr;
+	u32 offset, crc;
+	int res = 0;
+
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_SCSI_DATA_OUT:
+		req = cmnd->cmd_req;
+		if (unlikely(req == NULL)) {
+			/* It can be for prelim completed commands */
+			req = cmnd;
+			goto out;
+		}
+		req_hdr = (struct iscsi_data_out_hdr *)&cmnd->pdu.bhs;
+		offset = be32_to_cpu(req_hdr->buffer_offset);
+		break;
+
+	default:
+		req = cmnd;
+		offset = 0;
+	}
+
+	/*
+	 * We need to skip the digest check for prelim completed commands,
+	 * because we use shared data buffer for them, so, most likely, the
+	 * check will fail. Plus, for such commands we sometimes don't have
+	 * sg_cnt set correctly (cmnd_prepare_get_rejected_cmd_data() doesn't
+	 * do it).
+	 */
+	if (unlikely(req->prelim_compl_flags != 0))
+		goto out;
+
+	crc = digest_data(req, cmnd->pdu.datasize, offset,
+		cmnd->conn->rpadding);
+
+	if (unlikely(crc != cmnd->ddigest)) {
+		PRINT_ERROR("%s", "RX data digest failed");
+		TRACE_MGMT_DBG("Calculated crc %x, ddigest %x, offset %d", crc,
+			cmnd->ddigest, offset);
+		iscsi_dump_pdu(&cmnd->pdu);
+		res = -EIO;
+	} else
+		TRACE_DBG("RX data digest OK for cmd %p", cmnd);
+
+out:
+	return res;
+}
+
+void digest_tx_data(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_data_in_hdr *hdr;
+	u32 offset;
+
+	TRACE_DBG("%s:%d req %p, own_sg %d, sg %p, sgcnt %d cmnd %p, "
+		"own_sg %d, sg %p, sgcnt %d", __func__, __LINE__,
+		cmnd->parent_req, cmnd->parent_req->own_sg,
+		cmnd->parent_req->sg, cmnd->parent_req->sg_cnt,
+		cmnd, cmnd->own_sg, cmnd->sg, cmnd->sg_cnt);
+
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_SCSI_DATA_IN:
+		hdr = (struct iscsi_data_in_hdr *)&cmnd->pdu.bhs;
+		offset = be32_to_cpu(hdr->buffer_offset);
+		break;
+	default:
+		offset = 0;
+	}
+
+	cmnd->ddigest = digest_data(cmnd, cmnd->pdu.datasize, offset, 0);
+	TRACE_DBG("TX data digest for cmd %p: %x (offset %d, opcode %x)", cmnd,
+		cmnd->ddigest, offset, cmnd_opcode(cmnd));
+}
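
The digest code above delegates the actual checksum to the kernel's
libcrc32c. For reference, a stand-alone bitwise equivalent of what
evaluate_crc32_from_sg() computes over a flat buffer (seed ~0, reflected
Castagnoli polynomial, final inversion by the caller) might look like this
sketch, which ignores the scatterlist walk, padding and endianness handling:

#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). Like the
 * kernel's crc32c(), the seed is passed in and there is no internal
 * pre/post inversion, so the caller seeds with ~0 and inverts the result. */
static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
	}
	return crc;
}

/* Header digest over the 48-byte BHS, as in digest_header(). */
static uint32_t bhs_digest(const void *bhs)
{
	return ~crc32c_update(~0U, bhs, 48);
}
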
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/event.c linux-2.6.33/drivers/scst/iscsi-scst/event.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/event.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/event.c
@@ -0,0 +1,163 @@
+/*
+ *  Event notification code.
+ *
+ *  Copyright (C) 2005 FUJITA Tomonori <tomof@acm.org>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ *
+ *  Some functions are based on audit code.
+ */
+
+#include <net/tcp.h>
+#include "iscsi_scst.h"
+#include "iscsi.h"
+
+static struct sock *nl;
+static u32 iscsid_pid;
+
+static int event_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+{
+	u32 uid, pid, seq;
+	char *data;
+
+	pid  = NETLINK_CREDS(skb)->pid;
+	uid  = NETLINK_CREDS(skb)->uid;
+	seq  = nlh->nlmsg_seq;
+	data = NLMSG_DATA(nlh);
+
+	iscsid_pid = pid;
+
+	return 0;
+}
+
+static void event_recv_skb(struct sk_buff *skb)
+{
+	int err;
+	struct nlmsghdr	*nlh;
+	u32 rlen;
+
+	while (skb->len >= NLMSG_SPACE(0)) {
+		nlh = (struct nlmsghdr *)skb->data;
+		if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len)
+			goto out;
+		rlen = NLMSG_ALIGN(nlh->nlmsg_len);
+		if (rlen > skb->len)
+			rlen = skb->len;
+		err = event_recv_msg(skb, nlh);
+		if (err)
+			netlink_ack(skb, nlh, -err);
+		else if (nlh->nlmsg_flags & NLM_F_ACK)
+			netlink_ack(skb, nlh, 0);
+		skb_pull(skb, rlen);
+	}
+
+out:
+	return;
+}
+
+/* event_mutex supposed to be held */
+static int __event_send(const void *buf, int buf_len)
+{
+	int res = 0, len;
+	struct sk_buff *skb;
+	struct nlmsghdr *nlh;
+	static u32 seq; /* protected by event_mutex */
+
+	if (ctr_open_state != ISCSI_CTR_OPEN_STATE_OPEN)
+		goto out;
+
+	len = NLMSG_SPACE(buf_len);
+
+	skb = alloc_skb(len, GFP_KERNEL);
+	if (skb == NULL) {
+		PRINT_ERROR("alloc_skb() failed (len %d)", len);
+		res =  -ENOMEM;
+		goto out;
+	}
+
+	/* The netlink payload is exactly buf_len bytes */
+	nlh = __nlmsg_put(skb, iscsid_pid, seq++, NLMSG_DONE, buf_len, 0);
+
+	memcpy(NLMSG_DATA(nlh), buf, buf_len);
+	res = netlink_unicast(nl, skb, iscsid_pid, 0);
+	if (res <= 0) {
+		if (res != -ECONNREFUSED)
+			PRINT_ERROR("netlink_unicast() failed: %d", res);
+		else
+			TRACE(TRACE_MINOR, "netlink_unicast() failed: %s. "
+				"Not functioning user space?",
+				"Connection refused");
+		goto out;
+	}
+
+out:
+	return res;
+}
+
+int event_send(u32 tid, u64 sid, u32 cid, u32 cookie,
+	enum iscsi_kern_event_code code,
+	const char *param1, const char *param2)
+{
+	int err;
+	static DEFINE_MUTEX(event_mutex);
+	struct iscsi_kern_event event;
+	int param1_size, param2_size;
+
+	param1_size = (param1 != NULL) ? strlen(param1) : 0;
+	param2_size = (param2 != NULL) ? strlen(param2) : 0;
+
+	event.tid = tid;
+	event.sid = sid;
+	event.cid = cid;
+	event.code = code;
+	event.cookie = cookie;
+	event.param1_size = param1_size;
+	event.param2_size = param2_size;
+
+	mutex_lock(&event_mutex);
+
+	err = __event_send(&event, sizeof(event));
+	if (err <= 0)
+		goto out_unlock;
+
+	if (param1_size > 0) {
+		err = __event_send(param1, param1_size);
+		if (err <= 0)
+			goto out_unlock;
+	}
+
+	if (param2_size > 0) {
+		err = __event_send(param2, param2_size);
+		if (err <= 0)
+			goto out_unlock;
+	}
+
+out_unlock:
+	mutex_unlock(&event_mutex);
+	return err;
+}
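+
+/*
+ * A rough sketch of the user space side (daemon name hypothetical): iscsid
+ * opens a NETLINK_ISCSI_SCST socket and sends any message, which lets
+ * event_recv_msg() above record its pid. Each event then arrives as up to
+ * three unicast messages: the fixed-size struct iscsi_kern_event, followed
+ * by the param1 and param2 payloads, if present.
+ */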
+
+int __init event_init(void)
+{
+	nl = netlink_kernel_create(&init_net, NETLINK_ISCSI_SCST, 1,
+				   event_recv_skb, NULL, THIS_MODULE);
+	if (nl == NULL) {
+		PRINT_ERROR("%s", "netlink_kernel_create() failed");
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+void event_exit(void)
+{
+	netlink_kernel_release(nl);
+}
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi.c linux-2.6.33/drivers/scst/iscsi-scst/iscsi.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/iscsi.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/iscsi.c
@@ -0,0 +1,3583 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/hash.h>
+#include <linux/kthread.h>
+#include <linux/scatterlist.h>
+#include <net/tcp.h>
+#include <scsi/scsi.h>
+
+#include "iscsi.h"
+#include "digest.h"
+
+#define ISCSI_INIT_WRITE_WAKE		0x1
+
+static int ctr_major;
+static char ctr_name[] = "iscsi-scst-ctl";
+
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+unsigned long iscsi_trace_flag = ISCSI_DEFAULT_LOG_FLAGS;
+#endif
+
+static struct kmem_cache *iscsi_cmnd_cache;
+
+DEFINE_SPINLOCK(iscsi_rd_lock);
+LIST_HEAD(iscsi_rd_list);
+DECLARE_WAIT_QUEUE_HEAD(iscsi_rd_waitQ);
+
+DEFINE_SPINLOCK(iscsi_wr_lock);
+LIST_HEAD(iscsi_wr_list);
+DECLARE_WAIT_QUEUE_HEAD(iscsi_wr_waitQ);
+
+static struct page *dummy_page;
+static struct scatterlist dummy_sg;
+
+struct iscsi_thread_t {
+	struct task_struct *thr;
+	struct list_head threads_list_entry;
+};
+
+static LIST_HEAD(iscsi_threads_list);
+
+static void cmnd_remove_data_wait_hash(struct iscsi_cmnd *cmnd);
+static void iscsi_send_task_mgmt_resp(struct iscsi_cmnd *req, int status);
+static void iscsi_check_send_delayed_tm_resp(struct iscsi_session *sess);
+static void req_cmnd_release(struct iscsi_cmnd *req);
+static int iscsi_preliminary_complete(struct iscsi_cmnd *req,
+	struct iscsi_cmnd *orig_req, bool get_data);
+static int cmnd_insert_data_wait_hash(struct iscsi_cmnd *cmnd);
+static void __cmnd_abort(struct iscsi_cmnd *cmnd);
+static void iscsi_set_resid(struct iscsi_cmnd *rsp, bool bufflen_set);
+static void iscsi_cmnd_init_write(struct iscsi_cmnd *rsp, int flags);
+
+static void req_del_from_write_timeout_list(struct iscsi_cmnd *req)
+{
+	struct iscsi_conn *conn;
+
+	if (!req->on_write_timeout_list)
+		goto out;
+
+	conn = req->conn;
+
+	TRACE_DBG("Deleting cmd %p from conn %p write_timeout_list",
+		req, conn);
+
+	spin_lock_bh(&conn->write_list_lock);
+
+	/* Recheck, since it can be changed behind us */
+	if (unlikely(!req->on_write_timeout_list))
+		goto out_unlock;
+
+	list_del(&req->write_timeout_list_entry);
+	req->on_write_timeout_list = 0;
+
+out_unlock:
+	spin_unlock_bh(&conn->write_list_lock);
+
+out:
+	return;
+}
+
+static inline u32 cmnd_write_size(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_scsi_cmd_hdr *hdr = cmnd_hdr(cmnd);
+
+	if (hdr->flags & ISCSI_CMD_WRITE)
+		return be32_to_cpu(hdr->data_length);
+	return 0;
+}
+
+static inline int cmnd_read_size(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_scsi_cmd_hdr *hdr = cmnd_hdr(cmnd);
+
+	if (hdr->flags & ISCSI_CMD_READ) {
+		struct iscsi_ahs_hdr *ahdr;
+
+		if (!(hdr->flags & ISCSI_CMD_WRITE))
+			return be32_to_cpu(hdr->data_length);
+
+		ahdr = (struct iscsi_ahs_hdr *)cmnd->pdu.ahs;
+		if (ahdr != NULL) {
+			uint8_t *p = (uint8_t *)ahdr;
+			unsigned int size = 0;
+			do {
+				int s;
+
+				ahdr = (struct iscsi_ahs_hdr *)p;
+
+				if (ahdr->ahstype == ISCSI_AHSTYPE_RLENGTH) {
+					struct iscsi_rlength_ahdr *rh =
+					      (struct iscsi_rlength_ahdr *)ahdr;
+					return be32_to_cpu(rh->read_length);
+				}
+
+				s = 3 + be16_to_cpu(ahdr->ahslength);
+				s = (s + 3) & -4;
+				size += s;
+				p += s;
+			} while (size < cmnd->pdu.ahssize);
+		}
+		return -1;
+	}
+	return 0;
+}
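+
+/*
+ * AHS entries are parsed above as a 2-byte AHSLength, a 1-byte AHSType and
+ * the payload, with the total rounded up to a multiple of 4. For example,
+ * an entry with AHSLength 5 occupies (3 + 5 + 3) & -4 = 8 bytes, so the
+ * walk advances p by 8 to reach the next entry.
+ */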
+
+void iscsi_restart_cmnd(struct iscsi_cmnd *cmnd)
+{
+	int status;
+
+	EXTRACHECKS_BUG_ON(cmnd->r2t_len_to_receive != 0);
+	EXTRACHECKS_BUG_ON(cmnd->r2t_len_to_send != 0);
+
+	req_del_from_write_timeout_list(cmnd);
+
+	/*
+	 * Remove cmnd from the hash as early as possible to keep the hash small.
+	 * See also corresponding comment in req_cmnd_release().
+	 */
+	if (cmnd->hashed)
+		cmnd_remove_data_wait_hash(cmnd);
+
+	if (unlikely(test_bit(ISCSI_CONN_REINSTATING,
+			&cmnd->conn->conn_aflags))) {
+		struct iscsi_target *target = cmnd->conn->session->target;
+		bool get_out;
+
+		mutex_lock(&target->target_mutex);
+
+		get_out = test_bit(ISCSI_CONN_REINSTATING,
+				&cmnd->conn->conn_aflags);
+		/* Let's not look dead */
+		if (scst_cmd_get_cdb(cmnd->scst_cmd)[0] == TEST_UNIT_READY)
+			get_out = false;
+
+		if (!get_out)
+			goto unlock_cont;
+
+		TRACE_MGMT_DBG("Pending cmnd %p, because conn %p is "
+			"reinstated", cmnd, cmnd->conn);
+
+		cmnd->scst_state = ISCSI_CMD_STATE_REINST_PENDING;
+		list_add_tail(&cmnd->reinst_pending_cmd_list_entry,
+			&cmnd->conn->reinst_pending_cmd_list);
+
+unlock_cont:
+		mutex_unlock(&target->target_mutex);
+
+		if (get_out)
+			goto out;
+	}
+
+	if (unlikely(cmnd->prelim_compl_flags != 0)) {
+		if (test_bit(ISCSI_CMD_ABORTED, &cmnd->prelim_compl_flags)) {
+			TRACE_MGMT_DBG("cmnd %p (scst_cmd %p) aborted", cmnd,
+				cmnd->scst_cmd);
+			req_cmnd_release_force(cmnd);
+			goto out;
+		}
+
+		if (cmnd->scst_cmd == NULL) {
+			TRACE_MGMT_DBG("Finishing preliminary completed cmd %p "
+				"with NULL scst_cmd", cmnd);
+			req_cmnd_release(cmnd);
+			goto out;
+		}
+
+		status = SCST_PREPROCESS_STATUS_ERROR_SENSE_SET;
+	} else
+		status = SCST_PREPROCESS_STATUS_SUCCESS;
+
+	cmnd->scst_state = ISCSI_CMD_STATE_RESTARTED;
+
+	scst_restart_cmd(cmnd->scst_cmd, status, SCST_CONTEXT_THREAD);
+
+out:
+	return;
+}
+
+void iscsi_fail_data_waiting_cmnd(struct iscsi_cmnd *cmnd)
+{
+	TRACE_MGMT_DBG("Failing data waiting cmnd %p", cmnd);
+
+	/*
+	 * There is no race with conn_abort(), since all functions
+	 * called from single read thread
+	 */
+	iscsi_extracheck_is_rd_thread(cmnd->conn);
+	cmnd->r2t_len_to_receive = 0;
+	cmnd->r2t_len_to_send = 0;
+
+	req_cmnd_release_force(cmnd);
+	return;
+}
+
+struct iscsi_cmnd *cmnd_alloc(struct iscsi_conn *conn,
+			      struct iscsi_cmnd *parent)
+{
+	struct iscsi_cmnd *cmnd;
+
+	/* ToDo: __GFP_NOFAIL?? */
+	cmnd = kmem_cache_zalloc(iscsi_cmnd_cache, GFP_KERNEL|__GFP_NOFAIL);
+
+	atomic_set(&cmnd->ref_cnt, 1);
+	cmnd->scst_state = ISCSI_CMD_STATE_NEW;
+	cmnd->conn = conn;
+	cmnd->parent_req = parent;
+
+	if (parent == NULL) {
+		conn_get(conn);
+
+		INIT_LIST_HEAD(&cmnd->rsp_cmd_list);
+		INIT_LIST_HEAD(&cmnd->rx_ddigest_cmd_list);
+		cmnd->target_task_tag = cpu_to_be32(ISCSI_RESERVED_TAG);
+
+		spin_lock_bh(&conn->cmd_list_lock);
+		list_add_tail(&cmnd->cmd_list_entry, &conn->cmd_list);
+		spin_unlock_bh(&conn->cmd_list_lock);
+	}
+
+	TRACE_DBG("conn %p, parent %p, cmnd %p", conn, parent, cmnd);
+	return cmnd;
+}
+
+/* Frees a command. Also frees the additional header. */
+static void cmnd_free(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("cmnd %p", cmnd);
+
+	if (unlikely(test_bit(ISCSI_CMD_ABORTED, &cmnd->prelim_compl_flags))) {
+		TRACE_MGMT_DBG("Free aborted cmd %p (scst cmd %p, state %d, "
+			"parent_req %p)", cmnd, cmnd->scst_cmd,
+			cmnd->scst_state, cmnd->parent_req);
+	}
+
+	/* Catch users from cmd_list or rsp_cmd_list */
+	EXTRACHECKS_BUG_ON(atomic_read(&cmnd->ref_cnt) != 0);
+
+	kfree(cmnd->pdu.ahs);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (unlikely(cmnd->on_write_list || cmnd->on_write_timeout_list)) {
+		struct iscsi_scsi_cmd_hdr *req = cmnd_hdr(cmnd);
+
+		PRINT_CRIT_ERROR("cmnd %p still on some list?, %x, %x, %x, "
+			"%x, %x, %x, %x", cmnd, req->opcode, req->scb[0],
+			req->flags, req->itt, be32_to_cpu(req->data_length),
+			req->cmd_sn, be32_to_cpu(cmnd->pdu.datasize));
+
+		if (unlikely(cmnd->parent_req)) {
+			struct iscsi_scsi_cmd_hdr *preq =
+					cmnd_hdr(cmnd->parent_req);
+			PRINT_CRIT_ERROR("%p %x %u", preq, preq->opcode,
+				preq->scb[0]);
+		}
+		BUG();
+	}
+#endif
+
+	kmem_cache_free(iscsi_cmnd_cache, cmnd);
+	return;
+}
+
+/* Might be called under some lock and on SIRQ */
+void cmnd_done(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("cmnd %p", cmnd);
+
+	if (unlikely(test_bit(ISCSI_CMD_ABORTED, &cmnd->prelim_compl_flags))) {
+		TRACE_MGMT_DBG("Done aborted cmd %p (scst cmd %p, state %d, "
+			"parent_req %p)", cmnd, cmnd->scst_cmd,
+			cmnd->scst_state, cmnd->parent_req);
+	}
+
+	EXTRACHECKS_BUG_ON(cmnd->on_rx_digest_list);
+	EXTRACHECKS_BUG_ON(cmnd->hashed);
+
+	req_del_from_write_timeout_list(cmnd);
+
+	if (cmnd->parent_req == NULL) {
+		struct iscsi_conn *conn = cmnd->conn;
+		struct iscsi_cmnd *rsp, *t;
+
+		TRACE_DBG("Deleting req %p from conn %p", cmnd, conn);
+
+		spin_lock_bh(&conn->cmd_list_lock);
+		list_del(&cmnd->cmd_list_entry);
+		spin_unlock_bh(&conn->cmd_list_lock);
+
+		conn_put(conn);
+
+		EXTRACHECKS_BUG_ON(!list_empty(&cmnd->rx_ddigest_cmd_list));
+
+		/* Order between above and below code is important! */
+
+		if ((cmnd->scst_cmd != NULL) || (cmnd->scst_aen != NULL)) {
+			switch (cmnd->scst_state) {
+			case ISCSI_CMD_STATE_PROCESSED:
+				TRACE_DBG("cmd %p PROCESSED", cmnd);
+				scst_tgt_cmd_done(cmnd->scst_cmd,
+					SCST_CONTEXT_DIRECT_ATOMIC);
+				break;
+
+			case ISCSI_CMD_STATE_AFTER_PREPROC:
+			{
+				/* It can be for some aborted commands */
+				struct scst_cmd *scst_cmd = cmnd->scst_cmd;
+				TRACE_DBG("cmd %p AFTER_PREPROC", cmnd);
+				cmnd->scst_state = ISCSI_CMD_STATE_RESTARTED;
+				cmnd->scst_cmd = NULL;
+				scst_restart_cmd(scst_cmd,
+					SCST_PREPROCESS_STATUS_ERROR_FATAL,
+					SCST_CONTEXT_THREAD);
+				break;
+			}
+
+			case ISCSI_CMD_STATE_AEN:
+				TRACE_DBG("cmd %p AEN PROCESSED", cmnd);
+				scst_aen_done(cmnd->scst_aen);
+				break;
+
+			case ISCSI_CMD_STATE_OUT_OF_SCST_PRELIM_COMPL:
+				break;
+
+			default:
+				PRINT_CRIT_ERROR("Unexpected cmnd scst state "
+					"%d", cmnd->scst_state);
+				BUG();
+				break;
+			}
+		}
+
+		if (cmnd->own_sg) {
+			TRACE_DBG("own_sg for req %p", cmnd);
+			if (cmnd->sg != &dummy_sg)
+				scst_free(cmnd->sg, cmnd->sg_cnt);
+#ifdef CONFIG_SCST_DEBUG
+			cmnd->own_sg = 0;
+			cmnd->sg = NULL;
+			cmnd->sg_cnt = -1;
+#endif
+		}
+
+		if (cmnd->dec_active_cmnds) {
+			struct iscsi_session *sess = cmnd->conn->session;
+			TRACE_DBG("Decrementing active_cmds (cmd %p, sess %p, "
+				"new value %d)", cmnd, sess,
+				atomic_read(&sess->active_cmds)-1);
+			atomic_dec(&sess->active_cmds);
+#ifdef CONFIG_SCST_EXTRACHECKS
+			if (unlikely(atomic_read(&sess->active_cmds) < 0)) {
+				PRINT_CRIT_ERROR("active_cmds < 0 (%d)!!",
+					atomic_read(&sess->active_cmds));
+				BUG();
+			}
+#endif
+		}
+
+		list_for_each_entry_safe(rsp, t, &cmnd->rsp_cmd_list,
+					rsp_cmd_list_entry) {
+			cmnd_free(rsp);
+		}
+
+		cmnd_free(cmnd);
+	} else {
+		if (cmnd->own_sg) {
+			TRACE_DBG("own_sg for rsp %p", cmnd);
+			if ((cmnd->sg != &dummy_sg) && (cmnd->sg != cmnd->rsp_sg))
+				scst_free(cmnd->sg, cmnd->sg_cnt);
+#ifdef CONFIG_SCST_DEBUG
+			cmnd->own_sg = 0;
+			cmnd->sg = NULL;
+			cmnd->sg_cnt = -1;
+#endif
+		}
+
+		EXTRACHECKS_BUG_ON(cmnd->dec_active_cmnds);
+
+		if (cmnd == cmnd->parent_req->main_rsp) {
+			TRACE_DBG("Finishing main rsp %p (req %p)", cmnd,
+				cmnd->parent_req);
+			cmnd->parent_req->main_rsp = NULL;
+		}
+
+		cmnd_put(cmnd->parent_req);
+		/*
+		 * rsp will be freed on the last parent's put and can already
+		 * be freed!!
+		 */
+	}
+	return;
+}
+
+/*
+ * Corresponding conn may also get destroyed after this function, except only
+ * if it's called from the read thread!
+ *
+ * It can't be called in parallel with iscsi_cmnds_init_write()!
+ */
+void req_cmnd_release_force(struct iscsi_cmnd *req)
+{
+	struct iscsi_cmnd *rsp, *t;
+	struct iscsi_conn *conn = req->conn;
+	LIST_HEAD(cmds_list);
+
+	TRACE_MGMT_DBG("req %p", req);
+
+	BUG_ON(req == conn->read_cmnd);
+
+	spin_lock_bh(&conn->write_list_lock);
+	list_for_each_entry_safe(rsp, t, &conn->write_list, write_list_entry) {
+		if (rsp->parent_req != req)
+			continue;
+
+		cmd_del_from_write_list(rsp);
+
+		list_add_tail(&rsp->write_list_entry, &cmds_list);
+	}
+	spin_unlock_bh(&conn->write_list_lock);
+
+	list_for_each_entry_safe(rsp, t, &cmds_list, write_list_entry) {
+		TRACE_MGMT_DBG("Putting write rsp %p", rsp);
+		list_del(&rsp->write_list_entry);
+		cmnd_put(rsp);
+	}
+
+	/* From now on, nobody should be able to add responses to this list */
+	list_for_each_entry_reverse(rsp, &req->rsp_cmd_list,
+			rsp_cmd_list_entry) {
+		bool r;
+
+		if (rsp->force_cleanup_done)
+			continue;
+
+		rsp->force_cleanup_done = 1;
+
+		if (cmnd_get_check(rsp))
+			continue;
+
+		spin_lock_bh(&conn->write_list_lock);
+		r = rsp->on_write_list || rsp->write_processing_started;
+		spin_unlock_bh(&conn->write_list_lock);
+
+		cmnd_put(rsp);
+
+		if (r)
+			continue;
+
+		/*
+		 * If both on_write_list and write_processing_started not set,
+		 * we can safely put() rsp.
+		 */
+		TRACE_MGMT_DBG("Putting rsp %p", rsp);
+		cmnd_put(rsp);
+	}
+
+	if (req->main_rsp != NULL) {
+		TRACE_MGMT_DBG("Putting main rsp %p", req->main_rsp);
+		cmnd_put(req->main_rsp);
+		req->main_rsp = NULL;
+	}
+
+	req_cmnd_release(req);
+	return;
+}
+
+/*
+ * Corresponding conn may also get destroyed after this function, except only
+ * if it's called from the read thread!
+ */
+static void req_cmnd_release(struct iscsi_cmnd *req)
+{
+	struct iscsi_cmnd *c, *t;
+
+	TRACE_DBG("req %p", req);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	BUG_ON(req->release_called);
+	req->release_called = 1;
+#endif
+
+	if (unlikely(test_bit(ISCSI_CMD_ABORTED, &req->prelim_compl_flags))) {
+		TRACE_MGMT_DBG("Release aborted req cmd %p (scst cmd %p, "
+			"state %d)", req, req->scst_cmd, req->scst_state);
+	}
+
+	BUG_ON(req->parent_req != NULL);
+
+	/*
+	 * We have to remove hashed req from the hash list before sending
+	 * response. Otherwise we can have a race, when for some reason cmd's
+	 * release (and, hence, removal from the hash) is delayed after the
+	 * transmission and initiator sends cmd with the same ITT, hence
+	 * the new command will be erroneously rejected as a duplicate.
+	 */
+	if (unlikely(req->hashed)) {
+		/* It sometimes can happen during errors recovery */
+		cmnd_remove_data_wait_hash(req);
+	}
+
+	if (unlikely(req->main_rsp != NULL)) {
+		TRACE_DBG("Sending main rsp %p", req->main_rsp);
+		iscsi_cmnd_init_write(req->main_rsp, ISCSI_INIT_WRITE_WAKE);
+		req->main_rsp = NULL;
+	}
+
+	list_for_each_entry_safe(c, t, &req->rx_ddigest_cmd_list,
+				rx_ddigest_cmd_list_entry) {
+		cmd_del_from_rx_ddigest_list(c);
+		cmnd_put(c);
+	}
+
+	EXTRACHECKS_BUG_ON(req->pending);
+
+	if (req->dec_active_cmnds) {
+		struct iscsi_session *sess = req->conn->session;
+		TRACE_DBG("Decrementing active_cmds (cmd %p, sess %p, "
+			"new value %d)", req, sess,
+			atomic_read(&sess->active_cmds)-1);
+		atomic_dec(&sess->active_cmds);
+		req->dec_active_cmnds = 0;
+#ifdef CONFIG_SCST_EXTRACHECKS
+		if (unlikely(atomic_read(&sess->active_cmds) < 0)) {
+			PRINT_CRIT_ERROR("active_cmds < 0 (%d)!!",
+				atomic_read(&sess->active_cmds));
+			BUG();
+		}
+#endif
+	}
+
+	cmnd_put(req);
+	return;
+}
+
+/*
+ * Corresponding conn may also get destroyed after this function, except only
+ * if it's called from the read thread!
+ */
+void rsp_cmnd_release(struct iscsi_cmnd *cmnd)
+{
+	TRACE_DBG("%p", cmnd);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	BUG_ON(cmnd->release_called);
+	cmnd->release_called = 1;
+#endif
+
+	EXTRACHECKS_BUG_ON(cmnd->parent_req == NULL);
+
+	cmnd_put(cmnd);
+	return;
+}
+
+static struct iscsi_cmnd *iscsi_alloc_rsp(struct iscsi_cmnd *parent)
+{
+	struct iscsi_cmnd *rsp;
+
+	rsp = cmnd_alloc(parent->conn, parent);
+
+	TRACE_DBG("Adding rsp %p to parent %p", rsp, parent);
+	list_add_tail(&rsp->rsp_cmd_list_entry, &parent->rsp_cmd_list);
+
+	cmnd_get(parent);
+	return rsp;
+}
+
+static inline struct iscsi_cmnd *iscsi_alloc_main_rsp(struct iscsi_cmnd *parent)
+{
+	struct iscsi_cmnd *rsp;
+
+	EXTRACHECKS_BUG_ON(parent->main_rsp != NULL);
+
+	rsp = iscsi_alloc_rsp(parent);
+	parent->main_rsp = rsp;
+	return rsp;
+}
+
+static void iscsi_cmnds_init_write(struct list_head *send, int flags)
+{
+	struct iscsi_cmnd *rsp = list_entry(send->next, struct iscsi_cmnd,
+						write_list_entry);
+	struct iscsi_conn *conn = rsp->conn;
+	struct list_head *pos, *next;
+
+	BUG_ON(list_empty(send));
+
+	if (!(conn->ddigest_type & DIGEST_NONE)) {
+		list_for_each(pos, send) {
+			rsp = list_entry(pos, struct iscsi_cmnd,
+						write_list_entry);
+
+			if (rsp->pdu.datasize != 0) {
+				TRACE_DBG("Doing data digest (%p:%x)", rsp,
+					cmnd_opcode(rsp));
+				digest_tx_data(rsp);
+			}
+		}
+	}
+
+	spin_lock_bh(&conn->write_list_lock);
+	list_for_each_safe(pos, next, send) {
+		rsp = list_entry(pos, struct iscsi_cmnd, write_list_entry);
+
+		TRACE_DBG("%p:%x", rsp, cmnd_opcode(rsp));
+
+		BUG_ON(conn != rsp->conn);
+
+		list_del(&rsp->write_list_entry);
+		cmd_add_on_write_list(conn, rsp);
+	}
+	spin_unlock_bh(&conn->write_list_lock);
+
+	if (flags & ISCSI_INIT_WRITE_WAKE)
+		iscsi_make_conn_wr_active(conn);
+
+	return;
+}
+
+static void iscsi_cmnd_init_write(struct iscsi_cmnd *rsp, int flags)
+{
+	LIST_HEAD(head);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	if (unlikely(rsp->on_write_list)) {
+		PRINT_CRIT_ERROR("cmd already on write list (%x %x %x "
+			"%u %u %d %d", cmnd_itt(rsp),
+			cmnd_opcode(rsp), cmnd_scsicode(rsp),
+			rsp->hdigest, rsp->ddigest,
+			list_empty(&rsp->rsp_cmd_list), rsp->hashed);
+		BUG();
+	}
+#endif
+	list_add_tail(&rsp->write_list_entry, &head);
+	iscsi_cmnds_init_write(&head, flags);
+	return;
+}
+
+static void send_data_rsp(struct iscsi_cmnd *req, u8 status, int send_status)
+{
+	struct iscsi_cmnd *rsp;
+	struct iscsi_scsi_cmd_hdr *req_hdr = cmnd_hdr(req);
+	struct iscsi_data_in_hdr *rsp_hdr;
+	u32 pdusize, expsize, size, offset, sn;
+	LIST_HEAD(send);
+
+	TRACE_DBG("req %p", req);
+
+	pdusize = req->conn->session->sess_params.max_xmit_data_length;
+	expsize = req->read_size;
+	size = min(expsize, (u32)req->bufflen);
+	offset = 0;
+	sn = 0;
+
+	while (1) {
+		rsp = iscsi_alloc_rsp(req);
+		TRACE_DBG("rsp %p", rsp);
+		rsp->sg = req->sg;
+		rsp->sg_cnt = req->sg_cnt;
+		rsp->bufflen = req->bufflen;
+		rsp_hdr = (struct iscsi_data_in_hdr *)&rsp->pdu.bhs;
+
+		rsp_hdr->opcode = ISCSI_OP_SCSI_DATA_IN;
+		rsp_hdr->itt = req_hdr->itt;
+		rsp_hdr->ttt = cpu_to_be32(ISCSI_RESERVED_TAG);
+		rsp_hdr->buffer_offset = cpu_to_be32(offset);
+		rsp_hdr->data_sn = cpu_to_be32(sn);
+
+		if (size <= pdusize) {
+			TRACE_DBG("offset %d, size %d", offset, size);
+			rsp->pdu.datasize = size;
+			if (send_status) {
+				unsigned int scsisize;
+
+				TRACE_DBG("status %x", status);
+
+				EXTRACHECKS_BUG_ON((cmnd_hdr(req)->flags & ISCSI_CMD_WRITE) != 0);
+
+				rsp_hdr->flags = ISCSI_FLG_FINAL | ISCSI_FLG_STATUS;
+				rsp_hdr->cmd_status = status;
+
+				scsisize = req->bufflen;
+				if (scsisize < expsize) {
+					rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_UNDERFLOW;
+					size = expsize - scsisize;
+				} else if (scsisize > expsize) {
+					rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_OVERFLOW;
+					size = scsisize - expsize;
+				} else
+					size = 0;
+				rsp_hdr->residual_count = cpu_to_be32(size);
+			}
+			list_add_tail(&rsp->write_list_entry, &send);
+			break;
+		}
+
+		TRACE_DBG("pdusize %d, offset %d, size %d", pdusize, offset,
+			size);
+
+		rsp->pdu.datasize = pdusize;
+
+		size -= pdusize;
+		offset += pdusize;
+		sn++;
+
+		list_add_tail(&rsp->write_list_entry, &send);
+	}
+	iscsi_cmnds_init_write(&send, 0);
+	return;
+}
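+
+/*
+ * For example, with max_xmit_data_length 8K and a 20K buffer the loop above
+ * builds three Data-In PDUs: 8K at offset 0 (DataSN 0), 8K at offset 8K
+ * (DataSN 1) and 4K at offset 16K (DataSN 2). Only the last PDU may carry
+ * the FINAL/STATUS flags and the residual count.
+ */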
+
+static void iscsi_init_status_rsp(struct iscsi_cmnd *rsp,
+	int status, const u8 *sense_buf, int sense_len, bool bufflen_set)
+{
+	struct iscsi_cmnd *req = rsp->parent_req;
+	struct iscsi_scsi_rsp_hdr *rsp_hdr;
+	struct scatterlist *sg;
+
+	rsp_hdr = (struct iscsi_scsi_rsp_hdr *)&rsp->pdu.bhs;
+	rsp_hdr->opcode = ISCSI_OP_SCSI_RSP;
+	rsp_hdr->flags = ISCSI_FLG_FINAL;
+	rsp_hdr->response = ISCSI_RESPONSE_COMMAND_COMPLETED;
+	rsp_hdr->cmd_status = status;
+	rsp_hdr->itt = cmnd_hdr(req)->itt;
+
+	if (SCST_SENSE_VALID(sense_buf)) {
+		TRACE_DBG("%s", "SENSE VALID");
+
+		sg = rsp->sg = rsp->rsp_sg;
+		rsp->sg_cnt = 2;
+		rsp->own_sg = 1;
+
+		sg_init_table(sg, 2);
+		sg_set_buf(&sg[0], &rsp->sense_hdr, sizeof(rsp->sense_hdr));
+		sg_set_buf(&sg[1], sense_buf, sense_len);
+
+		rsp->sense_hdr.length = cpu_to_be16(sense_len);
+
+		rsp->pdu.datasize = sizeof(rsp->sense_hdr) + sense_len;
+		rsp->bufflen = rsp->pdu.datasize;
+	} else {
+		rsp->pdu.datasize = 0;
+		rsp->bufflen = 0;
+	}
+
+	iscsi_set_resid(rsp, bufflen_set);
+	return;
+}
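+
+/*
+ * Per RFC 3720, sense data in a SCSI Response PDU is prefixed by a 2-byte
+ * big-endian SenseLength, which is exactly what the two-entry sg above
+ * sends: sense_hdr (the length) followed by the sense bytes themselves.
+ */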
+
+static inline struct iscsi_cmnd *create_status_rsp(struct iscsi_cmnd *req,
+	int status, const u8 *sense_buf, int sense_len, bool bufflen_set)
+{
+	struct iscsi_cmnd *rsp;
+
+	rsp = iscsi_alloc_rsp(req);
+	TRACE_DBG("rsp %p", rsp);
+
+	iscsi_init_status_rsp(rsp, status, sense_buf, sense_len, bufflen_set);
+	return rsp;
+}
+
+static struct iscsi_cmnd *create_prelim_status_rsp(struct iscsi_cmnd *req,
+	int status, const u8 *sense_buf, int sense_len)
+{
+	struct iscsi_cmnd *rsp;
+
+	rsp = iscsi_alloc_main_rsp(req);
+	TRACE_DBG("main rsp %p", rsp);
+
+	iscsi_init_status_rsp(rsp, status, sense_buf, sense_len, false);
+	return rsp;
+}
+
+static int iscsi_set_prelim_r2t_len_to_receive(struct iscsi_cmnd *req)
+{
+	struct iscsi_hdr *req_hdr = &req->pdu.bhs;
+	int res = 0;
+
+	if (req_hdr->flags & ISCSI_CMD_FINAL)
+		goto out;
+
+	res = cmnd_insert_data_wait_hash(req);
+	if (res != 0) {
+		/*
+		 * We have to close the connection, because otherwise data
+		 * corruption is possible if we allowed receiving data for
+		 * this request in another request with a duplicated ITT.
+		 */
+		mark_conn_closed(req->conn);
+		goto out;
+	}
+
+	/*
+	 * We need to wait for one or more PDUs. Let's simplify
+	 * other code and pretend we need to receive 1 byte.
+	 * In data_out_start() we will correct it.
+	 */
+	if (req->outstanding_r2t == 0) {
+		req->outstanding_r2t = 1;
+		req_add_to_write_timeout_list(req);
+	}
+	req->r2t_len_to_receive = 1;
+	req->r2t_len_to_send = 0;
+
+	TRACE_DBG("req %p, op %x, outstanding_r2t %d, r2t_len_to_receive %d, "
+		"r2t_len_to_send %d", req, cmnd_opcode(req),
+		req->outstanding_r2t, req->r2t_len_to_receive,
+		req->r2t_len_to_send);
+
+out:
+	return res;
+}
+
+static int create_preliminary_status_rsp(struct iscsi_cmnd *req,
+	int status, const u8 *sense_buf, int sense_len)
+{
+	int res = 0;
+	struct iscsi_scsi_cmd_hdr *req_hdr = cmnd_hdr(req);
+
+	if (req->prelim_compl_flags != 0) {
+		TRACE_MGMT_DBG("req %p already prelim completed", req);
+		goto out;
+	}
+
+	req->scst_state = ISCSI_CMD_STATE_OUT_OF_SCST_PRELIM_COMPL;
+
+	if ((req_hdr->flags & ISCSI_CMD_READ) &&
+	    (req_hdr->flags & ISCSI_CMD_WRITE)) {
+		int sz = cmnd_read_size(req);
+		if (sz > 0)
+			req->read_size = sz;
+	} else if (req_hdr->flags & ISCSI_CMD_READ)
+		req->read_size = be32_to_cpu(req_hdr->data_length);
+
+	create_prelim_status_rsp(req, status, sense_buf, sense_len);
+	res = iscsi_preliminary_complete(req, req, true);
+
+out:
+	return res;
+}
+
+static int set_scst_preliminary_status_rsp(struct iscsi_cmnd *req,
+	bool get_data, int key, int asc, int ascq)
+{
+	int res = 0;
+
+	if (req->scst_cmd == NULL) {
+		/* There must be already error set */
+		goto complete;
+	}
+
+	scst_set_cmd_error(req->scst_cmd, key, asc, ascq);
+
+complete:
+	res = iscsi_preliminary_complete(req, req, get_data);
+	return res;
+}
+
+static int create_reject_rsp(struct iscsi_cmnd *req, int reason, bool get_data)
+{
+	int res = 0;
+	struct iscsi_cmnd *rsp;
+	struct iscsi_reject_hdr *rsp_hdr;
+	struct scatterlist *sg;
+
+	TRACE_MGMT_DBG("Reject: req %p, reason %x", req, reason);
+
+	if (cmnd_opcode(req) == ISCSI_OP_SCSI_CMD) {
+		if (req->scst_cmd == NULL) {
+			/* BUSY status must be already set */
+			struct iscsi_scsi_rsp_hdr *rsp_hdr;
+			rsp_hdr = (struct iscsi_scsi_rsp_hdr *)&req->main_rsp->pdu.bhs;
+			BUG_ON(rsp_hdr->cmd_status == 0);
+			/*
+			 * Let's not send REJECT here. The initiator will retry
+			 * and, hopefully, next time we will not fail allocating
+			 * scst_cmd, so we will then send the REJECT.
+			 */
+			goto out;
+		} else
+			set_scst_preliminary_status_rsp(req, get_data,
+				SCST_LOAD_SENSE(scst_sense_invalid_message));
+	}
+
+	rsp = iscsi_alloc_main_rsp(req);
+	rsp_hdr = (struct iscsi_reject_hdr *)&rsp->pdu.bhs;
+
+	rsp_hdr->opcode = ISCSI_OP_REJECT;
+	rsp_hdr->ffffffff = ISCSI_RESERVED_TAG;
+	rsp_hdr->reason = reason;
+
+	sg = rsp->sg = rsp->rsp_sg;
+	rsp->sg_cnt = 1;
+	rsp->own_sg = 1;
+	sg_init_one(sg, &req->pdu.bhs, sizeof(struct iscsi_hdr));
+	rsp->bufflen = rsp->pdu.datasize = sizeof(struct iscsi_hdr);
+
+	res = iscsi_preliminary_complete(req, req, true);
+
+out:
+	return res;
+}
+
+static inline int iscsi_get_allowed_cmds(struct iscsi_session *sess)
+{
+	int res = max(-1, (int)sess->tgt_params.queued_cmnds -
+				atomic_read(&sess->active_cmds)-1);
+	TRACE_DBG("allowed cmds %d (sess %p, active_cmds %d)", res,
+		sess, atomic_read(&sess->active_cmds));
+	return res;
+}
+
+static u32 cmnd_set_sn(struct iscsi_cmnd *cmnd, int set_stat_sn)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct iscsi_session *sess = conn->session;
+	u32 res;
+
+	spin_lock(&sess->sn_lock);
+
+	if (set_stat_sn)
+		cmnd->pdu.bhs.sn = cpu_to_be32(conn->stat_sn++);
+	cmnd->pdu.bhs.exp_sn = cpu_to_be32(sess->exp_cmd_sn);
+	cmnd->pdu.bhs.max_sn = cpu_to_be32(sess->exp_cmd_sn +
+				 iscsi_get_allowed_cmds(sess));
+
+	res = cpu_to_be32(conn->stat_sn);
+
+	spin_unlock(&sess->sn_lock);
+	return res;
+}
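+
+/*
+ * The advertised window is MaxCmdSN = ExpCmdSN + allowed commands. For
+ * example, with queued_cmnds 32 and active_cmds 10,
+ * iscsi_get_allowed_cmds() returns 21; once active_cmds reaches
+ * queued_cmnds it returns -1 and the window temporarily closes.
+ */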
+
+/* Called under sn_lock */
+static void __update_stat_sn(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	u32 exp_stat_sn;
+
+	cmnd->pdu.bhs.exp_sn = exp_stat_sn = be32_to_cpu(cmnd->pdu.bhs.exp_sn);
+	TRACE_DBG("%x,%x", cmnd_opcode(cmnd), exp_stat_sn);
+	if ((int)(exp_stat_sn - conn->exp_stat_sn) > 0 &&
+	    (int)(exp_stat_sn - conn->stat_sn) <= 0) {
+		/* free pdu resources */
+		cmnd->conn->exp_stat_sn = exp_stat_sn;
+	}
+	return;
+}
+
+static inline void update_stat_sn(struct iscsi_cmnd *cmnd)
+{
+	spin_lock(&cmnd->conn->session->sn_lock);
+	__update_stat_sn(cmnd);
+	spin_unlock(&cmnd->conn->session->sn_lock);
+	return;
+}
+
+/* Called under sn_lock */
+static int check_cmd_sn(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_session *session = cmnd->conn->session;
+	u32 cmd_sn;
+
+	cmnd->pdu.bhs.sn = cmd_sn = be32_to_cpu(cmnd->pdu.bhs.sn);
+	TRACE_DBG("%d(%d)", cmd_sn, session->exp_cmd_sn);
+	if (likely((s32)(cmd_sn - session->exp_cmd_sn) >= 0))
+		return 0;
+	PRINT_ERROR("sequence error (%x,%x)", cmd_sn, session->exp_cmd_sn);
+	return -ISCSI_REASON_PROTOCOL_ERROR;
+}
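+
+/*
+ * The (s32) cast above implements serial number arithmetic (RFC 1982), so
+ * the check survives 32-bit wraparound: e.g. with exp_cmd_sn 0xffffffff an
+ * arriving cmd_sn of 0 gives (s32)(0 - 0xffffffff) == 1 >= 0 and is
+ * correctly accepted.
+ */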
+
+static struct iscsi_cmnd *cmnd_find_itt_get(struct iscsi_conn *conn, u32 itt)
+{
+	struct iscsi_cmnd *cmnd, *found_cmnd = NULL;
+
+	spin_lock_bh(&conn->cmd_list_lock);
+	list_for_each_entry(cmnd, &conn->cmd_list, cmd_list_entry) {
+		if ((cmnd->pdu.bhs.itt == itt) && !cmnd_get_check(cmnd)) {
+			found_cmnd = cmnd;
+			break;
+		}
+	}
+	spin_unlock_bh(&conn->cmd_list_lock);
+
+	return found_cmnd;
+}
+
+/**
+ ** We use the ITT hash only to find original request PDU for subsequent
+ ** Data-Out PDUs.
+ **/
+
+/* Must be called under cmnd_data_wait_hash_lock */
+static struct iscsi_cmnd *__cmnd_find_data_wait_hash(struct iscsi_conn *conn,
+	u32 itt)
+{
+	struct list_head *head;
+	struct iscsi_cmnd *cmnd;
+
+	head = &conn->session->cmnd_data_wait_hash[cmnd_hashfn(itt)];
+
+	list_for_each_entry(cmnd, head, hash_list_entry) {
+		if (cmnd->pdu.bhs.itt == itt)
+			return cmnd;
+	}
+	return NULL;
+}
+
+static struct iscsi_cmnd *cmnd_find_data_wait_hash(struct iscsi_conn *conn,
+	u32 itt)
+{
+	struct iscsi_cmnd *res;
+	struct iscsi_session *session = conn->session;
+
+	spin_lock(&session->cmnd_data_wait_hash_lock);
+	res = __cmnd_find_data_wait_hash(conn, itt);
+	spin_unlock(&session->cmnd_data_wait_hash_lock);
+
+	return res;
+}
+
+static inline u32 get_next_ttt(struct iscsi_conn *conn)
+{
+	u32 ttt;
+	struct iscsi_session *session = conn->session;
+
+	/* Not compatible with MC/S! */
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	if (unlikely(session->next_ttt == ISCSI_RESERVED_TAG))
+		session->next_ttt++;
+	ttt = session->next_ttt++;
+
+	return ttt;
+}
+
+static int cmnd_insert_data_wait_hash(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_session *session = cmnd->conn->session;
+	struct iscsi_cmnd *tmp;
+	struct list_head *head;
+	int err = 0;
+	u32 itt = cmnd->pdu.bhs.itt;
+
+	if (unlikely(cmnd->hashed)) {
+		/* It can be for preliminary completed commands */
+		goto out;
+	}
+
+	/*
+	 * We don't need TTT, because ITT/buffer_offset pair is sufficient
+	 * to find out the original request and buffer for Data-Out PDUs, but
+	 * crazy iSCSI spec requires us to send this superfluous field in
+	 * R2T PDUs and some initiators may rely on it.
+	 */
+	cmnd->target_task_tag = get_next_ttt(cmnd->conn);
+
+	TRACE_DBG("%p:%x", cmnd, itt);
+	if (unlikely(itt == ISCSI_RESERVED_TAG)) {
+		PRINT_ERROR("%s", "ITT is RESERVED_TAG");
+		PRINT_BUFFER("Incorrect BHS", &cmnd->pdu.bhs,
+			sizeof(cmnd->pdu.bhs));
+		err = -ISCSI_REASON_PROTOCOL_ERROR;
+		goto out;
+	}
+
+	spin_lock(&session->cmnd_data_wait_hash_lock);
+
+	head = &session->cmnd_data_wait_hash[cmnd_hashfn(itt)];
+
+	tmp = __cmnd_find_data_wait_hash(cmnd->conn, itt);
+	if (likely(!tmp)) {
+		TRACE_DBG("Adding cmnd %p to the hash (ITT %x)", cmnd,
+			cmnd_itt(cmnd));
+		list_add_tail(&cmnd->hash_list_entry, head);
+		cmnd->hashed = 1;
+	} else {
+		PRINT_ERROR("Task %x in progress, cmnd %p", itt, cmnd);
+		err = -ISCSI_REASON_TASK_IN_PROGRESS;
+	}
+
+	spin_unlock(&session->cmnd_data_wait_hash_lock);
+
+out:
+	return err;
+}
+
+static void cmnd_remove_data_wait_hash(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_session *session = cmnd->conn->session;
+	struct iscsi_cmnd *tmp;
+
+	spin_lock(&session->cmnd_data_wait_hash_lock);
+
+	tmp = __cmnd_find_data_wait_hash(cmnd->conn, cmnd->pdu.bhs.itt);
+
+	if (likely(tmp && tmp == cmnd)) {
+		TRACE_DBG("Deleting cmnd %p from the hash (ITT %x)", cmnd,
+			cmnd_itt(cmnd));
+		list_del(&cmnd->hash_list_entry);
+		cmnd->hashed = 0;
+	} else
+		PRINT_ERROR("%p:%x not found", cmnd, cmnd_itt(cmnd));
+
+	spin_unlock(&session->cmnd_data_wait_hash_lock);
+
+	return;
+}
+
+static void cmnd_prepare_get_rejected_immed_data(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct scatterlist *sg = cmnd->sg;
+	char __user *addr;
+	u32 size;
+	unsigned int i;
+
+	TRACE_DBG_FLAG(iscsi_get_flow_ctrl_or_mgmt_dbg_log_flag(cmnd),
+		"Skipping (cmnd %p, ITT %x, op %x, cmd op %x, "
+		"datasize %u, scst_cmd %p, scst state %d)", cmnd,
+		cmnd_itt(cmnd), cmnd_opcode(cmnd), cmnd_hdr(cmnd)->scb[0],
+		cmnd->pdu.datasize, cmnd->scst_cmd, cmnd->scst_state);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	size = cmnd->pdu.datasize;
+	if (!size)
+		goto out;
+
+	/* We already checked pdu.datasize in check_segment_length() */
+
+	if (sg == NULL) {
+		/*
+		 * Concurrent accesses to dummy_page in dummy_sg are safe,
+		 * since the data will only be read and then discarded.
+		 */
+		sg = cmnd->sg = &dummy_sg;
+		cmnd->bufflen = PAGE_SIZE;
+		cmnd->own_sg = 1;
+	}
+
+	addr = (char __force __user *)(page_address(sg_page(&sg[0])));
+	BUG_ON(addr == NULL);
+	conn->read_size = size;
+	for (i = 0; size > PAGE_SIZE; i++, size -= cmnd->bufflen) {
+		/* We already checked pdu.datasize in check_segment_length() */
+		BUG_ON(i >= ISCSI_CONN_IOV_MAX);
+		conn->read_iov[i].iov_base = addr;
+		conn->read_iov[i].iov_len = cmnd->bufflen;
+	}
+	conn->read_iov[i].iov_base = addr;
+	conn->read_iov[i].iov_len = size;
+	conn->read_msg.msg_iov = conn->read_iov;
+	conn->read_msg.msg_iovlen = ++i;
+
+out:
+	return;
+}
+
+static void iscsi_set_resid(struct iscsi_cmnd *rsp, bool bufflen_set)
+{
+	struct iscsi_cmnd *req = rsp->parent_req;
+	struct iscsi_scsi_cmd_hdr *req_hdr = cmnd_hdr(req);
+	struct iscsi_scsi_rsp_hdr *rsp_hdr;
+	int resid, resp_len, in_resp_len;
+
+	if ((req_hdr->flags & ISCSI_CMD_READ) &&
+	    (req_hdr->flags & ISCSI_CMD_WRITE)) {
+		rsp_hdr = (struct iscsi_scsi_rsp_hdr *)&rsp->pdu.bhs;
+
+		if (bufflen_set) {
+			resp_len = req->bufflen;
+			if (req->scst_cmd != NULL)
+				in_resp_len = scst_cmd_get_in_bufflen(req->scst_cmd);
+			else
+				in_resp_len = 0;
+		} else {
+			resp_len = 0;
+			in_resp_len = 0;
+		}
+
+		resid = be32_to_cpu(req_hdr->data_length) - in_resp_len;
+		if (resid > 0) {
+			rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_UNDERFLOW;
+			rsp_hdr->residual_count = cpu_to_be32(resid);
+		} else if (resid < 0) {
+			resid = -resid;
+			rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_OVERFLOW;
+			rsp_hdr->residual_count = cpu_to_be32(resid);
+		}
+
+		resid = req->read_size - resp_len;
+		if (resid > 0) {
+			rsp_hdr->flags |= ISCSI_FLG_BIRESIDUAL_UNDERFLOW;
+			rsp_hdr->bi_residual_count = cpu_to_be32(resid);
+		} else if (resid < 0) {
+			resid = -resid;
+			rsp_hdr->flags |= ISCSI_FLG_BIRESIDUAL_OVERFLOW;
+			rsp_hdr->bi_residual_count = cpu_to_be32(resid);
+		}
+	} else {
+		if (bufflen_set)
+			resp_len = req->bufflen;
+		else
+			resp_len = 0;
+
+		resid = req->read_size - resp_len;
+		if (resid > 0) {
+			rsp_hdr = (struct iscsi_scsi_rsp_hdr *)&rsp->pdu.bhs;
+			rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_UNDERFLOW;
+			rsp_hdr->residual_count = cpu_to_be32(resid);
+		} else if (resid < 0) {
+			rsp_hdr = (struct iscsi_scsi_rsp_hdr *)&rsp->pdu.bhs;
+			resid = -resid;
+			rsp_hdr->flags |= ISCSI_FLG_RESIDUAL_OVERFLOW;
+			rsp_hdr->residual_count = cpu_to_be32(resid);
+		}
+	}
+	return;
+}
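+
+/*
+ * Example for the non-bidirectional branch above: a READ with expected
+ * transfer length 4096 (read_size) for which only 512 bytes of data exist
+ * (bufflen) yields resid 3584, so the response carries
+ * ISCSI_FLG_RESIDUAL_UNDERFLOW with residual_count 3584.
+ */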
+
+static int iscsi_preliminary_complete(struct iscsi_cmnd *req,
+	struct iscsi_cmnd *orig_req, bool get_data)
+{
+	int res = 0;
+	bool set_r2t_len;
+
+#ifdef CONFIG_SCST_DEBUG
+	{
+		struct iscsi_hdr *req_hdr = &req->pdu.bhs;
+		TRACE_DBG_FLAG(iscsi_get_flow_ctrl_or_mgmt_dbg_log_flag(orig_req),
+			"Prelim completed req %p, orig_req %p (FINAL %x, "
+			"outstanding_r2t %d)", req, orig_req,
+			(req_hdr->flags & ISCSI_CMD_FINAL),
+			orig_req->outstanding_r2t);
+	}
+#endif
+
+	iscsi_extracheck_is_rd_thread(req->conn);
+	BUG_ON(req->parent_req != NULL);
+
+	if (test_bit(ISCSI_CMD_PRELIM_COMPLETED, &req->prelim_compl_flags)) {
+		TRACE_MGMT_DBG("req %p already prelim completed", req);
+		/* To not try to get data twice */
+		get_data = false;
+	}
+
+	set_r2t_len = !req->hashed &&
+		      (cmnd_opcode(req) == ISCSI_OP_SCSI_CMD) &&
+		      !test_bit(ISCSI_CMD_PRELIM_COMPLETED,
+				&orig_req->prelim_compl_flags);
+	set_bit(ISCSI_CMD_PRELIM_COMPLETED, &orig_req->prelim_compl_flags);
+
+	TRACE_DBG("get_data %d, set_r2t_len %d", get_data, set_r2t_len);
+
+	if (get_data)
+		cmnd_prepare_get_rejected_immed_data(req);
+
+	if (set_r2t_len)
+		res = iscsi_set_prelim_r2t_len_to_receive(orig_req);
+	return res;
+}
+
+static int cmnd_prepare_recv_pdu(struct iscsi_conn *conn,
+	struct iscsi_cmnd *cmd,	u32 offset, u32 size)
+{
+	struct scatterlist *sg = cmd->sg;
+	unsigned int bufflen = cmd->bufflen;
+	unsigned int idx, i;
+	char __user *addr;
+	int res = 0;
+
+	TRACE_DBG("%p %u,%u", cmd->sg, offset, size);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	if (unlikely((offset >= bufflen) ||
+		     (offset + size > bufflen))) {
+		PRINT_ERROR("Wrong ltn (%u %u %u)", offset, size, bufflen);
+		mark_conn_closed(conn);
+		res = -EIO;
+		goto out;
+	}
+
+	offset += sg[0].offset;
+	idx = offset >> PAGE_SHIFT;
+	offset &= ~PAGE_MASK;
+
+	conn->read_msg.msg_iov = conn->read_iov;
+	conn->read_size = size;
+
+	i = 0;
+	while (1) {
+		addr = (char __force __user *)(page_address(sg_page(&sg[idx])));
+		BUG_ON(addr == NULL);
+		conn->read_iov[i].iov_base = addr + offset;
+		if (offset + size <= PAGE_SIZE) {
+			TRACE_DBG("idx=%d, offset=%u, size=%d, addr=%p",
+				idx, offset, size, addr);
+			conn->read_iov[i].iov_len = size;
+			conn->read_msg.msg_iovlen = ++i;
+			break;
+		}
+		conn->read_iov[i].iov_len = PAGE_SIZE - offset;
+		TRACE_DBG("idx=%d, offset=%u, size=%d, iov_len=%zd, addr=%p",
+			idx, offset, size, conn->read_iov[i].iov_len, addr);
+		size -= conn->read_iov[i].iov_len;
+		if (unlikely(++i >= ISCSI_CONN_IOV_MAX)) {
+			PRINT_ERROR("Initiator %s violated negotiated "
+				"parameters by sending too much data (size "
+				"left %d)", conn->session->initiator_name,
+				size);
+			mark_conn_closed(conn);
+			res = -EINVAL;
+			break;
+		}
+		idx++;
+		offset = sg[idx].offset;
+	}
+	TRACE_DBG("msg_iov=%p, msg_iovlen=%zd",
+		conn->read_msg.msg_iov, conn->read_msg.msg_iovlen);
+
+out:
+	return res;
+}
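+
+/*
+ * For example (assuming PAGE_SIZE 4096 and sg[0].offset 0): a Data-Out at
+ * buffer offset 5000 gives idx 1 and in-page offset 904, so receiving
+ * starts 904 bytes into the second sg page and continues page by page.
+ */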
+
+static void send_r2t(struct iscsi_cmnd *req)
+{
+	struct iscsi_session *sess = req->conn->session;
+	struct iscsi_cmnd *rsp;
+	struct iscsi_r2t_hdr *rsp_hdr;
+	u32 offset, burst;
+	LIST_HEAD(send);
+
+	EXTRACHECKS_BUG_ON(req->r2t_len_to_send == 0);
+
+	/*
+	 * There is no race with data_out_start() and conn_abort(), since
+	 * all functions called from single read thread
+	 */
+	iscsi_extracheck_is_rd_thread(req->conn);
+
+	/*
+	 * We don't need to check for PRELIM_COMPLETED here, because for such
+	 * commands we set r2t_len_to_send = 0, hence made sure we won't
+	 * called here.
+	 */
+
+	EXTRACHECKS_BUG_ON(req->outstanding_r2t >
+			   sess->sess_params.max_outstanding_r2t);
+
+	if (req->outstanding_r2t == sess->sess_params.max_outstanding_r2t)
+		goto out;
+
+	burst = sess->sess_params.max_burst_length;
+	offset = be32_to_cpu(cmnd_hdr(req)->data_length) -
+			req->r2t_len_to_send;
+
+	do {
+		rsp = iscsi_alloc_rsp(req);
+		rsp->pdu.bhs.ttt = req->target_task_tag;
+		rsp_hdr = (struct iscsi_r2t_hdr *)&rsp->pdu.bhs;
+		rsp_hdr->opcode = ISCSI_OP_R2T;
+		rsp_hdr->flags = ISCSI_FLG_FINAL;
+		rsp_hdr->lun = cmnd_hdr(req)->lun;
+		rsp_hdr->itt = cmnd_hdr(req)->itt;
+		rsp_hdr->r2t_sn = cpu_to_be32(req->r2t_sn++);
+		rsp_hdr->buffer_offset = cpu_to_be32(offset);
+		if (req->r2t_len_to_send > burst) {
+			rsp_hdr->data_length = cpu_to_be32(burst);
+			req->r2t_len_to_send -= burst;
+			offset += burst;
+		} else {
+			rsp_hdr->data_length = cpu_to_be32(req->r2t_len_to_send);
+			req->r2t_len_to_send = 0;
+		}
+
+		TRACE_WRITE("req %p, data_length %u, buffer_offset %u, "
+			"r2t_sn %u, outstanding_r2t %u", req,
+			be32_to_cpu(rsp_hdr->data_length),
+			be32_to_cpu(rsp_hdr->buffer_offset),
+			be32_to_cpu(rsp_hdr->r2t_sn), req->outstanding_r2t);
+
+		list_add_tail(&rsp->write_list_entry, &send);
+		req->outstanding_r2t++;
+
+	} while ((req->outstanding_r2t < sess->sess_params.max_outstanding_r2t) &&
+		 (req->r2t_len_to_send != 0));
+
+	iscsi_cmnds_init_write(&send, ISCSI_INIT_WRITE_WAKE);
+
+out:
+	return;
+}
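+
+/*
+ * For example, with r2t_len_to_send 1M, max_burst_length 256K and
+ * max_outstanding_r2t 2, the loop above issues two 256K R2Ts now; the
+ * remaining 512K is requested by later send_r2t() calls as Data-Out
+ * sequences complete and outstanding_r2t drops.
+ */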
+
+static int iscsi_pre_exec(struct scst_cmd *scst_cmd)
+{
+	int res = SCST_PREPROCESS_STATUS_SUCCESS;
+	struct iscsi_cmnd *req = (struct iscsi_cmnd *)
+		scst_cmd_get_tgt_priv(scst_cmd);
+	struct iscsi_cmnd *c, *t;
+
+	EXTRACHECKS_BUG_ON(scst_cmd_atomic(scst_cmd));
+
+	/* If data digest isn't used this list will be empty */
+	list_for_each_entry_safe(c, t, &req->rx_ddigest_cmd_list,
+				rx_ddigest_cmd_list_entry) {
+		TRACE_DBG("Checking digest of RX ddigest cmd %p", c);
+		if (digest_rx_data(c) != 0) {
+			scst_set_cmd_error(scst_cmd,
+				SCST_LOAD_SENSE(iscsi_sense_crc_error));
+			res = SCST_PREPROCESS_STATUS_ERROR_SENSE_SET;
+			/*
+			 * The rest of rx_ddigest_cmd_list will be freed
+			 * in req_cmnd_release()
+			 */
+			goto out;
+		}
+		cmd_del_from_rx_ddigest_list(c);
+		cmnd_put(c);
+	}
+
+out:
+	return res;
+}
+
+static int nop_out_start(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct iscsi_hdr *req_hdr = &cmnd->pdu.bhs;
+	u32 size, tmp;
+	int i, err = 0;
+
+	TRACE_DBG("%p", cmnd);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	if (!(req_hdr->flags & ISCSI_FLG_FINAL)) {
+		PRINT_ERROR("%s", "Initiator sent Nop-Out with not a single "
+			"PDU");
+		err = -ISCSI_REASON_PROTOCOL_ERROR;
+		goto out;
+	}
+
+	if (cmnd_itt(cmnd) == cpu_to_be32(ISCSI_RESERVED_TAG)) {
+		if (unlikely(!(cmnd->pdu.bhs.opcode & ISCSI_OP_IMMEDIATE)))
+			PRINT_ERROR("%s", "Initiator sent RESERVED tag for "
+				"non-immediate Nop-Out command");
+	}
+
+	spin_lock(&conn->session->sn_lock);
+	__update_stat_sn(cmnd);
+	err = check_cmd_sn(cmnd);
+	spin_unlock(&conn->session->sn_lock);
+	if (unlikely(err))
+		goto out;
+
+	size = cmnd->pdu.datasize;
+
+	if (size) {
+		conn->read_msg.msg_iov = conn->read_iov;
+		if (cmnd->pdu.bhs.itt != cpu_to_be32(ISCSI_RESERVED_TAG)) {
+			struct scatterlist *sg;
+
+			cmnd->sg = sg = scst_alloc(size, GFP_KERNEL,
+						&cmnd->sg_cnt);
+			if (sg == NULL) {
+				TRACE(TRACE_OUT_OF_MEM, "Allocating buffer for"
+				      " %d Nop-Out payload failed", size);
+				err = -ISCSI_REASON_OUT_OF_RESOURCES;
+				goto out;
+			}
+
+			/* We already checked it in check_segment_length() */
+			BUG_ON(cmnd->sg_cnt > (signed)ISCSI_CONN_IOV_MAX);
+
+			cmnd->own_sg = 1;
+			cmnd->bufflen = size;
+
+			for (i = 0; i < cmnd->sg_cnt; i++) {
+				conn->read_iov[i].iov_base =
+					(void __force __user *)(page_address(sg_page(&sg[i])));
+				tmp = min_t(u32, size, PAGE_SIZE);
+				conn->read_iov[i].iov_len = tmp;
+				conn->read_size += tmp;
+				size -= tmp;
+			}
+			BUG_ON(size != 0);
+		} else {
+			/*
+			 * Concurrent accesses to dummy_page are safe, since
+			 * for ISCSI_RESERVED_TAG the data is only read and
+			 * then discarded.
+			 */
+			for (i = 0; i < (signed)ISCSI_CONN_IOV_MAX; i++) {
+				conn->read_iov[i].iov_base =
+					(void __force __user *)(page_address(dummy_page));
+				tmp = min_t(u32, size, PAGE_SIZE);
+				conn->read_iov[i].iov_len = tmp;
+				conn->read_size += tmp;
+				size -= tmp;
+			}
+
+			/* We already checked size in check_segment_length() */
+			BUG_ON(size != 0);
+		}
+
+		conn->read_msg.msg_iovlen = i;
+		TRACE_DBG("msg_iov=%p, msg_iovlen=%zd", conn->read_msg.msg_iov,
+			conn->read_msg.msg_iovlen);
+	}
+
+out:
+	return err;
+}
+
+int cmnd_rx_continue(struct iscsi_cmnd *req)
+{
+	struct iscsi_conn *conn = req->conn;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_scsi_cmd_hdr *req_hdr = cmnd_hdr(req);
+	struct scst_cmd *scst_cmd = req->scst_cmd;
+	scst_data_direction dir;
+	bool unsolicited_data_expected = false;
+	int res = 0;
+
+	TRACE_DBG("scsi command: %x", req_hdr->scb[0]);
+
+	EXTRACHECKS_BUG_ON(req->scst_state != ISCSI_CMD_STATE_AFTER_PREPROC);
+
+	dir = scst_cmd_get_data_direction(scst_cmd);
+
+	/*
+	 * Check for preliminary completion here to save R2Ts. For TASK QUEUE
+	 * FULL statuses that might be a big performance win.
+	 */
+	if (unlikely(scst_cmd_prelim_completed(scst_cmd) ||
+	    unlikely(req->prelim_compl_flags != 0))) {
+		/*
+		 * If necessary, ISCSI_CMD_ABORTED will be set by
+		 * iscsi_xmit_response().
+		 */
+		res = iscsi_preliminary_complete(req, req, true);
+		goto trace;
+	}
+
+	/* For prelim completed commands sg&K can be already set! */
+
+	if (dir != SCST_DATA_BIDI) {
+		req->sg = scst_cmd_get_sg(scst_cmd);
+		req->sg_cnt = scst_cmd_get_sg_cnt(scst_cmd);
+		req->bufflen = scst_cmd_get_bufflen(scst_cmd);
+	} else {
+		req->sg = scst_cmd_get_in_sg(scst_cmd);
+		req->sg_cnt = scst_cmd_get_in_sg_cnt(scst_cmd);
+		req->bufflen = scst_cmd_get_in_bufflen(scst_cmd);
+	}
+
+	if (dir & SCST_DATA_WRITE) {
+		unsolicited_data_expected = !(req_hdr->flags & ISCSI_CMD_FINAL);
+
+		if (unlikely(session->sess_params.initial_r2t &&
+		    unsolicited_data_expected)) {
+			PRINT_ERROR("Initiator %s violated negotiated "
+				"parameters: initial R2T is required (ITT %x, "
+				"op  %x)", session->initiator_name,
+				cmnd_itt(req), req_hdr->scb[0]);
+			goto out_close;
+		}
+
+		if (unlikely(!session->sess_params.immediate_data &&
+		    req->pdu.datasize)) {
+			PRINT_ERROR("Initiator %s violated negotiated "
+				"parameters: forbidden immediate data sent "
+				"(ITT %x, op  %x)", session->initiator_name,
+				cmnd_itt(req), req_hdr->scb[0]);
+			goto out_close;
+		}
+
+		if (unlikely(session->sess_params.first_burst_length < req->pdu.datasize)) {
+			PRINT_ERROR("Initiator %s violated negotiated "
+				"parameters: immediate data len (%d) > "
+				"first_burst_length (%d) (ITT %x, op  %x)",
+				session->initiator_name,
+				req->pdu.datasize,
+				session->sess_params.first_burst_length,
+				cmnd_itt(req), req_hdr->scb[0]);
+			goto out_close;
+		}
+
+		req->r2t_len_to_receive = be32_to_cpu(req_hdr->data_length) -
+					  req->pdu.datasize;
+
+		if (unlikely(req->r2t_len_to_receive > req->bufflen)) {
+			PRINT_ERROR("req->r2t_len_to_receive %d > req->bufflen "
+				"%d", req->r2t_len_to_receive, req->bufflen);
+			goto out_close;
+		}
+
+		res = cmnd_insert_data_wait_hash(req);
+		if (unlikely(res != 0)) {
+			/*
+			 * We have to close the connection, because otherwise
+			 * data corruption is possible if we allowed receiving
+			 * data for this request in another request with a
+			 * duplicated ITT.
+			 */
+			goto out_close;
+		}
+
+		if (unsolicited_data_expected) {
+			req->outstanding_r2t = 1;
+			req->r2t_len_to_send = req->r2t_len_to_receive -
+				min_t(unsigned int,
+				      session->sess_params.first_burst_length -
+						req->pdu.datasize,
+				      req->r2t_len_to_receive);
+		} else
+			req->r2t_len_to_send = req->r2t_len_to_receive;
+
+		req_add_to_write_timeout_list(req);
+
+		if (req->pdu.datasize) {
+			res = cmnd_prepare_recv_pdu(conn, req, 0, req->pdu.datasize);
+			/* For performance better to send R2Ts ASAP */
+			if (likely(res == 0) && (req->r2t_len_to_send != 0))
+				send_r2t(req);
+		}
+	} else {
+		if (unlikely(!(req_hdr->flags & ISCSI_CMD_FINAL) ||
+			     req->pdu.datasize)) {
+			PRINT_ERROR("Unexpected unsolicited data (ITT %x "
+				"CDB %x", cmnd_itt(req), req_hdr->scb[0]);
+			set_scst_preliminary_status_rsp(req, true,
+				SCST_LOAD_SENSE(iscsi_sense_unexpected_unsolicited_data));
+		}
+	}
+
+trace:
+	TRACE_DBG("req=%p, dir=%d, unsolicited_data_expected=%d, "
+		"r2t_len_to_receive=%d, r2t_len_to_send=%d, bufflen=%d, "
+		"own_sg %d", req, dir, unsolicited_data_expected,
+		req->r2t_len_to_receive, req->r2t_len_to_send, req->bufflen,
+		req->own_sg);
+
+out:
+	return res;
+
+out_close:
+	mark_conn_closed(conn);
+	res = -EINVAL;
+	goto out;
+}
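+
+/*
+ * Unsolicited data example for the WRITE branch above: EDTL 64K with 8K of
+ * immediate data and first_burst_length 16K gives r2t_len_to_receive 56K.
+ * The initiator may still send 8K unsolicited (first burst minus immediate
+ * data), so r2t_len_to_send = 56K - min(16K - 8K, 56K) = 48K via R2Ts.
+ */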
+
+static int scsi_cmnd_start(struct iscsi_cmnd *req)
+{
+	struct iscsi_conn *conn = req->conn;
+	struct iscsi_session *session = conn->session;
+	struct iscsi_scsi_cmd_hdr *req_hdr = cmnd_hdr(req);
+	struct scst_cmd *scst_cmd;
+	scst_data_direction dir;
+	struct iscsi_ahs_hdr *ahdr;
+	int res = 0;
+
+	TRACE_DBG("scsi command: %x", req_hdr->scb[0]);
+
+	TRACE_DBG("Incrementing active_cmds (cmd %p, sess %p, "
+		"new value %d)", req, session,
+		atomic_read(&session->active_cmds)+1);
+	atomic_inc(&session->active_cmds);
+	req->dec_active_cmnds = 1;
+
+	scst_cmd = scst_rx_cmd(session->scst_sess,
+		(uint8_t *)&req_hdr->lun, sizeof(req_hdr->lun),
+		req_hdr->scb, sizeof(req_hdr->scb), SCST_NON_ATOMIC);
+	if (scst_cmd == NULL) {
+		res = create_preliminary_status_rsp(req, SAM_STAT_BUSY,
+			NULL, 0);
+		goto out;
+	}
+
+	req->scst_cmd = scst_cmd;
+	scst_cmd_set_tag(scst_cmd, req_hdr->itt);
+	scst_cmd_set_tgt_priv(scst_cmd, req);
+
+	if ((req_hdr->flags & ISCSI_CMD_READ) &&
+	    (req_hdr->flags & ISCSI_CMD_WRITE)) {
+		int sz = cmnd_read_size(req);
+		if (unlikely(sz < 0)) {
+			PRINT_ERROR("%s", "BIDI data transfer, but initiator "
+				"not supplied Bidirectional Read Expected Data "
+				"Transfer Length AHS");
+			set_scst_preliminary_status_rsp(req, true,
+			   SCST_LOAD_SENSE(scst_sense_parameter_value_invalid));
+		} else {
+			req->read_size = sz;
+			dir = SCST_DATA_BIDI;
+			scst_cmd_set_expected(scst_cmd, dir, sz);
+			scst_cmd_set_expected_in_transfer_len(scst_cmd,
+				be32_to_cpu(req_hdr->data_length));
+			scst_cmd_set_tgt_need_alloc_data_buf(scst_cmd);
+		}
+	} else if (req_hdr->flags & ISCSI_CMD_READ) {
+		req->read_size = be32_to_cpu(req_hdr->data_length);
+		dir = SCST_DATA_READ;
+		scst_cmd_set_expected(scst_cmd, dir, req->read_size);
+		scst_cmd_set_tgt_need_alloc_data_buf(scst_cmd);
+	} else if (req_hdr->flags & ISCSI_CMD_WRITE) {
+		dir = SCST_DATA_WRITE;
+		scst_cmd_set_expected(scst_cmd, dir,
+			be32_to_cpu(req_hdr->data_length));
+	} else {
+		dir = SCST_DATA_NONE;
+		scst_cmd_set_expected(scst_cmd, dir, 0);
+	}
+
+	switch (req_hdr->flags & ISCSI_CMD_ATTR_MASK) {
+	case ISCSI_CMD_SIMPLE:
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_SIMPLE);
+		break;
+	case ISCSI_CMD_HEAD_OF_QUEUE:
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_HEAD_OF_QUEUE);
+		break;
+	case ISCSI_CMD_ORDERED:
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_ORDERED);
+		break;
+	case ISCSI_CMD_ACA:
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_ACA);
+		break;
+	case ISCSI_CMD_UNTAGGED:
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_UNTAGGED);
+		break;
+	default:
+		PRINT_ERROR("Unknown task code %x, use ORDERED instead",
+			req_hdr->flags & ISCSI_CMD_ATTR_MASK);
+		scst_cmd_set_queue_type(scst_cmd, SCST_CMD_QUEUE_ORDERED);
+		break;
+	}
+
+	/* cmd_sn is already in CPU format converted in check_cmd_sn() */
+	scst_cmd_set_tgt_sn(scst_cmd, req_hdr->cmd_sn);
+
+	ahdr = (struct iscsi_ahs_hdr *)req->pdu.ahs;
+	if (ahdr != NULL) {
+		uint8_t *p = (uint8_t *)ahdr;
+		unsigned int size = 0;
+		do {
+			int s;
+
+			ahdr = (struct iscsi_ahs_hdr *)p;
+
+			if (ahdr->ahstype == ISCSI_AHSTYPE_CDB) {
+				struct iscsi_cdb_ahdr *eca =
+					(struct iscsi_cdb_ahdr *)ahdr;
+				scst_cmd_set_ext_cdb(scst_cmd, eca->cdb,
+					be16_to_cpu(ahdr->ahslength) - 1);
+				break;
+			}
+			s = 3 + be16_to_cpu(ahdr->ahslength);
+			s = (s + 3) & -4;
+			size += s;
+			p += s;
+		} while (size < req->pdu.ahssize);
+	}
+
+	TRACE_DBG("START Command (itt %x, queue_type %d)",
+		req_hdr->itt, scst_cmd_get_queue_type(scst_cmd));
+	req->scst_state = ISCSI_CMD_STATE_RX_CMD;
+	conn->rx_task = current;
+	scst_cmd_init_stage1_done(scst_cmd, SCST_CONTEXT_DIRECT, 0);
+
+	if (req->scst_state != ISCSI_CMD_STATE_RX_CMD)
+		res = cmnd_rx_continue(req);
+	else {
+		TRACE_DBG("Delaying req %p post processing (scst_state %d)",
+			req, req->scst_state);
+		res = 1;
+	}
+
+out:
+	return res;
+}
+
+static int data_out_start(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct iscsi_data_out_hdr *req_hdr =
+		(struct iscsi_data_out_hdr *)&cmnd->pdu.bhs;
+	struct iscsi_cmnd *orig_req;
+#if 0
+	struct iscsi_hdr *orig_req_hdr;
+#endif
+	u32 offset = be32_to_cpu(req_hdr->buffer_offset);
+	int res = 0;
+
+	/*
+	 * There is no race with send_r2t() and conn_abort(), since
+	 * all functions called from single read thread
+	 */
+	iscsi_extracheck_is_rd_thread(cmnd->conn);
+
+	update_stat_sn(cmnd);
+
+	orig_req = cmnd_find_data_wait_hash(conn, req_hdr->itt);
+	cmnd->cmd_req = orig_req;
+	if (unlikely(orig_req == NULL)) {
+		/*
+		 * It shouldn't happen, since we don't abort any request until
+		 * we have received all related PDUs from the initiator or
+		 * timed them out. Let's quietly drop such PDUs.
+		 */
+		TRACE_MGMT_DBG("Unable to find scsi task ITT %x",
+			cmnd_itt(cmnd));
+		res = iscsi_preliminary_complete(cmnd, cmnd, true);
+		goto out;
+	}
+
+	if (unlikely(orig_req->r2t_len_to_receive < cmnd->pdu.datasize)) {
+		if (orig_req->prelim_compl_flags != 0) {
+			/* We can have fake r2t_len_to_receive */
+			goto go;
+		}
+		PRINT_ERROR("Data size (%d) > R2T length to receive (%d)",
+			cmnd->pdu.datasize, orig_req->r2t_len_to_receive);
+		set_scst_preliminary_status_rsp(orig_req, false,
+			SCST_LOAD_SENSE(iscsi_sense_incorrect_amount_of_data));
+		goto go;
+	}
+
+	/* Crazy iSCSI spec requires us to make this unneeded check */
+#if 0 /* ...but some initiators (Windows) don't care to correctly set it */
+	orig_req_hdr = &orig_req->pdu.bhs;
+	if (unlikely(orig_req_hdr->lun != req_hdr->lun)) {
+		PRINT_ERROR("Wrong LUN (%lld) in Data-Out PDU (expected %lld), "
+			"orig_req %p, cmnd %p", (unsigned long long)req_hdr->lun,
+			(unsigned long long)orig_req_hdr->lun, orig_req, cmnd);
+		create_reject_rsp(orig_req, ISCSI_REASON_PROTOCOL_ERROR, false);
+		goto go;
+	}
+#endif
+
+go:
+	if (req_hdr->flags & ISCSI_FLG_FINAL)
+		orig_req->outstanding_r2t--;
+
+	if (unlikely(orig_req->prelim_compl_flags != 0)) {
+		res = iscsi_preliminary_complete(cmnd, orig_req, true);
+		goto out;
+	}
+
+	TRACE_WRITE("cmnd %p, orig_req %p, offset %u, datasize %u", cmnd,
+		orig_req, offset, cmnd->pdu.datasize);
+
+	res = cmnd_prepare_recv_pdu(conn, orig_req, offset, cmnd->pdu.datasize);
+
+out:
+	return res;
+}
+
+static void data_out_end(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_data_out_hdr *req_hdr =
+		(struct iscsi_data_out_hdr *)&cmnd->pdu.bhs;
+	struct iscsi_cmnd *req;
+
+	EXTRACHECKS_BUG_ON(cmnd == NULL);
+	req = cmnd->cmd_req;
+	if (unlikely(req == NULL))
+		goto out;
+
+	TRACE_DBG("cmnd %p, req %p", cmnd, req);
+
+	iscsi_extracheck_is_rd_thread(cmnd->conn);
+
+	if (!(cmnd->conn->ddigest_type & DIGEST_NONE) &&
+	    !cmnd->ddigest_checked) {
+		cmd_add_on_rx_ddigest_list(req, cmnd);
+		cmnd_get(cmnd);
+	}
+
+	/*
+	 * Now we received the data and can adjust r2t_len_to_receive of the
+	 * orig req. We couldn't do it earlier, because it will break data
+	 * receiving errors recovery (calls of iscsi_fail_data_waiting_cmnd()).
+	 */
+	req->r2t_len_to_receive -= cmnd->pdu.datasize;
+
+	if (unlikely(req->prelim_compl_flags != 0)) {
+		/*
+		 * We might need to wait for one or more PDUs. Let's simplify
+		 * other code.
+		 */
+		req->r2t_len_to_receive = req->outstanding_r2t;
+		req->r2t_len_to_send = 0;
+	}
+
+	TRACE_DBG("req %p, FINAL %x, outstanding_r2t %d, r2t_len_to_receive %d,"
+		" r2t_len_to_send %d", req, req_hdr->flags & ISCSI_FLG_FINAL,
+		req->outstanding_r2t, req->r2t_len_to_receive,
+		req->r2t_len_to_send);
+
+	if (!(req_hdr->flags & ISCSI_FLG_FINAL))
+		goto out;
+
+	if (req->r2t_len_to_receive == 0) {
+		if (!req->pending)
+			iscsi_restart_cmnd(req);
+	} else if (req->r2t_len_to_send != 0)
+		send_r2t(req);
+
+out:
+	return;
+}
+
+/* Might be called under target_mutex and cmd_list_lock */
+static void __cmnd_abort(struct iscsi_cmnd *cmnd)
+{
+	unsigned long timeout_time = jiffies + ISCSI_TM_DATA_WAIT_TIMEOUT +
+					ISCSI_ADD_SCHED_TIME;
+	struct iscsi_conn *conn = cmnd->conn;
+
+	TRACE_MGMT_DBG("Aborting cmd %p, scst_cmd %p (scst state %x, "
+		"ref_cnt %d, on_write_timeout_list %d, write_start %ld, ITT %x, "
+		"sn %u, op %x, r2t_len_to_receive %d, r2t_len_to_send %d, "
+		"CDB op %x, size to write %u, outstanding_r2t %d, "
+		"sess->exp_cmd_sn %u, conn %p, rd_task %p)",
+		cmnd, cmnd->scst_cmd, cmnd->scst_state,
+		atomic_read(&cmnd->ref_cnt), cmnd->on_write_timeout_list,
+		cmnd->write_start, cmnd_itt(cmnd), cmnd->pdu.bhs.sn,
+		cmnd_opcode(cmnd), cmnd->r2t_len_to_receive,
+		cmnd->r2t_len_to_send, cmnd_scsicode(cmnd),
+		cmnd_write_size(cmnd), cmnd->outstanding_r2t,
+		cmnd->conn->session->exp_cmd_sn, cmnd->conn,
+		cmnd->conn->rd_task);
+
+	/*
+	 * Lock to sync with iscsi_check_tm_data_wait_timeouts(), including
+	 * CMD_ABORTED bit set.
+	 */
+	spin_lock_bh(&iscsi_rd_lock);
+
+	/*
+	 * We assume that preliminary completion of commands is tested by
+	 * comparing prelim_compl_flags with 0. Otherwise a race is possible:
+	 * a command could be sent to the SCST core as PRELIM_COMPLETED while
+	 * not yet aborted there, with the result that a wrong success status
+	 * is sent to the initiator.
+	 */
+	set_bit(ISCSI_CMD_ABORTED, &cmnd->prelim_compl_flags);
+
+	TRACE_MGMT_DBG("Setting conn_tm_active for conn %p", conn);
+	conn->conn_tm_active = 1;
+
+	spin_unlock_bh(&iscsi_rd_lock);
+
+	/*
+	 * We need the lock to sync with req_add_to_write_timeout_list() and
+	 * close races for rsp_timer.expires.
+	 */
+	spin_lock_bh(&conn->write_list_lock);
+	if (!timer_pending(&conn->rsp_timer) ||
+	    time_after(conn->rsp_timer.expires, timeout_time)) {
+		TRACE_MGMT_DBG("Mod timer on %ld (conn %p)", timeout_time,
+			conn);
+		mod_timer(&conn->rsp_timer, timeout_time);
+	} else
+		TRACE_MGMT_DBG("Timer for conn %p is going to fire on %ld "
+			"(timeout time %ld)", conn, conn->rsp_timer.expires,
+			timeout_time);
+	spin_unlock_bh(&conn->write_list_lock);
+
+	return;
+}
+
+/* Must be called from the read or conn close thread */
+static int cmnd_abort(struct iscsi_cmnd *req, int *status)
+{
+	struct iscsi_task_mgt_hdr *req_hdr =
+		(struct iscsi_task_mgt_hdr *)&req->pdu.bhs;
+	struct iscsi_cmnd *cmnd;
+	int res = -1;
+
+	req_hdr->ref_cmd_sn = be32_to_cpu(req_hdr->ref_cmd_sn);
+
+	if (!before(req_hdr->ref_cmd_sn, req_hdr->cmd_sn)) {
+		TRACE(TRACE_MGMT, "ABORT TASK: RefCmdSN(%u) > CmdSN(%u)",
+			req_hdr->ref_cmd_sn, req_hdr->cmd_sn);
+		*status = ISCSI_RESPONSE_UNKNOWN_TASK;
+		goto out;
+	}
+
+	cmnd = cmnd_find_itt_get(req->conn, req_hdr->rtt);
+	if (cmnd) {
+		struct iscsi_conn *conn = cmnd->conn;
+		struct iscsi_scsi_cmd_hdr *hdr = cmnd_hdr(cmnd);
+
+		if (req_hdr->lun != hdr->lun) {
+			PRINT_ERROR("ABORT TASK: LUN mismatch: req LUN "
+				    "%llx, cmd LUN %llx, rtt %u",
+				    (long long unsigned int)req_hdr->lun,
+				    (long long unsigned int)hdr->lun,
+				    req_hdr->rtt);
+			*status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+			goto out_put;
+		}
+
+		if (cmnd->pdu.bhs.opcode & ISCSI_OP_IMMEDIATE) {
+			if (req_hdr->ref_cmd_sn != req_hdr->cmd_sn) {
+				PRINT_ERROR("ABORT TASK: RefCmdSN(%u) != TM "
+					"cmd CmdSN(%u) for immediate command "
+					"%p", req_hdr->ref_cmd_sn,
+					req_hdr->cmd_sn, cmnd);
+				*status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+				goto out_put;
+			}
+		} else {
+			if (req_hdr->ref_cmd_sn != hdr->cmd_sn) {
+				PRINT_ERROR("ABORT TASK: RefCmdSN(%u) != "
+					"CmdSN(%u) for command %p",
+					req_hdr->ref_cmd_sn, hdr->cmd_sn,
+					cmnd);
+				*status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+				goto out_put;
+			}
+		}
+
+		if (before(req_hdr->cmd_sn, hdr->cmd_sn) ||
+		    (req_hdr->cmd_sn == hdr->cmd_sn)) {
+			PRINT_ERROR("ABORT TASK: SN mismatch: req SN %x, "
+				"cmd SN %x, rtt %u", req_hdr->cmd_sn,
+				hdr->cmd_sn, req_hdr->rtt);
+			*status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+			goto out_put;
+		}
+
+		spin_lock_bh(&conn->cmd_list_lock);
+		__cmnd_abort(cmnd);
+		spin_unlock_bh(&conn->cmd_list_lock);
+
+		cmnd_put(cmnd);
+		res = 0;
+	} else {
+		TRACE_MGMT_DBG("cmd RTT %x not found", req_hdr->rtt);
+		/*
+		 * iSCSI RFC:
+		 *
+		 * b)  If the Referenced Task Tag does not identify an existing task,
+		 * but if the CmdSN indicated by the RefCmdSN field in the Task
+		 * Management function request is within the valid CmdSN window
+		 * and less than the CmdSN of the Task Management function
+		 * request itself, then targets must consider the CmdSN received
+		 * and return the "Function complete" response.
+		 *
+		 * c)  If the Referenced Task Tag does not identify an existing task
+		 * and if the CmdSN indicated by the RefCmdSN field in the Task
+		 * Management function request is outside the valid CmdSN window,
+		 * then targets must return the "Task does not exist" response.
+		 *
+		 * 128 seems to be a good "window".
+		 */
+		if (between(req_hdr->ref_cmd_sn, req_hdr->cmd_sn - 128,
+			    req_hdr->cmd_sn)) {
+			*status = ISCSI_RESPONSE_FUNCTION_COMPLETE;
+			res = 0;
+		} else
+			*status = ISCSI_RESPONSE_UNKNOWN_TASK;
+	}
+
+out:
+	return res;
+
+out_put:
+	cmnd_put(cmnd);
+	goto out;
+}
+
+/* Must be called from the read or conn close thread */
+static int target_abort(struct iscsi_cmnd *req, int all)
+{
+	struct iscsi_target *target = req->conn->session->target;
+	struct iscsi_task_mgt_hdr *req_hdr =
+		(struct iscsi_task_mgt_hdr *)&req->pdu.bhs;
+	struct iscsi_session *session;
+	struct iscsi_conn *conn;
+	struct iscsi_cmnd *cmnd;
+
+	mutex_lock(&target->target_mutex);
+
+	list_for_each_entry(session, &target->session_list,
+			    session_list_entry) {
+		list_for_each_entry(conn, &session->conn_list,
+				    conn_list_entry) {
+			spin_lock_bh(&conn->cmd_list_lock);
+			list_for_each_entry(cmnd, &conn->cmd_list,
+					    cmd_list_entry) {
+				if (cmnd == req)
+					continue;
+				if (all)
+					__cmnd_abort(cmnd);
+				else if (req_hdr->lun == cmnd_hdr(cmnd)->lun)
+					__cmnd_abort(cmnd);
+			}
+			spin_unlock_bh(&conn->cmd_list_lock);
+		}
+	}
+
+	mutex_unlock(&target->target_mutex);
+	return 0;
+}
+
+/* Must be called from the read or conn close thread */
+static void task_set_abort(struct iscsi_cmnd *req)
+{
+	struct iscsi_session *session = req->conn->session;
+	struct iscsi_task_mgt_hdr *req_hdr =
+		(struct iscsi_task_mgt_hdr *)&req->pdu.bhs;
+	struct iscsi_target *target = session->target;
+	struct iscsi_conn *conn;
+	struct iscsi_cmnd *cmnd;
+
+	mutex_lock(&target->target_mutex);
+
+	list_for_each_entry(conn, &session->conn_list, conn_list_entry) {
+		spin_lock_bh(&conn->cmd_list_lock);
+		list_for_each_entry(cmnd, &conn->cmd_list, cmd_list_entry) {
+			struct iscsi_scsi_cmd_hdr *hdr = cmnd_hdr(cmnd);
+			if (cmnd == req)
+				continue;
+			if (req_hdr->lun != hdr->lun)
+				continue;
+			if (before(req_hdr->cmd_sn, hdr->cmd_sn) ||
+			    req_hdr->cmd_sn == hdr->cmd_sn)
+				continue;
+			__cmnd_abort(cmnd);
+		}
+		spin_unlock_bh(&conn->cmd_list_lock);
+	}
+
+	mutex_unlock(&target->target_mutex);
+	return;
+}
+
+/* Must be called from the read or conn close thread */
+void conn_abort(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd, *r, *t;
+
+	TRACE_MGMT_DBG("Aborting conn %p", conn);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	cancel_delayed_work_sync(&conn->nop_in_delayed_work);
+
+	/* No locks, we are the only user */
+	list_for_each_entry_safe(r, t, &conn->nop_req_list,
+			nop_req_list_entry) {
+		list_del(&r->nop_req_list_entry);
+		cmnd_put(r);
+	}
+
+	spin_lock_bh(&conn->cmd_list_lock);
+again:
+	list_for_each_entry(cmnd, &conn->cmd_list, cmd_list_entry) {
+		__cmnd_abort(cmnd);
+		if (cmnd->r2t_len_to_receive != 0) {
+			if (!cmnd_get_check(cmnd)) {
+				spin_unlock_bh(&conn->cmd_list_lock);
+
+				/* ToDo: this is racy for MC/S */
+				iscsi_fail_data_waiting_cmnd(cmnd);
+
+				cmnd_put(cmnd);
+
+				/*
+				 * We are in the read thread, so we may not
+				 * worry that after cmnd release conn gets
+				 * released as well.
+				 */
+				spin_lock_bh(&conn->cmd_list_lock);
+				goto again;
+			}
+		}
+	}
+	spin_unlock_bh(&conn->cmd_list_lock);
+
+	return;
+}
+
+static void execute_task_management(struct iscsi_cmnd *req)
+{
+	struct iscsi_conn *conn = req->conn;
+	struct iscsi_session *sess = conn->session;
+	struct iscsi_task_mgt_hdr *req_hdr =
+		(struct iscsi_task_mgt_hdr *)&req->pdu.bhs;
+	int rc, status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+	int function = req_hdr->function & ISCSI_FUNCTION_MASK;
+	struct scst_rx_mgmt_params params;
+
+	TRACE(TRACE_MGMT, "TM fn %d", function);
+
+	TRACE_MGMT_DBG("TM req %p, ITT %x, RTT %x, sn %u, con %p", req,
+		cmnd_itt(req), req_hdr->rtt, req_hdr->cmd_sn, conn);
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	spin_lock(&sess->sn_lock);
+	sess->tm_active++;
+	sess->tm_sn = req_hdr->cmd_sn;
+	if (sess->tm_rsp != NULL) {
+		struct iscsi_cmnd *tm_rsp = sess->tm_rsp;
+
+		TRACE_MGMT_DBG("Dropping delayed TM rsp %p", tm_rsp);
+
+		sess->tm_rsp = NULL;
+		sess->tm_active--;
+
+		spin_unlock(&sess->sn_lock);
+
+		BUG_ON(sess->tm_active < 0);
+
+		rsp_cmnd_release(tm_rsp);
+	} else
+		spin_unlock(&sess->sn_lock);
+
+	memset(&params, 0, sizeof(params));
+	params.atomic = SCST_NON_ATOMIC;
+	params.tgt_priv = req;
+
+	if ((function != ISCSI_FUNCTION_ABORT_TASK) &&
+	    (req_hdr->rtt != ISCSI_RESERVED_TAG)) {
+		PRINT_ERROR("Invalid RTT %x (TM fn %d)", req_hdr->rtt,
+			function);
+		rc = -1;
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		goto reject;
+	}
+
+	/* cmd_sn is already in CPU format; it was converted in check_cmd_sn() */
+
+	switch (function) {
+	case ISCSI_FUNCTION_ABORT_TASK:
+		rc = cmnd_abort(req, &status);
+		if (rc == 0) {
+			params.fn = SCST_ABORT_TASK;
+			params.tag = req_hdr->rtt;
+			params.tag_set = 1;
+			params.lun = (uint8_t *)&req_hdr->lun;
+			params.lun_len = sizeof(req_hdr->lun);
+			params.lun_set = 1;
+			params.cmd_sn = req_hdr->cmd_sn;
+			params.cmd_sn_set = 1;
+			rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+				&params);
+			status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		}
+		break;
+	case ISCSI_FUNCTION_ABORT_TASK_SET:
+		task_set_abort(req);
+		params.fn = SCST_ABORT_TASK_SET;
+		params.lun = (uint8_t *)&req_hdr->lun;
+		params.lun_len = sizeof(req_hdr->lun);
+		params.lun_set = 1;
+		params.cmd_sn = req_hdr->cmd_sn;
+		params.cmd_sn_set = 1;
+		rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+			&params);
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	case ISCSI_FUNCTION_CLEAR_TASK_SET:
+		task_set_abort(req);
+		params.fn = SCST_CLEAR_TASK_SET;
+		params.lun = (uint8_t *)&req_hdr->lun;
+		params.lun_len = sizeof(req_hdr->lun);
+		params.lun_set = 1;
+		params.cmd_sn = req_hdr->cmd_sn;
+		params.cmd_sn_set = 1;
+		rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+			&params);
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	case ISCSI_FUNCTION_CLEAR_ACA:
+		params.fn = SCST_CLEAR_ACA;
+		params.lun = (uint8_t *)&req_hdr->lun;
+		params.lun_len = sizeof(req_hdr->lun);
+		params.lun_set = 1;
+		params.cmd_sn = req_hdr->cmd_sn;
+		params.cmd_sn_set = 1;
+		rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+			&params);
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	case ISCSI_FUNCTION_TARGET_COLD_RESET:
+	case ISCSI_FUNCTION_TARGET_WARM_RESET:
+		target_abort(req, 1);
+		params.fn = SCST_TARGET_RESET;
+		params.cmd_sn = req_hdr->cmd_sn;
+		params.cmd_sn_set = 1;
+		rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+			&params);
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	case ISCSI_FUNCTION_LOGICAL_UNIT_RESET:
+		target_abort(req, 0);
+		params.fn = SCST_LUN_RESET;
+		params.lun = (uint8_t *)&req_hdr->lun;
+		params.lun_len = sizeof(req_hdr->lun);
+		params.lun_set = 1;
+		params.cmd_sn = req_hdr->cmd_sn;
+		params.cmd_sn_set = 1;
+		rc = scst_rx_mgmt_fn(conn->session->scst_sess,
+			&params);
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	case ISCSI_FUNCTION_TASK_REASSIGN:
+		rc = -1;
+		status = ISCSI_RESPONSE_ALLEGIANCE_REASSIGNMENT_UNSUPPORTED;
+		break;
+	default:
+		PRINT_ERROR("Unknown TM function %d", function);
+		rc = -1;
+		status = ISCSI_RESPONSE_FUNCTION_REJECTED;
+		break;
+	}
+
+reject:
+	if (rc != 0)
+		iscsi_send_task_mgmt_resp(req, status);
+
+	return;
+}
+
+static void nop_out_exec(struct iscsi_cmnd *req)
+{
+	struct iscsi_cmnd *rsp;
+	struct iscsi_nop_in_hdr *rsp_hdr;
+
+	TRACE_DBG("%p", req);
+
+	if (cmnd_itt(req) != cpu_to_be32(ISCSI_RESERVED_TAG)) {
+		rsp = iscsi_alloc_main_rsp(req);
+
+		rsp_hdr = (struct iscsi_nop_in_hdr *)&rsp->pdu.bhs;
+		rsp_hdr->opcode = ISCSI_OP_NOP_IN;
+		rsp_hdr->flags = ISCSI_FLG_FINAL;
+		rsp_hdr->itt = req->pdu.bhs.itt;
+		rsp_hdr->ttt = cpu_to_be32(ISCSI_RESERVED_TAG);
+
+		if (req->pdu.datasize)
+			BUG_ON(req->sg == NULL);
+		else
+			BUG_ON(req->sg != NULL);
+
+		if (req->sg) {
+			rsp->sg = req->sg;
+			rsp->sg_cnt = req->sg_cnt;
+			rsp->bufflen = req->bufflen;
+		}
+
+		/* We already checked it in check_segment_length() */
+		BUG_ON(get_pgcnt(req->pdu.datasize, 0) > ISCSI_CONN_IOV_MAX);
+
+		rsp->pdu.datasize = req->pdu.datasize;
+	} else {
+		bool found = false;
+		struct iscsi_cmnd *r;
+		struct iscsi_conn *conn = req->conn;
+
+		TRACE_DBG("Receive Nop-In response (ttt 0x%08x)",
+			be32_to_cpu(cmnd_ttt(req)));
+
+		spin_lock_bh(&conn->nop_req_list_lock);
+		list_for_each_entry(r, &conn->nop_req_list,
+				nop_req_list_entry) {
+			if (cmnd_ttt(req) == cmnd_ttt(r)) {
+				list_del(&r->nop_req_list_entry);
+				found = true;
+				break;
+			}
+		}
+		spin_unlock_bh(&conn->nop_req_list_lock);
+
+		if (found)
+			cmnd_put(r);
+		else
+			TRACE_MGMT_DBG("%s", "Got Nop-out response without "
+				"corresponding Nop-In request");
+	}
+
+	req_cmnd_release(req);
+	return;
+}
+
+static void logout_exec(struct iscsi_cmnd *req)
+{
+	struct iscsi_logout_req_hdr *req_hdr;
+	struct iscsi_cmnd *rsp;
+	struct iscsi_logout_rsp_hdr *rsp_hdr;
+
+	PRINT_INFO("Logout received from initiator %s",
+		req->conn->session->initiator_name);
+	TRACE_DBG("%p", req);
+
+	req_hdr = (struct iscsi_logout_req_hdr *)&req->pdu.bhs;
+	rsp = iscsi_alloc_main_rsp(req);
+	rsp_hdr = (struct iscsi_logout_rsp_hdr *)&rsp->pdu.bhs;
+	rsp_hdr->opcode = ISCSI_OP_LOGOUT_RSP;
+	rsp_hdr->flags = ISCSI_FLG_FINAL;
+	rsp_hdr->itt = req_hdr->itt;
+	rsp->should_close_conn = 1;
+
+	req_cmnd_release(req);
+
+	return;
+}
+
+static void iscsi_cmnd_exec(struct iscsi_cmnd *cmnd)
+{
+
+	TRACE_DBG("cmnd %p, op %x, SN %u", cmnd, cmnd_opcode(cmnd),
+		cmnd->pdu.bhs.sn);
+
+	iscsi_extracheck_is_rd_thread(cmnd->conn);
+
+	if (cmnd_opcode(cmnd) == ISCSI_OP_SCSI_CMD) {
+		if (cmnd->r2t_len_to_receive == 0)
+			iscsi_restart_cmnd(cmnd);
+		else if (cmnd->r2t_len_to_send != 0)
+			send_r2t(cmnd);
+		goto out;
+	}
+
+	if (cmnd->prelim_compl_flags != 0) {
+		TRACE_MGMT_DBG("Terminating prelim completed non-SCSI cmnd %p "
+			"(op %x)", cmnd, cmnd_opcode(cmnd));
+		req_cmnd_release(cmnd);
+		goto out;
+	}
+
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_NOP_OUT:
+		nop_out_exec(cmnd);
+		break;
+	case ISCSI_OP_SCSI_TASK_MGT_MSG:
+		execute_task_management(cmnd);
+		break;
+	case ISCSI_OP_LOGOUT_CMD:
+		logout_exec(cmnd);
+		break;
+	default:
+		PRINT_CRIT_ERROR("Unexpected cmnd op %x", cmnd_opcode(cmnd));
+		BUG();
+		break;
+	}
+
+out:
+	return;
+}
+
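+/*
+ * TCP_CORK makes the stack hold back partial frames, so the PDU header,
+ * data and padding queued between cmnd_tx_start() and cmnd_tx_end() are
+ * sent in as few segments as possible.
+ */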
+static void set_cork(struct socket *sock, int on)
+{
+	int opt = on;
+	mm_segment_t oldfs;
+
+	oldfs = get_fs();
+	set_fs(get_ds());
+	sock->ops->setsockopt(sock, SOL_TCP, TCP_CORK,
+			      (void __force __user *)&opt, sizeof(opt));
+	set_fs(oldfs);
+	return;
+}
+
+void cmnd_tx_start(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+
+	TRACE_DBG("conn %p, cmnd %p, opcode %x", conn, cmnd, cmnd_opcode(cmnd));
+	iscsi_cmnd_set_length(&cmnd->pdu);
+
+	iscsi_extracheck_is_wr_thread(conn);
+
+	set_cork(conn->sock, 1);
+
+	conn->write_iop = conn->write_iov;
+	conn->write_iop->iov_base = (void __force __user *)(&cmnd->pdu.bhs);
+	conn->write_iop->iov_len = sizeof(cmnd->pdu.bhs);
+	conn->write_iop_used = 1;
+	conn->write_size = sizeof(cmnd->pdu.bhs) + cmnd->pdu.datasize;
+	conn->write_offset = 0;
+
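+	/*
+	 * Judging by its callers here, the second cmnd_set_sn() argument
+	 * selects whether StatSN is advanced: per RFC 3720, R2T and
+	 * unsolicited Nop-In PDUs carry the next StatSN without advancing it.
+	 */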
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_NOP_IN:
+		if (cmnd_itt(cmnd) == cpu_to_be32(ISCSI_RESERVED_TAG))
+			cmnd->pdu.bhs.sn = cmnd_set_sn(cmnd, 0);
+		else
+			cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_SCSI_RSP:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_SCSI_TASK_MGT_RSP:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_TEXT_RSP:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_SCSI_DATA_IN:
+	{
+		struct iscsi_data_in_hdr *rsp =
+			(struct iscsi_data_in_hdr *)&cmnd->pdu.bhs;
+		u32 offset = be32_to_cpu(rsp->buffer_offset);
+
+		TRACE_DBG("cmnd %p, offset %u, datasize %u, bufflen %u", cmnd,
+			offset, cmnd->pdu.datasize, cmnd->bufflen);
+
+		BUG_ON(offset > cmnd->bufflen);
+		BUG_ON(offset + cmnd->pdu.datasize > cmnd->bufflen);
+
+		conn->write_offset = offset;
+
+		cmnd_set_sn(cmnd, (rsp->flags & ISCSI_FLG_FINAL) ? 1 : 0);
+		break;
+	}
+	case ISCSI_OP_LOGOUT_RSP:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_R2T:
+		cmnd->pdu.bhs.sn = cmnd_set_sn(cmnd, 0);
+		break;
+	case ISCSI_OP_ASYNC_MSG:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	case ISCSI_OP_REJECT:
+		cmnd_set_sn(cmnd, 1);
+		break;
+	default:
+		PRINT_ERROR("Unexpected cmnd op %x", cmnd_opcode(cmnd));
+		break;
+	}
+
+	iscsi_dump_pdu(&cmnd->pdu);
+	return;
+}
+
+void cmnd_tx_end(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+
+	TRACE_DBG("%p:%x (should_close_conn %d, should_close_all_conn %d)",
+		cmnd, cmnd_opcode(cmnd), cmnd->should_close_conn,
+		cmnd->should_close_all_conn);
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_NOP_IN:
+	case ISCSI_OP_SCSI_RSP:
+	case ISCSI_OP_SCSI_TASK_MGT_RSP:
+	case ISCSI_OP_TEXT_RSP:
+	case ISCSI_OP_R2T:
+	case ISCSI_OP_ASYNC_MSG:
+	case ISCSI_OP_REJECT:
+	case ISCSI_OP_SCSI_DATA_IN:
+	case ISCSI_OP_LOGOUT_RSP:
+		break;
+	default:
+		PRINT_CRIT_ERROR("unexpected cmnd op %x", cmnd_opcode(cmnd));
+		BUG();
+		break;
+	}
+#endif
+
+	if (unlikely(cmnd->should_close_conn)) {
+		if (cmnd->should_close_all_conn) {
+			PRINT_INFO("Closing all connections for target %x at "
+				"initiator's %s request",
+				cmnd->conn->session->target->tid,
+				conn->session->initiator_name);
+			target_del_all_sess(cmnd->conn->session->target, 0);
+		} else {
+			PRINT_INFO("Closing connection at initiator's %s "
+				"request", conn->session->initiator_name);
+			mark_conn_closed(conn);
+		}
+	}
+
+	set_cork(cmnd->conn->sock, 0);
+	return;
+}
+
+/*
+ * Push the command for execution. This function reorders commands as
+ * needed. Called from the read thread.
+ *
+ * Basically, since we don't support MC/S and TCP guarantees in-order data
+ * delivery, all that SN stuff isn't needed at all (the commands' delivery
+ * order is a natural execution order), but the insane iSCSI spec requires
+ * us to check it, and we have to, because some crazy initiators can rely
+ * on the SN-based order and reorder requests while sending. For all other,
+ * normal initiators all that code is a NOP.
+ */
+static void iscsi_push_cmnd(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_session *session = cmnd->conn->session;
+	struct list_head *entry;
+	u32 cmd_sn;
+
+	TRACE_DBG("cmnd %p, iSCSI opcode %x, sn %u, exp sn %u", cmnd,
+		cmnd_opcode(cmnd), cmnd->pdu.bhs.sn, session->exp_cmd_sn);
+
+	iscsi_extracheck_is_rd_thread(cmnd->conn);
+
+	BUG_ON(cmnd->parent_req != NULL);
+
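+	/*
+	 * Per RFC 3720, immediate commands don't advance CmdSN, so they
+	 * can be executed right away, bypassing the SN ordering below.
+	 */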
+	if (cmnd->pdu.bhs.opcode & ISCSI_OP_IMMEDIATE) {
+		TRACE_DBG("Immediate cmd %p (cmd_sn %u)", cmnd,
+			cmnd->pdu.bhs.sn);
+		iscsi_cmnd_exec(cmnd);
+		goto out;
+	}
+
+	spin_lock(&session->sn_lock);
+
+	cmd_sn = cmnd->pdu.bhs.sn;
+	if (cmd_sn == session->exp_cmd_sn) {
+		while (1) {
+			session->exp_cmd_sn = ++cmd_sn;
+
+			if (unlikely(session->tm_active > 0)) {
+				if (before(cmd_sn, session->tm_sn)) {
+					struct iscsi_conn *conn = cmnd->conn;
+
+					spin_unlock(&session->sn_lock);
+
+					spin_lock_bh(&conn->cmd_list_lock);
+					__cmnd_abort(cmnd);
+					spin_unlock_bh(&conn->cmd_list_lock);
+
+					spin_lock(&session->sn_lock);
+				}
+				iscsi_check_send_delayed_tm_resp(session);
+			}
+
+			spin_unlock(&session->sn_lock);
+
+			iscsi_cmnd_exec(cmnd);
+
+			spin_lock(&session->sn_lock);
+
+			if (list_empty(&session->pending_list))
+				break;
+			cmnd = list_entry(session->pending_list.next,
+					  struct iscsi_cmnd,
+					  pending_list_entry);
+			if (cmnd->pdu.bhs.sn != cmd_sn)
+				break;
+
+			list_del(&cmnd->pending_list_entry);
+			cmnd->pending = 0;
+
+			TRACE_MGMT_DBG("Processing pending cmd %p (cmd_sn %u)",
+				cmnd, cmd_sn);
+		}
+	} else {
+		int drop = 0;
+
+		TRACE_DBG("Pending cmd %p (cmd_sn %u, exp_cmd_sn %u)",
+			cmnd, cmd_sn, session->exp_cmd_sn);
+
+		/*
+		 * iSCSI RFC 3720: "The target MUST silently ignore any
+		 * non-immediate command outside of [from ExpCmdSN to MaxCmdSN
+		 * inclusive] range". But we don't honor the MaxCmdSN
+		 * requirement, because we adjust MaxCmdSN from a separate
+		 * write thread, so in rare cases an initiator can legally
+		 * send a command with CmdSN > MaxCmdSN. It won't hurt
+		 * anything; in the worst case it leads to an additional
+		 * QUEUE FULL status.
+		 */
+
+		if (unlikely(before(cmd_sn, session->exp_cmd_sn))) {
+			PRINT_ERROR("Unexpected cmd_sn (%u,%u)", cmd_sn,
+				session->exp_cmd_sn);
+			drop = 1;
+		}
+
+#if 0
+		if (unlikely(after(cmd_sn, session->exp_cmd_sn +
+					iscsi_get_allowed_cmds(session)))) {
+			TRACE_MGMT_DBG("Too large cmd_sn %u (exp_cmd_sn %u, "
+				"max_sn %u)", cmd_sn, session->exp_cmd_sn,
+				iscsi_get_allowed_cmds(session));
+		}
+#endif
+
+		spin_unlock(&session->sn_lock);
+
+		if (unlikely(drop)) {
+			req_cmnd_release_force(cmnd);
+			goto out;
+		}
+
+		if (unlikely(test_bit(ISCSI_CMD_ABORTED,
+					&cmnd->prelim_compl_flags))) {
+			struct iscsi_cmnd *tm_clone;
+
+			TRACE_MGMT_DBG("Pending aborted cmnd %p, creating TM "
+				"clone (scst cmd %p, state %d)", cmnd,
+				cmnd->scst_cmd, cmnd->scst_state);
+
+			tm_clone = cmnd_alloc(cmnd->conn, NULL);
+			if (tm_clone != NULL) {
+				set_bit(ISCSI_CMD_ABORTED,
+					&tm_clone->prelim_compl_flags);
+				tm_clone->pdu = cmnd->pdu;
+
+				TRACE_MGMT_DBG("TM clone %p created",
+					       tm_clone);
+
+				iscsi_cmnd_exec(cmnd);
+				cmnd = tm_clone;
+			} else
+				PRINT_ERROR("%s", "Unable to create TM clone");
+		}
+
+		TRACE_MGMT_DBG("Pending cmnd %p (op %x, sn %u, exp sn %u)",
+			cmnd, cmnd_opcode(cmnd), cmd_sn, session->exp_cmd_sn);
+
+		spin_lock(&session->sn_lock);
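+		/*
+		 * Keep pending_list sorted by CmdSN, so that the resume
+		 * loop above can pop pending commands in sequence.
+		 */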
+		list_for_each(entry, &session->pending_list) {
+			struct iscsi_cmnd *tmp =
+				list_entry(entry, struct iscsi_cmnd,
+					   pending_list_entry);
+			if (before(cmd_sn, tmp->pdu.bhs.sn))
+				break;
+		}
+		list_add_tail(&cmnd->pending_list_entry, entry);
+		cmnd->pending = 1;
+	}
+
+	spin_unlock(&session->sn_lock);
+out:
+	return;
+}
+
+static int check_segment_length(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct iscsi_session *session = conn->session;
+
+	if (unlikely(cmnd->pdu.datasize > session->sess_params.max_recv_data_length)) {
+		PRINT_ERROR("Initiator %s violated negotiated parameters: "
+			"data too long (ITT %x, datasize %u, "
+			"max_recv_data_length %u", session->initiator_name,
+			cmnd_itt(cmnd), cmnd->pdu.datasize,
+			session->sess_params.max_recv_data_length);
+		mark_conn_closed(conn);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int cmnd_rx_start(struct iscsi_cmnd *cmnd)
+{
+	int res, rc;
+
+	iscsi_dump_pdu(&cmnd->pdu);
+
+	res = check_segment_length(cmnd);
+	if (res != 0)
+		goto out;
+
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_SCSI_CMD:
+		res = scsi_cmnd_start(cmnd);
+		if (unlikely(res < 0))
+			goto out;
+		spin_lock(&cmnd->conn->session->sn_lock);
+		__update_stat_sn(cmnd);
+		rc = check_cmd_sn(cmnd);
+		spin_unlock(&cmnd->conn->session->sn_lock);
+		break;
+	case ISCSI_OP_SCSI_DATA_OUT:
+		res = data_out_start(cmnd);
+		goto out;
+	case ISCSI_OP_NOP_OUT:
+		rc = nop_out_start(cmnd);
+		break;
+	case ISCSI_OP_SCSI_TASK_MGT_MSG:
+	case ISCSI_OP_LOGOUT_CMD:
+		spin_lock(&cmnd->conn->session->sn_lock);
+		__update_stat_sn(cmnd);
+		rc = check_cmd_sn(cmnd);
+		spin_unlock(&cmnd->conn->session->sn_lock);
+		break;
+	case ISCSI_OP_TEXT_CMD:
+	case ISCSI_OP_SNACK_CMD:
+	default:
+		rc = -ISCSI_REASON_UNSUPPORTED_COMMAND;
+		break;
+	}
+
+	if (unlikely(rc < 0)) {
+		PRINT_ERROR("Error %d (iSCSI opcode %x, ITT %x)", rc,
+			cmnd_opcode(cmnd), cmnd_itt(cmnd));
+		res = create_reject_rsp(cmnd, -rc, true);
+	}
+
+out:
+	return res;
+}
+
+void cmnd_rx_end(struct iscsi_cmnd *cmnd)
+{
+
+	TRACE_DBG("cmnd %p, opcode %x", cmnd, cmnd_opcode(cmnd));
+
+	cmnd->conn->last_rcv_time = jiffies;
+	TRACE_DBG("Updated last_rcv_time %ld", cmnd->conn->last_rcv_time);
+
+	switch (cmnd_opcode(cmnd)) {
+	case ISCSI_OP_SCSI_CMD:
+	case ISCSI_OP_NOP_OUT:
+	case ISCSI_OP_SCSI_TASK_MGT_MSG:
+	case ISCSI_OP_LOGOUT_CMD:
+		iscsi_push_cmnd(cmnd);
+		goto out;
+	case ISCSI_OP_SCSI_DATA_OUT:
+		data_out_end(cmnd);
+		break;
+	default:
+		PRINT_ERROR("Unexpected cmnd op %x", cmnd_opcode(cmnd));
+		break;
+	}
+
+	req_cmnd_release(cmnd);
+
+out:
+	return;
+}
+
+static int iscsi_alloc_data_buf(struct scst_cmd *cmd)
+{
+	/*
+	 * sock->ops->sendpage() is an async zero-copy operation, so we
+	 * must make sure the command's buffer isn't freed and reused
+	 * before the send has been completed by the network layers.
+	 * That is guaranteed only if we don't use the SGV cache.
+	 */
+	EXTRACHECKS_BUG_ON(!(scst_cmd_get_data_direction(cmd) & SCST_DATA_READ));
+	scst_cmd_set_no_sgv(cmd);
+	return 1;
+}
+
+static void iscsi_preprocessing_done(struct scst_cmd *scst_cmd)
+{
+	struct iscsi_cmnd *req = (struct iscsi_cmnd *)
+				scst_cmd_get_tgt_priv(scst_cmd);
+
+	TRACE_DBG("req %p", req);
+
+	if (req->conn->rx_task == current)
+		req->scst_state = ISCSI_CMD_STATE_AFTER_PREPROC;
+	else {
+		/*
+		 * We wait for the state change without any protection, so
+		 * without cmnd_get() it would be possible for req to die
+		 * "immediately" after the state assignment and for
+		 * iscsi_make_conn_rd_active() to operate on dead data.
+		 * We use the ordered version of cmnd_get(), because the
+		 * "get" must be done before the state assignment.
+		 *
+		 * We are protected from a race with cmnd_rx_continue(),
+		 * because only one read thread can be processing the
+		 * connection.
+		 */
+		cmnd_get_ordered(req);
+		req->scst_state = ISCSI_CMD_STATE_AFTER_PREPROC;
+		iscsi_make_conn_rd_active(req->conn);
+		if (unlikely(req->conn->closing)) {
+			TRACE_DBG("Waking up closing conn %p", req->conn);
+			wake_up(&req->conn->read_state_waitQ);
+		}
+		cmnd_put(req);
+	}
+
+	return;
+}
+
+/*
+ * No locks.
+ *
+ * IMPORTANT! Connection conn must be protected by additional conn_get()
+ * upon entrance in this function, because otherwise it could be destroyed
+ * inside as a result of iscsi_send(), which releases sent commands.
+ */
+static void iscsi_try_local_processing(struct iscsi_conn *conn)
+{
+	int local;
+
+	spin_lock_bh(&iscsi_wr_lock);
+	switch (conn->wr_state) {
+	case ISCSI_CONN_WR_STATE_IN_LIST:
+		list_del(&conn->wr_list_entry);
+		/* fall through */
+	case ISCSI_CONN_WR_STATE_IDLE:
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->wr_task = current;
+#endif
+		conn->wr_state = ISCSI_CONN_WR_STATE_PROCESSING;
+		conn->wr_space_ready = 0;
+		local = 1;
+		break;
+	default:
+		local = 0;
+		break;
+	}
+	spin_unlock_bh(&iscsi_wr_lock);
+
+	if (local) {
+		int rc = 1;
+
+		if (test_write_ready(conn))
+			rc = iscsi_send(conn);
+
+		spin_lock_bh(&iscsi_wr_lock);
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->wr_task = NULL;
+#endif
+		if ((rc <= 0) || test_write_ready(conn)) {
+			list_add_tail(&conn->wr_list_entry, &iscsi_wr_list);
+			conn->wr_state = ISCSI_CONN_WR_STATE_IN_LIST;
+			wake_up(&iscsi_wr_waitQ);
+		} else
+			conn->wr_state = ISCSI_CONN_WR_STATE_IDLE;
+		spin_unlock_bh(&iscsi_wr_lock);
+	}
+	return;
+}
+
+static int iscsi_xmit_response(struct scst_cmd *scst_cmd)
+{
+	int is_send_status = scst_cmd_get_is_send_status(scst_cmd);
+	struct iscsi_cmnd *req = (struct iscsi_cmnd *)
+					scst_cmd_get_tgt_priv(scst_cmd);
+	struct iscsi_conn *conn = req->conn;
+	int status = scst_cmd_get_status(scst_cmd);
+	u8 *sense = scst_cmd_get_sense_buffer(scst_cmd);
+	int sense_len = scst_cmd_get_sense_buffer_len(scst_cmd);
+
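+	/*
+	 * The send path below may sleep, so if we were called in atomic
+	 * context, ask SCST to redo the call in thread context.
+	 */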
+	if (unlikely(scst_cmd_atomic(scst_cmd)))
+		return SCST_TGT_RES_NEED_THREAD_CTX;
+
+	scst_cmd_set_tgt_priv(scst_cmd, NULL);
+
+	EXTRACHECKS_BUG_ON(req->scst_state != ISCSI_CMD_STATE_RESTARTED);
+
+	if (unlikely(scst_cmd_aborted(scst_cmd)))
+		set_bit(ISCSI_CMD_ABORTED, &req->prelim_compl_flags);
+
+	if (unlikely(req->prelim_compl_flags != 0)) {
+		if (test_bit(ISCSI_CMD_ABORTED, &req->prelim_compl_flags)) {
+			TRACE_MGMT_DBG("req %p (scst_cmd %p) aborted", req,
+				req->scst_cmd);
+			scst_set_delivery_status(req->scst_cmd,
+				SCST_CMD_DELIVERY_ABORTED);
+			req->scst_state = ISCSI_CMD_STATE_PROCESSED;
+			req_cmnd_release_force(req);
+			goto out;
+		}
+
+		TRACE_DBG("Prelim completed req %p", req);
+
+		/*
+		 * We could have preliminarily finished req before we knew
+		 * its device, so check that we return the correct sense
+		 * format.
+		 */
+		scst_check_convert_sense(scst_cmd);
+
+		if (!req->own_sg) {
+			req->sg = scst_cmd_get_sg(scst_cmd);
+			req->sg_cnt = scst_cmd_get_sg_cnt(scst_cmd);
+		}
+	} else {
+		EXTRACHECKS_BUG_ON(req->own_sg);
+		req->sg = scst_cmd_get_sg(scst_cmd);
+		req->sg_cnt = scst_cmd_get_sg_cnt(scst_cmd);
+	}
+
+	req->bufflen = scst_cmd_get_resp_data_len(scst_cmd);
+
+	req->scst_state = ISCSI_CMD_STATE_PROCESSED;
+
+	TRACE_DBG("req %p, is_send_status=%x, req->bufflen=%d, req->sg=%p, "
+		"req->sg_cnt %d", req, is_send_status, req->bufflen, req->sg,
+		req->sg_cnt);
+
+	EXTRACHECKS_BUG_ON(req->hashed);
+	if (req->main_rsp != NULL)
+		EXTRACHECKS_BUG_ON(cmnd_opcode(req->main_rsp) != ISCSI_OP_REJECT);
+
+	if (unlikely((req->bufflen != 0) && !is_send_status)) {
+		PRINT_CRIT_ERROR("%s", "Sending DATA without STATUS is "
+			"unsupported");
+		scst_set_cmd_error(scst_cmd,
+			SCST_LOAD_SENSE(scst_sense_hardw_error));
+		BUG(); /* ToDo */
+	}
+
+	if (req->bufflen != 0) {
+		/*
+		 * Check above makes sure that is_send_status is set,
+		 * so status is valid here, but in future that could change.
+		 * ToDo
+		 */
+		if ((status != SAM_STAT_CHECK_CONDITION) &&
+		    ((cmnd_hdr(req)->flags & (ISCSI_CMD_WRITE|ISCSI_CMD_READ)) !=
+				(ISCSI_CMD_WRITE|ISCSI_CMD_READ))) {
+			send_data_rsp(req, status, is_send_status);
+		} else {
+			struct iscsi_cmnd *rsp;
+			send_data_rsp(req, 0, 0);
+			if (is_send_status) {
+				rsp = create_status_rsp(req, status, sense,
+					sense_len, true);
+				iscsi_cmnd_init_write(rsp, 0);
+			}
+		}
+	} else if (is_send_status) {
+		struct iscsi_cmnd *rsp;
+		rsp = create_status_rsp(req, status, sense, sense_len, false);
+		iscsi_cmnd_init_write(rsp, 0);
+	}
+#ifdef CONFIG_SCST_EXTRACHECKS
+	else
+		BUG();
+#endif
+
+	/*
+	 * "_ordered" here to protect from reordering, which could lead to
+	 * premature connection destruction in req_cmnd_release(). Just in
+	 * case, actually, because reordering shouldn't go so far, but who
+	 * knows..
+	 */
+	conn_get_ordered(conn);
+	req_cmnd_release(req);
+	iscsi_try_local_processing(conn);
+	conn_put(conn);
+
+out:
+	return SCST_TGT_RES_SUCCESS;
+}
+
+/* Called under sn_lock */
+static bool iscsi_is_delay_tm_resp(struct iscsi_cmnd *rsp)
+{
+	bool res = false;
+	struct iscsi_task_mgt_hdr *req_hdr =
+		(struct iscsi_task_mgt_hdr *)&rsp->parent_req->pdu.bhs;
+	int function = req_hdr->function & ISCSI_FUNCTION_MASK;
+	struct iscsi_session *sess = rsp->conn->session;
+
+	/* This should be checked for immediate TM commands as well */
+
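+	/*
+	 * Delay the response until all commands covered by the TM function
+	 * (those with CmdSN below the TM CmdSN) have been received.
+	 */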
+	switch (function) {
+	default:
+		if (before(sess->exp_cmd_sn, req_hdr->cmd_sn))
+			res = true;
+		break;
+	}
+	return res;
+}
+
+/* Called under sn_lock, but might drop it inside, then reacquire */
+static void iscsi_check_send_delayed_tm_resp(struct iscsi_session *sess)
+	__acquires(&sn_lock)
+	__releases(&sn_lock)
+{
+	struct iscsi_cmnd *tm_rsp = sess->tm_rsp;
+
+	if (tm_rsp == NULL)
+		goto out;
+
+	if (iscsi_is_delay_tm_resp(tm_rsp))
+		goto out;
+
+	TRACE_MGMT_DBG("Sending delayed rsp %p", tm_rsp);
+
+	sess->tm_rsp = NULL;
+	sess->tm_active--;
+
+	spin_unlock(&sess->sn_lock);
+
+	BUG_ON(sess->tm_active < 0);
+
+	iscsi_cmnd_init_write(tm_rsp, ISCSI_INIT_WRITE_WAKE);
+
+	spin_lock(&sess->sn_lock);
+
+out:
+	return;
+}
+
+static void iscsi_send_task_mgmt_resp(struct iscsi_cmnd *req, int status)
+{
+	struct iscsi_cmnd *rsp;
+	struct iscsi_task_mgt_hdr *req_hdr =
+				(struct iscsi_task_mgt_hdr *)&req->pdu.bhs;
+	struct iscsi_task_rsp_hdr *rsp_hdr;
+	struct iscsi_session *sess = req->conn->session;
+	int fn = req_hdr->function & ISCSI_FUNCTION_MASK;
+
+	TRACE_MGMT_DBG("TM req %p finished", req);
+	TRACE(TRACE_MGMT, "TM fn %d finished, status %d", fn, status);
+
+	rsp = iscsi_alloc_rsp(req);
+	rsp_hdr = (struct iscsi_task_rsp_hdr *)&rsp->pdu.bhs;
+
+	rsp_hdr->opcode = ISCSI_OP_SCSI_TASK_MGT_RSP;
+	rsp_hdr->flags = ISCSI_FLG_FINAL;
+	rsp_hdr->itt = req_hdr->itt;
+	rsp_hdr->response = status;
+
+	if (fn == ISCSI_FUNCTION_TARGET_COLD_RESET) {
+		rsp->should_close_conn = 1;
+		rsp->should_close_all_conn = 1;
+	}
+
+	BUG_ON(sess->tm_rsp != NULL);
+
+	spin_lock(&sess->sn_lock);
+	if (iscsi_is_delay_tm_resp(rsp)) {
+		TRACE_MGMT_DBG("Delaying TM fn %d response %p "
+			"(req %p), because not all affected commands "
+			"received (TM cmd sn %u, exp sn %u)",
+			req_hdr->function & ISCSI_FUNCTION_MASK, rsp, req,
+			req_hdr->cmd_sn, sess->exp_cmd_sn);
+		sess->tm_rsp = rsp;
+		spin_unlock(&sess->sn_lock);
+		goto out_release;
+	}
+	sess->tm_active--;
+	spin_unlock(&sess->sn_lock);
+
+	BUG_ON(sess->tm_active < 0);
+
+	iscsi_cmnd_init_write(rsp, ISCSI_INIT_WRITE_WAKE);
+
+out_release:
+	req_cmnd_release(req);
+	return;
+}
+
+static inline int iscsi_get_mgmt_response(int status)
+{
+	switch (status) {
+	case SCST_MGMT_STATUS_SUCCESS:
+		return ISCSI_RESPONSE_FUNCTION_COMPLETE;
+
+	case SCST_MGMT_STATUS_TASK_NOT_EXIST:
+		return ISCSI_RESPONSE_UNKNOWN_TASK;
+
+	case SCST_MGMT_STATUS_LUN_NOT_EXIST:
+		return ISCSI_RESPONSE_UNKNOWN_LUN;
+
+	case SCST_MGMT_STATUS_FN_NOT_SUPPORTED:
+		return ISCSI_RESPONSE_FUNCTION_UNSUPPORTED;
+
+	case SCST_MGMT_STATUS_REJECTED:
+	case SCST_MGMT_STATUS_FAILED:
+	default:
+		return ISCSI_RESPONSE_FUNCTION_REJECTED;
+	}
+}
+
+static void iscsi_task_mgmt_fn_done(struct scst_mgmt_cmd *scst_mcmd)
+{
+	int fn = scst_mgmt_cmd_get_fn(scst_mcmd);
+	struct iscsi_cmnd *req = (struct iscsi_cmnd *)
+				scst_mgmt_cmd_get_tgt_priv(scst_mcmd);
+	int status =
+		iscsi_get_mgmt_response(scst_mgmt_cmd_get_status(scst_mcmd));
+
+	if ((status == ISCSI_RESPONSE_UNKNOWN_TASK) &&
+	    (fn == SCST_ABORT_TASK)) {
+		/* If we are here, we found the task, so must succeed */
+		status = ISCSI_RESPONSE_FUNCTION_COMPLETE;
+	}
+
+	TRACE_MGMT_DBG("req %p, scst_mcmd %p, fn %d, scst status %d, status %d",
+		req, scst_mcmd, fn, scst_mgmt_cmd_get_status(scst_mcmd),
+		status);
+
+	switch (fn) {
+	case SCST_NEXUS_LOSS_SESS:
+	case SCST_ABORT_ALL_TASKS_SESS:
+		/* They are internal */
+		break;
+	default:
+		iscsi_send_task_mgmt_resp(req, status);
+		scst_mgmt_cmd_set_tgt_priv(scst_mcmd, NULL);
+		break;
+	}
+	return;
+}
+
+static int iscsi_scsi_aen(struct scst_aen *aen)
+{
+	int res = SCST_AEN_RES_SUCCESS;
+	uint64_t lun = scst_aen_get_lun(aen);
+	const uint8_t *sense = scst_aen_get_sense(aen);
+	int sense_len = scst_aen_get_sense_len(aen);
+	struct iscsi_session *sess = scst_sess_get_tgt_priv(
+					scst_aen_get_sess(aen));
+	struct iscsi_conn *conn;
+	bool found;
+	struct iscsi_cmnd *fake_req, *rsp;
+	struct iscsi_async_msg_hdr *rsp_hdr;
+	struct scatterlist *sg;
+
+	TRACE_MGMT_DBG("SCSI AEN to sess %p (initiator %s)", sess,
+		sess->initiator_name);
+
+	mutex_lock(&sess->target->target_mutex);
+
+	found = false;
+	list_for_each_entry_reverse(conn, &sess->conn_list, conn_list_entry) {
+		if (!test_bit(ISCSI_CONN_SHUTTINGDOWN, &conn->conn_aflags) &&
+		    (conn->conn_reinst_successor == NULL)) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		TRACE_MGMT_DBG("Unable to find alive conn for sess %p", sess);
+		goto out_err;
+	}
+
+	/* Create a fake request */
+	fake_req = cmnd_alloc(conn, NULL);
+	if (fake_req == NULL) {
+		PRINT_ERROR("%s", "Unable to alloc fake AEN request");
+		goto out_err;
+	}
+
+	mutex_unlock(&sess->target->target_mutex);
+
+	rsp = iscsi_alloc_main_rsp(fake_req);
+	if (rsp == NULL) {
+		PRINT_ERROR("%s", "Unable to alloc AEN rsp");
+		goto out_err_free_req;
+	}
+
+	fake_req->scst_state = ISCSI_CMD_STATE_AEN;
+	fake_req->scst_aen = aen;
+
+	rsp_hdr = (struct iscsi_async_msg_hdr *)&rsp->pdu.bhs;
+
+	rsp_hdr->opcode = ISCSI_OP_ASYNC_MSG;
+	rsp_hdr->flags = ISCSI_FLG_FINAL;
+	rsp_hdr->lun = lun; /* it's already in SCSI form */
+	rsp_hdr->ffffffff = 0xffffffff;
+	rsp_hdr->async_event = ISCSI_ASYNC_SCSI;
+
+	sg = rsp->sg = rsp->rsp_sg;
+	rsp->sg_cnt = 2;
+	rsp->own_sg = 1;
+
+	sg_init_table(sg, 2);
+	sg_set_buf(&sg[0], &rsp->sense_hdr, sizeof(rsp->sense_hdr));
+	sg_set_buf(&sg[1], sense, sense_len);
+
+	rsp->sense_hdr.length = cpu_to_be16(sense_len);
+	rsp->pdu.datasize = sizeof(rsp->sense_hdr) + sense_len;
+	rsp->bufflen = rsp->pdu.datasize;
+
+	req_cmnd_release(fake_req);
+
+out:
+	return res;
+
+out_err_free_req:
+	/* target_mutex has already been dropped on this path */
+	req_cmnd_release(fake_req);
+	res = SCST_AEN_RES_FAILED;
+	goto out;
+
+out_err:
+	mutex_unlock(&sess->target->target_mutex);
+	res = SCST_AEN_RES_FAILED;
+	goto out;
+}
+
+static int iscsi_report_aen(struct scst_aen *aen)
+{
+	int res;
+	int event_fn = scst_aen_get_event_fn(aen);
+
+	switch (event_fn) {
+	case SCST_AEN_SCSI:
+		res = iscsi_scsi_aen(aen);
+		break;
+	default:
+		TRACE_MGMT_DBG("Unsupported AEN %d", event_fn);
+		res = SCST_AEN_RES_NOT_SUPPORTED;
+		break;
+	}
+	return res;
+}
+
+void iscsi_send_nop_in(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *req, *rsp;
+	struct iscsi_nop_in_hdr *rsp_hdr;
+
+	req = cmnd_alloc(conn, NULL);
+	if (req == NULL) {
+		PRINT_ERROR("%s", "Unable to alloc fake Nop-In request");
+		goto out_err;
+	}
+
+	rsp = iscsi_alloc_main_rsp(req);
+	if (rsp == NULL) {
+		PRINT_ERROR("%s", "Unable to alloc Nop-In rsp");
+		goto out_err_free_req;
+	}
+
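+	/*
+	 * The extra reference keeps rsp alive while it is on nop_req_list;
+	 * it is dropped in nop_out_exec() or conn_abort().
+	 */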
+	cmnd_get(rsp);
+
+	rsp_hdr = (struct iscsi_nop_in_hdr *)&rsp->pdu.bhs;
+	rsp_hdr->opcode = ISCSI_OP_NOP_IN;
+	rsp_hdr->flags = ISCSI_FLG_FINAL;
+	rsp_hdr->itt = cpu_to_be32(ISCSI_RESERVED_TAG);
+	rsp_hdr->ttt = conn->nop_in_ttt++;
+
+	if (conn->nop_in_ttt == cpu_to_be32(ISCSI_RESERVED_TAG))
+		conn->nop_in_ttt = 0;
+
+	/* All other fields are supposed to be already zeroed */
+
+	TRACE_DBG("Sending Nop-In request (ttt 0x%08x)", rsp_hdr->ttt);
+	spin_lock_bh(&conn->nop_req_list_lock);
+	list_add_tail(&rsp->nop_req_list_entry, &conn->nop_req_list);
+	spin_unlock_bh(&conn->nop_req_list_lock);
+
+out_err_free_req:
+	req_cmnd_release(req);
+
+out_err:
+	return;
+}
+
+static int iscsi_target_detect(struct scst_tgt_template *templ)
+{
+	/* Nothing to do */
+	return 0;
+}
+
+static int iscsi_target_release(struct scst_tgt *scst_tgt)
+{
+	/* Nothing to do */
+	return 0;
+}
+
+static struct scst_trace_log iscsi_local_trace_tbl[] = {
+    { TRACE_D_WRITE,		"d_write" },
+    { TRACE_CONN_OC,		"conn" },
+    { TRACE_CONN_OC_DBG,	"conn_dbg" },
+    { TRACE_D_IOV,		"iov" },
+    { TRACE_D_DUMP_PDU,		"pdu" },
+    { TRACE_NET_PG,		"net_page" },
+    { 0,			NULL }
+};
+
+#define ISCSI_TRACE_TLB_HELP	", d_read, d_write, conn, conn_dbg, iov, pdu, net_page"
+
+#define ISCSI_MGMT_CMD_HELP	\
+	"       echo \"add_attribute IncomingUser name password\" >mgmt\n" \
+	"       echo \"del_attribute IncomingUser name\" >mgmt\n" \
+	"       echo \"add_attribute OutgoingUser name password\" >mgmt\n" \
+	"       echo \"del_attribute OutgoingUser name\" >mgmt\n" \
+	"       echo \"add_target_attribute target_name IncomingUser name password\" >mgmt\n" \
+	"       echo \"del_target_attribute target_name IncomingUser name\" >mgmt\n" \
+	"       echo \"add_target_attribute target_name OutgoingUser name password\" >mgmt\n" \
+	"       echo \"del_target_attribute target_name OutgoingUser name\" >mgmt\n"
+
+struct scst_tgt_template iscsi_template = {
+	.name = "iscsi",
+	.sg_tablesize = 0xFFFF /* no limit */,
+	.threads_num = 0,
+	.no_clustering = 1,
+	.xmit_response_atomic = 0,
+	.tgtt_attrs = iscsi_attrs,
+	.tgt_attrs = iscsi_tgt_attrs,
+	.sess_attrs = iscsi_sess_attrs,
+	.enable_target = iscsi_enable_target,
+	.is_target_enabled = iscsi_is_target_enabled,
+	.add_target = iscsi_sysfs_add_target,
+	.del_target = iscsi_sysfs_del_target,
+	.mgmt_cmd = iscsi_sysfs_mgmt_cmd,
+	.mgmt_cmd_help = ISCSI_MGMT_CMD_HELP,
+#if defined(CONFIG_SCST_DEBUG) || defined(CONFIG_SCST_TRACING)
+	.default_trace_flags = ISCSI_DEFAULT_LOG_FLAGS,
+	.trace_flags = &trace_flag,
+	.trace_tbl = iscsi_local_trace_tbl,
+	.trace_tbl_help = ISCSI_TRACE_TLB_HELP,
+#endif
+	.detect = iscsi_target_detect,
+	.release = iscsi_target_release,
+	.xmit_response = iscsi_xmit_response,
+	.alloc_data_buf = iscsi_alloc_data_buf,
+	.preprocessing_done = iscsi_preprocessing_done,
+	.pre_exec = iscsi_pre_exec,
+	.task_mgmt_affected_cmds_done = iscsi_task_mgmt_affected_cmds_done,
+	.task_mgmt_fn_done = iscsi_task_mgmt_fn_done,
+	.report_aen = iscsi_report_aen,
+};
+
+static __init int iscsi_run_threads(int count, char *name, int (*fn)(void *))
+{
+	int res = 0;
+	int i;
+	struct iscsi_thread_t *thr;
+
+	for (i = 0; i < count; i++) {
+		thr = kmalloc(sizeof(*thr), GFP_KERNEL);
+		if (!thr) {
+			res = -ENOMEM;
+			PRINT_ERROR("Failed to allocate thr %d", res);
+			goto out;
+		}
+		thr->thr = kthread_run(fn, NULL, "%s%d", name, i);
+		if (IS_ERR(thr->thr)) {
+			res = PTR_ERR(thr->thr);
+			PRINT_ERROR("kthread_create() failed: %d", res);
+			kfree(thr);
+			goto out;
+		}
+		list_add_tail(&thr->threads_list_entry, &iscsi_threads_list);
+	}
+
+out:
+	return res;
+}
+
+static void iscsi_stop_threads(void)
+{
+	struct iscsi_thread_t *t, *tmp;
+
+	list_for_each_entry_safe(t, tmp, &iscsi_threads_list,
+				threads_list_entry) {
+		int rc = kthread_stop(t->thr);
+		if (rc < 0)
+			TRACE_MGMT_DBG("kthread_stop() failed: %d", rc);
+		list_del(&t->threads_list_entry);
+		kfree(t);
+	}
+	return;
+}
+
+static int __init iscsi_init(void)
+{
+	int err = 0;
+	int num;
+
+	PRINT_INFO("iSCSI SCST Target - version %s", ISCSI_VERSION_STRING);
+
+	dummy_page = alloc_pages(GFP_KERNEL, 0);
+	if (dummy_page == NULL) {
+		PRINT_ERROR("%s", "Dummy page allocation failed");
+		err = -ENOMEM;
+		goto out;
+	}
+
+	sg_init_table(&dummy_sg, 1);
+	sg_set_page(&dummy_sg, dummy_page, PAGE_SIZE, 0);
+
+	ctr_major = register_chrdev(0, ctr_name, &ctr_fops);
+	if (ctr_major < 0) {
+		PRINT_ERROR("failed to register the control device %d",
+			    ctr_major);
+		err = ctr_major;
+		goto out_callb;
+	}
+
+	err = event_init();
+	if (err < 0)
+		goto out_reg;
+
+	iscsi_cmnd_cache = KMEM_CACHE(iscsi_cmnd, SCST_SLAB_FLAGS);
+	if (!iscsi_cmnd_cache) {
+		err = -ENOMEM;
+		goto out_event;
+	}
+
+	err = scst_register_target_template(&iscsi_template);
+	if (err < 0)
+		goto out_kmem;
+
+	num = max((int)num_online_cpus(), 2);
+
+	err = iscsi_run_threads(num, "iscsird", istrd);
+	if (err != 0)
+		goto out_thr;
+
+	err = iscsi_run_threads(num, "iscsiwr", istwr);
+	if (err != 0)
+		goto out_thr;
+
+out:
+	return err;
+
+out_thr:
+	iscsi_stop_threads();
+
+	scst_unregister_target_template(&iscsi_template);
+
+out_kmem:
+	kmem_cache_destroy(iscsi_cmnd_cache);
+
+out_event:
+	event_exit();
+
+out_reg:
+	unregister_chrdev(ctr_major, ctr_name);
+
+out_callb:
+	__free_pages(dummy_page, 0);
+	goto out;
+}
+
+static void __exit iscsi_exit(void)
+{
+	iscsi_stop_threads();
+
+	unregister_chrdev(ctr_major, ctr_name);
+
+	event_exit();
+
+	kmem_cache_destroy(iscsi_cmnd_cache);
+
+	scst_unregister_target_template(&iscsi_template);
+
+	__free_pages(dummy_page, 0);
+	return;
+}
+
+module_init(iscsi_init);
+module_exit(iscsi_exit);
+
+MODULE_LICENSE("GPL");
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/nthread.c linux-2.6.33/drivers/scst/iscsi-scst/nthread.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/nthread.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/nthread.c
@@ -0,0 +1,1524 @@
+/*
+ *  Network threads.
+ *
+ *  Copyright (C) 2004 - 2005 FUJITA Tomonori <tomof@acm.org>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/sched.h>
+#include <linux/file.h>
+#include <linux/kthread.h>
+#include <asm/ioctls.h>
+#include <linux/delay.h>
+#include <net/tcp.h>
+
+#include "iscsi.h"
+#include "digest.h"
+
+enum rx_state {
+	RX_INIT_BHS = 0, /* Must be zero for better "switch" optimization. */
+	RX_BHS,
+	RX_CMD_START,
+	RX_DATA,
+	RX_END,
+
+	RX_CMD_CONTINUE,
+	RX_INIT_HDIGEST,
+	RX_CHECK_HDIGEST,
+	RX_INIT_DDIGEST,
+	RX_CHECK_DDIGEST,
+	RX_AHS,
+	RX_PADDING,
+};
+
+enum tx_state {
+	TX_INIT = 0, /* Must be zero for better "switch" optimization. */
+	TX_BHS_DATA,
+	TX_INIT_PADDING,
+	TX_PADDING,
+	TX_INIT_DDIGEST,
+	TX_DDIGEST,
+	TX_END,
+};
+
+static inline void iscsi_check_closewait(struct iscsi_conn *conn) {}
+
+static void free_pending_commands(struct iscsi_conn *conn)
+{
+	struct iscsi_session *session = conn->session;
+	struct list_head *pending_list = &session->pending_list;
+	int req_freed;
+	struct iscsi_cmnd *cmnd;
+
+	spin_lock(&session->sn_lock);
+	do {
+		req_freed = 0;
+		list_for_each_entry(cmnd, pending_list, pending_list_entry) {
+			TRACE_CONN_CLOSE_DBG("Pending cmd %p"
+				"(conn %p, cmd_sn %u, exp_cmd_sn %u)",
+				cmnd, conn, cmnd->pdu.bhs.sn,
+				session->exp_cmd_sn);
+			if ((cmnd->conn == conn) &&
+			    (session->exp_cmd_sn == cmnd->pdu.bhs.sn)) {
+				TRACE_CONN_CLOSE_DBG("Freeing pending cmd %p",
+					cmnd);
+
+				list_del(&cmnd->pending_list_entry);
+				cmnd->pending = 0;
+
+				session->exp_cmd_sn++;
+
+				spin_unlock(&session->sn_lock);
+
+				req_cmnd_release_force(cmnd);
+
+				req_freed = 1;
+				spin_lock(&session->sn_lock);
+				break;
+			}
+		}
+	} while (req_freed);
+	spin_unlock(&session->sn_lock);
+
+	return;
+}
+
+static void free_orphaned_pending_commands(struct iscsi_conn *conn)
+{
+	struct iscsi_session *session = conn->session;
+	struct list_head *pending_list = &session->pending_list;
+	int req_freed;
+	struct iscsi_cmnd *cmnd;
+
+	spin_lock(&session->sn_lock);
+	do {
+		req_freed = 0;
+		list_for_each_entry(cmnd, pending_list, pending_list_entry) {
+			TRACE_CONN_CLOSE_DBG("Pending cmd %p"
+				"(conn %p, cmd_sn %u, exp_cmd_sn %u)",
+				cmnd, conn, cmnd->pdu.bhs.sn,
+				session->exp_cmd_sn);
+			if (cmnd->conn == conn) {
+				PRINT_ERROR("Freeing orphaned pending cmd %p",
+					    cmnd);
+
+				list_del(&cmnd->pending_list_entry);
+				cmnd->pending = 0;
+
+				if (session->exp_cmd_sn == cmnd->pdu.bhs.sn)
+					session->exp_cmd_sn++;
+
+				spin_unlock(&session->sn_lock);
+
+				req_cmnd_release_force(cmnd);
+
+				req_freed = 1;
+				spin_lock(&session->sn_lock);
+				break;
+			}
+		}
+	} while (req_freed);
+	spin_unlock(&session->sn_lock);
+
+	return;
+}
+
+#ifdef CONFIG_SCST_DEBUG
+static void trace_conn_close(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd;
+
+#if 0
+	if (time_after(jiffies, start_waiting + 10*HZ))
+		trace_flag |= TRACE_CONN_OC_DBG;
+#endif
+
+	spin_lock_bh(&conn->cmd_list_lock);
+	list_for_each_entry(cmnd, &conn->cmd_list,
+			cmd_list_entry) {
+		TRACE_CONN_CLOSE_DBG(
+			"cmd %p, scst_cmd %p, scst_state %x, scst_cmd state "
+			"%d, r2t_len_to_receive %d, ref_cnt %d, sn %u, "
+			"parent_req %p, pending %d",
+			cmnd, cmnd->scst_cmd, cmnd->scst_state,
+			((cmnd->parent_req == NULL) && cmnd->scst_cmd) ?
+				cmnd->scst_cmd->state : -1,
+			cmnd->r2t_len_to_receive, atomic_read(&cmnd->ref_cnt),
+			cmnd->pdu.bhs.sn, cmnd->parent_req, cmnd->pending);
+	}
+	spin_unlock_bh(&conn->cmd_list_lock);
+	return;
+}
+#else /* CONFIG_SCST_DEBUG */
+static void trace_conn_close(struct iscsi_conn *conn) {}
+#endif /* CONFIG_SCST_DEBUG */
+
+void iscsi_task_mgmt_affected_cmds_done(struct scst_mgmt_cmd *scst_mcmd)
+{
+	int fn = scst_mgmt_cmd_get_fn(scst_mcmd);
+	void *priv = scst_mgmt_cmd_get_tgt_priv(scst_mcmd);
+
+	TRACE_MGMT_DBG("scst_mcmd %p, fn %d, priv %p", scst_mcmd, fn, priv);
+
+	switch (fn) {
+	case SCST_NEXUS_LOSS_SESS:
+	case SCST_ABORT_ALL_TASKS_SESS:
+	{
+		struct iscsi_conn *conn = (struct iscsi_conn *)priv;
+		struct iscsi_session *sess = conn->session;
+		struct iscsi_conn *c;
+
+		mutex_lock(&sess->target->target_mutex);
+
+		/*
+		 * We can't mark sess as shutting down earlier, because until
+		 * now it might have had pending commands. Otherwise, in case
+		 * of reinstatement, data corruption would be possible,
+		 * because commands in the session being reinstated could be
+		 * executed after commands in the new session.
+		 */
+		sess->sess_shutting_down = 1;
+		list_for_each_entry(c, &sess->conn_list, conn_list_entry) {
+			if (!test_bit(ISCSI_CONN_SHUTTINGDOWN, &c->conn_aflags)) {
+				sess->sess_shutting_down = 0;
+				break;
+			}
+		}
+
+		if (conn->conn_reinst_successor != NULL) {
+			BUG_ON(!test_bit(ISCSI_CONN_REINSTATING,
+				  &conn->conn_reinst_successor->conn_aflags));
+			conn_reinst_finished(conn->conn_reinst_successor);
+			conn->conn_reinst_successor = NULL;
+		} else if (sess->sess_reinst_successor != NULL) {
+			sess_reinst_finished(sess->sess_reinst_successor);
+			sess->sess_reinst_successor = NULL;
+		}
+		mutex_unlock(&sess->target->target_mutex);
+
+		complete_all(&conn->ready_to_free);
+		break;
+	}
+	default:
+		/* Nothing to do */
+		break;
+	}
+
+	return;
+}
+
+/* No locks */
+static void close_conn(struct iscsi_conn *conn)
+{
+	struct iscsi_session *session = conn->session;
+	struct iscsi_target *target = conn->target;
+	typeof(jiffies) start_waiting = jiffies;
+	typeof(jiffies) shut_start_waiting = start_waiting;
+	bool pending_reported = false, wait_expired = false;
+	bool shut_expired = false;
+	bool reinst;
+
+#define CONN_PENDING_TIMEOUT	((typeof(jiffies))10*HZ)
+#define CONN_WAIT_TIMEOUT	((typeof(jiffies))10*HZ)
+#define CONN_REG_SHUT_TIMEOUT	((typeof(jiffies))125*HZ)
+#define CONN_DEL_SHUT_TIMEOUT	((typeof(jiffies))10*HZ)
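+/*
+ * All the timeouts above are in jiffies: 10s each for the pending and
+ * regular waits, then 125s (10s, if the conn is being deleted) after the
+ * shutdown.
+ */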
+
+	TRACE_MGMT_DBG("Closing connection %p (conn_ref_cnt=%d)", conn,
+		atomic_read(&conn->conn_ref_cnt));
+
+	iscsi_extracheck_is_rd_thread(conn);
+
+	BUG_ON(!conn->closing);
+
+	if (conn->active_close) {
+		/* We want all our already issued send operations to complete */
+		conn->sock->ops->shutdown(conn->sock, RCV_SHUTDOWN);
+	} else {
+		conn->sock->ops->shutdown(conn->sock,
+			RCV_SHUTDOWN|SEND_SHUTDOWN);
+	}
+
+	mutex_lock(&session->target->target_mutex);
+
+	set_bit(ISCSI_CONN_SHUTTINGDOWN, &conn->conn_aflags);
+	reinst = (conn->conn_reinst_successor != NULL);
+
+	mutex_unlock(&session->target->target_mutex);
+
+	if (reinst) {
+		int rc;
+		int lun = 0;
+
+		/* Abort all outstanding commands */
+		rc = scst_rx_mgmt_fn_lun(session->scst_sess,
+			SCST_ABORT_ALL_TASKS_SESS, (uint8_t *)&lun, sizeof(lun),
+			SCST_NON_ATOMIC, conn);
+		if (rc != 0)
+			PRINT_ERROR("SCST_ABORT_ALL_TASKS_SESS failed %d", rc);
+	} else {
+		int rc;
+		int lun = 0;
+
+		rc = scst_rx_mgmt_fn_lun(session->scst_sess,
+			SCST_NEXUS_LOSS_SESS, (uint8_t *)&lun, sizeof(lun),
+			SCST_NON_ATOMIC, conn);
+		if (rc != 0)
+			PRINT_ERROR("SCST_NEXUS_LOSS_SESS failed %d", rc);
+	}
+
+	if (conn->read_state != RX_INIT_BHS) {
+		struct iscsi_cmnd *cmnd = conn->read_cmnd;
+
+		if (cmnd->scst_state == ISCSI_CMD_STATE_RX_CMD) {
+			TRACE_CONN_CLOSE_DBG("Going to wait for cmnd %p to "
+				"change state from RX_CMD", cmnd);
+		}
+		wait_event(conn->read_state_waitQ,
+			cmnd->scst_state != ISCSI_CMD_STATE_RX_CMD);
+
+		TRACE_CONN_CLOSE_DBG("Releasing conn->read_cmnd %p (conn %p)",
+			conn->read_cmnd, conn);
+
+		conn->read_cmnd = NULL;
+		conn->read_state = RX_INIT_BHS;
+		req_cmnd_release_force(cmnd);
+	}
+
+	conn_abort(conn);
+
+	/* ToDo: not the best way to wait */
+	while (atomic_read(&conn->conn_ref_cnt) != 0) {
+		if (conn->conn_tm_active)
+			iscsi_check_tm_data_wait_timeouts(conn, true);
+
+		mutex_lock(&target->target_mutex);
+		spin_lock(&session->sn_lock);
+		if (session->tm_rsp && session->tm_rsp->conn == conn) {
+			struct iscsi_cmnd *tm_rsp = session->tm_rsp;
+			TRACE_MGMT_DBG("Dropping delayed TM rsp %p", tm_rsp);
+			session->tm_rsp = NULL;
+			session->tm_active--;
+			WARN_ON(session->tm_active < 0);
+			spin_unlock(&session->sn_lock);
+			mutex_unlock(&target->target_mutex);
+
+			rsp_cmnd_release(tm_rsp);
+		} else {
+			spin_unlock(&session->sn_lock);
+			mutex_unlock(&target->target_mutex);
+		}
+
+		/* It's safe to check it without sn_lock */
+		if (!list_empty(&session->pending_list)) {
+			TRACE_CONN_CLOSE_DBG("Disposing pending commands on "
+				"connection %p (conn_ref_cnt=%d)", conn,
+				atomic_read(&conn->conn_ref_cnt));
+
+			free_pending_commands(conn);
+
+			if (time_after(jiffies,
+				start_waiting + CONN_PENDING_TIMEOUT)) {
+				if (!pending_reported) {
+					TRACE_CONN_CLOSE("%s",
+						"Pending wait time expired");
+					pending_reported = 1;
+				}
+				free_orphaned_pending_commands(conn);
+			}
+		}
+
+		iscsi_make_conn_wr_active(conn);
+
+		/* That's for active close only, actually */
+		if (time_after(jiffies, start_waiting + CONN_WAIT_TIMEOUT) &&
+		    !wait_expired) {
+			TRACE_CONN_CLOSE("Wait time expired (conn %p, "
+				"sk_state %d)",
+				conn, conn->sock->sk->sk_state);
+			conn->sock->ops->shutdown(conn->sock, SEND_SHUTDOWN);
+			wait_expired = 1;
+			shut_start_waiting = jiffies;
+		}
+
+		if (wait_expired && !shut_expired &&
+		    time_after(jiffies, shut_start_waiting +
+				(conn->deleting ? CONN_DEL_SHUT_TIMEOUT :
+						  CONN_REG_SHUT_TIMEOUT))) {
+			TRACE_CONN_CLOSE("Wait time after shutdown expired "
+				"(conn %p, sk_state %d)", conn,
+				conn->sock->sk->sk_state);
+			conn->sock->sk->sk_prot->disconnect(conn->sock->sk, 0);
+			shut_expired = 1;
+		}
+
+		if (conn->deleting)
+			msleep(200);
+		else
+			msleep(1000);
+
+		TRACE_CONN_CLOSE_DBG("conn %p, conn_ref_cnt %d left, "
+			"wr_state %d, exp_cmd_sn %u",
+			conn, atomic_read(&conn->conn_ref_cnt),
+			conn->wr_state, session->exp_cmd_sn);
+
+		trace_conn_close(conn);
+
+		iscsi_check_closewait(conn);
+	}
+
+	write_lock_bh(&conn->sock->sk->sk_callback_lock);
+	conn->sock->sk->sk_state_change = conn->old_state_change;
+	conn->sock->sk->sk_data_ready = conn->old_data_ready;
+	conn->sock->sk->sk_write_space = conn->old_write_space;
+	write_unlock_bh(&conn->sock->sk->sk_callback_lock);
+
+	while (1) {
+		bool t;
+
+		spin_lock_bh(&iscsi_wr_lock);
+		t = (conn->wr_state == ISCSI_CONN_WR_STATE_IDLE);
+		spin_unlock_bh(&iscsi_wr_lock);
+
+		if (t && (atomic_read(&conn->conn_ref_cnt) == 0))
+			break;
+
+		TRACE_CONN_CLOSE_DBG("Waiting for wr thread (conn %p), "
+			"wr_state %x", conn, conn->wr_state);
+		msleep(50);
+	}
+
+	wait_for_completion(&conn->ready_to_free);
+
+	TRACE_CONN_CLOSE("Notifying user space about closing connection %p",
+			 conn);
+	event_send(target->tid, session->sid, conn->cid, 0, E_CONN_CLOSE,
+		NULL, NULL);
+
+	kobject_put(&conn->iscsi_conn_kobj);
+	return;
+}
+
+static int close_conn_thr(void *arg)
+{
+	struct iscsi_conn *conn = (struct iscsi_conn *)arg;
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+	/*
+	 * To satisfy iscsi_extracheck_is_rd_thread() in functions called
+	 * on the connection close. It is safe, because at this point conn
+	 * can't be used by any other thread.
+	 */
+	conn->rd_task = current;
+#endif
+	close_conn(conn);
+	return 0;
+}
+
+/* No locks */
+static void start_close_conn(struct iscsi_conn *conn)
+{
+	struct task_struct *t;
+
+	t = kthread_run(close_conn_thr, conn, "iscsi_conn_cleanup");
+	if (IS_ERR(t)) {
+		PRINT_ERROR("kthread_run() failed (%ld), closing conn %p "
+			"directly", PTR_ERR(t), conn);
+		close_conn(conn);
+	}
+	return;
+}
+
+static inline void iscsi_conn_init_read(struct iscsi_conn *conn,
+	void __user *data, size_t len)
+{
+	conn->read_iov[0].iov_base = data;
+	conn->read_iov[0].iov_len = len;
+	conn->read_msg.msg_iov = conn->read_iov;
+	conn->read_msg.msg_iovlen = 1;
+	conn->read_size = len;
+	return;
+}
+
+static void iscsi_conn_prepare_read_ahs(struct iscsi_conn *conn,
+	struct iscsi_cmnd *cmnd)
+{
+	int asize = (cmnd->pdu.ahssize + 3) & -4;
+
+	/* ToDo: __GFP_NOFAIL ?? */
+	cmnd->pdu.ahs = kmalloc(asize, __GFP_NOFAIL|GFP_KERNEL);
+	BUG_ON(cmnd->pdu.ahs == NULL);
+	iscsi_conn_init_read(conn, (void __force __user *)cmnd->pdu.ahs, asize);
+	return;
+}
+
+static struct iscsi_cmnd *iscsi_get_send_cmnd(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd = NULL;
+
+	spin_lock_bh(&conn->write_list_lock);
+	if (!list_empty(&conn->write_list)) {
+		cmnd = list_entry(conn->write_list.next, struct iscsi_cmnd,
+				write_list_entry);
+		cmd_del_from_write_list(cmnd);
+		cmnd->write_processing_started = 1;
+	}
+	spin_unlock_bh(&conn->write_list_lock);
+
+	if (unlikely(cmnd == NULL))
+		goto out;
+
+	if (unlikely(test_bit(ISCSI_CMD_ABORTED,
+			&cmnd->parent_req->prelim_compl_flags))) {
+		TRACE_MGMT_DBG("Going to send acmd %p (scst cmd %p, "
+			"state %d, parent_req %p)", cmnd, cmnd->scst_cmd,
+			cmnd->scst_state, cmnd->parent_req);
+	}
+
+	if (unlikely(cmnd_opcode(cmnd) == ISCSI_OP_SCSI_TASK_MGT_RSP)) {
+#ifdef CONFIG_SCST_DEBUG
+		struct iscsi_task_mgt_hdr *req_hdr =
+			(struct iscsi_task_mgt_hdr *)&cmnd->parent_req->pdu.bhs;
+		struct iscsi_task_rsp_hdr *rsp_hdr =
+			(struct iscsi_task_rsp_hdr *)&cmnd->pdu.bhs;
+		TRACE_MGMT_DBG("Going to send TM response %p (status %d, "
+			"fn %d, parent_req %p)", cmnd, rsp_hdr->response,
+			req_hdr->function & ISCSI_FUNCTION_MASK,
+			cmnd->parent_req);
+#endif
+	}
+
+out:
+	return cmnd;
+}
+
+/* Returns number of bytes left to receive or <0 for error */
+static int do_recv(struct iscsi_conn *conn)
+{
+	int res;
+	mm_segment_t oldfs;
+	struct msghdr msg;
+	int first_len;
+
+	EXTRACHECKS_BUG_ON(conn->read_cmnd == NULL);
+
+	if (unlikely(conn->closing)) {
+		res = -EIO;
+		goto out;
+	}
+
+	/*
+	 * We assume that if sock_recvmsg() returned less data than requested,
+	 * the next call would return -EAGAIN, so there's no point in calling
+	 * it again.
+	 */
+
+restart:
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_iov = conn->read_msg.msg_iov;
+	msg.msg_iovlen = conn->read_msg.msg_iovlen;
+	first_len = msg.msg_iov->iov_len;
+
+	oldfs = get_fs();
+	set_fs(get_ds());
+	res = sock_recvmsg(conn->sock, &msg, conn->read_size,
+			   MSG_DONTWAIT | MSG_NOSIGNAL);
+	set_fs(oldfs);
+
+	if (res > 0) {
+		/*
+		 * To save considerable effort and CPU power, we assume that
+		 * the TCP code adjusts conn->read_msg.msg_iov and
+		 * conn->read_msg.msg_iovlen by the amount of copied data.
+		 * This BUG_ON is intended to catch any future change in
+		 * that behavior.
+		 */
+		BUG_ON((res >= first_len) &&
+			(conn->read_msg.msg_iov->iov_len != 0));
+		conn->read_size -= res;
+		if (conn->read_size != 0) {
+			if (res >= first_len) {
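+				/*
+				 * Assumes every iov after the first one
+				 * covers exactly one page.
+				 */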
+				int done = 1 + ((res - first_len) >> PAGE_SHIFT);
+				conn->read_msg.msg_iov += done;
+				conn->read_msg.msg_iovlen -= done;
+			}
+		}
+		res = conn->read_size;
+	} else {
+		switch (res) {
+		case -EAGAIN:
+			TRACE_DBG("EAGAIN received for conn %p", conn);
+			res = conn->read_size;
+			break;
+		case -ERESTARTSYS:
+			TRACE_DBG("ERESTARTSYS received for conn %p", conn);
+			goto restart;
+		default:
+			if (!conn->closing) {
+				PRINT_ERROR("sock_recvmsg() failed: %d", res);
+				mark_conn_closed(conn);
+			}
+			if (res == 0)
+				res = -EIO;
+			break;
+		}
+	}
+
+out:
+	return res;
+}
+
+static int iscsi_rx_check_ddigest(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd = conn->read_cmnd;
+	int res;
+
+	res = do_recv(conn);
+	if (res == 0) {
+		conn->read_state = RX_END;
+
+		if (cmnd->pdu.datasize <= 16*1024) {
+			/*
+			 * The data is still cache hot, so compute the digest
+			 * inline. The trade-off is between the latency of
+			 * possible cache misses later and that of the digest
+			 * calculation now.
+			 */
+			TRACE_DBG("cmnd %p, opcode %x: checking RX "
+				"ddigest inline", cmnd, cmnd_opcode(cmnd));
+			cmnd->ddigest_checked = 1;
+			res = digest_rx_data(cmnd);
+			if (unlikely(res != 0)) {
+				mark_conn_closed(conn);
+				goto out;
+			}
+		} else if (cmnd_opcode(cmnd) == ISCSI_OP_SCSI_CMD) {
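+			/*
+			 * Large data: defer the digest check to the per-cmd
+			 * rx_ddigest list; take a reference so the cmnd
+			 * outlives the deferred verification.
+			 */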
+			cmd_add_on_rx_ddigest_list(cmnd, cmnd);
+			cmnd_get(cmnd);
+		} else if (cmnd_opcode(cmnd) != ISCSI_OP_SCSI_DATA_OUT) {
+			/*
+			 * We can get here only for Nop-Out. The iSCSI RFC
+			 * doesn't specify how to deal with digest errors in
+			 * this case. Is closing the connection correct?
+			 */
+			TRACE_DBG("cmnd %p, opcode %x: checking NOP RX "
+				"ddigest", cmnd, cmnd_opcode(cmnd));
+			res = digest_rx_data(cmnd);
+			if (unlikely(res != 0)) {
+				mark_conn_closed(conn);
+				goto out;
+			}
+		}
+	}
+
+out:
+	return res;
+}
+
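+/*
+ * The receive path below is a state machine. The usual flow is
+ * RX_INIT_BHS -> RX_BHS -> [RX_AHS] -> [RX_INIT_HDIGEST ->
+ * RX_CHECK_HDIGEST] -> RX_CMD_START -> [RX_CMD_CONTINUE] -> [RX_DATA ->
+ * [RX_PADDING] -> [RX_INIT_DDIGEST -> RX_CHECK_DDIGEST]] -> RX_END,
+ * where the bracketed states are visited only if the PDU has AHS data,
+ * digests, payload or padding respectively.
+ */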
+/* No locks, conn is rd processing */
+static int process_read_io(struct iscsi_conn *conn, int *closed)
+{
+	struct iscsi_cmnd *cmnd = conn->read_cmnd;
+	int res;
+
+	/* In case of error cmnd will be freed in close_conn() */
+
+	do {
+		switch (conn->read_state) {
+		case RX_INIT_BHS:
+			EXTRACHECKS_BUG_ON(conn->read_cmnd != NULL);
+			cmnd = cmnd_alloc(conn, NULL);
+			conn->read_cmnd = cmnd;
+			iscsi_conn_init_read(cmnd->conn,
+				(void __force __user *)&cmnd->pdu.bhs,
+				sizeof(cmnd->pdu.bhs));
+			conn->read_state = RX_BHS;
+			/* fall through */
+
+		case RX_BHS:
+			res = do_recv(conn);
+			if (res == 0) {
+				iscsi_cmnd_get_length(&cmnd->pdu);
+				if (cmnd->pdu.ahssize == 0) {
+					if ((conn->hdigest_type & DIGEST_NONE) == 0)
+						conn->read_state = RX_INIT_HDIGEST;
+					else
+						conn->read_state = RX_CMD_START;
+				} else {
+					iscsi_conn_prepare_read_ahs(conn, cmnd);
+					conn->read_state = RX_AHS;
+				}
+			}
+			break;
+
+		case RX_CMD_START:
+			res = cmnd_rx_start(cmnd);
+			if (res == 0) {
+				if (cmnd->pdu.datasize == 0)
+					conn->read_state = RX_END;
+				else
+					conn->read_state = RX_DATA;
+			} else if (res > 0)
+				conn->read_state = RX_CMD_CONTINUE;
+			else
+				BUG_ON(!conn->closing);
+			break;
+
+		case RX_CMD_CONTINUE:
+			if (cmnd->scst_state == ISCSI_CMD_STATE_RX_CMD) {
+				TRACE_DBG("cmnd %p is still in RX_CMD state",
+					cmnd);
+				res = 1;
+				break;
+			}
+			res = cmnd_rx_continue(cmnd);
+			if (unlikely(res != 0))
+				BUG_ON(!conn->closing);
+			else {
+				if (cmnd->pdu.datasize == 0)
+					conn->read_state = RX_END;
+				else
+					conn->read_state = RX_DATA;
+			}
+			break;
+
+		case RX_DATA:
+			res = do_recv(conn);
+			if (res == 0) {
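+				/*
+				 * Pad bytes up to the next 4-byte boundary,
+				 * e.g. datasize 5 -> psz 3.
+				 */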
+				int psz = ((cmnd->pdu.datasize + 3) & -4) - cmnd->pdu.datasize;
+				if (psz != 0) {
+					TRACE_DBG("padding %d bytes", psz);
+					iscsi_conn_init_read(conn,
+						(void __force __user *)&conn->rpadding, psz);
+					conn->read_state = RX_PADDING;
+				} else if ((conn->ddigest_type & DIGEST_NONE) != 0)
+					conn->read_state = RX_END;
+				else
+					conn->read_state = RX_INIT_DDIGEST;
+			}
+			break;
+
+		case RX_END:
+			if (unlikely(conn->read_size != 0)) {
+				PRINT_CRIT_ERROR("conn read_size !=0 on RX_END "
+					"(conn %p, op %x, read_size %d)", conn,
+					cmnd_opcode(cmnd), conn->read_size);
+				BUG();
+			}
+			conn->read_cmnd = NULL;
+			conn->read_state = RX_INIT_BHS;
+
+			cmnd_rx_end(cmnd);
+
+			EXTRACHECKS_BUG_ON(conn->read_size != 0);
+
+			/*
+			 * Exit to maintain fairness between connections. Res
+			 * must be 0 here anyway; the assignment only silences
+			 * a compiler warning about an uninitialized variable.
+			 */
+			res = 0;
+			goto out;
+
+		case RX_INIT_HDIGEST:
+			iscsi_conn_init_read(conn,
+				(void __force __user *)&cmnd->hdigest, sizeof(u32));
+			conn->read_state = RX_CHECK_HDIGEST;
+			/* fall through */
+
+		case RX_CHECK_HDIGEST:
+			res = do_recv(conn);
+			if (res == 0) {
+				res = digest_rx_header(cmnd);
+				if (unlikely(res != 0)) {
+					PRINT_ERROR("rx header digest for "
+						"initiator %s failed (%d)",
+						conn->session->initiator_name,
+						res);
+					mark_conn_closed(conn);
+				} else
+					conn->read_state = RX_CMD_START;
+			}
+			break;
+
+		case RX_INIT_DDIGEST:
+			iscsi_conn_init_read(conn,
+				(void __force __user *)&cmnd->ddigest,
+				sizeof(u32));
+			conn->read_state = RX_CHECK_DDIGEST;
+			/* fall through */
+
+		case RX_CHECK_DDIGEST:
+			res = iscsi_rx_check_ddigest(conn);
+			break;
+
+		case RX_AHS:
+			res = do_recv(conn);
+			if (res == 0) {
+				if ((conn->hdigest_type & DIGEST_NONE) == 0)
+					conn->read_state = RX_INIT_HDIGEST;
+				else
+					conn->read_state = RX_CMD_START;
+			}
+			break;
+
+		case RX_PADDING:
+			res = do_recv(conn);
+			if (res == 0) {
+				if ((conn->ddigest_type & DIGEST_NONE) == 0)
+					conn->read_state = RX_INIT_DDIGEST;
+				else
+					conn->read_state = RX_END;
+			}
+			break;
+
+		default:
+			PRINT_CRIT_ERROR("%d %x", conn->read_state, cmnd_opcode(cmnd));
+			res = -1; /* to keep compiler happy */
+			BUG();
+		}
+	} while (res == 0);
+
+	if (unlikely(conn->closing)) {
+		start_close_conn(conn);
+		*closed = 1;
+	}
+
+out:
+	return res;
+}
+
+/*
+ * Called under iscsi_rd_lock and BHs disabled, but will drop it inside,
+ * then reacquire it.
+ */
+static void scst_do_job_rd(void)
+	__acquires(&iscsi_rd_lock)
+	__releases(&iscsi_rd_lock)
+{
+
+	/*
+	 * We delete/add to tail connections to maintain fairness between them.
+	 */
+
+	while (!list_empty(&iscsi_rd_list)) {
+		int closed = 0, rc;
+		struct iscsi_conn *conn = list_entry(iscsi_rd_list.next,
+			typeof(*conn), rd_list_entry);
+
+		list_del(&conn->rd_list_entry);
+
+		BUG_ON(conn->rd_state == ISCSI_CONN_RD_STATE_PROCESSING);
+		conn->rd_data_ready = 0;
+		conn->rd_state = ISCSI_CONN_RD_STATE_PROCESSING;
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->rd_task = current;
+#endif
+		spin_unlock_bh(&iscsi_rd_lock);
+
+		rc = process_read_io(conn, &closed);
+
+		spin_lock_bh(&iscsi_rd_lock);
+
+		if (unlikely(closed))
+			continue;
+
+		if (unlikely(conn->conn_tm_active)) {
+			spin_unlock_bh(&iscsi_rd_lock);
+			iscsi_check_tm_data_wait_timeouts(conn, false);
+			spin_lock_bh(&iscsi_rd_lock);
+		}
+
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->rd_task = NULL;
+#endif
+		if ((rc == 0) || conn->rd_data_ready) {
+			list_add_tail(&conn->rd_list_entry, &iscsi_rd_list);
+			conn->rd_state = ISCSI_CONN_RD_STATE_IN_LIST;
+		} else
+			conn->rd_state = ISCSI_CONN_RD_STATE_IDLE;
+	}
+	return;
+}
+
+static inline int test_rd_list(void)
+{
+	int res = !list_empty(&iscsi_rd_list) ||
+		  unlikely(kthread_should_stop());
+	return res;
+}
+
+int istrd(void *arg)
+{
+
+	PRINT_INFO("Read thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	spin_lock_bh(&iscsi_rd_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_rd_list()) {
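+			/*
+			 * Exclusive waiter added at the queue head,
+			 * presumably so each wakeup wakes a single, most
+			 * recently active (cache-hot) read thread.
+			 */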
+			add_wait_queue_exclusive_head(&iscsi_rd_waitQ, &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_rd_list())
+					break;
+				spin_unlock_bh(&iscsi_rd_lock);
+				schedule();
+				spin_lock_bh(&iscsi_rd_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&iscsi_rd_waitQ, &wait);
+		}
+		scst_do_job_rd();
+	}
+	spin_unlock_bh(&iscsi_rd_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be
+	 * in the module unload path, so iscsi_rd_list must be empty.
+	 */
+	BUG_ON(!list_empty(&iscsi_rd_list));
+
+	PRINT_INFO("Read thread PID %d finished", current->pid);
+	return 0;
+}
+
+static inline void check_net_priv(struct iscsi_cmnd *cmd, struct page *page) {}
+static inline void __iscsi_get_page_callback(struct iscsi_cmnd *cmd) {}
+static inline void __iscsi_put_page_callback(struct iscsi_cmnd *cmd) {}
+
+void req_add_to_write_timeout_list(struct iscsi_cmnd *req)
+{
+	struct iscsi_conn *conn;
+	unsigned long timeout_time;
+	bool set_conn_tm_active = false;
+
+	if (req->on_write_timeout_list)
+		goto out;
+
+	conn = req->conn;
+
+	TRACE_DBG("Adding req %p to conn %p write_timeout_list",
+		req, conn);
+
+	spin_lock_bh(&conn->write_list_lock);
+
+	/* Recheck, since it can be changed behind us */
+	if (unlikely(req->on_write_timeout_list)) {
+		spin_unlock_bh(&conn->write_list_lock);
+		goto out;
+	}
+
+	req->on_write_timeout_list = 1;
+	req->write_start = jiffies;
+
+	list_add_tail(&req->write_timeout_list_entry,
+		&conn->write_timeout_list);
+
+	if (!timer_pending(&conn->rsp_timer)) {
+		if (unlikely(conn->conn_tm_active ||
+			     test_bit(ISCSI_CMD_ABORTED,
+					&req->prelim_compl_flags))) {
+			set_conn_tm_active = true;
+			timeout_time = req->write_start +
+					ISCSI_TM_DATA_WAIT_TIMEOUT +
+					ISCSI_ADD_SCHED_TIME;
+		} else
+			timeout_time = req->write_start +
+				conn->rsp_timeout + ISCSI_ADD_SCHED_TIME;
+
+		TRACE_DBG("Starting timer on %ld (con %p, write_start %ld)",
+			timeout_time, conn, req->write_start);
+
+		conn->rsp_timer.expires = timeout_time;
+		add_timer(&conn->rsp_timer);
+	} else if (unlikely(test_bit(ISCSI_CMD_ABORTED,
+				&req->prelim_compl_flags))) {
+		unsigned long timeout_time = jiffies +
+			ISCSI_TM_DATA_WAIT_TIMEOUT + ISCSI_ADD_SCHED_TIME;
+		set_conn_tm_active = true;
+		if (time_after(conn->rsp_timer.expires, timeout_time)) {
+			TRACE_MGMT_DBG("Mod timer on %ld (conn %p)",
+				timeout_time, conn);
+			mod_timer(&conn->rsp_timer, timeout_time);
+		}
+	}
+
+	spin_unlock_bh(&conn->write_list_lock);
+
+	/*
+	 * conn_tm_active can already have been cleared by
+	 * iscsi_check_tm_data_wait_timeouts(). write_list_lock nests inside
+	 * iscsi_rd_lock.
+	 */
+	if (unlikely(set_conn_tm_active)) {
+		spin_lock_bh(&iscsi_rd_lock);
+		TRACE_MGMT_DBG("Setting conn_tm_active for conn %p", conn);
+		conn->conn_tm_active = 1;
+		spin_unlock_bh(&iscsi_rd_lock);
+	}
+
+out:
+	return;
+}
+
+static int write_data(struct iscsi_conn *conn)
+{
+	mm_segment_t oldfs;
+	struct file *file;
+	struct iovec *iop;
+	struct socket *sock;
+	ssize_t (*sock_sendpage)(struct socket *, struct page *, int, size_t,
+				 int);
+	ssize_t (*sendpage)(struct socket *, struct page *, int, size_t, int);
+	struct iscsi_cmnd *write_cmnd = conn->write_cmnd;
+	struct iscsi_cmnd *ref_cmd;
+	struct page *page;
+	struct scatterlist *sg;
+	int saved_size, size, sendsize;
+	int length, offset, idx;
+	int flags, res, count, sg_size;
+	bool do_put = false, ref_cmd_to_parent;
+
+	iscsi_extracheck_is_wr_thread(conn);
+
+	if (!write_cmnd->own_sg) {
+		ref_cmd = write_cmnd->parent_req;
+		ref_cmd_to_parent = true;
+	} else {
+		ref_cmd = write_cmnd;
+		ref_cmd_to_parent = false;
+	}
+
+	req_add_to_write_timeout_list(write_cmnd->parent_req);
+
+	file = conn->file;
+	size = conn->write_size;
+	saved_size = size;
+	iop = conn->write_iop;
+	count = conn->write_iop_used;
+
+	if (iop) {
+		while (1) {
+			loff_t off = 0;
+			int rest;
+
+			BUG_ON(count > (signed)(sizeof(conn->write_iov) /
+						sizeof(conn->write_iov[0])));
+retry:
+			oldfs = get_fs();
+			set_fs(KERNEL_DS);
+			res = vfs_writev(file,
+					 (struct iovec __force __user *)iop,
+					 count, &off);
+			set_fs(oldfs);
+			TRACE_WRITE("sid %#Lx, cid %u, res %d, iov_len %ld",
+				    (long long unsigned int)conn->session->sid,
+				    conn->cid, res, (long)iop->iov_len);
+			if (unlikely(res <= 0)) {
+				if (res == -EAGAIN) {
+					conn->write_iop = iop;
+					conn->write_iop_used = count;
+					goto out_iov;
+				} else if (res == -EINTR)
+					goto retry;
+				goto out_err;
+			}
+
+			rest = res;
+			size -= res;
+			while ((typeof(rest))iop->iov_len <= rest && rest) {
+				rest -= iop->iov_len;
+				iop++;
+				count--;
+			}
+			if (count == 0) {
+				conn->write_iop = NULL;
+				conn->write_iop_used = 0;
+				if (size)
+					break;
+				goto out_iov;
+			}
+			BUG_ON(iop > conn->write_iov + sizeof(conn->write_iov)
+						  /sizeof(conn->write_iov[0]));
+			iop->iov_base += rest;
+			iop->iov_len -= rest;
+		}
+	}
+
+	sg = write_cmnd->sg;
+	if (unlikely(sg == NULL)) {
+		PRINT_INFO("WARNING: Data missed (cmd %p)!", write_cmnd);
+		res = 0;
+		goto out;
+	}
+
+	/* To protect from too early transfer completion race */
+	__iscsi_get_page_callback(ref_cmd);
+	do_put = true;
+
+	sock = conn->sock;
+
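+	/*
+	 * Presumably, zero-copy ->sendpage() can't be used for buffers
+	 * allocated by a dev handler, since TCP may still reference the
+	 * pages after the handler has reused them; fall back to the
+	 * copying sock_no_sendpage() for them.
+	 */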
+	if ((write_cmnd->parent_req->scst_cmd != NULL) &&
+	    scst_cmd_get_dh_data_buff_alloced(write_cmnd->parent_req->scst_cmd))
+		sock_sendpage = sock_no_sendpage;
+	else
+		sock_sendpage = sock->ops->sendpage;
+
+	flags = MSG_DONTWAIT;
+	sg_size = size;
+
+	if (sg != write_cmnd->rsp_sg) {
+		offset = conn->write_offset + sg[0].offset;
+		idx = offset >> PAGE_SHIFT;
+		offset &= ~PAGE_MASK;
+		length = min(size, (int)PAGE_SIZE - offset);
+		TRACE_WRITE("write_offset %d, sg_size %d, idx %d, offset %d, "
+			"length %d", conn->write_offset, sg_size, idx, offset,
+			length);
+	} else {
+		idx = 0;
+		offset = conn->write_offset;
+		while (offset >= sg[idx].length) {
+			offset -= sg[idx].length;
+			idx++;
+		}
+		length = sg[idx].length - offset;
+		offset += sg[idx].offset;
+		sock_sendpage = sock_no_sendpage;
+		TRACE_WRITE("rsp_sg: write_offset %d, sg_size %d, idx %d, "
+			"offset %d, length %d", conn->write_offset, sg_size,
+			idx, offset, length);
+	}
+	page = sg_page(&sg[idx]);
+
+	while (1) {
+		sendpage = sock_sendpage;
+
+		sendsize = min(size, length);
+		if (size <= sendsize) {
+retry2:
+			res = sendpage(sock, page, offset, size, flags);
+			TRACE_WRITE("Final %s sid %#Lx, cid %u, res %d (page "
+				"index %lu, offset %u, size %u, cmd %p, "
+				"page %p)", (sendpage != sock_no_sendpage) ?
+						"sendpage" : "sock_no_sendpage",
+				(long long unsigned int)conn->session->sid,
+				conn->cid, res, page->index,
+				offset, size, write_cmnd, page);
+			if (unlikely(res <= 0)) {
+				if (res == -EINTR)
+					goto retry2;
+				else
+					goto out_res;
+			}
+
+			check_net_priv(ref_cmd, page);
+			if (res == size) {
+				conn->write_size = 0;
+				res = saved_size;
+				goto out_put;
+			}
+
+			offset += res;
+			size -= res;
+			goto retry2;
+		}
+
+retry1:
+		res = sendpage(sock, page, offset, sendsize, flags | MSG_MORE);
+		TRACE_WRITE("%s sid %#Lx, cid %u, res %d (page index %lu, "
+			"offset %u, sendsize %u, size %u, cmd %p, page %p)",
+			(sendpage != sock_no_sendpage) ? "sendpage" :
+							 "sock_no_sendpage",
+			(unsigned long long)conn->session->sid, conn->cid,
+			res, page->index, offset, sendsize, size,
+			write_cmnd, page);
+		if (unlikely(res <= 0)) {
+			if (res == -EINTR)
+				goto retry1;
+			else
+				goto out_res;
+		}
+
+		check_net_priv(ref_cmd, page);
+
+		size -= res;
+
+		if (res == sendsize) {
+			idx++;
+			EXTRACHECKS_BUG_ON(idx >= ref_cmd->sg_cnt);
+			page = sg_page(&sg[idx]);
+			length = sg[idx].length;
+			offset = sg[idx].offset;
+		} else {
+			offset += res;
+			sendsize -= res;
+			goto retry1;
+		}
+	}
+
+out_off:
+	conn->write_offset += sg_size - size;
+
+out_iov:
+	conn->write_size = size;
+	if ((saved_size == size) && res == -EAGAIN)
+		goto out_put;
+
+	res = saved_size - size;
+
+out_put:
+	if (do_put)
+		__iscsi_put_page_callback(ref_cmd);
+
+out:
+	return res;
+
+out_res:
+	check_net_priv(ref_cmd, page);
+	if (res == -EAGAIN)
+		goto out_off;
+	/* else fall through */
+
+out_err:
+#ifndef CONFIG_SCST_DEBUG
+	if (!conn->closing)
+#endif
+	{
+		PRINT_ERROR("error %d at sid:cid %#Lx:%u, cmnd %p", res,
+			    (long long unsigned int)conn->session->sid,
+			    conn->cid, conn->write_cmnd);
+	}
+	if (ref_cmd_to_parent &&
+	    ((ref_cmd->scst_cmd != NULL) || (ref_cmd->scst_aen != NULL))) {
+		if (ref_cmd->scst_state == ISCSI_CMD_STATE_AEN)
+			scst_set_aen_delivery_status(ref_cmd->scst_aen,
+				SCST_AEN_RES_FAILED);
+		else
+			scst_set_delivery_status(ref_cmd->scst_cmd,
+				SCST_CMD_DELIVERY_FAILED);
+	}
+	goto out_put;
+}
+
+static int exit_tx(struct iscsi_conn *conn, int res)
+{
+	iscsi_extracheck_is_wr_thread(conn);
+
+	switch (res) {
+	case -EAGAIN:
+	case -ERESTARTSYS:
+		res = 0;
+		break;
+	default:
+#ifndef CONFIG_SCST_DEBUG
+		if (!conn->closing)
+#endif
+		{
+			PRINT_ERROR("Sending data failed: initiator %s, "
+				"write_size %d, write_state %d, res %d",
+				conn->session->initiator_name,
+				conn->write_size,
+				conn->write_state, res);
+		}
+		conn->write_state = TX_END;
+		conn->write_size = 0;
+		mark_conn_closed(conn);
+		break;
+	}
+	return res;
+}
+
+static int tx_ddigest(struct iscsi_cmnd *cmnd, int state)
+{
+	int res, rest = cmnd->conn->write_size;
+	struct msghdr msg = {.msg_flags = MSG_NOSIGNAL | MSG_DONTWAIT};
+	struct kvec iov;
+
+	iscsi_extracheck_is_wr_thread(cmnd->conn);
+
+	TRACE_DBG("Sending data digest %x (cmd %p)", cmnd->ddigest, cmnd);
+
+	iov.iov_base = (char *)(&cmnd->ddigest) + (sizeof(u32) - rest);
+	iov.iov_len = rest;
+
+	res = kernel_sendmsg(cmnd->conn->sock, &msg, &iov, 1, rest);
+	if (res > 0) {
+		cmnd->conn->write_size -= res;
+		if (!cmnd->conn->write_size)
+			cmnd->conn->write_state = state;
+	} else
+		res = exit_tx(cmnd->conn, res);
+
+	return res;
+}
+
+static void init_tx_hdigest(struct iscsi_cmnd *cmnd)
+{
+	struct iscsi_conn *conn = cmnd->conn;
+	struct iovec *iop;
+
+	iscsi_extracheck_is_wr_thread(conn);
+
+	digest_tx_header(cmnd);
+
+	BUG_ON(conn->write_iop_used >=
+		(signed)(sizeof(conn->write_iov)/sizeof(conn->write_iov[0])));
+
+	iop = &conn->write_iop[conn->write_iop_used];
+	conn->write_iop_used++;
+	iop->iov_base = (void __force __user *)&(cmnd->hdigest);
+	iop->iov_len = sizeof(u32);
+	conn->write_size += sizeof(u32);
+
+	return;
+}
+
+static int tx_padding(struct iscsi_cmnd *cmnd, int state)
+{
+	int res, rest = cmnd->conn->write_size;
+	struct msghdr msg = {.msg_flags = MSG_NOSIGNAL | MSG_DONTWAIT};
+	struct kvec iov;
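+	/* Zero-filled static: the source of the pad bytes sent on the wire */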
+	static const uint32_t padding;
+
+	iscsi_extracheck_is_wr_thread(cmnd->conn);
+
+	TRACE_DBG("Sending %d padding bytes (cmd %p)", rest, cmnd);
+
+	iov.iov_base = (char *)(&padding) + (sizeof(uint32_t) - rest);
+	iov.iov_len = rest;
+
+	res = kernel_sendmsg(cmnd->conn->sock, &msg, &iov, 1, rest);
+	if (res > 0) {
+		cmnd->conn->write_size -= res;
+		if (!cmnd->conn->write_size)
+			cmnd->conn->write_state = state;
+	} else
+		res = exit_tx(cmnd->conn, res);
+
+	return res;
+}
+
+static int iscsi_do_send(struct iscsi_conn *conn, int state)
+{
+	int res;
+
+	iscsi_extracheck_is_wr_thread(conn);
+
+	res = write_data(conn);
+	if (res > 0) {
+		if (!conn->write_size)
+			conn->write_state = state;
+	} else
+		res = exit_tx(conn, res);
+
+	return res;
+}
+
+/*
+ * No locks, conn is wr processing.
+ *
+ * IMPORTANT! Connection conn must be protected by additional conn_get()
+ * before entering this function, because otherwise it could be destroyed
+ * inside as a result of cmnd release.
+ */
+int iscsi_send(struct iscsi_conn *conn)
+{
+	struct iscsi_cmnd *cmnd = conn->write_cmnd;
+	int ddigest, res = 0;
+
+	TRACE_DBG("conn %p, write_cmnd %p", conn, cmnd);
+
+	iscsi_extracheck_is_wr_thread(conn);
+
+	ddigest = conn->ddigest_type != DIGEST_NONE ? 1 : 0;
+
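+	/*
+	 * Transmit state machine: TX_INIT -> TX_BHS_DATA -> TX_INIT_PADDING
+	 * -> [TX_PADDING] -> [TX_INIT_DDIGEST -> TX_DDIGEST] -> TX_END;
+	 * the bracketed states are entered only when padding or a data
+	 * digest is actually needed.
+	 */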
+	switch (conn->write_state) {
+	case TX_INIT:
+		BUG_ON(cmnd != NULL);
+		cmnd = conn->write_cmnd = iscsi_get_send_cmnd(conn);
+		if (!cmnd)
+			goto out;
+		cmnd_tx_start(cmnd);
+		if (!(conn->hdigest_type & DIGEST_NONE))
+			init_tx_hdigest(cmnd);
+		conn->write_state = TX_BHS_DATA;
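+		/* fall through */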
+	case TX_BHS_DATA:
+		res = iscsi_do_send(conn, cmnd->pdu.datasize ?
+					TX_INIT_PADDING : TX_END);
+		if (res <= 0 || conn->write_state != TX_INIT_PADDING)
+			break;
+	case TX_INIT_PADDING:
+		cmnd->conn->write_size = ((cmnd->pdu.datasize + 3) & -4) -
+						cmnd->pdu.datasize;
+		if (cmnd->conn->write_size != 0)
+			conn->write_state = TX_PADDING;
+		else if (ddigest)
+			conn->write_state = TX_INIT_DDIGEST;
+		else
+			conn->write_state = TX_END;
+		break;
+	case TX_PADDING:
+		res = tx_padding(cmnd, ddigest ? TX_INIT_DDIGEST : TX_END);
+		if (res <= 0 || conn->write_state != TX_INIT_DDIGEST)
+			break;
+	case TX_INIT_DDIGEST:
+		cmnd->conn->write_size = sizeof(u32);
+		conn->write_state = TX_DDIGEST;
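+		/* fall through */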
+	case TX_DDIGEST:
+		res = tx_ddigest(cmnd, TX_END);
+		break;
+	default:
+		PRINT_CRIT_ERROR("%d %d %x", res, conn->write_state,
+			cmnd_opcode(cmnd));
+		BUG();
+	}
+
+	if (res == 0)
+		goto out;
+
+	if (conn->write_state != TX_END)
+		goto out;
+
+	if (unlikely(conn->write_size)) {
+		PRINT_CRIT_ERROR("%d %x %u", res, cmnd_opcode(cmnd),
+			conn->write_size);
+		BUG();
+	}
+	cmnd_tx_end(cmnd);
+
+	rsp_cmnd_release(cmnd);
+
+	conn->write_cmnd = NULL;
+	conn->write_state = TX_INIT;
+
+out:
+	return res;
+}
+
+/* No locks, conn is wr processing.
+ *
+ * IMPORTANT! Connection conn must be protected by additional conn_get()
+ * before entering this function, because otherwise it could be destroyed
+ * inside as a result of iscsi_send(), which releases sent commands.
+ */
+static int process_write_queue(struct iscsi_conn *conn)
+{
+	int res = 0;
+
+	if (likely(test_write_ready(conn)))
+		res = iscsi_send(conn);
+	return res;
+}
+
+/*
+ * Called under iscsi_wr_lock and BHs disabled, but will drop it inside,
+ * then reacquire it.
+ */
+static void scst_do_job_wr(void)
+	__acquires(&iscsi_wr_lock)
+	__releases(&iscsi_wr_lock)
+{
+
+	/*
+	 * We delete/add to tail connections to maintain fairness between them.
+	 */
+
+	while (!list_empty(&iscsi_wr_list)) {
+		int rc;
+		struct iscsi_conn *conn = list_entry(iscsi_wr_list.next,
+			typeof(*conn), wr_list_entry);
+
+		TRACE_DBG("conn %p, wr_state %x, wr_space_ready %d, "
+			"write ready %d", conn, conn->wr_state,
+			conn->wr_space_ready, test_write_ready(conn));
+
+		list_del(&conn->wr_list_entry);
+
+		BUG_ON(conn->wr_state == ISCSI_CONN_WR_STATE_PROCESSING);
+
+		conn->wr_state = ISCSI_CONN_WR_STATE_PROCESSING;
+		conn->wr_space_ready = 0;
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->wr_task = current;
+#endif
+		spin_unlock_bh(&iscsi_wr_lock);
+
+		conn_get(conn);
+
+		rc = process_write_queue(conn);
+
+		spin_lock_bh(&iscsi_wr_lock);
+#ifdef CONFIG_SCST_EXTRACHECKS
+		conn->wr_task = NULL;
+#endif
+		if ((rc == -EAGAIN) && !conn->wr_space_ready) {
+			conn->wr_state = ISCSI_CONN_WR_STATE_SPACE_WAIT;
+			goto cont;
+		}
+
+		if (test_write_ready(conn)) {
+			list_add_tail(&conn->wr_list_entry, &iscsi_wr_list);
+			conn->wr_state = ISCSI_CONN_WR_STATE_IN_LIST;
+		} else
+			conn->wr_state = ISCSI_CONN_WR_STATE_IDLE;
+
+cont:
+		conn_put(conn);
+	}
+	return;
+}
+
+static inline int test_wr_list(void)
+{
+	int res = !list_empty(&iscsi_wr_list) ||
+		  unlikely(kthread_should_stop());
+	return res;
+}
+
+int istwr(void *arg)
+{
+
+	PRINT_INFO("Write thread started, PID %d", current->pid);
+
+	current->flags |= PF_NOFREEZE;
+
+	spin_lock_bh(&iscsi_wr_lock);
+	while (!kthread_should_stop()) {
+		wait_queue_t wait;
+		init_waitqueue_entry(&wait, current);
+
+		if (!test_wr_list()) {
+			add_wait_queue_exclusive_head(&iscsi_wr_waitQ, &wait);
+			for (;;) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				if (test_wr_list())
+					break;
+				spin_unlock_bh(&iscsi_wr_lock);
+				schedule();
+				spin_lock_bh(&iscsi_wr_lock);
+			}
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&iscsi_wr_waitQ, &wait);
+		}
+		scst_do_job_wr();
+	}
+	spin_unlock_bh(&iscsi_wr_lock);
+
+	/*
+	 * If kthread_should_stop() is true, we are guaranteed to be
+	 * in the module unload path, so iscsi_wr_list must be empty.
+	 */
+	BUG_ON(!list_empty(&iscsi_wr_list));
+
+	PRINT_INFO("Write thread PID %d finished", current->pid);
+	return 0;
+}
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/param.c linux-2.6.33/drivers/scst/iscsi-scst/param.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/param.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/param.c
@@ -0,0 +1,306 @@
+/*
+ *  Copyright (C) 2005 FUJITA Tomonori <tomof@acm.org>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include "iscsi.h"
+#include "digest.h"
+
+#define	CHECK_PARAM(info, iparams, word, min, max)				\
+do {										\
+	if (!(info)->partial || ((info)->partial & 1 << key_##word)) {		\
+		TRACE_DBG("%s: %u", #word, (iparams)[key_##word]);		\
+		if ((iparams)[key_##word] < (min) ||				\
+			(iparams)[key_##word] > (max)) {			\
+			if ((iparams)[key_##word] < (min)) {			\
+				(iparams)[key_##word] = (min);			\
+				PRINT_WARNING("%s: %u is too small, resetting "	\
+					"it to allowed min %u",			\
+					#word, (iparams)[key_##word], (min));	\
+			} else {						\
+				PRINT_WARNING("%s: %u is too big, resetting "	\
+					"it to allowed max %u",			\
+					#word, (iparams)[key_##word], (max));	\
+				(iparams)[key_##word] = (max);			\
+			}							\
+		}								\
+	}									\
+} while (0)
+
+#define	SET_PARAM(params, info, iparams, word)					\
+({										\
+	int changed = 0;							\
+	if (!(info)->partial || ((info)->partial & 1 << key_##word)) {		\
+		if ((params)->word != (iparams)[key_##word])			\
+			changed = 1;						\
+		(params)->word = (iparams)[key_##word];				\
+		TRACE_DBG("%s set to %u", #word, (params)->word);		\
+	}									\
+	changed;								\
+})
+
+#define	GET_PARAM(params, info, iparams, word)					\
+do {										\
+	(iparams)[key_##word] = (params)->word;					\
+} while (0)
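+
+/*
+ * For example, CHECK_PARAM(info, iparams, max_burst_length, 512, max_len)
+ * clamps iparams[key_max_burst_length] into [512, max_len], and only when
+ * that key is marked in info->partial (info->partial == 0 means all keys).
+ */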
+
+const char *iscsi_get_bool_value(int val)
+{
+	if (val)
+		return "Yes";
+	else
+		return "No";
+}
+
+const char *iscsi_get_digest_name(int val, char *res)
+{
+	int pos = 0;
+
+	if (val & DIGEST_NONE)
+		pos = sprintf(&res[pos], "%s", "None");
+
+	if (val & DIGEST_CRC32C)
+		pos += sprintf(&res[pos], "%s%s", (pos != 0) ? ", " : "",
+			"CRC32C");
+
+	if (pos == 0)
+		sprintf(&res[pos], "%s", "Unknown");
+
+	return res;
+}
+
+static void log_params(struct iscsi_sess_params *params)
+{
+	char digest_name[64];
+
+	PRINT_INFO("Negotiated parameters: InitialR2T %s, ImmediateData %s, "
+		"MaxConnections %d, MaxRecvDataSegmentLength %d, "
+		"MaxXmitDataSegmentLength %d, ",
+		iscsi_get_bool_value(params->initial_r2t),
+		iscsi_get_bool_value(params->immediate_data), params->max_connections,
+		params->max_recv_data_length, params->max_xmit_data_length);
+	PRINT_INFO("    MaxBurstLength %d, FirstBurstLength %d, "
+		"DefaultTime2Wait %d, DefaultTime2Retain %d, ",
+		params->max_burst_length, params->first_burst_length,
+		params->default_wait_time, params->default_retain_time);
+	PRINT_INFO("    MaxOutstandingR2T %d, DataPDUInOrder %s, "
+		"DataSequenceInOrder %s, ErrorRecoveryLevel %d, ",
+		params->max_outstanding_r2t,
+		iscsi_get_bool_value(params->data_pdu_inorder),
+		iscsi_get_bool_value(params->data_sequence_inorder),
+		params->error_recovery_level);
+	PRINT_INFO("    HeaderDigest %s, DataDigest %s, OFMarker %s, "
+		"IFMarker %s, OFMarkInt %d, IFMarkInt %d",
+		iscsi_get_digest_name(params->header_digest, digest_name),
+		iscsi_get_digest_name(params->data_digest, digest_name),
+		iscsi_get_bool_value(params->ofmarker),
+		iscsi_get_bool_value(params->ifmarker),
+		params->ofmarkint, params->ifmarkint);
+}
+
+/* target_mutex supposed to be locked */
+static void sess_params_check(struct iscsi_kern_params_info *info)
+{
+	int32_t *iparams = info->session_params;
+	const int max_len = ISCSI_CONN_IOV_MAX * PAGE_SIZE;
+
+	CHECK_PARAM(info, iparams, initial_r2t, 0, 1);
+	CHECK_PARAM(info, iparams, immediate_data, 0, 1);
+	CHECK_PARAM(info, iparams, max_connections, 1, 1);
+	CHECK_PARAM(info, iparams, max_recv_data_length, 512, max_len);
+	CHECK_PARAM(info, iparams, max_xmit_data_length, 512, max_len);
+	CHECK_PARAM(info, iparams, max_burst_length, 512, max_len);
+	CHECK_PARAM(info, iparams, first_burst_length, 512, max_len);
+	CHECK_PARAM(info, iparams, max_outstanding_r2t, 1, 65535);
+	CHECK_PARAM(info, iparams, error_recovery_level, 0, 0);
+	CHECK_PARAM(info, iparams, data_pdu_inorder, 0, 1);
+	CHECK_PARAM(info, iparams, data_sequence_inorder, 0, 1);
+
+	digest_alg_available(&iparams[key_header_digest]);
+	digest_alg_available(&iparams[key_data_digest]);
+
+	CHECK_PARAM(info, iparams, ofmarker, 0, 0);
+	CHECK_PARAM(info, iparams, ifmarker, 0, 0);
+
+	return;
+}
+
+/* target_mutex supposed to be locked */
+static void sess_params_set(struct iscsi_sess_params *params,
+			   struct iscsi_kern_params_info *info)
+{
+	int32_t *iparams = info->session_params;
+
+	SET_PARAM(params, info, iparams, initial_r2t);
+	SET_PARAM(params, info, iparams, immediate_data);
+	SET_PARAM(params, info, iparams, max_connections);
+	SET_PARAM(params, info, iparams, max_recv_data_length);
+	SET_PARAM(params, info, iparams, max_xmit_data_length);
+	SET_PARAM(params, info, iparams, max_burst_length);
+	SET_PARAM(params, info, iparams, first_burst_length);
+	SET_PARAM(params, info, iparams, default_wait_time);
+	SET_PARAM(params, info, iparams, default_retain_time);
+	SET_PARAM(params, info, iparams, max_outstanding_r2t);
+	SET_PARAM(params, info, iparams, data_pdu_inorder);
+	SET_PARAM(params, info, iparams, data_sequence_inorder);
+	SET_PARAM(params, info, iparams, error_recovery_level);
+	SET_PARAM(params, info, iparams, header_digest);
+	SET_PARAM(params, info, iparams, data_digest);
+	SET_PARAM(params, info, iparams, ofmarker);
+	SET_PARAM(params, info, iparams, ifmarker);
+	SET_PARAM(params, info, iparams, ofmarkint);
+	SET_PARAM(params, info, iparams, ifmarkint);
+	return;
+}
+
+static void sess_params_get(struct iscsi_sess_params *params,
+			   struct iscsi_kern_params_info *info)
+{
+	int32_t *iparams = info->session_params;
+
+	GET_PARAM(params, info, iparams, initial_r2t);
+	GET_PARAM(params, info, iparams, immediate_data);
+	GET_PARAM(params, info, iparams, max_connections);
+	GET_PARAM(params, info, iparams, max_recv_data_length);
+	GET_PARAM(params, info, iparams, max_xmit_data_length);
+	GET_PARAM(params, info, iparams, max_burst_length);
+	GET_PARAM(params, info, iparams, first_burst_length);
+	GET_PARAM(params, info, iparams, default_wait_time);
+	GET_PARAM(params, info, iparams, default_retain_time);
+	GET_PARAM(params, info, iparams, max_outstanding_r2t);
+	GET_PARAM(params, info, iparams, data_pdu_inorder);
+	GET_PARAM(params, info, iparams, data_sequence_inorder);
+	GET_PARAM(params, info, iparams, error_recovery_level);
+	GET_PARAM(params, info, iparams, header_digest);
+	GET_PARAM(params, info, iparams, data_digest);
+	GET_PARAM(params, info, iparams, ofmarker);
+	GET_PARAM(params, info, iparams, ifmarker);
+	GET_PARAM(params, info, iparams, ofmarkint);
+	GET_PARAM(params, info, iparams, ifmarkint);
+	return;
+}
+
+/* target_mutex supposed to be locked */
+static void tgt_params_check(struct iscsi_session *session,
+	struct iscsi_kern_params_info *info)
+{
+	int32_t *iparams = info->target_params;
+
+	CHECK_PARAM(info, iparams, queued_cmnds, MIN_NR_QUEUED_CMNDS,
+		min_t(int, MAX_NR_QUEUED_CMNDS,
+		      scst_get_max_lun_commands(session->scst_sess, NO_SUCH_LUN)));
+	CHECK_PARAM(info, iparams, rsp_timeout, MIN_RSP_TIMEOUT,
+		MAX_RSP_TIMEOUT);
+	CHECK_PARAM(info, iparams, nop_in_interval, MIN_NOP_IN_INTERVAL,
+		MAX_NOP_IN_INTERVAL);
+	return;
+}
+
+/* target_mutex supposed to be locked */
+static int iscsi_tgt_params_set(struct iscsi_session *session,
+		      struct iscsi_kern_params_info *info, int set)
+{
+	struct iscsi_tgt_params *params = &session->tgt_params;
+	int32_t *iparams = info->target_params;
+
+	if (set) {
+		struct iscsi_conn *conn;
+
+		tgt_params_check(session, info);
+
+		SET_PARAM(params, info, iparams, queued_cmnds);
+		SET_PARAM(params, info, iparams, rsp_timeout);
+		SET_PARAM(params, info, iparams, nop_in_interval);
+
+		PRINT_INFO("Target parameters set for session %llx: "
+			"QueuedCommands %d, Response timeout %d, Nop-In "
+			"interval %d", session->sid, params->queued_cmnds,
+			params->rsp_timeout, params->nop_in_interval);
+
+		list_for_each_entry(conn, &session->conn_list,
+					conn_list_entry) {
+			conn->rsp_timeout = session->tgt_params.rsp_timeout * HZ;
+			conn->nop_in_interval = session->tgt_params.nop_in_interval * HZ;
+			spin_lock_bh(&iscsi_rd_lock);
+			if (!conn->closing && (conn->nop_in_interval > 0)) {
+				TRACE_DBG("Schedule Nop-In work for conn %p", conn);
+				schedule_delayed_work(&conn->nop_in_delayed_work,
+					conn->nop_in_interval + ISCSI_ADD_SCHED_TIME);
+			}
+			spin_unlock_bh(&iscsi_rd_lock);
+		}
+	} else {
+		GET_PARAM(params, info, iparams, queued_cmnds);
+		GET_PARAM(params, info, iparams, rsp_timeout);
+		GET_PARAM(params, info, iparams, nop_in_interval);
+	}
+
+	return 0;
+}
+
+/* target_mutex supposed to be locked */
+static int iscsi_sess_params_set(struct iscsi_session *session,
+	struct iscsi_kern_params_info *info, int set)
+{
+	struct iscsi_sess_params *params;
+
+	if (set)
+		sess_params_check(info);
+
+	params = &session->sess_params;
+
+	if (set) {
+		sess_params_set(params, info);
+		log_params(params);
+	} else
+		sess_params_get(params, info);
+
+	return 0;
+}
+
+/* target_mutex supposed to be locked */
+int iscsi_params_set(struct iscsi_target *target,
+	struct iscsi_kern_params_info *info, int set)
+{
+	int err;
+	struct iscsi_session *session;
+
+	if (info->sid == 0) {
+		PRINT_ERROR("sid must not be %d", 0);
+		err = -EINVAL;
+		goto out;
+	}
+
+	session = session_lookup(target, info->sid);
+	if (session == NULL) {
+		PRINT_ERROR("Session for sid %llx not found", info->sid);
+		err = -ENOENT;
+		goto out;
+	}
+
+	if (set && !list_empty(&session->conn_list) &&
+	    (info->params_type != key_target)) {
+		err = -EBUSY;
+		goto out;
+	}
+
+	if (info->params_type == key_session)
+		err = iscsi_sess_params_set(session, info, set);
+	else if (info->params_type == key_target)
+		err = iscsi_tgt_params_set(session, info, set);
+	else
+		err = -EINVAL;
+
+out:
+	return err;
+}
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/session.c linux-2.6.33/drivers/scst/iscsi-scst/session.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/session.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/session.c
@@ -0,0 +1,482 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include "iscsi.h"
+
+/* target_mutex supposed to be locked */
+struct iscsi_session *session_lookup(struct iscsi_target *target, u64 sid)
+{
+	struct iscsi_session *session;
+
+	list_for_each_entry(session, &target->session_list,
+			session_list_entry) {
+		if (session->sid == sid)
+			return session;
+	}
+	return NULL;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int iscsi_session_alloc(struct iscsi_target *target,
+	struct iscsi_kern_session_info *info, struct iscsi_session **result)
+{
+	int err;
+	unsigned int i;
+	struct iscsi_session *session;
+	char *name = NULL;
+
+	session = kzalloc(sizeof(*session), GFP_KERNEL);
+	if (!session)
+		return -ENOMEM;
+
+	session->target = target;
+	session->sid = info->sid;
+	atomic_set(&session->active_cmds, 0);
+	session->exp_cmd_sn = info->exp_cmd_sn;
+
+	session->initiator_name = kstrdup(info->initiator_name, GFP_KERNEL);
+	if (!session->initiator_name) {
+		err = -ENOMEM;
+		goto err;
+	}
+
+	name = info->initiator_name;
+
+	INIT_LIST_HEAD(&session->conn_list);
+	INIT_LIST_HEAD(&session->pending_list);
+
+	spin_lock_init(&session->sn_lock);
+
+	spin_lock_init(&session->cmnd_data_wait_hash_lock);
+	for (i = 0; i < ARRAY_SIZE(session->cmnd_data_wait_hash); i++)
+		INIT_LIST_HEAD(&session->cmnd_data_wait_hash[i]);
+
+	session->next_ttt = 1;
+
+	session->scst_sess = scst_register_session(target->scst_tgt, 0,
+		name, NULL, NULL);
+	if (session->scst_sess == NULL) {
+		PRINT_ERROR("%s", "scst_register_session() failed");
+		err = -ENOMEM;
+		goto err;
+	}
+
+	scst_sess_set_tgt_priv(session->scst_sess, session);
+
+	TRACE_MGMT_DBG("Session %p created: target %p, tid %u, sid %#Lx",
+		session, target, target->tid, info->sid);
+
+	*result = session;
+	return 0;
+
+err:
+	if (session) {
+		kfree(session->initiator_name);
+		kfree(session);
+	}
+	return err;
+}
+
+/* target_mutex supposed to be locked */
+void sess_reinst_finished(struct iscsi_session *sess)
+{
+	struct iscsi_conn *c;
+
+	TRACE_MGMT_DBG("Enabling reinstate successor sess %p", sess);
+
+	BUG_ON(!sess->sess_reinstating);
+
+	list_for_each_entry(c, &sess->conn_list, conn_list_entry) {
+		conn_reinst_finished(c);
+	}
+	sess->sess_reinstating = 0;
+	return;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+int __add_session(struct iscsi_target *target,
+	struct iscsi_kern_session_info *info)
+{
+	struct iscsi_session *new_sess = NULL, *sess, *old_sess;
+	int err = 0, i;
+	union iscsi_sid sid;
+	bool reinstatement = false;
+	struct iscsi_kern_params_info *params_info;
+
+	TRACE_MGMT_DBG("Adding session SID %llx", info->sid);
+
+	err = iscsi_session_alloc(target, info, &new_sess);
+	if (err != 0)
+		goto out;
+
+	mutex_lock(&target->target_mutex);
+
+	sess = session_lookup(target, info->sid);
+	if (sess != NULL) {
+		PRINT_ERROR("Attempt to add session with existing SID %llx",
+			info->sid);
+		err = -EEXIST;
+		goto out_err_unlock;
+	}
+
+	params_info = kmalloc(sizeof(*params_info), GFP_KERNEL);
+	if (params_info == NULL) {
+		PRINT_ERROR("Unable to allocate params info (size %zd)",
+			sizeof(*params_info));
+		err = -ENOMEM;
+		goto out_err_unlock;
+	}
+
+	sid = *(union iscsi_sid *)&info->sid;
+	sid.id.tsih = 0;
+	old_sess = NULL;
+
+	/*
+	 * We need to find the latest session to correctly handle
+	 * multi-reinstatements
+	 */
+	list_for_each_entry_reverse(sess, &target->session_list,
+			session_list_entry) {
+		union iscsi_sid i = *(union iscsi_sid *)&sess->sid;
+		i.id.tsih = 0;
+		if ((sid.id64 == i.id64) &&
+		    (strcmp(info->initiator_name, sess->initiator_name) == 0)) {
+			if (!sess->sess_shutting_down) {
+				/* session reinstatement */
+				old_sess = sess;
+			}
+			break;
+		}
+	}
+	sess = NULL;
+
+	list_add_tail(&new_sess->session_list_entry, &target->session_list);
+
+	memset(params_info, 0, sizeof(*params_info));
+	params_info->tid = target->tid;
+	params_info->sid = info->sid;
+	params_info->params_type = key_session;
+	for (i = 0; i < session_key_last; i++)
+		params_info->session_params[i] = info->session_params[i];
+
+	err = iscsi_params_set(target, params_info, 1);
+	if (err != 0)
+		goto out_del;
+
+	memset(params_info, 0, sizeof(*params_info));
+	params_info->tid = target->tid;
+	params_info->sid = info->sid;
+	params_info->params_type = key_target;
+	for (i = 0; i < target_key_last; i++)
+		params_info->target_params[i] = info->target_params[i];
+
+	err = iscsi_params_set(target, params_info, 1);
+	if (err != 0)
+		goto out_del;
+
+	kfree(params_info);
+	params_info = NULL;
+
+	if (old_sess != NULL) {
+		reinstatement = true;
+
+		TRACE_MGMT_DBG("Reinstating sess %p with SID %llx (old %p, "
+			"SID %llx)", new_sess, new_sess->sid, old_sess,
+			old_sess->sid);
+
+		new_sess->sess_reinstating = 1;
+		old_sess->sess_reinst_successor = new_sess;
+
+		target_del_session(old_sess->target, old_sess, 0);
+	}
+
+	mutex_unlock(&target->target_mutex);
+
+	if (reinstatement) {
+		/*
+		 * Mutex target_mgmt_mutex won't allow to add connections to
+		 * the new session after target_mutex was dropped, so it's safe
+		 * to replace the initial UA without it. We can't do it under
+		 * target_mutex, because otherwise we can establish a
+		 * circular locking dependency between target_mutex and
+		 * scst_mutex in SCST core (iscsi_report_aen() called by
+		 * SCST core under scst_mutex).
+		 */
+		scst_set_initial_UA(new_sess->scst_sess,
+			SCST_LOAD_SENSE(scst_sense_nexus_loss_UA));
+	}
+
+out:
+	return err;
+
+out_del:
+	list_del(&new_sess->session_list_entry);
+	kfree(params_info);
+
+out_err_unlock:
+	mutex_unlock(&target->target_mutex);
+
+	scst_unregister_session(new_sess->scst_sess, 1, NULL);
+	new_sess->scst_sess = NULL;
+
+	mutex_lock(&target->target_mutex);
+	session_free(new_sess, false);
+	mutex_unlock(&target->target_mutex);
+	goto out;
+}
+
+static void __session_free(struct iscsi_session *session)
+{
+	kfree(session->initiator_name);
+	kfree(session);
+}
+
+static void iscsi_unreg_sess_done(struct scst_session *scst_sess)
+{
+	struct iscsi_session *session;
+
+	session = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);
+
+	session->scst_sess = NULL;
+	__session_free(session);
+	return;
+}
+
+/* target_mutex supposed to be locked */
+int session_free(struct iscsi_session *session, bool del)
+{
+	unsigned int i;
+
+	TRACE_MGMT_DBG("Freeing session %p (SID %llx)",
+		session, session->sid);
+
+	BUG_ON(!list_empty(&session->conn_list));
+	if (unlikely(atomic_read(&session->active_cmds) != 0)) {
+		PRINT_CRIT_ERROR("active_cmds not 0 (%d)!!",
+			atomic_read(&session->active_cmds));
+		BUG();
+	}
+
+	for (i = 0; i < ARRAY_SIZE(session->cmnd_data_wait_hash); i++)
+		BUG_ON(!list_empty(&session->cmnd_data_wait_hash[i]));
+
+	if (session->sess_reinst_successor != NULL)
+		sess_reinst_finished(session->sess_reinst_successor);
+
+	if (session->sess_reinstating) {
+		struct iscsi_session *s;
+		TRACE_MGMT_DBG("Freeing being reinstated sess %p", session);
+		list_for_each_entry(s, &session->target->session_list,
+						session_list_entry) {
+			if (s->sess_reinst_successor == session) {
+				s->sess_reinst_successor = NULL;
+				break;
+			}
+		}
+	}
+
+	if (del)
+		list_del(&session->session_list_entry);
+
+	if (session->scst_sess != NULL) {
+		/*
+		 * We must NOT call scst_unregister_session() in the waiting
+		 * mode, since we are under target_mutex. Otherwise we can
+		 * establish a circular locking dependency between target_mutex
+		 * and scst_mutex in SCST core (iscsi_report_aen() called by
+		 * SCST core under scst_mutex).
+		 */
+		scst_unregister_session(session->scst_sess, 0,
+			iscsi_unreg_sess_done);
+	} else
+		__session_free(session);
+
+	return 0;
+}
+
+/* target_mutex supposed to be locked */
+int __del_session(struct iscsi_target *target, u64 sid)
+{
+	struct iscsi_session *session;
+
+	session = session_lookup(target, sid);
+	if (!session)
+		return -ENOENT;
+
+	if (!list_empty(&session->conn_list)) {
+		PRINT_ERROR("%llu still have connections",
+			    (long long unsigned int)session->sid);
+		return -EBUSY;
+	}
+
+	return session_free(session, true);
+}
+
+#define ISCSI_SESS_BOOL_PARAM_ATTR(name, exported_name)				\
+static ssize_t iscsi_sess_show_##name(struct kobject *kobj,			\
+	struct kobj_attribute *attr, char *buf)					\
+{										\
+	int pos;								\
+	struct scst_session *scst_sess;						\
+	struct iscsi_session *sess;						\
+										\
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);		\
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);	\
+										\
+	pos = sprintf(buf, "%s\n",						\
+		iscsi_get_bool_value(sess->sess_params.name));			\
+										\
+	return pos;								\
+}										\
+										\
+static struct kobj_attribute iscsi_sess_attr_##name =				\
+	__ATTR(exported_name, S_IRUGO, iscsi_sess_show_##name, NULL);
+
+#define ISCSI_SESS_INT_PARAM_ATTR(name, exported_name)				\
+static ssize_t iscsi_sess_show_##name(struct kobject *kobj,			\
+	struct kobj_attribute *attr, char *buf)					\
+{										\
+	int pos;								\
+	struct scst_session *scst_sess;						\
+	struct iscsi_session *sess;						\
+										\
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);		\
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);	\
+										\
+	pos = sprintf(buf, "%d\n", sess->sess_params.name);			\
+										\
+	return pos;								\
+}										\
+										\
+static struct kobj_attribute iscsi_sess_attr_##name =				\
+	__ATTR(exported_name, S_IRUGO, iscsi_sess_show_##name, NULL);
+
+#define ISCSI_SESS_DIGEST_PARAM_ATTR(name, exported_name)			\
+static ssize_t iscsi_sess_show_##name(struct kobject *kobj,			\
+	struct kobj_attribute *attr, char *buf)					\
+{										\
+	int pos;								\
+	struct scst_session *scst_sess;						\
+	struct iscsi_session *sess;						\
+	char digest_name[64];							\
+										\
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);		\
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);	\
+										\
+	pos = sprintf(buf, "%s\n", iscsi_get_digest_name(			\
+			sess->sess_params.name, digest_name));			\
+										\
+	return pos;								\
+}										\
+										\
+static struct kobj_attribute iscsi_sess_attr_##name =				\
+	__ATTR(exported_name, S_IRUGO, iscsi_sess_show_##name, NULL);
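+
+/*
+ * For example, ISCSI_SESS_BOOL_PARAM_ATTR(initial_r2t, InitialR2T) below
+ * defines iscsi_sess_show_initial_r2t() and a read-only sysfs attribute
+ * named "InitialR2T" backed by sess->sess_params.initial_r2t.
+ */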
+
+ISCSI_SESS_BOOL_PARAM_ATTR(initial_r2t, InitialR2T);
+ISCSI_SESS_BOOL_PARAM_ATTR(immediate_data, ImmediateData);
+ISCSI_SESS_INT_PARAM_ATTR(max_recv_data_length, MaxRecvDataSegmentLength);
+ISCSI_SESS_INT_PARAM_ATTR(max_xmit_data_length, MaxXmitDataSegmentLength);
+ISCSI_SESS_INT_PARAM_ATTR(max_burst_length, MaxBurstLength);
+ISCSI_SESS_INT_PARAM_ATTR(first_burst_length, FirstBurstLength);
+ISCSI_SESS_INT_PARAM_ATTR(max_outstanding_r2t, MaxOutstandingR2T);
+ISCSI_SESS_DIGEST_PARAM_ATTR(header_digest, HeaderDigest);
+ISCSI_SESS_DIGEST_PARAM_ATTR(data_digest, DataDigest);
+
+static ssize_t iscsi_sess_sid_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct scst_session *scst_sess;
+	struct iscsi_session *sess;
+
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);
+
+	pos = sprintf(buf, "%llx\n", sess->sid);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_attr_sess_sid =
+	__ATTR(sid, S_IRUGO, iscsi_sess_sid_show, NULL);
+
+static ssize_t iscsi_sess_reinstating_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct scst_session *scst_sess;
+	struct iscsi_session *sess;
+
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);
+
+	pos = sprintf(buf, "%d\n", sess->sess_reinstating ? 1 : 0);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_sess_attr_reinstating =
+	__ATTR(reinstating, S_IRUGO, iscsi_sess_reinstating_show, NULL);
+
+static ssize_t iscsi_sess_force_close_store(struct kobject *kobj,
+	struct kobj_attribute *attr, const char *buf, size_t count)
+{
+	int res;
+	struct scst_session *scst_sess;
+	struct iscsi_session *sess;
+	struct iscsi_conn *conn;
+
+	scst_sess = container_of(kobj, struct scst_session, sess_kobj);
+	sess = (struct iscsi_session *)scst_sess_get_tgt_priv(scst_sess);
+
+	if (mutex_lock_interruptible(&sess->target->target_mutex) != 0) {
+		res = -EINTR;
+		goto out;
+	}
+
+	PRINT_INFO("Deleting session %llu with initiator %s (%p)",
+		(long long unsigned int)sess->sid, sess->initiator_name, sess);
+
+	list_for_each_entry(conn, &sess->conn_list, conn_list_entry) {
+		TRACE_MGMT_DBG("Deleting connection with initiator %p", conn);
+		__mark_conn_closed(conn, ISCSI_CONN_ACTIVE_CLOSE|ISCSI_CONN_DELETING);
+	}
+
+	mutex_unlock(&sess->target->target_mutex);
+
+	res = count;
+
+out:
+	return res;
+}
+
+static struct kobj_attribute iscsi_sess_attr_force_close =
+	__ATTR(force_close, S_IWUSR, NULL, iscsi_sess_force_close_store);
+
+const struct attribute *iscsi_sess_attrs[] = {
+	&iscsi_sess_attr_initial_r2t.attr,
+	&iscsi_sess_attr_immediate_data.attr,
+	&iscsi_sess_attr_max_recv_data_length.attr,
+	&iscsi_sess_attr_max_xmit_data_length.attr,
+	&iscsi_sess_attr_max_burst_length.attr,
+	&iscsi_sess_attr_first_burst_length.attr,
+	&iscsi_sess_attr_max_outstanding_r2t.attr,
+	&iscsi_sess_attr_header_digest.attr,
+	&iscsi_sess_attr_data_digest.attr,
+	&iscsi_attr_sess_sid.attr,
+	&iscsi_sess_attr_reinstating.attr,
+	&iscsi_sess_attr_force_close.attr,
+	NULL,
+};
+
diff -uprN orig/linux-2.6.33/drivers/scst/iscsi-scst/target.c linux-2.6.33/drivers/scst/iscsi-scst/target.c
--- orig/linux-2.6.33/drivers/scst/iscsi-scst/target.c
+++ linux-2.6.33/drivers/scst/iscsi-scst/target.c
@@ -0,0 +1,500 @@
+/*
+ *  Copyright (C) 2002 - 2003 Ardis Technolgies <roman@ardistech.com>
+ *  Copyright (C) 2007 - 2010 Vladislav Bolkhovitin
+ *  Copyright (C) 2007 - 2010 ID7 Ltd.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation, version 2
+ *  of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ *  GNU General Public License for more details.
+ */
+
+#include <linux/delay.h>
+
+#include "iscsi.h"
+#include "digest.h"
+
+#define MAX_NR_TARGETS		(1UL << 30)
+
+DEFINE_MUTEX(target_mgmt_mutex);
+
+/* All 3 protected by target_mgmt_mutex */
+static LIST_HEAD(target_list);
+static u32 next_target_id;
+static u32 nr_targets;
+
+/* target_mgmt_mutex supposed to be locked */
+struct iscsi_target *target_lookup_by_id(u32 id)
+{
+	struct iscsi_target *target;
+
+	list_for_each_entry(target, &target_list, target_list_entry) {
+		if (target->tid == id)
+			return target;
+	}
+	return NULL;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static struct iscsi_target *target_lookup_by_name(const char *name)
+{
+	struct iscsi_target *target;
+
+	list_for_each_entry(target, &target_list, target_list_entry) {
+		if (!strcmp(target->name, name))
+			return target;
+	}
+	return NULL;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+static int iscsi_target_create(struct iscsi_kern_target_info *info, u32 tid,
+	struct iscsi_target **out_target)
+{
+	int err = -EINVAL, len;
+	char *name = info->name;
+	struct iscsi_target *target;
+
+	TRACE_MGMT_DBG("Creating target tid %u, name %s", tid, name);
+
+	len = strlen(name);
+	if (!len) {
+		PRINT_ERROR("The length of the target name is zero %u", tid);
+		goto out;
+	}
+
+	if (!try_module_get(THIS_MODULE)) {
+		PRINT_ERROR("Fail to get module %u", tid);
+		goto out;
+	}
+
+	target = kzalloc(sizeof(*target), GFP_KERNEL);
+	if (!target) {
+		err = -ENOMEM;
+		goto out_put;
+	}
+
+	target->tid = info->tid = tid;
+
+	strlcpy(target->name, name, sizeof(target->name));
+
+	mutex_init(&target->target_mutex);
+	INIT_LIST_HEAD(&target->session_list);
+	INIT_LIST_HEAD(&target->attrs_list);
+
+	target->scst_tgt = scst_register_target(&iscsi_template, target->name);
+	if (!target->scst_tgt) {
+		PRINT_ERROR("%s", "scst_register_target() failed");
+		err = -EBUSY;
+		goto out_free;
+	}
+
+	scst_tgt_set_tgt_priv(target->scst_tgt, target);
+
+	list_add_tail(&target->target_list_entry, &target_list);
+
+	*out_target = target;
+
+	return 0;
+
+out_free:
+	kfree(target);
+
+out_put:
+	module_put(THIS_MODULE);
+
+out:
+	return err;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+int __add_target(struct iscsi_kern_target_info *info)
+{
+	int err;
+	u32 tid = info->tid;
+	struct iscsi_target *target;
+	struct iscsi_kern_params_info *params_info;
+	struct iscsi_kern_attr *attr_info;
+	union add_info_union {
+		struct iscsi_kern_params_info params_info;
+		struct iscsi_kern_attr attr_info;
+	} *add_info;
+	int i, rc;
+	unsigned long attrs_ptr_long;
+	struct iscsi_kern_attr __user *attrs_ptr;
+
+	if (nr_targets > MAX_NR_TARGETS) {
+		err = -EBUSY;
+		goto out;
+	}
+
+	if (target_lookup_by_name(info->name)) {
+		PRINT_ERROR("Target %s already exist!", info->name);
+		err = -EEXIST;
+		goto out;
+	}
+
+	if (tid && target_lookup_by_id(tid)) {
+		PRINT_ERROR("Target %u already exist!", tid);
+		err = -EEXIST;
+		goto out;
+	}
+
+	add_info = kmalloc(sizeof(*add_info), GFP_KERNEL);
+	if (add_info == NULL) {
+		PRINT_ERROR("Unable to allocate additional info (size %zd)",
+			sizeof(*add_info));
+		err = -ENOMEM;
+		goto out;
+	}
+	params_info = (struct iscsi_kern_params_info *)add_info;
+	attr_info = (struct iscsi_kern_attr *)add_info;
+
+	if (tid == 0) {
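+		/*
+		 * tid 0 means auto-allocate: pick the next unused non-zero
+		 * id, skipping 0 on counter wraparound.
+		 */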
+		do {
+			if (!++next_target_id)
+				++next_target_id;
+		} while (target_lookup_by_id(next_target_id));
+
+		tid = next_target_id;
+	}
+
+	err = iscsi_target_create(info, tid, &target);
+	if (err != 0)
+		goto out_free;
+
+	nr_targets++;
+
+	mutex_lock(&target->target_mutex);
+
+	attrs_ptr_long = info->attrs_ptr;
+	attrs_ptr = (struct iscsi_kern_attr __user *)attrs_ptr_long;
+	for (i = 0; i < info->attrs_num; i++) {
+		memset(attr_info, 0, sizeof(*attr_info));
+
+		rc = copy_from_user(attr_info, attrs_ptr, sizeof(*attr_info));
+		if (rc != 0) {
+			PRINT_ERROR("Failed to copy users of target %s "
+				"failed", info->name);
+			err = -EFAULT;
+			goto out_del_unlock;
+		}
+
+		attr_info->name[sizeof(attr_info->name)-1] = '\0';
+
+		err = iscsi_add_attr(target, attr_info);
+		if (err != 0)
+			goto out_del_unlock;
+
+		attrs_ptr++;
+	}
+
+	mutex_unlock(&target->target_mutex);
+
+	err = tid;
+
+out_free:
+	kfree(add_info);
+
+out:
+	return err;
+
+out_del_unlock:
+	mutex_unlock(&target->target_mutex);
+	__del_target(tid);
+	goto out_free;
+}
+
+static void target_destroy(struct iscsi_target *target)
+{
+	struct iscsi_attr *attr, *t;
+
+	TRACE_MGMT_DBG("Destroying target tid %u", target->tid);
+
+	list_for_each_entry_safe(attr, t, &target->attrs_list,
+				attrs_list_entry) {
+		__iscsi_del_attr(target, attr);
+	}
+
+	scst_unregister_target(target->scst_tgt);
+
+	kfree(target);
+
+	module_put(THIS_MODULE);
+	return;
+}
+
+/* target_mgmt_mutex supposed to be locked */
+int __del_target(u32 id)
+{
+	struct iscsi_target *target;
+	int err;
+
+	target = target_lookup_by_id(id);
+	if (!target) {
+		err = -ENOENT;
+		goto out;
+	}
+
+	mutex_lock(&target->target_mutex);
+
+	if (!list_empty(&target->session_list)) {
+		err = -EBUSY;
+		goto out_unlock;
+	}
+
+	list_del(&target->target_list_entry);
+	nr_targets--;
+
+	mutex_unlock(&target->target_mutex);
+
+	target_destroy(target);
+	return 0;
+
+out_unlock:
+	mutex_unlock(&target->target_mutex);
+
+out:
+	return err;
+}
+
+/* target_mutex supposed to be locked */
+void target_del_session(struct iscsi_target *target,
+	struct iscsi_session *session, int flags)
+{
+	TRACE_MGMT_DBG("Deleting session %p", session);
+
+	if (!list_empty(&session->conn_list)) {
+		struct iscsi_conn *conn, *tc;
+		list_for_each_entry_safe(conn, tc, &session->conn_list,
+					 conn_list_entry) {
+			TRACE_MGMT_DBG("Mark conn %p closing", conn);
+			__mark_conn_closed(conn, flags);
+		}
+	} else {
+		TRACE_MGMT_DBG("Freeing session %p without connections",
+			       session);
+		__del_session(target, session->sid);
+	}
+}
+
+/* Caller must hold target_mutex */
+void target_del_all_sess(struct iscsi_target *target, int flags)
+{
+	struct iscsi_session *session, *ts;
+
+	if (!list_empty(&target->session_list)) {
+		TRACE_MGMT_DBG("Deleting all sessions from target %p", target);
+		list_for_each_entry_safe(session, ts, &target->session_list,
+						session_list_entry) {
+			target_del_session(target, session, flags);
+		}
+	}
+}
+
+void target_del_all(void)
+{
+	struct iscsi_target *target, *t;
+	bool first = true;
+
+	TRACE_MGMT_DBG("%s", "Deleting all targets");
+
+	/* Not the best, ToDo */
+	while (1) {
+		mutex_lock(&target_mgmt_mutex);
+
+		if (list_empty(&target_list))
+			break;
+
+		/*
+		 * On the first iteration we don't delete any targets;
+		 * instead we walk all sessions of all targets and close
+		 * their connections. Otherwise we could get stuck for a
+		 * noticeable time during a target's unregistration, waiting
+		 * for activity suspension over an active connection. This
+		 * gets especially bad if the connection being waited for is
+		 * itself stuck waiting for something and can be recovered
+		 * only by closing it. For such cases we don't wait for the
+		 * connection to recover by itself, but act in advance.
+		 */
+
+		list_for_each_entry_safe(target, t, &target_list,
+					 target_list_entry) {
+			mutex_lock(&target->target_mutex);
+
+			if (!list_empty(&target->session_list)) {
+				target_del_all_sess(target,
+					ISCSI_CONN_ACTIVE_CLOSE |
+					ISCSI_CONN_DELETING);
+			} else if (!first) {
+				TRACE_MGMT_DBG("Deleting target %p", target);
+				list_del(&target->target_list_entry);
+				nr_targets--;
+				mutex_unlock(&target->target_mutex);
+				target_destroy(target);
+				continue;
+			}
+
+			mutex_unlock(&target->target_mutex);
+		}
+		mutex_unlock(&target_mgmt_mutex);
+		msleep(100);
+
+		first = false;
+	}
+
+	mutex_unlock(&target_mgmt_mutex);
+
+	TRACE_MGMT_DBG("%s", "Deleting all targets finished");
+}
+
+static ssize_t iscsi_tgt_tid_show(struct kobject *kobj,
+	struct kobj_attribute *attr, char *buf)
+{
+	int pos;
+	struct scst_tgt *scst_tgt;
+	struct iscsi_target *tgt;
+
+	scst_tgt = container_of(kobj, struct scst_tgt, tgt_kobj);
+	tgt = (struct iscsi_target *)scst_tgt_get_tgt_priv(scst_tgt);
+
+	pos = sprintf(buf, "%u\n", tgt->tid);
+	return pos;
+}
+
+static struct kobj_attribute iscsi_tgt_attr_tid =
+	__ATTR(tid, S_IRUGO, iscsi_tgt_tid_show, NULL);
+
+const struct attribute *iscsi_tgt_attrs[] = {
+	&iscsi_tgt_attr_tid.attr,
+	NULL,
+};
+
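+/*
+ * Forward a management event to the user space daemon and wait until it
+ * reports completion.
+ */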
+ssize_t iscsi_sysfs_send_event(uint32_t tid, enum iscsi_kern_event_code code,
+	const char *param1, const char *param2, void **data)
+{
+	int res;
+	struct scst_sysfs_user_info *info;
+
+	if (ctr_open_state != ISCSI_CTR_OPEN_STATE_OPEN) {
+		PRINT_ERROR("%s", "User space process not connected");
+		res = -EPERM;
+		goto out;
+	}
+
+	res = scst_sysfs_user_add_info(&info);
+	if (res != 0)
+		goto out;
+
+	TRACE_DBG("Sending event %d (tid %u, param1 %s, param2 %s, cookie %d, "
+		"info %p)", code, tid, param1, param2, info->info_cookie, info);
+
+	res = event_send(tid, 0, 0, info->info_cookie, code, param1, param2);
+	if (res <= 0) {
+		PRINT_ERROR("event_send() failed: %d", res);
+		if (res == 0)
+			res = -EFAULT;
+		goto out_free;
+	}
+
+	/*
+	 * It may block for 30 secs in a connect to an unreachable
+	 * iSNS server. This must be fixed, but not now. ToDo.
+	 */
+	res = scst_wait_info_completion(info, 31 * HZ);
+
+	if (data != NULL)
+		*data = info->data;
+
+out_free:
+	scst_sysfs_user_del_info(info);
+
+out:
+	return res;
+}
+
+int iscsi_enable_target(struct scst_tgt *scst_tgt, bool enable)
+{
+	struct iscsi_target *tgt =
+		(struct iscsi_target *)scst_tgt_get_tgt_priv(scst_tgt);
+	int res;
+	uint32_t type;
+
+	if (enable)
+		type = E_ENABLE_TARGET;
+	else
+		type = E_DISABLE_TARGET;
+
+	TRACE_DBG("%s target %d", enable ? "Enabling" : "Disabling", tgt->tid);
+
+	res = iscsi_sysfs_send_event(tgt->tid, type, NULL, NULL, NULL);
+	return res;
+}
+
+bool iscsi_is_target_enabled(struct scst_tgt *scst_tgt)
+{
+	struct iscsi_target *tgt =
+		(struct iscsi_target *)scst_tgt_get_tgt_priv(scst_tgt);
+
+	return tgt->tgt_enabled;
+}
+
+ssize_t iscsi_sysfs_add_target(const char *target_name, char *params)
+{
+	int res;
+
+	res = iscsi_sysfs_send_event(0, E_ADD_TARGET, target_name,
+			params, NULL);
+	if (res > 0) {
+		/* It's tid */
+		res = 0;
+	}
+	return res;
+}
+
+ssize_t iscsi_sysfs_del_target(const char *target_name)
+{
+	int res = 0, tid;
+
+	/* Scope tgt so it can't be used after the mutex is unlocked */
+	{
+		struct iscsi_target *tgt;
+		mutex_lock(&target_mgmt_mutex);
+		tgt = target_lookup_by_name(target_name);
+		if (tgt == NULL) {
+			PRINT_ERROR("Target %s not found", target_name);
+			mutex_unlock(&target_mgmt_mutex);
+			res = -ENOENT;
+			goto out;
+		}
+		tid = tgt->tid;
+		mutex_unlock(&target_mgmt_mutex);
+	}
+
+	TRACE_DBG("Deleting target %s (tid %d)", target_name, tid);
+
+	res = iscsi_sysfs_send_event(tid, E_DEL_TARGET, NULL, NULL, NULL);
+
+out:
+	return res;
+}
+
+ssize_t iscsi_sysfs_mgmt_cmd(char *cmd)
+{
+	int res;
+
+	TRACE_DBG("Sending mgmt cmd %s", cmd);
+
+	res = iscsi_sysfs_send_event(0, E_MGMT_CMD, cmd, NULL, NULL);
+	return res;
+}
+

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH][RFC 4/4/4/5] iSCSI-SCST's README file
       [not found] ` <4BC45EF7.7010304@vlnb.net>
  2010-04-13 13:10   ` [PATCH][RFC 3/4/4/5] iSCSI-SCST's implementation files Vladislav Bolkhovitin
@ 2010-04-13 13:10   ` Vladislav Bolkhovitin
  1 sibling, 0 replies; 18+ messages in thread
From: Vladislav Bolkhovitin @ 2010-04-13 13:10 UTC (permalink / raw)
  To: linux-scsi
  Cc: linux-kernel, scst-devel, James Bottomley, Andrew Morton,
	FUJITA Tomonori, Mike Christie, Jeff Garzik, Linus Torvalds,
	Bart Van Assche

This patch contains iSCSI-SCST's README file.

Signed-off-by: Vladislav Bolkhovitin <vst@vlnb.net>
---
 README.iscsi |  585 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 585 insertions(+)

diff -uprN orig/linux-2.6.33/Documentation/scst/README.iscsi linux-2.6.33/Documentation/scst/README.iscsi
--- orig/linux-2.6.33/Documentation/scst/README.iscsi
+++ linux-2.6.33/Documentation/scst/README.iscsi
@@ -0,0 +1,585 @@
+iSCSI SCST target driver
+========================
+
+ISCSI-SCST is a deeply reworked fork of iSCSI Enterprise Target (IET)
+(http://iscsitarget.sourceforge.net). The reasons for the fork were:
+
+ - To be able to use the full power of the SCST core.
+
+ - To fix all the problems, corner-case issues and iSCSI standard
+   violations that IET has.
+
+See more info at http://iscsi-scst.sourceforge.net.
+
+Usage
+-----
+
+See http://iscsi-scst.sourceforge.net/iscsi-scst-howto.txt for how to
+configure iSCSI-SCST.
+
+As of 2.0.0, both iscsi-scstd.conf and the iscsi-scst-adm utility are
+obsolete. Use the sysfs interface instead.
+
+In MPIO configurations it is recommended to use the TEST UNIT READY
+("tur") command to check whether an iSCSI-SCST target is alive.
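+
+For instance, from the initiator side (a sketch, assuming sg3_utils is
+installed and the iSCSI LUN appeared there as /dev/sdb):
+
+sg_turs /dev/sdb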
+
+Also see the SCST README file for how to tune for the best performance.
+
+CAUTION: Running the target and an initiator on the same host isn't
+=======  fully supported. See the SCST README file for details.
+
+Sysfs interface
+---------------
+
+The root of the SCST sysfs interface is /sys/kernel/scst_tgt. The root
+of iSCSI-SCST is /sys/kernel/scst_tgt/targets/iscsi. It has the
+following entries:
+
+ - Zero or more subdirectories, one per target, named after the
+   corresponding targets.
+
+ - IncomingUser[num] - one or more optional attributes containing a
+   user name and password for incoming discovery authentication. They
+   do not exist by default and can be added through the "mgmt" entry,
+   see below.
+
+ - OutgoingUser - optional attribute containing the user name and
+   password for outgoing discovery authentication. It does not exist by
+   default and can be added through the "mgmt" entry, see below.
+
+ - iSNSServer - contains the name or IP address of the iSNS server,
+   with an optional "AccessControl" attribute, which enables iSNS
+   access control. Empty by default.
+
+ - enabled - using this attribute you can control whether iSCSI-SCST
+   accepts new connections. It allows you to finish configuring the
+   global iSCSI-SCST attributes before it starts accepting new
+   connections. 0 by default.
+
+ - open_state - read-only attribute showing whether the user space
+   part of iSCSI-SCST is connected to the kernel part.
+
+ - trace_level - allows you to enable and disable various tracing
+   facilities. See the content of this file for help on how to use it.
+
+ - version - read-only attribute showing the version of iSCSI-SCST and
+   the enabled optional features.
+
+ - mgmt - the main management entry, which allows you to configure
+   iSCSI-SCST, namely to add/delete targets as well as add/delete
+   optional global and per-target attributes. See the content of this
+   file for help on how to use it, and the example after this list.
+
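+For example, a discovery CHAP account can be added through the "mgmt"
+entry like this (the user name and password below are placeholders):
+
+echo "add_attribute IncomingUser joe 12charsecret" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+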
+Each iSCSI-SCST sysfs file (attribute) can contain the mark "[key]" in
+its last line. This automatically added mark lets scstadmin see which
+attributes it should save in the config file. You can ignore it.
+
+Each target subdirectory contains the following entries:
+
+ - ini_groups - subdirectory defining initiator groups for this target,
+   used to define per-initiator access control. See SCST core README for
+   more details.
+
+ - luns - subdirectory defining LUNs of this target. See SCST core
+   README for more details.
+
+ - sessions - subdirectory containing the sessions connected to this
+   target.
+
+ - IncomingUser[num] - one or more optional attributes containing a
+   user name and password for incoming authentication. They do not
+   exist by default and can be added through the "mgmt" entry, see
+   above.
+
+ - OutgoingUser - optional attribute containing the user name and
+   password for outgoing authentication. It does not exist by default
+   and can be added through the "mgmt" entry, see above.
+
+ - Entries defining the default iSCSI parameter values used during
+   iSCSI parameter negotiation. Only parameters which can be changed
+   or make sense are listed there.
+
+ - QueuedCommands - defines the maximum number of commands queued to
+   any session of this target. The default is 32 commands.
+
+ - RspTimeout - defines the maximum time in seconds a command can wait
+   for a response from the initiator, otherwise the corresponding
+   connection will be closed. For performance reasons it is implemented
+   as a timer which, once per RspTimeout interval, checks the oldest
+   command waiting for a response and, if it is older than RspTimeout,
+   closes the connection. Hence, a stalled connection will be closed
+   between RspTimeout and 2*RspTimeout after it stalls. The default is
+   30 seconds (see the example after this list).
+
+ - NopInInterval - defines the interval between NOP-In requests, which
+   the target sends on idle connections to check whether the initiator
+   is still alive. If there is no NOP-Out reply from the initiator
+   within RspTimeout, the corresponding connection will be closed. The
+   default is 30 seconds. If set to 0, NOP-In requests are disabled.
+
+ - enabled - using this attribute you can control whether this target
+   accepts new connections. It allows you to finish configuring the
+   target before it starts accepting new connections. 0 by default.
+
+ - rel_tgt_id - allows you to read or write the SCSI Relative Target
+   Port Identifier attribute. This identifier is used to identify SCSI
+   target ports by some SCSI commands, mainly by Persistent
+   Reservations commands. It must be unique among all SCST targets,
+   but for convenience SCST allows disabled targets to have a
+   non-unique rel_tgt_id. In that case SCST will not allow the target
+   to be enabled until its rel_tgt_id becomes unique. By default SCST
+   initializes this attribute to a unique value.
+
+ - tid - TID of this target.
+
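+For illustration, these parameter entries are read and written like any
+other sysfs attribute (a sketch with a hypothetical target name;
+RspTimeout and NopInInterval are in seconds):
+
+cat /sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/RspTimeout
+echo "90" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/RspTimeout
+echo "60" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/NopInInterval
+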
+Subdirectory "sessions" contains one subdirectory for each connected
+session with name equal to name of the connected initiator.
+
+Each session subdirectory contains the following entries:
+
+ - One subdirectory for each TCP connection in this session. ISCSI-SCST
+   supports 1 connection per session, but the session subdirectory can
+   contain several connections: one active and the others being closed.
+
+ - Entries defining negotiated iSCSI parameters. Only parameters which
+   can be changed or make sense are listed there.
+
+ - initiator_name - contains the initiator name.
+
+ - sid - contains the SID of this session.
+
+ - reinstating - contains the reinstatement state of this session.
+
+ - force_close - write-only attribute, which allows you to force-close
+   this session. This is the only writable session attribute (see the
+   example after this list).
+
+ - active_commands - contains the number of active SCSI commands in
+   this session, i.e. commands not yet executed or still being
+   executed.
+
+ - commands - contains the overall number of SCSI commands in this
+   session.
+
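+For instance, a session can be force-closed like this (a sketch with
+hypothetical target and initiator names; writing 1 to force_close is
+assumed to trigger the close):
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/sessions/iqn.2005-03.org.open-iscsi:cacdcd2520/force_close
+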
+Each connection subdirectory contains the following entries:
+
+ - cid - contains the CID of this connection.
+
+ - ip - contains the IP address of the connected initiator.
+
+ - state - contains the processing state of this connection.
+
+Below is a sample script, which configures 1 virtual disk "disk1" using
+the /disk1 image and one target iqn.2006-10.net.vlnb:tgt with all
+default parameters:
+
+#!/bin/bash
+
+modprobe scst
+modprobe scst_vdisk
+
+echo "add_device disk1 filename=/disk1; nv_cache=1" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/mgmt
+
+service iscsi-scst start
+
+echo "add_target iqn.2006-10.net.vlnb:tgt" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "add disk1 0" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/mgmt
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/enabled
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/enabled
+
+Below is a more advanced sample script, which configures more virtual
+devices of various types, including a virtual CDROM, and 2 targets: one
+with all default parameters, the other with some non-default
+parameters, incoming and outgoing user names for CHAP authentication,
+and special permissions for initiator iqn.2005-03.org.open-iscsi:cacdcd2520,
+which will see a different set of devices. This sample also configures
+CHAP authentication for discovery sessions and an iSNS server with
+access control.
+
+#!/bin/bash
+
+modprobe scst
+modprobe scst_vdisk
+
+echo "add_device disk1 filename=/disk1; nv_cache=1" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/mgmt
+echo "add_device disk2 filename=/disk2; blocksize=4096; nv_cache=1" >/sys/kernel/scst_tgt/handlers/vdisk_fileio/mgmt
+echo "add_device blockio filename=/dev/sda5" >/sys/kernel/scst_tgt/handlers/vdisk_blockio/mgmt
+echo "add_device nullio" >/sys/kernel/scst_tgt/handlers/vdisk_nullio/mgmt
+echo "add_device cdrom" >/sys/kernel/scst_tgt/handlers/vcdrom/mgmt
+
+service iscsi-scst start
+
+echo "192.168.1.16 AccessControl" >/sys/kernel/scst_tgt/targets/iscsi/iSNSServer
+echo "add_attribute IncomingUser joeD 12charsecret" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "add_attribute OutgoingUser jackD 12charsecret1" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+
+echo "add_target iqn.2006-10.net.vlnb:tgt" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+
+echo "add disk1 0" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/mgmt
+echo "add cdrom 1" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/mgmt
+
+echo "add_target iqn.2006-10.net.vlnb:tgt1" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "add_target_attribute iqn.2006-10.net.vlnb:tgt1 IncomingUser1 joe2 12charsecret2" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "add_target_attribute iqn.2006-10.net.vlnb:tgt1 IncomingUser joe 12charsecret" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "add_target_attribute iqn.2006-10.net.vlnb:tgt1 OutgoingUser jim1 12charpasswd" >/sys/kernel/scst_tgt/targets/iscsi/mgmt
+echo "No" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/InitialR2T
+echo "Yes" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ImmediateData
+echo "8192" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/MaxRecvDataSegmentLength
+echo "8192" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/MaxXmitDataSegmentLength
+echo "131072" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/MaxBurstLength
+echo "32768" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/FirstBurstLength
+echo "1" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/MaxOutstandingR2T
+echo "CRC32C,None" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/HeaderDigest
+echo "CRC32C,None" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/DataDigest
+echo "32" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/QueuedCommands
+
+echo "add disk2 0" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/luns/mgmt
+echo "add nullio 26" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/luns/mgmt
+
+echo "create special_ini" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/mgmt
+echo "add blockio 0 read_only=1" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/special_ini/luns/mgmt
+echo "add iqn.2005-03.org.open-iscsi:cacdcd2520" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/special_ini/initiators/mgmt
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/enabled
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt1/enabled
+
+echo 1 >/sys/kernel/scst_tgt/targets/iscsi/enabled
+
+The resulting overall SCST sysfs hierarchy with an initiator connected to
+both iSCSI-SCST targets will look like:
+
+/sys/kernel/scst_tgt
+|-- devices
+|   |-- blockio
+|   |   |-- blocksize
+|   |   |-- exported
+|   |   |   `-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/ini_groups/special_ini/luns/0
+|   |   |-- filename
+|   |   |-- handler -> ../../handlers/vdisk_blockio
+|   |   |-- read_only
+|   |   |-- removable
+|   |   |-- resync_size
+|   |   |-- size_mb
+|   |   |-- t10_dev_id
+|   |   |-- threads_num
+|   |   |-- threads_pool_type
+|   |   |-- type
+|   |   `-- usn
+|   |-- cdrom
+|   |   |-- exported
+|   |   |   `-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/1
+|   |   |-- filename
+|   |   |-- handler -> ../../handlers/vcdrom
+|   |   |-- size_mb
+|   |   |-- t10_dev_id
+|   |   |-- threads_num
+|   |   |-- threads_pool_type
+|   |   |-- type
+|   |   `-- usn
+|   |-- disk1
+|   |   |-- blocksize
+|   |   |-- exported
+|   |   |   `-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt/luns/0
+|   |   |-- filename
+|   |   |-- handler -> ../../handlers/vdisk_fileio
+|   |   |-- nv_cache
+|   |   |-- o_direct
+|   |   |-- read_only
+|   |   |-- removable
+|   |   |-- resync_size
+|   |   |-- size_mb
+|   |   |-- t10_dev_id
+|   |   |-- type
+|   |   |-- usn
+|   |   `-- write_through
+|   |-- disk2
+|   |   |-- blocksize
+|   |   |-- exported
+|   |   |   `-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/luns/0
+|   |   |-- filename
+|   |   |-- handler -> ../../handlers/vdisk_fileio
+|   |   |-- nv_cache
+|   |   |-- o_direct
+|   |   |-- read_only
+|   |   |-- removable
+|   |   |-- resync_size
+|   |   |-- size_mb
+|   |   |-- t10_dev_id
+|   |   |-- threads_num
+|   |   |-- threads_pool_type
+|   |   |-- type
+|   |   |-- usn
+|   |   `-- write_through
+|   `-- nullio
+|       |-- blocksize
+|       |-- exported
+|       |   `-- export0 -> ../../../targets/iscsi/iqn.2006-10.net.vlnb:tgt1/luns/26
+|       |-- handler -> ../../handlers/vdisk_nullio
+|       |-- read_only
+|       |-- removable
+|       |-- size_mb
+|       |-- t10_dev_id
+|       |-- threads_num
+|       |-- threads_pool_type
+|       |-- type
+|       `-- usn
+|-- handlers
+|   |-- vcdrom
+|   |   |-- cdrom -> ../../devices/cdrom
+|   |   |-- mgmt
+|   |   |-- trace_level
+|   |   `-- type
+|   |-- vdisk_blockio
+|   |   |-- blockio -> ../../devices/blockio
+|   |   |-- mgmt
+|   |   |-- trace_level
+|   |   `-- type
+|   |-- vdisk_fileio
+|   |   |-- disk1 -> ../../devices/disk1
+|   |   |-- disk2 -> ../../devices/disk2
+|   |   |-- mgmt
+|   |   |-- trace_level
+|   |   `-- type
+|   `-- vdisk_nullio
+|       |-- mgmt
+|       |-- nullio -> ../../devices/nullio
+|       |-- trace_level
+|       `-- type
+|-- sgv
+|   |-- global_stats
+|   |-- sgv
+|   |   `-- stats
+|   |-- sgv-clust
+|   |   `-- stats
+|   `-- sgv-dma
+|       `-- stats
+|-- targets
+|   `-- iscsi
+|       |-- IncomingUser
+|       |-- OutgoingUser
+|       |-- enabled
+|       |-- iSNSServer
+|       |-- iqn.2006-10.net.vlnb:tgt
+|       |   |-- DataDigest
+|       |   |-- FirstBurstLength
+|       |   |-- HeaderDigest
+|       |   |-- ImmediateData
+|       |   |-- InitialR2T
+|       |   |-- MaxBurstLength
+|       |   |-- MaxOutstandingR2T
+|       |   |-- MaxRecvDataSegmentLength
+|       |   |-- MaxXmitDataSegmentLength
+|       |   |-- NopInInterval
+|       |   |-- QueuedCommands
+|       |   |-- RspTimeout
+|       |   |-- enabled
+|       |   |-- ini_groups
+|       |   |   `-- mgmt
+|       |   |-- luns
+|       |   |   |-- 0
+|       |   |   |   |-- device -> ../../../../../devices/disk1
+|       |   |   |   `-- read_only
+|       |   |   |-- 1
+|       |   |   |   |-- device -> ../../../../../devices/cdrom
+|       |   |   |   `-- read_only
+|       |   |   `-- mgmt
+|       |   |-- rel_tgt_id
+|       |   |-- sessions
+|       |   |   `-- iqn.2005-03.org.open-iscsi:cacdcd2520
+|       |   |       |-- 10.170.75.2
+|       |   |       |   |-- cid
+|       |   |       |   |-- ip
+|       |   |       |   `-- state
+|       |   |       |-- DataDigest
+|       |   |       |-- FirstBurstLength
+|       |   |       |-- HeaderDigest
+|       |   |       |-- ImmediateData
+|       |   |       |-- InitialR2T
+|       |   |       |-- MaxBurstLength
+|       |   |       |-- MaxOutstandingR2T
+|       |   |       |-- MaxRecvDataSegmentLength
+|       |   |       |-- MaxXmitDataSegmentLength
+|       |   |       |-- active_commands
+|       |   |       |-- commands
+|       |   |       |-- force_close
+|       |   |       |-- initiator_name
+|       |   |       |-- luns -> ../../luns
+|       |   |       |-- reinstating
+|       |   |       `-- sid
+|       |   `-- tid
+|       |-- iqn.2006-10.net.vlnb:tgt1
+|       |   |-- DataDigest
+|       |   |-- FirstBurstLength
+|       |   |-- HeaderDigest
+|       |   |-- ImmediateData
+|       |   |-- IncomingUser
+|       |   |-- IncomingUser1
+|       |   |-- InitialR2T
+|       |   |-- MaxBurstLength
+|       |   |-- MaxOutstandingR2T
+|       |   |-- MaxRecvDataSegmentLength
+|       |   |-- MaxXmitDataSegmentLength
+|       |   |-- NopInInterval
+|       |   |-- OutgoingUser
+|       |   |-- QueuedCommands
+|       |   |-- RspTimeout
+|       |   |-- enabled
+|       |   |-- ini_groups
+|       |   |   |-- mgmt
+|       |   |   `-- special_ini
+|       |   |       |-- initiators
+|       |   |       |   |-- iqn.2005-03.org.open-iscsi:cacdcd2520
+|       |   |       |   `-- mgmt
+|       |   |       `-- luns
+|       |   |           |-- 0
+|       |   |           |   |-- device -> ../../../../../../../devices/blockio
+|       |   |           |   `-- read_only
+|       |   |           `-- mgmt
+|       |   |-- luns
+|       |   |   |-- 0
+|       |   |   |   |-- device -> ../../../../../devices/disk2
+|       |   |   |   `-- read_only
+|       |   |   |-- 26
+|       |   |   |   |-- device -> ../../../../../devices/nullio
+|       |   |   |   `-- read_only
+|       |   |   `-- mgmt
+|       |   |-- rel_tgt_id
+|       |   |-- sessions
+|       |   |   `-- iqn.2005-03.org.open-iscsi:cacdcd2520
+|       |   |       |-- 10.170.75.2
+|       |   |       |   |-- cid
+|       |   |       |   |-- ip
+|       |   |       |   `-- state
+|       |   |       |-- DataDigest
+|       |   |       |-- FirstBurstLength
+|       |   |       |-- HeaderDigest
+|       |   |       |-- ImmediateData
+|       |   |       |-- InitialR2T
+|       |   |       |-- MaxBurstLength
+|       |   |       |-- MaxOutstandingR2T
+|       |   |       |-- MaxRecvDataSegmentLength
+|       |   |       |-- MaxXmitDataSegmentLength
+|       |   |       |-- active_commands
+|       |   |       |-- commands
+|       |   |       |-- force_close
+|       |   |       |-- initiator_name
+|       |   |       |-- luns -> ../../ini_groups/special_ini/luns
+|       |   |       |-- reinstating
+|       |   |       `-- sid
+|       |   `-- tid
+|       |-- mgmt
+|       |-- open_state
+|       |-- trace_level
+|       `-- version
+|-- threads
+|-- trace_level
+`-- version
+
+Troubleshooting
+---------------
+
+If you have any problems, start troubleshooting by looking at the
+kernel and system logs. ISCSI-SCST and the SCST core send their
+messages to the kernel log; iscsi-scstd sends its messages to the
+system log. In most Linux distributions both logs go to the
+/var/log/messages file.
+
+Then it might be helpful to increase the level of logging. For the
+kernel modules you should make a debug build by enabling
+CONFIG_SCST_DEBUG.
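+
+For example, you can see which tracing facilities exist and which are
+currently enabled by reading the corresponding trace_level attribute:
+
+cat /sys/kernel/scst_tgt/trace_level
+cat /sys/kernel/scst_tgt/targets/iscsi/trace_level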
+
+If after looking at the logs the reason for your problem is still
+unclear, report it to the SCST mailing list
+scst-devel@lists.sourceforge.net.
+
+What to do if the target's backstorage or link is too slow
+----------------------------------------------------------
+
+In some cases you can experience I/O stalls or see abort or reset
+messages in the kernel log. This can happen under high I/O load, when
+your target's backstorage gets overloaded, or when working over a slow
+link that can't serve all the queued commands in time.
+
+To work around this you can reduce the QueuedCommands parameter for the
+corresponding target to some lower value, such as 8 (the default is
+32).
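+
+For example (a sketch with a hypothetical target name):
+
+echo "8" >/sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/QueuedCommands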
+
+Also see the SCST README file for more details about this issue and
+ways to prevent it.
+
+Performance advice
+------------------
+
+1. If you use Windows XP or Windows 2003+ as initiators, you should
+consider decreasing the TcpAckFrequency parameter to 1. See
+http://support.microsoft.com/kb/328890/ or search for "TcpAckFrequency"
+for more details.
+
+2. See how to get the maximum throughput from iSCSI, for instance, at
+http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html.
+It's about VMware, but its recommendations apply to other environments
+as well.
+
+3. The iSCSI initiators shipped before CentOS/RHEL 5 are reported to
+have some performance problems. If you use one of them, upgrading is
+strongly advised.
+
+4. If you are going to use your target in a VM environment, for
+instance as shared storage with VMware, make sure all your VMs connect
+to the target via *separate* sessions, i.e. each VM has its own
+connection to the target, rather than all VMs sharing a single
+connection. You can check this using the SCST sysfs interface (see the
+sketch after this list). If you get this wrong, you can greatly lose
+performance of parallel access to your target from different VMs. This
+doesn't apply when your VMs use the same shared storage, as with VMFS,
+for instance; in that case all your VM hosts will be connected to the
+target via separate sessions, which is enough.
+
+5. Many dual-port network adapters are unable to transfer data
+simultaneously on both ports, i.e. they transfer data via both ports at
+the same speed as via any single port. Thus, using such adapters in an
+MPIO configuration can't improve performance. To let MPIO double the
+performance you should either use separate network adapters, or find a
+dual-port adapter capable of transferring data simultaneously on both
+ports. You can check this by running 2 iperf instances through both
+ports in parallel (see the sketch after this list).
+
+6. Since network offload works much better in the write direction than
+for reading (simplifying, in the read direction there is often an
+additional data copy), in many cases with 10GbE in a single
+initiator-target pair the initiator's CPU is the bottleneck, so you may
+see the initiator read data at a much slower rate than it writes. You
+can check this by watching the load of *each particular* CPU to find
+out whether any of them is close to 100%, including IRQ processing
+load. Note that many tools like vmstat report the aggregate load over
+all CPUs, so with 4 cores an aggregate of 25% corresponds to 100% load
+on a single CPU.
+
+7. See the SCST core's README for more advice. In particular, pay
+attention to having the io_grouping_type option set correctly.
+
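+As a rough illustration of the checks mentioned in items 4 and 5 above
+(the target name and IP addresses below are examples, not part of any
+real setup):
+
+# One session per VM/initiator is expected here:
+ls /sys/kernel/scst_tgt/targets/iscsi/iqn.2006-10.net.vlnb:tgt/sessions/
+
+# Run an iperf server on each target port, then from the initiator:
+iperf -c 192.168.1.1 -t 30 &
+iperf -c 192.168.2.1 -t 30 &
+wait
+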
+Compilation options
+-------------------
+
+The following compilation options can be commented in/out in the
+kernel module's Makefile (a sketch follows the list):
+
+ - CONFIG_SCST_DEBUG - turns on some debugging code, including some
+   logging. Makes the driver considerably bigger and slower, producing a
+   large amount of log data.
+
+ - CONFIG_SCST_TRACING - turns on the ability to log events. Makes the
+   driver considerably bigger and leads to some performance loss.
+
+ - CONFIG_SCST_EXTRACHECKS - adds extra validity checks in various
+   places.
+
+ - CONFIG_SCST_ISCSI_DEBUG_DIGEST_FAILURES - simulates digest failures in
+   random places.
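+
+A hypothetical sketch of how such a Makefile fragment can look (the
+exact variable names and layout in the real Makefile may differ);
+uncomment a line to enable the corresponding option:
+
+#EXTRA_CFLAGS += -DCONFIG_SCST_DEBUG
+#EXTRA_CFLAGS += -DCONFIG_SCST_TRACING
+#EXTRA_CFLAGS += -DCONFIG_SCST_EXTRACHECKS
+#EXTRA_CFLAGS += -DCONFIG_SCST_ISCSI_DEBUG_DIGEST_FAILURES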
+
+Credits
+-------
+
+Thanks to:
+
+ * Ming Zhang <blackmagic02881@gmail.com> for fixes
+
+ * Krzysztof Blaszkowski <kb@sysmikro.com.pl> for many fixes
+
+ * Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> for comments and help in
+   debugging
+
+ * Tomasz Chmielewski <mangoo@wpkg.org> for testing and suggestions
+
+ * Bart Van Assche <bart.vanassche@gmail.com> for a lot of help
+
+Vladislav Bolkhovitin <vst@vlnb.net>, http://scst.sourceforge.net


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2010-04-13 13:11 UTC | newest]

Thread overview: 18+ messages
     [not found] <4BC44A49.7070307@vlnb.net>
2010-04-13 13:04 ` [PATCH][RFC 0/12/1/5] SCST core Vladislav Bolkhovitin
     [not found] ` <4BC44D08.4060907@vlnb.net>
2010-04-13 13:04   ` [PATCH][RFC 1/12/1/5] SCST core's Makefile and Kconfig Vladislav Bolkhovitin
2010-04-13 13:04   ` [PATCH][RFC 2/12/1/5] SCST core's external headers Vladislav Bolkhovitin
2010-04-13 13:04   ` [PATCH][RFC 3/12/1/5] SCST core's scst_main.c Vladislav Bolkhovitin
2010-04-13 13:05   ` [PATCH][RFC 4/12/1/5] SCST core's scst_targ.c Vladislav Bolkhovitin
2010-04-13 13:05   ` [PATCH][RFC 5/12/1/5] SCST core's scst_lib.c Vladislav Bolkhovitin
2010-04-13 13:06   ` [PATCH][RFC 6/12/1/5] SCST core's private header Vladislav Bolkhovitin
2010-04-13 13:06   ` [PATCH][RFC 7/12/1/5] SCST SGV cache Vladislav Bolkhovitin
2010-04-13 13:06   ` [PATCH][RFC 8/12/1/5] SCST sysfs interface Vladislav Bolkhovitin
2010-04-13 13:06   ` [PATCH][RFC 9/12/1/5] SCST debugging support Vladislav Bolkhovitin
2010-04-13 13:06   ` [PATCH][RFC 10/12/1/5] SCST external modules support Vladislav Bolkhovitin
     [not found] ` <4BC45841.2090707@vlnb.net>
2010-04-13 13:07   ` [PATCH][RFC 11/12/1/5] SCST core's README Vladislav Bolkhovitin
2010-04-13 13:07   ` [PATCH][RFC 12/12/1/5] SCST sysfs rules Vladislav Bolkhovitin
2010-04-13 13:09 ` [PATCH][RFC 0/4/4/5] iSCSI-SCST Vladislav Bolkhovitin
     [not found] ` <4BC45AFA.7060403@vlnb.net>
2010-04-13 13:10   ` [PATCH][RFC 1/4/4/5] iSCSI-SCST's Makefile and Kconfig Vladislav Bolkhovitin
     [not found] ` <4BC45E87.6000901@vlnb.net>
2010-04-13 13:10   ` [PATCH][RFC 2/4/4/5] iSCSI-SCST's header files Vladislav Bolkhovitin
     [not found] ` <4BC45EF7.7010304@vlnb.net>
2010-04-13 13:10   ` [PATCH][RFC 3/4/4/5] iSCSI-SCST's implementation files Vladislav Bolkhovitin
2010-04-13 13:10   ` [PATCH][RFC 4/4/4/5] iSCSI-SCST's README file Vladislav Bolkhovitin
