Discussion:
[PATCH RESEND 2/2] target: Implement report lun data change unit attention.
Saurav Kashyap
2014-09-25 10:22:29 UTC
Signed-off-by: Saurav Kashyap <***@qlogic.com>
Signed-off-by: Giridhar Malavali <***@qlogic.com>
---
drivers/target/target_core_fabric_configfs.c | 28 ++++++++++++++++++++++++++
1 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c
index 7de9f04..f97866e 100644
--- a/drivers/target/target_core_fabric_configfs.c
+++ b/drivers/target/target_core_fabric_configfs.c
@@ -42,6 +42,7 @@
#include "target_core_internal.h"
#include "target_core_alua.h"
#include "target_core_pr.h"
+#include "target_core_ua.h"

#define TF_CIT_SETUP(_name, _item_ops, _group_ops, _attrs) \
static void target_fabric_setup_##_name##_cit(struct target_fabric_configfs *tf) \
@@ -326,6 +327,8 @@ static struct config_group *target_fabric_make_mappedlun(
char *buf;
unsigned long mapped_lun;
int ret = 0;
+ struct se_lun *lun;
+ int i = 0;

acl_ci = &group->cg_item;
if (!acl_ci) {
@@ -400,6 +403,16 @@ static struct config_group *target_fabric_make_mappedlun(
goto out;
}
target_stat_setup_mappedlun_default_groups(lacl);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ for (i = 0; i < TRANSPORT_MAX_LUNS_PER_TPG; i++) {
+ lun = se_tpg->tpg_lun_list[i];
+ if (lun->lun_status != TRANSPORT_LUN_STATUS_ACTIVE)
+ continue;
+ spin_unlock(&se_tpg->tpg_lun_lock);
+ core_scsi3_ua_allocate(se_nacl, lun->unpacked_lun, 0x3f, 0xe);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ }
+ spin_unlock(&se_tpg->tpg_lun_lock);

kfree(buf);
return &lacl->se_lun_group;
@@ -419,6 +432,8 @@ static void target_fabric_drop_mappedlun(
struct config_item *df_item;
struct config_group *lacl_cg = NULL, *ml_stat_grp = NULL;
int i;
+ struct se_lun *lun;
+ struct se_portal_group *se_tpg = lacl->se_lun_nacl->se_tpg;

ml_stat_grp = &lacl->ml_stat_grps.stat_group;
for (i = 0; ml_stat_grp->default_groups[i]; i++) {
@@ -437,6 +452,19 @@ static void target_fabric_drop_mappedlun(
kfree(lacl_cg->default_groups);

config_item_put(item);
+
+ spin_lock(&se_tpg->tpg_lun_lock);
+ for (i = 0; i < TRANSPORT_MAX_LUNS_PER_TPG; i++) {
+ lun = se_tpg->tpg_lun_list[i];
+ if (lun->lun_status != TRANSPORT_LUN_STATUS_ACTIVE)
+ continue;
+ spin_unlock(&se_tpg->tpg_lun_lock);
+ core_scsi3_ua_allocate(lacl->se_lun_nacl, lun->unpacked_lun,
+ 0x3f, 0xe);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ }
+ spin_unlock(&se_tpg->tpg_lun_lock);
+
}

static void target_fabric_nacl_base_release(struct config_item *item)
--
1.7.7

Saurav Kashyap
2014-09-25 10:22:28 UTC
From: Quinn Tran <***@qlogic.com>

During temporary resource starvation at the lower transport layer, the
command is placed on the queue-full retry path, which exposes this
problem. The TCM qfull handling sends the same cmd twice to the lower
layer. The first time leads down the normal cmd free path; the second
time causes a NULL pointer access.

Signed-off-by: Quinn Tran <***@qlogic.com>
Signed-off-by: Saurav Kashyap <***@qlogic.com>
---
drivers/target/target_core_transport.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 7fa62fc..ab61014 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1877,8 +1877,7 @@ static void transport_complete_qf(struct se_cmd *cmd)
if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) {
trace_target_cmd_complete(cmd);
ret = cmd->se_tfo->queue_status(cmd);
- if (ret)
- goto out;
+ goto out;
}

switch (cmd->data_direction) {
--
1.7.7
Nicholas A. Bellinger
2014-10-01 21:24:34 UTC
Hi Saurav,

Apologies for the delayed response, comments are below.
Post by Saurav Kashyap
During temporary resource starvation at the lower transport layer, the
command is placed on the queue-full retry path, which exposes this
problem. The TCM qfull handling sends the same cmd twice to the lower
layer. The first time leads down the normal cmd free path; the second
time causes a NULL pointer access.
---
drivers/target/target_core_transport.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 7fa62fc..ab61014 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1877,8 +1877,7 @@ static void transport_complete_qf(struct se_cmd *cmd)
if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) {
trace_target_cmd_complete(cmd);
ret = cmd->se_tfo->queue_status(cmd);
- if (ret)
- goto out;
+ goto out;
}
switch (cmd->data_direction) {
Applied to target-pending/for-next with a CC to v3.1 stable, as the bug
was introduced as a regression in:

commit e057f53308a5f071556ee80586b99ee755bf07f5
Author: Christoph Hellwig <***@infradead.org>
Date: Mon Oct 17 13:56:41 2011 -0400

Thank you,

--nab





Nicholas A. Bellinger
2014-10-02 00:00:00 UTC
Hi Saurav & Co,

Comments are inline below.
Post by Saurav Kashyap
---
drivers/target/target_core_fabric_configfs.c | 28 ++++++++++++++++++++++++++
1 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c
index 7de9f04..f97866e 100644
--- a/drivers/target/target_core_fabric_configfs.c
+++ b/drivers/target/target_core_fabric_configfs.c
@@ -42,6 +42,7 @@
#include "target_core_internal.h"
#include "target_core_alua.h"
#include "target_core_pr.h"
+#include "target_core_ua.h"
#define TF_CIT_SETUP(_name, _item_ops, _group_ops, _attrs) \
static void target_fabric_setup_##_name##_cit(struct target_fabric_configfs *tf) \
@@ -326,6 +327,8 @@ static struct config_group *target_fabric_make_mappedlun(
char *buf;
unsigned long mapped_lun;
int ret = 0;
+ struct se_lun *lun;
+ int i = 0;
acl_ci = &group->cg_item;
if (!acl_ci) {
@@ -400,6 +403,16 @@ static struct config_group *target_fabric_make_mappedlun(
goto out;
}
target_stat_setup_mappedlun_default_groups(lacl);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ for (i = 0; i < TRANSPORT_MAX_LUNS_PER_TPG; i++) {
+ lun = se_tpg->tpg_lun_list[i];
+ if (lun->lun_status != TRANSPORT_LUN_STATUS_ACTIVE)
+ continue;
+ spin_unlock(&se_tpg->tpg_lun_lock);
+ core_scsi3_ua_allocate(se_nacl, lun->unpacked_lun, 0x3f, 0xe);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ }
+ spin_unlock(&se_tpg->tpg_lun_lock);
This should be walking se_nacl->device_list[] to generate UAs for the
specific NodeACL's existing MappedLUN layout (se_dev_entry->mapped_lun),
minus the se_dev_entry->mapped_lun that was added by this particular
invocation of target_fabric_make_mappedlun().

The above walks the TPG LUN list, but doesn't honor the specific
NodeACL's MappedLUN layout.
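
As a rough illustration of that suggestion, a common helper that both
configfs paths could share might walk the NodeACL's device_list[] along
these lines. This is only a sketch: the helper name is hypothetical, and
the device_list_lock, lun_flags and TRANSPORT_LUNFLAGS_INITIATOR_ACCESS
names are assumptions about the se_node_acl layout of this era that would
need checking against the actual tree:

/*
 * Sketch only: queue 0x3f/0x0e (REPORTED LUNS DATA HAS CHANGED) on every
 * MappedLUN the NodeACL already has, skipping the mapped_lun that the
 * current configfs operation is adding or removing.
 */
static void target_nacl_report_luns_changed_ua(struct se_node_acl *nacl,
					       u32 skip_mapped_lun)
{
	struct se_dev_entry *deve;
	int i;

	spin_lock_irq(&nacl->device_list_lock);
	for (i = 0; i < TRANSPORT_MAX_LUNS_PER_TPG; i++) {
		deve = nacl->device_list[i];
		if (!(deve->lun_flags & TRANSPORT_LUNFLAGS_INITIATOR_ACCESS))
			continue;
		if (deve->mapped_lun == skip_mapped_lun)
			continue;
		/*
		 * Drop the lock around the call, mirroring the pattern in
		 * the posted patch, on the assumption that
		 * core_scsi3_ua_allocate() takes NodeACL locks itself.
		 */
		spin_unlock_irq(&nacl->device_list_lock);
		core_scsi3_ua_allocate(nacl, deve->mapped_lun, 0x3f, 0xe);
		spin_lock_irq(&nacl->device_list_lock);
	}
	spin_unlock_irq(&nacl->device_list_lock);
}

target_fabric_make_mappedlun() and target_fabric_drop_mappedlun() would
each pass the mapped_lun being added or dropped, so neither path
re-reports the entry that is changing.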
Post by Saurav Kashyap
kfree(buf);
return &lacl->se_lun_group;
@@ -419,6 +432,8 @@ static void target_fabric_drop_mappedlun(
struct config_item *df_item;
struct config_group *lacl_cg = NULL, *ml_stat_grp = NULL;
int i;
+ struct se_lun *lun;
+ struct se_portal_group *se_tpg = lacl->se_lun_nacl->se_tpg;
ml_stat_grp = &lacl->ml_stat_grps.stat_group;
for (i = 0; ml_stat_grp->default_groups[i]; i++) {
@@ -437,6 +452,19 @@ static void target_fabric_drop_mappedlun(
kfree(lacl_cg->default_groups);
config_item_put(item);
+
+ spin_lock(&se_tpg->tpg_lun_lock);
+ for (i = 0; i < TRANSPORT_MAX_LUNS_PER_TPG; i++) {
+ lun = se_tpg->tpg_lun_list[i];
+ if (lun->lun_status != TRANSPORT_LUN_STATUS_ACTIVE)
+ continue;
+ spin_unlock(&se_tpg->tpg_lun_lock);
+ core_scsi3_ua_allocate(lacl->se_lun_nacl, lun->unpacked_lun,
+ 0x3f, 0xe);
+ spin_lock(&se_tpg->tpg_lun_lock);
+ }
+ spin_unlock(&se_tpg->tpg_lun_lock);
+
Ditto here as well, and AFAICT there is no reason why these two loops
can't be made into common code.

Also, this patch doesn't handle the case of demo-mode sessions, where no
explicit NodeACLs + MappedLUNs layout exists in configfs. That will
require walking the list of active sessions per endpoint and updating
the individual se_node_acl->device_list[] entries accordingly.
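
For the demo-mode side, a very rough, untested sketch could walk the
endpoint's active sessions and reuse the same helper on each session's
dynamically generated NodeACL; the tpg_sess_list, session_lock and
sess->se_node_acl names are assumptions, and the lock nesting is
deliberately glossed over:

/*
 * Sketch only: for demo-mode, queue the REPORTED LUNS DATA HAS CHANGED
 * UA against the NodeACL of every active session on the endpoint.
 * Holding session_lock across the per-ACL walk is a simplification that
 * a real patch would need to revisit.
 */
static void target_tpg_report_luns_changed_ua(struct se_portal_group *se_tpg,
					      u32 skip_mapped_lun)
{
	struct se_session *sess;

	spin_lock_bh(&se_tpg->session_lock);
	list_for_each_entry(sess, &se_tpg->tpg_sess_list, sess_list) {
		if (!sess->se_node_acl)
			continue;
		target_nacl_report_luns_changed_ua(sess->se_node_acl,
						   skip_mapped_lun);
	}
	spin_unlock_bh(&se_tpg->session_lock);
}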

So that said, I'm OK with merging support for generating UA's for
explicit NodeACLs + MappedLUNs first, but the demo-mode case will
require a bit more effort to implement.

Thanks,

--nab
