* Open-FCoE on linux-scsi
@ 2007-11-27 23:40 Love, Robert W
2007-11-28 0:19 ` FUJITA Tomonori
2007-12-28 19:11 ` FUJITA Tomonori
0 siblings, 2 replies; 26+ messages in thread
From: Love, Robert W @ 2007-11-27 23:40 UTC (permalink / raw)
To: linux-scsi, Love, Robert W; +Cc: Zou, Yi, Leech, Christopher, Dev, Vasu
Hello SCSI mailing list,
I'd just like to introduce ourselves a bit before we get
started. My name is Robert Love and I'm joined by a team of engineers
including Vasu Dev, Chris Leech and Yi Zou. We are committed to
maintaining the Open-FCoE project. Aside from Intel engineers we expect
engineers from other companies to contribute to Open-FCoE.
Our goal is to get the initiator code upstream. We have a lot of
working code but recognize that we're early in this project's
development. We're looking for direction from you, the experts, on what
this project should grow into.
My concern is that we're going to bombard the SCSI list, but
upon James' recommendation we'll start on your list and if it's too much
move to our own list.
Thanks,
//Rob
* Re: Open-FCoE on linux-scsi
2007-11-27 23:40 Open-FCoE on linux-scsi Love, Robert W
@ 2007-11-28 0:19 ` FUJITA Tomonori
2007-11-28 0:29 ` Love, Robert W
2007-12-28 19:11 ` FUJITA Tomonori
1 sibling, 1 reply; 26+ messages in thread
From: FUJITA Tomonori @ 2007-11-28 0:19 UTC (permalink / raw)
To: robert.w.love; +Cc: linux-scsi, yi.zou, christopher.leech, vasu.dev
On Tue, 27 Nov 2007 15:40:05 -0800
"Love, Robert W" <robert.w.love@intel.com> wrote:
> Hello SCSI mailing list,
>
> I'd just like to introduce ourselves a bit before we get
> started. My name is Robert Love and I'm joined by a team of engineers
> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
> maintaining the Open-FCoE project. Aside from Intel engineers we expect
> engineers from other companies to contribute to Open-FCoE.
>
> Our goal is to get the initiator code upstream. We have a lot of
> working code but recognize that we're early in this project's
> development. We're looking for direction from you, the experts, on what
> this project should grow into.
Is a quick start guide to set up an initiator and a target and connect
them available?
* RE: Open-FCoE on linux-scsi
2007-11-28 0:19 ` FUJITA Tomonori
@ 2007-11-28 0:29 ` Love, Robert W
0 siblings, 0 replies; 26+ messages in thread
From: Love, Robert W @ 2007-11-28 0:29 UTC (permalink / raw)
To: FUJITA Tomonori; +Cc: linux-scsi, Zou, Yi, Leech, Christopher, Dev, Vasu
>On Tue, 27 Nov 2007 15:40:05 -0800
>"Love, Robert W" <robert.w.love@intel.com> wrote:
>
>> Hello SCSI mailing list,
>>
>> I'd just like to introduce ourselves a bit before we get
>> started. My name is Robert Love and I'm joined by a team of engineers
>> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
>> maintaining the Open-FCoE project. Aside from Intel engineers we expect
>> engineers from other companies to contribute to Open-FCoE.
>>
>> Our goal is to get the initiator code upstream. We have a lot of
>> working code but recognize that we're early in this project's
>> development. We're looking for direction from you, the experts, on what
>> this project should grow into.
>
>A quick start guide to setup initiator and target and connect them is
>available?
Yeah, there's a page in our wiki. It's mentioned in the first post on
www.Open-FCoE.org. Here's a direct link:
http://www.open-fcoe.org/openfc/wiki/index.php/Quickstart. Unfortunately
it's a bit rocky to get everything working even with the quickstart; I'm
working on improving that right now.
* Re: Open-FCoE on linux-scsi
2007-11-27 23:40 Open-FCoE on linux-scsi Love, Robert W
2007-11-28 0:19 ` FUJITA Tomonori
@ 2007-12-28 19:11 ` FUJITA Tomonori
2007-12-31 16:34 ` Love, Robert W
1 sibling, 1 reply; 26+ messages in thread
From: FUJITA Tomonori @ 2007-12-28 19:11 UTC (permalink / raw)
To: robert.w.love
Cc: linux-scsi, yi.zou, christopher.leech, vasu.dev, fujita.tomonori
From: "Love, Robert W" <robert.w.love@intel.com>
Subject: Open-FCoE on linux-scsi
Date: Tue, 27 Nov 2007 15:40:05 -0800
> Hello SCSI mailing list,
>
> I'd just like to introduce ourselves a bit before we get
> started. My name is Robert Love and I'm joined by a team of engineers
> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
> maintaining the Open-FCoE project. Aside from Intel engineers we expect
> engineers from other companies to contribute to Open-FCoE.
>
> Our goal is to get the initiator code upstream. We have a lot of
> working code but recognize that we're early in this project's
> development. We're looking for direction from you, the experts, on what
> this project should grow into.
I've just added a new fcoe target driver to tgt:
http://stgt.berlios.de/
The driver runs in user space, unlike your target mode driver (I just
modified your FCoE code to run it in user space).
The initiator driver succeeded in logging in to a target, seeing logical
units, and performing some I/Os. It's still very unstable, but it should
be useful for FCoE developers.
I would like to help you push the Open-FCoE initiator to mainline
too. What is on your todo list, and what are you guys working on now?
* RE: Open-FCoE on linux-scsi
2007-12-28 19:11 ` FUJITA Tomonori
@ 2007-12-31 16:34 ` Love, Robert W
2008-01-03 10:35 ` FUJITA Tomonori
0 siblings, 1 reply; 26+ messages in thread
From: Love, Robert W @ 2007-12-31 16:34 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: Zou, Yi, Leech, Christopher, Dev, Vasu, linux-scsi,
fujita.tomonori
>> Hello SCSI mailing list,
>>
>> I'd just like to introduce ourselves a bit before we get
>> started. My name is Robert Love and I'm joined by a team of engineers
>> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
>> maintaining the Open-FCoE project. Aside from Intel engineers we expect
>> engineers from other companies to contribute to Open-FCoE.
>>
>> Our goal is to get the initiator code upstream. We have a lot of
>> working code but recognize that we're early in this project's
>> development. We're looking for direction from you, the experts, on what
>> this project should grow into.
>
>I've just added a new fcoe target driver to tgt:
>
>http://stgt.berlios.de/
>
That's great; we'll check it out as soon as everyone is back from the
holidays.
>The driver runs in user space unlike your target mode driver (I just
>modified your FCoE code to run it in user space).
>
There seems to be a trend to move non-data-path code to userspace;
however, I don't like having so much duplicate code. We were going to
investigate whether we could redesign the target code to have a smaller
profile and just depend on the initiator modules instead of recompiling
openfc as openfc_tgt.
What's the general opinion on this? Duplicate code vs. more kernel code?
I can see that you're already starting to clean up the code that you
ported. Does that mean the duplicate code isn't an issue to you? When we
fix bugs in the initiator, they're not going to make it into your tree
unless you're diligent about watching the list.
>The initiator driver succeeded to log in a target, see logical units,
>and perform some I/Os. It's still very unstable but it would be
>useful for FCoE developers.
>
>
>I would like to help you push the Open-FCoE initiator to mainline
>too. What are on your todo list and what you guys working on now?
We would really appreciate the help! The best way I could come up with
to coordinate this effort was through the BZ:
http://open-fcoe.org/bugzilla. I was going to write a BZ wiki entry to
help new contributors, but since I haven't yet, here's the bottom line.
Sign up for the BZ, assign bugs to yourself from my name (I'm the default
assignee now), and also file bugs as you find them. I don't want to
impose much process, but this will allow all of us to know what everyone
else is working on.
The main things that I think need to be fixed are (in no particular
order):
1) Stability: Just straight-up bug fixing. This is ongoing and everyone
is looking at bugs.
2) Abstractions: We consider libsa a big bug, which we're trying to
strip down piece by piece. Vasu took out the LOG_SA code and I'm looking
into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it, but
that's how we're breaking it down.
3) Target: The target duplicates too much code. I want to
integrate the target into our -upstream tree. Without doing that, fixes
to the -upstream tree won't benefit the target and it will get into
worse shape than it already is, unless someone ports those patches
to the target too. I think that ideally we'd want to reduce the target's
profile and move it to userspace under tgt.
4) Userspace/kernel interaction: It's our belief that netlink is the
preferred mechanism for kernel/userspace interaction. Yi has converted
the FCoE ioctl code to netlink and is looking into openfc next; a rough
sketch of what the userspace side of such an interface could look like
follows this list.
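To make item 4 concrete, here is a minimal sketch of what the userspace
side of such a netlink configuration interface could look like. The
NETLINK_FCOE protocol number, the message type and the payload layout are
assumptions for illustration only, not the actual Open-FCoE interface.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_FCOE	25	/* hypothetical netlink protocol number */
#define FCOE_CMD_CREATE	1	/* hypothetical "create FCoE instance" command */

struct fcoe_nl_msg {		/* hypothetical configuration payload */
	unsigned int cmd;
	char ifname[16];	/* net_device to bind the FCoE instance to */
};

int main(void)
{
	struct sockaddr_nl dst = { .nl_family = AF_NETLINK };	/* nl_pid 0 == kernel */
	struct {
		struct nlmsghdr nlh;
		struct fcoe_nl_msg body;
	} req;
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_FCOE);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.body));
	req.nlh.nlmsg_type = FCOE_CMD_CREATE;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.body.cmd = FCOE_CMD_CREATE;
	strncpy(req.body.ifname, "eth2", sizeof(req.body.ifname) - 1);

	/* send the configuration request to the kernel side */
	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("sendto");
	close(fd);
	return 0;
}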
We have various other little things going on as well. Our validation
team is beginning to file bugs in the BZ and we're working on our
internal processes around that effort. I'm also trying to set up a "smoke
test" system that will run a quick automated test so that I can confirm
patches won't break the code base.
* RE: Open-FCoE on linux-scsi
2007-12-31 16:34 ` Love, Robert W
@ 2008-01-03 10:35 ` FUJITA Tomonori
2008-01-03 21:58 ` Love, Robert W
2008-01-05 18:33 ` Vladislav Bolkhovitin
0 siblings, 2 replies; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-03 10:35 UTC (permalink / raw)
To: robert.w.love
Cc: tomof, yi.zou, christopher.leech, vasu.dev, linux-scsi,
fujita.tomonori
From: "Love, Robert W" <robert.w.love@intel.com>
Subject: RE: Open-FCoE on linux-scsi
Date: Mon, 31 Dec 2007 08:34:38 -0800
> >> Hello SCSI mailing list,
> >>
> >> I'd just like to introduce ourselves a bit before we get
> >> started. My name is Robert Love and I'm joined by a team of engineers
> >> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
> >> maintaining the Open-FCoE project. Aside from Intel engineers we expect
> >> engineers from other companies to contribute to Open-FCoE.
> >>
> >> Our goal is to get the initiator code upstream. We have a lot of
> >> working code but recognize that we're early in this project's
> >> development. We're looking for direction from you, the experts, on what
> >> this project should grow into.
> >
> >I've just added a new fcoe target driver to tgt:
> >
> >http://stgt.berlios.de/
> >
> That's great; we'll check it out as soon as everyone is back from the
> holidays.
It's still an experiment. Patches are welcome.
> >The driver runs in user space unlike your target mode driver (I just
> >modified your FCoE code to run it in user space).
> >
> There seems to be a trend to move non-data-path code userspace, however,
Implementing an FCoE target driver in user space has no connection with
the trend to move non-data-path code to user space; it does all of the
data path in user space.
Examples of the trend to move non-data-path code to userspace are
open-iscsi, multipath, etc., I think.
> I don't like having so much duplicate code. We were going to investigate
> if we could redesign the target code to have less of a profile and just
> depend on the initiator modules instead of recompiling openfc as
> openfc_tgt.
>
> What's the general opinion on this? Duplicate code vs. more kernel code?
> I can see that you're already starting to clean up the code that you
> ported. Does that mean the duplicate code isn't an issue to you? When we
> fix bugs in the initiator they're not going to make it into your tree
> unless you're diligent about watching the list.
It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space. I failed twice
(with two iSCSI target implementations).
Yeah, duplication is not good, but the user-space code has some
great advantages. Both approaches have their pros and cons.
> >The initiator driver succeeded to log in a target, see logical units,
> >and perform some I/Os. It's still very unstable but it would be
> >useful for FCoE developers.
> >
> >
> >I would like to help you push the Open-FCoE initiator to mainline
> >too. What are on your todo list and what you guys working on now?
>
> We would really appreciate the help! The best way I could come up with
> to coordinate this effort was through the BZ-
> http://open-fcoe.org/bugzilla. I was going to write a BZ wiki entry to
> help new contributors, but since I haven't yet, here's the bottom line.
> Sign-up to the BZ, assign bugs to yourself from my name (I'm the default
> assignee now) and also file bugs as you find them. I don't want to
> impose much process, but this will allow all of us to know what everyone
> else is working on.
>
> The main things that I think need to be fixed are (in no particular
> order)-
>
> 1) Stability- Just straight up bug fixing. This is ongoing and everyone
> is looking at bugs.
Talking about stability is a bit premature, I think. The first thing
to do is to find a design that can be accepted into mainline.
> 2) Abstractions- We consider libsa a big bug, which we're trying to
> strip down piece by piece. Vasu took out the LOG_SA code and I'm looking
> into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it, but
> that's how we're breaking it down.
Agreed, libsa (and libcrc) should be removed.
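The ASSERT-to-WARN_ON/BUG_ON conversion mentioned above would look roughly
like the following fragment (the ASSERT() usage shown is a generic
libsa-style assumption, not the exact Open-FCoE code):

	/* before (libsa style; the ASSERT() macro definition is assumed): */
	ASSERT(fp != NULL);

	/* after, using plain kernel primitives: */
	if (WARN_ON(!fp))	/* recoverable case: complain once and bail out */
		return;

	BUG_ON(!fp);		/* only where continuing would corrupt state */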
> 3) Target- The duplicate code of the target is too much. I want to
> integrate the target into our -upstream tree. Without doing that, fixes
> to the -upstream tree won't benefit the target and it will get into
> worse shape than it already is, unless someone is porting those patches
> to the target too. I think that ideally we'd want to reduce the target's
> profile and move it to userspace under tgt.
>
> 4) Userspace/Kernel interaction- It's our belief that netlink is the
> preferred mechanism for kernel/userspace interaction. Yi has converted
> the FCoE ioctl code to netlink and is looking into openfc next.
There are other options and I'm not sure that netlink is the best. I
think that there is no general consensus about the best mechanism for
kernel/userspace interaction. Even ioctl is still accepted into
mainline (e.g. kvm).
I expect you got the idea to use netlink from open-iscsi, but unlike
open-iscsi, for now the FCoE code uses kernel/userspace interaction only
for configuration. open-iscsi has its non-data path in user space, so
the kernel needs to send variable-length data (PDUs, events, etc.) to
user space via netlink; open-iscsi really needs netlink. If you have the
FCoE non-data path in user space, netlink would work well for you too.
I would add one TODO item: better integration with scsi_transport_fc.
If we have HW FCoE HBAs in the future, we will need FCoE support in the
fc transport class (you could use its netlink mechanism for event
notification).
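For what it's worth, the fc transport class already exports an event
interface that a software FCoE LLD could reuse; here is a minimal sketch,
assuming the fc_get_event_number()/fc_host_post_event() helpers from
scsi_transport_fc (the openfc_link_up() wrapper is hypothetical):

#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

/* hypothetical handler called when the underlying net_device reports link up */
static void openfc_link_up(struct Scsi_Host *shost)
{
	/*
	 * The fc transport class delivers the event to userspace over its
	 * own netlink channel; no FCoE-private mechanism is needed for this.
	 */
	fc_host_post_event(shost, fc_get_event_number(), FCH_EVT_LINKUP, 0);
}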
BTW, I think that the name 'openfc' is a bit strange. Sure, the
mainline iSCSI initiator driver is called 'open-iscsi', but it doesn't
have any functions or files called 'open*'; that's just the project
name.
* RE: Open-FCoE on linux-scsi
2008-01-03 10:35 ` FUJITA Tomonori
@ 2008-01-03 21:58 ` Love, Robert W
2008-01-04 11:45 ` Stefan Richter
2008-01-04 13:47 ` FUJITA Tomonori
2008-01-05 18:33 ` Vladislav Bolkhovitin
1 sibling, 2 replies; 26+ messages in thread
From: Love, Robert W @ 2008-01-03 21:58 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: Zou, Yi, Leech, Christopher, Dev, Vasu, linux-scsi,
fujita.tomonori
>From: "Love, Robert W" <robert.w.love@intel.com>
>Subject: RE: Open-FCoE on linux-scsi
>Date: Mon, 31 Dec 2007 08:34:38 -0800
>
>> >> Hello SCSI mailing list,
>> >>
>> >> I'd just like to introduce ourselves a bit before we get
>> >> started. My name is Robert Love and I'm joined by a team of
engineers
>> >> including Vasu Dev, Chris Leech and Yi Zou. We are committed to
>> >> maintaining the Open-FCoE project. Aside from Intel engineers we
>> expect
>> >> engineers from other companies to contribute to Open-FCoE.
>> >>
>> >> Our goal is to get the initiator code upstream. We have a lot of
>> >> working code but recognize that we're early in this project's
>> >> development. We're looking for direction from you, the experts, on
>> what
>> >> this project should grow into.
>> >
>> >I've just added a new fcoe target driver to tgt:
>> >
>> >http://stgt.berlios.de/
>> >
>> That's great; we'll check it out as soon as everyone is back from the
>> holidays.
>
>It's still an experiment. Patches are welcome.
>
>
>> >The driver runs in user space unlike your target mode driver (I just
>> >modified your FCoE code to run it in user space).
>> >
>> There seems to be a trend to move non-data-path code userspace,
however,
>
>Implementing FCoE target drive in user space has no connection with a
>trend to move non-data-path code user space. It does all the data-path
>in user space.
>
>The examples of the trend to move non-data-path code userspace are
>open-iscsi, multi-path, etc, I think.
>
>
>> I don't like having so much duplicate code. We were going to
investigate
>> if we could redesign the target code to have less of a profile and
just
>> depend on the initiator modules instead of recompiling openfc as
>> openfc_tgt.
>>
>> What's the general opinion on this? Duplicate code vs. more kernel
code?
>> I can see that you're already starting to clean up the code that you
>> ported. Does that mean the duplicate code isn't an issue to you? When
we
>> fix bugs in the initiator they're not going to make it into your tree
>> unless you're diligent about watching the list.
>
>It's hard to convince the kernel maintainers to merge something into
>mainline that which can be implemented in user space. I failed twice
>(with two iSCSI target implementations).
>
>Yeah, duplication is not good but the user space code has some
>great advantages. Both approaches have the pros and cons.
>
>
>> >The initiator driver succeeded to log in a target, see logical
units,
>> >and perform some I/Os. It's still very unstable but it would be
>> >useful for FCoE developers.
>> >
>> >
>> >I would like to help you push the Open-FCoE initiator to mainline
>> >too. What are on your todo list and what you guys working on now?
>>
>> We would really appreciate the help! The best way I could come up
with
>> to coordinate this effort was through the BZ-
>> http://open-fcoe.org/bugzilla. I was going to write a BZ wiki entry
to
>> help new contributors, but since I haven't yet, here's the bottom
line.
>> Sign-up to the BZ, assign bugs to yourself from my name (I'm the
default
>> assignee now) and also file bugs as you find them. I don't want to
>> impose much process, but this will allow all of us to know what
everyone
>> else is working on.
>>
>> The main things that I think need to be fixed are (in no particular
>> order)-
>>
>> 1) Stability- Just straight up bug fixing. This is ongoing and
everyone
>> is looking at bugs.
>
>Talking about stability is a bit premature, I think. The first thing
>to do is finding a design that can be accepted into mainline.
How can we get this started? We've provided our current solution, but we
need feedback to guide us in the right direction. We've received a few
quips about libsa and libcrc, and now it looks like we should consider
what we can move to userspace (see below), but that's all the feedback
we've gotten so far. Can you tell us what you think about our current
architecture? Then we could discuss your concerns...
>
>
>> 2) Abstractions- We consider libsa a big bug, which we're trying to
>> strip down piece by piece. Vasu took out the LOG_SA code and I'm
looking
>> into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it,
but
>> that's how we're breaking it down.
>
>Agreed, libsa (and libcrc) should be removed.
>
>
>> 3) Target- The duplicate code of the target is too much. I want to
>> integrate the target into our -upstream tree. Without doing that,
fixes
>> to the -upstream tree won't benefit the target and it will get into
>> worse shape than it already is, unless someone is porting those
patches
>> to the target too. I think that ideally we'd want to reduce the
target's
>> profile and move it to userspace under tgt.
>>
>> 4) Userspace/Kernel interaction- It's our belief that netlink is the
>> preferred mechanism for kernel/userspace interaction. Yi has
converted
>> the FCoE ioctl code to netlink and is looking into openfc next.
>
>There are other options and I'm not sure that netlink is the best. I
>think that there is no general consensus about the best mechanism for
>kernel/userspace interaction. Even ioctl is still accepted into
>mainline (e.g. kvm).
>
>I expect you get an idea to use netlink from open-iscsi, but unlike
>open-iscsi, for now the FCoE code does just configuration with
>kernel/userspace interaction. open-iscsi has non-data path in user
>space so the kernel need to send variable-length data (PDUs, event,
>etc) to user space via netlink. So open-iscsi really needs netlink.
>If you have the FCoE non-data path in user space, netlink would work
>well for you.
We definitely got the netlink direction from open-iscsi. Combining your
comment that "It's hard to convince the kernel maintainers to merge
something into mainline that can be implemented in user space" with
"If you have the FCoE non-data path in user space, netlink would work
well for you" makes it sound like this is an architectural change we
should consider. I'm not sure how strong the trend is, though. Is moving
non-data-path code to userspace a requirement? (You might have answered
me already by saying you had two failed upstream attempts.)
>
>I would add one TODO item, better integration with scsi_transport_fc.
>If we have HW FCoE HBAs in the future, we need FCoE support in the fc
>transport class (you could use its netlink mechanism for event
>notification).
What do you have in mind in particular? Our layers are:
SCSI
Openfc
FCoE
net_device
NIC driver
So, it makes sense to me that we fit under scsi_transport_fc. I like our
layering: we clearly have SCSI on our top edge and net_dev at our bottom
edge. My initial reaction would be to resist merging openfc and fcoe and
creating a scsi_transport_fcoe.h interface.
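To illustrate the "net_dev at our bottom edge" point, a software FCoE
module can receive its frames through an ordinary packet_type hook; here
is a minimal sketch (the 0x8906 ethertype value and the empty fcoe_rcv()
body are assumptions for illustration, not the actual Open-FCoE code):

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define ETH_P_FCOE_SKETCH	0x8906	/* FCoE ethertype (assumed here) */

/* hypothetical receive handler: would hand the frame to the FC layer above */
static int fcoe_rcv(struct sk_buff *skb, struct net_device *dev,
		    struct packet_type *pt, struct net_device *orig_dev)
{
	/* strip the FCoE encapsulation and queue the FC frame upward ... */
	kfree_skb(skb);		/* placeholder: real code would pass it on */
	return 0;
}

static struct packet_type fcoe_packet_type = {
	.type = __constant_htons(ETH_P_FCOE_SKETCH),
	.func = fcoe_rcv,
};

static int __init fcoe_sketch_init(void)
{
	dev_add_pack(&fcoe_packet_type);	/* bottom edge: net_device RX hook */
	return 0;
}

static void __exit fcoe_sketch_exit(void)
{
	dev_remove_pack(&fcoe_packet_type);
}

module_init(fcoe_sketch_init);
module_exit(fcoe_sketch_exit);
MODULE_LICENSE("GPL");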
>
>
>BTW, I think that the name 'openfc' is a bit strange. Surely, the
>mainline iscsi initiator driver is called 'open-iscsi' but it doesn't
>have any functions or files called 'open*'. It's just the project
>name.
Understood, but open-iscsi doesn't have the layering scheme that we do.
Since we're providing a Fibre Channel protocol processing layer that
different transport types can register with, I think the generic name is
appropriate. Anyway, I don't think anyone here is terribly stuck on the
name; it's not a high priority at this time.
* Re: Open-FCoE on linux-scsi
2008-01-03 21:58 ` Love, Robert W
@ 2008-01-04 11:45 ` Stefan Richter
2008-01-04 11:59 ` FUJITA Tomonori
2008-01-04 13:47 ` FUJITA Tomonori
1 sibling, 1 reply; 26+ messages in thread
From: Stefan Richter @ 2008-01-04 11:45 UTC (permalink / raw)
To: Love, Robert W
Cc: FUJITA Tomonori, Zou, Yi, Leech, Christopher, Dev, Vasu,
linux-scsi, fujita.tomonori
On 1/3/2008 10:58 PM, Love, Robert W wrote:
[FUJITA Tomonori wrote]
>>I would add one TODO item, better integration with scsi_transport_fc.
>>If we have HW FCoE HBAs in the future, we need FCoE support in the fc
>>transport class (you could use its netlink mechanism for event
>>notification).
>
> What do you have in mind in particular? Our layers are,
>
> SCSI
> Openfc
> FCoE
> net_device
> NIC driver
>
> So, it makes sense to me that we fit under scsi_transport_fc. I like our
> layering- we clearly have SCSI on our top edge and net_dev at our bottom
> edge. My initial reaction would be to resist merging openfc and fcoe and
> creating a scsi_transport_fcoe.h interface.
AFAIU the stack should be:
- SCSI core,
scsi_transport_fc
- Openfc (an FCoE implementation)
- net_device
- NIC driver
_If_ there will indeed be dedicated FCoE HBAs in the future, the
following stack could exist in addition to the one above:
- SCSI core,
scsi_transport_fc
- FCoE HBA driver(s)
--
Stefan Richter
-=====-==--- ---= --=--
http://arcgraph.de/sr/
* Re: Open-FCoE on linux-scsi
2008-01-04 11:45 ` Stefan Richter
@ 2008-01-04 11:59 ` FUJITA Tomonori
2008-01-04 22:07 ` Dev, Vasu
0 siblings, 1 reply; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-04 11:59 UTC (permalink / raw)
To: stefanr
Cc: robert.w.love, tomof, yi.zou, christopher.leech, vasu.dev,
linux-scsi, fujita.tomonori
On Fri, 04 Jan 2008 12:45:45 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> On 1/3/2008 10:58 PM, Love, Robert W wrote:
> [FUJITA Tomonori wrote]
> >>I would add one TODO item, better integration with scsi_transport_fc.
> >>If we have HW FCoE HBAs in the future, we need FCoE support in the fc
> >>transport class (you could use its netlink mechanism for event
> >>notification).
> >
> > What do you have in mind in particular? Our layers are,
> >
> > SCSI
> > Openfc
> > FCoE
> > net_device
> > NIC driver
> >
> > So, it makes sense to me that we fit under scsi_transport_fc. I like our
> > layering- we clearly have SCSI on our top edge and net_dev at our bottom
> > edge. My initial reaction would be to resist merging openfc and fcoe and
> > creating a scsi_transport_fcoe.h interface.
>
> AFAIU the stack should be:
>
> - SCSI core,
> scsi_transport_fc
> - Openfc (an FCoE implementation)
> - net_device
> - NIC driver
>
> _If_ there will indeed be dedicated FCoE HBAs in the future, the
> following stack could exist in addition to the one above:
>
> - SCSI core,
> scsi_transport_fc
> - FCoE HBA driver(s)
Agreed. My FCoE initiator design would be something like:
scsi-ml
fcoe initiator driver
libfcoe
fc_transport_class (including fcoe support)
And FCoE HBA LLDs work like:
scsi-ml
FCoE HBA LLDs (some of them might use libfcoe)
fc_transport_class (including fcoe support)
That's the way other transport classes work, I think. For me, the
current code tries to invent another fc class. For example, the code
newly defines:
struct fc_remote_port {
struct list_head rp_list; /* list under fc_virt_fab */
struct fc_virt_fab *rp_vf; /* virtual fabric */
fc_wwn_t rp_port_wwn; /* remote port world wide name */
fc_wwn_t rp_node_wwn; /* remote node world wide name */
fc_fid_t rp_fid; /* F_ID for remote_port if known */
atomic_t rp_refcnt; /* reference count */
u_int rp_disc_ver; /* discovery instance */
u_int rp_io_limit; /* limit on outstanding I/Os */
u_int rp_io_count; /* count of outstanding I/Os */
u_int rp_fcp_parm; /* remote FCP service parameters */
u_int rp_local_fcp_parm; /* local FCP service parameters */
void *rp_client_priv; /* HBA driver private data */
void *rp_fcs_priv; /* FCS driver private data */
struct sa_event_list *rp_events; /* event list */
struct sa_hash_link rp_fid_hash_link;
struct sa_hash_link rp_wwpn_hash_link;
/*
* For now, there's just one session per remote port.
* Eventually, for multipathing, there will be more.
*/
u_char rp_sess_ready; /* session ready to be used */
struct fc_sess *rp_sess; /* session */
void *dns_lookup; /* private dns lookup */
int dns_lookup_count; /* number of attempted lookups */
};
/*
* remote ports are created and looked up by WWPN.
*/
struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
fc_fid_t, fc_wwn_t wwpn);
struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
fc_fid_t,
fc_wwn_t wwpn,
fc_wwn_t wwnn);
The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
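For example, registering a discovered target port through the existing
transport class would look roughly like this; a minimal sketch, assuming
the standard fc_remote_port_add()/fc_rport_identifiers interface (the
openfc_rport_ready() wrapper and its arguments are hypothetical):

#include <linux/kernel.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

/* hypothetical call site: a remote port has completed PLOGI/PRLI */
static void openfc_rport_ready(struct Scsi_Host *shost,
			       u64 wwpn, u64 wwnn, u32 port_id)
{
	struct fc_rport_identifiers ids = {
		.node_name = wwnn,
		.port_name = wwpn,
		.port_id   = port_id,
		.roles     = FC_RPORT_ROLE_FCP_TARGET,
	};
	struct fc_rport *rport;

	/* let the transport class allocate, refcount and sysfs-export the rport */
	rport = fc_remote_port_add(shost, 0, &ids);
	if (!rport)
		printk(KERN_ERR "openfc: fc_remote_port_add failed\n");
}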
* RE: Open-FCoE on linux-scsi
2008-01-03 21:58 ` Love, Robert W
2008-01-04 11:45 ` Stefan Richter
@ 2008-01-04 13:47 ` FUJITA Tomonori
2008-01-04 20:19 ` Mike Christie
1 sibling, 1 reply; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-04 13:47 UTC (permalink / raw)
To: robert.w.love
Cc: tomof, yi.zou, christopher.leech, vasu.dev, linux-scsi,
fujita.tomonori
On Thu, 3 Jan 2008 13:58:29 -0800
"Love, Robert W" <robert.w.love@intel.com> wrote:
> >Talking about stability is a bit premature, I think. The first thing
> >to do is finding a design that can be accepted into mainline.
>
> How can we get this started? We've provided our current solution, but
> need feedback to guide us in the right direction. We've received little
> quips about libsa and libcrc and now it looks like we should look at
> what we can move to userspace (see below), but that's all the feedback
> we've got so far. Can you tell us what you think about our current
> architecture? Then we could discuss your concerns...
I think that you have gotten little feedback because few people have read
the code. Hopefully, this discussion gives some information.
My main concern is transport class integration, but that's just my
opinion. The SCSI maintainer and FC people might have different opinions.
> >> 2) Abstractions- We consider libsa a big bug, which we're trying to
> >> strip down piece by piece. Vasu took out the LOG_SA code and I'm
> looking
> >> into changing the ASSERTs to BUG_ON/WARN_ONs. That isn't all of it,
> but
> >> that's how we're breaking it down.
> >
> >Agreed, libsa (and libcrc) should be removed.
> >
> >
> >> 3) Target- The duplicate code of the target is too much. I want to
> >> integrate the target into our -upstream tree. Without doing that,
> fixes
> >> to the -upstream tree won't benefit the target and it will get into
> >> worse shape than it already is, unless someone is porting those
> patches
> >> to the target too. I think that ideally we'd want to reduce the
> target's
> >> profile and move it to userspace under tgt.
> >>
> >> 4) Userspace/Kernel interaction- It's our belief that netlink is the
> >> preferred mechanism for kernel/userspace interaction. Yi has
> converted
> >> the FCoE ioctl code to netlink and is looking into openfc next.
> >
> >There are other options and I'm not sure that netlink is the best. I
> >think that there is no general consensus about the best mechanism for
> >kernel/userspace interaction. Even ioctl is still accepted into
> >mainline (e.g. kvm).
> >
> >I expect you get an idea to use netlink from open-iscsi, but unlike
> >open-iscsi, for now the FCoE code does just configuration with
> >kernel/userspace interaction. open-iscsi has non-data path in user
> >space so the kernel need to send variable-length data (PDUs, event,
> >etc) to user space via netlink. So open-iscsi really needs netlink.
> >If you have the FCoE non-data path in user space, netlink would work
> >well for you.
>
> We definitely got the netlink direction from open-iscsi. Combining your
> comment that "It's hard to convince the kernel maintainers to merge
> something into mainline that which can be implemented in user space"
> with
> "If you have the FCoE non-data path in user space, netlink would work
> well for you", makes it sound like this is an architectural change we
> should consider.
I think they are different topics (though they are related).
"It's hard to convince the kernel maintainers to merge something into
mainline that can be implemented in user space" applies to the
target driver.
You can fully implement FCoE target software in user space, right? If
so, it's hard to push it into the kernel.
The trend to push the non-data path to user space applies to the
initiator driver. Initiator drivers are expected to run in kernel
space, but the open-iscsi driver was split and the non-data part was
moved to user space. The kernel-space and user-space parts work
together. That's completely different from iSCSI target drivers, which
can be implemented fully in user space.
> I'm not sure how strong the trend is though. Is moving
> non data-path code to userspace a requirement? (you might have answered
> me already by saying you had 2x failed upstream attempts)
I don't know. You need to ask James.
> >I would add one TODO item, better integration with scsi_transport_fc.
> >If we have HW FCoE HBAs in the future, we need FCoE support in the fc
> >transport class (you could use its netlink mechanism for event
> >notification).
>
> What do you have in mind in particular? Our layers are,
>
> SCSI
> Openfc
> FCoE
> net_device
> NIC driver
>
> So, it makes sense to me that we fit under scsi_transport_fc. I like our
> layering- we clearly have SCSI on our top edge and net_dev at our bottom
> edge. My initial reaction would be to resist merging openfc and fcoe and
> creating a scsi_transport_fcoe.h interface.
As I wrote in another mail, this part is the major issue for me.
> >BTW, I think that the name 'openfc' is a bit strange. Surely, the
> >mainline iscsi initiator driver is called 'open-iscsi' but it doesn't
> >have any functions or files called 'open*'. It's just the project
> >name.
>
> Understood, but open-iscsi doesn't have the layering scheme that we do.
> Since we're providing a Fibre Channel protocol processing layer that
> different transport types can register with I think the generic name is
> appropriate. Anyway, I don't think anyone here is terribly stuck on the
> name; it's not a high priority at this time.
open-iscsi provides the proper abstraction. It can handle different
transport types, TCP and RDMA (iSER). It supports software iSCSI
drivers and HW iSCSI HBA drivers. This is done via the iscsi transport
class (and libiscsi).
* Re: Open-FCoE on linux-scsi
2008-01-04 13:47 ` FUJITA Tomonori
@ 2008-01-04 20:19 ` Mike Christie
0 siblings, 0 replies; 26+ messages in thread
From: Mike Christie @ 2008-01-04 20:19 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: robert.w.love, yi.zou, christopher.leech, vasu.dev, linux-scsi,
fujita.tomonori
FUJITA Tomonori wrote:
>> Understood, but open-iscsi doesn't have the layering scheme that we do.
>> Since we're providing a Fibre Channel protocol processing layer that
>> different transport types can register with I think the generic name is
>> appropriate. Anyway, I don't think anyone here is terribly stuck on the
>> name; it's not a high priority at this time.
>
> open-iscsi provides the proper abstraction. It can handles different
> transport types, tcp and RDMA (iSER). It supports software iSCSI
> drivers and HW iSCSI HBAs drivers. They are done via iscsi transport
> class (and libiscsi).
I think I hinted at this offlist, but the bnx2i branch in my iscsi git
tree is the best thing to look at for this. The upstream stuff best fits
the model where we support only HW iSCSI HBA drivers (all offload) and
SW iSCSI drivers (all software). The bnx2i branch modifies the class and
lib so it also supports a model in between the two, so pretty much
everything is covered.
* RE: Open-FCoE on linux-scsi
2008-01-04 11:59 ` FUJITA Tomonori
@ 2008-01-04 22:07 ` Dev, Vasu
2008-01-04 23:41 ` Stefan Richter
2008-01-06 4:27 ` FUJITA Tomonori
0 siblings, 2 replies; 26+ messages in thread
From: Dev, Vasu @ 2008-01-04 22:07 UTC (permalink / raw)
To: FUJITA Tomonori, stefanr
Cc: Love, Robert W, tomof, Zou, Yi, Leech, Christopher, linux-scsi
>>
>> _If_ there will indeed be dedicated FCoE HBAs in the future, the
>> following stack could exist in addition to the one above:
>>
>> - SCSI core,
>> scsi_transport_fc
>> - FCoE HBA driver(s)
>
>Agreed. My FCoE initiator design would be something like:
>
>scsi-ml
>fcoe initiator driver
>libfcoe
>fc_transport_class (including fcoe support)
>
>And FCoE HBA LLDs work like:
>
>scsi-ml
>FCoE HBA LLDs (some of them might use libfcoe)
>fc_transport_class (including fcoe support)
>
>
>That's the way that other transport classes do, I think. For me, the
>current code tries to invent another fc class. For example, the code
>newly defines:
>
>struct fc_remote_port {
> struct list_head rp_list; /* list under fc_virt_fab */
> struct fc_virt_fab *rp_vf; /* virtual fabric */
> fc_wwn_t rp_port_wwn; /* remote port world wide name */
> fc_wwn_t rp_node_wwn; /* remote node world wide name */
> fc_fid_t rp_fid; /* F_ID for remote_port if known */
> atomic_t rp_refcnt; /* reference count */
> u_int rp_disc_ver; /* discovery instance */
> u_int rp_io_limit; /* limit on outstanding I/Os */
> u_int rp_io_count; /* count of outstanding I/Os */
> u_int rp_fcp_parm; /* remote FCP service parameters */
> u_int rp_local_fcp_parm; /* local FCP service parameters */
> void *rp_client_priv; /* HBA driver private data */
> void *rp_fcs_priv; /* FCS driver private data */
> struct sa_event_list *rp_events; /* event list */
> struct sa_hash_link rp_fid_hash_link;
> struct sa_hash_link rp_wwpn_hash_link;
>
> /*
> * For now, there's just one session per remote port.
> * Eventually, for multipathing, there will be more.
> */
> u_char rp_sess_ready; /* session ready to be used */
> struct fc_sess *rp_sess; /* session */
> void *dns_lookup; /* private dns lookup */
> int dns_lookup_count; /* number of attempted lookups */
>};
>
>/*
> * remote ports are created and looked up by WWPN.
> */
>struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
>struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
> fc_fid_t, fc_wwn_t wwpn);
>struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
> fc_fid_t,
> fc_wwn_t wwpn,
> fc_wwn_t wwnn);
>
>
>The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
Openfc is a software implementation of FC services such as FC login
and target discovery, and it is already using/exploiting the existing fc
transport class, including the fc_rport struct. You can see openfc using
fc_rport in openfc_queuecommand() and using the fc transport API
fc_port_remote_add() for fc_rport.
The fcoe module is just a first example of a possible openfc transport,
but openfc can be used with other transports or HW HBAs also.
Openfc does provide a generic transport interface, using fcdev, which is
currently used by the FCoE module.
One can certainly implement openfc and the fcoe modules, partly or
fully, in an FCoE HBA.
* Re: Open-FCoE on linux-scsi
2008-01-04 22:07 ` Dev, Vasu
@ 2008-01-04 23:41 ` Stefan Richter
2008-01-05 0:09 ` Stefan Richter
2008-01-06 4:14 ` FUJITA Tomonori
2008-01-06 4:27 ` FUJITA Tomonori
1 sibling, 2 replies; 26+ messages in thread
From: Stefan Richter @ 2008-01-04 23:41 UTC (permalink / raw)
To: Dev, Vasu
Cc: FUJITA Tomonori, Love, Robert W, tomof, Zou, Yi,
Leech, Christopher, linux-scsi
Dev, Vasu wrote:
[FUJITA Tomonori wrote:]
>> Agreed. My FCoE initiator design would be something like:
>>
>> scsi-ml
>> fcoe initiator driver
>> libfcoe
>> fc_transport_class (inclusing fcoe support)
>>
>> And FCoE HBA LLDs work like:
>>
>> scsi-ml
>> FCoE HBA LLDs (some of them might use libfcoe)
>> fc_transport_class (inclusing fcoe support)
Wouldn't it make more sense to think of fc_transport_class as an FCP
layer, sitting between scsi-ml and the various FC interconnect drivers
(among them Openfc and maybe more FCoE drivers)? I.e. you have SCSI
command set layer -- SCSI core -- SCSI transport layer -- interconnect
layer.¹
I am not familiar with FCP/ FCoE/ FC-DA et al, but I guess the FCoE
support in the FCP transport layer should then go to the extent of
target discovery, login, lifetime management and representation of
remote ports and so on as far as it pertains to FCP (the SCSI transport
protocol, FC-4 layer) independently of the interconnect (FC-3...FC-0
layers).²
[...]
>> The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
>
> The openfc is software implementation of FC services such as FC login
> and target discovery and it is already using/exploiting existing fc
> transport class including fc_rport struct. You can see openfc using
> fc_rport in openfc_queuecommand() and using fc transport API
> fc_port_remote_add() for fc_rport.
Hence, aren't there interconnect independent parts of target discovery
and login which should be implemented in fc_transport_class? The
interconnect dependent parts would then live in LLD methods to be
provided in struct fc_function_template.
I.e. not only make full use of the API of fc_transport_class, but also
think about changing the API _if_ necessary so that it becomes a more
useful implementation of the interface below FC-4.
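For reference, registration with the FCP transport layer would then follow
the usual pattern; a minimal sketch, assuming the standard
fc_attach_transport()/fc_function_template interface (the openfc names and
the particular fields shown are illustrative only):

#include <linux/errno.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_transport.h>
#include <scsi/scsi_transport_fc.h>

static struct fc_function_template openfc_fc_functions = {
	/* attributes the transport class exposes from fc_host/fc_rport data */
	.show_host_node_name		= 1,
	.show_host_port_name		= 1,
	.show_host_port_id		= 1,
	.show_rport_maxframe_size	= 1,
	.show_rport_dev_loss_tmo	= 1,
	.show_starget_port_id		= 1,
	.show_starget_node_name		= 1,
	.show_starget_port_name		= 1,
	.dd_fcrport_size		= 0,	/* sizeof() of per-rport driver data */
};

static struct scsi_transport_template *openfc_transport_template;

/* hypothetical helper called from the driver's module init */
static int openfc_attach_fc_transport(void)
{
	openfc_transport_template = fc_attach_transport(&openfc_fc_functions);
	return openfc_transport_template ? 0 : -ENODEV;
}

The Scsi_Host would then set shost->transportt = openfc_transport_template
before scsi_add_host(), exactly as other FC LLDs do, while the interconnect
dependent parts (the FCoE encapsulation, fabric login over Ethernet, and so
on) stay in the LLD and in the fc_function_template methods.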
-------
¹) The transport classes are of course not layers in such a sense that
they would completely hide SCSI core from interconnect drivers. They
don't really have to; they nevertheless live at a higher level of
abstraction than LLDs and a lower level of abstraction than SCSI core.
(One obvious example that SCSI core is less hidden than it possibly
could be can be seen by the struct fc_function_template methods having
struct scsi_target * and struct Scsi_Host * arguments, instead of struct
fc_xyz * arguments.)
²) I'm using the term interconnect from the SCSI perspective, not from
the FC perspective.
--
Stefan Richter
-=====-==--- ---= --=-=
http://arcgraph.de/sr/
* Re: Open-FCoE on linux-scsi
2008-01-04 23:41 ` Stefan Richter
@ 2008-01-05 0:09 ` Stefan Richter
2008-01-05 0:21 ` Stefan Richter
2008-01-15 1:18 ` Love, Robert W
2008-01-06 4:14 ` FUJITA Tomonori
1 sibling, 2 replies; 26+ messages in thread
From: Stefan Richter @ 2008-01-05 0:09 UTC (permalink / raw)
To: Dev, Vasu
Cc: FUJITA Tomonori, Love, Robert W, tomof, Zou, Yi,
Leech, Christopher, linux-scsi
Stefan Richter wrote:
> I.e. you have SCSI command set layer -- SCSI core -- SCSI transport
> layer -- interconnect layer.
The interconnect layer could be split further:
SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.
But this would only really make sense if anybody were to implement
additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also sit
on top of Fibre Channel core.
--
Stefan Richter
-=====-==--- ---= --=-=
http://arcgraph.de/sr/
* Re: Open-FCoE on linux-scsi
2008-01-05 0:09 ` Stefan Richter
@ 2008-01-05 0:21 ` Stefan Richter
2008-01-05 8:28 ` Christoph Hellwig
2008-01-15 1:18 ` Love, Robert W
1 sibling, 1 reply; 26+ messages in thread
From: Stefan Richter @ 2008-01-05 0:21 UTC (permalink / raw)
To: Dev, Vasu
Cc: FUJITA Tomonori, Love, Robert W, tomof, Zou, Yi,
Leech, Christopher, linux-scsi
Stefan Richter wrote:
> The interconnect layer could be split further:
> SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
> Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.
>
> But this would only really make sense if anybody would implement
> additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also sit
> on top of Fibre Channel core.
PS: There is already an RFC 2625 implementation in Linux, but only for
LSIFC9xx.
PPS: RFC 2625 is superseded by RFC 4338.
--
Stefan Richter
-=====-==--- ---= --=-=
http://arcgraph.de/sr/
* Re: Open-FCoE on linux-scsi
2008-01-05 0:21 ` Stefan Richter
@ 2008-01-05 8:28 ` Christoph Hellwig
0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2008-01-05 8:28 UTC (permalink / raw)
To: Stefan Richter
Cc: Dev, Vasu, FUJITA Tomonori, Love, Robert W, tomof, Zou, Yi,
Leech, Christopher, linux-scsi
On Sat, Jan 05, 2008 at 01:21:28AM +0100, Stefan Richter wrote:
> PS: There is already an RFC 2625 implementation in Linux, but only for
> LSIFC9xx.
There has also been one for Interphase cards, which was removed because
the driver was entirely unmaintained. QLogic also has/had an out-of-tree
driver.
Now, doing IP over FC over Ethernet sounds like a lot of useless fun :)
* Re: Open-FCoE on linux-scsi
2008-01-03 10:35 ` FUJITA Tomonori
2008-01-03 21:58 ` Love, Robert W
@ 2008-01-05 18:33 ` Vladislav Bolkhovitin
2008-01-06 1:28 ` FUJITA Tomonori
1 sibling, 1 reply; 26+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-05 18:33 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: robert.w.love, yi.zou, christopher.leech, vasu.dev, linux-scsi,
fujita.tomonori
FUJITA Tomonori wrote:
>>What's the general opinion on this? Duplicate code vs. more kernel code?
>>I can see that you're already starting to clean up the code that you
>>ported. Does that mean the duplicate code isn't an issue to you? When we
>>fix bugs in the initiator they're not going to make it into your tree
>>unless you're diligent about watching the list.
>
> It's hard to convince the kernel maintainers to merge something into
> mainline that which can be implemented in user space. I failed twice
> (with two iSCSI target implementations).
Tomonori and "the kernel maintainers",
In fact, almost all of the kernel can be done in user space, including
all the drivers, networking, and I/O management with the block/SCSI
initiator subsystem and disk cache manager. But does that mean the
current kernel is bad and all of the above should be (re)done in user
space instead? I think not. Linux isn't a microkernel for very pragmatic
reasons: simplicity and performance.
1. Simplicity.
For a SCSI target, especially with a hardware target card, data come
from the kernel and are eventually served by the kernel doing the actual
I/O or getting/putting data from/to the cache. Dividing the request
processing job between user and kernel space creates unnecessary
interface layer(s) and effectively makes the request processing
distributed, with all its complexity and reliability problems. As an
example, what will currently happen in STGT if the user-space part
suddenly dies? Will the kernel part gracefully recover from it? How much
effort will be needed to implement that?
Another example is the code duplication mentioned above. Is it good?
What will it bring? Or do you care only about the amount of kernel code
and not about the overall amount of code? If so, you should
(re)read what Linus Torvalds thinks about that:
http://lkml.org/lkml/2007/4/24/364 (I don't consider myself an
authority on this question.)
I agree that some of the processing, which can be clearly separated, can
and should be done in user space. A good example of such an approach is
connection negotiation and management the way it's done in
open-iscsi. But I don't agree that this idea should be taken to the
absolute. It might look good, but it's impractical; it will only make
things more complicated and harder to maintain.
2. Performance.
Modern SCSI transports, e.g. InfiniBand, have link latency as low as
1(!) microsecond. For comparison, the inter-thread context switch time
on a modern system is about the same, and syscall time is about 0.1
microsecond. So only ten empty syscalls, or one context switch, add the
same latency as the link. Even 1 Gbps Ethernet has less than 100
microseconds of round-trip latency.
You most likely know that the QLogic target driver for SCST allows
commands to be executed either directly from soft IRQ or from the
corresponding thread. There is a steady 5% difference in IOPS between
those modes on 512-byte reads on nullio using a 4 Gbps link. So a single
additional inter-kernel-thread context switch costs 5% of IOPS.
Another source of additional latency, unavoidable with the user-space
approach, is the data copy to/from the cache. With the fully kernel-space
approach, the cache can be used directly, so no extra copy is needed.
So, by putting code in user space you have to accept the extra latency
it adds. Many, if not most, real-life workloads are more or less latency
bound, not throughput bound, so you shouldn't be surprised that a
single-stream "dd if=/dev/sdX of=/dev/null" on the initiator gives
values that are too low. Such a "benchmark" is no less important and
practical than all the multithreaded, latency-insensitive benchmarks
that people like running.
You may object that the backing storage's latency is a lot more than 1
microsecond, but that is true only if data are read/written from/to the
actual backing storage media, not from a cache, even the backing storage
device's cache. Nothing prevents a target from having 8 or even 64 GB of
cache, so most accesses, even random ones, could be served by it. This
is especially important for sync writes.
Thus, I believe that the partial user-space, partial kernel-space
approach for building SCSI targets is a move in the wrong direction,
because it brings practically nothing but costs a lot.
Vlad
* Re: Open-FCoE on linux-scsi
2008-01-05 18:33 ` Vladislav Bolkhovitin
@ 2008-01-06 1:28 ` FUJITA Tomonori
2008-01-08 17:38 ` Vladislav Bolkhovitin
0 siblings, 1 reply; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-06 1:28 UTC (permalink / raw)
To: vst
Cc: tomof, robert.w.love, yi.zou, christopher.leech, vasu.dev,
linux-scsi, fujita.tomonori
On Sat, 05 Jan 2008 21:33:48 +0300
Vladislav Bolkhovitin <vst@vlnb.net> wrote:
> Thus, I believe, that partial user space, partial kernel space approach
> for building SCSI targets is the move in the wrong direction, because it
> brings practically nothing, but costs a lot.
We have not discussed such a topic. The FCoE target can be implemented
fully in user space, if I understand correctly.
* Re: Open-FCoE on linux-scsi
2008-01-04 23:41 ` Stefan Richter
2008-01-05 0:09 ` Stefan Richter
@ 2008-01-06 4:14 ` FUJITA Tomonori
1 sibling, 0 replies; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-06 4:14 UTC (permalink / raw)
To: stefanr
Cc: vasu.dev, fujita.tomonori, robert.w.love, tomof, yi.zou,
christopher.leech, linux-scsi
On Sat, 05 Jan 2008 00:41:05 +0100
Stefan Richter <stefanr@s5r6.in-berlin.de> wrote:
> Dev, Vasu wrote:
> [FUJITA Tomonori wrote:]
> >> Agreed. My FCoE initiator design would be something like:
> >>
> >> scsi-ml
> >> fcoe initiator driver
> >> libfcoe
> >> fc_transport_class (including fcoe support)
> >>
> >> And FCoE HBA LLDs work like:
> >>
> >> scsi-ml
> >> FCoE HBA LLDs (some of them might use libfcoe)
> >> fc_transport_class (including fcoe support)
>
> Wouldn't it make more sense to think of fc_transport_class as a FCP
> layer, sitting between scsi-ml and the various FC interconnect drivers
> (among them Openfc and maybe more FCoE drivers)? I.e. you have SCSI
> command set layer -- SCSI core -- SCSI transport layer -- interconnect
> layer.¹
Oops, I should have depicted:
scsi-ml
fc_transport_class (including fcoe support)
FCoE HBA LLDs (some of them might use libfcoe)
As you pointed out, that's the correct layering from the perspective
of the SCSI architecture. I put FCoE HBA LLDs over fc_transport_class just
because LLDs directly interact with scsi-ml to perform the main work,
queuecommand/done (as you explained in footnote ¹).
> I am not familiar with FCP/ FCoE/ FC-DA et al, but I guess the FCoE
> support in the FCP transport layer should then go to the extent of
> target discovery, login, lifetime management and representation of
> remote ports and so on as far as it pertains to FCP (the SCSI transport
> protocol, FC-4 layer) independently of the interconnect (FC-3...FC-0
> layers).²
>
> [...]
> >> The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
> >
> > The openfc is software implementation of FC services such as FC login
> > and target discovery and it is already using/exploiting existing fc
> > transport class including fc_rport struct. You can see openfc using
> > fc_rport in openfc_queuecommand() and using fc transport API
> > fc_port_remote_add() for fc_rport.
>
> Hence, aren't there interconnect independent parts of target discovery
> and login which should be implemented in fc_transport_class? The
> interconnect dependent parts would then live in LLD methods to be
> provided in struct fc_function_template.
Agreed. Then FCoE helper functions that aren't useful for all the FCoE
LLDs would go into libfcoe, like the iscsi class does (and the sas class
also does, I guess).
> I.e. not only make full use of the API of fc_transport_class, also think
> about changing the API _if_ necessary to become a more useful
> implementation of the interface below FC-4.
>
> -------
> ¹) The transport classes are of course not layers in such a sense that
> they would completely hide SCSI core from interconnect drivers. They
> don't really have to; they nevertheless live at a higher level of
> abstraction than LLDs and a lower level of abstraction than SCSI core.
>
> (One obvious example that SCSI core is less hidden than it possibly
> could be can be seen by the struct fc_function_template methods having
> struct scsi_target * and struct Scsi_Host * arguments, instead of struct
> fc_xyz * arguments.)
>
> ²) I'm using the term interconnect from the SCSI perspective, not from
> the FC perspective.
> --
> Stefan Richter
> -=====-==--- ---= --=-=
> http://arcgraph.de/sr/
* RE: Open-FCoE on linux-scsi
2008-01-04 22:07 ` Dev, Vasu
2008-01-04 23:41 ` Stefan Richter
@ 2008-01-06 4:27 ` FUJITA Tomonori
1 sibling, 0 replies; 26+ messages in thread
From: FUJITA Tomonori @ 2008-01-06 4:27 UTC (permalink / raw)
To: vasu.dev
Cc: fujita.tomonori, stefanr, robert.w.love, tomof, yi.zou,
christopher.leech, linux-scsi
On Fri, 4 Jan 2008 14:07:28 -0800
"Dev, Vasu" <vasu.dev@intel.com> wrote:
>
> >>
> >> _If_ there will indeed be dedicated FCoE HBAs in the future, the
> >> following stack could exist in addition to the one above:
> >>
> >> - SCSI core,
> >> scsi_transport_fc
> >> - FCoE HBA driver(s)
> >
> >Agreed. My FCoE initiator design would be something like:
> >
> >scsi-ml
> >fcoe initiator driver
> >libfcoe
> >fc_transport_class (including fcoe support)
> >
> >And FCoE HBA LLDs work like:
> >
> >scsi-ml
> >FCoE HBA LLDs (some of them might use libfcoe)
> >fc_transport_class (including fcoe support)
> >
> >
> >That's the way that other transport classes do, I think. For me, the
> >current code tries to invent another fc class. For example, the code
> >newly defines:
> >
> >struct fc_remote_port {
> > struct list_head rp_list; /* list under fc_virt_fab */
> > struct fc_virt_fab *rp_vf; /* virtual fabric */
> > fc_wwn_t rp_port_wwn; /* remote port world wide name */
> > fc_wwn_t rp_node_wwn; /* remote node world wide name */
> > fc_fid_t rp_fid; /* F_ID for remote_port if known */
> > atomic_t rp_refcnt; /* reference count */
> > u_int rp_disc_ver; /* discovery instance */
> > u_int rp_io_limit; /* limit on outstanding I/Os */
> > u_int rp_io_count; /* count of outstanding I/Os */
> > u_int rp_fcp_parm; /* remote FCP service parameters */
> > u_int rp_local_fcp_parm; /* local FCP service parameters */
> > void *rp_client_priv; /* HBA driver private data */
> > void *rp_fcs_priv; /* FCS driver private data */
> > struct sa_event_list *rp_events; /* event list */
> > struct sa_hash_link rp_fid_hash_link;
> > struct sa_hash_link rp_wwpn_hash_link;
> >
> > /*
> > * For now, there's just one session per remote port.
> > * Eventually, for multipathing, there will be more.
> > */
> > u_char rp_sess_ready; /* session ready to be used */
> > struct fc_sess *rp_sess; /* session */
> > void *dns_lookup; /* private dns lookup */
> > int dns_lookup_count; /* number of attempted lookups */
> >};
> >
> >/*
> > * remote ports are created and looked up by WWPN.
> > */
> >struct fc_remote_port *fc_remote_port_create(struct fc_virt_fab *, fc_wwn_t);
> >struct fc_remote_port *fc_remote_port_lookup(struct fc_virt_fab *,
> > fc_fid_t, fc_wwn_t wwpn);
> >struct fc_remote_port *fc_remote_port_lookup_create(struct fc_virt_fab *,
> > fc_fid_t,
> > fc_wwn_t wwpn,
> > fc_wwn_t wwnn);
> >
> >
> >The FCoE LLD needs to exploit the existing struct fc_rport and APIs.
>
> The openfc module is a software implementation of FC services such as FC login
> and target discovery, and it is already using/exploiting the existing fc
> transport class, including the fc_rport struct. You can see openfc using
> fc_rport in openfc_queuecommand() and using the fc transport API
> fc_remote_port_add() for fc_rport.
You just call fc_remote_port_add. I don't think that reinventing the
whole rport management, like reference counting, counts as exploiting
the existing struct fc_rport and APIs.
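[Illustrative sketch, not from the thread: the fc_remote_port_add()/dd_data
idiom being referred to here, assuming a hypothetical LLD; the my_* names are
made up, and the dd_data area is sized via dd_fcrport_size in the driver's
fc_function_template.]

#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

/* Hypothetical per-rport driver data, kept in rport->dd_data instead of
 * a private fc_remote_port clone. */
struct my_rport_priv {
	u32 fcp_parm;
};

static void my_register_rport(struct Scsi_Host *shost, u64 wwpn, u64 wwnn,
			      u32 port_id)
{
	struct fc_rport_identifiers ids = {
		.port_name = wwpn,
		.node_name = wwnn,
		.port_id   = port_id,
		.roles     = FC_RPORT_ROLE_FCP_TARGET,
	};
	struct fc_rport *rport;

	/* The transport class owns the rport lifetime and refcounting;
	 * the LLD only keeps its extra state in rport->dd_data. */
	rport = fc_remote_port_add(shost, 0, &ids);
	if (rport) {
		struct my_rport_priv *priv = rport->dd_data;

		priv->fcp_parm = 0;
	}
}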
> The fcoe module is just a first example of a possible openfc transport, but
> openfc can be used with other transports or HW HBAs also.
>
> The openfc layer does provide a generic transport interface using fcdev, which
> is currently used by the FCoE module.
>
> One can certainly implement the openfc and fcoe modules, partly or fully, in an
> FCoE HBA.
As pointed out in other mails, I believe that a similar job has been done
in other transport classes using the scsi transport class infrastructure,
and FCoE needs to follow the existing examples.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Open-FCoE on linux-scsi
2008-01-06 1:28 ` FUJITA Tomonori
@ 2008-01-08 17:38 ` Vladislav Bolkhovitin
0 siblings, 0 replies; 26+ messages in thread
From: Vladislav Bolkhovitin @ 2008-01-08 17:38 UTC (permalink / raw)
To: FUJITA Tomonori
Cc: robert.w.love, yi.zou, christopher.leech, vasu.dev, linux-scsi,
fujita.tomonori
FUJITA Tomonori wrote:
>>Thus, I believe, that partial user space, partial kernel space approach
>>for building SCSI targets is the move in the wrong direction, because it
>>brings practically nothing, but costs a lot.
>
> We have not discussed such a topic. An FCoE target can be implemented fully
> in user space if I understand correctly.
Really? An FCoE target doesn't need FC target hardware? And FCoE isn't
sensitive to the packet forwarding latency?
For a fully in-kernel approach it is possible to make the packet
forwarding zero-copy in both directions, FC<->Ethernet, which is
practically impossible from user space. Modern memory has only a few GB/s
of throughput, and 10Gbps line rate is already about 1.25GB/s of data, so
an extra copy of every frame eats a sizeable share of that bandwidth and
adds noticeable latency. Thus, I believe, if performance matters, FCoE
should be in the kernel, at least the hot processing path, with management
possibly done in user space as for open-iscsi. But the user space/kernel
separation should only be done if the additional user space/kernel
interface won't complicate things too much.
Vlad
^ permalink raw reply [flat|nested] 26+ messages in thread
* RE: Open-FCoE on linux-scsi
2008-01-05 0:09 ` Stefan Richter
2008-01-05 0:21 ` Stefan Richter
@ 2008-01-15 1:18 ` Love, Robert W
2008-01-15 22:18 ` James Smart
1 sibling, 1 reply; 26+ messages in thread
From: Love, Robert W @ 2008-01-15 1:18 UTC (permalink / raw)
To: Stefan Richter, Dev, Vasu
Cc: FUJITA Tomonori, tomof, Zou, Yi, Leech, Christopher, linux-scsi
>-----Original Message-----
>From: Stefan Richter [mailto:stefanr@s5r6.in-berlin.de]
>Sent: Friday, January 04, 2008 4:10 PM
>To: Dev, Vasu
>Cc: FUJITA Tomonori; Love, Robert W; tomof@acm.org; Zou, Yi; Leech,
>Christopher; linux-scsi@vger.kernel.org
>Subject: Re: Open-FCoE on linux-scsi
>
>Stefan Richter wrote:
>> I.e. you have SCSI command set layer -- SCSI core -- SCSI transport
>> layer -- interconnect layer.
>
>The interconnect layer could be split further:
>SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
>Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.
This is how I see the comparison. ('/' indicates 'or')
You suggest                            Open-FCoE
SCSI-ml                                SCSI-ml
scsi_transport_fc.h                    scsi_transport_fc.h
scsi_transport_fc.c (FC core) / HBA    openfc / HBA
fcoe / HBA                             fcoe / HBA
From what I can see the layering is roughly the same with the main
difference being that we should be using more of (and putting more into)
scsi_transport_fc.h. Also we should make the FCP implementation (openfc)
fit in a bit nicer as scsi_transport_fc.c. We're going to look into
making better use of scsi_transport_fc.h as a first step.
I'm a little confused though; in a prior mail it seemed that you were
clubbing openfc and fcoe together, and at one point Fujita's stack
showed a libfcoe and fcoe fitting directly under scsi_transport_fc. I
think the layering is nicer at this point in the thread, where SCSI only
knows that it's using FC and the SW implementation of FCP knows the
transport. It's closer to my understanding of Open-iSCSI.
Open-iSCSI                  Open-FCoE
scsi_transport_iscsi.c      scsi_transport_fc.c
iscsi_tcp.c                 fcoe
I'm curious how aware you think scsi_transport_fc.h should be of FCoE?
>
>But this would only really make sense if anybody would implement
>additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also
sit
>on top of Fibre Channel core.
>--
>Stefan Richter
>-=====-==--- ---= --=-=
>http://arcgraph.de/sr/
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Open-FCoE on linux-scsi
2008-01-15 1:18 ` Love, Robert W
@ 2008-01-15 22:18 ` James Smart
2008-01-22 23:52 ` Love, Robert W
2008-01-29 5:42 ` Chris Leech
0 siblings, 2 replies; 26+ messages in thread
From: James Smart @ 2008-01-15 22:18 UTC (permalink / raw)
To: Love, Robert W
Cc: Stefan Richter, Dev, Vasu, FUJITA Tomonori, tomof, Zou, Yi,
Leech, Christopher, linux-scsi, James Smart
Love, Robert W wrote:
>> The interconnect layer could be split further:
>> SCSI command set layer -- SCSI core -- SCSI transport layer (FCP) --
>> Fibre Channel core -- Fibre Channel card drivers, FCoE drivers.
>
> This is how I see the comparison. ('/' indicates 'or')
>
> You suggest                            Open-FCoE
> SCSI-ml                                SCSI-ml
> scsi_transport_fc.h                    scsi_transport_fc.h
> scsi_transport_fc.c (FC core) / HBA    openfc / HBA
> fcoe / HBA                             fcoe / HBA
>
> From what I can see the layering is roughly the same with the main
> difference being that we should be using more of (and putting more into)
> scsi_transport_fc.h. Also we should make the FCP implementation (openfc)
> fit in a bit nicer as scsi_transport_fc.c. We're going to look into
> making better use of scsi_transport_fc.h as a first step.
I don't know what the distinction between scsi_transport_fc.h and
scsi_transport_fc.c is. They're all one and the same - the fc transport.
One contains the data structures and api between LLD and transport,
the other (the .c) contains the code to implement the api, transport objects
and sysfs handlers.
From my point of view, the fc transport is an assist library for the FC LLDDs.
Currently, it interacts with the midlayer only around some scan and block/unblock
functions. Excepting a small helper function used by the LLDD, it does not get
involved in the i/o path.
So my view of the layering for a normal FC driver is:
SCSI-ml
LLDD <-> FC transport
<bus code (e.g. pci)>
Right now, the "assists" provided in the FC transport are:
- Presentation of transport objects into the sysfs tree, and thus sysfs
attribute handling around those objects. This effectively is the FC
management interface.
- Remote Port Object mgmt - interaction with the midlayer. Specifically:
- Manages the SCSI target id bindings for the remote port
- Knows when the rport is present or not.
On new connectivity:
Kicks off scsi scans, restarts blocked i/o.
On connectivity loss:
Insulates midlayer from temporary disconnects by block of
the target/luns, and manages the timer for the allowed period of
disconnect.
Assists in knowing when/how to terminate pending i/o after a
connectivity loss (fast fail, or wait).
- Provides consistent error codes for i/o path and error handlers via
helpers that are used by LLDD.
Note that the above does not contain the FC login state machine, etc.
We have discussed this in the past. Given the 4 FC LLDDs we had, there was
a wide difference on who did what where. LSI did all login and FC ELS
handling in their firmware. Qlogic did the initiation of the login in the
driver, but the ELS handling in the firmware. Emulex did the ELS handling
in the driver. IBM/zfcp runs a hybrid of login/ELS handling over its pseudo
hba interface. Knowing how much time we spend constantly debugging login/ELS
handling and the fact that we have to interject adapter resource allocation
steps into the statemachine, I didn't want to go to a common library until
there was a very clear and similar LLDD. Well, you can't get much clearer
than a full software-based login/ELS state machine that FCOE needs. It makes
sense to at least try to library-ize the login/ELS handling if possible.
Here's what I have in mind for FCOE layering. Keep in mind, that one of the
goals here is to support a lot of different implementations which may range
from s/w layers on a simple Ethernet packet pusher, to more and more levels
of offload on an FCOE adapter. The goal is to create the s/w layers such that
different LLDD's can pick and choose the layer(s) (or level) they want to
integrate into. At a minimum, they should/must integrate with the base mgmt
objects.
For FC transport, we'd have the following "layers" or api "sections" :
Layer 0: rport and vport objects (current functionality)
Layer 1: Port login and ELS handling
Layer 2: Fabric login, PT2PT login, CT handling, and discovery/RSCN
Layer 3: FCP I/O Assist
Layer 4: FC2 - Exchange and Sequence handling
Layer 5: FCOE encap/decap
Layer 6: FCOE FLOGI handler
Layer 1 would work with an api to the LLDD based on a send/receive ELS interface
coupled with a login/logout to address interface. The code within layer 1
would make calls to layer 0 to instantiate the different objects. If layer 1
needs to track additional rport data, it should specify dd_data on the
rport_add call. (Note: all of the LLDDs today have their own node structure
that is independent from the rport struct. I wish we could kill this, but for
now, Layer 1 could do the same (but don't name it so similarly like openfc did)).
You could also specify login types, so that it knows to do FC4-specific login
steps such as PRLI's for FCP.
Layer 2 would work with an api to the LLDD based on a send/receive ELS/CT coupled
with a fabric or pt2pt login/logout interface. It manages discovery and would
use layer 1 for all endpoint-to-endpoint logins. It too would use layer 0 to
instantiate sysfs objects. It could also be augmented with a simple link
up/down statemachine that auto invokes the fabric/pt2pt login.
Layer 3 would work with an api to the LLDD based on an exchange w/ send/receive
sequence interface. You could extend this with a set of routines that glue
directly into the queuecommand and error handler interfaces, which then
utilizes the FCP helpers.
Layer 4 would work with a send/receive frame interface with the LLDD, and support
send/receive ELS/CT/sequence, etc. It essentially supports operation of all
of the above on a simple FC mac. It too would likely need to work with a link
state machine.
Layer 5 is a set of assist routines that convert a FC frame to an FCOE ethernet
packet and vice versa. It probably has an option to calculate the checksum or
not (if not, it's expected an adapter would do it). It may need to contain a
global FCOE F_Port object that is used as part of the translation.
Layer 6 would work with a send/receive ethernet packet interface and would
perform the FCOE FLOGI and determine the FCOE F_Port MAC address. It would
then tie into layer 2 to continue fabric logins, CT traffic, and discovery.
Thus, we could support adapters such as :
- A FC adapter such as Emulex, which would want to use layers 0, 1, and perhaps 2.
- A FC adapter, that sends/receives FC frames - uses layers 0 thru 4.
- A FCOE adapter, that sends/receives ethernet packets, but also provides FCP
I/O offload.
- A FCOE adapter, that simply sends/receives ethernet frames.
Layers 1, 2, 3, and 4 map to things in your openfc implementation layer.
Layers 5 and 6 map to things in your fcoe layer.
Note that they are not direct copies, but your layers carved up into libraries.
My belief is you would still have an FCOE LLDD that essentially contains the
logic to glue the different layers together.
Thus, the resulting layering looks like:
SCSI-ml
+- fc layer 0
+- fc layer 1
FC LLDD -+- fc layer 3
+- fc layer 4
+- fc layer 5
+- fc layer 6
net_device
NIC_LLDD
<i/o bus>
I hope this made sense..... There are lots of partial thoughts. The key here is
to create a library of reusable subsets that could be used by different hardware
implementations. We could punt, and have the FC LLDD just contain your openfc
and openfcoe chunks. I don't like this as you will create a bunch of sysfs
parameters for your own port objects, etc which are effectively FCOE-driver
specific. Even if we ignored my dislike, we would minimally need to put the
basic FCOE mgmt interface in place. We could start by extending the fc_port
object to reflect a type of FCOE, and to add support for optional FCOE MAC
addresses for the port and the FCOE F_Port. We'd then need to look at what
else (outside of login state, etc) that we'd want to manage for FCOE. This would
mirror what we did for FC in general.
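[Illustrative sketch, not from the thread: the kind of minimal "layer 0"
integration with the existing fc transport described above. The field names
come from scsi_transport_fc.h; the my_* symbols are assumptions, and any
FCoE-specific attributes (port/F_Port MAC) would be additions to the
transport itself, as suggested.]

#include <linux/init.h>
#include <linux/errno.h>
#include <scsi/scsi_transport_fc.h>

/* Hypothetical per-rport private area, sized into rport->dd_data. */
struct my_rport_priv {
	u32 fcp_parm;
};

static struct fc_function_template my_fcoe_fc_functions = {
	.show_host_node_name		= 1,
	.show_host_port_name		= 1,
	.show_host_port_id		= 1,
	.show_rport_maxframe_size	= 1,
	.show_rport_dev_loss_tmo	= 1,
	.show_starget_node_name		= 1,
	.show_starget_port_name		= 1,
	.show_starget_port_id		= 1,
	.dd_fcrport_size		= sizeof(struct my_rport_priv),
};

static struct scsi_transport_template *my_fcoe_transport;

static int __init my_fcoe_init(void)
{
	/* Hook into the existing fc transport sysfs objects rather than
	 * inventing FCoE-private port objects. */
	my_fcoe_transport = fc_attach_transport(&my_fcoe_fc_functions);
	return my_fcoe_transport ? 0 : -ENOMEM;
}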
Also, a couple of comments from my perspective on netlink vs sysfs vs ioctl
from a management perspective. Sysfs works well for singular attributes with
simple set/get primitives. They do not work if a set of attributes must be
changed together or in any multi-step operation. Such things, especially
requests from user space to the kernel, work better in an ioctl (e.g. soon to all
be under sgio). However, ioctls suck for driver-to-user space requests and
event postings. Netlink is a much better fit for these operations, with the
caveat that payloads can't be DMA based.
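[Illustrative sketch, not from the thread: the fc transport already exposes
netlink-based event posting helpers, so a driver-to-user notification can look
roughly like this; my_report_link_up is a hypothetical driver function.]

#include <scsi/scsi_host.h>
#include <scsi/scsi_transport_fc.h>

/* Post an asynchronous link-up event to user space over the fc
 * transport's netlink channel, instead of inventing an ioctl. */
static void my_report_link_up(struct Scsi_Host *shost)
{
	fc_host_post_event(shost, fc_get_event_number(),
			   FCH_EVT_LINKUP, 0);
}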
>> But this would only really make sense if anybody would implement
>> additional FC-4 drivers besides FCP, e.g. RFC 2625, which would also
> sit
>> on top of Fibre Channel core.
>> --
>> Stefan Richter
>> -=====-==--- ---= --=-=
>> http://arcgraph.de/sr/
True - it should become rather evident that FC should be its own
i/o bus, with the hba LLDD providing bindings to each of the FC4 stacks.
This would have worked really well for FCOE, with it creating a fc_port
object, which could then layer a scsi_host on top of it, etc.
Right now there's too much assumption that SCSI is the main owner of the
port. The NPIV vport stuff is a good illustration of this concept (why is
the vport a subobject of the scsi_host ?).
As it stands today, we have implemented these other FC-4's but they end
up being add-ons similar to the fc-transport.
-- james s
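[Illustrative sketch, not from the thread: one possible shape for the
"layer 1" login/ELS assist API James describes. This is a purely hypothetical
interface, not an existing kernel API; all names are assumptions.]

#include <linux/types.h>
#include <scsi/scsi_transport_fc.h>

struct fc_lport;	/* local port context owned by the library */
struct fc_frame;	/* whatever frame representation is settled on */

/* Hypothetical ops an LLDD would hand to a login/ELS assist library. */
struct fc_els_lld_ops {
	/* transmit an ELS frame to a given D_ID */
	int  (*send_els)(struct fc_lport *lp, u32 did, struct fc_frame *fp);
	/* login to a remote port completed (e.g. after PRLI), so the LLDD
	 * can allocate adapter resources; extra state hangs off the
	 * fc_rport created through layer 0 via dd_data */
	int  (*rport_ready)(struct fc_lport *lp, struct fc_rport *rport);
	/* remote port logged out or was lost */
	void (*rport_gone)(struct fc_lport *lp, struct fc_rport *rport);
};

/* And the library itself would export roughly: */
struct fc_lport *fc_els_create(const struct fc_els_lld_ops *ops, void *lld);
void fc_els_recv_frame(struct fc_lport *lp, struct fc_frame *fp);
int fc_els_login(struct fc_lport *lp, u32 did);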
^ permalink raw reply [flat|nested] 26+ messages in thread
* RE: Open-FCoE on linux-scsi
2008-01-15 22:18 ` James Smart
@ 2008-01-22 23:52 ` Love, Robert W
2008-01-29 5:42 ` Chris Leech
1 sibling, 0 replies; 26+ messages in thread
From: Love, Robert W @ 2008-01-22 23:52 UTC (permalink / raw)
To: James.Smart
Cc: Stefan Richter, Dev, Vasu, FUJITA Tomonori, tomof, Zou, Yi,
Leech, Christopher, linux-scsi
>-----Original Message-----
>From: James Smart [mailto:James.Smart@Emulex.Com]
>Sent: Tuesday, January 15, 2008 2:19 PM
>To: Love, Robert W
>Cc: Stefan Richter; Dev, Vasu; FUJITA Tomonori; tomof@acm.org; Zou, Yi;
>Leech, Christopher; linux-scsi@vger.kernel.org; James Smart
>Subject: Re: Open-FCoE on linux-scsi
>
>[ James Smart's message above quoted in full -- trimmed ]
Thanks for the feedback James, we're looking into breaking down the code
into functional units so that we can "library-ize" as you've suggested.
We'll report back when we have something more concrete.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Open-FCoE on linux-scsi
2008-01-15 22:18 ` James Smart
2008-01-22 23:52 ` Love, Robert W
@ 2008-01-29 5:42 ` Chris Leech
2008-02-01 1:53 ` James Smart
1 sibling, 1 reply; 26+ messages in thread
From: Chris Leech @ 2008-01-29 5:42 UTC (permalink / raw)
To: James.Smart
Cc: Love, Robert W, Stefan Richter, Dev, Vasu, FUJITA Tomonori, tomof,
Zou, Yi, linux-scsi
On Jan 15, 2008 2:18 PM, James Smart <James.Smart@emulex.com> wrote:
> True - it should become rather evident that FC should be its own
> i/o bus, with the hba LLDD providing bindings to each of the FC4 stacks.
> This would have worked really well for FCOE, with it creating a fc_port
> object, which could then layer a scsi_host on top of it, etc.
> Right now there's too much assumption that SCSI is the main owner of the
> port. The NPIV vport stuff is a good illustration of this concept (why is
> the vport a subobject of the scsi_host ?).
In thinking about how FC should be represented, it seems to me that in
order to provide good interfaces at multiple levels of functionality
we have to make sure that we have the right data structures at each
level. At the highest level there's scsi_cmd, then there's sequence
based interfaces that would need some sort of a sequence structure
with a scatter gather list, and at the lowest level interfaces work
directly with FC frames.
I'd like to talk about how we should go about representing FC frames.
Currently, our libfc code introduces an fc_frame struct but allows the
LLDD to provide an allocation function and control how the fc_frames
are allocated. The fcoe module uses this capability to map the data
buffer of an fc_frame to that of an sk_buff. As someone coming from a
networking background, and interested in FCoE which ends up sending
frames via an Ethernet driver, I tend to think this is overly complex
and just want to use sk_buffs directly.
Would SCSI/FC developers be opposed to dealing with sk_buffs for frame
level interfaces, or do we need to keep a separate fc_frame structure
around? I'd argue that skbs do a fine job of representing all sorts
of frame structures, that any device that supports IP over FC has to
deal with skbs in its driver anyway, and that at the frame level FC is
just another network. But then again, I am biased as skbs seem
friendly and familiar to me as I venture further into the alien
landscape that is scsi.
- Chris
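[Illustrative sketch, not from the thread: the sk_buff-centric encapsulation
Chris describes, in roughly the shape it would take. The header layout is
deliberately simplified and is not the full FCoE wire format; fcoe_xmit_sketch
and the constants are assumptions, and sufficient skb headroom is assumed.]

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/if_ether.h>
#include <linux/string.h>

#define FCOE_ETH_P	0x8906		/* FCoE Ethertype */

/* Simplified FCoE encapsulation header; the real one also carries
 * version and reserved fields. */
struct fcoe_hdr_sketch {
	__u8 sof;			/* start-of-frame delimiter */
} __attribute__((packed));

/* Push Ethernet + FCoE headers onto an skb that already holds a fully
 * built FC frame, then hand it to the NIC driver. */
static int fcoe_xmit_sketch(struct sk_buff *skb, struct net_device *dev,
			    const unsigned char *fcf_mac)
{
	struct fcoe_hdr_sketch *fh;
	struct ethhdr *eh;

	fh = (struct fcoe_hdr_sketch *)skb_push(skb, sizeof(*fh));
	fh->sof = 0x36;				/* e.g. SOFi3 */

	eh = (struct ethhdr *)skb_push(skb, sizeof(*eh));
	memcpy(eh->h_dest, fcf_mac, ETH_ALEN);
	memcpy(eh->h_source, dev->dev_addr, ETH_ALEN);
	eh->h_proto = htons(FCOE_ETH_P);

	skb->protocol = htons(FCOE_ETH_P);
	skb->dev = dev;
	return dev_queue_xmit(skb);
}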
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: Open-FCoE on linux-scsi
2008-01-29 5:42 ` Chris Leech
@ 2008-02-01 1:53 ` James Smart
0 siblings, 0 replies; 26+ messages in thread
From: James Smart @ 2008-02-01 1:53 UTC (permalink / raw)
To: chris.leech
Cc: Love, Robert W, Stefan Richter, Dev, Vasu, FUJITA Tomonori, tomof,
Zou, Yi, linux-scsi
Chris Leech wrote:
> In thinking about how FC should be represented, it seems to me that in
> order to provide good interfaces at multiple levels of functionality
> we have to make sure the we have the right data structures at each
> level. At the highest level there's scsi_cmd, then there's sequence
> based interfaces that would need some sort of a sequence structure
> with a scatter gather list, and at the lowest level interfaces work
> directly with FC frames.
I think the only thing that will actually talk frames will be either
an FC mac, which we haven't seen yet, or an FCOE entity. Consider the
latter to be the predominant case.
> I'd like to talk about how we should go about representing FC frames.
> Currently, our libfc code introduces an fc_frame struct but allows the
> LLDD to provide an allocation function and control how the fc_frames
> are allocated. The fcoe module uses this capability to map the data
> buffer of an fc_frame to that of an sk_buff. As someone coming from a
> networking background, and interested in FCoE which ends up sending
> frames via an Ethernet driver, I tend to think this is overly complex
> and just want to use sk_buffs directly.
If the predominant user is fcoe, then I think describing the frame in
the context of a sk_buff is fine.
> Would SCSI/FC developers be opposed to dealing with sk_buffs for frame
> level interfaces, or do we need to keep a seperate fc_frame structure
> around? I'd argue that skbs do a fine job of representing all sorts
> of frame structures, that any device that supports IP over FC has to
> deal with skbs in its driver anyway, and that at the frame level FC is
> just another network. But then again, I am biased as skbs seem
> friendly and familiar to me as I venture further into the alien
> landscape that is scsi.
>
> - Chris
-- james s
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread
Thread overview: 26+ messages
2007-11-27 23:40 Open-FCoE on linux-scsi Love, Robert W
2007-11-28 0:19 ` FUJITA Tomonori
2007-11-28 0:29 ` Love, Robert W
2007-12-28 19:11 ` FUJITA Tomonori
2007-12-31 16:34 ` Love, Robert W
2008-01-03 10:35 ` FUJITA Tomonori
2008-01-03 21:58 ` Love, Robert W
2008-01-04 11:45 ` Stefan Richter
2008-01-04 11:59 ` FUJITA Tomonori
2008-01-04 22:07 ` Dev, Vasu
2008-01-04 23:41 ` Stefan Richter
2008-01-05 0:09 ` Stefan Richter
2008-01-05 0:21 ` Stefan Richter
2008-01-05 8:28 ` Christoph Hellwig
2008-01-15 1:18 ` Love, Robert W
2008-01-15 22:18 ` James Smart
2008-01-22 23:52 ` Love, Robert W
2008-01-29 5:42 ` Chris Leech
2008-02-01 1:53 ` James Smart
2008-01-06 4:14 ` FUJITA Tomonori
2008-01-06 4:27 ` FUJITA Tomonori
2008-01-04 13:47 ` FUJITA Tomonori
2008-01-04 20:19 ` Mike Christie
2008-01-05 18:33 ` Vladislav Bolkhovitin
2008-01-06 1:28 ` FUJITA Tomonori
2008-01-08 17:38 ` Vladislav Bolkhovitin