NVMe over Fabrics (1 Day total for all subjects listed)

$1,395 – Each of these subjects starts with a short overview and then delves into its unique characteristics and how they apply to NVMe over Fabrics.

NVMe over FABRICS
Overview
What are Fabrics?
What is Mapping/Binding?
Why create NVMe over Fabrics?
What is Channel I/O vs. Memory I/O?
Messages vs. Shared Memory
Differences & Similarities between
NVMe over PCIe & NVMe over Fabrics
Encapsulation
Fabrics Commands & Responses
Connection Parameters
Property Definitions
Discovery Controller
Authentication
Transport Requirements specified by NVMe
RDMA
What is RDMA?
How RDMA operates
Benefits of RDMA
Zero Copy
OS Bypass
Detail of RDMA defined by IETF
Reliable/Unreliable Connection/Datagram
RDMA queue pairs with NVMe host & target
Verbs
RDMA operations
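
As a companion to the Verbs and RDMA operations topics above, the sketch below shows in C, against the Linux libibverbs API, how a one-sided RDMA Write is posted; this is the kind of operation an NVMe over Fabrics target uses to deliver read data. The queue pair, protection domain, and the exchange of the remote address and key are assumed to have happened already, and the function name is illustrative, not taken from the course material.

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch: register a local buffer and post a one-sided RDMA WRITE.
     * qp and pd come from earlier setup; remote_addr and remote_rkey are
     * assumed to have been exchanged out of band (for NVMe over Fabrics,
     * around the Connect handshake). Error handling trimmed for brevity. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_pd *pd,
                               uint64_t remote_addr, uint32_t remote_rkey)
    {
        void *buf = malloc(4096);                      /* local payload */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,  /* pin & key it  */
                                       IBV_ACCESS_LOCAL_WRITE);

        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = 4096,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr = {
            .opcode     = IBV_WR_RDMA_WRITE,   /* zero copy, no target CPU  */
            .sg_list    = &sge,
            .num_sge    = 1,
            .send_flags = IBV_SEND_SIGNALED,   /* completion on the send CQ */
        };
        wr.wr.rdma.remote_addr = remote_addr;  /* where the bytes land      */
        wr.wr.rdma.rkey        = remote_rkey;  /* permission to place them  */

        struct ibv_send_wr *bad_wr = NULL;
        return ibv_post_send(qp, &wr, &bad_wr);        /* 0 on success */
    }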

ETHERNET with iWARP or RoCE
Ethernet is ubiquitous and high-speed, making it an ideal
candidate for any new transport protocol. However, Ethernet was
not designed to use RDMA, so two competing protocols have
been developed to add RDMA to Ethernet. This presentation
will give the attendee an unbiased understanding of
how both work.

Overview of Ethernet
Addressing: Domain Name, IP, and MAC
Ethernet Layers: Levels 1 through 4
Packet Formats

iWARP to add RDMA
What is iWARP?
Solving Ethernet problems with iWARP
Commands & Responses via messages
Data via RDMA
iWARP layers added to TCP/IP
RDMAP, DDP, & MPA
L4 offload (TOE)
TCP packet with iWARP headers
Using DDP: differentiating between
Data Delivery & Data Placement
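
The layering just outlined can be pictured as the framing each iWARP layer adds before the bytes enter the TCP stream. The C sketch below is illustrative only: the names and field widths are simplified labels, not the exact bit layouts, which live in RFC 5044 (MPA), RFC 5041 (DDP), and RFC 5040 (RDMAP).

    #include <stdint.h>

    /* Simplified view of one iWARP FPDU as it sits in the TCP byte stream.
     * Names and widths are illustrative; see RFC 5044 (MPA), RFC 5041
     * (DDP), and RFC 5040 (RDMAP) for the real bit layouts. */

    struct ddp_tagged_hdr {      /* DDP: placement -- where the bytes land   */
        uint8_t  control;        /* tagged flag, last-FPDU flag, DDP version */
        uint8_t  rdmap_control;  /* RDMAP opcode rides in bits DDP reserves
                                    for its upper layer                      */
        uint32_t stag;           /* Steering Tag: names the remote buffer    */
        uint64_t tagged_offset;  /* byte offset within that buffer           */
    };

    struct mpa_fpdu {            /* MPA: record framing over TCP's stream    */
        uint16_t ulpdu_length;   /* lets the NIC find message boundaries     */
        struct ddp_tagged_hdr ddp;
        /* ... payload, pad, and a CRC-32c trailer follow on the wire ...    */
    };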

RoCE to add RDMA
What is Converged Ethernet?
RoCE V1
RoCE V2
RoCE headers
RoCE vs. InfiniBand
RoCE vs. iWARP
Flow Control with RoCE
Soft RoCE
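
To make the RoCE V1 vs. V2 distinction concrete, the sketch below shows the RoCE v2 encapsulation order in simplified C. The widths follow the IBTA 12-byte Base Transport Header, but all fields are big-endian on the wire, so treat the struct as an orientation aid, not a production wire definition.

    #include <stdint.h>

    /* Simplified sketch of RoCE v2 encapsulation: a routable UDP/IP packet
     * whose payload is the InfiniBand Base Transport Header (BTH) plus the
     * RDMA payload. Illustrative only. */

    struct ib_bth {                /* InfiniBand Base Transport Header       */
        uint8_t  opcode;           /* e.g. SEND, RDMA WRITE, RDMA READ req   */
        uint8_t  flags;            /* solicited event, migration, pad, TVer  */
        uint16_t pkey;             /* partition key                          */
        uint32_t dest_qp;          /* 8 reserved bits + 24-bit dest QP       */
        uint32_t psn;              /* ack-request bit + 24-bit packet seq no */
    };

    /* RoCE v2 framing, outermost first:
     *   Ethernet header             (L2, hop by hop)
     *   IPv4/IPv6 header            (routable -- the v2 advance over v1)
     *   UDP header, dst port 4791   (IANA-assigned for RoCE v2)
     *   struct ib_bth               (IB transport, unchanged from InfiniBand)
     *   payload + ICRC              (invariant CRC over the IB portion)
     * RoCE v1 instead places the IB headers directly after Ethernet
     * (Ethertype 0x8915), which confines it to a single L2 domain. */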

InfiniBand
InfiniBand is an environment ideally suited to the speed and protocol of SSDs and
NVMe. It is used primarily in High Performance Computing, where the transfer of
large data files between computers, or between computers and storage arrays, is
time critical. To this end, InfiniBand was designed to use RDMA.

Outline
Where is InfiniBand used?
Requirements to operate in
High Performance Computing
Why RDMA is required
Detail of InfiniBand
Example Topology
Virtual Lanes
Send messages
RDMA writes & reads
InfiniBand Packet Format
InfiniBand Keys
Flow Control
Encoding: 8b/10b, 64b/66b, 256b/257b
Cable widths
Speeds: SDR, DDR, QDR, FDR, EDR, HDR, NDR
InfiniBand Discovery
Components
HCA
TCA
Routers & Gateways
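
The Encoding and Speeds topics above combine in a simple calculation: usable bandwidth is the per-lane signaling rate times the lane count times the encoding efficiency. A short illustrative C helper follows; the per-lane rates are the commonly published ones and stand in here as assumed inputs.

    #include <stdio.h>

    /* Usable bandwidth = per-lane signaling rate x lanes x encoding
     * efficiency (payload bits / coded bits). */
    static double usable_gbps(double lane_rate_gbps, int lanes,
                              int payload_bits, int coded_bits)
    {
        return lane_rate_gbps * lanes * payload_bits / coded_bits;
    }

    int main(void)
    {
        /* SDR x4: 2.5 Gb/s per lane, 8b/10b -> 8 Gb/s of data  */
        printf("SDR x4: %6.2f Gb/s\n", usable_gbps(2.5, 4, 8, 10));
        /* EDR x4: 25.78125 Gb/s per lane, 64b/66b -> ~100 Gb/s */
        printf("EDR x4: %6.2f Gb/s\n", usable_gbps(25.78125, 4, 64, 66));
        return 0;
    }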

Fibre Channel
Fibre Channel is a major transport for Storage Area Networks (SANs) in the
Enterprise market. Many of the services required for NVMe over Fabrics over FC
already exist, e.g. Discovery (Name Server), Link Services, Process Login, and
Error Recovery. Despite its wide acceptance, Fibre Channel does not support RDMA.

Overview of Fibre Channel
Topologies: Point-to-Point, Fabric
Switches & addressing
Login: Fabric, Port, & Process
Detail
Port types
Initialization
Discovery (Name Server)
Packets
Services
Transferring Commands, Data, & Response
FC-0:   Encoding: 8b/10b, 64b/66b, 256b/257b, Forward Error Correction
FC-1:   Transmission Word Synchronization
FC-2P:  FC Port, Link Speed Negotiation
FC-2V:  Level Defining VN_Ports (multiplexed PN_Ports)
FC-2M:  Multiplexer
FC-3:   Common Services
FC-4:   Mapping to Upper Level Protocol
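
For orientation alongside the Packets and Transferring Commands, Data, & Response topics above, here is a simplified C view of the 24-byte FC frame header. The field names follow FC-FS, but the fields are big-endian on the wire, so this is an illustrative layout rather than a validated wire definition.

    #include <stdint.h>

    /* Simplified 24-byte FC frame header, matching the word layout in
     * FC-FS. Orientation aid only. */
    struct fc_frame_header {
        uint8_t  r_ctl;        /* routing control: frame category           */
        uint8_t  d_id[3];      /* 24-bit destination port address           */
        uint8_t  cs_ctl;       /* class-specific control                    */
        uint8_t  s_id[3];      /* 24-bit source port address                */
        uint8_t  type;         /* upper level protocol, e.g. FCP or FC-NVMe */
        uint8_t  f_ctl[3];     /* exchange & sequence control flags         */
        uint8_t  seq_id;       /* sequence identifier                       */
        uint8_t  df_ctl;       /* optional-header presence                  */
        uint16_t seq_cnt;      /* frame order within the sequence           */
        uint16_t ox_id;        /* originator exchange ID                    */
        uint16_t rx_id;        /* responder exchange ID                     */
        uint32_t parameter;    /* e.g. relative offset for data frames      */
    };
    /* On the wire: SOF | header | payload (up to 2112 B) | CRC | EOF */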