From: "Wiles, Keith"
Subject: Re: proposal: raw packet send and receive API for PMD driver
Date: Wed, 27 May 2015 14:50:48 +0000
To: Lin XU, "dev@dpdk.org"

On 5/26/15, 11:18 PM, "Lin XU" wrote:

>I think it is very important to decouple the PMD drivers from the DPDK
>framework.
> (1) Currently, the rte_mbuf struct is too simple, and it is hard to
>support complex applications such as IPsec, flow control, etc. This key
>struct should be extensible to support customer-defined management
>headers and hardware offload features.

I was wondering if adding something like M_EXT support for external
storage to the DPDK mbuf would be more reasonable. IMO, decoupling the
PMDs from DPDK would likely impact performance, and I would prefer not
to let that happen. The drivers are written for performance, but they
did start out as normal FreeBSD/Linux drivers. Most of the core code in
the Intel drivers is shared with other operating systems.

> (2) To support more NICs.
>So, I think it is time to add a new API for the PMDs (in a non-radical
>way), so that developers can add initial callback functions in the PMD
>for various upper-layer protocol procedures.

We have one callback now, I think, but what callbacks do you need? The
only callback I can think of is one that lets a stack know when it can
release its hold on the data, i.e. once the data has been transmitted
and is no longer needed for retries.
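
A rough sketch of one extension point that already exists for a
customer-defined management header: the mbuf private area, sized via
priv_size when the pool is created, sits between struct rte_mbuf and the
packet data and is left entirely to the application. The struct app_meta
layout, the pool name, and the sizes below are made-up examples, not part
of DPDK.

#include <rte_mbuf.h>
#include <rte_mempool.h>

struct app_meta {                 /* hypothetical per-packet metadata */
	uint32_t flow_id;
	uint16_t ipsec_sa;        /* e.g. an IPsec SA index */
	uint16_t flags;
};

static struct rte_mempool *
app_pool_create(int socket_id)
{
	/* priv_size reserves room after struct rte_mbuf for the
	 * application's own header (keep it 8-byte aligned). */
	return rte_pktmbuf_pool_create("app_pool", 8192, 256,
				       sizeof(struct app_meta),
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       socket_id);
}

static inline struct app_meta *
app_meta(struct rte_mbuf *m)
{
	/* The private area immediately follows the mbuf structure. */
	return (struct app_meta *)(m + 1);
}

This only covers the metadata side of the request; an M_EXT-style scheme
(buf_addr pointing at externally owned storage with a free callback)
would still need new mbuf support.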
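On the release-after-transmit point, a minimal sketch of how a stack can
keep its hold on packet data with the existing mbuf reference counting,
assuming single-segment packets; the stack_* helpers are hypothetical
names, not DPDK APIs. The idea is to take an extra reference before
rte_eth_tx_burst() and treat the refcount falling back to 1, after the
PMD reclaims its descriptor and drops its reference, as the
"transmitted" signal; it is polled rather than a true callback.

#include <rte_mbuf.h>

static inline void
stack_tx_hold(struct rte_mbuf *m)
{
	/* Extra reference so the PMD's free after transmission does not
	 * return the mbuf to the pool. */
	rte_pktmbuf_refcnt_update(m, 1);
}

static inline int
stack_tx_done(const struct rte_mbuf *m)
{
	/* The PMD may not drop its reference until it cleans up its TX
	 * descriptor ring, so this can lag the actual wire transmit. */
	return rte_mbuf_refcnt_read(m) == 1;
}

static inline void
stack_tx_release(struct rte_mbuf *m)
{
	rte_pktmbuf_free(m);      /* drop the stack's retry reference */
}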