Deploying dpdk-1.8.0 on Debian (VMware Fusion 7)

Environment

Debian version: 7.4
Kernel: 3.2.0
gcc: 4.7.2
I later found during installation that sudo was missing, so if you don't have it, install it first: apt-get install sudo
Give the VM three NICs, and be sure not to use NAT mode; otherwise the test application cannot send or receive packets and all the statistics read 0.
In Debian, only one NIC needs to be brought up with a real address, used for SSH access; the other two are left for DPDK to play with.
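
To confirm the VM actually sees all three NICs before going further (a quick check; lspci comes from the pciutils package and may need installing first):

lspci | grep -i ethernet

Note the bus addresses at the start of each line (e.g. 02:05.0); prefixed with the 0000: domain, these are the PCI addresses the binding steps below ask for.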

Build & Load

First set the environment variables (from inside the DPDK source directory, since RTE_SDK is taken from pwd):
export RTE_SDK=`pwd`
export RTE_TARGET=x86_64-native-linuxapp-gcc

Enter the source directory and run:

root@debian:~/code/dpdk-1.8.0# ./tools/setup.sh 
------------------------------------------------------------------------------
 RTE_SDK exported as /root/code/dpdk-1.8.0
------------------------------------------------------------------------------
----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] ppc_64-power8-linuxapp-gcc
[4] x86_64-ivshmem-linuxapp-gcc
[5] x86_64-ivshmem-linuxapp-icc
[6] x86_64-native-bsdapp-clang
[7] x86_64-native-bsdapp-gcc
[8] x86_64-native-linuxapp-clang
[9] x86_64-native-linuxapp-gcc
[10] x86_64-native-linuxapp-icc

----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[11] Insert IGB UIO module
[12] Insert VFIO module
[13] Insert KNI module
[14] Setup hugepage mappings for non-NUMA systems
[15] Setup hugepage mappings for NUMA systems
[16] Display current Ethernet device settings
[17] Bind Ethernet device to IGB UIO module
[18] Bind Ethernet device to VFIO module
[19] Setup VFIO permissions

----------------------------------------------------------
 Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[20] Run test application ($RTE_TARGET/app/test)
[21] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

----------------------------------------------------------
 Step 4: Other tools
----------------------------------------------------------
[22] List hugepage info from /proc/meminfo

----------------------------------------------------------
 Step 5: Uninstall and system cleanup
----------------------------------------------------------
[23] Uninstall all targets
[24] Unbind NICs from IGB UIO driver
[25] Remove IGB UIO module
[26] Remove VFIO module
[27] Remove KNI module
[28] Remove hugepage mappings

[29] Exit Script

Option: 

I'm on a 64-bit system, so I picked [9] to start the build, and right away hit this error:
/lib/modules/`uname -r`/build: no such file or directory
Even if you create that directory by hand, the build still fails with: No targets specified and no makefile found. That's because build is normally not a directory but a symlink pointing at the matching kernel header directory under /usr/src. So just create the build symlink by hand, pointing it at /usr/src/linux-headers-`uname -r`/.
If the kernel headers aren't installed yet, install the package matching your running kernel: apt-get install linux-headers-`uname -r`
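
Putting both fixes together (a minimal sketch assuming the standard Debian header layout; remove any build directory you created by hand first, since ln won't overwrite a real directory):

apt-get install linux-headers-`uname -r`
rmdir /lib/modules/`uname -r`/build 2>/dev/null
ln -sfn /usr/src/linux-headers-`uname -r` /lib/modules/`uname -r`/build

On Debian the linux-headers package normally creates this symlink itself, so the ln is only needed if it's still missing.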
Then it's a matter of loading the kernel modules, reserving hugepages, and binding the NICs.
Here I chose [11], [14], and [17]. The hugepage count can be set to 128 (2 MB pages). For the NICs you need to enter their PCI addresses, e.g. 0000:02:05.0; the script's prompts are clear, so just follow them to pick the NICs.
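
If you want to look the addresses up outside the menu, the binding script that setup.sh drives can be called directly (it ships as tools/dpdk_nic_bind.py in this release):

./tools/dpdk_nic_bind.py --status

It lists which devices are bound to a kernel driver and which to a DPDK-compatible driver; the PCI address at the start of each line is what option [17] asks for.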

Test Application

Pick [21] to launch the test application. I only have two cores, so when prompted for the bitmask of cores I entered 3 (binary 11, i.e. both cores).
But after start, it keeps printing this error:
EAL: Error reading from file descriptor
Apparently this is caused by VMware's poor emulation of PCI INTx interrupt masking, so patch the source:

diff --git a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
index d1ca26e..c46a00f 100644
--- a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
+++ b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
@@ -505,14 +505,11 @@  igbuio_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		}
 		/* fall back to INTX */
 	case RTE_INTR_MODE_LEGACY:
-		if (pci_intx_mask_supported(dev)) {
-			dev_dbg(&dev->dev, "using INTX");
-			udev->info.irq_flags = IRQF_SHARED;
-			udev->info.irq = dev->irq;
-			udev->mode = RTE_INTR_MODE_LEGACY;
-			break;
-		}
-		dev_notice(&dev->dev, "PCI INTX mask not supported\n");
+		dev_dbg(&dev->dev, "using INTX");
+		udev->info.irq_flags = IRQF_SHARED;
+		udev->info.irq = dev->irq;
+		udev->mode = RTE_INTR_MODE_LEGACY;
+		break;
 		/* fall back to no IRQ */
 	case RTE_INTR_MODE_NONE:
 		udev->mode = RTE_INTR_MODE_NONE;

That's not the only change needed: with the patch applied, pci_intx_mask_supported() is no longer referenced anywhere, so the build fails again because the now-unused definition triggers a warning (DPDK treats warnings as errors). The definition of that function in the compat.h header has to be removed as well...
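
After both edits, rebuild the target the same way as before; re-running setup.sh and picking [9] again works, or from the shell:

make install T=x86_64-native-linuxapp-gcc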
After rebuilding, everything is OK:

testpmd> start
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 829923         RX-dropped: 0             RX-total: 829923
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 829915         RX-dropped: 0             RX-total: 829915
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1659838        RX-dropped: 0             RX-total: 1659838
  TX-packets: 1659712        TX-dropped: 0             TX-total: 1659712
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

One caveat: as soon as I type start, both CPUs are pegged and the VM gets very sluggish; even typing stop takes a good while to register. This is expected, since DPDK's poll-mode drivers busy-poll the NICs in a tight loop instead of sleeping on interrupts.

Loading via a Script

Interactive setup is hard to automate, so the loading can be done from a script instead.
First build and install DPDK:
make install T=x86_64-native-linuxapp-gcc
The commands that follow can go into a script; set the PCI addresses according to your own machine:

# reserve 128 x 2MB hugepages and mount hugetlbfs
echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
# load the uio framework, then the (patched) igb_uio module
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
# bind both test NICs to igb_uio
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:05.0
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:06.0
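
Two optional sanity checks before launching anything (output wording varies between versions):

grep Huge /proc/meminfo
./tools/dpdk_nic_bind.py --status

HugePages_Total should now read 128, and both NICs should be listed as using the igb_uio driver.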

Run the test application (-c 0x3 is the same core bitmask as before, -n 2 is the number of memory channels, and everything after -- goes to testpmd itself; -i means interactive mode):

./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 2 -- -i
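
Inside the prompt the flow is the same as under setup.sh; show port stats all (a standard testpmd command) is handy for reading the counters without stopping forwarding:

testpmd> start
testpmd> show port stats all
testpmd> stop
testpmd> quit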

posted @ 2015-03-18 23:20  NumberSix