Cross-Compiling Paddle Lite for Image Classification on a Raspberry Pi 3B
Links:
https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_env.html
https://paddle-lite.readthedocs.io/zh/latest/demo_guides/x86.html
https://paddle-lite.readthedocs.io/zh/latest/source_compile/compile_linux.html
https://paddle-lite.readthedocs.io/zh/latest/demo_guides/linux_arm_demo.html
System information:
- Ubuntu 18.04
- Python 3.7
- Paddle 2.0.2
- Paddle Lite 2.9
Steps:
- Prepare the build environment: install git, cmake, and the cross-compilation toolchains
  sudo apt-get update
  sudo apt-get install cmake git
  sudo apt-get install g++-arm-linux-gnueabi gcc-arm-linux-gnueabi \
      g++-arm-linux-gnueabihf gcc-arm-linux-gnueabihf \
      gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
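Before starting a long build, it can save time to confirm the cross compilers actually landed on PATH. A minimal stdlib-only sketch (`missing_tools` is an illustrative helper, not part of Paddle Lite):

```python
import shutil

# Tools the build scripts below rely on; adjust to the prefixes you installed.
CROSS_TOOLS = ["arm-linux-gnueabihf-gcc", "arm-linux-gnueabihf-g++",
               "aarch64-linux-gnu-gcc", "cmake", "git"]

def missing_tools(tools):
    """Return the subset of tools that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(CROSS_TOOLS)
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("toolchain OK")
```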
- Build the x86 version of Paddle Lite, to verify the converted model on the host
  git clone https://github.com/PaddlePaddle/Paddle-Lite.git
  cd Paddle-Lite/
  git checkout v2.9
  rm -rf third-party
  ./lite/tools/build.sh --build_python=ON x86
- Install the Python paddlelite wheel and convert the Paddle-exported model with paddle_lite_opt
  cd Paddle-Lite/build.lite.x86/inference_lite_lib/python/install/dist
  pip install paddlelite-15ee38677-cp37-cp37m-linux_x86_64.whl
  cd model_dir/
  mkdir -p Paddle-Lite/build.lite.x86/inference_lite_lib/demo/cxx/mobilenetv1_light/model
  cp model.pdmodel Paddle-Lite/build.lite.x86/inference_lite_lib/demo/cxx/mobilenetv1_light/model
  cp model.pdiparams Paddle-Lite/build.lite.x86/inference_lite_lib/demo/cxx/mobilenetv1_light/model
  cd Paddle-Lite/build.lite.x86/inference_lite_lib/demo/cxx/mobilenetv1_light/model
  paddle_lite_opt --model_file=./model.pdmodel --param_file=./model.pdiparams \
      --optimize_out=./model --optimize_out_type=naive_buffer --valid_targets=x86
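The same conversion can also be scripted from Python via the `Opt` class shipped in the paddlelite wheel installed above. A sketch, assuming the Paddle Lite 2.9 Python API; `opt_cli_args` is a hypothetical helper that just reconstructs the equivalent CLI invocation for reference:

```python
def opt_cli_args(model_file, param_file, out, valid_targets="x86"):
    """Build the paddle_lite_opt command line equivalent to convert() below."""
    return ["paddle_lite_opt",
            f"--model_file={model_file}",
            f"--param_file={param_file}",
            f"--optimize_out={out}",
            "--optimize_out_type=naive_buffer",
            f"--valid_targets={valid_targets}"]

def convert(model_file, param_file, out, valid_targets="x86"):
    """Convert a Paddle model to a Paddle Lite naive_buffer .nb file."""
    # Deferred import: requires the paddlelite wheel built/installed above.
    from paddlelite.lite import Opt
    opt = Opt()
    opt.set_model_file(model_file)
    opt.set_param_file(param_file)
    opt.set_valid_places(valid_targets)   # "x86" for host checks, "arm" for the Pi
    opt.set_model_type("naive_buffer")
    opt.set_optimize_out(out)             # writes <out>.nb
    opt.run()
```

Running `convert("./model.pdmodel", "./model.pdiparams", "./model")` should produce the same `model.nb` as the CLI command above.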
- Build and run the mobilenetv1_light demo
  cd Paddle-Lite/build.lite.x86/inference_lite_lib/demo/cxx/mobilenetv1_light
  sh build.sh
  cp ../../../third_party/mklml/lib/lib* ./
  cp ../../../cxx/lib/libpaddle_light_api_shared.so ./
  ./mobilenet_light_api model/model.nb
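The light-API demo prints the model's raw output scores; turning them into a class prediction is typically a softmax plus top-k. A minimal sketch of that post-processing (`softmax` and `top_k` are illustrative helpers, not part of the demo):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a flat list of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(scores, k=3):
    """Indices of the k highest scores, best first (class ids for ImageNet labels)."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

For MobileNetV1 the output is a 1000-way score vector; `top_k(softmax(scores), 1)` gives the predicted ImageNet class index.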
- Cross-compile the armlinux version of Paddle Lite
  cd Paddle-Lite/
  ./lite/tools/build_linux.sh --arch=armv7hf --with_extra=ON --with_static_lib=ON --with_cv=ON
- Convert the Paddle-exported model with paddle_lite_opt, this time targeting ARM
  cd model_dir/
  mkdir -p Paddle-Lite/build.lite.linux.armv7hf.gcc/inference_lite_lib.armlinux.armv7hf/demo/cxx/mobilenetv1_light/model
  cp model.pd* Paddle-Lite/build.lite.linux.armv7hf.gcc/inference_lite_lib.armlinux.armv7hf/demo/cxx/mobilenetv1_light/model
  cd Paddle-Lite/build.lite.linux.armv7hf.gcc/inference_lite_lib.armlinux.armv7hf/demo/cxx/mobilenetv1_light/model
  paddle_lite_opt --model_file=./model.pdmodel --param_file=./model.pdiparams \
      --optimize_out=./model --optimize_out_type=naive_buffer --valid_targets=arm
- Build mobilenetv1_light, then copy it to the Raspberry Pi 3B and run it
  vim CMakeLists.txt
      # in CMakeLists.txt:
      set(ARMLINUX_ARCH_ABI armv7hf)
      add_definitions(-std=c++11 -O3 -pthread -Wl,--rpath=./)
  sh build.sh
  scp mobilenet_light_api model.nb libpaddle_light_api_shared.so pi@192.168.123.62:/home/pi/mobilenet_light_api
  ssh pi@192.168.123.62
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./
  ./mobilenet_light_api model.nb
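Before copying the binary over, it is worth confirming the build really targets 32-bit ARM rather than the host architecture (a common cross-compile mistake). A stdlib-only sketch that reads the ELF `e_machine` field; `elf_machine`/`file_elf_machine` are hypothetical helpers, and 0x28/0xB7 are the standard EM_ARM/EM_AARCH64 values:

```python
import struct

EM_ARM, EM_AARCH64, EM_X86_64 = 0x28, 0xB7, 0x3E

def elf_machine(header: bytes) -> int:
    """Return e_machine from the first 20 bytes of a little-endian ELF header."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_machine is a 16-bit field at offset 18 of the ELF header.
    (machine,) = struct.unpack_from("<H", header, 18)
    return machine

def file_elf_machine(path: str) -> int:
    with open(path, "rb") as f:
        return elf_machine(f.read(20))

# e.g. file_elf_machine("mobilenet_light_api") should equal EM_ARM (0x28)
# for an armv7hf build; EM_X86_64 means the host compiler was picked up.
```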
- Build and run Paddle-Lite-Demo
  git clone https://github.com/PaddlePaddle/Paddle-Lite-Demo.git
  cd Paddle-Lite-Demo/PaddleLite-armlinux-demo
  ./download_models_and_libs.sh
  scp *.h pi@192.168.123.62:/home/pi/yuandanfei/Paddle-Lite-Demo/PaddleLite-armlinux-demo/Paddle-Lite/include
  scp libpaddle_light_api_shared.so pi@192.168.123.62:/home/pi/yuandanfei/Paddle-Lite-Demo/PaddleLite-armlinux-demo/Paddle-Lite/libs/armv7hf
  cd Paddle-Lite-Demo/PaddleLite-armlinux-demo/image_classification_demo
  vim run.sh
      #TARGET_ARCH_ABI=armv8   # for RK3399, the default arch abi
      TARGET_ARCH_ABI=armv7hf  # for Raspberry Pi 3B
  sh run.sh
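The image classification demo feeds the model 224x224 RGB input, normalized per channel and laid out as CHW rather than HWC. A stdlib-only sketch of that preprocessing step; `to_chw_normalized` is an illustrative helper, and the mean/std values are the common ImageNet defaults, assumed to match the demo's config:

```python
# Per-channel normalization constants (ImageNet defaults; an assumption).
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def to_chw_normalized(pixels, width, height):
    """Flat RGB bytes (0-255, HWC order) -> normalized floats in CHW order.

    The model expects one full plane per channel (all R, then G, then B),
    each value scaled to [0, 1] and normalized with the channel mean/std.
    """
    chw = []
    for c in range(3):
        for i in range(width * height):
            v = pixels[i * 3 + c] / 255.0
            chw.append((v - MEAN[c]) / STD[c])
    return chw
```

For a real 224x224 frame this produces the 1x3x224x224 float buffer that is copied into the predictor's input tensor.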