Jean-Alexis Boulet

FPGA Video AI deployment – From platform creation to AI deployment - Part 2

In the previous part, you learned how to get the PetaLinux image working and how to create a platform for deploying accelerators. We will now create a test application, but this time with a DPU accelerator. First, we need to add the Vitis AI and Vitis Accelerated Libraries to Vitis.


Adding Vitis-AI repo in Vitis:

  • Open menu Window -> Preferences

  • Go to the Library Repository tab

  • Add Vitis-AI:

  • Click the Add button

  • Input ID: vitis-ai

  • Name: Vitis AI

  • Location: assign a target download directory or leave it empty. Vitis will use the default path ~/.Xilinx if this field is empty.

  • Git URL: https://github.com/Xilinx/Vitis-AI.git

  • Branch: The branch you'd like to verify with your platform. Use master for the latest version.
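
If you also want a local checkout of the repository on your host (the cross-compilation steps later in this post use one under ~/Vitis-AI), you can clone it manually as well. A sketch, using master as above:

# Clone the Vitis-AI repository to the home directory (used later for the VART demos)
git clone --branch master https://github.com/Xilinx/Vitis-AI.git ~/Vitis-AI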



Download the Vitis-AI library

  • Open menu Xilinx -> Libraries

  • Find the Vitis-AI entry we just added. Click the Download button on it.

  • Wait until the download of the Vitis-AI repository completes

  • Click OK to close this window.


Create a DPU kernel

  • Go to menu File -> New -> Application Project

  • Click Next on the Welcome page

  • Select platform zcu104_custom_platform. Click Next.

  • Name the project dpu_trd and click Next.

  • Set Domain to linux on psu_cortexa53

  • Set Sys_root path to sysroot installation path in previous step, e.g. <full_pathname_to_zcu104_custom_pkg>/sysroots/cortexa72-cortexa53-xilinx-linux

  • Set the Root FS to rootfs.ext4 and Kernel Image to Image. These files are located in the zcu104_custom_plnx/images directory generated previously. Click Next.

  • Select dsa -> DPU Kernel (RTL Kernel) and click Finish to generate the application.

Edit the DPU kernel settings for the zcu104 project


Open dpu_trd_system.sprj and select HARDWARE build configuration


Now edit the DPU configuration file:

Open dpu_conf.vh from the dpu_trd_kernels/src/prj/Vitis directory

Update line 37 from URAM_DISABLE to URAM_ENABLE

Save the changes.
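
If you prefer the command line, the same edit can be done with a one-liner. This is only a sketch, assuming the stock dpu_conf.vh spells the macro URAM_DISABLE and that you run it from the workspace root:

# Back up the file, then enable UltraRAM (the ZCU104 fabric provides UltraRAM)
sed -i.bak 's/URAM_DISABLE/URAM_ENABLE/' dpu_trd_kernels/src/prj/Vitis/dpu_conf.vh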


Update system_hw_link for proper kernel instantiation

Double click: dpu_trd_system_hw_link.prj

Remove sfm_xrt_top

In the Assistant pane, right-click on:

dpu_trd_system -> dpu_trd_system_hw_link -> Hardware -> dpu and select Settings

Click the (...) button on the V++ configuration settings line


Add the following content (it sets the DPU clock frequencies and maps each DPU instance's AXI master interfaces to the PS memory ports):

[clock]

freqHz=300000000:DPUCZDX8G_1.aclk

freqHz=600000000:DPUCZDX8G_1.ap_clk_2

freqHz=300000000:DPUCZDX8G_2.aclk

freqHz=600000000:DPUCZDX8G_2.ap_clk_2


[connectivity]

sp=DPUCZDX8G_1.M_AXI_GP0:HPC0

sp=DPUCZDX8G_1.M_AXI_HP0:HP0

sp=DPUCZDX8G_1.M_AXI_HP2:HP1

sp=DPUCZDX8G_2.M_AXI_GP0:HPC0

sp=DPUCZDX8G_2.M_AXI_HP0:HP2

sp=DPUCZDX8G_2.M_AXI_HP2:HP3


Update package options


Double click dpu_trd_system.sprj

Click on the ... button on Package options

Input --package.sd_dir=../../dpu_trd/src/app

Click OK


Update the include directory for OpenCV:

Right-click on dpu_trd[xrt] -> select C/C++ Build Settings

Select Includes and add the path to the opencv4 directory inside the sysroot:

<full_pathname_to_zcu104_custom_pkg>/sysroots/cortexa72-cortexa53-xilinx-linux/usr/include/opencv4



Build the hardware design

  • Select the dpu_trd_system system project

  • Click the hammer button to build the system project

  • The generated SD card image is located at dpu_trd_system/Hardware/package/sd_card.img.

Now go get another cup of coffee, reply to some email and wait until the compilation is done...

You now have an application with a DPU kernel in it, ready to run.


Test the solution


To test the design, you first need to program your SD card as you did previously.
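
As a quick reminder, on a Linux host this can look like the following. A minimal sketch, assuming the card shows up as /dev/sdX; double-check the device name with lsblk, since dd overwrites the target device:

# Identify the SD card device first, then write the generated image to it
lsblk
sudo dd if=dpu_trd_system/Hardware/package/sd_card.img of=/dev/sdX bs=4M status=progress conv=fsync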


Now connect to the board using SSH, the UART, or a keyboard, as you prefer.

Here are a couple of quick commands you can use to check that the design is working (XLNX_VART_FIRMWARE points the Vitis AI runtime at the dpu.xclbin produced by the build):


cp /mnt/sd-mmcblk0p1/app/model/resnet50.xmodel /mnt/sd-mmcblk0p1/resnet50.xmodel
export LD_LIBRARY_PATH=/mnt/sd-mmcblk0p1/app/samples/lib
cd /mnt/sd-mmcblk0p1/
XLNX_VART_FIRMWARE=/mnt/sd-mmcblk0p1/dpu.xclbin ./dpu_trd ./app/img/bellpeppe-994958.JPEG

The output should look like this:


score[945]  =  0.992235     text: bell pepper,
score[941]  =  0.00315807   text: acorn squash,
score[943]  =  0.00191546   text: cucumber, cuke,
score[939]  =  0.000904801  text: zucchini, courgette,
score[949]  =  0.00054879   text: strawberry,

This means that the neural network classifies the image as a bell pepper with 99.22% confidence, which is correct.


Once you're done, you can use the following files, which prepare the SD card with everything needed to test various applications. You can go through the files for the details, but here is what they do:



Download: host_to_target_init.txt (408 B)

Download: sd_card_init.txt (4 KB)

Note: you need to change the extension to .sh to be able to run the files.

  1. Resize the partition to use the full size of the card (see the sketch after this list)

  2. Copy the ResNet application for testing with the bellpepper jpeg

  3. Install PetaLinux update

  4. Install the VitisAiLibrary model

  5. Install the ResNet model

  6. Install ssd_pedestrian_pruned

  7. Install fpn

  8. Install yolov2_adas_pruned

  9. Install densebox

  10. Install Example Video file

  11. Test the ResNet model with the 001.jpg image

  12. Run the face detect model
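
Step 1 usually boils down to something like the following on the target. This is only a sketch, assuming the root filesystem sits on the second partition of /dev/mmcblk0; the provided sd_card_init script may do it differently:

# Grow the rootfs partition to the end of the card, then grow the ext4 filesystem to match
parted -s /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2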


Once this is done, it is time to go ahead and cross-compile some applications.


On the host side:

Go to ~/Vitis-AI/demo/VART/resnet50_ext


Don't forget to source the SDK environment in your shell first, then build the application with the command below.
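
Sourcing the environment typically looks like this, assuming the SDK was installed alongside the sysroot from the previous step (adjust the path and script name to your installation):

source <full_pathname_to_zcu104_custom_pkg>/environment-setup-cortexa72-cortexa53-xilinx-linux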

./build.sh


Copy the result to the board with the following command (replace the IP address with your board's):

scp resnet50_ext root@192.168.1.157:~/demo/VART/resnet50_ext
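
If the scp fails because the destination directory does not exist yet on the board, create it first and retry. A sketch, reusing the example IP address:

ssh root@192.168.1.157 "mkdir -p ~/demo/VART/resnet50_ext"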

Then, run these commands on the target board:

cd ~/demo/VART/resnet50_ext
chmod 777 resnet50_ext
./resnet50_ext /usr/share/vitis_ai_library/models/resnet50/resnet50.xmodel ../images/001.jpg

The result should be as follows:

score[109]  =  0.982666     text: brain coral,
score[973]  =  0.00850172   text: coral reef,
score[955]  =  0.00662115   text: jackfruit, jak, jack,
score[397]  =  0.000543497  text: puffer, pufferfish, blowfish, globefish,
score[390]  =  0.000329648  text: eel,

If you see this result, your build is successful! Let's now move on to building Pose Detection and Face Detection applications.


Pose Detection


On the host

cd ~/Vitis-AI/demo/VART/pose_detection
./build.sh
scp pose_detection root@192.168.1.157:~/demo/VART/pose_detection 

On the target

cd ~/demo/VART/pose_detection
chmod 777 pose_detection
export DISPLAY=:0.0
xrandr --output DP-1 --mode 800x600
./pose_detection video/pose.webm /usr/share/vitis_ai_library/models/sp_net/sp_net.xmodel /usr/share/vitis_ai_library/models/ssd_pedestrian_pruned_0_97/ssd_pedestrian_pruned_0_97.xmodel

If successful, you should see a video start on your display with the pose of the dancer highlighted.



Face Detection


On the target

mkdir -p ~/demo/Vitis-AI-Library/samples/

On the host

cd ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect
./build.sh

Copy the facedetect directory to ~/demo/Vitis-AI-Library/samples/facedetect on the target, as shown below.
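
A sketch with scp, reusing the example board IP address (adjust to yours):

scp -r ~/Vitis-AI/demo/Vitis-AI-Library/samples/facedetect root@192.168.1.157:~/demo/Vitis-AI-Library/samples/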


On the target

cd ~/demo/Vitis-AI-Library/samples/facedetect
chmod 777 test_video_facedetect
export DISPLAY=:0.0
xrandr --output DP-1 --mode 800x600
./test_video_facedetect densebox_640_360 0

You should now see your webcam feed open in a separate window. The face detection application is running on the DPU hardware, and detected faces are highlighted in the video.


You can now use these examples as a baseline for your own ideas!

