Running Open3D-PointNet2-Semantic3D-master
- 1. Modify the download_semantic3d.sh file
```bash
#!/bin/bash
ans=`dpkg-query -W p7zip-full`
if [ -z "$ans" ]; then
    echo "Please, install p7zip-full by running: sudo apt-get install p7zip-full"
    exit -1
fi

for i in `cat semantic3D_files.csv`
do
    output_file=`basename $i`
    echo Downloading ${output_file} ...
    # changed "wget $i" to "wget -c -N $i": -c resumes partial downloads,
    # -N skips files that are already up to date
    wget -c -N $i
    7z x ${output_file} -y
done

mv station1_xyz_intensity_rgb.txt neugasse_station1_xyz_intensity_rgb.txt

exit 0
```
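With the -c/-N flags the script can simply be re-run if a download is interrupted. A typical invocation, run from the directory that contains semantic3D_files.csv (the script reads it with a relative path):

```bash
sudo apt-get install p7zip-full    # the script checks for this with dpkg-query
chmod +x download_semantic3d.sh
./download_semantic3d.sh
```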
- 2. Modify the preprocess.py file
```python
import os
import subprocess
import shutil
import open3d

from dataset.semantic_dataset import all_file_prefixes


def wc(file_name):
    out = subprocess.Popen(
        ["wc", "-l", file_name], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    ).communicate()[0]
    return int(out.partition(b" ")[0])


def prepend_line(file_name, line):
    with open(file_name, "r+") as f:
        content = f.read()
        f.seek(0, 0)
        f.write(line.rstrip("\r\n") + "\n" + content)


def point_cloud_txt_to_pcd(raw_dir, file_prefix):
    # File names
    txt_file = os.path.join(raw_dir, file_prefix + ".txt")
    pts_file = os.path.join(raw_dir, file_prefix + ".pts")
    pcd_file = os.path.join(raw_dir, file_prefix + ".pcd")

    # Skip if already done
    if os.path.isfile(pcd_file):
        print("pcd {} exists, skipped".format(pcd_file))
        return

    # .txt to .pts
    # We could just prepend the line count, however, there are some intensity
    # values which are non-integers.
    print("[txt->pts]")
    print("txt: {}".format(txt_file))
    print("pts: {}".format(pts_file))
    with open(txt_file, "r") as txt_f, open(pts_file, "w") as pts_f:
        for line in txt_f:
            # x, y, z, i, r, g, b
            tokens = line.split()
            tokens[3] = str(int(float(tokens[3])))
            line = " ".join(tokens)
            pts_f.write(line + "\n")
    prepend_line(pts_file, str(wc(txt_file)))

    # .pts -> .pcd
    print("[pts->pcd]")
    print("pts: {}".format(pts_file))
    print("pcd: {}".format(pcd_file))
    """
    Changed:
        point_cloud = open3d.read_point_cloud(pts_file)
        open3d.write_point_cloud(pcd_file, point_cloud)
    to:
        point_cloud = open3d.io.read_point_cloud(pts_file)
        open3d.io.write_point_cloud(pcd_file, point_cloud)
    """
    point_cloud = open3d.io.read_point_cloud(pts_file)
    open3d.io.write_point_cloud(pcd_file, point_cloud)
    os.remove(pts_file)


if __name__ == "__main__":
    # By default
    # raw data: "dataset/semantic_raw"
    current_dir = os.path.dirname(os.path.realpath(__file__))
    dataset_dir = os.path.join(current_dir, "dataset")
    raw_dir = os.path.join(dataset_dir, "semantic_raw")

    for file_prefix in all_file_prefixes:
        point_cloud_txt_to_pcd(raw_dir, file_prefix)
```
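The only per-line change in the txt-to-pts step is truncating the intensity column to an integer (the .pts file also gets the point count prepended as a header line). A tiny illustration with a made-up input line:

```python
# Hypothetical "x y z intensity r g b" line; only the intensity column is rewritten.
line = "1.234 5.678 9.012 -1.5 120 130 140"
tokens = line.split()
tokens[3] = str(int(float(tokens[3])))  # same fix-up as in point_cloud_txt_to_pcd()
print(" ".join(tokens))  # -> 1.234 5.678 9.012 -1 120 130 140
```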
- 3. Convert .txt to .pcd

Run:

```bash
python preprocess.py
```

Open3D is able to read .pcd files much more efficiently.
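As a quick sanity check that the conversion worked, any of the generated files can be loaded back with Open3D. The scene name below is just the one renamed by the download script; substitute any generated .pcd:

```python
import open3d

# Scene name taken from the download script's final "mv"; adjust as needed.
pcd = open3d.io.read_point_cloud(
    "dataset/semantic_raw/neugasse_station1_xyz_intensity_rgb.pcd"
)
print(pcd)  # e.g. "PointCloud with N points."
```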
- 4. Downsample

The downsampled dataset will be written to dataset/semantic_downsampled. Points with label 0 (unlabeled) are excluded during downsampling.
In downsample.py, change open3d.Vector3dVector() to open3d.utility.Vector3dVector(). Likewise, change

```python
open3d.voxel_down_sample_and_trace(
    dense_pcd, voxel_size, min_bound, max_bound, False
)
```

to

```python
open3d.geometry.PointCloud.voxel_down_sample_and_trace(
    dense_pcd, voxel_size, min_bound, max_bound, False
)
```
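A minimal sketch combining both renamed calls on toy data (this is not the repo's actual downsample.py; it assumes a reasonably recent Open3D where these functions live under open3d.utility and open3d.geometry):

```python
import numpy as np
import open3d

# Toy labelled cloud standing in for a Semantic3D scene.
points = np.random.rand(10000, 3)
labels = np.random.randint(0, 9, size=10000)

# Drop label 0 ("unlabeled") before downsampling, as described above.
mask = labels != 0
dense_pcd = open3d.geometry.PointCloud()
dense_pcd.points = open3d.utility.Vector3dVector(points[mask])  # new Vector3dVector location

voxel_size = 0.05
min_bound = dense_pcd.get_min_bound() - voxel_size * 0.5
max_bound = dense_pcd.get_max_bound() + voxel_size * 0.5

# New location of voxel_down_sample_and_trace; the first returned element is the
# downsampled cloud, the remaining elements are the point-to-voxel trace info.
result = open3d.geometry.PointCloud.voxel_down_sample_and_trace(
    dense_pcd, voxel_size, min_bound, max_bound, False
)
sparse_pcd = result[0]
print(len(sparse_pcd.points), "points after downsampling")
```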
- 5. Compile TF Ops
The error encountered at the cmake .. step:

CMake Error at CMakeLists.txt:4 (cmake_minimum_required):
CMake 3.8 or higher is required. You are running version 3.5.1

Suggested fix: upgrade CMake (reference: https://blog.csdn.net/weixin_43046653/article/details/86511157). In my case this still did not solve the problem, so instead I copied the .so files already compiled for pointnet++ straight into the build directory.
(After compilation, the required .so files shall be in the build directory.)
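If you already have a working pointnet++ (pointnet2) checkout with compiled ops, copying them over looks roughly like this. The source paths follow the standard pointnet2 layout and the destination is an assumption about where this repo's build directory lives; adjust both to your setup:

```bash
# Assumed paths, not verified against the repo layout; adjust before running.
POINTNET2=~/pointnet2     # your existing pointnet2 checkout
BUILD_DIR=tf_ops/build    # this repo's TF-ops build directory
cp "$POINTNET2"/tf_ops/sampling/tf_sampling_so.so             "$BUILD_DIR"/
cp "$POINTNET2"/tf_ops/grouping/tf_grouping_so.so             "$BUILD_DIR"/
cp "$POINTNET2"/tf_ops/3d_interpolation/tf_interpolate_so.so  "$BUILD_DIR"/
```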
- 6. Train

Run:

```bash
python train.py
```
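Training progress can be monitored with TensorBoard, assuming train.py writes its TensorFlow summaries under the same log directory that holds the checkpoints used in the next step:

```bash
# Assumes summaries are written under log/ alongside the checkpoints; adjust if not.
tensorboard --logdir log
```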
- 7. Predict

Pick a checkpoint and run the predict.py script. The prediction dataset is configured by --set. Since PointNet2 only takes a few thousand points per forward pass, we need to sample from the prediction dataset multiple times to get good coverage of the points. Each sample contains the few thousand points required by PointNet2. To specify the number of such samples per scene, use the --num_samples flag.
```bash
python predict.py --ckpt log/semantic/best_model_epoch_040.ckpt \
                  --set=validation \
                  --num_samples=500
```
The prediction results will be written to result/sparse.
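To get a quick look at what was produced, the sparse predictions can be listed and loaded back with Open3D. The .pcd glob pattern below is an assumption about the output format; adjust it to the files predict.py actually writes under result/sparse:

```python
import glob
import open3d

# Assumption: one point cloud file per scene under result/sparse; adjust the
# pattern to whatever predict.py actually writes.
for path in sorted(glob.glob("result/sparse/*.pcd")):
    pcd = open3d.io.read_point_cloud(path)
    print(path, "-", len(pcd.points), "points")
```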