Lane Line Detection

Detection steps:

  • Camera calibration
  • Distortion correction of the images
  • Image thresholding
  • Perspective transform
  • Detect lane pixels and fit the lane boundary
  • Compute the lane curvature and the vehicle position relative to the lane center
  • Warp the detected lane boundary back onto the original image

1. Camera Calibration

1.1 Corner Detection

I start by preparing the object points, which are the (x, y, z) coordinates of the chessboard corners in the world. Here I assume the chessboard is fixed on the (x, y) plane at z = 0, so the object points are the same for every calibration image. Thus objp is just a replicated array of coordinates, and objpoints is appended with a copy of it every time all chessboard corners are successfully detected in a test image. imgpoints is appended with the (x, y) pixel position of each corner in the image plane for every successful chessboard detection.

import numpy as np

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.

The imgpoints are obtained with the cv2.findChessboardCorners() function.
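A minimal sketch of how objpoints and imgpoints are collected with this function (the path pattern of the calibration images is an assumption, not necessarily the one used in this project):

import glob
import cv2

# Loop over the calibration images (path pattern is an assumption)
for fname in glob.glob('./camera_cal/calibration*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the 9x6 inner chessboard corners in the grayscale image
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)

    # If all corners were found, store the object/image point pair
    if ret:
        objpoints.append(objp)
        imgpoints.append(corners)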

1.2 Calibration

I then use the resulting objpoints and imgpoints to compute the camera calibration matrix and distortion coefficients with the cv2.calibrateCamera() function, and apply cv2.undistort() to a test image to correct its distortion:

# Test undistortion on an image
img = cv2.imread('./calibration.jpg')
img_size = (img.shape[1], img.shape[0])

# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)

# Undistort the test image with the computed camera matrix and distortion coefficients
dst = cv2.undistort(img, mtx, dist, None, mtx)

The result is shown below:


Figure 1: Undistortion result

2. Image Pipeline

2.1 Image Acquisition

An original road image is first corrected with the calibration parameters obtained above, using cv2.undistort(), and the remaining steps are applied to the corrected image; the result is shown in Figure 2.


Figure 2: Distortion-corrected road image

2.2 Image Thresholding

I use a combination of color and gradient thresholds to generate a binary image. I applied the color transform and gradient threshold to the test image with the binary_select() function below; an example of the output of this step is shown in Figure 3.

# Define a function that thresholds the S-channel of HLS and its x gradient
def binary_select(image, s_thresh=(170, 255), sx_thresh=(20, 100)):
    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS) # Convert to HLS color space
    s_channel = hls[:,:,2] # Extract the S channel

    # Threshold color channel
    s_binary = np.zeros_like(s_channel)
    s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 255

    # Sobel x
    sobelx = cv2.Sobel(s_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
    abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
    scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))

    # Threshold x gradient
    sxbinary = np.zeros_like(scaled_sobel)
    sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 255

    # Combine the color and gradient thresholds (keep pixels that pass both)
    binary_output = np.zeros_like(s_channel)
    binary_output[(s_binary >= 255) & (sxbinary >= 255)] = 255

    return s_channel, s_binary, sxbinary, binary_output


Figure 3: Thresholding result
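Since binary_select() converts from RGB to HLS, the input is assumed to be in RGB channel order; a minimal usage sketch (the test image path and the variable names are assumptions):

import matplotlib.image as mpimg

# Load a test image in RGB order, undistort it, then threshold it
rgb = mpimg.imread('./test1.jpg')                       # path is an assumption
undistorted = cv2.undistort(rgb, mtx, dist, None, mtx)
s_channel, s_binary, sxbinary, combined = binary_select(undistorted)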

2.3 Perspective Transform

The code for my perspective transform includes a function called birds_eye_warp(), which takes an image (img), source points (src), and destination points (dst) as inputs. The source points (src) are pixel coordinates picked on the image, while the destination points (dst) are estimated from the proportions of the real scene.

Source (src)      Destination (dst)
575, 464          150, 0
707, 464          1130, 0
1049, 682         1130, 720
258, 682          150, 720

The transform between the source and destination points is then obtained with the cv2.getPerspectiveTransform() function:

# Warp an image to a "birds-eye view"
def birds_eye_warp(img, src, dst):
    h, w = img.shape[:2]
    M = cv2.getPerspectiveTransform(src, dst)      # forward transform: src -> dst
    Minv = cv2.getPerspectiveTransform(dst, src)   # inverse transform, used to warp back later
    # use cv2.warpPerspective() to warp your image to a top-down view
    warped = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, M, Minv
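The source and destination points from the table above have to be passed to birds_eye_warp() as float32 arrays; a minimal usage sketch, assuming combined is the thresholded binary image from section 2.2:

# Points from the table above, in float32 as required by cv2.getPerspectiveTransform()
src = np.float32([[575, 464], [707, 464], [1049, 682], [258, 682]])
dst = np.float32([[150, 0], [1130, 0], [1130, 720], [150, 720]])

# Warp the thresholded binary image to a top-down view
binary_warped, M, Minv = birds_eye_warp(combined, src, dst)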

Finally, I verified that the perspective transform works as expected by drawing the src and dst points onto a test image and its warped counterpart and checking that the lane lines appear parallel in the warped image.


Figure 4: Bird's-eye view

2.4 Fitting Lane-Line Pixels

First, the hist() function is used to find pixels that belong to the lane lines: the two strongest peaks of the histogram are chosen as the starting points of the lane lines. Then a sliding-windows strategy is used to select the lane-line pixels segment by segment.

def hist(img):
    # img is a normalized single-channel (binary) image
    # Grab only the bottom half of the image:
    # lane lines are likely to be mostly vertical nearest to the car
    bottom_half = img[img.shape[0]//2:, :]

    # Sum across image pixels vertically -
    # the columns with the most lane-line pixels give the highest values
    histogram = np.sum(bottom_half, axis=0)

    return histogram
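The two strongest peaks mentioned above can then be taken as the base x positions of the left and right lines, for example (a sketch, with binary_warped the warped binary image from section 2.3):

# Strongest peak on each half of the histogram gives the starting x of each line
histogram = hist(binary_warped)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint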


Figure 5: Histogram of column-wise pixel counts

Finally, I fit the lane-line pixels with a second-order polynomial using numpy.polyfit(): their x and y pixel positions are used to fit the curve \(f(y) = Ay^2 + By + C\).
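A condensed sketch of the sliding-windows search followed by the polynomial fit (the window count, margin and minpix values are assumptions, not necessarily the ones used in this project):

def fit_lane_lines(binary_warped, nwindows=9, margin=100, minpix=50):
    # Base positions from the histogram peaks (see above)
    histogram = hist(binary_warped)
    midpoint = histogram.shape[0] // 2
    leftx_current = np.argmax(histogram[:midpoint])
    rightx_current = np.argmax(histogram[midpoint:]) + midpoint

    # Coordinates of all nonzero (lane-candidate) pixels
    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = binary_warped.shape[0] // nwindows
    left_lane_inds, right_lane_inds = [], []

    # Step the windows up the image, re-centering each one on the pixels it finds
    for window in range(nwindows):
        win_y_low = binary_warped.shape[0] - (window + 1) * window_height
        win_y_high = binary_warped.shape[0] - window * window_height

        good_left = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                     (nonzerox >= leftx_current - margin) &
                     (nonzerox < leftx_current + margin)).nonzero()[0]
        good_right = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
                      (nonzerox >= rightx_current - margin) &
                      (nonzerox < rightx_current + margin)).nonzero()[0]
        left_lane_inds.append(good_left)
        right_lane_inds.append(good_right)

        # Re-center the next window on the mean x of the pixels found so far
        if len(good_left) > minpix:
            leftx_current = int(np.mean(nonzerox[good_left]))
        if len(good_right) > minpix:
            rightx_current = int(np.mean(nonzerox[good_right]))

    left_lane_inds = np.concatenate(left_lane_inds)
    right_lane_inds = np.concatenate(right_lane_inds)

    # Fit f(y) = A*y^2 + B*y + C to the selected pixels of each line
    left_fit = np.polyfit(nonzeroy[left_lane_inds], nonzerox[left_lane_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_lane_inds], nonzerox[right_lane_inds], 2)
    return left_fit, right_fit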


Figure 6: Fitted lane-line curves

2.5 Lane Curvature and Vehicle Position

Using the reference formula for the radius of curvature of a curve x = f(y),

\(R_{curve} = \dfrac{\left(1 + \left(\frac{dx}{dy}\right)^{2}\right)^{3/2}}{\left|\frac{d^{2}x}{dy^{2}}\right|} = \dfrac{\left(1 + (2Ay + B)^{2}\right)^{3/2}}{\left|2A\right|}\)

the function measure_curvature_real() computes the radius of curvature of the fitted curves at any point, converted to real-world units:

def measure_curvature_real(left_fitx, right_fitx, ploty, left_fit, right_fit):

    # Define conversions in x and y from pixels space to meters
    ym_per_pix = 16.0/720 # meters per pixel in y dimension
    xm_per_pix = 3.7/1000 # meters per pixel in x dimension

    # Convert the fitted pixel coordinates to meters
    leftx = left_fitx*xm_per_pix
    rightx = right_fitx*xm_per_pix
    ploty = ploty*ym_per_pix

    # Refit the polynomials in real-world (meter) space
    left_fit_cr = np.polyfit(ploty, leftx, 2)
    right_fit_cr = np.polyfit(ploty, rightx, 2)

    # Define y-value where we want radius of curvature
    # We'll choose the maximum y-value, corresponding to the bottom of the image
    # (ploty is already in meters here, so no further scaling is needed)
    y_eval = np.max(ploty)

    # Implement the calculation of R_curve (radius of curvature)
    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])

    return left_curverad, right_curverad

The quantities used to estimate the vehicle position are:

  • \(car_{p}\) is the position of the car in the image; since the camera is mounted at the center of the car, this is simply half the image width: car_position = img.shape[1]/2.
  • \(xm_{pp}\) is the conversion from image pixels to real-world distance, i.e. xm_per_pix = 3.7/1000 in the code, where 3.7 m is the actual lane width and 1000 is the corresponding lane width in pixels.
  • \(lanecenter_{p}\) is the lane center, computed from the detected lane lines along the x axis: lane_center_position = (r_fit_x_int + l_fit_x_int) / 2.

Finally, the position of the vehicle relative to the lane center, \(center_{d}\), is computed as:

\(center_{d} = (car_{p} - lanecenter_{p}) * xm_{pp}\)
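In code, the offset can be computed roughly as follows (a sketch; left_fit and right_fit are the fitted polynomials from section 2.4, and l_fit_x_int / r_fit_x_int are their x values at the bottom of the image):

# x positions of both fitted lines at the bottom of the image (y = image height)
h = img.shape[0]
l_fit_x_int = left_fit[0]*h**2 + left_fit[1]*h + left_fit[2]
r_fit_x_int = right_fit[0]*h**2 + right_fit[1]*h + right_fit[2]

car_position = img.shape[1]/2                                      # camera is mounted at the car's center
lane_center_position = (r_fit_x_int + l_fit_x_int) / 2             # lane center in pixels
center_dist = (car_position - lane_center_position) * xm_per_pix   # signed offset in meters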

2.6 Detection Results


Figure 7: Detection result
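A sketch of how the detected lane area can be warped back onto the original (undistorted) image to produce an overlay like the one in Figure 7 (left_fitx, right_fitx and ploty are assumed to be the fitted x values and y samples from section 2.4, and Minv the inverse transform returned by birds_eye_warp()):

def draw_lane(undistorted, binary_warped, left_fitx, right_fitx, ploty, Minv):
    # Draw the lane area between the two fitted lines on a blank warped canvas
    color_warp = np.zeros((binary_warped.shape[0], binary_warped.shape[1], 3), dtype=np.uint8)
    pts_left = np.transpose(np.vstack([left_fitx, ploty]))
    pts_right = np.flipud(np.transpose(np.vstack([right_fitx, ploty])))
    pts = np.vstack((pts_left, pts_right)).astype(np.int32)
    cv2.fillPoly(color_warp, [pts], (0, 255, 0))

    # Warp the lane area back to the original perspective and blend it in
    h, w = undistorted.shape[:2]
    newwarp = cv2.warpPerspective(color_warp, Minv, (w, h))
    return cv2.addWeighted(undistorted, 1, newwarp, 0.3, 0)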

---

3. Video Pipeline

Applying the image detection pipeline to a video of the vehicle driving produces a result with only slight jitter and no catastrophic failures that would cause the car to drive off the road.
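One way to run the per-frame pipeline over a video is with moviepy (the use of moviepy and the file paths here are assumptions about tooling; process_frame stands for the full image pipeline described above):

from moviepy.editor import VideoFileClip

def process_frame(frame):
    # frame is an RGB image; run the full image pipeline and return the annotated frame
    undistorted = cv2.undistort(frame, mtx, dist, None, mtx)
    # ... thresholding, perspective transform, lane fitting, drawing the overlay ...
    return undistorted  # placeholder: return the fully annotated frame here

clip = VideoFileClip('./project_video.mp4')                      # input path is an assumption
out_clip = clip.fl_image(process_frame)
out_clip.write_videofile('./project_video_output.mp4', audio=False)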
