Learn to Detect Highway Lane Lines with OpenCV and Python

with machinehorizon


Getting Started with Lane Line Detection

Being able to detect lane lines is a critical task for any autonomous vehicle. In this lesson, I will show you how to develop a simple pipeline with OpenCV for finding lane lines in an image, then apply that pipeline to a full video feed.

On a side note, I got started with these skills through Udacity's Self-Driving Car program, which I highly recommend checking out if you want to get into this field.


This lesson walks through a series of helper methods that are then combined into a single processing pipeline.


You will need:

- Python 3 with NumPy and Matplotlib

- OpenCV 3 (a quick version check follows below)
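
If you are not sure whether your environment is set up, a quick sanity check looks like this (assuming OpenCV was installed through a package such as opencv-python; this snippet is only a check, not part of the pipeline):


import cv2
import numpy as np
import matplotlib

# Confirm the libraries import and print their versions.
print(cv2.__version__)        # should report a 3.x release for this lesson
print(np.__version__)
print(matplotlib.__version__)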


Helper Methods

The helper methods below are used to wrap OpenCV functions into a more readable final pipeline. 



import cv2
import numpy as np


def grayscale(img):
    """Convert an image to grayscale."""
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def canny(img, low_threshold, high_threshold):
    """Apply Canny edge detection with the given thresholds."""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Apply a Gaussian blur with a square kernel."""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """Black out everything outside the polygon defined by vertices."""
    mask = np.zeros_like(img)

    # The fill color needs one value per channel.
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255

    cv2.fillPoly(mask, vertices, ignore_mask_color)

    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=(255, 0, 0), thickness=7):
    """Draw each (x1, y1, x2, y2) segment onto img in place."""
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """Run the probabilistic Hough transform on an edge image and return
    a blank image with the detected segments drawn on it."""
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros(img.shape, dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img

def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """Blend the line image onto the original: initial_img * α + img * β + λ."""
    return cv2.addWeighted(initial_img, α, img, β, λ)

Final Video Output from the Script

Your final output from the OpenCV pipeline should look something like the demo video for this lesson.

Applying Canny Edge Detection and Masking a Region of Interest

We will start by converting the image to grayscale and applying a Gaussian blur before isolating the region of interest.

The second step is to run Canny edge detection in OpenCV. The grayscale and blur steps help the main lane lines stand out.

Lastly, it's important to cut out as much noise as possible from the frame. We know the general area in which the road will appear, so let's isolate that area with a trapezoid shape.



def pipeline(image):  
    ### Params for region of interest
    bot_left = [80, 540]
    bot_right = [980, 540]
    apex_right = [510, 315]
    apex_left = [450, 315]
    v = [np.array([bot_left, bot_right, apex_right, apex_left], dtype=np.int32)]
    
    ### Run canny edge detection and mask region of interest
    gray = grayscale(image)
    blur = gaussian_blur(gray, 7)
    edge = canny(blur, 50, 125)
    mask = region_of_interest(edge, v)
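
Before moving on, it helps to plot the masked edge image and confirm the trapezoid actually covers the lane area. A quick standalone check with Matplotlib might look like this (test_image.jpg is a placeholder name; any 960x540 road frame will do):


import matplotlib.pyplot as plt
import matplotlib.image as mpimg

image = mpimg.imread('test_image.jpg')   # placeholder file name

### Re-run the same steps as the pipeline so far
v = [np.array([[80, 540], [980, 540], [510, 315], [450, 315]], dtype=np.int32)]
gray = grayscale(image)
blur = gaussian_blur(gray, 7)
edge = canny(blur, 50, 125)
mask = region_of_interest(edge, v)

plt.imshow(mask, cmap='gray')
plt.show()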

Hough Lines

We are going to use probabilistic Hough lines in OpenCV to identify the location of lane lines in the road. The HoughLinesP function in OpenCV returns an array of line segments, each described by its endpoints (x1, y1, x2, y2), which can then be drawn onto the image.


The problem here is that we have two distinct lines we want to detect: the right lane marker and the left lane marker. In the next section, I will organize the segments by slope and reject outliers that throw off the intended slope of each line. First things first, add all of the detected lines to a variable.


def pipeline(image):
    ## ...
    ### Run the probabilistic Hough transform on the masked edge image.
    ### rho = 0.8 px and theta = 1 degree set the resolution of the Hough accumulator;
    ### threshold, minLineLength and maxLineGap control which segments are kept.
    ### The result has shape (N, 1, 4): one (x1, y1, x2, y2) row per detected segment.
    lines = cv2.HoughLinesP(mask, 0.8, np.pi/180, 25, np.array([]), minLineLength=50, maxLineGap=200)

Separating Lines by Slope

You may remember how to find the slope of a line from high school algebra: m = (y2 - y1) / (x2 - x1). We are going to use this equation to organize lines by their slope. It is important to point out that image coordinates put the origin at the top-left corner, so the y-axis is effectively inverted: positive slopes correspond to the right lane line and negative slopes to the left lane line.
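
To make that concrete, take the region-of-interest corners used in the pipeline above. A segment running from the left apex (450, 315) down to the bottom-left corner (80, 540) has slope m = (540 - 315) / (80 - 450) ≈ -0.61, so it lands in the left bucket, while a segment from the right apex (510, 315) down to the bottom-right corner (980, 540) has slope m = (540 - 315) / (980 - 510) ≈ 0.48, which lands in the right bucket. These values are also roughly where the slope cutoffs used later in the final pipeline sit.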


def separate_lines(lines):
    """ Takes an array of Hough lines and separates them by +/- slope.
        The y-axis is inverted in image coordinates, so positive slopes
        correspond to right lane lines and negative slopes to left lane lines. """
    right = []
    left = []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        m = (float(y2) - y1) / (x2 - x1)
        if m >= 0:
            right.append([x1, y1, x2, y2, m])
        else:
            left.append([x1, y1, x2, y2, m])

    return right, left

Here's the updated pipeline up to this point:


def pipeline(image):
    ## ...
    right_lines, left_lines = separate_lines(lines)

Extending Hough Lines into a Single Unified Lane Line

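The locked section is not reproduced here, but the final pipeline below depends on two helpers it introduces: reject_outliers, which drops Hough segments whose slope falls outside a cutoff range, and merge_lines, which combines the surviving segments into one extended lane line per side. The sketch below is a minimal stand-in based on how those functions are called later, not the lesson's actual implementation; the hard-coded y values assume the same 960x540 frames and region-of-interest vertices used throughout.


def reject_outliers(data, cutoff):
    ### Keep only segments whose slope (column 4) falls inside the (low, high) cutoff.
    data = np.array(data)
    return data[(data[:, 4] >= cutoff[0]) & (data[:, 4] <= cutoff[1])]

def merge_lines(lines):
    ### Fit one straight line through every surviving endpoint, then extend it
    ### from the bottom of the region of interest (y = 540) up to the apex (y = 315).
    xs = np.concatenate([lines[:, 0], lines[:, 2]])
    ys = np.concatenate([lines[:, 1], lines[:, 3]])
    poly = np.polyfit(ys, xs, deg=1)   # x as a function of y
    y_bottom, y_top = 540, 315
    x_bottom = int(np.polyval(poly, y_bottom))
    x_top = int(np.polyval(poly, y_top))
    ### Shape (1, 1, 4) so np.concatenate and draw_lines below work unchanged.
    return np.array([[[x_bottom, y_bottom, x_top, y_top]]])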

Final Pipeline

Here's what the final pipeline looks like, which can be plugged into MoviePy for frame-by-frame lane line detection.


def pipeline(image):  
    ### Params for region of interest
    bot_left = [80, 540]
    bot_right = [980, 540]
    apex_right = [510, 315]
    apex_left = [450, 315]
    v = [np.array([bot_left, bot_right, apex_right, apex_left], dtype=np.int32)]
    
    ### Run canny edge detection and mask region of interest
    gray = grayscale(image)
    blur = gaussian_blur(gray, 7)
    edge = canny(blur, 50, 125)
    mask = region_of_interest(edge, v)
    
    ### Run Hough Lines and separate by +/- slope
    lines = cv2.HoughLinesP(mask, 0.8, np.pi/180, 25, np.array([]), minLineLength=50, maxLineGap=200)

    right_lines, left_lines = separate_lines(lines)
    right = reject_outliers(right_lines,  cutoff=(0.45, 0.75))
    right = merge_lines(right)
    
    left = reject_outliers(left_lines, cutoff=(-0.85, -0.6))
    left = merge_lines(left)

    lines = np.concatenate((right, left))
    
    ### Draw lines and return final image 
    line_img = np.zeros_like(image)  ### blank image to draw the lane lines on
    draw_lines(line_img, lines, thickness=10)
    
    line_img = region_of_interest(line_img, v)
    final = weighted_img(line_img, image)
    

    return final
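
Before wiring this into a video, it's worth sanity-checking the full pipeline on a single still frame (again, test_image.jpg is a placeholder name; the hard-coded region of interest assumes a 960x540 frame):


import matplotlib.pyplot as plt
import matplotlib.image as mpimg

frame = mpimg.imread('test_image.jpg')   # placeholder file name
plt.imshow(pipeline(frame))
plt.show()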

The pipeline plugged into MoviePy:


from moviepy.editor import VideoFileClip, ImageClip
from IPython.display import HTML

def process_image(image):
    result = pipeline(image)
    return result

white_output = 'final.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image)
# %time is an IPython magic, so run this in a Jupyter notebook (drop it for a plain script)
%time white_clip.write_videofile(white_output, audio=False)
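
The IPython.display.HTML import above is presumably there so you can preview the rendered clip inline in a Jupyter notebook; one common way to embed it looks like this:


HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))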

Visualizing the Steps in the Pipeline

Here are a few visuals of the intermediate steps in the transformation process.

Advanced Techniques: Color Masking and Prior Frame Averaging (Extra Challenge)

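That section is not reproduced here either, but as a rough illustration of the color-masking idea named in the title: a common approach is to keep only pixels that look white or yellow (typical lane-paint colors) before running edge detection. The thresholds below are illustrative guesses, not tuned values from the lesson.


def color_mask(image):
    ### Keep only pixels close to white or yellow in HSV space.
    ### These bounds are illustrative, not values from the locked section.
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 30, 255]))
    yellow = cv2.inRange(hsv, np.array([15, 80, 120]), np.array([35, 255, 255]))
    lane_colors = cv2.bitwise_or(white, yellow)
    return cv2.bitwise_and(image, image, mask=lane_colors)

The prior-frame-averaging part of the challenge amounts to keeping a running average of each lane line's endpoints across recent frames so the drawn lines don't jitter from frame to frame.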
