Lane markings detection and vanishing point detection with OpenCV

Hi guys! The vanishing point detection topic has occupied a good part of my (research) life. Indeed, I spent quite a long time finishing my PhD, whose title was “Detection and tracking of vanishing points in dynamic environments”.

In this post I would like to show a simple yet robust solution for the detection of a single vanishing point in road scenes.

Vanishing point detection

Original grayscale image

The vanishing point in this scenario can be very useful to retrieve the camera calibration, to perform planar homography transformations, to determine a ROI inside the image, etc. Although there are several vanishing points defined by the elements of this scenario (the vertical and horizontal directions of the panels), we want to focus on the vanishing point defined by the lane markings. Note that for curvy roads the vanishing point does not exist, although you can think of it as the direction of the tangent at the car's position on the curve.

For that reason, we first need to extract the lane markings, which can be done in many, many different ways (thresholding the intensity, connected components, edges, etc.). In this post I share one of the fastest methods I have used in my work. It is published in my Springer MVAP paper “Road environment modeling using robust perspective analysis and recursive Bayesian segmentation”, and I share the code in C++/OpenCV here (sorry, it is an image because the <code> </code> HTML tags do not seem to work well in WordPress):

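In essence, the filter compares each pixel with its left and right neighbours at a distance tau, so a bright band about tau pixels wide over a darker background gives a strong response. A minimal sketch of such a filter (assuming an 8-bit single-channel input srcGRAY and a pre-allocated output dstGRAY of the same size and type):

#include <opencv2/core/core.hpp>
#include <cstdlib>

void laneMarkingsDetector(cv::Mat &srcGRAY, cv::Mat &dstGRAY, int tau)
{
    dstGRAY.setTo(0);

    int aux = 0;
    for (int j = 0; j < srcGRAY.rows; ++j)
    {
        unsigned char *ptRowSrc = srcGRAY.ptr<uchar>(j);
        unsigned char *ptRowDst = dstGRAY.ptr<uchar>(j);

        for (int i = tau; i < srcGRAY.cols - tau; ++i)
        {
            // Skip pixels that are already black (e.g. the masked upper half)
            if (ptRowSrc[i] != 0)
            {
                // Response is high when the pixel is brighter than both neighbours at distance tau
                aux = 2 * ptRowSrc[i];
                aux += -ptRowSrc[i - tau];
                aux += -ptRowSrc[i + tau];

                // Penalize asymmetric responses (plain edges rather than bands)
                aux -= std::abs((int)ptRowSrc[i - tau] - (int)ptRowSrc[i + tau]);

                // Clamp to [0, 255]
                aux = (aux < 0) ? 0 : aux;
                aux = (aux > 255) ? 255 : aux;
                ptRowDst[i] = (unsigned char)aux;
            }
        }
    }
}
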
Applying this filter we get images like the following (note that I have set the upper half of the image to black). Here tau is the expected width (in pixels) of the lane markings. For better performance this value can be adapted to the perspective of the road, although in this special case the perspective is exactly what we do not have yet:

Detected lane markings

After a proper thresholding we can get something like this:

Binarized lane markings

Although we do have a lot of false-positive pixels (the vehicle or lateral elements of the scene), the following robust stages will still find the correct vanishing point.

Using OpenCV, I have found that a quite reliable solution is based on (i) the use of the Hough transform, and (ii) the computation of the intersection of the lines we get.

For the first part, OpenCV has two main options: the Standard Hough Transform (SHT) and the Progressive Probabilistic Hough Transform (PPHT). I use the first because it returns lines rather than pairs of points (line segments); although it is a little slower, it requires the user to set fewer parameters and it works fine in most cases. The Hough transform can be applied as:

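A minimal sketch of that call (assuming using namespace cv and using namespace std; a rho resolution of 1 pixel and a theta resolution of 1 degree are typical choices):

// USE STANDARD HOUGH TRANSFORM
vector<Vec2f> lines_;
HoughLines(__lmGRAY, lines_, 1, CV_PI/180, __houghMinLength);
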
Where __lmGRAY is the image we obtain from laneMarkingsDetector, and __houghMinLength is the minimum length we require (it should be set according to the image dimensions; something like 30 should work for small images, e.g. 320 x 240).

The result is a set of lines that visually converge on a small region of the image:

Detected Hough lines

In this simple case there are no strong outliers, i.e. lines that clearly do not intersect at the vanishing point, although we do have a non-negligible intersection error. For cases like this, or with more outliers, we can use a RANSAC-like method to find the most likely vanishing point.

(UPDATE: The MSAC class is no longer available as it was. Instead, you can download the new MSAC class with a full sample that captures images or video and computes as many vanishing points as desired, both finite and infinite. Please refer to the specific post for more details.)

For that purpose I use a variation of RANSAC called MSAC, which simply weights inliers according to their cost (instead of just counting 1 for inliers and 0 for outliers as RANSAC does). I have programmed a very simple version of it in a C++ class, which only needs two steps:

// Initialization
__msac.init(IMG_WIDTH, IMG_HEIGHT, 1);
// Execution (passing as argument the lines obtained with the Hough transform)
__msac.singleVPEstimation(lines, &number_of_inliers, vanishing_point);

Where __msac is an object of class MSAC, and number_of_inliers is an output int that contains the number of inliers MSAC has found to compute vanishing_point. (If you want to play with this, you can go to the Code page of my website, although the code is neither optimized nor commented.)

The result is normally a good vanishing point (I have tested it on many, many types of road sequences, and it works fine as long as there are some painted lane markings).

Detected vanishing point

Additionally, I usually compute the vanishing point over a set of time instants and check whether it is coherent and steady in time (see the sketch below). If not, I restart the procedure until I find something reliable.
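
A minimal sketch of such a check (the window size and the pixel threshold are illustrative assumptions):

#include <cmath>
#include <deque>
#include <opencv2/core/core.hpp>

// Returns true once the last N vanishing point estimates stay within maxSpread
// pixels of their mean, i.e. the estimate is considered steady over time.
bool isVanishingPointSteady(std::deque<cv::Point2f> &history, cv::Point2f vp,
                            size_t N = 10, float maxSpread = 15.0f)
{
    history.push_back(vp);
    if (history.size() > N)
        history.pop_front();
    if (history.size() < N)
        return false; // not enough observations yet

    // Mean of the recent estimates
    float mx = 0.f, my = 0.f;
    for (size_t i = 0; i < history.size(); ++i)
    {
        mx += history[i].x;
        my += history[i].y;
    }
    mx /= (float)history.size();
    my /= (float)history.size();

    // Maximum deviation from the mean, in pixels
    float maxDev = 0.f;
    for (size_t i = 0; i < history.size(); ++i)
    {
        float dx = history[i].x - mx;
        float dy = history[i].y - my;
        float d = std::sqrt(dx * dx + dy * dy);
        if (d > maxDev)
            maxDev = d;
    }
    return maxDev < maxSpread;
}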

That’s all for today!


42 Responses to Lane markings detection and vanishing point detection with OpenCV

  1. Omkar Kulkarni says:

    here is the main function where i retrieve frames and display…before using msac object for class msac i sent these frames to the two functions:
    int main( int argc , char** argv)
    {

    IplImage* frame;
    IplImage * histImage;
    float ranges[]={0,255};
    float* Range[1]={&ranges[0]};
    int Bin=256;

    CvCapture* capture=cvCreateFileCapture("test.avi");
    cvNamedWindow("Video",1);
    cvNamedWindow( "Histogram", 1 );

    cout<<capture<<endl;
    if(capture==NULL)
    {
    cout<<"NO capture"<origin = 1;

    for(int i=0;i<Bin;i++)
    {
    cvLine(histImage,cvPoint(i,0),cvPoint(i,int(cvQueryHistValue_1D(R_hist,i)/50)),CV_RGB(i,0,0));
    cvLine(histImage,cvPoint(i,150),cvPoint(i,int(cvQueryHistValue_1D(G_hist,i)/50)+150),CV_RGB(0,i,0));
    cvLine(histImage,cvPoint(i,300),cvPoint(i,int(cvQueryHistValue_1D(B_hist,i)/50)+300),CV_RGB(0,0,i));
    cvLine(histImage,cvPoint(i,450),cvPoint(i,int(cvQueryHistValue_1D(Grey_hist,i)/50)+450),CV_RGB(i,i,i));
    }

    cvShowImage("Video",frame);
    cvShowImage( "Histogram", histImage );

    char c=cvWaitKey(33);
    if(c==27) break;
    }

    cvReleaseCapture(&capture);
    cvReleaseImage(&frame);
    cvReleaseImage(&histImage);
    cvDestroyWindow("video");
    cvDestroyWindow("Histogram");

    return 1;

    }

    • Omkar Kulkarni says:

      I am sorry!… The full main function isn't displayed! This is the main function where I am retrieving frames:

      #include "stdafx.h"

      #include "cv.h"
      #include "highgui.h"

      int main(int argc,char** argv)
      {
      cvNamedWindow("Canny Edges",CV_WINDOW_AUTOSIZE);
      cvNamedWindow("Original View",CV_WINDOW_AUTOSIZE);

      CvCapture* capture;
      if(argc==0)
      {
      capture=cvCreateCameraCapture(0);
      }
      else
      {
      capture=cvCreateFileCapture("new.avi");
      }

      IplImage* frameRGB;

      while(1)
      {
      frameRGB=cvQueryFrame(capture);

      /*IplImage* frameG=cvCreateImage(cvGetSize(frameRGB),IPL_DEPTH_8U,0);
      cvConvertImage(frameRGB,frameG,0);

      CvSize sz=cvGetSize(frameRGB);
      sz.width=sz.width;
      sz.height=sz.height;

      IplImage* frameRG=cvCreateImage(sz,IPL_DEPTH_8U,0);
      cvResize(frameG, frameRG,CV_INTER_LINEAR);

      IplImage* framec=cvCreateImage(sz,IPL_DEPTH_8U,0);

      cvCanny(frameRG, framec, 50, 5, 3);*/
      if(!frameRGB)break;
      cvShowImage("Original View",frameRGB);
      /*cvShowImage("Canny Edges",framec);*/

      char c=cvWaitKey(33);
      if(c==27)break;

      }
      cvReleaseCapture(&capture);
      cvDestroyWindow("Original View");
      /* cvDestroyWindow("Canny Edges");*/
      return 0;
      }
      Here, where should I add the calls using the object __msac? I am not sure which variables to pass, either! Please help!

      • Hi!
        You can use the Hough transform to get the lines, for instance after applying the Canny edge detector as you are doing now.
        The piece of code is in the image in this post where it reads // USE STANDARD HOUGH TRANSFORM; you only have to substitute __lmGRAY (the image I used) with the one you are using, framec, and you will get a set of lines in the variable vector<Vec2f> lines_.
        After that you should convert vector<Vec2f> lines_ into vector< vector<Point> > lines before passing it to the MSAC object, for instance like this:
        vector < vector < Point > > lines;
        vector < Point > aux;
        for(size_t i=0; i < lines_.size(); ++i)
        {
        aux.clear();
        // Get the two end-points of current line segment
        float rho = lines_[i][0];
        float theta = lines_[i][1];

        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;

        CvPoint pt1, pt2;
        pt1.x = cvRound(x0 + 1000*(-b));
        pt1.y = cvRound(y0 + 1000*(a));
        pt2.x = cvRound(x0 - 1000*(-b));
        pt2.y = cvRound(y0 - 1000*(a));

        aux.push_back(pt1);
        aux.push_back(pt2);
        lines.push_back(aux);
        }

        Then you can call the __msac like:
        int number_of_inliers = 0;
        CvMat *vanishing_point = cvCreateMat(3,1,CV_32F);
        __msac.singleVPEstimation(lines, &number_of_inliers, vanishing_point);

        The resulting vanishing_point is in homogeneous coordinates, so make sure the third coordinate is one (see the sketch below)!
        Do not forget to create and init the MSAC object beforehand!
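
        A possible sketch of that check (assuming the 3x1 CV_32F vanishing_point created above; the drawing call on frameRGB is just an example):

        // Dehomogenize before using the result as a pixel position
        float w = CV_MAT_ELEM(*vanishing_point, float, 2, 0);
        if (fabs(w) > 1e-6)
        {
        // Finite vanishing point: divide by the third coordinate and draw it
        float vpx = CV_MAT_ELEM(*vanishing_point, float, 0, 0) / w;
        float vpy = CV_MAT_ELEM(*vanishing_point, float, 1, 0) / w;
        cvCircle(frameRGB, cvPoint(cvRound(vpx), cvRound(vpy)), 4, CV_RGB(255,0,0), 2);
        }
        // If w is (close to) zero, the vanishing point is at infinity (parallel image lines)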

  2. Omkar Kulkarni says:

    So, after deliberate efforts, I came up with the main(); it is still giving lots of errors. Please rectify them if you can!

    #include "stdafx.h"
    #include "cv.h"
    #include "highgui.h"

    int _tmain(int argc, _TCHAR* argv[])
    {
    cvNamedWindow("Canny Edges",CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Original View",CV_WINDOW_AUTOSIZE);

    CvCapture* capture;
    if(argc==1)
    {
    capture=cvCreateCameraCapture(0);
    }
    else
    {
    capture=cvCreateFileCapture("new.avi");
    }

    IplImage* frameRGB;

    while(1)
    {
    frameRGB=cvQueryFrame(capture);

    IplImage* frameG=cvCreateImage(cvGetSize(frameRGB),IPL_DEPTH_8U,0);
    cvConvertImage(frameRGB,frameG,0);

    CvSize sz=cvGetSize(frameRGB);
    sz.width=sz.width;
    sz.height=sz.height;

    IplImage* frameRG=cvCreateImage(sz,IPL_DEPTH_8U,0);
    cvResize(frameG, frameRG,CV_INTER_LINEAR);

    IplImage* __lmGRAY=cvCreateImage(sz,IPL_DEPTH_8U,0);

    cvCanny(frameRG, __lmGRAY, 50, 5, 3);
    if(!frameRGB)break;
    vector<Vec2f> lines_;
    HoughLines(__lmGRAY,lines_,1,CV_PI/180,__houghMinLength);
    for(size_t i=0;i<lines_.size();i++)
    {
    float rho=lines_[i][0];
    float theta=lines_[i][1];
    double a=cos(theta),b=sin(theta);
    double x0=a*rho,y0=b*rho;
    Point pt1(cvRound(x0+1000*(-b)),cvRound(y0+1000*(a)));
    Point pt2(cvRound(x0-1000*(-b)),cvRound(y0-1000*(a)));
    cv::clipLine(srcGRAY.size(),pt1,pt2);
    if(!dstBGR.empty())
    line(dstBGR,pt1,pt2,Scalar(0,0,255),1,8);
    cv::imwrite("HOUGH.bmp",dstBGR);
    }

    vector< vector<Point> > lines;
    vector<Point> aux;
    for(size_t i=0; i < lines_.size(); ++i)
    {
    aux.clear();
    // Get the two end-points of current line segment
    float rho = lines_[i][0];
    float theta = lines_[i][1];

    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;

    CvPoint pt1, pt2;
    pt1.x = cvRound(x0 + 1000*(-b));
    pt1.y = cvRound(y0 + 1000*(a));
    pt2.x = cvRound(x0 - 1000*(-b));
    pt2.y = cvRound(y0 - 1000*(a));

    aux.push_back(pt1);
    aux.push_back(pt2);
    lines.push_back(aux);
    }
    int number_of_inliers = 0;
    CvMat *vanishing_point = cvCreateMat(3,1,CV_32F);
    __msac.singleVPEstimation(lines, &number_of_inliers, vanishing_point);

    cvShowImage("Original View",frameRGB);
    cvShowImage("Canny Edges",framec);

    char c=cvWaitKey(33);
    if(c==27)break;

    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("Original View");
    cvDestroyWindow("Canny Edges");
    return 0;
    }

  3. Omkar Kulkarni says:

    Sir, I am waiting for your reply… thank you!

  4. qurban124 says:

    please send me complete code for road detection in opencv

    • Hi!
      For the moment I can’t share more than I currently do. Anyway, please keep visiting the blog because I will probably add more hints and useful code examples in the future.
      Regards,

      Marcos

      • spring says:

        Hi,
        Thanks a lot for your post; if you can share your complete code it would be excellent.
        Thank u again
        Good luck

  5. dp says:

    Hey, just wanted to say thanks for posting this. It's hard to find examples of this kind of stuff on the net, so thanks for sharing. I'm working on a small-scale autonomous car and was trying to figure out what the next step after the Hough transform is. This answers my question, thanks!

  6. Vitamin A says:

    Greeting from across the sea. excellent blog I shall return for more.

  7. Marek says:

    this laneMarkingsDetector doesn't work for me… after running, an error appears every single time :/

    • Send me or post your code if you want, otherwise, I cannot help : )

      • Marek says:

        It was my mistake. I fixed it ;)
        Recently I read your publication “Road environment modeling using robust perspective analysis and recursive Bayesian segmentation” and, following it, I was trying to write my own lane markings detection algorithm.

        I use:
        1) Inverse Perspective Mapping
        2) Sobel edge detector
        3) then I binarize the image
        4) and use my algorithm to detect road stripes (only vertical-like ones)

        this is one of the images i worked on:
        Before:
        http://imageshack.us/photo/my-images/109/83464702.jpg/
        and After:
        http://imageshack.us/photo/my-images/403/zapisz1.jpg/
        http://imageshack.us/photo/my-images/836/zapisz2.jpg/

        I would like to ask you, as a specialist, if you could give me some tips to help me find the best way to write an algorithm for lane tracking like the one you presented on YT:

        I am a beginner with OpenCV (I downloaded it 2 weeks ago), so I would be very thankful for any help. This is very important for me because it is part of my work at university to build an autonomous cat.

        best regards
        Marek Kotewicz

      • Marek says:

        car*, not cat* :)

      • Hi Marek,

        Good job! It looks like you're going the right way.
        Nevertheless, if you are planning to jump into an autonomous vehicle, you should consider the extra difficulties a real scenario poses. The first one (and the most significant for the inverse perspective mapping) is the vibration of the camera and the motion of the vehicle. This will make your IPM very unsteady. Typically you can only trust reliable straight vertical lines at very close distances.
        Other aspects to consider: absence of lane markings for a while, rain, shadows, occlusions due to other vehicles…
        Welcome to the road environment!

        Regards,

        Marcos

  8. hb2012 says:

    Hi my friends,
    it is an honour to contact you once again on this great forum.
    Hello Marcos,
    well, I need a small correction to this code in OpenCV:
    CvPoint meas_x1,meas_y1,cord_x1,cord_y1;
    cord_x1.x=230;
    cord_x1.y=100;
    cord_y1.x=550;
    cord_y1.y=500;
    for (int l=0;l<10;l++)
    {
    std::string varimg;
    char format[] = "franck_000%d.jpg";
    char filename[sizeof format+100];
    sprintf(filename,format,l);
    varimg = filename ;
    IplImage*imgw = cvLoadImage( varimg.c_str() );
    cvNamedWindow( "Example1", CV_WINDOW_AUTOSIZE );
    meas_x1.x=cord_x1.x;
    meas_x1.y=cord_x1.y;
    meas_y1.x=cord_y1.x;
    meas_y1.y=cord_y1.y;
    This is part of the main program, but most importantly I need help finding a solution that allows me to access the next position of the object to follow, i.e. the measurement of each point as it varies (the points form a rectangle). This code is used to display a series of images (forming a sequence), and in these images there is an object (e.g. the face of a person) that moves. In this case, I just need to measure the new position of this object so that I can track it. First I frame the object with a rectangle, and consequently I have to (hopefully) receive the measurement of the position of the object at each step in order to correct it.

  9. hb2012 says:

    Thank you for this earlier answer, but the goal of my project is to use OpenCV with only a simple function to realize object tracking with a Kalman filter. For this I haven't used the Kalman filter function predefined in OpenCV, because I have some images to configure as a sequence for object tracking. Therefore I must work with the exact position of the subject.

  10. ardi says:

    Hi Mr Marcos..

    I have sent you an email, please reply my email.

  11. ardi says:

    Thanks for your reply.. :)
    I ask you via email, please check it sir.

  12. nehad says:

    I just want to ask whether this code can be used with map images or not?

  13. Nehad says:

    hi Marco,
    I am looking for the complete code of lane marking road detection. I need it urgently. Can you please provide it to me? I will be thankful to you.

  14. Pingback: A First Attempt at Lane Detection | Jay Chakravarty

  15. Asif says:

    Can you share the link of sample video for lane marking detection?

  16. Soju T Varghese says:

    Hello, I have got working code for lane and vehicle detection in OpenCV (version 2.3, based on C). Everything is fine, except that in the lane detection output window the detected lanes get overlaid over the previous ones, gradually filling the window with lines. I do not know how to delete the previously drawn lane lines from the window. The normal cvLine() function is used for drawing the lane lines. Your help would be highly appreciated.

    Regards,

    Soju.

    • Hi!
      You probably just need to reset the image on which you are drawing using something like:

      image.setTo( 0 );

      Assuming that image is a cv::Mat and that it has been created beforehand.
      Kind regards,
      Marcos

  17. Ivan says:

    I want to use RANSAC to detect a line. Do you have any source code to share?

    regards,
    Ivan

  18. Ravichandra says:

    Hi sir..
    I am using the same lane detection concept in vision-based aircraft runway detection for an auto-landing system.
    Please send me the code of this project..
    It may help a lot.
    Thanks in advance…

    • Hi,
      Unfortunately, I don't have an entire sample for lane detection available to share. Moreover, I don't think it would be useful in your case: since aircraft have more degrees of freedom than cars, the assumptions I use for lane detection will not hold (basically, a constant roll angle, preferably equal to zero).
      Regards,
      Marcos

      • Ravi says:

        Yes sir.
        The roll and pitch can be controlled by horizon detection based on the vanishing point; I can adjust the roll using the height of the vanishing point.

  19. Moveh says:

    Hi
    this is brilliant work, although I am working on a different platform as a beginner and I was a bit confused when going through the way you applied your Hough transform. I am presently working on lane detection using Matlab as my image processing platform. I have captured and processed the image by simply capturing it, converting to grayscale and applying edge detection operators, and I am presently stuck on applying the Hough lines so as to get my lane boundaries. I would really appreciate your guidance on this, please. Below are the image processing techniques I used:
    a = imread('roadlane.jpg');
    a = rgb2gray(a);
    imshow(a)
    % h = [1 1 1; 1 1 1; 1 1 1]/9;
    % c = imfilter(a,h);
    % % imshow(c);
    % for sobel edge detector
    % h = [1 0 -1; 2 0 -2; 1 0 -1];
    % c = imfilter(a,h);
    % imshow(c);
    c = edge(a);
    imshow(c);
    % for canny edge detector
    c = edge(a,'canny');
    imshow(c);

    • Hi Moveh,

      Apologies for answering this late, I was terribly busy with other duties.
      So, in my opinion your idea is just fine. You can use any type of lane marking detector, such as the Canny or other edge detectors in Matlab, because in the end what you need is a set of points in the image that belong to the lane markings. The Hough transform takes these points and finds the dominant lines (as clusters of points). Matlab comes with a nice set of Hough implementations; you have probably already found a solution for this.

      Regards,

      Marcos

      • Moveh Samuel says:

        Hi
        I really appreciate your reply. As you mentioned, I have found a way out: after thresholding the image I applied the Hough transform and obtained an excellent result. I am presently trying to analyze which of the edge detectors is more suitable.
        Thanks and best regards.

      • Great!
        Sometimes it is a good idea to keep using simpler edge detectors (e.g. Sobel), which may run faster afterwards if migrated to embedded platforms.
        Regards,
        Marcos

  20. Anuj says:

    Hello,
    I saw your video on lane tracking and was simply amazed by it. I have just started of with my project which involves lane detection and tracking. Just to begin with it, what I have done is the following:

    – For each incoming frame from a video or a camera placed on a moving vehicle, I detect edges, apply the Hough Transform (OpenCV implementation) to get a bunch of lines, and filter out lines based on a slope criterion to keep only those lines which correspond to the lane.
    – Now, in many of the frames I am not getting the Hough lines, so I applied a Kalman filter, but it is not working.
    – I take the slope and intercept of the line and model them as my state vector, and I use a 2×2 identity matrix as the state transition matrix (F). I take it as an identity matrix because I don't see this model as a constant-velocity model. But I know that the Kalman filter will only be able to predict the new state of the system if we assume a constant-velocity or constant-acceleration model.

    So all together, I am not able to figure out how I should model my state vector and transition matrix when I have to track lanes based on Hough lines. The only properties of these Hough lines I can think about are the slope and intercept, which change somewhat with each frame, but that change is not constant.
    Could you please suggest the steps for how to go about lane tracking?

    • Hi Anuj,

      Your approach is correct, although you may need to fine-tune your parameters better. The road scenario is highly dynamic, which makes fixed thresholds and assumptions hold only for certain situations or during short periods of time.
      For instance, you are using edge detection and the Hough transform. Fine, but you probably need to dynamically adjust the parameters as the scene evolves (what happens if the road is suddenly not well painted? Or if you enter a tunnel?).
      Once you have your lane markings detected (as Hough lines or other types of detections), you probably want to fit a lane model. In my case I use multiple lanes and parabolic fitting in the bird's-eye view. Of course, this is up to you. The simplest approach is to model a single lane without curvature, which can basically be defined by a fixed vanishing point (that you can compute at the beginning and keep fixed, or update online with the observations) and two points at the bottom of the image.

      Then you can apply a Kalman filter to provide smoothness to your tracking. Using constant-velocity is probably a good option.
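
      For instance, a minimal constant-velocity cv::KalmanFilter sketch for a single line parameterized by (rho, theta); the noise magnitudes and the lineDetected / detectedRho / detectedTheta variables are placeholders you would provide and tune:

      // Needs <opencv2/video/tracking.hpp>
      // State: [rho, theta, d_rho, d_theta]; measurement: [rho, theta]
      cv::KalmanFilter kf(4, 2, 0, CV_32F);
      kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
      1, 0, 1, 0,
      0, 1, 0, 1,
      0, 0, 1, 0,
      0, 0, 0, 1);
      cv::setIdentity(kf.measurementMatrix);
      cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
      cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
      cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));

      // Each frame: predict first, then correct with the detected line (when available)
      cv::Mat prediction = kf.predict();
      if (lineDetected)
      {
      cv::Mat measurement = (cv::Mat_<float>(2, 1) << detectedRho, detectedTheta);
      kf.correct(measurement);
      }
      // kf.statePost now holds the smoothed (rho, theta) to draw or feed into the lane model
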
      All I can say is that if you plan to use this in a real environment, you probably also want to avoid costly operations such as the Hough transform (so that your software can run on embedded platforms), and also avoid the OpenCV implementations, which are great but general.

      Good luck with your project!

      Marcos

  21. strangelyhuman says:

    Stumbled onto your post while I was researching the vanishing point. It helped a lot. Thanks a ton!
