About me

Hi, my name is Marcos Nieto. I am a researcher in the Vicomtech-IK4 research alliance, in the Department of Intelligent Transportation Systems and Industry. I completed my studies (Electrical Engineering degree and PhD) in the Image Processing Group (Grupo de Tratamiento de Imágenes) of the Universidad Politécnica de Madrid, where I also worked as a researcher until 2010.

I started this blog as a way to share code snippets and other material about computer vision (OpenCV), LaTeX, TiddlyWiki and some other tools of interest to any researcher.

See my profiles on other sites:

Scientific profile on ResearchGate: www.researchgate.net/profile/Marcos_Nieto3/
Professional profile on LinkedIn: www.linkedin.com/in/marcosnietodoncel
My YouTube channel: www.youtube.com/user/marcosnietodoncel
Vicomtech-IK4 YouTube channel: http://www.youtube.com/user/VICOMTech

36 Responses to About me

  1. Omkar Kulkarni says:

    Hello, I went through your articles. They were just great! I want the complete source code for the detection of vanishing points, to understand how you coded the algorithm. It would be a pleasure to receive it.

    • Hi! Thanks for your interest.
      I will probably add a new post with a full example using the MSAC class for vanishing point detection. For the moment I can say that the MSAC class is pretty simple to use. You need to create an MSAC object somewhere in your own class or main() body, as MSAC __msac;
      Then, initialize it as __msac.init(img_height, img_width, num_vps); where num_vps is an input int that defines how many vanishing points you want to obtain.
      The second step is to call the procedure as __msac.singleVPEstimation(lines, &number_of_inliers, vanishing_point); where the function is void MSAC::singleVPEstimation(std::vector<std::vector > &lines, int *number_of_inliers, CvMat *vanishing_point). If you defined num_vps to be more than 1, you should instead call void MSAC::msac(std::vector<std::vector > &lines, std::vector &ind_CS_best, int *number_of_inliers, CvMat *vanishing_point, int num_vp, bool vp_prev_exists).
      To see the details of the process, uncomment #define DEBUG_VERBOSE in the MSAC.h file.
      Hope it helps!
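The call sequence above refers to the C++ MSAC class. As a language-neutral illustration of the underlying idea (randomly sampling pairs of line segments, intersecting their supporting lines, and scoring each candidate vanishing point with a truncated-quadratic cost), here is a minimal self-contained Python sketch. The names (`msac_vanishing_point`, the segment format) are illustrative only, not the API of the actual MSAC class:

```python
import random
import numpy as np

def line_homog(p1, p2):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def msac_vanishing_point(segments, iters=200, thresh=2.0, seed=0):
    """Minimal MSAC: sample pairs of segments, intersect their supporting
    lines, and keep the candidate with the lowest truncated-quadratic cost."""
    rng = random.Random(seed)
    lines = [line_homog(a, b) for a, b in segments]
    best_vp, best_cost, best_inliers = None, float("inf"), 0
    for _ in range(iters):
        l1, l2 = rng.sample(lines, 2)
        vp = np.cross(l1, l2)               # candidate vanishing point
        if abs(vp[2]) < 1e-12:
            continue                        # parallel pair: no finite point
        vp = vp / vp[2]
        cost, inliers = 0.0, 0
        for l in lines:
            d = abs(l @ vp) / np.hypot(l[0], l[1])  # point-to-line distance
            if d < thresh:
                cost += d * d
                inliers += 1
            else:
                cost += thresh * thresh     # truncated (MSAC) penalty
        if cost < best_cost:
            best_vp, best_cost, best_inliers = vp, cost, inliers
    return best_vp, best_inliers
```

Unlike plain RANSAC, the MSAC score charges inliers their actual squared residual instead of zero, which favors more accurate candidates with the same inlier count.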

  2. Omkar Kulkarni says:

    hello,
    I tried the same by creating the main(), but unfortunately the flow isn't going right… I will still try and will repost my main function. Until then, if possible, please post your code.
    Regards,
    Omkar K.!

  3. Arthur says:

    Hi Marcos, I’d like to download your line segmentation code. Unfortunately, the Git repository seems to be empty. Could you re-upload it?

  4. Brian says:

    Hi Marcos,

    Thanks for the great blog!
    I’m also a PhD student.
    Do you know of any free software or code for road detection from a single photo? I’m just interested in identifying the road itself, mainly rural roads with almost no traffic. Time is not an issue, so I’m not looking for fast processing algorithms. Rural roads usually don’t have paint markings on the asphalt and may experience heavy snow in winter.
    Your help is greatly appreciated!
    Thanks

    • Hi! I don’t know of open source code for road detection. However, by coincidence I’ve been doing something similar recently. What I can say is that you need to define your constraints at the beginning, e.g. the type of motion of the vehicle, the type of pavement, color/grayscale, etc. In my case I hypothesized that the pavement is more homogeneous than its surroundings, so I could implement a simple noise modeling analysis to find the limits of the road. For more complex or variable situations (including snow), it seems necessary to use some kind of online learning and segmentation.
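As a toy illustration of this homogeneity assumption (not the actual implementation; the function name and parameters are hypothetical), one image row can be scanned outward from an assumed on-road seed pixel until the local variance exceeds the pavement's noise level:

```python
import numpy as np

def road_limits(row, center=None, win=5, k=4.0):
    """Find left/right road limits on one image row by scanning outward
    from an assumed on-road center pixel until the local variance exceeds
    k times the pavement variance (a simple noise model)."""
    row = np.asarray(row, dtype=float)
    c = len(row) // 2 if center is None else center
    sigma2 = np.var(row[c - win:c + win + 1]) + 1e-6  # pavement noise estimate

    def scan(step):
        x = c
        while 0 <= x + step * win < len(row):
            lo, hi = min(x, x + step * win), max(x, x + step * win)
            if np.var(row[lo:hi + 1]) > k * sigma2:
                return x  # local variance jumped: road border reached
            x += step
        return x

    return scan(-1), scan(+1)
```

Running this per row and filtering the limits over time would give a rough road segmentation under the stated assumptions.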

  5. dmitry says:

    Hello, your video on lane tracking and vehicle detection is very good! It looks like the best video for the purpose of vehicle and lane detection!
    Could you tell me what complete set of algorithms would be best for vehicle detection only? I’m a newbie in computer vision, so could you describe it in as much detail as possible?
    I’m interested only in algorithms that are applicable to ARM Cortex-A9 processors, and also algorithms that can be scaled to multiple cores.
    I’m trying to write an application for Android, and if you help me, I’ll have a better chance of getting some positive results; if so, I can share the results with you in the future :)
    Thanks!

    • Hi!
      First, sorry for my late answer and thanks for writing!
      If you are interested in ARM solutions, you probably need to define the HW platform first, to know whether you will have parallelizable power or not. In our experience, ARM alone cannot handle a high-complexity algorithm like vehicle or pedestrian detection. Part of the algorithms must be migrated to massively parallel HW, such as a GPU (in CUDA) or an FPGA (in VHDL).
      In any case, in the automotive industry it is crucial to have a good algorithm, one able to detect targets while triggering a low number of false alarms. When you have it (in C++, Matlab or whatever you like), then start the migration to ARM+GPU/FPGA.
      If you work in a team, you can parallelize these tasks, looping over design of the algorithm, migration to the HW, feedback, and starting again. This is typically known as the co-design development pattern.
      Good luck!
      Marcos

      • dmitry says:

        Thank you for your reply! Can you tell me which are the most performance-critical parts in terms of parallelization? And which algorithms are most suitable for vehicle recognition? Is vehicle recognition by parts (maybe wheels + license plate + headlights + other parts) more stable? I can only use a typical multicore ARM Cortex-A9 processor, and that is all.

      • Hi!

        Normally, ARM+multicore gives you a lot of power (provided you take care to design your algorithm well). In our experience, we have implemented our algorithms without the need to parallelize on FPGA or GPU for normal operation. However, if you move parts of the algorithms there, you can load more than one algorithm/system onto a single embedded HW, which is really appreciated by customers.
        We move to the FPGA the critical parts like capture, color conversion and image filtering, those that operate pixel-wise, and leave the maths, projective geometry and machine learning to the ARM.
        Detecting vehicles is a tough task. Normally we don’t expect to have enough resolution to detect license plates or elements of the cars. We try to detect the “car” as a whole entity, according to motion, perspective, temporal coherence, shape and appearance.
        Perhaps you can take a look at the PhD thesis of a colleague of mine who explored several algorithms in this very field: http://oa.upm.es/11657/1/JON_ARROSPIDE_LABORDA.pdf
        Kind regards,

        Marcos

  6. Xiangyang Li says:

    Hi Marcos,
    I’m a PhD student doing research on road detection.
    I was inspired by your inverse perspective mapping method for road detection.
    We use an open active contour model with a parallelism constraint for road detection on bird’s-eye-view images.
    A paper has recently been submitted for publication, and a reviewer asked me to give a comparison with your method proposed in “Road environment modeling using robust perspective analysis and recursive Bayesian segmentation”.
    So could you send me the source code or an executable program for evaluation against my proposed method?

    Thanks

    • Hi Xiangyang,
      I cannot promise anything because the paper you mention dates back to 2009 (when it was submitted) and the code isn’t in good shape right now : )
      Currently I am working on a lane tracking sample that I will post in the blog for everybody. It is based on the methods described in the paper.
      I will answer this thread when done.
      Kind regards,
      Marcos

  8. Stav says:

    Hi Marcos,

    Thanks for sharing all the information and code on your site! I’m a computer vision grad student and I wanted to use your method of plane rectification via vanishing point detection as demonstrated in this video of yours: https://www.youtube.com/watch?v=76Ydp4ptPXo

    I downloaded the vanishing point detection code and got it to work (great code!); however, I still don’t understand how you rectify the image once you have the vanishing points.

    Any help is appreciated,
    Thanks!
    Stav

    • Hi!
      Well, to rectify the plane you just need a 3×3 homography matrix. It can be built in many different ways. One of them is to use the DLT algorithm with 4 point correspondences.
      If you have two vanishing points that you know represent perpendicular directions in your plane, you can draw two lines through each vanishing point so that they intersect in 4 points, and use those points to rectify the plane.
      Of course, this rectification will have strong affine distortion, since the position of the points is quite arbitrary. If you do have more information about the scene, you should use it.

      Kind regards,

      Marcos
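The DLT step mentioned above can be sketched in a few lines of Python with NumPy. This is a generic textbook implementation with illustrative names, not the code used in the videos:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography from 4 (or more) point
    correspondences (x, y) -> (u, v). Each pair contributes two rows of
    A in A h = 0; h is the smallest right singular vector of A."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    """Apply a homography to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[0] / q[2], q[1] / q[2]
```

With the 4 intersection points as `src` and a target rectangle as `dst`, the resulting H warps the image plane to the (affinely distorted) rectified view.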

      • Alexandre Bizeau says:

        Hello Marcos,

        Same as Stav, I downloaded your vanishing point detection code and it works very well for what I need. But I too would like to get the rectified plane of a building, like you do. My camera moves less than yours, but I want to get the same result. I don’t understand the part: “each vanishing point that intersect in 4 points”. If you draw perpendicular lines from vp1 and vp2, you’ll obtain a cross? Only a middle intersection? Maybe I’m confused, but I don’t get it.

        Your vanishing point algorithm gave me hope back for my project!
        Thank you,
        Alexandre

      • Hi!
        Yeah, of course, with two (vanishing) points we cannot get 4 intersection points directly. However, you can obtain something alike if you draw two lines passing through each vanishing point and make them such that they intersect in 4 points in the interior of the image frame. So, two lines from the “vertical” vanishing point that pass through the image at some random or selected places, and two lines from the “horizontal” vanishing point in the same fashion. Obviously, those 4 points do not really represent anything in the world plane unless you create the lines in some specific way. I believe I used some fixed separation between the lines, centered in the image, so there was a large affine distortion in the rectification.
        Kind regards,

        Marcos
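The construction described above can be sketched as follows; the anchor points that place the two line pairs inside the image are arbitrary choices, and all names are illustrative:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two 2D points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines."""
    p = np.cross(l1, l2)
    return p[0] / p[2], p[1] / p[2]

def quad_from_vps(vp_h, vp_v, anchors_h, anchors_v):
    """Draw two lines through the 'horizontal' VP (one per anchor) and two
    through the 'vertical' VP; their pairwise intersections give 4 points
    usable as correspondences for rectification (up to affine distortion)."""
    lines_h = [line_through(vp_h, a) for a in anchors_h]
    lines_v = [line_through(vp_v, a) for a in anchors_v]
    return [intersect(lh, lv) for lh in lines_h for lv in lines_v]
```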


      • Stav says:

        Hi again!

        Thanks a bunch for your help!
        Is this the method you used to create the YouTube video I attached? Is there a chance you can share this code?

        Thanks again!
        Stav

  9. Hello Marcos!

    My graduation project is about the video you uploaded earlier (car detection). I’m really looking forward to having your email to be able to ask you some questions, if possible :)

  10. Neal says:

    Hi Marcos,
    I have downloaded the viulib_omf_demo from Vicomtech, but I can’t find any documentation for it, so I don’t know how to use it. Would you be willing to point me to the corresponding documents or anything else giving access to the technical explanations?
    Thank you.
    Neal

  11. Hi,
    you can get in contact with Vicomtech-IK4 for issues with the provided demo.
    Regards,
    Marcos

  12. Neal says:

    Why doesn’t the address work either? Does this organization not allow messages to be delivered?
    Best regards,
    Neal

    • Hi Neal,
      This escapes my control right now, but I will ask about the viulib_omf demo to clarify whether it is still available and what documentation exists.
      Apologies for the inconvenience.
      Regards,
      Marcos

  13. Neal says:

    If you find anything, please contact me by e-mail at neal199101@gmail.com.
    Thank you very much.

  14. Ben Sharp says:

    Hi Marcos, I am also trying to get in touch with Vicomtech. I am very interested in using the Viulib SDK to build a visual SLAM system. The email address above bounces back; I did drop a message on the Vicomtech contact page, but no reply as yet. Is Viulib available? Is there an implementation of SLAM already built in? Sorry for the random questions! Excellent work, by the way! B

    • Hi!
      Sorry for answering this late. Lately I am quite unable to attend to this account because of my daily duties.
      Yes, Viulib is available, but access is still restricted to companies that reach agreements with Vicomtech on how Viulib will be exploited (whether entirely, only one module, what kind of applications, what sectors, etc.). There might be exceptions, and demo versions of Viulib can be distributed, but this still requires the approval of Vicomtech.
      If they have not responded yet, it might be because they are very busy.

      I am sorry I cannot do anything specific to help you now (if you wish, I can send you the paper where we explain what we did for the stereo odometry). Good luck with your odometry projects.
      Regards,

      Marcos

  15. Pablo Gómez says:

    Hi Marcos, I’m trying to program the Rao-Blackwellized particle filter in OpenCV. Can you help me with a part of the script? I leave my email: pablogomez.ing@gmail.com
    thanks !!

  16. Ahmad Alwazzan says:

    Hi Marcos,

    We are working on a vehicle collision avoidance system for our final capstone project at the University of West Florida. We are using a high-rate laser range finder to detect the distance to the vehicle in front. Also, we are using a camera to detect cars in front. Our main demand from the camera is to identify whether the object in front is a vehicle or not.

    So far we used the Haar Cascade Classifier method in OpenCV. We had some success with some .xml cascade classifier files downloaded from the web and with some classifiers we trained ourselves, but the results are far from perfect.

    Is the Haar method the best way of doing this given the limited time we have? Do you know of any good trained classifiers that we can download?

    Your help is highly appreciated.

    Thanks,

    • Hi Ahmad,

      Apologies for reaching you this late. I have limited time to review the comments on the blog.
      Your approach is correct. Haar cascades have been shown to be fast and effective, better than other available combinations such as HOG features with SVM classifiers (this is debatable, though), if the focus is on speed.
      The problem with vehicle detection is the enormous intra-class variability of cars depending on the type of camera, the relative distance, the different vehicle types, different view angles, and also the varying outdoor illumination conditions (direct sunlight or dark regions).
      It looks like the problem cannot be solved with a single classifier. I would rather go for several classifiers that are triggered in different situations.
      There are nice works nowadays using convolutional neural networks (“deep learning”) and the CUDA capabilities of NVIDIA cards, using OpenCV 3.0 and libraries such as Caffe (caffe.berkeleyvision.org/).
      We are also working in this line, so I know it is not easy at all!

      Regards,

      Marcos
