

Sebastien Posts: 219 Handy
edited November 2016 in Applications
Hi pros,
I am working on an application where I have to locate flat round parts. Setting up the vision program is simple with the Robotiq Wrist Camera; however, I would like to make this locate as accurate as possible. I know that accuracy can be affected by many different variables (lighting, background, robot, etc.). How do you make sure your setup is optimal in your application? Here is what we tried on our end:
1-Set up snapshot position as close as possible to the parts while still maintaining a relatively good FOV.
2-Calibrate the camera
3-Teach the part and test-locate it with different backgrounds to see which one gives the highest score.
4-Place the part in the middle of the FOV, locate the part and teach pick position. 
5-Then I don't move the part and I run the program. Since the robot is 0.1 mm repeatable, I expect to see that repeatability if I don't move the object.
6-If I don't get that precision, I loop back to the vision system to see what can be improved in the object location.
7-Once I get repeatable results, I start moving the part around; if it is good in the middle of the FOV but inaccurate at the edges, I then suspect robot calibration.
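The repeatability check in steps 5 and 6 can be quantified with a short script. A minimal sketch, assuming you log the (x, y) locate results in mm from repeated runs with the part fixed; the positions below are made-up placeholders:

```python
import math

# Hypothetical (x, y) positions in mm from repeated vision locates
# of a part that never moved; replace with your own logged values.
locates = [
    (102.31, 47.02), (102.29, 47.05), (102.33, 47.01),
    (102.30, 47.04), (102.32, 47.03),
]

n = len(locates)
cx = sum(x for x, _ in locates) / n
cy = sum(y for _, y in locates) / n

# Repeatability here = worst-case radial deviation from the mean position.
repeatability = max(math.hypot(x - cx, y - cy) for x, y in locates)
print(f"mean ({cx:.3f}, {cy:.3f}) mm, repeatability {repeatability:.3f} mm")
```

If the spread stays near the robot's 0.1 mm repeatability, the vision contribution is negligible; a much larger spread points back at the vision setup (step 6).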

What kind of procedure are you pros using? What else can we tune in the vision system to improve performance and accuracy? Or in the robot program, maybe?


  • Etienne_Samson Beta Tester Beetle, Wrist Camera URCap 1.3.0, Vacuum Beta tester Posts: 419 Handy
    edited November 2016
    @Sebastien very interesting and important question, we are still working on those specs to publish some numbers, but here are a few things to note:
    1. Repeatability of the vision (when the part is not moving, as you did) will be 0.1 mm plus a small contribution from our vision system; that one is easy to spec and we will release it soon. Note that to get a reliable measurement, you need a rigid setup and a non-symmetrical part. See the picture below for a good vs. bad target when trying to reach the center. If you use a symmetric target, your measured repeatability will be robot repeatability + vision repeatability + the precision of your center teaching.
    2. Precision (when the part is moving around) is much harder to spec and will vary with many factors: the robot's precision (the robot needs a valid calibration), the distance from which you look at the part (closer = better), and a lot more on the vision side. We expect to come up with a number this trimester. So far my personal tests have shown something around 3-5 millimeters; when I got something over that, there was usually something wrong with the robot or with the calibration of the vision system. I have also seen, in some cases, a precision of 1 mm when touching the tip of an object, but I have no clue why it was so much better than the other tests. We will wait and see.
    Hope that helps!
    Etienne Samson
    Technical Support Director
    +1 418-380-2788 ext. 207
    [email protected]
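The symmetric-target error budget Etienne describes can be sketched numerically. A minimal sketch; the 0.1 mm robot figure is from the thread, while the vision and teaching numbers are hypothetical placeholders:

```python
import math

robot_repeatability_mm = 0.1    # UR spec, cited in the thread
vision_repeatability_mm = 0.05  # assumed vision contribution
teaching_error_mm = 0.2         # assumed error in teaching the center

# Worst case: all contributions stack linearly, as in the post.
worst_case = robot_repeatability_mm + vision_repeatability_mm + teaching_error_mm

# If the sources are independent, a root-sum-square gives a typical value.
typical = math.sqrt(robot_repeatability_mm**2
                    + vision_repeatability_mm**2
                    + teaching_error_mm**2)

print(f"worst case {worst_case:.2f} mm, typical (RSS) {typical:.2f} mm")
```

With these placeholder numbers the teaching error dominates, which is why a non-symmetrical target (whose center the system can find on its own) gives a cleaner repeatability measurement.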
  • nick Posts: 5 Apprentice
    @Etienne_Samson I have a question about your targets: wouldn't both of those have the same accuracy, since the vision system checks for edges? The shaded corner wouldn't make any difference, because only the contrast changes while the edges stay the same. Wouldn't a target like this be better? Just trying to understand why you'd get a different result with the two targets you mentioned.
  • Etienne_Samson Beta Tester Beetle, Wrist Camera URCap 1.3.0, Vacuum Beta tester Posts: 419 Handy
    @nick I think you are right, yes; the system identifies contours, inner and outer. Yours is a better example, I believe.
    Etienne Samson
    Technical Support Director
    +1 418-380-2788 ext. 207
    [email protected]
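nick's point about edges can be illustrated with a toy example: a contour extractor sees the boundary between object and background, so interior shading that stays above the binarization threshold leaves the detected edges unchanged. A minimal pure-Python sketch with hypothetical 5x5 images (0 = background, higher values = object brightness):

```python
plain = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]
shaded = [row[:] for row in plain]
shaded[1][1] = 5  # darker corner, but still brighter than the background

def edge_pixels(img, thresh=1):
    """Object pixels that have at least one background 4-neighbour."""
    h, w = len(img), len(img[0])
    edges = set()
    for r in range(h):
        for c in range(w):
            if img[r][c] < thresh:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and img[nr][nc] < thresh:
                    edges.add((r, c))
    return edges

print(edge_pixels(plain) == edge_pixels(shaded))  # → True
```

Both targets binarize to the same silhouette, so the extracted contour, and hence the located center, is identical; only a feature that changes the outline (like the notch in nick's suggested target) breaks the symmetry for the vision system.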