

tor_odigo Posts: 19 Apprentice
I am integrating a UR10 to do CNC machine tending. I have placed an offset tag on the vise so the robot can calibrate its position when entering the CNC. After applying the visual offset I move the robot to an origin on the vise, and then use pose_trans based on the object's size to insert it into the vise. However, the calibration does not seem to be exact enough: the actual movement doesn't correspond to the programmed movement, and the object ends up a few mm off. Is the offset tag accurate enough to insert an object into a gap that is only 2 mm wider than the object? And is this the best way to accurately insert an object into a vise?
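For context, pose_trans expresses the second pose in the frame of the first, so the insertion offset is applied in the vise's frame. Below is a minimal pure-Python sketch of the position part of that calculation (the pose values are illustrative, and the rotation of the result is omitted for brevity):

```python
import math

def rotvec_to_matrix(rx, ry, rz):
    # Convert a UR axis-angle rotation vector to a 3x3 rotation matrix
    # (Rodrigues' formula).
    theta = math.sqrt(rx * rx + ry * ry + rz * rz)
    if theta < 1e-12:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = rx / theta, ry / theta, rz / theta
    c, s, v = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [kx * kx * v + c,      kx * ky * v - kz * s,  kx * kz * v + ky * s],
        [kx * ky * v + kz * s, ky * ky * v + c,       ky * kz * v - kx * s],
        [kx * kz * v - ky * s, ky * kz * v + kx * s,  kz * kz * v + c],
    ]

def pose_trans(p_from, p_from_to):
    # Position part of URScript's pose_trans: the offset p_from_to is
    # rotated into the frame of p_from and added to its position.
    # Poses are (x, y, z, rx, ry, rz); only the position is returned.
    R = rotvec_to_matrix(p_from[3], p_from[4], p_from[5])
    return tuple(
        p_from[i] + sum(R[i][j] * p_from_to[j] for j in range(3))
        for i in range(3)
    )

# Illustrative example: vise origin 10 mm offset along the vise's x axis.
vise_origin = (0.2, 0.1, 0.0, 0, 0, math.pi / 2)  # hypothetical pose
target = pose_trans(vise_origin, (0.01, 0, 0))
```

One consequence of composing the move this way: if the measured visual offset has even a small rotational error, a translation in the vise frame picks up a lateral error of roughly distance × angle (e.g. 100 mm of travel with a 1° error is about 1.7 mm sideways), which is one way a small calibration error becomes a few mm at the part.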


  • MarcAntoine_Gauthier Vacuum Beta tester Posts: 55 Crew
    edited September 2020
    Hello @tor_odigo

    We are normally able to get much more accuracy with the Visual Offset. Here are a few tips on how to be as precise as possible:
    - Adjust the LED and try it both on and off.
    - Adjust the Exposure Sensitivity according to the lighting.
    - Manually focus your Wrist Camera to obtain a crisp tag image.
    The camera view lets you see all the edges the camera will detect. If the image is blurry or full of noise, adjust the vision parameters to optimise the picture.
    It is also important to take the picture at an angle: you don't want a perfectly square tag in the image. That way there is a clear difference in the dimensions of the black and white squares of the tag, which allows the Visual Offset to be much more precise.

    We also suggest keeping the distance between the tag and the camera to no more than 15 cm when the picture is taken.

    The second thing you could use to help insert the part into the vise is Force Copilot. You could then feel your way into the vise using the Robotiq Insertion nodes.

    Hope this helps.

    Let me know if you have further questions.


  • cobottiukko Posts: 17 Handy
    That's really good to know that accuracy is better when the picture is taken from an angled view! I didn't know that.

    I recently found out that using manual focus is not good, since the distance between the tag and the lens may vary each time the robot is moved. I also found that auto focus sometimes does not focus the lens sharply before the camera is triggered inside the Find Visual Offset node. What I did was add one empty Find Visual Offset node before the main Find Visual Offset. The pre-locate node does nothing but run through the focusing procedure, and then the main Find Visual Offset does the job. This made my Find Visual Offset node more reliable!
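    As a sketch, the pattern in the program tree looks something like this (node names follow the thread above; the exact structure will depend on your URCap version and setup):

```
Robot Program
├─ MoveJ to pre-locate pose     (further from the tag)
├─ Find Visual Offset           (empty: only runs the autofocus)
├─ MoveJ to locate pose         (fixed pose, ~15 cm from the tag)
├─ Find Visual Offset           (main node: measures the offset)
└─ offset-compensated moves     (insertion relative to the tag)
```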
  • MarcAntoine_Gauthier Vacuum Beta tester Posts: 55 Crew
    Hello @cobottiukko

    That is a really good idea. I was about to suggest exactly that!

    Indeed, it is sometimes recommended to do a first Find Visual Offset with automatic focus, taken further away from the tag. That gives you a bigger field of view, which allows more flexibility in the robot position. Then add a second Find Visual Offset closer to the tag, always at the same position relative to the tag, where you can fix the vision parameters to get better results.

    Happy to hear that you have better results now!

    Let me know if you have further questions.

    Best regards.