Hello @tor_odigo
We are normally able to get much better accuracy with the Visual Offset. Here are a few tips on how to be as precise as possible.
Adjust the LED: try the picture with it both on and off and keep whichever gives the cleaner image.
Adjust the Exposure Sensitivity according to the lighting.
Manually focus your Wrist Camera to obtain a crisp image of the tag.
The camera view lets you see all the edges that the camera will detect. If the image is blurry or full of noise, adjust the vision parameters until the picture is clean.
Also, it is important to take the picture at an angle: you don't want the tag to appear as a perfect square in the image. When the tag is tilted, there is a clear difference between the apparent dimensions of its black and white squares, and that perspective information lets the Visual Offset resolve the tag's pose much more precisely than a straight-on view would (see the picture below as an example).
We also suggest keeping the distance between the tag and the camera under 15 cm when the picture is taken; the closer the camera, the more pixels cover the tag and the finer the resulting offset (see the rough estimate below).
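For a rough sense of why distance matters, here is a back-of-the-envelope pixel-resolution estimate. The field of view and image width are illustrative assumptions, not Wrist Camera specifications:

```latex
% Assumed: horizontal field of view of 60 degrees, image width of 640 px.
% At a tag distance d = 150 mm, the width covered by one pixel is roughly
\frac{2\,d\,\tan(\mathrm{FOV}/2)}{N_\mathrm{px}}
  = \frac{2 \times 150\,\mathrm{mm} \times \tan 30^{\circ}}{640}
  \approx 0.27\,\mathrm{mm/px}
```

Tag detection is typically sub-pixel, but the error still grows linearly with distance, which is why staying close matters when the target gap is only a couple of millimetres wider than the part.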
The second thing that can help you insert the part into the vise is Force Copilot. With it, you can feel your way into the vise using the Robotiq Insertion nodes, letting contact forces correct the last couple of millimetres instead of relying on vision alone.
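The Insertion nodes themselves are configured graphically in PolyScope, but to illustrate the idea in script form, here is a minimal sketch using UR's native force_mode(). The forces, limits, and timing below are assumptions to tune for your setup, not values taken from Force Copilot:

```urscript
# Minimal sketch: compliant insertion using UR's native force_mode().
# All numeric values are assumptions; tune them for your application.
def insert_part():
  # Comply along tool X and Y with zero target force (so the part can
  # slide sideways into the gap) while pushing 15 N along tool Z.
  force_mode(get_actual_tcp_pose(), [1, 1, 1, 0, 0, 0],
             [0.0, 0.0, 15.0, 0.0, 0.0, 0.0], 2,
             [0.05, 0.05, 0.05, 0.17, 0.17, 0.17])
  sleep(3.0)        # give the part time to seat in the vise
  end_force_mode()
  stopl(0.5)        # decelerate and hold position
end
```

In PolyScope, the Insertion node gives you the equivalent behaviour (plus spiral and rotational search patterns) without writing any script.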
Hope this helps.
Let me know if you have further questions.
Regards.
Hello,
I am integrating a UR10 to do CNC machine tending. I have placed an offset tag on the vise so the robot can calibrate its position when entering the CNC. After applying the visual offset I move the robot to an origin on the vise, then use pose_trans based on the object's size to insert it into the vise (a simplified sketch of the move is below). However, the calibration does not seem to be exact enough: the actual movement doesn't correspond to the programmed movement, and the object ends up a few mm off. Is the offset tag accurate enough to insert an object into a gap that is only 2 mm wider than the object? And is this the best way to accurately insert an object into a vise?
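For reference, a simplified sketch of the approach described above; the variable names and dimensions are placeholders, not my actual values:

```urscript
# Simplified sketch of the tending move; all names and values are placeholders.
# vise_origin: taught waypoint on the vise, already corrected by the
# Visual Offset applied earlier in the program.
vise_origin = p[0.40, -0.20, 0.15, 0.0, 3.14, 0.0]
# Offset from the vise origin to the drop position, expressed in the
# vise (origin) frame and derived from the part dimensions, in metres.
part_offset = p[0.030, 0.0, -0.025, 0.0, 0.0, 0.0]
target = pose_trans(vise_origin, part_offset)
movel(target, a=0.25, v=0.05)  # slow linear approach into the vise
```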