object_found rotation inaccuracy
I’m trying to use the object_found function to adjust a frame on a universal robot (UR10 Version 18.104.22.168). For some reason the accuracy of the rotation is terrible. Attached is a video of it in action; the code is below. The first three marks in the video are within the camera subroutine. The next two marks are made using a frame adjusted by the difference between two object_found positions (pos_found – pos_original). Is there any fix for this? Do I have to do something special when subtracting rotation values returned by object_found? I know it's not a vision-target issue, because the three vision points are fine.
Note: the vision system is using the Automatic Method (not the Parametric Method), so that the T within the circle keeps the orientation accurate. Also, the snapshot position is perpendicular to the target.
UF1 = plane used as a user frame
MoveL 'With UF1 as the frame the positions are based on
@Vincent_Paquin can you advise?
I hope this helps.
Machine Vision Technical Leader
Thank you @Vincent, that helped me troubleshoot this a bit, and now I know the better question to ask. Basically, I was taking the difference between pos_found and pos_original, which gave me a good x, y, z offset, but it seems I can't do the same with rx, ry, rz because they're rotation vectors. I figure I have to use pose_trans(), but I couldn't get it to work right; below is a simplified version of my code. I just need to know how to properly offset the frame's rotation. FYI, I also tried converting my rotation vectors to RPY and then doing the math, but that didn't work either, though I may have done it incorrectly.
The goal is to offset UF1 by the offset of pos_found relative to pos_original. Thus, if pos_found is 30 mm over in the x direction and rotated 15 degrees (compared to pos_original), then UF1 should be adjusted by that amount.
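The reason plain subtraction works for x, y, z but fails for rx, ry, rz can be shown numerically: rotation vectors only add (or subtract) when they share an axis. A minimal pure-Python sketch (this is Rodrigues' formula, not UR code; the helper names are my own):

```python
import math

def rotvec_to_matrix(r):
    # Rodrigues' formula: axis-angle vector (rx, ry, rz) -> 3x3 rotation matrix
    th = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    if th < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / th for c in r)
    c, s = math.cos(th), math.sin(th)
    v = 1.0 - c
    return [[kx*kx*v + c,    kx*ky*v - kz*s, kx*kz*v + ky*s],
            [ky*kx*v + kz*s, ky*ky*v + c,    ky*kz*v - kx*s],
            [kz*kx*v - ky*s, kz*ky*v + kx*s, kz*kz*v + c]]

def matrix_to_rotvec(m):
    # inverse of Rodrigues' formula (valid away from theta = 0 and pi)
    ct = max(-1.0, min(1.0, (m[0][0] + m[1][1] + m[2][2] - 1.0) / 2.0))
    th = math.acos(ct)
    if th < 1e-12:
        return [0.0, 0.0, 0.0]
    f = th / (2.0 * math.sin(th))
    return [f * (m[2][1] - m[1][2]),
            f * (m[0][2] - m[2][0]),
            f * (m[1][0] - m[0][1])]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Same axis: rotation vectors really do add ([0,0,0.3] then [0,0,0.4] -> [0,0,0.7]).
same = matrix_to_rotvec(matmul(rotvec_to_matrix([0, 0, 0.3]),
                               rotvec_to_matrix([0, 0, 0.4])))
# Different axes: the composed rotation vector is NOT the element-wise sum,
# which is why subtracting rx, ry, rz components gives a bad orientation.
mixed = matrix_to_rotvec(matmul(rotvec_to_matrix([0.5, 0, 0]),
                                rotvec_to_matrix([0, 0.5, 0])))
```

So as soon as pos_found is tilted about a different axis than pos_original, component-wise subtraction of the rotation vectors stops describing the actual relative rotation.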
pos1 is the backup of UF1
pos_original is the reference position
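Assuming URScript's pose_trans() composes two poses as chained homogeneous transforms and pose_inv() inverts one (their documented behavior), the frame update described above can be sketched in Python. The numeric values for pos_original, pos_found, and pos1 are placeholders, not from the original program:

```python
import math

def rv_to_R(r):
    # Rodrigues' formula: rotation vector -> 3x3 rotation matrix
    th = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    if th < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / th for c in r)
    c, s, v = math.cos(th), math.sin(th), 1.0 - math.cos(th)
    return [[kx*kx*v + c,    kx*ky*v - kz*s, kx*kz*v + ky*s],
            [ky*kx*v + kz*s, ky*ky*v + c,    ky*kz*v - kx*s],
            [kz*kx*v - ky*s, kz*ky*v + kx*s, kz*kz*v + c]]

def R_to_rv(R):
    # inverse Rodrigues (valid away from theta = 0 and pi)
    ct = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    th = math.acos(ct)
    if th < 1e-12:
        return [0.0, 0.0, 0.0]
    f = th / (2.0 * math.sin(th))
    return [f * (R[2][1] - R[1][2]),
            f * (R[0][2] - R[2][0]),
            f * (R[1][0] - R[0][1])]

def pose_trans(a, b):
    # analogue of URScript pose_trans: T(result) = T(a) * T(b)
    Ra, Rb = rv_to_R(a[3:]), rv_to_R(b[3:])
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    p = [a[i] + sum(Ra[i][k] * b[k] for k in range(3)) for i in range(3)]
    return p + R_to_rv(R)

def pose_inv(a):
    # analogue of URScript pose_inv: T(result) = T(a)^-1
    Ra = rv_to_R(a[3:])
    Rt = [[Ra[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(Rt[i][k] * a[k] for k in range(3)) for i in range(3)]
    return p + R_to_rv(Rt)

# Placeholder poses [x, y, z, rx, ry, rz] in metres / radians:
pos_original = [0.40, 0.10, 0.20, 0.0, 0.0, 0.0]   # reference position
pos_found    = [0.43, 0.115, 0.20, 0.0, 0.0, 0.26]  # shifted and rotated
pos1         = [0.50, 0.00, 0.10, 0.0, 0.1, 0.0]    # backup of UF1

# Rigid motion that carries pos_original onto pos_found, in the base frame:
offset = pose_trans(pos_found, pose_inv(pos_original))
# Apply that same base-frame motion to the stored user frame:
UF1_new = pose_trans(offset, pos1)
```

The key point is that the "difference" is a pose composed via pose_trans(pos_found, pose_inv(pos_original)), not a component-wise subtraction, and it is applied to the frame by another pose_trans rather than by addition.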