Hi Pros,

Here is a video I made where I locate a round white piece of cardboard on a flat surface. I locate the part using the Robotiq camera mounted on a UR5. I don't move the part between snapshots, so when the robot comes down with the pen onto the object, it should hit the exact same location every time.

I would like to be more accurate than what you see in the video. Of course I could take the snapshot closer to the object, but I want to keep a relatively wide field of view. What are your thoughts on this? Do you have any tricks, or can you put some numbers on the repeatability we should expect from Robotiq's camera with a UR5?

https://www.youtube.com/watch?v=wkxwLczp8VU&feature=youtu.be
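To put actual numbers on repeatability, one simple approach is to measure the pen-mark positions from repeated touches and compute their spread around the centroid. A minimal sketch (the coordinates below are made up for illustration, not measured data):

```python
import math

def repeatability(points):
    """Radial repeatability of repeated touch points.

    `points` is a list of (x, y) positions in mm. Returns the mean and
    maximum distance from each point to the centroid of all points.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    dists = [math.hypot(x - cx, y - cy) for x, y in points]
    return sum(dists) / n, max(dists)

# Hypothetical pen-mark positions measured after five snapshots (mm):
marks = [(100.2, 50.1), (100.0, 50.3), (99.9, 49.8),
         (100.1, 50.0), (100.3, 50.2)]
mean_dev, max_dev = repeatability(marks)
print(f"mean deviation: {mean_dev:.2f} mm, max: {max_dev:.2f} mm")
```

Marking dots with a pen and measuring them with calipers is crude, but it gives a concrete mean/max deviation to compare setups against.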
@Sebastien Check my comment on the target; yours is an example of a bad target that will give you an error in the repeatability test: http://dof.robotiq.com/discussion/comment/1511/#Comment_1511

We will release numbers as soon as possible!
@Etienne_Samson That makes a lot of sense. So the closer to the center I am when teaching a round/symmetrical target, the more accurate I will be! I will run some more tests!
@Etienne_Samson I was looking at this application more closely. What you said makes sense, but based on other vision hardware that we work with, there should be a way around it.

Snapping and finding the round shape is quite simple using a pattern-matching algorithm or a blob finder. We then get the X,Y position of the geometric center of our round part, along with the angle between the taught and found pattern. I would just discard this angle and send only the X-Y position to the robot. This way I should point directly at the middle every time, as long as the robot keeps the same orientation on the plane where it has to pick. What do you think?
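The "discard the angle" idea works for round parts because a circle's centroid is rotation-invariant. A minimal sketch of the blob-center step on a synthetic thresholded image (this is not Robotiq's or any vendor's actual algorithm, just the centroid idea):

```python
def blob_center(mask):
    """Geometric center (centroid) of the foreground pixels in a binary mask.

    `mask` is a list of rows of 0/1 values, standing in for a thresholded
    camera image. Returns (cx, cy) in pixel coordinates, or None if no
    foreground pixels are found.
    """
    sx = sy = count = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                sx += x
                sy += y
                count += 1
    if count == 0:
        return None
    return sx / count, sy / count

# A small synthetic "round white part" on a dark background:
img = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
print(blob_center(img))  # -> (2.0, 2.0)
```

Because the centroid is the same regardless of how the part is rotated, the orientation output of pattern matching carries no useful information for a symmetrical part and can safely be thrown away.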
@Sebastien Yes, it is possible, and it would be a nice improvement for such parts. For now our teaching method does not have a "blob center finder", but that could definitely happen in the future. That's also why symmetric parts like yours fall into the category of parts we do not recommend.
@Etienne_Samson We did the same testing as above but with a fixed Cognex camera. The Cognex camera has a really nice calibration feature built into its interface: it lets you calibrate the robot with the vision system by simply moving the robot to calibration targets. Once done, the coordinate values output by the camera are in the robot's coordinate system.

You will see in our video below that we are very repeatable when taking snapshots while leaving the white round piece at the same spot. We are 1-2 mm off when we move the piece on the table. This is due to our calibration and our setup; we did not spend much time on the calibration since we simply wanted to run some tests quickly. Furthermore, our touch tool was a Sharpie that could move a bit in its holder, and the camera was installed on a stand that could vibrate a little.

So our conclusion was that this setup worked well for repeatable results on these round objects. To increase accuracy we would:

- Have the camera mounted on a more rigid, fixed pole.
- Have a stiff point tool mounted on the robot to use during the calibration procedure.
- Have another stiff point tool on the table to perform the TCP definition procedure on the UR, so that the point tool on the robot arm is defined accurately.
- Have a small notch machined right in the middle of some target parts, so that during the calibration procedure we can move the robot's point tool right to the middle of each part. That way we make sure each target is pointed at exactly in the middle during calibration.

Below is the video. Note that there are two different part sizes in the video: the first is about 1.25'' in diameter and the second 0.5''. Using the blob tool we can easily switch between the two parts!

https://youtu.be/J2Lw72yXz04
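For anyone who wants to reproduce a bare-bones version of that calibration step without a built-in tool: with three non-collinear targets, a 2-D affine pixel-to-robot map can be solved exactly. A sketch under that assumption (all point values below are hypothetical, and this is not the Cognex procedure itself):

```python
def fit_affine(pixel_pts, robot_pts):
    """Fit a 2-D affine map from camera pixels to robot coordinates.

    Mimics the hand-eye calibration idea: jog the robot's point tool to
    three non-collinear targets, record each target's pixel position from
    the camera and the robot's (x, y), then solve the 6 affine parameters
    exactly with Cramer's rule. Returns a function (u, v) -> (x, y).
    """
    (u0, v0), (u1, v1), (u2, v2) = pixel_pts
    det = u0 * (v1 - v2) - v0 * (u1 - u2) + (u1 * v2 - u2 * v1)

    def solve(w0, w1, w2):
        # Solve a*ui + b*vi + c = wi for i in {0, 1, 2}.
        a = (w0 * (v1 - v2) - v0 * (w1 - w2) + (w1 * v2 - w2 * v1)) / det
        b = (u0 * (w1 - w2) - w0 * (u1 - u2) + (u1 * w2 - u2 * w1)) / det
        c = (u0 * (v1 * w2 - v2 * w1) - v0 * (u1 * w2 - u2 * w1)
             + w0 * (u1 * v2 - u2 * v1)) / det
        return a, b, c

    ax, bx, cx = solve(*(p[0] for p in robot_pts))
    ay, by, cy = solve(*(p[1] for p in robot_pts))
    return lambda u, v: (ax * u + bx * v + cx, ay * u + by * v + cy)

# Hypothetical calibration: three targets seen at these pixel positions,
# touched by the robot's point tool at these (x, y) positions in mm.
pix = [(100, 100), (500, 100), (100, 400)]
rob = [(200.0, 300.0), (300.0, 300.0), (200.0, 375.0)]
to_robot = fit_affine(pix, rob)
print(to_robot(300, 250))  # -> (250.0, 337.5)
```

An affine fit absorbs scale, rotation, and offset but not lens distortion or perspective; using more than three targets with a least-squares fit (as the Cognex calibration effectively does) averages out the pointing error at each target.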