Discussion

Tyler_Berryman · Posts: 122 · Beta Tester Beetle, Wrist Camera URCap 1.3.0 Handy
edited September 2016 in Applications
Hi all,

Last week I was running a demo at the Universal Robots booth at IMTS. If you didn't get a chance to see the demo, here is the link to our DoF post about the events at IMTS: http://dof.robotiq.com/discussion/comment/1124

As you can see, I was using the wrist camera to identify the printed circuit boards, the gripper to pick up the parts, and the force/torque sensor to position the parts against the back edge of the tray. A lot of people were impressed by how much detail could be handled in such a simple application, and a few were curious to see the demo pushed even further. Several people were interested in using the wrist camera to identify both the parts and the open spaces in the tray. In this demo, the parts can be placed in a random orientation, and the tray can also be placed in a random orientation. So here is a video of the modified demo that I was running at IMTS:



The first part of the demo uses a Camera Locate node to identify the printed circuit board from the first snapshot position. If the vision system identifies a part matching the image template for the printed circuit board, the robot will pick up the part that best matches the template.
The robot will then move to the second snapshot position and enter the second Camera Locate node. In this node, the vision system looks for an open tray position. To teach this position, I used a yellow background and covered 3 of the tray positions with pieces of paper, leaving one tray position open. From there, I followed the normal steps to teach a part to the vision system. Here is the image template of the tray:



By using a Camera Locate node within a Camera Locate node, you can ensure that the robot will not look for an empty spot in the tray without first having picked up an object. Using the wrist camera to identify both the part and the tray also gives you added flexibility for machine tending applications, since you do not have to design precise jigs to position your packaging trays. This can also help you avoid using more advanced functions, like pallet functions, in your program.
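The nested-node logic can be sketched as plain control flow. This is just an illustration with hypothetical helper names (`locate_part`, `pick`, `locate_empty_slot`, `place`); on the robot, the two searches are actual Camera Locate nodes, one nested inside the other:

```python
def cycle(locate_part, pick, locate_empty_slot, place):
    """One pick-and-place cycle mirroring two nested Camera Locate nodes.

    The empty-slot search only runs after a part has been found and picked,
    which is exactly what nesting the second Camera Locate node guarantees.
    """
    part = locate_part()        # first snapshot position: look for a PCB
    if part is None:
        return False            # no part found: skip the tray search entirely
    pick(part)
    slot = locate_empty_slot()  # second snapshot position: look for an open slot
    if slot is not None:
        place(slot)
    return True
```

The point of the nesting is simply that the inner search is unreachable unless the outer search succeeded, so the robot can never "place" into the tray while holding nothing.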

Tyler Berryman
Robotiq Integration Coach
[email protected]
1-418-380-2788, option 3

Comments

  • Tyler_Berryman · Posts: 122 · Beta Tester Beetle, Wrist Camera URCap 1.3.0 Handy
    edited September 2016
     Program
       Robot Program
         MoveJ
           Waypoint_4
           Gripper Move 52%
         Camera Locate
           MoveL
             Waypoint_3
             Waypoint_1
             Gripper Close
             Waypoint_2
           MoveJ
             Waypoint_8
             Camera Locate
               MoveL
                 Waypoint_6
                 Loop norm(Fz)<FMax
                   position_increm≔Tool
                   position_increm[2] = position_increm[2]-.001
                   movel(position_increm,1.2,0.005,0,0.0001)
                 Waypoint_5
                 Gripper Move 51%
                 Waypoint_7
       Thread_1
         sensor_data≔socket_read_ascii_float(6,"stream")
          If sensor_data[0] ≥ 6
            
           Fx≔sensor_data[1]
           Fy≔sensor_data[2]
           Fz≔sensor_data[3]
           Mx≔sensor_data[4]
           My≔sensor_data[5]
           Mz≔sensor_data[6]
         Else
           Fx≔0.0
           Fy≔0.0
           Fz≔0.0
           Mx≔0.0
           My≔0.0
           Mz≔0.0
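Two pieces of this program tree are worth unpacking: Thread_1 parses the force/torque sensor's ASCII stream (six floats: Fx, Fy, Fz, Mx, My, Mz, zeroed out if a full reading is not available), and the Loop node steps the tool down in 1 mm increments until the measured Fz reaches FMax. Here is a minimal Python sketch of both, with hypothetical helper names standing in for URScript's `socket_read_ascii_float` and `movel`:

```python
FMAX = 10.0  # contact-force threshold in newtons (assumed value, not from the post)

def parse_sensor_line(line):
    """Parse one ASCII line of 6 comma-separated floats (Fx, Fy, Fz, Mx, My, Mz).

    Mirrors Thread_1: if fewer than 6 values arrive, fall back to zeros,
    just as the program zeros Fx..Mz when sensor_data[0] < 6.
    """
    try:
        values = [float(v) for v in line.strip("()\n ").split(",")]
    except ValueError:
        values = []
    return values[:6] if len(values) >= 6 else [0.0] * 6

def seek_contact(pose, read_fz, fmax=FMAX, step=0.001):
    """Step the tool down until |Fz| reaches fmax.

    pose is [x, y, z, rx, ry, rz]; read_fz() returns the latest Fz reading.
    Equivalent to the 'Loop norm(Fz) < FMax' node above, where each pass
    does position_increm[2] = position_increm[2] - 0.001 and a slow movel.
    """
    pose = list(pose)
    while abs(read_fz()) < fmax:
        pose[2] -= step  # descend 1 mm per iteration
        # movel(pose, 1.2, 0.005, 0, 0.0001) would execute here on the real robot
    return pose
```

On the real controller, the thread keeps the force variables fresh at all times while the main program consumes them, so the loop always compares against the most recent reading rather than blocking on the socket itself.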

