Discussion

Ryan_Weaver Posts: 45 | Founding Pro, Partner, Beta Tester Vision 1.1 Program, Wrist Camera URCap 1.3.0, Handy
Once a robotic vision system identifies a part for pickup, an important second step is usually to determine whether any other objects in the area would interfere with the EOAT grabbing the part. If there isn't adequate separation in the space where the gripper fingers need to go, the part might not be picked up properly, or the tooling could be damaged.

I think this is an important capability that the Robotiq Camera system will need to let users define. @Grady_Turner and @Enric, do you see the same need?

Ryan Weaver   |   Automation Engineer   |   Axis New England
[email protected]
https://www.youtube.com/user/AxisNewEngland
https://twitter.com/axis_newengland

Comments

  • Grady_Turner Posts: 67 | Founding Pro, Partner, Beta Tester Vision 1.1 Program, Wrist Camera URCap 1.3.0, Handy
    @Ryan_Weaver   The issue I see is that the camera only looks for the taught part; anything else is ignored, even if it will be an obstruction. Maybe a different approach could be for the Camera Locate function to provide a minimum open distance needed to pick the part from its edges (or, if a Robotiq gripper is used, it could write that to the position variable for the user). I've put a rough sketch of the idea at the end of this comment.

    If the user is picking the part from an internal feature, then there is no need to worry about obstructions from the sides anyway.
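
    Something like this is what I have in mind, in Python (the numbers and names are just placeholders, not the actual URCap behavior):

        # How much free space is needed on each side of the part so the open
        # fingers can reach around it without hitting a neighboring object.
        def required_side_clearance(gripper_open_width, finger_thickness, part_width):
            gap_per_side = (gripper_open_width - part_width) / 2.0  # finger-to-part gap
            return gap_per_side + finger_thickness                  # space the finger itself occupies

        # Example: 85 mm opening, 5 mm thick fingers, 40 mm wide part
        # -> roughly 27.5 mm must be clear on each side of the part.
        print(required_side_clearance(85.0, 5.0, 40.0))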
  • JeanPhilippe_Jobin Posts: 63 | Beta Tester Vision 1.1 Program, Wrist Camera URCap 1.3.0, Handy
    @Ryan_Weaver
    What Grady is saying is right: the main issue here is that the Camera Locate node, for the moment, ignores anything other than the object that has been taught.

    This said, is it wrong to say that in most applications we will have only one type of part at a time, and that potential interference will occur between parts of that same type? If that assumption doesn't hold, can you please give me some examples?

    Jean-Philippe Jobin
    Eng., M.Sc. / ing., M.Sc.
    Chief Technical Officer / V.P. R&D 
  • eric Posts: 18 | Founding Pro, Partner, Handy
    Consider parts stacked on a table, some turned sideways and others right side up. If a part is detected but the surrounding parts are touching it, there could be a problem picking it up. Even if you detect five parts that are "good", if they are too close together you will have issues picking them up.
  • Ryan_Weaver Posts: 45 | Founding Pro, Partner, Beta Tester Vision 1.1 Program, Wrist Camera URCap 1.3.0, Handy
    @JeanPhilippe_Jobin - to echo what @eric is saying, I think the key is that for some (many) customers any collision could be problematic. If you go to pick up a part and there isn't clearance for the gripper jaws, a jaw could strike and damage a neighboring part. Then you might not know which part has been damaged, since it's mixed in with the rest.

    Also, many customers want to be able to start the robot picking parts and walk away while it's running. If the robot constantly enters a "protective stop" because it accidentally collides with parts that are too close to one another, then the throughput will be poor.

    I think the solution would be either to ensure a safe distance around parts that are identified as matches, or to let the user define an area around the "gold standard" part that must be clear for a pick to be OK. I know it adds complexity, but I can envision this coming up as a problem that could make the product unusable for some customers.
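
    To make it concrete, here is the kind of keep-out check I'm picturing (just a Python sketch with made-up positions; this isn't the Camera Locate API):

        import math

        # Only consider a detected part pickable if no other detection falls
        # inside a user-defined clearance radius around it.
        def clear_to_pick(candidate, others, clearance_radius_mm):
            for x, y in others:
                if math.hypot(x - candidate[0], y - candidate[1]) < clearance_radius_mm:
                    return False  # a neighbor is too close; skip this pick
            return True

        detections = [(100.0, 50.0), (130.0, 55.0), (300.0, 200.0)]
        pickable = [p for p in detections
                    if clear_to_pick(p, [q for q in detections if q is not p], 60.0)]
        # With a 60 mm clearance zone, only the isolated part at (300, 200) passes.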

    Ryan Weaver   |   Automation Engineer   |   Axis New England
    [email protected]
    https://www.youtube.com/user/AxisNewEngland
    https://twitter.com/axis_newengland

  • JeanPhilippe_Jobin Posts: 63 | Beta Tester Vision 1.1 Program, Wrist Camera URCap 1.3.0, Handy
    edited August 2016
    Just a recap:

    1. The current version of the Robotiq Camera does not take object separation into consideration. Thus either
    • the feeding process (manual or automatic) of the parts to the vision system ensures enough clearance between the parts (this is what the product was made for), or
    • a clearance validation is made afterwards using another means (e.g. force sensing, the gripper's object detection, etc.); see the sketch after this list.
    2. This said, we can envision improvements to our solution that would take into account clearances between parts of the same kind (following the idea that a Camera Locate node can only detect what has been taught).

    3. But it would be much more complex to develop something that would detect the clearance of an object with respect to anything else.
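
    As an illustration of the second bullet in point 1, the after-the-fact validation could look something like the Python-style sketch below (the gripper and force-sensor calls are placeholders for whatever interface is actually used):

        # Validate the pick after closing the gripper instead of predicting
        # clearance from vision: check for abnormal force on approach, confirm
        # an object was gripped, and confirm it was gripped at a plausible width.
        def pick_is_valid(gripper, force_sensor, expected_width_mm,
                          tol_mm=3.0, max_force_n=30.0):
            if force_sensor.peak_force() > max_force_n:   # hit something on the way in
                return False
            if not gripper.object_detected():             # closed on nothing
                return False
            measured = gripper.opening_mm()               # wrong part, or two parts at once
            return abs(measured - expected_width_mm) <= tol_mm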

    Thanks @Ryan_Weaver for this proposition; I also think this aspect will be important in the future evolution of the product, no matter which direction it takes.

    Jean-Philippe Jobin
    Eng., M.Sc. / ing., M.Sc.
    Chief Technical Officer / V.P. R&D 