Discussion

Annick_Mottard Posts: 146 Handy
edited July 2016 in Robotiq Products
Hi all, 
We gathered the five most frequently asked questions about Robotiq's vision system. Note that the system is compatible with Universal Robots only and is designed to perform part detection.

  • Can the vision system do quality inspection?
No. The vision system is used for part detection, it cannot measure dimensions. 

  • Can the vision system use colors to detect objects?
No. As of now, the vision system is colorblind. It needs a certain amount of contrast to detect parts; therefore, transparent objects cannot be detected.

  • Is it possible for the vision system to identify components on a moving conveyor?
No. As of now, the conveyor would need to stop. The vision system uses a static Snapshot Position to detect parts. The robot can then perform a pick-and-place operation relative to the part's position at the time of detection (see the short URScript sketch after this FAQ).

  • How is the calibration performed?
The calibration is performed through the Snapshot Position wizard (in the Installation menu) with the calibration board provided by Robotiq. Note that each object to be located must be associated with a Snapshot Position, though several objects can share the same one.

  • What is actually calibrated?
The camera is calibrated for a robot position and a work plane. A Snapshot Position, and therefore its associated calibration, can be used in various programs as long as the robot system, the work plane and the camera position remain the same.
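To make "relative to the part's position" concrete, here is a minimal URScript sketch. It assumes a Camera Locate node has just found a part and set the object_location variable (see the comments below); the grip_offset pose is a hypothetical value chosen for this example.

    # Minimal sketch, not production code. object_location is the pose
    # set by the Camera Locate node, expressed in the robot base frame.
    # grip_offset is a hypothetical grasp pose expressed in the part's frame.
    grip_offset = p[0.0, 0.0, 0.02, 0.0, 3.14159, 0.0]
    # Compose the two poses to get the pick pose in the base frame:
    pick_pose = pose_trans(object_location, grip_offset)
    movel(pick_pose, a=1.2, v=0.25)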


Can you think of applications where Robotiq's vision system would be helpful? Feel free to post your questions or comments below!
Annick Mottard
Product Expert
Robotiq
[email protected] 

Comments

  • mdauser Posts: 2 Recruit

    Hi,

    Will it be possible to export data from the vision system such as the position and orientation of a part that it has located? A bit of background on my request:

    I'm working on a work cell for a UR robot, but I need to be able to relocate the arm from week to week to different work cells. We're aiming for pretty good precision, so I'm working on an automated calibration protocol for the arm that I could run each time I relocate it. Currently that involves essentially transforming the arm into a CMM (coordinate measuring machine) to locate a part of the cell. Ideally, though, I'd like to be able to use the vision system to take a picture of a target in my work cell and have the vision system output a position and orientation of this target with respect to the base frame of the robot. I'd then use that information as a starting point to generate the displacement vectors and rotation matrices for all subsequent operations to be performed in the work cell. Would there be any kind of URScript command that would output this kind of information?

    Thanks!

    Sean Fielding

    MDA

  • Nicolas_Lauzier Posts: 26 Crew
    @mdauser

    Thanks for the very good question. The software was not designed with this use case in mind, but it is actually possible to obtain the information you want from URScript, under certain conditions.

    So, when an object is found using the Camera Locate node, a variable named "object_location" is updated; it contains the pose of the object in the base frame of the robot (exactly what you want). You could therefore teach the target as an object, locate it, and offset all your programs according to the target position.

    However, our vision system works under the assumption that the object is always located in the same plane (the calibration plane). As such, any relocation error of the robot that changes this plane would not be picked up by the system. That being said, this would still work if the working plane is a table parallel to the base plane of the robot and every relocation preserves that constraint, along with the distance between the two planes (basically, if the calibration plane is still valid).

    In other words, the vision system would be able to compensate for a translation in x-y and a rotation about z (z being normal to the calibration plane), but not for the other degrees of freedom.
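    To illustrate, here is a rough URScript sketch. Only object_location comes from the vision system; the reference pose and waypoint values are placeholders you would teach yourself.

        # Rough sketch: offset taught programs by the target's displacement.
        # target_ref: pose of the target as originally taught (placeholder value).
        target_ref = p[0.4, 0.1, 0.0, 0.0, 0.0, 0.0]
        # object_location: target pose found by the Camera Locate node, in the base frame.
        # Correction that maps poses taught in the original cell into the relocated cell:
        correction = pose_trans(object_location, pose_inv(target_ref))
        # Apply it to any waypoint taught in the original location:
        pick_taught = p[0.5, 0.2, 0.05, 0.0, 3.14159, 0.0]
        movel(pose_trans(correction, pick_taught), a=1.2, v=0.25)
        # Note: this only compensates x-y translation and rotation about z.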

    Do you think that would work for your use case? Does it answer your question?

    Nicolas Lauzier, Eng., PhD
    R&D Director
    [email protected]

  • mdauser Posts: 2 Recruit

    @Nicolas_Lauzier that sounds really interesting. We could definitely try and make something like that work. Thanks for the reply!

    On a similar note, is there a variable that holds the number of parts that the camera sees in its workspace?

    Thanks

    Sean Fielding

    MDA

  • Nicolas_Lauzier Posts: 26 Crew
    edited July 2016
    @mdauser

    For the first release, the only output is the object location of the best match. We know that outputting the number of objects would also be desirable for some applications, but it will not be available in the first release (and I cannot commit to a specific release date for this feature).

    Nicolas Lauzier, Eng., PhD
    R&D Director
    [email protected]
