

ziga004 Posts: 13 Apprentice

I'm working on an application that has six workstations spread around the robot base. Every so often I'd like to run a calibration routine using the Robotiq wrist camera to detect the six visual offset tags. I've noticed that all of the tags are exactly the same and have the same ref. number written on them, "Ref. 564JE". How does the robot know which one is which if I have all of them in the application but at different workstations, one per station?

What I've envisioned is this: over time the workstations could shift a little, and because of the strict tolerances I'd like to detect and correct this. So a few times per day, I'd like the robot to find the separate tags, calculate the location of each of these visual offset tags, and recalculate the relative moves after that. I'd have six workstations, six visual offset tags, and six snapshot positions/tag positions, close and far.

Thank you



  • bcastets Vacuum Beta tester Posts: 673 Expert
    As you said, the visual tags are identical, so there is no way to differentiate them. You should have only one visual tag in the field of view of the camera.
  • ziga004 Posts: 13 Apprentice
    Hi, thank you for your reply. However, if only one tag is in the field of view, meaning that from the far detection point the wrist camera "sees" only that one tag, then I can have multiple tags and, consequently, multiple detection nodes in the PScope and multiple programs? By multiple I mean six, as per my original post.

    If all the tags are the same, why are there five in the set? Just extras?

    Can I create my own tags and use them with the apply visual offset node and visual offset node?

    I thank you kindly!
  • bcastets Vacuum Beta tester Posts: 673 Expert
    You can use several visual tags at different locations. It works.
  • ziga004 Posts: 13 Apprentice
    Then back to my original question: if all the tags are the same, how can I program different tasks/events? If the camera detects "tag 1", it has to run a certain program called "program 1"; if the camera detects "tag 2", it should execute "program 2". But since "tag 1" is identical to "tag 2", what happens here? What differentiates the execution of program 1 from program 2? Certainly not the tag, since it's graphically the same.

    Where is the crucial difference I'm not getting?

    Thank you
  • Yannik_Methot Posts: 40 Handy
    edited May 2022
    Hi @ziga004,

    For your exact application, I would recommend using the barcode reading feature instead of visual offset tags.

    For example, you may have a different barcode for each program you want to execute. The result of the barcode scan is stored as a string in a variable, which you can use in a switch case to execute different program parts. You may learn more about the mechanics of this feature by looking at 'Barcode' in the Wrist Camera Instruction Manual.
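    To illustrate the switch-case idea above, here is a minimal Python sketch (not the robot's actual PolyScope program; the barcode strings and program names are invented for this example). On the robot this logic would live in a Switch/Case node keyed on the camera's string variable.

    ```python
    def run_station_program(barcode_value):
        """Dispatch to a station-specific routine based on the scanned barcode.

        Hypothetical barcode strings: one distinct barcode per workstation.
        """
        programs = {
            "STATION_1": "program 1",
            "STATION_2": "program 2",
            # ... one entry per workstation/barcode
        }
        program = programs.get(barcode_value)
        if program is None:
            return "unknown barcode"
        return f"executing {program}"

    print(run_station_program("STATION_2"))  # → executing program 2
    ```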

    The visual offset tags' purpose is more to fine-tune robot moves relative to where the tag was scanned. For example, if your tag is scanned with a 12° rotation and 3 cm offset from where you originally taught it, the following relative moves will be offset by the same amount, so your program will still work despite this offset. Again, you may find further details about this in the manual if you wish to learn more.
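    The geometry behind that correction can be sketched as a planar rigid transform: a point taught relative to the tag is rotated and translated by the detected tag offset. This is only an illustration of the math, assuming a simple (x, y, angle) offset, not Robotiq's actual implementation.

    ```python
    import math

    def apply_visual_offset(waypoint_xy, offset_xy, offset_deg):
        """Rotate then translate a taught (x, y) point by the detected tag offset."""
        t = math.radians(offset_deg)
        x, y = waypoint_xy
        # Rotate the taught point by the detected tag rotation...
        xr = x * math.cos(t) - y * math.sin(t)
        yr = x * math.sin(t) + y * math.cos(t)
        # ...then shift it by the detected tag translation.
        return (xr + offset_xy[0], yr + offset_xy[1])

    # Tag found 3 cm (0.03 m) along x and rotated 12° from the taught pose:
    new_point = apply_visual_offset((0.10, 0.0), (0.03, 0.0), 12.0)
    print(new_point)  # roughly (0.1278, 0.0208)
    ```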

    Hope this helps,
  • ziga004 Posts: 13 Apprentice
    Hi @Yannik_Methot,

    Thanks for the info. Does barcode scanning enable me to get the "object_location" variable from the barcode? This is very important for me. I know that if I use just the basic vision detection node (not visual offset), I can get an (x, y, z, rx, ry, rz) vector for the object.
  • bcastets Vacuum Beta tester Posts: 673 Expert
    Barcode reading does not give you an "object location". You can use a combination of a visual offset tag and a barcode.
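    Putting the two replies together, a combined routine could use the barcode to identify which station the robot is at and the visual offset tag to correct that station's taught waypoints. The following Python sketch is hypothetical: the station names, barcode strings, and a simple translation-only offset are invented for illustration.

    ```python
    def calibrate_station(barcode_value, tag_offset_xy, taught_waypoints):
        """Identify the station by barcode, then shift its taught waypoints
        by the (x, y) offset measured from the station's visual tag."""
        stations = {"WS1": "workstation 1", "WS2": "workstation 2"}
        station = stations.get(barcode_value, "unknown")
        dx, dy = tag_offset_xy
        corrected = [(x + dx, y + dy) for (x, y) in taught_waypoints]
        return station, corrected

    # Station WS1 drifted by 2 mm in x and -1 mm in y since teaching:
    station, points = calibrate_station("WS1", (0.002, -0.001), [(0.10, 0.20)])
    print(station, points)
    ```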