JMH
How to pick up parts from a big area efficiently with Camera Locate
Hi!
I'm having problems building a program that can find parts on a big table with Camera Locate and load them into a CNC machine. So far I have only managed to use one snapshot position and one small corner of the table where the metal pieces are placed. I tried to create more feature points and snapshot positions, but the program did not work as intended and became a mess, since I had 10 different spots to locate parts in.
I saw this post (https://dof.robotiq.com/discussion/1430/camera-detect-object-on-various-plane-and-from-various-camera-location) about the snapshot_position_offset command and wondered if it is the easiest way to solve my problem. The program should pick all pieces from one snapshot position and then move to the next location until the whole table is empty. If this command is the solution, should I add some If statement so the robot moves to the next place when nothing is found at the first one? Or is there some other, easier way to do this whole thing?
Grateful for all answers!
JMH
You don't need to use the snapshot_position_offset parameter, because your work plane is always at the same height. The snapshot_position_offset parameter is used to offset the Z position of the work plane.
What you need is to use the ignore_snapshot_position option of the Camera Locate node and remove the option that automatically moves the robot to the snapshot position. This way the robot can enter the Camera Locate node from any position: it takes a picture from the position it had before entering the node and detects the objects on the work plane.
You can put the Camera Locate node in a loop that brings the robot over each area of the work plane and then enters the Camera Locate node.
Let's say you divide your work plane into 6 areas; the program would look something like this:
i = 0
While i < 6
    MoveJ
        Switch (i)
            Case 0
                Waypoint_0
            ...
            Case 5
                Waypoint_5
    Camera Locate
        ...picking instructions...
    i = i + 1
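If you also want the robot to empty each area before moving to the next one (as in your first post), a rough sketch is to wrap the Camera Locate node in an inner loop controlled by a flag, in place of the single Camera Locate line above. Here part_found and Waypoint_area are just placeholder names, and I am assuming your program simply continues past the Camera Locate node when nothing is detected:
part_found = True
While part_found
    part_found = False
    MoveJ
        Waypoint_area      (back above the current area)
    Camera Locate
        part_found = True
        ...picking instructions...
When no more parts are detected in that area, part_found stays False, the inner loop exits and i = i + 1 moves the robot to the next area.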
Since the calibration board is smaller than your complete work plane, you may have some positioning error on the parts of the work plane far from where the calibration board was (the calibration board may not be exactly parallel with the work plane, and its own positioning has some small error). This may result in detection issues or unexpected variation of the Z picking position.
Can I do the same with the script assignment "ignore_snapshot_position=True" before entering the Camera Locate node?
To see the setting in the node's menu, you have to update the URCap to the latest version. If you use the script assignment instead, updating the camera URCap is recommended but not necessary.
https://dof.robotiq.com/discussion/1430/camera-detect-object-on-various-plane-and-from-various-camera-location
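If you go the script route, a rough sketch of where the assignment would sit in the program tree (same idea as in your question and in the discussion linked above; Waypoint_above_area is just a placeholder name):
MoveJ
    Waypoint_above_area
ignore_snapshot_position = True
Camera Locate
    ...picking instructions...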
Regarding the error, on which node do you get it? Could you share your program?
I suppose you have a While command whose condition is not defined.
The program goes to the first case (case 1) to search for parts, finds them, and performs the other actions in the Camera Locate node just fine. But when that area is empty and it should go search for parts in case 2, it just gives my pop-up "no parts detected" even though there are parts. Inside the Camera Locate node I have 5 subprograms: to open the doors, to close the doors, to set an M-code on the machine ON and OFF, to set cycle start ON and OFF, and to take the finished part away. Is this a problem? Should they be in a different place, for example after the count=count+1 line?
I attached a zip file with my program. I also exported the program to a PDF sheet and added some explanations about the subprograms that are inside the Camera Locate node.
To check why the camera is not detecting objects in case 2, you can lower the part detection threshold and watch the camera view in the Installation menu while the program is running. This way you can see if the camera makes wrong detections.
It is possible that the camera fails to detect edges because of different lighting conditions in the case 2 area.
Could you also confirm that the parts in case 2 are on the same work plane as in case 1?
I will check the camera view to see if that's the problem. There are very bright lights just above the robot, but I have already covered them and the area has the same lighting conditions throughout. However, I do have a black rubber surface covering the whole table and the parts are aluminium, which is not the best combination if I have understood right, so I will try to put something else there to get better contrast.
The snapshot area is exactly the size of the calibration board, so the camera is about 400 mm above the table when the picture is taken. And yes, case 2 is on the same work plane, since it is right next to the first "calibration board area" on the table. (Case 2 also covers a calibration-board-sized area from the same height; I just moved the robot to the side from case 1.) The aluminium pieces are 112 x 15 x 15 mm.
It can always detect the first object, but the problem comes with the next part: the robot twists itself to a different angle relative to the table and gets the error.
I have no idea why it does that. As written in the picture, waypoint_9 in case 1 is the same point as in the Camera Locate node, and waypoint_10 is right next to it. But on the second round of detecting parts it twists itself at both cases' waypoints. Could the problem come from the waypoints in the lower "Switch count" node? In the example program these waypoints were named exit_1, exit_2, etc. Are they not allowed to be the same points as the case waypoints before the Camera Locate node?
I have taught the waypoints for the cases again in both "Switch count" nodes, but the problem still exists. The position of the robot when it twists itself is very random and odd; it is definitely not any waypoint in my program. These waypoints seem to be acting as if they were incremental from some point or something.
The problem could come from the model itself or from the lighting conditions, which could make detection harder.
I am opening a support ticket so that we can advise you directly and exchange more information about your problem.