Hi, thank you for your answer. We work with glass plates. Some of them the camera can identify, some not, but we know which ones it can identify and why. The plates have standard shapes: circles, rectangles, or squares. We know the dimensions; we get the data from the article database. The key point is that the work process is the same each time, so we just have to start the program and tell it the article number; the program gets the data from the database and can adjust the work path. So we only need the camera for the initial starting point. At the moment we teach the starting point by freedrive, but I think the camera would make it easier for the shop floor team.
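To make the database step concrete, something along these lines is what I mean; the table name, column names, and the offset helper are just made up for illustration, not our real system:

```python
import sqlite3

def get_part_dimensions(article_number, db_path="articles.db"):
    """Look up shape and dimensions for an article number.

    Table and column names are placeholders for whatever the
    real article database uses.
    """
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT shape, width_mm, height_mm FROM articles WHERE article_no = ?",
            (article_number,),
        ).fetchone()
    finally:
        con.close()
    if row is None:
        raise ValueError(f"Unknown article number: {article_number}")
    shape, width_mm, height_mm = row
    return {"shape": shape, "width_mm": width_mm, "height_mm": height_mm}

def pick_offset_from_start(dims):
    """Offset from the detected starting point to the plate center,
    assuming the starting point is one corner of the plate."""
    return (dims["width_mm"] / 2.0, dims["height_mm"] / 2.0)
```

The actual path adjustment would then only need the starting pose from the camera plus these article-specific offsets.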
Is the starting point always the same for any specific part? Could you just store that information along with the information you're using to adjust the path?
The snapshot position is always the same, and the relative motion will always be the same (just pick the part up in the middle with a vacuum gripper), then move it to another fixed position. That's the case at the moment. Maybe we will be able to edit the script for alternative actions later, but that is a future project. Is this understandable?
We use another camera with a fixed mount position; with that we can find a blob and return its center, so even if the blob changes somewhat we can still see it. By storing blobs of different shapes and sizes, you should be able to cover a wide range of parts and still return the center of the blob. How precise does the pickup point need to be relative to the center? We did a sheet metal job a couple of years ago, and to ensure we had a precise center position we picked up the sheets and placed them into a V-block mounted at an angle, so that the sheet always slid into the same corner. That way the center position was known more accurately.
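Just to illustrate the general blob-center idea (this is plain OpenCV as an example, not our camera's actual interface, and the threshold and area values are placeholders you would have to tune for your lighting):

```python
import cv2

def blob_center(image_path, min_area=500):
    """Return the (x, y) pixel centroid of the largest blob in the image.

    Assumes OpenCV 4.x; min_area filters out noise and the threshold
    value of 128 is only a placeholder for real conditions.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise IOError(f"Could not read {image_path}")
    _, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not blobs:
        return None
    largest = max(blobs, key=cv2.contourArea)
    m = cv2.moments(largest)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```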
I understand. We have tried different cameras, but the Robotiq is currently the most efficient for this glass; for all other cameras we would need additional equipment such as lights to get a good reflection. It is astonishing that the Robotiq camera can recognize glass (OK, not all of it, but efficiently enough for us), so it would be the easiest option for us. We just want to input the article number, the script gets the dimensions, the camera detects that shape at a given position, and the rest is "runtime". No big setup: start and run the same program, just enter the number. If we change the position of the robot, we change the snapshot position and then run the program. We would also have no need for additional equipment that we would have to integrate into the robot or the network. If this is possible, great; if not, not great, but OK. But we want to keep it simple and stupid (for the workers, not the programmer). So the idea came up that it might be possible to use the "interface" of the parametric method: the data must be stored somewhere, and there has to be a program in the background that works with the data from the GUI.
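To make the intended flow concrete, this is roughly the runtime logic I have in mind; detect_part_from_parameters is a hypothetical placeholder, since I don't know what the parametric method actually exposes behind the GUI:

```python
def detect_part_from_parameters(shape, width_mm, height_mm):
    """Hypothetical placeholder: pass the article's shape and dimensions
    to the vision step and get back the detected starting pose at the
    fixed snapshot position. The real call depends entirely on what
    interface the camera/URCap exposes."""
    raise NotImplementedError

def run_job(article_number, lookup_dimensions):
    """Top-level flow: the operator only enters the article number.

    lookup_dimensions is any callable that maps an article number to
    {"shape": ..., "width_mm": ..., "height_mm": ...}, for example the
    database lookup sketched earlier.
    """
    dims = lookup_dimensions(article_number)
    start_pose = detect_part_from_parameters(
        dims["shape"], dims["width_mm"], dims["height_mm"]
    )
    # From here the routine is fixed every time: pick the plate at its
    # center with the vacuum gripper, then move to the fixed place position.
    return start_pose
```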