
Scrapping the endoscope, and all that means.

A project log for Arcus-3D-P1 - Pick and Place for 3D printers

Open source, mostly 3D printable, lightweight pick and place head for a standard groove mount

Daren Schwenke 02/24/2018 at 21:45 • 3 Comments

I've been mulling over ditching the endoscope for a more mainstream camera module.

The S/N ratio sucks, the quality sucks, and I have no control over exposure on the one I have.  The exposure is a big deal: when I switch from bottom to top vision the brightness changes, which causes a delay while it re-adjusts... every time.  It has also started to fail intermittently, like the imaging portion is losing bits.  That's probably due to the sheer number of times I've had to mess with it, as the USB cable also had an issue at the attachment point.
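
This is the sort of control I'm after.  A rough sketch, assuming a UVC camera on Linux driven through OpenCV; the property values are driver-dependent placeholders, and the endoscope I have simply ignores them:

    # Lock the exposure on a UVC camera via OpenCV's V4L2 backend.
    import cv2

    cap = cv2.VideoCapture(0)                   # /dev/video0
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)   # 0.25 selects manual mode on many V4L2 drivers
    cap.set(cv2.CAP_PROP_EXPOSURE, 0.05)        # fixed exposure so top and bottom vision match
    ok, frame = cap.read()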

So I've been researching replacement cameras.  The specs for the other USB endoscope cameras have been really hard to pull together, and determining whether any of them support control over exposure has been... impossible.  I think I'm going to go with a more mainstream camera.

I've settled on either the Raspberry Pi mini camera version 1.3, or the ELP super mini 720P 45 degree version.  Both would fit my current design with some modifications.

The ELP has a slightly larger footprint at 26mm square, and the camera is centered on it, which would cause an issue with my current layout as the mirror arm linkage passes through that space.  However, it would plug right in where my endoscope did under the current plan of using the BBG as my platform.

The other plan, using the Pi mini camera, is a radical departure from 'the plan'.

Obviously this requires me to move my vision processing to a Raspberry Pi.  However, the Pi would actually be faster at vision, and the entire image pipeline is much faster as well since it doesn't have to traverse the USB bus.  But the Pi can't do step generation with hard real-time accuracy like the BBG can using the PRU under Machinekit, so I'd lose the ability to have a single-board solution.  I'm also partial to kinematics other than Cartesian, for which there wasn't really a good, inexpensive solution until recently.
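
To illustrate the image pipeline point (a sketch assuming the picamera library; the resolution and Canny thresholds are placeholders): frames come off the camera module as numpy arrays and go straight into OpenCV without ever touching USB.

    # Grab a frame from the Pi camera module directly into OpenCV.
    import cv2
    from picamera import PiCamera
    from picamera.array import PiRGBArray

    camera = PiCamera(resolution=(1280, 720), framerate=30)
    raw = PiRGBArray(camera, size=(1280, 720))

    camera.capture(raw, format='bgr')       # lands as a numpy array, already BGR for OpenCV
    edges = cv2.Canny(raw.array, 50, 150)   # hand it straight to the vision pipeline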

Through the use of klipper and a couple of cheap Arduinos I get the best of both worlds, mostly.  The Arduinos (yes, you can use more than one, synchronized) handle the hard real-time stuff and are just passed a queue of things to do, whereas the motion planning and the actual interpretation of gcode happen in Python on the Pi.  That's a step up from the old way of trying to squeeze all the motion control onto an 8-bit AVR, with varying levels of compromise required.
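
Roughly, the relevant bits of a klipper printer.cfg look like the sketch below; the serial paths and pin names are placeholders, but the idea is that any pin prefixed with a second MCU's name gets handled by that board.

    [mcu]
    serial: /dev/serial/by-id/usb-arduino-one      # placeholder path

    [mcu aux]
    serial: /dev/serial/by-id/usb-arduino-two      # placeholder path

    [stepper_x]
    step_pin: PF0          # driven by the primary MCU
    dir_pin: PF1
    enable_pin: !PD7
    # (remaining stepper parameters omitted)

    [stepper_z]
    step_pin: aux:PA4      # the 'aux:' prefix routes this pin to the second MCU
    dir_pin: aux:PA6
    enable_pin: !aux:PA2
    # (remaining stepper parameters omitted)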

The downsides of switching to klipper:

The upside:

I'm mulling it over.  

If you made it this far, feel free to weigh in.

Discussions

Daren Schwenke wrote 02/24/2018 at 23:57 point

I've actually got a couple of Pi Zeros here, but no camera that will fit them, or I'd be experimenting now.  The Zero also supports USB slave mode, so with a virtual ethernet setup getting at the images wouldn't be too hard (although I might as well just use the ELP camera at that point).  I assume the 'USB gadget interface' is akin to slave mode, probably just more specific?
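
For reference, the virtual ethernet bit should just be a couple of boot-partition tweaks on the Zero (as I understand it on Raspbian of this vintage; untested here):

    # /boot/config.txt -- enable the USB device-mode controller
    dtoverlay=dwc2

    # /boot/cmdline.txt -- append to the existing single line, after 'rootwait'
    modules-load=dwc2,g_ether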

If the workload were split, with OpenPnP living on the BBG and OpenCV living on the Pi, this would totally make sense.  The real difficulty would be making OpenPnP run just the vision component remotely, split across the two machines.  That is currently not possible AFAIK, and would probably require a lot of work.  I'll look into it.

As for making the Pi Zero the real master and just running Machinekit to handle the gcode interpretation: that would work, and it would be easy.  Machinekit can already accept gcode over a network connection.  The problem is that the Zero is only a single core and basically the same speed as the BBG.  Stepping up to a full-size Pi works, but now I'm using two moderately expensive boards, and we're back to it making sense to just run klipper.

There is one more gotcha (that I haven't triggered on the BBG yet, so fingers crossed): the USB bus of the Beaglebones can reset under high load, and then it just locks up. http://e2e.ti.com/support/arm/sitara_arm/f/791/t/308549  The ELP USB camera could trigger this bug as well, though.  Probably safer, and more portable for everyone else, to just go with the Pi solution.


[this comment has been deleted]

Daren Schwenke wrote 02/26/2018 at 21:38 point

OpenPnP captures the images and then sends them through an OpenCV pipeline to detect edges and such.  It also displays them on screen with the resulting edges/features highlighted.  Video in Java... not too efficient.

Corrections are made if the part is not straight or not centered, and then the part is placed.  Repeat. 
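
The same idea sketched in Python/OpenCV (OpenPnP does this in Java, and its actual pipeline stages differ; the file name and threshold choice here are placeholders): find the part in the bottom-vision frame, then measure how far off-center and off-angle it sits.

    # Measure part offset and rotation from a bottom-vision frame.
    import cv2

    frame = cv2.imread('nozzle.png')                   # placeholder bottom-vision image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # [-2] keeps this working across OpenCV 3/4 return-signature differences.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    part = max(contours, key=cv2.contourArea)          # assume the biggest blob is the part

    (cx, cy), (w, h), angle = cv2.minAreaRect(part)    # center, size, rotation of the part
    dx = cx - frame.shape[1] / 2                       # offset from frame center, in pixels
    dy = cy - frame.shape[0] / 2
    # dx, dy and angle become the correction applied before the part is placed.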

If I'm just using the Zero as a webcam, I don't really gain much here, and it adds a lot of complexity.  There is a webcam gadget though, so it's simple to try.


[this comment has been deleted]

Daren Schwenke wrote 02/26/2018 at 21:01 point

I'm going to try running OpenPnP on the Zero Wireless I have.  OpenPnP is Java-based, so it's kind of a pig in and of itself, and the Zero is a single core, so that plus vision makes me nervous.

But this way I could put the Zero right on the head with just power needed, and they make a 'spy camera' https://www.amazon.com/gp/product/B0769KS7C7 which fits the format here nicely.  It's 5MP, so that should give me enough spare pixels to 'digital zoom' in on the nozzle to eliminate the extra area and still have enough resolution.
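
The 'digital zoom' itself is nothing fancy; a quick sketch (the function name and crop size are just placeholders), cropping a centered window out of the full 2592x1944 frame that the 5MP sensor delivers:

    # Crop a centered region out of the full frame to 'zoom' in on the nozzle.
    def digital_zoom(frame, crop_w=640, crop_h=480):
        h, w = frame.shape[:2]
        x0 = (w - crop_w) // 2
        y0 = (h - crop_h) // 2
        return frame[y0:y0 + crop_h, x0:x0 + crop_w]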

I'm still undecided on whether I'm going with klipper or not.  Probably not at first, since that way I don't have to write the kinematics right away.

Thanks for your input.

Edit: And... I changed my mind again and decided to use a Pi 3.  I liked the idea of each high-processor-usage part getting its own core too much.  I had to go with the larger camera though, but I'll lift the sensor and aim it at 90 degrees.  Modeling ensuing.
