The Eye is a project to build an imaging sensor (a webcam of sorts) from discrete, low-quality sensors. It's something to let me have some fun with data manipulation and noise rejection: the individual sensors vary a lot, and a sensor roughly 4 inches square with no optics makes it pretty difficult to visualize much of anything.
Update! 4/4/2017
I came back to revisit this project to see what I could improve on. In the time since designing it, my understanding of electronics has come a long way, and I had also learned how gamma adjustment works and wanted to incorporate it into the project. In this revision, I made a couple of hardware changes (though there are some design flaws that can't really be overcome with tweaks), ported the firmware to the Arduino IDE (using STM32_Arduino), added a gamma adjust function on the microprocessor, slightly optimized the data transmission speed, and tweaked some values for better automatic adjustments. On the PC side, I added controls for the gamma function to the interface application, along with individual control of the LEDs - something that was hardware-supported in the earlier version but had never been implemented in software. I also put the code up on GitHub to share around - it's a niche project, but some of the techniques may be useful to see fully coded out.
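The gamma adjustment is a good example of that. The real implementation is in the repo linked below; as a rough sketch of the technique - assuming 12-bit ADC readings from the STM32, which is my assumption here rather than something spelled out in the post - a precomputed lookup table turns the per-pixel correction into a single array read instead of a pow() call inside the scan loop:

```cpp
#include <math.h>
#include <stdint.h>

#define ADC_MAX 4095  // assumed 12-bit ADC range

static uint16_t gammaLUT[ADC_MAX + 1];

// Fill the table once (and again whenever the PC sends a new gamma value).
void buildGammaLUT(float gamma) {
  for (int i = 0; i <= ADC_MAX; i++) {
    gammaLUT[i] = (uint16_t)(pow((float)i / ADC_MAX, 1.0f / gamma) * ADC_MAX + 0.5f);
  }
}

// Applied to each raw reading before it is scaled and transmitted.
static inline uint16_t gammaAdjust(uint16_t raw) {
  return gammaLUT[raw];
}
```

A full 4096-entry table of 16-bit values costs 8 KB of RAM, a big bite out of the STM32F103's 20 KB; an 8-bit output table, or a smaller table with interpolation, would be a reasonable trade if memory got tight.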
The hardware modifications were two things. I replaced the wires running to the sensors with soldered jumpers, because during its time in storage the contacts had gotten questionable and pressing a little on the wires would change the sensor readings. I also swapped the old white LEDs for surface-mount ones - they are still too dim for the purpose I was envisioning (illuminating a near target), but they are at least short enough not to interfere with putting things close to the sensor.

If I made a new revision, I'd drive the LEDs with more current to reach that illumination target, but I wouldn't put them in between the sensor lines: that distorts the image somewhat, and even lighting can be achieved with a ring of lights around the sensor (individual LED control doesn't help much, it turns out). Packing the sensor elements more evenly and tightly together would also be better, because without optics your focal length is VERY short. A tighter cluster would capture more detail up close and might get close to being able to use a large lens as basic camera optics - and since I'm now fairly comfortable with PCB design, wiring up the bottom manually wouldn't be a concern, making it easier to pack the cells together without construction worries. It also seems that too much darkness in a line of sensors being read depletes the supplied current and dims the whole row. Extra balancing resistors on each element could fix this but would add a lot of parts; a different multiplexing algorithm would help but could get complicated; and more drive current (or selecting sensors for slightly higher dark resistance) would also help.
And the new software, showing a spool of wire on the sensor (the roughly circular thing) casting its shadow diagonally to the left. You can see a dark horizontal line under where the wire spool sits - this is the problem where too much darkness in a horizontal line makes all of the sensors in that row read too low.
Full source code for the firmware and computer interface is available on GitHub: The Eye on GitHub
Original page:
This project is built on a Leaf Labs Maple Mini clone - the same PCB, but from a different manufacturer - programmed through the Leaf Labs IDE. The imager is made of 64 CdS photocells rated at 10 kΩ, 9 white LEDs for some additional light, and a couple of support chips to reduce the number of pins required to interface with the Maple Mini; while the framerate would be awful through the serial connection, one Mini has enough interface pins to run two imagers. The Maple Mini uses C and is programmed just like an Arduino. Many libraries don't port over easily because the Mini is based on an ARM Cortex-M3, but some have been ported by Leaf Labs and more by the community.
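I haven't documented the exact support chips or wiring here, but to give a feel for the approach: one plausible arrangement for scanning an 8x8 photocell matrix uses a shift register to energize one row at a time and an analog multiplexer to route the eight column lines into a single ADC pin. The chip choices (74HC595 and CD4051) and pin numbers below are illustrative assumptions, not the actual board:

```cpp
// Hypothetical pin assignments - the real board's wiring may differ.
const int ROW_DATA = 2, ROW_CLK = 3, ROW_LATCH = 4;  // 74HC595 driving the row lines
const int COL_S0 = 5, COL_S1 = 6, COL_S2 = 7;        // CD4051 column select bits
const int COL_ADC = A0;                              // CD4051 common output -> ADC

uint16_t frame[8][8];

void selectRow(uint8_t row) {
  digitalWrite(ROW_LATCH, LOW);
  shiftOut(ROW_DATA, ROW_CLK, MSBFIRST, 1 << row);  // energize exactly one row
  digitalWrite(ROW_LATCH, HIGH);
}

void scanFrame() {
  for (uint8_t r = 0; r < 8; r++) {
    selectRow(r);
    for (uint8_t c = 0; c < 8; c++) {
      digitalWrite(COL_S0, c & 1);                  // pick the column to read
      digitalWrite(COL_S1, (c >> 1) & 1);
      digitalWrite(COL_S2, (c >> 2) & 1);
      delayMicroseconds(50);                        // let the mux and cell settle
      frame[r][c] = analogRead(COL_ADC);
    }
  }
}

void setup() {
  pinMode(ROW_DATA, OUTPUT); pinMode(ROW_CLK, OUTPUT); pinMode(ROW_LATCH, OUTPUT);
  pinMode(COL_S0, OUTPUT);   pinMode(COL_S1, OUTPUT); pinMode(COL_S2, OUTPUT);
}

void loop() { scanFrame(); }
```

A scheme like this needs only seven pins per imager, which lines up with one Mini having the headroom to run two of them.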
As a proof of concept, before I ordered all the parts, I built a prototype on a breadboard: a 3x3 pixel imager using an Arduino Nano. Nothing fancy, but it proved that I could make the interface to my PC work, and it gave me probably the world's lowest-resolution webcam, at 0.000009 megapixels.
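The prototype's code isn't shown here, but a minimal version of the idea looks something like this: rows driven directly from digital pins, three analog inputs for the columns, and readings streamed over serial as plain text. Pin choices are hypothetical.

```cpp
const int rowPin[3] = {2, 3, 4};     // each row driven directly from a digital pin
const int colPin[3] = {A0, A1, A2};  // each photocell divides down onto a column line

void setup() {
  Serial.begin(115200);
  for (int r = 0; r < 3; r++) pinMode(rowPin[r], OUTPUT);
}

void loop() {
  Serial.print('F');                               // frame marker so the PC can sync
  for (int r = 0; r < 3; r++) {
    for (int i = 0; i < 3; i++) digitalWrite(rowPin[i], i == r ? HIGH : LOW);
    delay(1);                                      // settling time
    for (int c = 0; c < 3; c++) {
      Serial.print(' ');
      Serial.print(analogRead(colPin[c]));
    }
  }
  Serial.println();
}
```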
Programming this thing has taken a while and it is by no means in a final state. I built the hardware in the summer of 2013 but didn't complete much software then; I got enough working to read the imager on my PC in September of 2013, but didn't return to the project to iron out bugs, flesh out the interface, and get it to this point until February of 2014. The microcontroller talks over a USB serial connection to a program written in Processing that displays the data and requests new frames or issues commands as needed. The bottleneck in the system is the USB serial baud rate: even at 1.1 Mbit/s, I can only reliably get about 2-3 fps from the imager on the PC, though when not attached, the system can easily operate at a smooth framerate, well over 60 fps. Some basic processing is done, and there are auto brightness and contrast settings. The whole system self-calibrates based on the maximum and minimum readings for each pixel and a set of offsets recorded from the frame with the smoothest output. The auto calibration rates itself based on how wide the range between the maximums and minimums is and how smooth the smoothest frame is, then applies adjustments to the raw data, scaling the adjustment amount with the calibration rating. This means that when first switched on the image is quite noisy, but given time and exposure to very dark and very bright environments, the calibration makes an even scene look fairly even, and makes the auto contrast and brightness adjustments improve the usefulness of the image considerably.
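The actual calibration code is in the firmware on GitHub, but the core of the idea looks something like this simplified sketch: per-pixel ranges plus a range-based confidence rating. The smoothest-frame offsets are left out for brevity, and the 12-bit scale is my assumption.

```cpp
#include <stdint.h>

#define NPIX 64

uint16_t pixMin[NPIX], pixMax[NPIX];  // extremes seen per pixel since power-up
float    confidence = 0.0f;           // 0..1 rating of how trustworthy the ranges are

void calibInit() {
  for (int i = 0; i < NPIX; i++) { pixMin[i] = 4095; pixMax[i] = 0; }
}

// Fold a new raw frame into the running extremes, then rate the calibration:
// the wider the observed ranges, the more of the correction gets applied.
void calibUpdate(const uint16_t *raw) {
  uint32_t rangeSum = 0;
  for (int i = 0; i < NPIX; i++) {
    if (raw[i] < pixMin[i]) pixMin[i] = raw[i];
    if (raw[i] > pixMax[i]) pixMax[i] = raw[i];
    rangeSum += pixMax[i] - pixMin[i];
  }
  confidence = (rangeSum / (float)NPIX) / 4095.0f;
}

// Rescale one reading to the full 0..4095 span using that pixel's own extremes,
// blended with the uncorrected value in proportion to the confidence rating.
// Call calibUpdate() on the frame first so raw is inside [pixMin, pixMax].
uint16_t calibApply(uint16_t raw, int i) {
  uint16_t range = pixMax[i] - pixMin[i];
  if (range == 0) return raw;
  float norm = (raw - pixMin[i]) * 4095.0f / range;
  return (uint16_t)(confidence * norm + (1.0f - confidence) * raw);
}
```

This reproduces the behavior described above: fresh from power-up the ranges are narrow and almost no correction is applied, and once the sensor has seen both very dark and very bright scenes, the per-pixel normalization takes over.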
This project is ongoing, with improvements to the calibration algorithm at the top of the list. I'm also considering breaking out one of the hardware serial ports and connecting it to an FTDI chip to get a second link to the computer, effectively doubling the image framerate. This has taught me a lot about what to pay attention to when calibrating sensors, and about making an asynchronous serial link to the computer work properly with compensation for errors, which took a lot more effort than I expected. I may try to improve the built-in lights, which should be capable of decent illumination but do very little at the moment and are not particularly even in spread. If I were to do another similar project, it would probably have a few more pixels, maybe 10x10 or so, and would be much more compact. The black silicone tube, which is meant to isolate pixels from side lighting, would be removed, and the CdS cells would be packed much tighter together for a more usable image and a little more fidelity, though building them closer together would make assembly considerably more difficult. Even in its current state of calibration, you can only faintly see the B that the imager is reading off the monitor.
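My actual serial protocol lives in the repo, but as a minimal sketch of the general error-compensation idea - a sync header, a frame counter, and a checksum, with all byte values chosen arbitrarily for illustration - something like this lets the PC side resynchronize after a dropped byte instead of rendering shifted garbage:

```cpp
// Send one frame of 8-bit pixels with enough structure for the receiver
// to detect corruption and find the start of the next frame on its own.
void sendFrame(const uint8_t *pixels, uint16_t n) {
  static uint8_t seq = 0;
  uint8_t sum = 0;
  Serial.write(0xA5); Serial.write(0x5A);  // sync pair marking a frame start
  Serial.write(seq++);                     // lets the PC spot dropped frames
  for (uint16_t i = 0; i < n; i++) {
    Serial.write(pixels[i]);
    sum += pixels[i];                      // simple additive checksum
  }
  Serial.write(sum);
}
```

The sync pair can still appear inside pixel data, so the receiver should treat it only as a hint: validate the checksum before accepting a frame, and scan forward for the next candidate header when validation fails.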