Monday, August 29, 2005

 

Meccano EVA extruder plans

No pictures to show today, as work was fairly slow in New Zealand on the weekend. But our PIC serial port does at least connect to the PC to the point where we can turn a motor on and off with it.
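
For the curious, the PC side of that test is little more than opening the serial port and writing a command byte. Here's a minimal sketch in Python using pyserial - the one-byte '1'/'0' protocol is a made-up placeholder, not the actual PIC firmware commands:

    # Minimal sketch: toggle a motor through a PIC on the serial port.
    # The '1'/'0' one-byte protocol is hypothetical - substitute whatever
    # commands the real firmware expects.
    import time
    import serial  # pyserial

    port = serial.Serial('/dev/ttyS0', 9600, timeout=1)
    port.write(b'1')   # hypothetical "motor on" command
    time.sleep(2.0)    # let the motor run for a couple of seconds
    port.write(b'0')   # hypothetical "motor off" command
    port.close()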

Attempts to drive the 5V Meccano EVA feed motor with a relay resulted in a "smoked" PIC, due to the flyback diodes accidentally being left unconnected. These are normally integral to the buffer, but their wiring got missed out.

This has prompted the move to a 12V-only EVA feed. The DVD drive shown last month has now been dismembered, and the motor/gearbox from the sliding DVD tray is being pressed into service. The only snag is that it works on 5mm shafts and the Meccano EVA feed is on 4mm, so there's a bit of turning on the lathe to do.

Finally, a submission for a RepRap presentation has been made to LinuxConf Dunedin for 2006: http://linux.conf.au

Vik :v)

Comments:
I just found an interesting project on the web.

Digital Pygmalion project: from photographs to 3D computer model

http://www.eng.cam.ac.uk/news/stories/2005/digital_pygmalion/

They have made a program that automatically makes a 3D model from a movie.

Most digital cameras can record short movies, so you can make a 3D model just by using a digital camera and this program.
 
Interesting. We've previously considered using a laser pointer shining on the turntable to capture point information on a video camera. Being able to do it with snapshots from a digital camera might be easier for the user.

BTW, the new EVA extrusion head is now assembled and awaits initial mechanical testing tonight.

Vik :v)
 
This technique has been around for several years - there is a fairly complete description of it in the SIGGRAPH proceedings for either 2001 or 2002. (Sorry - I don't have my proceedings books here at home).

The difficulty is that it's VERY complicated to implement and consumes vast CPU resources. The laser-pointer approach is more accurate and almost trivial to get working (I did it in one evening - including building the hardware from Lego!)

In this application, some steps of the Cipolla/Esteban approach can be skipped because we can know the exact position of the camera relative to the model.

Where the camera-position-extraction approach really shines is in applications where you are using existing footage. At the SIGGRAPH presentation, they showed old movie footage being used to reconstruct 3D models of the set, so that footage of modern actors or props could be inserted into the movie 100% automatically.

The 'killer app' for this is 'product placement'. For TV audiences with TiVo and an intolerance for conventional advertising, it's now entirely possible to digitally insert a 3D model of a modern product into a movie made 50 years ago.

Be afraid - be *very* afraid!
 
Would you care to share the scanner software with the Great Unwashed?

It'd make an interesting sideline for the RepRap project.

Vik :v)
 
You might like the SplineScan project.

http://www.splinescan.co.uk/index.html
 
My code was a quick hack - it relies on LOTS of odd messy libraries I've cobbled together over the years. It'll take me a while to get it all together for release.

But my techniques and results are pretty much identical to the splinescan site (thanks for the link BTW - I didn't know about it).

I used a laser pointer - the trickiest part was converting the beam into a laser line. Everyone recommends cylindrical lenses made from glass or acrylic rods - but I had very poor results with any samples I could get hold of.

I eventually ended up with a Fresnel lens that I picked up on eBay, of the kind used in supermarket barcode scanners.

If you wanted to do things a bit more mechanically, you could simply scan the laser up and down by tilting it...but spreading the light with a lens has the huge advantage of dramatically reducing the amount of energy in the laser light - so it's not so harmful to shine it into people's eyes...handy if you want to scan someone's face.

I'll see about getting the code together - I have no problem with donating it to RepRap.
 
The code would be good. We can simplify things by using the turntable height controls to gradually lower the object each revolution, so no beam spread would be needed and spot movement would be linear.
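
To make that concrete, here's a rough sketch of the geometry, assuming the laser beam passes through the turntable axis and the camera views it at a known angle; the spot's pixel offset from the axis then gives the surface radius by simple triangulation. All the calibration numbers below are placeholders, not values from any real rig:

    # Rough sketch: turn (frame number, spot pixel column) into 3D points.
    # Assumed geometry (all numbers are illustrative placeholders):
    #   - the laser beam passes through the turntable axis
    #   - the camera views the laser plane at angle CAM_ANGLE
    #   - the table turns STEP_DEG per frame, drops DROP_MM per revolution
    import math

    CAM_ANGLE  = math.radians(30.0)  # camera-to-laser angle
    PIX_PER_MM = 4.0                 # image scale at the axis
    AXIS_PIX   = 320                 # pixel column where the axis projects
    STEP_DEG   = 1.0                 # turntable rotation per frame
    DROP_MM    = 1.0                 # table drop per full revolution

    def point_from_frame(frame, spot_x):
        """Return (x, y, z) in mm for the laser spot seen in one frame."""
        # Radius from the axis, by triangulation on the spot's pixel offset.
        r = (spot_x - AXIS_PIX) / (PIX_PER_MM * math.sin(CAM_ANGLE))
        theta = math.radians(frame * STEP_DEG)    # table angle this frame
        z = -DROP_MM * frame * STEP_DEG / 360.0   # height drops linearly
        return (r * math.cos(theta), r * math.sin(theta), z)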

My Mum is visiting, so I've not had much time to myself. Hopefully I'll get the head depositing again tonight. Just got to brace the nozzle better, fit a tray to catch spilt molten plastic, and arrange a crude 1 in 4 duty cycle for the extruder motor. Piece of cake :)
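
The 1-in-4 duty cycle needs nothing fancier than timed on/off commands from the host. A quick sketch, with motor_on() and motor_off() as stand-ins for whatever actually drives the extruder:

    # Crude 1-in-4 duty cycle: motor on for one time slice in every four.
    import time

    SLICE = 0.5  # seconds per time slice (an arbitrary placeholder)

    def motor_on():  print("motor on")    # stand-in for the real control
    def motor_off(): print("motor off")   # stand-in for the real control

    def run_duty_cycle(cycles):
        for _ in range(cycles):
            motor_on()
            time.sleep(SLICE)      # on for one slice...
            motor_off()
            time.sleep(SLICE * 3)  # ...off for three: 25% duty cycle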

Vik :v)
 
Doing many, many revolutions of the turntable would slow the process to a crawl. The software already has to analyse maybe 1000 images of almost a megabyte each. If we had a vertical resolution of a millimetre or so, it would take hundreds more passes.

Remember, we want uncompressed video here - otherwise the JPEG/MPEG compression artifacts kill you.

The MPEG encoding is particularly nasty because it attempts to discover consecutive frames that are almost the same and actually makes them the same to save space. That really messes up the algorithm and the MPEG optimisations make little flat spots all over the resulting 3D surface.

So using uncompressed video is best. But when you do that, you can easily get up to hundreds of gigabytes of stuff.
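
The arithmetic is easy to check - a back-of-the-envelope sketch, with frame sizes and counts that are illustrative guesses rather than measurements:

    # Back-of-the-envelope storage estimate for uncompressed capture.
    # All numbers are illustrative guesses, not measurements.
    frame_bytes = 640 * 480 * 3        # one 24-bit VGA frame, ~0.9 MB
    frames_per_rev = 1000              # ~1000 images per turntable pass

    one_pass = frame_bytes * frames_per_rev
    print("one revolution: %.2f GB" % (one_pass / 1e9))      # ~0.92 GB

    passes = 200                       # e.g. 200mm object at 1mm per pass
    print("full scan: %.0f GB" % (one_pass * passes / 1e9))  # ~184 GB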

Hence, it's a better idea to grab as much information as possible in a single revolution of the turntable. It makes the scanning orders of magnitude faster, the data requirements for the video orders of magnitude less - and the final calculation of the 3D surface also many hundreds of times faster.

It's worth buying a beam spreader for that (and the reduction in laser light intensity is a nice spin-off for safety with very shiny objects).
 