Tuesday, April 18, 2006
RepRap application

This is the current RepRap application, showing both the construction view and the production progress window. The progress area optionally lets you pause the process so you can inspect what is being printed. Throughout this time, both panels can still be fully manipulated and inspected.
What's especially fun about this is that it is all talking to the hardware now too, so you can watch the motors whirling away, changing direction, etc. as the model builds up. If only I had a frame to put it all on :)
Without the hardware installed, you can select the null device in the preferences screen and still emulate the process.
There's still plenty to do of course. As you can see, what is being produced on that screen doesn't actually match the scene on the left, and that's some of the magic that Adrian is part way through.
Comments:
Is the source code available somewhere? I'd like to take a look; maybe there's some way I could contribute after all (I've been writing code for 26+ years and am pretty OK with Java and comms issues).
I'm also experienced with the Microsoft programming platforms (C++ and VB).
Yes - we mirror the CVS tree on the Wiki. Bear in mind that it's very much under development so the extensive documentation may miss the odd full stop...
It's at:
http://reprapdoc.voodoo.co.nz/cvs/reprap/
I was wondering: will the final package contain adjustment controls for people whose first RepStrap machine deviates slightly from the design? Say they're unable to make the nozzle exactly the same diameter, only a similar one, or their salvaged motors run at a different speed, etc.?
Yes - we're trying to accommodate as much of that sort of thing as possible. To take your example, we seem to be converging on 0.5mm diameter nozzles. But the software that drives the machine has the width of the deposited polymer track (which is obviously dependent on the nozzle diameter) as a parameter that's set in a machine configuration file. All the high-level software works in mm (there'll be a button to convert inch STL files to mm, too), and there are factors that translate mm into machine steps. Again, these are in a configuration file. As speeds are in mm/s, they get converted automatically.
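A minimal sketch of what that configuration-file approach might look like on the Java side. The file name, property keys, and defaults here are invented for illustration; they are not the actual RepRap configuration:

    // Hypothetical example: converting millimetres to machine steps
    // using per-machine factors loaded from a configuration file.
    import java.io.FileInputStream;
    import java.util.Properties;

    public class MachineConfig {
        public static void main(String[] args) throws Exception {
            Properties config = new Properties();
            config.load(new FileInputStream("reprap-machine.properties"));

            // Per-machine parameters (names and defaults are illustrative only)
            double trackWidth = Double.parseDouble(
                config.getProperty("ExtrusionTrackWidth_mm", "0.5"));
            double stepsPerMmX = Double.parseDouble(
                config.getProperty("XAxisStepsPerMm", "40.0"));

            // All high-level work is in mm; only this last step knows about steps.
            double moveMm = 25.0;
            long steps = Math.round(moveMm * stepsPerMmX);
            System.out.println(moveMm + "mm -> " + steps + " steps, track width "
                + trackWidth + "mm");
        }
    }

Swapping in a different nozzle or motor then means editing the configuration file, not the code.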
OK, I'm going to cover as many of the log topics as possible in the few minutes I've got today.

First, some notes on using Java for any machine control system whatsoever:

Any hardware coupling must be dumped through machine code or an external binary. Although direct control options (to drivers, etc.) are possible, there are severe problems with the Java cache and with timing issues related to garbage collection and the like.

The method one could mimic is effectively similar to the sound output: notice how they buffer the sound to a signed module, then use the host system to poll. This is problematic given the excessive time needed to pre-buffer to avoid dropouts, which neutralizes sensor input.

Obviously, direct control of a Windows DLL (or the equivalent on other hosts) is more correct, but:

THE BINARY MUST ALWAYS HANDLE ALL SCHEDULING AND SEQUENCING REQUIREMENTS, INCLUDING ANY TYPE OF EXCEPTION GENERATED BY SENSORS, ALONG WITH THE VARIOUS FAILSAFE CODE REQUIRED FOR SUCH.

So, even though Java has advanced some (and yes, the K-12 project is using it to ensure distributed consoles in a classroom are always fairly up to date with the core updates: web-based, with a COM peer to the hardware on the physical control machine), Java is designed for scripting and can NOT be trusted for accuracy and timing. So offload all the timing-critical systems to a binary on the host.

The sound methods, for example, even when using a dual stereo mic/out combo for controller and sensors (related to your inductive coil positioning sensor, see below), are fine when using hardware. Be very, very concerned, though, about using any Java sound. We were able to compensate for the Java problems a long while ago, using all kinds of crude feedback methods on multiple host threads to work around threading problems in the Java runtime, but there was still error due to Java scheduling that ruled out any option of direct control.

OK, obviously you get the point: Java is good as a client-side visualization for watching things, and a binary on the host is desirable for any type of realtime requirement (hint: Java lagged over a quarter of a second in many cases, more than enough to burn down a school).

Clicking submit; tech notes on the control system in the next message. :)
Next pass:
Loading engineering files, POV-Ray, etc.

In short, computationally defined object files are technically far more accurate than slice-pattern generation and frame/triangle structures, mainly because of the ability to compute exact edge boundaries.
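To make the accuracy claim concrete, here is a toy sketch, independent of any particular slicer or file format: slicing an analytically defined sphere yields an exact boundary radius, while a faceted approximation of the same sphere is already inside the true surface before slicing even begins.

    // Toy comparison: exact slice boundary of a sphere vs. a faceted one.
    public class SliceAccuracy {
        public static void main(String[] args) {
            double r = 10.0;   // sphere radius, mm
            double z = 4.0;    // slice height, mm

            // Analytic (CSG/POV-style) definition: exact circle radius at height z.
            double exact = Math.sqrt(r * r - z * z);

            // Faceted (STL-style) sphere approximated by an n-gon cross-section:
            // the facet midpoints sit inside the true circle by a cos(pi/n) factor.
            int n = 24; // facets around the circumference
            double faceted = exact * Math.cos(Math.PI / n);

            System.out.printf("exact: %.4f mm, faceted: %.4f mm, error: %.4f mm%n",
                exact, faceted, exact - faceted);
        }
    }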
There were, at one point, a number of tools that handled the slicing of POV-Ray models, along with various others. They may have disappeared, though a few years ago there was a plugin for Rhino and other packages. As I recall it was VERY effective for doing lay-up and RP processes on complex scenes, and allowed molten glass to be piped along with polymers, resins, etc. It is no different from any other control technique, though: using the POV-based files and a number of our custom machine-control and reverse-modeling tools, it was easier to do ray-trace confirmation on crystalline structures, using a reverse POV lay-up camera technique and physical reverse modeling of the target objects, confirmed quickly against the POV distributed rendering (compensating for machine speeds at the time).

So, though most machine tools tend to prefer slicing techniques, I'll again emphasize that using materials-specific techniques is critical to controlling the quality of your output objects.

Also, FEA/materials-analysis techniques are easily accommodated in the same reverse-modeling process. Last week we reviewed a "few", well, let's just say a cubic meter, of glue lay-ups: comprehensive permutations of various materials and glue-gun combinations for the K-12 programme, specifically targeting reverse analytics on the materials' characteristics (especially mechanical), along with various control patterns and whatnot.

In short, the thing (last I heard) is sitting on the scanner waiting for more images. A few layers got reviewed, but a machine needs to be made to do more accurate stress testing, etc.

-- Dumping this log, then addressing coding and abstract machine control, to be consistent with this post ;)
The first and most critical concept needed when designing an abstract system is a complete awareness of all potential variables that could possibly be desired in any analysis or analytic process.

Though we all agree that 0.5mm nozzles are fairly effective in use (a crimped bicycle ball-pump nozzle soldered, or superglued, to the nozzle of a hot glue gun: quick-fix style), the problem is that material flow through them will always be different. Even if all devices are built with exactly the same materials, I'll emphasize that even the heating or fluid-dynamic characteristics of a different section of the same pipe will be sufficiently different to generate a noticeable variance in the output. Not to mention that hot glue and water expand at noticeably different rates over even a few meters of vertical elevation (floors in a school), never mind between you at sea level and me sitting at 3000m yesterday.

So how does one effectively handle these dynamic characteristics from the start, to avoid problems with unpredictables? What about the simple, effective use of time, rather than coding for rigid or static systems when all of them are fundamentally unpredictable? The best option is to model the tools and techniques on the most conceivably abstract system imaginable, and to code and design for fundamental dynamism, rather than ever implementing anything in code beyond an analytic framework.
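One hedged reading of that in code, with entirely made-up names and numbers: measure a per-machine correction factor at run time instead of compiling in the nominal figure.

    // Sketch of run-time calibration rather than a hard-coded constant.
    // All names and numbers here are hypothetical.
    public class FlowCalibration {
        private double flowFactor = 1.0; // start from the nominal spec

        // Update the factor from a measured test extrusion: we commanded
        // `commandedMm` of track but measured `measuredMm` on the bed.
        public void calibrate(double commandedMm, double measuredMm) {
            flowFactor *= commandedMm / measuredMm;
        }

        // Apply the learned factor to future extrusion commands.
        public double corrected(double requestedMm) {
            return requestedMm * flowFactor;
        }

        public static void main(String[] args) {
            FlowCalibration cal = new FlowCalibration();
            cal.calibrate(100.0, 104.0); // this machine over-extruded by 4%
            System.out.println("send " + cal.corrected(50.0) + "mm for a 50mm track");
        }
    }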
The ability to handle any motor or physical device depends on the ability to control it: one needs the number in the wire, and that's about it. But to model how to present that number, one must be capable of reverse-modeling the entire physical system as implemented, along with any other computational latencies (for very high-speed feedback in software), basically ignoring the textbook definition of motor or sensor or whatnot, and using a probe-and-reverse-engineer technique simply to find out which known (or closed-system-relative) values and variables are relevant to any specific elemental physical object.
In student lectures, I've found that a "crystal" is most effective for the comprehensive understanding of FEA and materials characteristics. In short, said "crystal" is capable of comprehensive containment of all definable physical structures, be they atomic-sized or an entire system (accurate through city scale).

Basically, any FEA/simulation model requires a predictive capability that assumes a material or whatnot has a specific characteristic.

Everything we are working with is completely dynamic, and we should NEVER trust a spec.
Note: the original systems were designed for the decomposition of industrial and modern trash pits (i.e. junkyards and dumps), for the purpose of extracting any possibly reusable materials, reverse-modeling EVERYTHING encountered, and running comprehensive analytics on the viability of using it to further said processes.

Until a major fight with various American national-security entities about the ability to effectively identify, contain, and out-process hazmat materials, the system was intended to find any object, then find and model its sub-components and capabilities. If it's a watch, the gearing is very simple to look at and reverse. A TV, composed of marked components, might differ from others of the same brand and model, but likely SHOULD have similar operational signatures, so probing it and all its parts is quite easy to do. In a broken TV, likely only one piece is deficient in operation, meaning the vast majority of macro components are recoverable, if not operational. Obviously a VCR has some buttons on the front that control a few very basic motor functions; tying a string to it and making use of it requires nothing more than an AC plug and the ability to poke it. Conversion of viable components is easily optimized when you analyse the entire assortment from a no-prior-knowledge basis, assume it is foreign, and know that atomic physics tends to be predictable.
Well...

Let's just say we transitioned to archaeology in order to make more effective use of the sensors, to gain a more effective educational/academic and intellectual gateway (not least to avoid further fights with various defence and intelligence operations), and thus to build the basis for implementing distributed capabilities in a global K-12 environment...
So: glue guns, and reverse modeling of both your calibration output and any more complex system.

What does it require?

The understanding that any variable could be dynamic. Thus, assume all of them are, and assume that reverse-modeled torque-to-voltage ratios on a motor are always more accurate than the specs.

The core binary should be nothing more than a simple, dynamically defined containment and sequencing engine, which handles any abstract method or approach (reverse-modeling a VCR is the same as reverse-modeling the abstract concept of wanting something to do something, which neutralizes the need to code it rigidly), and simply uses the known limitations (namely, wires that carry numbers) to define the system's structure.
...
In short, while I'm short on time: all that needs to be built for this core is the code that handles dynamic definitions of data sets, be they variables or mathematical functions, interpretations of third-party source code or functions, or the reverse model of a tool that seems to do a decent job and was engineered to do so.
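A very small sketch of what "dynamic definitions of data sets" could mean in Java, with everything here invented for illustration: each quantity is a function looked up by name at run time, whether it comes from a spec, a measurement, or a derived formula, so nothing is compiled in rigidly.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.DoubleSupplier;

    // Sketch: a containment/sequencing core where variables and functions
    // are dynamically defined entries, not hard-coded fields.
    public class DynamicCore {
        private final Map<String, DoubleSupplier> defs = new HashMap<>();

        public void define(String name, DoubleSupplier source) {
            defs.put(name, source); // a constant, a sensor read, or a formula
        }

        public double value(String name) {
            return defs.get(name).getAsDouble();
        }

        public static void main(String[] args) {
            DynamicCore core = new DynamicCore();
            core.define("nozzle.diameter_mm", () -> 0.5);          // a "spec"
            core.define("motor.torquePerVolt", () -> 0.465);       // a measured value
            core.define("track.width_mm",                          // a derived function
                () -> core.value("nozzle.diameter_mm") * 1.1);
            System.out.println(core.value("track.width_mm"));
        }
    }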
All the same. More later...
-Wilfred
Wilfred@Cryogen.com
It is possible to write timing-sensitive Java code, but it requires a bit of care. Offloading work to binary drivers is not necessary, however, as we can make the comms sufficiently robust.
The trick is that we have to use synchronisation for the timing-sensitive parts anyway. This is handled inside the RepRap. Our Java code just needs to send atomic operations to the devices in the field.
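A rough sketch of that division of labour, with the transport left abstract (the streams would come from whatever serial library the host uses, and the packet format below is invented for illustration): the host writes one complete command and blocks for the acknowledgement, while all fine-grained timing lives on the far side of the wire.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Sketch: send one self-contained command and wait for its ACK.
    // The device firmware, not the JVM, handles real-time sequencing.
    public class AtomicCommandLink {
        private final OutputStream out;
        private final InputStream in;

        public AtomicCommandLink(OutputStream out, InputStream in) {
            this.out = out;
            this.in = in;
        }

        // `payload` is a complete operation (e.g. "move axis X by N steps").
        public synchronized void send(byte device, byte[] payload) throws IOException {
            out.write(device);
            out.write(payload.length);
            out.write(payload);
            out.flush();
            int ack = in.read();     // block until the device confirms
            if (ack != 0x06) {       // 0x06 = ASCII ACK, chosen arbitrarily here
                throw new IOException("device " + device + " rejected command");
            }
        }
    }

Because each command is atomic and the method is synchronized, GC pauses in the host can delay a command but never corrupt one.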
As an aside, I think binary drivers would scare the bejesus out of too many potential developers :)
Vik :v)
I'm going to have to go with Vik on this one. I must respectfully disagree that Java was designed just for scripting. A properly designed and compiled Java program should be plenty robust enough to handle the timing cycles. Java was originally designed for use in microwaves, television sets and refrigerators (I think it was called Oak back then), and it was the sudden explosion of the web, which got it ported to the browser environment, that gave it a questionable reputation. JavaScript and Java applets are completely different from Java programs. The Java VM is actually quite slick and capable so long as its behaviour is tested and standardized, something which will be necessary with any new variation of RepRap anyway.

I think you're going to choke on details if you try to constrain this system to a machine-coded driver set. What happens when someone comes along with a different machine? Someone will have to rewrite the machine-code drivers every single time a different type of machine is used. Whereas a simple testing feedback loop through the Java VM can establish operational parameters which then allow the correct settings. Once standardized (so long as you aren't also editing a text or doing something else with the CPU), the behaviour of the VM would be constant.
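For what it's worth, the testing feedback loop described above is easy to sketch generically (this is not from the RepRap code): time the VM's own scheduling against the wall clock and derive a safety margin from the worst observed jitter.

    // Sketch: measure JVM scheduling jitter to establish operational parameters.
    public class JitterProbe {
        public static void main(String[] args) throws InterruptedException {
            final long intervalMs = 10;
            long worstJitterNs = 0;

            for (int i = 0; i < 500; i++) {
                long before = System.nanoTime();
                Thread.sleep(intervalMs);
                long actualNs = System.nanoTime() - before;
                long jitterNs = Math.abs(actualNs - intervalMs * 1_000_000L);
                worstJitterNs = Math.max(worstJitterNs, jitterNs);
            }

            // Any deadline tighter than this margin must live in firmware,
            // not in the Java host.
            System.out.println("worst jitter: " + worstJitterNs / 1_000_000.0 + " ms");
        }
    }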
Machine coded drivers don't scare me, they're just irritating like poison oak.