Techne 3D ReadMe

Currently in Pre-Alpha. This means it's only starting to do useful things.

Version 0.0.0.13
Redid the zMap merge module. The new method is more accurate and eliminates any potential simultaneous memory writes between threads. Found an error in the way we compute the inlay style. The problem is that the current method only works for well-defined models; if they have internal vertices the toolpath isn't valid. Redid the zMap generation for when a roughing toolpath leaves extra material for the finish passes, but have disabled the new algorithm for now. The issue is that if a model has gaps, doing a true expansion of the model can widen those gaps enough that the tool will move down into them. Not sure if there is any automated way to ensure models are solid, so for now we simply shift the zMap up along the Z axis to create the extra material (sketched below these notes). Otherwise the toolpaths are looking pretty good.

Version 0.0.0.12
Previous versions may have created a file h:\bottle.xyz and left it undeleted. The basic toolpath logic generates collinear points; we now test for this and remove many of them. Still getting some in the final file, but this is because we now only write a coordinate for an axis whose position actually changes (a filtering sketch appears below these notes). Previously you could have

G01 X10 Y10
G01 X11 Y10

which is now more likely to be

G01 X10 Y10
G01 X11

But we are still getting

G01 X10
G01 X11
G01 X12
G01 X14

which should optimally be

G01 X10
G01 X14

Version 0.0.0.11
Optimized toolpaths. Have noticed that previewing complicated toolpaths uses huge amounts of memory and can cause an out-of-memory error. Debating whether having a toolpath preview is worthwhile given how dense the toolpaths are and how much overlay there is.

Version 0.0.0.10
Found an error in how collision between the tool and the model is detected. The toolpath looks reasonable.

Version 0.0.0.9
Corrected the stepover value of the toolpath written to files. Comparing the part preview against the GCode loaded into another program's previewer shows them looking similar. However, the result has lumpy areas that should not be there. Have not tried to run the actual GCode on a CNC machine.

Version 0.0.0.8
Can write toolpaths to files. Still have not implemented the cutout option. Be very careful using these paths. Have not implemented saving a design.

Version 0.0.0.7
The major design styles all work, including dish. Toolpath generation seems to be working, including a separate roughing pass. Changed the UI so that the preview is not automatically and continuously generated; there is now a button to start the computation. Not loving this approach, it's a little confusing, but the calculations are so long and intensive that having them always running is iffy.

Version 0.0.0.6
More UI improvements. Speed improvements.

Version 0.0.0.5
Updated the UI a little, show/hide toolpath kinds of things. Speaking of toolpaths, it will now display the finish toolpath. Actually, calculate it too. This is the full toolpath, with increasing cut depths per pass. Also included the logic for determining when a section has been machined to its final depth, so that the area can be skipped (see the sketch below these notes). This is particularly important as it is the whole point behind using roughing toolpaths. The last big job is the logic for figuring out the material remaining from one toolpath (or tool) to another. The concept is pretty straightforward, so I am optimistic it won't be too bad. Some speed increases too. The calculations for the initial toolpath remain intense. A highly accurate toolpath will be very time consuming to calculate.
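The 0.0.0.13 workaround of shifting the zMap up to leave stock for the finish passes can be shown with a minimal sketch. This is not the Techne 3D source; the grid size and the finishAllowance value are made up for the example.

    // Minimal sketch only: leave stock for the finish passes by raising every
    // zMap height by the finish allowance, rather than expanding the model.
    #include <vector>

    int main() {
        std::vector<double> zMap(512 * 512, 0.0);   // hypothetical 512x512 grid of surface heights
        const double finishAllowance = 0.5;         // extra material to leave, in model units (made-up value)

        for (double& z : zMap)
            z += finishAllowance;                   // roughing now stops this far above the final surface
        return 0;
    }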
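Here is a rough sketch of the kind of filtering described under 0.0.0.12: drop interior collinear points, then write G01 moves that only mention axes whose coordinate changed since the previous move. The Point struct, the tolerance and the sample path are assumptions for the example, not the actual Techne 3D code.

    // Sketch only: collinear-point removal plus axis-change-only G01 output.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { double x, y, z; };

    static bool collinear(const Point& a, const Point& b, const Point& c, double tol = 1e-9) {
        // Cross product of (b-a) and (c-a) is ~zero when the three points are collinear.
        double ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;
        double vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;
        double cx = uy * vz - uz * vy;
        double cy = uz * vx - ux * vz;
        double cz = ux * vy - uy * vx;
        return std::sqrt(cx * cx + cy * cy + cz * cz) < tol;
    }

    int main() {
        std::vector<Point> path = { {10,10,0}, {11,10,0}, {12,10,0}, {14,10,0}, {14,12,0} };

        // Pass 1: remove interior points that lie on the line between their neighbours.
        std::vector<Point> filtered;
        for (size_t i = 0; i < path.size(); ++i) {
            if (i > 0 && i + 1 < path.size() && collinear(filtered.back(), path[i], path[i + 1]))
                continue;                       // middle point adds no information
            filtered.push_back(path[i]);
        }

        // Pass 2: write G01 moves, mentioning only the axes whose value changed.
        Point last = { 1e99, 1e99, 1e99 };      // sentinel so the first move writes all axes
        for (const Point& p : filtered) {
            std::printf("G01");
            if (p.x != last.x) std::printf(" X%g", p.x);
            if (p.y != last.y) std::printf(" Y%g", p.y);
            if (p.z != last.z) std::printf(" Z%g", p.z);
            std::printf("\n");
            last = p;
        }
        return 0;
    }

Running this on the sample path prints "G01 X10 Y10 Z0", "G01 X14", "G01 Y12", matching the optimal pattern described above.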
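The 0.0.0.5 check for skipping areas already at final depth might look something like the following. ZMap, segmentFinished and the tolerance are hypothetical names; it assumes the current stock and the target surface are both held as height grids.

    // Minimal sketch: a stretch of a pass can be skipped when the stock is
    // already within tolerance of the final surface along that stretch.
    #include <vector>

    struct ZMap {
        int nx, ny;
        std::vector<double> z;                              // row-major grid of heights
        double at(int ix, int iy) const { return z[iy * nx + ix]; }
    };

    bool segmentFinished(const ZMap& stock, const ZMap& target,
                         int iy, int ix0, int ix1, double tolerance)
    {
        for (int ix = ix0; ix <= ix1; ++ix)
            if (stock.at(ix, iy) - target.at(ix, iy) > tolerance)
                return false;                               // material still sits above the final depth here
        return true;
    }

    int main() {
        ZMap stock{4, 1, {1.0, 0.2, 0.0, 0.0}};
        ZMap target{4, 1, {0.0, 0.0, 0.0, 0.0}};
        return segmentFinished(stock, target, 0, 2, 3, 0.01) ? 0 : 1;  // cells 2..3 are done, 0..1 are not
    }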
Version 0.0.0.4
Lots of progress. It now displays the 3D Part Preview. Also cleaned up a few issues regarding setting options for the part. Two big things:

First, in the Techne 2D project I use adaptive triangle mesh generation for the part. You start with a 2D array of Z values that represents the part after being machined; the array index positions correspond to XY values. The mesh algorithm compares Z values and tries to make large triangles where the Z is constant. The idea is to have a coarse mesh in flat areas and a finer mesh where needed. It works, but it is somewhat CPU intensive and also has boundary problems. For Techne 3D that method is replaced with a fixed-density mesh. Graphics cards have so much memory now that running a high-density mesh isn't much of a problem. The creation is much faster and there are potential benefits down the road, such as having a real-time preview of the part as the tool moves along it.

The second thing is the near completion of the logic for determining the final toolpath. I am using a fairly brute-force intersection testing method: create a model of the tool, then bash it along the Z axis until it hits something. Move to the next spot, repeat. It's very, very CPU intensive (see the sketch at the end of these notes). The method uses a zig-zag toolpath. The user specifies how far to move the tool (width of cut) on the horizontal pass. The challenge is coming up with a good number for the vertical pass. You can increment by as little as the machine tolerance, which is what will be used as the default. For now I will just allow the user to specify an alternative value if they like.

Version 0.0.0.3
Added the ability to figure out the optimum amount of material to remove from the part in the Z axis (Cut Depth). Surprisingly simple concept but lots of pitfalls in the implementation, which uses the DirectX math library. Very CPU intensive, woohoo.

Version 0.0.0.2
The part & toolpath configuration dialogs are implemented. Some bug fixes in the model view window.

Version 0.0.0.1
The tool database, machine configuration and material database work. Will synchronize with the Techne CAD application versions. Can load multiple models into one larger model. Implemented scaling along different axes. Implemented rotation of the model (not just rotation of the model view). Zoom, pan and view rotation work for the model view.

Version 0.0.0.0
This version only loads and displays an STL model. Rotation works, though. Supports the loading and display of STL-based models. Because our algorithms will ultimately depend on knowing surface normals and weighted vertex normals, we need to compute them during model loading. For memory efficiency we also wanted to know which vertices are shared. This was initially very computationally expensive, so a multi-threaded load process was implemented (see the sketch at the end of these notes).
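A rough sketch of the brute-force drop-along-Z idea from 0.0.0.4, simplified to a flat end mill over a height grid. ZMap, dropTool, the cell size and the sample surface are all made up for illustration; the actual code builds a model of the tool and tests intersections, so treat this only as the general shape of the idea.

    // Sketch only: at each XY stop on a zig-zag pass, "drop" the tool along Z until
    // it rests on the highest surface point under its footprint.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct ZMap {
        int nx, ny;
        double cell;                 // grid spacing in model units
        std::vector<double> z;       // row-major surface heights
        double at(int ix, int iy) const { return z[iy * nx + ix]; }
    };

    // Lowest Z the tool can reach at (cx, cy) before its flat bottom touches the model.
    double dropTool(const ZMap& m, double cx, double cy, double toolRadius)
    {
        double rest = -1e99;
        int r = static_cast<int>(std::ceil(toolRadius / m.cell));
        int icx = static_cast<int>(cx / m.cell), icy = static_cast<int>(cy / m.cell);
        for (int iy = std::max(0, icy - r); iy <= std::min(m.ny - 1, icy + r); ++iy)
            for (int ix = std::max(0, icx - r); ix <= std::min(m.nx - 1, icx + r); ++ix) {
                double dx = ix * m.cell - cx, dy = iy * m.cell - cy;
                if (dx * dx + dy * dy <= toolRadius * toolRadius)  // cell lies under the footprint
                    rest = std::max(rest, m.at(ix, iy));
            }
        return rest;
    }

    int main() {
        ZMap m{100, 100, 0.1, std::vector<double>(100 * 100, 0.0)};
        m.z[50 * 100 + 50] = 2.0;                                          // a single bump in the surface
        std::printf("tool rests at Z=%g\n", dropTool(m, 5.0, 5.0, 0.5));   // bump is under the footprint
        return 0;
    }

A ball-nose or tapered tool would add the cutter profile height to each sampled cell instead of taking the raw maximum.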
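Finally, a minimal sketch of the shared-vertex and weighted-vertex-normal bookkeeping described under 0.0.0.0. It is single-threaded and uses an exact-position map for deduplication; Mesh, addTriangle and the sample triangles are assumptions for the example, not the actual multi-threaded loader.

    // Sketch only: deduplicate shared vertices and accumulate area-weighted vertex
    // normals while walking the triangle list of an STL model.
    #include <cmath>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                                a.z * b.x - a.x * b.z,
                                                a.x * b.y - a.y * b.x}; }

    struct Mesh {
        std::vector<Vec3> vertices;                       // unique vertices
        std::vector<Vec3> normals;                        // accumulated per-vertex normals
        std::vector<int>  indices;                        // 3 per triangle
        std::map<std::tuple<double, double, double>, int> lookup;  // position -> vertex index

        int addVertex(Vec3 p) {
            auto key = std::make_tuple(p.x, p.y, p.z);
            auto it = lookup.find(key);
            if (it != lookup.end()) return it->second;    // shared vertex already stored
            lookup[key] = (int)vertices.size();
            vertices.push_back(p);
            normals.push_back({0, 0, 0});
            return (int)vertices.size() - 1;
        }

        void addTriangle(Vec3 a, Vec3 b, Vec3 c) {
            // The un-normalized cross product has length 2*area, so summing it
            // area-weights each face's contribution to its three vertex normals.
            Vec3 n = cross(sub(b, a), sub(c, a));
            for (Vec3 p : {a, b, c}) {
                int i = addVertex(p);
                indices.push_back(i);
                normals[i].x += n.x; normals[i].y += n.y; normals[i].z += n.z;
            }
        }

        void finish() {                                   // normalize once all triangles are in
            for (Vec3& n : normals) {
                double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
                if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
            }
        }
    };

    int main() {
        Mesh m;
        m.addTriangle({0,0,0}, {1,0,0}, {0,1,0});
        m.addTriangle({1,0,0}, {1,1,0}, {0,1,0});         // shares two vertices with the first
        m.finish();
        return 0;
    }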