From: Paulo E. C. <pau...@gm...> - 2013-04-22 13:48:46
> The win32 version builds from .sln/.vcproj build files.
> I'll reserve comment on the linux build except to say recent
> activities on the win32 build could have broken something. On win32 I
> just build what's in freewrl/freex3d/src /lib and /bin and /libeai.

The patch named "make it build 2" is related to a Win32 change that
broke the Linux build. I use Linux, Fedora to be precise.

> > And you try to find some understanding to how it all fits together
> > and there's none, or it is so scattered that it is easy to lose
> > track of it.
>
> It follows www.web3d.org specs, and in general VRML browsers work
> like game engines:
>
> parse input files
> fetch image resources
> Loop:
> - get input from mouse, keyboard
> - get time delta since last frame
> - run scripts based on input and time
> - update geometry based on scripts, time and input
> - render geometry to frame
> end loop

> > - Documentation briefly (optionally as comprehensive as possible)
> >   describing the contents of each directory.
>
> I'm not familiar with all the directories myself.
> And some directories no longer adhere to their original intent - they
> are what I'd call fuzzy categories, not precise. To get precise I
> build and run and trace the code.
> If we do a big refactoring, we can perhaps refactor into better
> category directories in the process.
>
> Here's the ones I know:
> freewrl/freex3d/codegen - perl files to generate GeneratedCode.c and
> Structs.h node and field plumbing - only run perl after an infrequent
> change to the node or field structs, check in the GeneratedCode.c and
> Structs.h.
> freewrl/freex3d/projectfiles_vc9 and _vc10 - win32 MSVC .proj and
> .sln build files
> freewrl/freex3d/src - cross platform source code files for:
> /libeai - in web3d.org specs there's something called External
> Authoring Interface (EAI) that allows scripting via port communications
> /java - for SAI or Scene Authoring Interface, written in java
> /bin - main.c console app - calls lib
> /libtess - used to tessellate (triangulate into mesh) truetype font
> polygon outlines, called by /lib
> /lib - libfreewrl library called by main.c and other gui apps
> depending on platform
>
> Within /lib the directories are fuzzy categories, not strict, and I
> have only fuzzy ideas about what the category was originally meant
> for - so check the code to see specifically.

Thanks for these. I'm collating all of these in a file to submit as a
documentation patch.

> Q. If you had to decide between better documentation or better code
> and directory structure, which would you choose?

I would choose both. They're not mutually exclusive. The point is to
have things leaner so that code doesn't rot so fast...

> The 'agile' / eXtreme programming school of thought skips the
> documentation and goes for good structure. A problem with detailed
> documentation: it can get stale, and keeping it up slows down
> structural refactoring. And assume for a moment no one knows exactly
> what a function is supposed to be doing. Then no one can properly
> document it. That's pretty close to where we are with freewrl.

I'm not suggesting one documents every line of code. But a simple,
short description of what a function is supposed to do, of its
purpose, is helpful. That documentation only rots when the function is
no longer necessary, in which case one should remove it from the code.
> > On the other hand, before being able to wisely refactor code, you
> > would need to have some fuzzy idea what it is supposed to do,
> > perhaps through guessing or debugging sessions, and looking where
> > and how it is used, at least enough to form a hypothesis about
> > what it's doing.
>
> I'm used to this.

Debugging takes a long time, and if one is only collecting that
information and intuition in one's brain, then that information will
easily be forgotten and not shared with the project.

> The Agile school sets up lots of tests - automated, so they can run
> them quickly between changes - and if they break tests when
> refactoring, they roll back. When I do testing there are 50 or so
> .wrl/.x3d test files I run.
> http://dug9.users.sourceforge.net/web3d/tests/screenlinks.html
> Plus depending on what I'm tinkering with, I'll set up special
> .wrl/.x3d test files, and I've accumulated some samples people have
> sent in.

Can you describe your testing process a bit better? Do you mean you
manually open the files and play with them to assert the validity of
the code? Is there a specific set of validation rules you use/follow?

> I think there are different names for different levels of tests. A
> unit test is supposed to be for a small, bite-size module, class or
> even a function, I think. Functional tests are more from the outside
> looking in, as a user would see it. I think the 50 wrl tests would
> be called functional tests.
> A problem with unit tests: if you're going to refactor your modules
> and functions, then the unit tests would need to change a lot in the
> process to keep up. But functional tests as seen from the outside
> should be stable. So if I was setting up automated tests before
> major refactoring I'd want functional tests, especially for freewrl
> which is mostly working well and free of bugs, as seen from the
> outside, but is "hell's kitchen" on the inside. Or perhaps unit
> tests could be generated from perl.
The kind of testing I was talking about is more akin to unit tests.
But these need not be written for every single module, class or
function. You can write tests at different stages of your development:
a) writing a new function
b) amending an existing function
c) preparing for some refactoring work
d) because there are none
e) because you're bored and you feel like it

Unit tests can be as simple as throwing some data into a function and
validating that the output conforms with whatever one should expect.
This means that yes, your tests need to be tended to, but not to the
extent you're talking about. If I refactor a given function to be
slightly faster, or to do things differently, or to use a different
API, etc., all I need to validate is that at the end of my work the
function is still giving me the same expected reply to a given input.
If it's not, then either the rewrite is bad, or most likely the
function is a totally different new thing that should have a new name,
perhaps be located in some other place, and have a test of its own. In
a nutshell, I don't need to also rewrite my tests for this function. I
should be able to extend them, though. If I'm considerably rewriting
my tests for a given function, that's usually a code smell indicating
that something is wrong.

> Having said that, some contextual and overview documentation would
> likely be stable and helpful to everyone. After a major refactoring,
> the code might be stable enough to make more refined granularity of
> documentation usable.

Documentation in my experience is usually best done as you go along...
So every time you're amending some code you could check if that
function is well described within your understanding of it. Please
have a look here for an example:
http://finsframework.org/mediawiki/index.php/Short_Tutorial_on_Doxygen
At the end of the document there's a link to some generated
documentation.
One only needs to agree on a format, then add the config file to the
project and include some build commands in the makefile. After that
it's only a matter of adding to the pile...

> > - Some contextual explanation of the purpose of the codegen/
> >   folder and how to run it.
> >   - Why it exists
> >   - What it generates and why
> >   - How it runs... it seems you're supposed to pipe something into
> >     perl -MVRMLC ? Is that written somewhere?
>
> VRML has 2 main entity types: nodes and fields. There's a lot of
> repetitive plumbing for each of those types, and for each specific
> node and field type. When changing the system occasionally - very
> infrequently - if a generic field struct or node struct needs to be
> changed, instead of changing it repetitively in hundreds of node or
> field structs, it's changed on one line in a codegen/ perl file, and
> the perl is re-run to generate all the repetitive plumbing code.
> To re-run the perl, in windows I put it in a .bat file in /codegen,
> install perl somewhere, then run the .bat, which looks like this:
>
> E:/Perl/bin/perl.exe VRMLC.pm
> pause
>
> That's it. It does its own running of the other perl .pm files.
> It outputs
> freex3d/lib/scenegraph/generatedCode.c and
> freex3d/lib/vrml/structs.h and
> freex3d/libeai/GeneratedCode.c and GeneratedHeaders.c
> So these output files should never be edited directly (although I
> have, for a throw-away proof-of-concept test, before editing the
> perl).

I'm also adding these for a documentation patch.

> > - Consider moving to at least SVN, preferably Git|Mercurial
> >   - CVS is old, cumbersome and hard to enable collaboration of
> >     multiple people, basically everything is hard with CVS...
>
> Agreed. Next time we change repositories.
> Q. Which is easier for newbies to learn? SVN or Git? I've never used
> git but it seems more complicated, with local and main repositories,
> and branches. Which do you recommend for us? Why do you use Git?
> Is there a Git client for all desktop platforms: osx, linux, win32?
> Are all git tools compatible? Does sourceforge support Git, or is
> Git a commercial effort in competition with sourceforge - would we
> be forced to move?

Git is all the rage these days, and so is every other distributed VCS.
Git only seems complicated because it's light years away from CVS and
SVN, given the immense power that it puts in people's hands. In fact
it's not complicated at all. I use git because its model fosters
distributed development; due to its internal architecture, merge
conflicts are less frequent than with CVS/SVN, and all the pain that
used to come with using those tools is not there with git. Git is open
source, and there are tools for all the common OSes. SourceForge
supports git, and there's even already a git placeholder for FreeWRL
here:
http://freewrl.git.sourceforge.net/git/gitweb.cgi?p=freewrl/freewrl;a=summary
The repo only needs to be converted and imported. I can help with
that. Someone did a CVS export and git import some years ago; you can
check it here for clues on how the repo, commits etc. will look:
https://github.com/cbuehler/freewrl

Some reading suggestions:
http://git-scm.com/book/en/Getting-Started-Git-Basics <- Very good reference
https://www.kernel.org/pub/software/scm/git/docs/v1.4.4.4/tutorial.html
http://byte.kde.org/~zrusin/git/git-cheat-sheet-medium.png

> It sounds like you are a code-structure kind of guru, so some of
> your wishlist and complaints could be rolled into self-initiated
> projects on that theme.

Not at all, just learning as we go along... Perhaps in my original
post I came across with some bluntness... I suppose my point was that,
even though people might be pushing, heading and implementing in
different directions as you say, agreeing on some consistency across
the board could be a necessary good thing.

> A funny story I heard: freewrl was originally developed in perl.
> It ran too slow, and was converted to C.
> But somewhere in all of its organization and structs are the seeds
> of its history.

Yes, the first commit from rcoscali in 2000 was all about some
documentation and Perl files.