From: Marc l. <mar...@ya...> - 2009-07-23 21:40:37
> Hi Marco,
>
> Sounds interesting, although it is like an aggregation
> mechanism once the reports are generated, right?
>
> StatCVS can export things in XML, by the way, so you may
> be able to write some code to read n XML files and do your
> aggregations that way.
>
> In any case, do send us the info / scripts.
>
> Thanks

Hi Benoit,

You are right, it aggregates projects. I think of it as a kind of zoom out, for managers: they are mostly interested in what their developers have been doing recently. Developers, on the other hand, zoom in, down to the differences between file versions. A different audience, at least here where I work.

I agree that XML would be better than parsing HTML files, but I had to do it in a hurry and didn't know how to parse XML from bash (I'm learning the Groovy shell right now). BTW, it would be cool if we put the CVS info in a little DB (H2 comes to mind) so that we could just select our reports away. See below for some crazy ideas.

I'm sending the hacks (err... scripts) to save them on the internet; I'm always formatting my drives here :-).

That's it. Hope they are useful.

Thanks,
Marco "Linux" Antonio de Sousa
Linux/Java developer, Brazil

SCRIPT DETAILS

One script tries to find any user_*.html file and generates CSV files based on that. E.g., given:

  projectM/user_marc.html
  dir/projectN/user_marc.html ; user_ann.html

it generates:

  csv.csv:
    projectM marc
    dir/projectN marc,ann

  marc.csv:
    projectM
    dir/projectN

  ann.csv:
    dir/projectN

It is pushing the limits of my (limited) shell programming capabilities :-). A DB here would be *way* better.

From those CSV files, another script generates the HTML index.

The scripts work well with my CVS server, but YMMV. I wrote them with maintainability in mind, not efficiency; that's why I used so many functions. They make it easier to test and to separate concerns.

Requirements: bash (and find, date, cron, etc.), so I think any Linux box will do, plus Java for StatCVS.
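The find-and-aggregate step described above could be sketched roughly like this. This is not the actual script, just an illustration of the idea; the function name `aggregate_users` and the output format (tab-separated instead of the CSV files above) are my inventions:

```shell
#!/bin/bash
# Hypothetical sketch (not the actual script): find every user_*.html
# StatCVS report under a root directory and print one
# "project<TAB>user1,user2,..." line per project.
aggregate_users() {
    root=$1
    find "$root" -name 'user_*.html' | while IFS= read -r f; do
        dir=$(dirname "$f")            # e.g. $root/dir/projectN
        u=$(basename "$f" .html)       # e.g. user_marc
        printf '%s\t%s\n' "${dir#"$root"/}" "${u#user_}"
    done | sort | awk -F'\t' '
        # collect a comma-separated user list per project
        { seen[$1] = ($1 in seen) ? seen[$1] "," $2 : $2 }
        END { for (p in seen) printf "%s\t%s\n", p, seen[p] }'
}
```

Usage would be something like `aggregate_users /path/to/reports > csv.csv` (the path is of course just a placeholder).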
INSTALL/USE

Edit the scripts and configure Java, the HTTP dir, and your projects/repos. Run them every (day|week) with cron, then browse to http://your.server/http-dir/ and enjoy.

Put them on cron (the user must have CVS checkout rights):

  $ crontab -l
  HTTP_DIR=/opt/IBM/HTTPServer/htdocs/en_US/cvs-stats/
  4 4 * * * /opt/cvs-stat/cvs-stat.sh > /tmp/cvs-stat.log 2>&1 ; mv /tmp/cvs-stat.log $HTTP_DIR ; /opt/cvs-stat/makeIndex.sh > $HTTP_DIR/makeIndex.log 2>&1

In this example the job runs every day at 04:04 am (every week should do it, but the machine is idle anyway).

SOME CRAZY IDEAS

StatCVS could insert data into a local DB (say, H2). That way we could ask things like:

  select * from projects
  select users from project where project like foo
  select commit from project order by date
  select project from repositories where projName like "myProj"
  select commit-date from projects order by developer

The scripts could also start a local web server (Jetty) to avoid setting up an Apache server.
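For what it's worth, the H2 idea above could be driven from the shell via H2's bundled command-line Shell tool. This is only a sketch under assumptions: the table and column names (commits, project, developer, commit_date) are invented, and it presumes an h2.jar is at hand:

```shell
# Sketch only: create a local H2 database and run ad-hoc SQL against it.
# The schema is hypothetical; h2.jar location is an assumption.
java -cp h2.jar org.h2.tools.Shell \
    -url jdbc:h2:./cvs-stats -user sa -password "" \
    -sql "CREATE TABLE IF NOT EXISTS commits(project VARCHAR(255), developer VARCHAR(255), commit_date DATE);
          SELECT project, developer, COUNT(*) AS n FROM commits GROUP BY project, developer;"
```

That would let the aggregation become a GROUP BY instead of a pile of find/awk plumbing.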