callistocms-devel Mailing List for Callisto Content Management System
Brought to you by: nachbaur
Archive: July 2002 (8 messages), March 2004 (1 message)
From: Michael A N. <mi...@na...> - 2004-03-02 21:18:04
Just to keep everyone interested apprised of goings-on within CallistoCMS, I'm currently working on extending the user interface with the addition of client-side Mozilla support. This has several repercussions, but I think the benefits are well worth it.

First, Internet Explorer will no longer be supported as a client platform. I've tried to find an elegant way around this, but as it is, with every new release of Internet Explorer something breaks in my DHTML code. Microsoft just doesn't adhere to standards closely enough for me to continue supporting it. IE will continue to be supported in the near term, but eventually support will be dropped.

Second, Callisto will function as an XPI-installed browser extension like Venkman or ChatZilla. This is necessary due to the security constraints imposed on remote XUL applications (the same constraints that are applied to JavaScript in HTML pages). The benefits of this are pretty impressive, and though I haven't completed a full list of the features that will be added as a result, the following will have to do for now:

- Better perceived client performance
- Drag-and-drop management of content
- Inline content editing (i.e. no pop-up text field)
- Uploading content (images, PDFs, etc.) from a user's computer
- Much richer file linking, navigating, editing, etc.

For more information about all this, take a look at the pages at http://callistocms.com/development/ (in particular the "Use Cases" and "RDF Content Editing" sections). Some ideas have also been jotted down at http://nachbaur.com/software/callisto.xml, though those are a bit older.

If anyone wants to discuss these or any other Callisto development ideas, feel free to reply, or chat online at #callisto on irc.perl.org.
From: Peter F. <pet...@zv...> - 2002-07-05 03:39:06
On Fri, 5 Jul 2002 01:45, you wrote:
> Ah, good point. AFAIK Callisto *should* run fine on a Win32 system, as
> long as all the necessary Perl dependencies are handled. All paths
> originate from the configuration files, so that should be fine. I may
> have to switch "/"'s into "\"'s in some cases though.

That's great news. I was hoping that it didn't have too much platform dependence built in.

> Well, another idea I've had is to use CVS to deploy files. I'm right
> now adding some initial development code to make the whole shebang
> built on CVS, so all content and configuration file changes are
> versioned. When a deployment is made, the files are tagged, and those
> files are then copied.

Even better news. I wondered if CVS would fit as a deployment system - I have some concerns (see below).

> It might be possible (but probably pretty slow) to just do a "cvs
> update" or "cvs export" on the production directory, and snag the files
> straight from the CVS repository.

You're right - it would chew a lot of bandwidth and time propagating the site that way under some scenarios (lots of small changes across the site done by some automated patching process, for example), though it would be faster than your current deploy system. One thing I found is that if your web LAN's speed is similar to the size of the internet pipe, background synchronisation could result in noticeably poorer response times. We have been stuck on a 10Mbs LAN for a while, with an 8Mbs internet pipe. Once we are allowed to upgrade to 100Mbs, all sorts of server-to-server background traffic can go on with impunity, but in the meantime circumstances demand that server-to-server bandwidth consumption is minimised.

> This would also have an additional benefit that web designers /
> developers could just checkout a copy of the website through CVS, edit
> images and stylesheets on their local machines, and can then just check
> the changes back in.
>
> All that would be needed, would be for either:
>
> a) a piece of server code to reside on the target webservers that can
> be told by the CMS server when to grab a fresh image of the site, what
> release tag to grab, and could then tell the CMS if the deployment
> failed (allowing the CMS server to then tell all the other webservers
> to rollback to previous versions so there isn't any inconsistency in
> the server farm).

Umm, this seems to imply you are thinking of PULLING the site changes across. This won't be a viable approach if your staging and public webservers are separated by some firewall(s) maintained by a security admin worth his salt - he'll want to avoid punching holes through into the inner sanctum at all costs. PUSHING content out from the master site will always be much more security-policy friendly, which makes SOAP combined with WebDAV a big winner in this scenario: HTTP(S) is almost always available, whereas SSH/RSH/FTP/CVS are quite often not, even for outgoing connections (this is often due to f/w inadequacies and/or lack of admin skills).

Nice idea on server farm consistency! My system's rsync deployment has no contingency for detecting inconsistency across slave servers, or for immediate recovery from such a failure. I'd like to be able to deploy subcomponents and/or directory hierarchies of a site, and also specific files, as the site changes often and has multiple authors, so a full-site deployment could deploy unfinished work.

> or b) a script that can reside on the target servers that can be
> invoked by the CMS server via SSH.

Definitely better to go the SOAP route if RPC-style control mechanisms are needed. You don't have to reinvent a lot of wheels, and you can more likely rely on having an HTTP(S) transport method available than anything else, as well as not having to worry about whether ssh is available on all platforms concerned (a sketch follows this message).

Peter

--
Peter Farmer | Custom XML software | Internet Engineering
Zveno Pty Ltd | Website XML Solutions | Training & Seminars
http://www.zveno.com/ | Open Source Tools | - XML XSL Tcl
Pet...@zv...
Ph. +61 8 92036380 | Mobile +61 417 906 851 | Fax +61 8 92036380
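The push-style RPC Peter favours is easy to picture with SOAP::Lite, the Perl SOAP toolkit of the era. A minimal sketch, assuming a hypothetical "deploy" method and service namespace on each target; this is not actual Callisto code, and the endpoint URLs are placeholders:

    #!/usr/bin/perl
    use strict;
    use SOAP::Lite;    # HTTPS transport additionally needs Crypt::SSLeay

    # Push a "deploy this tag" request from the staging host out to each
    # public webserver; only outbound HTTPS is required, so no inbound
    # firewall holes need to be punched.
    my $tag = shift or die "usage: $0 RELEASE_TAG\n";
    my @endpoints = qw(
        https://www1.example.com/cgi-bin/deploy.cgi
        https://www2.example.com/cgi-bin/deploy.cgi
    );

    for my $endpoint (@endpoints) {
        my $result = SOAP::Lite
            ->uri('urn:CallistoDeploy')    # hypothetical namespace
            ->proxy($endpoint)
            ->call(deploy => $tag);        # hypothetical remote method
        die "deploy to $endpoint failed: " . $result->faultstring . "\n"
            if $result->fault;
    }

The matching server side could run as a plain CGI under SOAP::Transport::HTTP::CGI, keeping the whole exchange inside ordinary HTTPS.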
From: Peter F. <pet...@zv...> - 2002-07-05 02:59:34
On Fri, 5 Jul 2002 01:37, Michael A Nachbaur wrote:
> Great! If you'd be willing, let's see what pieces of your CMS could be
> merged in with Callisto (that is, if you agree with Callisto's code). I
> have a pretty good start with this CMS, but there are some (many?)
> things that are done in a "well, it works for me" approach. Deployment
> is one of them.

I'm happy to supply any code I already have. I'm just not sure yet if any of it would be useful. The system is designed to run as CGIs (so it would run on any webserver, as well as from the command line), and the XML coding is total rubbish - just proof-of-concept stuff, not robust or scalable (I was planning to replace it with Perl XML modules and libxml). I guess I need to know Callisto better and see if I can identify anything that suits.

> > I run a server farm of 4 unix servers running Apache ( + openssl +
> > mod_perl + various cgi based apps ). There are 30+ hosted sites with
> > 4Gbytes of content. The main site has 10,000+ pages and 2-3 million
> > hits (300-500,000 page views) a week, 30-40,000 unique IP's a week,
>
> Well, none of the sites that Callisto currently manages is in any way as
> large or well trafficked as the ones you mention. I don't think
> Callisto would have any problems managing such a large site, except
> maybe the new directory listing code I've just added; if there are a lot
> of files in a directory, it may take a few seconds longer than
> comfortable to generate the listing.

My current system is CGI based - I have been very conscious of its scalability failings - performance under heavy load is a real problem (at one point demand on the dynamic part of the site went up 700% and the system just ground to a halt at about 150 simultaneous connections to the CMS pages).

I had been considering how to improve CPU & memory usage, and how to get the best response times I can. FastCGI and various caching mechanisms were on my todo list (though AxKit's caching was tempting me to convert 8-).

If AxKit does get a standalone mode then I'm keen to have a system that can be run via the CGI mechanism as well as under mod_perl, basically so that it can be used on non-Apache platforms, and on low-traffic sites where the "process per page view" model doesn't cause performance problems, is actually kinder on memory consumption, is less likely to break (no chance of memory leaks etc.) and could be more easily bundled up into a single executable (using something like Metakit) for a "drop in place and use" experience. Larger sites definitely need the mod_perl or FastCGI approach of maintaining a permanently running engine that services CMS requests, maintains state, avoids process startup overheads, hopefully shares as much memory as possible, etc.

My experiences with CGI & mod_perl suggest that you could produce a single code base to execute in both environments, once AxKit can be run standalone ... Anyway, I digress. Application-aware caching (as opposed to generic AxKit caching) - i.e. reusing generated objects whose subcomponents haven't changed - can help with large directory listings and the like (a sketch follows this message).

> > blah, blah ... Anyway, it is all authored on a staging server and
>
> This definitely makes much more sense than the system I have in place.
> Here were the design reasons behind the way I've built deployment into
> Callisto:
>
> "Crap! My customer needs to edit their website by tomorrow, and I
> have no way to deploy content to the production webserver"
>
> So I crammed all night long, and got a FTP/SOAP/ZIP hodgepodge working
> by morning. I've let it be because a) it works; b) I wanted to get
> ideas from people more experienced with automated deployment before I
> hacked another deploykenstein together.

8-). No criticism here - it sounds like a great achievement in that context (as well as a fun project). It would just be way too slow & resource-intensive in my context 8-).

> I have personally never used rsync, though it sounds much more elegant
> than what I have going now. If you're interested, grab a CVS copy of
> Callisto and have fun. ;) If you don't have the time, I'd be more than
> happy to add support for this.

I hope I can, but it depends on my client's future plans. He may allow me to refactor his sites to use Callisto if I can guarantee no disruption to the current status quo, and some real chance of a better facility in the future. In that case my work will effectively be sponsored by them (my company retains the IP to any development work we do - the client gets the system they want, plus free upgrades and very attentive support ...).

> Would you object to merging some of your code with Callisto's?

Not at all - if there is anything worth reusing 8-).

> > Having said that, WebDAV is another way to achieve your ends, though
> > it can't do the things rsync can ....
>
> I've played a bit with WebDAV, and though its goals seem very nice,
> it a) doesn't seem to work exactly as advertised, and b) smacks too
> much of FrontPage Server Extensions for my liking. *shudder* Of
> course, that's an uneducated observation, since I haven't worked with
> it in-depth.

Point a is true, but I think that it is fixable at the implementation level - I don't believe that WebDAV itself is broken - just not finished. If the "V" part is ever actually realised (i.e. defined well enough to actually be implemented and used) it would be a very useful thing. Consider the very common scenario of people who don't have any choice for remote file transfer other than http, e.g. small sites on ISP host servers that don't offer ssh/rsh/cvs (I know ftp is usually available, but secure ftp usually isn't, and I for one need secure data transfer). Or users who have their staging server behind a corporate firewall running under extremely paranoid security policies that they have no control over, with only https connectivity to the public web servers (this is a very real future possibility for one of my clients).

And yes, any similarity to FrontPage would be reason to "just say No" 8-). But I don't see the resemblance. It's more akin to your (wonderfully named) "deploykenstein" solution, which apart from actually working, is very portable, just not efficient for large sites.

Perhaps deployment needs a generic API, so that sites can use whichever transport protocols and deployment tools are available to them, with the best all-round mechanism being the default / initially available deployment app?

> > Anyway, maybe I'm off the track here ... are there any limitations
> > that I'm not aware of? I'd like to continue using my deployment
> > approach with Callisto even if you can't, so I'd like to know if there
> > are any gotchas in deploying the Callisto framework & data ....
>
> Not really. Callisto is mainly a content editing/navigation framework,
> with bolt-on "app" and "sidebar" support. A "sidebar" is a
> self-contained directory with an .htaccess file, index.xsp file (the
> main sidebar logic), and a sidebar.xml file describing the whole lot.
> You can add as many sidebars as you want.

Cool. Steve Ball & I did a Tcl-based intranet prototype that had a similar approach (TclHTTP instead of Apache/mod_perl, Tcl microscripting instead of XSP, and XHTML instead of XML).

> Likewise, apps are structured much the same way. The "deploy" app is
> the only app included in the distribution right now, and it's
> self-contained. If a new deployment system could be devised, it could
> just be another app, and when it's finished, the old deploy app could be
> axed.

If there are enough hands to help, there's no reason not to have several deployment apps to suit different platform contexts, as mentioned above. I like your current one because, as you say, it works, it is very portable, and it will work in most heterogeneous environments too.

rsync/rsh/ssh/cvs/webdav can work in any unix, mac or windows space too, but they are not self-contained - they need the installation of other applications to work in any uncustomised environment other than some *nix's.

Peter

--
Peter Farmer | Custom XML software | Internet Engineering
Zveno Pty Ltd | Website XML Solutions | Training & Seminars
http://www.zveno.com/ | Open Source Tools | - XML XSL Tcl
Pet...@zv...
Ph. +61 8 92036380 | Mobile +61 417 906 851 | Fax +61 8 92036380
From: Michael A N. <mi...@na...> - 2002-07-04 17:52:04
On 04 Jul 2002 07:32:21 -0400, Fraser Campbell wrote:
> On Thu, 2002-07-04 at 01:12, Michael A Nachbaur wrote:
> > Anyway, tonight an idea struck me. Callisto could create an RPM
> > file instead of a Zip file. FTP'd over to the target servers, an
> > "rpm -Uvh <filename>" could include all the pre/post scripts, install
> > the files in the necessary directories, etc.
>
> I wouldn't want to see that. As soon as you do that then Callisto
> becomes Linux specific, further to that I use Debian Linux and over
> here we don't even use RPMs.

Ah, good point. AFAIK Callisto *should* run fine on a Win32 system, as long as all the necessary Perl dependencies are handled. All paths originate from the configuration files, so that should be fine. I may have to switch "/"'s into "\"'s in some cases though. No sense in adding an artificial limitation to the app so early in its existence.

> I'd prefer to see an implementation of publishing based entirely on
> http although I'm not sure what might be appropriate (Dav, SOAP, ???).
> We don't even run FTP on our webservers ... we use either rsync over
> ssh or perforce (revision control system) to publish files.

Well, another idea I've had is to use CVS to deploy files. I'm right now adding some initial development code to make the whole shebang built on CVS, so all content and configuration file changes are versioned. When a deployment is made, the files are tagged, and those files are then copied (a sketch of this flow follows this message).

It might be possible (but probably pretty slow) to just do a "cvs update" or "cvs export" on the production directory, and snag the files straight from the CVS repository.

This would also have the additional benefit that web designers / developers could just check out a copy of the website through CVS, edit images and stylesheets on their local machines, and then just check the changes back in.

All that would be needed would be either:

a) a piece of server code to reside on the target webservers that can be told by the CMS server when to grab a fresh image of the site and what release tag to grab, and could then tell the CMS if the deployment failed (allowing the CMS server to then tell all the other webservers to roll back to previous versions so there isn't any inconsistency in the server farm); or

b) a script that can reside on the target servers that can be invoked by the CMS server via SSH.

--
OpenSource: Every now and then, you get what you don't pay for.
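Michael's tag-then-copy flow could look something like the following sketch; the repository path, module name, and tag scheme are all invented for illustration:

    #!/usr/bin/perl
    use strict;

    my $cvsroot = '/var/lib/cvs';      # assumed repository location
    my $module  = 'website';           # assumed module name
    my $tag     = 'DEPLOY_' . time();  # one fresh tag per deployment

    # Tag the repository so this release can be reproduced, or rolled
    # back to if a target server reports a failed deployment.
    system('cvs', '-d', $cvsroot, 'rtag', $tag, $module) == 0
        or die "cvs rtag failed: $?\n";

    # Export (a checkout without the CVS/ administrative directories)
    # straight into the production docroot.
    system('cvs', '-d', $cvsroot, 'export', '-r', $tag,
           '-d', '/var/www/production', $module) == 0
        or die "cvs export failed: $?\n";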
From: Michael A N. <mi...@na...> - 2002-07-04 17:43:30
> I'm a s/w engineer with interest in both finding a way to use axkit
> seriously on my clients sites and in open source CMS development. I
> have been slowly evolving a perl based CMS for use by my main client.
<snip>
> the bandwagon I need to jump on so I can scratch my itches (sorry
> about that mixed metaphor 8-).
> I look forward to trying out Callisto real soon.

Great! If you'd be willing, let's see what pieces of your CMS could be merged in with Callisto (that is, if you agree with Callisto's code). I have a pretty good start with this CMS, but there are some (many?) things that are done in a "well, it works for me" approach. Deployment is one of them.

> I run a server farm of 4 unix servers running Apache ( + openssl +
> mod_perl + various cgi based apps ). There are 30+ hosted sites with
> 4Gbytes of content. The main site has 10,000+ pages and 2-3 million
> hits (300-500,000 page views) a week, 30-40,000 unique IP's a week,

Well, none of the sites that Callisto currently manages is in any way as large or well trafficked as the ones you mention. I don't think Callisto would have any problems managing such a large site, except maybe the new directory listing code I've just added; if there are a lot of files in a directory, it may take a few seconds longer than comfortable to generate the listing.

> blah, blah ... Anyway, it is all authored on a staging server and
> synchronised via cron at regular intervals or can be manually
<snip>
> rsh/ssh, then running rsync servers on your web servers can achieve
> the same results.

This definitely makes much more sense than the system I have in place. Here is the design reasoning behind the way I've built deployment into Callisto:

"Crap! My customer needs to edit their website by tomorrow, and I have no way to deploy content to the production webserver"

So I crammed all night long, and got an FTP/SOAP/ZIP hodgepodge working by morning. I've let it be because a) it works; b) I wanted to get ideas from people more experienced with automated deployment before I hacked another deploykenstein together.

I have personally never used rsync, though it sounds much more elegant than what I have going now. If you're interested, grab a CVS copy of Callisto and have fun. ;) If you don't have the time, I'd be more than happy to add support for this.

Would you object to merging some of your code with Callisto's?

> Having said that, WebDAV is another way to achieve your ends, though
> it can't do the things rsync can ....

I've played a bit with WebDAV, and though its goals seem very nice, it a) doesn't seem to work exactly as advertised, and b) smacks too much of FrontPage Server Extensions for my liking. *shudder* Of course, that's an uneducated observation, since I haven't worked with it in-depth.

> Anyway, maybe I'm off the track here ... are there any limitations
> that I'm not aware of? I'd like to continue using my deployment
> approach with Callisto even if you can't, so I'd like to know if there
> are any gotchas in deploying the Callisto framework & data ....

Not really. Callisto is mainly a content editing/navigation framework, with bolt-on "app" and "sidebar" support. A "sidebar" is a self-contained directory with an .htaccess file, an index.xsp file (the main sidebar logic), and a sidebar.xml file describing the whole lot (a hypothetical sketch of one follows this message). You can add as many sidebars as you want (though I had to rip out the code that allows a person to choose which sidebars they want displayed).

Likewise, apps are structured much the same way. The "deploy" app is the only app included in the distribution right now, and it's self-contained. If a new deployment system could be devised, it could just be another app, and when it's finished, the old deploy app could be axed.

--
OpenSource: Every now and then, you get what you don't pay for.
From: Peter F. <pet...@zv...> - 2002-07-04 16:57:15
Hi Michael,

I'm a s/w engineer with an interest both in finding a way to use AxKit seriously on my clients' sites and in open source CMS development. I have been slowly evolving a Perl-based CMS for use by my main client. I wanted something that used the filesystem as the storage medium, was open and cross-platform, used XML as the data storage format, and could use templating or XSL to deliver the content. It does all that, but it has various limitations and I don't know if I'd have the resources to keep developing it on my own. I also like what I see going on with AxKit and wanted to transition my system to it, again if I could find the extra resources. And then lo & behold! Callisto gets announced on the AxKit and CMS mailing lists, and it sounds like the bandwagon I need to jump on so I can scratch my itches (sorry about that mixed metaphor 8-).

I look forward to trying out Callisto real soon.

In the meantime I'd like to venture a comment on deployment strategies. I think that the systems I have built are similar enough to yours to make my experience relevant. I have to support Fraser Campbell's views.

I don't know the background for your current deployment strategy, so I may be missing the point.

I run a server farm of 4 unix servers running Apache ( + openssl + mod_perl + various cgi based apps ). There are 30+ hosted sites with 4Gbytes of content. The main site has 10,000+ pages and 2-3 million hits (300-500,000 page views) a week, 30-40,000 unique IP's a week, blah, blah ...

Anyway, it is all authored on a staging server and synchronised via cron at regular intervals, or can be manually initiated. Only parts of the sites are managed by the CMS, but the whole lot is deployed successfully and efficiently using rsync. Since all CGIs are written in Perl, the same sites can be deployed on to any unix platform or windows (with a little help from Cygwin) - with a little care. I can also use rdist, which has a nice configuration file syntax, but rsync is (obviously) much quicker in deploying site changes. No need to zip anything, no large files to transfer, no wasted bandwidth or time, no SuExecing required - just rsync, rsh or ssh, cron and perhaps a very little scripting glue (a sketch follows this message). If you can't use rsh/ssh, then running rsync servers on your web servers can achieve the same results.

Having said that, WebDAV is another way to achieve your ends, though it can't do the things rsync can ....

Anyway, maybe I'm off the track here ... are there any limitations that I'm not aware of? I'd like to continue using my deployment approach with Callisto even if you can't, so I'd like to know if there are any gotchas in deploying the Callisto framework & data ....

> Right now, Callisto deploys to a website target by:
> a) Zipping up the site's files;
> b) FTP'ing the files to all the targets;
> c) Running a SOAP query on all the target webservers which, SuExec'd as
> the user owning the files, extracts the files to the necessary directory.

regards,

Peter

--
Peter Farmer | Custom XML software | Internet Engineering
Zveno Pty Ltd | Website XML Solutions | Training & Seminars
http://www.zveno.com/ | Open Source Tools | - XML XSL Tcl
Pet...@zv...
Ph. +61 8 92036380 | Mobile +61 417 906 851 | Fax +61 8 92036380
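The "very little scripting glue" Peter refers to could be as small as this cron-driven sketch; the hostnames and paths are placeholders:

    #!/usr/bin/perl
    use strict;

    # Mirror the staging docroot to each public webserver over ssh.
    # rsync transfers only the deltas, so frequent runs stay cheap.
    my @servers = qw(www1.example.com www2.example.com);  # placeholders

    for my $host (@servers) {
        system('rsync', '-az', '--delete', '-e', 'ssh',
               '/var/www/staging/', "$host:/var/www/htdocs/") == 0
            or warn "rsync to $host failed: $?\n";
    }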
From: Fraser C. <fr...@we...> - 2002-07-04 11:32:35
On Thu, 2002-07-04 at 01:12, Michael A Nachbaur wrote:
> Anyway, tonight an idea struck me. Callisto could create an RPM file
> instead of a Zip file. FTP'd over to the target servers, an "rpm -Uvh
> <filename>" could include all the pre/post scripts, install the files in
> the necessary directories, etc.

I wouldn't want to see that. As soon as you do that, Callisto becomes Linux specific; further to that, I use Debian Linux and over here we don't even use RPMs.

> So what I want to know from you is, what do you think? Do you think RPM
> is the way to go for this, or should I (we?) roll my own way of
> deploying files? I still think I'm going to have to use FTP/SOAP (or
> just SOAP with a mime attachment) to get the Zip or RPM file onto the
> target webserver, but the way the files are extracted could be more
> standardized than it is now.

I still haven't had the time to get Callisto set up, so I'm hardly qualified to comment. I'll try to take a look at how you're handling the publishing sometime in the next week though.

I'd prefer to see an implementation of publishing based entirely on HTTP, although I'm not sure what might be appropriate (DAV, SOAP, ???). We don't even run FTP on our webservers ... we use either rsync over ssh or Perforce (a revision control system) to publish files.

A good place for looking at packaging/deployment ideas might be OpenInteract; they deploy applications with special tarballs (see http://www.openinteract.org/code/link-1.07.tar.gz as an example).

Fraser
From: Michael A N. <mi...@na...> - 2002-07-04 05:19:11
I'm not sure how many people have subscribed to the devel list, so I'm CC'ing the users list as well. *shrug* Anyhoo.

I've been thinking about some cool ideas for fleshing out deployment, and have been trying to find an elegant way of implementing it. The big feature I want is pre/post scripts to be run during a deployment. I run a reverse proxy server in front of my websites, and it would be nice to be able to automagically purge the proxy's cache when I deploy new versions of a site. There are also some circumstances where it would be necessary to restart Apache after a deployment. And once AxKit can be run from the command line, it would be nice to be able to pre-generate static HTML from XML/XSL.

Right now, Callisto deploys to a website target by:

a) Zipping up the site's files;
b) FTP'ing the files to all the targets;
c) Running a SOAP query on all the target webservers which, SuExec'd as the user owning the files, extracts the files to the necessary directory.

(A rough sketch of steps a and b follows this message.) Though this works, I don't feel that it's optimal. My idea for improving this was to have an XML configuration file describe this information, but I didn't want to re-invent the wheel.

Anyway, tonight an idea struck me. Callisto could create an RPM file instead of a Zip file. FTP'd over to the target servers, an "rpm -Uvh <filename>" could include all the pre/post scripts, install the files in the necessary directories, etc.

So what I want to know from you is: what do you think? Do you think RPM is the way to go for this, or should I (we?) roll my own way of deploying files? I still think I'm going to have to use FTP/SOAP (or just SOAP with a mime attachment) to get the Zip or RPM file onto the target webserver, but the way the files are extracted could be more standardized than it is now.

That's it for now. I'm off to bed.

--
OpenSource: Every now and then, you get what you don't pay for.
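Steps a) and b) above map directly onto standard Perl modules of the time. A minimal sketch (hosts, paths, and credentials are placeholders; the SOAP extraction of step c is omitted):

    use strict;
    use Archive::Zip qw(:ERROR_CODES);
    use Net::FTP;

    my @targets = qw(www1.example.com www2.example.com);  # placeholders
    my ($user, $pass) = ('deploy', 'secret');             # placeholders

    # a) Zip up the site's files.
    my $zip = Archive::Zip->new();
    $zip->addTree('/var/www/staging/site', '');
    $zip->writeToFileNamed('/tmp/site.zip') == AZ_OK
        or die "could not write /tmp/site.zip\n";

    # b) FTP the archive to every target webserver.
    for my $host (@targets) {
        my $ftp = Net::FTP->new($host) or die "cannot connect to $host\n";
        $ftp->login($user, $pass)      or die "login to $host failed\n";
        $ftp->binary;
        $ftp->put('/tmp/site.zip')     or die "upload to $host failed\n";
        $ftp->quit;
    }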