From: Justin Y. <ju...@sk...> - 2003-01-03 17:52:13
Andreas Jellinghaus said:
> Hi,
>
> looking at your project, I have these questions:
> (could be put into your FAQ)

I'll add them shortly, they're good questions :)

> - why not GConf?
> Yes, I want to know in detail, very much in depth. And I know that
> GConf is not what you want, but I'd also like to know why starting
> without it is easier than with it.
>
> I'm not related to the GConf project in any way, have not even used it.

Well, we looked at GConf to an extent, and it does appear to have potential. We thought that some of the design decisions made in GConf were not the best choices that could be made. GConf is more of a total replacement for "plain text" configuration files, storing everything in XML on disk. This is good for some things, but not all. There are other similar reasons as well. Basically, if we were to work on GConf, we'd probably want to completely redesign some or all of it, and that isn't always easy (nor is it necessarily less work), so we decided to go it alone. That said, we've intentionally made our design very modular, so that other projects can make use of one or more of our layers without using the others.

> - Is part of the design that applications such as Apache, Samba, etc.
> will kick their own config file formats, parsers, writers and use the
> result of this project as a universal replacement?

Part of our design will allow this, but we're not necessarily going to encourage it. Another project could potentially use our XML read/write/edit libraries & tools to store their config natively in XML. I would much prefer to edit most native config files than something muddled with XML tags everywhere. XML's strength is when a program is doing the modifications, so we decided to convert native config files on the fly to and from XML as needed, so that the XML is only seen by applications that *want* to see XML. Otherwise, apps & users can still just directly read the native files.

This means that our config-editing UIs will only understand config files in XML, but the authoritative copy of the configuration stays in the native format. You may wonder why we don't just go totally with XML and try to abandon native config files. The reason is that, using our design, it's not necessary.

There are two disadvantages to our approach. First, it could be a performance problem. This will be minimized by caching & dynamic loading, so it should not be noticeable in most cases. Second, there's the possibility that a user will create an invalid native config file, causing errors in the conversion to XML. We'll just have to handle this as cleanly as possible.

> - Is part of the project to get your hands dirty and do these changes?

We will be working on adding the supporting files needed to support various applications, servers, etc. We would also like project maintainers to include our supporting files in their packages. So, yes, we will be working to support lots of things initially, but after that, we would prefer if project maintainers were responsible for updating the files as needed. This will eliminate version problems. Since the supporting files will be updated & installed at the same time as the actual applications they describe, they will always be up to date.

> c4g isn't the first project on the topic. All other projects I know
> start with new code for some reason, and either outright don't change the
> application (only provide a wrapper around the application's config file
> format - YaST, Webmin, Linuxconf, etc.) or theoretically allow an
> application to be rewritten using the new library/format/parser/writer,
> but never implement this on their own.
>
> If I understand your documentation correctly, the goal is to create the
> same thing as YaST, Linuxconf and Webmin, but do it better.

We mostly want to create a better Webmin/Linuxconf system & try to revamp how projects think about configuration.

Our library will also make it possible for projects to read their config files through the library, making it easier for them to parse their configuration, but this isn't required. We'll also do validation automatically. Also, they can invoke one of our UIs to edit their configuration, so all aspects of configuration will be *able* to be handled by our system.

Hope this answers your questions. Jason, do you have anything to add?

Justin
--
SkiingYAC Custom Solutions
http://www.SkiingYAC.com
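The on-the-fly conversion described above (native file stays authoritative, XML generated only for tools that want it) can be sketched roughly like this. This is a Python illustration with invented tag names, not the project's actual schema or code (its real parsers were Perl scripts):

```python
# Minimal sketch of on-the-fly INI <-> XML conversion: the native file
# remains authoritative, and XML exists only for applications that want
# an XML view. Tag names here are illustrative, not Config4GNU's schema.
import configparser
import io
import xml.etree.ElementTree as ET

def ini_to_xml(text: str) -> ET.Element:
    """Parse native INI text into a generic XML tree."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    root = ET.Element("config")
    for section in cp.sections():
        sec = ET.SubElement(root, "section", name=section)
        for key, value in cp.items(section):
            prop = ET.SubElement(sec, "property", name=key)
            prop.text = value
    return root

def xml_to_ini(root: ET.Element) -> str:
    """Unparse the XML view back into native INI text."""
    cp = configparser.ConfigParser()
    for sec in root.findall("section"):
        cp.add_section(sec.get("name"))
        for prop in sec.findall("property"):
            cp.set(sec.get("name"), prop.get("name"), prop.text or "")
    buf = io.StringIO()
    cp.write(buf)
    return buf.getvalue()

root = ini_to_xml("[global]\nworkgroup = HOME\n")
root.find("section/property").text = "OFFICE"   # edit through the XML view
print(xml_to_ini(root))                          # round-trips to native INI
```

The point of the round trip is that an editor only ever touches the XML tree, while users and applications keep reading the plain native file.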
From: Andreas J. <aj...@du...> - 2003-01-03 13:21:11
Hi,

looking at your project, I have these questions (they could be put into your FAQ):

- Why not GConf? Yes, I want to know in detail, very much in depth. And I know that GConf is not what you want, but I'd also like to know why starting without it is easier than with it. I'm not related to the GConf project in any way, have not even used it.

- Is part of the design that applications such as Apache, Samba, etc. will kick their own config file formats, parsers, writers and use the result of this project as a universal replacement?

- Is part of the project to get your hands dirty and do these changes?

c4g isn't the first project on this topic. All other projects I know start with new code for some reason, and either outright don't change the application (only provide a wrapper around the application's config file format - YaST, Webmin, Linuxconf, etc.) or theoretically allow an application to be rewritten using the new library/format/parser/writer, but never implement this on their own.

If I understand your documentation correctly, the goal is to create the same thing as YaST, Linuxconf and Webmin, but do it better.

Regards, Andreas
From: Justin Y. <ju...@sk...> - 2003-01-03 03:55:59
Jason,

That's great. I haven't gotten around to looking at Xerces much more, but I should be able to shortly. I've spent the last few days converting our docs to XML and am now finished (I believe). They should look the same on the website, but now everything is .xml instead of .sgml in CVS. I need to update that extension in a few of the pages on the site, but otherwise it seems like it's working, and it's *much* nicer than SGML if you ask me. I had to do a few weird things with make to get it all to build properly, and I'm not 100% sure I did it the "correct" way, but it is working now.

Let me know when you get the Xerces stuff in CVS. I'm going to look over it more this weekend.

Justin
--
SkiingYAC Custom Solutions
http://www.SkiingYAC.com
From: Jason L. <jl...@me...> - 2003-01-01 00:33:25
Justin:

I have downloaded the C++ Xerces from xml.apache.org, compiled it, and got a little prototype to work with it. I think it will suit. Since the library is C++, I decided to do this prototype using Gtk-- (C++ bindings for Gtk+). It's taking me some time to get used to the C++ ways of doing things; I'm finding it a little confusing. Code will be committed to CVS once I figure out how to integrate these new dependencies into the automake/autoconf system.

For those on the list, I'd like to take this opportunity to bring you up to speed with what's going on, and to apologize that not much has been said on the mailing list recently :)

First of all, Justin and I have put a lot of work recently into an "implementation" document, which is basically a guide for how the system will work and is more detailed than anything we've done so far. We want to make this document as detailed as possible, because it's a lot easier to make changes to this document than it is to make changes to code. The implementation document (DocBook format) is in CVS in website/docs/implementation/implementation.sgml. It can be viewed on the web at http://config4gnu.sourceforge.net/docs/implementation/.

Second, this email about Xerces is a result of a proposal to use a more feature-rich XML library than libxml. If you look at the website xml.apache.org, you'll see that Xerces implements the DOM interfaces and supports XML Schema. Unfortunately there are no C bindings for it, which means our project would use C++ (if we decide to make this change). Another nice thing about Xerces is that there are already Perl bindings for it, so we could use the same XML library for the Config4GNU library (written in C++) as for the backends (which use Perl).

We welcome your feedback (particularly concerning Xerces, because our experience with it so far is very limited). I will also try to post this information as a SourceForge news item so our website will be updated as well.

Jason
From: Jason L. <jl...@me...> - 2002-12-17 16:10:27
Attached is a new parser I wrote for generating an XML representation of runlevel configuration under a Gentoo system. I also checked in a parser for doing the same on a Red Hat/Mandrake system (and only a few changes would be needed to do Debian as well).

Jason
From: Justin Y. <ju...@sk...> - 2002-12-07 21:46:16
I've updated the website. I believe most of the docs are more or less up to date. Some may soon become out of date when we begin coding next semester, but at this point I think everything is OK. I put up the new schematic diagram, put up Jason's changes to the requirements document, and added the new Implementation Guide at http://config4gnu.sf.net/docs/implementation/ , which is a very young version that probably needs to be cleaned up a little bit in the next week. If anyone has any suggestions on the implementation guide, I'm open to them. I just wanted to get some of our ideas online so people can comment on them.

Justin
--
SkiingYAC Custom Solutions
http://www.SkiingYAC.com
From: Justin Y. <ju...@sk...> - 2002-12-04 19:13:12
On Wed, 2002-12-04 at 12:24, Mark Howard wrote:
> Hi,
> I'm just about to finish uni for the Christmas vacation and thought it
> might be interesting to look at your project. From what I've seen so
> far, I'm very impressed. The task, however, is huge and many people have
> failed in the past. I hope you persevere and are successful.

It does seem daunting at times. :)

> A few weeks ago, when the Debian desktop subproject was formed, I
> proposed that one of its aims should be to tackle the 'configuration
> nightmare', linking to that fm][ article. Very few people were interested in
> this, saying that it was far too much work. However, I still believe
> that the only way a system such as yours will succeed is if it is
> adopted by one of the large distributions. The sooner this happens, the
> more interest and so more development you will get. Therefore, I would
> like to work on cfg Debian integration.

We'd be *very* interested in getting Debian to support it. We think that if we could get one or two major distributions (Debian and Gentoo were some of our top candidates) to support/use CFG in some way, and maybe get a major software project (Apache, Samba, etc.) to support it, we'll be in pretty good shape. So having Debian on board in any way is a great thing.

> My main aims over the vacation would be to familiarise myself with your
> work, start producing Debian packages of CVS snapshots and possibly look
> into writing a debconf (the Debian configuration system, used by many
> packages to set up initial configuration) back end for cfg.
> Unfortunately, I have never looked into debconf before, so it would
> involve learning.

Well, I can say I'm not terribly familiar with debconf. We're in the process of cleaning up some aspects of our project, specifically the documentation on our website. This should be done by the 18th at the latest, as we have some things to finish before the end of the semester to fulfill some requirements we have (CFG is being started as a project to meet some graduation requirements, but it certainly won't stop there if it's successful).

We have a lot of code in CVS, but most of it is prototypes we've made to test out our ideas & look into how feasible this project is. I would probably not recommend spending too much time looking through our code. However, I *would* recommend familiarizing yourself with the documentation on our website, which outlines our goals, our design concept, etc. Again, these docs are slightly out of date at the moment and will be updated. I can drop you a line when they're cleaned up to better reflect the current state of things.

Right now, we've got little code I would recommend actually using, and I'd also recommend NOT having it edit your real config files, so don't run it as root so that it can't. If you want to make CVS snapshot packages for Debian, you can, I'm just not sure whether it would be useful at *this* point. In the near future (Feb or March?), they'd be great to help get feedback & facilitate testing.

> So, would you like my help with this?

Yes.

> and has anyone else started work
> on Debian related aspects of the project already?

No, but we have thought about what it will take to have CFG support various distros. So, perhaps you could both help us refine our ideas on distro support and help with testing/support for Debian. Anything else you'd like to help with (general feedback, coding, etc.) would be great as well.

Thanks for your interest in our project.

Justin
--
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com
From: Justin Y. <ju...@sk...> - 2002-11-22 01:50:21
Here's an update on the parsers: the current CVS version should be fully working, and should not break anything else (apart from the deal with having to call them using the method outlined in /src/parsers/README, and not being able to tell them the files to read/write from as a command-line argument).

XML rewriting is now possible (and is explained in /src/parsers/README). The base parsers do NOT rewrite XML, and I think it would be a good idea to keep it that way. So, using the Ini parser will give the exact same XML as it gave previously. However, calling the Ini::Samba parser will give the same XML but with rewritten XML tags, using <sambaglobal>, <sambaprinter>, <sambahomes> and <sambashare> for sections, and either <string> or <boolean> for property tags. The rewrite rules are in /src/parsers/CFGXML/Parsers/Ini/Samba.pm (the only parser that rewrites anything currently), if anyone is curious. The same set of rules is used by both the parsing and unparsing process. They're done using hash tables, and I've tried to cache as much as possible, so there shouldn't be a significant speed hit once the rules get complex.

What I'm wondering about is what exactly we should do to identify sections versus properties, and what to do about having the names of properties & sections be attributes versus child tags. If the GUI or some other layer is using the XML class files to determine how to display them, it *could* look at the inheritance to determine whether a tag is inherited from a section or not, right? Or is that difficult to do?

Justin
--
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com
From: Justin Y. <ju...@sk...> - 2002-11-20 16:06:17
A possible way to do remote administration using CFG: http://xmlrpc-c.sourceforge.net/

It is a C/C++ library that allows RPCs to be made via XML over HTTP. It doesn't solve any of the permissions issues, but it is at least a possible alternative to ssh for the transport. The only requirement is that an HTTP server runs the CGI with the proper permissions.

Justin
--
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com
From: Peter W. <wie...@we...> - 2002-11-14 15:39:32
On 13 November 2002 at 17:21, Jason Long wrote:
> Last night I thought more about what kinds of interface, i.e.
> representations of the data, we might pursue. Here's my list:
> [...]
> What others are possible?
> [...]

I think it should be:
- a client/server protocol
- extremely simple
- fast
- low-level IPC, not over the network.

There should be only one interface, which is the basic and definitive CFG interface. All other interfaces are layers on top of this. These other interfaces are:
- meta interfaces like CORBA
- adapters for other applications like ODBC
- transport protocols like TCP/IP.

Let me describe how Linuxconf does it (Linuxconf has 3 years of experience and 80,000 lines of code): Linuxconf uses two pipes (one for each direction) to talk between the UI (called the "GUI server") and the conf program. They send simple ASCII text with a primitive XML-style frame containing a shell-like command line. A short quote [(C) by the Linuxconf developers, fair use, blabla] from their docs:

---begin of quote
    Here is an example of a small dialog:
    <tscreen><verb>
    MainForm basic "Basic menu"
      Label "Your name"
      String S1 15
      Newline
      Label Telephone
      String S2 15
      Newline
      Label Status
      Radio R1 1 0 single
      Radio R1 2 0 Married
      Newline
      Button B1 Accept
      Button B2 Cancel
      End
    </verb></tscreen>
---end of quote

In my opinion it would be good to modularize the conf program into resources. These resources are CFG servers. So configuration would be a process of getting instructions from the client above, and internally having some definitions (CFG scripts) which result in actions. The server can itself become a client, "calling" other resources (or operating directly on system resources like conf files, tools, etc.). So remote admin and OS-specific conf would become easier, I hope. And without reprogramming, the user can run a conf/admin automaton (a script or so) or a GUI on any resource.

There would be a daemon lib which is a sort of CFG parser. But many daemon process instances would probably slow down the system, or? (The UI itself could be a client to the UI display server and a client to the resources:

    UI display server <--- UI program ---> conf resource server

Maybe reusing the UI server of Linuxconf is an option. It's sort of alpha stage, but it will soon support console with newt, X and HTML, and will support help texts.)

The Linuxconf folks intentionally did not modularize as radically as I suggested above. They have a core and some outsourced modules. Someone already mentioned both ideas (a core "center" versus everything being a module) on this list, I think.

I think that the design currently discussed in the other thread, "J&J reinvent the wheel: they rediscover db & oop :)", has a strong influence on the issue of which UI interface is used.

Kind regards
Peter Wiehe
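The two-pipe ASCII protocol Linuxconf uses can be sketched in miniature with a child process and stdin/stdout pipes. This is a toy command vocabulary invented for illustration, not Linuxconf's actual dialog language:

```python
# Toy sketch of a Linuxconf-style two-pipe protocol: a "UI server" child
# reads simple ASCII commands on one pipe and answers on the other.
# The command vocabulary here is invented for illustration.
import subprocess
import sys
import textwrap

CHILD = textwrap.dedent("""
    import sys
    for line in sys.stdin:                        # commands arrive on stdin
        cmd = line.strip().split()
        if cmd[:1] == ["Label"]:
            print("ack Label", " ".join(cmd[1:]))  # replies go out on stdout
        elif cmd[:1] == ["End"]:
            print("ack End")
            break
""")

proc = subprocess.run(
    [sys.executable, "-c", CHILD],
    input='Label "Your name"\nEnd\n',
    capture_output=True, text=True,
)
print(proc.stdout, end="")
```

The appeal of this design, as the email argues, is that the protocol is plain text: any client that can write lines to a pipe can drive the UI server, with no library linkage required.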
From: Justin Y. <ju...@sk...> - 2002-11-13 16:31:56
On Tue, 2002-11-12 at 12:54, Jason Long wrote:
> Hopefully it's not so much reinventing the wheel as using existing ideas....
>
> i.e. using an existing database wouldn't help because we want the data to
> exist in the native configuration files. Using a database to store the XML
> would mean we have two copies of configuration-- the one that's in the
> database and the other that's in the original configuration files. However,
> this again brings up the question of caching the configuration, in which
> case using a database to store the configuration is appropriate.

Yes, since one of our goals is to maintain the existing formats and keep the config files as simple text files which are easy to back up and more or less 100% reliable, we can't very well store them in the database. We could perhaps use the Berkeley DB or gdbm libraries to handle some of our logging & other things, although I would vote for logs being human-readable text files (just like all other *nix logs).

Something which might have been confusing about my bash script example is the "commit" command. This would be more useful if the script made multiple configuration changes, then said "commit". That way, unparsers would not be re-run for *every* change, "Do this. Ok, now do this. Ok, now this...". Instead, at the end of all the changes, the user says "Here's a list of things. Ok, now do what I just said all at once", and the unparsers and everything else are only run once.

> I do see the project as something along the lines of providing an
> object-oriented database _interface_ to the configuration data. But what
> should an object-oriented database interface look like?

That, I think, really is the key. We just want to make a fancy interface to not-so-fancy, often weird, config files. It follows that we *could* create a fancy object-oriented way of storing data in some fast binary format, or even just as raw XML, but this would be relegated to an optional lower level, something of a very fancy parser/unparser. I guess we are doing something similar to a database when you really get down to it. Who isn't, though? :)

> I agree that Java has a lot of capabilities that will be useful, but I think
> creating a dependency on a JVM would be a serious detriment to the
> acceptability of our project. JVM's are big, take a while to start up, and
> I'm not sure how available they are beyond the Win32, Solaris, and Linux
> environments.

I agree that requiring Java would prevent any sort of wide-spread acceptance. Perl does allow some object-orientedness with its modules, and most object support that Java has is also available in Perl, and C for that matter (if you use glib, anyway).

What has made me think the most about parsers is how important it is that they are truly extensible. I can't think of more than a dozen types of config formats, maybe 2 or 3 dozen if you consider *all* of the variants. Still, that's hardly anything, and each parser can be written using the new Perl module in about 10-20 lines of code, using regexes and simple if statements. *If* the (un)parsers were only responsible for converting to and from native files and a few generic XML tags, I would say that writing a new parser for each new config *format* (non-application-specific) is the easiest and best way to go. It also would be reasonably fast. This is something like the question "Why write parsers that can only handle CFGs (the other CFG) when they could handle CSGs?". I'm also not sure how to extend something which is basically a list of regular expressions to try to match against a file. However, if the parsers need to be smart enough to generate customized XML tags, such as using <ipaddress> for an IP, and <color> for a hexadecimal RGB color, then I'm not so sure.

My concept of the parsers has been that the current "(un)parsers" are to handle non-application-specific file *formats* (INI, Colon, Flatfile, Resolv, etc.), which they do now, instead of handling specific config files (i.e., the same parser is used for Samba and PHP, since they both use INI-formatted config files). Now, there certainly needs to be application-specific stuff done shortly after the parsers get done with things. It seems that the most flexible way to do this is to write a layer on top of the parsers that reads *XML* and performs validation, adds data types, etc. That way, a Samba "validator" would not need to be rewritten should Samba ever decide to use an Apache-style config format. Additionally (and more importantly, since Samba switching formats is a silly example), similar applications, such as all of the various email servers, could potentially share very similar validators if the validators were made to validate the same generic XML. If the XML is valid, the validators could also perform the task of turning the generic XML into more specific XML tags for each particular application's config files. This could either be done on a second pass, or on yet another very thin layer sitting above the validators.

Part of the reason why I think parsers should be generic, and only do the simple task of XML conversion upon request, is because currently if a config file includes another config file, it is the job of the middle layer to call the parser again on each included file. That seems like the correct way to do it. If the parser did validation in a single layer, then the parser would have to read all of the included files first, and then validate. Maybe this isn't a huge deal one way or another, but I'm not sure whether or not parsers & validators should be merged into one. We should probably lean toward modular where possible.

Jason, maybe it would be a good idea for us to set aside an hour or two in the next few days to sit down and revise our current design/schematic and other documentation to reflect the current situation as well as the proposed semi-final design. We can decide on what to do about the UIs being generic XML editors (which I'm thinking more and more is a great idea), what to do about the validation & data-type questions, and whatever else we can think of. I hesitate to say we should do it via an online chat room, because I think that would be slightly silly since we're right next to each other. Maybe we could just take some brief notes and post what we decide on as a news item. I vote for that over IRC/IM.

Justin
--
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com
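The layering argued for above — a generic format parser that emits generic XML, plus a separate application-specific validator that types and renames tags — might look roughly like this. Tag names and the single validation rule are invented examples, and Python stands in for the project's Perl:

```python
# Sketch of the two-layer split: a generic format parser produces generic
# XML, and a separate application-specific validator adds data types
# afterwards. All tag names and rules here are invented examples.
import ipaddress
import xml.etree.ElementTree as ET

def generic_parse(lines):
    """Format layer: knows only the 'key = value' shape, not the app."""
    root = ET.Element("config")
    for line in lines:
        if "=" in line:
            key, _, value = line.partition("=")
            prop = ET.SubElement(root, "property", name=key.strip())
            prop.text = value.strip()
    return root

def samba_validate(root):
    """App layer: operates on the generic XML, never on the native file.

    Because it reads XML, it would survive Samba switching file formats.
    """
    for prop in root.findall("property"):
        if prop.get("name") == "wins server":
            ipaddress.ip_address(prop.text)   # raises if not a valid IP
            prop.tag = "ipaddress"            # specialize the generic tag
    return root

root = samba_validate(generic_parse(["wins server = 10.13.1.1"]))
print(ET.tostring(root, encoding="unicode"))
```

The key property is the one the email identifies: the validator never touches the native file, so the same validator could sit on top of any format parser that emits the same generic XML.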
From: Jason L. <jl...@me...> - 2002-11-13 16:22:04
Last night I thought more about what kinds of interfaces, i.e. representations of the data, we might pursue. Here's my list:

Document Object Model (DOM)
- standardized by www.w3.org
- provides a tree-structure view of the data
- tailored towards representing XML documents
- an editor based on DOM could be created that would be useful not only for configuration, but for editing XML documents in general

LDAP
- more of a network protocol, but it does define data structures
- provides a tree-structure view of the data
- each node has attributes
- configuration editable by any LDAP-aware client

CORBA
- object-based
- things have properties and methods

ODBC
- standard interface for relational databases
- table (rows and columns) view of data
- configuration would be editable by any database application that supports ODBC

Linux filesystem
- directories+files view of data

GConf
- Gnome-specific
- Windows-registry-like view of data

I thought of more, including GnomeDB and gnome-vfs, but they are very similar to items listed above. In addition, I haven't investigated any KDE-specific technologies; I'm sure there are some. What others are possible?

Jason
From: Jason L. <jl...@me...> - 2002-11-12 17:55:24
Hopefully it's not so much reinventing the wheel as using existing ideas.... i.e. using an existing database wouldn't help because we want the data to exist in the native configuration files. Using a database to store the XML would mean we have two copies of the configuration: the one that's in the database and the other that's in the original configuration files. However, this again brings up the question of caching the configuration, in which case using a database to store the configuration is appropriate.

I do see the project as something along the lines of providing an object-oriented database _interface_ to the configuration data. But what should an object-oriented database interface look like?

I agree that Java has a lot of capabilities that will be useful, but I think creating a dependency on a JVM would be a serious detriment to the acceptability of our project. JVMs are big, take a while to start up, and I'm not sure how available they are beyond the Win32, Solaris, and Linux environments.

Anyways, thanks for the feedback

Jason Long

>>> Gene Chase 11/12/02 12:18 PM >>>
I've caught up with reading your Config4GNU site. My summary: "Let's reinvent the wheel." :)

Yack says:
> It seems like the sticky point is how to make it easy
> to ensure that the configuration changes are successful.
> Maybe a better bash script would be:
> for host in "host1.com" "host2.com" "host3.com"
> do
>   echo 'cd /Daemons/Samba/; set "wins server" = "10.13.1.1"; commit;' \
>     | cfg -H $host
> done

----Let's reinvent databases.

Long says:
> Would it be possible to write the generic parsers
> in a way that they could be extended by more specific parsers.

----Let's reinvent object-oriented programming.

I didn't think db before, because the XML files would be small and infrequently used, so I didn't anticipate any contention. Your discussion about assuring that the data are properly written reminds me: there are other reasons for dbs besides contention, security and integrity being two of them.

----Would storing the XML in a db help?

I didn't think OO before, because I hadn't read your "samba extends inifile" example. This is not quite the same thing as simply redoing C code in C++, because you are not trying to write a parser in an OO language, but rather writing a parser *of* objects. There must be some way to write parameterized parsers that don't require creating a new (C++?) class file every time a new thing to be configured appears. That would somehow fail to capture a significant generalization, unless that process too were quite boilerplate. I can imagine a future example in which you want samba to extend inifile, and also to extend microsoftPassport, and then the extension mechanism requiring multiple inheritance rears its ugly head. Even without multiple inheritance, I can imagine future scenarios in which you might need to "refactor" your code in the light of new object relationships. At the moment, Java is the only language I know of that has tools for automating the refactoring of OO code.

----Would documenting an object hierarchy for the things currently listed at cs.messiah.edu/~cfg help to determine whether this is a special case that you should ignore, or whether inheritance is so common that it becomes an essential part of what you're doing?

My $0.02 is that there isn't enough OO stuff going on to make it worth the trouble to think further about it. At least this year.

--gene chase
From: Jason L. <jl...@me...> - 2002-11-11 01:40:09
I've been thinking about yet another approach for parsers. Would it be possible to write the generic parsers in a way that they could be extended by more specific parsers? i.e. create a samba parser that extends the inifile parser. The samba parser does not need any syntax-recognition abilities, but it will recognize known parameters and print out an XML representation for them. It could be something like: whenever inifile recognizes a new section, it calls a method named "newSection", which the samba parser overrides to recognize sections specific to Samba; or whenever inifile recognizes a new parameter, it calls a method named "newParameter", which the samba parser overrides to recognize known Samba parameters and their data types.

It's an idea.

Jason
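The override idea above maps naturally onto subclassing. Here is a rough Python rendering: the hook names newSection/newParameter come from the email, while the Samba knowledge encoded in the subclass (e.g. that "read only" is a boolean) is an invented example:

```python
# Sketch of the "samba parser extends inifile parser" idea: the generic
# parser owns all syntax recognition and calls overridable hooks; the
# subclass adds application knowledge only. Samba details are invented.
# Closing tags are omitted for brevity.
class IniFileParser:
    def parse(self, lines):
        out = []
        for line in lines:
            line = line.strip()
            if line.startswith("[") and line.endswith("]"):
                out.append(self.newSection(line[1:-1]))
            elif "=" in line:
                key, _, value = line.partition("=")
                out.append(self.newParameter(key.strip(), value.strip()))
        return out

    # Default hooks: emit generic tags, knowing nothing about the app.
    def newSection(self, name):
        return f'<section name="{name}">'

    def newParameter(self, key, value):
        return f'<property name="{key}">{value}</property>'

class SambaParser(IniFileParser):
    BOOLEANS = {"read only", "guest ok"}   # invented subset

    def newSection(self, name):
        tag = "sambaglobal" if name == "global" else "sambashare"
        return f'<{tag} name="{name}">'

    def newParameter(self, key, value):
        if key in self.BOOLEANS:
            return f'<boolean name="{key}">{value}</boolean>'
        return super().newParameter(key, value)

print("\n".join(SambaParser().parse(["[global]", "read only = yes"])))
```

The subclass never touches the INI syntax at all, which is exactly the division of labor the email proposes.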
From: Justin Y. <ju...@sk...> - 2002-11-10 19:07:39
On Thu, 2002-11-07 at 20:32, Eduardo L. Arana wrote:
> Hi, I'd like to know if you are looking for a Spanish translator for your project; if so, let me know.

Certainly, we would be very interested in any help you could provide. Currently, as our system is mostly a prototype, we have little along the lines of a user manual, and I would not consider much of our documentation to be in a stable state. We are planning to start using gettext for our C programs shortly, which will make adding translations for them possible. We should also make things written in other languages (Perl & PHP) support translation as well. I would say that having several translations is certainly something we would like to accomplish by the time any usable code is released.

Are you interested in providing a translation of our actual program via gettext, or of our documentation, or both? Or perhaps you are interested in helping to make our current code more internationalization-friendly?

Thanks for your interest in our project.

Justin
--
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com
From: Jason L. <jl...@me...> - 2002-11-09 19:56:41
|
Justin:

Perhaps a possible next step is a simple command-line program that outputs an XML representation of a particular application's configuration. This would be the hypothetical "cfg --fetchraw" command you mentioned in your last email. This program would be knowledgeable about what parser command it needs to run for each different type of application, and where to look for the configuration file for that application. So... the command:

  cfg-fetchxml samba

would look up the definition file for samba, find that it should run "parse.inifile.pl", and find that the default config file location for the proper distribution is "/etc/samba/smb.conf".

I think I want to do something like this instead of specifying "path=/Daemons/Samba" because the path may depend on where the user wants to put it and may be localized. Scripts that use CFG should be able to identify a type of configuration (e.g. "samba") without knowing where it is located in the tree, how it is translated, etc.

I'm not suggesting that the configuration tree, with "Daemons" and "Applications", should go away. Rather, it should be limited to a method of organizing all the configuration on the system, which will be translated to the user's language and may be customized by a distribution. Our client front-ends will be set to load the entire configuration tree by default, but may be configured to load just one application's configuration with, for instance, a command-line argument. E.g.:

  gnome-cfg samba
    will load just Samba's configuration

  gnome-cfg samba configfile=/etc/samba/smb.conf
    could specify to load Samba's configuration using a different
    configuration file

Well, these are some ideas... I guess I'll get started implementing a "fetch XML" command and see what issues I come across.

Jason |
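[Editor's note] The lookup-and-dispatch idea sketched above could be prototyped as follows. The registry entries, parser names, and file paths are invented for illustration; in CFG they would come from per-application definition files, possibly customized per distribution:

```python
import subprocess

# Hypothetical registry: application name -> parser command and default
# config-file location. In CFG this information would be read from the
# per-app definition files rather than hard-coded.
DEFINITIONS = {
    'samba':  {'parser': 'parse.inifile.pl',    'config': '/etc/samba/smb.conf'},
    'resolv': {'parser': 'parse.resolvfile.pl', 'config': '/etc/resolv.conf'},
}

def lookup(app, config_file=None):
    """Resolve an application name to (parser_command, config_path),
    honoring an explicit configfile= override if given."""
    if app not in DEFINITIONS:
        raise KeyError('unknown application: %s' % app)
    d = DEFINITIONS[app]
    return d['parser'], config_file or d['config']

def fetch_xml(app, config_file=None):
    """Run the registered parser with the config file on its stdin and
    return the XML it prints on stdout (the stdin/stdout convention
    discussed later in this thread)."""
    parser, path = lookup(app, config_file)
    with open(path) as f:
        result = subprocess.run([parser], stdin=f, capture_output=True,
                                text=True, check=True)
    return result.stdout
```

The point of the `lookup` step is exactly the one made in the email: callers name a *type* of configuration ("samba"), and the definition file, not the caller, knows where that configuration lives and how to parse it.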
From: Justin Y. <ju...@di...> - 2002-11-08 00:22:22
|
On Thu, 2002-11-07 at 17:46, Jason Long wrote:
<snip>
> Which makes me think of how remote administration is going to happen. I
> think of two alternatives now; I'm sure there are others:
>
> 1. Config4GNU is installed on one computer, the computer that manages
> configuration for all other computers. It is configured to use remote shell
> or (preferably) secure shell to read/write remote configuration files and
> run commands on these remote computers.
>
> Pros:
> - only one system has to meet Config4GNU dependencies
> - ssh/rsh are pretty universally available for all sorts of unixes
> - simple to upgrade Config4GNU
>
> 2. A Config4GNU daemon is installed on all computers that can be configured,
> and the client talks to the daemon on the computer that you want to
> configure.
>
> Pros:
> - configuration is not dependent on a certain host being up

If the host isn't up, there are bigger problems than configuration. Perhaps CFG could be told to check a remote CFG server every X hours for possible updates. This would be similar to the commercial pcr-dist program, which checks upon boot whether changes to the computer should be made and makes them as necessary.

> - Config4GNU installation is limited to files/parsers applicable to a
> particular distribution of Linux/Unix

There's a third possibility which could be used in place of #1 if CFG is on both machines; otherwise #1 could be the fall-back method, although there should be a warning before #1 is fallen back on, because things could easily break:

- A user runs CFG on their machine and wants to edit a remote machine's configuration
- They enter the username/password of an account w/ appropriate permissions on the remote machine (root, for example)
- The local CFG connects to the remote machine via SSH
- The local CFG attempts to read the remote machine's config by running the appropriate parser(s) on the appropriate file(s) (which would be determined by XML data on the remote machine). Even better, once a command line version of CFG is finished, there should be a "just output the raw XML" option so that the local CFG can tell the remote CFG "Send the XML config for Samba" and get the XML on stdout.
- XML is read back via SSH/SCP (either way, doesn't matter much)
- Local CFG is used to modify data
- Local CFG reconnects or reuses the connection to the remote computer
- Local CFG sends modified XML to the parser on the remote machine, which writes the config file, or sends modified XML to the command line CFG on the remote machine, which handles the details.

If the remote machine doesn't have CFG, then although dangerous, it could fall back to your #1 using nearly the same process but telling ssh to execute a slightly different command.

Example of the commands the local CFG uses to connect *if* CFG is installed on the remote machine, which I'd argue is strongly recommended:

To get XML:
  ssh us...@my... 'cfg --fetchraw --path=/Daemons/Samba' > tmpfile
To save XML:
  ssh us...@my... 'cfg --saveraw --path=/Daemons/Samba' < tmpfile

Admittedly, a CFG server/daemon would be cool, but would also be a lot of work and added complexity (I think), and like all other remote configuration programs, is a potential security issue. To get remote config working to a point where it is useful, I *think* what I described would be sufficient, at least at first. I could see how a more robust remote administration "module" could be combined with an authentication module, since they're very similar, and could be a later optional component.

Well, I guess what I'm saying is that the command line client *should* have an option to read/write raw XML config data anyway, as that would be a very slick and powerful option. If such an option existed, then 99% of remote administration could be done w/ a simple bash script that hands some XML to CFG on a set of remote machines. It would also be cool to be able to do small changes via something like:

  for host in "host1.com" "host2.com" "host3.com"
  do
    echo 'cd /Daemons/Samba/; set "wins server" = "10.13.1.1"; commit;' \
      | ssh root@$host 'cfg'
  done

It seems like the sticky point is how to make it easy to ensure that the configuration changes are successful. Maybe a better bash script would be:

  for host in "host1.com" "host2.com" "host3.com"
  do
    echo 'cd /Daemons/Samba/; set "wins server" = "10.13.1.1"; commit;' \
      | cfg -H $host
  done

where here, CFG takes responsibility for running SSH to connect, and handles any error checking. Perhaps it could even say "Oops, 1 of the servers is down, do you want me to retry in the background and send you an email when I'm successful, or give up and send an error email after 24 hours?"

Justin |
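[Editor's note] The `cfg -H host` wrapper proposed above essentially builds and runs the same SSH invocations programmatically. A minimal sketch, assuming the hypothetical `--fetchraw`/`--saveraw` flags from this thread (they were a proposal, not an existing cfg feature):

```python
import subprocess

def remote_cfg_command(user, host, action, path):
    """Build the argv for fetching or saving raw XML config over SSH.

    `action` is 'fetch' or 'save'. The --fetchraw/--saveraw flags are
    the hypothetical options proposed in this thread.
    """
    flag = {'fetch': '--fetchraw', 'save': '--saveraw'}[action]
    return ['ssh', '%s@%s' % (user, host), 'cfg', flag, '--path=%s' % path]

def fetch_remote_xml(user, host, path):
    """Run the fetch command and return the XML the remote cfg prints
    on stdout. Raises CalledProcessError if the remote command fails,
    which is where the 'retry and email me' error handling would hook in."""
    argv = remote_cfg_command(user, host, 'fetch', path)
    return subprocess.run(argv, capture_output=True, text=True,
                          check=True).stdout
```

A wrapper like this is where the error checking described at the end of the email would live: it sees each host's exit status directly instead of burying it inside a shell pipeline.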
From: Jason L. <jl...@me...> - 2002-11-07 22:46:17
|
>>> Justin Yackoski <ju...@sk...> 11/07/02 17:26 PM >>>
> You mean that the parser would include in the outputted XML a node which
> the middle layer checks for and if present, calls the parser again for
> each external file? That would work, or maybe you meant something else?

That's what I meant.

>> - parsers can't run external commands to implement changes
>
> Unless they output a shell script to accomplish the changes, and the
> middle layer can detect this and run the script?

That's an interesting idea. Some way of differentiating between types of parsers would be in order. It may be easier to just specify that certain parsers need to perform complex operations and therefore they will read/write the actual configuration file themselves.

Which makes me think of how remote administration is going to happen. I think of two alternatives now; I'm sure there are others:

1. Config4GNU is installed on one computer, the computer that manages configuration for all other computers. It is configured to use remote shell or (preferably) secure shell to read/write remote configuration files and run commands on these remote computers.

Pros:
- only one system has to meet Config4GNU dependencies
- ssh/rsh are pretty universally available for all sorts of unixes
- simple to upgrade Config4GNU

2. A Config4GNU daemon is installed on all computers that can be configured, and the client talks to the daemon on the computer that you want to configure.

Pros:
- configuration is not dependent on a certain host being up
- Config4GNU installation is limited to files/parsers applicable to a particular distribution of Linux/Unix

My $0.02

Jason Long |
From: Justin Y. <ju...@sk...> - 2002-11-07 22:25:30
|
On Thu, 2002-11-07 at 17:03, Jason Long wrote:
> How do you feel about making a change to the "parsers" & "unparsers" so that
> they always read their input on STDIN and output on STDOUT?

Sounds like a good idea to me; they do half of this already. I will make it happen when I redo all the parsers by this weekend. If there are any changes (I'm already changing parameter to property) that I should make to the XML, let me know.

> Advantages:
> - remote administration... the caller of the parser can read the file from
> a remote computer and pipe it to the parser. Parsers do not need to know
> how to read remote files
>
> Disadvantages:
> - parsers cannot work with multiple files
>   possible workaround: for configuration files with "includes", output it
>   as an "external" node

You mean that the parser would include in the outputted XML a node which the middle layer checks for and, if present, calls the parser again for each external file? That would work, or maybe you meant something else?

> - parsers can't run external commands to implement changes

Unless they output a shell script to accomplish the changes, and the middle layer can detect this and run the script?

(This email address I'm sending from works, btw.)

Justin
-- 
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com |
From: Jason L. <jl...@me...> - 2002-11-07 22:04:02
|
How do you feel about making a change to the "parsers" & "unparsers" so that they always read their input on STDIN and output on STDOUT?

Advantages:
- remote administration... the caller of the parser can read the file from a remote computer and pipe it to the parser. Parsers do not need to know how to read remote files

Disadvantages:
- parsers cannot work with multiple files
  possible workaround: for configuration files with "includes", output it as an "external" node
- parsers can't run external commands to implement changes

Jason |
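[Editor's note] A filter-style parser following the STDIN/STDOUT convention, including the "external" node workaround for includes, might be shaped like this sketch (the flat key=value syntax and the XML element names are illustrative; only the `external` idea comes from the proposal above):

```python
import sys
from xml.sax.saxutils import escape, quoteattr

def parse_stream(infile, outfile):
    """Read a flat key=value config file on infile, write XML on outfile.

    Lines like 'include /some/file' become <external> nodes, so the
    caller (which may have fetched the file from a remote computer and
    piped it in) decides how to fetch and parse the referenced file.
    """
    outfile.write('<config>\n')
    for raw in infile:
        line = raw.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        if line.startswith('include '):
            outfile.write('  <external file=%s/>\n'
                          % quoteattr(line[len('include '):].strip()))
        elif '=' in line:
            key, _, value = line.partition('=')
            outfile.write('  <property name=%s><value>%s</value></property>\n'
                          % (quoteattr(key.strip()), escape(value.strip())))
    outfile.write('</config>\n')

if __name__ == '__main__':
    # Pure filter: no file paths, no remote-access logic.
    parse_stream(sys.stdin, sys.stdout)
```

Because the parser never opens files itself, the same binary works locally (`parser < smb.conf`) and remotely (`ssh host cat /etc/smb.conf | parser`), which is the whole advantage being argued for here.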
From: Jason L. <jl...@me...> - 2002-11-05 04:47:05
|
I just checked in a bunch of changes, so I wanted to let you know about them. Here are the two main changes...

1. Class definition files can now specify what node types to make children. E.g., the "Samba" class definition file specifies to make children "SambaGlobal" or "SambaShare", overriding what the parser outputs.

2. When you double-click on a SambaGlobal or SambaShare node in the GUI, a new tab will appear with a form. The form will load with current values, and when you change the values in the form the corresponding properties will be updated.

--

If you already have a copy of the repository, you may need to `cvs checkout' again to get any new directories that have been created since the last checkout.

  ./autogen.sh --prefix=/tmp/cfg
  make install
  /tmp/cfg/bin/gnome-cfg

--

Now for the "more" part of this message. I've been thinking about making the GUI more like a generic XML file editor; then, with the XML editor and the parsers which convert to-and-from XML, we'd have a pretty nice system. Now this XML editor is not like the common XML editors you see when you search for "XML Editor" on the web -- it is not for actually editing XML syntax. Instead it will provide an object-oriented view of the data, and it would be flexible enough that the user would not need to hand-edit any XML.

An illustration: a contrast between a traditional XML editor and the XML editor I am proposing. In a traditional XML editor you see:

  <book-list>
    <book>
      <title>Config4GNU for Dummies</title>
      <author>J. Long and J. Yackoski</author>
      <year>2002</year>
    </book>
  </book-list>

It might have syntax highlighting, auto-completion of elements, and validation, but you're really manipulating the raw XML syntax. In this "enhanced" XML editor, you'd see a list of books on the left side of the screen and a form for entering the attributes of the book on the right. Really more like a database entry screen, maybe. Anyways, it's just a thought...

I'm thinking that a powerful and generic XML editor like this would be pretty useful in general, and this would make our work useful beyond the realm of our "little" ;) project.

Jason |
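[Editor's note] The object-oriented "database entry screen" view described above can be prototyped in a few lines: parse the XML once and present each repeated element as a record of fields. This uses the book-list example from this very message; the record layout is illustrative:

```python
import xml.etree.ElementTree as ET

SAMPLE = """\
<book-list>
  <book>
    <title>Config4GNU for Dummies</title>
    <author>J. Long and J. Yackoski</author>
    <year>2002</year>
  </book>
</book-list>"""

def records(xml_text):
    """Turn each repeated child element into a dict of field -> value,
    i.e. the form-per-record view rather than raw XML syntax.

    A GUI would bind each dict to a form on the right-hand pane and list
    the records (books, Samba shares, ...) on the left."""
    root = ET.fromstring(xml_text)
    return [{field.tag: field.text for field in item} for item in root]
```

Running `records(SAMPLE)` yields one dict per `<book>`, which is exactly the shape a list-plus-form editor needs; the user never sees angle brackets.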
From: Jason L. <jl...@me...> - 2002-11-04 15:54:52
|
Justin,

I've been contemplating some definition changes. Let me know what you think.

Node - instead of referring to something that contains properties and other nodes, Node will be the abstract base type for everything, including properties and values.

Container - this is what "Node" used to be. It contains other Containers, Properties, and Data.

Property - extends "Node"; an association between a name and a value.

Datum - extends "Node"; a single piece of data.

Document - extends "Container"; a candidate for "root node". It reads in an XML document by running a parser and saves changes by running an "unparser".

--

Which brings me also to... levels of validation. We can have validation on many levels:

1. Validation on the Data level. Makes sure data entered is of the right type. E.g., a person cannot enter "Yes" when a number is required.

2. Validation on the Property level. Makes sure the property is valid. E.g., a person cannot enter multiple values for a single-value property.

3. Validation on the Container level. Makes sure required properties are present and have the right values.

4. Validation on the Document level. Makes sure required containers are present and checks for dependencies between containers.

5. External validation. Runs an external program to check the validity of the final configuration file.

A particular application would rarely want to implement all five of these levels, but these different levels allow a lot of flexibility.

Jason |
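[Editor's note] The first three validation levels above can be sketched as small independent checks; the naming follows the proposed definitions (Datum, Property, Container), but the function signatures and error type are invented for illustration:

```python
class ValidationError(Exception):
    """Raised when a value, property, or container fails validation."""

def validate_datum(value, expected_type):
    """Level 1: the entered data must be of the right type,
    e.g. reject "Yes" where a number is required."""
    if expected_type is int and not str(value).lstrip('-').isdigit():
        raise ValidationError('%r is not a number' % (value,))
    return True

def validate_property(name, values, multi_valued=False):
    """Level 2: a single-value property may not hold multiple values."""
    if len(values) > 1 and not multi_valued:
        raise ValidationError('property %r takes only one value' % name)
    return True

def validate_container(properties, required):
    """Level 3: required properties must be present in the container.
    (Value checks per property would call back into levels 1 and 2.)"""
    missing = [r for r in required if r not in properties]
    if missing:
        raise ValidationError('missing required properties: %s'
                              % ', '.join(missing))
    return True
```

Levels 4 and 5 would follow the same pattern one step up: a Document check over its Containers, and an external check that simply runs the application's own verifier (in Samba's case, something like testparm) on the written file.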
From: Jason L. <jl...@me...> - 2002-11-02 22:56:27
|
I am now committing a change to the gnome client that implements a data store for the node tree that is displayed in the client. This new data store means that the client does not need to expand the whole configuration tree before displaying itself, resulting in a faster startup time. (Previously, the client used the default GtkTreeStore, which was manually filled with data from the tree.)

Note however that there are a few quirks. One of the functions I had to implement was "get_parent". However, our Node API does not have a get_parent function, so I had to write a temporary replacement get_parent function that searches the whole tree for the node in question and returns the parent. Because of this you may find that you click to expand one node and a whole bunch of parsers for other nodes get run. This quirk should go away whenever the Node API gets a "get_parent" function.

Jason |
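[Editor's note] The temporary get_parent workaround described above amounts to a full-tree search. A sketch of the idea (the real client is C against the CFG Node API; the Node class here is a minimal invented stand-in):

```python
class Node:
    """Minimal stand-in for a CFG tree node (illustrative only)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def get_parent(root, target):
    """Search the whole tree under `root` for `target`; return its parent.

    O(n) per call, and in the real client visiting a node can force its
    parser to run -- which is exactly why expanding one node could run a
    whole bunch of parsers, and why a native get_parent accessor in the
    Node API makes this quirk go away."""
    for child in root.children:
        if child is target:
            return root
        found = get_parent(child, target)
        if found is not None:
            return found
    return None
```

The fix is structural rather than algorithmic: once each node stores a back-pointer to its parent, get_parent becomes a constant-time field read and no subtree ever needs to be expanded just to answer the question.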
From: Justin Y. <ju...@sk...> - 2002-11-02 19:20:24
|
I've added the news item w/ the information listed in the last email and made our main page dynamically display the last 10 news items.

It seems like with the recent documentation we're more ready for other developers to help with the sections of CFG that are written in C. Are there any specific tasks that we could perhaps delegate to interested developers? I am planning on working on adding more support in the PHP extension (although I'd be happy to let someone else work on that if they're interested), and will try my hand at generalizing one of our simpler parsers to see if I can come up with a more general parser architecture.

Justin
-- 
SkiingYAC.com Custom Solutions
su...@sk...
http://www.SkiingYAC.com |
From: Jason L. <jl...@me...> - 2002-11-02 05:17:30
|
It's been a while since we posted a news item on SourceForge or on the web page. Justin, you said you would do this, so here are a few things I can think of that might be worth including:

1. Additions to the website:
- formal system requirements specification (SRS)
- screenshot of a "wizard"
- API reference
- XML specification (version 1)
- libconf coordination notes

2. Changes/additions to the software:
- actual communication between parsers and front-end
- gnome front-end uses libglade for user interfaces (thanks to Mark Stratman)
- allow setting, removing, and adding properties
- ability to write back a configuration file
- PHP extension to allow access to CFG from PHP programs
- additional "parsers": resolvfile, flatfile (thanks to Greg Stoll)

3. Miscellaneous:
- change acronym from "c4g" to "cfg" as it's easier to type
- added Greg Stoll as developer

Jason |