Thread: [Linkbat-devel] Data format, XML to DB conversion
From: James M. <lin...@ji...> - 2002-11-20 13:33:16
Hi All!

I think we have reached the point where we need to be talking about the actual data/table structure in CSV text or a database. For now let's just work on the database model, to make things simpler. However, we all need to keep in mind that I eventually want to have the system work with CSV text files.

Rama has volunteered to work on the code to convert the XML files into something the database can handle. Shanta is working on the "presentation layer", so obviously you two will be working closely together. When the code is done for the CSV->XML conversion, I would like Luu to start working on the code to validate the XML files. That is, making sure the right tags are there, tags match, etc.

Following are some thoughts and ideas about how the tables can be created. I have no particular affection for any of these ideas, so feel free to tell me they stink.

The first thing we need to decide for the data model is how to store the KUs themselves. Since different KU types have different attributes, I do not see a way of storing them all in a single table. Instead, I would see one table for each KU type and then a central table that contains the unique KU ID and the KU type. When making a cross reference from one KU to another, we would then go through this central table.

Next is the inter-relationship between the KUs, which is a core aspect of linkbat. There is a potential that each KU (regardless of type) can reference every other type of KU. However, I cannot really imagine an index for each relationship. For example, the Concept KU has an index listing all of the Concept KUs with pointers to all of the referenced Glossary KUs. Then another index with the MoreInfo KUs. In my mind that would mean too many tables. If there are five KU types and each has an index for the relationship to the others, then we would have 20 tables (5x4). With 6 KU types, 30 tables (6x5). Is my math right?

However, the question is whether 20 or 30 indexes is "too many". Does having separate indexes for each relationship provide any advantages? Quicker access? If so, does it compensate for the extra work to manage 20-30 tables?

Alternatively, I could imagine one index that keeps track of the type of KUs as well:

UNIQUE_KU_ID:REFERENCED_TYPE:REFERENCED_KU_ID

UNIQUE_KU_ID - The ID of the current KU
REFERENCED_TYPE - The type of KU that this KU is referencing
REFERENCED_KU_ID - The ID of the KU that is being referenced

A query could be done sorted by UNIQUE_KU_ID and then REFERENCED_TYPE. Once we have that, the code to present a list of referenced KUs grouped by KU type would be easy. Currently we are only dealing with a few thousand KUs. However, what about when we get to tens of thousands? Hopefully MySQL should be in a position to deal with that **few** records. So, I would not see it as a problem to have all of the KUs referenced in a single table like that.

Is the REFERENCED_TYPE even necessary here? We obviously need a table that contains the attributes of each KU and a unique ID, so the REFERENCED_KU_ID can be used to get the type through the central table that I mentioned above, and we have the KU type. On the other hand, to get a list of all referenced KUs in the correct order, we could have a query like this:

select REFERENCED_KU_ID from WHATEVER_TABLE where UNIQUE_KU_ID = 'CURRENT_KU_ID' order by REFERENCED_TYPE

This is a single table access and is obviously faster than multiple accesses. Therefore, I think this would more than make up for having an extra field in the index.
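To make that concrete, here is a minimal sketch of the central table plus the single cross-reference table in Perl/DBI. All table and column names are placeholders, not a settled schema:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=linkbat', 'user', 'password',
                           { RaiseError => 1 });

    # Central table: one row per KU, mapping its unique ID to its type.
    $dbh->do(q{
        CREATE TABLE ku_master (
            ku_id   INT NOT NULL PRIMARY KEY,
            ku_type VARCHAR(20) NOT NULL  -- 'Concept', 'Glossary', 'MoreInfo', ...
        )
    });

    # One cross-reference table for all KU types, as described above.
    $dbh->do(q{
        CREATE TABLE ku_reference (
            unique_ku_id     INT NOT NULL,          -- the referencing KU
            referenced_type  VARCHAR(20) NOT NULL,  -- type of the referenced KU
            referenced_ku_id INT NOT NULL,          -- the KU being referenced
            INDEX (unique_ku_id, referenced_type)
        )
    });

    # Everything one KU references, grouped by type: a single table access.
    my $current_ku_id = 42;   # whichever KU is being displayed
    my $refs = $dbh->selectall_arrayref(
        'SELECT referenced_ku_id FROM ku_reference
          WHERE unique_ku_id = ? ORDER BY referenced_type',
        undef, $current_ku_id);

The compound index on (unique_ku_id, referenced_type) is what makes the grouped listing a single cheap lookup, regardless of how many KU types there end up being.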
Since we can expect that new data files will have mixed KU types, I am thinking that a single index which also contains the KU type would be more efficient. Each time we read a new KU, it is inserted into that single index, so we don't have to deal with multiple indexes. Comments?

Although not KUs in the true sense, I think we need an index by topic and skill level (if we decide to implement that). This would be used in generating the technical FAQs or lists of all info on a particular topic.

What about the reverse references? For example, a list of all Concept KUs that reference a particular Glossary KU. This is not the same as the list of Glossary KUs that this Concept KU references. We click on a link that displays a Glossary KU, with a link to all the KUs that it references. I would also like links to all of the KUs that reference this glossary term. Although it seems logical that each Concept KU that references a particular Glossary KU should also be referenced by that Glossary KU, it is likely that we will forget the reverse references when creating the XML files. Think about the glossary terms on the content pages. Each time we add a new page, we have to update the referenced glossary KUs to point back to this page. (A sketch of such a reverse lookup follows below this message.)

I'm sure there is a lot more, but I should get this off to start the discussion going.

Regards,

jimmo
--
---------------------------------------
"Be more concerned with your character than with your reputation. Your character is what you really are while your reputation is merely what others think you are." -- John Wooden
---------------------------------------
Be sure to visit the Linux Tutorial: http://www.linux-tutorial.info
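The reverse lookup needs no extra tables with a single cross-reference table. Continuing the hypothetical ku_reference sketch above ($dbh is the same DBI handle):

    # All KUs (of any type) that point *at* a given Glossary KU.
    # An index on referenced_ku_id would keep this fast as the data grows.
    my $glossary_ku_id = 7;   # the glossary term being displayed
    my $back_refs = $dbh->selectall_arrayref(
        'SELECT unique_ku_id FROM ku_reference
          WHERE referenced_ku_id = ?',
        undef, $glossary_ku_id);

One possible answer to the forgotten-reverse-reference problem: if the reverse lists are always computed from this table at query time rather than stored in the XML, there is nothing to forget when a new content page is added.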
From: Shanta M. <sh...@fo...> - 2002-11-20 16:25:03
I have made limited comments in line.

Could everyone go to http://webcthelpdesk.com/cgi-bin/Linkbat/linkbat.cgi?site=Linkbat, click the logon link under the logo on the left and create an account. I will set the system so that the group "normal" will display all the links, until I get you all changed to Linkbat_admin. The script mails me when an account is created.

To bring everyone up to speed on this: this site is created in a web application delivery product known as eXtropia. The product takes care of sessions, authentication, DataSource (works transparently with CSV or SQL), views, name space, and much, much more. There are 15 or more years of development here, with the last five for banks. I have been developing applications in this product now for 4 or 5 years. At first it was very much a black box, but now it is a resource I would not be without.

Remember that this is not necessarily the end delivery engine. James has not at this time accepted this as the method of delivering the project, but rather as a demo of what eXtropia can do. This site is also a blend of Linkbat and my admin services. It has tools in it that are not available to any but my clients at this time: project tracking, todos, logs, FAQ, etc. They are in the extra Admin link box. All these tools' contents are in MySQL.

On Wed, 2002-11-20 at 07:02, James Mohr wrote:

> I think we have reached the point where we need to be talking about the
> actual data/table structure in CSV text or a database. For now let's just
> work on the database model, to make things simpler. However, we all need
> to keep in mind that I eventually want to have the system work with CSV
> text files.

I am a bit at a loss as to what is so special about CSV text files. I see them only as a poor and limited (hard to display code) form of data storage. When data is stored in SQL there is no need for them at all, except as an alternate storage system for a host that does not have access to SQL. That being said, I still use SQL queries to search the files for the records, and the contents of the records you want to display in the "View".

> Rama has volunteered to work on the code to convert the XML files into
> something the database can handle. Shanta is working on the "presentation
> layer", so obviously you two will be working closely together. When the
> code is done for the CSV->XML conversion, I would like Luu to start
> working on the code to validate the XML files. That is, making sure the
> right tags are there, tags match, etc.

I will have the CSV files in MySQL in a few hours. I have code to do this by simply importing the contents into the SQL table. It's PHP, but it works. From that point on, switching the code from one DataSource to another takes a simple change of one variable in the site setup file. Export the contents of the SQL table to a |-delimited file (though any delimiter can be used), and there you have switched. (A sketch of such an export follows below this message.)

Validation (field contents, required fields and much other checking) is already in eXtropia; one just has to tell it what you wish to check for. Don't know if this is what you are referring to with XML validation, as I know little of XML.

<data model thoughts snipped>

> However, the question is whether 20 or 30 indexes is "too many". Does
> having separate indexes for each relationship provide any advantages?
> Quicker access? If so, does it compensate for the extra work to manage
> 20-30 tables?

The SQL table takes care of that; that is another reason to store all data in SQL. You can assign indexes within each table.

<rest of quoted message snipped>

--
Shanta McBain <sh...@fo...>
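A sketch of the export Shanta describes: dumping a MySQL table to a |-delimited file with Perl/DBI. The table name is the one that appears later in this thread; the output file name is made up. For the one-variable switch to work, the column order here would have to match the -FIELD_NAMES list in the eXtropia setup:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=linkbat', 'user', 'password',
                           { RaiseError => 1 });

    # Pull every row and write one |-delimited record per line.
    my $sth = $dbh->prepare('SELECT * FROM linkbat_questions_tb');
    $sth->execute();

    open(OUT, '> linkbat_questions.dat') or die "cannot write: $!";
    while (my @row = $sth->fetchrow_array()) {
        print OUT join('|', map { defined $_ ? $_ : '' } @row), "\n";
    }
    close(OUT);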
From: James M. <lin...@ji...> - 2002-11-20 17:16:54
On Wednesday 20 November 2002 17:24, Shanta McBain wrote:

<extropia stuff snipped>

> I am a bit at a loss as to what is so special about CSV text files. I
> see them only as a poor and limited (hard to display code) form of data
> storage. When data is stored in SQL there is no need for them at all,
> except as an alternate storage system for a host that does not have
> access to SQL.

That's exactly it: "a host that does not have access to SQL", and that's really the only reason. A corollary is someone who does not have the skill or desire to set up an SQL database, but that is less important to me than someone who cannot set it up. However, it will be a while before it is a "product" that we can package and make available on SourceForge, etc. I just don't want to paint ourselves into a corner.

> I will have the CSV files in MySQL in a few hours. I have code to do this
> by simply importing the contents into the SQL table. It's PHP, but it
> works. From that point on, switching the code from one DataSource to
> another takes a simple change of one variable in the site setup file.
> Export the contents of the SQL table to a |-delimited file (though any
> delimiter can be used), and there you have switched.

Does that mean there would be little change in the eXtropia code to reflect a CSV data source as opposed to SQL?

> Validation (field contents, required fields and much other checking) is
> already in eXtropia; one just has to tell it what you wish to check for.
> Don't know if this is what you are referring to with XML validation, as I
> know little of XML.

I'm referring to the actual XML files before they are imported into the database. If someone forgets a closing tag or something else, we should know about it before we try to read it into the database. (A sketch of such a check follows below this message.)

<my data model thoughts snipped>

> The SQL table takes care of that; that is another reason to store all
> data in SQL. You can assign indexes within each table.

I'm aware of that. However, these are not quite the same indexes that an RDBMS like MySQL would generate automatically. It's not an index of a single table used to speed up access, but rather a table containing the relationship between the KUs. These would be something we would have to define and fill ourselves. I'm just curious about the performance issue of having 1 or 30 tables versus the administration of 30 tables. Is that really an issue?

Based on the discussion following this part, I think only a single table will be needed, since MySQL should be able to handle the amount of data we are dealing with.

Regards,

jimmo
--
---------------------------------------
"Be more concerned with your character than with your reputation. Your character is what you really are while your reputation is merely what others think you are." -- John Wooden
---------------------------------------
Be sure to visit the Linux Tutorial: http://www.linux-tutorial.info
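A sketch of the pre-import check jimmo describes, using the CPAN module XML::Parser, whose parsefile() dies with a line number at the first mismatched or unclosed tag. This checks well-formedness only; verifying that the *right* tags are present would additionally need a DTD and a validating parser:

    #!/usr/bin/perl -w
    use strict;
    use XML::Parser;

    # Usage: perl check_xml.pl ku_files/*.xml
    my $parser = new XML::Parser;
    foreach my $file (@ARGV) {
        eval { $parser->parsefile($file) };
        if ($@) {
            print "NOT well-formed: $file\n  $@";
        }
        else {
            print "OK: $file\n";
        }
    }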
From: Shanta M. <sh...@fo...> - 2002-11-20 17:34:20
On Wed, 2002-11-20 at 10:46, James Mohr wrote:

> That's exactly it: "a host that does not have access to SQL", and that's
> really the only reason. A corollary is someone who does not have the
> skill or desire to set up an SQL database, but that is less important to
> me than someone who cannot set it up. However, it will be a while before
> it is a "product" that we can package and make available on SourceForge,
> etc. I just don't want to paint ourselves into a corner.

That is why I like to store my data in SQL. It allows me to manipulate the data as I see fit. This includes exporting data to CSV files.

> Does that mean there would be little change in the eXtropia code to
> reflect a CSV data source as opposed to SQL?

In my setup, zero change except one variable in the Global site setup:

$DataSource = "file"; or $DataSource = "MySQL";

DataSources currently supported by eXtropia are file, MySQL, Oracle and a few others, but they are not important.

> I'm referring to the actual XML files before they are imported into the
> database. If someone forgets a closing tag or something else, we should
> know about it before we try to read it into the database.

OK, but that is why one develops on a separate server. It could be added to eXtropia, but XML in the form View would likely be easier.

<quoted data model discussion snipped>

> > The SQL table takes care of that; that is another reason to store all
> > data in SQL. You can assign indexes within each table.
>
> I'm aware of that. However, these are not quite the same indexes that an
> RDBMS like MySQL would generate automatically. It's not an index of a
> single table used to speed up access, but rather a table containing the
> relationship between the KUs. These would be something we would have to
> define and fill ourselves. I'm just curious about the performance issue
> of having 1 or 30 tables versus the administration of 30 tables. Is that
> really an issue?
>
> Based on the discussion following this part, I think only a single table
> will be needed, since MySQL should be able to handle the amount of data
> we are dealing with.

--
Shanta McBain <sh...@fo...>
From: James M. <lin...@ji...> - 2002-11-20 18:32:40
On Wednesday 20 November 2002 18:33, Shanta McBain wrote:

> That is why I like to store my data in SQL. It allows me to manipulate
> the data as I see fit. This includes exporting data to CSV files.

In principle I don't have a problem with that. However, it requires the intermediate step to be SQL. If you have already inserted it into an SQL DB, why bother exporting it to CSV? (Unless you want to move it to a server that has no SQL DB.) Although not necessarily efficient, I see a "friendlier" approach: first parse the XML into CSV and then import that into SQL. (A sketch of that first step follows below this message.)

> In my setup, zero change except one variable in the Global site setup:
>
> $DataSource = "file"; or $DataSource = "MySQL";

What about the code itself? If a single file = MySQL in terms of being a data source, I do not see how you could get around the problem of having to redo A LOT of code. If a file equaled a MySQL table, then I see a 1:1 relationship between fields in MySQL and fields in the file, so there would be very little code change necessary.

> OK, but that is why one develops on a separate server. It could be added
> to eXtropia, but XML in the form View would likely be easier.

What about performance? An XML file is at least 10 times larger. Plus we have the issues of the inter-relationships between the various KUs. To what extent can this be handled by eXtropia when directly reading the XML files?

Even if eXtropia could read the XML files directly and present the relationships correctly, I cannot see how that would **not** lock us into eXtropia as the delivery mechanism. If in a CSV text file or SQL DB, then we can do standard queries on the data.

I think I'll draw a diagram of what I was thinking about and post it.

Regards,

jimmo
--
---------------------------------------
"Be more concerned with your character than with your reputation. Your character is what you really are while your reputation is merely what others think you are." -- John Wooden
---------------------------------------
Be sure to visit the Linux Tutorial: http://www.linux-tutorial.info
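A sketch of the first step of that "friendlier" approach: parsing one KU file into a |-delimited record with the CPAN module XML::Simple. The <ku> structure shown is invented for illustration; the real tag names are still to be defined:

    #!/usr/bin/perl -w
    use strict;
    use XML::Simple;

    # Hypothetical input file:
    #   <ku id="42" type="Glossary">
    #     <term>inode</term>
    #     <definition>An on-disk structure holding file metadata.</definition>
    #   </ku>
    my $ku = XMLin($ARGV[0]);

    # One |-delimited line, usable as CSV text or fed to a MySQL import.
    print join('|', $ku->{id}, $ku->{type},
                    $ku->{term}, $ku->{definition}), "\n";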
From: Shanta M. <sh...@fo...> - 2002-11-20 19:05:14
On Wed, 2002-11-20 at 12:01, James Mohr wrote:

> In principle I don't have a problem with that. However, it requires the
> intermediate step to be SQL. If you have already inserted it into an SQL
> DB, why bother exporting it to CSV? (Unless you want to move it to a
> server that has no SQL DB.) Although not necessarily efficient, I see a
> "friendlier" approach: first parse the XML into CSV and then import that
> into SQL.

eXtropia puts things into the chosen DataSource. There is no need for the XML interface to do that.

> What about the code itself? If a single file = MySQL in terms of being a
> data source, I do not see how you could get around the problem of having
> to redo A LOT of code. If a file equaled a MySQL table, then I see a 1:1
> relationship between fields in MySQL and fields in the file, so there
> would be very little code change necessary.

Long ago I added the following code to my application files:

    my @BASIC_DATASOURCE_CONFIG_PARAMS;
    if ($site eq "file") {
        # Flat-file DataSource: one |-delimited file per application.
        @BASIC_DATASOURCE_CONFIG_PARAMS = (
            -TYPE                       => 'File',
            -FILE                       => "$APP_DATAFILES_DIRECTORY/$APP_NAME.dat",
            -FIELD_DELIMITER            => '|',
            -COMMENT_PREFIX             => '#',
            -CREATE_FILE_IF_NONE_EXISTS => 1,
            -FIELD_NAMES                => \@DATASOURCE_FIELD_NAMES,
            -KEY_FIELDS                 => ['record_id'],
            -FIELD_TYPES                => {
                record_id => 'Autoincrement',
                datetime  => [
                    -TYPE    => "Date",
                    -STORAGE => 'y-m-d H:M:S',
                    -DISPLAY => 'y-m-d H:M:S',
                ],
            },
        );
    }
    else {
        # SQL DataSource: the same fields, stored in a MySQL table via DBI.
        @BASIC_DATASOURCE_CONFIG_PARAMS = (
            -TYPE        => 'DBI',
            -DBI_DSN     => $DBI_DSN,
            -TABLE       => 'linkbat_questions_tb',
            -USERNAME    => $AUTH_MSQL_USER_NAME,
            -PASSWORD    => $MySQLPW,
            -FIELD_NAMES => \@DATASOURCE_FIELD_NAMES,
            -KEY_FIELDS  => ['username'],
            -FIELD_TYPES => {
                record_id => 'Autoincrement',
                datetime  => [
                    -TYPE    => "Date",
                    -STORAGE => 'y-m-d H:M:S',
                    -DISPLAY => 'y-m-d H:M:S',
                ],
            },
        );
    }

In the Global setup the variables are defined, as they are used by all apps in the site; only the table name needs to be changed when telling eXtropia how the data is stored: CSV file or SQL.

<validation discussion snipped>

> What about performance? An XML file is at least 10 times larger. Plus we
> have the issues of the inter-relationships between the various KUs. To
> what extent can this be handled by eXtropia when directly reading the XML
> files?

Not sure! As a Perl programmer I see XML as a more complex form of HTML. I would not choose to use XML to directly access the underlying data files, but that is largely because I have not taken the time yet to look into XML as a programming language. In this context I was thinking more of JavaScript's ability to do checking in the browser; I don't even know if XML can do that. In eXtropia, XML is placed in TTML files. This blends Perl into the "View" that gets exported into the browser.

> Even if eXtropia could read the XML files directly and present the
> relationships correctly, I cannot see how that would **not** lock us into
> eXtropia as the delivery mechanism. If in a CSV text file or SQL DB, then
> we can do standard queries on the data.

In eXtropia, the queries are done in ActionHandlers to separate the logic you are developing from the look and feel of the site. For the most part you just tell eXtropia what you want; it makes the query to the DataSource, so you don't have to write new code to switch from SQL to CSV. Here is an example:

    $cgi->param(
        -NAME  => 'raw_search',
        -VALUE => "project_code=='Requested'"
    );

There is a subroutine built into the system that forms an SQL query and sends it to MySQL, or parses the CSV file for the records that match your request. No rewriting of code to accommodate the change in DataSource. (A purely illustrative sketch of this idea follows below.) If you want to duplicate the mechanics of doing this, you will take years to get it right. I don't have any desire to do that unless the default does not do what I want; then I will hack the code, isolate it from the default code, and call it as I need it. Selena, Gunther, and Peter are far better programmers than I am. Gunther worked for Lincoln Stein. They all program for banks using this application.

--
Shanta McBain <sh...@fo...>
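To make that idea concrete, a purely illustrative sketch (not eXtropia's actual code) of what such a subroutine does: the same field=='value' expression is answered either by an SQL query or by grepping the |-delimited file, so the calling application never changes:

    # Purely illustrative -- not eXtropia's implementation.
    sub search_records {
        my ($expr, $source, $handle, $field_names) = @_;
        my ($field, $value) = $expr =~ /^(\w+)=='([^']*)'$/
            or die "unsupported expression: $expr";

        if ($source eq 'MySQL') {
            # $handle is a DBI database handle.
            return $handle->selectall_arrayref(
                "SELECT * FROM linkbat_questions_tb WHERE $field = ?",
                undef, $value);
        }
        else {
            # $handle is the path of a |-delimited data file.
            my %col;
            @col{@$field_names} = 0 .. $#$field_names;
            my @matches;
            open(IN, "< $handle") or die "cannot read $handle: $!";
            while (<IN>) {
                chomp;
                next if /^#/;                    # skip comment lines
                my @row = split /\|/, $_, -1;
                push @matches, [@row] if $row[$col{$field}] eq $value;
            }
            close(IN);
            return \@matches;
        }
    }

    # e.g. against the flat file:
    my $rows = search_records("project_code=='Requested'", 'file',
                              "$APP_DATAFILES_DIRECTORY/$APP_NAME.dat",
                              \@DATASOURCE_FIELD_NAMES);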