cobolforgcc-devel Mailing List for Cobol for GCC (Page 4)
From: Keisuke N. <kni...@ne...> - 2002-06-19 03:30:03

Hi,

I have registered the COBOL Utilities Project on SourceForge:

  http://sourceforge.net/projects/cobol/

The administrators are as follows:

  bgiroud     Giroud, Bernard
  dessex      David Essex
  knishida    Keisuke Nishida
  rpragana    Rildo Pragana
  timjosling  Tim Josling

A mailing list is available at:

  https://lists.sourceforge.net/lists/listinfo/cobol-utils

Can we move there?

Keisuke

From: Keisuke N. <kni...@ne...> - 2002-06-18 03:26:24

At Mon, 17 Jun 2002 17:44:49 +0200 (MEST), bg...@fr... wrote:

> > I would like to suggest that a shell script be added so the test
> > suite can be configured for each COBOL project (ex: setup.sh OC or
> > TC or CG).
>
> And the location of the compiler (i.e. installed or in development).
> Unless you prefer to play with the path?!

I'll do it.

> > Also I would like to see the Makefile changed so only specific
> > modules can be tested (ex: make SM).

What about "cd SM && make"?  This is the way I do it now.  Changing the
current directory also makes it easier to open the sources.

> Another issue is to take care of "manual" runs like NC109M.

It can be run as follows:

  ./NC109M < NC109M.DAT

Note that you still need to check some of the results manually; that part
is not done automatically by my scripts yet.

Keisuke

From: David E. <de...@ar...> - 2002-06-17 18:45:19

At 11:41 AM 17/06/02 +0900, Keisuke Nishida wrote:
> ...
> I originally used the one written for TC, which only extracted
> `NC' tests at that time.  Later I hacked the test suite and
> wrote new scripts that extract all tests (i.e., CM, DB, IC, IF,
> IX, NC, OB, RL, RW, SG, SM, SQ, and ST).

Bernard Giroud did the original investigation, created the original
CONTROL-CARD-FILE and wrote the test report program in awk.  I wrote most
of the Perl scripts.

> But I have not understood the test suite completely, especially
> how to write CONTROL-CARD-FILE, and those scripts and the control
> file still need to be improved.

Unfortunately there is no information on how to use the CONTROL-CARD-FILE
in the NIST source, other than trial and error and what can be derived
from the source code.  So it definitely needs improvement.

There seem to be some differences between the TC and OC versions; perhaps
Bernard and Keisuke should compare notes on this.

At 05:44 PM 17/06/02 +0200, Bernard Giroud wrote:
> So, who registers the cobol-utils project?

While I would like to contribute to the project, I would not like to
manage it.

David Essex

From: <bg...@fr...> - 2002-06-17 15:44:52

In reply to David Essex <de...@ar...>:

> At 06:03 PM 15/06/02 +0900, Keisuke Nishida wrote:
> > I have somehow arranged my NIST-test supporting tools for TinyCOBOL:
> > ...
> > Would you like it?
>
> I ran some tests with the OC NIST test suite from OC-0.9.6 and the TC
> CVS version.
>
> It seems to work OK.
>
> Unfortunately there are some problems with the TC DISPLAY/ACCEPT syntax,
> so I could not complete the tests.
>
> I think the OC NIST test suite could be used for all COBOL projects.
>
> I would like to suggest that a shell script be added so the test suite
> can be configured for each COBOL project (ex: setup.sh OC or TC or CG).

And the location of the compiler (i.e. installed or in development).
Unless you prefer to play with the path?!

> Also I would like to see the Makefile changed so only specific modules
> can be tested (ex: make SM).

I would even recommend that we be able to test individual programs, or at
least be able not to run all programs before a given one; that's why I
started coding Makefiles instead of one big script.

Another issue is to take care of "manual" runs like NC109M.

> Otherwise I would like the NIST test suite to be the first entry in
> the COBOL utilities project.
>
> David Essex

Practically, I think we should first start the new project, donate what is
needed, and when that is OK, remove the corresponding tools from our
specific projects.

So, who registers the cobol-utils project?

Bernard

From: Stephen B. <st...@cf...> - 2002-06-17 01:33:06

On June 15, 2002 1:16 PM "David Essex" <de...@ar...> wrote:

<snip>

> Berkeley DB may be overkill, and ISAM may be a 1970's concept,
> but unfortunately there are no free or open source implementations
> of ISAM.
>
> This has been one of the killer problems in many open source
> COBOL projects.
>
> MySQL used a version of ISAM, but recently they converted to
> DB for the low level structures.
>
> So AFAIK, DB is the only choice available.
>
> The closest implementation of VS*M on UN*X is C-IS*M, which by
> the way is now owned by I*M.
>
> What I would like to see is an OpenISAM project with the following
> criteria.
> - XOpen ISAM standard conformity.
> - Low level concurrent file locking.
>
> If there are any volunteers who would like to work on this project,
> then the project is all yours.
>
> David Essex

MyISAM is still the standard filesystem used with MySQL.  I use MyISAM
extensively in my COBOL interface.  It is fast and multi-user.

I see only these difficulties with MyISAM:

1. The only documentation is what I wrote.  It is not complete.
2. It does not support row-level locking.  Instead, when rewriting, you
   provide a copy of the before buffer, and it checks that the record has
   not been changed since then.
3. I found a couple of bugs in my usage.  Either there are some bugs, or I
   was not using it properly - but see #1.
4. Monty Widenius is the only programmer who fully understands MyISAM.

I am seeking a reliable, affordable COBOL.  It is very encouraging to me
that these three COBOL compiler teams are cooperating.

Stephen, Sydney

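
Stephen's point 2 describes an optimistic-concurrency style of rewrite:
the caller supplies the record as it looked when it was read, and the
engine refuses the update if the stored record has changed in the
meantime.  A minimal C sketch of that check - hypothetical code
illustrating the pattern, not MyISAM's actual API - might look like this:

    #include <string.h>

    /* Hypothetical fixed record length; in a real engine this would be
       the on-disk row image for the key being rewritten. */
    #define RECLEN 80

    /* Rewrite `stored' with `new_rec', but only if `stored' still matches
       the before-image the caller read earlier.  Returns 0 on success,
       -1 if another writer changed the record in the meantime. */
    static int rewrite_checked(char *stored,
                               const char *before_image,
                               const char *new_rec)
    {
        if (memcmp(stored, before_image, RECLEN) != 0)
            return -1;          /* record changed since it was read */
        memcpy(stored, new_rec, RECLEN);
        return 0;
    }

A COBOL REWRITE could then map to reading the record, letting the program
modify its copy, and running such a check when the update is applied.
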
From: David A. C. <sup...@co...> - 2002-06-16 21:35:05

Tim Josling wrote:

> "David A. Cobb" wrote:
>
> > Isn't DB (Berkeley, IIRC) rather overkill? ...
> > David A. Cobb, Software Engineer, Public Access Advocate
>
> Possibly.  I knew I shouldn't have expressed a view.  In general I would
> be happy to use it as long as the surplus function does not get in the
> way.
>
> I work as a software architect.  We have this problem all the time.  A
> package has at the same time more and less function than we need.
>
> If we can
> - add the missing bits without too much angst AND
> - ignore the surplus (often this is not easy)
> then we will use it.
>
> The question is, can you do better by starting from scratch?  Others
> would have the basis for a more informed view than I would.

What'chu mean, "can *you*"?  Anyway, I will look at the problem.  Because
things like Berkeley (and other) DBs are open source, I imagine we can
extract a relevant part of the code as a starting point, at least.  No
reason these days for any of us to "start from scratch."

--
David A. Cobb, Software Engineer, Public Access Advocate
"By God's Grace I am a Christian man, by my actions a great sinner."
  -- The Way of a Pilgrim; R. M. French, tr.
Life is too short to tolerate crappy software.
The SuperBiskit is starring at R.A. 20h 57m 02.15s, Decl -06deg 28' 13.5"

From: David E. <de...@ar...> - 2002-06-15 18:40:18

At 06:03 PM 15/06/02 +0900, Keisuke Nishida wrote:
> I have somehow arranged my NIST-test supporting tools for TinyCOBOL:
> ...
> Would you like it?

I ran some tests with the OC NIST test suite from OC-0.9.6 and the TC CVS
version.

It seems to work OK.

Unfortunately there are some problems with the TC DISPLAY/ACCEPT syntax,
so I could not complete the tests.

I think the OC NIST test suite could be used for all COBOL projects.

I would like to suggest that a shell script be added so the test suite can
be configured for each COBOL project (ex: setup.sh OC or TC or CG).

Also I would like to see the Makefile changed so only specific modules can
be tested (ex: make SM).

Otherwise I would like the NIST test suite to be the first entry in the
COBOL utilities project.

David Essex

From: David E. <de...@ar...> - 2002-06-15 18:40:16

At 11:28 PM 14/06/02 -0400, Boris Kortiak wrote:
> Did anyone notice the Cobolic project that was open around 2002/04/26?
> The primary intent seems to be to create libraries for numeric
> processing.  I do not think there has been any activity, but it may be
> a useful place to discuss these topics.

Actually the project was registered in April 2001.  I have downloaded the
sources from CVS, but the configuration is not working.

There are several COBOL-related projects on SF.  Unfortunately none appear
active.  There is even one called Legacy Tcl (1), which uses Tcl and COBOL.

1) Legacy Tcl
   http://sourceforge.net/projects/legacytcl/

David Essex

From: Bernard G. <bg...@fr...> - 2002-06-15 14:26:59

Keisuke Nishida wrote:

> At Wed, 12 Jun 2002 15:40:17 -0400, David Essex wrote:
> >
> > Some areas where I would like to see more cooperation are the
> > following.
> >
> > - The NIST test suite
> > - SORT/MERGE utility
> >   (Some thing like CA-So*t, Syncs*rt, ...)
> > - Indexed/Relative file utility
> >   (ISAM, C-IS*M to the Xopen standard with file locks etc.)
> >
> > In my view the above are relatively independent projects which could
> > easily be shared between projects.
> >
> > For example both the SORT/MERGE and Indexed/Relative file utilities
> > could be used as a stand-alone program or as a library used by any of
> > the projects' run-time.
>
> Right.  Should we start a new project, like cobol-utils, which
> shares common utilities, libraries, and test suites?

Yes, I think that it should definitely be another project.

> > As for the embedded SQL pre-processor for COBOL, this is also a
> > separate project, a rather large project.
> >
> > The problem here is that each database has a different internal API.
> > So to create a generic SQL pre-processor is in my view not realistic.
> >
> > Yes, one could use ODBC, but there are some performance issues there.
>
> I do not think performance is a big issue at this moment.
> Having a generic interface that works is better than nothing.
> We can improve the interface when performance really becomes
> an issue.
>
> Many databases seem to support ODBC on Linux already.  What
> about having a preprocessor and a run-time library that
> support ODBC?

I agree that ODBC should be a prime issue: availability first, performance
next if needed.

> Keisuke

Bernard

From: Keisuke N. <kni...@ne...> - 2002-06-15 09:40:07

At Fri, 14 Jun 2002 14:54:02 -0400, David Essex wrote:
>
> > License?  LGPL would be good.
>
> The programs could be released under the GPL and libraries under the
> LGPL.

I agree.

> > Can we make the code
> >
> > A) front-end agnostic.
> >    No 'int display_number (struct tiny_cobol_number *n);' types of
> >    things
> > B) ANSI C?  It should be, in the same sense that DB or GMP is.
> > What coding standards would you propose, if any?

I prefer GNU's indent style, just because I am used to it.  I do not care
much about commenting, at least while the project is evolving as it is
now.  Writing readable code is more important than having comments, I
think.

Keisuke

From: Keisuke N. <kni...@ne...> - 2002-06-15 09:07:57

At Fri, 14 Jun 2002 23:28:29 -0400, vze3y7w8 wrote:
>
> Did anyone notice the Cobolic project that was open around 2002/04/26?
> The primary intent seems to be to create libraries for numeric
> processing.  I do not think there has been any activity, but it may be
> a useful place to discuss these topics.

Do you mean this project?

  http://sourceforge.net/projects/cobolic/

Yes, it is a good idea to have a common place as soon as possible.

Keisuke

From: Keisuke N. <kni...@ne...> - 2002-06-15 09:02:05

Hi,

I have somehow arranged my NIST-test supporting tools for TinyCOBOL:

  http://www.nurs.or.jp/~knishida/cobol-testsuite.tar.gz

See the README in the subdirectory `cobol85' for usage.  If you run the
test, test reports will be stored in report.txt for each test module.

Would you like it?

At Sat, 15 Jun 2002 07:52:25 +1000, Tim Josling wrote:
>
> > The first project will probably be the NIST test suite.  Currently
> > there are two versions, one for TC and one for OC.  Do you have any
> > comments or suggestions on this?
>
> I don't know enough to comment.  I thought the NIST was supposed to be
> self extracting, so there should not be an issue.  Just shows how much
> I don't know about this, I guess.

Their test archive is not self-extracting by itself.  We need some
supporting scripts that help with extraction, as well as scripts that
summarize the test results.

By the way, NIST also distributes an Embedded SQL Test Suite for various
languages, including COBOL:

  http://www.itl.nist.gov/div897/ctg/sql_form.htm

Did anyone try it?

Keisuke

From: Tim J. <te...@me...> - 2002-06-15 03:30:14

> What I would like to see is an OpenISAM project with the following
> criteria.
> - XOpen ISAM standard conformity.
> - Low level concurrent file locking.
>
> If there are any volunteers who would like to work on this project,
> then the project is all yours.
>
> David Essex

I agree.  This is a technically interesting and challenging project, but
it does not require people to learn gcc or assembler or other horrors.
Building a reliable, fast, multi-user indexed file system is non-trivial.

A good sort utility would be another good discrete project, the unix sort
utility not quite being up to it.

Tim Josling

From: vze3y7w8 <bor...@ve...> - 2002-06-15 03:29:08

PMFJI,

Did anyone notice the Cobolic project that was open around 2002/04/26?
The primary intent seems to be to create libraries for numeric processing.
I do not think there has been any activity, but it may be a useful place
to discuss these topics.

bo...@ko...
http://www.boriskortiak.com/

From: David E. <de...@ar...> - 2002-06-15 03:19:59

At 10:03 PM 14/06/02 -0400, David A. Cobb wrote:
> Tim Josling wrote:
>
> > David Essex wrote:
> > ...
> > > How is COBOL4GCC going to deal with indexed files?
> > > Will it also be using DB?
> >
> > Probably.  I will probably reuse whatever you guys are doing.
> > Maybe later on we might do a custom piece but I think DB is
> > good enough to start.
>
> Isn't DB (Berkeley, IIRC) rather overkill?  Some of their code, maybe,
> but there's a lot more in a DB than is needed for COBOL ISAM.  A COBOL
> indexed file has an unsurprising similarity to an IBM VS*M file system.
> Other than how it manages space allocation and lock management, there's
> nothing very sophisticated about VSAM; it's still a 1970's concept.  In
> contrast, even the simple xBase indexed structure "knows" more about
> the structure of the data.  In fact, the only thing required of an
> indexed file is that the key be the same sequential character positions
> in every record -- the records need not even be the same otherwise.
> That would drive a DB system bonkers.

Berkeley DB may be overkill, and ISAM may be a 1970's concept, but
unfortunately there are no free or open source implementations of ISAM.

This has been one of the killer problems in many open source COBOL
projects.

MySQL used a version of ISAM, but recently they converted to DB for the
low level structures.

So AFAIK, DB is the only choice available.

The closest implementation of VS*M on UN*X is C-IS*M, which by the way is
now owned by I*M.

What I would like to see is an OpenISAM project with the following
criteria.

- XOpen ISAM standard conformity.
- Low level concurrent file locking.

If there are any volunteers who would like to work on this project, then
the project is all yours.

David Essex

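
For readers who have not seen it, the X/Open ISAM (C-ISAM style) interface
David is referring to looks roughly like the sketch below.  This is a
hedged reconstruction from memory of the general call shape, not a
verified excerpt of the specification; the file name, record length and
key offsets are invented for illustration.

    /* Rough X/Open ISAM usage sketch; assumes <isam.h> from a
       conforming implementation.  Error handling is minimal. */
    #include <isam.h>
    #include <string.h>

    #define RECLEN 80                  /* fixed-length record          */

    int demo(void)
    {
        struct keydesc key;
        char rec[RECLEN];
        int fd;

        memset(&key, 0, sizeof key);
        key.k_flags = ISNODUPS;        /* primary key, no duplicates   */
        key.k_nparts = 1;
        key.k_part[0].kp_start = 0;    /* key = first 6 bytes of record */
        key.k_part[0].kp_leng = 6;
        key.k_part[0].kp_type = CHARTYPE;

        /* isbuild creates the file and returns an ISAM descriptor. */
        fd = isbuild("custfile", RECLEN, &key, ISINOUT + ISEXCLLOCK);
        if (fd < 0)
            return -1;

        memset(rec, ' ', RECLEN);
        memcpy(rec, "000001", 6);
        iswrite(fd, rec);              /* COBOL WRITE                  */

        memcpy(rec, "000001", 6);
        isread(fd, rec, ISEQUAL);      /* COBOL READ ... KEY IS        */
        isread(fd, rec, ISNEXT);       /* COBOL READ NEXT              */

        return isclose(fd);
    }

The point of the proposal is that an OpenISAM library exposing this kind
of interface, plus proper record locking, would plug directly into the
COBOL file-handling model.
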
From: Tim J. <te...@me...> - 2002-06-15 02:32:47

"David A. Cobb" wrote:
> Isn't DB (Berkeley, IIRC) rather overkill? ...
> David A. Cobb, Software Engineer, Public Access Advocate

Possibly.  I knew I shouldn't have expressed a view.  In general I would
be happy to use it as long as the surplus function does not get in the
way.

I work as a software architect.  We have this problem all the time.  A
package has at the same time more and less function than we need.

If we can
- add the missing bits without too much angst AND
- ignore the surplus (often this is not easy)
then we will use it.

The question is, can you do better by starting from scratch?  Others would
have the basis for a more informed view than I would.

This ignores other issues from the commercial world, such as vendor
viability, which are not such an issue with free software.  If Rildo goes
bust, Tiny COBOL will live.

Tim Josling

From: David A. C. <sup...@co...> - 2002-06-15 02:04:03

Tim Josling wrote:

> David Essex wrote:
>
> > Tim, just a few questions regarding COBOL4GCC.
> >
> > How is COBOL4GCC going to deal with indexed files?  Will it also be
> > using DB?
>
> Probably.  I will probably reuse whatever you guys are doing.  Maybe
> later on we might do a custom piece but I think DB is good enough to
> start.

Isn't DB (Berkeley, IIRC) rather overkill?  Some of their code, maybe, but
there's a lot more in a DB than is needed for COBOL ISAM.  A COBOL indexed
file has an unsurprising similarity to an IBM VSAM filesystem.  Other than
how it manages space allocation and lock management, there's nothing very
sophisticated about VSAM; it's still a 1970's concept.  In contrast, even
the simple xBase indexed structure "knows" more about the structure of the
data.  In fact, the only thing required of an indexed file is that the key
be the same sequential character positions in every record -- the records
need not even be the same otherwise.  That would drive a DB system
bonkers.

--
David A. Cobb, Software Engineer, Public Access Advocate
"By God's Grace I am a Christian man, by my actions a great sinner."
  -- The Way of a Pilgrim; R. M. French, tr.
Life is too short to tolerate crappy software.

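
Since Berkeley DB keeps coming up as the fallback for COBOL indexed files,
a rough picture of the mapping may help: the bytes named by the RECORD KEY
clause become the B-tree key, and the whole record becomes the data item.
The sketch below is written against the Berkeley DB 4.x style C API as I
recall it (the DB->open call takes an extra transaction argument in 4.x
that the 3.x API omits); the record layout and file name are invented and
error handling is abbreviated.

    #include <string.h>
    #include <db.h>

    #define RECLEN 80
    #define KEYOFF 0
    #define KEYLEN 6       /* hypothetical RECORD KEY: bytes 0-5 */

    /* Store one fixed-length record, keyed by the bytes the COBOL
       SELECT ... RECORD KEY clause would name. */
    static int put_record(DB *dbp, const char *rec)
    {
        DBT key, data;

        memset(&key, 0, sizeof key);
        memset(&data, 0, sizeof data);
        key.data = (void *)(rec + KEYOFF);
        key.size = KEYLEN;
        data.data = (void *)rec;
        data.size = RECLEN;
        return dbp->put(dbp, NULL, &key, &data, 0);
    }

    static int open_indexed_file(DB **dbpp)
    {
        DB *dbp;

        if (db_create(&dbp, NULL, 0) != 0)
            return -1;
        /* The B-tree access method gives keyed plus sequential
           (START / READ NEXT) access, which is what COBOL indexed
           organization needs. */
        if (dbp->open(dbp, NULL, "custfile.db", NULL,
                      DB_BTREE, DB_CREATE, 0644) != 0)
            return -1;
        *dbpp = dbp;
        return 0;
    }

David's "overkill" point stands: all of DB's transaction and logging
machinery comes along for the ride, but the key/data model itself is a
reasonable fit.
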
From: Tim J. <te...@me...> - 2002-06-14 22:01:33

David Essex wrote:

> Tim, just a few questions regarding COBOL4GCC.
>
> How is COBOL4GCC going to deal with large numbers, specifically
> intermediate variables which require 33 digits, according to the COBOL
> 2002 draft standard?

David Sadler is working on an implementation of Knuth's algorithms.  I
looked at GMP and had the same issue of size.  Also, the use of
dynamically allocated storage would be a pain to manage, especially in the
GCC environment.

> How is COBOL4GCC going to deal with indexed files?  Will it also be
> using DB?

Probably.  I will probably reuse whatever you guys are doing.  Maybe later
on we might do a custom piece but I think DB is good enough to start.

> The first project will probably be the NIST test suite.  Currently
> there are two versions, one for TC and one for OC.  Do you have any
> comments or suggestions on this?

I don't know enough to comment.  I thought the NIST was supposed to be
self extracting, so there should not be an issue.  Just shows how much I
don't know about this, I guess.

> ...
> David Essex

Regards,
Tim Josling

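
To make the 33-digit intermediate-result point concrete: the attraction of
Knuth-style classical algorithms over GMP here is that everything can live
in fixed-size buffers with no dynamic allocation.  The following is a
hypothetical sketch of that idea - one decimal digit per byte in a fixed
33-digit working register - and is not David Sadler's actual code.

    /* Hypothetical fixed-precision decimal add: digit[0] is the least
       significant digit, one decimal digit per byte, NDIGITS fixed at
       the 33 digits the COBOL 2002 draft requires for intermediate
       results.  No dynamic allocation anywhere, unlike a GMP mpz_t. */
    #define NDIGITS 33

    typedef struct {
        unsigned char digit[NDIGITS];  /* values 0..9, little-endian   */
        int negative;                  /* sign kept separately         */
    } cob_decimal;

    /* r = a + b for same-sign operands; returns nonzero on overflow
       (a size error, in COBOL terms). */
    static int dec_add(cob_decimal *r,
                       const cob_decimal *a, const cob_decimal *b)
    {
        int i, carry = 0;

        for (i = 0; i < NDIGITS; i++) {
            int d = a->digit[i] + b->digit[i] + carry;
            r->digit[i] = (unsigned char)(d % 10);
            carry = d / 10;
        }
        r->negative = a->negative;
        return carry;                  /* leftover carry = overflow    */
    }
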
From: David E. <de...@ar...> - 2002-06-14 19:00:25

At 08:30 PM 14/06/02 +1000, Tim Josling wrote:
> ...
> A few questions:
>
> License?  LGPL would be good.

The programs could be released under the GPL and libraries under the LGPL.

> Can we make the code
>
> A) front-end agnostic.
>    No 'int display_number (struct tiny_cobol_number *n);' types of
>    things
> B) ANSI C?
>
> What coding standards would you propose, if any?  I am used to gcc
> where they are very fussy about fullstops on comments and all.  Maybe
> that is over the top but I do find it very useful to have a comment at
> the top of a file and before a function saying what it is for.

I have no preference with coding standards, as long as you start with one
and stick to it.

The only criteria I have is that the sources should configure, compile and
run on all supported platforms, from old libc5 Linux to native Win32
(MinGW).

Tim, just a few questions regarding COBOL4GCC.

How is COBOL4GCC going to deal with large numbers, specifically
intermediate variables which require 33 digits, according to the COBOL
2002 draft standard?

How is COBOL4GCC going to deal with indexed files?  Will it also be using
DB?

The first project will probably be the NIST test suite.  Currently there
are two versions, one for TC and one for OC.  Do you have any comments or
suggestions on this?

Does anyone else have any comments and/or suggestions?

Rildo, what do you think of this proposal?

David Essex

From: Tim J. <te...@me...> - 2002-06-14 10:39:15

David Essex wrote:
>
> At 01:47 PM 13/06/02 +0900, Keisuke Nishida wrote:
> > At Wed, 12 Jun 2002 15:40:17 -0400, David Essex wrote:
> > ...
> > > In my view the above are relatively independent projects which
> > > could easily be shared between projects.
> > ...
> > Right.  Should we start a new project, like cobol-utils, which
> > shares common utilities, libraries, and test suites?
>
> Sounds OK to me.

Sounds great to me.  I was intending to steal your runtime later on
anyway, so this makes it formal.

A few questions:

License?  LGPL would be good.

Can we make the code

A) front-end agnostic.
   No 'int display_number (struct tiny_cobol_number *n);' types of things
B) ANSI C?

What coding standards would you propose, if any?  I am used to gcc where
they are very fussy about fullstops on comments and all.  Maybe that is
over the top but I do find it very useful to have a comment at the top of
a file and before a function saying what it is for.

I subscribe to tiny-cobol so no need to reply beyond there...

Tim Josling

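
Tim's "front-end agnostic" point is that shared library entry points
should not be phrased in terms of one compiler's internal types such as
struct tiny_cobol_number.  A hypothetical sketch of a neutral descriptor
that any of the front ends could fill in follows; none of these names come
from an actual project, they are invented purely to illustrate the idea.

    /* Hypothetical compiler-neutral description of a COBOL numeric item.
       Each front end (TinyCOBOL, OpenCOBOL, cobolforgcc) would build one
       of these from its own internal representation before calling the
       shared run-time. */
    #include <stddef.h>

    enum cob_usage { COB_DISPLAY, COB_PACKED, COB_BINARY };

    struct cob_field {
        enum cob_usage usage;   /* storage format of the item            */
        unsigned char *data;    /* pointer to the item's storage         */
        size_t size;            /* size of the storage in bytes          */
        int digits;             /* total number of decimal digits        */
        int scale;              /* digits right of the decimal point     */
        int has_sign;           /* nonzero if the item is signed         */
    };

    /* A shared runtime entry point phrased in terms of the neutral type,
       rather than 'int display_number(struct tiny_cobol_number *n);'. */
    int cob_display(const struct cob_field *f);
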
From: David E. <de...@ar...> - 2002-06-13 21:17:04

At 01:47 PM 13/06/02 +0900, Keisuke Nishida wrote:
> At Wed, 12 Jun 2002 15:40:17 -0400, David Essex wrote:
> ...
> > In my view the above are relatively independent projects which could
> > easily be shared between projects.
> ...
> Right.  Should we start a new project, like cobol-utils, which
> shares common utilities, libraries, and test suites?

Sounds OK to me.  Perhaps one could think of this as a group of projects
under the same banner.

If no one objects, I think the first utility should be the NIST test
suite.  And the second should be some kind of Indexed/Relative file
utility and library.

> > As for the embedded SQL pre-processor for COBOL, this is also a
> > separate project, a rather large project.
> >
> > The problem here is that each database has a different internal API.
> > So to create a generic SQL pre-processor is in my view not realistic.
> >
> > Yes, one could use ODBC, but there are some performance issues there.
>
> I do not think performance is a big issue at this moment.
> Having a generic interface that works is better than nothing.
> We can improve the interface when performance really becomes
> an issue.
>
> Many databases seem to support ODBC on Linux already.  What
> about having a preprocessor and a run-time library that
> support ODBC?

There are at least 2 ODBC projects distributed under the GPL.  So part of
the work is done.  But that still leaves the SQL parser/scanner, no small
project.

Perhaps it would be easier to adapt an existing open source C SQL
preprocessor to COBOL.  The only one I know of is PostgreSQL's, but it is
distributed under the Berkeley license.

David Essex

From: Keisuke N. <kni...@ne...> - 2002-06-13 04:46:26

At Wed, 12 Jun 2002 15:40:17 -0400, David Essex wrote:
>
> Some areas where I would like to see more cooperation are the following.
>
> - The NIST test suite
> - SORT/MERGE utility
>   (Some thing like CA-So*t, Syncs*rt, ...)
> - Indexed/Relative file utility
>   (ISAM, C-IS*M to the Xopen standard with file locks etc.)
>
> In my view the above are relatively independent projects which could
> easily be shared between projects.
>
> For example both the SORT/MERGE and Indexed/Relative file utilities
> could be used as a stand-alone program or as a library used by any of
> the projects' run-time.

Right.  Should we start a new project, like cobol-utils, which shares
common utilities, libraries, and test suites?

> As for the embedded SQL pre-processor for COBOL, this is also a separate
> project, a rather large project.
>
> The problem here is that each database has a different internal API.  So
> to create a generic SQL pre-processor is in my view not realistic.
>
> Yes, one could use ODBC, but there are some performance issues there.

I do not think performance is a big issue at this moment.  Having a
generic interface that works is better than nothing.  We can improve the
interface when performance really becomes an issue.

Many databases seem to support ODBC on Linux already.  What about having a
preprocessor and a run-time library that support ODBC?

Keisuke

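
To make the shape of such an ODBC-backed run-time concrete, the code a
preprocessor generates for a static statement could reduce to a handful of
standard ODBC calls.  In the sketch below the helper name and the example
statement are invented for illustration; only the ODBC functions
themselves are real, and this is a shape, not a proposed interface.

    #include <sql.h>
    #include <sqlext.h>

    /* Hypothetical run-time helper: execute one static EXEC SQL
       statement through ODBC.  A preprocessor would replace
           EXEC SQL DELETE FROM ORDERS WHERE STATUS = 'X' END-EXEC
       with a call to something like this. */
    static SQLRETURN cob_sql_exec(SQLHDBC dbc, const char *stmt_text)
    {
        SQLHSTMT stmt;
        SQLRETURN rc;

        rc = SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        if (!SQL_SUCCEEDED(rc))
            return rc;
        rc = SQLExecDirect(stmt, (SQLCHAR *)stmt_text, SQL_NTS);
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        return rc;      /* the caller would map this to SQLCODE/SQLSTATE */
    }

Host variables would add SQLBindParameter/SQLBindCol calls around this,
which is where most of the preprocessor's real work would go.
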
From: David E. <de...@ar...> - 2002-06-12 19:44:23

At 09:45 PM 12/06/02 +0900, Keisuke Nishida wrote:
> ...
> At the moment, OpenCOBOL lacks many facilities that TinyCOBOL
> has, including SORT/MERGE statements, intrinsic functions,
> SCREEN SECTION, embedded SQL, and GDB support.
> ...
> Still I think it would be great if we could share some
> ideas and resources (like the test suite), as we both have
> the same goal: implementing an open-source COBOL compiler.
> ...

Yes, I agree there should be more cooperation, not only between OpenCOBOL
and TC, but with cobol4gcc also.  Unfortunately this is easier said than
done.

Some areas where I would like to see more cooperation are the following.

- The NIST test suite
- SORT/MERGE utility
  (Some thing like CA-So*t, Syncs*rt, ...)
- Indexed/Relative file utility
  (ISAM, C-IS*M to the Xopen standard with file locks etc.)

In my view the above are relatively independent projects which could
easily be shared between projects.

For example, both the SORT/MERGE and Indexed/Relative file utilities could
be used as a stand-alone program or as a library used by any of the
projects' run-time.

As for the embedded SQL pre-processor for COBOL, this is also a separate
project, a rather large project.

The problem here is that each database has a different internal API.  So
to create a generic SQL pre-processor is in my view not realistic.

Yes, one could use ODBC, but there are some performance issues there.

Also, most commercial database vendors do include a SQL pre-processor with
the database.  As for open source databases, to my knowledge Firebird is
the only one which has a COBOL SQL pre-processor.  PostgreSQL has a C SQL
pre-processor, which could be used as a starting point to create one for
COBOL.  MySQL does not have any SQL pre-processors, so embedded SQL is not
supported, unless someone has made one as an add-on product.

Anyway, my 2 cents worth.

David Essex

From: Tim J. <te...@me...> - 2002-04-21 04:08:38

I have uploaded the perform verb code and tests.  Also, the compiler now
works with GCC 3.1.

Only call/functions remain to complete the compiler subset which will
allow writing the remaining runtime code in COBOL.

Tim Josling

From: Tim J. <te...@me...> - 2001-09-10 20:28:34

Rama,

Sorry for the delay in replying.  I am getting my house ready for selling
at auction at the moment.

In the discussion below, all the file names assume that they start with
"cobr_".  Thus temp.c is really cobr_temp.c.

The file cobr_sort_overview.txt describes the overall structure.  At the
moment only routine 4 - basic in-core sort and routine 6 - compare have
been completed.  The routine 7 - sort IO routines have been written but
not tested.

These sort IO routines are meant for handling IO to the sort work files if
the sort is too large to fit in memory.  These routines are not for doing
the IO to the actual input and output files specified by the programmer.

The way large sorts work is that they take the input a chunk at a time and
sort each chunk in memory.  If all the input fits in memory, then there is
only one chunk and we are done.  If it does not fit in memory, then the
chunks need to be written out to disk using routine 7 - IO, and then
merged.  The merge works by reading in several chunks (maybe up to 10) a
record at a time and merging them into one output chunk, which is written
out as the merge proceeds.  So each merge pass reduces the number of
chunks by a factor of, say, 10.  Eventually there is only one chunk and
you are done.

In COBOL the input to a sort can be a file or files, or an input
procedure.  Either way, the compiler will generate the code to read the
file and will pass the records one at a time to the sort/merge executive
(routine 2) using a routine called something like 'sort_put_record'.  Once
the sort is done, the sort/merge executive would hand the records back to
the compiler one at a time.  Presumably this would be done by the compiler
calling a routine called something like 'sort_get_record'.  The compiler
would either call the output procedure or write the records to a file,
depending on what the programmer asked for in the code.  For a merge, the
input files are assumed to be in order.

The compiler would have to pass information to the sort/merge executive to
specify things like the maximum memory to be used, where to put the work
files, the maximum size of the work files, and the details of the sort
fields and the collating sequence (see below).  This interface would be
similar to the interface to sort.c.

I hope this clarifies things; if not, please ask some more questions.  See
also below...

I have cc'd this to cobolforgcc-devel, to keep a record of this.  I hope
you don't mind.

Regards,
Tim Josling

"Linga, Rama Krishna (Rama)" wrote:
> Hi Tim.
>
> I could not understand a few things regarding this sort/merge.

Quite understandable.

> 1. What is the prime objective of this?  Is it to write equivalent code
>    for converting SORT/MERGE usages of COBOL into C?  Then what are all
>    these collating sequences, how many of them are primarily related to
>    this code, and in what way?

The main aim is, as you said, to support the sort/merge verbs of COBOL.
The collating sequences are used in the compare routine.  In COBOL you can
specify a collating sequence, which means characters are compared using
the collating sequence rather than using the binary values of the
characters.  See cobr_compare.[ch].  Effectively the characters are
converted using the lookup table (collating sequence) before being
compared.

> 2. And what are we sorting?  Data files / text files, and what are the
>    formats of these files?

The intention is to support both text files (delimited by \n) and non-text
files.  Non-text files can be either fixed length or variable length, with
a record control word at the start giving the length.  However, at the
moment none of the code to support the various file formats has been
written; just some of the core sort routines have been written.

> 3. And how do we use these formats for sorting?  Like, how do we know
>    about the field we are going to use for sorting?

The overview.txt file gives the suggested module structure.  Ted has
written 4 - basic in-core sort (sort.c) and 6 - compare function
(compare.c) and had started 7 - sort IO (sort_io.c), but I don't think 7
was complete.  The sort.c routine is passed the structure of the fields in
the sort_init call, in the parameter sort_fields.  I assume that the
compiler-generated code would pass similar information to the sort/merge
executive.  The compiler will implement routine (1).  This would pass the
details of the fields to the sort/merge executive (not yet written), which
would then call the sort/merge and IO routines.

> 4. When will the compiler-generated code use the run time interfaces of
>    sort and merge?

The compiler-generated code will call routine 3 (sort/merge executive).
The interface for this has not been specified.

> 5. When are the command levels used?

The command level (routine 2) would be a stand-alone utility, to be
written later on, using the sort/merge code.

> 6. What exactly is the status of this sort/merge?  I looked into
>    cobr_sort_readme.txt but that is so vague.  I could not get much out
>    of it.

If you look at the overview.txt, routines 4 and 6 have been done, and part
of 7 (as described above).  See also below.  I tend to think that sort.c
(4) and compare.c (6) can be kept, but maybe sort_io.c (7) could be
redone.  If I were doing this, I would probably do the merge routine next,
then the work file IO routines and buffer management routines (routine 7).
However, it is up to you whether you want to use all or part of Ted's
code.  It may be that you would find it easier to start again than to try
to dissect his code.

> 7. What about the merge and sort-merge routines?  The current stuff
>    appears to be just sort related.

No merge code has been written yet.

> Before starting to write the code, I would like to know these things.
>
> Regards.
> rama

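
To illustrate the collating-sequence point in Tim's answer to question 1:
the compare routine only needs to translate each byte through a 256-entry
lookup table before comparing.  The following is a small sketch of that
idea, not the actual cobr_compare.c code.

    #include <stddef.h>

    /* Compare two keys of equal length through a COBOL collating
       sequence.  `seq' is a 256-entry table mapping each character to
       its rank in the program-defined collating order; this mirrors the
       idea in cobr_compare.[ch] but is not taken from it. */
    static int collate_cmp(const unsigned char *a, const unsigned char *b,
                           size_t len, const unsigned char seq[256])
    {
        size_t i;

        for (i = 0; i < len; i++) {
            int d = (int)seq[a[i]] - (int)seq[b[i]];
            if (d != 0)
                return d;       /* first differing position decides */
        }
        return 0;               /* keys are equal under this sequence */
    }
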