Hi Brian, I'm
opening another thread to raise another consideration.
One of the strengths of the GnuCOBOL compiler architecture is that it translates COBOL into C, and the real compiler then used is the C compiler.
Consequently, it is said in several places that from GnuCOBOL it is possible to access the whole world of libraries written for C developers.
Elsewhere, however, it is noted that calling C library functions and exchanging data between GnuCOBOL and C involves problems, and that we have to be careful.
Could you write a post to clarify this topic?
Why can there be problems?
Sure. The discussion on gcv, Get C Value, is a good start: https://sourceforge.net/p/gnucobol/discussion/lounge/thread/5a08049f/

It comes down to specifications. C defines int, and many native types, with platform-dependent sizes. The definitions say only that char is at least 8 bits, short is at least as big as char, and int is at least as big as short. On a 32-bit computer int is (usually) 32 bits; on a 64-bit machine int usually stays 32 bits under the common LP64 and LLP64 ABIs, though the standard does not guarantee it. COBOL has fixed-size types throughout. BINARY-LONG is by definition 32 bits, signed by default. No more, no less.
How a C compiler lays out a structure is also platform dependent, and we have to be careful to choose COBOL data definitions that can absorb the potential deltas. There are other ways, but I eventually decided to use gcv and a one-time preprocessor pass on key COBOL data definitions, which gets the widths and the chip-efficient padding that the C ABI "requires" by asking what the local C compiler actually uses.
Ron developed a much saner layer in the C interface features added to the current GnuCOBOL 4 prereleases. The data is marshalled at runtime to keep COBOL and C in sync. See trunk/libcob/cobcapi.c in the SVN tree for details.
We can talk more about this. It's simple in theory and complicated in the details.
Have good, make well,
Blue