Thank you! It's working.
1) The installer tries to add the start-server program to the Windows startup directory, so it will try to start the Server (and fail) when Windows starts; you should probably remove it.
2) Go to the RecordEditor lib directory C:\Program Files (x86)\RecordEdit\HSQL\lib and check for runCobolEditor.jar - this jar should run the Cobol Editor.
3) If you have the option to run it using Java.exe, try running the program.
4) There will probably be a bat file runCobolEditor.bat - try running it.
5) If that...
Attaching screen shots.
Tried attaching screen shots. Not sure if it worked.
Error on startup - Requires Java 1.5
Sorry for the late reply. I thought I would get email notification about this post. Thank you so much for this. You are a legend!
I have uploaded a USB version of the RecordEditor: https://sourceforge.net/projects/record-editor/files/Test/Version_0.99.4/ I have not done much testing on it so far. I will upload updated jars when I have done more testing, but that could take a while (I need to set up a test environment - probably rebuild my old computer).
I will check whether the CodeGen jar can be upgraded in the RecordEditor (I suspect it can).
Hi Bruce, Firstly, let me express my gratitude at having found this project. Amazing work here. One general question: would you be able to upgrade Record Editor to use the latest version of the codegen? I am also a bit confused about all the version numbers. It would be extremely helpful if we could have more comprehensive maven support for everything. Thanks again. Regards, Dale
I will look at doing this. I am part way through some other changes, so I am not sure I will be able to implement it at the moment.
Hello Bruce, I am able to generate the xml from a cobol copybook, and the comparison is also working with batch. I would need your help in modifying the comparison results if possible. As of now, we see nothing if all records match. Is it possible to get a summary - the number of records compared, how many matched and how many did not? Also, a list of the unmatched records after this.
Thanks! That did it. When I follow these instructions I'm able to Export to CSV.
I presume you are using the Cobol Editor? The batch compare currently does not work with Cobol copybooks. I will try to update it to work with Cobol Copybooks / the Cobol Editor.
In the file \abcd\COMPARE_RESULT.xml I have the fields on which I want to run the comparison. Basically, I have two files that use a cobol copybook format. In the above .xml I have the fields I want to use for the comparison. I am not using a database for anything.
The Export functions are not available on the Generate screen, only in the File editor. I will look at changing that, but it will not be easy. Try entering the filename on the edit screen and changing to the Cobol tab. Hit the Edit button, then try the export. Also, the installer you used might not install properly to the Windows Program directory; I will do a separate post on this later.
What is in \abcd\COMPARE_RESULT.xml? It looks like the RecordEditor was not able to read the schema (it does not exist, or it could not connect to the Database). I will do some more investigation.
Could you please help me run the saved xml and html via batch. I am executing the command below:

~~~
java -jar "C:\Program Files (x86)\RecordEdit\HSQL\lib\run.jar" net.sf.RecordEditor.diff.BatchDiff -xml \abcd\COMPARE_RESULT.xml -htmlFile \abcd\RESULT.HTML
~~~

and getting the error below:

~~~
HSQL DB >>C:\Users\ABC\RecordEditor_HSQL\Database<<
HSQL DB >>C:\Users\ABC\RecordEditor_HSQL\Database<<
---> 1 : jdbc:hsqldb:file:C:\Users\ABC\RecordEditor_HSQL\Database/recordedit;readonly=no;
jdbc:hsqldb:file:C:\Users\ABC\RecordEditor_HSQL\Database/recordedit;readonly=true;...
~~~
Screenshot attached of what's displayed when I try to Export to CSV.
Java version:

~~~
java version "1.8.0_411"
Java(TM) SE Runtime Environment (build 1.8.0_411-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.411-b09, mixed mode)
~~~

OS: Windows 10 Pro (22H2). I'm using: RecordEditor (0.99.3)... I don't know what you mean by Windows/Generic or USB, but I installed via this link: https://sourceforge.net/projects/record-editor/files/latest/download I'll have to get some screenshots and report back.
I am not sure what is going on, so can you tell me:
* Which Java version are you using?
* What operating system?
* Can you upload an image of what is displayed when you try to export?
* Which version of the RecordEditor - Windows-specific, Generic or USB?

Once I know the operating system / version, I will provide instructions for running the RecordEditor in a Terminal. This might display error messages.
The preview window was definitely in focus, and no matter what I tried it stayed greyed out. I tried:
* Clicking the tab (name of the copybook).
* Selecting all cells.
* Selecting 1 cell.
* Selecting outside the table, with the same results.

However, this is all moot, since I actually wanted to create a shell script for my project and the bottom link you provided was exactly what I needed. Thanks Bruce (you're a gem).
The export option works on the file being displayed in the screen in focus; if no file is being displayed, or the screen has lost focus, the options are disabled. So:
* Display a file; the options should be active. If not -
* Make sure a screen displaying the file has focus (i.e. click on it).

Also, if you have a Cobol Copybook, you can create shell scripts for the CobolToCsv project (https://sourceforge.net/projects/coboltocsv/) using the Generate option, see https://sourceforge.net/p/jrecord/wiki/Generate%20CobolToCsv%20Script...
Hello, I just stumbled across this wonderful project, but I'm having trouble exporting COBOL copybooks to CSV data because the "Export as CSV file" menu option is greyed out. In fact, every Export option is greyed out (screenshot attached). I'm using: RecordEditor (0.99.3), and RecordEditor is successfully parsing and outputting my copybooks to the preview accurately. What am I missing here? Thank you for reading.
1) Not out of the box (yes with a lot of work, but not really advised). 2) No. I have used File-Viewer to look at files on a server and run the RecordEditor (the file viewer will do the transfer). The code should run on a server, but it will require manual setup. The older generic version will run against most SQL databases. This process should work:
* Install the generic version
* Update the jars to the latest version
* Create shell scripts etc.

Note: the user-id / password will need to be stored in a standard Text...
Accessing files on a server
Yes, the ReadMe.html is a false positive and can safely be ignored; historically, readme.html files were fairly common. If there is anything else, it should be investigated separately.
Hi Bruce, Please validate whether it is a false positive and can be ignored safely without posing any security risk. Thank you, David
It is objecting to the Readme.html
Hi Bruce, thanks for getting back to me. The following was detected: [cid:image001.png@01DA4214.2692B960] Can this be corrected? Thanks, David
To the best of my knowledge there should not be any ransomware in it. Was the file downloaded from sourceforge or somewhere else? Do you know what ransomware is being flagged? Have you tried the Generic/USB versions?
RecordEdit Installer flagged as suspicious.
I have uploaded a new version of the cobol2json jars: https://sourceforge.net/projects/coboltojson/files/Versions_0.9/ Download file: Cobol2JsonJars_0.93d.zip This version has a specific option for redefines:

~~~
ICobol2Json cbl2Json = getCobol2Json()
    .setRedefineSelection("Group-1", new IRedefineSelection() {
        @Override
        public List<IItem> selectRedefinedItemToWrite(List<IItem> redefinedItemGroup, AbstractLine line) {
            String code = line.getFieldValue("Group-Selector").asString();
            IItem itm = redefinedItemGroup.get(2);...
~~~
I tried the following:

~~~
Cobol2JsonSchema.newCobol2Json(copyBookFile).cobol2jsonSchema(printStream);
~~~

but this is printing the actual data in json format, not the schema definition.
There is a class, Cobol2JsonSchema, written by Moddy Te'eni in the Cobol2Json package, that takes a cobol copybook and writes sample Json. I will add calls to it to the Cobol2Json class.
Does it support Avro format ?
There is the class Cobol2JsonSchema written by Moddy Te'eni in the Cobol2Json package that takes a cobol copybook and writes sample Json.
Thanks @bruce_a_martin. Is there a possibility to output the avro schema along with the file? That is helpful for parsing large json files, to avoid schema inference in other tools. I noticed that for large json processing it is better to supply the schema in avro format instead of inferring it; schema inference is a very expensive operation for large datasets.
Thanks @bruce_a_martin. Is there a possibility to output the json schema along with the file? That is helpful for parsing large json files, to avoid schema inference in other tools.
Thanks @bruce_a_martin. Is there a possibility to redirect generated errors to a file, or suppress them so they do not show up in stdout along with the final json output?
Flatten has some potential issues:
* It could result in duplicate fields, so it will need caution when used; a selective flatten may be desirable.
* Arrays should probably be left as-is.

With redefined fields/groups there will always be a way to tell which group to use. In one of your earlier examples there was an old-format flag, and old/new formats that were redefined. So setWriteCheck can be used for redefines (for the most part). I will look at adding a separate redefines check (and if it is worth whil...
@bruce_a_martin, thanks for the update. The flatten option will be a great addition. Does setWriteCheck address the redefines issue suggested earlier?
@bruce_a_martin, thanks for the update. The flatten option will be a great addition. Are you planning to address the redefines issue as well?
There are 2 new options:

~~~
public abstract Icb2xml2Json setFormatField(String fieldName, IFormatField formatField);
public abstract Icb2xml2Json setWriteCheck(String groupName, IWriteCheck writeCheck);
~~~

setFormatField - lets you reformat a field before writing. It is only applicable to fields, and I have done no testing yet, so it may not work. setWriteCheck - lets you test whether a Cobol-Group should be written. Very basic testing done. It allows you to suppress a whole group or field (redefines when not used)...
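The write-check idea can be illustrated with a self-contained analogy. This is not the library's actual `IWriteCheck` interface (whose exact method signature is not shown above); a plain `java.util.function.Predicate` over a field map stands in for it, and the `FORMAT-FLAG` field name is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

/** Standalone analogy for a "write check": decide per record whether a
 *  redefined group should be written to the output. */
public class WriteCheckDemo {
    /** Returns true when the (hypothetical) old-format group should be
     *  written, based on a format-flag field on the record. */
    public static Predicate<Map<String, String>> oldFormatCheck() {
        return record -> "O".equals(record.get("FORMAT-FLAG"));
    }

    public static void main(String[] args) {
        Map<String, String> record = new LinkedHashMap<>();
        record.put("FORMAT-FLAG", "N");               // a new-format record
        // the old-format redefine group is suppressed for this record
        System.out.println("write old group: " + oldFormatCheck().test(record));
    }
}
```

The real `setWriteCheck` would be given logic like this per group, so each redefine is only printed when its selector field says it is active.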
@bruce_a_martin, thanks for the update. I will test it out. What is the option or flag to suppress groups? I assume this helps us flatten out the record. It will be a great option, saving space and compressing large data further.
I will look at it. I have created new jars and uploaded them as Cobol2JsonJars_0.93c.zip in https://sourceforge.net/projects/coboltojson/files/Versions_0.9 In the java interface, there is a new option .setNameMainArray(false) which controls whether the 1st array is named. Note: you need both JRecord.jar and Cobol2Json.jar, as there are changes to JRecord for suppressing the printing of groups/fields.
Hi @bruce_a_martin - I noticed that the non-pretty flag is resulting in errors. I tried validating using jq; pretty formatting is not yielding any errors.

~~~
parse error: Unfinished string at EOF at line 1, column -1313092230
~~~
@bruce_a_martin, below is a snippet of the code:

~~~
Cobol2Json.newCobol2Json(copyBookFile)
    .setFileOrganization(IFileStructureConstants.IO_BIN_TEXT)
    .setPrettyPrint(false)
    .setSplitCopybook(CopybookLoader.SPLIT_01_LEVEL)
    .setTagFormat(IReformatFieldNames.RO_CAMEL_CASE)
    .setDropCopybookNameFromFields(true)
    .setRecordSelection("DAROOT", Cobol2Json.newFieldSelection("RRC-TAPE-RECORD-ID","01"))
    .setRecordSelection("DAPERMIT", Cobol2Json.newFieldSelection("RRC-TAPE-RECORD-ID","02"))
    .setRecordSelection("DAFIELD",...
~~~
I will look at it. Can you provide your code? It seems to work for me. Note: when using the script version you must use one of:

~~~
-dropCopybookName t
-dropCopybookName true
-dropCopybookName y
-dropCopybookName yes
~~~
Thanks @bruce_a_martin. I noticed that in your latest updated jars this function is not working: setDropCopybookNameFromFields(). Whether true or false, it is still adding the copybook name. So right now, when I set it to true or false, I get the following output:

~~~
{ "allpermits": [{ "daremark": { "rrcTapeRecordId": "12", "daRemarksSegment": { "daRemarkSequenceNumber": 1, "daRemarkFileDate": { "daRemarkFileCentury": 19, "daRemarkFileYear": 87, "daRemarkFileMonth": 3, "daRemarkFileDay": 3 }, "daRemarkLine":...
~~~
I have updated CoboltoJson to add a -pretty option to the batch interface. See https://sourceforge.net/projects/coboltojson/files/Versions_0.9/ File: Cobol2Json_0.93b.zip
You probably cannot. Since Java 11 allows compiling/running java programs directly, my thought was to make the Java interface the main interface. The Java interface offers more options.
@bruce_a_martin how can i setPrettyPrint via commandline ?
Also, I forgot to mention: in the java interface to cobol2json there is a setPrettyPrint(true) option:

~~~
Cobol2Json.newCobol2Json(Cbl2JsonCode.getFullName("cobol/amsPoDownload.cbl"))
    .setFileOrganization(IFileStructureConstants.IO_BIN_TEXT)
    .setPrettyPrint(true)
~~~
At the moment, no. It should not be too hard to add. Work out all the things you want changed and I will have a look at it.
Thanks @bruce_a_martin. Is there a way in CBL2JSON to avoid the trimming of leading 0's for certain fields? I know you wrote a helper function to read the raw values in another thread. How can we apply that here in CBL2JSON? Thanks
For CoboltoJson to work on a multi-record file, it needs to be able to work out the record-type. There is basic code in JRecord that uses the Record-Length to work out the record-type, but this is unreliable. The message is saying JRecord cannot work out the Record-Type for a record. If you use the java interface, you can use the setRecordSelection method to define which record to use:

~~~
Cobol2Json.newCobol2Json(Cbl2JsonCode.getFullName("cobol/amsPoDownload.cbl"))
    .setFileOrganization(IFileStructureConstants.IO_BIN_TEXT)...
~~~
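The record-selection idea boils down to matching a field value (here the leading RRC-TAPE-RECORD-ID) against a table of record layouts. A standalone sketch of that dispatch, using record names from this thread (the lookup logic itself is hypothetical, not the library's code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal sketch of record-type selection: the first two characters
 *  (RRC-TAPE-RECORD-ID) choose which copybook record applies. */
public class RecordSelectDemo {
    private static final Map<String, String> SELECTIONS = new LinkedHashMap<>();
    static {
        SELECTIONS.put("01", "DAROOT");
        SELECTIONS.put("02", "DAPERMIT");
        SELECTIONS.put("03", "DAFIELD");
    }

    /** Returns the record name for a data line, or null if the id is
     *  unknown - the "Invalid Record Type" case. */
    public static String selectRecord(String line) {
        if (line == null || line.length() < 2) return null;
        return SELECTIONS.get(line.substring(0, 2));
    }

    public static void main(String[] args) {
        System.out.println(selectRecord("02ABC"));  // DAPERMIT
        System.out.println(selectRecord("99ABC"));  // null -> invalid record
    }
}
```

Any line whose leading id is not in the table falls through to null, which is exactly the situation the error message reports.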
Thanks @bruce_a_martin. That was a quick turnaround. I tested it; it finished to the end. Any ideas why I am getting the errors below? Is there a way to parse these special unicode characters?

~~~
Line Number: 9030479 Error: Invalid Record Type �NO ALLOWABLE WILL BE ASSIGNED UNTIL THF0000000000 854e4f20414c4c4f5741424c452057494c4c2042452041535349474e454420554e54494c2054484630303030303030303030
Line Number: 9117824 Error: Invalid Record Type � P0000000000 8520202020202020202020202020205030303030303030303030...
~~~
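The hex dump in those error messages can be decoded to inspect the failing record bytes. A standalone sketch (the leading byte 0x85 is not a valid two-digit record id, which is why JRecord rejects the record):

```java
import java.nio.charset.StandardCharsets;

/** Decode the hex dump printed with "Invalid Record Type" errors so the
 *  raw record bytes can be inspected. */
public class HexInspect {
    public static String decode(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        // ISO-8859-1 maps each byte straight to a char, so nothing is lost
        return new String(bytes, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        String text = decode("854e4f20414c4c4f5741424c45");
        // first byte is 0x85, not a digit - so no record id matches
        System.out.printf("first byte: 0x%02x, rest: %s%n",
                (int) text.charAt(0), text.substring(1));
    }
}
```

Decoding the first dump this way gives a 0x85 byte followed by "NO ALLOWABLE WILL BE ASSIGNED UNTIL THF0000000000", so the "special character" is a single stray non-ASCII byte at the start of the record.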
I have made changes and uploaded to: https://sourceforge.net/projects/coboltojson/files/Versions_0.9 Cobol2Json_0.93a_test.zip - holds the updated jars cobol2json_093a_source.zip - latest source for cobol2json
I should have updated jars in a day or 2. I will add the following if I can:
* Output pretty format or not
* Stream output to stdout instead of to a file (as an option)
Thanks @bruce_a_martin. Can you also look into the following options for the long term:
* Output pretty format or not
* Stream output to stdout instead of to a file

How soon can I get the updated jar for the error handling, to skip records that can't be read? Thanks
It is caused by a record type that is not defined in the JSon conversion job. Initially I will update cobol2json to:
1) Bypass and store invalid records
2) Produce the json file
3) Produce a report at the end, and throw an exception at the very end.

Longer term I will:
* Provide options on what to do with invalid records
* Give options to handle redefines.
Hi @bruce_a_martin, if it's using jackson for streaming then that's great. I tried running the command below. It works almost 80% of the way; I am not sure why there is an error on line 3124060.

~~~
java -jar Cobol2Json.jar -cobol /diskf/RRCDataFiles/allPermits.cbl -fileOrganisation Text -split 01 -recordSelection DAROOT RRC-TAPE-RECORD-ID=01 -recordSelection DAPERMIT RRC-TAPE-RECORD-ID=02 -recordSelection DAFIELD RRC-TAPE-RECORD-ID=03 -recordSelection DAFLDSPC RRC-TAPE-RECORD-ID=04 -recordSelection DAFLDBHL RRC-TAPE-RECORD-ID=05...
~~~
What do you mean by writing in chunks? cbl2json uses jackson to stream the json conversion. It writes Groups/fields one at a time: it reads Cobol records one at a time, then passes the Cobol-Groups/Cobol-Fields directly to the Jackson (StAX-like) parser.
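The principle of token-at-a-time JSON writing can be shown with a minimal hand-rolled writer (illustration only; Jackson's JsonGenerator does the same thing against an output stream rather than an in-memory buffer):

```java
/** Minimal illustration of streaming JSON output: each record is emitted
 *  as soon as it is read, never held until the whole document is built.
 *  (A real streaming writer targets a Writer/OutputStream, not a
 *  StringBuilder - the buffer here is just to make the demo testable.) */
public class StreamWriteDemo {
    private final StringBuilder out = new StringBuilder("[");
    private boolean first = true;

    /** Emit one record immediately. */
    public void writeRecord(String id, String text) {
        if (!first) out.append(',');
        first = false;
        out.append("{\"id\":\"").append(id)
           .append("\",\"text\":\"").append(text).append("\"}");
    }

    /** Close the top-level array and return everything written. */
    public String close() {
        return out.append(']').toString();
    }

    public static void main(String[] args) {
        StreamWriteDemo w = new StreamWriteDemo();
        w.writeRecord("01", "root");     // written straight away
        w.writeRecord("02", "permit");
        System.out.println(w.close());
        // prints [{"id":"01","text":"root"},{"id":"02","text":"permit"}]
    }
}
```

Because each record is written as it is read, memory use stays flat no matter how large the input file is, which is what makes the 9-million-line conversions above feasible.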
Thanks @bruce_a_martin. Does cbl2json support streaming of data in chunks ?
Cobol Redefines: a cobol REDEFINES overlays a block of computer memory; it is up to the program to control access to the data. For one use of redefines, see https://stackoverflow.com/questions/5269899/cobol-keyword-redefines/5270215#5270215 :

~~~
03 Birth-Date-YYYYMMDD     pic 9(8).
03 filler redefines Birth-Date-YYYYMMDD.
   05 Birth-Date-YYYY      pic 9(4).
   05 Birth-Date-MM        pic 99.
   05 Birth-Date-DD        pic 99.
~~~

In your case it is more like:

~~~
class message { ... }
class old_Message_format extends message { ... }
class new_message_format...
~~~
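The birth-date example can be mimicked in plain Java: both "views" read the same underlying characters, which is all a REDEFINES does (a standalone illustration, not library code):

```java
/** REDEFINES overlay: one 8-character area read either as YYYYMMDD or as
 *  three sub-fields - exactly the same storage, two interpretations. */
public class RedefinesDemo {
    private final String storage;                 // the shared 8 bytes

    public RedefinesDemo(String yyyymmdd) { this.storage = yyyymmdd; }

    public String birthDateYyyymmdd() { return storage; }                 // pic 9(8)
    public String birthDateYyyy()     { return storage.substring(0, 4); } // pic 9(4)
    public String birthDateMm()       { return storage.substring(4, 6); } // pic 99
    public String birthDateDd()       { return storage.substring(6, 8); } // pic 99

    public static void main(String[] args) {
        RedefinesDemo d = new RedefinesDemo("19870303");
        System.out.println(d.birthDateYyyy() + "-" + d.birthDateMm()
                + "-" + d.birthDateDd());   // 1987-03-03
    }
}
```

This is why a naive converter "dumps every field including all redefined fields": every view of the storage is a legitimate field, and only program logic knows which interpretation applies to a given record.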
Thanks for the update @bruce_a_martin. I am not sure how REDEFINES need to be handled; I am new to COBOL data structures. Should only one of the fields be extracted? How should it be handled on the JRecord side? If you can share example code to tackle this, that would be great. Thanks
A redefines will cause precisely this: it dumps every field, including all redefined fields. If need be, I can look at adding an exit where you supply code to check whether a redefined field/group should be printed. The Cobol2Xml/json programs started as example programs.
The SEQUENCE-NUMBER is quite possibly the answer; I was going to say I need to see more of the file. Cobol copybooks do not tell you things like this - it is held in the Cobol code - but you can generally work it out, provided you can see enough of the file to spot any patterns.
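If the SEQUENCE-NUMBER does mark continuations, the merge is plain program logic outside JRecord. A hypothetical sketch, assuming a 3-digit sequence at the start of each record that restarts at 001 for every new logical record (the position and restart rule would need to be confirmed against the real file):

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical continuation merge: a record whose 3-digit sequence
 *  number is not 001 is appended to the previous logical record. */
public class ContinuationDemo {
    public static List<String> merge(List<String> lines) {
        List<String> out = new ArrayList<>();
        for (String line : lines) {
            String seq = line.substring(0, 3);    // assumed position
            String text = line.substring(3);
            if ("001".equals(seq) || out.isEmpty()) {
                out.add(text);                    // start a new logical record
            } else {                              // continuation: append
                out.set(out.size() - 1, out.get(out.size() - 1) + text);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(
                "001FIRST PART ", "002SECOND PART", "001NEXT RECORD")));
    }
}
```

Running this merges the first two physical records into one logical record and starts a second at the next 001 sequence.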
I am noticing this occurs in the DAPERMIT section wherever a field has REDEFINES.
Yes. I have attached it.
I am wondering if it's to do with records where there are REDEFINES, and whether those need to be removed, since they share existing space with another field.
Do you have the Cobol copybook as a text file? When I copy it from the PDF it is a mess.
Hi @bruce_a_martin. I am trying to parse the DAPERMIT section of the cobol data file based on the structure defined here: https://www.rrc.texas.gov/media/ezxjqdmn/oga049.pdf I convert each record to a json object. Within the object I noticed some fields are not being read correctly. These are the ones I noticed not being parsed correctly:
* daSurfaceSurveyDirection1
* daSurfaceSurveyFeet2
* daSurfaceSurvey
* daSurfaceSurveyFeet1
* daSurfaceLeaseDirection2
* daNearestLeaseLine
* daNearestWellFeet
* daNearestWell
* daSurfaceAbstract ...
I noticed the SEQUENCE-NUMBER indicates whether a record is a continuation of the previous one.
Thanks @bruce_a_martin. Are you able to share what the code might look like? How will I determine the end of the string here? Can P0000000000 be treated as a newline character?
There is no way to do this in JRecord; it would have to be handled in program logic. It is not something Cobol itself does either - it would be handled in the cobol code.
Hi @bruce_a_martin, in the copybook file I have lots of filler entries like below:

~~~
01 DAREMARK.
   02 RRC-TAPE-RECORD-ID            PIC X(02).
   02 DA-REMARKS-SEGMENT.
      03 DA-REMARK-SEQUENCE-NUMBER  PIC 9(03) VALUE ZEROS.
      03 DA-REMARK-FILE-DATE.
         05 DA-REMARK-FILE-CENTURY  PIC 9(02) VALUE ZEROS.
         05 DA-REMARK-FILE-YEAR     PIC 9(02) VALUE ZEROS.
         05 DA-REMARK-FILE-MONTH    PIC 9(02) VALUE ZEROS.
         05 DA-REMARK-FILE-DAY      PIC 9(02) VALUE ZEROS.
      03 DA-REMARK-LINE             PIC X(70) VALUE SPACES.
      03 FILLER                     PIC X(10) VALUE ZEROS.
   02 RRC-TAPE-FILLER...
~~~
I updated the CopyBook file section to the below and it worked. Thanks:

~~~
01 DAW999A1.
   05 RRC-TAPE-RECORD-ID        PIC X(02).
   05 DA-SURF-LOC-LONGITUDE     PIC 9(5)V9(9) VALUE SPACES.
   05 DA-SURF-LOC-LATITUDE      PIC 9(5)V9(9) VALUE SPACES.
01 DAW999B1.
   05 RRC-TAPE-RECORD-ID        PIC X(02).
   05 DA-BOTTOM-HOLE-LONGITUDE  PIC 9(5)V9(9) VALUE SPACES.
   05 DA-BOTTOM-HOLE-LATITUDE   PIC 9(5)V9(9) VALUE SPACE
~~~
The problem is that the Copybook does not match the data, so either it is the wrong file or the wrong copybook. I realise this is not something you can influence, but one of the things I have always advocated is including the copybook name in the file name at a specified position (start or end). It makes life easy for everybody! e.g. if the copybook is PZR1000, the file name might be ptpz.PZR1000.price.extract or ptpz.price.extract.PZR1000. In this case, the Copybook should look like:

~~~
01 DAW999A1.
   05 Record-Type
~~~
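For reference, a picture like PIC 9(5)V9(9) holds 14 digits with an implied decimal point after the 5th (no "." character and no sign in the data); decoding it is just a point shift. A standalone sketch (note that an unsigned 9(5)V9(n) picture cannot hold the minus sign in values like -94.3251710, which is part of the copybook/data mismatch here):

```java
import java.math.BigDecimal;

/** Decode an implied-decimal (V) field: e.g. PIC 9(5)V9(9) is 14 digits
 *  with the decimal point implied after the 5th digit. */
public class ImpliedDecimalDemo {
    public static BigDecimal decode(String digits, int decimalPlaces) {
        // the stored digits are a plain integer; shift the point left
        return new BigDecimal(digits).movePointLeft(decimalPlaces);
    }

    public static void main(String[] args) {
        // 14-digit field "00094325171000" with V9(9) -> 94.325171000
        System.out.println(decode("00094325171000", 9));
    }
}
```

This is why data containing an explicit "." or "-" (as in the records above) does not fit such a picture: the copybook expects digits only.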
Hi @bruce_a_martin, I have managed to generate Java code to read and extract the records. I have an issue with the following section:

~~~
01 DAW999A1.
   05 DA-SURF-LOC-LONGITUDE     PIC 9(5)V9(7) VALUE SPACES.
   05 DA-SURF-LOC-LATITUDE      PIC 9(5)V9(7) VALUE SPACES.
01 DAW999B1.
   05 DA-BOTTOM-HOLE-LONGITUDE  PIC 9(5)V9(7) VALUE SPACES.
   05 DA-BOTTOM-HOLE-LATITUDE   PIC 9(5)V9(7) VALUE SPACES.
~~~

Record numbers 14 and 15 have latitude and longitude values represented in the data file like this:

~~~
14 -94.3251710 31.4884060
15...
~~~
You can certainly view/edit the file with the RecordEditor, but remember: if the file was converted from the Mainframe, this field could be corrupt, and lines could be split in 2 if a \n happens to occur in this field. When transferring a binary file from the Mainframe:
* Use the binary transfer option (keep the file as EBCDIC).
* For VB files, also use the RDW option to ensure the RDW is transferred.

Note: RDW = Record Descriptor Word - basically the record length. Note: VB Files are a...
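For reference, the RDW on a variable-length (VB) record is a 4-byte prefix: a 2-byte big-endian length, which includes the 4 RDW bytes themselves, followed by two zero bytes. A standalone sketch of reading it:

```java
/** Read the record length from an RDW (Record Descriptor Word):
 *  2-byte big-endian total length (including the 4 RDW bytes)
 *  followed by 2 zero bytes. */
public class RdwDemo {
    /** Data length of the record, excluding the 4-byte RDW itself. */
    public static int dataLength(byte[] rdw) {
        int total = ((rdw[0] & 0xFF) << 8) | (rdw[1] & 0xFF);
        return total - 4;
    }

    public static void main(String[] args) {
        byte[] rdw = {0x00, 0x50, 0x00, 0x00};   // total length 80
        System.out.println(dataLength(rdw));     // 76 data bytes follow
    }
}
```

Transferring without the RDW option strips these prefixes, which is why record boundaries in a VB file cannot be recovered afterwards.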
Bruce, thanks for your prompt reply. Commenting that line out worked. However, when you open the data file the encoding seems to be ASCII. So will there still be corruption in the comp field? What is your recommendation for handling the comp field in the ASCII data file, to avoid such corruption or incorrect parsing of the file?
The problem is with the field RAILROAD-COMMISSION-TAPE-REC; it has no picture clause:
~~~
02 RRC-TAPE-RECORD-ID PIC X(02).
02 RAILROAD-COMMISSION-TAPE-REC.
02 RRC-TAPE-RECORD-ID PIC X(02).
~~~
Just comment it out; I will update the RecordEditor to report/ignore this type of issue. Also, a separate issue: DA-CONVERTED-DATE is a comp field. This creates 2 problems:
* It will be corrupted by Mainframe EBCDIC-to-ascii conversion
* It could be treated as a line-end - corrupting the record (splitting the record in...