I believe it's a chicken-and-egg thing. The bean is probably the best way to get all the fields and annotations, though if we were ever to attempt a ground-up rewrite we might try using the class first.
Hello Scott, Indeed, this is a possibility, but I don't see why an existing bean should be required to export the header.
Yes, that's a possibility, and I'm using the setColumnOrderOnWrite method in my sample code above, passing it a comparator. But my point is about making it possible to use the CsvBindByPosition annotation in conjunction with the HeaderColumnNameMappingStrategy to do the column sorting automatically. In my use case, I'm using CSV files as output to validate the behaviour of my program. Our QA team gives us some CSV files, directly extracted from the database. We are planning to use plain text comparison...
Hello Six, I don't have as much experience with this one, as I have never personally had a use case where the order of the output mattered. That said, there already exists a non-annotation way of handling it. There is some crude documentation at https://opencsv.sourceforge.net/#changing_the_write_order The HeaderColumnNameMappingStrategy setColumnOrderOnWrite method takes a comparator of strings. You can use something like the supplied LiteralComparator; there is a sample of its usage in ComparatorTest.java....
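The comparator approach can be illustrated without opencsv at all. The FixedOrderComparator below is a hypothetical stand-in for the supplied LiteralComparator: it orders header names by their position in a fixed list, with unknown names sorting last; an instance of something like this is what you would hand to setColumnOrderOnWrite.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of a literal-order comparator, similar in spirit to opencsv's
// LiteralComparator: strings are ordered by their position in a fixed
// list, and names not in the list sort after all known names.
public class FixedOrderComparator implements Comparator<String> {
    private final List<String> order;

    public FixedOrderComparator(String... headersInOrder) {
        this.order = Arrays.asList(headersInOrder);
    }

    @Override
    public int compare(String a, String b) {
        int ia = order.indexOf(a);
        int ib = order.indexOf(b);
        // Unknown headers (indexOf == -1) go last.
        if (ia < 0) ia = Integer.MAX_VALUE;
        if (ib < 0) ib = Integer.MAX_VALUE;
        return Integer.compare(ia, ib);
    }

    public static void main(String[] args) {
        String[] headers = {"NAME", "ID", "EMAIL"};
        Arrays.sort(headers, new FixedOrderComparator("ID", "NAME", "EMAIL"));
        System.out.println(Arrays.toString(headers)); // [ID, NAME, EMAIL]
    }
}
```
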
Print header separately
Hello Six, All mapping strategies have a generateHeader method that creates the header as an array of Strings. It can be called at will. If that does not work for you, then please explain your use case with some sample code. Scott :)
Automatically sorting columns on HeaderColumnNameMappingStrategy
Print header separately
Hello Jeremy - thanks for the offer, but so far SourceForge has done everything we needed, so there is really no need to move. I actually use GitHub for my personal projects (though with nowhere near the number of commits you have), but opencsv started on SourceForge in 2005, there are millions of bookmarks in browsers around the world for opencsv.sourceforge.net, and there is no need to cause such a disruption just because a newer, sexier developer platform comes out, because there will always be a newer, sexier...
Find me on GitHub at https://github.com/hazendaz so you know you have someone with great experience who can help get this over.
Scott, please reconsider this. No one who cares about open source uses SourceForge these days. Every single thing you have here can move to GitHub. I can help you; I've been on GitHub for 13 years now. There are issues with this library, and I can guarantee no one reports them due to it being here. Whatever you need, I want to help. Let's get this on GitHub, start releasing more frequently, and get rid of the legacy baggage this currently contains. Even if you don't port everything, just port the...
My specific case is that I want to represent NULL with an explicit string value. If a non-null value evaluates to the NULL string value, then prepend the NULL string value. Then, when loading the CSV file back, if the value read is the NULL string value, insert a NULL into the database; if the value read starts with the NULL string value, remove it and insert the rest into the database. So it's both an export and an import. At this point opencsv doesn't do its own import, so I suspect this isn't something...
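The export/import scheme described above amounts to a pair of small helper functions. This is only a sketch of the escaping rule, not opencsv code; the NULL_TOKEN marker and the class name are made up for illustration.

```java
// Illustration of the NULL-marker scheme described above: a sentinel
// string represents SQL NULL, and real values that start with the
// sentinel are escaped by prepending it once more so they round-trip.
public class NullMarker {
    static final String NULL_TOKEN = "\\N"; // hypothetical marker string

    // Export side: null becomes the marker; values starting with the
    // marker get an extra copy prepended.
    static String encode(String value) {
        if (value == null) return NULL_TOKEN;
        if (value.startsWith(NULL_TOKEN)) return NULL_TOKEN + value;
        return value;
    }

    // Import side: the bare marker becomes null; a value starting with
    // the marker has one copy stripped off.
    static String decode(String field) {
        if (field.equals(NULL_TOKEN)) return null;
        if (field.startsWith(NULL_TOKEN)) return field.substring(NULL_TOKEN.length());
        return field;
    }

    public static void main(String[] args) {
        System.out.println(decode(encode(null)));    // null
        System.out.println(decode(encode("\\N")));   // \N
        System.out.println(decode(encode("plain"))); // plain
    }
}
```
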
Sorry I missed your original post. But for the original question - it depends. If it is a single change that, if left unset or by default, leaves the original functionality unchanged (so by default it does not change established behavior), then I am okay with a simple setter. If it is a very pervasive change where it is difficult not to modify existing behavior, then extension would be the route to go - which you can see in the ResultSetColumnNameHelperService. Let me know if you need/want to change the...
I did some more work on this and realized that the solution I need is likely more specialized than would make sense to add to the library. I believe I can handle this by writing a custom ResultSetHelper class.
Patch to support modifying the default value of ResultSetHelperService
Sorry for the late response, but work was pretty busy this week. So: if your change did not break any of the existing tests, and, more importantly, you have a JUnit test that will break if someone removes your fix (thus showing why the fix is there in the first place), please send it as a merge request or a patch.
One solution to this dilemma of "to interpret or not to interpret carriage returns" is to use a modified LineReader class that interprets both CR and LF as line endings, but appends them to the resulting string just like other characters. Then, you wouldn't need to interpose newlines when reconstructing multi-line fields, as LineReader has preserved the original line-ending characters. Then, it's just a matter of trimming them strategically in CSVParser and RFC4180Parser. This way you get the best...
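A minimal sketch of that modified-LineReader idea, using only the JDK (the class name and details are hypothetical): each call returns one line with its original CR, LF, or CRLF still attached, so nothing has to be re-interposed when multi-line fields are reconstructed.

```java
import java.io.IOException;
import java.io.PushbackReader;
import java.io.Reader;
import java.io.StringReader;

// Sketch of the "preserve line endings" idea: read a line but keep the
// original terminator (CR, LF, or CRLF) at the end of the returned
// string, so a caller reassembling multi-line fields never has to guess
// which newline the input used.
public class TerminatorPreservingReader {
    private final PushbackReader in;

    public TerminatorPreservingReader(Reader in) {
        this.in = new PushbackReader(in);
    }

    // Returns the next line including its terminator, or null at EOF.
    public String readLineWithTerminator() throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            sb.append((char) c);
            if (c == '\n') break; // bare LF ends the line
            if (c == '\r') {
                int next = in.read();
                if (next == '\n') sb.append('\n'); // CRLF stays together
                else if (next != -1) in.unread(next); // bare CR: push back
                break;
            }
        }
        return sb.length() == 0 ? null : sb.toString();
    }

    public static void main(String[] args) throws IOException {
        TerminatorPreservingReader r =
                new TerminatorPreservingReader(new StringReader("a\r\nb\rc\nd"));
        String line;
        while ((line = r.readLineWithTerminator()) != null) {
            System.out.print(line.replace("\r", "<CR>").replace("\n", "<LF>"));
        }
        // prints: a<CR><LF>b<CR>c<LF>d
    }
}
```
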
CsvToBean.stream() does not preserve order of input
Okay, I see what you are talking about, but you are using the wrong file. Doing the diff showed me that while both contain the same data, they differ in the order of creation.

diff output_iterator.csv output_stream.csv
90,91c90,91
< 90;newyawdrive02_13a
< 89;newyawdrive02_14a
---
> 89;newyawdrive02_13a
> 90;newyawdrive02_14a
126,127c126,127
< 126;newyawdrive03_23a
< 125;newyawdrive03_24a
---
> 125;newyawdrive03_23a
> 126;newyawdrive03_24a
201,202c201,202
< 201;newyawdrive06_20a
< 200;newyawdrive06_21a...
Carriage returns considered part of quoted data
So why do you have withKeepCarriageReturn set to true? Your test (and thank you for supplying a test!!) does not show that use case, and my answer for this particular test is to set keep-carriage-return to false. I created the following test and it passes:

@Test
public void bug250CarriageReturnsAtEndOfLine() throws IOException {
    String input = "\"Line 1\"\r\n\"Line 2\"";
    CSVReader csvReader = new CSVReaderBuilder(new StringReader(input))
            .withCSVParser(new RFC4180Parser())
            .withKeepCarriageReturn(false)...
Carriage returns considered part of quoted data
I can reproduce my problem, but it turns out it is a little bit different than I thought. The order in which the elements are returned is actually the same as in the input. But in my bean class I have a field called sequenceNumber which is assigned its value in the constructor, based on a global counter.

import com.opencsv.bean.CsvBindByName;

public class CsvSymbol {
    @CsvBindByName(column = "Designer ID")
    public String designerId;

    // this is just a counter not included in the CSV file
    public static...
CsvToBean.stream() does not preserve order of input
If possible, please send a test sample so we can see what is going on. My original gut feeling was no surprise at all: calling iterator() on the CsvToBean returns a CsvToBeanIterator, whereas calling the stream method on the CsvToBean returns a Java stream, and then you are calling iterator() on the Java stream. But digging into the Java stream code, it calls the spliterator code, which is the opencsv LineExecutor. So in theory, even though they are different, they should be...
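As a sanity check of that expectation, a plain JDK stream built from an ordered source does report elements in encounter order through its iterator; this snippet exercises only the JDK behavior, not opencsv's LineExecutor.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.stream.IntStream;

// Sanity check of the expectation discussed above: for an ordered
// source, Stream.iterator() reports elements in encounter order, so an
// ordering difference must come from how the elements are produced,
// not from the stream-to-iterator hop itself.
public class StreamOrderCheck {
    public static void main(String[] args) {
        List<Integer> seen = new ArrayList<>();
        Iterator<Integer> it = IntStream.range(0, 5).boxed().iterator();
        while (it.hasNext()) {
            seen.add(it.next());
        }
        System.out.println(seen); // [0, 1, 2, 3, 4]
    }
}
```
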
CsvToBean.stream() does not preserve order of input
Of course you can - though I would recommend extending the CSVParserBuilder as well, so you have a builder for your own extended class too. We went the builder route because, after several years and many, many modifications, we realized that our classes had 8-9 constructors, some with a dozen parameters, so that we could maintain backwards compatibility while still allowing for the new features being requested. It just got to be too much, so we created a Factory/Builder class and never looked back....
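That design choice can be sketched in plain Java (all class names here are hypothetical, not opencsv's): the target class keeps its fields final, and only a builder, which users are free to extend, assigns them.

```java
// Plain-Java sketch of the builder pattern discussed above: the target
// class keeps its fields final, and only the builder (which a user can
// extend) chooses their values. All names here are hypothetical.
class Parser {
    final char separator;
    final char escapeChar;

    Parser(char separator, char escapeChar) {
        this.separator = separator;
        this.escapeChar = escapeChar;
    }
}

class ParserBuilder {
    protected char separator = ',';
    protected char escapeChar = '\\';

    ParserBuilder withSeparator(char s) { this.separator = s; return this; }
    ParserBuilder withEscapeChar(char e) { this.escapeChar = e; return this; }

    Parser build() { return new Parser(separator, escapeChar); }
}

// A user-supplied builder subclass that bakes in different defaults,
// without the Parser fields ever losing their final modifier.
class PipeParserBuilder extends ParserBuilder {
    PipeParserBuilder() { withSeparator('|').withEscapeChar('~'); }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Parser p = new PipeParserBuilder().build();
        System.out.println(p.separator + " " + p.escapeChar); // | ~
    }
}
```
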
Thanks for this solution. By the way, can we extend CSVParser and override this method like you did, while changing some fields like the escape char? The fields are final and only the builder can set them. Thanks!
What's new
Yes - my apologies, but apparently I did not update the wiki; I missed several of those in the 5.8 release for some reason. After confirming that the fix is in by looking at the git history, I have updated the wiki.
ConverterPrimitiveTypes calls 'new ConvertUtilsBean()' over and over
What's new
Operational risk in opencsv
K - so I looked up the issues in the 4.4 version of commons-collections4 - https://mvnrepository.com/artifact/org.apache.commons/commons-collections4/4.4 The issue is from an older version of junit4 (4.12), which is a test-scope dependency, so it is not compiled into the system. PLUS the website noted that the issue was fixed in 4.13.1. Because of our use of the junit5-vintage-engine, we are pulling in a newer version of junit4.

mvn dependency:tree | grep junit
[INFO] +- org.junit.jupiter:junit-jupiter-api:jar:5.10.1:test...
Closed for lack of response.
CSVReader readAll method has attack risks
readNextSilently method of CSVReader not found
Closed - not a bug per se, but a jar hell scenario from a refactor we made a decade ago, in which the package locations were changed, causing two CSVReader classes to exist for a user whose other library depends on the old version of opencsv.
Unexpected behavior in OpenCSV when parsing CSV file with backslash characters
Closed for lack of response.
Hi Scott, Thanks very much for responding and thanks for the suggestions.
So on the first problem: are you seeing a comma at the end of every row, or at the end of every line? That is important for two reasons. The first is that CSV files can have data containing multiple newline characters. And second, opencsv uses the Java Reader class, which reads the file one line at a time to create a row. If your data has no newlines in it, so each line in the file is exactly one row of data, or you have a comma at the end of each line, then you are good. In this case I would recommend you...
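The row-versus-line distinction can be demonstrated with a tiny quote-state counter (a deliberately naive sketch, not opencsv's parser): a quoted field may contain newlines, so one logical row can span several physical lines.

```java
// Illustrates why "end of every row" and "end of every line" differ:
// a quoted CSV field may contain newlines, so one logical row can span
// several physical lines. This naive counter only tracks quote state
// and ignores escaped quotes, separators, etc.
public class RowsVsLines {
    static int countRows(String csv) {
        boolean inQuotes = false;
        int rows = csv.isEmpty() ? 0 : 1;
        for (char c : csv.toCharArray()) {
            if (c == '"') {
                inQuotes = !inQuotes;           // toggle quote state
            } else if (c == '\n' && !inQuotes) {
                rows++;                          // newline outside quotes ends a row
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        String csv = "a,\"multi\nline\"\nb,c";
        System.out.println(csv.split("\n").length); // 3 physical lines
        System.out.println(countRows(csv));         // 2 logical rows
    }
}
```
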
Allow header transformation in CsvReaderHeaderAware
Unable to set locale dynamically
Hi again @sconway, Sorry for the delay! I have seen the ticket that you opened for DL4J, thank you! Our full deployment pipeline is a bit more complex because we have four projects, each of which builds on (-->) top of the previous one: ConnectionStudio --> Pathways --> Abstra --> ConnectionLens After looking at our recent Abstra POM, we do not use DL4J anymore either; we replaced it with a much more compact library, mainly because we couldn't deploy our JAR anymore due to DL4J's size. Apart from that, I started...
I reached out to them yesterday and they sent me a link to their github this morning so I created a request there. https://github.com/deeplearning4j/deeplearning4j/issues/10048
If there are no net.sf.opencsv or au.com.bytecode references then there should not be a jar hell situation. Please run the following command on the abstra project

mvn dependency:tree -Dverbose > somefile.txt

and attach that file to the ticket. Looking at the uncompressed core jar, I do see the net.sf.opencsv:

find . -name pom.xml -exec grep -il opencsv {} \;
./META-INF/maven/net.sf.opencsv/opencsv/pom.xml
./META-INF/maven/org.apache.commons/commons-csv/pom.xml
./META-INF/maven/com.opencsv/opencsv/pom.xml...
Dear Scott Conway, First, thank you very much for your quick and detailed answer, it is very much appreciated! Yes, Abstra is huge, mainly because ConnectionLens is huge, due to language models... Sorry that you had to go through this! The Abstra project that you downloaded is the public one, with a stable release: that is why there is no pom.xml, we only provide a jar with all its dependencies for a simpler installation. However, as you can see, it was last updated 5 months ago; we obviously work...
K - for giggles I downloaded the abstra project. And it was HUGE!! I am not sure what you are using to build the project, as there was no pom file, nor did I see a build.xml (ant) or build.gradle (gradle), but I can definitely tell you that you have a jar hell situation. I extracted the abstra-core-full-1.1-SNAPSHOT-develop-1daa256-20230223-1808.jar and did a grep for opencsv and came across two different directories:

find . -type d -name opencsv
./META-INF/maven/net.sf.opencsv/opencsv
./META-INF/maven/com.opencsv/opencsv...
readNextSilently method of CSVReader not found
So your connectionlens project works without issue, but when it is included in the abstra project you get this issue. Actually, I have seen this quite a bit, and it goes by several names, typically "dependency conflict" or "jar hell". What is happening is that your connectionlens project is using a version of opencsv that has the readNextSilently method in it - doing a little bit of git forensics (the fact that the call is at line 129 of CSVReaderHeaderAware) you are using up to version 5.5...
readNextSilently method of CSVReader not found
Awesome, thank you. Works perfectly.
CsvNumber - rounding mode
Version 5.9 has been released with this feature.
How to transform data when the column header in the CSV file is not the same as the POJO
Spaces in Header and CsvBindByName.required
I don't know what changed today, other than the fact that I was taking screenshots of everything so I could file a Jira with Sonatype, but it worked, and 5.9 has been released. :)
create 5.9.1-SNAPSHOT version
Merge remote-tracking branch 'origin/master'
create 5.9 version
5.9 has been released!
What's new
Perfect!! Thank you @sconway Scott!! This is of great help. It will solve the main hurdle in my software, because the REST API is invoked based on the data from the CSV. If my bean can be populated from the input CSV, the REST calls will succeed. Thank you for the clarification. Have a nice weekend! Cheers!!
K - so no, there is no requirement that the value matches the name; nothing is going to blow up - the worst that will happen is that the beans will not populate. Under the covers, opencsv takes the headers and looks for getters and setters for each of them, so for Id it will look for a getId and setId. As long as those are there, you should have no problem. My only question, about something I admittedly have not used much, is that you are auto-generating your getters and setters, but I don't think that will...
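Conceptually, that header-to-accessor lookup behaves like the reflective search sketched below; this is an illustration with JDK reflection only, not opencsv's actual implementation, and the Person bean is made up.

```java
import java.lang.reflect.Method;

// Standalone sketch of the header-to-accessor lookup described above:
// for a header "Id", look for a one-argument setId on the bean class.
// If no setter exists, the field simply stays unpopulated.
public class HeaderLookup {
    public static class Person {
        private String id;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
    }

    // Capitalize the header's first letter and look for the matching setter.
    static Method findSetter(Class<?> beanClass, String header) {
        String name = "set" + Character.toUpperCase(header.charAt(0))
                + header.substring(1);
        for (Method m : beanClass.getMethods()) {
            if (m.getName().equals(name) && m.getParameterCount() == 1) {
                return m;
            }
        }
        return null; // no setter found: nothing blows up
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        Method setter = findSetter(Person.class, "id");
        setter.invoke(p, "42");
        System.out.println(p.getId()); // 42
    }
}
```
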
@sconway thank you. What I intended to ask was: is the Bean Builder a strict data model, meaning the POJO should exactly match the CSV column headers? E.g., if the POJO has four fields like the above, must the CSV file contain all four columns for bean building? If the CSV file contains only two columns, will the bean still be processed? Any help much appreciated!!
I am not sure if I fully understand the question. If it is just about the basic documentation, take a look at the quickstart at https://opencsv.sourceforge.net/#quick_start If I remember correctly, required is false by default, so it is okay if a header is missing - the value will just be the default (null, 0, false, etc.).
I am using version 5.8, with a Gradle build.
I am using version 5.8.
How to transform data when the column header in the CSV file is not the same as the POJO
Hi Scott, Thanks for letting me know and good luck with the resolution.
Hello James - sorry, but this is going to have to be postponed. The Nexus Repository Manager that is hosted for open source projects at oss.sonatype.org is no longer working, so I cannot upload projects to the Maven Central repository. I have a call scheduled with Sonatype to figure out what the new way of doing things is, because everything I am finding on the internet wants me to install a repository manager on my laptop just to upload to Maven Central, and then they want to upsell to the not free...
RecursiveType constructor should at least be protected
Yes, I am going to close this and retroactively update the What's New page. I see that the change was made, and it must have gone out in the 5.8 release, but I must not have updated the What's New page when I merged the code in.
What's new
Please feel free to close it on my account.
Spaces in Header and CsvBindByName.required
Recording input CSV record line number in the mapped bean
Caching and reusing HeaderColumnNameMappingStrategy instances
required column names
Closed for lack of activity.
What would you suggest we update to? 4.4 is still the newest version. I'm also familiar with security problems with 4.3, but not 4.4.
Has this change been released?
This ticket can be closed, can it not? The changes are not listed on the project wiki's "What's new" page, but we have released since this was done.
@sconway This one is really directed at you. The all-caps thing was there when I arrived, and I recall you having a reason for it.
Error messages don't use errorLocale
This fix was released with version 5.8 in July. Sorry I didn't update the ticket.
CsvNumber - rounding mode
This is implemented and will go out with the next release.
What's new
Updated dependencies.