I see my data, which is what I want, but for some reason I see extra rows of empty records. Any idea why?

Make sure you are trimming the entire data set to get rid of any extra line-returns at the bottom. That said, I've seen others get extra rows at the end before, and I am not sure I ever got to the bottom of it.

I updated your function to allow for "FirstRowIsHeader".

You could do that - you'd just have to do the File-Read within the function itself. The only thing to consider is that this limits the ways in which the function can be used.
However, if you only use files, there's nothing wrong with making it more convenient by moving the file-read internally.

Thank you very much! I can't tell you how much of a help you have been and how much of my time you have saved!

Greg: Your solution is not for all situations.
If one row has more columns than the header row has, you can't access those values or see them in the dump of the result query. The same problem occurs with an empty column header.
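One way to guard against both problems is sketched below - this is not part of the original function, just an illustrative Java sketch (the class name `HeaderGuard` and the `COLUMN_n` placeholder convention are my own invention): generate placeholder names for empty header cells, and grow the header to the widest data row so "extra" values stay addressable.

```java
import java.util.Arrays;

public class HeaderGuard {

    // Replace empty header names with generated placeholders (COLUMN_1, ...),
    // so every column can be addressed by name.
    public static String[] fixHeaders(String[] headers) {
        String[] fixed = new String[headers.length];
        for (int i = 0; i < headers.length; i++) {
            String h = headers[i].trim();
            fixed[i] = h.isEmpty() ? ("COLUMN_" + (i + 1)) : h;
        }
        return fixed;
    }

    // Grow the header to the widest data row, so values in "extra" columns
    // remain reachable instead of being silently dropped.
    public static String[] extendHeaders(String[] headers, int widestRow) {
        if (widestRow <= headers.length) {
            return headers;
        }
        String[] extended = Arrays.copyOf(headers, widestRow);
        for (int i = headers.length; i < widestRow; i++) {
            extended[i] = "COLUMN_" + (i + 1);
        }
        return extended;
    }
}
```

Running `extendHeaders` with the maximum row length found during parsing would make a four-column row under a two-column header come back as `id, name, COLUMN_3, COLUMN_4` rather than losing the last two values.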
I'm trying to parse a CSV file using a comma delimiter. The CSV file has about rows and 80 columns. When I do that, I get an error: java.lang.OutOfMemoryError: Java heap space. I also noticed a trailing comma, which I removed. I'm pretty sure the problem I'm having arises from the size of the document. I'd rather not change the memory allocation on the server, because I'm not sure what else that will affect.
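The usual alternative to raising the heap is to stop reading the whole file into memory at once and process it line by line instead. A minimal Java sketch of that idea (the class and method names are illustrative, and a real parser would do more per line than count):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class StreamingCsvCount {

    // Process a CSV source one line at a time instead of loading the whole
    // file; only the current line is ever held on the heap.
    public static int countRows(BufferedReader reader) throws IOException {
        int rows = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (!line.trim().isEmpty()) { // skip blank trailing lines
                rows++;                   // parse/handle the line here instead
            }
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(
            new StringReader("a,b,c\n1,2,3\n4,5,6\n"));
        System.out.println(countRows(reader)); // 3
    }
}
```

The trade-off is that a purely line-based loop cannot handle qualified values that contain embedded line breaks; for those you need a streaming parser that understands the qualifier, not just `readLine()`.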
I'm using your function here over at www. It's been great. We got an error report today that the CSV import was broken for 1 user.

Years later, another modification, based on Greg's comment above. This time, I needed to specify -which- row of the CSV data contains the header. I use "0" if no header, same as "False" in Greg's example.

This is different than standard ColdFusion, but I am trying to make this as easy as possible.
It is possible that there is no qualifier being used. In that case, we can just store the empty string and leave it as-is.

This will be the character that acts as the record delimiter. A "line break" might be a carriage return followed by a line feed, or just a line feed. We want to standardize it so that it is just a line feed. That way, it is easy to check for later, and it is a single character, which makes our lives much nicer.
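The standardization step described above is a two-replacement pass. A sketch in Java (the original function is ColdFusion; the class name here is illustrative):

```java
public class LineBreakNormalizer {

    // Collapse CR+LF pairs and lone CRs down to a single LF, so that later
    // parsing only ever has to look for one record-delimiter character.
    public static String normalize(String input) {
        return input.replace("\r\n", "\n").replace("\r", "\n");
    }

    public static void main(String[] args) {
        String mixed = "a,b\r\nc,d\re,f\n";
        System.out.println(normalize(mixed).equals("a,b\nc,d\ne,f\n")); // true
    }
}
```

The order matters: replacing `\r\n` first prevents the lone-`\r` pass from splitting a Windows line ending into two line feeds.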
We will need this when we are going through the tokens and building up field values. To do this, we are going to strip out all characters that are NOT delimiters and then get the character array of the string.
This should put each found delimiter at its own index, corresponding to the tokens. First, add a space to the beginning of the string, and when splitting the string, add a space to each token first to ensure that the split works properly; we just have to be sure to strip this space out later on. Going forward, some of these tokens may be merged, but doing it this way will help us iterate over them. You cannot alter this array once it has been created.
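The two steps above - building a delimiter array and splitting into raw tokens - can be sketched like this in Java. Note the space-padding trick is specific to ColdFusion, whose list functions drop empty list elements; in Java the same effect comes from the `-1` split limit, which keeps empty tokens. Class and method names are illustrative, and the delimiters are hardcoded to comma and line feed for the sketch:

```java
import java.util.Arrays;

public class Tokenizer {

    // Keep ONLY the delimiter characters (comma and line feed here): the
    // resulting array has one entry per delimiter, lining up index-for-index
    // with the gaps between the raw tokens.
    public static char[] delimiterArray(String data) {
        return data.replaceAll("[^,\n]", "").toCharArray();
    }

    // Split on either delimiter; the -1 limit keeps trailing empty tokens,
    // playing the same role as the padding space in the ColdFusion original.
    public static String[] rawTokens(String data) {
        return data.split("[,\n]", -1);
    }

    public static void main(String[] args) {
        String data = "a,b\nc,,d";
        System.out.println(Arrays.toString(rawTokens(data)));      // [a, b, c, , d]
        System.out.println(delimiterArray(data).length);           // 4
    }
}
```

With this layout, the delimiter at index `i` is the one that followed token `i`, which is exactly the correspondence the merging logic later relies on.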
It can merely be referenced read-only. We will handle this later as we build values; this is why we created the array of delimiters above. This will be a full array of arrays, but for now, just create the parent array with no indexes. Even if we don't end up adding any values to this row, it is going to make our lives simpler to have it in existence.
This is the row to which we are actively adding values. We have no sense of any row delimiters yet; those will have to be checked for as we are building up each value. This is the current index of the array to which we might be appending values for a multi-token value. Trim off the first character, which is the empty string that we added to ensure proper splitting. Check whether this token starts with a field qualifier. If it does, then we might have to build the value across multiple fields.
If we do not, then the raw tokens should line up perfectly with the real tokens. Therefore, we can assume that we have a previous token value ALREADY in the row value array and that we have access to a previous delimiter in our delimiter array.
We don't care about the first qualifier, as it can ONLY be an escaped qualifier, not a field qualifier. While this is not easy to read, add it directly to the results array, as this will allow us to forget about it later. We have reached the end of a qualified value. We can complete this value and move onto the next field. Remove the trailing quote. Remember, we have already added the token to the results array, so we must now manipulate the results array directly.
Changing the Token at this point will not affect the results, so strip the qualifier from the value already stored in the results array (at FieldIndex) using ReplaceFirst(). The field is qualified on both ends. The field is qualified at the start. If that is the case, then either this field is starting a multi-token value OR this field has a completely qualified value.
This is the field qualifier, and we do NOT want to include it in the final value. If the first character is a qualifier (already established) and the last character is also a qualifier (what we are about to test for), then this token is a fully qualified value.
Remove the end field qualifier and append it to the row data. We are building a value up across different tokens. Set the flag for building the value. Just add this token value as the next value in the row. Do NOT use the FieldIndex value, as this might be a corrupt value at this point in the token iteration. We are NOT going to have to worry about building values across tokens.

This took tick count to complete.
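The merge-across-tokens logic described in the comments above can be condensed into a small Java sketch. This is not the original ColdFusion implementation, just an illustration of the same idea under the same assumptions: tokens and delimiters line up index-for-index, a token that opens with the qualifier but does not close it absorbs the following tokens (re-inserting the delimiter that split them), and doubled qualifiers are escapes.

```java
import java.util.ArrayList;
import java.util.List;

public class QualifiedMerge {

    // Re-join raw tokens into field values: a token that opens with the
    // qualifier but does not close it absorbs following tokens (and the
    // delimiters between them) until an unescaped closing qualifier appears.
    public static List<String> merge(String[] tokens, char[] delims, char q) {
        List<String> fields = new ArrayList<>();
        StringBuilder pending = null;
        for (int i = 0; i < tokens.length; i++) {
            String t = tokens[i];
            if (pending != null) {
                // Mid-value: restore the delimiter that split us apart.
                pending.append(delims[i - 1]).append(t);
                if (closes(pending.toString(), q)) {
                    fields.add(unwrap(pending.toString(), q));
                    pending = null;
                }
            } else if (t.length() > 0 && t.charAt(0) == q && !closes(t, q)) {
                pending = new StringBuilder(t); // value continues in next token
            } else if (t.length() > 0 && t.charAt(0) == q) {
                fields.add(unwrap(t, q));       // fully qualified in one token
            } else {
                fields.add(t);                  // plain, unqualified value
            }
        }
        return fields;
    }

    // True when the token ends on an UNESCAPED closing qualifier, i.e. the
    // trailing run of qualifier characters has odd length.
    private static boolean closes(String t, char q) {
        int trailing = 0;
        for (int i = t.length() - 1; i > 0 && t.charAt(i) == q; i--) {
            trailing++;
        }
        return trailing % 2 == 1;
    }

    // Strip the outer qualifiers and unescape doubled qualifiers inside.
    private static String unwrap(String t, char q) {
        String inner = t.substring(1, t.length() - 1);
        return inner.replace("" + q + q, "" + q);
    }
}
```

For example, `"a,b",c` splits into the raw tokens `"a` / `b"` / `c`; the first two are merged back into the single field value `a,b`, and `c` stays a plain value.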
If you do not have to process extremely large CSV files, this is a fine solution. Option 3: if you have to work with large files and speed is critical, jumping into Java always helps. Opencsv completed the same task in 70 tick counts. In CF8, I had to use explicit JavaCast() calls to make the constructor call work correctly (still 'init', of course - I was too hasty about that in my previous comment).
Also, when using CF8, you have to remember that it is running on Java 6; you have to recompile 'opencsv' 3. About option 3: the 'opencsv' library is now version 3. It also has new constructors: 'init' won't cut it anymore.

Is this possible? If so, how do I go about this? This code will run the query, writing the file to the server, and automatically download. But it only downloads and writes to one file.
How do I separate it into multiple small result files? This puts everything for the result set into a variable, writes the variable to a file and stores the file in a list. You can either add a unique identifier to the filename and keep them, or you can delete them afterwards.
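The approach described above - chunk the result set, write each chunk to its own file, track the filenames in a list - looks roughly like this as a Java sketch (the original answer is ColdFusion; class name, `_partN` suffix convention, and parameters here are all illustrative):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class CsvChunkWriter {

    // Write `rows` into several CSV files of at most `rowsPerFile` data rows,
    // repeating the header in each file; returns the list of files written.
    public static List<Path> writeChunks(Path dir, String baseName, String header,
                                         List<String> rows, int rowsPerFile)
            throws IOException {
        List<Path> written = new ArrayList<>();
        for (int start = 0, part = 1; start < rows.size(); start += rowsPerFile, part++) {
            int end = Math.min(start + rowsPerFile, rows.size());
            StringBuilder out = new StringBuilder(header).append('\n');
            for (String row : rows.subList(start, end)) {
                out.append(row).append('\n');
            }
            // The part number is the unique identifier keeping names distinct.
            Path file = dir.resolve(baseName + "_part" + part + ".csv");
            Files.write(file, out.toString().getBytes(StandardCharsets.UTF_8));
            written.add(file);
        }
        return written;
    }
}
```

The returned list is what you would then loop over to serve the files for download, and afterwards either keep them (with their unique suffixes) or delete them.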
Asked 7 years ago. Active 7 years ago.

What I'm trying to do: a user puts in a date, then presses a Download button, which runs a query generating the results for the date they entered. Below is the current code I have for the query and writing the data to the server.
Some things adjusted.

SaSquadge