If a file is too big to upload via the import page and thus times out, the page allows users to resubmit the same file and continue the import where it left off.
There is a small bug in this process (or at least the page needs more documentation). If the user enters a value other than zero in the "Number of records(queries) to skip from start" field, then upon resubmission this value is still set and causes that many records to be skipped again, which results in an inadvertent loss of data.
Suggested fix: ignore the value in that field on the resubmit.
Logged In: YES
user_id=210714
Originator: NO
What about replacing the message
Script timeout passed, if you want to finish import, please resubmit same file and import will resume.
with
Script timeout passed, if you want to finish import, please resubmit same file and import will resume. Do not use the "Number of records to skip" dialog.
Logged In: YES
user_id=210714
Originator: NO
Better: when this happens, remove the input field in the "Number of records to skip" dialog and show instead how many records will be skipped.
Logged In: YES
user_id=1472046
Originator: YES
Having tested this more, I realize that the issue I am seeing is not due to the "Number of records" skipped, but to the last record in a file being missed if there is no line-terminating character at the end of the file. It might be useful to indicate that on the import page...
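For illustration (this is not the importer's actual source, just a minimal sketch of the failure mode described above): a parser that only emits a record when it sees the terminator will silently drop whatever is buffered when the file ends without one.

```python
def parse_records_naive(data: str) -> list[str]:
    """Emit a record only when a newline terminator is seen."""
    records, buf = [], ""
    for ch in data:
        if ch == "\n":
            records.append(buf)
            buf = ""
        else:
            buf += ch
    return records  # a trailing, unterminated record is silently lost

def parse_records_fixed(data: str) -> list[str]:
    """Same loop, but flush the final buffer even without a terminator."""
    records, buf = [], ""
    for ch in data:
        if ch == "\n":
            records.append(buf)
            buf = ""
        else:
            buf += ch
    if buf:
        records.append(buf)
    return records

# File lacking a final line-terminating character:
print(parse_records_naive("a\nb\nc"))  # ['a', 'b'] -- record "c" is lost
print(parse_records_fixed("a\nb\nc"))  # ['a', 'b', 'c']
```

Flushing the remaining buffer at end-of-input (or documenting the requirement for a trailing terminator on the import page) would avoid the silent loss.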
Logged In: YES
user_id=210714
Originator: NO
I'll open a new bug entry.