I have a backup script using dar that has been running without issue for years. The dar version in use is now 2.7.17, from Debian trixie.
There was an issue on Saturday. The previous week's full backup, done Friday, required 40 4G slices. This week's Friday full backup required only 39 slices. When the Saturday incremental ran this week, it balked: it read slice 40 left over from the previous week and concluded the set was corrupted. A -t test balked for the same reason. Once the old slice 40 dar file was erased, -t worked as expected and reported no errors in the 39-slice full backup created this week.
Would it be possible to check the file modification/creation dates of the slices as you process them and reject any slice that was written before the first slice? I'm not sure why my file glob didn't remove the old slice - it did remove its par2 files - before writing this Friday's full backup, but that's my issue to deal with. dar shouldn't be trying to process files from previous backups after a new backup set is complete. I suspect that if there had been a gap in the numbering - say the previous set had needed 41 slices and slice 40 had already been removed - the set would have processed fine. Unfortunately, testing this on a live system is tricky, since forcing just enough less to be backed up to land exactly one slice short isn't easy; you might have more luck reproducing it with a small slice size.
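To make the suggestion concrete, here is a rough sketch of the check I have in mind. This is Python, not dar's actual code; it assumes the standard basename.N.dar slice naming, and the script name and arguments are made up:

```python
#!/usr/bin/env python3
"""Sketch: flag any slice older than slice 1 of the same set.

Assumes slices are named basename.N.dar with N starting at 1,
and that dar writes slice 1 first, so any slice with an earlier
mtime must be a leftover from a previous backup run.
"""
import re
import sys
from pathlib import Path

def find_stale_slices(directory: str, basename: str) -> list[Path]:
    slice_re = re.compile(re.escape(basename) + r"\.(\d+)\.dar$")
    slices = {}
    for path in Path(directory).iterdir():
        m = slice_re.match(path.name)
        if m:
            slices[int(m.group(1))] = path

    if 1 not in slices:
        sys.exit(f"no first slice found for {basename}")

    first_mtime = slices[1].stat().st_mtime
    # Anything written before slice 1 cannot belong to this set.
    return [p for n, p in sorted(slices.items())
            if p.stat().st_mtime < first_mtime]

if __name__ == "__main__":
    for p in find_stale_slices(sys.argv[1], sys.argv[2]):
        print(f"stale slice (older than slice 1): {p}")
```

In my Saturday failure, this would have flagged the old slice 40 immediately, since its mtime predated slice 1 of the new set.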
I realize it might be possible to script something that specifies the last valid slice and prunes anything beyond it to prevent this, but it really seems like something dar should just do.
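For what it's worth, the workaround I'm picturing would look something like this sketch, run after the full backup completes and before the first incremental. The helper name and arguments are mine, not anything dar provides:

```python
#!/usr/bin/env python3
"""Sketch: given the last valid slice number of the new set,
remove any higher-numbered leftovers from previous backups.

Usage: prune_slices.py DIRECTORY BASENAME LAST_VALID_SLICE
"""
import re
import sys
from pathlib import Path

def prune_above(directory: str, basename: str, last_valid: int) -> None:
    slice_re = re.compile(re.escape(basename) + r"\.(\d+)\.dar$")
    for path in sorted(Path(directory).iterdir()):
        m = slice_re.match(path.name)
        # Delete only slices numbered past the end of the new set.
        if m and int(m.group(1)) > last_valid:
            print(f"removing stale slice {path}")
            path.unlink()

if __name__ == "__main__":
    prune_above(sys.argv[1], sys.argv[2], int(sys.argv[3]))
```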