There are situations in which libexif causes a program crash because it dereferences a NULL pointer resulting from an allocation failure.
I have noticed the following case, but I don't know whether it is the only one. exif_data_save_data_content calls exif_data_save_data_entry in a for loop, passing as a parameter the double pointer 'd', which points to the address of the output buffer. This buffer address can change because reallocations are done inside exif_data_save_data_entry. If one of these reallocations fails, exif_data_save_data_entry returns but does not signal the failure to the caller in any way, so the next exif_data_save_data_entry call crashes in the exif_set_short call that uses *d.
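To illustrate the control flow, here is a minimal standalone sketch of the pattern and a possible fix: the callee signals a reallocation failure through its return value, and the caller stops looping as soon as the buffer is gone. The functions save_entry and save_content are hypothetical stand-ins for exif_data_save_data_entry and exif_data_save_data_content, reduced to the buffer handling only.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for exif_data_save_data_entry: grows the shared
 * output buffer through the double pointer 'd' and, unlike the current
 * libexif code, reports a reallocation failure to the caller. */
static int save_entry(unsigned char **d, unsigned int *ds, unsigned int needed)
{
    unsigned char *t = realloc(*d, *ds + needed);
    if (!t)
        return 0;           /* signal failure instead of returning silently */
    *d = t;
    memset(*d + *ds, 0, needed);  /* placeholder for writing the entry */
    *ds += needed;
    return 1;
}

/* Hypothetical stand-in for exif_data_save_data_content: stops the loop
 * on the first failure instead of dereferencing a stale or NULL *d. */
static int save_content(unsigned char **d, unsigned int *ds,
                        unsigned int count, unsigned int entry_size)
{
    unsigned int i;
    for (i = 0; i < count; i++)
        if (!save_entry(d, ds, entry_size))
            return 0;       /* do not touch *d again after a failure */
    return 1;
}
```

With this shape, the crash in exif_set_short cannot happen, because no caller ever uses *d after a failed reallocation.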
Although a reallocation failure is very unlikely with valid data, it can easily occur with corrupt data, so it is critical to correct problems like this. I first noticed this problem with an Olympus image that had a byte-order mismatch between the MakerNote data and the rest of the Exif data (probably due to editing by some software). An incorrectly interpreted size field had caused exif_mnote_data_olympus_save to allocate a very large buffer, and this caused a later reallocation to fail, leading to the situation described above. Although recent versions contain heuristics to infer the byte order, I have encountered files for which this logic does not work, and heuristics like these cannot catch all possible data corruption cases, so these issues should be fixed at a more fundamental level.
In general, when memory is allocated for sub-items read from a buffer of known length (as in exif_mnote_data_olympus_load, for example), the required memory block cannot possibly be larger than the total size of the buffer (*provided* the data are not converted in a manner that would increase their byte count). If the requested allocation *does* appear to be larger, we can conclude that the data are corrupt. I implemented this kind of logic for my own use by overriding libexif's default memory allocation with functions that refuse to allocate more than 64 kB, as this is an upper limit for the Exif block length in a JPEG image. Using this, I was able to prevent the mentioned crashes with high probability, because the data corruption was then detected before exif_data_save_data_content was ever called. Although crashes should be eliminated in more reliable ways than this, I think some reasonable allocation limit would be useful in any case, because allocating very large buffers can take a very long time, making it possible to perform DoS attacks using corrupt input data.
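The capped allocators I used look roughly like the following sketch. The 64 kB cap comes from the JPEG APP1 segment length field being 16 bits. In libexif such functions would be installed through the exif-mem.h hooks (exif_mem_new and friends), which take an additional ExifMem* argument; that argument is omitted here to keep the sketch self-contained, so the names and exact signatures below are illustrative, not libexif's.

```c
#include <stdlib.h>

/* Upper bound for the Exif block in a JPEG image: the APP1 segment
 * length field is 16 bits, so no valid Exif block can exceed 64 kB. */
#define EXIF_ALLOC_CAP (64UL * 1024UL)

/* Refuse oversized requests: with corrupt data, a bogus size field can
 * otherwise trigger a huge (and slow) allocation. */
static void *capped_alloc(unsigned long size)
{
    if (size > EXIF_ALLOC_CAP)
        return NULL;        /* treat the request as evidence of corruption */
    return calloc(1, size);
}

static void *capped_realloc(void *buf, unsigned long size)
{
    if (size > EXIF_ALLOC_CAP)
        return NULL;        /* caller still owns 'buf' and must free it */
    return realloc(buf, size);
}
```

With these installed, a corrupted size field makes the allocation fail immediately, so the corruption is detected long before exif_data_save_data_content runs.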
Also, libexif contains places where a previous buffer pointer is directly replaced by the result of exif_mem_realloc without testing whether the reallocation succeeded. A memory leak can then occur in the case of failure, as the original buffer is still allocated but all references to it may have been lost. This occurs at least in exif_content_remove_entry, in exif-data.c, and in some of the MakerNote code. (A further note regarding exif_content_remove_entry: perhaps reducing the buffer size never causes an allocation failure, but it would be safest to do the check here as well. In fact, the current code would cause not only a leak but a crash if the reallocation failed, because c->entries would become NULL while c->count is not set to zero.)
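The leak-prone pattern is the classic `buf = realloc(buf, n);`: on failure, realloc returns NULL but leaves the original block allocated, and the only pointer to it has just been overwritten. A safe idiom keeps the old pointer until the reallocation is known to have succeeded; the helper name grow_buffer below is mine, not libexif's.

```c
#include <stdlib.h>

/* Safe reallocation idiom: the old pointer in *buf is only replaced
 * after realloc has succeeded. On failure, *buf is left untouched, so
 * the caller can still free it (no leak) or keep using the old data. */
static int grow_buffer(unsigned char **buf, unsigned int new_size)
{
    unsigned char *t = realloc(*buf, new_size);
    if (!t)
        return 0;           /* *buf still points at the original block */
    *buf = t;
    return 1;
}
```

Applied to exif_content_remove_entry, this would also prevent the crash mentioned above, since a failed shrink would leave c->entries valid instead of NULL while c->count stays nonzero.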
Directly replacing a previous buffer pointer with the reallocation result is also questionable for another reason: all previous data in the buffer are lost, since the pointer becomes NULL in the case of failure. For example, if the allocation fails because of a single corrupted field, it might be useful to preserve the other fields that have been handled thus far, and perhaps even to try handling the fields after the corrupted one, rather than discarding all the data. However, I'm not sure whether this point has any relevance in the current libexif code.