With a slave unit connected to a PTP master via a crossover cable, I noticed that peerMeanPathDelay.nanoseconds tended to settle on 256-nanosecond boundaries once owd_filt->s_exp had reached its maximum value. In my case it always ended up at exactly 256 nanoseconds, even though the delaySM and delayMS values indicated it should be higher.
The problem comes from the integer math in this section of code:
/* filter 'meanPathDelay' */
owd_filt->y = (owd_filt->s_exp - 1) *
              owd_filt->y / owd_filt->s_exp +
              (ptpClock->peerMeanPathDelay.nanoseconds / 2 +
               owd_filt->nsec_prev / 2) / owd_filt->s_exp;
Please note that the fragment "(ptpClock->peerMeanPathDelay.nanoseconds / 2 +
owd_filt->nsec_prev / 2) / owd_filt->s_exp" loses several low-order bits of precision: the integer division by s_exp discards the fractional part of the correction term on every update.
I was able to cure my problem by converting owd_filt->y to a double and casting the above calculation to double. This fix removed 50 nanoseconds of error from my slave.
I've included this fix in the latest code.