I hope this gets the point across clearly enough, and forgive me if I mess up the math:
The integrated Dolby decoder evidently does its math in quite literal a fashion,
i.e. centre = 0.707L + 0.707R [I know that's simplified; it's sqrt(0.5)·L + sqrt(0.5)·R if I have things correct]
.. which works fine in an analogue system that has the headroom to do this without clipping, or with digital sources that "follow the rules" [i.e. where sqrt(0.5)·L + sqrt(0.5)·R never exceeds 0 dBFS], but it constantly causes clipping in the centre channel when done in the digital domain with material where the encoding party has maximised the volume.
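To make the arithmetic concrete, here's a minimal sketch of that literal matrix sum (my own illustration, not the decoder's actual code), using 1.0 to represent 0 dBFS:

```python
import math

def decode_centre(left, right):
    """Passive matrix centre extraction: C = sqrt(0.5)*L + sqrt(0.5)*R."""
    return math.sqrt(0.5) * left + math.sqrt(0.5) * right

# Rules-following material: the sum stays within full scale.
quiet = decode_centre(0.5, 0.5)      # ~0.707, no problem

# Volume-maximised material: both channels sit near full scale.
loud = decode_centre(0.95, 0.95)     # ~1.34, i.e. roughly +2.6 dB over 0 dBFS
print(loud > 1.0)                    # True: the centre channel must clip
```

In analogue there's headroom above "full scale" to absorb that ~1.34; in a fixed-point digital path there isn't, so the excess is simply truncated.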
This can be remedied by using the volume filter to attenuate levels by at least 7 dB before the signal reaches the Dolby decoder, but that shouldn't be necessary, as it is not transparent or obvious to most end users, who just hear distortion, don't understand why, and never use the filter again under the assumption that it's broken [when the real problem is the source material being processed].
It could be remedied more easily, eliminating the need for manually-specified level pre-processing, by taking the clipping expected from volume-maximised material into account beforehand, either by
* attenuating the L and R signals before the decoder, e.g. feeding 0.5·L and 0.5·R into the decoder, eliminating the possibility of clipping, or
* not merely processing the signal and truncating it at the decoder output [where the result exceeds 0 dBFS but mathematically may have been a valid unclipped signal outside the range of the output quantisation depth], but attenuating it after processing and before requantising to the desired output format, again eliminating the possibility of clipping.
Additionally, to avoid unnecessary attenuation with program material that "follows the rules", a check could be added to the filter that enables a "maximised mode", applying the pre- or post-attenuation only if the summed level sqrt(0.5)·L + sqrt(0.5)·R of the input exceeds 0 dBFS. This would cause a "duck" in volume after the first loud channel-simultaneous peak, but that would seem a better compromise than simply attenuating everything and users then finding that movies which "follow the rules" are wayyyyyy too quiet, while volume-maximised material sounds "normal".
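A sketch of what I mean, again my own hypothetical illustration: the decoder runs unattenuated until the summed centre first exceeds 0 dBFS, then latches into attenuated mode for the rest of the programme, which is where the one-time "duck" comes from:

```python
import math

class AdaptiveDecoder:
    """Proposed 'maximised mode': attenuate only after the matrix sum is
    first observed to exceed full scale (1.0 = 0 dBFS)."""

    def __init__(self):
        self.maximised = False  # latches on at the first over-full-scale peak

    def centre(self, left, right):
        raw = math.sqrt(0.5) * left + math.sqrt(0.5) * right
        if abs(raw) > 1.0:
            self.maximised = True          # material doesn't follow the rules
        if self.maximised:
            return raw / math.sqrt(2.0)    # attenuate instead of clipping
        return raw                          # rules-following: pass untouched
```

Rules-following material never trips the latch and plays at full level; maximised material ducks once at its first simultaneous peak and stays clean thereafter.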
Hope that made sense.
I like using the built-in decoder because I don't need to switch my outboard processor to do so, and can still do things like run games with 5.1 audio at the same time as watching video. Because I use it this way frequently, I'm quite aware of the problem in the decoder's design, which would give perfectly good output with unmaximised audio but, thanks to poor encoding practices [i.e. bitpushing], is rendered effectively faulty.