RE: [GD-General] The joy of type aliasing and C
From: Neil S. <ne...@r0...> - 2003-12-28 14:26:24
> Of course, no one actually answered the question. I restated,
> I'll see what nits they pick with my new question. *sigh*

You won't get a proper answer because they have nothing that will help you. The C and C++ standards are retarded in the sense that, increasingly over the years, they have tried to improve platform independence by tightening up binary compatibility issues, which is a pretty fruitless exercise. If you really want binary compatibility, you have to accept that the basic types must be the same on all platforms, and that is something they are not prepared to accept, as it would hurt performance on non-conforming systems.

So you end up with a set of rules based on the notion that you cannot assume very much about anything at all, such as the relative alignment of floats and ints, or even crazy things like a 32-bit int not necessarily being able to represent the data in a 32-bit float. For example, the three 'issues' that were mentioned on comp.std.c *do not actually happen* on any implementation they know of. In other words, the law is an ass.

If the standards committee actually wanted to be helpful, they would have specified exceptions to these rules, such as: when two types have identical size and alignment, type punning between them is well-defined. This might reduce theoretical binary compatibility, but it would not harm actual binary compatibility one bit, and it would at least provide rules compatible with what people actually do.

The real insult is that idioms like *(int *)&f are in common use, and most compilers handle them exactly as users (not the standard) expect. This is a case where the real standard is the one that actually exists in practice, not the one some academics have made up.

Incidentally, when someone starts asking you to define 'works', you know nothing good will come out of the discussion. ;)

- Neil.
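
P.S. To make that concrete, a minimal sketch of both forms (assuming, as on every implementation under discussion, that int and float are both 32 bits with the same alignment):

#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    assert(sizeof(int) == sizeof(float)); /* the precondition for the pun */

    /* The idiom in question: reinterpret the bits of f as an int.
     * Undefined by the letter of the standard, yet compiled as
     * expected by every compiler people actually use. */
    int pun = *(int *)&f;

    /* The form the standard does bless: copy the bytes. */
    int copy;
    memcpy(&copy, &f, sizeof copy);

    /* On an IEEE 754 machine, both lines of output are 3f800000. */
    printf("%08x %08x\n", (unsigned)pun, (unsigned)copy);
    return 0;
}

Any decent compiler turns both into the same load, so the blessed memcpy form costs nothing; it just keeps the language lawyers off your back.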