Hi:
I am using Dev-C++ 4.9.9.2. When I use the post-increment (++) operator it
fails, whereas pre-increment (++) works. Following is a small program to
demonstrate the problem:
#include <iostream>
#include <cstdlib>
using namespace std;
void func1 (int*, int*);
int main()
{
int a=1;
int b=1;
cout <<"Before call func1 (int*, int*): a=" <<a <<",b=" << b << endl;
func1(&a, &b);
cout <<"After call func1 (int*, int*): a=" <<a <<",b=" << b << endl<<endl;
system("pause");
}
What is wrong with the above code?
void func1 (int* a, int* b)
{
*a++; // ++*a ok
*b++; // ++*b ok
}
Check your order of precedence.
Your code incremented the pointer, then dereferenced it, but modified nothing.
Also use code tags!
Hi cpns:
thx for your reply. But this one works as I expected:
void func1 (int* a, int* b)
{
++*a;
++*b;
}
As long as you know why that works and the previous doesn't.
Yes it does. Why are you surprised? Did you look at the precedence table I
linked?
Pre-increment has a different order of precedence than post-increment. In fact
it has the same precedence as the dereference and groups right-to-left, so
++*a is parsed as ++(*a), whereas the higher-precedence post-increment makes
*a++ parse as *(a++).
Note that you could avoid this issue altogether by using C++ and int&
arguments rather than int*. That would be a better solution to this exercise.
How about this?
void func1 (int* a, int* b)
{
*a = *a+1;
*b = *b+1;
}
What about it? Why do you need more than one way of doing a very simple thing?
If you want to play that game, equally you could have:
*a += 1 ;
*b += 1 ;
I still suggest that you use references instead.
ok. is there any advantage using a+=1 over a= *a+1?
My apologies, I messed up the mark-up tags; that should have been:
is there any advantage using *a += 1 over *a = *a + 1?
Aesthetics!? Possibly maintainability. Potentially the latter evaluates *a
twice, but a decent compiler might avoid that; either way it would seldom be
critical.
I really think that you may be "sweating the small stuff" worrying about this.
Ok. If we can use & in the place of pointers, why do they still keep pointers?
Thanks cpns for your time & patience to answer my questions. Bye
2010-01-25 16:40:22 GMT
Ok. If we can use & in the place of pointers, why do they still keep pointers?
Well, you can only do that if you are using C++ (and you have not stated
either way), and while a reference type is suitable in this case, that is not
always so. C++ still needs pointers, but references should be used where
possible (and where C compilation is not required): since they are more
restrictive, they guard against a number of common pointer errors and lead to
cleaner syntax.
Hmm, I was going to say a++ would be faster because it's compiled to a single
increment instruction, whereas a += 1 would be a separate load, add and store.
But it seems Visual C++ compiles a++ into the same as a += x.
Same as a += 1 perhaps, not a += x, unless x were a constant of value 1. If
x were a variable, the same code would not be generated.
Yes, modern compilers will produce these trivial optimisations. For example
x *= 2 will often be transformed to a bit shift if x is an integer; the same
is true for any constant power of two.
Yes, that was bad wording, sorry.
I still find it odd that Visual C++ compiles a++ into the same as a += 1.
Or is it just a quirk from the use of disassembly?
Or is it just a quirk from the use of disassembly?
The disassembly shows you exactly the code the compiler generated (how could
it do anything else?). Why are you surprised? They are exactly equivalent and
the compiler knows that. Even without optimisation options enabled, compilers
will perform these trivial optimisations for common idioms. This is why
"using assembler for performance" seldom produces significant advantage (if
any). The compiler acts like an expert system; you'd have to know the
instruction set and architecture inside-out to consistently outperform the
compiler in hand-coded assembler. And you'd probably consistently deliver
late! ;-)
Optimisations that do not result in the removal of code, or data, or change
the order of execution (i.e. optimisations restricted to a single statement of
source) are commonly applied even without optimisation settings because they
do not affect symbolic debugging and code stepping.
I would not be surprised either if you got exactly the same results using:
a = a + 1 ;
You're confusing me. Your words sound like you expect the compiler to produce
these tiny millisecond optimisations. What I'm saying is it doesn't.
"millisecond" !? A few tens of nanoseconds in fact! And that is exactly what
the compiler does do, as you have demonstrated yourself. The compiler is free
to generate whatever code it likes so long as it behaves exactly as required
by the language specification. Why would it not produce the smallest/fastest
code possible that does that? Thus such optimisations could be highly
significant in an iterative algorithm comprising primarily arithmetic
operations.
In the compiler world such techniques are referred to as "Machine idioms and
instruction combining".