hello there,
i was wondering if you guys have a better explanation of DWORD terminology used in win programming. i sure have a hard time understanding it.
according to msdn:
WORD is 16-bit unsigned integer.
DWORD is 32-bit unsigned integer.
what's the difference between these two datatypes on any of today's computer systems, and is it ok to work like this:
int a = 0;
WORD aa = 13;
DWORD aaa = 20;
a = aa;
aaa = a;
could this work ok?
and the hardest for me to understand is low-order, high-order. what do they really mean? and if i have a double, int, or char, how do i use them with dwords and words.
thank you..
The C and C++ languages specify several integer types:
char
short
long
long long (this is newer)
int
They come in signed and unsigned flavours.
char is 8 bits
short is 16 bits
long is 32 bits (on Windows; the standard only guarantees at least 32)
long long is 64 bits
int is the size of the processor's native integer (on a 32-bit processor it is 32 bits, on a 16-bit processor it is 16 bits).
The types Microsoft introduced (among others)
WORD
DWORD
are simply typedefs that rename existing unsigned types:
typedef unsigned short WORD;
typedef unsigned long DWORD;
So you may use them as you have described above.
C and C++ allow implicit conversion between different integer types and floating point types.
rr
I think it would be clearer to stay away from WORD and DWORD unless you're into assembly language.
Hi Everyone:
That explanation's pretty good. The way I look at it is a little bit different (no pun intended). This is a carryover from my days programming in Assembly.
As you know, the computer deals only in ones and zeros; each one is called a bit. Four bits together are called a nibble (%1010). Eight bits are called a byte. The nibble is the basis of hexadecimal notation: each hex digit represents one nibble, so 11111111 = $FF in hexadecimal (I'll come back to this in a minute). Putting two bytes together (for 16 bits) gives a word, and putting two words together (for 32 bits) gives a double word.
Now for low order and high order: if you are talking about a word, which I said is made up of two bytes (e.g. $1A and $2B = $1A2B), $1A is called the high order byte and $2B is called the low order byte. If you have two words, say $3C4D and $5E6F = $3C4D5E6F, then $3C4D is the high order word and $5E6F is the low order word. If you are referring to a single byte, which is 8 bits, the one on the far left is the high order bit and the one on the far right is called the low order bit.
Now, just as an aside, some computers store words and double words low order byte first (little-endian), while others store them the reverse way (big-endian). But you usually only need to worry about this when you are programming in Assembly or exchanging binary data between machines.
As rr says, Microsoft, to make your life easier, defines WORD and DWORD for unsigned short and unsigned long, because that's easier to remember than trying to recall how big short and long are.
A char is one byte; if you look at a table of ASCII characters you will notice they run from $00 through $FF (0 through 255 in decimal). On a 16-bit system an integer is two bytes, so the bit patterns run from $0000 through $FFFF, or 0 through 65535. Since an int is signed, its range is -32768 to +32767. If you use unsigned int you only get non-negative numbers, which changes the range to 0 through 65535.
Don't know if this answers all your questions.
Hope this isn't too confusing. See Ya
Butch
Eugene says: "I think it would be clearer to stay away from WORD and DWORD unless you're into assembly language."
WORD and DWORD are used extensively in Windows, so you can't very well stay away from them, nor should you want to. They are used with a lot of the API functions.
You'll see functions using DWORDs for flags, dwFlags, where each bit gives a different option, allowing for 32 options that can be OR'd together.
When converting from one type to another you will probably need to cast them, such as:
WORD a;
UINT b;
a = 25;
b = (UINT) a; //cast WORD to UINT
You might not always need to, but often you'll get a warning if you don't.
black_adder
This highlights, IMO, a fundamental flaw in the way the Windows programming model is implemented - it is too low level.
Programming for Windows is a fairly complicated task that is only made more complicated by having lots and lots of low level details exposed to the user. All that Hungarian notation is good if you are interested in taking a low level look at how things work, but in reality it gets in the way of a high level view.
for example:
lpsz - long pointer to a null terminated string.
Exposing all this low level detail makes me (the programmer) responsible for doing a lot of work that should really be handled by the language / compiler.
Of course, use of the C language precluded any higher level abstraction, since we are dealing with a limitation of the language. Though, to be fair, there were not really a lot of language options available at the time that would have offered better abstraction - Ada and maybe Smalltalk, but I'm not sure - at least we can be thankful it wasn't done in LISP.
End of rant.
If you want a high level view, use a RAD. Programming for Windows isn't anywhere near as complicated as a lot of people make it out to be, it's just a matter of learning how it works, same as anything else.
As for the Hungarian notation, you can usually get away without using it much in Windows, and you can certainly use it if you want even when not programming for Windows; it's just a way of encoding the type of a variable in its name. Just by looking at the name of a variable, you know its type.
They do typedef some things, so you can use something like LPCSTR instead of const char* which gives you a higher level of abstraction, exactly what you wanted.
I think a lot of people are put off Windows programming for the same reason a lot of people don't like spinach: they've been told they won't like it, and get that in their heads before they really try it.
lol, nice, Edmund ;)
Kip