From: Frank V. C. <fr...@co...> - 1999-12-09 22:36:51
|
Jim Koontz wrote:
> > On Thu, 9 Dec 1999, Thomas Maguire wrote:
> > > I think an approach that encodes the ordering in the beginning of
> > > data files and source files is an appropriate approach. This would
> > > be akin to the UNICODE non spacing non breaking code <0xfffe> used
> > > in the beginning of unicode files to tell the reader the ordering.
>
> I think that is a satisfactory solution. Should this be incorporated
> into the standards documents, or requirements?

There are a few requirements here:

1. Big/little endianness issues in the library development. I believe
the CoreLinux C++ standard addresses this.

2. CoreLinux++ should provide a persistence abstraction. A clear and
desirable framework requirement.

3. The CoreLinux++ framework persistence implementation ensures portable
content. For this I would ask the following: Is our persistent data (the
user's application data, I will assume) for local storage? Exchange?
Both?

The reason I ask is that I assume the framework will consist of at least
a suitable persistence abstraction and at minimum one (1)
implementation. If the implementation is a flat-file one, then we can
make ITS requirement and feature one that ensures proper handling
regardless of the byte ordering in the content. For a persistence
implementation that uses XML (for example) this may be a moot point. For
a persistence implementation that uses an underlying RDBMS, I don't know
if it is moot or not.

While I agree with Thomas Maguire and you that it is a solution to data
file exchange between big- and little-endian machines, I think it is
premature for us to make it a requirement of an implementation we don't
have a requirement for yet. When we get there, and if local file storage
is the persistence implementation, then we have this specification
available, which solves the eventual issue. Thoughts? |
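The <0xfffe> marker idea from the thread above can be sketched in a few lines of C++. This is only an illustration, not CoreLinux++ code; the names kByteOrderMark, readMarker, and needsSwap are ours:

```cpp
#include <cstdint>
#include <cstring>

// Marker written at the head of a data file, mirroring the Unicode BOM.
const uint16_t kByteOrderMark = 0xFEFF;
const uint16_t kSwappedMark   = 0xFFFE;  // what the BOM looks like flipped

// Interpret the first two bytes of a file in the reader's native order.
uint16_t readMarker(const unsigned char* head)
{
    uint16_t value;
    std::memcpy(&value, head, sizeof value);
    return value;
}

// True when the writer's byte order differs from the reader's, meaning
// every multi-byte field that follows must be byte-flipped.
bool needsSwap(const unsigned char* head)
{
    return readMarker(head) == kSwappedMark;
}
```

A writer simply emits kByteOrderMark first in its own native order; a reader that sees kSwappedMark knows every following field arrives flipped.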
|
From: Jim K. <jrk...@at...> - 1999-12-09 22:17:44
|
On Thu, 9 Dec 1999, Thomas Maguire wrote:
> I think an approach that encodes the ordering in the beginning of data
> files and source files is an appropriate approach. This would be akin
> to the UNICODE non spacing non breaking code <0xfffe> used in the
> beginning of unicode files to tell the reader the ordering.

I think that is a satisfactory solution. Should this be incorporated
into the standards documents, or requirements? |
|
From: Thomas M. <t.r...@wo...> - 1999-12-09 20:31:02
|
I think an approach that encodes the ordering in the beginning of data
files and source files is an appropriate approach. This would be akin to
the UNICODE non spacing non breaking code <0xfffe> used in the beginning
of unicode files to tell the reader the ordering.

----- Original Message -----
From: Jim Koontz <jrk...@at...>
To: <cor...@ma...>
Sent: Thursday, December 09, 1999 10:29 AM
Subject: Re: [Corelinux-public] Endianess

> On Thu, 9 Dec 1999, Frank V. Castellucci wrote:
> > > Is endian-ness an issue in Linux? If a data file is created by a
> > > Linux/Intel application, can it be read by the Linux/PowerPC
> > > version of that application?
> > >
> > > Should CoreLinux provide some facility for conversion of
> > > endian-ness in cross-platform data?
> > >
> > > Jim Koontz
> > > jrk...@us...
> >
> > Data Files
> > ----------
> > There is an issue that has to be addressed by the application's
> > persistence implementation or any persistence that the library
> > provides. From a numeric standpoint, using network byte order
> > consistently should avoid that, or of course characters.
> >
> > Source Files
> > ------------
> > There is an issue. I know bitfields and unions may be targets for
> > analysis, and there may be more (I have been corrupted for too long
> > by little-endianness), but avoiding bit manipulation and unions is
> > part of the CoreLinux standards.
> >
> > Binary Libraries and Apps
> > -------------------------
> > Big issue; I don't know of solutions.
> >
> > I'm hoping there are some who are running on both types of Linux
> > implementations (big and little) that we can test/verify against
> > early on in the development stages.
>
> As far as the binary libraries and apps go, I think the only solution
> is to build them on each platform.
>
> The way that I have seen endian-ness resolved in other operating
> systems is to either save in one format, and provide code that does
> bit flipping if the platform reading is opposite-endian, or to save in
> a canonical format that contains a flag field indicating the source
> platform. If the platform reading is opposite-endian from the platform
> that saved the data, the bit flipping takes place. In either case,
> macros that test for endian-ness and perform the bit-flipping are
> provided as part of the API.
>
> Jim Koontz
> jrk...@so...
>
> _______________________________________________
> Corelinux-public mailing list
> Cor...@li...
> http://lists.sourceforge.net/mailman/listinfo/corelinux-public |
|
From: Jim K. <jrk...@at...> - 1999-12-09 15:29:54
|
On Thu, 9 Dec 1999, Frank V. Castellucci wrote:
> > Is endian-ness an issue in Linux? If a data file is created by a
> > Linux/Intel application, can it be read by the Linux/PowerPC version
> > of that application?
> >
> > Should CoreLinux provide some facility for conversion of endian-ness
> > in cross-platform data?
> >
> > Jim Koontz
> > jrk...@us...
>
> Data Files
> ----------
> There is an issue that has to be addressed by the application's
> persistence implementation or any persistence that the library
> provides. From a numeric standpoint, using network byte order
> consistently should avoid that, or of course characters.
>
> Source Files
> ------------
> There is an issue. I know bitfields and unions may be targets for
> analysis, and there may be more (I have been corrupted for too long by
> little-endianness), but avoiding bit manipulation and unions is part
> of the CoreLinux standards.
>
> Binary Libraries and Apps
> -------------------------
> Big issue; I don't know of solutions.
>
> I'm hoping there are some who are running on both types of Linux
> implementations (big and little) that we can test/verify against early
> on in the development stages.

As far as the binary libraries and apps go, I think the only solution is
to build them on each platform.

The way that I have seen endian-ness resolved in other operating systems
is to either save in one format, and provide code that does bit flipping
if the platform reading is opposite-endian, or to save in a canonical
format that contains a flag field indicating the source platform. If the
platform reading is opposite-endian from the platform that saved the
data, the bit flipping takes place. In either case, macros that test for
endian-ness and perform the bit-flipping are provided as part of the
API.

Jim Koontz
jrk...@so... |
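The bit-flipping macros Jim describes could look like the following, written here as inline functions rather than preprocessor macros; the names flip16 and flip32 are our own:

```cpp
#include <cstdint>

// Flip the byte order of a 16-bit value: 0x1234 <-> 0x3412.
inline uint16_t flip16(uint16_t v)
{
    return static_cast<uint16_t>((v << 8) | (v >> 8));
}

// Flip the byte order of a 32-bit value: 0x12345678 <-> 0x78563412.
inline uint32_t flip32(uint32_t v)
{
    return  (v << 24)
          | ((v <<  8) & 0x00FF0000u)
          | ((v >>  8) & 0x0000FF00u)
          |  (v >> 24);
}
```

A canonical-format reader checks the saved flag field first and applies these only when the writer was opposite-endian; applying a flip twice is a no-op.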
|
From: Frank V. C. <fr...@co...> - 1999-12-09 12:06:55
|
> Is endian-ness an issue in Linux? If a data file is created by a
> Linux/Intel application, can it be read by the Linux/PowerPC version
> of that application?
>
> Should CoreLinux provide some facility for conversion of endian-ness
> in cross-platform data?
>
> Jim Koontz
> jrk...@us...

Data Files
----------
There is an issue that has to be addressed by the application's
persistence implementation or any persistence that the library provides.
From a numeric standpoint, using network byte order consistently should
avoid that, or of course characters.

Source Files
------------
There is an issue. I know bitfields and unions may be targets for
analysis, and there may be more (I have been corrupted for too long by
little-endianness), but avoiding bit manipulation and unions is part of
the CoreLinux standards.

Binary Libraries and Apps
-------------------------
Big issue; I don't know of solutions.

I'm hoping there are some who are running on both types of Linux
implementations (big and little) that we can test/verify against early
on in the development stages. |
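The network-byte-order route Frank mentions is what the BSD socket helpers already provide. A minimal sketch, assuming a POSIX <arpa/inet.h>; toWire and fromWire are invented wrapper names:

```cpp
#include <arpa/inet.h>  // htonl()/ntohl(), POSIX
#include <cstdint>

// Convert to big-endian (network) order before writing a numeric field,
// and back to host order after reading; writer and reader then agree
// regardless of the endianness of either host.
uint32_t toWire(uint32_t host)   { return htonl(host); }
uint32_t fromWire(uint32_t wire) { return ntohl(wire); }
```

On a big-endian host both calls are identity functions, so the convention costs nothing there.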
|
From: Jim K. <jrk...@cs...> - 1999-12-09 05:29:59
|
Is endian-ness an issue in Linux? If a data file is created by a
Linux/Intel application, can it be read by the Linux/PowerPC version of
that application?

Should CoreLinux provide some facility for conversion of endian-ness in
cross-platform data?

Jim Koontz
jrk...@us... |
|
From: Frank V. C. <fr...@co...> - 1999-12-09 00:39:59
|
> Simple accessor/mutator functions should be written in the class
> declaration and declared as inline. This way there will be no
> overhead.

I think caution is in order here. While you can go inline to gain
performance (the issue of optimize switches is still open), it also
means that implementation changes will require a more significant
rebuild on behalf of the application developer using the class
libraries. So what happens is that all applications that (hopefully) use
CoreLinux++ libraries will be forced to re-distribute their applications
if we change the inline definition.

I will also defer back to my original position that we don't know what
behavior is introduced by extensions to classes we provide. So in
regards to the OOA/OOD 4.1, I believe that in general we stick to the
use of accessors and mutators instead of direct member manipulation,
until such point as we have empirical evidence for a specific case where
we are choking on performance.

--
Frank V. Castellucci
http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux
http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
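For reference, the pattern being debated looks like this; Counter is an invented example, not a CoreLinux++ class:

```cpp
// Accessors defined inside the class body are implicitly inline, so
// callers pay no function-call overhead -- but because the bodies live
// in the header, any change to them forces every client to recompile
// (and, for a shipped library, to redistribute).
class Counter
{
public:
    Counter() : theCount(0) {}
    int  count() const   { return theCount; }  // inline inspector
    void setCount(int c) { theCount = c; }     // inline mutator
private:
    int theCount;
};
```

Moving the bodies out of line into the .cpp file inverts the trade-off: a call per access, but clients only relink.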
|
From: Jens B. J. <jjo...@bd...> - 1999-12-08 20:21:30
|
"Frank V. Castellucci" wrote:
> Here is another one I received direct and felt a group discussion was
> a better idea. (My comments are noted by the /* */)
> ---------------------------------------------------------------------
>
> Frank,
>
> 4.1 Abuse of Member Data
> Classes should access their own member data via accessor and modifier
> functions, just like clients. If a class provides overloadable
> modifier functions, but does not use them internally, then descendant
> implementations may no work correctly.
>
> <sp> ...may not work correctly.
>
> -- Will correct
>
> What about the function call overhead incurred by accessing readily
> available data by calling accessor and mutator functions by other
> member functions?

Simple accessor/mutator functions should be written in the class
declaration and declared as inline. This way there will be no overhead.

[snip]

> 6.11 Portability
> Standard: Native C++ data types shall not be used. Use the Types.hpp.
>
> /*
> This applies to built-in types (int, long, double, etc).
> */

Really?! Hmmm. This is a new and austere requirement for me. What sort
of things are in Types.hpp?

Also, does he really have to name the headers .hpp? Why not just .h or
at least .H?

--
Jens B. Jorgensen
jjo...@bd... |
|
From: Frank V. C. <fr...@co...> - 1999-12-08 18:59:36
|
Here is another one I received direct and felt a group discussion was a
better idea. (My comments are noted by the /* */)
------------------------------------------------------------------------

Frank,

4.1 Abuse of Member Data
Classes should access their own member data via accessor and modifier
functions, just like clients. If a class provides overloadable modifier
functions, but does not use them internally, then descendant
implementations may no work correctly.

<sp> ...may not work correctly.

-- Will correct

What about the function call overhead incurred by accessing readily
available data by calling accessor and mutator functions by other member
functions?

-- As class library objects, many (if not all) methods will be virtual.
If we don't know to what extent a derivation has overloaded an accessor
or mutator, then we run the risk of getting unexpected behavior. For
concrete classes that don't override, this is not as much of a problem.

Sections 6.5 and 6.6 are marked as dated. How does this affect the
standards? Does this mean that namespaces and RTTI are acceptable, or is
this an item for discussion? Are these really analysis and design
issues? Should namespaces and RTTI be included in the standards
documents if for no other reason than to extend the shelf life of the
standard?

/*
This should be clarified. Yes, namespaces and RTTI are acceptable in my
opinion.
*/

6.11 Portability
Standard: Native C++ data types shall not be used. Use the Types.hpp.

/*
This applies to built-in types (int, long, double, etc).
*/

Standard: Native C++ classes ( streams, stl, string ) should be used but
it is recommended that we at least wrap them. For example:

class EXPORT String : public string

If one of the goals of the project is to make C++ on Linux more
accessible and attractive through the development of coding standards,
doesn't this present a barrier to entry by increasing the learning
curve?

What about developers who are new to the Linux operating system, but are
familiar with C++ and the implementation of the Standard C++ Library?
Does that mean their code is not to standards because they directly
reference the Standard C++ Library classes? Do you intend to wrap every
class from the Standard Library?

I don't believe that either of these contribute to clarity, and they
should be made guidelines if they are included at all.

/*
There are a couple of points here:
1. If a developer wants to use std:: classes direct, that is fine.
2. In the String example the direction is clearly to allow the use of a
string abstraction, but the standard implementation doesn't help with
the real-world issues such as UCS2 and UCS4.

I agree to the guideline disposition. I believe that our analysis of the
requirements (which are still coming) will show that the treatments we
need to apply to otherwise standard constructs will be for the logical
extensions.
*/

8. Tools and Methodologies
I agree that UML should be used as the standard for analysis and design. |
|
From: Frank V. C. <fr...@co...> - 1999-12-08 13:47:35
|
I received this yesterday. There are some points that merit close
evaluation. I have followed this post with my return statement to the
author. Hopefully, he will be joining this mailing list.
------------- Original Post
----------------------------------------------
Hi,
I've been having a look at the CoreLinux++ homepage, have read the docs
you have so far created and looked at the source (via anon CVS).
This looks like an interesting project !
A while ago I noticed a group of developers starting a similar project,
called LCL. Unfortunately, they didn't seem to understand some basic
concepts, and had gone down the all-too-common route of coding first,
thinking second.
One thing I couldn't get across to the LCL people was that a 'tree'
class
structure is entirely inappropriate for the design. They didn't even
understand what I meant when I mentioned shared data.
It looks like you've already had a good think about coding standards,
code layout, etc. and also laid down some sensible guidelines for
design.
My background is that I completed a BSc in Comp. Sci. a few years back.
Since then I've worked as a UNIX admin, and then started learning C++
'properly' - i.e. not in an academic setting. It's taken about a year,
but I'm now well used to writing software for the real world. Recently
I started working as a contractor, writing Qt software from home, for
cash :)
I have some knowledge of Rational Rose and the Booch method, from my
days at uni. I'm afraid I haven't got around to reading the much-vaunted
book 'Design Patterns' which seems very trendy these days. My knowledge
is mostly self-taught and helped along by working with the Qt library
for the last year.
You will find I'm very keen on the design of Qt. If you look at recent
versions (2.0 onwards), you'll see that it provides a kind of template
library of its own, often dubbed the 'QTL'. This subset of Qt is best
described as STL, without the least-used classes, and with extra, more
useful features added.
QTL is obviously not as fast as STL, as it's designed to work
cross-platform,
on most Unices and on MS Windows; however, despite its incompleteness
(it doesn't have classes like 'set') and the slight speed disadvantage,
its design and usability is IMO second to none.
Perhaps the greatest advantages of QTL are QString, a 16-bit Unicode
string class, the value-based collections and the widespread use of
both explicit- and implicit- shared data.
I'm not entirely sure what the aim of your project is.
Are you looking to replace libg++ with a better library ?
Or are you attempting to provide a C++ version of libc, with socket and
thread wrappers, etc ?
Or... ?
I'm quite interested in getting involved.
While I'm currently very busy finishing off a contract and working
on some code for the upcoming KDE release (on the 15th), I'll have
plenty
of free time over the new year period.
Cheers,
Rik
p.s. Some topics I'd like to throw on the table for discussion.
---------------------------------------------------------------------------
You have specified that inspector methods should use the 'get' prefix.
While I'm a big fan of Java, I find that this prefix is generally
redundant and makes code more cluttered and less clear.
For example,
list.count();
list.getCount();
Which is more appropriate ?
---------------------------------------------------------------------------
In the coding standards doc, you seem to suggest that const references
may be returned. Considering that the const can be cast away, would it
not be wiser to return by value (assuming the return type is
copy-on-write
or has a trivial ctor), or to return via a reference parameter ?
This was suggested to me by someone else some time ago. He said that
it is possible that while it is expected that the returned reference
is safe, it may still be messed with, so unless you have complete
control
over your code (something you don't when writing a library) it's
dangerous.
I haven't had any problems with this myself, but I understand the
reasoning
and can see that if the scenario was to appear, debugging the references
would be a nightmare.
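The hazard Rik describes can be shown in a few lines; Widget is a hypothetical class of ours, not from any library under discussion:

```cpp
#include <string>

class Widget
{
public:
    Widget() : theName("default") {}

    // Cheap (no copy), but the const is only a promise: a caller armed
    // with const_cast can still mutate the referenced member.
    const std::string& name() const { return theName; }

    // Safer: the caller receives an independent copy and cannot touch
    // the Widget's internals through it.
    std::string nameCopy() const { return theName; }

private:
    std::string theName;
};
```

`const_cast<std::string&>(w.name()) = "oops";` compiles and silently changes w, which is exactly the hard-to-debug scenario Rik's correspondent warned about; the by-value version is immune at the cost of a copy.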
---------------------------------------------------------------------------
Have you considered the use of copy-on-write semantics ? Am I just being
premature, and this simply hasn't been written into the standards/code
yet ?
In my experience, copy-on-write coupled with value-based semantics
provide
for clean, safe user code. Pointers can be avoided more easily and
memory
leaks can be made practically extinct.
This also allows the use of value-based collections...
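A bare-bones copy-on-write holder in the spirit Rik describes, using std::shared_ptr where code of the period would have hand-rolled the reference count; CowString is an invented name:

```cpp
#include <memory>
#include <string>
#include <utility>

// Copies share one representation; a writer detaches (clones) first if
// anyone else is still looking at the shared data.
class CowString
{
public:
    explicit CowString(std::string s)
        : rep(std::make_shared<std::string>(std::move(s))) {}

    const std::string& read() const { return *rep; }

    void write(const std::string& s)
    {
        if (rep.use_count() > 1)                      // shared: detach
            rep = std::make_shared<std::string>(*rep);
        *rep = s;
    }

private:
    std::shared_ptr<std::string> rep;
};
```

Copying a CowString is O(1) (one pointer plus a count bump), which is what makes value-based collections of such objects cheap to pass around.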
---------------------------------------------------------------------------
Value-based collections are useful for classes that implement a fast
copy ctor.
Example:
ValueList<int> intList;
intList << 1 << 2 << 3 << 4 << 5;
ValueList<int>::ConstIterator it;
for (it = intList.begin(); it != intList.end(); ++it)
cout << *it << endl;
No pointers, no [a|de]llocation, no mess :)
If the list is implicit shared, then copy is done in O(1) - quite handy
:)
---------------------------------------------------------------------------
I can see that you're keen to take full advantage of Linux' capabilities -
well, the name gives that away ;)
How far do you propose to take this ? I have written some code in the
past that takes advantages of the extra capabilities of gcc, i.e. C and
C++ extensions. There is, of course, the possibility that people will
use a different compiler on Linux in the future - are you going to
take this into consideration and write code without gcc extensions ?
Personally I think that it will be a long time before anyone wants to
replace gcc, and that corelinux++ could still take advantage of
gcc features without requiring gcc.
For example, constructs such as this:
MyString
MyClass::myMethod() return str;
{
str = "hello";
}
Using the above in .cpp files is ok if you are compiling the library
with gcc. The declaration is as normal - no 'return something' added,
so other compilers won't have a problem compiling code written in
this way.
I'm not entirely sure, but I think that if libcorelinux++ is compiled
with gcc and another compiler is used to write software, linking
won't work due to different name mangling. If extensions such as the
above are used in the implementation, then another compiler may be
unable to compile libcorelinux++, so a user couldn't even create their
own libcorelinux++ to use with their own compiler.
So, is this something you have considered ? Will you be writing in
ANSI C++ only ? Forgive me if I missed this in your docs. I noticed
in the changelog 'Changed all code to conform to the update C++
Standards
and Guidelines Revision 1.2 document.' but I haven't seen anything about
using gcc extensions.
-------------- Partial Original with my response -------------------
> [snip]
>
> I have some knowledge of Rational Rose and the Booch method, from my
> days at uni. I'm afraid I haven't got around to reading the much-vaunted
> book 'Design Patterns' which seems very trendy these days. My knowledge
> is mostly self-taught and helped along by working with the Qt library
> for the last year.
We are pushing for UML as the standard notation for analysis and design,
shouldn't be too hard to pick up unless you meant recent Rose, in which
case I will assume you are familiar (tell me otherwise and we can get
into the differences between Booch notation).
> You will find I'm very keen on the design of Qt. If you look at recent
> versions (2.0 onwards), you'll see that it provides a kind of template
> library of its own, often dubbed the 'QTL'. This subset of Qt is best
> described as STL, without the least-used classes, and with extra, more
> useful features added.
I must admit, I haven't taken a long enough look to consider myself
familiar. I have 1.44 and 2.? installed. I will familiarize myself with
it.
> QTL is obviously not as fast as STL, as it's designed to work cross-platform,
> on most Unices and on MS Windows; however, despite its incompleteness
> (it doesn't have classes like 'set') and the slight speed disadvantage,
> its design and usability is IMO second to none.
Whereas STL is cross-platform and has the speed. Could you elaborate on
the QT advantage?
> Perhaps the greatest advantages of QTL are QString, a 16-bit Unicode
> string class, the value-based collections and the widespread use of
> both explicit- and implicit- shared data.
That I did notice. Did you also look at the IBM Unicode implementation?
> I'm not entirely sure what the aim of your project is.
>
> Are you looking to replace libg++ with a better library ?
Well, err...
> Or are you attempting to provide a C++ version of libc, with socket and
> thread wrappers, etc ?
Hang in there it's coming...
>
> Or... ?
Part I
------
The overall objective here is to provide common classes abstractions,
and at least the Gang Of Four (Gamma et.al) patterns, if at a minimal
abstraction for later building blocks.
To do this we SHOULD and WILL:
A. Gather requirements. There is a Requirements forum on the CoreLinux
project page and I just started a mailing list which I will detail at
the end of this message.
B. Perform the proper analysis which brings out all those things the
requirements did not specify, re-iterate.
C. Design the above, re-iterating to Analysis and Requirements as
needed.
D. Prioritize the implementation of the design.
E. Implement
F. Go to A
Part II
-------
The overall objective here is to begin construction of frameworks
(again, those recognized as common) to work with the objects implemented
in libcorelinux++ and extending where appropriate. We will follow the
same process detailed in Part I.
As you well know (or are picking up as your experience with C++
continues) there are very few real Object Oriented class libraries for
Linux. Hear me out if I raise a few hackles. What is available now
specifically for Linux are either quick ports of the traditional C work,
almost a "see I can do this" attempt, class libraries that are
encumbered by the desire to be portable to Windows, OS/2, etc. etc. etc.
(which by definition are NOT specific to Linux).
But, what is even more important is the LACK of Open Source class
libraries that reflect both real world abstractions and the ability to
be useful. I refer strongly to Patterns and Frameworks as there is no
cohesiveness between development efforts. Now I will admit that I
haven't perused every available library or framework that IS available,
but I have seen enough. Help me here if I am missing something.
> ---------------------------------------------------------------------------
>
> You have specified that inspector methods should use the 'get' prefix.
> While I'm a big fan of Java, I find that this prefix is generally
> redundant and makes code more cluttered and less clear.
>
> For example,
>
> list.count();
> list.getCount();
>
> Which is more appropriate ?
This is a loaded gun. I need to tell you about my background:
1982 - MVS using BAL (assembler for the mainframe)
1983 - PL1 (kinda Fortran and COBOL mix)
1985 - x86 assembler
1987 - C with some C with classes
1989 - C++
At which point I became a consultant on large scale system
implementations on primarily x86 and OS/2 systems. Part time I wrote a
performance analysis and capacity planning system for OS/2 (Osrm2).
Projects ranging from Speech recognition systems to Databases, Fuzzy
logic to Knowledge Representation and Constraint Logic.
Enough of which to know that with a development group larger than 1
there are communication problems. The last place that any effort can
afford to have communication problems are in the CODE!!! Now granted,
these projects move on and the hired guns to maintain them are usually
of less experience than the architects and original teams that
implemented them. The costs of maintenance grow exponentially if there
is a complete education required to bring them up to speed on ambiguous
terminology; I wouldn't want to foot the bill. I have borne witness and
even recommended that entire systems be re-written because the grammar
wasn't even CLOSE to the problem domain.
And I think that is what part of the effort is about: Recognizing the
problem domain through the clutter is half the battle. Making it so that
others that follow have the same clarity is the other half.
I would opt for the latter (list.getCount()). BTW: It had really nothing
to do with Java (although I think it is useful there as well) but with
my understanding of the methodologies I weaned on (Rumbaugh, Booch,
Coad).
Still, I am willing to state that even though I feel strongly about this
I also believe in more minds than mine arriving at a sensible path or
direction. The standards I put up were in part straw-man to that effect.
I think you should bring this up in the mailing list, if you are still
even reading this...
> ---------------------------------------------------------------------------
>
> In the coding standards doc, you seem to suggest that const references
> may be returned. Considering that the const can be cast away, would it
> not be wiser to return by value (assuming the return type is copy-on-write
> or has a trivial ctor), or to return via a reference parameter ?
>
> This was suggested to me by someone else some time ago. He said that
> it is possible that while it is expected that the returned reference
> is safe, it may still be messed with, so unless you have complete control
> over your code (something you don't when writing a library) it's dangerous.
>
> I haven't had any problems with this myself, but I understand the reasoning
> and can see that if the scenario was to appear, debugging the references
> would be a nightmare.
I agree that const can be cast away. But I also know there is some
protection in invariant state management and enforcement. I have used
it, it has worked to some degree but it is not a silver bullet. You only
need to change the header file to the class library you are linking with
and remove the const from the grammar!
> ---------------------------------------------------------------------------
>
> Have you considered the use of copy-on-write semantics ? Am I just being
> premature, and this simply hasn't been written into the standards/code
> yet ?
Copy on write is a good thing. I believe the analysis and design will
bring out where it is appropriate.
> In my experience, copy-on-write coupled with value-based semantics provide
> for clean, safe user code. Pointers can be avoided more easily and memory
> leaks can be made practically extinct.
>
> This also allows the use of value-based collections...
>
> ---------------------------------------------------------------------------
>
> Value-based collections are useful for classes that implement a fast
> copy ctor.
>
> Example:
>
> ValueList<int> intList;
> intList << 1 << 2 << 3 << 4 << 5;
>
> ValueList<int>::ConstIterator it;
>
> for (it = intList.begin(); it != intList.end(); ++it)
> cout << *it << endl;
>
> No pointers, no [a|de]llocation, no mess :)
>
> If the list is implicit shared, then copy is done in O(1) - quite handy :)
This works for atomics but what about aggregates? What about objects
that the semantics require uniqueness in the domain? Again, this is one
area where I am playing devil's advocate from experience. In addition, we
have a requirement for SmartPointers, which I have a lot of experience with.
> ---------------------------------------------------------------------------
>
> I can see that you're keen to take full advantage of Linux' capabilities -
> well, the name gives that away ;)
>
> How far do you propose to take this ? I have written some code in the
> past that takes advantages of the extra capabilities of gcc, i.e. C and
> C++ extensions. There is, of course, the possibility that people will
> use a different compiler on Linux in the future - are you going to
> take this into consideration and write code without gcc extensions ?
> Personally I think that it will be a long time before anyone wants to
> replace gcc, and that corelinux++ could still take advantage of
> gcc features without requiring gcc.
>
The portability goals of CoreLinux would be across Linux implementations
and compilers that work under Linux. It presents a distribution issue
that we will have to face, but needless to say I suggest we stay well
clear of any specific compiler tricks or extensions. I don't think we
need them, but we haven't advanced far enough along to say either way.
> I'm not entirely sure, but I think that if libcorelinux++ is compiled
> with gcc and another compiler is used to write software, linking
> won't work due to different name mangling. If extensions such as the
> above are used in the implementation, then another compiler may be
> unable to compile libcorelinux++, so a user couldn't even create their
> own libcorelinux++ to use with their own compiler.
I would rather they be able to compile with the compiler of their
choice. If we want a single binary distribution we would have no choice
but to look to CORBA and remain independent. This is more overhead than is
desired.
> So, is this something you have considered ? Will you be writing in
> ANSI C++ only ? Forgive me if I missed this in your docs. I noticed
> in the changelog 'Changed all code to conform to the update C++ Standards
> and Guidelines Revision 1.2 document.' but I haven't seen anything about
> using gcc extensions.
Yes, I am checking everything we have gone near (which isn't much yet)
against the ISO C++ 14882 (1998E) standard. That is the conformance we
SHOULD aim for. The Revision 1.2 referred to the CoreLinux++ C++
document, not the ISO.
-------- End of response -------------------------------------
|
|
From: Frank V. C. <fr...@co...> - 1999-12-08 02:55:34
|
We have this mailing list for requirements (actually I will take them
from here and enter them into the Requirements Forum on the project
page), but I think we should narrow down and agree on an analysis and
design tool. Unless there is a compelling reason not to, I would think
that UML is a given. The problem then is one of which tool?

Here are some of my thoughts:

1. For the core library analysis I don't know how important the use
case diagrams will be, because it is a class library. I think the
sequence or collaboration diagrams will be more useful.

2. One that isn't so experimental it is counter-productive. A plus
would be one that could save/load in XMI (XML Metadata Interchange).

Anyone else have any ideas?

--
Frank V. Castellucci
http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux
http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
|
From: Frank V. C. <fr...@co...> - 1999-12-07 13:30:58
|
Not in regards to the style, but in enabling things like Doc++ or
Doxygen to produce HTML output.

Anyone have any experience with these or any other? Ideally, the less
we have to add (tags, etc.) the better. I don't think that we have, as
a developer community, really fallen in love with documentation. |
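For flavor, this is the kind of comment markup Doxygen picks up (Doc++ accepts a very similar dialect); the function itself is an arbitrary example:

```cpp
/**
 * Add two integers.
 *
 * The double-star opening, @param and @return tags are ordinary C++
 * comments to the compiler; the documentation tool turns them into
 * HTML (and other formats) without any change to the code itself.
 *
 * @param lhs  first addend
 * @param rhs  second addend
 * @return     the sum of lhs and rhs
 */
int add(int lhs, int rhs)
{
    return lhs + rhs;
}
```

Since the markup rides along in comments, code stays compilable whether or not the tool is ever run over it.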
|
From: Frank V. C. <fr...@co...> - 1999-12-06 23:06:08
|
Testing for response. |
|
From: Frank V. C. <fc...@at...> - 1999-12-06 22:37:41
|
Just a test |