| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1999 | | | | | | | | | | | | (79) |
| 2000 | (91) | (50) | (27) | (12) | (41) | (17) | (12) | (7) | (6) | (10) | (9) | (1) |
| 2001 | (3) | | (6) | (8) | (4) | (4) | | | | (7) | | |
| 2002 | | | | | | | | | | (3) | | |
| 2003 | (1) | | | | | | | | | (1) | | |
| 2004 | (1) | | | | (1) | | | | | (2) | | |
| 2006 | (4) | (4) | (3) | (1) | (8) | (8) | (7) | (3) | (7) | (19) | (28) | (5) |
| 2007 | (25) | (9) | (15) | (6) | (1) | (2) | (4) | (12) | (10) | (12) | (5) | (2) |
| 2008 | (3) | (5) | (28) | (23) | (19) | (13) | (31) | (12) | (21) | (14) | (48) | (39) |
| 2009 | (10) | (15) | (12) | (19) | (40) | (24) | (34) | (12) | (4) | | | (13) |
| 2010 | (5) | (5) | (4) | (3) | | | (3) | (1) | (1) | (3) | | |
|
From: Davide B. <dbo...@li...> - 1999-12-15 20:23:04
|
Still listening, but overwhelmed ... my humble suggestion is to keep up the good work at whatever pace you feel comfortable with. I hope to ponder all the stuff during the holidays and provide my 0.2 cents after Y2K. |
|
From: Frank V. C. <fr...@co...> - 1999-12-15 05:31:22
|
In the past few days I have dumped a TREMENDOUS amount of process description into this list. I am sure there may be as much pain reading it as there was writing it. But one of the main charters of this effort is the OOA/OOD Standards and Guidelines. What I have put out should clear the air about "What does that mean specifically?" It is about process. I would like to point out that it is really not THAT painful once the ball gets rolling. My experience has been that once you get through a few of them you don't want to develop where there is no process. Of course, I will kick off that very ball soon by taking all of us through the process with one of the requirements in the Requirements Forum. BUT, I NEED TO KNOW: Am I moving too fast? I am a great one for assuming that if something isn't said it is because: 1. There has been no time for you to consume what I have put out, so a response may be forthcoming. 2. There are no objections, issues, concerns, or questions. 3. There is a tremendous amount of impatience and I should be moving quicker. -- Frank V. Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux |
|
From: Frank V. C. <fr...@co...> - 1999-12-15 05:09:38
|
Following is the suggested process used by CoreLinux++ in regards to Analysis moving into Design, as well as what the design will achieve. I will follow this in a later post, From Design to Implementation. Following any changes to this, I will begin to add this to the CoreLinux++ OOA/OOD document. The assumptions I am making about the reader at this point are: A. That in the Analysis effort the focus is on the problem domain, not on a specific technical solution. B. Class, Sequence, Collaboration, State, and Activity diagrams are understood. If you are unsure about these let me know and I will post links to where you can find out more. For immediate consumption http://www.cetus-links.org has a tremendous reference list. ===== In general, the design is a technical expansion and adaptation of the analysis results. The classes, relationships, and collaborations from the analysis are complemented with new elements, now focusing on how to implement the requirement. All the details of how things should work technically and the constraints of the implementation environment are taken into consideration. Typically the analysis classes are embedded in a technical infrastructure, where other technical classes help them. In our case, the libcorelinux++ features, and their respective designs, will constitute some of the technical classes when we move on to frameworks. Of course another benefit of the design phase is to recognize areas where the requirement or analysis was not sufficient to complete the design. In the true sense of Object Oriented methodologies, our methodology will be iterative as well. From Analysis to Design ======================= 1. Once an Analysis has been checked in it will move into the Design queue and assigned a Task Identifier from the CoreLinux++ Design task control in Source Forge. 
At this point, a fully qualified identifier for the effort in regards to the originating requirement should be : RequirementForumID.AnalysisTaskID.DesignTaskID 2. A directory will be created with the Design Task Identifier as part of the name (TBD) and captured in CVS. 3. If not already done in the Analysis phase, analysis classes will be divided into functional packages. This assumes the tool we use can provide that. 4. Additional technical classes are added. 5. Concurrency needs are identified and modeled through active classes, asynchronous messages, and synchronization techniques for handling shared resources (if applicable). 6. A STRONG emphasis is placed on exceptions and faults in the design. This includes both normal and abnormal where: Normal : those that can be anticipated in the course of performing the functions. Abnormal: those that can't be anticipated and must be handled by some generic exception mechanisms. It is my assumption that, at a minimum, the UML tool we use supports exception designation in the class specification. 7. A STRONG emphasis is placed on constraints. A constraint is a restriction on an element that limits the usage of the element or the semantics (meaning) of the element. Constraints are one way of enforcing the "contract" by which the objects in a live system may or may not interact. For example, if Class A is constrained to having at most five (5) elements in a collection data member, a pre-condition constraint would be put on the addElement( ElementCref ) method. Whether or not we have access to a tool that supports constraint notation (Object Constraint Language (OCL) 1.1 or greater) is TBD. In the absence of notational support, text blocks should be specified. 8. The dynamic behavior of the design is emphasized. This is done through class, collaboration, sequence, and state diagrams. The benefit is that it will reduce the implementation time by having a clear understanding of what the code should be doing. -- Frank V. 
Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux |
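The pre-condition constraint described in point 7 above can be sketched directly in C++ when OCL tool support is absent. The following is a hypothetical illustration only (the class and member names are invented and are not part of the CoreLinux++ library); it shows the "at most five elements" example enforced as a checked pre-condition:

```cpp
#include <stdexcept>
#include <vector>

// Hypothetical sketch: the "at most five elements" constraint from the
// design notes, expressed as a pre-condition check on addElement().
// A violation is reported through a generic exception mechanism.
class BoundedCollection
{
public:
   static const std::size_t MAX_ELEMENTS = 5;

   void addElement( int element )
   {
      // Pre-condition constraint: the collection holds at most five elements.
      if ( itsElements.size() >= MAX_ELEMENTS )
      {
         throw std::length_error( "constraint violated: at most 5 elements" );
      }
      itsElements.push_back( element );
   }

   std::size_t size( void ) const { return itsElements.size(); }

private:
   std::vector<int> itsElements;
};
```

The same rule could equally be written as a text-block constraint in the model, as suggested above; the code form simply makes the contract executable.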
|
From: Frank V. C. <fr...@co...> - 1999-12-15 03:44:05
|
As stated in the From Requirements to Analysis post I would like to propose a template for the Requirements Document. The document itself is not expected to be complete when first introduced (unless someone is really ambitious) but will probably be filled in throughout the discussion of the requirement statement. I assume that "CoreLinux++ should have a Thread Class" is not nearly sufficient to even enter Analysis with. This is, for the most part, a somewhat shortened System Requirement Specification (SRS) from my past experience. I took the liberty of removing some of the specifications, as they overlap with what the Analysis documentation would otherwise provide. AS ALWAYS THIS IS OPEN FOR DISCUSSION (I will omit this statement in future posts as it is my opinion that it is implicit in Open Source philosophy). A software requirements specification should include the following sections. Title : Requirement ID: (message ID from the Requirement Forum) 1. Introduction Provide an overview of the requirement in relationship to the system. 1.1 Deliverables Overview Describe the major functions and components of the product. Summarize the rationale for building the deliverables for readers unfamiliar with the project. 2. Functional Requirements Describe the services, operations, data transformations, etc., provided by the requirement, user interfaces, and the relationship of the requirement to its environment. This portion of a requirements document is usually the largest, because functionality must be described as fully and precisely as possible. 2.1 User Interface Specifications If applicable, describe screens, windows, graphics, and other visual aspects of the requirement. Define any command languages. Specify the interaction or dialog conventions governing the user interface in complete detail. User interface prototypes may be used. For the most part, the CoreLinux++ effort is not to provide widgets or dialogs for EUI so this would typically be NA. 
2.2 Product Services Describe the computations, data transformations, services, and so forth provided by the product. State all the services the product will provide, but not how they will be provided. 2.3 External Interfaces and Database Requirements If applicable, describe the interfaces to other systems or the environment. Describe the logical organization of databases used by the system. 2.4 Error Handling Catalog exceptional conditions and error conditions, and responses to these conditions, as completely as possible. 2.5 Foreseeable Functional Changes and Enhancements State any foreseeable changes and enhancements in functionality for the benefit of the designers and implementers. 3. Non-Functional Requirements Discuss the constraints under which the deliverables must operate, the environment in which they must operate, any standards they must conform to, etc. 3.1 Performance Requirements State any efficiency, reliability, robustness, portability, memory size, response time, or similar requirements. 3.2 User Documentation and Other User Aids State the tutorials, reference material, data, examples, test programs, and so forth that will accompany each deliverable. 3.3 Development Requirements State any quality standards or guidelines, independent verification and validation requirements, testing strategies, development methods, tools, techniques, policies, or other development constraints imposed by the customer or the development organization. 3.4 Foreseeable Non-Functional Changes State any foreseeable changes in non-functional requirements for the benefit of the designers and implementers. Such changes generally arise from hardware evolution, changing user needs, new systems in the operating environment, etc. 4. 
Remarks and Guidelines for Later Life Cycle Phases Draw attention to any problems or pitfalls, potential solutions to foreseen problems, or other helpful information produced during requirements analysis and generation that may be useful later in the life cycle. 5. Term Glossary Add whatever terms we have introduced. |
|
From: Frank V. C. <fr...@co...> - 1999-12-14 02:42:13
|
"Frank V. Castellucci" wrote: > > I changed the way replies to the mailing list are posted. Messages > headers should now show the mailing list in the from post, which means > when you reply it won't go to the person who posted the original. > Um, err, like I said, it will appear in the reply-to field. |
|
From: Frank V. C. <fr...@co...> - 1999-12-14 02:41:17
|
I changed the way replies to the mailing list are posted. Message headers should now show the mailing list in the from post, which means when you reply it won't go to the person who posted the original. -- Frank V. Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
|
From: Frank V. C. <fr...@co...> - 1999-12-13 14:38:33
|
Following is the process that will be used by CoreLinux++ in regards to Requirements moving into Analysis, as well as what the analysis will achieve. I will follow this in later posts with From Analysis to Design and From Design to Implementation. THIS IS OPEN FOR DISCUSSION!!! I just felt we needed to start drawing the lines in the sand as to the rules of engagement (ROE). Following any changes to this, I will begin to add this to the CoreLinux++ OOA/OOD document. From Requirement to Analysis ======================= 1. All things start with a Requirement statement (e.g. CoreLinux++ should have a Thread class) regardless of the origin. If it is not entered into the Source Forge CoreLinux++ Requirements Forum, it will be placed there and assigned a Requirement ID (the Requirement Forum message identifier). 2. All Requirements will be discussed for acceptance (review). 3. If we decide that NO, it is not a valid requirement (scope exceeds the goals of the Project for example), it will be marked as INVALID with an explanation. 4. Requirements that are accepted will need a Requirement document. I will detail this in a later post. 5. Once a requirement has been checked in it will move into the Analysis queue and assigned a Task Identifier from the CoreLinux++ Analysis task control in Source Forge. 6. A directory will be created with the Task Identifier as part of the name (TBD) and captured in CVS. 7. Analysis will proceed starting with Use-Case Modeling. In use-case modeling, the system is looked upon as a "black box" that provides use cases. How the system does this, how the use cases are implemented, and how they work internally is not important. In fact, when the use-case modeling is done early in the project, the developers have no idea how the use cases will be implemented. Use-case objectives are: A. To decide and describe the functional requirements of the system, resulting in an agreement between the team members. B. 
To give a clear and consistent description of what the system should do, so that the model is used throughout the development process to communicate to all developers those requirements, and to provide the basis for further design modeling that delivers the requested functionality. C. To provide a basis for performing system tests that verify the system. For example: does the final system actually perform the functionality initially requested? D. To provide the ability to trace functional requirements into actual classes and operations in the system. To simplify changes and extensions to the system by altering the use-case model and then tracing the use cases affected into the system design and implementation. 8. The actual work required to create a use-case model involves defining the system, finding the actors and the use cases, describing the use cases, defining the relationship between use cases, and finally validating the model. 9. The use-case model consists of use-case diagrams (tool TBD) showing the actors, the use cases, and their relationships. These diagrams give an overview of the model, but the actual descriptions of the use cases are typically textual. Both are important! 10. The use-case model will be used to realize the use case. The UML principles for realizing use cases are: A. A use case is realized in a collaboration: A collaboration shows an internal implementation-dependent solution of a use case in terms of classes/objects and their relationship (called the context of the collaboration) and their interactions to achieve the desired functionality (called the interaction of the collaboration). B. A collaboration is represented in UML as a number of diagrams showing both the context and the interaction between the participants in their collaboration. Participating in a collaboration are a number of classes (and in a collaboration instance: objects). The diagrams are collaboration, sequence, and activity. 
The type of diagram to use to give a complete picture of the collaboration depends on the actual case. In some cases, one collaboration diagram may be sufficient, in other cases, a combination of different diagrams may be necessary. C. A scenario is an instance of a use case or a collaboration. The scenario is a specific operation path (a specific flow of events) that represents a specific instantiation of the use case. When a scenario is viewed as a use case, only the external behavior toward the actor is described. When a scenario is viewed as an instance of the collaboration, the internal implementation of the involved classes, their operations, and their communication is described. --- Frank V. Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
|
From: Frank V. C. <fr...@co...> - 1999-12-12 13:56:48
|
I put the new one up. I started referring to the Task or Defect numbers in both the CVS check-in (commit) and documentation. This one also includes a FAQ and Class Library documentation (scant but workable). As in the previous post I am sticking to the generic JavaDoc 1.1 specification as far as the comment blocks. Both Doxygen and Doc++ work with these (at least they say so in THEIR documentation). I used Doc++ ONLY because it is easier to run and configure at the moment. --- Frank V. Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
|
From: Frank V. C. <fr...@co...> - 1999-12-12 03:39:54
|
Rik, and others:
After a preliminary look at Doc++ and Doxygen it is my opinion that both
would do fine, with Doxygen having the advantage by the fact that it
appears to be highly configurable (based on the number of configuration
points documented) as well as currently maintained.
BUT...
I do not by any means think that we have to make a decision yet. The
reason being that BOTH these tools can work off of the same tag style at
a minimum. This style is the one commonly used in Java for JavaDoc where
big descriptions are enclosed in /** ... */ comments, and one liners
appear after the /// . Both tools expect these comments ABOVE the target
class, method, macro, etc.
For example:
/**
   This is the explanation about the purpose of the
   class in the header file.
*/

/// Namespace
namespace corelinux
{
   DEFINE_CLASS( OurClass );

   /**
      OurClass is a Great class. That is why we have it
   */
   class OurClass : public Great
   {
      public:
         /// Default Constructor
         OurClass( void );
   };
}
You get the idea. To begin I will take what we have now and tag them up!
This and the new CLFAQ.HTML document will make their way into a 0.2.1
distribution probably by sometime Sunday US.
Frank
|
|
From: Frank V. C. <fr...@co...> - 1999-12-12 02:09:53
|
Rik Hemsley wrote: > > Just had a look at the Doc++ home page and it seems it doesn't > directly generate man pages. Doc++ also seems to miss the nice diagrams > that Doxygen draws, which are very cool (clickable image maps !) > > I don't think Doc++ has a config file either.. this is a very > useful feature of Doxygen, as it allows all the options to be stored > centrally rather than being passed as parameters from within makefiles. > > Cheers, > Rik > I just downloaded Doxygen and will have a look. BTW: Doc++ does create class inheritance trees. Frank |
|
From: Rik H. <ri...@kd...> - 1999-12-11 17:51:47
|
Just had a look at the Doc++ home page and it seems it doesn't directly generate man pages. Doc++ also seems to miss the nice diagrams that Doxygen draws, which are very cool (clickable image maps !) I don't think Doc++ has a config file either.. this is a very useful feature of Doxygen, as it allows all the options to be stored centrally rather than being passed as parameters from within makefiles. Cheers, Rik |
|
From: Rik H. <ri...@kd...> - 1999-12-11 17:33:19
|
#if Frank V. Castellucci > I have been playing around with Doc++ [...] I haven't used Doc++ but I have used Doxygen. Sorry I can't compare, but Doxygen is excellent. Perhaps we need a side-by-side comparison :) Cheers, Rik |
|
From: Frank V. C. <fr...@co...> - 1999-12-11 15:45:46
|
I have been playing around with Doc++ and it seems to be a reasonable system (although the comments need to change) for generating HTML, and it also creates a Java class browser applet with class library hierarchy views. I don't want to spend too long with this being an open issue if we can avoid it. Are there any OBJECTIONS to using Doc++, or are there any questions about what it implies for the source documentation effort? Anyone, anyone, anyone... |
|
From: Frank V. C. <fr...@co...> - 1999-12-11 15:42:43
|
I started putting together a FAQ for CoreLinux++. I am using a FAQ-to-HTML Perl script I found. If there is some better tool I would appreciate a heads up. The one I have works OK; it automatically creates the heading levels (well, with a little tagging in the text document) and intra-document links. |
|
From: Frank V. C. <fr...@co...> - 1999-12-11 00:59:30
|
Rik Hemsley wrote: > > I've converted the build system to use autoconf/automake/libtool. > > I have a tarball with it in - about 120k. The sources I used are > not the current CVS, but that shouldn't cause too much difficulty. > > Would someone care to take this off my hands, check it and commit it > if it's ok ? > > Cheers, > Rik Rik, Attach it in an e-mail to me. fr...@us... |
|
From: Rik H. <ri...@kd...> - 1999-12-11 00:30:12
|
I've converted the build system to use autoconf/automake/libtool. I have a tarball with it in - about 120k. The sources I used are not the current CVS, but that shouldn't cause too much difficulty. Would someone care to take this off my hands, check it and commit it if it's ok ? Cheers, Rik |
|
From: Frank V. C. <fr...@co...> - 1999-12-10 13:55:53
|
This will be included in a CoreLinux++ FAQ I will start to put together; in the meantime: To join the CoreLinux++ development team (in whatever capacity) please e-mail me direct fr...@us... with the following information: 1. Your Source Forge User Name 2. Your area of interest 3. Your Linux distribution 4. Your hardware setup 5. Optionally your strongest and weakest areas of experience Once I receive this I can add you to the project team configuration. The only piece of info I really need for that is #1, the rest is so we have a handle on who can do what, on what platform, etc. A broad diversification will ensure a robust deliverable. Once you are added to the project you will have read/write access to CVS BUT Source Forge will only allow access through the SSH Shell. To review how this works go to the Source Forge home page at http://www.sourceforge.org and follow the Site Documentation link. Then read the CVS documentation. It will help to be familiar with CVS commands, etc. If you are nervous about doing any changes with CVS early on, send me whatever you want to post with an explanation. We have yet to determine the tracking aspect of WHY changes are occurring but relating it to a Feature (use the message # from the Forum) or a Defect ( use the defect # from the Bug list ) will be very safe to begin with. If you are making changes that are to the CoreLinux++ Home Pages, http://corelinux.sourceforge.net , you will need to do additional work: 1. First make the changes and get them committed into our CVS tree 2. Upload the *.html or whatever (graphics, php scripts, docs) to the CoreLinux web page files using SCP ( it is a SSH copy file program ). TEST, TEST, TEST, and oh yes, TEST |
|
From: Frank V. C. <fr...@co...> - 1999-12-10 13:41:31
|
In a previous message from Rik : > #if Frank V. Castellucci > > Unless I see otherwise I will assume you have all the "leads" to > > Patterns/OOA/OOD material you need. > > I've decided to buy the 'Design Patterns' book as I get paid for my > last contract next week :) From what I've heard it seems like the > logical thing for me to read next. That is one I would have recommended. Keep in mind that it slants to development patterns. It is good to keep in mind that patterns exist everywhere (Business, e-commerce, Chaos, etc.). I agree we need to first focus on those patterns that will be of use to developers but I wouldn't mind future discussions on analysis of real-world patterns that we may go after. > > As far as the web graphics, a CoreLinux++ logo would be cool. And anyway you > > think the pages should go, the Source Forge system supports PHP (which I > > know nothing about) if that helps. > > Ok, I too know nothing about PHP, but designing a logo will give me > something to do over the weekend. First things first, eh ? ;) Great. You might want to find out if there is some form of graphic or multi-media Open Source license that we need to use (and state) before we put these things up. > > So far Thomas Maguire has asked to be assigned to the project in Source > > Forge. I don't know if some of you are still testing the water or want > > access as a project team member. Clearly there is nothing to modify yet, and > > I am not overwhelmed with maintaining the changes at the speed they are > > coming in at, but there will come a point in time... > > Just to let you know, I've had a fair bit of experience with GNU autoconf, > automake and libtool (further: acamel). I don't know if a decision about the > build structure has been made, but if people feel an 'acamel' based system > would be a plus then I'd be happy to 'acamel-ify' the sources and do > my best to maintain etc. > > Cheers, > Rik Even better! We will need this in a big way once we start rolling. 
I have received a few flames about not having it in the current tarballs, but the build system I included worked anyway. Another area will be binary packages (RPM, and whatever else) for various platforms. As far as access see my follow on post "If you want to join the team". |
|
From: Rik H. <ri...@kd...> - 1999-12-10 12:32:02
|
#if Frank V. Castellucci > Unless I see otherwise I will assume you have all the "leads" to > Patterns/OOA/OOD material you need. I've decided to buy the 'Design Patterns' book as I get paid for my last contract next week :) From what I've heard it seems like the logical thing for me to read next. > As far as the web graphics, a CoreLinux++ logo would be cool. And anyway you > think the pages should go, the Source Forge system supports PHP (which I > know nothing about) if that helps. Ok, I too know nothing about PHP, but designing a logo will give me something to do over the weekend. First things first, eh ? ;) > So far Thomas Maguire has asked to be assigned to the project in Source > Forge. I don't know if some of you are still testing the water or want > access as a project team member. Clearly there is nothing to modify yet, and > I am not overwhelmed with maintaining the changes at the speed they are > coming in at, but there will come a point in time... Just to let you know, I've had a fair bit of experience with GNU autoconf, automake and libtool (further: acamel). I don't know if a decision about the build structure has been made, but if people feel an 'acamel' based system would be a plus then I'd be happy to 'acamel-ify' the sources and do my best to maintain etc. Cheers, Rik p.s. sorry about my abbreviation 'acamel' but how else do you say 'automake/autoconf/libtool-ify' twice in one sentence without getting tongue-tied ? |
|
From: Rik H. <ri...@kd...> - 1999-12-10 12:31:54
|
#if Frank V. Castellucci
> Right, but doesn't the issue of what we implement as a default persistence
> come into play?
Yes, I'd say that there's nothing like streaming :)
Correct me if I'm wrong, but I tried to do something like this a while
back:
class Base {
    ...
    stream & operator << (stream & str, const Base &)
    {
        str << someData;
        return str;
    }
};

class Derived : public Base
{
    ...
    stream & operator << (stream & str, const Derived &)
    {
        Base::operator << (str, *this);
        str << someMoreData;
        return str;
    }
};
This didn't seem to work. It seems you have to do it this way:
class Base {
    ...
    void save(stream & str)
    {
        str << someData;
    }
};

class Derived : public Base
{
    ...
    void save(stream & str)
    {
        Base::save(str);
        str << someMoreData;
    }
};
I could have explained this in words but I couldn't work out which
ones ;)
What I'm trying to say is that you can't use operators << and >> in
a derived class and still stream the base, so you'd have to use
a method like save(stream &). That's the only niggle I have with
streaming. Apart from that it's very convenient.
By 'flatfile', are you suggesting that writing to a 'text' file is
an option ? If so, I don't know whether this is worth it, as if
you're using 16-bit Unicode then the file will just look like garbage
in most text editors, so there's no 'readability' advantage.
Did I miss the point ?
Cheers,
Rik
|
|
From: Frank V. C. <fr...@co...> - 1999-12-10 12:31:16
|
I have added a new Forum on the CoreLinux project page: Development Process Discussions. The intro explains the purpose of the forum, but generally it became apparent we are without such policy as "locking" for edit, check-outs that track Features or Defects, etc. I would prefer if we can keep the discussions pertinent to that out of this list because it will become tedious to track in two places. --- Frank V. Castellucci http://corelinux.sourceforge.net OOA/OOD/C++ Standards and Guidelines for Linux http://www.colconsulting.com Object Oriented Analysis and Design |Java and C++ Development |
|
From: Frank V. C. <fr...@co...> - 1999-12-10 00:32:44
|
----- Original Message ----- > #if Frank V. Castellucci > > Is our persistent data (the users application data I will assume) for > > local storing? Exchange? Both? > > I can offer examples from real life: > > 1. KDE uses a protocol called DCOP for communication. This was designed > as a replacement for using CORBA. Parameters are marshalled by streaming > into a 'QByteArray'. Here endianness is important. > > 2. Recently I created a class which gave the same functionality as GDBM. > Here again the data is streamed into a 'QByteArray'. This class is > used for efficiently indexing a mailbox. > > -> I think persistent data must work on any byte-order. The advantage > of being able to marshal parameters for schemes such as KDE's DCOP, plus > the advantage of being able to read my mailbox indices on any machine > are great. > > Cheers, > Rik Right, but doesn't the issue of what we implement as a default persistence come into play? I don't mind if we state here and now that the CoreLinux++ framework will provide both the abstractions and one (1) implementation for persistence as a minimum. Said implementation will be flat-file or stream based, and will provide byte-ordering protection. Frank |
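"Byte-ordering protection" of the kind discussed above usually means serializing in one fixed byte order regardless of the host. A minimal sketch follows; the function names are invented for illustration and are not part of any CoreLinux++ design:

```cpp
#include <cstdint>
#include <vector>

// Sketch of byte-order-independent persistence: a 32-bit value is always
// written most-significant byte first (network order), so the stored form
// reads back identically on big-endian and little-endian hosts.
void writeUint32( std::vector<unsigned char> & buffer, std::uint32_t value )
{
   buffer.push_back( static_cast<unsigned char>( ( value >> 24 ) & 0xFF ) );
   buffer.push_back( static_cast<unsigned char>( ( value >> 16 ) & 0xFF ) );
   buffer.push_back( static_cast<unsigned char>( ( value >> 8 ) & 0xFF ) );
   buffer.push_back( static_cast<unsigned char>( value & 0xFF ) );
}

std::uint32_t readUint32( const std::vector<unsigned char> & buffer,
                          std::size_t pos )
{
   return ( std::uint32_t( buffer[ pos ] ) << 24 ) |
          ( std::uint32_t( buffer[ pos + 1 ] ) << 16 ) |
          ( std::uint32_t( buffer[ pos + 2 ] ) << 8 ) |
            std::uint32_t( buffer[ pos + 3 ] );
}
```

Because the writer shifts the value into bytes rather than copying its in-memory representation, the same pair of functions round-trips correctly on any machine, which is the property Rik asks for above.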
|
From: Frank V. C. <fr...@co...> - 1999-12-10 00:28:38
|
> Greetings to all. > > I have for some time been interested in working on a Linux-specific > class library. > > As you're currently in the design phase, I'll probably interject > with suggestions from the point of view of a user (of the library.) > > This is partly because usability is my pet favourite 'feature' but also > because my knowledge of some concepts (such as patterns) is limited, > due to me being mostly self-taught. Glad to see you made it, Rik. Any input is valuable. > BTW, I can knock up servicable, if a little corporate-friendly, web > graphics if logos etc are needed. Something to keep me occupied until > I've read some more about OO design (last thing I read was something > about the Booch method, that was about 4 years back.) Unless I see otherwise I will assume you have all the "leads" to Patterns/OOA/OOD material you need. As far as the web graphics, a CoreLinux++ logo would be cool. And any way you think the pages should go, the Source Forge system supports PHP (which I know nothing about) if that helps. To everyone: So far Thomas Maguire has asked to be assigned to the project in Source Forge. I don't know if some of you are still testing the water or want access as a project team member. Clearly there is nothing to modify yet, and I am not overwhelmed with maintaining the changes at the speed they are coming in at, but there will come a point in time... |
|
From: Rik H. <ri...@kd...> - 1999-12-10 00:15:35
|
#if Frank V. Castellucci > Is our persistent data (the users application data I will assume) for > local storing? Exchange? Both? I can offer examples from real life: 1. KDE uses a protocol called DCOP for communication. This was designed as a replacement for using CORBA. Parameters are marshalled by streaming into a 'QByteArray'. Here endianness is important. 2. Recently I created a class which gave the same functionality as GDBM. Here again the data is streamed into a 'QByteArray'. This class is used for efficiently indexing a mailbox. -> I think persistent data must work on any byte-order. The advantage of being able to marshal parameters for schemes such as KDE's DCOP, plus the advantage of being able to read my mailbox indices on any machine are great. Cheers, Rik |
|
From: Rik H. <ri...@kd...> - 1999-12-10 00:15:26
|
Greetings to all. I have for some time been interested in working on a Linux-specific class library. As you're currently in the design phase, I'll probably interject with suggestions from the point of view of a user (of the library.) This is partly because usability is my pet favourite 'feature' but also because my knowledge of some concepts (such as patterns) is limited, due to me being mostly self-taught. BTW, I can knock up serviceable, if a little corporate-friendly, web graphics if logos etc are needed. Something to keep me occupied until I've read some more about OO design (last thing I read was something about the Booch method, that was about 4 years back.) Cheers, Rik |