sxengine-devs Mailing List for SxEngine
Brought to you by: spidey01
From: TerryP <spi...@us...> - 2010-08-25 15:12:31
Been swamped lately and I'll be out of things for at least another week :'(
--
TerryP / Spidey01
Just Another Computer Geek.
From: TerryP <spi...@us...> - 2010-08-14 17:47:09
Commits to the subversion repository should now result in a message to this
ML, showing the commit message and what files were changed. For now I've got
it set to include an svn diff at the end but that can be turned off.
I've also set the mailing list to munge the headers so hitting "Reply" goes
straight back to the mailing list rather than privately messaging the
poster. I think that's a bit easier in our case.
Please give a shout if you encounter any problems, thanks :-).
--
TerryP / Spidey01
Just Another Computer Geek.
From: <spi...@us...> - 2010-08-14 17:41:36
Revision: 16
http://sxengine.svn.sourceforge.net/sxengine/?rev=16&view=rev
Author: spidey01
Date: 2010-08-14 17:41:30 +0000 (Sat, 14 Aug 2010)
Log Message:
-----------
Update rake-build-infrastructure to version 9e2dc08278e7d658adaa739fd9ac3f7c5550e7ce
Changes include more flexibility in choosing YAML build configurations. See upstream commit log here: http://github.com/Spidey01/rake-build-infrastructure/commit/9e2dc08278e7d658adaa739fd9ac3f7c5550e7ce for more details.
Modified Paths:
--------------
trunk/Rk/builder.rb
trunk/Rk/cbuilder.rb
Modified: trunk/Rk/builder.rb
===================================================================
--- trunk/Rk/builder.rb 2010-07-30 20:10:09 UTC (rev 15)
+++ trunk/Rk/builder.rb 2010-08-14 17:41:30 UTC (rev 16)
@@ -30,6 +30,7 @@
# settings defined for language 'lang'.
#
def initialize modname, lang
+ require 'rubygems'
require 'platform'
@cpu = Platform::ARCH
@@ -39,6 +40,7 @@
@os = Platform::IMPL
end
+ @vars = Hash.new
setup_toolset
setup_paths modname
setup_conf lang, modname
@@ -151,7 +153,7 @@
when :win32
@toolset = 'msvc'
else
- @toolset = :unknown
+ @toolset = 'unknown'
end
end
@@ -165,17 +167,31 @@
def setup_conf lang, modname
@language = lang
- yamlconf = "#{@cpu}.#{@os}.#{@toolset}.yml"
+ yamlconf = nil
+ checkconfs = [ "#{@cpu}.#{@os}.#{@toolset}.yml",
+ "#{@cpu}.#{@os}.yml",
+ "#{@toolset}.yml",
+ ]
+ for cf in checkconfs do
+ f = File.join('Rk', 'conf', cf)
+ if File.exists? f
+ yamlconf = f
+ break
+ end
+ end
+
+ # fail safe
+ yamlconf = "config.yml" unless yamlconf
+
#
# Load the main build settings
#
require 'yaml'
- maincfg = File.join('Rk', 'conf', yamlconf)
begin
- @data = YAML.load_file(maincfg)
+ @data = YAML.load_file(yamlconf) || {}
rescue Errno::ENOENT
- Error "Missing #{maincfg}, please create it with the correct settings!"
+ Error "Missing #{yamlconf}, please create it with the correct settings!"
end
#
@@ -184,7 +200,7 @@
#
begin
@data.merge! YAML.load_file(File.join(File.dirname(srcdir()),
- 'conf', yamlconf))
+ 'conf', File.basename(yamlconf)))
rescue Errno::ENOENT
# pass
end
@@ -217,18 +233,23 @@
ns
end
- def make_thing thing, evars, cb=nil
+ def make_thing thing, evars
begin
c = expand @data['commands'][@language][thing], evars
sh c
#puts "\n\n#{c}\n\n"
rescue NoMethodError
puts 'Build settings missing or incorrect for language ' + @language
+ puts "$!: #{$!}"
end
end
def query_ext name
- @data['extensions'][@language][name]
+ begin
+ @data['extensions'][@language][name]
+ rescue
+ return ''
+ end
end
def setup_lut
@@ -242,39 +263,43 @@
# that wouldn't process the file correctly. So use the seperate
# loops. It's all faster than the average monkey any way.
#
- @vars = Hash.new
-
- # keys in the 'programs' hash are taken as name=>command pairs.
- # Each name written as ${NAME} in s, expand to command.
- #
- for prog in @data['programs'].keys do
- p = '${'+prog.upcase+'}'
- @vars[p] = @data['programs'][prog]
+ if @data.has_key? 'programs'
+ #
+ # keys in the 'programs' hash are taken as name=>command pairs.
+ # Each name written as ${NAME} in s, expand to command.
+ #
+ for prog in @data['programs'].keys do
+ p = '${'+prog.upcase+'}'
+ @vars[p] = @data['programs'][prog]
+ end
end
- # keys in the 'options' has are taken as the following:
- # name => { category => options, ... }
- #
- # Occurences of ${NAME} in s, will be replaced with each category set
- # for name.
- #
- for flag in @data['options'].keys do
- o = @data['options'][flag]
- f = '${'+flag.upcase+'}'
- @vars[f] = ''
-
- # XXX not pretty but it filters out any enter we're not looking for
+ if @data.has_key? 'options'
#
- next unless o.respond_to? :[] and o.respond_to? :each_key
+ # keys in the 'options' has are taken as the following:
+ # name => { category => options, ... }
+ #
+ # Occurences of ${NAME} in s, will be replaced with each category set
+ # for name.
+ #
+ for flag in @data['options'].keys do
+ o = @data['options'][flag]
+ f = '${'+flag.upcase+'}'
+ @vars[f] = ''
- o.each_key do |val|
- v = o[val]
- if v==nil
- Error "Assertion failed: v==nil for o[#{val}]"
- next
+ # XXX not pretty but it filters out any entry we're not looking for
+ #
+ next unless o.respond_to? :[] and o.respond_to? :each_key
+
+ o.each_key do |val|
+ v = o[val]
+ if v==nil
+ Error "Assertion failed: v==nil for o[#{val}]"
+ next
+ end
+ @vars[f] += " #{v}"
end
- @vars[f] += " #{v}"
end
end
end
@@ -285,19 +310,19 @@
File.join d, *f
end
- def libstoflags libs
+ def libstoflags libs, filter
t = @data['template_vars']['libs_template']
ls = ''
for l in libs do
- ls << t.gsub('${LIB}', l) << " "
+ ls << t.gsub('${LIB}', filter.call(l)) << " "
end
ls
end
@data = Hash.new
- @vars = Hash.new
+ @vars = nil
@paths = nil
@cpu = :unknown
@os = :unknown
Modified: trunk/Rk/cbuilder.rb
===================================================================
--- trunk/Rk/cbuilder.rb 2010-07-30 20:10:09 UTC (rev 15)
+++ trunk/Rk/cbuilder.rb 2010-08-14 17:41:30 UTC (rev 16)
@@ -1,5 +1,7 @@
require 'rake/clean'
require 'Rk/builder'
+require 'rubygems'
+require 'platform'
#
# A version of Builder suited for C/C++ type languages.
@@ -17,6 +19,22 @@
'${IMPLIB}' => implib.to_s)
end
+ def make_executable exe, objs, libs
+ libsfilter = Proc.new { |s|
+
+ libstoflags(libs, Proc.new { |l|
+ if Platform::OS == :unix && l.start_with?('lib')
+ l = l[3, l.size]
+ end
+ l
+ })
+ }
+
+ make_thing('make_executable', '${TARGET}' => exe,
+ '${SOURCE}' => objs.join(' '),
+ '${LIBS_TEMPLATE}' => libsfilter)
+ end
+
def exeext
query_ext 'executable'
end
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: TerryP <spi...@us...> - 2010-07-24 16:20:38
On Sat, Jul 24, 2010 at 16:12, Erik Schrauwen <gho...@us...> wrote:
> hi,
>
> I'm sending a second confirmation of my nod of approval of the software
> license so that it's in the archive.
>
> I also wanted to let you know that my father and I are going on a camping
> trip in the last week of July, (his vacation from work). So I won't have
> internet access or be able to respond to emails untill I get back, which
> will probably be in about a week.
>
> from ghost314
>
>
Thanks for the confirm and the heads up, I hope y'all have a great time :-)
--
TerryP / Spidey01
Just Another Computer Geek.
From: Erik S. <gho...@us...> - 2010-07-24 16:12:08
hi,
I'm sending a second confirmation of my nod of approval of the software license so that it's in the archive.
I also wanted to let you know that my father and I are going on a camping trip in the last week of July (his vacation from work). So I won't have internet access or be able to respond to emails until I get back, which will probably be in about a week.
from ghost314
From: TerryP <spi...@us...> - 2010-07-14 16:03:40
The concept is fairly simple: at the top level the engine is divided into 5
parts:
- Client application
- Server application
- Shared libraries
  - Parts of the engine
  - Dependencies of the engine
- Plugins providing backends
  - Dependencies of the plugins
- Header files to #include in game code
The client application is responsible for rendering and input (etc.). If need
be, it would launch the server on localhost to handle single player or
host a multiplayer game.
The server application is entrusted with world management, the kind of thing
that decides if you win/lose, tracks scores, and so on. It can obviously be
hosted in Bolivia as long as the client(s) can talk to it.
Shared libraries are the meat and potatoes of the architecture. The engine
itself needs to offer a class library of useful code: stuff like
SxE::String, interfaces to log handling, scripting, game settings, and so on.
Little of it should be strictly necessary for writing a game, but making
people's lives easier is the point of doing an engine ;). Plugins would be
shared libraries that get loaded at run time; for example the game's
configuration may say to use the Spidermonkey JavaScript engine to implement
the engine's scripting interface, so the engine goes ahead and loads
the necessary plugin DLL implementing the interface by way of Mozilla
Spidermonkey. This model works for most any component. Any other runtime
dependencies like a Direct3D runtime or Boost.Thread would be included in
the binary distribution as needed.
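To make the plugin idea a little more concrete, here's a rough sketch of what
one of those backend interfaces might look like. None of this is actual
SxEngine code yet and the names are purely illustrative:

    // Hypothetical sketch: the engine codes against the abstract class, and a
    // plugin DLL (say, one built on Mozilla Spidermonkey) provides the
    // concrete implementation behind it.
    #include <string>

    namespace SxE {

        class IScriptEngine {
        public:
            virtual ~IScriptEngine() {}

            // Run a script file shipped with the game.
            virtual bool RunFile(const std::string& path) = 0;

            // Evaluate a snippet of script source, e.g. typed at a console.
            virtual bool Eval(const std::string& code) = 0;
        };

    } // namespace SxE

The client/server only ever talks to the interface; which DLL actually
provides it is a configuration detail.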
This keeps the engine flexible: if, for example, the Irrlicht rendering
engine doesn't work on platform XYZ, an implementation of
the necessary interfaces can be written for a rendering backend that does
work on platform XYZ, and used as identically as possible. Mating a
client/server architecture so deeply can also simplify the task of making a
game with both a single player campaign and multiplayer modes, and better yet
co-op modes!
There are at least 3 schools of thought on how to implement a game using an
engine, with regard to what writing a game on it will look like.
One is to simply fork the engine code and hack away until it implements your
game. That is the model id Software's DooM and Quake III game engines
followed. Works nicely, except you have to cuddle up to the engine internals
to actually do anything :-o. It's more practical for in-house use than a
general purpose system, in my humble opinion. That is also how most such
engines began in the good old 90s.
Another less often used method is to treat the game as an object to be
called by the engine, rather than it calling the engine. I haven't seen many
game engines made this way.
The third way, which could be implemented through the second one, is to split
the game-specific code off into a separate environment from the actual game
engine. That's the model that Epic's Unreal Engine (C++) follows with its
UnrealScript language, and to a lesser extent the original Quake (C) with
its minimalist "Quake C" game language. I say to a lesser extent because
while you should never have to go past UnrealScript for most UE games,
eventually you'll want to modify Quake I engine code instead of just Quake C
code.
Of course only the fourth method of game engine design likely works \o/.
For SxEngine, my vision could be called an implementation of the second
method: treating a game as something the engine calls rather than vice
versa.
A game is written as a C++ library expected to export an implementation of
an interface between game and engine. When the client or server application
executes, it loads that DLL. It is theoretically possible to implement that
proverbial "Game Application" class as a simple adapter for a more dynamic
language (including the script engine) instead of doing the game largely in
C++. I'm actually interested in testing the performance of that sometime.
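As a very rough sketch (invented names again, not code that exists yet), the
game side of that interface might boil down to something like:

    // Hypothetical sketch: the game is built as a shared library exporting a
    // factory function that the client/server applications look up with
    // LoadLibrary()/dlopen() at startup, then drive through the interface.
    class IGameApplication {
    public:
        virtual ~IGameApplication() {}
        virtual bool OnInit() = 0;          // called once after the engine is up
        virtual void OnFrame(float dt) = 0; // called every tick
        virtual void OnShutdown() = 0;      // called before the game DLL is unloaded
    };

    // Exported with C linkage so the engine can find it by name in the game DLL.
    extern "C" IGameApplication* CreateGameApplication();

A dynamic-language adapter would simply be one particular implementation of
that class.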
Doing it 'this' way has some interesting properties. It should reduce (and
usually eliminate) the need for a game developer to write their own client
and server programs, and modifying SxEngine's client/server should only be
needed for specialised cases. Likewise this allows the game's code to
receive access to relevant portions of the game engine via C++ references
where needed, instead of A.) having to manage the engine internals and
various component memory allocations by hand, or B.) having to ask for
handles to singleton objects on demand. It makes a very nice design with
less complexity and can readily be grown to its fullest potential. At
least, that's what several hours of thinking tells me lol.
Any questions, comments, advice (Q.A.C.s) or the like before we move on?
Quack, quack!
--
TerryP / Spidey01
Just Another Computer Geek.
From: TerryP <spi...@us...> - 2010-07-12 04:45:05
I thought it would be a good idea to write something about what the source tree will look like as commits move on, given that an intro to subversion repository structure was just done lol.
There are many ways to build a project; normally it is done "In tree" or "Out of tree". In tree typically means the compiler is told to put its files next to the source code. It's rather messy. Out of tree simply has the compiler put the generated files somewhere else. There are even more ways to lay out the sources. At least for me, my "Home directory" is shared between computers and different operating systems, and I also test with several compilers; so in tree builds don't really "Do it" for me unless it applies magic to the problem. When you combine the process of building and distributing a larger project, having separate directory structures for each phase starts to look pretty sweet.
This is the kind of structure I've set things up with for the "Source tree":
- Build/
- Dist/
- Source/
- Vendor/
SxEngine code goes in the Source folder and third party stuff goes in Vendor/vendorname. Sticking with that branching example from the subversion thread, vendors/ogre3d/release-1.7 would get merged into branches/ogre3d-backend/Vendor/ogre3d/src. This distinction between <branch path>/Source and <branch path>/Vendor makes it perfectly clear whether you're looking at part of the engine or not. It also makes accidentally merging the branches the wrong way a little harder.
Temporary files created by building the source code end up under the "Build" tree. Because C++ object files and whatnot tend to be specific to a given CPU, OS, and compiler, all those files go in Build/architecture/os/compiler. I.e. if you build Source/module/foo.cpp you may get Build/x86/win32/msvc/module/foo.obj. This makes having any number of side-by-side builds for an arbitrary number of configurations very easy, as is deleting all the temporary files!
Results of compilation end up in the "Dist" tree. For the same reason as above it follows the Dist/architecture/os/compiler structure. The concept is that when you build the project, distributing the files should be as easy as zipping up the right folder in Dist/ and unzipping it on the user's computer. When headers and import libraries are included in the dist files, it also makes building your own game with the engine less painful IMHO, and frees the game of having to care about the engine's build system. Releasing an SDK becomes easier!
If you have any better ideas than the Build, Dist, Source, and Vendor trees thing, I'm open to listening. Over the years it's the most effective way I've found for combining C/C++ with cross platform work.
Why did I choose rake over more popular solutions like CMake or SCons? Quite frankly, in my experience they make doing that kind of build exponentially more work in filthy hacks and kludges than they actually solve in headaches. We obviously can't use Visual C++ for Linux builds (et al.), so VS project files are useless beyond our workstations. Such IDEs are however still great tools for writing the code, and setups can be committed to subversion. Maintaining separate sets of makefiles/project files per development environment is also an *anti*-solution.
Enter rake. It's essentially a make <http://en.wikipedia.org/wiki/Make_(software)> tool written in Ruby and using Ruby syntax. Requiring rake for a build is certainly a lot less weight than forcing a specific development environment (which are not all created equal). It's also faster than writing a custom program for the job from scratch. Ruby 1.9 includes rake, and it can be installed separately as a gem for Ruby 1.8. I also used the Platform gem to make life easier; it's all easy as pie to set up.
Like traditional makefiles, rake can also support several compiler toolsets, in the sense that it can conditionally include a separate file describing the build configuration. Because knowing a lot of Ruby to build C++ is silly, I adapted it to use YAML to store such settings. The Builder object automatically does this when created. A YAML file looks like this: http://en.wikipedia.org/wiki/YAML#Sample_document so it's child's play to edit most compilation settings.
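Just to give a feel for it, a toolset config along the lines of Rk/conf/x86.win32.msvc.yml might look something like this. The top-level keys (programs, options, commands, extensions, template_vars) are the ones builder.rb reads; the command names and values below are just placeholder guesses, not the real settings:

    # hypothetical example -- placeholder values, not the project's real settings
    programs:
      cxx: cl
      link: link
    options:
      cxxflags:
        warnings: /W3
        optimize: /O2
    commands:
      c++:
        make_object: ${CXX} ${CXXFLAGS} /c /Fo${TARGET} ${SOURCE}
        make_executable: ${LINK} /OUT:${TARGET} ${SOURCE} ${LIBS_TEMPLATE}
    extensions:
      c++:
        executable: .exe
        object: .obj
    template_vars:
      libs_template: ${LIB}.lib

The matching Rk/conf file for your cpu/os/toolset gets picked up automatically when the Builder object is created, and a module can override entries with its own conf file.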
A little applied engineering resulted in a rake infrastructure that copes with the desired multi-tree source/build/dist structure very well and keeps the Ruby hacking to a bare minimum wherever possible (and it is getting steadily better). The way this rake stuff has been done, it can actually be adapted to any language. It's only been tested with C and C++, but the scripts can be used just as well for projects written in languages like Java, C#, Visual Basic, Lisp, and Go. Since it's so useful, at least to me personally, there is an unrelated git repo <http://github.com/Spidey01/rake-build-infrastructure/tree/Examples> to make using the scripts on new projects faster. When SxEngine is set up, I'll push the updated code/docs for the build scripts back out to the hub.
In essence building the SxEngine will be a matter of running rake:
> rake target
If you've ever used make, it works the same way, e.g. make clean. Under Ruby 1.9, var=value arguments are possible, such as:
> rake toolset=mingw target
Using an IDE such as Visual Studio, NetBeans, Eclipse, Code::Blocks, or any of the other dozen or so I've worked with, becomes just a matter of telling it to run rake when you hit "Build". Vim/Emacs also work fine. If invoked just as rake, the default settings build the game engine and its dependencies using the most natural toolkit. Visual C++ is assumed for Windows. Being able to commit IDE-specific files that handle running rake is also a handy way to share IDE settings ;).
--
TerryP / Spidey01
Just Another Computer Geek.
From: Terry P. <big...@gm...> - 2010-07-12 02:14:23
As far as a first version control system goes, Subversion is as easy to
learn as any other modern system. That includes CVS, Perforce, Git,
Mercurial, and Bazaar; it's debatable whether CVS can be called modern, but
we can skip that. A repository in SVN is basically a collection of directory
trees, composed of so many branches. Subversion (svn) is sometimes a bugger
to work with, but it offers a lot of flexibility in laying out your files.
The usual convention among SVN users is to lay out repositories like this:
- / (or /projectname)
  - trunk/
    - source tree for the trunk branch.
  - branches/
    - branch-name/
      - source tree for the branch-name branch.
  - tags/
    - tag-name/
      - source tree for the tag-name branch.
A 'branch' is simply a thread of development that happens alongside other
branches. It's like copying your files to a different directory so you can
go wild on an idea without breaking the whole project; only you get the
power of a version control system to help maintain everything. The SVN Book
has a nice intro to branching
<http://svnbook.red-bean.com/en/1.5/svn.branchmerge.whatis.html> with
a good explanation of it, as well as of revision control in general.
Branching is a nice solution for the "Oh crap, where did I save the code
from working on quux idea" kind of problem!
By convention a "tag" is simply a branch that no one edits. it's good for
having an iron clad record of what shipped as version x.y.z. for easily
comparing it against patches applied to another maintenance branch.
Most projects use the "trunk" as the current code base. Some prefer it be
like a publicly readable working copy, others prefer that it stays stable
enough to be built and tested at any given moment.
"Source tree" in the above list, obviously being our source tree --
essentially what you would get if you extracted sxengine-x.y.z.tar.bz2 and
set out to compile it.
For an example of using Subversion in a routine manner, adding support for a
new rendering backend might look something like this (a rough sketch of the
corresponding svn commands follows the list):
1. Create a new branch
   - named "branches/ogre3d-backend"
   - copy existing SxEngine code into it from a suitable branching point,
     like trunk or branches/release-1.2
2. Make changes to branches/ogre3d-backend
   - Editing for this feature happens in branches/ogre3d-backend
   - Editing for things unrelated to this feature happens in another
     branch, e.g. trunk
   - Changes can be merged between branches if needed.
3. Eventually the code has a clean bill of health
   - compiles
   - tested
   - reviewed
   - blah blah
4. Integrate changes back into the code base proper
   - See what's changed since the branch was last synchronized with the
     current code base
   - Code and revision *history* in branches/ogre3d-backend is merged
     with trunk/ (for example).
   - Other developers update their working copies' idea of trunk as
     needed.
5. Take a long nap and enjoy a cool drink in the shade.
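In rough command form (needs svn 1.5 or later for the merge tracking; <repo>
stands in for our repository URL), steps 1 through 4 boil down to something
like:

    # illustrative only; <repo> = our repository URL
    # 1. create the branch as a cheap server side copy
    svn copy <repo>/trunk <repo>/branches/ogre3d-backend -m "Branch for the Ogre3D backend"

    # 2. check it out, hack away, and pull in changes from trunk when needed
    svn checkout <repo>/branches/ogre3d-backend
    ... edit, svn commit, repeat ...
    svn merge <repo>/trunk     # run from the branch working copy

    # 4. once step 3's clean bill of health is in hand, merge the branch back
    #    (run from a trunk working copy)
    svn merge --reintegrate <repo>/branches/ogre3d-backend
    svn commit -m "Merge ogre3d-backend into trunk"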
Branches are good for many things:
- Adding specific features.
- Exploring interesting ideas.
- Incompatible changes to the code base.
- Maintaining a "Release" separate from current hacking needs.
- Snapshotting released code.
- Personal farting ground.
The downside of branches is, of course, that you eventually have to merge
them to get any real value. Cherry picking individual change sets between
branches rather than doing a straight merge is also possible, but I've never
had to do it under svn.
If you understand the concept of branching and merging files, you generally
understand Subversion.
I believe the usual trunk, branches, tags layout is suitable for our needs,
whatever policy is needed for managing them. Given the nature of the
project, it is best to throw vendor branches in their own place. A vendor
branch is a branch containing someone else's code, i.e. if we depend on zlib
we might want a vendor branch containing zlib's source code, so there is
always quick access to the *right* version of it. On unix/linux there's
little need for something like that unless the third party project is
unheard of on the target distro, and therefore needs to be manually
installed (zlib is as common as air), but on Windows... such vendor branches
make compiling dependencies a hell of a lot easier. It also helps if we need
a specific version of some library that is no longer easy for users to
download by hand.
So our repo should look something like this:
- /
  - trunk/
  - branches/
  - tags/
  - vendors/
Where vendors/ is the home of branches for third-party code, things we need
to build against like zlib, Boost, Irrlicht, etc. Playing off the Ogre3D
example of using branches for feature development, before creating
branches/ogre3d-backend (see above) we might create a
vendors/ogre3d/release-1.7 branch and commit a copy of the Ogre3D version
1.7.x code there. Then merge it into a subdirectory of
branches/ogre3d-backend, set up the Rakefiles to compile ogre, and commit
that to branches/ogre3d-backend as its first commit. More precise details
can be found in the SVN Book's section on vendor branches.
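A bare-bones version of that (again with <repo> standing in for our
repository URL; illustrative commands only) might be:

    # drop a pristine copy of the Ogre3D 1.7.x release under vendors/
    svn import ogre-1.7.x/ <repo>/vendors/ogre3d/release-1.7 -m "Vendor drop of Ogre3D 1.7.x"

    # from a working copy of branches/ogre3d-backend, pull it into the Vendor tree
    svn copy <repo>/vendors/ogre3d/release-1.7 Vendor/ogre3d/src
    svn commit -m "Bring in Ogre3D 1.7 from the vendor branch"

The SVN Book's procedure also covers upgrading to newer vendor drops later
on; the above is just the first import.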
In terms of general policy for using subversion on the project, this is what
I would like to see, conceptually.
When we make a release of SxEngine, we create something like
branches/release-x.y and tags/release-x.y.z. Further development of that
release version happens in the branches/release-x.y branch. Every time
there's a file release we branch that into tags/release-x.y.z. As a list,
that might look something like:
- branches/
  - release-1.2/
- tags/
  - release-1.2.0
  - release-1.2.1
You get the idea.
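In svn terms that's just more cheap copies (illustrative commands, <repo>
again being our repository URL):

    # start the 1.2 maintenance branch from trunk
    svn copy <repo>/trunk <repo>/branches/release-1.2 -m "Branch for 1.2 releases"

    # snapshot each file release from the maintenance branch
    svn copy <repo>/branches/release-1.2 <repo>/tags/release-1.2.0 -m "Tag 1.2.0"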
Likewise, hacking on stuff that should take a while to get into a workable
state likely belongs in branches/ under a name that makes sense, like
branches/irrlicht-backend or branches/v8-backend.
When a so-called topic branch is ready to be merged into the main line of
development (etc.), we should have a general checklist before the work is
"Blessed" as code we can release. At a minimum it should compile and run
lol. In my humble opinion an event like merging a topic branch into the
trunk would be a good time for everyone to try a brief review of the code
involved. Having relevant unit tests and documentation written would also be
nice.
Some people are anal about reviewing every single line of code before
allowing it to be committed... I'm not that strict a bastard. I do however
feel we should make an effort to review our work periodically. As the old
saying goes, two_heads > one_head. It's also best to do it frequently enough
that reviews are short and expedient. If it's more code than you can read
over lunch, then there's a problem.
What is a code review <http://en.wikipedia.org/wiki/Code_review>? Basically
several people look *objectively* at the code; preferably this includes
people who didn't write it. Take a quick read of things, look for obvious
faults, details that might have fallen through the cracks, code that looks
like a pile of buffalo pucky, etc. It is kind of like a cross between
proofreading an article for correctness and being a film critic. Remember
it's the code that smells, not the developers. Ok, well some people likely
need a bath, but they haven't invented a Smell Transfer over Internet
Protocol yet :-o.
When I used to run tactical exercises in my old gaming team, I always took a
few minutes after the operations for a quick debriefing. Everyone in the
assault team was asked 3 questions about the simulated mission they had just
played: what went right, what went wrong, and how could we improve for next
time? Then I added my own voice of experience. It was usually done in 5 to
10 minutes.
In our world of programming, it would undoubtedly take longer than 5 minutes
to review a non-trivial class but the idea is the same. Look for what's
good, what's off centre, and how to make the code better. Reviewing some
code should also be *short*, not a time consuming affair.
More precise details about all that we can work out as needed. Posts to the
mailing list can be used, or we can set up a tool to help get it done. SF has
one called codestriker available in the hosting features. I think the
mailing list is good enough for a small group, but we can structure it
however works best for us all.
It also goes without saying that when committing changes to a publicly
readable repository, don't accidentally commit something that shouldn't be
readable by the whole world. Like some kind of passwords.doc file or
whatever.
--
TerryP / Spidey01
Just Another Computer Geek.
From: TerryP <spi...@us...> - 2010-07-12 01:21:16
Here's the licensing I expect us all to work under:
Copyright (c) 2010, Terry Matthew Poulin
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
- Neither the name of the SxEngine project nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
This generally follows the Linux kernel format of doing it, i.e. a single
git owns the copyright in case quick legal action is needed but the license
is permissive (in this case an open barn door) enough that control is easily
divested, even if the git in charge gets hit by a bus. This seems the most
appropriate method for now, although if we still have enough regulars
hanging around after a few years of hacking, I would be happy transferring
copyright to some kind of team-held company or foundation in the future. I
just want to make sure we don't run into roadblocks later. The text itself
was created from the OSI template for the simplified BSD license with the
usual replacements on <YEAR>, <OWNER>, and <ORGANIZATION> as appropriate.
Please offer some nod of agreement, so access controls can be turned on,
thanks :-).
--
TerryP / Spidey01
Just Another Computer Geek.