Thread: Re: [Algorithms] Complexity of new hardware (Page 11)
|
From: Jarkko L. <al...@gm...> - 2009-04-19 22:53:30
|
Hand-coding data conversion just sounds like an awful lot of unnecessary maintenance work, which I'd rather avoid when constantly adding/removing variables to/from classes. Batch processing data to clean up conversion code also assumes you have all the old data available for processing, which may cause problems, particularly if you are a middleware developer. For improving loading performance and reducing data size I can strip off all the type information for the final build, in which case loading becomes straightforward linear data streaming.

Cheers, Jarkko

-----Original Message-----
From: Adrian Bentley [mailto:ad...@gm...]
Sent: Monday, April 20, 2009 12:30 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] Complexity of new hardware

Hand-coding incremental conversions is really easy. Seems to me the problem is how to flush out old versions so you don't end up with a rat's nest of compatibility code. Fortunately, it's only a matter of writing other code :). Combining an easily editable source format (e.g. text) for your assets with a translation application allows these sorts of rebuilds easily. Layering incremental versioning on top is not hard. There are certainly trade-offs: mergeability, better abstraction in some cases, more code, more time to get to preview, etc.

Or, if you want to keep your source in a compiled format, providing some form of mass translation mechanism should work, e.g. snapshotting version changes into a file which can be run across all assets. I haven't done this personally, and it comes with a different sort of baggage, but it seems doable all the same. Pick your poison, but there are alternatives to brain-dead simple serialization that have better long-term usability. Also, unless you have a baking step (hello translation), it feels like this would be important for loading performance anyway.

Cheers, Adrian

On Sun, Apr 19, 2009 at 3:17 AM, Jarkko Lempiainen <al...@gm...> wrote:
> Otherwise you would have to hand-code the conversion which
> could get quite a daunting task.
|
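For concreteness, a minimal sketch of that kind of type-stripped, linear load path (names and layout are hypothetical, not taken from Jarkko's PFC code) might look like:

// Sketch only: with all type information stripped at build time, an asset
// can be loaded as one contiguous, pointer-free blob in a single sequential
// read -- no per-field parsing or conversion at runtime.
#include <cstddef>
#include <cstdio>
#include <vector>

bool load_blob(const char* path, std::vector<char>& out)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f)
        return false;
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    out.resize(size > 0 ? static_cast<size_t>(size) : 0);
    size_t read = out.empty() ? 0 : std::fread(&out[0], 1, out.size(), f);
    std::fclose(f);
    return read == out.size();
}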
|
From: Jon W. <jw...@gm...> - 2009-04-20 16:55:44
|
Will Vale wrote:
> On Sun, 19 Apr 2009 03:03:27 +1200, Jarkko Lempiainen <al...@gm...>
> wrote:
>
>> All I have is something like:
>> struct foo
>> { PFC_MONO(foo) {PFC_VAR3(x, y, z);}
>> int x, y, z;
>> };
>>
>
> That's admirably brief! I assume you're generating code rather than data
>
> // foo.h
> struct foo
> {
> COMPOUND(foo)
>
> int x, y, z;
> };
>
> // foo.cpp - essentially generates const member_t foo::members[] = { ... };
> RTTI(foo, MEMBER(x) MEMBER(y) MEMBER(z))
>
This is exactly what I want to avoid -- the members are listed twice, in
two different files! Version mismatch hell. (The Despair engine write-up
has this same problem).
You can actually use some template trickery to hoist the RTTI() /
MEMBER() parts into the struct itself.
This allows you to write something like:
struct foo {
int x, y, z;
RTTI(MEMBER(x) MEMBER(y) MEMBER(z))
};
That's all there is to it. No additional code, .cpp files, or anything
like that needed (except for whatever runtime support you'll want, like
stream handling, custom type marshaling, etc).
members() gives you a list of the members; info() gives you information
about the struct itself. I put an implementation (including a program
you can compile and run) up for discussion at
http://www.enchantedage.com/cpp-reflection if you want to take a look.
Here's a main() program that prints the information about the type "foo":
int main() {
    printf("type: %s\n", foo::info().name());
    printf("size: %ld\n", foo::info().size());
    for (size_t i = 0; i != foo::info().memberCount(); ++i) {
        printf(" %s: offset %ld size %ld type %s\n",
               foo::info().members()[i].name,
               foo::info().members()[i].offset,
               foo::info().members()[i].type->size(),
               foo::info().members()[i].type->name());
    }
    return 0;
}
And here's the output:
type: foo
size: 12
x: offset 0 size 4 type i
y: offset 4 size 4 type i
z: offset 8 size 4 type i
You probably want to wrap those ::info().members()[i] accessors in some
nice readable macro, like TYPE_NTH_MEMBER() or suchlike, but I think
this provides the optimum implementation in the sense that it relies
only on static init (no dynamic memory allocations) and it doesn't
require member definition to be separate from declaration (it's all in
the .h file).
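For illustration, a rough, self-contained sketch in the same spirit (simpler than the implementation linked above: it passes the type name to RTTI() and records only name/offset/size, with no per-member type objects and no info()) could look like:

#include <cstddef>
#include <cstdio>

// One record per reflected member, built entirely at static-init time.
struct member_t
{
    const char* name;
    size_t      offset;
    size_t      size;
};

// MEMBER() expands to one braced member_t initializer; RTTI() drops a
// typedef plus a static members() accessor straight into the struct, so
// the declaration and the reflection data live in the same header.
#define MEMBER(m) { #m, offsetof(self_t, m), sizeof(((self_t*)0)->m) },
#define RTTI(type, member_list)                                     \
    typedef type self_t;                                            \
    static const member_t* members(size_t* count)                   \
    {                                                               \
        static const member_t s_members[] = { member_list };        \
        *count = sizeof(s_members) / sizeof(s_members[0]);          \
        return s_members;                                           \
    }

struct foo
{
    int x, y, z;
    RTTI(foo, MEMBER(x) MEMBER(y) MEMBER(z))
};

int main()
{
    size_t count = 0;
    const member_t* m = foo::members(&count);
    for (size_t i = 0; i != count; ++i)
        printf("%s: offset %u size %u\n",
               m[i].name, unsigned(m[i].offset), unsigned(m[i].size));
    return 0;
}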
If you want editor information, you can easily push that into the
MEMBER() macro.
It's still repeating yourself, though.
Sincerely,
jw
|
|
From: Gregory J. <gj...@da...> - 2009-04-20 17:14:54
|
> this provides the optimum implementation in the sense that it relies
> only on static init (no dynamic memory allocations) and it doesn't
> require member definition to be separate from declaration (it's all in
> the .h file).

Agreed -- ours works the same way. Class and property reflection info is stored as a static linked list, and the reflection info is declared in the header (of course, in order for the static registration magic to work, you do need a single HYP_CLASS_IMPL() macro in the .cpp file).

...

> It's still repeating yourself, though.

Our system has all of the benefits described, with no repetitions -- the declaration of the property (member) reflection info *is* the declaration of the struct/class field.

Greg
|
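A bare-bones sketch of that kind of static-linked-list registration (with invented names standing in for the actual HYP_ macros) might look like:

#include <cstdio>

// One node per registered class, linked into a global list at static-init time.
struct class_info
{
    const char* name;
    class_info* next;
};

// Head of the list; a function-local static avoids init-order problems.
inline class_info*& registry_head()
{
    static class_info* s_head = 0;
    return s_head;
}

// The constructor of this helper runs during static initialization and
// pushes the class's node onto the global list.
struct class_registrar
{
    class_info info;
    explicit class_registrar(const char* name)
    {
        info.name = name;
        info.next = registry_head();
        registry_head() = &info;
    }
};

// Goes in exactly one .cpp per class, playing the role of the single
// per-class macro mentioned above (the macro name here is made up).
#define REGISTER_CLASS_IMPL(type) \
    static class_registrar s_##type##_registrar(#type);

struct player {};
struct camera {};
REGISTER_CLASS_IMPL(player)
REGISTER_CLASS_IMPL(camera)

int main()
{
    for (class_info* c = registry_head(); c; c = c->next)
        printf("registered class: %s\n", c->name);
    return 0;
}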
|
From: Pal-Kristian E. <pal...@na...> - 2009-04-22 00:44:23
|
Nicholas "Indy" Ray wrote: > On Tue, Apr 21, 2009 at 12:43 PM, Pal-Kristian Engstad > <pal...@na...> wrote: > >> Goal was a great system - one that we still greatly miss. If we had to make >> a Goal2, then we'd probably: >> >> Use an SML-ish or Scala-ish surface syntax, while trying to retain the power >> of Lisp macros. >> > > Is there any reason you would choose against S-Expressions? I don't > know about Scala, but I find SML syntax to be a little less > maintainable for large systems and a lot less usable for Macros; Is > this mostly a choice of preference by the team, or perhaps you think > it'd be easier to get new employees to learn? > Indeed. Most game-programmers and to some extent game designers, want to work on a lot of 3D math. This requires quite a bit of math and using infix notation for this is definitely mind-boggling, even for seasoned Lisp/Schemers. The second reason is that s-expressions work well for expressions, but for sequencing, the syntax is less than stellar. The third reason is that it is quite possible to have strong macro systems (see e.g. Dylan and to a certain extent OCaml) without s-expressions. And finally, as you mentioned, it does take new employees a rather long time to get used to it. >> Introduce stronger typing features, which is difficult, given the need for >> REPL and hot updates. >> Do more in terms of high-level optimization, though LLVM might negate the >> need for some of that. >> > > LLVM is rather quite nice, and while it'll take some infrastructure, > and certainly a resident compiler instance, I don't suspect that hot > updates would be too much of a problem with a better typed (likely > type inferred) programming language. > The devil is in the details, I am afraid. As an example (and I'm not saying this is impossible with static type checking), in a completely dynamic setting it is quite feasible to: introduce new fields in a data structure (compile and send to game), edit (static) data that uses the data structure (compile and send to game) and finally compile functions that reference the data structure and send it to the game (at which point the new data is actually used). I know AliceML has a solution to this, but from what I've heard, it was quite a challenge. PKE. -- Pål-Kristian Engstad (en...@na...), Lead Graphics & Engine Programmer, Naughty Dog, Inc., 1601 Cloverfield Blvd, 6000 North, Santa Monica, CA 90404, USA. Ph.: (310) 633-9112. "Emacs would be a far better OS if it was shipped with a halfway-decent text editor." -- Slashdot, Dec 13. 2005. |
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-22 01:24:17
|
On Tue, Apr 21, 2009 at 5:41 PM, Pal-Kristian Engstad <pal...@na...> wrote:
> Indeed. Most game programmers, and to some extent game designers, want to
> work on a lot of 3D math. This requires quite a bit of math, and writing it
> without infix notation is definitely mind-boggling, even for seasoned
> Lisp/Schemers. The second reason is that s-expressions work well for
> expressions, but for sequencing, the syntax is less than stellar.

I do wonder if an infix macro would do the majority of the job; for instance, in places where infix would prove advantageous you could do something such as:

(define (math-me x y z)
  (infix x + 2 * (y ^ sqrt(z))))

The macro would be fairly trivial (as far as operator precedence goes) and would keep such code out of the compiler itself. Additionally, it should also be pretty easy to write code in your editor (presumably emacs if we are writing code in a lisp) that could transform prefix into infix and vice versa.

> The third reason is that it is quite possible to have strong macro systems
> (see e.g. Dylan and to a certain extent OCaml) without s-expressions.

I have yet to actually use Dylan, but I have used OCaml and the camlp4 preprocessor; however, I found macros much less intuitive when the language doesn't model the transformation tree quite so well. I'd like to know whether the problem is smaller in Dylan.

> The devil is in the details, I am afraid. As an example (and I'm not
> saying this is impossible with static type checking), in a completely
> dynamic setting it is quite feasible to: introduce new fields in a data
> structure (compile and send to game), edit (static) data that uses the
> data structure (compile and send to game) and finally compile functions
> that reference the data structure and send it to the game (at which
> point the new data is actually used). I know AliceML has a solution to
> this, but from what I've heard, it was quite a challenge.

Yes, this is of course a problem with more strict and static run-time data formats. But I have found this problem in dynamic languages as well; often the default record types in dynamic languages don't support introduction of new fields. This becomes much easier if you can support dynamic non-contiguous records, and those can still interface very well with a statically typed language. But it's often the case that in static languages you want static records, in which case you have no choice other than stopping the world and upgrading every piece of data, which would require a lot of allocations/deallocations. That would lead to large pauses while upgrading data types; it might not happen often enough to matter, but when it did happen it would certainly stall the quick incremental cycle you otherwise have going on.

Nicholas "Indy" Ray
|
|
From: Sam M. <sam...@ge...> - 2009-04-22 17:19:38
|
> Wouldn't that be a tough sell? You'd already be competing with free
> implementations of Lua, Python, JavaScript and their ilk on the low end,
> and built-in languages like UnrealScript on the high end.

I don't think there's a market for that kind of scripting DSL. A new language would need to eat into the remaining C++ development burden that isn't suitable for implementing in Lua, say. Which is plenty.

> Doesn't this bring us back full circle? I recall a statement from a
> month ago saying that we all need to think differently about how we put
> together massively parallel software, because the current tools don't
> really help us in the right ways...

Another reason to consider pure functional languages. This is a much deeper topic that I'm now about to trivialise, but the referential transparency of these languages makes them particularly suitable for parallel evaluation. For example, GHC (arguably the most mature Haskell compiler) can compile for an arbitrary number of cores, although it's still an active research area as I understand it.

Thanks,
Sam
|
|
From: Gregory J. <gj...@da...> - 2009-04-22 17:31:48
|
> parallel evaluation. For example, GHC (arguably the most mature Haskell
> compiler) can compile for an arbitrary number of cores

I don't think that's a desirable feature -- limiting your scalability at compile time, that is.

Greg
|
|
From: Sam M. <sam...@ge...> - 2009-04-23 08:16:25
|
Apologies - I'm still pretty new to Haskell and messed up my facts here. You specify the number of cores at runtime rather than compile time. From the GHC user guide: http://haskell.org/ghc/docs/latest/html/users_guide/using-smp.html

ta,
Sam

-----Original Message-----
From: Gregory Junker [mailto:gj...@da...]
Sent: 22 April 2009 18:32
To: 'Game Development Algorithms'; and...@ni...
Subject: Re: [Algorithms] Complexity of new hardware

> parallel evaluation. For example, GHC (arguably the most mature Haskell
> compiler) can compile for an arbitrary number of cores

I don't think that's a desirable feature -- limiting your scalability at compile time, that is.

Greg
|
|
From: Andrew V. <and...@ni...> - 2009-04-23 08:25:07
|
Actually, I think that's a perfectly fine feature if all you care about is one or two fixed platforms that you already compile for separately anyway. Which is quite a few of us. :) Knowing exactly how many cores you have and factoring that info in at compile time might also enable better optimisation or layout of the code by the compiler.

Cheers,
Andrew.

> -----Original Message-----
> From: Gregory Junker [mailto:gj...@da...]
> Sent: 22 April 2009 18:32
> To: 'Game Development Algorithms'; and...@ni...
> Subject: Re: [Algorithms] Complexity of new hardware
>
> > parallel evaluation. For example, GHC (arguably the most mature
> > Haskell compiler) can compile for an arbitrary number of cores
>
> I don't think that's a desirable feature -- limiting your
> scalability at compile time, that is.
>
> Greg
|
|
From: Sebastian S. <seb...@gm...> - 2009-04-25 18:16:54
|
On Wed, Apr 22, 2009 at 5:52 PM, Sam Martin <sam...@ge...> wrote:
> > Wouldn't that be a tough sell? You'd already be competing with free
> > implementations of Lua, Python, JavaScript and their ilk on the low end,
> > and built-in languages like UnrealScript on the high end.
>
> I don't think there's a market for that kind of scripting DSL. A new
> language would need to eat into the remaining C++ development burden
> that isn't suitable for implementing in Lua, say. Which is plenty.
>
> > Doesn't this bring us back full circle? I recall a statement from a
> > month ago saying that we all need to think differently about how we put
> > together massively parallel software, because the current tools don't
> > really help us in the right ways...
>
> Another reason to consider pure functional languages. This is a much
> deeper topic that I'm now about to trivialise, but the referential
> transparency of these languages makes them particularly suitable for
> parallel evaluation. For example, GHC (arguably the most mature Haskell
> compiler) can compile for an arbitrary number of cores, although it's
> still an active research area as I understand it.

Being a massive Haskell fanboy myself, let me jump in with some other cool things it does that relate to game development.

1. It's starting to get support for "nested data parallelism". Basically, flat data parallelism is what we get with shaders now; the problem with that is that the "per-element operation" can't itself be another data parallel operation. NDP allows you to write data parallel operations (on arrays) where the thing you do to each element is itself another data parallel operation. The compiler then has a team of magic pixies that fuses/flattens this into a series of data parallel applications, eliminating the need to do it manually.

2. It has Software Transactional Memory. So when you really need shared mutable state you can still access it from lots of different threads at once with optimistic concurrency (only block when there's an actual conflict). Yes, there are issues, and yes it adds overhead, but if the alternative is single-threaded execution and the overhead is 2-3x, then we win once we have 4 hardware threads to spare.

3. Monads! Basically this allows you to overload the semi-colon, which means you can fairly easily define your own embedded DSLs. This can let you write certain code a lot more easily. You could have a "behaviour" monad, for example, abstracting over all the details of entities in the game doing things which take multiple frames (so you don't need to litter your behaviour code with state machine code, saving and restoring state etc.; you just write what you want to do and the implementation of the monad takes care of things that need to "yield").

4. It's safe. Most code in games isn't systems code, so IMO it doesn't make sense to pay the cost of using a systems programming language for it (productivity, safety).

5. It's statically typed with a native compiler, meaning you could compile all your scripts and just link them into the game for release and get decent performance. Not C-like (yet, anyway!), but probably an order of magnitude over most dynamic languages.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
|
|
From: Conor S. <bor...@ya...> - 2009-04-26 16:11:51
|
Here's where I'd like to break in: Haskell is not yet ready for large games (or medium/large project software development at large), because it's an academic language that hasn't yet progressed to programming in the large (the big picture). Most of the "safety" in Haskell is for local problems, not for the large architectural decisions, which are the hardest to change later down the line (and in fact, Haskell provides very little guidance or mechanism in these areas). At least "object oriented" (in the general sense) programming languages try to provide a layered mechanism for programming in the large.

Functional programming is a very good tool, but it's too pure a tool for production software. Most production software has areas that are "do this, then do that", which pure functional languages still have awkward and heavy abstractions for (i.e. an extra level of thought that isn't necessary for the functionality required). It is also interesting that Tim Sweeney said in his programming-language-for-the-future talk that the "graphics engine" would be "functional", yet he didn't mention that rendering (as it currently stands) occurs in order and is highly stateful. Graphics hardware requires that you set your states, followed by your rendering commands, in order, which is a highly imperative way to think. This really shows that large problems tend to be made up of mixed solutions that don't follow any one set of rules.

Interactive evaluation is purely a tools problem. It has been shown that you can write C# in a REPL, and there is no reason why C++ couldn't work in a REPL if C# can (as long as you can isolate the illegal behaviour).

All of this is not to write off functional programming. I love functional programming and I think it's the key to code re-use (or in Charles Simonyi's words, "going meta") that has been missing from a lot of the current "promised land" languages. I think C# 3.0 was a move in the right direction (somewhere between F#, Haskell, C# 3.0, Cyclone and C99 is probably a "sweet spot" right now); I also think the next C++ standard is moving in the right direction with lambdas/closures (if not dispensing with a whole lot of crud that a re-worked systems language doesn't need). But I do think functional programming is not a paradigm people should be grabbing with both hands (only one and a pinky or so). Functional programming is part of the general solution to "better programming", but to take it to extremes (like Haskell) is not the answer, in the same way that software transactional memory is not the answer to scalable parallel computation (only some of the time; STM still has an analogue of deadlocks: the pathological case of cross-referential transactions). There is no silver bullet to either of these problems, and what we should be looking at is the best balance of tools without a level of complication that makes us put all our mental effort into the mechanisms of computation as opposed to the outcomes we're trying to achieve.

Cheers,
Conor

----- Original Message ----
From: Sam Martin <sam...@ge...>
To: Game Development Algorithms <gda...@li...>
Cc: and...@ni...
Sent: Sunday, 26 April, 2009 3:58:08 AM
Subject: Re: [Algorithms] Complexity of new hardware

Yeah, that's what I'm talking about! :) I was trying to resist getting excited and going into over-sell mode, but likely undercooked how much potential I think there is here.

To highlight just two more points I think are important:

- Haskell stands a very good chance of allowing games to really get on top of their (growing) complexity. I think this is best illustrated in the paper "Why functional programming matters", http://www.cs.chalmers.se/~rjmh/Papers/whyfp.html. Well worth a read if you've not seen it before.

- It can be interactively evaluated and extended. Working with C/C++ we get so used to living without this that I think we potentially undervalue how important a feature it is.

Cheers,
Sam
|
|
From: Sebastian S. <seb...@gm...> - 2009-04-26 17:04:42
|
On Sun, Apr 26, 2009 at 5:11 PM, Conor Stokes < bor...@ya...> wrote: > > Where I'd like to break in; Haskell is not yet ready for large games (or > medium/large project software development at large), because it's an > academic language that hasn't yet progressed to programming in the large > (big picture). Most of the "safety" in Haskell is for local problems, not > for the large architectual decisions, which are the hardest to change (and > in fact, haskell provides very little guidance or mechanism in these areas) > later down the line. At least "object oriented" (in the general sense) > programming languages try and provide a layered mechanism for programming in > the large. Could you expand on this? Some specifics perhaps? I'm not sure I understand what you mean. At least not when you compare to C++ which doesn't even have a module system (though Haskell's module system is fairly spartan)! > Functional programming is a very good tool, but it's too pure a tool for > production software. Most production software has areas that are "do this, > then do that", which pure functional language still has awkward and heavy > abstractions for (i.e. an extra level of thought that isn't necessary for > the functionality required). It is also interesting that when Tim Sweeney said in his > programming-language-for-the-future talk that the "graphics engine" would be > "fuctional", yet he doesn't mention that rendering (as it currently stands) > occurs in order and is highly stateful. Graphics hardware requires that you > set your states, followed by your rendering commands, in order, which is a > highly imperative way to think. This really shows that large problems tend > to be made up of mixed solutions, that don't follow any one set of rules. Sequences of "stuff" does not imply imperative languages. The low level rendering abstraction could easily be a list of "Commands" (we could call them "command buffers", or maybe "display lists", hang on a minute!), rather than being a series of state modifying statements. In fact, a lot of abstractions treat graphics as a tree, hiding the details of the underlying state machine. At a higher level I do agree with Sweeney that graphics is pretty functional. Take something like a pixel shader, for instance, which is just a pure function really, even though most shader languages make it look like an imperative function inside (to look like C, usually). Furthermore, if we're talking about Haskell specifically I'd say that in many ways it has much better support for imperative programming than C++ does, since you can define your own imperative sub-languages (e.g. you could have your CommandBuffer monad and write state setting etc. if that's how you really want to think about graphics). C++ doesn't allow you to abstract over what *kind* of statements you're working with, it only has one "kitchen sink" kind, and no way of overloading the way a block of multiple statements are bound together. > > Functional programming is part of the general solution to "better > programming", but to take it to extremes (like Haskell) is not the answer The problem with only going half way is that certain properties really need to be absolute unless they are to disappear. You can't be "sort of " pregnant, you either are or you aren't. The key killer feature of Haskell for me is purity, and if you start allowing ad-hoc undisciplined use of side effects anywhere then the language is no longer pure. 
Either you contain side effects by design and enforce it, or you can never write code that relies on something being pure (parallelism!) without basically giving up on the language helping you, relying instead on convention (which always breaks). Note that purity does emphatically *not* mean "no state mutations ever", it merely means that IO has to happen at the "bottom" of the program, not deep inside application code (this is pretty much what we do already though, so not much of an issue IMO), and that any localized side effects have to be marked up so that the compiler can enforce that they don't "leak" (e.g. you may want to do some in-place operations on an array for performance in a function, but from the outside the function still looks pure - that's fine in FP, Haskell uses the ST monad for it - you just need to seal off these local "bubbles" of impurity by explicitly marking up where things start going impure). I agree with Sweeney, again, that "side effects anywhere" is not the right default in a parallel world. So really it's all about being disciplined about it and marking up functions that are impure up front, so that the compiler can be sure that you're not trying to do anything impure in a context where purity is required (e.g. parallelism, or lazy evaluation). |
|
From: Conor S. <bor...@ya...> - 2009-04-26 19:11:55
|
"Could you expand on this? Some specifics perhaps? I'm not sure I understand what you mean. At least not when you compare to C++ which doesn't even have a module system (though Haskell's module system is fairly spartan)!"
Somewhere between type-classes and modules sits the "component" that Haskell has not yet mastered. C++ is reasonable for communicating "component frameworks" and Haskell is not quite there yet. The attempt at object orientation in C++ at least gives you a way to organize your "larger" thoughts in a way other people can understand (this is an entity, it does these things and it interacts with those things), and I don't think that Haskell has mastered this level of organisation yet. It's a good language for elegantly expressing algorithms and transforms, but it's not yet a full software engineering tool, with an easy methodology for relating to real-world problems.
""Sequences of "stuff" does not imply imperative languages. The low level rendering abstraction could easily be a list of "Commands" (we could call them "command buffers", or maybe "display lists", hang on a minute!), rather than being a series of state modifying statements. In fact, a lot of abstractions treat graphics as a tree, hiding the details of the underlying state machine. At a higher level I do agree with Sweeney that graphics is pretty functional. Take something like a pixel shader, for instance, which is just a pure function really, even though most shader languages make it look like an imperative function inside (to look like C, usually).
Furthermore, if we're talking about Haskell specifically I'd say that in many ways it has much better support for imperative programming than C++ does, since you can define your own imperative sub-languages (e.g. you could have your CommandBuffer monad and write state setting etc. if that's how you really want to think about graphics). C++ doesn't allow you to abstract over what *kind* of statements you're working with, it only has one "kitchen sink" kind, and no way of overloading the way a block of multiple statements are bound together."
But sequenced execution and state setting are the native mode of imperative languages. Pure functional lazy languages, like Haskell, require abstractions to deal with that, which are an extra level of thought (and complexity). In C++, you can have a CommandBuffer without the monad (or having to think about having a monad, which is the important part). Sure, there are very functional aspects to rendering (shading is a good example) and if you want to think of it in a functional way you can, but in games currently you don't necessarily want to be that abstracted from the process. So I agree that graphics are functional, but only up to a point. If you're doing hardware interaction for rasterization (which most of us currently are), then graphics are a wild mix of imperative and functional (running on multiple pieces of hardware and multiple 3rd party pieces of software) and cannot be categorized into one or the other area completely. Hence, taking an
absolute approach in either direction is probably not going to get you the best system.
"The problem with only going half way is that certain properties really need to be absolute unless they are to disappear. You can't be "sort of " pregnant, you either are or you aren't."
Yes, but I can't remember the last time I programmed a simulation of being pregnant (I'm not saying I haven't...). We shouldn't trap ourselves into a false dichotomy here. Software projects are incredibly complicated, not simple yes or no (are or aren't) questions. They may be a huge number of yes or no questions combining to make a nice fuzzy "yes/no" stew that is quite difficult to think of as a whole, but to be trapped in a single mindset is to miss the point of software development.
"The key killer feature of Haskell for me is purity, and if you start allowing ad-hoc undisciplined use of side effects anywhere then the language is no longer pure. Either you contain side effects by design and enforce it, or you can never write code that relies on something being pure (parallelism!) without basically giving up on the language helping you, relying instead on convention (which always breaks)."
This is part of why I contend that Haskell is a good language in the small. That kind of purity is great at a micro level, but sometimes it is nice to do things another way. In fact, part of the beauty of pure functional programming is that you can apply it absolutely over a small area and then use that to compose code in a larger impure language, as long as the contract of the pure bits are enforced when you use them.
"Note that purity does emphatically *not* mean "no state mutations ever", it merely means that IO has to happen at the "bottom" of the program, not deep inside application code (this is pretty much what we do already though, so not much of an issue IMO), and that any localized side effects have to be marked up so that the compiler can enforce that they don't "leak" (e.g. you may want to do some in-place operations on an array for performance in a function, but from the outside the function still looks pure - that's fine in FP, Haskell uses the ST monad for it - you just need to seal off these local "bubbles" of impurity by explicitly marking up where things start going impure). I agree with Sweeney, again, that "side effects anywhere" is not the right default in a parallel world.
So really it's all about being disciplined about it and marking up functions that are impure up front, so that the compiler can be sure that you're not trying to do anything impure in a context where purity is required (e.g. parallelism, or lazy evaluation)."
I agree that side effects anywhere is not the right ideal (and Haskell does really put them into a nice little monad focused box), but I tend to lean towards a less absolute way of going about that. When you're doing a complex and heavy IO oriented operation, it sometimes makes sense to be able to do that in the simple way an imperative language allows (they're very good at it!) without the extra level of abstraction. If you want to, in most imperative languages today, you can even isolate it behind an interface such that it can not be misused in an unsafe/leaky way (one of the selling points of object orientation, I believe).
What I tend to think is the right thing to move to is a system where purity is an interface annotation that is then enforced during compilation. Of course, I also tend to think that data structure/type information should live in a language-independent schema that is available at compile time to the "code", but I'm kind of crazy like that.
Cheers,
Conor
|
|
From: Sebastian S. <seb...@gm...> - 2009-04-26 19:56:58
|
On Sun, Apr 26, 2009 at 8:11 PM, Conor Stokes < bor...@ya...> wrote: > "Could you expand on this? Some specifics perhaps? I'm not sure I > understand what you mean. At least not when you compare to C++ which doesn't > even have a module system (though Haskell's module system is fairly > spartan)!" > > Somewhere between type-classes and modules sits the "component" that > Haskell has not yet mastered. > Couldn't it be just a smaller module? Lots of Haskell applications tend to use hierarchical modules this way, where you have a bunch of "low level" mini modules that are then included and exported from a main module. > > But sequenced execution and state setting are the native mode of imperative > languages. Pure functional lazy languages, like Haskell, require > abstractions to deal with that, which are an extra level of thought (and > complexity) > ... and expressive power. > In C++, you can have a CommandBuffer without the monad (or having to think > about having a monad, which is the important part). > Possibly, but you're hamstrung in how you can implement it since you have no control over what the semi-colon does, and for some things having just "state" as an underlying implementation isn't good enough. For example imagine being able to write something like: cover <- findBestCover moveToCover cover `withFallback` failedToGetToCover target <- acquireTarget ... etc.... The point being that the "moveToCover" function can take many frames to complete, and you can even let that action support the "withFallback" function to allow the action to fail (e.g. by an enemy firing at you). All the marshalling of actually running this action over multiple frames, keeping track of where in the action you need to resume and what the conditions are for resuming, can be handled by the AI monad. You can't do this (nicely) in C++ because it doesn't have the ability to let you define your own statement types. If you happen to have a sequence of imperative state modifications you're good to go, but if that's not what you're doing you're screwed. So like I said, in many ways Haskell beats loads of imperative languages at their own game. You can pretend it doesn't and just use the IO monad (which is essentially Haskell's "kitchen sink") like "imperative Haskell" and never have to worry about monads, if you think the complexity isn't worth it. So I don't think your characterization of imperative coding in Haskell as complex is necessarily true, you only pay for the complexity if you need it. You don't actually *need* to understand how monads work to do IO etc., but if you do spend the effort you find that it's a very powerful technique, and the tiny extra complexity it requires to understand is well worth it. > Hence, taking an absolute approach in either direction is probably not > going to get you the best system. > Well if that absolute approach allows you to do both (so long as you're explicit about when you're doing what), then there's no problem. Like I said earlier, Haskell does allow you to write sequential imperative code modifying state if that's what you want to do, you just need to be up front about it and say that you're going to do that in the type signature. The benefit of doing it that way is that you can later parallelise it trivially, since the compiler knows ahead of time where that's safe to do (as well as making it easier to reason about since you can easily see exactly what kind of stuff a function will do from its type). 
> > "The problem with only going half way is that certain properties really > need to be absolute unless they are to disappear. You can't be "sort of " > pregnant, you either are or you aren't." > > Yes, but I can't remember the last time I programmed a simulation of being > pregnant (I'm not saying I haven't...). > I have written plenty of software where I positively rely on purity of a function (parallelism, again). You can't be "sort of" pure, you either are or you aren't. > > I agree that side effects anywhere is not the right ideal (and Haskell does > really put them into a nice little monad focused box), but I tend to lean > towards a less absolute way of going about that. When you're doing a complex > and heavy IO oriented operation, it sometimes makes sense to be able to do > that in the simple way an imperative language allows (they're very good at > it!) without the extra level of abstraction. If you want to, in > most imperative languages today, you can even isolate it behind an interface > such that it can not be misused in an unsafe/leaky way (one of the selling > points of object orientation, I believe). > I'm not sure what you're referring to here, how is doing IO in C++ easier than in Haskell? Haskell is no better than C++, because you still have to do it in the same low-level imperative way. So it's not really improving on it much, you're essentially writing very C-like code with Haskell syntax, but I don't see any major way where it does worse either. Also, if you truly have a pure interface to something using IO, then unsafePerformIO can be used to give it a pure interface - though that really shouldn't be used very often and it's your responsibility to make sure there is no way of observing any side effects it does (think of it as a way of hooking into the runtime - you really, really shouldn't need it very often). And as I said before, if all you're doing is mutating some state (i.e. no real IO at all), then ST gives you away of wrapping it in a pure interface already (with compile time guarantees that you didn't screw up). -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862 |
|
From: Gregory J. <gj...@da...> - 2009-04-15 17:57:07
|
We could keep going, if you like, down to the fundamental matter of the
universe, as well as arguing whether light behaves as waves, particles or
both.
But even you have to admit that in the context that not only the statement
was made, but within this entire discussion exists, that there is a base
level of functionality assumed to exist, to which the debate does not apply.
So I for one reject your argument on the grounds of "ad absurdum". Not that
you likely care.
Greg
-----Original Message-----
From: Conor Stokes [mailto:bor...@ya...]
Sent: Wednesday, April 15, 2009 9:25 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] General purpose task parallel threading approach
"Other than classroom examples, software is NOT written through
composition of 'general components'. And even in the small subset
of software writing where your statement may hold true, writing
those components was a very small amount of work of the total."
No, every piece of software in a modern language is written through
composition of "general components", from the machine code op-codes, up to
the language constructs they come from, up to the libraries, patterns and
algorithms they're constructed with. Unless you dislike modern conveniences
like "arrays" and "for loops", or you feel the need to write binary
searches, quicksorts, disk-io and image-decompression routines inline every
time you come across the need for them (in a language of silicon, plastic
and copper no less!). Those may be small pieces of code, but serialization
frameworks, database frameworks and communications frameworks that are used
in extremely general ways and can often be much larger than the small pieces
of business logic that ties them together.
Someone writes the middleware and the infrastructure code, the VHDL, the
compilers, the drivers, the operating system and the standard libraries, it
may not be you or the coder next to you, but I can assure you there is a
huge amount of it in the world (have a look on sourceforge at the number of
different component libraries relative the number of "application" projects)
and it's a lot of work for somebody. Then again, the better general
components you have, the less often you need to re-write them (or go looking
for them) and the rarer they might seem from the perspective of someone who
spends a lot of time solving specific problems with them.
Cheers,
Conor
|
|
From: Conor S. <bor...@ya...> - 2009-04-16 08:10:11
|
"We could keep going, if you like, down to the fundamental matter of the
universe, as well as arguing whether light behaves as waves, particles or
both."
Apart from the fact that those aren't really human-created components written in a programming language (that we know of).
"But even you have to admit that in the context that not only the statement
was made, but within this entire discussion exists, that there is a base
level of functionality assumed to exist, to which the debate does not apply."
No, the argument expanded and Christer made a blanket statement. My points were that software is nearly always written through composition of general components, and that just because you weren't always the one doing the work on said components, it doesn't mean they are a very small proportion of the work involved.
"So I for one reject your argument on the grounds of "ad absurdum". Not that
you likely care."
Reductio ad absurdum arguments are actually formal logical arguments (if the logic is correct); it's a formal logical fallacy to dismiss an argument on the grounds of ad absurdum. That doesn't mean they aren't facetious, but it certainly isn't grounds to reject an argument.
Cheers,
Conor
|
|
From: Mat N. <mat...@bu...> - 2009-04-13 17:42:59
|
Again, there is a tremendous difference between allowing dynamic task dependencies within a frame vs. between frames vs. not at all.
Although it's really a matter of whether you can separate how you generate dependencies between tasks from running the task. If you can, then it doesn't matter when you spawn them, so you should spawn them where you can get the most benefit with the least complexity. If you can't, then you have to make an even better scheduler just to take into account intra-frame tasks.
Of course it doesn't work with a C++-vtable-dispatch mechanism; that does not allow you to separate the behavior of the task from the structure (i.e., dependencies) of the task. What Jon is talking about (I presume) is that tasks are not just vtables or function pointers; they have information associated with them that allows for higher order reasoning without running them.
MSN
From: Sebastian Sylvan [mailto:seb...@gm...]
Sent: Monday, April 13, 2009 10:09 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] General purpose task parallel threading approach
On Mon, Apr 13, 2009 at 5:30 PM, Mat Noguchi <mat...@bu...<mailto:mat...@bu...>> wrote:
> Which is what I've been trying to say all this time. If the scheduler can't handle dynamic code, then it's not really a general system.
That's like saying if you can't handle dynamic allocations your game isn't truly dynamic. Well, maybe not, but what Jon said does not necessarily lead to what you said. It simply means that any dynamic behavior changes what happens between frames, not within a frame. That does not necessarily restrict "dynamic" code from ever working, just the mechanisms through which it can work.
This seems like a semantics argument, to be honest.
In my opinion a task system that places restrictions (fairly severe ones IMO) on what kinds of code you can run in tasks can not reasonably be called a "general" system. I don't find this to be an unreasonable definition.
This is more like "if you system can't handle dynamic allocations then it can't run code that really needs to allocate dynamically, which means it's a specialized, not general, system" which I would say is a fair, and even obvious, description.
Being able to run a different statically known control flow from frame to frame is not the same as being able to run dynamic control flow. I'm not sure you realize the severity of the restrictions you must put up with if the dependencies of a task must be known ahead of time - for example you couldn't do this in a task:
void MyClass::MyTask()
{
    IFoo* foo = GetFoo(m_dynamicData); // expensive, sequential
    foo->bar(); // virtual function call; who knows what this does?
}
If bar is a virtual function this just wouldn't work within a system where the dependencies of MyTask have to be known up front before executing it, because there is no way of knowing up front what code will run when you call bar() (you have to run GetFoo to know what object you'll get back - you don't even know its type, just that it implements IFoo), and that code may have lots of dependencies that you simply can't know up front.
So for this code to work you'd have to guarantee that IFoo::bar() never spawns any child tasks, which is a pretty severe restriction for a task parallel system (whose whole point is to facilitate running stuff in parallel), don't you think?
This doesn't mean you can't work around it for certain specific scenarios where dependencies are known up front, but it does mean that certain kinds of code are impossible to write and you'll just have to make do without them, which is not a characterization of a general system, IMO.
--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
|
|
From: Sebastian S. <seb...@gm...> - 2009-04-13 18:00:16
|
On Mon, Apr 13, 2009 at 6:42 PM, Mat Noguchi <mat...@bu...> wrote: > Again, there is a tremendous difference between allowing dynamic task > dependencies within a frame vs. between frames vs. not at all. > > > > Although it’s really a matter of whether you can separate how you generate > dependencies between tasks from running the task. If you can, then it > doesn’t matter when you spawn them, so you should spawn them where you can > get the most benefit with the least complexity. If you can’t, then you have > to make an even better scheduler just to take into account intra-frame > tasks. > I agree completely with this. I'm not saying that a system like Jon describes is useless (indeed I've already explained that I've written something similar, even more restricted in fact, myself), I'm just pointing out that it's rather limited in some important ways that other systems aren't. > Of course it doesn’t work with a C++-vtable-dispatch mechanism; that does > not allow you to separate the behavior of the task from the structure (i.e., > dependencies) of the task. What Jon is talking about (I presume) is that > tasks are not just vtables or function pointers; they have information > associated with them that allows for higher order reasoning without running > them. > Yeah sure, and that's kind of the point, that kind of code (which is fairly ubiquitous IME), and even simple if-statements (where the two branches spawn different tasks) couldn't be written in such a system. You may not need to, which I fully accept, but if you could have the option of doing it without losing anything significant (again, performance for cilk-style dynamic parallelism seems to be bounded by the theoretical parallelism of the app, i.e. critical path etc., not implementation overheads) that would seem like a very attractive option. -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862 |
|
From: Gregory J. <gj...@da...> - 2009-04-13 20:01:11
|
That's what I took from Jon's comments (and Jon can clear this up if incorrect): that when you write a piece of code, it has inherent static data dependencies. For example, a piece of code that is written to post-process the results of, say, narrow-phase collision detection isn't suddenly going to depend, without changing the code, on, say, the results of a pathfinding iteration. Since you have to change the code, and that act is performed in a static manner (at code-authoring time), that piece of code is known a priori to depend on a particular set of data inputs, and you can declare those dependencies in such a way (for example, using the metadata described) that a dynamic task scheduler can intelligently schedule that piece of code to run when its data is ready.

Greg

From: Mat Noguchi [mailto:mat...@bu...]
Sent: Monday, April 13, 2009 10:43 AM
To: Game Development Algorithms
Subject: Re: [Algorithms] General purpose task parallel threading approach

Of course it doesn't work with a C++-vtable-dispatch mechanism; that does not allow you to separate the behavior of the task from the structure (i.e., dependencies) of the task. What Jon is talking about (I presume) is that tasks are not just vtables or function pointers; they have information associated with them that allows for higher order reasoning without running them.

MSN
|
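A rough sketch of the kind of a-priori declaration Greg describes is below; the metadata layout and every name in it are invented for illustration and are not any particular engine's API:

// Hypothetical task metadata, filled in at code-authoring time.  A scheduler
// can read these tables without ever running the task, and release the task
// only once everything it reads has been produced.
typedef int ResourceId;
enum { kNarrowPhaseContacts = 1, kRigidBodyStates = 2 };

struct TaskDesc
{
    const char*       name;
    const ResourceId* reads;      // inputs known when the code was written
    int               numReads;
    const ResourceId* writes;     // outputs known when the code was written
    int               numWrites;
    void            (*run)(void* userData);
};

void RunContactPostProcess(void* userData);  // the actual work function

static const ResourceId kContactPostProcessReads[]  = { kNarrowPhaseContacts };
static const ResourceId kContactPostProcessWrites[] = { kRigidBodyStates };

static const TaskDesc kContactPostProcessTask =
{
    "ContactPostProcess",
    kContactPostProcessReads,  1,
    kContactPostProcessWrites, 1,
    &RunContactPostProcess
};

Because the reads/writes tables are plain data, the scheduler can place this task after narrow-phase collision detection without ever invoking RunContactPostProcess, which is the "higher order reasoning without running them" being described.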
|
From: Jon W. <jw...@gm...> - 2009-04-13 23:09:39
|
Sebastian Sylvan wrote:
> In my opinion a task system that places restrictions (fairly severe
> ones IMO) on what kinds of code you can run in tasks can not
> reasonably be called a "general" system. I don't find this to be an
> unreasonable definition.

Note that the system I describe doesn't actually restrict you from dynamically spawning tasks; it just restricts you from generally waiting synchronously for the completion of those tasks after you spawn them. Synchronization is done based on dependencies known when the task gets scheduled. Thus, anything dynamic is restricted to happening between individual work task invocations, which I think is about as granular as you would want it. Storing stacks for each task, and allowing arbitrary synchronous waits between tasks is simply not going to scale to many-core machines IMO, so wanting to do that is like wanting to not scale.

Sincerely,

jw
|
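One way to picture "dynamic spawning without synchronous waits" is continuation-style scheduling. The sketch below uses invented function names (it is not Jon's actual interface) and only illustrates the shape of the pattern:

typedef int TaskId;

// Hypothetical scheduler calls, for illustration only.
TaskId SpawnTask(void (*fn)(void*), void* data);
void   SpawnTaskAfter(TaskId dependency, void (*fn)(void*), void* data);

void SomeChildWork(void* data);
void ProcessChunkB(void* data);

void ProcessChunkA(void* data)
{
    // Dynamically decide to spawn a child...
    TaskId child = SpawnTask(&SomeChildWork, data);

    // ...but don't block on it here.  The "rest of this function" is handed
    // to the scheduler as a second task that depends on the child, so no
    // stack has to stay alive while the child runs.
    SpawnTaskAfter(child, &ProcessChunkB, data);
}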
|
From: Matteo F. <ma...@ci...> - 2009-04-14 10:11:48
|
Jon Watte <jw...@gm...> writes:
> Storing stacks for each task, and allowing arbitrary synchronous waits
> between tasks is simply not going to scale to many-core machines IMO,
> so wanting to do that is like wanting to not scale.

It may be true that allowing arbitrary synchronous waits between tasks does not scale. However, you can allow arbitrary fork-join Cilk-style waits in a way that does indeed scale. (I have used Cilk on up to 256 processors in shared-memory configuration, and up to 1824 processors in a distributed-memory configuration without any particular problems.) For this to work, however, you must implement the stack as a "cactus stack" rather than as a collection of standard linear (contiguous) stacks. This is hard to do in a library---you really need compiler support.

Regards,
Matteo Frigo
|
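For readers who haven't seen it, the canonical Cilk fork-join example is the recursive Fibonacci function; the version below is written with the later Cilk Plus spelling of the keywords, purely as an illustration:

#include <cilk/cilk.h>

// Classic fork-join recursion.  Every frame of fib() may be suspended at
// cilk_sync while its children run on other workers, which is why the
// runtime wants a cactus stack rather than one contiguous stack per task.
int fib(int n)
{
    if (n < 2)
        return n;
    int x = cilk_spawn fib(n - 1);  // child strand may execute in parallel
    int y = fib(n - 2);             // continuation runs in the current strand
    cilk_sync;                      // join: wait only for this frame's children
    return x + y;
}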
|
From: <asy...@gm...> - 2009-04-14 12:46:59
|
> Storing stacks for each task, and allowing arbitrary
> synchronous waits between tasks is simply not going to scale to
> many-core machines IMO, so wanting to do that is like wanting to not scale.

Why not? As long as there is something for a worker thread to do while tasks are waiting, it is OK, isn't it?

Alexander.

2009/4/14 Jon Watte <jw...@gm...>
> Sebastian Sylvan wrote:
> > In my opinion a task system that places restrictions (fairly severe
> > ones IMO) on what kinds of code you can run in tasks can not
> > reasonably be called a "general" system. I don't find this to be an
> > unreasonable definition.
>
> Note that the system I describe doesn't actually restrict you from
> dynamically spawning tasks; it just restricts you from generally waiting
> synchronously for the completion of those tasks after you spawn them.
> Synchronization is done based on dependencies known when the task gets
> scheduled. Thus, anything dynamic is restricted to happening between
> individual work task invocations, which I think is about as granular as
> you would want it. Storing stacks for each task, and allowing arbitrary
> synchronous waits between tasks is simply not going to scale to
> many-core machines IMO, so wanting to do that is like wanting to not scale.
>
> Sincerely,
>
> jw

--
Regards, Alexander Karnakov
|
|
From: Jon W. <jw...@gm...> - 2009-04-14 16:54:13
|
asy...@gm... wrote:
> > Storing stacks for each task, and allowing arbitrary
> > synchronous waits between tasks is simply not going to scale to
> > many-core machines IMO, so wanting to do that is like wanting to not scale.
> Why not? As long as there is something for a worker thread to do while
> tasks are waiting, it is OK, isn't it?

So you missed the entire discussion of memory locality and the cost of stacks?

Sincerely,

jw
|
|
From: Adrian B. <ad...@gm...> - 2009-04-13 17:28:53
|
Daunting to be sure. You know... this would be a great place for a
good macro system. Commence language war!
Cheers,
Adrian
On Sun, Apr 12, 2009 at 9:28 PM, Jon Watte <jw...@gm...> wrote:
> Adrian Bentley wrote:
>> However, I am very
>> curious about making more general workloads run in 16 way SIMD.
>> Sounds super fun.
>>
>
> A preview of your future:
>
> |vmadd231ps v0 {k1}, v5, [rbx+rcx*4] {4to16}
>
> I remember when "LDA #10" was an advanced instruction :-)
>
> Sincerely,
>
> jw
>
> |
|
|
From: Sebastian S. <seb...@gm...> - 2009-04-13 17:38:19
|
The cool thing about LRB is that it's what they call "vector complete",
meaning it has all of the machinery (scatter/gather etc.) needed so that any
parallel loop with a static call graph can be turned into a 16-wide vector
version mechanically. This should hopefully allow the compiler to do most of
this work for us if we just mark up the loops we think ought to be
data-parallel using some pragma.
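As a sketch of what that mark-up could look like, the loop below uses OpenMP's later "simd" directive purely as a stand-in for whatever pragma the LRB toolchain actually exposes; the function and its names are invented for illustration:

// Each iteration is independent, so the compiler is free to map the loop
// onto a 16-wide vector unit mechanically.
void integrate_positions(float* x, const float* v, float dt, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; ++i)
        x[i] += v[i] * dt;
}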
On Mon, Apr 13, 2009 at 6:28 PM, Adrian Bentley <ad...@gm...> wrote:
> Daunting to be sure. You know... this would be a great place for a
> good macro system. Commence language war!
>
> Cheers,
> Adrian
>
> On Sun, Apr 12, 2009 at 9:28 PM, Jon Watte <jw...@gm...> wrote:
> > Adrian Bentley wrote:
> >> However, I am very
> >> curious about making more general workloads run in 16 way SIMD.
> >> Sounds super fun.
> >>
> >
> > A preview of your future:
> >
> > |vmadd231ps v0 {k1}, v5, [rbx+rcx*4] {4to16}
> >
> > I remember when "LDA #10" was an advanced instruction :-)
> >
> > Sincerely,
> >
> > jw
> >
> > |
--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
|