Re: [Algorithms] Complexity of new hardware
From: Sebastian S. <seb...@gm...> - 2009-04-26 17:04:42
On Sun, Apr 26, 2009 at 5:11 PM, Conor Stokes <bor...@ya...> wrote:

> Where I'd like to break in; Haskell is not yet ready for large games (or
> medium/large project software development at large), because it's an
> academic language that hasn't yet progressed to programming in the large
> (big picture). Most of the "safety" in Haskell is for local problems, not
> for the large architectural decisions, which are the hardest to change
> later down the line (and in fact, Haskell provides very little guidance
> or mechanism in these areas). At least "object oriented" (in the general
> sense) programming languages try to provide a layered mechanism for
> programming in the large.

Could you expand on this? Some specifics, perhaps? I'm not sure I understand what you mean. At least not when you compare to C++, which doesn't even have a module system (though Haskell's module system is fairly spartan)!

> Functional programming is a very good tool, but it's too pure a tool for
> production software. Most production software has areas that are "do
> this, then do that", for which a pure functional language still has
> awkward and heavy abstractions (i.e. an extra level of thought that isn't
> necessary for the functionality required). It is also interesting that
> when Tim Sweeney said in his programming-language-for-the-future talk
> that the "graphics engine" would be "functional", he didn't mention that
> rendering (as it currently stands) occurs in order and is highly
> stateful. Graphics hardware requires that you set your states, followed
> by your rendering commands, in order, which is a highly imperative way to
> think. This really shows that large problems tend to be made up of mixed
> solutions that don't follow any one set of rules.

Sequences of "stuff" do not imply imperative languages.
The low-level rendering abstraction could easily be a list of "Commands" (we could call them "command buffers", or maybe "display lists", hang on a minute!), rather than a series of state-modifying statements. In fact, a lot of abstractions treat graphics as a tree, hiding the details of the underlying state machine.

At a higher level I do agree with Sweeney that graphics is pretty functional. Take something like a pixel shader, for instance, which is really just a pure function, even though most shader languages make it look like an imperative function inside (to look like C, usually).

Furthermore, if we're talking about Haskell specifically, I'd say that in many ways it has much better support for imperative programming than C++ does, since you can define your own imperative sub-languages (e.g. you could have your CommandBuffer monad and write state setting etc., if that's how you really want to think about graphics). C++ doesn't allow you to abstract over what *kind* of statements you're working with; it only has one "kitchen sink" kind, and no way of overloading how a block of multiple statements is bound together.

> Functional programming is part of the general solution to "better
> programming", but to take it to extremes (like Haskell) is not the
> answer.

The problem with only going halfway is that certain properties really need to be absolute, or they disappear entirely. You can't be "sort of" pregnant; you either are or you aren't. The key killer feature of Haskell for me is purity, and if you start allowing ad-hoc, undisciplined use of side effects anywhere, then the language is no longer pure. Either you contain side effects by design and enforce it, or you can never write code that relies on something being pure (parallelism!) without basically giving up on the language helping you, relying instead on convention (which always breaks).
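To make the "rendering as a list of Commands" idea concrete, here's a minimal Haskell sketch. The Command vocabulary below (SetBlendMode, BindTexture, DrawTriangles) is purely illustrative, not any real graphics API; the point is that pure code *builds* the buffer as data, and a single impure interpreter at the bottom of the program would walk it and talk to the GPU:

```haskell
-- Hypothetical command vocabulary; names are illustrative only.
data BlendMode = Opaque | Additive
  deriving (Show, Eq)

data Command
  = SetBlendMode BlendMode
  | BindTexture Int      -- texture handle
  | DrawTriangles Int    -- triangle count
  deriving (Show, Eq)

-- A "command buffer" is just data: a list of commands.
type CommandBuffer = [Command]

-- Pure code constructs buffers; no hidden state machine involved.
drawSprite :: Int -> CommandBuffer
drawSprite tex =
  [ SetBlendMode Additive
  , BindTexture tex
  , DrawTriangles 2      -- a quad = two triangles
  ]
```

Because buffers are ordinary values, you can concatenate, sort, or inspect them with plain list functions before anything ever touches the hardware.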
Note that purity does emphatically *not* mean "no state mutations ever". It merely means that IO has to happen at the "bottom" of the program, not deep inside application code (which is pretty much what we do already, so not much of an issue IMO), and that any localized side effects have to be marked up so that the compiler can enforce that they don't "leak". For example, you may want to do some in-place operations on an array for performance inside a function, while from the outside the function still looks pure. That's fine in FP; Haskell uses the ST monad for it. You just need to seal off these local "bubbles" of impurity by explicitly marking where things start going impure.

I agree with Sweeney, again, that "side effects anywhere" is not the right default in a parallel world. So really it's all about being disciplined: marking up functions that are impure up front, so that the compiler can be sure that you're not trying to do anything impure in a context where purity is required (e.g. parallelism, or lazy evaluation).
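A small sketch of what such a sealed "bubble" of impurity looks like in practice, using the standard Control.Monad.ST and Data.STRef modules (the function name is mine, the technique is the usual one): the sum below is accumulated by mutating an STRef, but runST guarantees the mutation cannot escape, so callers see an ordinary pure function:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Internally imperative, externally pure: runST seals the
-- mutable STRef inside, so no side effect is observable.
sumStrict :: [Int] -> Int
sumStrict xs = runST $ do
  acc <- newSTRef 0                       -- local mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs -- strict in-place updates
  readSTRef acc                           -- final value leaves as pure data
```

The type `sumStrict :: [Int] -> Int` carries no IO or ST, which is exactly the compiler-enforced guarantee discussed above: the impurity is marked where it happens and provably cannot leak.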