Re: [Algorithms] Complexity of new hardware
From: Sebastian S. <seb...@gm...> - 2009-04-26 19:56:58
On Sun, Apr 26, 2009 at 8:11 PM, Conor Stokes <bor...@ya...> wrote:

> "Could you expand on this? Some specifics perhaps? I'm not sure I
> understand what you mean. At least not when you compare to C++ which
> doesn't even have a module system (though Haskell's module system is
> fairly spartan)!"
>
> Somewhere between type-classes and modules sits the "component" that
> Haskell has not yet mastered.

Couldn't it just be a smaller module? Lots of Haskell applications use hierarchical modules this way: you have a bunch of "low level" mini modules that are then included and re-exported from a main module.

> But sequenced execution and state setting are the native mode of
> imperative languages. Pure functional lazy languages, like Haskell,
> require abstractions to deal with that, which are an extra level of
> thought (and complexity)

... and expressive power.

> In C++, you can have a CommandBuffer without the monad (or having to
> think about having a monad, which is the important part).

Possibly, but you're hamstrung in how you can implement it, since you have no control over what the semicolon does, and for some things having just "state" as the underlying implementation isn't good enough. For example, imagine being able to write something like:

    cover <- findBestCover
    moveToCover cover `withFallback` failedToGetToCover
    target <- acquireTarget
    ... etc. ...

The point being that the "moveToCover" function can take many frames to complete, and you can even let that action support the "withFallback" function to allow the action to fail (e.g. by an enemy firing at you). All the marshalling of actually running this action over multiple frames (keeping track of where in the action you need to resume, and what the conditions are for resuming) can be handled by the AI monad. You can't do this (nicely) in C++ because it doesn't have the ability to let you define your own statement types.
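To make that concrete, here's a toy sketch of what such an "AI monad" could look like. This is my own minimal illustration, not code from any real engine or from the thread; the names (AI, withFallback, takesFrames, stepFrame, runFrames) are assumptions for the example. An action either finishes, fails, or yields until the next frame, and the Monad instance threads the resume point for you:

```haskell
-- Hypothetical minimal "AI monad": an action finishes, fails, or
-- yields until the next frame, remembering where to resume.
data AI a
  = Done a          -- action finished this frame with a result
  | Failed          -- action aborted (e.g. path to cover blocked)
  | Yield (AI a)    -- not done yet; resume here next frame

instance Functor AI where
  fmap f (Done a)  = Done (f a)
  fmap _ Failed    = Failed
  fmap f (Yield k) = Yield (fmap f k)

instance Applicative AI where
  pure = Done
  Done f  <*> x = fmap f x
  Failed  <*> _ = Failed
  Yield k <*> x = Yield (k <*> x)

instance Monad AI where
  Done a  >>= f = f a
  Failed  >>= _ = Failed
  Yield k >>= f = Yield (k >>= f)

-- Run the fallback action if the first action fails.
withFallback :: AI a -> AI a -> AI a
Done a  `withFallback` _  = Done a
Failed  `withFallback` fb = fb
Yield k `withFallback` fb = Yield (k `withFallback` fb)

-- A stand-in for findBestCover/moveToCover: succeeds after n frames.
takesFrames :: Int -> a -> AI a
takesFrames 0 x = Done x
takesFrames n x = Yield (takesFrames (n - 1) x)

-- Drive an action to completion, counting how many frames it took.
runFrames :: AI a -> (Int, Maybe a)
runFrames = go 0
  where
    go n (Done a)  = (n, Just a)
    go n Failed    = (n, Nothing)
    go n (Yield k) = go (n + 1) k

behaviour :: AI String
behaviour = do
  cover <- takesFrames 2 "rock"            -- "findBestCover": 2 frames
  takesFrames 3 () `withFallback` Done ()  -- "moveToCover", with fallback
  return ("in cover behind " ++ cover)
```

Here runFrames behaviour yields (5, Just "in cover behind rock"): the bind operator is what carries "where to resume" across frames, which is exactly the control over the semicolon that C++ doesn't give you.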
If you happen to have a sequence of imperative state modifications you're good to go, but if that's not what you're doing, you're screwed. So, like I said, in many ways Haskell beats loads of imperative languages at their own game.

You can pretend it doesn't and just use the IO monad (which is essentially Haskell's "kitchen sink") to write "imperative Haskell", and never have to worry about monads, if you think the complexity isn't worth it. So I don't think your characterization of imperative coding in Haskell as complex is necessarily true; you only pay for the complexity if you need it. You don't actually *need* to understand how monads work to do IO etc., but if you do spend the effort you'll find that it's a very powerful technique, and the small amount of extra complexity it takes to understand is well worth it.

> Hence, taking an absolute approach in either direction is probably not
> going to get you the best system.

Well, if that absolute approach allows you to do both (so long as you're explicit about when you're doing what), then there's no problem. Like I said earlier, Haskell does allow you to write sequential imperative code modifying state if that's what you want to do; you just need to be up front about it and say so in the type signature. The benefit of doing it that way is that you can later parallelise the pure parts trivially, since the compiler knows ahead of time where that's safe to do (it also makes functions easier to reason about, since you can see exactly what kind of stuff a function will do from its type).

> "The problem with only going half way is that certain properties really
> need to be absolute unless they are to disappear. You can't be "sort
> of" pregnant, you either are or you aren't."
>
> Yes, but I can't remember the last time I programmed a simulation of
> being pregnant (I'm not saying I haven't...).

I have written plenty of software where I positively rely on the purity of a function (parallelism, again).
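As an illustration of "imperative on the inside, pure in the type signature" (my own small sketch, not code from the thread), ST lets you write a mutable-state loop whose signature contains no IO or ST at all, because runST guarantees the mutation can't escape:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Imperative loop over a mutable accumulator, but the signature is
-- pure: callers (and the compiler) can see it has no side effects.
sumSquares :: Int -> Int
sumSquares n = runST $ do
  acc <- newSTRef 0                                 -- mutable accumulator
  mapM_ (\i -> modifySTRef' acc (+ i * i)) [1 .. n] -- in-place updates
  readSTRef acc                                     -- the Int escapes, the STRef doesn't
```

Because sumSquares is pure by its type, evaluating many calls in parallel (e.g. with parMap from the parallel package) is safe by construction, which is the kind of guarantee a C++ function signature can't give you.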
You can't be "sort of" pure, you either are or you aren't.

> I agree that side effects anywhere is not the right ideal (and Haskell
> does really put them into a nice little monad-focused box), but I tend
> to lean towards a less absolute way of going about that. When you're
> doing a complex and heavy IO-oriented operation, it sometimes makes
> sense to be able to do that in the simple way an imperative language
> allows (they're very good at it!) without the extra level of
> abstraction. If you want to, in most imperative languages today, you
> can even isolate it behind an interface such that it can not be misused
> in an unsafe/leaky way (one of the selling points of object
> orientation, I believe).

I'm not sure what you're referring to here; how is doing IO in C++ easier than in Haskell? Haskell is no better than C++ here, because you still have to do it in the same low-level imperative way, so it's not really improving on it much (you're essentially writing very C-like code with Haskell syntax), but I don't see any major way in which it does worse either.

Also, if you truly have something that uses IO internally but is observationally pure, then unsafePerformIO can be used to give it a pure interface, though that really shouldn't be used very often, and it's your responsibility to make sure there is no way of observing any side effects it performs (think of it as a way of hooking into the runtime: you really, really shouldn't need it). And as I said before, if all you're doing is mutating some state (i.e. no real IO at all), then ST already gives you a way of wrapping it in a pure interface, with compile-time guarantees that you didn't screw up.

-- 
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862