From: Ken A. <kan...@bb...> - 2004-07-08 21:54:42
Nice reply. Now that I'm reading Bruce's mind view, I think he muddies
things up a lot more than I would have liked. Maybe this is the right
approach, especially for people like him who have survived C++.

At 02:23 PM 7/5/2004 -0400, Borislav Iordanov wrote:
>The core argument for dynamic typing is actually one that I used to
>convince myself to rely on dynamically typed languages more and more: we
>need to test anyway, so sacrificing expressiveness, readability and
>development speed for the illusory safety of static type checking is
>simply not worth it.

Types are good, but I think the real problem is "static". Imagine having
to halt the internet every time an interface changed. CORBA and RMI
require that both sides of a conversation agree on the types. Common
Lisp lets you change the definition of a class dynamically, though I
don't think this has been done across machines. I once saw a nice NASA
presentation that used "frames" (nested alists), almost XML, to
communicate across machines. That let them add a field to the output of
one machine, and the other machines could ignore it until they were
updated. Optimize for flexibility, as Howie Shrobe would say.

>But he misses two important (related) points in favor of static typing:
>
>1) When a mistake is detected "statically", at compile time, not only is
>a programmer informed about it, but the information is much more useful
>- you know where it is in the code, and you can more easily associate it
>with a mistaken/misunderstood intent, program design/structure, etc. In
>a sense, a semantic constraint is reduced to a syntactic one, which is
>one level down in terms of tractability by both humans and machines.

I take this to mean that typing something like this in Java:

    Foo x = (Foo) foos.get(name);

seems more like writing assembly language than saying

    Foo x = foos.get(name);

There is no reason Java could not do this type inference.
In a dynamic language you could say

    x = foos.get(name);

which does seem "higher level than assembly".

>2) The rigor and explicitness required by statically typed languages
>become increasingly important with program size. This is a consequence
>of the previous point: it is easy to track down a run-time exception
>when the program is small and the behavior can be easily localized
>within the code. For large systems, maintained and evolving over years
>by many programmers, the extra fingering and design constraints
>enforced by static typing can be life savers - ok, a test may detect
>that something is wrong, but how long before the programmer goes from
>the error produced by the test to the real problem? And what (how
>many?) changes to the code would that prompt?

So here I think you're arguing in favor of static typing: it keeps you
honest longer. However, when I've changed large code bases I didn't
understand, I made the smallest change I could and tested it. Which
suggests that it is really testing that leads to code longevity.

>A problem I often have when programming with Jscheme is the error:
>"Expected pair or '(), but found blabla...". I know in what function
>this happens, and I know where in the code the list was expected, but
>god knows where I put the blabla instead of it. The positive aspect is
>that this forces me to be much more attentive to the types of data I'm
>passing around, a habit I'd lost (or probably never really acquired)
>with languages like Java and C++.

If you are criticizing JScheme, let me know; let's make it better. I
have a way of finding where functions are defined, and I agree that
backtraces can be hard to follow. I have begun to follow the PLT Scheme
practice of writing a type signature before a procedure definition:

    ;;; sq: number -> number
    (define (sq x)
      "Return x squared."
      (* x x))

You can use (describe sq) to see the definition and the documentation
of sq.

k