So occasionally I get email thanking me for preserving the history of Pascal. That's a good thing and a bad thing. True, I try to serve as a collection point for various historical documents and code relating to Pascal. I have actually found that this helps me more than it helps others. Instead of my trying to find Pascal documents and code, people find them and give them to me. How nice is that?
The bad part is that it implies folks view what I work on as outdated. This is, unfortunately, not true. I say unfortunately because I wish the state of the art had passed Pascal by now. It hasn't. Pascal remains a "great improvement on its successors". Here is a short history:
1974 Pascal arrives.
1978 C arrives.
Now I know that some will have a problem with these dates. However, the "User Manual and Report" arrived in 1974, and "The C Programming Language" arrived in 1978, according to their copyrights. Thus, these mark the dates of their general diffusion throughout the industry.
In 1978, Pascal was considered the "hot" language, and C was nowhere to be found in the microprocessor business.
By about 1982 this had changed radically, and there were several reasonably light C compilers. By the end of the 1980s, C was starting to take over from assembly language.
Pascal compilers were available, and in about the same quality range as C compilers. It helps to remember that in the 1980s, the segmented 16-bit model was predominant, and it mostly encouraged non-standard code, for both C and Pascal. ANSI C didn't really replace the nonstandard versions until the end of the decade.
However, C won. The language that started as a cross between Fortran and Algol was a very good "universal assembler", and indeed quite a few of the original assembly language programmers of the early microprocessor era moved to C.
I was of course one of them, but there the similarity ends. I wanted a Unix and C platform, and I got one by 1982. I did x86 programming on the PC in assembly, but I didn't like it.
So the 1990s started as the age of C and was supposed to be the beginning of the age of C++. It ended with Java and its competitor C#. In the 1990s there was a fairly simple backlash against the complexity and production issues found with C and C++. What do I mean by this?
Employers began to notice that C programming, and its much more complex twin C++, took a lot of time to get well tested and finished programs out the door. Moreover, it required high end programmers who knew a lot about the machine and C/C++.
The reaction to this went two ways. The first was something called "managed programming". The second was the rise of the pure interpreters. The "unreaction" came from the players who thought that whatever the solution to the problem was, it had to be compatible with C.
Now C++ itself was originally marketed as a solution to C productivity issues. However, C++ is a much more complex programming language. It didn't really solve the issue of new graduates not knowing how to be productive in C, nor did it solve the complexity of debugging and testing C.
What the C/C++ market did instead was go through a series of tooling phases, with Coverity, Agile programming and similar efforts to bring C/C++ programming costs and development times under control.
The two other languages, Java and C#, have both made huge progress in many classes of programming work. Neither accomplished what C and C++ did, which was to become really general purpose languages. I think in the early days folks believed that the VM implementation basis was too inefficient. However with today's highly polished JIT compilers I don't think that is still an issue.
The real issue with Java and C# with respect to being general purpose languages is their foreign nature. The intermediate generation step, which received so much fame with these languages, is in the way in systems coding. Even without this, Java and C# are married to their tool kits in a way that makes them inseparable. The GCC series can compile Java direct to binary, but what does that mean? Where do you go to learn how to program "to the metal" Java programs?
C# has access to virtually all of the Windows API, and thus can make any application. So what part of Windows itself can be programmed in C#? The answer is none. Even with equivalent efficiency, C# does not do systems work and there is no demand for that role in any case.
What would it take to make Java and C# systems programming languages? Simple. Easy access and linkage to assembly language, a requirement that C itself met, and nowadays, easy access and linkage to C itself. The rule is simple. You don't replace an existing standard by being incompatible with it.
In the 2000s, we see the rise of the pure interpreters: Perl, Python, Ruby and now Lua. None of these are systems programming languages or ever will be. The next version of Word is not going to be written in Lua. This trend is pushed by the idea that students right out of college are going to be productive in those languages. They are fully type protected at runtime, and can be developed interactively. Even if you would prefer not to develop by typing lines into the interpreter (most don't), the highly interactive nature of these systems allows for better debugging.
So we have this trend, Java, C#, and the interpreters, all founded on the idea that even if the programmer is a beginner, the system should either work or fail "reasonably", that is, not crash, but give reasonable diagnostics, or even stop in the middle of interpretation and let the programmer fix or investigate.
I think the better message to take from this is that if people are willing to invest their time and energy in systems that are orders of magnitude slower than optimized C, such as interpreters, then something is seriously wrong with today's programming models.
So way back in the 1960s, the beautiful experiment that was Algol, the idea that a programming language could be both functional and beautiful in the mathematical sense, met its end with Algol 68. Pascal was a child of that era. Wirth basically wanted an Algol that was applied, not theoretical.
In the 1980s, much of programming was done in assembly language. When C took over from that group, mainframes were gone, minicomputers were gone, and the idea that a language could protect you from system crashes was similarly just a memory.
Just as rapidly, in the 1990s, the idea of type safe programming expounded by Wirth and others in the 1960s and 1970s came bounding back with a new vocabulary. Type safe became "managed code". Java took C/C++ and went back in time to type safety. C# did the same, and was designed by Anders Hejlsberg, the same programmer who took type safety OUT of the Borland Pascal compilers. I find this funny. I seem to be the only one who sees the humor in it.
So what makes a general purpose programming language today?
It must unite the state of the art. That is, it must be an expression of unifying current thoughts on programming languages into a coherent whole.
It must address current problems in computing.
For the first point, object orientation, language extensibility and language backed multiprocessing are key.
For the second, I would list #1 as multiprocessing, then language security and verification.
Of course, I have listed multiprocessing twice. But multiprocessing is both a language issue and an application problem.
Let's call these:
A = Language extension, including objects, libraries, etc.
B = Multiprocessing.
C = Language security and verification.
In reality, B and C are handmaidens. Per Brinch Hansen believed that multiprocessing was not reasonable unless you did it in a language-secure environment. The counterpoint to this was Stroustrup, who stated that the incorporation of multiprocessing primitives into C++ would be tantamount to the language designers picking one particular implementation over another; but then C++ (as well as C) has no realistic security model.
So let's go through the general purpose languages in common use in terms of the above.
C language
A-> Virtually unsupported.
B-> Supported by add-on library (Pthreads).
C-> None.
C++ language
A-> Very good.
B-> Supported by add-on library.
C-> None.
Java
A-> Very good.
B-> Supported by add-on library.
C-> Very good.
C#
A-> Very good.
B-> Supported by add-on library.
C-> Very good.
By my scale, both Java and C# represent state of the art general purpose programming languages. They both have the issue that they don't interface well with lower level code, which I discussed earlier.
They both also copy the C/C++ format, meaning lots of operators and special characters. This is not to my taste, but that is a style issue. The Algol practice of mainly using keywords was based on the idea that people could remember words in their language better than special characters. Of course, that matters when you are reading the source. For writing it, many programmers apparently favor short special character sequences that are easy to type. Thus, it is often said that C/C++ and its descendants are "write only languages", not really designed to be read.
So let's come back to Pascal. What does Pascal have to offer now?
AGE
Yep, the first is age. Pascal preceded all of these languages. Common use discarded the idea of type safety (including in subsequent dialects of Pascal), then "rediscovered" it, reborn as "managed programming". The programmers of Java and C# didn't feel that a return to Pascal was in order (even though they were reusing its principles) because "everyone knew" your language had to look like C.
EXTENSIBILITY
Pascal was a very well designed language, and it makes a good platform to go forward on. Pascal has been extended before, but that was first to make it look like Basic (UCSD), and then later to make it look like C (Borland). The object oriented dialects were more like a true extension of Pascal.
I personally think the OO extensions for Pascal, and later Wirth's Oberon, were good but in the wrong direction. The philosophy was "objects are extended records". This changed toward the end of the 1990s. Objects in Java and C# are treated as modules. These languages treat the program as a collection of classes, then give a way to specify the primary class/object that will instantiate all the others. In C#, objects can be static or dynamic, and not just in the declarative sense.
Thus the trend is towards objects as a fundamental program construct, not just a type.
SECURITY
Java introduced the idea that the intermediate language could not only serve to couple the backend of the compiler, but also serve as the basis of a security model by having a "checker" validate the code before final compilation or interpretation. This is an interesting concept with respect to the idea of using intermediates as a universal transport format to different machine types. However, it rightly leaves language designers scratching their heads as to how it defines fundamental security. A language such as Pascal is secure by design. It does not need to be secured at the intermediate level.
The type security model in Pascal was first class as designed by Wirth, and only got better with the ISO 7185 standard. They thought of several possible ways to break the type security of the language and instituted language rules against them. For example, the idea of undiscriminated variants to get around type security is well known. Less well known is that the same result can be accomplished with fully discriminated variants and VAR passed procedure parameters.
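To make the variant/VAR interaction concrete, here is a hypothetical sketch (my own, not from the standard's text) of the kind of program the ISO 7185 rules exist to reject: the variant is fully discriminated, yet a VAR parameter keeps a live reference to a field of one variant while the active variant is changed underneath it.

```pascal
{ Sketch of the variant/VAR type hole that ISO 7185 guards against.
  Deliberately erroneous: shown for illustration only. }
program varianthole(output);
type
  kind = (asint, asreal);
  box = record
    case k: kind of       { fully discriminated variant }
      asint:  (i: integer);
      asreal: (r: real)
  end;
var b: box;

procedure poke(var x: integer);
begin
  b.k := asreal;          { switch the active variant... }
  b.r := 1.0;             { ...and store into it as a real }
  x := 42                 { x still refers to the integer field: type hole }
end;

begin
  b.k := asint;
  b.i := 0;
  poke(b.i)  { ISO 7185 makes this change of variant, while a reference
               to the old variant exists, an error }
end.
```

No undiscriminated variant appears anywhere, yet without the rule the integer and real storage would be aliased through the VAR parameter.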
BODY OF IMPLEMENTATION
What I refer to here is the collection of compilers and tools for the language whose source code is readily available, as well as clear specifications of the language itself. Java, C and C++ have this. C# does not; it is a proprietary technology.
AND SO WHAT FUTURE PASCAL?
People often ask me why I don't just "come to reality" and join up with a Borland dialect. I actually have done extensive work with the Delphi system. It's a good system, but it is far too reliant on escaping the type security mechanisms for my taste, and by now it is clearly dated. Its object model is based on the "objects are records" model. I also note that the principal designer of Borland Pascal moved on to C#, and went back to full type security.
My "agenda" with Pascal consists of:
Double down on Pascal. Few implementations really exploited the extent of type security in Pascal. Pascal delivers type and executable security equal to or greater than Java and C#. And secure type models dovetail nicely with multiprocessor models. Per Brinch Hansen said it all on this subject. He has left us now, but he wrote extensively on it and has mainly been ignored.
Extend Pascal as an integrated whole. With the benefit of hindsight, there are far better models for OO Pascal than have appeared to date. Pascal benefits not only from a more modern treatment of objects and extendable code, but also from the idea that Pascal can "reintegrate" on a new and higher level. Think about this. What attracted many people to Pascal was the sheer integration of the language. Virtually any type could be formed using the base types, and these types naturally flowed into one another. Subrange and ordinal types formed a natural way to express array indices, and arrays, files and dynamic variables could be composed of any subtype in a very regular manner (in fact the only major composition that ISO 7185 forbade was the file of files, and there is reason to believe that even those are useful).
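A small illustration of that integration (my own example, accepted by any ISO 7185 compiler): subranges, enumerations, arrays and files all compose from the same base types, and the subrange that defines the data also defines the index.

```pascal
{ Pascal's type composition: the same ordinal types flow through
  subranges, array bounds, and file components. }
program compose(output);
type
  index  = 1..10;                  { subrange of integer }
  vector = array [index] of real;  { the subrange names the bounds }
  day    = (mon, tue, wed, thu, fri);
  hours  = array [day] of 0..24;   { any ordinal type can index an array }
  tape   = file of vector;         { files of any composed type }
var
  v: vector;
  i: index;
begin
  for i := 1 to 10 do
    v[i] := i * 0.5;               { i can never leave 1..10 }
  writeln(v[10]:4:1)
end.
```

The compiler can check every index against its declared range; the range is not a comment, it is the type.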
Apply Pascal to multiprocessing. Multiprocessing is a crisis for the computer industry. CPU speed increases have virtually stopped, and spreading the work across processor arrays is the only way forward now. We have massively parallel machines, but the software that exploits them is written specifically for that purpose, usually based on message passing. The idea of integrating communicating processes into everyday use has been proposed and demonstrated by several systems, but remains far outside mainstream programming. Programming C/C++ with pthreads is an exercise in pain. Finding lost pointers and array overruns is not fun on a single processor system; on a multiprocessor/multitasking system it is more akin to a headfirst dive into a pile of glass shards. In fact, given a type secure language with multitasking/multiprocessing as a fundamental feature, multiprocessing does not have to be hard at all.
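What "multiprocessing as a fundamental" looks like is shown by Per Brinch Hansen's Concurrent Pascal. The following is only a sketch in the spirit of that notation (it will not compile under any ISO 7185 compiler): shared data lives inside a monitor, and the compiler itself rejects any access that bypasses the entry procedures.

```pascal
{ Sketch in the style of Concurrent Pascal: the monitor owns the
  counter, processes can only reach it through entry procedures,
  and unsynchronized access is a compile-time error, not a bug. }
type counter = monitor
  var n: integer;
  procedure entry increment;
  begin
    n := n + 1
  end;
begin
  n := 0      { monitor initialization }
end;

type worker = process(c: counter);
  var i: integer;
begin
  for i := 1 to 100000 do
    c.increment    { the only way to touch n }
end;
```

Compare this with the pthreads sketch earlier: there, the lock was a convention; here, the mutual exclusion is part of the type.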
SUMMATION
So I hope this article has given you some idea of how I regard the past and future of Pascal. Perhaps the most important point is that if you look at past developments in languages, you will find that what makes a language popular is not WHAT IT IS, but WHAT IT DOES. C gave a reasonable replacement for assembly language programming. Pascal gave a mainframe language to early microprocessor users. Java tried to deliver, and mostly did, the idea that programs could run on any machine without change.
This is something to think about. If I accomplish anything with my work, it will be important not because I added so much to the Pascal language, but because I accomplished something of import with it.
And time will tell.
Scott Franco
Last edit: Scott Franco 2014-07-28