{{Short description|Computer designed to run a specific language}}
{{Cleanup bare URLs|date=August 2022}}
A '''high-level language computer architecture''' ('''HLLCA''') is a [[computer architecture]] designed to be targeted by a specific [[high-level programming language]] (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed '''language-directed computer design''', a term coined in {{harvtxt|McKeeman|1967}} and used primarily in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s, following the dramatic failure of the [[Intel 432]] (1981), the emergence of [[optimizing compiler]]s, [[reduced instruction set computer]] (RISC) architectures and RISC-like [[complex instruction set computer]] (CISC) architectures, and the later development of [[just-in-time compilation]] (JIT) for HLLs. A detailed survey and critique can be found in {{harvtxt|Ditzel|Patterson|1980}}.
 
HLLCAs date back almost to the beginning of HLLs, with the [[Burroughs large systems]] (1961), which were designed for [[ALGOL 60]] (1960), one of the first HLLs. The best known HLLCAs may be the [[Lisp machine]]s of the 1970s and 1980s, for the language [[Lisp (programming language)|Lisp]] (1959). At present the most popular HLLCAs are [[Java processor]]s, for the language [[Java (programming language)|Java]] (1995), and these are a qualified success, being used for certain applications. A more recent architecture in this vein is the [[Heterogeneous System Architecture]] (2012), whose [[HSA Intermediate Layer]] (HSAIL) provides instruction set support for HLL features such as exceptions and virtual functions; it relies on JIT compilation to ensure performance.
 
==Definition==
There are a wide variety of systems under this heading. The most extreme example is a Directly Executed Language (DEL), where the [[instruction set architecture]] (ISA) of the computer equals the instructions of the HLL, and the [[source code]] is directly executable with minimal processing. In extreme cases, the only compilation needed is [[Tokenization (lexical analysis)|tokenizing]] the source code and feeding the tokens directly to the processor; this is found in [[stack-oriented programming language]]s running on a [[stack machine]]. For more conventional languages, the HLL statements are grouped into an instruction plus its [[Parameter (computer programming)|arguments]], and [[Infix notation|infix]] order is transformed to [[Polish notation|prefix]] or [[Reverse Polish notation|postfix]] order. DELs are typically only hypothetical, though they were advocated in the 1970s.<ref>See Yaohan Chu references.</ref>
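
As an illustration of the kind of transformation involved, rather than a description of any specific DEL or stack machine, the following sketch (in Java, with illustrative names) tokenizes a small infix expression, reorders it to postfix, and then executes the postfix form with an operand stack, the way a stack machine consumes such code directly:

<syntaxhighlight lang="java">
// Illustrative sketch only: tokenize a tiny infix expression, reorder it to
// postfix, and execute the postfix form with an operand stack, mimicking how
// a stack machine consumes postfix code directly.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

public class StackMachineSketch {
    // Operator precedence for the two levels used in the example.
    private static final Map<String, Integer> PRECEDENCE =
            Map.of("+", 1, "-", 1, "*", 2, "/", 2);

    // "Compilation" here is only tokenizing and reordering infix to postfix.
    static List<String> toPostfix(String source) {
        List<String> output = new ArrayList<>();
        Deque<String> operators = new ArrayDeque<>();
        for (String token : source.trim().split("\\s+")) {
            if (PRECEDENCE.containsKey(token)) {
                while (!operators.isEmpty()
                        && PRECEDENCE.get(operators.peek()) >= PRECEDENCE.get(token)) {
                    output.add(operators.pop());
                }
                operators.push(token);
            } else {
                output.add(token); // operand goes straight to the output
            }
        }
        while (!operators.isEmpty()) {
            output.add(operators.pop());
        }
        return output;
    }

    // Execution: each operator pops its arguments and pushes its result.
    static int execute(List<String> postfix) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : postfix) {
            if (PRECEDENCE.containsKey(token)) {
                int b = stack.pop(), a = stack.pop();
                switch (token) {
                    case "+" -> stack.push(a + b);
                    case "-" -> stack.push(a - b);
                    case "*" -> stack.push(a * b);
                    case "/" -> stack.push(a / b);
                }
            } else {
                stack.push(Integer.parseInt(token));
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        List<String> postfix = toPostfix("3 + 4 * 2");
        System.out.println(postfix);          // [3, 4, 2, *, +]
        System.out.println(execute(postfix)); // 11
    }
}
</syntaxhighlight>

In a DEL, the postfix (or prefix) token stream itself would be the processor's instruction stream, so the execution step above corresponds to the hardware rather than to software.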
 
In less extreme examples, the source code is first parsed to [[bytecode]], which is then the [[machine code]] that is passed to the processor. In these cases, the system typically lacks an [[Assembly language|assembler]], as the [[compiler]] is deemed sufficient, though in some cases (such as Java), assemblers are used to produce legal bytecode which would not be output by the compiler. This approach was found in the [[Pascal MicroEngine]] (1979), and is currently used by Java processors.
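
For a concrete sense of this intermediate form, the fragment below pairs a trivial Java method with, in a comment, the stack-oriented bytecode a typical Java compiler emits for it (roughly as reported by the standard <code>javap -c</code> tool); a Java processor executes instructions of this kind directly as its machine code rather than translating them to another ISA first:

<syntaxhighlight lang="java">
// A trivial method and, in the comment below, the stack-oriented bytecode a
// typical Java compiler produces for it (roughly as shown by "javap -c").
public class Adder {
    int add(int a, int b) {
        return a + b;
        // Compiled form (the JVM model uses an operand stack and local
        // variable slots rather than registers):
        //   iload_1   // push local variable 1 (a)
        //   iload_2   // push local variable 2 (b)
        //   iadd      // pop two ints, push their sum
        //   ireturn   // return the int on top of the operand stack
    }
}
</syntaxhighlight>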
[[Rekursiv]] (mid-1980s) was a minor system, designed to support [[object-oriented programming]] and the [[Lingo (programming language)#Other languages|Lingo]] programming language in hardware, and supported [[recursion]] at the instruction set level, hence the name.
 
A number of processors and coprocessors intended to implement [[Prolog]] more directly were designed in the late 1980s and early 1990s, including the [http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/6379.html Berkeley VLSI-PLM], its successor (the [http://portal.acm.org/citation.cfm?id=74948 PLUM]), and a [http://www.eecs.berkeley.edu/Pubs/TechRpts/1988/5870.html related microcode implementation]. There were also a number of simulated designs that were not produced as hardware, such as [https://ieeexplore.ieee.org/document/380918 A VHDL-based methodology for designing a Prolog processor] and [https://ieeexplore.ieee.org/document/183879 A Prolog coprocessor for superconductors]. Like Lisp, Prolog's basic model of computation is radically different from standard imperative designs, and computer scientists and electrical engineers were eager to escape the bottlenecks caused by emulating their underlying models.
 
[[Niklaus Wirth]]'s [[Lilith (computer)|Lilith]] project included a custom CPU geared toward the [[Modula-2]] language.<ref>{{cite web |url=http://pascal.hansotten.com/index.php?page=history-of-lilith |title=Pascal for Small Machines – History of Lilith |publisher=Pascal.hansotten.com |date=28 September 2010 |access-date=12 November 2011 |archive-date=20 March 2012 |archive-url=https://web.archive.org/web/20120320091110/http://pascal.hansotten.com/index.php?page=history-of-lilith |url-status=dead }}</ref>

In the late 1990s, there were plans by [[Sun Microsystems]] and other companies to build CPUs that directly (or closely) implemented the stack-based [[Java (programming language)|Java]] [[Java virtual machine|virtual machine]]. As a result, several [[Java processor]]s have been built and used.
 
[[Ericsson]] developed ECOMP, a processor designed to run [[Erlang (programming language)|Erlang]].<ref>{{Cite web |url=http://www.erlang.se/euc/00/processor.ppt |title=ECOMP - an Erlang Processor |access-date=2022-12-01 |archive-url=https://web.archive.org/web/20210424115257/http://www.erlang.se/euc/00/processor.ppt |archive-date=2021-04-24 |url-status=dead }}</ref> It was never commercially produced.
 
The HSA Intermediate Layer (HSAIL) of the [[Heterogeneous System Architecture]] (2012) provides a virtual instruction set that abstracts away from the underlying ISAs, supports HLL features such as exceptions and virtual functions, and includes debugging support.

A further advantage is that a language implementation can be updated by updating the microcode ([[firmware]]), without requiring recompilation of an entire system. This is analogous to updating an interpreter for an interpreted language.
 
An advantage that has reappeared since the 2000s is safety or security. Mainstream IT has largely moved to languages with type and/or memory safety for most applications.{{Cn|date=May 2023}} However, the software these depend on, from operating systems to virtual machines, is written in native code with no such protection, and many vulnerabilities have been found in it. One solution is to use a processor custom-built to execute a safe high-level language, or at least to understand types. Protection at the level of processor words makes an attacker's job difficult compared with low-level machines, which see no distinction between scalar data, arrays, pointers, and code. Academics are also developing languages with similar properties that might integrate with high-level processors in the future. An example of both of these trends is the SAFE<ref>{{Cite web |url=http://www.crash-safe.org/ |title=SAFE Project |access-date=2022-07-09 |archive-date=2019-10-22 |archive-url=https://web.archive.org/web/20191022221212/http://www.crash-safe.org/ |url-status=dead }}</ref> project. Compare [[language-based system]]s, where the software (especially the operating system) is based around a safe, high-level language, though the hardware need not be: the "trusted base" may still be in a lower-level language.
 
==Disadvantages==
** {{Cite conference |last=Chu |first=Yaohan |title=Proceedings of the 1975 annual conference |year=1975 |chapter=Concepts of high-level-language computer architecture |conference=ACM '75 Proceedings of the 1975 annual conference |pages=6–13 |doi=10.1145/800181.810257}}
* {{Cite journal |last1=Chu |first1=Yaohan |last2=Cannon |first2=R. |date=June 1976 |title=Interactive High-Level Language Direct-Execution Microprocessor System |journal=IEEE Transactions on Software Engineering |volume=2 |issue=2 |pages=126–134 |doi=10.1109/TSE.1976.233802 |s2cid=9076898}}
* {{Cite journal |last=Chu |first=Yaohan |date=December 1977 |title=Direct-execution computer architecture |journal=ACM SIGARCH Computer Architecture News |volume=6 |issue=5 |pages=18–23 |doi=10.1145/859412.859415 |s2cid=10241380|doi-access=free }}
* {{Cite conference |last=Chu |first=Yaohan |title=Proceedings of the 1978 annual conference |year=1978 |chapter=Direct Execution In A High-Level Computer Architecture |conference=ACM '78 Proceedings of the 1978 annual conference |pages=289–300 |doi=10.1145/800127.804116 |isbn=0897910001}}
* {{Cite journal |last1=Chu |first1=Yaohan |last2=Abrams |first2=M. |date=July 1981 |title=Programming Languages and Direct-Execution Computer Architecture |journal=Computer |volume=14 |issue=7 |pages=22–32 |doi=10.1109/C-M.1981.220525 |s2cid=3373193}}