Computer program: Difference between revisions

Line 3:
[[File:JavaScript_code.png|thumb|[[Source code]] for a computer program written in the [[JavaScript]] language. It demonstrates the ''appendChild'' method. The method adds a new child node to an existing parent node. It is commonly used to dynamically modify the structure of an HTML document.]]
{{Program execution}}
 
A '''computer program''' is a [[sequence]] or set{{efn|The [[Prolog]] language allows for a database of facts and rules to be entered in any order. However, a question about a database must be at the very end.}} of instructions in a [[programming language]] for a [[computer]] to [[Execution (computing)|execute]]. It is one component of [[software]], which also includes [[software documentation|documentation]] and other intangible components.<ref name="ISO 2020">{{cite web
| title=ISO/IEC 2382:2015
| website=ISO
Line 15 ⟶ 16:
}}</ref>
 
A ''computer program'' in its [[Human-readable medium and data|human-readable]] form is called [[source code]]. Source code needs another computer program to [[Execution (computing)|execute]] because computers can only execute their native [[machine code|machine instructions]]. Therefore, source code may be [[Translator (computing)|translated]] to machine instructions using a [[compiler]] written for the language. ([[Assembly language]] programs are translated using an [[Assembler (computing)|assembler]].) The resulting file is called an [[executable]]. Alternatively, source code may execute within an [[interpreter (computing)|interpreter]] written for the language.<ref name="cpl_3rd-ch1-7_quoted">{{cite book
| last = Wilson
| first = Leslie B.
Line 45 ⟶ 46:
}}</ref>
 
If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each [[Statement (computer science)|statement]]. Running the source code is slower than running an [[executable]].<ref name="cpl_3rd-ch1-7">{{cite book
| last = Wilson
| first = Leslie B.
Line 57 ⟶ 58:
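
To make the statement-by-statement cycle concrete, the following is a minimal sketch of such an interpreter loop in C. The two-command source language (<code>print</code> and <code>add</code>) is hypothetical and chosen only for illustration:

<syntaxhighlight lang="c">
/* A minimal sketch of the load-translate-execute loop described above.
   The two-command source language (print, add) is hypothetical. */
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s source_file\n", argv[0]);
        return 1;
    }
    FILE *source = fopen(argv[1], "r");    /* load the source code */
    if (source == NULL) {
        perror("fopen");
        return 1;
    }
    char statement[256];
    char text[200];
    int a, b;
    while (fgets(statement, sizeof statement, source) != NULL) {
        /* translate and execute one statement at a time */
        if (sscanf(statement, "print %199[^\n]", text) == 1)
            printf("%s\n", text);
        else if (sscanf(statement, "add %d %d", &a, &b) == 2)
            printf("%d\n", a + b);
    }
    fclose(source);
    return 0;
}
</syntaxhighlight>
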
==Example computer program==
 
The [["Hello, World!" program]] is used to illustrate a language's basic [[Syntax (programming languages)|syntax]]. The syntax of the language [[Dartmouth BASIC|BASIC]] (1964) was intentionally limited to make the language easy to learn.<ref name="cpl_3rd-ch2-30_quote1">{{cite book
| last = Wilson
| first = Leslie B.
Line 191 ⟶ 192:
| page = 234
| isbn = 978-0-669-17342-0
}}</ref> All present-day computers are [[Turing completeness|Turing complete]].<ref name="formal_languages-ch9-p243">{{cite book
| last = Linz
| first = Peter
Line 204 ⟶ 205:
===ENIAC===
[[File:ENIAC-changing_a_tube.jpg|thumb|right|Glenn A. Beck changing a tube in ENIAC]]
The [[ENIAC|Electronic Numerical Integrator And Computer]] (ENIAC) was built between July 1943 and Fall 1945. It was a [[Turing complete]], general-purpose computer that used 17,468 [[vacuum tube]]s to create the [[Electronic circuit|circuits]]. At its core, it was a series of [[Pascaline]]s wired together.<ref name="eniac-ch5-p102">{{cite book
| last = McCartney
| first = Scott
Line 252 ⟶ 253:
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/118
}}</ref> [[J. Presper Eckert|Presper Eckert]] and [[John Mauchly]] built the ENIAC. The two engineers introduced the ''stored-program concept'' in a three-page memo dated February 1944.<ref name="eniac-ch6-p119">{{cite book
| last = McCartney
| first = Scott
Line 270 ⟶ 271:
| isbn = 978-0-8027-1348-3
| url = https://archive.org/details/eniac00scot/page/123
}}</ref><ref>{{Citation |last=Huskey |first=Harry D. |title=EDVAC |date=2003-01-01 |encyclopedia=Encyclopedia of Computer Science |pages=626–628 |url=https://dl.acm.org/doi/10.5555/1074100.1074362 |access-date=2025-04-25 |place=GBR |publisher=John Wiley and Sons Ltd. |isbn=978-0-470-86412-8}}</ref>
 
The [[IBM System/360]] (1964) was a family of computers, each having the same [[instruction set|instruction set architecture]]. The [[IBM System/360 Model 20|Model 20]] was the smallest and least expensive. Customers could upgrade and retain the same [[application software]].<ref name="sco-ch1-p21">{{cite book
| last = Tanenbaum
| first = Andrew S.
Line 282 ⟶ 283:
| url = https://archive.org/details/structuredcomput00tane
| url-access = registration
}}</ref> The [[IBM System/360 Model 195|Model 195]] was the most powerful and most expensive. Each System/360 model featured [[Computer multitasking#Multiprogramming|multiprogramming]]<ref name="sco-ch1-p21"/>—having multiple [[Process (computing)|processes]] in [[random-access memory|memory]] at once. When one process was waiting for [[input/output]], another could compute.
 
IBM planned for each model to be programmed using [[PL/1]].<ref name="cpl_3rd-ch2-27">{{cite book
Line 292 ⟶ 293:
| page = 27
| isbn = 0-201-71012-9
}}</ref> A committee was formed that included [[COBOL]], [[Fortran]] and [[ALGOL]] programmers. The purpose was to develop a language that was comprehensive, easy to use, extendible, and would replace COBOL and Fortran.<ref name="cpl_3rd-ch2-27"/> The result was a large and complex language that took a long time to [[Compiler|compile]].<ref name="cpl_3rd-ch2-29">{{cite book
| last = Wilson
| first = Leslie B.
Line 315 ⟶ 316:
===Very Large Scale Integration===
[[Image:Diopsis.jpg|thumb|right|A VLSI integrated-circuit [[die (integrated circuit)|die]] ]]
A major milestone in software development was the invention of the [[Very Large Scale Integration]] (VLSI) circuit (1964).<ref name="digibarn_bp">{{cite web
| url=https://www.digibarn.com/stories/bill-pentz-story/index.html#story
| title=Bill Pentz — A bit of Background: the Post-War March to VLSI
| publisher=Digibarn Computer Museum
| date=August 2008
| access-date=January 31, 2022
| archive-date=March 21, 2022
| archive-url=https://web.archive.org/web/20220321183527/https://www.digibarn.com/stories/bill-pentz-story/index.html#story
| url-status=live
}}</ref> Following [[World War II]], tube-based technology was replaced with [[point-contact transistor]]s (1947) and [[bipolar junction transistor]]s (late 1950s) mounted on a [[circuit board]].<ref name="digibarn_bp"/> [[Invention of the integrated circuit|During the 1960s]], the [[aerospace]] industry replaced the circuit board with an [[integrated circuit|integrated circuit chip]].<ref name="digibarn_bp"/>
 
[[Robert Noyce]], co-founder of [[Fairchild Semiconductor]] (1957) and [[Intel]] (1968), achieved a technological improvement to refine the [[Semiconductor device fabrication|production]] of [[field-effect transistor]]s (1963).<ref name="digital_age">{{cite book
Line 336 ⟶ 328:
| archive-url=https://web.archive.org/web/20230202181649/https://books.google.com/books?id=UUbB3d2UnaAC&pg=PA46
| url-status=live
}}</ref> The goal is to alter the [[electrical resistivity and conductivity]] of a [[p–n junction|semiconductor junction]]. First, naturally occurring [[silicate minerals]] are converted into [[Polycrystalline silicon|polysilicon]] rods using the [[Siemens process]].<ref name="osti">{{cite web
| url=https://www.osti.gov/servlets/purl/1497235
| title=Manufacturing of Silicon Materials for Microelectronics and Solar PV
Line 347 ⟶ 339:
| archive-url=https://web.archive.org/web/20230323163602/https://www.osti.gov/biblio/1497235
| url-status=live
}}</ref> The [[Czochralski method|Czochralski process]] then converts the rods into a [[monocrystalline silicon]] [[Boule (crystal)|boule crystal]].<ref name="britannica_wafer">{{cite web
| url=https://www.britannica.com/technology/integrated-circuit/Fabricating-ICs#ref837156
| title=Fabricating ICs Making a base wafer
Line 374 ⟶ 366:
}}</ref> The MOS transistor is the primary component in ''integrated circuit chips''.<ref name="digital_age"/>
 
Originally, [[integrated circuit]] chips had their function set during manufacturing. During the 1960s, controlling the electrical flow migrated to programming a [[Diode matrix|matrix]] of [[read-only memory]] (ROM). The matrix resembled a two-dimensional array of fuses.<ref name="digibarn_bp"/> The process to embed instructions onto the matrix was to burn out the unneeded connections.<ref name="digibarn_bp"/> There were so many connections that [[firmware]] programmers wrote a ''computer program'' on another chip to oversee the burning.<ref name="digibarn_bp"/> The technology became known as [[Programmable ROM]]. In 1971, Intel [[stored-program computer|installed the computer program onto the chip]] and named it the [[Intel 4004]] [[microprocessor]].<ref name="intel_4004">{{cite web
| url=https://spectrum.ieee.org/chip-hall-of-fame-intel-4004-microprocessor
| title=Chip Hall of Fame: Intel 4004 Microprocessor
Line 393 ⟶ 385:
| access-date=February 5, 2022
}}</ref>
 
===Sac State 8008===
[[File:Sacstate 8008.jpg|thumb|Artist's depiction of Sacramento State University's Intel 8008 microcomputer (1972)]]
The Intel 4004 (1971) was a 4-[[bit]] microprocessor designed to run the [[Busicom]] calculator. Five months after its release, Intel released the [[Intel 8008]], an 8-bit microprocessor. Bill Pentz led a team at [[California State University, Sacramento|Sacramento State]] to build the first [[microcomputer]] using the Intel 8008: the ''Sac State 8008'' (1972).<ref name="cnet">{{cite web
| url=https://www.cnet.com/news/inside-the-worlds-long-lost-first-microcomputer/
| title=Inside the world's long-lost first microcomputer
| publisher=c/net
| date=January 8, 2010
| access-date=January 31, 2022
| archive-date=February 1, 2022
| archive-url=https://web.archive.org/web/20220201023538/https://www.cnet.com/news/inside-the-worlds-long-lost-first-microcomputer/
| url-status=live
}}</ref> Its purpose was to store patient medical records. The computer supported a [[disk operating system]] to run a [[Memorex]] 3-[[megabyte]] [[hard disk drive]].<ref name="digibarn_bp"/> It had a color display and a keyboard packaged in a single console. The disk operating system was programmed using [[IBM Basic Assembly Language and successors|IBM's Basic Assembly Language (BAL)]]. The medical records application was programmed using a [[BASIC]] interpreter.<ref name="digibarn_bp"/> However, the computer was an evolutionary dead-end because it was extremely expensive. Also, it was built at a public university lab for a specific purpose.<ref name="cnet"/> Nonetheless, the project contributed to the development of the [[Intel 8080]] (1974) [[Instruction set architecture|instruction set]].<ref name="digibarn_bp"/>
 
===x86 series===
[[File:IBM_PC-IMG_7271_(transparent).png|thumb|right|The original [[IBM Personal Computer]] (1981) used an Intel 8088 microprocessor.]]
In 1978, the modern [[software development]] environment began when Intel upgraded the [[Intel 8080]] to the [[Intel 8086]]. Intel simplified the Intel 8086 to manufacture the cheaper [[Intel 8088]].<ref name="infoworld_8-23-82">{{cite web
| url=https://books.google.com/books?id=VDAEAAAAMBAJ&pg=PA22
| title=Bill Gates, Microsoft and the IBM Personal Computer
Line 427 ⟶ 406:
===Changing programming environment===
[[File:DEC VT100 terminal transparent.png|thumb|right|The [[Digital Equipment Corporation|DEC]] [[VT100]] (1978) was a widely used [[computer terminal]].]]
VLSI circuits enabled the [[integrated development environment|programming environment]] to advance from a [[computer terminal]] (until the 1990s) to a [[graphical user interface]] (GUI) computer. Computer terminals limited programmers to a single [[Shell (computing)|shell]] running in a [[command-line interface|command-line environment]]. During the 1970s, full-screen source code editing became possible through a [[text-based user interface]]. Regardless of the technology available, the goal is to program in a [[programming language]].
 
==Programming paradigms and languages==
 
[[Programming language]] features exist to provide building blocks to be combined to express programming ideals.<ref name="stroustrup-ch1-10">{{cite book
| last = Stroustrup
| first = Bjarne
Line 456 ⟶ 435:
| isbn = 978-0-321-56384-2
}}</ref> For example, different paradigms may differentiate:<ref name="stroustrup-ch1-11"/>
* [[Procedural programming|procedural languages]], [[Functional programming|functional language]]s, and [[Logic programming|logical languages]].
* different levels of [[Abstraction (computer science)|data abstraction]].
* different levels of [[class hierarchy]].
* different levels of input [[Data type|datatypes]], as in [[Container (abstract data type)|container types]] and [[generic programming]].
Each of these programming styles has contributed to the synthesis of different ''programming languages''.<ref name="stroustrup-ch1-11"/>
 
Line 507 ⟶ 486:
| isbn = 978-0-13-854662-5
| url = https://archive.org/details/structuredcomput00tane/page/17
}}</ref> The EDSAC was programmed in the first [[Programming language generations|generation of programming language]].<ref>{{Citation |last1=Wilkes |first1=M. V. |title=The EDSAC |date=1982 |work=The Origins of Digital Computers: Selected Papers |pages=417–421 |editor-last=Randell |editor-first=Brian |url=https://link.springer.com/chapter/10.1007/978-3-642-61812-3_34 |access-date=2025-04-25 |place=Berlin, Heidelberg |publisher=Springer |language=en |doi=10.1007/978-3-642-61812-3_34 |isbn=978-3-642-61812-3 |last2=Renwick |first2=W.|url-access=subscription }}</ref>
 
* The [[First-generation programming language|first generation of programming language]] is [[machine language]].<ref name="pis-ch4-p160">{{cite book
Line 517 ⟶ 496:
| page = 160
| isbn = 0-619-06489-7
}}</ref> ''Machine language'' requires the programmer to enter instructions using ''instruction numbers'' called [[machine code]]. For example, the ADD operation on the [[PDP-11 architecture|PDP-11]] has instruction number 24576.{{efn|Whereas this is a decimal number, PDP-11 code is always expressed as [[octal]].}}<ref name="sco-ch7-p399">{{cite book
| last = Tanenbaum
| first = Andrew S.
Line 528 ⟶ 507:
}}</ref>
 
* The [[Second-generation programming language|second generation of programming language]] is [[assembly language]].<ref name="pis-ch4-p160"/> ''Assembly language'' allows the programmer to use [[Assembly language#Mnemonics|mnemonic]] [[Instruction_set_architecture#Instructions|instructions]] instead of remembering instruction numbers. An [[Assembler (computing)|assembler]] translates each assembly language mnemonic into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD R0,R0 in the source code.<ref name="sco-ch7-p399"/> The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV.<ref name="sco-ch7-p399"/> Computers also have instructions like DW (Define [[Word (computer architecture)|Word]]) to reserve [[Random-access memory|memory]] cells. Then the MOV instruction can copy [[integer]]s between [[Processor register|registers]] and memory.
 
:* The basic structure of an assembly language statement is a label, [[Operation (mathematics)|operation]], [[operand]], and comment.<ref name="sco-ch7-p400">{{cite book
Line 563 ⟶ 542:
| page = 26
| isbn = 0-201-71012-9
}}</ref> Early languages include [[Fortran]] (1958), [[COBOL]] (1959), [[ALGOL]] (1960), and [[BASIC]] (1964).<ref name="pis-ch4-p160"/> In 1973, the [[C (programming language)|C programming language]] emerged as a [[High-level programming language|high-level language]] that produced efficient machine language instructions.<ref name="cpl_3rd-ch2-37">{{cite book
| last = Wilson
| first = Leslie B.
Line 582 ⟶ 561:
}}</ref> C has statements that may generate a single machine instruction.{{efn|[[Operators in C and C++|Operators]] like <code>x++</code> will usually compile to a single instruction.}} Moreover, an [[optimizing compiler]] might overrule the programmer and produce fewer machine instructions than statements. Today, an entire [[programming paradigm|paradigm]] of languages fills the [[imperative programming|imperative]], ''third generation'' spectrum.
 
* The [[Fourth-generation programming language|fourth generation of programming language]] emphasizes what output results are desired, rather than how programming statements should be constructed.<ref name="pis-ch4-p160"/> [[Declarative programming|Declarative language]]s attempt to limit [[Side effect (computer science)|side effects]] and allow programmers to write code with relatively few errors.<ref name="pis-ch4-p160"/> One popular ''fourth generation'' language is called [[SQL|Structured Query Language]] (SQL).<ref name="pis-ch4-p160"/> [[Database]] developers no longer need to process each database record one at a time. Also, a simple [[Select (SQL)|select statement]] can generate output records without having to understand how they are retrieved.
 
===Imperative languages===
Line 662 ⟶ 641:
| page = 19
| isbn = 0-201-71012-9
}}</ref> Emerging from a committee of European and American programming language experts, it used standard [[mathematical notation]] and had a readable, structured design. ALGOL was the first language to define its [[Syntax (programming languages)|syntax]] using the [[Backus–Naur form]].<ref name="cpl_3rd-ch2-19"/> This led to [[Syntax-directed translation|syntax-directed]] compilers. It added features like:
* [[Block (programming)|block structure]], where variables were local to their block.
* arrays with variable bounds.
Line 681 ⟶ 660:
* The 'run' command executed the program.
 
However, the BASIC syntax was too simple for large programs.<ref name="cpl_3rd-ch2-30"/> Recent dialects added structure and object-oriented extensions. [[Microsoft]]'s [[Visual Basic]] is still widely used and produces a [[graphical user interface]].<ref name="cpl_3rd-ch2-31"/>
 
====C====
The [[C (programming language)|C programming language]] (1973) got its name because the language [[BCPL]] was replaced with [[B (programming language)|B]], and [[Bell Labs|AT&T Bell Labs]] called the next version "C". Its purpose was to write the [[UNIX]] [[operating system]].<ref name="cpl_3rd-ch2-37"/> C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.<ref name="cpl_3rd-ch2-37"/> C also grew because it has the facilities of [[assembly language]] with a [[High-level programming language|high-level syntax]]. It added advanced features like:
* [[inline assembler]].
* arithmetic on pointers.
Line 692 ⟶ 671:
 
[[File:Computer-memory-map.png|thumb|right|Computer memory map]]
''C'' allows the programmer to control the region of memory where data is to be stored. [[Global variable]]s and [[static variable]]s require the fewest [[Clock signal|clock cycle]]s to store. The [[call stack|stack]] is automatically used for the standard variable [[Declaration (computer programming)|declarations]]. [[Manual memory management|Heap]] memory is returned to a [[Pointer (computer programming)|pointer variable]] from the [[C dynamic memory allocation|<code>malloc()</code>]] function.
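
A minimal C sketch of these storage choices follows; the variable names are illustrative only, and the comments note which region each declaration uses:

<syntaxhighlight lang="c">
/* A minimal sketch of the storage regions described above; names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

int global_count = 1;     /* global and static data region (initialized data segment) */
static int static_count;  /* global and static data region (block started by symbol) */

int main(void)
{
    int local_count = 2;                           /* stack region */
    int *heap_count = malloc(sizeof *heap_count);  /* heap region, returned by malloc() */
    if (heap_count == NULL)
        return 1;
    *heap_count = 3;
    printf("%d %d %d %d\n", global_count, static_count, local_count, *heap_count);
    free(heap_count);                              /* heap memory must be released explicitly */
    return 0;
}
</syntaxhighlight>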
 
* The ''global and static data'' region is located just above the ''program'' region. (The program region is technically called the ''text'' region. It is where machine instructions are stored.)
Line 704 ⟶ 683:
| url-status = live
}}</ref> One region is called the ''initialized [[data segment]]'', where variables declared with default values are stored. The other region is called the ''[[.bss|block started by symbol]]'', where variables declared without default values are stored.
:* Variables stored in the ''global and static data'' region have their [[Memory address|addresses]] set at compile time. They retain their values throughout the life of the process.
 
:* The global and static region stores the ''global variables'' that are declared on top of (outside) the <code>main()</code> function.<ref name="cpl-ch1-p31">{{cite book
Line 717 ⟶ 696:
|page=31}}</ref> Global variables are visible to <code>main()</code> and every other function in the source code.
 
: On the other hand, variable declarations inside of <code>main()</code>, other functions, or within <code>{</code> <code>}</code> [[Block (programming)|block delimiters]] are ''local variables''. Local variables also include ''[[Parameter (computer programming)#Parameters and arguments|formal parameter]] variables''. Parameter variables are enclosed within the parenthesis of a function definition.<ref name="cpl_3rd-ch6-128">{{cite book
| last = Wilson
| first = Leslie B.
Line 727 ⟶ 706:
}}</ref> Parameters provide an [[Interface (computing)|interface]] to the function.
 
:* ''Local variables'' declared using the <code>static</code> prefix are also stored in the ''global and static data'' region.<ref name="geeksforgeeks"/> Unlike global variables, static variables are only visible within the function or block. Static variables always retain their value. An example usage would be the function <code>int increment_counter(){static int counter = 0; counter++; return counter;}</code>{{efn|This function could be written more concisely as <code>int increment_counter(){ static int counter; return ++counter;}</code>. 1) Static variables are automatically initialized to zero. 2) <code>++counter</code> is a prefix [[Increment and decrement operators|increment operator]].}}
 
* The [[call stack|stack]] region is a contiguous block of memory located near the top memory address.<ref name="lpi-ch6-p121">{{cite book
Line 770 ⟶ 749:
 
====C++====
In the 1970s, [[software engineering|software engineers]] needed language support to break large projects down into [[Modular programming|modules]].<ref name="cpl_3rd-ch2-38">{{cite book
| last = Wilson
| first = Leslie B.
Line 778 ⟶ 757:
| page = 38
| isbn = 0-201-71012-9
}}</ref> One obvious feature was to decompose large projects ''physically'' into separate [[computer file|files]]. A less obvious feature was to decompose large projects ''logically'' into [[abstract data type]]s.<ref name="cpl_3rd-ch2-38"/> At the time, languages supported [[Type system|concrete (scalar)]] datatypes like [[integer]] numbers, [[Floating-point arithmetic|floating-point]] numbers, and [[String (computer science)|strings]] of [[Character (computing)|characters]]. Abstract datatypes are [[Record (computer science)|structures]] of concrete datatypes, with a new name assigned. For example, a [[List (abstract data type)|list]] of integers could be called <code>integer_list</code>.
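
As an illustration, here is a minimal C sketch of such an abstract datatype. The name <code>integer_list</code> follows the example in the text; the node layout and the single operation are hypothetical:

<syntaxhighlight lang="c">
/* A minimal sketch of an abstract datatype built from concrete datatypes.
   The integer_list name follows the text; the operation shown is illustrative only. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;            /* a concrete datatype */
    struct node *next;    /* link to the rest of the list */
};

typedef struct {
    struct node *head;    /* the structure is given a new name: integer_list */
} integer_list;

/* an operation written specifically for the structure */
void integer_list_add(integer_list *list, int value)
{
    struct node *item = malloc(sizeof *item);
    if (item == NULL)
        return;
    item->value = value;
    item->next = list->head;
    list->head = item;
}

int main(void)
{
    integer_list grades = { NULL };
    integer_list_add(&grades, 95);
    integer_list_add(&grades, 87);
    for (struct node *item = grades.head; item != NULL; item = item->next)
        printf("%d\n", item->value);
    while (grades.head != NULL) {              /* release the nodes */
        struct node *next = grades.head->next;
        free(grades.head);
        grades.head = next;
    }
    return 0;
}
</syntaxhighlight>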
 
In object-oriented jargon, abstract datatypes are called [[Class (computer programming)|classes]]. However, a ''class'' is only a definition; no memory is allocated. When memory is allocated to a class and [[Name binding|bound]] to an [[identifier]], it is called an [[Object (computer science)|object]].<ref name="cpl_3rd-ch8-193">{{cite book
Line 798 ⟶ 777:
| page = 39
| isbn = 0-201-71012-9
}}</ref> A function, in an object-oriented language, is assigned to a class. An assigned function is then referred to as a [[Method (computer programming)|method]], [[Method (computer programming)#Member functions in C++|member function]], or [[Operation (mathematics)|operation]]. ''Object-oriented programming'' is executing ''operations'' on ''objects''.<ref name="cpl_3rd-ch2-35">{{cite book
| last = Wilson
| first = Leslie B.
Line 836 ⟶ 815:
}}</ref>
 
An object-oriented module is composed of two files. The definitions file is called the [[Include directive|header file]]. Here is a C++ ''header file'' for the ''GRADE class'' in a simple school application:
 
<syntaxhighlight lang="cpp">
Line 879 ⟶ 858:
}}</ref> It is executed when the calling operation executes the <code>[[new and delete (C++)|new]]</code> statement.
 
A module's other file is the ''[[source code|source file]]''. Here is a C++ source file for the ''GRADE class'' in a simple school application:
 
<syntaxhighlight lang="cpp">
Line 1,043 ⟶ 1,022:
| page = 218
| isbn = 0-201-71012-9
}}</ref> [[Declarative programming|Declarative language]]s generally omit the assignment statement and the control flow. They describe ''what'' computation should be performed and not ''how'' to compute it. Two broad categories of declarative languages are [[functional language]]s and [[Logic programming|logical languages]].
 
The principle behind a ''functional language'' is to use [[lambda calculus]] as a guide for a well-defined [[Semantics (computer science)|semantics]].<ref name="cpl_3rd-ch9-217">{{cite book
Line 1,104 ⟶ 1,083:
| page = 240
| isbn = 0-201-71012-9
}}</ref> Moreover, their lack of side effects has made them popular in [[Parallel computing|parallel programming]] and [[Concurrent computing|concurrent programming]].<ref name="cpl_3rd-ch9-241">{{cite book
| last = Wilson
| first = Leslie B.
Line 1,126 ⟶ 1,105:
| publisher=Springer Science & Business Media
| isbn=9781447117193
| page=2}}</ref> It is tailored to process [[List (abstract data type)|lists]]. A full structure of the data is formed by building lists of lists. In memory, a [[Tree (data structure)|tree data structure]] is built. Internally, the tree structure lends itself nicely to [[Recursion (computer science)|recursive]] functions.<ref name="cpl_3rd-ch9-220">{{cite book
| last = Wilson
| first = Leslie B.
Line 1,158 ⟶ 1,137:
| page = 229
| isbn = 0-201-71012-9
}}</ref> Also, ''Lisp'' is not concerned with the [[data type|datatype]] of the elements at compile time.<ref name="cpl_3rd-ch9-227">{{cite book
| last = Wilson
| first = Leslie B.
Line 1,216 ⟶ 1,195:
| page = 235
| isbn = 0-201-71012-9
}}</ref> Moreover, ''ML'' assigns the datatype of an element at [[compile time|compile-time]]. Assigning the datatype at compile time is called [[Name binding#Binding time|static binding]]. Static binding increases reliability because the compiler checks the context of variables before they are used.<ref name="cpl_3rd-ch3-55">{{cite book
| last = Wilson
| first = Leslie B.
Line 1,280 ⟶ 1,259:
</syntaxhighlight>
 
Here is a comprehensive example:<ref name="Logical English">Kowalski, R., Dávila, J., Sartor, G. and Calejo, M., 2023. Logical English for law and education. In Prolog: The Next 50 Years (pp. 287–299). Cham: Springer Nature Switzerland.</ref>
 
1) All dragons billow fire, or equivalently, a thing billows fire if the thing is a dragon:
Line 1,320 ⟶ 1,299:
Prolog is an untyped language. Nonetheless, [[Inheritance (object-oriented programming)|inheritance]] can be represented by using predicates. Rule (4) asserts that a creature is a superclass of a dragon.
 
Questions are answered using [[backward chaining|backward reasoning]]. Given the question:
 
<syntaxhighlight lang="prolog"> ?- billows_fire(X).
Line 1,360 ⟶ 1,339:
| isbn = 0-256-08515-3
| quote = While it is true that OOD [(object oriented design)] as such is not supported by the majority of popular languages, a large subset of OOD can be used.
}}</ref> In an object-oriented language, an object container is called a [[Class (computer programming)|class]]. In a non-object-oriented language, a [[data structure]] (which is also known as a [[Record (computer science)|record]]) may become an object container. To turn a data structure into an object container, operations need to be written specifically for the structure. The resulting structure is called an [[Abstract data type|abstract datatype]].<ref name="dsa-ch3-p57">{{cite book
| last = Weiss
| first = Mark Allen
Line 1,397 ⟶ 1,376:
The <code>grade_new()</code> function performs the same algorithm as the C++ [[Constructor (object-oriented programming)|constructor]] operation.
 
Here is a C programming language ''[[source code|source file]]'' for the ''GRADE abstract datatype'' in a simple school application:
 
<syntaxhighlight lang="c">
Line 1,713 ⟶ 1,692:
| page = [https://archive.org/details/discretemathemat00rose/page/623 623]
| isbn = 978-0-07-053744-6
| url = https://archive.org/details/discretemathemat00rose/page/623}}</ref> BNF describes the syntax of a language and itself has a ''syntax''. This recursive definition is an example of a [[metalanguage|meta-language]].<ref name="cpl_3rd-ch12-290"/> The ''syntax'' of BNF includes:
* <code>::=</code> which translates to ''is made up of a[n]'' when a non-terminal is to its right. It translates to ''is'' when a terminal is to its right.
* <code>|</code> which translates to ''or''.
Line 1,769 ⟶ 1,748:
 
==Software engineering and computer programming==
[[File:Two women operating ENIAC (full resolution).jpg|thumb|right|Prior to programming languages, [[Jean Bartik|Betty Jennings]] and [[Frances Spence|Fran Bilas]] programmed the [[ENIAC]] by moving cables and setting switches.]]
 
[[Software engineering]] is a variety of techniques to produce [[software quality|quality]] ''computer programs''.<ref name="se-preface1">{{cite book
Line 1,880 ⟶ 1,859:
| page = 331
| isbn = 0-256-08515-3
}}</ref> Chances are a module will execute modules located in other source code files. Therefore, computer programmers may be [[Programming in the large and programming in the small#Programming in the large|programming in the large]]: programming modules so they will effectively couple with each other.<ref name="se-ch10-331"/> Programming-in-the-large includes contributing to the [[API|application programming interface]] (API).
 
===Program modules===
Line 1,939 ⟶ 1,918:
The levels of coupling from worst to best are:<ref name="se-ch8-226"/>
 
* ''Content Coupling'': A module has content coupling if it modifies a [[local variable]] of another function. COBOL used to do this with the ''alter'' verb.
* ''Common Coupling'': A module has common coupling if it modifies a global variable.
* ''Control Coupling'': A module has control coupling if another module can modify its [[control flow]]. For example, <code>perform_arithmetic( perform_addition, a, b )</code>. Instead, control should be on the makeup of the returned object.
Line 1,991 ⟶ 1,970:
| isbn = 0-619-06489-7
| quote = The key to unlocking the potential of any computer system is application software.
}}</ref> [[Enterprise software|Enterprise application software]] bundles accounting, personnel, customer, and vendor applications. Examples include [[enterprise resource planning]], [[customer relationship management]], and [[supply chain management software]].
 
Enterprise applications may be developed in-house as one-of-a-kind [[proprietary software]].<ref name="pis-ch4-p148">{{cite book
Line 2,001 ⟶ 1,980:
| page = 147
| isbn = 0-619-06489-7
}}</ref> Alternatively, they may be purchased as [[Commercial off-the-shelf|off-the-shelf software]]. Purchased software may be modified to provide [[custom software]]. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.<ref name="pis-ch4-p147_quote2">{{cite book
| last = Stair
| first = Ralph M.
Line 2,081 ⟶ 2,060:
[[File:Kernel Layout.svg|thumb|A kernel connects the application software to the hardware of a computer.]]
The kernel's main purpose is to manage the limited resources of a computer:
* The kernel program should perform [[Scheduling (computing)|process scheduling]],<ref name="lpi-ch2-p22">{{cite book
|title=The Linux Programming Interface
|last=Kerrisk
Line 2,088 ⟶ 2,067:
|year=2010
|isbn=978-1-59327-220-3
|page=22}}</ref> which is also known as a [[context switch]]. The kernel creates a [[process control block]] when a ''computer program'' is [[Loader (computing)|selected for execution]]. However, an executing program gets exclusive access to the [[central processing unit]] only for a [[Preemption (computing)#Time slice|time slice]]. To provide each user with the [[Time-sharing|appearance of continuous access]], the kernel quickly [[Preemption (computing)|preempts]] each process control block to execute another one. The goal for [[Systems programming|system developers]] is to minimize [[dispatch latency]].
[[File:Virtual memory.svg|thumb|250px|Physical memory is scattered around RAM and the hard disk. Virtual memory is one continuous block.]]
* The kernel program should perform [[memory management]].
Line 2,099 ⟶ 2,078:
| page = 152
| isbn = 0-13-201799-7
}}</ref> The kernel maintains a master-region table and many per-process-region (pregion) tables—one for each running [[Process (computing)|process]].<ref name="duos-ch6-p152"/> These tables constitute the [[virtual address space]]. The master-region table is used to determine where its contents are located in [[Computer data storage#Primary storage|physical memory]]. The pregion tables allow each process to have its own program (text) pregion, data pregion, and stack pregion.
:* The program pregion stores machine instructions. Since machine instructions do not change, the program pregion may be shared by many processes of the same executable.<ref name="duos-ch6-p152"/>
:* To save time and memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire execution file.<ref name="lpi-ch2-p22"/>
Line 2,161 ⟶ 2,140:
 
===Utility program===
A [[Utility software|utility program]] is designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers.<ref name="pis-ch4-p145">{{cite book
| last = Stair
| first = Ralph M.
Line 2,188 ⟶ 2,167:
[[File:AND_ANSI_Labelled.svg|thumb|96px|right|AND gate]]
[[File:OR_ANSI_Labelled.svg|thumb|96px|right|OR gate]]
A [[Microcode|microcode program]] is the bottom-level interpreter that controls the [[Datapath|data path]] of software-driven computers.<ref name="sco6th-ch1-p6">{{cite book
| last = Tanenbaum
| first = Andrew S.
Line 2,225 ⟶ 2,204:
}}</ref>
 
* A single transistor forms the [[Inverter (logic gate)|NOT gate]].
* Connecting two transistors in series forms the [[NAND gate]].
* Connecting two transistors in parallel forms the [[NOR gate]].
Line 2,242 ⟶ 2,221:
| isbn = 978-0-13-291652-3
}}</ref>
These hardware-level instructions move data throughout the [[Datapath|data path]].
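
The truth functions of these gates can be sketched in C using logical operators; this models only the logic, not the transistor circuits themselves:

<syntaxhighlight lang="c">
/* A minimal sketch of the gate truth functions described above (logic only, not circuitry). */
#include <stdio.h>

int not_gate(int a)         { return !a; }        /* one transistor */
int nand_gate(int a, int b) { return !(a && b); } /* two transistors in series */
int nor_gate(int a, int b)  { return !(a || b); } /* two transistors in parallel */

int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOT a=%d  NAND=%d  NOR=%d\n",
                   a, b, not_gate(a), nand_gate(a, b), nor_gate(a, b));
    return 0;
}
</syntaxhighlight>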
 
The micro-instruction cycle begins when the [[microsequencer]] uses its microprogram counter to ''fetch'' the next [[Machine code|machine instruction]] from [[random-access memory]].<ref name="sco6th-ch4-p255">{{cite book
| last = Tanenbaum
| first = Andrew S.
Line 2,274 ⟶ 2,253:
}}</ref> The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.
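
For example, multiplication can be composed from the elementary add, shift, and compare operations. The following C sketch is a software illustration of that combining-and-looping idea, not actual microcode:

<syntaxhighlight lang="c">
/* A minimal sketch: multiplication built by looping the elementary add, shift, and compare operations. */
#include <stdio.h>

unsigned multiply(unsigned a, unsigned b)
{
    unsigned product = 0;
    while (b != 0) {                /* compare */
        if (b & 1u)                 /* test the lowest bit */
            product = product + a;  /* add */
        a = a << 1;                 /* shift left */
        b = b >> 1;                 /* shift right */
    }
    return product;
}

int main(void)
{
    printf("%u\n", multiply(6, 7)); /* prints 42 */
    return 0;
}
</syntaxhighlight>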
 
Microcode instructions move data between the CPU and the [[memory controller]]. Memory controller microcode instructions manipulate two [[Processor register|registers]]. The [[memory address register]] is used to access each memory cell's address. The [[Memory buffer register|memory data register]] is used to set and read each cell's contents.<ref name="sco6th-ch4-p249">{{cite book
| last = Tanenbaum
| first = Andrew S.
Line 2,281 ⟶ 2,260:
| year = 2013
| page = 249
| isbn = 978-0-13-291652-3
}}</ref>
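
A simplified software model of the two registers follows; the memory size and function names are hypothetical, and real controllers implement this in hardware:

<syntaxhighlight lang="c">
/* A minimal software model of a memory controller's two registers (illustrative only). */
#include <stdio.h>

#define MEMORY_CELLS 16

static unsigned memory[MEMORY_CELLS]; /* the memory cells */
static unsigned mar;                  /* memory address register: selects a cell's address */
static unsigned mdr;                  /* memory data register: holds the cell's contents */

static void write_cell(void) { memory[mar] = mdr; } /* set the selected cell */
static void read_cell(void)  { mdr = memory[mar]; } /* read the selected cell */

int main(void)
{
    mar = 3;              /* select cell 3 */
    mdr = 42;             /* value to store */
    write_cell();

    mdr = 0;
    read_cell();          /* mdr now holds the contents of cell 3 */
    printf("%u\n", mdr);  /* prints 42 */
    return 0;
}
</syntaxhighlight>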
 
Microcode instructions move data between the CPU and the many [[Bus (computing)|computer buses]]. The [[Disk controller|disk controller bus]] writes to and reads from [[hard disk drive]]s. Data is also moved between the CPU and other functional units via the [[PCI Express|peripheral component interconnect express bus]].<ref name="sco6th-ch2-p111">{{cite book
| last = Tanenbaum
| first = Andrew S.
| title = Structured Computer Organization, Sixth Edition
| publisher = Pearson
| year = 2013
| page = 111
| isbn = 978-0-13-291652-3
}}</ref>