{{Short description|Field of artificial intelligence}}
'''Knowledge representation''' ('''KR''') aims to model information in a structured manner to formally represent it as knowledge in knowledge-based systems.
In a broader sense, parameterized models in [[machine learning]] can also be regarded as a form of knowledge representation.
== History ==
{{Artificial intelligence|Major goals}}
The earliest work in computerized knowledge representation focused on general problem-solvers such as the [[General Problem Solver]] (GPS) system developed by [[Allen Newell]] and [[Herbert A. Simon]] in 1959 and the [[Advice Taker]] proposed by [[John McCarthy (computer scientist)|John McCarthy]] also in 1959. GPS featured data structures for planning and decomposition. The system would begin with a goal. It would then decompose that goal into sub-goals and then set out to construct strategies that could accomplish each subgoal. The Advice Taker, on the other hand, proposed the use of the [[predicate calculus]] to represent common sense reasoning.
Many of the early approaches to knowledge representation in artificial intelligence (AI) used graph representations and [[semantic network]]s, similar to [[knowledge graph]]s today.
Other researchers focused on developing [[automated theorem proving|automated theorem-provers]] for [[first-order logic]], motivated by the use of mathematical logic to formalise mathematics and to automate the proof of mathematical theorems. A major step in this direction was the development of the [[resolution (logic)|resolution]] method by [[John Alan Robinson]].
Meanwhile, John McCarthy and [[Pat Hayes]] developed the [[situation calculus]] as a logical representation of common sense knowledge about the laws of cause and effect. [[Cordell Green]], in turn, showed how to do robot plan-formation by applying resolution to the situation calculus. He also showed how to use resolution for [[Question answering|question-answering]] and [[automatic programming]].<ref>{{cite conference|first=Cordell|last=Green|url=https://www.ijcai.org/Proceedings/69/Papers/023.pdf|title=Application of Theorem Proving to Problem Solving|conference=IJCAI 1969}}</ref>
In contrast, researchers at Massachusetts Institute of Technology (MIT) rejected the resolution uniform proof procedure paradigm and advocated the procedural embedding of knowledge instead.<ref>Hewitt, C., 2009. Inconsistency robustness in logic programs. arXiv preprint arXiv:0904.3036.</ref> The resulting conflict between the use of logical representations and the use of procedural representations was resolved in the early 1970s with the development of [[logic programming]] and [[Prolog]], using [[SLD resolution]] to treat [[Horn clause]]s as goal-reduction procedures.
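The goal-reduction reading of Horn clauses can be shown with a minimal propositional sketch in Python (the knowledge base and atom names below are invented for illustration; actual Prolog additionally handles variables through unification):

<syntaxhighlight lang="python">
# Horn clauses as goal-reduction procedures (propositional sketch).
# KB maps each head to a list of alternative bodies; an empty body
# means the head holds unconditionally (a fact).
KB = {
    "mortal(socrates)": [["human(socrates)"]],
    "human(socrates)":  [[]],
}

def solve(goal):
    # A goal succeeds if some clause for it has all its subgoals succeed,
    # which is exactly backward chaining in the style of SLD resolution.
    return any(all(solve(sub) for sub in body)
               for body in KB.get(goal, []))

print(solve("mortal(socrates)"))  # True
</syntaxhighlight>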
Expert systems introduced the terminology still in use today, in which AI systems are divided into a ''knowledge base'', which includes facts and rules about a problem ___domain, and an ''inference engine'', which applies the knowledge in the [[knowledge base]] to answer questions and solve problems in the ___domain. In these early systems the facts in the knowledge base tended to be a fairly flat structure, essentially assertions about the values of variables used by the rules.<ref>{{cite book|last1=Hayes-Roth|first1=Frederick|title=Building Expert Systems|year=1983|publisher=Addison-Wesley|isbn=978-0-201-10686-2|first2=Donald|last2=Waterman|first3=Douglas|last3=Lenat|url=https://archive.org/details/buildingexpertsy00temd}}</ref>
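A minimal sketch of this division, assuming an invented toy ___domain: a flat set of facts, IF-THEN rules, and a forward-chaining inference engine that applies rules until nothing new can be concluded.

<syntaxhighlight lang="python">
# Flat fact base plus IF-THEN rules, run by forward chaining to a
# fixed point. All facts and rule contents are invented.
facts = {"fever", "rash"}
rules = [
    ({"fever", "rash"}, "suspect_measles"),   # IF fever AND rash THEN ...
    ({"suspect_measles"}, "order_lab_test"),
]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # fire the rule
                changed = True
    return facts

print(forward_chain(set(facts), rules))
</syntaxhighlight>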
Meanwhile, [[Marvin Minsky]] developed the concept of [[Frame (artificial intelligence)|frames]] in the mid-1970s.<ref>Marvin Minsky, [http://web.media.mit.edu/~minsky/papers/Frames/frames.html A Framework for Representing Knowledge] {{Webarchive|url=https://web.archive.org/web/20210107162402/http://web.media.mit.edu/~minsky/papers/Frames/frames.html |date=2021-01-07 }}, MIT-AI Laboratory Memo 306, June, 1974</ref> A frame is similar to an object class: it is an abstract description of a category describing things in the world, problems, and potential solutions. Frames were originally used in systems geared toward human interaction, e.g. [[natural language understanding|understanding natural language]] and the social settings in which various default expectations, such as ordering food in a restaurant, narrow the search space and allow the system to choose appropriate responses to dynamic situations.
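The following toy sketch illustrates the core frame ideas of named slots, default values, and inheritance; the restaurant-style frames and slot names are invented and greatly simplify Minsky's proposal:

<syntaxhighlight lang="python">
# A frame has named slots and an optional parent frame from which
# default slot values are inherited.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)  # fall back to inherited default
        return None

eating_out = Frame("eating-out", expected_sequence=["order", "eat", "pay"])
fast_food = Frame("fast-food", parent=eating_out, pay_before_eating=True)
print(fast_food.get("expected_sequence"))  # inherited default expectation
</syntaxhighlight>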
It was not long before the frame communities and the rule-based researchers realized that there was a synergy between their approaches. Frames were good for representing the real world, described as classes, subclasses, slots (data values) with various constraints on possible values. Rules were good for representing and utilizing complex logic such as the process to make a medical diagnosis. Integrated systems were developed that combined frames and rules. One of the most powerful and well-known was the 1983 [[Knowledge Engineering Environment]] (KEE) from [[IntelliCorp (software)|Intellicorp]]. KEE had a complete rule engine with [[forward chaining|forward]] and [[backward chaining]]. It also had a complete frame-based knowledge base with triggers, slots (data values), inheritance, and message passing. Although message passing originated in the object-oriented community rather than AI, it was quickly embraced by AI researchers as well in environments such as KEE and in the operating systems for Lisp machines from [[Symbolics]], [[Xerox]], and [[Texas Instruments]].<ref>{{cite journal|last=Mettrey|first=William|title=An Assessment of Tools for Building Large Knowledge-Based Systems|journal=AI Magazine|year=1987|volume=8|issue=4|url=http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/625|access-date=2013-12-24|archive-url=https://web.archive.org/web/20131110022104/http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/625|archive-date=2013-11-10|url-status=dead}}</ref>
The integration of frames, rules, and object-oriented programming was significantly driven by commercial ventures such as KEE and Symbolics spun off from various research projects. At the same time, there was another strain of research that was less commercially focused and was driven by mathematical logic and automated theorem proving.{{citation needed|date=February 2021}} One of the most influential languages in this research was the [[KL-ONE]] language of the mid-'80s. KL-ONE was a [[frame language]] with a rigorous semantics and formal definitions for concepts such as the [[Is-a|Is-A relation]].<ref>{{cite journal|last=Brachman|first=Ron|title=A Structural Paradigm for Representing Knowledge|journal=Bolt, Beranek, and Neumann Technical Report|year=1978|issue=3605|url=https://apps.dtic.mil/dtic/tr/fulltext/u2/a056524.pdf|archive-url=https://web.archive.org/web/20200430153426/https://apps.dtic.mil/dtic/tr/fulltext/u2/a056524.pdf|url-status=live|archive-date=April 30, 2020}}</ref> KL-ONE and languages that were influenced by it, such as [[LOOM (ontology)|Loom]], had an automated reasoning engine that was based on formal logic rather than on IF-THEN rules. This reasoner is called the classifier. A classifier can analyze a set of declarations and infer new assertions, for example, redefine a class to be a subclass or superclass of some other class that was not formally specified. In this way the classifier can function as an inference engine, deducing new facts from an existing knowledge base. The classifier can also provide consistency checking on a knowledge base (which in the case of KL-ONE languages is also referred to as an Ontology).<ref>{{cite journal|last=MacGregor|first=Robert|title=Using a description classifier to enhance knowledge representation|journal=IEEE Expert|date=June 1991|volume=6|issue=3|doi=10.1109/64.87683|pages=41–46|s2cid=29575443}}</ref>
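A drastically simplified sketch of what a classifier does: if concepts are reduced to sets of required properties, subsumption (the Is-A relation) can be inferred by set inclusion. Real description-logic classifiers are far more expressive; the concept names here are invented.

<syntaxhighlight lang="python">
# Each concept is defined by the properties its instances must have.
concepts = {
    "person": {"animate"},
    "parent": {"animate", "has_child"},
    "mother": {"animate", "has_child", "female"},
}

def subsumes(general, specific):
    # "general" subsumes "specific" if every requirement of the general
    # concept is also required by the specific one.
    return concepts[general] <= concepts[specific]

# The classifier derives the Is-A hierarchy instead of being told it:
for a in concepts:
    for b in concepts:
        if a != b and subsumes(a, b):
            print(f"{b} is-a {a}")
</syntaxhighlight>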
Another area of knowledge representation research was the problem of [[commonsense reasoning|common-sense reasoning]]. One of the first realizations from trying to make software that can function with human natural language was that humans regularly draw on an extensive foundation of knowledge about the real world that we simply take for granted but that is not at all obvious to an artificial agent, such as basic principles of common-sense physics, causality, and intentions. An example is the [[frame problem]]: in an event-driven logic, there need to be axioms stating that things maintain position from one moment to the next unless they are moved by some external force. In order to make a true artificial intelligence agent that can [[natural language user interface|converse with humans using natural language]] and can process basic statements and questions about the world, it is essential to represent this kind of knowledge.<ref>McCarthy, J., and Hayes, P. J. 1969. Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B., and Michie, D. (eds.), ''Machine Intelligence 4''. Edinburgh: Edinburgh University Press. pp. 463–502.</ref>
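Schematically, one common form of such a frame axiom in the [[situation calculus]] says that a fluent that holds in a situation continues to hold after an action unless the action affects it (the predicate names here are illustrative):

:<math>\mathit{holds}(f, \mathit{do}(a, s)) \leftarrow \mathit{holds}(f, s) \land \neg \mathit{affects}(a, f)</math>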
The starting point for knowledge representation is the ''knowledge representation hypothesis'' first formalized by [[Brian Cantwell Smith|Brian C. Smith]] in 1985:<ref>{{cite book|last=Smith|first=Brian C.|title=Readings in Knowledge Representation|year=1985|publisher=Morgan Kaufmann|isbn=978-0-934613-01-9|pages=[https://archive.org/details/readingsinknowle00brac/page/31 31–40]|editor=Ronald Brachman and Hector J. Levesque|chapter=Prologue to Reflections and Semantics in a Procedural Language|chapter-url=https://archive.org/details/readingsinknowle00brac/page/31}}</ref>
One of the most active areas of knowledge representation research is the [[Semantic Web]].{{citation needed|date=February 2021}} The Semantic Web seeks to add a layer of semantics (meaning) on top of the current Internet. Rather than indexing web sites and pages via keywords, the Semantic Web creates large [[ontology (information science)|ontologies]] of concepts. Searching for a concept will be more effective than traditional text-only searches. Frame languages and automatic classification play a big part in the vision for the future Semantic Web. Automatic classification gives developers technology to impose order on a constantly evolving network of knowledge. Defining ontologies that are static and incapable of evolving on the fly would be very limiting for Internet-based systems. The classifier technology provides the ability to deal with the dynamic environment of the Internet.
Recent projects funded primarily by the [[Defense Advanced Research Projects Agency]] (DARPA) have integrated frame languages and classifiers with markup languages based on XML. The [[Resource Description Framework]] (RDF) provides the basic capability to define classes, subclasses, and properties of objects. The [[Web Ontology Language]] (OWL) provides additional levels of semantics and enables integration with classification engines.<ref name="Berners-Lee 34–43">{{cite journal|last1=Berners-Lee |first1=Tim |first2=James |last2=Hendler |first3=Ora |last3=Lassila |title=The Semantic Web – A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities |journal=[[Scientific American]] |date=May 17, 2001 |url=http://www.cs.umd.edu/~golbeck/LBSC690/SemanticWeb.html |doi=10.1038/scientificamerican0501-34 |volume=284 |issue=5 |pages=34–43 |url-status=dead |archive-url=https://web.archive.org/web/20130424071228/http://www.cs.umd.edu/~golbeck/LBSC690/SemanticWeb.html |archive-date=April 24, 2013 |url-access=subscription }}</ref><ref name="w3_org">{{cite web|url=http://www.w3.org/2001/sw/BestPractices/SE/ODSD/|title=A Semantic Web Primer for Object-Oriented Software Developers|last1=Knublauch|first1=Holger|last2=Oberle|first2=Daniel|last3=Tetlow|first3=Phil|last4=Wallace|first4=Evan|publisher=[[W3C]]|date=2006-03-09|access-date=2008-07-30|archive-date=2018-01-06|archive-url=https://web.archive.org/web/20180106172902/http://www.w3.org/2001/sw/BestPractices/SE/ODSD/|url-status=live}}</ref>
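As a small illustration, classes, subclasses, and instances can be asserted as RDF triples using the Python rdflib library (the example.org vocabulary is invented; rdflib stores and serializes triples, while deriving OWL entailments requires a separate classification engine):

<syntaxhighlight lang="python">
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

g.add((EX.Dog, RDFS.subClassOf, EX.Animal))  # class hierarchy
g.add((EX.fido, RDF.type, EX.Dog))           # an instance of a class

print(g.serialize(format="turtle"))          # human-readable RDF
</syntaxhighlight>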
== Overview ==
* Primitives. What is the underlying framework used to represent knowledge? [[Semantic network]]s were one of the first knowledge representation primitives, along with data structures and algorithms for general fast search; in this area there is a strong overlap with research in data structures and algorithms in computer science. In early systems, the Lisp programming language, which was modeled after the [[lambda calculus]], was often used as a form of functional knowledge representation. Frames and rules were the next kind of primitive. Frame languages had various mechanisms for expressing and enforcing constraints on frame data. All data in frames are stored in slots. Slots are analogous to relations in entity-relation modeling and to object properties in object-oriented modeling. Another technique for primitives is to define languages that are modeled after [[First Order Logic]] (FOL). The best-known example is [[Prolog]], but there are also many special-purpose theorem-proving environments. These environments can validate logical models and can deduce new theorems from existing models. Essentially they automate the process a logician would go through in analyzing a model. Theorem-proving technology had some specific practical applications in the area of software engineering; for example, it is possible to prove that a software program rigidly adheres to a formal logical specification.
* Meta-representation. This is also known as the issue of [[Reflective programming|reflection]] in computer science. It refers to the ability of a formalism to have access to information about its own state. An example is the meta-object protocol in [[Smalltalk]] and [[CLOS]] that gives developers [[Execution (computing)#runtime|runtime]] access to the class objects and enables them to dynamically redefine the structure of the knowledge base even at runtime. Meta-representation means the knowledge representation language is itself expressed in that language. For example, in most frame-based environments all frames would be instances of a frame class. That class object can be inspected at runtime, so that the object can understand and even change its internal structure or the structure of other parts of the model. In rule-based environments, the rules were also usually instances of rule classes. Part of the meta-protocol for rules was the set of meta-rules that prioritized rule firing.
* [[Completeness (logic)|Incompleteness]]. Traditional logic requires additional axioms and constraints to deal with the real world as opposed to the world of mathematics. Also, it is often useful to associate degrees of confidence with a statement, i.e., not simply say "Socrates is Human" but rather "Socrates is Human with confidence 50%". The ability to associate certainty factors with rules and conclusions was one of the early innovations from [[expert system]]s research and migrated to some commercial tools (a minimal sketch of certainty-factor combination appears after this list). Later research in this area is known as [[fuzzy logic]].<ref>{{cite journal|last=Bih|first=Joseph|title=Paradigm Shift: An Introduction to Fuzzy Logic|journal=IEEE Potentials|volume=25|pages=6–21|year=2006|issue=1 |url=http://www.cse.unr.edu/~bebis/CS365/Papers/FuzzyLogic.pdf|access-date=24 December 2013|doi=10.1109/MP.2006.1635021|bibcode=2006IPot...25a...6B |s2cid=15451765|archive-date=12 June 2014|archive-url=https://web.archive.org/web/20140612022317/http://www.cse.unr.edu/~bebis/CS365/Papers/FuzzyLogic.pdf|url-status=live}}</ref>
* Definitions and [[universals]] vs. facts and defaults. Universals are general statements about the world such as "All humans are mortal". Facts are specific examples of universals such as "Socrates is a human and therefore mortal". In logical terms, definitions and universals are about [[universal quantification]], while facts and defaults are about [[existential quantification]]s. All forms of knowledge representation must deal with this aspect and most do so with some variant of set theory, modeling universals as sets and subsets and definitions as elements in those sets.
* [[Non-monotonic logic|Non-monotonic reasoning]]. Non-monotonic reasoning allows various kinds of hypothetical reasoning. The system associates asserted facts with the rules and facts used to justify them, and as those justifications change it updates the dependent knowledge as well. In rule-based systems this capability is known as a [[truth maintenance system]].<ref>{{cite journal|last=Zlatarva|first=Nellie|title=Truth Maintenance Systems and their Application for Verifying Expert System Knowledge Bases|journal=Artificial Intelligence Review|year=1992|volume=6|pages=67–110|doi=10.1007/bf00155580|s2cid=24696160}}</ref>
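Referring back to the incompleteness item above, the following minimal sketch shows MYCIN-style certainty factors, where two positive pieces of evidence for the same conclusion combine as cf = cf<sub>1</sub> + cf<sub>2</sub>(1 − cf<sub>1</sub>); the rule contents are invented:

<syntaxhighlight lang="python">
def combine(cf1, cf2):
    # MYCIN combination for two positive certainty factors: the second
    # piece of evidence closes part of the remaining gap to certainty.
    return cf1 + cf2 * (1 - cf1)

cf = 0.0
for rule_cf in (0.5, 0.4):   # e.g. two rules supporting one conclusion
    cf = combine(cf, rule_cf)
print(round(cf, 2))          # 0.7
</syntaxhighlight>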
== See also ==
* [[DATR]], a language for lexical knowledge representation
* [[FO(.)]], a KR language based on [[first-order logic]]
* [[Knowledge graph]]
* [[Knowledge management]]
* [[Logic programming]]
* [[Logico-linguistic modeling]]
== References ==
{{Clear}}
{{Knowledge representation and reasoning}}
{{computer science}}
{{Authority control}}