Functional decomposition

In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the sheer complexity of the systems under study. This complexity manifests itself as "noise," a designation for all the unwanted and untraceable influences on our observations.
 
However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the [[joint distribution]] on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution.
 
As an example, [[Bayesian network]] methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing.{{sfnp|Simon|1963}} Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms.
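The storage savings described above can be sketched numerically. The following is an illustrative example, not drawn from the article: it uses the classic "sprinkler" network (Cloudy, Sprinkler, Rain, WetGrass) with made-up probability values to show how a joint distribution over four variables is represented as a product of small conditional tables along its causal structure.

```python
# Illustrative sketch: factoring a joint distribution along causal links.
# Network: Cloudy -> Sprinkler, Cloudy -> Rain, {Sprinkler, Rain} -> WetGrass.
# All probability values below are hypothetical, chosen for the example.
from itertools import product

# Conditional probability tables: P(var = True | parents).
p_cloudy = 0.5
p_sprinkler = {True: 0.1, False: 0.5}               # keyed by Cloudy
p_rain = {True: 0.8, False: 0.2}                    # keyed by Cloudy
p_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}   # keyed by (Sprinkler, Rain)

def bernoulli(p, value):
    """P(X = value) for a Bernoulli variable with P(X = True) = p."""
    return p if value else 1.0 - p

def joint(c, s, r, w):
    """P(C, S, R, W) factored as P(C) * P(S|C) * P(R|C) * P(W|S,R)."""
    return (bernoulli(p_cloudy, c)
            * bernoulli(p_sprinkler[c], s)
            * bernoulli(p_rain[c], r)
            * bernoulli(p_wet[(s, r)], w))

# The factored form needs only 1 + 2 + 2 + 4 = 9 parameters, versus the
# 2**4 - 1 = 15 a full joint table would need, yet it still sums to 1.
total = sum(joint(*assignment) for assignment in product([True, False], repeat=4))
print(total)  # → 1.0 (up to floating point)
```

The parameter count scales with the size of the largest cluster of directly interacting variables rather than with the full variable set, which is the source of the storage and inference efficiency mentioned above.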
Functional decomposition is used in the analysis of many [[signal processing]] systems, such as [[LTI system theory|LTI systems]]. The input signal to an LTI system can be expressed as a function, <math>f(t)</math>. Then <math>f(t)</math> can be decomposed into a linear combination of other functions, called component signals:
::<math> f(t) = a_1 \cdot g_1(t) + a_2 \cdot g_2(t) + a_3 \cdot g_3(t) + \dots + a_n \cdot g_n(t) </math>
Here, <math> \{g_1(t), g_2(t), g_3(t), \dots , g_n(t)\} </math> are the component signals, and <math> \{a_1, a_2, a_3, \dots , a_n\} </math> are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let <math>T\{\}</math> represent the effect of the system, then the output signal is <math>T\{f(t)\}</math>, which can be expressed as:
::<math> T\{f(t)\} = T\{ a_1 \cdot g_1(t) + a_2 \cdot g_2(t) + a_3 \cdot g_3(t) + \dots + a_n \cdot g_n(t)\}</math>
::<math> = a_1 \cdot T\{g_1(t)\} + a_2 \cdot T\{g_2(t)\} + a_3 \cdot T\{g_3(t)\} + \dots + a_n \cdot T\{g_n(t)\}</math>
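The linearity property used in the last step can be checked numerically. The sketch below is illustrative rather than taken from the article: the system <math>T</math> is an assumed 3-tap moving-average filter (a simple linear, time-invariant system), and the component signals and coefficients are arbitrary choices.

```python
# Numerical check of superposition for an LTI system T:
# T{a1*g1 + a2*g2} should equal a1*T{g1} + a2*T{g2}.
# The filter, signals, and coefficients below are illustrative assumptions.

def T(signal):
    """A causal 3-tap moving average: output[n] = (x[n-2] + x[n-1] + x[n]) / 3."""
    padded = [0.0, 0.0] + list(signal)  # zero-pad so the first samples are defined
    return [(padded[n] + padded[n + 1] + padded[n + 2]) / 3.0
            for n in range(len(signal))]

n_samples = 8
g1 = [float(n) for n in range(n_samples)]        # ramp component signal
g2 = [(-1.0) ** n for n in range(n_samples)]     # alternating component signal
a1, a2 = 2.0, -0.5                               # constant weights

f = [a1 * x + a2 * y for x, y in zip(g1, g2)]    # f = a1*g1 + a2*g2

lhs = T(f)                                                # T{f}
rhs = [a1 * x + a2 * y for x, y in zip(T(g1), T(g2))]     # a1*T{g1} + a2*T{g2}

print(all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs)))  # → True
```

Because the two sides agree for any choice of components and weights, the response of an LTI system to a complicated input can be assembled from its responses to simpler component signals.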
{{refbegin}}
 
* {{Citation |last=Fodor |first=Jerry |author-link=Jerry Fodor |title=The Modularity of Mind |place=Cambridge, Massachusetts |publisher=MIT Press |year=1983}}

* {{Citation |last=Koestler |first=Arthur |title=The Ghost in the Machine |place=New York |publisher=Macmillan |year=1967}}

* {{Citation |last=Koestler |first=Arthur |editor-last=Gray |editor-first=William |editor2-last=Rizzo |editor2-first=Nicholas D. |contribution=The tree and the candle |title=Unity Through Diversity: A Festschrift for Ludwig von Bertalanffy |year=1973 |pages=287–314 |place=New York |publisher=Gordon and Breach}}

* {{Citation |last=Leyton |first=Michael |title=Symmetry, Causality, Mind |place=Cambridge, Massachusetts |publisher=MIT Press |year=1992}}

* {{Citation |last=McGinn |first=Colin |title=The Problem of Philosophy |journal=Philosophical Studies |volume=76 |issue=2–3 |pages=133–156 |year=1994 |doi=10.1007/BF00989821 |s2cid=170454227}}

* {{Citation |last=Resnikoff |first=Howard L. |title=The Illusion of Reality |place=New York |publisher=Springer |year=1989}}

* {{Citation |last1=Simon |first1=Herbert A. |year=1963 |chapter=Causal Ordering and Identifiability |editor=Ando, Albert |editor2=Fisher, Franklin M. |editor3=Simon, Herbert A. |title=Essays on the Structure of Social Science Models |publisher=MIT Press |place=[[Cambridge, Massachusetts|Cambridge]], Massachusetts |pages=5–31}}

* {{Citation |last1=Simon |first1=Herbert A. |year=1973 |chapter=The organization of complex systems |editor=Pattee, Howard H. |title=Hierarchy Theory: The Challenge of Complex Systems |publisher=George Braziller |place=[[New York City|New York]] |pages=3–27}}

* {{Citation |last1=Simon |first1=Herbert A. |year=1996 |chapter=The architecture of complexity: Hierarchic systems |title=The Sciences of the Artificial |publisher=MIT Press |place=[[Cambridge, Massachusetts|Cambridge]], Massachusetts |pages=183–216}}

* {{Citation |last1=Tonge |first1=Fred M. |year=1969 |chapter=Hierarchical aspects of computer languages |editor=Whyte, Lancelot Law |editor2=Wilson, Albert G. |editor3=Wilson, Donna |title=Hierarchical Structures |publisher=American Elsevier |place=[[New York City|New York]] |pages=233–251}}
 
{{refend}}