{{Short description|Expression of a function as the composition of two functions}}
<!-- Please do not remove or change this AfD message until the discussion has been closed. -->
{{Article for deletion/dated|page=Functional decomposition|timestamp=20230528164829|year=2023|month=May|day=28|substed=yes|help=off}}
<!-- Once discussion is closed, please place on talk page: {{Old AfD multi|page=Functional decomposition|date=28 May 2023|result='''keep'''}} -->
<!-- End of AfD message, feel free to edit beyond this point -->
{{Use dmy dates|date=May 2019|cs1-dates=y}}
{{multiple issues|
{{essay|date=May 2023}}
{{original research|date=May 2023}}
{{more footnotes needed|date=September 2020}}
}}
{{Format footnotes|date=May 2023|reason=Parenthetical referencing has been [[WP:PARREF|deprecated]]; convert to shortened footnotes.}}
In [[engineering]], '''functional decomposition''' is the process of resolving a [[Function (mathematics)|functional]] relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by [[function composition]].
 
This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of ''modularity'' (i.e., independence or non-interaction).
 
[[Interaction (statistics)|Interactions]] between the components (situations in which one causal variable depends on the state of a second causal variable) are critical to the function of the collection. Not all interactions can be directly [[Observability|observed]] or measured; some must be deduced through repeated observation, synthesis, validation, and verification of composite behavior.
 
==Motivation for decomposition==
[[Image:West-side-highway traffic.png|thumb|400px|Causal influences on West Side Highway traffic. Weather and GW Bridge traffic ''screen off'' other influences]]
Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved because each variable depends only on a subset of the other variables. Thus, variable <math>x_1</math> may depend directly only on variable <math>x_2</math>, rather than on the ''entire set'' of variables; we then say that <math>x_2</math> ''screens off'' <math>x_1</math> from the rest of the world. Consider the particular case of northbound traffic on the [[West Side Highway]]. Suppose this variable (<math>{x_1}</math>) takes one of three possible values: {"moving slow", "moving deadly slow", "not moving at all"}. Suppose further that <math>{x_1}</math> depends on two other variables: "weather", with values {"sun", "rain", "snow"}, and "[[GW Bridge]] traffic", with values {"10mph", "5mph", "1mph"}. While many secondary variables affect the weather variable (e.g., a low-pressure system over Canada, a [[Butterfly Effect|butterfly flapping]] in Japan) and the bridge-traffic variable (e.g., an accident on [[Interstate 95 in New York|I-95]], a presidential motorcade), none of these secondary variables is directly relevant to the West Side Highway traffic. All that is needed (hypothetically) to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables ''screen off'' West Side Highway traffic from all other potential influences. That is, all other influences act ''through'' them.
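This screening-off relation can be sketched in a few lines of code. Everything below (the variable names, the made-up traffic rules) is illustrative, following the example above: upstream causes influence the highway only through its two direct parents.

```python
# Hypothetical toy model of screening off: West Side Highway traffic
# depends directly only on weather and GW Bridge traffic; upstream causes
# (an I-95 accident, a low-pressure system) act only through those two.

def weather(low_pressure):
    # Upstream cause influences the weather variable...
    return "rain" if low_pressure else "sun"

def bridge_traffic(i95_accident):
    # ...and another upstream cause influences the bridge speed.
    return "1mph" if i95_accident else "10mph"

def west_side_traffic(weather_state, bridge_state):
    # The prediction reads ONLY the two direct parents; no upstream
    # variable appears here.
    if bridge_state == "1mph" or weather_state == "snow":
        return "not moving at all"
    if weather_state == "rain":
        return "moving deadly slow"
    return "moving slow"

w = weather(low_pressure=True)
b = bridge_traffic(i95_accident=False)
print(west_side_traffic(w, b))  # moving deadly slow
```

Any two "worlds" that agree on the values of weather and bridge traffic yield the same prediction, no matter how their upstream variables differ.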
 
Outside of purely mathematical considerations, perhaps the greatest value of functional decomposition is the insight it provides into the structure of the world. When a functional decomposition can be achieved, this provides ontological information about what structures actually exist in the world, and how they can be predicted and manipulated. For example, in the illustration above, if it is learned that <math>{x_1}</math> depends directly only on <math>{x_2}</math>, this means that for purposes of prediction of <math>{x_1}</math>, it suffices to know only <math>{x_2}</math>. Moreover, interventions to influence <math>{x_1}</math> can be taken directly on <math>{x_2}</math>, and nothing additional can be gained by intervening on variables <math>\{x_3,x_4,x_5\}</math>, since these only act through <math>{x_2}</math> in any case.
 
==Applications==
Practical applications of functional decomposition are found in [[Bayesian networks]], [[structural equation modeling]], [[linear systems]], and [[database systems]].
 
=== Knowledge representation ===
Processes related to functional decomposition are prevalent throughout the fields of [[knowledge representation]] and [[machine learning]]. Hierarchical model induction techniques such as [[Logic circuit minimization]], [[decision trees]], [[grammatical inference]], [[hierarchical clustering]], and [[quadtree decomposition]] are all examples of function decomposition. A review of other applications and function decomposition can be found in {{Harvtxt|Zupan|Bohanec|Bratko|Demšar|1997}}, which also presents methods based on [[information theory]] and [[graph theory]].
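The basic step behind such decomposition-based learners can be sketched with a brute-force test (an illustration of the idea, not the algorithm of the cited paper): check whether a Boolean function <math>f(x_1, x_2, x_3)</math> can be rewritten as <math>g(h(x_1, x_2), x_3)</math> for some intermediate Boolean function <math>h</math>.

```python
from itertools import product

def decomposes(f):
    # Try every Boolean h on (x1, x2). f decomposes as g(h(x1,x2), x3)
    # iff inputs that share (h(x1,x2), x3) also share f's value.
    for h_bits in product([0, 1], repeat=4):
        h = lambda a, b, bits=h_bits: bits[2 * a + b]
        table = {}          # (h(a,b), c) -> required value of f
        ok = True
        for a, b, c in product([0, 1], repeat=3):
            key = (h(a, b), c)
            if table.setdefault(key, f(a, b, c)) != f(a, b, c):
                ok = False
                break
        if ok:
            return True
    return False

xor_and  = lambda a, b, c: (a ^ b) & c       # decomposes: h(a,b) = a xor b
majority = lambda a, b, c: (a + b + c) >= 2  # needs a and b individually
print(decomposes(xor_and), decomposes(majority))  # True False
```

The majority function fails because its "column multiplicity" over <math>(x_1, x_2)</math> is three, more than a single intermediate bit can carry.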
 
Many [[statistical inference]] methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold ''approximately''. Among such models are [[mixture models]] and the recently popular methods referred to as "causal decompositions" or [[Bayesian networks]].
 
=== Database theory ===
See [[database normalization]].
 
=== Machine learning ===
In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations.
 
However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the [[joint distribution]] on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution.
 
As an example, [[Bayesian network]] methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams".{{sfnp|Simon|1963}} The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms.
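The storage saving from such a factorization can be illustrated with a toy causal chain A → B → C (the probabilities below are made up): a full joint over ''n'' binary variables needs <math>2^n</math> entries, while the factored form needs only small conditional tables, one per variable.

```python
from itertools import product

# Factors of a chain A -> B -> C: P(a), P(b|a), P(c|b).
# For n binary variables in a chain this is O(n) numbers, not 2**n.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    # Any entry of the full joint is recovered from the factors:
    # P(a, b, c) = P(a) * P(b|a) * P(c|b)
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3))
print(abs(total - 1.0) < 1e-9)  # True: the factors define a valid joint
```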
 
=== Software architecture ===
{{main|Functional decomposition (computer science)}}
{{main|Structured analysis}}
{{main|Structure chart}}
{{weasel section|date=September 2019}}
Functional decomposition is a design method intended to produce a non-implementation, architectural description of a computer program. Rather than conjecturing objects and adding methods to them ([[Object-oriented programming|OOP]]), with each object intended to capture some service of the program, the software architect first establishes a series of functions and types that accomplish the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives modules from this activity.
 
For example, the design of the editor [[Emacs]] can initially be thought about in terms of functions:
 
<math>
e\, \equiv \text{state of the Emacs editor and running operating system}
</math>
<br>
<math>
e'\, \equiv e\text{ with some component/part of its state changed}
</math>
 
<math>
f: (e, \text{Lisp expression}) \rightarrow e'
</math>
 
And a possible '''function decomposition''' of ''f'':
 
<math>
fromExpr: \text{Lisp expression} \rightarrow
\begin{cases}
object, & \text{if success} \\
error, & \text{if failure}
\end{cases}
</math>
 
<math>
evaluate: (object, e) \rightarrow e'
</math>
 
<math>
print: (error, e) \rightarrow e'
</math>
 
This leads one to a plausible module, service, or object: an interpreter (containing the function ''fromExpr''). Functional decomposition arguably yields insights about re-usability: if, during the course of analysis, two functions produce the same type, it is likely that a common function/[[cross-cutting concern]] resides in both. By contrast, in [[Object-oriented programming|OOP]], it is common practice to conjecture modules before considering such a decomposition, which arguably results in costly [[refactoring]] later; functional decomposition mitigates that risk to some extent. Further, what arguably separates functional decomposition from other design methods is that it provides a concise, high-level medium of architectural discourse that is end-to-end, revealing flaws in upstream [[requirements]] and exposing more design decisions in advance. Lastly, functional decomposition can help prioritize development: if the decomposition is correct, the most re-usable and cost-determining parts of the program are identified far earlier in the development cycle.
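The decomposition above can be sketched in executable form. Everything here (the toy parser, the dictionary "editor state") is illustrative, not Emacs's actual implementation: ''f'' is recomposed from the parts ''fromExpr'', ''evaluate'', and an error printer.

```python
# Toy sketch of f = (fromExpr, evaluate, print) from the decomposition
# above. The "editor state" e is just a dictionary here.

def from_expr(lisp_expression):
    # fromExpr: Lisp expression -> object on success, error on failure.
    try:
        return ("ok", int(lisp_expression))   # toy parser: integers only
    except ValueError:
        return ("error", f"unreadable: {lisp_expression!r}")

def evaluate(obj, e):
    # evaluate: (object, e) -> e' (state with one part changed)
    return {**e, "last-value": obj}

def print_error(err, e):
    # print: (error, e) -> e' (error shown in the echo area)
    return {**e, "echo-area": err}

def f(e, lisp_expression):
    # The original function, recomposed from its parts.
    tag, value = from_expr(lisp_expression)
    return evaluate(value, e) if tag == "ok" else print_error(value, e)

state = {"echo-area": "", "last-value": None}
print(f(state, "42"))  # {'echo-area': '', 'last-value': 42}
```

Note how the shared result type of ''evaluate'' and the error printer (a new state ''e&prime;'') is exactly what lets the two branches be recombined into one ''f''.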
 
===Signal processing===
Functional decomposition is used in the analysis of many [[signal processing]] systems, such as [[LTI system theory|LTI systems]]. The input signal to an LTI system can be expressed as a function, <math>f(t)</math>. Then <math>f(t)</math> can be decomposed into a linear combination of other functions, called component signals:
::<math> f(t) = a_1 \cdot g_1(t) + a_2 \cdot g_2(t) + a_3 \cdot g_3(t) + \dots + a_n \cdot g_n(t) </math>
In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the [[Fourier series]] and the [[Fourier transform]].
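This superposition property can be checked numerically. The sketch below uses a 3-tap moving average as a stand-in LTI system <math>T</math> and verifies that <math>T(a_1 g_1 + a_2 g_2) = a_1 T(g_1) + a_2 T(g_2)</math>; the signals and coefficients are arbitrary.

```python
def T(signal):
    # Causal 3-tap moving average: linear and time-invariant.
    padded = [0.0, 0.0] + list(signal)
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

g1 = [1.0, 0.0, 0.0, 0.0]   # component signals
g2 = [0.0, 1.0, 2.0, 3.0]
a1, a2 = 2.0, -1.0

f_sig = [a1 * u + a2 * v for u, v in zip(g1, g2)]
lhs = T(f_sig)                                      # T applied to the sum
rhs = [a1 * u + a2 * v for u, v in zip(T(g1), T(g2))]  # sum of T on parts
print(all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs)))  # True
```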
 
=== Systems engineering ===
{{main|Functional flow block diagram}}
Functional decomposition in [[systems engineering]] refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher-level system functions.<ref>{{cite report |url=http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-885j-aircraft-systems-engineering-fall-2005/readings/sefguide_01_01.pdf |title=Systems Engineering Fundamentals |publisher=Defense Acquisition University Press |place=Fort Belvoir, VA |date=January 2001 |page=45}}</ref> The basic idea is to try to divide a system in such a way that each block of a [[Functional flow block diagram|block diagram]] can be described without an "and" or "or" in the description.
 
This exercise forces each part of the system to have a pure [[role|function]]. When a system is composed of pure functions, those functions can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic, which makes it easier to replace a pure function with a related, similar one.
==See also==
*[[Currying]]
*[[Database normalization]]
*[[Function composition (computer science)]]
*[[Inductive inference]]
*[[Knowledge representation]]
 
==Further reading==
* {{cite conference
| last1=Zupan
| first1=Blaž
| last2=Bohanec
| first2=Marko
| last3=Bratko
| first3=Ivan
| last4=Demšar
| first4=Janez
| date=July 1997
| title=Machine learning by function decomposition
| url=https://dl.acm.org/doi/10.5555/645526.657131
| conference=ICML '97: July 8–12, 1997
| conference-url=https://dl.acm.org/doi/proceedings/10.5555/645526
| editor=Douglas H. Fisher
| book-title=Proceedings of the Fourteenth International Conference on Machine Learning
| publisher=Morgan Kaufmann Publishers |place=San Francisco
| isbn=978-1-55860-486-5
| pages=421–429
}} A review of other applications and function decomposition. Also presents methods based on [[information theory]] and [[graph theory]].
 
==Notes==
{{reflist}}

==References==
{{refbegin}}
 
* {{Citation
| last =Fodor
| first =Jerry
| author-link = Jerry Fodor
| title =The Modularity of Mind
| place=Cambridge, Massachusetts
| publisher =MIT Press
| year =1983}}
 
* {{Citation
| last =Koestler
| first =Arthur
| title =The Ghost in the Machine
| place=New York
| publisher =Macmillan
| year =1967}}
 
* {{Citation
| first =Arthur
| last =Koestler
| editor-last =Gray
| editor-first =William
| editor2-last =Rizzo
| editor2-first =Nicholas D.
| contribution =The tree and the candle
| title =Unity Through Diversity: A Festschrift for Ludwig von Bertalanffy
| year =1973
| pages =287–314
| place =New York
| publisher =Gordon and Breach
}}
 
* {{Citation
| last =Leyton
| first =Michael
| title =Symmetry, Causality, Mind
| place=Cambridge, Massachusetts
| publisher =MIT Press
| year =1992}}
 
*{{Citation
| last =McGinn
| first =Colin
| title = The Problem of Philosophy
| journal = Philosophical Studies
| volume = 76
| issue = 2–3
| pages = 133–156
| year =1994
| doi =10.1007/BF00989821
| s2cid =170454227
}}
 
* {{Citation
| last =Resnikoff
| first =Howard L.
| title =The Illusion of Reality
| place=New York
| publisher =Springer
| year =1989}}
 
*{{Citation |last1=Simon |first1=Herbert A. |year=1963 |chapter=Causal Ordering and Identifiability |editor=Ando, Albert |editor2=Fisher, Franklin M. |editor3=Simon, Herbert A. |title=Essays on the Structure of Social Science Models |publisher=MIT Press |place=[[Cambridge, Massachusetts|Cambridge]], Massachusetts |pages=5–31}}.
*{{Citation |last1=Simon |first1=Herbert A. |year=1973 |chapter=The organization of complex systems |editor=Pattee, Howard H. |title=Hierarchy Theory: The Challenge of Complex Systems |publisher=George Braziller |place=[[New York City|New York]] |pages=3–27}}.
*{{Citation |last1=Simon |first1=Herbert A. |year=1996 |chapter=The architecture of complexity: Hierarchic systems |title=The Sciences of the Artificial |publisher=MIT Press |place=[[Cambridge, Massachusetts|Cambridge]], Massachusetts |pages=183–216}}.
*{{Citation |last1=Tonge |first1=Fred M. |year=1969 |chapter=Hierarchical aspects of computer languages |editor=Whyte, Lancelot Law |editor2=Wilson, Albert G. |editor3=Wilson, Donna |title=Hierarchical Structures |publisher=American Elsevier |place=[[New York City|New York]] |pages=233–251}}.
 
* {{Citation
| last1=Zupan
| first1=Blaž
| last2=Bohanec
| first2=Marko
| last3=Bratko
| first3=Ivan
| last4=Demšar
| first4=Janez
| year=1997
| contribution=Machine learning by function decomposition
| contribution-url=http://citeseer.ist.psu.edu/zupan97machine.html
| title=Proc. 14th International Conference on Machine Learning
| publisher=Morgan Kaufmann
| pages=421–429
}}
{{refend}}