{{Short description|Type of mathematical expression}}
[[File:Polynomialdeg3.svg|The [[graph of a function|graph]] of a polynomial function of degree 3|thumb|upright]]
{{about|the expressions themselves|the related topic in algebra|Polynomial ring}}
In [[mathematics]], a '''polynomial''' is a [[Expression (mathematics)|mathematical expression]] consisting of [[indeterminate (variable)|indeterminates]] (also called [[variable (mathematics)|variables]]) and [[coefficient]]s, that involves only the operations of [[addition]], [[subtraction]], [[multiplication]] and [[exponentiation]] to [[nonnegative integer]] powers, and has a finite number of terms.<ref>{{harvtxt|Beauregard|Fraleigh|1973|p=153}}</ref><ref>{{harvtxt|Burden|Faires|1993|p=96}}</ref><ref>{{harvtxt|Fraleigh|1976|p=245}}</ref><ref>{{harvtxt|McCoy|1968|p=190}}</ref><ref>{{harvtxt|Moise|1967|p=82}}</ref> An example of a polynomial of a single indeterminate <math>x</math> is <math>x^2 - 4x + 7</math>. An example with three indeterminates is <math>x^3 + 2xyz^2 - yz + 1</math>.
 
Polynomials appear in many areas of mathematics and science. For example, they are used to form [[polynomial equation]]s, which encode a wide range of problems, from elementary [[word problem (mathematics education)|word problems]] to complicated scientific problems; they are used to define '''polynomial functions''', which appear in settings ranging from basic [[chemistry]] and [[physics]] to [[economics]] and [[social science]]; and they are used in [[calculus]] and [[numerical analysis]] to approximate other functions. In advanced mathematics, polynomials are used to construct [[polynomial ring]]s and [[algebraic variety|algebraic varieties]], which are central concepts in [[algebra]] and [[algebraic geometry]].
 
== Etymology ==
 
The word ''polynomial'' [[hybrid word|joins two diverse roots]]: the Greek ''poly'', meaning "many", and the Latin ''nomen'', or "name". It was derived from the term ''[[binomial (polynomial)|binomial]]'' by replacing the Latin root ''bi-'' with the Greek ''poly-''. That is, it means a sum of many terms (many [[monomial]]s). The word ''polynomial'' was first used in the 17th century.<ref>See "polynomial" and "binomial", ''Compact Oxford English Dictionary''</ref>
{{anchor|Polynomial notation}}
 
== Notation and terminology ==
 
The <math>x</math> occurring in a polynomial is commonly called a ''variable'' or an ''indeterminate''. When the polynomial is considered as an expression, <math>x</math> is a fixed symbol which does not have any value (its value is "indeterminate"). However, when one considers the [[function (mathematics)|function]] defined by the polynomial, then <math>x</math> represents the argument of the function, and is therefore called a "variable". Many authors use these two words interchangeably.
 
A polynomial <math>P</math> in the indeterminate <math>x</math> is commonly denoted either as <math>P</math> or as <math>P(x)</math>. Formally, the name of the polynomial is <math>P</math>, not <math>P(x)</math>, but the use of the [[functional notation]] <math>P(x)</math> dates from a time when the distinction between a polynomial and the associated function was unclear. Moreover, the functional notation is often useful for specifying, in a single phrase, a polynomial and its indeterminate. For example, "let <math>P(x)</math> be a polynomial" is a shorthand for "let <math>P</math> be a polynomial in the indeterminate <math>x</math>". On the other hand, when it is not necessary to emphasize the name of the indeterminate, many formulas are much simpler and much easier to read if the name(s) of the indeterminate(s) do not appear at each occurrence of the polynomial.
 
The ambiguity of having two notations for a single mathematical object may be formally resolved by considering the general meaning of the functional notation for polynomials.
If <math>a</math> denotes a number, a variable, another polynomial, or, more generally, any expression, then <math>P(a)</math> denotes, by convention, the result of substituting <math>a</math> for <math>x</math> in <math>P</math>. Thus, the polynomial <math>P</math> defines the function
<math display="block">a\mapsto P(a),</math>
which is the ''polynomial function'' associated to <math>P</math>.
Frequently, when using this notation, one supposes that <math>a</math> is a number. However, one may use it over any ___domain where addition and multiplication are defined (that is, any [[ring (mathematics)|ring]]). In particular, if <math>a</math> is a polynomial then <math>P(a)</math> is also a polynomial.
More specifically, when <math>a</math> is the indeterminate <math>x</math>, then the [[Image (mathematics)|image]] of <math>x</math> by this function is the polynomial <math>P</math> itself (substituting <math>x</math> for <math>x</math> does not change anything). In other words,
<math display="block">P(x)=P,</math>
which justifies formally the existence of two notations for the same polynomial.
 
== Definition ==
A ''polynomial expression'' is an [[expression (mathematics)|expression]] that can be built from [[constant (mathematics)|constants]] and symbols called ''variables'' or ''indeterminates'' by means of [[addition]], [[multiplication]] and [[exponentiation]] to a [[non-negative integer]] power. The constants are generally [[number]]s, but may be any expressions that do not involve the indeterminates and represent [[mathematical object]]s that can be added and multiplied. Two polynomial expressions are considered to define the same ''polynomial'' if they may be transformed, one into the other, by applying the usual properties of [[commutative property|commutativity]], [[associative property|associativity]] and [[distributive property|distributivity]] of addition and multiplication. For example, <math>(x-1)(x-2)</math> and <math>x^2-3x+2</math> are two polynomial expressions that represent the same polynomial; so, one has the [[equality (mathematics)|equality]] <math>(x-1)(x-2)=x^2-3x+2</math>.
 
A polynomial in a single indeterminate {{math|''x''}} can always be written (or rewritten) in the form
<math display="block">a_n x^n + a_{n-1}x^{n-1} + \dotsb + a_2 x^2 + a_1 x + a_0,</math>
where <math>a_0, \ldots, a_n</math> are constants that are called the ''coefficients'' of the polynomial, and <math>x</math> is the indeterminate.<ref name=":1">{{Cite web|last=Weisstein|first=Eric W.|title=Polynomial|url=https://mathworld.wolfram.com/Polynomial.html|access-date=2020-08-28|website=mathworld.wolfram.com|language=en}}</ref> The word "indeterminate" means that <math>x</math> represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a [[function (mathematics)|function]], called a ''polynomial function''.
 
This can be expressed more concisely by using [[summation#Capital-sigma notation|summation notation]]:
<math display="block">\sum_{k=0}^n a_k x^k</math>
That is, a polynomial can either be zero or can be written as the sum of a finite number of non-zero [[Summand|terms]]. Each term consists of the product of a number{{snd}} called the [[coefficient]] of the term{{efn|The coefficient of a term may be any number from a specified set. If that set is the set of real numbers, we speak of "polynomials over the reals". Other common kinds of polynomials are polynomials with integer coefficients, polynomials with complex coefficients, and polynomials with coefficients that are integers [[modular arithmetic|modulo]] some [[prime number]] <math>p</math>.}}{{snd}} and a finite number of indeterminates, raised to non-negative integer powers.
 
== Classification ==
{{Further|Degree of a polynomial}}
The exponent on an indeterminate in a term is called the degree of that indeterminate in that term; the degree of the term is the sum of the degrees of the indeterminates in that term, and the degree of a polynomial is the largest degree of any term with nonzero coefficient.<ref name=":2">{{Cite web|title=Polynomials {{!}} Brilliant Math & Science Wiki|url=https://brilliant.org/wiki/polynomials/|access-date=2020-08-28|website=brilliant.org|language=en-us}}</ref> Because <math>x = x^1</math>, the degree of an indeterminate without a written exponent is one.
 
{{anchor|constant polynomial}}
A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a [[constant term]] and a '''constant polynomial'''.{{efn|This terminology dates from the time when the distinction was not clear between a polynomial and the function that it defines: a constant term and a constant polynomial define [[constant function]]s.{{citation needed|date=July 2020}}}} The degree of a constant term and of a nonzero constant polynomial is <math>0</math>. The degree of the zero polynomial <math>0</math> (which has no terms at all) is generally treated as not defined (but see below).<ref name=Barbeau-2003-pp1-2>{{harvnb|Barbeau|2003|pp=[https://books.google.com/books?id=CynRMm5qTmQC&pg=PA1 1]–2}}</ref>
 
For example:
<math display="block"> -5x^2y </math>
is a term. The coefficient is <math>-5</math>, the indeterminates are <math>x</math> and <math>y</math>, the degree of <math>x</math> is two, while the degree of <math>y</math> is one. The degree of the entire term is the sum of the degrees of each indeterminate in it, so in this example the degree is <math>2+1 = 3</math>.
 
Forming a sum of several terms produces a polynomial. For example, the following is a polynomial:
<math display="block">\underbrace{_\,3x^2}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{1}\end{smallmatrix}} \underbrace{-_\,5x}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{2}\end{smallmatrix}} \underbrace{+_\,4}_{\begin{smallmatrix}\mathrm{term}\\\mathrm{3}\end{smallmatrix}}. </math>
It consists of three terms: the first is degree two, the second is degree one, and the third is degree zero.
 
{{anchor|linear polynomial}}Polynomials of small degree have been given specific names. A polynomial of degree zero is a ''constant polynomial'', or simply a ''constant''. Polynomials of degree one, two or three are respectively ''linear polynomials,'' ''[[quadratic polynomial]]s'' and ''cubic polynomials''.<ref name=":2" /> For higher degrees, the specific names are not commonly used, although ''quartic polynomial'' (for degree four) and ''quintic polynomial'' (for degree five) are sometimes used. The names for the degrees may be applied to the polynomial or to its terms. For example, the term <math>2x</math> in <math>x^2 + 2x + 1</math> is a linear term in a quadratic polynomial.
 
{{anchor|zero polynomial}}The polynomial <math>0</math>, which may be considered to have no terms at all, is called the '''zero polynomial'''. Unlike other constant polynomials, its degree is not zero. Rather, the degree of the zero polynomial is either left explicitly undefined, or defined as negative (either −1 or <math>-\infty</math>).<ref>{{MathWorld |urlname=ZeroPolynomial |title=Zero Polynomial}}</ref> The zero polynomial is also unique in that it is the only polynomial in one indeterminate that has an infinite number of [[root of a function|roots]]. The graph of the zero polynomial, <math>f(x) = 0</math>, is the <math>x</math>-axis.
 
In the case of polynomials in more than one indeterminate, a polynomial is called ''homogeneous'' of degree <math>n</math> if ''all'' of its non-zero terms have degree <math>n</math>. The zero polynomial is homogeneous, and, as a homogeneous polynomial, its degree is undefined.{{efn|In fact, as a [[homogeneous function]], it is homogeneous of ''every'' degree.{{citation needed|date=July 2020}}}} For example, <math>x^3y^2 + 7x^2y^3 - 3x^5</math> is homogeneous of degree <math>5</math>. For more details, see [[Homogeneous polynomial|homogeneous polynomials]].
 
The [[commutative law]] of addition can be used to rearrange terms into any preferred order. In polynomials with one indeterminate, the terms are usually ordered according to degree, either in "descending powers of <math>x</math>", with the term of largest degree first, or in "ascending powers of <math>x</math>". The polynomial <math>3x^2 - 5x + 4</math> is written in descending powers of <math>x</math>. The first term has coefficient <math>3</math>, indeterminate <math>x</math>, and exponent <math>2</math>. In the second term, the coefficient is <math>-5</math>. The third term is a constant. Because the ''degree'' of a non-zero polynomial is the largest degree of any one term, this polynomial has degree two.<ref>{{harvnb|Edwards|1995|p=[https://books.google.com/books?id=ylFR4h5BIDEC&pg=PA78 78]}}</ref>
 
Two terms with the same indeterminates raised to the same powers are called "similar terms" or "like terms", and they can be combined, using the [[distributive law]], into a single term whose coefficient is the sum of the coefficients of the terms that were combined. It may happen that this makes the coefficient <math>0</math>.<ref name="Edwards-1995-p47"/>
 
Polynomials can be classified by the number of terms with nonzero coefficients, so that a one-term polynomial is called a [[monomial]],{{efn|Some authors use "monomial" to mean "[[monic polynomial|monic]] monomial". See {{cite book |first=Anthony W. |last=Knapp |title=Advanced Algebra: Along with a Companion Volume Basic Algebra |page=457 |year=2007 |publisher=Springer |isbn=978-0-8176-4522-9}}}} a two-term polynomial is called a [[binomial (polynomial)|binomial]], and a three-term polynomial is called a [[trinomial]]. A polynomial with two or more terms is also called a '''multinomial'''.<ref>{{Cite web |last=Weisstein |first=Eric W. |title=Multinomial |url=https://mathworld.wolfram.com/Multinomial.html |access-date=2025-08-26 |website=mathworld.wolfram.com |language=en}}</ref><ref>{{Cite book |last=Clapham |first=Christopher |url=https://www.google.ae/books/edition/The_Concise_Oxford_Dictionary_of_Mathema/ZsiSvE0Z3s4C |title=The Concise Oxford Dictionary of Mathematics |last2=Nicholson |first2=James |publisher=Oxford University Press |year=2009 |isbn=9780199235940 |edition=4th |___location=United States |pages=303 |language=en}}</ref>
 
{{anchor|real polynomial|complex polynomial|integer polynomial}}A '''real polynomial''' is a polynomial with [[real number|real]] coefficients. When it is used to define a [[function (mathematics)|function]], the [[___domain (function)|___domain]] is not so restricted. However, a '''real polynomial function''' is a function from the reals to the reals that is defined by a real polynomial. Similarly, an '''integer polynomial''' is a polynomial with [[integer]] coefficients, and a '''complex polynomial''' is a polynomial with [[complex number|complex]] coefficients.
 
{{anchor|univariate|bivariate|Number of variables|Multivariate polynomial}}A polynomial in one indeterminate is called a '''univariate polynomial''', a polynomial in more than one indeterminate is called a '''multivariate polynomial'''.<ref>{{Cite web |last=Weisstein |first=Eric W. |title=Multivariate Polynomial |url=https://mathworld.wolfram.com/MultivariatePolynomial.html |access-date=2025-08-26 |website=mathworld.wolfram.com |language=en}}</ref> A polynomial with two indeterminates is called a '''bivariate polynomial'''.<ref name=":1" /> These notions refer more to the kind of polynomials one is generally working with than to individual polynomials; for instance, when working with univariate polynomials, one does not exclude constant polynomials (which may result from the subtraction of non-constant polynomials), although strictly speaking, constant polynomials do not contain any indeterminates at all. It is possible to further classify multivariate polynomials as ''bivariate'', ''trivariate'', and so on, according to the maximum number of indeterminates allowed. Again, so that the set of objects under consideration be closed under subtraction, a study of trivariate polynomials usually allows bivariate polynomials, and so on. It is also common to say simply "polynomials in <math>x,y</math>, and <math>z</math>", listing the indeterminates allowed.
 
== Operations ==
=== Addition and subtraction ===
Polynomials can be added using the [[associative law]] of addition (grouping all their terms together into a single sum), possibly followed by reordering (using the [[commutative law]]) and combining of like terms.<ref name="Edwards-1995-p47">{{cite book |last=Edwards |first=Harold M. |title=Linear Algebra |publisher=Springer |year=1995 |isbn=978-0-8176-3731-6 |page=47 |url=https://books.google.com/books?id=ylFR4h5BIDEC&pg=PA47}}</ref><ref>{{cite book |last=Salomon |first=David |title=Coding for Data and Computer Communications |publisher=Springer |year=2006 |isbn=978-0-387-23804-3 |page=459 |url=https://books.google.com/books?id=Zr9bjEpXKnIC&pg=PA459}}</ref> For example, if
<math display="block"> P = 3x^2 - 2x + 5xy - 2 </math> and <math display="block"> Q = -3x^2 + 3x + 4y^2 + 8</math>
then the sum
<math display="block">P + Q = 3x^2 - 2x + 5xy - 2 - 3x^2 + 3x + 4y^2 + 8 </math>
can be reordered and regrouped as
<math display="block">P + Q = (3x^2 - 3x^2) + (- 2x + 3x) + 5xy + 4y^2 + (8 - 2) </math>
and then simplified to
<math display="block">P + Q = x + 5xy + 4y^2 + 6.</math>
When polynomials are added together, the result is another polynomial.<ref name=":0">{{Cite book|url=https://books.google.com/books?id=PagNAQAAIAAJ&q=the+addition+of+polynomials+is+an+operation+that+takes+any+two+polynomials+and+produce+always+another+polynomial,|title=Introduction to Algebra|date=1965|publisher=Yale University Press|pages=621|language=en|quote=Any two such polynomials can be added, subtracted, or multiplied. Furthermore, the result in each case is another polynomial}}</ref>
 
Subtraction of polynomials is similar.
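As an illustration only (the dictionary representation and the function name are one possible choice, not a standard library routine), the following Python sketch stores a polynomial in <math>x</math> and <math>y</math> as a dictionary mapping each pair of exponents to its coefficient, and adds two polynomials by combining like terms:
<syntaxhighlight lang="python">
# Illustrative sketch: a polynomial in x and y is stored as a dict
# mapping (exponent of x, exponent of y) -> coefficient.
def add_poly(p, q):
    """Add two polynomials by combining like terms."""
    result = dict(p)
    for monomial, coeff in q.items():
        result[monomial] = result.get(monomial, 0) + coeff
        if result[monomial] == 0:   # a cancellation may make the coefficient 0
            del result[monomial]
    return result

# P = 3x^2 - 2x + 5xy - 2   and   Q = -3x^2 + 3x + 4y^2 + 8
P = {(2, 0): 3, (1, 0): -2, (1, 1): 5, (0, 0): -2}
Q = {(2, 0): -3, (1, 0): 3, (0, 2): 4, (0, 0): 8}
print(add_poly(P, Q))   # represents x + 5xy + 4y^2 + 6
</syntaxhighlight>
Subtraction can reuse the same routine after negating every coefficient of the second polynomial.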
 
=== Multiplication ===
{{Further|Polynomial expansion}}
Polynomials can also be multiplied. To expand the [[product (mathematics)|product]] of two polynomials into a sum of terms, the distributive law is repeatedly applied, which results in each term of one polynomial being multiplied by every term of the other.<ref name="Edwards-1995-p47"/> For example, if
<math display="block">\begin{align}
\color{Red} P &\color{Red}{= 2x + 3y + 5} \\
\color{Blue} Q &\color{Blue}{= 2x + 5y + xy + 1}
\end{align}</math>
 
then
<math display="block">\begin{array}{rccrcrcrcr}
{\color{Red}{P}} {\color{Blue}{Q}} & = &&({\color{Red}{2x}}\cdot{\color{Blue}{2x}})
&+&({\color{Red}{2x}}\cdot{\color{Blue}{5y}})&+&({\color{Red}{2x}}\cdot {\color{Blue}{xy}})&+&({\color{Red}{2x}}\cdot{\color{Blue}{1}})
\\&&+&({\color{Red}{3y}}\cdot{\color{Blue}{2x}})&+&({\color{Red}{3y}}\cdot{\color{Blue}{5y}})&+&({\color{Red}{3y}}\cdot {\color{Blue}{xy}})&+&
({\color{Red}{3y}}\cdot{\color{Blue}{1}})
\\&&+&({\color{Red}{5}}\cdot{\color{Blue}{2x}})&+&({\color{Red}{5}}\cdot{\color{Blue}{5y}})&+&
({\color{Red}{5}}\cdot {\color{Blue}{xy}})&+&({\color{Red}{5}}\cdot{\color{Blue}{1}})
\end{array}</math>
Carrying out the multiplication in each term produces
<math display="block">\begin{array}{rccrcrcrcr}
PQ & = && 4x^2 &+& 10xy &+& 2x^2y &+& 2x \\
&&+& 6xy &+& 15y^2 &+& 3xy^2 &+& 3y \\
&&+& 10x &+& 25y &+& 5xy &+& 5.
\end{array}</math>
Combining similar terms yields
<math display="block">\begin{array}{rcccrcrcrcr}
PQ & = && 4x^2 &+&( 10xy + 6xy + 5xy ) &+& 2x^2y &+& ( 2x + 10x ) \\
&& + & 15y^2 &+& 3xy^2 &+&( 3y + 25y )&+&5
\end{array}</math>
 
which can be simplified to
<math display="block">PQ = 4x^2 + 21xy + 2x^2y + 12x + 15y^2 + 3xy^2 + 28y + 5.</math>
As in the example, the product of polynomials is always a polynomial.<ref name=":0" /><ref name=Barbeau-2003-pp1-2/>
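The distributive law also has a direct computational reading: every term of one polynomial is multiplied by every term of the other, exponents are added, and like terms are combined. The sketch below uses the same illustrative dictionary representation as in the addition example above (exponent pairs mapped to coefficients) and is not a standard routine:
<syntaxhighlight lang="python">
def multiply_poly(p, q):
    """Multiply two polynomials (dicts mapping exponent pairs to coefficients)."""
    result = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():          # each term of p times each term of q
            monomial = (i1 + i2, j1 + j2)        # exponents add under multiplication
            result[monomial] = result.get(monomial, 0) + c1 * c2   # combine like terms
    return {m: c for m, c in result.items() if c != 0}

# P = 2x + 3y + 5   and   Q = 2x + 5y + xy + 1
P = {(1, 0): 2, (0, 1): 3, (0, 0): 5}
Q = {(1, 0): 2, (0, 1): 5, (1, 1): 1, (0, 0): 1}
print(multiply_poly(P, Q))   # represents 4x^2 + 21xy + 2x^2y + 12x + 15y^2 + 3xy^2 + 28y + 5
</syntaxhighlight>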
 
=== Composition ===
Given a polynomial <math>f</math> of a single variable and another polynomial <math>g</math> of any number of variables, the [[function composition|composition]] <math>f \circ g</math> is obtained by substituting each copy of the variable of the first polynomial by the second polynomial.<ref name=Barbeau-2003-pp1-2/> For example, if <math>f(x) = x^2 + 2x</math> and <math>g(x) = 3x + 2</math> then
<math display = "block"> (f\circ g)(x) = f(g(x)) = (3x + 2)^2 + 2(3x + 2).</math>
A composition may be expanded to a sum of terms using the rules for addition and multiplication of polynomials. The composition of two polynomials is another polynomial.<ref>{{Cite book|last=Kriete|first=Hartje|url=https://books.google.com/books?id=HwqjxJOLLOoC&q=The+composition+of+two+polynomials+is+always+another+polynomial.&pg=PA159|title=Progress in Holomorphic Dynamics|date=1998-05-20|publisher=CRC Press|isbn=978-0-582-32388-9|pages=159|language=en|quote=This class of endomorphisms is closed under composition,}}</ref>
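For instance, expanding the composition above with these rules gives
<math display="block">(3x + 2)^2 + 2(3x + 2) = 9x^2 + 12x + 4 + 6x + 4 = 9x^2 + 18x + 8.</math>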
<!--something about the composition where ''f'' has many variables? <ref name=Barbeau-2003-pp1-2/>-->
 
=== Division ===
 
The division of one polynomial by another is not typically a polynomial. Instead, such ratios are a more general family of objects, called ''[[rational fraction]]s'', ''rational expressions'', or ''[[rational function]]s'', depending on context.<ref>{{cite book|last1 = Marecek | first1 = Lynn | last2 = Mathis | first2 = Andrea Honeycutt | title = Intermediate Algebra 2e | date = 6 May 2020 | publisher = [[OpenStax]] <!-- | ___location = Houston, Texas -->| url = https://openstax.org/details/books/intermediate-algebra-2e | at = §7.1}}</ref> This is analogous to the fact that the ratio of two [[integer]]s is a [[rational number]], not necessarily an integer.<ref>{{Cite book|last1=Haylock|first1=Derek|url=https://books.google.com/books?id=hgAr3maZeQUC&q=division+integers+not+closed&pg=PA49|title=Understanding Mathematics for Young Children: A Guide for Foundation Stage and Lower Primary Teachers|last2=Cockburn|first2=Anne D.|date=2008-10-14|publisher=SAGE|isbn=978-1-4462-0497-9|pages=49|language=en|quote=We find that the set of integers is not closed under this operation of division.}}</ref><ref name = openstax>{{harvnb|Marecek|Mathis|2020|loc=§5.4}}</ref> For example, the fraction <math>1/(x^2+1)</math> is not a polynomial, and it cannot be written as a finite sum of powers of the variable <math>x</math>.
 
For polynomials in one variable, there is a notion of [[Euclidean division of polynomials]], generalizing the [[Euclidean division]] of integers.{{efn|This paragraph assumes that the polynomials have coefficients in a [[field (mathematics)|field]].}} This notion of the division <math>a(x)/b(x)</math> results in two polynomials, a ''quotient'' <math>q(x)</math> and a ''remainder'' <math>r(x)</math>, such that <math>a = bq + r</math> and <math>\deg(r) < \deg(b)</math>, where <math>\deg(p)</math> is the degree of <math>p</math>. The quotient and remainder may be computed by any of several algorithms, including [[polynomial long division]] and [[synthetic division]].<ref>{{cite book |first1=Peter H. |last1=Selby |first2=Steve |last2=Slavin |title=Practical Algebra: A Self-Teaching Guide |date=1991 |publisher=Wiley |isbn=978-0-471-53012-1 |edition=2nd}}</ref>
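As a rough sketch of how the quotient and remainder may be computed (an illustration only, assuming coefficients in a field such as the rational or real numbers; the function name is arbitrary), polynomial long division can be written in Python with the coefficients listed from the highest degree down:
<syntaxhighlight lang="python">
def divide_poly(a, b):
    """Long division of polynomials given as coefficient lists, highest degree first.
    Returns (quotient, remainder) with deg(remainder) < deg(b)."""
    a = list(a)                                # work on a copy of the dividend
    n, m = len(a) - 1, len(b) - 1              # degrees of a and b
    if m < 0 or b[0] == 0:
        raise ValueError("division by the zero polynomial")
    quotient = [0] * max(n - m + 1, 1)
    for k in range(n - m + 1):                 # cancel the current leading term
        coeff = a[k] / b[0]
        quotient[k] = coeff
        for j in range(m + 1):
            a[k + j] -= coeff * b[j]
    remainder = a[n - m + 1:] if n >= m else a
    return quotient, remainder

# (x^3 - 12x^2 - 42) / (x - 3): quotient x^2 - 9x - 27, remainder -123
print(divide_poly([1, -12, 0, -42], [1, -3]))
</syntaxhighlight>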
 
When the denominator <math>b(x)</math> is [[monic polynomial|monic]] and linear, that is, <math>b(x) = x-c</math> for some constant <math>c</math>, then the [[polynomial remainder theorem]] asserts that the remainder of the division of <math>a(x)</math> by <math>b(x)</math> is the [[#evaluation|evaluation]] <math>a(c)</math>.<ref name = openstax/> In this case, the quotient may be computed by [[Ruffini's rule]], a special case of synthetic division.<ref>{{Cite web|last=Weisstein|first=Eric W.|title=Ruffini's Rule|url=https://mathworld.wolfram.com/RuffinisRule.html|access-date=2020-07-25|website=mathworld.wolfram.com|language=en}}</ref>
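For instance, the remainder of the division of <math>x^3 - 2x + 4</math> by <math>x - 1</math> is <math>1^3 - 2\cdot 1 + 4 = 3</math>, which can be found without carrying out the division itself.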
 
=== Factoring ===
All polynomials with coefficients in a [[unique factorization ___domain]] (for example, the integers or a [[field (mathematics)|field]]) also have a factored form in which the polynomial is written as a product of [[irreducible polynomial]]s and a constant. This factored form is unique up to the order of the factors and their multiplication by an invertible constant. In the case of the field of [[complex number]]s, the irreducible factors are linear. Over the [[real number]]s, they have the degree either one or two. Over the integers and the [[rational number]]s the irreducible factors may have any degree.<ref name=Barbeau-2003-pp80-82>{{harvnb|Barbeau|2003|pp=[https://books.google.com/books?id=CynRMm5qTmQC&pg=PA80 80]–2}}</ref> For example, the factored form of
<math display="block"> 5x^3-5</math>
is
<math display="block">5(x - 1)\left(x^2 + x + 1\right)</math>
over the integers and the reals, and
<math display="block"> 5(x - 1)\left(x + \frac{1 + i\sqrt{3}}{2}\right)\left(x + \frac{1 - i\sqrt{3}}{2}\right)</math>
over the complex numbers.
 
The computation of the factored form, called ''factorization'', is, in general, too difficult to be done by hand-written computation. However, efficient [[factorization of polynomials|polynomial factorization]] [[algorithm]]s are available in most [[computer algebra system]]s.
 
=== Calculus ===
{{Main|Calculus with polynomials}}
Calculating [[derivative]]s and integrals of polynomials is particularly simple, compared to other kinds of functions.
The [[derivative]] of the polynomial <math display="block">P = a_n x^n + a_{n - 1} x^{n - 1} + \dots + a_2 x^2 + a_1 x + a_0 = \sum_{i=0}^n a_i x^i</math> with respect to <math>x</math> is the polynomial
<math display="block"> n a_n x^{n - 1} + (n - 1)a_{n - 1} x^{n - 2} + \dots + 2 a_2 x + a_1 = \sum_{i=1}^n i a_i x^{i-1}.</math>
Similarly, the general [[antiderivative]] (or indefinite integral) of <math>P</math> is
<math display="block"> \frac{a_n x^{n + 1}}{n + 1} + \frac{a_{n - 1} x^{n}}{n} + \dots + \frac{a_2 x^3}{3} + \frac{a_1 x^2}{2} + a_0 x + c = c + \sum_{i = 0}^n \frac{a_i x^{i + 1}}{i + 1}</math>
where <math>c</math> is an arbitrary constant. For example, antiderivatives of <math>x^2 + 1</math> have the form <math>\frac13x^3 + x + c</math>.
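These formulas translate directly into a short computation. In the illustrative sketch below (not a standard routine), a polynomial is stored as the list of its coefficients <math>a_0, a_1, \ldots, a_n</math> indexed by the exponent:
<syntaxhighlight lang="python">
def derivative(coeffs):
    """Coefficients of the derivative; coeffs[i] is the coefficient of x^i."""
    return [i * a for i, a in enumerate(coeffs)][1:]

def antiderivative(coeffs, c=0):
    """Coefficients of an antiderivative, with constant of integration c."""
    return [c] + [a / (i + 1) for i, a in enumerate(coeffs)]

p = [1, 0, 1]               # the polynomial x^2 + 1
print(derivative(p))        # [0, 2]: the derivative is 2x
print(antiderivative(p))    # [0, 1.0, 0.0, 0.333...]: an antiderivative is x + x^3/3
</syntaxhighlight>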
 
For polynomials whose coefficients come from more abstract settings (for example, if the coefficients are integers [[modular arithmetic|modulo]] some [[prime number]] <math>p</math>, or elements of an arbitrary ring), the formula for the derivative can still be interpreted formally, with the coefficient <math>ka_k</math> understood to mean the sum of <math>k</math> copies of <math>a_k</math>. For example, over the integers modulo <math>p</math>, the derivative of the polynomial <math>x^p + x</math> is the polynomial <math>1</math>.<ref name=Barbeau-2003-pp64-65>{{harvnb|Barbeau|2003|pp=[https://books.google.com/books?id=CynRMm5qTmQC&pg=PA64 64]–5}}</ref>
 
== Polynomial functions ==
<!-- "Polynomial function" redirects here -->
{{See also|Ring of polynomial functions}}
 
A ''polynomial function'' is a function that can be defined by [[#evaluation|evaluating]] a polynomial. More precisely, a function <math>f</math> of one [[argument of a function|argument]] from a given ___domain is a polynomial function if there exists a polynomial
<math display="block">a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0 </math>
that evaluates to <math>f(x)</math> for all {{mvar|x}} in the [[___domain of a function|___domain]] of <math>f</math> (here, <math>n</math> is a non-negative integer and <math>a_0, a_1, a_2, \ldots, a_n</math> are constant coefficients).{{sfn|Varberg|Purcell|Rigdon|2007|p=[https://archive.org/details/matematika-a-purcell-calculus-9th-ed/page/38/mode/1up?view=theater&q=polynomial 38]}}
Generally, unless otherwise specified, polynomial functions have [[complex number|complex]] coefficients, arguments, and values. In particular, a polynomial, restricted to have real coefficients, defines a function from the complex numbers to the complex numbers. If the ___domain of this function is also [[restriction of a function|restricted]] to the reals, the resulting function is a [[real function]] that maps reals to reals.
 
For example, the function <math>f</math>, defined by
<math display="block"> f(x) = x^3 - x,</math>
is a polynomial function of one variable. Polynomial functions of several variables are similarly defined, using polynomials in more than one indeterminate, as in
<math display="block">f(x,y)= 2x^3+4x^2y+xy^5+y^2-7.</math>
According to the definition of polynomial functions, there may be expressions that obviously are not polynomials but nevertheless define polynomial functions. An example is the expression <math>\left(\sqrt{1-x^2}\right)^2,</math> which takes the same values as the polynomial <math>1-x^2</math> on the interval <math>[-1,1]</math>, and thus both expressions define the same polynomial function on this interval.
 
Every polynomial function is [[continuous function|continuous]], [[smooth function|smooth]], and [[entire function|entire]].
 
{{anchor|evaluation}}The [[polynomial evaluation|'''evaluation''']] of a polynomial is the computation of the corresponding polynomial function; that is, the evaluation consists of substituting a numerical value for each indeterminate and carrying out the indicated multiplications and additions.
 
For polynomials in one indeterminate, the evaluation is usually more efficient (lower number of arithmetic operations to perform) using [[Horner's method]], which consists of rewriting the polynomial as
<math display="block">(((((a_n x + a_{n-1})x + a_{n-2})x + \dotsb + a_3)x + a_2)x + a_1)x + a_0.</math>
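In this rewritten form only one multiplication and one addition are needed per coefficient. A minimal Python sketch (illustration only; the coefficients are assumed to be given as a list <math>a_n, \ldots, a_0</math>, from the highest degree down):
<syntaxhighlight lang="python">
def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 with coeffs = [a_n, ..., a_0]."""
    value = 0
    for a in coeffs:
        value = value * x + a    # one multiplication and one addition per coefficient
    return value

# 3x^2 - 5x + 4 at x = 2: (3*2 - 5)*2 + 4 = 6
print(horner([3, -5, 4], 2))     # 6
</syntaxhighlight>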
 
=== Graphs ===
 
 
<div class="floatright">
<gallery perrow="2" widths="120px" heights="120px">
File:Algebra1 fnz fig037 pc.svg|Polynomial of degree 0
File:Fonction de Sophie Germain.png|Polynomial of degree 1
File:Polynomialdeg2.svg|Polynomial of degree 2:<br/><small>{{math|''f''(''x'') {{=}} ''x''<sup>2</sup> &minus; ''x'' &minus; 2}}<br/>{{math|{{=}} (''x'' + 1)(''x'' &minus; 2)}}</small>
File:Polynomialdeg3.svg|Polynomial of degree 3:<br/><small>{{math|''f''(''x'') {{=}} ''x''<sup>3</sup>/4 + 3''x''<sup>2</sup>/4 &minus; 3''x''/2 &minus; 2}}<br/>{{math|{{=}} 1/4 (''x'' + 4)(''x'' + 1)(''x'' &minus; 2)}}</small>
File:Polynomialdeg4.svg|Polynomial of degree 4:<br/><small>{{math|''f''(''x'') {{=}} 1/14 (''x'' + 4)(''x'' + 1)(''x'' &minus; 1)(''x'' &minus; 3) + 0.5}}</small>
File:Quintic polynomial.svg|Polynomial of degree 5:<br/><small>{{math|''f''(''x'') {{=}} 1/20 (''x'' + 4)(''x'' + 2)(''x'' + 1)(''x'' &minus; 1)(''x'' &minus; 3) + 2}}</small>
File:Sextic Graph.svg|Polynomial of degree 6:<br/><small>{{math|''f''(''x'') {{=}} 1/100 (''x''<sup>6</sup> − 2''x'' <sup>5</sup> − 26''x''<sup>4</sup> + 28''x''<sup>3</sup>}}<br/>{{math|+ 145''x''<sup>2</sup> − 26''x'' − 80)}}</small>
File:Septic graph.svg|Polynomial of degree 7:<br/><small>{{math|''f''(''x'') {{=}} (''x'' − 3)(''x'' − 2)(''x'' − 1)(''x'')(''x'' + 1)(''x'' + 2)}}<br/>{{math|(''x'' + 3)}}</small>
</gallery>
</div>
A polynomial function in one real variable can be represented by a [[graph of a function|graph]].
<ul>
<li>
The graph of the zero polynomial
{{block indent|{{math|1=''f''(''x'') = 0}}}} is the {{math|''x''}}-axis.
</li>
<li>
The graph of a degree 0 polynomial
{{block indent|{{math|1=''f''(''x'') = ''a''<sub>0</sub>}}, where {{math|''a''<sub>0</sub> ≠ 0}},}} is a horizontal line with {{nowrap|{{math|''y''}}-intercept {{math|''a''<sub>0</sub>}}}}.
</li>
<li>
The graph of a degree 1 polynomial (or linear function)
{{block indent|{{math|1=''f''(''x'') = ''a''<sub>0</sub> + ''a''<sub>1</sub>''x''}}, where {{math|''a''<sub>1</sub> ≠ 0}},}} is an oblique line with {{nowrap|{{math|''y''}}-intercept {{math|''a''<sub>0</sub>}}}} and [[slope]] {{math|''a''<sub>1</sub>}}.
</li>
<li>
The graph of a degree 2 polynomial
{{block indent|{{math|1=''f''(''x'') = ''a''<sub>0</sub> + ''a''<sub>1</sub>''x'' + ''a''<sub>2</sub>''x''<sup>2</sup>}}, where {{math|''a''<sub>2</sub> ≠ 0}}}} is a [[parabola]].
</li>
<li>
The graph of a degree 3 polynomial
{{block indent|{{math|1=''f''(''x'') = ''a''<sub>0</sub> + ''a''<sub>1</sub>''x'' + ''a''<sub>2</sub>''x''<sup>2</sup> + ''a''<sub>3</sub>''x''<sup>3</sup>}}, where {{math|''a''<sub>3</sub> ≠ 0}}}} is a [[cubic equation|cubic curve]].
</li>
<li>
The graph of any polynomial with degree 2 or greater
{{block indent|{{math|1=''f''(''x'') = ''a''<sub>0</sub> + ''a''<sub>1</sub>''x'' + ''a''<sub>2</sub>''x''<sup>2</sup> + ⋯ + ''a''<sub>''n''</sub>''x''<sup>''n''</sup>}}, where {{math|''a''<sub>''n''</sub> ≠ 0 and ''n'' ≥ 2}}}} is a continuous non-linear curve.
</li>
</ul>
 
A non-constant polynomial function [[infinity#Calculus|tends to infinity]] when the variable increases indefinitely (in [[absolute value]]). If the degree is higher than one, the graph does not have any [[asymptote]]. It has two [[parabolic branch]]es with vertical direction (one branch for positive {{math|''x''}} and one for negative {{math|''x''}}).
 
Polynomial graphs are analyzed in calculus using intercepts, slopes, concavity, and end behavior.
 
== Equations ==
{{Main|Algebraic equation}}
A ''polynomial equation'', also called an ''[[algebraic equation]]'', is an [[equation]] of the form<ref>{{Cite book |last=Proskuryakov |first=I.V. |chapter=Algebraic equation |editor=Hazewinkel, Michiel |editor-link=Michiel Hazewinkel |title=Encyclopaedia of Mathematics |volume=1 |publisher=Springer |year=1994 |isbn=978-1-55608-010-4 |chapter-url=https://books.google.com/books?id=PE1a-EIG22kC&pg=PA88}}</ref>
<math display="block">a_n x^n + a_{n-1}x^{n-1} + \dotsb + a_2 x^2 + a_1 x + a_0 = 0.</math>
For example,
<math display="block"> 3x^2 + 4x - 5 = 0 </math>
is a polynomial equation.
 
When considering equations, the indeterminates (variables) of polynomials are also called [[variable (mathematics)|unknown]]s, and the ''solutions'' are the possible values of the unknowns for which the equality is true (in general more than one solution may exist). A polynomial equation stands in contrast to a ''polynomial [[identity (mathematics)|identity]]'' like <math>(x+y)(x-y) = x^2 - y^2</math>, where both expressions represent the same polynomial in different forms, and as a consequence any evaluation of both members gives a valid equality.
 
In elementary [[algebra]], methods such as the [[quadratic formula]] are taught for solving all first degree and second degree polynomial equations in one variable. There are also formulas for the [[cubic equation|cubic]] and [[quartic equation]]s. For higher degrees, the [[Abel–Ruffini theorem]] asserts that there cannot exist a general formula in radicals. However, [[root-finding algorithm]]s may be used to find [[numerical approximation]]s of the roots of a polynomial expression of any degree.
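As a minimal illustration of numerical root finding (one possible tool among many; this sketch assumes the [[NumPy]] library is available), the roots of the polynomial <math>3x^2 + 4x - 5</math> from the example above can be approximated as follows:
<syntaxhighlight lang="python">
import numpy as np

# Coefficients of 3x^2 + 4x - 5, highest degree first.
print(np.roots([3, 4, -5]))   # approximately -2.1196 and 0.7863
</syntaxhighlight>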
 
The number of solutions of a polynomial equation with real coefficients may not exceed the degree, and equals the degree when the [[complex number|complex]] solutions are counted with their [[multiplicity (mathematics)|multiplicity]]. This fact is called the [[fundamental theorem of algebra]].
 
=== Solving equations <span class="anchor" id="Solving polynomial equations"></span> ===
<!-- "Simple root (polynomial)" redirects here -->
 
{{See also|Root-finding of polynomials|Properties of polynomial roots}}
 
A ''root'' of a nonzero univariate polynomial {{math|''P''}} is a value {{mvar|a}} of {{mvar|x}} such that {{math|''P''(''a'') {{=}} 0}}. In other words, a root of {{mvar|P}} is a solution of the [[polynomial equation]] {{math|''P''(''x'') {{=}} 0}} or a [[zero of a function|zero]] of the polynomial function defined by {{math|''P''}}. In the case of the zero polynomial, every number is a zero of the corresponding function, and the concept of root is rarely considered.
 
A number {{math|''a''}} is a root of a polynomial {{math|''P''}} if and only if the [[#linear polynomial|linear polynomial]] {{math|''x'' − ''a''}} divides {{math|''P''}}, that is if there is another polynomial {{math|''Q''}} such that {{math|1=''P'' = (''x'' − ''a'') Q}}. It may happen that a power (greater than {{math|1}}) of {{math|''x'' − ''a''}} divides {{math|''P''}}; in this case, {{math|''a''}} is a ''multiple root'' of {{math|''P''}}, and otherwise {{math|''a''}} is a '''simple root''' of {{math|''P''}}. If {{math|''P''}} is a nonzero polynomial, there is a highest power {{math|''m''}} such that {{math|(''x'' − ''a'')<sup>''m''</sup>}} divides {{math|''P''}}, which is called the ''multiplicity'' of {{math|''a''}} as a root of {{math|''P''}}. The number of roots of a nonzero polynomial {{math|''P''}}, counted with their respective multiplicities, cannot exceed the degree of {{math|''P''}},<ref>{{cite book |last=Leung |first=Kam-tim |title=Polynomials and Equations |publisher=Hong Kong University Press |year=1992 |isbn=9789622092716 |page=134 |url=https://books.google.com/books?id=v5uXkwIUbC8C&pg=PA134|display-authors=etal}}</ref> and equals this degree if all [[complex number|complex]] roots are considered (this is a consequence of the [[fundamental theorem of algebra]]).
The coefficients of a polynomial and its roots are related by [[Vieta's formulas]].
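
For example, the polynomial {{math|''x''<sup>3</sup> − ''x''<sup>2</sup> − ''x'' + 1}} factors as
<math display="block">x^3 - x^2 - x + 1 = (x-1)^2(x+1),</math>
so {{math|1}} is a multiple root of multiplicity {{math|2}}, while {{math|−1}} is a simple root; counted with multiplicities, the number of roots equals the degree {{math|3}}.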
 
Some polynomials, such as {{math|''x''<sup>2</sup> + 1}}, do not have any roots among the [[real number]]s. If, however, the set of accepted solutions is expanded to the [[complex number]]s, every non-constant polynomial has at least one root; this is the [[fundamental theorem of algebra]]. By successively dividing out factors {{math|''x'' − ''a''}}, one sees that any polynomial with complex coefficients can be written as a constant (its leading coefficient) times a product of such polynomial factors of degree&nbsp;1; as a consequence, the number of (complex) roots counted with their multiplicities is exactly equal to the degree of the polynomial.
 
There may be several meanings of [[Equation solving|"solving an equation"]]. One may want to express the solutions as explicit numbers; for example, the unique solution of {{math|1=2''x'' − 1 = 0}} is {{math|1/2}}. This is, in general, impossible for equations of degree greater than one, and, since ancient times, mathematicians have sought to express the solutions as [[algebraic expression]]s; for example, the [[golden ratio]] {{math|(1+{{sqrt|5}})/2}} is the unique positive solution of {{math|''x''<sup>2</sup> − ''x'' − 1 {{=}} 0}}. In ancient times, they succeeded only for degrees one and two. For [[quadratic equation]]s, the [[quadratic formula]] provides such expressions of the solutions. Since the 16th century, similar formulas (using cube roots in addition to square roots), although much more complicated, have been known for equations of degree three and four (see [[cubic equation]] and [[quartic equation]]). But formulas for degree 5 and higher eluded researchers for several centuries. In 1824, [[Niels Henrik Abel]] proved the striking result that there are equations of degree 5 whose solutions cannot be expressed by a (finite) formula involving only arithmetic operations and radicals (see [[Abel–Ruffini theorem]]). In 1830, [[Évariste Galois]] proved that most equations of degree higher than four cannot be solved by radicals, and showed that for each equation, one may decide whether it is solvable by radicals, and, if it is, solve it. This result marked the start of [[Galois theory]] and [[group theory]], two important branches of modern [[algebra]]. Galois himself noted that the computations implied by his method were impracticable. Nevertheless, formulas for solvable equations of degrees 5 and 6 have been published (see [[quintic function]] and [[sextic equation]]).
 
When there is no algebraic expression for the roots, or when such an expression exists but is too complicated to be useful, the only way of solving the equation is to compute [[numerical approximation]]s of the solutions.<ref>{{cite book |last=McNamee |first=J.M. |title=Numerical Methods for Roots of Polynomials, Part 1 |publisher=Elsevier |year=2007 |isbn=978-0-08-048947-6 |url=https://books.google.com/books?id=4PMqxwG-eqQC}}</ref> There are many methods for doing so; some are restricted to polynomials and others may apply to any [[continuous function]]. The most efficient [[algorithm]]s make it easy to solve, on a [[computer]], polynomial equations of degree higher than 1,000 (see ''[[Root-finding algorithm]]'').
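
In practice, such approximations are computed with standard numerical software. As a minimal illustration (the quintic below and the use of the NumPy library are arbitrary choices, not prescribed by the sources cited here), the following Python sketch approximates all roots of {{math|''x''<sup>5</sup> − ''x'' − 1 {{=}} 0}}:

<syntaxhighlight lang="python">
import numpy as np

# Coefficients of x^5 - x - 1, listed from the highest power down to the constant term.
coefficients = [1, 0, 0, 0, -1, -1]

# numpy.roots returns numerical approximations of all five complex roots,
# computed as the eigenvalues of the polynomial's companion matrix.
approximate_roots = np.roots(coefficients)
print(approximate_roots)
# One real root near 1.1673; the other four form two complex-conjugate pairs.
</syntaxhighlight>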
 
For polynomials with more than one indeterminate, the combinations of values for the variables for which the polynomial function takes the value zero are generally called ''zeros'' instead of "roots". The study of the sets of zeros of polynomials is the object of [[algebraic geometry]]. For a set of polynomial equations with several unknowns, there are [[algorithm]]s to decide whether they have a finite number of [[complex number|complex]] solutions, and, if this number is finite, for computing the solutions. See [[System of polynomial equations]].
 
The special case where all the polynomials are of degree one is called a [[system of linear equations]], for which another range of different [[system of linear equations#Solving a linear system|solution methods]] exist, including the classical [[Gaussian elimination]].
 
A polynomial equation for which one is interested only in the solutions which are [[integer]]s is called a [[Diophantine equation]]. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general [[algorithm]] for solving them, or even for deciding whether the set of solutions is empty (see [[Hilbert's tenth problem]]). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as [[Fermat's Last Theorem]].
 
== Polynomial expressions ==
{{anchor|Generalizations of polynomials}}<!-- [[Polynomial expression]] redirects to this section -->
Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name.
 
=== Trigonometric polynomials ===
{{Main|Trigonometric polynomial}}
A '''trigonometric polynomial''' is a finite [[linear combination]] of [[function (mathematics)|functions]] sin(''nx'') and cos(''nx'') with ''n'' taking on the values of one or more [[natural number]]s.<ref>{{cite book |last1=Powell |first1=Michael J. D. |author1-link=Michael J. D. Powell |title=Approximation Theory and Methods |publisher=[[Cambridge University Press]] |isbn=978-0-521-29514-7 |year=1981}}</ref> The coefficients may be taken as real numbers, for real-valued functions.
 
If sin(''nx'') and cos(''nx'') are expanded in terms of sin(''x'') and cos(''x''), a trigonometric polynomial becomes a polynomial in the two variables sin(''x'') and cos(''x'') (using the [[List of trigonometric identities#Multiple-angle formulae|multiple-angle formulae]]). Conversely, every polynomial in sin(''x'') and cos(''x'') may be converted, with [[List of trigonometric identities#Product-to-sum and sum-to-product identities|Product-to-sum identities]], into a linear combination of functions sin(''nx'') and cos(''nx''). This equivalence explains why linear combinations are called polynomials.
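
For example, the double-angle and product-to-sum identities
<math display="block">1 + \cos(2x) = 2\cos^2(x), \qquad \sin(x)\cos(x) = \tfrac{1}{2}\sin(2x)</math>
show how a trigonometric polynomial can be rewritten as an ordinary polynomial in {{math|cos ''x''}}, and, conversely, how a product of the basic functions becomes a linear combination of the functions {{math|sin(''nx'')}} and {{math|cos(''nx'')}}.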
 
For [[complex number|complex coefficients]], there is no difference between such a function and a finite [[Fourier series]].
 
Trigonometric polynomials are widely used, for example in [[trigonometric interpolation]] applied to the [[interpolation]] of [[periodic function]]s. They are also used in the [[discrete Fourier transform]].
 
=== Matrix polynomials ===
{{Main|Matrix polynomial}}
A [[matrix polynomial]] is a polynomial with [[square matrix|square matrices]] as variables.<ref>{{cite book |title=Matrix Polynomials |volume=58 |series=Classics in Applied Mathematics |first1=Israel |last1=Gohberg |first2=Peter |last2=Lancaster |first3=Leiba |last3=Rodman |publisher=[[Society for Industrial and Applied Mathematics]] |___location=Lancaster, PA |year=2009 |orig-year=1982 |isbn=978-0-89871-681-8 |zbl=1170.15300}}</ref> Given an ordinary, scalar-valued polynomial
<math display="block">P(x) = \sum_{i=0}^n{ a_i x^i} =a_0 + a_1 x+ a_2 x^2 + \cdots + a_n x^n, </math>
this polynomial evaluated at a matrix ''A'' is
<math display="block">P(A) = \sum_{i=0}^n{ a_i A^i} =a_0 I + a_1 A + a_2 A^2 + \cdots + a_n A^n,</math>
where ''I'' is the [[identity matrix]].{{sfn|Horn|Johnson|1990|p=36}}
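
As a small numerical check (the polynomial {{math|''P''(''x'') {{=}} ''x''<sup>2</sup> − 3''x'' + 2}} and the matrix below are arbitrary illustrations), the definition can be evaluated in a few lines of Python with NumPy:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # an arbitrary 2x2 example matrix
I = np.eye(2)                # identity matrix of matching size

# Each power x^i is replaced by the matrix power A^i, and the constant
# term a_0 = 2 multiplies the identity matrix.
P_of_A = np.linalg.matrix_power(A, 2) - 3 * A + 2 * I
print(P_of_A)
</syntaxhighlight>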
 
A '''matrix polynomial equation''' is an equality between two matrix polynomials, which holds for the specific matrices in question. A '''matrix polynomial identity''' is a matrix polynomial equation which holds for all matrices ''A'' in a specified [[matrix ring]] ''M<sub>n</sub>''(''R'').
 
=== Exponential polynomials ===
A bivariate polynomial where the second variable is substituted for an exponential function applied to the first variable, for example {{math|''P''(''x'', ''e''<sup>''x''</sup>)}}, may be called an [[exponential polynomial]].
 
== Related concepts ==
 
=== Rational functions ===
{{Main|Rational function}}
A [[rational fraction]] is the [[quotient]] ([[algebraic fraction]]) of two polynomials. Any [[algebraic expression]] that can be rewritten as a rational fraction is a [[rational function]].
 
While polynomial functions are defined for all values of the variables, a rational function is defined only for the values of the variables for which the denominator is not zero.
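
For example,
<math display="block">\frac{x^2 + 1}{x^2 - 1}</math>
is a rational fraction that defines a rational function for every value of {{math|''x''}} other than {{math|1}} and {{math|−1}}, where its denominator vanishes.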
 
The rational fractions include the Laurent polynomials, but do not limit denominators to powers of an indeterminate.
 
=== Laurent polynomials ===
{{Main|Laurent polynomial}}
[[Laurent polynomial]]s are like polynomials, but allow negative powers of the variable(s) to occur.
 
=== Power series ===
{{Main|Formal power series}}
[[Formal power series]] are like polynomials, but allow infinitely many non-zero terms to occur, so that they do not have finite degree. Unlike polynomials they cannot in general be explicitly and fully written down (just like [[irrational number]]s cannot), but the rules for manipulating their terms are the same as for polynomials. Non-formal [[power series]] also generalize polynomials, but the multiplication of two power series may not converge.
 
== Polynomial ring ==
{{Main|Polynomial ring}}
A ''polynomial'' {{math|''f''}} over a [[commutative ring]] {{math|''R''}} is a polynomial all of whose coefficients belong to {{math|''R''}}. It is straightforward to verify that the polynomials in a given set of indeterminates over {{math|''R''}} form a commutative ring, called the ''polynomial ring'' in these indeterminates, denoted <math>R[x]</math> in the univariate case and <math>R[x_1,\ldots, x_n]</math> in the multivariate case.
 
One has
<math display="block">R[x_1,\ldots, x_n]=\left(R[x_1,\ldots, x_{n-1}]\right)[x_n].</math>
So, most of the theory of the multivariate case can be reduced to an iterated univariate case.
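
For example, a polynomial in two indeterminates may be regrouped as a univariate polynomial in {{math|''y''}} whose coefficients are themselves polynomials in {{math|''x''}}:
<math display="block">x^2y + xy + x^3 + 1 = (x^2 + x)\,y + (x^3 + 1).</math>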
 
The map from {{math|''R''}} to {{math|''R''[''x'']}} sending {{math|''r''}} to itself considered as a constant polynomial is an injective [[ring homomorphism]], by which {{math|''R''}} is viewed as a subring of {{math|''R''[''x'']}}. In particular, {{math|''R''[''x'']}} is an [[algebra (ring theory)|algebra]] over {{math|''R''}}.
 
One can think of the ring {{math|''R''[''x'']}} as arising from {{math|''R''}} by adding one new element {{math|''x''}} to {{math|''R''}}, and extending in a minimal way to a ring in which {{math|''x''}} satisfies no other relations than the obligatory ones, plus commutation with all elements of {{math|''R''}} (that is {{math|''xr'' {{=}} ''rx''}}). To do this, one must add all powers of {{math|''x''}} and their linear combinations as well.
 
Formation of the polynomial ring, together with the formation of factor rings by factoring out [[ideal (ring theory)|ideals]], is an important tool for constructing new rings out of known ones. For instance, the ring (in fact field) of complex numbers can be constructed from the polynomial ring {{math|''R''[''x'']}} over the real numbers by factoring out the ideal of multiples of the polynomial {{math|''x''<sup>2</sup> + 1}}. Another example is the construction of [[finite field]]s, which proceeds similarly, starting out with the field of integers modulo some [[prime number]] as the coefficient ring {{math|''R''}} (see [[modular arithmetic]]).
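
For example, in the quotient ring {{math|''R''[''x'']/(''x''<sup>2</sup> + 1)}} the residue class of {{math|''x''}} satisfies
<math display="block">x^2 \equiv -1 \pmod{x^2 + 1},</math>
so it plays the role of the imaginary unit {{math|''i''}}, and every residue class has a unique representative of the form {{math|''a'' + ''bx''}} with real {{math|''a''}} and {{math|''b''}}, matching the complex numbers {{math|''a'' + ''bi''}}.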
 
If {{math|''R''}} is commutative, then one can associate with every polynomial {{math|''P''}} in {{math|''R''[''x'']}} a ''polynomial function'' {{math|''f''}} with ___domain and range equal to {{math|''R''}}. (More generally, one can take ___domain and range to be any same [[unital algebra|unital]] [[associative algebra]] over {{math|''R''}}.) One obtains the value {{math|''f''(''r'')}} by [[substitution (algebra)|substitution]] of the value {{math|''r''}} for the symbol {{math|''x''}} in {{math|''P''}}. One reason to distinguish between polynomials and polynomial functions is that, over some rings, different polynomials may give rise to the same polynomial function (see [[Fermat's little theorem]] for an example where {{math|''R''}} is the integers modulo {{math|''p''}}). This is not the case when {{math|''R''}} is the real or complex numbers, whence the two concepts are not always distinguished in [[analysis (mathematics)|analysis]]. An even more important reason to distinguish between polynomials and polynomial functions is that many operations on polynomials (like [[Euclidean division]]) require looking at what a polynomial is composed of as an expression rather than evaluating it at some constant value for {{math|''x''}}.
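
For example, over the [[finite field]] with two elements, {{math|'''F'''<sub>2</sub> {{=}} '''Z'''/2'''Z'''}}, one has
<math display="block">0^2 + 0 = 0 \quad\text{and}\quad 1^2 + 1 = 0,</math>
so the nonzero polynomial {{math|''x''<sup>2</sup> + ''x''}} defines the same (identically zero) polynomial function as the zero polynomial.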
 
=== Divisibility ===
{{Main|Polynomial greatest common divisor|Factorization of polynomials}}
If {{math|''R''}} is an [[integral ___domain]] and {{math|''f''}} and {{math|''g''}} are polynomials in {{math|''R''[''x'']}}, it is said that {{math|''f''}} ''divides'' {{math|''g''}} or {{math|''f''}} is a divisor of {{math|''g''}} if there exists a polynomial {{math|''q''}} in {{math|''R''[''x'']}} such that {{math|''f'' ''q'' {{=}} ''g''}}. If <math>a\in R,</math> then {{mvar|a}} is a root of {{mvar|f}} if and only if <math>x-a</math> divides {{mvar|f}}. In this case, the quotient can be computed using the [[polynomial long division]].<ref>{{Cite book |last=Irving |first=Ronald S. |title=Integers, Polynomials, and Rings: A Course in Algebra |publisher=Springer |year=2004 |isbn=978-0-387-20172-6 |page=129 |url=https://books.google.com/books?id=B4k6ltaxm5YC&pg=PA129}}</ref><ref>{{cite book |last=Jackson |first=Terrence H. |title=From Polynomials to Sums of Squares |publisher=CRC Press |year=1995 |isbn=978-0-7503-0329-3 |page=143 |url=https://books.google.com/books?id=LCEOri2-doMC&pg=PA143}}</ref>
 
If {{math|''F''}} is a [[field (mathematics)|field]] and {{math|''f''}} and {{math|''g''}} are polynomials in {{math|''F''[''x'']}} with {{math|''g'' ≠ 0}}, then there exist unique polynomials {{math|''q''}} and {{math|''r''}} in {{math|''F''[''x'']}} with
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in [[computational complexity theory]] the phrase ''[[polynomial time]]'' means that the time it takes to complete an [[algorithm]] is bounded by a polynomial function of some variable, such as the size of the input.
<math display="block"> f = q \, g + r </math>
and such that the degree of {{math|''r''}} is smaller than the degree of {{math|''g''}} (using the convention that the polynomial 0 has a negative degree). The polynomials {{math|''q''}} and {{math|''r''}} are uniquely determined by {{math|''f''}} and {{math|''g''}}. This is called ''[[Euclidean division of polynomials|Euclidean division]], division with remainder'' or ''polynomial long division'' and shows that the ring {{math|''F''[''x'']}} is a [[Euclidean ___domain]].
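
For example, dividing {{math|''f'' {{=}} ''x''<sup>3</sup> − 2''x''<sup>2</sup> + ''x'' − 1}} by {{math|''g'' {{=}} ''x''<sup>2</sup> + 1}} gives
<math display="block">x^3 - 2x^2 + x - 1 = (x - 2)(x^2 + 1) + 1,</math>
so the quotient is {{math|''q'' {{=}} ''x'' − 2}} and the remainder is {{math|''r'' {{=}} 1}}, whose degree is smaller than the degree of {{math|''g''}}.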
 
Analogously, ''prime polynomials'' (more correctly, ''[[irreducible polynomial]]s'') can be defined as ''non-zero polynomials which cannot be factorized into the product of two non-constant polynomials''. In the case of coefficients in a ring, ''"non-constant"'' must be replaced by ''"non-constant or non-[[unit (ring theory)|unit]]"'' (both definitions agree in the case of coefficients in a field). Any polynomial may be decomposed into the product of an invertible constant by a product of irreducible polynomials. If the coefficients belong to a field or a [[unique factorization ___domain]] this decomposition is unique up to the order of the factors and the multiplication of any non-unit factor by a unit (and division of the unit factor by the same unit). When the coefficients belong to integers, rational numbers or a finite field, there are algorithms to test irreducibility and to compute the factorization into irreducible polynomials (see ''[[Factorization of polynomials]]''). These algorithms are not practicable for hand-written computation, but are available in any [[computer algebra system]]. [[Eisenstein's criterion]] can also be used in some cases to determine irreducibility.
 
== Applications ==
=== Positional notation ===
{{Main|Positional notation}}
In modern positional number systems, such as the [[Decimal|decimal system]], the digits and their positions in the representation of an integer, for example, 45, are a shorthand notation for a polynomial in the [[radix]] or base, in this case, {{nowrap|4 × 10<sup>1</sup> + 5 × 10<sup>0</sup>}}. As another example, in radix 5, a string of digits such as 132 denotes the (decimal) number {{nowrap|1 × 5<sup>2</sup> + 3 × 5<sup>1</sup> + 2 × 5<sup>0</sup>}} = 42. This representation is unique. Let ''b'' be a positive integer greater than 1. Then every positive integer ''a'' can be expressed uniquely in the form
 
<math display="block">a = r_m b^m + r_{m-1} b^{m-1} + \dotsb + r_1 b + r_0,</math>
where ''m'' is a nonnegative integer and the digits ''r''<sub>0</sub>, ''r''<sub>1</sub>, . . . , ''r''<sub>''m''</sub> are integers such that
 
{{math|0 < ''r''<sub>''m''</sub> < ''b''}} and {{math|0 ≤ ''r''<sub>''i''</sub> < ''b''}} for {{math|1=''i'' = 0, 1, . . . , ''m'' − 1}}.<ref>{{harvnb|McCoy|1968|p=75}}</ref>
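
The digits ''r''<sub>0</sub>, . . . , ''r''<sub>''m''</sub> can be obtained by repeated division by the base, keeping the remainders. A minimal Python sketch of this procedure (the function name is an arbitrary illustration) is:

<syntaxhighlight lang="python">
def base_b_digits(a, b):
    """Return the base-b digits of the positive integer a,
    from the least significant digit r_0 up to r_m."""
    digits = []
    while a > 0:
        a, r = divmod(a, b)   # a = b * quotient + r, with 0 <= r < b
        digits.append(r)
    return digits

print(base_b_digits(42, 5))    # [2, 3, 1], since 42 = 1*5**2 + 3*5 + 2
print(base_b_digits(45, 10))   # [5, 4],    since 45 = 4*10 + 5
</syntaxhighlight>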
 
=== Interpolation and approximation ===
{{See also|Polynomial interpolation|Orthogonal polynomials|B-spline|spline interpolation}}
The simple structure of polynomial functions makes them quite useful in analyzing general functions using polynomial approximations. An important example in [[calculus]] is [[Taylor's theorem]], which roughly states that every [[differentiable function]] locally looks like a polynomial function, and the [[Stone–Weierstrass theorem]], which states that every [[continuous function]] defined on a [[compact space|compact]] [[interval (mathematics)|interval]] of the real axis can be approximated on the whole interval as closely as desired by a polynomial function. Practical methods of approximation include [[polynomial interpolation]] and the use of [[spline (mathematics)|splines]].<ref>{{cite book |last=de Villiers |first=Johann |title=Mathematics of Approximation |publisher=Springer |year=2012 |isbn=9789491216503 |url=https://books.google.com/books?id=l5mIro_6RlUC}}</ref>
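
For example, near {{math|''x'' {{=}} 0}} the [[exponential function]] is well approximated by its low-degree Taylor polynomials, such as
<math display="block">e^x \approx 1 + x + \frac{x^2}{2} + \frac{x^3}{6},</math>
and the approximation improves as further terms are included.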
 
=== Other applications ===
Polynomials are frequently used to encode information about some other object. The [[characteristic polynomial]] of a matrix or linear operator contains information about the operator's [[eigenvalue]]s. The [[minimal polynomial (field theory)|minimal polynomial]] of an [[algebraic element]] records the simplest algebraic relation satisfied by that element. The [[chromatic polynomial]] of a [[graph (discrete mathematics)|graph]] counts the number of proper colourings of that graph.
 
The term "polynomial", as an adjective, can also be used for quantities or functions that can be written in polynomial form. For example, in [[computational complexity theory]] the phrase ''[[polynomial time]]'' means that the time it takes to complete an [[algorithm]] is bounded by a polynomial function of some variable, such as the size of the input.
 
== History ==
{{Main|Cubic function#History|Quartic function#History|Abel–Ruffini theorem#History}}
Determining the roots of polynomials, or "solving algebraic equations", is among the oldest problems in mathematics. However, the elegant and practical notation we use today only developed beginning in the 15th century. Before that, equations were written out in words. For example, an algebra problem from the Chinese [[The Nine Chapters on the Mathematical Art|Arithmetic in Nine Sections]], {{circa|200&nbsp;BCE}}, begins "Three sheafs of good crop, two sheafs of mediocre crop, and one sheaf of bad crop are sold for 29 dou." We would write {{math|3''x'' + 2''y'' + ''z'' {{=}} 29}}.
 
=== History of the notation ===
{{Main|History of mathematical notation}}
The earliest known use of the equal sign is in [[Robert Recorde]]'s ''[[The Whetstone of Witte]]'', 1557. The signs + for addition, &minus; for subtraction, and the use of a letter for an unknown appear in [[Michael Stifel]]'s ''Arithmetica integra'', 1544. [[René Descartes]], in ''La géométrie'', 1637, introduced the concept of the graph of a polynomial equation. He popularized the use of letters from the beginning of the alphabet to denote constants and letters from the end of the alphabet to denote variables, as can be seen above, in the general formula for a polynomial in one variable, where the {{math|''a''}}'s denote constants and {{math|''x''}} denotes a variable. Descartes introduced the use of superscripts to denote exponents as well.<ref>{{cite book |first=Howard |last=Eves |title=An Introduction to the History of Mathematics |edition=6th |publisher=Saunders |year=1990 |isbn=0-03-029558-0}}</ref>
 
== See also ==
*[[Binomial]]
*[[Lill's method]]
*[[List of polynomial topics]]
*[[Polynomials on vector spaces]]
*[[Indeterminate (variable)]]
*[[Power series]]
 
== Notes ==
{{Reflist|colwidth=30em}}
{{notelist}}
 
== References ==
<!-- * {{cite book|author= |title= |publisher= |year= |isbn= |url=}} -->
{{Refbegin}}
* {{cite book |last=Barbeau |first=E.J. |title=Polynomials |publisher=Springer |year=2003 |isbn=978-0-387-40627-5 |url=https://books.google.com/books?id=CynRMm5qTmQC}}
* {{ citation | last1 = Beauregard | first1 = Raymond A. | last2 = Fraleigh | first2 = John B. | title = A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields | ___location = Boston | publisher = [[Houghton Mifflin Company]] | year = 1973 | isbn = 0-395-14017-X }}
* {{cite book |editor-last=Bronstein |editor-first=Manuel |title=Solving Polynomial Equations: Foundations, Algorithms, and Applications |publisher=Springer |year=2006 |isbn=978-3-540-27357-8 |url=https://books.google.com/books?id=aIlSmBV3yf8C |display-editors=etal}}
* {{ citation | last1 = Burden | first1 = Richard L. | last2 = Faires | first2 = J. Douglas | year = 1993 | isbn = 0-534-93219-3 | title = Numerical Analysis | edition = 5th | publisher = [[Prindle, Weber and Schmidt]] | ___location = Boston }}
* {{cite book |last1=Cahen |first1=Paul-Jean |last2=Chabert |first2=Jean-Luc |title=Integer-Valued Polynomials |publisher=American Mathematical Society |year=1997 |isbn=978-0-8218-0388-2 |url=https://books.google.com/books?id=AlAluH5is6AC}}
* {{ citation | last1 = Fraleigh | first1 = John B. | year = 1976 | isbn = 0-201-01984-1 | title = A First Course In Abstract Algebra | edition = 2nd | publisher = [[Addison-Wesley]] | ___location = Reading }}
* {{cite book |last1=Horn |first1=Roger A. |last2=Johnson |first2=Charles R. |title=Matrix Analysis |publisher=[[Cambridge University Press]] |isbn=978-0-521-38632-6 |year=1990}}.
* {{Lang Algebra}}. This classical book covers most of the content of this article.
* {{cite book |last=Leung |first=Kam-tim |title=Polynomials and Equations |publisher=Hong Kong University Press |year=1992 |isbn=9789622092716 |url=https://books.google.com/books?id=v5uXkwIUbC8C |display-authors=etal}}
* {{cite journal |last=Mayr |first=K. |title=Über die Auflösung algebraischer Gleichungssysteme durch hypergeometrische Funktionen |journal=Monatshefte für Mathematik und Physik |volume=45 |year=1937 |pages=280–313 |doi=10.1007/BF01707992 |s2cid=197662587}}
* {{citation | last1 = McCoy | first1 = Neal H. | title = Introduction To Modern Algebra, Revised Edition | ___location = Boston | publisher = [[Allyn and Bacon]] | year = 1968 | lccn = 68015225 }}
* {{ citation | last1 = Moise | first1 = Edwin E. | title = Calculus: Complete | ___location = Reading | publisher = [[Addison-Wesley]] | year = 1967 }}
* {{cite book |last=Prasolov |first=Victor V. |title=Polynomials |publisher=Springer |year=2005 |isbn=978-3-642-04012-2 |url=https://books.google.com/books?id=qIJPxdwSqlcC}}
* {{cite book |last=Sethuraman |first=B.A. |chapter=Polynomials |title=Rings, Fields, and Vector Spaces: An Introduction to Abstract Algebra Via Geometric Constructibility |publisher=Springer |year=1997 |isbn=978-0-387-94848-5 |chapter-url=https://books.google.com/books?id=yWnTIqmUOFgC&pg=PA119 |url-access=registration |url=https://archive.org/details/ringsfieldsvecto0000seth }}
* {{cite book |doi=10.1007/978-3-030-75051-0_6|chapter=Polynomial Expressions |title=Elements of Mathematics |series=Undergraduate Texts in Mathematics |year=2021 |last1=Toth |first1=Gabor |pages=263–318 |isbn=978-3-030-75050-3|chapter-url={{Google books|bJhEEAAAQBAJ|page=263|plainurl=yes}}}}
* {{cite book |last=Umemura |first=H. |chapter=Resolution of algebraic equations by theta constants |editor-first=David |editor-last=Mumford |title=Tata Lectures on Theta II: Jacobian theta functions and differential equations |chapter-url=https://books.google.com/books?id=xaNCAAAAQBAJ&pg=PA261 |date=2012 |orig-year=1984 |publisher=Springer |isbn=978-0-8176-4578-6 |pages=261–}}
* {{cite book
| last1 = Varberg | first1 = Dale E.
| last2 = Purcell | first2 = Edwin J.
| last3 = Rigdon | first3 = Steven E.
| title = Calculus
| year = 2007
| publisher = [[Pearson Prentice Hall]]
| edition = 9th
| isbn = 978-0131469686
}}
* {{cite journal |first=F. |last=von Lindemann |title=Ueber die Auflösung der algebraischen Gleichungen durch transcendente Functionen |journal=Nachrichten von der Königl. Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen |volume=1884 |issue= |pages=245–8 |year=1884 |url=https://eudml.org/doc/180024}}
* {{cite journal |first=F. |last=von Lindemann |title=Ueber die Auflösung der algebraischen Gleichungen durch transcendente Functionen. II |journal=Nachrichten von der Königl. Gesellschaft der Wissenschaften und der Georg-Augusts-Universität zu Göttingen |volume=1892 |issue= |pages=245–8 |year=1892 |url=https://eudml.org/doc/180353}}
{{Refend}}
 
== External links ==
{{Commons category|Polynomials}}
{{Wiktionary|polynomial}}
* {{springer |title=Polynomial |id=p/p073690 |author-last1=Markushevich |author-first1=A.I. |oldid=36519}}
* {{cite web |url=http://mathdl.maa.org/mathDL/46/?pa=content&sa=viewDocument&nodeId=640&pf=1 |title=Euler's Investigations on the Roots of Equations |archive-url=https://web.archive.org/web/20120924140505/http://mathdl.maa.org/mathDL/46/?pa=content&sa=viewDocument&nodeId=640&pf=1 |archive-date=September 24, 2012 |url-status=dead}}
*{{MathWorld |title=Polynomial |id=Polynomial}}
 
{{Polynomials}} {{Functions navbox}}
{{Authority control}}
 
[[Category:Polynomials| ]]