{{Short description|Increase in stock value}}
{{Multiple issues|
{{original research|date=April 2011}}
{{essay-like|date=April 2011}}
{{cleanup|date=April 2011}}
}}
 
'''[[Initial public offering#Pricing|IPO underpricing]]''' is the increase in stock value from the [[initial public offering#Pricing|initial offering price]] to the first-day closing price. Many believe that underpriced [[initial public offering|IPO]]s leave money on the table for corporations, but others believe that underpricing is inevitable. Investors argue that underpricing signals high interest to the market, which increases demand. On the other hand, overpriced stocks will drop in the long term as the price stabilizes, so underpricing may keep issuers safe from investor litigation.
 
==IPO underpricing algorithms==
[[Underwriters]], investors, and the corporations going public in an [[initial public offering]] (IPO), the issuers, are all interested in the company's market value. Tension results because the underwriters want to keep the price low while the issuers want a high IPO price.
 
Underpricing may also be caused by investor over-reaction causing spikes on the initial days of trading. The IPO pricing process is similar to pricing new and unique products where there is sparse data on market demand, product acceptance, or competitive response. Underpricing is also affected by firm-specific (idiosyncratic) factors such as the firm's business model.<ref>{{cite journal|last=Morricone|first=Serena |author2=Federico Munari |author3=Raffaele Oriani |author4=Gaétan de Rassenfosse |title=Commercialization Strategy and IPO Underpricing|journal=Research Policy|year=2017|volume=46|issue=6|pages=1133–1141 |doi=10.1016/j.respol.2017.04.006|url=http://cdm-it.epfl.ch/RePEc/iip-wpaper/commercialization_strategy_and_IPO_underpricing.pdf }}</ref> This makes it difficult to determine a clear price, a difficulty compounded by the differing goals of issuers and investors.
 
The problem with developing algorithms to determine underpricing is dealing with [[Statistical noise|noisy]], complex, and unordered data sets. Additionally, human behavior and varying market conditions introduce irregularities into the data. To resolve these issues, researchers have applied various techniques from [[artificial intelligence]] that [[normalization (statistics)|normalize]] the data.
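A common first step of the kind alluded to above is z-score normalization, which rescales differently-sized IPO features so they become comparable. This is a minimal illustrative sketch; the feature name and values are invented, not taken from any cited study.

```python
import statistics

def zscore(values):
    # Rescale so the values have mean 0 and (population) standard deviation 1.
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical feature: offer size in $ millions, with one large outlier.
offer_sizes = [10.0, 12.0, 50.0, 11.0]
print(zscore(offer_sizes))
```

After normalization the outlier is still visible, but the feature is on the same scale as any other normalized input, which is what downstream learning algorithms expect.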
 
==Artificial neural networks==
[[Artificial neural networks]] (ANNs) resolve these issues by scanning the data to develop internal representations of the relationships within the data. By determining those relationships over time, ANNs are more responsive and adaptive to structural changes in the data. There are two models for ANNs: supervised learning and unsupervised learning.
 
In [[supervised learning]] models, the network is trained on examples with known outputs. When a mistake is encountered, i.e. the output produced for a test input does not match the expected output, the algorithm uses [[back propagation]] to correct it. In [[unsupervised learning]] models, the input is classified without labeled outputs, based on structure the network discovers in the data itself.
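The supervised case can be sketched as follows: a small one-hidden-layer network is trained with backpropagation to predict first-day return from a few offering features. The feature names, the synthetic data, and the network size are all assumptions made for the illustration, not from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: [underwriter_prestige, price_range_width, offer_size]
X = rng.uniform(0, 1, size=(200, 3))
# Hypothetical target: underpricing rises with prestige, falls with range width.
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)).reshape(-1, 1)

# Network parameters: 3 inputs -> 8 hidden units (tanh) -> 1 linear output.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
initial_mse = float(np.mean((pred0 - y) ** 2))

lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = pred - y                      # gradient of 0.5*MSE w.r.t. prediction
    # Backpropagation: push the error back through each layer in turn.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh derivative is 1 - tanh^2
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
final_mse = float(np.mean((pred - y) ** 2))
print(initial_mse, final_mse)
```

Each pass compares the network's output against the known target and corrects the weights, which is exactly the "mistake, then back propagation" loop described above.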
 
==Evolutionary models==
[[Evolutionary programming]] is often paired with other algorithms, e.g. [[artificial neural network|ANNs]], to improve robustness, reliability, and adaptability. Evolutionary models reduce error rates by allowing the numerical values to change within the fixed structure of the program. Designers give their algorithms the variables and then provide training data to help the program generate rules defined in the input space that make a prediction in the output space.
 
In this approach, each candidate solution is treated as an individual and the population is made up of alternatives. However, outliers cause the individuals to act unexpectedly as they try to create rules that explain the whole set.
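The idea of evolving numeric values inside a fixed structure can be sketched as a simple evolution strategy: the rule (a linear scoring formula) is fixed, and only its coefficients mutate and compete. The toy data, formula, and fitness function are invented for the demonstration.

```python
import random

random.seed(1)

# Toy data: (underwriter_prestige, price_range_width) -> first-day return,
# generated from a hypothetical "true" rule 0.6*p - 0.4*w.
DATA = [((p, w), 0.6 * p - 0.4 * w)
        for p in (0.0, 0.5, 1.0) for w in (0.1, 0.3, 0.5)]

def error(coeffs):
    # Fitness: total squared prediction error over the data set.
    a, b = coeffs
    return sum((a * p + b * w - target) ** 2 for (p, w), target in DATA)

# Population of alternative coefficient vectors (the "individuals").
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]

for _ in range(200):
    # Mutation: the numeric values change, the rule structure does not.
    children = [[c + random.gauss(0, 0.1) for c in ind] for ind in pop]
    # (mu + lambda) selection: keep the 20 fittest of parents and children.
    pop = sorted(pop + children, key=error)[:20]

best = min(pop, key=error)
print(best, error(best))
```

The best individual converges toward the coefficients that generated the data; an outlier in `DATA` would pull every individual's fitness and distort the evolved rule, which is the weakness the section describes.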
 
===Rule-based system===
For example, Quintana<ref>{{cite book|last=Quintana|first=David|author2=Cristóbal Luque|author3=Pedro Isasi|title=Proceedings of the 7th annual conference on Genetic and evolutionary computation|chapter=Evolutionary rule-based system for IPO underpricing prediction|year=2005|pages=983–989|doi=10.1145/1068009.1068176|hdl=10016/4081|isbn=1595930108|s2cid=3035047|hdl-access=free}}</ref> first abstracts a model with 7 major variables. The rules are evolved with an evolutionary computation system following the Michigan and Pittsburgh approaches:
* Underwriter prestige – Is the underwriter prestigious in the role of lead manager? 1 for true, 0 otherwise.
* Price range width – The width of the non-binding reference price range offered to potential investors during the roadshow. This width can be interpreted as a sign of uncertainty regarding the real value of the company and, therefore, as a factor that could influence the initial return.
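A rule-based system of this kind produces populations of IF-THEN rules over such variables. The following is a hypothetical sketch of what one evolved rule might look like; the thresholds and predicted returns are invented, not Quintana's.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    prestige: int           # required underwriter-prestige flag (1 or 0)
    max_range_width: float  # rule fires only at or below this range width
    predicted_return: float # the rule's prediction when its condition matches

    def matches(self, ipo):
        return (ipo["prestige"] == self.prestige
                and ipo["range_width"] <= self.max_range_width)

# A (hypothetical) evolved rule set; a real system would evolve these
# numeric thresholds and predictions rather than hand-pick them.
RULES = [
    Rule(prestige=1, max_range_width=0.10, predicted_return=0.15),
    Rule(prestige=0, max_range_width=0.25, predicted_return=0.05),
]

def predict(ipo, default=0.0):
    # First matching rule wins; a fixed default covers uncovered inputs.
    for rule in RULES:
        if rule.matches(ipo):
            return rule.predicted_return
    return default

print(predict({"prestige": 1, "range_width": 0.08}))  # 0.15
print(predict({"prestige": 1, "range_width": 0.30}))  # falls through: 0.0
```

In the Michigan approach each rule is an individual and the rule set is the population, while in the Pittsburgh approach a whole rule set like `RULES` is one individual; either way, only the numbers inside the rules evolve.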
 
===Two-layered evolutionary forecasting===
Luque<ref>{{cite book|last=Luque|first=Cristóbal|author2=David Quintana|author3=J. M. Valls|author4=Pedro Isasi|title=2009 IEEE Congress on Evolutionary Computation|chapter=Two-layered evolutionary forecasting for IPO underpricing|year=2009|pages=2374–2378|publisher=IEEE Press|___location=Piscataway, NJ, USA|doi=10.1109/cec.2009.4983237|isbn=978-1-4244-2958-5|s2cid=1733801}}</ref> approaches the problem of outliers by performing linear regressions over the set of data points (input, output). The algorithm handles the data by allocating regions for noisy data. This scheme has the advantage of isolating noisy patterns, which reduces the effect outliers have on the rule-generation system. The algorithm can return later to determine whether the isolated data sets influence the general data. Finally, even the worst results from the algorithm outperformed all other algorithms' predictive abilities.
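The region idea can be sketched in a much-simplified form: partition the input range, fit a linear regression in each region, and flag regions with large residuals as noisy so their outliers do not distort the global rules. The data, split points, and noise threshold below are assumptions for the demonstration, not the paper's method.

```python
import statistics

def fit_line(points):
    # Ordinary least squares for y = a*x + b over a list of (x, y) pairs.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def region_report(points, splits, noise_threshold=0.5):
    # Returns (bounds, fitted line, is_noisy) for each region of the input.
    report = []
    edges = [float("-inf")] + splits + [float("inf")]
    for lo, hi in zip(edges, edges[1:]):
        region = [(x, y) for x, y in points if lo <= x < hi]
        if len(region) < 2:
            continue
        a, b = fit_line(region)
        residuals = [abs(y - (a * x + b)) for x, y in region]
        report.append(((lo, hi), (a, b), max(residuals) > noise_threshold))
    return report

# Clean linear data on [0, 1), plus a region containing one gross outlier.
data = [(x / 10, 0.3 * x / 10) for x in range(10)]
data += [(1.2, 0.36), (1.5, 5.0), (1.8, 0.54)]
for bounds, line, noisy in region_report(data, splits=[1.0]):
    print(bounds, noisy)
```

Only the second region is flagged, so a rule-generation step that skips or revisits flagged regions sees clean data, which mirrors the isolation-of-noisy-patterns advantage described above.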
 
==Agent-based modelling==
[[Category:Initial public offering]]
[[Category:Artificial neural networks]]
[[Category:Applications of evolutionary algorithms]]