IPO underpricing algorithm

Underpricing may also be caused by investor over-reaction, causing spikes in the initial days of trading. The IPO pricing process is similar to pricing new and unique products for which there is sparse data on market demand, product acceptance, or competitive response. Thus it is difficult to determine a clear price, a problem compounded by the differing goals of issuers and investors.
 
The problem with developing algorithms to determine underpricing is dealing with [[Statistical noise|noisy]], complex, and unordered data sets. Additionally, people and various environmental conditions introduce irregularities into the data. To resolve these issues, researchers have applied various techniques from [[artificial intelligence]] that [[normalization (statistics)|normalize]] the data.
 
==Artificial neural network==
In [[supervised learning]] models, there are tests that must be passed to reduce mistakes. Usually, when mistakes are encountered, i.e. the test output does not match the expected output, the algorithm uses [[back propagation]] to correct them. In [[unsupervised learning]] models, by contrast, the input is classified based on which problems need to be resolved.
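
As a rough illustration of the supervised case only, the sketch below repeatedly corrects the error between a single linear unit's output and its target; full [[back propagation]] extends the same correction through hidden layers. The toy data and learning rate are assumptions for illustration and are not drawn from the cited studies.

<syntaxhighlight lang="python">
# Minimal supervised error-correction loop (gradient descent on one linear unit).
# Toy data and learning rate are illustrative only.
import numpy as np

X = np.array([0.2, 0.5, 0.9])   # hypothetical input feature (e.g. an offering signal)
y = np.array([0.1, 0.3, 0.6])   # hypothetical target initial returns

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    pred = w * X + b                  # forward pass
    error = pred - y                  # output does not match the expected output
    w -= lr * np.mean(error * X)      # correct the weight in proportion to the error
    b -= lr * np.mean(error)          # correct the bias

print(round(w, 3), round(b, 3))
</syntaxhighlight>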
 
For example, Chou et al.<ref>{{cite journal|last=Chou|first=Shi-Hao|author2=Yen-Sen Ni|author3=William T. Lin|title=Forecasting IPO price using GA and ANN simulation|journal=Proceedings of the 10th WSEAS International Conference on Signal Processing, Computational Geometry and Artificial Vision (ISCGAV'10)|year=2010|pages=145–150|publisher=World Scientific and Engineering Academy and Society (WSEAS)}}</ref> discuss an algorithm for determining the IPO price of [[Baidu]]. Their algorithm has three levels, described below and followed by a code sketch:
* Input level: the data is received unprocessed.
* Hidden level: the data is processed for analysis.
* Output level: the result of the analysis is delivered.
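
A sketch of such a three-level structure is shown below; the layer sizes, activation function, and feature vector are assumptions for illustration and are not the configuration reported by Chou et al.

<syntaxhighlight lang="python">
# Hypothetical three-level (input / hidden / output) network.
# Layer sizes, activation, and inputs are illustrative, not those of the cited study.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 1                      # 4 raw features -> 1 predicted price

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))    # input level -> hidden level
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))   # hidden level -> output level

def forward(x):
    """Input level receives raw data; hidden level processes it; output level delivers the estimate."""
    h = np.tanh(x @ W1)          # hidden level: process the data for analysis
    return (h @ W2).item()       # output level: deliver the estimated (normalised) price

x = np.array([0.4, 1.2, 0.0, 0.7])                   # one unprocessed feature vector
print(forward(x))
</syntaxhighlight>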
 
==Evolutionary models==
[[Evolutionary programming]] is often paired with other algorithms, e.g. [[artificial neural network|ANNs]], to improve robustness, reliability, and adaptability. Evolutionary models reduce error rates by allowing the numerical values to change within the fixed structure of the program. Designers provide their algorithms with the variables and then provide training data to help the program generate rules defined in the input space that make a prediction in the output variable space.
 
In this approach, the solution is treated as an individual and the population is made up of the alternatives. However, outliers cause the individuals to act unexpectedly as they try to create rules that explain the whole set.
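
A minimal sketch of this idea, assuming the evolved numerical values are the weights of a fixed linear model (population size, mutation scale, and toy data are illustrative):

<syntaxhighlight lang="python">
# Evolve numerical values within a fixed model structure.
# Population size, mutation scale, and toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((50, 3))                      # toy input variables
y = X @ np.array([0.5, -0.2, 0.8])           # toy target initial returns

def fitness(weights):
    return -np.mean((X @ weights - y) ** 2)  # smaller error -> higher fitness

population = [rng.normal(size=3) for _ in range(30)]   # each individual is one candidate solution
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # only the numeric values change; the structure of the model is fixed
    population = survivors + [w + rng.normal(scale=0.05, size=3) for w in survivors]

print(max(population, key=fitness))
</syntaxhighlight>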
 
===Rule-based system===
For example, Quintana et al.<ref>{{cite journal|last=Quintana|first=David|author2=Cristóbal Luque|author3=Pedro Isasi|title=Evolutionary rule-based system for IPO underpricing prediction|journal=Proceedings of the 2005 Conference on Genetic and Evolutionary Computation (GECCO '05)|year=2005|pages=983–989}}</ref> first abstract a model with seven major variables. The rules are evolved from the evolutionary computation systems developed at Michigan and Pittsburgh (a rule-encoding sketch follows the variable list):
* Underwriter prestige – Is the underwriter prestigious in the role of lead manager? 1 for true, 0 otherwise.
* Price range width – The width of the non-binding reference price range offered to potential customers during the roadshow. This width can be interpreted as a sign of uncertainty regarding the real value of the company and, therefore, as a factor that could influence the initial return.
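
A rough sketch of how one such rule over these variables could be encoded and evolved is given below; the rule form, thresholds, fitness measure, and toy data are assumptions and do not reproduce the Michigan- or Pittsburgh-style systems used by Quintana et al.

<syntaxhighlight lang="python">
# Hypothetical rule: "IF prestige == 1 AND range_width > t THEN return = r1 ELSE return = r2".
# Only the numeric values (t, r1, r2) evolve; the rule structure stays fixed.
import random

data = [  # (underwriter prestige, price range width, observed initial return) - toy values
    (1, 0.10, 0.12), (0, 0.25, 0.30), (1, 0.30, 0.28), (0, 0.05, 0.02),
]

def predict(rule, prestige, width):
    t, r1, r2 = rule
    return r1 if prestige == 1 and width > t else r2

def fitness(rule):
    return -sum((predict(rule, p, w) - ret) ** 2 for p, w, ret in data)

population = [(random.random(), random.random(), random.random()) for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # mutate the numeric values of the best rules to form the next generation
    population = parents + [tuple(v + random.gauss(0, 0.05) for v in p) for p in parents]

print(max(population, key=fitness))
</syntaxhighlight>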
 
===Two-layered evolutionary forecasting===
Luque et al.<ref>{{cite journal|last=Luque|first=Cristóbal|author2=David Quintana|author3=J. M. Valls|author4=Pedro Isasi|title=Two-layered evolutionary forecasting for IPO underpricing|journal=Proceedings of the Eleventh Conference on Congress on Evolutionary Computation (CEC'09)|year=2009|pages=2384–2378|publisher=IEEE Press|___location=Piscataway, NJ, USA}}</ref> approach the problem of outliers by performing linear regressions over the set of (input, output) data points. The algorithm deals with the data by allocating separate regions for noisy data. This scheme has the advantage of isolating noisy patterns, which reduces the effect outliers have on the rule-generation system. The algorithm can return later to check whether the isolated data sets influence the general data. Even the worst results from the algorithm outperformed the predictive ability of all the other algorithms tested.
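
A simplified illustration of isolating noisy patterns before fitting is sketched below; the residual threshold, single input variable, and toy data are assumptions and do not reproduce the two-layered system itself.

<syntaxhighlight lang="python">
# Isolate high-residual (noisy) patterns, then re-fit the regression on the clean region.
# Threshold and toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(100)
y = 0.4 * x + rng.normal(scale=0.02, size=100)
y[::10] += 1.0                                    # inject a few outlier patterns

coef = np.polyfit(x, y, 1)                        # initial fit over all (input, output) points
residuals = np.abs(np.polyval(coef, x) - y)
noisy = residuals > 3 * np.median(residuals)      # allocate high-residual points to a noisy region

clean_coef = np.polyfit(x[~noisy], y[~noisy], 1)  # rule generation sees only the clean region
print("isolated", int(noisy.sum()), "noisy patterns; clean fit:", clean_coef)
</syntaxhighlight>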
 
==Agent-based modelling==
Currently, many of the algorithms assume homogeneous and rational behavior among investors. However, an alternative approach to financial modeling, called [[agent-based model|agent-based modelling]] (ABM), is being researched. ABM uses different autonomous agents whose behavior evolves endogenously, which leads to complicated system dynamics that are sometimes impossible to predict from the properties of individual agents.<ref>{{cite journal |last=Brabazon |first=Anthony |author2=Jiang Dang |author3=Ian Dempsy |author4=Michael O'Neill |author5=David M. Edelman |title=Natural Computing in finance: a review |journal=Handbook of Natural Computing |year=2010 |url=http://irserver.ucd.ie/dspace/bitstream/10197/2737/1/NCinFinance_v8.pdf |deadurl=yes}} {{dead link |date=September 2013}}</ref> ABM is starting to be applied to computational finance. However, for ABMs to be more accurate, better models for rule generation need to be developed.
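
A toy sketch of the idea follows: heterogeneous agents with endogenously evolving demand bid for an IPO, and the first-day price emerges from their aggregate behaviour. The agent attributes, update rule, and parameters are purely illustrative assumptions.

<syntaxhighlight lang="python">
# Toy agent-based sketch: the first-day price emerges from heterogeneous agents' demand.
# All agent attributes and parameters are illustrative assumptions.
import random

offer_price = 10.0
agents = [{"optimism": random.uniform(0.8, 1.5)} for _ in range(500)]

price = offer_price
for _ in range(10):                                  # a few trading rounds on the first day
    demand = sum(1 for a in agents if a["optimism"] * offer_price > price)
    supply = len(agents) - demand
    price *= 1 + 0.01 * (demand - supply) / len(agents)   # simple price-impact rule
    for a in agents:                                 # behaviour evolves endogenously
        a["optimism"] += random.gauss(0, 0.01) + (0.02 if price > offer_price else 0.0)

print("emergent first-day price:", round(price, 2),
      "implied underpricing:", round(price / offer_price - 1, 3))
</syntaxhighlight>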
 
== References ==
{{reflist}}
 
{{Corporate finance and investment banking}}