{{Short description|Statistical techniques analyzing facts to make predictions about unknown events}}
{{More citations needed|date=June 2011}}
{{contradictory|date=December 2024}}
 
'''Predictive analytics''' encompasses a variety of [[Statistics|statistical]] techniques from [[data mining]], [[Predictive modelling|predictive modeling]], and [[machine learning]] that analyze current and historical facts to make [[prediction]]s about future or otherwise unknown events.<ref name=":52">{{Cite web |title=To predict or not to Predict |url=https://mccoy-partners.com/updates/to-predict-or-not-to-predict |access-date=2022-05-05 |website=mccoy-partners.com}}</ref> It represents a major subset of machine learning applications; in some contexts, it is synonymous with machine learning.<ref name="Siegel 2013">{{Cite book |last=Siegel |first=Eric |title=Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die (1st ed.) |publisher=[[Wiley (publisher)|Wiley]] |year=2013 |isbn=978-1-1183-5685-2 |language=English}}</ref>
 
In business, predictive models exploit [[Pattern detection|patterns]] found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding [[decision-making]] for candidate transactions.<ref>{{Cite book |last=Coker |first=Frank |title=Pulse: Understanding the Vital Signs of Your Business (1st ed.) |___location=Bellevue, WA |publisher=Ambient Light Publishing |year=2014 |isbn=978-0-9893086-0-1 |pages=30, 39, 42, more}}</ref>
 
== Definition ==
Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future.<ref name=":4">{{Cite web |last=Eckerson |first=Wayne W. |date=2007 |title=Predictive Analytics. Extending the Value of Your Data Warehousing Investment |url=http://download.101com.com/pub/tdwi/files/pa_report_q107_f.pdf}}</ref> Statistical techniques used in predictive analytics include [[data modeling]], [[machine learning]], [[Artificial intelligence|AI]], [[deep learning]] algorithms and [[data mining]]. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown, whether in the past, present or future; examples include identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs.<ref>{{Cite book |last=Finlay |first=Steven |title=Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods (1st ed.) |publisher=[[Palgrave Macmillan]] |year=2014 |isbn=978-1137379276 |___location=Basingstoke |pages=237 |language=English}}</ref> The core of predictive analytics relies on capturing relationships between [[explanatory variable]]s and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. The accuracy and usability of results, however, depend greatly on the level of data analysis and the quality of assumptions.<ref name=":52" />
 
Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from [[forecasting]]. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions."<ref name="Siegel 2013" /> In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero break-down and further be integrated into [[prescriptive analytics]] for decision optimization.<ref>{{Cite book |last=Spalek |first=Seweryn |title=Data Analytics in Project Management |publisher=Taylor & Francis Group, LLC |year=2019 |language=English}}</ref>
 
== Big Data ==
While there is no universal definition of big data, most definitions refer to the processing of a large set of data points into a finished product. Big data analytics comes into play when a dataset is too large to be analyzed using traditional analysis techniques. However, size is not the only factor that defines big data.
 
Gartner's definition of big data is useful in explaining the defining properties of big data: "Big data is high-volume, high-velocity and/or high variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation."<ref name=":2">{{Cite web |title=Definition of Big Data - Gartner Information Technology Glossary |url=https://www.gartner.com/en/information-technology/glossary/big-data |access-date=2022-04-28 |website=Gartner |language=en-US}}</ref> These properties are sometimes referred to as the 3 Vs of big data.
 
Volume refers to the size of the data. There is no universal size criterion that determines whether a dataset is "big", because size is relative: terabytes of data may be considered big data at one firm, while another firm may only apply the term at the scale of a petabyte or an exabyte.
 
Velocity refers to the speed at which data is created, stored, and analyzed. Batch processing was traditionally used to handle large blocks of data, but it is time-consuming and only suitable when decisions do not depend on fast-paced processing. Modern markets, however, often require real-time processing to support decision making in highly volatile and competitive environments.
 
Variety refers to the different types of data: structured, semi-structured, and unstructured. "Structured data is data that adheres to a predefined data model and is therefore straightforward to analyze."<ref name=":2" /> Structured data generally has rows and columns that can be sorted and searched with basic techniques; spreadsheets and relational databases are typical examples. Unstructured data does not adhere to a predefined data model and does not contain rows or columns to organize it, which makes it more difficult to analyze than structured data, which can be processed with traditional tools such as Excel and SQL. Examples of unstructured data include emails, PDF files, and Google searches. Storing and processing unstructured data has become much easier in recent years thanks to tools such as Power BI and Tableau.
 
"Semi-structured data lies in between structured and unstructured data. It does not adhere to a formal data structure yet does contain tags and other markers to organize the data."<ref name=":1">{{Cite book |last1=McCarthy |first1=Richard |title=Applying Predictive Analytics: Finding Value in Data |last2=McCarthy |first2=Mary |last3=Ceccucci |first3=Wendy |publisher=Springer |year=2021}}</ref> The semi-structured category of data is much easier to analyze than unstructured data. Many big data tools can 'read' and process semi-structured forms of data like XML or JSON files.
 
The volume, variety and velocity of big data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization. Examples of big data sources include [[web log]]s, [[RFID]], [[Sensor network|sensor]] data, [[social network]]s, Internet search indexing, call detail records, military surveillance, and complex data in astronomic, biogeochemical, genomics, and atmospheric sciences. Thanks to technological advances in computer hardware—faster CPUs, cheaper memory, and [[Massive parallel processing|MPP]] architectures—and new technologies such as [[Hadoop]], [[MapReduce]], and [[In-database processing|in-database]] and [[text analytics]] for processing big data, it is now feasible to collect, analyze, and mine massive amounts of structured and [[unstructured data]] for new insights. It is also possible to run predictive algorithms on streaming data. Today, exploring big data and using predictive analytics is within reach of more organizations than ever before, and new methods capable of handling such datasets continue to be proposed.
 
== Analytical Techniques ==
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
 
=== Machine Learning ===
{{main|Machine Learning}}
Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models.<ref>{{Cite web |title=Machine learning, explained |url=https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained |access-date=2022-05-06 |website=MIT Sloan |language=en}}</ref>
 
==== Autoregressive Integrated Moving Average (ARIMA) ====
{{main|ARIMA}}
 
ARIMA models are a common example of time series models. They use autoregression, meaning the model can be fitted with statistical software that performs most of the regression analysis and smoothing automatically. ARIMA models are applied to series with no overall trend, but with variation around an average of constant amplitude, resulting in statistically similar time patterns. Through this, variables are analyzed and the data is filtered in order to better understand and predict future values.<ref name=":0">{{Cite journal |last=Kinney |first=William R. |date=1978 |title=ARIMA and Regression in Analytical Review: An Empirical Test |journal=The Accounting Review |volume=53 |issue=1 |pages=48–60 |jstor=245725 |issn=0001-4826}}</ref><ref>{{Cite web |title=Introduction to ARIMA models |url=https://people.duke.edu/~rnau/411arim.htm |access-date=2022-05-06 |website=people.duke.edu}}</ref>
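
As a minimal sketch only (neither the library, the model order, nor the synthetic series comes from the sources cited above), an ARIMA model can be fitted and used to forecast future periods with the Python library statsmodels:
<syntaxhighlight lang="python">
# Illustrative sketch: fitting an ARIMA(p, d, q) model to a synthetic series.
# The order (1, 1, 1) and the random-walk data are arbitrary assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(loc=0.1, scale=1.0, size=200))  # drifting random walk

model = ARIMA(series, order=(1, 1, 1))  # (autoregressive, differencing, moving-average) terms
fitted = model.fit()

print(fitted.forecast(steps=12))  # predicted values for the next 12 periods
</syntaxhighlight>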
 
One example of an ARIMA-type method is exponential smoothing. Exponential smoothing takes into account the difference in importance between older and newer data sets, as more recent data is more accurate and valuable for predicting future values. To accomplish this, exponentially decreasing weights are applied so that newer observations count more heavily in the calculations than older ones.<ref>{{Cite web |title=6.4.3. What is Exponential Smoothing? |url=https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc43.htm |access-date=2022-05-06 |website=www.itl.nist.gov}}</ref>
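
In the simplest (single) form of exponential smoothing, this weighting can be written as a recurrence, where <math>x_t</math> is the observation at time <math>t</math>, <math>S_t</math> the smoothed value, and <math>0 < \alpha \le 1</math> a smoothing constant (the notation here is chosen for illustration):
<math display="block">S_t = \alpha x_t + (1 - \alpha) S_{t-1},</math>
so that the weight given to an observation <math>k</math> periods old decays geometrically as <math>\alpha (1 - \alpha)^k</math>.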
 
==== Time Series Models ====
{{main|Time series}}
 
Time series models are a subset of machine learning that utilize time series in order to understand and forecast data using past values. A time series is the sequence of a variable's value over equally spaced periods, such as years or quarters in business applications.<ref>{{Cite web |title=6.4.1. Definitions, Applications and Techniques |url=https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc41.htm |access-date=2022-05-06 |website=www.itl.nist.gov}}</ref> To reveal trends, the data must first be smoothed, meaning that its random variance is removed; there are multiple ways to accomplish this.
 
===== Single Moving Average =====
{{main|Moving average}}
 
Single moving average methods smooth a time series by averaging successive, smaller sets of past data. This reduces the error associated with taking a single average of the entire data set, yielding a more accurate running average.<ref>{{Cite web |title=6.4.2.1. Single Moving Average |url=https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc421.htm |access-date=2022-05-06 |website=www.itl.nist.gov}}</ref>
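
With notation introduced here for illustration, a single (trailing) moving average of span <math>n</math> replaces the observation at time <math>t</math> with the mean of the <math>n</math> most recent values:
<math display="block">\bar{x}_t = \frac{1}{n} \sum_{i=0}^{n-1} x_{t-i}.</math>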
 
===== Centered Moving Average =====
Centered moving average methods build on the single moving average by averaging the data around the central point of each window. Because the central point is difficult to define for even-numbered windows, this method works better with odd-numbered data sets than even.<ref>{{Cite web |title=6.4.2.2. Centered Moving Average |url=https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc422.htm |access-date=2022-05-06 |website=www.itl.nist.gov}}</ref>
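
A minimal sketch, assuming the pandas library, an arbitrary example series, and a window of three periods (none of which come from the cited sources), shows both a single (trailing) and a centered moving average:
<syntaxhighlight lang="python">
# Illustrative sketch: single (trailing) and centered moving averages with pandas.
# The window length and the example values are arbitrary assumptions.
import pandas as pd

series = pd.Series([9, 8, 9, 12, 9, 12, 11, 7, 13, 9, 11, 10])

single = series.rolling(window=3).mean()                  # trailing moving average
centered = series.rolling(window=3, center=True).mean()   # centered moving average (odd window)

print(pd.DataFrame({"value": series, "single": single, "centered": centered}))
</syntaxhighlight>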
 
=== Predictive Modeling ===
{{main|Predictive modeling}}
Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze the relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. The models can analyze both historical and current data and generate a model in order to predict potential future outcomes.<ref name=":1" />
 
Regardless of the methodology used, the process of creating predictive models generally involves the same steps. First, determine the project objectives and desired outcomes and translate them into predictive analytic objectives and tasks. Then, analyze the source data to determine the most appropriate data and model-building approach (models are only as useful as the applicable data used to build them). Select and transform the data in order to create models. Create and test the models to evaluate whether they are valid and able to meet project goals and metrics. Apply the model's results to appropriate business processes (identifying patterns in the data does not necessarily mean a business will understand how to take advantage of them). Afterward, manage and maintain the models in order to standardize and improve performance (demand for model management will increase in order to meet new compliance regulations).<ref name=":4" />
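
The following compressed sketch walks through those same steps (select and transform data, build a model, test it against a metric, then apply it to new cases) using scikit-learn; the synthetic data, the features, and the choice of logistic regression are assumptions made purely for illustration:
<syntaxhighlight lang="python">
# Illustrative sketch of the generic model-building steps described above.
# The synthetic data, features, and model choice are assumptions, not a prescribed method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# "Source data": two hypothetical predictors and a binary outcome.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Select and transform the data: hold out a test set for validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Create and test the model against a project metric (here, AUC).
model = LogisticRegression().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Apply the model: score a new candidate case with a predictive probability.
print("score for a new case:", model.predict_proba([[0.2, -1.0]])[0, 1])
</syntaxhighlight>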
 
=== Regression Techniques ===
{{main|Regression analysis}}
Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions.<ref name=":0" />
 
==== Linear Regression ====
{{main|Linear regression}}
In linear regression, a plot is constructed with the previous values of the dependent variable on the Y-axis and the independent variable being analyzed on the X-axis. A statistical program then fits a regression line representing the relationship between the independent and dependent variables, which can be used to predict values of the dependent variable from the independent variable alone. Along with the regression line, the program reports a slope-intercept equation that includes an error term; the higher the value of the error term, the less precise the regression model is. To decrease the error term, other independent variables are introduced to the model, and similar analyses are performed on them.<ref name=":0" /><ref>{{Cite web |title=Linear Regression |url=http://www.stat.yale.edu/Courses/1997-98/101/linreg.htm |access-date=2022-05-06 |website=www.stat.yale.edu}}</ref> Additionally, multiple linear regression (MLR) can be employed to address relationships involving multiple independent variables, offering a more comprehensive modeling approach.<ref>{{Cite journal |last1=Li |first1=Meng |last2=Liu |first2=Jiqiang |last3=Yang |first3=Yeping |date=2023-10-14 |title=Financial Data Quality Evaluation Method Based on Multiple Linear Regression |journal=Future Internet |language=en |volume=15 |issue=10 |pages=338 |doi=10.3390/fi15100338 |issn=1999-5903 |doi-access=free}}</ref>
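
As a brief sketch of the fitted-line idea, the line takes the form <math>y = \beta_0 + \beta_1 x + \varepsilon</math>, where <math>\varepsilon</math> is the error term; the code below (using scikit-learn and made-up data, neither of which comes from the cited sources) fits a simple regression on one independent variable and then a multiple linear regression that adds a second variable to reduce the unexplained error:
<syntaxhighlight lang="python">
# Illustrative sketch: simple and multiple linear regression on synthetic data.
# The data-generating relationship y = 2*x1 + 3*x2 + noise is an arbitrary assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x1 = rng.normal(size=(200, 1))
x2 = rng.normal(size=(200, 1))
y = 2 * x1 + 3 * x2 + rng.normal(scale=0.5, size=(200, 1))

simple = LinearRegression().fit(x1, y)                      # one independent variable
multiple = LinearRegression().fit(np.hstack([x1, x2]), y)   # multiple linear regression (MLR)

# Adding the second variable explains more variance, shrinking the error term.
print("R^2, simple:  ", simple.score(x1, y))
print("R^2, multiple:", multiple.score(np.hstack([x1, x2]), y))
</syntaxhighlight>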
 
== Applications ==
 
=== Analytical Review and Conditional Expectations in Auditing ===
 
An important aspect of auditing is analytical review, in which the auditor assesses the reasonableness of the reported account balances under investigation. Auditors accomplish this through predictive modeling, forming predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods,<ref name=":0" /> specifically the Statistical Technique for Analytical Review (STAR) methods.<ref name=":3">{{Cite journal |last1=Kinney |first1=William R. |last2=Salamon |first2=Gerald L. |date=1982 |title=Regression Analysis in Auditing: A Comparison of Alternative Investigation Rules |journal=Journal of Accounting Research |volume=20 |issue=2 |pages=350–366 |doi=10.2307/2490745 |jstor=2490745 |issn=0021-8456}}</ref>
 
 
The STAR methods operate using regression analysis and take two forms. The first is the STAR monthly balance approach, in which the conditional expectations and the regression analysis are tied to a single month being audited. The second is the STAR annual balance approach, which works on a larger scale by basing the conditional expectations and regression analysis on a full year being audited. Apart from the difference in the period audited, both methods operate the same way, comparing expected and reported balances to determine which accounts to investigate further.<ref name=":3" />
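
As a hedged illustration of the monthly-balance idea (the predictor, balances, and investigation threshold below are invented for the example and are not taken from the STAR literature), a regression fitted to prior months yields a conditional expectation for the audited month, which is then compared with the reported balance:
<syntaxhighlight lang="python">
# Hypothetical illustration: a regression-based conditional expectation for one audited month.
# The predictor (units shipped), the balances, and the threshold are invented for this example.
import numpy as np

units_shipped = np.array([120, 135, 150, 160, 170, 180, 190, 205, 215, 230, 240, 255])
reported_sales = np.array([61, 67, 74, 79, 86, 90, 96, 102, 108, 114, 121, 150])

# Fit a regression on the first 11 months; month 12 is the one being audited.
slope, intercept = np.polyfit(units_shipped[:11], reported_sales[:11], 1)
expected = intercept + slope * units_shipped[11]  # conditional expectation for month 12

difference = reported_sales[11] - expected
threshold = 10.0  # hypothetical materiality limit
print(f"expected {expected:.1f}, reported {reported_sales[11]}, "
      f"investigate: {abs(difference) > threshold}")
</syntaxhighlight>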
 
Furthermore, the incorporation of analytical procedures into auditing standards underscores the increasing necessity for auditors to modify these methodologies to suit particular datasets, which reflects the ever-changing nature of financial examination.<ref>{{Cite journal |last=Wilson |first=Arlette C. |date=1991 |title=Use of Regression Models as Analytical Procedures: An Empirical Investigation of Effect of Data Dispersion on Auditor Decisions |url=http://journals.sagepub.com/doi/10.1177/0148558X9100600307 |journal=Journal of Accounting, Auditing & Finance |language=en |volume=6 |issue=3 |pages=365–381 |doi=10.1177/0148558X9100600307 |s2cid=154468768 |issn=0148-558X}}</ref>
 
=== Business Value ===
As more and more data is created and stored digitally, businesses are looking for ways to use this information to generate profits. Predictive analytics can provide benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms, and companies that use project management to achieve their goals can likewise benefit from predictive intelligence capabilities. In a study conducted by IDC, Dan Vesset and Henry D. Morris describe how an asset management firm used predictive analytics to develop a better marketing campaign. The firm moved from a mass-marketing approach to a customer-centric approach: instead of sending the same offer to every customer, it personalized each offer to the individual customer. Predictive analytics was used to predict the likelihood that a prospective customer would accept a personalized offer. As a result of the campaign, the firm's acceptance rate rose sharply, with three times as many people accepting personalized offers.<ref>{{Cite journal |last1=Vesset |first1=Dan |last2=Morris |first2=Henry D. |date=June 2011 |title=The Business Value of Predictive Analytics |url=http://nexdimension.net/wp-content/uploads/2013/04/ibm-spss-predictive-analytics-business-value-whitepaper.pdf |journal=White Paper |pages=1–3}}</ref>
 
Technological advances in predictive analytics<ref>{{cite web |last1=Clay |first1=Halton |title=Predictive Analytics: Definition, Model Types, and Uses |url=https://www.investopedia.com/terms/p/predictive-analytics.asp |website=Investopedia |access-date=8 January 2024}}</ref> have increased its value to firms. One advancement is more powerful computers, which allow predictive analytics to produce forecasts on large data sets much faster. Increased computing power has also brought more data and applications, meaning a wider array of inputs for predictive analytics. Another advance is more user-friendly interfaces, lowering the barrier to entry and reducing the training employees need to use the software and applications effectively. Because of these advancements, many more corporations are adopting predictive analytics and seeing benefits in employee efficiency and effectiveness, as well as profits.<ref>{{Cite book |last=Stone |first=Paul |title=All Days |date=April 2007 |chapter=Introducing Predictive Analytics: Opportunities |doi=10.2118/106865-MS |chapter-url=https://onepetro.org/SPEDEC/proceedings-abstract/07DEC/All-07DEC/SPE-106865-MS/141862}}</ref> The percentage of projects that fail is fairly high: roughly 70% of all projects fail to deliver what was promised to customers. The implementation of a management process, however, has been shown to reduce the failure rate to 20% or below.<ref>{{cite web |last1=Team Stage |title=Project Management Statistics: Trends and Common Mistakes in 2023 |url=https://teamstage.io/project-management-statistics/ |website=TeamStage |date=29 May 2021 |access-date=8 January 2024}}</ref>
 
=== Cash-flow Prediction ===
=== Child protection ===
Some child welfare agencies have started using predictive analytics to flag high risk cases.<ref>{{Cite web |last=Reform |first=Fostering |date=2016-02-03 |title=New Strategies Long Overdue on Measuring Child Welfare Risk |url=https://imprintnews.org/blogger-co-op/new-strategies-long-overdue-measuring-child-welfare-risk/15442 |access-date=2022-05-03 |website=The Imprint |language=en-US}}</ref> For example, in [[Hillsborough County, Florida]], the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population.<ref>{{Cite journal |date=2016 |title=Within Our Reach: A National Strategy to Eliminate Child Abuse and Neglect Fatalities |url=https://www.acf.hhs.gov/sites/default/files/documents/cb/cecanf_final_report.pdf |journal=Commission to Eliminate Child Abuse and Neglect Fatalities}}</ref>
 
=== Clinical decision support systems ===
Predictive analytics has found use in health care primarily to determine which patients are at risk of developing conditions such as diabetes, asthma, or heart disease. Additionally, sophisticated [[clinical decision support system]]s incorporate predictive analytics to support medical decision making.
 
A 2016 study of [[Neurodegeneration|neurodegenerative disorders]] provides a powerful example of a CDS platform to diagnose, track, predict and monitor the progression of [[Parkinson's disease]].<ref>{{Cite journal |last1=Dinov |first1=Ivo D. |last2=Heavner |first2=Ben |last3=Tang |first3=Ming |last4=Glusman |first4=Gustavo |last5=Chard |first5=Kyle |last6=Darcy |first6=Mike |last7=Madduri |first7=Ravi |last8=Pa |first8=Judy |last9=Spino |first9=Cathie |last10=Kesselman |first10=Carl |last11=Foster |first11=Ian |date=2016-08-05 |title=Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations |journal=[[PLOS ONE]] |volume=11 |issue=8 |pages=e0157077 |doi=10.1371/journal.pone.0157077 |issn=1932-6203 |pmc=4975403 |pmid=27494614 |bibcode=2016PLoSO..1157077D |doi-access=free}}</ref>
 
=== Predicting outcomes of legal decisions ===
=== Underwriting ===
Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can help [[underwrite]] these quantities by predicting the chances of illness, [[Default (finance)|default]], [[bankruptcy]], etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default. Predictive analytics can be used to mitigate moral hazard and prevent accidents from occurring.<ref>{{Cite journal |last1=Montserrat |first1=Guillen |last2=Cevolini |first2=Alberto |date=November 2021 |title=Using Risk Analytics to Prevent Accidents Before They Occur – The Future of Insurance |url=https://www.capco.com/Capco-Institute/Journal-54-Insurance/Using-Risk-Analytics-To-Prevent-Accidents-Before-They-Occur-The-Future-Of-Insurance |journal=Journal of Financial Transformation}}</ref>
 
=== Policing ===
{{main|Predictive policing}}
Police agencies are now utilizing proactive strategies for crime prevention. Predictive analytics, which utilizes statistical tools to forecast crime patterns, provides new ways for police agencies to mobilize resources and reduce levels of crime.<ref>{{Cite journal |last1=Towers |first1=Sherry |last2=Chen |first2=Siqiao |last3=Malik |first3=Abish |last4=Ebert |first4=David |date=2018-10-24 |editor-last=Eisenbarth |editor-first=Hedwig |title=Factors influencing temporal patterns in crime in a large American city: A predictive analytics perspective |journal=PLOS ONE |language=en |volume=13 |issue=10 |pages=e0205151 |doi=10.1371/journal.pone.0205151 |issn=1932-6203 |pmc=6200217 |pmid=30356321 |bibcode=2018PLoSO..1305151T |doi-access=free }}</ref> With predictive analysis of crime data, police can better allocate limited resources and manpower to prevent more crimes from happening. Directed patrol or problem-solving can be employed to protect crime hot spots, which exhibit crime densities much higher than the average in a city.<ref>{{Cite journal |last1=Fitzpatrick |first1=Dylan J. |last2=Gorr |first2=Wilpen L. |last3=Neill |first3=Daniel B. |date=2019-01-13 |title=Keeping Score: Predictive Analytics in Policing |url=https://www.annualreviews.org/doi/10.1146/annurev-criminol-011518-024534 |journal=Annual Review of Criminology |language=en |volume=2 |issue=1 |pages=473–491 |doi=10.1146/annurev-criminol-011518-024534 |s2cid=169389590 |issn=2572-4568}}</ref>
 
=== Sports ===
Several firms have emerged specializing in predictive analytics in the field of professional sports for both teams and individuals.<ref>{{Cite web |title=Free AI Sports Picks & Predictions for Today's Games |url=https://leans.ai/ |access-date=2023-07-08 |website=LEANS.AI |language=en-US}}</ref> While predictions of human behavior carry wide variance due to the many factors that can change after predictions are made, including injuries, officiating, coaching decisions, and weather, the use of predictive analytics to project long-term trends and performance remains useful. Much of the field was started by the [[Moneyball: The Art of Winning an Unfair Game|Moneyball]] concept of [[Billy Beane]] near the turn of the century, and now most professional sports teams employ their own analytics departments.
 
== See also ==
* [[Artificial intelligence in healthcare]]
* [[Analytical procedures (finance auditing)]]
* [[Big data]]
* [[Computational sociology]]
* [[Criminal Reduction Utilising Statistical History]]
[[Category:Business intelligence]]
[[Category:Actuarial science]]
[[Category:Big data|analytics]]
[[Category:Types of analytics]]
[[Category:Predictive analytics|*]]