{{refimprove|date=May 2016}}
'''Continuous analytics''' is a [[data science]] process that abandons [[Extract,_transform,_load|ETLs]] and complex batch [[Data pipeline|data pipelines]] in favor of [[Cloud computing|cloud]]-native and [[microservices]] paradigms. Continuous [[data processing]] enables real-time interactions and immediate insights with fewer resources.
== Definition ==
[[Analytics]] is the application of [[mathematics]] and [[statistics]] to big data. Data scientists write analytics programs to look for solutions to business problems, such as forecasting [[demand]] or setting an optimal price. The continuous approach runs multiple stateless engines which concurrently enrich, aggregate, infer and act on the data. Data scientists, dashboards and client apps all access the same raw data or real-time derivatives of it, with identity-based security, [[data masking]] and [[Version control|versioning]] applied in real time.
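For illustration, the following is a minimal sketch (in Python, not drawn from any particular product) of one such stateless step: it enriches raw sales events, aggregates revenue, and infers a simple moving-average demand forecast. All names and figures are hypothetical.

<syntaxhighlight lang="python">
# Illustrative sketch of an enrich -> aggregate -> infer step.
# Event fields, window size, and data values are hypothetical.
from statistics import mean
from typing import Dict, List


def enrich(event: Dict) -> Dict:
    """Enrichment stage: add a derived revenue field to a raw sales event."""
    return {**event, "revenue": event["units"] * event["unit_price"]}


def forecast_demand(daily_units: List[int], window: int = 3) -> float:
    """Inference stage: forecast next-day demand as a moving average of recent days."""
    recent = daily_units[-window:]
    return mean(recent)


if __name__ == "__main__":
    events = [
        {"day": 1, "units": 100, "unit_price": 9.5},
        {"day": 2, "units": 120, "unit_price": 9.5},
        {"day": 3, "units": 90,  "unit_price": 9.5},
    ]
    enriched = [enrich(e) for e in events]                      # enrich
    total_revenue = sum(e["revenue"] for e in enriched)         # aggregate
    next_day = forecast_demand([e["units"] for e in enriched])  # infer
    print(total_revenue, round(next_day, 2))                    # 2945.0 103.33
</syntaxhighlight>

In a continuous setting, many such stateless steps would run concurrently over incoming data rather than as a scheduled batch job.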
Traditionally, data scientists have not been part of [[IT]] development teams in the way that regular [[Java (programming language)|Java]] programmers are, because their skills place them in a separate department not normally related to IT. As a result, their approach to writing software code does not enjoy the same efficiencies as a traditional programming team. In particular, traditional programming has adopted the continuous release approach, which ships software in a repeating cycle of short iterations; because that way of working has become commonplace, many software tools exist to support it.
Continuous analytics is therefore the extension of the [[continuous delivery]] software development model to the [[big data]] analytics development team. The goal of the continuous analytics practitioner is to find ways to incorporate writing analytics code and installing big data software into the agile development model of automatically running unit and functional tests and building the environment with automated tools.
Making this work means getting [[data scientists]] to write their code in the same [[code repository]] that regular programmers use, so that the build system can pull it from there and run it through the build process. It also means saving the configuration of the big data cluster (sets of [[Virtual machine|virtual machines]]) in a repository of its own. That makes it possible to deploy analytics code, big data software and related objects in the same automated way as the continuous integration process, as in the sketch below.<ref>{{cite web|url=http://southernpacificreview.com/2016/05/17/continuous-analytics-defined/|title=Continuous Analytics Defined|website=Southern Pacific Review|accessdate=17 May 2016}}</ref><ref>{{cite web|last1=Pushkarev|first1=Stepan|title=Tear down the Wall between Data Science and DevOps|url=https://www.linkedin.com/pulse/tear-down-wall-between-data-science-devops-stepan-pushkarev?trk=prof-post|website=LinkedIn|accessdate=17 May 2016}}</ref>
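Continuing the illustrative sketch above, a hypothetical unit test shows how analytics code kept in the shared repository could be exercised by the same automated test stage as the rest of the codebase (assuming the forecasting functions above are saved as <code>forecasting.py</code> in that repository; the module name and test cases are assumptions for illustration).

<syntaxhighlight lang="python">
# Hypothetical unit test for analytics code checked into the shared repository,
# intended to run in the same automated build/test stage as regular application code.
import unittest

from forecasting import forecast_demand  # assumed module from the sketch above


class TestForecastDemand(unittest.TestCase):
    def test_moving_average(self):
        # 3-day moving average of the last three observations
        self.assertAlmostEqual(forecast_demand([100, 120, 90], window=3), 103.33, places=2)

    def test_short_history(self):
        # With fewer observations than the window, use what is available
        self.assertEqual(forecast_demand([50], window=3), 50)


if __name__ == "__main__":
    unittest.main()
</syntaxhighlight>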
== External links ==
* [http://hydrosphere.io/blog/continuous-analytics-defined/ Continuous analytics]
* [https://www.oreilly.com/ideas/data-scientists-and-the-analytic-lifecycle Development model]
==References==
{{Reflist}}
[[Category:Data analysis]]
[[Category:Big data]]
{{Database-stub}}