In-memory processing: Difference between revisions

Line 2:
{{advert|date=November 2018}}
{{cleanup rewrite|date=January 2020}}
In [[computer science]], '''in-memory processing''', also known as '''processing in memory''' ('''PIM'''), is a [[computer architecture]] for [[data processing|processing]] data stored in an [[in-memory database]].<ref>{{Cite journal |last=Ghose |first=S. |date=November 2019 |title=Processing-in-memory: A workload-driven perspective |url=https://www.pdl.cmu.edu/PDL-FTP/associated/19ibmjrd_pim.pdf |journal=IBM Journal of Research and Development |volume=63 |issue=6 |pages=3:1–19 |doi=10.1147/JRD.2019.2934048 |s2cid=202025511}}</ref> In-memory processing reduces the [[Electric power|power usage]] and [[Computer performance|performance]] costs of moving data between the processor and the main memory.<ref>{{Cite book |last1=Chi |first1=Ping |last2=Li |first2=Shuangchen |last3=Xu |first3=Cong |last4=Zhang |first4=Tao |last5=Zhao |first5=Jishen |last6=Liu |first6=Yongpan |last7=Wang |first7=Yu |last8=Xie |first8=Yuan |date=June 2016 |chapter=PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory |chapter-url=https://ieeexplore.ieee.org/document/7551380 |title=2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) |___location=Seoul, South Korea |publisher=IEEE |pages=27–39 |doi=10.1109/ISCA.2016.13 |isbn=978-1-4673-8947-1}}</ref> Older systems have been based on [[disk storage]] and [[relational database]]s using [[Structured Query Language]], which are increasingly regarded as inadequate to meet [[business intelligence]] (BI) needs. Because stored data is accessed much more quickly when it is placed in [[random-access memory]] (RAM) or [[flash memory]], in-memory processing allows data to be analyzed in [[Real-time computing|real time]], enabling faster reporting and decision-making in business.<ref>{{cite book|last1=Plattner|first1=Hasso|last2=Zeier|first2=Alexander|title=In-Memory Data Management: Technology and Applications|date=2012|publisher=Springer Science & Business Media|isbn=9783642295744|url=https://books.google.com/books?id=HySCgzCApsEC&q=%22in-memory%22|language=en}}</ref><ref>{{cite journal|first=Hao|last=Zhang|author2=Gang Chen|author3=Beng Chin Ooi|author4=Kian-Lee Tan|author5=Meihui Zhang|title=In-Memory Big Data Management and Processing: A Survey|journal=IEEE Transactions on Knowledge and Data Engineering|date=July 2015|volume=27|issue=7|pages=1920–1948|doi=10.1109/TKDE.2015.2427795|doi-access=free}}</ref>
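The speed difference between disk-resident and memory-resident data access can be illustrated with a minimal sketch (assuming Python's built-in <code>sqlite3</code> module; the table name, schema, and row count below are hypothetical illustration values, and actual timings depend heavily on hardware and caching):

<syntaxhighlight lang="python">
# Sketch: the same aggregate query against a disk-backed database and an
# in-memory database. ":memory:" tells SQLite to keep all pages in RAM.
import sqlite3
import time

def populate(conn, rows=100_000):
    """Create a small sales table and fill it with synthetic rows."""
    conn.execute("DROP TABLE IF EXISTS sales")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        (("north" if i % 2 else "south", float(i)) for i in range(rows)),
    )
    conn.commit()

def timed_query(conn):
    """Run an aggregate query and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"
    ).fetchall()
    return result, time.perf_counter() - start

disk_db = sqlite3.connect("sales.db")   # database pages stored on disk
ram_db = sqlite3.connect(":memory:")    # entire database held in RAM

for conn in (disk_db, ram_db):
    populate(conn)

print("disk-backed query:", timed_query(disk_db)[1], "seconds")
print("in-memory query:  ", timed_query(ram_db)[1], "seconds")
</syntaxhighlight>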
 
== Disk-based business intelligence ==
Line 28:
* Increasing ''volumes of data'' have meant that traditional data warehouses are no longer able to process the data in a timely and accurate way. The [[extract, transform, load]] (ETL) process that periodically updates data warehouses with operational data can take anywhere from a few hours to weeks to complete, so at any given point in time the data is at least a day old. In-memory processing enables instant access to terabytes of data for real-time reporting (the contrast is illustrated in the sketch after this list).
* In-memory processing is available at a ''lower cost'' compared to traditional BI tools, and can be more easily deployed and maintained. According to a Gartner survey,{{citation needed|date=January 2016}} deploying traditional BI tools can take as long as 17 months. Many data warehouse vendors are choosing in-memory technology over traditional BI to speed up implementation times.
* In-memory processing decreases ''power consumption'' and increases throughput, owing to lower access latency, greater memory bandwidth, and hardware parallelism.<ref>{{Cite book |last1=Upchurch |first1=E. |last2=Sterling |first2=T. |last3=Brockman |first3=J. |date=2004 |chapter=Analysis and Modeling of Advanced PIM Architecture Design Tradeoffs |chapter-url=https://ieeexplore.ieee.org/document/1392942 |title=Proceedings of the ACM/IEEE SC2004 Conference |___location=Pittsburgh, PA, USA |publisher=IEEE |pages=12 |doi=10.1109/SC.2004.11 |isbn=978-0-7695-2153-4 |s2cid=9089044 |url=https://resolver.caltech.edu/CaltechAUTHORS:20170103-172751346}}</ref>
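The reporting-latency point in the first list item can be sketched as follows (a minimal illustration, not a real BI system; all names and data are hypothetical): a periodic ETL batch produces a snapshot that is stale until the next run, while an in-memory store reflects each operational record as soon as it arrives.

<syntaxhighlight lang="python">
# Sketch: periodic ETL refresh versus an always-current in-memory store.
from collections import defaultdict

operational_records = [("north", 120.0), ("south", 80.0), ("north", 45.0)]

def run_etl_batch(records):
    """Batch job (e.g. nightly): rebuild warehouse totals from scratch."""
    totals = defaultdict(float)
    for region, amount in records:
        totals[region] += amount
    return dict(totals)   # snapshot; stale until the next batch run

# In-memory store: every insert is immediately visible to reports.
in_memory_totals = defaultdict(float)
for region, amount in operational_records:
    in_memory_totals[region] += amount
    print("live report:", dict(in_memory_totals))

# Warehouse totals only exist after the batch completes.
warehouse_totals = run_etl_batch(operational_records)
print("post-ETL report:", warehouse_totals)
</syntaxhighlight>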
 
== Application in business ==