In-memory processing: Difference between revisions

 
== How does in-memory processing work? ==
The arrival of [[Column-oriented DBMS|column-centric databases]], which store similar information together, made it possible to store data more efficiently and with greater compression. This in turn made it possible to keep very large amounts of data in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. With an in-memory database, all information is initially loaded into memory, eliminating the need for database optimization work such as creating indexes and aggregates or designing cubes and [[star schema]]s.
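A minimal sketch of why columnar layout aids compression (illustrative only, not taken from any particular database engine): storing a column's values together means runs of repeated values sit adjacently, so a simple scheme such as run-length encoding can collapse them.

```python
# Hypothetical example: row-oriented vs column-oriented layout.
rows = [
    ("alice", "US", 2021),
    ("bob",   "US", 2021),
    ("carol", "US", 2022),
]

# Row-oriented: values of different columns are interleaved per record.
row_store = [value for row in rows for value in row]

# Column-oriented: similar values are stored together, one tuple per column.
col_store = list(zip(*rows))

def run_length_encode(column):
    """Collapse runs of repeated adjacent values into (value, count) pairs."""
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

# The country column compresses from three stored values to one pair.
print(run_length_encode(col_store[1]))  # [('US', 3)]
```

Real column stores use more sophisticated encodings (dictionary, delta, bit-packing), but the principle is the same: grouping similar values makes redundancy easy to exploit.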
 
Most in-memory tools use compression algorithms that reduce data to a smaller footprint in memory than it would occupy on disk. Users query the data loaded into the system's memory, avoiding slower database access and performance bottlenecks. This differs from caching, a widely used method of speeding up queries, in that a cache holds only a subset of very specific, pre-defined, organized data. With in-memory tools, the data available for analysis can be as large as a data mart or a small data warehouse held entirely in memory. It can be accessed within seconds by multiple concurrent users at a detailed level, offering the potential for rich analytics. In theory, data access from memory is on the order of 10,000 to 1,000,000 times faster than from disk. In-memory processing also minimizes the need for performance tuning by IT staff and provides faster service for end users.
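The contrast with caching can be sketched as follows (a hypothetical sales data set; the names and figures are illustrative, not from the article): a cache can only answer the specific queries it was pre-loaded with, while a fully in-memory data set supports ad hoc queries at any level of detail.

```python
# Hypothetical full data set held in memory.
sales = [
    {"region": "east", "product": "widget", "amount": 120},
    {"region": "west", "product": "widget", "amount": 80},
    {"region": "east", "product": "gadget", "amount": 200},
]

# A cache holds only pre-defined, pre-computed results.
cache = {("region", "east"): 320}  # one known query, aggregated in advance

def query_cache(key):
    """Return a pre-computed result, or None if the query was not anticipated."""
    return cache.get(key)

def query_in_memory(field, value):
    """Answer any ad hoc filter-and-aggregate query over the in-memory data."""
    return sum(r["amount"] for r in sales if r[field] == value)

print(query_cache(("product", "widget")))    # None: not pre-defined
print(query_in_memory("product", "widget"))  # 200: computed on the fly
```

The cache is fast but rigid; the in-memory data set trades a larger memory footprint for the ability to answer questions that were never anticipated.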