In-memory processing

 
== How in-memory processing works ==
The arrival of [[Column-oriented_DBMS|column-centric databases]], which store similar information together, allowed data to be stored more efficiently and with greater compression. This in turn made it possible to hold very large amounts of data in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. With an in-memory database, all information is initially loaded into memory, eliminating the need for database optimization tasks such as creating indexes, building aggregates, and designing cubes and [[star schema]]s.
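The compression benefit of column-centric storage can be illustrated with a short sketch (not from any particular database product; the data and function names below are invented for illustration). Because consecutive values within one column tend to repeat, a simple technique such as run-length encoding can shrink a column far more than it could shrink interleaved row data:

```python
# Illustrative sketch: why column-centric storage compresses well.
# Values within a single column are similar, so run-length encoding
# (one common columnar compression technique) collapses them.

rows = [
    ("US", 2020), ("US", 2020), ("US", 2021),
    ("DE", 2021), ("DE", 2021), ("DE", 2021),
]

# Row-oriented layout: each record is stored together, values interleave.
row_store = list(rows)

# Column-oriented layout: each column's values are stored together.
country_col = [r[0] for r in rows]  # ['US', 'US', 'US', 'DE', 'DE', 'DE']
year_col = [r[1] for r in rows]     # [2020, 2020, 2021, 2021, 2021, 2021]

def run_length_encode(values):
    """Collapse consecutive duplicates into (value, count) pairs."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1] = (v, encoded[-1][1] + 1)
        else:
            encoded.append((v, 1))
    return encoded

print(run_length_encode(country_col))  # [('US', 3), ('DE', 3)]
print(run_length_encode(year_col))     # [(2020, 2), (2021, 4)]
```

Six country values compress to two pairs; applied to millions of rows, this kind of reduction is what lets a large dataset fit in memory.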
 
Most in-memory tools use compression algorithms that reduce data to a smaller size in memory than would be needed on disk. Users query the data loaded into the system's memory, thereby avoiding slower database access and performance bottlenecks. This differs from caching, a widely used method of speeding up query performance, in that caches hold subsets of very specific, pre-defined data. With in-memory tools, the data available for analysis can be as large as a data mart or a small data warehouse held entirely in memory. This data can be accessed within seconds by multiple concurrent users at a detailed level, offering the potential for rich analytics. In theory, data access is 10,000 to 1,000,000 times faster than access from disk. In-memory processing also minimizes the need for performance tuning by IT staff and provides faster service for end users.
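The contrast with index-based access can be sketched as follows (a minimal, hypothetical example; the table and function names are invented and do not represent any specific product's API). When the full dataset resides in memory, an ad-hoc query can simply scan the columns directly, with no pre-built index, aggregate, or cube:

```python
# Hypothetical sketch: an ad-hoc query over a dataset held entirely in
# memory as columns. A fast full-column scan replaces index lookups.

sales = {
    "region": ["east", "west", "east", "west", "east"],
    "amount": [100, 250, 75, 300, 125],
}

def total_for_region(table, region):
    """Scan the in-memory columns directly -- no index or cube needed."""
    return sum(
        amt
        for reg, amt in zip(table["region"], table["amount"])
        if reg == region
    )

print(total_for_region(sales, "east"))  # 300
```

Because the scan touches memory rather than disk, even unanticipated queries stay fast, which is why in-memory tools reduce the need for the pre-defined structures that caching and traditional warehousing rely on.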