The arrival of [[Column-oriented DBMS|column-centric databases]], which store similar information together, allowed data to be stored more efficiently and with greater compression. This made it possible to store huge amounts of data in the same physical space, reducing the amount of memory needed to perform a query and increasing processing speed. With an in-memory database, all information is initially loaded into memory, eliminating the need for optimized databases, indexes, aggregates, and the design of cubes and [[star schema]]s.
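A minimal sketch (not from any particular database product) of why grouping similar values together compresses well: a single column tends to contain long runs of repeated values, so a simple run-length encoding shrinks it far more than the same values interleaved row by row would allow.

```python
from itertools import groupby

def rle(values):
    """Run-length encode a sequence as [(value, count), ...]."""
    return [(v, len(list(g))) for v, g in groupby(values)]

# A table stored row by row mixes unlike values together...
rows = [("US", 2024), ("US", 2024), ("US", 2025), ("DE", 2025)]

# ...while columnar storage groups similar values, which RLE exploits.
country_col = [r[0] for r in rows]   # ["US", "US", "US", "DE"]
year_col    = [r[1] for r in rows]   # [2024, 2024, 2025, 2025]

print(rle(country_col))  # [('US', 3), ('DE', 1)]
print(rle(year_col))     # [(2024, 2), (2025, 2)]
```

Real column stores combine run-length encoding with dictionary encoding, bit packing, and similar schemes, but the principle is the same: sorted or repetitive columns encode into a fraction of their raw size.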
Most in-memory tools use compression algorithms that reduce the size of in-memory data beyond what would be necessary for hard disks. Users query the data loaded into the system's memory, thereby avoiding slower database access and performance bottlenecks. This differs from caching, a widely used method of speeding up query performance, in that caches hold subsets of very specific, pre-defined data. With in-memory tools, the data available for analysis can be as large as a data mart or small data warehouse held entirely in memory. This data can be accessed within seconds by multiple concurrent users at a detailed level, offering the potential for excellent analytics. In theory, data access is 10,000 to 1,000,000 times faster than from disk. In-memory tools also minimize the need for performance tuning by IT staff and provide faster service for end users.