Lightning Memory-Mapped Database: Difference between revisions

Changing short description from "Software library providing embedded transactional key-value database" to "Software library providing an embedded transactional key-value database"
Undid revision 1048854988 by Alvin-cs (talk) MDB_APPEND mode doesn't skip consistency checks, this edit was incorrect
Line 17:
}}
{{Portal|Free and open-source software}}
'''Lightning Memory-Mapped Database''' (LMDB) is a [[software library]] that provides a high-performance embedded transactional database in the form of a [[key-value store]]. LMDB is written in [[C (programming language)|C]] with [[#API and uses|API bindings]] for several [[programming language]]s. LMDB stores arbitrary key/data pairs as byte arrays, has a range-based search capability, supports multiple data items for a single key, and has a special mode for appending records at the end of the database (MDB_APPEND), which gives a dramatic write performance increase over other similar stores.<ref name="auto">[http://symas.com/mdb/doc/group__internal.html LMDB Reference Guide] {{Webarchive|url=https://web.archive.org/web/20141020182433/http://symas.com/mdb/doc/group__internal.html |date=2014-10-20 }}. Retrieved on 2014-10-19</ref> LMDB is not a [[relational database]]; it is strictly a key-value store like [[Berkeley DB]] and [[DBM (computing)|dbm]].
 
LMDB may also be used [[#Concurrency|concurrently]] in a multi-threaded or multi-process environment, with read performance scaling linearly by design. An LMDB database may have only one writer at a time; however, unlike many similar key-value stores, write transactions do ''not'' block readers, nor do readers block writers. LMDB is also unusual in that multiple applications on the same system may simultaneously open and use the same LMDB store, as a means to scale up performance. In addition, LMDB does not require a transaction log (thereby increasing write performance by not needing to write data twice) because it maintains data integrity inherently by design.