Basic Linear Algebra Subprograms: Difference between revisions

It originated as a Fortran library in 1979<ref name="lawson79">{{cite journal |last1=Lawson |first1=C. L. |last2=Hanson |first2=R. J. |last3=Kincaid |first3=D. |last4=Krogh |first4=F. T. |title=Basic Linear Algebra Subprograms for FORTRAN usage |journal=ACM Trans. Math. Softw. |volume=5 |issue=3 |pages=308–323 |date=1979 |id=Algorithm 539 |doi=10.1145/355841.355847 |hdl=2060/19780018835|s2cid=6585321 |hdl-access=free }}</ref> and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the [[netlib]] website.<ref>{{Cite web |url=http://netlib.org/blas/blast-forum|title=BLAS Technical Forum |website=netlib.org |access-date=2017-07-07}}</ref> This Fortran library is known as the ''[[reference implementation]]'' (sometimes confusingly referred to as ''the'' BLAS library) and is not optimized for speed but is in the [[public ___domain]].<ref>[http://www.lahey.com/docs/blaseman_lin62.pdf blaseman] {{webarchive |url=https://web.archive.org/web/20161012014431/http://www.lahey.com/docs/blaseman_lin62.pdf |date=2016-10-12}} ''"The products are the implementations of the public ___domain BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage), which have been developed by groups of people such as Prof. Jack Dongarra, University of Tennessee, USA and all published on the WWW (URL: http://www.netlib.org/)."''{{dead link|date=October 2016 |bot=InternetArchiveBot |fix-attempted=yes }}</ref><ref>{{cite web |url=http://www.netlib.org/utk/people/JackDongarra/PAPERS/netlib-history6.pdf |title=Netlib and NA-Net: building a scientific computing community |author=Jack Dongarra |author2=Gene Golub |author3=Eric Grosse |author4=Cleve Moler |author5=Keith Moore |quote=The Netlib software repository was created in 1984 to facilitate quick distribution of public ___domain software routines for use in scientific computation. |publisher=netlib.org |access-date=2016-02-13}}</ref>
 
Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are indifferent to the BLAS library being used. The use of BLAS implementations has grown dramatically with the development of [[GPGPU]], with [[CUDA#Programming_abilities|cuBLAS]] and [[ROCm#rocBLAS_/_hipBLAS|rocBLAS]] being prime examples. CPU-based examples of BLAS libraries include: [[OpenBLAS]], [[BLIS (software)|BLIS (BLAS-like Library Instantiation Software)]], Arm Performance Libraries,<ref name="Arm Performance Libraries">{{cite web|date=2020 |title=Arm Performance Libraries |publisher=[[Arm (company)|Arm]] |url=https://www.arm.com/products/development-tools/server-and-hpc/allinea-studio/performance-libraries |access-date=2020-12-16}}</ref> [[Automatically Tuned Linear Algebra Software|ATLAS]], and [[Intel Math Kernel Library]] (MKL). AMD maintains a fork of BLIS that is optimized for the [[Advanced Micro Devices|AMD]] platform.<ref>{{Cite web|url=https://developer.amd.com/amd-aocl/blas-library/|title=BLAS Library}}</ref> ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture.
MKL is a freeware<ref name="MKLfree">{{cite web |date=2015 |title=No Cost Options for Intel Math Kernel Library (MKL), Support yourself, Royalty-Free |publisher=[[Intel]] |url=http://software.intel.com/articles/free_mkl |access-date=31 August 2015}}</ref> but proprietary<ref name="MKLintel">{{cite web |date=2015 |title=Intel Math Kernel Library (Intel MKL) |publisher=[[Intel]] |url=http://software.intel.com/intel-mkl |access-date=25 August 2015}}</ref> vendor library optimized for x86 and x86-64 with a performance emphasis on [[Intel]] processors.<ref name="optnotice">{{cite web |year=2012 |title=Optimization Notice |publisher=[[Intel]] |url=http://software.intel.com/articles/optimization-notice |access-date=10 April 2013}}</ref> OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The [[LINPACK benchmarks]] rely heavily on the BLAS routine <code>[[General Matrix Multiply|gemm]]</code> for their performance measurements.
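The <code>gemm</code> routine computes the general matrix product ''C'' ← ''αAB'' + ''βC''. The following is a deliberately naive C sketch of that operation, for illustration only; it uses row-major storage for readability (the reference Fortran BLAS is column-major), and real BLAS implementations gain their speed from cache blocking, vectorization, and multithreading rather than this triple loop:

<syntaxhighlight lang="c">
#include <stdio.h>

/* Naive illustration of what dgemm computes: C = alpha*A*B + beta*C,
 * where A is m-by-k, B is k-by-n, and C is m-by-n (row-major here). */
static void naive_dgemm(int m, int n, int k,
                        double alpha, const double *A, const double *B,
                        double beta, double *C)
{
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            double acc = 0.0;
            for (int p = 0; p < k; p++)
                acc += A[i * k + p] * B[p * n + j];
            C[i * n + j] = alpha * acc + beta * C[i * n + j];
        }
    }
}

int main(void)
{
    double A[4] = {1, 2, 3, 4};   /* 2x2 matrix [[1,2],[3,4]] */
    double B[4] = {5, 6, 7, 8};   /* 2x2 matrix [[5,6],[7,8]] */
    double C[4] = {0, 0, 0, 0};
    naive_dgemm(2, 2, 2, 1.0, A, B, 0.0, C);
    printf("%g %g %g %g\n", C[0], C[1], C[2], C[3]); /* 19 22 43 50 */
    return 0;
}
</syntaxhighlight>

An optimized <code>dgemm</code> exposes the same mathematical operation through the standardized BLAS signature, which is why benchmarks and higher-level libraries can swap implementations freely.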
 
Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including [[LAPACK]], [[LINPACK]], [[Armadillo (C++ library)|Armadillo]], [[GNU Octave]], [[Mathematica]],<ref>{{cite journal |author=Douglas Quinney |date=2003 |title=So what's new in Mathematica 5.0? |journal=MSOR Connections |volume=3 |number=4 |publisher=The Higher Education Academy |url=http://78.158.56.101/archive/msor/headocs/34mathematica5.pdf |url-status=dead |archive-url=https://web.archive.org/web/20131029204826/http://78.158.56.101/archive/msor/headocs/34mathematica5.pdf |archive-date=2013-10-29 }}</ref> [[MATLAB]],<ref>{{cite web |author=Cleve Moler |date=2000 |title=MATLAB Incorporates LAPACK |publisher=[[MathWorks]] |url=http://www.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html |access-date=26 October 2013}}</ref> [[NumPy]],<ref name="cise">{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |name-list-style=amp |date=2011 |journal=Computing in Science and Engineering |volume=13 |issue=2 |pages=22–30 |arxiv=1102.1523|bibcode=2011arXiv1102.1523V |doi=10.1109/MCSE.2011.37|s2cid=16907816 }}</ref> [[R (programming language)|R]], and [[Julia (programming language)|Julia]].
==Implementations==
; Accelerate: [[Apple Inc.|Apple]]'s framework for [[macOS]] and [[IOS (Apple)|iOS]], which includes tuned versions of [[BLAS]] and [[LAPACK]].<ref>{{Cite web|url=https://developer.apple.com/library/mac/#releasenotes/Performance/RN-vecLib/|title=Guides and Sample Code|website=developer.apple.com|access-date=2017-07-07}}</ref><ref>{{Cite web|url=https://developer.apple.com/library/ios/#documentation/Accelerate/Reference/AccelerateFWRef/|title=Guides and Sample Code|website=developer.apple.com|access-date=2017-07-07}}</ref>
; Arm Performance Libraries: [[Arm Performance Libraries]], supporting Arm 64-bit [[AArch64]]-based processors, available from [[Arm (company)|Arm]].<ref name="Arm Performance Libraries"/>
; ATLAS: [[Automatically Tuned Linear Algebra Software]], an [[Open-source software|open source]] implementation of BLAS [[application programming interface|API]]s for [[C (programming language)|C]] and [[Fortran|Fortran 77]].<ref>{{Cite web|url=http://math-atlas.sourceforge.net/|title=Automatically Tuned Linear Algebra Software (ATLAS)|website=math-atlas.sourceforge.net|access-date=2017-07-07}}</ref>
; [[BLIS (software)|BLIS]]: BLAS-like Library Instantiation Software, a framework for rapidly instantiating BLAS functionality, optimized for most modern CPUs. BLIS is a complete refactoring of GotoBLAS that reduces the amount of code that must be written for a given platform.<ref>{{Citation|title=blis: BLAS-like Library Instantiation Software Framework|date=2017-06-30|url=https://github.com/flame/blis|publisher=flame|access-date=2017-07-07}}</ref><ref>{{Citation|title=BLIS GitHub Repository|date=15 October 2021|url=https://github.com/flame/blis}}</ref>
; clBLAS: An [[OpenCL]] implementation of BLAS by AMD. Part of the AMD Compute Libraries.<ref name="github.com">{{Citation|title=clBLAS: a software library containing BLAS functions written in OpenCL|date=2017-07-03|url=https://github.com/clMathLibraries/clBLAS|publisher=clMathLibraries|access-date=2017-07-07}}</ref>
; CLBlast: A tuned [[OpenCL]] implementation of most of the BLAS API.<ref name="https://github.com/CNugteren/CLBlast">{{Citation|last=Nugteren|first=Cedric|title=CLBlast: Tuned OpenCL BLAS|date=2017-07-05|url=https://github.com/CNugteren/CLBlast|access-date=2017-07-07}}</ref>
; Eigen BLAS: A [[Fortran|Fortran 77]] and [[C (programming language)|C]] BLAS library implemented on top of the [[Mozilla Public License|MPL]]-licensed [[Eigen (C++ library)|Eigen library]], supporting [[x86]], [[x86-64]], [[ARM architecture family|ARM (NEON)]], and [[PowerPC]] architectures.
; ESSL: [[IBM]]'s Engineering and Scientific Subroutine Library, supporting the [[PowerPC]] architecture under [[AIX operating system|AIX]] and [[Linux]].<ref name="https://www.ibm.com/support/knowledgecenter/en/SSFHY8/essl_welcome.html">{{Citation|title=IBM Knowledge Centre: Engineering and Scientific Subroutine Library|url=https://www.ibm.com/support/knowledgecenter/en/SSFHY8/essl_welcome.html}}</ref>
; [[GotoBLAS]]: [[Kazushige Goto]]'s BSD-licensed implementation of BLAS, tuned in particular for [[Intel]] [[Nehalem (microarchitecture)|Nehalem]]/[[Intel Atom|Atom]], [[VIA Technologies|VIA]] [[VIA Nano|Nano]], and [[AMD]] [[Opteron]] processors.<ref name="GotoBLAS2"/>
; [[GNU Scientific Library]]: Multi-platform implementation of many numerical routines. Contains a CBLAS interface.
; HP MLIB: [[Hewlett-Packard|HP]]'s Math library supporting [[IA-64]], [[PA-RISC]], [[x86]] and [[Opteron]] architecture under [[HP-UX]] and [[Linux]].
; Intel MKL: The [[Intel]] [[Math Kernel Library]], supporting 32-bit and 64-bit x86, available free of charge from [[Intel]].<ref name="MKLfree" /> Includes optimizations for Intel [[Pentium (brand)|Pentium]], [[Intel Core|Core]] and Intel [[Xeon]] CPUs and Intel [[Xeon Phi]]; support for [[Linux]], [[Microsoft Windows|Windows]] and [[macOS]].<ref>{{Cite web|url=http://software.intel.com/en-us/intel-mkl/|title=Intel Math Kernel Library (Intel MKL) {{!}} Intel Software|website=software.intel.com|language=en|access-date=2017-07-07}}</ref>
; Netlib BLAS: The official reference implementation on [[Netlib]], written in [[Fortran|Fortran 77]].<ref>{{Cite web|url=http://www.netlib.org/blas/|title=BLAS (Basic Linear Algebra Subprograms)|website=www.netlib.org|access-date=2017-07-07}}</ref>
; Netlib CBLAS: Reference [[C (programming language)|C]] interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C.<ref>{{Cite web|url=http://www.netlib.org/blas|title=BLAS (Basic Linear Algebra Subprograms)|website=www.netlib.org|access-date=2017-07-07}}</ref>
; [[OpenBLAS]]: Optimized BLAS based on GotoBLAS, supporting [[x86]], [[x86-64]], [[MIPS architecture|MIPS]] and [[ARM architecture family|ARM]] processors.<ref>{{Cite web|url=http://www.openblas.net/|title=OpenBLAS : An optimized BLAS library|website=www.openblas.net|access-date=2017-07-07}}</ref>
; PDLIB/SX: [[NEC Corporation|NEC]]'s Public Domain Mathematical Library for the NEC [[NEC SX architecture|SX-4]] system.<ref name=":0">{{cite web |url=http://www.nec.co.jp/hpc/mediator/sxm_e/software/61.html |title=Archived copy |access-date=2007-05-20 |url-status=dead |archive-url=https://web.archive.org/web/20070222154031/http://www.nec.co.jp/hpc/mediator/sxm_e/software/61.html |archive-date=2007-02-22 }}</ref>
; rocBLAS: Implementation that runs on [[AMD]] GPUs via [[ROCm]].<ref>{{Cite web|url=https://rocmdocs.amd.com/en/latest/ROCm_Tools/rocblas.html|title=rocBLAS|website=rocmdocs.amd.com|access-date=2021-05-21}}</ref>
; SCSL: [[Silicon Graphics|SGI]]'s Scientific Computing Software Library, containing BLAS and LAPACK implementations for SGI's [[IRIX]] workstations.<ref>{{cite web |url=http://www.sgi.com/products/software/scsl.html |title=Archived copy |access-date=2007-05-20 |url-status=dead |archive-url=https://web.archive.org/web/20070513173030/http://www.sgi.com/products/software/scsl.html |archive-date=2007-05-13 }}</ref>
; Sun Performance Library: Optimized BLAS and LAPACK for [[SPARC]], [[Intel Core|Core]] and [[AMD64]] architectures under Solaris 8, 9, and 10 as well as Linux.<ref>{{Cite web|url=http://www.oracle.com/technetwork/server-storage/solarisstudio/overview/index.html|title=Oracle Developer Studio|website=www.oracle.com|access-date=2017-07-07}}</ref>
; uBLAS: A generic [[C++]] template class library providing BLAS functionality. Part of the [[Boost library]]. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features.<ref>{{Cite web|url=http://www.boost.org/doc/libs/1_60_0/libs/numeric/ublas/doc/index.html|title=Boost Basic Linear Algebra - 1.60.0|website=www.boost.org|access-date=2017-07-07}}</ref>
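As the Netlib CBLAS entry above notes, C programs can reach BLAS either through the CBLAS interface or by calling the Fortran symbols directly. The main pitfall is storage order: Fortran BLAS assumes column-major matrices, while C code conventionally uses row-major layout (CBLAS lets the caller choose). A small self-contained sketch of the indexing difference, requiring no BLAS library:

<syntaxhighlight lang="c">
#include <assert.h>

/* Element (i, j) of an m-by-n matrix with leading dimension lda:
 *   row-major (typical C):        a[i * lda + j], lda >= n
 *   column-major (Fortran BLAS):  a[i + j * lda], lda >= m */
static double get_rowmajor(const double *a, int lda, int i, int j)
{
    return a[i * lda + j];
}

static double get_colmajor(const double *a, int lda, int i, int j)
{
    return a[i + j * lda];
}

int main(void)
{
    /* The same 2x3 matrix [[1,2,3],[4,5,6]] stored both ways. */
    double row[6] = {1, 2, 3, 4, 5, 6};  /* row-major, lda = 3 */
    double col[6] = {1, 4, 2, 5, 3, 6};  /* column-major, lda = 2 */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 3; j++)
            assert(get_rowmajor(row, 3, i, j) == get_colmajor(col, 2, i, j));
    return 0;
}
</syntaxhighlight>

Passing a row-major array to a column-major routine effectively transposes it, which is why CBLAS takes an explicit order argument and why the transpose flags of routines such as <code>gemm</code> are often used to avoid physically reordering data.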
 
=== Libraries using BLAS ===
 
; Armadillo: [[Armadillo (C++ library)|Armadillo]] is a C++ linear algebra library aiming for a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by [[NICTA]] (in Australia) and is licensed under a free license.<ref>{{Cite web|url=http://arma.sourceforge.net/|title=Armadillo: C++ linear algebra library|website=arma.sourceforge.net|access-date=2017-07-07}}</ref>
; [[LAPACK]]: LAPACK is a higher-level linear algebra library built upon BLAS. As with BLAS, a reference implementation exists, but many alternatives, such as libFlame and MKL, are available.