Basic Linear Algebra Subprograms
It originated as a Fortran library in 1979<ref name="lawson79">{{cite journal |last1=Lawson |first1=C. L. |last2=Hanson |first2=R. J. |last3=Kincaid |first3=D. |last4=Krogh |first4=F. T. |title=Basic Linear Algebra Subprograms for FORTRAN usage |journal=ACM Trans. Math. Softw. |volume=5 |issue=3 |pages=308–323 |date=1979 |id=Algorithm 539 |doi=10.1145/355841.355847 |hdl=2060/19780018835|s2cid=6585321 |hdl-access=free }}</ref> and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the [[netlib]] website.<ref>{{Cite web |url=http://netlib.org/blas/blast-forum|title=BLAS Technical Forum |website=netlib.org |access-date=2017-07-07}}</ref> This Fortran library is known as the ''[[reference implementation]]'' (sometimes confusingly referred to as ''the'' BLAS library) and is not optimized for speed but is in the [[public ___domain]].<ref>[http://www.lahey.com/docs/blaseman_lin62.pdf blaseman] {{webarchive |url=https://web.archive.org/web/20161012014431/http://www.lahey.com/docs/blaseman_lin62.pdf |date=2016-10-12}} ''"The products are the implementations of the public ___domain BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage), which have been developed by groups of people such as Prof. Jack Dongarra, University of Tennessee, USA and all published on the WWW (URL: http://www.netlib.org/)."''{{dead link|date=October 2016 |bot=InternetArchiveBot |fix-attempted=yes }}</ref><ref>{{cite web |url=http://www.netlib.org/utk/people/JackDongarra/PAPERS/netlib-history6.pdf |title=Netlib and NA-Net: building a scientific computing community |author=Jack Dongarra |author2=Gene Golub |author3=Eric Grosse |author4=Cleve Moler |author5=Keith Moore |quote=The Netlib software repository was created in 1984 to facilitate quick distribution of public ___domain software routines for use in scientific computation. |publisher=netlib.org |access-date=2016-02-13}}</ref>
 
Most computing libraries that offer linear algebra routines conform to the common BLAS interface, so calls to those libraries (and their results) are often portable between BLAS implementations, such as [[CUDA#Programming_abilities|cuBLAS]] (NVIDIA GPU, [[GPGPU]]), [[ROCm#rocBLAS_/_hipBLAS|rocBLAS]] (AMD GPU, GPGPU), and [[OpenBLAS]]. This interoperability allows the same code to run across heterogeneous computing architectures, such as those found in some advanced clustering implementations. Examples of CPU-based BLAS implementations include [[OpenBLAS]], [[BLIS (software)|BLIS (BLAS-like Library Instantiation Software)]], Arm Performance Libraries,<ref name="Arm Performance Libraries">{{cite web|date=2020 |title=Arm Performance Libraries |publisher=[[Arm]] |url=https://www.arm.com/products/development-tools/server-and-hpc/allinea-studio/performance-libraries |access-date=2020-12-16}}</ref> [[Automatically Tuned Linear Algebra Software|ATLAS]], and [[Intel Math Kernel Library]] (iMKL). AMD maintains a fork of BLIS that is optimized for the [[Advanced Micro Devices|AMD]] platform.<ref>{{Cite web|url=https://developer.amd.com/amd-aocl/blas-library/|title=BLAS Library}}</ref> ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. 
iMKL is a freeware<ref name="MKLfree">{{cite web |date=2015 |title=No Cost Options for Intel Math Kernel Library (MKL), Support yourself, Royalty-Free |publisher=[[Intel]] |url=http://software.intel.com/articles/free_mkl |access-date=31 August 2015}}</ref> and proprietary<ref name="MKLintel">{{cite web |date=2015 |title=Intel Math Kernel Library (Intel MKL) |publisher=[[Intel]] |url=http://software.intel.com/intel-mkl |access-date=25 August 2015}}</ref> vendor library optimized for x86 and x86-64 with a performance emphasis on [[Intel]] processors.<ref name="optnotice">{{cite web |year=2012 |title=Optimization Notice |publisher=[[Intel]] |url=http://software.intel.com/articles/optimization-notice |access-date=10 April 2013}}</ref> OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The [[LINPACK benchmarks]] rely heavily on the BLAS routine <code>[[General Matrix Multiply|gemm]]</code> for their performance measurements.
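The <code>gemm</code> routine mentioned above performs the Level 3 update C ← αAB + βC. As a minimal illustration of that operation only (a plain-Python sketch, not the standardized Fortran or CBLAS interface; the function name and argument layout here are chosen for clarity, and real BLAS implementations are heavily optimized rather than written as nested loops):

```python
def gemm(alpha, a, b, beta, c):
    """Compute alpha*A*B + beta*C, the operation performed by BLAS gemm.

    Matrices are given as lists of rows; this is an illustrative sketch,
    not the actual BLAS calling convention.
    """
    m, k, n = len(a), len(b), len(b[0])
    return [
        [
            alpha * sum(a[i][p] * b[p][j] for p in range(k)) + beta * c[i][j]
            for j in range(n)
        ]
        for i in range(m)
    ]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
c = [[1.0, 1.0], [1.0, 1.0]]

# With alpha=1 and beta=0 this reduces to the plain product A*B.
print(gemm(1.0, a, b, 0.0, c))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Production BLAS libraries expose this same operation per precision (<code>sgemm</code>, <code>dgemm</code>, <code>cgemm</code>, <code>zgemm</code>), which is why code written against the interface is portable between implementations.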
 
Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including [[LAPACK]], [[LINPACK]], [[Armadillo (C++ library)|Armadillo]], [[GNU Octave]], [[Mathematica]],<ref>{{cite journal |author=Douglas Quinney |date=2003 |title=So what's new in Mathematica 5.0? |journal=MSOR Connections |volume=3 |number=4 |publisher=The Higher Education Academy |url=http://78.158.56.101/archive/msor/headocs/34mathematica5.pdf |url-status=dead |archive-url=https://web.archive.org/web/20131029204826/http://78.158.56.101/archive/msor/headocs/34mathematica5.pdf |archive-date=2013-10-29 }}</ref> [[MATLAB]],<ref>{{cite web |author=Cleve Moler |date=2000 |title=MATLAB Incorporates LAPACK |publisher=[[MathWorks]] |url=http://www.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html |access-date=26 October 2013}}</ref> [[NumPy]],<ref name="cise">{{cite journal |title=The NumPy array: a structure for efficient numerical computation |author=Stéfan van der Walt |author2=S. Chris Colbert |author3=Gaël Varoquaux |name-list-style=amp |date=2011 |journal=Computing in Science and Engineering |volume=13 |issue=2 |pages=22–30 |arxiv=1102.1523|bibcode=2011CSE....13b..22V |doi=10.1109/MCSE.2011.37|s2cid=16907816 }}</ref> [[R (programming language)|R]], [[Julia (programming language)|Julia]] and Lisp-Stat.