==History==
In 2001, Torch was written and released under a [[GNU General Public License|GPL license]]. It was a machine-learning library written in C++, supporting methods including neural networks and [[Support vector machine|support vector machines]].
Meta (formerly known as Facebook) operated both PyTorch and Convolutional Architecture for Fast Feature Embedding ([[Caffe (software)|Caffe2]]), but models defined by the two frameworks were mutually incompatible. The Open Neural Network Exchange ([[Open Neural Network Exchange|ONNX]]) project was created by Meta and [[Microsoft]] in September 2017 for converting models between frameworks. Caffe2 was merged into PyTorch at the end of March 2018.<ref>{{cite web|url=https://medium.com/@Synced/caffe2-merges-with-pytorch-a89c70ad9eb7|title=Caffe2 Merges With PyTorch|date=2 April 2018|access-date=2 January 2019|archive-date=30 March 2019|archive-url=https://web.archive.org/web/20190330143500/https://medium.com/@Synced/caffe2-merges-with-pytorch-a89c70ad9eb7|url-status=live}}</ref> In September 2022, Meta announced that PyTorch would be governed by the independent PyTorch Foundation, a newly created subsidiary of the [[Linux Foundation]].<ref>{{cite web |url=https://arstechnica.com/information-technology/2022/09/meta-spins-off-pytorch-foundation-to-make-ai-framework-vendor-neutral/ |title=Meta spins off PyTorch Foundation to make AI framework vendor neutral |date=12 September 2022 |website=[[Ars Technica]] |last=Edwards |first=Benj |access-date=13 September 2022 |archive-date=13 September 2022 |archive-url=https://web.archive.org/web/20220913034850/https://arstechnica.com/information-technology/2022/09/meta-spins-off-pytorch-foundation-to-make-ai-framework-vendor-neutral/ |url-status=live }}</ref>
==PyTorch tensors==
{{main|Tensor (machine learning)}}
PyTorch defines a class called Tensor (<code>torch.Tensor</code>) to store and operate on homogeneous multidimensional rectangular arrays of numbers. PyTorch tensors are similar to [[NumPy]] arrays, but can also be operated on by a [[CUDA]]-capable [[Nvidia|NVIDIA]] [[Graphics processing unit|GPU]]. PyTorch has also been developing support for other GPU platforms, for example, AMD's [[ROCm]]<ref>{{cite web|url=https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html|title=Installing PyTorch for ROCm|date=9 February 2024|website=rocm.docs.amd.com}}</ref> and Apple's [[Metal (API)|Metal framework]].<ref>{{Cite web |title=Introducing Accelerated PyTorch Training on Mac |url=https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/ |access-date=4 June 2022 |website=pytorch.org |language=en |archive-date=29 January 2024 |archive-url=https://web.archive.org/web/20240129141050/https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/ |url-status=live }}</ref>
PyTorch supports various sub-types of Tensors.<ref>{{cite web |url=https://www.analyticsvidhya.com/blog/2018/02/pytorch-tutorial/ |title=An Introduction to PyTorch – A Simple yet Powerful Deep Learning Library |website=analyticsvidhya.com |access-date=11 June 2018 |date=22 February 2018 |archive-date=22 October 2019 |archive-url=https://web.archive.org/web/20191022200858/https://www.analyticsvidhya.com/blog/2018/02/pytorch-tutorial/ |url-status=live }}</ref>
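The following minimal sketch (an illustrative example, not taken from the PyTorch documentation) shows how a tensor can be created and, when a CUDA-capable GPU is available, moved to it; the calls used (<code>torch.rand</code>, <code>torch.cuda.is_available</code>, <code>Tensor.to</code>) are part of the public PyTorch API:

<syntaxhighlight lang="python">
import torch

# Create a 2x3 tensor of uniformly distributed random numbers on the CPU.
x = torch.rand(2, 3)

# Select a CUDA device if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = x.to(device)

# Element-wise operations follow the same broadcasting rules as NumPy arrays.
y = x * 2 + 1
print(y)
</syntaxhighlight>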