PyTorch

print(a.sum()) # Output of the sum of all elements in tensor a
# Output: tensor(-2.1540)
 
print(a[1, 2]) # Output of the element in the third column of the second row (zero-based indexing)
# Output: tensor(0.5847)
 
print(a.max()) # Output of the maximum value in tensor a
# Output: tensor(0.8498)
</syntaxhighlight>
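The tensor <code>a</code> is created earlier in the example; a minimal, self-contained sketch of the same operations (assuming <code>a</code> is a 2×3 tensor of random floats, so the printed values will differ from run to run) looks like this:
<syntaxhighlight lang="python3">
import torch

a = torch.randn(2, 3)  # 2x3 tensor of samples from a standard normal distribution

print(a.sum())   # Sum of all elements, returned as a zero-dimensional tensor
print(a[1, 2])   # Element in the second row, third column (zero-based indexing)
print(a.max())   # Largest element in the tensor
</syntaxhighlight>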

The following code block defines a neural network with linear layers using the <code>nn</code> module.
<syntaxhighlight lang="python3" line="1">
from torch import nn # Import the nn sub-module from PyTorch
 
class NeuralNetwork(nn.Module): # Neural networks are defined as classes
    def __init__(self): # Layers and variables are defined in the __init__ method
        super().__init__() # Must be called in every subclass of nn.Module
        self.flatten = nn.Flatten() # Construct a flattening layer.
        self.linear_relu_stack = nn.Sequential( # Construct a stack of layers.
            nn.Linear(28 * 28, 512), # Linear layers have an input and output shape
            nn.ReLU(), # ReLU is one of many activation functions provided by nn
            nn.Linear(512, 512),