* Defense against white-box adversarial attacks: the Fast Gradient Sign Method (FGSM) is a typical method for attacking CNNs. It computes the gradient of the network's loss with respect to each input pixel and then changes each pixel by at most epsilon (the perturbation budget) in the direction that increases the loss, as in the sketch below. Although this attack can drop the accuracy of CNNs dramatically (e.g., to below 20%), capsule networks maintain accuracy above 70%.
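A minimal sketch of the FGSM step described above (illustrative only: the PyTorch framing, the function name <code>fgsm_attack</code>, and the clamp to the [0, 1] pixel range are assumptions, not details from the cited work):

<syntaxhighlight lang="python">
import torch

def fgsm_attack(model, loss_fn, images, labels, epsilon):
    """One-step FGSM: perturb each pixel by at most epsilon to increase the loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    # The sign of the per-pixel gradient gives the direction that increases the
    # loss; scaling by epsilon bounds the change of every pixel to +/- epsilon.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
</syntaxhighlight>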
Purely convolutional nets cannot generalize to unlearned viewpoints (other than translation). For other [[affine transformation]]s, either the feature detectors must be repeated on a grid that grows exponentially with the number of transformation dimensions, or the labelled training set must grow (exponentially) to cover those viewpoints.
A capsnet's transformation matrices learn the (viewpoint-independent) spatial relationship between a part and a whole, allowing the whole to be recognized from such relationships. However, capsnets assume that each location contains at most one instance of a capsule's object. This assumption allows a capsule to use a distributed representation (its activity vector) of an object to represent that object at that location.<ref name=":1"/>
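The part-to-whole prediction can be sketched as a learned matrix multiply (a hedged illustration: the dimensions, the PyTorch framing, and the names <code>W</code> and <code>u_hat</code> follow common capsule-routing notation rather than any specific implementation):

<syntaxhighlight lang="python">
import torch

num_parts, part_dim, whole_dim = 6, 8, 16

# One learned, viewpoint-independent transformation matrix per part type.
W = torch.randn(num_parts, whole_dim, part_dim, requires_grad=True)

# Activity vectors of the part capsules at one location
# (the distributed representations mentioned above).
u = torch.randn(num_parts, part_dim)

# Each part predicts the pose of the whole: u_hat[i] = W[i] @ u[i].
# Agreement among these predictions is what routing-by-agreement measures.
u_hat = torch.einsum('iwp,ip->iw', W, u)
</syntaxhighlight>

Because the matrices <code>W</code> encode part-whole geometry rather than appearance at a fixed viewpoint, the same weights yield consistent predictions after a change of viewpoint, which is the generalization the preceding paragraph contrasts with purely convolutional nets.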