Neural cryptography: Difference between revisions

{{Short description|Branch of cryptography}}
'''Neural cryptography''' is a branch of [[cryptography]] dedicated to analyzing the application of [[stochastic]] algorithms, especially [[artificial neural network]] algorithms, for use in [[encryption]] and [[cryptanalysis]].
 
== Applications ==
 
In 1995, Sebastien Dourlens applied neural networks to the cryptanalysis of [[Data Encryption Standard|DES]] by training the networks to invert the S-boxes of DES. The bias in DES studied through differential cryptanalysis by [[Adi Shamir]] is highlighted. The experiment showed that about 50% of the key bits can be recovered, allowing the complete key to be found in a short time. Hardware implementations using multiple microcontrollers have been proposed, since multilayer neural networks are easy to implement in hardware.{{Citation needed|date=May 2025}}
 
One example of a public-key protocol is given by Khalil Shihab.{{Citation needed|date=May 2025}} He describes a decryption scheme and public-key creation based on a [[backpropagation]] neural network, while the encryption scheme and private-key creation are based on Boolean algebra. This technique has the advantage of low time and memory complexity. A disadvantage is inherent to backpropagation algorithms: because of huge training sets, the learning phase of a neural network is very long. Therefore, the use of this protocol is only theoretical so far.
 
== Neural key exchange protocol ==
## Compute the value of the output neuron
## Compare the values of both tree parity machines
### Outputs are the same: one of the suitable learning rules is applied to the weights
### Outputs are different: go to 2.1
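The synchronization loop above can be sketched in code. This is a minimal illustrative sketch, not a reference implementation: it assumes the Hebbian learning rule, and the parameters K, N, L and the iteration cap are arbitrary choices for demonstration. Both parties repeatedly evaluate a common random input; they update their weights only when their outputs agree, otherwise they simply draw a new input.

```python
import numpy as np

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (illustrative)
rng = np.random.default_rng(0)

class TreeParityMachine:
    def __init__(self):
        # Random initial weights, integers in {-L, ..., +L}
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # Each hidden unit outputs the sign of its local field;
        # the machine's output tau is the product of the hidden signs.
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1  # convention: treat sign(0) as -1
        return int(np.prod(self.sigma))

    def hebbian_update(self, x, tau):
        # Hebbian rule: only hidden units that agreed with the common
        # output learn; weights stay clipped to [-L, L].
        for i in range(K):
            if self.sigma[i] == tau:
                self.w[i] = np.clip(self.w[i] + tau * x[i], -L, L)

A, B = TreeParityMachine(), TreeParityMachine()
for step in range(1, 100_001):
    x = rng.choice([-1, 1], size=(K, N))   # public random input vector
    tau_a, tau_b = A.output(x), B.output(x)
    if tau_a == tau_b:                     # outputs agree: apply learning rule
        A.hebbian_update(x, tau_a)
        B.hebbian_update(x, tau_b)
    if np.array_equal(A.w, B.w):           # full synchronization reached
        break

key_material = A.w.flatten()  # identical to B.w.flatten() after sync
```

With such small parameters the two machines synchronize after a modest number of exchanged inputs; larger K, N, L slow synchronization but make the scheme harder to attack.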
 
After full synchronization is achieved (the weights w<sub>ij</sub> of both tree parity machines are the same), {{var|A}} and {{var|B}} can use their weights as keys.<br>
=== Permutation parity machine ===
 
The permutation parity machine is a binary variant of the tree parity machine.<ref name="Reyes">{{cite journal |last1=Reyes |first1=O. M. |last2=Kopitzke |first2=I. |last3=Zimmermann |first3=K.-H. |date=April 2009 |title=Permutation Parity Machines for Neural Synchronization |journal=Journal of Physics A: Mathematical and Theoretical |volume=42 |issue=19 |pages=195002 |issn=1751-8113 |doi=10.1088/1751-8113/42/19/195002|bibcode=2009JPhA...42s5002R |s2cid=122126162 }}</ref>
 
It consists of one input layer, one hidden layer and one output layer. The number of neurons in the output layer depends on the number of hidden units K. Each hidden neuron has N binary input neurons: