Neural cryptography: Difference between revisions

== Applications ==
 
In 1995, Sebastien Dourlens applied neural networks to cryptanalyze [[Data Encryption Standard|DES]] by training the networks to invert the S-tables of DES. The experiment highlights the bias in DES that [[Adi Shamir]] studied through differential cryptanalysis, and shows that about 50% of the key bits can be recovered, allowing the complete key to be found in a short time. Hardware applications with multiple micro-controllers have been proposed, since multilayer neural networks are easy to implement in hardware.{{Citation needed|date=May 2025}}
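Dourlens's original experiment is not reproduced here; the following is a minimal illustrative sketch of the idea of training a network, by plain backpropagation, to invert an S-table. For simplicity it uses the 4-bit PRESENT S-box (a bijection, so the inverse is well defined) rather than DES's non-injective 6-to-4-bit S-boxes; the network size and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DES S-table: the 4-bit PRESENT S-box (a bijection,
# unlike DES's 6-to-4-bit S-boxes, so inversion is well defined).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def bits(v):
    # 4-bit little-endian bit vector of v
    return [(v >> i) & 1 for i in range(4)]

# Training pairs (S-box output -> S-box input): the network learns the inverse.
X = np.array([bits(SBOX[x]) for x in range(16)], dtype=float)
Y = np.array([bits(x) for x in range(16)], dtype=float)

sig = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (4, 32)); b1 = np.zeros(32)   # hidden layer (32 units, assumed)
W2 = rng.normal(0, 0.5, (32, 4)); b2 = np.zeros(4)    # output layer

for _ in range(5000):                  # plain full-batch backpropagation
    h = sig(X @ W1 + b1)
    o = sig(h @ W2 + b2)
    d_o = o - Y                        # sigmoid + cross-entropy output gradient
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_o) / 16; b2 -= 0.5 * d_o.mean(0)
    W1 -= 0.5 * (X.T @ d_h) / 16; b1 -= 0.5 * d_h.mean(0)

accuracy = ((o > 0.5) == Y).mean()     # fraction of input bits recovered
```

After training, the network reproduces the inverse S-box from its outputs, illustrating (on a toy scale) how a learned inverse of the S-tables can expose key material.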
 
One example of a public-key protocol is given by Khalil Shihab.{{Citation needed|date=May 2025}} He describes a decryption scheme and public-key creation based on a [[backpropagation]] neural network, while the encryption scheme and the private-key creation process are based on Boolean algebra. This technique has the advantage of low time and memory complexity. A disadvantage stems from a property of backpropagation algorithms: with huge training sets, the learning phase of a neural network is very long. Therefore, the use of this protocol has so far been only theoretical.
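Shihab's actual construction is not reproduced here. As a toy illustration of the asymmetry the paragraph describes, one side of the scheme can be a cheap Boolean-algebra operation (here, an XOR with a key mask followed by a bit permutation, both hypothetical choices), while the other side is the inverse mapping that a backpropagation network would have to learn during its long training phase:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical private-key material: a random bit mask and a bit permutation.
key_mask = rng.integers(0, 2, size=8)
perm = rng.permutation(8)

def encrypt(bits):
    # Boolean-algebra side: XOR with the key mask, then permute the bits.
    return (bits ^ key_mask)[perm]

def decrypt(bits):
    # The inverse mapping; in the scheme described, a trained
    # backpropagation network would play this role.
    inv = np.argsort(perm)             # inverse permutation
    return bits[inv] ^ key_mask

msg = rng.integers(0, 2, size=8)
roundtrip_ok = np.array_equal(decrypt(encrypt(msg)), msg)
```

The Boolean operations cost almost nothing per bit; the practical bottleneck noted in the text is that learning `decrypt` from examples requires a large training set and a long training phase.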
 
== Neural key exchange protocol ==
=== Permutation parity machine ===
 
The permutation parity machine is a binary variant of the tree parity machine.<ref name="Reyes">{{cite journal |last1=Reyes |first1=O. M. |last2=Kopitzke |first2=I. |last3=Zimmermann |first3=K.-H. |date=April 2009 |title=Permutation Parity Machines for Neural Synchronization |journal=Journal of Physics A: Mathematical and Theoretical |volume=42 |issue=19 |pages=195002 |issn=1751-8113 |doi=10.1088/1751-8113/42/19/195002|bibcode=2009JPhA...42s5002R |s2cid=122126162 }}</ref>
 
It consists of one input layer, one hidden layer and one output layer. The number of neurons in the output layer depends on the number of hidden units K. Each hidden neuron has N binary input neurons: