The tree parity machine is a special type of multi-layer [[feedforward neural network]].
It consists of one output neuron, {{var|K}} hidden neurons and {{var|K}}×{{var|N}} input neurons. Inputs to the network take three values:
:<math>x_{ij} \in \left\{ -1,0,+1 \right\}</math>
The weights between input and hidden neurons take the values:
:<math>w_{ij} \in \left\{ -L,...,0,...,+L \right\}</math>
The output value of each hidden neuron is calculated as the sum of the products of the input neurons and these weights:
:<math>\sigma_i=\sgn(\sum_{j=1}^{N}w_{ij}x_{ij})</math>
Signum is a simple function, which returns −1, 0 or 1:
:<math>\sgn (x) = \begin{cases}
-1 & \text{if } x < 0, \\
0 & \text{if } x = 0, \\
1 & \text{if } x > 0. \end{cases}</math>
If the scalar product is 0, the output of the hidden neuron is mapped to −1 in order to ensure a binary output value. The output of the neural network is then computed as the product of all values produced by the hidden neurons:
:<math>\tau=\prod_{i=1}^{K}\sigma_i</math>
The output of the tree parity machine is binary.
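As a minimal sketch, the hidden-layer and output computation can be written in a few lines of Python; the sizes {{var|K}}, {{var|N}}, {{var|L}} and the random initialization are illustrative assumptions, not values fixed by the protocol:

```python
import random
from math import prod

K, N, L = 3, 4, 3  # illustrative sizes (assumptions): hidden units, inputs per unit, depth

def sgn(x):
    # sgn as defined above, with a scalar product of 0 mapped to -1
    return -1 if x <= 0 else 1

# random weights in {-L, ..., +L} and inputs in {-1, 0, +1}
weights = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
inputs = [[random.choice([-1, 0, 1]) for _ in range(N)] for _ in range(K)]

# hidden outputs sigma_i and the binary machine output tau
sigma = [sgn(sum(w * x for w, x in zip(weights[i], inputs[i]))) for i in range(K)]
tau = prod(sigma)
```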
=== Protocol ===
Each party ({{var|A}} and {{var|B}}) uses its own tree parity machine. Synchronization of the tree parity machines is achieved in the following steps:
# Initialize random weight values
# Execute these steps until the full synchronization is achieved
## Generate random input vector {{var|X}}
## Compute the values of the hidden neurons
## Compute the value of the output neuron
## Compare the output values of both tree parity machines
### Outputs are different: go to step 2.1
### Outputs are the same: one of the suitable learning rules is applied to the weights
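The steps above can be sketched in Python. The learning rule used here is a Hebbian-style update (a weight moves towards the input when its hidden unit agrees with the output, clipped to [−{{var|L}}, {{var|L}}]), one common choice among the suitable rules; the network sizes are illustrative assumptions:

```python
import random
from math import prod

K, N, L = 3, 4, 3  # illustrative sizes (assumptions): hidden units, inputs per unit, depth

def sgn(x):
    return -1 if x <= 0 else 1  # scalar product of 0 mapped to -1

def output(w, x):
    sigma = [sgn(sum(wi * xi for wi, xi in zip(w[i], x[i]))) for i in range(K)]
    return sigma, prod(sigma)

def hebbian(w, x, sigma, tau):
    # Only hidden units that agree with tau move their weights; each weight is
    # pushed towards the corresponding input and clipped to [-L, L].
    for i in range(K):
        if sigma[i] == tau:
            for j in range(N):
                w[i][j] = max(-L, min(L, w[i][j] + x[i][j] * sigma[i]))

def rand_w():
    return [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]

wa, wb = rand_w(), rand_w()  # the two parties' weights
steps = 0
while wa != wb and steps < 200000:
    x = [[random.choice([-1, 0, 1]) for _ in range(N)] for _ in range(K)]
    sa, ta = output(wa, x)
    sb, tb = output(wb, x)
    if ta == tb:  # equal outputs: both parties apply the learning rule
        hebbian(wa, x, sa, ta)
        hebbian(wb, x, sb, tb)
    steps += 1    # different outputs: discard this input and try again
```

Once the weights coincide they stay identical, since equal weights on equal inputs produce equal updates; for such a tiny network the loop terminates after a modest number of steps.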
After full synchronization is achieved (the weights w<sub>ij</sub> of both tree parity machines are the same), {{var|A}} and {{var|B}} can use their weights as keys.<br>
This method is known as bidirectional learning.<br>
One of the following learning rules<ref name="SinghAndNandal">{{cite journal |last1=Singh |first1=Ajit |last2=Nandal |first2=Aarti |date=May 2013 |title=Neural Cryptography for Secret Key Exchange and Encryption with AES |url=http://ijarcsse.com/Before_August_2017/docs/papers/Volume_3/5_May2013/V3I5-0187.pdf |journal=International Journal of Advanced Research in Computer Science and Software Engineering |volume=3 |issue=5 |pages=376–381 |issn=2277-128X}}</ref> can be used for the synchronization:
=== Attacks and security of this protocol ===
Every attack assumes that the attacker {{var|E}} can eavesdrop on messages between the parties {{var|A}} and {{var|B}}, but does not have an opportunity to change them.
==== Brute force ====
To provide a brute force attack, an attacker has to test all possible keys (all possible values of the weights w<sub>ij</sub>). With {{var|K}} hidden neurons, {{var|K}}×{{var|N}} input neurons and a boundary of weights {{var|L}}, this gives (2{{var|L}}+1)<sup>{{var|KN}}</sup> possibilities. For example, the configuration {{var|K}} = 3, {{var|L}} = 3 and {{var|N}} = 100 gives about 3·10<sup>253</sup> key possibilities, making the attack impossible with today's computing power.
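The size of the key space follows directly from the weight bounds: each of the {{var|K}}·{{var|N}} weights independently takes one of the 2{{var|L}}+1 values in {−{{var|L}}, …, +{{var|L}}}. A quick check in Python, with example sizes as assumptions:

```python
# Each of the K*N weights independently takes one of the 2L+1 values in
# {-L, ..., +L}, so the number of candidate keys is (2L+1)**(K*N).
K, N, L = 3, 100, 3  # example sizes (assumptions)
keys = (2 * L + 1) ** (K * N)  # 7**300, a 254-digit number, roughly 3e253
```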
==== Learning with own tree parity machine ====
One of the basic attacks can be provided by an attacker who owns the same tree parity machine as the parties {{var|A}} and {{var|B}}. He wants to synchronize his tree parity machine with these two parties. In each step there are three possible situations:
# Output(A) ≠ Output(B): None of the parties updates its weights.
# Output(A) = Output(B) = Output(E): All the three parties update weights in their tree parity machines.
# Output(A) = Output(B) ≠ Output(E): Parties {{var|A}} and {{var|B}} update their tree parity machines, but the attacker cannot. Because of this situation, his learning is slower than the synchronization of parties {{var|A}} and {{var|B}}.
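The three situations can be captured as a small decision function (a sketch; the party labels are just names):

```python
def who_updates(tau_a, tau_b, tau_e):
    # Decide which parties apply the learning rule in one step (E is the attacker).
    if tau_a != tau_b:
        return []                 # case 1: A and B disagree, nobody updates
    if tau_e == tau_a:
        return ["A", "B", "E"]    # case 2: all three agree, all update
    return ["A", "B"]             # case 3: E disagrees and falls behind
```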
It has been proven that the synchronization of the two parties is faster than the learning of an attacker. The advantage can be increased by raising the synaptic depth {{var|L}} of the neural network. That gives this protocol enough security: an attacker can find out the key only with small probability.
==== Other attacks ====
For conventional cryptographic systems, we can improve the security of the protocol by increasing the key length. In the case of neural cryptography, it is improved by increasing the synaptic depth {{var|L}} of the neural networks. Changing this parameter increases the cost of a successful attack exponentially, while the effort for the users grows polynomially. Therefore, breaking the security of neural key exchange belongs to the complexity class NP.
Alexander Klimov, Anton Mityagin, and Adi Shamir showed that the original neural synchronization scheme can be broken by at least three different attacks: geometric attacks, probabilistic analysis, and genetic algorithms. Even though this particular implementation is insecure, the ideas behind chaotic synchronization could potentially lead to a secure implementation.<ref name="Klimov">{{cite conference |last1=Klimov |first1=Alexander |last2=Mityagin |first2=Anton |last3=Shamir |first3=Adi |date=2002 |title=Analysis of Neural Cryptography |url=https://iacr.org/archive/asiacrypt2002/25010286/25010286.pdf |book-title=Advances in Cryptology |conference=ASIACRYPT 2002 |series=[[Lecture Notes in Computer Science|LNCS]] |volume=2501 |pages=288–298 |issn=0302-9743 |doi=10.1007/3-540-36178-2_18 |accessdate=2017-11-15}}</ref>
A [[quantum computer]] is a device that uses quantum mechanics for computation. In this device the data are stored as qubits (quantum binary digits). Compared with a conventional computer, this gives a quantum computer the ability to solve certain complicated problems in a short time, e.g. the discrete logarithm problem or factorization. Because of this, algorithms that are not based on any of these number-theoretic problems are being sought.
The neural key exchange protocol is not based on any number theory. It is based on the difference between unidirectional and bidirectional synchronization of neural networks. Therefore, something like the neural key exchange protocol could give rise to potentially faster key exchange schemes.<ref name="Klimov" />
== See also ==