Neural cryptography: Difference between revisions

Hebbian learning reference moved to correct place
Line 59:
After the full synchronization is achieved (the weights w<sub>ij</sub> of both tree parity machines are the same), A and B can use their weights as keys.<br>
This method is known as bidirectional learning.<br>
One of the following learning rules<ref name="SinghAndNandal">{{cite journal | author1=Ajit Singh | author2=Aarti Nandal | journal=International Journal of Advanced Research in Computer Science and Software Engineering | volume=3 | issue=5 | year=2013 | title=Neural Cryptography for Secret Key Exchange and Encryption with AES | url=http://www.ijarcsse.com/docs/papers/Volume_3/5_May2013/V3I5-0187.pdf | pages=376&ndash;381}}</ref> can be used for the synchronization:
* Hebbian learning rule:
:<math>w_i^+=g(w_i+\sigma_ix_i\Theta(\sigma_i\tau)\Theta(\tau^A\tau^B))</math>
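For illustration, the update above can be sketched as a short program. The following is a minimal sketch of one Hebbian synchronization step for a pair of tree parity machines; the network sizes K and N, the weight bound L, and all function names are illustrative assumptions, not values given in the article.
<syntaxhighlight lang="python">
# Minimal sketch of one Hebbian update step for a tree parity machine (TPM),
# following w_i^+ = g(w_i + sigma_i * x_i * Theta(sigma_i*tau) * Theta(tau_A*tau_B)).
# K, N, L and the clipping function g are conventional, illustrative choices.
import numpy as np

K, N, L = 3, 100, 3          # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """Hidden outputs sigma (K,) and overall output tau for weights w (K,N), inputs x (K,N)."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1    # map sign(0) to -1 by convention
    tau = np.prod(sigma)
    return sigma, tau

def hebbian_update(w, x, sigma, tau, tau_other):
    """Apply the Hebbian rule: weights change only when both machines agree
    (tau == tau_other), and only in hidden units whose output sigma_i equals tau."""
    if tau == tau_other:
        mask = (sigma == tau).astype(int)       # Theta(sigma_i * tau)
        w = w + (sigma * mask)[:, None] * x     # add sigma_i * x_ij for active units
        w = np.clip(w, -L, L)                   # g(...) keeps weights in [-L, L]
    return w

# One synchronization step for machines A and B on a shared random input:
rng = np.random.default_rng(0)
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))
x = rng.choice([-1, 1], size=(K, N))
sA, tA = tpm_output(wA, x)
sB, tB = tpm_output(wB, x)
wA = hebbian_update(wA, x, sA, tA, tB)
wB = hebbian_update(wB, x, sB, tB, tA)
</syntaxhighlight>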
Line 146 ⟶ 147:
* {{cite journal | author=Khalil Shihab | year=2006 | title=A backpropagation neural network for computer network security | journal=Journal of Computer Science | volume=2 | pages=710&ndash;715 | url=http://www.scipub.org/fulltext/jcs/jcs29710-715.pdf}}
* {{cite journal | author1=Ajit Singh | author2=Aarti Nandal | journal=International Journal of Advanced Research in Computer Science and Software Engineering | volume=3 | issue=5 | year=2013 |
title=Neural Cryptography for Secret Key Exchange and Encryption with AES | url=http://www.ijarcsse.com/docs/papers/Volume_3/5_May2013/V3I5-0187.pdf | pages=376&ndash;381}}
{{Cryptography navbox}}