Neural cryptography
After full synchronization is achieved (the weights w<sub>ij</sub> of both tree parity machines are the same), A and B can use their weights as keys.<br>
This method is known as bidirectional learning.<br>
One of the following learning rules<ref name="SinghAndNandal">{{cite journal |last1=Singh |first1=Ajit |last2=Nandal |first2=Aarti |date=May 2013 |title=Neural Cryptography for Secret Key Exchange and Encryption with AES |url=http://ijarcsse.com/Before_August_2017/docs/papers/Volume_3/5_May2013/V3I5-0187.pdf |journal=International Journal of Advanced Research in Computer Science and Software Engineering |volume=3 |issue=5 |pages=376–381 |issn=2277-128X}}</ref> can be used for the synchronization:
* Hebbian learning rule:
:<math>w_i^+=g(w_i+\sigma_ix_i\Theta(\sigma_i\tau)\Theta(\tau^A\tau^B))</math>
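The sketch below illustrates one synchronization round using the Hebbian rule above. It is a minimal illustration only, assuming a tree parity machine with K hidden units, N inputs per unit, and weights bounded to the interval [−L, L] by the function g; the names K, N, L, <code>tpm_output</code> and <code>hebbian_update</code> are illustrative and not taken from the cited source.
<syntaxhighlight lang="python">
import numpy as np

# Assumed TPM geometry for illustration: K hidden units, N inputs each,
# weights bounded to [-L, L].
K, N, L = 3, 4, 6

def tpm_output(w, x):
    """Hidden-unit signs sigma_i and overall output tau = prod(sigma_i)."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1           # map sign(0) to -1 by convention
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau, tau_other):
    """w_i^+ = g(w_i + sigma_i*x_i * Theta(sigma_i*tau) * Theta(tau^A*tau^B))."""
    if tau != tau_other:             # Theta(tau^A tau^B) = 0: no update this round
        return w
    for i in range(K):
        if sigma[i] == tau:          # Theta(sigma_i tau) = 1 only for agreeing units
            w[i] += sigma[i] * x[i]
    return np.clip(w, -L, L)         # g(...) keeps weights inside [-L, L]

# One round: both parties A and B receive the same random input vector x.
rng = np.random.default_rng(0)
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))
x = rng.choice([-1, 1], size=(K, N))

sigmaA, tauA = tpm_output(wA, x)
sigmaB, tauB = tpm_output(wB, x)
wA = hebbian_update(wA, x, sigmaA, tauA, tauB)
wB = hebbian_update(wB, x, sigmaB, tauB, tauA)
</syntaxhighlight>
Repeating such rounds with fresh random inputs drives the two weight matrices toward agreement; full synchronization is reached when wA and wB are identical, at which point the weights can serve as the shared key described above.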