Neural cryptography: Difference between revisions

One of the following learning rules can be used for the synchronization:
* Hebbian learning rule:
:<math>w_i^+=g(w_i+\sigma_ix_i\Theta(\sigma_i,\tau)\Theta(\tau^A,\tau^B))</math>
* Anti-Hebbian learning rule:
:<math>w_i^+=g(w_i-\sigma_ix_i\Theta(\sigma_i,\tau)\Theta(\tau^A,\tau^B))</math>
* Random walk:
:<math>w_i^+=g(w_i+x_i\Theta(\sigma_i,\tau)\Theta(\tau^A,\tau^B))</math>
Where:
:<math>\Theta(a,b)=0</math> if <math>a \ne b</math> otherwise <math>\Theta(a,b)=1</math>
And:
:<math>g(x)</math> is a [[Stochastic process|stochastic]] [[Function (mathematics)|function]] that keeps the <math>w_i</math> in the range <math>\{-L, -L+1,\ldots,0,\ldots,L-1,L\}</math>
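The update rules above can be sketched in code. The following is a minimal illustration of two tree parity machines synchronizing with the Hebbian rule; the network sizes (<math>K</math> hidden units, <math>N</math> inputs per unit, bound <math>L</math>) and the iteration count are hypothetical choices, and <math>g</math> is modelled here as simple clipping into <math>[-L, L]</math>:

```python
import numpy as np

K, N, L = 3, 4, 3  # hypothetical sizes: hidden units, inputs per unit, weight bound

def output(w, x):
    """sigma_k = sign of each hidden unit's local field; tau = product of the sigma_k."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1          # convention: treat a zero field as -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau, tau_other):
    """w_i^+ = g(w_i + sigma_i x_i Theta(sigma_i, tau) Theta(tau^A, tau^B))."""
    if tau != tau_other:            # Theta(tau^A, tau^B) = 0: outputs differ, no update
        return w
    for k in range(K):
        if sigma[k] == tau:         # Theta(sigma_k, tau) = 1: unit agreed with tau
            w[k] = np.clip(w[k] + sigma[k] * x[k], -L, L)  # g keeps w_i in [-L, L]
    return w

rng = np.random.default_rng(0)
wA = rng.integers(-L, L + 1, size=(K, N))  # A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))  # B's secret weights
for _ in range(2000):               # both parties see the same public random inputs
    x = rng.choice([-1, 1], size=(K, N))
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    wA = hebbian_update(wA, x, sA, tA, tB)
    wB = hebbian_update(wB, x, sB, tB, tA)
```

The anti-Hebbian and random-walk rules differ only in the increment (<math>-\sigma_ix_i</math> and <math>x_i</math>, respectively); once the weight matrices agree, they can serve as the shared key.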
 
=== Attacks and security of this protocol ===