IBM alignment models

For additional words: <math>d_{>1}(v_{j}-v_{\pi_{i,k-1}}\mid B(e_j),v_{\max'})</math>
 
Alignment models that use first-order dependencies, such as the HMM or IBM Models 4 and 5, produce better results than the other alignment methods. The HMM predicts the distance between subsequent source language positions, whereas IBM Model 4 predicts the distance between subsequent target language positions. Since using both types of dependency was expected to yield better alignment quality, HMM and Model 4 were combined in a log-linear manner in Model 6 as follows:<ref>{{cite web | url = http://people.cs.kuleuven.be/~ivan.vulic/Files/TASOA.pdf | author = Vulić I. | title = Term Alignment. State of the Art Overview | year = 2010 | publisher = Katholieke Universiteit Leuven | accessdate = 26 October 2015 }}{{Dead link|date=January 2020 |bot=InternetArchiveBot |fix-attempted=yes }}</ref>
 
:<math>p_6(f,a\mid e)= \frac{p_4(f,a\mid e)^\alpha \cdot p_{HMM}(f,a\mid e)}{\sum_{a',f'} p_4(f',a'\mid e)^\alpha \cdot p_{HMM}(f',a'\mid e)}</math>
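The log-linear combination above can be sketched in a few lines of Python. This is a toy illustration, not an implementation of Model 6: the candidate alignments and their Model 4 and HMM probabilities (<code>p4_scores</code>, <code>hmm_scores</code>, the alignment labels) are hypothetical inputs, and in practice the sum in the denominator runs over an intractably large set of alignments, so it is approximated rather than enumerated.

```python
def model6_score(p4_scores, hmm_scores, alpha):
    """Combine Model 4 and HMM alignment probabilities log-linearly.

    p4_scores, hmm_scores: dicts mapping candidate alignments to
    their (hypothetical) probabilities under each model.
    alpha: interpolation weight applied as an exponent to Model 4.
    """
    # Unnormalized log-linear combination: p4^alpha * p_HMM
    unnorm = {a: (p4_scores[a] ** alpha) * hmm_scores[a] for a in p4_scores}
    # Normalize over the candidate set (the denominator in the formula)
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

# Toy example with two candidate alignments of one sentence pair
p4 = {"a1": 0.6, "a2": 0.4}
hmm = {"a1": 0.5, "a2": 0.5}
p6 = model6_score(p4, hmm, alpha=2.0)
```

With <code>alpha > 1</code> the combined distribution leans more heavily on Model 4's preferences; <code>alpha = 0</code> reduces it to the HMM alone (up to normalization).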