=== Transformers ===
With the introduction of [[Transformer (machine learning model)|Transformer models]], paraphrase generation approaches improved their ability to generate text by scaling up the number of [[neural network]] parameters and heavily parallelizing training through [[Feedforward neural network|feed-forward layers]].<ref>{{Cite journal |last1=Zhou |first1=Jianing |last2=Bhat |first2=Suma |date=2021 |title=Paraphrase Generation: A Survey of the State of the Art |url=https://aclanthology.org/2021.emnlp-main.414 |journal=Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing |language=en |___location=Online and Punta Cana, Dominican Republic |publisher=Association for Computational Linguistics |pages=5075–5086 |doi=10.18653/v1/2021.emnlp-main.414|s2cid=243865349 |doi-access=free }}</ref> These models generate text so fluently that human experts often cannot tell whether an example was human-authored or machine-generated.<ref>{{Cite journal |last1=Dou |first1=Yao |last2=Forbes |first2=Maxwell |last3=Koncel-Kedziorski |first3=Rik |last4=Smith |first4=Noah |last5=Choi |first5=Yejin |date=2022 |title=Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text |url=https://aclanthology.org/2022.acl-long.501 |journal=Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |language=en |___location=Dublin, Ireland |publisher=Association for Computational Linguistics |pages=7250–7274 |doi=10.18653/v1/2022.acl-long.501|s2cid=247315430 |doi-access=free }}</ref> Transformer-based paraphrase generation relies on [[Autoencoder|autoencoding]], [[Autoregressive model|autoregressive]], or [[Seq2seq|sequence-to-sequence]] methods. Autoencoding models predict word-replacement candidates as a distribution over the vocabulary, while autoregressive and sequence-to-sequence models generate new text conditioned on the source, predicting one word at a time.<ref>{{Cite journal |last1=Liu |first1=Xianggen |last2=Mou |first2=Lili |last3=Meng |first3=Fandong |last4=Zhou |first4=Hao |last5=Zhou |first5=Jie |last6=Song |first6=Sen |date=2020 |title=Unsupervised Paraphrasing by Simulated Annealing |url=https://www.aclweb.org/anthology/2020.acl-main.28 |journal=Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics |language=en |___location=Online |publisher=Association for Computational Linguistics |pages=302–312 |doi=10.18653/v1/2020.acl-main.28|s2cid=202537332 |doi-access=free }}</ref><ref>{{Cite journal |last1=Wahle |first1=Jan Philip |last2=Ruas |first2=Terry |last3=Meuschke |first3=Norman |last4=Gipp |first4=Bela |title=Are Neural Language Models Good Plagiarists? 
A Benchmark for Neural Paraphrase Detection |url=https://ieeexplore.ieee.org/document/9651895 |journal=2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL) |year=2021 |___location=Champaign, IL, USA |publisher=IEEE |pages=226–229 |doi=10.1109/JCDL52503.2021.00065 |isbn=978-1-6654-1770-9|s2cid=232320374 |arxiv=2103.12450 }}</ref> More advanced efforts also exist to make paraphrasing controllable according to predefined quality dimensions, such as semantic preservation or lexical diversity.<ref>{{Cite journal |last1=Bandel |first1=Elron |last2=Aharonov |first2=Ranit |last3=Shmueli-Scheuer |first3=Michal |last4=Shnayderman |first4=Ilya |last5=Slonim |first5=Noam |last6=Ein-Dor |first6=Liat |date=2022 |title=Quality Controlled Paraphrase Generation |url=https://aclanthology.org/2022.acl-long.45 |journal=Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |language=en |___location=Dublin, Ireland |publisher=Association for Computational Linguistics |pages=596–609 |doi=10.18653/v1/2022.acl-long.45|doi-access=free }}</ref> Many Transformer-based paraphrase generation methods rely on unsupervised learning to leverage large amounts of training data and scale their methods.<ref>{{Cite journal |last1=Lee |first1=John Sie Yuen |last2=Lim |first2=Ho Hung |last3=Carol Webster |first3=Carol |date=2022 |title=Unsupervised Paraphrasability Prediction for Compound Nominalizations |url=https://aclanthology.org/2022.naacl-main.237 |journal=Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies |language=en |___location=Seattle, United States |publisher=Association for Computational Linguistics |pages=3254–3263 |doi=10.18653/v1/2022.naacl-main.237|s2cid=250390695 |doi-access=free }}</ref><ref>{{Cite journal |last1=Niu |first1=Tong |last2=Yavuz |first2=Semih |last3=Zhou |first3=Yingbo |last4=Keskar |first4=Nitish Shirish |last5=Wang |first5=Huan |last6=Xiong |first6=Caiming |date=2021 |title=Unsupervised Paraphrasing with Pretrained Language Models |url=https://aclanthology.org/2021.emnlp-main.417 |journal=Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing |language=en |___location=Online and Punta Cana, Dominican Republic |publisher=Association for Computational Linguistics |pages=5136–5150 |doi=10.18653/v1/2021.emnlp-main.417|s2cid=237497412 |doi-access=free }}</ref>
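As a rough illustration of the sequence-to-sequence approach, the following sketch generates paraphrase candidates with a pretrained Transformer through the Hugging Face <code>transformers</code> library; the checkpoint name is only an example of a publicly shared paraphrase model and does not correspond to any particular method cited above.

<syntaxhighlight lang="python">
# Illustrative sketch of sequence-to-sequence paraphrase generation.
# The checkpoint name below is an example, not a method from the cited papers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "tuner007/pegasus_paraphrase"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

source = "The outbreak forced the organizers to postpone the conference."
inputs = tokenizer(source, return_tensors="pt", truncation=True)

# The decoder produces the paraphrase autoregressively, one token at a time;
# beam search returns several candidate rewordings of the source sentence.
outputs = model.generate(
    **inputs,
    num_beams=5,
    num_return_sequences=3,
    max_new_tokens=60,
)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
</syntaxhighlight>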
== Paraphrase recognition ==
=== Transformers ===
Similar to how [[Transformer (machine learning model)|Transformer models]] influenced paraphrase generation, their application to paraphrase identification has likewise been successful. Models such as BERT can be adapted with a [[binary classification]] layer and trained end-to-end on identification tasks.<ref>{{Cite journal |last1=Devlin |first1=Jacob |last2=Chang |first2=Ming-Wei |last3=Lee |first3=Kenton |last4=Toutanova |first4=Kristina |date=2019 |title=BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |url=http://aclweb.org/anthology/N19-1423 |journal=Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) |language=en |___location=Minneapolis, Minnesota |publisher=Association for Computational Linguistics |pages=4171–4186 |doi=10.18653/v1/N19-1423|s2cid=52967399 }}</ref><ref>{{Citation |last1=Wahle |first1=Jan Philip |title=Identifying Machine-Paraphrased Plagiarism |date=2022 |url=https://link.springer.com/10.1007/978-3-030-96957-8_34 |work=Information for a Better World: Shaping the Global Future |volume=13192 |pages=393–413 |editor-last=Smits |editor-first=Malte |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-030-96957-8_34 |isbn=978-3-030-96956-1 |access-date=2022-10-06 |last2=Ruas |first2=Terry |last3=Foltýnek |first3=Tomáš |last4=Meuschke |first4=Norman |last5=Gipp |first5=Bela|s2cid=232307572 |arxiv=2103.12450 }}</ref> Compared with more traditional machine learning methods such as [[logistic regression]], Transformers achieve strong results when transferring between domains and paraphrasing techniques. Other successful methods based on the Transformer architecture include [[Adversarial machine learning|adversarial learning]] and [[Meta-learning (computer science)|meta-learning]].<ref>{{Cite journal |last1=Nighojkar |first1=Animesh |last2=Licato |first2=John |date=2021 |title=Improving Paraphrase Detection with the Adversarial Paraphrasing Task |url=https://aclanthology.org/2021.acl-long.552 |journal=Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) |language=en |___location=Online |publisher=Association for Computational Linguistics |pages=7106–7116 |doi=10.18653/v1/2021.acl-long.552|s2cid=235436269 |doi-access=free }}</ref><ref>{{Cite journal |last1=Dopierre |first1=Thomas |last2=Gravier |first2=Christophe |last3=Logerais |first3=Wilfried |date=2021 |title=ProtAugment: Intent Detection Meta-Learning through Unsupervised Diverse Paraphrasing |url=https://aclanthology.org/2021.acl-long.191 |journal=Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) |language=en |___location=Online |publisher=Association for Computational Linguistics |pages=2454–2466 |doi=10.18653/v1/2021.acl-long.191|s2cid=236460333 |doi-access=free }}</ref>
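To illustrate the setup described above, the following sketch scores a sentence pair with a BERT encoder topped by a binary classification head, again using the Hugging Face <code>transformers</code> library; the base checkpoint is an example, and in practice the classification head would first be fine-tuned end-to-end on a paraphrase identification corpus.

<syntaxhighlight lang="python">
# Illustrative sketch of Transformer-based paraphrase identification:
# a BERT encoder with a binary classification head over a sentence pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # example; the classification head starts untrained
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

sentence_a = "The company reported record profits this quarter."
sentence_b = "Record earnings were announced by the firm for the quarter."

# BERT encodes both sentences jointly as a single pair input.
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# After fine-tuning on labeled paraphrase pairs, index 1 would be read
# as the probability that the two sentences are paraphrases.
probabilities = torch.softmax(logits, dim=-1)
print("P(paraphrase) =", probabilities[0, 1].item())
</syntaxhighlight>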
== Evaluation ==