Text-to-video model
{{short description|Machine learning model}}
A '''text-to-video model''' is a [[machine learning]] model which takes a [[natural language]] description as input and produces one or more [[video]]s matching that description.<ref name="AIIR">{{cite report|url=https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf|title=Artificial Intelligence Index Report 2023|publisher=Stanford Institute for Human-Centered Artificial Intelligence|page=98|quote=Multiple high quality text-to-video models, AI systems that can generate video clips from prompted text, were released in 2022.}}</ref>
 
Video prediction, which aims to render objects realistically against a stable background, can be performed with a [[recurrent neural network]] in a sequence-to-sequence model, paired with a [[convolutional neural network]] that encodes and decodes each frame pixel by pixel,<ref>{{Cite web |title=Leading India |url=https://www.leadingindia.ai/downloads/projects/VP/vp_16.pdf}}</ref> so that video is generated with [[deep learning]].<ref>{{Cite web |last=Narain |first=Rohit |date=2021-12-29 |title=Smart Video Generation from Text Using Deep Neural Networks |url=https://www.datatobiz.com/blog/smart-video-generation-from-text/ |access-date=2022-10-12 |language=en-US}}</ref> Conditional [[generative model]]s that condition on existing textual information can be evaluated against a [[data set]] using a [[variational autoencoder]] or a [[generative adversarial network]] (GAN).
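The architecture sketched in the paragraph above can be illustrated in code. The following is a minimal, hypothetical PyTorch sketch (not any specific published model; all class names, layer sizes, and the 32×32 single-channel frame size are illustrative assumptions): a convolutional encoder maps each frame to a latent vector, a recurrent (LSTM) sequence-to-sequence core processes the latent sequence, and a convolutional decoder maps the outputs back to frames.

```python
# Hypothetical sketch of CNN-encode -> RNN seq2seq -> CNN-decode video
# prediction; sizes and names are illustrative, not from any real system.
import torch
import torch.nn as nn


class FrameEncoder(nn.Module):
    """Convolutional encoder: one 32x32 grayscale frame -> latent vector."""
    def __init__(self, latent=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)


class FrameDecoder(nn.Module):
    """Transposed-convolution decoder: latent vector -> one frame."""
    def __init__(self, latent=64):
        super().__init__()
        self.fc = nn.Linear(latent, 32 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return self.net(h)


class VideoPredictor(nn.Module):
    """Encode each frame, run an LSTM over the latent sequence, decode
    the LSTM outputs back into a sequence of predicted frames."""
    def __init__(self, latent=64):
        super().__init__()
        self.enc = FrameEncoder(latent)
        self.dec = FrameDecoder(latent)
        self.rnn = nn.LSTM(latent, latent, batch_first=True)

    def forward(self, frames):  # frames: (batch, time, 1, 32, 32)
        b, t = frames.shape[:2]
        z = self.enc(frames.flatten(0, 1)).view(b, t, -1)  # per-frame latents
        out, _ = self.rnn(z)                               # seq2seq core
        return self.dec(out.flatten(0, 1)).view(b, t, 1, 32, 32)


model = VideoPredictor()
clip = torch.rand(2, 5, 1, 32, 32)  # batch of 2 clips, 5 frames each
pred = model(clip)                  # predicted frames, same shape as input
```

In a real system such a network would be trained (for example with a per-pixel reconstruction loss, or adversarially as in the GAN setting mentioned above) to predict future frames from past ones; this sketch only shows the data flow.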