Multimodal learning

==Motivation==
Many models and algorithms have been implemented to retrieve and classify particular types of data, such as images or text. However, data usually come in different modalities, that is, in different forms or channels of information (such as image, text, or audio), each of which carries different information. For example, it is common to caption an image to convey information not presented in the image itself. Similarly, it is sometimes more straightforward to use an image to convey information that may not be obvious from text alone. As a result, if different words appear in similar images, then these words likely describe the same thing; conversely, if a word is used to describe seemingly dissimilar images, then these images may represent the same object. Thus, when dealing with multimodal data, it is important to use a model that can jointly represent the information, so that it captures the correlation structure between the modalities. Such a model should also be able to recover a missing modality given the observed ones (e.g., predicting a possible image object from a text description). The Multimodal Deep Boltzmann Machine model satisfies both purposes.
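The idea of a joint representation that aligns different modalities can be sketched with a toy shared-embedding space. The code below is purely illustrative, not the Boltzmann machine model described above: the linear encoders `W_image` and `W_text`, the feature dimensions, and the random data are all placeholder assumptions; in a real model the encoders would be learned so that paired image/text examples land close together.

```python
import numpy as np

# Minimal sketch: project two modalities into one shared space.
# Random placeholder weights stand in for learned encoders.
rng = np.random.default_rng(0)
W_image = rng.normal(size=(4, 8))  # maps 8-d image features -> shared 4-d space
W_text = rng.normal(size=(4, 5))   # maps 5-d text features  -> shared 4-d space

def embed(W, x):
    """Project modality-specific features into the shared space,
    L2-normalized so cosine similarity reduces to a dot product."""
    z = W @ x
    return z / np.linalg.norm(z)

def retrieve(query, candidates):
    """Cross-modal retrieval: index of the candidate closest to the query."""
    return int(np.argmax([query @ c for c in candidates]))

image_vec = rng.normal(size=8)
text_vec = rng.normal(size=5)

# One joint representation simply fuses (here: concatenates) the two views.
joint = np.concatenate([embed(W_image, image_vec), embed(W_text, text_vec)])
print(joint.shape)  # (8,)

# "Recovering" a missing modality as nearest-neighbour lookup in the
# shared space: pick the caption embedding closest to the image embedding.
captions = [embed(W_text, rng.normal(size=5)) for _ in range(3)]
best = retrieve(embed(W_image, image_vec), captions)
```

With learned encoders, the same nearest-neighbour lookup is what allows such a model to predict a plausible caption for an image, or vice versa.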
 
== Multimodal transformers ==