==Background: Boltzmann machine==
A [[Boltzmann machine]] is a type of stochastic neural network invented by [[Geoffrey Hinton]] and [[Terry Sejnowski]] in 1985. Boltzmann machines can be seen as the [[stochastic process|stochastic]], [[generative model|generative]] counterpart of [[Hopfield net]]s. They are named after the [[Boltzmann distribution]] in statistical mechanics. The units in Boltzmann machines are divided into two groups: visible units and hidden units. A general Boltzmann machine allows connections between any pair of units. However, learning with general Boltzmann machines is impractical because the computational time grows exponentially with the size of the machine{{Citation needed|date=November 2022}}. A more efficient architecture, the '''[[restricted Boltzmann machine]]''', allows connections only between hidden units and visible units; it is described in the next section.
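The restricted connectivity can be illustrated with a minimal sketch. The snippet below is a hypothetical toy example (the layer sizes and random weights are illustrative, not from any published model): the energy function contains only visible-hidden interaction terms, and because the graph is bipartite, all hidden units can be sampled in parallel given the visibles, and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny RBM: 6 visible units, 4 hidden units (sizes chosen for illustration).
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # only visible-hidden weights exist
b = np.zeros(n_visible)  # visible biases
c = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v.b - h.c - v.W.h
    # No visible-visible or hidden-hidden terms: this is the "restriction".
    return -v @ b - h @ c - v @ W @ h

def gibbs_step(v):
    # Hidden units are conditionally independent given the visibles,
    # so the whole hidden layer is sampled in one parallel step (and vice versa).
    p_h = sigmoid(c + v @ W)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(b + h @ W.T)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

v0 = rng.integers(0, 2, size=n_visible).astype(float)
v1, h0 = gibbs_step(v0)
```

In a general Boltzmann machine, by contrast, the energy would include quadratic terms within each layer, and the units could no longer be updated in parallel, which is what makes exact learning intractable at scale.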
===Restricted Boltzmann machine===
==Application==
Multimodal deep Boltzmann machines have been used successfully for classification and missing-data retrieval. The classification accuracy of the multimodal deep Boltzmann machine outperforms [[support vector machine]]s, [[latent Dirichlet allocation]] and [[deep belief network]]s when models are tested on data with both image and text modalities or with a single modality{{Citation needed|date=November 2022}}. The multimodal deep Boltzmann machine is also able to predict missing modalities from the observed ones with reasonably good precision{{Citation needed|date=November 2022}}.
Self-supervised learning has since produced more powerful approaches to multimodality. [[OpenAI]] developed the CLIP and [[DALL-E]] models, which substantially advanced multimodal learning.