Multimodal learning


Information in the real world usually comes in different modalities. For example, images are often accompanied by tags and text explanations, and texts contain images to express the main idea of an article more clearly. Different modalities are characterized by very different statistical properties: images are typically represented as pixel intensities or as outputs of feature extractors, while texts are represented as discrete word-count vectors. Because of the distinct statistical properties of these information sources, it is important to discover the relationships between modalities. Multimodal learning models the joint representation of different modalities, and such a model can also fill in a missing modality given the observed ones. The multimodal learning model combines two deep Boltzmann machines, each corresponding to one modality, with an additional hidden layer placed on top of the two Boltzmann machines to give the joint representation.
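The layer structure described above can be sketched in a few lines of NumPy. The layer sizes, random weights, sigmoid activations, and the single bottom-up pass below are illustrative assumptions rather than the actual model: a real multimodal deep Boltzmann machine is trained (e.g. with contrastive divergence) and queried with mean-field or Gibbs-sampling inference, not a single feed-forward pass.

# Minimal sketch of the architecture: one hidden pathway per modality,
# plus a shared joint layer on top of both pathways. All sizes and
# weights are made-up placeholders for illustration only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative layer sizes (assumed, not from the source).
n_img, n_img_h = 784, 256      # image input and image-pathway hidden layer
n_txt, n_txt_h = 2000, 256     # word-count input and text-pathway hidden layer
n_joint = 128                  # shared layer on top of the two pathways

# Randomly initialised weights standing in for trained DBM parameters.
W_img = rng.normal(0, 0.01, (n_img, n_img_h))
W_txt = rng.normal(0, 0.01, (n_txt, n_txt_h))
W_img_joint = rng.normal(0, 0.01, (n_img_h, n_joint))
W_txt_joint = rng.normal(0, 0.01, (n_txt_h, n_joint))

def joint_representation(image, text):
    """Single bottom-up pass: modality-specific layers feed one joint layer."""
    h_img = sigmoid(image @ W_img)                 # image pathway
    h_txt = sigmoid(text @ W_txt)                  # text pathway
    # The joint layer pools evidence from both modalities.
    return sigmoid(h_img @ W_img_joint + h_txt @ W_txt_joint)

# Example call with dummy data.
image = rng.random(n_img)          # e.g. pixel intensities in [0, 1]
text = rng.poisson(0.01, n_txt)    # e.g. sparse word counts
print(joint_representation(image, text).shape)    # (128,)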

Motivation

Many models and algorithms have been implemented to retrieve and classify a single type of data, e.g. images or text, which are the forms in which humans interacting with machines typically receive information. However, data usually comes with different modalities (distinct forms in which information is expressed, such as images and text), and these modalities carry different information. For example, it is very common to caption an image to convey information not presented by the image itself. Similarly, it is sometimes more straightforward to use an image to describe information that may not be obvious from text. As a result, if different words appear in similar images, those words are very likely used to describe the same thing. Conversely, if the same words are used with different images, those images may represent the same object. It is therefore important to devise a model that can jointly represent the information, so that it captures the correlation structure between different modalities. Moreover, the model should also be able to recover missing modalities given the observed ones, e.g. predicting a possible image object from a text description. The Multimodal Deep Boltzmann Machine model satisfies these requirements.
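As a rough illustration of recovering a missing modality, the sketch below (again with assumed layer sizes and untrained random weights) passes a text vector up its pathway to the joint layer and then back down the image pathway. In the actual model this conditional inference is performed by Gibbs sampling or mean-field updates over the joint distribution; the single deterministic pass here only indicates the direction of information flow.

# Hypothetical cross-modal inference: predict image-side activations
# given only the text modality. Sizes and weights are placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

n_img, n_img_h, n_txt, n_txt_h, n_joint = 784, 256, 2000, 256, 128
W_img = rng.normal(0, 0.01, (n_img, n_img_h))
W_txt = rng.normal(0, 0.01, (n_txt, n_txt_h))
W_img_joint = rng.normal(0, 0.01, (n_img_h, n_joint))
W_txt_joint = rng.normal(0, 0.01, (n_txt_h, n_joint))

def image_from_text(text):
    """Infer the missing image modality from text alone (crude stand-in
    for the Gibbs / mean-field inference used in the real model)."""
    h_txt = sigmoid(text @ W_txt)              # text pathway, up
    h_joint = sigmoid(h_txt @ W_txt_joint)     # joint layer (image side unobserved)
    h_img = sigmoid(h_joint @ W_img_joint.T)   # image pathway, down
    return sigmoid(h_img @ W_img.T)            # expected pixel intensities

text = rng.poisson(0.01, n_txt)                # dummy word-count vector
print(image_from_text(text).shape)             # (784,)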

