Noisy text


Noise can be seen as any difference between the surface form of a coded representation of the text and the intended, correct, or original text. It can stem, for example, from typographic errors or from the colloquialisms that are always present in natural language, and it usually lowers data quality in a way that makes the text less amenable to automated processing, such as natural language processing. Noise can also be introduced by an extraction process (e.g. transcription or optical character recognition, OCR) applied to media other than original electronic text.
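As a rough illustration, the amount of noise in a token can be quantified as the edit distance between the observed surface form and the intended text. The Python sketch below is a minimal example of this idea, not part of any specific system; the sample strings simulate a typical OCR confusion ("rn" read as "m").

    # Illustrative sketch: quantify noise as the edit distance between
    # an observed (noisy) surface form and the intended text.

    def edit_distance(observed: str, intended: str) -> int:
        """Levenshtein distance: the minimum number of single-character
        insertions, deletions, or substitutions needed to turn one
        string into the other."""
        prev = list(range(len(intended) + 1))
        for i, a in enumerate(observed, start=1):
            curr = [i]
            for j, b in enumerate(intended, start=1):
                curr.append(min(
                    prev[j] + 1,              # deletion
                    curr[j - 1] + 1,          # insertion
                    prev[j - 1] + (a != b),   # substitution (or match)
                ))
            prev = curr
        return prev[-1]

    # An OCR-style error ("modern" read as "modem") is two edits away.
    print(edit_distance("modem", "modern"))  # 2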

Language use in computer-mediated discourse, such as chats, emails and SMS messages, differs significantly from the standard form of the language. The urge toward shorter messages that can be typed faster, together with the need for semantic clarity, shapes the structure of the text used in such discourse.
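A common first step when processing such text is lexical normalization, i.e. mapping non-standard tokens back to their standard forms. The following sketch shows a simple dictionary-based approach; the lookup table is a tiny invented example rather than a standard resource.

    # Illustrative sketch: dictionary-based lexical normalization of
    # chat/SMS-style tokens (the table below is an invented example).

    NORMALIZATION_TABLE = {
        "u": "you",
        "r": "are",
        "gr8": "great",
        "pls": "please",
        "thx": "thanks",
    }

    def normalize(message: str) -> str:
        """Replace known non-standard tokens with their standard forms;
        unknown tokens are passed through unchanged."""
        return " ".join(NORMALIZATION_TABLE.get(tok.lower(), tok)
                        for tok in message.split())

    print(normalize("thx u r gr8"))  # thanks you are great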

Various business analysts estimate that unstructured data constitutes around 80% of all enterprise data. A large proportion of this data consists of chat transcripts, emails and other informal and semi-formal internal and external communications. Such text is usually intended for human consumption, but, given its sheer volume, manual processing and evaluation of these resources is no longer practically feasible. This creates a need for robust text mining methods.

To reduce the amount of noise in typed text as it is produced, spell checkers and grammar checkers are available today. Many word processors, such as MS Word, include these tools in their editors. Online, Google search includes a search term suggestion engine that guides users when they mistype their queries.
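Such tools typically compare each token against a dictionary and suggest nearby words. The sketch below illustrates the general idea using a toy vocabulary and a string-similarity measure from the Python standard library; it is not the algorithm of any particular product.

    # Illustrative sketch: suggest dictionary words whose spelling is
    # close to a misspelled token (toy vocabulary, not a real word list).

    import difflib

    VOCABULARY = ["language", "processing", "noise", "transcript", "query"]

    def suggest(word, cutoff=0.6):
        """Return up to three vocabulary words similar to `word`."""
        return difflib.get_close_matches(word.lower(), VOCABULARY,
                                         n=3, cutoff=cutoff)

    print(suggest("langage"))     # ['language']
    print(suggest("prosessing"))  # ['processing']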

