Wayback Machine

[Logo: stylized "INTERNET ARCHIVE WAYBACK MACHINE" wordmark, with "WAYBACK" in red]
Type of site: Archive
Owner: Internet Archive
Website: web.archive.org
Alexa rank: 262 (as of November 2016)
Registration: Optional
Launched: October 24, 2001
Current status: Active
Written in: C, Perl

The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet, created by the Internet Archive, a nonprofit organization based in San Francisco, California. The Internet Archive launched the Wayback Machine in October 2001. It was set up by Brewster Kahle and Bruce Gilliat and is maintained with content from Alexa Internet. The service enables users to see archived versions of web pages across time, which the archive calls a "three dimensional index".

Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes. It revisits sites every few weeks or months and archives a new version. Sites can also be captured on the fly by visitors who enter the site's URL into a search box. The intent is to capture and archive content that otherwise would be lost whenever a site is changed or closed down. The overall vision of the machine's creators is to archive the entire Internet.
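As a concrete illustration of retrieving a page "across time", the sketch below queries the Wayback Machine's public availability endpoint (https://archive.org/wayback/available), which returns the archived capture closest to a requested timestamp. This is a minimal sketch, not an official client; the helper name closest_snapshot and the example URL are illustrative.

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url, timestamp=None):
        """Return the Wayback Machine capture of `url` closest to
        `timestamp` (YYYYMMDDhhmmss), or None if nothing is archived."""
        params = {"url": url}
        if timestamp:
            params["timestamp"] = timestamp
        query = urllib.parse.urlencode(params)
        with urllib.request.urlopen(
                "https://archive.org/wayback/available?" + query) as resp:
            data = json.load(resp)
        # "archived_snapshots" is empty when the URL has never been captured.
        return data.get("archived_snapshots", {}).get("closest")

    snap = closest_snapshot("example.com", "20060101000000")
    if snap:
        # Replayed captures live at web.archive.org/web/<timestamp>/<url>.
        print(snap["timestamp"], snap["url"])

Replayed snapshots are addressed by the scheme https://web.archive.org/web/<timestamp>/<url>, so the timestamp in the response can be used to link directly to a particular capture.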

The name Wayback Machine was chosen as a reference to the "WABAC machine" (pronounced way-back), a time-traveling device used by the characters Mr. Peabody and Sherman in the animated cartoon The Rocky and Bullwinkle Show. In one of the cartoon's component segments, Peabody's Improbable History, the characters routinely used the machine to witness, participate in, and, more often than not, alter famous events in history.

In 1996 Brewster Kahle, with Bruce Gilliat, developed software to crawl and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin board system, and downloadable software. The information collected by these "crawlers" does not include all the information available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. The crawlers also respect the robots exclusion standard, so websites whose owners opt out do not appear in search results or get cached. To overcome inconsistencies in partially cached websites, the Internet Archive developed Archive-It.org in 2005 as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content and create digital archives.
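For illustration, the robots exclusion check that such a crawler performs can be reproduced with Python's standard-library robotparser: the sketch below fetches a site's robots.txt and asks whether a given user agent may fetch a page. This is an assumption-laden sketch, not the archive's actual crawler code; the user-agent token "ia_archiver" is the one historically associated with Alexa's crawler, but any agent string works here, and example.com stands in for a real site.

    from urllib import robotparser

    # Fetch and parse the target site's robots exclusion file.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # Ask whether the named crawler may fetch a given page; a
    # well-behaved archiving crawler skips the URL when this is False.
    # "ia_archiver" is the agent token historically used by Alexa's crawler.
    allowed = rp.can_fetch("ia_archiver", "https://example.com/some/page")
    print("may archive:", allowed)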

