Wikipedia:Using WebCite


WebCite can archive a range of content, including HTML web pages, PDF files, style sheets, JavaScript, and digital images. Another web archiving service is the Wayback Machine. The two operate differently, and certain pages can be archived by one but not the other: the Wayback Machine takes snapshots of web pages on its own schedule and also accepts user-initiated archiving requests, whereas WebCite only archives a page when someone actively submits it.

There are many ways to submit a web page to WebCite for archiving. If you are new to using WebCite, try the WebCite form method first; the other methods are better suited to those who use WebCite regularly.

The WebCite form method is easy to use, but it is slower than the other methods because it requires going to the WebCite website each time you want to archive a web page.
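As a rough sketch of what the form submission amounts to — assuming the archiving form at http://www.webcitation.org/archive takes the target address in a url parameter and a contact address in an email parameter (parameter names to be verified against the form itself) — the resulting request looks something like this:

    http://www.webcitation.org/archive?url=http%3A%2F%2Fexample.com%2Fpage&email=you%40example.com

The url value is percent-encoded so that it survives inside the query string.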

Put simply, a bookmarklet is a web browser bookmark that performs a function instead of opening a web page. With the WebCite bookmarklet, clicking the bookmark takes the URL of the page you are currently viewing and submits it to WebCite for archiving. This method is easy to set up, easy to use, and fast. To get the most out of it, keep your Bookmarks/Favorites bar visible, or at least have your bookmarks accessible within a click or two. A bookmarklet can only archive the page you are currently viewing; to archive a different web page, you will have to use another method.
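Purely as an illustration of the mechanism (the endpoint and parameter name below are assumptions, not WebCite's published code), such a bookmarklet is a javascript: URL along these lines:

    javascript:void(window.open('http://www.webcitation.org/archive?url='+encodeURIComponent(location.href)))

Saved as a bookmark, clicking it opens http://www.webcitation.org/archive with the current page's address passed in the query string.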

Firefox smart keywords are commonly used to perform searches from the Firefox address bar, or to open a bookmark by typing a keyword into the address bar. Here we use a smart keyword to submit a URL to WebCite for archiving. This method is moderately simple to set up, easy to use, and fast.
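A sketch of the keyword setup, assuming the same http://www.webcitation.org/archive endpoint and url parameter as above (verify against WebCite's own form): create a bookmark whose location uses Firefox's %s placeholder, which Firefox replaces with whatever you type after the keyword, and assign the bookmark a keyword such as webcite:

    http://www.webcitation.org/archive?url=%s&email=you@example.com

Typing webcite http://example.com/page into the address bar then submits that page for archiving.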

Chrome offers an equivalent: although it is set up through Chrome's search engine feature, it functions just like a smart keyword in Firefox. This method is moderately simple to set up, easy to use, and fast.
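The Chrome setup reuses the same URL template; in Chrome's search engine settings you add an entry whose URL contains %s where the typed text should go (again, the WebCite endpoint and parameters are assumptions to verify):

    Name:    WebCite
    Keyword: webcite
    URL:     http://www.webcitation.org/archive?url=%s&email=you@example.com

Typing the keyword followed by a URL in Chrome's address bar then works the same way as the Firefox smart keyword.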

WebCite honors the robots exclusion standard, as well as no-cache and no-archive tags, and will not archive sites that disallow archiving.
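For reference, a site-wide opt-out under the robots exclusion standard looks like the following (a generic example, not taken from any particular site); a page-level opt-out uses a meta tag such as <meta name="robots" content="noarchive"> in the page's HTML head:

    # robots.txt at the root of the site
    User-agent: *
    Disallow: /

A Disallow rule covering the whole site, or just the path being cited, is enough for WebCite to refuse to archive the page.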

For example, The New York Times has a robots.txt file at http://www.nytimes.com/robots.txt which includes:


...