
Bus sniffing


Bus snooping or bus sniffing is a scheme by which a coherency controller (snooper) in a cache monitors or snoops the bus transactions; its goal is to maintain cache coherency in distributed shared memory systems. A cache that has a coherency controller (snooper) inside is called a snoopy cache. The scheme was introduced by Ravishankar and Goodman in 1983.

When specific data is shared by several caches and a processor modifies the value of the shared data, the change must be propagated to all the other caches that hold a copy of that data; otherwise, cache coherency is violated. The notification of a data change can be done by bus snooping. All the snoopers monitor every transaction on the bus. If a transaction modifying a shared cache block appears on the bus, each snooper checks whether its cache holds a copy of that block. If it does, the snooper performs an action to ensure cache coherency. The action can be a flush or an invalidation of the cache block, and it may also involve a change of the cache block's state, depending on the cache coherence protocol.
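As a rough illustration, the following C sketch shows a snooper reacting to a write transaction it observes on the bus; the CacheLine layout, NUM_LINES, and snoop_bus_write are illustrative names chosen here, not part of any particular protocol or hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256   /* size of this illustrative cache */

/* One cache line as seen by its snooper (illustrative layout). */
typedef struct {
    uint64_t tag;    /* block address held in this line         */
    bool     valid;  /* line holds usable data                  */
    bool     dirty;  /* local copy differs from main memory     */
} CacheLine;

/* Invoked by the snooper for every write transaction it observes on
 * the bus.  If this cache holds a copy of the written block, the
 * snooper flushes it back to memory when it is dirty and then
 * invalidates it, as described in the text. */
void snoop_bus_write(CacheLine cache[NUM_LINES], uint64_t written_addr)
{
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == written_addr) {
            if (cache[i].dirty) {
                /* flush: write-back to main memory (omitted here) */
                cache[i].dirty = false;
            }
            cache[i].valid = false;  /* invalidate the stale copy */
        }
    }
}
```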

There are two kinds of snooping protocols, depending on how the local copies are managed on a write operation:

Write-invalidate: When a processor writes to a shared cache block, all copies of the block in the other caches are invalidated through bus snooping. This method ensures that only one copy of the data remains, which the writing processor can then read and write exclusively; all other cached copies are invalidated. This is the most commonly used snooping approach. Protocols such as MSI, MESI, MOSI, MOESI, and MESIF belong to this category.
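A minimal C sketch of the write-invalidate idea from the writing processor's side; broadcast_invalidate stands in for the real bus transaction and, like the CacheLine fields, is an assumption made purely for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative cache line; the "shared" flag records that other caches
 * may also hold a copy of this block. */
typedef struct {
    uint64_t tag;     /* block address held in this line */
    uint64_t data;    /* cached value                    */
    bool     dirty;   /* local copy differs from memory  */
    bool     shared;  /* other caches may hold a copy    */
} CacheLine;

/* Hypothetical stand-in for the real bus transaction: every other
 * snooper that sees it invalidates its copy of the block. */
static void broadcast_invalidate(uint64_t addr) { (void)addr; }

/* Write-invalidate from the writer's side: before the local write,
 * all other copies are invalidated, so this cache ends up holding the
 * only (exclusive) copy of the block. */
void write_invalidate(CacheLine *line, uint64_t new_value)
{
    if (line->shared) {
        broadcast_invalidate(line->tag);  /* other copies are dropped */
        line->shared = false;             /* now the exclusive copy   */
    }
    line->data  = new_value;  /* perform the local write        */
    line->dirty = true;       /* main memory is now out of date */
}
```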

Write-update: When a processor writes to a shared cache block, all shared copies in the other caches are updated through bus snooping. This method broadcasts the written data to all caches over the bus, which incurs larger bus traffic than the write-invalidate approach; for that reason this method is uncommon. Protocols such as Dragon and Firefly belong to this category.
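A corresponding sketch of the write-update idea, again with hypothetical names (broadcast_update, snoop_bus_update): the writer broadcasts the new value, and every snooper that holds the block refreshes its copy instead of invalidating it.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256

typedef struct {
    uint64_t tag;    /* block address held in this line */
    uint64_t data;   /* cached value                    */
    bool     valid;  /* line holds usable data          */
} CacheLine;

/* Hypothetical stand-in for the bus transaction that carries the
 * written value to every other cache. */
static void broadcast_update(uint64_t addr, uint64_t new_value)
{
    (void)addr; (void)new_value;
}

/* Writer's side: perform the local write and push the new value onto
 * the bus so every sharer can refresh its copy. */
void write_update(CacheLine *line, uint64_t new_value)
{
    line->data = new_value;                  /* local write          */
    broadcast_update(line->tag, new_value);  /* broadcast to the bus */
}

/* Snooper's side: a cache holding the block overwrites its copy with
 * the broadcast value instead of invalidating it. */
void snoop_bus_update(CacheLine cache[NUM_LINES],
                      uint64_t addr, uint64_t new_value)
{
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == addr)
            cache[i].data = new_value;  /* update the shared copy */
    }
}
```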

One possible implementation is as follows:

The cache would have three extra bits per line: V (valid), D (dirty, meaning the cached copy has been modified and differs from main memory), and S (shared).

Each cache line is in one of the following states: "dirty" (has been updated by the local processor), "valid", "invalid", or "shared". A cache line contains a value, and it can be read or written. Writing to a cache line changes the value. Each value is either in main memory (which is very slow to access) or in one or more local caches (which are fast). When a block is first loaded into the cache, it is marked as "valid".
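One possible encoding of that state in C, assuming illustrative names (memory_read stands in for a slow main-memory access): each line carries the valid, dirty, and shared bits described above, and a freshly loaded block is marked "valid".

```c
#include <stdbool.h>
#include <stdint.h>

/* One cache line with the three extra bits mentioned above. */
typedef struct {
    uint64_t tag;     /* address of the block held in this line            */
    uint64_t data;    /* cached value                                       */
    bool     valid;   /* V: line holds usable data                          */
    bool     dirty;   /* D: updated by the local processor, memory is stale */
    bool     shared;  /* S: other caches may also hold this block           */
} CacheLine;

/* Hypothetical stand-in for a (slow) read from main memory. */
static uint64_t memory_read(uint64_t addr) { (void)addr; return 0; }

/* Loading a block for the first time: fetch it from main memory and
 * mark the line "valid"; it is neither dirty nor known to be shared. */
void load_block(CacheLine *line, uint64_t addr)
{
    line->tag    = addr;
    line->data   = memory_read(addr);
    line->valid  = true;   /* freshly loaded blocks start out "valid" */
    line->dirty  = false;
    line->shared = false;
}
```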

