In linear algebra, the computation of the permanent of a matrix is a problem that is believed to be more difficult than the computation of the determinant of a matrix, despite the apparent similarity of the definitions.
The permanent is defined similarly to the determinant, as a sum of products of sets of matrix entries that lie in distinct rows and columns. However, where the determinant weights each of these products with a ±1 sign based on the parity of the set, the permanent weights them all with a +1 sign.
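For example, for a 2-by-2 matrix the two quantities differ only in the sign of the second term:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc, \qquad \operatorname{perm}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc.$$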
While the determinant can be computed in polynomial time by Gaussian elimination, the permanent cannot. In computational complexity theory, a theorem of Valiant states that computing permanents is #P-hard, and even #P-complete for matrices in which all entries are 0 or 1 (Valiant 1979). This places the computation of the permanent in a class of problems believed to be even more difficult to compute than NP. It is also known that computing the permanent is impossible for logspace-uniform ACC0 circuits (Allender & Gore 1994).
The development of both exact and approximate algorithms for computing the permanent of a matrix is an active area of research.
The permanent of an n-by-n matrix A = (a_{i,j}) is defined as

$$\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.$$
The sum here extends over all elements σ of the symmetric group S_n, i.e. over all permutations of the numbers 1, 2, ..., n. This formula differs from the corresponding formula for the determinant only in that, in the determinant, each product is multiplied by the sign of the permutation σ, while in this formula each product is unsigned. The formula may be translated directly into an algorithm that naively expands it, summing over all permutations and, within the sum, multiplying out each product of matrix entries. This requires Θ(n·n!) arithmetic operations.
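A minimal sketch of this naive expansion in Python is shown below; the function name naive_permanent is chosen here for illustration and is not a standard library routine.

```python
from itertools import permutations

def naive_permanent(a):
    """Compute the permanent of a square matrix by summing, over every
    permutation sigma, the product of the entries a[i][sigma[i]]."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):   # all n! permutations of {0, ..., n-1}
        product = 1
        for i in range(n):                 # n multiplications per permutation
            product *= a[i][sigma[i]]
        total += product                   # unsigned: every term has weight +1
    return total

# Example: perm([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
print(naive_permanent([[1, 2], [3, 4]]))
```

Because the loop visits all n! permutations and performs n multiplications for each, the running time grows as Θ(n·n!), which matches the operation count stated above and quickly becomes impractical even for moderate n.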