In portfolio theory, a mutual fund separation theorem, mutual fund theorem, or separation theorem is a theorem stating that, under certain conditions, any investor's optimal portfolio can be constructed by holding each of certain mutual funds in appropriate ratios, where the number of mutual funds is smaller than the number of individual assets in the portfolio. Here a mutual fund refers to any specified benchmark portfolio of the available assets. A mutual fund theorem has two advantages. First, if the relevant conditions are met, it may be easier (or less costly in transaction costs) for an investor to purchase a smaller number of mutual funds than to purchase a larger number of assets individually. Second, from a theoretical and empirical standpoint, if it can be assumed that the relevant conditions are indeed satisfied, then implications for the functioning of asset markets can be derived and tested.
If the returns on the available assets are jointly elliptically distributed, including the special case in which they are jointly normally distributed, portfolios can be analyzed in a mean-variance framework, with every investor holding the portfolio that has the lowest possible return variance consistent with that investor's chosen level of expected return (called a minimum-variance portfolio). Under mean-variance analysis, it can be shown that every minimum-variance portfolio for a given expected return (that is, every efficient portfolio) can be formed as a combination of any two efficient portfolios. In particular, if the investor's optimal portfolio has an expected return between the expected returns on two efficient benchmark portfolios, then that investor's portfolio consists of positive quantities of the two benchmark portfolios.
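This two-fund property can be checked numerically. The sketch below uses hypothetical expected returns and a hypothetical covariance matrix (none of these numbers come from the article); it computes minimum-variance portfolios via the standard Lagrange-multiplier solution and verifies that the efficient portfolio for an intermediate target return is a positive combination of two efficient benchmark portfolios.

```python
# A minimal numerical sketch of two-fund separation (no risk-free asset).
# The expected-return vector and covariance matrix below are hypothetical
# illustrative numbers, not data from the article.
import numpy as np

r = np.array([0.05, 0.08, 0.12])          # expected returns of three assets
V = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.020, 0.004],
              [0.001, 0.004, 0.040]])     # positive-definite covariance matrix
ones = np.ones(3)

def min_variance_weights(m):
    """Weights of the minimum-variance portfolio with expected return m
    and weights summing to 1 (standard Lagrange-multiplier solution)."""
    Vinv = np.linalg.inv(V)
    A = ones @ Vinv @ ones
    B = ones @ Vinv @ r
    C = r @ Vinv @ r
    D = A * C - B**2
    lam = (A * m - B) / D
    gam = (C - B * m) / D
    return lam * (Vinv @ r) + gam * (Vinv @ ones)

# Two efficient benchmark portfolios (the two "mutual funds").
w_low = min_variance_weights(0.07)
w_high = min_variance_weights(0.11)

# An efficient portfolio with an intermediate expected return ...
m_target = 0.09
w_target = min_variance_weights(m_target)

# ... equals a positive combination of the two funds, because the
# optimal weights are affine in the target expected return.
alpha = (m_target - 0.11) / (0.07 - 0.11)
print(np.allclose(w_target, alpha * w_low + (1 - alpha) * w_high))  # True
```

The check succeeds because the minimum-variance weights are an affine function of the target expected return, so any efficient portfolio can be written as a weighted combination of two others, with strictly positive weights whenever its expected return lies between those of the two benchmarks.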
To see two-fund separation in a context in which no risk-free asset is available, using matrix algebra, let $\sigma^2$ be the variance of the portfolio return, let $\mu$ be the level of expected portfolio return on which the minimization of the portfolio return variance is conditioned, let $r$ be the vector of expected returns on the available assets, let $X$ be the vector of amounts to be placed in the available assets, let $W$ be the amount of wealth that is to be allocated in the portfolio, and let $1$ be a vector of ones. Then the problem of minimizing the portfolio return variance subject to a given level of expected portfolio return can be stated as