CIMalgo Robust Model Series
The Theoretical Foundation
Classical finance, based on the efficient market hypothesis (EMH) and modern portfolio theory (MPT), suggests that reducing portfolio volatility also reduces expected returns. This hypothesis rests on the assumption that equity returns are compensation for bearing risk, i.e. risk premia.
Yet this key EMH/MPT risk hypothesis has been extensively refuted in practice and is fundamentally at odds with empirical observations: low-volatility stocks have consistently outperformed high-volatility stocks over time, a phenomenon commonly referred to as the Low-Volatility Anomaly.
Not only do low-volatility portfolios achieve lower risk and higher Sharpe ratios, they also reach higher annualized returns. The phenomenon is, in fact, generalizable, as can be seen in figure 1.
Many market participants and observers have accordingly attempted to explain the phenomenon with new theories, such as market inefficiencies arising from the psychology of traders and investors, or by treating low volatility as a "factor" of its own that is supposed to account for non-apparent risk exposures.
However, regardless of how one tries to explain the low-volatility anomaly, it stands in stark contrast to EMH and MPT and remains one of the best tools for generating superior long-term returns. Volatility-optimized portfolios can thus deliver robust long-term performance by mitigating drawdowns while still performing on par with higher-volatility portfolios in bullish markets. This driver of outperformance is the theoretical foundation of the CIMalgo Robust Model Series.

Methods of Risk Reduction and Optimization
There are two main ways to construct a low-volatility portfolio. The first is simply to rank stocks by their respective volatilities and select the least volatile ones. This, however, ignores the correlations between the stocks, which can lead to significant losses when many of the holdings fall simultaneously. When the goal is to reduce risk, it is reasonable both to maximize diversification (i.e. avoid correlated stocks) and to minimize the volatility of individual stocks (i.e. avoid high-risk stocks). This more complex task cannot be achieved through rankings; it requires portfolio optimization. In any modern and competitive stock market, a plain rank-based portfolio can be shown to experience larger drawdowns, higher volatility, and worse return performance than its optimized counterpart.
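To make the distinction concrete, the two approaches can be sketched as follows (an illustrative sketch only, not the Robust Series implementation; the daily-returns array `returns`, the pool size `n`, and the helper names are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def rank_based_selection(returns, n):
    """Pick the n least volatile stocks, ignoring correlations."""
    vols = returns.std(axis=0)
    return np.argsort(vols)[:n]

def min_variance_weights(returns):
    """Long-only minimum-variance weights over the full pool,
    explicitly taking the covariance structure into account."""
    cov = np.cov(returns, rowvar=False)
    n_assets = cov.shape[0]
    w0 = np.full(n_assets, 1.0 / n_assets)
    res = minimize(lambda w: w @ cov @ w, w0,
                   bounds=[(0.0, 1.0)] * n_assets,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   method="SLSQP")
    return res.x
```

The ranking keeps a stock as long as its stand-alone volatility is low, whereas the optimizer can drop it if it is highly correlated with the rest of the pool.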
Portfolio optimization, or mean-variance optimization (MVO), was introduced in academia in the 1950s. Several weaknesses of the approach have since surfaced in practice, notably instability and sensitivity to estimation errors. Significant criticism of MVO has therefore been aimed at the fact that small changes in volatility estimates can lead to dramatically different portfolios. This is referred to as "mean-variance instability" and is, ultimately, an overfitting problem.
In practice, this means that a small change in the volatility estimates can reshuffle the weights of the entire portfolio. For this reason, some researchers have argued that MVO tends to maximize estimation errors rather than minimize actual risk. This is especially important for equity returns, which are known to exhibit "fat tails" that can severely distort variance estimates and thus lead to over-exposure in some securities and under-exposure in others. Academic research has shown that portfolios whose optimization accounts for the uncertainty in variance estimates perform better than portfolios that treat the estimated values as "true" values plugged into the optimization.
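The sensitivity can be illustrated with a small numerical experiment (a toy sketch on simulated data, not a statement about any particular market): bump each estimated volatility by about 1% and compare the resulting unconstrained mean-variance weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def mv_weights(mu, cov, risk_aversion=5.0):
    """Unconstrained mean-variance solution: w = inv(lambda * cov) @ mu."""
    return np.linalg.solve(risk_aversion * cov, mu)

# Toy setup: 50 assets estimated from only 60 observations,
# so the sample covariance is nearly singular (a common situation).
n_assets, n_obs = 50, 60
samples = rng.normal(size=(n_obs, n_assets))
cov = np.cov(samples, rowvar=False)
mu = rng.normal(0.05, 0.02, size=n_assets)   # toy expected returns

# Bump each estimated volatility by ~1% while keeping correlations fixed
bump = np.diag(1.0 + rng.normal(scale=0.01, size=n_assets))
cov_bumped = bump @ cov @ bump

w_base = mv_weights(mu, cov)
w_bumped = mv_weights(mu, cov_bumped)
print("relative change in weights:",
      np.linalg.norm(w_bumped - w_base) / np.linalg.norm(w_base))
```

With a well-conditioned covariance estimate the change stays small; as the estimate approaches singularity, the same 1% bump in volatilities can reshuffle the weights substantially, which is the instability described above.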
Estimating Volatility
In the context of equity returns, it is well known that volatility is a dynamic property. Both market-wide events and events particular to a specific equity can shift the volatility level abruptly. The question of which part of an equity's history to use for the volatility estimate therefore becomes an all-important issue for model design and portfolio selection.
In the Robust Series, CIMalgo seeks a robust, parsimonious approach which, in contrast to widely used techniques like GARCH, is not susceptible to distortions or over-reliance on a specific definition of volatility. To this end, the basic Robust model uses statistical change-point detection to identify volatility "regimes". This technique avoids making assumptions about the distributional and sequential-dependency properties of the volatilities.
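The CIMalgo change-point algorithm itself is not reproduced here, but the general idea of detecting variance regimes can be illustrated with a simple binary-segmentation sketch (note that, unlike the distribution-free approach described above, this illustration assumes Gaussian segments; all names are hypothetical):

```python
import numpy as np

def best_variance_split(x, min_size=30):
    """Single maximum-likelihood split of x (e.g. daily returns) into two
    zero-mean Gaussian segments with different variances.
    Returns (split index, likelihood gain)."""
    n = len(x)
    full_cost = n * np.log(np.var(x) + 1e-12)
    best_idx, best_gain = None, 0.0
    for t in range(min_size, n - min_size):
        cost = (t * np.log(np.var(x[:t]) + 1e-12)
                + (n - t) * np.log(np.var(x[t:]) + 1e-12))
        if full_cost - cost > best_gain:
            best_idx, best_gain = t, full_cost - cost
    return best_idx, best_gain

def detect_regimes(x, penalty=20.0, min_size=30):
    """Recursive binary segmentation: keep splitting while the
    likelihood gain exceeds a penalty. Returns sorted break indices."""
    idx, gain = best_variance_split(x, min_size)
    if idx is None or gain < penalty:
        return []
    left = detect_regimes(x[:idx], penalty, min_size)
    right = detect_regimes(x[idx:], penalty, min_size)
    return left + [idx] + [idx + b for b in right]
```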
The difference between the change-point approach and a 3-year rolling-window volatility estimate is clearly illustrated by the Lehman Brothers case, 2006–2008. During the summer of 2007, the Lehman stock began a descent that would last until its eventual bankruptcy in late 2008. The change-point algorithm of the CIMalgo Robust model detects three points at which the Lehman stock's volatility changes during this period: in April 2006, August 2007, and March 2008.
As can be seen in the second chart below, while the change-point-based volatility jumps from 30% annualized to almost 60% annualized in 2007, the rolling volatility changes only slightly. In this scenario, such a stock would therefore be removed from a hypothetical portfolio during the summer of 2007, whereas it could very likely remain in a portfolio based on the rolling estimate.
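Continuing the illustrative sketch above (assuming the hypothetical `detect_regimes` helper from the previous block and an array `returns` of daily returns), the two estimates being compared look roughly like this:

```python
import numpy as np

TRADING_DAYS = 252

def annualized_vol(x):
    return np.std(x) * np.sqrt(TRADING_DAYS)

def regime_vol(returns):
    """Volatility estimated only on the most recent detected regime."""
    breaks = detect_regimes(returns)
    start = breaks[-1] if breaks else 0
    return annualized_vol(returns[start:])

def rolling_vol(returns, window=3 * TRADING_DAYS):
    """Conventional 3-year rolling-window estimate."""
    return annualized_vol(returns[-window:])
```

When a regime shift has just occurred, regime_vol reacts immediately because older observations are discarded, while rolling_vol still averages roughly three years of mostly calm history, which is why it barely moves in the Lehman example.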

The Robust Series Approach to Optimization
Various solutions to the problems of MVO have been proposed by academics and practitioners, for example regularized optimization, where weights are forced to behave in a more stable fashion. In the Robust Series, we seek the most robust and parsimonious approach possible and therefore, as a starting point, enforce equal weighting and a pre-specified number of constituents.
An optimization enforcing equal weighting and a fixed number of constituents is a challenging computational problem that cannot be solved with conventional portfolio-optimization tools such as quadratic programming; these constraints place the problem in the class of combinatorial optimization. Large-scale instances of such problems require sophisticated techniques like heuristic search or genetic algorithms. The optimization method in the Robust Series relies on a proprietary algorithm, resembling an evolutionary process in which the least "fit" securities are iteratively removed from the initial universe until an optimal portfolio is found.
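The proprietary algorithm is not disclosed here, but the general flavour of such an iterative-elimination approach can be sketched as a greedy backward elimination (an illustrative simplification, not the Robust Series method): at each step, drop the security whose removal most reduces the variance of the equally weighted portfolio, until the target number of constituents remains.

```python
import numpy as np

def equal_weight_variance(cov, members):
    """Variance of an equally weighted portfolio over the given members."""
    k = len(members)
    sub = cov[np.ix_(members, members)]
    return sub.sum() / (k * k)

def backward_elimination(cov, target_size):
    """Greedy heuristic: iteratively remove the least 'fit' security,
    i.e. the one whose removal lowers portfolio variance the most."""
    members = list(range(cov.shape[0]))
    while len(members) > target_size:
        best_j, best_var = None, np.inf
        for j in members:
            candidate = [i for i in members if i != j]
            var = equal_weight_variance(cov, candidate)
            if var < best_var:
                best_j, best_var = j, var
        members.remove(best_j)
    return members
```

A single greedy pass like this is not guaranteed to find the global optimum of the combinatorial problem, which is why more sophisticated search strategies of the kind mentioned above are needed in practice.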
Empirically, equal weighting has also been shown to outperform other weighting schemes in terms of absolute returns. Part of this outperformance has been attributed to risk factors, and part to the contrarian nature of the scheme: since equal weights are enforced at every rebalancing date, positions in securities that have performed well over the preceding period are scaled down. The Robust models therefore benefit from this weighting setup in two ways: robustness of the risk optimization and performance in terms of absolute returns.
The basic portfolio construction steps of the Robust Series models, carried out at each rebalancing date, can be summarized as follows (a schematic code sketch follows the list):
- Set up the basic selection pool from the universe of securities, typically filtered for liquidity criteria and other restrictions imposed on the eligible securities
- Using change-point detection, estimate volatilities along with covariance matrices for the full selection pool
- Pass the estimated parameters to the optimizer to select the optimal portfolio constituents
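Purely as a schematic illustration of how these steps fit together (reusing the hypothetical `detect_regimes` and `backward_elimination` helpers from the earlier sketches; the covariance alignment step is a simplification):

```python
import numpy as np

def construct_robust_portfolio(returns, eligible, target_size):
    """Schematic rebalancing pipeline.
    returns:     (T, N) array of daily returns for the full universe
    eligible:    boolean mask of length N (liquidity and other filters)
    target_size: fixed number of equally weighted constituents
    """
    # 1. Basic selection pool after eligibility filtering
    pool = np.where(eligible)[0]

    # 2. Regime-aware estimates: keep only each stock's current regime,
    #    then align on the shortest regime to form a covariance matrix
    regimes = []
    for i in pool:
        breaks = detect_regimes(returns[:, i])
        start = breaks[-1] if breaks else 0
        regimes.append(returns[start:, i])
    min_len = min(len(r) for r in regimes)
    aligned = np.column_stack([r[-min_len:] for r in regimes])
    cov = np.cov(aligned, rowvar=False)

    # 3. Combinatorial selection of the equally weighted portfolio
    chosen = backward_elimination(cov, target_size)
    return pool[chosen]
```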