Deflated Sharpe ratio
The Deflated Sharpe Ratio (DSR) is a statistical method used to determine whether the Sharpe ratio of an investment strategy is statistically significant. It was developed in 2014 by Marcos López de Prado at Guggenheim Partners and Cornell University, and David H. Bailey at Lawrence Berkeley National Laboratory. The DSR corrects for selection bias, backtest overfitting, sample length, and non-normality in return distributions, providing a more reliable test of financial performance, especially when many trials are evaluated.[1] Applying the DSR helps practitioners detect false investment strategies.
The DSR offers a more precise and robust adjustment for multiple testing than traditional methods such as the Šidák correction, because it explicitly models both the selection bias arising from choosing the best among many trials and the estimation uncertainty inherent in Sharpe ratios. Unlike the Šidák correction, which assumes independence and adjusts p-values based only on the number of tests, the DSR accounts for the variance of Sharpe estimates, the number of trials, and their effective independence, often estimated through clustering. This leads to a more realistic threshold for statistical significance that reflects the true probability of a false discovery in data-mined environments. As a result, the DSR is particularly well-suited for finance, where researchers often conduct large-scale, correlated searches for profitable strategies without strong prior hypotheses.[2][3]
Relation to the Sharpe Ratio
One of the most important statistics for assessing the performance of an investment strategy is the Sharpe Ratio (SR). The Sharpe ratio was developed by William F. Sharpe and is a widely used measure of risk-adjusted return, calculated as the annualized ratio of excess return over the risk-free rate to the standard deviation of returns. While useful, the Sharpe Ratio has important limitations, especially when applied to multiple strategy evaluations. Issues such as selection bias, where the best-performing strategy is chosen from a large set, and backtest overfitting, where a strategy is tailored to past data, can inflate the Sharpe Ratio, leading to misleading conclusions about a strategy's effectiveness. Additionally, the Sharpe Ratio assumes normally distributed returns,[4] an assumption often violated in practice, and it does not take into account sample length.[5]
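As an illustration, the following minimal Python sketch computes an annualized Sharpe ratio from periodic returns; the function name, the daily frequency, and the annualization by the square root of 252 trading days are illustrative assumptions rather than part of the cited sources.

```python
import numpy as np

def annualized_sharpe(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of periodic returns: mean excess return
    over its standard deviation, scaled by sqrt(periods per year)."""
    excess = np.asarray(returns, dtype=float) - risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```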
Applying the Deflated Sharpe Ratio in Practice
1. Get a record of all the trials.
To apply the DSR, researchers need to record the investment performance, as returns (%), of every backtest run during the development of a single specific strategy. For example, when building a momentum-based strategy that trades at the end of day, 100 historical simulations might be run to evaluate performance and select the best set of parameters for the final strategy. All 100 simulations need to be recorded, with each simulation's daily returns in %.
2. Estimating the Effective Number of Trials N.
In practice, many trials are not independent due to overlapping features. To estimate the effective number of independent trials N, López de Prado (2018) proposes three techniques for clustering similar strategies using unsupervised learning:
- The Optimal Number of Clusters (ONC) algorithm.[2][6][7]
- Hierarchical clustering could be used to get a conservative lower bound for N.
- Alternatively, spectral methods (e.g. eigenvalue distribution of the correlation matrix) can also provide estimates of N.[2]
Tip:
- Multiple testing exercises should be carefully planned in advance, so as to avoid running an unnecessarily large number of trials. Investment theory, not computational power, should motivate what experiments are worth conducting.[1]
Steps to estimate N:
2.1. Convert the correlation matrix to a distance matrix.
In order to apply a clustering algorithm to the returns data, we make use of a statistical association measure (such as a correlation matrix) and transform it into a distance matrix (such as the angular distance), so that elements that are very similar to each other end up close together in the resulting metric space.[8][9]
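A minimal sketch of this transformation is shown below; the function name and the NumPy-based implementation are illustrative assumptions, not code from the cited sources.

```python
import numpy as np

def angular_distance(corr):
    """Map a correlation matrix to angular distances,
    d_ij = sqrt(0.5 * (1 - rho_ij)), so that highly correlated
    trials end up close together before clustering."""
    corr = np.clip(np.asarray(corr, dtype=float), -1.0, 1.0)
    return np.sqrt(0.5 * (1.0 - corr))
```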
2.2. Apply a clustering algorithm to estimate the number of independent trials.
The number of clusters, N, is an estimate of the number of independent trials.
2.3 Plot the Block Correlation Matrix
In the figure below we can see a correlation matrix before and after clustering has been applied. Note the blocks along the diagonal; each block corresponds to a cluster.[7]

Tip: If the ONC algorithm is not used for clustering, blocks may contain trials that do not match very closely. ONC uses silhouette scores to ensure each trial is assigned to its best cluster, at the expense of higher computational complexity and longer run times.
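The sketch below is a simplified stand-in for this step, not the ONC algorithm itself: it uses scikit-learn's KMeans on the rows of the distance matrix and picks the cluster count with the highest mean silhouette score. Function and parameter names are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_effective_trials(dist, max_clusters=10, seed=0):
    """Pick the number of clusters that maximizes the mean silhouette
    score, treating each row of the distance matrix as a feature vector;
    the chosen count serves as the estimate of N."""
    dist = np.asarray(dist, dtype=float)
    best_n, best_score = 2, -np.inf
    for n in range(2, min(max_clusters, len(dist) - 1) + 1):
        labels = KMeans(n_clusters=n, n_init=10, random_state=seed).fit_predict(dist)
        score = silhouette_score(dist, labels, metric="precomputed")
        if score > best_score:
            best_n, best_score = n, score
    return best_n
```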
3. Compute the Sharpe ratio variance across clusters.
3.1 Calculate the Sharpe ratio for each cluster.
Each cluster now forms a collection of return time series (in %). For each cluster, create a new time series that represents the cluster using the Inverse Variance Portfolio (IVP), and then compute the Sharpe ratio of that IVP. The IVP is not mandatory; the goal is to form an aggregate cluster return series, for which some weighting scheme must be used. An alternative is the minimum variance portfolio.[7]
3.2 Compute the variance of these Sharpe Ratios
The cross-sectional variance of these Sharpe ratios, $V[\{\widehat{SR}_k\}]$, is used in the next step, where we apply the False Strategy Theorem to determine the expected maximum Sharpe ratio.
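A minimal sketch of steps 3.1 and 3.2 follows; it assumes `returns` is a pandas DataFrame whose columns are the recorded trials and `labels` is the cluster label of each column (both names are assumptions for illustration).

```python
import numpy as np
import pandas as pd

def cluster_sharpe_stats(returns, labels):
    """Form an inverse-variance portfolio (IVP) per cluster, compute each
    portfolio's non-annualized Sharpe ratio, and return those Sharpe
    ratios together with their cross-cluster variance."""
    labels = np.asarray(labels)
    sharpes = {}
    for k in np.unique(labels):
        block = returns.loc[:, labels == k]       # trials in cluster k
        w = 1.0 / block.var()
        w = w / w.sum()                           # inverse-variance weights
        port = block.mul(w, axis=1).sum(axis=1)   # cluster IVP returns
        sharpes[k] = port.mean() / port.std(ddof=1)
    sharpes = pd.Series(sharpes, name="SR")
    return sharpes, sharpes.var(ddof=1)
```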
4. Compute the Expected Maximum Sharpe ratio using the False Strategy Theorem.
Using the equation from the False Strategy Theorem (FST),[10] we can compute $SR_0 = E\!\left[\max_n \widehat{SR}_n\right]$, the threshold Sharpe ratio that reflects the highest Sharpe ratio expected from unskilled strategies:
$$SR_0 = \sqrt{V\!\left[\{\widehat{SR}_n\}\right]}\left((1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]\right)$$
Where:
- $V[\{\widehat{SR}_n\}]$ is the cross-sectional variance of Sharpe ratios across trials,
- $\gamma$ is the Euler–Mascheroni constant (approx. 0.5772),
- $e$ is Euler's number,
- $\Phi^{-1}$ is the inverse standard normal CDF,
- $N$ is the number of independent strategy trials.[1]
Note:
The FST highlights that the best outcome of an unknown number of historical simulations is right-unbounded: with enough trials, there is no Sharpe ratio large enough to reject the hypothesis that a strategy is false, i.e., that it is overfit and will not generalize to out-of-sample data.[5][7]
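A minimal sketch of the FST threshold computation, using SciPy's `norm.ppf` as the inverse standard normal CDF and inputs following the definitions above (the function name is an illustrative assumption):

```python
import numpy as np
from scipy.stats import norm

def expected_max_sharpe(var_sharpe, n_trials):
    """SR_0 from the False Strategy Theorem: the expected maximum Sharpe
    ratio among n_trials unskilled trials whose Sharpe ratios have
    cross-sectional variance var_sharpe."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    return np.sqrt(var_sharpe) * (
        (1 - gamma) * norm.ppf(1 - 1.0 / n_trials)
        + gamma * norm.ppf(1 - 1.0 / (n_trials * np.e))
    )
```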

5. Compute the DSR for each cluster.
You now have all the variables needed to compute the DSR:
$$\widehat{DSR} = \Phi\!\left[\frac{\left(\widehat{SR} - SR_0\right)\sqrt{T-1}}{\sqrt{1 - \hat{\gamma}_3\,\widehat{SR} + \frac{\hat{\gamma}_4 - 1}{4}\,\widehat{SR}^2}}\right]$$
Where:
- $\widehat{SR}$ is the observed Sharpe ratio (not annualized),
- $SR_0$ is the threshold Sharpe ratio that reflects the highest Sharpe ratio expected from unskilled strategies,
- $\hat{\gamma}_3$ is the skewness of the returns,
- $\hat{\gamma}_4$ is the kurtosis of the returns,
- $T$ is the returns' sample length,
- $\Phi$ is the standard normal cumulative distribution function.
Notes:
- Readers may recognize that the DSR is the Probabilistic Sharpe Ratio (PSR),[11] where $SR_0$, the maximum expected Sharpe ratio estimated using the False Strategy Theorem, replaces a simple threshold SR (often 0).
- The PSR assumes that only one trial was run and is often used to test whether the observed SR is greater than 0.
- To account for multiple testing, use the DSR.
- The DSR will increase with:
- Greater observed SRs.
- Longer track records.
- Positively skewed returns.
- The DSR decreases with:
- Fatter tails (higher kurtosis).
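A minimal sketch of the DSR computation for one cluster, taking as inputs the non-annualized Sharpe ratio, the FST threshold, and the sample moments defined above (the function name is an illustrative assumption):

```python
import numpy as np
from scipy.stats import norm

def deflated_sharpe_ratio(sr, sr0, skew, kurt, t):
    """Probability that the true Sharpe ratio exceeds the FST threshold
    sr0, given the observed non-annualized sr, sample skewness, kurtosis
    and sample length t (the PSR evaluated at SR_0)."""
    denom = np.sqrt(1.0 - skew * sr + (kurt - 1.0) / 4.0 * sr ** 2)
    return norm.cdf((sr - sr0) * np.sqrt(t - 1) / denom)
```

A cluster whose DSR exceeds 0.95 rejects the hypothesis of no skill at the 95% confidence level.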
6. Complete the Template for Disclosing Multiple Tests.
6.1 Aggregate statistics into a table.
Several peer-reviewed papers recommend aggregating the cluster statistics into a table.[6][12][13]
The table below is Exhibit 7 from "A Practitioner’s Guide to the Optimal Number of Clusters Algorithm".[6]
Template for Disclosing Multiple Tests.[6]
Where:
- Cluster is the index of the cluster; there are N clusters.
- Strat Count is the number of strategies included in that cluster.
- aSR is the annualized Sharpe Ratio of that cluster's inverse variance portfolio (IVP).
- SR is the non-annualized Sharpe Ratio of that cluster's IVP.
- Skew is the skew of the returns of that cluster's IVP.
- Kurt is the kurtosis of the returns of that cluster's IVP.
- T is the number of observations in the cluster's IVP.
- sqrt(V[SR]) is the square root of the variance of Sharpe ratios computed in step 3.
- E[max SR] is the Expected Maximum Sharpe ratio ($SR_0$), computed in step 4.
- DSR is the Deflated Sharpe Ratio for that cluster's IVP.
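As a sketch, such a table can be assembled programmatically. Here `cluster_stats` is an assumed mapping from cluster label to that cluster's IVP statistics, and `deflated_sharpe_ratio` refers to the sketch shown in step 5; all names are illustrative.

```python
import numpy as np
import pandas as pd

def disclosure_table(cluster_stats, sr_var, sr0, periods_per_year=252):
    """Build the multiple-testing disclosure table, one row per cluster.
    cluster_stats: {label: (strat_count, sr, skew, kurt, t)} with sr
    non-annualized; sr_var and sr0 come from steps 3 and 4."""
    rows = []
    for label, (count, sr, skew, kurt, t) in cluster_stats.items():
        rows.append({
            "Cluster": label,
            "Strat Count": count,
            "aSR": sr * np.sqrt(periods_per_year),
            "SR": sr,
            "Skew": skew,
            "Kurt": kurt,
            "T": t,
            "sqrt(V[SR])": np.sqrt(sr_var),
            "E[max SR]": sr0,
            # reuses deflated_sharpe_ratio from the step 5 sketch above
            "DSR": deflated_sharpe_ratio(sr, sr0, skew, kurt, t),
        })
    return pd.DataFrame(rows).set_index("Cluster")
```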
6.2 Plot the Sharpe ratios for each cluster.

In the figure above, we can see a collection of non-annualized Sharpe ratios for the 26 independent trials that were tested in the development of this investment strategy. The bars are highlighted based on whether they passed the DSR at a 95% confidence level.
Note that this bar chart does not correspond to the table in Exhibit 7 above, but it shares the result that only one cluster passed the DSR. The goal of this analysis is to show that all clusters except one failed the DSR, which would indicate that the strategy is overfit and is likely to be a false investment strategy.
6.3 Plot the cumulative returns of the strategies.

In the figure above, the cumulative returns are plotted: the y-axis shows the total return in % and the x-axis shows the time index. Note the nearly straight line, which corresponds to the strategy with outlier performance.
7. Derive a conclusion from these results.
As seen in the plot of cumulative returns, there is one outlier strategy which is likely a false investment strategy, as it has very high performance relative to its own cluster and the others.
We can see in the bar plots that all the cluster portfolios failed to pass the DSR at a 95% confidence level, except for the one that included this outlier strategy.
Mathematical Definitions
The Deflated Sharpe Ratio (DSR)
$$\widehat{DSR} = \widehat{PSR}(SR_0) = \Phi\!\left[\frac{\left(\widehat{SR} - SR_0\right)\sqrt{T-1}}{\sqrt{1 - \hat{\gamma}_3\,\widehat{SR} + \frac{\hat{\gamma}_4 - 1}{4}\,\widehat{SR}^2}}\right]$$
Where:
- $\widehat{SR}$ is the observed Sharpe ratio (not annualized),
- $SR_0$ is the threshold Sharpe ratio that reflects the highest Sharpe ratio expected from unskilled strategies,
- $\hat{\gamma}_3$ is the skewness of the returns,
- $\hat{\gamma}_4$ is the kurtosis of the returns,
- $T$ is the returns' sample length,
- $\Phi$ is the standard normal cumulative distribution function.
The threshold $SR_0$ is approximated by:
$$SR_0 = \sqrt{V\!\left[\{\widehat{SR}_n\}\right]}\left((1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]\right)$$
Where:
- $V[\{\widehat{SR}_n\}]$ is the cross-sectional variance of Sharpe ratios across trials,
- $\gamma$ is the Euler–Mascheroni constant (approx. 0.5772),
- $e$ is Euler's number,
- $\Phi^{-1}$ is the inverse standard normal CDF,
- $N$ is the number of independent strategy trials.[1]
False Strategy Theorem: Statement and Proof
The False Strategy Theorem provides the theoretical foundation for the Deflated Sharpe Ratio (DSR) by quantifying how much the best Sharpe ratio among many unskilled strategies is expected to exceed zero purely due to chance. Even if all tested strategies have true Sharpe ratios of zero, the highest observed Sharpe ratio will typically be positive and appear statistically significant unless corrected. The DSR corrects for this inflation.[10]
Statement
Let $\widehat{SR}_n$, $n = 1, \dots, N$, be Sharpe ratios independently drawn from a normal distribution with mean zero and variance $V[\{\widehat{SR}_n\}]$. Then the expected maximum Sharpe ratio among these trials is approximately:
$$E\!\left[\max_n \widehat{SR}_n\right] \approx \sqrt{V\!\left[\{\widehat{SR}_n\}\right]}\left((1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]\right)$$
Where:
- $\Phi^{-1}$ is the quantile function (inverse CDF) of the standard normal distribution,
- $\gamma$ is the Euler–Mascheroni constant,
- $e$ is Euler's number,
- $N$ is the number of independent trials.
This value is the expected maximum Sharpe ratio under the null hypothesis of no skill, $H_0\!: SR = 0$. It represents a benchmark that any observed Sharpe ratio must exceed in order to be considered statistically significant.
Proof Sketch
Let $z_1, \dots, z_N$ be independent standard normal variables. The expected maximum of $N$ such variables is approximated by:
$$E\!\left[\max_n z_n\right] \approx (1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]$$
Now let $\widehat{SR}_n = \sigma z_n$ for each $n$, where $\sigma^2 = V[\{\widehat{SR}_n\}]$. Then:
$$E\!\left[\max_n \widehat{SR}_n\right] = \sigma\, E\!\left[\max_n z_n\right]$$
Combining the two expressions gives:
$$E\!\left[\max_n \widehat{SR}_n\right] \approx \sigma\left((1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]\right)$$
If $\sigma^2$ is estimated as the cross-sectional variance of the Sharpe ratios, $\hat{V}[\{\widehat{SR}_n\}]$, then:
$$E\!\left[\max_n \widehat{SR}_n\right] \approx \sqrt{\hat{V}\!\left[\{\widehat{SR}_n\}\right]}\left((1-\gamma)\,\Phi^{-1}\!\left[1-\frac{1}{N}\right] + \gamma\,\Phi^{-1}\!\left[1-\frac{1}{N}e^{-1}\right]\right)$$
This completes the derivation.
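The approximation can be checked numerically. The following sketch simulates the maximum of $N$ unskilled Sharpe ratios and compares the simulated mean with the FST formula; the simulation parameters (100 trials, standard deviation 0.5, 10,000 simulations) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_trials, sigma, n_sims = 100, 0.5, 10_000

# Empirical expected maximum of n_trials zero-mean Sharpe ratios
empirical = rng.normal(0.0, sigma, size=(n_sims, n_trials)).max(axis=1).mean()

# False Strategy Theorem approximation
gamma = 0.5772156649015329
approx = sigma * ((1 - gamma) * norm.ppf(1 - 1 / n_trials)
                  + gamma * norm.ppf(1 - 1 / (n_trials * np.e)))

print(f"simulated E[max SR] = {empirical:.3f}, FST approximation = {approx:.3f}")
```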
Implication for the DSR
The False Strategy Theorem shows that in large-scale testing, even unskilled strategies will produce apparently "significant" Sharpe ratios. To correct for this, the DSR adjusts the observed Sharpe ratio by subtracting the expected maximum from noise, $SR_0$, and scaling by the standard error around the null hypothesis:
$$\widehat{DSR} = \Phi\!\left[\frac{\widehat{SR} - SR_0}{\hat{\sigma}[\widehat{SR}]}\right]$$
This yields the probability that the observed Sharpe ratio reflects true skill, not selection bias or overfitting. The DSR is more accurate than methods based on the Šidák correction, because it takes into account the dispersion across trials, $V[\{\widehat{SR}_n\}]$.
Confidence and Power of the Sharpe Ratio under Multiple Testing
To assess the significance of Sharpe ratios under multiple testing, López de Prado derives closed-form expressions for the Type I and Type II errors.[14]
Confidence
The DSR is the probability of observing a Sharpe ratio less extreme than the estimated $\widehat{SR}$, subject to $H_0\!: SR = SR_0$ being true, where the multiple-testing-adjusted baseline is $SR_0 = E\!\left[\max_n \widehat{SR}_n\right]$. This can also be interpreted as the maximum confidence ($1-\alpha$) with which the null hypothesis can be rejected after observing $\widehat{SR}$:[14]
$$\widehat{DSR} = \Phi\!\left[\frac{\widehat{SR} - SR_0}{\hat{\sigma}[\widehat{SR}]}\right]$$
where the standard deviation around the null hypothesis is:
$$\hat{\sigma}[\widehat{SR}] = \sqrt{\frac{1 - \hat{\gamma}_3\,\widehat{SR} + \frac{\hat{\gamma}_4 - 1}{4}\,\widehat{SR}^2}{T-1}}$$
Power
The power of a test is the proportion of positives that are correctly identified. This is also known in machine learning as the test's true positive rate or recall, and as sensitivity in medicine. Let $SR_1$ be the expected Sharpe ratio under the alternative hypothesis, $H_1\!: SR = SR_1 > SR_0$. For instance, this may be the average Sharpe ratio observed among strategies that have yielded positive excess returns. Then, the false negative rate ($\beta$, type II error) is defined as the probability of not rejecting $H_0$ given that $H_1$ is true,
$$\hat{\beta} = \Phi\!\left[\Phi^{-1}[1-\alpha] - \frac{SR_1 - SR_0}{\hat{\sigma}[\widehat{SR}]}\right]$$
where $\alpha$ is the false positive rate (type I error), and:
$$\alpha = 1 - \Phi\!\left[\frac{\widehat{SR} - SR_0}{\hat{\sigma}[\widehat{SR}]}\right] = 1 - \widehat{DSR}$$
Finally, power is the probability of rejecting the null hypothesis when it is false, namely:
$$\text{Power} = P\!\left[\text{reject } H_0 \mid H_1\right] = 1 - \hat{\beta}$$
The above equations reveal that power decreases with the number of trials $N$, through the effect that $N$ has on $SR_0$. These equations quantify the reliability of observed Sharpe ratios under multiple testing and return non-normality.[14] They can be used to assess the sample size $T$ needed to reject $H_0$ with a given power $1-\beta$.
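A minimal sketch of these error rates is given below; it assumes the non-normality-adjusted standard error is evaluated at $SR_1$, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def sharpe_type2_and_power(sr1, sr0, skew, kurt, t, alpha=0.05):
    """Type II error (beta) and power of rejecting H0: SR = sr0 at false
    positive rate alpha when the true Sharpe ratio equals sr1, using the
    non-normality-adjusted standard error defined above."""
    sigma = np.sqrt((1.0 - skew * sr1 + (kurt - 1.0) / 4.0 * sr1 ** 2) / (t - 1))
    beta = norm.cdf(norm.ppf(1 - alpha) - (sr1 - sr0) / sigma)
    return beta, 1.0 - beta
```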
Minimum Track Record Length
A related concept is the Minimum Track Record Length (MinTRL), which computes the minimum sample size $T$ needed such that the null hypothesis $H_0\!: SR \le SR_0$ is rejected with confidence $1-\alpha$, given an observed $\widehat{SR}$.[11] Formally, the problem can be stated as
$$\text{MinTRL} = \min\left\{T : \widehat{PSR}(SR_0) \ge 1-\alpha\right\}$$
with solution
$$\text{MinTRL} = 1 + \left[1 - \hat{\gamma}_3\,\widehat{SR} + \frac{\hat{\gamma}_4 - 1}{4}\,\widehat{SR}^2\right]\left(\frac{\Phi^{-1}[1-\alpha]}{\widehat{SR} - SR_0}\right)^2$$
For example, given an observed annualized $\widehat{SR}$ of 1, we need approximately 3 years' worth of daily strategy returns in order to reject the null hypothesis with 95% confidence. This provides mathematical support for the common expectation among investors that a hedge fund must produce a track record with a minimum length of 3 years, which may be reduced to 2 years for Sharpe ratios above 1.15. It is important to understand MinTRL as a minimum requirement, since it assumes a single trial (more trials will require a longer track record).
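A minimal sketch of the MinTRL formula, taking per-period (non-annualized) statistics as inputs; the function name and the worked example are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def min_track_record_length(sr, sr0, skew, kurt, alpha=0.05):
    """Smallest sample length T at which an observed non-annualized
    Sharpe ratio sr rejects H0: SR <= sr0 with confidence 1 - alpha."""
    return 1 + (1.0 - skew * sr + (kurt - 1.0) / 4.0 * sr ** 2) * (
        norm.ppf(1 - alpha) / (sr - sr0)
    ) ** 2

# Example: an annualized SR of 1 (daily SR of 1/sqrt(252)), roughly Gaussian
# returns and sr0 = 0 give about 680 trading days, i.e. close to 3 years.
print(min_track_record_length(1 / np.sqrt(252), 0.0, 0.0, 3.0))
```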
References
1. Bailey, D. H., & López de Prado, M. (2014). "The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting, and Non-Normality". The Journal of Portfolio Management, 40(5), 94–107.
2. López de Prado, M., & Lewis, M. J. (2019). "Detection of False Investment Strategies Using Unsupervised Learning Methods". Quantitative Finance, 19(9), 1555–1565.
3. López de Prado, M. (2018). "The 10 Reasons Most Machine Learning Funds Fail". The Journal of Portfolio Management, 44(6), 120–133. doi:10.3905/jpm.2018.44.6.120.
4. Lo, A. W. (2002). "The Statistics of Sharpe Ratios". Financial Analysts Journal, 58(4), 36–52. doi:10.2469/faj.v58.n4.2453.
5. Bailey, D. H., Borwein, J., & López de Prado, M. (2014). "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance". Notices of the American Mathematical Society, 61(5), 458–471.
6. Andrews, M. (2023). "A Practitioner's Guide to the Optimal Number of Clusters Algorithm". The Journal of Financial Data Science, 5(3), 66–79. doi:10.3905/jfds.2023.1.133.
7. López de Prado, M. (2020). Machine Learning for Asset Managers. Elements in Quantitative Finance. Cambridge: Cambridge University Press. ISBN 978-1-108-79289-9.
8. López de Prado, M. (2020). "Statistical Association (Presentation Slides)". SSRN 3512994.
9. Marti, G., Nielsen, F., Bińkowski, M., & Donnat, P. (2021). "A Review of Two Decades of Correlations, Hierarchies, Networks and Clustering in Financial Markets". In F. Nielsen (ed.), Progress in Information Geometry: Theory and Applications. Cham: Springer International Publishing, 245–274. doi:10.1007/978-3-030-65459-7_10. ISBN 978-3-030-65459-7.
10. López de Prado, M., & Bailey, D. H. (2018). "The False Strategy Theorem: A Financial Application of Experimental Mathematics". American Mathematical Monthly, 128(9), 825–831.
11. Bailey, D. H., & López de Prado, M. (2012). "The Sharpe Ratio Efficient Frontier". Journal of Risk, 15(2), 36. doi:10.21314/JOR.2012.255. SSRN 1821643.
12. López de Prado, M. (2019). "A Data Science Solution to the Multiple-Testing Crisis in Financial Research". The Journal of Financial Data Science, 1(1), 99–110.
13. Fabozzi, F. J., & López de Prado, M. (2018). "Being Honest in Backtest Reporting: A Template for Disclosing Multiple Tests". The Journal of Portfolio Management, 45(1), 141–147. doi:10.3905/jpm.2018.45.1.141.
14. López de Prado, M. (2022). "Type I and Type II Errors of the Sharpe Ratio under Multiple Testing". The Journal of Portfolio Management, 49(1), 39–46.