On Friday, Federal Reserve economists released a blog post measuring the severity of the Federal Reserve’s annual stress test over time. The post concluded that the economic scenario used for the test has grown more severe over time, but that the outcomes of the most recent stress tests have been less constraining for banks because bank asset portfolios have become safer. Setting aside (for the moment) the logic of their premise, the authors reach their conclusion because they erroneously assume that stress test severity is appropriately measured by the change in the tested banks’ capital from the beginning to the end of the nine-quarter planning horizon. In fact, banks are evaluated, and make capital allocation decisions, based on their lowest capital level over the nine quarters of the planning horizon (see here, page 4). Moreover, the stress capital buffer requirement is proposed to be calculated as the difference between the firm’s starting and lowest projected capital ratio under the severely adverse scenario. Had the authors corrected for this mistake, their own analysis would have shown that stress test severity increased significantly when the economy strengthened further in 2018.
Moreover, building on their error, the authors go on to advocate raising the countercyclical capital buffer (CCyB) now because, in their view, the stress tests have not become more severe as the economy has strengthened. By their own logic, the corrected analysis shows that the stress test’s design and severity make the CCyB unnecessary.
So how did the analysis by the Fed economists reach this conclusion? The measure of stress severity in the Fed paper’s analysis is defined as the difference between the bank’s starting and ending capital under stress. However, the relevant stress severity measure is the difference between the firm’s starting and lowest projected common equity tier 1 (CET1) capital ratio over the nine quarters of the planning horizon. The quantitative assessment in CCAR is based on the minimum CET1 ratio over the nine-quarter stress horizon, not the CET1 capital ratio at the end of the horizon. As shown by the purple bars in the chart below, which represent the start-to-minimum decline in the CET1 capital ratio under DFAST, the 3.7 percentage point decline in capital observed in 2018 was by far the most severe to date. So, the 2018 scenarios effectively raised banks’ capital requirements by almost one percentage point (the difference between the 2.8 and 3.7 percentage point declines in the chart below). The yellow bars replicate the Fed’s analysis of stress test severity, defined as the difference between the bank’s starting and ending capital ratio under stress.
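The difference between the two measures can be made concrete with a short sketch. The quarterly path below is invented for illustration only, not actual DFAST data, but it shows how a capital ratio that recovers by the end of the horizon makes the start-to-end measure understate the stress a bank actually experiences:

```python
# Hypothetical nine-quarter path of a bank's CET1 capital ratio
# under stress, in percent (invented numbers for illustration).
start_ratio = 12.0
stress_path = [11.2, 10.3, 9.1, 8.3, 8.5, 8.9, 9.4, 9.8, 10.2]  # Q1..Q9

# The Fed post's measure: decline from the start to the END of the horizon.
start_to_end = start_ratio - stress_path[-1]

# The relevant measure: decline from the start to the MINIMUM over the horizon.
start_to_min = start_ratio - min(stress_path)

print(f"start-to-end decline:     {start_to_end:.1f} p.p.")
print(f"start-to-minimum decline: {start_to_min:.1f} p.p.")
```

Because the minimum is reached mid-horizon (Q4 in this made-up path), the start-to-minimum decline (3.7 percentage points) is roughly double the start-to-end decline (1.8 percentage points), even though both describe the same scenario.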
Why is the peak decline in capital, excluding equity distributions, equal to 3.7 percentage points instead of the Fed’s estimated 2.7? Because almost all advanced approaches banks reached their minimum capital ratios before the end of the planning horizon due to the severity of the 2018 scenarios. The early peak in losses stems primarily from the frontloading of real estate loan losses, driven by steep, early declines in house prices and commercial real estate prices combined with higher interest rate risk. Moreover, the largest of the tested banks are also required to assume that their trading assets suffer a severe decline in value in the first quarter of the test.
Returning to the premise of the test, the authors state correctly that bank performance on the stress tests improves when the economy strengthens because bank loan losses decline and bank earnings rise. However, banks’ performance on the tests has also improved because banks have reduced their exposure to assets that performed poorly during the Great Recession and the stress scenarios based upon it. This reduction in risk was an intended consequence of the test. Using a cycling metaphor: if I completed a 40K time trial in one hour last year, the race organizers add more hills to the course this year, and I finish the time trial in 55 minutes because I have trained harder, then by the authors’ logic the time trial has gotten easier.
Lastly, the Fed has previously argued that the CCyB performs a different function from the stress tests because the proposed stress capital buffer does not serve as an explicit countercyclical offset to the financial system. It is comforting to see the blog post acknowledge that the stress tests do in fact serve as a countercyclical buffer, but the irony is that the acknowledgment comes only in the context of mistakenly claiming that a reduction in stress test severity is now grounds for raising the CCyB to make up for it.
Disclaimer: The views expressed in this post are those of the author(s) and do not necessarily reflect the position of the Bank Policy Institute or its membership, and are not intended to be, and should not be construed as, legal advice of any kind.
The post by Fed economists assumes banks develop their capital plans under the assumption that their capital ratios must remain above the minimum requirements only at the end of the nine-quarter planning horizon. Because banks can reach their minimum capital ratio before the last quarter of the planning horizon, looking at capital ratios at the end of the scenario may significantly understate the stringency of the stress tests. In addition, to pass the quantitative assessment in CCAR, banks must be above the minimum requirement in all quarters of the planning horizon.
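A minimal sketch of why the all-quarters requirement matters, using a hypothetical 4.5 percent CET1 minimum and an invented projected path: the bank ends the horizon above the minimum yet breaches it mid-horizon, so an end-of-horizon check and an all-quarters check reach opposite conclusions.

```python
# Hypothetical CET1 minimum and projected nine-quarter path, in percent
# (invented numbers for illustration).
minimum_requirement = 4.5
projected_path = [6.0, 5.2, 4.6, 4.3, 4.4, 4.8, 5.3, 5.7, 6.1]  # Q1..Q9

# End-of-horizon check (the implicit assumption in the Fed post).
passes_end_only = projected_path[-1] >= minimum_requirement

# All-quarters check (the CCAR-style quantitative assessment).
passes_all_quarters = all(q >= minimum_requirement for q in projected_path)

print(passes_end_only)      # True: ends at 6.1, above the minimum
print(passes_all_quarters)  # False: dips to 4.3 in Q4, below the minimum
```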
Also, academic papers that analyze the impact of CCAR on bank behavior always use the lowest capital ratio over the nine-quarter planning horizon, including this paper written by one of the coauthors of the Fed’s post.
 To be specific, the analysis uses the common equity tier 1 capital ratio.
The analysis includes the same 28 firms as in the Fed analysis. To calculate the start-to-minimum decline in the CET1 capital ratio, the quarter in which the minimum is reached must be estimated, because it is not available in the DFAST disclosures. Had we excluded all nine quarters of dividends, the start-to-minimum change in the 2018 stress tests would have been 3.6 percentage points instead of 3.7. Also, the 2014 and 2015 results were excluded because those stress tests used Basel I measures of regulatory capital, which makes the time-series comparison more difficult to interpret based on publicly available data.
There is already a large body of academic literature on the impact of stress tests on banks’ exposures. See, for example, Acharya, Berger and Roman, “Lending implications of U.S. bank stress tests: Costs or benefits?”, Journal of Financial Intermediation, Vol. 34, April 2018, pp. 58–90. BPI has made this point in several notes, including those posted here, here, and here.