One of the biggest challenges with the transition to clean electricity is figuring out how to keep the grid reliable. Extreme heat, winter storms, and flooding regularly remind us that the grid is struggling to keep up in the face of more climate change-fueled extreme weather.
Now, if you thought I was going to say that clean energy technologies aren’t capable of providing a reliable electricity supply, you’re way off track. A diverse portfolio of clean resources is the key to a reliable power grid. The tricky part is figuring out the extent to which all the different types of resources contribute to grid reliability. Get that part right, and it’s just a matter of adding up the numbers to make sure you have enough resources overall to keep the grid reliable.
But what about fossil fuel resources, especially natural gas power plants? (Note: “Natural gas” is an industry misnomer; UCS considers methane, fossil gas, and gas to be much more appropriate terms. I’ll be using the term “gas” from here on out.)
Until recently, gas plants have been flying under the radar, avoiding any serious scrutiny of their presumed grid reliability contributions. But after a spate of power grid crises due in large part to simultaneous failures at gas power plants, grid planners are re-evaluating their long-standing assumptions.
They’re asking, “How much can we rely on gas plants to ensure grid reliability?”
And for good reason. Because the answer is not nearly as much as we once thought.
How grid planners accredit capacity to gas plants makes all the difference
Before I go on, I should say that this blog post is about determining the reliability contributions of gas power plants, and how those values are then utilized in resource adequacy programs and capacity markets. It’s important to note that those systems only address one aspect of grid reliability: maintaining a sufficient supply of electricity. The power can still go out for a variety of other reasons (e.g., distribution or transmission system failures), so when I refer to “grid reliability” in this post, I’m really only talking about ensuring there’s enough electricity supply to meet demand.
Nearly every independent system operator (ISO) and regional transmission organization (RTO) has a resource adequacy program or capacity market to ensure a reliable supply of electricity. And the methods ISOs and RTOs use to determine the reliability contribution of resources (aka “capacity accreditation”) are incredibly consequential for two reasons: first, because they determine how much resource owners get paid for supporting grid reliability; and second, because they can help ensure that grid reliability goals are met.
In short, getting capacity accreditation wrong could lead to ratepayers overpaying gas plants for their contributions to grid reliability, and it could even result in grid reliability problems if we significantly overestimate the reliability contributions of those plants. (Of course, it could also go the other way, with gas plants getting underpaid, but that’s definitely not an issue right now!)
Without further ado, I’m going to walk through three different methods for accrediting capacity from thermal power plants, including gas plants: ICAP, UCAP, and ELCC. (Hope you’re ready for some acronyms!)
A bad, outdated method: ICAP
The first (and worst) method for gas plant capacity accreditation is ICAP, or “installed capacity.” In essence, when using ICAP for capacity accreditation, a gas plant gets credit up to its maximum generation capacity. So if a gas plant’s maximum output is 100 megawatts (MW), the plant owner could get paid for providing 100 MW of capacity.
In some cases, ICAP accreditation takes into account real-world circumstances when determining the maximum output of a gas plant. Some ISOs and RTOs assign different ICAP values in different seasons (e.g., summer and winter), and they usually require testing to confirm that gas plants really can produce energy at the levels they claim. If, for instance, that testing must occur on a hot summer day when gas plants operate less efficiently, then the capacity value might be slightly lower. For example, if a 100 MW gas plant can only achieve a maximum output of 95 MW during a test on a hot summer day, its ICAP value may only be 95 MW.
While this methodology is relatively simple, it completely ignores the reality that gas plants aren’t always able to show up when the grid needs them most. There’s a whole host of reasons for gas plant failures, including mechanical breakdowns, fuel shortages, and issues with cooling water. And this methodology also ignores the fact that some of those issues can affect multiple gas plants simultaneously.
Thankfully, only a minority of ISOs and RTOs still use ICAP for capacity accreditation, including the California ISO (CAISO), New England ISO (ISO-NE), and Southwest Power Pool (SPP). (Though to be fair, ISO-NE is actively revising its capacity accreditation methods and will move away from ICAP in the near future.)
All the other ISOs and RTOs have already moved on to a better method.
A better method, with caveats: UCAP
UCAP, or “unforced capacity”, is a slightly more sophisticated method for gas plant capacity accreditation. This methodology starts with an ICAP value, which is then adjusted to account for the probability that a gas plant won’t be able to provide energy when it’s needed. (For the energy nerds out there, the formula is often expressed as UCAP = ICAP * (1 – EFORd), where EFORd is the “equivalent forced outage rate demand.”)
These calculations rely on historical operational data, usually for specific power plants, to determine the forced outage rate. For example, if a gas plant has an ICAP value of 100 MW and a history of forced outages 8% of the time that it’s needed (i.e., an EFORd of 8%), then its UCAP value would be 92 MW.
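The UCAP formula maps directly to code. Here’s a minimal sketch using the example from the text (in practice, real EFORd values are derived from detailed plant outage histories, not passed in as a single number):

```python
def ucap_value(icap_mw: float, eford: float) -> float:
    """UCAP = ICAP * (1 - EFORd).

    EFORd is the plant's equivalent forced outage rate (demand), based on
    its historical record of failing when called upon.
    """
    return icap_mw * (1.0 - eford)

# The example from the text: 100 MW ICAP with an 8% EFORd.
print(f"{ucap_value(100.0, 0.08):.1f} MW")  # 92.0 MW
```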
Using UCAP for capacity accreditation is an improvement over ICAP because it accounts for the fact that gas plants aren’t always able to operate when they’re needed. The UCAP methodology also provides better incentives for gas plants to operate reliably when the grid needs them most—for example, if a poorly-maintained gas plant frequently trips offline, that plant’s UCAP value will be much lower.
Several ISOs and RTOs currently use UCAP for capacity accreditation of gas plants and other thermal power plants, including the New York Independent System Operator (NYISO) and PJM. Each of those entities does the calculations a little differently, but the goal is always the same: to account for forced outages in the capacity value of power plants. (The Midcontinent Independent System Operator (MISO) has already moved on from UCAP to a similar, but slightly more sophisticated, methodology.)
However, using UCAP still has its downsides. The main problem is that UCAP calculations assume that power plant forced outages are completely independent of each other; in reality, outages are often correlated. Because this methodology makes no attempt to account for events that affect multiple gas plants simultaneously, UCAP can significantly overestimate the ability of the gas fleet to ensure grid reliability.
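To see how much the independence assumption can matter, here’s a back-of-the-envelope sketch with two identical plants and an assumed shared cold-weather failure mode. Every probability here is invented for illustration; real correlations depend on the actual fleet and fuel system.

```python
# Why assuming independent outages overstates reliability (invented numbers).

P_COLD = 0.02         # assumed fraction of hours with severe cold
P_FAIL_COLD = 0.50    # each plant's failure probability in severe cold
P_FAIL_NORMAL = 0.04  # each plant's failure probability otherwise

# Each plant's overall (marginal) forced outage rate:
marginal = P_COLD * P_FAIL_COLD + (1 - P_COLD) * P_FAIL_NORMAL  # = 0.0492

# UCAP-style independence assumption: chance both plants are out at once.
assumed_joint = marginal ** 2

# Actual joint probability, accounting for the shared cold-weather cause:
actual_joint = P_COLD * P_FAIL_COLD**2 + (1 - P_COLD) * P_FAIL_NORMAL**2

print(f"Assumed (independent): {assumed_joint:.5f}")
print(f"Actual (correlated):   {actual_joint:.5f}")
print(f"Joint failures underestimated {actual_joint / assumed_joint:.1f}x")
```

Even though each plant’s overall outage rate is under 5%, the shared weather risk makes a simultaneous two-plant failure roughly 2.7 times more likely than the independence assumption suggests.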
The good news is there’s another method that does account for those possibilities.
Capacity accreditation methodology for thermal power plants in each ISO/RTO:

ICAP: CAISO, ISO-NE, SPP
UCAP: NYISO, PJM
SAC**: MISO
None*: ERCOT
*While Texas’ grid operator, ERCOT, does not currently have a resource adequacy program or capacity market, it may soon implement a performance credit mechanism that would serve a similar purpose.
**MISO has already moved on from UCAP to a similar but slightly more sophisticated method called Seasonal Accredited Capacity (SAC), which is based on power plant availability during critical hours for grid reliability.
The best and most accurate method: ELCC
For years, effective load carrying capability (ELCC) has been applied to renewables (and to a lesser extent, energy storage), and for years I’ve been a big fan of this method. In light of recent instances of widespread gas plant failures, some ISOs and RTOs are considering applying ELCC to gas power plants as well.
ELCC calculations are very complicated, so I’m not going to walk through all the details here. (For a more thorough discussion, see my previous ELCC blog post.) The short version is that ELCC is a measurement of a resource’s ability to produce energy when the grid needs that energy most. ELCC values are calculated using probabilistic grid modeling to determine how much “perfect capacity” it would take to replace a resource (or, in most cases, a group of resources). You can think of “perfect capacity” as a mythical power plant that never has any outages, can ramp up and down instantly, and can operate 24/7/365.
For example, if it takes 80 MW of perfect capacity to replace a gas power plant with a 100 MW ICAP value, then the ELCC of that gas plant is 80%. And if ELCC were used for capacity accreditation, that gas power plant would only be able to get compensated for 80 MW in RTO/ISO programs such as capacity markets.
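The perfect-capacity comparison can be sketched with a toy Monte Carlo model. Everything below is an illustrative assumption on my part: the demand distribution, the perfectly reliable baseline fleet, and the plant’s 8% independent hourly outage rate. Real ELCC studies use far more detailed probabilistic grid models.

```python
import random

random.seed(7)

# Toy system (illustrative assumptions, not real grid data): hourly net
# demand samples plus a perfectly reliable baseline fleet.
HOURS = 20000
demand = [random.gauss(400, 60) for _ in range(HOURS)]
BASELINE_MW = 420.0

# The gas plant being accredited: 100 MW ICAP, assumed to fail
# independently in 8% of hours.
PLANT_MW = 100.0
plant_up = [random.random() >= 0.08 for _ in range(HOURS)]

def shortfall_hours(perfect_mw: float, include_plant: bool) -> int:
    """Count hours in which supply falls short of demand."""
    count = 0
    for h, d in enumerate(demand):
        supply = BASELINE_MW + perfect_mw
        if include_plant and plant_up[h]:
            supply += PLANT_MW
        if supply < d:
            count += 1
    return count

# Reliability target: the system including the real (imperfect) plant.
target = shortfall_hours(0.0, include_plant=True)

# Binary-search for the perfect capacity that matches that reliability.
lo, hi = 0.0, PLANT_MW
for _ in range(40):
    mid = (lo + hi) / 2
    if shortfall_hours(mid, include_plant=False) > target:
        lo = mid  # mid MW of perfect capacity is not yet enough
    else:
        hi = mid

print(f"ELCC of the {PLANT_MW:.0f} MW plant: about {100 * hi / PLANT_MW:.0f}%")
```

The search finds the amount of perfect capacity that produces the same number of shortfall hours as the real, imperfect plant; that amount, divided by the plant’s ICAP, is its ELCC.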
ELCC is a much more sophisticated method for capacity accreditation because it can capture the risk of correlated power plant outages. Let me illustrate this point: in February 2021, a severe winter storm in Texas forced a massive amount of gas capacity offline, which led to widespread power outages for days. Some of those gas plants failed because they broke down in the unusually cold weather, and others couldn’t operate because there wasn’t enough gas to burn. In total, this weather event forced roughly 20 gigawatts of gas capacity offline all at the same time.
ELCC excels at accounting for the possibility of correlated gas plant outages like those experienced in Texas. While UCAP values for thermal power plants are typically between 90% and 100%, recent ELCC calculations have revealed that the reliability contributions of thermal power plants, especially gas plants, are much smaller. For instance, an ELCC study in ERCOT, the grid operator in Texas, showed that the winter ELCC of all thermal generation is only 69%. An ELCC analysis in PJM showed that the winter ELCC of gas combined cycle power plants is 76%, and gas combustion turbines have an ELCC of 63%.
(As an aside, it blew my mind when I saw that the ELCC of offshore wind was 68%, which means that, megawatt for megawatt, offshore wind does more to ensure grid reliability than gas combustion turbines in PJM!)
The takeaway here is that UCAP can significantly overestimate reliability contributions, while ELCC is a much more accurate approach for quantifying the true reliability contributions of gas plants. While no ISO or RTO currently uses ELCC for capacity accreditation of gas power plants, some may transition to an ELCC methodology in the near future (e.g., PJM and ISO-NE).
We must determine how reliable gas plants truly are
With climate change leading to increasingly extreme weather events that strain the power grid, it is critical to get capacity accreditation right to ensure that we have enough resources to keep the lights on. Recent failures have clearly demonstrated that gas plants aren’t always able to deliver on their grid reliability promises.
Currently, it’s common practice to apply ELCC to renewables and energy storage, but thermal resources (including gas plants) are accredited using methodologies (ICAP and UCAP) that can substantially overstate their contributions. Rather than tipping the scales in favor of fossil fuels, we need grid reliability systems that recognize the true reliability contributions of all resources. Applying ELCC to all resource types, including gas plants, would put all resources on a level playing field.
And once we put all resources on a level playing field, gas power plants sure do lose their luster.