The What and Why of Carbon Budgets

If you’ve been paying much attention to the climate policy discussion over the last few years, you’ve probably heard mention of carbon budgets, or greenhouse gas (GHG) emissions budgets more generally. Put simply, for any given temperature target there’s a corresponding total cumulative amount of greenhouse gases that can be released while still leaving a decent chance of meeting the target. For example, the IPCC estimates that if we want a 2/3 chance of keeping warming to less than 2°C, then we can release no more than 1000Gt of CO2 between 2011 and the end of the 21st century.


The reason the IPCC and many other scientist types use carbon budgets instead of emissions rates to describe our situation is that the atmosphere’s long-term response to GHGs is almost entirely determined by our total cumulative emissions. In fact, as the figure below from the IPCC AR5 Summary for Policymakers shows, our current understanding suggests a close to linear relationship between CO2 released and ultimate warming… barring any wild feedbacks (which become more likely and frightening at high levels of atmospheric CO2) like climate change induced fires vaporizing our boreal and tropical forests.

Carbon budget vs. cumulative warming: Figure SPM.5(b), from the IPCC AR5 Summary for Policymakers.
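To make that linearity concrete, here’s a minimal sketch in Python. The slope (the “transient climate response to cumulative emissions”, or TCRE) is an assumed round value of 0.45°C per 1000Gt of CO2, in the middle of the AR5 likely range; real budgets come out tighter once non-CO2 gases are counted.

```python
# A rough sketch of the near-linear relationship in the figure above:
# warming scales approximately with cumulative CO2 emissions.
# TCRE below is an assumed central value; AR5's likely range works out
# to roughly 0.2-0.7 degrees C per 1000 Gt of CO2.

TCRE = 0.45 / 1000.0  # assumed: degrees C of warming per Gt of CO2 emitted

def co2_warming(cumulative_gt_co2):
    """CO2-driven warming implied by a cumulative emissions total."""
    return TCRE * cumulative_gt_co2

# Spending the remaining 1000 Gt budget adds roughly another half a degree
# of CO2-driven warming on top of what we've already committed ourselves to.
print(f"{co2_warming(1000):.2f} C from the next 1000 Gt")  # ~0.45 C
```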

What matters from the climate’s point of view isn’t when we release the GHGs or how quickly we release them; it’s the total amount we release — at least if we’re talking about normal human planning timescales of less than a couple of centuries. This is because the rate at which we’re putting these gases into the atmosphere is much, much faster than they can be removed by natural processes — CO2 stays in the atmosphere for a long time, more than a century on average.  We’re throwing it up much faster than nature can draw it down.  This is why the concentration of atmospheric CO2 has been marching ever upward for the last couple of hundred years, finally surpassing 400ppm this year.

So regardless of whether we use the entire 1000Gt budget in 20 years or 200, the ultimate results in terms of warming will be similar — they’ll just take less or more time to manifest themselves.

Unfortunately, most actual climate policy doesn’t reflect this reality.  Instead, we tend to make long term aspirational commitments to large emissions reductions, with much less specificity about what happens in the short to medium term.  (E.g. Boulder, CO: 80% by 2050; Fort Collins, CO: 80% by 2030; the European Union: 40% by 2030.)  When we acknowledge that it’s the total cumulative emissions over the next couple of centuries that determine our ultimate climate outcome, what we do in the short to medium term — a period of very, very high emissions — becomes critical.  These are big years, and they’re racing by.

Is 1000Gt a Lot, or a Little?

Few normal people have a good sense of the scale of our energy systems. One thousand gigatons. A thousand billion tons. A trillion tons. Those are all the same amount. They all sound big. But our civilization is also big, and comparing one gigantic number to another doesn’t give most of us a good feel for what the heck is going on.

Many people were first introduced to the idea of carbon budgets through Bill McKibben’s popular article in Rolling Stone: Global Warming’s Terrifying New Math. McKibben looked at carbon budgets in the context of the fossil fuel producers. He pointed out that the world’s fossil fuel companies currently own and control several times more carbon than is required to destabilize the climate. This means that success on climate necessarily also means financial failure for much of the fossil fuel industry, as the value of their businesses is largely vested in the control of carbon intensive resources.

If you’re familiar with McKibben’s Rolling Stone piece, you may have noticed that the current IPCC budget of 1000Gt is substantially larger than the 565Gt one McKibben cites. In part, that’s because these two budgets have different probabilities of success. 565Gt in 2012 gave an 80% chance of keeping warming to less than 2°C, while the 2014 IPCC budget of 1000Gt would be expected to yield less than 2°C warming only 66% of the time. The IPCC doesn’t even report a budget for an 80% chance. The longer we have delayed action on climate, the more flexible we have become with our notion of success.

Unfortunately this particular brand of flexibility, in addition to being a bit dark, doesn’t even buy us very much time. If we continue the 2% annual rate of emissions growth the world has seen over the last couple of decades, the difference between a budget with a 66% chance of success and a 50% chance of success is only ~3 years’ worth of emissions. Between 50% and 33% it’s only about another 2 years. This is well illustrated by some graphics from Shrink That Footprint (they use gigatons of carbon, GtC, rather than CO2, so the budget numbers differ, but the time frames and probabilities are the same):

Carbon budgets and time remaining at various probabilities of success, via Shrink That Footprint.

Like McKibben’s article, this projection is from about 3 years ago. In those 3 years, humanity released about 100Gt of CO2. So, using the same assumptions that went into the 565Gt budget, we would now have only about 465Gt left — enough to take us out to roughly 2030 at the current burn rate.

There are various other tweaks that can be made to the budgets in addition to the desired probability of success, outlined here by the Carbon Tracker Initiative.  These details are important, but they don’t change the big picture: continuing the last few decades’ trend in emissions growth will fully commit us to more than 2°C of warming by the 2030s. 2030 might sound like The Future, but it’s not so far away.  It’s about as far in the future as 9/11 is in the past.

It’s encouraging that global CO2 emissions in 2014 were flat relative to 2013, even as the global economy kept growing. But even if that turns out to reflect some kind of structural decoupling between emissions, energy, and our economy (rather than, say, China having a bad economic year), holding emissions constant as we go forward is still far from a path to success: it only stretches our fixed 1000Gt budget into the 2040s, rather than the 2030s.
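Here’s a quick back-of-the-envelope check on those timelines (a sketch, not a climate model). It assumes global emissions of roughly 36Gt of CO2 per year in 2014, and about 900Gt remaining of the original 1000Gt budget, since roughly 100Gt was emitted between 2011 and 2014:

```python
# Walk forward year by year until cumulative emissions exhaust the budget.
# All inputs are assumed round numbers: ~36 Gt CO2/yr in 2014, ~900 Gt left.

def year_budget_exhausted(start_year=2014, emissions=36.0, budget=900.0, growth=0.02):
    year, cumulative = start_year, 0.0
    while cumulative < budget:
        cumulative += emissions   # emit this year's total
        emissions *= 1 + growth   # then grow (or hold) the annual rate
        year += 1
    return year

print(year_budget_exhausted(growth=0.02))  # ~2035: mid-2030s with 2%/yr growth
print(year_budget_exhausted(growth=0.00))  # ~2039: the 2040s if emissions hold flat
```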

If we’d started reducing global emissions at 3.5% per year in 2011… we would have had a 50/50 chance of staying below 2°C by the end of the 21st century. If we wait until 2020 to peak global emissions, then the same 50/50 chance of success requires a 6% annual rate of decline.  That’s something we’ve not yet seen in any developed economy, short of a major economic dislocation, like the collapse of the Soviet Union.  And unlike that collapse, which was a fairly transient event, we will need these reductions to continue year after year for decades.
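The arithmetic behind those decline rates is just a geometric series: if emissions peak at some rate E0 and then fall by a constant fraction r every year thereafter, the total emitted from the peak onward sums to roughly E0/r. A sketch with assumed round numbers (the actual budgets behind the 50/50 figures differ a bit, but the geometry is the point):

```python
# Cumulative emissions from the peak onward, for a constant annual decline r:
# E0 * sum((1 - r)**t for t = 0, 1, 2, ...) = E0 / r (a geometric series).

def total_from_peak(peak_gt_per_year, annual_decline):
    return peak_gt_per_year / annual_decline

# Peak at ~35 Gt/yr in 2011 and decline 3.5%/yr, and the total is ~1000 Gt:
print(total_from_peak(35, 0.035))  # ~1000 Gt

# Wait until 2020, with emissions up around ~42 Gt/yr and roughly 350 Gt
# already spent, and the remaining ~650 Gt demands ~6%/yr declines:
print(total_from_peak(42, 0.065))  # ~646 Gt
```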

Required annual emissions decline rates, by peak year, via Shrink That Footprint.

The Years of Living Dangerously

We live in a special time for the 2°C target.  We are in a transition period that started in about 2010 and, barring drastic change, will end around 2030.  In 2010, the 2°C target was clearly physically possible, but the continuation of our current behavior and recent trends will render it physically unattainable within 15 years.  Over the course of these 20 or so years, our probability of success will steadily decline, and the speed of change required to succeed will steadily increase.

I’m not saying “We have until 2030 to fix the problem.”  What I’m saying is closer to “We need to be done fixing the problem by 2030.”  The choice of the 2°C goal is political, but the physics of attaining it is not.

My next post looks at carbon budgets at a much smaller scale — the city or the individual — since global numbers are too big and overwhelming for most of us to grasp in a personal, visceral way.  How much carbon do you get to release over your lifetime if we’re to stay within the 1000Gt budget?  How much do you release today?  What does it go toward?  Flying? Driving? Electricity? Food?  How much do these things vary across different cities?

Featured image courtesy of user quakquak via Flickr, used under a Creative Commons Attribution License.

Decoupling & Demand Side Management in Colorado

Utility revenue decoupling is often seen as an enabling policy supporting “demand side management” (DSM) programs.  DSM is a catch-all term for the things you can do behind the meter that reduce the amount of energy (kWh) a utility needs to produce or the amount of capacity (kW) it needs to have available.  DSM includes investments improving the energy efficiency of buildings and their heating and cooling systems, lighting, and appliances.  It can also include “demand response” (DR) which is a dispatchable decline in energy consumption — like the ability of a utility to ask every Walmart in New England to turn down their lights or air conditioning at the same time on a moment’s notice — in order to avoid needing to build seldom used peaking power plants.
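To make “dispatchable decline in energy consumption” concrete, here’s a toy dispatch routine in Python. The resource names and megawatt figures are invented for illustration, not drawn from any actual utility’s DR program:

```python
# Toy demand response dispatch: when forecast load would exceed available
# capacity, call curtailable loads in order instead of firing up a peaker.

AVAILABLE_CAPACITY_MW = 5000  # assumed generation on hand at the peak hour
ENROLLED_DR_MW = {            # hypothetical curtailable loads and their sizes
    "big_box_hvac": 120,
    "commercial_lighting": 80,
    "industrial_process": 200,
}

def dispatch_dr(forecast_load_mw):
    """Return the DR resources to call, and any shortfall left uncovered."""
    shortfall = forecast_load_mw - AVAILABLE_CAPACITY_MW
    called = []
    for name, mw in ENROLLED_DR_MW.items():
        if shortfall <= 0:
            break
        called.append(name)
        shortfall -= mw
    return called, max(shortfall, 0)

# A 5,250 MW peak is covered by calling all three resources:
print(dispatch_dr(5250))
```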

For reasons that will be obvious if you’ve read our previous posts on revenue decoupling, getting utilities to invest in these kinds of measures can be challenging so long as their revenues are directly tied to the amount of electricity they sell.  Revenue decoupling can fix that problem.  However, reducing customer demand for energy on a larger scale, especially during times of peak demand, can seriously detract from the utility’s ability to deploy capital (on which they earn a return) for the construction of additional generating capacity.  That conflict of interest is harder to address.

But it’s worth working on, because as we’ll see below, DSM is cheap and very low risk — it’s great for rate payers, and it’s great for the economy as a whole.  It can reduce our economic sensitivity to volatile fuel prices, and often shifts investment away from low-value environmentally damaging commodities like natural gas and coal, toward skilled labor and high performance building systems and industrial components.

The rest of this post is based on the testimony that Clean Energy Action prepared for Xcel Energy’s 14AL-0660E rate case proceeding, before revenue decoupling was split off.  Much of it applies specifically to Xcel in Colorado.  However, the overall issues addressed are applicable in many traditionally regulated, vertically integrated monopoly utility settings.

Why can’t we scale up DSM?

There are several barriers to Xcel profitably and cost-effectively scaling up their current DSM programs.  Removing these impediments is necessary if DSM is to realize its full potential for reducing GHG emissions from Colorado’s electricity sector.  Revenue decoupling can address some, but not all of them.

  1. There are the lost revenues from energy saved, which impact the utility’s fixed cost recovery.  If the incentive payment the utility earns by meeting DSM targets is too small to compensate for those lost revenues, then the net financial impact of investing in DSM is still negative — i.e. the utility will see investing in DSM as a losing proposition.  Xcel currently gets a “disincentive offset” to make up for lost revenues, but the company says this doesn’t entirely do so.
  2. Even if the performance incentive is big enough to make DSM an attractive investment, the PUC currently caps the incentive at $30M per year (including the $5M “disincentive offset”), meaning that even if there’s a larger pool of cost-effective energy efficiency measures to invest in, the utility has no reason to go above and beyond and save more energy once they’ve maxed out the incentive.
  3. If this cap were removed, the utility would still have a finite approved DSM budget.  With an unlimited performance incentive and a finite DSM budget, the utility would have an incentive to buy as much efficiency as possible, within their approved budget, which would encourage cost-effectiveness, but wouldn’t necessarily mean all the available cost-effective DSM was being acquired.
  4. Given that the utility has an annual obligation under the current DSM legislation to save a particular amount of energy (400 GWh), they have an incentive to “bank” some opportunities, and save them for later, lest they make it more difficult for themselves to satisfy their regulatory mandate in later years by buying all the easy stuff up front.
  5. It is of course possible that beyond a certain point there simply aren’t any more scalable, cost-effective efficiency investments to be made.
  6. Finally and most seriously, declining electricity demand would pose a threat to the “used and useful” status of existing generation assets and to the utility’s future capital investment program, which is how they make basically all of their money right now.

Revenue decoupling can play an important role in overcoming some, but not all, of these limitations.  With decoupling in place, we’d expect that the utility would be willing and able to earn the entire $30M performance incentive (which they have yet to do in any year) so long as it didn’t make regulatory compliance in future years more challenging by prematurely exhausting some of the easy DSM opportunities.

Continue reading Decoupling & Demand Side Management in Colorado

Decoupling and Distributed Energy

One of the main reasons utilities fight distributed generation like rooftop solar is that it erodes demand for their centrally generated electricity. Reduced demand is annoying for any business, but it’s especially bad for traditional monopoly utilities. It’s especially bad because much — even most — of the cost of producing a kWh of electricity doesn’t go away if you don’t produce that kWh of electricity. These so-called “fixed” or “non-production” costs come from multi-decade financial commitments to big pieces of infrastructure — the power plants, transmission lines, and distribution systems.

So when you put solar panels on your roof and reduce the amount of electricity you need to buy from the utility, a little bit of fuel doesn’t get burned, and a little bit of money is saved on the utility side (though as we’ve pointed out before, the utility doesn’t actually benefit from that cost savings), but most of the money the utility spent to be able to provide you with electricity whenever you need it is already spent. This is problematic because most electricity rates are designed to recover utility costs in proportion to the amount of electricity you buy (this type of rate is known as a “volumetric rate”). So utilities have an incentive (known as the throughput incentive) to ensure that their electricity sales increase, or at the very least don’t decline.

If lots of people start buying much less electricity, utility spending on things like fuel falls, but in the short term the fixed, non-production costs don’t change at all. To stay solvent, the utilities go back to their regulators and say “Hey, we’re not getting enough revenue to cover our costs. Give us a rate hike!” If the regulators agree, allowing the utilities to recover the same fixed costs from fewer overall kWh of electricity sold, it becomes even more financially sensible for people to put solar panels on their roofs and avoid buying the now more expensive electricity.  (And in our fantasy world, one could also imagine savvy regulators taking measures to decrease fixed costs, by forcing early retirement of risky, uneconomic fossil generation…)
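Here’s a toy model of that feedback loop, with entirely made-up numbers: each year the utility resets its volumetric rate to recover the same fixed costs from whatever sales remain, and the higher the rate climbs, the more customers self-generate.

```python
# Toy death spiral: fixed costs spread over a shrinking volumetric sales base.
# All parameters are invented for illustration.

FIXED_COSTS = 1.0e9   # $/yr that must be recovered regardless of sales
sales_kwh = 2.0e10    # starting annual sales (implies a 5 cent/kWh share)
base_rate = FIXED_COSTS / sales_kwh

for year in range(1, 11):
    rate = FIXED_COSTS / sales_kwh  # rate needed to recover fixed costs
    # toy behavior: 1% of sales defect each year no matter what, plus 2% more
    # for every cent/kWh the fixed-cost share has climbed above its start
    defection = 0.01 + max(rate - base_rate, 0) * 100 * 0.02
    sales_kwh *= 1 - defection
    print(f"year {year}: {rate * 100:.2f} cents/kWh of fixed-cost recovery")
```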

This is the essence of the Utility Death Spiral that’s gotten so much attention over the last year or two (including a speakeasy we hosted), and which Dave Roberts did a great job of exploring in his Utilities for Dummies series over at Grist. From the Utility’s point of view the Death Spiral can be short-circuited with revenue decoupling… up to a point. With decoupling, they don’t have to go to regulators and ask for a rate hike — they can recover the fixed costs in a formulaic way, and so decoupled utilities are able to invest in energy efficiency without worrying about lost revenues.  They’re also likely to be less opposed to modest amounts of distributed generation.

In fact, it’s hard to imagine a climate-aware utility of the future that isn’t decoupled.  We need to get away from utilities treating electricity (and energy more generally) as a commodity, with profits tied to the quantity of product they sell.  Instead, we need to move toward treating energy as a service — Amory Lovins’ famous hot showers and cold beer — with an incentive to provide high quality service using the least possible amount of underlying energy.

Decoupling is a Good Thing™

However, if you care about climate, then you always have to ask not just Is this a good thing? but Is this good enough?  It’s an old cliché that “better is the enemy of good enough” — i.e. spending time, money, and effort on improvement beyond what’s good enough can be wasteful.  But in the context of climate, we have the opposite problem.  Moving things in the right direction can still mean abject failure.  Plenty of things that are better than the status quo — like decoupling utility revenues, or burning natural gas instead of coal — come nowhere close to being good enough to keep us from seeing more than 2°C of warming.

To have a chance of stabilizing the climate, the utility business model can’t just be tinkered with.  It needs to be radically transformed.  The good news is that radical transformation is probably on the table whether the utilities want to talk about it or not.  Our task is to make it happen as quickly and smoothly as possible.

Courtesy of Gigasolar on Flickr.

Utility Death Spiral: Not Just for the Paranoid

Until very recently, anybody afraid of the death spiral dynamic might have seemed a little paranoid. DG was still pretty expensive, and often dependent on utility rebate programs, tax credits, and other incentives largely controlled by regulators and utilities.  But as the price of distributed solar has fallen, rebates have dwindled to nothing, and new financing mechanisms and business models have emerged. Utilities and regulators have lost some of their ability to moderate deployment, and they’re poised to lose much more.

A few examples of new DG financing and business arrangements compiled by Greentech Media and others:

  • Mosaic has created a peer-to-peer lending platform that lets individuals invest in diversified portfolios of smaller distributed solar projects, earning around a 5% return on their investments. They’ve done about $10M worth of financing this way. Now they’re getting into solar loans with backing from a large international reinsurer, adding another $100M in capital.
  • Sungage just raised $100M in funding from a large northeastern US credit union to use as a revolving solar loan fund.
  • SolarCity has started issuing solar bonds with a similar yield directly to the public on a much larger scale. They’ve raised more than $100M so far, without going through the traditional finance industry.
  • Big-time sprawling suburban homebuilder Lennar is now installing rooftop PV systems by default in some markets, including around Denver. They’re offering home buyers a power purchase agreement (PPA) that gives them a 20% discount off retail electricity rates for 20 years.

From the consumer’s point of view what this means is that in an increasing number of markets, rooftop solar can now be had at a discount to utility power, with no up front costs. This is new and different and scary for utilities, because it means rooftop solar can go big. Fast.  Additionally, Elon Musk (who heads both electric car maker Tesla Motors and SolarCity…) is investing $5 billion (with a B) in a massive lithium ion battery factory in Nevada, hoping to drive costs down through economies of scale.

Suddenly, a good chunk of the traditional utility customer base starts to look a little sketchy.


Net Metering Required (For Now)

Many of these disruptive businesses depend on net metering policies, and so utilities, including Xcel, have coordinated with the climate-denying corporate octopus that is the American Legislative Exchange Council to try to repeal them. So far net metering has been pretty durable. The policy is easy to understand and seems fair to most of the public, so it’s popular. Net metering also now has its own relatively well funded corporate advocates in the form of Big Solar — the very same companies raising hundreds of millions of dollars above, represented by The Alliance for Solar Choice (TASC), one of the intervenors in 14AL-0660E (which is the PUC’s catchy name for this whole rate case thing we’ve been involved in).

In Colorado (and elsewhere) these dynamics have brought us to a regulatory stalemate. For once the status quo — net metering — favors distributed renewable electricity. It’s the policy that Big Solar has bet the farm on. But if we try to use it to scale up cheap rooftop PV dramatically, it may destabilize the utilities.

Straight net metering also won’t result in a particularly efficient deployment of distributed energy resources, because all it accounts for is energy production, and many more subtle qualities are important to a well functioning electricity grid. If we can integrate those other qualities — temporal, geographic, environmental, price stabilization, etc. — into our electricity pricing, we’ll get a much better overall outcome. As the Rocky Mountain Institute has put it: the debate over net metering misses the point.
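To sketch what pricing those qualities could look like, here’s a hypothetical value-of-solar style calculation. The component categories echo the general idea behind Minnesota’s proposal, but every number below is an illustrative placeholder, not anyone’s actual tariff:

```python
# Hypothetical value-of-solar stack: credit distributed generation for each
# distinct service it provides, rather than a flat retail-rate credit.
# All component values are illustrative placeholders, in $/kWh.

components = {
    "avoided_energy":       0.035,  # fuel and variable O&M not spent
    "avoided_capacity":     0.015,  # generation not built or kept online
    "avoided_line_losses":  0.004,  # power produced where it's consumed
    "avoided_transmission": 0.006,  # deferred wires investment
    "environmental":        0.020,  # avoided carbon and other externalities
}

value_of_solar = sum(components.values())
print(f"credit: {value_of_solar * 100:.1f} cents/kWh")  # 8.0 cents/kWh here
```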

Be that as it may, right now there are two 800lb gorillas (or maybe an 800lb gorilla and a 300lb gorilla) locked in mortal combat — the utilities on one side and Big Solar on the other. One side is trying to get rid of net metering altogether, and the other is willing to fight to the death to preserve it. When people bring up other ways of valuing distributed renewable energy, like Minnesota’s proposed Value of Solar or feed-in tariffs, they tend to be either ignored or attacked, sometimes by both sides of the fight! For example, The Alliance for Solar Choice wasted no time in setting up a campaign to stop what they glibly re-termed “Feed in Taxes” and “Value of Solar Taxes” as soon as Minnesota made it clear it was considering Value of Solar seriously.

Headed for Strange Country

As with so many aspects of climate and energy policy, change here is inevitable. Regardless of which side prevails in the fight over net metering, as the cost of distributed solar and energy storage continue to decline, we are headed for strange territory.

If the utilities prevail and repeal net metering, they’ll probably slow the spread of distributed generation, since customers would only be able to benefit economically from satisfying their electricity demand on-site in real time, rather than banking electricity production annually. But in the longer term, given ongoing PV system cost declines and the potential for cost-effective electricity storage, the utilities will still face a decline in electricity demand regardless of whether a policy like NEM remains in place. At one extreme we could end up in a situation (well described by RMI), where defection from the grid is economically sensible for a significant number of people.

On the other hand, if Big Solar prevails, then we get to the same place, maybe a little quicker, since they’re already operating with a net metering based business model at significant scale. If the Feds don’t renew the Investment Tax Credit in 2016, that will push the economics out a little, but there’s little reason to think the overall price trend is going to reverse. Ever.

Does that sound ridiculous? Then note that PV in 2014 is already 59% cheaper than NREL predicted it would be back in 2010, and Deutsche Bank is forecasting that solar will reach grid parity nationwide by the end of 2016. On the wholesale side, the New York Times reports that without subsidies, wind on the high plains has come in as low as 3.7¢/kWh (the same as just the production costs of Xcel’s Colorado fossil fleet in 2013).

Some folks think widespread grid defection sounds like utopian energy independence. In practice it would be far less equitable, more expensive, and operationally much less robust than a well designed network that integrates a lot of distributed energy. It’s also physically impossible in cities, which consume most of our electricity: no matter how cheap solar and storage become, cities use more energy within their boundaries than renewable sources inside those boundaries can supply.  And that’s despite the fact that cities have much lower per capita energy use than rural and suburban places of comparable wealth. Cities are great for the climate, but they will always need to import energy, and that means we will still need transmission and distribution systems.

Um, okay. But, decoupling?

In the near term, revenue decoupling would insulate Xcel against the sales they’re going to lose to rooftop solar and other distributed energy. Rather than seeing revenues decline as more electricity sales are displaced, they’d be empowered to adjust rates in a formulaic way to compensate for the losses, and ensure that the fixed costs of the grid continue to be paid for (along with their profits). In theory, this ought to remove or at least reduce their opposition to net metering.
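Mechanically, the true-up is simple. Here’s a minimal sketch with made-up numbers: the commission fixes the utility’s allowed (non-fuel) revenue, and the volumetric rate is periodically recalculated so that actual sales still recover exactly that amount:

```python
# Toy decoupling true-up. Allowed revenue is fixed by the regulator; when
# sales come in below forecast, a surcharge recovers the shortfall.
# All dollar and kWh figures are invented for illustration.

ALLOWED_REVENUE = 1.2e9      # $/yr approved by the commission
forecast_sales_kwh = 2.4e10  # sales assumed when rates were set

base_rate = ALLOWED_REVENUE / forecast_sales_kwh  # 5.00 cents/kWh

actual_sales_kwh = 2.3e10    # rooftop PV and efficiency eat into sales
shortfall = ALLOWED_REVENUE - base_rate * actual_sales_kwh
surcharge = shortfall / actual_sales_kwh  # spread over the next period's sales

print(f"base rate {base_rate * 100:.2f} c/kWh, true-up +{surcharge * 100:.3f} c/kWh")
# Revenue lands at the allowed level either way, which is the whole point.
```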

In the long term, if grid defection becomes attractive, additional fixed-cost recovery mechanisms like revenue decoupling aren’t going to be much help to the utility.

Our task is to open up the discussion about creating an intelligent grid with electricity prices that reflect the more subtle attributes of distributed generation. Revenue decoupling is one potential avenue into that discussion — at least the early part of it.  How so?

In the short term, the utilities are fighting for the status quo, minus net metering, and they seem to be losing.  If the only two positions available are the status quo with vs. without net metering, the choice for renewable energy and climate advocates is clear — we have to side with Big Solar.  But if utilities were actually up for creating a different — and much more scalable — renewable energy policy, then the decision of who to work with becomes more challenging.

With revenue decoupling in place, utilities like Xcel could have more room to consider policies that support distributed generation, without seeing them as an axiomatic threat to their revenues.  But to do so, they’d have to be willing to talk about unwinding their existing investments in fossil generation — otherwise, no renewable or distributed generation policy can scale up far enough to be “good enough” for the climate.  That vital discussion about unwinding fossil plants is not yet happening out in the open.  At least, not in the US.  We’ll take a much closer look at it in a post very soon!

A Decoupling Update

So, it’s been quite a while since our last long policy post, focusing on utility revenue decoupling in connection with Xcel’s current rate case (14AL-0660E) before the Colorado PUC.  That’s because we’ve been busy actually intervening in the case!

A Climate Intervention

We filed our motion to intervene in early August.  As you might already know, in order to be granted leave to intervene, you have to demonstrate that your interests aren’t already adequately represented by the other parties in the case.  Incredibly, CEA’s main interest — ensuring that Colorado’s electricity system is consistent with stabilizing the Earth’s climate — was not explicitly mentioned by any of the other parties!

In our petition we highlighted our mission:

…to educate the public and support a shift in public policy toward a zero carbon economy.  CEA brings a unique perspective on the economics of utility regulation and business models related to mitigating the large and growing risks associated with anthropogenic climate change.  In addition, CEA has an interest in transitioning away from fuel-based electric generation in order to mitigate the purely economic risk associated with inherently unpredictable future fuel costs.

…and we were granted intervention.  So far as we know, this is the first time that concern over climate change has been used as the primary interest justifying intervention at the PUC in Colorado.  In and of itself, this is a win.

A Long and Winding Road

Throughout the late summer, we spent many hours poring over the thousands of pages of direct testimony: especially Xcel’s decoupling proposal, but also (with the help of some awesome interns) the details of the company’s as-yet-undepreciated generation facilities — trying to figure out how much the system might be worth, and thus how much it might cost to just buy it out and shut it down (were we, as a society, so inclined).

Early on in the process, the PUC asked all the parties to submit briefs explaining why we thought it was appropriate to consider decoupling in the rate case, whether it represented a collateral attack on decisions that had already been made in the DSM strategic issues docket, and how it would interact with the existing DSM programs.  We pulled together a response, as did the other intervening parties, and kept working on our answer testimony — a much longer response to Xcel’s overall proposal.  Among the parties that filed briefs, including CEA, SWEEP, WRA, and The Alliance for Solar Choice (TASC, a solar industry group representing big installers like SolarCity), the general consensus was that decoupling was not an attempt to roll back previous PUC decisions related to DSM, and that addressing it in a rate case was appropriate.  Only the Colorado Healthcare Electric Coordinating Council (CHECC, a coalition of large healthcare facilities and energy consumers) told the PUC that decoupling ought to be considered an attack on previous DSM policies.

The PUC staff unfortunately came back with a reply brief that disagreed and suggested, among other things, that maybe it would be better if we just went with a straight fixed/variable rate design to address utility fixed cost recovery.  Never mind the fact that this kind of rate would destroy most of the incentives customers have to use energy efficiently.

And then we waited.

With bated breath each Wednesday morning we tuned in to the Commissioners’ Weekly Meeting, streaming live over the interwebs from the Windowless Room in Denver.  We watched regardless of whether anything related to our dear little 14AL-0660E was on their agenda.  Just in case they tried to sneak it by.  Weeks passed.  And then a month.  The deadline for submitting our answer testimony approached.

Finally, on October 29th, six weeks after we submitted our brief, the commissioners brought up the issue of decoupling at their weekly meeting and, in a couple of minutes, indicated that they’d be severing it from the proceeding, with little explanation as to why.  However, because there were no details, and an order isn’t official until it’s issued in writing… we continued working on our answer testimony.  The final order came out on November 5th, and prohibited submission of testimony related to decoupling.  Answer testimony was due on November 7th.

Where to From Here?

Xcel might come back to the PUC with another decoupling proposal before the next Electric Resource Plan (in fall of 2015).  Or they might not.  Either way, a good chunk of the work we’ve been doing since this summer will have to come to light in a different way.  So for the next few posts, we’re going to explore some of the issues that came up in the preparation of our answer testimony, including:

  • Decoupling and Distributed Energy:
    How would decoupling interact with distributed energy resources like rooftop solar?  What are the implications for utilities as the costs of those resources continue their precipitous decline?
  • Decoupling and Demand Side Management:
    How would revenue decoupling interact with demand side management programs in general — both utility and privately or locally funded — and what particular issues with Xcel’s DSM programs could decoupling address?  What issues can’t it help address?
  • Can Revenue Decoupling Scale?
    Why doesn’t revenue decoupling as a policy really scale up to the point of taking existing generation facilities offline, or preventing new facilities from being built?
  • Decoupling as a First Step:
    Even if it can’t scale, why might decoupling still serve as a useful starting point for the decarbonization process? Can it give us a little bit of breathing room while we start the real negotiation? Or is it just another layer of financial protection for utilities who want to delay change as long as possible?
  • Realism and Equity in Carbon Budgets for Colorado:
    What is the true scope of the decarbonization challenge, in the context of the carbon budgets recently published by the IPCC in their Fifth Assessment Report (AR5), but localized to Colorado so we can actually wrap our heads around it?  And why is this conversation so hard?

Learn more about utility revenue decoupling on our resource page…

Featured image of binders (full of PUC filings…) courtesy of Christian Schnettelker on Flickr. Used under a Creative Commons Attribution License.

Utilities Decoupling to Cover Their… Assets

Last month, Xcel Energy subsidiary Public Service Company of Colorado (PSCo) filed a rate case at the Colorado Public Utilities Commission (Docket: 14AL-0660E).  A lot of the case — the part that’s gotten most of the press — is about PSCo recovering the costs of retiring and retrofitting coal plants as agreed to under the Clean Air Clean Jobs Act (CACJA) of 2010.  However, there’s a piece of the case that could have much wider implications.  Way down deep in the last piece of direct testimony, PSCo witness Scott B. Brockett:

…provides support and recommendations regarding the initiation of a decoupling mechanism for residential and small commercial customers.

This recommendation has captivated all of us here at CEA because it could open the door to Xcel adopting a radically different business model, and becoming much more of an energy services utility (PDF), fit for the 21st century.

To explain why, we’re going to have to delve a ways into the weeds of the energy wonkosphere.

Continue reading Utilities Decoupling to Cover Their… Assets

Facing the Risk in Fossil Fueled Electricity

I recently wrote about how our risk tolerance/aversion powerfully affects our estimation of the social cost of carbon, but obviously that’s not the only place that risk shows up in our energy systems.  Fossil fuel based electricity is also exposed to a much more prosaic kind of risk: the possibility that fuel prices will increase over time.

Building a new coal or gas plant is a wager that fuel will continue to be available at a reasonable price over the lifetime of the plant, a lifetime measured in decades.  Unfortunately, nobody has a particularly good record with long term energy system predictions so this is a fairly risky bet, unless you can get somebody to sign a long term fuel contract with a known price.  That doesn’t really get rid of the risk, it just shifts it onto your fuel supplier.  They take on the risk that they won’t make as much money as they could have, if they’d been able to sell the fuel at (higher) market rates.  If the consumer is worried about rising prices, and the producer is worried about falling prices, then sometimes this can be a mutually beneficial arrangement.  This is called “hedging”.
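Here’s a toy illustration of that risk transfer, with invented numbers: a plant owner comparing twenty years of fuel purchases at a volatile spot price against a fixed-price contract bought at a premium. Neither choice eliminates risk; the two parties just trade different risks.

```python
# Toy fuel hedging comparison. The price path, contract premium, and fuel
# burn are all assumed values, purely for illustration.
import random

random.seed(0)

def spot_price_path(years=20, start=4.0, volatility=0.15):
    """Random-walk gas price in $/MMBtu."""
    prices, p = [], start
    for _ in range(years):
        p = max(p * (1 + random.gauss(0, volatility)), 0.5)
        prices.append(p)
    return prices

CONTRACT_PRICE = 4.5   # $/MMBtu, fixed for the term (a premium over today's spot)
FUEL_PER_YEAR = 30e6   # MMBtu burned annually

path = spot_price_path()
spot_cost = sum(p * FUEL_PER_YEAR for p in path)
contract_cost = CONTRACT_PRICE * FUEL_PER_YEAR * len(path)

print(f"spot: ${spot_cost / 1e9:.2f}B, fixed contract: ${contract_cost / 1e9:.2f}B")
# The buyer pays a premium for certainty; the supplier absorbs the price risk.
```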

Continue reading Facing the Risk in Fossil Fueled Electricity

Geology and Markets, not EPA, Waging War on Coal

With the release of the Environmental Protection Agency’s proposed rules limiting carbon pollution from the nation’s electricity sector, you’ve no doubt been hearing a lot of industry outrage about “Obama’s War on Coal.”

Don’t believe it.

Despite the passionate rhetoric from both sides of the climate divide, the proposed rules are very moderate — almost remedial.  The rules grade the states on a curve, giving each a tailored emissions target meant to be attainable without undue hardship.  For states that have already taken action to curb greenhouse gases, and have more reductions in the works, the targets will be easy to meet.  California, Oregon, Washington, and Colorado are all several steps ahead of the proposed federal requirements — former Colorado Governor Bill Ritter told Colorado Public Radio that he expects the state to meet the proposed federal emissions target for 2030 in 2020, a decade ahead of schedule.  This isn’t to say that Colorado has particularly clean power — our state has the 10th most carbon intensive electricity in the country, with about 63% of it coming from coal — but we’ve at least started the work of transitioning.

Furthermore, many heavily coal dependent states that have so far chosen to ignore the imperatives of climate change (e.g. Wyoming, West Virginia, Kentucky) need only attain single-digit percentage reductions, and would be permitted to remain largely coal dependent all the way up to 2030.  Roger Pielke Jr. and others have pointed out that in isolation, the new rules would be expected to reduce the amount of coal we burn by only about 15% by 2020, relative to 2012.  By 2030, we might see an 18% reduction in coal use compared to 2012.  Especially when you compare these numbers to the 25% reduction in coal use that took place between 2005 and 2012, and the far more aggressive climate goals that even Republicans were advocating just two presidential elections ago, it becomes hard to paint the regulations as extreme.  Instead, they look more like a binding codification of plans that already exist on the ground, and a gentle kick in the pants for regulatory laggards to get on board with at least a very basic level of emissions mitigation.

So, in isolation, there’s a limited amount to get either excited or angry about here.  Thankfully, the EPA’s rules will not be operating in isolation!

Continue reading Geology and Markets, not EPA, Waging War on Coal

The Myth of Price

Our society’s prevailing economic zeitgeist assumes that everything has a price, and that both costs and prices can be objectively calculated, or at least agreed upon by parties involved in the transaction.  There are some big problems with this proposition.

Externalized costs are involuntary transactions — those on the receiving end of the externalities have not agreed to the deal.  Putting a price on carbon can theoretically remedy this failure in the context of climate change.  In practice it’s much more complicated, because our energy markets are not particularly efficient (as we pointed out in our Colorado carbon fee proposal, and as the ACEEE has documented well), and because there are many subsidies (some explicit, others structural) that confound the integration of externalized costs into our energy prices.

The global pricing of energy and climate externalities is obviously a huge challenge that we need to address, and despite our ongoing failure to reduce emissions, there’s been a pretty robust discussion about externalities.  As our understanding of climate change and its potentially catastrophic economic consequences has matured, our estimates of these costs have been revised, usually upwards.  We acknowledge the fact that these costs exist, even if we’re politically unwilling to do much about them.

Unfortunately — and surprisingly to most people — it turns out that understanding how the climate is going to change and what the economic impacts of those changes will be is not enough information to calculate the social cost of carbon.

Continue reading The Myth of Price

Coal Geology vs. Coal Economics & Politics

The geology part of classifying coal as reserves is a lot of work, but it’s doable — with enough drilling logs and other data, you can determine where the coal is, how much of it there is, and its general quality. Once you’ve got that concrete geologic understanding, it’s unlikely to change drastically — it might be refined modestly over time, maybe increasing as mining technology improves… but if you’ve done the work well, you’re probably not going to suddenly discover that 90% or 99% of the coal you thought was there actually isn’t.

The economic part of classifying coal as reserves is fundamentally different, and more changeable with time, because market conditions change much more quickly than geology! I think the experiences of the UK and Germany are particularly interesting, because they were both early large coal producers, part of the first wave of fossil fueled industrialization. They’re extremely mature hard coal mining provinces that have fallen off their peak production dramatically — they’re ahead of the curve that most of the rest of the world is still on.

The drastic downward revisions that both the UK and Germany made were due to changes in economic policies and domestic politics — not geology. Both nations historically had strong labor interests tied to coal mining, and the desire (like most nations) to maintain an indigenous energy supply. But as the cost of supporting the industry grew and its productivity fell, the political logic of maintaining the illusion of a viable coal-based energy system faded away. In Germany, it seems likely that popular support for the nation’s ambitious Energy Transition made it easier for the nation to face up to geologic reality. In the UK the politics seem to have been influenced by the Thatcher government’s desire to privatize previously nationalized industries like coal mining, as well as the discovery of massive offshore natural gas reserves in the North Sea.  In both cases the “proven reserve” numbers appear to have been vastly overstated to begin with, but the political desire to support the industry and maintain the illusion of long-term energy independence was a powerful incentive to ignore the geologic reality.

However, in the end, geology wins.

Where are we headed?

The EIA’s admission that we have not, as a nation, officially and transparently evaluated the economics of extracting our vast coal resources opens the topic up for discussion. The economic and political forces at work today in the US may be different than they were in 1980s Britain, or early 2000s Germany, but they’re pushing in the same direction. A powerful incumbent coal industry is weakening both financially and politically — because of their own increasing production costs, low natural gas prices, flat electricity demand, plummeting renewable energy costs, and concerns about both traditional pollution and greenhouse gas emissions. This gives us the opportunity to re-evaluate our policies around them. What should we change?

We might start with ending the practice of soft pricing in uncompetitive BLM coal lease auctions, as laid out by the Government Accountability Office in February. However, by far our largest subsidy to the industry is our acceptance of the externalized costs they impose on us. A 2011 Harvard study (on which CEA co-founder Leslie Glustrom was a co-author) estimated these costs to be roughly $345 billion/year in the US — equivalent to adding $0.18/kWh of coal fired electricity (explore the study graphically, or see the full peer-reviewed paper).

Even if we ignored traditional environmental impacts and public health consequences, and just applied the modest $37/ton social cost of CO2 calculated by the US Office of Management and Budget, that would add roughly $60 to the cost of a ton of coal! With current PRB production costs in the neighborhood of $10/ton, and operating margins often less than $1/ton ($0.28/ton in the case of Arch last year), this — or even a smaller carbon price — would likely be a crushing blow to the fuel.
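That figure is easy to sanity-check. Using rough, assumed factors (PRB coal at about 8,800 Btu/lb, and an EIA-style subbituminous emissions factor of roughly 210 lbs of CO2 per MMBtu), a ton of coal yields close to two tons of CO2, which puts the carbon cost in the $60-70 per ton range:

```python
# Back-of-the-envelope carbon cost per ton of PRB coal. Both physical
# factors below are rough assumed values for subbituminous coal.

HEAT_CONTENT_MMBTU_PER_TON = 8800 * 2000 / 1e6  # 8,800 Btu/lb -> ~17.6 MMBtu/ton
CO2_TONS_PER_MMBTU = 210 / 2000                 # ~210 lb CO2/MMBtu, in tons
SOCIAL_COST_PER_TON_CO2 = 37                    # $/ton CO2, the estimate cited above

co2_per_ton_coal = HEAT_CONTENT_MMBTU_PER_TON * CO2_TONS_PER_MMBTU  # ~1.85
carbon_cost = co2_per_ton_coal * SOCIAL_COST_PER_TON_CO2            # ~$68/ton

print(f"{co2_per_ton_coal:.2f} tons CO2 per ton of coal -> ${carbon_cost:.0f}/ton")
# Against ~$10/ton production costs and sub-$1/ton margins, that's fatal.
```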

Given the current state of the industry, even without these “drastic” policy changes it’s possible that we are headed for our own major downward reserves revision. This isn’t “running out of coal”. Britain and Germany both still have enormous amounts of coal — it’s just not worth digging much of it out of the ground, given the available alternatives. It’s time to figure out whether we’re in the same boat, admit it to ourselves and the world if we are, and move on to the task of building real solutions.

Two Possibilities, One Course of Action

There’s an irony in all this, which is that regardless of whether we’re running short on economically recoverable coal, we need to expunge the fuel from our energy systems as quickly as possible in order to avoid catastrophic climate change. If the global reserves numbers reported by the WEC are accurate, then we need to leave 60-80% of those reserves in the ground. This was highlighted most famously by Bill McKibben in Rolling Stone in 2012, and implies that a huge fraction of the world’s fossil fuel assets are in fact worthless, unburnable carbon, and most of the world’s coal companies and unconventional hydrocarbon extraction projects are destined for bankruptcy. On the other hand, if the reserve numbers need to be revised downward because most of the listed coal isn’t economically extractable, then a lot of the coal industry’s supposedly bankable assets are worthless and the industry’s growth potential is seriously constrained.

In either case, the right thing to do is stop planning as if today’s coal plants are going to continue operating for much longer, figure out a way to take them offline, and replace them with cost-effective, low risk, zero-carbon generation resources and energy efficiency.

  1. US EIA on the Economics of Coal: No Comment
  2. A Long Time Coming: Revising US Coal Reserves
  3. In Good Company: A Brief History of Global Coal Reserve Revisions
  4. Coal Geology vs. Coal Economics & Politics

In Good Company: A Look at Global Coal Reserve Revisions

In my last post, I recounted some of the indications that have surfaced over the last decade that US coal reserves might not be as large as we think.  The work done by the USGS assessing our reserves, and more recently comments from the coal industry itself, cast doubt on the common refrain that the US is “the Saudi Arabia of coal” and the idea that we have a couple of centuries’ worth of the fuel just lying around, waiting to be burned.  As it turns out, the US isn’t alone in having potentially unreliable reserve numbers.  Over the decades, many other major coal producing nations have also dramatically revised their reserve estimates.

Internationally, the main reserve compilations are done by the World Energy Council (WEC) and to some degree also the German equivalent of the USGS, known as the BGR. Virtually all global (publicly viewable) statistics on fossil fuel reserves are traceable back to one of those two organizations. For instance, the coal reserve numbers in the International Energy Agency’s (IEA’s) 2011 World Energy Outlook came from the BGR; the numbers in BP’s most recent Statistical Review of Energy came from the WEC.

Of course, both the WEC and the BGR are largely dependent on numbers reported by national agencies (like the USGS, the EIA and the SEC in the case of the US), who compile data directly from state and regional geologic survey and mining agencies, fossil fuel consumers, producers, and the markets that they make up.

Looking back through the years at internationally reported coal reserve numbers, it’s surprisingly common to see big discontinuous revisions.  Below are a few examples from the WEC Resource Surveys going back to 1950, including some of the world’s largest supposed coal reserve holders.  In all cases, the magnitude of the large reserve revisions is much greater than annual coal production can explain.

Continue reading In Good Company: A Look at Global Coal Reserve Revisions