The Self Sufficiency 2016 campaign hosted by the League of Women Voters of Boulder County has been pushing for a living wage in Boulder, and City Council talked about it last week. Employees of the University of Colorado have also been pushing for a $15 minimum wage. Unfortunately CRS 8-6-101 prohibits a city or county from setting a minimum wage, so we can’t simply pass a local minimum wage like Seattle, WA and Los Angeles, CA did. Our state statute includes some kind of Orwellian justification for the prohibition. It says that the welfare of Colorado depends on workers having adequate wages, and therefore cities and counties shall not be allowed to regulate wages. Uh, what? Here’s the code:
If you’ve been paying much attention to the climate policy discussion over the last few years, you’ve probably heard mention of carbon budgets, or greenhouse gas (GHG) emissions budgets more generally. Put simply, for any given temperature target there’s a corresponding total cumulative amount of greenhouse gases that can be released while still having a decent chance of meeting the target. For example, the IPCC estimates that if we want a 2/3 chance of keeping warming to less than 2°C, then we can release no more than 1000Gt of CO2 between 2011 and the end of the 21st century.
The reason the IPCC and many other scientist types use carbon budgets instead of emissions rates to describe our situation is that the atmosphere’s long-term response to GHGs is almost entirely determined by our total cumulative emissions. In fact, as the figure below from the IPCC AR5 Summary for Policymakers shows, our current understanding suggests a close to linear relationship between CO2 released and ultimate warming… barring any wild feedbacks (which become more likely and frightening at high levels of atmospheric CO2) like climate-change-induced fires vaporizing our boreal and tropical forests.
What matters from the climate’s point of view isn’t when we release the GHGs or how quickly we release them, it’s the total amount we release — at least if we’re talking about normal human planning timescales of less than a couple of centuries. This is because the rate at which we’re putting these gases into the atmosphere is much, much faster than the rate at which natural processes can remove them — CO2 stays in the atmosphere for a long time, more than a century on average. We’re throwing it up much faster than nature can draw it down. This is why the concentration of atmospheric CO2 has been marching ever upward for the last couple of hundred years, finally surpassing 400ppm this year.
So regardless of whether we use the entire 1000Gt budget in 20 years or 200, the ultimate results in terms of warming will be similar — they’ll just take less or more time to manifest themselves.
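To make the budget logic concrete, here’s a minimal sketch in Python of the near-linear relationship described above. The warming-per-1000Gt constant is a round number chosen purely for illustration (not an IPCC figure), as is the split into a fast and a slow pathway:

```python
# Toy illustration (not a climate model): ultimate warming scales roughly
# linearly with total cumulative CO2, so two pathways that burn through
# the same 1000Gt budget end up in about the same place climatically.

WARMING_PER_1000GT = 0.5  # deg C per 1000Gt CO2 -- an assumed, illustrative value

def ultimate_warming(cumulative_gt):
    """Approximate eventual warming from cumulative CO2 emissions (Gt)."""
    return WARMING_PER_1000GT * cumulative_gt / 1000.0

fast_path = 50.0 * 20    # 50 Gt/yr for 20 years  -> 1000 Gt
slow_path = 5.0 * 200    #  5 Gt/yr for 200 years -> 1000 Gt

print(ultimate_warming(fast_path))  # 0.5 either way: only the total matters,
print(ultimate_warming(slow_path))  # the timing just changes when it arrives
```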
Unfortunately, most actual climate policy doesn’t reflect this reality. Instead, we tend to make long-term aspirational commitments to large emissions reductions, with much less specificity about what happens in the short to medium term (e.g. Boulder, CO: 80% by 2030; Fort Collins, CO: 80% by 2030; the European Union: 40% by 2030). When we acknowledge that it’s the total cumulative emissions over the next couple of centuries that determine our ultimate climate outcome, what we do in the short to medium term — a period of very, very high emissions — becomes critical. These are big years, and they’re racing by.
Is 1000Gt a Lot, or a Little?
Few normal people have a good sense of the scale of our energy systems. One thousand gigatons. A thousand billion tons. A trillion tons. Those are all the same amount. They all sound big. But our civilization is also big, and comparing one gigantic number to another doesn’t give most non-scientists a real feel for what the heck is going on.
Many people were first introduced to the idea of carbon budgets through Bill McKibben’s popular article in Rolling Stone: Global Warming’s Terrifying New Math. McKibben looked at carbon budgets in the context of the fossil fuel producers. He pointed out that the world’s fossil fuel companies currently own and control several times more carbon than is required to destabilize the climate. This means that success on climate necessarily also means financial failure for much of the fossil fuel industry, as the value of their businesses is largely vested in the control of carbon intensive resources.
If you’re familiar with McKibben’s Rolling Stone piece, you may have noticed that the current IPCC budget of 1000Gt is substantially larger than the 565Gt one McKibben cites. In part, that’s because these two budgets have different probabilities of success. 565Gt in 2012 gave an 80% chance of keeping warming to less than 2°C, while the 2014 IPCC budget of 1000Gt would be expected to yield less than 2°C warming only 66% of the time. The IPCC doesn’t even report a budget for an 80% chance. The longer we have delayed action on climate, the more flexible we have become with our notion of success.
Unfortunately this particular brand of flexibility, in addition to being a bit dark, doesn’t even buy us very much time. If we continue the 2% annual rate of emissions growth the world has seen over the last couple of decades, the difference between a budget with a 66% chance of success and one with a 50% chance is only ~3 years worth of emissions. Between 50% and 33% it’s only about another 2 years. This is well illustrated by some graphics from Shrink That Footprint (note that they use gigatons of carbon, or GtC, rather than CO2 as their unit of choice — 1 GtC is about 3.67 Gt of CO2 — so their budget numbers differ, but the time frames and probabilities are the same):
Like McKibben’s article, this projection is from about 3 years ago. In those 3 years, humanity released about 100Gt of CO2. So, using the same assumptions that went into the 565Gt budget, we would now have only about 465Gt left — enough to take us out to roughly 2030 at the current burn rate.
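Here’s a quick back-of-the-envelope check on that “roughly 2030” claim, assuming ~465Gt of budget remaining as of 2015 and the ~33Gt/yr burn rate implied by 100Gt over the last 3 years (both rough numbers from above), with the 2%/yr growth rate mentioned earlier:

```python
# When does cumulative CO2 blow through the remaining budget? All numbers
# are rough values taken from the discussion above.

def exhaustion_year(budget_gt, start_year=2015, annual_gt=33.0, growth=0.02):
    """First year in which cumulative emissions exceed the budget."""
    year, total = start_year, 0.0
    while total < budget_gt:
        total += annual_gt
        annual_gt *= 1.0 + growth
        year += 1
    return year

print(exhaustion_year(465.0))            # ~2028 if 2%/yr emissions growth continues
print(exhaustion_year(465.0, growth=0))  # ~2030 even at a constant burn rate
```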
There are various other tweaks that can be made to the budgets in addition to the desired probability of success, outlined here by the Carbon Tracker Initiative. These details are important, but they don’t change the big picture: continuing the last few decades’ trend in emissions growth will fully commit us to more than 2°C of warming by the 2030s. 2030 might sound like The Future, but it’s not so far away. It’s about as far in the future as 9/11 is in the past.
It’s encouraging to hear that global CO2 emissions in 2014 held steady at 2013 levels even as the global economy kept growing. But even if that turns out to reflect some kind of structural decoupling between emissions, energy, and our economy (rather than, say, China having a bad economic year), holding emissions constant is still far from a path to success. It only stretches our fixed 1000Gt budget into the 2040s, rather than the 2030s.
If we’d started reducing global emissions at 3.5% per year in 2011… we would have had a 50/50 chance of staying below 2°C by the end of the 21st century. If we wait until 2020 to peak global emissions, then the same 50/50 chance of success requires a 6% annual rate of decline. That’s something we’ve not yet seen in any developed economy, short of a major economic dislocation, like the collapse of the Soviet Union. And unlike that collapse, which was a fairly transient event, we will need these reductions to continue year after year for decades.
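The arithmetic behind those decline rates is a geometric series: if emissions fall by a constant fraction r every year forever, total future emissions sum to (current annual emissions) / r. A sketch, where the ~35 Gt/yr starting point and the ~700 Gt budget remaining after a decade of growth are my own rough assumptions:

```python
# Why waiting raises the required rate of decline. If annual emissions E
# shrink by a constant fraction r each year, total future emissions are
# E + E*(1-r) + E*(1-r)^2 + ... = E / r. Numbers here are rough.

def lifetime_emissions(annual_gt, decline_rate):
    """Total future emissions under a constant fractional decline."""
    return annual_gt / decline_rate

def required_decline(annual_gt, remaining_budget_gt):
    """Decline rate needed to keep total emissions within the budget."""
    return annual_gt / remaining_budget_gt

# Starting from ~35 Gt/yr in 2011, a 3.5%/yr decline fits a 1000 Gt budget:
print(lifetime_emissions(35.0, 0.035))   # -> 1000.0

# Grow until 2020 instead, and the smaller leftover budget demands more:
print(required_decline(40.0, 700.0))     # -> ~0.057, i.e. close to 6%/yr
```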
The Years of Living Dangerously
We live in a special time for the 2°C target. We are in a transition period that started in about 2010 and that, barring drastic change, will end around 2030. In 2010, the 2°C target was clearly physically possible, but the continuation of our current behavior and recent trends will render it physically unattainable within 15 years. Over the course of these 20 or so years, our probability of success will steadily decline, and the speed of change required to succeed will steadily increase.
I’m not saying “We have until 2030 to fix the problem.” What I’m saying is closer to “We need to be done fixing the problem by 2030.” The choice of the 2°C goal is political, but the physics of attaining it is not.
My next post looks at carbon budgets at a much smaller scale — the city or the individual — since global numbers are too big and overwhelming for most of us to grasp in a personal, visceral way. How much carbon do you get to release over your lifetime if we’re to stay within the 1000Gt budget? How much do you release today? What does it go toward? Flying? Driving? Electricity? Food? How much do these things vary across different cities?
Utility revenue decoupling is often seen as an enabling policy supporting “demand side management” (DSM) programs. DSM is a catch-all term for things you can do behind the meter to reduce the amount of energy (kWh) a utility needs to produce or the amount of capacity (kW) it needs to have available. DSM includes investments that improve the energy efficiency of buildings and their heating and cooling systems, lighting, and appliances. It can also include “demand response” (DR), which is a dispatchable decline in energy consumption — like the ability of a utility to ask every Walmart in New England to turn down their lights or air conditioning at the same time on a moment’s notice — in order to avoid needing to build seldom-used peaking power plants.
For reasons that will be obvious if you’ve read our previous posts on revenue decoupling, getting utilities to invest in these kinds of measures can be challenging, so long as their revenues are directly tied to the amount of electricity they sell. Revenue decoupling can fix that problem. However, reducing customer demand for energy on a larger scale, especially during times of peak demand, can seriously detract from the utility’s ability to deploy capital (on which they earn a return) for the construction of additional generating capacity. That conflict of interests is harder to address.
But it’s worth working on, because as we’ll see below, DSM is cheap and very low risk — it’s great for ratepayers, and it’s great for the economy as a whole. It can reduce our economic sensitivity to volatile fuel prices, and it often shifts investment away from low-value, environmentally damaging commodities like natural gas and coal, toward skilled labor and high performance building systems and industrial components.
The rest of this post is based on the testimony that Clean Energy Action prepared for Xcel Energy’s 14AL-0660E rate case proceeding, before revenue decoupling was split off. Much of it applies specifically to Xcel in Colorado. However, the overall issues addressed are applicable in many traditionally regulated, vertically integrated monopoly utility settings.
Why can’t we scale up DSM?
There are several barriers to Xcel profitably and cost-effectively scaling up their current DSM programs. Removing these impediments is necessary if DSM is to realize its full potential for reducing GHG emissions from Colorado’s electricity sector. Revenue decoupling can address some, but not all of them.
First, there are the lost revenues from energy saved, which impact the utility’s fixed cost recovery. If the incentive payment the utility earns by meeting DSM targets is too small to compensate for those lost revenues, then the net financial impact of investing in DSM is still negative — i.e. the utility will see investing in DSM as a losing proposition. Xcel currently gets a “disincentive offset” to make up for lost revenues, but the company says this doesn’t entirely offset them.
Even if the performance incentive is big enough to make DSM an attractive investment, the PUC currently caps the incentive at $30M per year (including the $5M “disincentive offset”), meaning that even if there’s a larger pool of cost-effective energy efficiency measures to invest in, the utility has no reason to go above and beyond and save more energy once they’ve maxed out the incentive.
If this cap were removed, the utility would still have a finite approved DSM budget. With an unlimited performance incentive and a finite DSM budget, the utility would have an incentive to buy as much efficiency as possible, within their approved budget, which would encourage cost-effectiveness, but wouldn’t necessarily mean all the available cost-effective DSM was being acquired.
Given that the utility has an annual obligation under the current DSM legislation to save a particular amount of energy (400 GWh), they have an incentive to “bank” some opportunities, and save them for later, lest they make it more difficult for themselves to satisfy their regulatory mandate in later years by buying all the easy stuff up front.
It is also, of course, possible that beyond a certain point there simply aren’t any more scalable, cost-effective efficiency investments to be made.
Finally and most seriously, declining electricity demand would pose a threat to the “used and useful” status of existing generation assets and to the utility’s future capital investment program, which is how they make basically all of their money right now.
Revenue decoupling can play an important role in overcoming some, but not all, of these limitations. With decoupling in place, we’d expect that the utility would be willing and able to earn the entire $30M performance incentive (which they have yet to do in any year) so long as it didn’t make regulatory compliance in future years more challenging by prematurely exhausting some of the easy DSM opportunities.
One of the main reasons utilities fight distributed generation like rooftop solar is that it erodes demand for their centrally generated electricity. Reduced demand is annoying for any business, but it’s especially bad for traditional monopoly utilities. It’s especially bad because much — even most — of the cost of producing a kWh of electricity doesn’t go away if you don’t produce that kWh of electricity. These so-called “fixed” or “non-production” costs come from multi-decade financial commitments to big pieces of infrastructure — the power plants, transmission lines, and distribution systems.
So when you put solar panels on your roof and reduce the amount of electricity you need to buy from the utility, a little bit of fuel doesn’t get burned, and a little bit of money is saved on the utility side (though as we’ve pointed out before, they don’t actually benefit from that cost savings). But most of the money the utility spent to be able to provide you with electricity whenever you needed it is already spent. This is problematic because most electricity rates are designed to recover utility costs in proportion to the amount of electricity you buy (this type of rate is known as a “volumetric rate”). So utilities have an incentive (known as the throughput incentive) to ensure that their electricity sales increase, or at the very least don’t decline.
If lots of people start buying much less electricity, this reduces utility spending on things like fuel, but it doesn’t have any effect (in the short term) on the fixed or non-production costs. To stay solvent, the utilities then go back to their regulators and say “Hey, we’re not getting enough revenue to cover our costs. Give us a rate hike!” and if the regulators agree, allowing the utilities to recover the same fixed costs from fewer overall kWh of electricity sold, this just makes it even more financially sensible for people to put solar panels on their roof, to avoid buying the more expensive electricity. (And in our fantasy world, one could also imagine savvy regulators taking measures to decrease fixed costs, by forcing early retirement of risky, uneconomic fossil generation…)
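Here’s a toy numeric sketch of that feedback loop. The dollar figures and the 5% annual sales erosion are invented for illustration; the point is just that the same fixed costs spread over shrinking sales force the volumetric rate upward:

```python
# Toy death-spiral arithmetic: the utility must recover the same fixed
# costs no matter what, so falling kWh sales push the volumetric rate up,
# which makes rooftop solar more attractive, which cuts sales further.

fixed_costs = 1_000_000_000    # $/yr of fixed (non-production) costs
sales_kwh = 10_000_000_000     # annual kWh sold

for year in range(2015, 2020):
    rate = fixed_costs / sales_kwh   # $/kWh needed to recover fixed costs
    print(f"{year}: {rate:.4f} $/kWh on {sales_kwh / 1e9:.1f} TWh of sales")
    sales_kwh *= 0.95  # assume rising rates drive away 5% of sales each year
```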
This is the essence of the Utility Death Spiral that’s gotten so much attention over the last year or two (including a speakeasy we hosted), and which Dave Roberts did a great job of exploring in his Utilities for Dummies series over at Grist. From the Utility’s point of view the Death Spiral can be short-circuited with revenue decoupling… up to a point. With decoupling, they don’t have to go to regulators and ask for a rate hike — they can recover the fixed costs in a formulaic way, and so decoupled utilities are able to invest in energy efficiency without worrying about lost revenues. They’re also likely to be less opposed to modest amounts of distributed generation.
In fact, it’s hard to imagine a climate-aware utility of the future that isn’t decoupled. We need to get away from utilities treating electricity (and energy more generally) as a commodity, with profits tied to the quantity of product they sell. Instead, we need to move toward treating energy as a service — Amory Lovins’ famous hot showers and cold beer — with an incentive to provide high quality service using the least possible amount of underlying energy.
Decoupling is a Good Thing™
However, if you care about climate, then you always have to ask not just Is this a good thing? but Is this good enough? It’s an old cliché that “better is the enemy of good enough” — i.e. spending time, money, and effort on improvement beyond what’s good enough can be wasteful. But in the context of climate, we have the opposite problem. Moving things in the right direction can still mean abject failure. Plenty of things that are better than the status quo — like decoupling utility revenues, or burning natural gas instead of coal — come nowhere close to being good enough to keep us from seeing more than 2°C of warming.
To have a chance of stabilizing the climate, the utility business model can’t just be tinkered with. It needs to be radically transformed. The good news is that radical transformation is probably on the table whether the utilities want to talk about it or not. Our task is to make it happen as quickly and smoothly as possible.
Utility Death Spiral: Not Just for the Paranoid
Until very recently anybody afraid of the death spiral dynamic might have seemed a little paranoid. Distributed generation (DG) was still pretty expensive, and often dependent on utility rebate programs, tax credits, and other incentives largely controlled by regulators and utilities. As the price of distributed solar has fallen, rebates have dwindled to nothing, and new financing mechanisms and business models have emerged. Utilities and regulators have lost some of their ability to moderate deployment, and they’re poised to lose much more.
Mosaic has created a peer-to-peer lending platform that lets individuals invest in diversified portfolios of smaller distributed solar projects, earning around a 5% return on their investments. They’ve done about $10M worth of financing this way. Now they’re getting into solar loans with backing from a large international re-insurer, adding another $100M in capital.
Sungage just raised $100M in funding from a large northeastern US credit union to use as a revolving solar loan fund.
SolarCity has started issuing solar bonds with a similar yield directly to the public on a much larger scale. They’ve raised more than $100M so far, without going through the traditional finance industry.
Big-time sprawling suburban home builder Lennar is now installing rooftop PV systems by default in some markets, including around Denver. They’re offering home buyers a power purchase agreement (PPA) in which they get a 20% discount off retail electricity rates for 20 years.
From the consumer’s point of view what this means is that in an increasing number of markets, rooftop solar can now be had at a discount to utility power, with no up front costs. This is new and different and scary for utilities, because it means rooftop solar can go big. Fast. Additionally, Elon Musk (who heads both electric car maker Tesla Motors and SolarCity…) is investing $5 billion (with a B) in a massive lithium ion battery factory in Nevada, hoping to drive costs down through economies of scale.
Suddenly, a good chunk of the traditional utility customer base starts to look a little sketchy.
Net Metering Required (For Now)
Many of these disruptive businesses depend on net metering policies, and so utilities, including Xcel, have coordinated with the climate-denying corporate octopus that is the American Legislative Exchange Council to try to repeal them. So far net metering has been pretty durable. The policy is easy to understand and seems fair to most of the public, so it’s popular. Net metering also now has its own relatively well funded corporate advocates in the form of Big Solar — the very same companies raising hundreds of millions of dollars, listed above — represented by The Alliance for Solar Choice (TASC), one of the intervenors in 14AL-0660E (the PUC’s catchy name for this whole rate case thing we’ve been involved in).
In Colorado (and elsewhere) these dynamics have brought us to a regulatory stalemate. For once the status quo — net metering — favors distributed renewable electricity. It’s the policy that Big Solar has bet the farm on. But if we try to use it to scale up cheap rooftop PV dramatically, it may destabilize the utilities.
Straight net metering also won’t result in an optimal deployment of distributed energy resources, because all it accounts for is energy production, and there are many more subtle qualities that matter to a well functioning electricity grid. If we can integrate those other qualities — temporal, geographic, environmental, price stabilization, etc. — into our electricity pricing, we’ll get a much better overall outcome. As the Rocky Mountain Institute has put it: the debate over net metering misses the point.
Be that as it may, right now there are two 800lb gorillas (or maybe, an 800lb gorilla and a 300lb gorilla) locked in mortal combat — the utilities on one side and Big Solar on the other. One side is trying to get rid of net metering altogether, and the other is willing to fight to the death to preserve it. When people bring up other ways of valuing distributed renewable energy like Minnesota’s proposed Value of Solar or Feed in Tariffs they tend to either be ignored or attacked, sometimes by both sides of the fight! For example, The Alliance for Solar Choice wasted no time in setting up a campaign to stop what they glibly re-termed Feed in Taxes and Value of Solar Taxes as soon as Minnesota made it clear they were considering Value of Solar seriously.
Headed for Strange Country
As with so many aspects of climate and energy policy, change here is inevitable. Regardless of which side prevails in the fight over net metering, as the cost of distributed solar and energy storage continue to decline, we are headed for strange territory.
If the utilities prevail and repeal net metering, they’ll probably slow the spread of distributed generation, since customers would only be able to benefit economically from satisfying their electricity demand on-site in real time, rather than banking electricity production annually. But in the longer term, given ongoing PV system cost declines and the potential for cost-effective electricity storage, the utilities will still face a decline in electricity demand regardless of whether a policy like NEM remains in place. At one extreme we could end up in a situation (well described by RMI), where defection from the grid is economically sensible for a significant number of people.
On the other hand if Big Solar prevails then we get to the same place, maybe a little quicker, since they’re already operating with a net metering based business model at significant scale. If the Feds don’t renew the Investment Tax Credit in 2016 that will push the economics out a little, but there’s little reason to think the overall price trend is going to reverse. Ever.
Does that sound ridiculous? Then note that PV in 2014 is already 59% cheaper than NREL predicted it would be back in 2010, and Deutsche Bank is forecasting that solar will reach grid parity nationwide by the end of 2016. On the wholesale side, the New York Times reports that without subsidies, wind on the high plains has come in as low as 3.7¢/kWh (the same as just the production costs of Xcel’s Colorado fossil fleet in 2013).
Some folks think widespread grid defection sounds like utopian energy independence. In practice it would be far less equitable, more expensive, and operationally much less robust than a well designed network that integrates a lot of distributed energy. It’s also physically impossible in cities, which consume most of our electricity, because no matter how cheap solar and storage become, cities use more energy within their boundaries than is available from renewable sources in those same boundaries. This is despite the fact that cities have much lower per capita energy use than rural and suburban places of comparable wealth. Cities are great for the climate, but they will always need to import energy, and that means we will still need transmission and distribution systems.
Um, okay. But, decoupling?
In the near term, revenue decoupling would insulate Xcel against the sales they’re going to lose to rooftop solar and other distributed energy. Rather than seeing revenues decline as more electricity sales are displaced, they’d be empowered to adjust rates in a formulaic way to compensate for the losses, and ensure that the fixed costs of the grid continue to be paid for (along with their profits). In theory, this ought to remove or at least reduce their opposition to net metering.
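A minimal sketch of how such a formulaic adjustment might work (this is the generic decoupling true-up mechanism, not Xcel’s specific proposal, and all the numbers are invented):

```python
# Generic decoupling true-up: the regulator approves a revenue requirement;
# if actual sales come in below forecast, a small formulaic rate rider
# recovers the shortfall next period, with no new rate case needed.

allowed_revenue = 2_500_000_000    # $, approved revenue requirement
forecast_kwh = 25_000_000_000      # kWh assumed when rates were set
base_rate = allowed_revenue / forecast_kwh   # $/kWh

actual_kwh = 24_000_000_000        # kWh after solar displaced some sales
shortfall = allowed_revenue - base_rate * actual_kwh

rider = shortfall / actual_kwh     # spread the shortfall over future sales
print(f"base rate: {base_rate:.4f} $/kWh, true-up rider: {rider:.5f} $/kWh")
```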
In the long term, if grid defection becomes attractive, additional fixed-cost recovery mechanisms like revenue decoupling aren’t going to be much help to the utility.
Our task is to open up the discussion about creating an intelligent grid with electricity prices that reflect the more subtle attributes of distributed generation. Revenue decoupling is one potential avenue into that discussion — at least the early part of it. How so?
In the short term, the utilities are fighting for the status quo, minus net metering, and they seem to be losing. If the only two positions available are the status quo with vs. without net metering, the choice for renewable energy and climate advocates is clear — we have to side with Big Solar. But if utilities were actually up for creating a different — and much more scalable — renewable energy policy, then the decision of who to work with becomes more challenging.
With revenue decoupling in place, utilities like Xcel could have more room to consider policies that support distributed generation, without seeing them as an axiomatic threat to their revenues. But to do so, they’d have to be willing to talk about unwinding their existing investments in fossil generation — otherwise, no renewable or distributed generation policy can scale up far enough to be “good enough” for the climate. That vital discussion about unwinding fossil plants is not yet happening out in the open. At least, not in the US. We’ll take a much closer look at it in a post very soon!
So, it’s been quite a while since our last long policy post, focusing on utility revenue decoupling in connection with Xcel’s current rate case (14AL-0660E) before the Colorado PUC. That’s because we’ve been busy actually intervening in the case!
A Climate Intervention
We filed our motion to intervene in early August. As you might already know, in order to be granted leave to intervene, you have to demonstrate that your interests aren’t already adequately represented by the other parties in the case. Incredibly, CEA’s main interest — ensuring that Colorado’s electricity system is consistent with stabilizing the Earth’s climate — was not explicitly mentioned by any of the other parties!
In our petition we highlighted our mission:
…to educate the public and support a shift in public policy toward a zero carbon economy. CEA brings a unique perspective on the economics of utility regulation and business models related to mitigating the large and growing risks associated with anthropogenic climate change. In addition, CEA has an interest in transitioning away from fuel-based electric generation in order to mitigate the purely economic risk associated with inherently unpredictable future fuel costs.
…and we were granted intervention. So far as we know, this is the first time that concern over climate change has been used as the primary interest justifying intervention at the PUC in Colorado. In and of itself, this is a win.
A Long and Winding Road
Throughout the late summer, we spent many hours poring over the thousands of pages of direct testimony — especially Xcel’s decoupling proposal, but also (with the help of some awesome interns) the details of the company’s as-yet-undepreciated generation facilities — trying to figure out how much the system might be worth, and so how much it might cost to just buy it out and shut it down (were we, as a society, so inclined).
Early on in the process, the PUC asked all the parties to submit briefs explaining why we thought it was appropriate to consider decoupling in the rate case, whether it represented a collateral attack on decisions that had already been made in the DSM strategic issues docket, and how it would interact with the existing DSM programs. We pulled together a response, as did the other intervening parties, and kept working on our answer testimony — a much longer response to Xcel’s overall proposal. The general consensus among the parties that filed briefs, including CEA, SWEEP, WRA, and The Alliance for Solar Choice (TASC, a solar industry group representing big installers like SolarCity), was that decoupling was not an attempt to roll back previous PUC decisions related to DSM — and that addressing it in a rate case was appropriate. Only the Colorado Healthcare Electric Coordinating Council (CHECC, a coalition of large healthcare facilities and energy consumers) told the PUC that decoupling ought to be considered an attack on previous DSM policies.
The PUC staff unfortunately came back with a reply brief that disagreed and suggested, among other things, that maybe it would be better if we just went with a straight fixed/variable rate design to address utility fixed cost recovery. Never mind the fact that this kind of rate would destroy most of the incentives customers have to use energy efficiently.
And then we waited.
With bated breath, each Wednesday morning we tuned in to the Commissioners’ Weekly Meeting, streaming live over the interwebs from the Windowless Room in Denver. We watched regardless of whether anything related to our dear little 14AL-0660E was on the agenda. Just in case they tried to sneak it by. Weeks passed. And then a month. The deadline for submitting our answer testimony approached.
Finally, on October 29th, six weeks after we submitted our brief, the commissioners brought up the issue of decoupling at their weekly meeting, and in a couple of minutes indicated that they’d be severing it from the proceeding, with little explanation as to why. However, because there were no details, and the order isn’t official until it’s issued in writing… we continued working on our answer testimony. The final order came out on November 5th, and prohibited submission of testimony related to decoupling. Answer testimony was due on November 7th.
Where to From Here?
Xcel might come back to the PUC with another decoupling proposal before the next Electric Resource Plan (in fall of 2015). Or they might not. This means that a good chunk of the work that we’ve been doing since this summer will have to come to light in a different way. So for the next few posts, we’re going to explore some of the issues that came up in the preparation of our answer testimony, including:
Decoupling and Distributed Energy: How would decoupling interact with distributed energy resources like rooftop solar? What are the implications for utilities as the costs of those resources continue their precipitous decline?
Decoupling and Demand Side Management: How would revenue decoupling interact with demand side management programs in general — both utility-funded and privately or locally funded — and what particular issues with Xcel’s DSM programs could decoupling address? What issues can’t it help address?
Can Revenue Decoupling Scale? Why doesn’t revenue decoupling as a policy really scale up to the point of taking existing generation facilities offline, or preventing new facilities from being built?
Decoupling as a First Step: Even if it can’t scale, why might decoupling still serve as a useful starting point for the decarbonization process? Can it give us a little bit of breathing room while we start the real negotiation? Or is it just another layer of financial protection for utilities that want to delay change as long as possible?
Realism and Equity in Carbon Budgets for Colorado: What is the true scope of the decarbonization challenge, in the context of the carbon budgets recently published by the IPCC in their Fifth Assessment Report (AR5), but localized to Colorado so we can actually wrap our heads around it? And why is this conversation so hard?
Last month, Xcel Energy subsidiary Public Service Company of Colorado (PSCo) filed a rate case at the Colorado Public Utilities Commission (Docket: 14AL-0660E). A lot of the case — the part that’s gotten most of the press — is about PSCo recovering the costs of retiring and retrofitting coal plants as agreed to under the Clean Air Clean Jobs Act (CACJA) of 2010. However, there’s a piece of the case that could have much wider implications. Way down deep in the last piece of direct testimony, PSCo witness Scott B. Brockett:
…provides support and recommendations regarding the initiation of a decoupling mechanism for residential and small commercial customers.
This recommendation has captivated all of us here at CEA because it could open the door to Xcel adopting a radically different business model, and becoming much more of an energy services utility (PDF), fit for the 21st century.
To explain why, we’re going to have to delve a ways into the weeds of the energy wonkosphere.
Building a new coal or gas plant is a wager that fuel will continue to be available at a reasonable price over the lifetime of the plant, a lifetime measured in decades. Unfortunately, nobody has a particularly good record with long-term energy system predictions, so this is a fairly risky bet, unless you can get somebody to sign a long-term fuel contract with a known price. That doesn’t really get rid of the risk; it just shifts it onto your fuel supplier, who takes on the risk of not making as much money as they could have by selling the fuel at (higher) market rates. If the consumer is worried about rising prices, and the producer is worried about falling prices, then sometimes this can be a mutually beneficial arrangement. This is called “hedging”.
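To put toy numbers on that idea (the prices and volumes below are invented purely for illustration):

```python
# Toy fuel-price hedge: a fixed-price contract locks in the plant owner's
# fuel cost; whoever is on the other side of the contract absorbs the
# difference whenever the market moves.

contract_price = 3.00                       # $/MMBtu agreed up front
annual_fuel = 1_000_000                     # MMBtu burned per year
spot_prices = [2.50, 3.10, 4.20, 5.00]      # hypothetical future market prices

for spot in spot_prices:
    gain = (spot - contract_price) * annual_fuel
    # positive: the buyer dodged high market prices
    # negative: the supplier pocketed the difference (the risk they accepted)
    print(f"spot ${spot:.2f}/MMBtu -> buyer is ${gain:+,.0f} vs. market")
```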
Will Toor and Mike Salisbury at the Southwest Energy Efficiency Project have put together a good paper called Managed Lanes in Colorado (it’s a PDF) that looks at the policy rationale behind (and a few issues with) creating additional highway capacity in the form of tolled managed lanes that also let high occupancy vehicles and transit take advantage of the investment, addressing some of the “Lexus Lane” criticism of using tolls in the public right of way (on projects that are still mostly publicly funded). It’s not quite as fun to read as my magnum opus from this winter on the same topic (US 36: For Whom the Road Tolls), but it might be more appropriate for forwarding to policymakers.
The past couple of years have been rough on Colorado, in terms of climate change related disasters. First a couple of record setting wildfire years, and then floods of “biblical” proportions. At a gut level we know we have to respond, but our public discourse is having trouble addressing the root cause directly. Instead we’re dancing around the issue, and failing to either adapt adequately to our new reality or to mitigate further climate change. Consider the measures that Colorado’s wildfire task force proposed, and that the state declined to act on:
Creating a wildfire risk map, and rating all properties on a scale of 1 to 10, requiring that risk designation to be disclosed before any property sale, and making it available to insurance companies for use in setting their rates.
Charging those living in the “wildland urban interface” a fee based on their risk exposure, which would be used to defray some of the additional public costs incurred in protecting their private property.
Creating fire-resistant building codes for high risk areas, affecting both the materials used in construction, and requirements for defensible space around buildings.
Make no mistake: these are climate change adaptation measures, and Colorado has rejected them.
As the Denver Post reported in September: developers didn’t like the idea of increased construction costs; the real-estate industry didn’t like the idea of making a lucrative market much less attractive; homeowners in high risk areas certainly didn’t like the idea of paying for the risks they’ve taken on, or making those risks transparent to potential buyers of their property.
Would the discussion be any different if people understood that wildfire frequency and intensity are likely to keep increasing as climate change marches on? This is about as close as the article from September gets to mentioning climate change:
Colorado terrain ravaged by wildfire has quadrupled from 200,000 acres in the 1990s to nearly 900,000 acres in the 2000s. “Scientists tell us this pattern isn’t going to change,” Hickenlooper said.
Why is the “pattern” there in the first place? What kind of scientists was the Governor talking to? None of the press articles linked to from this post mention climate change even once, despite universally pointing out the trend. For example: As Colorado wildfires continue to worsen, only moderate laws proposed. And why are they worsening? No comment. Even the wildfire task force’s report mentions climate change only once in 80 pages.
The only big risk factor we’ve talked about directly is where we choose to build our homes. This is an important discussion too. The overall wildfire risk — at least to human lives and property — is something like:
(human risk) = (area burned) × (population density in high-risk areas)
Climate change will in large part determine how much of our state burns each year, but we have a choice about how many people and how much property to put in areas subject to burning. Reducing our exposure to the increasing wildfire risk is an adaptation to climate change — an alteration of our behavior, in light of the expected risks going forward. For the moment at least, we seem unwilling to listen to the warnings.
But hey, at least the state had a conversation, and decided not to do anything.
Cause and Effect
So what are the causes? According to the US Forest Service, the enormous bark beetle kill is due in part to warmer winters resulting from climate change. Forests filled with dead trees are warm and dry for longer each year, lengthening the western US fire season by about 2 months. So it’s perhaps unsurprising that the number of large wildfires per year has already increased, from 140 in the 1980s to 250 in the first decade of the 2000s. This infographic from the Union of Concerned Scientists is a good cartoon summary:
The third panel is probably the scariest for Colorado. The dark red swath covering most of the western half of the state means that we expect more than six times as much land to burn each year in the near future, with just 1°C (1.8°F) of additional warming — and as Kevin Anderson and many others have pointed out, it is virtually certain that we will see another 1°C of warming… if not 3°C, or even more.
So our elected representatives are right to be concerned about increased risk from wildfires, and about the safety of the firefighters who try to protect us from those fires. But we’re still missing the point: We control our exposure to risk locally, and we control the magnitude of that risk globally.
Mitigation?
Policies aimed at avoiding or reducing climate change (like putting a price on carbon) are mitigation efforts. We’re not talking about them much, even in the context of an obviously climate-mediated risk like wildfires. This is bad. If we can’t have a conversation about what’s increasing the wildfire risks, how can we hope to respond appropriately? Is our refusal to respond to change related to our refusal to accept the cause of the change? Or is it more a kind of landscape amnesia — an inability to even see the change? Are we going to forget what normal fire seasons looked like, in the same way that we’ve started to forget what a normal winter feels like?
Double Climate #Fail
Right now we’re managing to fail doubly with respect to climate change. We are both unwilling to adapt to the foreseeable risks, and unwilling to even mention that these risks are linked to our greenhouse gas emissions, let alone talk about what we might do to mitigate those emissions and the risks that they create.
If we really care about our firefighters, if we really are intent on avoiding ever more costly and tragic conflagrations in our state, we need to both adapt and mitigate. We need to start building for a warmer world now, and we need to stop warming the world as quickly as possible.
We should begin levying a modest carbon tax, in the range of $5 to $25/ton of CO2e.
The tax must be applied to the fossil fuels used in electricity generation (coal and natural gas). Ideally it should also be applied to gasoline, diesel, natural gas used outside the power sector, and fugitive methane emissions from the oil and gas industry, but those are less important for the moment.
New electricity generation resources must be allowed to compete economically with the operation of existing carbon-intensive facilities, and fuel costs must not be blindly passed through to consumers without either rigorous regulatory oversight, or utilities sharing fuel price risk.
Carbon tax revenues should be spent on emissions mitigation, providing reliable, low-cost financing for energy efficiency measures and a standard-offer contract with modest performance-based returns for new renewable generation.
Over time the carbon price should be increased and applied uniformly across all segments of the economy, with the eventual integration of consumption based emissions footprinting for imported goods.
A carbon price alone is not enough to get the job done — there are other pieces of our energy markets that also have to be fixed to get us to carbon zero.