The Basics

07 Nov 2016

For the average reader, electrochemical impedance spectroscopy, often abbreviated as EIS, is more than a mouthful. Understanding its utility can be relegated to the category of unresolved mysteries. Today’s post will shed some light and a little intuitive thinking on this powerful method.

The reader’s first question might be “why are you talking about EIS in a battery blog?” The answer is simple. EIS is the foremost standard tool in laboratories around the world for measuring electrochemical processes and reactions. Electrochemistry, one of the most extensive branches of chemistry, is the study of chemical reactions that have an inherent relationship to electricity, i.e., they can either generate electricity or be influenced by it. Yes, you guessed right: batteries are a prime example of electrochemistry. Another practical example of electrochemistry put to good use: the gold plating on your necklace or bracelet.

What does the name EIS imply? Electrochemical impedance is scientific jargon that refers to the electrical resistance of the device under study, in this case, the lithium-ion battery. In its most elemental form, impedance is voltage divided by current. For electrical engineers, it represents components such as resistors or capacitors. For other scientists, it represents the resistance the device exhibits against the flow of electricity.

Spectroscopy is the branch of science that deals with how a property changes with frequency. Hence, EIS is the methodology and science that seek to understand how impedance measurements change with frequency, and more particularly, how these changes are intimately tied to the underlying chemical reactions.

Why frequency? Frequency adds a lot more information about the nature of the chemical process that is taking place. In science, frequency plays a very important role. Take for example the difference between blue and red light. They are both made of photons, but differ in frequency. Medical MRI imaging depends on the frequency of the oscillation of the hydrogen atoms in our bodies. Distinguishing between different broadcast stations on the radio dial operates on similar principles. In other words, we use frequency to uniquely identify chemical or physical processes.

With this long introduction, let’s dive a little deeper into EIS as it relates to a lithium-ion battery. If you were to measure the impedance of a standard electrical resistor — the kind of component you may find inside your smartphone — you would measure exactly the same impedance value whether you apply a low voltage or a high voltage, and whether you measure at a low frequency or a high frequency. In other words, for this resistor, the value is independent of voltage (also known as bias) and frequency. Resistors are consequently easy components to understand.

That is NOT the case for a battery. Change the voltage or frequency and you will get a different value. In other words, the battery can look like a resistor in some circumstances, like a capacitor in others, or like some complex combination of both. When we change the voltage of the battery, it operates at a different “state of charge”; in other words, it will have a different amount of electrical charge stored in it. As I described in this earlier post on fuel gauges, the terminal voltage of the battery is a direct proxy for the amount of electrical charge stored in the battery, which is the state of charge (or the percentage of battery remaining).

In contrast, changing the frequency relates to different electrochemical processes that occur inside the battery. Such electrochemical processes could relate to the diffusion of the electrical charge (in this case, the lithium ions) from one electrode to the other. One can imagine that the ions have to travel a certain distance and insert themselves in the “Swiss-cheese” matrix of the material. So intuitively, this feels like a slow process, and it is. It takes several seconds to even minutes for the lithium ion to go through this diffusion process — meaning that diffusion of ions is characterized by a low-frequency signature. A distinctly different electrochemical process is how lithium ions and electrons interact right at the surface of the electrode. This interaction involves electrons and ions over very short distances. Intuitively, one can see that this can be a very fast reaction, usually on the order of microseconds. Hence its signature contains high frequency signals.
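The rough mapping from timescale to frequency can be sketched numerically. A minimal illustration, where the characteristic times assigned to each process are assumed order-of-magnitude values, not measurements:

```python
import math

# Assumed, order-of-magnitude characteristic times for each process (seconds)
processes = {
    "ion diffusion through the electrode": 10.0,  # seconds to minutes
    "charge transfer at the surface": 1e-6,       # microseconds
}

# A process with characteristic time tau leaves its signature near f ~ 1/(2*pi*tau)
for name, tau in processes.items():
    f = 1.0 / (2 * math.pi * tau)
    print(f"{name}: tau = {tau:g} s -> signature near {f:.3g} Hz")
```

Slow diffusion lands below 1 Hz, while the fast surface reaction lands well above 100 kHz — which is why a frequency sweep can separate the two.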


All of this goes to say that the impedance value at a particular frequency is a “unique signature” for the underlying electrochemical process of interest to our study. And that is what makes EIS such a powerful tool. A trained scientist can read an EIS measurement as a map of the various electrochemical processes and reactions taking place inside the battery, without cutting it open or damaging it. It also provides tremendous insight into what can go wrong inside the battery. Not all electrochemical processes are desirable. For example, the underlying process that causes lithium metal plating is highly undesirable and can be readily detected using its unique EIS signature.

So how is the measurement made? In the laboratory, an oft-expensive and bulky instrument applies a small electrical current at a well-defined frequency to the battery, then measures the voltage. Divide the voltage by the current and you have the impedance at this frequency. For example, apply 1 mA of current at a frequency of 100 Hz, and you might measure 0.5 mV. Hence the impedance is 0.5 mV / 1 mA = 0.5 ohms at 100 Hz. This, of course, does not take into account the complex value of the impedance, but it is a simple illustration of the concept. “Complex” numbers are a mathematical tool for representing values that have both real and imaginary components. Don’t worry if you don’t understand them fully — the key point is that an impedance measurement takes two values to represent.
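The arithmetic of that example, extended to a complex impedance, can be sketched as follows. The 20-degree phase lag is an assumed value for illustration, not a measurement:

```python
import cmath
import math

i_amp = 1e-3               # 1 mA applied current
v_amp = 0.5e-3             # 0.5 mV measured voltage amplitude
phase = -math.radians(20)  # assumed phase lag (capacitive behavior)

# Represent voltage and current as phasors; impedance is their ratio.
V = cmath.rect(v_amp, phase)
I = complex(i_amp, 0)
Z = V / I

# |Z| is still 0.5 ohm, but it now splits into real and imaginary parts.
print(f"|Z| = {abs(Z):.2f} ohm, Re = {Z.real:.3f}, Im = {Z.imag:.3f}")
```

The magnitude matches the simple 0.5-ohm division above; the two components are the pair of values the instrument actually reports at each frequency.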


A full EIS chart shows by convention the imaginary component of the impedance (vertical axis) vs. its real value (horizontal axis). The far left of the chart shows the measurements made at high frequencies, in particular highlighting what happens in the metal conductors inside the battery as well as what occurs at the surfaces of the electrodes. As we follow the purple dots and move towards the right, the frequency of the signature gradually decreases highlighting now a different set of electrochemical processes, in particular what happens at the insulating interface between the electrode and the electrolyte (also known as SEI layer). Ultimately, to the far right of the chart, the frequency is low and is unique to the diffusion effects of the lithium ions.
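The left-to-right sweep of the chart can be mimicked with a toy equivalent circuit: a series resistance plus a charge-transfer resistance in parallel with a double-layer capacitance. All component values below are assumed for illustration, and a real battery model would add a Warburg (diffusion) element at the low-frequency end:

```python
import math

def z_cell(f, r_s=0.02, r_ct=0.05, c_dl=0.5):
    """Toy impedance model: R_s in series with (R_ct parallel C_dl).
    The ohm and farad values are assumed, not fitted to a real cell."""
    w = 2 * math.pi * f
    return r_s + r_ct / (1 + 1j * w * r_ct * c_dl)

# High frequency: the capacitor shorts out R_ct, leaving only R_s (far left).
# Low frequency: Z approaches R_s + R_ct (moving right along the real axis).
for f in (1e5, 1e2, 1e-2):
    z = z_cell(f)
    print(f"{f:>8g} Hz: Re = {z.real:.4f} ohm, -Im = {-z.imag:.4f} ohm")
```

Sweeping the frequency from high to low traces out the semicircle seen on the chart, with the real part growing as the frequency drops.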

An EIS tool is present in every electrochemistry laboratory around the world. Young graduates in this discipline spend countless hours operating it. It is not a small instrument: it fits on a desk, may weigh several pounds, and costs several thousands of dollars. Now imagine what the world would look like if an EIS tool could somehow fit inside each and every smartphone!

18 Oct 2016

State-of-the-art lithium-ion batteries, whether used in smartphones or electric vehicles, all rely on the same fundamental cell structure: two opposing electrodes with an intermediate insulating separator layer, with lithium ions shuffling between the two electrodes.

The positive electrode during charging, usually called the cathode, consists of a multi-metal oxide alloy material. Lithium-cobalt-oxide, or LCO, is by far the most common for consumer electronic applications. NCM, short for lithium nickel-cobalt-manganese oxide and also known as NMC, is gradually replacing other materials in energy storage and electric vehicle applications. LCO and NCM share a great property: they store lithium ions within their material matrix. Think of a porous Swiss cheese: the lithium ions insert themselves between the atomic layers.

In contrast, the anode, or negative electrode during charging, is almost universally made of carbon graphite. Carbon historically was and continues to be the material of choice. It has a large capacity to store lithium ions within its crystalline matrix, much like the metal oxide cathode.

So how do manufacturers increase energy density? In some respects, the math is simple. In practice, it gets tricky.

Energy density equals total energy stored divided by volume. The total stored energy is dictated by the amount of active material, i.e., the available amount of metal oxide alloy as well as graphite that can physically store the lithium ions (i.e., the electric charge). So battery manufacturers resort to all types of design tricks to reduce the volume of inactive material, for example, reducing the thickness of the separator and metal connectors. Of course, there are limits with safety topping the list. To a large extent, this is what battery manufacturers did for the past 20 years — amounting largely to about a 5% increase annually in energy density.
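The arithmetic above can be sketched with assumed cell numbers (illustrative round figures, not a specific product):

```python
# All numbers assumed for illustration, not a specific product.
capacity_ah = 3.0     # cell capacity
voltage_nom = 3.8     # nominal voltage
volume_l = 0.016      # cell volume in liters (16 cm^3)

energy_wh = capacity_ah * voltage_nom
density = energy_wh / volume_l
print(f"{energy_wh:.1f} Wh / {volume_l:.3f} L = {density:.0f} Wh/L")

# Shaving 5% of the volume out of inactive material (thinner separator,
# thinner current collectors) raises density with no new active material:
density_thinner = energy_wh / (volume_l * 0.95)
print(f"after a 5% volume reduction: {density_thinner:.0f} Wh/L "
      f"(+{density_thinner / density - 1:.1%})")
```

A 5% trim of inactive volume buys roughly a 5% density gain — which is exactly the kind of annual increment the industry delivered for two decades.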

But once this extra volume of inactive material is reduced to its bare minimum, increasing energy density gets tricky and challenging. This is the difficult wall that the battery industry is facing now. So what is next?

There are two potential paths forward:

1.  Find a way to pack more ions (i.e., more electric charge) within the electrodes. This is the topic of much research to develop new materials capable of such a feat. But any such breakthrough is still several years away from commercial deployment, which leaves the second option…

2.  Increase the voltage. Since energy equals charge multiplied by voltage, increasing the voltage also raises the amount of energy (remember that energy and charge are related but are not interchangeable). This is the subject of today’s post.

The battery industry raised the voltage a few years back from a maximum of 4.2 V to the present-day value of 4.35 V. This was responsible for adding approximately 4 to 5% to the energy density. A new crop of batteries is now beginning to operate at 4.4 V, adding an additional 4 to 5% to the energy density. But that does not come without some serious challenges. What are they?
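A back-of-the-envelope sketch of those gains, using assumed (not measured) charge and average-voltage values for each cutoff. Raising the cutoff lifts both the extractable charge and the average discharge voltage, and energy is the product of the two:

```python
# Assumed (extractable charge in Ah, average discharge voltage in V)
# for each charge cutoff - illustrative values only.
cells = {
    "4.20 V cutoff": (2.90, 3.75),
    "4.35 V cutoff": (2.98, 3.82),
    "4.40 V cutoff": (3.06, 3.86),
}

prev = None
for label, (q, v) in cells.items():
    e = q * v  # energy = charge x voltage
    gain = f" (+{e / prev - 1:.1%} vs. previous)" if prev else ""
    print(f"{label}: {e:.2f} Wh{gain}")
    prev = e
```

With these assumptions, each voltage step yields a gain in the 4-to-5% range quoted above.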

First, there is the electrolyte. It is a gel-like solvent that permeates the inside of the battery. For lack of a better analogy, if ions are like fish, then the electrolyte is like water: it is the medium within which the lithium ions travel between the two electrodes. As the voltage rises, the electrolyte is subjected to increasingly higher electric fields, causing its early degradation and breakdown. So we are now seeing a new generation of electrolytes that can in principle withstand the higher voltage — although we see in our lab testing that some of these electrolyte formulations are responsible for worse cycle-life performance. This is a first example of the compromises that battery designers are battling.

Second, there is the structural integrity of the cathode. Let’s take LCO as an example. If we peer a little closer into the cathode material (see the figure below), we find a crystal structure with layers made of cobalt and oxygen atoms. When the battery is fully discharged, the lithium ions occupy the vacant space between these ordered layers. In fact, there is a fixed ratio of lithium ions to cobalt and oxygen atoms: one lithium ion for every cobalt atom and every two oxygen atoms.


Courtesy of VESTA (Visualization for Electronic and Structural Analysis)

As the battery is charged, the lithium ions leave the cathode to the anode vacating some of the space between the ordered layers of the LCO cathode. But not all the lithium ions can leave; if too many of them leave, then the crystal structure of the cathode collapses and the material changes its properties. This is not good. So only about half of the lithium ions are “permitted” to leave during charging. This “permission” is determined by, you guessed it, the voltage. Right about 4.5 V, the LCO crystal structure begins to deteriorate, so one can easily see that at 4.4 V, the battery is already getting too close to the cliff.

Lastly, there is lithium plating. High energy-density cells push the limit of the design and tolerances in order to reduce the amount of material that is not participating in the storage. One of the unintended consequences is an “imbalance” between the amount of cathode and anode materials. This creates an “excess” of lithium ions that then deposit as lithium metal, hence plating.

These three challenges illustrate the increasing difficulties that battery manufacturers must overcome to continue pushing the limits of energy density. As they make progress, however, compromises become the norm. Cycle life is often shortened: long gone are the days of 1,000+ cycles without intelligent adaptive controls. Fast charging becomes questionable. In some cases, safety may be in doubt. And the underlying R&D effort costs a lot of money, with expenses stretching the financial limits of battery manufacturers without the promise of immediate financial returns in a market that demands performance at the lowest possible price.

It is great to be a battery scientist with plenty of great problems to work on…but then again, maybe not.

01 Jul 2016

Sleep is an essential function of life. Tissues in living creatures regenerate during deep sleep. We humans get very cranky with sleep deprivation. And cranky we do get when our battery is depleted because we did not give our mobile device sufficient “sleep time.”

I explained in a prior post the power needs of a smartphone, including the display, the radio functions, etc. If all these functions operated constantly, the battery in a smartphone would last at most a couple of hours. So the key to having a smartphone battery last all day is down time. By now, you have hopefully noticed how the industry uses “sleep” terminology to describe these periods when the smartphone is nominally not active.

So what happens deep inside the mobile device during these periods of inactivity, often referred to as standby time? Sleep. That’s right. Not only sleep, but also deep sleep. This is the state of the electronic components, such as the processor and graphics chips, when they reduce their power demand. If we are not watching a video or the screen is actually turned off, there is no need for the graphics processor to be running. So the chip’s major functions are turned off, and the chip is put in a state of low power during which it draws very little from the battery. Bingo, sleep equals more battery life available to you when you need it.

Two key questions come to mind: When and how does the device go to sleep? And when and how does it wake up?

One primary function of the operating system (OS) is to decide when to go to sleep; this is the function of iOS for Apple devices, and Android OS for Android-based devices. The OS monitors the activity of the user, you, then makes some decisions. For example, if the OS detects that the smartphone has been lying on your desk for some considerable time and the screen has been off, then it will command the electronics to reduce their power demand and go to sleep.

This is similar to what happens in a car with a driver. You, the driver, get to decide all the time when to turn the engine off, put it in idle, or press the gas pedal. Each of these conditions changes the amount of fuel you draw from the fuel tank. In a smartphone, the OS is akin to the driver; the electronics replace the engine; and the fuel tank is like the battery. You get the picture. While this is colloquially referred to as managing the battery, in reality you are managing the “engine” and the power it consumes. This is why some drivers might get better mileage (mpg) than others. It is really about power management and has very little to do with true battery management. Battery management is when one addresses the battery itself: for example, how to charge it and how to maintain its health.

The degree of sleep varies substantially and determines how much overall power is being used. Some electronic parts may be sleeping while others are fully awake and active. For example, let’s say you are traveling and your device is set to airplane mode, but you are playing your favorite game. The OS will make sure that the radio chips — the LTE radio, the WiFi, the GPS chip, and every chip with a wireless signal associated with it — go into deep sleep. But your processor and graphics chips will be running. With the radios off, your battery will last the entire flight while you play Angry Birds.

The degree of sleep determines how much total power is being drawn from the battery, and hence, whether your standby time is a few hours or a lot more. A smart OS needs to awaken just the right number of electronic components for just the right amount of time. Anything more than that is a waste of battery, and loss of battery life. The battery is a precious resource and needs to be conserved when not needed.

Both iOS and Android have gotten much smarter over the past years in making these decisions. Earlier versions of Android were lacking the proper intelligence to optimize battery usage. Android Marshmallow introduced a new feature called Doze that adds more intelligence to this decision making process. Nextbit recently announced yet more intelligence to be layered on top of Android. This intelligence revolves around understanding the user behavior and accurately estimating what parts need to be sleeping, yet without impacting the overall responsiveness of the device.

The next question is: who gets to wake up the chips that are sleeping? This is where things get tricky. In a car, you, the driver, get to decide how to run the engine. But imagine for a moment that the front passenger also gets to press the gas pedal. You can immediately see how this is a recipe for chaos. In a smartphone, every app gets to access the electronics and arbitrarily wake up whatever was sleeping. An overzealous app developer might have his app ping the GPS location chip constantly, which guarantees that this chip never goes to sleep — causing rapid loss of battery life. Early versions of the Facebook and Twitter apps were guilty of constantly pinging the radio chips to refresh social data in the background — even when you put your device down and thought it was inactive. iOS and Android offer the user the ability to limit what these apps can do in the background; you can restrict their background refresh or limit their access to your GPS location. But many users do not take advantage of these power-saving measures. If you haven’t done so, do yourself a favor and restrict background refresh on your device, and you will gain a few extra hours of battery life. You can find a few additional tidbits in this earlier post.

App designers have gotten somewhat more disciplined about power usage, but not entirely. Too many apps are still poorly written, or intentionally ignore the limited power available. Just as in a camp where many share water, it takes one inconsiderate individual to ruin the experience; it takes one rogue app to ruin the battery experience in a smartphone. And when that happens, the user often blames the battery, not the rogue app — like campers blaming the water tank instead of the inconsiderate camper. Enforcement of power usage is improving with every iteration of the operating systems, but the reality is that enforcement is not an easy task. There is no escaping the fact that the user experience is best improved by increasing the battery capacity (i.e., a bigger battery) and using faster charging. Managing a limited resource is essential, but nothing makes the user happier than making that resource more abundant…and that, ladies and gentlemen, is what true battery management does. If power management is about making the engine more efficient, then battery management is about making the fuel tank bigger and better.

17 Jun 2016

I will jump ahead in this post to discuss the merits of different lithium-ion chemistries and their suitability to energy storage systems (ESS) applications. Naturally, this assumes that lithium-ion batteries in general are among the best suited technologies for ESS. Some might take issue with this point — and there are some merits for such a discussion that I shall leave to a future post.

A lithium-ion battery is made of two electrodes, the anode and the cathode, and it is the choice of cathode material that determines several key electrical attributes of the battery: in particular energy density, safety, longevity (cycle life) and cost. The most commonly used cathode materials are lithium cobalt oxide (LCO), lithium nickel cobalt aluminum oxide (NCA), lithium nickel cobalt manganese oxide (NCM), lithium iron phosphate (LFP) and lithium manganese nickel oxide (LMNO).


LCO is by far the most common, being the choice for consumer devices from smartphones to PCs. It is widely manufactured across Asian battery factories and its supply chain is very pervasive. As a result, and despite the use of cobalt (an expensive material), it bears the lowest cost per unit of energy, with consumer batteries priced near $0.50/Ah, or equivalently, about $130/kWh. LCO offers very good energy density and a cycle life often ranging between 500 and 1,500 cycles. From a materials standpoint, LCO can potentially catch fire or explode, especially if the battery is improperly designed or operated. That was the primary reason for the battery recalls that were frequent some 10 years ago. Proper battery design and safety electronics circuitry have greatly improved the situation and made LCO batteries far safer.
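The two price figures quoted above are consistent with each other, as a quick unit conversion shows (the nominal cell voltage here is an assumed typical value):

```python
price_per_ah = 0.50   # $/Ah, as quoted
v_nominal = 3.8       # assumed nominal LCO cell voltage

# 1 Ah at 3.8 V stores 3.8 Wh = 0.0038 kWh
price_per_kwh = price_per_ah / (v_nominal / 1000.0)
print(f"${price_per_ah:.2f}/Ah at {v_nominal} V ~ ${price_per_kwh:.0f}/kWh")
```

The conversion lands near $130/kWh, matching the quoted figure.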

NCA came to prominence with Tesla’s use of the Panasonic 18650 cells in their model S (and the earlier Roadster). It has exceptional energy density — which translates directly to more miles of driving per charge. But NCA has a limited cycle life, often less than 500 cycles. Historically NCA was expensive because of its use of cobalt and limited manufacturing volume. This is rapidly changing with Tesla’s growing volume and the Gigafactory coming online in 2017. It is widely rumored that Tesla’s cost is at or near the figures for LCO, i.e., near $100/kWh at the cell level. It remains to be seen whether Panasonic will replicate these costs for the general market.

NCM sits between LCO and NCA. It has good energy density, better cycle life than NCA (in the range of 1,000 to 2,000 cycles) and is considered inherently less prone to safety hazards than LCO. Its historical usage was in power tools, but it has recently become a serious candidate material for automotive applications. In principle, NCM cathodes should be less expensive to manufacture owing to their use of manganese, quite an inexpensive material. The two Korean conglomerates, LG Chem and Samsung SDI, are major advocates and manufacturers of NCM-based batteries.

One of the oldest cathode materials in use is LMNO, sometimes referred to as LMO. The Nissan Leaf battery uses LMNO cathodes. It is safe and reliable with a long cycle life, and is relatively inexpensive to manufacture. But it suffers from low energy density, especially relative to NCA. If you ever wondered why the Tesla has a far better driving range than the Leaf, the choice of cathode materials is an important part of your answer. LMNO is not widely used outside of Japan.

Finally, we come to lithium iron phosphate, or LFP. Initially invented in North America in the 1990s, it has developed a strong manufacturing base today in China, with the Chinese government extending significant economic incentives to make China a manufacturing powerhouse for LFP batteries. LFP has exceptional cycle life, often exceeding 3,000 cycles, and is considered very safe. A major shortcoming of LFP is its reduced energy density: about one third that of LCO, NCA or NCM. In principle, it should be inexpensive to manufacture; after all, iron and phosphorus are two inexpensive materials. But reality suggests otherwise: the lower energy density requires two or three times as many cells to build a battery pack with the same capacity as LCO or NCA. As a result, LFP-based battery packs today cost two to three times more than equivalent LCO-based packs.

By now, you are probably scratching your head and asking: so which one wins? And that is precisely the conundrum for energy storage and, to some extent, electric vehicles. Let’s drill deeper.

Energy storage applications pose a few key requirements on the battery: 1) it should last 10 years with daily charge and discharge, i.e., it needs a cycle-life specification of 3,500 cycles or more; 2) it has to be immensely cost-effective, measured both in its upfront capital cost and its cost of ownership, i.e., the total cost of owning and operating it over its 10-year life; and 3) it has to be safe.
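The first requirement is just calendar arithmetic:

```python
# Daily cycling over the system's life sets the cycle-life specification.
years = 10
cycles_per_day = 1
required_cycles = years * 365 * cycles_per_day
print(required_cycles)  # 3650 cycles, hence the ~3,500-cycle specification
```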

The first and third requirements are straightforward: they make LFP and NCM the favorites. LFP inherently has a long cycle life, and NCM, if charged only to about 80% of its maximum capacity, can also offer a very long cycle life. So if you wondered why Tesla quietly dropped its 10-kWh PowerWall product: it is made with NCA cathodes and cannot meet the very long cycle-life requirement of daily charging.

The second requirement gets tricky. Right now, neither LFP nor NCM is sufficiently inexpensive to make a very compelling economic case to operators of energy storage systems — setting government incentives aside. So the question boils down to which of the two will have the steeper cost-reduction curve over time. Such a question naturally creates two camps of followers, each arguing their respective case.

Notice that high energy density does not factor into these requirements, at least not directly. Unlike consumer devices or electric vehicles, ESS seldom have a volume or weight restriction and thus, in principle, can accommodate batteries with lower energy density. The problem, however, is that lower energy density does not necessarily mean lower cost per unit of energy. It actually costs more to manufacture a 3-Ah battery using LFP than using NCA. This makes energy density a critical factor in the math: lower energy density means more cells are needed to assemble a battery pack of a given capacity, and thus more cost. For now, in the battle between LFP and NCM, the jury is still out, though my personal opinion is that NCM, by virtue of its higher energy density, has an advantage. On the other hand, China’s uninhibited support for LFP can potentially tip the scale. More later.
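A toy comparison illustrates why energy density still matters to pack cost. Both the per-cell energies and prices below are assumed round numbers, not vendor quotes:

```python
import math

pack_kwh = 10.0  # target pack capacity

# (Wh per cell, $ per cell) - assumed round numbers for illustration
cells = {
    "NCA": (12.0, 1.80),
    "LFP": (5.0, 1.50),
}

for name, (wh, dollars) in cells.items():
    n = math.ceil(pack_kwh * 1000 / wh)  # cells needed for the target pack
    print(f"{name}: {n} cells -> ${n * dollars:,.0f} for a {pack_kwh:g} kWh pack")
```

Even though the LFP cell is cheaper per cell, its lower energy content per cell means more than twice as many cells, and a more expensive pack overall, under these assumptions.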

Before I adjourn, I would like to rebut an oft-made statement by some builders of ESS: that they are “battery agnostic.” To them, batteries are a commodity that can be easily interchanged among vendors and suppliers, much like commodity components in a consumer electronic product. I hope the reader gleans from this post the great number of subtleties and complexities involved in choosing the proper battery for an ESS. The notion of battery-agnostic in this space is utterly misplaced and only points to the illiteracy of the engineers building these systems. If the battery fires on the 787 Dreamliner can permanently remind us of one lesson, it should be to never underestimate the consequences of neglecting the complexities of the battery. They can be very severe and immensely costly. Battery-agnostic is battery-illiterate.

13 Jun 2016

Since the installation of the first electrical power plant late in the 19th century, the business of supplying electricity to industry and residences has been “on demand.” When you flip a light switch in your home, you expect the light to go on. For this to happen, electricity has to flow to your light bulb through an elaborate transmission and distribution network (T&D) of copper wires. These wires ultimately connect your light bulb to a generator that sits dozens if not hundreds of miles away from you. When you turn on your light bulb, there is an additional demand in electricity, and that generator “works a little harder” to supply this required electricity. This is what I mean by “on demand.” On a hot summer afternoon, the demand is large, and these generators are working near or at full capacity. At night, the demand is lower, and there is available excess capacity.

This system has worked exceptionally well for over one hundred years. Electrical utilities planned and built a system with high reliability. Now fast forward to the 21st century. Much like many new developments in our modern society, the way we use electricity is changing with new clean energy sources like solar panels and wind farms in diverse geographical locations, plus a general sense by the responsible regulatory bodies to modernize if not liberalize the way electricity is generated and distributed.

Enter the energy storage concept. Imagine an electrical system where the generation of electricity and its consumption are no longer simultaneous. In other words, imagine that the electricity flowing through your light bulb was actually generated at an earlier time — breaking the on-demand relationship. That’s precisely what happens in your smartphone: the time you use the electricity is very different from the time it was generated. One can easily see that the on-demand electricity model fails in a mobile society. But what benefit do we get from breaking this model at a larger scale, e.g., the utility scale?

Let’s imagine the following scenario. You live in a small town in a sunny geography. There is a small power plant outside your town that historically supplied you with electricity, again on demand. You and many of the town residents decide to install solar panels on your rooftops. Your house is now generating all your electricity needs during the daytime. But at night you still rely on your local power plant to supply you with electricity.  Let’s sketch what the load on the power plant might look like before and after the installation of the solar panels.

During the night, the load is modest. Most residential lights are off, but appliances such as refrigerators are still on. The load rises in the morning and peaks some time around midday, depending on work hours and the need for air conditioning on hot days. The load peaks again in the evening when residents return home from their work day. This is the load that the power plant has to deliver. It is for the most part well characterized and predictable, with two modest peaks near the noon hour and the evening. So your local utility sizes the generating plant to match this load demand. Any generating capacity wildly in excess would result in significant upfront capital costs that are not desirable to the rate payers, i.e., you.


Now let’s see what happens to this load curve after many residents in your town install solar panels. As expected, the load during the daytime drops, and it drops drastically. This curve, as its shape suggests, is called the “duck curve.” It creates a serious headache for your local utility. The generators that historically supplied your home with electricity are now running at a much reduced capacity during the day. In other words, the utility has idle capacity, yet it bore the expense of the generators. Worse yet, it still has to size the generating capacity of the power plant to the maximum needed load, which now falls in the evening hours when solar is not a factor.

So let’s take this scenario one step further. Imagine that there is now a big — I mean really big — battery that sits between your home and the power plant. During the peak solar hours of the day, the power plant continues to produce electricity at or near its maximum capacity, but that electricity is now used to charge the battery. At night, the power plant continues to produce electricity at the same rate it did during the day, but the extra demand from the residents is met by drawing electricity from the battery. This has the effect of flattening the load curve and thereby reducing the peak demand on the power plant, resulting in a significant reduction of capital costs. This is called “peak shifting” because, in effect, this big battery enables us to use the excess capacity we have during the day to cover the excess load we have at night. This is one of several key benefits of energy storage.
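Peak shifting can be reduced to a toy computation. The hourly loads below are invented to mimic the shape described, and the battery is treated as lossless:

```python
# Assumed hourly loads in MW over a simplified day (invented numbers).
load = [30, 30, 35, 45, 60, 55, 50, 70, 90, 85, 60, 40]

peak = max(load)
average = sum(load) / len(load)

# A lossless battery lets the plant run flat at the average load:
# it charges when load < average and discharges when load > average.
energy_shifted = sum(l - average for l in load if l > average)  # MWh

print(f"plant sized without storage: {peak} MW")
print(f"plant sized with storage:    {average:.1f} MW")
print(f"battery energy to shift:     {energy_shifted:.1f} MWh")
```

In this sketch, storage lets the utility size the plant to the average load rather than the peak — the capital-cost saving described above.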

In California, the scale of the duck curve is simply overwhelming. California ISO, the state agency responsible for the flow of electricity across its long-distance power lines, estimates that the differential between the peak and the trough of the load curve will exceed 14,000 MW in 2020. To put that in perspective, it is equivalent to seven Diablo Canyon nuclear power plants (near San Luis Obispo in central California). In essence, it also highlights the scale of the economic opportunity: build energy storage systems or build expensive power plants. In future posts, I will cover various topics that come out of this discussion: for example, where do we place this energy storage system? What are the requirements on such a system? What technologies are most suitable?
