The Basics

17Jun 2016

I will jump ahead in this post to discuss the merits of different lithium-ion chemistries and their suitability to energy storage system (ESS) applications. Naturally, this assumes that lithium-ion batteries in general are among the technologies best suited for ESS. Some might take issue with this point, and there is some merit to such a discussion, which I shall leave to a future post.

A lithium-ion battery is made of two electrodes, the anode and the cathode, and it is the choice of cathode material that determines several key electrical attributes of the battery, in particular energy density, safety, longevity (cycle life) and cost. The most commonly used cathode materials are Li cobalt oxide (known as LCO), Li nickel cobalt aluminum (NCA), Li nickel manganese cobalt (NCM), Li iron phosphate (LFP) and Li manganese nickel oxide (LMNO).

[Figure: energy density comparison of common cathode chemistries]

LCO is by far the most common, being the choice for consumer devices from smartphones to PCs. It is widely manufactured across Asian battery factories and its supply chain is very pervasive. As a result, and despite the use of cobalt (an expensive material), it bears the lowest cost per unit of energy, with consumer batteries priced near $0.50/Ah, or equivalently, about $130/kWh. LCO offers very good energy density and a cycle life often ranging between 500 and 1,500 cycles. From a material standpoint, LCO can potentially catch fire or explode, especially if the battery is improperly designed or operated. That was the primary reason for the battery recalls that were frequent some 10 years ago. Proper battery design and safety electronics have greatly improved the situation and made LCO batteries far safer.
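As a sanity check on that price conversion (assuming a nominal cell voltage of about 3.8 V):

$$\frac{\$0.50/\mathrm{Ah}}{3.8\ \mathrm{V}} \approx \$0.13/\mathrm{Wh} \approx \$130/\mathrm{kWh}$$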

NCA came to prominence with Tesla’s use of Panasonic 18650 cells in the Model S (and the earlier Roadster). It has exceptional energy density — which translates directly to more miles of driving per charge. But NCA has a limited cycle life, often less than 500 cycles. Historically, NCA was expensive because of its use of cobalt and its limited manufacturing volume. This is rapidly changing with Tesla’s growing volume and the Gigafactory coming online in 2017. It is widely rumored that Tesla’s cost is at or near the figures for LCO, i.e., near $100/kWh at the cell level. It remains to be seen whether Panasonic will replicate these costs for the general market.

NCM sits between LCO and NCA. It has good energy density, better cycle life than NCA (in the range of 1,000 to 2,000 cycles) and is considered inherently less prone to safety hazards than LCO. Its historical usage was in power tools, but it has recently become a serious candidate material for automotive applications. In principle, NCM cathodes should be less expensive to manufacture owing to their use of manganese, quite an inexpensive material. The two Korean conglomerates, LG Chem and Samsung SDI, are major advocates and manufacturers of NCM-based batteries.

One of the oldest cathode materials in use is LMNO, sometimes referred to as LMO. The Nissan Leaf battery uses LMNO cathodes. It is safe and reliable with a long cycle life, and it is relatively inexpensive to manufacture. But it suffers from low energy density, especially relative to NCA. If you ever wondered why the Tesla has a far better driving range than the Leaf, the choice of cathode materials is an important part of the answer. LMNO is not widely used outside of Japan.

Finally, we come to lithium iron phosphate, or LFP. Invented in North America in the 1990s, it has developed a strong manufacturing base today in China, with the Chinese government extending significant economic incentives to make China a manufacturing powerhouse for LFP batteries. LFP has exceptional cycle life, often exceeding 3,000 cycles, and is considered very safe. A major shortcoming of LFP is its reduced energy density: about one third that of LCO, NCA or NCM. In principle, it should be inexpensive to manufacture; after all, iron and phosphorus are two inexpensive materials. But reality suggests otherwise: the lower energy density requires two or three times as many cells to build a battery pack with the same capacity as LCO or NCA. As a result, LFP-based battery packs today cost two to three times more than equivalent LCO-based packs.

By now, you are probably scratching your head and asking: so which one wins? And that is precisely the conundrum for energy storage and, to some extent, electric vehicles. Let’s drill deeper.

Energy storage applications pose a few key requirements on the battery: 1) the battery should last 10 years with daily charge and discharge, in other words, a cycle life specification of 3,500 cycles or more; 2) it has to be immensely cost-effective, measured both in its upfront capital cost and in its cost of ownership, i.e., the total cost of owning and operating it over its 10-year life; and 3) it has to be safe.
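As a quick check on that first requirement, daily cycling over the battery’s service life works out to

$$10\ \mathrm{years} \times 365\ \frac{\mathrm{cycles}}{\mathrm{year}} = 3{,}650\ \mathrm{cycles}$$

hence a specification in the neighborhood of 3,500 cycles or more.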

The first and third requirements are straightforward: they make LFP and NCM the favorites. LFP inherently has a long cycle life, and NCM, if charged only to about 80% of its maximum capacity, can also offer a very long cycle life. So if you wondered why Tesla quietly dropped its 10-kWh PowerWall product, it is because it is made with NCA cathodes and cannot meet the very long cycle life requirement of daily charging.

The second requirement gets tricky. Right now, neither LFP nor NCM is sufficiently inexpensive to make a very compelling economic case to operators of energy storage systems — setting government incentives aside. So the question boils down to which one of them will have a steeper cost reduction curve over time. Such a question naturally creates two camps of followers, each arguing their respective case.

Notice that high energy density does not factor into these requirements, at least not directly. Unlike consumer devices or electric vehicles, ESS seldom have a volume or weight restriction and thus, in principle, can accommodate batteries with lower energy density. The problem, however, is that batteries with lower energy density do not necessarily deliver a lower cost per unit of energy. It actually costs more to manufacture a 3-Ah battery using LFP than it does using NCA. This makes energy density a critical factor in the math: lower energy density means more cells are needed to assemble a pack of a given capacity, and thus more cost, as the sketch below illustrates. For now, in the battle between LFP and NCM, the jury is still out, though my personal opinion is that NCM, by virtue of its higher energy density, has an advantage. On the other hand, China’s uninhibited support for LFP can potentially tip the scale. More later.
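A minimal sketch of that pack-sizing arithmetic. The per-cell energy and cost figures here are assumptions for illustration, not vendor quotes:

```python
import math

def pack_cells(target_kwh, cell_wh, cell_cost_usd):
    """Cell count and total cell cost for a pack of target_kwh."""
    n = math.ceil(target_kwh * 1000 / cell_wh)
    return n, n * cell_cost_usd

# Assumed cells: an NCM cell storing ~11 Wh versus an LFP cell of the
# same format storing ~6 Wh; per-cell costs are likewise invented.
for name, wh, cost in [("NCM", 11.1, 1.60), ("LFP", 6.4, 1.40)]:
    n, total = pack_cells(target_kwh=10, cell_wh=wh, cell_cost_usd=cost)
    print(f"{name}: {n:5d} cells, ~${total:,.0f} in cells for a 10 kWh pack")

# Even with a cheaper cell, the lower-energy-density chemistry needs
# far more cells, so the pack as a whole ends up costing more.
```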

Before I adjourn, I would like to rebut an oft-made statement by some builders of ESS: that they are “battery agnostic.” To them, batteries are a commodity that can be easily interchanged among vendors and suppliers, much like commodity components in a consumer electronics product. I hope the reader gleans from this post the great number of subtleties and complexities involved in choosing the proper battery for an ESS. The notion of battery agnosticism in this space is utterly misplaced and only points to the illiteracy of the engineers building these systems. If the battery fires on the 787 Dreamliner can permanently remind us of one lesson, it should be to never underestimate the consequences of neglecting the complexities of the battery. They can be very severe and immensely costly. Battery-agnostic is battery-illiterate.

13Jun 2016

Since the installation of the first electrical power plants in the late 19th century, the business of supplying electricity to industry and residences has been “on demand.” When you flip a light switch in your home, you expect the light to go on. For this to happen, electricity has to flow to your light bulb through an elaborate transmission and distribution (T&D) network of copper wires. These wires ultimately connect your light bulb to a generator that sits dozens if not hundreds of miles away from you. When you turn on your light bulb, there is additional demand for electricity, and that generator “works a little harder” to supply it. This is what I mean by “on demand.” On a hot summer afternoon, the demand is large, and these generators are working near or at full capacity. At night, the demand is lower, and there is excess capacity available.

This system has worked exceptionally well for over one hundred years. Electrical utilities planned and built a system with high reliability. Now fast forward to the 21st century. Like many aspects of modern society, the way we use electricity is changing, with new clean energy sources such as solar panels and wind farms in diverse geographical locations, plus a general push by the responsible regulatory bodies to modernize, if not liberalize, the way electricity is generated and distributed.

Enter the energy storage concept. Imagine an electrical system where the generation of electricity and its consumption are no longer simultaneous. In other words, imagine that the electricity flowing through your light bulb was actually generated at an earlier time, breaking the on-demand relationship. That’s precisely what happens in your smartphone: the time you use the electricity is very different from the time it was generated. One can easily see that the on-demand electricity model fails in a mobile society. But what benefit do we get from breaking this model at a larger scale, e.g., the utility scale?

Let’s imagine the following scenario. You live in a small town in a sunny geography. There is a small power plant outside your town that has historically supplied you with electricity, again on demand. You and many of the town residents decide to install solar panels on your rooftops. Your house now generates all the electricity it needs during the daytime, but at night you still rely on your local power plant. Let’s sketch what the load on the power plant might look like before and after the installation of the solar panels.

During the night, the load is modest. Most residential lights are off, but appliances such as refrigerators are still on. The load rises in the morning and peaks sometime during midday, depending on work hours and the need for air conditioning on hot days. The load peaks again in the evening when residents return home from their work day. This is the load that the power plant has to deliver. It is for the most part well characterized and predictable, with two modest peaks near the noon hour and the evening. So your local utility sizes the generating plant to match this load demand. Any generating capacity wildly in excess will result in significant upfront capital costs that are not desirable to the rate payers, i.e., you.

[Figure: the “duck curve” load profile]

Now let’s see what happens to this load curve after many residents in your town install solar panels. As expected, the load during the daytime drops, and it drops drastically. This curve, as its shape suggests, is called the “duck curve.” It creates a serious headache for your local utility. The generators that historically supplied your home with electricity are now running at a much reduced capacity during the day. In other words, the utility has idle capacity, yet it bore the expense of the generators. Worse yet, it still has to size the generating capacity of the power plant to the maximum needed load, which now occurs in the evening hours when solar is not a factor.

So, let’s take this scenario one step further. Imagine that there is now a big, I mean really big, battery that sits between your home and the power plant. During the peak solar hours in the daytime, the power plant continues to produce electricity at or near its maximum capacity, but that electricity is now used to charge the battery. At night, the power plant continues to produce electricity at the same rate it did during the day, but the extra demand from the residents is now met using the electricity stored in the battery. This has the effect of flattening the load curve and thereby reducing the peak demand on the power plant, resulting in a significant reduction of capital costs. This is called “peak shifting” because, in effect, this big battery enables us to use the excess capacity we have during the day to cover the excess load we have at night. It is one of several key benefits of energy storage.
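Here is a toy numerical sketch of that flattening, with an invented 24-hour load profile (none of these numbers come from real utility data):

```python
# The plant runs flat at the average load while a battery absorbs the
# surplus hours and covers the peak hours.

hourly_load_mw = ([40] * 6                        # overnight
                  + [60, 80, 90, 85, 80, 75]      # morning ramp and midday
                  + [70, 70, 75, 80, 95, 110]     # afternoon
                  + [120, 115, 100, 80, 60, 45])  # evening peak, wind-down

flat_mw = sum(hourly_load_mw) / len(hourly_load_mw)

# The battery's required capacity is the swing between its fullest and
# emptiest states as it soaks up the hour-by-hour imbalance.
state, states = 0.0, []
for load in hourly_load_mw:
    state += flat_mw - load   # charge when load < flat, discharge when above
    states.append(state)

print(f"Peak plant output without storage: {max(hourly_load_mw)} MW")
print(f"Flat plant output with storage:    {flat_mw:.0f} MW")
print(f"Battery capacity required:         {max(states) - min(states):.0f} MWh")
```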

In California, the scale of the duck curve is simply overwhelming. The California ISO, the state agency responsible for the flow of electricity across its long-distance power lines, estimates that the differential between the peak and the trough in the load curve will exceed 14,000 MW in 2020. To put that in perspective, this is equivalent to seven Diablo Canyon nuclear power plants near San Luis Obispo in central California. In essence, it also highlights the scale of the economic opportunity: build energy storage systems or build expensive power plants. In future posts, I will cover various topics that come out of this discussion, for example: Where do we place this “energy storage” system? What are the requirements on such a system? What technologies are most suitable?

[Figure: California ISO duck-curve projection]

02Apr 2016

For a seemingly simple device with only two electrical connections, the battery is widely misunderstood by the broad population, especially now that batteries are a common fixture in our technology-laden daily lives. I will highlight in this post five common misconceptions about the lithium-ion battery:

1. STAND ON ONE LEG, EXTEND YOUR ARM, THEN PLUG YOUR CHARGER INTO YOUR DEVICE:

Well, not literally, but the acrobatics capture the perceived hypersensitivity of the average consumer to special recipes that supposedly help your battery. One of the silliest ones I ever heard was to store the battery in the freezer to extend its life. PLEASE, DO NOT EVER DO THIS! Another silly one is to recharge the battery only once it drops below 50%, or 40%, or 30%…. Let me be clear: you can use your phone down to zero and recharge it, and it will be just fine.

It is also now common to find apps that will “optimize” your battery. The reality is they do nada! Don’t bother. Don’t bother with task managers either; no, they don’t extend your battery life. Both Android and iOS are fairly sophisticated about managing apps in the background.

Turning off WiFi, GPS and Bluetooth will not extend your battery life, at least not meaningfully. These radios use so little power that turning them off will not give you any noticeable advantage. The fact is that your cellular radio (e.g., LTE) and your display (specifically when the screen is on) are the two primary drains on your battery — and turning those off renders your mobile device somewhat useless.

Lastly, there is the question of “should I charge the battery to 100%?” Well, yes! But you don’t have to if you don’t want to or can’t. In other words, stop thinking about it. The battery is fine whether you charge it to 100%, to 80%, or to anything else. Sure, for those of you who are battery geeks: yes, you will get more cycle life if the battery is not charged to 100%. But for the average user, you can do whatever you like — your usage is not wrong. These are design specifications that the device manufacturer is thinking about on your behalf.

2. LITHIUM BATTERIES HAVE LONG MEMORIES:

Yes, as long as the memory of a 95-year-old suffering from Alzheimer’s!! Sarcasm aside, lithium-ion batteries have zero memory effect. Now, if you are a techie intent on confusing your smartphone or mobile device, here’s a little trick: always keep your device’s battery between 30% and 70%. This will confuse the “fuel gauge,” that little battery monitor that tells you how much juice you have left. The battery will be just fine, but the fuel gauge will not report accurately. Every so often, the fuel gauge needs to hit close to zero and 100% to learn what these levels truly are; otherwise it cannot accurately report the remaining battery percentage. This is like the gas gauge in your car going kaput… it does not mean that the battery has memory or other deficiencies. Should you suspect that your fuel gauge is confused, charge your phone to 100% and discharge it down to 10% a few times. That is sufficient to recalibrate the gauge.
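For the curious, here is a toy sketch of why this happens. The drift numbers are invented, but the mechanism (a coulomb counter accumulating measurement error until a known endpoint re-anchors it) is the gist:

```python
# Simulate a gauge that integrates current flow ("coulomb counting")
# with a small per-swing error, while the user cycles shallowly
# between 30% and 70% and never touches a known reference point.

capacity_mah = 3000.0
true_mah = gauge_mah = 0.7 * capacity_mah
error_mah = -5.0               # assumed: the gauge miscounts 5 mAh per swing

for _ in range(50):            # fifty shallow 30%-70% cycles
    for delta in (-0.4 * capacity_mah, +0.4 * capacity_mah):
        true_mah += delta
        gauge_mah += delta + error_mah

print(f"true charge: {true_mah / capacity_mah:.0%}")    # -> 70%
print(f"gauge reads: {gauge_mah / capacity_mah:.0%}")   # -> 53%, it drifted

gauge_mah = capacity_mah       # a full charge re-anchors the gauge at 100%
```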

3. WE NEED NEW BATTERY CHEMISTRIES — THE PRESENT ONES ARE NO GOOD:

This one garners a lot of media interest. Every time a research lab makes a new discovery, it is headline news and makes prime-time TV. The reality is that the path from discovery in the lab to commercial deployment is extremely rocky. There have been dozens of such discoveries in the past 5 – 10 years, yet virtually none have made it into wide commercial deployment. History tells us it takes over $1 billion and about 10 years for a new material to begin its slow commercial adoption cycle… and for now, the pipeline is rather thin. Additionally, present lithium-ion batteries continue to improve. Granted, the progress is not very fast, but it is sufficient to make great products. Just consider that current battery technology is powering some great electric vehicles.

Let me be more specific. Present-day lithium-ion batteries are achieving over 600 Wh/l in energy density — nearly 10x what lead-acid batteries can deliver. This is enough to put 3,000 mAh in your smartphone (sufficient for a full day of use), and 60 kWh in your electric car (enough for 200 – 250 miles of driving range). With the proper control systems and intelligence, a mobile device battery can last 2 years or more, and an electric vehicle battery can last 10 years. Does this mean we stop here? Of course not, but this sense of urgency to develop new materials or chemistries is rather misplaced. Instead, we need to keep optimizing the present battery materials and chemistries. Just reflect on how silicon as a semiconductor material was challenged by other candidate materials in the 1980s and 1990s (do you remember gallium arsenide?), only to continue its steady progress and become an amazing material platform for modern computation and communication.
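A quick back-of-the-envelope check on those figures, assuming a nominal cell voltage of about 3.8 V:

$$3{,}000\ \mathrm{mAh} \times 3.8\ \mathrm{V} \approx 11.4\ \mathrm{Wh} \quad\Rightarrow\quad \frac{11.4\ \mathrm{Wh}}{600\ \mathrm{Wh/l}} \approx 19\ \mathrm{ml}$$

$$\frac{60\ \mathrm{kWh}}{600\ \mathrm{Wh/l}} = 100\ \mathrm{l}$$

So the cells for a full day of smartphone use occupy about 19 ml, thin enough for a modern phone, while a 200-mile EV pack’s cells occupy roughly 100 liters, about the volume of a large suitcase (before packaging overhead).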

4. LITHIUM-ION BATTERIES ARE EXPENSIVE:

What made silicon the king of semiconductor materials is its amazing cost curve, i.e., decreasing cost per unit of performance, aka Moore’s law. Now, lithium-ion batteries don’t have an equivalent of Moore’s law, but the cost of making them is dropping fast, to the point that they are rapidly becoming commoditized. A battery for a smartphone costs the device OEM somewhere between $1.50 and $3.00, hardly a limiting factor for making great mobile devices. GM and Tesla Motors have widely advertised that their battery manufacturing costs are approaching $100/kWh. In other words, a battery with sufficient capacity to drive 200 miles (i.e., 50 to 60 kWh) has a manufacturing cost of $5,000 to $6,000 (excluding the electronics), with continued room for further cost reduction. It is not yet ready to compete with inexpensive cars with gas engines, but it sure is very competitive with mid-range luxury vehicles. If you are in the market for a BMW 3-series or equivalent, I bet you are keeping an eye on the new Tesla Model 3. Tesla Motors pre-sold nearly 200,000 Model 3 electric vehicles in the 24 hours after its announcement. This performance at a competitive price is what makes the present lithium-ion batteries (with their present materials) attractive and dominant, especially vis-à-vis potentially promising or threatening new chemistries or materials.

5. LITHIUM-ION BATTERIES ARE UNSAFE:

Why do we not worry about the immense flammability of gasoline in our vehicles? Isn’t combustion the most essential mechanism of gas-driven cars? Yet we feel very safe in these cars, and car fires are seldom headline news. That’s because the safety of traditional combustion-engine cars has evolved immensely over the past decades. For example, gas tanks are insulated and protected in the event of a car crash.

Yes, lithium is flammable under certain but well-known conditions. But the safety of lithium-ion batteries can be managed as religiously as car makers manage the safety of combustion engines. It is quite likely that some isolated accidents or battery recalls will occur in the future as lithium-ion batteries are deployed even more widely than they are today. In mobile devices, the track record on safety has been very good, certainly since the battery industry had to manage the safety recalls at the turn of the century. Is there room for progress, and can we achieve an exceptional safety record with lithium-ion batteries? Absolutely yes. There is no inherent reason why it cannot be achieved, though it will take time, just as the automotive and airline industries have continuously improved the safety of their products.

22Jan 2016

I described in an earlier post how adaptive systems turned smartphones into great cameras. Let’s now talk about how adaptivity and adaptive charging can make a battery perform far better.

Let’s start with the basic operation of a lithium-ion battery. The early posts of this blog describe it in more detail; I will briefly recap the essentials here and explain where performance is limited. For the reader who wants to learn more, select “The Basics” category tag and feel free to review these earlier posts.

The figure below illustrates the basic structure of a lithium-ion battery. On the left-hand side, one sees an electron microscope image of a battery showing the anode, the cathode and the separator, essentially the three basic materials that constitute the battery. On the right-hand side, one sees a sketch illustrating the function of these materials during the charging process: the lithium ions, “stored” inside the individual grains of the cathode, move through the separator and insert themselves inside the grains of the graphite anode. If you are an engineer or physicist, you are asking, “where are the electrons?” A neutral lithium atom becomes an ion in the electrolyte and travels through the separator to the anode. The electron travels in the opposite direction through the external circuitry, from the aluminum current collector to the copper current collector, where it is captured by a lithium ion to form a lithium-carbon bond.

Structure of the lithium ion battery
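For concreteness, taking an LCO cathode and a graphite anode as one common example, the charging process can be written as a pair of half-reactions:

$$\mathrm{LiCoO_2} \;\longrightarrow\; \mathrm{Li}_{1-x}\mathrm{CoO_2} + x\,\mathrm{Li^+} + x\,e^- \qquad \text{(cathode)}$$

$$\mathrm{C_6} + x\,\mathrm{Li^+} + x\,e^- \;\longrightarrow\; \mathrm{Li}_x\mathrm{C_6} \qquad \text{(anode)}$$

The ions take the internal path through the electrolyte and separator; the electrons take the external path through the charger. Discharge runs both reactions in reverse.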

This seems simple enough, so what can go wrong? Lots! I will focus here on a handful of mechanisms that become critical as the battery’s storage capacity and energy density increase. Looking at the diagram above, it is hopefully obvious that increasing energy density means packing more and more ions into this little sketched volume. It means reducing the dimensions of the anode, the cathode and the separator, and trying to saturate the capability of the anode grains to absorb ions. It’s like trying to put as much water as possible inside a sponge. In this process, small variations in manufacturing become really detrimental to performance. Look at the left photograph and observe the coarseness of the grain size in both electrodes: it means the uniformity of the ionic current is poor. As the energy density rises, a large number of ions all rush from the cathode to the anode, and this lack of uniformity creates stress points, both electrical and mechanical: loss of active material, loss of lithium ions, and mechanical cracking, all adding up over time to a gradual loss of capacity and eventual failure.

I will jump to two key observations. First, it should be apparent that when energy density is low, these effects are benign, but when energy density is high, there are so many ions involved in the process that small manufacturing variations become detrimental. Second, it should also be apparent that faster charging produces the same effect, i.e., more ions trying to participate in the process at once.

Clearly, battery manufacturers are trying to improve their manufacturing processes and their materials — but let’s face it, this is becoming an incredibly expensive proposition, and smartphone and PC manufacturers are not willing to pay for more expensive batteries. This is very similar to the earlier post about camera lenses: make great lenses and they become very expensive, or shift the burden to computation and correct the errors dynamically and adaptively.

That’s precisely what adaptive charging does: measure, in real time, the impact of manufacturing variations, embedded defects, non-uniformity of material properties and what have you; assess what these “errors” are and how they may be progressing in time; then adjust the voltage and current of the charging waveform to mitigate them, and keep doing it for as long as the battery is in operation. Each battery is unique in its manufacturing history, material properties, and performance, and the charging process gets tailored in an intelligent but automated fashion to that uniqueness.
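In rough terms, the loop looks something like the following sketch. Everything here is a simplified, hypothetical illustration (the simulated cell, the stress rule, the numbers), not our actual algorithms:

```python
# A hypothetical measure-assess-adjust charging loop against a crude
# simulated cell. Invented models and numbers, for illustration only.

class SimulatedCell:
    """Stand-in for a real cell; resistance varies unit to unit."""
    def __init__(self, capacity_mah=3000.0, resistance_ohm=0.08):
        self.capacity_mah = capacity_mah
        self.resistance_ohm = resistance_ohm
        self.charge_mah = 300.0                  # start nearly empty

    def apply_current(self, current_ma, minutes):
        self.charge_mah = min(self.capacity_mah,
                              self.charge_mah + current_ma * minutes / 60.0)

    def measured_resistance(self):
        # In practice this would be inferred from live telemetry; here
        # resistance simply rises as the cell fills.
        return self.resistance_ohm * (1 + self.charge_mah / self.capacity_mah)

def adaptive_charge(cell, max_current_ma=3000.0, step_minutes=5):
    steps = 0
    while cell.charge_mah < cell.capacity_mah:
        r = cell.measured_resistance()           # 1. measure
        derate = min(1.0, 0.12 / r)              # 2. assess (invented rule)
        cell.apply_current(max_current_ma * derate, step_minutes)  # 3. adjust
        steps += 1
    return steps

print(f"charged in {adaptive_charge(SimulatedCell()) * 5} minutes")
```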

It’s a marriage of chemistry, control systems and software that shifts the burden from expensive manufacturing to less expensive computation. But what is clear is that it does not make battery manufacturing any less important, and it does not replace battery manufacturing — it is complementary, no different than how adaptive algorithms in the camera complement the lens rather than replace it. This is cool innovation!

16Jan 2016

Our new website presents our suite of products called Adaptive Charging Software. It is fair to say that everyone understands and recognizes the meaning of “Charging” and “Software”… but “Adaptive”? What does it really mean? The purpose of this post is to give the reader an intuitive feeling for the meaning of (and consequently the need for) “adaptive” as it relates to technology.

Let’s first start with the classical definition of adaptive:

a•dapt•ive (ə-dăpˈtĭv)  adj. Relating to or exhibiting adaptation.

Ok, it relates to adaptation, but adapting to what? And why? For that, let’s illustrate with an example of how adaptive algorithms and software became instrumental to modern photography.

Let’s look at two photographs of Liberty Cap from a recent trip I took to Yosemite National Park. Can you tell by looking at the photographs what camera(s) were used to take the shots? I doubt it. They both offer plenty of resolution, richness of color and great image quality.

[Top photograph: Liberty Cap, Yosemite]

[Bottom photograph: Liberty Cap, Yosemite]

The top photograph was taken by a Nikon D7000 DSLR with a 24-mm f/2.8 prime lens. Total weight: 1,042 g (2.3 lb). Total cost when new: about $1,500.

The bottom photograph was taken by an iPhone 6 Plus. Total weight: 172 g (0.38 lb). Total cost when new: about $600 for the smartphone. The camera component costs less than $15 and weighs only a few grams.

So why are the two photographs so similar, and what is the purpose of using a DSLR over a smartphone if the differences are so minuscule, if not nonexistent?

From an optical standpoint, the camera optics of the iPhone 6 Plus are absolutely no match for the superb optics of the Nikon lens. The iPhone 6 Plus camera sensor is also no match for the one in the Nikon D7000 — both are manufactured by Sony, but to vastly different requirements.

Today’s cameras, DSLRs and smartphone cameras alike, incorporate very sophisticated computational electronics on board. The iPhone 6 Plus boasts a powerful Apple ARM processor, and the Nikon camera includes a sophisticated Expeed processor. Both of these processors perform corrections on the fly before, during and after the photograph is taken. For instance, they both incorporate algorithms that assess the nature of the scene (e.g., is it a landscape, or does it include faces?) to determine the exposure parameters, and the same goes for focusing. Additionally, they both correct on the fly for the optical errors coming from the lens… and this is just the beginning.

Now, the Nikon 24mm prime lens is a superb lens with excellent optics. In contrast, the lens used in the iPhone 6 Plus is no match. It suffers from significant optical errors called aberrations. For instance, one of these errors is distortion: the uncorrected photograph looks warped. Another is chromatic aberration: different colors have different focus points. Guess what? Both camera processors correct for all of these errors: this is what “adaptive” does. It adapts and corrects. In other words, there are algorithms (and intelligence) that measure and recognize errors in the system (here, the camera and the optics) that vary with the device and circumstances, then make the proper corrections in real time such that the end product is nearly free of problems. The smartphone industry cleverly shifted the burden of camera performance from expensive and sophisticated lens manufacturing (as it was in past decades) to inexpensive computation. Brilliant!
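To make “measure and correct” concrete, here is a minimal sketch of one such correction, the radial (Brown-Conrady) distortion model. The coefficients here are made up; a real pipeline calibrates them per lens design and applies the warp in dedicated hardware:

```python
# Radial distortion maps each image point outward or inward as a
# polynomial in its distance from the image center. Correcting a photo
# means resampling it through (the inverse of) this mapping.

def radial_model(x, y, k1=-0.15, k2=0.02):
    """Forward Brown-Conrady radial model on a normalized point.
    (A real undistortion step inverts this mapping numerically.)"""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the corner of the frame moves far more than one near
# the center, which is why barrel distortion bends straight edges.
print(radial_model(0.1, 0.1))   # near center: barely moves
print(radial_model(0.9, 0.9))   # near corner: pulled noticeably inward
```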

It becomes immediately obvious that the biggest beneficiary of this “adaptive” performance is the inexpensive plastic lens used in the iPhone 6 Plus. In other words, the payoff of shifting the burden to computation is the use of lower-cost components, in this case a lower-cost sensor and lens, albeit with worse optical specifications. And I mean much lower cost: in this example, about 100X less expensive.

Adaptive systems are not new by any stretch of the imagination. They were initially proposed and used in complex systems — for example, correcting the optical errors in large telescopes caused by variations in the upper atmosphere. However, the rapid decline in the cost of computing over the past decade has made “adaptivity” accessible across a broad range of applications.

So, now you can begin to imagine what adaptive solutions can do to improve the performance of batteries where materials and manufacturing can have significant variability and associated costs. This will be the topic of a future post.