The Basics

18 Oct 2016

State-of-the-art lithium-ion batteries, whether used in smartphones or electric vehicles, all rely on the same fundamental cell structure: two opposing electrodes separated by an insulating separator layer, with lithium ions shuttling between the two electrodes.

The positive electrode, conventionally called the cathode, consists of a lithium metal oxide material. Lithium cobalt oxide, or LCO, is by far the most common in consumer electronics applications. NCM, short for lithium nickel cobalt manganese oxide and also known as NMC, is gradually replacing other materials in energy storage and electric vehicle applications. LCO and NCM share a valuable property: they store lithium ions within their material matrix. Think of a porous Swiss cheese: the lithium ions insert themselves between the atomic layers.

In contrast, the anode, or negative electrode, is almost universally made of graphite, a crystalline form of carbon. Carbon historically was, and continues to be, the material of choice: it has a large capacity to store lithium ions within its crystalline matrix, much like the metal oxide cathode.

So how do manufacturers increase energy density? In some respects, the math is simple. In practice, it gets tricky.

Energy density equals total energy stored divided by volume. The total stored energy is dictated by the amount of active material, i.e., the available amount of metal oxide as well as graphite that can physically store the lithium ions (in other words, the electric charge). So battery manufacturers resort to all manner of design tricks to reduce the volume of inactive material, for example, reducing the thickness of the separator and the metal connectors. Of course, there are limits, with safety topping the list. To a large extent, this is what battery manufacturers have done for the past 20 years, amounting to roughly a 5% annual increase in energy density.
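For a feel of what that compounding means, here is a minimal Python sketch; the 5% rate is the figure quoted above, while the starting density is purely an illustrative assumption:

```python
# Compound effect of ~5% annual energy-density improvement over 20 years.
# The 5% rate comes from the text; the 250 Wh/l starting point is an
# illustrative assumption, not a historical datum.
start_wh_per_l = 250.0
annual_gain = 0.05
years = 20

final = start_wh_per_l * (1 + annual_gain) ** years
print(f"{start_wh_per_l:.0f} Wh/l -> {final:.0f} Wh/l "
      f"(about {final / start_wh_per_l:.1f}x over {years} years)")
```

Modest as 5% sounds, it compounds to roughly a 2.7x improvement over two decades.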

But once this extra volume of inactive material is reduced to its bare minimum, increasing energy density gets genuinely difficult. This is the wall that the battery industry is facing now. So what is next?

There are two potential paths forward:

1.  Find a way to pack more ions (i.e., more electric charge) within the electrodes. This is the topic of much research into new materials capable of such a feat, but any such breakthrough is still several years away from commercial deployment, leaving the second option…

2.  Increase the voltage. Since energy equals charge multiplied by voltage (E = Q × V), increasing the voltage also raises the amount of stored energy (remember that energy and charge are related but are not interchangeable). This is the subject of today’s post.

The battery industry raised the maximum voltage a few years back from 4.2 V to the present-day value of 4.35 V, which added approximately 4 to 5% to the energy density. A new crop of batteries is now beginning to operate at 4.4 V, adding another 4 to 5%.
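Since E = Q × V, the first-order effect of a higher charge voltage is easy to estimate. A minimal sketch follows; the cell capacity is an illustrative assumption, and real cells also yield a little extra charge at the higher voltage, which is why actual gains reach 4 to 5% rather than the bare voltage ratio:

```python
# First-order energy gain from raising the maximum charge voltage (E = Q x V).
# The 3.0 Ah capacity is an illustrative assumption. In practice the higher
# voltage also extracts slightly more charge from the cathode, so observed
# gains (4-5%) exceed the simple voltage ratio computed here.
capacity_ah = 3.0
baseline_v = 4.2

for v_max in (4.2, 4.35, 4.4):
    energy_wh = capacity_ah * v_max
    gain_pct = (v_max / baseline_v - 1) * 100
    print(f"{v_max:.2f} V -> {energy_wh:.2f} Wh  (+{gain_pct:.1f}% vs 4.2 V)")
```

But these gains do not come without some serious challenges. What are they?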

First, there is the electrolyte. It is a gel-like solvent that permeates the inside of the battery. For lack of a better analogy, if ions are like fish, then the electrolyte is like water: it is the medium through which the lithium ions travel between the two electrodes. As the voltage rises, the electrolyte is subjected to increasingly high electric fields, causing early degradation and breakdown. So we are now seeing a new generation of electrolytes that can in principle withstand the higher voltage, albeit our lab testing shows that some of these electrolyte formulations are responsible for worse cycle life. This is a first example of the compromises that battery designers are battling.

Second, there is the structural integrity of the cathode. Let’s take LCO as an example. If we peer a little closer into the cathode material (see the figure below), we find a crystal structure with layers made of cobalt and oxygen atoms. When the battery is fully discharged, the lithium ions occupy the vacant space between these ordered layers. In fact, the lithium ions sit in a fixed proportion to the cobalt and oxygen atoms: one lithium ion for every cobalt atom and two oxygen atoms, hence LiCoO2.

[Figure: the layered crystal structure of LCO, rendered with VESTA (Visualization for Electronic and Structural Analysis)]

As the battery is charged, the lithium ions leave the cathode for the anode, vacating some of the space between the ordered layers of the LCO cathode. But not all the lithium ions can leave: if too many of them do, the crystal structure of the cathode collapses and the material changes its properties. This is not good. So only about half of the lithium ions are “permitted” to leave during charging. This “permission” is determined by, you guessed it, the voltage. At roughly 4.5 V, the LCO crystal structure begins to deteriorate, so one can easily see that at 4.4 V, the battery is already getting close to the cliff.

Lastly, there is lithium plating. High energy-density cells push the limits of design and tolerances in order to reduce the amount of material that does not participate in storage. One of the unintended consequences is an “imbalance” between the amounts of cathode and anode material. This creates an “excess” of lithium ions that then deposit as lithium metal, hence the term plating.

These three challenges illustrate the increasing difficulties that battery manufacturers must overcome to continue pushing the limits of energy density. As they make progress, however, compromises become the norm. Cycle life is often shortened; long gone are the days of 1,000+ cycles without intelligent adaptive controls. Fast charging becomes questionable. In some cases, safety may be in doubt. And the underlying R&D effort costs a lot of money, stretching the financial limits of battery manufacturers without the promise of immediate financial returns in a market that demands performance at the lowest possible price.

It is great to be a battery scientist with plenty of great problems to work on…but then again, maybe not.

01 Jul 2016

Sleep is an essential function of life. Tissues in living creatures regenerate during deep sleep. We humans get very cranky with sleep deprivation. And cranky we get, too, when our battery is depleted because we did not give our mobile device sufficient “sleep time.”

I explained in a prior post the power needs of a smartphone, including the display, the radio functions, etc. If all these functions operated constantly, the battery in a smartphone would last at most a couple of hours. The key to having a smartphone battery last all day is down time. By now, you have hopefully noticed how the industry uses “sleep” terminology to describe these periods when the smartphone is nominally not active.

So what happens deep inside the mobile device during these periods of inactivity, often referred to as standby time? Sleep. That’s right. Not just sleep, but deep sleep. This is the state of the electronic components, such as the processor and graphics chips, when they reduce their power demand. If we are not watching a video, or the screen is turned off altogether, there is no need for the graphics processor to be running. So the chip’s major functions are turned off, and the chip is put in a low-power state during which it draws very little from the battery. Bingo: sleep equals more battery life available to you when you need it.

Two key questions come to mind: when and how does the device go to sleep? And when and how does it wake up?

One primary function of the operating system (OS) is to decide when to go to sleep; this is the job of iOS on Apple devices and Android on Android-based devices. The OS monitors the activity of the user, you, then makes decisions. For example, if the OS detects that the smartphone has been lying on your desk for a considerable time with the screen off, it will command the electronics to reduce their power demand and go to sleep.

This is similar to what happens in a car with a driver. You, the driver, get to decide all the time when to turn the engine off, or let it idle, or step on the gas pedal. Each of these conditions changes the amount of fuel you draw from the fuel tank. In a smartphone, the OS is akin to the driver; the electronics replace the engine; and the fuel tank is like the battery. You get the picture. While this is colloquially referred to as managing the battery, in reality you are managing the “engine” and the power it consumes. This is why some drivers get better mileage (mpg) than others. It is really about power management and has very little to do with true battery management. Battery management is when one addresses the battery itself, for example how to charge it and how to maintain its health.

The degree of sleep varies substantially and determines how much overall power is being used. Some electronic parts may be sleeping while others are fully awake and active. For example, say you are traveling with your device set to airplane mode, but you are playing your favorite game. The OS will make sure that the radio chips, that is, the LTE radio, the WiFi, the GPS chip, and every other chip with a wireless function, go to deep sleep. But your processor and graphics chips will be running. With the radios off, your battery will last the entire flight while you play Angry Birds.

The degree of sleep determines how much total power is being drawn from the battery, and hence whether your standby time is a few hours or a lot more. A smart OS needs to awaken just the right number of electronic components for just the right amount of time; anything more is a waste of battery and a loss of battery life. The battery is a precious resource and needs to be conserved when not needed.
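To see how strongly the degree of sleep drives standby time, here is a back-of-the-envelope sketch; every current figure in it is an illustrative assumption, not a measured value:

```python
# Rough standby-time estimate for a smartphone battery at different
# levels of sleep. All current draws below are illustrative assumptions.
BATTERY_MAH = 3000  # a typical modern smartphone battery

scenarios = {
    "deep sleep (radios and chips mostly off)": 5,    # mA, assumed
    "light sleep (background refresh active)": 30,    # mA, assumed
    "fully awake (screen on, radios on)": 600,        # mA, assumed
}

for name, draw_ma in scenarios.items():
    print(f"{name}: ~{BATTERY_MAH / draw_ma:.0f} hours")
```

The spread, from a few hours fully awake to weeks in deep sleep, is exactly why the OS’s sleep decisions dominate the battery experience.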

Both iOS and Android have gotten much smarter over the past few years in making these decisions. Earlier versions of Android lacked the intelligence to properly optimize battery usage. Android Marshmallow introduced a feature called Doze that adds more intelligence to this decision-making process. Nextbit recently announced yet more intelligence to be layered on top of Android. This intelligence revolves around understanding user behavior and accurately estimating which parts need to be sleeping, without impacting the overall responsiveness of the device.

The next question is: who gets to wake up the chips that are sleeping? This is where things get tricky. In a car, you, the driver, get to decide how to run the engine. But imagine for a moment that the front passenger also gets to press the gas pedal. You can immediately see how this is a recipe for chaos. In a smartphone, every app gets to access the electronics and arbitrarily wake up whatever was sleeping. An overzealous developer might have their app ping the GPS location chip constantly, guaranteeing that this chip never goes to sleep and draining the battery rapidly. Early versions of the Facebook and Twitter apps were guilty of constantly pinging the radio chips to refresh social data in the background, even when you put your device down and thought it was inactive. iOS and Android offer the user the ability to limit what these apps can do in the background; you can restrict their background refresh or limit their access to your GPS location. But many users do not take advantage of these power-saving measures. If you haven’t done so, do yourself a favor and restrict background refresh on your device, and you will gain a few extra hours of battery life. You can find a few additional tidbits in this earlier post.

App designers have gotten somewhat more disciplined about power usage, but not entirely. Too many apps are still poorly written, or intentionally ignore the limited power available. Just as in a camp where many people share water, it takes one inconsiderate individual to ruin the experience; it takes one rogue app to ruin the battery experience in a smartphone. And when that happens, the user often blames the battery, not the rogue app. It is like the campers blaming the water tank instead of the inconsiderate camper. Enforcement of power usage is improving with every iteration of the operating systems, but the reality is that enforcement is not an easy task. There is no escaping the fact that the user experience is best improved by increasing the battery capacity (i.e., a bigger battery) and using faster charging. Managing a limited resource is essential, but nothing makes the user happier than making that resource more abundant…and that, ladies and gentlemen, is what true battery management does. If power management is about making the engine more efficient, then battery management is about making the fuel tank bigger and better.

17 Jun 2016

I will jump ahead in this post to discuss the merits of different lithium-ion chemistries and their suitability for energy storage system (ESS) applications. Naturally, this assumes that lithium-ion batteries in general are among the technologies best suited for ESS. Some might take issue with this point, and there is some merit to such a discussion, which I shall leave to a future post.

A lithium-ion cell is made of two electrodes, the anode and the cathode, and it is the choice of cathode material that determines several key attributes of the battery, in particular energy density, safety, longevity (cycle life), and cost. The most commonly used cathode materials are lithium cobalt oxide (known as LCO), lithium nickel cobalt aluminum oxide (NCA), lithium nickel cobalt manganese oxide (NCM), lithium iron phosphate (LFP), and lithium manganese nickel oxide (LMNO).

[Figure: energy density comparison across the common cathode chemistries]

LCO is by far the most common, being the choice for consumer devices from smartphones to PCs. It is widely manufactured across Asian battery factories and its supply chain is very pervasive. As a result, and despite its use of cobalt (an expensive material), it bears the lowest cost per unit of energy, with consumer batteries priced near $0.50/Ah, or equivalently, about $130/kWh. LCO offers very good energy density and a cycle life often ranging between 500 and 1,500 cycles. From a materials standpoint, LCO can potentially catch fire or explode, especially if the battery is improperly designed or operated. That was the primary reason for the battery recalls that were frequent some 10 years ago. Proper battery design and safety electronics have greatly improved the situation and made LCO batteries far safer.
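The $/Ah and $/kWh figures are two views of the same price, related through the cell’s nominal voltage. A quick sketch, assuming a nominal voltage of about 3.8 V, typical of LCO cells:

```python
# Convert a consumer-cell price from $/Ah to $/kWh.
# The 3.8 V nominal voltage is an assumption typical of LCO cells.
price_per_ah = 0.50   # $/Ah, from the text
v_nominal = 3.8       # volts, assumed

price_per_kwh = price_per_ah / v_nominal * 1000
print(f"${price_per_ah:.2f}/Ah at {v_nominal} V nominal "
      f"-> ${price_per_kwh:.0f}/kWh")
```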

NCA came to prominence with Tesla’s use of Panasonic 18650 cells in the Model S (and the earlier Roadster). It has exceptional energy density, which translates directly to more miles of driving per charge. But NCA has a limited cycle life, often less than 500 cycles. Historically, NCA was expensive because of its use of cobalt and its limited manufacturing volume. This is rapidly changing with Tesla’s growing volume and the Gigafactory coming online in 2017. It is widely rumored that Tesla’s cost is at or near the figures for LCO, i.e., near $100/kWh at the cell level. It remains to be seen whether Panasonic will replicate these costs for the general market.

NCM sits between LCO and NCA. It has good energy density, better cycle life than NCA (in the range of 1,000 to 2,000 cycles), and is considered inherently less prone to safety hazards than LCO. Its historical usage was in power tools, but it has recently become a serious candidate material for automotive applications. In principle, NCM cathodes should be less expensive to manufacture owing to their use of manganese, quite an inexpensive material. The two Korean conglomerates, LG Chem and Samsung SDI, are major advocates and manufacturers of NCM-based batteries.

One of the oldest cathode materials in use is LMNO, sometimes referred to as LMO. The Nissan Leaf battery uses LMNO cathodes. It is safe, reliable with a long cycle life, and relatively inexpensive to manufacture. But it suffers from low energy density, especially relative to NCA. If you ever wondered why the Tesla has a far better driving range than the Leaf, the choice of cathode materials is an important part of the answer. LMNO is not widely used outside of Japan.

Finally, we come to lithium iron phosphate, or LFP. Invented in North America in the 1990s, it has developed a strong manufacturing base in China, with the Chinese government extending significant economic incentives to make China a manufacturing powerhouse for LFP batteries. LFP has exceptional cycle life, often exceeding 3,000 cycles, and is considered very safe. A major shortcoming of LFP is its reduced energy density: about one third that of LCO, NCA or NCM. In principle, it should be inexpensive to manufacture; after all, iron and phosphorus are two inexpensive materials. But reality suggests otherwise: the lower energy density requires two to three times as many cells to build a battery pack with the same capacity as LCO or NCA. As a result, LFP-based battery packs today cost two to three times as much as equivalent LCO-based packs.

By now, you are probably scratching your head and asking: so which one wins? And that is precisely the conundrum for energy storage and, to some extent, electric vehicles. Let’s drill deeper.

Energy storage applications pose a few key requirements on the battery: 1) it should last 10 years with daily charge and discharge, or in other words, meet a cycle life specification of 3,500 cycles or more; 2) it has to be immensely cost-effective, measured both in its upfront capital cost and in its cost of ownership, i.e., the total cost of owning and operating it over its 10-year life; and 3) it has to be safe.
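The cycle-count figure in the first requirement follows directly from the duty cycle:

```python
# Ten years of one full charge/discharge cycle per day.
cycles_needed = 10 * 365
print(cycles_needed)  # 3650, hence the 3,500+ cycle specification
```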

The first and third requirements are straightforward: they make LFP and NCM the favorites. LFP inherently has a long cycle life, and NCM, if charged only to about 80% of its maximum capacity, can also offer a very long cycle life. So if you wondered why Tesla quietly dropped its 10-kWh PowerWall product, it is because the product is made with NCA cathodes and cannot meet the very long cycle life that daily charging requires.

The second requirement is where it gets tricky. Right now, neither LFP nor NCM is sufficiently inexpensive to make a very compelling economic case to operators of energy storage systems, setting government incentives aside. So the question boils down to which one will have the steeper cost-reduction curve over time. Such a question naturally creates two camps of followers, each arguing their respective case.

Notice that high energy density does not factor into these requirements, at least not directly. Unlike consumer devices or electric vehicles, ESS seldom have a volume or weight restriction and thus, in principle, can accommodate batteries with lower energy density. The problem, however, is that lower energy density does not necessarily mean lower cost per unit of energy. It actually costs more to manufacture a 3 Ah battery using LFP than it does using NCA. This makes energy density a critical factor in the math: lower energy density means more batteries are needed to assemble a pack of a given size, and thus more cost. For now, in the battle between LFP and NCM, the jury is still out, though my personal opinion is that NCM, by virtue of its higher energy density, has the advantage. On the other hand, China’s uninhibited support for LFP could tip the scale. More later.
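Here is a sketch of why energy density sneaks back into the cost math even when volume is unconstrained; every number in it is an illustrative assumption, not vendor pricing:

```python
# Why lower energy density raises pack cost: more cells, more assembly.
# All cell figures below are illustrative assumptions, not vendor data.
PACK_KWH = 10.0  # target pack size

chemistries = {
    # name: (energy per cell in Wh, cost per cell in $) -- assumed values
    "NCM": (12.0, 2.40),
    "LFP": (5.0, 1.80),
}

for name, (cell_wh, cell_cost) in chemistries.items():
    n_cells = round(PACK_KWH * 1000 / cell_wh)
    print(f"{name}: {n_cells} cells, ~${n_cells * cell_cost:,.0f} in cells alone")
```

Even though the assumed LFP cell is cheaper per cell, the pack needs well over twice as many of them, and that is before counting the extra interconnects, housing, and assembly labor.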

Before I adjourn, I would like to rebuke an oft-made statement by some builders of ESS: that they are “battery agnostic.” To them, batteries are a commodity that can be easily interchanged among vendors and suppliers, much like commodity components in a consumer electronic product. I hope the reader gleans from this post the great number of subtleties and complexities involved in choosing the proper battery for an ESS. The notion of battery-agnostic in this space is utterly misplaced and only points to the illiteracy of the engineers building these systems. If the battery fires on the 787 Dreamliner can permanently remind us of one lesson, it should be to never underestimate the consequences of neglecting the complexities of the battery; they can be severe and immensely costly. Battery-agnostic is battery-illiterate.

13 Jun 2016

Since the installation of the first electrical power plants late in the 19th century, the business of supplying electricity to industry and residences has been “on demand.” When you flip a light switch in your home, you expect the light to go on. For this to happen, electricity has to flow to your light bulb through an elaborate transmission and distribution (T&D) network of copper wires. These wires ultimately connect your light bulb to a generator that sits dozens if not hundreds of miles away from you. When you turn on your light bulb, there is additional demand for electricity, and that generator “works a little harder” to supply it. This is what I mean by “on demand.” On a hot summer afternoon, the demand is large, and these generators are working near or at full capacity. At night, the demand is lower, and there is excess capacity available.

This system has worked exceptionally well for over one hundred years; electrical utilities planned and built a system with high reliability. Now fast forward to the 21st century. Like much else in modern society, the way we use electricity is changing, with new clean energy sources such as solar panels and wind farms in diverse geographical locations, plus a general push by the responsible regulatory bodies to modernize, if not liberalize, the way electricity is generated and distributed.

Enter the energy storage concept. Imagine an electrical system where the generation of electricity and its consumption are no longer simultaneous; in other words, imagine that the electricity flowing through your light bulb was actually generated at an earlier time, breaking the on-demand relationship. That’s precisely what happens in your smartphone: the time you use the electricity is very different from the time it was generated. One can easily see that the on-demand electricity model fails in a mobile society. But what benefit do we get from breaking this model at a larger scale, e.g., the utility scale?

Let’s imagine the following scenario. You live in a small town in a sunny geography. There is a small power plant outside your town that has historically supplied you with electricity, again on demand. You and many of the town’s residents decide to install solar panels on your rooftops. Your house now generates all of your electricity needs during the daytime, but at night you still rely on your local power plant. Let’s sketch what the load on the power plant might look like before and after the installation of the solar panels.

During the night, the load is modest. Most residential lights are off, but appliances such as refrigerators are still running. The load rises in the morning and peaks sometime around midday, depending on work hours and the need for air conditioning on hot days. The load peaks again in the evening when residents return home from their work day. This is the load that the power plant has to deliver. It is for the most part well characterized and predictable, with two modest peaks near the noon hour and the evening. So your local utility sizes the generating plant to match this load demand; any generating capacity wildly in excess would mean significant upfront capital costs that are not desirable to the rate payers, i.e., you.

[Figure: residential load curve before and after rooftop solar, the “duck curve”]

Now let’s see what happens to this load curve after many residents in your town install solar panels. As expected, the load during the daytime drops, and it drops drastically. This curve, as its shape suggests, is called the “duck curve.” It creates a serious headache for your local utility. The generators that historically supplied your home with electricity now run at much reduced capacity during the day; in other words, the utility has idle capacity, yet it bore the expense of building the generators. Worse yet, it still has to size the generating capacity of the plant to the maximum load, which now occurs in the evening hours when solar is not a factor.

So let’s take this scenario one step further. Imagine that there is now a big, I mean really big, battery that sits between your home and the power plant. During the peak solar hours of the day, the power plant continues to produce electricity at or near its maximum capacity, but that electricity now charges the battery. At night, the power plant continues to produce at the same rate it did during the day, but the extra demand from the residents is met by drawing electricity from the battery. This has the effect of flattening the load curve and thereby reducing the peak demand on the power plant, resulting in a significant reduction of capital costs. This is called “peak shifting” because, in effect, this big battery enables us to use the excess capacity we have during the day to cover the excess load we have at night. It is one of several key benefits of energy storage.
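A toy numerical sketch of peak shifting follows; the hourly load profile and the battery’s starting charge are invented for illustration:

```python
# Toy peak-shifting illustration: a battery lets the plant run flat at
# the average load. The hourly net loads (MW) and the battery's initial
# charge are invented for illustration only.
net_load = [40, 35, 33, 32, 35, 45, 60, 70, 55, 30, 15, 10,
            8, 10, 15, 25, 45, 70, 90, 95, 85, 70, 55, 45]

flat_output = sum(net_load) / len(net_load)  # plant runs at this level all day
stored = 20.0  # MWh, assumed initial charge so the battery never runs dry

for hour, load in enumerate(net_load):
    flow = flat_output - load  # positive: charging; negative: discharging
    stored += flow
    state = "charging" if flow >= 0 else "discharging"
    print(f"{hour:02d}:00  load {load:3.0f} MW  {state:11s} "
          f"{abs(flow):5.1f} MW  stored {stored:6.1f} MWh")

print(f"\nplant capacity needed: {flat_output:.0f} MW "
      f"instead of {max(net_load)} MW")
```

In this toy profile the plant needs roughly 45 MW of capacity instead of 95 MW; the battery absorbs the midday solar surplus and returns it during the evening peak.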

In California, the scale of the duck curve is simply overwhelming. California ISO, the organization responsible for the flow of electricity across the state’s long-distance power lines, estimates that the differential between the peak and trough of the load curve will exceed 14,000 MW in 2020. To put that in perspective, it is equivalent to seven Diablo Canyon nuclear power plants (near San Luis Obispo in central California). In essence, it also highlights the scale of the economic opportunity: build energy storage systems or build expensive power plants. In future posts, I will cover various topics that flow from this discussion, for example: where do we place this energy storage? What are the requirements on such a system? What technologies are most suitable? Etc.

[Figure: California ISO projected net load curve]

02 Apr 2016

For a seemingly simple device with only two electrical connections, the battery is widely misunderstood by the broader population, especially now that batteries are a common fixture in our technology-laden daily lives. I will highlight in this post five common misconceptions about the lithium-ion battery:

1. STAND ON ONE LEG, EXTEND YOUR ARM, THEN PLUG YOUR CHARGER INTO YOUR DEVICE:

Well, not literally, but the acrobatics capture the average consumer’s hypersensitivity to folk recipes that supposedly help the battery. One of the silliest I ever heard was to store the battery in the freezer to extend its life. PLEASE, DO NOT EVER DO THIS! Another silly one is to recharge the battery as soon as it drops below 50%, or 40%, or 30%…. Let me be clear: you can run your phone down to zero and recharge it, and it will be just fine.

It is also now common to find apps that promise to “optimize” your battery. The reality is they do nada! Don’t bother. Don’t bother with task managers either; no, they don’t extend your battery life. Both Android and iOS are fairly sophisticated about managing apps in the background.

Turning off WiFi, GPS and Bluetooth will not extend your battery life, at least not meaningfully. These radios use so little power that turning them off gives you no noticeable advantage. The fact is that your cellular radio (e.g., LTE) and your display (specifically when the screen is on) are the two primary consumers of battery, and turning those off renders your mobile device somewhat useless.

Lastly, there is the question of “should I charge the battery to 100%?” Well, yes! But you don’t have to if you don’t want to or can’t. In other words, stop thinking about it. The battery is fine whether you charge it to 100%, to 80%, or to anything else. Sure, for those of you who are battery geeks: yes, you will get more cycle life if the battery is not charged to 100%. But for the average user, you can do whatever you like; your usage is not wrong. These are design specifications that the device manufacturer is thinking about on your behalf.

2. LITHIUM BATTERIES HAVE LONG MEMORIES:

Yes, as long as the memory of a 95-year-old suffering from Alzheimer’s! Sarcasm aside, lithium-ion batteries have zero memory effect. Now, if you are a techie intent on confusing your smartphone or mobile device, here’s a little trick: keep your device’s battery between 30% and 70% at all times. This will confuse the “fuel gauge,” the little battery monitor that tells you how much juice you have left. The battery will be just fine, but the fuel gauge will not report accurately. Every so often, the fuel gauge needs to hit close to zero and close to 100% to learn what these levels truly are; otherwise it will not accurately report the battery percentage. This is like the gas gauge in your car going kaput: it does not mean the battery has memory or other deficiencies. Should you suspect that your fuel gauge is confused, charge your phone to 100% and discharge it down to 10% a few times. That is sufficient to recalibrate the gauge.
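To see why the gauge drifts in the first place, here is a minimal sketch of a coulomb-counting gauge with a small current-sensing bias; all the figures are illustrative assumptions:

```python
# Why a fuel gauge drifts: coulomb counting integrates a tiny sensing
# bias over time, and only a visit to a known reference point (full, or
# near empty) resets it. All figures are illustrative assumptions.
capacity_mah = 3000.0
bias_ma = 3.0          # constant current-sense error, assumed
hours_per_day = 24.0

gauge_error_pct = 0.0
for day in range(1, 8):
    gauge_error_pct += 100 * bias_ma * hours_per_day / capacity_mah
    print(f"day {day}: gauge off by ~{gauge_error_pct:.1f}%")

# Charging to a true 100% (or draining near 0%) hits a known reference,
# letting the gauge zero out the accumulated error:
gauge_error_pct = 0.0
```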

3. WE NEED NEW BATTERY CHEMISTRIES — THE PRESENT ONES ARE NO GOOD:

This one garners a lot of media interest. Every time a research lab makes a new discovery, it is headline news and makes prime-time TV. The reality is that the path from discovery in the lab to commercial deployment is extremely rocky. There have been dozens of such discoveries in the past 5 to 10 years, yet virtually none have made it into wide commercial deployment. History tells us it takes over $1 billion and about 10 years for a new material to begin its slow commercial adoption cycle, and for now, the pipeline is rather thin. Additionally, present lithium-ion batteries continue to improve. Granted, the progress is not very fast, but it is sufficient to make great products; just consider that current battery technology is powering some great electric vehicles.

Let me be more specific. Present-day lithium-ion batteries achieve over 600 Wh/l in energy density, nearly 10x what lead-acid batteries can deliver. This is enough to put 3,000 mAh in your smartphone (sufficient for a full day of use), and 60 kWh in your electric car (enough for 200 to 250 miles of driving range). With the proper control systems and intelligence, a mobile device battery can last 2 years or more, and an electric vehicle battery can last 10 years. Does this mean we stop here? Of course not, but the sense of urgency to develop new materials or chemistries is rather misplaced. Instead, we need to keep optimizing the present battery materials and chemistries. Just reflect on how silicon as a semiconductor material was challenged by other candidate materials in the 1980s and 1990s (remember gallium arsenide?), only to continue its steady progress and become an amazing material platform for modern computation and communication.
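The arithmetic behind those figures, as a quick sketch; the per-mile consumption and the lead-acid density are illustrative assumptions:

```python
# Back-of-the-envelope checks on the figures quoted above.
pack_kwh = 60.0
wh_per_mile = 280.0   # assumed EV consumption, illustrative
print(f"driving range: ~{pack_kwh * 1000 / wh_per_mile:.0f} miles")

li_ion_wh_per_l = 600.0    # from the text
lead_acid_wh_per_l = 70.0  # rough illustrative figure
print(f"vs lead-acid: ~{li_ion_wh_per_l / lead_acid_wh_per_l:.1f}x denser")
```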

4. LITHIUM-ION BATTERIES ARE EXPENSIVE:

What made silicon the king of semiconductor materials is its amazing cost curve, i.e., decreasing cost per unit of performance, aka Moore’s law. Lithium-ion batteries don’t have an equivalent of Moore’s law, but the cost of making them is dropping fast, to the point that they are rapidly becoming commoditized. A battery for a smartphone costs the device OEM somewhere between $1.50 and $3.00, hardly a limiting factor in making great mobile devices. GM and Tesla Motors have widely advertised that their battery manufacturing costs are approaching $100/kWh. In other words, a battery with sufficient capacity for 200 miles of driving (i.e., 50 to 60 kWh) has a manufacturing cost of $5,000 to $6,000 (excluding the electronics), with continued room for further cost reduction. That is not yet ready to compete with inexpensive gas-engine cars, but it sure is very competitive with mid-range luxury vehicles. If you are in the market for a BMW 3-series or equivalent, I bet you are keeping an eye on the new Tesla Model 3; Tesla pre-sold nearly 200,000 Model 3 electric vehicles in the 24 hours after its announcement. This performance at a competitive price is what makes present lithium-ion batteries (with their present materials) attractive and dominant, especially vis-à-vis potentially promising or threatening new chemistries and materials.
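The same pack-cost math, written out (the $/kWh figure and pack sizes are the ones quoted above):

```python
# EV pack cost at the quoted ~$100/kWh cell manufacturing cost.
cost_per_kwh = 100.0
for pack_kwh in (50, 60):
    print(f"{pack_kwh} kWh pack: ~${pack_kwh * cost_per_kwh:,.0f} in cells")
```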

5. LITHIUM-ION BATTERIES ARE UNSAFE:

Why do we not worry about the immense flammability of gasoline in our vehicles? Isn’t combustion the most essential mechanism of gas-driven cars? Yet we feel very safe in these cars, and car fires are seldom headline news. That’s because the safety of traditional combustion-engine cars has evolved immensely over the past decades. For example, gas tanks are insulated and protected in the event of a crash.

Yes, lithium is flammable under certain but well-known conditions. But the safety of lithium-ion batteries can be managed as religiously as car makers manage the safety of combustion engines. It is quite likely that isolated accidents or battery recalls will occur in the future as lithium-ion batteries are deployed even more widely than they are today. In mobile devices, the track record on safety has been very good, certainly since the battery industry had to manage the safety recalls of a decade ago. Is there room for progress, and can we achieve an exceptional safety record with lithium-ion batteries? Absolutely yes. There is no inherent reason why it cannot be achieved, albeit it will take time, just as the automotive and airline industries have continuously improved the safety of their products.