Electronics & systems

02Sep 2016

The images of melted Samsung Note 7 smartphones are all over the internet. News of Samsung’s massive recall is making headlines. It is embarrassing to Samsung Mobile, its marketing and engineering teams, and most certainly its executives. Consumers are wondering how Samsung could ship units with defective batteries that can catch fire.

It is easy, but not right, to pick on Samsung or be critical of the company at this moment. Why? Because this could happen to anyone…that’s right, anyone. If you are an OEM of smartphones or other consumer devices with lithium-ion batteries, this is the time to pay attention to your products, because you could be next.

While this sounds ominous, the intent here is to raise safety awareness across the entire ecosystem that depends on batteries. Samsung happened to be the first unlucky company to exhibit the strains that have been accumulating for several years. I have covered in several past posts how the battery industry has been hitting a wall: battery materials are reaching their limits, battery economics are not favorable, and yet the performance demands on batteries continue to rise. All of these factors have been, and continue to be, precursors to the situation Samsung now finds itself in.

As is often the case in life, we tend to remain complacent until a crisis hits. The crisis is here and now. Samsung is the first to feel the pain, but each and every company in this ecosystem, from consumer devices to energy storage and electric vehicles, should acknowledge the severity of the situation and participate in its solution. Again, why?

This perfect storm has been brewing for a while, driven in particular by the push to increase energy density and charging speed while making batteries less expensive. Increased energy density and faster charging operate the battery near its physical limits; in other words, the margins for error at these elevated performance levels are very thin. For example, the newest lithium-ion cells now operate at a terminal voltage of 4.4 Volts, up from 4.2 Volts a few years back. This increase in voltage is one of the underlying physical tenets of increased energy density, yet it moves the battery ever so close to the edge of the safety abyss. Another example relates to charging speed: it is widely accepted now that charge rates are approaching, if not exceeding, 1C. Electric vehicle makers are actively exploring very fast charging for EVs. Tesla is deploying its Superchargers at a fast pace; these can charge a Tesla model at up to 1.5C, i.e., put in half a tank in about 20 minutes. Fast charging wreaks havoc inside the cell if not properly managed.
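
To make the C-rate arithmetic concrete, here is a minimal sketch in Python (with illustrative numbers; a real charge cycle slows down near the top, so these times are optimistic) relating a charge rate in C to the charging current and the idealized time to reach a given state of charge.

```python
# Minimal sketch: relate a C-rate to charging current and idealized charge time.
# Illustrative only; real charging tapers near full, so actual times are longer.

def charge_current_amps(capacity_mah: float, c_rate: float) -> float:
    """Charging current in amperes for a cell of given capacity at a given C-rate."""
    return capacity_mah / 1000.0 * c_rate

def minutes_to_soc(c_rate: float, target_soc: float) -> float:
    """Idealized minutes to reach a target state of charge at a constant C-rate."""
    return 60.0 * target_soc / c_rate

print(charge_current_amps(capacity_mah=3000, c_rate=1.0))  # 3.0 A for a phone-sized cell
print(minutes_to_soc(c_rate=1.5, target_soc=0.5))          # 20.0 minutes for "half a tank"
```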

Now add the push to make batteries less expensive. Battery manufacturing, unlike semiconductors, does not scale; there is no equivalent of Moore’s law. In other words, as energy density increases, the cost per Wh (per unit of energy) does not decrease…au contraire, it tends to increase because manufacturing tolerances get tighter. As a result, capital expenditures go up. Combine that with low-cost, low-quality batteries coming out of China, and you can see how, at a fundamental level, the financials of battery companies do not look pretty. This invariably leads to changes in manufacturing processes as companies seek more efficient ways to manufacture. But when the design margin of error is so thin, it does not take much before small variations in manufacturing lead to disastrous consequences. Remember, all it took in the case of Samsung was 35 failing devices out of a total of 2,500,000 shipped to trigger a recall. That is a failure rate of 14 ppm (parts per million). It is a small number but, clearly, not small enough.
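
As a sanity check, the failure-rate figure follows from one line of arithmetic:

```python
# 35 failing devices out of 2,500,000 shipped, expressed in parts per million.
failures, shipped = 35, 2_500_000
print(failures / shipped * 1_000_000)  # 14.0 ppm
```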

This is not to say that battery manufacturing and battery technology are doomed. There are countless examples in history where engineers built far more complex systems and structures safely and economically…but these usually involve a change in paradigm. For example, pause for a second and compare the first commercial airplanes with the most recent jetliners. The newest Boeing and Airbus commercial airliners are marvels of computation and software. Fly-by-wire and automated systems with redundancy are the norm today, yet these new airplanes are scarcely faster than their predecessors. In other words, the industry added far more intelligence and shifted the burden to computation. The result is that modern planes are vastly safer than ever and far more economical to operate.

This is precisely the opportunity in front of the battery manufacturers and their customers, the OEMs: to think deep and hard about how they are going to implement a lot more intelligence to manage their batteries. Kudos to Sony for recognizing this…the batteries in their smartphones carry a great deal of intelligence, perform incredibly well, and are safe. I am biased here…a lot of that intelligence comes from Qnovo, but that should not diminish the importance of the point: intelligence is needed to manage the vanishing margins of error that battery designers have to cope with.

19Jul 2016

This post includes contributions from Robert Nalesnik. I discussed in the past how fast charging requires two components: i) power delivery, which means getting extra electrical power from the wall socket to the battery; and ii) battery management, which means making sure you don’t destroy the battery’s lifespan with all that extra power.

How much more power do you need? Quite a bit more if you want to charge considerably faster. It’s like your car engine: if you want to go faster, you will consume more gas. For a typical smartphone, power levels go up from the conventional 5 Watts to 15, or even close to 20 Watts in some cases.

Delivering higher levels of power is a very active area. Qualcomm has Quick Charge, MediaTek has Pump Express, the Chinese manufacturer Oppo has VOOC, and there is the USB Power Delivery standard backed by Intel. Not surprisingly, with so many parties trying to influence or even define the standards of power delivery, there is plenty of confusion to go around.

First, let’s refresh some basic high-school science: electrical power = current × voltage.

So if we want to deliver more power, we can increase the current, the voltage, or both. Increasing current is relatively easy, but more current means a lot more heat…that is, until something begins to melt. Not good! That usually puts an upper limit on the charging current somewhere between 3 and 5 A.

The other approach is to increase voltage, from the conventional 5 V up to 9 V, or even 12 V, and in some limited cases even more.

High current charging

High-current charging leverages the fact that modern single-cell lithium ion batteries can be charged using an inexpensive 5 V AC adapter that can be manufactured for about one dollar.

Increasing the charging current is limited by i) the maximum current rating of the USB cable between the AC adapter and the mobile device, as well as of the tiny connector in your device where the USB cable plugs in; ii) cost; and iii) heat and safety.

Let’s do some math. A typical USB cable assembly can support a maximum current of 1.8 A. So 5 x 1.8 = 9 Watts max. That’s fine for standard charging but not sufficient for fast charging a smartphone. The new USB Type-C cables (with symmetrical connectors that can be used in any orientation you like) can support up to 3 A, in other words, a maximum of 15 Watts. Much better! In some limited cases, and using special cables, one can push USB Type-C to 5 A, or 25 Watts. But at 5 A the cost begins to skyrocket, so instead we see designs gravitating toward 3 A, or equivalently 15 Watts.

To put this in perspective, 15 Watts can charge your typical 3,000 mAh battery at a rate of 1 C, meaning you will get 50% of your battery charge in 30 minutes, and a full charge in a little over an hour.
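
The cable math above is easy to capture in a few lines. The sketch below is illustrative only: the charging voltage and conversion efficiency are assumed round numbers, not measured values.

```python
# Sketch of the USB power math above. The charge voltage and efficiency are
# assumed, illustrative values; real designs vary.
BUS_VOLTAGE = 5.0        # standard USB bus voltage (volts)
CHARGE_VOLTAGE = 4.3     # assumed cell voltage during the bulk of fast charging
EFFICIENCY = 0.85        # assumed adapter-to-battery conversion efficiency

def adapter_power_w(cable_current_a: float) -> float:
    return BUS_VOLTAGE * cable_current_a

def approx_c_rate(power_w: float, capacity_mah: float = 3000.0) -> float:
    """Rough charge rate (in C) delivered to the cell for a given input power."""
    battery_current_a = power_w * EFFICIENCY / CHARGE_VOLTAGE
    return battery_current_a / (capacity_mah / 1000.0)

for cable_a in (1.8, 3.0, 5.0):
    p = adapter_power_w(cable_a)
    print(f"{cable_a:.1f} A cable -> {p:.0f} W, roughly {approx_c_rate(p):.1f}C into a 3,000 mAh cell")
```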

A quick word on heat: if you remember Ohm’s law from your high-school physics, the heat dissipated in a resistance increases as the square of the current. That means as the charging current increases from 1.8 A to 3 A, a factor of about 1.67X, the heat inside your device increases by 1.67 x 1.67 ≈ 2.8X. Ouch! That’s a lot of heat to remove from the device…and a great topic for a future post.
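
Since resistive heating scales as the square of the current (P = I²R), the 2.8X figure can be checked in one line, assuming the resistance itself stays fixed:

```python
# Resistive heat scales as I^2 * R; for a fixed resistance the ratio is (I2/I1)^2.
i_old, i_new = 1.8, 3.0
print(round((i_new / i_old) ** 2, 2))  # 2.78, i.e. roughly 2.8x more heat
```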


High voltage charging

Let’s pause for a moment and think about the high-voltage transmission lines we frequently see from highways outside urban areas. Electric utility companies transport electrical power from generating stations (e.g., dams) that can be hundreds of miles from a city. If they used the 120 V you get at your outlet, the overhead transmission lines would have to carry millions of amperes…not only physically impossible, but also economically prohibitive. So the transmission lines run at a much higher voltage, anywhere up to 800,000 volts. These transmission lines naturally don’t come straight to your house. Instead, they terminate into smaller substations (hidden off main roads near your neighborhood) where the voltage is gradually “stepped down.”
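
The same power = voltage × current arithmetic explains the choice. The sketch below uses an assumed, round-number plant output purely for illustration.

```python
# Illustrative only: current needed to carry a given power at two voltages.
plant_power_w = 1e9                      # assume a 1 GW generating station (round number)
for voltage_v in (120.0, 800_000.0):
    current_a = plant_power_w / voltage_v
    print(f"{voltage_v:>9,.0f} V -> {current_a:>12,.0f} A")
# 120 V would need over 8 million amperes; 800,000 V needs only about 1,250 A.
```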

That’s the same concept used in mobile devices. The voltage from the AC adapter is raised above 5 V. But what voltage should it be? 9 V? 12 V? More? This is decided by a “handshake” protocol between a specialized chip inside your smartphone (usually the power management IC, also known as the PMIC) and the AC adapter when the USB cable is plugged in. This is the approach taken with Qualcomm’s Quick Charge and the USB Power Delivery standard, each using a different signaling mechanism. There is a saying among power supply engineers that “voltage is cheaper than current,” and indeed lower-cost components and cables are a primary benefit of high-voltage charging.
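
Conceptually, the handshake looks something like the sketch below. This is not the actual Quick Charge or USB PD signaling; the profiles and the negotiation rule are invented just to illustrate the idea of the adapter advertising what it can supply and the phone’s PMIC picking a profile.

```python
# Hypothetical sketch of a charger/phone voltage negotiation (NOT the real
# Quick Charge or USB PD protocol). The adapter advertises (voltage, current)
# profiles; the phone picks the highest-power profile it is willing to accept.
ADAPTER_PROFILES = [(5.0, 2.0), (9.0, 1.67), (12.0, 1.25)]  # (volts, amps)

def negotiate(phone_max_power_w: float, profiles=ADAPTER_PROFILES):
    acceptable = [(v, a) for v, a in profiles if v * a <= phone_max_power_w]
    return max(acceptable, key=lambda p: p[0] * p[1]) if acceptable else profiles[0]

volts, amps = negotiate(phone_max_power_w=18.0)
print(f"Negotiated {volts} V at {amps} A = {volts * amps:.0f} W")  # 9.0 V at 1.67 A
```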

The USB Power Delivery (USB PD) standard allows voltages of 5, 9, 15 and 20 V at currents up to 3 A. This gives power levels of 15, 27, 45 and 60 W, respectively. Additionally, currents up to 5 A are allowed at 20 V, enabling up to 100 W. Qualcomm has similar predefined voltage levels at 5, 9, 12, and 20 V. High-voltage charging has a clear advantage in attaining power levels above 25 W, which makes it the preferred choice for laptops, ultrabooks and 2-in-1 tablets.
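
The USB PD power levels quoted above follow directly from the allowed voltage and current combinations; here is that arithmetic spelled out.

```python
# USB Power Delivery power levels from the voltage/current combinations above.
for volts in (5, 9, 15, 20):
    print(f"{volts} V x 3 A = {volts * 3} W")
print(f"20 V x 5 A = {20 * 5} W")  # the 100 W case, which requires a 5 A rated cable
```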


How will this abundance of approaches settle out in the market? From a historical perspective, Qualcomm was early to see an opportunity to define high-voltage charging in a simpler and cheaper way than the USB committee. It launched Quick Charge 2.0 in 2013 and followed up with the latest 3.0 version in 2015. Qualcomm has been quite successful in establishing Quick Charge as a de facto charging standard for smartphones. More recently, Intel and others have been successfully driving USB PD and the Type-C connector into PC markets, and Type-C is well on its way to becoming the standard connector across all classes of mobile devices.

In smartphones, the next few years will likely still see multiple power delivery approaches, with chipset and adapter vendors moving to multi-standard support to bridge compatibility gaps, meaning a smartphone can support multiple protocols such as Qualcomm Quick Charge, USB PD, Pump Express, etc. From a Qnovo perspective, we are agnostic about, and complementary to, whatever power delivery approach our customers choose. The higher the power, the greater the need for the second component of fast charging: battery management.

01Jul 2016

Sleep is an essential function of life. Tissues in living creatures regenerate during deep sleep. We humans get very cranky with sleep deprivation. And cranky we do get when our battery is depleted because we did not give our mobile device sufficient “sleep time.”

I explained in a prior post the power needs of a smartphone, including the display, the radio functions, etc. If all these functions were constantly operating, the battery in a smartphone would last at most a couple of hours. The key to having a smartphone battery last all day is down time. By now, you have hopefully noticed how the industry uses “sleep” terminology to describe these periods when the smartphone is nominally not active.

So what happens deep inside the mobile device during these periods of inactivity, often referred to as standby time? Sleep. That’s right. Not only sleep, but deep sleep. This is the state of the electronic components, such as the processor and graphics chips, when they reduce their power demand. If we are not watching a video or the screen is turned off, there is no need for the graphics processor to be running. So the chip’s major functions are turned off, and the chip is put into a low-power state during which it draws very little from the battery. Bingo: sleep equals more battery life available to you when you need it.

Two key questions come to mind: when and how does the device go to sleep, and when and how does it wake up?

One primary function of the operating system (OS) is to decide when to go to sleep; this is the job of iOS on Apple devices and of the Android OS on Android-based devices. The OS monitors the activity of the user (you), then makes some decisions. For example, if the OS detects that the smartphone has been lying on your desk for some considerable time and the screen has been off, it will command the electronics to reduce their power demand and go to sleep.
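
A very rough sketch of that kind of decision logic is shown below; the thresholds and component names are invented purely for illustration, and neither iOS nor Android works exactly this way.

```python
# Toy sketch of an OS deciding when to put subsystems to sleep.
# Thresholds and component names are invented for illustration only.
import time

SLEEP_AFTER_SECONDS = 60  # assumed inactivity threshold

class DeviceState:
    def __init__(self):
        self.last_user_activity = time.time()
        self.screen_on = True
        self.components_awake = {"cpu", "gpu", "lte", "wifi", "gps"}

    def maybe_sleep(self):
        idle_seconds = time.time() - self.last_user_activity
        if not self.screen_on and idle_seconds > SLEEP_AFTER_SECONDS:
            # Power down everything except a minimal set needed for calls and alarms.
            self.components_awake = {"lte"}
        return self.components_awake

state = DeviceState()
state.screen_on = False
state.last_user_activity -= 300          # pretend the phone has been idle for 5 minutes
print(state.maybe_sleep())               # -> {'lte'}
```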

This is similar to what happens in a car with a driver. You, the driver, get to decide all the time when to turn the engine off, put it in idle, or step on the gas pedal. Each of these conditions changes the amount of fuel you draw from the fuel tank. In a smartphone, the OS is akin to the driver; the electronics replace the engine; and the battery is like the fuel tank. You get the picture. While this is colloquially referred to as managing the battery, in reality you are managing the “engine” and the power it consumes. This is why some drivers get better mileage (mpg) than others. It is really about power management and has very little to do with true battery management. Battery management is when one addresses the battery itself, for example how to charge it and how to maintain its health.

The degree of sleep varies substantially and determines how much overall power is being used. Some electronic parts may be sleeping while others are fully awake and active. For example, let’s say you are traveling and your device is set to airplane mode, but you are playing your favorite game. The OS will make sure that the radio chips, that is, the LTE radio, the WiFi and GPS chips, and all chips that have a wireless signal associated with them, go to deep sleep. But your processor and graphics chips will be running. With the radios off, your battery will last the entire flight while you play Angry Birds.

The degree of sleep determines how much total power is being drawn from the battery, and hence whether your standby time is a few hours or much longer. A smart OS needs to awaken just the right electronic components for just the right amount of time. Anything more than that is a waste of battery and a loss of battery life. The battery is a precious resource and needs to be conserved when not needed.

Both iOS and Android have gotten much smarter over the past years in making these decisions. Earlier versions of Android lacked the intelligence to properly optimize battery usage. Android Marshmallow introduced a new feature called Doze that adds more intelligence to this decision-making process. Nextbit recently announced yet more intelligence to be layered on top of Android. This intelligence revolves around understanding user behavior and accurately estimating which parts need to be sleeping, without impacting the overall responsiveness of the device.

The next question is: who gets to wake up the chips that are sleeping? This is where things get tricky. In a car, you, the driver, get to decide how to run the engine. But imagine for a moment that the front passenger could also press the gas pedal. You can immediately see how this is a recipe for chaos. In a smartphone, every app gets to access the electronics and arbitrarily wake up whatever was sleeping. An overzealous app developer might have his app ping the GPS location chip constantly, guaranteeing that this chip never goes to sleep and causing rapid battery drain. Early versions of the Facebook and Twitter apps were guilty of constantly pinging the radio chips to refresh social data in the background, even when you put your device down and thought it was inactive. iOS and Android offer the user the ability to limit what these apps can do in the background; you can restrict their background refresh or limit their access to your GPS location. But many users do not take advantage of these power-saving measures. If you haven’t done so, do yourself a favor and restrict background refresh on your device, and you will gain a few extra hours of battery life. You can find a few additional tidbits in this earlier post.

App designers have gotten somewhat more disciplined about power usage, but not entirely. Too many apps are still poorly written, or intentionally ignore the limited power available. Just as in a camp where many people share water, it takes one inconsiderate individual to ruin the experience; it takes one rogue app to ruin the battery experience in a smartphone. And when that happens, the user often blames the battery, not the rogue app. It’s like the campers blaming the water tank instead of the inconsiderate camper. Enforcement of power usage is improving with every iteration of the operating systems, but the reality is that enforcement is not an easy task. There is no escaping the fact that the user experience is best improved by increasing the battery capacity (i.e., a bigger battery) and using faster charging. Managing a limited resource is essential, but nothing makes the user happier than making that resource more abundant…and that, ladies and gentlemen, is what true battery management does. If power management is about making the engine more efficient, then battery management is about making the fuel tank bigger and better.

15Apr 2016

I discussed in a prior post the charging of the 5.5-in Samsung Galaxy S7 Edge. In this post, we will look at its sister device, the 5.1-in Samsung Galaxy S7, specifically the US version (model G930) using the Qualcomm Snapdragon 820 chipset, also known as the 8996. The battery specifications on the S7 include a polymer cell rated at 3,000 mAh, equivalent to 11.55 Wh. The teardown on iFixit shows a cell manufactured by ATL and rated to 4.4 V. Once again, the choice of battery manufacturer is surprising given that Samsung Electronics for years sourced the vast majority of its batteries from its sister company Samsung SDI.

[Figure: Samsung Galaxy S7 battery]

I charged the Galaxy S7 using the Samsung-supplied AC adapter and USB cable, with the device in airplane mode and the screen turned off. The charging data is shown below.

[Figure: Galaxy S7 charge curve]

Let’s make a few observations. The measured battery capacity is 2,940 mAh at a termination current of 300 mA (C/10). This is consistent with Samsung’s claim of 3,000 mAh, usually measured in the laboratory at a termination current of C/20 or 150 mA.

The device reaches 50% after 31 minutes of charging, corresponding to a charge rate of 1C, i.e., a charging current into the battery of 3 A. The supplied AC adapter is rated at 5 V/2 A and 9 V/1.67 A and uses Samsung’s own version of Qualcomm’s Quick Charge technology for handshaking between the AC adapter and the smartphone. The device indicates that charging is complete (the fuel gauge reads 100%) after 82 minutes; however, it continues to draw a charging current for an additional 20 minutes, terminating the charge after 102 minutes.

Just like the S7 Edge, the battery’s maximum charging voltage is only 4.35 V, not the rated 4.4 V. This means the battery’s actual maximum capacity is nearly 3,180 mAh, but Samsung is making only 3,000 mAh available to the user. This further raises the likelihood that Samsung opted to lower the voltage (and sacrifice available charge capacity) in order to increase the battery’s longevity (cycle life), or to decrease battery swelling at the high charge rate of 1C, or perhaps both.
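
For the curious, the small calculations behind these observations are below, using the numbers quoted above; the 3,180 mAh figure is the estimate from the text, not something derived here.

```python
# Back-of-the-envelope checks on the Galaxy S7 numbers quoted above.
rated_capacity_mah = 3000
termination_current_ma = 300
print(termination_current_ma / rated_capacity_mah)           # 0.1 -> a C/10 termination

charge_current_a = 3.0                                        # current at the 50% point
print(charge_current_a / (rated_capacity_mah / 1000))         # 1.0 -> the 1C charge rate

estimated_full_capacity_mah = 3180                            # estimate at 4.40 V (from text)
unused = 1 - rated_capacity_mah / estimated_full_capacity_mah
print(f"{unused:.1%} of capacity held in reserve")            # ~5.7%
```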

All in all, this appears to be a well-designed battery, providing ample capacity for the user to last a full day with sufficiently fast charging. What is unknown is the battery’s longevity (i.e., how many days and cycles of use) and whether it was compromised in the process. Given that Samsung’s track record in battery longevity is not exemplary, that remains a very important question, left to be answered in a future post.

22Jan 2016

I described in the earlier post how adaptive systems turned smartphones into great cameras. Let’s now talk about how adaptivity and adaptive charging can make a battery perform far better.

Let’s start with the basic operation of a lithium-ion battery. The early posts of this blog describe it in more detail; I will briefly recap it here and explain where its performance is limited. For the reader who wants to learn more, select “The Basics” category tag and review these earlier posts.

The figure below illustrates the basic structure of a lithium-ion battery. On the left-hand side is an electron microscope image of a battery showing the anode, the cathode and the separator, essentially the three basic materials that constitute the battery. On the right-hand side is a sketch illustrating the function of these materials during the charging process: the lithium ions, “stored” inside the individual grains of the cathode, move through the separator and insert themselves inside the grains of the graphite anode. If you are an engineer or physicist, you are asking, “where are the electrons?” A neutral lithium atom becomes an ion in the electrolyte solution and travels through the separator to the anode. The electron travels in the opposite direction through the external circuitry, from the aluminum current collector to the copper current collector, where it is then captured by a lithium ion to form a lithium-carbon bond.

[Figure: Structure of the lithium-ion battery]

This seems simple enough, so what can go wrong? Lots! I will focus here on a handful of mechanisms that become critical as the battery’s storage capacity and energy density increase. Looking at the diagram above, it is hopefully obvious that increasing energy density means packing more and more ions into this little sketched volume. It means reducing the dimensions of the anode, the cathode and the separator, and trying to saturate the ability of the anode grains to absorb ions. It’s like trying to pack as much water as possible into a sponge. In this process, small variations in manufacturing become really detrimental to performance. Look at the left photograph and observe the coarseness of the grain size of both electrodes. That means the uniformity of the ionic current is poor. As the energy density rises, a large number of ions all rush from the cathode to the anode. But this lack of uniformity creates stress points, both electrical and mechanical, that ultimately lead to failure: gradual loss of material, gradual loss of lithium ions, and gradual mechanical cracking, all leading in time to a gradual loss of capacity and ultimately failure.

I will jump to two key observations. First, when energy density is low, these effects are benign; when energy density is high, there are so many ions involved in the process that small manufacturing variations become detrimental. Second, faster charging has the same effect, i.e., more ions are trying to participate in the process.

Clearly, battery manufacturers are trying to improve their manufacturing processes and their materials, but let’s face it, this is becoming an incredibly expensive process. Smartphone and PC manufacturers are not willing to pay for more expensive batteries. This is very similar to the earlier post about camera lenses: make great lenses and they become very expensive, or shift the burden to computation and correct the errors dynamically and adaptively.

That’s precisely what adaptive charging does: measure, in real time, the impact of manufacturing variations, embedded defects, non-uniformity of material properties and the like; assess what these “errors” are and how they may be progressing in time; then adjust the charging voltage and current in a way that mitigates them…and keep doing so for as long as the battery is in operation. Each battery is unique in its manufacturing history, material properties, and performance, and the charging process gets tailored in an intelligent but automated fashion to that uniqueness.
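
To illustrate the closed-loop idea (and only that; this is not Qnovo’s algorithm, and the stress indicators and thresholds below are invented), a conceptual sketch might look like this:

```python
# Conceptual sketch of an adaptive charging step: measure, assess, adjust.
# NOT Qnovo's algorithm; the indicators and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class CellReading:
    voltage_v: float          # terminal voltage
    temperature_c: float      # cell temperature
    impedance_mohm: float     # hypothetical stress/health indicator

def choose_charge_current(reading: CellReading, max_current_a: float) -> float:
    """Reduce the charging current as the cell shows signs of stress."""
    if reading.temperature_c > 40 or reading.impedance_mohm > 120:
        return max_current_a * 0.5      # back off hard when clearly stressed
    if reading.voltage_v > 4.25:
        return max_current_a * 0.7      # taper near the top of charge
    return max_current_a                # otherwise allow the full fast-charge rate

current = choose_charge_current(CellReading(4.30, 35.0, 90.0), max_current_a=3.0)
print(f"{current:.1f} A")               # 2.1 A in this example
```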

It’s a marriage of chemistry, control systems and software that shifts the burden from expensive manufacturing to less expensive computation. But what is clear is that it does not make battery manufacturing any less important, and it does not replace battery manufacturing; it is complementary. It is no different than how the adaptive algorithms in the camera are complementary to the lens, not a replacement for it. This is cool innovation!