Electronics & systems

01 Jul 2016

Sleep is an essential function of life. Tissues in living creatures regenerate during deep sleep. We humans get very cranky when we are sleep deprived. And cranky we get when our battery is depleted because we did not give our mobile device sufficient “sleep time.”

I explained in a prior post the power needs of a smartphone, including the display, the radio functions, and so on. If all of these functions were constantly operating, the battery in a smartphone would last a couple of hours at most. The key to having a smartphone battery last all day is down time. By now, you have hopefully noticed how the industry uses “sleep” terminology to describe these periods when the smartphone is nominally not active.

So what happens deep inside the mobile device during these periods of inactivity, often referred to as standby time? Sleep. That’s right. Not only sleep, but also deep sleep. This is the state of the electronic components, such as the processor and graphics chips, when they reduce their power demand. If we are not watching a video, or the screen is turned off, there is no need for the graphics processor to be running. So the chip’s major functions are turned off, and the chip is put into a low-power state during which it draws very little from the battery. Bingo: sleep equals more battery life available to you when you need it.

Two key questions come to mind: When and how does the device go to sleep? And when and how does it wake up?

One primary function of the operating system (OS) is to decide when to go to sleep; this is the job of iOS on Apple devices and of Android on Android-based devices. The OS monitors the activity of the user (you), then makes some decisions. For example, if the OS detects that the smartphone has been lying on your desk for a considerable time and the screen has been off, it will command the electronics to reduce their power demand and go to sleep.
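As a rough illustration, here is a minimal sketch of what such an idle-timeout policy might look like. This is a toy model, not the actual iOS or Android logic; the threshold value and the component names are assumptions chosen purely for illustration.

```python
import time

# Toy idle-timeout policy: if the screen has been off and there has been no
# user input for long enough, tell the major components to enter deep sleep.
IDLE_THRESHOLD_S = 60  # assumed threshold, purely illustrative

class PowerManager:
    def __init__(self, components):
        self.components = components          # e.g., ["cpu", "gpu", "lte", "wifi", "gps"]
        self.last_user_input = time.monotonic()
        self.screen_on = True

    def on_user_input(self):
        self.last_user_input = time.monotonic()
        self.wake("cpu", "gpu")               # the user is back: wake the compute chips

    def tick(self):
        idle_for = time.monotonic() - self.last_user_input
        if not self.screen_on and idle_for > IDLE_THRESHOLD_S:
            self.sleep(*self.components)      # command the electronics to deep sleep

    def sleep(self, *names):
        for n in names:
            print(f"{n}: entering deep sleep")

    def wake(self, *names):
        for n in names:
            print(f"{n}: waking up")

# Example: the phone has been face-down on the desk, screen off, for two minutes.
pm = PowerManager(["cpu", "gpu", "lte", "wifi", "gps"])
pm.screen_on = False
pm.last_user_input -= 120
pm.tick()   # -> every component is told to enter deep sleep
```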

This is similar to what happens in a car with a driver. You, the driver, get to decide all the time when to turn the engine off, put it in idle, or press the gas pedal. Each of these choices changes the amount of fuel you draw from the fuel tank. In a smartphone, the OS is akin to the driver; the electronics replace the engine; and the fuel tank is like the battery. You get the picture. While this is colloquially referred to as managing the battery, in reality you are managing the “engine” and the power it consumes. This is why some drivers get better mileage (mpg) than others. It is really about power management and has very little to do with true battery management. Battery management is when one addresses the battery itself, for example how to charge it and how to maintain its health.

The degree of sleep varies substantially and determines how much overall power is being used. Some electronic parts may be sleeping while others are fully awake and active. For example, let’s say you are traveling and your device is set to airplane mode, but you are playing your favorite game. The OS will make sure that the radio chips (the LTE radio, the WiFi and GPS chips, and every other chip that has a wireless signal associated with it) go into deep sleep. But your processor and graphics chips will keep running. With the radios off, your battery will last you the entire flight while playing Angry Birds.

The degree of sleep determines how much total power is being drawn from the battery, and hence whether your standby time is a few hours or a lot more. A smart OS needs to awaken just the right number of electronic components for just the right amount of time. Anything more than that is a waste of battery and a loss of battery life. The battery is a precious resource and needs to be conserved when not needed.
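To make this concrete, here is a back-of-the-envelope sketch of how the degree of sleep translates into standby time. The battery size and the per-component power numbers are illustrative assumptions, not measured values; the point is only how dramatically the total changes when components are allowed to sleep.

```python
# Toy estimate of standby time as a function of which components are asleep.
# All numbers below are illustrative assumptions, not measurements.
BATTERY_WH = 11.0  # a typical smartphone battery, roughly 3,000 mAh at 3.7 V

# Assumed average power draw in watts: (active, deep sleep)
COMPONENTS = {
    "cpu":  (1.0, 0.01),
    "gpu":  (1.5, 0.01),
    "lte":  (0.8, 0.02),
    "wifi": (0.4, 0.01),
    "gps":  (0.5, 0.00),
}

def standby_hours(asleep):
    """Estimate runtime in hours given the set of components that are asleep."""
    total_w = sum(sleep_w if name in asleep else active_w
                  for name, (active_w, sleep_w) in COMPONENTS.items())
    return BATTERY_WH / total_w

print(f"Everything active: {standby_hours(set()):6.1f} h")
print(f"Everything asleep: {standby_hours(set(COMPONENTS)):6.1f} h")
print(f"GPS left awake:    {standby_hours(set(COMPONENTS) - {'gps'}):6.1f} h")
```

With these made-up numbers, a fully awake phone lasts a couple of hours, a fully asleep one lasts for days, and a single chip left awake (the GPS here) cuts standby time by an order of magnitude.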

Both iOS and Android have gotten much smarter over the past few years in making these decisions. Earlier versions of Android lacked the intelligence needed to optimize battery usage. Android Marshmallow introduced a new feature called Doze that adds more intelligence to this decision-making process. Nextbit recently announced yet more intelligence to be layered on top of Android. This intelligence revolves around understanding the user’s behavior and accurately estimating what parts need to be sleeping, without impacting the overall responsiveness of the device.

The next question is who gets to wake up the chips that are sleeping? This is where things get tricky. In a car, you, the driver, get to decide how to run the engine. But imagine for a moment that the front passenger also gets to press the gas pedal. You can immediately see how this is a recipe for chaos. In a smartphone, every app gets to access the electronics and arbitrarily wake up whatever was sleeping. An overzealous app developer might have his app ping the GPS location chip constantly, which guarantees that this chip never goes to sleep, causing rapid loss of battery life. Early versions of the Facebook and Twitter apps were guilty of constantly pinging the radio chips to refresh social data in the background — even when you had put your device down and thought it was inactive. iOS and Android offer the user the ability to limit what these apps can do in the background; you can restrict their background refresh or limit their access to your GPS location. But many users do not take advantage of these power saving measures. If you haven’t done so, do yourself a favor and restrict background refresh on your device, and you will gain a few extra hours of battery life. You can find a few additional tidbits in this earlier post.
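A toy calculation shows why a constantly polling app is so costly. The timing and power numbers below are assumptions for illustration; the point is that frequent polling keeps the chip’s duty cycle, and hence its average power, high.

```python
# Toy duty-cycle model: an app polls the GPS chip every `poll_interval_s` seconds,
# and each fix keeps the chip fully awake for AWAKE_PER_FIX_S seconds.
# All numbers are illustrative assumptions.
ACTIVE_W = 0.5        # assumed GPS power while awake
SLEEP_W = 0.001       # assumed GPS power while in deep sleep
AWAKE_PER_FIX_S = 10.0

def average_power_w(poll_interval_s):
    duty = min(1.0, AWAKE_PER_FIX_S / poll_interval_s)  # fraction of time awake
    return duty * ACTIVE_W + (1 - duty) * SLEEP_W

for interval in (10, 60, 600, 3600):
    print(f"poll every {interval:5d} s -> average GPS power ~ "
          f"{average_power_w(interval) * 1000:6.1f} mW")
```

Polling every ten seconds keeps the chip awake all the time; polling once an hour lets it spend almost all of its life in deep sleep.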

App designers have gotten somewhat more disciplined about power usage, but not entirely. Too many apps are still poorly written, or intentionally ignore the limited power available. Just like in camp when many are sharing water, it takes one inconsiderate individual to ruin the experience. It takes one rogue app to ruin the battery experience in a smartphone. And when that happens, the user often blames the battery, not the rogue app. It’s like the campers blaming the water tank instead of the inconsiderate camper. Enforcement of power usage is improving with every iteration of the operating systems, but the reality is that enforcement is not an easy task. There is no escaping the fact that the user experience is best improved by increasing the battery capacity (i.e., a bigger battery) and using faster charging. Managing a limited resource is essential, but nothing makes the user happier than making that resource more abundant….and that, ladies and gentlemen, is what true battery management does. If power management is about making the engine more efficient, then battery management is about making the fuel tank bigger and better.

15 Apr 2016

I discussed in a prior post the charging of the 5.5-in Samsung Galaxy S7 Edge. In this post, we will look at its sister device, the 5.1-in Samsung Galaxy S7, specifically the US version (model G930) using the Qualcomm Snapdragon 820 chipset, also known as the 8996. The battery specifications on the S7 include a polymer cell rated at 3,000 mAh, equivalent to 11.55 Wh. The teardown on iFixit shows a cell that is manufactured by ATL and rated at 4.4 V. Once again, the choice of battery manufacturer is surprising given that Samsung Electronics for years sourced the vast majority of its batteries from its sister company Samsung SDI.

Samsung S7 battery

I charged the Galaxy S7 using the Samsung-supplied AC adapter and USB cable, with the device in airplane mode and the screen turned off. The charging data are shown below.

Charge curve of the Samsung Galaxy S7

Let’s make a few observations. The measured battery capacity is 2,940 mAh at a termination current of 300 mA (C/10). This is consistent with Samsung’s claim of 3,000 mAh, usually measured in the laboratory at a termination current of C/20 or 150 mA.

The device reaches 50% after 31 minutes of charging, corresponding to a charge rate of 1C, i.e., a charging current into the battery of 3 A. The supplied AC adapter is rated at 5 V/2 A and 9 V/1.67 A and uses Samsung’s own version of Qualcomm’s Quick Charge technology for handshaking between the AC adapter and the smartphone. The device displays that charging is complete (the fuel gauge reads 100%) after 82 minutes; however, it continues to draw a charging current for an additional 20 minutes, at which point the device terminates the charging, 102 minutes in total.
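As a sanity check, these numbers hang together with a little arithmetic. The sketch below simply reproduces the calculations implied above (1C from the rated capacity, the average current needed to reach 50% in 31 minutes, and the termination currents); it is not additional data from the device.

```python
# Reproduce the back-of-the-envelope arithmetic from the observations above.
rated_mah = 3000          # Samsung's rated capacity
measured_mah = 2940       # measured at a 300 mA termination current

# 1C means delivering the rated capacity in one hour, i.e., 3 A for a 3,000 mAh cell.
one_c_ma = rated_mah
print(f"1C charging current: {one_c_ma / 1000:.1f} A")

# Reaching 50% in 31 minutes implies an average current of roughly:
avg_ma = 0.5 * rated_mah / (31 / 60)
print(f"Average current to 50% in 31 min: {avg_ma / 1000:.2f} A (~1C)")

# Termination currents quoted in the post:
print(f"C/10 = {rated_mah / 10:.0f} mA, C/20 = {rated_mah / 20:.0f} mA")
```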

Just like the S7 Edge, the battery maximum charging voltage is only 4.35 V, not the rated 4.4 V. This means that the actual battery maximum capacity is nearly 3,180 mAh but Samsung is making only 3,000 mAh available to the user. This further raises the likelihood that Samsung opted to lower the voltage (and sacrifice available charge capacity) in order to increase the battery’s longevity (cycle life) or decrease the battery swelling at the high charge rate of 1C, or perhaps both.
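Using the figures quoted above, the trade-off is easy to quantify; the snippet below is just the arithmetic implied by those numbers, not an additional measurement.

```python
# Capacity given up by charging to 4.35 V instead of the rated 4.4 V,
# using the figures quoted above.
capacity_at_4v40 = 3180   # mAh, estimated maximum at the rated 4.4 V
capacity_at_4v35 = 3000   # mAh, what Samsung makes available at 4.35 V

sacrificed = capacity_at_4v40 - capacity_at_4v35
print(f"Capacity sacrificed: {sacrificed} mAh "
      f"({100 * sacrificed / capacity_at_4v40:.0f}% of the maximum)")
```

In other words, Samsung is leaving roughly 6% of the cell’s capacity on the table, presumably in exchange for longevity.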

All in all, this appears to be a well-designed battery providing ample capacity for the user to last a full day, with sufficiently fast charging. What is unknown is the battery’s longevity (i.e., how many days and cycles of use) and whether it was compromised in the process. Given that Samsung’s track record in battery longevity is not exemplary, that remains a very important question, one to be answered in a future post.

22 Jan 2016

I described in the earlier post how adaptive systems turned smartphones into great cameras. Let’s now talk about how adaptivity and adaptive charging can make a battery perform far better.

Let’s start briefly with the basic operation of a lithium-ion battery. The early posts of this blog describe the operation of the lithium-ion battery in more detail; I will briefly recap the basics here and explain where its performance is limited. For the reader who wants to learn more, select “The Basics” category tag and feel free to review these earlier posts.

The figure below illustrates the basic structure of a lithium-ion battery. On the left-hand side, one sees an electron microscope image of a battery showing the anode, the cathode and the separator, essentially the three basic materials that constitute the battery. On the right-hand side, one sees a sketch illustrating the function of these materials during the charging process: the lithium ions, “stored” inside the individual grains of the cathode, move through the separator and insert themselves inside the grains of the graphite anode. If you are an engineer or physicist, you are asking, “where are the electrons?” A neutral lithium atom becomes an ion in the electrolyte solution and travels through the separator to the anode. The electron travels in the opposite direction through the external circuitry, from the aluminum current collector to the copper current collector, where it is then captured by a lithium ion to form a lithium-carbon bond.

Structure of the lithium ion battery
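For readers who like to see the reactions written out, the charging process can be summarized by the two half-reactions below. This assumes a graphite anode and a lithium-cobalt-oxide cathode, which is typical of consumer cells; other cathode chemistries follow the same pattern.

```latex
% Charging half-reactions (assuming a LiCoO2 cathode and a graphite anode)
\begin{align*}
\text{Cathode:}\quad & \mathrm{LiCoO_2} \;\longrightarrow\; \mathrm{Li_{1-x}CoO_2} + x\,\mathrm{Li^+} + x\,e^- \\
\text{Anode:}\quad   & \mathrm{C_6} + x\,\mathrm{Li^+} + x\,e^- \;\longrightarrow\; \mathrm{Li_xC_6}
\end{align*}
```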

This seems simple enough, so what can go wrong? Lots! I will focus here on a handful of mechanisms that become critical as the battery’s storage capacity and energy density increase. Looking at the diagram above, it is hopefully obvious that increasing energy density means packing more and more ions into this little sketched volume. It means reducing the dimensions of the anode, the cathode and the separator, and trying to saturate the ability of the anode grains to absorb ions. It’s like trying to put as much water as possible inside a sponge. Now, in this process, small variations in manufacturing become really detrimental to performance. Look at the left photograph and observe the coarseness of the grain size in both electrodes. That coarseness means the uniformity of the ionic current is poor. As the energy density rises, a large number of ions all rush from the cathode to the anode. This lack of uniformity creates stress points, both electrical and mechanical, that ultimately lead to failure: gradual loss of material, gradual loss of lithium ions, and gradual mechanical cracking, all leading in time to a gradual loss of capacity and ultimate failure.

I will jump to two key observations. First, it should be apparent that when energy density is low, these effects are benign, but when energy density is high, there are so many ions involved in the process that small manufacturing variations become detrimental. Second, it should be apparent too that faster charging results in the same effect, i.e., more ions are trying to participate in the process.

Clearly, battery manufacturers are trying to improve their manufacturing processes and their materials — but let’s face it, this is becoming an incredibly expensive proposition. Smartphone and PC manufacturers are not willing to pay for more expensive batteries. This is very similar to the earlier post about camera lenses: you can make great lenses, but they become very expensive; or you can shift the burden to computation and correct the errors dynamically and adaptively.

That’s precisely what adaptive charging does: measure, in real time, the impact of manufacturing variations, embedded defects, non-uniformity of material properties and what have you; assess what these errors are and how they may be progressing in time; then adjust the charging voltage and current in such a way as to mitigate these “errors”….then keep doing it as long as the battery is in operation. This makes each battery unique in its manufacturing history, material properties, and performance, and lets the charging process be tailored in an intelligent but automated fashion to the uniqueness of that battery.
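A highly simplified sketch of such a feedback loop is shown below. This is a conceptual toy, not the actual algorithm: the health metric, the thresholds and the adjustment steps are all assumptions invented for illustration.

```python
# Conceptual sketch of an adaptive charging loop (illustrative only).
# The "cell stress" metric is a stand-in for whatever real-time measurement the
# charger uses to gauge what is happening inside the cell; the thresholds and
# step sizes are arbitrary assumptions.

def measure_cell_stress(voltage_v, current_a):
    """Placeholder: in practice this would come from the cell's measured
    electrical response, not from a simple formula like this."""
    return current_a * voltage_v / 10.0   # dummy proxy

def adaptive_charge_step(current_a, voltage_v, max_current_a=3.0, stress_limit=1.5):
    stress = measure_cell_stress(voltage_v, current_a)
    if stress > stress_limit:
        current_a *= 0.9                  # back off: the cell is being pushed too hard
    elif current_a < max_current_a:
        current_a = min(max_current_a, current_a * 1.05)  # room to charge a bit faster
    return current_a

# Example: start at 1.5 A near 4.2 V and let the loop settle.
current = 1.5
for _ in range(20):
    current = adaptive_charge_step(current, voltage_v=4.2)
print(f"Steady-state charging current in this toy model: {current:.2f} A")
```

The real value lies in the measurement: because every cell responds a little differently, the loop settles to a different charging profile for every battery, and keeps adjusting as the battery ages.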

It’s a marriage of chemistry, control systems and software that shifts the burden from expensive manufacturing to less expensive computation. But what is clear is that it does not make battery manufacturing any less important, and it does not replace battery manufacturing — it is complementary. It is no different than how adaptive algorithms in the camera are complementary to the lens, not a replacement for it. This is cool innovation!

17 Dec 2015

The California Department of Motor Vehicles (DMV) proposed today a new set of rules that will govern the operation of autonomous vehicles on the state’s roads. Simultaneously, Google announced that it was spinning off its self-driving car unit into a standalone business, setting the stage for a fleet of self-driving taxis that will compete head-to-head with Uber, which itself is investing heavily in self-driving vehicles. Tesla, GM, Mercedes-Benz, Ford and several others have not been shy in the media, all announcing efforts and prototypes toward autonomous driving.

The Google (or perhaps more aptly under its new name of Alphabet) pod-like self-driving vehicles are powered by lithium ion batteries, so it is fair to say that it is only a matter of time before electric powertrains become the foundation for this new vision of autonomous cars. The race is early, still very early, but the stakes are potentially immense as several players pursue their vision for autonomous electric vehicles.

One of the early metrics of a race worth gauging is each player’s present IP position. It is not a simple question to answer but one can glean some insight — autonomous electric vehicles are very sophisticated systems using complex components, so one would expect that intellectual property will play a central role both in the development of this market segment as well as eventual litigation among the participants.

The next two charts show the extent of the present IP position for a select number of companies. The chart on the left shows the number of US patents issued since 2000 covering two categories: i) battery technology, including battery materials, manufacturing and battery management systems, and ii) technologies related to designing and building electric vehicles. For the time being, I will focus this post on the “electric” portion of this race, addressing the battery and electric systems for these vehicles, and leave autonomous driving for others to discuss. I assume that the number of issued US patents reflects, within reason, the amount of know-how a company possesses in battery and EV technologies.

Patent positions related to autonomous electric vehicles

The first observation that stands out is the large number of US patents that Toyota has secured in both areas of batteries and electric vehicle systems. They eclipse the number of patents issued to Tesla and GM. Honda and Ford, two companies that have been relatively quiet in the media, are clearly building their foundations. The German automakers, judging from their US patent portfolios, seem to be lagging — though this should not be misinterpreted as losing or lagging in the race. Apple has not yet publicly announced that it plans to build cars, but rumors abound in this respect, and as such the company is categorized with the automakers. Its portfolio, however, is heavily biased towards battery technology, courtesy of its prowess in consumer devices.

Automakers rely heavily on suppliers of components and subsystems. Among the well-established ones, Robert Bosch stands out with a sizable bag of issued US patents covering both batteries and electric vehicles. Samsung and LG, two large suppliers of electronic components to the Korean carmakers, have a strong IP position in building batteries owing to their respective battery divisions, SDI and LG Chem — but there is not much evidence of strong IP in electric powertrains and electric vehicles. It is also surprising to see Delphi and Johnson Controls lagging in both categories — could this mean that the automakers are choosing to own and control key technologies instead of outsourcing them to their traditional supply chain partners? Time will tell.

In this evolving race and ambitious vision for the future, these statistics are merely perturbations for the time being. However, given enough time, they could amplify and influence the outcome of who wins and who does not. Stay tuned!

10 Dec 2015

Before I start this post, I encourage all new readers to go back to the early posts if they desire to learn more about the basics of lithium-ion batteries and their operation.

This post is dedicated to deciphering the growing complexity of charging stations for plug-in hybrid and pure electric vehicles (xEVs), especially as their popularity grows among drivers. If you own an electric vehicle, whether a Tesla, a Nissan Leaf, or another model, chances are you are using one or more of these charging methodologies…and if you desire to fast charge your Tesla or other electric vehicle, this post will give you the insight to know what type of charging station you may need.

The charging of xEVs, whether at your home or at dedicated charging stations, is usually governed by a set of standards agreed upon by a vast number of contributing organizations, such as the automakers, electric utilities, component makers and many others. Several organizations including SAE International, ANSI and IEEE have led the coordination and development of such standards — they are numerous — covering, for instance, the types and interoperability of connector designs and charging power levels (SAE J1772), communication and signaling protocols (SAE J2931/1) between the xEV and the charging station (also known as EV supply equipment, or EVSE), wireless charging (SAE J2954), as well as many others related to diagnostics, safety, DC charging and so on. Today’s post addresses charging from the perspective of the SAE J1772 standard and its competing standard, CHAdeMO.

So let’s start with understanding what charging levels are and how they are defined by the various standards. There are four levels of charging: two levels using conventional AC charging, and two additional levels using higher-voltage DC charging. They are summarized in the next table:

Fast charging a Tesla or an electric vehicle

Let’s start with AC Levels 1 and 2. Level 1 is what you get using the charging cord supplied by the car maker if you own an xEV. It plugs into the standard 120V household outlet and delivers, in theory, 1.7kW. Some of you are probably tempted to multiply 120V by 20A and realize that’s more than 1.7kW…if you are that person, remember that this is an AC current, so you need to multiply by 0.707. This is the maximum power delivered to the connector plugged into the car. It is not the power delivered to the battery. The battery’s power is delivered by an on-board battery management system (the on-board charger) that has to convert the voltage to a level appropriate for the battery. In reality, the car battery receives a best-case power of about 1.2 – 1.3kW to account for the electrical efficiency of the system — taking an average consumption of 250Wh per mile, that is equivalent to about 5 miles for each hour of charging…ouch!

Reality is a little worse than that: standard household power plugs are limited to 15A (instead of 20A) thereby decreasing the power delivered to the battery to a measly 900W. At this power level, a Nissan Leaf’s battery (nominal 24kWh / effective 20kWh) takes 22 hours to fully charge. Yikes! Now one can begin to understand why xEV owners do not line up near a Level 1 charger…but it does get crowded at a Level 2 charger.
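The Level 1 arithmetic is simple enough to check; the snippet below just reruns the numbers quoted in the last two paragraphs (900 W into the battery, 250 Wh per mile, and the Leaf’s roughly 20 kWh of usable energy).

```python
# Level 1 (120 V) charging arithmetic using the figures quoted above.
power_to_battery_w = 900        # best case on a 15 A household circuit
consumption_wh_per_mile = 250   # assumed average consumption

leaf_effective_kwh = 20         # Nissan Leaf: 24 kWh nominal, ~20 kWh effective
hours_to_full = leaf_effective_kwh * 1000 / power_to_battery_w
miles_per_hour_of_charge = power_to_battery_w / consumption_wh_per_mile

print(f"Full charge: ~{hours_to_full:.0f} hours")                       # ~22 hours
print(f"Range added: ~{miles_per_hour_of_charge:.1f} miles per hour of charging")
```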

AC Level 2 uses 240V single-phase mains. The lowest current level is 20A, corresponding to a maximum power (again at the output of the connector) of 3.4kW. The typical public charging stations, such as the ones managed by ChargePoint, provide 6.7kW. However, there is a catch. The on-board charging circuitry in your xEV must be able to use that power. Early Nissan Leaf models had 3.3kW circuitry — in other words, regardless of the maximum power at the charging station, the maximum power the car is willing to accept is 3.3kW. Newer xEV models, e.g., the Nissan Leaf, Ford Focus Electric and Chevy Volt, have on-board chargers capable of up to 6.6kW. Again, this means that even if the charging station were to provide 19.2kW, your car cannot accept more than 6.6kW…this is by far the most common charging level as dictated by the presently deployed infrastructure. It equates to about 25 miles for each hour of charging. Once again, using the Nissan Leaf as an example, its battery will fully charge in about 3.5 hours with a Level 2 charger (6.6kW). That’s not fast charging, but it sure is a heck of a lot faster than Level 1.
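The limiting factor is always the smaller of the station’s output and the on-board charger’s rating, as the short sketch below illustrates (again using the figures from this paragraph; the consumption number is the same assumption as before).

```python
# Level 2 charging: the effective power is capped by the on-board charger.
def effective_power_kw(station_kw, onboard_charger_kw):
    return min(station_kw, onboard_charger_kw)

consumption_wh_per_mile = 250  # assumed average consumption

for station, onboard in [(6.7, 3.3), (6.7, 6.6), (19.2, 6.6)]:
    kw = effective_power_kw(station, onboard)
    miles_per_hour = kw * 1000 / consumption_wh_per_mile
    print(f"station {station:4.1f} kW, on-board {onboard:3.1f} kW -> "
          f"~{kw:3.1f} kW, ~{miles_per_hour:.0f} miles per hour of charging")
```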

Fast charging with DC gets more complex because there are competing approaches. Of course, we are also now talking about insanely high power levels, and consequently very expensive charging stations and associated installation costs ($50,000 to $100,000 each).

SAE has the J1772 Combo DC standard. CHAdeMO is another, competing standard. Tesla has its own proprietary fast-charging approach using its network of Superchargers (following neither standard). But what they have in common is that they all seek to provide high power levels to the vehicle…up to 120kW. This infrastructure is still relatively scarce — Tesla is the only one aggressively deploying fast-charging Superchargers along specially designated highway corridors.

Naturally, charging a car battery at such high power levels raises a new series of questions about whether this creates any significant and permanent damage to the battery. The brief answer is: YES, damage does occur…but super fast charging is so rare that no one is really paying attention to this question, at least not for now. Besides, with the exception of Tesla, your average xEV cannot charge faster than 6.7kW, so having a fast charging station is a moot point.

A final word on fast charging the Tesla batteries. At 120kW of input power, this equates to a charging rate of 120/85 = 1.4C — this is guaranteed to cause serious damage to your Tesla battery if you were to charge at this rate on a regular basis. But then again, if Elon Musk and Tesla Motors are willing to cover you under their warranty, do you really care?
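For reference, the C-rate arithmetic in that last sentence looks like this (85 kWh being the pack size of the Tesla in question):

```python
# C-rate of DC fast charging a Tesla, using the figures quoted above.
charging_power_kw = 120
pack_energy_kwh = 85

c_rate = charging_power_kw / pack_energy_kwh
print(f"Charge rate: ~{c_rate:.1f}C "
      f"(a full charge in roughly {60 / c_rate:.0f} minutes, if it could be sustained)")
```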