HOW DOES A FUEL GAUGE WORK?
There is a little icon in the top corner of your mobile device’s screen that reads a percentage value corresponding to how full your battery is; this fraction is known as the state of charge (SoC). The little charge-measurement instrument embedded in the mobile device is called a battery fuel gauge. Have you ever wondered how it works?
Modern electronic fuel gauges were first commercialized by Benchmarq in the 1990s, initially for laptop PCs. The company was subsequently acquired by Texas Instruments; you can recognize that line of products by its bq prefix. Several other semiconductor companies offer similar products, for example Maxim Integrated and Seiko Instruments, and more recently Qualcomm, which integrates its fuel gauge directly into its power management integrated circuit (PMIC) for mobile devices.
The basic principle of measuring state of charge is rather old. It has long been known that the chemical potential is a direct function of the state of charge. The chemical potential in a rechargeable battery is the voltage measured at the terminals of the battery, with one important qualifier: at equilibrium. In simple words, one has to let the battery sit for a long duration of time to reach equilibrium, then make the voltage measurement. This voltage is then a direct measure of the SoC. For a particular chemistry or type of battery, say a lithium-ion rechargeable battery, this relationship is universal. In other words, it applies to every battery made of the same chemistry. That’s quite handy because we don’t need to reinvent a new fuel gauge for every mobile device. The graph below shows this voltage vs. SoC relationship for a rechargeable lithium-ion battery using a carbon anode and a cobalt-oxide cathode. When the battery is fully charged (shown by 100% on the horizontal axis), its voltage is maximal at 4.35 volts. As charge is removed from the battery, its voltage drops according to the relationship identified by the red curve. This relationship is known in technical terms as the open-circuit voltage (OCV) function. Notice that capacity (in units of mAh) does not enter this relationship.
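If you like to think in code, here is a minimal sketch, in Python, of how a fuel gauge can turn a resting voltage into an SoC reading using a stored OCV table. The SoC and voltage pairs below are illustrative placeholders, not the measured curve of any particular cell.

    # Minimal sketch: estimate SoC from a resting (open-circuit) voltage
    # using a stored OCV table. The table values are illustrative only.
    import bisect

    # (SoC in %, open-circuit voltage in volts) -- hypothetical points
    OCV_TABLE = [
        (0, 3.00), (10, 3.55), (25, 3.65), (50, 3.80),
        (75, 3.95), (90, 4.10), (100, 4.35),
    ]

    def soc_from_ocv(voltage):
        """Linearly interpolate the OCV table to estimate SoC (%)."""
        socs  = [s for s, v in OCV_TABLE]
        volts = [v for s, v in OCV_TABLE]
        if voltage <= volts[0]:
            return 0.0
        if voltage >= volts[-1]:
            return 100.0
        i = bisect.bisect_left(volts, voltage)
        v0, v1 = volts[i - 1], volts[i]
        s0, s1 = socs[i - 1], socs[i]
        return s0 + (s1 - s0) * (voltage - v0) / (v1 - v0)

    print(soc_from_ocv(3.80))   # -> 50.0 on this illustrative table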
Naturally, the next question is “How do we measure the SoC if the battery is not in equilibrium?” A chemical system, of which a battery is a prime example, is not in equilibrium when current is flowing through it, for example, when the battery is powering your mobile device in operation. In this very common scenario, the actual voltage at the terminals of the battery is a little lower than what you would measure in equilibrium. That’s because every battery has a small internal resistance. So when current flows, the voltage is lower by a value equal to the product of the current and the resistance (if you recall Ohm’s law from your high school science class). This is illustrated in the chart above with the blue dashed curve. In principle, one can correct for this offset: measure the value of the internal resistance, multiply it by the measured current, then add the result to the measured terminal voltage to obtain an estimate of the equilibrium voltage. This is precisely how most mid-range fuel gauges operate.
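Here is what that correction might look like, continuing the sketch above; the 50-milliohm internal resistance is an assumed value chosen purely for illustration.

    # Sketch of the IR correction: compensate the terminal voltage for the
    # drop across an assumed internal resistance.
    R_INTERNAL = 0.050  # ohms -- assumed internal resistance, for illustration only

    def estimate_ocv_under_load(terminal_voltage, discharge_current):
        """Compensate the terminal voltage for the I*R drop during discharge.

        terminal_voltage : volts measured at the battery terminals
        discharge_current: amps flowing out of the battery (positive = discharging)
        """
        # Ohm's law: while discharging, the resting (open-circuit) voltage is
        # higher than the loaded voltage by roughly I * R.
        return terminal_voltage + discharge_current * R_INTERNAL

    # Example: 3.70 V at the terminals while drawing 1 A
    ocv_estimate = estimate_ocv_under_load(3.70, 1.0)   # -> 3.75 V
    # Feeding 3.75 V into the OCV lookup from the previous sketch gives the SoC.
    print(ocv_estimate)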
Alternatively, one can wait for periods when the mobile device is in sleep mode (say, in the middle of the night), then assume that the measured terminal voltage is close enough to the equilibrium voltage. It may take tens of minutes for a battery to reach this state of equilibrium, so waiting that long to make an SoC measurement is not terribly practical.
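A gauge that relies on rest periods also needs a rule for deciding when the battery has relaxed enough to trust an OCV reading; the sketch below assumes, purely for illustration, a 5 mA current threshold and a 30-minute rest requirement.

    # Sketch: trust an OCV-based SoC reading only after the battery has rested
    # (negligible current) for long enough. The thresholds are assumptions.
    REST_CURRENT_A = 0.005     # treat anything under 5 mA as "no load"
    REST_TIME_S    = 30 * 60   # require 30 minutes of rest

    def relaxed_enough(current_samples, sample_period_s):
        """current_samples: recent current readings in amps, oldest first."""
        needed = int(REST_TIME_S / sample_period_s)
        if len(current_samples) < needed:
            return False
        return all(abs(i) < REST_CURRENT_A for i in current_samples[-needed:])

    # Example: ten minutes of 1 mA samples taken every 60 s is not long enough
    print(relaxed_enough([0.001] * 10, 60))   # -> False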
But the method of correcting for the IR offset creates inherent errors. That’s because the internal resistance of the battery fluctuates with current, temperature, and age, effects that are difficult to characterize or correct for in real life. The resulting error can reach ten percentage points or even more. Such an error may be inconsequential when the battery is full, but it is quite detrimental when the mobile device is near empty: an end user would be shocked if he or she thought there was 10% of the battery charge left when the battery was actually at zero! If you have experienced such a case, then you know your fuel gauge is not terribly accurate.
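A quick back-of-the-envelope calculation shows how fast this error can grow; the resistance values, the load current, and the slope of the OCV curve below are all assumed numbers chosen for illustration.

    # How a resistance error becomes an SoC error. All numbers are assumptions.
    r_assumed = 0.050   # ohms: what the gauge thinks the resistance is
    r_actual  = 0.100   # ohms: what it really is (e.g., a cold or aged cell)
    current   = 1.0     # amps of load

    voltage_error = current * (r_actual - r_assumed)   # 0.05 V of OCV error
    ocv_slope     = 0.005   # roughly 5 mV per % SoC on the flat part of the curve

    soc_error = voltage_error / ocv_slope
    print(f"{soc_error:.0f} percentage points of SoC error")   # -> 10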
Higher-grade fuel gauges supplement the OCV measurement technique with another instrument called a coulomb counter. This is a fancy name for an electronic function in the fuel gauge chip that measures current with great precision, then multiplies it by a precisely measured time interval. The product of the two is electric charge, measured in coulombs. In other words, the coulomb counter is counting charge flowing through it (or counting electrons, if it were really, really precise). This function becomes very useful when the battery is actually powering a mobile device, i.e., when the battery is not in equilibrium. If one starts by measuring the SoC of the battery while it is in equilibrium, then any charge removed or added while the battery is out of equilibrium can be tracked with the coulomb counter. The combination of these two methods allows a more precise measurement of the SoC, often accurate to within 1%.
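Here is a minimal sketch of the bookkeeping a coulomb counter performs, again with an assumed 3,000 mAh capacity rather than values from any real gauge.

    # Sketch of coulomb counting: integrate current over time to track the
    # charge removed from (or returned to) the battery since the SoC was last
    # anchored by an equilibrium OCV measurement. The capacity is assumed.
    CAPACITY_MAH = 3000.0

    class CoulombCounter:
        def __init__(self, initial_soc_percent):
            self.soc = initial_soc_percent   # anchored from an OCV reading at rest

        def update(self, current_ma, dt_hours):
            """current_ma: positive while discharging, negative while charging."""
            delta_mah = current_ma * dt_hours          # charge moved in this step
            self.soc -= 100.0 * delta_mah / CAPACITY_MAH

    # Example: start anchored at 80% SoC, then draw 300 mA for 2 hours
    gauge = CoulombCounter(80.0)
    gauge.update(300.0, 2.0)
    print(gauge.soc)   # -> 60.0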
Now let’s get to an exciting and practical part of the fuel gauge: if you have ever walked into an AT&T, Verizon, or Apple store complaining about your battery and were told to drain the battery to zero and then charge it back to 100% (search the web for this, and you will find lots of such stories), it is because of your fuel gauge. Let me repeat this clearly: it has nothing to do with the lithium-ion battery or any hint of memory in the battery. Lithium-ion batteries have no memory effects. All you are doing in this charge-discharge exercise is letting the fuel gauge recalibrate itself. You see, if you use the battery continuously in the middle range, say you never reach zero and seldom reach 100%, the fuel gauge will lose track of what is actually zero and what is actually 100%. In such cases, the readings become flawed, and the fuel gauge gets confused. A full charge-discharge cycle helps reorient the fuel gauge.
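To see why the full cycle helps, here is a minimal sketch of the recalibration bookkeeping; the structure and names are illustrative, not any vendor’s actual algorithm.

    # Sketch of endpoint recalibration: reaching a known endpoint (truly full
    # or truly empty) lets the gauge re-anchor its SoC and relearn the usable
    # capacity. The logic and names are illustrative assumptions.
    class RecalibratingGauge:
        def __init__(self, assumed_capacity_mah):
            self.capacity_mah = assumed_capacity_mah   # possibly stale estimate
            self.soc = None                            # unknown until anchored
            self.counted_since_full_mah = 0.0

        def on_full_charge(self):
            # The charger reports termination: the battery is known to be at 100%.
            self.soc = 100.0
            self.counted_since_full_mah = 0.0

        def on_discharge(self, current_ma, dt_hours):
            delta_mah = current_ma * dt_hours
            self.counted_since_full_mah += delta_mah
            if self.soc is not None:
                self.soc -= 100.0 * delta_mah / self.capacity_mah

        def on_empty_cutoff(self):
            # The terminal voltage hit the empty cutoff: SoC is known to be 0%,
            # and the charge counted since the last full charge is the battery's
            # actual usable capacity -- this is the recalibration.
            self.capacity_mah = self.counted_since_full_mah
            self.soc = 0.0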