How to Prepare for Wildfires

You’ve survived 2021—thanks, no doubt, to the science and tech that made your medical care, your internet, and your smartphone work. Tonight, New Year’s Eve, many podcast hosts are taking some time to reflect, to rest—and to post a re-run. 

But not “Unsung Science!” To tide you over until next week’s fresh episode, we offer a free audiobook chapter from David Pogue’s book, “How to Prepare for Climate Change.” This is the chapter on how to prepare for wildfires, timed to coincide with the middle of the winter wildfire season in the western half of the U.S. As a New Year’s gift from us, here’s a terrifying and reassuring chapter on preparing for fires—and surviving them.

Where to Live in the Climate-Change Era

It’s the night before Christmas—and many podcasters (and listeners) are nestled all snug in their beds. But we didn’t want to leave you without a dose of witty Pogue science writing. So here, for your listening pleasure, is a free chapter from David Pogue’s latest audio book, “How to Prepare for Climate Change.” This is Chapter 2, “Where to Live.” 

Obviously, not everyone can afford to move just to escape climate-crisis disasters—yet 40 million Americans do move every year, and an increasing number of them are taking climate risks into account. This chapter is your guide to the best climate-haven regions in America.

Leap Seconds, Smear Seconds, and the Slowing of the Earth

Season 1 • Episode 9

The earth’s spinning is slowing down. Any clocks pegged to the earth’s rotation are therefore drifting out of alignment with our far more precise atomic clocks—only by a thousandth of a second every 50 years, but that’s still a problem for the computers that run the internet, cellphones, and financial systems. 

In 1972, scientists began re-aligning atomic clocks with earth-rotation time by inserting a leap second every December 31, or as needed. It seemed like a good idea at the time—until computers started crashing at Google, Reddit, and major airlines. Google engineers proposed, instead, a leap smear: fractionally lengthening every second on December 31, so that that day contains the same total number of seconds. But really: If computer time drifts so infinitesimally from earth-rotation time, does anybody really care what time it is? 

Guests: Theo Gray, scientist and author. Geoff Chester, public affairs officer for the Naval Observatory. Peter Hochschild, principal engineer, Google.

Episode transcript

Leap Seconds Script


Intro

The earth’s rotation is slowing down. Very gradually, but enough that it’s drifting out of sync with our atomic clocks—the ones that run our internet, cell phones, and financial systems.

Theo Since the late 50s, they’ve drifted apart by 37 seconds cumulatively. 

David: So how do we— 

Theo: Well, so leap, leap seconds is what people do. 

Yes, leap seconds. We add one second to each year, as needed. Which is great—unless it crashes your software.

Peter And there were funny reports of some crashes in Google’s servers. And that really caught our attention.

Today on “Unsung Science:” The Earth’s time…computer time…and the battle to manage the difference.  

Story

Season 1, Episode 9: Leap Seconds, Smear Seconds, and the Slowing of the Earth. The perfect topic for the end of the year.

In a way, this episode has been in the works since I was in fourth grade. 

[school bell + kids ambi + music]

I will NEVER forget this. A guest came in to talk to our class: some astronomy professor, hired to try to infuse a little interest in science into elementary schoolers like us.

He was one of these audience-participation dudes. At one point, he was like, “Who knows how many days in a year?”

And I knew that. I was a little smarty pants. I yelled out “365 days!” Because everyone knows that. 

But the guy goes, “Nope! Guess again!” 

What!? That’s not wrong! I was really steamed. That’s the kind of kid I was.

So the other kids started shouting their answers. They thought maybe I’d gotten the numbers mixed up. “356!”

And the guy’s like, “Nope!”

“365! 366! 364! A hundred! 52!”

And the guy was like, “Nope! No, no, no. Try again!”

Honestly, he let it drag on for way too long.

He finally told us his answer: 365 and a quarter days. Which is why we need a leap year: every four years, those quarter days add up to an extra day, the February 29 we all know. 

It’s kind of awkward that the earth’s trip around the sun is not evenly divisible by whole days. So we add a leap day to keep our clocks in sync with the earth’s motion.

That explanation was good enough for fourth graders. But it wasn’t quite the whole story.

Theo The man who has one clock always knows what time it is. If you have two clocks, you’re never quite sure.   Like, whose clock are you going to believe? And suppose that you do have two clocks, or three clocks, and they’re never going to exactly agree.  

This is Theo Gray. I love Theo. We’ve appeared together on a few “Nova” specials on PBS, and I collect his big glossy hardbound photo books of elements, and molecules, and machines. By profession, he’s a—what is he? 

Theo I don’t really have a career, as such, I don’t think.   I mean, I did work as a software engineer for 23 years, cofounding and developing Mathematica, Wolfram Research, Wolfram Language, etc., but I haven’t done that for a while and I think probably you call me an author at this point, because I just make a living writing books, and people pay me to do that. And that’s fun. 

In his book “How Things Work,” he’s got this huge section on the history of clocks. 

Theo I mean, obviously the simplest, the crudest, the most universal clock, is a sundial, or otherwise known as a stick in the ground.   It’s a nice clock, but it has some disadvantages. It –it only works when the sun is shining. So, you know, clouds, rain, whatever, nighttime. This is the problem.   I think what’s really the most remarkable about sundials is that up until 1955, they were the most accurate clocks in the world.  And, you know, you might say, what are you talking about? That’s ridiculous.  

But it’s true. Until the 1800s, the famous clock in Greenwich, England, the official world standard of time, was based on the sun’s position in the sky. At some point they switched to observing the positions of stars, because they’re sharper and therefore easier to see.  

Theo And yeah, but it was all based on the rotation of the Earth. In other words, a glorified sundial. 

There was actually a clock, which is still there to this day, outside the main gate at the Greenwich Observatory.  And then there’s a giant red ball on a tower at the top of the observatory, which drops at noon precisely. And you can see that ball from central London. So all the bankers could, you know, see that ball drop. And that was how the time was communicated. 

Until 1955. Then the cesium atomic clock was invented. 

Ah, yes—the atomic clock. 

Theo Get yourself some cesium atoms, put them in a controlled environment, low temperature, beam in a certain microwave frequency   and then count the cycles. You count the correct number of cycles of that, that cesium atomic resonance frequency. That’s a second. 

Anybody can do it anywhere in the world. And you don’t have to have—you don’t have to go to Paris.   You just read the definition, build the machine and you’ve got the thing. And systematically—  time was replaced. The Earth was replaced with cesium atomic clocks. 
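Theo’s recipe really is just counting and dividing. As a rough sketch (my own Python arithmetic, not anything from the episode), here’s a timestamp built from the defined cesium count — 9,192,631,770 cycles per second, the figure Geoff Chester quotes later in the episode:

```python
# The SI second is defined as 9,192,631,770 cycles of the cesium-133
# hyperfine resonance -- the count Theo describes building a machine for.
CESIUM_HZ = 9_192_631_770  # cycles per second, by definition

def cycles_to_seconds(cycle_count: int) -> float:
    """Convert a count of cesium resonance cycles to elapsed seconds."""
    return cycle_count / CESIUM_HZ

print(cycles_to_seconds(9_192_631_770))   # one full count = 1.0 second
print(cycles_to_seconds(18_385_263_540))  # two full counts = 2.0 seconds
```

That’s the whole trick: anyone, anywhere, with the same apparatus, gets the same second.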

Now, atomic clocks are going to be the star of this show, so I should probably take a moment to tell you where yours are. The ones that tell your phone, your computer, and your internet what time it is.

Well, they’re in Maryland, at the US Naval Observatory. 

[music]

If you’ve ever heard of the Naval Observatory, you probably think of it as the home of the U.S. Vice President—and yes, that’s where he or she always lives. But if you’ve ever benefited from the internet, cell phones, airplanes, GPS, the financial system, or the military, you might care much more about the Naval Observatory’s second function: as the home of the United States master clock. 

We go now to the library of the US Naval Observatory—an astonishing rotunda lined with 90,000 books about navigation, astronomy, and time.

CHESTER: If it looks like I’m in a closet, it’s because I am in a closet. This is room W, which is where we keep the journal archives for our library. But it’s quiet.  

POGUE If you had six inches of books to put under it on that table, it would be then closer to your mouth and sound a lot better. 

Geoff So let’s see, OK: “Monthly Notices of the Royal Astronomical Society.” That ought to do it. 

Pogue That’ll make it sound really good. 

Thus began my Zoom interview with Geoffrey Chester, the public affairs officer for the Naval Observatory, the official keeper of time and location. 

Geoff We are a Navy command. All told, we have about 150 on staff and so it’s mostly civilian.   When we get a new superintendent,  the first thing I tell them is you may not necessarily want to go to sea with this crew, but I can guarantee you that we will never get lost and we always know what time it is. 

David Why is it the Navy’s job to set the standards for time and space? 

Geoff All the countries that had big navies had observatories so they can calibrate chronometers and create almanacs, so that they knew where the heck they were going. 

And so we are actually kind of the new kids on the block, because we started in 1830, when most of these other institutions started back in the 1700s. But we are the only one that is still associated with the Navy of its host country. 

See, dear listener? Four minutes in, and you’ve already learned something fascinating. I know you didn’t know that.

Geoff Here at the Naval Observatory, we are responsible for—for determining and disseminating a long term, precise time scale. So we operate about one hundred clocks here.   What we do essentially is, we take a weighted mean of our 100 or so atomic clocks.   We have computers that do this about every two minutes.  

We have the ability to keep a time scale here that on some level can be measured down to the femtosecond level. A femtosecond is ten to the minus 15, or one thousandth of a trillionth of a second.   We actually built those clocks that keep it that precise. We actually built those in-house, because we could not find a commercial supplier that could do it for us at a price we can afford. 

David I’ve had the privilege of visiting the Naval Observatory. I’ve been in the room that has all those clocks. Can you describe it for the folks at home? 

Geoff Most of the clocks are kept in a building that was specially built.   The temperature does not vary by more than one tenth of a degree centigrade throughout the year, and the humidity stays within three percent of a nominal mean.   So you go into that building in the dead of winter or in the heat of a Washington summer, and the temperature and humidity are the same. It’s a great place to hang out.  

But in that room, we have racks of equipment. These are beige boxes that kind of look like a stereo amplifier. Except they have a little display that ticks off the time.  

OK. So remember how everything changed with the invention of the cesium atomic clock in 1955? Not long thereafter, the world officially adopted the atomic clock as its master timekeeping machine. The rotation of the earth was retired. 

GEOFF: In 1967, the definition of the second was changed. It was no longer one 86,400th part of a mean solar day. It was from thence forward until the present, the interval of 9,192,631,770 hyperfine transitions of a neutral cesium 133 atom in its ground state. When I started working here, I thought a second was one Mississippi, so… 

That’s my favorite joke of the podcast year. 

[music]

But now we go from comedy to tragedy, or at least something that’s existentially depressing.

See, in one regard, the atomic clock’s astonishing precision is actually a problem, because it exposed the flaws of our old, earth-based system. What it revealed is—ready for existential depression?—that the earth’s spin is slowing down. Here’s Theo Gray.

Theo: It’s so bad that, like, we’re now I think it’s 37 seconds off from the, the late 1950s, when we first started to be able to measure more accurately.

David Oh, I used to worry about the sun exploding—now I have to worry about the world stopping spinning? 

Theo You know, that is actually a very good question, that somebody should do the math—like which is going to happen first? Like I say, we always worry about the sun’s going to burn us all up in five billion years. But if the earth stops turning before then, we’d be in a lot more trouble, or different trouble anyway.  

David I guess at some point I’m relieved that this is the one way that we are not responsible for destroying the way things are supposed to be. 

Theo Yeah, well, I mean — global warming, because it would make the earth on average warmer, is going to drive more moisture into the atmosphere and is therefore going to slow the earth down a little bit more. 

David Oh, great. 

Theo So just, you know, if you want to go that way and you want to feel guilty about it, I’ll give you a way to feel bad about it. 

David Oh, man. Well, how much has it slowed down already, measurably? 

Theo I think the figure is one second per 500 years.  The days used to be significantly shorter. 

David The days have not always been 24 hours long?!   

Theo Yeah, no, I mean, I think in the dinosaur era, they were, what, maybe 20 hours?  

You think there’s not enough time in a day in your life? How’d you like to have been one of the workers who built the Egyptian pyramids? 

Theo I counted, they had nine seconds less per day to work on the pyramids. 

I need to insert an audio footnote here so I don’t get scientist hate mail. To be absolutely clear, the earth’s spin isn’t just slowing down.

Theo The Earth’s rotation is not steady. Like, it wobbles a little bit from day to day, and it systematically speeds up and slows down throughout the seasons.  You know, when the earth is warmer, more of the water kind of migrates up to higher up in the atmosphere. And when it’s colder, the water, rain falls down, and it’s lower. So, you know, the classic example of the figure skater who, you know, spins faster when they pull their arms in, when the earth pulls its water in closer, it spins faster. And when the water goes out, higher up into the atmosphere, the earth goes slower. 

But the overall result of all of this wobbling—the long-term trend—is a slowing.

Theo It’s slowing down systematically because of things like tidal friction. So, you know,  the water on the earth, the ocean being pulled by the moon, it sloshes around—that’s, you know, that’s friction. 

David This is all well-known to physicists and scientists, the fact that the Earth not only does not rotate steadily, but is slowing down? 

Theo Oh, absolutely. Yeah. You might think, well, who cares—  one second a year, really. But if you’re trying to run, for example, a GPS satellite network or even a financial system, this is a big deal.  So there’s one rule of thumb, a very convenient fact, that the speed of light is one nanosecond per foot. 

So, you know, your car would be halfway to the moon if GPS were off by a second. 
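Theo’s rule of thumb is easy to check with the actual speed of light. A quick back-of-envelope sketch (my own arithmetic, not from the episode):

```python
C = 299_792_458.0   # speed of light, meters per second
FOOT = 0.3048       # meters in one foot

# Theo's rule of thumb: light covers about one foot per nanosecond.
feet_per_ns = C * 1e-9 / FOOT
print(f"{feet_per_ns:.3f} feet per nanosecond")  # about 0.984

# So a one-second clock error shifts a position fix by:
error_km = C / 1000.0
print(f"{error_km:,.0f} km")  # about 299,792 km, versus the roughly
                              # 384,400 km from the earth to the moon
```

Strictly speaking, that’s well past halfway to the moon, which only makes the point stronger.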

David Hate when that happens. 

Theo Let’s say you’ve got a billion dollar interbank loan at some interest rate, right? And, you know, you’re charging interest by the microsecond or something on this billion-dollar loan. You know, it actually makes a difference.  

David So here’s –here’s what’s deep down bugging me. We’ve got these cesium atomic clocks that are not susceptible to slowing down and the variability of the earth’s rotation. And then we’ve got the earth, which is. Aren’t they eventually going to drift apart?

Theo Well, they do, yes. Since the late 50s, they’ve drifted apart by 37 seconds cumulatively. 

David So how do we— 

Theo Well, so leap, leap seconds is what people do. 

That’s right. There are leap seconds. You’ve lived through at least several of them. There is, of course, a global committee in charge of leap seconds—and it has a fantastic name: the International Earth Rotation Service. (OK, in 2003, they bulked up the name. Now it’s the International Earth Rotation and Reference Systems Service, but that’s not nearly as excellent.)

Once every few years, as needed, IERS scientists schedule one extra second, tacked on to June 30 or December 31, to bring the atomic clocks back into alignment with the earth’s spin. In recent years, we’ve enjoyed the luxury of that extra second in 2005, 2008, 2012, 2015, and 2016. 

David So as it is right now, if I were watching my phone on, on the day when a leap second is scheduled, would I see one of the hours go 57, 58, 59, 60, then the next minute begins? 

Theo Yes, you see a hard jump. It’s the same second repeats twice. 

David Right. 

Theo And, you know, I mean, most people don’t notice that, right? Like you’d have to watch pretty close. It’d be exciting. It’s like watching your odometer, you know, turn over. So it’s not like this is a big issue for most people most of the time.  
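The “57, 58, 59, 60” that David describes is exactly how official UTC labels a leap second: the final minute of the day gets a 61st second, written 23:59:60. A tiny sketch (my own Python, purely illustrative):

```python
def final_minute_labels(has_leap_second: bool) -> list[str]:
    """UTC labels for the last minute of the day. On a leap-second day
    the minute runs 23:59:00 through 23:59:60 -- 61 seconds long."""
    last = 60 if has_leap_second else 59
    return [f"23:59:{s:02d}" for s in range(last + 1)]

labels = final_minute_labels(has_leap_second=True)
print(len(labels))   # 61 -- the minute contains an extra second
print(labels[57:])   # ['23:59:57', '23:59:58', '23:59:59', '23:59:60']
```

Many computer clocks can’t display a 61-second minute, which is why, as Theo says, they repeat a second instead.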

[music]

But it is a big issue for scientists and other technical people. Like, if you’re a surveyor or an astronomer, your work is still connected very much to the earth and its movements, so you probably rely on the original earth-based time scale. But if you’re an internet or banking or space company, you need absolute precision—you use the atomic clock. 

So I asked Geoff Chester how we manage two different timekeeping systems. 

David So if the earth’s spin is variable and slowing overall, it sounds like these atomic clocks are more precise than the planet spinning. Eventually, aren’t they going to drift out of sync with each other? 

Geoff So this was something that was recognized early on after they defined the second in terms of the atomic frequency standard. And so they hammered out this idea, which they essentially codified in 1972, that there would be two concurrent timescales. 

So there is what is called International Atomic Time, or Temps Atomique International, because the International Bureau of Weights and Measures is headquartered in France. So TAI is the time that’s kept by atomic clocks. 

And then there is what is called Coordinated Universal Time. 

Coordinated Universal Time is based on the earth’s rotation. That’s the time we adjust with leap seconds. And they call it UTC, because, once again, that’s the acronym for its French name. 

Geoff: (continues) So these two concurrent time scales, one essentially based on atomic time and the other based on Earth rotation time, have a cumulative error of roughly one and a half milliseconds, that compounds on a day-to-day basis. So after about 500 days, you have a difference of one second between the two time scales. 

And…when that happens,   —there is a provision in the 1972 definition that essentially allows us to stop atomic time for one second to let the earth catch up. And that’s what’s known as a leap second. 

Aha! Very cool! So let me see if we’ve got this straight. Because the earth is slowing down, there aren’t exactly 24 hours in a day anymore. But that means that we wind up with two time scales: the one based on atomic clocks, and the one based on the earth’s rotation. And we introduce a leap second as necessary to keep them synced up.
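Geoff’s figures are round numbers, and they don’t divide perfectly into each other, so treat the daily drift as a parameter. A sketch (my own arithmetic, not from the episode) of how millisecond-a-day drift accumulates into a leap second:

```python
def days_until_one_second(drift_ms_per_day: float) -> float:
    """Days of steady drift needed to accumulate one full second."""
    return 1000.0 / drift_ms_per_day

# At the "one and a half milliseconds" Geoff quotes, a full second
# accumulates in about 667 days; at 2 ms/day it would be exactly the
# "about 500 days" he rounds to.
print(days_until_one_second(1.5))  # about 667 days
print(days_until_one_second(2.0))  # exactly 500 days
```

Either way, the order of magnitude matches the leap-second schedule: one insertion every year or two, as needed.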

[Begin fake conclusion music]

YAY! We did it! We saw a problem, and we fixed it with science! Let’s celebrate the ingenuity of the leap second. Everyone come to my house for a party next time we have a leap second! It’ll be a really short party, but, you know. Nerdy and fun.

And that concludes this episode of— 

Geoff should interrupt me (and the music) – [needle slowdown fx]

Geoff: So today, leap seconds are a real problem. 

Wait, what? That’s not the end of the story?

Geoff: The thing is, we can’t predict with any real precision more than about six months in advance whether or not a leap second is necessary at a particular time. So when it is determined that a leap second will need to be inserted,   we can give the world about six months’ notice saying, “hey, you know, here’s a leap second coming up, get your networks ready,” whatever. But every year, typically about 10 percent of the world’s networks fail, and sometimes they can be very spectacular. I think it was 2012. I believe it was Qantas Airlines that botched the leap second in their enterprise, and they lost a day of revenue because their system was kaput. 

It was 2012, and it wasn’t just Qantas. The 2012 leap second also took down Reddit, Gawker, LinkedIn, FourSquare, Mozilla, and Yelp. But every time there’s a leap second, somebody suffers. In 2008, it was Oracle’s computers and Sun’s computers. In 2015, it was Twitter and Android. In 2016, it was Cloudflare, the website security company. 

Well dang! If leap seconds aren’t the ultimate solution, how else are we supposed to fix the discrepancy between earth time and atomic time?

Peter We thought about various possible approaches to the problem, and the leap smear seemed to be by far the most practical and reasonable. 

Yes, this Google engineer just said “leap smear.” After the break: The future of time discrepancies, as rethought by Google.

BREAK

Book Ad 

[music]

Welcome back. Before the break, you absorbed some heady science, like the fact that the earth’s gradual rotational slowing has thrown off our timekeeping, and now atomic clocks and earth-based clocks gradually drift out of sync.

And we learned that the international masters of time thought they’d solved the problem by adding one leap second as needed to keep the two time systems in sync.

And then we learned that the leap-second solution is actually a problem. Once again, here’s Geoff Chester, of the U.S. Naval Observatory.

GEOFF: As large scale computer networks began to spring up, ultimately leading to the Internet and things like that, leap seconds became kind of a colossal pain. Because if you are a system manager and you do not incorporate the leap second across your entire enterprise, at the same instant in time, your network will fail. And if your network fails, you don’t make any money and if your company doesn’t make any money, the odds are you’re going to get fired.

Now I’d like to introduce you to a man who will not be getting fired any time soon. He’s too important.

Peter I’ve spent a number of years working on making Google’s computers synchronized.  It’s not a great cocktail party conversation, but it’s surprisingly pretty popular. 

Pogue Oh, you wait ‘till this episode drops. They’ll be stopping you on the streets. 

Peter Okay. 

Peter Hochschild is a principal engineer at Google. He is basically Google’s director of time. His job entails keeping all of Google’s servers perfectly synchronized—and Google has a lot of servers. Google search, and Gmail, and YouTube, and Android, and Chrome, Google Docs, Google Maps, and on and on.

Peter If everything in the world was done by one computer, that wouldn’t be so difficult. That computer would know the order that things happened in. But that isn’t how the world works. Everything is — all jobs, more or less, are divided among multiple computers, and now they, they have to agree with each other about the order of events, or else chaos will ensue. 

Pogue We wouldn’t want that. 

Peter You wouldn’t want that. 

Peter first heard about leap seconds one day in 2008. 

Peter Somebody wandered into our office and said, “Hey, you people know about leap seconds, don’t you?” And I said, “No, I’ve never heard of a leap second.” I had never heard of a leap second. 

Right away, he thought that they sounded like trouble.

The way computers have traditionally handled leap seconds, it turns out, is that the computer clock jumps back one second at the leap second. 

Peter That made us very uneasy, because — I don’t know, the one thing you know about time is that it goes forwards. And here’s a case where time seems to go backwards. 

Yeah—trying to teach your computer network to run in reverse for one second sounds like a recipe for crashes. 

Now, Google was a relatively small company in 2008—small enough that Peter and his team could easily get access to its computer logs. At the time, the most recent leap second had arrived in 2005. Out of curiosity, he checked the records. 

Peter And there were funny reports written from the very last day of 2005 of some crashes in Google’s servers, some fairly widespread crashes that were not fully understood. And that really caught our attention, we thought, “Oh, that’s interesting.” The computer programs — first of all, they are assuming that time just goes forwards, because everybody assumes that. And they also made some assumptions about the rate at which time advances, and those are perfectly good assumptions at every instant, except around a leap second. 

And that set us to work, because we realized for two reasons we had to deal with leap seconds. One was to make the internet more reliable, because things would break if we handled leap seconds the old way they were handled. And because we wanted in the future much better synchronization across the computers, and the sudden step back in time would wreck that. 
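The assumption Peter describes — that time only goes forwards — breaks in a very concrete way when the clock steps back. A toy simulation (hypothetical numbers, not Google’s code):

```python
# Timestamps (seconds since midnight) read from a wall clock around a
# leap second, where the clock replays the final second instead of
# rolling over to the next day.
timestamps = [86399.25, 86399.75, 86399.25, 86399.75]

# Durations measured across the jump:
deltas = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
print(deltas)  # [0.5, -0.5, 0.5] -- a negative "elapsed time" mid-jump
```

Any program that computes a timeout, an interest accrual, or an event ordering from those deltas just saw time run backwards, which is exactly the class of crash Peter found in the 2005 logs.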

But if the leap second wasn’t the perfect solution to the difference between the earth and the atomic clocks, what was? Google’s time team came up with a different idea, which those clever dogs decided to call—the leap smear. 

Peter What we would have the computers do is very slowly smear out that one second — leap second — rather than do it all of a sudden. And by doing it slowly, they would all be in very close alignment. And there would be no sudden discrepancy that would break the coordination of the multiple computers.  The basic idea is don’t do it suddenly, spread it out over time. 

Pogue And as a handy bonus, time doesn’t go backwards. 

Peter Correct.  And that’s a huge, that’s a huge thing, because   that’s, first of all, hard to think about. And secondly, it’s really hard to test and it only happens every few years. And it’s, it’s not so much fun. 

Pogue So you still do the leap smear on the day when a leap second is prescribed? 

Peter Correct.   The smear is centered on the actual leap second, and it extends to the previous and following noon. 

Pogue Is, is there more than one Leap Smear Proposal, or is Google the sole holder of this idea? 

Peter We published it pretty widely, because we thought, “Look, this is a great way to make the whole internet more reliable.” It’s not a competitive thing. It’s just good for everybody. And so   several companies use the same smear, and I believe there are organizations that use somewhat different smears.  

Actually, even Google did several. The first one that we did in — at the end of 2008, when we were scared, because we’d never done it before, we actually made a more complicated one that sort of started —started slow, sped up even more, and then slowed back down again. And then we measured very carefully what happened during that first leap smear, and we realized, “OK, this worked great, we can simplify it and we’ll just do a straight-line correction.”

Pogue Okay, great. So at this point, how long is a second on leap smear day? 

Peter You’ll love the way we describe it. A second is 11 parts per million slower during the leap smear. 

Pogue Is it the same as saying 11 millionths of a second? 

Peter Yes. Yes.  Sorry, I was just being a nerd. I apologize. 

Pogue So on Leap Second Day, the leap smear adds one 60 x 60 x 24th of a second to every second of the day?

Peter Yes. Perfect. 

Pogue Okay, thank you. I’m not going to embarrass myself.
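The arithmetic behind Peter’s “11 parts per million” is simple enough to check. A sketch (my own Python, under the noon-to-noon assumption Peter states):

```python
SMEAR_WINDOW = 24 * 60 * 60  # noon to noon: 86,400 clock seconds

# Each smeared second is stretched just enough that the window absorbs
# one extra real second: 86,401 real seconds shown as 86,400 clock seconds.
stretched_second = (SMEAR_WINDOW + 1) / SMEAR_WINDOW
slowdown_ppm = (stretched_second - 1.0) * 1e6

print(f"{slowdown_ppm:.2f} ppm slower")  # about 11.57 ppm
```

So the precise figure is closer to 11.6 parts per million; Peter is rounding down, and this is the “straight-line correction” Google settled on after the more complicated 2008 smear.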

Believe it or not, leap seconds and smear seconds aren’t the only solutions people are kicking around. Maybe we don’t need the TAI and UTC clocks to be exactly in sync, down to the second. Some scientists maintain that adding a leap second every few years is overkill; maybe we can wait a couple hundred years to accumulate those seconds into a single leap minute. Here’s Peter from Google again:

Peter There have been interesting proposals, and they’re clever, in a way, of saying, “Well, instead of correcting by one second every so often, why don’t we make at least a pro forma leap minute that would happen once every few centuries? And that would at least kick the problem way down in the future.” And there might be an argument, a sensible argument for that — as science develops, there may be further corrections to the way humans keep time. And so it’s perhaps not a crazy proposal. 

Now, none of these time-adjustment solutions is universally beloved. Every one of them involves compromise and causes problems for somebody.

So maybe this is the time to mention what might be the most radical solution of all. This might freak you out, so I hope you’re sitting down. Preferably reclining. Ready? 

Maybe it doesn’t matter if the earth clock and the atomic clocks drift apart. 

I mean, what, really, is the issue? What if they do get out of sync? What’s the worst that could happen?

[music]

Sure, we’re used to certain times of day matching up with certain numbers on our watches. If you’re used to seeing the sun directly overhead at 12 noon, maybe you’re disturbed by the notion of a future where the sun is actually low to the horizon at 12 noon.

But first of all, that noticeable a difference would take thousands of years to come about. And second, keep in mind that we deliberately set our clocks off that familiar pattern by a whole hour every year! It’s called Daylight Saving Time. If we value the “12 noon sun overhead” thing so much, why on earth do we go out of our way to mess it up by a whole hour? 

This radical notion has occurred to the international time-keeping bodies, too. Here’s Google’s Peter Hochschild.

Peter Every decade or so, there’s a fairly serious discussion in one of the international standards bodies. Should we keep leap seconds or should we stop doing them?  

And sure enough: the ITU, the United Nations’ International Telecommunication Union, periodically polls its members on whether or not to abolish the leap second. They go around this huge auditorium and let each country’s representative make its case.

UN audio

US: The use of leap seconds introduces discontinuities into what would otherwise be a continuous time stream. 

UK: In the view of the UK, leap seconds have already been inserted without causing difficulty.

Canada: Canada does not see any compelling reason to change its definition.

Chair: I believe that the appropriate course of action is that we return this draft revision for further work.

US: Thank you, Mr. Chairman.

They went through this debate in 2005…in 2008…in 2012…and in 2015. And every single time, the members of this august body made the same decision, which is —not to decide. To kick the decision down the road. The vote is scheduled to come up again in 2023, at which point it will probably be postponed again.

In the meantime, our species will continue to use leap seconds, leap smears, and maybe other approaches to keeping the earth’s clock and our computer clocks in sync. It’s a challenge that will only get harder as our atomic clocks get better. Here’s Geoff Chester from the Naval Observatory.

Geoff Our most recent clocks are what we call rubidium fountain clocks. These things are really cool. These are clocks which are designed to incorporate five Nobel prizes in physics.

We can provide a time scale that’s precise down to about one hundred picoseconds. That’s a hundred trillionths of a second. But I can guarantee you that somebody is going to find a need for a more precise time scale. And that’s why we are now building these prototype optical frequency standards. 

[music starts fading in]

In a cesium atomic clock, we use microwaves to bombard cesium atoms. But in the optical clock he’s talking about, we’ll shoot light at atoms—laser beams—for an astonishing leap in precision. 

Geoff:  Optical frequencies are five orders of magnitude higher than microwave frequencies.  

(continued) So I would say that in 10 years, we are going to have optical frequency standards, we are going to redefine the second. And that means I will have to memorize a number with five more digits in it. So I think I’m going to retire before that happens, because my brain just won’t absorb that anymore. 
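Geoff's "five orders of magnitude" is quick to verify: the second is defined by the cesium-133 hyperfine transition at 9,192,631,770 Hz, while optical-clock transitions, such as strontium's at roughly 429 THz (an approximate figure used here for illustration), tick tens of thousands of times faster.

```python
import math

# Quick sanity check on "five orders of magnitude." The cesium frequency
# is exact (it defines the SI second); the strontium optical transition
# frequency is quoted only approximately.
CESIUM_HZ = 9_192_631_770      # microwave standard, exact by definition
STRONTIUM_HZ = 4.29e14         # optical standard, approximate

ratio = STRONTIUM_HZ / CESIUM_HZ
print(f"optical/microwave ratio is roughly {ratio:,.0f}x")
print(f"orders of magnitude: about {math.log10(ratio):.1f}")
```

A faster "tick" divides time more finely, which is where the leap in precision comes from.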

[THEME MUSIC]

CREDITS

How the Cellphone was Born: Three Months of Craziness

Season 1 • Episode 8

In the early 1970s, “mobile phones” were car phones: Permanently installed monstrosities that filled up your trunk with boxes and, in a given city, could handle only 20 calls at a time. Nobody imagined that there’d be a market for handheld, pocketable cellphones; the big phone companies thought the idea was idiotic. But Marty Cooper, now 92, saw a different future for cellular technology—and he had 90 days to make it work. A story of corporate rivalry, Presidential interference…and unquenchable optimism. 

Guests: Marty Cooper, father of the cellphone. Arlene Cooper, technology entrepreneur.

Episode transcript

Marty Cooper Cell Phone Inventor 

Theme begins.

When Marty Cooper dreamed up the idea for an invention called a cellphone in 1973, it wasn’t a popular idea.

POGUE: You’re telling me people thought that the cell phone was a dumb idea?

MARTY: Absolutely. No– (LAUGH) no question about it.

At 92, Marty Cooper considers today’s cellphone only the crudest precursor of what’s to come.

MARTY: Oh, David, we have– are only at the very, very beginning. We are going to revolutionize mankind in many ways. 

I’m David Pogue, and this is “Unsung Science.”

BREAK

Season 1, Episode 8: How the Cellphone Was Invented.

Now, unfortunately, I don’t have some great chronological milestone to justify why we’re doing this topic now. This isn’t, like, the 50th anniversary of the cellphone—it’s only the 48th. It’s not the 100th birthday of the guy who invented it—he’s only 92. 

But I do have one little news hook: That inventor, Marty Cooper, has just published his memoir. It’s called “Cutting the Cord,” which is a title he hates.

POGUE: (LAUGH) You don’t like the title?

MARTY: Well, it turns out it was not original. And several other people used it. I didn’t know that at the time. I’d like to think I’m a good amateur marketer. But I didn’t– I’m not a good book marketer.

The other reason for dedicating an episode of “Unsung Science”—which is a title I love, by the way—to this story is that Marty is an exceptionally cool, smart, funny, humble, thoughtful dude. This world can always use more Marty Cooper.

So let’s begin at the beginning: Marty’s childhood in Chicago. 

POGUE: When you were a little kid, did any of the signs of your current personality exhibit themselves?

MARTY: I spent a lot of time alone when I was a child. My folks actually had a grocery store at that time in their lives. And, of course, they both had to work– at this thing. So I spent time alone and became a very avid reader. And even at the age of eight or nine years old, I thought automobiles were wonderful. I just loved the– I knew every model, year, and every feature on every car.

I ended up going to what they called a technical school. I think they would call it a trade school now. And yet I got a very good education in liberal arts, and at the same time took a shop every year: woodshop, metal shop, forge, foundry.

And I– I can’t tell you how valuable those kinds of things were. I still get a thrill out of fixing things. When I fix an appliance or program the lights in the house, I get instant gratification. 

When he was about 18, he found out that the U.S. Navy was offering a fantastic deal. They’d pay for college tuition, books, and incidental expenses—if Marty would agree to spend three summers with the Navy, and then three years after graduation. He loved the experience. “My time in the military taught me about leadership, responsibility, and getting along with people,” he writes in his book.

Those traits came in handy a few years later, when he was an executive at Motorola, the leading maker of two-way radios for police, taxi companies, and the military. Its bread and butter was car phones. 

POGUE: So these– these car telephones were not cellular car telephones?

MARTY: That’s correct. 

They were literally two-way radios. Half-duplex audio, in other words: both people couldn’t talk at the same time. Like, you’d say, “Hi, honey—I’ll be home late, over.” And honey would say, “I’ll start dinner without you, over.” 

They were also not what you’d call mobile phones—apart from, you know, being part of your car. The car phone’s electronics fit into what looked like three big suitcases; they had to be wired into your trunk.

MARTY: Weighed 30 pounds. And there was a huge cable about this big around that went from the trunk to the front, and then there was a con– what we call a control head– with the dial and the stuff. And then there was a speaker off in a corner, and there was a microphone coming off. So the– just the installation of this thing alone was– a major job.

But there was a bigger problem with car phones: Calling capacity. 

MARTY: They had one transmitter in a city, and– and a very limited amount of radio channels. And so you could only serve so many people. 

If you tried to make a phone call during the middle of the day, you could never get an operator. The chances were– one in 20 that you could make a phone call, that’s how bad that– service was with the car telephones. It really was not a mass– product.

Now, the cellular network is quite different. Today, we’ve got cellular antenna clusters, known as cell sites, on towers all over the U.S.—over 415,000 of them. Your call gets handed off from one cell site to another as you move around—a system that drastically increases the number of calls that can be going on simultaneously.
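The capacity win comes from frequency reuse: cells far enough apart can use the same channels at the same time without interfering. Here's a toy model of that arithmetic; all channel and cell counts are hypothetical, chosen only to contrast with the old 20-call citywide systems.

```python
# Back-of-the-envelope frequency reuse (all numbers hypothetical):
# a single citywide transmitter carries only as many calls as it has
# channels, but a cellular grid reuses channels in every cell that is
# far enough from its co-channel neighbors.

def cellular_capacity(total_channels: int, reuse_factor: int, num_cells: int) -> int:
    """Simultaneous calls a simple cellular layout can carry.

    reuse_factor: cells per reuse cluster (e.g. 7 in classic hex layouts);
    each cell gets total_channels // reuse_factor channels.
    """
    channels_per_cell = total_channels // reuse_factor
    return channels_per_cell * num_cells

single_transmitter = 20   # the old car-phone system's citywide limit
cellular = cellular_capacity(total_channels=140, reuse_factor=7, num_cells=50)
print(cellular)                        # 1000 simultaneous calls
print(cellular / single_transmitter)   # 50.0 times the capacity
```

Splitting cells into smaller cells raises `num_cells` without needing any new spectrum, which is why capacity keeps growing on the same airwaves.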

An engineer at Bell Labs had dreamed up this idea way back in 1947. Whereupon it had been promptly forgotten.

MARTY: And they put this idea in the drawer and somebody 22 years later pulled it out of the drawer and said, “Hey, maybe we should– execute this.”

Cut to 1969. Bell Labs is now the research division of AT&T. AT&T wants to expand its carphone business—and get around that awful capacity problem. So it dusts off the cellular proposal and approaches the government about getting a monopoly on this new technology.

MARTY: They went to the FCC and said, “We want to continue our monopoly in telephones.” So they concluded that they were gonna build this new system– that they called cellular. And I was at Motorola at the time, and we objected to both of those.

a– Bell system was gonna come along and they were gonna take over our business as well as this whole new thing, and do it wrong. Do (LAUGH) it with– with car telephones!

People had been– had been wired to their desks in their kitchens for over 100 years. And now they’re gonna wire us to our cars where we spend 5% of our time.

Motorola was really worried. If the FCC gave AT&T an exclusive on the cellular airwaves, that would be the end of Motorola’s primary business. The company desperately wanted the FCC to open up cellular to competition—not to give AT&T a monopoly. I should point out that at the time, AT&T was the world’s biggest corporation. 

Marty Cooper wanted to show the FCC the kind of potential that cellular might have beyond car phones—if there were competition in the marketplace. As he saw it, these phones could one day be battery-powered, and fit in your pocket! You could carry one around with you! RADICAL!

POGUE: So you were proposing, back in 1973, that– s– cell phones should be completely untethered, not part of a car, but in your pocket. Why wasn’t everyone saying, “That’s the greatest idea I’ve ever heard. We’ll sell hundreds of billions”?

MARTY: It turns out that people are not very good at predicting th– the future, in general. 

POGUE: Y– you’re telling me people thought that the cell phone was a dumb idea?

MARTY: Absolutely. No– (LAUGH) no question about it.

POGUE: Well, I guess it takes a dreamer-slash-executive to bring it about.

MARTY: Well, when you think about it, at the time, the internet hadn’t been invented yet. There were no digital cameras. The large scale integrated circuit hadn’t been created. The lithium ion battery had not been created.

So the idea that you could put all these things together in a box, you really had to have a little bit of imagination. 

There’s a great quote by Joel Engel, the engineer who ran the Bell Labs car phone program—basically, Marty Cooper’s arch-rival. Engel said in 2007, “None of us—the FCC, Motorola, AT&T, anybody at that time in the 70s, did not anticipate these things. We thought the business was going to be purely business usage—real estate agents, home repair, people who were in their vehicle a lot. We didn’t anticipate teenage kids using cellular phones. We didn’t anticipate personal residential use. We also didn’t anticipate they’d be handheld pocket-sized units. We completely missed the individual usage.”

Given that mindset, how could Motorola possibly convince the FCC that a pocket phone could be a thing—if the FCC would just open up the airwaves to competition?

MARTY: So I thought about, “How could we do a dazzling demonstration?” The only way to do it is to have a working something.

Marty decided that the most direct way to spark the FCC’s imagination—was to build an actual working cellphone, thereby leaving nothing to the FCC’s imagination.

There was only one problem: The FCC hearing about AT&T’s petition was only three months away. Marty began tearing around Motorola, from one department to another, to build this thing.

MARTY: And the first guy that I went to was not the engineers, it was the industrial designer, Rudy Krolapp.

And I told Rudy, “We’re gonna make a cellular phone.” (LAUGH) And his reaction was, “What’s a cellular phone?” So I– and I described that to him, and he stopped working on anything else. He took his whole team of people and assigned them to conceive of what a handheld personal phone might look like.

POGUE: You actually figured out what it would look like before you had what would go into it?

MARTY: Absolutely.

POGUE: Isn’t that backwards?

MARTY: Th– well, it– that’s what this was. We were tryin’ to get people excited about this thing. 

You’ve probably seen pictures of the winning design. It’s this rectangular beige block, like a Soviet Army field telephone or something. Or, as Marty says, like a shoe.

MARTY: The phone that we ended up picking was the simplest one. Looked like a shoe, but it was one piece. We knew if we made something with l– with– complications, it would break.

The thing is, the original design was tiny! I got to handle the original model that the designers gave Marty—it’s like five inches tall! Like they’d taken the one you’ve seen in pictures and blasted it with a shrink ray.

POGUE: W– wait a minute. This isn’t a miniature, this is what they actually had in mind?

MARTY: That’s exactly right. (LAUGH)

POGUE: It’s a tenth the size of the final one.

MARTY: Yeah, well, that’s– the reason for the increase in size is exactly here. 

He showed me a huge glob of circuit boards and wiring.

POGUE: So they had to fit all this stuff into–

MARTY: Of this phone– this is everything in the phone except the battery.  

POGUE: So the designers–proposed this. And by the time you put all that stuff in, it wound up–

MARTY: It grew to this size. (LAUGH) 

Now that Marty had the shell, Moto engineers had to design the guts.

MARTY: They assigned their top engineer, who is a fellow named Don Linder. And Don says, “I don’t think that can be done. (LAUGH) And certainly not in three months.” And I persuaded him to try. I used– my management style, which was different. In other words, I gave him a big hug. (LAUGH)

We gave him carte blanche, as many people as he wanted to get. There was a crew of 20 people working on this device.  I was his go-fer. He needed a piece of technology, a new filter, a new integrated circuit, and I was running around the corporation. I knew where everything was.

And these guys did it. In three months, they actually demonstrated a working unit. It was just wonderful.

POGUE: And what kind of battery life did the phone get?

MARTY: You could talk for 25 minutes (LAUGH) before the– before the phone ran down. 

Marty Cooper also made history by making the first public cellphone call. It was April 3rd, 1973. It was a PR stunt. One of the network morning shows was supposed to film this big moment on the streets of New York, but wound up canceling at the last minute. (These morning TV people, you know? Jeez!) 

MARTY: So our PR people were– h– in deep trouble. They just scrounged around. They told me that they had this replacement.  

So we met this guy on Sixth Avenue in New York, in front of the Hilton.  I thought, “You know, I’m gonna call my counterpart in the Bell system.” And I looked up the number of Joel Engel, who ran the Bell system car telephone program.

POGUE: This is your arch rival.

MARTY: Yeah, he was. (LAUGH) He’s still not very fond of me, by the way. And I said– “Hi, Joel, it’s Marty Cooper.” He said, “Hi, Marty.” Very polite. And I said– “Joel, I’m calling you on a cell phone, but a real cell phone, a personal, handheld portable cell phone.” Silence on the other end of the line.  Joel does not remember that conversation to this day. And I– I guess I don’t blame him. (LAUGH)

POGUE: Well, I mean, you were rubbing your heel in his face, in a way.

MARTY: Yeah. Well, I– he deserved it. (LAUGH)

A few weeks later, Marty gave a similar demo to the FCC commissioners. They rode in a Motorola van, making cellphone calls as they drove around Washington—and their calls never dropped! That’s because Motorola had installed three cell towers around the city, and carefully mapped out a route that would always remain within their range. 

So? Did it work? Did Marty Cooper’s crazy gambit of creating one single working cellphone convince the U.S. government not to give AT&T a monopoly?

As if you didn’t know!

After the break—I’ll give you the details.

BREAK

Before the break, I was telling you how Marty Cooper ran around Motorola, getting buy-in from the various departments, to produce a working cellphone in three months. The idea was to convince the FCC to open the cellular airwaves to competition—to prove that competition leads to innovation. And above all, not to give AT&T an exclusive on this new tech.

OK—and now, finally, the big punch line. Did Motorola’s stunt work? Did the working cellphone prototype convince the FCC? Here’s Marty Cooper again. 

MARTY: When the FCC finally made their decision, they actually allowed half of the telephones to be built by the Bell system, and the other half to be done by independent operators.

POGUE: So you did all this for the benefit of Motorola, your employer?

MARTY: Of course.

POGUE: But as a side benefit, you opened up the entire world of cell phones to the marketplace. You ensured that it wouldn’t be an AT&T/Bell Labs monopoly.

MARTY: Well, that’s right.

But the cellphone era didn’t exactly get under way immediately. 

MARTY: It took over ten years to get the technology right and get the FCC to decide who was gonna provide the service. So the first actual service didn’t happen until October of 1983, ten years later.

At one point during that decade of waiting, Motorola’s DynaTAC phone was ready to go—but the FCC was still dithering over how to regulate the new industry.

Motorola founder Bob Galvin went straight to the top—he showed the working phone to the Vice President. Of the United States. George H. W. Bush.

MARTY: And Bush called his wife. And he– and he said to Bob, “You know, Ron’s gonna look at this.” (LAUGH) And the next thing you know, everyone’s there in the office with Ronald Reagan. And Re– and Reagan called Nancy. (LAUGH) and he says to– George, “George– why don’t we have this?” And George says, “Well, the FCC is kinda dragging.” He says, “Would you call them and tell them to get this thing on the road?” (LAUGH) And within a couple of months they (LAUGH) made a decision, but it took that kind of a thing to– to make it actually– happen.

And presto: In 1983, you could buy an actual, portable, battery-powered, wireless, pocketable cellphone —well, coat-pocketable.

MARTY: They cost $4,000 in 1983 dollars, which would be like having a $10,000 cell phone today. So there were not a lot of sales, but they were sold. With time, as the system developed, within ten years, you couldn’t buy a car telephone anymore. All the phones were now handheld.

There are more phones– more cell phones in the world today than there are people. Two-thirds of people on Earth have cell phones. That’s an amazing number. 
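Marty's 1983-to-today price conversion holds up under a consumer-price-index ratio. The CPI-U figures below are approximate annual averages supplied for illustration, not official quotes.

```python
# Rough CPI check of the "$4,000 then is like $10,000 now" conversion:
# price_then * CPI_now / CPI_then. Both CPI values are approximate.
CPI_1983 = 99.6     # approximate CPI-U annual average, 1983
CPI_2021 = 271.0    # approximate CPI-U annual average, 2021

price_1983 = 4_000
price_today = price_1983 * CPI_2021 / CPI_1983
print(f"${price_today:,.0f}")   # on the order of Marty's $10,000 figure
```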

POGUE: Did you get fantastically wealthy from inventing the cell phone?

MARTY: No. Why, as a matter of fact– when I joined Motorola in 1954,  I assigned to Motorola all the intellectual property that I might come up with, all the inventions, ideas to Motorola for $1. And, David, that was the best deal I ever did in my life.

POGUE: Best deal?!

MARTY: I– it was. Motorola treated me wonderfully. And– they allowed me to have a productive career, and I have been thankful to the– the– all of the managers and people– at Motorola who propagated that environment. 

I should mention that I was talking to Marty at his home in Del Mar, California—an absolutely gorgeous house directly on the beach. Bill Gates has a home a few doors down. 

Marty’s something of a fitness nut—even at 92, he does weights three times a week, and often walks along this beach, where we chatted about his book—and his movie.

POGUE: So I understand that your book has been optioned for the movies?

MARTY: Yeah, it has, by a guy named Dana Brunetti, who did the– the House of Cards. And– and he did The– Social Network movie–

POGUE: Well, who’s gonna play you in the movie?

MARTY: I was hoping that you would do it, David. (LAUGH) You– you– you’re the only star that I know. So–

POGUE: I could be persuaded. (LAUGHTER)

MARTY: You would– you wouldn’t do it as a privilege (LAUGH) to play me? I thought that at least you could do.

POGUE: Have your people talk to my people.

MARTY: Yeah, right. (LAUGHTER) 

Anyway, the point is, despite signing away all his intellectual rights to Motorola for a dollar, Marty is not exactly hurting.

POGUE: (LAUGH)  So if I can ask, so the– the– the beauty and the beachfront house… is this from your subsequent businesses, the income?

MARTY: Yeah. Well, I was lucky enough to get hooked up with a wonderful woman, and we’ve created a partnership. And we’ve been starting businesses.  We’ve had some failures over the way, but we’ve had enough successes. So the world has treated us very well.

That’d be Marty’s second wife, Arlene Harris, a technology innovator in her own right. Marty left Motorola in 1983, and married Arlene in 1991. Together, they’ve founded a string of companies in the cellular industry. 

ARLENE: And so I met Marty at a conference in Carmel– in 1979.

MARTY: At which I was speaking.

ARLENE: He was speaking. He was a bigwig coming in from Motorola, Chicago. You know, the– the– the guy that everybody sort of had big eyes about. And he came in and told us his prognostications about what cellular was gonna be, it was an inspirational talk. 

POGUE: Were you starstruck?   Were you impressed by his intellect at his talk?

MARTY: I can’t speak for Arlene, but I was star struck (LAUGH) with Arlene. We– we– c– started out with a minor conversation in the bar.  And that conversation has been going on for 42 years, still going on. 

POGUE: Isn’t the general advice for relationships not to work with your spouse?

MARTY: –we don’t agree about– everything. But– you know, that’s the spice of life is disagreement, as long as your– if it’s friendly. 

POGUE: (LAUGH) But it seems like, if there’s a technological dispute, can’t you just go, “I’ll have you know I’m the father of the cell phone.” Wouldn’t you automatically win?

ARLENE: No. (LAUGHTER)

One of their companies created the Jitterbug phone, designed for seniors, now owned by Best Buy. My dad used a Jitterbug phone for a while.

ARLENE: The whole idea was to simplify it. Big buttons and a screen that had larger fonts. It was just a phone—a phone and nothing else.

MARTY: With the Jitterbug phone, you would open the flip. And if you had a dial tone, you had a signal. And if you didn’t–, you didn’t. That was an example of simplicity.

Now, Marty Cooper seems like an affable, easygoing guy. You might not immediately think of him as a rabble-rouser, a guy who throws bombs at the establishment. But he’s got one opinion that infuriates the executives and lobbyists for quite a few billion-dollar corporations.

MARTY: The myth is that radio frequencies are like beachfront property: Once you use it up, it’s gone. Total myth.

POGUE: Wai– wait, wait, wait, wait, (LAUGH) wait, wait, wait. You’re talkin’ about spectrum. We hear about the FCC auctioning off blocks of frequencies called spectrum auctions, right? And–

MARTY: For billions of dollars.

POGUE: Right, because nature only gave us a fixed number of frequencies, and everybody wants ‘em. Radio, and television, and cellular, and the military—they’re all fighting over this limited, finite number of spectrum bands. We all know that.

MARTY: Politicians know that. (LAUGH) But we engineers know that, when Marconi started out, he– he did the first commercial radio.

That was the beginning. And he used up 100% of the radio spectrum doing that. And then the engineers came along and they figured out how you could have two people on Earth talking at the same time. And keep increasing that number. And then different technologies for squeezing more bits of information, more voice, into less and less spectrum.

And we’ve been doing that to the extent that we have doubled the capacity of this radio spectrum that we’ve been talking about. We have doubled it every 30 months for 120 years.

POGUE: What?

MARTY: It actually is a ten-trillion-times increase in capacity between Marc– what Marconi did, and where– what we’re doing today. Part of it is that we’ve been going higher and higher and higher in frequency, but that’s a very small part of it.

It’s —we just have learned how to be much more efficient. 
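Marty's doubling claim compounds quickly, and the arithmetic is easy to check: a doubling every 30 months is 40 doublings per century, and 2^40 is already about 1.1 trillion. A quick sketch (the 30-month period is Marty's figure; the rest is plain arithmetic):

```python
# Compounding check on Cooper's Law: capacity doubles every 30 months.
def capacity_growth(months: float, doubling_period_months: float = 30) -> float:
    """Total capacity multiplier after the given number of months."""
    return 2 ** (months / doubling_period_months)

# 100 years = 1,200 months = 40 doublings:
print(f"{capacity_growth(100 * 12):.3e}")   # 1.100e+12, about a trillion-fold
# 120 years = 48 doublings:
print(f"{capacity_growth(120 * 12):.3e}")   # 2.815e+14
```

Like Moore's Law, the striking part is not any single doubling but the relentless compounding over decades.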

POGUE: Marty, I have been a technology reporter for 30 years and the fact that spectrum is precious and limited is– it’s been a given. Like, we know this.

MARTY: Well, you can see why I am ridiculed by most of society. (LAUGH) But the people that understand do subscribe to what I see. They call the law of spectrum capacity Cooper’s Law. 

POGUE: Is it something like Moore’s Law?

MARTY: It’s exactly the same as Moore’s Law and the basis of it is that we’re so inefficient now that we have lots and lots of room to grow. 

With the law of– of spectrum capacity, we’re only in the beginning. We– we can go a trillion times more in capacity by just using radio– and– computing technology.

POGUE: I guess we haven’t run out of it yet. That’s true.

MARTY: You know, you are so smart. Now I know (LAUGH) why you make all the big bucks, David. We’ve never run out. We keep increasing the number of people that are benefiting– from this by orders of magnitude. And yet, we still don’t run out of spectrum. 

Marty is confident that the scarcity of radio spectrum is a myth—that new technologies will always let us keep ahead of demand.

MARTY: And I’ll give you another example. W– the towers that you– we’ve been talking about are all outside. Guess where most of our phone calls are?

POGUE: Inside. (LAUGH)

MARTY: And we– so we put out huge amounts of energy to penetrate our houses and buildings. In the future, we’re gonna be putting the cell sites in the buildings, little tiny cell sites. But at some point, all these things are gonna be connected to each other– and much, much more efficient and much lower cost. And it turns out that there will be an infinite amount of radio spectrum.

Marty does a lot of that…you know, thinking about the future. 

POGUE: One of the most surprising things you wrote in your book, to me, was that we are only at the dawn of the cell phone. 

MARTY: Oh, David, we have– are only at the very, very beginning. There are– we are going to revolutionize mankind in many ways.

We now know that we can put a device in your ear or on your earlobe, under your skin, that has a computer in it. And you can call it, I can talk to that computer. And I’ll call my computer Sam and I’ll say, “Hey, Sam, get– David on the phone for me.”

And when we talk about health care, you will have sensors on your body, maybe under your skin.

And when fluid starts accumulating in your lungs, if that ever happens, that is a– the– precursor of a heart attack. If you know you’re gonna have a heart attack, you can stop it. Just think about that.

Having what we call a cell phone now can eliminate congestive heart failure, which is like the third– highest cause of death– in people. And that– technology exists today. It’s not in the future. 

Ultimately, it will be able to sense a few s– cancer cells. And as soon as those cancer cells appear to be getting out of control, you go to the hospital, go to a doctor, or someday you’ll be able to do it yourself. Zap the cells, cancer is gone.

POGUE: I mean, you’re talking about implanting technology in our bodies. I would normally say, “Come on, dude. That’s absurd.” The only problem is you’ve been right before. (LAUGH)

MARTY: Well, we do that all the time. Pacemakers we do that. Reckon that now that’s gotten to be– a very routine kinda something. 

But that’s just healthcare. Once everybody has a cellphone and internet access, there’ll be many, many more aspects of life that can improve.

MARTY: I believe that the whole process of education is going to be revolutionized. That having access to the internet, the– role of a teacher is going to change. Teachers are not gonna be just communicating information. Kids can do that for themselves. The teacher will be– advising people, teaching them how to use the tools. 

I know I sound like an optimist. But poverty is going to be a thing of the past. There is no reason for anybody in today’s society to be poor. 

The– the United Nations determined that, in Africa alone– over a period of 20 years, 1.2 billion people moved out of severe poverty largely because of the cell phone.

POGUE: What’s the mechanism?

MARTY: The mechanism was these people– poor people have no way to deal with money. They c– have no way to save money, they have no way to transfer money from one place to another. And people came along and invented– there was a system called M-Pesa where you didn’t need a bank to save money or to move money from one place to another.

It– this has stimulated entrepreneurism. Just that fact just moved–over a billion people out of poverty.

There were people– who– would loan money to a woman in a village in India so she could buy a cell phone, which she would rent out to the local farmers or the local fishermen. And they could call the neighboring villages and find out where there was a market– and increase their efficiency. Those are the real indicators of what the future of the cell phone is and– and the way the cell phone is– is helping society. It is making us more efficient, more productive.

POGUE: Here’s what I find strange, Marty.  I know this is a stereotype, but as a 92-year-old guy, I might expect you to relish the stories from the past more than the s– the stories of the future.

MARTY: Well, my story of the past is that I have observed that things in the past have continued to improve. But if you examine every metric that exists today, we are better off today than we have been in the past. 

People are– are richer today. They are healthier today. There is more freedom today. There is more tolerance today than there has ever been before. We’ve still got a lot of problems– but there’s no reason to think that we aren’t gonna keep improving.

UNSUNG SCIENCE with David Pogue is presented by Simon & Schuster and CBS News, and produced by PRX Productions.  

Executive Producers for Simon & Schuster are Richard Rhorer and Chris Lynch.  

The PRX production team is Jocelyn Gonzales, Morgan Flannery, Claire Carlander, Pedro Rafael Rosado and the project manager is Ian Fox.

Jesi Nelson composed the Unsung Science theme music, and fact checker Kristina Rebelo positioned herself nobly between my scripts and certain humiliation.

For more on Unsung Science episodes, visit unsungscience.com. Go to my website at David Pogue.com or follow me: @Pogue on your social media platform of choice. Be sure to like and subscribe to Unsung Science wherever you get your podcasts.

How Apple and Microsoft Built the Seeing-Eye Phone

Season 1 • Episode 7

Your smartphone can see, hear, and speak—even if you can’t. So it occurred to the engineers at Apple and Microsoft: Can the phone be a talking companion for anyone with low vision, describing what it’s seeing in the world around you?

Today, it can. Thanks to some heavy doses of machine learning and augmented reality, these companies’ apps can identify things, scenes, money, colors, text, and even people (“30-year-old man with brown hair, smiling, holding a laptop—probably Stuart”)—and then speak, in words, what’s in front of you, in a photo or in the real world. In this episode, the creators of these astonishing features reveal how they turned the smartphone into a professional personal describer—and why they care so deeply about making it all work. 

Guests: Satya Nadella, Microsoft CEO. Saqib Shaikh, project lead for Microsoft’s Seeing AI app. Jenny Lay-Flurrie, Chief Accessibility Officer, Microsoft. Ryan Dour, accessibility engineer, Apple. Chris Fleizach, Mobile Accessibility Engineering Lead, Apple. Sarah Herrlinger, Senior Director of Global Accessibility, Apple.

Episode transcript

Intro

Your smartphone can see, hear, and speak—even if you can’t. So the accessibility engineers at Apple and Microsoft wondered: Could the smartphone ever be smart enough to serve as a talking camera for people who are blind or have low vision? Could it describe what it’s seeing in the world around you, or photos in front of you?

App: A group of people sitting around a table playing a board game. 

Jenny: Seeing AI is one of the most incredible, revolutionary products I think we’ve ever put out there. I get emotional when I think about what employees have created.

Today, the origin stories of two amazing accessibility features from Microsoft and Apple. I’m David Pogue, and this is “Unsung Science.”

Season 1, Episode 7: How Apple and Microsoft Built the Seeing-Eye Phone. We’re releasing this episode on December 3, the International Day of Persons with Disabilities, which the United Nations created in 1992. And the reason that’s so appropriate will become obvious within the next 90 seconds. 

About eight years ago, I was hired to host a panel at a corporate event. And backstage, I spotted one of the other panelists waiting. She was using her iPhone in a way I’d never seen before. 

App: Voiceover fast

Her screen was off—it was just black—and she was sliding her finger around. She was using VoiceOver, Apple’s screen-reading feature for blind people like her. It speaks the name of every icon, button, list item, and text bubble beneath your finger. Over time, she’d gotten so good at it that she’d cranked the speaking rate up. It was so fast that I couldn’t even understand it.

App: Voiceover fast

She gave me a little demo—and it occurred to me that her phone’s battery lasts twice as long as mine, because her screen never lights up—and of course her privacy is complete, since nobody can see anything on her screen. (She usually uses earbuds.)

But VoiceOver was just the iceberg tip. Turns out Apple has essentially written an entire shadow operating system for iPhones, designed for people with differences in hearing, seeing, muscle control, and so on. The broad name for these features is accessibility. 

Now, in popular opinion these days, the big tech companies are usually cast as the bad guys. But Apple and Microsoft have entire design and engineering departments that exist solely to make computers and phones usable by people with disabilities. And they’re totally unsung.

Sarah Our first Office of Disability was actually started in 1985, which was five years before the Americans with Disabilities Act came to pass.

Sarah Herrlinger is Apple’s Senior Director of Global Accessibility. She describes her job like this:

Sarah My job is to look at accessibility at the 30,000-foot level, and make sure that any way that Apple presents itself to the world, that we’re treating people with disabilities with dignity and respect. 

David What is the business case to be made for designing features that are, at their, at their core, intended for a subset of your audience? 

Sarah You can look at public statistics that tell us that 15 percent of the world’s population has some type of disability. 

— that number grows exponentially over the age of 65.  Whether you think so or not, you’re probably going to be turning on some of these features. 

But for us, we don’t look at this as a traditional business case, and don’t focus on ROI around it. You know, we believe it’s just good business to treat all of our customers with dignity and respect, and that includes building technology that just works no matter what your needs might be. 

I think you might be surprised how far these companies go. Like on the Mac: they’ve added this entire spoken interface, so that you can move the mouse, click, drag, double-click, Shift-click, open menus, type, edit text—all with your voice. Same thing on the iPhone and iPad. Very handy if you can’t use your hands—whether they don’t work, or just because they’re full of groceries. Or greasy. 

Here’s the Apple ad that introduced this Voice Control feature. It’s basically a demo by outdoor enthusiast Ian Mackay, who’s paralyzed from the neck down. Here he is, opening the Photos app and choosing a photo to send as a text message to his buddy:

APP: Voice control audio

IAN: Wake up. Open Photos. Scroll up. Show numbers. 13. Click Share. Tim. Next field. Let’s ride this one today. Thumbs-up emoji. Click Send.

Did you hear that “Show numbers” business? That’s how you tell the machine to click something on the screen that’s unlabeled—like the thumbnails of your photos. When you say “show numbers,” little blue number tags appear on every single clickable thing on the screen, and you just say the one you want to click.

IAN: Show numbers. 13.

But I mean, there are also features for people who have trouble hearing—like having your phone’s LED flash blink when you have a notification. There’s another feature that lets you use your iPhone as a remote microphone—you set it on the table in front of whoever’s talking, and it transmits their voice directly into your AirPods or hearing aids.

And there’s a feature that pops up a notification whenever the phone hears a sound in the background that should probably get your attention, like the doorbell, a baby crying, a dog or a cat, a siren, or water running somewhere. Here’s my cat Wilbur and me testing out this Sound Detection feature:

SOUND: Wilbur

And sure enough! My phone dings and says, “A sound has been recognized that may be a cat.”

If you’re in a wheelchair, the Apple Watch can track your workouts. If you’re color-blind, like me, there’s a mode that adjusts colors so you can distinguish them. If you’re paralyzed, they’ve got features to let you operate the Mac with a head switch, blink switch, joystick, or straw puffer. They’re even trying to make the Apple Watch useful if you can’t tap it. Here’s Sarah Herrlinger:

Sarah We brought Assistive Touch to Apple Watch as a way for individuals who have upper body limb differences. So someone who might be an amputee, or have another type of limb difference, to navigate and to use the device — without ever having to touch the screen itself. 

I tried it out. Each time you tap two of your fingers together, on the arm that’s wearing the watch, the next element on the watch’s screen lights up. Tap, tap, tap—and when you get to what you want to open or click, you make a quick fist to click it. 

And it’s not just Apple. A few years ago, for a story on “CBS Sunday Morning,” I interviewed Microsoft CEO Satya Nadella. And at one point, he described how his life changed the day his son Zain was born—with quadriplegia.

NADELLA: Even a few hours before Zain was born, if somebody had asked me, “What are the things that you are thinking about?” I would have been mostly thinking about “How will our weekends change?” and about childcare and what have you.

And so obviously after he was born, our life drastically changed. To be able to see the world through his eyes and then recognize my responsibility towards him, that I think has shaped a lot of who I am today. 

POGUE: But how does something as emotional as that empathy that you’ve developed translate into something as nuts-and-bolts-y and number-crunch-y as running a huge corporation?

NADELLA: There’s no way you can motivate anyone if you can’t see the world through their eyes. There’s no way you can get people to bring their A game if you can’t create an environment in which they can contribute. But the creation of that environment requires you to be in touch with what they are seeking. What motivates them? What drives them? So as a leader, or as a product creator, you can draw a lot from, I would say, this sense of empathy.

So it’s no coincidence that later in our shoot, the one new Microsoft software product the CEO was most eager to show our cameras was something called Seeing AI. This was, by the way, a Microsoft app that runs only on the iPhone. From Apple.

Here’s a clip from that “Sunday Morning” story. Nadella and I are in a company snack bar with Microsoft engineer Angela Mills, who’s legally blind. She showed me how Seeing AI works. 

SOUND: Angela demo

MILLS:   So if I now hold it up…

DP (VO): It helps her read text…

PHONE: “Carob Malted Milk Balls.”

DP (VO): …recognize objects…

PHONE: Banana. Orange.

DP: That is oranges!

MILLS: Yup! And then…Take picture.

DP (VO): …and even identify faces.

PHONE: 49-year-old man with brown hair looking happy.

POGUE: Wow, it left out tall and handsome. But that’s pretty close! (laughter)

So today, you’re going to hear two stories: backstories of how two very cool accessibility features came to be. By what’s probably no coincidence at all, both features were invented by disabled employees at those very companies—Apple and Microsoft.

Incidentally, this is the first episode of “Unsung Science” that involves commercial consumer corporations. There are a couple of others later this season. I just want to make clear that these companies did not, do not, and cannot pay to be featured as a topic on this show; it wasn’t even their idea. In fact, I hounded them for interviews; they didn’t hound me. Sometimes, for-profit corporations make cool scientific or technical breakthroughs, too.

[pause/music]

OK. So our first story is that Seeing AI app from Microsoft. What a great name, too, right? Like seeing-eye dog, but Seeing AI, for artificial intelligence?

Anyway, the app has ten icons across the bottom; you tap one to tell it what kind of thing you want it to recognize. It can be text…

APP: Caution: Non-Potable Water.

Barcodes …

APP: Ronzoni Gluten-Free Spaghetti.

People…

APP: 30-year-old woman with red hair looking angry.

Currency…

APP: 10 U.S. dollars. One U.S. dollar.

Scenes…

APP: A bus that is parked on the side of the road.

Colors…

App: Gray and white. Red. 

Or handwriting.

App: Sorry Dad—I ate the last of your birthday cake.

There’s even a mode that uses low- or high-pitched notes to tell you how dark or bright it is where you are, or as you move from room to room.

App: (Warbles low and high)

The app can also tell you what’s in a photo.

App:   One face. A woman kissing a llama.

The subject of that picture isn’t just any woman kissing a llama—it’s my wife, Nicki. She’s always had a thing for llamas.

Anyway. The man behind the app is Saqib Shaikh (Sokkibb Shake), who Microsoft introduced in a YouTube video like this:

SAQIB: I’m Saqib Shaikh. I lost my sight when I was seven. I joined Microsoft ten years ago as an engineer. And one of the things I’ve always dreamt of since I was at university was this idea of something that could tell you at any moment what’s going on around you.

In 2014, he got his shot. Microsoft held its first-ever, company-wide hackathon—a programming contest to see which engineering team could come up with the coolest new app in one mostly sleepless week. And Saqib’s app won.

Saqib: I gotta say, it was very basic, very early.

It could read text, and do a bit of face recognition to help you identify your friends while you’re walking past, and a few other things, like recognizing colors and so forth. But describing images—that didn’t come until a year or more later.

And, you know, it got some attention, but it wasn’t really till the next year we thought, “let’s do it again.” And over time, more and more people got involved. 

In 2015, a more mature version of Seeing AI won Microsoft’s hackathon again.

Saqib So when we won the second company-wide hackathon, I told my manager, look, we have this opportunity. And he was just like, “OK, I am just going to give you two months to see what you could do full time.”

But then it never stopped being my project. And before we knew it, we were on stage with the CEO at the Build conference in 2016, which was a pivotal moment and just so incredible. 

That CEO was, of course, Satya Nadella, who brought Saqib onto the stage. 

NADELLA: It’s such a privilege to share the stage with Saqib. You know, Saqib took his passion, his empathy, and he’s gonna change the world.

[pause/music] 

Seeing AI relies on a form of artificial intelligence called machine learning.

Saqib So the way machine learning works is, you have these algorithms called neural networks, which are—you’re not giving them steps like “recognize this, do this, look for this type of color or this line.” Instead, you’re taking many, many examples, hundreds of thousands of examples, of different photos. And then someone is teaching the computer by describing it. And that could be writing a sentence about it, such as “this is a living room with such and such.” 

You’re teaching the system by giving it, “This is the real answer.” And then this so-called neural network will learn over many, many iterations that this is the concept that makes this thing over here a couch. And this is the thing that makes this other thing over here a car. 

David I see. So does the machine learning write code, or is it just a black box, and you’ll never really know why it thinks this banana is a banana?

Saqib In many ways, it is a black box where the system has learned the association between this banana and the word banana. 
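Saqib’s point about learning from labeled examples, rather than hand-written rules, can be made concrete with a toy sketch. This is nothing like the neural networks Seeing AI actually uses; it’s a much simpler learner (nearest neighbor) over invented feature numbers, but the principle is the same: you supply examples with the real answers, and the system learns the association.

```python
# Toy supervised learner (nearest neighbor), NOT Seeing AI's real model.
# The "features" below are invented numbers; the point is only that the
# labels are learned from examples, not from hand-written rules.

def train(labeled_examples):
    """'Training' here is just remembering the labeled examples."""
    return list(labeled_examples)

def classify(model, features):
    """Label a new item by whichever training example it sits closest to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda example: dist(example[0], features))[1]

# Hypothetical hand-made features: (elongation, yellowness)
model = train([
    ((0.9, 0.9), "banana"),
    ((0.2, 0.8), "orange"),
    ((0.8, 0.1), "couch"),
])

print(classify(model, (0.85, 0.95)))  # a long, yellow thing: "banana"
```

Real image models replace those two invented numbers with millions of learned parameters, which is also why, as Saqib says, the result behaves like a black box.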

Saqib even mentioned a feature I’d missed completely: feeling your photos.

Saqib You can kind of feel the little clicks around the edges of objects and you hear that, “wow, there’s a car in the bottom left and there’s a house over on the right” and “oh, there’s my friend over there,” and you’re kind of tracing a photo with your finger on a flat piece of glass. 

And sure enough: In the Scene mode, if you tap the Explore button, you can run your finger over the photo on the screen and feel little vibrations—with accompanying sounds—as your fingertip bumps into objects in the picture.

App: Move your finger over the screen to explore. Couch. Table. Chair 1. Chair 2. Monitor. Desk. Keyboard instrument.

Sometimes, Seeing AI is incredibly, freakishly specific in its descriptions. I tried to baffle it with a photo of me standing in front of this giant wall of stacked shipping containers at a seaport. I figured it’d have no chance of figuring out what they were. But here’s what it said:

App: A man standing in front of a row of shipping containers.

Wow.

OK, something harder then. My daughter Tia has this hobby, where she carves incredibly lifelike images into pumpkins. Like, four-level grayscale photographic-looking carvings. One year, she carved actress Shailene Woodley’s face into a pumpkin. And in my phone’s photo roll, there’s a shot of her pumpkin next to the Shailene Woodley picture that Tia used as a model. And here’s what Seeing AI said: 

App: Two faces—probably a collage of Shailene Woodley.

I mean, what the ever-loving—how did it know that? It knew that the woman on the pumpkin was Shailene Woodley? And that there were two Shailene Woodleys—one in a photo, and one on a squash?

On the other hand, I should do some expectation setting here: The app is not flawless, and it’s not even complete—several of the recognition features are still labeled Preview. It sometimes gets easy ones wrong:

App: Probably a white horse with a long mane.

Uh, nope—it’s very clearly a llama. And especially in barcode mode, the app often just gives you a shrug:

App: Processing. Not recognized.

Finally, it’s worth mentioning that the most impressive descriptions arrive only if you let the app send a photo to Microsoft’s servers for analysis.

App: Processing. (musical notes) Probably a cat standing on a table.

Get down from there, Wilbur!

Still, Seeing AI has gotten better and better since Microsoft released it in 2017. Saqib says he uses it almost every day.

Saqib It can range from just reading which envelope is for me in the mail. Versus for my wife. In a hotel room to see, OK, all these little toiletry bottles, which one’s going to be the shower gel and the shampoo. You don’t want to get that the wrong way round! 

A lot of these things then put me in the driving seat instead of having to ask someone for help.

David Is there anything technologically that’s holding you back from making the app the dream app? 

Saqib I would love to be able to have AI that understands time, not only that this is a photo, but what’s going on over time, like, “the man’s walking down the corridor and just picked this up.”  And there are scientists around the world solving each one of these small problems. 

[Pause]

Jenny: Seeing AI…Seeing AI is one of the most incredible, revolutionary products I think we’ve ever put out there. I get emotional when I think about what employees have created.

Jenny Lay-Flurrie is Microsoft’s chief accessibility officer. She’s deaf, so we chatted over Zoom with the help of her sign-language interpreter Belinda, which I thought was very cool. I started by asking her the same devil’s-advocate question I’d asked Apple’s accessibility head.

David Microsoft is  very proud of its work in accessibility.  From a business standpoint:  you’re doing a lot of expense and effort for a subset of potential customers. 

Jenny:  Well, I disagree, of course.  A billion people is not a subset.  Disability is an enormous part of —of community. It’s part of being human. 

David I mean, you say there’s a billion people who are helped by these technologies. Do you think the people who could use these features and these apps …know about these features and these apps? 

Jenny No, I don’t think that enough people know about what is available today with modern accessibility digital tools. So that comes back to, how do we educate and get it into people’s hands?

David Well, you could go on big podcasts and talk about it, that would help. 

Jenny Game on! Let’s do that!

After the ad break, I’ll tell you the story of a cool Apple accessibility feature I’ll bet you didn’t know existed. Meanwhile, let’s take a moment to acknowledge: Man, Microsoft and Apple both allowing their chief accessibility people to join in on the same podcast? I thought these companies were, like, arch-rivals.

But Jenny Lay-Flurrie surprised me. In accessibility, they have a kind of truce.

Jenny I would say that actually accessibility is one part of the tech industry where we don’t compete.  We collaborate.  On any given day, I’m chatting with my peers in all of the companies.  This is bigger than us. This isn’t about one company versus another. 

Inclusion is not where it needs to be, and technology is one powerful means to help address that. So, yeah, I would say this is industry-wide, and the maturity over the last five years has been incredible. Incredible! And it makes me, bluntly, just stupidly excited about where we’ll be five years from now.

I’ll be back after the ads.

Break – 

Before the break, you heard about Seeing AI, a Microsoft iPhone app that’s designed to describe the world around you for people who are blind. Apple has something like that, too. It’s an option that becomes available when you turn on the VoiceOver screen reader.

Once again, you could quibble with some of the descriptions, like in this photo of me in front of a fireplace:

Voice:              A photo containing an adult in clothing. 

Well, that narrows it down! Or how about this one:

Voice:              A person wearing a red dress and posing for a photo on a wooden bridge. 

Well, that’s all true—and the red dress part is really impressive. But I’d say the main thing in this photo is that she’s in a cave, surrounded by stunning white stalactites. And I know VoiceOver knows what those are, because in the very next picture, it says:

Voice:              A group of stalactites hanging from the ceiling of a cave.

But most of the time, VoiceOver scores with just unbelievable precision and descriptiveness. Listen to these:

Voice:              A person wearing sunglasses and sitting at a table in front of a Christmas tree. Two adults and a child posing for a photo in a wagon with pumpkins. A group of people standing near llamas in a fenced area. Maybe Nicki.

OK, what!? It not only got the llamas, but it identified my wife!? Well, I guess I know how it did that. The Photos app learns who’s in your photos, if you tell it—so this feature is obviously tapping into that feature. But it really got me there.

Oh—and VoiceOver Recognition even describes what’s in video clips.

Voice: Video: A person holding a guitar and sitting on a couch.

Chris: I totally agree that these machines are now doing things that are ridiculous.

Chris Fleizach is in charge of the team at Apple that makes all the accessibility features for iPhone, Apple TV, Apple Watch, and so on. Including that photo-describing business, whose official name is VoiceOver Recognition.

Chris (continuing): It’s a combination of machine learning and vision processing to string together a full-sentence description. And so we can grab this screenshot, feed it through this machine-learning, vision-based algorithm, pop out a full sentence in under a quarter of a second—and do it all on device.

By “on device,” he means that all of this happens right on the phone in your hand. No internet needed. No sending images off to some computer in the cloud for processing, which is good for both privacy and speed.

Chris: Before, you could have done that, but you’d have to send it off to a server to be processed in some data farm and it would take 10 seconds to come back.  

Getting to this point didn’t happen overnight, of course. 

Chris: The image descriptions—that’s something that we started working on years and years ago, when I essentially saw a prototype that someone else was working on at Apple. And they said, “Well, look at this cool thing. I can take this photo and turn it into a sentence, and it’s sort of OK.” And I said, “We need that. We need that now. How do we make that happen?”

And, you know, four years later, with the involvement of 25 different people across Apple, we finally have this on-device image description capability. 

[pause]

But we are gathered here today to hear the origin story of a different accessibility feature. Not object recognition, but people recognition. Actually, not even that. People distance recognition. 

David How do you pronounce your name? 

Ryan Dour. 

David Okay. Like the word. 

Ryan Yes. Well, I frequently hear people say “dower,” but I’m definitely not a dour person. I’m a doer. 

David Oh, that’s very good. 

So this is Ryan Dour, D-O-U-R.

David So what is your actual job at Apple? I mean, you’re not — you’re not Idea Man. 

Ryan Well, no.  I work on the software engineering Accessibility Quality Assurance team. So my job at Apple is to test and qualify all of our accessibility features, and then to make sure that those features work with many of our products. 

Ryan says that, conveniently enough, Apple’s various accessibility features for low vision have evolved at just about the right speed to keep up with his own deteriorating vision.

Ryan When I was a kid,  I had some vision, and over my lifetime, as my vision went from low vision to, well, quite frankly, no vision, there’s — Apple’s always been sort of at that forefront of technology, such that it actually followed my progress. You know, I went from actually using, you know, Close View and Zoom to, “Oh, okay, my vision’s getting to the point where I can’t really use Zoom effectively anymore. Oh, but—but here comes Spoken User Interface Public Preview right on time.” 

Now, when you can’t see, it takes some ingenuity to navigate the world—and if you haven’t been there, some of the stickiest situations may not occur to you. And for Ryan, one of the most awkward social moments is standing in lines. How do you know when it’s time to shuffle forward, even if you’ve got your cane?

[Ambi]

Ryan So imagine at a theme park, we’re in line at a theme park. The person in front of me is moved up, but I don’t want to constantly be tapping them, and there’s tons of voices and lots of chatter going around, so I don’t necessarily hear that they have specifically moved up. The person behind me now is waiting. And they’re not noticing. When they finally do notice, they’re getting annoyed. I feel like a rubber band. I’m bouncing between the person in front of me, tapping them, stopping, and then the person behind me is — if, if I don’t move up quickly enough, saying, you know, “go ahead, move up.” And imagine doing that for an hour while you’re waiting for a rollercoaster. 

With a dog, by the way,  you’re relying on your dog to wait in line. But their goal, as trained, is to actually get you around objects. So very frequently, if there’s a space, the dog will say, “OK,  it’s time to move up and around the people in front of you.” And that, that can become an issue as well.

OK. With that background, you’ll now be able to understand the significance of the meeting Ryan attended at Apple one day in the summer of 2019. 

That would be 2019, PC—pre-Covid.

Ryan We had this meeting with our video team and they were introducing us to a new technology. We all know it now as Lidar.

Lidar stands for ”light detection and ranging.” 

[music]

A lidar lens shoots out very weak laser beams—you can’t see ’em, and they can’t hurt your eyes—and measures the infinitesimal time it takes for them to bounce off nearby objects and return to the lens, like a bat using echolocation. That way, lidar can build a 3-D understanding of the shapes and distances of things around it. It’s the same idea as radar, except it uses light instead of radio waves.
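The arithmetic behind that ranging idea is simple enough to sketch. This is just the generic time-of-flight formula, not Apple’s implementation: the pulse travels out and back, so the distance is half of the round trip.

```python
# Generic time-of-flight ranging (the principle behind lidar, not
# Apple's implementation): distance = speed of light * round trip / 2.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_meters(round_trip_seconds):
    """Distance to the object that reflected the pulse."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A person two meters away bounces the pulse back in about 13 nanoseconds,
# which is the "infinitesimal time" being measured:
print(distance_meters(13.3e-9))  # just under 2 meters
```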

They put lidar on planes, aimed at the ground, to measure crop yields, identify plant species, or reveal archaeological features. They’ve used it to map the surface of the moon and the sea floor. A lot of self-driving-car features rely on lidar—and so do the speed guns that the police use to give you speeding tickets.

But in 2020, Apple built lidar into the back of its iPhone 12 Pro. The iPhone 13 Pro has it, too. The big idea was that this lidar would permit software companies to create really cool augmented-reality apps—you know, where the screen shows realistic 3-D objects in the room with you that aren’t really there, like an elephant or a sports car, even as you move around or change angles. 

OK, back to Ryan Dour’s meeting at Apple. You’ll hear him refer to haptics—that’s the little clicky vibrations that smartphones make.

Ryan: They had this demo app that would provide haptics based on how far away the person was. Or other things—like, it wasn’t just people at the time; it was just simply the output of the lidar sensor itself, provided in a haptic way.

And we all sat around, a bunch of us from the accessibility team and video team. And we thought about “what are some really great things we could do with this?” And a lot of ideas were sort of thrown around in the— in the meeting. 

  But towards the very end of it, I remember we had this lightbulb moment where I said, “Hey, wait a minute. Let’s go out into the hall.” 

And so we went out into the hall at Apple Park, and I said, “OK”–I had an engineer stand in front of me and I said, “Let’s pretend we’re at a coffee shop and you’re in line in front of me.  Whenever you want to, at random, I want you to just go ahead and walk forward.” And I held out this—this app with this haptic prototype, and I could feel like bup bup bup bup bup boop boop… and it’s like, “Oh, OK, that person moved up.”  I said, “You know what? This has some serious practical uses. I think that this is going to be something we should really consider in the future.” And that was sort of the end of that meeting. 

It sounded like a cool enough idea—just not cool enough to act on immediately. The idea was filed away. 

But then, in March 2020, Ryan’s idea got a big, hard push from a source that nobody saw coming: COVID-19. I think we can admit that for a lot of people, the pandemic wasn’t a great time. But it was even worse for people without sight.

Ryan: I was feeling a lot of apprehension in places that I’d never felt apprehension before! Like a grocery store or, you know, my local coffee shop. Just wondering, “where can I stand in the room where I’m not in somebody else’s bubble and nobody’s going to be in my bubble?” 

I was definitely not looking forward to potentially catching this, mostly because I thought, I don’t want to lose my taste. I already can’t see; I—I don’t want to lose my taste and smell.  And so, I was incredibly careful. I would say, maybe more cautious than others about trying to keep my distance, and, and also really being concerned that I didn’t want to be that vector that brings that to somebody else, either.  

And it was like, “Okay, you know what? That, that, that feature that we’d been talking about doing — why don’t we do it now? Why don’t we do it, why don’t we do it right now?” 

In other words, this idea of using lidar to detect how far away people were now had two purposes. It could help blind people know when it’s time to move ahead in a line—and it could help anybody know whether they’re observing six feet of social distancing.

But Ryan’s first experiment with a prototype of the app, back in 2020, wasn’t exactly a triumph.

Ryan So we built this prototype and I took it out onto the streets of San Francisco. And the first thing that we encountered was, “Oh my gosh, this is a cacophony of feedback!” It was detecting poles. It was detecting garbage cans. It was detecting dogs and people in cars and all sorts of things. 

I’m hearing, you know, “eight, three, five, eight, nine, six, three, five.” I’m like, whoa, okay, what’s going on here? And then with the sound feedback, it was just all over the place. You know, the beeps were close, the beeps were further apart, and it was just — I didn’t even know what object was being detected at that point.

And at that point, it wasn’t actually a useful tool.  Everything was kind of setting me off in this really weird walking pattern of — no different than if my cane had been hitting a bunch of objects that make me stop and think for a moment. 

David Right. 

Ryan And so, we started to consider, what are the other technologies we can use here? 

The answer was to rope in the Apple engineers who worked on augmented reality. 

Ryan: ARKit, which is our augmented reality software development framework for, for developers, has a feature: People Occlusion. So you may have used an augmented reality app where a body part like an arm, or even another person, gets in the way of the view, right? The objects that you’re virtually viewing in your augmented reality game or, you know, whatever the environment is — when people get into view, they, they block it off. That’s actually incredibly useful. It’s amazingly powerful, and that is part of the machine learning process.

Here’s an example of People Occlusion. There’s a super cool, free augmented-reality app called JigSpace that lets you place your choice of all kinds of 3-D objects right in front of you, as viewed on your phone’s screen: a life-sized printing press, or lunar lander, or combine harvester, or whatever. And right there in your living room, you can walk around this thing, come up close to it, and so on.

Well, suppose I’ve got this app, and now there’s a huge coral reef in my living room. Hey—it could happen.

Sound: Coral

Now, if somebody walks in front of that reef, close to your phone, their body looks like it’s passing in front of the coral wall. But if they walk far enough away from you that they should be behind the reef, the reef blocks them.
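The per-pixel logic of that effect can be sketched in a few lines. This is my own simplification, not ARKit’s actual code: for each pixel, show whichever layer the depth data says is closer to the camera.

```python
# People Occlusion, reduced to its core decision (a sketch, not ARKit):
# for each pixel, draw whichever layer is nearer the camera.

def visible_layer(person_depth_m, virtual_object_depth_m):
    """Decide what a pixel should show, given estimated depths in meters."""
    if person_depth_m < virtual_object_depth_m:
        return "person"          # the person passes in front of the reef
    return "virtual object"      # the reef blocks the person

print(visible_layer(1.5, 3.0))   # person closer than the reef: "person"
print(visible_layer(5.0, 3.0))   # person behind it: "virtual object"
```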

Turns out that’s just the sort of intelligence Ryan’s app needed to distinguish people in the environment from random clutter—and to ignore his own body parts in the scene.

Ryan For example, if I’m using my cane and my hand on my cane comes into view from the bottom, OK, we can ignore that. / I don’t need to be detecting my feet or my shirt or a finger; I need to be detecting the person in front of me. And so we’ve been able to really fine tune this so that we can pick up just the people and not the dogs and the trashcans and the poles. And this results in a fantastic tool. 

By the way—there’s a good reason that no other phones have a feature like People Detection: it really needs the lidar. Chris Fleizach’s team briefly explored reproducing the feature using only the iPhone’s regular cameras. But regular cameras don’t see the world in 3-D, the way lidar does.

Chris And so, yeah, hard decisions were made. We would have loved to have brought this to more devices, but it wouldn’t have been good enough. 

So—in the beginning, the phone thought that everything in the environment was a person. But in the end, the solution was to combine the depth information from the lidar with the screening-out abilities of the augmented-reality software kit.

If you have an iPhone 12 or 13 Pro, you can try the People Detection feature yourself. It’s hiding in the Magnifier app. You open Magnifier, and tap the Settings sprocket. Tap People Detection. You can specify your social-distance threshold, like 6 feet, and also how you want the app to tell you when it detects people nearby. 

You have three options: Sounds, which plays faster boops as someone gets closer to you and then switches to a higher note when they’re within six feet;

App: Boops

…Speech, which speaks how far away the nearest person is, in feet or meters; 

App: Speech counter

…or Haptics, which uses little vibrational clicks that play in sync with the boops. You can turn ‘em on in any combination. Here’s what it sounds like in your earbuds if you have all three turned on—in this case, as I walked past somebody in the drugstore.

App: Seven. Six. Five. Six. Seven. Eight. 

Here’s Ryan again.

Ryan So one subtle thing you might notice is that if you’re not using headphones, instead of hearing the feedback left to right across your soundstage, it will change in volume to indicate, you know, how centered the person is. 

APP: Stereo

Oh, that’s cool! So yeah. If you’ve got both earbuds in, you can hear where the person is, left to right, as they cross your path.

APP: Stereo

If you’re not wearing earbuds, the volume indicates if the person is centered in front of you.

APP: Boops
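Those two modes (stereo panning with earbuds, volume with the phone’s speaker) can be sketched as a tiny mixing rule. This is a hypothetical illustration, assuming the app knows the detected person’s horizontal bearing; none of these names or numbers come from Apple’s implementation:

```python
def spatialize(bearing_deg, has_earbuds):
    """Map a detected person's horizontal bearing (-90 = far left,
    +90 = far right) to simple stereo-pan and volume settings.
    Hypothetical sketch; not Apple's actual audio code."""
    clamped = max(-90.0, min(90.0, bearing_deg))
    if has_earbuds:
        # With earbuds: pan the boops left/right to track the person.
        return {"pan": clamped / 90.0, "volume": 1.0}
    # Without earbuds: loudest when the person is centered in front of you.
    return {"pan": 0.0, "volume": 1.0 - abs(clamped) / 90.0}
```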

Ryan has it set up so that two quick taps on the back of the phone turn the detection on, so he can whip the phone out unobtrusively and, thanks to those haptic clicks, start sensing where people are around him.

Here’s how Ryan heard the world as he approached a food truck to pick up his order. You can also hear him talking to his guide dog.

Sound: Foodtruck

Ryan It took away a lot of the apprehension, especially in places like waiting in line at the grocery store to check out. And this was especially true before we had the vaccines. 

David Oh, yeah. 

Like everybody else, Apple’s accessibility team is starting to dream of a time when this pandemic is over. What will happen to social distancing then? 

Ryan: Right now, we may be using it for keeping track of where people are, you know, what’s my bubble? But in the future, you may want to have a different threshold for how far away people are. 

So, for example, our sound feedback provides tones that indicate distance: when they’re playing very fast, the person’s very close.

 When they’re playing slower, the person’s further away. But as you cross over the threshold that you’ve set, which by default is six feet, it drops in pitch

APP: Fast-slow boops

Ryan: … and then the distance between the tones also increases until you really kind of get out of range — until that person is around 20 feet away and then they’re not detected. 

David Right. 

Ryan In the future, this is going to be useful for walking into that crowded coffee shop and looking for that quiet corner, or getting onto a subway car and figuring out, like, where’s the empty space I can go sit down?

[pause] 
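Putting Ryan’s description together: the beep rate tracks distance, the note changes as the person crosses your threshold, and detection cuts off around 20 feet. Here’s a hedged sketch of that mapping; all constants and names are assumptions drawn from the conversation above, not Apple’s code:

```python
THRESHOLD_FT = 6.0    # default social-distance threshold (user-adjustable)
MAX_RANGE_FT = 20.0   # roughly where detection cuts off, per Ryan

def sound_feedback(distance_ft):
    """Return (seconds_between_boops, zone) for a detected person,
    or None if they're out of range. Illustrative numbers only."""
    if distance_ft > MAX_RANGE_FT:
        return None  # too far away: no feedback at all
    # Boops speed up (shorter interval) as the person gets closer.
    interval = 0.1 + 0.9 * (distance_ft / MAX_RANGE_FT)
    # The tone switches to a different note once the person
    # crosses the threshold.
    zone = "inside" if distance_ft <= THRESHOLD_FT else "outside"
    return interval, zone
```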

As you’ve probably figured out, I love this stuff. Some of the cleverest, most magical work going on in Silicon Valley today is in these accessibility features—and so few people even know they exist! 

And you may be thinking, “Well, it doesn’t matter if I know about these features. I’m not disabled.” But the accessibility engineers I interviewed made one point over and over again: Almost every time they dream up a feature for the disabled community, it turns out to be useful for the wider public. 

Here’s Apple’s Sarah Herrlinger:

Sarah There are many people who don’t self-identify as having a disability. And yet accessibility can be more about productivity for those individuals. If you go under the accessibility panel on any of our devices and take a little bit of time to investigate what’s there, I would say you’re probably going to find something that will make your life easier. 

Microsoft’s Saqib Shaikh:

Saqib There’s a whole history of disability as a driver of innovation. And I could tell you a dozen stories. The fact that our phones can talk to us: this was part of the invention of the first reading machine. 

Same with voice recognition; that was for people with physical impairments. Even the touch screen, to some extent, was invented by someone who had difficulty typing and wanted a lower-impact way to type text messages for people who were deaf. 

Microsoft accessibility head Jenny Lay-Flurrie:

Jenny:  Captioning! Captioning came out of creating technology for the deaf. We all use captioning in different ways. We can be sitting on a train, or trying to sneak that video without the person next to you watching. 

Audiobooks are the same. Those were created as talking books for the blind. And now look at what’s happening with audiobooks. 

Saqib: People kind of forget where the origin came from. And that’s a good thing, because it just blends into the fabric of life. 

Jenny: By making something accessible, you don’t just make it inclusive to the cool people with disabilities. You actually give core capability that everyone can benefit from. 

I told Jenny the story of that conference eight years ago, where I saw someone using VoiceOver on the iPhone for the first time. And her response is epic.

David She was using VoiceOver to operate her iPhone with the screen off. I mean, completely off. 

Jenny Absolutely. The way that individual who’s blind is using her phone is actually a much more efficient use of the phone than someone who’s sighted: the screen’s off, the battery lasts longer. And I’ll tell you, if they’re using a screen reader, they’re listening to that sound at way above your normal speed. They can get through an audiobook in half the time. I can sit in a room and understand what’s being said, not by hearing any audio, but by watching what’s happening. I can put the pieces together of what people are saying, and I don’t have to be within two feet of someone. I’m great at a party, from that perspective. 

[music]

Disability’s a strength. 

We have strengths and we have expertise.

The one thing that we need to stop doing as a society is seeing disability with a sympathy lens.

So when you see a person with a disability, forget the “diverse abilities,” “super ability,” “special ability.” No: say the word “disability.” We’re proud of our identities. We’re proud of who we are, and we’re experts in who we are. See us not with sympathy, but with empathy.