The FBI is on high alert. There have been hardware system failures across multiple government entities—the U.S. Naval Academy, the FBI, the Air Force, the Marine Corps, just to name a few. This is more than a computer not working—some of this technology is installed in F-16s. Fleets of planes could crash.

It’s 2008, and the technology came in through routine orders for equipment from Cisco, one of the leading technology suppliers.

The FBI scrambles to form Operation Cisco Raider and finds itself in the middle of a worst-case scenario: the routers and switches that were ordered aren’t from Cisco at all. They are counterfeits, built from electronic parts recycled in China and slipped in through the supply chain.

Investigators are worried because not only are the parts failing, but the routers might also have a Trojan horse component that allows hackers to steal information. The FBI creates a PowerPoint presentation about its findings, and somehow the presentation ends up on the Internet. Now everyone is talking about it.

That same year, Ken Mai attends a Hardware-Oriented Security and Trust (HOST) conference, an international hub where experts explore security issues in technology. He’s a principal systems scientist at Carnegie Mellon University, which has five research centers dedicated to cybersecurity and has been recognized by the National Security Agency and the Department of Homeland Security for excellence in cybersecurity education and research. Mai won a National Science Foundation CAREER Award for his research, and he has brought one of his graduate students, Mudit Bhargava, with him to the conference.

By chance, they catch a presentation about the alarming breach, and they start talking. Mai asks Bhargava about the related work the student has been doing in the area: essentially, creating fingerprints for computer chips, one of the most counterfeited items in electronics.

They get excited. They think they might be able to build a new chip that is protected against some of these attacks, and maybe stop this cycle. If they start a new project, one that combines a few of their security ideas, they might create something unique.

Counterfeiting Chips

Mai wasn’t surprised to hear that there were counterfeit routers—he had been catching wind that this was happening back when he was in grad school at Stanford, where he was building his own computers. There was a buzz around the community that there were guys creating fake Intel microprocessors—they were taking real Intel processors, scrubbing off the markings, and remarking them to look better than they really were. If a part was marked as running at 2 gigahertz, the counterfeiters knew they could sell it as a 2.5 gigahertz part. It would probably run that fast for a little while (until it burned out), and they’d make another sale, meaning a greater profit for them.

Counterfeiting chips began as a money-making game. It happens on a wide scale—some operations have guys scrubbing microchips in a river and then holding them over a fire; there are also huge, pristine factories employing 10,000 people.

“Nothing is actually secure.”
Burak Erbagci

“It’s really cheap; they do it for a small amount of money and make a lot of money back,” Mai says. Some parts are priced only a quarter of a cent above cost, but a counterfeiter who sells 10 million of them is happy with the $25,000 in profits.

Many experts point to China as the source of the bulk of counterfeit electronics. In 2011, when U.S. Customs and Border Protection seized $178 million in counterfeit goods, 62% of them came from China. Why China? The United States ships its electronic waste there on container ships.

Mai wonders whether this gives the Chinese government a justification for turning a blind eye: “After all, instead of adding U.S. pollution to their landfills, repurposing chips could be seen as a version of recycling.”

The United States also offshores a lot of its chip manufacturing to China because it’s cheaper. It doesn’t take much to connect the dots: if someone wants to compromise the creation of chips, it’s easy. In many cases, the recycled counterfeit chips work just fine. In fact, when the Senate Armed Services Committee reported that an aircraft manufacturer had found counterfeit parts in the ice-detection modules on seven of its commercial aircraft, the manufacturer didn’t remove those parts. It simply noted that they would be less reliable and would replace them as they failed.

“It’s like you go to a Toyota dealership, buy a car, drive it around for a month, and then realize, ‘This isn’t really a Toyota,’” Mai says. “These things can function like the original.”

Espionage

The problem is that profit is no longer the only motivation. Now some counterfeiters know that they can gain access to valuable information. All they have to do is manufacture a chip with something a little extra in it—then they suddenly have control.

That control can vary—they might have a so-called “kill switch” that they could throw.

“You don’t want a foreign government to say, ‘Oh, suddenly all the radars on your planes don’t work’ while you’re in the middle of an air strike,” Mai says.

But how can someone go from creating a counterfeit chip with a devastating kill switch to getting into the FBI’s network?

Mai provides a possible scenario: If a large aerospace company needs to buy parts, it will go to a supplier. If the company asks for 10,000 units, the supplier might have only a portion of that order and therefore might go to a second-tier supplier. That second-tier supplier might go to eBay to fill the order. And that eBay seller might be making a deal at a storefront in California for some parts, having no idea where the parts came from.

Those little parts then travel right up the supply chain, and the aerospace company ends up assembling them into deployed systems.

That’s one of the simpler scenarios for how counterfeit chips can end up in high-powered places. Even worse, what if a chip lets another country’s government control the system it sits in from afar?

Electronic Fingerprints

Researchers can test chips, but counterfeits can still easily appear normal on those tests.

“We want to run the fewest number of tests possible to cover as many of the problems as possible. But if there’s some little goofy circuit hiding in the corner, that’s very hard to find,” Mai says.

Old chips may not only look the same; from a testing perspective, they may also behave the same. There needs to be a more foolproof way to be sure a chip isn’t counterfeit.

Enter Mai’s students, who in 2008 had an idea that could make a difference. Mai initially worked closely with Bhargava (E’12,’13) and has now called in two new students to help with the project. His team is housed in a small farm of cubicles in a quiet room in one of CMU’s engineering buildings. There are coffee cups stacked to the spilling point on their desks.

Burak Erbagci will graduate with his PhD from the Electrical and Computer Engineering Department this year. Originally from Turkey, he grew up hearing that security was a concern. But as he studied with Mai at CMU, he learned just how widespread the problem is.

“Nothing is actually secure,” he says.

One of his team members, doctoral student Nail Etkin Can Akkaya, quickly agrees.

“That was the reason I chose security—when I was thinking about the future, I knew the field was never going to die,” Akkaya says.

The two students have identified three major problems as they work to create a crack-proof chip. The first is economic: if a manufacturer is linked to counterfeit chips, as Cisco was in 2008, it loses both money and reputation. The second is technical: every part has a finite lifetime, and a recycled counterfeit has already used up part of its own. People who build F-16s, for example, don’t want to install a part they think will last five years and have it fail after two. And last, but not least, is overall security. The CMU team believes the sum of these three problems will convince manufacturers to build their new chip prototype, even if it comes at an added cost.

One of the first efforts to secure chips was putting markings on them, like watermarks or serial numbers. But counterfeiters could easily copy or change those. What Erbagci and Akkaya want is a watermark at the chip level: something programmed inside the silicon itself that cannot be tampered with.

Erbagci worked on the first phase of their plan: creating a fingerprint for chips. But the team also wanted a kind of odometer that lets both the manufacturer and the customer track how long a chip has been in use and see whether that matches what the seller claims. The two students know that plenty of odometers, on cars and elsewhere, have been rolled back. Their design guards against that: anyone other than the manufacturer who queries the chip’s age gets back only scrambled data and has no way to change the odometer’s count.
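
The behavior they describe can be sketched in a few lines of Python. Everything in the toy model below, the `ChipOdometer` class, its methods, and the keyed scrambling, is invented for illustration; the real design lives in silicon, but the spirit is the same: the counter only ever counts up, and only a reader holding the manufacturer’s secret can unscramble it.

```python
import hmac, hashlib, os, struct

# Hypothetical model of the odometer idea: the chip keeps a usage counter,
# but only a reader with the manufacturer's secret key can unscramble it.
class ChipOdometer:
    def __init__(self, secret_key: bytes):
        self._key = secret_key      # burned in at manufacture, never exported
        self._hours_used = 0        # advanced internally as the chip runs

    def tick(self, hours: int) -> None:
        self._hours_used += hours   # one-way: no method exists to decrease it

    def read(self, nonce: bytes) -> bytes:
        # Scramble the counter with a keystream derived from the key and a
        # fresh nonce, so repeated reads don't even look alike to an outsider.
        mask = hmac.new(self._key, nonce, hashlib.sha256).digest()[:4]
        raw = struct.pack(">I", self._hours_used)
        return bytes(a ^ b for a, b in zip(raw, mask))

def manufacturer_read(chip: ChipOdometer, key: bytes) -> int:
    """Only someone holding the key can recover the true hour count."""
    nonce = os.urandom(16)
    scrambled = chip.read(nonce)
    mask = hmac.new(key, nonce, hashlib.sha256).digest()[:4]
    return struct.unpack(">I", bytes(a ^ b for a, b in zip(scrambled, mask)))[0]

key = os.urandom(32)
chip = ChipOdometer(key)
chip.tick(4000)                         # the chip has seen 4,000 hours of use
print(manufacturer_read(chip, key))     # -> 4000: the key holder sees the truth
print(chip.read(os.urandom(16)).hex())  # an outsider sees only scrambled bytes
```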

“If this works, this can be on every computer chip that is manufactured,” says Erbagci.

Up in their cubicles, they come up with ideas, research how they might work, and draw up diagrams. When they feel sure enough to test an idea, they send their designs to a manufacturing company, which makes them a batch of chips to test.

One floor down is a bland but humming room, full of cords and computers. This is where Erbagci and Akkaya painstakingly test their chips. The manufacturer sends them back a roll of their prototype chips. They resemble a pack of gum, the type with rectangular pieces of gum floating in plastic in a foil sleeve—except those pieces of gum are shrunken to teeny, tiny one-millimeter-by-two-millimeter metallic parts, maybe half the size of your pinky nail. When Erbagci goes to retrieve one, he wields tweezers.

They mount the chip on a small circuit board and connect it to an input generator, which looks like a microwave oven from the 1980s. As they hook it up and start running it, heat sometimes builds up and causes problems. In one instance, Erbagci and Akkaya attached fans to cool the setup down. That wasn’t enough. Then they used a temperature chamber; they had to lower the temperature to minus 40 degrees Fahrenheit.

“The criminals are very agile. They don’t care if they can make a million bucks in six months and have to change again; they’re fine with that.”
Ken Mai

This is all so they can put each chip through a series of physical challenges—a little electronic boot camp. They try different temperatures. Different speeds. Different voltages. They push the limits of how slow a chip can go. What is the power then? How fast can it go? And what is its power then? Sometimes the chips can’t handle the stress, and they die. Sometimes chips are defective and don’t provide data.
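
A rough sketch of what such a characterization sweep might look like in software is below. The `FakeChip` stand-in and its methods are invented here so the loop can actually run; the team’s real scripts drive lab equipment rather than a simulated part.

```python
import csv
import itertools
import random

# Stand-in "chip" that fakes plausible behavior so the sweep below can run.
class FakeChip:
    def set_temperature(self, temp_c): self.temp_c = temp_c
    def set_voltage(self, volts): self.volts = volts
    def run_pattern(self, freq_mhz):
        # Pretend the chip fails above a voltage-dependent maximum frequency.
        passed = freq_mhz <= 400 * self.volts
        power_mw = round(freq_mhz * self.volts ** 2 * random.uniform(0.9, 1.1), 1)
        return passed, power_mw

TEMPS_C   = [-40, 25, 85]              # temperature chamber settings
VOLTS     = [0.8, 0.9, 1.0, 1.1]       # supply voltages
FREQS_MHZ = [10, 50, 100, 200, 400]    # clock frequencies

def sweep(chip, out_path="shmoo.csv"):
    """Exercise every operating point and log pass/fail plus power draw."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["temp_c", "volts", "freq_mhz", "passed", "power_mw"])
        for temp_c, volts, freq_mhz in itertools.product(TEMPS_C, VOLTS, FREQS_MHZ):
            chip.set_temperature(temp_c)
            chip.set_voltage(volts)
            passed, power_mw = chip.run_pattern(freq_mhz)
            writer.writerow([temp_c, volts, freq_mhz, passed, power_mw])

sweep(FakeChip())
```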

But for the chips that aren’t defective and can handle the stress, the scripts run for weeks. There are sleepless nights, Erbagci says, watching the data come in.

“When it comes back and you see it works, it’s priceless,” he says.

Akkaya is able to set the odometer using two aging phenomena: electromigration and hot carrier injection. Both act like fuses. A set amount of electrical current is pushed down a wire, and the time at which the fuse blows helps the researchers measure the age of the chip.

For example, one thing they want to detect is whether a chip is still new. For that, they might put in a really short fuse that blows as soon as the chip is first used. Then anyone would know that the chip couldn’t be resold as new.

Beyond that, the team is designing a fuse that degrades more slowly, so it can show how many days, weeks, or years a chip has been in use.
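
As a back-of-the-envelope illustration, a bank of such fuses, each engineered to wear out after a different amount of use, brackets a chip’s age according to which ones have blown. The specific lifetimes and function names in the sketch below are invented for illustration, not taken from the team’s design.

```python
# Each entry is a fuse engineered to blow after a different amount of use.
FUSE_LIFETIMES_HOURS = [1, 24, 24 * 7, 24 * 30, 24 * 365]   # ~1 hour ... ~1 year

def blown_fuses(hours_used):
    """Return True for each fuse whose rated lifetime the chip has exceeded."""
    return [hours_used >= life for life in FUSE_LIFETIMES_HOURS]

def estimate_age(fuse_states):
    """Bracket the chip's age (lower bound, upper bound in hours) from its fuses."""
    lower, upper = 0, None
    for blown, life in zip(fuse_states, FUSE_LIFETIMES_HOURS):
        if blown:
            lower = life        # this fuse blew, so the chip has at least this much use
        elif upper is None:
            upper = life        # the first intact fuse caps the estimate
    return lower, upper

states = blown_fuses(hours_used=200)   # a chip with roughly 200 hours of use
print(estimate_age(states))            # -> (168, 720): more than a week, less than a month
```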

Mai says the fuse system also keeps criminals at bay, because modifying the chips takes extraordinary measures.

“They’d have to use a focused ion beam machine, which is the size of a fridge, costs a few million dollars to buy and a few hundred bucks an hour to run,” Mai says.

Privacy Alert

Erbagci and Akkaya are working on building the prototype, and it will probably be another year before they have the data to prove it works. The first phase of their idea has been patented by Mai and Bhargava; Erbagci and Akkaya are the core team taking it forward. Mai’s dream is for companies to adopt their ideas and integrate them into their own designs. He wants this design to be part of the standard security block.

But as they work to create their uncrackable chip, they know that, on the other side, criminals are working just as quickly to crack any new safeguards.

“The criminals are very agile. They don’t care if they can make a million bucks in six months and have to change again; they’re fine with that,” Mai says.

They aren’t the only ones looking for solutions. Reports of counterfeits have quadrupled since 2009, and the 2012 National Defense Authorization Act includes anti-counterfeiting provisions. The act puts the onus on defense contractors to detect counterfeit chips and saddles them with the cost of replacing any that are discovered.

In the meantime, Mai acknowledges that although consumers risk unknowingly buying counterfeit chips, it’s unlikely that criminals are targeting their personal information specifically. Even so, he stops people when they tell him, “Well, what do I have to hide?”

“Fine, can I see your wallet?” he replies.

“People realize that though they may not be doing anything wrong, they may not want people to know all their personal stuff,” he says.

The bottom line: as Mai, Erbagci, and Akkaya work on building an uncrackable chip, consumers would do well to think twice about their security, not just while using software, but while using any device with a chip, like their GPS systems, their Apple Watches, or their Fitbit bands.

“It all depends on what sort of things you don’t want people to know,” Mai says. He urges people not to text their passwords and to keep any financial log-in information private. Because these days, you don’t know where your chips have come from—and who is controlling them.