One afternoon, two Carnegie Mellon graduate students stop by Dave Andersen’s office in Wean Hall to brainstorm project ideas for their computer architecture class. They bandy about some possibilities, covering the whiteboard with equations and graphs. But to their frustration, nothing clicks.

So Andersen, an associate professor of computer science, turns to one of his old standbys for inspiration. He opens his toy box—a desk drawer overflowing with old computer hardware and discarded electronic parts. His eyes lock on a collection of tiny green computers that he bought off the shelf for an earlier project on improving home Internet access; their low-power chips run three times more slowly than an iPhone 4’s. Like the rest of his toys, Andersen saved them, believing they might eventually be good for something.

Now it suddenly occurs to him what that “something” might be. These tiny processors can’t do much by themselves, but what if they could be networked together? Perhaps, collectively, they could do a much bigger job than they could ever do on their own.

In this age of 24/7 online access through computers, tablets, and smartphones, providing reliable connectivity is no small task. Just ask any of the online giants, such as Amazon, eBay, Facebook, Google, and Microsoft. That demand has led to the proliferation of data centers—facilities that house mammoth computer systems and associated components, such as telecommunications and storage systems. Often bigger than football fields, these centers are an enormous power drain. If all the data centers in the United States were a single country, experts say, it would rank fifth in the world in energy consumption. By last year, they were expected to use up to 100 billion kilowatt-hours of electricity, at a total annual cost of $7.4 billion. With cloud computing on the rise, those numbers are expected to grow.

Skyrocketing electrical bills cut into profits, so Web companies and others are thinking seriously about the problem of energy efficiency. “These companies are at such scale that they have power bills of tens to hundreds of millions of dollars a year,” Andersen says. “Improvements in the efficiencies of their systems can significantly affect the bottom line.”

It can also affect the future of our planet. The Environmental Protection Agency estimates that the power consumption of U.S. data centers translates into emissions of 59 million metric tons of greenhouse gases each year—equivalent to the output of 13 coal-fired power plants. In other words, increasing the energy efficiency of these facilities could materially shrink the eco-footprint of the computing industry. That resonates deeply with Andersen, who grew up in Salt Lake City, Utah, surrounded by spectacular mountain ranges. He is still an avid outdoorsman, skilled at rock climbing and skiing, who has been known to run 70 miles in a single week, preferably on trails. “I have this very big green heart beating in my chest,” he says.

For all the time he spent outdoors as a child, he logged just as many hours in front of his computer. When he was 10, his parents bought him a dial-up modem for his Macintosh Plus, one of his first toys. By today’s standards, the cute, boxy machine would seem maddeningly slow. The modem took about a minute to download a small picture—a digital eternity now, but a virtual miracle in the mid-1980s.

Andersen was smitten. Life wasn’t always easy for a boy who wasn’t Mormon growing up in the religious minority in Salt Lake City. Now he could reach out to other people who shared his interests. It no longer mattered if he didn’t quite fit in at home. Literally at his fingertips, there was a new world where he felt he belonged. “I spent way too much time online,” Andersen jokes. “I was just completely fascinated by networks and the ability to talk to other computers and people.”

As a teenager, he helped run one of the biggest online bulletin boards—precursors to the Web—in Salt Lake City, using 64 phone lines. As an undergraduate at the University of Utah, he cofounded a company that became the third-largest Internet service provider in the state. After graduating with dual degrees in biology and computer science, he left Utah to pursue a PhD in computer science at MIT. One morning in 2001, as he was working in his grad school lab, the fiber-optic Internet connection to the building cut off. Suddenly, another toy inspired Andersen, much as the modem he received as a child once had. He had recently installed a DSL line—his newest plaything—to connect the lab to the Internet over the phone line.

“It occurred to me as I was sitting there, angry that I couldn’t get my work done, that I could use the DSL to get back online, and we became the only ones with Internet access,” Andersen says. “I thought about how I could automate that switch-over so, in the case of another outage, our computers would know to connect through the backup line.”

He went on to study how to make the Internet more reliable by overlaying computer networks on top of one another. This work earned him the 2005 prize for best PhD thesis in computer science at MIT.

A week after turning in his award-winning dissertation, he joined the computer science faculty at Carnegie Mellon, which he says felt right from the moment he stepped on campus for an interview. “I remember getting on the plane back to Boston and calling my girlfriend at the time to say, ‘I just have to get a job here,’” he recalls. “It was clear to me then that what Carnegie Mellon values in its faculty is whether you have an impact. They don’t do anything silly like count your publications. What the school wants is for you to change the world.”

Toward that end, since his arrival in Pittsburgh, Andersen has been working to develop a framework for building a better Internet of the future. In 2006, he secured a National Science Foundation CAREER Award, a prestigious early-career-development grant, to create a more flexible, efficient method for online data transfer. For part of that project, he purchased a bunch of tiny green microprocessors that ended up in his ever-growing stash of spare computer parts. “I’m a huge believer in the motivating power of toys,” he says. “That’s why I always have a stack of old computers and continue buying new toys. It’s a way of coming up with ideas you wouldn’t otherwise dream of.”

Like the idea that occurred to him in 2007 when his PhD students, Amar Phanishayee (CS’12) and Vijay Vasudevan (CS’10, ’11), stopped by for help. Andersen looked at the microprocessors in his desk drawer, and everything came together. He challenged his students to make his vision a reality.

After tinkering with the machines and doing some calculations, they built a bare-bones prototype. It consisted of eight “nodes,” each made from one of Andersen’s tiny processors paired with a CompactFlash card for storage. Flash memory—the kind that stores data in devices like digital cameras—takes the place of the spinning hard drives found in bigger computers. On an impulse, Andersen dubbed the whole system FAWN, or Fast Array of Wimpy Nodes. “I work with Intel, and if you ask someone there about wimpy nodes, they will probably look at you sternly and tell you that Intel doesn’t make anything wimpy,” Andersen says. “I was in search of a good acronym,” he shrugs. “I’m not a marketing person.”

The researchers discovered that FAWN was highly efficient at randomly accessing small bits of information from a larger data set—the kind of task many Internet heavyweights rely on. For instance, when you log onto Facebook, the information the site stores on all of its users worldwide must be whittled down to just your friends. Performing those lookups on conventional “brawny” data-center computers consumes a lot of power.
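The kind of workload FAWN targets can be pictured as a key-value store spread across many small machines: hash each key to one node, and let that node answer the lookup from its flash store. The sketch below is illustrative only—the class names, the simple modulo hashing, and the dictionary standing in for flash are assumptions for this example, not FAWN’s actual implementation.

```python
import hashlib


class WimpyNode:
    """Stands in for one low-power node; a dict plays the role of its flash store."""

    def __init__(self):
        self.store = {}

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)


class FawnArray:
    """Routes each key to one node by hashing, spreading small lookups across the array."""

    def __init__(self, num_nodes=8):  # eight nodes, like the original prototype
        self.nodes = [WimpyNode() for _ in range(num_nodes)]

    def _node_for(self, key):
        # Hash the key and pick a node deterministically, so reads find
        # the same node that handled the write.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key).put(key, value)

    def get(self, key):
        return self._node_for(key).get(key)


array = FawnArray()
array.put("user:42:thumbnail", "…jpeg bytes…")
print(array.get("user:42:thumbnail"))
```

Because each lookup touches exactly one small node, the array can serve many independent requests in parallel at low power, which is the intuition behind the efficiency numbers described below.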

FAWN, by contrast, is quite adroit at this kind of job; remarkably, each wimpy node draws just five watts of power—less than a light bulb. Andersen and his team estimated that their system was two to three times more energy efficient than a conventional server at retrieving small tidbits of data like image thumbnails or social-network contact names. That could translate into savings of millions of dollars and sizable cutbacks in carbon emissions—good news for Andersen’s green heart. “Mostly I like the idea of breathing clean air,” he says. “That combines with my engineer’s sensibility that efficiency is a wonderful thing. If by improving our efficiency we can also make our environment better, cleaner, safer for humans, it’s just a total win.”

Andersen and his colleagues presented their results at the 2009 Symposium on Operating Systems Principles—one of the field’s premier systems conferences—where they won the best paper award. There, they started to draw a lot of attention from the computing industry.

Partha Ranganathan, a fellow at Hewlett-Packard Labs in Palo Alto, Calif., likens FAWN to an energy-efficiency project you might undertake in your home. “I can go room to room and turn down all the lights I’m not using, or I can replace all the lights with compact fluorescents or LEDs,” he says. “FAWN takes the second approach. It is moving away from the beaten path and starting to think about: How do I do something fundamentally more energy efficient with technology?”

At Hewlett-Packard, according to Ranganathan, engineers have developed a new class of energy-efficient servers called Project Moonshot, which draws upon many of the advances introduced by FAWN. “The industry is warming up to these ideas,” he says.

Andersen considered trying to bring his concept to market himself but wasn’t willing to move forward without the full involvement of his students. “I told them it had to be something they wanted to do,” he says. “And they all wanted to finish their PhDs instead, so I said, let’s finish some PhDs.”

But as Ranganathan noted, FAWN is not going away. Two startups, SeaMicro and Calxeda, have started manufacturing the hardware needed for companies to adapt FAWN for their needs, and Andersen was asked to evaluate SeaMicro’s technologies. As a further sign of corporate interest, Google, Network Appliance, and Intel have funded Andersen’s ongoing research. Companies don’t expect this low-power architecture to be a panacea. It could take a lot of programming to tailor these systems for their software, which in some cases might be cost-prohibitive. It also doesn’t work well for computation-intensive applications like video gaming and analysis of large data sets. Still, the power-consumption problem is one the computing industry cannot afford to ignore, according to Google distinguished engineer Luiz André Barroso.

“The objectives of FAWN in building high-performance, energy-efficient systems are also our objectives at Google,” says Barroso, who studies warehouse-scale computing. “If we want to continue investing in making our products better, we can’t give all the money to the electric company.”

Steven Swanson, a computer architecture expert at the University of California, San Diego, agrees: “Cloud computing is all a kind of plumbing. If you get a better water-supply system, everyone benefits. Just like it’s not always clear where those benefits come from, you aren’t going to say, ‘Oh, those wimpy nodes really made my day.’ But it’s part of a whole bunch of ideas that are really enhancing our lives.”

At Carnegie Mellon, Andersen continues to improve his wimpy systems. He predicts that within the next couple of years, wimpy nodes will be used widely throughout the industry, fundamentally changing the way the major Internet companies build many of their computer networks. And while the revolution he ignited is taking place, Andersen plans to keep buying new toys to fill his drawer—though maybe he’ll get a second opinion when it comes time to name his next big idea.

Jennifer Bails is an award-winning freelance writer. She is a regular contributor to this magazine.