COMPUTER SCIENCE IS BOOMING IN BOULDER. In the fall of 2010, there were 267 undergraduate computer science majors.
Four years later, there were 909, including students in the new Bachelor of Arts in Computer Science degree program. To keep up with the tsunami of new students, talented faculty are being recruited from across the country. Here’s a look at what three of the new faculty are working on:
ASSISTANT PROFESSOR SHAUN KANE
UNIVERSITY OF MARYLAND
PHD FROM UNIVERSITY OF WASHINGTON
The world is full of touch screens: smartphones and tablets, of course, but also electronic voting machines and grocery store credit card readers, to name a few. Most of us take the ability to use them for granted, but for people who are blind or visually impaired, touchscreen devices can be roadblocks. Shaun Kane has spent a lot of time thinking about this and other situations where using the dominant technology is not easy. “I like to look at the edge cases,” he says, “people with disabilities who have significant challenges interacting with mainstream technology but also more typical users who are in challenging environments.” Kane’s approach is to figure out how to empower people to get the most out of technology that already exists, as opposed to designing separate technology for those with extreme needs. In the case of touch screens, Kane has focused on broadening the number of gestures that a device can recognize as commands, taking advantage of the fact that touch is already an important way the visually impaired explore the world.
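To make the idea concrete, here is a toy sketch, in Python, of what an expanded gesture vocabulary might look like. It is illustrative only, not Kane’s software: the gesture names, thresholds, and classification rules are invented for this example.

import math

def classify_gesture(trace):
    """Classify a touch trace (a list of (x, y) points) as a command."""
    if len(trace) < 2:
        return "tap"
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    dx, dy = x1 - x0, y1 - y0
    distance = math.hypot(dx, dy)
    if distance < 10:                      # finger barely moved: a tap
        return "tap"
    # Path length vs. straight-line distance: a straight path is a flick,
    # a meandering one is some other gesture (here, a "scrub").
    path = sum(math.hypot(bx - ax, by - ay)
               for (ax, ay), (bx, by) in zip(trace, trace[1:]))
    if path > 2 * distance:
        return "scrub"
    if abs(dx) > abs(dy):
        return "flick-right" if dx > 0 else "flick-left"
    return "flick-down" if dy > 0 else "flick-up"

# A straight rightward swipe comes back as "flick-right".
print(classify_gesture([(0, 0), (40, 2), (80, 5), (120, 4)]))

A real recognizer would be trained on recorded traces from blind users rather than built from hand-tuned rules, but the shape of the problem is the same: turn a raw touch trace into one of many distinct commands.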
ASSISTANT PROFESSOR JORDAN BOYD-GRABER
UNIVERSITY OF MARYLAND
PHD FROM PRINCETON UNIVERSITY
Could a machine successfully square off against high school competitors in the popular academic trivia game Quiz Bowl, where clues are slowly unspooled, beginning with the most obscure information and ending in the most obvious? Jordan Boyd-Graber thinks so, and he hopes his machine—nicknamed “Jerome” for the time being—is up for the challenge. Quiz Bowl is played by students across the country. Players buzz in as soon as they know the answer: the faster the buzz, the deeper the knowledge. Programming Jerome to play Quiz Bowl isn’t just a matter of amusement. Creating a machine that is able to digest one word at a time, and then understand how that single word adds meaning to the words that came before, is a difficult challenge in the field of natural language processing. “Quiz Bowl is not just a challenging computational problem, but it’s also a fun way to get high school students thinking about the challenges of text processing and machine learning,” Boyd-Graber says.
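The heart of that challenge can be sketched in a few lines of Python. This is a toy, not Jerome: the keyword table and buzz threshold below are invented for illustration, standing in for the statistical models a real system would learn from thousands of past questions.

from collections import Counter

# Hypothetical evidence table: which clue words point toward which answers.
EVIDENCE = {
    "gettysburg":   {"Abraham Lincoln": 3},
    "emancipation": {"Abraham Lincoln": 2},
    "president":    {"Abraham Lincoln": 1, "George Washington": 1},
    "cherry":       {"George Washington": 2},
}

def play(question, threshold=4):
    """Read the clue word by word; buzz once one answer is confident."""
    scores = Counter()
    for i, word in enumerate(question.lower().split(), start=1):
        for answer, weight in EVIDENCE.get(word.strip(".,"), {}).items():
            scores[answer] += weight
        if scores:
            best, score = scores.most_common(1)[0]
            if score >= threshold:         # confident enough: buzz early
                return f"buzz after word {i}: {best}"
    return "no buzz"

clue = ("This president delivered the Gettysburg Address and signed "
        "the Emancipation Proclamation")
print(play(clue))   # buzzes on "Gettysburg", long before the clue ends

The hard part, of course, is the evidence table: a real player must learn from data how much each incoming word shifts the odds toward each of thousands of possible answers, and when it knows enough to risk a buzz.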
ASSISTANT PROFESSOR GABE SIBLEY
GEORGE WASHINGTON UNIVERSITY
PHD FROM UNIVERSITY OF SOUTHERN CALIFORNIA
Gabe Sibley created robots that could build Lincoln Log castles, robots that could fix a broken communication network on the surface of a distant planet, and robots that could crawl across a mesh panel in zero gravity. But something was missing: the ability to perceive. “After doing all that work, my conclusion was that we can build robots with interesting capabilities but they still can’t do much if they can’t understand the world they’re operating in,” Sibley says. “Before you can act intelligently, you need to perceive and understand the world around you.” Now, Sibley’s research focuses on building robots that can actually learn to “see”—machines that can look around them, create a 3-D model of their world as they go, and then decide how best to interact with that world. “Historically, we’ve just built models of the environment in which the robots operate,” Sibley says. “But if you have a mobile robot in a dynamic environment where there’s lots of people and lots of change, suddenly you can’t just preprogram everything.”
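In code, the contrast Sibley describes might be sketched like this, simplified from 3-D vision down to a toy 2-D grid. Nothing here is Sibley’s system; the map, the sensor, and the movement rules are invented to show the idea of building a model of the world on the fly instead of preprogramming it.

def update_map(occupancy, position, hits):
    """Fold one sensor sweep into the map: obstacles seen, plus our own cell."""
    for cell in hits:
        occupancy[cell] = "occupied"
    occupancy.setdefault(position, "free")   # we are standing here, so it's free
    return occupancy

def choose_move(occupancy, position):
    """Prefer unexplored neighbors; never step into a known obstacle."""
    x, y = position
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    for cell in neighbors:
        if cell not in occupancy:            # unexplored: most informative
            return cell
    for cell in neighbors:
        if occupancy[cell] != "occupied":    # known free: safe fallback
            return cell
    return position                          # boxed in: stay put

world, pos = {}, (0, 0)
for step in range(3):
    # Hypothetical sensor: it spots a wall segment at x = 1 in the robot's row.
    world = update_map(world, pos, hits=[(1, pos[1])])
    pos = choose_move(world, pos)
print("map so far:", world)
print("robot now at:", pos)

The map starts empty and grows as the robot moves, which is exactly what a preprogrammed model cannot do when people and furniture keep rearranging the scene.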