Professor Daniela Rus combines automation and mobility

Daniela Rus loves Singapore. As the MIT professor sits down in her Frank Gehry-designed office in Cambridge, Massachusetts, to talk about the research she has conducted in Singapore, her face relaxes into a big smile.

Her story with Singapore started in the summer of 2010, when she made her first visit to one of the most futuristic and forward-looking cities in the world. “It was love at first sight,” says the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). That summer, she came to Singapore to join the Singapore-MIT Alliance for Research and Technology (SMART) as the first principal investigator in residence for the Future of Urban Mobility Research Program.

“In 2010, nobody was talking about autonomous driving. We were pioneers in developing and deploying the first mobility on demand for people with self-driving golf buggies,” says Rus. “And look where we stand today! Every single car maker is investing millions of dollars to advance autonomous driving. Singapore did not hesitate to provide us, at an early stage, with all the financial, logistical, and transportation resources to facilitate our work.”

Since her first visit, Rus has returned each year to follow up on the research, and has been involved in leading revolutionary projects for the future of urban mobility. “Our team worked tremendously hard on self-driving technologies, and we are now presenting a wide range of different devices that allow autonomous and secure mobility,” she says. “Our objective today is to make taking a driverless car for a spin as easy as programming a smartphone. A simple interaction between the human and machine will provide a transportation butler.”

The first mobility devices her team worked on were self-driving golf buggies. Two years ago, the buggies had advanced to the point where the group decided to open them to the public in a week-long trial at the Chinese Gardens, an idea facilitated by Singapore’s Land Transport Authority (LTA). Over the course of the week, more than 500 people booked rides from the comfort of their homes and came to the Chinese Gardens at the designated time and spot to experience mobility on demand with robots.

The test was conducted around winding paths trafficked by pedestrians, bicyclists, and the occasional monitor lizard. The experiments also tested an online booking system that enabled visitors to schedule pickups and drop-offs around the garden, automatically routing and redeploying the vehicles to accommodate all the requests. The public’s response was joyful and positive, giving the team renewed enthusiasm to take the technology to the next level.

Since the Chinese Gardens public trial, the autonomous car group has introduced a few other self-driving vehicles: a self-driving city car and two personal mobility robots, a self-driving scooter and a self-driving wheelchair. Each of these vehicles was developed in three phases. In the first phase, the vehicle was converted to drive-by-wire control, which allows a computer to control its acceleration, braking, and steering. In the second phase, the vehicle drives each of the pathways in its operating environment and builds a map from features detected by its sensors. In the third phase, the vehicle uses the map to compute a path from the customer’s pick-up point to the drop-off point and proceeds to drive along it, localizing continuously and avoiding other cars, people, and unexpected obstacles. The devices also used traffic data from the LTA to model traffic patterns and to study the benefits of ride-sharing systems.
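The third phase, computing a route over the mapped pathways, is at heart a shortest-path search. The following is a minimal sketch, not the team's actual planner: it runs Dijkstra's algorithm over a hypothetical waypoint graph of garden paths, with invented node names and edge weights.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted waypoint graph.

    graph: dict mapping node -> list of (neighbor, cost) pairs.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float('inf'), []

# Hypothetical waypoint map of garden pathways (edge weights in meters).
garden = {
    "gate":   [("pond", 120.0), ("pagoda", 200.0)],
    "pond":   [("gate", 120.0), ("pagoda", 60.0)],
    "pagoda": [("pond", 60.0), ("gate", 200.0)],
}

cost, route = shortest_path(garden, "gate", "pagoda")
# The planner picks gate -> pond -> pagoda (180 m) over the direct 200 m edge.
```

A real vehicle would plan over a far denser map and replan continuously as the localizer updates its position estimate, but the graph-search core is the same idea.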

Last April, the team conducted a new test with the public at MIT. This time, they deployed a self-driving scooter that allowed users to use the same autonomy system indoors as well as outdoors. The trial included autonomous rides in MIT’s Infinite Corridor. A significant challenge in this type of space is localization, or accurately determining the robot’s position in a long, plain corridor with few distinctive features. The system proved to work very well in this type of environment, and the trial completed the demonstration of a comprehensive, uniform autonomous mobility system.

System links related data scattered across digital files

The age of big data has seen a host of new techniques for analyzing large data sets. But before any of those techniques can be applied, the target data has to be aggregated, organized, and cleaned up.

That turns out to be a shockingly time-consuming task. In a 2016 survey, 80 data scientists told the company CrowdFlower that, on average, they spent 80 percent of their time collecting and organizing data and only 20 percent analyzing it.

An international team of computer scientists hopes to change that, with a new system called Data Civilizer, which automatically finds connections among many different data tables and allows users to perform database-style queries across all of them. The results of the queries can then be saved as new, orderly data sets that may draw information from dozens or even thousands of different tables.

“Modern organizations have many thousands of data sets spread across files, spreadsheets, databases, data lakes, and other software systems,” says Sam Madden, an MIT professor of electrical engineering and computer science and faculty director of MIT’s bigdata@CSAIL initiative. “Civilizer helps analysts in these organizations quickly find data sets that contain information that is relevant to them and, more importantly, combine related data sets together to create new, unified data sets that consolidate data of interest for some analysis.”

The researchers presented their system last week at the Conference on Innovative Data Systems Research. The lead authors on the paper are Dong Deng and Raul Castro Fernandez, both postdocs at MIT’s Computer Science and Artificial Intelligence Laboratory; Madden is one of the senior authors. They’re joined by six other researchers from Technical University of Berlin, Nanyang Technological University, the University of Waterloo, and the Qatar Computing Research Institute. Although he’s not a co-author, MIT adjunct professor of electrical engineering and computer science Michael Stonebraker, who in 2014 won the Turing Award — the highest honor in computer science — contributed to the work as well.

Pairs and permutations

Data Civilizer assumes that the data it’s consolidating is arranged in tables. As Madden explains, in the database community, there’s a sizable literature on automatically converting data to tabular form, so that wasn’t the focus of the new research. Similarly, while the prototype of the system can extract tabular data from several different types of files, getting it to work with every conceivable spreadsheet or database program was not the researchers’ immediate priority. “That part is engineering,” Madden says.

The system begins by analyzing every column of every table at its disposal. First, it produces a statistical summary of the data in each column. For numerical data, that might include a distribution of the frequency with which different values occur; the range of values; and the “cardinality” of the values, or the number of different values the column contains. For textual data, a summary would include a list of the most frequently occurring words in the column and the number of different words. Data Civilizer also keeps a master index of every word occurring in every table and the tables that contain it.
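The per-column profiling step described above can be sketched in a few lines. This is an illustrative toy, not Data Civilizer's actual code; the function name and the profile fields are assumptions based on the description (value range, cardinality, frequency distribution for numbers; frequent words and distinct-word count for text).

```python
from collections import Counter

def summarize_column(values):
    """Produce a lightweight statistical profile of one table column."""
    if all(isinstance(v, (int, float)) for v in values):
        return {
            "type": "numeric",
            "min": min(values),
            "max": max(values),
            "cardinality": len(set(values)),     # number of distinct values
            "histogram": Counter(values),        # frequency of each value
        }
    words = [w for v in values for w in str(v).split()]
    return {
        "type": "text",
        "top_words": Counter(words).most_common(5),
        "distinct_words": len(set(words)),
    }

profile = summarize_column([3, 7, 3, 9])
# profile["min"] == 3, profile["max"] == 9, profile["cardinality"] == 3
```

Profiles like these are cheap to compute in one pass per column, which matters when the system must scan thousands of tables.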

Then the system compares all of the column summaries against each other, identifying pairs of columns that appear to have commonalities — similar data ranges, similar sets of words, and the like. It assigns every pair of columns a similarity score and, on that basis, produces a map, rather like a network diagram, that traces out the connections between individual columns and between the tables that contain them.
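One common way to score such pairwise commonality, used here purely as an illustration of the idea (the paper's actual similarity measures may differ), is Jaccard similarity over the sets of values in two columns, keeping only pairs that clear a threshold as edges in the linkage map:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical columns, keyed as "table.column", with their value sets.
columns = {
    "employees.dept":   {"sales", "hr", "it"},
    "budgets.division": {"sales", "hr", "legal"},
    "assets.serial":    {"a1", "b2", "c3"},
}

# Keep an edge in the map only when similarity clears a threshold (0.25 here).
edges = [
    (c1, c2, round(jaccard(columns[c1], columns[c2]), 2))
    for c1, c2 in combinations(columns, 2)
    if jaccard(columns[c1], columns[c2]) >= 0.25
]
# Only employees.dept and budgets.division overlap enough to be linked (0.5).
```

The resulting edge list is exactly the kind of network diagram the article describes: columns as nodes, similarity scores as weighted links between them and, by extension, between the tables that contain them.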

Machine learning

When Joy Buolamwini, an MIT master’s candidate in media arts and sciences, sits in front of a mirror, she sees a black woman in her 20s. But when her photo is run through facial recognition software, the software does not detect her face. A seemingly neutral machine, programmed with algorithms (codified processes), simply fails to register her features. Buolamwini is, she says, “on the wrong side of computational decisions” that can lead to exclusionary and discriminatory practices and behaviors in society.

That phenomenon, which Buolamwini calls the “coded gaze,” is what motivated her late last year to launch the Algorithmic Justice League (AJL) to highlight such bias through provocative media and interactive exhibitions; to provide space for people to voice concerns and experiences with coded discrimination; and to develop practices for accountability during the design, development, and deployment phases of coded systems.

That work contributed to the Media Lab student earning the grand prize in the professional category of The Search for Hidden Figures. The nationwide contest, created by PepsiCo and 21st Century Fox in partnership with the New York Academy of Sciences, is named for a recently released film that tells the real-life story of three African-American women at NASA whose math brilliance helped launch the United States into the space race in the early 1960s.

“I’m honored to receive this recognition, and I’ll use the prize to continue my mission to show compassion through computation,” says Buolamwini, who was born in Canada, then lived in Ghana and, at the age of four, moved to Oxford, Mississippi. She’s a two-time recipient of an Astronaut Scholarship in a program established by NASA’s Mercury 7 crew members, including the late astronaut John Glenn, who are depicted in the film “Hidden Figures.”

The film had a big impact on Buolamwini when she saw a special MIT sneak preview in early December: “I witnessed the power of storytelling to change cultural perceptions by highlighting hidden truths. After the screening where I met Margot Lee Shetterly, who wrote the book on which the film is based, I left inspired to tell my story, and applied for the contest. Being selected as a grand prize winner provides affirmation that pursuing STEM is worth celebrating. And it’s an important reminder to share the stories of discriminatory experiences that necessitate the Algorithmic Justice League as well as the uplifting stories of people who come together to create a world where technology can work for all of us and drive social change.”

The Search for Hidden Figures contest attracted 7,300 submissions from students across the United States. As one of two grand prize winners, Buolamwini receives a $50,000 scholarship, a trip to the Kennedy Space Center in Florida, plus access to New York Academy of Sciences training materials and programs in STEM. She plans to use the prize resources to develop what she calls “bias busting” tools to help defeat bias in machine learning.

Human planners improve automated planners

Every other year, the International Conference on Automated Planning and Scheduling hosts a competition in which computer systems designed by conference participants try to find the best solution to a planning problem, such as scheduling flights or coordinating tasks for teams of autonomous satellites.

On all but the most straightforward problems, however, even the best planning algorithms still aren’t as effective as human beings with a particular aptitude for problem-solving — such as MIT students.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are trying to improve automated planners by giving them the benefit of human intuition. By encoding the strategies of high-performing human planners in a machine-readable form, they were able to improve the performance of competition-winning planning algorithms by 10 to 15 percent on a challenging set of problems.

The researchers are presenting their results this week at the Association for the Advancement of Artificial Intelligence’s annual conference.

“In the lab, in other investigations, we’ve seen that for things like planning and scheduling and optimization, there’s usually a small set of people who are truly outstanding at it,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT. “Can we take the insights and the high-level strategies from the few people who are truly excellent at it and allow a machine to make use of that to be better at problem-solving than the vast majority of the population?”

The first author on the conference paper is Joseph Kim, a graduate student in aeronautics and astronautics. He’s joined by Shah and Christopher Banks, an undergraduate at Norfolk State University who was a research intern in Shah’s lab in the summer of 2016.

The human factor

Algorithms entered in the automated-planning competition — called the International Planning Competition, or IPC — are given related problems with different degrees of difficulty. The easiest problems require satisfaction of a few rigid constraints: For instance, given a certain number of airports, a certain number of planes, and a certain number of people at each airport with particular destinations, is it possible to plan planes’ flight routes such that all passengers reach their destinations but no plane ever flies empty?

A more complex class of problems — numerical problems — adds some flexible numerical parameters: Can you find a set of flight plans that meets the constraints of the original problem but also minimizes planes’ flight time and fuel consumption?

Finally, the most complex problems — temporal problems — add temporal constraints to the numerical problems: Can you minimize flight time and fuel consumption while also ensuring that planes arrive and depart at specific times?

For each problem, an algorithm has a half-hour to generate a plan. The quality of the plans is measured according to some “cost function,” such as an equation that combines total flight time and total fuel consumption.
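A cost function of the kind described is typically a weighted sum of the competing objectives. The sketch below is a made-up example for the flight-planning setting, not a formula from the competition; the weights and units are assumptions chosen for illustration.

```python
def plan_cost(total_flight_hours, total_fuel_kg,
              time_weight=1.0, fuel_weight=0.01):
    """Weighted-sum cost function for a candidate plan; lower is better."""
    return time_weight * total_flight_hours + fuel_weight * total_fuel_kg

# Plan A: 30 hours, 4000 kg of fuel -> cost 70.0
# Plan B: 25 hours, 5500 kg of fuel -> cost 80.0
# Under these weights, the planner prefers the slower but thriftier plan A.
better = min(["A", "B"], key=lambda p: plan_cost(30, 4000) if p == "A" else plan_cost(25, 5500))
```

Changing the weights changes which plan wins, which is why the competition fixes the cost function in advance: every algorithm is judged against the same trade-off.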

Shah, Kim, and Banks recruited 36 MIT undergraduate and graduate students and posed each of them the planning problems from two different competitions, one that focused on plane routing and one that focused on satellite positioning. Like the automatic planners, the students had a half-hour to solve each problem.

National Inventors Hall of Fame

Is the internet old or new? According to MIT professor of mathematics Tom Leighton, co-founder of Akamai, the internet is just getting started. His opinion carries weight: his firm, launched in 1998 with pivotal help from Danny Lewin SM ’98, keeps the internet speedy by copying and channeling massive amounts of data into orderly and secure places that are quick to access. Now, the National Inventors Hall of Fame (NIHF) has recognized Leighton and Lewin’s work, naming them both as 2017 inductees.

“We think about the internet and the tremendous accomplishments that have been made and, the exciting thing is, it’s in its infancy,” Leighton says in an Akamai video. Online commerce, which has grown rapidly and is now denting mall sales, has huge potential, especially as dual screen use grows. Soon mobile devices will link to television, and then viewers can change channels on their mobile phones and click to buy the cool sunglasses Tom Cruise is wearing on the big screen. “We are going to see [that] things we never thought about existing will be core to our lives within 10 years, using the internet,” Leighton says.

Leighton’s former collaborator, Danny Lewin, was pivotal to the early development of Akamai’s technology. Tragically, Lewin died as a passenger on an American Airlines flight that was hijacked by terrorists and crashed into New York’s World Trade Center on Sept. 11, 2001. Lewin, a former Israeli Defense Forces officer, is credited with trying to stop the attack.

According to Akamai, Leighton, Lewin, and their team “developed the mathematical algorithms necessary to intelligently route and replicate content over a large network of distributed servers,” which solved congestion that was then becoming known as the “World Wide Wait.” Today the company delivers nearly 3 trillion internet interactions each day.

The NIHF describes Leighton and Lewin’s contributions as pivotal to making the web fast, secure, and reliable. Their tools were applied mathematics and algorithms, and they focused on congested nodes identified by Tim Berners-Lee, inventor of the World Wide Web and an MIT professor with an office near Leighton. Leighton, an authority on parallel algorithms for network applications who earned his PhD at MIT, holds more than 40 U.S. patents involving content delivery, internet protocols, algorithms for networks, cryptography, and digital rights management. He served as Akamai’s chief scientist for 14 years before becoming chief executive officer in 2013.

Lewin, an MIT doctoral candidate at the time of his death, served as Akamai’s chief technology officer and was an award-winning computer scientist whose master’s thesis included some of the fundamental algorithms that make up the core of Akamai’s services. Before coming to MIT, Lewin worked at IBM’s research laboratory in Haifa, Israel, where he developed the company’s Genesys system, a processor verification tool. He is named on 25 U.S. patents.

“It is a special honor to be listed among so many groundbreaking innovators in the National Inventors Hall of Fame,” says Leighton. “And I am very grateful to Akamai’s employees for all their hard work over the last two decades to turn a dream for making the Internet be fast, reliable, and secure, into a reality.”

Special-purpose chip

The butt of jokes as recently as 10 years ago, automatic speech recognition is now on the verge of becoming people’s chief means of interacting with their principal computing devices.

In anticipation of the age of voice-controlled electronics, MIT researchers have built a low-power chip specialized for automatic speech recognition. Whereas a cellphone running speech-recognition software might require about 1 watt of power, the new chip requires between 0.2 and 10 milliwatts, depending on the number of words it has to recognize.

In a real-world application, that probably translates to a power savings of 90 to 99 percent, which could make voice control practical for relatively simple electronic devices. That includes power-constrained devices that have to harvest energy from their environments or go months between battery charges. Such devices form the technological backbone of what’s called the “internet of things,” or IoT, which refers to the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock will soon have sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.

“Speech input will become a natural interface for many wearable applications and intelligent devices,” says Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science at MIT, whose group developed the new chip. “The miniaturization of these devices will require a different interface than touch or keyboard. It will be critical to embed the speech functionality locally to save system energy consumption compared to performing this operation in the cloud.”

“I don’t think that we really developed this technology for a particular application,” adds Michael Price, who led the design of the chip as an MIT graduate student in electrical engineering and computer science and now works for chipmaker Analog Devices. “We have tried to put the infrastructure in place to provide better trade-offs to a system designer than they would have had with previous technology, whether it was software or hardware acceleration.”

Price, Chandrakasan, and Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, described the new chip in a paper Price presented last week at the International Solid-State Circuits Conference.

The sleeper wakes

Today, the best-performing speech recognizers are, like many other state-of-the-art artificial-intelligence systems, based on neural networks, virtual networks of simple information processors roughly modeled on the human brain. Much of the new chip’s circuitry is concerned with implementing speech-recognition networks as efficiently as possible.

But even the most power-efficient speech recognition system would quickly drain a device’s battery if it ran without interruption. So the chip also includes a simpler “voice activity detection” circuit that monitors ambient noise to determine whether it might be speech. If the answer is yes, the chip fires up the larger, more complex speech-recognition circuit.
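The chip's detector is a hardware circuit, but the idea behind simple voice activity detection can be sketched in software. The toy below, an assumption-laden illustration rather than the chip's actual design, gates on frame energy: only when a frame's energy rises above an ambient-noise threshold would the full recognizer be woken.

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame (samples in [-1, 1])."""
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples, threshold=0.01):
    """Crude energy gate: True if this frame is loud enough that it
    might be speech, so the larger recognition stage should wake up."""
    return frame_energy(samples) > threshold

silence = [0.001] * 160              # near-silent 10 ms frame at 16 kHz
voiced = [0.2, -0.3, 0.25, -0.28] * 40  # synthetic loud frame

assert not is_speech(silence)   # recognizer stays asleep
assert is_speech(voiced)        # recognizer fires up
```

Real detectors use more robust features than raw energy (and adapt the threshold to the noise floor), but the always-on-gate-plus-sleeping-recognizer structure is the same power-saving pattern the chip exploits.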