Category Archives: Computer and Technology

A unique moving target technique

When it comes to protecting data from cyberattacks, the information technology (IT) specialists who defend computer networks face attackers armed with some advantages. For one, while attackers need only find a single vulnerability to gain network access and disrupt, corrupt, or steal data, IT personnel must constantly guard against, and work to mitigate, a wide variety of network intrusion attempts.

The uniformity of software applications has traditionally created another advantage for cyber attackers. “Attackers can develop a single exploit against a software application and use it to compromise millions of instances of that application because all instances look alike internally,” says Hamed Okhravi, a senior staff member in the Cyber Security and Information Sciences Division at MIT Lincoln Laboratory. To counter this problem, cybersecurity practitioners have implemented randomization techniques in operating systems. These techniques, notably address space layout randomization (ASLR), diversify the memory locations used by each instance of the application when the application is loaded into memory.
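To make the idea concrete, the short Python sketch below mimics load-time layout randomization: each segment of a toy “binary” is given a fresh, page-aligned random base address, so two instances of the same application end up with different layouts. The segment names, sizes, and address ranges are invented for illustration; a real loader operates at a much lower level.

    # Toy sketch of load-time layout randomization (ASLR-style); not a real loader.
    # Segment names, sizes, and address ranges are hypothetical.
    import secrets

    PAGE = 0x1000  # 4 KiB pages

    def load_with_aslr(segments):
        """Assign each segment a fresh, page-aligned random base at load time."""
        layout = {}
        for name, size in segments.items():
            base = secrets.randbelow(2**35) * PAGE   # random page-aligned base
            layout[name] = (hex(base), hex(base + size))
        return layout

    segments = {"text": 0x8000, "data": 0x2000, "stack": 0x10000}

    # Two "instances" of the same application get different layouts, so an
    # exploit hard-coded against one layout fails against the other.
    print(load_with_aslr(segments))
    print(load_with_aslr(segments))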

In response to randomization approaches like ASLR, attackers developed information leakage attacks, also called memory disclosure attacks. Through these software assaults, attackers can make the application disclose how its internals have been randomized while the application is running. Attackers then adjust their exploits to the application’s randomization and successfully hijack control of vulnerable programs. “The power of such attacks has ensured their prevalence in many modern exploit campaigns, including those network infiltrations in which an attacker remains undetected and continues to steal data in the network for a long time,” explains Okhravi, who adds that methods for bypassing ASLR, which is currently deployed in most modern operating systems, and similar defenses can be readily found on the Internet.

Okhravi and colleagues David Bigelow, Robert Rudd, James Landry, and William Streilein, and former staff member Thomas Hobson, have developed a unique randomization technique, timely address space randomization (TASR), to counter information leakage attacks that may thwart ASLR protections. “TASR is the first technology that mitigates an attacker’s ability to leverage information leakage against ASLR, irrespective of the mechanism used to leak information,” says Rudd.

To thwart information leakage attacks, TASR immediately rerandomizes the application’s memory layout every time it observes the application processing an output-and-input pair. “Information may leak to the attacker on any given program output without anybody being able to detect it, but TASR ensures that the memory layout is rerandomized before the attacker has an opportunity to act on that stolen information, and hence denies them the opportunity to use it to bypass operating system defenses,” says Bigelow. Because TASR’s rerandomization is based on application activity rather than on a set timing (say, every so many minutes), an attacker cannot anticipate the interval during which the leaked information might be used to send an exploit to the application before randomization recurs.

When TASR determines that the rerandomization must be performed, it pauses the running application, injects a randomizer component that performs the actual rewriting of code, then deletes the randomizer component from the application’s memory, and resumes the application. This process protects the randomizer from infiltration. To change the memory layout of a running application without causing a crash, TASR updates all memory addresses stored in the application during rerandomization.
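The Python sketch below illustrates that rerandomization step in miniature: code objects receive fresh random base addresses, and every stored pointer is rewritten so the program keeps running while any previously leaked address goes stale. It is an illustration of the idea only, not Lincoln Laboratory’s implementation, and all names and numbers are made up.

    # Minimal sketch of the rerandomization idea described above: move objects to
    # new random addresses and patch every stored pointer so execution continues.
    # Illustration only; this is not the TASR implementation.
    import secrets

    PAGE = 0x1000

    class ToyProcess:
        def __init__(self, objects):
            self.bases = {name: self._random_base() for name in objects}
            self.pointers = []          # stored "pointers" as (object, offset) pairs

        @staticmethod
        def _random_base():
            return secrets.randbelow(2**35) * PAGE

        def store_pointer(self, obj, offset):
            self.pointers.append((obj, offset))
            return self.bases[obj] + offset

        def rerandomize(self):
            """Pause point: pick fresh bases, then rewrite all stored addresses."""
            self.bases = {name: self._random_base() for name in self.bases}
            return [self.bases[obj] + off for obj, off in self.pointers]

    proc = ToyProcess(["libc", "app_code"])
    leaked = proc.store_pointer("app_code", 0x4242)   # address an attacker might leak
    current = proc.rerandomize()[0]                   # same pointer after rerandomization
    print(hex(leaked), "is now stale; the live address is", hex(current))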

The largest publicly traded corporation

Apple CEO Tim Cook will deliver the address at MIT’s 2017 Commencement exercises on Friday, June 9.

Cook joined Apple in 1998 and was named its CEO in 2011. As chief executive, he has overseen the introduction of some of Apple’s innovative and popular products, including iPhone 7 and Apple Watch. An advocate for equality and champion of the environment, Cook reminds audiences that Apple’s mission is to change the world for the better, both through its products and its policies.

“Mr. Cook’s brilliance as a business leader, his genuineness as a human being, and his passion for issues that matter to our community make his voice one that I know will resonate deeply with our graduates,” MIT President L. Rafael Reif says. “I am delighted that he will join us for Commencement and eagerly await his charge to the Class of 2017.”

Before becoming CEO, Cook was Apple’s chief operating officer, responsible for the company’s worldwide sales and operations, including management of Apple’s global supply chain, sales activities, and service and support. He also headed the Macintosh division and played a key role in the development of strategic reseller and supplier relationships, ensuring the company’s flexibility in a demanding marketplace.

“Apple stands at the intersection of liberal arts and technology, and we’re proud to have many outstanding MIT graduates on our team,” Cook says. “We believe deeply that technology can be a powerful force for good, and I’m looking forward to speaking to the Class of 2017 as they look ahead to making their own mark on the world.”

Prior to joining Apple, Cook was vice president of corporate materials at Compaq, responsible for procuring and managing product inventory. Before that, he served as chief operating officer of the Reseller Division at Intelligent Electronics.

Cook also spent 12 years with IBM, ending as director of North American fulfillment, where he led manufacturing and distribution for IBM’s personal computer company in North and Latin America.

Cook earned a BS in industrial engineering from Auburn University in 1982, and an MBA from Duke University in 1988.

“Tim Cook is a trailblazer and an inspiration to innovators worldwide,” says Liana Ilutzi, president of MIT’s Class of 2017. “He represents the best of the entrepreneurial and fearless spirit of the MIT community. While faithfully maintaining his integrity and humility, Tim runs one of the most influential companies on the planet. We are beyond excited to have him with us for Commencement!”

“We are looking forward to hearing Tim Cook speak at Commencement,” says Graduate Student Council President Arolyn Conwill. “We believe that his innovative leadership at Apple, along with his commitment to advocacy on sustainability, security, and equality, will inspire graduates to make a far-reaching, positive impact on the world.”

Association for Computing Machinery cites four from CSAIL

This week the Association for Computing Machinery (ACM) announced its 2016 fellows, who include four principal investigators from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): professors Erik Demaine, Fredo Durand, William Freeman, and Daniel Jackson. They were among the 1 percent of ACM members to receive the distinction.

“Erik, Fredo, Bill, and Daniel are wonderful colleagues and extraordinary computer scientists, and I am so happy to see their contributions recognized with the most prestigious member grade of the ACM,” says CSAIL Director Daniela Rus, who herself was named a fellow last year. “All of us at CSAIL are very proud of these researchers for receiving these esteemed honors.”

ACM’s 53 fellows for 2016 were named for their distinctive contributions spanning such computer science disciplines as computer vision, computer graphics, software design, machine learning, algorithms, and theoretical computer science.

“As nearly 100,000 computing professionals are members of our association, to be selected to join the top 1 percent is truly an honor,” says ACM President Vicki L. Hanson. “Fellows are chosen by their peers and hail from leading universities, corporations and research labs throughout the world. Their inspiration, insights and dedication bring immeasurable benefits that improve lives and help drive the global economy.”

Demaine was selected for contributions to geometric computing, data structures, and graph algorithms. His research interests include the geometry of understanding how proteins fold and the computational difficulty of playing games. He received the MacArthur Fellowship for his work in computational geometry. He and his father Martin Demaine have produced numerous curved-crease sculptures that explore the intersection of science and art — and that are currently in the Museum of Modern Art in New York.

A Department of Electrical Engineering and Computer Science (EECS) professor whose research spans video graphics and photo generation, Durand was selected for contributions to computational photography and computer graphics rendering. He also works to develop new algorithms to enable image enhancements and improved scene understanding. He received the ACM SIGGRAPH Computer Graphics Achievement Award in 2016.

Freeman is the Thomas and Gerd Perkins Professor of EECS at MIT. He was selected as a fellow for his contributions to computer vision, machine learning, and computer graphics. His research interests also include Bayesian models of visual perception and computational photography. He received “Outstanding Paper” awards at computer vision and machine learning conferences in 1997, 2006, 2009 and 2012, as well as ACM’s “Test of Time” awards for papers from 1990 and 1995.

Prevent other neurons from firing

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers will present their results this week at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if they exceed a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
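As a concrete, purely illustrative example, the short NumPy sketch below implements that weighted-sum-and-threshold behavior for a tiny two-layer network; the weights and thresholds are arbitrary values chosen for the example.

    # Minimal sketch of the weighted-sum-and-threshold behavior described above.
    # Weights and thresholds are arbitrary illustrative values.
    import numpy as np

    def layer_fires(inputs, weights, thresholds):
        """Each node sums its weighted inputs and fires (1) if the sum exceeds its threshold."""
        totals = weights @ inputs              # one weighted sum per node in this layer
        return (totals > thresholds).astype(int)

    x = np.array([1, 0, 1])                    # which input nodes fired
    W1 = np.array([[0.6, 0.2, 0.7],            # weights into two second-layer nodes
                   [0.1, 0.9, 0.1]])
    W2 = np.array([[1.0, -0.5]])               # weights into one output node

    hidden = layer_fires(x, W1, thresholds=np.array([1.0, 0.5]))
    output = layer_fires(hidden, W2, thresholds=np.array([0.4]))
    print(hidden, output)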

In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.
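A minimal illustration of that idea is the classic perceptron-style update below, which nudges a single threshold unit’s weights until it reproduces logical OR on its sample data; the learning rate and number of passes are arbitrary, and real networks are trained with far more sophisticated procedures.

    # Toy illustration of "training": nudge weights until the output is consistently right.
    # A single threshold unit learns logical OR; the learning rate is arbitrary.
    import numpy as np

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    w, b, lr = np.zeros(2), 0.0, 0.1

    for _ in range(20):                          # repeated passes over the sample data
        for x, target in data:
            out = int(np.dot(w, x) + b > 0)      # fire if the weighted sum clears the threshold
            err = target - out
            w += lr * err * np.array(x)          # adjust weights toward the correct output
            b += lr * err                        # adjust the (negative) threshold

    print(w, b, [int(np.dot(w, x) + b > 0) for x, _ in data])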

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the weights on the connections are either all positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.
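The toy simulation below captures the flavor of such a circuit: output neurons sustain themselves through self-feedback, and a shared inhibitory signal, active whenever more than one output fires, knocks competitors out until a single winner remains. It is a sketch of the general idea only, not the researchers’ model or analysis; here the effect of inhibition on each active output is abstracted into a per-round coin flip.

    # Toy winner-take-all dynamics: competition triggers inhibition, and inhibition
    # plus chance thins the active outputs until one self-sustaining winner is left.
    # Illustration only; not the authors' circuit or their proofs.
    import random

    def winner_take_all(n=5, rounds=200, seed=7):
        random.seed(seed)
        firing = set(range(n))                  # all output neurons start active
        for _ in range(rounds):
            if len(firing) == 1:
                return firing.pop()             # a lone output sustains itself and wins
            # the inhibitory neuron fires because several outputs are active; each
            # active output survives the round only if its self-feedback happens to
            # outweigh the shared inhibition (modeled here as a fair coin flip)
            survivors = {i for i in firing if random.random() < 0.5}
            firing = survivors or firing        # never extinguish every output
        return None

    print("winning output neuron:", winner_take_all())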

How policy and technology can work together

“When you’re part of a community, you want to leave it better than you found it,” says Keertan Kini, an MEng student in the Department of Electrical Engineering and Computer Science, or Course 6. That philosophy has guided Kini throughout his years at MIT, as he works to improve policy both inside and outside of MIT.

A member of the Undergraduate Student Advisory Group, former chair of the Course 6 Underground Guide Committee, and a member of both the Internet Policy Research Initiative (IPRI) and the Advanced Network Architecture group, Kini has focused his research on finding ways that technology and policy can work together. As Kini puts it, “there can be unintended consequences when you don’t have technology makers who are talking to policymakers and you don’t have policymakers talking to technologists.” His goal is to get them talking to each other.

Kini first became interested in politics at age 14, when he volunteered for President Obama’s 2008 campaign, making calls and putting up posters. “That was the point I became civically engaged,” says Kini. He went on to campaign for a ballot initiative to raise more funding for his high school, and he hasn’t stopped being interested in public policy since.

High school was also where Kini became interested in computer science. He took a computer science class there on the recommendation of his sister, and in his senior year, he started watching computer science lectures on MIT OpenCourseWare (OCW) by Hal Abelson, a professor in MIT’s Department of Electrical Engineering and Computer Science.

“That lecture reframed what computer science was. I loved it,” Kini recalls. “The professor said ‘it’s not about computers, and it’s not about science.’ It might be an art or engineering, but it’s not science, because what we’re working with are idealized components, and ultimately the power of what we can actually achieve with them is based not so much on physical limitations as on the limitations of the mind.”

In part thanks to Abelson’s OCW lectures, Kini came to MIT to study electrical engineering and computer science. He is now pursuing an MEng in electrical engineering and computer science, a fifth-year master’s program that follows his undergraduate studies.

Combining two disciplines

Kini set his policy interest to the side his freshman year, until he took 6.805J (Foundations of Information Policy), with Abelson, the same professor who inspired Kini to study computer science. After taking Abelson’s course, Kini joined him and Daniel Weitzner, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory, in putting together a big data and privacy workshop for the White House in the wake of the Edward Snowden leak of classified information from the National Security Agency. Four years later, Kini is now a teaching assistant for 6.805J.

With Weitzner as his advisor, Kini went on to work on a SuperUROP, an advanced version of the Undergraduate Research Opportunities Program in which students take on their own research project for a full year. Kini’s project focused on making it easier for organizations that had experienced a cybersecurity breach to share how the breach happened with other organizations, without accidentally sharing private or confidential information as well.

Typically, when a security breach happens, there is a “human bottleneck,” as Kini puts it. Humans have to manually check all information they share with other organizations to ensure they don’t share private information or get themselves into legal hot water. The process is time-consuming, slowing down the improvement of cybersecurity for all organizations involved. Kini created a prototype of a system that could automatically screen information about cybersecurity breaches, determining what data had to be checked by a human, and what was safe to send along.
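A minimal, rule-based sketch of that kind of screening might look like the following; the field names and patterns are hypothetical, and Kini’s actual prototype is certainly more sophisticated than this.

    # Rule-based sketch of breach-report screening: fields that look sensitive are
    # routed to a human reviewer, and the rest are cleared for automatic sharing.
    # Field names and patterns are hypothetical; this is not Kini's prototype.
    import re

    SENSITIVE_PATTERNS = {
        "email":    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ipv4":     re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def screen_report(fields):
        """Split a breach report into auto-shareable fields and fields needing review."""
        share, review = {}, {}
        for name, value in fields.items():
            if any(p.search(value) for p in SENSITIVE_PATTERNS.values()):
                review[name] = value          # the human bottleneck, but only where needed
            else:
                share[name] = value
        return share, review

    report = {
        "malware_family": "example-dropper-variant",
        "entry_vector": "phishing email to jane.doe@example.com",
        "c2_address": "203.0.113.7 over TLS",
    }
    shareable, needs_review = screen_report(report)
    print("share automatically:", shareable)
    print("needs human review:", needs_review)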

The Computer Science and Artificial Intelligence Laboratory

Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach

  • A team led by CSAIL director Daniela Rus developed an ingestible origami robot that unfolds in the stomach to patch wounds and remove swallowed batteries.
  • Researchers are working on NASA’s humanoid robot, “Valkyrie,” which will be programmed to travel into outer space and perform tasks autonomously.
  • A 3-D printed robot was made of both solids and liquids and printed in a single step, with no assembly required.

Keeping data safe and secure

  • CSAIL hosted a cyber summit that convened members of academia, industry, and government, including featured speakers Admiral Michael Rogers, director of the National Security Agency; and Andrew McCabe, deputy director of the Federal Bureau of Investigation.
  • Researchers came up with a system for staying anonymous online that uses less bandwidth to transfer large files between anonymous users.
  • A deep-learning system called AI2 was shown to be able to predict 85 percent of cyberattacks with the help of some human input.

Advancements in computer vision

  • A new imaging technique called Interactive Dynamic Video lets you reach in and “touch” objects in videos using a normal camera.
  • Researchers from CSAIL and Israel’s Weizmann Institute of Science produced a movie display called Cinema 3D that uses special lenses and mirrors to allow viewers to watch 3-D movies in a theater without having to wear those clunky 3-D glasses.
  • A new deep-learning algorithm can predict human interactions more accurately than ever before, by training itself on footage from TV shows like “Desperate Housewives” and “The Office.”
  • A group from MIT and Harvard University developed an algorithm that may help astronomers produce the first image of a black hole, stitching together telescope data to essentially turn the planet into one large telescope dish.

Tech to help with health

  • A team produced a robot that can help schedule and assign tasks in fields like medicine and the military by learning from humans.
  • Researchers came up with an algorithm for identifying organs in fetal MRI scans to extensively evaluate prenatal health.
  • A wireless device called EQ-Radio can tell if you’re excited, happy, angry, or sad, by measuring breathing and heart rhythms.

Tips for making big data manageable

One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time-consuming on the full set.

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”

As an example, in their paper the researchers apply their technique to a matrix — that is, a table — that maps every article on the English version of Wikipedia against every word that appears on the site. That’s 1.4 million articles, or matrix rows, and 4.4 million words, or matrix columns.

That matrix would be much too large to analyze using low-rank approximation, an algorithm that can deduce the topics of free-form texts. But with their coreset, the researchers were able to use low-rank approximation to extract clusters of words that denote the 100 most common topics on Wikipedia. The cluster that contains “dress,” “brides,” “bridesmaids,” and “wedding,” for instance, appears to denote the topic of weddings; the cluster that contains “gun,” “fired,” “jammed,” “pistol,” and “shootings” appears to designate the topic of shootings.
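To see the low-rank-approximation step at a small scale, the sketch below runs truncated SVD on a sparse word-by-document matrix built from a toy corpus and reads each component off as a cluster of related words. It uses off-the-shelf scikit-learn tools and is not the coreset construction from the paper, only the kind of analysis a coreset is meant to make feasible at Wikipedia scale.

    # Small-scale illustration of topic extraction via low-rank approximation
    # (truncated SVD) on a sparse term-document matrix. Toy corpus; this is not
    # the researchers' coreset technique.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = [
        "the bride and bridesmaids arrived before the wedding dress",
        "the wedding dress and the brides bouquet",
        "a pistol was fired during the shooting",
        "the gun jammed before the shootings",
    ]

    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                 # sparse documents-by-words matrix

    svd = TruncatedSVD(n_components=2, random_state=0)
    svd.fit(X)

    words = vec.get_feature_names_out()
    for i, component in enumerate(svd.components_):
        top = component.argsort()[::-1][:4]     # highest-weight words in this component
        print(f"topic {i}:", [words[j] for j in top])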

Joining Rus on the paper are Mikhail Volkov, an MIT postdoc in electrical engineering and computer science, and Dan Feldman, director of the University of Haifa’s Robotics and Big Data Lab and a former postdoc in Rus’s group.

The researchers’ new coreset technique is useful for a range of tools with names like singular-value decomposition, principal-component analysis, and latent semantic analysis. But what they all have in common is dimension reduction: They take data sets with large numbers of variables and find approximations of them with far fewer variables.

In this, these tools are similar to coresets. But coresets are application-specific, while dimension-reduction tools are general-purpose. That generality makes them much more computationally intensive than coreset generation — too computationally intensive for practical application to large data sets.

The researchers believe that their technique could be used to winnow a data set with, say, millions of variables — such as descriptions of Wikipedia pages in terms of the words they use — to merely thousands. At that point, a widely used technique like principal-component analysis could reduce the number of variables to mere hundreds, or even lower.

The researchers’ technique works with what is called sparse data. Consider, for instance, the Wikipedia matrix, with its 4.4 million columns, each representing a different word. Any given article on Wikipedia will use only a few thousand distinct words. So in any given row — representing one article — only a few thousand matrix slots out of 4.4 million will have any values in them. In a sparse matrix, most of the values are zero.

Crucially, the new technique preserves that sparsity, which makes its coresets much easier to deal with computationally. Calculations become a lot easier when they mostly involve multiplication by, and addition of, zero.
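The snippet below illustrates the point with SciPy’s compressed sparse row format: a single row with millions of columns costs memory only for its handful of nonzero entries, and the zeros never need to be touched. The column indices and values are made up for scale.

    # A row with millions of columns but only a few nonzero entries: a sparse
    # format stores just the nonzeros. Indices and values are invented for scale.
    import numpy as np
    from scipy.sparse import csr_matrix

    n_words = 4_400_000                                   # one column per distinct word
    cols = np.array([12, 40_071, 903_222, 3_100_456])     # words this article actually uses
    vals = np.array([3.0, 1.0, 7.0, 2.0])                 # e.g., word counts or weights
    row = csr_matrix((vals, (np.zeros(4, dtype=int), cols)), shape=(1, n_words))

    print(row.nnz, "stored values out of", row.shape[1], "columns")
    print(row[0, 903_222], row[0, 5])                     # explicit nonzero vs. implicit zero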

Paving a path to medicine

During January of her junior year at MIT, Caroline Colbert chose to do a winter externship at Massachusetts General Hospital (MGH). Her job was to shadow the radiation oncology staff, including the doctors who care for patients and the medical physicists who design radiation treatment plans.

Colbert, now a senior in the Department of Nuclear Science and Engineering (NSE), had expected to pursue a career in nuclear power. But after working in a medical environment, she changed her plans.

She stayed at MGH to work on building a model to automate the generation of treatment plans for patients who will undergo a form of radiation therapy called volumetric-modulated arc therapy (VMAT). The work was so interesting that she is still involved with it and has now decided to pursue a doctoral degree in medical physics, a field that allows her to blend her training in nuclear science and engineering with her interest in medical technologies.

She’s even zeroed in on schools with programs accredited by the Commission on Accreditation of Medical Physics Education Programs, so she’ll have the option of having a more direct impact on patients. “I don’t know yet if I’ll be more interested in clinical work, research, or both,” she says. “But my hope is to work in a hospital setting.”

Many NSE students and faculty focus on nuclear energy technologies. But, says Colbert, “the department is really supportive of students who want to go into other industries.”

It was as a middle school student that Colbert first became interested in engineering. Later, in a chemistry class, a lesson about nuclear decay set her on a path towards nuclear science and engineering. “I thought it was so cool that one element can turn into another,” she says. “You think of elements as the fundamental building blocks of the physical world.”

Colbert’s parents, both from the Boston area, had encouraged her to apply to MIT. They also encouraged her towards the medical field. “They loved the idea of me being a doctor, and then when I decided on nuclear engineering, they wanted me to look into medical physics,” she says. “I was trying to make my own way. But when I did look seriously into medical physics, I had to admit that my parents were right.”

At MGH, Colbert’s work began with searching for practical ways to improve the generation of VMAT treatment plans. As with another form of radiation therapy called intensity-modulated radiation therapy (IMRT), the technology focuses radiation doses on the tumor and away from the healthy tissue surrounding it. The more accurate the dosing, the fewer side effects patients have after therapy.

With VMAT, a main challenge is in devising an accurate individualized treatment plan. Each plan is customized specifically to the patient’s anatomy. This design process is well defined for IMRT, which uses a set of intersecting beams to deliver radiation. VMAT also intersects beams but rotates them around the patient. “There are more degrees of freedom, so it should provide more accurate treatment, but it’s also more computationally difficult to optimize an individual treatment plan,” says Colbert.
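A toy version of that optimization can be written as a nonnegative least-squares fit: choose beam weights so the combined dose approaches the prescription in tumor voxels while staying low in healthy tissue. The numbers below are invented, and clinical VMAT planning involves far more beams, voxels, and constraints than this sketch suggests.

    # Toy treatment-planning fit: nonnegative beam weights that push the delivered
    # dose toward the prescription. All numbers are hypothetical; this is not a
    # clinical planning algorithm.
    import numpy as np
    from scipy.optimize import nnls

    # dose_per_beam[i, j] = dose that beam j deposits in voxel i at unit weight
    dose_per_beam = np.array([
        [0.9, 0.8, 0.1],    # tumor voxel
        [0.8, 0.9, 0.2],    # tumor voxel
        [0.4, 0.1, 0.9],    # healthy-tissue voxel
    ])
    prescription = np.array([60.0, 60.0, 0.0])   # target dose per voxel

    weights, residual = nnls(dose_per_beam, prescription)
    print("beam weights:", np.round(weights, 2))
    print("delivered dose:", np.round(dose_per_beam @ weights, 1))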

Database analytics platform queries

People generally associate graphics processing units (GPUs) with image processing. Developed for video games in the 1990s, modern GPUs are specialized circuits with thousands of small, efficient processing units, or “cores,” that work simultaneously to rapidly render graphics on screen.

But for the better part of a decade, GPUs have also found general computing applications. Because of their incredible parallel-computing speeds and high-performance memory, GPUs are today used for advanced lab simulations and deep-learning programming, among other things.

Now, Todd Mostak, a former researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is using GPUs to develop an analytic database and visualization platform called MapD, which is the fastest of its kind in the world, according to Mostak.

MapD is essentially a form of a commonly used database-management system that’s modified to run on GPUs instead of the central processing units (CPUs) that power most traditional database-management systems. By doing so, MapD can process billions of data points in milliseconds, making it 100 times faster than traditional systems. Moreover, MapD visualizes all processed data points nearly instantaneously — such as, say, plotting tweets on a world map — and parameters can be modified on the fly to adjust the visualized display.

With its first product launched last March, MapD’s clients already include Verizon and other big-name telecommunications companies, a social media giant, and financial and advertising firms. In October, the investment arm of the U.S. Central Intelligence Agency, In-Q-Tel, announced that it had invested in MapD’s latest funding round to accelerate the development of certain features for the U.S. intelligence community.

“[The CIA has] a lot of geospatial data, and they need to be able to form, visualize, and query that data in real-time. It’s a real need across the intelligence community,” Mostak says.

“Making GPUs first-class citizens”

GPUs are designed specifically for parallel computing, with thousands of energy-efficient cores that can, for example, simultaneously determine the color of each pixel on a computer screen to render an image. GPUs also use high-bandwidth memory, a form of random access memory (RAM) that is roughly an order of magnitude faster than the memory typically paired with CPUs.

Today, some databases are being powered by GPUs. But these systems suffer from a major design flaw, Mostak says: “In most implementations, the data is initially stored on a CPU, moved to the GPU for a query, and results are moved back to the CPU for storage. Even if you speed up the computation time of a query [by using a GPU], you lose most of the speed by transferring from CPU to GPU and back.”
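The sketch below illustrates the design point Mostak describes, using the CuPy library as a stand-in rather than MapD itself: a column is transferred to the GPU once, and repeated filter-and-aggregate queries run against the resident copy, with only scalar results coming back to the CPU. It assumes an NVIDIA GPU with CuPy installed, and the column values are synthetic.

    # Keep the data resident on the GPU and run repeated queries there, instead of
    # shuttling the column between CPU and GPU for every query. CuPy stands in for
    # a GPU database here; this is not MapD. Requires an NVIDIA GPU with CuPy.
    import numpy as np
    import cupy as cp

    amounts_cpu = np.random.default_rng(0).uniform(0, 500, size=10_000_000)

    # one-time transfer: the column now lives in GPU memory
    amounts_gpu = cp.asarray(amounts_cpu)

    def total_over(threshold):
        """Filter + aggregate executed entirely on the GPU-resident column."""
        mask = amounts_gpu > threshold
        return float((amounts_gpu * mask).sum())   # only the scalar result returns to the CPU

    for t in (100, 250, 400):                      # repeated queries, no re-transfer of the column
        print(t, total_over(t))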

Professor Daniela Rus combines automation and mobility

Daniela Rus loves Singapore. As the MIT professor sits down in her Frank Gehry-designed office in Cambridge, Massachusetts, to talk about her research conducted in Singapore, her face relaxes into a big smile.

Her story with Singapore started in the summer of 2010, when she made her first visit to one of the most futuristic and forward-looking cities in the world. “It was love at first sight,” says the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). That summer, she came to Singapore to join the Singapore-MIT Alliance for Research and Technology (SMART) as the first principal investigator in residence for the Future of Urban Mobility Research Program.

“In 2010, nobody was talking about autonomous driving. We were pioneers in developing and deploying the first mobility on demand for people with self-driving golf buggies,” says Rus. “And look where we stand today! Every single car maker is investing millions of dollars to advance autonomous driving. Singapore did not hesitate to provide us, at an early stage, with all the financial, logistical, and transportation resources to facilitate our work.”

Since her first visit, Rus has returned each year to follow up on the research, and has been involved in leading revolutionary projects for the future of urban mobility. “Our team worked tremendously hard on self-driving technologies, and we are now presenting a wide range of different devices that allow autonomous and secure mobility,” she says. “Our objective today is to make taking a driverless car for a spin as easy as programming a smartphone. A simple interaction between the human and machine will provide a transportation butler.”

The first mobility devices her team worked on were self-driving golf buggies. Two years ago, these buggies advanced to the point where the group decided to open them to the public in a week-long trial at the Chinese Gardens, an idea facilitated by Singapore’s Land Transport Authority (LTA). Over the course of that week, more than 500 people booked rides from the comfort of their homes, and came to the Chinese Gardens at the designated time and spot to experience mobility-on-demand with robots.

The test was conducted around winding paths trafficked by pedestrians, bicyclists, and the occasional monitor lizard. The experiments also tested an online booking system that enabled visitors to schedule pickups and drop-offs around the garden, automatically routing and redeploying the vehicles to accommodate all the requests. The public’s response was joyful and positive, and this brought the team renewed enthusiasm to take the technology to the next level.

Since the Chinese Gardens public trial, the autonomous car group has introduced a few other self-driving vehicles: a self-driving city car, and two personal mobility robots, a self-driving scooter and a self-driving wheelchair. Each of these vehicles was created in three phases: In the first phase, the vehicle was converted to drive-by-wire control, which allows a computer to control acceleration, braking, and steering of the car. In the second phase, the vehicle drives on each of the pathways in its operation environment and makes a map using features detected by the sensors. In the third phase, the vehicle uses the map to compute a path from the customer’s pick-up point to the customer’s drop-off point and proceeds to drive along the path, localizing continuously and avoiding any other cars, people, and unexpected obstacles. The devices also used traffic data from LTA to model traffic patterns and to study the benefits of ride sharing systems.
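As a back-of-the-envelope illustration of that third phase, the sketch below computes a path from a pick-up point to a drop-off point on a toy occupancy grid using plain breadth-first search; the team’s actual mapping, planning, and obstacle-avoidance pipeline is, of course, far richer than this.

    # Minimal "phase three" sketch: plan a route from pick-up to drop-off on a map
    # while avoiding obstacles. The map is a toy occupancy grid and the planner is
    # breadth-first search; this is not the team's system.
    from collections import deque

    grid = [                               # 0 = free path, 1 = obstacle
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]

    def plan_path(start, goal):
        """Breadth-first search over free grid cells; returns a list of waypoints."""
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:               # reconstruct the route back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    print(plan_path(start=(0, 0), goal=(4, 4)))    # waypoints from pick-up to drop-off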

Last April, the team conducted a new test with the public at MIT. This time, they deployed a self-driving scooter that used the same autonomy system indoors and outdoors. The trial included autonomous rides in MIT’s Infinite Corridor. A significant challenge in this type of space is localization, or accurately knowing the location of the robot in a long corridor that has few distinctive features. The system proved to work very well in this type of environment, and the trial completed the demonstration of a comprehensive, uniform autonomous mobility system.