Introduction

When moving through a crowd to reach a destination, people normally cover the distance safely without thinking much about what they are doing. They learn from the actions of others and take note of obstacles to avoid. Robots, on the other hand, struggle with these navigation concepts.

Motion planning algorithms generate a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to cross a room to reach a door, for example, will produce a step-by-step search tree of possible movements and then decide the best route to the door, taking various obstacles into account. One drawback is that these algorithms rarely learn: robots cannot exploit information about how they or other agents have acted previously in similar environments.

Andrei Barbu, one of the researchers, affiliated with the Computer Science and Artificial Intelligence Laboratory (CSAIL), compared the situation of robots to playing a game of chess. The decision trees branch out until the robots find the optimal way to move. But unlike chess players, Barbu says, robots explore what the future will look like without learning much about their environment and other agents. Navigation is always complicated for robots, whether they are walking through the same crowd for the first time or the thousandth: they will always explore, but they rarely observe and never use what happened in the past, Barbu said.

What the MIT researchers did was fuse a motion planning algorithm with a neural network that learns to recognize the paths likely to lead to the best results, and uses that information to guide the robot's movement in a given environment.
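The search-tree idea above can be sketched as a minimal RRT-style planner. Everything here is an illustrative assumption rather than the researchers' implementation: the 10x10 world, the `is_free` obstacle check, the step size, and the goal-biased sampling.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Grow a rapidly-exploring random tree from start toward goal.

    start, goal: (x, y) tuples inside a 10x10 world.
    is_free: callable(point) -> bool, True if the point is not in an obstacle.
    Returns a path as a list of points from start to goal, or None.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        # Sample a random point (occasionally the goal itself, to bias search).
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the nearest existing tree node and take one step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # discard steps that land inside an obstacle
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to reconstruct the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

With a circular obstacle in the middle of the room, `rrt((1, 1), (9, 9), lambda p: math.dist(p, (5, 5)) > 1.5)` grows the tree around the obstacle until a branch lands near the goal, exactly the branching-until-a-good-path-appears behavior described above.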
They demonstrated the effectiveness of their model in two scenarios: navigating rooms full of traps and narrow passages, and navigating areas while avoiding collisions with other agents. Yen-Ling Kuo, Barbu's research colleague and a doctoral student at CSAIL, said the aim of their research is to incorporate a new machine learning model into the search process that can make planning more efficient based on past experience.

Existing motion planning algorithms explore an environment by rapidly expanding a decision tree that ultimately covers the entire space. The robot then looks at the tree to find a way to reach its goal, such as a door. The model the researchers devised, by contrast, offers a compromise between exploring the environment and exploiting past experience, Kuo said.

Teaching robots to navigate

The learning process begins with a few examples. A robot using the model is trained to navigate similar environments in different ways. The neural network learns what makes these examples successful by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and the characteristics of the goals. In short, the model learns that when it is stuck in an environment and sees a door, walking through that door is probably a good way out, Barbu pointed out. The model unifies the exploration behavior of previous methods with this learned information.

The underlying motion planner, RRT*, was created by MIT professors Sertac Karaman and Emilio Frazzoli. It derives from a popular motion planning algorithm known as Rapidly-exploring Random Trees, or RRT. The planner builds a search tree while the neural network makes predictions.
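One way to picture how a learned model can bias a planner's search, sketched under stated assumptions and not the actual MIT system: draw several candidate sample points, score each with a function standing in for the neural network's prediction of how promising a region is, and expand the tree toward the best-scoring one. The `score` callable, the candidate count `k`, and the 10x10 world bounds are all hypothetical.

```python
import math
import random

def guided_expand(nodes, score, k=10, rng=None):
    """One tree-expansion step that blends exploration with a learned bias.

    nodes: existing tree nodes as (x, y) tuples.
    score: callable(point) -> float, a stand-in for a trained network's
           estimate of how promising a region of the space is.
    Returns (nearest existing node, chosen target point).
    """
    rng = rng or random.Random(0)
    # Draw several random candidates, then expand toward the most promising
    # one instead of toward a single uniformly random sample.
    candidates = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(k)]
    target = max(candidates, key=score)
    nearest = min(nodes, key=lambda n: math.dist(n, target))
    return nearest, target
```

A toy score such as `lambda p: -math.dist(p, goal)` simply prefers points near the goal; in the setting described above, that role would be played by the trained network, which has also seen walls, other agents, and goal features.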