ROBOTICS
Robot can pick up any object after inspecting it

Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up. Certainly there has been some progress: for decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects, but even then they don't truly understand objects' shapes, so there is little they can do after a quick pick-up.

In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say they have made a key development in this area of work: a system that lets robots inspect random objects and visually understand them well enough to accomplish specific tasks without ever having seen them before. The system, dubbed "Dense Object Nets" (DON), looks at objects as collections of points that serve as "visual roadmaps" of sorts. This approach lets robots better understand and manipulate items and, most importantly, allows them to pick up a specific object among a clutter of similar objects—a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses. For example, someone might use DON to get a robot to grab onto a specific spot on an object—say, the tongue of a shoe. From that, it can look at a shoe it has never seen before and successfully grab its tongue.

"Many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter," says Ph.D. student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow Ph.D. student Pete Florence, alongside MIT professor Russ Tedrake. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side."
The team sees potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house and letting it clean while you're at work, or showing it an image of dishes so that it puts your plates away while you're on vacation. What's also noteworthy is that none of the data was labeled by humans; the system is "self-supervised," so it doesn't require any human annotations.

Two common approaches to robot grasping involve either task-specific learning or creating a general grasping algorithm. Both have obstacles: task-specific methods are difficult to generalize to other tasks, and general grasping doesn't get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots. The DON system, by contrast, essentially creates a series of coordinates on a given object, which serve as a kind of "visual roadmap" of the object, to give the robot a better understanding of what it needs to grasp, and where.

The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object's 3-D shape, similar to how panoramic photos are stitched together from multiple images. After training, if a person specifies a point on an object, the robot can take a photo of that object and identify and match points, allowing it to pick up the object at that specified point. This is different from systems like UC Berkeley's DexNet, which can grasp many different items but can't satisfy a specific request. Imagine an 18-month-old infant, who doesn't understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to "go grab your truck by the red end of it."
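The "identify and match points" step can be pictured as a nearest-neighbor search: DON maps every pixel to a learned descriptor vector, and finding the same point on a new object means finding the pixel whose descriptor is closest to the one the user picked. The following is a minimal sketch of that matching step only, with made-up array shapes; it is an illustration of the general idea, not the authors' code.

```python
import numpy as np

def match_point(ref_descriptor, target_descriptor_image):
    """Find the pixel in a target image whose dense descriptor is
    closest (in L2 distance) to a reference descriptor chosen in a
    source image.

    ref_descriptor: (D,) descriptor of the user-specified point
    target_descriptor_image: (H, W, D) dense descriptors of a new image
    Returns (row, col) of the best-matching pixel.
    """
    diff = target_descriptor_image - ref_descriptor  # broadcast over H, W
    dist = np.linalg.norm(diff, axis=-1)             # (H, W) distance map
    return np.unravel_index(np.argmin(dist), dist.shape)

# Toy usage: a 4x4 "image" with 3-D descriptors, where pixel (2, 1)
# holds exactly the reference descriptor.
rng = np.random.default_rng(0)
desc_img = rng.normal(size=(4, 4, 3))
ref = desc_img[2, 1].copy()
print(match_point(ref, desc_img))  # -> (2, 1)
```

In the real system the descriptors come from a trained network rather than random numbers, but the grasp-targeting query reduces to exactly this kind of argmin over a distance map.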
In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy's right ear from a range of different configurations. This showed that, among other things, the system has the ability to distinguish left from right on symmetrical objects. When testing on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs—and having never seen pictures of the hats in training data before. "In factories robots often need complex part feeders to work reliably, " says Manuelli. "But a system like this that can understand objects' orientations could just take a picture and be able to grasp and adjust the object accordingly." In the future, the team hopes to improve the system to a place where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of say, cleaning a desk. The team will present their paper on the system next month at the Conference on Robot Learning in Zürich, Switzerland. Content gathered by BTM robotics training center, robotics in Bangalore, stem education in Bangalore, stem education in Bannerghatta road, stem education in JP Nagar, robotics training centres in Bannerghatta road, robotics training centres in JP Nagar, robotics training for kids, robotics training for beginners, best robotics in Bangalore.
Crop Counting Robot

Today's crop breeders are trying to boost yields while also preparing crops to withstand severe weather and changing climates. To succeed, they must locate genes for high-yielding, hardy traits in crop plants' DNA. A robot developed by the University of Illinois to find these proverbial needles in the haystack was recognized with the best systems paper award at Robotics: Science and Systems, the preeminent robotics conference, held this year in Pittsburgh.

"There's a real need to accelerate breeding to meet global food demand," said principal investigator Girish Chowdhary, an assistant professor of field robotics in the Department of Agricultural and Biological Engineering and the Coordinated Science Laboratory at Illinois. "In Africa, the populations will more than double by 2050, but today the yields are only a quarter of their potential."

Crop breeders run massive experiments comparing thousands of different cultivars, or varieties, of crops over hundreds of acres, and measure key traits, like plant emergence or height, by hand. The task is expensive, time-consuming, inaccurate, and ultimately inadequate: a team can only manually measure a fraction of the plants in a field.

"The lack of automation for measuring plant traits is a bottleneck to progress," said first author Erkan Kayacan, now a postdoctoral researcher at the Massachusetts Institute of Technology. "But it's hard to make robotic systems that can count plants autonomously: the fields are vast, the data can be noisy (unlike benchmark datasets), and the robot has to stay within the tight rows in the challenging under-canopy environment."

Illinois' 13-inch-wide, 24-pound TerraSentia robot is transportable, compact, and autonomous. It captures each plant from top to bottom using a suite of sensors (cameras), algorithms, and deep learning. Using a transfer-learning method, the researchers taught TerraSentia to count corn plants with just 300 images, as reported at the conference.
"One challenge is that plants aren't equally spaced, so just assuming that a single plant is in the camera frame is not good enough," said co-author ZhongZhong Zhang, a graduate student in the College of Agricultural, Consumer and Environmental Sciences (ACES). "We developed a method that uses the camera motion to adjust to varying inter-plant spacing, which has led to a fairly robust system for counting plants in different fields, with different and varying spacing, and at different speeds."

This work was supported by the Advanced Research Projects Agency-Energy (ARPA-E) as part of the TERRA-MEPP project at the Carl R. Woese Institute for Genomic Biology. The robot is now available through the start-up company EarthSense, Inc., which is equipping it with advanced autonomy and plant-analytics capabilities. TERRA-MEPP is a research project, led by the University of Illinois in partnership with Cornell University and Signetron Inc. with support from ARPA-E, that is developing a low-cost phenotyping robot to identify top-performing crops.
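The motion-adjusted counting idea can be caricatured in a few lines: rather than assuming one plant per camera frame, use the robot's own motion (odometry) to place each detection in a common along-row coordinate, then count a plant only when it lands far enough from every plant already counted. This is a hypothetical sketch of that general approach, not the TerraSentia pipeline; the 0.1 m merge radius and the detection format are assumptions for illustration.

```python
def count_plants(frames, merge_radius=0.10):
    """Count distinct plants from per-frame detections.

    frames: list of (robot_x, offsets) pairs, where robot_x is the
    robot's along-row position in meters (from odometry) and offsets
    are detected plant positions in meters relative to the camera.
    Detections that map to nearly the same world position across
    frames are treated as one plant.
    """
    counted = []  # along-row world positions of plants already counted
    for robot_x, offsets in frames:
        for off in offsets:
            world_x = robot_x + off
            if all(abs(world_x - p) > merge_radius for p in counted):
                counted.append(world_x)
    return len(counted)

# Two plants seen in overlapping frames should each be counted once.
frames = [
    (0.00, [0.30, 0.55]),  # both plants visible
    (0.25, [0.05, 0.31]),  # same two plants after the robot moved 0.25 m
]
print(count_plants(frames))  # -> 2
```

The point of the sketch is the deduplication step: without the odometry term, the second frame's detections would be double-counted, which is exactly the failure mode of assuming fixed inter-plant spacing.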
Experimental drone uses AI to spot violence in crowds

Whether or not it works well in practice is another story. Drone-based surveillance still makes many people uncomfortable, but that isn't stopping research into more effective airborne watchdogs. Scientists have developed an experimental drone system that uses AI to detect violent actions in crowds. The team trained their machine-learning algorithm to recognize a handful of typical violent motions (punching, kicking, shooting and stabbing) and flag them when they appear in a drone's camera view. The technology could theoretically detect a brawl that on-the-ground officers might miss, or pinpoint the source of a gunshot.

As The Verge warned, the technology definitely isn't ready for real-world use. The researchers used volunteers in relatively ideal conditions (open ground, generous spacing and dramatic movements). The AI is 94 percent effective at its best, but that accuracy drops to an unacceptable 79 percent when there are ten people in the scene. As is, the system might struggle to find an assailant on a jam-packed street -- what if it mistakes an innocent gesture for an attack? The creators expect to fly their drone system over two festivals in India as a test, but it's not something you'd want to rely on just yet.

There's a larger problem surrounding the ethical implications. There are already questions about abuses of power and the reliability of facial recognition systems. Governments may be tempted to use technology like this as an excuse to record aerial footage of people in public spaces, and could track the gestures of political dissidents (say, people holding protest signs or flashing peace symbols). It could easily combine with other surveillance methods to create a complete picture of a person's movements. The technology might only find acceptance in limited scenarios where organizations both make it clear that people are on camera and offer reassurances that a handshake won't lead to police at their door.
AI robots being fitted with special software that lets them adapt to injury like animals

It’s hard to believe that there once was a time when highly advanced robots only existed in Hollywood movies and comic books. Now, technology has reached a point where robots can do many things that human beings can do – in some cases, the two are even indistinguishable. A paper published in the journal Nature described an algorithm that has been specifically designed to allow robots to adapt to damage and ultimately reduce fragility. “Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans,” wrote the paper’s authors, Antoine Cully, Jeff Clune, Danesh Tarapore and Jean-Baptiste Mouret.
Terminator REBOOTING: Smart microchip can self-start and operate even when the battery runs out

Singaporean scientists have unveiled a smart microchip that will keep running long after its battery runs empty. The new BATLESS chip has an integrated solar cell that can harvest power from dim lighting, and its power-management features let it start and run on its own. The chip will be installed in Internet of Things (IoT) sensor nodes. Because it needs far less power, it can run on much smaller batteries, so smart devices that use the chip can be made many times cheaper and smaller.

IoT devices need to run for long periods of time on limited power, so they need to be very efficient. The batteries in a typical smart device are much bigger than the lone microchip they power, and a battery can also cost three times as much as the chip. The size of the battery depends on the operational life of the sensor node, and the node's lifespan in turn determines how often the battery needs to be replaced. Most IoT devices constantly use battery power: smaller batteries are replaced more often, while bigger batteries cost more and take up more space.
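The battery-sizing trade-off described above is simple arithmetic: a node's lifetime is roughly its battery capacity divided by its average current draw, so cutting power consumption shrinks the battery needed for a given service life. A back-of-the-envelope sketch (the capacity and current figures below are illustrative assumptions, not BATLESS specifications):

```python
def lifetime_days(capacity_mah, avg_current_ua):
    """Rough sensor-node lifetime: capacity divided by average draw.

    capacity_mah:   battery capacity in milliamp-hours
    avg_current_ua: average current draw in microamps
    Returns the estimated lifetime in days (ignoring self-discharge).
    """
    hours = capacity_mah * 1000.0 / avg_current_ua  # mAh -> uAh, then / uA
    return hours / 24.0

# A 220 mAh coin cell: a tenfold cut in average draw gives a tenfold
# longer life, or equivalently a tenfold smaller battery for the same life.
print(round(lifetime_days(220, 50)))  # -> 183 (about half a year)
print(round(lifetime_days(220, 5)))   # -> 1833 (about five years)
```

Real batteries self-discharge and lose capacity at low temperatures, so these figures are upper bounds, but the proportionality is why a lower-power chip translates directly into cheaper, smaller devices.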
'Blind' Cheetah 3 robot can climb stairs littered with obstacles

MIT's Cheetah 3 robot can now leap and gallop across rough terrain, climb a staircase littered with debris, and quickly recover its balance when suddenly yanked or shoved, all while essentially blind. The 90-pound mechanical beast -- about the size of a full-grown Labrador -- is intentionally designed to do all this without relying on cameras or any external environmental sensors. Instead, it nimbly "feels" its way through its surroundings in a way that engineers describe as "blind locomotion," much like making one's way across a pitch-black room.

"There are many unexpected behaviours the robot should be able to handle without relying too much on vision," says the robot's designer, Sangbae Kim, associate professor of mechanical engineering at MIT. "Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow. So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast."

Researchers will present the robot's vision-free capabilities in October at the International Conference on Intelligent Robots and Systems, in Madrid. In addition to blind locomotion, the team will demonstrate the robot's improved hardware, including an expanded range of motion compared with its predecessor Cheetah 2, which allows the robot to stretch backwards and forwards and twist from side to side, much like a cat limbering up to pounce.

Within the next few years, Kim envisions the robot carrying out tasks that would otherwise be too dangerous or inaccessible for humans to take on. "Cheetah 3 is designed to do versatile tasks such as power plant inspection, which involves various terrain conditions including stairs, curbs, and obstacles on the ground," Kim says.
"I think there are countless occasions where we [would] want to send robots to do simple tasks instead of humans. Dangerous, dirty, and difficult work can be done much more safely through remotely controlled robots."

Making a commitment

The Cheetah 3 can blindly make its way up staircases and through unstructured terrain, and can quickly recover its balance in the face of unexpected forces, thanks to two new algorithms developed by Kim's team: a contact detection algorithm and a model-predictive control algorithm.

The contact detection algorithm helps the robot determine the best time for a given leg to switch from swinging in the air to stepping on the ground. For example, if the robot steps on a light twig versus a hard, heavy rock, how it reacts -- and whether it continues to carry through with a step, or pulls back and swings its leg instead -- can make or break its balance.

"When it comes to switching from the air to the ground, the switching has to be very well-done," Kim says. "This algorithm is really about, 'When is a safe time to commit my footstep?'"

The algorithm decides when to transition a leg between swing and step by constantly calculating three probabilities for each leg: the probability of the leg making contact with the ground, the probability of the force generated once the leg hits the ground, and the probability that the leg will be in midswing. It calculates these probabilities from data from gyroscopes, accelerometers, and the joint positions of the legs, which record each leg's angle and height with respect to the ground. If, for example, the robot unexpectedly steps on a wooden block, its body will suddenly tilt, shifting the angle and height of the robot.
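The three per-leg probabilities can be fused in many ways; one common, simple choice is a weighted log-sum that yields a single "commit to stance" score per leg, thresholded to make the swing/step decision. The sketch below is an assumed illustration of that fusion step, not MIT's controller; the weights and threshold are made-up parameters.

```python
import math

def stance_score(p_contact, p_force, p_swing, w=(1.0, 1.0, 1.0)):
    """Combine the three leg probabilities into one stance-commit score.

    p_contact: probability the leg is touching the ground
    p_force:   probability implied by the measured ground-reaction force
    p_swing:   probability the leg is still mid-swing (counts against stance)
    Returns a log-domain score; higher favors committing the footstep.
    """
    eps = 1e-9  # avoid log(0)
    return (w[0] * math.log(p_contact + eps)
            + w[1] * math.log(p_force + eps)
            + w[2] * math.log(1.0 - p_swing + eps))

def should_commit(p_contact, p_force, p_swing, threshold=3 * math.log(0.5)):
    """Commit the footstep when the fused score clears a fixed threshold."""
    return stance_score(p_contact, p_force, p_swing) > threshold

print(should_commit(0.9, 0.8, 0.1))  # clearly on the ground -> True
print(should_commit(0.2, 0.1, 0.9))  # clearly mid-swing     -> False
```

Fusing the estimates in the log domain means one near-zero probability vetoes the commit, which matches the intuition that a leg should not push off unless all the evidence agrees it is on the ground.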
That data immediately feeds into calculating the three probabilities for each leg, which the algorithm combines to estimate whether each leg should commit to pushing down on the ground, or lift up and swing away in order to keep its balance -- all while the robot is virtually blind.

"If humans close our eyes and make a step, we have a mental model for where the ground might be, and can prepare for it. But we also rely on the feel of touch of the ground," Kim says. "We are sort of doing the same thing by combining multiple [sources of] information to determine the transition time."

The researchers tested the algorithm in experiments with the Cheetah 3 trotting on a laboratory treadmill and climbing a staircase. Both surfaces were littered with random objects such as wooden blocks and rolls of tape. "It doesn't know the height of each step and doesn't know there are obstacles on the stairs, but it just ploughs through without losing its balance," Kim says. "Without that algorithm, the robot was very unstable and fell easily."

Future force

The robot's blind locomotion was also partly due to the model-predictive control algorithm, which predicts how much force a given leg should apply once it has committed to a step. "The contact detection algorithm will tell you, 'this is the time to apply forces on the ground,'" Kim says. "But once you're on the ground, now you need to calculate what kind of forces to apply so you can move the body in the right way."

The model-predictive control algorithm calculates the predicted positions of the robot's body and legs a half-second into the future, given a certain force applied by any given leg as it makes contact with the ground. "Say someone kicks the robot sideways," Kim says. "When the foot is already on the ground, the algorithm decides, 'How should I specify the forces on the foot? Because I have an undesirable velocity on the left, so I want to apply a force in the opposite direction to kill that velocity.
If I apply 100 newtons in this opposite direction, what will happen a half-second later?'" The algorithm is designed to make these calculations for each leg every 50 milliseconds, or 20 times per second.

In experiments, researchers introduced unexpected forces by kicking and shoving the robot as it trotted on a treadmill, and by yanking it by the leash as it climbed up an obstacle-laden staircase. They found that the model-predictive algorithm enabled the robot to quickly produce counter-forces to regain its balance and keep moving forward, without tipping too far in the opposite direction. "It's thanks to that predictive control, which can apply the right forces on the ground, combined with this contact transition algorithm that makes each contact very quick and secure," Kim says.

The team had already added cameras to the robot to give it visual feedback of its surroundings. This will help in mapping the general environment and will give the robot a visual heads-up on larger obstacles such as doors and walls. But for now, the team is working to further improve the robot's blind locomotion. "We want a very good controller without vision first," Kim says. "And when we do add vision, even if it might give you the wrong information, the leg should be able to handle [obstacles]. Because what if it steps on something that a camera can't see? What will it do? That's where blind locomotion can help. We don't want to trust our vision too much."

This research was supported, in part, by Naver, Toyota Research Institute, Foxconn, and the Air Force Office of Scientific Research.
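In spirit, the model-predictive step described in this article picks the ground-reaction force that best cancels an unwanted body velocity over the next half-second, re-solving every 50 milliseconds. A toy one-dimensional version with a point-mass model makes the idea concrete; the mass, horizon, and candidate forces below are assumptions for illustration, not the Cheetah 3 controller.

```python
def best_force(velocity, mass=41.0, horizon=0.5,
               candidates=(-150, -100, -50, 0, 50, 100, 150)):
    """Pick the candidate force (in newtons) that leaves the smallest
    body velocity after `horizon` seconds, for a 1-D point mass:

        v_future = v + (F / m) * horizon

    velocity: current unwanted body velocity in m/s
    mass:     body mass in kg (~90 lb robot)
    """
    return min(candidates, key=lambda f: abs(velocity + f / mass * horizon))

# Robot kicked leftward at -1.2 m/s: the controller answers with a
# rightward force that nearly kills that velocity over the horizon.
print(best_force(-1.2))  # -> 100
```

The real algorithm optimizes over full 3-D body and leg states rather than enumerating a handful of scalar forces, but the structure is the same: simulate a short horizon forward under each candidate input, and commit to the one whose predicted outcome is best, then re-plan at the next 50 ms tick.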
12-Year-Old Girl Develops Pollution-Detecting Robot to Help Save the Ocean

A 12-year-old girl from Massachusetts has developed a water-cleaning system that has attracted attention from major tech companies. Anna Du from Andover loves the water and regularly goes to Boston Harbor. It was there that inspiration struck. "One day when I was at Boston Harbor, I noticed there was a lot of plastics on the sand. I tried picking some up, but there seemed to be so many more, and it just seemed impossible to clean it all up," she tells local media. While she is hardly the first person to be overwhelmed by trash in a public area like the harbor, Du was able to take her concern and translate it into action: she built a robot with an infrared light that detects microplastics in the ocean.

Microplastics are an increasing problem not just in Boston, but around the globe. Defined as particles of plastic under five millimeters, or 0.196 inches, they've become commonplace. A study released in April showed that ice samples from the Arctic Ocean contained 12,000 microplastic particles per liter of sea ice, the highest measurement ever taken. Those scientists used a technique similar to Du's remotely operated vehicle (ROV). Infrared is the preferred tactic for detecting microplastics because, as Du explains in her video, the chemical bonds within plastics are good at absorbing infrared.

With her ROV, Du applied for and was accepted into 3M's Young Scientist Lab. There, she'll be mentored by scientists in ways to improve her ROV. Her next hope is to move on from ROVs to autonomous microplastic-detecting drones.
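Because plastics absorb strongly at characteristic infrared wavelengths, a detector can flag a sample whose reflected IR return drops well below a clean-water baseline. The following is a hypothetical thresholding sketch of that general principle; the baseline and threshold values are made-up illustrations, not details of Du's design.

```python
def is_plastic(ir_return, baseline=1.0, absorption_threshold=0.4):
    """Flag a reading as likely plastic if the IR return is strongly
    absorbed relative to a clean-water baseline.

    ir_return: measured reflected IR intensity (same units as baseline)
    Returns True when the fractional absorption exceeds the threshold.
    """
    absorption = 1.0 - ir_return / baseline
    return absorption > absorption_threshold

# Four readings: two near the baseline (water), two strongly absorbed.
readings = [0.95, 0.50, 0.30, 0.90]
flags = [is_plastic(r) for r in readings]
print(flags)  # -> [False, True, True, False]
```

A deployed detector would compare absorption at several wavelengths against reference spectra rather than a single threshold, but the underlying signal is the same absorption dip the article describes.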
Therapy Robot Teaches Social Skills to Children with Autism

For some children with autism, interacting with other people can be an uncomfortable, mystifying experience. Feeling overwhelmed by face-to-face interaction, such children may find it difficult to focus their attention and learn social skills from their teachers and therapists—the very people charged with helping them learn to socially adapt. What these children need, say some researchers, is a robot: a cute, tech-based intermediary, with a body, that can teach them how to more comfortably interact with their fellow humans.

On the face of it, learning human interaction from a robot might sound counter-intuitive, or just backward. But a handful of groups are studying the technology in an effort to find out just how effective these robots are at helping children with autism spectrum disorder (ASD). One of those groups is LuxAI, a young company spun out of the University of Luxembourg. The company says its QTrobot can actually increase these children’s willingness to interact with human therapists, and decrease discomfort during therapy sessions. University of Luxembourg researchers working with QTrobot plan to present their results on 28 August at RO-MAN 2018, IEEE’s international symposium on robot and human interactive communication, held in Nanjing, China.

“When you are interacting with a person, there are a lot of social cues such as facial expressions, tonality of the voice, and movement of the body which are overwhelming and distracting for children with autism,” says Aida Nazarikhorram, co-founder of LuxAI. “But robots have this ability to make everything simplified,” she says.
“For example, every time the robot says something or performs a task, it’s exactly the same as the previous time, and that gives comfort to children with autism.” Feeling at ease with a robot, these children are better able to focus their attention on a curriculum presented together by the robot and a human therapist, Nazarikhorram says.

In the study that will be presented at RO-MAN later this month, 15 boys aged 4 to 14 participated in two interactions: one with QTrobot and one with a person alone. The children directed their gaze toward the robot about twice as long, on average, as toward the human. Repetitive behaviors like hand flapping—a sign of being uncomfortable and anxious—occurred about three times as often during sessions with the human as with the robot, according to the study.

More importantly, with a robot in the room, children tend to interact more with human therapists, according to feedback the company received during its research, says Nazarikhorram. “The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” she says. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

A number of groups have been developing digital therapeutics to treat psychiatric disorders, such as apps to treat substance abuse and therapeutic video games to treat attention deficit/hyperactivity disorder. But there’s something about an embodied robot that gives it an edge over plain screens. “The child is just focused on the app and doesn’t interact with the person beside him,” Nazarikhorram says. “With a robot, it’s the opposite.”

Robot-based therapy for autism has been studied for more than a decade. For instance, scientists first conceived of KASPAR the social robot in the late 1990s; it is now being developed by scientists at the University of Hertfordshire in the United Kingdom.
And there are at least two other commercial robots for autism: Robokind’s Milo and SoftBank Robotics’ NAO. The MIT Media Lab recently used NAO to test a machine-learning network it built that is capable of perceiving children’s behavior. The algorithm can estimate the level of interest and excitement of children with autism during a therapy session. The research was published in June in Science Robotics.

“In the end, we want the robots to be a medium towards naturalistic human-human interactions and not solely tools for capturing the attention of the kids,” says Oggi Rudovic of the MIT Media Lab, who co-authored the machine-learning paper in Science Robotics. The ultimate goal is to equip children with autism “with social skills that they can apply in everyday life,” he says, and LuxAI’s research “is a good step towards that goal.” However, more research, involving more children over longer periods of time, will be needed to assess whether robots can really equip children with real-life social skills, Rudovic says.

The QTrobot is a very new product. LuxAI started building it in 2016, finished a final prototype in mid-2017, and just this year began trials at various centers in Luxembourg, France, Belgium, and Germany. Nazarikhorram says she wanted to build a robot that was practical for classrooms and therapy settings. Her company focused on making its robot easily programmable by autism professionals with no tech background, and able to run for hours without having to be shut down to cool off. It also has a powerful processor and a 3D camera, so no additional equipment, such as a laptop, is needed, she says. Now LuxAI is conducting longer-term trials, studying the robot’s impact on social competence, emotional well-being, and interaction with people, Nazarikhorram says.

We asked Nazarikhorram if it’s possible that pairing robots with children with autism could actually move them further away from people, and closer to technology.
“That’s one of the fears that people have,” she says. “But in practice, in our studies and based on the feedback of our users, the interaction between the children and the therapists improves.”