http://WWW.ROBOTICS4U.IN
Updates
2018-09-13T15:04:20
Robot can pick up any object after inspecting it

Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up. Certainly there has been some progress: for decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects, but even then they don't truly understand objects' shapes, so there is little they can do after a quick pick-up.

In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say they have made a key development in this area: a system that lets robots inspect random objects and visually understand them enough to accomplish specific tasks, without ever having seen them before. The system, dubbed "Dense Object Nets" (DON), looks at objects as collections of points that serve as a kind of "visual roadmap". This approach lets robots better understand and manipulate items and, most importantly, allows them to pick up a specific object among a clutter of similar objects, a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses. For example, someone might use DON to get a robot to grab onto a specific spot on an object, say, the tongue of a shoe. From that, it can look at a shoe it has never seen before and successfully grab its tongue.

"Many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter," says Ph.D. student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow Ph.D. student Pete Florence, alongside MIT professor Russ Tedrake. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side."

The team sees potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house and letting it clean while you're at work, or using an image of dishes so that the system puts your plates away while you're on vacation. What's also noteworthy is that none of the data was labeled by humans; the system is "self-supervised", so it doesn't require any human annotations.

Two common approaches to robot grasping involve either task-specific learning or creating a general grasping algorithm. Both have obstacles: task-specific methods are difficult to generalize to other tasks, and general grasping doesn't get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots. The DON system, by contrast, essentially creates a series of coordinates on a given object, which serve as a kind of "visual roadmap" to give the robot a better understanding of what it needs to grasp, and where.

The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object's 3-D shape, similar to how panoramic photos are stitched together from multiple images. After training, if a person specifies a point on an object, the robot can take a photo of that object, then identify and match points so it can pick up the object at that specified point.
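To make that last step concrete, here is a minimal sketch of the nearest-descriptor lookup idea: a network (not shown) maps every pixel to a descriptor vector, and a point chosen in one view is re-found in a new view by searching for the closest descriptor. The array shapes, the random stand-in descriptors, and the `match_point` helper are illustrative assumptions, not the authors' code.

```python
import numpy as np

def match_point(ref_descriptor, descriptor_image):
    """Return the (row, col) pixel whose descriptor is closest (L2)
    to a reference descriptor picked in an earlier view."""
    h, w, d = descriptor_image.shape
    diffs = descriptor_image.reshape(-1, d) - ref_descriptor
    dists = np.linalg.norm(diffs, axis=1)
    return np.unravel_index(np.argmin(dists), (h, w))

# Stand-in for a trained network's per-pixel descriptors (H x W x D)
descriptor_image = np.random.rand(480, 640, 3)

# Pretend a user clicked the shoe's tongue at pixel (200, 300) in view 1
ref = descriptor_image[200, 300].copy()
print(match_point(ref, descriptor_image))  # -> (200, 300) in this toy case
```

In the real system, the descriptors for a second view would come from a fresh forward pass of the network on a new image of a possibly different object instance; the sketch reuses a single array only so the lookup is checkable by eye.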
This is different from systems like UC Berkeley's DexNet, which can grasp many different items but can't satisfy a specific request. Imagine an 18-month-old infant, who doesn't understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to "go grab your truck by the red end of it."

In one set of tests on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy's right ear from a range of different configurations. This showed that, among other things, the system can distinguish left from right on symmetrical objects. When testing on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs, and despite never having seen pictures of the hats in training data before.

"In factories, robots often need complex part feeders to work reliably," says Manuelli. "But a system like this that can understand objects' orientations could just take a picture and be able to grasp and adjust the object accordingly."

In the future, the team hopes to improve the system to the point where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk. The team will present their paper on the system next month at the Conference on Robot Learning in Zürich, Switzerland.

Content gathered by BTM Robotics Training Centre, Bangalore.
2018-09-08T14:02:49
Robots will never replace teachers but can boost children's education

Scientists say social robots are proving effective in the teaching of certain narrow subjects, such as vocabulary or prime numbers. But current technical limitations, particularly around speech recognition and the capacity for social interaction, mean their role will largely be confined to that of teaching assistants or tutors, at least for the foreseeable future.

The study was led by Professor in Robotics Tony Belpaeme, from the University of Plymouth and Ghent University, who has worked in the field of social robotics for around two decades. He said: "In recent years scientists have started to build robots for the classroom -- not the robot kits used to learn about technology and mathematics, but social robots that can actually teach. This is because pressures on teaching budgets, and calls for more personalized teaching, have led to a search for technological solutions.

"In the broadest sense, social robots have the potential to become part of the educational infrastructure just like paper, whiteboards, and computer tablets. But a social robot has the potential to support and challenge students in ways unavailable in current resource-limited educational environments. Robots can free up precious time for teachers, allowing the teacher to focus on what people still do best -- provide a comprehensive, empathic, and rewarding educational experience."

The current study, compiled in conjunction with academics at Yale University and the University of Tsukuba, involved a review of more than 100 published articles, which have shown robots to be effective at improving educational outcomes, largely because of their physical presence. However, it also explored some of the technical constraints in detail, highlighting that speech recognition, for example, is still insufficiently robust to allow a robot to understand spoken utterances from young children. The study also notes that introducing social robots into the school curriculum would pose significant logistical challenges and might in fact carry risks, with some children seen to rely too heavily on the help offered by robots rather than turning to them only when they are in difficulty.

In their conclusion, the authors add: "Next to the practical considerations of introducing robots in education, there are also ethical issues. How far do we want the education of our children to be delegated to machines? Overall, learners are positive about their experiences, but parents and teaching staff adopt a more cautious attitude.

"Notwithstanding that, robots show great promise when teaching restricted topics, with effects almost matching those of human tutoring. So although the use of robots in educational settings is limited by technical and logistical challenges for now, it is highly likely that classrooms of the future will feature robots that assist a human teacher."

Content gathered by BTM Robotics Training Centre, Bangalore.
2018-08-10T14:45:48
Therapy Robot Teaches Social Skills to Children with Autism

For some children with autism, interacting with other people can be an uncomfortable, mystifying experience. Feeling overwhelmed by face-to-face interaction, such children may find it difficult to focus their attention and learn social skills from their teachers and therapists, the very people charged with helping them learn to adapt socially. What these children need, say some researchers, is a robot: a cute, tech-based intermediary, with a body, that can teach them how to interact more comfortably with their fellow humans.

On the face of it, learning human interaction from a robot might sound counter-intuitive, or just backward. But a handful of groups are studying the technology in an effort to find out just how effective these robots are at helping children with autism spectrum disorder (ASD). One of those groups is LuxAI, a young company spun out of the University of Luxembourg. The company says its QTrobot can actually increase these children's willingness to interact with human therapists, and decrease discomfort during therapy sessions. University of Luxembourg researchers working with QTrobot plan to present their results on 28 August at RO-MAN 2018, IEEE's international symposium on robot and human interactive communication, held in Nanjing, China.

"When you are interacting with a person, there are a lot of social cues, such as facial expressions, tonality of the voice, and movement of the body, which are overwhelming and distracting for children with autism," says Aida Nazarikhorram, co-founder of LuxAI. "But robots have this ability to make everything simplified," she says. "For example, every time the robot says something or performs a task, it's exactly the same as the previous time, and that gives comfort to children with autism."

Feeling at ease with a robot, these children are better able to focus their attention on a curriculum presented together by the robot and a human therapist, Nazarikhorram says. In the study that will be presented at RO-MAN later this month, 15 boys ages 4 to 14 participated in two interactions: one with QTrobot and one with a person alone. The children directed their gaze toward the robot about twice as long, on average, as toward the human. Repetitive behaviors like hand flapping, a sign of being uncomfortable and anxious, occurred about three times as often during sessions with the human as with the robot, according to the study.

More importantly, with a robot in the room, children tend to interact more with human therapists, according to feedback the company received during its research, says Nazarikhorram. "The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child," she says. "Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior."

A number of groups have been developing digital therapeutics to treat psychiatric disorders, such as apps to treat substance abuse and therapeutic video games to treat attention deficit/hyperactivity disorder. But there's something about an embodied robot that gives it an edge over plain screens. "The child is just focused on the app and doesn't interact with the person beside him," Nazarikhorram says. "With a robot, it's the opposite."

Robot-based therapy for autism has been studied for more than a decade.
For instance, scientists first conceived of KASPAR the social robot in the late 1990s, and it is now being developed by researchers at the University of Hertfordshire in the United Kingdom. There are at least two other commercial robots for autism: Robokind's Milo and SoftBank Robotics' NAO. The MIT Media Lab recently used NAO to test a machine learning network it built that is capable of perceiving children's behavior. The algorithm can estimate the level of interest and excitement of children with autism during a therapy session. The research was published in June in Science Robotics.

"In the end, we want the robots to be a medium towards naturalistic human-human interactions and not solely tools for capturing the attention of the kids," says Oggi Rudovic of the MIT Media Lab, who co-authored the machine learning paper in Science Robotics. The ultimate goal is to equip children with autism "with social skills that they can apply in everyday life," he says, and LuxAI's research "is a good step towards that goal." However, more research, involving more children over longer periods of time, will be needed to assess whether robots can really equip children with real-life social skills, Rudovic says.

The QTrobot is a very new product. LuxAI started building it in 2016, finished a final prototype in mid-2017, and just this year began trials at various centers in Luxembourg, France, Belgium, and Germany. Nazarikhorram says she wanted to build a robot that was practical for classrooms and therapy settings. Her company focused on making the robot easily programmable by autism professionals with no tech background, and able to run for hours without having to be shut down to cool. It also has a powerful processor and a 3-D camera, so no additional equipment, such as a laptop, is needed, she says. Now LuxAI is conducting longer-term trials, studying the robot's impact on social competence, emotional well-being, and interaction with people, Nazarikhorram says.

We asked Nazarikhorram whether pairing robots with children with autism could actually move the children further away from people and closer to technology. "That's one of the fears that people have," she says. "But in practice, in our studies and based on the feedback of our users, the interaction between the children and the therapists improves."

Content gathered by BTM Robotics Training Centre, Bangalore.
2018-08-07T14:03:32
Ex-NASA Engineer Made the Perfect Rock-Skipping Robot

Ex-NASA engineer and YouTube inventor Mark Rober has made a perfect rock-skipping robot. Not only can the robot perform impressively, it can also help you learn how to skip rocks better. Rober built the robot by tweaking a clay-pigeon thrower, creating custom wooden throwing arms and a base for stability. Once he built a prototype, his team of assistants (nieces and nephews) gave Skippa, the rock-throwing robot, a makeover with spray paint and giant googly eyes, and then brainstormed test variables for a perfect skip.

How do you achieve the perfect rock skip? The team narrowed it down to four variables: the robot's wrist angle (the angle of the rock relative to the water), the robot's arm angle (which changes the path of the rock), and the rock's diameter and thickness. To create uniform controls for the robot tests, Rober and his team made their own rocks out of unfired clay (the clay discs dried easily in the sun and dissolved in water in under 30 minutes). After some unsuccessful skips, the robot began to send rocks tumbling across the water at over 60 skips per throw.

Here's the recipe Rober finally found for the perfect rock skip: the rock should hit at a 20-degree angle to the water, on a 20-degree path, with a higher throw for more energy. Flicking the wrist as much as possible will help the rock spin, which keeps the rock stable. And finally, the most important factors in rock selection are a flat bottom and a rock that's heavy but not too big to handle. When Rober's amateur engineering team tested the principles they learned from the robot, they quickly improved their skips from an average of three to 16.

Content gathered by BTM Robotics Training Centre, Bangalore.
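As a rough illustration of why the shallow 20-degree path matters, here is a minimal trigonometry sketch; the 12 m/s release speed is an assumed value for illustration, not a figure from Rober's tests.

```python
import math

def impact_components(speed, path_angle_deg):
    """Split the rock's speed at impact into a horizontal component
    (which carries it forward) and a vertical component (which drives
    it into the surface), given the descent path angle to the water."""
    theta = math.radians(path_angle_deg)
    return speed * math.cos(theta), speed * math.sin(theta)

# Assumed 12 m/s throw on the recommended 20-degree path
forward, downward = impact_components(12.0, 20)
print(f"forward: {forward:.1f} m/s, downward: {downward:.1f} m/s")
# -> forward: 11.3 m/s, downward: 4.1 m/s: most of the energy keeps
#    the rock travelling across the water rather than plunging into it
```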
2018-07-17T13:13:35
Using your body to control a drone is more effective than a joystick

If you've ever been chastised for throwing your entire body around during gaming (because physically leaning into track corners definitely helps somehow), here's a bit of science-backed vindication. Researchers in Switzerland have discovered that using your torso to control a drone is far more effective than using a joystick.

The team from EPFL monitored the body movements and muscular activity of 17 people, each with 19 markers placed across their upper bodies. The participants then followed the actions of a virtual drone through simulated landscapes, viewed through virtual-reality goggles. By observing motion patterns, the scientists found that only four markers, located on the torso, were needed to pilot a drone through an obstacle course, and that this method outperformed joystick control in both precision and reliability.

The study's lead author, Jenifer Miehlbradt of EPFL's Translational Neuroengineering Laboratory, said: "Using your torso really gives you the feeling that you are actually flying. Joysticks, on the other hand, are of simple design, but mastering their use to precisely control distant objects can be challenging."

The proof-of-concept system still depends on body markers and external motion detectors to work, so the team's next challenge will be making the technology wearable and completely independent. However, the range of applications is enormous. Being able to virtually fly while your head, limbs, hands, and feet are free to perform other tasks could be a major development for gaming, drone control, or even the planes of the future.

Content gathered by BTM Robotics Training Centre, Bangalore.
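To sketch how a handful of torso markers might be turned into flight commands, here is a minimal mapping from four 3-D marker positions to pitch and roll commands. The marker layout, coordinate convention, and gain are assumptions for illustration; this is not EPFL's actual control pipeline, which the article does not detail.

```python
import numpy as np

def torso_command(l_shoulder, r_shoulder, lower_back, sternum, gain=1.0):
    """Map four torso-marker positions (axes: x right, y forward, z up,
    in metres) to two commands: forward lean -> pitch, shoulder tilt -> roll."""
    spine = sternum - lower_back
    pitch = np.arctan2(spine[1], spine[2])          # lean forward/back (rad)
    shoulders = r_shoulder - l_shoulder
    roll = np.arctan2(shoulders[2], shoulders[0])   # dip left/right (rad)
    return gain * pitch, gain * roll                # e.g. speed and turn rate

# Assumed frame: a pilot leaning slightly forward, dipping the right shoulder
pitch_cmd, roll_cmd = torso_command(
    l_shoulder=np.array([-0.20, 0.02, 1.45]),
    r_shoulder=np.array([ 0.20, 0.02, 1.41]),
    lower_back=np.array([ 0.00, 0.00, 1.00]),
    sternum=np.array([ 0.00, 0.06, 1.30]),
)
print(f"pitch: {pitch_cmd:.2f} rad, roll: {roll_cmd:.2f} rad")
```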