http://WWW.ROBOTICS4U.IN
Robots will never replace teachers, but they can boost children's education. Scientists say social robots are proving effective in teaching certain narrow subjects, such as vocabulary or prime numbers. But current technical limitations -- particularly around speech recognition and the capacity for social interaction -- mean their role will largely be confined to that of teaching assistants or tutors, at least for the foreseeable future.

The study was led by Tony Belpaeme, Professor in Robotics at the University of Plymouth and Ghent University, who has worked in the field of social robotics for around two decades. He said: "In recent years scientists have started to build robots for the classroom -- not the robot kits used to learn about technology and mathematics, but social robots that can actually teach. This is because pressures on teaching budgets, and calls for more personalized teaching, have led to a search for technological solutions.

"In the broadest sense, social robots have the potential to become part of the educational infrastructure just like paper, whiteboards, and computer tablets. But a social robot has the potential to support and challenge students in ways unavailable in current resource-limited educational environments. Robots can free up precious time for teachers, allowing the teacher to focus on what people still do best -- providing a comprehensive, empathic, and rewarding educational experience."

The current study, compiled in conjunction with academics at Yale University and the University of Tsukuba, involved a review of more than 100 published articles, which have shown robots to be effective at increasing learning outcomes, largely because of their physical presence. However, the study also explored some of the technical constraints in detail, highlighting that speech recognition, for example, is still insufficiently robust to allow a robot to understand spoken utterances from young children.
It also says that introducing social robots into the school curriculum would pose significant logistical challenges and might in fact carry risks, with some children seen to rely too heavily on the help offered by robots rather than using them only when they are in difficulty.

In their conclusion, the authors add: "Next to the practical considerations of introducing robots in education, there are also ethical issues. How far do we want the education of our children to be delegated to machines? Overall, learners are positive about their experiences, but parents and teaching staff adopt a more cautious attitude.

"Notwithstanding that, robots show great promise when teaching restricted topics, with effects almost matching those of human tutoring. So although the use of robots in educational settings is limited by technical and logistical challenges for now, it is highly likely that classrooms of the future will feature robots that assist a human teacher."

Content gathered by BTM Robotics training centre, Bangalore.
Nvidia is training robots to learn new skills by observing humans. Initial experiments with the process have seen a Baxter robot learn to pick up and move colored boxes and a toy car in a lab environment. The researchers hope the new deep-learning-based system will help train robots to work alongside humans in both manufacturing and home settings.

“In the manufacturing environment, robots are really good at repeatedly executing the same trajectory over and over again, but they don’t adapt to changes in the environment, and they don’t learn their tasks,” Nvidia principal research scientist Stan Birchfield told VentureBeat. “So to repurpose a robot to execute a new task, you have to bring in an expert to reprogram the robot at a fairly low level, and it’s an expensive operation. What we’re interested in doing is making it easier for a non-expert user to teach a robot a new task by simply showing it what to do.”

The researchers trained a sequence of neural networks to perform the duties associated with perception, program generation, and program execution. As a result, the robot was able to learn a new task from a single demonstration in the real world. Once the robot witnesses the task, it generates a human-readable description of the states required to complete the task. A human can then correct the steps if necessary before execution on the real robot.

“There’s sort of a paradigm shift happening in the robotics community now,” Birchfield said. “We’re at the point now where we can use GPUs to generate an essentially limitless amount of pre-labeled data essentially for free to develop and test algorithms. And this is potentially going to allow us to develop these robotics systems that need to learn how to interact with the world around them in ways that scale better and are safer.”

In a video released by the researchers, a human operator shows a pair of stacks of cubes to the robot.
The system then generates an appropriate program and places the cubes in the correct order. Information gathered by Robotics for U, BTM Robotics training center, Bangalore.
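The demonstrate-then-execute loop described above (perceive the demonstrated scene, generate a human-readable program, let a human review it before execution) can be sketched in a few lines. This is purely illustrative: `BlockState`, `perceive`, and `generate_program` are invented names for this sketch, not Nvidia's actual API, and the real system infers states with neural networks rather than the pass-through used here.

```python
from dataclasses import dataclass

@dataclass
class BlockState:
    """Perceived state of one cube: its color and what it rests on."""
    color: str
    on: str  # "table" or the color of the cube beneath it

def perceive(demo_frames):
    """Stand-in for the perception network: map demo frames to block states."""
    # The real system infers this from camera images with a trained network;
    # here we simply take the final observed configuration.
    return demo_frames[-1]

def generate_program(states):
    """Produce a human-readable plan that reproduces the observed stacking."""
    steps = []
    # Place cubes that sit on the table first, so each cube's support exists.
    for s in sorted(states, key=lambda s: 0 if s.on == "table" else 1):
        steps.append(f"place {s.color} cube on {s.on}")
    return steps

# One demonstration frame: a blue cube stacked on a red cube.
demo = [[BlockState("red", "table"), BlockState("blue", "red")]]
program = generate_program(perceive(demo))
print(program)  # a plan a human can inspect and correct before execution
```

The human-readable intermediate plan is the key design point the article highlights: because the generated steps are legible, a non-expert can correct them before the robot ever moves.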
Therapy Robot Teaches Social Skills to Children with Autism

For some children with autism, interacting with other people can be an uncomfortable, mystifying experience. Feeling overwhelmed by face-to-face interaction, such children may find it difficult to focus their attention and learn social skills from their teachers and therapists—the very people charged with helping them learn to socially adapt. What these children need, say some researchers, is a robot: a cute, tech-based intermediary, with a body, that can teach them how to interact more comfortably with their fellow humans.

On the face of it, learning human interaction from a robot might sound counterintuitive. Or just backward. But a handful of groups are studying the technology in an effort to find out just how effective these robots are at helping children with autism spectrum disorder (ASD). One of those groups is LuxAI, a young company spun out of the University of Luxembourg. The company says its QTrobot can actually increase these children’s willingness to interact with human therapists, and decrease discomfort during therapy sessions. University of Luxembourg researchers working with QTrobot plan to present their results on 28 August at RO-MAN 2018, IEEE’s international symposium on robot and human interactive communication, held in Nanjing, China.

“When you are interacting with a person, there are a lot of social cues such as facial expressions, tonality of the voice, and movement of the body which are overwhelming and distracting for children with autism,” says Aida Nazarikhorram, co-founder of LuxAI. “But robots have this ability to make everything simplified,” she says.
“For example, every time the robot says something or performs a task, it’s exactly the same as the previous time, and that gives comfort to children with autism.” Feeling at ease with a robot, these children are better able to focus their attention on a curriculum presented together by the robot and a human therapist, Nazarikhorram says.

In the study that will be presented at RO-MAN later this month, 15 boys ages 4 to 14 participated in two interactions: one with QTrobot and one with a person alone. The children directed their gaze toward the robot about twice as long, on average, as toward the human. Repetitive behaviors like hand flapping—a sign of being uncomfortable and anxious—occurred about three times as often during sessions with the human as with the robot, according to the study.

More importantly, with a robot in the room, children tend to interact more with human therapists, according to feedback the company received during its research, says Nazarikhorram. “The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” she says. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

A number of groups have been developing digital therapeutics to treat psychiatric disorders, such as apps to treat substance abuse and therapeutic video games to treat attention deficit/hyperactivity disorder. But there is something about an embodied robot that gives it an edge over plain screens. “The child is just focused on the app and doesn’t interact with the person beside him,” Nazarikhorram says. “With a robot, it’s the opposite.”

Robot-based therapy for autism has been studied for more than a decade. For instance, scientists first conceived of KASPAR the social robot in the late 1990s. It is now being developed by scientists at the University of Hertfordshire in the United Kingdom.
And there are at least two other commercial robots for autism: Robokind’s Milo and Softbank Robotics’ NAO. The MIT Media Lab recently used NAO to test a machine learning network it built that is capable of perceiving children’s behavior. The algorithm can estimate the level of interest and excitement of children with autism during a therapy session. The research was published in June in Science Robotics.

“In the end, we want the robots to be a medium towards naturalistic human-human interactions and not solely tools for capturing the attention of the kids,” says Oggi Rudovic, at the MIT Media Lab, who co-authored the machine learning paper in Science Robotics. The ultimate goal is to equip children with autism “with social skills that they can apply in everyday life,” he says, and LuxAI’s research “is a good step towards that goal.” However, more research, involving more children over longer periods of time, will be needed to assess whether robots can really equip children with real-life social skills, Rudovic says.

The QTrobot is a very new product. LuxAI started building it in 2016, finished a final prototype in mid-2017, and just this year began trials at various centers in Luxembourg, France, Belgium, and Germany. Nazarikhorram says she wanted to build a robot that was practical for classrooms and therapy settings. Her company focused on making its robot easily programmable by autism professionals with no tech background, and able to run for hours without having to be shut down to cool. It also has a powerful processor and 3D camera so that no additional equipment, such as a laptop, is needed, she says.

Now LuxAI is conducting longer-term trials, studying the robot’s impact on social competence, emotional well-being, and interaction with people, Nazarikhorram says. We asked Nazarikhorram if it’s possible that pairing robots with children with autism could actually move them further away from people, and closer to technology.
“That’s one of the fears that people have,” she says. “But in practice, in our studies and based on the feedback of our users, the interaction between the children and the therapists improves.”
Robot can pick up any object after inspecting it

Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up. Certainly there's been some progress: for decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects. Even then, though, they don't truly understand objects' shapes, so there's little they can do after a quick pick-up.

In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they've made a key development in this area of work: a system that lets robots inspect random objects, and visually understand them enough to accomplish specific tasks without ever having seen them before. The system, dubbed "Dense Object Nets" (DON), looks at objects as collections of points that serve as "visual roadmaps" of sorts. This approach lets robots better understand and manipulate items, and, most importantly, allows them to pick up a specific object among a clutter of similar objects—a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.

For example, someone might use DON to get a robot to grab onto a specific spot on an object—say, the tongue of a shoe. From that, it can look at a shoe it has never seen before and successfully grab its tongue.

"Many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter," says Ph.D. student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow Ph.D. student Pete Florence, alongside MIT professor Russ Tedrake. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side."
The team views potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house, and letting it clean while you're at work, or using an image of dishes so that the system puts your plates away while you're on vacation. What's also noteworthy is that none of the data was actually labeled by humans; rather, the system is "self-supervised," so it doesn't require any human annotations.

Two common approaches to robot grasping involve either task-specific learning or creating a general grasping algorithm. Both techniques have obstacles: task-specific methods are difficult to generalize to other tasks, and general grasping doesn't get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots. The DON system, however, essentially creates a series of coordinates on a given object, which serve as a kind of "visual roadmap" to give the robot a better understanding of what it needs to grasp, and where.

The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object's 3-D shape, similar to how panoramic photos are stitched together from multiple photos. After training, if a person specifies a point on an object, the robot can take a photo of that object, and identify and match points to be able to then pick up the object at that specified point.

This is different from systems like UC Berkeley's DexNet, which can grasp many different items but can't satisfy a specific request. Imagine an 18-month-old infant, who doesn't understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to "go grab your truck by the red end of it."
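The point-matching step described above can be understood as a nearest-neighbor lookup: every pixel gets a descriptor vector, and the point the user picked in one image is matched to the pixel with the most similar descriptor in a new image. The sketch below illustrates that idea only; `best_match` and the toy random "descriptor images" are invented for this example and are not the paper's actual code.

```python
import numpy as np

def best_match(desc_a, pixel, desc_b):
    """Return the (row, col) in image B whose descriptor is closest
    (in Euclidean distance) to the descriptor at `pixel` in image A."""
    target = desc_a[pixel]                            # (D,) descriptor vector
    dists = np.linalg.norm(desc_b - target, axis=-1)  # (H, W) distance map
    return tuple(int(i) for i in np.unravel_index(np.argmin(dists), dists.shape))

# Toy 4x4 "descriptor images" with 3-D descriptors. In the real system these
# come from a trained network; here we plant an exact match by hand, standing
# in for the shoe-tongue point being recognized in a new view.
rng = np.random.default_rng(0)
desc_a = rng.random((4, 4, 3))
desc_b = rng.random((4, 4, 3))
desc_b[2, 1] = desc_a[0, 3]  # the point at (0, 3) in view A appears at (2, 1) in view B
print(best_match(desc_a, (0, 3), desc_b))  # -> (2, 1)
```

Because the descriptors are trained to be consistent across viewpoints and object instances, the same lookup works even when the second image shows a shoe the robot has never seen before.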
In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy's right ear from a range of different configurations. This showed that, among other things, the system can distinguish left from right on symmetrical objects. When tested on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs—and without ever having seen pictures of the hats in training data.

"In factories robots often need complex part feeders to work reliably," says Manuelli. "But a system like this that can understand objects' orientations could just take a picture and be able to grasp and adjust the object accordingly."

In the future, the team hopes to improve the system to a point where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk. The team will present their paper on the system next month at the Conference on Robot Learning in Zürich, Switzerland.