Robot can pick up any object after inspecting it.

Humans have long been masters of dexterity, a skill that can largely be credited to the help of our eyes. Robots, meanwhile, are still catching up. Certainly there's been some progress: for decades, robots in controlled environments like assembly lines have been able to pick up the same object over and over again. More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects, but even then they don't truly understand objects' shapes, so there's little they can do after a quick pick-up.

In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say they've made a key development in this area of work: a system that lets robots inspect random objects and visually understand them well enough to accomplish specific tasks without ever having seen them before.

The system, dubbed "Dense Object Nets" (DON), looks at objects as collections of points that serve as "visual roadmaps" of sorts. This approach lets robots better understand and manipulate items, and, most importantly, allows them to pick up a specific object among a clutter of similar objects, a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses. For example, someone might use DON to get a robot to grab onto a specific spot on an object, say the tongue of a shoe. From that, it can look at a shoe it has never seen before and successfully grab its tongue.

"Many approaches to manipulation can't identify specific parts of an object across the many orientations that object may encounter," says Ph.D. student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow Ph.D. student Pete Florence, alongside MIT professor Russ Tedrake. "For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side."

The team sees potential applications not just in manufacturing settings, but also in homes. Imagine giving the system an image of a tidy house and letting it clean while you're at work, or using an image of dishes so that the system puts your plates away while you're on vacation. What's also noteworthy is that none of the data was labeled by humans; rather, the system is "self-supervised," so it doesn't require any human annotations.

Two common approaches to robot grasping involve either task-specific learning or creating a general grasping algorithm. Both techniques have obstacles: task-specific methods are difficult to generalize to other tasks, and general grasping doesn't get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots. The DON system, however, essentially creates a series of coordinates on a given object, which serve as a kind of "visual roadmap" to give the robot a better understanding of what it needs to grasp, and where.

The team trained the system to look at objects as a series of points that make up a larger coordinate system. It can then map different points together to visualize an object's 3-D shape, similar to how panoramic photos are stitched together from multiple images. After training, if a person specifies a point on an object, the robot can take a photo of that object, identify and match points, and then pick up the object at that specified point.
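The matching step in that last paragraph can be sketched in a few lines: DON produces a descriptor for every pixel, and finding the "same" point on a new image amounts to a nearest-neighbour lookup in descriptor space. The sketch below is only illustrative; the array shapes, the function name, and the plain L2 search are assumptions for the example, not the authors' released code.

```python
import numpy as np

def match_point(ref_descriptors, target_descriptors, ref_pixel):
    """Find the pixel in the target view whose descriptor is closest
    to the descriptor at ref_pixel in the reference view.

    ref_descriptors, target_descriptors: (H, W, D) float arrays, one
        D-dimensional descriptor per pixel (the kind of dense output
        a network like DON is trained to produce).
    ref_pixel: (row, col) of the user-specified point, e.g. the spot
        someone clicked on a shoe's tongue in a reference image.
    """
    d_ref = ref_descriptors[ref_pixel[0], ref_pixel[1]]            # (D,)
    # Squared L2 distance from the reference descriptor to every
    # descriptor in the target image.
    dists = np.sum((target_descriptors - d_ref) ** 2, axis=-1)     # (H, W)
    # The arg-min pixel is the predicted correspondence; the robot
    # would then convert it to a 3-D grasp point using a depth camera.
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return (row, col), dists[row, col]
```

In the shoe example, ref_pixel would be the point a person marked on a reference shoe, and the returned pixel is where the robot should grasp on a shoe it has never seen before.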
This is different from systems like UC Berkeley's DexNet, which can grasp many different items but can't satisfy a specific request. Imagine an 18-month-old infant, who doesn't understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to "go grab your truck by the red end of it."

In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON could grasp the toy's right ear from a range of different configurations. This showed that, among other things, the system can distinguish left from right on symmetrical objects. When testing on a bin of different baseball hats, DON could pick out a specific target hat despite all of the hats having very similar designs, and despite having never seen pictures of the hats in training data before.

"In factories robots often need complex part feeders to work reliably," says Manuelli. "But a system like this that can understand objects' orientations could just take a picture and be able to grasp and adjust the object accordingly."

In the future, the team hopes to improve the system to a point where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk. The team will present their paper on the system next month at the Conference on Robot Learning in Zürich, Switzerland.

Content gathered by BTM Robotics Training Centre, Bangalore.
Look, up in the sky! It's Disney's new autonomous acrobatic robot.

Disney's animatronics are coming a long way from drunken pirates waving flagons of ale or hippos that wiggle their ears. In the (relatively) near future, robotic versions of Iron Man or Buzz Lightyear could be performing autonomous acrobatics overhead in Disney theme parks, thanks to the newly unveiled Stuntronics robot.

Animatronic characters have populated Disney parks for more than half a century, albeit often just looping a specific movement over and over. In recent years Disney Research has tried to make the robots more agile and interactive, developing versions that can grab objects more naturally and even juggle and play catch with visitors.

Back in May, the company unveiled a prototype called Stickman. Basically a mechanical stick with two degrees of freedom, the robot could be flicked into the air like a trapeze artist, where it used a suite of sensors to tuck and roll in midair, perform a couple of backflips, and unfurl for landing. Impressive as that is, Stickman was far more stick than man.

In just a few short months, the project has evolved into Stuntronics, a robot that's noticeably more human. Designed to be a kind of robotic stunt double for a human actor, the Stuntronics robot can perform the same kind of autonomous aerial stunts thanks to a similar load of sensors as Stickman, including an accelerometer, a gyroscope array, and laser range finding. But unlike Stickman, Stuntronics can stick its landing too. The former bot tended to land flat on its back, but the new version can land feet-first and hit what looks like a smaller target. Not only that, it can strike a heroic pose in the air before tucking back up ready for landing.

Disney Research scientists said that during a stage show or ride, other animatronics or human actors could perform the up-close, static scenes before the Stuntronics robot is wheeled out when the character needs to fly (or fall with style). Of course, there's no guarantee that this kind of thing will ever get off the ground (literally or figuratively), but it's always exciting to peek behind the curtain at Disneyland.

Content gathered by BTM Robotics Training Centre, Bangalore.
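To make the aerial phase concrete, here is a toy sense-and-act loop of the kind such a robot might run: integrate the gyro to count flips, watch the laser range finder for the approaching ground, and switch between a tucked pose (fast spin) and an extended pose (slow spin, feet down). Everything here, the thresholds, the two-pose policy, and the function name, is an assumption made for illustration; Disney has not published Stuntronics' controller.

```python
import math

def stunt_control_step(state, gyro_rate, range_to_ground, dt,
                       target_flips=2.0, unfurl_height=2.0):
    """One iteration of a toy in-flight controller (illustrative only).

    state: dict holding the accumulated 'rotation' in radians and the
           current 'pose' ('tucked' or 'extended').
    gyro_rate: angular velocity about the flip axis, in rad/s.
    range_to_ground: laser range-finder reading, in metres.
    """
    # Integrate the gyro to track how far the body has rotated so far.
    state['rotation'] += gyro_rate * dt
    flips_done = state['rotation'] / (2.0 * math.pi)

    if flips_done < target_flips and range_to_ground > unfurl_height:
        # Still need rotation and the ground is far away: tuck.
        # Tucking shrinks the moment of inertia, so the same angular
        # momentum spins the body faster.
        state['pose'] = 'tucked'
    else:
        # Enough flips, or the ground is close: extend to slow the
        # spin and line the feet up for a feet-first landing.
        state['pose'] = 'extended'
    return state
```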
By 2023, India wants an advanced robotic soldier protecting its borders. This next-generation soldier should be intelligent enough to automatically recognize threats and take action, and sophisticated enough to distinguish between threats and non-threats. If India achieves its objective, that will have a huge impact on at least two fronts.

First, the robotic soldier would give India the ability to redefine geopolitics, regionally and globally. India could join a very small yet special club of countries (such as Russia and Israel) that are using robots to defend their borders. India may use its robotic soldier as a strategic weapon, like a nuclear bomb, to command attention and respect. From a nation that is currently a secondary partner to the U.S., Russia, or China, a robotic soldier would give India the capability to set a strategic agenda of its own. India will not just be a coalition partner; it will create its own coalition. The next U.N. peacekeeping mission might involve robotic soldiers imported from India, or might operate under the command of an Indian general experienced in commanding a robotic army.

Second, building an army of robotic soldiers would affect the Indian economy. During the next financial year (2016-17), India plans to spend nearly $40 billion on defense. This expenditure has nearly quadrupled in the past 15 years: it was $11.8 billion in 2001. By 2022, India may be spending $620 billion on defense. It's no wonder, then, that the Stockholm International Peace Research Institute (SIPRI) found India topping the list of nations importing weapons. According to SIPRI, India bought 14% of all weapons sold globally between 2011 and 2015. The defense budget not only accounts for 17.2 percent of the total planned government expenditure for the next fiscal year; there is also an off-books number, pensions of defense personnel, that is rising rapidly and will be around $10 billion in the next financial year. When one in five rupees is going toward defense operations, the economy takes a hit. While robotic soldiers will not fix the problem by themselves or dramatically change the budget, they are likely to offer relief. Every rupee saved from defense can go toward development.

What strategy will India adopt? Will it increase its imports of weapons and acquire the robotic soldiers from overseas, or will India create its robotic soldiers under the "Make in India" program? Just as Russia surprised the world with its intervention in the Syrian civil war, India could also enter and exit hot zones, or create them, in pursuit of its national interests. The robotic soldier would change the border dynamics with China, Bangladesh, and Pakistan, for sure.

Information gathered by the Bangalore BTM Robotics Training Centre and the Bannerghatta Robotics Training Centre.
Flying Dragon Robot Transforms Itself to Squeeze Through Gaps. DRAGON can change its shape to move through complex environments and even manipulate objects.

There's been a lot of recent focus on applications for aerial robots, and one of the areas with the most potential is indoors. The thing about indoors is that by definition you have to go through doors to get there, and once you're inside, there are all kinds of things that are horribly dangerous to aerial robots, like more doors, walls, windows, people, furniture, hanging plants, lampshades, and other aerial robots, inevitably followed by still more doors. One solution is to make your robots super small, so that they can fit through small openings without running into something fragile and expensive, but then you're stuck with small robots that can't do a whole heck of a lot. Another solution is to put your robots in protective cages, but then you're stuck with robots that can't as easily interact with their environment, even if they want to. Ideally, you'd want a robot that doesn't need that level of protection, that's somehow large and powerful but also small and nimble at the same time.

At the JSK Lab at the University of Tokyo, roboticists have developed a robot called DRAGON, which (obviously) stands for "Dual-rotor embedded multilink Robot with the Ability of multi-degree-of-freedom aerial transformation." It's a modular flying robot powered by ducted fans that can transform literally on the fly, from a square to a snake to anything in between, allowing it to stretch out to pass through small holes and then make whatever other shape you want once it's on the other side.

DRAGON is made of a series of linked modules, each of which consists of a pair of ducted-fan thrusters that can be actuated in roll and pitch to vector thrust in just about any direction you need. The modules are connected to one another by powered hinged joints, and the whole robot is driven by an Intel Euclid and powered by a battery pack (providing 3 minutes of flight time, which is honestly more than I would have thought) mounted along the robot's spine. This particular prototype is made up of four modules, allowing it to behave sort of like a quadrotor, even though I suppose technically it's an octorotor.

Content gathered by BTM Robotics Training Centre, Bangalore.
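As a rough illustration of what "vectoring thrust in roll and pitch" buys the controller, the sketch below sums the force and torque produced by a list of gimballed modules in the body frame. The frames, the rotation order, and the function names are assumptions made for the example; this is not the JSK team's control code.

```python
import numpy as np

def module_thrust(roll, pitch, thrust):
    """Thrust vector of one dual-fan module, expressed in the robot's
    body frame, after tilting the module's gimbal by (roll, pitch)
    radians away from a nominal straight-up thrust direction."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    return ry @ rx @ np.array([0.0, 0.0, thrust])

def total_wrench(modules):
    """Sum the force and torque contributed by every module.

    modules: list of (position, roll, pitch, thrust) tuples, with the
    module position given in the body frame (metres). Because each
    module can point its thrust almost anywhere, the robot can hold
    one overall pose while its links bend into another shape.
    """
    force = np.zeros(3)
    torque = np.zeros(3)
    for pos, roll, pitch, thrust in modules:
        f = module_thrust(roll, pitch, thrust)
        force += f
        torque += np.cross(np.asarray(pos, dtype=float), f)
    return force, torque
```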
AI robots are being fitted with special software that lets them adapt to injury like animals. It's hard to believe that there was once a time when highly advanced robots existed only in Hollywood movies and comic books. Now, technology has reached a point where robots can do many of the things that human beings can do; in some cases, the two are even indistinguishable. A paper published in the journal Nature described an algorithm that has been specifically designed to allow robots to adapt to damage and ultimately reduce their fragility. "Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans," wrote the paper's authors, Antoine Cully, Jeff Clune, Danesh Tarapore and Jean-Baptiste Mouret.

Content gathered by BTM Robotics Training Centre, Bangalore.
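The quoted abstract compresses a lot, so the loop below is a heavily simplified sketch of the trial-and-error idea it describes. A prior map of candidate behaviours is assumed to already exist (the authors build theirs in simulation and then search it with Bayesian optimisation on the physical robot); the function names, the similarity-based discounting, and the stopping threshold here are all illustrative assumptions, not the paper's actual method.

```python
def adapt_after_damage(behavior_map, try_on_robot, good_enough,
                       similarity, max_trials=20):
    """Toy stand-in for an intelligent trial-and-error adaptation loop.

    behavior_map: dict {behavior: predicted_performance}, built before
                  any damage occurs (e.g. from simulation).
    try_on_robot: callable returning the measured performance of a
                  behavior on the real, possibly damaged robot.
    good_enough:  performance at which adaptation stops.
    similarity:   callable(b1, b2) -> score in [0, 1].
    """
    expected = dict(behavior_map)
    best_behavior, best_score = None, float('-inf')
    for _ in range(max_trials):
        # Try the behaviour we currently expect to perform best.
        candidate = max(expected, key=expected.get)
        score = try_on_robot(candidate)
        if score > best_score:
            best_behavior, best_score = candidate, score
        if score >= good_enough:
            break  # found a compensating behaviour quickly enough
        # The trial fell short of expectations: lower the expectation
        # for this behaviour and for behaviours similar to it, so the
        # next trial explores a genuinely different strategy.
        shortfall = expected[candidate] - score
        for b in expected:
            expected[b] -= similarity(candidate, b) * shortfall
    return best_behavior, best_score
```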