Interview: Lockheed Martin’s Todd Danko on the DRC Finals and the future of robotics


Team Trooper’s robot, Leo, competed in the recent DARPA Robotics Challenge Finals

One of the 24 teams competing at the 2015 DARPA Robotics Challenge, and the only team fielded by a large private company, was Lockheed Martin’s Team Trooper and its robot Leo. To find out more about what goes into programming a humanoid robot and the future of robotics, we talked to the team leader, Todd Danko.

Danko: The whole point is to look for new opportunities. We think that investing in mobile manipulation, which is effectively what we’re doing, will be very useful in future applications like underwater or space robotics; places where it’s very difficult or impossible to get people to do those tasks. You can use our systems to tell robots what to do, and let the robots do those things.

Why did Lockheed opt to use Boston Dynamics’ humanoid Atlas robot?

We live in a world that’s complementary to the humanoid form. If you have a very constrained task, like in a factory, a humanoid isn’t the right answer; you want to optimize your robot to solve those specific problems. If you don’t have a single problem to optimize for, and you want a more general robot, then a humanoid robot makes more sense.


Does the humanoid form pose any challenges in creating software?

It produces tremendous challenges. Just getting a humanoid robot to stand still and not fall over when you touch it takes a lot of software on its own. You have to have a good dynamic model of the robot and run it constantly, at a thousand hertz, so that it’s harder for the robot to fall over. A robot with wheels, meanwhile, is mechanically stable even if you turn everything off. It just sits there.
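To make that thousand-hertz loop concrete, here is a minimal Python sketch of a fixed-rate control loop. It is only an illustration, not Team Trooper’s actual controller: the simple PD law, the gains, and the helper names (`read_state`, `send_torque`) are all hypothetical, and a real humanoid evaluates a full dynamic model and solves for whole-body torques at each tick.

```python
import time

CONTROL_RATE_HZ = 1000           # the ~1,000 Hz rate mentioned above
DT = 1.0 / CONTROL_RATE_HZ

def read_state():
    """Placeholder: a real robot would read encoders and an IMU here."""
    return {"com_offset": 0.01, "com_velocity": -0.002}

def balance_torque(state, kp=400.0, kd=40.0):
    """Toy PD law on center-of-mass offset; a real humanoid controller
    evaluates a full dynamic model and computes whole-body torques."""
    return -kp * state["com_offset"] - kd * state["com_velocity"]

def control_loop(n_ticks=1000):  # about one second of control
    next_tick = time.monotonic()
    for _ in range(n_ticks):
        torque = balance_torque(read_state())
        # send_torque(torque)    # hypothetical actuator interface
        next_tick += DT
        time.sleep(max(0.0, next_tick - time.monotonic()))

control_loop()
```

The point of the fixed-rate structure is that the stability math assumes a constant sampling period; missing ticks is as damaging as getting the model wrong.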

Were there any surprises working on this project?

Generally, one thing that surprised me is the state of the art in robotics, and especially humanoid robotics. It’s one thing to see robots in a movie, or real robots in specific demos, but with those demos you’re seeing the best of what could be. The state of the art is actually a lot more primitive than many people think it is. It’s exciting to be able to help grow that.


It’s a poor analogy, but what would you compare a humanoid robot of today to in terms of its development?

I could get into trouble because I don’t have much expertise in human development, but one of our goals was to work toward a robot with the capabilities of a two-year-old. I can’t say whether we exceeded that or not, but like a two-year-old, there’s definitely some misbehavior in these robots. They don’t always do what we tell them to.

We noticed in the competition that the robots moved very slowly, with a lot of starting and stopping while they solved a problem or awaited an order. What hurdles have to be overcome before we see robots that are fast and articulate enough to be practical?

There are two sides to this. On one side, robots are already practical in many ways. Probably the most successful robots are much simpler than those [at the competition]. On top of that, you have to consider what the consequences of an error are: if you have a lot of runs or a lot of time, you may give your autonomous system a lot of latitude to make decisions rather than constantly approving them before it proceeds. In a competition like this, there are greater consequences, and if a mistake is about to happen that a human can stop, then we should prevent it from happening.

Secondly, there are lots of things that different groups are working on to improve the performance of robots as a whole. You’ll notice that the robots are untethered. Power-wise, they were lasting at least an hour in most cases, but how useful is an hour of robot time? Maybe you want something that lasts a day, or even a couple of days. We need better batteries, better power systems, and more efficient actuators, so that down the line there’s more power available and less of it consumed.


I’ve never met a perception algorithm that couldn’t use more processors to be better. The same goes for planning: parallelizing it opens up more possibilities for coming up with solutions. Then there’s perception itself. There’s still a lot of room for improvement in the state of the art: we’re very good at recognizing specific objects, but more needs to be done in recognizing categories of objects, or in recognizing something never seen before and figuring out how it could be used.

On top of all this, I think there’s still a role for the human to help the robot know what it is that needs to be done. It may be best to have a human in a safe place who can communicate the “what” and allow the robot to do the dangerous things. That’s something I think we can see in the near future.

It’s the cliché question, but how long do you think it will be before we see robots like this as part of people’s lives?

It’s going to be a long time before we see a robot like this. Look at what [happened in the competition] and how many people it took to keep that robot from destroying itself. There’s a lot of work that needs to be done before these robots contribute in a way that’s not a burden to their operators. That goes back to all those things we talked about that need to be improved, so it’s going to be quite some time. On the other hand, simple robots are already being used in applications today; we’re just using more complicated robots in more complicated situations as we move in that direction of complexity.

References: http://www.gizmag.com/

Physicists develop ultrasensitive nanomechanical biosensor


Two young researchers working at the MIPT Laboratory of Nanooptics and Plasmonics, Dmitry Fedyanin and Yury Stebunov, have developed an ultracompact, highly sensitive nanomechanical sensor for analyzing the chemical composition of substances and detecting biological objects, such as viral disease markers, which appear when the immune system responds to incurable or hard-to-cure diseases, including HIV, hepatitis, herpes, and many others. The sensor will enable doctors to identify tumor markers, whose presence in the body signals the emergence and growth of cancerous tumors.

The sensitivity of the new device is best characterized by one key figure: according to its developers, the sensor can track, in real time, changes of just a few kilodaltons in the mass of a cantilever. One dalton is roughly the mass of a proton or neutron, and several thousand daltons is the mass of an individual protein or DNA molecule. So the new optical sensor should allow diseases to be diagnosed long before they can be detected by any other method, paving the way for a new generation of diagnostics.
The device, described in an article published in the journal Scientific Reports, is an optical or, more precisely, optomechanical chip. “We’ve been following the progress made in the development of micro- and nanomechanical biosensors for quite a while now, and can say that no one has been able to introduce a simple and scalable technology for parallel monitoring that would be ready to use outside a laboratory. So our goal was not only to achieve the high sensitivity of the sensor and make it compact, but also make it scalable and compatible with standard microelectronics technologies,” the researchers said.
Unlike similar devices, the new sensor has no complex junctions and can be produced using the standard CMOS process technology of microelectronics. The sensor doesn’t contain a single electrical circuit, and its design is very simple. It consists of two parts: a photonic (or plasmonic) nanowaveguide to control the optical signal, and a cantilever hanging over the waveguide.


A cantilever, or beam, is a long, thin strip of microscopic dimensions (5 micrometers long, 1 micrometer wide, and 90 nanometers thick), fixed tightly to the chip at one end. To get an idea of how it works, imagine pressing one end of a ruler tightly against the edge of a table and letting the other end hang freely in the air. If you snap the free end with your other hand, the ruler will oscillate mechanically at a certain frequency. That’s how the cantilever works. The difference between the oscillations of the ruler and those of the cantilever is only in frequency, which depends on the materials and the geometry: while the ruler oscillates at several tens of hertz, the cantilever’s oscillation frequency is measured in megahertz. In other words, it makes a few million oscillations per second.
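As a sanity check on that megahertz figure, here is a short Python sketch using the standard formula for the first flexural mode of a clamped-free beam, f1 = (λ1²/2π)·√(EI/(ρAL⁴)). The dimensions come from the article; the material properties are an assumption, since the article doesn’t name the material, so values typical of silicon are used.

```python
import math

# Cantilever dimensions from the article
L = 5e-6      # length: 5 micrometers
w = 1e-6      # width: 1 micrometer
t = 90e-9     # thickness: 90 nanometers

# Assumed material properties (silicon; not specified in the article)
E = 169e9     # Young's modulus, Pa
rho = 2330.0  # density, kg/m^3

I = w * t**3 / 12  # second moment of area of the rectangular cross-section
A = w * t          # cross-sectional area
lam1 = 1.875       # first-mode eigenvalue of a clamped-free beam

f1 = (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
print(f"Estimated first-mode resonance: {f1 / 1e6:.1f} MHz")  # ~16 MHz
```

With these numbers the estimate lands around 16 MHz, consistent with the article’s “a few million oscillations per second.”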

There are two optical signals going through the waveguide during oscillations: the first sets the cantilever in motion, and the second allows the signal containing information about that movement to be read out. The inhomogeneous electromagnetic field of the control signal’s optical mode induces a dipole moment in the cantilever and simultaneously acts on that dipole, so that the cantilever starts to oscillate.
The sinusoidally modulated control signal makes the cantilever oscillate at an amplitude of up to 20 nanometers. The oscillations determine the parameters of the second signal, the output power of which depends on the cantilever’s position.


The highly localized optical modes of nanowaveguides, which create a strong electric field intensity gradient, are key to inducing cantilever oscillations. Because the changes of the electromagnetic field in such systems occur over tens of nanometers, researchers use the term “nanophotonics.” Without the nanoscale waveguide and cantilever, the chip simply wouldn’t work: a big cantilever cannot be made to oscillate by freely propagating light, and the effects of chemical changes to its surface on the oscillation frequency would be less noticeable.

Cantilever oscillations make it possible to determine the chemical composition of the environment in which the chip is placed. That’s because the frequency of mechanical vibrations depends not only on the dimensions and properties of the materials, but also on the mass of the oscillating system, which changes during a chemical reaction between the cantilever and the environment. By placing different reagents on the cantilever, researchers make it react with specific substances or even biological objects. If you place antibodies to certain viruses on the cantilever, it will capture the viral particles in the analyzed environment. Depending on the virus or the layer of chemically reactive substances on the cantilever, the oscillations will occur at a lower or higher amplitude, and the electromagnetic wave passing through the waveguide will be scattered by the cantilever differently, which can be seen in changes in the intensity of the readout signal.
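To get a feel for the scale of the effect, here is a rough Python estimate of the frequency shift produced by a few kilodaltons of captured mass, using the small-perturbation relation |Δf| ≈ (f1/2)·Δm/m_eff. The geometry comes from the article; the silicon density and the effective-mass factor (about 0.24 for the first flexural mode) are assumptions carried over from the sketch above.

```python
DALTON = 1.66054e-27     # kg per dalton

# Geometry and assumed silicon density, carried over from the sketch above
rho, L, w, t = 2330.0, 5e-6, 1e-6, 90e-9
f1 = 15.7e6              # Hz, first-mode frequency estimated above

m = rho * L * w * t      # total cantilever mass, roughly one picogram
m_eff = 0.243 * m        # effective mass of the first flexural mode

delta_m = 3000 * DALTON  # "a few kilodaltons" of captured mass
delta_f = 0.5 * f1 * delta_m / m_eff  # added mass lowers the frequency

print(f"Cantilever mass: {m * 1e15:.2f} pg")
print(f"Frequency shift: {delta_f:.3f} Hz ({delta_f / f1:.1e} relative)")
```

The result is a downward shift of roughly 0.15 Hz on a ~16 MHz resonance, about one part in 10^8, which illustrates how demanding the claimed kilodalton-scale resolution is.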

Calculations done by the researchers show that the new sensor will combine high sensitivity with comparative ease of production and miniature dimensions, allowing it to be used in portable devices such as smartphones and wearable electronics. One chip a few millimeters in size could accommodate several thousand such sensors, each configured to detect different particles or molecules. Thanks to the simplicity of the design, the price will most likely depend on the number of sensors, and should be much more affordable than that of competing devices.

References: http://phys.org/

For app developers, more big changes are coming soon

The App Store revolutionized the tech world when it opened in summer 2008, spawning a billion-dollar industry in one fell swoop. It was neither the first nor the largest back then, but the store quickly exploded in popularity, prompting Apple co-founder Steve Jobs to say “it is going to be very hard for others to catch up.”

The store was one of the big stars this week at Apple’s annual Worldwide Developers Conference in San Francisco, with CEO Tim Cook’s announcement that it had recently “passed a major milestone, with 100 billion app downloads” since the store opened its virtual doors.
Many in the app-developer fold say that the business, thanks to the marketplace created by the App Store and other outlets like Google Play, is still in its infancy, and that mobile apps will continue to change human behavior in unimaginable ways over the next few years.
From the “Internet of Things,” where apps will help connect an estimated 50 billion devices to the Internet by 2020 and transform the way we relate to our homes and workplaces, to the continued democratization of software, where tech novices will be able to build their own apps, the digital landscape will shift at a breakneck speed.
At the same time, the way apps come into being could also go through a seismic shift. The small independent app-makers who early on helped make the App Store the success it is today will find it harder to survive there, while large corporations will dominate the stage as their in-house coders custom-tailor more and more apps to meet their customers’ needs.
“We’re seeing big companies taking over” the mobile-app industry, said Mark Wilcox, a business analyst with VisionMobile. As a result, according to the firm’s recent survey, nearly half of the developers who want to make money building apps actually make zero or next to nothing. “Large companies, and especially game publishers, take all the top spots on the App Store and most of the revenue,” he said. “The little guys are struggling to compete.”
Calling the momentum “absolutely staggering,” Cook told developers this week that the App Store has forever changed the way we think of software and the way we all increasingly use it in our daily lives.
Connecting everyday objects, from home-heating systems to toasters, will continue to be a major focus for developers, with one survey showing that 53 percent of respondents said they were already working on so-called IoT – or “Internet of Things” – apps. Wearable tech, like the new Apple Watch, could host thousands of new apps this year alone, from health and fitness monitors to tools not yet envisioned.
In a clear nod to the future of apps already unfolding on wearable technology, Cook used his keynote address to introduce Kevin Lynch, Apple’s vice president for technology, to talk about watchOS 2, the first major update for the Apple Watch since it was unveiled last September. Lynch said developers could soon use the new software to build native, or in-watch, apps that would allow users to tap directly into the watch’s burgeoning bounty without having to rely on their iPhones for access.
Another budding trend features strategically placed beacons, small devices in the physical world that interact with apps, which in turn will collect and process mountains of data. An in-app sale offer triggered on your phone by a beacon inside the Wal-Mart you just entered is an example of this technology. Over time, all that data collected from our phones about our daily patterns will then guide and improve the software we’ll use to work and play.
“The ‘Internet of Things’ is happening quickly,” said 22-year-old Ashu Desai, whose Make School is teaching college and high-school students how to build apps. “We’ll see apps where your phone will know more and more about your surroundings. There will be a massive proliferation of sensors that will be everywhere so apps can send you the temperature of your hot tub, lock and unlock your doors, and turn on your stove remotely.”
In a way, the future of apps is already here, with an increasing number of them not on public view at the App Store but quietly being harnessed by teams within private companies and organizations, from giants like Salesforce to stage crews at musical venues to small enterprises like contractors and electricians.
Consultant Richard Carlton helps companies use programs like Apple-owned FileMaker to create their own proprietary apps that allow colleagues to collaborate on a shared database they can all access from their mobile devices.
“These mobile tools allow people who aren’t coders to build their own solutions and share them with their fellow employees,” he said. “For example, we’ve helped plumbers create apps they can use to update their work schedules on their phones. This software lets you sign contracts in the field, take photos and enter them in a database, or do property inspections. And this costs the company a quarter of what they’d pay to have a professional app developer do it.”
In other trends in the coming year, there will be more video ads playing on our smartphone screens, more crowdfunding to launch app startups and more developers leaving Google and Apple to become consultants who’ll build apps for corporate clients like The Home Depot.
“Every company out there is turning to mobile, whether it’s retail or airlines or real estate,” said Shravan Goli, president of the tech-jobs site Dice.com. With big companies storming into the market, the coming years will be tough for the independents, said Craig Hockenberry with app-design firm The Icon Factory.
“The people who want to survive solely off the puzzle game or the camera app are the ones having a problem right now,” he said. “When the App Store opened, our first app sold well because there wasn’t a lot of competition. We were a big fish in a small pond. Now the pond is more like an ocean.”
----
What’s the future of apps?
We asked five attendees at Apple’s annual Worldwide Developers Conference this week in San Francisco for their take on what’s ahead.
Jenna Hoffstein
Educational app developer, Boston
“We’ll see a broader use of apps in schools, supporting teachers and giving kids more engaging ways to learn math and science.”
Ashok Ramamoorthy
Product manager, India
“All your business will develop around your (enterprise) app. If you’re not taking advantage of that, you’re losing money.”
Igor Ievsiukov
Developer, Ukraine
“Apps will be smarter and they’ll distract the user less. Their functions will be more personalized, and tailored more precisely.”
Amy Wardrop
Digital product manager, Sydney
“The future of apps is all about experiential, the actual experience of being human. Wearable health and fitness devices, for example, will provide personal analytics, with more layering of information from both humans and their environment.”
Ashish Singh
Developer, India
“Apps will become part of every aspect of our lives, with virtual-reality apps more prevalent. With an app and a pair of VR glasses, you’ll be able to virtually tour a property for sale, museums or vacation destinations.”

References: http://phys.org/

Water Droplet-Powered Computers Could Run Mini Science Labs


A computer made using water and magnets can move droplets around inside itself like clockwork, researchers say. The device demonstrates a new way to merge computer calculations with the manipulation of matter, scientists added.

Whereas conventional microelectronics shuffle electrons around wires, in recent years, scientists have begun developing so-called microfluidic devices that shuffle liquids around pipes. These devices can theoretically perform any operation a conventional electronic microchip can.

Although microfluidic devices are dramatically slower than conventional electronics, the goal is not to compete with electronic computers on traditional computing tasks such as word processing. Rather, the aim is to develop a completely new class of computers to precisely control matter. [Super-Intelligent Machines: 7 Robotic Futures]

“The fundamental limits of computation, such as how fast you can go or how small devices can be, are based in how information has to be represented in physical entities,” study co-author Manu Prakash, a biophysicist at Stanford University, told Live Science. “We flipped that idea on its head — why can’t we use computations to manipulate physical entities?”

Current applications for microfluidic chips include serving as miniaturized chemistry and biology laboratories. Instead of performing experiments with dozens of test tubes, each droplet in a lab-on-a-chip can serve as a microscopic test tube, enabling scientists to conduct thousands of experiments simultaneously, but requiring a fraction of the time, space, materials, cost and effort of a conventional laboratory.

But one major drawback of microfluidic devices is that the droplets of liquid are usually controlled one at a time. Although Prakash and his colleagues previously demonstrated a way to control many droplets on a microfluidic chip at once, until now the actions of those droplets were not synchronized with each other. That made these systems prone to errors, preventing the devices from taking on more complex operations.

Now Prakash and his colleagues have developed a way for droplets on microfluidic devices to act simultaneously, in a synchronized manner. The key was using a rotating magnetic field, like a clock.

The core of the new microfluidic chip, which is about half the size of a postage stamp, consists of tiny, soft, magnetic nickel-iron-alloy bars arranged into mazelike patterns. On top of this array of bars is a layer of silicone oil sandwiched between two layers of Teflon. The bars, oil and Teflon layers are in turn placed between two glass slides.

The researchers then carefully injected water droplets into the oil; these droplets were infused with tiny magnetic particles only nanometers, or billionths of a meter, wide. Next, the researchers turned on a rotating magnetic field.

Each time the magnetic field reversed, the bars flipped, drawing the magnetized droplets along specific directions, the researchers said. Each rotation of the magnetic field was very much like a cycle on a clock — for instance, a second hand making a full circle on a clock face. The rotating magnetic field ensured that every droplet ratcheted precisely one step forward with each cycle, moving in perfect synchrony.

A camera recorded the movements and interactions of all the droplets. The presence of a droplet in any given space represents a one in computer data, while the absence of a drop represents a zero; interactions among the droplets are analogous to computations, the researchers said. The layout of the bars on these new microfluidic chips is analogous to the layout of circuits on microchips, controlling interactions among the droplets.
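As a toy illustration of that clocked, droplet-as-bit scheme, here is a minimal Python sketch. It is a deliberate simplification (a single one-dimensional track is assumed); the real chip routes droplets through two-dimensional mazes of magnetic bars and uses droplet-droplet interactions to implement logic.

```python
def step(track):
    """One field rotation: every droplet ratchets exactly one cell
    forward; whatever occupies the last cell exits as the output bit."""
    out = track[-1]
    return [0] + track[:-1], out

# Presence of a droplet encodes a 1, absence a 0, as described above.
track = [1, 0, 1, 1]
for cycle in range(len(track)):
    track, bit = step(track)
    print(f"cycle {cycle}: output bit {bit}, track is now {track}")
```

The key property the rotating field provides is that every droplet advances exactly one cell per cycle, so the whole chip stays in lockstep no matter how many droplets it holds.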

So far, the droplets in this device are as small as 100 microns wide, about the average width of a human hair. The researchers noted that their models suggest the devices could ultimately control droplets just 10 microns across. “Making the droplets smaller will allow the chip to carry out more operations,” Prakash said.

The researchers now plan to make a design tool for these droplet circuits available to the public, so that anyone can make them.

“We’re very interested in engaging anybody and everybody who wants to play, to enable everyone to design new circuits based on building blocks we describe in this paper, or [to] discover new blocks,” Prakash said in a statement.

Prakash and his colleagues Georgios Katsikis and James Cybulski, both of Stanford University, detailed their findings June 8 in the journal Nature Physics.

References: http://www.livescience.com/