Smartphone and tablet could be used for cheap, portable medical biosensing

A diagram of the CNBP system (Credit: Centre for Nanoscale BioPhotonics)

As mobile technology progresses, we’re seeing more and more examples of low-cost diagnostic systems being created for use in developing nations and remote locations. One of the latest incorporates little more than a smartphone, tablet, polarizer and box to test body fluid samples for diseases such as arthritis, cystic fibrosis and acute pancreatitis.

Developed at Australia’s Centre for Nanoscale BioPhotonics (CNBP), the setup utilizes fluorescent microscopy, a process in which dyes added to a sample cause specific biomarkers to glow when exposed to bright light.

To use it, clinicians deposit a dyed fluid sample in a well plate (basically a transparent sample-holding tray), put that plate on the screen of a tablet that’s in the box, and place a piece of polarizing glass over the plate compartment that contains the fluid. They then put their smartphone on top of the box, so that its camera lines up with that compartment.

Once the tablet is powered up, the light from its screen causes the targeted biomarkers to fluoresce (assuming they’re present in the first place). The polarizer allows light given off by those biomarkers to stand out from the tablet’s light, while an app on the phone analyzes the color and intensity of the fluorescence to help make a diagnosis.
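
As a rough illustration of the kind of analysis such an app could perform, the Python sketch below reads a smartphone photo of the well, averages the pixels in a region of interest and compares the fluorescence intensity against a threshold. It is only a sketch under assumed inputs (the file name, region coordinates and threshold are hypothetical), not the CNBP app itself.

    # Illustrative sketch only -- not the CNBP software. The file name, region
    # of interest and threshold are hypothetical placeholders.
    import numpy as np
    from PIL import Image

    def measure_fluorescence(image_path, roi):
        """Return the mean (R, G, B) colour of the well region in a photo."""
        img = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
        x0, y0, x1, y1 = roi
        return img[y0:y1, x0:x1].mean(axis=(0, 1))

    mean_rgb = measure_fluorescence("well_photo.jpg", roi=(120, 80, 220, 180))
    green = mean_rgb[1]  # intensity of the (assumed green) fluorescence channel
    print("Mean RGB of well:", mean_rgb)
    print("Biomarker signal detected" if green > 60.0 else "Signal below threshold")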

“This type of fluorescent testing can be carried out by a variety of devices but in most cases the readout requires professional research laboratory equipment, which costs many tens of thousands of dollars,” says Ewa Goldys, CNBP’s deputy director. “What we’ve done is develop a device with a minimal number of commonly available components … The results can be analyzed by simply taking an image and the readout is available immediately.”

The free smartphone app will be available as of June 15 via the project website. A paper on the research was recently published in the journal Sensors.

References: http://www.gizmag.com/

Self-folding robot walks, swims, climbs, dissolves

A demo sparking interest at the ICRA 2015 conference in Seattle featured an origami robot built by researchers from the Computer Science and Artificial Intelligence Laboratory at MIT and the Department of Informatics at the Technische Universität München in Germany. “An untethered miniature origami robot that self-folds, walks, swims, and degrades” was the name of the paper, co-authored by Shuhei Miyashita, Steven Guitron, Marvin Ludersdorfer, Cynthia R. Sung and Daniela Rus. The robot does just what the paper’s title suggests, and a video shows each of those moves in action.

One can watch the robot walking along a trajectory, walking on human skin, delivering a block; swimming (the robot has a boat-shaped body so that it can float on water with roll and pitch stability); carrying a load (the robot itself weighs just 0.3 g); climbing a slope; and digging through a stack. It also shows how a polystyrene model robot dissolves in acetone.

Evan Ackerman in IEEE Spectrum reported on the Seattle demo. Unfolded, the robot has a magnet and PVC sandwiched between laser-cut structural layers (polystyrene or paper). How it folds: when placed on a heating element, the PVC contracts, and where the structural layers have been cut, it creates folds, said Ackerman. The self-folding takes place from a flat sheet, and the robot folds itself in a few seconds. Kelsey Atherton in Popular Science said, “Underneath it all, hidden like the Wizard of Oz behind his curtain, sit four electromagnetic coils, which turn on and off and make the robot move forward in a direction set by its shape.”

When placed in a tank of acetone, the robot dissolves, except for the magnet. The authors noted that the “minimal body materials” in their design enabled the robot to completely dissolve in a liquid environment, “a difficult challenge to accomplish if the robot had a more complex architecture.”

Possible future directions include self-folding sensors built into the body of the robot, which could lead to autonomous operation, eventually even inside the human body. The authors wrote, “Such autonomous ‘4D-printed’ robots could be used at unreachable sites, including those encountered in both in vivo and bionic biological treatment.”

Atherton said that future designs based on this robot could, for example, be even smaller and could work as medical devices sent under the skin.

IEEE Spectrum’s Ackerman said it marked “the first time that a robot has been able to demonstrate a complete life cycle like this.”

Origami robots, reconfigurable robots that can fold themselves into arbitrary shapes, were discussed in an article last year in MIT News, which quoted Ronald Fearing, a professor of electrical engineering and computer science at the University of California at Berkeley. Origami robotics, he said, is “a pretty powerful concept, because cutting planar things and folding is an inherently very low-cost process.” He added, “Folding, I think, is a good way to get to the smaller robots.”

References: http://phys.org/

How Computers Can Teach Themselves to Recognize Cats

In June 2012, a network of 16,000 computers trained itself to recognize a cat by looking at 10 million images from YouTube videos. Today, the technique is used in everything from Google image searches to Facebook’s newsfeed algorithms.

The feline recognition feat was accomplished using “deep learning,” an approach to machine learning that works by exposing a computer program to a large set of raw data and having it discover more and more abstract concepts. “What it’s about is allowing the computer to learn how to represent information in a more meaningful way, and doing so at several levels of representation,” said Yoshua Bengio, a computer scientist at the University of Montreal in Canada, who co-authored an article on the subject published on May 27 in the journal Nature.

“There are many ways you can represent information, some of which allow a human decision maker to make a decision more easily,” Bengio told Live Science. For example, when light hits a person’s eye, the photons stimulate neurons in the retina to fire, sending signals to the brain’s visual cortex, which perceives them as an image. This image in the brain is abstract, but it’s a more useful representation for making decisions than a collection of photons.

Similarly, deep learning allows a computer (or set of computers) to take a bunch of raw data — in the form of pixels on a screen, for example — and construct higher and higher levels of abstraction. It can then use these abstract concepts to make decisions, such as whether a picture of a furry blob with two eyes and whiskers is a cat.
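
As a toy illustration of these “levels of representation” (and nothing like the actual 16,000-machine experiment), the short Python sketch below pushes a flattened image through stacked layers, each producing a slightly more abstract feature vector, before squashing the result into a single “cat” score. The layer sizes are arbitrary and the weights are random placeholders; a real deep-learning system learns them from data.

    # Toy sketch of stacked representations -- weights here are random
    # placeholders; a real deep-learning system would learn them from data.
    import numpy as np

    rng = np.random.default_rng(0)
    pixels = rng.random(32 * 32)          # stand-in for a raw 32 x 32 image

    def layer(x, n_out):
        """One layer: a linear mix of the inputs followed by a ReLU."""
        w = rng.standard_normal((n_out, x.size)) / np.sqrt(x.size)
        return np.maximum(0.0, w @ x)

    edges = layer(pixels, 256)            # low-level features (edge-like patterns)
    parts = layer(edges, 64)              # mid-level features (eyes, whiskers, ...)
    concept = layer(parts, 8)             # high-level abstraction
    cat_score = 1.0 / (1.0 + np.exp(-concept.sum()))  # squash to a 0-1 score
    print(f"toy cat score: {cat_score:.2f}")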

“Think of a child learning,” Bengio said. “Initially, the child may see the world in a very simple way, but at some point, the child’s brain clicks, and she discovers an abstraction.” The child can use that abstraction to learn other abstractions, he added.

The self-learning approach has led to dramatic advances in speech- and image-recognition software. It is used in many Internet and mobile phone products, and even self-driving cars, Bengio said.

Deep learning is an important part of many forms of “weak” artificial intelligence, nonsentient intelligence focused on a narrow task, but it could become a component of “strong” artificial intelligence — the kind of AI depicted in movies like “Ex Machina” and “Her.”

But Bengio doesn’t share the fears about strong AI that billionaire entrepreneur Elon Musk, world-famous physicist Stephen Hawking and others have been voicing.

“I do subscribe to the idea that, in some undetermined future, AI could be a problem,” Bengio said, “but we’re so far from [strong AI taking over] that it’s not going to be a problem.”

However, he said there are more immediate issues to be concerned about, such as how AI will impact personal privacy and the job market. “They’re less sexy, but these are the questions that should be used for debate,” Bengio said.

References: http://www.livescience.com/

MIT’s robotic cheetah can now leap over obstacles

The last time we heard from the researchers working on MIT’s robotic cheetah project, they had untethered their machine to let it bound freely across the campus lawns. Wireless and with a new spring in its step, the robot hit speeds of 10 mph (16 km/h) and could jump 13 in (33 cm) into the air. The quadrupedal robot has now been given another upgrade in the form of a LIDAR system and special algorithms, allowing it to detect and leap over obstacles in its path.

MIT’s robotic cheetah project has been in the works for a few years now. The team’s view is that the efficiency with which Earth’s fastest animal goes about its business holds many lessons for the world of robotic engineering. This line of thinking has inspired other like-minded projects, with DARPA and Boston Dynamics both working on robotic cheetahs of their own.

The MIT team says it has now trained the first four-legged robot capable of jumping over hurdles autonomously as it runs. With an onboard LIDAR system, the machine is now able to use reflections from a laser to map the terrain. This data is paired with a special algorithm that dictates the robot’s next moves.
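
The coverage does not spell out how that terrain map is processed, but the idea of spotting a hurdle in a forward-looking range scan can be sketched simply. The Python below is an assumed, illustrative pipeline (the sampling resolution, height threshold and test profile are invented), not MIT’s actual software.

    # Illustrative obstacle detection from a forward terrain profile -- the
    # resolution, height threshold and test data are invented examples.
    import numpy as np

    def find_obstacle(ground_profile, resolution=0.05, min_height=0.10):
        """ground_profile: terrain height (m) sampled every `resolution` m ahead."""
        heights = np.asarray(ground_profile)
        raised = np.flatnonzero(heights > min_height)   # samples above flat ground
        if raised.size == 0:
            return None                                  # no obstacle in view
        distance = raised[0] * resolution                # metres to obstacle front
        height = heights[raised].max()                   # tallest point
        return distance, height

    profile = [0.0] * 60 + [0.45] * 6 + [0.0] * 30       # flat, 45 cm hurdle, flat
    print(find_obstacle(profile))                         # -> (3.0, 0.45)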

The first part of this algorithm sees the robot identify an upcoming obstacle and determine both its size and the distance to it. The second part enables the robot to manage its approach, determining the best position from which to jump and safely clear the top. The robot adjusts its stride if need be, speeding up or slowing down to take off from the ideal launch point. This algorithm works in around 100 milliseconds and runs on the fly, dynamically tuning the robot’s approach with every step.
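
The published details are sparse, but the approach-planning step can be caricatured in a few lines: given the estimated distance to the obstacle, stretch or shrink a nominal stride so that the final footfall lands at a chosen take-off point. The sketch below is purely illustrative (the nominal stride and take-off standoff are made-up numbers), not the MIT controller.

    # Illustrative approach planning -- nominal stride and take-off standoff
    # are made-up numbers, not the MIT cheetah's parameters.
    def plan_approach(distance_to_obstacle, nominal_stride=0.35, takeoff_standoff=0.5):
        """Adjust stride length so the last step ends at the take-off point."""
        run_up = distance_to_obstacle - takeoff_standoff   # ground to cover (m)
        n_steps = max(1, round(run_up / nominal_stride))   # whole strides to take
        return run_up / n_steps                            # adjusted stride (m)

    print(f"adjusted stride: {plan_approach(3.2):.2f} m")  # -> about 0.34 m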

Just as the robot is about to leave the ground, a third part of the algorithm works out the optimal jumping trajectory, taking the obstacle height and approach speed to calculate how much force its electric motors must deliver to propel it up and over the hurdle.
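
As a back-of-the-envelope illustration of that calculation (not the team’s actual controller), simple ballistic physics says the robot needs a vertical take-off speed of roughly sqrt(2 g (h + margin)) to clear an obstacle of height h, and delivering that speed during a short ground-contact time sets the average leg force. The robot mass, contact time and clearance margin below are invented example numbers.

    # Back-of-the-envelope jump requirements -- mass, contact time and margin
    # are invented example values, not the MIT cheetah's parameters.
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def jump_requirements(obstacle_height, mass, contact_time, margin=0.05):
        v_up = math.sqrt(2 * G * (obstacle_height + margin))  # take-off speed, m/s
        avg_force = mass * (v_up / contact_time + G)           # average leg force, N
        return v_up, avg_force

    v_up, force = jump_requirements(obstacle_height=0.45, mass=30.0, contact_time=0.1)
    print(f"take-off speed ~ {v_up:.2f} m/s, average leg force ~ {force:.0f} N")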

Putting the cheetah’s new capabilities to the test, the team first set it down to run on a treadmill while tethered. Running at an average speed of 5 mph (8 km/h), the robot was able to clear obstacles up to 18 in (45 cm) with a success rate of around 70 percent. The cheetah was then unleashed onto an indoor test track, running freely with more space and longer approach times to prepare its jumps, clearing about 90 percent of obstacles.

“A running jump is a truly dynamic behavior,” says Sangbae Kim, assistant professor of mechanical engineering at MIT. “You have to manage balance and energy, and be able to handle impact after landing. Our robot is specifically designed for those highly dynamic behaviors.”

Kim and his team will now look to improve the robot further so that it can leap over obstacles on softer terrain such as grass. They will demonstrate the cheetah’s new capabilities at the DARPA Robotics Challenge in June. Gizmag will be trackside to bring you a closer look.

References: http://www.gizmag.com/