Computer vision and mobile technology could help blind people ‘see’


Computer scientists are developing new adaptive mobile technology which could enable blind and visually-impaired people to ‘see’ through their smartphone or tablet.

Funded by a Google Faculty Research Award, specialists in computer vision and machine learning based at the University of Lincoln, UK, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

Based on preliminary work on assistive technologies done by the Lincoln Centre for Autonomous Systems, the team plans to use colour and depth sensor technology inside new smartphones and tablets, like the recent Project Tango by Google, to enable 3D mapping and localisation, navigation and object recognition. The team will then develop the best interface to relay that information to users – whether through vibrations, sounds or the spoken word.

Project lead Dr Nicola Bellotto, an expert on machine perception and human-centred robotics from Lincoln’s School of Computer Science, said: “This project will build on our previous research to create an interface that can be used to help people with visual impairments.

“There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious. There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”

The research team, which includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception, aims to develop a system that will recognise visual clues in the environment. This data would be detected through the device camera and used to identify the type of room as the user moves around the space.
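
As a rough illustration of what identifying the type of room from detected visual cues might involve, the sketch below scores candidate room types against objects reported by a detector. The object lists, weights and function names are invented for the example and are not taken from the Lincoln system.

```python
# Hypothetical object-to-room evidence table; a real system would learn these associations.
ROOM_EVIDENCE = {
    "kitchen":  {"kettle", "sink", "fridge"},
    "office":   {"desk", "monitor", "keyboard"},
    "bathroom": {"sink", "bathtub", "toilet"},
}

def classify_room(detected_objects):
    """Pick the room type whose typical objects best overlap with what the camera saw."""
    scores = {room: len(objects & detected_objects)
              for room, objects in ROOM_EVIDENCE.items()}
    return max(scores, key=scores.get)

print(classify_room({"desk", "monitor", "mug"}))   # -> "office"
```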

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying the guidance it provides as the machine ‘learns’ from its landscape and from the human interaction. So the more accustomed the user becomes to the technology, the quicker and easier it would be to identify the environment.

References:http://phys.org/

I always feel like somebody’s watching me…


What power can individuals have over their data when their every move online is being tracked? Researchers at the Cambridge Computer Laboratory are building new systems that shift the power back to individual users, and could make personal data faster to access and at much lower cost.

It’s a fact of modern life – with every click, every tweet, every Facebook Like, we hand over information about ourselves to organisations who are desperate to know all of our secrets, in the hope that those secrets can be used to sell us something.

Companies have been collecting every possible scrap of information from their customers since long before the internet age, but with more powerful computers, cheaper storage and ubiquitous online use, the methods organisations use to gather information about people have become ever-more sophisticated. And sometimes those organisations know us better than our own families or friends.

For example, several years ago, data analysis tools used by the US retailer Target had become so precise that they were able to determine, with astonishing accuracy, whether a woman was pregnant and how far along she was, based on her purchase of certain products. And in one particularly embarrassing incident, Target knew that a teenage girl was pregnant before her father did, much to her father’s displeasure.

“What Target learned from that incident is that marketing too accurately can really make people squeamish,” says Professor Jon Crowcroft of the University’s Computer Laboratory. “But if they made their marketing a little less accurate by increasing the amount of privacy they give their customers, they found they can still retain or increase their customer base without making people feel as if they’re being spied on.”

Crowcroft’s research is in the area of ‘privacy by design’ – systems that allow us to live in the digital world and protect our privacy at the same time. As the concept of the Internet of Things – internet-connected washing machines, toasters and televisions – becomes reality, Crowcroft insists that privacy by design is needed to address the massive power imbalance that occurs when our personal data is shared with, and sold by, corporations, governments and other organisations.

But privacy by design doesn’t mean disconnecting from the online world and putting on a tinfoil hat – far from it. “There’s already a lot of data stored about each and every one of us – the things we buy, the food we eat, the health issues we have – and for each of these market segments, there are perfectly legitimate uses for that data,” adds Crowcroft. “Collecting healthcare data is fantastically useful for tracking pandemics, preventative care, more efficient treatment, public health – those are all perfectly reasonable and positive uses for big data. At the same time, most sites gather information in order to target ads more accurately, and most people are actually okay with that. So the question then becomes, what is privacy by design?”

“What we’re trying to do is develop processing frameworks that would allow this data to be useful and to be used, without the somewhat creepy feeling that you’re constantly being watched,” says Crowcroft’s colleague Dr Richard Mortier.

The type of system that Crowcroft and Mortier envision is one in which the user has the scope to allow access to their data on a case-by-case basis, rather than having it harvested whether they like it or not: computations are performed where the data is gathered, and the results are pushed back to the organisation that wants the data.

“We can change the big data problem completely by moving where the data is processed,” explains Mortier. “Rather than having systems where all of the data is gathered in some huge central location and processed, if you reconstruct the system so that the data is processed in the same place it’s gathered, individuals would be able to take some of the control of their information back from corporations and surveillance organisations. Instead of one huge central processing node, we want to see billions of smaller nodes, which would make information quicker to access, and could potentially be stored at lower overall cost.”
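
A toy sketch of that inversion might look like the following: instead of uploading raw records, each personal node answers a query locally and returns only the aggregate the organisation asked for. The class name, record layout and query are invented for illustration and do not describe the Cambridge prototypes.

```python
class PersonalDataNode:
    """Holds an individual's raw records locally; only derived results ever leave."""

    def __init__(self, records):
        self._records = records          # raw data is never exposed directly

    def answer(self, query):
        """Run the organisation's computation here and return only the result."""
        return query(self._records)

# An organisation ships a computation to the node instead of pulling the data to itself.
node = PersonalDataNode(records=[{"item": "coffee", "price": 2.5},
                                 {"item": "book", "price": 12.0}])
monthly_spend = node.answer(lambda recs: sum(r["price"] for r in recs))
print(monthly_spend)   # 14.5 – the aggregate crosses the boundary, the records do not
```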

Crowcroft and Mortier have designed and partially built systems where a person’s data stays local to them, and they can have the option to decide what is shared and with whom. For example, a patient can share their healthcare data with their GP, but the GP would have to get authorisation from the patient before sharing that data with a pharmaceutical company.
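
One way to make the patient/GP example concrete is a small consent check performed before any onward sharing. The policy structure below is an assumption made for illustration, not a description of the actual systems being built.

```python
class ConsentRegistry:
    """Per-user record of who may receive which category of data (illustrative only)."""

    def __init__(self):
        self._grants = set()             # (recipient, category) pairs the user approved

    def grant(self, recipient, category):
        self._grants.add((recipient, category))

    def may_share(self, recipient, category):
        return (recipient, category) in self._grants

registry = ConsentRegistry()
registry.grant("gp_practice", "healthcare")          # patient shares with their GP

# The GP would need a separate grant before forwarding data to a pharmaceutical company.
assert registry.may_share("gp_practice", "healthcare")
assert not registry.may_share("pharma_co", "healthcare")
```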

“People realise they’re being marketed to, but I don’t think they realise the scale of it – it really is a hidden menace,” says Crowcroft. “The point is that we could build systems that could stop that completely, and re-enable it on the basis of a level playing field. We want to see systems where people have agency over their data, giving them the ability to allow or prevent certain types of access.”

Contrary to what some people may assume about the nature of digital life, adds Crowcroft, the vast majority of people highly value their own privacy. He points to the launch and subsequent withdrawal of Google Glass, a wearable computer styled like eyeglasses. “People started wearing these things into restaurants and other diners wouldn’t put up with it, because they didn’t want to be recorded while eating their lunch – it really creeped people out,” he says.

“And that’s in a public space: imagine the same sort of thing happening in a private space. It’s about the asymmetry and the idea that this is being done to you and you have no comeback. The problem with digital infrastructures is you don’t see them, and to a certain extent companies depend on people not understanding them – we can build systems where there are mechanisms through which they can be understood.”

Crowcroft and Mortier recognise that they’ll never convince everyone to ditch cloud computing and switch to a decentralised system. But that isn’t their goal. “It takes a while to show that new ways of doing things can really work,” says Crowcroft. “If these sorts of systems become a reasonably widely used alternative, it will go a long way towards keeping companies and cloud storage providers honest. The very small number of providers leads to the exploitation of the network effect, where they have a strong monopolistic position over a certain type of data. And monopolies are not good for economies. If a decentralised system is more ethical, enough people using it may incentivise the big providers to be more ethical too.”

References:http://phys.org/

How computers are learning to make human software work more efficiently


Computer scientists have a history of borrowing ideas from nature, such as evolution. When it comes to optimising computer programs, a very interesting evolutionary-based approach has emerged over the past five or six years that could bring incalculable benefits to industry and eventually consumers. We call it genetic improvement.

Genetic improvement involves writing an automated “programmer” that manipulates the source code of a piece of software through trial and error with a view to making it work more efficiently. This might include swapping lines of code around, deleting lines and inserting new ones – very much like a human programmer. Each manipulation is then tested against some quality measure to determine if the new version of the code is an improvement over the old version. It is about taking large software systems and altering them slightly to achieve better results.
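
The loop itself is conceptually simple. The sketch below is a minimal, hypothetical illustration of that trial-and-error cycle – the mutation operators, fitness measure and test harness are assumptions for the example, not the tools used by any of the projects mentioned here.

```python
import random
import time

def mutate(lines):
    """Apply one random edit: swap two lines, delete a line, or reuse an existing line."""
    lines = list(lines)
    op = random.choice(["swap", "delete", "insert"])
    i = random.randrange(len(lines))
    if op == "swap" and len(lines) > 1:
        j = random.randrange(len(lines))
        lines[i], lines[j] = lines[j], lines[i]
    elif op == "delete" and len(lines) > 1:
        del lines[i]
    else:  # insert: duplicate an existing line elsewhere, mimicking code reuse
        lines.insert(random.randrange(len(lines) + 1), lines[i])
    return lines

def fitness(lines, test_cases):
    """Return the run time if all tests still pass, else None (variant is rejected)."""
    try:
        namespace = {}
        exec("\n".join(lines), namespace)          # build the candidate program
        start = time.perf_counter()
        for args, expected in test_cases:
            if namespace["target_function"](*args) != expected:
                return None
        return time.perf_counter() - start
    except Exception:
        return None

def genetic_improvement(lines, test_cases, generations=1000):
    best, best_time = lines, fitness(lines, test_cases)
    if best_time is None:
        raise ValueError("the original program must pass its own tests")
    for _ in range(generations):
        candidate = mutate(best)
        t = fitness(candidate, test_cases)
        if t is not None and t < best_time:        # keep only correct, faster variants
            best, best_time = candidate, t
    return best
```

Given source lines defining a hypothetical `target_function` and a list of input/output test cases, the loop keeps any mutant that still passes the tests and runs faster, and discards everything else.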

The benefits

These interventions can bring a variety of benefits in the realm of what programmers describe as the functional properties of a piece of software. They might improve how fast a program runs, for instance, or remove bugs. They can also be used to help transplant old software to new hardware.

The potential does not stop there. Because genetic improvement operates on source code, it can also improve the so-called non-functional properties. These include all the features that are not concerned purely with the input-output behaviour of programs, such as the amount of bandwidth or energy that the software consumes. These are often particularly tricky for a human programmer to deal with, given the already challenging problem of building correctly functioning software in the first place.

We have seen a few examples of genetic improvement beginning to be recognised in recent years – albeit still within universities for the moment. A good early one dates from 2009, when an automated “programmer” built by the University of New Mexico and the University of Virginia fixed 55 out of 105 bugs in various kinds of software, ranging from a media player to a Tetris game. For this it won $5,000 (£3,173) and a Gold Humie Award, which is awarded for achievements produced by genetic and evolutionary computation.

In the past year, UCL in London has overseen two research projects that have demonstrated the field’s potential (full disclosure: both have involved co-author William Langdon). The first involved a genetic-improvement program that could take a large, complex piece of software with more than 50,000 lines of code and speed up its functionality by 70 times.

The second carried out the first automated wholesale transplant of one piece of software into a larger one, by taking a linguistic translator called Babel and inserting it into an instant-messaging system called Pidgin.

Nature and computers

To understand the scale of the opportunity, you have to appreciate that software is a unique engineering material. In other areas of engineering, such as electrical and mechanical engineering, you might build a computational model before you build the final product, since it allows you to push your understanding and test a particular design. On the other hand, software is its own model. A computational model of software is still a computer program. It is a true representation of the final product, which maximises your ability to optimise it with an automated programmer.

As we mentioned at the beginning, there is a rich tradition of computer scientists borrowing ideas from nature. Nature inspired genetic algorithms, for example, which crunch through the millions of possible answers to a real-life problem with many variables to come up with the best one. Examples include anything from devising a wholesale road distribution network to fine-tuning the design of an engine.
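
For contrast with genetic improvement, a textbook genetic algorithm evolves a vector of numeric parameters rather than source code. The sketch below is a generic, minimal version; the "engine tuning" objective is a made-up stand-in, not a real model.

```python
import random

def genetic_algorithm(objective, n_params, pop_size=50, generations=200):
    """Textbook GA: truncation selection, one-point crossover, Gaussian mutation."""
    pop = [[random.uniform(0, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)               # lower objective = better
        parents = scored[: pop_size // 2]                 # keep the better half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [g + random.gauss(0, 0.05) for g in child]  # small mutations
            children.append(child)
        pop = children
    return min(pop, key=objective)

# Hypothetical use: minimise the "fuel use" of a toy engine model with 4 tuning parameters.
best = genetic_algorithm(lambda p: sum((x - 0.3) ** 2 for x in p), n_params=4)
print(best)
```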

Though the evolution metaphor has become something of a millstone in this context, genetic algorithms have had a number of successes producing results that are either comparable with human-written programs or even better.

Evolution also inspired genetic programming, which attempts to build programs from scratch using small sets of instructions. It is limited, however. One of its many criticisms is that it cannot even evolve the sort of program that would typically be expected of a first-year undergraduate, and will not therefore scale up to the huge software systems that are the backbone of large multinationals.

This makes genetic improvement a particularly interesting deviation from this discipline. Instead of trying to rewrite the whole program from scratch, it succeeds by making small numbers of tiny changes. It doesn’t even have to confine itself to genetic improvement as such. The Babel/Pidgin example showed that it can extend to transplanting a piece of software into a program in a similar way to how surgeons transplant body organs from donors to recipients. This is a reminder that the overall goal is automated software engineering. Whatever nature can teach us when it comes to developing this fascinating new field, we should grab it with both hands.

References:http://phys.org/

 

Solar-powered hydrogen generation using two of the most abundant elements on Earth


By smoothing the surface of hematite, a team of researchers has achieved “unassisted” water splitting, using the abundant rust-like mineral and silicon to capture and store solar energy in hydrogen gas.

One potential clean energy future requires an economical, efficient, and relatively simple way to generate copious amounts of hydrogen for use in fuel cells and hydrogen-powered vehicles. Hydrogen is usually produced by using electricity to split water molecules into hydrogen and oxygen; the ideal method would mine hydrogen from water using electricity generated directly from sunlight, without the addition of any external power source. Hematite – the mineral form of iron oxide – used in conjunction with silicon has shown some promise in this area, but low conversion efficiencies have slowed research. Now scientists have discovered a way to make great improvements, giving hope to using two of the most abundant elements on Earth to efficiently produce hydrogen.

Hematite holds potential for use in low-power photoelectrochemical water splitting (where energy, in the form of light, is the input and chemical energy is the output) to release hydrogen, thanks to its low turn-on voltage of less than 0.3 volts when exposed to sunlight. Unfortunately, that voltage is too low to initiate water splitting – the reaction thermodynamically requires at least 1.23 volts – so a number of improvements to the surface of hematite have been sought to improve current flow.

In this vein, researchers from Boston College, UC Berkeley, and China’s University of Science and Technology have hit upon the technique of “re-growing” the hematite, so that a smoother surface is obtained along with a higher energy yield. In fact, this new version has doubled the electrical output, and moved one step closer to enabling practical, large-scale energy-harvesting and hydrogen generation.

“By simply smoothing the surface characteristics of hematite, this close cousin of rust can be improved to couple with silicon, which is derived from sand, to achieve complete water splitting for solar hydrogen generation,” said Boston College associate professor of chemistry Dunwei Wang. “This unassisted water splitting, which is very rare, does not require expensive or scarce resources.”

Building on previous work that realized gains in the photoelectrochemical turn-on voltage from the use of smooth surface coatings, the team re-assessed the hematite surface structure using a synchrotron particle accelerator at the Lawrence Berkeley National Laboratory. Concentrating on smoothing out the hematite’s surface deficiencies to see if this would yield improvements, the researchers used physical vapor deposition to layer hematite onto a borosilicate glass substrate and create a photoanode. They then baked the devices to produce a thin, even film of iron oxide across their surfaces.

Subsequent tests of this new amalgam resulted in an immediate improvement in turn-on voltage, and a substantial increase in photovoltage from 0.24 volts to 0.80 volts. Whilst this new hydrogen harvesting process only realized an efficiency of 0.91 percent, it is the very first time that the combination of hematite and amorphous silicon has been shown to produce any meaningful efficiencies of conversion at all.
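
For context, solar-to-hydrogen efficiency is conventionally measured against the 1.23 V thermodynamic potential of water splitting. The 0.91 percent figure presumably follows a relation of this standard textbook form (an assumed definition, not a formula quoted from the paper):

```latex
% Standard solar-to-hydrogen (STH) efficiency definition:
%   j_{H_2} - photocurrent density driving hydrogen evolution
%   \eta_F  - faradaic efficiency for hydrogen
%   P_{in}  - incident solar power density (typically 100 mW/cm^2)
\eta_{\mathrm{STH}} = \frac{j_{\mathrm{H_2}} \times 1.23\,\mathrm{V} \times \eta_F}{P_{\mathrm{in}}}
```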

As a result, this research marks progress towards photoelectrochemical energy harvesting that is totally self-sufficient, uses abundantly available materials, and is easy to produce.

“This offers new hope that efficient and inexpensive solar fuel production by readily available natural resources is within reach,” said Wang. “Getting there will contribute to a sustainable future powered by renewable energy.”

References:http://www.gizmag.com/