A computer algorithm to quantify creativity in art networks


A team of researchers at Rutgers University has taken on the novel task of getting a computer to rate paintings made by the masters, based on their creativity. They have written a paper describing their approach and the results obtained by running their algorithm, and have posted it on the preprint server arXiv.

The value of art lies in the eye of the beholder: one person may find a particular painting moves them to tears, while another feels nothing—such is the intangible nature of the human mind and its reaction to stimuli. Creativity, on the other hand, is a little more easily recognized, whether in art, the sciences or other areas. In this new effort the team at Rutgers sought to bring some science to the fine art of creativity recognition, as it applies to one of the most recognized fine arts—paintings done by masters over the years. Traditionally, labeling a work of art as creative has fallen to art scholars with years of training, background and love of the work—a creative work has to offer something new, of course, but it must also, according to the researchers, have demonstrated some degree of influence, i.e. been copied by others who came after. They set out to create an algorithm that, once finished, could rate works by the masters based on nothing but creativity.
To create that algorithm, the team started with what are known as classemes—where a computer recognizes an object in a picture and assigns it to a particular category. Next, they turned to Wikiart, a large and easily accessible database that holds, among other things, approximately 62,000 images of famous paintings. Finally, they applied theoretical work from network science to help work out which paintings were a clear influence on the creation of later paintings.
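The paper itself spells out the method in full; a rough way to picture the idea is sketched below in Python. This is a simplified illustration, not the researchers' actual formulation: it assumes each painting has already been reduced to a classeme-style feature vector and tagged with a year, and it scores a painting as creative when it differs from what came before it yet resembles what came after it.

```python
import numpy as np

def creativity_scores(features, years):
    """Toy influence-based scoring over a set of paintings.

    features: (n_paintings, n_features) array of classeme-style vectors.
    years:    list of creation dates, one per painting.
    """
    features = np.asarray(features, dtype=float)
    # Cosine similarity between every pair of paintings.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    sim = (features @ features.T) / (norms @ norms.T)

    scores = []
    for i, year in enumerate(years):
        earlier = [j for j in range(len(years)) if years[j] < year]
        later = [j for j in range(len(years)) if years[j] > year]
        # Originality: low similarity to anything painted before it.
        originality = 1.0 - (sim[i, earlier].max() if earlier else 0.0)
        # Influence: similarity to paintings that came after it.
        influence = sim[i, later].mean() if later else 0.0
        scores.append(originality + influence)
    return scores

# Made-up three-dimensional "classeme" vectors and dates, purely for illustration.
paintings = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.3]]
print(creativity_scores(paintings, [1503, 1640, 1890]))
```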
Putting it all together and running the algorithm produced a list of paintings ranked by creativity. The approach apparently worked: the researchers report that, for the most part, their algorithm's results matched the assessments art experts have made over the years, with only a few exceptions here and there. The team suggests the algorithm could be used in other contexts as well, such as sculpture, literature and likely other domains.

References:http://phys.org/

Machines learn to understand how we speak


At Apple’s recent Worldwide Developers Conference, one of the tent-pole items was the inclusion of additional intelligent voice-recognition features for its personal assistant app Siri in the most recent update to its mobile operating system, iOS 9.

Now, instead of asking Siri to “remind me about Kevin’s birthday tomorrow”, you can rely on context and just ask Siri to “remind me of this” while viewing the Facebook event for the birthday. It will know what you mean.

Technology like this has also existed in Google devices for a little while now – thanks to OK Google – bringing us ever closer to context-aware voice recognition.

But how does it all work? Why is context so important and how does it tie in with voice recognition?
To answer that question, it’s worthwhile looking back at how voice recognition works and how it relates to another important area, natural language processing.

A brief history of voice recognition

Voice recognition has been in the public consciousness for a long time. Rather than tapping on a keyboard, wouldn’t it be nice to speak to a computer in natural language and have it understand everything you say?
Ever since Captain Kirk’s conversation with the computer aboard the USS Enterprise in the original Star Trek series in the 1960s (and Scotty’s failed attempt to talk to a 20th-century computer in one of the later Original Series movies) we’ve dreamed about how this might work.

Even movies set in more recent times have flirted with the idea of better voice recognition. The technology-focused Sneakers from 1992 features Robert Redford painfully collecting snippets of an executive’s voice and playing them back with a tape recorder into a computer to gain voice access to the system.

But the simplicity of the science-fiction depictions belies a complexity in the process of voice-recognition technology. Before a computer can even understand what you mean, it needs to be able to understand what you said.
This involves a complex process that includes audio sampling, feature extraction and then actual speech recognition to recognise individual sounds and convert them to text.

Researchers have been working on this technology for many years. They have developed techniques that extract features in a similar way to the human ear and recognise them as phonemes and sounds that human beings make as part of their speech. This involves the use of artificial neural networks, hidden Markov models and other ideas that are all part of the broad field of artificial intelligence.
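The article does not name the specific features involved, but one widely used choice is the mel-frequency cepstral coefficient (MFCC), which weights frequencies roughly the way the ear does. A minimal sketch using the librosa library (the file name here is illustrative):

```python
import librosa

# Load a short recording; sr is the sampling rate (file name is illustrative).
audio, sr = librosa.load("hello_world.wav", sr=16000)

# MFCCs summarise the spectrum of each short time frame in a way that
# roughly mirrors the ear's frequency response: one column per frame,
# one row per coefficient.
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
print(mfccs.shape)  # (13, number_of_frames)

# An acoustic model (for example a hidden Markov model or neural network)
# then maps these frame-level features onto phonemes, and finally onto text.
```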

Through these models, speech-recognition rates have improved. Error rates of less than 8% were reported this year by Google.

But even with these advancements, auditory recognition is only half the battle. Once a computer has gone through this process, it only has the text that replicates what you said. But you could have said anything at all.

The next step is natural language processing.

Did you get the gist?

Once a machine has converted what you say into text, it then has to understand what you’ve actually said. This process is called “natural language processing”. This is arguably more difficult than the voice recognition itself, because human language is full of context and semantics that make interpreting it difficult.

Anybody who has used earlier voice-recognition systems can testify as to how difficult this can be. Early systems had a very limited vocabulary and you were required to say commands in just the right way to ensure that the computer understood them.

This was true not only for voice-recognition systems, but even textual input systems, where the order of the words and the inclusion of certain words made a large difference to how the system processed the command. This was because early language-processing systems used hard rules and decision trees to interpret commands, so any deviation from these commands caused problems.

Newer systems, however, use machine-learning algorithms similar to the hidden Markov models used in speech recognition to build a vocabulary. These systems still need to be taught, but they are able to make softer decisions based on weightings of the individual words used. This allows for more flexible queries, where the language used can be changed but the content of the query can remain the same.

This is why it’s possible to ask Siri either to “schedule a calendar appointment for 9am to pick up my dry-cleaning” or “enter pick up my dry-cleaning in my calendar for 9am” and get the same result.
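A toy way to see why the softer, weighted approach handles both phrasings is sketched below with scikit-learn. This is an illustration of the bag-of-words idea, not Siri's actual pipeline, and the tiny training set is made up for the example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few made-up example requests, each labelled with the intent it expresses.
phrases = [
    "schedule a meeting in my calendar for 3pm tomorrow",
    "put lunch with Sam in my calendar for noon",
    "remind me to call the bank",
    "set a reminder to water the plants",
]
intents = ["calendar", "calendar", "reminder", "reminder"]

# Bag-of-words features plus a linear classifier that learns a weight per word.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(phrases, intents)

# Two differently worded versions of the same request land on the same intent,
# because the highest-weighted word they share ("calendar") points the same way.
print(model.predict([
    "schedule a calendar appointment for 9am to pick up my dry-cleaning",
    "enter pick up my dry-cleaning in my calendar for 9am",
]))
```
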
But how do you deal with different voices?

Despite these advancements there are still challenges in this space. In the field of voice recognition, accents and pronunciation can still cause problems.

Because of the way the systems work, different pronunciations of phonemes can cause the system to not recognise what you’ve said. This is especially true when a word’s spelling seems (to non-locals) to bear no relation to the way it is pronounced, as with the British cities of “Leicester” and “Glasgow”.

Even Australian cities such as “Melbourne” seem to trip up some Americans. While to an Australian the pronunciation of Melbourne is very obvious, the different way that phonemes are used in America means that they often pronounce it wrong (to parochial ears).

Anybody who has heard a GPS system mispronounce Ipswich as “eyp-swich” knows this also goes both ways. The only way around this is to train the system in the different ways words are pronounced. But with the variation in accents (and even pronunciation within accents) this can be quite a large and complex process.
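One common way to handle this, illustrated below with informal, made-up phoneme spellings rather than entries from any real lexicon, is to give the recogniser a pronunciation lexicon that lists several accepted variants for each word:

```python
# Toy pronunciation lexicon: each word maps to the phoneme sequences the
# recogniser should accept. The spellings are informal illustrations only.
lexicon = {
    "Melbourne": ["M EH L B AH N", "M EH L B AO R N"],  # local vs. American-style
    "Leicester": ["L EH S T AH"],
    "Ipswich":   ["IH P S W IH CH"],
}

def accepts(word, heard):
    """Return True if the heard phoneme sequence matches any known variant."""
    return heard in lexicon.get(word, [])

print(accepts("Melbourne", "M EH L B AO R N"))  # True: the American-style variant
```

Every extra variant, multiplied across thousands of place names and regional accents, is part of what makes this such a large and complex process.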

On the language-processing side, the issue is predominantly one of context. The example given in the opening shows the state of the art in contextual language processing. But all you need to do is pay attention to a conversation for a few minutes to realise how much meaning we leave implicit, and how much extra context a machine would need spelled out.

For instance, how often do you ask somebody:

Did you get my e-mail?

But what you actually mean is:

Did you get my e-mail? If you did, have you read it and can you please provide a reply as a response to this question?

Things get even more complicated when you want to engage in a conversation with a machine, asking an initial question and then follow-up questions, such as “What is Martin’s number?”, followed by “Call him” or “Text him”.
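One toy way to picture how an assistant can carry that context is to remember the last person mentioned and resolve a pronoun like “him” against it. The contact details and logic below are purely illustrative, not how any particular assistant is built:

```python
contacts = {"Martin": "+61 400 000 000"}  # made-up contact book
last_person = None  # conversational state carried between turns

def handle(utterance):
    global last_person
    text = utterance.lower()
    # An explicit name sets the context for later turns.
    for name in contacts:
        if name.lower() in text:
            last_person = name
            return f"{name}'s number is {contacts[name]}"
    # A follow-up like "Call him" or "Text him" leans on the stored context.
    if "him" in text and last_person:
        action = "Calling" if "call" in text else "Texting"
        return f"{action} {last_person} on {contacts[last_person]}"
    return "Sorry, I need more context."

print(handle("What is Martin's number?"))  # sets the context
print(handle("Call him"))                  # resolved using that context
```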

Machines are improving when it comes to understanding context, but they still have a way to go!

Automatic translation

So, we have made great progress in a lot of different areas to get to this point. But there are still challenges ahead in recognising accents, handling implied meaning in language, and tracking context across a conversation. This means it might still be a while before we have those computers from Star Trek interpreting everything we say.

But rest assured. We are slowly getting closer, with recent advancements from Microsoft in automatic translation showing that, if we get it right, the result can be very cool.

Google has recently revealed technology that uses a combination of image or voice recognition, natural language processing and the camera on your smartphone to automatically translate signs and short conversations from one language to another for you. It will even try to match the font so that the sign looks the same, but in English!

So no longer do you need to ponder over a menu written in Italian, or wonder how to order from a waiter who doesn’t speak English: Google has you covered. Not quite the USS Enterprise, but certainly closer!

Michael Cowling is Senior Lecturer & Discipline Leader, Mobile Computing & Applications at Central Queensland University.

References:http://phys.org/

Soft Robotic Tentacles Pick Up Ant Without Crushing It


Tiny soft robotic tentacles might be ideal for delicate microscopic surgery, say researchers, who were able to use the teensy “limbs” to pick up an ant without damaging its body.

In experiments, these new tentacles also wrapped around other tiny items — such as fish eggs, which deform and burst easily when handled by hard tweezers — without damaging them, scientists added.

Conventional robots are built from rigid parts, making them vulnerable to harm from bumps, scrapes, twists and falls, as well as preventing them from wriggling past obstacles. Increasingly, researchers are developing robots made from soft, elastic plastic and rubber, inspired by octopuses, worms and starfish. These soft robots resist many of the kinds of damage, and can overcome many of the obstacles, that impair hard robots.

However, miniaturizing soft robots for tiny applications has proved challenging. Soft robots typically move with the aid of compressed air that is forced in and out of many tiny pneumatic channels running through their limbs, essentially inflating and deflating like balloons. Creating microscopic versions of such limbs is difficult: the hollow channels are often formed by dissolving away unwanted material, and ensuring that all of that material actually dissolves is a complicated task at microscopic scales.

These new robot tentacles can grab and squeeze items by moving in a spiraling manner, much like elephant trunks, octopus arms, plant tendrils and monkey tails.

The soft robotic micro-tentacle wraps around a delicate fish egg. (Scale bar: 0.5 mm) Credit: Jaeyoun Kim / Iowa State University
The microscopic tubes are 5 to 8 millimeters long, about the length of the average red ant. Each tube has walls 8 to 32 microns thick and hollow channels 100 to 125 microns wide. In comparison, the average width of a human hair is about 100 microns.

To make these microscopic tubes, the researchers dipped thin wires or optical fibers in liquid silicone rubber and then stripped the hollow pipes off the rods once the fluid had solidified. The researchers inflated and deflated the tubes using syringes as pumps.

The hollow channel inside each tube did not run straight down its middle — rather, by letting gravity pull on the silicone rubber as it solidified, one side of each tube was thicker than the other. When air is pumped into each tube, the thin side will bend more than the thick side, allowing the tube to coil.

Ordinarily, these microscopic tubes can only coil once when inflated. However, the scientists augmented the ability of the tubes to flex by adding rings of silicone rubber onto their exteriors that “amplified the single-turn coiling into multi-turn spiraling,” study co-author Jaeyoun Kim, an electrical engineer at Iowa State University, told Live Science.

These new tentacles could pick up and hold an ant whose waist was about 400 microns wide without damaging its body. The researchers suggest these tentacles could help safely and delicately manipulate blood vessels or even embryos in minimally invasive surgeries. “The gentle spiraling and scooping motion of our micro-tentacle will definitely help,” Kim said.

Kim and his colleagues, Jungwook Paek and Inho Cho, detailed their findings online today (June 11) in the journal Scientific Reports.

References:http://www.livescience.com/

The battery revolution is exciting, but remember they pollute too


The recent unveiling by Tesla founder Elon Musk of the low-cost Powerwall storage battery is the latest in a series of exciting advances in battery technologies for electric cars and domestic electricity generation.

We have also seen the development of an aluminium-ion battery that may be safer, lighter and cheaper than the lithium-ion batteries used by Tesla and most other auto and technology companies.
These advances are exciting for two main reasons. First, the cost of energy storage, in the form of batteries, is decreasing significantly. This makes electric vehicle ownership and home energy storage much more attainable.
The second, related reason is that these cheaper green technologies may make the transition to a greener economy easier and faster than we have so far imagined (although, as has been recently pointed out on The Conversation, these technologies are only one piece of the overall energy puzzle).

Beware the industrial option

These technological advances, and much of the excitement around them, lend themselves to the idea that solving environmental problems such as climate change is primarily a case of technological adjustment. But this approach encourages a strategy of “superindustrialisation”, in which technology and industry are brought to bear to resolve climate change, through resource efficiency, waste reduction and pollution control. In this context, the green economy is presented as an inevitable green technological economic wave.
But the prospect of this green economic wave needs to be considered within a wider environmental and social context, which makes solving the problems much more complex. Let’s take electric vehicles as an example.
The ecological damage of cars, electric or otherwise, is partly due to the fact that the car industry generates more than 3 million tonnes of scrap and waste every year. In 2009, 14 million cars were scrapped in the United States alone.
The number of cars operating in the world is expected to climb from the current 896 million to 1.2 billion by 2020. The infrastructure associated with growing vehicle use, particularly roads, also makes a significant contribution to the destruction of ecosystems and arguably has important social costs.

Electric vehicles (EVs) offer a substantial greenhouse gas emission improvement over the internal combustion engine. However, this improvement depends on green electricity production.

An EV powered by average European electricity production is likely to reduce a vehicle’s global warming potential by about 20% over its life cycle. This is not insignificant, but it is nowhere near a zero-emission option.

In large part, the life-cycle emissions of an electric vehicle are due to the energy-intensive nature of battery production and the associated mining processes. Indeed, there are questions around battery production and resource depletion, but perhaps more concerning is the impact that mining lithium, graphite and the other materials feeding the growing battery economy will have on the health of workers and communities involved in this global production network.

Processes associated with lithium batteries may produce adverse respiratory, pulmonary and neurological health impacts. Pollution from graphite mining in China has resulted in reports of “graphite rain”, which is significantly impacting local air and water quality.

The production of green technologies creates many interesting contradictions between environmental benefits at the point of use and human and environmental costs at the point of production. Baoding, a Chinese city southwest of Beijing, has been labelled the greenest city in the world, or even the world’s only carbon-positive city. This is because Baoding produces enormous quantities of wind turbines and solar cells for the United States and Europe, and has about 170 alternative energy companies based there.

But last year the air in the city of Baoding was declared to be the most polluted in China – a country where poor air quality reportedly contributes to 1.2 million deaths each year. These impacts need to be brought into any discussion or policy framework exploring the shift to a “greener” future.

Beware new problems from new solutions

We should be excited about the shift to greener cars and affordable home electricity storage units, but in the process of starting to solve the technological challenges of climate change we must ensure that we are not creating environmental problems, particularly for the largely unseen workers and communities further up the production stream.

Our response to climate change needs to be more than just a technological adjustment. We argue that the shift to a green economy requires more transformative social and political action via skills and training, worker participation, and the coming together of environmental organisations, unions, business and government.

Indeed, the world of work is a critical site for emission reductions: 80% of Europe’s carbon emissions are workplace-related.

As we adopt emerging greener technologies, we will have to look beyond our shiny new Powerwall, or the electric car parked on the front drive, to ensure that the environmental and social changes promised by green technologies are not just illusions.

References:http://phys.org/