Mark Zuckerberg’s Vision of ‘Facebook Telepathy’: What Experts Say


Could Facebook one day be Brainbook? Mark Zuckerberg said in a recent Q&A that he predicts people will send thoughts and experiences to each other as easily as people text and email today. However, this fanciful idea of brain-to-brain communication is still a long way off, neuroscientists say.

On Tuesday (June 30), in response to a question about the future of Facebook during an online Q&A with users, CEO Zuckerberg replied: “One day, I believe we’ll be able to send full rich thoughts to each other directly using technology. You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like. This would be the ultimate communication technology.”

Zuckerberg continued, “We used to just share in text, and now we post mainly with photos. In the future video will be even more important than photos. After that, immersive experiences like VR [virtual reality] will become the norm. And after that, we’ll have the power to share our full sensory and emotional experience with people whenever we’d like.”

He is referring to an advanced form of brain-to-brain communication in which people could plug in, similar to a VR headset, perhaps with some kind of actual physical connection to the brain itself. Brains transmit information between neurons via a combination of electrical and chemical signals, and it is possible even now to observe that activity via functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and implanted electrodes. So theoretically it is possible to encode those signals into bits, just as we do with digital phone signals, and send them to another person for decoding and “playback” in another brain.
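
To make the phone-signal analogy concrete, here is a minimal sketch (toy data only, not anything that has been built for brains) of the basic digitization step phone systems perform: sample a continuous voltage trace, quantize each sample into bits, and reconstruct an approximation at the other end.

```python
import numpy as np

# Toy "neural" voltage trace: a 10 Hz sine wave sampled at 1 kHz for 1 second.
fs = 1000                      # sampling rate, in Hz
t = np.arange(0, 1, 1 / fs)    # time axis, 1000 samples
signal = np.sin(2 * np.pi * 10 * t)

# Quantize each sample to 8 bits (256 levels spanning the range [-1, 1]).
levels = 256
codes = np.round((signal + 1) / 2 * (levels - 1)).astype(np.uint8)

# The bitstream that could, in principle, be transmitted like phone audio.
bits = np.unpackbits(codes)
print(f"{bits.size} bits for one second of one channel")

# "Playback": undo the quantization to recover an approximation of the trace.
reconstructed = codes / (levels - 1) * 2 - 1
print("maximum reconstruction error:", float(np.abs(reconstructed - signal).max()))
```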

Reading the mind

From a purely technical standpoint, it’s possible to “read” a person’s brain activity and get a sense of what that person is thinking, said Christopher James, professor of biomedical engineering at the University of Warwick in the U.K. Functional magnetic resonance imaging, electrodes attached to the scalp and electrodes implanted in the brain can all reveal something about brain activity in real time. But right now, the only way anyone knows of to get the precision required to pick up thoughts and feelings is with implanted electrodes. Imaging technologies and scalp-mounted electrodes can’t resolve areas small enough to reveal what’s going on at the cellular level, and scalp electrodes can detect only relatively “loud” signals that get through the skull.

But reading the signals is only half the battle. Decoding them is another matter. There’s no single brain area that governs thoughts of a given type; the way a person experiences thinking involves many parts of the brain operating simultaneously. Picking up all those signals that make up a thought in a real brain would require sticking electrodes into lots of different areas.

“We’d have to eavesdrop in many locations — some of them deep. If we did know minutely where to place electrodes there’s going to be a heck of a lot of them,” James told Live Science. “Then we need to make sense of those impulses,” he added, referring to the electrical signals picked up by the electrodes.

With the computing power available today, scientists could probably make sense of the complex pattern of electrical signals, if only they knew exactly what those signals meant. However, that’s far from clear. A person’s thoughts are more than the simple sum of voltages and currents. Which impulses come first, in what pattern and at what intensity is still a mystery.

James noted that deep brain stimulation, which is used to treat Parkinson’s disease and epilepsy, involves sending simple signals to specific parts of the brain. But even such a straightforward treatment doesn’t help every patient, and nobody knows why. And transmitting thoughts is a far more complex problem than treating Parkinson’s, he said.

Andrew Schwartz, a neurobiologist at the University of Pittsburgh, said the whole problem with any such concept of brain-to-brain communication is that nobody knows what a thought actually is. “How would you recognize a thought in the brain if you cannot define it?” Schwartz said. “If you replace ‘thought’ with intention, or ‘intention to act,’ then we may be able to progress as there is gathering evidence that we can recognize that in brain activity. However, this is very rudimentary at this point.”

Steps to Zuckerberg’s vision

Scientists have conducted several experiments with sending simple bits of data from one brain to another. For example, a team at the University of Washington demonstrated communication between two brains via the motor cortex: a person with electrodes on his head sent brain signals via the Internet to the motor cortex of another person in another room. The transmitted brain information signaled the person receiving the message to move his hand and control a video game.
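
As a rough illustration of that kind of pipeline (and not the Washington team's actual software), the sketch below pretends that a burst of power in one simulated EEG channel stands for an "imagined hand movement" and fires a one-byte trigger across the network when it appears; the address, threshold and signal are all made up.

```python
import socket
import numpy as np

# One pretend EEG channel sampled at 250 Hz.  When the mean power in a short
# window crosses a threshold (standing in for a detected "imagined hand
# movement"), send a one-byte trigger over UDP, the way the Washington demo
# relayed a detected brain signal across the Internet to the receiving rig.
RECEIVER = ("127.0.0.1", 9999)      # hypothetical address of the receiver
fs, window = 250, 50                # sampling rate and samples per window
threshold = 2.0                     # arbitrary power threshold for this demo

rng = np.random.default_rng(0)
eeg = rng.normal(size=5 * fs)       # five seconds of background noise...
eeg[600:650] += 3.0                 # ...with one injected "movement" burst

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for start in range(0, eeg.size - window + 1, window):
    power = float(np.mean(eeg[start:start + window] ** 2))
    if power > threshold:
        sock.sendto(b"\x01", RECEIVER)   # trigger byte: "move the hand now"
        print(f"trigger sent for the window starting at sample {start}")
```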

Starlab in Barcelona showed that it’s possible to send a rudimentary word signal over the Internet. In that case, the sender thought of a word, and the receiver’s visual cortex was stimulated by a magnetic field as the signal came in. The receiver saw flashes of light and could then interpret the word.
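
The sketch below is a simplified illustration of that idea, not Starlab's actual encoding: a word becomes a bit string, each bit becomes a flash or no-flash event (standing in for a phosphene), and the receiver reads the flashes back into letters.

```python
# Toy encoding: each letter a-z becomes 5 bits; each bit becomes a flash or
# no-flash event; the receiver decodes the flash sequence back into letters.
# The 5-bit code below is our own simplification, not the cipher Starlab used.

def encode(word):
    """Map each letter a-z to a 5-bit string and concatenate them."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode(bits):
    """Invert encode(): read the bit string back 5 bits at a time."""
    return "".join(chr(int(bits[i:i + 5], 2) + ord("a"))
                   for i in range(0, len(bits), 5))

bits = encode("hola")
flashes = ["flash" if b == "1" else "dark" for b in bits]   # what the receiver "sees"
print(bits)          # 00111011100101100000
print(flashes)
print(decode(bits))  # hola
```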

At Duke University, scientists have experimented with sending motor impulses between rats. They linked two rats’ brains: one rat got a reward for hitting one of two levers when a light came on; the other had the same levers but no light cue. The second rat was able to hit the correct lever more often than chance whenever the first rat was given the signal to press its lever.
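
To see what "more often than chance" means quantitatively in a two-lever setup like this, here is a short worked example; the trial counts below are invented placeholders, not the Duke team's data.

```python
from scipy.stats import binomtest

# Invented placeholder numbers, purely to show how "better than chance" is
# judged when the receiving rat chooses between two levers (chance = 50%).
n_trials = 100        # hypothetical number of cued trials
n_correct = 64        # hypothetical number of correct lever presses

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"hit rate: {n_correct / n_trials:.0%}, p-value vs. chance: {result.pvalue:.4f}")
```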

Neuroscientists have even re-created movie clips just by looking at a person’s brain activity. That mind-reading method, however, was limited to areas of the brain linked to basic visualization, not those areas responsible for higher thought.

James noted that in all these cases the information transferred has been very simple, essentially ones and zeros. A real thought is far richer: when a person thinks about opening a door, they know what a door is, what a handle is, and that the hand needs to reach the door handle to open it. All of that happens before that person ever gets to moving an arm and grabbing the doorknob.

Challenges ahead

Even with those successes — or at least proofs of concept — progressing to a technology that could transfer a person’s thoughts and feelings to another person is still a long way off, said Andrea Stocco, a research scientist at the University of Washington who took part in the motor cortex experiment. Many brain scientists think similar patterns of neural activity should correspond to similar thoughts in different people. But beyond that, nobody can predict exactly which patterns might be linked to a given set of thoughts. So far, scientists can only discover these patterns by experimenting.

He added that while the technology is in theory available to record impulses in great detail from the brain, in practical terms placing that many wires into a brain to “see” that activity is quite risky. “We do not currently have the technology to record from enough cells in the brain to decode complex thoughts,” he said.

The other problem is an ethical one, James said. An experiment involving hundreds of electrodes inserted into a brain isn’t something any institution would be likely to approve, even with volunteers. He noted that such experiments with inserted electrodes tend to be done on people who already have some kind of condition, such as epilepsy or Parkinson’s disease, and who are getting electrodes inserted into their brains anyway. (The University of Washington and Starlab experiments didn’t involve invasive surgery.) Even then, the data those electrodes yield is often crude.

“It’s a bit like having a football stadium with a crowd of people, and putting a microphone outside the door and trying to pinpoint one conversation,” James said. “The best I can hope for is to get half of them to shout in unison.”

And unfortunately, the only way to know whether such a brain-to-brain interface is working is to test it on a sentient creature that can report back — a person. In an experiment done on a rat, the animal can’t tell us what it is feeling except in simple ways, such as hitting one lever or another, which isn’t anything close to what humans experience. That matters because there is a very real question of whether such stimulation induces subjective experiences (known as qualia) in the rats at all, said Giulio Ruffini, CEO of Starlab.

It’s also far from clear what the long-term effects on the brain would be — scarring from electrodes would be just one problem. “The brain doesn’t like getting things stuck into it,” James said.

Schwartz added that motor impulses are one thing — there have been some successes there with prosthetic limbs, for instance. But that is nothing like the “rich experiences” Zuckerberg describes. “There is no scientific data showing that it can be extracted from brain activity,” he said. “Despite many claims about activating particular brain ‘circuits,’ this is almost all wishful thinking and has not been done in any deterministic manner to produce a perceived experience. We simply haven’t done the science yet.”

Stocco, though, was somewhat optimistic about Zuckerberg’s vision. “His scenario is far, but not unreachable,” he said, as the kinds of advances necessary are at least imaginable. “We could get there, given adequate work and knowledge.”

References: http://www.livescience.com/


Can computers be creative?


The EU-funded ‘What-if Machine’ (WHIM) project not only generates fictional storylines but also judges their potential usefulness and appeal. It represents a major advance in the field of computational creativity.

Science rarely looks at the whimsical, but that is changing as a result of the aptly named WHIM project. The ambitious project is building a software system able to invent and evaluate fictional ideas.
‘WHIM is an antidote to mainstream artificial intelligence which is obsessed with reality,’ says Simon Colton, project coordinator and professor in computational creativity at Goldsmiths College, University of London. ‘We’re among the first to apply artificial intelligence to fiction.’

World’s first fictional ideation machine

The project acronym stands for the What-If Machine. It is also the name of the world’s first fictional ‘ideation’ (the creative process of generating, developing and communicating new ideas) software, developed within the project. The software generates fictional mini-narratives, or storylines, using natural language processing techniques and a database of facts mined from the web (as a repository of ‘true’ facts). The software then inverts or twists the facts to create ‘what-ifs’. The result is often incongruous: ‘What if there was a woman who woke up in an alley as a cat, but could still ride a bicycle?’
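
As a caricature of that fact-twisting step (assuming a tiny hand-written fact list instead of facts mined from the web), a generator might look like the sketch below; the facts, twists and sentence template are all invented for illustration.

```python
import random

# A deliberately tiny caricature of the fact-twisting step: start from "true"
# facts (hand-written here rather than mined from the web), pick a twist that
# contradicts the everyday setting, and render the result as a what-if.
facts = [
    ("a woman", "woke up in her bed", "ride a bicycle"),
    ("a dog", "guarded a farmhouse", "bark at strangers"),
    ("an old man", "lived by the sea", "catch fish"),
]
twists = ["as a cat", "without a shadow", "two hundred years in the future"]

def what_if(fact, twist):
    subject, setting, ability = fact
    # Twist the mundane setting but keep the mundane ability: the mismatch is
    # what gives the storyline its incongruous, fictional flavour.
    return f"What if there was {subject} who {setting} {twist}, but could still {ability}?"

random.seed(4)
print(what_if(random.choice(facts), random.choice(twists)))
```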

Can computers judge creativity?

WHIM is more than just an idea-generating machine. The software also seeks to assess the quality of the ideas generated and their potential for use. Since the ideas are ultimately destined for human consumption, direct human input was gathered through crowdsourcing experiments. For example, WHIM researchers asked people whether they thought the ‘what-ifs’ were novel and had good narrative potential, and also asked them to leave general feedback. Through machine learning techniques devised by researchers at the Jozef Stefan Institute in Ljubljana, the system gradually gains a more refined understanding of people’s preferences.
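
A toy version of that preference-learning step might look like the sketch below; the features and crowd votes are invented for illustration and merely stand in for whatever signals the project actually extracts.

```python
from sklearn.linear_model import LogisticRegression

# Each what-if is reduced to a few hand-picked features (all invented here),
# and crowd votes on "good narrative potential" supply the labels.  A simple
# classifier then learns which feature patterns people tend to prefer.
#     [word count, number of twists, mentions an animal]
X = [[12, 1, 1],
     [25, 3, 0],
     [ 9, 1, 1],
     [30, 4, 0],
     [14, 2, 1],
     [28, 3, 0]]
y = [1, 0, 1, 0, 1, 0]             # 1 = crowd liked it, 0 = crowd did not

model = LogisticRegression().fit(X, y)
fresh_idea = [[13, 1, 1]]          # features of a newly generated what-if
print("predicted appeal:", round(model.predict_proba(fresh_idea)[0][1], 2))
```
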
‘One may argue that fiction is subjective, but there are patterns,’ says Professor Colton. ‘If 99% of people think a comedian is funny, then we could say that comedian is funny, at least in the perception of most people.’

Just the beginning

Generating fictional mini-narratives is just one aspect of the project. Researchers at the Universidad Complutense de Madrid are expanding the mini-narratives into full narratives that could be more suitable for the complete plot of a film, for example. Meanwhile, researchers at University College Dublin are trying to teach computers to produce metaphorical insights and ironies by inverting and contrasting stereotypes harvested from the web, while researchers at the University of Cambridge are looking into web mining for ideation purposes. All of this work should lead to better and more complete fictional ideas.

More than a whim

While the fictional ideas generated may be whimsical, WHIM is based on solid science. It is part of the emerging field of computational creativity, a fascinating interdisciplinary field at the intersection of artificial intelligence, cognitive psychology, philosophy and the arts.

WHIM may have applications in multiple domains. In one initiative there are plans to turn the narratives into video games. Another major initiative involves the computational design of a musical theatre production: the storyline, sets and music. The entire process is being filmed for a documentary.

WHIM could also be applied in areas beyond the arts. For example, it could be used by moderators at scientific conferences to ask probing ‘what-if’ questions to panellists in order to explore different hypotheses or scenarios.

References: http://phys.org/

A social-network illusion that makes things appear more popular than they are


A trio of researchers at the University of Southern California has uncovered a social-network illusion that might explain why some things become popular in cyberspace while others do not. Kristina Lerman, Xiaoran Yan and Xin-Zeng Wu have written a paper describing the illusion and how it works and have posted it on the preprint server arXiv.

Social networks are not new, of course; they have been around for thousands of years, if not longer. What is new is the venue and the scale. As social networks have grown online, scientists have begun studying them in earnest and have found some interesting things, one of which is the friendship paradox: on average, any given person’s “friends” have more friends than that person does. This illusion is created by the skewing of the average by one or more friends who have a lot of friends. And it is not restricted to friending sites; studies have shown that the average Twitter user’s friends tweet more than they do, and again, the illusion comes about due to skewing by just a few other users. In this new effort, the researchers have found a similar illusion, in which ideas, photos or other information can appear to be much more popular than they really are.

The illusion comes about, the team explains, because just a few nodes (people) have links to many others. The researchers provide an illustration with two views of a simple 14-node network; the only difference between them is which nodes have been colored red. In one view, the nodes with many links are colored; in the other, the nodes with just a few links are colored. The researchers then ask the viewer to consider the perspective of the uncolored nodes under the first scenario: any of them will see whatever message is being given by one of the more popular (red) nodes — and that is where the illusion occurs. In the real world, such networks could make something seem much more popular than it really is, because it is being disseminated by just a few well-connected nodes, whether it is a video of a cat doing something stupid or a minority opinion about a well-known topic.
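
The effect is easy to reproduce on a toy network. The sketch below uses a hand-built hub-and-spoke graph rather than the paper's 14-node example, and shows both the friendship paradox and the majority illusion: only a quarter of the nodes are "active," yet most nodes see a majority of active neighbours.

```python
# A hand-built hub-and-spoke network (not the paper's 14-node example):
# nodes 0 and 1 are well-connected hubs, nodes 2-7 each know only the hubs.
edges = [(0, 1)] + [(0, i) for i in range(2, 8)] + [(1, i) for i in range(2, 8)]
active = {0, 1}                       # the "red" behaviour lives only on the hubs

neighbours = {}
for a, b in edges:
    neighbours.setdefault(a, set()).add(b)
    neighbours.setdefault(b, set()).add(a)
nodes = sorted(neighbours)

# Friendship paradox: your friends have more friends than you do, on average.
mean_degree = sum(len(neighbours[n]) for n in nodes) / len(nodes)
mean_friend_degree = sum(
    sum(len(neighbours[m]) for m in neighbours[n]) / len(neighbours[n])
    for n in nodes
) / len(nodes)
print(f"average friends per node: {mean_degree:.2f}")
print(f"average friends of a node's friends: {mean_friend_degree:.2f}")

# Majority illusion: the behaviour is globally rare but locally looks common.
global_fraction = len(active) / len(nodes)
fooled = sum(
    1 for n in nodes
    if sum(m in active for m in neighbours[n]) > len(neighbours[n]) / 2
)
print(f"globally active: {global_fraction:.0%}")
print(f"nodes that see a majority of active neighbours: {fooled / len(nodes):.0%}")
```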

Abstract

Social behaviors are often contagious, spreading through a population as individuals imitate the decisions and choices of others. A variety of global phenomena, from innovation adoption to the emergence of social norms and political movements, arise as a result of people following a simple local rule, such as copy what others are doing. However, individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends’ behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual’s local observations, making a behavior appear far more common locally than it is globally. We trace the origins of this phenomenon, which we call “the majority illusion,” to the friendship paradox in social networks. As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends. Thus, the “majority illusion” may facilitate the spread of social contagions in networks and also explain why systematic biases in social perceptions, for example, of risky behavior, arise. Using synthetic and real-world networks, we explore how the “majority illusion” depends on network structure and develop a statistical model to calculate its magnitude in a network.

References: http://phys.org/

Algorithm detects nudity in images, offers demo page


An algorithm has been designed to tell if somebody in a color photo is naked. Isitnude.com launched earlier this month; its demo page invites you to try it out to test its power in nudity detection. You can choose from a selection of images at the bottom of the page, including pics of Vladimir Putin on horseback and Tiger Woods in golf mode. We tried it out, dragging and dropping a picture of Woods into the box, and the message promptly came back: “Not nude-G. You can probably post this.”

Other notes on the page include “We apologize if we didn’t get it right, we are improving every day” and “Please note that we cannot detect black and white images.”
(Algorithmia does not retain images.)

The company behind this effort, Algorithmia, was founded in 2013 to advance algorithm development and use. “As developers ourselves we believe that given the right tools the possibilities for innovation and discovery are limitless.”

They said they are building “what we believe to be the next era of programming: a collaborative, always live and community driven approach to making the machines that we interact with better.” The community driven API exposes “the collective knowledge of algorithm developers across the globe.”
“We’re building a community around state-of-the-art algorithm development. Users can create, share, and build on other algorithms and then instantly make them available as a web service.”

Lucy Black in I Programmer noted the practical advantage. “The idea behind Algorithmia is that where an algorithm already exists you don’t need to code your own, instead you can simply paste in its functionality using its cloud-based API.”

Writing about Algorithmia and the demo in Wired, Brian Barrett described the company as “an algorithmic clearing house, taking computational solutions from academia and beyond and offering them to the world at large for a fee.”

Why the interest in detecting nudity in photographs?

“A customer came to us trying to run a site that needs to be kid-friendly,” said Algorithmia CTO Kenny Daniel in Wired. The customer wanted the ability to screen images with some confidence that they would not be pornographic. Daniel said, “Anybody who’s trying to run a community but wants to filter out objectionable content, or keep it kid-friendly, could benefit from this same algorithm.”

In the company blog this month, Algorithmia also discussed the rationale for enabling artificial intelligence to detect nudity.

“If there’s one thing the internet is good for, it’s racy material,” said the blog. “This is a headache for a number of reasons, including a) it tends to show up where you’d rather it wouldn’t, like forums, social media, etc. and b) while humans generally know it when we see it, computers don’t so much. We here at Algorithmia decided to give this one a shot.”

To give it a shot, they turned to various sources. For one, the result is based on an algorithm by Hideo Hattori and on a paper authored by Rigan Ap-apid of De La Salle University. In the latter’s paper, “An Algorithm for Nudity Detection,” he presented an algorithm for detecting nudity in color images.

He said, “A skin color distribution model based on the RGB, Normalized RGB, and HSV color spaces is constructed using correlation and linear regression. The skin color model is used to identify and locate skin regions in an image. These regions are analyzed for clues indicating nudity or non-nudity such as their sizes and relative distances from each other. Based on these clues and the percentage of skin in the image, an image is classified nude or non-nude.”
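
The sketch below illustrates that general recipe with a widely used rule-of-thumb RGB skin test and an arbitrary 40% skin-percentage cut-off; it is not the regression-fitted colour model from Ap-apid's paper, just a minimal stand-in for the same idea.

```python
import numpy as np
from PIL import Image

def skin_fraction(path):
    """Fraction of pixels matching a crude rule-of-thumb skin-colour test."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    skin = (
        (r > 95) & (g > 40) & (b > 20) &
        (rgb.max(axis=-1) - rgb.min(axis=-1) > 15) &
        (abs(r - g) > 15) & (r > g) & (r > b)
    )
    return float(skin.mean())

def looks_nude(path, threshold=0.40):
    """Arbitrary cut-off: flag the image if over 40% of pixels look like skin."""
    return skin_fraction(path) > threshold

# Example (with any local JPEG/PNG; the filename is a placeholder):
# print(skin_fraction("beach_photo.jpg"), looks_nude("beach_photo.jpg"))
```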

Meanwhile, plenty of images with lots of skin are “perfectly innocent,” said the blog. “You might say that leaning too much on just color leaves the method, well, tone-deaf. To do better, you need to combine skin detection with other tricks.”

Brian Barrett in Wired said “To help weed out false positives, Algorithmia added a few layers of intelligence.”

The blog stated that “Our contribution to this problem is to detect other features in the image and using these to make the previous method more fine-grained.”
To come up with the algorithm, they turned to the book Human Computer Interaction Using Hand Gestures by Prashan Premaratne, as well as OpenCV’s nose detection and face detection algorithms.
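
As an illustration of how such extra features could be extracted, the sketch below runs one of OpenCV's bundled Haar cascades to count faces; the cascade choice and parameters are generic defaults, not Algorithmia's actual configuration.

```python
import cv2

# Load one of the Haar cascades that ships with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    """Return the number of frontal faces detected in the image at `path`."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# A downstream rule could then weigh the skin percentage against where faces
# sit in the frame, e.g.:
# print(count_faces("beach_photo.jpg"))   # placeholder filename
```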

As I Programmer noted, the algorithm is still a work in progress, and Algorithmia is interested in further improvements. “There are countless techniques that can be used in place of or combined with the ones we’ve used to make an even better solution. For instance, you could train a convolutional neural network on problematic images,” the company said.
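
For readers curious what that suggestion might look like, here is a minimal, untrained Keras sketch of a small binary image classifier; the architecture and input size are arbitrary placeholders, and real labelled training data would of course be required.

```python
import tensorflow as tf

# A small convolutional network that maps a 128x128 RGB image to a single
# probability (e.g. "problematic" vs. "fine").  Everything here is a
# placeholder: no weights are trained and no dataset is implied.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```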

References: http://phys.org/