Computer program fixes old code faster than expert engineers

“The order of operations in these optimized binaries is complicated, which means that they can be hard to disentangle,” says Mendis, a graduate student at CSAIL. “Because stencils do the same computation over and over again, we are able to accumulate enough data to recover the original algorithms.”

From there, the Helium system replaces the original bit-rotted components with the re-optimized ones. The net result: Helium can improve the performance of certain Photoshop filters by 75 percent, and the performance of less optimized programs such as Microsoft Windows’ IrfanView by 400 to 500 percent.
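
For readers unfamiliar with the term, a stencil computation applies the same small kernel at every position of a grid, which is exactly the repetitiveness Mendis describes. Here is a minimal sketch of the pattern in Python (an illustration only, not code recovered or produced by Helium):

```python
import numpy as np

def box_blur(image):
    """A 3x3 box-blur stencil: each interior output pixel is the
    average of the 3x3 neighbourhood around it in the input, and the
    identical computation is repeated at every grid position."""
    out = image.astype(float)
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = image[y-1:y+2, x-1:x+2].mean()
    return out
```

It is this regularity that lets Helium accumulate enough observations of an optimized binary to recover the original algorithm.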

“We’ve found that Helium can make updates in one day that would take human engineers upwards of three months,” says Amarasinghe. “A system like this can help companies make sure that the next generation of code is faster, and save them the trouble of putting 100 people on these sorts of problems.”

The research was presented in a paper accepted to the Association for Computing Machinery SIGPLAN conference on Programming Language Design and Implementation (PLDI 2015), which took place June 13-17 in Portland, Oregon.

The paper was written by Mendis, fellow graduate students Jeffrey Bosboom and Kevin Wu, research scientist Shoaib Kamil, postdoc Jonathan Ragan-Kelley PhD ’14, Amarasinghe, and researchers from Adobe and Google.

“We are in an era where computer architectures are changing at a dramatic rate, which makes it important to write code that can work on multiple platforms,” says Mary Hall, a professor at the University of Utah’s School of Computing. “Helium is an interesting approach that has the potential to facilitate higher-level descriptions of stencil computations that could then be more easily ported to future architectures.”

One unexpected byproduct of the work is that it lets researchers see the different tricks that programmers used in the old code, like archaeologists combing through computational fossils.

“We can see the ‘bit hacks’ that engineers use to optimize their algorithms,” says Amarasinghe, “as well as better understand the larger context of how programmers approach different coding challenges.”

References: http://phys.org/

Is big data still big news?

People talk about ‘data being the new oil’, a natural resource that companies need to exploit and refine. But is this really true or are we in the realm of hype? Mohamed Zaki explains that, while many companies are already benefiting from big data, it also presents some tough challenges.

Government agencies have announced major plans to accelerate big data research and, in 2013, according to a Gartner survey, 64% of companies said they were investing – or intending to invest – in big data technology. But Gartner also pointed out that while companies seem to be persuaded of the merits of big data, many are struggling to get value from it. The problem may be that they tend to focus on the technological aspects of data collection rather than thinking about how it can create value.

But big data is already creating value for some very large companies and some very small ones. Established companies in a number of sectors are using big data to improve their current business practices and services and, at the other end of the spectrum, start-ups are using it to create a whole raft of innovative products and business models.

At the Cambridge Service Alliance, in the Department’s Institute for Manufacturing, we work with a number of leading companies from a range of sectors and see first-hand both the opportunities and challenges associated with big data.

Take a company which makes, sells and leases its products and also provides maintenance and repair services for them. Its products contain sensors that collect vast amounts of data, allowing the company to monitor them remotely and diagnose any problems.

If this data is combined with existing operational data, advanced engineering analytics and forward-looking business intelligence, the company can offer a ‘condition-based monitoring service’, able to analyse and predict equipment faults. For the customer, unexpected downtime becomes a thing of the past, repair costs are reduced and the intervals between services increased. Intelligent analytics can even show them how to use the equipment at optimum efficiency. Original equipment manufacturers (OEMs) and dealers see this as a way of growing their parts and repairs business and increasing the sales of spare parts. It also strengthens relationships with existing customers and attracts new ones looking for a service maintenance contract.
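
A toy sketch of the underlying idea, with invented readings and thresholds rather than any real OEM’s analytics, might flag equipment for inspection when a sensor drifts sharply from its recent baseline:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag indices where a sensor reading deviates sharply from its
    rolling baseline: a toy stand-in for condition-based monitoring."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > z_threshold * sigma:
            alerts.append(i)
    return alerts
```

A production service would replace this rolling-baseline rule with engineering models and predictive analytics, but the flow from raw sensor stream to actionable alert is the same.
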
In a completely different sector, an education revolution is under way. Big data is underpinning a new way of learning known as ‘competency-based education’, which is currently being developed in the USA. A group of universities and colleges is using data to personalise the delivery of their courses so that each student progresses at a pace that suits them, whenever and wherever they like.

In the old model, thousands of students arrive on campus at the start of the academic year and, regardless of their individual levels of attainment, work their way through their course until the point of graduation. In the new data-driven model, universities will be able to monitor and measure a student’s performance, see how long it takes them to complete particular assignments and with what degree of success. Their curriculum is tailored to take account of their preferences, their achievements and any difficulties they may have. For the students, this means a much more flexible way of working which suits their needs and gives them the opportunity to graduate more quickly. For the institutions, it means delivering better quality education and hence achieving better student outcomes, and being able to deploy their staff more efficiently and more in line with their skills and interests.
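
As a toy illustration of how such pacing decisions might be driven by data (the thresholds and labels here are invented, not those of any real institution):

```python
from statistics import median

def tailor_next_unit(student_times, cohort_times):
    """Toy competency-based pacing: compare one student's completion
    time per topic against the cohort median and suggest a plan."""
    plan = {}
    for topic, hours in student_times.items():
        benchmark = median(cohort_times[topic])
        if hours <= 0.75 * benchmark:
            plan[topic] = "accelerate"  # ahead of peers: unlock the next unit early
        elif hours >= 1.5 * benchmark:
            plan[topic] = "reinforce"   # struggling: schedule extra practice
        else:
            plan[topic] = "continue"    # on pace: follow the standard sequence
    return plan
```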

To get value out of big data, however, organisations need to be able to capture, store, analyse, visualise and interpret it – none of which is straightforward.

One of the main barriers seems to be the lack of a ‘data culture’, where data is wholly embedded in organisational thinking and practices. But companies also face a very long list of challenges to do with data management and processing.

Condition-monitoring services, for example, rely on data transmission, often using satellite systems or digital telephone systems: sometimes there simply is no coverage. Most organisations have vast amounts of data stored in different systems in a variety of formats: bringing these together in one place is difficult.

The whole issue of data ownership is problematic in a service contract environment, where the customer considers it to be their data, generated by their usage, while the service provider may consider it to be theirs as it is processed by their system.

In complex data landscapes, security – managing access to the data and creating robust audit trails – can also be a major challenge, as, sometimes, is complying with data protection legislation. Many organisations also lack the necessary analytical techniques, such as data- and text-mining models, which include statistical modelling, forecasting, predictive modelling and agent-based models (or optimisation simulations).

Where established organisations may find it hard to move away from their entrenched ways of doing things, start-ups have the luxury of being able to invent new business models at will. At the Cambridge Service Alliance we have also been looking at these new ways of doing things in order to understand what business models that rely on data really look like. The results should help companies of all sizes – not just start-ups – understand how big data may be able to transform their businesses. We have identified six distinct types of business model:

Free data collector and aggregator: companies such as Gnip collect data from vast numbers of different, mostly free, sources, then filter it, enrich it and supply it to customers in the format they want.

Analytics-as-a-service: these are companies providing analytics, usually on data provided by their customers. Sendify, for example, provides businesses with real-time caller intelligence, so when a call comes in they see a whole lot of additional information relating to the caller, which helps them maximise the sales opportunity.

Data generation and analysis: these could be companies that generate their own data through crowdsourcing, or through smartphones or other sensors. They may also provide analytics. Examples include GoSquared, Mixpanel and Spinnakr, which collect data by using a tracking code on their customers’ websites, analyse the data and provide reports on it using a web interface.

Free data knowledge discovery: the model here is to take freely available data and analyse it. Gild, for example, helps companies recruit developers by automatically evaluating the code they publish and scoring it.

Data-aggregation-as-a-service: these companies aggregate data from multiple internal sources for their customers, then present it back to them through a range of user-friendly, often highly visual interfaces. In the education sector, Always Prepped helps teachers monitor their students’ performance by aggregating data from multiple education programmes and websites.

Multi-source data mash-up and analysis: these companies aggregate data provided by their customers with other external, mostly free data sources, and perform analytics on this data to enrich or benchmark customer data. Welovroi is a web-based digital marketing, monitoring and analysing tool that enables companies to track a large number of different metrics. It also integrates external data and allows benchmarking of the success of marketing campaigns.

So what does this tell us? That agile and innovative start-ups are creating entirely new business models based on big data and being hugely successful at it. These models can also inspire larger companies (SMEs as much as multinationals) to think about new ways in which they can capture value from their data.

But these more established companies face significant barriers to doing so and may have to deconstruct their current business models if they are to succeed. In the world of fleets and engines, this could mean moving to a condition-based monitoring service; in the education sector, it could mean delivering teaching in a completely new way. If companies can’t innovate when the opportunity arises, they may lose competitive advantage and be left struggling to ‘catch up’ with their competitors.

References: http://phys.org/

New Brain-Like Computer May Solve World’s Most Complex Math Problems

A new computer prototype called a “memcomputer” works by mimicking the human brain, and could one day perform notoriously complex tasks like breaking codes, scientists say.

These new, brain-inspired computing devices also could help neuroscientists better understand the workings of the human brain, researchers say.

In a conventional microchip, the processor, which executes computations, and the memory, which stores data, are separate components, so data must be relayed constantly between them. This relaying consumes time and energy, limiting the performance of standard computers.

In contrast, Massimiliano Di Ventra, a theoretical physicist at the University of California, San Diego, and his colleagues are building “memcomputers,” made up of “memprocessors” that both process and store data. This setup mimics the neurons that make up the human brain, each of which serves as both processor and memory. The building blocks of memcomputers were first theoretically predicted in the 1970s, but they were manufactured for the first time in 2008.

Now, Di Ventra and his colleagues have built a prototype memcomputer they say can efficiently solve one type of notoriously difficult computational problem. Moreover, they built their memcomputer from standard microelectronics.

“These machines can be built with available technology,” Di Ventra told Live Science.

The scientists investigated a class of problems known as NP-complete. With this type of problem, a person can quickly check whether any given solution works, but cannot quickly find the best solution.

One example of such a conundrum is the “traveling salesman problem,” in which someone is given a list of cities and asked to find the shortest possible route that starts at one city, visits every other city exactly once and returns to the starting city. Although someone can quickly check that a proposed route reaches every city exactly once and no city more than once, confirming that no shorter route exists means trying every possible ordering, a brute-force strategy that grows vastly more complex as the number of cities increases.
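
The asymmetry is easy to see in code: measuring one proposed tour is cheap, while finding the best tour by brute force means trying every ordering. A minimal sketch (the distance table is supplied by the caller):

```python
from itertools import permutations

def tour_length(dist, tour):
    """Quick to verify: sum the legs of one proposed tour, including
    the leg back to the starting city."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def shortest_tour(dist):
    """Slow to solve: brute force tries every ordering of the cities,
    and the number of orderings grows factorially."""
    cities = range(len(dist))
    return min(permutations(cities), key=lambda t: tour_length(dist, t))
```

For 10 cities the brute-force search tries 10! = 3,628,800 orderings; for 20 cities it would need more than 10^18.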

The memprocessors in a memcomputer can work collectively and simultaneously to find every possible solution to such conundrums.

The new memcomputer solves the NP-complete version of what is called the subset sum problem. In this problem, one is given a set of integers (whole numbers such as 1 and negative 1, but not fractions such as 1/2) and must determine whether there is a subset of those integers whose sum is zero.
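
In code, a brute-force attack on the same problem looks like this (a sketch of the problem itself, not of how the memcomputer solves it):

```python
from itertools import combinations

def has_zero_subset(numbers):
    """Check every non-empty subset for a zero sum; the search space
    doubles with each added integer, which is what makes the problem
    hard for conventional machines."""
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return True
    return False

print(has_zero_subset([1, -3, 2, 5]))  # True: the subset (1, -3, 2) sums to zero
```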

“If we work with a different paradigm of computation, those problems that are notoriously difficult to solve with current computers can be solved more efficiently with memcomputers,” Di Ventra said.

But solving this type of problem is just one advantage these computers have over traditional computers. “In addition, we would like to understand if what we learn from memcomputing could teach us something about the operation of the brain,” Di Ventra said.

Quantum computing

To solve NP-complete problems, scientists are also pursuing a different strategy involving quantum computers, which use components known as qubits to investigate every possible solution to a problem simultaneously. However, quantum computers have limitations — for instance, they usually operate at extremely low temperatures.

In contrast, memcomputers “can be built with standard technology and operate at room temperature,” Di Ventra said. In addition, memcomputers could tackle problems that scientists are exploring with quantum computers, such as code breaking.

However, the new memcomputer does have a major limitation: It is difficult to scale this proof-of-concept version up to a multitude of memprocessors, Di Ventra said. The way the system encodes data makes it vulnerable to random fluctuations that can introduce errors, and a large-scale version would require error-correcting codes that would make this system more complex and potentially too cumbersome to work quickly, he added.

Still, Di Ventra said it should be possible to build memcomputers that encode data in a different way. This would make them less susceptible to such problems, and hence scalable to a very large number of memprocessors.

References: http://www.livescience.com/

Mark Zuckerberg’s Vision of ‘Facebook Telepathy’: What Experts Say

Could Facebook one day be Brainbook? Mark Zuckerberg said in a recent Q&A that he predicts people will send thoughts and experiences to each other as easily as people text and email today. However, this fanciful idea of brain-to-brain communication is still a long way off, neuroscientists say.

On Tuesday (June 30), in response to a question about the future of Facebook during an online Q&A with users, CEO Zuckerberg replied: “One day, I believe we’ll be able to send full rich thoughts to each other directly using technology. You’ll just be able to think of something and your friends will immediately be able to experience it too if you’d like. This would be the ultimate communication technology.”

Zuckerberg continued, “We used to just share in text, and now we post mainly with photos. In the future video will be even more important than photos. After that, immersive experiences like VR [virtual reality] will become the norm. And after that, we’ll have the power to share our full sensory and emotional experience with people whenever we’d like.”

He is referring to an advanced form of brain-to-brain communication in which people could plug in, similar to wearing a VR headset, perhaps with some kind of physical connection to the brain itself. Brains transmit information between neurons via a combination of electrical and chemical signals, and it is possible even now to detect those signals via functional magnetic resonance imaging (fMRI), electroencephalography and implanted electrodes. So, theoretically, it is possible to encode those signals into bits, just as we do with digital phone signals, and send them to another person for decoding and “playback” in another brain.
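
That encoding step is, in principle, the same sampling-and-quantizing used in digital telephony. A minimal sketch of uniform quantization (illustrative only; no working codec for neural signals exists):

```python
def quantize(samples, bits=8, vmin=-1.0, vmax=1.0):
    """Map continuous signal samples onto 2**bits discrete levels,
    the basic step in turning an analog signal into a stream of bits."""
    levels = 2 ** bits - 1
    scale = levels / (vmax - vmin)
    return [round((min(max(s, vmin), vmax) - vmin) * scale) for s in samples]
```

The hard part, as the researchers below point out, is not turning signals into bits but knowing what the signals mean.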

Reading the mind

From a purely technical standpoint, it’s possible to “read” a person’s brain activity and get a sense of what that person is thinking, said Christopher James, professor of biomedical engineering at the University of Warwick in the U.K. Functional magnetic resonance imaging, electrodes attached to the scalp, and electrodes implanted into the brain can all reveal something about brain activity in real time. But right now the only way anyone knows of to get the precision required to pick up thoughts and feelings is with implanted electrodes. Imaging technologies and scalp-mounted electrodes can’t resolve areas small enough to know what’s going on at the cellular level, and scalp electrodes can only detect relatively “loud” signals that get through the skull.

But reading the signals is only half the battle. Decoding them is another matter. There’s no single brain area that governs thoughts of a given type; the way a person experiences thinking involves many parts of the brain operating simultaneously. Picking up all those signals that make up a thought in a real brain would require sticking electrodes into lots of different areas.

“We’d have to eavesdrop in many locations — some of them deep. If we did know minutely where to place electrodes there’s going to be a heck of a lot of them,” James told Live Science. “Then we need to make sense of those impulses,” he added, referring to the electrical signals picked up by the electrodes.

With the computing power available today, scientists could probably make sense of the complex pattern of electrical signals, if they knew exactly what those signals meant. However, that is far from clear: a person’s thoughts are more than the simple sum of voltages and currents, and which impulses come first, in what pattern and at what intensity is still a mystery.

James noted that deep brain stimulation, which is used to treat Parkinson’s and epilepsy, involves sending simple signals to specific parts of the brain. But even such a straightforward treatment doesn’t help every patient, and nobody knows why. And transmitting thoughts is a far more complex undertaking than treating Parkinson’s, he said.

Andrew Schwartz, a neurobiologist at the University of Pittsburgh, said the whole problem with any such concept of brain-to-brain communication is that nobody knows what a thought actually is. “How would you recognize a thought in the brain if you cannot define it?” Schwartz said. “If you replace ‘thought’ with intention, or ‘intention to act,’ then we may be able to progress as there is gathering evidence that we can recognize that in brain activity. However, this is very rudimentary at this point.”

Steps to Zuckerberg’s vision

Scientists have conducted several experiments with sending simple bits of data from one brain to another. For example, a team at the University of Washington demonstrated communication between two brains via the motor cortex: a person with electrodes on his head sent brain signals over the Internet to the motor cortex of another person in another room. The incoming signal prompted the person receiving the message to move his hand and control a video game.

Starlab in Barcelona showed that it’s possible to send a rudimentary word signal over the Internet. In that case, the sender would think of a word, and the receiver’s visual cortex would be stimulated by a magnetic field as the signal came in. The receiver would see flashes and could then interpret the word.

At Duke University, scientists have experimented with sending motor impulses between rats by linking two rats’ brains. One rat got a reward for hitting one of two levers when a light came on; the other had the levers but no light cue. The second rat was able to hit the correct lever more often than chance whenever the first rat was given the signal to press its lever.
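
“More often than chance” is a testable claim: with two levers, a guessing rat is right half the time, and the probability that a given hit rate arises by luck alone can be computed directly. A sketch with hypothetical trial counts (the Duke experiment’s actual numbers are not given here):

```python
from math import comb

def p_at_least(successes, trials, p=0.5):
    """One-sided binomial tail: the probability of scoring at least
    `successes` hits in `trials` attempts by pure chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

print(p_at_least(64, 100))  # about 0.003: 64 hits in 100 trials is unlikely to be luck
```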

Neuroscientists have even recreated movie clips by looking just at a person’s brainwaves. That mind-reading method, however, was limited to areas of the brain linked to basic visualization, not those responsible for higher thought.

James noted that in all these cases the information transferred has been very simple, essentially ones and zeros. Real thoughts are far richer: when a person thinks about opening a door, they know what a door is, what a handle is, and that the hand needs to reach the handle to open it, all before the person actually moves an arm and grabs the doorknob.

Challenges ahead

Even with those successes — or at least proofs of concept — progressing to a technology that could transfer a person’s thoughts and feelings to another person is still a long way off, said Andrea Stocco, a research scientist at the University of Washington who took part in the motor cortex experiment. Many brain scientists think similar patterns of neural activity should correspond to similar thoughts in different people. But beyond that, nobody can predict exactly what patterns might be linked to a given set of thoughts. So far, scientists can only discover these patterns by experimenting.

He added that while the technology is in theory available to record impulses in great detail from the brain, in practical terms placing that many wires into a brain to “see” that activity is quite risky. “We do not currently have the technology to record from enough cells in the brain to decode complex thoughts,” he said.

The other problem is an ethical one, James said. An experiment involving hundreds of electrodes inserted into a brain isn’t something any institution would be likely to approve, even with volunteers. He noted that such experiments with inserted electrodes tend to be done on people who already have some kind of problem, such as epilepsy or Parkinson’s disease; those patients are getting electrodes inserted into their brains anyway. (The University of Washington and Starlab experiments didn’t involve invasive surgery.) Even then, the data the electrodes yield is often crude.

“It’s a bit like having a football stadium with a crowd of people, and putting a microphone outside the door and trying to pinpoint one conversation. The best I can hope for is to get half of them to shout in unison.”

And unfortunately, the only way to know whether such a brain-to-brain interface is working is to work with a sentient creature — a person. In an experiment on a rat, the animal can’t tell us what it is feeling except in simple ways, such as hitting one lever or another, which isn’t anything close to what humans experience. And that matters, because there’s a very real question of whether such stimulation induces experiences (known as qualia) in the rats, said Giulio Ruffini, CEO of Starlab.

It’s also far from clear what the long-term effects on the brain would be — scarring from electrodes would be just one problem. “The brain doesn’t like getting things stuck into it,” James said.

Schwartz added that motor impulses are one thing — there have been some successes there with prosthetic limbs, for instance. But that is nothing like the “rich experiences” Zuckerberg describes. “There is no scientific data showing that it can be extracted from brain activity,” Schwartz said. “Despite many claims about activating particular brain ‘circuits,’ this is almost all wishful thinking and has not been done in any deterministic manner to produce a perceived experience. We simply haven’t done the science yet.”

Stocco, though, was somewhat optimistic about Zuckerberg’s vision. “His scenario is far, but not unreachable,” he said, as the kinds of advances necessary are at least imaginable. “We could get there, given adequate work and knowledge.”

References: http://www.livescience.com/