New algorithm paves the way for light-based computers


Optical interconnects made of silicon act as prisms to direct the infrared light that transfers data between computer chips

An inverse design algorithm developed by Stanford engineers enables the design of silicon interconnects capable of transmitting data between computer chips via light. The new process replaces the wire circuitry used to relay information electronically, which could lead to the development of highly efficient, light-based computers.

While the heavy lifting in computer processing takes place inside the chips, an analysis by Stanford professor of electrical engineering David Miller showed that up to 80 percent of a microprocessor’s power is eaten up by transmitting data as a stream of electrons over wire interconnects. In essence, shipping the data requires far more energy than processing it, and chewing through all that power is the reason laptops heat up.

Inspired by the optical technology of the internet, the researchers sought to move data between chips over fiber optic threads beaming photons of light. Besides using far less energy than traditional wire interconnects, chip-scale optic interconnects can carry more than 20 times more data.

Silicon is transparent to infrared light in much the same way that glass is to visible light, so silicon was an obvious choice for the optical interconnects. “Silicon works,” said Tom Abate, Stanford Engineering communications director. “The whole industry knows how to work with silicon.”

But optical interconnects have so far had to be designed one at a time, which makes switching to the technology impractical for computers, since such a system would require thousands of links. That’s where the inverse design algorithm comes in.

The software provides the engineers with details on how the silicon structures need to be shaped to perform tasks specific to their optical circuitry. The group designed a working optical circuit in the lab, made copies of it, and all of them worked flawlessly despite being fabricated on less-than-ideal equipment. The researchers cite this as proof of the commercial viability of their optical circuitry, since typical commercial fabrication plants use highly precise, state-of-the-art manufacturing equipment.

While the details of the algorithm are a tad complex, it basically works by designing silicon structures that bend infrared light in various useful ways, much like a prism bends visible light into a rainbow. When light is beamed at the silicon link, two wavelengths, or colors, of light split off at right angles in a T shape. Each silicon thread is minuscule: 20 could sit side by side within a human hair.

The optical interconnects can be constructed to direct specific frequencies of infrared light to specific locations, and it is the algorithm that determines how to shape each silicon prism so that it bends the infrared light by just the right amount. Once the proper shape for each specific task has been calculated, a tiny barcode-like pattern is etched onto a slice of silicon.
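For readers who want a concrete picture of what such an inverse-design loop can look like, here is a deliberately simplified sketch in Python: a pixelated pattern of etched and unetched silicon is repeatedly tweaked to improve a figure of merit returned by a simulator. The simulate() function below is only a stand-in surrogate (not a real electromagnetic solver), the hill-climbing update is far cruder than the optimization the Stanford group actually uses, and every name and number is illustrative.

```python
import random

WIDTH, HEIGHT = 20, 10           # pixels in the hypothetical design region
WAVELENGTHS = (1300, 1550)       # nm; two infrared "colors" to be routed apart


def simulate(pattern, wavelength):
    """Placeholder surrogate for a full electromagnetic simulation.

    Returns a fake 'transmission' score in [0, 1]. A real inverse-design
    flow would call an FDTD or FDFD solver here instead.
    """
    rng = random.Random(hash((tuple(map(tuple, pattern)), wavelength)))
    return rng.random()


def figure_of_merit(pattern):
    # Toy objective: total transmission of both wavelengths to their ports.
    return sum(simulate(pattern, wl) for wl in WAVELENGTHS)


def inverse_design(iterations=500):
    # Start from a random etched/unetched "barcode" pattern and hill-climb.
    pattern = [[random.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]
    best = figure_of_merit(pattern)
    for _ in range(iterations):
        r, c = random.randrange(HEIGHT), random.randrange(WIDTH)
        pattern[r][c] ^= 1                 # toggle one pixel of silicon
        score = figure_of_merit(pattern)
        if score >= best:
            best = score                   # keep the change
        else:
            pattern[r][c] ^= 1             # revert it
    return pattern, best


design, score = inverse_design()
print("surrogate figure of merit:", round(score, 3))
```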

Building an actual computer that uses the optical interconnects has yet to be realized, but the algorithm is a big first step. Other potential uses for the algorithm include designing compact microscopy systems, ultra-secure quantum communications, and high-bandwidth optical communications.

The team describes their work in the journal Nature Photonics.

References: http://www.gizmag.com/

 

3-D printing with metals achieved


A team of researchers from the University of Twente has found a way to 3D print structures of copper and gold by stacking microscopically small metal droplets. The droplets are made by melting a thin metal film with a pulsed laser. The work is published in Advanced Materials.

3D printing is a rapidly advancing field that is sometimes referred to as the ‘new cornerstone of the manufacturing industry’. However, at present, 3D printing is mostly limited to plastics. If metals could be used for 3D printing as well, this would open up a wide new range of possibilities. Metals conduct electricity and heat very well, and they are very robust. Therefore, 3D printing in metals would allow the manufacture of entirely new devices and components, such as small cooling elements or connections between stacked chips in smartphones.
However, metals melt at high temperatures, which makes controlled deposition of metal droplets highly challenging. Thermally robust nozzles are required to process liquid metals, but these are hardly available. For small structures in particular (from 100 nanometres to 10 micrometres), no good solution to this problem existed until now.
Researchers from FOM and the University of Twente have now made a major step towards high-resolution metal printing. They used laser light to melt copper and gold into micrometre-sized droplets and deposited these in a controlled manner. In this method, a pulsed laser is focused on a thin metal film that locally melts and deforms into a flying drop. The researchers then carefully position this drop onto a substrate. By repeating the process, a 3D structure is built up. For example, the researchers stacked thousands of drops to form micro-pillars with a height of 2 millimetres and a diameter of 5 micrometres. They also printed vertical electrodes in a cavity, as well as lines of copper. In principle, virtually any shape can be printed by smartly choosing the locations of drop impact.
High energy

In this study, the researchers used a surprisingly high laser energy in comparison to earlier work, to increase the impact velocity of the metal droplets. When these fast droplets impact onto the substrate, they deform into a disk shape and solidify in that form. The disk shape is essential for a sturdy 3D print: it allows the researchers to firmly stack the droplets on top of each other. In previous attempts, physicists used low laser energies. This allowed them to print smaller drops, but the drops stayed spherical, which meant that a stack of solidified droplets was less stable.
In their article, the researchers explain what impact speed is required to achieve the desired drop shape. They had previously predicted this speed for different laser energies and materials, which means the results can readily be translated to other metals as well.
One remaining problem is that the high laser energy also results in droplets landing on the substrate next to the desired location. At present this cannot be prevented. In future work the team will investigate this effect, to enable clean printing with metals, gels, pastes, or extremely thick fluids.

References: http://phys.org/

Research finds urban social networks are not determined geographically, but socially


Until now, studies of human interactions through mobile communication and social media have always been conducted at the country scale—using broad strokes to produce illustrations of society’s social networking tendencies. These explorations have concluded that social networks and contacts are primarily created in relation to geographic proximity.

Now, a new study led by MIT researchers bridges the gap between country and city, employing a novel method to explore the dynamic structure of social networks on the urban scale. The study, published this week in Scientific Reports, shows that urban networks are not determined geographically as traditionally thought, but rather socially.
“I was interested to investigate the relationship of social networks and human distance with the built environment and urban transportation,” says Department of Civil and Environmental Engineering (CEE) Associate Professor Marta González, co-author of the paper. “We found that geography plays only a minor role when forming social networking communities within cities. Unlike the country, cities have more dispersed communities.” Understanding how information spreads between social networking communities within a city will be crucial in the implementation of sustainable urban practices and policies in the future, she added.
The research grew from earlier explorations of expansive geolocated communication datasets. González said that while the structure of communities had previously been analyzed only at the country scale, the urban scale had yet to be explored, which fueled her team’s mission to understand how social networks relate to their physical space.
Through the observation of social networks within urban boundaries, the team presents a new perspective on an established study of searchability in self-organized networks. The study drew on phone data from more than 25 million users in three countries over the course of six months, and the information was used to systematically outline the structure of network communities within a city.
“We were able, for the first time, to observe how social networks work on an urban scale and their searchability implications,” said lead author Carlos Herrera-Yagüe, of the Technical University of Madrid. Their study, he continued, built the social network that emerged from billions of mobile phone interactions across three countries, at a much higher spatial resolution provided by mobile phone towers.
The paper was co-authored by González, principal investigator on the study; Herrera-Yagüe, lead author; former CEE postdoc Christian Schneider ’13; Rosa Maria Benito and Pedro Zufiria, professors at the Technical University of Madrid; and Zbigniew Smoreda from Orange Labs.
Mobile interactions
González and her team systematically demonstrated that cities change the structure of social networks. Using data from 7 billion mobile phone interactions from 155 cities in Spain, France, and Portugal, they explored the role of both social and geographic distances in social networks.
They concluded that urban networks are not determined by geographic proximity, but rather social distance. That is to say, groups of individuals with comparable interests, hobbies, and careers form communities within their city’s social networks.
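As a rough illustration of this kind of analysis (not the study’s actual pipeline), the sketch below builds a small social graph from hypothetical, anonymized call records, detects communities from the graph structure alone, and then measures how geographically spread out each community is. The records, coordinates, and the choice of the networkx library are all assumptions made for the example.

```python
import math

import networkx as nx
from networkx.algorithms import community

# (caller, callee, caller_xy, callee_xy): hypothetical, anonymized records;
# coordinates are in arbitrary units.
call_records = [
    ("a", "b", (0.0, 0.0), (0.1, 0.2)),
    ("a", "d", (0.0, 0.0), (5.2, 4.9)),
    ("b", "c", (0.1, 0.2), (5.0, 5.1)),
    ("c", "d", (5.0, 5.1), (5.2, 4.9)),
]

G = nx.Graph()
positions = {}
for caller, callee, p1, p2 in call_records:
    G.add_edge(caller, callee)
    positions[caller], positions[callee] = p1, p2

# Detect communities from the graph structure alone (no geography used).
communities = community.greedy_modularity_communities(G)

# Measure the geographic spread of each community: if communities were purely
# geographic, their members would cluster tightly in space.
for i, members in enumerate(communities):
    pts = [positions[m] for m in members]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    spread = max(math.dist(p, (cx, cy)) for p in pts)
    print(f"community {i}: members={sorted(members)}, spread={spread:.2f}")
```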
According to González, these findings are most likely related to causes such as urban growth and the economic function of cities. “Social distance is hard to define and measure with passive data,” she says. “But it’s important to know where the communities are within social networks, and how they expand.” It is the social structure of urban communities, not their geography, that makes social networks searchable within cities.
This understanding that content spreads through homophily rather than geography has implications for the adoption of innovations and the spread of epidemics within social networks, she adds.
“We are envisioning social media apps for social good—in this case, sustainable adoptions in the city,” she says.
In urban planning, the examination of social networks is influential in the development of urban policies that address problems such as segregation and political divisions. The results of this team’s study, Herrera says, prove that humans have built communities that connect people to all resources, while ignoring the vast majority of the social network.
“This is a remarkably difficult challenge that mankind has solved in a self-organized way,” Herrera remarks. The research has the potential to inspire the creation of a new generation of communications and transportation infrastructures that are decentralized and more focused on society’s use of them.
González adds that the current research project opens the way for more detailed studies of the subject, noting, “It would be interesting to see if the socioeconomic status of people, their age, and/or gender have a role in the results found.”
With this newly developed model, González and her team will continue to study social network groups and their mobility in order to better understand how to effectively spread information through networks.

References: http://phys.org/

40-year-old algorithm proven the best possible


Comparing the genomes of different species—or different members of the same species—is the basis of a great deal of modern biology. DNA sequences that are conserved across species are likely to be functionally important, while variations between members of the same species can indicate different susceptibilities to disease.

The basic algorithm for determining how much two sequences of symbols have in common—the “edit distance” between them—is now more than 40 years old. And for more than 40 years, computer science researchers have been trying to improve upon it, without much success.
At the ACM Symposium on Theory of Computing (STOC) next week, MIT researchers will report that, in all likelihood, that’s because the algorithm is as good as it gets. If a widely held assumption about computational complexity is correct, then the problem of measuring the difference between two genomes—or texts, or speech samples, or anything else that can be represented as a string of symbols—can’t be solved more efficiently.
In a sense, that’s disappointing, since a computer running the existing algorithm would take 1,000 years to exhaustively compare two human genomes. But it also means that computer scientists can stop agonizing about whether they can do better.
“This edit distance is something that I’ve been trying to get better algorithms for since I was a graduate student, in the mid-’90s,” says Piotr Indyk, a professor of computer science and engineering at MIT and a co-author of the STOC paper. “I certainly spent lots of late nights on that—without any progress whatsoever. So at least now there’s a feeling of closure. The problem can be put to sleep.”
Moreover, Indyk says, even though the paper hasn’t officially been presented yet, it’s already spawned two follow-up papers, which apply its approach to related problems. “There is a technical aspect of this paper, a certain gadget construction, that turns out to be very useful for other purposes as well,” Indyk says.
Squaring off
Edit distance is the minimum number of edits—deletions, insertions, and substitutions—required to turn one string into another. The standard algorithm for determining edit distance, known as the Wagner-Fischer algorithm, assigns each symbol of one string to a column in a giant grid and each symbol of the other string to a row. Then, starting in the upper left-hand corner and flooding diagonally across the grid, it fills in each square with the number of edits required to turn the string ending with the corresponding column into the string ending with the corresponding row.
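For concreteness, here is the Wagner-Fischer dynamic program sketched in Python; the variable names are our own, but the grid-filling logic follows the description above.

```python
# dist[i][j] holds the number of edits needed to turn the first i symbols
# of a into the first j symbols of b; the answer ends up in the last cell.

def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # (m+1) x (n+1) grid; row 0 and column 0 handle empty prefixes.
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # delete all i symbols of a
    for j in range(n + 1):
        dist[0][j] = j                      # insert all j symbols of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            substitution = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,         # deletion
                dist[i][j - 1] + 1,         # insertion
                dist[i - 1][j - 1] + substitution,
            )
    return dist[m][n]


print(edit_distance("GATTACA", "GACTATA"))  # -> 2
```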
Computer scientists measure algorithmic efficiency as computation time relative to the number of elements the algorithm manipulates. Since the Wagner-Fischer algorithm has to fill in every square of its grid, its running time is proportional to the product of the lengths of the two strings it’s considering. Double the lengths of the strings, and the running time quadruples. In computer parlance, the algorithm runs in quadratic time.
That may not sound terribly efficient, but quadratic time is much better than exponential time, which means that running time is proportional to 2^N, where N is the number of elements the algorithm manipulates. If on some machine a quadratic-time algorithm took, say, a hundredth of a second to process 100 elements, an exponential-time algorithm would take about 100 quintillion years.
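A quick back-of-the-envelope count makes the gap concrete (the machine speed, and hence any projection in years, is left out of this sketch because it depends on assumptions about the hardware):

```python
# Step counts for N = 100 elements under the two growth rates above.
N = 100
print(f"quadratic:   N^2 = {N**2:,} steps")    # 10,000 steps
print(f"exponential: 2^N = {2**N:.3e} steps")  # roughly 1.3e+30 steps
```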
Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one’s been able to prove it. In their STOC paper, Indyk and his student Artūrs Bačkurs demonstrate that if it’s possible to solve the edit-distance problem in less-than-quadratic time, then it’s possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.
Can’t get no satisfaction

The core NP-complete problem is known as the “satisfiability problem”: Given a host of logical constraints, is it possible to satisfy them all? For instance, say you’re throwing a dinner party, and you’re trying to decide whom to invite. You may face a number of constraints: Either Alice or Bob will have to stay home with the kids, so they can’t both come; if you invite Cindy and Dave, you’ll have to invite the rest of the book club, or they’ll know they were excluded; Ellen will bring either her husband, Fred, or her lover, George, but not both; and so on. Is there an invitation list that meets all those constraints?
In Indyk and Bačkurs’ proof, they suppose that, faced with a satisfiability problem, you split the variables into two groups of roughly equivalent size: Alice, Bob, and Cindy go into one, and Walt, Yvonne, and Zack go into the other. Then, for each group, you solve for all the pertinent constraints. This could be a massively complex calculation, but not nearly as complex as solving for the group as a whole. If, for instance, Alice has a restraining order out on Zack, it doesn’t matter, because they fall in separate subgroups: it’s a constraint that doesn’t have to be met.
At this point, the problem of reconciling the solutions for the two subgroups—factoring in constraints like Alice’s restraining order—becomes a version of the edit-distance problem. And if it were possible to solve the edit-distance problem in subquadratic time, it would be possible to solve the satisfiability problem in subexponential time.
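To make the splitting step concrete, here is a toy sketch in Python, emphatically not the actual proof construction: it enumerates every yes/no invitation choice for each half of the guest list and keeps only those that satisfy the constraints falling entirely within that half (the second half’s constraint is invented for illustration). Reconciling the two resulting lists against cross-half constraints, such as Alice’s restraining order on Zack, is the step the proof recasts as an edit-distance computation.

```python
from itertools import product

half_a = ["Alice", "Bob", "Cindy"]
half_b = ["Walt", "Yvonne", "Zack"]

# Constraint touching only the first half: Alice and Bob can't both come.
constraints_a = [lambda inv: not (inv["Alice"] and inv["Bob"])]
# A hypothetical constraint touching only the second half:
# Yvonne won't come without Walt.
constraints_b = [lambda inv: inv["Walt"] or not inv["Yvonne"]]


def consistent_assignments(names, constraints):
    """All yes/no invitation choices for `names` that meet the constraints."""
    solutions = []
    for bits in product([False, True], repeat=len(names)):
        invited = dict(zip(names, bits))
        if all(check(invited) for check in constraints):
            solutions.append(invited)
    return solutions


sols_a = consistent_assignments(half_a, constraints_a)
sols_b = consistent_assignments(half_b, constraints_b)
print(len(sols_a), "partial solutions for", half_a)  # 6 of the 8 possibilities
print(len(sols_b), "partial solutions for", half_b)  # 6 of the 8 possibilities
```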
“This is really nice work,” says Barna Saha, an assistant professor of computer science at the University of Massachusetts at Amherst. “There are lots of people who have been working on this problem, because it has a big practical impact. But they won’t keep trying to develop a subquadratic algorithm, because that seems very unlikely to happen, given the result of this paper.”
As for the conjecture that the MIT researchers’ proof depends on—that NP-complete problems can’t be solved in subexponential time—“It’s a very widely believed conjecture,” Saha says. “And there are many other results in this low-polynomial-time complexity domain that rely on this conjecture.”

References: http://phys.org/