New printing process makes three-dimensional objects glow

Conventional electroluminescent (EL) foils can be bent only to a limited degree and are easily applied only to flat surfaces. A new process developed by the Karlsruhe Institute of Technology (KIT) in cooperation with the company Franz Binder GmbH & Co. now allows electroluminescent layers to be printed directly onto three-dimensional components. Such EL components could enhance safety in buildings during power failures; other potential applications include displays, watches, and the creative design of rooms.

“By means of the innovative production process we developed together with our industry partner, any type of three-dimensional object can be provided with electroluminescent coatings at low cost,” says Dr.-Ing. Rainer Kling of KIT’s Light Technology Institute. In conventional EL carrier foils, the luminescent material sits between two plastic layers. With the new printing process, the electroluminescent layers are printed directly onto the object without any intermediate carrier. In this way, convex and concave surfaces of various materials, such as paper or plastic, can be made to glow.
The different components of the coating, including the electroluminescent and the electrically conductive materials, are applied by a novel pad printing process. The pad printing machine is equipped with an elastic rubber pad that deforms easily and is therefore well suited to coating curved surfaces.
“In this way, it is possible to provide surfaces and even spheres with homogeneous coatings at low cost,” says engineer Elodie Chardin, who works on the research project. “Achieving a homogeneous coating about one tenth of a millimeter thick was one of the challenges of this project,” says Elisabeth Warsitz, the industry partner’s executive engineer. The process requires only a few production steps and therefore consumes few resources. By using different luminescent substances, several colors can be applied to the same surface.

The research and development project of KIT in cooperation with the Binder Connector Group, headquartered in the German town of Neckarsulm, took about two years and was funded with EUR 125,000 by the Deutsche Bundesstiftung Umwelt (German Foundation for the Environment).

References: http://phys.org/

Algorithm reduces size of data sets while preserving their mathematical properties

As anyone who’s ever used a spreadsheet can attest, it’s often convenient to organize data into tables. But in the age of big data, those tables can be enormous, with millions or even hundreds of millions of rows.

One way to make big-data analysis computationally practical is to reduce the size of data tables—or matrices, to use the mathematical term—by leaving out a bunch of rows. The trick is that the remaining rows have to be in some sense representative of the ones that were omitted, in order for computations performed on them to yield approximately the right results.
At the ACM Symposium on Theory of Computing in June, MIT researchers will present a new algorithm that finds the smallest possible approximation of the original matrix that guarantees reliable computations. For a class of problems important in engineering and machine learning, this is a significant improvement over previous techniques. And for all classes of problems, the algorithm finds the approximation as quickly as possible.
In order to determine how well a given row of the condensed matrix represents a row of the original matrix, the algorithm needs to measure the “distance” between them. But there are different ways to define “distance.”
One common way is so-called “Euclidean distance.” In Euclidean distance, the differences between the entries at corresponding positions in the two rows are squared and added together, and the distance between rows is the square root of the resulting sum. The intuition is that of the Pythagorean theorem: The square root of the sum of the squares of the lengths of a right triangle’s legs gives the length of the hypotenuse.
Another measure of distance is less common but particularly useful in solving machine-learning and other optimization problems. It’s called “Manhattan distance,” and it’s simply the sum of the absolute differences between the corresponding entries in the two rows.
Inside the norm
In fact, both Manhattan distance and Euclidean distance are instances of what statisticians call “norms.” The Manhattan distance, or 1-norm, is the first root of the sum of differences raised to the first power, and the Euclidean distance, or 2-norm, is the square root of the sum of differences raised to the second power. The 3-norm is the cube root of the sum of differences raised to the third power, and so on to infinity.
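To make the distinction concrete, here is a minimal Python sketch (not from the paper; the example rows are invented) that computes the distance between two rows under the 1-norm, the 2-norm, or any other p-norm:

```python
import numpy as np

def p_norm_distance(x, y, p):
    """Distance between two rows under the p-norm: the p-th root of the
    sum of the absolute differences raised to the p-th power."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

row_a = np.array([1.0, 4.0, 2.0])
row_b = np.array([3.0, 1.0, 2.0])

print(p_norm_distance(row_a, row_b, 1))  # Manhattan distance: 2 + 3 + 0 = 5
print(p_norm_distance(row_a, row_b, 2))  # Euclidean distance: sqrt(4 + 9 + 0) ≈ 3.61
```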
In their paper, the MIT researchers—Richard Peng, a postdoc in applied mathematics, and Michael Cohen, a graduate student in electrical engineering and computer science—demonstrate that their algorithm is optimal for condensing matrices under any norm. But according to Peng, “The one we really cared about was the 1-norm.”
In matrix condensation—under any norm—the first step is to assign each row of the original matrix a “weight.” A row’s weight represents the number of other rows that it’s similar to, and it determines the likelihood that the row will be included in the condensed matrix. If it is, its values will be multiplied according to its weight. So, for instance, if 10 rows are good stand-ins for each other, but not for any other rows of the matrix, each will have a 10 percent chance of getting into the condensed matrix. If one of them does, its entries will all be multiplied by 10, so that it will reflect the contribution of the other nine rows it’s standing in for.
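A rough sketch of that sampling-and-rescaling step, assuming the inclusion probabilities have already been computed from the weights (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def condense_rows(A, probs, rng=None):
    """Keep each row of A independently with its given probability and
    rescale kept rows by 1/probability, so that each kept row stands in
    for the rows it represents."""
    rng = rng or np.random.default_rng()
    kept = rng.random(A.shape[0]) < probs
    return A[kept] / probs[kept][:, None]
```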
Although Manhattan distance is in some sense simpler than Euclidean distance, it makes calculating rows’ weights more difficult. Previously, the best algorithm for condensing matrices under the 1-norm would yield a matrix whose number of rows was proportional to the number of columns of the original matrix raised to the power of 2.5. The best algorithm for condensing matrices under the 2-norm, however, would yield a matrix whose number of rows was proportional to the number of columns of the original matrix times its own logarithm.
That means that if the matrix had 100 columns, under the 1-norm, the best possible condensation, before Peng and Cohen’s work, was a matrix with hundreds of thousands of rows. Under the 2-norm, it was a matrix with a couple of hundred rows. That discrepancy grows as the number of columns increases.
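For a sense of scale, the short sketch below (constants omitted) reproduces that comparison for a matrix with 100 columns:

```python
import math

d = 100  # number of columns in the original matrix
print(f"1-norm (previous best): ~{d ** 2.5:,.0f} rows")  # ~100,000 rows
print(f"2-norm: ~{d * math.log(d):,.0f} rows")           # a few hundred rows
```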
Taming recursion
Peng and Cohen’s algorithm condenses matrices under the 1-norm as well as it does under the 2-norm; under the 2-norm, it condenses matrices as well as its predecessors do. That’s because, for the 2-norm, it simply uses the best existing algorithm. For the 1-norm, it uses the same algorithm, but it uses it five or six times.
The paper’s real contribution is to mathematically prove that the 2-norm algorithm will yield reliable results under the 1-norm. As Peng explains, an equation for calculating 1-norm weights has been known for some time. But “the funny thing with that definition is that it’s recursive,” he says. “So the correct set of weights appears on both the left-hand side and the right-hand side.” That is, the weight for a given matrix row—call it w—is set equal to a mathematical expression that itself includes w.
“This definition was known to exist, but people in stats didn’t know what to do with it,” Peng says. “They look at it and think, ‘How do I ever compute anything with this?’”
What Peng and Cohen prove is that if you start by setting the w on the right side of the equation equal to 1, then evaluate the expression and plug the answer back into the right-hand w, then do the same thing again, and again, you’ll quickly converge on a good approximation of the correct value of w.
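A minimal Python sketch of that fixed-point idea, using a leverage-score-style expression for the 1-norm weights from the row-sampling literature (the exact formula and iteration count here are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def one_norm_weights(A, num_iters=10):
    """Fixed-point iteration for 1-norm row weights: start with w = 1,
    plug the current weights into the defining expression, and repeat."""
    n, d = A.shape
    w = np.ones(n)
    for _ in range(num_iters):
        # Weighted Gram matrix A^T diag(1/w) A and its inverse
        M_inv = np.linalg.inv(A.T @ (A / w[:, None]))
        # New weight for row a_i: sqrt(a_i^T M^{-1} a_i)
        w = np.sqrt(np.einsum('ij,jk,ik->i', A, M_inv, A))
        w = np.maximum(w, 1e-12)  # guard against all-zero rows
    return w
```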
“It’s highly elegant mathematics, and it gives a significant advance over previous results,” says Richard Karp, a professor of computer science at the University of California at Berkeley and a winner of the National Medal of Science and of the Turing Award, the highest honor in computer science. “It boils the original problem down to a very simple-to-understand one. I admire the mathematical development that went into it.”

References: http://phys.org/

New technology could fundamentally change future wireless communications

Radio systems, such as mobile phones and wireless internet connections, have become an integral part of modern life. However, today’s devices use twice as much of the radio spectrum as is necessary. New technology is being developed that could fundamentally change radio design and could increase data rates and network capacity, reduce power consumption, create cheaper devices and enable global roaming.

A pioneering team of researchers from the University of Bristol’s Communication Systems and Networks research group has developed a new technique that can estimate and cancel out the interference from a device’s own transmission, allowing a radio to transmit and receive on the same channel at the same time. Two-way communication therefore requires only one channel, using half as much spectrum as current technology.
Leo Laughlin, a PhD student from the University’s EPSRC Centre for Doctoral Training (CDT) in Communications, together with MSc student Chunqing Zhang, supervisors Professor Mark Beach and Dr Kevin Morris, and industrial mentor, Dr John Haine at u-blox, have designed and built a novel full-duplex transceiver architecture, which combines electrical balance isolation and active radio frequency cancellation. Their prototype can suppress interference by a factor of over 100 million and uses low-cost, small form factor technologies, making it well suited to use in mobile devices such as smartphones and tablets.
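The Bristol prototype works in analogue hardware, but the underlying idea of estimating and subtracting one’s own transmission can be illustrated with a simple digital simulation (a sketch under assumed signal and leakage parameters, not the team’s design):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
tx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # known transmitted signal
leak_gain = 0.8 * np.exp(1j * 0.3)          # assumed leakage path, unknown to the receiver
wanted = 1e-4 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))    # weak signal we want to hear
rx = leak_gain * tx + wanted                 # what the receiver actually sees

g_hat = np.vdot(tx, rx) / np.vdot(tx, tx)    # least-squares estimate of the leakage gain
clean = rx - g_hat * tx                      # cancel the estimated self-interference

before = np.mean(np.abs(leak_gain * tx) ** 2)
after = np.mean(np.abs((leak_gain - g_hat) * tx) ** 2)
print(f"self-interference suppressed by about {10 * np.log10(before / after):.0f} dB")
```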
This important change in radio design could offer a range of benefits. In Wi-Fi systems this would double the capacity of a Wi-Fi access point, allowing more users and higher data rates. For cellular systems, full-duplex operation would also deliver increased capacity and data rates, or alternatively the network operators could provide the same total network capacity with fewer base station sites, giving obvious benefits in the cost and environmental impact of running the network.
Leo Laughlin, who is in the first cohort of students in the CDT in Communications, said: “Until now there has been a fundamental unsolved problem with radio communication. Since the radio spectrum is a limited resource, and with network operators paying billions of pounds to access the spectrum, solving this problem would bring us one step closer to the faster, cheaper and greener devices of our connected future.”
As well as being part of the evolution to 5G mobile, this research is also highly relevant to the design of the radio circuitry in current 3G and 4G cellular devices. In today’s mobile devices, a separate filtering component is required for each frequency band, and because of this, today’s mobile phones do not support all of the frequency channels in use across the world. Different devices are manufactured for different regions, and there are currently no 4G phones capable of unrestricted global roaming. Replacing these filters with the research team’s duplexer circuit would create smaller and cheaper devices, and would allow manufacturers to produce a single model for the entire world. This would enable global roaming on 4G and would further reduce cost through greater economies of scale.
Mark Beach, Professor of Radio Systems Engineering, commented: “In addition to EPSRC’s investment in Doctoral Training Centres at Bristol, we have also been awarded equipment funding. Leo and Chunqing have taken full advantage of the new laboratory facilities in the validation and optimisation of our full-duplex architecture.”
The team have published papers about their research in the IEEE Journal on Selected Areas in Communications special issue on full duplex radio, and in this month’s issue of the IEEE Communications Magazine, and patents have been filed to protect the novel duplexer design.

References: http://phys.org/

Students build portable hybrid power system to help in natural disasters

When natural disasters like the earthquakes in Nepal or the volcano eruption in Chile occur, one of the most pressing issues hindering rescue efforts is the loss of electricity.
A senior design project completed by three students in the College of Engineering may help bring power back more quickly to those dealing with the aftermath.
Project leader Andre Lima Siuffo, Marisol Contreras and Kevin Gregorio designed and built a portable and low-cost renewable system that utilizes both solar electric and wind electric technologies to generate power.
With support from professors Andres Tremante and Sabri Tosunoglu, the system Siuffo and his team created uses a wind turbine and solar panel that work in tandem to charge a battery bank that can be used to provide electricity.
“One of the main purposes of the project was to prove that it is technically very easy to build a hybrid system and it’s not expensive at all,” said Siuffo, a mechanical engineering major graduating this summer.
The hybrid system is capable of producing 1.5 kilowatt-hours (kWh) per day. While not enough to power an average American household, which uses around 30 kWh per day, these portable systems can be particularly useful in remote areas and emergency situations.
The ability to power laptops, light bulbs, hot plates and other devices with consistent and continuous energy is critical in areas ravaged by disaster. Multiple units used together can potentially help relief teams in natural disaster situations provide electricity to perform a wide range of essential services.
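As a rough back-of-the-envelope check (the device wattages below are assumptions, not figures from the project), 1.5 kWh per day translates into useful run times for exactly that kind of equipment:

```python
daily_energy_wh = 1.5 * 1000  # 1.5 kWh produced per day

assumed_loads_w = {"laptop": 50, "LED light bulb": 10, "hot plate": 1000}
for device, watts in assumed_loads_w.items():
    print(f"{device}: about {daily_energy_wh / watts:.1f} hours of use per day")
```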
“The capacity to generate this much power can be lifesaving in many situations,” Siuffo said. “We wanted to find a balance between producing power and maintaining portability.”
Each unit weighs about 250 pounds, so it can be transported on a pickup truck and assembled or disassembled in a matter of minutes.
While similar systems normally run between $5,000 and $10,000 per unit, Siuffo and his team were able to create a fully functional prototype for approximately $1,700.
Tremante and the senior design team that built the power system hope that with support from the private sector, these prototypes can one day be produced commercially.
“We want to bridge the gap between the university and the industry through sustainable development projects,” Tremante said.

References: http://phys.org/