Why the internet is giving us worse games, books and music

UNPUTDOWNABLE! This go-to accolade for book reviewers takes on a new meaning this week, when some authors will be paid by the number of pages read. But is it a good thing?

On 1 July, Amazon introduced a payment plan in which authors enrolled in its Kindle Direct Publishing Select scheme are paid royalties according to the number of pages read when their ebooks are borrowed from ebook “library” Kindle Unlimited. But critics claim this pay-per-page system could change the type of books written, favouring easy-to-read page-turners over difficult, literary works.
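
As a rough illustration of how such a pooled, pay-per-page scheme works – the figures and function names below are hypothetical, not Amazon’s published terms – the month’s fund is divided by the total pages read across all enrolled books, and each author receives that per-page rate multiplied by the pages read in their titles:

```python
# Hypothetical sketch of a pay-per-page royalty pool. The fund size,
# page counts and resulting rate below are invented for illustration.

def per_page_rate(monthly_fund, total_pages_read):
    """Pool payout per page: the whole fund divided by every
    page read across all enrolled books that month."""
    return monthly_fund / total_pages_read

def author_royalty(pages_read, rate):
    """An author's share: pages read in their books times the rate."""
    return pages_read * rate

rate = per_page_rate(monthly_fund=11_000_000, total_pages_read=1_900_000_000)
print(f"per-page rate: ${rate:.4f}")
print(f"300-page novel read to the end: ${author_royalty(300, rate):.2f}")
print(f"same novel abandoned at page 30: ${author_royalty(30, rate):.2f}")
```

The incentive critics worry about falls straight out of the arithmetic: a book abandoned early earns a tenth of what it earns when read to the end.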

It is the latest example of firms and artists struggling to find a balance when it comes to digital distribution. Last week, Taylor Swift persuaded Apple Music to pay for songs played during its free three-month trial period, which it hadn’t intended to do. And Steam, the largest online game store, has revealed that the vast majority of games on its site sell fewer than 32,000 copies – many at a reduced price – which isn’t enough for games studios to survive.

This wasn’t how it was meant to be. “Digital distribution began with this real utopian vision of change,” says James Allen-Robertson at the University of Essex in Colchester, UK. The internet was hailed as a means for artists to find their audience and audiences their artist. But that hasn’t happened. “All these platforms have opened up access to creators, but now the markets are so saturated it’s hard to be found,” he says.

This is having negative effects on the creators. Belgium-based games studio Tale of Tales announced that it would no longer be making games for commercial release after its latest title, Sunset, sold just 4000 copies, despite getting good reviews.

Part of the problem is that the internet is dominated by just a few platforms, like Amazon, Spotify and Steam. Also, instead of searching for new music or books ourselves, we tend to follow what is recommended automatically. But algorithms tend to recommend things that are alike. “It forces a homogenisation of content,” says Auriea Harvey, one of the pair behind Tale of Tales.

In response, a few artists are gaming the system. Music streaming services like Spotify pay a small fee for each play of a song. Some people now release hundreds of albums of mediocre music on Spotify, betting that sheer volume will earn at least a handful of plays – and thus fees. Last week, one group launched Eternify, an app that plays the first 30 seconds of a Spotify song – the minimum needed to qualify for payment – on repeat. Earlier, the band Vulfpeck released Sleepify, an album of silence, and encouraged fans to leave it playing while they slept to generate royalties. Spotify has since removed both.
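
The arithmetic behind these stunts is straightforward. A minimal sketch – the per-play rate here is an assumed figure, as real streaming payouts vary by service and listener:

```python
# Illustrative arithmetic behind Sleepify/Eternify-style schemes.
# PER_PLAY_RATE is an assumption for the example, not Spotify's rate.

PER_PLAY_RATE = 0.005     # assumed dollars per qualifying play
QUALIFYING_SECONDS = 30   # minimum listening time for a play to count

def plays_per_hour(track_seconds):
    """A looping track registers one play per listen, provided each
    listen crosses the qualifying threshold."""
    return 3600 // max(track_seconds, QUALIFYING_SECONDS)

# A 31-second silent track left looping for 8 hours overnight:
plays = plays_per_hour(31) * 8
print(f"{plays} plays, roughly ${plays * PER_PLAY_RATE:.2f} in royalties")
```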

Similarly, Amazon’s new payment model is a reaction to authors gaming its previous system – in which a fee was paid once 10 per cent of a book had been read – by dividing novels up into serial instalments: a reader has to get through 30 pages of a 300-page novel to trigger a payment, but only 3 pages of each 30-page instalment.

The tussle over digital distribution will continue to change the works produced. It may not pay to create songs with a slow build-up, for example, as listeners on Spotify may skip ahead before the 30-second payment mark is reached. “The past has shown that the mode of delivery does have a significant impact on the way a thing is expressed,” says Allen-Robertson.

References: http://www.newscientist.com/

Computer system identifies cows by their muzzle prints

The skin patterns on a cow’s muzzle are as unique as our fingerprints

There are already several methods of identifying cattle – branding, ear tags, tattooing and ear notching all come to mind. Now, however, Egyptian scientists are working on a new biometric system that’s less invasive and more difficult to thwart: electronic muzzle-printing.

Much as the whorls in our fingerprints are unique to ourselves, the ridges and valleys in the skin of each cow’s muzzle (the front of its nose and mouth) are likewise unique to that animal. It’s something that people have known since the 1920s, when ink prints of muzzles were used for keeping track of cattle. Since then, various studies have looked at replacing the ink and paper with high-tech tools.

The latest such technique was developed by Hamdi Mahmoud and Hagar Mohamed Reda El Hadad of Egypt’s Beni-Suef University. In their setup, close-up images of cow muzzles are processed using a machine learning system known as a multiclass support vector machine (MSVM). Among other things, support vector machines are known for their ability to classify items based on pattern recognition.
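
The researchers’ exact feature pipeline isn’t described here, so the following is a minimal sketch of the general approach, assuming each muzzle image is reduced to a fixed-length feature vector before classification. All names and parameters are illustrative, and the crude pixel features stand in for whatever texture descriptors the real system uses (scikit-learn’s SVC handles the multiclass case out of the box):

```python
# Minimal sketch of muzzle-print identification with a multiclass SVM.
# The pixel-downsampling "features" stand in for the texture descriptors
# a real system would use; all parameters here are illustrative.

import numpy as np
from PIL import Image
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def muzzle_features(image_array, size=(32, 32)):
    """Crude feature vector: grayscale image, resized and flattened."""
    img = Image.fromarray(image_array).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def train_identifier(images, cow_ids):
    """Fit a multiclass SVM mapping muzzle images to cow identities."""
    X = np.stack([muzzle_features(im) for im in images])
    X_train, X_test, y_train, y_test = train_test_split(
        X, cow_ids, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=10.0)  # one-vs-one multiclass by default
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    return clf
```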

In real-world tests, the MSVM has so far been able to identify cows based on their muzzle prints with 94 percent accuracy. The scientists are now working on improving that figure, along with speeding up the processing time.

References: http://www.gizmag.com/

3-D ultrasonic fingerprint scanning could strengthen smartphone security

An ultrasonic fingerprint sensor measures a three-dimensional, volumetric image of the finger’s surface and the tissues beneath, making it difficult to spoof

Researchers at the University of California’s Davis and Berkeley campuses have managed to miniaturize medical ultrasound technology to create a fingerprint sensor that scans your finger in 3D. This low-power technology, which could improve on the robustness of current-generation capacitive scanners, could soon find its way into our smartphones and tablets.

When Apple announced it was going to include a fingerprint scanner in 2013’s iPhone 5s, questions immediately arose as to just how safe and accurate the scanning would be. As it turned out, the scanning was accurate enough (unless you’ve just gone swimming), and the feature, which was generally well received, soon found its way to many other smartphones.

But even in the best of cases, the capacitive sensors used in the current generation of portable devices are still subject to serious security weaknesses. These scanners image your finger in only two dimensions, so they can be fooled by placing a printed image of a fingerprint on top of the sensor.

Professor David Horsley and his team have now developed a sensor that sidesteps this issue by using low-depth ultrasound to image the ridges and valleys of the fingerprint’s surface (and the tissue beneath it) in 3D. Though their device is inspired by sophisticated medical equipment, the scanner is reportedly very compact and requires only 1.8 V to function, making it a good candidate for use in all sorts of portable electronics.

The technology started to come together in 2007, when the researchers developed arrays of piezoelectric micromachined ultrasonic transducers (PMUTs), which would later turn out to be a good fit for fingerprint sensing.

To fabricate the imager, the group embedded the PMUT arrays inside a chip, integrating them with the same kind of microelectromechanical systems (MEMS) technology already used in today’s smartphones to build microphones, gyroscopes and accelerometers.

The chip, Horsley explains, is made from two bonded wafers: a MEMS wafer containing the ultrasound portion, and a second wafer carrying the circuitry that processes the signal. The MEMS portion is partially shaved back to expose the ultrasound transducers.

“Ultrasound images are collected in the same way that medical ultrasound is conducted,” says Horsley. “Transducers on the chip’s surface emit a pulse of ultrasound, and these same transducers receive echoes returning from the ridges and valleys of your fingerprint’s surface.”
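
The pulse-echo principle Horsley describes reduces to a time-of-flight calculation: an echo returning after time t comes from a surface at depth c·t/2, where c is the speed of sound in the medium. A sketch with assumed values – the sensor’s real timings and sound speed are not given in the source:

```python
# Pulse-echo depth estimation, the principle behind ultrasonic
# fingerprint imaging. Speed of sound and delays are assumed values.

SPEED_OF_SOUND = 1500.0  # m/s, roughly the value in soft tissue

def echo_depth(round_trip_seconds):
    """Depth of the reflecting surface: the pulse travels out and
    back, so halve the round-trip distance."""
    return SPEED_OF_SOUND * round_trip_seconds / 2

# Example echo delays at three points (ridge, slope, valley):
for delay in (100e-9, 160e-9, 220e-9):
    print(f"delay {delay * 1e9:.0f} ns -> depth {echo_depth(delay) * 1e6:.0f} µm")
```

Repeating this calculation for every transducer in the array is what yields the volumetric image of ridges, valleys and underlying tissue.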

Scanning a fingerprint in this way should be a much more secure mechanism, one that is far harder to get around – a factor that will only grow in importance as we transition to mobile payments.

According to Horsley, the use of well-known, high-volume manufacturing techniques means that his team’s sensor could be produced at very low cost. The hope is that, besides better fingerprint scanning, the technology could also find use in personal health monitoring, or perhaps even as a low-cost medical ultrasound diagnostic tool.

References: http://www.gizmag.com/

New dimensions of quantum information added through hyperentanglement

Using a technique known as hyperentanglement, researchers claim to have increased the quantum-encoded data carrying capacity of photons by more than 30 times

In quantum cryptography, encoding entangled photons with particular spin states is a technique that ensures data transmitted over fiber networks cannot be intercepted or altered without detection. However, as each entangled pair can usually be encoded with only one property (generally the direction of its polarization), the amount of data carried is limited to just one quantum bit per photon. To address this limitation, researchers have now devised a way to “hyperentangle” photons that they say can increase the amount of data carried by a photon pair by as much as 32 times.

In this research, a team led by engineers from UCLA has verified that it is possible to split and entangle photon pairs across many dimensions, using properties such as the photons’ energy and spin, with each extra dimension doubling the photons’ data-carrying capacity. Using this technique, known as “hyperentanglement”, each photon pair can be encoded with far more data than was previously possible with standard quantum encoding methods.

To achieve this, the researchers transmitted hyperentangled photons in the form of a biphoton frequency comb (essentially a series of individual, equidistantly arranged frequencies) that divided the entangled photons into smaller parts. An extension of wavelength-division multiplexing (the process used to transmit multiple signals, such as several video streams, simultaneously over a single optical fiber), the biphoton frequency comb demonstrates that such methods are useful not just at the macro level, but at the quantum level as well.
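
A frequency comb, classical or biphoton, is simply a set of evenly spaced spectral lines, f_n = f_0 + n·Δf. A toy sketch with assumed telecom-band numbers shows why it maps so naturally onto wavelength-division multiplexing, with each line acting as an addressable channel:

```python
# Toy model of a frequency comb: evenly spaced spectral lines
# f_n = f0 + n * spacing. The values are illustrative, not the
# experiment's actual parameters.

def comb_lines(f0_hz, spacing_hz, count):
    """Return the first `count` comb frequencies."""
    return [f0_hz + n * spacing_hz for n in range(count)]

# A comb near 193 THz (the telecom C-band) with 10 GHz spacing:
for f in comb_lines(193e12, 10e9, 4):
    print(f"{f / 1e12:.5f} THz")
```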

“We show that an optical frequency comb can be generated at single photon level,” said Zhenda Xie, an associate professor of electrical engineering at UCLA and research scientist on the project. “Essentially, we’re leveraging wavelength division multiplexing concepts at the quantum level.”

The work builds on theories put forward by Professor Jeff Shapiro of MIT, who proposed that quantum entanglement could be observed in comb-like states of light. It is only with the recent arrival of ultrafast photon detectors, and the maturing of the various supporting technologies needed to generate hyperentanglement, that such hypotheses could be physically tested.

“We are fortunate to verify a decades-old theoretical prediction by Professor Jeff Shapiro of MIT, that quantum entanglement can be observed in a comb-like state,” said Chee Wei Wong, a UCLA associate professor of electrical engineering who was the research project’s principal investigator. “With the help of state-of-the-art high-speed single photon detectors at NIST and support from Dr. Franco Wong, Dr. Xie was able to verify the high-dimensional and multi-degrees-of-freedom entanglement of photons. These observations demonstrate a new fundamentally secure approach for dense information processing and communications.”

If this research can be successfully and consistently replicated, quantum encoding will no longer be bound by the limitations of the single quantum bit (qubit). Instead, quantum hyperentanglement research can move into the realm of the qudit (a unit of quantum information encoded in any number of d states, where d is a variable), where the amount of quantum-encoded information that can be transmitted increases many times over by simultaneously encoding energy levels, spin states and other parameters inherent in the quantum attributes of a photon.
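
The capacity gain is easy to quantify: a d-level qudit carries log₂(d) bits, and because independent degrees of freedom multiply their dimensions together, the bits they contribute simply add. A sketch with hypothetical dimension choices:

```python
# Capacity of a photon encoded across several degrees of freedom.
# A d-level system (qudit) carries log2(d) bits; independent degrees
# of freedom multiply dimensions, so their bit counts add. The
# dimension choices below are hypothetical examples.

import math

def bits_per_photon(dims):
    """Classical bits encodable in one photon whose degrees of
    freedom have the given numbers of levels."""
    return math.log2(math.prod(dims))

print(bits_per_photon([2]))         # polarization qubit: 1.0 bit
print(bits_per_photon([2, 4]))      # polarization x 4 time bins: 3.0 bits
print(bits_per_photon([2, 4, 16]))  # add 16 frequency bins: 7.0 bits
```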

Aside from the usual applications in secure communication, information processing and high-capacity, low-error data transfer, the team believes a raft of other technologies could benefit from this breakthrough. The enormously data-intensive needs of medical computer servers, government information transfer, financial data, ultra-secure military communications channels, distributed quantum computing and quantum cloud communications are just a few of the areas the researchers say may usefully employ the new method.

Engineers at UCLA were the research project’s principal investigators, with assistance from researchers at MIT, Columbia University, the University of Maryland, and the National Institute of Standards and Technology.

References: http://www.gizmag.com/