Apple job posting suggests more Android apps are a possibility


Apple is bringing its Music and Move apps to Android this fall, but it might not be stopping there.

A job listing for “Applications SW Engineer – Android” posted on Apple’s site Tuesday seems to indicate the company’s desire to bring more apps over to the Android platform. 9to5Mac first reported on the job listing in a post on their site Wednesday.

“We’re looking for engineers to help us bring exciting new mobile products to the Android platform,” said the job listing on Apple’s site. This implies Android ambitions beyond its Music streaming app and its Android-to-iOS switcher, Move.

While there’s no official release timeline yet, Music and Move for Android are expected to launch this fall. Currently, there are no Apple apps for Android, unless you count Beats Music on a technicality.

With a possible Apple invasion of Android on the horizon, one can’t help but think of when iTunes was first made available for Windows. At the launch event in October 2003, Steve Jobs boasted that iTunes was “probably the best Windows app ever written.”

The original Windows app for iTunes was notable for the fact that it functioned identically to its Mac counterpart, with Jobs quipping to journalist Walt Mossberg that it was “like giving a glass of ice water to someone in hell.”

Apple also brought its Safari web browser to Windows in June 2007, though support was discontinued in 2012.

Tim Cook said in 2013 he had “no religious issue” with bringing Apple apps to Android, but he’d only do so if it made sense.

Interestingly, the Music app for Android features elements of Google’s Material Design in place of cues from iOS, though iOS style on Android would probably be needlessly confusing. Google apps on iOS feature a mix of Material Design and iOS elements as well.

The question that remains is: what will Apple bring over to Android? Mark Gurman over at 9to5Mac posits that iMessage, iTunes and Safari could all be possibilities. Perhaps Apple could develop a new iCloud app allowing Android users to access their iCloud data. Surely there are plenty of Android phone users who also use Mac computers.

Or, as was the case with iTunes for Windows, this could be an opportunity for Apple to give Android users a taste of iOS, with the hopes they’ll switch devices with the handy Move app.

We’ll find out soon enough what Apple is planning; in the meantime, rampant speculation will have to suffice.


Solar Paper turns the page on portable solar chargers


Solar Paper’s panels are thin enough to slot between the pages of a notebook

While there’s a healthy selection of compact solar panels to keep our mobile gadgets charged up – light permitting – the vast majority of these are either too small to be effective or too bulky for carting around. The creators of Solar Paper are looking to buck this trend with a portable solar charger that generates up to 10 W of power, yet is lighter than an iPhone 6 Plus and only slightly wider and longer.

So called because its panels are thin enough to slot between the pages of a notebook, and touted as the “world’s thinnest and lightest solar charger” by its creators, Yolk, Solar Paper measures 9 x 19 x 1.1 cm (3.5 x 7.5 x 0.4 in) and weighs 120 g (4.2 oz), while the actual solar panels are only 1.5 mm thick.

But Solar Paper has more going for it than just its form factor. Unlike most solar chargers on the market, it features modular panels that connect via embedded magnets. If you want more power, you can connect up to four panels together. Each individual panel generates a maximum of 2.5 W of power, so four will provide up to 10 W via USB. On a sunny day, that’s just as good as a 5V/2A wall charger.
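
As a rough sanity check of those figures, the arithmetic works out as follows (a toy calculation in Python using only the makers' stated peak numbers, not measured output):

# Toy check of the quoted figures: four panels at the stated 2.5 W peak
# match the 10 W delivered by a common 5 V / 2 A USB wall charger.
PANEL_PEAK_W = 2.5                  # maker's claimed peak output per panel
panels = 4
combined_w = PANEL_PEAK_W * panels  # 10.0 W
wall_charger_w = 5.0 * 2.0          # 5 V x 2 A = 10 W
print(combined_w, wall_charger_w)   # 10.0 10.0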

Solar Paper also has some built-in smarts to help users get the most out of it. To avoid the hassle of manual restarting when the available light drops, as is the case with most competing solar chargers, Solar Paper has been programmed to automatically resume charging when it detects sufficient sunlight. So when that cloud passes overhead, you won’t have to intervene.

There’s also a built-in LCD screen that displays the current being delivered to a connected device. This is useful to understand how weather, angle of inclination, and orientation to the sun affect the charge rate, so you can easily set it up in the best position.

Add in water resistance and grommet holes for utility/attachment options, and it’s easy to understand why so many have pledged their support to the device’s Kickstarter campaign, which shot past its US$50,000 goal within its first two days. If all goes as planned, the project creators anticipate the first batch will ship in September 2015, with the second batch following in either October or November. Pledges range from $69 for a 5 W Solar Paper all the way up to $450 for a set of four 10 W Solar Papers.

References: http://www.gizmag.com/

Neuroscience-based algorithms make for better networks


When it comes to developing efficient, robust networks, the brain may often know best. Researchers from Carnegie Mellon University and the Salk Institute for Biological Studies have, for the first time, determined the rate at which the developing brain eliminates unneeded connections between neurons during early childhood.

Though engineers use a dramatically different approach to build distributed networks of computers and sensors, the research team of computer scientists discovered that their newfound insights could be used to improve the robustness and efficiency of distributed computational networks. The findings, published in PLOS Computational Biology, are the latest in a series of studies being conducted in Carnegie Mellon’s Systems Biology Group to develop computational tools for understanding complex biological systems while applying those insights to improve computer algorithms.
Network structure is an important topic for both biologists and computer scientists. In biology, understanding how the network of neurons in the brain organizes to form its adult structure is key to understanding how the brain learns and functions. In computer science, understanding how to optimize network organization is essential to producing efficient interconnected systems.
But the processes the brain and network engineers use to learn the optimal network structure are very different.
Neurons create networks through a process called pruning. At birth and throughout early childhood, the brain’s neurons make a vast number of connections—more than the brain needs. As the brain matures and learns, it begins to quickly prune away connections that aren’t being used. By the time the brain reaches adulthood, it has about 50 to 60 percent fewer synaptic connections than it had at its peak in childhood.
In sharp contrast, computer science and engineering networks are often optimized using the opposite approach. These networks initially contain a small number of connections and then add more connections as needed.
“Engineered networks are built by adding connections rather than removing them. You would think that developing a network using a pruning process would be wasteful,” said Ziv Bar-Joseph, associate professor in Carnegie Mellon’s Machine Learning and Computational Biology departments. “But as we showed, there are cases where such a process can prove beneficial for engineering as well.”

The researchers first determined key aspects of the pruning process by counting the number of synapses present in a mouse model’s somatosensory cortex over time. After counting synapses in more than 10,000 electron microscopy images, they found that synapses were rapidly pruned early in development, and then as time progressed, the pruning rate slowed.
The results of these experiments allowed the team to develop an algorithm for designing computational networks based on the brain’s pruning approach. Using simulations and theoretical analysis, they found that the neuroscience-based algorithm produced networks that were much more efficient and robust than those built with current engineering methods.
In the networks created with pruning, the flow of information was more direct, and there were multiple paths for information to reach the same endpoint, which minimized the risk of network failure.
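The paper’s exact algorithm isn’t reproduced in this article, but the general start-dense-then-prune idea can be sketched in a few lines of Python. The snippet below is purely illustrative and makes two simplifying assumptions: edge “usage” is random rather than measured, and the pruning rate halves each round to mimic the fast-then-slow schedule the researchers observed.

import random

def prune_network(n_nodes=50, target_edges=200, rounds=8, seed=0):
    # Toy sketch of pruning-based network design: start with almost every
    # possible connection, then repeatedly drop the least-used edges,
    # pruning aggressively at first and more slowly later.
    rng = random.Random(seed)
    edges = {(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)}
    usage = {e: rng.random() for e in edges}   # stand-in for measured traffic
    excess = len(edges) - target_edges
    for r in range(rounds):
        # Decaying pruning rate: remove half of the remaining excess each round.
        n_remove = min(max(1, excess // (2 ** (r + 1))), len(edges) - target_edges)
        if n_remove <= 0:
            break
        for e in sorted(edges, key=lambda e: usage[e])[:n_remove]:
            edges.discard(e)
    return edges

print(len(prune_network()))   # a sparse network distilled from a dense one
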
“We took this high-level algorithm that explains how neural structures are built during development and used that to inspire an algorithm for an engineered network,” said Alison Barth, professor in Carnegie Mellon’s Department of Biological Sciences and member of the university’s BrainHub initiative. “It turns out that this neuroscience-based approach could offer something new for computer scientists and engineers to think about as they build networks.”
As a test of how the algorithm could be used outside of neuroscience, Saket Navlakha, assistant professor at the Salk Institute’s Center for Integrative Biology and a former postdoctoral researcher in Carnegie Mellon’s Machine Learning Department, applied the algorithm to flight data from the U.S. Department of Transportation. He found that the synaptic pruning-based algorithm created the most efficient and robust routes to allow passengers to reach their destinations.
“We realize that it wouldn’t be cost effective to apply this to networks that require significant infrastructure, like railways or pipelines,” Navlakha said. “But for those that don’t, like wireless networks and sensor networks, this could be a valuable adaptive method to guide the formation of networks.”
In addition, the researchers say the work has implications for neuroscience. Barth believes that the change in pruning rates from adolescence to adulthood could indicate that there are different biochemical mechanisms that underlie pruning.
“Algorithmic neuroscience is an approach to identify and use the rules that structure brain function,” Barth said. “There’s a lot that the brain can teach us about computing, and a lot that computer science can do to help us understand how neural networks function.”
As the birthplace of artificial intelligence and cognitive psychology, Carnegie Mellon has been a leader in the study of brain and behavior for more than 50 years. The university has created some of the first cognitive tutors, helped to develop the Jeopardy-winning Watson, founded a groundbreaking doctoral program in neural computation, and completed cutting-edge work in understanding the genetics of autism. Building on its strengths in biology, computer science, psychology, statistics and engineering, CMU recently launched BrainHub, a global initiative that focuses on how the structure and activity of the brain give rise to complex behaviors.

References: http://phys.org/

New network design exploits cheap, power-efficient flash memory without sacrificing speed


Random-access memory, or RAM, is where computers like to store the data they’re working on. A processor can retrieve data from RAM tens of thousands of times more rapidly than it can from the computer’s disk drive.

But in the age of big data, data sets are often much too large to fit in a single computer’s RAM. The data describing a single human genome would take up the RAM of somewhere between 40 and 100 typical computers.

Flash memory—the type of memory used by most portable devices—could provide an alternative to conventional RAM for big-data applications. It’s about a tenth as expensive, and it consumes about a tenth as much power.
The problem is that it’s also a tenth as fast. But at the International Symposium on Computer Architecture in June, MIT researchers presented a new system that, for several common big-data applications, should make servers using flash memory as efficient as those using conventional RAM, while preserving their power and cost savings.
The researchers also presented experimental evidence showing that, if the servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash, anyway.
In other words, even without the researchers’ new techniques for accelerating data retrieval from flash memory, 40 servers with 10 terabytes’ worth of RAM couldn’t handle a 10.5-terabyte computation any better than 20 servers with 20 terabytes’ worth of flash memory, which would consume only a fraction as much power.
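A back-of-the-envelope model makes the point concrete. The latency figures below are rough orders of magnitude chosen for illustration, not numbers from the MIT paper:

# Rough, illustrative access latencies (orders of magnitude only).
RAM_NS   = 100          # ~100 ns per access
FLASH_NS = 100_000      # ~100 microseconds per access
DISK_NS  = 10_000_000   # ~10 ms per access

def effective_ns(miss_rate, fast_ns, slow_ns):
    # Average access time when a fraction of requests fall through
    # to the slower tier.
    return (1 - miss_rate) * fast_ns + miss_rate * slow_ns

# RAM-based servers that miss to disk 5% of the time vs. an all-flash setup.
print(effective_ns(0.05, RAM_NS, DISK_NS))   # ~500,095 ns on average
print(FLASH_NS)                              # 100,000 ns

Even under these generous assumptions, a 5 percent disk miss rate drags the RAM-based average access time down to the same ballpark as flash, or worse.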
“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”
Joining Arvind on the new paper are Sang Woo Jun and Ming Liu, MIT graduate students in computer science and engineering and joint first authors; their fellow grad student Shuotao Xu; Sungjin Lee, a postdoc in Arvind’s group; Myron King and Jamey Hicks, who did their PhDs with Arvind and were researchers at Quanta Computer when the new system was developed; and one of their colleagues from Quanta, John Ankcorn—who is also an MIT alumnus.

Outsourced computation

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
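The article doesn’t include the accelerator code itself, but the pattern it describes, filtering or reducing data next to the storage and shipping only the result to the host, can be sketched as follows. The function names and data layout here are hypothetical:

def near_storage_filter(records, predicate):
    # Conceptually runs on the flash controller: scan the raw records and
    # keep only the ones the host actually asked for, so far less data
    # has to cross the network.
    return [r for r in records if predicate(r)]

def host_query(flash_shards, predicate):
    # Runs on the server: gather the already-filtered results from each
    # flash shard instead of pulling every raw record back first.
    results = []
    for shard in flash_shards:
        results.extend(near_storage_filter(shard, predicate))
    return results

# Toy usage: find readings above a threshold across two "flash chips".
shards = [[3, 17, 9], [42, 1, 25]]
print(host_query(shards, lambda x: x > 10))   # [17, 42, 25]
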
With hardware contributed by some of their sponsors—Quanta, Samsung, and Xilinx—the researchers built a prototype network of 20 servers. Each server was connected to a field-programmable gate array, or FPGA, a kind of chip that can be reprogrammed to mimic different types of electrical circuits. Each FPGA, in turn, was connected to two half-terabyte—or 500-gigabyte—flash chips and to the two FPGAs nearest it in the server rack.
Because the FPGAs were connected to each other, they created a very fast network that allowed any server to retrieve data from any flash drive. They also controlled the flash drives, which is no simple task: The controllers that come with modern commercial flash drives have as many as eight different processors and a gigabyte of working memory.
Finally, the FPGAs also executed the algorithms that preprocessed the data stored on the flash drives. The researchers tested three such algorithms, geared to three popular big-data applications. One is image search, or trying to find matches for a sample image in a huge database. Another is an implementation of Google’s PageRank algorithm, which assesses the importance of different Web pages that meet the same search criteria. And the third is an application called Memcached, which big, database-driven websites use to store frequently accessed information.
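The researchers’ FPGA implementations aren’t published in this article, but as a point of reference, the PageRank computation they accelerated can be expressed compactly; a standard power-iteration version is sketched below.

def pagerank(links, damping=0.85, iterations=50):
    # Standard power-iteration PageRank; 'links' maps each page to the
    # pages it links to.
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))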

Chameleon clusters

FPGAs are about one-tenth as fast as purpose-built chips with hardwired circuits, but they’re much faster than central processing units using software to perform the same computations. Ordinarily, either they’re used to prototype new designs, or they’re used in niche products whose sales volumes are too small to warrant the high cost of manufacturing purpose-built chips.
But the MIT and Quanta researchers’ design suggests a new use for FPGAs: A host of applications could benefit from accelerators like the three the researchers designed. And since FPGAs are reprogrammable, they could be loaded with different accelerators, depending on the application. That could lead to distributed processing systems that lose little versatility while providing major savings in energy and cost.
“Many big-data applications require real-time or fast responses,” says Jihong Kim, a professor of computer science and engineering at Seoul National University. “For such applications, BlueDBM”—the MIT and Quanta researchers’ system—”is an appealing solution.”
Relative to some other proposals for streamlining big-data analysis, “The main advantage of BlueDBM might be that it can easily scale up to a lot bigger storage system with specialized accelerated supports,” Kim says.
References: http://phys.org/