Virtualization technology brings security and operability to web applications


Fujitsu Laboratories Ltd. today announced that it has developed technology for web applications that run on smart devices or wearables, and that delivers the same level of security as thin clients while offering an exceptional degree of operability. In recent years, expectations have grown that the use of smart devices and wearables in a variety of front-line scenarios will lead to greater efficiency in business operations. When a high degree of confidentiality is required for the data used by these devices, such as patient data or confidential company data, thin client environments, which leave no trace of the data on the devices, are ideal from a security perspective. Thin clients, however, are generally environments in which screen data is frequently sent and received. As a result, depending on the status of the mobile network or the processing performance of the device, lags of up to about a second can occur, and operations unique to smart devices, such as swiping, are affected.

Fujitsu Laboratories has now developed new virtualization technology for web applications, developed for smart devices, that automatically separates the user interface processing (UI processing) from the data processing. With this technology, data processing is executed in the cloud, and the UI processing is executed on the smart device side. As a result, new web applications running on smart devices or wearables can have a work application execution environment that is as secure as a thin client environment while achieving outstanding operability.
In recent years, the use of smart devices for work in a variety of settings has become more common. Moreover, as smart glasses and other wearables come into practical use, there are high expectations that linking wearables with smart devices will lead to greater efficiencies in business operations for people in the field (figure 1).

Technological Issues

Web applications developed for smart devices may use data stored on the devices themselves, such as camera images and audio. They may also store data received from the cloud on the device and execute business logic on it. When a high degree of confidentiality is desired for the data used by these devices, such as in the case of patient data or confidential company data, thin client environments, which leave no trace of the data on the devices, are ideal from a security perspective. The problem with thin client environments, however, is that, depending on the status of the mobile network or the processing performance of the device, lags of up to about a second can occur, affecting smart device operations such as swiping (figure 2).


The Newly Developed Technology

Fujitsu Laboratories has now developed a technology that places the source code of a web application on a server and automatically interprets it when the application is executed on a smart device. This enables processing to be distributed, with data processing handled by the server and UI processing handled by the smart device (figures 3 and 4). The features of this technology are described below.

1. Distributed web applications

A newly developed virtualization engine, which runs on both the device and the server, performs tasks including the transfer of UI processing and the execution of processing content. In addition, the conventional web application library is replaced with a proprietary library that supports virtualization. When the engine executes a web application, it analyzes the source code and, by identifying the parts written against UI-related APIs defined in that library, separates out the UI processing required for execution. When notified by the device that a web application is being executed, the server sends the UI-processing part of the source code, together with the virtualization-aware library, to the smart device. The server then executes everything in the source code except the separated UI processing (that is, the data processing), while the smart device executes the transferred UI processing. This distribution maintains security while achieving a high level of operability. Because the separation happens dynamically when a web application is executed, no redesign or redevelopment work is needed for distributed processing.
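Fujitsu has not published its engine’s internals, but the separation step described above can be sketched roughly as follows: scan the application’s statements and route any call that matches a UI-related API, as defined by the library, to the device, leaving the rest for the server. The API names and the crude string matching below are invented for illustration only, not Fujitsu’s actual method.

```python
# Hypothetical sketch: partition a web application's statements into a
# UI part (run on the smart device) and a data part (run on the server)
# by matching each call against a known list of UI-related APIs.
# The API names here are illustrative, not from Fujitsu's library.

UI_APIS = {"render", "onSwipe", "onTap", "updateView"}

def partition(statements):
    """Split statements: UI-related calls go to the device, the rest stay on the server."""
    ui_part, data_part = [], []
    for stmt in statements:
        api = stmt.split("(", 1)[0]  # crude: take the call name before "("
        (ui_part if api in UI_APIS else data_part).append(stmt)
    return ui_part, data_part

app = ["fetchRecords(db)", "render(list)", "computeTotals(rows)", "onSwipe(next)"]
ui, data = partition(app)
print(ui)    # statements sent to the smart device
print(data)  # statements kept on the server
```

In the real system this separation is performed dynamically at execution time, which is why no redevelopment is needed.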

2. Distributed processing in accordance with operations

Fujitsu Laboratories also developed a feature that analyzes the user’s operations, processing times, and operation frequency on the smart device, and dynamically transfers to the server those parts of the UI processing that have little impact on operability. The result is a secure system that also maintains a high level of operability.
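A rough sketch of this operations-analysis idea: a UI sub-task is moved to the server only if doing so has little impact on perceived operability. The article does not specify how Fujitsu weighs processing time against operation frequency, so the decision rule and thresholds below are invented purely for illustration.

```python
# Hypothetical decision rule: offload a UI sub-task to the server only if
# it is both cheap (short measured handling time) and infrequent, so the
# added network round-trip is unlikely to hurt operability.
# Threshold values are invented for illustration.

def should_offload(avg_ms, calls_per_min, budget_ms=100, max_rate=10):
    """Return True if the sub-task can safely run server-side."""
    return avg_ms < budget_ms and calls_per_min < max_rate

print(should_offload(20, 2))   # a rare, fast helper: safe to move to the server
print(should_offload(20, 60))  # fired on every swipe: keep it on the device
```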


The use of this newly developed virtualization technology enables smart devices running web applications in a mobile environment to be used in business operations with both security and the high level of operability characteristic of smart devices. In addition, by applying the technology to web applications that communicate with smart glasses and other wearables, which are increasingly coming into practical use, thin client environments can be extended to web applications that run on smart devices and wearables, such as in work that handles large amounts of highly confidential data.


Fujitsu Laboratories will work to improve the virtualization technology’s multiplexed execution performance on servers and to increase the accuracy of its operations analysis, with the goal of practical implementation in fiscal 2016. In addition, beyond applications for servers and storage equipment, Fujitsu Laboratories will develop technologies for distributed execution tailored to devices, network equipment, and servers in accordance with execution conditions or the network environment, in order to create hyperconnected clouds, in which a variety of clouds are linked together, such as for Internet of Things environments.

References: http://phys.org/

Brillo as an underlying operating system for Internet of Things


The Project Brillo announcement was one of the news-making highlights at Google’s I/O conference last week. Brillo is fundamentally Google’s answer to the question of an operating system for the Internet of Things; it is designed to run on, and connect, various low-power IoT devices. If Android was Google’s answer for a mobile operating system, Brillo is a mini, or lightweight, Android OS, and part of The Register’s headline on the announcement story was “Google puts Android on a diet”.

Brillo was developed to connect IoT objects from “washing machine to a rubbish bin and linking in with existing Google technologies,” according to The Guardian.
As The Guardian also pointed out, they are not just talking about your kitchen where the fridge is telling the phone that it’s low on milk; the Brillo vision goes beyond home systems to farms or to city systems where a trashbin could tell the council when it is full and needs collecting. “Bins, toasters, roads and lights will be able to talk to each other for automatic, more efficient control and monitoring.”
Brillo is derived from Android. Commented Peter Bright, technology editor, Ars Technica: “Brillo is smaller and slimmer than Android, providing a kernel, hardware abstraction, connectivity, and security infrastructure.” The Next Web similarly explained Brillo as “a stripped down version of Android that can run on minimal system requirements.” The Brillo debut is accompanied by another key component, Weave. This is the communications layer, and it allows the cloud, mobile, and Brillo to speak to one another. AnandTech described Weave as “an API framework meant to standardize communications between all these devices.”
Weave is a cross-platform common language. Andrei Frumusanu in AnandTech said that, from code snippets shown in the presentation, it looked like a straightforward, simple, and descriptive syntax standard in JSON format. Google developers described Weave as “the IoT protocol for everything” and Brillo as “based on the lower levels of Android.”
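Google has not published Weave’s actual schemas in this article, but a simple, descriptive JSON command of the kind described might look like the sketch below. The device name, command, and field layout are entirely hypothetical, not real Weave syntax.

```python
import json

# Illustrative only: a made-up JSON device message in the spirit of the
# "simple and descriptive" syntax the article attributes to Weave.
# The schema (device / command / params) is invented, not Weave's.

message = json.dumps({
    "device": "trashbin-42",
    "command": "reportFillLevel",
    "params": {"percent": 95},
})

# A cloud service or phone app could decode the message and react to it,
# like the full-bin example from The Guardian.
decoded = json.loads(message)
if decoded["params"]["percent"] > 90:
    print("schedule collection for", decoded["device"])
```

A shared, certified schema like this is what would let apps and devices from different makers interact without custom integration work.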
Are Google’s Brillo and its Weave component, then, the answer to developer, manufacturer, and consumer needs for interoperability among smart objects? Some observers interpreted the announcement as good news, in that Google, in addition to Nest, would now be an active player in the IoT space. Google was making its presence known in the march toward a connected device ecosystem.

Will this be the easiest platform for developers to build on? Will Brillo have the most reach over the long term? Or is the IoT to get tangled up in a “format war”? These were some questions posed in response to Google’s intro of Project Brillo.
Derek du Preez offered his point of view about standards and the IoT in diginomica, saying “we have learnt from history that there is typically room for at least a couple of mainstream OS’. But if Google wants to be the leader in this market, it needs to be the platform of choice for some of the early IoT ‘killer apps’. Its investment in Nest goes a long way to making this happen.” He added that given Google’s existing ecosystem and the number of people across the globe who already own Android handsets, it had a good chance of taking on others and winning out.
The project page on the Google Developers site speaks about wide developer choice: “Since Brillo is based on the lower levels of Android, you can choose from a wide range of hardware platforms and silicon vendors.”
The site also said, “The Weave program will drive interoperability and quality through a certification program that device makers must adhere to. As part of this program, Weave provides a core set of schemas that will enable apps and devices to seamlessly interact with each other.”

References: http://phys.org/

Self-folding robot walks, swims, climbs, dissolves


A demo sparking interest at the ICRA 2015 conference in Seattle was all about an origami robot developed by researchers from the computer science and artificial intelligence lab at MIT and the department of informatics at the Technische Universität in Germany. “An untethered miniature origami robot that self-folds, walks, swims, and degrades” was the title of the paper, co-authored by Shuhei Miyashita, Steven Guitron, Marvin Ludersdorfer, Cynthia R. Sung and Daniela Rus. They focused on an origami robot that does just what the paper’s title suggests, and a video shows the robot performing each move.

One can watch the robot walking along a trajectory, walking on human skin, and delivering a block; swimming (the robot has a boat-shaped body so that it can float on water with roll and pitch stability); carrying a load (the robot itself weighs 0.3 g); climbing a slope; and digging through a stack. The video also shows how a polystyrene model of the robot dissolves in acetone.
Evan Ackerman in IEEE Spectrum reported on the Seattle demo. Unfolded, the robot has a magnet and PVC sandwiched between laser-cut structural layers (polystyrene or paper). How it folds: when placed on a heating element, the PVC contracts, and where the structural layers have been cut, this creates folds, said Ackerman. The self-folding exercise takes place on a flat sheet; the robot folded itself in a few seconds. Kelsey Atherton in Popular Science said, “Underneath it all, hidden like the Wizard of Oz behind his curtain, sit four electromagnetic coils, which turn on and off and makes the robot move forward in a direction set by its shape.”
When placed in the tank of acetone, the robot dissolves, except for the magnet. The authors noted “minimal body materials” in their design enabled the robot to completely dissolve in a liquid environment, “a difficult challenge to accomplish if the robot had a more complex architecture.”
Possible future directions: self-folding sensors into the body of the robot, which could lead to autonomous operation, and eventually, even inside the human body. The authors wrote, “Such autonomous ‘4D-printed’ robots could be used at unreachable sites, including those encountered in both in vivo and bionic biological treatment.”
Atherton said, for example, future designs based on this robot could be even smaller, and could work as medical devices sent under the skin.
IEEE Spectrum’s Ackerman said it marked “the first time that a robot has been able to demonstrate a complete life cycle like this.”
Origami robots, reconfigurable robots that can fold themselves into arbitrary shapes, were discussed in an article last year in MIT News, which quoted Ronald Fearing, a professor of electrical engineering and computer science at the University of California at Berkeley. Origami robotics, he said, is “a pretty powerful concept, because cutting planar things and folding is an inherently very low-cost process.” He added, “Folding, I think, is a good way to get to the smaller robots.”

References: http://phys.org/

How Computers Can Teach Themselves to Recognize Cats


In June 2012, a network of 16,000 computers trained itself to recognize a cat by looking at 10 million images from YouTube videos. Today, the technique is used in everything from Google image searches to Facebook’s newsfeed algorithms.

The feline recognition feat was accomplished using “deep learning,” an approach to machine learning that works by exposing a computer program to a large set of raw data and having it discover more and more abstract concepts. “What it’s about is allowing the computer to learn how to represent information in a more meaningful way, and doing so at several levels of representation,” said Yoshua Bengio, a computer scientist at the University of Montreal in Canada, who co-authored an article on the subject, published today (May 27) in the journal Nature. [Science Fact or Fiction? The Plausibility of 10 Sci-Fi Concepts]

“There are many ways you can represent information, some of which allow a human decision maker to make a decision more easily,” Bengio told Live Science. For example, when light hits a person’s eye, the photons stimulate neurons in the retina to fire, sending signals to the brain’s visual cortex, which perceives them as an image. This image in the brain is abstract, but it’s a more useful representation for making decisions than a collection of photons.
Similarly, deep learning allows a computer (or set of computers) to take a bunch of raw data — in the form of pixels on a screen, for example — and construct higher and higher levels of abstraction. It can then use these abstract concepts to make decisions, such as whether a picture of a furry blob with two eyes and whiskers is a cat.
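As a toy illustration of these levels of representation, the sketch below pushes four raw “pixel” values through two tiny fully connected layers, turning local contrasts into one combined feature. The weights here are fixed and invented for the example; in real deep learning they are learned from millions of images.

```python
# Toy illustration of "levels of representation": raw inputs are
# transformed layer by layer into more abstract features.
# Weights are hand-picked for the demo, not learned.

def layer(inputs, weights):
    """One fully connected layer with a ReLU nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

pixels = [0.0, 1.0, 1.0, 0.0]                            # level 0: raw data
edges = layer(pixels, [[-1, 1, 0, 0], [0, 0, 1, -1]])    # level 1: local contrasts
shape = layer(edges, [[1, 1]])                           # level 2: a combined feature
print(shape)  # a single abstract score built up from the raw pixels
```

A real network stacks many such layers, and the deepest ones respond to concepts as abstract as “cat.”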

“Think of a child learning,” Bengio said. “Initially, the child may see the world in a very simple way, but at some point, the child’s brain clicks, and she discovers an abstraction.” The child can use that abstraction to learn other abstractions, he added.

The self-learning approach has led to dramatic advances in speech- and image-recognition software. It is used in many Internet and mobile phone products, and even self-driving cars, Bengio said.

Deep learning is an important part of many forms of “weak” artificial intelligence, nonsentient intelligence focused on a narrow task, but it could become a component of “strong” artificial intelligence — the kind of AI depicted in movies like “Ex Machina” and “Her.”

But Bengio doesn’t share the fears about strong AI that billionaire entrepreneur Elon Musk, world-famous physicist Stephen Hawking and others have been raising.

“I do subscribe to the idea that, in some undetermined future, AI could be a problem,” Bengio said, “but we’re so far from [strong AI taking over] that it’s not going to be a problem.”

However, he said there are more immediate issues to be concerned about, such as how AI will impact personal privacy and the job market. “They’re less sexy, but these are the questions that should be used for debate,” Bengio said.

References: http://www.livescience.com/