Microsoft – Talking With Your Hands

Microsoft Pushing Gesture Control

Almost every object you encounter day-to-day has been designed to work with the human hand, so it’s no wonder so much research is being conducted into tracking hand gestures to create more intuitive computer interfaces, such as Purdue University’s DeepHand or the consumer product, Leap Motion. Now Microsoft has outlined some projects that deal with hand tracking, haptic feedback and gesture input.

“How do we interact with things in the real world?” asks Jamie Shotton, a Microsoft researcher in the labs at Cambridge, UK. “Well, we pick them up, we touch them with our fingers, we manipulate them. We should be able to do exactly the same thing with virtual objects. We should be able to reach out and touch them.”

The researchers believe that gesture tracking is the next big thing in how humans interact with computers and smart devices. Combining gestures with voice commands and traditional physical input methods like touchscreens and keyboards will allow ambient computer systems, such as Internet of Things devices, to better anticipate our needs.

The first hurdle is a big one: the human hand is extremely complex, and tracking all the possible configurations it can form is a massive undertaking. That’s the focus of Handpose, a research project underway at Microsoft’s Cambridge lab, which uses the Kinect sensor you’d find packaged with an Xbox console to track a user’s hand movements in real time and display virtual hands that mimic everything the real ones do.

The tool is precise enough to allow users to operate digital switches and dials with the dexterity you’d expect of physical hands, and can be run on a consumer device, like a tablet.
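
To make the idea concrete, here is a minimal sketch (not Microsoft’s code) of the loop such a system implies: a depth frame goes in, estimated joint positions come out, and those positions drive a virtual control. The HandPose class, the estimate_hand_pose stub and the dial_angle mapping are illustrative assumptions.

```python
# A minimal sketch of the kind of loop a Handpose-style system implies:
# depth frame in, estimated joint positions out, virtual control updated.
# HandPose, estimate_hand_pose and dial_angle are illustrative stand-ins,
# not Microsoft's code.
import math
from dataclasses import dataclass

@dataclass
class HandPose:
    """Estimated 3D joint positions for one hand (metres, camera space)."""
    wrist: tuple
    index_tip: tuple

def estimate_hand_pose(depth_frame) -> HandPose:
    """Stand-in for a learned pose estimator; a real system would regress
    ~20 joints from the Kinect depth image. Returns a fixed pose here so
    the sketch runs without hardware."""
    return HandPose(wrist=(0.0, 0.0, 0.5), index_tip=(0.05, 0.02, 0.45))

def dial_angle(pose: HandPose) -> float:
    """Map the index fingertip's position relative to the wrist onto the
    angle of a virtual dial, the kind of control described above."""
    dx = pose.index_tip[0] - pose.wrist[0]
    dy = pose.index_tip[1] - pose.wrist[1]
    return math.degrees(math.atan2(dy, dx))

if __name__ == "__main__":
    depth_frame = None  # a real loop would pull frames from the sensor here
    pose = estimate_hand_pose(depth_frame)
    print(f"virtual dial set to {dial_angle(pose):.1f} degrees")
```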

“We’re getting to the point that the accuracy is such that the user can start to feel like the avatar hand is their real hand,” says Shotton.

Another key part of the sensation that digital hands are really your own comes through the sense of touch. While users of Handpose’s virtual switches and dials still reported feeling immersed without any haptic feedback, a Microsoft team in Redmond, Washington, is experimenting with something more hands-on.

This system is able to recognize that a physical button – one not wired to anything – has been pushed, simply by reading the movement of the hand. A retargeting system then allows multiple, context-sensitive commands to be laid over the top of that one button in the virtual world.

This means that a limited set of physical objects on a small real-world panel is enough to interact with a complex wall of virtual knobs and sliders, such as an airplane cockpit. The dumb physical buttons and dials help make virtual interfaces feel more real, the researchers report.
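
As a rough illustration of the retargeting idea, the sketch below resolves a press of the unwired physical button to whichever virtual control the tracked hand is closest to at that moment. The panel layout, control names and resolve_press helper are invented for illustration, not taken from Microsoft’s system.

```python
# Rough illustration of retargeting: one real, unwired button stands in for
# many virtual controls. Which control a press "means" depends on which
# virtual control the tracked hand was closest to at that moment. The panel
# layout and function names are invented for illustration.
import math

VIRTUAL_CONTROLS = {
    "landing_gear": (0.10, 0.30),   # (x, y) positions on the virtual panel
    "flaps":        (0.40, 0.30),
    "cabin_lights": (0.70, 0.30),
}

def resolve_press(hand_xy):
    """Return the virtual control nearest the hand when the physical
    button (which is wired to nothing) is seen to be pressed."""
    return min(
        VIRTUAL_CONTROLS,
        key=lambda name: math.dist(hand_xy, VIRTUAL_CONTROLS[name]),
    )

print(resolve_press((0.65, 0.28)))  # -> cabin_lights
```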

The third project comes out of Microsoft’s Advanced Technologies Lab in Israel. The research on Project Prague aims to enable software developers to incorporate hand gestures for various functions in their apps and programs. So, miming the turn of a key could lock a computer, or pretending to hang up a phone might end a Skype call.

The researchers built the system by feeding millions of hand poses into a machine-learning algorithm to train it to recognize specific gestures. The system uses hundreds of micro artificial-intelligence units to build a complete picture of a user’s hand positions, as well as their intent, and scans the hands using a consumer-level 3D camera.
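
The sketch below shows the general shape of that pipeline – hand-pose features in, a recognized gesture out, an application command triggered. The toy nearest-centroid classifier and the gesture and action tables are assumed stand-ins for Project Prague’s machine-learning system, not its actual code.

```python
# Toy sketch of the gesture -> action pipeline described above: hand-pose
# features go in, a recognized gesture comes out, and the app maps the
# gesture to a command. The nearest-centroid "classifier" and the feature
# vectors are stand-ins for Project Prague's machine-learning system.
import math

GESTURE_CENTROIDS = {          # toy "trained" prototypes in a feature space
    "turn_key": [0.9, 0.1, 0.2],
    "hang_up":  [0.1, 0.8, 0.3],
}

GESTURE_ACTIONS = {
    "turn_key": lambda: print("locking computer"),
    "hang_up":  lambda: print("ending call"),
}

def classify(features):
    """Pick the gesture whose prototype is closest to the observed features."""
    return min(GESTURE_CENTROIDS,
               key=lambda g: math.dist(features, GESTURE_CENTROIDS[g]))

GESTURE_ACTIONS[classify([0.85, 0.15, 0.25])]()   # prints "locking computer"
```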

In addition to gaming and virtual reality, the team believes the technology would have applications for everyday work tasks, including browsing the web and creating and giving presentations.

Credit: Gizmag / Microsoft Blog

The Latest News On The Apple Watch 2

The iWatch 2 Lowdown

Despite whispers that Apple might unveil a smartwatch sequel at its recent WWDC event in San Francisco, the only concrete wearable news from the conference was the unveiling of watchOS 3 – the updated smartwatch platform that will hit devices in the fall.

However, that hasn’t stopped the Apple Watch 2 rumour mill from spinning, with fuel being added to the fire by way of some interesting new patents that have turned up.

In particular, one patent details the potential use of cameras in future Apple Watch models. “The camera can be disposed on the front surface of Apple Watch face to capture images of the user,” reads the patent uncovered by Patently Apple.

“A compact digital camera that includes an image sensor such as a CMOS sensor and optical components (e.g. lenses) arranged to focus an image onto the image sensor, along with control logic operable to use the imaging components to capture and store still and/or video images.”

We’ve seen cameras on smartwatches before, of course – Samsung has included them on its Gear range – but the original Apple Watch lacked the lenses required to take snaps or do FaceTime video calling.

 

Back in June, reports suggested that Apple was planning to add a FaceTime camera to the Apple Watch 2, along with new tether-less functionality. Those murmurs claimed an HD camera would be added to the bezel on the front of the device, allowing Watch wearers to video conference with one another.

This patent backs up that suggestion, while at the same time detailing potential improvements to the Digital Crown controller.

For future Watch models, patents suggest that the Crown could become touch-sensitive, using capacitive touch technologies that can receive pressure-based inputs while also moving in one or more directions, offering multiple gesture-based control methods.

The patent also reveals that two more buttons, also capacitive, could arrive on the left-hand side of the bezel.

A more useful feature being touted for the Apple Watch sequel, not detailed anywhere in these newly uncovered patents, is extra tether-less functionality.

It’s rumored that features such as emails, texts and app updates could hit the Apple Watch 2 without the need for an iPhone to transmit the data. This, apparently, is the result of a new Wi-Fi-enabled chipset, which will also power a ‘find my watch’ feature.

CREDIT: Forbes

What is computer vision syndrome – and how can I prevent it?

Protect Yourself From CVS

Do you sit in front of a screen at work for hours, then leave with a headache, sore, dry, blurry eyes and a painful neck? If so, welcome to computer vision syndrome (CVS), a condition just waiting to happen to those who use a screen for more than three hours a day. This happens to be quite a lot of us – about 70 million worldwide. At the risk of being alarmist, some researchers argue that CVS is the “No 1 occupational hazard of the 21st century”. But back pain, tension headaches and discomfort are not inevitable consequences of screen time – perhaps we should simply be more careful. At the very least, we should encourage our children to develop good screen habits.

A study of 642 students in Iran between the ages of 11 and 18 found that about 70% used computers for at least two hours a day. Up to half reported eye strain, blurred vision, dry eyes and headaches. The symptoms were worse in those who were long- or short-sighted. While most got better quickly after coming away from the screen, some took a day to recover. About one-third sat too close to the screen.

 

What can we do about it?

Eyes work harder when they read from a screen because computer images are made of pixels, tiny dots that have a bright centre and blurred edges. Printed images and words, by comparison, are solid and well-defined. Our eyes constantly have to focus, relax and refocus to read the pixels, which tires out the muscles. The 20-20-20 rule to combat this says you should take a 20-second break every 20 minutes and focus on points 20ft from your computer. When we look at a screen, we don’t blink as much as we do normally, so consciously doing so will moisten your eyes and reduce irritation. Flat screens with anti-glare filters are kind to eyes, as is having adequate light. If you have glasses, check your prescription and consider lenses that reduce glare.
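
For anyone who wants a nudge, a minimal sketch of a 20-20-20 reminder might look like this; it simply sleeps and prints, whereas a real tool would presumably fire desktop notifications.

```python
# Minimal 20-20-20 reminder: every 20 minutes, prompt a 20-second look at
# something 20ft away. Purely illustrative; a real tool would use desktop
# notifications rather than print().
import time

WORK_MINUTES = 20
BREAK_SECONDS = 20

while True:
    time.sleep(WORK_MINUTES * 60)
    print("20-20-20 break: focus on something 20ft away.")
    time.sleep(BREAK_SECONDS)
    print("Back to work - and remember to blink.")
```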

When it comes to the distance you sit from the screen, how you sit and the optimum level for reading documents, it becomes rather prescriptive. It’s more comfortable to look down at a screen, so keep yours 15 to 20 degrees below eye level (about 10-13cm, or 4-5in), as measured from the centre of the screen. The screen should be 46-66cm (18-26in) away from your face; any closer and your eyes have to work too hard to focus on the screen. Sit in a proper chair, even if it’s ugly, so you have support in the small of your back and can sit with your feet flat on the floor. You’re welcome 🙂
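
As a quick sanity check of those numbers (assuming the angle is measured from the eyes to the centre of the screen), a short calculation shows that 15 degrees at a 46cm viewing distance works out to roughly 12cm below eye level, consistent with the range above.

```python
# Sanity check of the geometry above, assuming the 15-20 degree angle is
# measured from the eyes to the centre of the screen: at a 46cm viewing
# distance, 15 degrees below eye level is roughly 12cm, within the
# 10-13cm (4-5in) range quoted.
import math

distance_cm = 46
angle_deg = 15
print(round(distance_cm * math.tan(math.radians(angle_deg)), 1))  # ~12.3
```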

 

Credit: The Guardian

Facebook is using Artificial Intelligence to read your posts!

Facebook Is Being Nosey

Facebook is starting to analyse users’ posts and messages with sophisticated new artificial intelligence (AI) software — and that could have worrying implications for Google.

On Wednesday, the social networking giant announced DeepText — “a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages.”

DeepText is powered by an AI technique called deep learning. Basically, the more input you give it, the better it becomes at what it is trained to do — which in this case is parsing human text-based communication.

The aim? Facebook wants its AI to be able to “understand” your posts and messages to help enrich experiences on the social network. This ranges from recognising from a message that you need to call a cab (rather than just discussing your previous cab ride) and giving you the option to do so, to helping sort comments on popular pages by relevance. (Both are examples Facebook’s research team provides.)
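
To picture the cab example, here is a minimal sketch of the task’s shape: a message goes in, an intent label comes out, and the app can then offer an action. The keyword matching below is a deliberately crude stand-in for DeepText’s deep-learning model, and the function name and labels are invented for illustration.

```python
# Toy sketch of the cab example's shape: a message goes in, an intent label
# comes out, and the app can then offer an action. The keyword matching is
# a deliberately crude stand-in for DeepText's deep-learning model; the
# function name and labels are invented for illustration.
def detect_ride_intent(message: str) -> str:
    text = message.lower()
    if "need a ride" in text or "call me a cab" in text:
        return "request_ride"     # could surface a "book a cab" button
    if "cab" in text or "ride" in text:
        return "mention_only"     # just talking about a past ride
    return "no_ride_intent"

print(detect_ride_intent("I need a ride to the airport"))       # request_ride
print(detect_ride_intent("My cab driver yesterday was great"))  # mention_only
```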

The blog post doesn’t directly discuss it, but another obvious application for this kind of sophisticated tech is Google’s home turf — search. And engineering director Hussein Mehanna told Quartz that this is definitely an area that Facebook is exploring: “We want Deep Text to be used in categorizing content within Facebook to facilitate searching for it and also surfacing the right content to users.”

Search is notoriously difficult to get right, and it is a problem Google has thrown billions at (and made billions off) trying to solve. Is someone searching “trump” looking for the presidential candidate or for playing cards? Is a search for the word “gift” after gift ideas, or more information about the history of gifts — or even the German meaning of the word, poison? And how do you handle natural-language queries that may not contain any of the keywords the searcher is looking for — for example, “what is this weird thing growing on me?”

By analysing untold trillions of private and public posts and messages, Facebook is going to have an unprecedented window into real-time written communication and all the contexts around it.

Google has nothing directly comparable (on the same scale) that it can draw on as a resource to train AI. It can crawl the web, but static web pages don’t have the real-time dynamism that reflects how people really speak — and search — in private conversations. The search giant has repeatedly missed the boat on social, and is now trying to get on board — very late in the game — with its new messaging app Allo. Allo will mine conversations for its AI tech and use it to provide contextual info to users — but it hasn’t even launched yet.

Facebook has long been working to improve its search capabilities, with tools like Graph Search that let the user enter natural language queries to find people and information more organically: “My friends who went to Stanford University and like rugby and Tame Impala,” for example. And in October 2015, it announced it had indexed all 2 trillion-plus of its posts, making them accessible via search.
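
For a sense of what such a query resolves to, the sketch below filters a toy friends list by school and interests. The data and the filter are illustrative; Facebook’s actual engine translates natural-language queries into traversals over its social graph at vastly larger scale.

```python
# Toy sketch of what a Graph Search query like "my friends who went to
# Stanford University and like rugby and Tame Impala" resolves to: a filter
# over structured profile data. The data and filter are illustrative;
# Facebook's engine parses natural language into graph traversals at far
# larger scale.
friends = [
    {"name": "Alice", "school": "Stanford University",
     "likes": {"rugby", "Tame Impala"}},
    {"name": "Bob", "school": "MIT", "likes": {"chess"}},
]

matches = [f["name"] for f in friends
           if f["school"] == "Stanford University"
           and {"rugby", "Tame Impala"} <= f["likes"]]
print(matches)  # ['Alice']
```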

Using AI will help the Menlo Park company not just to index but to understand the largest private database of human interactions ever created — super-charging these efforts.

 

CREDIT: Business Insider UK