Wearables Becoming Mainstream vol. 02 “Smart Glasses -The Case for Eye Wear Computing-”

Smart Glasses
-The Case for Eye Wear Computing-

Most of our senses, vital signs, and actions involve the head, making the human skull one of the most interesting body locations for simultaneous sensing and interaction in assistance applications. Although hearing aids and mobile headsets have become widely accepted as head-worn devices, users in public spaces often consider novel head-attached sensors and devices to be uncomfortable or even stigmatizing.
In the first part of this series we explored how wearables are entering the mainstream, and the potential and perils of the “big” data they gather. This part focuses on an emerging kind of wearable computing: smart glasses and their potential.

From Pocket/Wrist to Head

Recently, we have seen a lot of wrist-worn wearable devices, most dominantly smart watches and fitness trackers. However, the wrist is ergonomically a non-optimal sensing position. You get skin contact (the ability to sense heart rate, skin conductivity, etc.), yet in many professions it is difficult to wear something on the wrist (doctors, maintenance workers), and even very early studies showed that the majority of users feel obstructed by wrist-worn devices [1].
In contrast, the majority of our senses are situated on the head, making it one of the most interesting body placements for sensing and interaction. Although hearing aids and mobile headsets have become widely accepted as head-worn devices, users in public spaces often consider novel head-attached sensors and devices to be uncomfortable or even stigmatizing (see some of the feedback and news coverage about Google Glass as an example).

Cognitive Assistance

A lot of wearable computing studies provide evidence that head-worn sensing can reveal cognition-related behavior and essential vital parameters. Behavioral and vital data are the key components of many cognitive assistance applications, from learning aids and memory augmentation to concentration improvement. The glasses form factor seems perfect: eyeglasses are publicly accepted accessories, often worn continuously throughout the day, rendering them an ideal platform for cognitive assistance. Below, I outline our initial research towards specific cognitive assistance devices in a smart glasses form factor. So far we have focused on measuring mental activities: how much you are reading and what your facial expressions are. The goal, however, is to use these measures to improve user habits.
img1
If we want to assess cognitive functions, it seems most obvious to directly observe brain activity. The picture above shows our progress in assessing cognitive functions in real life: from specialized brain-sensing technology, through Google Glass applications and early J!NS MEME prototypes, to a more general smart glasses concept.

Reading Life Log

The more people read, the larger their vocabulary and the better their critical thinking skills. Smart eyewear is perfect for quantifying and improving reading habits, as many people already wear reading glasses. We have already implemented a word-count algorithm integrated into a smart eyewear frame, so your future glasses can tell you how much you are reading and even what type of document. We are now working on estimating how much you understand while reading.
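To give a flavor of how such a word count can work, here is a minimal sketch, assuming the glasses expose a horizontal electrooculography (EOG) channel (as the J!NS MEME prototypes mentioned above do). The velocity threshold and the one-saccade-per-word heuristic are illustrative assumptions for this sketch, not the parameters of our actual algorithm.

```python
import numpy as np

def estimate_words_read(eog_h, fs=100.0, saccade_thresh=3.0):
    """Rough word-count estimate from a horizontal EOG channel.

    Illustrative assumption: reading produces a series of small forward
    saccades, roughly one per fixated word, visible as sharp changes in
    the horizontal EOG signal.
    """
    eog_h = np.asarray(eog_h, dtype=float)
    velocity = np.diff(eog_h) * fs          # first derivative ~ eye velocity
    above = np.abs(velocity) > saccade_thresh
    # Count rising edges: each contiguous run above threshold is one saccade.
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic example: slow drift plus four saccade-like jumps.
signal = 0.1 * np.linspace(0, 10, 1000)
for jump_at in (200, 400, 600, 800):
    signal[jump_at:] += 1.0
print(estimate_words_read(signal))  # -> 4
```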

AffectiveWear

img2
Next to reading and comprehension analysis, future eyewear will also be able to understand more about our emotions. To this end, Masai et al. already built smart glasses that can detect facial expressions. Their system, AffectiveWear, detects facial expressions via photo-reflective sensors that recognize changes in the distance between the face and the glasses frame. Facial expressions are a first step towards understanding feelings and an easy way for us to exchange information nonverbally; they can give us insights into how people think.
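As a minimal sketch of the classification idea, assume each sample is a vector of raw photo-reflective sensor readings from the frame; the sensor count (eight), the expression labels, and the nearest-neighbor classifier below are illustrative assumptions, not the published AffectiveWear pipeline.

```python
from sklearn.neighbors import KNeighborsClassifier

# One training sample per expression: readings from eight photo-reflective
# sensors on the frame (values shift as facial muscles change the
# skin-to-frame distance). All values here are made up for illustration.
X_train = [
    [0.52, 0.48, 0.50, 0.51, 0.49, 0.50, 0.53, 0.47],  # neutral
    [0.60, 0.58, 0.41, 0.40, 0.62, 0.59, 0.43, 0.42],  # smile
    [0.44, 0.43, 0.57, 0.58, 0.42, 0.45, 0.59, 0.60],  # frown
]
y_train = ["neutral", "smile", "frown"]

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_train, y_train)

# Classify a new frame of sensor readings.
reading = [0.59, 0.57, 0.42, 0.41, 0.61, 0.60, 0.44, 0.43]
print(clf.predict([reading])[0])  # -> smile
```

In practice, per-user calibration matters: the sensors sit at slightly different distances on every face, so training data is typically collected per wearer.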

Mental State Improvement

After gaining insights into quantifying comprehension, cognitive load, and emotions, we can continue by designing interactions to improve these mental activities. We have already investigated how to improve reading immersion, using nose temperature and eye movements to detect a user’s immersion and playing audio/haptic stimuli to increase engagement. In the future, we will have technology that understands and improves our cognitive functions: attention, comprehension, recall, and ultimately decision making.
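As an illustration of the feedback loop this implies, here is a minimal sketch; the helper functions read_nose_temperature(), read_eye_movement_rate(), and play_stimulus() are hypothetical stand-ins for the actual sensing and actuation, and the score weights and threshold are illustrative, not fitted values.

```python
import random
import time

# Hypothetical stubs standing in for real sensors and actuators.
def read_nose_temperature():
    return 33.5 + random.uniform(-1.0, 1.0)   # degrees Celsius

def read_eye_movement_rate():
    return random.uniform(0.5, 3.0)           # saccades per second

def play_stimulus():
    print("playing audio/haptic stimulus to increase engagement")

def immersion_score(temp_c, saccade_rate):
    """Toy immersion estimate. Illustrative assumption: nose temperature
    drops and eye movement settles into a steady rhythm as immersion
    rises; the weights below are not fitted values."""
    return (34.0 - temp_c) + (2.0 - abs(saccade_rate - 1.5))

# Closed loop: sense, estimate immersion, stimulate when it drops.
for _ in range(5):
    score = immersion_score(read_nose_temperature(), read_eye_movement_rate())
    if score < 1.0:  # illustrative threshold
        play_stimulus()
    time.sleep(0.1)
```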

Finally …

In this series of three articles, I explore the impact of wearables on society. In the next and last article, we will discuss how to get from just collecting data to actual change, from quantified self to practice design.

[1] Gemperle, Francine, et al. “Design for wearability.” Digest of Papers, Second International Symposium on Wearable Computers. IEEE, 1998.

Kai Kunze

Kai Kunze works as an Associate Professor at Keio Media Design. He held a position as a research assistant professor at Osaka Prefecture University from 2012 to 2014.
He was a visiting researcher at the MIT Media Lab in 2011. He earned his Ph.D., summa cum laude, in the field of wearable computing from the University of Passau in Germany in 2011.
His work experience includes research visits and internships at the Palo Alto Research Center (PARC, Palo Alto, US), Sunlabs Europe (Grenoble, France), and the German Stock Exchange (Frankfurt, Germany).

“Haptics world through Macro Lens” Photo Essay Gallery by Masashi Nakatani

This mini essay will discuss the relationship between visual expression and tactile feelings through close-up photos.

Masashi Nakatani, Ph.D.
Project Associate Professor

Biography: After receiving a Ph.D. in engineering, Masashi Nakatani worked for four years in the cosmetics industry, where he developed a haptic sensor system that evaluated the softness of human skin. He returned to academic research in the spring of 2012 and has been conducting interdisciplinary research between sensor engineering and skin physiology. He also works to connect research outcomes in academia with industry, as represented by his recent inter-laboratory activity, the TECHTILE (TECHnology based tacTILE design) project.

He has conducted multinational collaborations with the BioRobotics lab at Harvard University (Cambridge, MA, USA), the Haptics lab at McGill University (Montreal, Canada), and the skin physiology lab at Columbia University Medical Center (New York, NY, USA).

KMD FORUM
Date & time: November 27-28, 2015 (Fri.-Sat.), 10:00-18:00
Venue: Tokyo Design Center, Gotanda
5-25-19 Higashi-Gotanda, Shinagawa-ku, Tokyo

For more information: http://kmd-media.com/static/forum/

Wearables Becoming Mainstream vol. 01

Wearables Becoming Mainstream

We interact more and more with computers throughout the day and sometimes don’t even realize it. Most obviously, we use smartphones and tablets. Yet computers also “hide” in washing machines, dryers, kitchen utensils, and increasingly also in wearable accessories (e.g. watches) and, finally, clothes. With these new technologies come new possibilities, and we need to decide how we want to use them.
Wearable devices in the form of the smartphone have already become an integrated part of our lives and changed them substantially. Just think back on your last vacation or trip. Could you imagine it without your smartphone? Printing out maps; planning transportation, hotels, and restaurants ahead of time; no “find my friends” or messaging applications telling you where your companions are when they are running late. However, this is just the beginning.

img1
When I’m talking about wearables, I don’t just mean smart watches or bands, but a more personal form of computing: computing you can wear like clothes, accompanying you like a second skin wherever you go and, most importantly, supporting you seamlessly in everyday tasks.
A word of caution: I’m speculating in this series of articles. Researchers often don’t have a good sense of where technology might go. To give you an example (also of early wearable computing systems), check out Figure 1. That’s me in 2005 with what we thought would be the future of wearables. It turned out that the future would be way less obtrusive and way more powerful: I’m talking about the smartphone.

Towards Wearable Tech

The more personal computing becomes, the more insights it can gain about its users. For me, the research directions of wearable, pervasive, and ubiquitous computing share the same vision with slightly different flavors. The more computing we have in our environments, the less time we have to interact with it; therefore, computing devices need to become proactive. Interfaces should vanish. The computer should understand what I want and help me achieve it. To realize this idea, researchers work with a wide variety of sensors detecting everyday activities, from interactive stationary systems like the Kinect to wearable devices like the Myo (which detects muscle movement). Sometimes neither the users nor the manufacturers are aware of all the information contained in the sensor data collected by these smart devices.
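To make concrete what detecting everyday activities from sensor data typically looks like, here is a minimal sketch of a standard activity-recognition pipeline (windowing a motion signal, extracting simple statistical features, classifying). The window size, features, and random-forest classifier are generic textbook choices, not the method of any particular device mentioned here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """Simple statistical features over one window of accelerometer data."""
    return [window.mean(), window.std(), window.min(), window.max()]

def windows(signal, size=100, step=50):
    """Slice a 1-D signal into overlapping windows."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

# Synthetic training data: "walking" oscillates, "sitting" is nearly flat.
rng = np.random.default_rng(0)
t = np.arange(1000)
walking = np.sin(t / 5.0) + rng.normal(0, 0.1, t.size)
sitting = rng.normal(0, 0.05, t.size)

X = [extract_features(w) for sig in (walking, sitting) for w in windows(sig)]
y = ["walking"] * sum(1 for _ in windows(walking)) \
    + ["sitting"] * sum(1 for _ in windows(sitting))

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([extract_features(sitting[:100])])[0])  # -> sitting
```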

If you are wearing a fitness tracker day and night, companies like Fitbit or Jawbone know a lot about your lifestyle (when you get up, when you go to sleep) and even more private information about your sleeping activities.
So far, private companies “own” the users’ data (step count, heart rate, etc.).
Industry stances on this matter vary widely: Fitbit, for example, lets you use its devices only if you upload your data to its online service, while Apple, on the other side, claims “We don’t want your data,” stores it all in a vault on your phone (called HealthKit), and lets you select whom you want to share your data with. Yet users still need to trust these companies with potentially very intimate data. Also, users are often not aware of what information they are sharing.

Quo vadis?

Of course, the closer computing gets, the more difficult it is to design it well. We have seen this now with a couple of devices, perhaps most prominently with the mixed reception of Google Glass. Although I dismissed Glass due to its battery runtime and the lack of useful everyday applications, my opinion about head-worn devices changed a bit when I gave Glass to my grandparents for a week: they came up with a couple of interesting application scenarios. Moving away from the social acceptance issue, I believe society needs to have an open, informed discussion about two important, related topics as soon as possible: privacy/ethics and the democratization of data.
First, who owns the data that you or other people are recording? Second, what type of data can be processed or shared with companies or your employer? The German constitutional court, for example, established a right to informational self-determination, meaning that citizens should have the right to determine the disclosure and use of their personal data. Yet to this day it is mostly not practiced and might be difficult to attain.
The other question is how we can use this data for the good of society (not optimizing for particular interest groups or companies).

Big Data – Big Liability Not Big Asset

The discussion around big data reminds me of old discussions about source code (e.g. in producing software). In the beginning, more code was considered good, even leading to developers being paid by the lines of code they produced. Yet more code is often bad: it makes it difficult to figure out what happens in a piece of software and hard to find bugs. A lot of companies seem to believe more data is good. However, especially with wearable devices, data collection touches a lot of privacy and ethical issues that consumers are not aware of (even though they agreed to them by clicking “yes” on a “Terms and Conditions” agreement). Big data in itself can be more a liability than an asset; actionable insights are an asset. Yet it is not clear how to get there by just collecting a lot of physiological data (and, by doing so, violating the privacy and ethical sensibilities of your users).

img2

Finally …

In this series of three articles, I will explore the impact of wearables on society. The next article will focus on eyewear, in my opinion a very promising technological development. In the last article, we will discuss how to get from just collecting data to actual change, from quantified self to practice design.

Kai Kunze

Kai Kunze works as an Associate Professor at Keio Media Design. He held a position as a research assistant professor at Osaka Prefecture University from 2012 to 2014.
He was a visiting researcher at the MIT Media Lab in 2011. He earned his Ph.D., summa cum laude, in the field of wearable computing from the University of Passau in Germany in 2011.
His work experience includes research visits and internships at the Palo Alto Research Center (PARC, Palo Alto, US), Sunlabs Europe (Grenoble, France), and the German Stock Exchange (Frankfurt, Germany).

A KMD team places second in the preliminary round of the 23rd International collegiate Virtual Reality Contest (IVRC)!

At the 23rd International collegiate Virtual Reality Contest (IVRC), held at the Shibaura Institute of Technology Toyosu Campus from Thursday, September 10 to Friday, September 11, the Reality Media project team “NULLNULL’s” (Daiya Kato, Tomoya Sasaki, Shota Sugimoto, Takuro Nakao, Haruna Fushimi) passed the preliminary round in second place.

The final round (free admission) will be held on October 24-25 at the Innovation Hall on the 7th floor of the National Museum of Emerging Science and Innovation (Miraikan).

At the International collegiate Virtual Reality Contest (IVRC), students from universities across Japan fuse VR (virtual reality) with engineering and other fields to give shape to a wide variety of ideas for the contest.

IVRC is often called a gateway to the academic conference world. In the Reality Media project, first-year master’s students form teams every year and each create content of their own devising, using the contest as a place to learn the basics of research and development.

This year, three teams from the Reality Media project and one team from the Superhuman Sports project entered the contest.

The entries included “Notes in My Hand,” a musical-note-shaped device that gives sound a tangible, three-dimensional form and presents haptic information on the notes; “Tebatar,” which amplifies the tactile sensation of the hand and, by letting you take the hand’s point of view, recreates the feeling of inhabiting your own hand; “Shabominton,” a new sport played with soap bubbles as softly floating balls; and “Nyoki-Nyoki Beanstalk,” by the team that passed the preliminary round in second place.

“Nyoki-Nyoki Beanstalk” combines a rope device controlled by a brake mechanism with a virtual-space rendering to let users experience, with their own bodies, an immersive “upward” climb in virtual space: at a simulated height, it evokes the thrill and excitement of being at a real height. As the story setting, an ogre living in the sky has trapped the rain meant to fall on a village inside orbs of light, and the player’s mission is to climb a giant beanstalk and touch the orbs of light in the sky.

At the final round, held on October 24-25 on the 7th floor of Miraikan, the works ranked 1st through 11th will be open to the public (free admission). Anyone can try them, so why not take this opportunity to visit and experience them for yourself?