Augmented Reality pioneer Ronald Azuma ends his seminal 1997 essay, A Survey of Augmented Reality, with the prediction: “Within another 25 years, we should be able to wear a pair of AR glasses outdoors to see and interact with photorealistic dinosaurs eating a tree in our backyard.” Although his prediction takes us to 2022, still a few years away, AR has advanced much more quickly than any of us could have imagined. With the rise of wearables and devices like Meta’s SpaceGlasses, we’re getting closer to a true AR glasses experience, and we WILL get there very soon.

We’ve already had AR dinosaurs appear just about everywhere — apparently a sure-fire source of go-to content. ‘What should we make with AR? Duh, a dinosaur!’


Image Source: The Advertiser

Dinosaurs, shminosaurs.

How about instead interacting with a realistic virtual you, long dead and resurrected in the backyard? Now that might startle the neighbours.


Image: Screenshot from Eterni.me website

MIT startup Eterni.me wants to bring you back from the dead by creating a virtual avatar that acts “just like you”:

“It generates a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to your family and friends after you pass away. It’s like a Skype chat from the past.”

Eterni.me bears an eerie resemblance to the Channel 4 television series Black Mirror, specifically Series 2, Episode 1, “Be Right Back,” in which we watch widowed Martha engage with the latest technology to communicate with her recently deceased husband, Ash. Of course, it’s not actually Ash, but a simulation powered by an Artificial Intelligence (A.I.) program that gathers information about him through social media profiles and past online communications such as emails. Martha begins by chatting with virtual Ash and is later able to speak with him on the phone after uploading video files of him, from which the A.I. learns his voice. Eterni.me hopes to immortalize you in a similar fashion by collecting “almost everything you create during your lifetime” and processing “this huge amount of information using complex A.I. algorithms.”


Images: Black Mirror

But who will curate this massive amount of information that is “almost everything you create during your lifetime”? In an article on Eterni.me in Fast Company, Adele Peters writes, “While the service promises to keep everything you do online so it’s never forgotten, it’s not clear that most people would want all of that information to live forever.” Commenting on how our current generation documents “every meal on Instagram and every thought on Twitter,” Peters asks, “What do we want to happen to that information when we’re gone?”

Will we have avatar curators?

This sentiment echoes director Omar Naim’s 2004 film The Final Cut, starring Robin Williams. Williams plays a “cutter,” someone who has the final edit over people’s recorded histories. An embedded chip records all of your experiences over the course of your life; Williams’s job is to pore through the stored memories and produce a one-minute video of highlights.


Image: The Final Cut (2004)

Will Eterni.me’s A.I. algorithms be intelligent enough to do this and distinguish between your mundane and momentous experiences?

In Black Mirror, Martha ultimately tells simulated Ash, “You’re just a few ripples of you. There’s no history to you. You’re just a performance of stuff that he performed without thinking and it’s not enough.” Will these simulated augmentations of us be “enough”?

Marius Ursache, Eterni.me’s founder, says, “In order for this to be accurate, collecting the information is not enough–people will need to interact with the avatar periodically, to help it make sense of the information, and to fine-tune it, to make it more accurate.”

This post expands on a recent article I wrote on Spike Jonze’s film Her, in which I discuss the film from an AR perspective. Her introduces us to Samantha, the world’s first intelligent operating system, and offers us a glimpse of our soon-to-be augmented life, when our devices come to learn and grow with us and, in the case of Eterni.me, become us. Our smart devices, like Samantha, will come to act on our behalf. They will know us very well, learning our behaviours, our likes and dislikes, our family and friends, even our vital statistics. The next wave of AR combines elements like A.I., machine learning, sensors, and data, all to tell the unique story of YOU. With Eterni.me, we may just see this story of you continue after you’re long gone.


Image: Spike Jonze’s film Her (2013)

Gartner claims that by 2017 your smartphone will be smarter than you. Confidence will be built gradually by outsourcing menial tasks to smartphones, with the expectation that consumers will become more accustomed to smartphone apps and services taking control of other aspects of their lives. Gartner calls this the era of cognizant computing and identifies its four stages as: Sync Me, See Me, Know Me, Be Me. ‘Sync Me’ and ‘See Me’ are currently occurring, with ‘Know Me’ and ‘Be Me’ just ahead, as we see Samantha perform in Her.

‘Sync Me’ stores copies of your digital assets, which are kept in sync across all contexts and end points. This data storage, an archive of an ‘online you’, will be central to Eterni.me’s creation of your virtual avatar. ‘See Me’ knows where you currently are and where you have been, in both the real world and on the Internet, as well as understanding your mood and context to best provide services. If your mood and context can be documented and later accessed to know how you were feeling in a particular location, this will dramatically affect the curation of your memories by the A.I. system. ‘Know Me’ understands what you need and want and proactively presents it to you, with ‘Be Me’ as the final step, where the smart device acts on your behalf based on learning. Again, being able to document and access your personal needs and wants will paint a clearer picture of the story of you and who you were. The true final step of ‘Be Me’ will be put to the test once you are six feet under, which raises the question: will we become smarter when we die?

Will you register for Eterni.me?

Let’s continue the conversation on Twitter: I’m @ARstories. And yes, I’m still alive.

Image: Her pixel art by QuickHoney

Her is a story about people-centric technology. Spike Jonze shows us a near future where it’s all about you. This is our new Augmented Reality (AR), and it’s not science fiction.

I’ve been working with AR as a PhD researcher and designer for the past decade. The second wave of AR will surpass the current gimmickry and extend our human capacities to better understand, engage with, and experience our world in new ways. It will be human-centered and help to make our lives better. Driven by the one thing that is central and unique to AR – context – our devices will be highly cognizant of our constantly changing environments, continually deciphering, translating, analyzing, and navigating to anticipate our specific needs, predicting and delivering personalized solutions with highly relevant content and experiences. Our smart devices will act on our behalf. This next wave of AR is adaptive; it is live and always on, working quietly in the background and presenting itself when necessary, with the user forever at the center. It works for you, and you alone. It knows you very well: your behaviours, your likes and dislikes, your family and friends, even your vital statistics. The next wave of AR combines elements like Artificial Intelligence (A.I.), machine learning, sensors, calm computing, and data, all to tell the unique story of you.

Meet Samantha, the world’s first intelligent operating system. Samantha is not real yet, only imagined in Jonze’s film Her; however, she gives us a glimpse of our soon-to-be augmented life, when our devices come to learn and grow with us. Dr. Genevieve Bell, Director of Interaction and Experience Research at Intel, describes a world of computing in which we enter a much more reciprocal relationship with technology: it begins to look after us, anticipating our needs and doing things on our behalf. Dr. Bell’s predictions are echoed by Carolina Milanesi, Gartner’s Research Vice President. Milanesi states that by 2017, your smartphone will be smarter than you. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data.” Gartner’s research claims initial services will be performed “automatically,” generally assisting with menial, significantly time-consuming tasks such as time-bound events like calendaring, or responding to mundane email messages. Confidence will be built gradually by outsourcing these menial tasks to the smartphone, with the expectation that consumers will become more accustomed to smartphone apps and services taking control of other aspects of their lives.

Images from Intel’s video interview with Dr. Genevieve Bell: What Will Personal Computers Be Like in 2020?

Gartner calls this the era of cognizant computing and identifies its four stages as: Sync Me, See Me, Know Me, Be Me. ‘Sync Me’ and ‘See Me’ are currently occurring, with ‘Know Me’ and ‘Be Me’ just ahead, as we see Samantha perform. ‘Sync Me’ stores copies of your digital assets, which are kept in sync across all contexts and end points. ‘See Me’ knows where you currently are and where you have been, in both the real world and on the Internet, as well as understanding your mood and context to best provide services. ‘Know Me’ understands what you need and want and proactively presents it to you, with ‘Be Me’ as the final step, where the smart device acts on your behalf based on learning. Samantha comes to know Theodore very well, and with access to all of his emails, files, and other personal information, her tasks range from managing his calendar to gathering some of the love letters he ghostwrites and sending them to a publisher, acting on his behalf.
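To make the progression concrete, here is a minimal sketch of how the four stages might map onto a single personal software agent. This is purely my own illustration with invented names; Gartner describes stages of behaviour, not an API, and no real product is implied:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Gartner's four stages of cognizant
# computing (Sync Me, See Me, Know Me, Be Me) as one agent.
# All class, method, and field names are invented for
# illustration; they are not Gartner's or any vendor's API.

@dataclass
class CognizantAgent:
    assets: dict = field(default_factory=dict)       # Sync Me
    context: dict = field(default_factory=dict)      # See Me
    preferences: dict = field(default_factory=dict)  # Know Me

    def sync(self, device_data: dict) -> None:
        """Sync Me: keep copies of digital assets across end points."""
        self.assets.update(device_data)

    def observe(self, signals: dict) -> None:
        """See Me: record where the user is and has been, and their mood."""
        self.context.update(signals)

    def learn(self) -> None:
        """Know Me: infer needs and wants from assets plus context."""
        if "meeting" in str(self.assets.get("calendar", "")):
            self.preferences["wake_early"] = self.context.get("traffic") == "heavy"

    def act(self) -> str:
        """Be Me: act on the user's behalf based on what was learned."""
        self.learn()
        if self.preferences.get("wake_early"):
            return "Alarm moved 30 minutes earlier; meeting contact notified."
        return "No action needed yet."

agent = CognizantAgent()
agent.sync({"calendar": "9am meeting with boss"})
agent.observe({"traffic": "heavy", "location": "home"})
print(agent.act())  # -> "Alarm moved 30 minutes earlier; ..."
```

Even in this toy form, each stage only works because the agent accumulates more of you, which is exactly where the privacy questions below begin.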

Milanesi states, “Phones will become our secret digital agent, but only if we are willing to provide the information they require.” Privacy issues will certainly come into play, as will a user’s level of comfort in sharing information. Dr. Bell observes that we will go beyond “an interaction” with technology to entering a trusting “relationship” with our devices. She reflects that a great deal of work goes into “getting goodness” out of our computing technology today and that we “have to tell it a tremendous amount.” Ten years from now, she continues, our devices will know us in a very different way, being intuitive about who we are.

Image: Still from Spike Jonze’s film Her

The world is filled with AR markers, no longer clearly distinguishable as black-and-white glyphs or QR-code triggers; the world itself and everything in it is now one giant trackable: people, faces, emotions, voices, eye movement, gesture, heart rate, and more. The second wave of AR presents a brave new digital frontier, where the objects in our world are shape-shifting, invoked, and on-demand. This will be an era of new interaction design and user experiences in AR, moving towards natural user interfaces with heightened immediacy; we will be in the presence of the ‘thing’, more deeply immersed, yet simultaneously with both feet rooted in our physical reality. Our devices will not only get smaller, faster, and closer to (perhaps even implanted inside) our bodies; they will also be smarter in how they connect with and speak to each other and to multiple sensors to present a multi-modal AR experience across all devices.

Samantha is just this. She is a universal operating system that seamlessly and intelligently connects everything in her user Theodore’s world to help him be more human.

In a telephone conversation, Intel Futurist Brian David Johnson described to me how, for decades, our relationship with technology has been based on an input/output model of command and control: if commands aren’t communicated correctly, or dare we have an accent, it breaks. Today, we are entering into intelligent relationships with technology. The computer knows you and how you are doing on any particular day, and can deliver a personalized experience to increase your productivity. Johnson says this can “help us to be more human,” and comments on how Samantha nurses Theodore back to having more human relationships. Johnson states that technology is just a tool: we design our tools and imbue them with our sense of humanity and our values. We have the ability to design our machines to take care of the people we love, allowing us to extend our humanity. He calls this designing “our better angels.” Johnson says the question we need to ask is, “What are we optimizing for?” The answer needs to be making people’s lives better, and I wholeheartedly agree.

My personal hopes for the new AR are that by entering into this more intelligent relationship with technology, we are freed to get back to human relationships and to doing what we love in the real world with real people, without our heads buried in screens. There is a whole beautiful tactile reality out there that AR can help us to explore and ‘see’ better, engaging with each other in more human ways. Get ready for a smarter, more human, and augmented you.

Let’s continue the conversation on Twitter: I’m @ARstories.

The definition of augmented reality is quickly expanding, moving beyond gimmicky 2D and 3D digital overlays atop reality to more context-driven and personalized experiences. The new AR combines contextual computing with things like machine learning, artificial intelligence, sensors, big data, and social media to deliver highly relevant information and experiences that are tailored, adaptive, and even predictive.

So what does this mean for UX design?

There is a tremendous opportunity for UX designers to lead the development of this emerging medium to change the way people experience reality.

Herein lie the challenges, and also the immense opportunities:

The new AR will be highly adaptive, based on the user’s continually changing environment and context. This will require UX designers to create seamless experiences across environments and multiple devices, with an acute awareness of and sensitivity to shifting context, where the user is always at the centre. Wearables will play a major role in the new AR, not limited to digital glasses like Google Glass. A plethora of data will be continually analyzed about users and their surroundings, ranging from demographic, to historical (past behaviours and interactions), to situational/environmental (including things like location, current device, time, weather, and even mood); a sketch of one such context model follows below.
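As a designer’s thought experiment, here is a minimal sketch of how that bundle of signals might look as a data structure, and how shifting context could drive an adaptive presentation. Everything here is an invented illustration; no real AR platform or API is implied:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical context model for adaptive AR UX. All field and
# function names are invented for illustration only.

@dataclass
class UserContext:
    # Demographic
    language: str
    # Historical: past behaviours and interactions
    recent_interactions: list = field(default_factory=list)
    # Situational / environmental
    location: str = "unknown"
    device: str = "phone"        # phone, glasses, watch...
    local_time: str = "00:00"
    weather: str = "clear"
    mood: Optional[str] = None   # inferred from sensors, if available

def select_presentation(ctx: UserContext) -> str:
    """Choose how AR content should appear given the current context."""
    if ctx.device == "glasses" and ctx.mood == "focused":
        return "minimal heads-up cues only"
    if ctx.weather == "rainy" and ctx.location == "outdoors":
        return "high-contrast overlays with larger targets"
    return "default adaptive layout"

ctx = UserContext(language="en", recent_interactions=["viewed map"],
                  location="outdoors", device="glasses",
                  local_time="08:30", weather="rainy")
print(select_presentation(ctx))  # -> high-contrast overlays with larger targets
```

The hard design question is not the plumbing but the policy: which signals are allowed to change the experience, and how visibly they do so.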

How will all of this data come together to create a relevant experience, delivered in a natural, intuitive, human-centric way? How can we apply UX to build a more reciprocal relationship with our new devices and this new technology?

Technology should not interrupt our lives; it should work in the background, appearing when needed to enhance productivity and connectivity to the things that matter to us most. As a UX community, we must ask, ‘How can we design AR experiences that enhance a user’s life and make it easier?’ As Nicholas Negroponte said, “Computing is not about computers anymore. It is about living.”

How do we want to live in and with AR, and how will it shape people’s lives? This will be the UX designer’s task.

 

Let’s continue the conversation on Twitter: I’m @ARstories.

Her bio-alarm clock wakes her up gently; she feels renewed. Her glasses give her the stats on her sleep, syncing with the accelerometer in her smartwatch. She’s pleased with the 84% sleep quality rating. She takes a moment to scan her REM log, adding a voice note with the fantastical things she saw in her dreams.

She rises and begins the day with her yoga practice, to stretch mind and body. Her EEG readings are excellent, with a high spike in alpha brain wave activity. She opts for some music from her library based on her heart rate. She smiles when The xx come on over the speakers.

She heads into the shower, but not before removing her glasses (she snickers, thinking only ‘white men wearing Glass’ actually shower with them on).

Dressed, she walks into the kitchen to make a light breakfast. The Wall Street Journal, The New York Times, and Wired Magazine instantly light up before her, offering the day’s headlines and articles relevant to her week ahead. Her serendipity app brings in a few articles from other news sources she might not necessarily read. She tags ‘more like this’ on the one she likes.

She fields a phone call from her boss while drinking her coffee and is able to reference both his recent email and an article in the morning’s paper, which, of course, impresses him.

She walks to work while her glasses provide her with shopping deals of the day. She receives a popup reminder of a friend’s upcoming birthday with a photo of the blouse her friend liked when they were last window shopping together.

She enters a crowded, all-male conference room, comfortably taking a seat in the middle of the investors’ meeting. As she scans the room, her glasses use facial recognition to provide her with everyone’s name and LinkedIn profile. She greets every member by name, asking about their recent projects, ventures, and successes, as supplied by her glasses. They’re noticeably impressed. When she assumes the lead on the group presentation, her glasses sync to her visual presentation. She had used her voice analyzer app the night before to rehearse the presentation and her tone. She walks the room with confidence and exits, leaving a dazzled crowd.

It’s been a long but fruitful day. She heads out for a cocktail after work. The doors swing open into a crowded lounge and she walks in alone, but not unnoticed. She checks tomorrow’s calendar with her glasses while ordering a drink. A handsome stranger approaches her, asking for her name. Her glasses immediately alert her that he’s 35 and a successful broker whose address matches his mother’s house. She smiles and walks away. Well, at least he didn’t pull the ‘So, are you a Gemini?’ line. She laughs to herself. Her attention shifts to a guy across the bar; he’s dressed casually, but he locks eyes and smiles. Her glasses tell her he’s 31, a painter, and a volunteer at the children’s hospital art wing. She sends him a friend request with her glasses along with a message, “Always wanted to learn how to paint….Helen”, smiles, and walks away.

She returns home and checks her email on her glasses. She has several emails congratulating her on a fantastic presentation and requesting meetings. She replies by checking her availability on her calendar, which appears next to the message on her glasses. The painter from the bar replies, “Anytime ;)” She takes off her glasses and turns out the light.

[Hat tip and very gracious bows to James C. Nelson and Dr. Caitlin Fisher]

Augmented Reality eyewear and Google’s Glass will take us to new heights, quite literally: the first sequence in the “How It Feels [through Glass]” video was shot via Glass in a hot air balloon.


It was 155 years ago, in 1858, that the first aerial photograph was taken, on a balloon flight over Paris, France, by the artist Nadar (born Gaspard-Félix Tournachon, 1820-1910). A pioneer in the newly emerging medium of photography, Nadar also attempted underground photography, using artificial light to produce pictures of the catacombs and sewers of Paris. Nadar’s technical experiments and innovation took us, via his camera, to places previously inaccessible to photography, inspiring new ways of seeing and capturing our world.

AR eyewear and Glass offer this same opportunity at a time when AR is emerging as a new medium, one that will give way to novel conventions, stylistic modes, and genres. As this article’s title, borrowed from Dr. Seuss, suggests, AR also promises to transport us to wondrous, magical places we’ve yet to see.

This article is a follow up to a post I wrote a year ago posing the questions: Will Google’s Project Glass change the way we see & capture the world in Augmented Reality (AR) and what kind of new visual space will emerge?

As both a practitioner and a PhD researcher specializing in AR for nearly a decade, I am interested in how AR will come to change the way we see, experience, and interact with our world, with a focus on the emergence of a new media language of AR and storytelling.

I’ve previously identified Point-of-View (PoV) as one of “The 4 Ideas That Will Change AR”, noting the possibilities for new stylistic motifs to emerge based on this principle. I’d like to revisit the significance of PoV in AR now, particularly with the release of the Google Glass Explorer Edition. PoV, or more specifically “Point-of-Eye”, is a characteristic of AR eyewear that is beginning to influence contemporary visual culture in the age of AR.


Image: Google Glass

AR eyewear like “Glass” (2013) and Steve Mann’s “Digital Eye Glass” (EyeTap) (1981) is worn in front of the human eye, serving as a camera both to record the viewer’s environment and to superimpose computer-generated imagery atop it. Given the camera’s position, such devices present a direct ‘Point-of-Eye’ (PoE), as Mann calls it, providing the ability to see through someone else’s eyes.

AR eyewear like Glass remediates the traditional camera, aligning our eye once again with the viewfinder, enabling hands-free PoE photography and videography. Eye am the camera.

Contemporary mass-market digital photography has us forever looking at a screen as we document an event, rather than seeing or engaging with the actual event. As comedian Louis C.K. so facetiously points out, we are continually holding up a screen to our faces, blocking our vision of the actual event with our digital devices. “Everyone’s watching a shitty movie of something that’s happening 10 feet away,” he says, while the ‘resolution on the actual thing is unbelievable’.

Glass presents an opportunity for your experience in the moment to be documented as is, without having to stop and grab your camera. Through PoE, Glass captures what you are seeing as you see it, very close to how you are seeing it. Google co-founder Sergey Brin states, “I think this can bring on a new style of photography that allows you to be more intimate with the world you are capturing, and doesn’t take you away from it.”


Image: Recording video with Google Glass, “Record What You See. Hands Free.”

I agree with Brin; Glass will bring on new stylistic modes and conventions through PoE, which also appears to be influencing other mediums outside of AR.

Take, for instance, the viral Instagram series “Follow Me” by Murad Osmann, featuring photographs of his hand being led by his girlfriend to some of the world’s most iconic landmarks.


Photographs by Murad Osmann, “Follow Me” series, 2013.

(Similar in style to the Google Glass video-recording visual above, in which a ballerina takes and leads the viewer’s hand.)

The article “How Will Google Glass Change Filmmaking?” identifies two other examples in contemporary music videos: the viral first-person music video by Biting Elbows and the award-winning music video for Cinnamon Chasers’ song “Luv Deluxe”.

In “The Cinema as a Model for the Genealogy of Media” (2002), André Gaudreault and Philippe Marion state, “The history of early cinema leads us, successively, from the appearance of a technological process, the apparatus, to the emergence of an initial culture, that of ‘animated pictures’, and finally to the constitution of an established media institution” (14). AR is currently in a transition period from a technological process to the emergence of an initial AR culture, one of ‘superimposed pictures’, with PoE as a characteristic of the AR apparatus that will shape stylistic modes both inside and outside the medium, contributing to a larger Visual Culture.

Gaudreault and Marion identify the key players in this process as the inventors responsible for the medium’s appearance, the camera operators for its emergence, and the first film directors for its constitution. ‘Camera operators’ around the world are beginning to contribute to AR’s emergence as a medium and, through this process, towards an articulation of a media language of AR. Mann, described as the father of wearable computing, has been a ‘camera operator’ since the ’90s. In 2013, Google Glass’s early-adopter program selected 8,000 ‘camera operators’ to explore these possibilities, and Kickstarter proposals have since appeared from directors for PoE film projects, including both documentaries and dramas. What new stories will the AR apparatus enable? Like cinema before it, what novel genres, conventions, and tropes will emerge in this new medium towards its constitution?

Let’s continue the conversation on Twitter: I’m @ARstories.

I’m greatly honoured to be named among the NEXT 100 Top Influencers of the Digital Industry in 2013! A heartfelt thank you to NEXT Berlin, and the independent jury of experts. Very flattered to be included on this wonderful list of trailblazers.


http://nextberlin.eu/next-100/

I’m very pleased to write about my big news this month: the announcement of my role as Chief Innovation Officer at Infinity Augmented Reality (AR) in New York City. I can’t wait to share what we’ve been working on at Infinity; check www.infinityar.com often for updates. There are some very exciting things just around the corner.

Nearing the completion of my doctoral degree specializing in Augmented Reality, I will continue to share my PhD research and musings here at Augmented Stories. Also be on the lookout for more writing and future forecasting from me at Infinity, in a format dedicated to the future of AR that we are calling “What If…?”. You can subscribe to the Infinity newsletter here, and we’ll share the details with you when we launch this special section very soon.

Moving from hand-held AR devices to eyewear, we become hands-free, able to fully immerse ourselves in the AR experience; we are no longer holding up a looking glass, such as a tablet or a smartphone, as a portal into another world. We are now truly seeing; we are there; it is our reality.

AR will enable us to experience, in real time, events and places we may not otherwise have access to. Imagine participating in a live concert or sporting event, or visiting the destination of your choice, all from the comfort of your own home through AR. The stereoscope of the Victorian era created the concept of the ‘armchair traveller’, able to ‘travel’ to distant lands from home by looking at 3D photographs through a special viewing device. Film and television extended this notion of being transported to other worlds through the moving picture and live telecasts viewed on dedicated screens. Now, with AR, we can bring the world to us anywhere and anytime, appearing in our space and reality as a highly personalized experience.

AR is no longer a technology only accessible to a select few, but a mass medium that is being experienced by consumers on a daily basis.

Thank you each for your incredible support over the past 8 years and all of the warm congratulatory wishes on my new position. I can’t wait to share our augmented future with you. It’s going to be pretty marvellous.

My best to you each,
Helen




