Mediascapes: Context-Aware Multimedia Experiences


Multimedia at Work

Qibin Sun, Institute for Infocomm Research

Stuart P. Stenton, Richard Hull, Patrick M. Goddi, Josephine E. Reid, Ben J. Clayton, Tom J. Melamed, and Susie Wee, Hewlett-Packard Laboratories

The IT industry has a longstanding mantra: deliver “anything, anytime, anywhere.” Here we describe research addressing the next generation of mobility technology, which will deliver “the right experience in the right moment.” The maturing field of pervasive computing yields the technology and the challenges described here. We focus on rich interactive mobile experiences triggered by context information available from users, their environment, and a wealth of context-enabled content. We call such applications mediascapes.

Recent developments

For more than a decade, scientists have demonstrated the potential value of combining portable computing, embedded sensors, and pervasive networking.1-5 Two recent developments have catalyzed activity in the subarea of location-based services (LBS):

❚ the availability of GPS sensors in consumer devices, primarily for satellite navigation, and

❚ the integration of Geographical Information Systems (GIS) technology into the Web through map and satellite image interfaces.

Satellite navigation systems deliver a location to the user and a route to get there. Computer scientists have applied this technology to deliver other types of experiences, such as guided information tours (see http://www.gocarsf.com/) and location-based games (see http://www.pacmanhattan.com/). Now that maps can interface with GIS content, physical-world metadata can accelerate the process of connecting digital content to locations. We think of LBS as the first generation of, or a subcategory of, the broader class of context-based mediascapes.

Editor’s Note

With today’s abundance of captured and created graphics, images, audio, and video in hand, can we make full use of this media to explore new experiences or applications? Along these lines, HP Labs has developed a prototype mediascape technology called mscape. A mediascape is a context-aware multimedia experience that lets you trigger multimedia content based on your context, such as your physical location. Although some similar concepts have been proposed piecemeal here and there before, mediascapes offer the user some totally new experiences. Want to know more details? Follow me into the world of mediascapes. —Qibin Sun


Mediascapes and mscape

Mediascapes infuse the landscape of our everyday environment with digital content and services, delivering compelling user experiences as the user interacts with the physical world. They hail from a world of sensor-enabled, sometimes network-connected devices accessing context-coded information and services. Simply put, a mediascape plays multimedia content (image, audio, or video) on a mobile device in response to context triggers. Authors can employ any sensors to provide context and trigger multimedia, including location sensors, infrared (IR) beacons, radio frequency identification (RFID) tags, motion sensors, heart-rate monitors, and other biomonitors.

Until recently, technology specialists built such applications for consumption in a managed environment (for example, at a conference or demonstration event), for research, or as a commercial rental. A technology we call mscape enables a broader range of designers from the creative industries to build and explore the possibilities of context-based interactive media applications: anyone can create, distribute, share, and play mediascapes. The mscape technology makes it easy to create and play context-aware multimedia experiences. It has an easy-to-use authoring tool that lets people create their own mediascapes by importing content and using simple logical rules to combine sensed events with media playback.

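To make that rule style concrete, here is a rough sketch in Python rather than mscape’s own scripting format; the event names, clip names, and Rule structure are our invention, not the toolkit’s API:

```python
# Hypothetical sketch of the rule style described above: a sensed event
# paired with a guard condition and a media action.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    event: str                          # for example, "enter:harbourside"
    condition: Callable[[Dict], bool]   # guard evaluated against session state
    action: Callable[[], None]          # media playback to trigger

def play(clip: str) -> Callable[[], None]:
    return lambda: print("playing", clip)

rules: List[Rule] = [
    Rule("enter:harbourside", lambda state: True, play("harbourside_intro.mp3")),
    Rule("exit:harbourside", lambda state: True, play("seagulls_fade.wav")),
]

def dispatch(event: str, state: Dict) -> None:
    # Fire every rule whose event matches and whose guard holds.
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action()

dispatch("enter:harbourside", {})   # -> playing harbourside_intro.mp3
```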

A scripting language that ties together the media content, sensor trigger events, and context logic conveys the result. The results of public trials and user research have guided the development of this scripting language.6 These studies informed developers how to refine the toolkit and decide significant questions about the direction of the research.

A publishing platform, available at http://www.mscapers.com, makes mscape technology accessible through a Web portal that lets people share and distribute mediascapes. It also provides a place where the emerging community of mediascape builders can share, experience, and develop best-practice guidelines for design in the new medium.

User experience has been a central concern of mediascape research since its inception, and developers collected user data with every pilot and deployment. Each public pilot added to our knowledge of the medium, including how physical and digital experiences can be effectively fused and the different ways people want to explore its potential.

Mobile Bristol

The HP Mediascape project originated from a collaboration among HP, the University of Bristol, and The Appliance Studio, with matching funding from the UK Government’s Department of Trade and Industry. During the life of this project, called Mobile Bristol (see http://www.mobilebristol.com), researchers carried out a number of public and educational trials and created a prototype authoring toolkit.7 This early prototype, the Mobile Bristol Toolkit, was available for download on the Mobile Bristol Web site, and over 1,000 downloads worldwide have fueled the emergence of a mediascape design community in Europe, the US, and Canada.

After the completion of the Mobile Bristol project, the HP team redesigned and built a new toolkit from the ground up, continuing to use public trials as guides. They created two new Web sites: the first, designed by Futurelab (see http://www.createascape.org), offers access to a limited version of the toolkit for use in schools; the second, the mscapers publishing portal, lets users create, download, and share mediascapes. To date, artists, filmmakers, broadcasters, educationalists, students, authors, and researchers have all created mediascapes.

Additional Resources

The following additional resources might interest the reader:

❚ The latest mscape toolkit and mediascapes are available at http://www.mscapers.com.

❚ The schools’ mediascape kit is available at http://www.createascape.org.

❚ The early Mobile Bristol Toolkit is available at http://www.mobilebristol.com.

The mediascape experience

A mediascape experience is media-rich, context-aware, physical, and mobile, and it can be social or personal as well. The media used can include images, video, audio, and Flash interactions. What makes a mediascape different from other rich media experiences delivered on mobile devices is the logic that specifies its relevance to the physical situation, that is, to a person’s context. For example, if a person walks into a specific space, the device triggers media content according to the logic assigned to that space. This logic may specify a behavior that depends on the number of times the person has entered the space: the first time, the device may trigger a longer audio description of what they can do there, but on each subsequent visit it may trigger a shorter audio stream with different content. Just as when meeting a nonplayer character (NPC) in a video game, a shared context begins to evolve.

The simplest way to approximate a mediascape experience, in the absence of sensing capabilities, is for the audience to self-report by manually triggering the delivery of media. Applications like pod tours, guides (see http://www.alcatraz.us/), Urban Tapestries,8 and Yellow Arrow (see http://yellowarrow.net/index2.php) all work this way. The capabilities of Google’s My Maps could support this form of simple application.

Cheap, low-power sensors are becoming increasingly available. The Nintendo Wii and Sony PlayStation 3 successfully use accelerometers, and some laptops use them to park the hard drive if the machine is dropped, minimizing the damage when it hits the floor. Technologists have created digital compasses the size of a postage stamp, and the medical field contributes a whole host of biosensors.




The challenges for architects of this new medium include the following:

❚ How do they make sense of the data from these sensors?

❚ How do they combine data from these sensors?

❚ What are the most appropriate abstractions of sensor data for designers?

❚ What are the semantics of contextual events that will allow situations to be described and identified as a combination of sensor triggers?

Mediascapes can provide a social experience that brings people together and forms communities within a broader audience. On 2 July 2005, The Washington Post described Yellow Arrow as “geographical blogging,” referring to the way the world can be tagged with personal experience and commentary. In this way, communities can communicate through and about their physical environment. Inhabitants of, or visitors to, a space can access the social history and presence of a locale as they pass through. As mediascapes become more integrated with communications such as instant messaging, chat rooms, voice, and video, developers face the challenges of integrating these modalities as triggers and media feeds in a mediascape and of making sure the network prioritizes the different types of traffic appropriately to maintain the experience.

The mscape authoring tool

For an authoring tool to fulfill the potential of mediascapes as a new medium, it must support the following:

❚ an extensible language for describing context,

❚ the specification of context events and consequences,

❚ a representation of contextual state,

❚ the storage and management of media files,

❚ an authoring interface that allows nonprogrammers to explore new genres of mediascape, and

❚ an emulator for testing the contextual states and consequences.

Figure 1. The mscape authoring tool allows people to create location-based mediascapes. It is available for download at http://www.mscapers.com.

As the complexity of sensed activity and context states increases, the challenge will be to keep the authoring interface simple and accessible. Authors with a broad range of skills and perspectives must be able to explore new applications and genres of mediascapes for the medium to realize its full potential; the creators of such a technology rarely also originate its popular and emergent uses.

For location-based mediascapes, the mscape authoring tool has a graphical user interface that lets authors specify regions where content is to be triggered. The tool lets authors import media content, including images, video, and audio. The author can then drop media content onto the regions where it should be played (see Figure 1). The tool also allows event-based logic to determine how the media should be played: the author can specify an action on entering a region, such as “play video,” and on exiting it, such as “stop video.” In the case of audio, files can be faded in or out, looped, or played to the end.

Furthermore, the tool supports state variables that allow the author to specify actions based on state. This, for example, lets the author play one media clip when a user first enters a region and a different clip when the user re-enters the region later in the session. State variables can also be used across regions, allowing the author to specify logic such as playing a media clip when a person enters a region, but only if the person has visited another region first.
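A minimal sketch of this state-variable behavior, in Python rather than the tool’s own scripting format, with invented region and clip names:

```python
# Sketch of state-dependent triggering: a visit counter per region, plus
# a cross-region condition ("play only if the user visited the gate first").
visits = {}   # region name -> number of entries this session

def play(clip):
    print("playing", clip)

def on_enter(region):
    visits[region] = visits.get(region, 0) + 1
    if region == "courtyard":
        if visits[region] == 1:
            play("courtyard_full_intro.mp3")       # first visit: long clip
        else:
            play("courtyard_short_reminder.mp3")   # re-entry: shorter clip
    elif region == "tower" and visits.get("gate", 0) > 0:
        play("tower_story.mp3")   # cross-region condition: gate seen first

on_enter("courtyard")   # -> courtyard_full_intro.mp3
on_enter("tower")       # silent: the gate hasn't been visited yet
on_enter("gate")        # no clip attached to the gate in this sketch
on_enter("tower")       # -> tower_story.mp3
```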

Advanced developers can extend the authoring tool to add drivers for any sensing device: the authoring platform is built on an extensible plug-in architecture that allows the addition of new sensors and the specification of new contexts.
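The plug-in idea might look like the following sketch; the class names and the heart-rate example are our assumptions for illustration, not the actual mscape driver API:

```python
# Sketch of a sensor plug-in interface: each driver normalizes raw readings
# into named context events that the mediascape logic can subscribe to.
from abc import ABC, abstractmethod
from typing import Callable, Dict, List

class SensorDriver(ABC):
    def __init__(self) -> None:
        self.listeners: List[Callable[[str, Dict], None]] = []

    def emit(self, event: str, data: Dict) -> None:
        for listener in self.listeners:
            listener(event, data)

    @abstractmethod
    def poll(self) -> None:
        """Read the hardware and emit zero or more context events."""

class HeartRateDriver(SensorDriver):
    def __init__(self, read_bpm: Callable[[], int]) -> None:
        super().__init__()
        self.read_bpm = read_bpm   # injected hardware read (e.g., Bluetooth)

    def poll(self) -> None:
        bpm = self.read_bpm()
        zone = "high" if bpm > 140 else "normal"   # illustrative threshold
        self.emit("heartrate", {"bpm": bpm, "zone": zone})

driver = HeartRateDriver(lambda: 128)
driver.listeners.append(lambda event, data: print(event, data))
driver.poll()   # -> heartrate {'bpm': 128, 'zone': 'normal'}
```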


Media format and scripting language

The format of the new medium comprises a scripting language that draws on a file store of content spanning a number of media types: HTML; MP3 or WAV audio; JPEG or GIF images; MPEG or WMV video; and SWF Flash interactions. The scripting language pulls together the data from the sensors and holds the logic that connects this data to the delivery of media fragments from the content store. The author can choose the method of media delivery and the location of the content depending on the demands of the application.


The mscape software client

A software player of mediascapes created with the mscape authoring tool requires two things of the handheld device: it must run the Windows Mobile operating system, and it must connect to the required sensors. The sensor a mediascape most likely requires is GPS; some devices have integrated GPS, but plug-in or Bluetooth-connected GPS receivers also work. Users can download the mscape player and existing mediascapes from http://www.mscapers.com. The mscape player supports both the playing and the authoring of mediascapes.

Some authors tie mediascapes to a specific location, where the experience depends deeply on the fusion of digital content with landmarks in the physical surroundings. Others choose to make portable mediascapes that are less dependent on specific physical locations and might only need an open space. Users can load these onto a player without specific location information and roll them out like a digital canvas in a suitable space.
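For example, a Bluetooth GPS receiver typically streams NMEA 0183 sentences, from which a player needs only a few fields. Here is a simplified, illustrative parse of a $GPGGA sentence (no checksum validation; not mscape code):

```python
def parse_gpgga(sentence: str):
    """Extract latitude and longitude from an NMEA $GPGGA sentence
    (simplified: no checksum validation, no error handling)."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None   # not a GGA sentence, or no satellite fix yet
    # Latitude arrives as ddmm.mmmm, longitude as dddmm.mmmm.
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon

print(parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# -> (48.1173, 11.516666666666667)
```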

Figure 2. A basic mediascape client has user input, on-device sensors, stored media on the device, and scripting logic that ties these together into the mediascape experience.


Playing modes

Many users can hold mediascape applications on removable storage cards, such as Secure Digital (SD) memory cards. This method of storage and delivery works well for applications that don’t require frequent updates to provide timely information; the user can download the mediascape from a PC before leaving the house or office, or over a wireless network. Figure 2 shows a basic mediascape client that has user input, on-device sensors, stored media on the device, and scripting logic that ties these together into the mediascape experience.

A mediascape client can also use beacons to trigger media content (see Figure 3). For example, an author can place an IR beacon with an ID in a specific location; a mediascape client that has an IR sensor can then be swiped past the beacon to trigger a media event. In addition, the beacon itself may have sensors that sense the context of the specific location and convey that information to the mediascape client through the IR interface or over the network.

In future implementations, mediascapes can also be built with a client-server architecture using streaming media over a wireless network. This is particularly useful if the mediascape content needs frequent updating because it reflects rapidly changing or time-based information, or the concurrent actions of others (as in a multiplayer game). Figure 4 shows the resulting system.
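The beacon mechanism described above can be as simple as a lookup from beacon ID to media action, as in this illustrative sketch with invented IDs and clips:

```python
# Sketch: map short-range beacon IDs to media events. The mediascape
# client receives an ID over IR (or RF) and fires the associated action.
BEACON_ACTIONS = {
    0x01: ("play", "guard_warning.mp3"),    # carried by a moving guide
    0x02: ("play", "gatehouse_story.mp3"),  # fixed beacon at a gatehouse
}

def on_beacon(beacon_id: int) -> None:
    action = BEACON_ACTIONS.get(beacon_id)
    if action:
        verb, clip = action
        print(verb, clip)

on_beacon(0x01)   # -> play guard_warning.mp3
on_beacon(0xFF)   # unknown beacon: ignored
```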

Figure 3. Mediascape clients can also use networked sensors, local sensors, and beacons to trigger media events.

Figure 4. Mediascapes can be built with a client-server architecture where the media can be stored on a media content server and streamed over a wireless network to the mediascape client.





Figure 5. Multiplayer mediascapes can be implemented in a peer-to-peer mode where mediascape clients interact with each other directly through network connections to provide inputs to the mediascape logic.


The mediascape client may have media on the device itself. In addition, the device could interact with a media content server that sends updated media to the client.

Multiplayer mediascapes can operate in different modes. They can run in a peer-to-peer mode, where the mediascape clients interact with each other directly through network connections (see Figure 5). Alternatively, the clients can interact through a multiplayer server, which holds the logic for the mediascape experience (see Figure 6).

It is also useful to consider the mobile device’s capabilities and the network bandwidth. With enough network bandwidth, the mediascape could use an in-network rendering model (as Figure 7 shows), where a machine in the network renders the player’s view and streams the resulting video to the player through a regular video streaming connection. This way, the device only needs to decode a video stream (for example, MPEG) rather than render full graphics. On the other hand, if the device has sufficient computing capability but little network bandwidth, it could operate in a mode where commands are sent over the network but the device itself renders the game through the software client.

Delivering mediascapes with these capabilities will require contextual intelligence in the network and will place heavy demands on the network’s ability to manage demands for its bandwidth.

So far we’ve considered portable sensors connected to the handheld device and the delivery of media via that same device. We’ve also developed mediascapes that use IR beacons to provide location-specific context. In future mediascapes, data from sensors in the environment or in the network infrastructure could trigger media that is then delivered to output devices present in the user’s location, such as public displays, home high-fidelity systems, or cinemas.
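The bandwidth-versus-capability choice described above might reduce to a decision like the following sketch; the threshold and mode names are our assumptions, not figures from the article:

```python
# Sketch: pick a rendering mode from measured device and network capability.
def choose_mode(bandwidth_kbps: float, device_can_render: bool) -> str:
    if bandwidth_kbps >= 500:
        # Enough bandwidth to stream rendered video: render in the network,
        # so the device only needs to decode a video stream.
        return "in-network rendering"
    if device_can_render:
        # Low bandwidth but a capable device: send compact commands over
        # the network and render locally in the software client.
        return "local rendering"
    return "degraded: static media only"

print(choose_mode(1500, False))   # -> in-network rendering
print(choose_mode(64, True))      # -> local rendering
```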


Building a community of practice


Figure 6. Multiplayer mediascapes can also be built with a client-server architecture. In this case, mediascape clients interact with a multiplayer server that performs the logic for the mediascape experience.




This budding science has the potential to grow into a major revolution in the multimedia field, spreading into different formats across various industries. However, as with all technologies at this delicate formative stage, a strong community must be built to foster that growth.

Growing experience and design expertise

Designing mediascapes requires a new set of skills, techniques, and artistry. The merging of virtual content with physical space extends the boundaries of classic human-computer interaction. For some applications, the art of good mediascape design lies in the right choice of media. In some situations, the new medium demands using more audio to augment the visual nature of the physical environment; an audience could lose the balanced fusion of physical and digital if they spend most of their time looking at a mobile device’s screen. The medium is still in its early stages, and new design guidelines emerge from each new exploratory application.

A computer screen with a mouse and keyboard is no longer the only form of interaction. In the physical world of mobile applications, where movement can trigger different media, metaphors such as the desktop don’t make sense. We need to establish a new set of design guidelines and interaction styles, which requires new skills to evolve among the authoring community. Though this certainly won’t happen overnight, a prolonged learning curve could lead to failure of the technology’s adoption. To secure the future of the new medium, a wide spectrum of potential authors needs to get their creative hands on the required design skills and authoring capabilities.

Fostering the emergence of use

When the Lumière brothers invented the cinematograph, the first moving picture camera, they didn’t conceive of today’s multibillion-dollar film industry. They thought they had invented a device for capturing and reviewing moving photographs. Some time later, entrepreneurs with a different perspective recognized the value of delivering narrative as an engaging experience. Many technologies turn out to be more valuable than their original purposes, and mediascape technology was developed with this principle in mind. For this reason, its developers produced platforms and tools for creating, sharing, and experiencing mediascapes and have made these tools available to the public. Once a technology is available beyond its initially targeted markets, its lasting value can emerge over time and its adoption can accelerate.

Publishing platform and Web portal

Just as blogging and podcasting platforms have stimulated rapid growth in the number of digital media authors and created demand from enthusiastic consumers, a mediascape publishing platform has the potential to do the same. The mscapers Web portal attempts to build a mediascape community of authors and consumers by providing access to creation tools and a means of broader distribution. Users can browse existing mediascapes and download them to their handheld devices via their PCs. The publishing platform allows authors to upload mediascapes they’ve created using the authoring tool or one of the Web template wizards, which make it easier for users to create their first portable mediascapes. Consumers and authors can rate and discuss the mediascapes published on the site, sharing best practices and creative solutions through forums, as well as posting design guidelines on a community wiki.

Figure 7. Mediascapes can be experienced on a standard streaming media client by using a network rendering model that renders the mediascape on the server and then encodes and streams the resulting video to the streaming media client. Context can be provided by the client or by the infrastructure.

Deployments

To date, users have applied mediascapes in education, games, art, guided tours, social narratives, and time travel (historical reconstructions). The early adopters have used the embryonic technology in the areas of arts, education, gaming, and broadcasting; these individuals and organizations possess the creative skills and motivations to explore new forms of expression. Other sources describe installations9-11 including Riot!, Savannah, CitiTag, and Yosemite (see http://video.telecomtv.com/hp/MscapeWIP3.wmv).

Here we describe three mediascape installations, tested and evaluated in public trials: an experimental installation at the Tower of London; one across three city blocks of San Francisco (see http://userwww.sfsu.edu/~plevine/projects/mediascape/mediascape.html); and a game created by sports scientists and game designers, called ‘Ere be Dragons (see http://lansdown.mdx.ac.uk/people/stephen/dragons/index.html), that has toured Europe, Singapore, and the US. ‘Ere be Dragons was the first mediascape to deliver interactive media based on location and heart rate (see http://www.ipsi.fraunhofer.de/ambiente/pergames2006/final/PG_Davis_Dragons.pdf).

Tower of London: Entertaining and educating the younger visitors

Figure 8. A mediascape game was built for the Tower of London and deployed in a pilot for visitors to experience. Yeoman warders played a role in the game by carrying short-range radio beacons that nearby mediascape clients used for proximity detection.

Creators of the Tower of London experiment (see Figure 8) aimed to engage younger visitors in the history of the tower.


Figure 9. A mediascape called Scape the Hood was built and deployed in a San Francisco neighborhood as part of the Digital Storytelling Festival to create a mediascape experience with the present and past inhabitants of the neighborhood.


In this game, visitors helped past prisoners escape from the tower in the manner in which they actually escaped. The developers deployed the pilot for one week during the school vacations. They imported a map of the tower into the authoring tool and used two sensors: GPS for absolute location, and short-range radio beacons carried by the Yeoman warders for proximity detection. If a player got too close to a warder while helping a prisoner escape, the prisoner was caught and the player was sentenced to virtual years in the tower.

Questionnaires completed after playing the game suggested the mediascape had met its goals. The players enjoyed the game, especially avoiding the warders. They could name the prisoners and their escape routes, and they were happy to play the game for well over an hour, much longer than they would tolerate a traditional audio tour.

Scape the Hood: Three blocks of San Francisco’s Mission District

As a flagship event at the 10th Annual Digital Storytelling Festival, storytellers from San Francisco State University, the KQED public broadcasting station, Hewlett-Packard, and the local community created a mediascape in three parts (see Figure 9). Each part covered a city block and aimed to enliven the neighborhood with stories of its inhabitants, present and past. The creators used only GPS.

The first block covered the activities of the local artist community. Street art enriches this area, and local artists added digital stories to augment the murals around the streets and to describe the history of the area and the community.

The second block took the visitor on a time traveler’s journey back to when the Ohlone tribes inhabited the marshlands of what is now the Mission District. Ambient sounds and tribal stories stripped away the layers of concrete to reveal the grasslands that were once there.

The final block took another time shift, but of a much shorter span. The mediascape played recordings of the ambience and stories of a regular Saturday morning flea market, for visitors to experience throughout the rest of the week when the space was just an empty parking lot. Users find this part of the mediascape all the more poignant, as the lot has subsequently been built on. However, around the perimeter of the new apartment block, visitors can still sample the stories of a community that used to meet once a week to trade and socialize, and whose presence, like that of the Ohlone tribes, is only available to the time-traveling mediascaper.

‘Ere be Dragons: Digital gardening for the physically active

The universities of Middlesex and Nottingham collaborated with a game company called Active Ingredient to create ‘Ere be Dragons. The Middlesex University researchers wanted to create a video game that encouraged people to exercise in the physical world, using GPS and heart-rate monitors. It is a terra-forming game in which users spend 20 minutes moving around the streets of a city (any city; see Figure 10a). As they do so, the game creates a virtual landscape in the shape of the city’s street network (see Figure 10b), giving users feedback on their heart rate and its proximity to their ideal rate. As users get closer to their ideal heart rate, the game creates a more luxuriant virtual landscape: if they’re moving too slowly, they get desert; if they’re going too fast, they get thick woodland. Users score points for the quality of the landscape on their return.

To add a social component to the game, competitors can claim your landscape by traveling over it with a higher heart rate. If they do this before you return to base, they steal the points for the land they’ve taken. Like the Tower of London game, this exemplifies going beyond simply using GPS location as a context trigger. The game meets its goal of encouraging the right level of exercise, and it has gained popularity during its tour around the world.
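The heart-rate feedback loop might reduce to a mapping like this sketch; the 70-percent-of-maximum “ideal” rate and the zone boundaries are our assumptions, not the game’s published design:

```python
# Sketch of 'Ere be Dragons-style feedback: the closer the player's heart
# rate is to an ideal exercise rate, the lusher the terrain generated.
def ideal_rate(age: int) -> float:
    return 0.7 * (220 - age)   # common rule of thumb; an assumption here

def terrain(bpm: float, age: int) -> str:
    ideal = ideal_rate(age)
    if bpm < ideal * 0.85:
        return "desert"            # moving too slowly
    if bpm > ideal * 1.15:
        return "thick woodland"    # pushing too hard
    return "luxuriant garden"      # near the ideal rate: best score

print(terrain(130, 30))   # ideal is about 133 bpm -> luxuriant garden
print(terrain(90, 30))    # -> desert
```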

Challenges and future work

On the input side, developers have used sensors that include GPS, IR and RF beacons, RFID tags, digital compasses, and heart-rate monitors. Integrating a new sensor’s data into the mscape scripting language is relatively easy: as new sensors become available, technologists can create new plug-ins. The real challenge lies in defining the semantics of the new contexts these sensors reveal, and in creating authoring interfaces that make describing and using these contexts easy for a broad range of skill sets. In the future, we foresee the possibility of using sensing components in a mobile operator’s network, such as location servers, presence servers, and group-list management servers.

On the output side, the challenge is to make mediascape players available to the widest number of people, so anyone can experience the new medium on an everyday basis. This means delivering the player across a wide range of handheld formats and creating mediascapes that tune themselves to the sensors available to the player, degrading gracefully as the number of recommended sensors decreases.
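Graceful degradation could amount to comparing the sensors a mediascape recommends against those actually present, as in this illustrative sketch (the tier names are invented):

```python
# Sketch of sensor-based graceful degradation: the player picks the
# richest behavior the available sensors can still support.
def select_behavior(recommended: set, available: set) -> str:
    missing = recommended - available
    if not missing:
        return "full experience"
    if "gps" in available:
        return "location-only experience"   # drop non-location triggers
    return "manual triggering"              # self-report, as in a pod tour

print(select_behavior({"gps", "heartrate"}, {"gps"}))
# -> location-only experience
```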

References

1. T. Kindberg and J. Barton, “A Web-Based Nomadic Computing System,” Computer Networks, vol. 35, no. 4, 2001, pp. 443-456.
2. H.W. Gellersen, A. Schmidt, and M. Beigl, “Multi-Sensor Context-Awareness in Mobile Devices and Smart Artifacts,” Mobile Networks and Applications (MONET), Springer Netherlands, 2002, pp. 341-351.
3. G.D. Abowd et al., “Prototypes and Paratypes: Mixed Methods for Designing Mobile and Ubiquitous Computing Applications,” IEEE Pervasive Computing, vol. 4, no. 4, 2005, pp. 67-73.
4. E.J. Selker and W. Burleson, “Context-Aware Design and Interaction in Computing Systems,” IBM Systems J., vol. 39, nos. 3-4, 2000, pp. 617-632.
5. C. Randell and H.L. Muller, “The Well Mannered Wearable Computer,” Personal and Ubiquitous Computing, vol. 6, no. 1, 2002, pp. 31-36.
6. J. Reid et al., “Parallel Worlds: Immersion in Location-Based Experiences,” Proc. SIGCHI Conf. Human Factors in Computing Systems, ACM Press, 2005, pp. 1733-1736.
7. R. Hull, B. Clayton, and T. Melamed, “Rapid Authoring of Mediascapes,” Proc. 6th Int’l Conf. Ubiquitous Computing (UbiComp), Springer, 2004, pp. 125-142.
8. G. Lane, “Urban Tapestries: Wireless Networking, Public Authoring, and Social Knowledge,” Proc. 1st Int’l Conf. Appliance Design, Springer-Verlag, 2003, pp. 18-23.
9. M. Blythe et al., “Interdisciplinary Criticism: Analysing the Experience of Riot! A Location-Sensitive Digital Narrative,” Behaviour and Information Technology, vol. 25, no. 2, 2006, pp. 127-139.
10. K. Facer et al., “Savannah: Mobile Gaming and Learning?” J. Computer Assisted Learning, vol. 20, 2004, pp. 399-409.
11. Y. Vogiazou et al., “Design for Emergence: Experiments with a Mixed Reality Urban Playground Game,” Personal and Ubiquitous Computing, vol. 11, no. 1, 2007, pp. 45-58.

Figure 10. The ‘Ere be Dragons mediascape game was designed to provide a physical experience that encourages healthy exercise using (a) GPS and (b) heart rate monitors.

Readers may contact Susie Wee at [email protected].
